\section{Introduction} \label{intro} Rotational bands consisting of electric quadrupole transitions are usually related to the rotation of a deformed nucleus around an axis perpendicular to the symmetry axis of the deformed density distribution. With the development of theoretical and experimental research~\cite{Frauendorfcon,CLARK1992247,197Pb,BALDSIEFEN1992252}, a novel type of rotation has been found in weakly deformed or near-spherical nuclei and is interpreted as a result of the shears mechanism, i.e., the gradual closing of the angular momentum vectors of relatively few high-$j$ proton particles ($j_\pi$) and high-$j$ neutron holes ($j_\nu$). This kind of rotation is known as magnetic rotation. Up to now, numerous magnetic rotational bands have been observed in the $A \sim 110$ mass region using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE), such as in $^{106}$Ag~\cite{M1_106Ag,M1_106Ag_cpc,Shapecoexistence106Ag}, $^{107}$Ag~\cite{107Ag,M1_107Ag}, $^{112}$In~\cite{MR_112In_LXQ,MR112In,MR_112In}, $^{113}$In~\cite{113In,113In_cpl} and $^{115}$In~\cite{MR_115In}. Antimagnetic rotation (AMR) is another exotic rotation observed in near-spherical nuclei~\cite{Frauendorf2001,meng2013progress,6}. The angular momentum is increased by the simultaneous closing of the two blades of protons and neutrons toward the total angular momentum vector, the so-called ``two-shears-like mechanism''. Because the transverse magnetic moments of the valence nucleons are anti-aligned, there are no $M1$ transitions in antimagnetic rotational bands. AMR is characterized by weak $E2$ transitions and decreasing $B(E2)$ values with increasing spin, reflecting the nearly spherical core. A large $\mathfrak{J}^{(2)}/B(E2)$ ratio of the order of 100~$\hbar^{2}$MeV$^{-1}e^{-2}$b$^{-2}$, compared with around 10~$\hbar^{2}$MeV$^{-1}e^{-2}$b$^{-2}$ for well-deformed nuclei, is also a typical feature~\cite{Frauendorf2001,meng2013progress,6}. 
Antimagnetic rotation is expected to be realized in the same mass region as magnetic rotation. Experimentally, the two phenomena have been observed simultaneously in the $A \sim 110$ mass region. Especially for Cd isotopes, the positive-parity yrast bands after the alignment of neutrons at sufficiently high frequencies are ideal candidates for the two-shears-like mechanism. Up to now, antimagnetic rotational bands have been identified in $^{105}$Cd~\cite{anti105Cd}, $^{106}$Cd~\cite{anti106Cd}, $^{107}$Cd~\cite{anti107Cd}, $^{108}$Cd~\cite{anti106108Cd,antimr_108Cd} and $^{110}$Cd~\cite{anti110Cd}. For In isotopes, when an additional proton occupies the $g_{7/2}$ or $d_{5/2}$ orbital, the two-shears-like mechanism can also be expected. In fact, rotational bands in $^{108,110}$In~\cite{anti108110In,Sun2016}, $^{112}$In~\cite{antiMR_112In} and $^{113}$In~\cite{AMR113In} have been taken as candidates for antimagnetic rotation. In our previous work~\cite{meng2018}, the triaxial deformation, shape evolution and possible chirality of the dipole bands in $^{109}$In were discussed in detail. However, the underlying nuclear structure of the $\Delta I$=2 bands remained unclear. In this paper, the level scheme of those bands has been extended by eleven $\gamma$ rays. The $\Delta I$=2 rotational bands in $^{109}$In are investigated through systematic comparisons, and configurations are suggested. The experimental results are compared with the tilted axis cranking relativistic mean-field (TAC-RMF) approach~\cite{meng2013progress,6}. Candidates for antimagnetic rotational bands in $^{109}$In will be discussed. \begin{figure} \resizebox{0.5\textwidth}{!}{ \includegraphics[scale=0.65]{fig1.eps} } \caption{(color online) Partial level scheme of $^{109}$In proposed in the present work. 
New transitions and levels are marked in red.} \label{band78} \end{figure} \section{EXPERIMENT AND RESULTS}\label{exp} The experiment was carried out using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE) in Beijing. Excited states in $^{109}$In were populated via the $^{100}$Mo($^{14}$N, 5$n$)$^{109}$In fusion-evaporation reaction at a beam energy of 78 MeV. The target consisted of a 0.5 mg/cm$^{2}$ foil of $^{100}$Mo with a 10 mg/cm$^{2}$-thick $^{197}$Au backing. The $\gamma$ rays were detected by an array composed of nine BGO-Compton-suppressed HPGe detectors, two low-energy photon (LEP) HPGe detectors, and one clover detector. A total of 84$\times$10$^{6}$ $\gamma$-$\gamma$ coincidence events were sorted into a fully symmetrized $E_{\gamma}$-$E_{\gamma}$ matrix and analyzed for $\gamma$-ray coincidence relationships using the software package RADWARE~\cite{Radford}. The data from the detectors at around 40$^{\circ}$ on one axis and at around 140$^{\circ}$ on the other axis were sorted into an asymmetric DCO matrix. By analyzing this asymmetric matrix, the ratios of directional correlation of oriented states (DCO) were obtained. The DCO ratios of known $\gamma$ rays of nuclei produced in the present experiment were taken as reference values. When the gate is set on a quadrupole transition, the expected values for stretched quadrupole transitions and pure dipole transitions are around 1.0 and 0.5, respectively, in the present array geometry. Analogously, when the gate is set on dipole transitions, the DCO ratios range from 1.5 to 2 for quadrupole transitions and from 0.5 to 1.3 for dipole transitions. When the gate is set on pure dipole transitions, the ratios are around 1 for pure dipole transitions. The partial level scheme focusing on the $\Delta I$=2 bands in $^{109}$In is shown in Fig.~\ref{band78}. 
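As a hedged illustration, the DCO-based multipolarity assignment described above can be sketched as a simple classifier. Only the expected values quoted in the text (around 1.0 and 0.5 for a quadrupole gate; 1.5--2 and 0.5--1.3 for a dipole gate) come from the source; the function name and the exact window half-widths are our own assumptions.

```python
def classify_dco(r_dco, gate):
    """Assign a multipolarity from a DCO ratio, using the expected
    ranges quoted in the text for this array geometry (windows are
    illustrative assumptions, not calibrated values)."""
    if gate == "quadrupole":
        # gated on a stretched quadrupole: Q ~ 1.0, pure dipole ~ 0.5
        if 0.8 <= r_dco <= 1.2:
            return "quadrupole"
        if 0.3 <= r_dco <= 0.7:
            return "dipole"
    elif gate == "dipole":
        # gated on a dipole: Q in 1.5-2.0, dipole in 0.5-1.3
        if 1.5 <= r_dco <= 2.0:
            return "quadrupole"
        if 0.5 <= r_dco <= 1.3:
            return "dipole"
    return "ambiguous"
```

For example, the measured $R_{DCO}$(Q) of 0.82(17) for the 443.6 keV transition falls in the quadrupole window, while the $R_{DCO}$(D) of 1.62(23) for the 673.7 keV transition gated on a dipole also indicates a stretched quadrupole.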
The placements of $\gamma$ rays in the level scheme were determined through the observed coincidence relationships, intensity balances, and energy summations. Compared with the results reported in Ref.~\cite{meng2018}, the level scheme of $^{109}$In has been revised by adding eleven new $\gamma$ rays. \begin{figure*} \center \includegraphics[width=17cm]{fig2.eps} \caption{(color online) $\gamma$-ray coincidence spectra with gates set on the (a) 1000.4 keV, (b) 443.6 keV, and (c) 941.6 keV transitions. The inset shows the higher-energy part of spectrum (b). The energies marked by asterisks and C denote newly identified $\gamma$ rays and contaminants, respectively.} \label{AMR} \end{figure*} Band 7 is a $\Delta I$=2 band and has been extended to the (41/2$^{+}$) state at an energy of 8460.8 keV. In Fig.~\ref{AMR}(a), the 1000.4 keV transition shows no coincidence with the newly identified $\gamma$ rays with energies of 1174.4 and 1024.8 keV, while it is in coincidence with the 1026.3 keV transition, which decays from the 11/2$^{+}$ state at 1026.3 keV to the 9/2$^{+}$ ground state, and with the transitions with energies of 829.6, 888.2, 864.8, 643.3, 1109.6 keV, etc. In the spectrum gated on the 443.6 keV transition, shown in Fig.~\ref{AMR}(b), the transitions decaying out of the higher levels of bands 7 and 9 can be seen, along with the linking transitions with energies of 1109.6 and 888.2 keV. The peak with a centroid energy of 1025 keV seen in Fig.~\ref{AMR}(b) is composed of the 1024.8 and 1026.3 keV transitions, which indicates the existence of the 1024.8 keV transition. Moreover, the 1174.4 and 1024.8 keV transitions are in mutual coincidence with the transitions of band 7, but show no coincidence with the 888.2 and 1109.6 keV transitions or the $\gamma$ rays in band 9. Therefore, the 1174.4 and 1024.8 keV transitions are placed on top of band 7, and their ordering is determined by intensity. The 864.8 keV transition has no coincidence with the 1109.6 and 643.3 keV transitions. 
This coincidence relationship, along with the energy summation, restricts the position of the 643.3 and 1109.6 keV $\gamma$ rays to (31/2$^{+}$) $\rightarrow$ 29/2$^{(+)}$. The $R_{DCO}$ of the 643.3 keV transition is consistent with a $\Delta I$=2 transition, similar to the 829.6 and 1000.4 keV transitions of band 9. Therefore, the 1109.6 keV transition is taken as a linking transition between bands 9 and 7, and the 643.3 keV $\gamma$ ray is placed between the state at 7149.8 keV and the state at 6506.5 keV. The alignment analysis in Sec.~\ref{config} also supports this placement. In summary, the $\gamma$ rays with energies of 1024.8 and 1174.4 keV belong to band 7. Band 9 is built on the level at 6506.5 keV and decays to band 7 through the linking transitions with energies of 1109.6 and 888.2 keV. \setlength{\tabcolsep}{14pt} \begin{table*} \centering \caption{The $\gamma$-ray energies, initial- and final-level energies, intensities, DCO ratios, and the initial- and final-level spin-parities of $^{109}$In deduced in the present work.} \label{Table:exp} \begin{tabular}{ccccccc} \hline \hline $E_{\gamma}$(keV)$^{a}$ &$E_{i}$$\rightarrow$$E_{f}$ &$I_{\gamma}$($\%$)$^{b}$ &$R_{DCO}$(D)$^{c}$ & $R_{DCO}$(Q)$^{d}$ &$I_{i}^{\pi}$$\rightarrow$$I_{f}$$^{\pi}$&Band\\ \hline 355.7&2447.3$\rightarrow$2091.6&$<$0.1&&&(9/2$^{+}$)$\rightarrow$(5/2$^{+}$)&7 \\ 402.0&1428.3$\rightarrow$1026.3 &23(1)&0.61(17)&&13/2$^{+}$$\rightarrow$11/2$^{+}$\\ 443.6&4742.8$\rightarrow$4299.2 &4.0(3)&&0.82(17)&25/2$^{(+)}$$\rightarrow$21/2$^{(+)}$&7\\ 463.0&5218.7$\rightarrow$4755.7 &2.6(2)&&1.07(16)&27/2$^{+}$$\rightarrow$23/2$^{+}$&8\\ 469.7&5396.8$\rightarrow$4927.1 &3(1)&0.64(8)&&29/2$^{(+)}$$\rightarrow$27/2$^{-}$&7$\rightarrow$5\\ 475.9&5218.7$\rightarrow$4742.8 &0.7(4)&&0.60(10)&27/2$^{+}$$\rightarrow$25/2$^{(+)}$&8$\rightarrow$7\\ 521.2&2968.5$\rightarrow$2447.3 &$<$0.1&&&(13/2$^{+}$)$\rightarrow$(9/2$^{+}$)&7\\ 596.2&2318.5$\rightarrow$1722.2 &0.3(2)&&0.9(2)&11/2$^{+}$$\rightarrow$(7/2)$^{+}$&8\\ 
605.6&2318.5$\rightarrow$1712.5 &0.7(2)&&0.7(2)&11/2$^{+}$$\rightarrow$(9/2)$^{+}$\\ 614.2&1712.5$\rightarrow$1099.4&0.2(1)&&&(9/2)$^{+}$$\rightarrow$5/2$^{+}$&\\ 623.6&1722.2$\rightarrow$1099.4&0.3(1)&&&(7/2)$^{+}$$\rightarrow$5/2$^{+}$&\\ 631.1&5849.8$\rightarrow$5218.7 &2.9(3)&&1.04(11)&31/2$^{+}$$\rightarrow$27/2$^{+}$&8\\ 643.3&7149.8$\rightarrow$6506.5 &0.5(2)&&1.0(3)&(35/2$^{+}$)$\rightarrow$(31/2$^{+}$)&9\\ 645.7&4299.2$\rightarrow$3653.5&0.8(3)&&1.1(3)&21/2$^{(+)}$$\rightarrow$17/2$^{(+)}$&7\\ 654.0&5396.8$\rightarrow$4742.8 &3.2(2)&&0.83(10)&29/2$^{(+)}$$\rightarrow$25/2$^{(+)}$&7\\ 658.6&4755.7$\rightarrow$4097.1 &1.2(1)&&0.93(9)&23/2$^{+}$$\rightarrow$19/2$^{+}$&8\\ 673.5&2101.8$\rightarrow$1428.3 &69(3)&&&19/2$^{+}$$\rightarrow$13/2$^{+}$\\ 673.7&2102.0$\rightarrow$1428.3 &14.7(7)&1.62(23)&&17/2$^{+}$$\rightarrow$13/2$^{+}$\\ 685.0&3653.5$\rightarrow$2968.5 &0.3(3)&&1.1(6)&17/2$^{(+)}$$\rightarrow$(13/2$^{+}$)&7\\ 816.3&6666.1$\rightarrow$5849.8 &1.5(1)&&1.19(16)&35/2$^{+}$$\rightarrow$31/2$^{+}$&8\\ 829.6&7979.4$\rightarrow$7149.8 &1.2(2)&&0.96(29)&(39/2$^{+}$)$\rightarrow$(35/2$^{+}$)&9\\ 837.0&3155.5$\rightarrow$2318.5 &1.0(3)&&0.93(7)&15/2$^{+}$$\rightarrow$11/2$^{+}$&8\\ 864.8&6261.6$\rightarrow$5396.8 &4.2(3)&&0.80(7)&33/2$^{(+)}$$\rightarrow$29/2$^{(+)}$&7\\ 888.2&7149.8$\rightarrow$6261.6 &1.6(2)&&0.78(17)&(35/2$^{+}$)$\rightarrow$33/2$^{(+)}$&9$\rightarrow$7\\ 893.0&2995.0$\rightarrow$2102.0 &7.3(7)&&0.46(8)&19/2$\rightarrow$17/2$^{+}$\\ 941.6&4097.1$\rightarrow$3155.5 &1.0(1)&&1.10(13)&19/2$^{+}$$\rightarrow$15/2$^{+}$&8\\ 973.0&7639.1$\rightarrow$6666.1 &1.6(2)&&0.98(13)&39/2$^{+}$$\rightarrow$35/2$^{+}$&8\\ 1000.4&8979.8$\rightarrow$7979.4&0.6(1)&&1.4(5)&(43/2$^{+}$)$\rightarrow$(39/2$^{+}$)&9\\ 1024.8&7286.4$\rightarrow$6261.6 &2.6(7)&&&(37/2$^{+}$)$\rightarrow$33/2$^{(+)}$&7\\ 1026.3&1026.3$\rightarrow$0 &31(1)&0.59(7)&&11/2$^{+}$$\rightarrow$9/2$^{+}$\\ 1099.4&1099.4$\rightarrow$0 &0.5(1)&&&5/2$^{+}$$\rightarrow$9/2$^{+}$&\\ 
1109.6&6506.5$\rightarrow$5396.8&$<$0.1&&&(31/2$^{+}$)$\rightarrow$29/2$^{(+)}$&9$\rightarrow$7\\ 1143.4&8782.5$\rightarrow$7639.1 &0.4(2)&&1.00(26)&43/2$^{+}$$\rightarrow$39/2$^{+}$&8\\ 1174.4&8460.8$\rightarrow$7286.4&0.3(2)&&&(41/2$^{+}$)$\rightarrow$(37/2$^{+}$)&7\\ 1304.2&4299.2$\rightarrow$2995.0 &0.8(1)&&0.54(11)&21/2$^{(+)}$$\rightarrow$19/2\\ 1428.3&1428.3$\rightarrow$0 &100&&&13/2$^{+}$$\rightarrow$9/2$^{+}$\\ 1551.7&3653.5$\rightarrow$2101.8&0.5(2)&&&17/2$^{(+)}$$\rightarrow$(19/2$^{+}$)&\\ \hline \hline \end{tabular} \begin{threeparttable} \begin{tablenotes} \item[a)] Uncertainties are between 0.2 and 0.5 keV depending upon their intensity. \item[b)] Intensities are normalized to the 1428.3 keV transition with $I_{\gamma}=100$. \item[c)] DCO ratios gated by dipole transitions. \item[d)] DCO ratios gated by quadrupole transitions. \end{tablenotes} \end{threeparttable} \end{table*} The DCO ratio of the 888.2 keV transition needs special explanation. In our earlier work~\cite{meng2018}, the 888.2 keV transition was assigned as an $E2$ transition belonging to band 7. Nevertheless, its DCO ratio of 0.78(17) is not strict proof of an $E2$ transition. Considering the newly found transitions and coincidence relationships, the 888.2 keV transition is now suggested to be a linking transition between bands 9 and 7. The DCO information of the newly found $\gamma$ ray with energy of 1109.6 keV could not be extracted owing to its weak intensity. However, if the linking transitions with energies of 1109.6 and 888.2 keV were $E2$ transitions, there would be a 37/2$^{+}$ state at an energy of 7149.8 keV, lower than the (37/2$^{+}$) state at 7286.4 keV of band 7, which would be inconsistent with the intensities of the 643.3 and 1024.8 keV transitions. Therefore, we suggest that these linking transitions are $\Delta I$=1 transitions, and that the bandhead of band 9 at an energy of 6506.5 keV is (31/2$^{+}$). 
The newly identified 645.7 and 685.0 keV transitions can be seen in the spectrum gated on the 443.6 keV transition, as shown in Fig.~\ref{AMR}(b). Though the 521.2 and 355.7 keV transitions cannot be identified in Fig.~\ref{AMR}(b) owing to their weak intensities, each of the 645.7, 685.0, 521.2, and 355.7 keV transitions is in mutual coincidence with its cascade $\gamma$ rays. Therefore, they are taken as members of band 7, and the ordering of these four $\gamma$ rays is determined by their intensities. The 443.6 keV transition was identified as a $\Delta I$=2 transition in the earlier work~\cite{meng2018}. The $R_{DCO}$ values of the 645.7 and 685.0 keV transitions extracted from the spectrum gated on the 443.6 keV transition are around 1, corresponding to $\Delta I$=2 transitions. While it is difficult to extract the DCO information of the 521.2 and 355.7 keV transitions, we suggest that they are $\Delta I$=2 transitions considering that they are intraband transitions of band 7. The parity of band 7 is suggested to be positive according to the alignment analysis in Sec.~\ref{config}. Band 8 consists of nine $\Delta I$=2 transitions and has been extended to the 43/2$^{+}$ state at 8782.5 keV. In the spectrum gated on the 941.6 keV transition, shown in Fig.~\ref{AMR}(c), all the members of band 8 can be identified, along with the 605.6, 614.2, 623.6 and 1099.4 keV transitions. In the earlier work~\cite{meng2018}, the decay path of band 8 was not clear. With the observed 605.6 and 596.2 keV transitions, band 8 is connected to the known (9/2)$^{+}$ state at 1712.5 keV and the (7/2)$^{+}$ state at 1722.2 keV~\cite{TOI}. The 605.6 keV transition is a linking transition, which links the 11/2$^{+}$ state at 2318.5 keV to the (9/2)$^{+}$ state at 1712.5 keV. Because band 8 is composed of $\Delta I$=2 transitions, the $\gamma$ ray with energy of 596.2 keV is taken as a member of band 8, and the (7/2)$^{+}$ state at 1722.2 keV is assigned as its bandhead. 
The positive-parity bandhead also supports the parity assignment of band 8 in Ref.~\cite{meng2018}. \section{DISCUSSION} \subsection{Systematic discussion and configuration assignment}\label{config} The experimental alignment as a function of rotational frequency for bands 7, 8 and 9 is shown in Fig.~\ref{band7}, together with that of the yrast band and band 10 of the neighboring even-even nucleus $^{108}$Cd for comparison. The configuration of band 8 was assigned as $\pi g_{7/2}g^{-2}_{9/2}$ before the backbend and $\pi g_{7/2}g^{-2}_{9/2}\otimes\nu h_{11/2}^{2}$ after the backbend in Ref.~\cite{meng2018}. In this work, the bandhead of band 8 has been observed, and the overall behaviour of band 8 before the backbend also supports this configuration assignment. Band 10 of $^{108}$Cd is built on a non-aligned excitation into the $\nu h_{11/2}$ subshell~\cite{108Cd}. Before the backbend, the initial aligned spin of band 7 is nearly 2.5$\hbar$ greater than that of band 10 of $^{108}$Cd, which can be attributed to the occupation of the $d_{5/2}$ orbital by the odd proton. A sharp backbend occurs in both bands at around 0.28 MeV with similar gains in aligned spin, consistent with $h_{11/2}$ neutron pair alignment. Therefore, band 7 before the backbend should be built on a non-aligned neutron excitation into the $\nu h_{11/2}$ subshell associated with the $\pi d_{5/2}$ orbital, and the backbend can be attributed to the alignment of neutrons in the $h_{11/2}$ orbitals. Band 9, which decays to band 7 through two linking transitions, can be related to different neutron excitations in comparison with band 7. The alignment of band 9 is about 3~$\hbar$ greater than that of band 7, which could be attributed to the midshell $g_{7/2}(d_{5/2})$ neutron alignment. 
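The experimental alignments discussed above are obtained by subtracting a Harris reference from the aligned angular momentum. A minimal sketch of that subtraction, assuming the Harris parameters quoted in the figure caption ($J_{0}$=7~$\hbar^{2}$MeV$^{-1}$, $J_{1}$=9~$\hbar^{4}$MeV$^{-3}$); the function names are illustrative:

```python
# Harris parameters of the core, as quoted in the text (illustrative sketch).
J0 = 7.0  # hbar^2 / MeV
J1 = 9.0  # hbar^4 / MeV^3

def rotational_frequency(e_gamma_kev):
    """hbar*omega (MeV) from a DeltaI=2 gamma-ray energy in keV."""
    return 0.5 * e_gamma_kev / 1000.0

def alignment(i_x, omega):
    """Aligned spin i = I_x - I_ref, where the Harris reference is
    I_ref(omega) = J0*omega + J1*omega**3 (in units of hbar)."""
    return i_x - (J0 * omega + J1 * omega**3)
```

A band whose curve sits a roughly constant 2--3~$\hbar$ above another, as band 9 does relative to band 7, then signals an additional aligned pair on top of the same core reference.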
\begin{figure} \resizebox{0.5\textwidth}{!}{ \includegraphics{fig3} } \caption{ (Color online) Experimental alignment as a function of rotational frequency $\hbar\omega$ for bands 7, 8, and 9 in $^{109}$In, together with that of the yrast band and band 10 in $^{108}$Cd, relative to a Harris parametrization of the core with $J_{0}$=7~$\hbar^{2}$MeV$^{-1}$ and $J_{1}$=9~$\hbar^{4}$MeV$^{-3}$. } \label{band7} \end{figure} \begin{figure} \resizebox{0.5\textwidth}{!}{ \includegraphics{fig4} } \caption{ Rotational bands involving the $\pi g_{7/2}$ orbital in $^{107, 109, 111, 113}$In. The 9/2$^{+}$ ground states are shown as references.} \label{energy} \end{figure} According to the configuration assignment, bands 7 and 8 are believed to be related to $1p1h$ proton excitations from the $g_{9/2}$ orbital to one of the $g_{7/2}$ and $d_{5/2}$ orbitals above the shell gap. Similar bands related to the $g_{7/2}$ orbital in $^{107, 111, 113}$In~\cite{SMB_107In,MR_111In,AMR113In} are summarized in Fig.~\ref{energy}. Even though several corresponding levels in $^{107}$In have not been observed, the isotopic regularity of the level energies is significant. It is worth noting that the excitation energies of the $1p1h$ proton excitation from the $g_{9/2}$ to the $g_{7/2}$ orbital in $^{109, 111, 113}$In lie within 1--2 MeV of the ground state and decrease with increasing neutron number. The proton-neutron residual interaction may play an important role in the $1p1h$ excitation from the $\pi g_{9/2}$ to the $\pi g_{7/2}$ orbital at such low energy: it reduces the energy spacing between the $\pi g_{9/2}$ and $\pi g_{7/2}$ orbitals, and its impact is enhanced when more neutrons occupy the midshell. 
\begin{figure} \resizebox{0.5\textwidth}{!}{ \includegraphics{fig5} } \caption{(Color online) Dynamic moment of inertia $\mathfrak{J}^{(2)}$ as a function of rotational frequency $\hbar\omega$ for bands 7 and 8 in $^{109}$In, and for the antimagnetic rotational bands in $^{106,108}$Cd.} \label{J2} \end{figure} Moreover, when the additional proton of an indium nucleus occupies the $g_{7/2}$ or $d_{5/2}$ orbital, the rotational bands after the $h_{11/2}$ neutron alignment at high frequencies are ideal candidates for the two-shears-like mechanism, such as the rotational bands in $^{108, 110, 112, 113}$In~\cite{anti108110In,Sun2016,antiMR_112In,AMR113In}. The dynamic moment of inertia $\mathfrak{J}^{(2)}$ is a sensitive probe of the nuclear collectivity. The $\mathfrak{J}^{(2)}$ and the rotational frequency can be extracted experimentally from the following formulae: \begin{displaymath} \hbar\omega_{\rm exp}=\frac{1}{2}E_\gamma(I \rightarrow I-2) \end{displaymath} \begin{displaymath} \mathfrak{J}^{(2)}\approx\frac{dI}{d\omega}=\frac{4}{E_\gamma(I+2 \rightarrow I)-E_\gamma(I \rightarrow I-2)} \end{displaymath} The $\mathfrak{J}^{(2)}$ values of bands 7 and 8 after the backbend in $^{109}$In are shown in Fig.~\ref{J2}. The typical antimagnetic rotational bands in $^{106}$Cd and $^{108}$Cd~\cite{anti106108Cd,antimr_108Cd} are also shown for comparison. As shown in Fig.~\ref{J2}, $\mathfrak{J}^{(2)}$ stays around 23~MeV$^{-1}\hbar^{2}$ with increasing rotational frequency for bands 7 and 8 after the backbend, following a pattern similar to that of the AMR bands in $^{106,108}$Cd. Such small and stable values of $\mathfrak{J}^{(2)}$ indicate that bands 7 and 8 after the backbend in $^{109}$In are much less collective and can be candidates for antimagnetic rotation. 
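These two formulae can be applied directly to the intraband $\gamma$-ray energies. A minimal numerical sketch, using the band 8 energies after the backbend from Table 1 as illustrative input (the function name and the midpoint convention for $\hbar\omega$ are our own choices):

```python
def dynamic_moi(e_gammas_kev):
    """For an ascending list of consecutive DeltaI=2 gamma energies,
    return pairs (hbar*omega in MeV, J^(2) in hbar^2/MeV), with
    J^(2) ~ 4 / [E_g(I+2 -> I) - E_g(I -> I-2)]."""
    results = []
    for e_lo, e_hi in zip(e_gammas_kev, e_gammas_kev[1:]):
        delta_e = (e_hi - e_lo) / 1000.0   # energy difference in MeV
        omega = (e_lo + e_hi) / 4000.0     # midpoint of E_g/2, in MeV
        results.append((omega, 4.0 / delta_e))
    return results

# Band 8 after the backbend (Table 1): 631.1, 816.3, 973.0, 1143.4 keV.
band8 = [631.1, 816.3, 973.0, 1143.4]
j2_values = dynamic_moi(band8)
```

The resulting $\mathfrak{J}^{(2)}$ values stay in the low-to-mid twenties of $\hbar^{2}$MeV$^{-1}$, consistent with the small, stable values quoted in the text.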
\begin{figure*}[b] \center \includegraphics[width=12cm]{fig6} \caption{(color online) Energy spectra of bands 7 (left) and 8 (right) after the backbend obtained from the TAC-RMF calculations, in comparison with the corresponding data for~$^{109}$In.} \label{EI} \end{figure*} \begin{figure*}[b] \center \includegraphics[width=12cm]{fig7} \caption{(color online) Total angular momentum as a function of the rotational frequency for bands 7 (left) and 8 (right) after the backbend obtained from the TAC-RMF calculations, in comparison with the corresponding data for~$^{109}$In.} \label{spin} \end{figure*} \subsection{Theoretical interpretation} In the following, the rotational structure of the two positive-parity bands in $^{109}$In is investigated with the tilted axis cranking relativistic mean-field (TAC-RMF) approach. In contrast to its non-relativistic counterparts~\cite{1}, the relativistic mean-field (RMF) approach, including point-coupling or meson-exchange interactions~\cite{lj16,lj17,lj18}, takes the fundamental Lorentz symmetry into account from the very beginning and thus naturally accounts for the important spin degree of freedom and time-odd fields, resulting in great success in describing many nuclear phenomena~\cite{1,2,3,4,5,6}. Moreover, without any additional parameters, rotational excitations can be described self-consistently with the TAC-RMF approach~\cite{meng2013progress,6}. In particular, the TAC-RMF model has been successfully used to describe magnetic rotation (MR) and AMR microscopically and self-consistently in different mass regions~\cite{meng2013progress,6}, especially the $A \sim 110$ region, such as the AMR bands in $^{105, 109, 110}$Cd~\cite{zhao2012prc,zhao2011prl,zhao2012covariant,peng2015magnetic,zhang2014competition} and $^{108, 110, 112, 113}$In~\cite{Sun2016,antiMR_112In,AMR113In}, and also the MR bands in $^{113,114}$In~\cite{113In,MR_114In}. 
In the present TAC-RMF calculations, the point-coupling interaction PC-PK1~\cite{zhao2010new} is used for the Lagrangian without any additional parameters. A basis of 10 major oscillator shells is adopted for solving the Dirac equation, and pairing correlations are neglected. In order to describe bands 7 and 8 in $^{109}$In, the configurations $\pi d_{5/2}g_{9/2}^{-2}\otimes\nu h_{11/2}^2$ and $\pi g_{7/2}g_{9/2}^{-2}\otimes\nu h_{11/2}^2$ are adopted, respectively. The calculated results for these configurations are shown in Fig.~\ref{EI} and Fig.~\ref{spin} in comparison with the experimental data for bands 7 and 8 after the backbend. The TAC-RMF calculations of both the energy spectra and the spin-frequency relations based on the assigned configurations are in good agreement with the experimental data, supporting the configuration assignment. \begin{figure} \resizebox{0.5\textwidth}{!}{ \includegraphics[scale=0.32]{fig8} } \caption{(color online) $B(E2)$ values~(a) and~$\mathfrak{J}^{(2)}/B(E2)$~ratios (b) as functions of the rotational frequency for bands 7 (left) and 8 (right) in the TAC-RMF calculations for the assigned configurations. Inset: deformation parameters~$\beta$~and~$\gamma$~driven by the increasing rotational frequency in the TAC-RMF calculations. 
The arrow indicates the direction of increasing rotational frequency.} \label{BJ} \end{figure} \begin{figure*}[b] \center \includegraphics[width=12cm]{fig9} \caption{(color online) Angular momentum vectors of the neutrons plus the low-$\Omega$ $d_{5/2}$($g_{7/2}$) proton, $J_{\pi+\nu}$, and of the two high-$\Omega$ $g_{9/2}$ proton holes, $j_\pi$, for bands 7 and 8 calculated with the TAC-RMF theory.} \label{twoS} \end{figure*} After the backbend in bands 7 and 8, the calculated $\mathfrak{J}^{(2)}$ reproduces the data well, with values around 20--25~MeV$^{-1}\hbar^{2}$, much smaller than the typical values ($\sim 35$~MeV$^{-1}\hbar^{2}$) for a rigid spherical rotor with $A=110$. This indicates that bands 7 and 8 are not based on collective behavior but most likely on antimagnetic rotation, as discussed in Sec.~\ref{config}. Weak $E2$ transitions are one of the typical characteristics of AMR and reflect the small deformation of the core, which leads to large ratios of $\mathfrak{J}^{(2)}$ to the reduced transition probability $B(E2)$. Furthermore, the $B(E2)$ values decrease rather rapidly with increasing angular momentum. The $B(E2)$ values and $\mathfrak{J}^{(2)}/B(E2)$ ratios as functions of the rotational frequency in the TAC-RMF calculations for the assigned configurations of bands 7 and 8 are given in Fig.~\ref{BJ}. The $B(E2)$ values decrease smoothly with increasing rotational frequency, while the $\mathfrak{J}^{(2)}/B(E2)$ ratios show rising tendencies for both bands 7 and 8. It should be noted that the calculated $\mathfrak{J}^{(2)}/B(E2)$ ratios for these two bands are around $100-120\;\hbar^2$MeV$^{-1}e^{-2}$b$^{-2}$, much higher than that of a typical deformed rotational band ($\sim 10\;\hbar^2$MeV$^{-1}e^{-2}$b$^{-2}$~\cite{Frauendorf2001}) and in agreement with expectations for AMR bands~\cite{zhao2012prc,antiMR_112In,Sun2016}. 
The decrease of the $B(E2)$ values can be attributed to the evolution of the nuclear deformation. As shown in the inset of Fig.~\ref{BJ}(a), with increasing rotational frequency the nucleus undergoes a smooth decrease in $\beta$ deformation with rather small and steady triaxiality ($\gamma\leq 10^\circ$) for both bands 7 and 8, which is responsible for the falling tendency of the $B(E2)$ values with rotational frequency. In order to examine the two-shears-like mechanism for bands 7 and 8, $J_{\pi+\nu}$ (the angular momentum vector of the neutrons plus the low-$\Omega$ proton) and $j_\pi$ (the angular momentum vectors of the two high-$\Omega$ $g_{9/2}$ proton holes) in the TAC-RMF calculations have been extracted and are shown in Fig.~\ref{twoS}. Take band 8 with the configuration $\pi g_{7/2}g_{9/2}^{-2}\otimes\nu h_{11/2}^2$ as an example. The angular momentum $J_{\pi+\nu}$ comprises all the neutron levels and the occupied low-$\Omega$ $g_{7/2}$ proton in the intrinsic system. At the bandhead ($\hbar\omega=\;$0.2 MeV), the two $j_\pi$ vectors are nearly perpendicular to $J_{\pi+\nu}$ and point opposite to each other, forming the blades of the two shears. As the rotational frequency increases, the gradual alignment of the $g_{9/2}$ proton-hole vectors $j_\pi$ toward $J_{\pi+\nu}$ generates angular momentum, while the direction of the total angular momentum stays unchanged. This closes the two shears simultaneously by moving one blade toward the other, demonstrating the two-shears-like mechanism in band 8. A similar mechanism can also be seen in the TAC-RMF calculations with the assigned configuration $\pi d_{5/2}g_{9/2}^{-2}\otimes\nu h_{11/2}^2$ for band 7, as shown in Fig.~\ref{twoS}. \section{SUMMARY} In summary, the $\Delta I$=2 rotational bands populated in the $^{100}$Mo($^{14}$N, 5$n$)$^{109}$In reaction have been modified and extended by eleven new $\gamma$ rays. 
A systematic discussion has been made and configurations for the $\Delta I$=2 rotational bands have been assigned. The dynamic moments of inertia show that bands 7 and 8 after the backbend are much less collective. The experimental data for bands 7 and 8 in $^{109}$In have been compared with TAC-RMF calculations, and good agreement has been obtained. The predicted $B(E2)$ values, deformations $\beta$ and $\gamma$, and $\mathfrak{J}^{(2)}/B(E2)$ ratios in the TAC-RMF calculations based on the $\pi d_{5/2}g_{9/2}^{-2}\otimes\nu h_{11/2}^2$ and $\pi g_{7/2}g_{9/2}^{-2}\otimes\nu h_{11/2}^2$ configurations have been discussed, and the characteristic features of AMR for bands 7 and 8 after the backbend have been shown. The two-shears-like mechanism for bands 7 and 8 shows that they can be candidate antimagnetic rotational bands. Further experimental investigation, such as lifetime measurements, is expected to provide a conclusive interpretation. \vspace{24 pt} We thank the crew of the HI-13 tandem accelerator at the China Institute of Atomic Energy for their help in the steady operation of the accelerator and for preparing the target. This work is partially supported by the National Natural Science Foundation of China under Contracts No. 11375023, No. 11575018, No. U1867210, No. 11675063, No. U1832211, and No. 11922501. \bibliographystyle{unsrt}
\section{Introduction} Globular clusters are very efficient places to produce X-ray binaries via dynamical interactions. In particular, it has been known for many years that the formation rate per unit mass of luminous ($L_X >10^{36}$\thinspace\hbox{$\hbox{ergs}\thinspace\hbox{s}^{-1}$}) X-ray sources is much higher in globular clusters than in the rest of our Galaxy. More recently, similar results have been found in other nearby spiral galaxies such as M31 (Di\,Stefano et al. 2002) and M104 (Di\,Stefano et al. 2003). Among all extragalactic globular clusters, G1 in M31 is an intriguing one. With a luminosity of $\sim 10^6 L_\odot$ (Rich et al. 1996), it is the most luminous star cluster in the Local Group, and also one of the most massive, at $(7-17) \times 10^6 M_\odot$ (Meylan et al. 2001). The rates at which X-ray binaries are created in the cluster core are therefore expected to be high compared with globular clusters in the Milky Way. Furthermore, it has been claimed, based on kinematic studies, that G1 hosts a $\sim 2\times 10^4 \, M_\odot$ intermediate-mass black hole (Gebhardt et al. 2002, 2005). However, this result is controversial and has been challenged by Baumgardt et al. (2003). X-ray observations of G1 therefore allow us to investigate some of the interesting properties of the cluster. Recently, {\it XMM-Newton} has conducted three short ($<$ 10 ksec) observations of G1 and has discovered an X-ray source coincident with G1 (Trudolyubov \& Priedhorsky 2004; Pooley \& Rappaport 2006). To explain the origin of the X-ray emission of G1, Pooley \& Rappaport (2006) proposed that it could be due to accretion of ionized cluster gas by a central intermediate-mass black hole or that it could be produced by a conventional X-ray binary, and noted that it is possible to distinguish these two scenarios by obtaining a precise localization of the X-ray emission. 
If the X-ray emission is due to a central 20,000 $M_{\odot}$ black hole, we expect it to come from within 50 mas of the center. However, if low-mass X-ray binaries are responsible for the X-ray emission, then we expect the emission to be offset from the core. This requires high-resolution X-ray observations. However, there is no {\it Chandra}\ observation of G1 and only {\it XMM-Newton}\ observations are available. Although Pooley \& Rappaport (2006) investigated the {\it XMM-Newton}\ spectra of G1 in detail, they did not perform an astrometric study. The absolute astrometry of {\it XMM-Newton}\ is about $2''$ (Kirsch 2006), while the statistical uncertainty is intensity dependent. This leads to a positional error of about $2''-6''$ depending on the source brightness. While the spatial resolution of {\it XMM-Newton}\ is much poorer than that of {\it Chandra}, it is possible to localize X-ray positions to $1''-2''$ with {\it XMM-Newton}\ if one calibrates the astrometry carefully. In this paper, we refine the X-ray position of G1 by performing precise relative astrometry using {\it XMM-Newton}\ and the {\it Hubble Space Telescope} ({\it HST}). \section{Observations and Data Analysis} \subsection{{\it XMM-Newton}} G1 was first observed with {\it XMM-Newton}\ in 2001 January for a total exposure time of $\sim 8$ ksec. There were two more {\it XMM-Newton}\ observations in 2002 December and 2003 February. Both observations were off-axis, resulting in heavy vignetting, and one was affected by high background [see Pooley \& Rappaport (2006) for a summary]. In this Letter, we only consider the first observation taken in 2001. All three cameras (one pn camera and two MOS cameras) of the European Photon Imaging Camera (EPIC) were turned on for collecting data. All the X-ray data were processed with the {\it XMM-Newton}\ Science Analysis System (SAS) version 7.0. 
We downloaded the raw data from the {\it XMM-Newton}\ archive and reprocessed them with SAS together with the latest calibration products. The reprocessed event lists were first examined for background variation using the high energy (10--15 keV) background lightcurves, and we did not find any significant background flaring event. We extracted X-ray images with photon energies in the range of 0.3--10 keV, and only considered events with FLAG = 0 and single and double events for the pn camera (PATTERN $\leq 4$), and single to quadruple events for the MOS cameras (PATTERN $\leq 12$). Source detection was then performed using a maximum likelihood approach as implemented by the SAS task {\it edetect\_chain}. We ran the source detection simultaneously on the data from all three cameras. G1 was clearly detected and was seen in all three cameras with a combined detection likelihood of 63. We compared the X-ray source list with the 2MASS and USNO catalogs and images, and looked for coincidences with bright and isolated stellar objects. We found one star (2MASS\,00325251+3931424) that is $< 3''$ from the X-ray position and it is likely to be a foreground star. To verify the nature of this X-ray emitting stellar object, we computed the hardness ratios. These ratios were based on the source counts in the three energy bands: $S$ (0.3--1 keV), $M$ (1--2 keV), and $H$ (2--10 keV). The two hardness ratios are defined as HR1=$(M-S)/(M+S)$ and HR2=$(H-S)/(H+S)$. Figure 1 shows the color-color diagram of the X-ray emitting foreground star and G1. We have overlaid the color-color diagram with lines showing the tracks followed by representative spectra with differing values of $N_H$. The X-ray colors of the X-ray emitting star indicate that it has a very soft X-ray spectrum, consistent with a very soft X-ray source (Di\,Stefano \& Kong 2004). The X-ray radiation is therefore likely due to the coronal emission from a foreground star. The star has an $R$ magnitude of 14.0 (Monet et al. 2003). 
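The hardness ratios defined above are simple count ratios; a minimal sketch follows (the band counts here are hypothetical, chosen only to mimic a very soft source):

```python
def hardness_ratios(S, M, H):
    """Hardness ratios as defined in the text, from net counts in the
    S (0.3--1 keV), M (1--2 keV), and H (2--10 keV) bands."""
    return (M - S) / (M + S), (H - S) / (H + S)

# Hypothetical band counts, chosen only to mimic a very soft spectrum:
hr1, hr2 = hardness_ratios(S=40.0, M=8.0, H=2.0)
print(f"HR1 = {hr1:+.2f}, HR2 = {hr2:+.2f}")  # both strongly negative
```

Strongly negative values of both ratios place a source in the soft corner of the color-color diagram, as for the foreground star discussed above.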
We calculated the X-ray to optical flux ratio as $\log(f_X/f_R)=\log f_X + 5.67 + 0.4 R$ (Hornschemeier et al. 2001). With a count rate of 0.018 c/s in the pn detector and assuming a Raymond-Smith model with $kT_{RS}=0.3$ keV and $N_H=10^{21}$ cm$^{-2}$, the 0.3--10 keV flux is $2.8\times10^{-14}$\thinspace\hbox{$\hbox{ergs}\thinspace\hbox{cm}^{-2}\thinspace\hbox{s}^{-1}$} and the corresponding $f_X/f_R$ is 0.005, typical for a foreground star (Hornschemeier et al. 2001). Based on the optical counterpart, the boresight correction that needs to be applied to the X-ray source positions is $1.47''\pm0.86''$ in R.A. and $2.51''\pm0.86''$ in decl.; the uncertainties are the quadratic sum of the positional errors of the X-ray and 2MASS sources. The correction is consistent with the absolute pointing accuracy of {\it XMM-Newton}\ (see also an example in Pietsch, Freyberg \& Haberl 2005). \vspace{2mm} \begin{inlinefigure} \psfig{file=f1.eps,width=3.4in} \caption{Color-color diagram of G1 and a nearby X-ray emitting object (lower left). Also plotted are the hardness ratios estimated from different spectral models. Top to bottom: power-law models with $\alpha$ of 1.0, 1.5, 2.0, and 2.5, and a Raymond-Smith model with $kT_{RS}$ of 0.3 keV. For each model, $N_H$ varies from left to right as $5\times10^{20}$, $10^{21}$, and $5\times10^{21}$ cm$^{-2}$.} \end{inlinefigure} \subsection{{\it Hubble Space Telescope}} G1 was observed with the {\it HST}\ Advanced Camera for Surveys (ACS) in the High Resolution Channel (HRC) mode on 2003 October 24. The total integration time is 41 minutes in the F555W filter, centered on G1. We used the {\it HST}\ pipeline data that were shifted and co-added using the MultiDrizzle package in PyRAF, with masking of cosmic rays, saturated pixels, and bad pixels. We calibrated the astrometry of the {\it HST}\ data by using the 2MASS catalog. The field-of-view of the ACS/HRC is small ($29''\times26''$) and only two stars in the field are in the 2MASS catalog. 
By computing the average offset between the {\it HST}\ and 2MASS stars, we shifted the {\it HST}\ image by $0.77634''$ in R.A. and $-0.5814''$ in decl., with a residual of $0.13''$. The ACS/HRC image of G1 is shown in Figure 2. G1 was also observed with the {\it HST}\ Wide Field Planetary Camera 2 (WFPC2) on 1995 October 2 with a total integration time of 37 minutes in the F555W filter. In addition to the F555W data, images were also taken in the F814W and F1042M filters. We downloaded the F555W image from the WFPC2 Associations Science Products Pipeline for which cosmic-ray free, science-quality images are dithered and co-added. Since the field-of-view of WFPC2 is much larger than that of ACS/HRC, we can correct the astrometry with 6 stars in the 2MASS catalog. We applied the astrometry correction using IRAF task {\it ccmap} yielding a residual of $0.14''$. The WFPC2 F555W image is shown in Figure 2. \section{X-ray Localization of G1} After registering the absolute reference frames of the {\it HST}\ and {\it XMM-Newton}\ images to the 2MASS catalog, we located the center of G1 in the {\it HST}\ images and the X-ray position of G1. We determined the centroid of G1 in the ACS/HRC image (R.A.=00h32m46.537s, decl.=+39d34m40.65s with $1\sigma$ error of $0.004''$) by computing the intensity weighted mean within the core radius ($0.21''$; Ma et al. 2007) using IRAF task {\it center}. We also checked the result by using the half-mass radius ($1.73''$) and there is no difference except for a larger error bar ($0.01''$). For the X-ray position, we applied the astrometric correction on the value determined by {\it edetect\_chain} yielding R.A.=00h32m46.6s, decl.=+39d34m40s. 
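The intensity-weighted centroid used above amounts to a first moment of the image within a circular aperture; a minimal sketch on a synthetic image (the image, coordinates, and aperture here are illustrative, not the G1 data):

```python
import numpy as np

def weighted_centroid(image, x0, y0, r):
    """Intensity-weighted mean position of the pixels within radius r of
    (x0, y0), analogous to the IRAF `center` computation used in the text."""
    y, x = np.indices(image.shape)
    mask = (x - x0)**2 + (y - y0)**2 <= r**2
    wts = image[mask]
    return ((x[mask] * wts).sum() / wts.sum(),
            (y[mask] * wts).sum() / wts.sum())

# Synthetic "cluster": a symmetric Gaussian centered at pixel (20.3, 24.7)
yy, xx = np.indices((50, 50))
img = np.exp(-((xx - 20.3)**2 + (yy - 24.7)**2) / (2.0 * 3.0**2))
cx, cy = weighted_centroid(img, 20, 25, 8)
print(cx, cy)  # recovers the input center to well under a pixel
```

For a symmetric profile the recovered center is insensitive to the exact aperture radius, consistent with the agreement found above between the core-radius and half-mass-radius apertures.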
We then determined the $1\sigma$ radius error circle ($1.5''$) of the {\it XMM-Newton}\ position of G1 by computing the quadratic sum of the positional uncertainty for the X-ray source ($1.23''$), the uncertainty in the optical astrometry ($0.13''$), and the uncertainty in the X-ray boresight correction ($0.86''$). The same procedure was also applied to the WFPC2 image. Figure 2 shows the ACS/HRC and WFPC2 images of G1 and the $1\sigma$ radius X-ray error circles. The center of G1 derived from the optical image is also marked. \begin{figure*} \psfig{file=f2a.jpg.ps,width=3.65in} \psfig{file=f2b.jpg.ps,width=3.65in} \caption{ {\it HST}\ ACS/HRC F555W (left) and WFPC2 F555W (right) images of G1. The cluster core is marked by a cross. The circles in both images are the $1\sigma$ radius ($1.5''$) {\it XMM-Newton}\ error circles. The two bright stars in the field were used for calibrating the astrometry with the 2MASS catalog.} \end{figure*} \section{Discussion} Using {\it XMM-Newton}\ and {\it HST}, we determined the centroid of G1 in the optical images as well as the X-ray position of G1. From Figure 2, although the X-ray position is offset from the cluster core, the cluster center is within the $1\sigma$ error circle of the X-ray position. Therefore, the current {\it XMM-Newton}\ data cannot constrain whether the X-rays of G1 come from Bondi accretion of ionized cluster gas by a central intermediate-mass black hole, for which the X-rays should come from the central 50 mas of the cluster (Pooley \& Rappaport 2006). Alternatively, the X-ray emission could be produced by luminous low-mass X-ray binaries, and such X-ray emission may lie outside the cluster center. Previous X-ray observations of globular clusters suggest that luminous ($L_X\gaeq10^{36}$ ergs s$^{-1}$) X-ray sources tend to be located within the core radius (Grindlay et al. 1984). 
Recent {\it Chandra}\ observations also show that nearly half of the quiescent low-mass X-ray binaries are found within the core radius (e.g., Grindlay et al. 2002; Pooley et al. 2002; Heinke et al. 2003, 2006). Therefore, it is likely that a luminous low-mass X-ray binary would be located within the core radius ($0.21''$; Ma et al. 2007) of G1. The X-ray emission of G1 could also come from multiple low-mass X-ray binaries, and we may be able to resolve G1 as an extended source with a high spatial resolution instrument. Pooley \& Rappaport (2006) estimated that about 75 low-mass X-ray binaries might be in G1. It is worth noting that only one globular cluster, M15, is known to host two luminous X-ray sources (White \& Angelini 2001; Hannikainen et al. 2005). In conclusion, based on the current {\it XMM-Newton}\ data, we cannot distinguish between the two possible mechanisms generating the X-ray emission of G1. While the X-ray position of G1 is the most crucial factor to determine its nature, Pooley \& Rappaport (2006) also suggested that the X-ray spectrum may provide some hint. However, it has been shown that the X-ray spectra of intermediate-mass black hole candidates span a range of different spectral shapes; in many cases, the spectra can be fit with several models, and the estimated mass of the accreting black hole is model dependent (e.g. Stobbart et al. 2006; Gon{\c c}alves \& Soria 2006). Some intermediate-mass black hole candidates also show X-ray spectral change at different luminosity states (Kong \& Di\,Stefano 2005). These make interpretation based on X-ray spectra more difficult. Nevertheless, if we assume a simple accretion disk model for G1, following Pooley \& Rappaport (2006), we would expect G1 to have a 10 eV supersoft component (see Di\,Stefano \& Kong 2003); any emission above 0.5 keV must come from additional components. 
From the color-color diagram (Figure 1), G1 has significant emission above 1 keV and indeed it is very similar to a typical X-ray binary in M31 with a simple power-law spectral model (Kong et al. 2002). Therefore, if G1 has a 10 eV supersoft spectrum, it must also have an additional hard component. Indeed, it would be a challenge for {\it XMM-Newton}\ and {\it Chandra}\ to detect such supersoft emission because an absorbed (Galactic value in the direction of M31; $N_H=7\times10^{20}$ cm$^{-2}$) 10 eV spectrum turns over at about 0.2 keV, which is the sensitivity limit of these instruments. For instance, if the X-ray emission is dominated by a 10 eV spectrum, simulations show that it requires 700 ksec {\it XMM-Newton}\ or 1 Msec {\it Chandra}\ observing time in order to detect the source. If the black hole of G1 is only 100 $M_\odot$, the thermal emission would have a temperature of about 80 eV and we should be able to detect it with a 6 ksec {\it XMM-Newton}\ or 10 ksec {\it Chandra}\ observation. For a supersoft X-ray source ($kT < 100$ eV), we would not expect to see X-rays above 1 keV, which is inconsistent with our current result. Alternatively, if the X-ray emission is from a luminous low-mass X-ray binary, we would also expect soft multi-color disk blackbody X-ray emission ($kT_{in}\approx0.3-3$ keV) in addition to a power-law like component associated with Comptonization of cooler photons (e.g. Sidoli et al. 2001). However, with only $\sim 70$ counts from all three {\it XMM-Newton}\ detectors, we do not have a good constraint on the spectral model. In conclusion, X-ray spectra provided by {\it XMM-Newton}\ and {\it Chandra}\ alone cannot provide convincing evidence for the nature of the X-ray emission from G1. Although the X-ray position provided by {\it XMM-Newton}\ cannot place any strong constraint on the nature of the X-ray emission of G1, future {\it Chandra}\ observations may resolve the problem. 
As discussed in Pooley \& Rappaport (2006), we can improve the relative astrometry of {\it Chandra}\ and {\it HST}\ to $0.1''-0.2''$ if we can match a few {\it Chandra}\ sources to their optical counterparts (e.g. foreground stars or background active galactic nuclei). With such observations, we can accurately localize the X-ray emission of G1. However, the fact that a luminous low-mass X-ray binary is likely to lie $\lesssim 0.21''$ from the cluster core suggests that we may not be able to disentangle its emission from that of a possible intermediate-mass black hole. Alternatively, a {\it Chandra}\ observation may be able to distinguish an extended object comprising multiple X-ray sources from point-like emission. \begin{acknowledgements} We would like to thank an anonymous referee for useful comments. This work is based on observations obtained with {\it XMM-Newton}, an ESA mission with instruments and contributions directly funded by ESA member states and the US (NASA). The {\it HST}\ data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. \end{acknowledgements} {\it Facilities:} \facility{XMM (EPIC)}, \facility{HST (ACS/HRC, WFPC2)}
\section{Introduction} \label{sec:INT} Recently, much attention has been paid to brain rhythms in health and disease \cite{Buz}. These brain rhythms emerge via synchronization between individual firings in neural circuits. This kind of neural synchronization may be used for efficient sensory and cognitive processing such as sensory perception, multisensory integration, selective attention, and memory formation \cite{Wang1,Wang2,Gray}, and it is also correlated with pathological rhythms associated with neural diseases (e.g., epileptic seizures and tremor in Parkinson's disease) \cite{TW}. The brain receives natural sensory stimulation, and experimental electrical or magnetic stimulation in the neural system is used for analyzing the dynamical interactions between different brain areas. Responses to these external stimuli can provide crucial information about the dynamical properties of the neural system. For example, the effects of periodic stimuli on rhythmic biological activity were experimentally studied by applying rhythmic visual stimuli \cite{VS} and periodic auditory stimulation \cite{AS}. Hence, it is of great importance to investigate how an external stimulus affects the neural synchronization in the brain. Techniques for controlling population synchronization have been proposed, which enable us to suppress or enhance it. For example, one technique is external time-periodic stimulation \cite{TPS1,TPS2,TPS3,TPTD}, and another is time-delayed feedback in the mean field \cite{TPTD,TDF1,TDF2,TDF3}. Synchronization suppression may be effective in suppressing pathological brain rhythms, while synchronization enhancement might be useful in cases of failure of cardiac or neural pacemakers. Particularly, deep brain stimulation techniques have been used to suppress pathological rhythms in patients with neural diseases such as Parkinson's disease, essential tremor, and epilepsy \cite{DBS1,DBS2,DBS3}. 
For this technique, micro-electrodes are implanted in deep brain regions of patients, and then time-periodic electric signals or time-delayed feedback signals are injected to suppress abnormal rhythms. Most previous theoretical and computational works on control of population synchronization were focused on the case of excitatory-type couplings \cite{TPS1,TPS2,TPS3,TPTD,TDF1,TDF2,TDF3}. To study how dynamical responses to external stimuli depend on the synaptic type (excitatory or inhibitory), we consider two types of excitatory and inhibitory full synchronization [i.e., full synchronization (where all neurons fire in each global cycle of population rhythm) via excitatory and inhibitory synaptic interactions] in complex small-world networks of excitatory regular spiking (RS) pyramidal neurons and inhibitory fast spiking (FS) interneurons. We apply external time-periodic stimuli $S(t)$ $[=A \sin (\omega_d t)]$ to a fraction of neurons for both cases of excitatory and inhibitory synchronization, and investigate their dynamical responses to $S(t)$ by changing the driving amplitude $A$ for a fixed driving angular frequency $\omega_d$. For describing collective behaviors in the whole population, we use an instantaneous whole-population spike rate (IWPSR) $R_w(t)$ which may be obtained from the raster plot of spikes where population synchronization may be well seen \cite{Kim1}. For the case of synchronization, $R_w(t)$ shows oscillatory behavior, while it becomes nearly stationary in the case of desynchronization. We characterize dynamical responses to $S(t)$ in terms of a dynamical response factor $D_f$ (given by the square root of the ratio of the variances of $R_w(t)$ in the presence and absence of the stimulus). If $D_f$ is larger than 1, then synchronization enhancement occurs; otherwise (i.e., $D_f <1$), synchronization suppression takes place. 
For both cases of excitatory and inhibitory couplings, stimulated neurons are phase-locked to the external stimulus $S(t)$. In contrast, the stimulation effect on non-stimulated neurons varies depending on the synaptic-coupling type. For the excitatory case, non-stimulated RS neurons are also phase-locked to $S(t)$ thanks to a constructive effect of $S(t)$ (resulting from phase-attractive synaptic excitation). On the other hand, in the inhibitory case the original full synchronization in the non-stimulated sub-population breaks up gradually with increasing $A$ due to a destructive effect of $S(t)$ (coming from strong synaptic inhibition), and then a new type of sparse synchronization (where only a fraction of the neurons fire in each global cycle of population rhythm) appears. As a result of these different effects of $S(t)$, the type and degree of dynamical response (characterized by $D_f$) vary differently, depending on the type of synaptic interaction. For further analysis of dynamical response, we also decompose the whole population into two sub-populations of the stimulated and the non-stimulated neurons. Then, two instantaneous sub-population spike rates (ISPSRs) $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ [the superscript 1 (2) corresponds to the stimulated (non-stimulated) case] may be used to show collective behaviors in the two sub-populations of stimulated and non-stimulated neurons, respectively, and the matching degree between the dynamics of the stimulated and the non-stimulated sub-populations is measured in terms of a ``cross-correlation'' measure $M_c$ between $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$. $M_c$ also varies with $A$ in a distinctly different way, depending on the synaptic-coupling type, because of different effects of $S(t)$. Based on the cross-correlations between the two sub-populations (characterized by $M_c$), we also discuss the dynamical responses to $S(t)$. This paper is organized as follows. 
In Sec.~\ref{sec:SWN}, we describe complex small-world networks of excitatory RS pyramidal neurons and inhibitory FS interneurons, and the governing equations for the population dynamics are given. Then, in Sec.~\ref{sec:DR} we investigate the effects of synaptic couplings on dynamical responses to external time-periodic stimuli $S(t)$ for both excitatory and inhibitory cases. Finally, in Sec.~\ref{sec:SUM} a summary is given. Methods for characterizing synchronization in each of the stimulated and the non-stimulated sub-populations are explained in Appendix \ref{sec:SMSM}. \section{Small-World Networks of Excitatory RS Pyramidal Neurons and Inhibitory FS Interneurons} \label{sec:SWN} We consider two types of directed Watts-Strogatz small-world networks (SWNs), composed of $N$ excitatory RS pyramidal neurons and of $N$ inhibitory FS interneurons, respectively, equidistantly placed on a one-dimensional ring of radius $N/ 2 \pi$. The Watts-Strogatz SWN interpolates between a regular lattice with high clustering (corresponding to the case of $p=0$) and a random graph with short average path length (corresponding to the case of $p=1$) via random uniform rewiring with the probability $p$ \cite{SWN1,SWN2,SWN3}. We start with a directed regular ring lattice with $N$ nodes (the $p=0$ case), where each node is coupled to its first $M_{syn}$ neighbors ($M_{syn}/2$ on either side) via outward synapses, and then rewire each outward connection uniformly at random over the whole ring with probability $p$ (without self-connections and duplicate connections). This Watts-Strogatz SWN model may be regarded as a cluster-friendly extension of the random network by reconciling the six degrees of separation (small-worldness) \cite{SDS1,SDS2} with the circle of friends (clustering). 
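The rewiring procedure described above can be sketched as follows. This is a minimal illustration rather than the authors' code; the convention $w_{ij}=1$ when node $j$ projects to node $i$ matches the connection-matrix definition used later for the synaptic currents:

```python
import numpy as np

def directed_sw_network(N, M_syn, p, seed=None):
    """Directed Watts-Strogatz small-world network: each node sends outward
    synapses to its M_syn nearest neighbors (M_syn/2 on either side), and each
    outward connection is rewired uniformly at random over the whole ring with
    probability p, avoiding self-connections and duplicate connections.
    Returns a matrix w with w[i, j] = 1 if node j projects to node i."""
    rng = np.random.default_rng(seed)
    w = np.zeros((N, N), dtype=int)
    for j in range(N):                                  # j: presynaptic node
        original = [(j + s * k) % N for k in range(1, M_syn // 2 + 1)
                    for s in (+1, -1)]
        targets = set(original)
        for i in original:
            if rng.random() < p:                        # rewire this link
                targets.discard(i)
                new = int(rng.integers(N))
                while new == j or new in targets:
                    new = int(rng.integers(N))
                targets.add(new)
        for i in targets:
            w[i, j] = 1
    return w

w = directed_sw_network(N=200, M_syn=50, p=0.2, seed=1)
print(w.sum(axis=0).mean())  # out-degree stays exactly M_syn = 50
```

Since only the targets of outward links are rewired, every node keeps exactly $M_{syn}$ outward synapses, while the in-degrees $d_i^{(in)}$ become heterogeneous for $p>0$.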
As elements in our neural networks, we choose the Izhikevich RS pyramidal neuron and FS interneuron models, which are not only biologically plausible, but also computationally efficient \cite{Izhi1,Izhi2,Izhi3,Izhi4}. The following equations (\ref{eq:PD1})-(\ref{eq:PD8}) govern the population dynamics in the SWNs: \begin{eqnarray} C\frac{dv_i}{dt} &=& k (v_i - v_r) (v_i - v_t) - u_i +I_{DC} +D \xi_{i} -I_{syn,i} + S_i(t), \label{eq:PD1} \\ \frac{du_i}{dt} &=& a \{ U(v_i) - u_i \}, \;\;\; i=1, \cdots, N, \label{eq:PD2} \end{eqnarray} with the auxiliary after-spike resetting: \begin{equation} {\rm if~} v_i \geq v_p,~ {\rm then~} v_i \leftarrow c~ {\rm and~} u_i \leftarrow u_i + d, \label{eq:PD3} \end{equation} where \begin{eqnarray} U(v) &=& b (v - v_b)~{\rm for~the~ RS~ pyramidal~ neurons}, \label{eq:PD4} \\ &=& \left\{ \begin{array}{l} 0 {\rm ~for~} v<v_b \\ b(v - v_b)^3 {\rm ~for~} v \ge v_b \end{array} \right. ~{\rm for~ the~ FS~ interneurons}, \label{eq:PD5} \\ I_{syn,i} &=& \frac{J}{d_i^{(in)}} \sum_{j=1 (\ne i)}^N w_{ij} s_j(t) (v_i - V_{syn}), \label{eq:PD6}\\ s_j(t) &=& \sum_{f=1}^{F_j} E(t-t_f^{(j)}-\tau_l);~E(t) = \frac{1}{\tau_d - \tau_r} (e^{-t/\tau_d} - e^{-t/\tau_r}) \Theta(t), \label{eq:PD7} \\ S_i(t) &=& \alpha_i A \sin(\omega_d t). \label{eq:PD8} \end{eqnarray} Here, $v_i(t)$ and $u_i(t)$ are the state variables of the $i$th neuron at time $t$, representing the membrane potential and the recovery current, respectively. The membrane potential and the recovery variable are reset according to Eq.~(\ref{eq:PD3}) when $v_i(t)$ reaches its cutoff value $v_p$. $C$, $v_r$, and $v_t$ in Eq.~(\ref{eq:PD1}) are the membrane capacitance, the resting membrane potential, and the instantaneous threshold potential, respectively. The parameter values used in our computations are listed in Table \ref{tab:Parm}. 
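As a concrete illustration, a single RS neuron obeying the above equations (without noise, coupling, or stimulus) can be integrated with a simple forward Euler scheme. This is only a sketch: the parameter values below are the standard published RS values, consistent with the rheobase $I_{DC} \simeq 51$ quoted later, but they are an assumption since Table \ref{tab:Parm} is not reproduced here, and $v_b$ is taken equal to $v_r$:

```python
# Minimal forward-Euler sketch of a single Izhikevich RS neuron with
# after-spike resetting, without noise, synapses, or stimulus.
# Parameter values: standard RS values from Izhikevich's papers
# (an assumption here), with v_b set equal to v_r.
C, k, v_r, v_t, v_p = 100.0, 0.7, -60.0, -40.0, 35.0
a, b, c, d = 0.03, -2.0, -50.0, 100.0
I_DC, dt = 70.0, 0.01                     # suprathreshold drive; step in ms

v, u, spikes = v_r, 0.0, []
for step in range(int(1000.0 / dt)):      # 1 s of simulated time
    dv = (k * (v - v_r) * (v - v_t) - u + I_DC) / C
    du = a * (b * (v - v_r) - u)
    v += dt * dv
    u += dt * du
    if v >= v_p:                          # after-spike resetting
        spikes.append(step * dt)
        v, u = c, u + d
print(len(spikes), "spikes in 1 s")       # low-rate tonic firing
```

With $I_{DC}=70$, slightly above the saddle-node threshold, the neuron fires tonically at a low rate, as expected for a type-I neuron just past its rheobase.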
More details on the Izhikevich RS pyramidal neuron and FS interneuron models, the external stimulus to each Izhikevich neuron, the synaptic currents, the external time-periodic stimulus to sub-populations of randomly-selected neurons, and the numerical method for integration of the governing equations are given in the following subsections. \subsection{Izhikevich RS Pyramidal Neuron and FS Interneuron Models} \label{subsec:Izhi} The Izhikevich model matches neuronal dynamics by tuning the parameters $(k, a, b, c, d)$ instead of matching neuronal electrophysiology, unlike the Hodgkin-Huxley-type conductance-based models \cite{Izhi1,Izhi2,Izhi3,Izhi4}. The parameters $k$ and $b$ are related to the neuron's rheobase and input resistance, and $a$, $c$, and $d$ are the recovery time constant, the after-spike reset value of $v$, and the after-spike jump value of $u$, respectively. Depending on the values of these parameters, the Izhikevich neuron model may exhibit 20 of the most prominent neuro-computational features of cortical neurons \cite{Izhi1,Izhi2,Izhi3,Izhi4}. Here, we use the parameter values for the RS pyramidal neurons and the FS interneurons in the layer 5 rat visual cortex, which are listed in the 1st and the 2nd items of Table \ref{tab:Parm} \cite{Izhi3}. \subsection{External Stimulus to Each Izhikevich Neuron} \label{subsec:Sti} Each Izhikevich neuron is stimulated by both a common DC current $I_{DC}$ and an independent Gaussian white noise $\xi_i$ [see the 3rd and the 4th terms in Eq.~(\ref{eq:PD1})]. The Gaussian white noise satisfies $\langle \xi_i(t) \rangle =0$ and $\langle \xi_i(t)~\xi_j(t') \rangle = \delta_{ij}~\delta(t-t')$, where $\langle\cdots\rangle$ denotes the ensemble average. Here, the Gaussian noise $\xi$ may be regarded as a parametric one which randomly perturbs the strength of the applied current $I_{DC}$, and its intensity is controlled by the parameter $D$. 
For $D=0$, the Izhikevich RS pyramidal neurons exhibit type-I excitability, while the Izhikevich FS interneurons show type-II excitability \cite{Izhi3}. For the type-I case, a transition from a resting state to a spiking state occurs as $I_{DC}$ passes a threshold via a saddle-node bifurcation on an invariant circle, and firing begins at arbitrarily low frequency \cite{Izhi3,Ex1,Ex2}. On the other hand, a type-II neuron exhibits a jump from a resting state to a spiking state through a subcritical Hopf bifurcation when passing a threshold by absorbing an unstable limit cycle born via a fold limit cycle bifurcation, and hence the firing frequency begins from a non-zero value \cite{Izhi3,Ex1,Ex2}. The values of $I_{DC}$ and $D$ used in this paper are given in the 3rd item of Table \ref{tab:Parm}. \subsection{Synaptic Currents} \label{subsec:Syn} The 5th term in Eq.~(\ref{eq:PD1}) denotes the synaptic couplings of Izhikevich neurons. $I_{syn,i}$ of Eq.~(\ref{eq:PD6}) represents the synaptic current injected into the $i$th neuron. The synaptic connectivity is given by the connection weight matrix $W$ (=$\{ w_{ij} \}$) where $w_{ij}=1$ if the neuron $j$ is presynaptic to the neuron $i$; otherwise, $w_{ij}=0$. Here, the synaptic connection is modeled in terms of the Watts-Strogatz SWN. The in-degree of the $i$th neuron, $d_{i}^{(in)}$ (i.e., the number of synaptic inputs to the neuron $i$) is given by $d_{i}^{(in)} = \sum_{j=1(\ne i)}^N w_{ij}$. For this case, the average number of synaptic inputs per neuron is given by $M_{syn} = \frac{1}{N} \sum_{i=1}^{N} d_{i}^{(in)}$. The fraction of open synaptic ion channels at time $t$ is denoted by $s(t)$. 
The time course of $s_j(t)$ of the $j$th neuron is given by a sum of delayed double-exponential functions $E(t-t_f^{(j)}-\tau_l)$ [see Eq.~(\ref{eq:PD7})], where $\tau_l$ is the synaptic delay, and $t_f^{(j)}$ and $F_j$ are the $f$th spiking time and the total number of spikes of the $j$th neuron (which occur until time $t$), respectively. Here, $E(t)$ [which corresponds to the contribution of a presynaptic spike occurring at time $0$ to $s(t)$ in the absence of synaptic delay] is controlled by the two synaptic time constants: synaptic rise time $\tau_r$ and decay time $\tau_d$, and $\Theta(t)$ is the Heaviside step function: $\Theta(t)=1$ for $t \geq 0$ and 0 for $t <0$. The synaptic coupling strength is controlled by the parameter $J$, and $V_{syn}$ is the synaptic reversal potential. For both the excitatory AMPA synapse and the inhibitory GABAergic synapse (involving the $\rm{GABA_A}$ receptors), the values of $\tau_l$, $\tau_r$, $\tau_d$, and $V_{syn}$ are listed in the 4th item of Table \ref{tab:Parm} \cite{Synapse}. \subsection{External Time-Periodic Stimulus to Sub-Populations of Randomly-Selected Neurons} \label{subsec:ST} The last term in Eq.~(\ref{eq:PD1}) represents the external time-periodic stimulus to the $i$th neuron, $S_i(t)$, the explicit form of which is given in Eq.~(\ref{eq:PD8}). If the stimulus is applied to the $i$th neuron, $\alpha_i=1$; otherwise, $\alpha_i=0.$ (In the absence of external stimulus, $\alpha_i=0$ for all $i$.) The driving angular frequency of the stimulus is $\omega_d$, and its amplitude is $A.$ We apply $S_i(t)$ to sub-groups of randomly-chosen $N_s (=50)$ RS pyramidal neurons and FS interneurons, respectively. \subsection{Numerical Method for Integration} \label{subsec:NM} Numerical integration of the stochastic differential equations (\ref{eq:PD1})-(\ref{eq:PD8}) is done by employing the Heun method \cite{SDE} with the time step $\Delta t=0.01$ ms. 
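The stochastic Heun scheme is a predictor-corrector variant of the Euler-Maruyama method in which the same Wiener increment is used in both stages and only the drift is averaged. A minimal generic sketch, illustrated on a one-dimensional Ornstein-Uhlenbeck toy process rather than the full network equations:

```python
import numpy as np

def heun_step(x, drift, sigma, dt, rng):
    """One stochastic Heun (predictor-corrector) step for dx = f(x) dt + sigma dW.
    The same Wiener increment dW is used in the predictor and the corrector,
    and only the drift is averaged over the two stages."""
    dW = rng.normal(0.0, np.sqrt(dt))
    f0 = drift(x)
    x_pred = x + f0 * dt + sigma * dW          # Euler-Maruyama predictor
    return x + 0.5 * (f0 + drift(x_pred)) * dt + sigma * dW

# Toy example: Ornstein-Uhlenbeck process dx = -x dt + 0.3 dW with dt = 0.01
rng = np.random.default_rng(0)
x = 1.0
for _ in range(int(100.0 / 0.01)):
    x = heun_step(x, lambda y: -y, 0.3, 0.01, rng)
```

In the network equations, the additive noise term $D\xi_i$ enters the membrane equation exactly in this role, with the drift given by the deterministic right-hand side of Eq.~(\ref{eq:PD1}).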
For each realization of the stochastic process, we choose random initial points $[v_i(0),u_i(0)]$ for the $i$th $(i=1,\dots, N)$ RS pyramidal neuron and FS interneuron with uniform probability in the range of $v_i(0) \in (-50,-45)$ and $u_i(0) \in (10,15)$. \section{Effects of Synaptic-Coupling Type on Dynamical Responses to External Time-Periodic Stimuli} \label{sec:DR} In this section, we study the effects of synaptic-coupling type on dynamical responses to external time-periodic stimuli $S(t)$ in the Watts-Strogatz SWN with the average number of synaptic inputs $M_{syn}=50$ and the rewiring probability $p=0.2$. Both the excitatory and the inhibitory cases are investigated by varying the driving amplitude $A$ for a fixed driving angular frequency $\omega_d$. \subsection {Dynamical Response of Excitatory Synchronization to An External Time-Periodic Stimulus} \label{subsec:DREx} We consider an excitatory Watts-Strogatz SWN composed of $N (=10^3)$ Izhikevich RS pyramidal neurons. Figure \ref{fig:RS}(a) shows a plot of the firing frequency $f$ versus the external DC current $I_{DC}$ for a single Izhikevich RS neuron in the absence of noise ($D=0$). This Izhikevich RS neuron exhibits type-I excitability for $I_{DC} > 51$ because its frequency may be arbitrarily small \cite{Izhi3,Ex1,Ex2}. Here, we consider a suprathreshold case of $I_{DC}=70$ in the presence of noise with its intensity $D=1$ for which a time series of the membrane potential $v$ with an oscillating frequency $f \simeq 7.0$ Hz is shown in Fig.~\ref{fig:RS}(b). We set the coupling strength at $J=15$. Spike synchronization is well seen in the raster plot of spikes in Fig.~\ref{fig:RS}(c1). ``Stripes'' (composed of synchronized spikes) appear regularly. All Izhikevich RS neurons fire synchronously in each stripe, and hence full synchronization occurs. For this synchronous case, an oscillating IWPSR (instantaneous whole-population spike rate) $R_w(t)$ appears. 
To obtain a smooth IWPSR, we employ kernel density estimation (a kernel smoother) \cite{Kernel}. Each spike in the raster plot is convolved (or blurred) with a kernel function $K_h(t)$ to obtain a smooth estimate of the IWPSR $R_w(t)$: \begin{equation} R_w(t) = \frac{1}{N} \sum_{i=1}^{N} \sum_{s=1}^{n_i} K_h (t-t_{s}^{(i)}), \label{eq:IPSR} \end{equation} where $t_{s}^{(i)}$ is the $s$th spiking time of the $i$th neuron, $n_i$ is the total number of spikes for the $i$th neuron, and we use a Gaussian kernel function of band width $h$: \begin{equation} K_h (t) = \frac{1}{\sqrt{2\pi}h} e^{-t^2 / 2h^2}, ~~~~ -\infty < t < \infty. \label{eq:KF} \end{equation} Figure \ref{fig:RS}(c2) shows a regularly-oscillating IWPSR kernel estimate $R_w(t)$. The population frequency $f_p (\simeq 7.6$ Hz) of $R_w(t)$ may be obtained from the power spectrum of $\Delta R_w(t)$ $[= R_w(t) - \overline{R_w(t)}]$ (the overline represents the time average), which is shown in Fig.~\ref{fig:RS}(d). For analysis of individual spiking behaviors, an inter-spike interval (ISI) histogram is given in Fig.~\ref{fig:RS}(e). The ensemble-averaged ISI $\langle ISI \rangle$ ($\langle \cdots \rangle$ denotes an ensemble average) is 131.6 ms, and hence the ensemble-averaged mean firing rate (MFR) $\langle f_i \rangle$ of individual neurons ($f_i$ is the MFR of the $i$th neuron and $\langle f_i \rangle$ corresponds to the reciprocal of $\langle ISI \rangle$) is 7.6 Hz. For the case of full synchronization, $f_p = \langle f_i \rangle$, in contrast to the case of sparse synchronization where $f_p$ is larger than $\langle f_i \rangle$ due to stochastic spike skipping of individual neurons \cite{Sparse,SWN-Kim}. 
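The kernel estimate of Eqs.~(\ref{eq:IPSR}) and (\ref{eq:KF}) can be sketched directly; the spike trains below are hypothetical, chosen to mimic full synchronization at the $\langle ISI \rangle$ quoted above:

```python
import numpy as np

def iwpsr(spike_times, N, t_grid, h):
    """Kernel estimate of the instantaneous whole-population spike rate:
    every spike is convolved with a Gaussian kernel of band width h (ms)
    and the sum is divided by the number of neurons N."""
    t_all = np.concatenate(spike_times)              # spikes of all neurons
    diffs = t_grid[:, None] - t_all[None, :]
    K = np.exp(-diffs**2 / (2.0 * h**2)) / (np.sqrt(2.0 * np.pi) * h)
    return K.sum(axis=1) / N

# Hypothetical fully synchronized raster: 5 neurons, all spiking every 131.6 ms
spikes = [np.arange(0.0, 1000.0, 131.6) for _ in range(5)]
t = np.linspace(0.0, 1000.0, 2001)
R_w = iwpsr(spikes, N=5, t_grid=t, h=5.0)            # sharp regular peaks
```

Since each Gaussian kernel integrates to one, the time integral of $R_w(t)$ recovers the average number of spikes per neuron, and the band width $h$ sets the smoothness of the rate estimate.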
We apply an external time-periodic AC stimulus $S(t)$ to a sub-population of $N_s(=50)$ randomly-selected Izhikevich RS pyramidal neurons by fixing the driving angular frequency as $\omega_d (=2 \pi f_d)$ =0.048 rad/ms ($f_d=\langle f_i \rangle=$ 7.6 Hz), and investigate the dynamical response of the above full synchronization for $J=15$ by varying the driving amplitude $A$. Figures \ref{fig:DREx1}(a1)-\ref{fig:DREx1}(a8) show raster plots of spikes for various values of $A$. Their corresponding IWPSR kernel estimates $R_w(t)$ are shown in Figs.~\ref{fig:DREx1}(b1)-\ref{fig:DREx1}(b8), and the power spectra of $\Delta R_w(t)$ are also given in Figs.~\ref{fig:DREx1}(e1)-\ref{fig:DREx1}(e8). Population synchronization may be well seen in these raster plots of spikes. For a synchronous case, the IWPSR kernel estimate $R_w(t)$ exhibits oscillatory behavior. In addition, times series of individual membrane potentials $v_5(t)$ and $v_{20}(t)$ of the stimulated 5th and the non-stimulated 20th RS neurons are also given in Figs.~\ref{fig:DREx1}(c1)-\ref{fig:DREx1}(c8) and Figs.~\ref{fig:DREx1}(d1)-\ref{fig:DREx1}(d8), respectively. Then, the type and degree of dynamical response may be characterized in terms of a dynamical response factor $D_f$ \cite{TDF1,TDF2}: \begin{equation} D_f = \sqrt{\frac {Var(R_w^{(A)})} {Var(R_w^{(0)})} }, \label{eq:DRF} \end{equation} where $Var(R_w^{(A)})$ and $Var(R_w^{(0)})$ represent the variances of the IWPSR kernel estimate $R_w(t)$ in the presence and absence of stimulus, respectively. If the dynamical response factor $D_f$ is larger than 1, then synchronization enhancement occurs; otherwise (i.e., $D_f <1$), synchronization suppression takes place. Figure \ref{fig:DREx1}(f) shows a plot of $\langle D_f \rangle_r$ versus $A$; $\langle \cdots \rangle_r$ denotes an average over realizations. Three stages are found to appear. 
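The dynamical response factor of Eq.~(\ref{eq:DRF}) reduces to a ratio of standard deviations of the IWPSR; a minimal sketch with synthetic traces (the doubling of the oscillation amplitude under stimulation is illustrative only):

```python
import numpy as np

def response_factor(R_with, R_without):
    """Dynamical response factor D_f = sqrt( Var[R_w^(A)] / Var[R_w^(0)] ).
    D_f > 1 signals synchronization enhancement, D_f < 1 suppression."""
    return np.sqrt(np.var(R_with) / np.var(R_without))

# Synthetic IWPSR traces (illustrative only): stimulation doubles the amplitude
t = np.linspace(0.0, 1000.0, 10001)                     # ms
R0 = 10.0 + 3.0 * np.sin(2.0 * np.pi * 7.6e-3 * t)      # A = 0, f_p = 7.6 Hz
RA = 10.0 + 6.0 * np.sin(2.0 * np.pi * 7.6e-3 * t)      # stimulated
print(response_factor(RA, R0))  # -> 2.0, i.e., enhancement
```

Because the mean level is subtracted out by the variance, $D_f$ responds only to the amplitude of the population oscillation, not to overall shifts in firing rate.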
Synchronization enhancement ($\langle D_f \rangle_r >1$), synchronization suppression ($\langle D_f \rangle_r <1$), and synchronization enhancement (i.e., increase in $\langle D_f \rangle_r$ from 1) occur in the 1st (I) stage ($0< A < A^*_{1}$), the 2nd (II) stage ($A^*_{1} < A < A^*_{2}$), and the 3rd (III) stage ($A>A^*_{2}$), respectively; $A^*_{1}\simeq 83$ and $A^*_{2} \simeq 1287$. Examples are given for various values of $A$: 1st stage ($A=10$), 2nd stage ($A=150$, 400, and 800), and 3rd stage ($A=2000$, 5000, and $10^4$). For further analysis of dynamical responses, we decompose the whole population of RS neurons into two sub-populations of the stimulated and the non-stimulated RS neurons. Dynamical responses in these two sub-populations are shown well in Fig.~\ref{fig:DREx2}. Raster plots of spikes, instantaneous sub-population spike rate (ISPSR) kernel estimates $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ [the superscript 1 (2) corresponds to the stimulated (non-stimulated) case], and power spectra of $\Delta R_s^{(1)}(t)$ and $\Delta R_s^{(2)}(t)$ in the stimulated and the non-stimulated sub-populations are shown in Figs.~\ref{fig:DREx2}(a1)-\ref{fig:DREx2}(a8), Figs.~\ref{fig:DREx2}(b1)-\ref{fig:DREx2}(b8), and Figs.~\ref{fig:DREx2}(c1)-\ref{fig:DREx2}(c8), respectively: the upper (lower) panels in these figures represent those for the stimulated (non-stimulated) case. We also measure the degree of population synchronization in each of the stimulated and the non-stimulated sub-populations by employing a realistic statistical-mechanical spiking measure, which was developed in our recent work \cite{Kim1}. As shown in Figs.~\ref{fig:DREx2}(a1)-\ref{fig:DREx2}(a8), population synchronization may be well visualized in a raster plot of spikes. For a synchronized case, the raster plot is composed of spiking stripes or bursting bands (indicating population synchronization).
To measure the degree of the population synchronization seen in the raster plot, a statistical-mechanical spiking measure $M_s^{(l)}$ of Eq.~(\ref{eq:SM}), based on the ISPSR kernel estimates $R_s^{(l)}(t)$ [$l=1$ (2) corresponds to the stimulated (non-stimulated) case], was introduced by considering the occupation degrees $O_i^{(l)}$ of Eq.~(\ref{eq:OD}) (representing the density of stripes/bands) and the pacing degrees $P_i^{(l)}$ of Eq.~(\ref{eq:PD}) (denoting the smearing of stripes/bands) of the spikes in the stripes/bands \cite{Kim1}; for more details, refer to Appendix \ref{sec:SMSM}. The average occupation degree $\langle \langle O_i^{(l)} \rangle \rangle_r$, the average pacing degree $\langle \langle P_i^{(l)} \rangle \rangle_r$, and the average statistical-mechanical spiking measure $\langle M_s^{(l)} \rangle_r$ ($\langle \cdots \rangle$ and $\langle \cdots \rangle_r$ represent the averages over global cycles and realizations, respectively) are shown in Figs.~\ref{fig:DREx2}(d1)-\ref{fig:DREx2}(d3), respectively. Moreover, we obtain the cross-correlation function $C_{12}(\tau)$ between $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ of the two sub-populations: \begin{equation} C_{12}(\tau) = \frac{\overline{\Delta R_s^{(1)}(t+\tau) \Delta R_s^{(2)}(t)}} {\sqrt{\overline{ {\Delta R_s^{(1)}}^2(t)}} \sqrt{\overline{ {\Delta R_s^{(2)}}^2(t)}}}, \label{eq:CCF} \end{equation} where $\Delta R_s^{(1)}(t) = R_s^{(1)}(t) - \overline{R_s^{(1)}(t)}$, $\Delta R_s^{(2)}(t) = R_s^{(2)}(t) - \overline{R_s^{(2)}(t)}$, and the overline denotes the time average. Then, the cross-correlation measure $M_c$ between the stimulated and the non-stimulated sub-populations is given by the value of $C_{12}(\tau)$ at the zero-time lag: \begin{equation} M_c = C_{12}(0), \label{eq:CM} \end{equation} which corresponds to the Pearson correlation coefficient for pairs of $[R_s^{(1)}(t), R_s^{(2)}(t)]$ \cite{NR}.
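Eqs.~(\ref{eq:CCF}) and (\ref{eq:CM}) amount to computing the Pearson coefficient of the two demeaned ISPSRs at zero lag. A minimal sketch with synthetic in-phase and anti-phase sub-population rates (toy signals, not simulation output):

```python
import numpy as np

def cross_correlation_measure(R1, R2):
    """M_c = C_12(0) of Eqs. (CCF) and (CM): zero-lag normalized
    cross-correlation (Pearson coefficient) between the ISPSR kernel
    estimates of the two sub-populations."""
    d1 = R1 - R1.mean()
    d2 = R2 - R2.mean()
    return np.mean(d1 * d2) / np.sqrt(np.mean(d1 ** 2) * np.mean(d2 ** 2))

# toy ISPSRs oscillating at 7.6 Hz: an in-phase pair gives M_c = 1,
# an anti-phase pair gives M_c = -1
t = np.linspace(0.0, 500.0, 5001)                          # time (ms)
R1 = 4.0 + 1.0 * np.cos(2.0 * np.pi * 0.0076 * t)
R2 = 3.0 + 0.5 * np.cos(2.0 * np.pi * 0.0076 * t)          # in phase
R3 = 3.0 + 0.5 * np.cos(2.0 * np.pi * 0.0076 * t + np.pi)  # anti-phase
M_c_in = cross_correlation_measure(R1, R2)                 # -> 1.0
M_c_anti = cross_correlation_measure(R1, R3)               # -> -1.0
```

Note that $M_c$ is insensitive to the mean levels and overall amplitudes of the two rates; it measures only how well their oscillations are matched in phase.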
The cross-correlation functions $\langle C_{12}(\tau) \rangle_r$ for various values of $A$ are shown in Figs.~\ref{fig:DREx2}(e1)-\ref{fig:DREx2}(e8), and Fig.~\ref{fig:DREx2}(f) shows a plot of $\langle M_c \rangle_r$ versus $A$. We consider the 1st stage [$0 < A < A^*_1(\simeq 83)$] where synchronization enhancement with $\langle D_f \rangle_r>1$ occurs. For small $A$, stimulated RS neurons exhibit spikings which are phase-locked to the external AC stimulus $S(t)$ [e.g., see Figs.~\ref{fig:DREx1}(c2) and \ref{fig:DREx2}(a2) for $A=10$]. [In Fig.~\ref{fig:DREx2}, the upper (lower) panels correspond to the stimulated (non-stimulated) case.] Non-stimulated RS neurons also show spikings which are well matched with those of stimulated neurons thanks to the phase-attractive effect of synaptic excitation, as shown in Figs.~\ref{fig:DREx1}(d2) and \ref{fig:DREx2}(a2) for $A=10$. For the case of $A=10$, the widths of stripes in the raster plot of spikes are reduced in comparison with those for $A=0$ [compare Fig.~\ref{fig:DREx1}(a2) with Fig.~\ref{fig:DREx1}(a1)], which implies an increase in the degree of population synchronization. Hence, the oscillating amplitudes of $R_w(t)$, $R_s^{(1)}(t)$, and $R_s^{(2)}(t)$ for $A=10$ become larger than those for $A=0$ [compare Figs.~\ref{fig:DREx1}(b2) and \ref{fig:DREx2}(b2) with Figs.~\ref{fig:DREx1}(b1) and \ref{fig:DREx2}(b1)]. Peaks of $\Delta R_w(t)$, $\Delta R_s^{(1)}(t)$, and $\Delta R_s^{(2)}(t)$, associated with external phase-lockings of both stimulated and non-stimulated RS neurons, appear at the driving frequency $f_d$ (=7.6 Hz) and its harmonics, as shown in Figs.~\ref{fig:DREx1}(e2) and \ref{fig:DREx2}(c2). In this way, synchronization enhancement occurs, and $\langle D_f \rangle_r$ increases until $A=10$ [see the inset of Fig.~\ref{fig:DREx1}(f)]. However, for $A > 10$ stimulated RS neurons begin to exhibit burstings, in contrast to spiking of non-stimulated RS neurons.
Then, due to the difference in firing type of individual neurons, it is not easy for the spikings of non-stimulated RS neurons to be well matched with burstings of stimulated RS neurons. With increasing $A$, this type of mismatching begins to be gradually intensified, and the degree of population synchronization decreases. Hence, $\langle D_f \rangle_r$ begins to decrease for $A > 10$, as shown in the inset of Fig.~\ref{fig:DREx1}(f). Eventually, when passing the 1st threshold $A^*_1~(\simeq 83)$, a 2nd stage [$A^*_1 < A < A^*_2(\simeq 1287)$] appears where synchronization suppression with $\langle D_f \rangle_r < 1$ occurs [see Fig.~\ref{fig:DREx1}(f)]. For this case, stimulated RS neurons exhibit burstings which are phase-locked to the external AC stimulus $S(t)$. As shown in Figs.~\ref{fig:DREx1}(c3)-\ref{fig:DREx1}(c5) for the membrane potential $v_5(t)$ of the 5th stimulated RS neuron in stage II, with increasing $A$ the number of spikes in each bursting increases. On the other hand, non-stimulated RS neurons show persistent spikings which are not well matched with burstings of stimulated neurons [e.g., see Figs.~\ref{fig:DREx1}(d3)-\ref{fig:DREx1}(d5) for the membrane potential $v_{20}(t)$ of the 20th non-stimulated RS neuron]. As an example, consider the case of $A=150$. Stimulated RS neurons exhibit burstings, each of which consists of two spikes, as shown in Fig.~\ref{fig:DREx1}(c3). These burstings are synchronized, and hence a pair of vertical trains (composed of synchronized spikes in burstings) appears successively in the raster plot of spikes, as shown in the upper panel of Fig.~\ref{fig:DREx2}(a3). On the other hand, spiking stripes of non-stimulated RS neurons are smeared in a zigzag way between the synchronized vertical bursting trains (i.e., the pacing degree between spikes of non-stimulated RS neurons is reduced) [see the lower panel of Fig.~\ref{fig:DREx2}(a3)].
This zigzag pattern (indicating local clustering of spikes) in the smeared stripes seems to appear because the Watts-Strogatz SWN with $p=0.2$ has a relatively high clustering coefficient (denoting cliquishness of a typical neighborhood in the network) \cite{SWN-Kim}. These zigzag smeared stripes also appear in a nearly regular way with the driving frequency $f_d$, like the case of vertical bursting trains of stimulated RS neurons [see Fig.~\ref{fig:DREx2}(a3)]. Hence, both cases of stimulated and non-stimulated RS neurons are phase-locked to the external AC stimulus, although they are mismatched (i.e., phase-shifted). Peaks of $\Delta R_w(t)$, $\Delta R_s^{(1)}(t)$, and $\Delta R_s^{(2)}(t)$, corresponding to these external phase-lockings, appear at the driving frequency $f_d$ and its harmonics, as shown in Figs.~\ref{fig:DREx1}(e3) and \ref{fig:DREx2}(c3). Phase-shifted mixing of synchronized vertical bursting trains (of stimulated RS neurons) and zigzag smeared spiking stripes (of non-stimulated RS neurons) leads to a decrease in the degree of population synchronization. Consequently, the amplitudes of $R_w(t)$ and $R_s^{(2)}(t)$ for $A=150$ are smaller than those for $A=10$ [compare Fig.~\ref{fig:DREx1}(b3) and Fig.~\ref{fig:DREx2}(b3) with Fig.~\ref{fig:DREx1}(b2) and Fig.~\ref{fig:DREx2}(b2)], and synchronization suppression (with $\langle D_f \rangle_r<1$) occurs [see Fig.~\ref{fig:DREx1}(f)]. As $A$ is further increased, more synchronized vertical bursting trains (phase-locked to the external stimulus) appear successively in the raster plot of spikes, because each bursting of stimulated RS neurons consists of more spikes. Zigzag smearing of spiking stripes of non-stimulated RS neurons becomes intensified (i.e., the pacing degree between spikes of non-stimulated RS neurons becomes worse), although they are phase-locked to the external AC stimulus. These bursting trains and smeared spiking stripes are still phase-shifted.
In this way, with increasing $A$, the degree of population synchronization is decreased mainly due to smearing of spiking stripes, and eventually a minimum ($\simeq 0.7937$) of $\langle D_f \rangle_r$ occurs for $A = A_{min}^{(1)} (\simeq 398)$, as shown in Fig.~\ref{fig:DREx1}(f). An example near this minimum is given for the case of $A=400$. A quadruple of vertical trains (consisting of synchronized spikes in burstings of stimulated RS neurons) and zigzag smeared spiking stripes of non-stimulated RS neurons appear successively in the raster plot of spikes, as shown in Fig.~\ref{fig:DREx2}(a4). Both of them are phase-locked to the external stimulus, but they are more phase-shifted. Peaks of $\Delta R_w(t)$, $\Delta R_s^{(1)}(t)$, and $\Delta R_s^{(2)}(t)$, related to external phase-lockings for both cases of stimulated and non-stimulated RS neurons, also appear at the driving frequency $f_d$ and its harmonics [see Figs.~\ref{fig:DREx1}(e4) and \ref{fig:DREx2}(c4)]. Furthermore, the spiking stripes of non-stimulated RS neurons are much more smeared in a zigzag way when compared with the case of $A=150$ [compare Fig.~\ref{fig:DREx2}(a4) with Fig.~\ref{fig:DREx2}(a3)]. Hence, the amplitudes of $R_w(t)$ and $R_s^{(2)}(t)$ become smaller than those for $A=150$ (i.e., the degree of population synchronization is more reduced) [compare Figs.~\ref{fig:DREx1}(b4) and \ref{fig:DREx2}(b4) with Figs.~\ref{fig:DREx1}(b3) and \ref{fig:DREx2}(b3)]. However, with further increase in $A$ from $A_{min}^{(1)}$, synchronized burstings of stimulated RS neurons are more developed. Moreover, the widths of the zigzag smeared spiking stripes of non-stimulated RS neurons are gradually reduced [i.e., the degree of mismatching (phase shift) between the stimulated and the non-stimulated sub-populations decreases]. A constructive effect of $S(t)$ (resulting from a phase-attractive synaptic excitation) seems to appear effectively.
Consequently, the degree of population synchronization begins to increase (i.e., $\langle D_f \rangle_r$ starts to grow). As an example, we consider the case of $A=800$. Both the bursting bands (composed of spikes in burstings of the stimulated RS neurons) and the spiking stripes of non-stimulated RS neurons, phase-locked to the external AC stimulus, appear successively in the raster plots of spikes, as shown in Fig.~\ref{fig:DREx2}(a5). When compared with the case of $A=400$, the bursting bands are more developed, and the degree of zigzag smearing of spiking stripes is reduced [compare Fig.~\ref{fig:DREx2}(a5) with Fig.~\ref{fig:DREx2}(a4)]. Both the bursting bands and the smeared stripes are phase-locked to the external AC stimulus, and their phase-shift is reduced. Peaks of $\Delta R_w(t)$, $\Delta R_s^{(1)}(t)$, and $\Delta R_s^{(2)}(t)$, related to external phase-lockings for both cases of stimulated and non-stimulated RS neurons, also appear at the driving frequency $f_d$ and its harmonics [see Figs.~\ref{fig:DREx1}(e5) and \ref{fig:DREx2}(c5)]. Hence, the amplitudes of $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ become larger than those for $A=400$ [compare Fig.~\ref{fig:DREx2}(b5) with Fig.~\ref{fig:DREx2}(b4)], which results in an increase in the amplitude of $R_w(t)$ [compare Fig.~\ref{fig:DREx1}(b5) with Fig.~\ref{fig:DREx1}(b4)]. As a result, the degree of population synchronization is larger than that for $A=400$. In this way, with increasing $A$ from $A_{min}^{(1)}$, the dynamical response factor $\langle D_f \rangle_r$ increases. Eventually, when passing the 2nd threshold $A^*_2~(\simeq 1287)$, $\langle D_f \rangle_r$ exceeds unity, and a 3rd stage appears, where synchronization enhancement with $\langle D_f \rangle_r>1$ reappears thanks to a phase-attractive effect of synaptic excitation [see Fig.~\ref{fig:DREx1}(f)]. As examples, we consider the cases of $A=2000$, 5000, and $10^4$.
As $A$ is increased in this 3rd stage, burstings of RS neurons are more developed [e.g., see Figs.~\ref{fig:DREx1}(c6)-\ref{fig:DREx1}(c8)], and non-stimulated RS neurons also begin to fire burstings for sufficiently large $A$ [e.g., see Figs.~\ref{fig:DREx1}(d7)-\ref{fig:DREx1}(d8)]. Then, synchronized bursting bands of stimulated RS neurons are more and more intensified, as shown in Figs.~\ref{fig:DREx2}(a6)-\ref{fig:DREx2}(a8). Moreover, ``firing'' bands, composed of spikings/burstings of non-stimulated RS neurons, become well matched with bursting bands of stimulated RS neurons [see Figs.~\ref{fig:DREx2}(a6)-\ref{fig:DREx2}(a8)]: the matching degree also increases with $A$. Peaks of $\Delta R_w(t)$, $\Delta R_s^{(1)}(t)$, and $\Delta R_s^{(2)}(t)$, related to external phase-lockings for both cases of stimulated and non-stimulated RS neurons, appear at the driving frequency $f_d$ and its harmonics [see Figs.~\ref{fig:DREx1}(e6)-\ref{fig:DREx1}(e8) and Figs.~\ref{fig:DREx2}(c6)-\ref{fig:DREx2}(c8)]. Consequently, with increasing $A$ the amplitudes of both $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ are increased, as shown in Figs.~\ref{fig:DREx2}(b6)-\ref{fig:DREx2}(b8), which also leads to an increase in the amplitude of $R_w(t)$ [see Figs.~\ref{fig:DREx1}(b6)-\ref{fig:DREx1}(b8)]. In this way, $\langle D_f \rangle_r$ increases monotonically with $A$ and synchronization enhancement occurs in the 3rd stage, as shown in Fig.~\ref{fig:DREx1}(f). By varying $A$, we also characterize population synchronization in each of the stimulated and the non-stimulated sub-populations in terms of the average occupation degree $\langle \langle O_i^{(l)} \rangle \rangle_r$, the average pacing degree $\langle \langle P_i^{(l)} \rangle \rangle_r$, and the statistical-mechanical spiking measure $\langle M_s^{(l)} \rangle_r$; $l=1$ and 2 correspond to the stimulated and the non-stimulated cases, respectively.
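The occupation and pacing degrees admit a compact schematic implementation. The sketch below treats the global phase $\Phi_k$ of each spike as given, so the pacing degree of a stripe is just the average of $\cos \Phi_k$ over its spikes; this is a simplification of the exact construction in Appendix \ref{sec:SMSM}, and the stripe data are toy values, not simulation output.

```python
import numpy as np

def spiking_measure(stripe_phases, stripe_counts, N):
    """Schematic statistical-mechanical spiking measure M_s = <O_i P_i>_i:
    O_i is the fraction of neurons that fire in the i-th stripe, and P_i
    averages cos(Phi_k) over the (assumed given) global phases Phi_k of
    the spikes in that stripe."""
    M = []
    for phases, n_firing in zip(stripe_phases, stripe_counts):
        O_i = n_firing / N                 # occupation degree of stripe i
        P_i = np.mean(np.cos(phases))      # pacing degree of stripe i
        M.append(O_i * P_i)
    return float(np.mean(M))

# toy raster: N = 100 neurons, 5 stripes, every neuron fires in every
# stripe (O_i = 1), with phases tightly clustered around Phi = 0
rng = np.random.default_rng(0)
stripes = [rng.normal(0.0, 0.2, 100) for _ in range(5)]
M_s = spiking_measure(stripes, [100] * 5, 100)     # close to 1
```

Full synchronization with well-paced spikes thus gives $M_s$ near 1, while stochastic spike skipping lowers $O_i$ and stripe smearing lowers $P_i$, in line with the behaviors described in the text.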
Plots of $\langle \langle O_i^{(l)} \rangle \rangle_r$, $\langle \langle P_i^{(l)} \rangle \rangle_r$, and $\langle M_s^{(l)} \rangle_r$ versus $A$ are shown in Figs.~\ref{fig:DREx2}(d1)-\ref{fig:DREx2}(d3), respectively. As $A$ is increased, external phase lockings of spikings or burstings of stimulated RS neurons are more and more enhanced, as shown in Figs.~\ref{fig:DREx2}(a1)-\ref{fig:DREx2}(a8). Hence, the stimulated RS neurons exhibit full synchronization with $\langle \langle O_i^{(1)} \rangle \rangle_r =1,$ independently of $A$, because every stimulated RS neuron fires in each spiking stripe or bursting band [corresponding to each global cycle of $R_s^{(1)}(t)$]. These fully synchronized spikes also show a high average pacing degree $\langle \langle P_i^{(1)} \rangle \rangle_r$. For $0 < A < 10,$ $\langle \langle P_i^{(1)} \rangle \rangle_r$ increases monotonically from 0.967 to 0.998 because smearing of spiking stripes (i.e., the width of spiking stripes) becomes reduced [see the left inset of Fig.~\ref{fig:DREx2}(d2)]. For $A>10$, bursting bands appear; at first their widths increase, but eventually they saturate for large $A$ [see Figs.~\ref{fig:DREx2}(a3)-\ref{fig:DREx2}(a8)]. Hence, for $A>10$, $\langle \langle P_i^{(1)} \rangle \rangle_r$ begins to decrease, but it seems to approach a limit value ($\simeq 0.82$). Consequently, the average spiking measure $\langle M_s^{(1)} \rangle_r$ (given by taking into consideration both the occupation and the pacing degrees) exhibits the same behavior with $A$ as $\langle \langle P_i^{(1)} \rangle \rangle_r$ because $\langle \langle O_i^{(1)} \rangle \rangle_r =1.$ We next consider the non-stimulated case. Non-stimulated RS neurons also exhibit full synchronization with $\langle \langle O_i^{(2)} \rangle \rangle_r =1,$ independently of $A$. However, the average pacing degree $\langle \langle P_i^{(2)} \rangle \rangle_r$ varies with $A$, differently from the stimulated case.
For $0 < A <10$, spikings of non-stimulated RS neurons are well matched with those of stimulated RS neurons thanks to the phase-attractive effect of synaptic excitation, and hence the average pacing degree $\langle \langle P_i^{(2)} \rangle \rangle_r $ increases monotonically from $0.973$ to $0.989$ [see the right inset of Fig.~\ref{fig:DREx2}(d2)]. However, for $A>10$ it is not easy for spikings of non-stimulated neurons to be well matched with burstings of stimulated neurons because of the different firing types. Hence, zigzag smearing occurs in the spiking stripes of non-stimulated neurons, and it is enhanced with $A$. Due to such developed zigzag smearing, $\langle \langle P_i^{(2)} \rangle \rangle_r$ decreases with $A$, and it arrives at its minimum ($\simeq 0.483$) for $A \simeq 391$. As $A$ is further increased from the minimum point, zigzag smearing begins to be gradually reduced thanks to a constructive effect of $S(t)$ (coming from the phase-attractive synaptic excitation). As a result, $\langle \langle P_i^{(2)} \rangle \rangle_r$ starts to increase, and its value becomes large for large $A$ (e.g., $\langle \langle P_i^{(2)} \rangle \rangle_r \simeq 0.65$ for $A=10^4$). The average spiking measure $\langle M_s^{(2)} \rangle_r$ also shows the same behavior with $A$ as $\langle \langle P_i^{(2)} \rangle \rangle_r$ because $\langle \langle O_i^{(2)} \rangle \rangle_r =1.$ Finally, to examine the matching degree between the stimulated and the non-stimulated sub-populations, we obtain the cross-correlation functions $\langle C_{12}(\tau) \rangle_r$ between $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ of the two sub-populations, which are shown for various values of $A$ in Figs.~\ref{fig:DREx2}(e1)-\ref{fig:DREx2}(e8). A plot of the cross-correlation measure $\langle M_c \rangle_r$ [given by $C_{12}(0)$] versus $A$ is also shown in Fig.~\ref{fig:DREx2}(f).
Perfect cross-correlation with $\langle M_c \rangle_r = 1$ occurs in the range of $0<A<10$ where $\langle D_f \rangle_r$ increases monotonically from 1 to its maximum ($\simeq 1.095$) at $A=10$ [see the inset in Fig.~\ref{fig:DREx1}(f)]. In the remaining region ($10<A<A^*_1$) of the 1st stage, $\langle M_c \rangle_r$ decreases slowly, but it still indicates strong cross-correlation with $\langle M_c \rangle_r > 0.97$. This type of perfect/strong cross-correlation induces a phase-attractive effect between the stimulated and the non-stimulated sub-populations, and hence synchronization enhancement occurs in stage I. However, in the first part of the 2nd stage $\langle M_c \rangle_r$ decreases very rapidly to its minimum for $A \simeq 403$ (which is nearly the same as $A_{min}^{(1)} (\simeq 398)$ for the minimum of $\langle D_f \rangle_r$), mainly because of the different firing type of the stimulated RS neurons (bursting) and the non-stimulated RS neurons (spiking). Due to this sudden decrease in the cross-correlation, $\langle D_f \rangle_r$ also decreases from 1, and synchronization suppression occurs. After passing the minimum point ($A \simeq 403$), $\langle M_c \rangle_r$ begins to increase gradually with $A$, thanks to a phase-attractive effect of the excitatory coupling. Consequently, in the latter part of the 2nd stage (with $\langle D_f \rangle_r <1)$ $\langle D_f \rangle_r$ increases monotonically with $A$, and eventually when passing the 2nd threshold $A^*_2(\simeq 1287)$ $\langle D_f \rangle_r$ exceeds unity. Thus, the 3rd stage appears, and synchronization enhancement reoccurs. \subsection {Dynamical Responses of Inhibitory Synchronization to an External Time-Periodic Stimulus} \label{subsec:DRIn} We consider an inhibitory Watts-Strogatz SWN composed of $N (=10^3)$ Izhikevich FS interneurons.
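For reference, a single Izhikevich neuron can be integrated in a few lines. The sketch below uses the textbook two-variable (2003) form with the standard fast-spiking parameter set $(a,b,c,d)=(0.1,0.2,-65,2)$, which need not coincide with the parameterization adopted in this work; it merely illustrates quiescence for weak DC input and repetitive firing for strong input.

```python
def izhikevich_rate(I_dc, a=0.1, b=0.2, c=-65.0, d=2.0, T=1000.0, dt=0.05):
    """Mean firing rate (Hz) of the standard Izhikevich model
         v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with the reset v <- c, u <- u + d whenever v reaches 30 mV.
    Euler integration over T ms with step dt ms."""
    v, u = c, b * c                    # start at the reset potential
    n_spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_dc)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: apply the after-spike reset
            v, u = c, u + d
            n_spikes += 1
    return n_spikes * 1000.0 / T       # spikes per second

f_sub = izhikevich_rate(0.0)           # subthreshold drive: no firing
f_supra = izhikevich_rate(10.0)        # suprathreshold drive: tonic firing
```

With zero drive the trajectory settles onto the stable resting state near $v \simeq -70$ mV, while a sufficiently strong DC input removes the fixed points and forces repetitive firing, which is the behavior behind the $f$-$I$ curve discussed next.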
Figure \ref{fig:FS}(a) shows a plot of the firing frequency $f$ versus the external DC current $I_{DC}$ for a single Izhikevich FS interneuron in the absence of noise ($D=0$). The Izhikevich FS interneuron exhibits a jump from a resting state to a spiking state via a subcritical Hopf bifurcation at a higher threshold $I_{DC,h} (\simeq 73.7)$ by absorbing an unstable limit cycle born through a fold limit cycle bifurcation at a lower threshold $I_{DC,l} (\simeq 72.8)$. Hence, the Izhikevich FS interneuron exhibits type-II excitability because it begins to fire with a non-zero frequency \cite{Izhi3,Ex1,Ex2}. As $I_{DC}$ is increased from $I_{DC,h}$, the firing frequency $f$ increases monotonically. Here, we consider a suprathreshold case of $I_{DC}=1500$ in the presence of noise with $D=50$, for which a time series of the membrane potential $v$ with an oscillating frequency $f \simeq 635$ Hz is shown in Fig.~\ref{fig:FS}(b). We consider two coupling cases of $J=100$ and 1000 to study the effect of the coupling strength $J$ on the dynamical responses. Full synchronization for $J=100$ is well shown in the raster plot of spikes in Fig.~\ref{fig:FS}(c1). For this case, the IWPSR kernel estimate $R_w(t)$ exhibits a regular oscillation with a fast population frequency $f_p~(\simeq 200$ Hz) [see the peak in the power spectrum of $\Delta R_w(t)$ in Fig.~\ref{fig:FS}(d)]. The ISI histogram for individual interneurons is also shown in Fig.~\ref{fig:FS}(e). The ensemble-averaged ISI $\langle ISI \rangle$ is 5.0 ms, and hence the ensemble-averaged MFR $\langle f_i \rangle$ of individual interneurons (corresponding to the reciprocal of $\langle ISI \rangle$) is 200 Hz, which is the same as $f_p$. For a strong-coupling case of $J=1000$, the raster plot of spikes and the IWPSR kernel estimate $R_w(t)$ in Figs.~\ref{fig:FS}(f1) and \ref{fig:FS}(f2) show full synchronization well.
The population frequency $f_p$ of $R_w(t)$ is 76 Hz [see the peak in the power spectrum of $\Delta R_w(t)$ in Fig.~\ref{fig:FS}(g)], which is smaller than that for $J=100$ because of strong inhibition. The ensemble-averaged ISI $\langle ISI \rangle$ in Fig.~\ref{fig:FS}(h) is 13.1 ms, which is longer than that for $J=100$. Hence, the ensemble-averaged MFR $\langle f_i \rangle$ of individual interneurons is 76 Hz, which is also the same as $f_p$. \subsubsection{Small-Coupling Case of $J=100$} \label{subsubsec:SJ} We first consider the case of $J=100$. We apply an external time-periodic AC stimulus $S(t)$ to $N_s(=50)$ randomly-selected Izhikevich FS interneurons by fixing the driving angular frequency as $\omega_d (=2 \pi f_d)$ =1.26 rad/ms ($f_d = \langle f_i \rangle$ =200 Hz), and investigate the dynamical response of inhibitory full synchronization by varying the driving amplitude $A$. Figures \ref{fig:DRIh1}(a1)-\ref{fig:DRIh1}(a8) show raster plots of spikes for various values of $A$. Population synchronization may be well seen in these raster plots of spikes. The IWPSR kernel estimates $R_w(t)$, exhibiting oscillatory behaviors, are shown in Figs.~\ref{fig:DRIh1}(b1)-\ref{fig:DRIh1}(b8), and the power spectra of $\Delta R_w(t)$ are also given in Figs.~\ref{fig:DRIh1}(f1)-\ref{fig:DRIh1}(f8). In addition, time series of membrane potentials of individual FS interneurons are given for various values of $A$. The time series of $v_5(t)$ of the 5th stimulated FS interneuron are shown in Figs.~\ref{fig:DRIh1}(c1)-\ref{fig:DRIh1}(c8). For the non-stimulated case, there are two types of FS interneurons, depending on their synaptic connections. Many non-stimulated FS interneurons (i.e., major non-stimulated FS interneurons) which have synaptic connections with fast-firing stimulated FS interneurons fire slowly due to increased inhibition.
On the other hand, a small number of non-stimulated FS interneurons (i.e., minor non-stimulated FS interneurons) which have no direct synaptic connections with stimulated FS interneurons receive synaptic inputs from major slowly-firing non-stimulated FS interneurons, and hence the MFRs of minor non-stimulated FS interneurons become high due to decreased inhibition. Figures \ref{fig:DRIh1}(d1)-\ref{fig:DRIh1}(d8) show the time series of $v_{20}(t)$ of the 20th major slowly-firing non-stimulated FS interneuron, while Figs.~\ref{fig:DRIh1}(e1)-\ref{fig:DRIh1}(e8) show the time series of $v_{115}(t)$ of the 115th minor fast-firing non-stimulated FS interneuron. A plot of the dynamical response factor $\langle D_f \rangle_r$ versus $A$ is given in Fig.~\ref{fig:DRIh1}(g). Two stages are found to appear. Synchronization suppression ($\langle D_f \rangle_r <1$) and synchronization enhancement ($\langle D_f \rangle_r >1$) occur in the 1st (I) stage ($0< A < A^*_3$) and the 2nd (II) stage ($A > A^*_3$), respectively, where $A^*_3 \simeq 49699$. Examples are given for various values of $A$: 1st stage ($A=1000$, 3000, 5000, 8000, $10^4$, and $3 \times 10^4$) and 2nd stage ($A=6 \times 10^4$). As in the above excitatory case, we make a more detailed analysis of dynamical responses by decomposing the whole population of FS interneurons into two sub-populations of the stimulated and the non-stimulated FS interneurons. Dynamical responses in these two sub-populations are shown well in Fig.~\ref{fig:DRIh2}.
Raster plots of spikes, ISPSR kernel estimates $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ [the superscript 1 (2) corresponds to the stimulated (non-stimulated) case], and power spectra of $\Delta R_s^{(1)}(t)$ and $\Delta R_s^{(2)}(t)$ in the stimulated and the non-stimulated sub-populations are shown in Figs.~\ref{fig:DRIh2}(a1)-\ref{fig:DRIh2}(a8), Figs.~\ref{fig:DRIh2}(b1)-\ref{fig:DRIh2}(b8), and Figs.~\ref{fig:DRIh2}(c1)-\ref{fig:DRIh2}(c8), respectively: the upper (lower) panels in these figures represent those for the stimulated (non-stimulated) case. For characterization of population synchronization in each of the stimulated and the non-stimulated sub-populations, the average occupation degree $\langle \langle O_i^{(l)} \rangle \rangle_r$, the average pacing degree $\langle \langle P_i^{(l)} \rangle \rangle_r$, and the average statistical-mechanical spiking measure $\langle M_s^{(l)} \rangle_r$ are given in Figs.~\ref{fig:DRIh2}(d1)-\ref{fig:DRIh2}(d3), respectively; $l=1$ (2) represents the stimulated (non-stimulated) case. The cross-correlation functions $\langle C_{12}(\tau) \rangle_r$ between $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ of the two sub-populations are also shown for various values of $A$ in Figs.~\ref{fig:DRIh2}(e1)-\ref{fig:DRIh2}(e8). Figure \ref{fig:DRIh2}(f) shows a plot of the cross-correlation measure $\langle M_c \rangle_r$ [given by $C_{12}(0)$] versus $A$. As $A$ is increased from 0 and passes a threshold, stimulated FS interneurons begin to exhibit burstings, as shown in Figs.~\ref{fig:DRIh1}(c2)-\ref{fig:DRIh1}(c8), and the number of spikes in each bursting increases with $A$. These burstings are phase-locked to the external stimulus $S(t)$, and these phase lockings are intensified with increasing $A$ [see Figs.~\ref{fig:DRIh2}(a2)-\ref{fig:DRIh2}(a8)]. Consequently, as $A$ is increased, the amplitude of $R_s^{(1)}(t)$ also increases, as shown in Figs.~\ref{fig:DRIh2}(b2)-\ref{fig:DRIh2}(b8).
Peaks in the power spectrum of $\Delta R_s^{(1)}(t)$, associated with the external phase lockings, appear at the driving frequency $f_d$ (=200 Hz) and its harmonics [see the upper panels of Figs.~\ref{fig:DRIh2}(c2)-\ref{fig:DRIh2}(c8)]. These external phase lockings of stimulated FS interneurons are similar to those for the case of excitatory coupling. However, the external stimulus $S(t)$ has a destructive effect on the sub-population of non-stimulated FS interneurons, in contrast to the excitatory case (where a constructive effect of $S(t)$, resulting from the phase-attractive synaptic excitation, leads to external phase lockings of non-stimulated RS neurons). In the presence of burstings of stimulated FS interneurons, spikings of non-stimulated FS interneurons cannot be well matched with burstings of stimulated FS interneurons, because of the difference in firing type of individual neurons [e.g., see Fig.~\ref{fig:DRIh2}(a2) for $A=1000$]. However, these spiking stripes of non-stimulated FS interneurons are also phase-locked to the external stimulus, although they are phase-shifted from the vertical bursting trains of the stimulated FS interneurons. Peaks in the power spectrum of $\Delta R_s^{(2)}(t)$, related to the external phase lockings, appear at the driving frequency $f_d$ (=200 Hz) and its harmonics [see the lower panel of Fig.~\ref{fig:DRIh2}(c2)]. As $A$ is further increased, a destructive effect of $S(t)$, resulting from repulsive synaptic inhibition, becomes intensified. Hence, a zigzag smearing pattern appears in their spiking stripes, as shown in Fig.~\ref{fig:DRIh2}(a3) for $A=3000$. As explained in the excitatory case, such a zigzag pattern in the smeared stripes seems to appear because the Watts-Strogatz SWN with $p=0.2$ has a relatively high clustering coefficient \cite{SWN-Kim}.
Furthermore, major non-stimulated FS interneurons begin to exhibit intermittent and stochastic spikings (i.e., stochastic spike skipping) \cite{GR,Longtin1,Longtin2}. Due to the stochastic spike skipping, the original full synchronization (where all the non-stimulated FS interneurons fire spikings in each spiking stripe) in the non-stimulated sub-population begins to break up, and a sparse synchronization (where only some fraction of non-stimulated FS interneurons fire spikings in each spiking stripe) starts to appear (i.e., sparse spiking stripes begin to appear) \cite{Sparse,SWN-Kim}. [However, the degree of sparseness for $A=3000$ is relatively low, and hence no skippings are found in $v_{20}(t)$ of the 20th major non-stimulated FS interneuron for a short time interval of 20 ms in Fig.~\ref{fig:DRIh1}(d3).] In this way, with increasing $A$ the mismatching degree between the stimulated and the non-stimulated sub-populations is increased, although both the bursting bands of stimulated FS interneurons and the zigzag smeared sparse spiking stripes of non-stimulated FS interneurons are phase-locked to the external stimulus. Due to increased zigzag smearing, peaks at the driving frequency $f_d$ and its harmonics for $A=3000$ become broader than those for $A=1000$ [compare Fig.~\ref{fig:DRIh2}(c3) with Fig.~\ref{fig:DRIh2}(c2)]. The effect of zigzag smearing and stochastic spike skipping in the non-stimulated sub-population is more dominant when compared with the enhanced external phase lockings in the stimulated sub-population. Hence, the overall degree of population synchronization in the whole population becomes worse. As a result, for the case of $A=3000$, the amplitudes of $R_s^{(2)}(t)$ and $R_w(t)$ are smaller than those for $A=1000$ [see Figs.~\ref{fig:DRIh2}(b3) and \ref{fig:DRIh1}(b3)], and $D_f$ decreases rapidly, as shown in Fig.~\ref{fig:DRIh1}(g).
With further increase in $A$, this tendency of zigzag smearing and stochastic spike skipping in the non-stimulated sub-population is intensified, and eventually $D_f$ arrives at its minimum ($\simeq 0.548$) for $A=A_{min}^{(2)} (\simeq 4876)$. As an example near this minimum, we consider the case of $A=5000$. For this case, stochastic spike skipping is more intensified [see Fig.~\ref{fig:DRIh1}(d4)], and hence the original full synchronization in the non-stimulated sub-population becomes broken up (i.e., sparse stripes in the raster plot of spikes appear). Particularly, such sparse spiking stripes of non-stimulated FS interneurons are smeared in a zigzag way much more than those for the case of $A=3000$ [compare Fig.~\ref{fig:DRIh2}(a4) with Fig.~\ref{fig:DRIh2}(a3)]. Consequently, the amplitudes of $R_s^{(2)}(t)$ and $R_w(t)$ are much smaller than those for $A=3000$ [see Figs.~\ref{fig:DRIh2}(b4) and \ref{fig:DRIh1}(b4)] (i.e., the degree of population synchronization is reduced more significantly when compared with that for $A=3000$). As shown in the lower panel of Fig.~\ref{fig:DRIh2}(c4), peaks at the driving frequency $f_d$ and its harmonics also begin to be ``disrupted'' [i.e., their heights become smaller, and near $f_d$ new tiny peaks (of frequencies 164 and 183 Hz) appear]. However, with further increase in $A$ from $A_{min}^{(2)}$, non-stimulated FS interneurons begin to reorganize their spikings and exhibit a new type of sparse synchronization with the sub-population frequency $f_{sp}^{(2)} (\simeq 143$ Hz), along with enhanced external phase lockings of burstings of stimulated FS interneurons with the sub-population frequency $f_{sp}^{(1)} (\simeq 200$ Hz) [e.g., see the raster plots of spikes in Fig.~\ref{fig:DRIh2}(a5), the ISPSR kernel estimates $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ in Fig.~\ref{fig:DRIh2}(b5), and the power spectra in Fig.~\ref{fig:DRIh2}(c5) for $A=8000$]. 
A new peak, associated with sparse synchronization of non-stimulated FS interneurons, appears at $f \simeq 143$ Hz, as shown in the lower panel of Fig.~\ref{fig:DRIh2}(c5). (For $A=8000$, the peak at the driving frequency $f_d$ also coexists, but eventually it disappears for larger $A$ [see Figs.~\ref{fig:DRIh2}(c6)-\ref{fig:DRIh2}(c8)]). This ``sparse-synchronization'' peak of 143 Hz comes from evolution of the (above) tiny peak of 164 Hz for $A=5000$. With increasing $A$ the frequency of the tiny peak at 164 Hz for $A=5000$ becomes smaller, and for $A=8000$ the peak becomes broad and its frequency becomes 143 Hz. (On the other hand, as $A$ is increased the height of another peak of 183 Hz for $A=5000$ becomes smaller and it disappears.) Thanks to increase in the degree of synchronization in both the stimulated and the non-stimulated sub-populations, the amplitudes of both $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ become larger than those for $A=5000$ [compare Fig.~\ref{fig:DRIh2}(b5) with Fig.~\ref{fig:DRIh2}(b4)], which leads to the increase of the amplitude of $R_w(t)$ [see Fig.~\ref{fig:DRIh1}(b5)]. As a result, $D_f$ is increased, as shown in Fig.~\ref{fig:DRIh1}(g). As $A$ is further increased, external phase lockings of burstings with $f_{sp}^{(1)} \simeq 200$ Hz in the stimulated sub-population are more and more enhanced due to increased stimulation, while the degree of sparse synchronization in the non-stimulated sub-population becomes worse due to stochastic spike skipping of major non-stimulated FS interneurons and smearing of sparse stripes, as shown in the raster plots, the ISPSR kernel estimates $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$, and the power spectra for $A=10^4$, $3 \times 10^4,$ and $6 \times 10^4$ [see Figs.~\ref{fig:DRIh2}(a6)-\ref{fig:DRIh2}(a8), Figs.~\ref{fig:DRIh2}(b6)-\ref{fig:DRIh2}(b8), and Figs.~\ref{fig:DRIh2}(c6)-\ref{fig:DRIh2}(c8)]; $f_{sp}^{(2)} \simeq$ 145, 146, and 146 Hz for $A=10^4$, $3 \times 10^4,$ and $6 \times 10^4$, respectively. 
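The spectral peaks discussed above can be reproduced, in spirit, by forming a Gaussian-kernel estimate of the population spike rate and taking the power spectrum of its fluctuation $\Delta R(t) = R(t) - \overline{R(t)}$. The sketch below is a simplified stand-in for the ISPSR/IWPSR kernel estimates; the kernel bandwidth $h$, the time grid, and the toy spike data (ten neurons perfectly locked to a 76 Hz drive) are illustrative assumptions, not the values used in this work:

```python
import numpy as np

def kernel_rate(spike_times_per_neuron, t, h=1.0):
    """Population spike-rate kernel estimate R(t): the average over
    neurons of Gaussian kernels (bandwidth h) centred on spike times."""
    R = np.zeros_like(t)
    for spikes in spike_times_per_neuron:
        for s in spikes:
            R += np.exp(-0.5 * ((t - s) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return R / len(spike_times_per_neuron)

def dominant_frequency(R, dt):
    """Frequency (cycles per time unit) of the largest peak in the
    power spectrum of Delta R(t) = R(t) - <R(t)>."""
    dR = R - R.mean()
    power = np.abs(np.fft.rfft(dR)) ** 2
    freqs = np.fft.rfftfreq(len(dR), d=dt)
    return freqs[1:][np.argmax(power[1:])]  # skip the zero-frequency bin

# Toy data: 10 neurons all locked to a 76 Hz drive (period 1000/76 ms);
# the dominant spectral peak of Delta R(t) then sits at the drive frequency.
T_d = 1000.0 / 76.0
spikes = [np.arange(0.0, 1000.0, T_d) for _ in range(10)]
t = np.arange(0.0, 1000.0, 0.5)
R = kernel_rate(spikes, t, h=1.0)
print(dominant_frequency(R, dt=0.5) * 1000.0)  # ~76 Hz
```

Harmonics of the drive appear as weaker peaks at integer multiples of the fundamental, as in the power spectra of the figures.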
Thanks to the dominance of external phase lockings in the stimulated sub-population, the overall degree of population synchronization in the whole population becomes better [i.e., the amplitudes of $R_w(t)$ increase, as shown in Figs.~\ref{fig:DRIh1}(b6)-\ref{fig:DRIh1}(b8)], and hence $D_f$ increases monotonically with $A$. Eventually when passing a threshold of $A^*_3 (\simeq 49699)$, $D_f$ becomes larger than 1, and then the 2nd stage appears where synchronization enhancement occurs, as shown in Fig.~\ref{fig:DRIh1}(g). We also characterize the population synchronization in each of the stimulated and the non-stimulated sub-populations by employing the average occupation degree $\langle \langle O_i^{(l)} \rangle \rangle_r$, the average pacing degree $\langle \langle P_i^{(l)} \rangle \rangle_r$, and the average statistical-mechanical spiking measure $\langle M_s^{(l)} \rangle_r$; $l=1$ (2) represents the stimulated (non-stimulated) case. Plots of $\langle \langle O_i^{(l)} \rangle \rangle_r$, $\langle \langle P_i^{(l)} \rangle \rangle_r$, and $\langle M_s^{(l)} \rangle_r$ versus $A$ are given in Figs.~\ref{fig:DRIh2}(d1)-\ref{fig:DRIh2}(d3), respectively. The average occupation degree $\langle \langle O_i^{(1)} \rangle \rangle_r$ is 1 (i.e., full synchronization occurs), independently of $A$, because every stimulated FS interneuron fires in each spiking stripe or bursting band. This inhibitory full synchronization in the stimulated sub-population also exhibits high pacing degree $\langle \langle P_i^{(1)} \rangle \rangle_r$, similar to the excitatory case. As $A$ is increased from 0, $\langle \langle P_i^{(1)} \rangle \rangle_r$ begins to decrease, and arrives at a minimum ($\simeq 0.737$) for $A \simeq 1320$. Near the minimum point, stimulated interneurons show mixed burstings and spikings, as shown in Fig.~\ref{fig:DRIh1}(a2) for $A=1000$ where each bursting consists of two spikes whose separation is wide. 
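The occupation degree, pacing degree, and statistical-mechanical spiking measure employed above can be illustrated with a simplified per-stripe computation. In this sketch the stripe centers and half-width are supplied directly, and the global phase is taken to ramp linearly across each stripe; in the actual analysis the stripes and the phase are obtained from the kernel rate estimate, so everything below is an illustrative assumption rather than the exact procedure:

```python
import numpy as np

def stripe_measures(spike_times_per_neuron, stripe_centers, half_width):
    """Per-stripe occupation degree O_i, pacing degree P_i, and
    spiking measure M_i = O_i * P_i.

    O_i: fraction of neurons firing in the i-th stripe.
    P_i: average of cos(Phi) over all spikes in the stripe, with the
         global phase Phi ramping linearly through 0 at the stripe
         center (perfect pacing at the center gives P_i = 1).
    """
    N = len(spike_times_per_neuron)
    O, P = [], []
    for c in stripe_centers:
        firing, cosines = 0, []
        for spikes in spike_times_per_neuron:
            in_stripe = [s for s in spikes if abs(s - c) <= half_width]
            if in_stripe:
                firing += 1
                cosines += [np.cos(np.pi * (s - c) / half_width)
                            for s in in_stripe]
        O.append(firing / N)
        P.append(np.mean(cosines) if cosines else 0.0)
    O, P = np.array(O), np.array(P)
    return O, P, O * P

# 4 neurons, 2 stripes: all fire at the stripe centers, but the 4th
# neuron skips the second stripe, so O drops from 1 to 0.75 there.
spikes = [[10.0, 20.0], [10.0, 20.0], [10.0, 20.0], [10.0]]
O, P, M = stripe_measures(spikes, [10.0, 20.0], 2.0)
print(O, P, M)
```

Full synchronization corresponds to $O_i = 1$ in every stripe (so $M_s = P$, as in the stimulated sub-population), while sparse synchronization gives $O_i < 1$ and hence $M_s < P$.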
However, with further increase in $A$ external phase lockings of burstings of stimulated FS interneurons are more and more developed [see the developed bursting bands in Figs.~\ref{fig:DRIh2}(a3)-\ref{fig:DRIh2}(a8)]. Consequently, $\langle \langle P_i^{(1)} \rangle \rangle_r$ begins to increase, and it approaches a limit value ($ \simeq 0.82$). For this type of full synchronization (i.e., $\langle \langle O_i^{(1)} \rangle \rangle_r=1$), the average spiking measure $\langle M_s^{(1)} \rangle_r$ is the same as $\langle \langle P_i^{(1)} \rangle \rangle_r$. Unlike the stimulated case, when passing a threshold ($A \simeq 55$) (major) non-stimulated FS interneurons begin to exhibit stochastic spike skipping (i.e., intermittent and irregular spikings) due to a destructive effect of $S(t)$ (resulting from strong synaptic inhibition). Hence, $\langle \langle O_i^{(2)} \rangle \rangle_r$ varies depending on $A$ in the non-stimulated sub-population. Below the threshold, $\langle \langle O_i^{(2)} \rangle \rangle_r =1$ (i.e., full synchronization takes place). However, above the threshold, sparse synchronization with $\langle \langle O_i^{(2)} \rangle \rangle_r <1$ occurs [i.e., sparse stripes appear in the raster plots of spikes, as shown in Figs.~\ref{fig:DRIh2}(a2)-\ref{fig:DRIh2}(a8)]. With increasing $A$ from the threshold, $\langle \langle O_i^{(2)} \rangle \rangle_r$ decreases monotonically, and its value becomes very low ($\simeq 0.23$) for large $A$, as shown in the lower panel of Fig.~\ref{fig:DRIh2}(d1), which is in contrast to the excitatory case of full synchronization [see Fig.~\ref{fig:DREx2}(d1)]. As $A$ is increased from the threshold, zigzag smearing in the spiking stripes is more enhanced [see Figs.~\ref{fig:DRIh2}(a3)-\ref{fig:DRIh2}(a4)]. As a result, $\langle \langle P_i^{(2)} \rangle \rangle_r$ decreases rapidly, and it arrives at a minimum ($\simeq 0.265$) for $A \simeq 4981$, as shown in Fig.~\ref{fig:DRIh2}(d2).
With increase in $A$ from the minimum point, such zigzag smearing begins to be reduced, and non-stimulated FS interneurons reorganize their spikings to exhibit a new type of sparse synchronization [compare Fig.~\ref{fig:DRIh2}(a5) with Fig.~\ref{fig:DRIh2}(a4)]. Then, $\langle \langle P_i^{(2)} \rangle \rangle_r$ increases a little, as shown in Fig.~\ref{fig:DRIh2}(d2). However, as $A$ is further increased, sparse spiking stripes become more smeared [see Figs.~\ref{fig:DRIh2}(a6)-\ref{fig:DRIh2}(a8)], and hence $\langle \langle P_i^{(2)} \rangle \rangle_r$ decreases again; $\langle \langle P_i^{(2)} \rangle \rangle_r \simeq 0.26$ for large $A$. For this case of sparse synchronization, the average spiking measure $\langle M_s^{(2)} \rangle_r$ is less than $\langle \langle P_i^{(2)} \rangle \rangle_r$ because $\langle \langle O_i^{(2)} \rangle \rangle_r < 1$, unlike the full synchronization which occurs in the stimulated case and in the excitatory case. For examination of the matching degree between the stimulated and the non-stimulated sub-populations, we obtain the cross-correlation functions $\langle C_{12}(\tau) \rangle_r$ between $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ of the two sub-populations, which are shown for various values of $A$ in Figs.~\ref{fig:DRIh2}(e1)-\ref{fig:DRIh2}(e8). A plot of the cross-correlation measure $\langle M_c \rangle_r$ of Eq.~(\ref{eq:CM}) versus $A$ is also given in Fig.~\ref{fig:DRIh2}(f). Unlike the excitatory case, as $A$ is increased from 0, $\langle M_c \rangle_r$ decreases monotonically to its minimum ($\simeq -0.242$) for $A \simeq 4890$ (which is nearly the same as $A_{min}^{(2)} (\simeq 4876)$ for the minimum of $\langle D_f \rangle_r$) due to a destructive effect of external stimulus $S(t)$ (causing the zigzag smearing and the stochastic spike skipping in the non-stimulated sub-population).
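The cross-correlation function and its zero-lag value $M_c = C_{12}(0)$ can be sketched as a normalized correlation between the two sub-population rate estimates. The version below assumes the standard normalization by the rms fluctuations of the two signals; the toy sinusoidal "rate" signals are illustrative stand-ins for $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$:

```python
import numpy as np

def cross_correlation(R1, R2, max_lag):
    """Normalized cross-correlation C_12(tau) between two sub-population
    rate signals; the zero-lag value C_12(0) is the matching degree M_c."""
    d1, d2 = R1 - R1.mean(), R2 - R2.mean()
    norm = np.sqrt((d1 ** 2).mean() * (d2 ** 2).mean())
    lags = np.arange(-max_lag, max_lag + 1)
    C = np.empty(len(lags))
    for k, lag in enumerate(lags):
        if lag >= 0:
            C[k] = (d1[lag:] * d2[:len(d2) - lag]).mean() / norm
        else:
            C[k] = (d1[:lag] * d2[-lag:]).mean() / norm
    return lags, C

# In-phase oscillations give M_c = C_12(0) close to +1;
# anti-phase oscillations give M_c close to -1.
t = np.linspace(0.0, 100.0, 2001)
R_in = np.sin(2.0 * np.pi * 0.2 * t)
R_anti = -R_in
lags, C = cross_correlation(R_in, R_in.copy(), max_lag=50)
print(C[50])   # zero-lag entry (index max_lag): +1 for identical signals
lags, C2 = cross_correlation(R_in, R_anti, max_lag=50)
print(C2[50])  # -1 for anti-phase signals
```

Negative $M_c$ at intermediate $A$, as found above, thus signals that the two sub-population rates oscillate out of phase rather than merely being uncorrelated.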
Because of the monotonic decrease in $\langle M_c \rangle_r$, $\langle D_f \rangle_r$ also decreases from 1, and synchronization suppression occurs. After passing the minimum point ($A \simeq 4890$), $\langle M_c \rangle_r$ begins to increase slowly with $A$, but eventually it approaches 0 (without further increase), in contrast to the excitatory case (where $\langle M_c \rangle_r$ continues to increase monotonically without saturation) [compare Fig.~\ref{fig:DRIh2}(f) with Fig.~\ref{fig:DREx2}(f)]. We also note that the oscillating amplitudes of $\langle C_{12}(\tau) \rangle_r$ decrease with $A$, as shown in Figs.~\ref{fig:DRIh2}(e5)-\ref{fig:DRIh2}(e8), unlike the excitatory case where the oscillating amplitudes of $\langle C_{12}(\tau) \rangle_r$ increase with $A$ [see Figs.~\ref{fig:DREx2}(e5)-\ref{fig:DREx2}(e8)]. This weak cross-correlation between the stimulated and the non-stimulated sub-populations occurs due to completely different types of population behaviors in the two sub-populations: non-stimulated FS interneurons exhibit sparse synchronization of low degree (without any external phase lockings), while stimulated FS interneurons show external phase lockings of burstings. Due to the stronger stimulation effect, external phase lockings of stimulated FS interneurons are more and more intensified, and they become dominant. As a result, with increasing $A$ the overall degree of population synchronization in the whole population becomes better. Hence, both the amplitude of $R_w(t)$ and $\langle D_f \rangle_r$ increase monotonically with $A$ (without saturation) [see Figs.~\ref{fig:DRIh1}(b5)-\ref{fig:DRIh1}(b8) and Fig.~\ref{fig:DRIh1}(g)], in spite of weak cross-correlations between the two sub-populations. However, the increasing rate for $\langle D_f \rangle_r$ is much slower when compared with that for the excitatory case where the increase in $\langle D_f \rangle_r$ results from cooperation of the two sub-populations with strong cross-correlations.
\subsubsection{Large-Coupling Case of $J=1000$} \label{subsubsec:LJ} We now consider a large-coupling case of $J=1000$ for comparison with the small-coupling case of $J=100$. We apply an external time-periodic AC stimulus $S(t)$ to $N_s(=50)$ randomly-selected Izhikevich FS interneurons by fixing the driving angular frequency as $\omega_d (=2 \pi f_d)$ =0.48 rad/ms ($f_d = \langle f_i \rangle$ =76 Hz), and study the dynamical response of inhibitory full synchronization by varying the driving amplitude $A$. Population synchronization for various values of $A$ may be well seen in the raster plots of spikes which are shown in Figs.~\ref{fig:DRIh3}(a1)-\ref{fig:DRIh3}(a8). The IWPSR kernel estimates $R_w(t),$ exhibiting oscillatory behaviors, are also shown in Figs.~\ref{fig:DRIh3}(b1)-\ref{fig:DRIh3}(b8), and the power spectra of $\Delta R_w(t)$ are given in Figs.~\ref{fig:DRIh3}(f1)-\ref{fig:DRIh3}(f8). Moreover, time series of membrane potentials of individual FS interneurons are given for various values of $A$. The time series of $v_5(t)$ of the 5th stimulated FS interneuron are shown in Figs.~\ref{fig:DRIh3}(c1)-\ref{fig:DRIh3}(c8). As explained in the case of $J=100$, there are two types of non-stimulated FS interneurons, depending on their synaptic connections. Major non-stimulated FS interneurons (which have synaptic connections with fast-firing stimulated FS interneurons) fire slowly due to increased inhibition, while minor non-stimulated FS interneurons (which have no direct synaptic connections with stimulated FS interneurons and receive synaptic inputs from major slowly-firing non-stimulated FS interneurons) fire fast spikings due to decreased inhibition. Figures \ref{fig:DRIh3}(d1)-\ref{fig:DRIh3}(d8) show the time series of $v_{20}(t)$ of the 20th major slowly-firing non-stimulated FS interneuron.
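The single-cell dynamics underlying these time series can be sketched with a driven Izhikevich fast-spiking interneuron. The toy below uses the classic 2003 Izhikevich parametrization of an FS cell ($a=0.1$, $b=0.2$, $c=-65$, $d=2$) together with the stimulus form $S(t)=A\sin(\omega_d t)$; the DC bias, integration step, and parameter set are illustrative assumptions and need not match the model variant and network inputs used in this work:

```python
import numpy as np

def izhikevich_fs(T=500.0, dt=0.05, I_dc=10.0, A=0.0, omega_d=0.48):
    """Single Izhikevich fast-spiking interneuron driven by a DC bias
    plus an AC stimulus S(t) = A * sin(omega_d * t) (t in ms).
    Returns the spike times (ms) over the interval [0, T)."""
    a, b, c, d = 0.1, 0.2, -65.0, 2.0   # classic 2003 FS parameters
    v, u = -65.0, b * (-65.0)
    spikes = []
    for step in range(int(T / dt)):
        t = step * dt
        I = I_dc + A * np.sin(omega_d * t)
        # forward-Euler update of the Izhikevich equations
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                   # spike: reset v, bump u
            spikes.append(t)
            v, u = c, u + d
    return np.array(spikes)

spikes = izhikevich_fs(A=0.0)
rate = 1000.0 * len(spikes) / 500.0     # mean firing rate (Hz)
print(len(spikes), rate)
```

Sweeping $A$ in such a single-cell model shows how the AC drive reshapes tonic spiking into phase-locked spiking or bursting, the ingredient behind the bursting bands of the stimulated sub-population.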
On the other hand, Figs.~\ref{fig:DRIh3}(e1)-\ref{fig:DRIh3}(e8) show the time series of $v_{115}(t)$ of the 115th minor fast-firing non-stimulated FS interneuron. A plot of the dynamical factor $\langle D_f \rangle_r$ versus $A$ is shown in Fig.~\ref{fig:DRIh3}(g). Like the case of $J=100$, two stages are thus found to appear. Synchronization suppression ($\langle D_f \rangle_r <1$) and synchronization enhancement ($\langle D_f \rangle_r >1$) occur in the 1st (I) stage ($0< A < A^*_4$) and the 2nd (II) stage ($A > A^*_4$), respectively, where $A^*_4 \simeq 29207$ [which is less than $A^*_3 (\simeq 49699)$ for the case of $J=100$]. Examples are given for various values of $A$; 1st stage ($A=500$, 1000, 4000, 9000, and $2.5 \times 10^4$), and 2nd stage ($A= 4.0 \times 10^4,$ and $6 \times 10^4$). As in the above case of $J=100$, we make a detailed analysis of dynamical responses by decomposing the whole population of FS interneurons into two sub-populations of the stimulated and the non-stimulated FS interneurons. Dynamical responses in these two sub-populations are given in Fig.~\ref{fig:DRIh4}. Raster plots of spikes, ISPSR kernel estimates $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ [the superscript 1 (2) corresponds to the stimulated (non-stimulated) case], and power spectra of $\Delta R_s^{(1)}(t)$ and $\Delta R_s^{(2)}(t)$ in the stimulated and the non-stimulated sub-populations are shown in Figs.~\ref{fig:DRIh4}(a1)-\ref{fig:DRIh4}(a8), Figs.~\ref{fig:DRIh4}(b1)-\ref{fig:DRIh4}(b8), and Figs.~\ref{fig:DRIh4}(c1)-\ref{fig:DRIh4}(c8), respectively: the upper (lower) panels in these figures denote those for the stimulated (non-stimulated) case. 
For characterization of population synchronization in each of the stimulated and the non-stimulated sub-populations, the average occupation degree $\langle \langle O_i^{(l)} \rangle \rangle_r$, the average pacing degree $\langle \langle P_i^{(l)} \rangle \rangle_r$, and the average statistical-mechanical spiking measure $\langle M_s^{(l)} \rangle_r$ are given in Figs.~\ref{fig:DRIh4}(d1)-\ref{fig:DRIh4}(d3), respectively; $l=1$ (2) represents the stimulated (non-stimulated) case. The cross-correlation functions $C_{12}(\tau)$ between $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ of the two sub-populations are shown for various values of $A$ in Figs.~\ref{fig:DRIh4}(e1)-\ref{fig:DRIh4}(e8). Figure \ref{fig:DRIh4}(f) shows a plot of the cross-correlation measure $\langle M_c \rangle_r$ of Eq.~(\ref{eq:CM}) versus $A$. As $A$ is increased from 0 and passes a threshold, stimulated FS interneurons begin to exhibit burstings, as shown in Fig.~\ref{fig:DRIh3}(c2) for $A=500$. These burstings are phase-locked to external stimulus $S(t)$. For this case, spikings of non-stimulated FS interneurons cannot be well matched with burstings of stimulated FS interneurons, because of difference in the type of firings of individual neurons [e.g., see the raster plots of spikes in Figs.~\ref{fig:DRIh3}(a2) and \ref{fig:DRIh4}(a2) for $A=500$]. However, these spiking stripes of non-stimulated FS interneurons are also phase-locked to external stimulus, although they are phase-shifted from the vertical bursting trains of stimulated FS interneurons. Peaks in the power spectra of $\Delta R_s^{(1)}(t)$ and $\Delta R_s^{(2)}(t)$, associated with external phase lockings for both cases of stimulated and non-stimulated FS interneurons, appear at the driving frequency $f_d $ (=76 Hz) and its harmonics [see Fig.~\ref{fig:DRIh4}(c2)]. 
As $A$ is further increased and passes another threshold $(\simeq 894$), single-periodic synchronization disappears and a new type of multi-periodic synchronization occurs abruptly for both cases of the stimulated and the non-stimulated sub-populations in a wide region of $A$, in contrast to the above case of $J=100$ where multi-periodic synchronization occurs only in the non-stimulated sub-population [e.g., see the lower panel of Fig.~\ref{fig:DRIh2}(c4) for $A=5000$]: this multi-periodicity for $J=1000$ ends earlier for the stimulated case ($A \sim 8900$) when compared with the non-stimulated case ($A \sim 23200$). In this intermediate range of $A$, major non-stimulated FS interneurons exhibit intermittent and stochastic spikings (i.e., stochastic spike skipping) [see Fig.~\ref{fig:DRIh3}(d3) for $A=1000$]. Due to stronger inhibition for $J=1000$, stochastic spike skipping occurs for smaller values of $A$ than those for $J=100$. Moreover, stimulated FS interneurons also show intermittent and stochastic mixed burstings and spikings due to strong stochastic synaptic inputs from non-stimulated FS interneurons, as shown in Fig.~\ref{fig:DRIh3}(c3) for $A=1000$, in contrast to the case of $J=100$ where only regular burstings of stimulated FS interneurons become gradually intensified (i.e., for $J=100$ only single-periodic full synchronization of burstings occurs in the stimulated sub-population). Thus, for $A=1000$ multi-periodic synchronization with two fundamental frequencies (of 76 Hz and 123 Hz) appears, as shown in the power spectra of $\Delta R_s^{(1)}(t)$ and $\Delta R_s^{(2)}(t)$ in Fig.~\ref{fig:DRIh4}(c3) where peaks appear at the two fundamental frequencies, their harmonics, their sum (i.e., 199 Hz), and so on. 
Due to stochastic spike/burst skipping, sparse stripes appear in the raster plots of spikes in both the stimulated and the non-stimulated sub-populations [see Fig.~\ref{fig:DRIh4}(a3)], in contrast to the case of $J=100$ where sparse spiking stripes appear only in the non-stimulated case. Furthermore, zigzag smearing also occurs in the sparse spiking stripes for the case of the non-stimulated sub-population, as in the case of $J=100$, due to the high clustering coefficient of the Watts-Strogatz SWN. As a result, the overall degree of population synchronization in the whole population for $A=1000$ is much more reduced when compared with the case of $A=500$ [compare Fig.~\ref{fig:DRIh3}(b3) with Fig.~\ref{fig:DRIh3}(b2)]. Hence, the dynamical factor $\langle D_f \rangle_r$ is abruptly decreased until about $A=1000$, in comparison with the case of $J=100$, and then it arrives slowly at its minimum ($\simeq 0.413$) for $A=A_{min}^{(3)} (\simeq 3753)$ (which is smaller than $A=A_{min}^{(2)} (\simeq 4876)$ for $J=100$) [see Fig.~\ref{fig:DRIh3}(g)]; near the minima of $\langle D_f \rangle_r$ for both cases of $J=100$ and 1000, $\langle D_f \rangle_r$ for $J=1000$ is lower than that for $J=100$. With further increase in $A$, the degree of stochastic skippings of stimulated FS interneurons is decreased, and they begin to exhibit more regular burstings. As a result of enhanced external phase lockings, distinct bursting bands appear in the raster plot of spikes, as shown in the upper panel of Fig.~\ref{fig:DRIh4}(a4) for $A=4000$. Peaks at the driving frequency $f_d$ (=76 Hz) and its harmonics, associated with external phase lockings, become sharper (i.e., their heights increase) than those for $A=1000$ in the power spectrum of $\Delta R_s^{(1)}(t)$ [compare the upper panel of Fig.~\ref{fig:DRIh4}(c4) with the upper panel of Fig.~\ref{fig:DRIh4}(c3)]. 
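Since the zigzag smearing is attributed to the high clustering coefficient of the Watts-Strogatz SWN, a minimal construction of such a network and its average local clustering coefficient may be useful. The sketch below is illustrative; $N$, $k$, and the rewiring probability $p$ are not the network parameters used in this work:

```python
import random

def watts_strogatz(N, k, p, seed=0):
    """Watts-Strogatz SWN: a ring lattice with k nearest neighbours per
    node, each 'forward' edge rewired with probability p.
    Returns a dict mapping each node to its set of neighbours."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(N)}
    for i in range(N):                      # regular ring lattice
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % N)
            adj[(i + j) % N].add(i)
    for i in range(N):                      # stochastic rewiring
        for j in range(1, k // 2 + 1):
            old = (i + j) % N
            if rng.random() < p and old in adj[i]:
                new = rng.randrange(N)
                while new == i or new in adj[i]:
                    new = rng.randrange(N)
                adj[i].discard(old)
                adj[old].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient of the network."""
    total = 0.0
    for i, nbrs in adj.items():
        nbrs = list(nbrs)
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for x in range(d) for y in range(x + 1, d)
                    if nbrs[y] in adj[nbrs[x]])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)

# A small rewiring probability keeps the clustering coefficient high,
# near the regular-lattice value 3(k-2)/(4(k-1)) = 0.6 for k = 6.
ring = watts_strogatz(100, 6, 0.0)
swn = watts_strogatz(100, 6, 0.05)
print(clustering(ring), clustering(swn))
```

The high clustering retained at small $p$ means that neighbouring neurons share many common inputs, which is what lets local firing-time shifts propagate along the ring and smear the sparse stripes in a zigzag way.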
Hence, the amplitude of $R_s^{(1)}(t)$ is also larger than that for $A=1000$ [compare Fig.~\ref{fig:DRIh4}(b4) with Fig.~\ref{fig:DRIh4}(b3)]. On the other hand, major non-stimulated FS interneurons show more stochastic spike skippings [see Fig.~\ref{fig:DRIh3}(d4)]. Moreover, sparse spiking stripes of non-stimulated FS interneurons for $A=4000$ are smeared in a zigzag way much more than those for $A=1000$, as shown in the lower panel of Fig.~\ref{fig:DRIh4}(a4), and hence the amplitude of $R_s^{(2)}(t)$ becomes smaller than that for $A=1000$ [compare Fig.~\ref{fig:DRIh4}(b4) with Fig.~\ref{fig:DRIh4}(b3)], in contrast to the stimulated case. For this case, the peak at another fundamental frequency $f (\simeq 130$ Hz) becomes sharper than that ($f \simeq 123$ Hz) for $A=1000$ in the power spectrum of $\Delta R_s^{(2)}(t)$ [compare the lower panel of Fig.~\ref{fig:DRIh4}(c4) with the lower panel of Fig.~\ref{fig:DRIh4}(c3)]. The overall degree of population synchronization for $A=4000$ is a little lower than that for $A=1000,$ mainly due to increased zigzag smearing in the non-stimulated sub-population [compare Fig.~\ref{fig:DRIh3}(b4) with Fig.~\ref{fig:DRIh3}(b3)]. However, as $A$ is further increased, stimulated FS interneurons begin to show single-periodic behavior (with only one fundamental frequency), associated with external phase lockings of burstings, as shown in the raster plot of spikes and the power spectrum of $\Delta R_s^{(1)}(t)$ in the upper panels of Figs.~\ref{fig:DRIh4}(a5) and \ref{fig:DRIh4}(c5) for $A=9000$. For this case, only peaks at the driving frequency $f_d (= 76$ Hz) and its harmonics, related to external phase lockings, persist (i.e., all the other old peaks disappear) in the power spectrum, and they become sharper. As a result of enhanced external phase locking of burstings, the amplitude of $R_s^{(1)}(t)$ is much increased, as shown in Fig.~\ref{fig:DRIh4}(b5).
On the other hand, non-stimulated FS interneurons continue to exhibit multi-periodic behavior for $A=9000$. For this case, stochastic spike skipping and smearing are more enhanced, as shown in the lower panel of Fig.~\ref{fig:DRIh4}(a5). Consequently, the amplitude of $R_s^{(2)}(t)$ is reduced [see Fig.~\ref{fig:DRIh4}(b5)]. For this non-stimulated case, peaks in the power spectrum of $\Delta R_s^{(2)}(t)$ appear at two fundamental frequencies of 76 Hz and 137 Hz and their harmonics, as shown in the lower panel of Fig.~\ref{fig:DRIh4}(c5), in contrast to the case of the stimulated sub-population. For this case of $A=9000$, the enhanced external phase lockings of burstings in the stimulated sub-population become dominant, and hence the overall degree of population synchronization is increased [see the increased amplitude of $R_w(t)$ in Fig.~\ref{fig:DRIh3}(b5)]. In this way, with increasing $A$ from $A_{min}^{(3)}$ the dynamical factor $\langle D_f \rangle_r$ increases gradually thanks to the enhancement of external phase lockings of burstings of stimulated FS interneurons, as shown in Fig.~\ref{fig:DRIh3}(g). The increasing rate of $\langle D_f \rangle_r$ for $J=1000$ is larger than that for $J=100$ because stimulated FS interneurons receive weaker synaptic inputs from non-stimulated FS interneurons (resulting from more developed stochastic spike skipping of major non-stimulated interneurons). Eventually, as $A$ passes a threshold $(A \simeq 9183)$, $\langle D_f \rangle_r$ for $J=1000$ becomes larger than that for $J=100$ [see Fig.~\ref{fig:DRIh3}(g)]. When $A$ is sufficiently large, non-stimulated FS interneurons also begin to show single-periodic behaviors (with one fundamental frequency), as shown in the power spectrum of $\Delta R_s^{(2)}(t)$ for $A=25000$ [see the lower panel of Fig.~\ref{fig:DRIh4}(c6)] where only one fundamental frequency at $f \simeq 143$ Hz exists (i.e., all the other peaks, associated with external phase lockings, disappear).
Then, similar to the case of $J=100$, stimulated FS interneurons exhibit regular bursting behaviors [with the sub-population frequency $f_{sp}^{(1)} (\simeq 76$ Hz)], while non-stimulated FS interneurons show fast sparse synchronization [with the sub-population frequency $f_{sp}^{(2)} (\simeq 143$ Hz)] [see the raster plots of spikes in Fig.~\ref{fig:DRIh4}(a6), the ISPSR kernel estimates $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ in Fig.~\ref{fig:DRIh4}(b6), and the power spectra in Fig.~\ref{fig:DRIh4}(c6)]. As $A$ is further increased, external phase lockings of burstings with $f_{sp}^{(1)} \simeq 76$ Hz in the stimulated sub-population become more and more enhanced than those for $J=100$, thanks to both increased external stimulation and weaker synaptic inputs from non-stimulated FS interneurons, as shown in the raster plots of spikes, the ISPSR kernel estimate $R_s^{(1)}(t)$, and the power spectra for $A= 4 \times 10^4$ and $6 \times 10^4$ [see the upper panels of Figs.~\ref{fig:DRIh4}(a7)-\ref{fig:DRIh4}(a8), Figs.~\ref{fig:DRIh4}(b7)-\ref{fig:DRIh4}(b8), and Figs.~\ref{fig:DRIh4}(c7)-\ref{fig:DRIh4}(c8)]. On the other hand, the degree of fast sparse synchronization in the non-stimulated sub-population becomes very low due to stochastic spike skipping and smearing, as shown in the raster plots and the ISPSR kernel estimate $R_s^{(2)}(t)$ for $A= 4 \times 10^4$ and $6 \times 10^4$ [see the lower panels of Figs.~\ref{fig:DRIh4}(a7)-\ref{fig:DRIh4}(a8) and Figs.~\ref{fig:DRIh4}(b7)-\ref{fig:DRIh4}(b8)]; $f_{sp}^{(2)} \simeq 146$ Hz for $A= 4 \times 10^4$ and $6 \times 10^4$, as shown in the lower panels of Figs.~\ref{fig:DRIh4}(c7)-\ref{fig:DRIh4}(c8). Thanks to the dominance of more-developed external phase lockings of burstings of stimulated FS interneurons, $\langle D_f \rangle_r$ increases with $A$ at a faster rate than that for $J=100$, as shown in Fig.~\ref{fig:DRIh3}(g).
Eventually when passing a threshold of $A^*_4 (\simeq 29207)$ [which is smaller than $A^*_3 (\simeq 49699)$ for $J=100$], $\langle D_f \rangle_r$ becomes larger than 1, and then the 2nd stage of synchronization enhancement occurs. For characterization of the population synchronization in each of the stimulated and the non-stimulated sub-populations, we employ the average occupation degree $\langle \langle O_i^{(l)} \rangle \rangle_r$, the average pacing degree $\langle \langle P_i^{(l)} \rangle \rangle_r$, and the average statistical-mechanical spiking measure $\langle M_s^{(l)} \rangle_r$; $l=1$ (2) denotes the stimulated (non-stimulated) case. Plots of $\langle \langle O_i^{(l)} \rangle \rangle_r$, $\langle \langle P_i^{(l)} \rangle \rangle_r$, and $\langle M_s^{(l)} \rangle_r$ versus $A$ are shown in Figs.~\ref{fig:DRIh4}(d1)-\ref{fig:DRIh4}(d3), respectively. For small $A$, stimulated FS interneurons exhibit regular spikings or burstings, which results in the full synchronization with $\langle \langle O_i^{(1)} \rangle \rangle_r =1$. However, when passing a threshold ($A \simeq 894$), stimulated FS interneurons exhibit stochastic spike/burst skippings due to strong stochastic synaptic inputs from non-stimulated FS interneurons [e.g., see Fig.~\ref{fig:DRIh3}(c3) for $A=1000$], in contrast to the case of $J=100$ where stimulated FS interneurons exhibit only regular burstings/spikings without skippings. Due to this stochastic skipping, $\langle \langle O_i^{(1)} \rangle \rangle_r$ becomes less than 1 (i.e. sparse synchronization appears) [see the inset of Fig.~\ref{fig:DRIh4}(d1)], unlike the case of full synchronization for $J=100$. However, as $A$ is further increased, external phase lockings of burstings of stimulated FS interneurons are more and more enhanced, and full synchronization with $\langle \langle O_i^{(1)} \rangle \rangle_r =1$ reappears when passing a higher threshold ($A \simeq 4159$). 
This stochastic spike/burst skipping of stimulated FS interneurons also affects the average pacing degree $\langle \langle P_i^{(1)} \rangle \rangle_r$. As $A$ is increased from 0, $\langle \langle P_i^{(1)} \rangle \rangle_r$ begins to decrease and arrives at its minimum ($ \simeq 0.532$) for $A \simeq 1208$ due to the stochastic skippings. Near this minimum point, $\langle \langle P_i^{(1)} \rangle \rangle_r$ is less than that for $J=100$, as shown in Fig.~\ref{fig:DRIh4}(d2). With further increase in $A$, $\langle \langle P_i^{(1)} \rangle \rangle_r$ begins to increase, and it approaches a limit value ($\simeq 0.82$) (which seems to be the same as that for $J=100$) thanks to enhancement of external phase lockings of burstings of stimulated FS interneurons. Consequently, $\langle M_s^{(1)} \rangle_r$ near the minimum point is much less than that for $J=100$ due to sparse synchronization (with $\langle \langle O_i^{(1)} \rangle \rangle_r < 1$), while for large $A$ the values of $\langle M_s^{(1)} \rangle_r$ for both cases of $J=1000$ and 100 seem to be the same thanks to developed external phase lockings of burstings of stimulated FS interneurons. When passing the threshold ($A \simeq 894$), (major) non-stimulated FS interneurons also begin to exhibit stochastic spike skippings due to strong inhibition from stimulated FS interneurons. The degree of stochastic skippings is more severe than that for $J=100$ since the strength of synaptic inhibition is stronger. As a result, the original full synchronization breaks up, and fast sparse synchronization occurs, as shown in the lower panel of Fig.~\ref{fig:DRIh4}(a3) for $A=1000$ [where more sparse spiking stripes appear due to fast oscillation with the sub-population frequency $f_{sp}^{(2)} (\simeq 123$ Hz)].
For this case, $\langle \langle O_i^{(2)} \rangle \rangle_r $ (i.e., the average occupation degree in each spiking stripe) is decreased more abruptly when compared with the case of $J=100$ [see Fig.~\ref{fig:DRIh4}(d1)]. Thus, $\langle \langle O_i^{(2)} \rangle \rangle_r$ becomes much less than that for $J=100$. After that, $\langle \langle O_i^{(2)} \rangle \rangle_r$ decreases slowly, but it is still less than that for $J=100$. With increasing $A$ from the threshold, zigzag smearing in sparse spiking stripes is more developed [see Figs.~\ref{fig:DRIh4}(a3)-\ref{fig:DRIh4}(a4)]. As a result, $\langle \langle P_i^{(2)} \rangle \rangle_r$ is decreased in a relatively rapid way, as shown in Fig.~\ref{fig:DRIh4}(d2). As $A$ is further increased, sparse spiking stripes become smeared gradually [see Figs.~\ref{fig:DRIh4}(a5)-\ref{fig:DRIh4}(a8)], which also leads to gradual decrease in $\langle \langle P_i^{(2)} \rangle \rangle_r$. For large $A,$ $\langle \langle P_i^{(2)} \rangle \rangle_r \simeq 0.18$, which is less than that for $J=100$. Like the case of $\langle \langle O_i^{(2)} \rangle \rangle_r,$ the average spiking measure $\langle M_s^{(2)} \rangle_r$ also shows an abrupt decrease when fast sparse synchronization appears, and then it decreases slowly [see Fig.~\ref{fig:DRIh4}(d3)]. Due to stronger destructive effect of $S(t)$ (resulting from strong synaptic inhibition), the value of $\langle M_s^{(2)} \rangle_r$ becomes less than that for $J=100$. Finally, we obtain the cross-correlation functions $\langle C_{12}(\tau) \rangle_r$ between $R_s^{(1)}(t)$ and $R_s^{(2)}(t)$ of the two sub-populations, which are shown for various values of $A$ in Figs.~\ref{fig:DRIh4}(e1)-\ref{fig:DRIh4}(e8), and examine the matching degree between the stimulated and the non-stimulated sub-populations. 
A plot of the cross-correlation measure $\langle M_c \rangle_r$ of Eq.~(\ref{eq:CM}) versus $A$ is also shown in Fig.~\ref{fig:DRIh4}(f): $\langle M_c \rangle_r$ for $J=100$ is also given for comparison. In contrast to the case of $J=100$, $\langle M_c \rangle_r$ decreases abruptly until about $A=1000$, mainly due to stochastic skipping of firings in both the stimulated and the non-stimulated sub-populations, and then it arrives at its minimum ($\simeq -0.13$) in a relatively slow way for $A \simeq 3803$ (which is nearly the same as $A_{min}^{(3)} (\simeq 3753)$ for the minimum of $\langle D_f \rangle_r$). Because of the abrupt decrease in $\langle M_c \rangle_r$, the dynamical factor $\langle D_f \rangle_r$ also decreases rapidly from 1, and synchronization suppression occurs. After passing the minimum point ($A \simeq 3803$), $\langle M_c \rangle_r$ begins to increase slowly with $A$, but eventually it approaches 0 in an oscillatory way [see Fig.~\ref{fig:DRIh4}(f)]. In addition to $\langle M_c \rangle_r$ which is given by $\langle C_{12}(0) \rangle_r$ at the zero-time lag, we are also concerned about the peak amplitude (i.e., maximum amplitude) $\langle C_{12}(\tau_{peak}) \rangle_r$ at the ``peak'' time lag $\tau_{peak}$. For large $A$, the peak amplitude decreases with $A$ much more rapidly than that for $J=100$ [compare Figs.~\ref{fig:DRIh4}(e6)-\ref{fig:DRIh4}(e8) with Figs.~\ref{fig:DRIh2}(e6)-\ref{fig:DRIh2}(e8)]. Consequently, the cross-correlation between the stimulated and the non-stimulated sub-populations becomes very weak because of distinctly different types of population behaviors in the two sub-populations. Stimulated FS interneurons exhibit external phase lockings of burstings, while non-stimulated FS interneurons show fast sparse synchronization (without any external phase lockings). 
With increasing $A$, external phase lockings of stimulated FS interneurons become more and more intensified thanks to stronger stimulation and weaker synaptic inputs, while the degree of sparse synchronization in the non-stimulated sub-population is negligibly low. As a result of the dominant effect of such external phase lockings, the dynamical factor $\langle D_f \rangle_r$ increases monotonically with $A$ [see Fig.~\ref{fig:DRIh3}(g)], in spite of weak cross-correlations between the two sub-populations, as in the case of $J=100$: the increasing rate for $J=1000$ is larger than that for $J=100$ due to more-developed external phase lockings. However, the increasing rates for $\langle D_f \rangle_r$ in both inhibitory cases of $J=100$ and 1000 are much slower than that for the excitatory case where the increase in $\langle D_f \rangle_r$ results from the interplay between the two sub-populations with strong cross-correlations. \section{Summary} \label{sec:SUM} Brain rhythms appear in both health and disease via neural synchronization. A neural system's response to an external stimulus can provide useful information about its dynamical properties. Therefore, it is important to investigate how an external stimulus affects the neural synchronization. Synchronization enhancement or suppression may occur via control of population synchronization. In most previous theoretical and computational works on control of population synchronization, only excitatory-type couplings were considered. To see the dependence of dynamical responses to external stimuli on the synaptic-coupling type, we considered two types of excitatory and inhibitory full synchronization in the Watts-Strogatz SWN of excitatory RS pyramidal neurons and inhibitory FS interneurons, and investigated the effects of synaptic interactions on dynamical responses to external time-periodic stimuli $S(t)$ by varying the driving amplitude $A$.
We have characterized dynamical responses to $S(t)$ in terms of the dynamical response factor $\langle D_f \rangle_r$ by increasing $A$. For the case of excitatory coupling, external phase lockings occur in both the stimulated and the non-stimulated sub-populations, thanks to a constructive effect of $S(t)$ which results from phase-attractive synaptic excitation. On the other hand, in the case of inhibitory coupling, external phase locking occurs only in the stimulated sub-population, while the original inhibitory full synchronization in the non-stimulated sub-population breaks up gradually (i.e., for large $A$ non-stimulated FS interneurons exhibit inhibitory sparse synchronization of low degree) due to a destructive effect of $S(t)$ which comes from strong synaptic inhibition. As a result of these different effects of $S(t)$, the type and degree of dynamical response (e.g., synchronization enhancement or suppression characterized by $\langle D_f \rangle_r$) have been found to vary differently, depending on the type of synaptic interaction. For a detailed analysis, we have also measured the matching degree between the dynamics of the two sub-populations of stimulated and non-stimulated neurons in terms of a cross-correlation measure $\langle M_c \rangle_r$. $\langle M_c \rangle_r$ has been found to vary with $A$ in a different way, depending on the synaptic-coupling type. For small $A$, synchronization enhancement occurs in the excitatory case, thanks to strong cross-correlation (with $\langle M_c \rangle_r > 0.97$) between the two sub-populations, while synchronization suppression takes place in the inhibitory case, due to the monotonic decrease in $\langle M_c \rangle_r$. Particularly, for large $A$ the cross-correlation becomes very weak in the inhibitory case, while for the excitatory case $\langle M_c \rangle_r$ increases gradually after passing its minimum (i.e., it becomes large for large $A$). 
Consequently, in the excitatory case synchronization enhancement reappears for an intermediate value of $A$, thanks to the strong cross-correlation: with increasing $A$ from 0, synchronization enhancement first appears, then synchronization suppression occurs, and finally synchronization enhancement reappears. For the inhibitory case, in spite of weak cross-correlation, synchronization enhancement also appears for sufficiently large $A$, solely thanks to much-enhanced external phase lockings of burstings of stimulated FS interneurons: with increasing $A$ from 0, synchronization suppression appears in a wide range of $A$, and then synchronization enhancement occurs for very large $A$. Furthermore, in the inhibitory case we have also studied the effect of coupling strength $J$ on the dynamical responses by considering both the small- and the large-coupling cases (i.e., $J=100$ and 1000). The dynamical response has been found to vary in a quantitatively different way, depending on the coupling strength $J$. For intermediate values of $A$, stimulated FS interneurons in the case of $J=1000$ have been found to exhibit intermittent and stochastic mixed burstings and spikings due to strong stochastic synaptic inputs from non-stimulated FS interneurons, in contrast to the case of $J=100$ where they show only regular burstings. As a result, the dynamical factor $\langle D_f \rangle_r$ decreases rapidly in comparison with the case of $J=100$. However, after passing its minimum, $\langle D_f \rangle_r$ increases faster than that for $J=100$, and eventually, upon passing an intermediate threshold, it becomes larger, thanks to much more enhanced external phase lockings of burstings of stimulated FS interneurons (resulting from both stronger external stimulation and weaker synaptic inputs from non-stimulated FS interneurons). 
All these results for both excitatory and inhibitory cases are expected to provide useful insights into the dynamical responses to external stimuli in neural systems (i.e., how external stimuli affect brain rhythms emerging via excitatory and inhibitory synchronization). \begin{acknowledgments} This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 20162007688). \end{acknowledgments}
\section{Introduction} When trying to describe $\sigma$-compactness in terms of open covers, Hurewicz \cite{Hur25} introduced the following property, nowadays called \emph{the Menger property}: A topological space $X$ is said to have this property if for every sequence $\la \U_n : n\in\omega\ra$ of open covers of $X$ there exists a sequence $\la \V_n : n\in\omega \ra$ such that each $\V_n$ is a finite subfamily of $\U_n$ and the collection $\{\cup \V_n:n\in\omega\}$ is a cover of $X$. The current name (the Menger property) has been adopted because Hurewicz proved in \cite{Hur25} that for metrizable spaces his property is equivalent to a certain basis property considered by Menger in \cite{Men24}. If in the definition above we additionally require that $\{\cup\V_n:n\in\w\}$ is a \emph{$\gamma$-cover} of $X$ (this means that the set $\{n\in\w:x\not\in\cup\V_n\}$ is finite for each $x\in X$), then we obtain the definition of the Hurewicz covering property introduced in \cite{Hur27}. Contrary to a conjecture of Hurewicz, the class of metrizable spaces having the Hurewicz property turned out to be wider than the class of $\sigma$-compact spaces \cite[Theorem~5.1]{COC2}. As with most topological properties, it is interesting to ask whether the Hurewicz property is preserved by finite products. One of the motivations behind this question comes from spaces of continuous functions, see \cite[Theorem~21]{KocSch03}. In the case of general topological spaces there are ZFC examples of Hurewicz spaces whose product is not even Menger, see \cite[\S 3]{Tod95} and the discussion in the introduction of \cite{RepZdo17}. That is why \emph{we concentrate in what follows on subspaces of the Cantor space $2^\w$.} (Let us note that the preservation of the Hurewicz property by finite products of metrizable spaces reduces to subspaces of $2^\w$, see the end of the proof of \cite[Theorem~1.1]{RepZdo17} on p. 331 of that paper.) 
The covering properties of products of subspaces of $2^\w$ with the Hurewicz property turned out to be sensitive to the ambient set-theoretic universe: Under CH there exists a Hurewicz space whose square is not Menger, see \cite[Theorem~2.12]{COC2}. Later, a similar construction was carried out under a much weaker assumption, see \cite[Theorem~43]{TsaSch02}. In particular, under Martin's Axiom there are Hurewicz subspaces of the Cantor space whose product is not Menger. On the other hand, the product of any two Hurewicz subspaces of $2^\w$ is Menger in the Laver and Miller models, see \cite{RepZdo17} and \cite{Zdo??}, respectively. In the Miller model we actually know that the product of finitely many Hurewicz subspaces of $2^\w$ is Menger (for the Laver model this is unknown even for three Hurewicz subspaces), because in this model the Menger property is preserved by products of subspaces of $2^\w$, see \cite{Zdo??}. This is why the Miller model seemed to be the best candidate for a model where the Hurewicz property is preserved by finite products of metrizable spaces. Our next theorem refutes this expectation, and hence the question whether one can find ZFC examples of Hurewicz subspaces $X,Y$ of $2^\w$ with non-Hurewicz product remains open. As is standard, by the Miller model we mean a forcing extension of a ground model of GCH by adding a generic for the forcing obtained by an iteration of length $\w_2$ with countable supports of the poset defined by Miller in \cite{Mil84}. We recall the definition of this poset in the proof of Lemma~\ref{examples}. \begin{theorem} \label{main} In the Miller model there are two $\gamma$-subspaces $X,Y$ of $2^\w$ such that $X\times Y$ is not Hurewicz. In particular, in this model the Hurewicz property is not preserved by finite products of metrizable spaces. 
\end{theorem} A family $\U$ of subsets of a set $X$ is called an \emph{$\w$-cover} of $X$ if $X\not\in\U$ and for every finite subset $F$ of $X$ there exists $U\in\U$ such that $F\subset U$. A space $X$ is called a \emph{$\gamma$-set} if every open $\w$-cover of $X$ contains a $\gamma$-subcover. This notion was introduced in \cite{GerNag82}, where it was proved that a Tychonoff space $X$ is a $\gamma$-space if and only if the space $C_p(X)$ of all continuous functions from $X$ to $\mathbb R$, with the topology of pointwise convergence, has the Fr\'echet-Urysohn property, i.e., for every $f\in C_p(X)$ and $A\subset C_p(X)$ with $f\in\bar{A}$ there exists a sequence $\la f_n:n\in\w\ra\in A^\w$ converging to $f$. It is well-known that $\gamma$-spaces have the Hurewicz property in all finite powers, see, e.g., \cite[Th.~3.6 and Fig. 2]{COC2} and references therein. This follows from the following characterization proved in \cite{GerNag82}: $X$ is a $\gamma$-space if and only if for every sequence $\la\U_n:n\in\w\ra$ of open $\w$-covers of $X$ there exists a sequence $\la U_n\in\U_n:n\in\w\ra$ such that $\{U_n:n\in\w\}$ is a $\gamma$-cover of $X$. Our proof of Theorem~\ref{main} is based on the fact that if $X\subset 2^\w$, $X\in V,$ and $X$ is a $\gamma$-space in $V,$ then $X$ remains a $\gamma$-space in the forcing extension by a countable support iteration of posets satisfying property $(\dagger)$ introduced in Definition~\ref{def_dag} below. This seems to be the first attempt to find iterable properties of forcing posets guaranteeing the preservation of ground model $\gamma$-spaces. Previously, only specific posets were treated: By \cite{MilTsaZdo16} and \cite{Sch10} $\gamma$-spaces are preserved by Cohen and random forcing, respectively, whereas the Hechler forcing kills all ground model uncountable $\gamma$-spaces, see \cite{Mil05}. 
Let us note that Cohen forcing satisfies $(\dagger)$ but fails to preserve Hurewicz spaces, see the discussion in \cite{MilTsaZdo16} after Problem 4.1 therein. This is why our proof of Theorem~\ref{main} leaves open the following question: \begin{question} Does Miller forcing preserve the Hurewicz property of ground model metrizable spaces containing no topological copies of $2^\w$? What about Sierpi\'nski spaces? \end{question} \section{Proof of Theorem~\ref{main}} Theorem~\ref{main} is a direct consequence of Lemmata~\ref{just_def}, \ref{dad_impl_gampres}, \ref{dag_preservation}, and \ref{examples} proved below, combined with one of the main results of \cite{MilTsaZdo16}. We shall consider only posets $\IP$ such that below any $p\in\IP$ there exist incompatible $r,q$. This is not an essential restriction because most of the posets considered in literature have this property. First we need to introduce some auxiliary notions. \begin{definition} \label{def_dag} \begin{itemize} \item A poset $\IP$ has property $(\dagger)$ if for every countable elementary submodel $M\ni \IP$ of $H(\theta)$ for big enough $\theta$, every $p\in \IP\cap M$, and $\phi_i : \IP\cap M\to\IP\cap M$ for all $i\in\w$ such that $\phi_i(p)\leq p$ for all $p\in\IP\cap M$ and $i\in\w$, there exists an $(M,\IP)$-generic $q\leq p$ forcing $$\name{G}\cap\{\phi_i(p):p\in M\cap\IP\} \mbox{ is infinite for all }i\in\w,$$ where $\name{G}$ is the canonical $\IP$-name for the $\IP$-generic filter. \item Let $\B=\{B_n:n\in\w\}$ be a bijective enumeration of the standard clopen base of the topology on $2^\w$, i.e., $\B$ consists of finite unions of elements of the family $\big\{[s]=\{x\in 2^\w:x\uhr|s|=s\}:s\in 2^{<\w}\big\}$. Let $X\subset 2^\w$ and $M\ni X$ be as above. $\W\subset\B$ is called $\la X,M,\w\ra$-hitting if $\W\cap\U$ is infinite for every $\w$-cover $\U$ of $X$ such that $\U\in M$ and $\U\subset\B$. 
\item The poset $\IP$ is called \emph{$\la X,\gamma\ra$-preserving} if for every countable elementary submodel $M$ such that $X,\IP\in M$, $\la X,M,\w\ra$-hitting $\W\subset \B$, and $p\in \IP\cap M$ there exists an $(M,\IP)$-generic condition $q\leq p$ forcing $\W$ to be $\la X, M[\name{G}],\w\ra$-hitting. \end{itemize} \end{definition} In what follows we shall denote by $\Omega(X)$ and $\Gamma(X)$ the family of all open $\w$- and $\gamma$-covers of a topological space $X$, respectively. The following lemma justifies our terminology. \begin{lemma} \label{just_def} If $\IP$ is $\la X,\gamma\ra$-preserving and $X\subset 2^\w$ is a $\gamma$-set, then $X$ remains a $\gamma$-set in $V^{\IP}$. \end{lemma} \begin{proof} Let $\name{\U}$ be a $\IP$-name for an $\w$-cover of $X$ by elements of $\B$, $p\in\IP,$ and $M\ni\name{\U},p$ be a countable elementary submodel. Let $\{\U_i:i\in\w\}$ be an enumeration of $\Omega(X)\cap M\cap\mathcal P(\B)$ and $U_i\in\U_i$ be such that $\W=\{U_i:i\in\w\}\in\Gamma(X)$. Then $\W $ is $\la X,M,\w\ra$-hitting, and hence there exists an $(M,\IP)$-generic $q\leq p$ forcing $\W\cap\name{\U}$ to be infinite. Thus $q$ forces $\W\cap\name{\U}$ to be a $\gamma$-subcover of $\name{\U}$. \end{proof} \begin{lemma} \label{dad_impl_gampres} If $\IP$ satisfies $(\dagger)$, then it is $\la X,\gamma\ra$-preserving for every $X\subset 2^\w$. \end{lemma} \begin{proof} Let us enumerate $V^{\IP}\cap M$ as $\{\name{\U}_i:i\in\w\}$. For every $p\in\IP\cap M$ and $i\in\w$, if $p$ does not force $\name{\U}_i$ to be an $\w$-cover of $X$ consisting of elements of $\B$, we can find $r_{i,p}\leq p$ which forces that $\name{\U}_i$ is not an $\w$-cover of $X$ by elements of $\B$. Otherwise we set $\U_{i,p}=\{B\in\B:\exists r\leq p (r\forces B\in\name{\U}_i)\}$ and note that $\U_{i,p}\in\Omega(X)\cap M$. Furthermore, by the elementarity we have that for every $B\in\U_{i,p}$ there exists $M\ni r\leq p$ such that $r\forces B\in\name{\U}_i$. 
Let $\{p_n:n\in\w\}$ be an enumeration of $M\cap\IP$ and for every $n,i$ set $\U'_{i,p_n}=\U_{i,p_n}\setminus \{B_k : k\leq n\}$. Since $\W$ is $\la X,M,\w\ra$-hitting, $|\W\cap\U'_{i,p}|=\w$ for every $p\in M\cap \IP$ and $i\in\w$. For every $p,i$ as above pick $U_{i,p}\in\W\cap\U'_{i,p}$ and $r_{i,p}\leq p$ such that $r_{i,p}\in M$ and $r_{i,p}\forces U_{i,p}\in\name{\U}_i$. Now let us fix $p_*\in \IP\cap M$ and consider maps $\phi_i:p\mapsto r_{i,p}$, $i\in\w$. It follows that there exists an $(M,\IP)$-generic $q\leq p_*$ forcing the set $\name{G}\cap\{r_{i,p}:p\in\IP\cap M\}$ to be infinite for all $i\in\w$. Let $G\ni q$ be $\IP$-generic and $i\in\w$. If $\name{\U}_i^G$ is an $\w$-cover of $X$ by elements of $\B$, then no $r_{i,p}\in G$ can force the negation thereof, and hence for each such $r_{i,p}$ we have $U_{i,p}\in\W\cap\name{\U}_i^G$. Therefore $|\W\cap \name{\U}_i^G|=\w$ since no $B\in\B$ can belong to $\U'_{i,p}$ for infinitely many $p\in M\cap\IP$. \end{proof} \noindent\textbf{Remark.} It is a simple exercise to check that if in the definition of $(\dagger)$ we restrict ourselves to only one $\phi:M\cap\IP\to M\cap\IP$ then we get an equivalent statement. The longer formulation which we have chosen seems to be easier to apply, though. \hfill $\Box$ \medskip By the definition we have that for every $X\subset 2^\w$ finite iterations of $\la X,\gamma\ra$-preserving posets are again $\la X,\gamma\ra$-preserving. The proof of the next fact is modelled after that of \cite[Lemma~2.8]{Abr10}. In fact, we just ``add an $\epsilon$'' to it, using ideas from \cite{Dow90}. \begin{lemma} \label{dag_preservation} Let $X\subset 2^\w$. Then countable support iterations of $\la X,\gamma\ra$-preserving posets are again $\la X,\gamma\ra$-preserving. 
\end{lemma} \begin{proof} We shall inductively prove the following formally stronger statement: \begin{quote} Let $\la \IP_\alpha,\name{\IQ}_\alpha:\alpha<\delta \ra$ be a countable support iteration of $\la X,\gamma\ra$-preserving posets, $M$ a countable elementary submodel of $ H(\lambda)$ for a sufficiently large regular cardinal $\lambda$ such that $\delta,\IP_\delta\in M$, and $\W\subset\B$ be $\la X,M,\w\ra$-hitting. For any $\delta_0\in\delta\cap M$ and $(M,\IP_{\delta_0})$-generic condition $q_0$ forcing $\W$ to be $\la X, M[\name{G}_{\delta_0}],\w\ra$-hitting, the following holds: If $\name{p}_0\in V^{\IP_{\delta_0}}$ is such that $$q_0\forces_{\IP_{\delta_0}}\name{p}_0\in\IP_\delta\cap M \mbox{ and } \name{p}_0\uhr\delta_0\in\name{G}_{\delta_0},$$ where $\name{G}_{\delta_0}$ is the canonical name for the $\IP_{\delta_0}$-generic, then there is an $(M,\IP_\delta)$-generic condition $q$ such that $$ q\uhr\delta_0=q_0 \mbox{ and } q\forces_{\IP_\delta}\:``\name{p}_0\in\name{G}_\delta \:\wedge\:\W\mbox{ is }\la X,M[\name{G}_\delta],\w\ra\mbox{-hitting.''} $$ \end{quote} We are going to prove this statement by induction on $\delta$, the only non-trivial case (modulo \cite[Lemma~2.6]{Abr10} and the proof thereof) is when $\delta$ is a limit ordinal. Fix a strictly increasing sequence $\la\delta_n:n\in\w\ra$ of ordinals in $M$ cofinal in $M\cap\delta$. For every $\nu<\mu<\delta$ let us denote by $\IP_{[\nu,\mu)}$ a $\IP_{\nu}$-name for the iteration of $\name{\IQ}_\beta$, $\beta\in\mu\setminus\nu$, in $V^{\IP_{\nu}}$. As usual, (see, e.g., \cite{Lav76}) we shall identify $\IP_{[\nu,\mu)}$ with the set of all functions $p$ with domain $\mu\setminus \nu$ such that $1_{\IP_\nu}\bigvid p\in\IP_\mu$, ordered as follows: Given a $\IP_\nu$-generic $G$ and $p_0,p_1\in\IP_{[\nu,\mu)}$, $p_1^G\leq p_0^G$ in $\IP_{[\nu,\mu)}^G$ if there exists an $s\in G$ such that $s\bigvid p_1\leq s\bigvid p_0$ in $\IP_\mu$. 
Set $D_0=\IP_\delta$ and let $\{D_i:i\geq 1\}$ be the set of all open dense subsets of $\IP_\delta$ which belong to $M$ and $\{\name{\U}_i:i\geq 1\}$ an enumeration of $V^{\IP_\delta}\cap M$ such that each $\tau\in V^{\IP_\delta}\cap M$ equals $\name{\U}_i$ for infinitely many $i$. We shall define by induction on $n\in\w$ a condition $q_n\in\IP_{\delta_n}$ and a name $\name{p}_n\in V^{\IP_{\delta_n}}$ such that: \begin{itemize} \item[$(1)$] $q_0$ and $\name{p}_0$ are like in the quoted claim at the beginning of the proof; $q_n$ is $(M,\IP_{\delta_n})$-generic; $q_{n+1}\uhr \delta_n=q_n$; \item[$(2)$] $\name{p}_n$ is a $\IP_{\delta_n}$-name such that\\ \centerline{$q_n\forces_{\IP_{\delta_n}}\ ``\name{p_n} \mbox{ is a condition in } \IP_\delta\cap M \mbox{ such that }$} \begin{itemize} \item[$(a)$] $\name{p}_n\uhr \delta_n\in\name{G}_{\delta_n}$; \item[$(b)$] $\name{p}_n\leq\name{p}_{n-1}$; \item[$(c)$] $\name{p}_n\in D_n$; and \item[$(d)$] If $n\geq 1$ then $\name{p}_{n}$ decides whether $\name{\U}_n$ is an $\w$-cover of $X$ by elements of $\B$, and in the case when decided to be such a cover, $\name{p}_n$ forces, in addition, that $\exists m\geq n (B_m\in\name{\U_n}\cap\W)$.'' \end{itemize} \end{itemize} Assume that $q_n$ and $\name{p}_n$ have already been constructed. For a while we shall work in $V[G]$, where $G\ni q_n$ is $\IP_{\delta_n}$-generic. Then $p_n:=\name{p}_n^{G}\in D_n\cap M$ and $p_n\uhr\delta_n\in G$. Find $p'_n\leq p_n$ such that $p'_n\uhr \delta_n\in G$ and $p'_n\in D_{n+1}\cap M$. It exists because the set $$ D'=\{p'\in\IP_{\delta_n}: (p'\perp p_n\uhr\delta_n)\vee(\exists p_n'\in D_{n+1} (p_n'\leq p_n \wedge p'=p_n'\uhr\delta_n))\} $$ is dense in $\IP_{\delta_n}$ and belongs to $M$, and hence $D'\cap M$ is predense below $q_n$, which yields $D'\cap G\cap M\neq\emptyset$. Moreover, since $p_n\uhr\delta_n\in G$, any $p'\in D'\cap G$ is compatible with $p_n\uhr\delta_n$. 
It follows that for any $p'\in G\cap D'\cap M$, any $p_n'\in M$ witnessing that $p'\in D'$ is as required. Without loss of generality we may assume that each condition in $D_{n+1}$ decides whether $\name{\U}_{n+1}$ is an $\w$-cover of $X$ by elements of $\B$. If $p'_n$ decides that it is not, then we set $p_{n+1}=p'_n$ and take $q_{n+1}$ to be any $(M,\IP_{\delta_{n+1}})$-generic satisfying $(1), (2)$ and forcing over $\IP_{\delta_{n+1}}$ that $\mathcal W$ is $\la X,M[\name{G}_{\delta_{n+1}}],\w\ra$-hitting, its existence following from our inductive assumption. Otherwise fix a $\IP_{\delta_{n}}$-name $\name{p}'_n\in M$ for a condition in $\IP_\delta$ such that $q_n$ forces that $\name{p}'_n$ has all the properties of $p'_n$ stated above, and an $(M,\IP_{\delta_{n+1}})$-generic $q_{n+1}$ such that $q_{n+1}\uhr\delta_n=q_n$, $q_{n+1}\forces_{\IP_{\delta_{n+1}}} \name{p}'_n\uhr\delta_{n+1}\in\name{G}_{\delta_{n+1}}$, and $q_{n+1}\forces_{\IP_{\delta_{n+1}}}`` \W$ is $\la X, M[\name{G}_{\delta_{n+1}}],\w\ra$-hitting''. Consider the $\IP_{\delta_{n+1}}$-name $\name{\W}_{n+1}$ which equals \begin{eqnarray*} \big\{ \la r,\check{B}\ra\: :\: B\in\mathcal B \ \ \&\ \ \IP_{\delta_{n+1}}\ni r \mbox{ decides } \name{p}_n' \mbox{ as } p_n'\ \ \& \\ \mbox{ exists } p\leq p_n' \mbox{ such that } p\uhr\delta_{n+1}=r \mbox{ and } p\forces_{\IP_\delta}\check{B}\in\name{\U}_{n+1} \big\}. \end{eqnarray*} It follows that $\name{\W}_{n+1}\in M$ is a $\IP_{\delta_{n+1}}$-name which is forced by $q_{n+1}$ to be an $\w$-cover of $X$ by elements of $\B$, and hence $q_{n+1}\forces_{\IP_{\delta_{n+1}}} |\W\cap\name{\W}_{n+1}|=\w$. Let $H\ni q_{n+1}$ be $\IP_{\delta_{n+1}}$-generic over $V$ and $p'_n$ the interpretation $(\name{p}'_n)^H$. Now we shall work in $V[H]$ for a while. It follows from the above that there exists $m>n$ such that $B_m\in\W\cap\name{\W}_{n+1}^H$. 
Consequently, there exist $r\in H$ and $p\leq p_n'$ such that $p\uhr\delta_{n+1}=r$ and $p\forces_{\IP_\delta}\check{B}_m\in\name{\U}_{n+1}$. By elementarity we can find such $r$ in $M$ (note that $M[H]\cap \IP_{\delta_{n+1}}= M\cap \IP_{\delta_{n+1}}$), and hence we can also find $p\in M$ as above. Now let $\name{p}_{n+1}\in M$ be a $\IP_{\delta_{n+1}}$-name such that $q_{n+1}$ forces that $\name{p}_{n+1}$ has all the properties of $p$ stated above. Its existence follows by the maximality principle. This completes our inductive construction. Exactly as in the proof of \cite[Lemma 2.8]{Abr10} one can verify that $q=\bigcup_{n\in\w}q_n$ is $(M,\IP_\delta)$-generic. More precisely, it is easy to see by induction on $n$ that $q$ forces over $\IP_\delta$ that $\name{p}_{n+1}\leq \name{p}_n\in \name{G}_{\delta}\cap M$ for all $n\in\w$. Using this we are going to prove that each $D_n\cap M$ is predense below $q$. Suppose not. Then we can find $q'\leq q$ which is incompatible with all elements of $D_n\cap M$ for some $n\in\w$. Let $H\ni q'$ be $\IP_\delta$-generic. Then $p_n:=\name{p}_n^H\in H\cap M\cap D_n$ by $(2)$, and hence $p_n$ is a condition in $D_n\cap M$ compatible with $q$ (because $q\in H$), a contradiction. It suffices to note that $(2)(d)$ clearly ensures that $q$ forces $\W$ to be $\la X, M[\name{G}_\delta],\w\ra$-hitting. This completes our proof. \end{proof} \begin{lemma} \label{examples} The Miller, Sacks, and Cohen posets satisfy $(\dagger)$. \end{lemma} \begin{proof} We shall present the proof only for Miller forcing because it is exactly what is needed for the proof of Theorem~\ref{main}, and because the Sacks case is completely analogous, whereas the Cohen one is trivial. Before we pass to the proof, let us recall the definition of Miller forcing and fix our notation. 
By a Miller tree we understand a subtree $T$ of $\w^{<\w}$ consisting of increasing finite sequences such that the following conditions are satisfied: \begin{itemize} \item Every $t\in T$ has an extension $s\in T$ which is splitting in $T$, i.e., $s$ has more than one immediate successor in $T$; \item If $s$ is splitting in $T$, then it has infinitely many immediate successors in $T$. \end{itemize} Miller forcing is the collection $\mathbb M$ of all Miller trees ordered by inclusion, i.e., smaller trees carry more information about the generic. This poset was introduced in \cite{Mil84}. For a Miller tree $T$ we denote by $\Split(T)$ the set of all splitting nodes of $T$, and for $t\in\Split(T)$ we denote the size of $\{s\in\Split(T):s\subsetneq t\}$ by $\Lev(t,T)$. For a node $t$ in a Miller tree $T$ we denote by $T_t$ the set $\{s\in T:s$ is compatible with $t\}$. It is clear that $T_t$ is also a Miller tree. If $T_1\leq T_0 $ and each $t\in \Split(T_0)$ with $\Lev(t,T_0)\leq k$ belongs to $\Split(T_1)$, where $k\in\w$, then we write $T_1\leq_k T_0$. It is easy to check (and is well-known) that if $T_{n+1}\leq_n T_n$ for all $n\in\w$, then $\bigcap_{n\in\w}T_n\in\mathbb M$. We are now in a position to start the proof. Let $M$ and $\{\phi_i:i\in\w\}$ be as in the formulation of $(\dagger)$. We can additionally assume that for each $\phi\in\{\phi_i:i\in\w\}$ there are infinitely many $i$ such that $\phi=\phi_i$. Let $\{D_n:n\in\w\}$ be the set of all open dense subsets of $\mathbb M$ which belong to $M$. Given $T_0\in M\cap\mathbb M$, construct a sequence $\la T_n:n\in\w\ra\in\mathbb M^\w$ as follows: Assume that $T_n$ has been constructed such that $(T_n)_t\in M$ for every $t\in T_n$ with $\Lev(t,T_n)=n$. Given such a $t\in T_n$ and $k\in\w$ such that $t\vid k\in T_n$, find $R_{t,k}\leq\phi_n((T_n)_{t\vid k})$ such that $R_{t,k}\in D_n\cap M$. 
Now set $T_{n+1}=\bigcup\{R_{t,k}:t\in T_n,\Lev(t,T_n)=n, t\vid k\in T_n\}$ and note that $T_{n+1}\leq_n T_n$ and $(T_{n+1})_r\in M$ for all $r\in T_{n+1}$ with $\Lev(r,T_{n+1})=n+1$. This completes our construction. It is straightforward to check that $T=\bigcap_{n\in\w}T_n$ is an $(M,\mathbb M)$-generic condition forcing $\name{G}\cap\phi_n[M\cap\mathbb M]$ to be infinite for all $n$. \end{proof} Finally we have all necessary ingredients to complete the proof of Theorem~\ref{main}. Let $V$ be a model of GCH. By \cite[Theorem~3.2]{MilTsaZdo16} there exist $\gamma$-subspaces $X,Y$ of $2^\w$ and a continuous map $\phi:X\times Y\to\w^\w$ such that $\phi[X\times Y]$ is dominating, i.e., for every $f\in \w^\w$ there exists $\la x,y\ra\in X\times Y$ such that $f\leq^*\phi\la x,y\ra$. (As usual, $f\leq^* g$ for $f,g\in\w^\w$ means that the set $\{n\in\w: f(n)>g(n)\}$ is finite. Whenever we speak about unbounded or dominating subsets of $\w^\w$, we always mean with respect to $\leq^*$.) Let $\IP$ be the iteration of $\mathbb M$ of length $\w_2$ with countable supports, and $G$ be $\IP$-generic. It is well known that $V\cap \w^\w$ is unbounded\footnote{Even more is true: there exists an ultrafilter $\U\in V$ which remains a base for an ultrafilter in $V[G]$, namely all $P$-points are like that, see \cite{BlaShe89}. It is easy to see that the set of enumerating functions of a base of an ultrafilter cannot be bounded.} in $V[G]$, and hence so is $\phi[X\times Y]$. By a result of Hurewicz \cite{Hur27} (see also \cite[Theorem~4.3]{COC2}) this implies that $X\times Y$ is not Hurewicz in $V[G]$. On the other hand, $X$ and $Y$ remain $\gamma$-spaces in $V[G]$ by a combination of Lemmata~\ref{just_def}, \ref{dad_impl_gampres}, \ref{dag_preservation}, and \ref{examples}. This completes our proof. \medskip \noindent\textbf{Acknowledgments.} The authors wish to express their thanks to the referee whose suggestions have improved our exposition in Lemma~\ref{dag_preservation}.
\section{Introduction} In string perturbation theory much effort was devoted historically to understanding higher point and higher genus correlation functions. For a broad overview, see e.g.~\cite{DHoker:1988pdl, Witten:2012bh}. Despite a good understanding of the integrands of string perturbation theory, performing the actual integrals has remained a challenging task. On the other end of the spectrum, there are some exceptional correlators at genus 0 that require special attention. The reason for this is a residual gauge group that survives after imposing conformal gauge, present due to conformal Killing vectors. For the sphere, there are three complex conformal Killing vectors corresponding to the group of M\"obius transformations. Since the volume of this group is infinite, one naively concludes that zero-point, one-point and two-point functions vanish at tree level in string theory. The same goes for the open string, where the group of residual M\"obius transformations is $\mathrm{PSL}(2,\mathbb{R})$. This conclusion is however premature, since the infinities of the residual gauge groups can potentially be compensated by other infinities in the worldsheet path integral. It is a subtle problem to compute the actual value of these quantities and only a partial understanding exists, see \cite{Tseytlin:1987ww, Liu:1987nz, Tseytlin:1988tv, Erbin:2019uiz}. Various such quantities were also successfully computed for strings on $\text{AdS}_3$ \cite{Maldacena:2001km, Troost:2011ud}. All these quantities have a physical meaning on which we would like to comment. Zero-point functions represent the on-shell value of the action of the effective spacetime theory, which is (super)gravity in the case of the closed string and the D-brane worldvolume gauge theory in the case of the open string. These quantities are generically non-vanishing and, especially in the case of the gravity on-shell action, somewhat subtle to define. 
To get a finite answer one has to introduce local counterterms on an asymptotic cutoff surface. The first of these is the Gibbons-Hawking-York boundary term \cite{Gibbons:1976ue}. Introducing a cutoff in spacetime would be inconsistent with Weyl symmetry, and it is unclear in general how to implement one in string theory. We consider this a very important open problem in understanding the emergence of gravity from string theory. One-point functions for the closed string represent tadpole diagrams in spacetime. Most of these tadpole diagrams vanish due to the spacetime equations of motion. There are however interesting non-vanishing one-point functions in string theory such as the dilaton one-point function or the example considered in \cite{Troost:2011ud}. Two-point functions represent the tree-level propagators of the spacetime theory. It was explained in \cite{Erbin:2019uiz} that these two-point functions are actually non-zero because the momentum-conserving $\delta$-function $\delta^D(k_1-k_2)$ in spacetime is divergent: the mass-shell condition implies conservation of the last component of the momenta provided that the other components are conserved. The correct expression in flat space is instead $2k^0 (2\pi)^{D-1} \delta^{D-1}(\vec{k}'-\vec{k})$. \medskip In this paper, we give a reasonably complete understanding of the disk partition function, i.e.\ the open string zero-point function. The disk partition function computes interesting quantities directly in string theory such as D-brane tensions. Historically, these have often been computed in a roundabout way by imposing various consistency conditions for the exchange of closed strings between two parallel D-branes. The challenge in this computation is the presence of the residual gauge group $\mathrm{PSL}(2,\mathbb{R})$. Since this group is non-compact, it naively has infinite volume. 
However, it was proposed in \cite{Liu:1987nz} that it essentially behaves as a group with finite \emph{negative} volume in any computation, so that the string disk partition function $Z_\text{disk}$ is simply related to the worldsheet disk partition function $Z_\text{CFT}$ by \begin{equation} Z_\text{disk}=\frac{Z_\text{CFT}}{\vol(\mathrm{PSL}(2,\mathbb{R}))}\ . \end{equation} This volume can be defined by a procedure akin to defining the gravitational on-shell action. In the normalization where the Ricci scalar on the group with respect to the biinvariant metric is $\mathcal{R}=-6$, this volume works out to be $-\frac{\pi^2}{2}$. It is however very mysterious (at least to the authors) why this procedure should give the correct result. We are thus motivated to reconsider the problem. We give in this paper three rigorous (by physicists' standards) ways to compute the disk partition function from first principles. Each of the methods reproduces this value for the effective volume. The first two methods are based on fixing a further gauge beyond the conformal gauge. Since the metric is already completely fixed, the further gauge fixing will invariably involve the matter fields on the worldsheet. For this reason we assume that the spacetime theory on which the string is propagating involves at least one flat direction, i.e.\ is for example time-independent. Backgrounds such as $\mathrm{AdS}_3 \times \mathrm{S}^3 \times \mathbb{T}^4$ also work, since the torus directions are flat. We think however that our method can be generalized to other backgrounds as well. We explore two different gauge fixing conditions in terms of the free boson $X$ describing the flat target space direction. Both of them are slightly subtle and we discuss them in detail. One can gauge fix the worldsheet path integral further and compute the effective volume of the gauge group directly in this way. 
In the third method, we compute the disk partition function by relating it to a one-point function on the disk which can be computed without problems. This is done by assuming that the flat direction is compact. This introduces a modulus in the problem and the derivative of the disk partition function with respect to the modulus is by conformal perturbation theory given by a one-point function. We again recover the effective volume of $\mathrm{PSL}(2,\mathbb{R})$. We finally apply this technique of computing disk partition functions to a short rederivation of D-brane tensions \cite{Polchinski:1995mt}. Since all relevant issues already arise for the bosonic string, we restrict to it for technical simplicity. We mention some open problems in Section~\ref{sec:conclusions}. \section{\texorpdfstring{Gauge fixing $\boldsymbol{X_{\ell,m}=0}$}{Gauge fixing Xl,m=0}} \label{sec:first gauge} We fix conformal gauge on the disk. In this section, it is convenient to use the upper hemisphere metric on the disk: \begin{equation} \label{eq:metric} \hat{g}=\frac{4 \, \mathrm{d}z \, \mathrm{d}\bar{z}}{(1+|z|^2)^2}\ , \qquad |z | \le 1\ . \end{equation} Any physical result will of course be independent of this choice because the full worldsheet theory is Weyl-invariant. This form of the metric is convenient, because there is a standard orthonormal basis for the space of $L^2$-functions given by the spherical harmonics. We can consider two function spaces given by $L^2_\text{D}(D)$ and $L^2_\text{N}(D)$, where $D$ denotes here and in the following the disk. 
The former consists of all square-integrable functions $f$ on the unit disk satisfying Dirichlet boundary conditions $f(|z|=1)=0$,\footnote{We could generalize this to $f(|z|=1)=x_0$ for some constant $x_0$, but this constant could be removed by a spacetime translation.} while the latter consists of all square-integrable functions satisfying Neumann boundary conditions $\partial_n f(|z|=1)=0$, where $\partial_n$ is the normal (radial) derivative. Spherical harmonics are given by $Y_{\ell,m}$, $\ell=0$, $1$, $2$, $\dots$ and $m=-\ell$, $-\ell+1$, $\dots$, $\ell$. They satisfy Neumann (Dirichlet) boundary conditions for $\ell+m \in 2\mathbb{Z}$ ($\ell+m \in 2\mathbb{Z}+1$). As we mentioned in the Introduction, we assume that there is one flat direction in spacetime which is described by the worldsheet boson $X$. In the following we will concentrate our attention on this boson. We can expand it into spherical harmonics \begin{equation} X=\sum_{\ell,m} X_{\ell,m} Y_{\ell,m} \label{eq:spherical harmonics expansion} \end{equation} with $X_{\ell,m}=0$ for $\ell+m \in 2\mathbb{Z}+1$ and Neumann boundary conditions or $\ell+m \in 2\mathbb{Z}$ and Dirichlet boundary conditions. Moreover, reality of $X$ imposes $X_{\ell,m}=\overline{X_{\ell,-m}}$. Even after fixing the conformal gauge, a residual gauge freedom remains. It is given by the group of conformal transformations, which acts as \begin{equation} X(z) \longmapsto X \circ \gamma^{-1}(z) \end{equation} on the free boson $X$ and fixes $g$. The latter is achieved by combining the diffeomorphism $\gamma$ with an appropriate Weyl transformation. The (global) conformal group on the disk is $\mathrm{PSU}(1,1) \cong \mathrm{PSL}(2,\mathbb{R})$ and acts by fractional linear transformations.\footnote{$\mathrm{PSL}(2,\mathbb{R})$ naturally acts on the upper half plane, whereas $\mathrm{PSU}(1,1)$ naturally acts on the unit disk. The two groups are isomorphic via the Cayley transform.
We mostly use the name $\mathrm{PSL}(2,\mathbb{R})$.} Thus we have a path integral schematically of the following form \begin{equation} Z_{\text{disk}}=\int\frac{\mathscr{D}X}{\mathop{\text{vol}}(\mathrm{PSL}(2,\mathbb{R}))} \ \mathrm{e}^{-S[X]}\ . \end{equation} The path integral runs over the appropriate space of functions (either $L^2_\text{N}(D)$ or $L^2_\text{D}(D)$). We remark that we have suppressed the presence of the ghosts and the other bosons in the path integral. Only in their presence does the conformal anomaly cancel, and only then does it make sense to gauge $\mathrm{PSL}(2,\mathbb{R})$. Liu and Polchinski \cite{Liu:1987nz} provided a prescription to calculate the ``regularized'' finite volume of the group $\mathrm{PSL}(2,\mathbb{R})$, which we review in Appendix~\ref{app:volume PSL2R}. Using that, one can obtain \begin{equation} Z_{\text{disk}}=-\frac{2}{\pi^2} \int \mathscr{D}X \ \mathrm{e}^{-S[X]}\ . \end{equation} Here one tacitly assumes a particular normalization of the ghost zero modes. This issue is also discussed in Appendix~\ref{app:volume PSL2R}. We denote the CFT path integral that appears on the RHS by $Z_\text{CFT}$, \begin{equation} Z_{\text{CFT}}\equiv \int \mathscr{D}X \ \mathrm{e}^{-S[X]}\ . \end{equation} We emphasize that the calculation of $Z_{\text{CFT}}$ does not gauge the global conformal group $\mathrm{PSL}(2,\mathbb{R})$. In what follows, we are going to show that \begin{equation}\label{eq:Main} \frac{Z_{\text{disk}}}{Z_{\text{CFT}}}=-\frac{2}{\pi^2} \end{equation} using standard QFT techniques, rather than calculating the regularized volume of $\mathrm{PSL}(2,\mathbb{R})$. Thus we also want to gauge-fix the global conformal group $\mathrm{PSL}(2,\mathbb{R})$. We achieve this by a slightly modified Faddeev-Popov procedure.
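The parity rule quoted above (Neumann modes for $\ell+m$ even, Dirichlet modes for $\ell+m$ odd) boils down to the reflection property $P_\ell^m(-x)=(-1)^{\ell+m}P_\ell^m(x)$ of the associated Legendre functions, evaluated at the equator $x=\cos\theta=0$, i.e.\ at the boundary $|z|=1$ of the hemisphere. A small numerical sketch (illustrative only, using the standard upward recurrence; not part of the derivation):

```python
import math

def assoc_legendre(l, m, x):
    """Associated Legendre P_l^m(x) via the standard upward recurrence
    (Condon-Shortley phase), valid for 0 <= m <= l and |x| <= 1."""
    pmm = 1.0
    if m > 0:
        somx2 = math.sqrt((1.0 - x) * (1.0 + x))
        fact = 1.0
        for _ in range(m):
            pmm *= -fact * somx2
            fact += 2.0
    if l == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm
    if l == m + 1:
        return pmmp1
    for ll in range(m + 2, l + 1):
        pll = (x * (2 * ll - 1) * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
        pmm, pmmp1 = pmmp1, pll
    return pmmp1

# At the equator x = 0: P_l^m(0) = 0 for l+m odd (Dirichlet modes),
# while the theta-derivative vanishes for l+m even (Neumann modes).
h = 1e-6
for l in range(0, 6):
    for m in range(0, l + 1):
        value = assoc_legendre(l, m, 0.0)
        deriv = (assoc_legendre(l, m, h) - assoc_legendre(l, m, -h)) / (2 * h)
        if (l + m) % 2 == 1:   # Dirichlet: vanishing value, non-zero slope
            assert abs(value) < 1e-12 and abs(deriv) > 1e-3
        else:                   # Neumann: non-zero value, vanishing slope
            assert abs(deriv) < 1e-5 and abs(value) > 1e-3
```

The $e^{im\varphi}$ dependence drops out of this check, since the boundary conditions only constrain the polar profile.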
\subsection{Gauge choice and admissibility} \label{subsec:gauge choice} The group of M\"{o}bius transformations preserving the unit disk is \begin{equation} \mathrm{PSU}(1,1)=\left\{ \begin{pmatrix} a & b \\ \bar{b} & \bar{a} \end{pmatrix} \, \Big| \, |a|^2-|b|^2=1 \right\}\Big/ \sim\ . \end{equation} Here, the equivalence $\sim$ identifies each matrix with its negative. Only the $\mathrm{U}(1)$ subgroup specified by $b=0$ acts by isometries on the metric. This realization of $\mathrm{PSU}(1,1)$ leads to a natural normalization of the biinvariant metric that is induced from ambient $\mathbb{C}^2 \cong \mathbb{R}^4$. This is the normalization which we shall use in the following. The explicit measure is given in Appendix~\ref{app:volume PSL2R}. We would like to impose the gauge \begin{equation} X_{\ell,\pm m}=0 \end{equation} for some choice of $(\ell,m)$ in the expansion eq.~\!\eqref{eq:spherical harmonics expansion}. Note that due to the reality condition $\overline{X_{\ell,m}}=X_{\ell,-m}$, this is one complex or two real conditions. This fixes all non-compact directions of $\mathrm{PSL}(2,\mathbb{R}) \cong \mathrm{PSU}(1,1)$ and only leaves the Cartan subgroup $\mathrm{U}(1)$ unbroken. Since its volume is finite, it is easy to take this into account. For concreteness, let us consider the following two gauge fixing conditions: \begin{tcolorbox} \begin{equation} \text{Dirichlet:}\ X_{2,\pm 1}=0\ , \qquad \text{Neumann:}\ X_{1,\pm 1}=0\ . \label{Gauge} \end{equation} \end{tcolorbox} In what follows we prove the admissibility of this gauge choice. The argument for $m\not \in \{-1,1\}$ is analogous and will lead to the same final result. \paragraph{Admissibility of gauge choice.} Since the Cartan subgroup $\mathrm{U}(1) \subset \mathrm{PSU}(1,1)$ remains unbroken, it is convenient to consider the coset $\mathrm{PSU}(1,1)/\mathrm{U}(1) \cong D$, which can also be identified with the unit disk. We stress that this unit disk is not the worldsheet!
It comes equipped with a hyperbolic metric that descends from $\mathrm{PSU}(1,1)$, which for $\alpha \in D$ takes the form \begin{equation} g=\frac{\pi \, \mathrm{d} \alpha \, \mathrm{d} \bar{\alpha}}{(1-|\alpha|^2)^2}\ . \end{equation} The normalization is induced from the Haar measure on $\mathrm{PSU}(1,1)$. An explicit representative of $\alpha$ in $\mathrm{PSU}(1,1)$ is given by \begin{equation} \gamma_\alpha=\frac{1}{\sqrt{1-|\alpha|^2}} \begin{pmatrix} 1 & \alpha \\ \bar{\alpha} & 1 \end{pmatrix}\ . \label{eq:gammaalpha} \end{equation} This M\"{o}bius transformation has the property that $\gamma_\alpha(0)=\alpha$. To be explicit, the gauge conditions in eq.~\eqref{Gauge} read respectively \begin{subequations} \begin{align}\label{eq:Dg} \text{Dirichlet}&: \qquad\int_{D}\frac{4\,\mathrm{d}^2 z}{(1+|z|^2)^2} X \circ \gamma_\alpha^{-1} (z,\bar{z}) Y_{2,1}(\bar z, z) =0\ ,\\ \text{Neumann}&:\qquad \int_{D}\frac{4\,\mathrm{d}^2 z}{(1+|z|^2)^2} X \circ \gamma_\alpha^{-1} (z,\bar{z}) Y_{1,1}(\bar z, z)=0\ . \end{align} \end{subequations} Here we used orthonormality of the spherical harmonics on the disk, see Appendix~\ref{app:spherical harmonics}. We should also clarify that by $\mathrm{d}^2z$ we mean $\mathrm{d} \Re(z) \, \mathrm{d}\Im(z)$. We wrote the gauge condition as one complex condition here, which upon complex conjugation would also imply the vanishing of $X_{2,-1}$ and $X_{1,-1}$ respectively. In order to show the admissibility, we define the complex-valued function \begin{equation} V(\alpha)=\int_D \frac{\mathrm{d}^2 z}{(1+|z|^2)^2} X \circ \gamma_\alpha^{-1} (z,\bar{z}) \overline{Y_{\ell,1}(z,\bar{z})}\ . \end{equation} Note that $\overline{Y_{\ell,1}(z,\bar{z})}=Y_{\ell,-1}(\bar{z},z)$. We will call it $V_{\mathrm{N}}(\alpha)$ when we set $\ell=1$ and are dealing with the Neumann boundary condition. Similarly, for the Dirichlet case, we will call it $V_{\mathrm{D}}(\alpha)$ and set $\ell=2$.
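As a quick sanity check of the parametrization eq.~\eqref{eq:gammaalpha}, the fractional linear map $\gamma_\alpha(z)=(z+\alpha)/(\bar{\alpha}z+1)$ can be verified numerically to satisfy $\gamma_\alpha(0)=\alpha$ and to preserve both the open unit disk and its boundary circle (a minimal sketch, not part of the argument):

```python
import cmath
import random

def gamma_alpha(alpha, z):
    # fractional linear action of the PSU(1,1) representative from
    # eq. (gammaalpha): gamma_alpha(z) = (z + alpha) / (conj(alpha) z + 1)
    return (z + alpha) / (alpha.conjugate() * z + 1)

random.seed(1)
for _ in range(200):
    # random alpha strictly inside the unit disk
    alpha = 0.95 * random.random() * cmath.exp(2j * cmath.pi * random.random())
    # gamma_alpha maps the center of the disk to alpha
    assert abs(gamma_alpha(alpha, 0) - alpha) < 1e-12
    # it preserves the open unit disk ...
    z = 0.99 * random.random() * cmath.exp(2j * cmath.pi * random.random())
    assert abs(gamma_alpha(alpha, z)) < 1.0
    # ... and the boundary circle |z| = 1
    w = cmath.exp(2j * cmath.pi * random.random())
    assert abs(abs(gamma_alpha(alpha, w)) - 1.0) < 1e-12
```

The overall normalization $1/\sqrt{1-|\alpha|^2}$ of the matrix drops out of the fractional linear action, which is why it does not appear in the map.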
Showing admissibility of the gauge amounts to showing that $V(\alpha)$ has a zero in the unit disk. In fact, we should also determine the number of zeros since this will be needed in the calculation of the Faddeev-Popov determinant eventually. It turns out that the number of zeros of $V(\alpha)$ in the unit disk can be determined from its behavior near the boundary by using Stokes' theorem as explained below. Thus, we first analyze the behavior of $V(\alpha)$ for $\alpha=\rho \, \mathrm{e}^{i \theta}$ and $\rho$ close to 1. This behavior of $V(\alpha)$ is entirely universal, because $\gamma_\alpha^{-1}(z)$ is close to the boundary of the worldsheet disk for any choice of $z$ and $\rho \sim 1$. Thus in this limit one is only probing the function $X$ close to the boundary of the worldsheet disk, where its behavior is specified by the boundary conditions. We find \begin{subequations} \label{eq:boundarybehave} \begin{align} V_{\mathrm{N}}(\alpha) &=i(1-\rho)e^{-i\theta} \underbrace{\sum_{\begin{subarray}{c} \ell \, m \\ \ell+m=\text{even}\end{subarray}} h_\text{N}(\ell,m) \Im\left(X_{\ell,m} \mathrm{e}^{i m \theta}\right)}_{f_\text{N}(\theta)\equiv\, \text{real function}}\, +\, o(1-\rho)\ ,\\ V_{\mathrm{D}}(\alpha)&=(1-\rho)e^{-i\theta} \underbrace{\sum_{\begin{subarray}{c} \ell,\, m \\ \ell+m=\text{odd}\end{subarray}} h_\text{D}(\ell,m) \Re\left(X_{\ell,m} \mathrm{e}^{i m \theta}\right)}_{f_\text{D}(\theta)\equiv\, \text{real function}}\,+\,o(1-\rho)\ . \end{align} \end{subequations} The numbers $h_\text{N}(\ell,m)$ and $h_\text{D}(\ell,m)$ are real. Eq.~\eqref{eq:boundarybehave} follows from the observation \begin{equation} \int_D \frac{4\, \mathrm{d}^2 z}{(1+|z|^2)^2} Y_{\ell,m} \circ \gamma_\alpha^{-1} (z,\bar{z}) \overline{Y_{1,1}(z,\bar{z})}=(1-\rho) e^{i(m-1)\theta} h_\text{N}(\ell,m) \,+\,o(1-\rho) \end{equation} with $h_\text{N}(\ell,m)=-h_\text{N}(\ell,-m)$ for the Neumann boundary condition. 
This leads to only the imaginary part of $X_{\ell,m} \mathrm{e}^{i m \theta}$ surviving in the sum. Furthermore, reality of $X_{0,0}$ implies the vanishing of the $m=0$ term. For Dirichlet boundary conditions, we instead have \begin{equation} \int_D \frac{4\, \mathrm{d}^2 z}{(1+|z|^2)^2} Y_{\ell,m} \circ \gamma_\alpha^{-1} (z,\bar{z}) \overline{Y_{2,1}(z,\bar{z})}=(1-\rho) e^{i(m-1)\theta} h_\text{D}(\ell,m) \,+\,o(1-\rho) \end{equation} where $h_\text{D}(\ell,m)=h_\text{D}(\ell,-m)$. This leads to only the real part surviving. It is easy to compute these integrals in \texttt{Mathematica} for low values of $\ell$ and convince oneself of the validity of this behavior. We have not tried to give a rigorous proof of this property. Now we consider eq.~\eqref{eq:boundarybehave} and compute the following contour integral: \begin{equation} N\equiv \frac{1}{2\pi i}\int_{\partial D} \frac{\mathrm{d} V}{V}\ . \end{equation} Here the contour encircles $D$ once in the counterclockwise sense. To make this well-defined, we take the contour to be very close to the boundary. We can compute this directly from the behavior eq.~\!\eqref{eq:boundarybehave}: \begin{equation} N=\frac{1}{2\pi i} \int_0^{2\pi } \frac{\mathrm{d} (\mathrm{e}^{-i \theta} f(\theta))}{\mathrm{e}^{-i \theta}f(\theta)} =\frac{1}{2\pi i} \int_0^{2\pi } (-i \mathrm{d}\theta+\mathrm{d} \log f(\theta))=-1+w(f)\ , \end{equation} where $w(f)$ is the winding number of the function $f(\theta)$ (which we called $f_\text{N}$ and $f_\text{D}$ in eq.~\eqref{eq:boundarybehave} depending on the boundary condition). We would like to conclude that the winding number $N$ of $V$ around the boundary is $-1$. However, $f(\theta)$ is real and is not generally sign definite, hence can potentially cross zero. For such functions, the winding number around zero is ill-defined.
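The count $N=-1+w(f)$ is easy to check numerically for sample boundary profiles: a real, nowhere-vanishing $f$ has $w(f)=0$ and hence $N=-1$, and the same holds for a sign-changing $f$ once it is given a small constant imaginary offset (a toy illustration with made-up profiles, not the actual $V(\alpha)$):

```python
import cmath
import math

def winding_number(points):
    # winding of a closed curve of nonzero complex points around the
    # origin, obtained by summing small phase increments
    total = 0.0
    for a, b in zip(points, points[1:] + points[:1]):
        total += cmath.phase(b / a)
    return round(total / (2 * math.pi))

n = 4000
ts = [2 * math.pi * k / n for k in range(n)]

# real, nowhere-vanishing profile f: w(f) = 0, so N = -1 + w(f) = -1
curve1 = [cmath.exp(-1j * t) * (1.0 + 0.8 * math.sin(3 * t)) for t in ts]
assert winding_number(curve1) == -1

# sign-changing profile shifted by a constant imaginary part
# (the analogue of the epsilon-regularization discussed next): again N = -1
curve2 = [cmath.exp(-1j * t) * (math.sin(3 * t) + 0.1j) for t in ts]
assert winding_number(curve2) == -1
```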
To cure this we perform the following replacement \begin{subequations} \label{eq:shift} \begin{align} \text{Dirichlet}:&\qquad X\to X+ i\varepsilon Y_{1,0}\ ,\\ \text{Neumann}:&\qquad X\to X+ \varepsilon Y_{1,0}\ , \end{align} \end{subequations} with fixed $\varepsilon\ne 0$. This results in an additive modification of eq.~\!\eqref{eq:boundarybehave}; the modified function $f_{\mathrm{N}}(\theta)$ has a constant real piece while the modified $f_{\mathrm{D}}(\theta)$ has a constant imaginary piece. This guarantees that the modified function $f(\theta)$ does not pass through the origin and $w(f)=0$. So with this modification, we have \begin{equation}\label{eq:gcs} N =-1\ . \end{equation} Before analyzing the above equation, let us discuss the meaning of the regularization. The path integral can be understood as a contour integral in the space of complex-valued $L^2$-functions. This translates into the reality condition $\overline{X_{\ell,m}}=X_{\ell,-m}$ which specifies the contour for the modes. However, one can slightly shift the contour which should leave the value of the path integral unchanged. For the Dirichlet case, eq.~\!\eqref{eq:shift} amounts to $X_{1,0}\to X_{1,0}+i \varepsilon$. This should be thought of as doing the Gaussian integral over the $\mathrm{Im}X_{1,0}=\varepsilon$ line instead of on the real line.\footnote{The interpretation for the Neumann case is not as simple as the Dirichlet one, since here we are regulating using a component which does not really respect the Neumann boundary condition.} We should also mention that the details of this modification do not matter. We could modify $X$ in any infinitesimal way, since any generic perturbation of a real function will result in a vanishing winding number. We just choose \eqref{eq:shift} for definiteness. Eq.~\!\eqref{eq:gcs} implies that $V$ has exactly one zero in the disk, provided one counts zeros with signs and multiplicities as follows.
For a generic complex function $V$ on the unit disk, zeros are isolated. We can encircle a zero by a contour and view $V(\alpha)$ restricted to the contour as a map $\mathrm{S}^1 \longrightarrow \mathbb{C} \setminus \{0\}$. There is a winding number associated to this map, which is the order of the zero. For example, the function $V(\alpha)=\alpha$ has a zero of order 1 around the origin, whereas the function $V(\alpha)=\bar{\alpha}$ has a zero of order $-1$ around the origin. For a zero of order $n$, we compute easily \begin{equation} \int_\mathcal{C} \frac{\mathrm{d}V}{V}=n\ , \end{equation} where the contour $\mathcal{C}$ encircles only the zero of $V$. Now by Stokes' theorem it follows that the sum of the orders of zeros has to be $-1$. In particular, there is at least one zero and the gauge is admissible. The significance of the minus sign will become clear in the following section, when we discuss a signed version of the Faddeev-Popov gauge fixing procedure. Once we have proved that the gauge is admissible, the regularization parameter $\varepsilon$ does not matter and can be set to $0$. We will do so in the rest of the calculation. For different gauges where we impose $X_{\ell,m}=0$ with $m\not\in \{-1, 1\}$, we should instead consider \begin{equation} V(\alpha)=\int_D \frac{\mathrm{d}^2 z}{(1+|z|^2)^2} X \circ \gamma_\alpha^{-1} (z,\bar{z}) \overline{Y_{\ell,m}(z,\bar{z})}\ , \end{equation} and then the overall winding number $N$ turns out to be $-m$. In what follows we will use the gauge where $m=1$. It is possible to perform the computation with other choices of gauge with $m\neq 1$ as well (as long as $m \ne 0$, in which case the gauge is no longer admissible). \subsection{Computation of the path integral} After these preparations, the actual computation of the gauge-fixed partition function is very easy. We can apply the modified Faddeev-Popov procedure that we reviewed in Appendix~\ref{app:FP procedure} to our problem.
It is modified in that it counts intersections of the gauge orbit with the gauge slice with signs. This is necessary because while the gauge we have chosen is admissible, it is not uniquely so. The modified FP-procedure cancels unwanted intersections of the gauge orbit and the gauge slice by counting them with minus signs. The gauge group is $\mathrm{PSL}(2,\mathbb{R})$ and the gauge condition is $F(X)=(X^g)_{1,1}=0$ for Neumann and $F(X)=(X^g)_{2,1}=0$ for Dirichlet boundary conditions. The computation in the previous Section~\ref{subsec:gauge choice} shows in fact precisely that the intersection number $\mathcal{I}$ between the gauge orbit and the gauge slice is $\mathcal{I}=-1$, independent of $X$, i.e.\ \begin{equation} -1=\int_\mathcal{G} \mathrm{d}g\ \mathop{\text{det}} \mathop{\text{Jac}} F(X^g)\, \delta(F(X^g))\ . \end{equation} For $m\neq 1$, the LHS of the above equation reads $-m$ instead of $-1$, since the intersection number is $\mathcal{I}=-m$. In what follows we will use $m=1$. \paragraph{Neumann boundary conditions.} The Neumann condition involves the modes with $\ell+m$ even. The gauge fixing condition is $F(X)=X_{1,1}=0$. The Jacobian \begin{equation} \mathop{\text{Jac}} F(X^g) \end{equation} is linear in $X$. Hence it can be evaluated mode by mode. It is actually non-vanishing only for finitely many of the modes $X_{\ell,m}$. When expressing the group element $g$ in terms of $\alpha \in \mathrm{PSU}(1,1)/\mathrm{U}(1)$ through \eqref{eq:gammaalpha} (and writing $X^{\gamma_\alpha} \equiv X^\alpha$), we have in fact the identity \begin{equation} 1=-\int\frac{\pi\, \mathrm{d}^2\alpha}{(1-|\alpha|^2)^2}\ J_{\mathrm{N}}(X^\alpha)\ \delta^2( F(X^{\alpha})) \end{equation} with \begin{equation} \pi J_{\mathrm{N}}(X)=\frac{36 }{5}(\mathrm{Im}X_{2,2})^2+\frac{36 }{5}(\mathrm{Re}X_{2,2})^2-\frac{6}{5} X_{2,0}^2\ .
\end{equation} The gauge-fixed path integral hence reads explicitly \begin{equation} Z^{\mathrm{N}}_{\text{disk}}=-\int\mathscr{D}X\ \delta(\mathrm{Re}X_{1,1}) \delta(\mathrm{Im}X_{1,1})\, J_{\mathrm{N}}(X) \, \mathrm{e}^{-S[X]}\ , \end{equation} where the action in terms of modes is given by \begin{equation} S[X]=\frac{1}{4\pi\alpha'} \sum_{\ell+m\in 2\mathbb{Z}}\ell(\ell+1)|X_{\ell,m}|^2\ . \end{equation} Hence in the ratio of the gauged and the ungauged CFT partition functions all but finitely many modes cancel. Thus it is given by a simple ratio of Gaussian integrals. It works out to be \begin{align} \frac{Z^{\mathrm{N}}_{\text{disk}}}{Z^{\mathrm{N}}_{\text{CFT}}}&=-\frac{2}{\pi^2}\ . \end{align} \paragraph{Dirichlet boundary conditions.} The computation is completely analogous. The Faddeev-Popov determinant works out to be \begin{equation} \pi J_{\mathrm{D}}(X)=\frac{64 }{7}\left[(\Im X_{3,2})^2+(\Re X_{3,2})^2\right]-\frac{16}{5} \sqrt{\frac{3}{7}} X_{1,0} X_{3,0}-\frac{2 }{5}(X_{1,0})^2-\frac{96}{35}(X_{3,0})^2 \end{equation} in this case. In particular it again only involves finitely many modes and allows one to reduce the ratio of the gauged and the ungauged partition functions to a ratio of finite-dimensional integrals. One again recovers \begin{tcolorbox} \begin{equation} \frac{Z_{\text{disk}}^\text{D}}{Z_{\text{CFT}}^\text{D}}=\frac{Z_{\text{disk}}^\text{N}}{Z_{\text{CFT}}^\text{N}}= -\frac{2}{\pi^2}=\big(\text{Regularized volume of}\ \mathrm{PSL}(2,\mathbb{R})\big)^{-1}\ , \end{equation} \end{tcolorbox} in agreement with the regularization procedure discussed in \cite{Liu:1987nz}. This is the result we anticipated in eq.~\!\eqref{eq:Main}. \section{Gauge fixing \texorpdfstring{$\boldsymbol{\mathrm{d}X(0)=0}$}{dX(0)=0}} \label{sec:alternative gauge} In this section, we repeat the calculation using a different gauge choice. We mostly focus on the Neumann case and indicate the necessary changes for the Dirichlet case. We used the gauge choice $X_{1,\pm 1}=0$ before.
The difficulty for this gauge choice was to establish admissibility. We saw that the gauge is not uniquely fixed, but counting solutions with the sign of the corresponding Jacobian that enters the Faddeev-Popov determinant, the signed count of solutions is always one (up to the subtlety that we had to shift the contour slightly in the complex plane). On the other hand, it was almost trivial to compute the path integral with the insertion of the corresponding delta-function and the Jacobian, because this only involved finitely many modes $X_{\ell,m}$. In this section we will shift the difficulty -- our gauge choice is easily seen to be admissible, but computing the actual path integral will be more technical. \subsection{Admissibility and uniqueness} Our gauge condition reads \begin{equation} \mathrm{d} X(0)=0\ , \end{equation} i.e.~the center of the disk is a critical point of the spacetime coordinate $X$. As before, this leaves the $\mathrm{U}(1) \subset \mathrm{PSL}(2,\mathbb{R})$ subgroup unbroken. But since $\mathrm{U}(1)$ is compact, it simply yields an additional factor of $\pi$ in the final result.\footnote{The volume of $\mathrm{U}(1)$ is $\pi$ and not $2\pi$ because the gauge group is $\mathrm{PSL}(2,\mathbb{R})$ and not $\mathrm{SL}(2,\mathbb{R})$.} We will first discuss this condition for Neumann boundary conditions. Before discussing admissibility of this gauge, we should address a subtlety. The restriction $X|_{\partial D}$ is a function on $\partial D \cong \mathrm{S}^1$ and as such has local extrema (at least two of them). Since for Neumann boundary conditions also $\partial_n X|_{\partial D}=0$, it follows that these local extrema of $X|_{\partial D}$ are also local extrema of $X$. Thus for generic $X$ there are always local extrema on the boundary of the disk. This is undesirable for our purposes.
To rectify this behavior, we slightly modify the boundary condition as follows: \begin{equation} \partial_n X(z)\Big|_{\partial D}=\varepsilon \end{equation} for small $\varepsilon$. Here $\varepsilon$ can in principle be a non-trivial function on the boundary of the disk -- our only requirement is that it does not possess a zero. This choice guarantees that there are no local extrema on the boundary of the disk; the modification shifts them slightly outside or inside of the disk. Now we can discuss admissibility of the gauge. For this consider $\mathrm{d}X$, which we can view as a vectorfield over $D$. We equip $D$ with a flat metric, so that 1-forms can be identified with vectorfields. Then this vectorfield has roughly the form as depicted in figure~\ref{fig:vectorfieldN}. In the example of the figure, there are three critical points: two (local) maxima and one saddle point. \begin{figure}[!ht] \begin{center} \includegraphics[width=.5\textwidth]{Neumann_vectorfield.pdf} \end{center} \caption{The derivative $\mathrm{d}X$ on the disk.} \label{fig:vectorfieldN} \end{figure} Thus, our gauge choice is admissible in this example, but not uniquely so. In general, the number of (local) maxima, minima and saddle points is constrained by the Poincar\'e-Hopf theorem.\footnote{Or alternatively by Morse theory when $X$ is a Morse function.} The Poincar\'e-Hopf theorem says that for a vectorfield of the form we are considering \begin{equation} \label{eq:topocons} \text{\# maxima}-\text{\# saddle points}+\text{\# minima}=1\ . \end{equation} The RHS of this equation is the Euler characteristic of the disk. This equation shows in particular that the gauge is admissible. We are thus in a similar situation as for the other gauge, where the gauge is not uniquely fixed, but different solutions to the gauge condition are constrained by a topological condition.
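The counting \eqref{eq:topocons} can be illustrated on a simple explicit example: $X(x,y)=(x^2-\tfrac{1}{4})^2+y^2$ has two minima at $(\pm\tfrac{1}{2},0)$ and one saddle at the origin, and its gradient points outward everywhere on the boundary circle, mimicking the $\varepsilon$-deformed Neumann condition (a toy example, not taken from the paper):

```python
import math

# toy height function X(x, y) = (x^2 - 1/4)^2 + y^2 on the unit disk
def grad(x, y):
    return (4 * x * (x * x - 0.25), 2 * y)

# its three critical points inside the disk: two minima and one saddle
critical_points = [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]

index_sum = 0
for (x, y) in critical_points:
    gx, gy = grad(x, y)
    assert abs(gx) < 1e-12 and abs(gy) < 1e-12
    # Hessian of X is diag(12 x^2 - 1, 2); off-diagonal entries vanish
    det_hess = (12 * x * x - 1.0) * 2.0
    index_sum += 1 if det_hess > 0 else -1   # +1 for extrema, -1 for saddles

# Poincare-Hopf: the signed count equals the Euler characteristic of the disk
assert index_sum == 1

# the gradient points outward everywhere on the boundary circle,
# playing the role of the deformed Neumann condition d_n X = epsilon > 0
for k in range(1000):
    t = 2 * math.pi * k / 1000
    x, y = math.cos(t), math.sin(t)
    gx, gy = grad(x, y)
    assert gx * x + gy * y > 0
```

Any deformation of this example that keeps the gradient outward-pointing on the boundary can change the individual counts, but not the signed sum.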
We can exploit this by considering the following quantity \begin{equation} \int_{\mathrm{PSL}(2,\mathbb{R})} \mathrm{d}\gamma\ \det(\text{Hess}(X^\gamma)(0)) \delta^2(\mathrm{d} X^\gamma(0))\,. \end{equation} Here, $\mathrm{d}\gamma$ is the Haar measure and $X^\gamma \equiv X \circ \gamma^{-1}$ as before. $\text{Hess}(X)(0)$ is the Hessian matrix \begin{equation} \text{Hess}(X)(0)=\begin{pmatrix} \partial_x^2 X(0) & \partial_x \partial_y X(0)\\ \partial_x \partial_y X(0) & \partial_y^2 X(0) \end{pmatrix}\ . \end{equation} Given our previous discussion, we can evaluate this expression very explicitly. As before, we can parametrize the coset $\mathrm{PSL}(2,\mathbb{R})/\mathrm{U}(1)$ by $\alpha \in D$, see eq.~\eqref{eq:gammaalpha}. Following the logic of the modified Faddeev-Popov procedure, this evaluates to \begin{align} \int_D \frac{\pi \, \mathrm{d}\alpha\, \mathrm{d}\bar{\alpha}}{(1-|\alpha|^2)^2} \det(\mathrm{Hess}(X^\alpha)(0)) \delta^2(\mathrm{d}X^\alpha(0))=\pi \sum_{\alpha_0} \text{sgn}\left(\det \text{Hess}(X)(\alpha_0)\right)\ , \end{align} where the sum runs over the critical points $\alpha_0$ of $X$ in the disk. We finally have \begin{align} \text{sgn}\left(\det \text{Hess}(X)(\alpha_0)\right)=\begin{cases} +1 & \text{$\alpha_0$ is a maximum or minimum of $X(z)$} \\ -1 & \text{$\alpha_0$ is a saddle point of $X(z)$} \end{cases} \end{align} Thus, by the topological constraint \eqref{eq:topocons} on the maxima, minima and saddle points, we have simply \begin{align} \int_D \frac{\pi \, \mathrm{d}\alpha\, \mathrm{d}\bar{\alpha}}{(1-|\alpha|^2)^2} \det(\mathrm{Hess}(X^\alpha)(0)) \delta^2(\mathrm{d}X^\alpha(0))=\pi \ . \end{align} In other words, the intersection number between the gauge slice and the gauge orbit is $\mathcal{I}=1$. The general logic is again given by the modified FP-procedure that we review in Appendix~\ref{app:FP procedure}.
We finally insert this identity in the path integral for the disk partition function \begin{multline} \int \frac{\mathscr{D}X}{\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R}) }\mathrm{e}^{-S[X]}\\ =\frac{1}{\pi} \int \frac{\mathscr{D}X}{\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R}) } \int_D \frac{\pi \, \mathrm{d}\alpha\, \mathrm{d}\bar{\alpha}}{(1-|\alpha|^2)^2} \det(\mathrm{Hess}(X^\alpha)(0)) \delta^2(\mathrm{d}X^\alpha(0))\mathrm{e}^{-S[X]}\ . \end{multline} We have suppressed the other directions of the sigma model as well as the ghost fields from the notation for simplicity, but we should remember that they are present in order to have a non-anomalous $\mathrm{PSL}(2,\mathbb{R})$ symmetry. With this understanding, both the measure and the action are invariant under $\mathrm{PSL}(2,\mathbb{R})$ transformations -- $\mathscr{D}X=\mathscr{D}X^\gamma$ and $S[X^\gamma]=S[X]$. Thus, after replacing $X$ by $X^\alpha$ in the measure and the action, we can rename $X^\alpha \to X$ everywhere. The $\alpha$-integral then formally is \begin{equation} \int_D \frac{\pi \, \mathrm{d}\alpha\, \mathrm{d}\bar{\alpha}}{(1-|\alpha|^2)^2}=\int_{\mathrm{PSL}(2,\mathbb{R})} \mathrm{d}\gamma=\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R}) \ , \end{equation} which cancels the corresponding factor in the denominator (at least, this is our definition of what we mean by $\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R}) $). Thus, we end up with the following gauge-fixed form of the disk partition function \begin{equation} Z_\text{disk}=\frac{1}{\pi} \int \mathscr{D}X \det(\mathrm{Hess}(X)(0)) \delta^2(\mathrm{d}X(0))\mathrm{e}^{-S[X]}\ . \label{eq:disk partition function gauge fixed 2} \end{equation} \paragraph{Dirichlet case.} Let us indicate the changes for the Dirichlet case. Here, $X|_{\partial D}=0$ and so the derivative of $X$ along the boundary vanishes.
Hence we again expect that generically there can be critical points of $X(z)$ on the boundary $\partial D$ and we require a similar regularization as before. This situation is topologically completely equivalent to the Neumann case if we rotate the vectorfield pointwise by 90 degrees. Then the normal derivative and the derivative along the boundary get interchanged and we are back to the Neumann situation that can be regularized as discussed above. Thus, we again have after regularization \begin{equation} \text{\# maxima}-\text{\# saddle points}+\text{\# minima}=1\ . \end{equation} The rest of the computation did not require the boundary condition and hence \eqref{eq:disk partition function gauge fixed 2} also holds for Dirichlet boundary conditions. \subsection{Computation of the path integral} Next, we compute the gauge-fixed path integral eq.~\!\eqref{eq:disk partition function gauge fixed 2}. We choose a flat metric on the disk for simplicity and set $\alpha'=1$. We will again perform the computation first for Neumann boundary conditions and indicate the changes for Dirichlet boundary conditions below. Let us introduce the standard generating functional \begin{equation} W(J)=\left \langle \exp\left(i \int \mathrm{d}^2z\ X(z) J(z) \right) \right \rangle\ , \end{equation} where the correlation function is normalized such that $\langle 1 \rangle=1$. Here, $J(z)$ is an arbitrary source for $X$. We can compute the generating functional in the following standard way. The Green's function for the Laplacian on the disk with Neumann boundary conditions reads \begin{equation} G(z,w)=\frac{1}{2\pi}\left(\log |z-w|+\log \left(|w||z-w^*|\right)\right)-\frac{1}{4\pi}(|z|^2+|w|^2)\ , \end{equation} where $w^*=\frac{w}{|w|^2}$ is the point reflected at the unit circle. This Green's function is symmetric, which becomes obvious if we write it in the form \begin{equation} G(z,w)=\frac{1}{2\pi}\left(\log |z-w|+\log |1-z \bar{w}|\right)-\frac{1}{4\pi}(|z|^2+|w|^2)\ . 
\label{eq:Greens function N} \end{equation} It satisfies \begin{equation} \Delta_z G(z,w)=\delta^2(z,w)-\frac{1}{\pi}\ . \end{equation} The correction is expected, because the Laplacian has a zero mode and thus the inverse only exists for non-zero modes. One can complete the square in the path integral and derive \begin{equation} W(J)=\exp\left(\pi \int \mathrm{d}^2 z \ \mathrm{d}^2 w\ G(z,w) J(z) J(w) \right)\ . \end{equation} This expression is valid as long as the zero mode $\int \mathrm{d}^2 z\ J(z)$ vanishes. This will always be satisfied below since our gauge fixing condition does not involve the zero mode. Now we turn again to eq.~\!\eqref{eq:disk partition function gauge fixed 2}. It involves composite operators such as the determinant of the Hessian which have to be defined properly. Our regularization is to use point splitting. Correspondingly, the determinant of the Hessian becomes \begin{equation} \partial_x^2 X(z_x)\partial_y^2 X(z_y)-\partial_x \partial_y X(z_x)\partial_x \partial_y X(z_y)\ . \end{equation} Here and in the following $\partial_x$ ($\partial_y$) is the derivative with respect to the real (imaginary) part of the complex argument. We find it less confusing to use real coordinates in the computation. We used $z_x$ and $z_y$ for the two point-split points to remember which one carries more $x$- or $y$-derivatives. We ultimately want to take them both to zero. Similarly, the $\delta$-functions can be taken to be \begin{equation} \delta(\partial_x X(z_x))\delta(\partial_y X(z_y)) \ . \end{equation} It turns out that in the following computation it is very natural to take them at the same coordinates as the entries of the Hessian matrix -- this will not lead to singularities. In fact, this point-split version of the integral simply comes from the modified gauge condition \begin{equation} \partial_x X(z_x)=0\quad \text{and}\quad \partial_y X(z_y)=0\ .
\end{equation} As a first step, we can compute \begin{align} \tilde{W}(J)&=\left\langle \delta(\partial_x X(z_x))\delta(\partial_y X(z_y)) \exp\left(i \int \mathrm{d}^2z\ X(z) J(z) \right) \right \rangle \\ &=\frac{1}{(2\pi)^2} \int _{-\infty}^\infty \mathrm{d}k_x \mathrm{d} k_y \ W\big(J+k_x \partial_x \delta^2(z-z_x)+k_y \partial_y \delta^2(z-z_y)\big)\ . \end{align} Notice that as promised, the modified source still does not have a zero mode. We can plug in the explicit form of $W(J)$ to obtain \begin{align} \tilde{W}(J)&=\frac{W(J) }{(2\pi)^2} \int _{-\infty}^\infty \mathrm{d}k_x \mathrm{d} k_y \ \exp\Bigg(\pi \sum_{i,j \in \{x,y\}} k_i k_j\partial_i^{(1)}\partial_j^{(2)}G(z_i,z_j)\nonumber\\ &\qquad\qquad\qquad-2\pi\sum_{i\in \{x,y\}}k_i\int \mathrm{d}^2 z \ \partial_{i}^{(2)}G(z,z_i) J(z)\Bigg)\ . \end{align} The superscripts $(1)$ and $(2)$ indicate whether the derivative acts on the first or second entry of the Green's function. Remembering that we use point splitting to define Green's functions at coincident points, we need to subtract the singular piece of the Green's function, i.e., $\frac{1}{2\pi} \log |z-w|$ as $w \to z$. This gives \begin{equation} G_\text{reg}(z,z)=\frac{1}{2\pi}\left(\log\left(1-|z|^2\right)-|z|^2\right)\ . \end{equation} We next compute the integral over $k_x$ and $k_y$. Let \begin{equation} A_{i,j}=-\partial_i^{(1)}\partial_j^{(2)}G(z_i,z_j)\ , \qquad b_i= \int \mathrm{d}^2 z \ \partial_{i}^{(2)}G(z,z_i) J(z)\ . \end{equation} We thus simply compute the Gaussian integral with the result \begin{equation} \tilde{W}(J)=\frac{W(J) }{(2\pi)^2\sqrt{\det(A)}}\exp\left(\pi \sum_{i,j} b_i (A^{-1})_{i,j} b_j\right)\ . \end{equation} It turns out that the matrix $A$, although complicated, is indeed positive definite so that the integral over $k_x$ and $k_y$ is well-defined. By direct computation, we have \begin{equation} \det(A)\Big|_{z_x=0,z_y=0}=\frac{1}{(2\pi)^2}\ .
\end{equation} Also the exponential behaves nicely in the limit where $z_x \to 0$ and $z_y \to 0$ and we obtain \begin{equation} \sum_{i,j} b_i (A^{-1})_{i,j} b_j=2\pi \int\mathrm{d}^2 z\ \mathrm{d}^2 w\ \sum_{p\in \{x,y\}}\partial_p^{(2)} G(z,0)\partial_p^{(2)} G(w,0) J(z) J(w)\ . \end{equation} Let us define \begin{equation} \tilde{G}(z,w)=G(z,w)+2\pi\sum_{i\in \{x,y\}}\partial_i^{(2)} G(z,0)\partial_i^{(2)} G(w,0)\ . \label{eq:G tilde} \end{equation} Thus, after specialization of $z_x=z_y=0$, we have \begin{equation} \tilde{W}(J)=\frac{1}{2\pi} \exp\left(\pi \int \mathrm{d}^2 z \ \mathrm{d}^2 w\ \tilde{G}(z,w) J(z) J(w) \right)\ . \end{equation} To complete the computation, we also want to include the effect of the Hessian. Point-splitting again, we simply obtain it by taking functional derivatives. Remembering also the additional factor of $\frac{1}{\pi}$ from the volume of the residual gauge group $\mathrm{U}(1)$, we want to compute \begin{align} \frac{Z_\text{disk}}{Z_\text{CFT}}&=-\frac{1}{2\pi^2}\lim_{z_x \to 0,\, z_y \to 0} \left((\partial_x^{(1)})^2 (\partial_y^{(2)} )^2-\partial_x^{(1)}\partial_y^{(1)}\partial_x^{(2)}\partial_y^{(2)}\right)\frac{\delta}{\delta J(z_x)} \frac{\delta}{\delta J(z_y)} \tilde{W}(J) \Big|_{J=0}\\ &=-\frac{1}{\pi}\lim_{z_x \to 0,\, z_y \to 0} \left((\partial_x^{(1)})^2 (\partial_y^{(2)} )^2-\partial_x^{(1)}\partial_y^{(1)}\partial_x^{(2)}\partial_y^{(2)}\right)\tilde{G}(z_x,z_y)\ . \end{align} Here, $Z_\text{CFT}$ is the CFT partition function without gauging of $\mathrm{PSL}(2,\mathbb{R})$. There are two terms -- from the original $G(z,w)$ and from the correction term in eq.~\!\eqref{eq:G tilde}. The second term leads again to Green's functions at coincident points which we regularize as before. A direct computation then leads to \begin{equation} \frac{Z_\text{disk}}{Z_\text{CFT}}=-\frac{1}{\pi} \times \frac{2}{\pi}=-\frac{2}{\pi^2}\ . \end{equation} This is in perfect agreement with our earlier calculation.
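As an independent sanity check (not needed for the argument), the defining properties of the Neumann Green's function eq.~\eqref{eq:Greens function N} can be verified symbolically. The following sympy sketch works in real coordinates on the unit disk and checks $\Delta_z G = -\frac{1}{\pi}$ away from the delta function, as well as the symmetry $G(z,w)=G(w,z)$:

```python
import sympy as sp

# Real coordinates: z = x1 + i*y1, w = x2 + i*y2 inside the unit disk.
x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2', real=True)

# G(z,w) = (1/2pi)(log|z-w| + log|1 - z*wbar|) - (1/4pi)(|z|^2 + |w|^2),
# written via log|u| = (1/2) log|u|^2.
log_dist = sp.Rational(1, 2) * sp.log((x1 - x2)**2 + (y1 - y2)**2)
# z*conj(w) = (x1*x2 + y1*y2) + i*(y1*x2 - x1*y2)
log_image = sp.Rational(1, 2) * sp.log((1 - x1*x2 - y1*y2)**2 + (y1*x2 - x1*y2)**2)
G = (log_dist + log_image) / (2*sp.pi) - (x1**2 + y1**2 + x2**2 + y2**2) / (4*sp.pi)

# Away from z = w the delta function drops out and Delta_z G = -1/pi.
laplacian = sp.diff(G, x1, 2) + sp.diff(G, y1, 2)
assert sp.simplify(laplacian + 1/sp.pi) == 0

# Symmetry G(z,w) = G(w,z).
swapped = G.subs({x1: x2, x2: x1, y1: y2, y2: y1}, simultaneous=True)
assert sp.simplify(G - swapped) == 0
```

The two log terms are harmonic away from the source, so the constant $-\frac{1}{\pi}$ comes entirely from the quadratic piece, matching the zero-mode subtraction.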
\paragraph{Dirichlet case.} For Dirichlet boundary conditions, the following changes need to be made. The Green's function now takes the form \begin{equation} G(z,w)=\frac{1}{2\pi}\left(\log |z-w|-\log |1-z \bar{w}|\right) \label{eq:Greens function D} \end{equation} and there is no zero mode. Furthermore, the matrix $A_{i,j}$ is \emph{negative definite} in this case and thus the integral over $k_x$ and $k_y$ is a priori ill-defined. However, one can still go on by employing a double Wick rotation $k_p \to i k_p$ (but the answer is less well-defined in this case). This leads to \begin{equation} \tilde{W}(J)=-\frac{W(J) }{(2\pi)^2\sqrt{\det(A)}}\exp\left(\pi \sum_{i,j} b_i (A^{-1})_{i,j} b_j\right)\ , \end{equation} where the various quantities are given by analogous expressions as in the Neumann case. The extra minus sign comes from the analytic continuation. The Wick rotation exchanges branches of the square root. The remaining steps are completely analogous and one obtains the result \begin{equation} \frac{Z_\text{disk}}{Z_\text{CFT}}=\frac{1}{\pi} \times \left(-\frac{2}{\pi}\right)=-\frac{2}{\pi^2}\ . \end{equation} \section{Relation to a one-point function} \label{sec:one point function} In this section, we will explain yet another method to compute the disk partition function by relating it to a one-point function. This is more along the lines of how the disk partition functions were evaluated previously in the literature. Actually, this was done historically by using the soft dilaton theorem \cite{Shapiro:1975cz, Ademollo:1975pf} that relates the disk partition function to a one-point function of the dilaton with zero momentum. This exploits the fact that the dilaton appears in the spacetime effective action as an exponential. The computation we present here is simpler because one does not have to deal with the subtleties of the dilaton vertex operator and one does not have to make any assumption about the spacetime theory.
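A quick numerical check, independent of the computation above: on the boundary $|z|=1$ one has $|1-z\bar{w}| = |z-w|$, so the Dirichlet Green's function eq.~\eqref{eq:Greens function D} vanishes there, as it must. A short sketch (the sample points are arbitrary):

```python
import cmath
import math
import random

def green_dirichlet(z, w):
    """Dirichlet Green's function on the unit disk."""
    return (math.log(abs(z - w)) - math.log(abs(1 - z * w.conjugate()))) / (2 * math.pi)

random.seed(0)
w = 0.3 + 0.4j                                           # arbitrary interior source point
for _ in range(5):
    z = cmath.exp(1j * random.uniform(0, 2 * math.pi))   # boundary point, |z| = 1
    assert abs(green_dirichlet(z, w)) < 1e-12            # vanishes on the boundary
```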
\subsection{Marginal operator} Let us suppose that there is a circle of radius $L$ in the spacetime which is described by a compact free boson $X \sim X+2\pi L$. As before, we want to compute the path integral over the worldsheet CFT with a $\mathrm{PSL}(2,\mathbb{R})$ gauging and compare it with the path integral without gauging. We make use of the fact that the worldsheet partition function as well as the gauged string partition function should depend in a simple way on $L$. In fact, $L$ only enters in the path integral formalism through the zero modes which leads to the behavior \begin{subequations} \begin{align} \text{Neumann}&:\ Z_\text{CFT} \propto L^1\ , \\ \text{Dirichlet}&:\ Z_\text{CFT} \propto L^0\ , \end{align} \label{eq:proportionalities} \end{subequations} because the zero mode is only present for the Neumann boundary condition. We assume that this property continues to be true in the full string partition function $Z_\text{disk}$. In the worldsheet path integral \begin{equation} Z_\text{CFT}=\int \mathscr{D}X \ \mathrm{e}^{-S[X]}\ , \end{equation} we can make the $L$-dependence explicit by defining $X'=L^{-1} X$, which has periodicity $X' \sim X'+2\pi$. Then the worldsheet path integral reads \begin{equation} Z_\text{CFT}=L^\gamma\int \mathscr{D}X' \ \mathrm{e}^{-L^2 S[X']}\ . \end{equation} We put a prefactor $L^\gamma$ in front of the path integral to account for the fact that the measure $\mathscr{D}X'$ should also transform under this replacement. Since the replacement $X'=L^{-1} X$ is linear, the most general transformation is given by an overall factor $L^\gamma$. However, the precise value of the exponent $\gamma$ is scheme dependent and we leave it open. One can for example compute that in zeta-function regularization $\gamma=\frac{1}{6}$. Let us write $V(z) =g^{ab} \partial_a X' \partial_b X'(z)$ in the following for simplicity.
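The differentiation underlying the next step is elementary; assuming the standard Polyakov normalization $S[X']=\frac{1}{4\pi\alpha'}\int \mathrm{d}^2z\,\sqrt{g}\, g^{ab}\partial_a X'\partial_b X'$ (consistent with the prefactors that follow), bringing down the action from the exponent gives

```latex
\partial_L \left( L^{-\gamma} Z_\text{CFT} \right)
  = \int \mathscr{D}X' \ \partial_L \, \mathrm{e}^{-L^2 S[X']}
  = -\frac{L}{2\pi\alpha'} \int \mathscr{D}X' \int \mathrm{d}^2 z \ \sqrt{g}\, V(z)\,
    \mathrm{e}^{-L^2 S[X']}\ ,
```

and dividing by $L^{-\gamma} Z_\text{CFT}$ yields a normalized one-point function of the marginal operator $V$.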
We thus have \begin{equation} \frac{\partial_L (L^{-\gamma} Z_\text{CFT})}{L^{-\gamma} Z_\text{CFT}}=-\frac{L}{2\pi \alpha'} \frac{\int \mathscr{D}X' \ \int \mathrm{d}^2z \ \sqrt{g} \, V(z) \mathrm{e}^{-L^2 S[X']}}{\int \mathscr{D}X' \ \mathrm{e}^{-L^2 S[X']}}\ . \end{equation} In this expression it is now very simple to gauge fix because we are computing a one-point function. We can put the vertex operator $V(z)$ in the center of the disk. We take the disk again to be the unit disk with flat metric so that the vertex operator is inserted at $z=0$. The remaining Faddeev-Popov determinant is simply $\frac{1}{\pi}$ coming from the unbroken $\mathrm{U}(1)$. We thus deduce \begin{equation} \frac{\partial_L (L^{-\gamma} Z_\text{disk})}{L^{-\gamma} Z_\text{CFT}}= -\frac{L}{2\pi^2 \alpha'} \langle V(0) \rangle_L\ , \end{equation} where the normalized expectation value is taken w.r.t.\ the action $L^2 S[X']$. \subsection{Computation} After having related the disk partition function to a one-point function, we proceed with the calculation. The expectation value $\langle V(0) \rangle_L$ can be computed via Green's functions as in Section~\ref{sec:alternative gauge}. To start, we first point split the operator $V(z)$ and compute the two-point function \begin{equation} 4 \langle \partial X(z) \bar{\partial}X(w) \rangle \end{equation} instead, which in the limit $z,w \to 0$ gives the desired one-point function. Here we wrote again $X$ for $X'$ to avoid cluttering the notation. This gives \begin{equation} \frac{\partial_L (L^{-\gamma} Z_\text{disk})}{L^{-\gamma} Z_\text{CFT}}=-\frac{L}{2\pi^2 \alpha'} \times \left(-\frac{2\pi \alpha'}{L^2} \right) \times 4 \lim_{z,w \to 0}\partial_z \bar{\partial}_w G(z,w)\ . \end{equation} The additional factor comes from the generating functional $W(J)$ that we determine as in Section~\ref{sec:alternative gauge}. Notice that so far everything works with both boundary conditions.
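Both Green's functions are at hand, so the relevant limit can be cross-checked symbolically. Treating $z$, $\bar z$, $w$, $\bar w$ as independent variables (the standard complex-analysis trick), a sympy sketch confirms that the difference of boundary conditions reproduces the earlier result $-\frac{2}{\pi^2}$:

```python
import sympy as sp

# Independent holomorphic/antiholomorphic variables.
z, zb, w, wb = sp.symbols('z zb w wb')

# G_N - G_D = (1/pi) log|1 - z*wbar| - (1/4pi)(|z|^2 + |w|^2),
# with log|u| = (1/2)(log(u) + log(ubar)).
diff_G = (sp.log(1 - z*wb) + sp.log(1 - zb*w)) / (2*sp.pi) \
         - (z*zb + w*wb) / (4*sp.pi)

# d/dz d/dwbar, then send all insertion points to the origin.
val = sp.diff(diff_G, z, wb).subs({z: 0, zb: 0, w: 0, wb: 0})

# (4/pi) * lim = -2/pi^2, the boundary-condition-independent ratio found before.
assert sp.simplify(4 / sp.pi * val + 2 / sp.pi**2) == 0
```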
We also make the important remark that through point-splitting we have chosen a renormalization scheme and thus we can only expect agreement for a specific $\gamma$. For this reason we will consider a combination of the Neumann and Dirichlet partition functions where the scheme dependence cancels. We can compute the ratio \begin{equation} \frac{Z_\text{disk}}{L Z_\text{CFT}}= \frac{\partial_L (L^{-\gamma} Z_\text{disk}^\text{N})}{L^{-\gamma} Z_\text{CFT}^\mathrm{N}}-\frac{\partial_L (L^{-\gamma} Z_\text{disk}^\text{D})}{L^{-\gamma} Z_\text{CFT}^\mathrm{D}}\ . \end{equation} In this equality, we used the proportionalities \eqref{eq:proportionalities} as well as the expectation that the ratio $Z_\text{disk}/Z_\text{CFT}$ does not depend on the boundary conditions and is independent of $L$. We finally learn \begin{align} \frac{Z_\text{disk}}{Z_\text{CFT}}&=\frac{4}{\pi} \lim_{z,w \to 0} \partial_z \bar{\partial}_w \left(G^\text{N}(z,w)-G^\text{D}(z,w)\right) \\ &=\frac{4}{\pi} \lim_{z,w \to 0} \partial_z \bar{\partial}_w \left(\frac{1}{\pi} \log |1-z \bar{w}|-\frac{1}{4\pi} (|z|^2+|w|^2)\right) \\ &=-\frac{2}{\pi^2} \lim_{z,w \to 0} \frac{1}{(1-z \bar{w})^2}=-\frac{2}{\pi^2}\ , \end{align} in agreement with our previous results. Here we used the explicit form of the Green's functions eq.~\!\eqref{eq:Greens function N} and eq.~\!\eqref{eq:Greens function D}. \section{Application to D-branes} \label{sec:D brane tension} In this section, we apply our method to the computation of the D-brane tension. Let us imagine a setup with a D$p$-brane in directions $0$ through $p$ (in flat spacetime).
Then without turning on any fluxes, the worldvolume action of the D-brane is given by the DBI-action -- the higher-dimensional generalization of the Nambu-Goto action (in the Einstein frame): \begin{equation} S_{\text{D$p$-brane}}=T_p \int \mathrm{d}^{p+1} x\ \sqrt{\det (G^{(p)})}=T_p \vol(\mathrm{D}p)\ , \end{equation} where $\vol(\mathrm{D}p)$ is the $(p+1)$-dimensional worldvolume that the D-brane occupies in spacetime and $T_p$ is the D$p$-brane tension -- the object we want to compute. We do not turn on any $B$-field or gauge field background values. The fact that $\vol(\mathrm{D}p)$ is infinite is not a problem in our analysis. We could imagine that in a Euclidean spacetime, directions $0$ through $p$ are toroidally compactified so that the worldvolume becomes finite. We already know that $T_p \propto g_\text{s}^{-1}$ (the closed string coupling) since D-branes are non-perturbative objects. Hence the partition function of the system is to leading order in $g_\text{s}$ given by \begin{equation} Z_{\text{D$p$-brane}}=\mathrm{e}^{-S_{\text{D$p$-brane}}}=\mathrm{e}^{-T_p \vol(\mathrm{D}p)}\ . \end{equation} This partition function needs to be reproduced by a worldsheet computation. To leading order in $g_\text{s}$, the worldsheet partition function of a single open string ending on the D-brane is given by the disk partition function $Z_\text{disk}$. To account for the fact that there can be arbitrarily many strings present we need to exponentiate the single-string answer. So we require \begin{equation} \mathrm{e}^{-T_p \vol(\mathrm{D}p)} \overset{!}{=}\mathrm{e}^{Z_\text{disk}+\mathcal{O}(1)}\ . \end{equation} Hence \begin{equation} T_p=-\frac{Z_\text{disk}}{\vol(\mathrm{D}p)}= -\frac{Z_\text{CFT}^{(p)}}{\vol(\mathrm{D}p)\vol(\mathrm{PSL}(2,\mathbb{R}))}\ .
\end{equation} Here we used the above computations that showed that passing from the disk partition function with $\mathrm{PSL}(2,\mathbb{R})$ gauged to the ungauged CFT partition function gives rise to a relative factor given by the effective volume of $\mathrm{PSL}(2,\mathbb{R})$. The superscript $(p)$ reminds us that there are $p+1$ Neumann directions and $D-p-1=25-p$ Dirichlet directions in the partition function. We also note that it was crucial that the effective volume of $\mathrm{PSL}(2,\mathbb{R})$ turned out to be negative in order to get a positive D-brane tension.\footnote{One could repeat the same computation for O-planes, whose tensions are computed by the projective plane $\mathbb{RP}^2$ diagram. In this case, the residual symmetry group is $\mathrm{SO}(3)$, which is compact. Correspondingly, the tension of O-planes turns out to be \emph{negative}.} \subsection{\texorpdfstring{$p$-dependence}{p-dependence}} As a first step in our computation, we fix the $p$-dependence of $T_p$. We use the fact that the effective volume of $\mathrm{PSL}(2,\mathbb{R})$ can be assigned a finite regularized value (the precise value becomes important only in the next subsection) and arrive at \begin{equation} \frac{T_{p+1}}{T_p}=\frac{Z_\text{CFT}^{(p+1)}}{Z_\text{CFT}^{(p)}\vol(\mathbb{R})}=\frac{Z_\text{CFT}^\text{N}}{Z_\text{CFT}^\text{D}\vol(\mathbb{R})}\ , \end{equation} where $Z_\text{CFT}^\text{N,D}$ are the CFT partition functions for a single free boson. All other directions in the worldsheet partition function as well as the ghost partition functions cancel. The volume appearing here is the volume in the direction $p+1$. This will remove the zero mode from the Neumann partition function. Let us compute the partition function on a hemisphere of radius $R$ in zeta-function renormalization \cite{Hawking:1976ja}. The non-zero modes lead to \begin{equation} Z_\text{CFT}^\text{N,D}=\text{(zero modes)}\times\prod_{\lambda} \sqrt{\frac{4\pi^2 \alpha' R^2}{\lambda}}\ .
\end{equation} The product runs over all eigenvalues of $-\Delta$ on the unit sphere with the correct boundary conditions. The zero mode for the Neumann condition leads to the following contribution. By definition, we normalized the path integral as follows. Choose an orthonormal basis of eigenfunctions of $\Delta$. Then the path integral is simply given by the usual integral over all the coefficients in this orthonormal basis. The constant function is hence normalized as $\frac{1}{\sqrt{2\pi} R}$. Thus, the zero mode integral is \begin{equation} \int_{-L\sqrt{2\pi}R}^{L \sqrt{2\pi}R} \mathrm{d}X_0 =\sqrt{2\pi}R \vol(\mathbb{R})\ , \end{equation} where we imagined that the D-brane extends in some region $[-L,L]$. This again does not matter for the final result; we only need the factor $\sqrt{2\pi}R$ that arises from the correct normalization. Finally, we note that the eigenvalues of the Laplacian $-\Delta$ are just $\ell(\ell+1)$. For Neumann boundary conditions, they have multiplicity $\ell+1$, whereas for Dirichlet boundary conditions, they have multiplicity $\ell$. Thus, \begin{align} \frac{T_{p+1}}{T_p}=\sqrt{2\pi}R \prod_{\ell=1}^\infty \sqrt{\frac{4\pi^2 \alpha' R^2}{\ell(\ell+1)}}=\frac{1}{\sqrt{2\pi\alpha'}} \prod_{\ell=1}^\infty \frac{1}{\sqrt{\ell(\ell+1)}}\ . \end{align} Since the result is independent of $R$, we made the convenient choice $R=\frac{1}{2\pi \sqrt{\alpha'}}$. The infinite product can be evaluated using zeta-function regularization.\footnote{Tree level partition functions in zeta-function regularization in string theory were considered in \cite{Grinstein:1986hd, Douglas:1986eu, Weisberger:1986qd}.} Define \begin{equation} \zeta_{\text{N}/\text{D}}(s)=\sum_{\ell=1}^\infty \frac{1}{(\ell(\ell+1))^s}\ . \end{equation} We want to compute $\zeta_{\text{N}/\text{D}}'(0)$ which enters the regulated ratio of determinants.
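The stated multiplicities follow from the reflection property $Y_{\ell m}(\pi-\theta,\phi)=(-1)^{\ell+m}Y_{\ell m}(\theta,\phi)$ of the spherical harmonics: Neumann conditions on the equator keep the modes with $\ell+m$ even, Dirichlet conditions those with $\ell+m$ odd. A trivial count confirms the bookkeeping:

```python
# Count the spherical harmonics Y_{lm} surviving on the hemisphere:
# Neumann keeps l + m even, Dirichlet keeps l + m odd.
for l in range(1, 25):
    neumann = sum(1 for m in range(-l, l + 1) if (l + m) % 2 == 0)
    dirichlet = sum(1 for m in range(-l, l + 1) if (l + m) % 2 == 1)
    assert neumann == l + 1       # Neumann multiplicity
    assert dirichlet == l         # Dirichlet multiplicity
```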
For this, we write \begin{align} \zeta_{\text{N}/\text{D}}(s)=\sum_{\ell=1}^\infty \left(\frac{1}{\ell^{2s}}-\frac{s}{\ell^{2s+1}}\right)+\sum_{\ell=1}^\infty \frac{1}{\ell^{2s}}\left(\frac{1}{(1+\ell^{-1})^s}-1+\frac{s}{\ell}\right)\ . \end{align} The first sum can be expressed through the Riemann zeta-function, whereas the second sum converges absolutely for $\Re s>-\frac{1}{2}$. Hence to evaluate the derivative at $s=0$, we can commute the derivative with the sum. We obtain \begin{equation} \zeta_{\text{N}/\text{D}}'(0)=2\zeta'(0)-\gamma+\sum_{\ell=1}^\infty \left(\frac{1}{\ell}-\log \left(1+\frac{1}{\ell}\right)\right)\ . \end{equation} Here, we used already that the Riemann zeta-function behaves near $s=1$ as \begin{equation} \zeta(s)=\frac{1}{s-1}+\gamma+\mathcal{O}(s-1)\ , \end{equation} where $\gamma$ is the Euler-Mascheroni constant. Furthermore, we can use that $\zeta'(0)=-\frac{1}{2}\log(2\pi)$. The remaining sum is seen to be equal to $\gamma$ by definition: \begin{equation} \sum_{\ell=1}^n \left(\frac{1}{\ell}-\log \left(1+\frac{1}{\ell}\right)\right)=\sum_{\ell=1}^n \frac{1}{\ell}-\log(n+1) \overset{n \to \infty}{\longrightarrow} \gamma\ , \end{equation} where we used that the logarithmic pieces form a telescoping sum. Finally, we simply obtain \begin{equation} \zeta_{\text{N}/\text{D}}'(0)=2\zeta'(0)=-\log(2\pi)\ . \end{equation} Putting the pieces together gives \begin{equation} \frac{T_{p+1}}{T_p}=\frac{1}{\sqrt{2\pi\alpha'}} \exp\left(\frac{1}{2} \zeta_{\text{N}/\text{D}}'(0)\right)=\frac{1}{2\pi \sqrt{\alpha'}}\ . \label{eq:ratio tensions} \end{equation} \subsection{Fixing normalization}\label{subsec:norm} After having fixed the $p$-dependence, we can compute the overall normalization. We follow here the conventions of Polchinski \cite{Polchinski:1998rq}. We will compute the normalization for the D25-brane where we only impose Neumann boundary conditions.
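The numerical ingredients of the ratio above are easy to confirm: the telescoping sum indeed converges to $\gamma$, and with $\zeta'(0)=-\frac{1}{2}\log(2\pi)$ the regularized product collapses to $1/\sqrt{2\pi}$. A short check:

```python
import math

# Partial sums telescope: sum_{l=1}^{n} (1/l - log(1 + 1/l)) = H_n - log(n+1) -> gamma.
n = 200_000
partial = sum(1.0 / l - math.log(1.0 + 1.0 / l) for l in range(1, n + 1))
gamma = 0.5772156649015329
assert abs(partial - gamma) < 1e-5

# zeta'_{N/D}(0) = 2*zeta'(0) = -log(2*pi), so the regularized product
# prod_l 1/sqrt(l(l+1)) = exp(zeta'_{N/D}(0)/2) equals 1/sqrt(2*pi).
zeta_prime_nd = -math.log(2 * math.pi)
assert abs(math.exp(0.5 * zeta_prime_nd) - 1 / math.sqrt(2 * math.pi)) < 1e-15
```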
In his notation, \begin{equation} Z_\text{CFT}=C_{D_2}=\frac{1}{\alpha' g_\text{o}^2}\ , \end{equation} where $g_\text{o}$ is the open string coupling, compare eq.~(6.4.14) in Polchinski. We also have the following relation of the gravitational coupling $\kappa=\sqrt{8\pi G_\text{N}}$ to the open string coupling (eq.~(6.6.18) and eq.~(8.7.28)): \begin{equation} \kappa=2\pi g_\text{c}=2^{-17}\pi^{-\frac{23}{2}} (\alpha')^{-6} g_\text{o}^2 \ . \end{equation} Finally, we should remember that the effective volume of the group $\mathrm{PSL}(2,\mathbb{R})$ is $-2\pi^2$ in Polchinski's normalization, see also the discussion in Appendix~\ref{app:volume PSL2R}. This is because the normalization of the ghosts leads to a different normalization of the measure on $\mathrm{PSL}(2,\mathbb{R})$ than the one we were considering above. Thus we can express the result for the D-brane tension as follows: \begin{equation} T_{25}=\frac{1}{2\pi^2} Z_\text{CFT}=\frac{1}{2\pi^2 \alpha' g_\text{o}^2}=\frac{\sqrt{\pi}}{16\kappa} (4\pi^2 \alpha')^{-7}\ . \end{equation} For a general D$p$-brane, we combine this result with eq.~\!\eqref{eq:ratio tensions} and obtain \begin{equation} T_p=\frac{\sqrt{\pi}}{16\kappa} (4\pi^2 \alpha')^{\frac{11-p}{2}}\ . \end{equation} This agrees with eq.~(8.7.26) of Polchinski and hence provides a simple way of computing D-brane tensions. \section{Conclusions} \label{sec:conclusions} We found that the disk partition function in string theory can be rigorously computed using standard path integral methods. Using one of the bosons on the worldsheet, one can further fix the residual gauge group $\mathrm{PSL}(2,\mathbb{R})$. We gave two possible gauge choices: in Section~\ref{sec:first gauge} we imposed that when expanding the boson $X$ into spherical harmonics, one of the coefficients is absent. In Section~\ref{sec:alternative gauge} we imposed that the derivative of $X$ vanishes at the origin of the worldsheet disk.
Finally, in Section~\ref{sec:one point function} we used a more standard procedure and made use of the presence of a modulus in the worldsheet CFT which allows one to relate the result to a one-point function through conformal perturbation theory. In all these methods, the conclusion was the same: The group $\mathrm{PSL}(2,\mathbb{R})$ behaves as if it had a finite volume $-\frac{\pi^2}{2}$ in the path integral (for a suitable normalization of the metric on the group). We finally saw in Section~\ref{sec:D brane tension} that the disk partition function gives a very direct derivation of the D-brane tensions without the detours that are usually taken in the literature. \medskip In the following we mention some open questions and future directions. \paragraph{Infinite volume.} We have given three independent computations of the disk partition function and to us they quite convincingly show that the gauge group $\mathrm{PSL}(2,\mathbb{R})$ should be thought of as having a finite volume. However, conceptually, this is somewhat counterintuitive. One starts in CFT with an integral over a function space $L^2(D)$ with Neumann or Dirichlet boundary conditions which is finite after an appropriate regularization. Gauging of $\mathrm{PSL}(2,\mathbb{R})$ identifies the gauge orbits, which are non-compact slices inside $L^2(D)$. If we were dealing with a finite-dimensional integral, such an identification would surely lead to a vanishing result, due to the non-compactness of the gauge orbits. The finiteness of the result for the path integral is hence very unexpected and a result of an interesting interplay between the non-compactness of the gauge group and the subtleties of the path integral. \paragraph{Sphere partition function.} Given our success with the disk partition function, one should ask whether the more interesting sphere partition function can be computed in a similar manner. This does not seem to be the case from several perspectives.
\begin{enumerate} \item Liu and Polchinski applied the same regularization procedure as for $\mathrm{PSL}(2,\mathbb{R})$ to the case of $\mathrm{PSL}(2,\mathbb{C})$. However, one also gets a logarithmic divergence in the cutoff that is akin to the appearance of the conformal anomaly in holographic renormalization \cite{Henningson:1998gx}. This prevents one from assigning a well-defined value to the volume. \item The sphere partition function in flat space is expected to vanish. If we could perform a similar gauge fixing procedure as explored in this article using one flat spacetime direction, we would conclude that the sphere partition function should vanish for every background with a flat direction in it. This is not the case -- counterexamples include $c=1$ string theory and $\mathrm{AdS}_3 \times \mathrm{S}^3 \times \mathbb{T}^4$. Thus, one spacetime direction should not be sufficient to fix the gauge. \item The sphere partition function should vanish for a compact target space. This is expected from supergravity where the on-shell action is a total derivative and hence vanishes for a compact spacetime. However, the ungauged worldsheet partition function is clearly non-vanishing and so $\mathrm{PSL}(2,\mathbb{C})$ needs to have an infinite volume for consistency. \end{enumerate} For these reasons, the computation of the sphere partition function is a much more subtle problem than the disk partition function that we have treated in this paper. \section*{Acknowledgements} We thank Raghu Mahajan for initial collaboration and very useful discussions. We also thank Adam Levine and Edward Witten for discussions and Douglas Stanford for comments on a preliminary draft of the paper. LE is supported by the IBM Einstein Fellowship at the Institute for Advanced Study. SP acknowledges the support from DOE grant DE-SC0009988.
\section{\label{sec:introduction} Introduction} Economic Capital, a key tool of risk management, is computed by financial service firms to determine the amount of risk capital that they require to remain solvent in the face of adverse yet realistic conditions \cite{Porteous2003}. Financial service firms are exposed to many forms of risk \cite{Porteous2002} such as credit risk which is the risk of a monetary loss resulting from a counterparty failing to meet a financial obligation \cite{BIS2000, Bouteille2013}. For instance, a payment may not be made in due time or at all. Risk metrics such as Value at Risk and the Economic Capital Requirement (ECR) are often calculated for many different scenarios. Monte Carlo (MC) simulations are thus the method of choice for this task. In an MC simulation a parameter is estimated by building a distribution obtained by taking $M$ samples from the model input distributions. The error on the resulting estimation scales as $\mathcal{O}(1/\sqrt{M})$ \cite{Glasserman2003}. Evaluating credit risk with MC is a rare-event simulation problem which requires many samples, thereby making MC computationally costly \cite{Glasserman2005}. Importance sampling reduces the computational cost by lowering the constants but does not change the asymptotic rate of convergence. Quantum computers process information using the laws of quantum mechanics \cite{Nielsen2010}. This opens up novel ways of addressing various computational tasks. Problems that may benefit from quantum computing include quantum chemistry calculations \cite{Moll2018, Kandala2018}, machine learning \cite{Havlicek2019}, and finance \cite{Woerner2019, Rebentrost2018, Martin2019, Orus2019}. Recently, it has been shown how the Quantum Amplitude Estimation (QAE) algorithm can be used to analyze financial risk measures \cite{Woerner2019} or to price financial derivatives \cite{Stamatopoulos2019} with a quadratic speedup.
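The $\mathcal{O}(1/\sqrt{M})$ rate, and why rare events make plain MC expensive, can be illustrated in a few lines. The sketch below uses a generic standard-normal tail event, not any specific credit model:

```python
import math
import random

random.seed(1)

def mc_tail_estimate(threshold, M):
    """Plain Monte Carlo estimate of the rare-event probability P[Z > threshold]."""
    hits = sum(1 for _ in range(M) if random.gauss(0.0, 1.0) > threshold)
    return hits / M

# Exact tail probability P[Z > 3] ~ 1.35e-3: a rare event.
p_true = 0.5 * math.erfc(3.0 / math.sqrt(2.0))

# Quadrupling the number of samples only halves the statistical error ~ 1/sqrt(M);
# a 1% relative error on this event already needs M ~ 10^7 samples.
for M in (10_000, 40_000, 160_000):
    std_err = math.sqrt(p_true * (1.0 - p_true) / M)
    print(M, mc_tail_estimate(3.0, M), std_err)
```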
In Section \ref{sec:credit_risk_analysis}, we formally define the economic capital requirement as well as the two different uncertainty models considered. In Section \ref{sec:quantum_algorithm}, we build on previous work \cite{Woerner2019} and discuss how to implement the quantum algorithms on a gate-based quantum computer. In Section \ref{sec:results}, we show simulation results for small instances of the considered models. Section \ref{sec:scaling} analyzes the scaling of the algorithm for problems of realistic size as well as the resulting quantum advantage. \section{\label{sec:credit_risk_analysis} Credit Risk Analysis} ECR summarizes in a single figure the amount of capital (or own funds) required to remain solvent at a given confidence level (usually linked to the risk appetite or target solvency rating) and a time horizon (usually one year). It is a complementary metric to the regulatory capital requirements, which refer to the amount of own funds required following regulatory criteria and rules \cite{BaselIIIa}. In this paper, we consider only the ECR related to default risk, which is the loss that occurs when an obligor does not fulfill the repayment of a loan. The main components of an ECR model for a portfolio of assets are the single-asset default probabilities, the loss given default, and the correlation among the single-asset default events. In the following, we first introduce a general form of the credit risk analysis problem considered in this manuscript and then define concrete models in detail. For a portfolio of $K$ assets the multivariate random variable $(L_1, ..., L_K) \in \mathbb{R}_{\geq 0}^K$ denotes the possible losses associated with the individual assets. The expected value of the total loss $\mathcal{L} = \sum_{k=1}^K L_k$ is $\mathbb{E}[\mathcal{L}] = \sum_{k=1}^{K} \mathbb{E}[L_k]$.
The Value at Risk (VaR) for a given confidence level $\alpha \in [0, 1]$ is defined as the smallest value $x$ such that the total loss does not exceed $x$ with probability at least $\alpha$, i.e., \begin{eqnarray} \text{VaR}_{\alpha}[\mathcal{L}] &=& \inf_{x \geq 0} \left\{ x \mid \mathbb{P}[\mathcal{L} \leq x] \geq \alpha \right\}. \end{eqnarray} The ECR at confidence level $\alpha$ is thus defined as \begin{eqnarray} \text{ECR}_{\alpha}[\mathcal{L}] &=& \text{VaR}_{\alpha}[\mathcal{L}] - \mathbb{E}[\mathcal{L}]. \end{eqnarray} Common values of $\alpha$ for ECR found in the finance industry are around $99.9\%$. In a first model, we assume that all losses are independent and can be expressed as $L_k = \lambda_k X_k$ where $\lambda_k > 0$ is the loss given default (LGD) and $X_k \in \{0, 1\}$ is a corresponding Bernoulli random variable. The probability that $X_k=1$, i.e., a loss for asset $k$, is $p_k$. The expected loss of the portfolio $\mathbb{E}[\mathcal{L}] = \sum_{k=1}^K \lambda_k p_k$ is easier to evaluate than $\text{VaR}_{\alpha}[\mathcal{L}]$, which usually requires a Monte Carlo simulation. We extend this simple uncertainty model to a more realistic one, where the defaults $X_k$ are no longer independent but follow a conditional independence scheme \cite{Rutkowski2014}. Given a realization $z$ of a latent random variable $\mathcal{Z}$, the Bernoulli random variables $X_k \mid \mathcal{Z}=z$ are assumed independent, but their default probabilities $p_k$ depend on $z$. We follow \cite{Rutkowski2014} and assume that $\mathcal{Z}$ follows a standard normal distribution and that \begin{eqnarray} p_{k}(z) &=& F\left( \frac{F^{-1}(p_k^0) - \sqrt{\rho_k} z}{\sqrt{1 - \rho_k}} \right), \end{eqnarray} where $p_k^0$ denotes the unconditional default probability of asset $k$ (i.e., the average of $p_k(z)$ over $\mathcal{Z}$), $F$ is the cumulative distribution function (CDF) of the standard normal distribution, and $\rho_k \in [0, 1)$ determines the sensitivity of $X_k$ to $\mathcal{Z}$.
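To make the model concrete, the conditional default probability and its two defining properties (monotonicity in $z$, and the fact that averaging over $\mathcal{Z}$ returns the unconditional probability, since $\sqrt{1-\rho}\,X + \sqrt{\rho}\,Z$ is again standard normal) can be checked directly. The parameter values below are illustrative, not taken from the text:

```python
import numpy as np
from scipy.stats import norm

def p_cond(p0, rho, z):
    """Conditional default probability p_k(z) in the Gaussian
    conditional independence model."""
    return norm.cdf((norm.ppf(p0) - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))

p0, rho = 0.01, 0.2      # illustrative: 1% default probability, sensitivity 0.2

# An adverse realization of the latent factor (negative z in this sign
# convention) raises the default probability, a benign one lowers it.
assert p_cond(p0, rho, -2.0) > p_cond(p0, rho, 0.0) > p_cond(p0, rho, 2.0)

# Averaging over Z ~ N(0,1) recovers the unconditional default probability p0.
z = np.linspace(-8.0, 8.0, 40001)
avg = float(np.sum(p_cond(p0, rho, z) * norm.pdf(z)) * (z[1] - z[0]))
assert abs(avg - p0) < 1e-5
```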
This scheme is similar to the one used for regulatory purposes in the Basel II (and following) Internal Ratings-Based (IRB) approach to credit risk \cite{BaselII, BaselIII}, and is called the \emph{Gaussian conditional independence model} \cite{Rutkowski2014}. In order to scale the model to a larger number of assets, one can aggregate subsets of similar assets into random variables $L_k \geq 0$ that take more than two values. We briefly discuss this approach and the overall scaling of our algorithm to real-world problems in Section \ref{sec:scaling}. In the following sections, we show how the ECR for the presented model can be estimated on a gate-based quantum computer with QAE resulting in a quadratic speedup over classical Monte Carlo simulations. \section{\label{sec:quantum_algorithm} Quantum Algorithm} For the models introduced in Section \ref{sec:credit_risk_analysis}, the expected total loss $\mathbb{E}[\mathcal{L}]$ can be efficiently computed classically, see Appendix \ref{sec:expected_total_loss}. Thus, we focus on quantum algorithms to estimate $\text{VaR}_{\alpha}[\mathcal{L}]$. For more details on the estimation of expected values using QAE we refer to \cite{Woerner2019}. To apply QAE, we map the problem of interest to a quantum operator $\mathcal{A}$ acting on $n+1$ qubits such that: \begin{eqnarray} \mathcal{A} \ket{0}_{n+1} &=& \sqrt{1 - a} \ket{\psi_0}_n \ket{0} + \sqrt{a} \ket{\psi_1}_n\ket{1}, \label{eq:a_operator} \end{eqnarray} where $a \in [0, 1]$. The probability to measure $\ket{1}$ in the last qubit, i.e., $a$, corresponds to the (normalized) property of interest. From $\mathcal{A}$ we construct a quantum operator \begin{eqnarray} \mathcal{Q} &=& \mathcal{A} \mathcal{S}_0 \mathcal{A}^{\dagger} \mathcal{S}_{\psi_0}, \label{eq:q_operator} \end{eqnarray} where $\mathcal{S}_0=\mathbb{I}-2\ket{0}_{n+1}\bra{0}_{n+1}$ and $\mathcal{S}_{\psi_0} = \mathbb{I}-2\ket{\psi_0}_n\ket{0}\bra{\psi_0}_n\bra{0}$.
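The mechanism is easiest to see in the smallest possible instance: a single qubit with no state register, where $\mathcal{A}$ is just the rotation $R_Y(\theta)$ with $a = \sin^2(\theta/2)$. The operator $\mathcal{Q}$ then has eigenvalues $\mathrm{e}^{\pm i\theta}$, which is the phase that QAE extracts; a numpy sketch (the value of $a$ is arbitrary):

```python
import numpy as np

a = 0.3                                   # amplitude to be estimated (illustrative)
theta = 2.0 * np.arcsin(np.sqrt(a))

# A|0> = sqrt(1-a)|0> + sqrt(a)|1>, realized as a Y-rotation.
A = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
S0 = np.diag([-1.0, 1.0])                 # I - 2|0><0|, here equal to S_psi0

Q = A @ S0 @ A.T @ S0                     # single-qubit version of eq. (q_operator)

# The eigenphases of Q encode theta, and hence a = sin^2(theta/2).
phases = np.sort(np.angle(np.linalg.eigvals(Q)))
assert np.allclose(phases, [-theta, theta])
```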
Every application of $\mathcal{Q}$ corresponds to one \emph{quantum sample}. QAE allows us to estimate $a$ with an estimation error that is bounded by \begin{eqnarray} \frac{2 \sqrt{a(1-a)}\pi}{M} + \frac{\pi^2}{M^2} &=& \mathcal{O}\left(\frac{1}{M}\right), \label{eq:ae_error} \end{eqnarray} where $M$ corresponds to the number of quantum samples \cite{Brassard2000, Woerner2019}. QAE has a success probability of $81\%$; thus, by repeating it only a few times and taking the median result, the algorithm succeeds with near certainty. This leads to a quadratic speedup over classical Monte Carlo simulations, where the estimation error behaves as $\mathcal{O}(1/\sqrt{M})$, where $M$ now denotes the number of \emph{classical samples}. A more detailed discussion of QAE can be found in Appendix \ref{sec:amplitude_estimation}. To estimate VaR, we use QAE to efficiently evaluate the CDF of the total loss, i.e., we will construct $\mathcal{A}$ such that $a = \mathbb{P}[\mathcal{L} \leq x]$ for a given $x \geq 0$, and apply a bisection search to find the smallest $x_{\alpha} \geq 0$ such that $\mathbb{P}[\mathcal{L} \leq x_{\alpha}] \geq \alpha$, which implies $x_{\alpha} = \text{VaR}_{\alpha}[\mathcal{L}]$ \cite{Woerner2019}. Mapping the CDF of the total loss to a quantum operator $\mathcal{A}$ requires three steps. Each step corresponds to a quantum operator. First, $\mathcal{U}$ loads the uncertainty model. Second, $\mathcal{S}$ computes the total loss into a quantum register with $n_S$ qubits. Last, $\mathcal{C}$ flips a target qubit if the total loss is less than or equal to a given level $x$ which is used to search for $\text{VaR}_\alpha$. Thus, we have $\mathcal{A} = \mathcal{CSU}$ and Fig.~\ref{fig:high_level_cdf_circuit} illustrates the corresponding circuit on a high level.
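The bisection search over loss levels can be sketched classically; here \texttt{cdf(x)} stands in for the amplitude $a = \mathbb{P}[\mathcal{L} \leq x]$ that a QAE run with comparator level $x$ would estimate (the function names are ours).

```python
def var_bisection(cdf, alpha, x_max):
    """Smallest x in {0, ..., x_max} with cdf(x) >= alpha; assumes
    cdf(x_max) >= alpha.  Each cdf evaluation corresponds to one full
    QAE run with the comparator level set to x, so at most
    ceil(log2(x_max + 1)) runs are needed."""
    lo, hi = 0, x_max
    while lo < hi:
        mid = (lo + hi) // 2
        if cdf(mid) >= alpha:
            hi = mid
        else:
            lo = mid + 1
    return lo

# toy loss distribution: P[L = 0..3] = 0.60, 0.20, 0.15, 0.05
cum = [0.60, 0.80, 0.95, 1.00]
var_95 = var_bisection(lambda x: cum[x], 0.95, 3)  # -> 2
```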
\begin{figure}[hbtp] \centering \includegraphics[width=0.475\textwidth]{high_level_circuit} \caption{\label{fig:high_level_cdf_circuit} High level circuit of the operator $\mathcal{A}$ used to evaluate the CDF of the total loss: the first qubit register with $n_Z$ qubits represents $\mathcal{Z}$, the second qubit register with $K$ qubits represents the $X_k$, the third qubit register with $n_S$ qubits represents the sum of the losses, i.e., the total loss, and the last qubit is flipped to $\ket{1}$ if the total loss is less than or equal to a given $x$. The operators $\mathcal{U}$, $\mathcal{S}$, and $\mathcal{C}$ represent the loading of uncertainty, the summation of losses, and the comparison to a given $x$, respectively.} \end{figure} The estimation error given in Eq.~(\ref{eq:ae_error}) also depends on the exact result $a$. In particular, if $a$ is close to $0$ or $1$ the constant in the error bound becomes very small. When computing $\text{VaR}_\alpha$, we want to find the minimal threshold such that the estimated probability is larger than or equal to $\alpha$. Thus, we can replace $a$ in Eq.~(\ref{eq:ae_error}) by $\alpha$ to get a better error bound. When $\alpha = 99.9\%$ the error bound is approximately \begin{eqnarray} \frac{1}{5 M} + \frac{\pi^2}{M^2} \label{eq:ae_error_alpha} \end{eqnarray} which is independent of the other properties of the problem. In other words, QAE is particularly good at estimating tail probabilities of distributions. We now discuss the operators $\mathcal{U}$, $\mathcal{S}$, and $\mathcal{C}$ in more detail. When the default events $\{X_1, ..., X_K\}$ are uncorrelated we can encode the $X_k$ of each asset in the state of a corresponding qubit by applying to qubit $k$ a $Y$-rotation $R_Y(\theta_p^k)$ \cite{Nielsen2010} with angle $\theta_p^k = 2\arcsin(\sqrt{p_k})$. Therefore the loading operator is \begin{eqnarray} \mathcal{U} &=& \bigotimes_{k=1}^K R_Y(\theta_p^k). 
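The constant in Eq.~(\ref{eq:ae_error_alpha}) and the comparison with the classical sampling error are easy to check numerically (the comparison at a fixed $M$ is our addition):

```python
import math

def qae_error_bound(a, M):
    """Eq. (ae_error): QAE error bound for amplitude a, M quantum samples."""
    return 2 * math.sqrt(a * (1 - a)) * math.pi / M + math.pi ** 2 / M ** 2

alpha = 0.999
coeff = 2 * math.sqrt(alpha * (1 - alpha)) * math.pi
# coeff ~ 0.199, i.e. the 1/(5M) leading term of Eq. (ae_error_alpha)

M = 256
classical_std = math.sqrt(alpha * (1 - alpha) / M)  # Monte Carlo, M samples
# at this tail probability, the QAE bound already beats the classical
# standard deviation at moderate M
assert qae_error_bound(alpha, M) < classical_std
```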
\end{eqnarray} This prepares qubit $k$ in the state $\sqrt{1 - p_k}\ket{0} + \sqrt{p_k}\ket{1}$ for which the probability to measure $\ket{1}$ is $p_k$. The $\ket{1}$ state of qubit $k$ thus corresponds to a loss for asset $k$. To adjust $\mathcal{U}$ to include correlations between the default events, we add another register with $n_Z$ qubits to represent $\mathcal{Z}$. The random variable $\mathcal{Z}$ follows a standard normal distribution. We use a truncated and discretized approximation with $2^{n_Z}$ values, where we consider an affine mapping $z_i = a_z i + b_z$ from $i \in \{0, ..., 2^{n_Z}-1\}$ to the desired range of values of $\mathcal{Z}$. Any discretized and truncated log-concave distribution, such as $\mathcal{Z}$, can be efficiently represented in a quantum register by an operator $\mathcal{U}_Z$ built from controlled rotations \cite{Grover2002}. The qubit register representing $\mathcal{Z}$ is then used to control the rotation angles $\theta_p^k(z) = 2 \arcsin(\sqrt{p_{k}(z)})$ that prepare the qubits representing the $X_k$. For simplicity, we use a first order approximation of $\theta_p^k(z)$ and include the affine mapping from $z$ (a value of the normal distribution) to $i$ (an integer represented by $n_Z$ qubits), i.e., $\theta_p^k(z_i) \approx a_k i + b_k$. This affine dependency of the rotation angles $\theta_p^k$ with respect to $\mathcal{Z}$ can be constructed with a controlled rotation, see Fig.~\ref{fig:affine_controlled_rotation}. Higher order approximations of $\theta_p^k(z)$ can be implemented using multi-controlled rotations. Furthermore, by using quantum arithmetic one could also compute $\theta_p^k(z)$ directly \cite{Woerner2019}.
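The coefficients $a_k, b_k$ of the first-order approximation can be precomputed classically before the circuit is built. The sketch below (ours) fits a least-squares line over the grid and truncates $\mathcal{Z}$ at $\pm 3$ standard deviations; both choices are assumptions, as the text does not specify them.

```python
import math
from statistics import NormalDist

F, F_inv = NormalDist().cdf, NormalDist().inv_cdf

def theta(p):
    """Angle with R_Y(theta)|0> = sqrt(1-p)|0> + sqrt(p)|1>."""
    return 2 * math.asin(math.sqrt(p))

def affine_angle_fit(n_z, p0, rho, z_lo=-3.0, z_hi=3.0):
    """Least-squares line theta(z_i) ~ a*i + b over the grid
    i = 0, ..., 2**n_z - 1, with the affine map z_i = a_z * i + b_z."""
    N = 2 ** n_z
    a_z = (z_hi - z_lo) / (N - 1)
    xs = list(range(N))
    ys = [theta(F((F_inv(p0) - math.sqrt(rho) * (z_lo + a_z * i))
                  / math.sqrt(1 - rho))) for i in xs]
    x_m, y_m = sum(xs) / N, sum(ys) / N
    a = (sum((x - x_m) * (y - y_m) for x, y in zip(xs, ys))
         / sum((x - x_m) ** 2 for x in xs))
    return a, y_m - a * x_m

a_k, b_k = affine_angle_fit(2, 0.15, 0.1)  # a_k < 0: larger z, fewer defaults
```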
\begin{figure}[hbtp] \centering \includegraphics[width=0.45\textwidth]{affine_rotation_angle_dependency.pdf} \caption{\label{fig:affine_controlled_rotation} Affine dependency of $X_k$ on $\mathcal{Z}$: The qubit representing $X_k$ is prepared using $Y$-rotations controlled by the qubits representing $\mathcal{Z}$. Since the rotation angles are additive this construction rotates qubit $k$ by an angle $a_k z + b_k$.} \end{figure} The ability to efficiently construct the uncertainty model is a crucial part of QAE-based algorithms, and if not handled carefully can diminish the potential quantum advantage. The previous discussion shows that the Gaussian conditional independence model is particularly suitable for efficient loading in a quantum computer. However, the depth of the circuit implementing $\mathcal{U}$, shown in Fig.~\ref{fig:affine_controlled_rotation}, scales as $\mathcal{O}(n_Z K)$, i.e.~linear in the number of assets. By adding $\mathcal{O}(K)$ ancilla qubits, the scaling of the circuit depth can be reduced to $\mathcal{O}(\log{K})$, which can lead to a potential speedup. The additional qubits provide the compute space to perform more operations in parallel. Depending on the number of available qubits and the complexity of the rest of the algorithm, the number of ancillas can also be set to a smaller value to achieve an optimal overall performance. The efficient implementation of $\mathcal{U}$ is discussed in detail in Sec.~\ref{sec:scaling}. Next, we need to compute the resulting total loss for every realization of the $X_k$. Therefore, we use a weighted sum operator \begin{eqnarray} &\mathcal{S}: & \ket{x_1, \cdots, x_K}_K \ket{0}_{n_S} \nonumber \\ &\mapsto & \ket{x_1, \cdots, x_K}_K \ket{ \lambda_1 x_1 + \cdots + \lambda_K x_K}_{n_S}, \end{eqnarray} where $x_k \in \{0, 1\}$ denote the possible realizations of $X_k$.
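Classically, $\mathcal{S}$ simply evaluates a weighted sum; the following check (ours) confirms that a sum register with $n_S = \lfloor\log_2(\lambda_1 + \cdots + \lambda_K)\rfloor + 1$ qubits, as used in the text, holds every possible total loss for integer $\lambda_k$.

```python
from itertools import product

def weighted_sum(xs, lam):
    """Classical action of S on a basis state |x_1 ... x_K>."""
    return sum(l * x for l, x in zip(lam, xs))

lam = [1, 2, 5]                  # integer losses given default
n_s = sum(lam).bit_length()      # = floor(log2(lambda_1+...+lambda_K)) + 1
# every realization of the X_k fits into the n_s-qubit sum register
assert all(weighted_sum(xs, lam) < 2 ** n_s
           for xs in product((0, 1), repeat=len(lam)))
```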
We set $n_S = \lfloor\log_2(\lambda_1 + \cdots + \lambda_K)\rfloor + 1$ to represent in the second register all possible values of the sum of the losses given default $\lambda_k$, assumed to be integers. An efficient implementation of $\mathcal{S}$ is discussed in Sec.~\ref{sec:scaling}. Last, we need an operator that compares a particular loss realization to a given $x$ and then flips a target qubit from $\ket{0}$ to $\ket{1}$ if the loss is less than or equal to $x$. This operator is defined by \begin{eqnarray} \mathcal{C}: \ket{i}_{n_S}\ket{0} \mapsto \begin{cases} \ket{i}_{n_S}\ket{1} & \text{if $i \leq x$}, \\ \ket{i}_{n_S}\ket{0} & \text{otherwise.} \end{cases} \end{eqnarray} An efficient implementation of $\mathcal{C}$ is discussed in Sec.~\ref{sec:scaling}. In the remainder of this paper we apply this algorithm to a small illustrative example using classical simulations of a quantum computer and we discuss the scaling to problems of realistic size. \section{\label{sec:results} Results} In this section, we analyze the performance of the quantum algorithm for an illustrative example with $K=2$ assets. The losses given default $\lambda_k$, the default probabilities $p_k^0$, and the sensitivities $\rho_k$ are given in Tab.~\ref{tab:illustrative_example}. Within this section we set $n_Z=2$, and from the $\lambda_k$ it follows that $n_S = 2$. Thus, $\mathcal{A}$ is operating on seven qubits that represent this problem on a quantum computer, including the objective qubit. \begin{table}[htbp!] \caption{\label{tab:illustrative_example} Problem parameters for the two-assets example.} \begin{tabular}{cccc} asset number & loss given default & default prob.
& sensitivity \\ $k$ & $\lambda_k$ & $p_k^{0}$ & $\rho_k$ \\ \hline 1 & 1 & 0.15 & 0.1 \\ 2 & 2 & 0.25 & 0.05 \end{tabular} \end{table} To simulate our algorithm we input the circuit for $\mathcal{A}$ to the QAE subroutine implemented in \emph{Qiskit}~\cite{Qiskit} and perform the bisection search using the result to find $x_\alpha$. Since $n_S = 2$, the bisection search requires at most two steps, as shown in Fig.~\ref{fig:var_bisection_search}. Note that QAE requires one additional ancilla qubit to implement $\mathcal{Q}$, and we use four evaluation qubits, giving us $2^4 = 16$ quantum samples. In total, this experiment requires 12 qubits that we simulate using classical computers. \begin{figure}[hbtp] \centering \includegraphics[width=0.475\textwidth]{bisection_search} \caption{\label{fig:var_bisection_search} Cumulative distribution function (left) of total loss $\mathcal{L}$ (blue) and target level of 95\% (red). Bisection search to compute VaR (middle / right): Upper bound (orange), lower bound (blue), estimate (green), and exact value (red dashed line). Here, we set $\alpha = 95\%$ and $m = 4$.} \end{figure} \section{\label{sec:scaling} Scaling to Real-World Problems} We analyze the scaling of the quantum algorithm for problem sizes relevant to the finance industry. In particular, we analyze the circuit depth as a function of the number of assets $K$, to estimate the expected runtime on a fault-tolerant quantum computer~\cite{Shor1996, Kitaev2003, Fowler2012}. We consider a gate decomposition into the Clifford + T gate set and mainly focus on the circuit depth in terms of T-gates, since they are the most expensive gates in a fault-tolerant quantum computer \cite{Bravyi2012}. By using ancilla qubits, Toffoli gates can be constructed with a T-depth of one \cite{Selinger2013}; thus, we treat the two as equivalent in our runtime analysis.
Clifford gates, such as for instance CNOT-gates, are considered to be orders of magnitude faster than T-gates and we mostly ignore them in the following \cite{Fowler2012, Fowler2018}. Our algorithm mainly consists of $\mathcal{A}$, multiple applications of (controlled) $\mathcal{Q}$, and an inverse quantum Fourier transform (QFT) at the end. The complexity of the inverse QFT scales at most quadratically with the number of evaluation qubits $m$, and is orders of magnitude smaller than the rest of the algorithm, since we assume $K \gg m$ and since the inverse QFT is only applied once. Furthermore, the inverse QFT can even be approximated using $\mathcal{O}(n\log(n))$ T-gates \cite{Nam2018}, and, as discussed later in this section, it has been recently shown that Quantum Phase Estimation (QPE, includes the inverse QFT) can be omitted completely in QAE \cite{Suzuki2019}. Therefore, we ignore the contribution of the inverse QFT to the overall runtime. Since the controlled powers of $\mathcal{Q}$ will dominate the runtime, we focus on the T/Toffoli-gates in $\mathcal{Q}$. Eq.~(\ref{eq:q_operator}) implies that the controlled-$\mathcal{Q}$ operator in QAE requires controlling only the reflections $\mathcal{S}_0$ and $\mathcal{S}_{\psi_0}$. Indeed, $\mathcal{A}$ and $\mathcal{A}^{\dagger}$ are left uncontrolled and cancel each other when the control qubit of $\mathcal{Q}$ is in state $\ket{0}$, since in this case $\mathcal{S}_0$ is not applied. We now argue that $\mathcal{S}_0$ and $\mathcal{S}_{\psi_0}$ do not dominate the runtime. The reflection $\mathcal{S}_{\psi_0}$ can be implemented using an ancilla qubit and a phase kickback: an X-gate prepares the ancilla qubit in state $\ket{1}$, then the objective qubit of $\mathcal{A}$ is used to control a Z-gate targeting the ancilla qubit, a final X-gate uncomputes the ancilla qubit.
This gate sequence transforms the objective qubit of $\mathcal{A}$ from $\alpha\ket{0}+\beta\ket{1}$ to $\alpha\ket{0}-\beta\ket{1}$ \cite{Nielsen2010} which is equivalent to the action of $\mathcal{S}_{\psi_0}$. For a controlled application of $\mathcal{S}_{\psi_0}$ we replace the single-controlled Z-gate by a double-controlled Z-gate where the second control is an evaluation qubit. Thus, $\mathcal{S}_{\psi_0}$ can be ignored in the overall runtime analysis as it can be implemented using a single Toffoli-gate (within the double-controlled Z-gate, exploiting that $Z = H X H$). We implement $\mathcal{S}_0$ using the same construction as for $\mathcal{S}_{\psi_0}$ but with the single-controlled Z-gate replaced by a multi-controlled Z-gate that only acts if all qubits $\mathcal{A}$ operates on are in state $\ket{0}$. However, if the sum-register is in state $\ket{0}_{n_S}$ then the $K$ qubits representing the $X_k$'s are also in state $\ket{0}_K$ and vice versa, since $\lambda_k > 0$ for all $k$. Thus, instead of controlling the Z-gate with all state qubits, we only need to control it by the $n_Z$ qubits representing $\mathcal{Z}$, the $n_S$ qubits representing the total loss, and the objective qubit of $\mathcal{A}$. Since multi-controlled gates can be implemented with logarithmic depth and a linear number of ancillas \cite{Maslov2015, Motzoi2017a}, we can also ignore the contribution of (controlled) $\mathcal{S}_0$ to the total runtime. The previous discussion in this section implies that the multiple applications of $\mathcal{A}$ dominate the total runtime. For $m$ evaluation qubits, $\mathcal{A}$ is called $n_S (2^{m+1} - 1)$ times: once for the initial state preparation, twice for each of the $2^m-1$ applications of $\mathcal{Q}$, and everything is repeated at most $n_S$ times for the bisection search to estimate VaR. Since QAE is a probabilistic algorithm, we need to run it multiple times. 
However, 25 repetitions are already sufficient to achieve a success probability of $99.75\%$ when using the median result \cite{Woerner2019}. These are independent repetitions that could be parallelized on multiple separate quantum computers; thus, we do not include this additional overhead. In the following, we analyze the circuit depth of $\mathcal{A}$. How to efficiently implement the operators $\mathcal{U}$, $\mathcal{S}$, and $\mathcal{C}$ and the assumptions made, e.g., on approximation errors, is discussed in Appendices \ref{sec:uncertainty_loading}, \ref{sec:weighted_sum_operator}, and \ref{sec:fixed_value_comparator}, respectively. The resulting circuit depths in terms of T/Toffoli-gates are stated in Table \ref{tab:scaling}. \begin{table}[htbp!] \begin{tabular}{cc} operator & circuit depth (T/Toffoli-gates) \\ \hline $\mathcal{U}$ & $26 + 28 n_Z$ \\ $\mathcal{S}$ & $\log_2(K) (\lfloor \log_2(n_S) \rfloor + \lfloor \log_2(n_S/3) \rfloor + 7)$\\ $\mathcal{C}$ & $2\lfloor \log_2(n_S - 1)\rfloor + 9$ \end{tabular} \caption{Bounds on the circuit depth of the operators $\mathcal{U}$, $\mathcal{S}$, and $\mathcal{C}$ in terms of T/Toffoli-gates; see Appendices \ref{sec:uncertainty_loading}, \ref{sec:weighted_sum_operator}, and \ref{sec:fixed_value_comparator} for more details.} \label{tab:scaling} \end{table} The total number of qubits will scale like $\mathcal{O}(K)$, since we represent every asset with a single qubit and the required ancillas also scale linearly in $K$. We are mainly interested in an estimation of the overall runtime, and thus, we will not further elaborate on the exact number of required qubits. In the remainder of this section, we consider $K = 2^{20}$, i.e., a portfolio of about one million assets, and assume $n_Z = 10$ and $n_S = 30$.
This implies that we discretize $\mathcal{Z}$ with $1,024$ different values, and that we assume the average of the $\lambda_k$ is at most $1,024 = 2^{n_S}/K$, otherwise $n_S$ would be too small to represent the maximal possible sum of losses. Furthermore, we assume $m = 10$, which achieves an accuracy of $0.06\%$-points for $\alpha = 99.9\%$. Inserting these numbers into the formulas in Table \ref{tab:scaling} leads to a T/Toffoli-depth for $\mathcal{A}$ of about $N_{\text{T}}^{\mathcal{A}} = 600$. For the overall QAE, this implies a depth of \begin{eqnarray} \label{eq:n_toffoli} n_S (2^{m+1} - 1) N_{\text{T}}^{\mathcal{A}}, \end{eqnarray} which evaluates to a T/Toffoli-depth of approximately 37 million gates. Up to now, we have not considered the impact of the limited connectivity of quantum processors, i.e., the fact that we need to introduce SWAP-gates to realize CNOT-gates or Toffoli-gates between qubits that are not physically connected. It has been empirically shown in \cite{Woerner2019} for a related application that mapping comparable circuits to a realistic topology led to an increase in the number of CNOT-gates of about a factor of two. Since the runtime is dominated by the time for the T-gates, doubling the number of CNOT gates in our circuit should not significantly affect the overall runtime. We therefore ignore the impact of limited connectivity. Additionally, compiling the quantum circuits can be done in advance to produce a template circuit usable for a concrete problem. Thus, the actual compilation and circuit-optimization times are not included in our analysis. We now assume that error-corrected T/Toffoli-gates can be executed in $10^{-4}$ seconds \cite{Fowler2018}. With this clock rate, the 37 million gates obtained from Eq.~(\ref{eq:n_toffoli}) result in an estimated overall runtime of around one hour.
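The runtime arithmetic of this section can be reproduced directly from the bounds in Table~\ref{tab:scaling}; the $10^{-4}$-second gate time is the stated hardware assumption.

```python
import math

def depth_A(K, n_Z, n_S):
    """T/Toffoli-depth of A from the bounds in Table (tab:scaling)."""
    d_U = 26 + 28 * n_Z
    d_S = math.log2(K) * (math.floor(math.log2(n_S))
                          + math.floor(math.log2(n_S / 3)) + 7)
    d_C = 2 * math.floor(math.log2(n_S - 1)) + 9
    return d_U + d_S + d_C

K, n_Z, n_S, m = 2 ** 20, 10, 30, 10
d_A = depth_A(K, n_Z, n_S)               # ~600, as quoted in the text
total = n_S * (2 ** (m + 1) - 1) * d_A   # Eq. (n_toffoli): ~3.7e7 layers
runtime_h = total * 1e-4 / 3600          # ~1 hour at 1e-4 s per layer
```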
Removing the QPE from QAE not only eliminates the inverse QFT but also reduces the overall circuit depth by a factor of two by allowing us to parallelize on two quantum devices \cite{Suzuki2019}. This results in an estimated runtime of 30 minutes to estimate the VaR for a one-million-asset portfolio. Classical simulation of large portfolios is a computationally demanding problem that requires significant time and hardware resources \cite{Lan2010, Desmettre2016, Stockinger2018}. To reduce classical simulation times, approximations are used and similar assets are aggregated into batches described by more complex random distributions. The same methods can also be applied to our quantum algorithm and should achieve similar improvements, potentially reducing the expected runtime of 30 minutes for one million assets to near real-time. Furthermore, aggregating similar assets can also help to reduce the required number of qubits. Unlike for classical algorithms, estimating the Conditional Value at Risk (CVaR, or Expected Shortfall) can be achieved without much additional overhead, since it requires just one additional (slightly more expensive) application of QAE without the bisection search \cite{Woerner2019}. \section{\label{sec:conclusion} Conclusion} In this paper we developed and analyzed a quantum algorithm to estimate ECR with a quadratic speedup. We have demonstrated the algorithm using a simulation and analyzed the scaling and expected runtime for realistic problem sizes under reasonable assumptions on future quantum computers. Furthermore, we argued that our results also hold for more complex uncertainty models or other objectives, such as CVaR, without much additional overhead. Although there is still a long way to go in terms of hardware development, this implies a huge potential for quantum computing in credit risk analysis. Further research in algorithms can help to reduce the number of required qubits as well as the gate depth.
Within this paper, we made assumptions on the performance of future quantum hardware. We tried to make our analysis as transparent as possible to allow adjustments of our results in case of new insights on future hardware or algorithmic components. Until quantum computers of the required scale are available, substantial research is still needed on quantum algorithms, error correction, and circuit optimization. Thus, it would not be surprising if our assumptions turn out to be conservative, implying an even larger potential for the technology than outlined in the present manuscript. \begin{acknowledgments} The authors want to thank Joan Francesc Vidal Villal\'on and Santiago Murillo Pavas from CaixaBank for the inspiring discussion on this important use case, and James Wootton as well as Dmitri Maslov for their valuable insights on quantum error correction and gate decomposition. \end{acknowledgments}
\section{Introduction} Quantum networks rely upon an efficient interface between the quantum memories and the photonic channels used for remote entanglement. Trapped ions are a standard platform to realize a quantum network due to their long coherence times and the availability of high fidelity quantum gate operations~\cite{blatt2008,duan2010}. Current implementations of trapped ion based quantum networks have demonstrated the ability to entangle remote nodes, violate Bell's inequality, perform teleportation, realize remote quantum gates, and generate private random numbers~\cite{moehring2007b, matsukevich2008, olmschenk2009, maunz2009, pironio2010}. In these experiments, the interface between the ion and the photonic channel depends upon a probabilistic process wherein the scattered photon is collected by a microscope objective subtending only a small fraction of the emission solid angle. There has been recent interest in integrating optical elements with an ion trap system to improve the photon collection efficiency. Nearby optics such as a fiber tip~\cite{vandevender2010}, reflective mirror~\cite{shu2009, shu2010}, or Fresnel lens~\cite{streed2009,streed2011} can have larger numerical apertures than common microscope objectives. Additionally, integrating multi-scale optics---such as microfabricated mirrors---with ion traps may provide a path towards scaling up a trapped ion network~\cite{noek2010,brady2011}. Although these methods increase the collection efficiency, they are still inherently probabilistic in nature. Coupling a trapped ion to an optical cavity can lead to a very high photon collection efficiency. In principle, with a large coupling strength between an ion and a cavity, the photon collection efficiency can approach unity~\cite{law1997, mckeever2004}. Since the cavity mode interacts coherently with the atomic state, this process is reversible and can be used for generating quantum networks~\cite{cirac1997}. 
Experiments with neutral atoms---where transition frequencies are typically in the infrared---have demonstrated efficient atom--photon interfaces as well as atom--photon and photon--photon entanglement~\cite{mckeever2004, wilk2007, weber2009}. Efforts toward coupling trapped ions to optical cavities have instead used infrared transitions to a metastable D state for the cavity coupling~\cite{guthohrlein2001, mundt2002, keller2004, barros2009}. At these wavelengths, high-finesse mirrors are available and strong coupling can be achieved. Single photons can be efficiently generated in this system using techniques similar to neutral atom cavity QED experiments~\cite{keller2004, barros2009}. However, these methods do not integrate directly with currently demonstrated trapped ion quantum network protocols. In this paper, we detail the design and fabrication of a trapped ion system where a single ytterbium ion is coupled to a moderate-finesse optical cavity resonant with the ultraviolet $S_{1/2}\leftrightarrow P_{1/2}$ transition at $369.5~\mathrm{nm}$. Such a system could couple to individual hyperfine levels of the ytterbium qubit and be integrated into existing atom--photon quantum network protocols~\cite{luo2009}. We trap a single $^{174}\mathrm{Yb}^{+}$ ion inside the cavity and coherently pump it with a laser from the side of the cavity while monitoring the cavity output. The photon scatter rate into the solid angle subtended by the cavity mode in the outcoupling direction is enhanced by a factor of $600$. Additionally, the spectral properties of the atomic emission are observed as we detune the cavity from the atomic resonance. At large pump strengths, the emergence of a three-peak structure in the spectrum of emitted light indicates a Mollow triplet at the single-atom level.
\section{Experimental System} A single ytterbium ion is confined by a radiofrequency (RF) ion trap inside the mode of an optical cavity resonant with the $S_{1/2} \leftrightarrow P_{1/2}$ transition at $369.5~\mathrm{nm}$. To couple the ion to the optical cavity, we developed a novel micron-scale ion trap that can be inserted into the cavity mode \emph{in situ}. A nearby RF ground is an order of magnitude closer to the ion than the dielectric mirrors to mitigate effects of dielectric charging~\cite{harlander2010}. \subsection{Optical cavity design, fabrication, and test} The optical cavity consists of a pair of highly reflective concave mirrors from Advanced Thin Films set up in a near-planar Fabry--P\'erot configuration. The mirrors were initially $7.75~\mathrm{mm}$ in diameter and $4~\mathrm{mm}$ thick when coated. After coating, the mirrors were coned to a $2~\mathrm{mm}$ diameter reflective surface and $4~\mathrm{mm}$ outer diameter. The radius of curvature of the mirrors is $25~\mathrm{mm}$. At ultraviolet frequencies, optical coating losses are two orders of magnitude larger than at infrared frequencies. The absorption and scattering losses for our cavity were initially $\approx 400~\mathrm{ppm}$. The mirrors form an asymmetric cavity with an outcoupling mirror transmission of $T_{\mathrm{out}}\approx 1000~\mathrm{ppm}$, which is larger than that of the incoupling mirror (set to $T_{\mathrm{in}}\approx 200~\mathrm{ppm}$). Because the ultraviolet light for Doppler cooling and coherently pumping the ion is derived from a frequency doubled diode laser, the mirrors were additionally coated for $739~\mathrm{nm}$ for cavity length stabilization (cf. Sec.~\ref{sec:setup}). The free spectral range of the optical cavity was measured to be $70.5~\mathrm{GHz}$ by scanning the $739~\mathrm{nm}$ laser across consecutive transmission peaks. This corresponds to a cavity length of $2.126~\mathrm{mm}$.
The full-width at half-maximum was measured by scanning the cavity length across resonance and using the frequency difference between acousto-optic modulator (AOM) orders as a frequency marker~\cite{hood2001}. The measured full-width at half-maximum was $\kappa/\pi = 18.6~\mathrm{MHz}$, corresponding to a finesse of $\mathfrak{F}=3790$, and an outcoupling efficiency of $0.6$. However, we noticed that the finesse of our cavity degraded after running the experiment for several weeks. To our knowledge, several other groups have noticed a similar effect in their ultraviolet cavities~\cite{colombePC, bylinskiiPC}. A direct measurement of the cavity linewidth was made by driving the cavity mode at a fixed laser frequency and scanning the cavity length across resonance. After attenuation of the output, photon counts are measured on a photomultiplier tube (PMT). The increased linewidth is $\kappa/\pi = 47.4~\mathrm{MHz}$, corresponding to a finesse of $\mathfrak{F} = 1490$ and outcoupling efficiency of $0.24$. The relevant cavity QED parameters for our system are thus $(g,\kappa,\gamma)/2\pi = (3.92,23.7,19.6)~\mathrm{MHz}$, which gives a single-atom cooperativity of $C = g^{2}/\kappa\gamma = 0.033$. Due to the comparable strengths of the parameters, our system lies in the intermediate regime of cavity QED~\cite{childs1994, kimble1994}. \subsection{Ion trap for enhanced light collection} Our cavity QED system requires an optically open ion trap where the ion can be precisely placed inside the optical mode. The RF quadrupole trap is a modified version of an earlier design, where the ion trap electrodes can be adjusted independently~\cite{deslauriers2006a}. The designed trap consists of two identical laser machined alumina substrates with lithographically patterned electrodes (Figure~\ref{fig:doublefork}). Each substrate is mounted on a linear positioner inside the vacuum chamber such that the position of the ion trap can be placed inside the cavity mode \emph{in situ}. 
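The cavity figures of merit quoted above follow directly from the measured free spectral range and linewidths; a quick numerical cross-check (ours):

```python
C_LIGHT = 2.99792458e8                  # speed of light, m/s

fsr = 70.5e9                            # measured free spectral range, Hz
length_mm = C_LIGHT / (2 * fsr) * 1e3   # cavity length, ~2.126 mm

finesse_initial = fsr / 18.6e6          # FSR / FWHM, ~3790
finesse_degraded = fsr / 47.4e6         # ~1490 after degradation

g, kappa, gamma = 3.92, 23.7, 19.6      # (g, kappa, gamma)/2pi in MHz
C1 = g ** 2 / (kappa * gamma)           # single-atom cooperativity, ~0.033
```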
\begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth,keepaspectratio]{Figure1} \end{center} \caption{Trap electrode and cavity geometry. \emph{(a)} Photograph of a substrate ready for wiring. \emph{(b)} Two identical substrates are inserted between the cavity mirrors such that the ion resides in the cavity mode. \emph{(c)} A top view of the ion trap inside the cavity. The substrates are separated by $180~\text{\textmu m}$ and are $1~\mathrm{mm}$ from the mirror. The mirrors are mounted in metal sheaths to provide compensation along the cavity direction. \emph{(d)} The tapered tip is machined to three individual tines. The outer tines are RF ground and provide compensation fields, while the center tine is RF high.} \label{fig:doublefork} \end{figure} Figure~\ref{fig:doublefork}a is a photograph of a finished substrate. Each substrate is laser machined to a narrow finger approximately $300~\text{\textmu m}$ wide. This finger is further machined into three individual tines, as illustrated in Fig.~\ref{fig:doublefork}d. The outer tines are approximately $100~\text{\textmu m}$ wide and provide a nearby RF ground. The center tine is approximately $50~\text{\textmu m}$ wide and is at RF high. The gaps between the tines are about $25~\text{\textmu m}$, and the tips are beveled on both sides to minimize surface area visible to the ion. The back portion of the electrodes provides enough space for onboard RF filters that are wire bonded onto the substrate. Gold is evaporated onto the surface of the substrate up to a thickness of $1~\text{\textmu m}$. A particular lithographic process was developed to ensure gold was coated around the entire substrate tip. Stray electric fields can be compensated by applying static DC offset voltages to the outer tines. To compensate for stray fields along the cavity axis, we apply a DC voltage on metal sheaths placed around the cavity mirrors as shown in Figure~\ref{fig:doublefork}c. 
To avoid diffractive losses in the cavity mode due to the presence of the ion trap, the trap separation was chosen to be $180~\text{\textmu m}$, which is greater than three times the mode diameter $2w_{0} = 50~\text{\textmu m}$. At this separation, the reduction in trap confinement compared to an ideal hyperbolic electrode trap of the same characteristic size---known as the voltage efficiency factor $\eta$---is $0.45$. We apply $300~\mathrm{V}$ of RF at $21.6~\mathrm{MHz}$ and observe secular frequencies of $4~\mathrm{MHz}$. The secular frequencies are measured by resonantly driving the secular motion of the ion with a sinusoidal voltage on one of the outer tines. While monitoring the ion motion on a CCD camera, the frequency of the voltage is swept across the motional resonances. From the orientation of the motion on the camera, we are able to discriminate between motion along the cavity axis and along the trap axis. We perform this measurement as a function of ion trap separation as well as a DC bias voltage on the RF electrodes. To lowest order, the secular frequencies of the trap are given by \begin{equation} \omega_{i} = \sqrt{ \frac{eU_{0}Q_{i}}{m} + \frac{1}{2}\left( \frac{e V_{0} Q_{i}}{m\Omega} \right)^{2}}, \label{eqn:secFreq} \end{equation} where $Q_{i}$ is the quadrupole moment of the trap potential in the $i$-th direction. Since the quadrupole moment is traceless, we note that $Q_{x}+Q_{y} + Q_{z}=0$. Along the electrode axis, the quadrupole moment is $Q_{x} = \eta/x_{0}^{2}$, where $2x_{0}$ is the separation of the trap electrodes. The voltage on the RF electrode consists of a DC bias $U_{0}$ and RF voltage $V_{0}$ at frequency $\Omega$. Figures~\ref{fig:secFreq}a--d illustrate the measured secular frequencies for various separations and bias voltages. The solid lines are fits to the data and show good agreement with equation~\ref{eqn:secFreq}.
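Equation~\ref{eqn:secFreq} is straightforward to evaluate numerically. The sketch below (ours) uses nominal values from the text; since the RF voltage actually delivered to the electrodes is not fully specified, it reproduces the MHz scale of the measured secular frequencies rather than the exact $4~\mathrm{MHz}$.

```python
import math

E_CHARGE = 1.602176634e-19              # elementary charge, C
AMU = 1.66053906660e-27                 # atomic mass unit, kg

def secular_frequency(U0, V0, Omega, Q, mass):
    """Equation (secFreq): lowest-order secular (angular) frequency for
    quadrupole moment Q, DC bias U0, and RF amplitude V0 at Omega."""
    eom = E_CHARGE / mass
    return math.sqrt(eom * U0 * Q + 0.5 * (eom * V0 * Q / Omega) ** 2)

m_yb = 174 * AMU
x0 = 90e-6                              # half of the 180 um separation
Q_x = 0.45 / x0 ** 2                    # Q_x = eta / x0**2 with eta = 0.45
w = secular_frequency(0.0, 300.0, 2 * math.pi * 21.6e6, Q_x, m_yb)
f_mhz = w / (2 * math.pi) / 1e6         # a few MHz, order of the measurement
```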
From the data, we are able to extract the voltage efficiency factor for various separations and compare it to a numerical simulation of the trap (Figure~\ref{fig:secFreq}e). \begin{figure} \begin{center} \includegraphics[width=\columnwidth,keepaspectratio]{secFreq_new} \end{center} \caption{Electrical characterization of our trap. The secular frequency of the trap is measured for various bias voltages $U_{0}$ and trap separations. Crosses ($+$) indicate frequencies along the trap axis, while ($\times$) indicate frequencies along the cavity axis. Solid lines indicate fits to equation~\ref{eqn:secFreq}. Electrode separations are: \emph{(a)} $2x_{0}=130~\text{\textmu m}$, \emph{(b)} $150~\text{\textmu m}$, \emph{(c)} $170~\text{\textmu m}$, \emph{(d)} $200~\text{\textmu m}$. \emph{(e)} The voltage efficiency factor from our measurements is compared to numerical simulations. } \label{fig:secFreq} \end{figure} \subsection{Experimental Setup} \label{sec:setup} A diagram of the experimental apparatus is shown in Fig.~\ref{fig:setup}. The ion trap substrates are attached to individual linear positioners, allowing them to be placed inside the cavity mode with micron-level precision. To coarsely align the ion to the cavity mode, the cavity is repeatedly scanned over resonance with $739~\mathrm{nm}$ light pumping the cavity. The transmission peaks are monitored as the electrodes are inserted into the cavity mode. Loss of transmission gives a rough estimate of the position of the mode. Finer adjustment of the ion position is made by pumping the cavity with ultraviolet light and increasing the photon scatter from the ion out the side of the cavity. Finally, the ion--cavity coupling is improved iteratively by monitoring and maximizing the fluorescence emitted through the cavity.
\begin{figure} \begin{center} \includegraphics{ExptSetup3} \end{center} \caption{The experimental system consists of a diode laser at $739~\mathrm{nm}$ that is frequency doubled to drive the atom. The two arms of the $369.5~\mathrm{nm}$ light are combined into a fiber that delivers the cooling and pump light to the ion. The cooling and pump are toggled during the experiment, allowing the pump to vary in intensity while keeping the cooling constant. The cavity is stabilized to the fundamental at $739~\mathrm{nm}$. A fiber electro-optic phase modulator (EOM) provides tunability of the cavity resonance as well as the necessary frequency offset to ensure co-resonance of the beams. \emph{(Top Right)} A side view schematic along the cavity axis indicating the locations of the cooling/pump beam, the ionization beam, and the repump beam. The ion is imaged from above.} \label{fig:setup} \end{figure} A thermal beam of ytterbium is produced by resistive heating of a stainless steel tube in which Yb metal is packed. To minimize the probability of coating the cavity mirrors with ytterbium, current is run through the ovens only slightly above the threshold to produce ytterbium atoms. Additionally, the thermal beam of atoms is perpendicular to the cavity axis. Ions are produced by a resonantly enhanced two-photon transition to the continuum with a $398.9~\mathrm{nm}$ and a $369.5~\mathrm{nm}$ beam~\cite{olmschenk2007}. The ions are cooled on the $S_{1/2} \leftrightarrow P_{1/2}$ transition of ${}^{174}\mathrm{Yb}^{+}$ at $369.5~\mathrm{nm}$ by a frequency-doubled diode laser at $739~\mathrm{nm}$. A $935.2~\mathrm{nm}$ beam is used to repump the ion out of a low-lying $D_{3/2}$ level, whose lifetime ($52.7~\mathrm{ms}$) is longer than the measurement time. To measure the background light, the repump light is turned off with an AOM.
The optical cavity is stabilized through a Pound--Drever--Hall locking technique with $200~\text{\textmu W}$ of $739~\mathrm{nm}$ light at a frequency $\nu_{\mathrm{ir}}$. Due to the Gouy phase shift and differences in the indices of refraction of the optical coating at $739~\mathrm{nm}$ and at $369.5~\mathrm{nm}$, the second harmonic $2\nu_{\mathrm{ir}}$ is not resonant with the cavity. From the resonance condition of the $q$-th longitudinal mode of a symmetric optical cavity of length $L$ and mirror radii of curvature $\mathcal{R}$~\cite{siegman}, \begin{equation} \pi q = \frac{2\pi\nu_{q} L}{c} - \arccos \left( 1 - \frac{L}{\mathcal{R}} \right), \label{eqn:resonance} \end{equation} we find the ultraviolet resonance $\nu_{\mathrm{uv}}$ to be shifted from the harmonic of the infrared resonance by an amount $\Delta f = \nu_{\mathrm{ir}} - \frac{1}{2}\nu_{\mathrm{uv}}$ given by \begin{equation} \Delta f = \nu_{\mathrm{ir}} \frac{L_{\mathrm{uv}} - L_{\mathrm{ir}}}{L_{\mathrm{uv}}} + \frac{c}{2\pi L_{\mathrm{uv}}} \left[ \phi_{\mathrm{ir}} -\frac{1}{2}\phi_{\mathrm{uv}} \right]. \label{eqn:freq_shift} \end{equation} In equation~\ref{eqn:freq_shift}, we take the cavity lengths in the ultraviolet and infrared to be $L_{\mathrm{uv}}$ and $L_{\mathrm{ir}}$, and the Gouy phase $\phi_{\mathrm{ir(uv)}} = \arccos\left(1 - \frac{L_{\mathrm{ir(uv)}}}{\mathcal{R}_{\mathrm{ir(uv)}}} \right)$. For our cavity, a $2.3~\mathrm{GHz}$ frequency offset of the infrared light is required to reach the resonance of the ytterbium ion, corresponding to a length difference of approximately $12~\mathrm{nm}$. The offset is provided by a wide-bandwidth fiber EOM. This modulator additionally gives us independent control of the cavity resonance with respect to the laser frequency. With the cavity locked, both ultraviolet light from the ion and infrared light exit the cavity.
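Equation~\ref{eqn:freq_shift} can be evaluated numerically. The cavity length of $2~\mathrm{mm}$ and mirror radius of curvature of $25~\mathrm{mm}$ used below are assumptions (they are not stated in this excerpt), and $\Delta f$ is only meaningful modulo the free spectral range; the sketch mainly verifies that the quoted $12~\mathrm{nm}$ length difference is consistent with a few-GHz offset.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def gouy_phase(length, roc):
    """Gouy phase term arccos(1 - L/R) appearing in eq. (resonance)."""
    return math.acos(1.0 - length / roc)

def harmonic_offset(nu_ir, l_ir, l_uv, roc_ir, roc_uv):
    """Delta f = nu_ir - nu_uv/2 from equation (freq_shift)."""
    length_term = nu_ir * (l_uv - l_ir) / l_uv
    gouy_term = C / (2.0 * math.pi * l_uv) * (
        gouy_phase(l_ir, roc_ir) - 0.5 * gouy_phase(l_uv, roc_uv))
    return length_term + gouy_term

# Assumed geometry: 2 mm cavity, 25 mm ROC, 12 nm coating-penetration difference.
NU_IR = C / 739e-9
DELTA_F = harmonic_offset(NU_IR, 2.0e-3, 2.0e-3 + 12e-9, 25e-3, 25e-3)
```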
After an initial collimating lens, the output of the cavity is sent through a prism to separate the colors (Figure~\ref{fig:setup}). The infrared light is sent onto a photodiode to monitor the cavity transmission, while the ultraviolet beam is spatially filtered and directed onto a PMT. The overall efficiency of our detection system (including detector quantum efficiency) is $4\%$. \section{Experimental Results} The experimental procedure is as follows. The ion is Doppler cooled for $20~\mathrm{ms}$ with a weak cooling beam. Next, a strong pump beam is turned on for $2~\mathrm{ms}$ and photon counts out of the cavity are recorded. For half the detection time, the $935~\mathrm{nm}$ repump light is off, providing a measurement of the background light for subtraction. We average over $40$ measurement cycles before changing the cavity frequency. In this manner, a lineshape is built up for a set pump intensity. The strong pump is detuned from atomic resonance by $10~\mathrm{MHz}$ to avoid heating of the ion during the measurement. The power of the coherent pump is calibrated by ion fluorescence from the side of the cavity. The ion fluorescence is collected by a microscope objective and imaged onto a PMT. By measuring the scatter rate out the side of the cavity for various input powers, the intensity at the ion can be determined in terms of the saturation intensity. When the cavity is resonant, we observe up to $8000$ photon counts per second on the PMT. Our PMT has a quantum efficiency of $19\%$ and the prism has a transmission of $23.5\%$. Given these efficiencies in the detection path, our measured value corresponds to $200,000$ photons emerging from the cavity per second. From our estimated outcoupling efficiency of $0.24$, we estimate the cavity collects $800,000$ photons per second. Table~\ref{tab:effic} summarizes the efficiencies and the inferred photon scatter rate into the cavity.
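The efficiency chain quoted above can be checked by dividing the detected count rate by each efficiency in turn; this is a sketch of the bookkeeping behind Table~\ref{tab:effic}, using only numbers stated in the text.

```python
def back_out_rates(detected_rate, efficiencies):
    """Divide out each efficiency in the detection chain, starting from the
    detected count rate, to infer the photon rate before each element."""
    rates = [detected_rate]
    for eff in efficiencies:
        rates.append(rates[-1] / eff)
    return rates

# PMT quantum efficiency, prism transmission, vacuum-window transmission,
# and cavity outcoupling efficiency, applied to the 8000 counts/s detected.
RATES = back_out_rates(8000.0, [0.19, 0.235, 0.9, 0.24])
DETECTION_EFFICIENCY = 0.19 * 0.235 * 0.9
```

The product of the first three efficiencies reproduces the quoted $4\%$ overall detection efficiency.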
The photon emission rate agrees with the estimate $p_{coll}\Gamma_{sc}$, where $p_{coll}$ is the collection probability given by~\cite{luo2009} \begin{equation} p_{coll} = \frac{T_{\mathrm{out}}}{\mathcal{L}} \left(\frac{2\kappa}{2\kappa+\gamma} \right) \left(\frac{2C_{\mathrm{eff}}}{1+2C_{\mathrm{eff}}}\right) \label{eqn:pcoll} \end{equation} and $\Gamma_{sc}$ is the photon scatter rate. Here, $T_{\mathrm{out}}/\mathcal{L}$ is the outcoupling efficiency, while $2\kappa/(2\kappa+\gamma)$ is the ratio of the rate at which the photon leaves the cavity to the total rate at which the ion--cavity system loses photons. The third factor, $2C_{\mathrm{eff}}/(1+2C_{\mathrm{eff}})$, is the Purcell enhancement with an effective cooperativity $C_{\mathrm{eff}}$ accounting for a reduced coherent coupling rate due to averaging of the atomic motion across the cavity standing wave. \begin{table} \centering \begin{tabular}{ c | c | c |} & Efficiency & Count rate\\ \hline Detected & & $8000~\mathrm{s}^{-1}$ \\ Before PMT & $0.19$ & $42,000~\mathrm{s}^{-1}$ \\ Before Prism & $0.235$ & $180,000~\mathrm{s}^{-1}$ \\ Before vacuum window & $0.9$ & $200,000~\mathrm{s}^{-1}$ \\ Outcoupling efficiency & $0.24$ & $\approx800,000~\mathrm{s}^{-1}$ \\ \end{tabular} \caption{List of efficiencies and the effective photon scatter rates before each element. Approximately $800,000$ photons are scattered into the cavity per second, of which only $200,000$ photons per second emerge from the cavity.} \label{tab:effic} \end{table} To compute the enhancement in photon collection efficiency, we compare our result to isotropic scattering of photons from a single ytterbium ion. We define the enhancement to be the ratio of photons emerging from the cavity to the isotropic scatter rate into the solid angle of the cavity mode in the outcoupling direction.
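Equation~\ref{eqn:pcoll} is straightforward to evaluate; since $\kappa$, $\gamma$, and $C_{\mathrm{eff}}$ are not quoted in this excerpt, the values used in the check below are illustrative assumptions only.

```python
def collection_probability(t_out_over_loss, kappa, gamma, c_eff):
    """Photon collection probability p_coll of equation (pcoll):
    outcoupling efficiency x cavity escape fraction x Purcell factor."""
    escape = 2.0 * kappa / (2.0 * kappa + gamma)
    purcell = 2.0 * c_eff / (1.0 + 2.0 * c_eff)
    return t_out_over_loss * escape * purcell
```

As expected from the product form, $p_{coll}$ is bounded above by $T_{\mathrm{out}}/\mathcal{L}$ and approaches that bound only for $\gamma \ll \kappa$ and large effective cooperativity.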
The solid angle subtended by the cavity mode is given by \begin{equation} \Delta \Omega_{\mathrm{cav}} = \frac{2\lambda^{2}}{\pi w_{0}^{2}} = 1.465\times10^{-4}~\mathrm{sr}, \label{eqn:solidangle} \end{equation} which is twice the solid angle subtended by the mode in the outcoupling direction. We calculate the photon scatter rate for isotropic scattering into this solid angle for the parameters where we observe maximal counts on our PMT. We estimate that without enhancement we would observe $300$ photons per second scattered into the outcoupling direction of the cavity mode. This yields a spontaneous emission enhancement factor of $600$ compared to the free-space value into the same solid angle of the cavity mode in the outcoupling direction. Other experiments in the intermediate regime of cavity QED have reported an enhancement of $18.5$ in spontaneous emission into an undriven cavity mode. This was achieved by delivering cold atoms from a magneto-optical trap into the cavity~\cite{terraciano2007}. Due to the strong confinement of our atom, our system achieves a much higher enhancement of spontaneous emission than any other experiment in the intermediate regime. In our experiment, we measure the photon counts from the cavity as a function of the cavity detuning $\delta_{c} = \omega_{c} - \omega_{L}$. The fiber EOM offset allows us to set the cavity resonance independently of the pump frequency. We set the atom--laser detuning $\delta_{0} = \omega_{0} - \omega_{L}$ to be $10~\mathrm{MHz}$ below resonance, and measure the count rate versus the cavity detuning. Figure~\ref{fig:composite} illustrates the lineshapes observed for the coherently driven atom for various pumping strengths ($I/I_{\mathrm{sat}} = 2, 50, 150, 600$). At low intensities, we observe a Lorentzian lineshape consistent with a cavity-broadened emission line. For strong pump intensities we observe the emergence of a three-peak structure characteristic of the Mollow triplet~\cite{mollow1969}.
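The enhancement estimate above can be reproduced numerically from equation~\ref{eqn:solidangle}. The total free-space scatter rate of $\sim 5.4\times10^{7}~\mathrm{s}^{-1}$ assumed below is not stated in the text; it is inferred from the quoted $300$ photons per second into the outcoupling half of the mode solid angle, so this is a consistency sketch rather than an independent calculation.

```python
import math

def mode_solid_angle(wavelength, waist):
    """Solid angle of both directions of a TEM00 mode, equation (solidangle)."""
    return 2.0 * wavelength ** 2 / (math.pi * waist ** 2)

def cavity_enhancement(cavity_rate, total_scatter_rate, wavelength, waist):
    """Ratio of photons emerging from the cavity to isotropic scattering
    into the outcoupling half of the mode solid angle."""
    iso_rate = (total_scatter_rate
                * (mode_solid_angle(wavelength, waist) / 2.0)
                / (4.0 * math.pi))
    return cavity_rate / iso_rate
```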
\begin{figure} \begin{center} \includegraphics[width=\columnwidth,keepaspectratio]{counts-vs-detuning} \end{center} \caption{Steady-state photon count rate from the cavity versus cavity detuning for several drive intensities. At intensities large compared to the saturation intensity, we observe the onset of a triplet structure. Red: $I/I_{\mathrm{sat}} = 2$, Green: $I/I_{\mathrm{sat}} = 50$, Blue: $I/I_{\mathrm{sat}} = 150$, Purple: $I/I_{\mathrm{sat}} = 600$. } \label{fig:composite} \end{figure} We attribute this three-peak structure to a cavity-broadened fluorescence spectrum of the ion. However, the fluorescence spectrum of a two-level atom cannot fully describe our data. Instead, the Zeeman levels and their various driving strengths must be taken into account. The 174 isotope of ytterbium has zero nuclear spin, and therefore has no hyperfine structure. The $S_{1/2}$ and $P_{1/2}$ manifolds each consist of two nearly degenerate Zeeman states, split by a weak magnetic field perpendicular to the cavity axis (Figure~\ref{fig:energylevels}). \begin{figure} \begin{center} \includegraphics{energylevels.pdf} \end{center} \caption{Relevant atomic energy levels for the ${}^{174}\mathrm{Yb}^{+}$ ion. The Zeeman levels are shifted by $\pm\hbar \Delta_{s,p}$. The pump light is detuned from the bare resonance by $\delta_{0}$, and the cavity detuning is $\delta_{c} = \omega_{c} - \omega_{L}$. } \label{fig:energylevels} \end{figure} We label the four levels of interest as $\{\ket{S_{1/2},\pm}, \ket{P_{1/2},\pm}\}$, where $S_{1/2} (P_{1/2})$ corresponds to the ground (excited) state manifold. In a frame rotating at the classical drive frequency $\omega_{L}$, the atomic Hamiltonian is \begin{equation} H_{\mathrm{a}} = \hbar \sum_{\ell, m} m\,\Delta_{\ell} \ket{\ell,m}\bra{\ell,m} + \hbar \delta_{0} \sum_{m} \ket{e,m}\bra{e,m}, \label{eqn:H_atom} \end{equation} where $m=\pm1$ and $\pm\Delta_{s(p)}$ is the Zeeman level shift of $\ket{S_{1/2}(P_{1/2}),\pm}$.
The second term is the energy of the excited state manifold in the rotated frame, where $\delta_{0} = \omega_{0} - \omega_{L}$. The classical driving field, $\mathbf{E} = \frac{1}{2} E_{0} \bm{\epsilon} e^{-i\omega_{L}t} + c.c.$, can drive all three types of transitions in the ion ($\Delta m = \pm1,0$), depending on the orientation of the polarization vector $\bm{\epsilon}$ with respect to the magnetic field. The interaction Hamiltonian with the classical field can be written as \begin{equation} H_{\mathrm{d}} = -\frac{\hbar\Omega}{2} \left[ \hat{\mathbf{A}} \cdot \bm{\epsilon} + \hat{\mathbf{A}}^{\dagger}\cdot\bm{\epsilon} \right] \label{eqn:H_drive} \end{equation} where $\Omega = \mu E_{0}/\hbar = \gamma \sqrt{I/2I_{\mathrm{sat}}}$ is the Rabi frequency for the $S_{1/2} \leftrightarrow P_{1/2}$ transition, and the lowering operator is \begin{equation} \hat{\mathbf{A}} = \sum_{q,m,m'}\braket{S_{1/2},m;1,q}{P_{1/2},m'} \ket{S_{1/2},m} \bra{P_{1/2},m'} \mathbf{e}_{q} \end{equation} with $\mathbf{e}_{q}$ being the spherical basis vector. The saturation intensity for the $S_{1/2} \leftrightarrow P_{1/2}$ transition is $I_{\mathrm{sat}} = \hbar \omega_{0}^{3} \gamma/12\pi c^{2} = 50.7~\mathrm{mW}/\mathrm{cm}^{2}$. The vector component $\hat{A}_{q}$ describes the transition between the magnetic sublevels $m$ and $m-q$ and is proportional to the Clebsch--Gordan coefficient of that transition~\cite{boozerThesis}. Additionally, we consider the two degenerate polarization modes of the cavity at a frequency $\omega_{c}$ detuned from the drive frequency by $\delta_{c} = \omega_{c} - \omega_{L}$. The Hamiltonian for the two cavity modes in the rotating frame is \begin{equation} H_{c} = \hbar\delta_{c} \sum_{p=H,V} \hat{a}_{p}^{\dagger} \hat{a}_{p}. 
\label{eqn:H_cav} \end{equation} The Jaynes--Cummings cavity interaction consists of the coupling of the three atomic transitions to the two cavity modes, and is given by \begin{equation} H_{jc} = i \hbar g \sum_{p=H,V} \left[ \hat{a}_{p}^{\dagger} (\hat{\mathbf{A}}\cdot \mathbf{e}_{p}^{*}) - (\hat{\mathbf{A}}^{\dagger}\cdot \mathbf{e}_{p})\hat{a}_{p} \right]. \label{eqn:H_jc} \end{equation} The steady-state photon count rate is given by the steady-state intracavity photon number for both polarizations and the cavity decay rate: $2\kappa\left[ \langle n_{H} \rangle_{ss} + \langle n_{V} \rangle_{ss} \right]$. To compute the intracavity photon number, we numerically solve the master equation \begin{align} \dot{\rho} &= -\frac{i}{\hbar} \left[H_{\mathrm{a}} + H_{\mathrm{d}} + H_{c} + H_{jc},\,\rho\right] \label{eqn:master} \\ \nonumber &+ \gamma \sum_{q} \left[ \hat{A}_{q} \rho \hat{A}_{q}^{\dagger} - \frac{1}{2} \left( \hat{A}_{q}^{\dagger} \hat{A}_{q} \rho + \rho \hat{A}_{q}^{\dagger} \hat{A}_{q} \right) \right] \\ \nonumber &+ 2\kappa \sum_{p} \left[ \hat{a}_{p} \rho \hat{a}_{p}^{\dagger} - \frac{1}{2}\left( \hat{a}_{p}^{\dagger}\hat{a}_{p}\rho + \rho \hat{a}_{p}^{\dagger}\hat{a}_{p} \right) \right] \end{align} in steady state using the Quantum Optics Toolbox for \textsc{Matlab}~\cite{tan1999} with our experimental values. The numerical calculation is performed for various detunings of the cavity and parameters of the classical beam (intensity, orientation relative to the magnetic field, and polarization). We fit the data to these simulations and achieve qualitative agreement. A typical fit curve is illustrated in Figure~\ref{fig:compare}, where the parameters for the classical beam are $I = 600I_{\mathrm{sat}}$, oriented $45^{\circ}$ from the magnetic field with linear polarization tilted $35^{\circ}$ from the cavity axis. \begin{figure} \begin{center} \includegraphics[width=\columnwidth,keepaspectratio]{simcomp_new2.pdf} \end{center} \caption{Typical fit to our data.
The solid curve is a numerical calculation of the steady-state intracavity photon number of our cavity versus cavity detuning $\delta_{c}/2\pi$. Taking into account the Zeeman levels in ${}^{174}\mathrm{Yb}^{+}$, we reach qualitative agreement with the observed data. } \label{fig:compare} \end{figure} Previously, strongly driven atoms in a cavity QED system have been studied by passing an atomic beam through an optical cavity, where the Mollow triplet has been observed through the cavity output~\cite{zhu1988}. Extending such experiments to a single trapped atom can achieve significant vacuum-field dressed-state pumping and steady-state inversion~\cite{lewenstein1988, savage1988, hughes2011}. However, such experiments have never been done in trapped neutral atom cavity QED experiments due to the relatively weak confinement compared to the atomic recoil. The strong confinement of our trapped ion allows us to observe the Mollow triplet coupled to a cavity at the single-atom level for the first time. These results open the way towards studying cavity QED physics with trapped atoms in a strongly driven regime. Superconducting qubits in a circuit QED system have recently been used to study similar physics~\cite{baur2009}. \section{Conclusion} We have designed and fabricated a trapped ion cavity QED system in which we observe a Purcell enhancement of scattered light into the cavity mode. The scatter rate is two orders of magnitude larger than the scatter rate into the solid angle subtended by the optical cavity. Additionally, we have investigated the photon count rate from the cavity as a function of the cavity detuning and the pump strength. We have found that at high intensities the steady-state count rate exhibits a three-peak structure characteristic of a Mollow triplet. Numerical models of a ${}^{174}\mathrm{Yb}^{+}$ ion coupled to an optical cavity yield qualitative agreement with our data.
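The structure of the steady-state calculation can be sketched in a much reduced form. The NumPy stand-in below replaces the four-level Zeeman model and two polarization modes with a single two-level atom and one cavity mode, and the chosen rates are arbitrary illustrative values; it reproduces the shape of the computation (vectorized Liouvillian, trace-normalized steady state, intracavity photon number), not our fitted curves.

```python
import numpy as np

def lindblad_steadystate(h, c_ops):
    """Steady state of rho' = -i[H, rho] + sum_k D[c_k] rho (hbar = 1),
    built from the column-stacked Liouvillian:
    vec(A rho B) = (B^T kron A) vec(rho)."""
    d = h.shape[0]
    eye = np.eye(d)
    liouv = -1j * (np.kron(eye, h) - np.kron(h.T, eye))
    for c in c_ops:
        cdc = c.conj().T @ c
        liouv += (np.kron(c.conj(), c)
                  - 0.5 * (np.kron(eye, cdc) + np.kron(cdc.T, eye)))
    # solve the singular system L v = 0 together with Tr(rho) = 1
    a = np.vstack([liouv, eye.reshape(1, d * d, order="F")])
    b = np.zeros(d * d + 1, dtype=complex)
    b[-1] = 1.0
    rho_vec = np.linalg.lstsq(a, b, rcond=None)[0]
    return rho_vec.reshape(d, d, order="F")

def driven_jc_photon_number(delta0, deltac, rabi, g, kappa, gamma, nmax=6):
    """Steady-state intracavity photon number for a coherently driven
    two-level atom coupled to one cavity mode (Jaynes--Cummings)."""
    a = np.kron(np.eye(2), np.diag(np.sqrt(np.arange(1, nmax)), 1))
    sm = np.kron(np.array([[0.0, 1.0], [0.0, 0.0]]), np.eye(nmax))
    h = (delta0 * sm.conj().T @ sm + deltac * a.conj().T @ a
         + 0.5 * rabi * (sm + sm.conj().T)
         + 1j * g * (a.conj().T @ sm - sm.conj().T @ a))
    rho = lindblad_steadystate(h, [np.sqrt(gamma) * sm,
                                   np.sqrt(2.0 * kappa) * a])
    return np.real(np.trace(a.conj().T @ a @ rho)), rho
```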
Additionally, we observe a degradation of the cavity finesse at ultraviolet frequencies. This unexpected effect is possibly a materials-related issue and requires further study. Even at a distance of one millimeter from the ion, the presence of the mirror had a noticeable influence on the ion position, as evidenced by a dynamic and variable stray electric field. This necessitates better shielding and possibly a cavity in a near-concentric configuration, which would allow a larger mirror spacing while maintaining the small mode volume required for adequate atom--cavity coupling. Nevertheless, the enhancement of scattered light into the cavity mode demonstrates the feasibility of increasing the photon collection efficiency for quantum networks. Such an ion--cavity system can be readily integrated into current protocols for trapped-ion quantum networks, providing a pathway towards a practical, scalable quantum network. For example, ion--photon entanglement with the polarization degree of freedom~\cite{matsukevich2008,luo2009} is most amenable to an optical cavity. An optical cavity can be locked to the $\ket{S_{1/2},F=1} \leftrightarrow \ket{P_{1/2}, F=1}$ transition of ${}^{171}\mathrm{Yb}^{+}$ with the quantization axis along the cavity mode. A weak $\pi$-polarized probe on an ion initialized to the $\ket{S_{1/2},F=0,m_{F}=0}$ state can excite the atom to the $\ket{P_{1/2},F=1,m_{F}=0}$ level, which is coupled to the $\ket{S_{1/2},F=1,m_{F}=\pm1}$ ground states through the two polarization modes of the cavity. Such a procedure can produce ion--photon entangled pairs that are useful for ion--photon quantum networks, loophole-free tests of Bell inequalities, and the generation of cluster states. \begin{acknowledgments} The authors would like to thank Luis A. Orozco and Howard Carmichael for providing useful insight into cavity QED theory as well as discussions of experimental techniques and system modeling.
This work is supported by the US Army Research Office (ARO) with funds from the IARPA MQCO Program and the MURI program on Quantum Optical Circuits of Hybrid Quantum Memories, the NSF Physics at the Information Frontier Program, and the NSF Physics Frontier Center at JQI. \end{acknowledgments} \bibliographystyle{apsrev}
\section{Introduction} \label{Sec1} The main goal of this note is to provide new results concerning the application of symmetrization methods in the context of \emph{nonlocal, nonlinear} elliptic problems. In particular, we will focus on obtaining new estimates for solutions to the elliptic problem \begin{equation} \label{mainpp} \left\{ \begin{array}[c]{lll}% \left( -\Delta_{p}\right)^{s}u=f & & \text{in }% \Omega,\\ \\ u=0 & & \text{on }{\mathbb R}^{N}\setminus\Omega. \end{array}\right. \end{equation} The operator $\left( -\Delta_{p}\right)^{s}$ is the so-called \emph{fractional $p$-Laplacian} and is defined for $s\in (0,1)$ and $p>1$ by means of the singular integral formula \[ (-\Delta_p)^su(x)=\gamma(N,s,p)\;\text{P.V.} \int_{{\mathbb R}^N}\frac{|u(x)-u(y)|^{p-2} (u(x)-u(y))}{|x-y|^{N+sp}}dy, \] where $\gamma(N,s,p)$ is a suitable normalization constant, whose value is specified as (see \cite[Lemma 5.1]{CDVAZ}) \begin{equation*}\label{constant} \gamma(N,s,p)=\frac{sp\,2^{2s-2}(1-s)}{\pi^{\frac{N-1}{2}}}\frac{\Gamma\left(\frac{N+sp}{2}\right)}{\Gamma\left(\frac{p+1}{2}\right)\Gamma(2-s)}. \end{equation*} In this note we will consider only the \emph{degenerate} case $p>2$. More details regarding the correct functional spaces for the source term $f$ and the corresponding solution $u$ will be given in Subsection \ref{Functionspac}. Moreover, we assume $N\geq1$, while the ground set $\Omega\subset {\mathbb R}^{N}$ will always be assumed bounded with Lipschitz boundary.\\[0.2pt] \noindent Inspired by the techniques of the recent work \cite{FeroneVolzone}, in this note we derive some symmetrization estimates for solutions $u$ to problem \eqref{mainpp}, in the form of a \emph{mass concentration comparison}. For the sake of completeness, we recall the main result obtained in \cite{FeroneVolzone}, which was also established in \cite{VazVolSire} through the Caffarelli--Silvestre extension theorem.
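The normalization constant above can be evaluated numerically with standard Gamma-function routines. A useful consistency check, used in the sketch below, is that at $p=2$ it reduces to the well-known constant of the linear fractional Laplacian, $s\,4^{s}\,\Gamma\!\left(\frac{N+2s}{2}\right)/\left(\pi^{N/2}\Gamma(1-s)\right)$.

```python
import math

def gamma_constant(n_dim, s, p):
    """Normalization constant gamma(N, s, p) of the fractional p-Laplacian."""
    return (s * p * 2.0 ** (2.0 * s - 2.0) * (1.0 - s)
            / math.pi ** ((n_dim - 1) / 2.0)
            * math.gamma((n_dim + s * p) / 2.0)
            / (math.gamma((p + 1.0) / 2.0) * math.gamma(2.0 - s)))

def gamma_linear(n_dim, s):
    """Classical constant of the linear fractional Laplacian (p = 2)."""
    return (s * 4.0 ** s * math.gamma((n_dim + 2.0 * s) / 2.0)
            / (math.pi ** (n_dim / 2.0) * math.gamma(1.0 - s)))
```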
If we consider the linear, nonlocal Dirichlet problem \begin{equation} \label{symmline} \left\{ \begin{array}[c]{lll}% \left( -\Delta\right)^{s}u=f & & \text{in }% \Omega,\\ \\ u=0 & & \text{on }{\mathbb R}^{N}\setminus\Omega. \end{array}\right. \end{equation} then the \emph{worst} radial problem, that is, the problem whose solution $v$ is the largest one in the class of problems of the type \eqref{mainpline} with domains $\Omega$ of fixed measure and corresponding data $f$, is the \emph{symmetrized} problem \begin{equation} \label{mainpline} \left\{ \begin{array}[c]{lll}% \left( -\Delta\right)^{s}v=f^{\#} & & \text{in }% \Omega^{\#},\\ \\ v=0 & & \text{on }{\mathbb R}^{N}\setminus\Omega^{\#}, \end{array}\right. \end{equation} where $\Omega^{\#}$ is the ball centred at the origin with volume $|\Omega|$: indeed, what we have is that \begin{equation}\label{comparis} u^{\#}\prec v, \end{equation} where $u^{\#}$ is the \emph{Schwarz decreasing rearrangement of $u$} and $\prec$ is the order relation in the form of mass concentration comparison (see Section \ref{Sec2} for precise definitions). The machinery used in \cite{FeroneVolzone} consists in choosing a suitable truncation function in \eqref{symmline} and a convenient Riesz rearrangement inequality, which allows one to obtain an integral estimate involving the Schwarz rearrangement $u^\#$ of $u$ and the datum $f$. Then, rather subtle and technical work is needed to reinterpret the latter estimate as a comparison between integral mean functions of $u^\#$ and $v$ on balls, which in turn implies \eqref{comparis} by means of maximum principle arguments. Finally, an entire section of \cite{FeroneVolzone} is dedicated to showing that estimate \eqref{comparis} is \emph{optimal} for $s\in (0,1)$, in the sense that no pointwise comparison is achievable, while Talenti's classical result is recovered in the limit $s\rightarrow1$.
We refer the interested reader to \cite{FeroneVolzone} for all the details and for an exhaustive list of references concerning symmetrization results.\\[0.2pt] \noindent As mentioned above, we wish to apply the \emph{direct} methods of \cite{FeroneVolzone} (\emph{i.e.}, those which do not use the extension theorem of \cite{Caffarelli-Silvestre}) to the nonlinear context \eqref{mainpp}. For the \emph{local} context $s\rightarrow1$ the main reference is without doubt \cite{talNON}, but several extensions to the case of more general local nonlinear problems can be found (see, for example, \cite{BFM}, \cite{AFT}, \cite{fermess}, \cite{cianchimaz}). In particular, in \cite{talNON} it is shown that if $u$ solves the nonlinear problem \begin{equation*} \left\{ \begin{array}[c]{lll}% -\Delta_{p}u=f & & \text{in }% \Omega,\\ \\ u=0 & & \text{on }\partial\Omega \end{array}\right. \end{equation*} where $-\Delta_{p}$ is the classical $p$-Laplacian, and $v$ is the solution to \begin{equation*} \left\{ \begin{array}[c]{lll}% -\Delta_{p}v=f^{\#} & & \text{in }% \Omega^{\#},\\ \\ v=0 & & \text{on }\partial\Omega^{\#}, \end{array}\right. \end{equation*} then a pointwise estimate is possible, namely \[ u^{\#}\leq v \quad\text{in }\Omega^{\#}. \] Such a result is essentially based on the techniques of \cite{Talenti1}, where a basic role is assigned to the explicit form of the solution $v$, obtained by solving a nonlinear radial ODE. In particular, the following estimate is obtained (with $r=|x|$) \[ |\nabla u^\#(r)|\leq \frac{1}{r^{\frac{N-1}{p-1}}}\left(\int_{B_{r}}f^{\#}\right)^{\frac{1}{p-1}}, \] but an explicit computation gives that the right-hand side equals $|\nabla v|$, and the result follows. \\ Needless to say, when $\Omega$ is a ball such an ODE approach is not applicable to problems of the type \eqref{mainpp}; thus the nonlocal nature of the problem again affects the style of the technical approach.
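The explicit radial computation behind this pointwise bound can be illustrated numerically for the model case $f\equiv 1$ on the unit ball, with the normalization in which the volume constants $\omega_{N}$ are absorbed into the radial ODE $(r^{N-1}|v'|^{p-2}v')' = -r^{N-1}$ (an assumption of the sketch, not the paper's normalization):

```python
import math

def v_explicit(r, n_dim, p, radius=1.0):
    """Explicit radial solution of -Delta_p v = 1 on the ball B_R with
    v = 0 on the boundary, from integrating the radial ODE:
    v(r) = (p-1)/p * N^{-1/(p-1)} * (R^{p/(p-1)} - r^{p/(p-1)})."""
    q = p / (p - 1.0)
    return ((p - 1.0) / p * n_dim ** (-1.0 / (p - 1.0))
            * (radius ** q - r ** q))

def radial_p_laplacian(v, r, n_dim, p, h=1e-5):
    """Finite-difference radial p-Laplacian
    r^{1-N} d/dr ( r^{N-1} |v'|^{p-2} v' )."""
    def flux(x):
        dv = (v(x + h) - v(x - h)) / (2.0 * h)
        return x ** (n_dim - 1) * abs(dv) ** (p - 2.0) * dv
    return (flux(r + h) - flux(r - h)) / (2.0 * h) / r ** (n_dim - 1)
```

The check confirms that the closed-form $v$ solves the equation away from the origin, which is the computation that has no analogue in the nonlocal setting.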
Therefore our main goal will be to compare the solution $u$ to \eqref{mainpp} with the solution $v$ of a suitable nonlocal \emph{radial} problem posed on the ball $\Omega^{\#}$, see problem \eqref{eq.symmetric}. Such a choice of the symmetrized problem is justified by the appearance of a nonlinear integral estimate for the solution $u$, to which a maximum principle argument (in the spirit of the one derived in \cite{FeroneVolzone}) does not seem to apply in a direct form. In any case, ours is the first symmetrization result in the literature for a nonlocal nonlinear problem, and it will allow us to derive $L^{p}$ regularity estimates for $u$ as a quite direct consequence. Finally, this note focuses only on the case $p\ge2$, because we shall need such a convexity requirement for the application of a convenient general Riesz rearrangement inequality. \\[0.2pt] \noindent The paper is organized as follows. Section \ref{Sec2} is entirely devoted to various preliminaries which will be used throughout the text: symmetrization tools will be briefly introduced and the suitable functional context will be settled. In Section \ref{Main} we state the main comparison Theorem \ref{maint} and the $L^{p}$ regularity estimates of Corollary \ref{regularity}. In Section \ref{Proofs} we prove all the stated results. In Section \ref{Open problems} we illustrate open problems and numerical studies. \section{Preliminaries and notation} \label{Sec2} For the sake of completeness, we briefly collect here some preliminary results regarding symmetrization, the functional spaces related to the main problem \eqref{mainpp}, and hypergeometric functions. \subsection{Rearrangements and symmetrization}\label{RearSym} We recall here the basic definitions concerning Schwarz symmetrization and some related fundamental properties.
Readers who wish to find more details of the theory are referred to the classical monographs \cite{Hardy}, \cite{Bennett}, \cite{Kesavan}, \cite{Bandle} or to the paper \cite{Talentirearrinv}. A measurable real function $f$ defined on ${\mathbb R}^{N}$ is called \emph{radially symmetric} (or \emph{radial}) if there is a function $\widetilde{f}:[0,\infty)\rightarrow {\mathbb R}$ such that $f(x)=\widetilde{f}(|x|)$ for all $x\in {\mathbb R}^{N}$. We will often write $f(x)=f(r)$, $r=|x|\ge0$, for such functions by abuse of notation. We say that $f$ is \emph{rearranged} if it is radial, nonnegative and $\widetilde{f}$ is a right-continuous, non-increasing function of $r>0$. A similar definition can be applied for real functions defined on a ball $B_{R}(0)=\left\{x\in{\mathbb R}^{N}:|x|<R\right\}$. Let $f$ be a real measurable function on ${\mathbb R}^N$. If $f$ is such that its \emph{distribution function} $\mu_{f}$ satisfies% \begin{equation}\label{distribution} \mu_{f}( t) :=\left\vert \left\{ x\in{\mathbb R}^{N}:\left\vert f\left( x\right) \right\vert >t\right\} \right\vert<+\infty, \qquad\text{for every }t >0, \end{equation} we define the \emph{one-dimensional decreasing rearrangement} of $f$ as% \[ f^{\ast}\left( \sigma\right) =\sup\left\{ t\geq0:\mu_{f}\left( t\right) >\sigma\right\} \text{ , }\sigma>0. \] If $f$ is a real measurable function on an open set $\Omega\subset{\mathbb R}^N$, we extend $f$ as the zero function in ${\mathbb R}^N\backslash\Omega$ and we define the one-dimensional decreasing rearrangement of $f$ as the rearrangement of such an extension. This means that $f^{\ast}(\sigma)=0$ for $\sigma\in[|\Omega|,\infty)$. From the above definition it follows that $\mu_{f^{\ast}}=\mu_{f}$ (\emph{i.e.,} $f$ and $f^{\ast}$ are equi-distributed) and $f^{\ast}$ is exactly the \emph{generalized right inverse function} of $\mu_{f}$.
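For a function taking finitely many values on cells of equal measure, the decreasing rearrangement is simply the sequence of values of $|f|$ sorted in non-increasing order; the following discrete sketch makes the equi-distribution property $\mu_{f^{\ast}}=\mu_{f}$ and the preservation of $L^{p}$ norms concrete.

```python
import numpy as np

def decreasing_rearrangement(values):
    """Discrete one-dimensional decreasing rearrangement: the values of |f|
    on cells of equal measure, sorted in non-increasing order."""
    return np.sort(np.abs(np.asarray(values, dtype=float)))[::-1]

def distribution_function(values, cell_measure, t):
    """Discrete distribution function mu_f(t) = |{ |f| > t }|."""
    return cell_measure * np.count_nonzero(
        np.abs(np.asarray(values, dtype=float)) > t)
```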
Furthermore, if $\Omega^{\#}$ is the ball of ${\mathbb R}^{N}$ centered at the origin having the same Lebesgue measure as $\Omega$ ($\Omega^{\#}={\mathbb R}^N$ if $|\Omega|=+\infty$), denoting by \begin{equation*} \omega_N=\frac{\pi^{N/2}} {\Gamma\left(\frac N2+1\right)} \end{equation*} the measure of the unit ball in ${\mathbb R}^N$, we define the function \[ f^{\#}\left( x\right) =f^{\ast}(\omega_{N}\left\vert x\right\vert ^{N})\text{ \ , }x\in\Omega^{\#}, \] which will be called the \emph{radially decreasing rearrangement}, or \emph{Schwarz decreasing rearrangement}, of $f$. We easily infer that $f$ is rearranged if and only if $f=f^{\#}$. A simple consequence of the definition is that rearrangements preserve $L^{p}$ norms, that is, for all $p\in[1,\infty]$ \[ \|f\|_{L^{p}(\Omega)}=\|f^{\ast}\|_{L^{p}(0,|\Omega|)}=\|f^{\#}\|_{L^{p}(\Omega^{\#})}\,; \] furthermore, the classical Hardy--Littlewood inequality holds true: \begin{equation} \int_{\Omega}\vert f(x)\, g(x) \vert dx\leq\int_{0}^{\left\vert \Omega\right\vert}f^{\ast}(\sigma)\, g^{\ast}(\sigma) d\sigma=\int_{\Omega^{\#}}f^{\#}(x)\,g^{\#}(x)\,dx\,, \label{HardyLit}% \end{equation} where $f,g$ are measurable functions on $\Omega$. Here we recall an important tool in the proof of our main result, namely the following generalization of the \emph{Riesz rearrangement inequality} (see \cite[Theorem 2.2]{ALIEB}, and \cite[Theorem 1]{Hichem} for a further generalization). \begin{theorem} Let $F:{\mathbb R}^{+}\times{\mathbb R}^{+}\rightarrow{\mathbb R}^{+}$ be a continuous function such that $F(0,0)=0$ and \begin{equation} F(u_{2},v_{2})+F(u_{1},v_{1})\geq F(u_{2},v_{1})+F(u_{1},v_{2})\label{F} \end{equation} whenever $u_{2}\geq u_{1}>0$ and $v_{2}\geq v_{1}>0$.
Assume that $f, g$ are nonnegative measurable functions on ${\mathbb R}^{N}$ which satisfy \eqref{distribution}. Then we have the inequalities \begin{equation} \int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}F(f(x),g(y))W(ax+by)\,dx\,dy\leq \int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}F(f^{\#}(x),g^{\#}(y))W(ax+by)\,dx\,dy\label{mainRieszineq} \end{equation} and \[ \int_{{\mathbb R}^{N}}F(f(x),g(x))\,dx\leq \int_{{\mathbb R}^{N}}F(f^{\#}(x),g^{\#}(x))\,dx, \] for any nonnegative function $W\in L^{1}({\mathbb R}^{N})$ and any choice of nonzero numbers $a$ and $b$. \end{theorem} \subsection{Mass concentration} The following definition, based on the comparison of mass concentrations, will be widely used throughout the text. We refer the reader to \cite{ChRice}, \cite{ALTa}, \cite{Vsym82} for further details and related properties. \begin{definition} Let $f,g\in L^{1}_{loc}({\mathbb R}^{N})$ be two radially symmetric functions on ${\mathbb R}^{N}$. We say that $f$ is less concentrated than $g$, and we write $f\prec g$, if for all $r>0$ it holds \[ \int_{B_{r}(0)}f(x)\,dx\leq \int_{B_{r}(0)}g(x)\,dx. \] \end{definition} The partial order relation $\prec$ is called \emph{comparison of mass concentrations}. Of course, this definition can be suitably adapted if $f,g$ are defined in a ball $B_{R}(0)$ (considering the extension by zero outside $B_{R}(0)$). Moreover, if $f$ and $g$ are defined on two open sets of the same measure $\kappa$, we have that $f^{\#}\prec g^{\#}$ if and only if \[ \int_{0}^{\sigma}f^{\ast}(\tau)\,d\tau\leq \int_{0}^{\sigma}g^{\ast}(\tau)\,d\tau, \] for all $\sigma\in[0,\kappa]$. The comparison of mass concentrations enjoys some nice equivalent formulations (for the proof we refer to \cite{Chong}, \cite{ALTa}, \cite{VANS05}).
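For rearranged step functions, the comparison $f\prec g$ reduces to domination of the partial sums of $f^{\ast}$ by those of $g^{\ast}$; a small numerical sketch (illustrative data, \texttt{numpy} assumed) that also spot-checks the convex-function consequence recalled in the next lemma:

```python
import numpy as np

# Nonincreasing "rearranged profiles" f*, g* on cells of equal width dx.
f_star = np.array([3.0, 2.0, 1.5, 0.5, 0.0])
g_star = np.array([4.0, 2.5, 1.5, 0.4, 0.1])
dx = 0.2

# f ≺ g  ⇔  every partial integral of f* is dominated by that of g*
prec = bool(np.all(np.cumsum(f_star) * dx <= np.cumsum(g_star) * dx))
assert prec

# Chong-type consequence: ∫Φ(f) ≤ ∫Φ(g) for convex Φ ≥ 0 with Φ(0) = 0,
# tested here with Φ(t) = t^p for a few exponents p ≥ 1.
for p in [1.0, 2.0, 3.5]:
    assert (f_star**p).sum() * dx <= (g_star**p).sum() * dx
```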
\begin{lemma}\label{lemma1} Let $f,g\in L_+^{1}(\Omega)$ be two rearranged functions on a ball $\Omega=B_{R}(0)$. Then the following are equivalent: \vskip7pt \noindent(i) $f\prec g$; \vskip7pt \noindent(ii) for all $\phi\in L^\infty_+(\Omega)$, $$\int_{\Omega}f(x)\phi(x)\,dx\leq \int_{\Omega^\#}g^\#(x)\phi^\#(x)\,dx; $$ \vskip7pt \noindent(iii) for all convex, nonnegative functions $\Phi:[0,\infty)\rightarrow [0,\infty)$ with $\Phi(0) = 0$ it holds $$\int_{\Omega}\Phi(f(x))\,dx\leq \int_{\Omega}\Phi(g(x))\,dx. $$ \end{lemma} We explicitly observe that, if $f, g\in L^p(\Omega)$ $(1 < p \le\infty)$, then we may take $\phi \in L^{p'}(\Omega)$ in point \textit{(ii)} above. From this Lemma it easily follows that if $f$ and $g$ are $L^{1}$ functions on $\Omega$ such that $f^{\#}\prec g^{\#}$, then \begin{equation} \|f\|_{L^{p}(\Omega)}\leq \|g\|_{L^{p}(\Omega)}\quad \forall p\in[1,\infty]. \end{equation} \subsection{Functional spaces}\label{Functionspac} We now introduce the functional space where problem \eqref{mainpp} will be set, namely a natural domain for the fractional $p$-Laplacian operator $(-\Delta _{p})^{s}$. Recall that $s\in (0,1)$ in our setting. For any open set $\Omega$, we introduce the fractional Gagliardo seminorm \[ [u]_{W^{s,p}(\Omega)}=\left(\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}dx\,dy\right)^{1/p}, \] for a measurable function $u$ on $\Omega$. Then we define the fractional Sobolev space $W^{s,p}(\Omega)$ as the space \[ W^{s,p}(\Omega)=\left\{u\in L^{p}(\Omega):\,[u]_{W^{s,p}(\Omega)}<\infty\right\}, \] endowed with the norm \[ \|u\|_{W^{s,p}(\Omega)}=\| u \|_{L^{p}(\Omega)}+[u]_{W^{s,p}(\Omega)}.
\] We denote by $W_{0}^{s,p}(\Omega)$ the closure of $C_{c}^{\infty}(\Omega)$ in the $W^{s,p}(\Omega)$ topology.\\ The natural space for the operator $(-\Delta _{p})^{s}$ with homogeneous exterior Dirichlet condition will be denoted by $\widetilde{W}_{0}^{s,p}(\Omega)$, which is defined as \[ \widetilde{W}_{0}^{s,p}(\Omega)=\left\{u:{\mathbb R}^{N}\rightarrow{\mathbb R}:\,[u]_{W^{s,p}({\mathbb R}^{N})}<+\infty\text{ and }u=0 \text{ in }{\mathbb R}^{N}\setminus\Omega\right\}. \] When $p>1$ and $\Omega$ is an open bounded set with Lipschitz boundary, it can be proved (see \cite[Proposition B.1]{BrasParSquas}) that $\widetilde{W}_{0}^{s,p}(\Omega)$ coincides with the completion of $C_{c}^{\infty}(\Omega)$ with respect to the seminorm $ [\cdot]_{W^{s,p}({\mathbb R}^{N})}$. Moreover, when $sp\neq1$, it can also be proved that $\widetilde{W}_{0}^{s,p}(\Omega)$ \emph{coincides} with $W_{0}^{s,p}(\Omega)$ (see \cite[Proposition B.1]{Brasco}), while in general for $sp=1$ we have a \emph{strict} inclusion $$\widetilde{W}_{0}^{s,p}(\Omega)\subset W_{0}^{s,p}(\Omega)$$ (see \cite[Remark 2.1]{secondeigenbrasco}).\\ A consequence of fractional Poincar\'e inequalities (see \cite[Lemma 2.4]{Brasco}) is that we can equip the space $\widetilde{W}_{0}^{s,p}(\Omega)$ with the Gagliardo seminorm \[ \|u\|_{\widetilde{W}_{0}^{s,p}(\Omega)}=[u]_{W^{s,p}({\mathbb R}^{N})}=\left(\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}dx\,dy\right)^{1/p}. \] We finally recall the definition of Lorentz space (see, {\sl e.g.}, \cite{hunt}, \cite{oneil}).
We introduce the \emph{maximal function} of Hardy and Littlewood associated with $v^{\ast}$, namely \[ \bar{v}(s)=\frac{1}{s}\int_{0}^{s}v^{\ast}(\sigma)\,d\sigma, \] and define for $1<p\leq\infty$, $0<q\le\infty$, $$\|v\|_{p,q}=\left\{\begin{array}{ll} \displaystyle\left(\int_0^{|\Omega|} \left(\bar{v}(s)\,s^{1 \over p}\right)^q\,{ds\over s}\right)^{1\over q},& \qquad \text{if } 0< q<\infty,\\ &\\ \displaystyle\sup_{s>0}\bar{v}(s)\,s^{1\over p},& \qquad \text{if } q=\infty. \end{array} \right. $$ The Lorentz space $L^{p,q}(\Omega)$ is defined as the set of the measurable functions $v$ such that $\|v\|_{p,q}<+\infty$. We will use the Lorentz spaces in the proof of Corollary \ref{regularity}. It is easy to verify that Lorentz spaces coincide with Lebesgue spaces $L^{p}(\Omega)$ when $p=q$ and with Marcinkiewicz spaces $M^{p}(\Omega)$ when $q=\infty$. As regards the other values of the second index $q$, Lorentz spaces are intermediate spaces between Lebesgue spaces in the sense that, since $\Omega$ is bounded, the following inclusions hold true \begin{align}\notag &L^{p_2,r}(\Omega)\subset L^{p,q_1}(\Omega)\subset L^{p,p}(\Omega)=L^{p}(\Omega) \subset\\ \notag &\qquad \qquad \subset L^{p,q_2}(\Omega)\subset L^{p,\infty}(\Omega)= M^{p}(\Omega)\subset L^{p_1,r}(\Omega), \end{align} when $1<p_1<p<p_2<\infty$, $1\le q_1<p<q_2\le\infty$ and $1 \leq r\leq \infty$. \subsection{Hypergeometric functions}\label{Hyperg} We now recall the definition of the hypergeometric function $_{2}F_1(a,b;c;x)$ (see, for example, \cite[Ch. II]{magnus} for further details). The function $_{2}F_1(a,b;c;x)$ is defined by \begin{equation}\label{verydefHyp} _{2}F_1(a,b;c;x)=\frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\sum_{n=0}^{\infty} \frac{\Gamma(a+n)\,\Gamma(b+n)}{\Gamma(c+n)}\frac{x^{n}}{n!}, \end{equation} where the series converges for $|x|<1$.
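The series \eqref{verydefHyp} can be sanity-checked numerically against a library implementation; the sketch below (assuming \texttt{scipy} is available, whose \texttt{special.hyp2f1} implements $_{2}F_1$, with arbitrarily chosen parameters) compares a truncated Gauss series with the library value, together with the evaluation at $x=1$ recalled below:

```python
import math
from scipy.special import hyp2f1

def hyp2f1_series(a, b, c, x, terms=400):
    # Truncated Gauss series: sum_n (a)_n (b)_n / ((c)_n n!) x^n
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

a, b, c = 1.3, 0.7, 2.1
for x in [0.0, 0.25, -0.5, 0.9]:
    assert math.isclose(hyp2f1_series(a, b, c, x), hyp2f1(a, b, c, x), rel_tol=1e-8)

# Gauss evaluation at x = 1 (valid here since c - a - b > 0)
g = math.gamma
assert math.isclose(hyp2f1(a, b, c, 1.0),
                    g(c) * g(c - a - b) / (g(c - a) * g(c - b)), rel_tol=1e-6)
```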
For $c>b>0$, we have the following integral representation \begin{equation}\label{represent} _{2}F_1(a,b;c;x)=\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^1\tau^{b-1}(1-\tau)^{c-b-1}(1-x\tau)^{-a}d\tau. \end{equation} Some classical results about the derivatives of $_{2}F_1(a,b;c;x)$ read as \begin{align*}\allowdisplaybreaks _{2}F_1'(a,b;c;x)&=\frac {ab}c {\ }_{2}F_1(a+1,b+1;c+1;x),\displaybreak[1]\\ \\ _{2}F_1(a+1,b;c+1;x)&=\frac c{c-b} {\ }_{2}F_1(a,b;c;x)-\frac c{c-b}\frac{1-x}a{\ }_{2}F_1'(a,b;c;x),\displaybreak[1]\\ \\ _{2}F_1(a-1,b;c-1;x)&=\frac {c-1-bx}{c-1}{\ }_{2}F_1(a,b;c;x)+\frac{x(1-x)}{c-1}{\ }_{2}F_1'(a,b;c;x). \end{align*} A direct consequence of the above equalities is the following: \begin{equation}\label{formulona} _{2}F_1'(a,b;c;x)=\frac {ab}{c}{\ }_{2}F_1(a+1,b;c+1;x)+\frac {ax}{c}{\ }_{2}F'_1(a+1,b;c+1;x) \end{equation} and \begin{equation} \label{formulonaa} x{\ }_{2}F_1'(a,b;c;x)+a{\ }_{2}F_1(a,b;c;x)=a{\ }_{2}F_1(a+1,b;c+1;x)+\frac {ax}{c}{\ }_{2}F'_1(a+1,b;c+1;x). \end{equation} In the present paper we use hypergeometric functions in order to represent an integral which comes into play when computing the fractional $p$-Laplacian of a radial function. Indeed, the following equality holds true for $b>0$ and $|x|<1$ (see \cite[Ch. II, sub. 2.5.1]{magnus}) \begin{equation}\label{hyper} \int_{0}^{\pi}\frac{\sin^{2b-1}\theta}{(1-2x\cos\theta+x^{2})^{a}}d\theta=\frac{\sqrt\pi\;\Gamma(b)}{\Gamma(b+\frac12)}{\ }_{2}F_1(a,a-b+\tfrac12;b+\tfrac12;x^{2}). \end{equation} Finally we recall that a direct computation in \eqref{verydefHyp} gives \begin{equation}\label{zero} {}_{2}F_{1}(a,b;c;0)=1, \end{equation} and, when $c>a+b$, for positive $a, b$, the following formula holds (see \cite[Ch. II, pag. 40]{magnus}) \begin{equation}\label{gauss} {}_{2}F_{1}(a,b;c;1)={\frac {\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}}. \end{equation} Using the equality (see \cite[Ch. II, subs.
2.4.1]{magnus}), we obtain \begin{equation}\label{linear} {}_{2}F_{1}(a,b;c;x)=(1-x)^{c-a-b} {}_{2}F_{1}(c-a,c-b;c;x), \end{equation} which will be used in order to establish some asymptotic behaviours in the proof of Theorem \ref{maint}. \section{Main results}\label{Main} Assume that $s\in (0,1)$, $p\geq2$ and let $\Omega$ be a bounded open set with Lipschitz boundary. As mentioned in the introduction, we will focus on the nonlinear Dirichlet problem \begin{equation} \label{eq.0} \left\{ \begin{array}[c]{lll} \left( -\Delta_{p}\right)^{s}u=f & & \text{in } \Omega,\\ \\ u=0 & & \text{on }{\mathbb R}^{N}\setminus\Omega. \end{array}\right. \end{equation} A \emph{weak} solution to problem \eqref{eq.0} is a function $u\in \widetilde{W}_{0}^{s,p}(\Omega)$ such that \begin{equation}\label{weak} \frac{\gamma(N,s,p)}{2}\iint_{{\mathbb R}^{2N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+sp}}dx\,dy=\int_{\Omega}f(x)\varphi(x)dx, \end{equation} for all test functions $\varphi\in \widetilde{W}_{0}^{s,p}(\Omega)$. We will assume that the source term $f$ possesses enough summability to ensure that problem \eqref{eq.0} has a unique weak solution $u\in\widetilde{W}_{0}^{s,p}(\Omega)$. To this aim, we will assume that $f\in L^{m}(\Omega)$, where $m$ is such that \begin{equation}\label{assumpt} m\geq \frac{pN}{(p-1)N+sp} \text{ if }sp<N,\quad m>1 \text{ if }sp=N,\quad m\geq1 \text{ if }sp>N. \end{equation} Under the assumptions \eqref{assumpt} it is not difficult to show that the strictly convex functional \[ \mathcal{J}(u)=\frac{\gamma(N,s,p)}{2p}\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}dx\,dy-\int_{\Omega}fu\,dx \] admits a minimizer in $\widetilde{W}_{0}^{s,p}(\Omega)$, which is in turn the unique weak solution to \eqref{eq.0}.
Indeed, if $sp<N$, observe first that by the fractional Sobolev embedding (see \emph{e.g.} \cite[Theorem 6.5]{hitch} and \cite{Brasco}) \[ \widetilde{W}_{0}^{s,p}(\Omega)\hookrightarrow L^{p^{\ast}_{s}}(\Omega) \] where \[ p^{\ast}_{s}=\frac{Np}{N-sp} \] and by the Young inequality, one has for any $\varepsilon>0$ and for some positive constants $C=C(\varepsilon)$, $C^{\prime}$ \[ \int_{\Omega}f u\, dx\leq C\| f\|_{L^{(p^{\ast}_{s})^{\prime}}(\Omega)}^{p^{\prime}}+C^{\prime}\varepsilon[u]_{W^{s,p}({\mathbb R}^{N})}^{p}, \] hence for $\varepsilon$ small enough we have that $\mathcal{J}$ is bounded from below and that the following inequality holds for some positive constants $C$ and $C_{1}$: \[ \mathcal{J}(u)\geq C_{1}\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}dx\,dy-C\| f\|_{L^{(p^{\ast}_{s})^{\prime}}(\Omega)}^{p^{\prime}}. \] This last inequality is sufficient to show that from any minimizing sequence of $\mathcal{J}$ it is possible to extract a subsequence which converges weakly to some $u$ in $\widetilde{W}_{0}^{s,p}(\Omega)$, which turns out to be a minimizer of $\mathcal{J}$ by the lower semicontinuity of the norm in $\widetilde{W}_{0}^{s,p}(\Omega)$. A similar argument can be reproduced in the case $sp>N$, when we have the continuous embedding \[ \widetilde{W}_{0}^{s,p}(\Omega)\hookrightarrow L^{\infty}(\Omega)\cap C^{0,\alpha}(\overline{\Omega}) \] where $\alpha=s-N/p$, see for instance \cite[Proposition 2.9]{Brasco}. This remark also applies to the case $sp=N$, where we have the continuous embedding \[ \widetilde{W}_{0}^{s,p}(\Omega)\hookrightarrow L^{q}(\Omega) \] for all $q\in[\frac{N}{s},+\infty)$, see \cite[Theorem 6.9]{hitch} and \cite{Brasco}. \\[0.6pt] Our aim is to compare $u$ with the solution $v$ to a radial problem in the ball $\Omega^\#$.
More precisely, in the proof of our result an essential role will be played by the integral mean function of $u^{\#}$ on balls, \emph{i.e.}, \begin{equation}\label{sphmean} U(x)=U(|x|)=\frac1{|x|^N}\int_0^{|x|}\mathfrak{u}(\rho)\rho^{N-1}d\rho, \end{equation} where $\mathfrak{u}(\rho)$ denotes the radial profile of $u^\#(x)$, that is, $\mathfrak{u}(|x|)=u^\#(x)$. \begin{theorem}\label{maint} Assume that $p\ge2$, $s\in(0,1)$ and $f\in L^{m}(\Omega)$, where $m$ satisfies one of the assumptions \eqref{assumpt}. Let $u\in \widetilde{W}_{0}^{s,p}(\Omega)$ be the weak solution to the Dirichlet problem \eqref{eq.0}. Let $v\in \widetilde{W}_{0}^{s,2}(\Omega^{\#})$ be the weak solution to the radial problem \begin{equation} \label{eq.symmetric} \left\{ \begin{array}[c]{lll} \left( -\Delta\right)^{s}v=g & & \text{in } \Omega^{\#},\\ \\ v=0 & & \text{on }{\mathbb R}^{N}\setminus\Omega^{\#}, \end{array}\right. \end{equation} where the datum $g=g(|x|)$ is the radial function defined by (we set as usual $r=|x|$) \begin{equation}\label{g} \begin{split} g(r)=\mathsf{H}(N,s,p)\,r^{\frac{(N-s)(p-2)}{p-1}}&\left[\frac{(N-s)(p-2)}{p-1}\,\frac{1}{r^{N}}\left(\int_{B_r}f^{\#}\,dx\right)^{\frac{1}{p-1}}\right.\\ &\left.+\frac{N\omega_{N}}{p-1}\left(\int_{B_r}f^{\#}\,dx\right)^{\frac{2-p}{p-1}}f^{\#}(r) \right], \end{split} \end{equation} with \[ \mathsf{H}(N,s,p)=\frac{\gamma(N,s,2)}{N\omega_{N}}\,\frac{\left(\mathcal{P}_{s}(B_{1})\right)^{\frac{p-2}{p-1}}}{\gamma(N,s,p)^{\frac{1}{p-1}}}, \] where \[ \mathcal{P}_{s}(B_{1})=\int_{B_1}\int_{B_1^c}\frac{1}{|x-y|^{N+s}}\,dx\,dy \] is the fractional $s$-perimeter of the unit ball, see for instance \cite{CafVal}. Then we have \begin{equation}\label{massconc} u^{\#}\prec v. \end{equation} \end{theorem} \begin{remark} Notice that when $p=2$ we have $g(x)=f^{\#}(x)$, so the symmetrized problem coincides with the one appearing in the \emph{linear} case $p=2$, hence Theorem \ref{maint} reduces to \cite[Theorem 3.1]{FeroneVolzone}.
If $p>2$, it is not difficult to show that problem \eqref{eq.symmetric} admits a unique weak solution. Indeed, using the simple estimate $$f^\#(r)\le \frac 1{\omega_Nr^N}\int_{B_r}f^\#(x)\,dx,$$ we observe that \begin{equation}\label{intermg} g(r)\le Cr^{-\frac{s(p-2)}{p-1}}\left(\frac 1{r^N}\int_{B_r}f^\#(x)\,dx\right)^{\frac1{p-1}}, \end{equation} where $C$ denotes a constant which can change from line to line. Assume first that $sp<N$. An easy use of the H\"older inequality gives \begin{equation} g(r)\leq Cr^{-\frac{N+ms(p-2)}{m(p-1)}}\|f\|_{L^{m}(\Omega)}^{1/(p-1)}. \end{equation} Thus \[ \left(\int_{\Omega^{\#}}|g(r)|^{\frac{2N}{N+2s}}dx\right)^{\frac{N+2s}{N}}\leq C \|f\|_{L^{m}(\Omega)}^{1/(p-1)}\left(\int_{\Omega^{\#}}\frac{1}{r^{\ell}}dx\right)^{\frac{N+2s}{N}} \] where \[ \ell=\frac{N+ms(p-2)}{m(p-1)}\frac{2N}{N+2s} \] and it is easy to show that $\ell<N$ if and only if \begin{equation}\label{newest} m>\frac{2N}{N(p-1)+2s}. \end{equation} Observe that \[ \frac{Np}{(p-1)N+sp}>\frac{2N}{N(p-1)+2s} \] when $p^{2}-3p+2>0$, a condition which is assured by the restriction $p>2$. Therefore \eqref{assumpt} yields \eqref{newest}.\\ As regards the range $sp\geq N$, with $N\geq2$, or $N=1$ and $s\in(0,1/2)$, by \eqref{intermg} we get the inequality \begin{equation}\label{boundg} g(r)\leq Cr^{-\frac{N+s(p-2)}{(p-1)}}\|f\|_{L^{1}(\Omega)}^{1/(p-1)}, \end{equation} which implies \[ \left(\int_{\Omega^{\#}}|g(r)|^{\frac{2N}{N+2s}}dx\right)^{\frac{N+2s}{N}}\leq C \|f\|_{L^{1}(\Omega)}^{1/(p-1)}\left(\int_{\Omega^{\#}}\frac{1}{r^{\ell}}dx\right)^{\frac{N+2s}{N}} \] where \[ \ell=\frac{N+s(p-2)}{p-1}\frac{2N}{N+2s}, \] hence $\ell<N$ if and only if \begin{equation} \label{boundpN} p>\frac{3N-2s}{N} \end{equation} is satisfied. But an easy computation shows that under our assumption $N>2s$ the following bound holds \[ \frac{N}{s}>\frac{3N-2s}{N}, \] thus \eqref{boundpN} follows.
In all these cases we showed that $g\in L^{2N/(N+2s)}(\Omega^{\#})$, thus the linear problem \eqref{eq.symmetric} has a unique solution. In the case $N=1$, $s\in [1/2,1)$, it is sufficient to show that $g\in L^{q}(\Omega^{\#})$ for some $q>1$: this can be done by choosing \[ q<\frac{p-1}{1+s(p-2)} \] in order to have \[ \left(\int_{\Omega^{\#}}|g(r)|^{q}dx\right)^{\frac{1}{q}}\leq C \|f\|_{L^{1}(\Omega)}^{1/(p-1)}\left(\int_{\Omega^{\#}}\frac{1}{r^{\ell}}dx\right)^{\frac{1}{q}}<\infty \] where \[ \ell=\frac{1+s(p-2)}{p-1}q<1. \] \end{remark} As a consequence of Theorem \ref{maint} we will prove the following regularity result for solutions $u$ to \eqref{eq.0}. \begin{corollary}\label{regularity} Let us choose $s\in(0,1)$, $p\geq2$ and assume that $sp<N$. Let $u\in \widetilde{W}_{0}^{s,p}(\Omega)$ be the weak solution to the Dirichlet problem \eqref{eq.0}. We have: \noindent 1. if $f\in L^{{m,\frac{Nm}{N+sm(p-2)}}}(\Omega)$, with $\frac{pN}{(p-1)N+sp}\le m<N/(sp)$, then $u\in L^{q}(\Omega)$, with \begin{equation} q=\frac{Nm(p-1)}{N-smp}\label{q} \end{equation} and there exists a constant $C$ such that: \[ \|u\|_{L^{q}(\Omega)}\leq C\|f\|_{L^{{m,\frac{Nm}{N+sm(p-2)}}}(\Omega)}^{\frac1{p-1}}; \] \noindent 2. if $f\in L^{{m}}(\Omega)$, with $m>N/(sp)$, then $u\in L^{\infty}(\Omega)$ and there exists a constant $C$ such that: \[ \|u\|_{L^{\infty}(\Omega)}\leq C\|f\|_{L^{m}(\Omega)}^{\frac1{p-1}}. \] \end{corollary} \begin{remark} The case $m>N/(sp)$ in Corollary \ref{regularity} is obtained in \cite[Theorem 3.1]{secondeigenbrasco} and in \cite[Theorem 3.1]{BarPer} by Moser's iteration techniques. The case $\frac{pN}{(p-1)N+sp}\le m<N/(sp)$ and $f\in L^{m}(\Omega)$ is studied in \cite[Theorem 3.4]{BarPer}. Notice that the Lorentz space in case 1. of Corollary \ref{regularity} is smaller than $L^{m}(\Omega)$, thus our regularity result is not sharp: this discrepancy is due to the choice of the symmetrized problem \eqref{eq.symmetric}, which is linear.
\end{remark} \section{Proofs}\label{Proofs} First of all we prove a preliminary result which will be used in order to apply Riesz rearrangement inequality \eqref{mainRieszineq}. Let $\mathcal{G}_{t,h}$ , $t,h>0$, be the classical truncation function \begin{equation} \mathcal{G}_{t,h}(\theta) =\left\{ \begin{array} [c]{lll}% h & & \text{if }\theta > t+h\\ & & \\ \theta-t\, & & \text{if }t< \theta \le t+h\\ & & \\ 0 & & \text{if }\theta \leq t.\text{ }% \end{array} \right.\label{truncation} \end{equation} The following Lemma establishes a property of a nonlinear function $F(u,v)$ involving the truncations $\mathcal{G}_{t,h}$, that will be employed as an application of inequality \eqref{mainRieszineq}, a basic ingredient in the proof of Theorem \ref{maint}. \begin{lemma}\label{effe} For $p\ge2$, let $F:{\mathbb R}^{+}\times{\mathbb R}^{+}\rightarrow{\mathbb R}$ be defined as \begin{equation}\label{effedef} F(u,v)=u^{p}+v^{p}-|u-v|^{p-2}(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v)). \end{equation} Then $F(u,v)$ is a continuous function, with $F(0,0)=0$, such that \begin{equation}\label{effepos} F(u,v)\geq0 \end{equation} and \begin{equation}\label{effeprop} F(u_{2},v_{2})+F(u_{1},v_{1})\geq F(u_{2},v_{1})+F(u_{1},v_{2}) \end{equation} \end{lemma} \noindent{\sc Proof.} As regards \eqref{effepos} we observe that \[ (u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))\le (u-v)^2 \] Indeed \noindent-- if $0\le u\le t$ and $0\le v\le t$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)\cdot0\le (u-v)^2$$ \noindent-- if $0\le u\le t$ and $t<v<t+h$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)(t-v)\le (u-v)^2$$ \noindent-- if $0\le u\le t$ and $t+h\le v$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)(-h)\le (u-v)^2$$ \noindent-- if $t<u<t+h$ and $0\le v\le t$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)(u-t)\le (u-v)^2$$ \noindent-- if $t<u<t+h$ and $t<v<t+h$ then 
$$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)^2$$ \noindent-- if $t<u<t+h$ and $t+h\le v$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)(u-t-h)\le (u-v)^2$$ \noindent-- if $t+h\le u$ and $0\le v\le t$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)h\le (u-v)^2$$ \noindent-- if $t+h\le u$ and $t<v<t+h$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)(t+h-v)\le (u-v)^2$$ \noindent-- if $t+h\le u$ and $t+h\le v$ then $$(u-v)(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v))=(u-v)\cdot0\le (u-v)^2$$ \noindent Then, by monotonicity, \[ F(u,v)\ge u^{p}+v^{p}-|u-v|^{p}\geq0. \] In order to prove \eqref{effeprop} we observe that, for every fixed $u\ge0$ the function \begin{equation*} \Phi(v)=|u-v|^{p-2}\big(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v)\big),\quad v\ge0, \end{equation*} is decreasing with respect to $v$. Indeed, we can compute the derivative for a.e. $v$ to get \begin{equation*} \Phi'(v)=-|u-v|^{p-4}\big((p-2)(u-v)\big(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v)\big)+|u-v|^2\mathcal{G}'_{t,h}(v)\big)\le0. \end{equation*} This means that, for every $0\le v_1\le v_2$, it holds \begin{equation*} \Phi(v_1)\ge\Phi(v_2), \end{equation*} that is, for every $u\ge0$ and $0\le v_1\le v_2$ \begin{equation}\label{phi} |u-v_1|^{p-2}\big(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v_1)\big)- |u-v_2|^{p-2}\big(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v_2)\big)\ge0. \end{equation} Now, for every fixed $0\le v_1\le v_2$, we consider the function \begin{equation*} \Psi(u)=F(u,v_2)-F(u,v_1),\quad u\ge0. \end{equation*} Using the definition \eqref{effedef} of $F(u,v)$, we can compute the derivative of $\Psi(u)$ for a.e. 
$u$ to get \begin{align*} \Psi'(u)=&(p-1)\big(|u-v_1|^{p-2}\big(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v_1)\big)- |u-v_2|^{p-2}\big(\mathcal{G}_{t,h}(u)-\mathcal{G}_{t,h}(v_2)\big)\big)\\ \\ &\>+\mathcal{G}'_{t,h}(u)\big(|u-v_1|^{p-2}(u-v_1)-|u-v_2|^{p-2}(u-v_2)\big)\ge0, \end{align*} where we have used inequality \eqref{phi} and the monotonicity of the function $\phi(t)=|t|^{p-2}t$, $t\in{\mathbb R}$. Using the monotonicity of $\Psi(u)$, we have, for $0\le u_1\le u_2$ and $0\le v_1\le v_2$, \begin{equation*} F(u_2,v_2)-F(u_2,v_1)\ge F(u_1,v_2)-F(u_1,v_1), \end{equation*} that is, \eqref{effeprop}. \hfill$\square$ \vskip.3cm \noindent{\sc Proof of Theorem \ref{maint}.} We first consider the case $f\ge0$ and we assume $f\in C_{0}^{\infty}(\Omega)$. By \cite[Theorem 1.4]{brascohol} or \cite[Theorem 1.1]{IanMoscSquas} we have that the solution $u$ to \eqref{mainpp} is locally H\"older continuous in $\Omega$ (see also \cite[Corollary 1.1]{KuusiMing} where continuity is derived for more general nonlocal operators).
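The two properties of $F$ established in Lemma \ref{effe} (nonnegativity and the supermodularity \eqref{effeprop}) can be spot-checked numerically; a sketch with arbitrarily chosen parameters $p$, $t$, $h$, assuming \texttt{numpy} is available:

```python
import numpy as np

p, t, h = 3.0, 0.4, 0.3  # arbitrary test parameters, p >= 2

def G(x):
    # truncation G_{t,h}: 0 below t, x - t on (t, t+h], h above t+h
    return np.clip(x - t, 0.0, h)

def F(u, v):
    # F(u, v) = u^p + v^p - |u-v|^{p-2} (u-v) (G(u) - G(v))
    return u**p + v**p - np.abs(u - v)**(p - 2) * (u - v) * (G(u) - G(v))

rng = np.random.default_rng(0)
for _ in range(1000):
    u1, u2 = np.sort(rng.uniform(0.0, 2.0, 2))
    v1, v2 = np.sort(rng.uniform(0.0, 2.0, 2))
    assert F(u1, v1) >= -1e-12 and F(u2, v2) >= -1e-12        # nonnegativity
    assert F(u2, v2) + F(u1, v1) >= F(u2, v1) + F(u1, v2) - 1e-12  # (effeprop)
```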
If $\mathcal{G}_{t,h}$ is the truncation function in \eqref{truncation}, we use the test function \[ \varphi(x)=\mathcal{G}_{t,h}(u(x)), \] in the weak formulation of \eqref{eq.0}, obtaining \begin{align} \displaystyle\frac{\gamma(N,s,p)}{2}&\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p-2}\left(u(x)-u(y)\right)\left(\mathcal{G}_{t,h}(u(x))-\mathcal{G}_{t,h}(u(y))\right)}{|x-y|^{N+sp}}dx\,dy\label{eqtest} \\ &=\int_{\Omega}f(x)\,\mathcal{G}_{t,h}(u(x))\,dx.\notag \end{align} We will prove the inequality \begin{align}\label{Polyatype} &\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p-2}\left(u(x)-u(y)\right)\left(\mathcal{G}_{t,h}(u(x))-\mathcal{G}_{t,h}(u(y))\right)}{|x-y|^{N+sp}}dxdy \\ &\geq \int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u^{\#}(x)-u^{\#}(y)|^{p-2}\left(u^{\#}(x)-u^{\#}(y)\right)\left(\mathcal{G}_{t,h}(u^{\#}(x))-\mathcal{G}_{t,h}(u^{\#}(y))\right)}{|x-y|^{N+sp}}dxdy.\nonumber \end{align} Following \cite[Section 9]{ALIEB}, we write \begin{align*} &\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p-2}\left(u(x)-u(y)\right)\left(\mathcal{G}_{t,h}(u(x))-\mathcal{G}_{t,h}(u(y))\right)}{|x-y|^{N+sp}}dxdy\\ &=\frac{1}{\Gamma(\frac{N+sp}{2})}\int_{0}^{\infty}I_{\alpha}[u,t,h]\,\alpha^{(N+sp)/2-1}d\alpha, \end{align*} where \begin{equation} I_{\alpha}[u,t,h]=\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}|u(x)-u(y)|^{p-2}\bigl(u(x)-u(y)\bigr)\bigl(\mathcal{G}_{t,h}(u(x))-\mathcal{G}_{t,h}(u(y))\bigr)\exp[-|x-y|^{2}\alpha]dx\,dy. \label{applRiesz} \end{equation} We want to prove\begin{equation} I_{\alpha}[u,t,h]\geq I_{\alpha}[u^{\#},t,h]\label{IneqI}, \end{equation} for all $\alpha>0$. To this aim, we define the function $F(u,v)$ according to \eqref{effedef}. Then Lemma \ref{effe} ensures that $F$ is eligible in Riesz rearrangement inequality \eqref{mainRieszineq}, with the choice $W_{\alpha}(x)=\exp[-|x|^{2}\alpha]$ and $a=1,\,b=-1$. 
Plugging such a function $F$ in \eqref{mainRieszineq} yields \[ \int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}F(u(x),u(y))\,W_{\alpha}(x-y)dx\,dy \leq \int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}F(u^{\#}(x),u^{\#}(y))\,W_{\alpha}(x-y)dx\,dy, \] then the usual equimeasurability property of rearrangements and the symmetry of the kernel $W_{\alpha}$, applied to \eqref{applRiesz}, give \eqref{IneqI}.\\ As a consequence we have \begin{align} \frac{\gamma(N,s,p)}{2}&\int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|u^{\#}(x)-u^{\#}(y)|^{p-2}\bigl(u^{\#}(x)-u^{\#}(y)\bigr)\bigl(\mathcal{G}_{t,h}(u^{\#}(x))-\mathcal{G}_{t,h}(u^{\#}(y))\bigr)}{|x-y|^{N+sp}}\,dy\,dx\label{mainineq}\\ &\leq \int_{\Omega}f(x)\,\mathcal{G}_{t,h}(u(x))\,dx.\notag \end{align} Now we set \[ \mathfrak{u}(x)=\mathfrak{u}(|x|):=u^{\#}(x), \] hence $\mathfrak{u}$ is a nonincreasing continuous function defined on ${\mathbb R}^N$ vanishing for $|x|\ge R>0$, $R$ being the radius of $\Omega^{\#}$. For any $0\le t\le u_{\text{max}}$ there exists a unique $r(t)$ such that $|\{x:\mathfrak{u}(x)>t\}|=|B_{r(t)}(0)|$. Then we check how to pass to the limit as $h\rightarrow 0$ in \eqref{mainineq}.
Let us consider the following integral: \begin{align}\allowdisplaybreaks I_{t,h}= \frac1{N\omega_{N}} \int_{{\mathbb R}^{N}}\int_{{\mathbb R}^{N}}\frac{|\mathfrak{u}(x)-\mathfrak{u}(y)|^{p-2}\left(\mathfrak{u}(x)-\mathfrak{u}(y)\right)\left(\mathcal{G}_{t,h}(\mathfrak{u}(x))-\mathcal{G}_{t,h}(\mathfrak{u}(y))\right)}{|x-y|^{N+sp}}dxdy, \label{integralh} \end{align} thus \eqref{mainineq} can be rewritten as \begin{equation} N\omega_{N}\frac{\gamma(N,s,p)}{2} I_{t,h}\leq \int_{\Omega}f(x)\,\mathcal{G}_{t,h}(u(x))\,dx.\label{maininequalith} \end{equation} Putting $r=|x|$, we have: \begin{equation*} \begin{split} I_{t,h}=\int_0^{+\infty}&\left(\int_0^{+\infty}|\mathfrak{u}(r)-\mathfrak{u}(\rho)|^{p-2} \big(\mathfrak{u}(r)-\mathfrak{u}(\rho)\big)\right.\\ &\times\left.\big(\mathcal{G}_{t,h}(\mathfrak{u}(r))-\mathcal{G}_{t,h}(\mathfrak{u}(\rho))\big)\Theta_{N,s,p}(r,\rho)\rho^{N-1}d\rho\right)r^{N-1}dr, \end{split} \end{equation*} where \begin{equation}\label{Theta} \Theta_{N,s,p}(r,\rho)=\frac1{N\omega_{N}}\int_{|x'|=1}\left(\int_{|y'|=1}\frac1{|r\,x'-\rho\,y'|^{N+sp}}dH^{N-1}(y')\right)dH^{N-1}(x'). \end{equation} Thus from \eqref{hyper} (see \cite{FeroneVolzone}) it follows that \begin{equation}\label{explicit} \Theta_{N,s,p}(r,\rho)= \left\{ \begin{array}{ll} \dfrac{\alpha_{N}}{\rho^{N+sp}}\>{}_{2}F_1\left(\dfrac{N+sp}2,\dfrac{sp}{2}+1;\dfrac N2;\dfrac{r^{2}}{\rho^{2}}\right) &\quad \text{if }0\le r<\rho<+\infty \\ &\\ \dfrac{\alpha_{N}}{r^{N+sp}}\>{}_{2}F_1\left(\dfrac{N+sp}2,\dfrac{sp}{2}+1;\dfrac N2;\dfrac{\rho^{2}}{r^{2}}\right) & \quad \text{if }0\le \rho<r<+\infty,\\ \end{array} \right. \end{equation} where $$\alpha_{N}=\frac{2\pi^{\frac N2}}{\Gamma\left(\frac N2\right)}=N\omega_{N}.
$$ Moreover, by \eqref{zero} we have the following asymptotic behaviours \begin{equation}\label{infty} \left\{ \begin{array}{ll} \Theta_{N,s,p}(r,\rho)\sim\dfrac{\alpha_{N}}{r^{N+sp}}&\qquad\text{ as }r\rightarrow+\infty\\ \\ \Theta_{N,s,p}(r,\rho)\sim\dfrac{\alpha_{N}}{\rho^{N+sp}}&\qquad\text{ as }\rho\rightarrow+\infty, \end{array} \right. \end{equation} and a combination of \eqref{linear} and \eqref{gauss} provides \begin{equation}\label{uno} \Theta_{N,s,p}(r,\rho)\sim\dfrac1{|r-\rho|^{1+sp}}\qquad\text{ as }|r-\rho|\rightarrow0. \end{equation} Then, since $\mathfrak{u}$ is radially decreasing, the machinery contained in the previous paper \cite{FeroneVolzone}, together with the Lebesgue monotone convergence theorem, can be used to pass to the limit as $h\rightarrow0$ in \eqref{maininequalith} and find the inequality \begin{equation}\label{inequality} \gamma(N,s,p) \int_0^{r}\left(\int_{r}^{+\infty}|\mathfrak{u}(\tau)-\mathfrak{u}(\rho)|^{p-1}\Theta_{N,s,p}(\tau,\rho)\rho^{N-1}d\rho\right)\tau^{N-1}d\tau \le\int_0^{r}f^{*}(\omega_{N}\rho^{N})\rho^{N-1}d\rho. \end{equation} Observe that by \eqref{maininequalith} the ratio $I_{t,h}/h$ remains bounded, therefore the integral on the left-hand side of \eqref{inequality} is finite.
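As a numerical sanity check of the Magnus identity \eqref{hyper} underlying the representation \eqref{explicit}, one can compare the angular integral with the hypergeometric expression directly (a sketch assuming \texttt{scipy} is available; the parameter triples are arbitrary):

```python
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

def lhs(a, b, x):
    # ∫_0^π sin^{2b-1}(θ) (1 - 2 x cos θ + x²)^{-a} dθ
    f = lambda th: math.sin(th)**(2.0*b - 1.0) / (1.0 - 2.0*x*math.cos(th) + x*x)**a
    val, _ = quad(f, 0.0, math.pi)
    return val

def rhs(a, b, x):
    # √π Γ(b)/Γ(b + 1/2) · 2F1(a, a - b + 1/2; b + 1/2; x²)
    c = math.sqrt(math.pi) * math.gamma(b) / math.gamma(b + 0.5)
    return c * hyp2f1(a, a - b + 0.5, b + 0.5, x*x)

for a, b, x in [(1.0, 0.5, 0.3), (2.2, 1.5, 0.6), (1.7, 1.0, -0.4)]:
    assert math.isclose(lhs(a, b, x), rhs(a, b, x), rel_tol=1e-7)
```

For $a=1$, $b=1/2$ the identity reduces to the classical Poisson-type integral $\int_0^\pi(1-2x\cos\theta+x^2)^{-1}d\theta=\pi/(1-x^2)$.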
Now, writing \eqref{inequality} in Cartesian coordinates again, \begin{equation} \gamma(N,s,p)\int_{B_r}\int_{B_r^c}\frac{|u^{\#}(x)-u^{\#}(y)|^{p-1}}{|x-y|^{N+sp}}\,dx\,dy \leq \int_{B_r}f^\#(x)\,dx.\label{mainineqnon} \end{equation} At this point we write \[ N+2s=\frac{N+sp}{p-1}+(N+s)\frac{p-2}{p-1} \] and use the H\"older inequality in \eqref{mainineqnon} with exponents $p-1$ and $\frac{p-1}{p-2}$ to obtain \begin{align}\label{newnon} &\int_{B_r}\int_{B_r^c}\frac{|u^{\#}(x)-u^{\#}(y)|}{|x-y|^{N+2s}}\,dx\,dy\nonumber \\ & \le \left(\int_{B_r}\int_{B_r^c}\frac{|u^{\#}(x)-u^{\#}(y)|^{p-1}}{|x-y|^{N+sp}}\,dx\,dy\right)^{\frac1{p-1}} \left(\int_{B_r}\int_{B_r^c}\frac{1}{|x-y|^{N+s}}\,dx\,dy\right)^{\frac{p-2}{p-1}}\nonumber\\& =r^{\frac{(N-s)(p-2)}{p-1}}\mathcal{P}_{s}(B_{1})^{\frac{p-2}{p-1}}\,\left(\int_{B_r}\int_{B_r^c}\frac{|u^{\#}(x)-u^{\#}(y)|^{p-1}}{|x-y|^{N+sp}}\,dx\,dy\right)^{\frac1{p-1}}, \end{align} where $\mathcal{P}_{s}(B_{1})$ is the fractional perimeter of the unit ball. Then by \eqref{mainineqnon} we have \begin{equation}\label{intermineq} \int_{B_r}\int_{B_r^c}\frac{|u^{\#}(x)-u^{\#}(y)|}{|x-y|^{N+2s}}\,dx\,dy\leq r^{\frac{(N-s)(p-2)}{p-1}}\frac{\left(\mathcal{P}_{s}(B_{1})\right)^{\frac{p-2}{p-1}}}{\gamma(N,s,p)^{\frac{1}{p-1}}} \left( \int_{B_{r}}f^{\#}\,dx\right)^{\frac{1}{p-1}}. \end{equation} Now, arguing as in \cite{FeroneVolzone} we find that \[ \int_{B_r}\int_{B_r^c}\frac{|u^{\#}(x)-u^{\#}(y)|}{|x-y|^{N+2s}}\,dx\,dy=\frac{N\omega_{N}}{\gamma(N,s,2)} r^{N} (-\Delta)_{{\mathbb R}^{N+2}}^{s}U(r), \] where $(-\Delta)_{{\mathbb R}^{N+2}}^{s}$ denotes the fractional $s$-Laplacian computed on a radial function in ${\mathbb R}^{N+2}$. Thus \eqref{intermineq} yields \begin{align*} (-\Delta)_{{\mathbb R}^{N+2}}^{s}U(r)&\leq\mathsf{H}(N,s,p)\, r^{\frac{(N-s)(p-2)}{p-1}}\frac{1}{r^N}\left(\int_{B_r}f^\#(x)\,dx\right)^{\frac1{p-1}} \nonumber\\& =\mathsf{H}(N,s,p)\,\frac{1}{r^{\frac N{p-1}+s\frac{p-2}{p-1}}}\left(\int_{B_r}f^\#(x)\,dx\right)^{\frac1{p-1}}.
\end{align*} Now we observe that a direct computation (see the proof of \cite[Theorem 3.1]{FeroneVolzone}) shows that the solution $v$ to problem \eqref{eq.symmetric} is such that the integral mean function of $v$ \begin{equation}\label{sphmeanv} V(x)=V(|x|)=\frac1{|x|^N}\int_0^{|x|}v(\rho)\rho^{N-1}d\rho, \end{equation} satisfies \begin{align*} (-\Delta)_{{\mathbb R}^{N+2}}^{s}V(x)&=\frac1{|x|^N}\int_0^{|x|}g(\rho)\rho^{N-1}\,d\rho\\ &=\mathsf{H}(N,s,p)\,\frac1{|x|^N}\int_0^{|x|} \frac{d}{d\rho}\left[\rho^{\frac{(N-s)(p-2)}{p-1}}\left(\int_{B_{\rho}}f^{\#}(y)\,dy\right)^{\frac{1}{p-1}}\right]\,d\rho\\& =\mathsf{H}(N,s,p)\,\frac{1}{|x|^{\frac N{p-1}+s\frac{p-2}{p-1}}}\left(\int_{B_{|x|}}f^\#(y)\,dy\right)^{\frac1{p-1}}\\& =\mathsf{H}(N,s,p)\,\frac{1}{|x|^{s\frac{p-2}{p-1}}}\left(\frac{1}{|x|^{N}}\int_{B_{|x|}}f^\#(y)\,dy\right)^{\frac1{p-1}}, \end{align*} which provides the radially decreasing monotonicity of $V$ in ${\mathbb R}^{N+2}$. It follows that \[ (-\Delta)_{{\mathbb R}^{N+2}}^{s}U(r)\leq (-\Delta)_{{\mathbb R}^{N+2}}^{s}V(r) \] and we can apply the comparison principle for the fractional Laplacian (see \cite[Theorem 3.1]{FeroneVolzone} again), which gives \[ U\leq V, \] namely \eqref{massconc}. The result is then achieved when $f\ge0$ and $f$ is regular. It is possible to remove the regularity assumption by using a suitable sequence of approximating data, as done, for example, in \cite[Section 5.2]{FeroneVolzone}. As regards the sign assumption, one can observe that the comparison principle (see, for example, \cite{BarPer}) states that $|u|\le\tilde u$, where $\tilde u$ is the solution to problem \eqref{mainpp} having $|f|$ as source term. Thus, we have: $$u^\#\prec \tilde u^\#\prec v$$ and the theorem is completely proved.
\hfill$\square$ \vskip.3cm \begin{remark} Now we take a closer look at the H\"older inequality \eqref{newnon}, which we can write in the more representative form (we set $r=1$) \begin{equation}\label{nolocH} \begin{split} &\int_{B_{1}}\left(P.V.\int_{{\mathbb R}^{N}}\frac{u(x)-u(y)}{|x-y|^{N+2s}}dy\right)\,dx\\ &\leq \left(\mathcal{P}_{s}(B_{1})\right)^{\frac{p-2}{p-1}}\left(\int_{B_{1}}\left(P.V.\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+ps}}dy\right)\,dx\right)^{\frac{1}{p-1}}. \end{split} \end{equation} This inequality is in general strict for $s<1$. We want to use a heuristic argument to show that this inequality will not degenerate to an \emph{equality} when $s\rightarrow1$. \\ To this aim, we check the asymptotics as $s\rightarrow1$ of \eqref{nolocH}, taking into account that, for sufficiently smooth functions, \[ \lim_{s\rightarrow1^{-}}(1-s)\,P.V.\int_{{\mathbb R}^{N}}\frac{u(x)-u(y)}{|x-y|^{N+2s}}dy=\frac{\pi^{N/2}}{4\Gamma\left(\frac{N+2}{2}\right)}(-\Delta)u(x), \] and (see \cite[Lemma 5.1]{CDVAZ}) \[ \lim_{s\rightarrow1^{-}}(1-s)\,P.V.\int_{{\mathbb R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+ps}}dy=\frac{\pi^{\frac{N-1}{2}}\,\Gamma\left(\frac{p+1}{2}\right)} {p\Gamma\left(\frac{N+p}{2}\right)}(-\Delta_{p})u(x), \] and finally by \cite[Theorem 4]{Ludwig} \[ \lim_{s\rightarrow1^{-}}(1-s)\mathcal{P}_{s}(B_{1})=\omega_{N-1}\mathcal{P}(B_{1}).
\] Passing to the limit as $s\rightarrow 1$ in \eqref{nolocH} we find \[ \frac{\pi^{N/2}}{4\Gamma\left(\frac{N+2}{2}\right)}\int_{B_{1}}(-\Delta)u(x)\,dx\leq (\mathcal{P}(B_{1}))^{\frac{p-2}{p-1}}\,\frac{\pi^{\frac{N-1}{2}}}{\Gamma\left(\frac{N+1}{2}\right)^{\frac{p-2}{p-1}}} \left(\frac{\Gamma\left(\frac{p+1}{2}\right)}{p\Gamma\left(\frac{N+p}{2}\right)}\right)^{\frac{1}{p-1}}\left(\int_{B_{1}}(-\Delta_{p})u\,dx\right)^{\frac{1}{p-1}}. \] Now we observe that a formal use of the divergence theorem and the radiality of $u$ yields \[ \int_{B_{1}}(-\Delta)u\,dx=(\mathcal{P}(B_{1}))^{\frac{p-2}{p-1}}\left(\int_{B_{1}}(-\Delta_{p})u\,dx\right)^{\frac{1}{p-1}}, \] so the presence of the constants depending on $p$ in the previous inequality prevents \eqref{nolocH} from degenerating into an equality in the limit $s\rightarrow1$.\\ On the other hand, we observe that \eqref{mainineqnon} can be rewritten as \[ \int_{B_{r}}(-\Delta_p)^{s}u\,dx\leq \int_{B_{r}}f^{\#}dx, \] then formally passing to the limit as $s\rightarrow1$ gives \[ \int_{B_{r}}(-\Delta_{p})u\,dx\leq \int_{B_{r}}f^{\#}dx, \] from which \[ -\frac{u^{\prime}(r)}{r}\leq \frac{1}{r^{\frac{N-1}{p-1}+1}}\left(\int_{B_{r}}f^{\#}\right)^{\frac{1}{p-1}}=-\frac{v^{\prime}(r)}{r}, \] which is Talenti's classical result in \cite{talNON}. The previous remark then shows that using H\"{o}lder does not allow one to recover the same inequality in the asymptotic limit $s\rightarrow1$. \end{remark} \noindent{\sc Proof of Corollary \ref{regularity}.} Recall that $g$ satisfies the bound \eqref{boundg}. If $f\in L^{{m,\frac{Nm}{N+sm(p-2)}}}(\Omega)$, with $\frac{pN}{(p-1)N+sp}\le m<N/(sp)$, and $t=\frac{Nm(p-1)}{N+sm(p-2)}$, we have: \begin{equation}\label{norms} \|g\|_{L^t(\Omega)}\le C\left(\int_0^{+\infty}\tau^{-\frac{st(p-2)}{N(p-1)}}\left(\frac 1{\tau}\int_0^\tau f^*(\sigma)\,d\sigma\right)^{\frac t{p-1}} d\tau\right)^{\frac1t}=C\|f\|_{L^{m,\frac{Nm}{N+sm(p-2)}}(\Omega)}^{\frac1{p-1}}.
\end{equation} Observe that $t\geq2N/(N+2s)$ exactly when \eqref{newest} holds. This guarantees that the linear problem \eqref{eq.symmetric} has a unique solution. By \cite[Theorem 3.2]{FeroneVolzone} we know that \begin{equation}\label{LpreguFerVol} \|v\|_{L^q(\Omega)}\le C\|g\|_{L^t(\Omega)}, \quad\text{with }q=\frac{Nt}{N-2st}=\frac{Nm(p-1)}{N-smp}. \end{equation} Now by the Hardy-Littlewood inequality \eqref{HardyLit} we have $v\prec v^{\#}$, therefore from Theorem \ref{maint} we easily infer \[ u^{\#}\prec v^{\#} \] and then \eqref{LpreguFerVol} and Lemma \ref{lemma1} imply $$\|u\|_{L^q(\Omega)}\le\|v\|_{L^q(\Omega)}\le C\|f\|_{L^{{m,\frac{Nm}{N+sm(p-2)}}}(\Omega)}^{\frac1{p-1}}. $$ If $f\in L^{{m}}(\Omega)$, with $m>N/(sp)$, we choose $m'$ with $N/(sp)<m'<m$ and we put $t=\frac{Nm'(p-1)}{N+sm'(p-2)}$. It follows that $$t>\frac N{2s},$$ so, by \cite[Theorem 3.2]{FeroneVolzone} we have $$\|v\|_{L^\infty(\Omega)}\le C\|g\|_{L^t(\Omega)}$$ and then by \eqref{norms} $$\|u\|_{L^\infty(\Omega)}\le\|v\|_{L^\infty(\Omega)}\le C \|f\|_{L^{{m',\frac{Nm'}{N+sm'(p-2)}}}(\Omega)}^{\frac1{p-1}}\le C\|f\|_{L^m(\Omega)}^{\frac1{p-1}}, $$ where the last inequality comes from the well-known inclusions in Lorentz spaces when $m'<m$, see Section \ref{Functionspac}. \hfill$\square$ \section{Comments and open problems}\label{Open problems} \noindent {$\bullet$} As it was mentioned in the introduction, the degenerate condition $p>2$ seems to be indispensable for the validity of Lemma \ref{effe}, where such a convexity property plays a basic role.
Therefore, it would be extremely interesting to have an extension of Theorem \ref{maint} to the \emph{singular} case $p<2$.\\ \noindent {$\bullet$} A natural question is to compare the solution $u$ to problem \eqref{mainpp} with the radial solution $v$ to the \emph{nonlinear} problem \begin{equation} \label{maip} \left\{ \begin{array}[c]{lll}% (-\Delta_{p})^{s}v=f^{\#} & & \text{in }% \Omega^{\#},\\ \\ v=0 & & \text{on }\partial\Omega^{\#}. \end{array}\right. \end{equation} Actually, from inequality \eqref{mainineqnon} the radiality of $v$ would lead quite easily to the integral comparison \begin{equation}\label{openpr} \int_{B_r}\int_{B_r^c}\frac{|u^{\#}(x)-u^{\#}(y)|^{p-1}}{|x-y|^{N+sp}}\,dx\,dy \leq \int_{B_r}\int_{B_r^c}\frac{|v(x)-v(y)|^{p-1}}{|x-y|^{N+sp}}\,dx\,dy. \end{equation} An interesting open problem would be to apply maximum principle arguments to inequality \eqref{openpr} in order to derive an integral comparison between the $(p-1)$-th powers of $u$ and $v$ in the sense of \cite{fermess}, namely an inequality of the type \begin{equation} \int_{B_{r}(0)}(u^{\#})^{p-1}dx\leq \int_{B_{r}(0)} v^{p-1}dx,\quad r>0.\label{compconcp} \end{equation} Though such a result would appear quite natural, at the same time it seems very difficult to prove: indeed, it seems to require some arguments that are fairly different from the ones established in \cite{FeroneVolzone}. Nevertheless, the following numerical simulation suggests that \eqref{compconcp} is quite natural to expect. The plots in Figure \ref{fig:useCase}, obtained by a suitable implementation of the robust numerical methods in \cite{TesoLind}, seem to confirm our guess. Indeed, we have considered problem \eqref{mainpp} in the case $N=1$, $p=3$, $s=\frac12$, $\Omega=(-1,1)$, and we have compared, in terms of mass concentration, the solution $u$ when $f(x)=|x|$ with the solution $v$ to problem \eqref{maip} when $f^\#(x)=1-|x|$.
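For readers who wish to reproduce the setup of this test case, the rearranged datum itself is easy to compute numerically. The following sketch (our own illustration, not the solver of \cite{TesoLind}) approximates the symmetric decreasing rearrangement on a grid and checks that the rearrangement of $f(x)=|x|$ on $(-1,1)$ is indeed $f^\#(x)=1-|x|$.

```python
# Grid approximation of the symmetric decreasing rearrangement on (-1, 1):
# sort the values of f decreasingly and redistribute them over the grid
# points ordered by increasing |x|. Illustration only, not the numerical
# method used for the figure.
import numpy as np

m = 2001
x = np.linspace(-1.0, 1.0, m)
f = np.abs(x)                          # the datum of the test case

order = np.argsort(np.abs(x), kind='stable')
f_sharp = np.empty_like(f)
f_sharp[order] = np.sort(f)[::-1]      # largest values closest to the origin

# f and f_sharp are equimeasurable: same values, rearranged
assert abs(f.sum() - f_sharp.sum()) < 1e-6
# and the rearrangement of |x| is 1 - |x|, up to the grid resolution
assert np.max(np.abs(f_sharp - (1.0 - np.abs(x)))) < 2e-3
```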
The theoretical question will be the object of future investigation. \begin{figure}[ht] \centering \includegraphics[trim={3cm 8cm 2cm 7cm},clip, scale=0.8]{Plotp4.pdf} \caption{From left to right: plot of $u$ with the choice $f=|x|$, plot of $v$ and comparison of the mass concentrations of the $(p-1)$-th powers of $u^{\#}$ and $v$.} \label{fig:useCase} \end{figure} \newpage \section*{Acknowledgments} V.F. was partially supported by Italian MIUR through research project PRIN 2017 ``Direct and inverse problems for partial differential equations: theoretical aspects and applications''. B.V. was partially supported by Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of Istituto Nazionale di Alta Matematica (INdAM). Both authors are members of GNAMPA of INdAM. B.V. wishes to warmly thank L. Brasco for fruitful discussions and valuable suggestions. \normalcolor
\section{Volume and homology} There is a connection between the homology and the volume of a simple convex polytope that deserves to be more widely known. It leads to a presentation of the homology theory, equivalent to the usual one~\cite{bib.VD.GTV}, which is close in spirit to the concept of uniform expressions. Throughout, $\Delta$ will be an $n$-dimensional convex polytope in an $n$-dimensional affine space, described by an irredundant system \[ \alpha_i v \geq 0 \qquad i = 1 , \ldots , f_{n-1} \] of affine linear inequalities, one for each facet $\delta_i$ of $\Delta$. This writes $\Delta$ as an intersection of half-spaces. We now subject the half-spaces to a small displacement, and consider how the volume changes. In other words, let $\Delta(\epsilon)$ be the convex polytope \[ \alpha_i v \geq \epsilon_i \qquad i = 1 , \ldots , f_{n-1} \] where the $\epsilon_i$ are close to zero. This process of $\epsilon$-variation will in general change the combinatorial structure of $\Delta$. The cone (or pyramid) on a square is an example of this. It will be studied further in the course of this paper. If the $\epsilon$-variation process does not change the combinatorics (for $\epsilon$ close to zero of course) then $\Delta$ will be said to be \emph{simple}. This happens exactly when at each vertex there are exactly $n$ facets (or equivalently $n$ edges), which is the usual definition of simple. Now suppose that $\Delta$ is a simple polytope. From this it follows without any real difficulty that the volume $\vol(\epsilon)$ of $\Delta(\epsilon)$ is a polynomial of degree $n$ in the linear quantity $\epsilon$, at least for $\epsilon$ small. (To prove this, decompose $\Delta(\epsilon)$ into pyramids, one for each facet. The height of each such pyramid will be linear in $\epsilon$, while by induction the area of the base will be a polynomial of degree $n-1$ in $\epsilon$.
For general polytopes the volume $\vol(\Delta)$ will be given by one of a number of polynomials, one for each simple structure reached via small $\epsilon$-variation.) The polynomial $\vol(\Delta)$ is perhaps complicated, but it is far from arbitrary. It has a great deal of structure. For example, to know the top degree term $\vol_\Delta$ is equivalent to knowing the homology ring $H_\bullet\Delta$, a fact that we will explain shortly. The lower degree terms give information about the Chern classes of $\Delta$ (or of the projective toric variety $\PDelta$ associated to $\Delta$). This we do not need, and so will not discuss further. The top degree term $\vol_\Delta$ can be written as a symmetric multilinear form, also to be denoted by $\vol_\Delta$, in $n$ variables $\alpha_1$, $\ldots$, $\alpha_n$. Each $\alpha_i$ is a (small) $\epsilon$-variation of $\Delta$. The simplest case is where each $\alpha_i$ is a displacement of a single facet of $\Delta$. By linearity, these determine the general case, which is where the $\alpha_i$ are formal sums of displacements of individual facets. In any case, let $T^1\Delta$ denote the linear span of all such $\alpha_i$. It has dimension $f_{n-1}$, the number of facets, and represents \emph{thickenings} or $\epsilon$-displacements of the facets. Now let $T^i\Delta$ denote the $i$-fold tensor product of $T^1\Delta$ with itself. As already noted, $\vol_\Delta$ can be thought of as a symmetric linear function on $T^n\Delta$. Now let $i+j=n$ and consider the map \begin{equation} \label{eqn.Ti-Tj} \vol_\Delta : T^i \Delta \otimes T^j \Delta \to \bfR \end{equation} induced by $T^i\Delta \otimes T^j \Delta \cong T^n \Delta$. This is a pairing of vector spaces. Now form the null spaces \[ N^i\Delta = \{ \, \psi \in T^i \> | \> \vol_\Delta ( \psi , \eta ) = 0 \mbox{ for all $\eta \in T^j$} \, \} \] and $N^j\Delta$ similarly, and then form the quotient spaces $H^i = T^i/N^i$, $H^j = T^j/N^j$.
The pairing now descends to give a nondegenerate (or perfect) pairing \begin{equation} \label{eqn.Hi-Hj} H^i \Delta \otimes H^j \Delta \to \bfR \end{equation} between a pair of vector spaces. This definition of the (co)homology spaces $H^i\Delta$ is equivalent to the usual one. (To simplify matters, we will identify the cohomology group $H^i\Delta$ with the homology group $H_j\Delta$, where $i+j=n$. Thus, cohomology is homology, but indexed by codimension as a superscript, rather than dimension as a subscript.) The key facts are these. First, the (co)homology is generated by the facets. This means that $T^i\Delta$ will generate $H^i\Delta$. Second, Poincar\'e duality holds. This means that passing from (\ref{eqn.Ti-Tj}) to (\ref{eqn.Hi-Hj}) will produce (co)homology in the usual sense. Third, the volume of $\Delta$ and the volume of the toric variety $\PDelta$ agree, provided $\PDelta$ exists. More exactly, $\Delta$ determines on $\PDelta$ a `hyperplane class' or K\"ahler form $\omega$ such that \[ \vol \PDelta = \int _{\PDelta} \omega ^n \] is proportional to $\vol\Delta$. This is true for all $\Delta$, and so respects $\epsilon$-variation. Agreement here is enough to force agreement everywhere. One can look at this in another way. Suppose $H^i\Delta$ is known, and that it is generated by $H^1\Delta$. The induced map \begin{equation} \label{eqn.H1-n-times} H^1\Delta \otimes \dots \otimes H^1 \Delta \to H^n\Delta \cong \bfR \end{equation} on the $n$-fold tensor product is more-or-less equivalent to $\vol_\Delta$ on $T^n\Delta$. (In fact, $H^1\Delta$ is $T^1\Delta$ modulo certain relations.) Because of Poincar\'e duality (and generation by $H^1\Delta$), using $i+j=n$ to break this up into a pairing will produce $H^i$ and $H^j$. The induced map (\ref{eqn.H1-n-times}) is however equivalent to knowing a homogeneous polynomial on $H^1\Delta$ (or $T^1\Delta$). In this way, the $\epsilon$-variation volume polynomial and the homology theory determine each other.
Here is a result that will be used to help do an example in the next section. Clearly, an $\epsilon$-variation that is in fact a rigid bodily translation of $\Delta$ will not change its volume. Thus, $\vol_\Delta(\alpha_1, \dots , \alpha_n)$ will be zero if any one of the $\alpha_i$ is of this form. (Each $\alpha_i$ is a formal sum of facet displacements.) Next, just as \begin{equation} \label{eqn.2xy} 2 \langle x | y \rangle = \| x+y \| ^2 - \| x\| ^2 - \| y\|^2 \end{equation} turns a quadratic form into a bilinear form, so a similar process will produce the multilinear form $\vol_\Delta$ from the polynomial form. It now follows that if each $\alpha_i$ is a displacement of a single facet $\delta_i$, and these facets $\delta_i$ have empty common intersection, then $\vol_\Delta(\alpha_1,\dots,\alpha_n)$ is zero. For $n=2$, this is because the $\delta_1$ and $\delta_2$ displacements do not, so to speak, interfere with each other, and so (\ref{eqn.2xy}) will be zero. A similar argument holds in general. In the usual approach, Poincar\'e duality asserts that a pairing between already defined homology spaces is nondegenerate. In the present approach, duality is part of the definition. Poincar\'e duality then becomes a characterization of the nullspaces $N^i$ and $N^j$. It asserts that they are generated by the rigid displacement and empty intersection results stated in the previous paragraph. Finally, in the present approach, the ring structure is automatic. Clearly there is a map $T^i\otimes T^j \to T^{i+j}$, and it is obvious that $N^i\otimes T^j$ and $T^i\otimes N^j$ both lie in $N^{i+j}$. Hence the tensor product descends to give a multiplication $H^i\otimes H^j\to H^{i+j}$ on (co)homology. Because all this is based on the well-defined concept of volume, there is no need to move cycles into general position and so forth.
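A minimal worked example of the construction may be helpful. For the unit square ($n=2$, four facets), the $\epsilon$-variation volume polynomial can be written down by hand, and polarizing its top-degree part recovers the expected homology: the null space is spanned by the two rigid translations, so $\dim H^1 = f_1 - n = 2$, matching the middle homology of the associated toric variety $\mathbf{P}^1\times\mathbf{P}^1$. The sympy sketch below (the facet ordering is our own convention) carries this out.

```python
# The unit square 0 <= x, y <= 1 with facets pushed inward by e1..e4,
# in the order (x >= 0, x <= 1, y >= 0, y <= 1).
import sympy as sp

es = list(sp.symbols('e1:5'))
e1, e2, e3, e4 = es
vol = (1 - e1 - e2) * (1 - e3 - e4)   # volume of the displaced square

# vol has total degree n = 2, so its second derivatives see exactly the
# top-degree term; polarizing gives the bilinear form vol_Delta on T^1
M = sp.Matrix(4, 4, lambda i, j: sp.diff(vol, es[i], es[j]) / 2)

# rigid translations lie in the null space ...
assert M * sp.Matrix([1, -1, 0, 0]) == sp.zeros(4, 1)   # x-translation
assert M * sp.Matrix([0, 0, 1, -1]) == sp.zeros(4, 1)   # y-translation
# ... and span it, so dim H^1 = f_1 - n = 4 - 2 = 2
assert len(M.nullspace()) == 2
# opposite facets have empty intersection and pair to zero
assert M[0, 1] == 0 and M[2, 3] == 0
```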
\section {Uniform expressions} When $\Delta$ is not simple, the polytopes produced by $\epsilon$-variation will in general have different combinatorial structures. The cone (or pyramid) on a square is the simplest example. Suppose that $\Delta$ is this polytope. Let the points of the compass $N$, $S$, $E$ and $W$ denote the triangular facets, and let $B$ denote the square base. Use the same symbols to denote thickenings of these facets, so chosen that $N-S$, $E-W$ and $N+E-B$ each represent a thickening due to a rigid displacement. (Such displacements do not change the volume.) Normalise so that $N\frown E \frown B$ has volume one. Now let $\Delta_{NS}$ denote the simple polytope that results from `squeezing in' the $E$ and $W$ faces (or `letting out' the $N$ and $S$ faces). Thus, $\Delta_{NS}$ is a `ridge roof' polytope, with the ridge running $NS$. Similarly, let $\Delta_{EW}$ be the other ridge roof polytope obtained from $\Delta$ by the $\epsilon$-variation process. As before, let $N$, $S$, $E$, $W$ and $B$ denote the previous thickened facets on $\Delta_{NS}$ and $\Delta_{EW}$, as well as on $\Delta$. That $N-S$, $E-W$ and $N+E-B$ represent trivial thickenings is still true. In the language of algebraic varieties, $\Delta_{NS}$ and $\Delta_{EW}$ are two different resolutions of $\Delta$. The $H_2$ groups of the three polytopes are naturally isomorphic, and this isomorphism is the inclusion given by the decomposition theorem. Analogous results will hold for all convex polytopes. That $\epsilon$-variation does not change `the geometry in codimension one' allows us to use facets as a starting point for the use of the decomposition theorem. On $\Delta$, there are five generators for $H_2\Delta$ (the facets) and three relations ($N-S=E-W=N+E-B=0$). This leaves a two-dimensional space, with generators $N$ and $E$ say. We will now look for uniform expressions.
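The dimension count just made can be checked by elementary linear algebra. In the ordered basis $(N,S,E,W,B)$ of thickened facets (the ordering is our convention), the three trivial thickenings are independent, leaving a two-dimensional quotient:

```python
# The three rigid (trivial) thickenings of the cone on a square, written
# as rows in the facet basis (N, S, E, W, B).
import sympy as sp

relations = sp.Matrix([
    [1, -1, 0,  0,  0],   # N - S
    [0,  0, 1, -1,  0],   # E - W
    [1,  0, 1,  0, -1],   # N + E - B
])
assert relations.rank() == 3        # the relations are independent
assert 5 - relations.rank() == 2    # so H_2 is two-dimensional: N and E survive
```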
In other words, adapting the notation of the previous section, we seek $\alpha$ in $T^2\Delta$ such that \[ \Delta_{NS} \frown N \frown \alpha = \Delta_{EW} \frown N \frown \alpha \] and so on for the other facets. More exactly, we seek an $\alpha$ and a $\beta$, both with this uniform property, that provide a basis for $H_1$ that is dual to the $\{\,N,E\,\}$ basis for $H_2$. Here is an easy solution. Take $\alpha=W\frown B$ and $\beta=S\frown B$. In every case the intersection or homology calculation can be performed entirely on the base $B$ of $\Delta$, and so is insensitive to the choice of $\Delta_{NS}$ or $\Delta_{EW}$. Thus, $\alpha$ and $\beta$ are uniform expressions that provide the sought-for dual basis. This solution is perhaps too easy. Duality is in essence a local matter. About the apex of $\Delta$, both $N$ and $E$ represent non-trivial local cycles. However, because $N+E-B$ is trivial, and because $B$ is not local to the apex, the local cycles due to $N$ and $E$ are equal but opposite. We therefore would like a uniform expression that is both local to the apex and dual to $N$. The quantity $\eta=E\frown W - N \frown S$ is a solution to this harder local problem. The verification proceeds as follows. First \[ \Delta_{NS} \frown N \frown ( E\frown W - N \frown S ) = \Delta_{NS} \frown N \frown E \frown W - 0 = 1 \] because on $\Delta_{NS}$ the facets $N$ and $S$ do not meet. Next \[ \Delta_{EW} \frown N \frown ( E\frown W - N \frown S ) = 0 - \Delta_{EW} \frown N \frown N \frown S \] because on $\Delta_{EW}$ the facets $E$ and $W$ do not meet. To compute $N \frown N \frown S$ on $\Delta_{EW}$ use $N=B-E$ to obtain \[ \Delta_{EW} \frown B \frown N \frown S - \Delta_{EW} \frown E \frown N \frown S = 0 -1 = -1 \] (because $B$, $N$ and $S$ have empty intersection on $\Delta_{EW}$) and thus the equation \[ \Delta_{NS} \frown N \frown \eta = \Delta_{EW} \frown N \frown \eta \] holds.
The verification that \[ \Delta_{NS} \frown E \frown \eta = \Delta_{EW} \frown E \frown \eta \] is left to the reader. (It also follows because $N+E=B$, and any evaluation involving $B$ is automatically uniform.) Now let $\Delta$ be an arbitrary convex polytope, as usual of dimension $n$. In the simple case there was a single (multilinear) volume form $\vol_\Delta$, whose evaluation on $T^i\otimes T^j$ (with $i+j=n$) induced the homology groups. In the general case we will define $\vol_\Delta$ to be the set $\{\,\vol_r\,\}$ of multilinear forms, due to \emph{all} the simple polytopes $\Delta_r$ that arise from $\Delta$ as a result of $\epsilon$-variation. We now seek subspaces $U^i$ and $U^j$ of $T^i$ and $T^j$, such that $\vol_\Delta$ is on $U^i\otimes U^j$ a single valued function. The task now is to define $U^i$ and $U^j$. Of course, once, say, $U^i$ is known, one can define $U^j$ to be all $\xi$ in $T^j$ such that $\vol_\Delta(\eta,\xi)$ is single valued, for any $\eta$ in $U^i$. One can then, as in the simple case, quotient by the null spaces $N^i$ and $N^j$ of $U^i\otimes U^j \to \bfR$ to obtain a perfect pairing on $H^i$ and $H^j$. The correct choice of the $U^i$ and $U^j$ is somewhat delicate. One would like in some sense to choose them so that $H^i$ and $H^j$ are as large as possible. In this way, they will contain as much information as they are able to. Enlarging $U^i$, if it does not change $U^j$, might enlarge $H^i$. But it might reduce $U^j$, and thus enlarge $N^i \subseteq U^i$. As noted, once, say, $U^i$ is known, the rest of the construction follows in a mechanical manner. Thus, the definition of $U^i$ is the central question of this section. For $i=1$ the correct value of $U^1$ is already known. It is the whole of $T^1$. Thus, $U^{n-1}$ consists of all $\xi$ in $T^{n-1}$ such that $\vol_\Delta(\eta,\xi)$ is single valued, for any $\eta$ in $T^1$.
For this construction, and indeed the whole of the uniform expression programme, to be correct, the resulting homology groups should have the desired dimension, which in this case is $f_{n-1}-n$. In other words, there should be enough uniform $\xi$. In the present case of $U^{n-1}$ we can use the strong Lefschetz theorem to find such $\xi$. Choice of a point $p$ interior to $\Delta$ will determine a thickening $\omega_p \in T^1\Delta$, namely the displacement that would take each of the facets from $p$ to where they actually are. Changing $p$ will add a rigid (and so trivial) displacement to $\omega_p$. We will think of $\omega_p$ as the unique \emph{hyperplane class} or \emph{Lefschetz element} $\omega$ in $T^1\Delta$, although it is only in $H^1\Delta$ that it becomes unique. The Lefschetz element $\omega$ has an important local property, and an important global property. The local property is that it is, in the language of algebraic geometry, a \emph{slice} or a \emph{section} through a variety. As a result, it is locally trivial, in the sense that at each vertex there is a (unique) trivial displacement that agrees with $\omega$ on the facets through that vertex. We will let $S^1$ denote all such locally trivial systems of thickened facets. Similarly, $S^i$ will be the $i$-fold tensor product of $S^1$ with itself. If $\Delta$ is simple, then $S^1=T^1$, and vice versa. Even though in general two intersection homology classes cannot be multiplied together, they always can be, provided one of them is locally trivial. The global property is the strong Lefschetz theorem. This tells us that for $i+j=n$ and $i \geq j$ the map \[ \omega ^ {i-j} : H^j \Delta \to H^i \Delta \] is an isomorphism. This is at present known only for simple polytopes, general polytopes with rational vertices, and some other special cases. But it is reasonable to assume that it is true in general. As noted, $U^1$ is $T^1$.
Now let $U^{n-1}$ be the expressions (in $T^{n-1}$) that are uniform when paired against $U^1$. The first property of $\omega$ allows us to conclude that $\omega^{n-2}\xi$ is a uniform expression, for any $\xi \in U^1 = T^1$. (This follows because $\Delta$ is simple along its faces of dimension $n-2$.) The second property allows us to conclude that there are enough such elements of $U^{n-1}$, to obtain dual spaces $H^1$ and $H^{n-1}$ of dimension $f_{n-1}-n$. (This requires either Poincar\'e duality for intersection homology, or better, some further facts about the strong Lefschetz theorem. The required facts are, for $U^1$ and $U^{n-1}$, a consequence of Minkowski's results \cite{bib.HM.ALKP} on facet area and `outward normal vectors'.) Thus, it is already known that at least in this case the uniform expression approach gives the correct answer. There is unlikely to be an easy proof of this result. We now turn to $U^2$ and $U^{n-2}$, or more generally $U^i$ and $U^j$, for $i+j=n$ and $i \leq j$. We can suppose that $U^{i-1}$ is already known. If $\xi$ is to lie in $U^i$, the product \begin{equation} \label{eqn.xi-omega-eta} \xi \frown \omega ^ k \frown \eta \in T^n \end{equation} must also be uniform, for any $\eta$ in $U^{i-1}$. Here, $k$ is $n-2i+1$. We will assume that this necessary condition is also sufficient, or in other words that it produces the correct definition of $U^i$. The space $U^j$ of complementary dimension can be defined just as before, to be the expressions that are uniform when paired with expressions in $U^i$. This completes the definition except for the case $i=j$ (and $n$ is even). In this case $U^i$ and $U^j$ had better be equal. But this question arises already, with the other $U^i$. If $U^i$ has been correctly defined then expressions such as (\ref{eqn.xi-omega-eta}) must again be uniform, where now $\xi$ and $\eta$ are in $U^i$, and $k$ is $n-2i$. This then is a wished-for property of $U^i$.
If it fails to hold, then the uniform expression programme also fails. The equality of $U^i$ and $U^j$ for $i=j$ is a special case of this. Finally, the definitions of $U^0$ and $U^n$, and of $U^1$ and $U^{n-1}$, are special cases of this general scheme. \section {Local-global intersection homology} By design, the uniform expressions of the previous section are insensitive to the choice of a resolution (simple $\epsilon$-variation) of the object being studied. The definition of this section will record information about how the value of a (non-uniform) expression depends on the resolution chosen. Here is an example. Let $\Delta$ be the cone on a square, and let $\Delta_{NS}$, $\Delta_{EW}$, $N$, $S$, $E$, $W$ and $B$ be as before. Now consider the expression $N\frown S \frown E$. This expression is not uniform. On $\Delta_{NS}$ it evaluates to zero, because the $N$ and $S$ facets do not meet. On $\Delta_{EW}$ it is $1$. (Replace $N$ by the equivalent cycle $B-W$. On $\Delta_{EW}$ the facets $W$ and $E$ do not meet. Moreover, $B\frown S \frown E$ is equal to $1$.) The task is to organise such information in a sensible way. The author has already defined a theory of local-global intersection homology \cite{bib.JF.LGIH,bib.JF.CPLA} which will, usually behind the scenes, guide the definitions that follow. This theory provides, for each $n$-polytope, a system of $F_{n+2}$ `Betti numbers', of which $F_{n+1}$ are linearly independent. These `Betti numbers' are organised into $F_{n}$ sequences or strings. The Lefschetz operator $\omega$ goes from the homology group at one location in a string to the previous one (or next, depending on the indexing scheme). Here, $F_i$ is the $i$-th Fibonacci number. Bayer and Billera \cite{bib.MB-LB.gDS} showed that the flag vectors of $n$-polytopes span a space whose dimension is $F_{n+1}$. The `Betti numbers' are a re-encoding of information in the flag vector.
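The Fibonacci bookkeeping quoted here is easy to check directly (with the convention $F_1=F_2=1$):

```python
# Fibonacci numbers with F_1 = F_2 = 1, and the counts used in this paper:
# F_{n+2} `Betti numbers' for an n-polytope, of which F_{n+1} are independent.
def fib(i):
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

assert [fib(k) for k in (5, 6, 7)] == [5, 8, 13]
assert 4 + 1 == fib(5)       # n = 3: H_0..H_3 plus one local-global group
assert 5 + 3 == fib(6)       # n = 4
assert 6 + 6 == fib(7) - 1   # n = 5: one group is missed at first sight
```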
Now let $\Delta$ be any $3$-polytope, and let $\Delta_1$ be any resolution (by which we mean, of course, a simple $\epsilon$-variation). Now think of the expression \[ \Delta_1 \frown \alpha \frown \beta \frown \gamma \qquad \alpha, \beta, \gamma \in U^1 = T^1 \] as a linear function of $\gamma$. It will of course vanish for $\gamma \in N^1$, and so by Poincar\'e duality determines a class in $H_1\Delta = H^2\Delta$. (In the first section, this class was denoted by $\alpha \frown_1 \beta$.) If $\Delta_1$ were replaced by another resolution $\Delta_2$, a different class might result. Now consider the expression \[ (\Delta_1 - \Delta_2 ) \frown \alpha \frown \beta \frown \gamma \qquad \alpha, \beta, \gamma \in U^1 = T^1 \] as a linear function of $\gamma$. Again it represents a class in $H_1\Delta$, but in contrast to the previous case it has an important geometric property. It is concentrated about, or local to, the locus in $\Delta$ above which the two resolutions $\Delta_1$ and $\Delta_2$ differ. In the present case ($3$-polytopes) this locus will be a finite set of points. Thus, global cycles that are constructed in this way satisfy certain locality properties. This is why they are called local-global cycles. They are neither purely local nor purely global, but partake of the properties of both. Now consider the octahedron. Such a local-global cycle can be found at each of its six vertices. The octahedron has $(1,5,5,1)$ as its Betti numbers, and so there must be, when considered as global cycles, a relation between these six local-global cycles. (In fact there are two such relations.) These relations do not however respect the local aspect of the given local-global cycles. It is important, particularly in higher dimensions, to have a satisfactory definition of the equivalence of local-global cycles. The solution to this problem is to introduce a dual theory. In other words, a bilinear pairing will be produced.
As in the previous section, the quotients by the null-spaces will be called the \emph{local-global homology groups}. In contrast to the previous case, where $H^i$ and $H^j$ were paired with each other, here the pairing will be between \emph{compact} and \emph{open} local-global cycles. They are markedly different objects. The cycles just recently introduced are compact. We continue to study the case $n=3$. As already noted, any global cycle (thickened facet) $\gamma \in U^1 = T^1$ can be paired with a compact local-global cycle such as \begin{equation} \label{eqn.Delta12-alpha-beta} (\Delta_1 -\Delta_2) \frown \alpha \frown \beta \end{equation} but there will not in general be enough such cycles. We can obtain more by relaxing the properties that $\gamma$ satisfies. This involves an understanding of how intersection numbers (products of homology cycles) can be calculated. Suppose $\Delta$ is simple. Previously, the intersection number of $n$ thickened facets was defined using the already well-defined polynomial $\vol\Delta(\epsilon)$. There is another method, which has been used in the examples. It is to use equivalence of cycles to move them into `general position', and to then compute the well-defined volume of their common intersection (as thickened facets). This is the traditional method. It does not rely on the volume polynomial. That it produces a well-defined product is a consequence of two facts. The first is that cycles can always be moved into general position. This ensures that the product can be calculated in every case. The second fact is that howsoever the calculation is performed, the same answer results. The dual theory of \emph{open local-global cycles} will be developed using this `general position' approach to the pairing. When a product of $n$ thickened facets is in general position, the intersection takes place at a vertex.
Suppose now that instead of having a global thickened facet $\gamma$, one has at each vertex $v$ a thickened facet $\gamma_v$. This is a more general concept, for $\gamma$ at $v$ need not be the same thickened facet as $\gamma$ at $w$. Such will be called a (vertex-centered) system of open local cycles, as will be formal sums of such. Each (formal sum of) global thickened facets will determine such a system, although in general the converse is plainly false. Recall that an intersection pairing will exist as a consequence of the two properties of computability and consistency. The first is essentially a local matter. To move a class from a vertex, add to it a suitable cycle that is equivalent to zero. This process can be used to produce a perhaps ill-defined pairing between compact cycles on $\Delta$ and vertex-centered open local cycles of complementary dimension, as defined in the previous paragraph. The cycles equivalent to zero are as in the usual theory, but understood as open local cycles. In general, such a product will not be consistent. However, we wish to apply it not to all compact cycles, but only to those that arise in a particular way, namely as compact local-global cycles. Suppose such a set of cycles is given. For an open local cycle to give a consistent answer when evaluated against such a family of compact cycles is a \emph{global} property of open local cycles. Such a cycle will be called, as one would expect, an open local-global cycle. The global cycles (of appropriate dimension) have this property. Because the open cycles are being paired with a restricted set of compact cycles, there will in general be many open cycles that do not arise from a global cycle. In fact, for $n=3$ the consistency condition turns out to be vacuous. (In higher dimensions it is more subtle.) There are some points that should be clarified.
First, when computing the compact-open local-global pairing, any moving that is to be applied on the compact side should respect the local-global origin of the cycles. In other words, a cycle such as (\ref{eqn.Delta12-alpha-beta}) should be moved only by moving $\alpha$ and $\beta$, and not by the arbitrary moving of elements in $H_1\Delta_1$ and $H_1\Delta_2$. Second, on the open side the local model should be a uniform expression, and not an arbitrary product of thickened facets. (This seems to be demanded on aesthetic and logical grounds. This author does not know if this will affect the final outcome.) The third and final point is that the relation between compact and open is similar to that between $U^i$ and $U^j$, for $i+j=n$ and $i\geq j$. As noted, enlarging $U^i$ might reduce $U^j$ and hence reduce $H^i$. On the other hand, it might not change $U^j$, and can thus enlarge $H^i$. This phenomenon helps explain some of the subtle differences between various similar-looking local-global groups. We will now assume that whenever a system of compact local-global cycles is defined, it will be paired with the space of open local-global cycles that it determines (as with $U^i$ and $U^j$), to produce both the compact and the open local-global groups. (Recall that this is done by factoring out the null-spaces.) To complete this section we will exhibit, for $n\leq 6$, the appropriate spaces of compact local-global cycles. They all have the general form $(\Delta_1 - \Delta_2)\eta$, where $\eta$ is a non-uniform expression. As already noted, for $n=3$ we take $\eta=\alpha\frown\beta\in T^2$. This gives, as promised, $4+1=5=F_5$ homology groups. (The $4$ comes from $H_0$, $\ldots$, $H_3$.) For $n=4$ we use $T^2$, $T^2\otimes S^1$ and $T^3$ to obtain $5+3=8$ homology groups. Now for $n=5$. We use $T^2$, $T^2\otimes S^1$ and $T^2 \otimes S^2$. This corresponds to a `string' of Betti numbers.
Because the Lefschetz element $\omega$ is an element of $S^1$, it acts on this string of compact local-global groups. There will also be an adjoint map on the open groups. We also have $T^3$ and $T^3\otimes S^1$ as another string. (This group will be reconsidered shortly.) And there is also $T^4$. This gives $6$ local-global groups organised into $3$ strings. There is also $H_0$, $\ldots$, $H_5$. As the general theory, as already noted, gives $13$ organised into $5$ strings, we have missed one of the local-global groups. A certain amount of thought shows that the missing group consists of compact local $2$-cycles. Taking $\eta\in T^3$ is the way to produce such a cycle, but this must be distinguished from the previous use of $T^3$. There, the $T^3$ represents a `one-dimensional family of local $1$-cycles', and such will usually have a non-empty intersection against the generic element of $S^1$, whereas such will always miss a local $2$-cycle. Thus, the `missing' group will be produced by $T^3 \cap (S^1)^\perp$. (By this is meant formal sums $\eta$ of $\alpha_1\frown \alpha_2\frown \alpha_3$ such that $\eta \frown \xi$ is zero for any open $\xi$ that is locally of the form $S^1 \otimes T^1$. At this point it makes no sense to impose global conditions on $\xi$.) The first use of $T^3$ is not, according to the local-global theory, quite as it should be. The cone on a simple $4$-polytope will in general have non-trivial such $T^3$ items, but will not of course have any `one-dimensional family of local $1$-cycles'. The solution is to impose an additional condition, besides consistency, on the open cycles that are used. The condition is that if $\Delta_1$ and $\Delta_2$ differ only over isolated points of $\Delta$, then the open local cycle $\xi$ should vanish against $(\Delta_1-\Delta_2)\frown\eta$, for any $\eta$ in $T^3$. A similar condition should be imposed for $T^2$ when $n=4$, but in that case it is vacuous.
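As a bookkeeping check, the counts obtained so far agree with the Fibonacci pattern $F_{n+2}$ demanded by the general combinatorial structure of the theory (here $F_1 = F_2 = 1$ and $F_{k+1} = F_k + F_{k-1}$):
\[
n=3:\quad 4+1 = 5 = F_5, \qquad
n=4:\quad 5+3 = 8 = F_6, \qquad
n=5:\quad 6+6+1 = 13 = F_7 .
\]
In each case the first summand counts the string $H_0$, $\ldots$, $H_n$, and the remaining summands count the local-global groups (including, for $n=5$, the single `missing' group just discussed).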
For $n=6$ we will have all the $n=5$ items, both as themselves, and multiplied by $S^1$. In other words, there is $T^2$, $\ldots$, $T^2\otimes S^3$ and $T^3$, $T^3\otimes S^1$, $T^3 \otimes S^2$ and $T^4$, $T^4 \otimes S^1$. The missing group from $n=5$ produces \[ T^3 \cap (S^2)^\perp \qquad (T^3 \otimes S^1) \cap (S^2)^\perp \] for $n=6$. (This last requires some thought.) There will also be $T^5$ and $T^4 \cap (S^1)^\perp$. Altogether this is $13$ local-global groups organised into $6$ strings. Conditions similar to those formulated in the previous paragraph should again be imposed on open cycles. Add to this the $H_0$, $\ldots$, $H_6$ string to obtain $20$ groups in $7$ strings. This is short by a string containing a single group. This `missing' item will be defined in the next section. \section {Second order local-global homology} In the previous section it was seen how suitable expressions of the form $(\Delta_1 - \Delta_2)\frown\eta$ can be made to represent compact local-global homology classes. These, together with the uniform expressions of \S3, were enough in dimension at most $5$ to produce the $F_{n+2}$ homology groups that are demanded by the general combinatorial structure of the local-global theory. We also saw that for $n=6$, the count was one short. This section will define the missing group. (The previous sections dealt with order zero and order one respectively.) First let $\Delta$ be the cone on a square. This polytope has dimension three, and in some sense is the first non-simple polytope. On it there is a non-trivial compact local-global cycle. Now consider the product $\Delta\times\Delta$. This has dimension six. On it one can form the product $\eta$ of the local-global cycles on its factors. Once this has been suitably understood, it will provide an example of a second-order local-global cycle. It is natural to stratify $\Delta$ into the apex (the only non-simple point) and the rest.
In the same way, $\Delta\times\Delta$ can be stratified into $\{0\}\times\{0\}$, $\{0\}\times\Delta$, $\Delta\times\{0\}$ and the rest. Here `$0$' denotes the apex of $\Delta$. The cycle $\eta$ will in some sense be local to the `apex' $\{0\}\times\{0\}$ of $\Delta\times\Delta$. It represents a local-global cycle that is local to $\{0\}\times\Delta$ (or to $\Delta\times\{0\}$) and that can further be made local to $\{0\}\times\{0\}$. In fact, there will be two such cycles, one for each factor in $\Delta\times\Delta$. Although equivalent when regarded as first-order local-global cycles, they will be inequivalent when regarded as second-order such. For simplicity, let $\Delta_a$ and $\Delta_b$ denote the two simple polytopes formed from $\Delta$ by the process of $\epsilon$-variation. Now let $\Delta_{aa}$, $\ldots$, $\Delta_{bb}$ be the corresponding resolutions of $\Delta\times\Delta$. The alternating sum \[ \Delta_{{.}{.}} = \Delta_{aa} - \Delta_{ab} - \Delta_{ba} + \Delta_{bb} \] has the interesting property that above the whole of $\Delta\times\Delta$, except for the apex $\{0\}\times\{0\}$, it is so to speak zero. Consider, for example, the cycle $\Delta_{{.}{.}}\frown\eta$, for any suitable (non-uniform) $\eta$. This will be a compact cycle that is, in some sense, concentrated at the apex $\{0\}\times\{0\}$. (Throughout this section we will take $\eta$ to be as follows. Each thickened facet $\alpha$ on $\Delta$ determines thickened facets $\alpha\times\Delta$ and $\Delta\times\alpha$ on $\Delta\times\Delta$. We will have $\eta$ be $(\psi\times\Delta)\frown(\Delta\times\psi)$ where $\psi$ is say $N\frown E$ on $\Delta$, or any other non-uniform element of $T^2\Delta$.) As presented in the previous paragraph, the expression $\Delta_{{.}{.}}\frown\eta$ represents a first-order local-global cycle. Let us now present it as a second order such.
To do this we introduce \[ \Delta_{{.}a} = \Delta_{aa} - \Delta_{ba} \> , \qquad \Delta_{{.}b} = \Delta_{ab} - \Delta_{bb} \] and then consider both $\Delta_{{.}a}\frown\eta$ and $\Delta_{{.}b}\frown\eta$. Each of these expressions represents a first-order local-global cycle on $\Delta\times\Delta$. The difference \[ (\Delta_{{.}a} - \Delta_{{.}b}) \frown \eta \] is again a local-global cycle, but now it is local to the apex $\{0\}\times\{0\}$. The differences between the various local-global cycles that can be constructed from $\Delta_{{.}{.}}$ and $\eta$ are subtle, and somewhat as in the previous section they only become apparent once the dual open theory has been defined. For order zero `open' cycles, the basic model is a (thickened) facet lying on $\Delta$. For the first order theory, the basic model is a facet passing through a vertex. For the second order theory, the basic model will be a facet that contains a flag. Here, a flag is for example a vertex lying on a $3$-face. The `Betti numbers' are in some clever way counting how many flags there are of each type. Now once again consider the difference $\Delta_{{.}a} - \Delta_{{.}b}$. We do this not as the formal sum $\Delta_{{.}{.}}$, but as an expression in its own right. In other words, it can more exactly be thought of as an ordered pair $(\Delta_{{.}a},\Delta_{{.}b})$, that is to be treated in a particular way. The quantity $\Delta_{{.}a}$ has a face of $\Delta\times\Delta$ naturally associated to it, namely the face $\{0\}\times\Delta$ along which the two components $\Delta_{aa}$ and $\Delta_{ba}$ differ. The same applies to $\Delta_{{.}b}$. Now note that $\Delta_{{.}a}$ and $\Delta_{{.}b}$ are so to speak concentrated over or around the face $\{0\}\times\Delta$, and so it makes sense that they should be paired with products of thickened facets that pass through this face. As already noted, this pairing will vanish over all of $\Delta\times\Delta$, except for the apex $\{0\}\times\{0\}$. 
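(Note that, purely as formal sums, the two presentations agree:
\[
\Delta_{{.}a} - \Delta_{{.}b}
= (\Delta_{aa} - \Delta_{ba}) - (\Delta_{ab} - \Delta_{bb})
= \Delta_{aa} - \Delta_{ab} - \Delta_{ba} + \Delta_{bb}
= \Delta_{{.}{.}} \>,
\]
so the distinction between $(\Delta_{{.}a}-\Delta_{{.}b})\frown\eta$ and $\Delta_{{.}{.}}\frown\eta$ lies not in the sum itself, but in the way the ordered pair $(\Delta_{{.}a},\Delta_{{.}b})$ is treated.)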
One way to record this fact is to observe that $\Delta_{{.}{.}}\frown\eta\frown\alpha\frown\beta$ will always be zero, if it is known that $\alpha$ lies in $S^1$. This is true both as a local and a global statement. For the open cycles, there is a detail missing. Although the face $\{0\}\times\Delta$ has entered the discussion, the apex $\{0\}\times\{0\}$ has not. It does so in the following way. Let $\alpha$ and $\beta$ be the thickened facets $\Delta\times N$ and $N\times\Delta$ on $\Delta\times\Delta$. (Any of the four triangular facets of $\Delta$ could just as well have been chosen.) Now consider the object that is the flag $(\,\{0\}\times\{0\} \subset \{0\}\,)\times\Delta$, together with the (open) expression $\alpha\frown\beta$. Note that this expression has certain vanishing or incidence properties with respect to the flag part of the object. Both $\alpha$ and $\beta$ pass through $\{0\}\times\{0\}$, while $\beta$ also contains $\{0\}\times\Delta$. As in \cite{bib.JF.CPLA}, these properties can be presented in an abstract form. This study of $\Delta\times\Delta$ has led to a compact cycle $(\Delta_{{.}a}-\Delta_{{.}b}) \frown\eta$, and an open cycle (described in the previous paragraph), that have a non-zero pairing with each other. By writing down the abstract properties that these cycles satisfy, a definition of the second-order local-global cycles for $n=6$ will be obtained. At this point, it is well to stop. The reader who is already familiar with \cite{bib.JF.LGIH} and \cite{bib.JF.CPLA}, especially the former, will appreciate that there are subtleties in the dimension of flags to be used, that have yet to manifest themselves. For the other readers, there is no easy way to explain further, other than to appeal to the principles already announced in \cite{bib.JF.LGIH}. It has not been the goal of this section to produce a complete and rigorous definition of the higher-order local-global homology.
Rather, it has been to develop the concepts in a fairly natural manner, up to the point where all the major features have been exposed. The goal has been more to show the existence of such an approach, than to exhibit it in a formal and rigorous manner. \section {Summary and conclusions} This section discusses the following. First, application to the combinatorics of general convex polytopes. Second, application to more general algebraic varieties. Third, application of the volume polynomial approach to other situations, such as the Voronoi polytope of a positive definite quadratic form. Finally, some remarks are made on how the conjectures implicit in this paper might be proved. If $\Delta$ is a simple polytope, the known facts regarding $H_\bullet\Delta$ imply numerical conditions on the face vector that are, in addition to being necessary, also sufficient for the existence of a simple polytope with given face vector. These facts are generation by the (thickened) facets, the ring structure, the strong Lefschetz theorem, and a formula for the Betti numbers in terms of the face vector (and vice versa). The numerical conditions are implicit in the properties of $H_\bullet\Delta$. The proof of necessity requires both the strong Lefschetz theorem and some results on monomial rings and the like. The proof of sufficiency is a matter of finding a suitable ingenious construction, and showing that it gives a polytope with the required face vector. For general polytopes, results about the homology object $H_\bullet\Delta$ will again produce necessary conditions, this time on the flag vector of $\Delta$. It is still true that $H_\bullet\Delta$ is generated, in some sense, by the possibly thickened facets of $\Delta$. This is of course true for the expressions $\eta$, $\psi$ and so forth, whether uniform or not. It also seems to be true for the simple $\epsilon$-variations $\Delta_i$.
Each ordering of the facets of $\Delta$ will determine the combinatorial type of such a $\Delta_i$. Simply move the first facet outward until it is in general position, then the second, then the third, and so on. Even if not all $\Delta_i$ arise in this way, perhaps a `spanning set' will so arise. The nature of any ring-like structure that might exist (in the simple case this gives the `pseudo-power inequalities') is not so obvious. Proof of the strong Lefschetz theorem and formulae for the Betti numbers are likely, in general, to be deep results. (It follows from Bayer's example \cite[\S?]{bib.JF.LGIH} that such is unlikely to hold for all the local-global homology groups. There seems to be a strong analogy or connection between the $\epsilon$-variation process and the construction of the `secondary polytope' \cite{bib.LB-IG-BS.DMSP,bib.LB-BS.FP}.) Nonetheless, it is true that results of this nature regarding $H_\bullet\Delta$ will imply, as in the simple case, necessary numerical conditions on the $f$-vector of $\Delta$. Even if such results are only conjectural, the result will be numerical conjectures on $f\Delta$. This would be an advance, for we are at present without even any plausible conjectures for the conditions on $f\Delta$ in dimension greater than $3$. Perhaps the best way to explore this problem is to focus on $n=4$. This problem is hard enough to be instructive, and easy enough to be accessible. Provided the conjectural structure of $H_\bullet\Delta$ can be well understood in this case, it will be possible to extend the existing construction in the simple case \cite{bib.LB-CL.SMC}, so that it deals with this new situation. Such is probably an appropriate starting point for the study of the subtleties and complexities of local-global intersection homology in higher dimensions. The complications should precisely satisfy the requirements of the proof process. Now let $X$ be an irreducible projective algebraic variety. 
First, let $X$ be a Schubert variety, or something similar. Such varieties have been extensively studied. The uniform expression approach taken to the intersection homology of $\PDelta$ should apply without significant change to this new situation. In particular, the $\epsilon$-variation and study of volume method should still be valid. It may be that the concepts introduced in this paper are related to results and methods already known and used in this more complicated context. Now let $X$ be any irreducible projective algebraic variety. The homology will no longer be generated by the `facets', and so the volume approach will no longer give as much information as homology does. An elliptic curve is the simplest example of this. The volume approach is however very attractive, and it would be nice if for such a general $X$ there were an $\epsilon$-variation (of something) that would together with the volume analogue record all the information that homology does. This can be thought of as a problem for nonsingular varieties only. One would also wish for this theory to provide a `lifting' for each resolution $X_i\to X$, similar to the decomposition theorem lifting that exists for $\Delta_i$ and $\Delta$. Even without this, the decomposition theorem can be used to define the concept of a uniform expression, and so in the same way produce local-global homology groups. The general method of $\epsilon$-variation and volume can be applied in other situations, although with what success is not yet clear. Here is an example. Let $Q$ be a positive definite quadratic form on say $\bfR^n$. Inside $\bfR^n$ is the integer lattice $\bfZ^n$. The \emph{Voronoi polytope} $\Delta_Q$ of $Q$ consists of all points of $\bfR^n$ that are at least as close to the origin as they are to any other lattice point, where the distance is measured using $Q$. Now vary $Q$, say by adding a quadratic form $\epsilon$ that is close to zero. This will vary $\Delta_Q$ to $\Delta_Q(\epsilon)$.
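As a concrete two-dimensional illustration (not taken from the discussion above, but a standard fact about planar lattices): if $Q$ is the standard form on $\bfR^2$, then
\[
Q(x,y) = x^2 + y^2 \quad\Longrightarrow\quad \Delta_Q = [-\tfrac{1}{2},\tfrac{1}{2}]^2,
\]
a square. For a generic small perturbation, say $Q_\epsilon(x,y) = x^2 + 2\epsilon xy + y^2$ with $\epsilon$ small and non-zero, the cell $\Delta_Q(\epsilon)$ becomes a hexagon: measuring $\bfZ^2$ with $Q_\epsilon$ amounts to measuring a generic planar lattice with the standard form, and generic planar lattices have hexagonal Voronoi cells. Thus the $\epsilon$-variation can change even the combinatorial type of $\Delta_Q(\epsilon)$.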
In this situation it is proper to study not only the volume of $\Delta_Q(\epsilon)$, but also the area (or length or whatever) of its faces of the various dimensions. Provided the same programme can be carried out, as has been done for simple polytopes, subtle combinatorial inequalities on $\Delta_Q$, and thus $Q$, are likely to result. One might also wish to apply similar methods to, say, centrally symmetric polytopes, or to arrangements of hyperplanes in an affine space, or even to graphs and hypergraphs. The beauty of the $\epsilon$-variation and volume approach, when or if it works, is that it produces the definition of homology out of straightforward geometric concepts, and the requirements of consistency. It is as if it provides a nucleus or seed, out of which an ingenious and appropriate homology theory might emerge. But for such to be useful, the Betti numbers should be linear functions of the `flag vector'. We close this paper by saying a few words about proofs. The local-global theory is of combinatorial interest only when it produces homology groups whose dimension is a linear function of the flag vector. The prototype for the method of proof is in some sense McMullen's elementary and largely geometric proof \cite{bib.McM.SP} of strong Lefschetz for simple convex polytopes. What happens here is that there is a package of properties that is preserved as both the geometric realization of a combinatorial type (continuous change) and the combinatorial type itself (discrete change) are varied, as a result of passing from one simple convex polytope to another. This package includes a strengthened version of the strong Lefschetz theorem, namely the `Riemann-Hodge-Minkowski' inequalities. There is an induction built into the process. Now consider the problem of proving that the order zero (or the usual middle perversity intersection homology) part of the theory has the predicted Betti numbers.
This is true for polytopes with rational vertices, as a consequence of Deligne's proof of the Weil conjectures. To prove such a result for general polytopes, a package and induction similar to that used by McMullen seems to be required. The author believes that the local-global intersection homology groups will provide the at present unknown part of this package. Again, this is a problem that can usefully be investigated in the case $n=4$.
\section{Introduction and Preliminaries} Among the most well-established geometrical properties of norms are smoothness and strict convexity. A norm $\ensuremath{||\cdot||}$ on a Banach space $X$ is called \textit{G\^{a}teaux smooth}, or just \textit{G\^{a}teaux}, if, given any $x \in X\backslash\{0\}$, there exists a functional in $\dual{X}$, denoted by $\norm{x}^\prime$, such that \[ \lim_{\lambda \rightarrow 0} \frac{\norm{x + \lambda h} - \norm{x}}{\lambda} \;=\; \norm{x}^\prime(h) \] for all $h \in X$. In addition, if the limit above is uniform for $h$ in the unit sphere $\sph{X}$, then $\ensuremath{||\cdot||}$ is called \textit{Fr\'{e}chet smooth}, or simply \textit{Fr\'{e}chet}. Turning now to properties of strict convexity, we say that $\ensuremath{||\cdot||}$ is \textit{strictly convex} if, given $x,y \in X$ satisfying $\norm{x} = \frac{1}{2}\norm{x + y} = \norm{y}$, we have $x = y$. Of the many stronger cousins of strictly convex norms, we mention one. The norm $\ensuremath{||\cdot||}$ is \textit{locally uniformly rotund}, or \textit{LUR}, if, given a point $x \in \sph{X}$ and a sequence $(x_n) \subseteq \sph{X}$ satisfying $\norm{x + x_n} \rightarrow 2$, we have $\norm{x - x_n} \rightarrow 0$. Renorming theory is a branch of functional analysis that seeks to determine the extent to which a given Banach space can be endowed with equivalent norms sporting certain geometrical properties, such as the ones above. In this paper, a norm on a given Banach space is always assumed to be equivalent to the canonical norm. We refer the reader to \cite{dgz:93} for a comprehensive account of this field up to 1993, together with the more recent surveys \cite{godefroy:01} and \cite{zizler:03}. In recent years, trees have assumed an important role in the field, both as a source of counterexamples to existing questions and as a vehicle for exploring new avenues of research; see, for example \cite{haydon:90}, \cite{haydon:95} and \cite{haydon:99}. 
We say that a partially ordered set $(\Upsilon,\preccurlyeq)$ is a \textit{tree} if, given arbitrary $t \in \Upsilon$, the set of predecessors $\setcomp{s \in \Upsilon}{s \preccurlyeq t}$, denoted by the \textit{interval} $(0,t]$, is well-ordered. The set of \textit{immediate successors} of $t \in \Upsilon$ is denoted by $t^+$. In this way, trees are a natural generalisation of ordinal numbers. As well as $(0,t]$, we define the interval $(s,t] = (0,t]\backslash (0,s]$ for $s \preccurlyeq t$, the \textit{wedge} $[t,\infty) = \setcomp{u \in \Upsilon}{t \preccurlyeq u}$ and finally $(t,\infty) = [t,\infty)\backslash\{t\}$. We remark that the symbols $0$ and $\infty$ are, in this context, convenient notational devices and not themselves elements of $\Upsilon$. The scattered locally compact \textit{interval topology} on $\Upsilon$ is the coarsest topology for which all intervals $(0,t]$ are both open and closed. This topology agrees with the standard interval topology of any ordinal $\Omega$, if we consider $\Omega$ as a tree. To ensure that this topology is also Hausdorff, we restrict our attention to trees $\Upsilon$ with the property that every non-empty, linearly ordered set in $\Upsilon$ has at most one minimal upper bound. With this topology in mind, we consider the Banach space $\Czerok{\Upsilon}$ of continuous real-valued functions vanishing at infinity, and the dual space of measures. We remark that as $\Upsilon$ is scattered, the weak topology and the topology of pointwise convergence agree on norm-bounded subsets of $\Czerok{\Upsilon}$. Trees and linearly ordered sets enjoy close ties. For a comprehensive review of these relationships, we refer the reader to \cite{tod:84}. Given partial orders $P$ and $Q$, the map $\mapping{\rho}{P}{Q}$ is called \textit{increasing} (respectively \textit{strictly increasing}) if $\rho(s) \preccurlyeq \rho(t)$ (respectively $\rho(s) \prec \rho(t)$) whenever $s \prec t$.
\textit{Decreasing} and \textit{strictly decreasing} functions are defined analogously. If there exists a strictly increasing map from $P$ to a linear order $Q$, we say that $P$ is $Q$\textit{-embeddable}, or $P \preccurlyeq Q$. Evidently, in this context, $\preccurlyeq$ is a transitive relation on the class of partial orders. In much of what follows, $P$ will be a tree and $Q$ a linear order. It is well known that $\Upsilon \preccurlyeq \mathbb{Q}$ if and only if $\Upsilon$ is \textit{special}, which means that $\Upsilon$ can be written as a countable union of antichains (cf.\ \cite[Theorem 9.1]{tod:84}). Special trees tend to have very good properties; for example, the following result can be found in \cite{smith:05b}. \begin{thm} \label{speciallur} Given a tree $\Upsilon$, the space $\Czerok{\Upsilon}$ admits a norm with LUR dual norm if and only if $\Upsilon$ is special. \end{thm} We introduce a couple of combinatorial ideas used extensively in \cite{haydon:99}. \begin{defn} \label{badpoints} Given an increasing function $\mapping{\rho}{\Upsilon}{\mathbb{R}}$, we say that $t \in \Upsilon$ is a \textit{bad point for} $\rho$ if there exists a sequence of distinct points $(u_n) \subseteq t^+$, such that $\rho(u_n) \rightarrow \rho(t)$. \end{defn} Bad points are so named because their presence often indicates that the given $\Czerok{\Upsilon}$ space has negative renorming properties. An analogue of the next simple result appears at the beginning of Section \ref{examples}. \begin{prop}[(Haydon)] \label{ratbadpoints} The tree $\Upsilon$ is special if and only if $\Upsilon \preccurlyeq \mathbb{R}$ and there exists an increasing map $\mapping{\rho}{\Upsilon}{\mathbb{R}}$ that has no bad points. \end{prop} We move on to the second combinatorial property taken from \cite{haydon:99}. 
\begin{defn} \label{everbranching} A subset $E$ of a tree is said to be \textit{ever-branching} if each element of $E$ has a pair of strict successors in $E$ that are incomparable in the tree order. \end{defn} It is easy to see that within every ever-branching subset can be found a \textit{dyadic tree of height} $\omega$; that is, a tree with a single minimal element, no limit elements, and with the property that each element has exactly two immediate successors. Many types of norm on $\Czerok{\Upsilon}$ can be characterised in terms of increasing real-valued functions on $\Upsilon$, with further combinatorial properties that can be expressed in terms of bad points and ever-branching subsets. Of particular interest to us is the following result. \begin{thm}[(Haydon \cite{haydon:99})] \label{f} Given a tree $\Upsilon$, the space $\Czerok{\Upsilon}$ admits a Fr\'{e}chet norm if and only if there exists an increasing function $\mapping{\rho}{\Upsilon}{\mathbb{R}}$ that has no bad points and is not constant on any ever-branching subset. \end{thm} In order to exhibit a tree that does not satisfy the statement of Theorem \ref{f}, we introduce a fundamental construction, due to Kurepa. Given a linear order $\Sigma$, we define the Hausdorff tree \[ \sigma \Sigma \;=\; \setcomp{A \subseteq \Sigma}{A \mbox{ is well-ordered}}. \] We remark that some authors demand the additional requirement that elements of $\sigma \Sigma$ are bounded above. One of the reasons why Kurepa's construction is so important in the theory of trees is summed up by the following theorem. \begin{thm}[(Kurepa \cite{kurepa:56})] \label{sigmanoembed} If $\Sigma$ is a linear order then $\sigma \Sigma \not \preccurlyeq \Sigma$. \end{thm} From Theorem \ref{sigmanoembed}, $\sigma \mathbb{Q}$ is not special. On the other hand, if we take an enumeration $(q_n)$ of the rationals and consider the map $A \mapsto \sum_{q_n \in A} 2^{-n}$, we see that $\sigma \mathbb{Q} \preccurlyeq \mathbb{R}$. 
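To see that this map is strictly increasing, recall that in Kurepa's construction the tree order on $\sigma \mathbb{Q}$ is by initial segments; so if $A \prec B$ then $A$ is a proper initial segment of $B$, and
\[
\sum_{q_n \in B} 2^{-n} \;-\; \sum_{q_n \in A} 2^{-n} \;=\; \sum_{q_n \in B \backslash A} 2^{-n} \;>\; 0,
\]
since $B \backslash A$ is non-empty and every term is positive.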
It follows that, by Proposition \ref{ratbadpoints}, every increasing, real-valued function defined on $\sigma \mathbb{Q}$ has a bad point. \begin{cor}[(Haydon)] \label{haydonfthm} The space $\Czerok{\sigma \mathbb{Q}}$ admits no Fr\'{e}chet norm. \end{cor} While many types of norm are accounted for in \cite{haydon:99}, equivalent conditions for the existence of norms on $\Czerok{\Upsilon}$ with strictly convex dual, or G\^{a}teaux norms, cannot be adequately expressed in terms of increasing real-valued functions. In all that follows, $\ensuremath{\omega_1}$ denotes the first uncountable ordinal. The following linearly ordered set is introduced in \cite{smith:05b}. \begin{defn} \label{ordery} Let $Y$ be the set of all strictly increasing, continuous, transfinite sequences $x = (x_\xi)_{\xi \leq \beta}$ of real numbers, where $0 \leq \beta < \ensuremath{\omega_1}$. Order $Y$ by declaring that $x < y$ if and only if either $y$ strictly extends $x$, or if there is some ordinal $\alpha$ such that $x_\xi = y_\xi$ for $\xi < \alpha$ and $y_\alpha < x_\alpha$. \end{defn} Observe that $Y$ is not ordered in the usual lexicographic way. Compared to the real line, $Y$ is large. \begin{prop}[(Smith {\cite{smith:05b}})] \label{ybetaembed} If $\beta < \ensuremath{\omega_1}$ then $Y^\beta \preccurlyeq Y$, where $Y^\beta$ is ordered lexicographically. \end{prop} As $\mathbb{R} \preccurlyeq Y$, we see that $\mathbb{R}^\beta \preccurlyeq Y$ for all $\beta < \ensuremath{\omega_1}$. On the other hand, it can be shown that $Y$ contains no uncountable well-ordered or conversely well-ordered subsets. The next theorem is the main result of \cite{smith:05b}. \begin{thm}[(Smith {\cite{smith:05b}})] \label{dualrotundthm} Given a tree $\Upsilon$, the Banach space $\Czerok{\Upsilon}$ admits a norm with strictly convex dual norm if and only if $\Upsilon \preccurlyeq Y$. \end{thm} Theorem \ref{dualrotundthm} is a direct analogue of Theorem \ref{speciallur}.
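To illustrate the order of Definition \ref{ordery} with short sequences (an illustration, not taken from \cite{smith:05b}): consider $x = (0,1)$, $y = (0,1,2)$ and $z = (0,\tfrac12)$. Then
\[
x < y \quad\text{(since $y$ strictly extends $x$)}, \qquad
x < z \quad\text{and}\quad y < z \quad\text{(since $z_1 = \tfrac12 < 1 = x_1 = y_1$, with agreement at $\xi = 0$)}.
\]
Note the reversal at the first point of difference: a smaller entry there gives a larger element of $Y$.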
In \cite{smith:05b}, it is shown that the spaces $\Czerok{\sigma (\mathbb{R}^\beta)}$, where $\mathbb{R}^\beta$ is ordered lexicographically, admit norms with strictly convex duals provided $\beta < \ensuremath{\omega_1}$. On the other hand, by Theorem \ref{sigmanoembed}, $\Czerok{\sigma Y}$ does not admit such a norm. The order $Y$ can also be used to give an improved sufficient condition for the existence of G\^{a}teaux norms in the context of trees. \begin{thm}[(Smith {\cite{smith:05c}})] \label{gsuff} If there exists an increasing function $\mapping{\rho}{\Upsilon}{Y}$ that is not constant on any ever-branching subset then $\Czerok{\Upsilon}$ admits a G\^{a}teaux norm. \end{thm} We end our review of the existing literature by presenting what was hitherto the best known necessary condition for G\^{a}teaux norms in this context. Given a tree $\Upsilon$, the \textit{forcing topology} on $\Upsilon$ takes as its basis the set of all wedges $[t,\infty)$, $t \in \Upsilon$. A subset $B \subseteq \Upsilon$ is called \textit{Baire} if it is a Baire space with respect to the induced forcing topology; that is, any countable intersection of relatively dense, open subsets of $B$ is again dense. When referring to the Baire property, we will only consider subsets that are \textit{perfect} with respect to the forcing topology; in other words those without isolated points or, equivalently, maximal elements. Arguably the simplest example of such an object is the ordinal $\ensuremath{\omega_1}$, though more interesting ones that have no uncountable linearly ordered subsets can be found in \cite[Lemma 9.12]{tod:84} (cf.\ \cite{haydon:95}). Theorems \ref{f} and \ref{gsuff} applied to a constant function on $\ensuremath{\omega_1}$ demonstrate that, by itself, the Baire property cannot destroy G\^{a}teaux renormability. Instead, we have the following result. 
\begin{thm}[(Haydon {\cite{haydon:95}})] \label{haydongnec} If $\Czerok{\Upsilon}$ admits a G\^{a}teaux norm then $\Upsilon$ contains no ever-branching Baire subsets. \end{thm} We turn now to the results of this paper. In order to properly express our necessary condition for G\^{a}teaux renormability, we must introduce a second linearly ordered set. \begin{defn} \label{orderz} Let $Z$ be the set of all increasing, continuous sequences $x = (x_\xi)_{\xi \leq \beta}$ of real numbers, where $0 \leq \beta < \ensuremath{\omega_1}$, and such that $x$ is strictly increasing on $[0,\beta)$. The order of $Z$ follows that of $Y$; $x < y$ if and only if either $y$ strictly extends $x$, or if there is some ordinal $\alpha$ such that $x_\xi = y_\xi$ for $\xi < \alpha$ and $y_\alpha < x_\alpha$. \end{defn} The elements of $Z$ that are not in $Y$ are exactly those of the form $x = (x_\xi)_{\xi \leq \beta+1}$, where $(x_\xi)_{\xi \leq \beta} \in Y$ and $x_\beta = x_{\beta+1}$. This order is a partial Dedekind completion of $Y$. We also need a natural definition of bad points with respect to $Z$. \begin{defn} \label{zbadpoint} Given an increasing function $\mapping{\rho}{\Upsilon}{Z}$, we say that $t \in \Upsilon$ is \textit{$Z$-bad} for $\rho$ if there exists a sequence of distinct points $(u_n) \subseteq t^+$ such that $\rho(u_n) \rightarrow \rho(t)$ in the order topology of $Z$. \end{defn} Using $Z$-bad points, we obtain a direct analogy to the necessity part of Theorem \ref{f}; the following is the main result of this paper. \begin{thm} \label{newgnec} If the space $\Czerok{\Upsilon}$ admits a G\^{a}teaux norm, then there exists an increasing function $\mapping{\rho}{\Upsilon}{Z}$ that has no $Z$-bad points and is not constant on any ever-branching subset. \end{thm} In some sense, $Y$ is to $\mathbb{Q}$ what $Z$ is to $\mathbb{R}$, and these relationships correspond well to those of Theorems \ref{dualrotundthm}, \ref{speciallur}, \ref{newgnec} and \ref{f} respectively. 
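For a concrete illustration of this completion (an example, not taken from the text): the sequence $(0,1,1)$ lies in $Z\backslash Y$, and in the order of $Z$ it sits above every member of $Y$ that strictly extends $(0,1)$:
\[
(0,1) \;<\; (0,1,2) \;<\; (0,1,\tfrac32) \;<\; \cdots \;<\; (0,1,1+\tfrac1k) \;<\; \cdots \;<\; (0,1,1),
\]
since at $\xi = 2$ a smaller entry gives a larger element, and $(0,1,1)$ has the smallest possible final entry among extensions of $(0,1)$.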
The following corollary of Theorem \ref{newgnec} generalises a result from \cite{fhz:97}, which states that $\Czerok{[0,\ensuremath{\omega_1})}$ does not admit any G\^{a}teaux lattice norm. \begin{cor} \label{latticeggivesr} If $\Czerok{\Upsilon}$ admits a G\^{a}teaux lattice norm then $\Upsilon \preccurlyeq Y$ and, consequently, $\Czerok{\Upsilon}$ admits a lattice norm with strictly convex dual. \end{cor} We end Section \ref{necessitycondition} by proving the next proposition, which shows that Theorem \ref{haydongnec} is a corollary of Theorem \ref{newgnec}. \begin{prop} \label{haydoncor} If $\mapping{\rho}{\Upsilon}{Z}$ is an increasing function that is not constant on any ever-branching subset, then $\Upsilon$ does not admit any ever-branching Baire subsets. \end{prop} The final section, devoted to examples, begins with a proof that Theorem \ref{haydongnec} is strictly implied by Theorem \ref{newgnec}. \begin{prop} \label{sigmaYnoG} The tree $\sigma Y$ is $Z$-embeddable, but every increasing function \mapping{\rho}{\Upsilon}{Z} has a $Z$-bad point. In particular, $\Czerok{\sigma Y}$ does not admit a G\^{a}teaux norm. \end{prop} Proposition \ref{sigmaYnoG} is analogous to Corollary \ref{haydonfthm}. Section \ref{examples} ends with Example \ref{Ggap}, which shows that there is a gap between the conditions of Theorems \ref{gsuff} and \ref{newgnec}. This, together with the analogies presented above and the author's bias, prompts the following problem. \begin{prob} \label{gateauxconj} If there exists an increasing function $\mapping{\rho}{\Upsilon}{Z}$ that has no $Z$-bad points and is not constant on any ever-branching subset, does $\Czerok{\Upsilon}$ admit a G\^{a}teaux norm? \end{prob} Recently, the author gave a purely topological formulation of Theorem \ref{dualrotundthm}. 
Given a tree $\Upsilon$, the space $\Czerok{\Upsilon}$ admits a norm with strictly convex dual norm if and only if $\Upsilon$ is a so-called \textit{Gruenhage space}, with respect to its interval topology \cite{smith:07}. \begin{prob} \label{internal} Is there an internal characterisation of trees $\Upsilon$ with the property that $\Czerok{\Upsilon}$ admits a G\^{a}teaux norm? \end{prob} Problem \ref{internal} may be restated in terms of Fr\'{e}chet norms, Kadec norms and others. This section closes with a further problem, motivated by Corollary \ref{latticeggivesr}. \begin{prob} \label{latticegdualrgeneral} If $L$ is locally compact and $\Czerok{L}$ admits a G\^{a}teaux lattice norm, does $\Czerok{L}$ admit a norm with strictly convex dual? Is this statement also true with respect to a general Banach lattice? \end{prob} \section{Necessity conditions for G\^{a}teaux renormability} \label{necessitycondition} To help familiarise the reader with $Z$ and $Z$-bad points, we begin by briefly describing some forms of sequential convergence in $Z$. First observe that if $x \in Y$, $y \in Z$ and $y > x$ is sufficiently close to $x$ in the order topology of $Z$, then $y$ must be a strict extension of $x$. On the other hand, if $x \in Z\backslash Y$ then $x$ has no strict extensions in $Z$. The proof of the next lemma is a simple exercise in elementary analysis and is omitted. \begin{lem} \label{convergenceinZ} Let $x \in Z$ and suppose $(z^n) \subseteq Z$ is a sequence satisfying $x < z^n$. We have the following rules for the convergence of $(z^n)$ to $x$: \begin{enumerate} \item[1.] if $x = (x_\xi)_{\xi \leq \beta} \in Y$ then $z^n \rightarrow x$ if and only if $z^n$ strictly extends $x$ for large enough $n$, and $z^n_{\beta + 1} \rightarrow \infty$.
\end{enumerate} If $x = (x_\xi)_{\xi \leq \beta+1} \in Z\backslash Y$ then since $x$ has no strict extensions, there exists $\alpha_n \leq \beta$ such that $z^n_\xi = x_\xi$ for $\xi < \alpha_n$ and $z^n_{\alpha_n} < x_{\alpha_n}$. In this case, we have: \begin{enumerate} \item[2.] if $\beta = 0$ or $\beta = \alpha + 1$ for some $\alpha$, then $z^n \rightarrow x$ if and only if $\alpha_n = \beta$ for large enough $n$, and $z^n_\beta \rightarrow x_\beta$; \item[3.] if $\beta$ is a limit ordinal, then $z^n \rightarrow x$ if and only if $\alpha_n \rightarrow \beta$. \end{enumerate} \end{lem} We present a simple application of Lemma \ref{convergenceinZ}. If $\mapping{\pi}{\Upsilon}{Y}$ is a strictly increasing map then it could have $Z$-bad points. However, if we fix an order isomorphism $\mapping{\theta}{\mathbb{R}}{(0,1)}$ and define, for $x = (x_\xi)_{\xi \leq \beta} \in Y$, $\Theta(x)_\xi = \theta(x_\xi)$ whenever $\xi \leq \beta$, then by Lemma \ref{convergenceinZ} part (1), the strictly increasing $Y$-valued map $\Theta \circ \pi$ has no $Z$-bad points. Thus, some $Z$-bad points are easily removed by making simple adjustments. More details of how $Z$ operates can be found in Section \ref{examples}. Now, for the rest of this section, we fix a norm $\ensuremath{||\cdot||}$ on $\Czerok{\Upsilon}$. We continue by introducing a concept that features in both \cite{haydon:95} and \cite{haydon:99}. Given $t \in \Upsilon$, let $C_t$ be the set of all $f \in \Czerok{\Upsilon}$ such that $f$ vanishes outside $(0,t]$ and is increasing on $(0,t]$.
\begin{defn} \label{mu-function} If $f \in C_t$ and $\delta \geq 0$, the increasing function $\mu(f,\delta,\cdot)$ is defined on the wedge $[t,\infty)$ by \[ \mu(f,\delta,u) \;=\; \inf \setcomp{\norm{f + (f(t) + \delta) \ind{(t,u]} + \varphi}}{\varphi \in \Czerok{\Upsilon} \mbox{ and } \supp \varphi \subseteq (u,\infty)} \] where $\ind{A}$ denotes the indicator function of the set $A$ and $\supp \varphi$ is the support of $\varphi$. We also define the abbreviation $\mu(f,\cdot)$ by $\mu(f,u) = \mu(f,0,u)$ and the associated function $\mu$, given by $\mu(t) = \inf \setcomp{\norm{\ind{(0,t]} + \varphi}}{\varphi \in \Czerok{\Upsilon} \mbox{ and } \supp \varphi \subseteq (t,\infty)}$. \end{defn} Attainment of the infimum in the definition of these so-called $\mu$-functions has important consequences for the renormability of $\Czerok{\Upsilon}$; it is here that bad points and ever-branching subsets come into play. The first part of the following lemma is trivial to prove, and the second and third are immediate generalisations of \cite[Lemma 3.1]{haydon:99} and \cite[Proposition 3.4]{haydon:99} respectively. \begin{lem}[(Haydon \cite{haydon:99})] \label{infattaining} Suppose $t \in \Upsilon$, $f \in C_t$ and $\delta \geq 0$. Then: \begin{enumerate} \item if $\ensuremath{||\cdot||}$ is a lattice norm then $\norm{f + (f(t) + \delta) \ind{(t,u]}} = \mu(f,\delta,u)$ for all $u \succcurlyeq t$; \item if $u \succcurlyeq t$ is a bad point for $\mu(f,\delta,\cdot)$ then $\norm{f + (f(t) + \delta) \ind{(t,u]}} = \mu(f,\delta,u)$; \item if $\mu(f,\delta,\cdot)$ is constant on some ever-branching subset $E \subseteq (u, \infty)$, where $u \succcurlyeq t$, then there exists $\varphi \in \Czerok{\Upsilon}$ with \[ \supp \varphi \;\subseteq\; \setcomp{v \in (u,\infty)}{v \preccurlyeq w \mbox{ for some } w \in E} \] and $\mu(f,\delta,u) = \norm{f + (f(t) + \delta)(\ind{(t,u]} + \varphi)}$. \end{enumerate} \end{lem} We continue with an idea from \cite{smith:05b}.
\begin{defn} \label{plateau} A subset $V \subseteq \Upsilon$ is called a \textit{plateau} if $V$ has a least element $0_V$ and $V = \bigcup_{t \in V}[0_V,t]$. A partition $\mathscr{P}$ of $\Upsilon$ consisting solely of plateaux is called a \textit{plateau partition}. \end{defn} Observe that if $V$ is a plateau then $V\backslash\{0_V\}$ is open. It follows that if we have a plateau partition $\mathscr{P}$ and define the \textit{set of least elements} $H = \setcomp{0_V}{V \in \mathscr{P}}$, then $H$ is closed in $\Upsilon$. Of course, $H$ may be regarded as a tree in its own right, with its own interval topology. Plateaux are stable under taking arbitrary intersections. \begin{prop}[(Smith {\cite[Proposition 10]{smith:05b}})] \label{plateauintersection} Let $\Upsilon$ be a tree and $\mathfrak{F}$ a family of plateaux of $\Upsilon$ with non-empty intersection $W$. Then $W$ is a plateau and $0_W = \sup_{V \in \mathfrak{F}} 0_V$. \end{prop} The connection between increasing functions and plateaux is given by the next proposition. \begin{prop}[(Smith {\cite[Proposition 9]{smith:05b}})] \label{plateaupartition} Let $\mapping{\rho}{\Upsilon}{\Sigma}$ be an increasing function into a linear order $\Sigma$. Then the equivalence relation $\sim$, given by $s \sim t$ if and only if there exists $r \preccurlyeq s,t$ such that $\rho(s) = \rho(r) = \rho(t)$, defines the plateau partition of $\Upsilon$, with respect to $\rho$. Moreover, the restriction of $\rho$ to the set of least elements $H = \setcomp{0_V}{V \in \mathscr{P}}$ is strictly increasing. \end{prop} Proposition \ref{plateaupartition} applies equally well to decreasing functions. As the $\mu$-functions from Definition \ref{mu-function} are increasing on their respective domains, they may be analysed using plateaux. Elements of the following technical lemma appear implicitly in the proof of \cite[Theorem 8.1]{haydon:99}. 
\begin{lem} \label{lambda} Let $\ensuremath{||\cdot||}$ be G\^{a}teaux smooth and suppose that $\varepsilon\pnormdot{\infty} \leq \ensuremath{||\cdot||} \leq \pnormdot{\infty}$ for some $\varepsilon \in (0,1)$. Moreover, suppose $V$ is a plateau, $f \in C_{0_V}$ and $\mu(f,\cdot)$ is constant on $V$. We define a function $\lambda$ on $V\backslash\{0_V\}$ by setting \[ \lambda(t) \;=\; \sup \setcomp{\delta \geq 0}{\mu(f,\delta,t) \leq \mu(f,0_V) + \textstyle{\frac{1}{2}}\varepsilon\delta}. \] Then $\lambda$ is well-defined and satisfies the following properties: \begin{enumerate} \item $\lambda$ is decreasing on $V\backslash\{0_V\}$; \item if $\lambda$ takes constant value $\nu$ on the plateau $W \subseteq V\backslash\{0_V\}$ then $\mu(f,\nu,\cdot)$ takes constant value $\mu(f,0_V) + \frac{1}{2}\varepsilon\nu$ on $W$; \item if $\mathscr{P}$ is the plateau partition of $V\backslash\{0_V\}$ with respect to $\lambda$, supplied by Proposition \ref{plateaupartition}, $W \in \mathscr{P}$, and $f_W \in C_{0_W}$ is defined by \[ f_W \;=\; f + (f(0_V) + \lambda(0_W))\ind{(0_V,0_W]} \] then $\mu(f_W,\cdot)$ takes constant value $\mu(f,0_V) + \frac{1}{2}\varepsilon \lambda(0_W)$ on $W$; \item if the infimum in the definition of $\mu(f,t)$ is attained then $\lambda(t) > 0$. \end{enumerate} \end{lem} \begin{proof} Fix $t \in V\backslash\{0_V\}$ and, for $\delta \geq 0$, define $F(\delta) = \mu(f,\delta,t) - \mu(f,0_V) - \frac{1}{2}\varepsilon\delta$. Observe that $F$ is continuous and $F(0) = 0$. Moreover, if $\supp \varphi$ is a subset of $(t,\infty)$, we estimate that $\norm{f + (f(0_V) + \delta)\ind{(0_V,t]} + \varphi} \geq \varepsilon\delta - \norm{f + f(0_V)\ind{(0_V,t]}}$, whence $F(\delta)$ tends to $\infty$ as $\delta$ does. As a result, $\lambda(t)$ is well-defined. Now we can check the properties of $\lambda$. By the continuity of $F$, we see that $\mu(f,\lambda(t),t) = \mu(f,0_V) + \frac{1}{2}\varepsilon\lambda(t)$ for any $t \in V\backslash\{0_V\}$.
Therefore, if $t \preccurlyeq u$ then, as $\mu(f,\lambda(u),\cdot)$ is increasing, we have \[ \mu(f,\lambda(u),t) \;\leq\; \mu(f,\lambda(u),u) \;=\; \mu(f,0_V) + \textstyle{\frac{1}{2}}\varepsilon\lambda(u) \] which shows that $\lambda(t) \geq \lambda(u)$, giving us property (1). The second property follows immediately and the third follows from the second. To prove property (4), we let $g = f + f(0_V)\ind{(0_V,t]} + \varphi$ with $\supp \varphi \subseteq (t,\infty)$, such that $\norm{g} = \mu(f,t) = \mu(f,0_V)$. Observe that as the infimum $\mu(f,0_V)$ is attained, we have \[ \norm{g}^\prime(\ind{(0_V,t]}) \;=\; \lim_{\delta \rightarrow 0_+} \frac{\norm{g + \delta \ind{(0_V,t]}} - \norm{g}}{\delta} \;\geq\; 0 \] and similarly for $-\ind{(0_V,t]}$, whence $\norm{g}^\prime{(\ind{(0_V,t]})} = 0$. Now it is evident that there exists $\delta > 0$ satisfying \[ \mu(f,\delta,t) \;\leq\; \norm{g + \delta\ind{(0_V,t]}} \;\leq\; \norm{g} + \textstyle{\frac{1}{2}}\varepsilon\delta \;=\; \mu(f,0_V) + \textstyle{\frac{1}{2}}\varepsilon\delta \] which means that $\lambda(t) \geq \delta > 0$. \end{proof} While noting property (4) above, we stress that sometimes $\lambda$ does vanish, and it is necessary to analyse what happens in this case. \begin{lem} \label{lambdavanish} Suppose $V$, $f$, $\mu(f,\cdot)$, $\lambda$ and the partition $\mathscr{P}$ are as in Lemma \ref{lambda}. If $\lambda(t) = 0$ for some $t \in W \in \mathscr{P}$, then: \begin{enumerate} \item $W = [0_W,\infty) \cap V$; \item $W$ is finitely-branching, in other words, $u^+ \cap W$ is finite whenever $u \in W$; \item $W$ contains no ever-branching subsets. \end{enumerate} \end{lem} \begin{proof} The first property follows because $\lambda \geq 0$ and is decreasing. To prove property (2), we suppose that $u \in V$ is such that $u^+ \cap V$ is infinite. Then $u$ is a bad point for $\mu(f,\cdot)$ as $\mu(f,v) = \mu(f,u)$ for infinitely many $v \in u^+$.
Consequently, the infimum in the definition of $\mu(f,u)$ is attained by part (2) of Lemma \ref{infattaining}, and it follows from Lemma \ref{lambda} part (4) that $\lambda(u) > 0$. As a result, $u \notin W$. For property (3), it is enough to show that if $u \in V$ and $E$ is an ever-branching subset of $[u,\infty) \cap V$, then $\lambda(u) > 0$. Indeed, given such $u$ and $E$, by part (3) of Lemma \ref{infattaining}, the infimum in the definition of $\mu(f,u)$ is attained. Therefore, by part (4) of Lemma \ref{lambda}, $\lambda(u) > 0$. \end{proof} The proof of Theorem \ref{newgnec} is similar to that of Theorem \ref{dualrotundthm}, in that it employs monotone real-valued functions to recursively define a refining sequence of plateau partitions of the given tree. This sequence is used to define a $Z$-valued function or, in the case of Theorem \ref{dualrotundthm} or Corollary \ref{latticeggivesr}, a $Y$-valued function. We will see that we must make use of the elements in $Z\backslash Y$ precisely when our $\lambda$-functions from Lemma \ref{lambda} vanish. \begin{proof}[of Theorem \ref{newgnec}] Let $\ensuremath{||\cdot||}$ be G\^{a}teaux smooth and suppose that $\varepsilon\pnormdot{\infty} \leq \ensuremath{||\cdot||} \leq \pnormdot{\infty}$ for some $\varepsilon \in (0,1)$. We assemble, for each $\beta < \ensuremath{\omega_1}$, a plateau partition $\mathscr{P}_{\beta}$, and for each $V \in \mathscr{P}_\beta$, a function $f_{(\beta,V)} \in C_{0_V}$ such that: \begin{enumerate} \item $\mu(f_{(\beta,V)},\cdot)$ takes constant value $\mu(f_{(\beta,V)},0_V)$ on $V$; \item $\mu(f_{(\beta,V)},0_V) - 1 \;\leq\; \frac{1}{2}\varepsilon(\pnorm{f_{(\beta,V)}}{\infty} - 1)$. \end{enumerate} Following this, we define a function $\mapping{\pi}{\Upsilon}{Z}$ and prove that it possesses a number of properties. Our final function $\rho$ will be a modification of $\pi$. We begin by constructing $\mathscr{P}_0$.
Recall the increasing function $\mu$ from Definition \ref{mu-function}. Let $\mathscr{P}_0$ be its plateau partition, courtesy of Proposition \ref{plateaupartition}, and define $f_{(0,V)} = \ind{(0,0_V]}$ for $V \in \mathscr{P}_0$. It follows that $\mu(f_{(0,V)},\cdot)$ takes constant value $\mu(f_{(0,V)},0_V) = \mu(0_V)$ on $V$, and that \[ \mu(f_{(0,V)},0_V) - 1 \;\leq\; \norm{\ind{(0,0_V]}} - 1 \;\leq\; 0 \;=\; \textstyle{\frac{1}{2}}\varepsilon(\pnorm{f_{(0,V)}}{\infty} - 1). \] Now suppose $\mathscr{P}_\beta$ and the associated $f_{(\beta,V)}$ have been built. Let $V \in \mathscr{P}_\beta$. If $V = \{0_V\}$ then set $\mathscr{P}_V = \{V\}$ and $f_{(\beta+1,V)} = f_{(\beta,V)}$. Otherwise, Lemma \ref{lambda}, together with Proposition \ref{plateaupartition}, furnishes us with the plateau partition of $V\backslash\{0_V\}$ associated with the $\lambda$-function. We augment this with the single element $\{0_V\}$ to give a plateau partition $\mathscr{P}_V$ of $V$. Set $\mathscr{P}_{\beta+1} = \bigcup\setcomp{\mathscr{P}_V}{V \in \mathscr{P}_\beta}$. If $W \in \mathscr{P}_V$ then either $W = \{0_V\}$ or $W \subseteq V\backslash\{0_V\}$. In the former case let $f_{(\beta+1,W)} = f_{(\beta,V)}$; it is easy to see that $f_{(\beta+1,W)}$ satisfies conditions (1) and (2) above. In the latter case, let $f_{(\beta+1,W)} = f_W$, where $f_W$ is as in Lemma \ref{lambda} part (3). We observe condition (1) is satisfied, again by Lemma \ref{lambda} part (3). To see that condition (2) holds, note that \[ \mu(f_{(\beta+1,W)},0_W) - \mu(f_{(\beta,V)},0_V) \;=\; \textstyle{\frac{1}{2}}\varepsilon\lambda(0_W) \;=\; \textstyle{\frac{1}{2}}\varepsilon(\pnorm{f_{(\beta+1,W)}}{\infty} - \pnorm{f_{(\beta,V)}}{\infty}) \] and apply the inductive hypothesis. We move on to the limit case. Suppose that $\beta < \ensuremath{\omega_1}$ is a limit ordinal and that all has been constructed for $\alpha < \beta$. 
Given $t \in \Upsilon$, we let $V_\alpha^t \in \mathscr{P}_\alpha$ be such that $t \in V_\alpha^t$. Set $\mathscr{P}_\beta = \setcomp{\bigcap_{\alpha < \beta} V_\alpha^t}{t \in \Upsilon}$. Fix some $V \in \mathscr{P}_\beta$. Let $t = 0_V$, $V_\alpha = V_\alpha^t$, $t_\alpha = 0_{V_\alpha}$ and $f_\alpha = f_{(\alpha,V_\alpha)}$. Then $t = \sup_{\alpha < \beta} t_\alpha$ by Proposition \ref{plateauintersection}. What we would like to do is define $f_{(\beta,V)} = f \in \Czerok{\Upsilon}$ to be the unique function supported on $(0,t]$, such that its restriction to $(0,t_\alpha]$ is $f_\alpha$. This can indeed be done, provided that $(\pnorm{f_\alpha}{\infty})_{\alpha < \beta}$ is bounded. Observe that if $g \in C_u$ satisfies condition (2) above then \[ \varepsilon \pnorm{g}{\infty}-1 \;\leq\; \mu(g,u)-1 \;\leq\; \textstyle{\frac{1}{2}}\varepsilon(\pnorm{g}{\infty}-1) \] giving $\pnorm{g}{\infty} \leq \frac{2}{\varepsilon} - 1$. Therefore $(\pnorm{f_\alpha}{\infty})_{\alpha < \beta}$ is bounded as required. Moreover, since each $f_\alpha \in C_{t_\alpha}$, we have $f \in C_t$. Now set $g_\alpha = f_\alpha + f_\alpha(t_\alpha)\ind{(t_\alpha,t]}$. Of course, as $f_\alpha$ is increasing on $(0,t_\alpha]$ and vanishes elsewhere, we have $\pnorm{g_\alpha}{\infty} = \pnorm{f_\alpha}{\infty}$. Moreover, as $\mu(f_\alpha,\cdot)$ takes constant value $\mu(f_\alpha,t_\alpha)$ on $V_\alpha$ by inductive hypothesis, and $\mu(g_\alpha,u) = \mu(f_\alpha,u)$ whenever $u \in V \subseteq V_\alpha$, it follows that $\mu(g_\alpha,\cdot)$ takes constant value $\mu(f_\alpha,t_\alpha)$ on $V$. The reader can verify that, as $(g_\alpha)_{\alpha < \beta}$ converges in norm to $f$, $(\mu(g_\alpha,\cdot))_{\alpha < \beta}$ converges uniformly to $\mu(f,\cdot)$ (cf.\ \cite[Lemma 3.6]{haydon:99}). As a result, $f$ satisfies conditions (1) and (2) above. This ends the recursion. Now we define $\pi$. Given $t \in \Upsilon$, let $V^t_\beta$ be as above. 
In addition, we let $\lambda^t_\beta$ be the $\lambda$-function associated with $V^t_\beta$ and $f_{(\beta,V^t_\beta)}$, provided $V^t_\beta$ is not a singleton. Set $\pi(t)_0 = -\mu(t)$. If $\beta > 0$, let $\pi(t)_\beta = \mu(f_{(\beta,V^t_\beta)},t)$ as long as $0_{V^t_\alpha} \prec t$ for all $\alpha < \beta$ and $\lambda^t_\alpha(t) > 0$ whenever $\alpha+1 < \beta$. Otherwise, we leave $\pi(t)_\beta$ undefined. We verify that $\pi(t)$ is an element of $Z$. Observe that if $\pi(t)_\beta$ is defined, then so is $\pi(t)_\alpha$ whenever $\alpha < \beta$. If $0 < \alpha < \beta$ then $\pi(t)_0 < 0 < \pi(t)_\alpha$ and moreover \begin{eqnarray*} \pi(t)_{\alpha+1} &=& \mu(f_{(\alpha+1,V^t_{\alpha+1})},t) \\ &=& \mu(f_{(\alpha,V^t_\alpha)},t) + \textstyle{\frac{1}{2}}\varepsilon\lambda^t_\alpha (0_{V^t_{\alpha+1}}) \\ &=& \pi(t)_\alpha + \textstyle{\frac{1}{2}}\varepsilon \lambda^t_\alpha(t) \end{eqnarray*} whence $\pi(t)_{\alpha+1} \geq \pi(t)_\alpha$. In addition, if $\alpha + 1 < \beta$ then $\pi(t)_{\alpha+1} > \pi(t)_\alpha$ by our definition of $\pi$. Now, if $\beta$ is a limit ordinal and $\pi(t)_\alpha$ is defined for all $\alpha < \beta$, so is $\pi(t)_\beta$. Moreover, by applying the uniform convergence of the $\mu$-functions at limit stages of the partition construction, we see that $\pi(t)_\beta = \mu(f_{(\beta,V^t_\beta)},t) = \lim_{\alpha < \beta} \mu(f_{(\alpha,V^t_\alpha)},t) = \lim_{\alpha < \beta} \pi(t)_\alpha$. This is enough to prove that $\pi(t) \in Z$. We observe our first property of $\pi$, namely that it is increasing. Let $s,t \in \Upsilon$ with $s \prec t$. We set $\gamma$ to be the least ordinal such that $\pi(s)_\gamma$ and $\pi(t)_\gamma$ are not both defined and equal. If $\gamma = 0$ then, as $\mu$ is increasing, it follows that $\pi(s)_0 > \pi(t)_0$, whence $\pi(s) < \pi(t)$. If $\gamma > 0$ then, by continuity, $\gamma = \beta + 1$ for some $\beta$. By transfinite induction, $V^s_\alpha = V^t_\alpha$ for all $\alpha \leq \beta$.
Indeed, $\mu(s) = -\pi(s)_0 = -\pi(t)_0 = \mu(t)$, so $V^s_0 = V^t_0$. If $V^s_\alpha = U = V^t_\alpha$ and $\alpha < \beta$, set $\lambda^s_\alpha = \lambda = \lambda^t_\alpha$. Remembering property (2) of Lemma \ref{lambda}, we have \begin{equation} \label{lambdaeqn} \textstyle{\frac{1}{2}}\varepsilon\lambda(s) \;=\; \pi(s)_{\alpha+1} - \pi(s)_\alpha \;=\; \pi(t)_{\alpha+1} - \pi(t)_\alpha \;=\; \textstyle{\frac{1}{2}}\varepsilon\lambda(t) \end{equation} whence $\lambda(s) = \lambda(t)$ and $V^s_{\alpha+1} = V^t_{\alpha+1}$. Limit stages of the induction follow by taking intersections. Now let $V^s_\beta = V = V^t_\beta$, $\lambda^s_\beta = \lambda =\lambda^t_\beta$ and observe that $0_V \preccurlyeq s \prec t$. There are two cases to consider: either $\pi(t)_{\beta+1}$ is defined or it is not. First of all, we suppose that $\pi(t)_{\beta+1}$ is defined and prove that $\pi(s) < \pi(t)$ in this case. Indeed, if $\pi(s)_{\beta+1}$ is not defined then we are done, as $\pi(t)$ strictly extends $\pi(s)$. On the other hand, if $\pi(s)_{\beta+1}$ is defined then since $\pi(s)_{\beta+1} \neq \pi(t)_{\beta+1}$ and $\lambda$ is decreasing, it must be that $\pi(s)_{\beta+1} > \pi(t)_{\beta+1}$. Therefore $\pi(s) < \pi(t)$. The other option is that $\pi(t)_{\beta+1}$ is undefined. In this case, since $0_V \prec t$, it must be that $\lambda^t_\alpha(t) = 0$ for some $\alpha+1 < \beta+1$, by the definition of $\pi$. As $\pi(t)_\beta$ is defined then, again by the definition of $\pi$, it follows that $\alpha+1 = \beta$. Let $V^s_\alpha = U = V^t_\alpha$ and $\lambda^s_\alpha = \lambda^\prime = \lambda^t_\alpha$. Then by Eqn.\ \ref{lambdaeqn} above, we have $\lambda^\prime(s) = \lambda^\prime(t) = 0$, meaning $\pi(s)_{\beta+1}$ is not defined either. Consequently, $\pi(s) = \pi(t)$. We have established that $\pi$ is an increasing function. 
Now we show that it is not constant on any ever-branching subset and, given $t \in \Upsilon$, there are only finitely many $u \in t^+$ such that $\pi(u) = \pi(t)$. To prove this claim, consider $t \in \Upsilon$ and the plateau $W = \setcomp{u \in [t,\infty)}{\pi(u) = \pi(t)}$. If $W$ is the singleton $\{t\}$ then there is nothing to prove, so we suppose that there exists some $u \in W$ with $t \prec u$. Let both $\pi(t)$ and $\pi(u)$ be defined on $[0,\beta]$ and fix $V = V^t_\beta$. In just the same way as above, we have that $V^t_\alpha = V^u_\alpha$ whenever $\alpha \leq \beta$ and, in particular, $V^u_\beta = V$. Observe that, as a consequence, $W \subseteq V$. Moreover, just as above, as $\pi(u)_{\beta+1}$ is undefined and $0_{V^u_\beta} \preccurlyeq t \prec u$, we have $\beta = \alpha+1$ for some $\alpha$. It follows that if we set $V^t_\alpha = U = V^u_\alpha$ and $\lambda^t_\alpha = \lambda^\prime = \lambda^u_\alpha$, then $\lambda^\prime(t) = \lambda^\prime(u) = 0$. Now we can appeal to parts (2) and (3) of Lemma \ref{lambdavanish} applied to $U$, $f_{(\alpha,U)}$, $\mu(f_{(\alpha,U)},\cdot)$ and $\lambda^\prime$ to conclude that $V$ is finitely-branching and contains no ever-branching subsets. As $W \subseteq V$, we are done. We finish our appraisal of $\pi$ by showing that it does not admit certain types of $Z$-bad points. First of all, if $\pi(t) \in Y$ then $t$ cannot be $Z$-bad for $\pi$. Indeed, by Lemma \ref{convergenceinZ} part (1) and the fact that the elements of $\ran \pi$ are uniformly bounded sequences, the only way that $t$ can be $Z$-bad for $\pi$ is if there are infinitely many $u \in t^+$ such that $\pi(u) = \pi(t)$. Now suppose that $\pi(t) = (\pi(t)_\xi)_{\xi \leq \beta + 1} \in Z\backslash Y$, where $\beta$ is a limit ordinal. We prove that $t$ is not $Z$-bad for $\pi$. 
We know already that $\pi(u) = \pi(t)$ for only finitely many $u \in t^+$ so, for a contradiction, we must suppose that there is a sequence of distinct points $(u_n) \subseteq t^+$ such that $\pi(t) < \pi(u_n)$ and $\pi(u_n) \rightarrow \pi(t)$. We have that $\pi(t)_\beta = \pi(t)_{\beta+1}$. Let $V = V^t_\beta$, where $V^t_\beta$ is the unique element $V \in \mathscr{P}_\beta$ containing $t$, and let $f = f_{(\beta,V)}$. Observe that if $\lambda$ is the function from Lemma \ref{lambda} associated with $f$ and $V$ then, necessarily, $\lambda(t) = 0$. Indeed, by the definition of $\pi$, we have $\frac{1}{2}\varepsilon \lambda(t) = \pi(t)_{\beta+1} - \pi(t)_\beta$. By Lemma \ref{convergenceinZ} part (3), there exist ordinals $\alpha_n < \beta$ such that $\alpha_n \rightarrow \beta$, $\pi(u_n)_\xi = \pi(t)_\xi$ whenever $\xi < \alpha_n$ and $\pi(u_n)_{\alpha_n} < \pi(t)_{\alpha_n}$. By continuity and transfinite induction, $\alpha_n = \xi_n + 1$ for some ordinals $\xi_n$ and $V^t_{\xi_n} = V^{u_n}_{\xi_n}$. Set $V_n = V^t_{\xi_n}$ and $f_n = f_{(\xi_n,V_n)}$. As $\alpha_n \rightarrow \beta$, it follows that $V = \bigcap_n V_n$ and the functions $f_n + f_n(0_{V_n})\ind{(0_{V_n},t]}$ converge in norm to $f + f(0_V)\ind{(0_V,t]}$. Moreover $\mu(f_n,u_n) = \pi(u_n)_{\xi_n} = \pi(t)_{\xi_n} \rightarrow \pi(t)_\beta = \mu(f,t)$. Now choose $\varphi_n \in \Czerok{\Upsilon}$ to satisfy $\supp \varphi_n \subseteq (u_n, \infty)$ and $\norm{f_n + f_n(0_{V_n})\ind{(0_{V_n},u_n]} + \varphi_n} \leq \mu(f_n,u_n) + 2^{-n} = \mu(f_n,t) + 2^{-n}$. As the $u_n$ are distinct, it follows that $(f_n + f_n(0_{V_n})\ind{(0_{V_n},u_n]} + \varphi_n)$ converges to $f + f(0_V)\ind{(0_V,t]}$ in the pointwise topology of $\Czerok{\Upsilon}$. As $\Upsilon$ is scattered and this sequence is norm-bounded, it converges in the weak topology too. Therefore $\norm{f + f(0_V)\ind{(0_V,t]}} = \mu(f,t)$. 
However, by part (4) of Lemma \ref{lambda}, the attainment of the infimum forces $\lambda(t) > 0$, which is not the case. It follows that $t$ cannot be a $Z$-bad point for $\pi$. One case remains untreated. If $\pi(t) = (\pi(t)_\xi)_{\xi \leq \beta + 1} \in Z\backslash Y$ and $\beta$ is not a limit ordinal, it is possible that $t$ is $Z$-bad for $\pi$. Fortunately, by making an adjustment to $\pi$ akin to that given after Lemma \ref{convergenceinZ}, we can remove $Z$-bad points of this kind. Given $x = (x_\xi)_{\xi \leq \beta} \in Z$, define \[ \Phi(x)_\xi \;=\; \left\{ \begin{array} {l@{}l} 2x_0 & \quad\mbox{if } \xi = 0 \\ x_\xi + x_{\xi - 1} + 1 & \quad\mbox{if } \xi \mbox{ is a successor ordinal}\\ 2x_\xi + 1 & \quad\mbox{otherwise} \end{array} \right. \] for $\xi \leq \beta$. It is easy to establish that $\Phi$ takes values in $Z$ and is strictly increasing. Set $\rho = \Phi \circ \pi$. As $\Phi$ is strictly increasing, $\rho$ is increasing and, if we consider Proposition \ref{plateaupartition}, partitions $\Upsilon$ in exactly the same way as $\pi$. In particular, $\rho$ is not constant on any ever-branching subset of $\Upsilon$. Again, as $\Phi$ is strictly increasing, if $t$ is $Z$-bad for $\rho$ then it is also $Z$-bad for $\pi$. Therefore, to prove that $\rho$ has no $Z$-bad points, we suppose that $\pi(t) = (\pi(t)_\xi)_{\xi \leq \beta + 1} \in Z\backslash Y$ and $\beta$ is not a limit ordinal. We have that $\pi(t)_\beta = \pi(t)_{\beta+1}$ so, by the construction of $\pi$, there exists an ordinal $\alpha$ such that $\beta = \alpha + 1$. Therefore, $\pi(t)_\alpha < \pi(t)_\beta$ and thus $\rho(t)_\beta < \rho(t)_{\beta+1}$, giving $\rho(t) \in Y$. Again by appealing to Lemma \ref{convergenceinZ} part (1), if $t$ is $Z$-bad for $\rho$ then $\rho(u) = \rho(t)$ for infinitely many $u \in t^+$. However, that would force $\pi(u) = \pi(t)$ for infinitely many $u \in t^+$, and we have already established that this is impossible. 
\end{proof} \begin{proof}[of Corollary \ref{latticeggivesr}] If $\ensuremath{||\cdot||}$ is a lattice norm then, by part (1) of Lemma \ref{infattaining}, the infima in the definition of the $\mu$-functions are always attained. It follows that the $\lambda$-functions of Lemma \ref{lambda} never vanish. Now, we prove that in this case, the map $\pi$ defined in the proof of Theorem \ref{newgnec} is $Y$-valued and strictly increasing. Indeed, if we return to the point where we prove that $\pi(t) \in Z$, we see that, as the $\lambda$-functions never vanish, $\pi(t)_\alpha < \pi(t)_{\alpha+1}$ whenever $\alpha+1 \leq \beta$. Consequently $\pi(t) \in Y$. To show that $\pi$ is strictly increasing, we let $s \prec t$ and return to the point in the proof where $\pi$ is shown to be increasing, specifically, where $\gamma$ is defined. If $\gamma = 0$ then we are done. Otherwise, $\gamma = \beta+1$ for some $\beta$. Since the $\lambda$-functions never vanish, it is impossible that $\pi(t)_{\beta+1}$ is undefined, therefore $\pi(s) < \pi(t)$. This proves that $\Upsilon \preccurlyeq Y$. The second statement of Corollary \ref{latticeggivesr} holds because the strictly convex dual norm constructed in Theorem \ref{dualrotundthm} is a lattice norm. \end{proof} We finish the section with a proof of Proposition \ref{haydoncor}. It will help to introduce a useful game-theoretic characterisation of Baire trees \cite{haydon:95}. Players \textbf{A} and \textbf{B} take turns to nominate elements of a tree $\Upsilon$, beginning with $t_0$ played by \textbf{B}. In general, \textbf{A} follows $t_{2n}$ with $t_{2n+1} \succcurlyeq t_{2n}$, and \textbf{B} responds with $t_{2n+2} \succcurlyeq t_{2n+1}$. The game is won by \textbf{B} if the sequence $(t_n)$ has no upper bound in $\Upsilon$. The tree $\Upsilon$ is Baire if and only if \textbf{B} has no winning strategy in this so-called $\Upsilon$\textit{-game}. Using this game, it is possible to prove the following result. 
\begin{prop}[({Haydon \cite[Proposition 1.4]{haydon:95}})] \label{realsubmissive} If $\Upsilon$ is Baire and $\mapping{\rho}{\Upsilon}{\mathbb{R}}$ is increasing, then there exists $t \in \Upsilon$ such that $\rho$ is constant on the wedge $[t,\infty)$. \end{prop} One trivial consequence of Proposition \ref{realsubmissive} is that if the increasing map $\mapping{\rho}{\Upsilon}{\mathbb{R}}$ is not constant on any ever-branching subset then $\Upsilon$ contains no ever-branching Baire subsets. Indeed, if $E \subseteq \Upsilon$ were ever-branching and Baire then, by Proposition \ref{realsubmissive}, we could find $t \in E$ such that $\rho$ is constant on $[t,\infty) \cap E$, which is an ever-branching subset of $\Upsilon$. We observe that the same holds if we replace $\mathbb{R}$ with any linear order $\Sigma$ satisfying the statement of Proposition \ref{realsubmissive}. Therefore, to establish Proposition \ref{haydoncor}, it is enough to prove the following result. \begin{prop} \label{zsubmissive} If $\Upsilon$ is Baire and $\mapping{\rho}{\Upsilon}{Z}$ is increasing, then there exists $t \in \Upsilon$ such that $\rho$ is constant on $[t,\infty)$. \end{prop} \begin{proof} The following order will be used in this and a subsequent proof. Define \[ Z_0 \;=\; \setcomp{x = (x_\alpha)_{\alpha \leq \beta} \in Z}{x \subseteq [0,1]\mbox{, }x_0 = 0\mbox{ and }\beta\mbox{ is a limit whenever }x_\beta = 1}. \] By considering the map $\Theta$, introduced after Lemma \ref{convergenceinZ}, we observe that $Z \preccurlyeq Z_0$ and, accordingly, we can assume that our increasing function $\rho$ takes values in $Z_0$. We show that $\rho$ is constant on some wedge of $\Upsilon$ by playing the $\Upsilon$-game with a particular strategy for \textbf{B}. Given $u \in \Upsilon$ and an ordinal $\alpha$, we call $(\alpha,u)$ a \textit{fixed pair} if $\rho(v)_\xi$ is defined and equal to $\rho(u)_\xi$ whenever $v \in [u,\infty)$ and $\xi \leq \alpha$. 
If $(\alpha,u)$ is fixed, $v \in [u,\infty)$ and $\xi \leq \alpha$, then $(\xi,v)$ is also fixed. Let \textbf{B} play an arbitrary $t_0$ as the first move and put $\alpha_0 = 0$. Note that $(0,t_0)$ is fixed. Now suppose that $n \geq 1$ and that moves $t_0 \preccurlyeq t_1 \preccurlyeq \ldots \preccurlyeq t_{2n-1}$ have been played alternately by \textbf{B} and \textbf{A}. We choose the next move $t_{2n}$ played by \textbf{B}, together with $\alpha_n$, in the following manner. Let \[ r_n \;=\; \sup \setcomp{\rho(u)_\alpha}{u \succcurlyeq t_{2n-1}\mbox{ and }(\alpha,u)\mbox{ is a fixed pair}}. \] Let \textbf{B} choose fixed $(\alpha_n,t_{2n})$ such that $t_{2n} \succcurlyeq t_{2n-1}$ and $\rho(t_{2n})_{\alpha_n} > r_n - 2^{-n}$. This strategy does not guarantee a win for \textbf{B}, so there exist moves $(t_{2n+1})$ of \textbf{A} such that $(t_n)$ has an upper bound $u \in \Upsilon$. Setting $\alpha = \sup_n \alpha_n$, we see that $(\alpha,u)$ is fixed. This follows by continuity and the fact that $(\alpha_n,u)$ is fixed for all $n$. If $\rho(v)_{\alpha+1}$ is not defined for any $v \succcurlyeq u$ then $\rho$ takes constant value $\rho(u)$ on $[u,\infty)$, and we are done. Suppose instead that $\rho(v)_{\alpha+1}$ exists for some $v \succcurlyeq u$. Because $(\alpha,v)$ is fixed and $\rho$ is increasing, the real-valued map $\rho(\cdot)_{\alpha+1}$ must be decreasing on $[v,\infty)$. As the forcing-open set $[v,\infty)$ is Baire, by Proposition \ref{realsubmissive}, there exists $w \succcurlyeq v$ such that $\rho(\cdot)_{\alpha+1}$ is constant on $[w,\infty)$, and it follows that $(\alpha+1,w)$ is a fixed pair. We note that the inequalities \[ r_n - 2^{-n} \;<\; \rho(t_{2n})_{\alpha_n} \;=\; \rho(w)_{\alpha_n} \;\leq\; \rho(w)_\alpha \;\leq\; \rho(w)_{\alpha+1} \;\leq\; r_n \] hold for all $n$, and we conclude that $\rho(w)_{\alpha+1} = \rho(w)_\alpha$. Consequently, by the definition of elements of $Z$, $\rho$ takes constant value $\rho(w)$ on $[w,\infty)$. 
\end{proof} \section{Examples} \label{examples} In this section, we prove Proposition \ref{sigmaYnoG} and present Example \ref{Ggap}. Before giving the proof of Proposition \ref{sigmaYnoG}, we make an observation about embeddability and $Z$-bad points that is analogous to Proposition \ref{ratbadpoints}. Given a tree $\Upsilon$, suppose that $\Upsilon \preccurlyeq Z$ and that there is an increasing function $\mapping{\rho}{\Upsilon}{Z}$ with no $Z$-bad points. We claim that if this is the case then $\Upsilon \preccurlyeq Y$. In order to prove this claim, we introduce the following algebraic operation on $Z$. Recall the order isomorphism $\mapping{\theta}{\mathbb{R}}{(0,1)}$, fixed after Lemma \ref{convergenceinZ}. For elements $x = (x_\xi)_{\xi \leq \alpha}$ and $y = (y_\xi)_{\xi \leq \beta}$ of $Z$, define $x\cdot y$ for $\xi \leq \max\{\alpha,\beta\}$ by \[ (x\cdot y)_\xi \;=\; \left\{ \begin{array} {l@{}l} \theta^{-1}(\theta(x_\xi)\theta(y_\xi)) & \quad\mbox{if } \xi \leq \min\{\alpha,\beta\} \\ y_\xi & \quad\mbox{if } \alpha < \xi \leq \beta\\ x_\xi & \quad\mbox{if } \beta < \xi \leq \alpha \end{array} \right. \] where $\theta(x_\xi)\theta(y_\xi)$ is an ordinary real product. We leave the reader with the simple task of verifying that $\cdot$ is a semigroup operation on $Z$ that respects the order; in other words, if $x \leq y$ and $u \leq v$ then $x\cdot u \leq y\cdot v$ and, moreover, the third inequality is strict if either of the first two are. Now, let the increasing function $\mapping{\nu}{\Upsilon}{Z}$ have no $Z$-bad points and suppose $\mapping{\tau}{\Upsilon}{Z}$ is strictly increasing. As $\cdot$ respects order, it follows that the pointwise product $\pi = \nu\cdot\tau$ is strictly increasing and has no $Z$-bad points. By Lemma \ref{convergenceinZ}, any element of $Z$ can be approached from above by a strictly decreasing sequence. 
Therefore, as $t \in \Upsilon$ is not a $Z$-bad point for $\pi$, there exists $\pi^*(t) \in Z$ such that $\pi(t) < \pi^*(t) \leq \pi(u)$ whenever $u \in t^+$. Finally, since $Y$ is dense in $Z$, we can pick $\rho(t) \in Y$ between $\pi(t)$ and $\pi^*(t)$; the resulting function $\rho$ is strictly increasing. \begin{proof}[of Proposition \ref{sigmaYnoG}] In the light of Theorem \ref{sigmanoembed} and our observation above, all we need to do is prove that $\sigma Y \preccurlyeq Z$. Recall the order $Z_0$ from the proof of Proposition \ref{zsubmissive}. As $Z \preccurlyeq Z_0$, elements of $\sigma Y$ can be, and are, considered as subsets of $Z_0$. Our proof that $\sigma Y \preccurlyeq Z$ rests on the claim that $Z_0$ is Dedekind complete; that is, each subset $A$ of $Z_0$ has a least upper bound, denoted by $\sup A$. For now, we assume that this claim holds and define a strictly increasing map $\mapping{\rho}{\sigma Y}{Z}$. Given $A \in \sigma Y$, treated as a subset of $Z_0$, let $\rho(A) = \sup A$ if $\sup A \in Z_0\backslash Y$ or if $A$ has no greatest element, and let $\rho(A) = (\sup A,2)$ otherwise. Here, $(x,2)$ denotes the sequence obtained by extending $x \in Z_0 \cap Y$ by a single element, namely $2$. Observe that if $x \in Z_0 \cap Y$, $y \in Z_0$ and $x < y$ then $(x,2) < y$ because every element of $y$ is strictly less than $2$. Let $A,B \in \sigma Y$ satisfy $A \prec B$. If $\sup A < \sup B$ then $\rho(A) < \sup B \leq \rho(B)$. Alternatively, if $\sup A = \sup B$ then $B = A \cup \{\sup A\}$; indeed, if $x \in B\backslash A$ then $\sup A \leq x \leq \sup B = \sup A$. In particular, $B$ has greatest element $\sup A \in Y$, whereas $A$ has no greatest element. Therefore $\rho(A) = \sup A < (\sup A,2) = \rho(B)$. This proves that $\rho$ is strictly increasing. To finish, we define $\sup A$ for $A \subseteq Z_0$. If $A$ is empty then its least upper bound is the one-element sequence $(0)$. 
From now on, we assume that $A$ is non-empty and has no greatest element; if $A$ has a greatest element then that element is evidently its least upper bound. Taking our cue from the proof of Proposition \ref{zsubmissive}, given an ordinal $\alpha$ and $x \in A$, we will call $(\alpha,x)$ a \textit{fixed pair} if $x_\xi$ and $y_\xi$ are both defined and equal whenever $y \in A$, $x \leq y$ and $\xi \leq \alpha$. If $(\alpha,x)$ is fixed, $y \in A$, $x \leq y$ and $\xi \leq \alpha$, then $(\xi,y)$ is also fixed. Now let $\beta$ be minimal, subject to the condition that there is no fixed pair $(\beta,x)$. As $A$ is non-empty and $(0,x)$ is fixed whenever $x \in A$, it follows that $\beta > 0$. We define a sequence $z = (z_\alpha)_{\alpha \leq \beta}$. If $\alpha < \beta$, let $z_\alpha = x_\alpha$, where $(\alpha,x)$ is some fixed pair. By the nature of fixed pairs, this is well-defined. If $\beta$ is a limit, let $z_\beta = \sup_{\alpha < \beta} z_\alpha$. Instead, if $\beta = \alpha + 1$ for some $\alpha$ then, as $A$ has no greatest element, there exists a fixed pair $(\alpha,x)$ such that $x_\beta$ is defined. Let $z_\beta$ be the infimum of all such $x_\beta$. It is easy to verify that $z \in Z_0$; it can be that $z_\beta = 1$, but only if $\beta$ is a limit ordinal. We omit the pedestrian task of proving that $z$ is the least upper bound of $A$. \end{proof} Our last task is to show that there is a tree $\Psi$ satisfying the condition of Theorem \ref{newgnec} but not that of Theorem \ref{gsuff}. Before doing so, we must make some remarks. Recall the plateau partitions of Proposition \ref{plateaupartition} and note the following slightly reworded version of a result from \cite{smith:05c}. \begin{prop}[({Smith \cite[Corollary 3]{smith:05c}})] \label{noebcor} Suppose that $\Upsilon$ is a tree, $\Sigma$ a linear order, and $\mapping{\rho}{\Upsilon}{\Sigma}$ an increasing function that is not constant on any ever-branching subset of $\Upsilon$. 
Then there exists an increasing function $\mapping{\pi}{\Upsilon}{\Sigma \times \omega}$, such that the plateau partition $\mathscr{P}$ of $\Upsilon$ with respect to $\pi$ consists solely of linearly ordered subsets. \end{prop} Let $\Upsilon$, $\Sigma$, $\pi$ and $\mathscr{P}$ be as in Proposition \ref{noebcor} and, moreover, let us suppose that $\Upsilon$ admits no uncountable linearly ordered subsets. In this case, each $V \in \mathscr{P}$ identifies with a finite or countable ordinal and, therefore, there exists a strictly increasing function $\mapping{\pi_V}{V}{\mathbb{Q}}$. It is apparent that the function $\mapping{\tau}{\Upsilon}{\Sigma \times \omega \times \mathbb{Q}}$, defined by $\tau(t) = (\pi(t),\pi_{V_t}(t))$, where $V_t$ is the unique element of $\mathscr{P}$ containing $t$, is strictly increasing. As $\omega \times \mathbb{Q} \preccurlyeq \mathbb{Q}$, it follows that $\Upsilon \preccurlyeq \Sigma \times \mathbb{Q}$. \begin{example} \label{Ggap} Observe that $Y$ has cardinality continuum $\ensuremath{\mathfrak{c}}$. If $A \in \sigma Y$ then $A^+$ identifies with the set $u(A)$ of all upper bounds of $A$ and, thus, has cardinality $\ensuremath{\mathfrak{c}}$ if $u(A)$ is non-empty. Fix a well-order $\sqsubseteq$ of $Y$, and let $\Psi = \sigma Y \times \ensuremath{\mathfrak{c}}$. We order $\Psi$ by declaring that $(A,\alpha) \preccurlyeq (B,\beta)$ if and only if either $A = B$ and $\alpha \leq \beta$, or if $A \prec B$ and $\alpha$ is no greater than the order type of $\setcomp{x \in u(A)}{x \sqsubset \min (B\backslash A,\leq)}$, with respect to $\sqsubset$. With respect to this order, each element of $\Psi$ has between one and two immediate successors. Indeed, if $(A,\alpha) \in \Psi$ then $(A,\alpha + 1)$ is always an immediate successor. If $u(A)$ is non-empty then $(A \cup \{y\},0)$ is also such a successor, where $y \in u(A)$ and $\setcomp{x \in u(A)}{x \sqsubset y}$ has order type $\alpha$. 
The set $\sigma Y \times \{0\}$ is a natural copy of $\sigma Y$ inside $\Psi$ that is closed with respect to the interval topology. Now, by Proposition \ref{sigmaYnoG}, there exists a strictly increasing map $\mapping{\pi}{\sigma Y}{Z}$. Define $\mapping{\rho}{\Psi}{Z}$ by $\rho(A,\alpha) = \pi(A)$. By Proposition \ref{plateaupartition}, the plateau partition of $\Psi$ with respect to $\rho$ consists exactly of the sets $\setcomp{(A,\alpha)}{\alpha < \ensuremath{\mathfrak{c}}}$, where $A \in \sigma Y$. Therefore, $\rho$ is not constant on any ever-branching subset. Because the number of immediate successors of any element of $\Psi$ is at most two, $\rho$ has no $Z$-bad points either. Hence $\Psi$ satisfies the condition of Theorem \ref{newgnec}. On the other hand, there exists no increasing $Y$-valued function on $\Psi$ that is not constant on any ever-branching subset. Indeed, if there were such a function, by considering its restriction to $\sigma Y \times \{0\}$, there would be a map $\mapping{\tau}{\sigma Y}{Y}$, also not constant on any ever-branching subset. However, by an argument similar to that given after Proposition \ref{realsubmissive}, $\sigma Y$, being $Z$-embeddable, has no perfect Baire subsets. In particular, $\sigma Y$ does not contain a copy of $\ensuremath{\omega_1}$. Therefore, by Proposition \ref{ybetaembed} and the remarks following Proposition \ref{noebcor}, we would have $\sigma Y \preccurlyeq Y \times \mathbb{Q} \preccurlyeq Y$ which, by Theorem \ref{sigmanoembed}, is impossible. \end{example} We recall Problem \ref{gateauxconj} and conjecture that $\Czerok{\Psi}$ admits a G\^{a}teaux norm. The G\^{a}teaux norms presented in \cite{smith:05c} are built by combining norms obtained from existing techniques, namely the Fr\'{e}chet norms of Talagrand and Haydon, and norms with strictly convex duals. 
In the author's opinion, if Problem \ref{gateauxconj} is to be resolved positively, we require a method of constructing G\^{a}teaux norms on $\Ck{K}$ spaces that unifies these techniques on a more fundamental level. \bibliographystyle{amsplain}
\section{INTRODUCTION} Recent years have seen the breakthrough of mobile robotics into the consumer market. Domestic robots have become increasingly common, as well as vehicles making use of cameras, radar and other sensors to assist the driver. An important aspect of human-robot interaction is the ability of artificial agents to understand the way humans think and talk about abstract spatial concepts. For example, a domestic robot may be asked to ``clean the bathroom'', while a car may be asked to ``stop at the parking area''. Hence, a robot's definition of ``bathroom'' or ``parking area'' should point to the same set of places that a human would recognize as such. The problem of assigning a semantic spatial label to an image has been extensively studied in the computer and robot vision literature \cite{oliva2001modeling,wu2011centrist,fazl2012histogram,pronobis2006discriminative,pronobis2009ijrr}. The most important challenges in identifying places come from the complexity of the concepts to be recognized and from the variability of the conditions in which the images are captured. Scenes from the same category may differ significantly, while images corresponding to different places may look similar. The historical take on these issues has been to model the visual appearance of scenes by considering a large variety of both global and local descriptors \cite{oliva2001modeling,wu2011centrist,fazl2012histogram,lazebnik2006beyond} and several (shallow) learning models (\textit{e.g.} SVMs, Random Forests). \begin{figure}[t] \centering \subfloat[Standard NBNN pipeline]{\includegraphics[width=0.22\textwidth]{standard_classifier2.pdf}\label{fig:cnn_nbnn}} \hfill \subfloat[FullyConv-NBNN pipeline]{\includegraphics[width=0.22\textwidth]{fullyconv_classifier.pdf}\label{fig:fullyconv}} \caption{The standard NBNN classification pipeline (a) versus the proposed model (b). The orange boxes indicate modules which involve a learning phase. 
Instead of extracting patches in a preprocessing step, we employ a fully-convolutional neural network, which automatically computes local features from the image. Moreover, the feature extraction and classifier modules are merged, allowing end-to-end training.} \label{fig:teaser} \vspace{-0.6cm} \end{figure} Since the (re-)emergence of Convolutional Neural Networks (CNNs), approaches based on learning deep representations have become mainstream. Several works exploited deep models for visual-based scene classification and place recognition tasks, showing improved accuracy over traditional methods based on hand-crafted descriptors \cite{urvsivc2016part,sunderhauf2015performance,arroyo2016fusion,kanji2016self,neubert2015local}. Some of these studies \cite{urvsivc2016part,sunderhauf2015performance,neubert2015local} demonstrated the benefit of adopting a region-based approach (\textit{i.e.} considering only specific image parts) in combination with descriptors derived from CNNs, so as to obtain models which are robust to viewpoint changes and occlusions. With a similar motivation, several recent works in computer vision have attempted to bring back the notion of localities into deep networks, \textit{e.g.} by designing appropriate pooling strategies \cite{gong2014multi} or by casting the problem within the Image-2-Class (I2C) recognition framework \cite{kuzborskij2016naive}, with a high degree of success. All these works decouple the choice of the significant localities from the learning of deep representations, as the CNN feature extraction and the classifier learning are implemented as two separate modules. This leads to two drawbacks: first, heuristically choosing the relevant localities means cropping parts of the images before feeding them to the chosen feature extractor. This is clearly sub-optimal, and might turn out to be computationally expensive. 
Second, it would be desirable to fully exploit the power of deep networks by directly learning the best representations for the task at hand, rather than re-using architectures trained on general-purpose databases like ImageNet and passively processing patches from the input images without adapting the network's weights. Ideally, a fully-unified approach would guarantee more discriminative representations, resulting in higher recognition accuracy. This paper contributes to this last research thread by addressing these two issues. We propose an approach for semantic place categorization which exploits local representations within a deep learning framework. Our method is inspired by the recent work~\cite{kuzborskij2016naive}, which demonstrates that, by dividing images into regions and representing them with CNN-based features, state-of-the-art scene recognition accuracy can be achieved by exploiting an I2C approach, namely a parametric extension of the Na\"{i}ve Bayes Nearest Neighbor (NBNN) model. Following this intuition, we propose a deep architecture for semantic scene classification which seamlessly integrates the NBNN and CNN frameworks (Fig.~\ref{fig:teaser}). We automatize the multi-scale patch extraction process by adopting a fully-convolutional network \cite{long2015fully}, guaranteeing a significant advantage in terms of computational cost over two-step methods. Furthermore, a differentiable counterpart of the traditional NBNN loss is considered to obtain an error that can be back-propagated to the underlying CNN layers, thus enabling end-to-end training. To the best of our knowledge, this is the first attempt to fully unify NBNN and CNN, building a deep version of Na\"{\i}ve Bayes Nearest Neighbor. We extensively evaluate our approach on several publicly-available benchmarks. 
Our results demonstrate the advantage of the proposed end-to-end learning scheme over previous works based on a two-step pipeline and the effectiveness of our deep network over state-of-the-art methods on challenging robot place categorization tasks. \section{RELATED WORK} \label{related} In this section we review previous works on (i) visual-based place recognition and categorization and (ii) Na\"{i}ve Bayes Nearest Neighbor classification. \subsection{Visual-based Place Recognition and Categorization} In the last decade several works in the robotics community addressed the problem of developing robust place recognition \cite{cummins2008fab,kanji2015cross,lowry2016visual,sunderhauf2015performance,arroyo2016fusion} and semantic classification \cite{pronobis2006discriminative,costante2013transfer,urvsivc2016part} approaches using visual data. In particular, focusing on place categorization from monocular images, earlier works adopted a two-step pipeline: first, hand-crafted features, such as GIST \cite{oliva2001modeling}, CENTRIST \cite{wu2011centrist}, CRFH \cite{pronobis2006discriminative} or HOUP \cite{fazl2012histogram}, are extracted from the query image, and then the image is classified into one of the predefined categories using a previously-trained discriminative model (\textit{e.g.}, Support Vector Machines). Similarly, earlier studies on visual-based place recognition and loop closing also considered hand-crafted feature representations \cite{cummins2008fab,kanji2015cross,ciarfuglia2012discriminative}. More recently, motivated by the success of deep learning models in addressing visual recognition tasks \cite{krizhevsky2012imagenet}, robotics researchers have started to exploit feature representations derived from CNNs for both place recognition \cite{sunderhauf2015performance,arroyo2016fusion,neubert2015local} and semantic scene categorization \cite{urvsivc2016part} tasks. 
S\"{u}nderhauf \textit{et al.} \cite{sunderhauf2015performance} analyzed the performance of CNN-based descriptors with respect to viewpoint changes and time variations, presenting the first real-time place recognition system based on convolutional networks. Arroyo \textit{et al.} \cite{arroyo2016fusion} addressed the problem of topological localization across different seasons and proposed an approach which fuses information derived from multiple convolutional layers of a deep architecture. Gout \textit{et al.} \cite{gout2017evaluation} evaluated the representational power of deep features for analyzing images collected by an autonomous surface vessel, studying the effectiveness of CNN descriptors in case of large seasonal and illumination changes. Ur{\v{s}}i{\v{c}} \textit{et al.} \cite{urvsivc2016part} proposed an approach for semantic room categorization: first, images are decomposed into regions and CNN-based descriptors are extracted for each region; then, a part-based classification model is derived for place categorization. Interestingly, they showed that their method outperforms traditional CNN architectures based on global representations \cite{krizhevsky2012imagenet}, as the part-based model guarantees robustness to occlusions and image scaling. Our work develops from a similar idea, but differently from \cite{urvsivc2016part} the deep network is not merely used as a feature extractor and a novel CNN architecture, suitable for end-to-end training, is proposed. \subsection{Na\"{i}ve Bayes Nearest Neighbor Classification} The NBNN approach has been widely adopted in the computer and robot vision community, as an effective method to overcome the limitations of local descriptor quantization and Image-2-Image recognition \cite{boiman2008defense}. 
Several previous studies have demonstrated that the I2C paradigm implemented by NBNN models is especially beneficial for generalization and domain adaptation \cite{tommasi2013frustratingly} and that, by adding a learning component to the non-parametric NBNN, performance can be further boosted \cite{fornoni2014scene}. Recent works have also shown that the NBNN can be successfully employed for place recognition and categorization tasks \cite{kuzborskij2016naive,kanji2015cross,kanji2016self}. Kanji \cite{kanji2015cross} introduced an NBNN scene descriptor for cross-seasonal place recognition. In a later work \cite{kanji2016self}, Kanji extended this approach by integrating CNN-based features and PCA, deriving a PCA-NBNN model for addressing the problem of self-localization in the case of images with small view overlap. Kuzborskij \textit{et al.} \cite{kuzborskij2016naive} proposed a multi-scale parametric version of the NBNN classifier and demonstrated its effectiveness in combination with precomputed CNN descriptors for scene recognition. Our work is inspired by \cite{kuzborskij2016naive}. However, the proposed learning model is based on a fully-convolutional network which can be trained in an end-to-end manner. Therefore, it is significantly faster and more accurate than \cite{kuzborskij2016naive}. \section{Fully-Convolutional CNN-NBNL} In this section we describe the proposed approach for semantic place categorization. As illustrated in Fig.~\ref{fig:teaser}, our method develops from the same idea as previous models based on local representations and CNN descriptors \cite{kuzborskij2016naive,urvsivc2016part}: images are decomposed into multiple regions (represented with CNN features) and a part-based classifier is used to infer the labels associated with places. 
However, differently from previous works, our approach unifies the feature extraction and the classifier learning phases, and we propose a novel CNN architecture which implements a part-based classification strategy. As demonstrated in our experiments (Sect.~\ref{experiments}), our deep network guarantees a significant boost in performance, both in terms of accuracy and computational cost. Since our framework is derived from previous works on NBNN-based methods \cite{boiman2008defense,fornoni2014scene,kuzborskij2016naive}, we first provide a brief description of these approaches (Sect.~\ref{sec:NBNN}-\ref{cnnnbnl}) and then we introduce the proposed fully-convolutional NBNN-based network (Sect.~\ref{sec:fcnnbnl}). \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{sketch_cropped-crop.pdf} \caption{Simplified architecture of the proposed framework. The image is re-scaled to different sizes. The obtained images are fed in parallel to multiple FC-CNNs with shared weights. From the networks, descriptors are extracted and used as input to the NBNL classifier.} \label{fig:method} \vspace{-0.4cm} \end{figure} \subsection{Na\"{\i}ve Bayes Non-Linear Learning} \label{sec:NBNN} Let $\set X$ denote the set of possible images and let $\set Y$ be a finite set of class labels, indicating the different scene categories. The goal is to estimate a classifier $f:\set X\to\set Y$ from a training set $\set T\subset\set X\times\set Y$ sampled from the underlying, unknown data distribution. The NBNN method~\cite{boiman2008defense} works under the assumption that there is an intermediate Euclidean space $\set Z$ and a set-valued function $\phi$ that abstracts an input image $x\in\set X$ into a set of descriptors in $\set Z$, \textit{i.e.}\xspace $\phi(x)\subset\set Z$. For instance, the image could be broken into patches and a descriptor in $\set Z$ could be computed for each patch. 
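To make the decomposition $\phi$ concrete, it can be pictured as multi-scale patch extraction followed by a per-patch descriptor. The sketch below is purely illustrative: the patch sizes, the stride and the flattened-pixel ``descriptor'' are our own assumptions, standing in for the CNN features used later in the paper.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a patch_size x patch_size window over a 2-D image."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return patches

def phi(image, patch_sizes=(8, 16), stride=4):
    """Toy phi: map an image to a set of local descriptors, one per
    patch and scale (here the 'descriptor' is just the flattened patch)."""
    descriptors = []
    for size in patch_sizes:
        for patch in extract_patches(image, size, stride):
            descriptors.append(patch.ravel())
    return descriptors

descs = phi(np.zeros((32, 32)))
```

For a $32\times 32$ input this yields $7\times 7 = 49$ patches at size $8$ and $5\times 5 = 25$ at size $16$, so $|\phi(x)| = 74$.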
Given a training set $\set T$, let $\Phi_y(\set T)$ be the set of descriptors computed from images in $\set T$ having label $y\in\set Y$, \textit{i.e.}\xspace $\Phi_y(\set T)=\bigcup\{\phi(x)\,:\,(x,y)\in \set T \}$. The NBNN classifier $f_\mathtt{NBNN}$ is given as follows: \begin{equation} f_\mathtt{NBNN}(x;\set T)=\argmin_{y\in\set Y}\sum_{z\in\phi(x)}d(z,\Phi_y(\set T))^2\,, \label{eq:NBNN} \end{equation} where $d(z,\set S)=\inf\{\Vert z-s\Vert_2\,:\,s\in\set S\}$ denotes the smallest Euclidean distance between $z$ and an element of $\set S\subset\set Z$, or in other terms it is the distance between $z$ and its nearest neighbor in $\set S$. Despite its effectiveness in terms of classification performance \cite{boiman2008defense}, $f_\mathtt{NBNN}$ has the drawback of being expensive at test time, due to the nearest-neighbor search. A possible way to reduce the complexity of this step consists in learning a small, finite set $\set W_y\subset\set Z$ of representative prototypes for each class $y\in\set Y$ to replace $\Phi_y(\set T)$. This idea was pursued by Fornoni \textit{et al.} \cite{fornoni2014scene} with a method named \textit{Na\"{\i}ve Bayes Non-Linear Learning} (NBNL). NBNL is developed from Eq.~\eqref{eq:NBNN} by replacing $\Phi_y(\set T)$ with the set of prototypes $\set W_y$ and by assuming $\set Z$ to be restricted to the unit ball. Under the latter assumption the bound $d(z,\set S)^2\geq 2-\omega(z,\set S)$ can be derived \cite{fornoni2014scene}, where: \begin{equation} \omega(z,\set S)= \left(\sum_{s\in\set S}|\langle z,s\rangle|_+^{q}\right)^{1/q}\,. \label{eq:s} \end{equation} Here, $\langle\cdot,\cdot\rangle$ denotes the dot product, $q\in[1,+\infty]$ and $|x|_+=\max(0,x)$. 
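Computationally, the score of Eq.~\eqref{eq:s} is a rectified dot product between the descriptor and each class prototype, pooled by a $q$-norm. A minimal NumPy sketch follows; the prototype and descriptor values are toy numbers, not learned quantities:

```python
import numpy as np

def omega(z, W, q=2.0):
    """Eq. (2): omega(z, W) = (sum_s |<z, s>|_+^q)^(1/q), where the
    rows of W hold the prototypes of a single class."""
    dots = W @ z                      # <z, s> for every prototype s
    rect = np.maximum(dots, 0.0)      # |.|_+ = max(0, .)
    return (rect ** q).sum() ** (1.0 / q)

W_y = np.array([[1.0, 0.0],           # two unit-norm toy prototypes
                [0.0, 1.0]])
score = omega(np.array([0.6, 0.8]), W_y)   # (0.36 + 0.64)^(1/2) = 1.0
```

Note that a descriptor pointing away from every prototype receives score $0$, since each negative dot product is clamped by $|\cdot|_+$.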
The NBNL classifier is finally obtained in the form given below by using the bound as a replacement of $d()^2$ in Eq.~\eqref{eq:NBNN} (and after simple algebraic manipulations): \begin{equation} f_\mathtt{NBNL}(x;\set W)=\argmax_{y\in\set Y}\sum_{z\in\phi(x)}\omega(z,\set W_y)\,, \label{eq:NBNL} \end{equation} where $\set W=\{\set W_y\}_{y\in \set Y}$ encompasses all the prototypes. In order to learn the prototypes $\set W_y$ for each $y\in\set Y$, Fornoni \textit{et al.} did not consider $f_\mathtt{NBNL}$ as the classifier and $\set T$ as the training set, but they considered (only at training time) a classifier having the form $f(z)=\argmax_{y\in\set Y}\omega(z,\set W_y)$ and an extended training set $\{(z,y)\,:z\in\Phi_y(\set T), y\in\set Y\}$, where each descriptor extracted from an image is promoted to a training sample. In this way they derived the equivalent of a Multiclass Latent Locally Linear SVM (ML3) that is trained using the algorithm in~\cite{fornoni2013multiclass}. \subsection{CNN-NBNL} \label{cnnnbnl} Motivated by the robustness of NBNN/NBNL models and by the recent success of deep architectures in addressing challenging visual tasks, Kuzborskij \textit{et al.} \cite{kuzborskij2016naive} introduced an approach, named CNN-NBNL, which combines the NBNL and CNN frameworks. Their method is an implementation of NBNL, where $\phi(x)$ is obtained by dividing an image $x\in\set X$ into patches at different scales and by employing a pre-trained CNN-based feature extractor~\cite{jia2014caffe} to compute a descriptor for each patch. 
In formal terms, if $g_\mathtt{CNN}:\set X\to\set Z$ is the CNN-based feature extractor that takes an input image/patch and returns a single descriptor, then $\phi(x)$ (see Sect.~\ref{sec:NBNN}) is given by \begin{equation} \phi_\mathtt{CNN}(x)=\{g_\mathtt{CNN}(\hat x)\,:\,\hat x\in \text{patches}(x)\}\,, \label{eq:phi-CNN} \end{equation} where $\text{patches}(x)\subset\set X$ returns a set of patches extracted from $x$ at multiple scales and reshaped to be compatible in terms of resolution with the input dimensionality required by the implementation of $g_\mathtt{CNN}$ (\textit{e.g.}\xspace CaffeNet~\cite{jia2014caffe} requires $227\times 227$). To learn the prototypes $\set W_y$ in~\cite{kuzborskij2016naive} a training objective similar to that of~\cite{fornoni2014scene} is adopted, but the optimization is performed using a stochastic version of ML3 (STOML3) that better scales to larger datasets. At test time, $f_\mathtt{NBNL}$ defined as in Eq.~\ref{eq:NBNL} is used with $\phi$ replaced by $\phi_\mathtt{CNN}$. By moving from hand-crafted features to CNN-based features, the performance of the NBNL classifier improves considerably. Nonetheless, the approach proposed in~\cite{kuzborskij2016naive} has two limitations: 1) it requires the extraction of patches for each image as a pre-processing step, and CNN-features are extracted \emph{sequentially} from each patch; 2) the CNN architecture is used as a mere feature extractor and the method lacks the advantage of an end-to-end trainable system. The first limitation has a negative impact on the computation time of the method, while the second leaves room for further performance gains. \subsection{Fully-Convolutional CNN-NBNL} \label{sec:fcnnbnl} To overcome the two limitations of CNN-NBNL mentioned above, in this work we introduce a fully-convolutional version of CNN-NBNL that is end-to-end trainable (Fig.~\ref{fig:method}). 
\vspace{3pt}\noindent\textbf{Fully-convolutional extension.} Extracting patches at multiple scales and extracting CNN features independently for each of them is a very costly operation, which severely impacts training and test time. In order to perform a similar operation but with a limited impact on computation time, we propose to employ a Fully-Convolutional CNN (FC-CNN)~\cite{long2015fully} to simulate the extraction of descriptors from multiple patches over the entire image. A FC-CNN can be derived from a standard CNN by replacing fully-connected layers with convolutional layers. By doing so, the network is able to map an input image of arbitrary size into a set of spatially-arranged output values (descriptors). To cover multiple scales, we simply aggregate descriptors that are extracted with the FC-CNN from images at different resolutions. In this way, as the receptive fields of the FC-CNN remain the same, changing the scale of the input image induces an implicit change in the scale of the descriptors. The number of obtained descriptors per image depends on the image resolution and can in general be controlled by properly shaping the convolutional layers: for instance, by increasing the stride of the last convolutional layer it is possible to reduce the number of descriptors that the FC-CNN returns. In the following, we denote by $g_\mathtt{FCN}(x;\theta)\subset\set Z$ the output of a FC-CNN parametrized by $\theta$ applied to an input image $x\in\set X$. As opposed to $g_\mathtt{CNN}$ defined in Sect.~\ref{cnnnbnl}, which returns a single descriptor, $g_\mathtt{FCN}(x;\theta)$ outputs a set of descriptors, one for each spatial location in the final convolutional layer of the FC-CNN. Each descriptor has a dimensionality that equals the number of output convolutional filters. We will also denote by $\eta(x)$ the number of descriptors that the FC-CNN generates for an input image $x$. 
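The count $\eta(x)$ follows from the standard layer-by-layer output-size arithmetic. The sketch below uses a hypothetical CaffeNet-like chain of (kernel, stride, padding) values, with the first fully-connected layer recast as a $6\times 6$ convolution; the exact topology is an assumption made for illustration (floor convention throughout), not the configuration of our network.

```python
def out_size(n, kernel, stride, pad=0):
    """Spatial output size of one conv/pool layer (floor convention)."""
    return (n + 2 * pad - kernel) // stride + 1

def eta(resolution, layers):
    """eta(x): number of descriptors the FC-CNN emits for a square
    input, i.e. one per spatial location of the last conv layer."""
    n = resolution
    for kernel, stride, pad in layers:
        n = out_size(n, kernel, stride, pad)
    return n * n

# Hypothetical CaffeNet-like topology: conv1, pool1, conv2, pool2,
# conv3-5, pool5, and fc6 recast as a 6x6 convolution.
LAYERS = [(11, 4, 0), (3, 2, 0), (5, 1, 2), (3, 2, 0),
          (3, 1, 1), (3, 1, 1), (3, 1, 1), (3, 2, 0), (6, 1, 0)]

eta_small = eta(227, LAYERS)   # single descriptor at the nominal resolution
eta_large = eta(451, LAYERS)   # an 8x8 grid of descriptors
```

With these shapes, a $227\times 227$ input yields a single descriptor, whereas a $451\times 451$ input yields $64$ descriptors, so re-scaling the input directly controls $\eta(x)$.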
Note that this number does not depend on the actual parametrization of the network, but only on its topology, which is assumed to be fixed, and on the resolution of the input image. \vspace{3pt}\noindent\textbf{End-to-end architecture.} The NBNL classifier that we propose and detail below can be implemented using layers that are commonly found in deep learning frameworks and can thus be easily stacked on top of a FC-CNN (see, Fig.~\ref{fig:architecture}). By doing so, we obtain an architecture that can be trained end-to-end. \begin{figure*} \centering\includegraphics[width=\textwidth]{plot-crop.pdf} \caption{Architecture of our fully-convolutional CNN-NBNL. We scale an input image $x\in\set X$ and obtain $\{\hat x_1,\ldots,\hat x_m\}=\text{scale}(x)$. The scaled versions of $x$ are forwarded in parallel through the net. The green block represents the FC-CNN. The gray blocks implement the NBNL classifier. The red blocks (top-left) are active only during training. Parameter $k$ represents the number of classes, $p$ the number of prototypes per class and $q$ the parameter in~Eq.\eqref{eq:s}. Further details about the building blocks are given hereafter. \texttt{FCN} is a FC-CNN. \texttt{conv}[W,C] is a $W\times W$ convolutional layer with $C$ filters. \texttt{relu} applies the ReLu non-linearity to each element. \texttt{pow}[E] raises each element to the power of $E$. \texttt{gconv}[G,W,C] is a grouped $W\times W$ convolutional layer with $G$ groups and $C$ filters (the filters are filled and fixed with 1s; biases are omitted). \texttt{reduce}[avg] averages out the spatial dimensions. \texttt{sum} performs the element-wise sum of the incoming lines. \texttt{argmax} returns the index of the maximum element. \texttt{softmax} applies the softmax operator along the input channels, for each spatial entry of each input line. 
\texttt{logloss}[$\frac{1}{\eta(\hat x_i)}$] sums up the log-loss computed along the input channels of each spatial entry of each input line, where each input line $i$ is weighted by $\frac{1}{\eta(\hat x_i)}$. } \label{fig:architecture} \vspace{-15pt} \end{figure*} Given an input image $x\in\set X$, we create a set of $m$ scaled versions of $x$, which we denote by $\text{scale}(x)\subset\set X$. Each scaled image $\hat x\in\text{scale}(x)$ is fed to the FC-CNN described before, yielding a set of descriptors $g_\mathtt{FCN}(\hat x;\theta)$. Instead of aggregating the descriptors from each scale, as done in Eq.~\eqref{eq:phi-CNN}, we keep them separated because they undergo a normalization step which avoids biasing the classifier towards scales that have a larger number of descriptors. The final form of our NBNL classifier is given by: \begin{equation} f_\mathtt{FCN\,NBNL}(x;\set W,\theta)=\argmax_{y\in\set Y}h(x;\set W_y,\theta)\,, \end{equation} where $h$, defined below, measures the likelihood of $x$ given the prototypes in $\set W_y$: \begin{equation} h(x;\set W_y,\theta)=\frac{1}{m}\sum_{\hat x\in\text{scale}(x)}\bar\omega(\hat x;\set W_y,\theta) \end{equation} and $\bar\omega$ is the scale-specific normalized score: \begin{equation} \bar\omega(\hat x;\set W_y,\theta)=\frac{1}{\eta(\hat x)}\sum_{z\in g_\mathtt{FCN}(\hat x;\theta)}\omega(z;\set W_y)\,. \end{equation} This normalization step is necessary to prevent scales that generate many descriptors from biasing the final likelihood. To train our network we define the following regularized empirical risk with respect to both the classifiers' parameters $\set W$ and the FC-CNN's parameters $\theta$: \[ R(\set W,\theta;\set T)=\frac{1}{|\set T|}\sum_{(x,y)\in\set T}\ell(h(x;\set W,\theta), y) + \lambda \Omega(\set W,\theta)\,.
\] Here, $h(x;\set W,\theta)=\{h(x;\set W_y,\theta)\}_{y\in\set Y}$, $\Omega$ is an $\ell_2$-regularizer acting on all the networks' parameters, and $\ell(u,y)$ with $u=\{u_y\}_{y\in\set Y}$, $u_y\in\mathbb R$, is the following loss function: \[ \ell(u,y)=-u_y+\log\sum_{y'\in\set Y}e^{u_{y'}}\,, \] obtained from the composition of the log-loss with the soft-max operator. Following \cite{fornoni2014scene,kuzborskij2016naive} we do not directly minimize $R(\set W,\theta;\set T)$ as defined above, but replace the loss terms with the following upper bound, obtained by applying Jensen's inequality: \[ \ell(h(x;\set W,\theta), y)\leq \frac{1}{m}\sum_{\hat x\in\text{scale}(x)}\frac{1}{\eta(\hat x)}\sum_{z\in g_\mathtt{FCN}(\hat x;\theta)}\ell(\omega(z,\set W),y)\,, \] with $\omega(z,\set W)=\{\omega(z,\set W_y)\}_{y\in \set Y}$. This is equivalent to promoting descriptors to training samples, as in \cite{fornoni2014scene,kuzborskij2016naive}. \section{EXPERIMENTAL RESULTS} \label{experiments} In this section, we evaluate the performance of our approach. In Sect.~\ref{sceneexperiments} we compare against the method in \cite{kuzborskij2016naive}, demonstrating the advantages of our end-to-end learning framework. In Sect.~\ref{coldexp} we assess the effectiveness of the proposed approach for the place categorization task, considering images acquired from different robotic platforms in various indoor environments and comparing with state-of-the-art approaches. Finally, we demonstrate the robustness of our model to different environmental conditions and sensors (Sect.~\ref{crossexp}) and to occlusions and image perturbations (Sect.~\ref{occexp}). Our evaluation has been performed on an NVIDIA GeForce GTX 1070 GPU, implementing our approach with the popular Caffe framework \cite{jia2014caffe}.
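Before turning to the comparisons, a minimal NumPy sketch of the scale-normalized scoring $h$ defined above may help fix ideas. Purely for illustration, $\omega(z;\set W_y)$ is taken here as the negative squared distance to the closest prototype in $\set W_y$ (a common NBNL-style choice); the actual $\omega$ of our architecture and the FC-CNN forward pass are abstracted away.

```python
import numpy as np

def omega(z, W_y):
    # Illustrative per-descriptor score: negative squared distance to the
    # closest of the p prototypes in W_y (shape p x d). The omega used in
    # the actual architecture may differ.
    return -np.min(np.sum((W_y - z) ** 2, axis=1))

def h(descriptors_per_scale, W_y):
    # descriptors_per_scale: one array (eta_i x d) per scale; eta_i may vary.
    # bar-omega: average omega over the eta(x_hat) descriptors of each scale;
    # h: average of the m per-scale scores.
    per_scale = [np.mean([omega(z, W_y) for z in Z]) for Z in descriptors_per_scale]
    return np.mean(per_scale)

def f_fcn_nbnl(descriptors_per_scale, W):
    # W: dict mapping class label y -> prototype matrix W_y (p x d)
    scores = {y: h(descriptors_per_scale, W_y) for y, W_y in W.items()}
    return max(scores, key=scores.get)
```

Note how the division by $\eta(\hat x)$ is realized by the per-scale mean, so a scale contributing 49 descriptors weighs no more in $h$ than one contributing 25.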
\subsection{Comparison with Holistic and Part-based CNN models} \label{sceneexperiments} In the first series of experiments we demonstrate the advantages of the proposed part-based model and compare it with (i) its non end-to-end counterpart (\textit{i.e.}, the CNN-NBNL method in \cite{kuzborskij2016naive}) and (ii) traditional CNN-based approaches not accounting for local representations. To implement \cite{kuzborskij2016naive} following the original paper, we split the input image into multiple patches, extracting features from the last fully-connected layer of a pre-trained CNN. The patches were extracted at three different scales (32, 64, 128 pixels) after the original image was rescaled (longest side 200 pixels). We adopted the sparse protocol in \cite{kuzborskij2016naive}, based on which features from 100 random patches are extracted. The features are equally distributed between the three scales and an additional descriptor representing the full image is considered. As representative of deep models based on holistic representations, we chose the successful approach of Zhou \textit{et al.} \cite{zhou2014learning,zhou2016places}: a CNN is pre-trained on huge datasets (\textit{i.e.}, ImageNet \cite{deng2009imagenet}, Places \cite{zhou2014learning,zhou2016places} or both in the hybrid configuration) and used as a feature extractor for learning a linear SVM model. Note that this is a strong baseline, widely used in the computer vision community for scene recognition tasks. To demonstrate the generality of our contribution, we tested all models considering three different base networks: the Caffe \cite{jia2014caffe} version of AlexNet \cite{krizhevsky2012imagenet}, VGG-16 \cite{simonyan2014very} and GoogLeNet \cite{szegedy2015going}. For AlexNet and VGG-16 we considered the networks pre-trained on both Places \cite{zhou2014learning,zhou2016places} and ImageNet \cite{deng2009imagenet} datasets (\textit{i.e.}, the hybrid configuration).
For GoogLeNet no pre-trained hybrid network was available, thus we took the model pre-trained on Places365. In order to fairly compare our model with the baseline method in \cite{kuzborskij2016naive}, our fully-convolutional network was designed to match the resolution of local patches adopted in \cite{kuzborskij2016naive}. To accomplish this, since a 128x128 patch covers 64\% of a 200x200 image, we rescaled the input image such that the receptive fields correspond to approximately 64\% of the input (\textit{i.e.}\xspace 355 pxls for CaffeNet and 350 pxls for VGG and GoogLeNet). The features at the other scales were obtained by upsampling the image twice with a deconvolutional layer. We extracted 25 local features for the largest scale (128x128 pxls), 36 for the medium and 49 for the smallest, for a total of 110 local descriptors. These numbers of features were obtained by regulating the stride of the last layers of the network. As in \cite{kuzborskij2016naive}, we extracted features at the last fully-connected layer level, applying batch normalization \cite{ioffe2015batch} before the classifier. Since the datasets considered in our evaluation are of small/medium size, fine-tuning was performed only on the last two layers of the network. The networks were trained with a fixed learning rate which was decreased twice by a factor of $0.1$. To decide the proper learning rate schedule and number of epochs, we performed parameter tuning on a separate validation set. As parameters of the NBNL classifier, we chose $k=10$ and $p=2$, applying a weight decay of $10^{-5}$ on the prototypes. Notice that in our model we considered 110 descriptors, while 100 were used for the baseline method in~\cite{kuzborskij2016naive}. However, we experimentally verified that a difference of 10 descriptors does not influence performance.
This confirms previous findings in~\cite{kuzborskij2016naive}, where Kuzborskij \textit{et al.}\xspace also tested their approach with a dense configuration employing 400 patches, without significant improvements in accuracy over the sampling protocol. We performed experiments on three different datasets, previously used in~\cite{kuzborskij2016naive}: Sports8~\cite{li2007and}, Scene15 \cite{lazebnik2006beyond} and MIT67~\cite{quattoni2009recognizing}. The Sports8 dataset~\cite{li2007and} contains 8 different indoor and outdoor sport scenes (rowing, badminton, polo, bocce, snowboarding, croquet, sailing and rock climbing). The number of images per category ranges from 137 to 200. We followed the common experimental setting, taking 70 images per class for training and 60 for testing. The Scene15 dataset \cite{lazebnik2006beyond} is composed of different categories of outdoor and indoor scenes. It contains a maximum of 400 grayscale images per category. We considered the standard protocol, taking 100 images per class for training and 100 for testing. MIT67~\cite{quattoni2009recognizing} is a common benchmark for indoor scene recognition. It contains images of 67 indoor scenes, with at least 100 images per class. We adopted the common experimental setting, using 80 images per class for training and 20 for testing. For each dataset we took 5 random splits, reporting the results as mean and standard deviation. Tab.~\ref{expScenes} shows the results of our evaluation. Mean and standard deviation are provided for our approach and \cite{kuzborskij2016naive}, while for the CNN models in~\cite{zhou2014learning,zhou2016places} we report results from the original papers. From the table it is clear that, for all base networks and datasets, our method outperforms the baselines. These results confirm the significant advantage of the proposed part-based approach over traditional CNN architectures which do not consider local representations.
Moreover, our results show that our model guarantees an improvement in performance over its non end-to-end counterpart CNN-NBNL, an improvement mostly due to the proposed end-to-end training strategy. A pre-trained network is able to extract powerful features, but they are not always discriminative when applied to specific tasks. End-to-end training overcomes this limitation by adapting the pre-trained features to a new target task, producing class-discriminative representations. This is shown in Fig.~\ref{fig:tsne}, where we plot t-SNE visualizations~\cite{maaten2008visualizing} of the fc7 features extracted at the 64x64 pixel scale with CNN-NBNL (Fig.~\ref{fig:tsne}.a) and with our approach (Fig.~\ref{fig:tsne}.b): while a pre-trained network fails at creating discriminative local features, our model is able to learn representations that cluster according to class labels. \begin{table}[t] \caption{Comparing global and part-based CNN models.
\vspace{-5pt}} \centering \scalebox{.9}{ \begin{tabular}{| c | c | c | c | c |} \hline Network& Method & Sports8 & Scene15 & MIT67\\ \hline \multirow{3}{*}{\specialcell{AlexNet\\Hybrid}} & \cite{zhou2014learning} & 94.22$\pm$0.78 &91.59$\pm$0.48& 70.8\\ & \cite{kuzborskij2016naive}& $95.29\pm0.61$&$92.42\pm0.64$&$73\pm0.36$ \\ & Ours & \textbf{95.58 $\pm$ 0.58}&\textbf{93.63 $\pm$ 0.90}&\textbf{74.98 $\pm$ 0.78} \\ \hline \multirow{3}{*}{\specialcell{GoogLeNet\\Places365}}& \cite{zhou2016places}& 91.00 &91.25& 73.30\\ & \cite{kuzborskij2016naive}& $93.08\pm1.78$&$92.29\pm0.59$&$73.14\pm1.43$ \\ & Ours & \textbf{94.46 $\pm$ 0.86}&\textbf{93.68 $\pm$ 0.57}&\textbf{80.55 $\pm$ 0.70} \\ \hline \multirow{3}{*}{\specialcell{VGG\\Hybrid}} & \cite{zhou2016places} & 94.17 &92.12& 77.63\\ & \cite{kuzborskij2016naive}& $94.79\pm0.42$&$92.97\pm0.68$&$77.62\pm0.97$ \\ & Ours & \textbf{97.04 $ \pm $ 0.27}&\textbf{95.12 $ \pm $ 0.41}&\textbf{82.49 $ \pm $ 1.35} \\ \hline \end{tabular} } \label{expScenes} \vspace{-10pt} \end{table} \begin{figure}[t] \centering \subfloat[\cite{kuzborskij2016naive}]{\includegraphics[width=0.24\textwidth]{tsne_scene_standard_64.pdf}\label{fig:cnn_nbnl_tsne}} \subfloat[Ours]{\includegraphics[width=0.24\textwidth]{tsne_scene_fconv_64.pdf}\label{fig:fullyconv_tsne}} \caption{t-SNE visualization of features extracted from 4 classes of the Scene15 dataset. } \label{fig:tsne} \vspace{-0.5cm} \end{figure} To further compare our approach and CNN-NBNL \cite{kuzborskij2016naive}, we also analyzed the computational time required during the test phase to process an increasing number of patches. Fig.~\ref{time_vs_patch} reports the results of our analysis: as expected, our fully-convolutional architecture is greatly advantageous over the CNN-NBNL model, which extracts local features independently patch-by-patch. We remark that a reduced classification time is fundamental for the adoption of the proposed model on robotic platforms operating in real environments.
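The number of descriptors $\eta(x)$, which drives both the timing above and the per-scale counts reported earlier, follows from the standard convolutional output-size formula. A quick sanity check, assuming CaffeNet-like values (a 227-pixel receptive field and an overall stride of 32; the strides actually used were tuned per network):

```python
def eta(side, rf=227, stride=32):
    # Descriptors returned by the FC-CNN for a square input of size `side`:
    # a g x g grid, with g given by the 'valid' output-size formula.
    # rf and stride are illustrative CaffeNet-like values, not the tuned ones.
    g = (side - rf) // stride + 1
    return g * g

print(eta(355))  # 5 x 5 grid -> 25 descriptors, the largest-scale count above
print(eta(710))  # the same image upsampled 2x yields a denser descriptor grid
```

Increasing the stride in the last layers shrinks $g$, which is how the per-scale descriptor counts are regulated in practice.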
\begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{n_time_vs_patch.pdf} \caption{Computational time at varying number of descriptors. } \vspace{-0.4cm} \label{time_vs_patch} \end{figure} \subsection{Robot Place Categorization} In this section we show the results of our evaluation when testing the proposed approach on publicly available robot vision datasets. These experiments aim at verifying the effectiveness of our fully-convolutional network and its robustness to varying environmental conditions and occlusions. \subsubsection{COLD dataset} \label{coldexp} We first tested our method on the COsy Localization Database (COLD) \cite{pronobis2009ijrr}. This database contains three datasets of indoor scenes acquired in three different laboratories from different robots. COLD-Freiburg contains 26 image sequences collected in the Autonomous Intelligent Systems Laboratory at the University of Freiburg with a camera mounted on an ActivMedia Pioneer-3 robot. COLD-Ljubljana contains 18 image sequences acquired from the camera of an iRobot ATRV-Mini platform at the Visual Cognitive Systems Laboratory of the University of Ljubljana. For COLD-Saarbr\"ucken, an ActivMedia PeopleBot was employed to gather 29 image sequences inside the Language Technology Laboratory at the German Research Center for Artificial Intelligence in Saarbr\"ucken. In our experiments we followed the protocol described in Rubio \textit{et al.} \cite{rubio2016comparison}, considering images of path 2 of each laboratory. These data exhibit significant variations in illumination conditions and time of data acquisition. Using path 2, there are 9 categories for COLD-Freiburg, 6 for COLD-Ljubljana and 8 for COLD-Saarbr\"ucken. We trained and tested on data collected in the same laboratory, considering 5 random splits and reporting the average values.
We compared our model with the methods proposed in \cite{rubio2016comparison}, since this work is one of the most recent studies adopting this dataset. In \cite{rubio2016comparison}, Rubio \textit{et al.} proposed to extract HOG features and to apply a dimensionality reduction technique before providing the features as input to different classifiers. As classifiers they considered linear SVM, Na\"{\i}ve Bayes (NB), Bayesian Network (BN) and the Tree Augmented Na\"{\i}ve Bayes (TAN). In our experiments, to train our model we adopted the same setting described in Sect.~\ref{sceneexperiments}, fine-tuning the last two layers of the network. The results are shown in Tab.~\ref{expCOLD}. Our model outperforms all the baselines in \cite{rubio2016comparison}, confirming the advantage of CNN-based approaches over traditional classifiers and hand-crafted features. The high accuracy of our method also demonstrates that the proposed fully-convolutional network is highly effective at discerning among different rooms, even under significant lighting and environmental changes. \begin{table}[t] \caption{Results on COLD dataset.\vspace{-5pt}} \centering \begin{tabular}{| l | c | c | c |} \hline Method & Freiburg & Saarbrücken & Ljubljana\\ \hline HOG+SVM \cite{rubio2016comparison} & 46.5 &44.9& 66.2\\ HOG+NB \cite{rubio2016comparison} & 54.6 &52.9& 62.6\\ HOG+TAN \cite{rubio2016comparison} & 69.5 & 72.6& 75.2\\ HOG+BN \cite{rubio2016comparison} & 82.3 &84.4& 88.5\\ Ours& \textbf{95.2}&\textbf{97.3}&\textbf{99.2} \\ \hline \end{tabular} \label{expCOLD} \vspace{-0.4cm} \end{table} \subsubsection{KTH-IDOL dataset} \label{crossexp} To further assess the ability of the proposed method to generalize across different robotic platforms and illumination conditions, we performed experiments on the KTH Image Database for rObot Localization (KTH-IDOL) \cite{luo2006idol2}. This dataset contains 12 image sequences collected by two robots (Dumbo and Minnie) in 5 different rooms.
The image sequences were collected over several days under three different illumination conditions: sunny, cloudy and night. Following \cite{wu2011centrist} we considered the first two sequences for each robot and weather condition, performing three different types of tests. First, we trained and tested using the same robot and the same weather condition, with one sequence used for training and the other for testing and vice versa. As a second experiment, we used the same robot for training and testing, varying the weather conditions of the two sets. In the last experiment we trained the classifier under one weather condition and tested it on a different robot under the same condition. Notice that, differently from Sect.~\ref{coldexp}, in this case the illumination changes are not present in the training set. Our model is trained with the same setting of Sect.~\ref{sceneexperiments}. In this case, to reduce overfitting and improve the generalization ability of our network, we apply data augmentation to the RGB channels, following the standard procedure introduced in \cite{krizhevsky2012imagenet}. We compared our method with three state-of-the-art approaches: (i) \cite{pronobis2006discriminative}, which used high-dimensional global histogram features as input for a $\chi^2$ kernel SVM; (ii) \cite{wu2011centrist}, which proposed the CENTRIST descriptor and performed nearest neighbor classification; and (iii) \cite{fazl2012histogram}, which also used the nearest neighbor classifier but with Histograms of Oriented Uniform Patterns (HOUP) as features. Tab.~\ref{expIDOL} shows the results of our evaluation. Our method outperforms all the baselines in the first and third series of experiments (same lighting). In particular, the large improvement in performance in the third experiment clearly demonstrates its ability to generalize over different input representations of the same scene, independently of the camera mounted on the robot.
These results suggest that it should be possible to train our model offline and apply it on arbitrary robotic platforms. In the second experiment, while the high classification accuracy demonstrates a significant robustness to lighting variations, our model achieves performance comparable to previous works, showing only a small advantage of CNN representations over traditional methods in case of illumination changes. \begin{table}[t] \caption{Results on KTH-IDOL dataset (D and M denote the names of the robot platforms Dumbo and Minnie).\vspace{-5pt}} \centering \begin{tabular}{| c | c | c || c | c | c | c | c |} \hline Train & Test & Lighting & \cite{pronobis2006discriminative}& \cite{wu2011centrist} & \cite{fazl2012histogram} & Ours \\ \hline D & D & Same & 97.26 & 97.62 & 98.24 &\textbf{98.61} \\ \hline M & M & Same & 95.51 & 95.35 & 96.61 & \textbf{97.32} \\ \hline D & D & Diff & 80.55 & 94.98 & \textbf{95.76} & 94.17 \\ \hline M & M & Diff & 71.90 & 90.17 & 92.01 & \textbf{93.62} \\ \hline D & M & Same & 66.63 & 77.78 & 80.05 & \textbf{87.05} \\ \hline M & D & Same & 62.20 & 72.44 & 75.43 & \textbf{88.51} \\ \hline \end{tabular} \label{expIDOL} \vspace{-.9cm} \end{table} \subsubsection{Household room dataset}\label{occexp} In the last series of experiments we tested the robustness of our model with respect to occlusions. We evaluate the performance of our approach on the recently introduced household room (or MIT8) dataset \cite{urvsivc2016part}. This dataset is a subset of MIT67 which contains 8 room categories: bathroom, bedroom, children room, closet, corridor, dining room, kitchen and living room. We used the setting provided in \cite{urvsivc2016part}, with 641 images for training and 155 for testing. The challenge proposed by Ur{\v{s}}i{\v{c}} \textit{et al.} \cite{urvsivc2016part} is to train the model on the original images and test its performance under various noisy conditions.
The conditions are: occlusion in the center of the image, occlusion on the right border, occlusion by a person, addition of an outside border, upside-down rotation, and cuts on the top or right part of the image (inducing aspect ratio changes). All the test sets were produced following the protocol in \cite{urvsivc2016part}, apart from the person-occlusion set, which was provided directly by the authors. We compare our approach with the part-based model developed by Ur{\v{s}}i{\v{c}} \textit{et al.} \cite{urvsivc2016part} and the global CNN-based model in \cite{zhou2014learning}. In \cite{urvsivc2016part}, selective search is used to extract informative regions inside the image, which are then provided as input to a pre-trained CNN. From these features, exemplar parts are learned for each category and used by a part-based mixture model for the final classification. The standard hybrid CaffeNet \cite{zhou2014learning} is employed as CNN architecture. For a fair comparison we adopted the same base architecture, extracting features at the last fully-connected layer before the classifier. In this case we used images rescaled to 256x256 as input, upsampling them twice to obtain descriptors at multiple scales. We extracted 45 descriptors: 4 for the smallest scale (256x256), 16 for the medium and 25 for the largest. The training procedure is the one described in Sect.~\ref{sceneexperiments} and the same parameters are used for the NBNL classifier, with batch normalization applied to the last layer. We trained our model 10 times, computing the average accuracy. The results of the evaluation are reported in Tab.~\ref{expMIT8}. As shown in the table, both our approach and the method in \cite{urvsivc2016part} achieve higher classification accuracy than the CNN model in \cite{zhou2014learning}, confirming the benefit of part-based modeling.
It is interesting to compare our approach with \cite{urvsivc2016part}: while our framework guarantees better performance under certain conditions (\textit{e.g.} original frames, person occlusion), the method in \cite{urvsivc2016part} is more robust to changes of the aspect ratio (\textit{e.g.} cuts in the image) and scale (\textit{e.g.} outside border addition). Interestingly, when the occlusion is not created by artificially obscuring patches (person occluder), our model achieves higher performance than \cite{urvsivc2016part}. Conversely, in the case of the outside border experiments, almost half of the image is black and the real content is reduced to a very small scale. In this (artificial) setting, \cite{urvsivc2016part} outperforms our model. For the sake of completeness, we also report the confusion matrix associated with our results on the original frames (Fig.~\ref{fig:cm_mit8}). \begin{table}[t] \caption{Results on MIT8 dataset.} \centering \begin{tabular}{| l | c | c | c |} \hline Experiment & \cite{zhou2014learning} & \cite{urvsivc2016part} & Ours \\ \hline original & 86.45 &85.16& \textbf{89.10} \\ outside border & 62.58 & \textbf{85.16} & 74.65 \\ black occluder, right & 78.71 & 80.00& \textbf{80.53} \\ black occluder, central & 61.94 &\textbf{69.68}& 67.74 \\ person occluder, central & 59.35&68.39& \textbf{72.45} \\ cut right half & 62.58 &64.52 &\textbf{65.16} \\ cut top half & 52.26 &\textbf{68.39}& 63.16 \\ upside down & 52.26 &59.35& \textbf{63.94} \\ \hline \end{tabular} \label{expMIT8} \vspace{-0.3cm} \end{table} \begin{figure}[t] \includegraphics[width=0.90\columnwidth,center]{cm.pdf} \caption{Confusion matrix obtained with our model classifying the original images of the MIT8 dataset.} \label{fig:cm_mit8} \vspace{-0.4cm} \end{figure} \section{CONCLUSIONS} We presented a novel deep learning architecture for addressing the semantic place categorization task.
By seamlessly integrating the CNN and NBNN frameworks, our approach makes it possible to learn local deep representations, enabling robust scene recognition. The effectiveness of the proposed method is demonstrated on various benchmarks. We show that our approach outperforms traditional CNN baselines and previous part-based models which use CNNs purely as feature extractors. In robotics scenarios, our deep network achieves state-of-the-art results on three different benchmarks, demonstrating its robustness to occlusions, environmental changes and different sensors. As future work, we plan to extend this model in order to handle multimodal inputs (\textit{e.g.} considering range sensors in addition to RGB cameras). \vspace{-10pt} \bibliographystyle{IEEEtran}
\section{Introduction} Besides the recent high precision measurements of the $W$ mass~\cite{Karlen98,Dorigo98}, $M_W$, the most important input into precision tests of electroweak theory continues to come from the $Z$ factories LEP~1~\cite{Karlen98} and SLC~\cite{Baird98}. The vanguard of the physics program at LEP~1 is the analysis of the $Z$ lineshape. Its parameters are the $Z$ mass, $M_Z$, the total $Z$ width, $\Gamma_Z$, the hadronic peak cross section, $\sigma_{\rm had}$, and the ratios of hadronic to leptonic decay widths, $R_\ell = {\Gamma({\rm had})\over \Gamma(\ell^+\ell^-)}$, where $\ell = e$, $\mu$, or $\tau$. They are determined in a common fit with the leptonic forward-backward (FB) asymmetries, $A_{FB} (\ell) = {3\over 4} A_e A_\ell$. With $f$ denoting the fermion index, \begin{equation} A_f = {2 v_f a_f\over v_f^2 + a_f^2} \end{equation} is defined in terms of the vector ($v_f = I_{3,f} - 2 Q_f \sin^2 \theta_f^{\rm eff}$) and axial-vector ($a_f = I_{3,f}$) $Zf\bar{f}$ coupling; $Q_f$ and $I_{3,f}$ are the electric charge and third component of isospin, respectively, and $\sin^2 \theta_f^{\rm eff} \equiv \bar{s}^2_f$ is an effective mixing angle. The polarization of the electron beam at the SLC allows for competitive and complementary measurements with a much smaller number of $Z$'s than at LEP. In particular, the left-right (LR) cross section asymmetry, $A_{LR} = A_e$, represents the most precise determination of the weak mixing angle by a single experiment (SLD).~\cite{Baird98} Mixed FB-LR asymmetries, $A^{FB}_{LR} (f) = {3\over 4} A_f$, single out the final state coupling of the $Z$ boson. For several years there has been an experimental discrepancy at the $2 \sigma$ level between $A_\ell$ from LEP and the SLC. 
With the 1997/98 high statistics run at the SLC, and a revised value for the FB asymmetry of the $\tau$ polarization, ${\cal P}^{FB}_\tau$, the two determinations are now consistent with each other, \begin{equation} \begin{array}{l} \label{aell} A_\ell ({\rm LEP}) = 0.1470 \pm 0.0027, \\ A_\ell ({\rm SLD}) = 0.1503 \pm 0.0023. \end{array} \end{equation} \begin{table}[p] \caption{Principal precision observables from CERN, FNAL, SLAC, and elsewhere. Shown are the experimental results, the SM predictions, and the pulls. The SM errors are from the uncertainties in $M_Z$, $\ln M_H$, $m_t$, $\alpha (M_Z)$, and $\alpha_s$. They have been treated as Gaussian and their correlations have been taken into account. $\bar{s}_\ell^2 (Q_{FB} (q))$ is the weak mixing angle from the hadronic charge asymmetry; $R^-$ and $R^\nu$ are cross section ratios from deep inelastic $\nu$-hadron scattering; $g_{V,A}^{\nu e}$ are effective four-Fermi coefficients in $\nu$-e scattering; and the $Q_W$ are the weak charges from parity violation measurements in atoms. The uncertainty in the $b\rightarrow s\gamma$ observable includes theoretical errors from the physics model, the finite photon energy cut-off, and from uncalculated higher order effects. There are other precision observables which are not shown but included in the fits. Very good agreement with the SM is observed. Only $A_{LR}$ and the two measurements sensitive to $A_b$ discussed in the text, show some deviation, but even those are below $2\sigma$. 
\label{zpole}} \vspace{0.2cm} \begin{center} \footnotesize \begin{tabular}{|lcccr|} \hline Quantity & Group(s) & Value & Standard Model & pull \\ \hline $M_Z$ \hspace{0pt} [GeV]& LEP &$ 91.1867 \pm 0.0021 $&$ 91.1865 \pm 0.0021 $&$ 0.1$ \\ $\Gamma_Z$ \hspace{3pt} [GeV]& LEP &$ 2.4939 \pm 0.0024 $&$ 2.4957 \pm 0.0017 $&$-0.8$ \\ $\sigma_{\rm had}$ [nb] & LEP &$ 41.491 \pm 0.058 $&$ 41.473 \pm 0.015 $&$ 0.3$ \\ $R_e$ & LEP &$ 20.783 \pm 0.052 $&$ 20.748 \pm 0.019 $&$ 0.7$ \\ $R_\mu$ & LEP &$ 20.789 \pm 0.034 $&$ 20.749 \pm 0.019 $&$ 1.2$ \\ $R_\tau$ & LEP &$ 20.764 \pm 0.045 $&$ 20.794 \pm 0.019 $&$-0.7$ \\ $A_{FB} (e)$ & LEP &$ 0.0153 \pm 0.0025 $&$ 0.0161 \pm 0.0003 $&$-0.3$ \\ $A_{FB} (\mu)$ & LEP &$ 0.0164 \pm 0.0013 $&$ $&$ 0.2$ \\ $A_{FB} (\tau)$ & LEP &$ 0.0183 \pm 0.0017 $&$ $&$ 1.3$ \\ \hline $R_b$ & LEP + SLD &$ 0.21656\pm 0.00074$&$ 0.2158 \pm 0.0002 $&$ 1.0$ \\ $R_c$ & LEP + SLD &$ 0.1735 \pm 0.0044 $&$ 0.1723 \pm 0.0001 $&$ 0.3$ \\ $A_{FB} (b)$ & LEP &$ 0.0990 \pm 0.0021 $&$ 0.1028 \pm 0.0010 $&$-1.8$ \\ $A_{FB} (c)$ & LEP &$ 0.0709 \pm 0.0044 $&$ 0.0734 \pm 0.0008 $&$-0.6$ \\ $A_b$ & SLD &$ 0.867 \pm 0.035 $&$ 0.9347 \pm 0.0001 $&$-1.9$ \\ $A_c$ & SLD &$ 0.647 \pm 0.040 $&$ 0.6676 \pm 0.0006 $&$-0.5$ \\ \hline $A_{LR} + A_\ell$ & SLD &$ 0.1503 \pm 0.0023 $&$ 0.1466 \pm 0.0015 $&$ 1.6$ \\ ${\cal P}_\tau: A_e+A_\tau$ & LEP &$ 0.1452 \pm 0.0034 $&$ $&$-0.4$ \\ $\bar{s}_\ell^2 (Q_{FB})$ & LEP &$ 0.2321 \pm 0.0010 $&$ 0.2316 \pm 0.0002 $&$ 0.5$ \\ \hline $m_t$ \hspace{6pt} [GeV]& Tevatron &$173.8 \pm 5.0 $&$171.4 \pm 4.8 $&$ 0.5$ \\ $M_W$ \hspace{0pt} [GeV]& all &$ 80.388 \pm 0.063 $&$ 80.362 \pm 0.023 $&$ 0.4$ \\ \hline $R^-$ & NuTeV &$ 0.2277 \pm 0.0021 \pm 0.0007 $&$ 0.2297 \pm 0.0003 $&$-0.9$\\ $R^\nu$ & CCFR &$ 0.5820 \pm 0.0027 \pm 0.0031 $&$ 0.5827 \pm 0.0005 $&$-0.2$\\ $R^\nu$ & CDHS &$ 0.3096 \pm 0.0033 \pm 0.0028 $&$ 0.3089 \pm 0.0003 $&$ 0.2$\\ $R^\nu$ & CHARM &$ 0.3021 \pm 0.0031 \pm 0.0026 $&$ $&$-1.7$\\ \hline $g_V^{\nu e}$ & all &$ -0.041 \pm 
0.015 $&$ -0.0395 \pm 0.0004 $&$-0.1$\\ $g_A^{\nu e}$ & all &$ -0.507 \pm 0.014 $&$ -0.5063 \pm 0.0002 $&$-0.1$\\ \hline $Q_W({\rm Cs})$& Boulder &$ -72.41 \pm 0.25\pm 0.80 $&$ -73.10 \pm 0.04 $&$ 0.8$\\ $Q_W({\rm Tl})$& all &$-114.8 \pm 1.2 \pm 3.4 $&$-116.7 \pm 0.1 $&$ 0.5$\\ \hline ${\Gamma (b\rightarrow s\gamma)\over \Gamma (b\rightarrow c e\nu)}$& CLEO &$ 3.26^{+0.75}_{-0.68} \times 10^{-3} $&$ 3.14^{+0.19}_{-0.18} \times 10^{-3} $&$ 0.1$\\ \hline \end{tabular} \end{center} \end{table} \noindent The LEP value is from $A_{FB}(\ell)$, ${\cal P}_\tau$, and ${\cal P}^{FB}_\tau$, while the SLD value is from $A_{LR}$ and $A^{FB}_{LR} (\ell)$. The data is consistent with lepton universality, which is assumed here. There remains a $2.5 \sigma$ discrepancy between the two most precise determinations of $\bar{s}^2_\ell$, i.e.\ $A_{LR}$ and $A_{FB} (b)$ (assuming no new physics in $A_b$). Of particular interest are the results on the heavy flavor sector~\cite{Karlen98} including $R_q = {\Gamma (q\bar{q}) \over \Gamma ({\rm had})}$, $A_{FB} (q)$, and $A^{FB}_{LR} (q)$, with $q = b$ or $c$. At present, there is some discrepancy in $A^{FB}_{LR} (b) = {3\over 4} A_b$ and $A_{FB} (b) = {3\over 4} A_e A_b$, both at the $2 \sigma$ level. Using the average of Eqs.~(\ref{aell}), $A_\ell = 0.1489 \pm 0.0018$, both can be interpreted as measurements of $A_b$. From $A_{FB} (b)$ one would obtain $A_b = 0.887 \pm 0.022$, and the combination with $A^{FB}_{LR} (b) = {3\over 4} (0.867 \pm 0.035)$ would yield $A_b = 0.881 \pm 0.019$, which is almost $3 \sigma$ below the SM prediction. Alternatively, one could use $A_\ell ({\rm LEP})$ above (which is closer to the SM prediction) to determine $A_b ({\rm LEP}) = 0.898 \pm 0.025$, and $A_b = 0.888 \pm 0.020$ after combination with $A^{FB}_{LR} (b)$, i.e., still a $2.3 \sigma$ discrepancy. 
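The $A_b$ values quoted above follow from $A_{FB}(b) = \frac{3}{4} A_e A_b$ by simple error propagation and inverse-variance averaging; a short numerical check (Gaussian, uncorrelated errors assumed):

```python
import math

def ab_from_afb(afb, dafb, ae, dae):
    # A_FB(b) = (3/4) A_e A_b  =>  A_b = (4/3) A_FB(b) / A_e,
    # with relative errors added in quadrature.
    ab = (4.0 / 3.0) * afb / ae
    return ab, ab * math.hypot(dafb / afb, dae / ae)

def weighted_mean(measurements):
    # Inverse-variance weighted average of (value, error) pairs.
    w = [1.0 / s ** 2 for _, s in measurements]
    m = sum(wi * v for wi, (v, _) in zip(w, measurements)) / sum(w)
    return m, 1.0 / math.sqrt(sum(w))

ab_lep = ab_from_afb(0.0990, 0.0021, 0.1489, 0.0018)  # A_FB(b) with the average A_ell
ab_sld = (0.867, 0.035)                               # from A_LR^FB(b) = (3/4) A_b
print("A_b from A_FB(b): %.3f +- %.3f" % ab_lep)      # 0.887 +- 0.022
print("combined:         %.3f +- %.3f" % weighted_mean([ab_lep, ab_sld]))
```

The combined value comes out close to the $0.881 \pm 0.019$ quoted above; small differences in the last digit reflect rounding of the inputs.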
An explanation of the 5--6\% deviation in $A_b$ in terms of new physics in loops would need a 25--30\% radiative correction to $\hat\kappa_b$, defined by $\bar{s}^2_b \equiv \hat\kappa_b\sin^2\hat\theta_{\overline{\rm MS}} (M_Z)$. Only a new type of physics which couples at the tree level preferentially to the third generation~\cite{Erler95}, and which does not contradict $R_b$ (including the off-peak measurements by DELPHI~\cite{Abreu96}), can conceivably account for a low $A_b$. Given this and that none of the observables deviates by $2 \sigma$ or more, we can presently conclude that there is no compelling evidence for new physics in the precision observables, some of which are listed in Table~\ref{zpole}. \section{Bayesian Higgs mass inference} The data show a strong preference for a low $M_H \sim {\cal O} (M_Z)$, \begin{equation} \label{mh_fit} M_H = 107^{+67}_{-45} \mbox{ GeV}, \end{equation} where the central value (of the global fit to all precision data, including $m_t$) maximizes the likelihood, $N e^{-\chi^2 (M_H)/2}$. Correlations with other parameters, $\xi^i$, are accounted for, since minimization w.r.t. these is understood, $\chi^2 \equiv \chi^2_{\rm min}$. Bayesian methods, on the other hand, are based on Bayes' theorem~\cite{Bayes63}, \begin{equation} \label{Bayes} p(M_H | {\rm data}) = \frac{p({\rm data}| M_H) p(M_H)}{p({\rm data})}, \end{equation} which must be satisfied once the {\em likelihood\/}, $p({\rm data}| M_H)$, and {\em prior\/} distribution, $p(M_H)$, are specified. $p({\rm data}) \equiv \int p({\rm data}| M_H) p(M_H)\, dM_H$ in the denominator provides for the proper normalization of the {\em posterior\/} distribution on the l.h.s. The prior can contain additional information not included in the likelihood model, or be chosen to be {\em non-informative}. Occasionally, the Bayesian method is criticized for the need of a prior, which would introduce unnecessary subjectivity into the analysis.
Indeed, care and good judgement are needed, but the same is true for the likelihood model, which has to be specified in both approaches. Moreover, it is appreciated among Bayesian practitioners that the explicit presence of the prior can be advantageous: it manifests model assumptions and allows for sensitivity checks. From the theorem~(\ref{Bayes}) it is also clear that the maximum likelihood method corresponds, mathematically, to a particular choice of prior. Thus Bayesian methods differ rather in attitude: by their strong emphasis on the entire posterior distribution and by their first-principles setup. Given extra parameters, $\xi^i$, the distribution function of $M_H$ is defined as the marginal distribution, $p(M_H|{\rm data}) = \int p(M_H, \xi^i | {\rm data}) \prod_i p(\xi^i) d \xi^i$. If the posterior factorizes, $p(M_H, \xi^i) = p(M_H) p(\xi^i)$, the $\xi^i$ dependence can be ignored. If not, but $p(\xi^i | M_H)$ is (approximately) multivariate normal, then \begin{equation} \chi^2 (M_H,\xi^i) = \chi^2_{\rm min} (M_H) + {1\over 2} \frac{\partial^2 \chi^2 (M_H)} {\partial \xi_i \partial \xi_j} (\xi^i - \xi^i_{\rm min} (M_H)) (\xi^j - \xi^j_{\rm min} (M_H)). \end{equation} The latter applies to our case, where $\xi^i = (m_t,\alpha_s,\alpha(M_Z))$. Integration yields \begin{equation} p(M_H | {\rm data}) \sim \sqrt{\det E}\, e^{- \chi^2_{\rm min} (M_H)/2}, \end{equation} where the $\xi^i$ error matrix, $E = (\frac{\partial^2 \chi^2 (M_H)} {\partial \xi_i \partial \xi_j})^{-1}$, introduces a correction factor with a mild $M_H$ dependence. It corresponds to a shift relative to the standard likelihood model, $\chi^2 (M_H) = \chi^2_{\rm min}(M_H) + \Delta \chi^2 (M_H)$, where \begin{equation} \Delta \chi^2 (M_H) \equiv \ln \frac{\det E (M_H)}{\det E (M_Z)}. \end{equation} For example, $\Delta \chi^2 (300 \mbox{ GeV}) \sim 0.1$, which would {\em tighten} the $M_H$ upper limit by at most a few GeV. At present, we neglect this effect.
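The mechanics of extracting one-sided posterior limits from such a fit can be sketched in a few lines. The parabolic $\chi^2(\log M_H)$ below is a toy stand-in for the actual global-fit likelihood (its minimum and curvature are illustrative, not the fit results), and a prior uniform in $\log M_H$ is assumed; only the inversion of the posterior CDF mirrors the procedure in the text:

```python
import numpy as np

def upper_limits(chi2, log_mh, levels=(0.90, 0.95, 0.99)):
    """Posterior quantiles of M_H for a prior uniform in log(M_H)."""
    # posterior density on the log-mass grid: (flat) prior times likelihood
    post = np.exp(-0.5 * (chi2 - chi2.min()))
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    # invert the cumulative distribution at the requested credibility levels
    return [float(np.exp(np.interp(lv, cdf, log_mh))) for lv in levels]

# toy chi^2(M_H): parabola in log(M_H) with minimum near 100 GeV (illustrative)
log_mh = np.linspace(np.log(10.0), np.log(1000.0), 4000)
chi2 = ((log_mh - np.log(100.0)) / 0.45) ** 2
lim90, lim95, lim99 = upper_limits(chi2, log_mh)
```

With a sharper or shifted toy $\chi^2$ the quantiles move accordingly; folding in an exclusion curve amounts to multiplying `post` by the corresponding prior weight before forming the CDF.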
We choose $p(M_H)$ as the product of $M_H^{-1}$, corresponding to a uniform (non-informative) distribution in $\log M_H$, times the exclusion curve from LEP~2.~\cite{McNamara98} This curve is from Higgs searches at center of mass energies up to 183 GeV. We find the 90 (95, 99)\% confidence upper limits, \begin{equation} \label{mh_limits} M_H < 220 \mbox{ (255, 335) GeV}. \end{equation} Theory uncertainties from uncalculated higher orders increase the 95\% CL by about 5~GeV. These limits are robust within the SM, but we caution that the results on $M_H$ are strongly correlated with certain new physics parameters~\cite{Erler99}. The one-sided confidence interval~(\ref{mh_limits}) is not an exclusion limit. For example, the 95\% upper limit of the standard uniform distribution, $x \in [0,1]$, is at $x = 0.95$, but all values of $x$ are equally likely, and $x > 0.95$ cannot be excluded. If there is a discrete set of competing hypotheses, $H_i$, one can use Bayes factors, $p({\rm data} | H_i)/p({\rm data} | H_j)$, for comparison. For example, LEP~2 rejects a standard Higgs boson with $M_H < 90$~GeV at the 95\% CL, because \begin{equation} \frac{p({\rm data} | M_H = M_0)}{p({\rm data} | M_H \neq M_0)} < 0.05 \hspace{20pt} \forall\; M_0 < 90 \hbox{ GeV}. \end{equation} On the other hand, the probability for $M_H < 90$~GeV is only $5\times 10^{-4}$. One could similarly note, that $p(M_H = M_0) < 0.05\, p(M_H = 107 \hbox{ GeV})$ for $M_0 > 334$ GeV; but the (arbitrary) choice of the best fit $M_H$ value as reference hypothesis is hardly justifiable. This affirms that variables continuously connecting a set of hypotheses should be treated in a fully Bayesian analysis. \section*{Acknowledgement} I would like to thank the organizers of WIN 99 for a very pleasant and memorable meeting and Paul Langacker for collaboration. \section*{References}
\section*{Acknowledgements} \noindent The authors wish to thank Professor Philip K. Maini for helpful comments and feedback on the manuscript. GC is supported by the EPSRC and MRC Centre for Doctoral Training in Systems Approaches to Biomedical Science and by Cancer Research UK. C.Z. acknowledges support from the Breast Cancer Research Foundation (BCRF). P.G.K. acknowledges support from the Leverhulme Trust via a Visiting Fellowship and thanks the Mathematical Institute of the University of Oxford for its hospitality during part of this work. \bibliographystyle{elsarticle-num} \section{Spatial Model} \label{AppendixA} \noindent We outline here the set-up for the 1D simulations presented in Section \ref{conclusion}. As a full description of the spatial model goes beyond the scope of the present work, we focus on the main changes to~(\ref{mixedmodel})-(\ref{eq_ad}). We now view the oxygen concentration $c$ as a dependent variable, rather than a prescribed function. We suppose that oxygen is supplied to the region by blood vessels on the domain boundary $\partial \Omega_2$ (see Figure~\ref{schematicspatial}). Oxygen diffuses from the boundary into the tissue where it is consumed by the tumour cells at rates which depend on their phenotype and the local oxygen concentration. The evolution of the dimensionless cell density, $n=n(\vec{x},z,t)$, is driven by a phenotypic flux of the same form as in Equation~(\ref{mixedmodel}), but a spatial flux is included to account for random motion in the spatial dimension. \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{figures/model/modeldiagram2.pdf} \caption{Schematic representation of the phenotypic and spatial model.} \label{schematicspatial} \end{figure} As shown in Figure \ref{schematicspatial}, we consider a fixed tissue slice where the oxygen supply (i.e. vasculature) is confined to one of the tissue boundaries. Given the assumed symmetry of the problem, we can consider a 1D Cartesian geometry with $x\in[0,L]$.
The spatial model is defined by the following system of coupled PDEs: \begin{subequations} \begin{align} \hspace{-10mm} \frac{\partial n}{\partial t}=\underbrace{ D_N \frac{\partial^2 n}{\partial x^2}}_{spatial \hspace{1mm} flux}+ \frac{\partial }{\partial z} \left(\theta \frac{\partial n}{\partial z}-n v_z(z,c)\right)+ F(z,c,\phi,t) n,\\ \frac{\partial c}{\partial t} = D_C\frac{\partial^2 c}{\partial x^2}-\Gamma(t,x,c),\label{ox}\\ \theta \frac{\partial n}{\partial z}-n v_z = 0, \qquad z\in\left\{0,1\right\},\, x\in [0,L],\, t>0,\\ \left.\frac{\partial n}{\partial x}\right|_{x=0}=\left.\frac{\partial n}{\partial x}\right|_{x=L}=0, \quad z\in(0,1),\, t>0,\\ \left.\frac{\partial c}{\partial x}\right|_{x=L}=0, \quad c(0,t)=c_{\infty}, \quad t>0,\\[2pt] n(x,z,0)= n_0(x,z) \quad x\in[0,L],\, z\in(0,1),\\[2mm] c(x,0)= c_0(x) \quad x\in[0,L],\\ \phi(x,t)=\int_0^1 n(x,z,t) \, dz,\\ \Gamma(t,x,c)=\int_0^1 \gamma(z,c) n(x,z,t) \, dz,\\ \begin{aligned} F(z,c,\phi,t)= p(z,c)\left(1-\phi\right) -f(z) - \underbrace{g H(c_N-c)}_{necrosis}\\-\sum^{N}_{i=1} \log\left(\frac{1}{SF(z,c)}\right)\delta(t-t_i). \end{aligned} \end{align}\label{spatial_mod} \end{subequations} In Equation~(\ref{spatial_mod}), $D_N$ and $D_C$ are the assumed constant spatial diffusion coefficients for the cells and oxygen, respectively, while $\gamma$ denotes the rate at which cells of phenotype $z$ consume oxygen and $\Gamma$ the net rate of oxygen consumption at position $x$ and time $t$. The advection velocity $v_z$ is as defined by Eq.~(\ref{eq_ad}), while the fitness function $F$ is analogous to that defined in Section \ref{fit}, with an additional term to account for necrosis. The latter is assumed to occur at a constant rate $g \geq 0$, independent of cell phenotype, when the oxygen concentration falls below a threshold value, $c_N \geq 0$.
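For intuition about how a hypoxic region arises in this geometry, the steady state of Equation~(\ref{ox}) can be written in closed form if the consumption rate is frozen at a constant value $\gamma_0$ (a deliberate simplification: in the full model $\Gamma$ depends on the evolving phenotypic distribution). Integrating $D_C c'' = \gamma_0$ once with $c'(L)=0$ gives $c'(x)=(\gamma_0/D_C)(x-L)$, and once more with $c(0)=c_\infty$ gives a quadratic profile. A minimal sketch, using $D_C$ and $L$ from Table~\ref{param_set} but an illustrative $\gamma_0$ (not a fitted model parameter):

```python
import numpy as np

def oxygen_steady_state(x, D_c, gamma0, L, c_inf):
    """Closed-form steady state of D_c c'' = gamma0 on [0, L]
    with c(0) = c_inf and the no-flux condition c'(L) = 0."""
    return c_inf + (gamma0 / D_c) * (0.5 * x**2 - L * x)

# D_c (mm^2/hr) and L (mm) as in the parameter table; gamma0 (1/hr, acting on
# the dimensionless c) is chosen only so that the far boundary becomes hypoxic
D_c, L, c_inf, c_H = 0.684, 0.45, 1.0, 0.3
gamma0 = 5.4
x = np.linspace(0.0, L, 500)
c = oxygen_steady_state(x, D_c, gamma0, L, c_inf)
hypoxic_onset = x[np.argmax(c < c_H)]  # first grid point with c below c_H
```

The profile decreases monotonically from the vessel at $x=0$ and flattens at $x=L$; cells beyond `hypoxic_onset` sit below the hypoxic threshold $c_H$, which is the mechanism behind the hypoxic niche discussed in the main text.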
We also modify the definition of the survival fraction $SF$ given in \S\ref{fit} (see Equation~(\ref{lq_model})) to account for the \textit{oxygen-enhancement ratio} (OER)~\cite{lewin,lewin2}. According to the \textit{oxygen fixation hypothesis} \cite{HallEricJ2012Rftr}, part of the biological damage induced by radiation is indirect, being mediated by the presence of free radicals. Thus, when oxygen is limited, radio-sensitivity is accordingly reduced. Based on experiments, the range of oxygen concentrations at which this effect is relevant corresponds to more severe levels of hypoxia (where $c\sim 0.5\%$ or lower). We do not consider such situations for the well-mixed model, where we consider mild hypoxia. However, accounting for the OER will be important for the spatially extended model. Recall from Section \ref{hyp cond} that hypoxia is a favourable \textit{niche} for CSCs. Therefore the OER will endow them with additional protection from radiation. Denoting by $c^R_H$ the oxygen threshold at which the OER becomes active, we use the following functional form for the survival fraction when simulating the spatially-extended model: \begin{align} SF(z,c)=\begin{cases} \exp\left[-\alpha(z)d-\beta(z) d^2\right] \quad c>c^R_H\\[2mm] \exp\left[-\myfrac[2pt]{\alpha(z)}{OER}d-\myfrac[2pt]{\beta(z)}{OER^2} d^2\right] \quad c<c^R_H. \end{cases}\label{SF_space} \end{align} In Equation (\ref{SF_space}), $\alpha$ and $\beta$ are defined by~(\ref{radiosensitivity}). We note that in the main text, we consider $c=1$ (normoxia) and $c=0.2$ (hypoxia), so that the OER does not impact cell responses to RT. For the well-mixed model, the oxygen concentration is typically maintained at a prescribed, constant value. By contrast, for the spatially extended model, we suppose that the tumour cells consume oxygen at a rate $\gamma$ which depends on their phenotype, $z$. 
As mentioned previously, stem cells are known to have a glycolytic metabolism and, thus, we assume that they consume less oxygen than cancer cells. Consequently, we consider $\gamma$ to be a monotonically increasing function of the phenotypic variable $z$ which asymptotes to its maximum value for $z>0.5$: \begin{equation} \gamma(z,c)= H(c-c_N)\left[\gamma_{max} -\frac{\gamma_{max}}{2} e^{-k_{\gamma} z}\right].\label{gamma} \end{equation} In Equation (\ref{gamma}), $H=H(x)$ is the Heaviside function (i.e. $H(x)=1$ if $x>0$ and $H(x)=0$ if $x\leq 0$). In order to continue their normal function, glycolytic cells consume oxygen, albeit at a lower rate. Motivated by results presented in \cite{consumption}, we assume that glycolytic CSCs consume oxygen at approximately half the rate of terminally differentiated cancer cells. \subsection{Parameters} \label{App_param} \begin{table}[h!] \centering\footnotesize \begin{tabular}{l c@{\hspace{0.5cm}} l@{\hspace{0.5cm}} l@{\hspace{0.5cm}} c c} \toprule[2pt]\addlinespace[2pt] &Parameter & Value & Units & Reference & Label\\[2pt] \toprule\addlinespace[4pt] Phenotypic Diffusion&$\theta$ & $5\times 10^{-6}$& $hr^{-1}$ & -& \\[3pt] \hline\addlinespace[3pt] \multirow{3}{20mm}{\centering Advection velocity $v_z$ Eq~(\ref{eq_ad})}& $V_\pm$ & $\left\{2,4,8\right\}\times 10^{-4}$& $hr^{-1}$ & -& \\ & $\xi_\pm$ & $\left\{0.05,0.1,0.5\right\}$& - & -& \\ & $\omega_\pm$ & $\left\{1,2\right\}$& - & -& \\ \toprule\addlinespace[2pt] \multirow{9}{25mm}{\centering Fitness $F$ Eq~(\ref{eqnetprol})-(\ref{apoptosis})} &$p^{max}_0$ & 0.005& $hr^{-1}$&\cite{Sweeney1998} &\\[2pt] &$K_{H,0}$ & $0.05$ & -&-&\\[2pt] &$g_0$ &0.01&-&-&\\[2pt] &$p^{max}_1$ & 0.02& $hr^{-1}$&\cite{Sweeney1998}&\\[2pt] &$K_{H,1}$ & $0.3$ & -&-&\\[2pt] &$g_1$ &0.04&-&-&\\[2pt] &$d_f$ & $\left\{0.001,0.015\right\}$ &$hr^{-1}$ &-&\\[2pt] &$k_f$ & $10$ & -&-&\\[2pt] & $\Phi_{max}$ & $10^8$ & cell/cm$^3$&\cite{DelMonte2009}&\\[5pt] \multirow{4}{25mm}{\centering Survival Fraction 
$SF$ Eq~(\ref{radiosensitivity})/Eq~(\ref{SF_space})}& $\alpha_{min,max}$& Table \ref{rad_tab2}& Gy$^{-1}$&\cite{Saga}&\\ & $\beta_{min,max}$& Table \ref{rad_tab2}& Gy$^{-2}$&\cite{Saga}&\\[2pt] & $\xi_R$ & 0.2& - & -&\\[3pt] & OER & 3 &- & \cite{lewin2}& S\\[2pt] \toprule\addlinespace[2pt] \multirow{2}{30mm}{\centering Initial phenotypic distribution $n_0$}&$\phi_0$ & 0.4& -&-&\\[2pt] &$\sigma$& 0.1 & -&- & \\[2pt] \toprule[1.5pt]\addlinespace[3pt] \centering Spatial Diffusion &$D_N$ &$1.25\times 10^{-4}$& mm$^2$hr$^{-1}$&&S\\[2pt] \centering Domain Size & L & 0.45 & mm&-&S\\[3pt] \toprule Oxygen Diffusion & $D_c$ & $6.84\times 10^{-1}$&mm$^2$hr$^{-1}$&-&S\\[3pt] \multirow{2}{25mm}{\centering Consumption $\gamma$ Eq~(\ref{gamma})}&$\gamma_{max}$ &$3.11\times 10^{-12}$&g(cell hr)$^{-1}$&\cite{Boag1970}&S\\ &$k_\gamma$ & $10$ & -&-&S\\[3pt] \multirow{3}{25mm}{\centering Oxygen thresholds}&$c_\infty$ & 1& -&\cite{Lewin2018}&S\\[2pt] &$c_H$ & 0.3& -& \cite{lewin,ester2}&S\\[2pt] &$c_N$ & 0.0125 & -&\cite{ester2}&S\\[2pt] \bottomrule[2pt]\addlinespace[2mm] \end{tabular} \caption{List of the parameter values in model~(\ref{mixedmodel})-(\ref{eq_ad}) and/or its spatial extension~(\ref{spatial_mod})-(\ref{gamma}). Where the parameters are free, we list the set of values considered in the paper. We further label with (S) those parameters that are only present in the spatial model.} \label{param_set} \end{table} The model contains a large number of parameters, most of which will vary in value between tumours and patients. The main focus of this work is to study the role played by phenotypic advection (as it interacts with cell proliferation and apoptosis, as well as competition mechanisms). On this basis, we decided to perform a parameter sweep for parameters associated with the advection velocity, while holding all other model parameters fixed at values previously reported in the literature, where such values exist.
The main challenge is to identify the phenotypically dependent parameters, such as the growth rate in Equation (\ref{pO2}). As most data reported in the literature refer to processes, such as cell proliferation, at the population/cell-colony level and do not account for phenotypic variation, it was difficult to estimate parameters that characterise the phenotypic variation in these processes. We based our estimates of the proliferation rate on the doubling times reported by \cite{Sweeney1998} for two breast cancer cell lines, MCF-7 and BT-549. The former belong to the class of \textit{luminal}-like cells which are characterised by low stemness levels \cite{Ricardo2011} and high proliferation rates (doubling time $1.8$ days, i.e., growth rate $0.016$ hr$^{-1}$). On the other hand, BT-549 belong to the class of \textit{triple-negative} cells whose population is dominated by highly aggressive but slowly proliferating stem-like cells \cite{Ricardo2011} (doubling time $3.7$ days \cite{Sweeney1998}, i.e., growth rate $0.008$ hr$^{-1}$). Given the variability in the phenotypic distribution of these cell lines, we have rounded the values to those presented in Table \ref{param_set}. As is common in the literature, we have chosen the source of oxygen (i.e. $c_\infty$) to be at a pressure of $100$ mmHg \cite{Lewin2018}. Given that atmospheric pressure corresponds to $760$ mmHg with $21\%$ O$_2$, the oxygen tension corresponding to $c_\infty$ is about $8\%$\, O$_2$. The hypoxic and necrotic thresholds ($c_H$ and $c_N$) are then equivalent to oxygen pressures of $2.5\% \, O_2$ and $0.1\% \, O_2$, in line with \cite{ester,ester2}. These values can be converted into oxygen concentrations by use of Henry's law \cite{Lewin2018}, see Table \ref{param_set}. \section{Linear Stability Analysis} \label{AppendixB} \noindent As mentioned in Section \ref{LSA}, in order to compute the largest eigenvalue $\lambda_0$ numerically we rely on the \textit{Chebfun} package for MATLAB \cite{Driscoll2014}.
In order to solve the eigenvalue problem we first make the following substitution in Equation~(\ref{neu}): \begin{equation} \delta{n}=y(z) \exp\left[\frac{1}{2\theta}\int^z v_z(s) \,ds\right]. \end{equation} It is straightforward to show that the function $y$ satisfies the following eigenvalue problem: \begin{eqnarray} \begin{aligned} \theta \frac{d^2 y}{d z^2}+q(z;c,\bar{\phi})y-p(z;c)\bar{n}\int_0^1 y(s)k(s,z) \,ds=\lambda y \end{aligned}\label{eq:ApBeig}\\ \mbox{where} \quad q(z;c,\bar{\phi})=p(z;c)(1-\bar{\phi})-f(z)-\frac{1}{2}\frac{d v_z}{d z}-\frac{1}{4}\frac{v^2_z}{\theta},\label{eq:q}\\ \mbox{and} \quad k(s,z)=\exp\left[\frac{1}{2\theta}\int_s^z v_z(u) \,du\right],\\ \frac{d y}{dz}=0\quad \mbox{at} \; z=0,1. \end{eqnarray} Note that the integral in Equation~(\ref{eq:ApBeig}) is of the form of a Fredholm integral, which is built into the \textit{Chebfun} package \cite{Driscoll2014}. The above differential equation for $\bar{n}=0$ corresponds to the standard form of a Schr{\"o}dinger-type, \textit{Sturm-Liouville} eigenvalue problem, where the \textit{Hermiticity} of the differential operator implies the existence of purely real eigenvalues. In the case of the null steady state, the eigenvalue problem simplifies to: \begin{subequations} \begin{align} \begin{aligned} \theta \frac{d^2 y}{d z^2}+\tilde{q}(z;c)y=\lambda y \end{aligned} \label{neu_cond0} \\ \frac{d y}{dz}=0 \quad \mbox{at} \; z=0,1. \label{neu_cond} \end{align}\label{eq:eigenSL}% \end{subequations} where $\tilde{q}(z;c)=q(z;c,0)$ as defined in~(\ref{eq:q}). Therefore, applying the \textit{Sturm Oscillation Theorem}~\cite{coddingtonlevinson} to~(\ref{eq:eigenSL}) we deduce that $\sigma(\mathcal{M})$ has infinitely many simple and real eigenvalues which can be enumerated in strictly decreasing order: \begin{equation} \lambda_0>\lambda_1>\ldots, \, \lim\limits_{n\rightarrow \infty} \lambda_n=-\infty.
\end{equation} We conclude that the trivial steady state is either a stable node (if $\lambda_0<0$) or a saddle (if $\lambda_0>0$). In addition to numerical estimation of $\lambda_0$, analytical approximations and bounds can be obtained via the so-called \textit{Rayleigh quotient} $R(y)$. If we multiply Equation~(\ref{neu_cond0}) by $y$ and integrate by parts, then we obtain: \begin{subequations} \begin{align} R(y)=\frac{1}{\|y\|^2_{L^2}} \: \int_0^1 \left \{ \theta y\frac{d^2 y}{d z^2}+\tilde{q}(z;c)y^2 \right \} dz, \end{align} where $y$ also satisfies the Neumann boundary conditions~(\ref{neu_cond}). We deduce that the following therefore holds: \begin{align} \lambda_0 =\sup_{y\in E, \ y\neq 0} R(y) \end{align} \end{subequations} where $E$ is the set of twice differentiable functions that satisfy condition~(\ref{neu_cond}). \begin{lemma} If the function $\tilde{q}$ is such that $\max\limits_{z\in(0,1)} \tilde{q}<0$ then the null steady state is stable.\label{lemma1} \end{lemma} \begin{proof} Consider the numerator of the quotient defining $R(y)$: \begin{subequations} \begin{align} \begin{aligned} \int_0^1\left \{ \theta y\frac{d^2 y}{d z^2}+\tilde{q}(z;c)y^2 \right \} dz = \theta \cancelto{0}{\left[y\frac{d y}{d z}\right]_{0}^1} + \int_0^1\tilde{q}(z;c)y^2-\theta \left(\frac{d y}{d z}\right)^2 dz \\ \leq \int_0^1\tilde{q}(z;c)y^2 dz. \qquad \qquad \end{aligned} \end{align} We deduce that \begin{align} R(y) \leq \int_0^1 \tilde{q}(z;c) \frac{y^2}{\|y\|_2^2} dz=R_{up}(y). \end{align} It is therefore apparent that if the function $\tilde{q}$ is negative throughout the domain, then $R_{up}$ is negative for any choice of $y\in E$. In such a case, we have that: \begin{align} \lambda_0=\sup_{y\in E, \ y\neq 0} R(y)\leq\sup_{y\in E, \ y\neq 0} R_{up}(y)<0. \end{align} \end{subequations} \end{proof} Conversely, we now show that, under normoxia, the trivial steady state is unstable when the magnitude of the phenotypic advection velocity is sufficiently small relative to the net proliferation rate.
\begin{lemma} If the proliferation rate, apoptosis rate, phenotypic advection velocity and diffusion coefficient are such that: \begin{equation} \int_0^1 \left \{ p(z,c)-f(z)- \frac{v_z^2}{4\theta} \right \} dz>0,\label{cond_ins} \end{equation} then the trivial steady state is unstable. \label{lemma2} \end{lemma} \begin{proof} Consider $y_0\equiv 1$; then $y_0\in E$ and \begin{equation*} R(y_0)=\int_0^1 \left \{ p(z,c)-f(z)- \frac{v_z^2}{4\theta} \right \} dz>0. \end{equation*} Consequently, $\lambda_0=\sup_{y\in E, \ y\neq 0} R(y)\geq R(y_0)>0$, and the trivial steady state is unstable. \end{proof} \begin{remark} Note that for~(\ref{cond_ins}) to hold we require $\int_0^1 (p-f) dz>0$ so that cell proliferation dominates apoptosis. Based on the functional forms defined in Section~\ref{fit}, we have that: \begin{equation} \begin{aligned} \hspace{-8mm}I(c;d_f)&=\int_0^1 (p-f ) dz\\ &= \left[\sqrt{g_1}\, p_1(c)\, \mathcal{Z}\left(\frac{z-0.55}{\sqrt{g_1}}\right) +\sqrt{g_0}\, p_0(c)\, \mathcal{Z}\left(\frac{z}{\sqrt{g_0}}\right)+\frac{d_f}{k_f}e^{-k_f z}\right]_{z=0}^{z=1}\\ &\sim \frac{\sqrt{g_0} p_0(c)}{2}+\sqrt{g_1} p_1(c)-\frac{d_f}{k_f} \end{aligned} \end{equation} where $\mathcal{Z}$ denotes the cumulative distribution function for the normal distribution. We note that $I(1;d_f)>0$ while $I(0.2;d_f)<0$ for all values of the parameters listed in Table \ref{par_fitness}. We conclude that under normoxia there is a threshold $\mathcal{V}_+(\xi_+,\omega_+)$ such that the system is unstable for all choices of $V_+<\mathcal{V}_+(\xi_+,\omega_+)$: \begin{subequations} \begin{align} \mathcal{V}_+=\sqrt{\frac{2I(1;d_f)\theta}{I_v(\xi_+,\omega_+)}},\\ \mbox{where} \; I_v(\xi_+,\omega_+)=\int_0^1 \left(\frac{1}{V^*_+}\tanh\left(\myfrac[3pt]{z^{\omega_+}}{\xi_+}\right)\tanh\left(\myfrac[2pt]{(1-z)}{\xi_+}\right)\right)^2 dz. \end{align} We note also that higher values of $\theta$ favour instability of the trivial solution as $\mathcal{V}_+$ increases with $\theta$.
By inspecting Figure \ref{vel_prof}, we note qualitatively that $I_v$ is expected to decrease for increasing values of $\xi_+$ and $\omega_+$. \end{subequations} \end{remark} \begin{figure}[h!] \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/nospace/linearstab/eigen6.pdf} \caption{} \label{lsa1_a} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/nospace/linearstab/eigen.pdf} \caption{} \label{lsa1_b} \end{subfigure} \caption{Linear stability analysis of the trivial solution: plot of the two largest eigenvalues $\lambda_0(\xi)$ and $\lambda_1(\xi)$ for (a) $V_+=6\times 10^{-4}$, $\omega_+=1$ and $d_f=0.001$, and (b) $V_+=8\times 10^{-4}$, $\omega_+=1$, $d_f=0.001$. In (a), $\lambda_0 > 0$ for all values of $\xi$. In (b), $\lambda_0$ changes sign as $\xi$ increases and we can identify a critical value of $\xi$ at which the trivial solution loses stability, favouring the emergence of a nontrivial, phenotypic cell distribution.} \label{lsa1} \end{figure} To analyse other regions of parameter space, where neither of the sufficient conditions holds, we rely on numerical estimates of the eigenvalue $\lambda_0$. As shown in Figure \ref{lsa1}, and as expected based on the above findings, when the magnitude of the velocity $V_+$ is small, $\lambda_0>0$ for all $\xi$ and the trivial solution is unstable. By contrast, as the magnitude of the advection velocity increases, its steepness, $\xi$, determines the stability of the trivial solution. Using this estimate, we can identify the region of stability of the trivial steady state (see Figure \ref{xi_crit} in Section \ref{LSA}). We remark that the boundary between the regions of stability is non-smooth. This is because $\lambda_0 = \lambda_0(\xi)$ plateaus as $\xi\ll1$ (see Figure \ref{lsa1}). 
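The estimates of $\lambda_0$ above were obtained with Chebfun; for the simplified problem~(\ref{eq:eigenSL}), the largest eigenvalue can equally be approximated by a standard second-order finite-difference discretisation with ghost-point Neumann conditions. A minimal sketch (the potentials passed in below are toy stand-ins, not the model's fitness functions):

```python
import numpy as np

def lambda0(q_func, theta, N=600):
    """Largest eigenvalue of theta*y'' + q(z)*y = lambda*y on [0,1]
    with Neumann conditions y'(0) = y'(1) = 0 (finite differences)."""
    z = np.linspace(0.0, 1.0, N)
    h = z[1] - z[0]
    # second-difference matrix
    D2 = (np.diag(np.full(N - 1, 1.0), -1)
          - 2.0 * np.eye(N)
          + np.diag(np.full(N - 1, 1.0), 1))
    # ghost-point treatment of the no-flux boundaries
    D2[0, 1] = 2.0
    D2[-1, -2] = 2.0
    M = theta * D2 / h**2 + np.diag(q_func(z))
    return np.linalg.eigvals(M).real.max()

theta = 5e-6  # phenotypic diffusion coefficient used in the paper
# for q = const the constant mode is an exact eigenvector, so lambda0 = q
lam_flat = lambda0(lambda z: np.full_like(z, -1.0), theta)
# for a linear potential, lambda0 lies between the mean and the max of q
lam_tilt = lambda0(lambda z: 0.5 - z, theta)
```

The sign of the returned value then classifies the trivial state exactly as in the text: `lam_flat < 0` (stable node) versus `lam_tilt > 0` (saddle).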
By computing the second largest eigenvalue, $\lambda_1(\xi)$, we observe that the sharp change in the profile of $\lambda_0$ as $\xi$ decreases occurs where $|\lambda_0-\lambda_1|$ attains its minimum value. It is possible to show that the two eigenvalues do not cross, as expected by the Sturm Oscillation theorem. A similar phenomenon occurs in quantum physics~\cite{cohent} where it is known as \textit{avoided crossing}. Finally, we consider the stability of the trivial solution in an hypoxic environment. We confirm the numerical simulations from \S\ref{hyp cond} by showing that, under hypoxia, the trivial solution is always unstable. \begin{lemma} Under hypoxia (i.e. when $c=0.2$), and for the parameter values listed in Table \ref{param_set}, the trivial steady state is always unstable. \label{lemma3} \end{lemma} \begin{proof} Let us consider as a trial function: \begin{equation} y=\frac{1}{(\pi \kappa^2)^{1/4}}\exp\left(-\frac{z^2}{2\kappa^{2}}\right) +Az^2, \end{equation} where a small parabolic correction is added to the standard Gaussian, the constant $A$ being chosen to ensure that the boundary condition at $z=1$ is satisfied: \begin{equation} A=\frac{e^{-\frac{1}{2\kappa^2}}}{2\pi^{1/4}\kappa^{5/2}}; \end{equation} the derivative $y'$ at $z=0$ vanishes, by construction. We now want to show that the Rayleigh quotient is positive for such a choice of the test function $y$ which implies that the trivial steady state is unstable. 
Given that the denominator of $R(y)$ is always positive, its sign will be determined by the numerator $R_n(y)$, that is: \begin{equation} R_n(y)=\int_0^1 \left(p-f-\frac{v_z^2}{4\theta}\right) y^2 dz -\cancelto{0}{\left.\frac{v_z y^2}{2}\right|_0^1} +\int_0^1 v_z y\frac{d y}{d z} - \theta \left(\frac{d y}{d z}\right)^2 dz \end{equation} Computing the derivative of $y$ and denoting the Gaussian by $y_0$, we obtain: \begin{subequations} \begin{align} y^2= y_0^2 +2Az^2y_0+A^2z^4,\\ y'^2= \frac{z^2}{\kappa^4} y_0^2 -\frac{4Az^2}{\kappa^2}y_0+4z^2A^2,\\ yy'= -\frac{z}{\kappa^2}y_0^2+Az\left(2-\frac{z^2}{\kappa^2}\right)y_0+2A^2z^3. \end{align} Recalling that the constant $A$ is exponentially small as $\kappa\to 0$ while $y_0$ grows only as a power law of $\kappa^{-1}$, the terms multiplied by $A$ will be negligible and the sign of $R_{n}(y)$ will be determined by the leading term: \begin{align} R_n(y)= I_0 + \mathcal{O}(A),\label{eq:Rnexp}\\ I_0=\int_0^1 \left(p-f\right)y_0^2 dz-\theta\int_0^1 m^2(z) y_0^2 dz,\label{eq:I0first}\\ \mbox{where} \quad m(z)=\frac{v_z}{2\theta}+ \frac{ z}{\kappa^2}. \end{align} \end{subequations} Proving instability therefore reduces to showing that $I_0$ is positive for the range of parameters and functional forms considered under hypoxic conditions. We do so by finding a lower bound on the value of $I_0$, exploiting the quick decay of the function $y_0$, whose mass is concentrated in a neighborhood of $z=0$. Given that $p(0)-f(0)>0$ and $m(0)=0$, provided that $m$ does not grow too fast near $z=0$, we can intuitively see that the major contribution to the integral $I_0$ will be positive. We now make this intuitive argument rigorous. We first focus on $I_0^{(1)}=\int_0^1(p-f)y_0^2 dz$, the contribution in~(\ref{eq:I0first}) due to cell proliferation.
We can compute this integral exactly, as the integrand comprises products of exponentials that can be re-written as integrals of Gaussian distributions: \begin{subequations} \begin{align} I_0^{(1)}= &\left[\frac{p_1(c)\sqrt{2}\zeta_1 }{\kappa}e^{-\frac{0.55^2}{g_1}+\frac{c_1^2}{2\zeta_1^2}}\mathcal{Z}\left(\frac{z-c_1}{\zeta_1}\right)\right.\\ &\left.+\frac{p_0(c)\sqrt{2}\zeta_0}{\kappa}\mathcal{Z}\left(\frac{z}{\zeta_0}\right)-d_f e^{-k_f+\frac{c_f^2}{\kappa^2}}\mathcal{Z}\left(\frac{\sqrt{2}(z-c_f)}{\kappa}\right)\right]_{0}^1, \end{align}\label{eq:I_0(1)}% \end{subequations} where $2\zeta_{0,1}^2=(\kappa^2 g_{0,1})/ (g_{0,1}+\kappa^2)$, $c_1=0.55(2\zeta^2_1)/g_1$ and $c_f=k_f\kappa^2/2$, while $\mathcal{Z}$ is again the normal cumulative distribution function as in Lemma \ref{lemma2}. We now focus on the term in~(\ref{eq:I0first}) which depends on $m$. In this case the integral cannot be computed exactly, and we therefore find a lower bound for its contribution instead. This is achieved by decomposing the full domain $[0,1]$ into three sub-domains. This will allow us to balance the rapid growth of the function $m$ with the quicker decay of $y_0$ away from $z=0$: \begin{align} \int_0^1 m^2y^2_0 dz =\int_0^{z_0\kappa} m^2y^2_0dz + \int_{z_0\kappa}^{z_1\kappa}m^2 y_0^2dz+\int_{z_1\kappa}^1 m^2y_0^2 dz.\label{eq:mint1} \end{align} where $z_{0,1}$ are positive constants such that $0 < z_0 < z_1 < \kappa^{-1}$. Note that we have the freedom of choosing their values so as to make the resulting upper bound on~(\ref{eq:mint1}) as small as possible. It is straightforward to see that $m$ attains its maximum value at $z=1$ as both $v_z$ and $z/\kappa^2$ attain maxima there.
We now choose the value of $\kappa$ so that the derivative of $m$ at $z=0$ vanishes: \begin{subequations} \begin{align} m'(z)= \left(\frac{v'_z(z)}{2\theta}+\frac{1}{\kappa^2}\right) \quad \Rightarrow \quad \kappa = \sqrt{\frac{2\theta}{|v'_z(0)|}}.\label{eq:kappa} \end{align} However, by definition (see Equation~(\ref{eq_ad_hyp})), under hypoxia, the advection velocity $v_z(z)=v_z^-(z)$ is such that $|v'_z(z)|\leq |v'_z(0)|$ for all $z\in(0,1]$, with equality only if $\omega_-=2$. Consequently, we have that $m(z)$ is a non-decreasing function of $z$, i.e. $m'(z)\geq 0$. Given the above, we can now construct an upper bound for the integral in~(\ref{eq:mint1}): \begin{align} \begin{aligned} \hspace{-8mm} \int_0^1 m^2 y^2_0 dz &\leq m^2(z_0 \kappa)\int_0^{z_0 \kappa} y^2_0 \: dz + m^2(z_1 \kappa)\int_{z_0 \kappa}^{z_1 \kappa} y^2_0 \: dz + m^2(1)\int_{z_1 \kappa}^1 y_0^2 \: dz\\ &=m^2(z_0 \kappa) \left[\mathcal{Z}\right]_0^{\sqrt{2}z_0}+m^2(z_1 \kappa) \left[\mathcal{Z}\right]_{\sqrt{2}z_0}^{ \sqrt{2}z_1}+ \frac{1}{\kappa^4} \left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\frac{\sqrt{2}}{\kappa}}\\ &\leq \frac{m^2(z_0 \kappa)}{2}+m^2(z_1 \kappa) \left[\mathcal{Z}\right]_{\sqrt{2}z_0}^{ \sqrt{2}z_1}+ \frac{1}{\kappa^4} \left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\infty} \end{aligned} \end{align} Let us reiterate that we want $z_0$ and $z_1$ to be such that $m^2(z_0\kappa)$ and $m^2(z_1\kappa)$ are not too large while $\left[\mathcal{Z}\right]_0^{\sqrt{2}z_0}$ and $\left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\infty}$ are sufficiently small. In this way, the growth of $m$ is balanced by the exponential decay of the Gaussian function $y^2_0$. In particular, we choose $z_0=\sqrt{2}$ and $z_1=5/\sqrt{2}$. 
\end{subequations} Combining the above with the estimate from Equation~(\ref{eq:I_0(1)}), we obtain: \begin{eqnarray} I_0 > I_0^{(1)}- \frac{\theta m^2(z_0\kappa)}{2} -\theta m^2(z_1\kappa)\left[\mathcal{Z}\right]_{\sqrt{2}z_0}^{ \sqrt{2}z_1}-\frac{v'_z(0)^2}{4\theta} \left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\infty}=I^{low}_0.\label{eq:I0low} \end{eqnarray} \begin{figure}[h!] \centering \includegraphics[width=0.95\textwidth]{figures/nospace/linearstab/I0max.pdf} \caption{Plot of the lower bound $I_0^{low}$ and the standard deviation $\kappa$ as defined by~(\ref{eq:I0low}) and~(\ref{eq:kappa}) respectively for the parameter regime considered in the paper (note that $d_f$ is fixed at its maximum value $0.015$ as this gives the smallest bound $I_0^{low}$).} \label{fig:my_label} \end{figure} We can compute the values of $\kappa$ and $I_0^{low}$ associated with the values of the magnitude $V_-$ and steepness $\xi_-$ considered in the paper (without loss of generality, we only consider $d_f=0.015$ as $I^{low}_0$ decreases with $d_f$). As shown in Figure \ref{fig:my_label}, for all such values, we have that $I_0^{low}>0$. Since $I_0>I_0^{low}$, it follows that $I_0$ is also positive. We estimate $A \leq O(10^{-13})$, which justifies dropping the $O(A)$ terms in~(\ref{eq:Rnexp}). Consequently, we conclude that $R_n(y)$ is positive and so is the quotient $R$. Hence, in hypoxia, the trivial steady state is always unstable. \end{proof} \section{Conclusion and Future Challenges} \label{conclusion} \noindent We have developed a structured model to investigate how clonogenic heterogeneity affects the growth and treatment response of a population of tumour cells. Cell heterogeneity is incorporated via an independent and continuous structural variable which represents \emph{stemness}. As proposed by \cite{Scott169615,Chisholm2016}, we view stemness as a plastic trait, with cells becoming more, or less, stem-like depending on their environmental conditions.
Our mathematical model accounts for cell proliferation and apoptosis, inter-cell competition, and phenotypic movement along the stemness axis, via diffusion and advection. Studies of the population dynamics in the absence of treatment revealed that, under normoxia, a variety of qualitative behaviours may arise depending on the functional forms used to represent the structural flux and fitness landscape. When advection dominates movement along the stemness axis, its magnitude, relative to the rates of proliferation and cell death, determines whether the population is driven to extinction. Multimodal distributions, which allow for the formation and maintenance of CSC pools, are observed for asymmetric velocity profiles. Under hypoxia, the population distribution is unimodal and skewed toward stem-like phenotypes, with little intra-population variability. The resulting cell distribution is highly resistant to radiotherapy, and the tumour will typically regrow following treatment. By contrast, under normoxia (or re-oxygenated hypoxia), and for suitable parameter values, the tumour may become extinct following radiotherapy. There are many ways in which the work presented in this paper could be extended. A first, natural extension would be to incorporate structural and spatial heterogeneity (i.e., both phenotypic and spatial dimensions) \cite{hodgkinson}. This would enable us to consider \textit{in vivo} situations, where spatial gradients in oxygen levels emerge naturally, due to oxygen consumption by the cells as it diffuses from blood vessels. As outlined in Appendix~\ref{AppendixA}, in such a model oxygen consumption rates may vary with cell phenotype, and spatial fluxes may account for random movement of the cells. Preliminary results for such a model are presented in Figure~\ref{space1}. We consider a 1D Cartesian geometry and focus on a tumour region of width $L$, in which a blood vessel located at $x=0$ provides a continuous supply of oxygen to the tissue.
If the tumour initially comprises a spatially homogeneous distribution of terminally differentiated cells (see Equation~(\ref{initial_cond})), then the oxygen rapidly relaxes to a steady state and a hypoxic region forms at a distance from $x=0$. In contrast to the well-mixed model, cells are now able to move, by random motion, between normoxic and hypoxic regions. While terminally differentiated cancer cells are dominant in the well-oxygenated region, a small fraction persists in the hypoxic region (in particular, near the boundary of the hypoxic region, orange line in the plots in Figure~\ref{space1}). This is due to the influx of cells from the well-oxygenated portion of the domain. Similarly, CSCs are dominant in the hypoxic region, but a small fraction of hypoxic CSCs migrates towards $x=0$, where re-oxygenation induces their maturation, creating a differentiated and highly proliferative cell phenotype, alongside terminally differentiated cancer cells. These results illustrate how the interplay between space, resources and phenotypic adaptation may give rise to complex behaviours; their investigation is the focus of ongoing work. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/space/simnotreatment} \caption{Series of plots showing how, in the absence of treatment, the cancer cell population $n(x,z,t)$ and the oxygen concentration $c(x,t)$ change over time $t$ when we account for spatial and phenotypic variation (see Equations~(\ref{spatial_mod})). We indicate the threshold $c=c_H$ which defines the boundary of the hypoxic region with a horizontal red line in the upper plots and with a vertical orange line in the lower plots.
We fix $V_\pm=4\times10^{-4}$, $\xi_\pm=0.1$, $\omega_+=1$, $\omega_-=2$ and $d_f=0.001$, while the remaining model parameters are fixed at the values stated in Table \ref{param_set}.} \label{space1} \end{figure} A significant challenge of the modelling approach presented in this paper is the determination of model parameters and functional forms. In the longer term, techniques such as \textit{single-cell RNA sequencing}~\cite{tirosh,venteicher} will make it possible to quantify specific aspects of our model, such as the dependence of the proliferation and apoptosis rates on cell stemness and the dependence on the tumour micro-environment of the (phenotypic) advection velocity associated with cell maturation and de-differentiation. In spite of their current limitations, we believe that studies of such models can increase understanding of the ways in which specific physical processes may influence the phenotypic distribution of cell populations in different environments. At the same time, we acknowledge that it remains a matter of debate as to whether asymmetric cell distributions are driven by micro-environmental signals (as in the model presented here), asymmetric division, or a combination of the two~\cite{Roeder2006}. By using a non-local proliferation kernel to account for asymmetric division, we could investigate these alternative hypotheses and identify conditions under which they lead to different outcomes. An important feature of our model is the way in which the response to radiotherapy (RT) varies with cell stemness (i.e., $z$). Our analysis shows how the functional forms used to describe the advection velocity and fitness functions can affect the system dynamics post-RT. While unimodal phenotypic distributions lead to monotonic growth curves post-treatment, more complex behaviour is observed when heterogeneous populations, with a pool of CSCs, are considered.
For example, under normoxia, the presence of radio-resistant CSCs can drive recurrence, despite an initial phase of tumour regression. As the CSCs mature into highly-proliferating cancer cells, rapid re-growth is accompanied by re-sensitisation of the population to RT. Under hypoxia, CSCs maintain their stemness, leading to a slowly growing, radio-resistant cell population. More complex outcomes arise when we consider the effect that treatment might have on the environment. As noted in~\ref{radio+change}, changes in the vasculature induced by radiotherapy can result in either post-treatment re-oxygenation or hypoxia. While re-oxygenation increases the radio-sensitivity of the population, hypoxia increases its radio-resistance. In practice, such environmental changes are likely to be transient. Even in an untreated tumour, fluctuations in oxygen levels can occur. Consider, for example, cells in a neighbourhood of immature blood vessels. As the cells proliferate, they exert mechanical pressure on the vessels, causing them to collapse and local oxygen levels to fall. Under hypoxia, the tumour cells stimulate the growth of new blood vessels from pre-existing ones, via angiogenesis. In this way, tumour regions may cycle between periods of hypoxia and normoxia. It would be of interest to extend the model to account explicitly for the tumour vasculature and its interaction with tumour cells. This could be achieved at a ``high level'' of description, via simple ODE models such as \cite{Hahnfeldt1999, Stamper2010}, or via more complex, multi-phase \cite{Hubbard2013} or multi-scale approaches \cite{Byrne2010,Macklin2009,Vavourakis2017,Walpole2013}. This would enable us to better capture the different time-scales on which the oxygen dynamics and cell adaptation velocity change. As shown in Figure \ref{space2}, variations in oxygen levels emerge naturally within spatially-resolved models.
Here, cell killing leads to tissue re-oxygenation which, in turn, disrupts the CSC niche. Depending on the time scale over which the cells adapt to their new environmental conditions, this may increase the overall radio-sensitivity. Understanding and accounting for such phenomena is particularly relevant for predicting responses to RT and comparing alternative treatment protocols. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/space/simtreatment} \caption{Evolution of the population $n(x,z,t)$ in the spatial and phenotypic dimensions following a cycle of fractionated radiotherapy (5 $\times$ $2$ Gy). The parameter values are the same as those used in Figure \ref{space1} and the initial cell distribution is the same as the final distribution in Figure \ref{space1}. For the LQ-model we used parameter set $R3$ in Table \ref{rad_tab2}.} \label{space2} \end{figure} In the extinction scenario, or following the administration of high RT doses, the number of cells in the population can become low and our continuum model may cease to be valid. In such conditions, stochastic effects, which are neglected herein, may become important. As in \cite{Ardaseva2020_2,Franz2013,Spill2015}, stochastic and mean-field approaches may be combined with hybrid discrete-continuum techniques to account for small population effects and to study their impact on the probability of tumour extinction. In this paper, we considered only single-dose and fractionated treatment protocols. In future work, we could investigate alternative strategies, such as \textit{adaptive therapeutic} protocols \cite{Gatenby2009} and/or multi-drug treatments, which have been proposed as an effective way to overcome radio-resistance. From this point of view, considerable efforts have been invested in designing treatments that exploit features of CSCs, such as their metabolic plasticity~\cite{frontiers}.
Motivated by recent metabolically-structured models~\cite{Ardaseva2019,hodgkinson, Villa2019}, a natural extension of our model would be to include a ``metabolic dimension'' in order to investigate the interplay between stemness, metabolic switching and resistance. A biologically informed model that incorporates metabolic and phenotypic effects, together with the tumour micro-environment and vascular remodelling, lies at the heart of a mathematical programme that would enable systematic comparison with {\it in vivo} observations. The framework and results outlined in this work represent a first step towards achieving this long-term goal. \section{Introduction} \noindent Understanding of the mechanisms by which cancer is initiated and progresses continues to increase; yet, cancer remains one of the leading causes of premature mortality worldwide and a major barrier to increasing average life-expectancy. For example, in 2018, 9.6 million people are estimated to have died of cancer \cite{Bray2018}. Furthermore, treatment outcomes can differ markedly between patients with the same cancer type, with the emergence of resistance being one of the major causes of treatment failure. Over the past twenty years, there has been a major shift in our perception of solid tumours; they are now regarded as heterogeneous tissues in which malignant cells interact with normal cells and shape their environment in ways that favour malignant growth \cite{Hanahan2000}. Cancer stem cells (CSCs) were introduced to explain intra-tumour heterogeneity via the \emph{CSC hypothesis} \cite{Reya2001}. This hypothesis proposes that, while CSCs may comprise only a small fraction of the total cell population, their high clonogenic potential and their ability to produce more mature, or specialised, cancer cells enable them to create an entire tumour \cite{Rycaj2014}.
As CSCs are found to be resistant to standard treatments, they are recognised as a major cause of disease recurrence and treatment failure \cite{Baumann2008,Rycaj2014,cancersreview}. These observations have stimulated the development of novel therapeutic strategies which aim to eradicate CSCs \cite{ende2,Kong2020,Shibata2019,frontiers}. In practice, the plasticity of CSCs represents a major obstacle to such treatments. Additionally, CSCs can adapt to their local micro-environment and remodel it to create and maintain a niche which supports their survival~\cite{architects}. Increasingly, researchers are turning to mathematical models in order to understand how CSCs affect the growth and composition of tumours, particularly their heterogeneity and response to treatment. These models often decompose the tumour into a series of compartments, each representing a particular cell subtype. For example, in~\cite{ende2}, Enderling distinguishes cancer stem cells (CSCs) and cancer cells, whereas Saga and coworkers distinguish radio-resistant and radio-sensitive cells~\cite{Saga}, and Scott and colleagues distinguish tumour-initiating cells (or CSCs), transit-amplifying cells and terminally differentiated cells (TDCs)~\cite{Scott169615}. Thus, most compartmental models are based on the CSC hypothesis, which assumes that it is possible to distinguish between cancer stem cells and the tumour bulk. However, this paradigm has been challenged by recent experimental studies~\cite{dirkse, Soleymani2018} that highlight the phenotypic heterogeneity and plasticity of cancer cells, whose clonogenic (or \emph{stemness}) potential can be altered by the surrounding micro-environment (extrinsic forces). These findings have led to a new hypothesis for intra-tumoural heterogeneity, based on \emph{adaptive CSC plasticity} \cite{Fanelli2020}.
Under this hypothesis, cancer cells move between stem-like and terminally differentiated states in response to extrinsic (environmental) and/or intrinsic (random epigenetic mutation) forces. Remarkably, the development of state-of-the-art experimental tools, such as single-cell RNA-seq, means that it is now possible to track the evolution of stemness traits~\cite{tirosh,venteicher}, rendering this an ideal time to develop mathematical models that can explore these concepts. Compartmental models can be used to study adaptive CSC plasticity by allowing transitions between different compartments. However, since they assume that the tumour comprises distinct cell populations, with distinct properties, they are unable to account for continuous variation in cell properties. An increasingly popular mathematical approach for describing population heterogeneity and plasticity characterises tumour cells by their position on a continuous phenotypic axis. Position on the phenotypic axis determines cell properties such as resistance to treatment~\cite{Chisholm2016,Clairambault2020,hodgkinson,Lorenzi2016,Lorz2014} and/or metabolic state \cite{Ardaseva2019,Villa2019}. This approach is motivated by concepts from evolutionary ecology, such as risk-spreading through spontaneous (epigenetic or genetic) variations and evolutionary pressure \cite{Thomas2013}. The resulting models are typically formulated as systems of reaction-diffusion equations~\cite{Ardaseva2019,Lorenzi2016,Villa2019}, with an advective transport term sometimes included to account for biased mutation dynamics \cite{hodgkinson} or adaptive phenotypic switches \cite{Chisholm2016,Lorenzi2015,Stace2020}. In this paper, we formulate a mathematical model that accounts for the evolution of a cancer cell population along such a stemness axis in response to extrinsic and intrinsic stimuli. Initially, we focus on the plastic response of cells to changes in nutrient levels, in particular oxygen.
This is motivated by recent experimental studies~\cite{Garnier2019,Liu2014,Pistollato2010,Pistollato2009} suggesting that hypoxia (i.e. low oxygen levels) is a key driver of cell de-differentiation. From this point of view, spatial heterogeneity may introduce significant additional complications: as oxygen diffuses into a tumour and is consumed by cells, spatial gradients in the oxygen levels are established. In this way, local micro-environments characterised by normoxia, hypoxia and necrosis form as the distance to the nearest nutrient supply (i.e., blood vessels) increases~\cite{hodgkinson,Lorz2014,Villa2019}. For simplicity, we postpone consideration of such spatial complexity to future work and focus, instead, on a \emph{well-mixed} setting where oxygen levels are homogeneous and prescribed. This idealised scenario allows us to investigate how cell properties, such as proliferation, apoptosis and adaptive response to environmental signals, contribute to the emergence of heterogeneous stemness levels in the population and the long-term tumour composition. In this regard, we are interested in identifying conditions under which CSCs are favoured. We then extend the model to account for treatment via a phenotypically-modulated linear-quadratic model of radiotherapy (see, e.g.,~\cite{lewin2,lewin,Saga} for recent discussions) which accounts for differential radio-sensitivity of CSCs~\cite{frontiers}. This allows us to investigate how different radiotherapy protocols perturb the phenotypic distribution and subsequent regrowth of the tumour. In practice, stemness is just one of multiple traits that regulate cell behaviour and heterogeneity. We, therefore, anticipate that future models will combine multiple phenotypic axes or \emph{synthetic dimensions}, such as stemness and metabolic state \cite{Ardaseva2019,hodgkinson}. Given the complexity of such multi-dimensional models, it is important first to understand these aspects separately.
Noting that considerable mathematical effort has been devoted to investigating cancer metabolism~\cite{cancermetab}, we choose here to focus on population heterogeneity with respect to a continuously varying stemness axis. We hope that in the long term this work will help motivate a systematic experimental characterisation of cell plasticity and phenotype. The remainder of the article is organised as follows. In Section~\ref{model presentation}, we present a well-mixed, spatially homogeneous, model of solid tumour growth in response to a prescribed oxygen concentration. We first investigate the population dynamics in the absence of treatment, considering both normoxic and hypoxic conditions. Numerical results are presented in Section~\ref{notreatment}. As a partial validation of the numerical results, we use spectral stability analysis to characterise the long-time behaviour of the solutions. Section~\ref{radio_result} focuses on tumour cell responses to different radiotherapy protocols. As in Section~\ref{notreatment}, we simulate responses under normoxia and hypoxia, but we also consider situations in which the environment alternates between periods of hypoxia and normoxia in order to explore the different ways that radiotherapy can alter tissue oxygenation. Finally, in Section~\ref{conclusion}, we summarise our key findings and propose possible directions for future work. We also present preliminary results showing how accounting for spatial and phenotypic variation may affect a tumour's growth and response to radiotherapy. \subsection{Linear Stability Analysis} \label{LSA} \noindent We now validate some of the above numerical results by performing a linear stability analysis, which enables us to characterise the equilibrium states.
We denote by $\bar{n}=\bar{n}(z)$ a steady state for the (untreated) system~(\ref{mixedmodel})-(\ref{eq_ad}), with a total cell density $\bar{\phi}=\int_0^1 \bar{n}(z)\, dz$, and let $\delta n$ represent a small perturbation to this solution. Then we can approximate the solution $n$ in a neighbourhood of $\bar{n}$ as: \begin{equation} n(z,t)= \bar{n} + \delta n(z,t),\quad \|\delta n\| \ll 1 \quad \,\forall t>0. \end{equation} Substituting this ansatz into~(\ref{mixedmodel}) and retaining linear terms, we obtain the following equation for $\delta n$: \begin{subequations} \begin{align} \begin{aligned} \frac{\partial \delta n}{\partial t}= \mathcal{M}\delta n, \end{aligned} \label{defM} \\[2pt] \frac{\partial \delta n}{\partial z} = 0, \qquad z = 0, 1,\\ \delta n(z,0)\not\equiv 0,\label{neu} \end{align} \end{subequations} where $\mathcal{M}$ is the following integro-differential operator: \begin{equation} \mathcal{M}\delta n \equiv \frac{\partial }{\partial z} \left(\theta \frac{\partial \delta n}{\partial z}- v_z\delta n\right)+ \left[p\left(1-\bar{\phi}\right) - f \right] \delta n - p\bar{n}\int_0^1 \delta n\, dz. \label{Mdef} \end{equation} The solution $\bar{n}$ is \textit{spectrally} stable if the spectrum of the operator, $\sigma(\mathcal{M})$, does not contain eigenvalues with positive real part, i.e., \begin{equation} \sigma(\mathcal{M}) \cap \left\{\lambda \in \mathbb{C}: \Re(\lambda)>0\right\}=\emptyset. \end{equation} Moreover, the dynamics of the system will be dominated by the fastest growing mode (i.e., the eigenfunction corresponding to the eigenvalue with the largest real part, $\lambda_0$). In \ref{AppendixB} we transform the above eigenvalue problem so that it does not include any first-order derivatives. For a non-zero steady state, we retain a non-local term in the eigenvalue problem and this can give rise to a spectrum with a pair of complex eigenvalues.
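In practice, $\lambda_0$ can be estimated by discretising $\mathcal{M}$ on a uniform grid. The sketch below is restricted to the trivial steady state (for which $\bar{n}\equiv 0$, $\bar{\phi}=0$ and the non-local term vanishes), with zero-flux boundaries; the fitness and velocity profiles are illustrative placeholders only, not the functional forms specified in Sections~\ref{fit} and~\ref{vz_sec}.

```python
import numpy as np

def leading_eigenvalue(p, f, v, theta=1e-4, N=201):
    """Estimate the leading eigenvalue of the linearised operator M about
    the trivial steady state (where the non-local term vanishes), using a
    conservative finite-difference discretisation with zero-flux ends."""
    z, h = np.linspace(0.0, 1.0, N, retstep=True)
    M = np.zeros((N, N))
    for i in range(N):
        im, ip = max(i - 1, 0), min(i + 1, N - 1)   # reflecting (zero-flux) ends
        # diffusion term: theta * d^2/dz^2
        M[i, im] += theta / h**2
        M[i, ip] += theta / h**2
        M[i, i] -= 2.0 * theta / h**2
        # advection term: -d(v * dn)/dz, centred differences
        M[i, ip] -= v(z[ip]) / (2.0 * h)
        M[i, im] += v(z[im]) / (2.0 * h)
        # local fitness about n = 0: p(z) - f(z)
        M[i, i] += p(z[i]) - f(z[i])
    eigs = np.linalg.eigvals(M)
    return eigs[np.argmax(eigs.real)]

# Placeholder profiles, hypothetical and for illustration only
p = lambda z: 0.02 * np.exp(-(z - 0.55) ** 2 / 0.04)   # growth peak at z = 0.55
f = lambda z: 0.001 * np.exp(-10.0 * (1.0 - z))        # death concentrated near z = 1
v = lambda z: 0.0 * z                                  # no advection in this toy case
lam0 = leading_eigenvalue(p, f, v)
```

With these placeholder profiles the net growth rate $p-f$ is positive near $z=0.55$ and diffusion is weak, so $\Re(\lambda_0)>0$ and the trivial steady state is unstable, consistent with the discussion below.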
Recalling case A.1 from Section \ref{oxygen_env} (see Figure \ref{norm_sym}), the numerically estimated value of $\lambda_0$ is indeed complex ($\lambda_0=-1.535 \times 10^{-4} \pm i \, 2.24\times 10^{-3}$, where $i^2 = -1$). This result, in turn, explains why damped fluctuations are observed in the numerical simulations. By contrast, when considering the trivial steady state, $\bar{n}\equiv 0$, which is always a fixed point for the system, the non-local term vanishes and we obtain the standard form analysed by Sturm-Liouville theory. Using well-known results, we can identify sufficient conditions for the stability/instability of the trivial steady state (see Lemmas \ref{lemma1}-\ref{lemma3} in \ref{AppendixB}). Under hypoxia, where $v_z<0$, we find that the trivial steady state is unstable (for the parameter sets in Table \ref{param_set}) and the system evolves to a non-zero distribution, which is consistent with the numerical results from Section \ref{hyp cond}. We note that the results relate only to the behaviour of the fitness function and advection velocity near the boundary $z=0$, suggesting that the most relevant parameters are $p_0^{max}$, $V_-$, $\theta$ and $\xi_-$. By contrast, under normoxia, and for the range of parameters considered here, the system undergoes a bifurcation. For sufficiently small $V_+$, the trivial steady state is unstable; for sufficiently large $V_+$ and for large values of the death rate, $d_f$, the trivial steady state is stable (see, for example, case C2 in Section~\ref{oxygen_env}). To investigate other parameter regimes that we cannot tackle analytically, we rely on numerical estimation of the largest eigenvalue, $\lambda_0$. As shown in Figure \ref{xi_crit}, it is possible to identify the boundary of the region of stability in $(\xi_+,V_+)$ space. This diagram does not change significantly as the death rate varies in the range from $d_f=0.001$ to $d_f=0.015$ (results not shown).
However, the results are highly sensitive to the value of $\omega_+$. Comparing Figures \ref{xi_crit1} and \ref{xi_crit2}, we see that setting $\omega_+=2$ favours the formation of a non-trivial equilibrium distribution, with the curve shifting to the far right of the parameter space (i.e., small values of $\xi_+$ and large values of $V_+$). In the latter case, this implies that even higher velocities $V_+$ are needed to stabilise the tumour elimination solution. This is consistent with the numerical results in Section~\ref{oxygen_env}, where setting $\omega_+=2$ (see scenario B in Section~\ref{oxygen_env}) favoured the accumulation of CSCs, which acted as a reservoir for tumour cells. \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figures/nospace/linearstab/xi_crit_om1} \caption{$\omega_+=1$} \label{xi_crit1} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figures/nospace/linearstab/xi_crit_om2} \caption{$\omega_+=2$} \label{xi_crit2} \end{subfigure} \caption{Series of phase diagrams partitioning the $(V_+,\xi_+)$ parameter space into regions where the trivial steady state is linearly stable (green regions) and unstable (white regions). The diagrams are obtained for $d_f=0.001$. We note that changing $\omega_+$ has a significant impact on the size of the region of $(V_+,\xi_+)$ parameter space in which the non-trivial steady state is stable (compare (a) and (b)).} \label{xi_crit} \end{figure} \section{Model Formulation} \label{model presentation} \noindent We consider the temporal evolution of a heterogeneous population of tumour cells, $N(z,t)$, where $t \geq 0$ denotes time and $z$ ($0 \leq z \leq 1$) represents their stemness or \emph{clonogenic capacity}.
As shown in Figure \ref{schematic}, $z=0$ corresponds to cancer stem cells (CSCs) which have the maximum level of stemness, and $z=1$ corresponds to terminally differentiated cells (TDCs), which have lost their proliferative capacity and which can either enter replicative senescence or undergo cell death \cite{Lee2014}. \color{black} We assume that the population dynamics may be described by a reaction-advection-diffusion equation~(see Equation~(\ref{full1}) below) which accounts for two essential physical/ecological processes. First, cells \emph{move} along the stemness axis (i.e., in the $z$-direction) in response to extrinsic (micro-environment) and intrinsic (random epimutation) \emph{forces} \cite{Scott169615}, which give rise to advective and diffusive fluxes respectively. Second, the effect of natural selection on the population is represented by the fitness function $F$, which models the net growth rate of the cells. \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{figures/model/modeldiagram_well_mix} \caption{Schematic representation of the well-mixed, phenotypic model. We associate with each cell a stemness level $z$, which varies continuously between the cancer stem cell state (CSCs, with $z\sim0$), the differentiated cell state (with $z\sim 0.5$) and the terminally differentiated cell state (TDCs, with $z\sim 1$).} \label{schematic} \end{figure} While multiple nutrients and growth factors regulate the growth rate (or fitness function $F$) and phenotypic adaptation (i.e., the advective velocity $v_z$) of the tumour cells, here, for simplicity, we focus on a single nutrient, specifically oxygen. The critical role of low oxygen levels, or \textit{hypoxia}, in cancer has long been recognised due to its association with cell quiescence and poor therapeutic outcomes~\cite{hodgkinson,lewin,Saga}.
Recent experimental results \cite{Soleymani2018} have shown that hypoxia also plays a role in de-differentiation by regulating pathways associated with a stem-like phenotype. We account for these phenomena in our model by assuming that all cells are exposed to the same level of oxygen, $c=c(t)$, which mediates the values of the fitness function, $F$, and the advection velocity, $v_z$; the latter feature distinguishes our work from existing theoretical models in which intrinsic forces are assumed to dominate phenotypic variation (i.e., $v_z = 0$) \cite{Ardaseva2019, Villa2019}. By combining the processes mentioned above, we deduce that the evolution over time $t$ and along the phenotypic axis $z$ of the cell concentration, $N(z,t)$, is governed by the following non-local partial differential equation (PDE) and associated boundary and initial conditions: \begin{subequations} \begin{align} \frac{\partial N}{\partial t}= \frac{\partial }{\partial z} \underbrace{\left(\theta \frac{\partial N}{\partial z}-N v_z(z,c)\right)}_{\text{structural flux}}+ \underbrace{F(z,\Phi,t;c)}_{\text{fitness}}N,\label{full1}\\ \theta \frac{\partial N}{\partial z}-N v_z = 0, \quad z\in\left\{0,1\right\}, \, t>0,\\[1pt] N(z,0)= N_0(z) \quad z\in(0,1),\\[1pt] \Phi(t)=\int_0^1 N(z,t) \, dz. \end{align}% In Equation~(\ref{full}), the non-negative constant $\theta$ represents the rate at which cells diffuse along the phenotypic axis, due to random epigenetic mutations, $\Phi(t)$ denotes the density of cells in the domain at time $t$, and $N_0(z)$ is the initial distribution of cells along the phenotypic axis. In ecology, the function $F$ is referred to as the fitness landscape, which is a mathematical representation of natural, or \textit{Darwinian}, selection \cite{Pisco2015}.
We suppose it has the following form: \begin{align} \begin{aligned} F(z,\Phi,t;c)= \underbrace{p(z,c)\left(1-\frac{\Phi}{\Phi_{max}}\right)}_{\text{proliferation}}-\overbrace{f(z)}^{\text{\shortstack{natural cell\\ death}}}-\underbrace{\sum^{M}_{i=1} \log\left(\frac{1}{SF(z,c)}\right)\delta(t-t_i)}_{\text{radiotherapy}}.\label{F} \end{aligned} \end{align}\label{full} \end{subequations} In Equation~(\ref{F}), $p=p(z,c)$ denotes the phenotype-dependent growth rate of the cells (see Section~\ref{fit} for details). It is multiplied by a non-local (in the phenotypic sense) logistic term, with constant carrying capacity $\Phi_{max}$, to capture intra-population competition for space and other resources. We assume that oxygen levels remain sufficiently high so that necrosis can be neglected. Hence, the death rate, $f$, accounts only for natural cell death, or apoptosis, which is assumed to occur at a rate that is independent of the oxygen concentration, $c(t)$. Radiotherapy (RT) also contributes to cell death and, in so doing, reduces cell fitness. We suppose that $M$ rounds of RT are administered at discrete times $t_i$ ($i=1,2,\ldots, M$). After each treatment dose, the proportion of cells of phenotype $z$ that survive is denoted by the survival fraction $SF(z,c)$. By allowing $SF$ to depend on $z$, we can account for phenotype-dependent radio-sensitivity, and, for example, view the CSCs (i.e. $z=0$) as the most radio-resistant tumour subpopulation \cite{Rycaj2014}. Additionally, the dependence of $SF(z,c)$ on $c(t)$ enables us to account for differential radio-sensitivity under normoxia and hypoxia \cite{Hockel1996,Sorensen2020}. In contrast to~\cite{lewin2}, where the term $(1-SF)$ is used to capture cell death due to radiotherapy, here we use the term $\log(1/SF)$ to ensure that the jump in tumour cells following each dose of radiotherapy is consistent with the Linear-Quadratic (LQ) model.
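To make this bookkeeping concrete: integrating Equation~(\ref{F}) across a dose time $t_i$ shows that the population of phenotype $z$ is instantaneously multiplied by $SF(z,c)$. The sketch below illustrates this with a plain LQ survival fraction and hypothetical, constant radio-sensitivity parameters (in the paper, $SF$ depends on $z$ and $c$, with the values used listed in Table~\ref{rad_tab2}).

```python
import math

def survival_fraction(dose, alpha, beta):
    """Linear-Quadratic (LQ) model: SF = exp(-(alpha*D + beta*D**2))."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def apply_dose(n, dose, alpha, beta):
    """The fitness term -log(1/SF) * delta(t - t_i) integrates to an
    instantaneous jump n(t_i^+) = SF * n(t_i^-)."""
    return n * survival_fraction(dose, alpha, beta)

# Hypothetical LQ parameters, for illustration only (not the paper's set R3)
alpha, beta, dose = 0.3, 0.03, 2.0      # [Gy^-1], [Gy^-2], [Gy]
n_after = apply_dose(1.0, dose, alpha, beta)
```

With these illustrative numbers a single $2$ Gy fraction leaves a fraction $e^{-0.72}\approx 0.49$ of the cells, and a course of $M$ fractions simply composes \texttt{apply\_dose} $M$ times.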
We now partially rescale our model by recasting the dependent variables $N$ and $\Phi$ in the following way: \begin{equation} n = \frac{N}{\Phi_{max}}, \qquad \phi = \frac{\Phi}{\Phi_{max}}, \end{equation} where the units of time, $t$ [hr], are preserved in dimensional form to facilitate the interpretation of the results. Under this rescaling, equations~(\ref{full}) become \begin{subequations} \begin{align} \hspace{-10mm} \frac{\partial n}{\partial t}= \frac{\partial }{\partial z} \left(\theta \frac{\partial n}{\partial z}-n v_z(z,c)\right)+ F(z,\phi,t;c) n,\\ \theta \frac{\partial n}{\partial z}-n v_z = 0, \qquad z\in\left\{0,1\right\}, \, t>0,\\ n(z,0)= n_0(z), \quad z\in(0,1),\\ \phi(t)=\int_0^1 n(z,t) \, dz,\\ \begin{aligned} F(z,\phi,t;c)= p(z,c)\left(1-\phi\right) -f(z)-\sum^{M}_{i=1} \log\left(\frac{1}{SF(z,c)}\right)\delta(t-t_i).\label{Fitness} \end{aligned} \end{align} In order to complete the model, it remains to specify several functional forms; this will be done in Sections \ref{fit} and \ref{vz_sec}. An extension of the model that accounts for spatial variation is presented in~\ref{AppendixA}, and preliminary results are included in Section~\ref{conclusion} (a full investigation of the spatially-extended model is postponed to future work). In what follows, we assume that the oxygen concentration $c$ has been rescaled so that $c=1$ corresponds to physiological oxygen levels, namely \textit{physoxia}, which is about $8\%$ oxygen \cite{McKeown2014}. When considering hypoxia, we focus on mild hypoxia, fixing $c=0.2$, which corresponds to $1.6\%$ oxygen in standard units (see~\ref{App_param} for details). At this oxygen concentration, necrosis can be neglected; it typically occurs at lower oxygen tensions (approximately $0.1\%$ oxygen in standard units).
Unless otherwise stated, we assume that the tumour initially comprises a small population of differentiated cells so that \begin{align} n_0(z)=\frac{\phi_0}{\sqrt{2\pi \sigma^2}} e^{-\frac{\left(z-0.5\right)^2}{2\sigma^2}}\label{initial_cond}, \end{align}\label{mixedmodel}% where the positive constants $\phi_0$ and $\sigma$ specify the initial size and phenotypic variance of the population. \end{subequations} The proportion of CSCs is often used to characterise heterogeneous populations of cancer cells. CSCs are typically identified by their expression of specific markers (such as CD44/CD24 and ALDH1, depending on the tumour type \cite{frontiers}); thresholds in these markers are used to distinguish stem from differentiated cancer cells. Since our model treats stemness as a continuously varying cell property, we introduce a threshold $z^* \in (0,1)$ in our simulations, and classify cells with $0<z<z^*$ as CSCs. We therefore define the proportion of stem cells at time $t$ to be: \begin{equation} \phi_{CSC}(t,z^*)=\frac{\int_0^{z^*} n(s,t) \,ds}{\phi(t)}. \label{cum_dist} \end{equation} As a further statistical feature of the cell population, we introduce the phenotypic mean, $\mu(t)$, which is defined as follows: \begin{equation} \mu(t)=\frac{1}{\phi(t)} \int_0^1 z\, n(z,t)\, dz. \label{mean} \end{equation} In the absence of suitable experimental data, it is difficult to specify many of the parameters and functional forms in Equations~(\ref{mixedmodel}). For this reason, we focus on identifying the qualitative behaviours that the model exhibits across a range of `biologically-reasonable' situations. \subsection{Fitness Landscape} \label{fit} \noindent When considering the fitness landscape, we assume that, for fixed values of $c$, the proliferation rate, $p(z,c)$, has a multi-peaked profile, with local maxima centred around $z=0$ and $z=0.55$, representing respectively cells with stem-like ($z=0$) and intermediate phenotypes ($z=0.55$, this value being arbitrary).
As shown in Figure \ref{fitness landscape}, this choice reduces the overlap of the two Gaussian profiles while maintaining the proliferation rate at $z=1$ close to zero. This asymmetry also emphasises that, under normoxia, more stem-like cells (i.e. $z<0.5$) proliferate at lower rates than more differentiated cells (i.e. $z>0.5$). Different environmental conditions (i.e., oxygen concentrations) will create distinct ecological \textit{niches}, each of which will favour a particular phenotype. We account for this effect by assuming that the amplitudes of the peaks in the proliferation rate are oxygen-dependent. Accordingly, we write: \begin{subequations} \begin{align} p(z;c)=p_0(c)\exp\left[-\frac{z^2}{g_0}\right]+p_1(c)\exp\left[-\frac{(z-0.55)^2}{g_1}\right],\\ p_i(c)=p_i^{max}\myfrac[2pt]{c^4}{K_{i}^4+c^4}, \quad i=0,1,\label{pO2} \end{align} \label{eqnetprol}% \end{subequations} where $p_0(c)$ and $p_1(c)$ are Hill–Langmuir-type functions with fourth-order exponents, so that the growth rate decays rapidly when $c\sim K_{i}$. We assume that differentiated cells are fitter than CSCs under normoxia and, therefore, choose $p_1^{max}>p_0^{max}$. At the same time, we note that chronic hypoxia is widely considered to favour CSCs \cite{Ayob2018, Conley2012, Lan2018}. The plasticity of CSCs enables them to adapt their metabolism to changing nutrient levels more readily than differentiated cells \cite{Garnier2019,Snyder2018} and, therefore, to survive and proliferate in challenging conditions. This behaviour contrasts with that of differentiated cancer cells, which tend to become quiescent when exposed to hypoxia. We account for these effects by assuming $K_0\ll K_1$. When we consider the rate of cell death due to apoptosis, $f(z)$, we note that apoptosis occurs predominantly when cells lose their clonogenic capacity. As such, it primarily affects TDCs with $z\sim 1$.
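Before specifying $f(z)$, we note that the niche-switching behaviour encoded in Equations~(\ref{eqnetprol}) is easy to check numerically; the sketch below evaluates $p(z,c)$ with the parameter values listed in Table~\ref{par_fitness}.

```python
import math

# Parameter values from Table (par_fitness)
P_MAX = {0: 0.005, 1: 0.02}    # p_i^max  [1/hr]
K     = {0: 0.05,  1: 0.3}     # Hill constants K_i
G     = {0: 0.01,  1: 0.04}    # Gaussian widths g_i
PEAK  = {0: 0.0,   1: 0.55}    # locations of the two fitness peaks

def p_i(c, i):
    """Oxygen modulation, Equation (pO2): Hill function with exponent 4."""
    return P_MAX[i] * c**4 / (K[i]**4 + c**4)

def p(z, c):
    """Proliferation rate, Equation (eqnetprol)."""
    return sum(p_i(c, i) * math.exp(-(z - PEAK[i])**2 / G[i]) for i in (0, 1))

# Under normoxia (c = 1) the progenitor peak dominates; under mild hypoxia
# (c = 0.2) the CSC peak is favoured, because K_0 << K_1 shuts down p_1 first.
```

For instance, $p(0.55,1)\approx 0.02$ hr$^{-1}$ exceeds $p(0,1)\approx 0.005$ hr$^{-1}$, while at $c=0.2$ the ordering is reversed, in line with the assumption that chronic hypoxia favours CSCs.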
Motivated by the mathematical models developed in \cite{ende2,Scott169615}, we propose the following monotonically increasing function for $f(z)$: \begin{equation} f(z)=d_f \,e^{-k_f(1-z)}.\label{apoptosis} \end{equation} Even though they may not proliferate, TDCs compete for space and resources and, thus, impact the tumour dynamics. In what follows, we consider two different cases. First, guided by experimental results reported by Driessens et al.~\cite{Driessens2012}, we assume that apoptosis of TDCs occurs on a much longer timescale than that on which cells proliferate, so that $d_f\ll\max_{z} p(z;1)$. In the second case, the rates of cell proliferation and apoptosis are assumed to be comparable. This situation represents a tumour with high cell turnover and, as we will see, gives rise to a tumour population with higher clonogenic capacity. In Figure \ref{fitness landscape}, we sketch the fitness landscape $F(z,0,t;c)$, as defined by Equations~(\ref{Fitness}) with $p$ and $f$ given by Equations~(\ref{eqnetprol})-(\ref{apoptosis}), for different environmental conditions, neglecting competition and radiotherapy. \begin{figure}[h!]
\begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.95\textwidth]{figures/notreatment/fitness_land2} \caption{high $d_f$} \label{landnorm2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.95\textwidth]{figures/notreatment/fitness_land1} \caption{low $d_f$} \label{landnorm1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.95\textwidth]{figures/notreatment/fitness_land3} \caption{low $d_f$} \label{landhyp} \end{subfigure} \caption{Series of sketches showing how the maximum growth rate $p(z;c)-f(z)$, as defined by Equations~(\ref{eqnetprol})-(\ref{apoptosis}), changes in different micro-environments: (a)-(b) under normoxia ($c=1$), the progenitor cells ($z=0.55$) are the fittest phenotype, and the death rate may be either high (a) or low (b); (c) under hypoxia ($c=0.2$), the CSCs ($z=0$) are the fittest phenotype. The parameter values used to produce the sketches are listed in Table~\ref{par_fitness}. Regions of positive and negative fitness are highlighted in green and red, respectively.} \label{fitness landscape} \end{figure} \begin{table}[h!] \centering \subfloat[proliferation]{ \begin{tabular}{c|c c c } \toprule[1.5pt]\addlinespace[2pt] & $p^{max}_i\, (hr^{-1})$ &$K_i$&$g_i$ \\ \hline\addlinespace[2pt] $i$=0&$0.005$&$0.05$&$0.01$\\ $i$=1& $0.02$ &$0.3$&$0.04$\\ \bottomrule[1.5pt] \end{tabular}} \hspace{10mm} \subfloat[apoptosis]{ \begin{tabular}{ c c} \toprule[1.5pt]\addlinespace[2pt] $d_f\, (hr^{-1})$ & $k_f$\\ \hline\addlinespace[2pt] $\left\{0.001,0.015\right\} $& $10$\\ \bottomrule[1.5pt] \end{tabular}} \caption{Range of parameter values used in the sensitivity analysis. More information on the specific parameter choice can be found in~\ref{AppendixA}.} \label{par_fitness} \end{table} We now consider the impact of radiotherapy on cell fitness.
As mentioned above, CSCs possess protective mechanisms that enable them to withstand damage caused by radiation and oxidative stresses~\cite{Radioresistance,Clark2016, Diehn2009,Rycaj2014,cancersreview,frontiers,Vassalli2019}. They are, therefore, more resistant to treatment than their differentiated counterparts. It is well known that local oxygen concentration levels also affect treatment outcomes~\cite{Horsman2012,Moulder1987}. While we account for this effect in the full spatial model (see~\ref{AppendixA}), here we focus on the role of phenotype-dependent radio-sensitivity. In particular, we adapt the standard Linear-Quadratic (LQ) model so that the tissue-specific coefficients, $\alpha (Gy^{-1})$ and $\beta(Gy^{-2})$, are phenotype dependent: \begin{subequations} \begin{align} -\log(SF)=\alpha(z) d + \beta(z) d^2,\label{lq_model} \end{align}% where $d$ is the radiation dose in grays (Gy) and $SF$ is the surviving fraction of cells. Equation~(\ref{lq_model}) is a natural continuum extension of previous models \cite{Leder2014,Saga}, in which two-compartment models are used to describe the time-evolution of cancer cells and cancer stem cells exposed to radiotherapy, and CSCs are assumed to be radio-resistant. Accordingly, here, we assume that $\alpha$ and $\beta$ are increasing functions of the phenotype $z$~\cite{Saga,cancersreview,frontiers}, of the following form: \begin{align} \alpha(z)= \alpha_{min}+(\alpha_{max}-\alpha_{min})\tanh\left(\frac{z}{\xi_R}\right),\label{rad_alpha}\\[2pt] \beta(z)=\beta_{min}+(\beta_{max}-\beta_{min})\tanh\left(\frac{z}{\xi_R}\right).\label{rad_beta} \end{align}\label{radiosensitivity} \end{subequations} In Equations~(\ref{rad_alpha})-(\ref{rad_beta}), $\xi_R$, $\alpha_{min,max}$ and $\beta_{min,max}$ are non-negative constants with $\alpha_{min}<\alpha_{max}$ and $\beta_{min}<\beta_{max}$.
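As an illustration, the following Python sketch evaluates the phenotype-dependent LQ model of Equations~(\ref{radiosensitivity}) using parameter set R1 from Table~\ref{rad_tab2} and $\xi_R=0.2$; the function names are ours, while the formulas and values come from the text.

```python
import numpy as np

# Sketch of the phenotype-dependent LQ model, Equations (radiosensitivity),
# using parameter set R1 from Table (rad_tab2) and xi_R = 0.2.

def lq_coeff(lo, hi, z, xi_r=0.2):
    # tanh interpolation between the CSC (z ~ 0) and TDC (z ~ 1) limits
    return lo + (hi - lo) * np.tanh(z / xi_r)

def survival_fraction(z, dose, a=(0.005, 0.15), b=(0.002, 0.10)):
    alpha = lq_coeff(a[0], a[1], z)
    beta = lq_coeff(b[0], b[1], z)
    return np.exp(-(alpha * dose + beta * dose**2))

# Radioresistance of CSCs: after a single 10 Gy dose, stem-like cells
# (z ~ 0) survive with high probability, TDCs (z ~ 1) hardly at all.
sf_csc = survival_fraction(0.0, 10.0)
sf_tdc = survival_fraction(1.0, 10.0)
print(sf_csc, sf_tdc)
```

This makes concrete the phenotypic selectivity of a single high dose: the surviving fraction at $z=0$ exceeds $0.7$, while at $z=1$ it is of order $10^{-5}$.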
Where possible, parameter estimates are taken from the literature (see~\cite{Saga} for estimates of $\alpha_{min,max}$ and $\beta_{min,max}$); the value $\xi_R=0.2$ is chosen so that differentiated cells (i.e. $z>0.5$) have maximum sensitivity to treatment (i.e., $\alpha(z)\sim \alpha_{max}$ for $z>0.5$). \begin{table}[h!] \centering \begin{tabular}{c|l l | l l } \toprule[1.5pt]\addlinespace[3pt] & \multirow{2}{*}{$[\alpha_{min},\alpha_{max}](Gy^{-1})$}&\multirow{2}{*}{$[\beta_{min},\beta_{max}](Gy^{-2})$}&\multirow{2}{*}{$\displaystyle\frac{\alpha_{min}}{\beta_{min}}(Gy)$}&\multirow{2}{*}{$\displaystyle\frac{\alpha_{max}}{\beta_{max}}(Gy)$}\\ &&&&\\ \hline\addlinespace[2pt] R1& $[0.005,0.15]$ & $[0.002,0.10]$ &2.5&1.5\\[2pt] R2& $[0.050,0.20]$ & $[0.020,0.05]$ &2.5&4\\[2pt] R3& $[0.005,0.40]$ & $[0.002,0.05]$ &2.5&8\\[1pt] \bottomrule[1.5pt] \end{tabular} \vspace{3mm} \caption{Summary of the parameter values used in Equation~(\ref{radiosensitivity}) to describe the three different RT responses used in model simulations. In all cases, we fix $\xi_R=0.2$.} \label{rad_tab2} \end{table} We consider three different parameter sets (see Table \ref{rad_tab2}); they may represent three cell populations which differ in their sensitivity to radiotherapy (RT). For cases R1 and R3, CSCs (with $z\sim 0$) respond in the same way to RT, whereas differentiated cancer cells (with $z> 0.5$) respond differently. For case R1, the small value of $\alpha_{max}/\beta_{max}$ for the sensitive cells ($z=1$) corresponds to a late-responding tissue, whereas for case R3, the large value of $\alpha_{max}/\beta_{max}$ corresponds to an early-responding tissue, with a low repair capacity, for which fractionation is known to be beneficial \cite{McMahon_2018}. Finally, case R2 is intermediate between cases R1 and R3. By allowing heterogeneity in the cell response to RT, we can investigate the selective pressure that RT exerts on the population.
For a given dosage and LQ model, differences in the radio-sensitivity of CSCs and differentiated cells are determined by the ratios $\alpha_{min}/\alpha_{max}\in (0,1)$ and $\beta_{min}/\beta_{max}\in(0,1)$. When both ratios are small, CSCs are more likely to survive RT than their differentiated counterparts and, therefore, the selective pressure of RT on the population is high. By contrast, as $\alpha_{min}/\alpha_{max}$ and $\beta_{min}/\beta_{max}$ approach unity, RT offers no selective advantage to CSCs as, at leading order, the response is independent of phenotype. The selective pressure also depends on the specific dose applied. For example, for high doses the quadratic term in Equation~(\ref{lq_model}) is dominant and the selective pressure is associated only with the value of $\beta_{min}/\beta_{max}$. By contrast, for lower doses, both the linear and quadratic terms contribute to cell killing and, so, the selective pressure of RT is associated with both $\alpha_{min}/\alpha_{max}$ and $\beta_{min}/\beta_{max}$. For these reasons, we will consider two different RT protocols: either a single dose of $10\,Gy$ is delivered or a fractionated schedule is used (here five doses of $2\,Gy$ are delivered over five consecutive days \cite{Brenner1991,Dale1985}). While R2 is expected to exert the weakest selective pressure under both protocols, whether R1 or R3 exerts the stronger pressure depends on the treatment protocol considered. \section{Population Dynamics in the Absence of Treatment} \label{notreatment} \noindent In this section, we present numerical solutions of Equations~(\ref{mixedmodel})-(\ref{apoptosis}) and~(\ref{eq_ad}) showing how, in the absence of treatment, the tumour cell distribution along the stemness axis evolves under normoxia and hypoxia. Our numerical solutions are generated using the method of lines, with discretisation performed in the $z$-direction.
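A minimal Python sketch of this method-of-lines approach is given below, for a generic phenotype-structured equation of the assumed form $n_t + (v(z)n)_z = Dn_{zz} + (p(z)-f(z)-\kappa\phi(t))n$. First-order upwinding stands in for the limited flux, forward Euler for the stiff solver, and the velocity profile, $D$ and $\kappa$ are illustrative assumptions rather than the values used to generate the figures; the sketch also evaluates the summary statistics of Equations~(\ref{cum_dist}) and~(\ref{mean}).

```python
import numpy as np

# Method-of-lines sketch for a generic phenotype-structured equation,
#   n_t + (v(z) n)_z = D n_zz + (p(z) - f(z) - kappa*phi(t)) n,
# standing in for Equations (mixedmodel). The growth/death terms use the
# normoxic peak of Equations (eqnetprol) and Equation (apoptosis); the
# velocity profile, D and kappa are illustrative assumptions.

M = 200
z_edge = np.linspace(0.0, 1.0, M + 1)            # finite-volume interfaces
z = 0.5 * (z_edge[:-1] + z_edge[1:])             # cell centres
dz = 1.0 / M

def vel(x):                                      # assumed maturation velocity
    return 4e-4 * 0.5 * (1.0 + np.tanh((x - 0.5) / 0.05))

D, kappa = 1e-4, 1.0                             # assumed diffusion/competition
net = 0.02 * np.exp(-(z - 0.55)**2 / 0.04) - 0.001 * np.exp(-10.0 * (1.0 - z))

n = np.exp(-(z - 0.5)**2 / (2 * 0.05**2))        # Gaussian initial condition
n *= 0.1 / (n.sum() * dz)                        # normalise to phi(0) = 0.1

dt = 0.1                                         # satisfies D*dt/dz^2 < 1/2
vf = vel(z_edge[1:-1])                           # interior face velocities
for _ in range(int(2000 / dt)):                  # integrate to t = 2000 hr
    phi = n.sum() * dz
    flux = np.where(vf > 0, vf * n[:-1], vf * n[1:])   # upwind advective flux
    flux = np.concatenate(([0.0], flux, [0.0]))        # no-flux boundaries
    lap = np.concatenate(([n[1] - n[0]], np.diff(n, 2),
                          [n[-2] - n[-1]])) / dz**2    # no-flux Laplacian
    n = n + dt * (-np.diff(flux) / dz + D * lap + (net - kappa * phi) * n)

phi = n.sum() * dz                               # total density, phi(t)
mu = (z * n).sum() * dz / phi                    # phenotypic mean, Eq. (mean)
phi_csc = n[z <= 0.3].sum() * dz / phi           # CSC proportion, Eq. (cum_dist)
print(phi > 0.0, 0.0 <= mu <= 1.0, 0.0 <= phi_csc <= 1.0)
```

The explicit time step is chosen so that the diffusive stability constraint $D\,\Delta t/\Delta z^2 < 1/2$ holds; an adaptive stiff solver, as in the text, removes the need for this restriction.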
In more detail and following \cite{Gerisch2006}, we use a finite volume scheme, opting for a Koren limiter to control the advection component of the structural flux. In this way, we reduce~(\ref{mixedmodel}) to a system of ordinary differential equations which can be solved in MATLAB using \textit{ode15s}, an adaptive solver for stiff systems. The numerical simulations are validated in Section \ref{LSA}, where we perform a linear stability analysis. The associated eigenvalue problem is solved numerically using MATLAB's \textit{chebfun} package \cite{Driscoll2014}. \subsection{Normoxic Conditions} \label{oxygen_env} \noindent In well-oxygenated environments, the advection velocity is positive and cells are driven towards a terminally differentiated phenotype, with $z=1$. Depending on the balance between the advective flux and cell renewal (i.e., Darwinian selection and Lamarckian induction), the model predicts a variety of long-time behaviours: the system relaxes to its steady state either monotonically or via damped fluctuations. We start by considering symmetric velocity profiles (see Figure \ref{vel_norm1}). As summarised in Figures \ref{norm_sym} and \ref{steady_state}, as the magnitude of the advection velocity, $V_+$, and its steepness, $\xi_+$, are varied, the system exhibits different long-time behaviours, even though the dynamics at early times are similar for all parameter sets considered (see Figure \ref{norm_sym}). If simulations are initialised with a small population of cells with $z\sim 0.5$, then the dynamics are initially dominated by proliferation. Over time, as $\phi$ increases, competition slows the cell proliferation rate and phenotypic advection becomes more important. As the cells mature, they accumulate near $z=1$, and the rate of natural cell death exceeds the rate of cell proliferation. From this time onwards, the growth curves corresponding to different parameter sets start to deviate. \begin{figure}[h!]
\centering \includegraphics[width=\textwidth]{figures/notreatment/normoxia/new_evol_sym} \caption{ Results from a series of numerical simulations of Equations~(\ref{mixedmodel})-(\ref{apoptosis}) and~(\ref{eq_ad}), showing how the cell distribution, $n(z,t)$, the phenotypic mean, $\mu(t)$, and the cell density, $\phi(t)$, change over time when we use a symmetric velocity profile (i.e., $\omega_+=1$ in Equation~(\ref{eq_ad})). As $V_+$ increases and $\xi_+$ decreases, the system can be driven to extinction. See Figure \ref{steady_state} for the values of the other model parameters.} \label{norm_sym} \end{figure} For example, in case A.2, the system rapidly relaxes to a non-zero steady state distribution characterised by cells with medium clonogenic capacity (i.e., a mix of highly proliferating cells and terminally differentiated cells, or TDCs). By contrast, for cases C.1 and C.2, the cell density, $\phi(t)$, decays exponentially to extinction at a rate dictated by $d_f$. In other parameter regimes, the relaxation phase is characterised by damped fluctuations. In case A.1, for example, fluctuations are driven by the interplay between apoptosis, competition and advection. As TDCs are eliminated, the reduction in competition allows re-growth of highly proliferative cancer cells (i.e., $z\sim0.55$). As these cells proliferate, competition slows growth and advection becomes dominant, resulting in the alternating pattern of red and white stripes observed in the surface plot for $n(z,t)$ shown in Figure \ref{norm_sym} for case A.1. Over time, the fluctuations decay and the system relaxes to its steady state distribution. In Section~\ref{LSA}, we present a complementary investigation of this behaviour, relating the damped oscillations to a complex eigenvalue in the linearisation about the equilibrium solution. \begin{figure}[h!]
\centering \includegraphics[width=0.9\textwidth]{figures/notreatment/normoxia/phase_diagrams_new} \caption{Series of phase diagrams characterising the steady state distribution predicted by the model as properties of the advection velocity, $v_z$ (i.e., the parameters $V_+$, $\xi_+$ and $\omega_+$), and the rate of apoptosis, $d_f$, vary. At each point in $(V_+,\xi_+)$ parameter space, we characterise the equilibrium distribution based on the number of peaks and the dominant phenotype (i.e., the $z$-locations of the local maxima) for different values of the parameters $\omega_+$ and $d_f$. For parameter sets that give rise to a significant fraction of CSCs (i.e., $\%$ CSCs $\geq 1\%$), the value of $\phi_{CSC}(0.3,t_{\infty})$, as defined by Equation~(\ref{cum_dist}), is also indicated.} \label{steady_state} \end{figure} Focusing on the long-time behaviour, the symmetric advective profile gives rise to a population with a unimodal equilibrium distribution, in which the location of the peak is dictated by the values of the other parameters. For example, for small values of the maximum death rate, $d_f$ (see case A.1), the distribution is skewed towards $z=1$, while for higher values of $d_f$ the peak is shifted towards the centre of the domain. These observations are summarised in Figure \ref{steady_state}, where we have further analysed how the properties of the equilibrium distribution depend on other parameters in the model. We note that as the advective velocity increases (i.e., larger $V_+$), the value of $\xi_+$ determines whether total extinction occurs. This suggests that there is a bifurcation as $V_+$ and $\xi_+$ vary, with the system transitioning from a trivial to a non-zero steady state (this behaviour will be investigated in Section~\ref{LSA}).
By contrast, the equilibrium distribution for an asymmetric velocity profile (i.e., $\omega_+=2$, as in Figure \ref{vel_norm2}) is multimodal, typically with two peaks. In this case, since the CSCs have a lower propensity to mature, they accumulate and persist in the population, even under normoxia. The second column of Figure \ref{steady_state} shows that the proportion of CSCs at long times increases as the death rate, $d_f$, the steepness parameter, $\xi_+$, and the maturation velocity, $V_+$, increase, until the CSCs become the dominant subpopulation (see, for example, Case B.3 in Figure \ref{norm_asym}). Varying the death rate, $d_f$, does not significantly affect whether extinction occurs; rather, it determines the location of the maximum peak in the equilibrium distribution (see, for example, case B.2 in Figure \ref{norm_asym}). For low death rates, cells are predominantly in a terminally differentiated state. As the death rate increases, the peak moves to the left, producing an equilibrium distribution in which a higher proportion of rapidly proliferating cells balances the high death rate. Figure \ref{norm_asym} shows how the system relaxes to its steady state when $\omega_+=2$. Comparison with Figure \ref{norm_sym} reveals that in this case the dynamics are characterised by secondary regrowth, driven by the accumulation of CSCs. For example, in case B.1, phenotypic diffusion enables the cancer cells to de-differentiate, acquire a stem-like phenotype and, therefore, contribute to population growth. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/notreatment/normoxia/new_evol_asym2} \caption{Results from a series of numerical simulations of Equations~(\ref{mixedmodel})-(\ref{apoptosis}) and~(\ref{eq_ad}), showing how the cell distribution, $n(z,t)$, the phenotypic mean, $\mu(t)$, and the cell density, $\phi(t)$, change over time.
For these results, we use an asymmetric velocity profile (i.e., $\omega_+=2$ in Equation~(\ref{eq_ad})). See Figure \ref{steady_state} for the values of the other model parameters.} \label{norm_asym} \end{figure} To summarise, the properties of the advection velocity, $v_z$, determine whether the model predicts extinction or persistence of CSCs, regardless of whether they are present initially. When $\omega_+=2$, random mutations (i.e., diffusion) may dominate the advective force near $z=0$, allowing CSCs first to form, then to proliferate and ultimately to comprise a significant proportion of the equilibrium population. CSCs have been observed in normoxic regions; for example, they have been found in perivascular tumour regions, where endothelial cells secrete factors that inhibit CSC maturation \cite{Calabrese2007}. By contrast, when $\omega_+=1$ (i.e., for symmetric velocity profiles), all cells mature over time, leading to the eventual extinction of CSCs. This behaviour could describe that of tumours which lack CSCs, or the effect of drugs which induce stem cell differentiation and, thereby, reduce the incidence of resistance to other treatments, such as radiotherapy. We conclude that targeting $V_+$ and $\xi_+$ may be effective for eliminating CSCs, increasing tumour sensitivity to treatment and, in certain scenarios, driving tumour extinction. \subsection{Hypoxic Conditions} \label{hyp cond} \noindent Under hypoxia, the advection velocity in our model is negative and cells are driven to de-differentiate. In this case, the equilibrium distribution is unimodal, with the dominant phenotype at $z=0$. Although varying the death rate $d_f$ does not affect the equilibrium distribution (compare cases H3 and H4 in Figure \ref{hyp1}), the values of $\omega_-$ and $\xi_-$ influence the width of the peak (compare cases H1 and H2 in Figure \ref{hyp1}) and, therefore, the variability in the population. \begin{figure}[h!]
\begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{figures/notreatment/hypoxia/evol_cond1} \caption{} \label{hyp1_cond1} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{figures/notreatment/hypoxia/evol_cond2} \caption{} \label{hyp1_cond2} \end{subfigure} \caption{Numerical results under hypoxic conditions for four parameter sets, all with $V_-=4\times 10^{-4}$. In (a) we use the standard initial condition defined by Equation~(\ref{initial_cond}), while in (b) the population is centred around $z=1$. The other parameter values are as follows: (H1) $\xi_-=0.05$, $\omega_-=1$ and $d_f=0.001$; (H2) $\xi_-=0.5$, $\omega_-=1$ and $d_f=0.001$; (H3) $\xi_-=0.5$, $\omega_-=2$ and $d_f=0.001$; (H4) $\xi_-=0.5$, $\omega_-=2$ and $d_f=0.015$. } \label{hyp1} \end{figure} Differences in the system dynamics also arise as the initial condition $n_0(z)$ varies. The results in Figure \ref{hyp1_cond1} indicate little variation in the system dynamics when the initial conditions from Section \ref{oxygen_env} are used. By contrast, in Figure \ref{hyp1_cond2} we observe marked differences when the initial conditions are centred around the TDCs. In this case, population regrowth is delayed, the delay depending on the choice of parameter values. For example, when $\omega_-=2$, the velocity in a neighbourhood of $z=1$ is smaller than when $\omega_-=1$. Consequently, cells de-differentiate more slowly, delaying tumour regrowth. Similarly, increasing the death rate, $d_f$, reduces the number of cells that can de-differentiate and, subsequently, delays regrowth. Therefore, while $d_f$ does not affect the equilibrium distribution, it influences the system dynamics. These results show how the formation of hypoxic regions can shape the development of a tumour.
In particular, the emergence of hypoxia maintains and enhances the pool of CSCs, preventing population extinction (see, for example, scenario D in Section \ref{oxygen_env}). \section{Population Dynamics in the Presence of Treatment} \label{radio_result} \noindent In the previous section, we found that the system possesses a stable steady state to which the dynamics converge for the range of parameter values considered. Therefore, we anticipate that, while treatment can perturb the system from its equilibrium, it will eventually relax to its stable steady state once treatment ends. Thus, we expect extinction to occur for parameter values lying in the stability region of the trivial steady state (see Figure \ref{xi_crit}). From this point of view, we are interested in understanding how different environmental conditions (i.e. normoxia and hypoxia), different treatment protocols and different tumour compositions affect the relaxation phase and, in particular, the time to recurrence. To account for variability in tumour responses, we consider the different advection velocities used in our earlier analysis (see Table \ref{tab_rad3}). Starting from the initial condition~(\ref{initial_cond}), cells follow different pre-treatment protocols, as specified in Table \ref{tab_rad3}. Without loss of generality, we shift time so that $t=0$ corresponds to $24$ hours before treatment begins. While attention will focus on tumour responses in constant environmental conditions, we also briefly consider treatment responses in changing environments. For each scenario, we simulate the response to treatment for the range of values of the radiation parameters listed in Table \ref{rad_tab2}. We denote by $n^{(S1,R1)}(z,t)$ the solutions corresponding to scenario $S1$ from Table \ref{tab_rad3} and radio-sensitivity parameter set $R1$ from Table \ref{rad_tab2}.
\begin{table} \hspace{-4mm} \begin{tabular}{c c c c} \toprule[1.5pt]\addlinespace[2pt] Scenario & Protocol &Parameters& Subsection\\[2pt] \hline\addlinespace[6pt] \multirow{4}{*}{$S1$} &\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols1}}} &&\multirow{12}{*}{\ref{radio_norm}} \\ &&$(V_+[10^{-4}],\xi_+,\omega_+,d_f)$&\\ & & =$\left(4,0.05,2,0.015\right)$&\\ &&&\\[2pt] \multirow{4}{*}{$S2$}&\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols2}}}&&\\ &&$(V_+[10^{-4}],\xi_+,\omega_+,d_f)$&\\ &&=$\left(8,0.05,1,0.001\right)$&\\ &&&\\[2pt] \multirow{4}{*}{$S3$}& \multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols2}}}&&\\ &&$(V_+[10^{-4}],\xi_+,\omega_+,d_f)$&\\ &&=$\left(8,0.05,2,0.001\right)$&\\ &&&\\[2pt] \hline\addlinespace[4pt] \multirow{4}{*}{$S4$}&\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols3}}}&&\multirow{4}{*}{\ref{radio_hyp}}\\ &&$(V_-[10^{-4}],\xi_-,\omega_-,d_f)$&\\ &&=$(2,0.5,2,0.001)$&\\ &&&\\[2pt] \hline\addlinespace[8pt] \multirow{4}{*}{$S5$} &\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols4}}}&&\multirow{8}{*}{\ref{radio+change}}\\ &&$(V_\pm[10^{-4}],\xi_+,\xi_-,\omega_+,\omega_-,d_f)$&\\ &&=$(8,0.05,0.5,1,2,0.001)$&\\ &&&\\[2pt] \multirow{4}{*}{$S6$} &\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols5}}}&&\\ &&$(V_\pm[10^{-4}],\xi_+,\xi_-,\omega_+,\omega_-,d_f)$&\\ &&=$(8,0.05,0.5,1,2,0.001)$&\\[2pt] &&&\\ \bottomrule[1.5pt] \end{tabular} \vspace{2mm} \caption{Parameter sets used to generate the numerical simulations in Section~\ref{fit}, together with the corresponding environmental conditions pre- and post-treatment (blue: normoxia, red: 
hypoxia). Simulations are initialised using Equation~(\ref{initial_cond}) at different times $t=-t_s$, as indicated in the second column. Radiotherapy is administered at time $t=24$ hours. The parameter values have been chosen to illustrate the range of qualitative behaviours that the model exhibits.} \label{tab_rad3} \end{table} \subsection{Treatment Response in Normoxic Conditions} \label{radio_norm} \noindent The simulation results presented in Figure \ref{radio_norm_growth} illustrate the different regrowth dynamics that can arise when well-oxygenated tumour cells are exposed to a single dose of RT. We identify three distinct behaviours: instantaneous regrowth (S1), decay and extinction (S2), and initial remission with subsequent regrowth (S3). While the cell survival fraction immediately post-treatment depends on the parameter values used in the LQ model (see Equation~(\ref{radiosensitivity})), the qualitative population regrowth dynamics post-treatment do not depend on these values. In more detail, for scenario S1, the cell density increases rapidly after treatment, driving the system towards its (asymptotic) equilibrium. By contrast, for scenarios S2 and S3, the growth curves initially decrease at similar rates until about $40$ days after treatment. Thereafter, for scenario $S3$ the tumour exhibits rapid regrowth to the equilibrium distribution, whereas for scenario $S2$, the tumour continues to shrink, until it is eventually eliminated. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{figures/radio/growthcurves/normoxia} \caption{Different treatment outcomes under normoxia. For each scenario S1, S2 and S3 (see Table \ref{tab_rad3}) we consider the dynamics of the total cell number, $\phi(t)$, and compare the responses for the radio-sensitivity parameter sets R1, R2 and R3 (see Table \ref{rad_tab2}) to the control, untreated case.
For each scenario we also present plots of the phenotypic cell distribution, $n(z,t)$, at different times for radiotherapy protocol R1. The vertical line indicates the time of irradiation; a second line follows the evolution of the control (i.e., in the absence of treatment).} \label{radio_norm_growth} \end{figure} The origin of such differences can be understood from the time evolution of $n(z,t)$ post-radiotherapy. Figure \ref{radio_norm_growth} shows that for case R1 of Table~\ref{rad_tab2}, the balance between cell proliferation and advection drives the system dynamics. The reduction in the cell density $\phi(t)$ post-radiotherapy reduces intra-population competition and allows the cells to resume proliferation. Depending on the magnitude of the advection velocity (which is positive), the cells either regrow ($S3$) or they are driven to a terminally differentiated state and, thereafter, become extinct ($S2$). For scenario $S3$, the presence of radioresistant CSCs post-treatment and a small positive velocity at $z=0$ together drive regrowth. As the CSCs start to mature, there is a continuous source of highly proliferative cells which, in turn, drives rapid regrowth of the tumour. As the total cell number increases, intra-population competition slows cell proliferation until eventually advection becomes dominant, driving the cells to de-differentiate. By contrast, for scenario $S2$, advection dominates proliferation along the entire phenotypic axis. Additionally, CSCs are absent, so that all cells rapidly become terminally differentiated and, thereafter, die. Comparison of scenarios S2 and S3 reveals how different phenotypic compositions can generate treatment responses which are initially qualitatively similar, but differ markedly at long times. This finding is reinforced in Figure \ref{radio_mean_norm}, where we plot the phenotypic mean, $\mu(t)$, as defined by Equation~(\ref{mean}).
For scenarios S2 and S3, the dynamics of the mean phenotype are indistinguishable at short times and do not start to diverge until approximately 20 days after treatment. \begin{figure}[h] \includegraphics[width=0.95\textwidth]{figures/radio/growthcurves/mean_normoxia} \caption{Series of plots showing the evolution of the phenotypic mean, $\mu(t)$, for scenarios S1, S2 and S3 (see Figure \ref{radio_norm_growth}). We note that the scales used on the vertical axes are different.} \label{radio_mean_norm} \end{figure} More generally, the results presented in Figure \ref{radio_mean_norm} reveal three characteristic behaviours for the evolution of the phenotypic mean following radiotherapy. The dynamics of $\mu$ may be the same as those prior to treatment, with negligible deviation from the control (see scenario S2). A discontinuity in $\mu$ may be induced by radiotherapy (see scenario $S1$). In this case, CSCs comprise a significant proportion of the population prior to RT and the effect of radioresistance is pronounced (see Figure \ref{radio_norm_growth}). As CSCs are more likely to survive radiotherapy than more mature cells, we observe an ``instantaneous'' shift in $\mu$ towards less mature phenotypes. The size of the discontinuity depends on the relative sensitivity of CSCs and TDCs to RT, or, using the terminology introduced in Section \ref{fit}, the selective power of RT. Since we are considering high radiation dosages, the discontinuity is determined by the ratio $\beta_{min}/\beta_{max}$. In order for the selective pressure of treatment to be apparent, CSCs must comprise a significant fraction of the population prior to treatment. This explains why, for scenario S3, there is an initial transient period during which, as for scenario S2, there is no discernible deviation from the control. Only at later times does the difference in the evolution of $\mu(t)$ for the different parameter sets become apparent. \begin{figure}[h!] 
\centering \includegraphics[width=0.9\textwidth]{figures/newradioprof/radio_evol_2.pdf} \caption{Series of numerical results showing how the growth dynamics and the phenotypic mean evolve following exposure to a single dose of radiotherapy when cell radio-sensitivity is a non-monotonic function of cell phenotype. The simulations are analogous to those presented in Figures~\ref{radio_norm_growth} and \ref{radio_mean_norm}, except that Equations~(\ref{eq:new_al_bet}) are used in place of Equations~(\ref{rad_alpha})-(\ref{rad_beta}). } \label{fig:App_radio1} \end{figure} We note that other factors, in addition to stemness, influence cell radio-sensitivity. It is natural to expect that cells which have permanently exited the cell cycle will be less radio-sensitive than cycling cells, as the DNA damage response may already be active in such cells \cite{Lee2014}. The functional forms for $\alpha$ and $\beta$ defined by Equations~(\ref{rad_alpha})-(\ref{rad_beta}) assume that radio-sensitivity increases monotonically with cell phenotype, $z$. In order to investigate situations in which TDCs have lower radio-sensitivity than proliferating cancer cells, we now adopt the following non-monotonic functional forms: \begin{subequations} \begin{align} \alpha(z)=\alpha_{min}+(\alpha_{max}-\alpha_{min})\tanh\left(\frac{z}{\xi_R}\right)H_{0.075}(1-z),\\[2pt] \beta(z)=\beta_{min}+(\beta_{max}-\beta_{min})\tanh\left(\frac{z}{\xi_R}\right)H_{0.075}(1-z), \end{align}\label{eq:new_al_bet} \end{subequations} where $H_\epsilon$ is defined in \S \ref{vz_sec}, and we arbitrarily fix $\epsilon=0.075$ (all other parameters are as defined in \S\ref{fit}). When the single dose experiment is repeated with the new radio-sensitivity profile, we observe an overall increase in the population survival fraction (compare Figures \ref{fig:App_radio1} and \ref{radio_norm_growth}) and changes in the dynamics of the population mean $\mu(t)$ (compare Figures \ref{fig:App_radio1} and \ref{radio_mean_norm}).
The differences are most pronounced for scenarios $S2$ and $S3$, where TDCs, localised near $z=1$, are dominant in the population prior to treatment. The qualitative growth dynamics (i.e., $\phi(t)$) are similar in both cases. Further investigation of these differences is beyond the scope of the current study and is postponed for future work. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{figures/radio/growthcurves/fractioned} \caption{Simulation results for fractionated radiotherapy protocols, showing how the total cell number $\phi(t)$ and the phenotypic mean $\mu(t)$ evolve for scenarios S1 and S3 (see Figure \ref{radio_norm_growth} for details). In all plots, the light purple shaded area indicates the variability in responses when a single dose of 10 Gy is administered and is included for comparison with the fractionated treatments (see Figure \ref{radio_mean_norm}). The yellow shaded area indicates the duration of the treatment for the fractionated case.} \label{frac_growth} \end{figure} In practice, delivery of a single (high) dose of 10 Gy may not be practical for treating patients due to adverse side effects \cite{Taylor2011}. Therefore, we now consider tumour responses to fractionated RT protocols. The trends for fractionated RT are similar to those for single doses for all scenarios in Table \ref{tab_rad3}. Typically, the proportion of cells that survive fractionated therapy is larger than for the single-dose case, by a factor of about 100. Consequently, for scenarios $S1$ and $S3$, the time to return to the equilibrium population distributions is reduced. For S2, treatment again causes a monotonic decrease in the cell density $\phi$ but, since more cells survive fractionated RT, it takes longer for the population to become extinct.
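The order of magnitude of this sparing effect can be illustrated directly from the LQ model. The following Python sketch compares the two protocols for parameter set R1, under the assumption of complete repair between fractions (so that log-survival adds across the five fractions); the function names are ours.

```python
import numpy as np

# Sketch comparing a single 10 Gy dose with five daily 2 Gy fractions under
# the LQ model of Equations (radiosensitivity), parameter set R1. We assume
# complete repair between fractions, so log-survival adds across fractions.

def lq_sf(z, dose, a=(0.005, 0.15), b=(0.002, 0.10), xi_r=0.2):
    alpha = a[0] + (a[1] - a[0]) * np.tanh(z / xi_r)
    beta = b[0] + (b[1] - b[0]) * np.tanh(z / xi_r)
    return np.exp(-(alpha * dose + beta * dose**2))

for z in (0.0, 1.0):                 # CSC versus terminally differentiated cell
    sf_single = lq_sf(z, 10.0)
    sf_frac = lq_sf(z, 2.0)**5       # five independent 2 Gy fractions
    # fractionation spares cells; the sparing is orders of magnitude
    # larger for the radio-sensitive TDCs than for the CSCs
    print(z, sf_frac / sf_single)
```

For these parameter values, the sparing factor $e^{80\beta(z)}$ is close to unity for CSCs but exceeds $10^3$ for TDCs, consistent with the population-level factor of order $100$ quoted above.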
For scenarios $S1$ and $S3$, we recall that for high doses of RT, the phenotypic mean was markedly affected by the specific LQ model parameters considered; this is not the case when lower doses are applied (see Figure \ref{frac_growth}). \begin{figure}[h!] \centering \includegraphics[width=0.95\textwidth]{figures/radio/Ff2} \caption{Phenotypic distribution $n^{(S1,R1)}(z,t)$ for the control (light blue), the colony exposed to a single dose (dark blue) and the one treated with fractionated dose $2$ Gy $\times5$ (green). The orange and yellow lines indicate the phenotypic mean for the single dose (orange) and fractionated (yellow) therapy respectively. Note that the first panel corresponds to the end of the treatment so that $t_{pt}$ is 24 hr and 120 hr for the 10 Gy and fractionated protocol respectively. On the other hand, the remaining panels are measured relative to the beginning of the treatment, which is at $t=24$ hr for both protocols.} \label{distFd} \end{figure} The variability in responses for scenarios S1 and S3 following a single dose of radiotherapy can be attributed to the temporary advantage CSCs have post treatment. When using a fractionated protocol, intra-population competition is maintained at the cost of fewer cells being killed. This is apparent when we compare the phenotypic distribution at different times for the two treatment protocols (see Figure \ref{distFd}). When $10$ Gy is administered in one dose (first panel, dark blue region), the peak of the distribution is at $z=0$. On the other hand, after 5 doses of $2$ Gy per day (first panel, green region), the proportions of differentiated and cancer stem cells are approximately equal. Given that the former proliferate faster than the latter, the differentiated cells quickly become the dominant phenotype. Consequently, one month after treatment ends (third panel in Figure \ref{distFd}), the proportion of CSCs in the population is the same for both protocols. 
We conclude further that the single dose protocol outperforms the fractionated protocol when we compare the total number of cells (the blue curve is below the green one for all values of $z$). \subsection{Treatment Response in Hypoxic Conditions} \label{radio_hyp} \noindent Cell populations that are continuously exposed to hypoxia exhibit instantaneous re-growth following RT, as shown in Figure \ref{rad_hypoxia_growth}. Compared with the treatment outcome under normoxia, a higher percentage of cells survive radiation, because there is a larger proportion of radio-resistant cells in the population under hypoxia. Even though a smaller fraction of cells is killed, re-growth is usually slower under hypoxia than under normoxia. We note also that, following exposure to the single and fractionated protocols, the phenotypic mean $\mu(t)$ shifts toward $z=0$ under hypoxia, favouring CSCs as the dominant phenotype (see Figure \ref{rad_hypoxia_growth}). The drift in $\mu$ is less pronounced for the fractionated case, suggesting that the latter protocol is less favourable for the immediate accumulation of a resistant subpopulation of CSCs than the single dose. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{figures/radio/growthcurves/hypoxianew} \caption{Comparison of the tumour cell responses to single and fractionated radiotherapy protocols under hypoxia for scenario S4 (see Table \ref{tab_rad3}). Simulation results showing the time evolution of the cell density, $\phi(t)$, and phenotypic mean, $\mu(t)$, are presented. For comparison, the light purple shaded areas in the fractionated plots indicate the variability in the response when a single dose of $10$ Gy is administered. 
The yellow shaded areas indicate the duration of treatment for the fractionated case.} \label{rad_hypoxia_growth} \end{figure} Taken together, our simulation results suggest that, under hypoxia, RT may accelerate the accumulation of resistant cells, while significantly reducing the overall growth rate of the population. \subsection{Treatment Response in a Changing Environment} \label{radio+change} \noindent Thus far we have assumed that the oxygen concentration remains constant throughout treatment. While this may accurately describe RT responses for cells cultured \textit{in vitro}, such control is likely to be absent \textit{in vivo} \cite{Arnold2018,Fenton2001,Kempf2015}. There is currently no consensus about the impact of RT on tumour vasculature and, hence, tissue re-oxygenation. On the one hand, high doses of radiotherapy may damage the vasculature~\cite{Hormuth} and decrease nutrient availability post radiotherapy. On the other hand, moderate RT may transiently increase tissue oxygenation by \emph{normalising} the tumour vasculature (vessel normalisation is a phenomenon that has been observed when tumours are exposed to vascular-targeting agents which destroy some of the blood vessels in a way that increases blood flow through the network and, thereby, tissue oxygen levels \cite{Carmeliet2011,Jain2014}). Moreover, as tumour cells are killed, the pressure on immature vessels not damaged by the radiation decreases, and oxygen supply to the surviving cells may increase. Equally, hypoxic regions may form at later times as the tumour regrows. From this point of view, radiotherapy may impact both the phenotypic distribution of the cell population (and, thereby, its radio-sensitivity), and oxygen levels post-treatment. We can use our mathematical model to investigate these scenarios, by assuming that oxygen levels change post radiotherapy. 
Based on the results presented in Sections \ref{radio_norm} and \ref{radio_hyp}, we anticipate that reoxygenation of a hypoxic tumour will be beneficial in certain cases, driving CSC maturation, and even leading to tumour eradication. The results presented in Figure \ref{posthypo_2} show that the long-term tumour regression is preceded by an initial phase of regrowth during which CSCs that survive treatment de-differentiate and proliferate. Such a treatment might initially be considered unsuccessful, although the stability of the trivial steady state upon re-oxygenation leads to extinction at longer times. \begin{figure}[h] \begin{subfigure}{0.4\textwidth} \begin{subfigure}{\textwidth} \includegraphics[width=0.9\textwidth]{figures/radio/growthcurves/reox2.pdf} \caption{S5} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=0.9\textwidth]{figures/radio/growthcurves/post_hyp} \caption{} \label{posthypo_2b} \end{subfigure} \end{subfigure} \hspace{5mm} \begin{subfigure}{0.5\textwidth} \includegraphics[width=0.9\textwidth]{figures/radio/growthcurves/differentreox2.pdf} \caption{S6} \label{posthypo_2c} \end{subfigure} \caption{Growth curves for changing environmental conditions: (a) re-oxygenation and (b) post-radiation hypoxia for the parameter values S5 and S6 in Table \ref{tab_rad3}, respectively. Different responses to treatment are compared based on parameter values from Table~\ref{rad_tab2}. (c) Growth curve $\phi(t)$ and phenotypic mean $\mu(t)$ evolution for model $R1$ from Table~\ref{rad_tab2}, when exposed to transient post-treatment hypoxia. We denote by $T_R$ the time at which re-oxygenation occurs (indicated by the arrows in the plot). If $T_R$ is sufficiently small, then re-oxygenation does not drive re-growth of the cell population. If we waited for a sufficiently long time (as in case $T_R=1000$), then re-oxygenation would first drive regrowth. 
Areas in blue and pink correspond to intervals of normoxia and hypoxia respectively.} \label{posthypo_2} \end{figure} As mentioned previously, when high radiation doses are applied \textit{in vivo}, it is likely that the vessel network is also damaged, potentially inducing hypoxia \cite{Arnold2018}. Figure \ref{posthypo_2b} shows that such environmental changes may negatively impact the outcome. The formation of an hypoxic region favours the development and maintenance of radioresistant CSCs, reducing the treatment efficacy and making it more difficult to eradicate the tumour. At the same time, environmental changes may be transient: damaged blood vessels are likely to be replaced by new vessels which form via angiogenesis and re-oxygenate the damaged regions. As shown in Figure \ref{posthypo_2c}, depending on the time-scale required for vessel regrowth (indicated by $T_R$), different behaviours may arise. If the duration of RT-induced periods of hypoxia is sufficiently short, then the size of the cell population remains low. By contrast, if there is sufficient time for cells to de-differentiate (see $T_R=1000$), then re-oxygenation leads to a rapid increase in cell number, although eventually the cells die out. These results highlight the complex interplay between tumour growth and treatment response \textit{in vivo} and the importance of environmental factors in determining the eventual outcome of radiotherapy treatment. \subsection{Structural Flux} \label{vz_sec} Plasticity is an essential feature of phenotypic adaptation to changing environmental conditions~\cite{dirkse,Pisco2015}. It assumes that cells with the same genome can acquire distinct phenotypes depending on their epigenetic status, which is also inheritable. Phenotypic variation may be mediated by random (spontaneous) \textit{epigenetic mutations} \cite{Lorenzi2016}, which we assume to be rare. 
We account for this effect by including in the structural flux a diffusion term with a constant diffusion coefficient $\theta=5\times 10^{-6}$ hr$^{-1}$(see Equation \ref{full1}). Such random mutations should not favour any specific phenotype, and \textit{Darwinian} selection (i.e. the fitness function $F$) drives phenotypic evolution of the population. This aspect has been widely studied in previous work in order to investigate how cells adapt to different environments \cite{Ardaseva2019,Lorenzi2016,Villa2019}. At the same time, there is evidence that phenotypic switching may be mediated by environmental factors via \textit{Lamarckian} selection (or induction) \cite{Pisco2015}. In this framework, cells adapt to their environment~\cite{dirkse,schaider} by following a preferential (\emph{biased}) trajectory in phenotypic space. We can, therefore, envisage situations in which a subpopulation may be prevalent in a population without being the fittest subpopulation (i.e. the population with the highest proliferation rate). For example, recent studies have identified cell de-differentiation and CSC maintenance as stress responses to harsh environmental conditions \cite{Pisco2015}, including hypoxia. More specifically, cells respond to hypoxic stress by up-regulating Hypoxia Inducible Factors (HIFs) which, in turn, promote the expression of stem-related genes~\cite{Garnier2019,Liu2014,Pistollato2010,Pistollato2009}. HIF suppression has also been linked to cell differentiation and reduced levels of stemness~\cite{Shiraishi2017}. We account for such micro-environment mediated adaptation by incorporating an advective term in the structural flux. Cells are assumed to evolve along the stemness axis with a velocity $v_z=v_z(z,c)$, that depends on the oxygen concentration $c$ and cell phenotype $z$. Under normoxia, cells tend to differentiate, and $v_z>0$. 
From this point of view, the model is similar to classical age-structured models \cite{Perthame2007,Webb2008}, with $v_z$ being analogous to a \emph{maturation} velocity. In our model, however, \emph{ageing} (i.e. differentiation or loss of clonogenic potential \cite{Scott169615}) may be reversible. For example, under hypoxia (i.e. $c \leq c_H$), we assume $v_z<0$ (see Figure \ref{vel_prof}) and a more stem-like character is promoted. \begin{figure}[h!] \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.95\textwidth]{figures/model/velocity_prof/normxi0_05} \caption{ $\xi_+=0.05$} \label{vel_norm1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.95\textwidth]{figures/model/velocity_prof/normxi0_5} \caption{ $\xi_+=0.5$} \label{vel_norm2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.95\textwidth]{figures/model/velocity_prof/hypxi0_5} \caption{$\xi_-=0.5$} \end{subfigure} \caption{Series of sketches showing how $v_z^+$ and $v_z^-$, as defined by Equations~(\ref{eq_ad_norm}) and~(\ref{eq_ad_hyp}) respectively, change as the parameters $\xi_\pm$ and $\omega_\pm$ vary.} \label{vel_prof} \end{figure} Combining the above observations, and motivated in part by recent, similar considerations~\cite{hodgkinson}, we propose the following functional forms for the phenotypic drift term, $v_z$: \begin{subequations} \begin{align} v_z(z;c)=v^+_z(z) H_{\epsilon}(c-c_H)-v^-_z(z)H_{\epsilon}(c_H-c),\\[3pt] v^+_z(z)= \myfrac[2pt]{V_+}{V^*_+} \tanh\left(\myfrac[3pt]{z^{\omega_+}}{\xi_+}\right)\tanh\left(\myfrac[2pt]{(1-z)}{\xi_+}\right),\label{eq_ad_norm}\\[3pt] v^-_z(z)=\myfrac[2pt]{V_-}{V^*_-} \tanh\left(\myfrac[2pt]{z}{\xi_-}\right)\tanh\left(\myfrac[3pt]{(1-z)^{\omega_-}}{\xi_-}\right).\label{eq_ad_hyp} \end{align}\label{eq_ad} \end{subequations} where $H_\epsilon$ is a smooth variant of the Heaviside function approaching the latter in the limit of $\epsilon \rightarrow 0$ (i.e., 
$H_\epsilon(x)= {(1+\tanh(\epsilon^{-1}x))}/{2}$). In Equations~(\ref{eq_ad}), the normalising factors $V_\pm^*$ ensure that $\left(\max_z v^\pm_z\right)/V_\pm=1$ and $V_\pm\,(\text{hr}^{-1})$ corresponds to the magnitude of the velocity. Further, by controlling the advection speed along the stemness axis, $V^{-1}_\pm$ determines the timescales for maturation and de-differentiation. The parameters $\xi_\pm$ regulate the slopes of $v_z$ at the boundaries $z=0,1$. As shown in Figure \ref{vel_norm1}, when $\xi_\pm \ll 1$, the advection velocity is steep when $z\sim 0,1$ and flatter elsewhere. This functional form is similar to that proposed in~\cite{hodgkinson}. For larger values of $\xi_\pm$, the variation is more gradual, with a single maximum (or minimum) near $z\sim0.5$ (see Figure \ref{vel_norm2}). The exponents $\omega_\pm$ allow us to tune the symmetry/asymmetry in $v_z$ and also to modulate the flux at the boundaries (see Figure \ref{vel_prof}). For example, if $\omega_+=2$, then $v^+_z(0)=\partial_z v^+_z(0)=0$, which means that CSCs will be less likely to differentiate than in the case $\omega_+=1$. In the absence of experimental data with which to specify the parameters in the phenotypic drift velocity, we consider combinations of the following parameter sets: \begin{itemize} \item $V_\pm \in \left\{2,4,8\right\} \times 10^{-4} \left[\text{hr}^{-1}\right]$, \item $\xi_\pm\in\left\{0.05,0.1,0.5\right\}$, and \item $\omega_\pm\in\left\{1,2\right\}$. \end{itemize} In summary, our phenotype-structured model for the growth and response to radiotherapy of a solid tumour is defined by Equations~(\ref{mixedmodel})-(\ref{eq_ad}). A list of the model parameters and estimates of their values can be found in Table~\ref{param_set} in~\ref{AppendixA}.
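The drift velocity in Equations~(\ref{eq_ad}) can be evaluated numerically as follows. This is a minimal sketch: for brevity a single set of values $(V,\xi,\omega)$ is used for both branches, and the smoothing width of $H_\epsilon$ in the oxygen switch is arbitrarily taken small so that normoxia and hypoxia separate cleanly:

```python
import numpy as np

def H(x, eps=1e-2):
    """Smoothed Heaviside, H_eps(x) = (1 + tanh(x/eps)) / 2; eps is kept
    small so the normoxia/hypoxia switch at c = c_H is sharp."""
    return 0.5 * (1.0 + np.tanh(x / eps))

def vplus_raw(z, xi, omega):
    # unnormalised v_z^+ : differentiation drift, active under normoxia
    return np.tanh(z ** omega / xi) * np.tanh((1.0 - z) / xi)

def vminus_raw(z, xi, omega):
    # unnormalised v_z^- : de-differentiation drift, active under hypoxia
    return np.tanh(z / xi) * np.tanh((1.0 - z) ** omega / xi)

def v_z(z, c, c_H=0.1, V=4e-4, xi=0.5, omega=1):
    """Phenotypic drift v_z(z; c); the same V, xi, omega are used for
    both branches here, whereas the model allows distinct V_+/-, etc."""
    zz = np.linspace(0.0, 1.0, 1001)
    Vp_star = vplus_raw(zz, xi, omega).max()    # normalisers: max_z v = V
    Vm_star = vminus_raw(zz, xi, omega).max()
    return (V / Vp_star * vplus_raw(z, xi, omega) * H(c - c_H)
            - V / Vm_star * vminus_raw(z, xi, omega) * H(c_H - c))

z = np.linspace(0.0, 1.0, 101)
v_norm = v_z(z, c=1.0)    # normoxia: cells mature, v_z > 0 in (0, 1)
v_hyp = v_z(z, c=0.0)     # hypoxia: cells de-differentiate, v_z < 0

assert v_norm[1:-1].min() > 0.0
assert v_hyp[1:-1].max() < 0.0
```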
\section{Introduction} In the past few years, deep learning has emerged as a promising approach and the foundation for a wide range of real-world applications. As network architecture becomes more and more sophisticated and training costs rise, well-trained models become lucrative targets for the adversary looking to ``steal'' them. By querying the publicly available APIs of these models, an adversary can collect the outputs to train a piracy model, dubbed \textit{model extraction} attacks~\cite{DBLP:conf/uss/TramerZJRR16, DBLP:conf/uss/JagielskiCBKP20, DBLP:conf/uss/ChandrasekaranC20, DBLP:conf/crypto/CarliniJM20, DBLP:conf/icml/RolnickK20, DBLP:conf/icml/BeguelinTPK21, DBLP:conf/uss/ZhuCZL21}. Existing works on mitigating model extraction attacks and protecting the Intellectual Property (IP) of trained models fall into two groups~\cite{DBLP:conf/sp/JiaYCDTCP21, DBLP:journals/corr/abs-2109-10870}. The first group is based on \textit{watermarking} techniques~\cite{DBLP:conf/uss/AdiBCPK18, DBLP:conf/uss/JiaCCP21, DBLP:conf/mm/SzyllerAMA21, DBLP:conf/cvpr/OngCNFY21, DBLP:journals/corr/abs-1809-00615, DBLP:journals/nca/MerrerPT20, DBLP:conf/ih/ShafieinejadLWL21}. The idea is that the model owner introduces into her IP model a backdoor (\ie, a watermark), which would persist during the model extraction. By checking whether a suspect model contains the injected watermark, a defender can determine whether this model is pirated. The other category is based on \textit{fingerprinting} techniques that leverage inherent information (\ie, \textit{decision boundary}) of the trained models. Observing that a DNN model can be uniquely profiled by its decision boundary which is also likely to be inherited by piracy models, a model extraction attack can be identified by examining whether a suspect model has (almost) the same decision boundary as the victim model. 
A line of research~\cite{DBLP:conf/cvpr/HeZL19, DBLP:conf/iclr/LukasZK21, DBLP:conf/asiaccs/CaoJG21} adopts adversarial examples to represent the decision boundaries. \begin{figure*}[thp] \centering \includegraphics[width=0.9\linewidth]{img/uap_for_model_verification.pdf} \vspace{-3mm} \caption{\small \textbf{Illustrations of local and universal adversarial perturbations.} Left: local adversarial perturbations are less robust to point-to-point decision boundary modification due to extraction. Right: our framework relies on the stable correlation of decision boundaries profiled by universal adversarial perturbations (UAPs).} \vspace{-3mm} \label{fig:my_label} \end{figure*} However, the effectiveness of existing mitigation schemes has been challenged. Watermarking-based solutions suffer from a utility drop caused by the watermarks. Another concern is that an attacker can illegally inject a backdoor to falsely claim ownership, which violates the non-forgeability requirement~\cite{DBLP:conf/uss/JiaCCP21}. Adversarial examples can only capture the \textit{local geometry}, particularly, orientations of the decision boundary in local regions surrounding the adversarial examples, which may fail to be transferred to the suspect model due to decision boundary variation during the extraction~\cite{DBLP:conf/ccs/PapernotMGJCS17,DBLP:conf/cvpr/KhrulkovO18}. In this paper, we explore methods to capture the global characteristics of the decision boundary. As demonstrated in \autoref{fig:my_label}, we propose a more effective model extraction detection scheme based on \textit{Universal Adversarial Perturbations} (UAPs)~\cite{DBLP:conf/cvpr/Moosavi-Dezfooli17}. A carefully selected UAP vector $\uap$ can fool the model on almost all datapoints. We find that UAPs are drawn from a low-dimensional subspace that contains most of the normal vectors of the decision boundary. 
Due to decision boundary dependency, UAP subspaces of piracy models are more consistent with that of the victim model, which enables us to give a similarity score. There are two challenges in applying UAPs for detecting model extraction. First, since the calculation of UAPs usually requires knowledge of model parameters (\ie, white-box access), which model owners are not willing to provide, it is intractable for the defender to obtain UAPs of suspect models via black-box access. The second challenge is how to reliably distinguish between a piracy model and a homologous model (\ie, a model trained on the same training data rather than on the victim model's outputs, which should not be considered ``stolen''). To tackle the first challenge, we propose a fingerprinting function which is obtained by querying the suspect model with a number of datapoints perturbed by the victim's UAPs. More informative fingerprints need to capture as many parts of the decision boundary as possible. We therefore adopt K-means clustering on the last layer of the victim model to ensure that the datapoints are uniformly selected from different source classes and move towards different target classes. To address the second challenge, we design an encoder to map fingerprints of the victim model, piracy models, and homologous models into a joint representation space. We adopt \textit{contrastive learning}~\cite{DBLP:conf/icml/ChenK0H20} (which aims to shorten the distances between samples of the same class and push away samples of other classes) to project homologous models farther away from the victim model than the piracy models. In summary, we propose a more accurate, robust and general IP protection framework against model extraction attacks. Our main contributions are: \begin{itemize}[itemsep=0pt,topsep=2pt,leftmargin=12pt] \item We present one of the first attempts to leverage UAP distribution dependency to measure the decision boundary similarity between models. 
We show that UAPs outperform local adversarial perturbations for model fingerprinting. \item We propose a novel model ownership verification framework based on UAP fingerprinting that achieves a highly competitive detection rate in terms of AUC. \item Compared with prior fingerprinting works, we demonstrate the capability of our framework for detecting piracy models that have undergone post-extraction modifications. \item We adopt contrastive learning in encoder training to address the similarity gap between homologous models and piracy models. A new data augmentation approach is proposed to create ``views'' for fingerprints. \end{itemize} \section{Background and Related Work} \mypara{Model extraction} violates the confidentiality of machine learning models~\cite{DBLP:conf/uss/TramerZJRR16, DBLP:conf/uss/JagielskiCBKP20, DBLP:conf/uss/ChandrasekaranC20, DBLP:conf/crypto/CarliniJM20, DBLP:conf/eurosp/JuutiSMA19}. In a model extraction attack, the attacker only has black-box access to a victim model and aims at stealing it by posing queries. The obtained model is expected to be functionally similar. To extract a model, the attacker first needs to collect a set of unlabeled natural data. Then the natural data are mixed with carefully crafted synthesized data to query the victim model. The returned labels are then used to train a piracy model. This process repeats for several iterations until the piracy model recovers a satisfactory portion of the victim's utility. \mypara{Model fingerprinting} relies on finding existing features that characterise the model. Recent works rely on the different transferabilities of the victim model's adversarial examples~\cite{DBLP:conf/iclr/LiuCLS17, DBLP:journals/corr/TramerPGBM17, libo_2, bai2021ai, cezhang_2, DBLP:conf/iclr/MadryMSTV18} on piracy models and independent models. They are widely used to solve problems like model modifications~\cite{DBLP:conf/cvpr/HeZL19} and model extraction attacks~\cite{DBLP:conf/asiaccs/CaoJG21, DBLP:conf/iclr/LukasZK21}. 
Cao \etal~\cite{DBLP:conf/asiaccs/CaoJG21} present an adversarial-example-based algorithm to generate datapoints near the decision boundary and utilize the transferability gap of those data to identify the piracy models. However, the performance of this work is not stable across different model architectures. Lukas \etal~\cite{DBLP:conf/iclr/LukasZK21} also adopt the transferability to craft synthesized datapoints named ``conferrable examples'' that only transfer to piracy models instead of homologous models. Crafting conferrable examples involves training up to 30 models and then backpropagating through them to obtain a gradient update. This leads to a huge overhead cost. \section{Problem Formulation} \subsection{Model Definition} Now we formally define the IP protected DNN models and the \textit{piracy models}. Consider a problem domain denoted by $\mathcal{X} \subset \mathbb{R}^M$. Each element $\mathbf{x} \in \mathcal{X}$ is labeled by one of $N$ classes, say the $i$-th class, denoted by a one-hot vector $l(\mathbf{x}) \in \mathbb{R}^N$. A DNN model is a function $f : \mathbb{R}^M \to \mathbb{R}^N$ which takes as input $\mathbf{x} \in \mathbb{R}^M$, and outputs a vector $f(\mathbf{x}) \in \mathbb{R}^N$ with its $i$-th entry $f(\mathbf{x})_i$ denoting the model's confidence that $\mathbf{x}$ is from the $i$-th class. \begin{definition} (IP Protected DNN Model). A DNN model owned by a model owner $u$ is denoted by $\vmodel{u} : \mathbb{R}^M \to \mathbb{R}^N$. It is trained by the model owner on their dataset $\vdata{u} \subset \{ (\mathbf{x}, l(\mathbf{x})) \mid \mathbf{x} \in \mathcal{X} \}$, aiming to optimize \begin{equation} \mathbb{P}_{\mathbf{x} \sim \mathcal{X}}(\argmax_{k}(\vmodel{u}(\mathbf{x})_{k}) = i \land l(\mathbf{x})_i = 1). 
\end{equation} \end{definition} Two different model owners, say $u$ and $v$, might have slightly different training datasets, model structures, and training processes. Their trained models ($\vmodel{u}$ and $\vmodel{v}$) are highly similar but considered independent; we refer to them as \textit{homologous models}. In contrast, in model extraction attacks, an adversary may query a victim model $\vmodel{u}$ with her chosen inputs and obtain the model outputs, which can be used as labels to train a model. We refer to such a model as a \textit{piracy model} and give a formal definition as follows. \begin{definition} (Piracy Model). A piracy model obtained by an attacker who launches model extraction attacks on a victim model $\vmodel{u}$ is denoted by $\pmodel{u}: \mathbb{R}^M \to \mathbb{R}^N$. It is trained on her dataset $\pdata{u} \subset \{ (\mathbf{x}, \vmodel{u}(\mathbf{x}))\mid\mathbf{x} \in \mathcal{X} \}$, aiming to optimize \begin{equation} \mathbb{P}_{\mathbf{x} \sim \mathcal{X}}(\argmax_{k}(\pmodel{u}(\mathbf{x})_{k}) = \argmax_{k}(\vmodel{u}(\mathbf{x})_{k})). \end{equation} \end{definition} \subsection{Threat Model} The threat model considered in this paper involves a model owner who deploys its trained model $\vmodel{u}$ as a cloud service and an adversary who tries to launch model extraction attacks against $\vmodel{u}$ and deploys the piracy model $\pmodel{u}$ for financial benefits. The model owner here acts as both the \textit{victim} and the \textit{defender} who aims to verify whether a suspect model $\smodel{}$ is a piracy or homologous model of $\vmodel{u}$. \mypara{Attacker’s Capability and Knowledge.} The attacker has black-box access to a victim model $\vmodel{u}$ and knows its problem domain. To evade potential model extraction detection schemes, the adversary may also apply various modifications (\eg, fine-tuning, compression, pruning and adversarial training) to piracy models. 
\mypara{Defender's Capabilities and Knowledge.} The defender (victim) has white-box access to its model $\vmodel{u}$ (\ie, model parameters, hyper-parameters, and training dataset) and black-box access to a suspected model $\smodel{}$ with a limited number of queries. Specifically, the defender has no knowledge about the suspect model's architecture, parameters, hyper-parameters, or the attacker's data used during the extraction. \subsection{Design Overview} \label{sec:design_goals} In this paper, we propose to use UAPs to capture the global geometric information of a DNN model's decision boundary for model extraction detection. The notion of a Universal Adversarial Perturbation suggests that a carefully selected perturbation vector $\uap \in \mathbb{R}^M$ of a model $\modelsymbol{}$ can fool the model on almost all datapoints drawn from the problem domain $\mathcal{X} \subset \mathbb{R}^M$. Formally, a UAP $\uap$ is $(\xi,\delta)$-universal such that \begin{equation} \begin{aligned} \mathbb{P}_{\mathbf{x} \sim \mathcal{X}} (\argmax_{k}{\modelsymbol{}(\mathbf{x} + \uap)_k} &\ne \argmax_{k'}{\modelsymbol{}(\mathbf{x})_{k'}}) \geq 1 - \delta, \\ s.t., ~~ ||\uap||_2 &\leq \xi \end{aligned} \end{equation} where $\modelsymbol{}(\cdot)$ is the output probability vector. A UAP $\uap$ can be viewed as a natural defect of the model $\modelsymbol{}$ which exposes the geometric correlations between different local gradients of the model's decision boundary. In fact, for a given model, there are many UAPs, and they lie in a low-dimensional subspace in which most of the normal vectors of decision boundaries lie, as pointed out by Moosavi-Dezfooli \etal~\cite{DBLP:conf/cvpr/Moosavi-Dezfooli17}. The UAP subspaces of two homologous models are independent since the decision boundaries are formed via two independent training processes. 
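The $(\xi,\delta)$-universality condition above can be estimated empirically from query access alone. The sketch below does this for a synthetic linear classifier and a random (non-optimised) perturbation; both are placeholders, since computing a genuine UAP requires the optimisation procedure of~\cite{DBLP:conf/cvpr/Moosavi-Dezfooli17}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a classifier f : R^M -> R^N (here a linear map);
# a real victim model would be queried through its API instead.
M, N = 20, 5
W = rng.standard_normal((N, M))

def f(x):
    return x @ W.T  # one row of logits per datapoint

def fooling_rate(model, X, v):
    """Empirical estimate of P(argmax model(x+v) != argmax model(x));
    v is (xi, delta)-universal if this rate >= 1 - delta with ||v||_2 <= xi."""
    before = model(X).argmax(axis=1)
    after = model(X + v).argmax(axis=1)
    return float(np.mean(before != after))

X = rng.standard_normal((1000, M))
v = 2.0 * rng.standard_normal(M)  # placeholder perturbation, not an optimised UAP

rate = fooling_rate(f, X, v)
xi = float(np.linalg.norm(v))
assert 0.0 <= rate <= 1.0 and xi > 0.0
```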
In contrast, Papernot \etal~\cite{DBLP:conf/ccs/PapernotMGJCS17} use a chi-square test to statistically verify that datapoints' gradients of piracy models are dependent on those of the victim model. We observe that this dependency is maintained by UAPs. We postpone the details of this observation to \autoref{sec:observation_exp} and continue our design overview. Since calculating a UAP requires white-box access to the model, it is infeasible for the defender to obtain the UAPs of a suspect model $\smodel{}$ and compare their similarity with $\vmodel{u}$. Alternatively, with white-box access to the victim model $\vmodel{u}$, the defender (victim) could generate a UAP $\uap$ and verify whether $\uap$ lies in the UAP subspace of the suspect model $\smodel{}$. Specifically, we propose the following two primitives for the verification: \mypara{Fingerprint Generation.} To verify whether a vector $\uap$ lies in the UAP subspace of a model $\modelsymbol{}$, we propose to design a fingerprints generation function $\mathcal{F}$ which captures the \textit{fingerprints} of how the model $\modelsymbol{}$ behaves around $n$ datapoints $\mathbf{x}_1,\dots, \mathbf{x}_n$ with regards to $\uap$, denoted by $\mathcal{F}(\modelsymbol{}, \uap, [\mathbf{x}_1,\dots, \mathbf{x}_n])$. \mypara{Fingerprint Verification.} With fingerprints of both the victim model and the suspect model, the defender needs to determine whether the suspect model is a piracy model or a homologous model. Particularly, we propose to design an encoder ${E_\theta}$ with parameters $\theta$ to map the fingerprints of models to a latent space, such that the mapped fingerprints of a victim model and its piracy model have a large similarity (\eg, cosine similarity) while the mapped fingerprints of a victim model and a homologous model have a small similarity. 
Particularly, the encoder aims to optimize: \begin{equation} \resizebox{.9\hsize}{!}{ $ \begin{aligned} \max \limits_{\theta} \; \mathbb{E}(sim(\vmodel{u}, & \pmodel{u})) \, - \mathbb{E}(sim(\vmodel{u}, \vmodel{v})), \\ \text{where}~~sim(\modelsymbol{}_a, \modelsymbol{}_b) &= cosine(E_\theta(\mathcal{F}_a), E_\theta(\mathcal{F}_b)) \\ \mathcal{F}_a &= \mathcal{F}(\modelsymbol{}_a, \uap, [\mathbf{x}_1,\dots, \mathbf{x}_n]) \\ \mathcal{F}_b &= \mathcal{F}(\modelsymbol{}_b, \uap, [\mathbf{x}_1,\dots, \mathbf{x}_n]). \end{aligned} $} \end{equation} \vspace{-4mm} \section{UAP based Fingerprinting} In this section, we first explain the observation on which our design is based. Then we introduce the design of fingerprint generation and fingerprint verification. \subsection{Observation Explanation} We first show the relation among UAP subspaces of the victim model, homologous models, and piracy models. \mypara{\textsc{Observation}:} \textit{A victim model's UAP subspace is consistent with its piracy model's UAP subspace and is inconsistent with a homologous model's UAP subspace.} \label{sec:observation_exp} \begin{figure}[t] \centering \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\linewidth, height=0.702\linewidth]{img/POCFMNIST_review.pdf} \caption{SVD on DNN models' UAPs.} \label{fig:poc:svd_bar} \end{subfigure} \hfill \begin{subfigure}[t]{0.21\textwidth} \centering \includegraphics[width=\linewidth]{img/poc.pdf} \caption{Inconsistency distributions.} \label{fig:poc:svd_inconsis} \end{subfigure} \vspace{-2mm} \caption{\small (a) Projections of $\vmodel{u}$, piracy and homologous models on top-5 principal directions of $\vmodel{u}$, the dark line indicates mean value over 20 models and light intervals indicate STD; (b) Inconsistency distribution of piracy models and homologous models calculated according to \autoref{eq:sec4:poc_incssts}.} \label{fig:poc:svd} \vspace{-3mm} \end{figure} We model the dependency of a suspect model $\smodel{}$ on a 
victim model $\vmodel{u}$ as their \textit{consistency}, which is defined as the $\ell_2$ distance between the projections of these two models' UAPs onto an orthogonal basis formed by the principal directions of $\vmodel{u}$'s UAP matrix. To obtain this basis, we perform singular-value decomposition (SVD) on $\vmodel{u}$'s UAP matrix and choose its right singular vectors. Let $V_{\smodel{}}=\{ \mathbf{v}_{\smodel{}}^1, \dotsc, \mathbf{v}_{ \smodel{}}^L \}$ be the UAPs of the suspect model $\smodel{}$ and $V_{\vmodel{u}}=\{ \mathbf{v}_{\vmodel{u}}^1, \dotsc, \mathbf{v}_{ \vmodel{u}}^L \}$ be the UAPs of the victim model. After performing SVD on $V_{\vmodel{u}}$, we obtain an $r$-dimensional orthogonal basis $\{ v_1, v_2, \cdots, v_r \}$, where $r$ is the rank of $V_{\vmodel{u}}$. We define the distribution inconsistency of UAPs between $V_{\smodel{}}$ and $V_{\vmodel{u}}$ as $Inconsist_{\vmodel{u}}(\smodel{})$:
\begin{equation}
\label{eq:sec4:poc_incssts}
\resizebox{.9\hsize}{!}{
$
\sum_{1 \leq m \leq r} \big( {\sum_{1 \leq i \leq L} (\mathbf{v}_{\smodel{}}^i \cdot v_m)^2 - \sum_{1 \leq j \leq L} (\mathbf{v}_{\vmodel{u}}^j \cdot v_m)^2} \big) ^ 2.
$}
\end{equation}
We generate $m$ (the input dimension) UAPs for each model on the FMNIST dataset to form a square UAP matrix whose rank equals $m$. \autoref{fig:poc:svd_bar} shows that the piracy models' UAPs have similar projections to the victim's, whereas the homologous models' UAPs only loosely follow the principal directions. \autoref{fig:poc:svd_inconsis} shows that $Inconsist_{\vmodel{u}}(\pmodel{u})$ is 3 times smaller than $Inconsist_{\vmodel{u}}(\vmodel{v})$ on the FMNIST dataset. We conclude that the UAPs of piracy and homologous models differ and can be used to differentiate these two types of models.
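To make the inconsistency score of \autoref{eq:sec4:poc_incssts} concrete, the following NumPy sketch computes it from two UAP matrices. The random matrices here are toy stand-ins for real UAPs, not part of our pipeline:

```python
import numpy as np

def inconsistency(V_suspect, V_victim):
    """Subspace inconsistency between two UAP matrices (rows = UAPs).

    Projects both sets of UAPs onto the principal (right-singular)
    directions of the victim's UAP matrix and compares the total
    squared projection along each direction.
    """
    # Right singular vectors of the victim's UAP matrix span its UAP subspace.
    _, _, Vt = np.linalg.svd(V_victim, full_matrices=False)
    proj_s = (V_suspect @ Vt.T) ** 2   # squared projections of suspect UAPs
    proj_v = (V_victim @ Vt.T) ** 2    # squared projections of victim UAPs
    # Sum over UAPs, then squared difference per principal direction.
    return float(np.sum((proj_s.sum(axis=0) - proj_v.sum(axis=0)) ** 2))

rng = np.random.default_rng(0)
V_victim = rng.normal(size=(20, 50))
V_piracy = V_victim + 0.05 * rng.normal(size=(20, 50))   # nearby subspace
V_homologous = rng.normal(size=(20, 50))                 # unrelated subspace
assert inconsistency(V_piracy, V_victim) < inconsistency(V_homologous, V_victim)
```

The piracy stand-in, whose UAPs are small perturbations of the victim's, yields a much lower inconsistency score than the independent matrix, mirroring the gap in \autoref{fig:poc:svd_inconsis}.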
\begin{figure}[t] \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\linewidth]{img/visualization/visualize_unprocesses.pdf} \label{fig:poc:uap_figer} \end{subfigure} \hfill \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\linewidth]{img/visualization/visualize_norm.pdf} \label{fig:poc:ae_figer} \end{subfigure} \vspace{-5mm} \caption{\small $t$-SNE visualization of fingerprints. UAP-based fingerprints (left) are naturally distinguishable compared with local adversarial perturbation based fingerprints (right). (FMNIST)} \label{fig:poc:uap_ae} \vspace{-3mm} \end{figure}
\subsection{Fingerprint Generation} \label{sec4:quptse} We now define the fingerprint generation function described in \autoref{sec:design_goals} as follows:
\begin{equation} \label{eq:sec4:fgprt} \begin{aligned} &\mathcal{F}(\modelsymbol{}, \uap, (\mathbf{x}_1, \cdots, \mathbf{x}_n)) \\= & [\modelsymbol{}( \mathbf{x}_1) , \modelsymbol{}(\mathbf{x}_1 + \uap) , \cdots , \modelsymbol{}(\mathbf{x}_n) , \modelsymbol{}(\mathbf{x}_n + \uap)]. \end{aligned} \end{equation}
The goal of $\mathcal{F}$ is to capture how the given model's outputs change around the given datapoints before and after adding the UAP. Intuitively, if $\uap$ is a UAP of $\smodel{}$, adding $\uap$ to samples will significantly reduce the confidence of $\smodel{}$ in its originally predicted class. Otherwise, adding $\uap$ will not push samples toward a decision boundary. The next step is to select $n$ datapoints that better profile the decision boundary. Besides using more datapoints (\ie, a larger $n$), we would like the datapoints to spread uniformly along the decision boundaries to capture diverse information. Thus, we perform K-means to cluster all training datapoints of the victim model into $n$ clusters according to their representation vectors in the last layer of the model, and select one datapoint from each cluster.
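A minimal sketch of the fingerprint generation function $\mathcal{F}$ of \autoref{eq:sec4:fgprt}: the linear-softmax ``model'', the random datapoints, and the random ``UAP'' below are toy stand-ins for a real classifier, the K-means-selected samples, and a genuine UAP:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fingerprint(model, uap, points):
    """F(model, v, [x_1..x_n]): concatenate outputs before/after adding the UAP."""
    parts = []
    for x in points:
        parts.append(model(x))        # f(x_i)
        parts.append(model(x + uap))  # f(x_i + v)
    return np.concatenate(parts)

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 784))                     # toy linear 10-class classifier
model = lambda x: softmax(W @ x)
points = [rng.normal(size=784) for _ in range(5)]  # stand-ins for K-means picks
uap = 0.1 * rng.normal(size=784)                   # stand-in for a real UAP
fp = fingerprint(model, uap, points)
assert fp.shape == (5 * 2 * 10,)                   # n points, 2 outputs each, 10 classes
```

With $n$ datapoints and $C$ classes, the fingerprint is a $2nC$-dimensional vector that the encoder of the next subsection consumes.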
We compare the effectiveness of UAP based fingerprints with local adversarial perturbation (LAP) based fingerprints across three types of models (victim, piracy and homologous). As shown in \autoref{fig:poc:uap_ae}, for UAP based fingerprints, models of the same type form one cluster, which is distant from the clusters of models of different types. In contrast, LAP based fingerprints of models of different types mix together and are indistinguishable. Please refer to the {\color{magenta}supplementary materials} for more discussion.
\subsection{Fingerprint Verification} \label{sec:encoder_training}
\begin{figure}[t] \centering \includegraphics[width= 0.9\linewidth]{img/CL.pdf} \vspace{-3mm} \caption{\small Illustration of contrastive learning. $\mathcal{X}_n^1 \cdots \mathcal{X}_n^k$ are $k$ sets of datapoints used to create ``views'' (the blue fingerprint is an augmented view of the black one). Piracy fingerprints are positive with each other (green) and negative with homologous fingerprints (red). The encoder projects fingerprints onto a hyper-sphere. } \label{fig:fw:cl} \vspace{-3mm} \end{figure}
We leverage an encoder to learn the knowledge contained in fingerprints and output a human-comprehensible similarity score. The encoder projects the features of fingerprints into a latent space in which one can easily compare two fingerprints via the cosine similarity of their representations. Naively training an encoder (\eg, an AutoEncoder) only extracts the common features of piracy fingerprints, and fingerprints with different features will not be mapped close together in the embedding space. As homologous models are highly similar to the victim model, such an encoder fails to project them away from the piracy ones. We therefore leverage supervised contrastive learning~\cite{DBLP:conf/nips/KhoslaTWSTIMLK20} to emphasize this differentiation on homologous models. Precisely, we assign label 0 to victim and piracy fingerprints, and label 1 to homologous fingerprints.
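The supervised contrastive objective driven by these labels (formalized later in \autoref{eq:spd_cts_loss}) can be sketched in NumPy as follows; the embeddings and labels are synthetic:

```python
import numpy as np

def sup_con_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a multiviewed batch.

    z: (B, d) embeddings; labels: length-B list. Positives for sample i
    are all other samples sharing its label (victim/piracy share label 0,
    homologous fingerprints get label 1). Assumes every sample has at
    least one positive.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # project onto hyper-sphere
    sims = z @ z.T / tau                              # pairwise cosine sims / temperature
    loss = 0.0
    for i in range(len(labels)):
        others = [j for j in range(len(labels)) if j != i]         # C(i)
        positives = [j for j in others if labels[j] == labels[i]]  # Psi(i)
        if not positives:
            continue
        denom = np.sum(np.exp(sims[i, others]))
        loss -= np.mean([np.log(np.exp(sims[i, j]) / denom) for j in positives])
    return loss

# Two well-separated "fingerprint embeddings" per class.
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
good = sup_con_loss(z, [0, 0, 1, 1])   # labels match the geometry: low loss
bad = sup_con_loss(z, [0, 1, 0, 1])    # labels fight the geometry: high loss
assert 0 < good < bad
```

Minimizing this loss pulls same-label embeddings together on the hyper-sphere, which is exactly the separation between piracy and homologous fingerprints we need.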
As demonstrated in \autoref{fig:fw:cl}, the encoder projects positive pairs onto the same part of the hyper-sphere (left side) and projects negative pairs onto the opposite part (right side). In self-supervised contrastive learning, a positive pair $(\mathbf{x}, \mathbf{\tilde{x}})$ consists of an input $\mathbf{x}$ and its view $\mathbf{\tilde{x}}$; all other inputs and their views are negative to $\mathbf{x}$. In supervised contrastive learning, the positive pairs of $\mathbf{x}$ are all inputs having the same label as $\mathbf{x}$ and their views; the negative pairs are all other inputs and their views. To generate positive pairs for a given fingerprint in contrastive learning, we propose a novel data augmentation strategy as follows.
\mypara{Multi-view Fingerprint Augmentation.} Let $\mathcal{X}_n$ denote the $n$ datapoints selected from the $n$ different clusters. For each datapoint $\mathbf{x}_i$ in $\mathcal{X}_n$, we choose its $k$ nearest neighbors to form $\mathcal{K}_i$ according to their representations in the output layer. We perform sampling without replacement in each cluster $\mathcal{K}_i$ and obtain $k$ sets of datapoints, denoted by $\mathcal{X}_n^1, \cdots, \mathcal{X}_n^k$ (see \autoref{fig:fw:cl} for an illustration). In this way, we generate $k$ positive views $\{\mathcal{F}(f, \mathbf{v}, \mathcal{X}_n^1), \cdots, \mathcal{F}(f, \mathbf{v}, \mathcal{X}_n^k)\}$ for the given fingerprint $\mathcal{F}(f, \mathbf{v}, \mathcal{X}_n)$. Please see the {\color{magenta}supplementary materials} for more details, including evidence that positive views are the most similar fingerprints among all others.
\mypara{Supervised Contrastive Loss.} We now describe our supervised contrastive loss. Within a multiviewed batch, let $i \in I \equiv \{1, \dots, kN\}$ be the batch index and $C(i) = I \backslash \{i\}$. Let $\Psi(i) := \{ \mu \in C(i) \,|\, \tilde{y}_{\mu} = y_i \}$ be the indices of positive pairs for the $i$-th sample.
Then our supervised contrastive loss is:
\begin{equation} \label{eq:spd_cts_loss} \mathcal{L} = \sum_{i \in I} -\frac{1}{|\Psi(i)|} \sum_{\mu \in \Psi(i)} \log \frac{ e ^ {{sim(\mathbf{z}_i, \mathbf{z}_{\mu})} / \tau}} { \sum_{ \nu \in C(i) } e^{sim(\mathbf{z}_i, \mathbf{z}_{\nu}) / \tau } }, \end{equation}
where $\mathbf{z}_i$ is the encoder's embedding of the $i$-th fingerprint view, $N$ is the mini-batch size, and $\tau$ is a temperature parameter. The encoder consists of two parallel networks, similar to SimCLR~\cite{DBLP:conf/icml/ChenK0H20}. The overall procedure of model extraction detection is presented in \autoref{algo:framework}.
\begin{algorithm}[t]\footnotesize \KwIn{ suspect model $\smodel{}$, victim's model $\vmodel{u}$, its UAP $\mathbf{v}$ and training data $\mathcal{D}$, number of clusters $n$, number of fingerprint views $k$, a set of piracy models $\Phi$ and homologous models $\Upsilon$, batch size $N$ and loss function $\mathcal{L}$ for contrastive learning in \autoref{eq:spd_cts_loss}. } \KwOut{Trained encoder $E_{\theta}$, similarity $\mathbf{s}$ between $\smodel{}$ and $\vmodel{u}$.} \tcc{ Fingerprint Generation} \DontPrintSemicolon \SetKwFunction{FMain}{$\mathcal{F}$} \SetKwProg{Fn}{Function}{:}{} \Fn{\FMain{$f, \{ \mathbf{x}_1, \cdots, \mathbf{x}_n \}, \mathbf{v} $}}{ $\mathcal{X} \leftarrow \{\}$ \; \For{$i\in \{1,..,n\}$}{ $t_i = f(\mathbf{x}_i) \oplus f(\mathbf{x}_i + \mathbf{v}) $ \; $\mathcal{X} = \mathcal{X} \cup \{ t_i \}$ \; } \KwRet $\mathcal{X}$ } \tcc{Preparing the training set for encoder $E$} $B \leftarrow \{\}$; $\;$ $\mathcal{M} \leftarrow \{ \vmodel{u} \} \cup \Phi \cup \Upsilon $\; $\{C_1, C_2, \cdots, C_n \} = \;$ K-Means$(f_{\vmodel{u}}, \mathcal{D}, n) $\; \For{$f \in \mathcal{M}$}{ \tcc{Sample one point from each cluster without replacement} \For{$i \in \{1,...,k\}$}{ $\{ \mathbf{x}_1, \cdots, \mathbf{x}_n \}^i \leftarrow C_1 \times C_2 \cdots \times C_n$ \; $B = B \cup {\mathcal{F}(f, \{ \mathbf{x}_1, \cdots, \mathbf{x}_n \}^i, \mathbf{v})}$ } } \tcc{Training by contrastive loss} Initialize the parameters $\theta$ of the
encoder $E$\; $E_{\theta} \leftarrow $Training($B, \mathcal{L}$)\; \tcc{Verifying suspect model} $\mathbf{s} = cosine(E_{\theta}(\smodel{}), E_{\theta}(\vmodel{u}))$ \; \Return{$\mathbf{s}$} \caption{Ownership Verification.} \label{algo:framework} \end{algorithm}
\section{Experiments}
\begin{figure*}[t] \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{img/exp1/exp1_ext.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\pmodel{u}$ (FMNIST)} \label{fig:eval:cdf_sim_pir_fmnist} \end{subfigure} \hfill \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{img/exp1/exp1_indep.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\vmodel{v}$ (FMNIST)} \label{fig:eval:cdf_sim_homo_fmnist} \end{subfigure} \hfill \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\linewidth]{img/exp1/exp1_ext_cifar.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\pmodel{u}$ (CIFAR10)} \label{fig:eval:cdf_sim_pir_cifar10} \end{subfigure}\hspace{\fill} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{img/exp1/exp1_indep_cifar.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\vmodel{v}$ (CIFAR10)} \label{fig:eval:cdf_sim_ind_cifar10} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{img/exp1/exp1_ext_ti.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\pmodel{u}$ (T-ImageNet)} \label{fig:eval:cdf_sim_pir_ti} \end{subfigure} \hfill \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{img/exp1/exp1_indep_ti.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\vmodel{v}$ (T-ImageNet).} \label{fig:eval:cdf_sim_ind_ti} \end{subfigure}\hspace{\fill} \vspace{-5mm} \caption{\small Cumulative distribution function (CDF) of similarities between fingerprints of $f_{\mathcal{V}}$ and suspect models. For each $x$ on the X-axis, the Y-axis gives the percentage of fingerprints that have similarity smaller than $x$.
The derivative of the CDF is the probability density function.} \label{fig:eval:cdf} \vspace{-3mm} \end{figure*}
\subsection{Setup}
\mypara{Datasets.} We evaluate our approach on three popular image classification datasets: FashionMNIST (FMNIST)~\cite{xiao2017/online}, CIFAR-10~\cite{krizhevsky2009learning}, and TinyImageNet~\cite{tiny_imagenet}.
\mypara{Model Architectures.} For FMNIST, SOTA classification accuracy can be achieved with simple CNN models. To ensure the diversity of models, we vary attributes such as kernel size, number of layers, activation function, dropout, training batch size and optimizer. Details are shown in \autoref{tab:evl:arch_FMNIST}. We group these attributes into 5 different model architectures. The numbers inside brackets indicate which architectures an attribute is assigned to. Attributes without numbers are randomly selected during each training process. For CIFAR-10 and TinyImageNet, we evaluate our encoder on 5 different architectures: ResNet18 and ResNet34~\cite{he2016deep}, VGG16~\cite{vgg}, DenseNet121~\cite{densenet}, and GoogLeNet~\cite{szegedy2015going}.
\begin{table}[t] \centering \caption{\small FashionMNIST classifier components.} \vspace{-3mm} \resizebox{.9\hsize}{!}{%
\begin{tabular}{lll}
\hline
 & Attribute & Value \\ \hline
\multirow{4}{*}{Architecture} & Activation & ReLU[3,5], PReLU[4], ELU[1,2] \\
 & Dropout & Yes[5], No[1,2,3,4] \\
 & Conv kernel size & 3[1,3,4], 5[2,5] \\
 & \#Conv layers & 2[1,3,5], 3[2], 4[4] \\ \hline
\multirow{2}{*}{Optimization} & Algorithm & SGD, ADAM, RMSprop \\
 & Batch size & 64,128,256 \\ \hline
\end{tabular}
} \vspace{-3mm} \label{tab:evl:arch_FMNIST} \end{table}
\mypara{Model Preparation.} For each dataset, we assign only half of the training set $D$ as the victim model's training set, denoted $D_v$. Piracy models are generated by following the extraction attack in~\cite{DBLP:conf/ndss/YuYZTHJ20}.
All generated piracy models recover $85\%, 83\%, 40\%$ of the performance of $\vmodel{u}$ for FMNIST, CIFAR10 and TinyImageNet, respectively. Homologous models are trained on $D_{homo}$, which is sampled from $D$, has the same size as $D_v$, and partially overlaps with it. To rule out chance effects, we generate 10 models for each type (piracy or homologous) and architecture (except for the architecture reserved for the victim model). In total, we train $81$ DNN models for CIFAR-10 and for TinyImageNet, respectively. For FMNIST, taking the 4 remaining model architectures and 3 optimizers into account, we generate $241$ models. All models achieve SOTA performance. The UAP used in this work achieves an attack success rate above $80\%$. Please see more results and analyses in the {\color{magenta}supplementary materials}.
\mypara{Encoder Training.} Excluding the architecture of $\vmodel{u}$, we use the remaining architectures to train the other models. We use 5 piracy models and 5 homologous models to train the encoder; the remaining models are used to test it. For all three datasets, each fingerprint consists of $100$ datapoints, and we generate $200$ views for each fingerprint. For encoder training, we adopt a large batch size of $512$, as recommended for contrastive learning~\cite{DBLP:conf/icml/ChenK0H20}. Note that, in the experiments, we generate a large number of models to train and test our framework in order to demonstrate its generalizability and robustness. In practice, a defender can safely claim ownership with only 10 models and 20 fingerprints.
\mypara{Evaluation Metrics.} The similarity between two fingerprints is defined as the cosine similarity of their representation vectors projected by the trained encoder. The similarity between two models is the average similarity over all generated fingerprints. \begin{table}[t!]
\centering
\caption{\small Mean and STD of model similarities between the victim model and suspect models, and $p$-values (lower is better), using 20 fingerprints (FMNIST).}
\vspace{-3mm}
\resizebox{.98\hsize}{!}{%
\begin{tabular}{cc|lll|lll}
\hline
\multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{Piracy Model} & \multicolumn{3}{c}{Homologous Model} \\ \hline
\multicolumn{2}{l|}{Architecture \& Optimizer} & Mean & STD & $p$-value & Mean & STD & $p$-value \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{Arc B}} & SGD & 0.9990 & 0.0012 & 0.0 & $10^{-5}$ & 0.0009 & 1.0 \\
\multicolumn{1}{c|}{} & Adam & 0.9974 & 0.0048 & 0.0 & 0.0858 & 0.2613 & 0.9167 \\
\multicolumn{1}{c|}{} & RMSprop & 0.9964 & 0.0060 & 0.0 & 0.0002 & 0.0015 & 1.0 \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{Arc C}} & SGD & 0.9322 & 0.1882 & $10^{-15}$ & 0.0009 & 0.0149 & 1.0 \\
\multicolumn{1}{c|}{} & Adam & 0.9959 & 0.0050 & 0.0 & 0.0185 & 0.0842 & 1.0 \\
\multicolumn{1}{c|}{} & RMSprop & 0.9734 & 0.0927 & 0.0 & 0.0074 & 0.0417 & 1.0 \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{Arc D}} & SGD & 0.9980 & 0.0027 & 0.0 & 0.1082 & 0.3010 & 0.8445 \\
\multicolumn{1}{c|}{} & Adam & 0.9972 & 0.0036 & 0.0 & 0.1024 & 0.2492 & 0.5663 \\
\multicolumn{1}{c|}{} & RMSprop & 0.9977 & 0.0030 & 0.0 & 0.0295 & 0.0938 & 0.6942 \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{Arc E}} & SGD & 0.8821 & 0.203 & $10^{-16}$ & 0.0377 & 0.1360 & 0.9923 \\
\multicolumn{1}{c|}{} & Adam & 0.9515 & 0.1392 & 0.0 & 0.0010 & 0.0036 & 1.0 \\
\multicolumn{1}{c|}{} & RMSprop & 0.9491 & 0.1450 & 0.0 & 0.0002 & 0.0015 & 1.0 \\ \hline
\end{tabular}
}
\label{tab:evl:hypotest1}
\end{table}
\subsection{Fingerprint Identification and Matching}
\autoref{fig:eval:cdf} reports the similarities of piracy and homologous fingerprints for different network architectures. The CDF of the similarity distribution is computed over intervals of width 0.1.
For each $x$ on the X-axis, the Y-axis gives the percentage of models that have similarity smaller than $x$. We conclude that \textbf{1)} Similarities of piracy fingerprints and homologous fingerprints are distributed differently: the former gather near 1 whereas the latter gather near 0. \textbf{2)} Our encoder generalizes well, in the sense that the similarity gaps exist regardless of model architecture. For all architectures, a large fraction (\eg, $100\%$ for CIFAR10) of piracy fingerprints have similarities above $0.8$, whereas for homologous models only a small portion of fingerprints (\eg, $20\%$ for CIFAR10) have similarity above $0.4$. However, the similarity gap does vary between architectures; the encoder performs best on the architecture it is trained on (\eg, Arc.A for FMNIST and ResNet34 for CIFAR10). \textbf{3)} The variance of homologous fingerprints is larger than that of piracy fingerprints (\eg, on CIFAR10, the largest similarity gap between architectures is 0.9 versus 0.0 at a CDF granularity of 0.1). This indicates that piracy models are restricted to a small subspace whereas homologous models lie in a larger one. \autoref{tab:evl:hypotest1} and \autoref{tab:evl:hypotest2} show the average similarities between suspect models and the victim. The largest similarity gaps between piracy and homologous models for the three datasets are $0.99$, $0.95$, and $0.58$, respectively; the smallest gaps are still over $0.77$, $0.69$, and $0.43$. TinyImageNet has the smallest similarity gap because its lower SOTA accuracy results in worse extraction performance. We study the influence of extraction performance on our framework in the ablation study.
\mypara{Hypothesis Tests.} In practice, defenders can adopt a two-sample $t$-test to safely verify the suspect model with fewer than 20 fingerprints. Formally, let $\Omega$ and $\Omega_{homo}$ be two sets of fingerprint similarities calculated from suspect models and homologous models.
We define the null hypothesis as: $ \mathcal{H}_0 : \mu < \mu_{homo}, \; \text{where} \; \mu = \overline{\Omega}, \;\; \mu_{homo} = \overline{\Omega}_{homo}. $ The $t$-test will either reject $\mathcal{H}_0$ with a controllable significance level $\alpha$ to claim a piracy model, or give an inconclusive result. The $p$-value columns in \autoref{tab:evl:hypotest1} and \autoref{tab:evl:hypotest2} are calculated using a $t$-test performed on $20$ randomly sampled fingerprints. We repeat this $t$-test 30 times and report the average value. Setting the significance level $\alpha$ to 0.05, we successfully reject $\mathcal{H}_0$ for all piracy models, giving a detection success rate of $100\%$.
\mypara{Comparison to Existing Methods.} We use the area under the ROC curve (AUC) to measure the performance of our work. Compared with prior work (IPGuard~\cite{DBLP:conf/asiaccs/CaoJG21}), which achieves AUCs of 0.83, 0.75, and 0.61 on the three datasets (FMNIST, CIFAR10 and T-ImageNet), our framework achieves AUCs of 1.0, 1.0, and 0.98. \begin{table}[t!]
\centering
\caption{\small Mean and STD of model similarities between the victim and suspect models, and $p$-values (lower is better), using 20 fingerprints (CIFAR10 \& TinyImageNet).}
\vspace{-3mm}
\resizebox{.98\hsize}{!}{%
\begin{tabular}{c|llll|llll}
\hline
\multicolumn{1}{l|}{} & \multicolumn{4}{c|}{CIFAR10} & \multicolumn{4}{c}{TinyImageNet} \\ \hline
Arch. & \multicolumn{1}{c}{Type} & \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{STD} & \multicolumn{1}{c|}{$p$-value} & Type & Mean & STD & $p$-value \\ \hline
\multirow{2}{*}{ResNet} & \multicolumn{1}{c}{Piracy} & 0.9945 & 0.0032 & 0.0 & Piracy & 0.7463 & 0.0955 & $10^{-14}$ \\
 & Homo & 0.0476 & 0.0156 & 1.0 & Homo & 0.2500 & 0.1462 & 0.5278 \\ \hline
\multirow{2}{*}{VGG} & Piracy & 0.9917 & 0.0050 & 0.0 & Piracy & 0.7568 & 0.0937 & $10^{-16}$ \\
 & Homo & 0.1950 & 0.0920 & 0.99 & Homo & 0.2602 & 0.1530 & 0.1574 \\ \hline
\multirow{2}{*}{GoogLeNet} & Piracy & 0.99 & 0.0066 & 0.0 & Piracy & 0.7280 & 0.1621 & $10^{-9}$ \\
 & Homo & 0.2984 & 0.1682 & 0.69 & Homo & 0.2404 & 0.1440 & 0.8277 \\ \hline
\multirow{2}{*}{DenseNet} & Piracy & 0.9924 & 0.0044 & 0.0 & Piracy & 0.8215 & 0.1002 & $10^{-15}$ \\
 & Homo & 0.0552 & 0.1955 & 1.0 & Homo & 0.2943 & 0.1656 & 0.7088 \\ \hline
\end{tabular}
}
\label{tab:evl:hypotest2}
\end{table}
\subsection{Ablation Study} \label{sec:abltsty}
\mypara{Number of Datapoints $n$.} Recall that our fingerprint generation function $\mathcal{F}(f, \mathbf{v}, \mathcal{X}_n)$ depends on a set of datapoints $\mathcal{X}_n$, where $n$ is the number of datapoints. As demonstrated in \autoref{fig:eval:abla}, the similarity gap between piracy models and homologous models increases with $n$ until it plateaus at around 70 datapoints. A larger $n$ means the fingerprint captures more of the decision boundaries and is therefore more informative. In our experiments, we fix $n$ at 100 to balance the trade-off between effectiveness and efficiency.
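The two-sample $t$-test used for the ownership claims above can be sketched without SciPy via Welch's statistic and a normal approximation to its null distribution (SciPy's \texttt{ttest\_ind} with \texttt{equal\_var=False} gives the exact $t$-based value); the similarity samples below are synthetic stand-ins for $\Omega$ and $\Omega_{homo}$:

```python
import math

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    return m, v

def one_sided_p(suspect, homologous):
    """Approximate p-value for H0: mean(suspect) < mean(homologous).

    Welch's t statistic, with a normal approximation to its null
    distribution (reasonable at ~20 samples per group).
    """
    m1, v1 = mean_var(suspect)
    m2, v2 = mean_var(homologous)
    t = (m1 - m2) / math.sqrt(v1 / len(suspect) + v2 / len(homologous))
    return 0.5 * math.erfc(t / math.sqrt(2.0))  # 1 - Phi(t)

piracy_sims = [0.99, 0.98, 0.995, 0.97, 0.985] * 4        # 20 synthetic similarities
homologous_sims = [0.05, 0.10, 0.02, 0.08, 0.04] * 4
assert one_sided_p(piracy_sims, homologous_sims) < 0.05    # reject H0: piracy claimed
assert one_sided_p(homologous_sims, homologous_sims) > 0.05  # inconclusive
```

A small $p$-value rejects $\mathcal{H}_0$ at significance level $\alpha$ and supports the piracy claim; otherwise the test is inconclusive, matching the procedure in the hypothesis tests above.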
\mypara{Top-K Confidence Values.} We evaluate the performance of our framework when the suspect model only returns its top-$k$ confidence values. As demonstrated in Fig.~\ref{fig:eval:abla}, the similarity gap remains even when $k = 1$, indicating that even hard labels disclose global information about the decision boundaries. The similarity gap grows with $k$ and stabilizes at $k = 3$. We find that the average sum of the returned top-3 confidence scores equals 0.9996, meaning there is little information loss.
\mypara{Performance of Model Extraction.} An attacker may stop early, before reaching the optimal recovery rate, during extraction. By varying the numbers of queries and iterations, we obtain 20 models for each interval of length $0.02$ of the recovery rate in $[0.78, 0.94]$. \autoref{fig:eval:abla} shows that even when the recovery rate is $0.78$, our framework can still detect the piracy model with a high similarity of $0.88$. One explanation is that during model extraction, the piracy model first roughly forms a decision boundary globally aligned with the victim's, which enables our detection; it then refines its local gradients, which improves the recovery rate and further increases the similarity.
\begin{figure}[t] \centering \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\linewidth]{img/Ablation/exp5_kn.pdf} \label{fig:eval:abla_nk} \end{subfigure} \hfill \begin{subfigure}[b]{0.23 \textwidth} \includegraphics[width=\linewidth]{img/Ablation/exp5_coverage.pdf} \label{fig:eval:abla_cov} \end{subfigure} \vspace{-5mm} \caption{Ablation study on the parameters $n$ and $k$ (left) and the recovery rate (right).} \label{fig:eval:abla} \vspace{-2mm} \end{figure}
\mypara{Universal \textit{vs.} Local Adversarial Perturbations.} To compare the information capture capabilities of UAPs and APs, we replace the UAP with APs and rerun our experiments.
Specifically, the fingerprint is now $\mathcal{F}_{ap}(f, (\mathbf{x}_1,\cdots,\mathbf{x}_n) ) = [f(\mathbf{x}_1), f(\mathbf{x'}_1),\cdots,f(\mathbf{x}_n), f(\mathbf{x'}_n)]$, where $\mathbf{x'}$ is the AP of $\mathbf{x}$ crafted by DeepFool~\cite{DBLP:conf/cvpr/Moosavi-Dezfooli16} ($\epsilon = 22$). The perturbation norm of $\mathbf{x'}$ approximates that of the UAP; all other settings are unchanged. \autoref{fig:abls_uap_ae_fmst} shows the similarity scores given by the contrastive encoder. We observe that for APs, the similarities of piracy models are less concentrated at 1 and those of homologous models are less concentrated at 0. This indicates that AP based fingerprints perform worse than UAP based ones. We also study the influence of the contrastive loss and of the overlap rate between the homologous and victim datasets. See the {\color{magenta}supplementary material} for details.
\begin{figure} \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\linewidth]{img/Ablation/ablation_ae_ext.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\pmodel{u}$ (FMNIST)} \label{fig:eval:cdf_sim_ext_ablation} \end{subfigure} \hfill \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\linewidth]{img/Ablation/ablation_ae_indep.pdf} \caption{Sim CDF between $\vmodel{u}$ and $\vmodel{v}$ (FMNIST)} \label{fig:eval:cdf_sim_ind_ablation} \end{subfigure} \caption{Similarity scores given by the contrastive encoder for UAP-based and adversarial example based fingerprints. } \label{fig:abls_uap_ae_fmst} \vspace{-4mm} \end{figure}
\subsection{Resistance against Model Modifications} A smart attacker may deliberately modify the piracy copy in order to evade detection. We evaluate the robustness of our framework against four post-processing techniques on the FMNIST dataset.
\mypara{Fine-tuning.} Fine-tuning involves continuing to train piracy models on additional data.
In our experiment, we fine-tune piracy models on datapoints sampled from the test set for $10$ iterations.
\mypara{Pruning \& Quantization.} Pruning~\cite{DBLP:conf/iclr/ZhuG18} and quantization~\cite{DBLP:journals/corr/HanMD15} are two common techniques to compress models and reduce memory while preserving the model's functionality. In our experiments we choose pruning rates in $[0.2, 0.6]$ and convert models from FP32 to INT8. As \autoref{fig:eval:resisi_md_mdfy} (left) demonstrates, for fine-tuning, quantization and pruning, the variation in the similarities of piracy models is small. All piracy models modified by these three techniques still have similarities above $0.88$. We hypothesize that these techniques have little effect on the model's decision boundaries.
\mypara{Adversarial Training.} Adversarial training~\cite{DBLP:conf/iclr/MadryMSTV18} aims to intrinsically promote the robustness of models. We train the piracy models for a maximum of 270 adversarial iterations. In each iteration, we craft 128 adversarial examples using DeepFool~\cite{DBLP:conf/cvpr/Moosavi-Dezfooli16} as new datapoints. As shown in \autoref{fig:eval:resisi_md_mdfy} (right), beyond 120 iterations the similarity drops from $0.99$ to $0.89$ but still remains high. This is because adversarial training continuously reshapes the model's decision boundaries by pushing them towards the adversarial examples. The similarity gap becomes imperceptible only after 270 adversarial training iterations, by which point the model utility has dropped by $0.17$. The attacker thus faces a dilemma: it must sacrifice the stolen model's utility to evade our detection framework.
\begin{figure}[t] \centering \begin{subfigure}[b]{0.233\textwidth} \includegraphics[width=\linewidth]{img/Post-process/post-process.pdf} \label{fig:eval:modify_ft} \end{subfigure} \hfill \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\linewidth]{img/Post-process/Iteration.pdf} \label{fig:eval:modify_advtrain} \end{subfigure} \vspace{-5mm} \caption{Resistance against model modifications. Similarity distributions of piracy models after quantization, pruning and fine-tuning (left) and adversarial training (right). On the left, the purple boxes cluster at the top because their similarities reach 1.0.} \label{fig:eval:resisi_md_mdfy} \vspace{-4mm} \end{figure}
\section{Discussion} In this paper, we propose a novel framework against model extraction attacks based on the subspace dependency between the UAPs of the victim model and those of its piracy models. Incorporating contrastive learning, we project the victim model close to piracy models and far away from homologous models. Evaluations on three benchmark datasets show that our framework is highly effective, general and robust.
\mypara{Limitations.} Our approach can be improved by striking a better effectiveness-efficiency trade-off, since the information in a fingerprint scales with the number of queries. We leave reducing the overhead of preparing the encoder's training data as another direction for future work. Our discovery of the transferability of the encoder suggests a potential solution (see details in the {\color{magenta}supplementary}).
{\small \mypara{Acknowledgements.} Z. Peng, S. Li, and H. Zhu were partially supported by the National Key Research and Development Program of China under Grant 2018YFE0126000 and the National Natural Science Foundation of China under Grant 6213000013. M. Xue was partially supported by the Australian Research Council (ARC) Discovery Project (DP210102670) and the COVID-19 Recognition Fund of The University of Adelaide. }
\part*{Supplementary Materials}
1,314,259,994,677
arxiv
\section{Introduction} The nature of Dark Matter is still unknown today, but from simulations of structure formation we know it must be a neutral, Cold, collision-less (i.e. quite weakly interacting) and very long lived particle~\cite{DMreview}. Unfortunately such a particle does not exist in the Standard Model (SM): neutrinos are neutral and massive, but so light that they are at most Hot Dark Matter. Therefore we are obliged to look for DM candidates in models beyond the SM. One of the best motivated extensions, supersymmetry with R-parity conservation, naturally gives us a stable massive particle, the LSP. To be DM such a particle has to be neutral and very weakly interacting, so usually only the neutralino or the gravitino are possible LSPs. But if we invoke the Peccei-Quinn solution to the strong CP problem, a new multiplet has to be introduced, the axion multiplet. As long as supersymmetry is unbroken, this whole multiplet remains light, so that no supersymmetric mass parameter is allowed for it (contrary than for the Higgses). After supersymmetry breaking the fermionic component, the axino, obtains a mass, but it still could be the LSP and make a very good DM component. We will present in this talk a summary of axino CDM \cite{axino1,axino2,axino3,axino4,axino5} and explore in particular if such particles can be produced in sufficient numbers to make up most of the DM and what that implies for the supersymmetry breaking parameters and collider searches. \section{Producing axinos in the Early Universe} We briefly review here the two main mechanisms that produce axinos in the Early Universe. We concentrate here on the hadronic type of axion models, where is it expected that the axion multiplet does not interact directly with the SM multiplets and therefore the axino does not mix substantially with the standard neutralinos. In the other type of models, this mixing can be larger and the production is therefore enhanced. 
\subsection{Thermal scatterings} Any particle, even very weakly coupled, is produced in the thermal plasma by scatterings of the particles that are in thermal equilibrium. Axinos couple directly to the gluons and gluinos due to the axion ``anomaly'' coupling \begin{equation} W_{PQ} = {g^2 \over 16\sqrt{2} \pi^2 f_a} \Phi_a W^\alpha W_{\alpha} \quad\rightarrow\quad {\cal L}_{\tilde a g \tilde g} = {\alpha_s \over 8\pi f_a} \bar {\tilde a} \gamma_5 \sigma^{\mu\nu} \tilde g^b G^b_{\mu\nu} \label{dim5op} \end{equation} where $\Phi_a $ is the axion multiplet containing the axino $\tilde a$, $W$ the gluon vector multiplet containing the gluino $\tilde g^b$ and the gluon $G^b_{\mu\nu} $ and $f_a$ is the Peccei-Quinn scale of the order of $10^{11}$ GeV due to axion physics \cite{axion}. So many scattering of the primordial plasma involving colored particles can produce axinos~\footnote{ The same happens also in the case of the gravitino, but with different vertex structure and scale \cite{gravitino}.}. The axino number density is given by solving a Boltzmann equation of the type \begin{eqnarray} {d n_{\tilde a} \over d t} + 3 H n_{\tilde a} &=& \sum_{ij} \langle\sigma (i+j \rightarrow \tilde a + ...) v_{rel} \rangle n_i n_j + \sum_{i} \langle\Gamma (i \rightarrow \tilde a + ...)\rangle n_i \label{Boltzmann} \end{eqnarray} where we are neglecting back-reactions, that are suppressed by $n_{\tilde a} \ll n_i $. \begin{figure} \includegraphics[height=.4\textheight]{trmax.eps} \caption{Maximal reheat temperature as a function of the axino mass obtained by requiring that the axino energy density is below the present DM density~\cite{axino2}. The difference between solid and dashed line is due to the inclusion of the decay term in the Boltzmann equation~(\ref{Boltzmann}). In the gray area we expect the non-thermal production via out of equilibrium decays to be also substantial. 
} \end{figure} At high temperature the 2-body scatterings dominate the production, since they contain a vertex given by the dimension 5 operator in eq.~(\ref{dim5op}) and they show a characteristic linear dependence on $T$. So most of the axinos are produced at the highest temperature, the reheat temperature $T_R$, and the axino number density is proportional to $T_R$. Some of the two body scatterings are IR divergent due to the massless gluon propagator; in the thermal bath such a divergence is screened by the presence of a thermal gluon mass $\simeq g T$. In our computation we introduced such an IR cut-off by hand \cite{axino2}. A self-consistent procedure is instead to perform a full resummation of the Hard Thermal Loops as in \cite{BS04}. At lower temperatures the decay terms start dominating and the number density is no longer proportional to the reheat temperature; it depends instead on the supersymmetric parameters, in particular the gluino and squark masses~\cite{axino3}. Using the expression for the present axino energy density as \begin{equation} m_{\tilde a} {n_{\tilde a} (T)\over s(T)} = 0.72\,\mbox{eV} \left({\Omega_{\tilde a} h^2 \over 0.2 } \right)\; , \end{equation} where $s(T) = 2.89\times 10^3 \left( {T \over 2.726\,\mbox{K}} \right)^3 \mbox{cm}^{-3} $ is the present entropy density, we can then obtain a bound on the reheat temperature as shown in Figure~1. \subsection{Out of equilibrium decays} An axino population is also generated by the NLSP decay after it freezes out from the thermal bath. The heavier superpartners cascade-decay quickly into the NLSP (or, very rarely, directly into the LSP, as we discussed in the previous section) while still in equilibrium, but the NLSP has a lifetime longer than the freeze-out time: in fact all the axino couplings are suppressed by the Peccei-Quinn scale $f_a \simeq 10^{11} $ GeV and so the NLSP lifetime is of the order of seconds or longer. 
Then the freeze-out process is unaffected and the decay takes place only much later, as shown in Figure~2. \begin{figure} \includegraphics[height=.4\textheight]{ckr.eps} \caption{ Freeze-out of the NLSP and subsequent decay into axino. Due to R-parity conservation the number of axinos produced in the decay equals the NLSP number. } \end{figure} In this case, the axino number density can be directly computed from the NLSP would-be-relic number density as \begin{equation} \Omega_{\tilde a}^{NT} = {m_{\tilde a}\over m_{NLSP}} \; \Omega_{NLSP}. \label{omegaresc} \end{equation} If the mass ratio is not too small, we still have a connection with the classical WIMP mechanism in case the NLSP is a neutralino or a stau. On the other hand, a couple of problems can arise if the decay happens too late: \begin{itemize} \item Big Bang Nucleosynthesis can be spoiled by the energetic ``active'' particles produced in the decay along with the axino: the strong limits on the injection of energetic particles depend on the electromagnetic/hadronic nature of the produced showers and the decay time~\cite{BBN}. In general such limits are weak for the axino case since the NLSP lifetime (excluding a strong mass degeneracy) is below $10^2$~s, but they can affect the region of small mass for both the neutralino and stau NLSP~\cite{axino2,axino3}. \item Are axinos from the decay cold enough to be CDM? They are relativistic at production even if the NLSP is not, and have a non-thermal spectrum: \begin{equation} v (T) = {p(T) \over m_{\tilde a}} \simeq {m_{NLSP} \over 2 m_{\tilde a}} \left( {g_*(T ) \over g_*(T_{dec})} \right)^{1/3} {T\over T_{dec}}, \end{equation} where $T_{dec} $ is the temperature at the decay time. So the question is whether they have sufficient time to cool down before structure formation begins. In \cite{JLM05} such constraints have been studied and the conclusion is that an axino mass of at least order 1~GeV is probably needed. 
\end{itemize} \section{Axinos and the CMSSM} Depending on the parameters and $T_R$, either production mechanism can dominate and produce sufficient axinos to explain the present DM density. In general either $T_{R}$ is bounded as in Fig.~1 or the axino is so light that it is a subdominant (warm or hot) DM component. In the latter case, in our scenario the axion \cite{axion} could be the DM. Assuming that the axinos are CDM and that the supersymmetric partners of the SM particles can be described by the Constrained MSSM, we can see which parameter region is preferred depending on the production mechanism. In the CMSSM the superparticle spectrum and couplings are simple functions of the SM parameters and of four additional ones: the ratio of the Higgs {\it v.e.v.s}, $\tan\beta$, the gaugino and scalar masses $m_{1/2}, m_0$ and the trilinear coupling $A_0$, which are universal at the GUT scale. The modulus of the $\mu $ parameter is fixed by radiative electroweak symmetry breaking and we will always consider the positive sign in the following. \subsection{Mostly thermal production} \begin{figure} \includegraphics[height=.4\textheight]{maxino_1_t10_tr200_tp.ps} \caption{ Allowed parameter space for the case of dominant thermal production~\cite{axino4}. We have chosen here $T_R = 200 \mbox{GeV} $, $m_{\tilde a} = 1$ GeV and $f_a = 10^{11}$ GeV. The dark gray strip gives axinos in the right abundance to explain all DM, while the lighter gray areas are excluded by LEP constraints or too large an axino number density. The white area has too low an axino density to explain DM. } \end{figure} In the case of high $T_{R}$ all the particles in the thermal bath can be treated as massless and so there is practically no dependence on the supersymmetry breaking parameters. 
On the other hand if we require the axino mass to be above 1 GeV, the reheat temperature has to be sufficiently low and comparable to the superpartner masses, so the decay term in the Boltzmann equation becomes important and a strong dependence on the gluino mass appears, also due to the squark-quark-axino coupling~\cite{axino3}. The allowed region is then a narrow band in the gaugino mass parameter with a much smaller dependence on $m_0$, as shown in Figure~3. Note that in this case the small gaugino mass region is excluded because too many axinos are produced there. The exact position of the allowed band, though, strongly depends on the chosen reheat temperature and moves to larger $m_{1/2}$ for larger $T_{R}$. \vspace*{-0.2cm} \subsection{Mostly out of equilibrium NLSP decay} In the CMSSM the NLSP can be either the neutralino or the stau. The latter happens in the wedge with low $m_0$, which is usually considered excluded if the stau is the LSP. In our case though the LSP is the axino and even the stau wedge is viable; in particular there is a wide region where the stau decay can produce the right amount of axino DM, as we see in Fig.~4. \begin{figure} \includegraphics[height=.4\textheight]{maxino_mlosp_t10_tr50_ntp.ps} \caption{ Allowed parameter space for the case of dominant production via out of equilibrium NLSP decay~\cite{axino4}. We have chosen here $T_R = 200 \mbox{GeV} $, $m_{\tilde a} = 1$ GeV and $f_a = 10^{11}$ GeV. The dark gray region gives axinos in the right abundance to explain all DM, while the lighter gray areas are excluded by LEP constraints or too large an axino number density. The white area has too low an axino density to explain DM. } \end{figure} On the other hand there is also a tiny strip in the neutralino NLSP region, analogous to the neutralino DM region, but shifted slightly due to the rescaling in eq.~(\ref{omegaresc}). 
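The rescaling in eq.~(\ref{omegaresc}) that produces this shift is simple enough to make concrete. A minimal Python sketch (the numbers below are purely illustrative and are not taken from the figures):

```python
# Non-thermal axino abundance from NLSP freeze-out and decay.
# By R-parity conservation each NLSP decays into exactly one axino,
# so the stored energy density is simply rescaled by the mass ratio:
#   Omega_axino = (m_axino / m_NLSP) * Omega_NLSP.

def omega_axino_nt(m_axino_gev, m_nlsp_gev, omega_nlsp_h2):
    """Non-thermal axino relic density from the NLSP would-be relic density."""
    return (m_axino_gev / m_nlsp_gev) * omega_nlsp_h2

# A stau NLSP with a would-be relic density well above the observed one,
# Omega h^2 = 10, is diluted to the right level when m_axino/m_NLSP = 0.01:
print(omega_axino_nt(1.0, 100.0, 10.0))   # 0.1
```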
Both regions are practically unaffected by BBN constraints, contrary to what happens for the gravitino case~\cite{gravitino2}. \vspace*{-0.2cm} \section{How to distinguish the LSP?} If the axino is the LSP, very different signals could be found at colliders depending on the nature of the NLSP. If the neutralino is the NLSP, the only way to find out that it is not DM is if the mass and cross sections turn out to give too large a neutralino number density or to be excluded by direct DM searches. Then we would have good reasons to say that the neutralino must be unstable, but studying its decay will be very difficult. If the stau (or another charged sparticle) is the NLSP instead, we will have the striking signal of an apparently stable charged heavy particle in the detector. In that case it will be clear that the LSP must be a very weakly interacting particle, but to know which one, we will need to measure and study the NLSP decay. In particular, to distinguish between the axino and the gravitino, which can give similar NLSP lifetimes, we will need to measure the branching ratio and the angular dependence of the radiative decay in order to reach a definitive identification~\cite{axino5}. \vspace*{-0.2cm} \section{Conclusions} Axinos with masses in the MeV-GeV range are good CDM candidates for low reheat temperature: they can be produced either from thermal processes or from NLSP decay with the right abundance. Such a scenario is analogous to the gravitino LSP one, but an axino LSP evades BBN bounds more easily, since the NLSP lifetime is shorter than $10^2$ s. Therefore both neutralino and stau NLSPs are allowed in our case. Compared to neutralino DM, different regions of the CMSSM parameter space become allowed and preferred, in particular even heavier sparticle masses and a charged NLSP like the stau. An apparently stable charged particle would surely give a striking signal at the LHC and ILC and indicate that the neutralino is not the DM. 
On the other hand, disentangling between LSP candidates will require stopping such NLSPs and measuring their decays; in particular the axino and the gravitino can be distinguished if a sufficient number of radiative decays can be observed. \vspace*{-0.2cm} \begin{theacknowledgments} It is a pleasure to thank A. Brandenburg, K. Hamaguchi, H.B.~Kim, J.E.~Kim, R. Ruiz de Austri, M. Small, F.D. Steffen and in particular L. Roszkowski for several years of fruitful and exciting collaboration. The author would also like to thank the Organizers for the exciting Workshop and their patience in waiting for these proceedings. The author acknowledges the European Network of Theoretical Astroparticle Physics ILIAS/N6 under contract number RII3-CT-2004-506222 for financial support. \end{theacknowledgments}
\section{Introduction} A \emph{resource} is defined as any physical object that is consumed during a process in order to perform some kind of useful action, such as the burning of fossil fuel for the generation of mechanical work to operate machinery or the amount of processing power needed to perform a computation. In the context of quantum information theory, a \emph{quantum resource} is similarly defined as any state or channel which can be utilized in order to simulate quantum operations which, due to certain restrictions, are otherwise unavailable to us. A well-known example includes the use of an entangled Bell state needed for teleporting an unknown quantum state between two parties \cite{Bennett1993}. Over the past few years a number of researchers have developed various quantum resource theories such as the resource theory of entanglement \cite{Horodecki2009}, purity \cite{Horodecki2003a}, coherence \cite{Baumgratz2014,Chitambar2016,Winter2016,Marvian2016,Napoli}, athermality \cite{Brandao2013,Horodecki2013a,Gour2015} and asymmetry \cite{Gour2008,Gour2009,Marvian2014} amongst others. In any formulation one has to distinguish between the states that constitute a resource and those that are resource free, and then construct a suitable measure for the amount of resource present in each case. One of the most intuitive and straightforward ways to do this involves determining how close the given state is to the set of resource free states, denoted by $\mathcal{F}$. This can be achieved by introducing a distance function on the set of density matrices; the amount of resource that a state possesses is then equal to its distance from $\mathcal{F}$. The logic behind such a distance based measure is simple: the further away the state lies from the set of resource free states, the more useful it must be as a resource. Unfortunately, for most distance functions the required minimization can only be carried out numerically, making their use impractical. 
In this letter we extend a result by Zhao et al. \cite{Zhao2018} and prove that by restricting attention to resource theories whose set of resource free states is equal to the fixed points of an \emph{idempotent} and \emph{unital} resource destroying map one can construct a continuous family of resource measures given by a closed expression. After a short discussion on the formulation of quantum resource theories we shall present this closed form and discuss its properties. We then apply it to the resource theory of coherence and demonstrate that the notion of a \emph{coarse grained measurement} requires modification for it to be consistently interpreted as a measurement which yields less information about a system's true state than a \emph{finer grained} one. In what follows we shall focus on finite dimensional systems with Hilbert space $\mathbb{C}^d$; quantum states are then given by the set of positive semi-definite Hermitian operators with unit trace. \section{Quantum resource theories}\label{sec1} To construct a theory of \emph{quantum resources} one begins by specifying the set of \emph{resource free} states $\mathcal{F}$. The class of allowed operations $\mathcal{L}$ can then be defined as those trace preserving and completely positive (CPTP) quantum operations $\Lambda$ which leave the set invariant. \begin{equation*} \mathcal{L}:=\qty{\Lambda\in(CPTP)|\Lambda(\sigma)\in\mathcal{F},\forall \sigma \in \mathcal{F}} \end{equation*} By definition any state $\rho\not\in\mathcal{F}$ is considered to be a \emph{resource} and can be used to implement operations outside of $\mathcal{L}$, temporarily overcoming any restrictions that were imposed. 
By employing the operator sum representation of a quantum operation we can define an even stronger set of allowed operations; namely, let $\Lambda({\sigma})=\sum_i\Lambda_i\sigma\Lambda_i^\dagger$ with $\sum_i\Lambda_i^\dagger\Lambda_i=I$, then \begin{equation*} \bar{\mathcal{L}}:=\qty{\Lambda\in(CPTP)\bigg| \frac{\Lambda_i\sigma\Lambda_i^\dagger}{p_i}\in\mathcal{F},\forall \sigma \in \mathcal{F},\forall i} \end{equation*} where $p_i=Tr\Lambda_i\sigma\Lambda_i^\dagger$. Clearly $\bar{\mathcal{L}}\subseteq\mathcal{L}$. The physical motivation behind the stronger subset of allowed operations is that retaining a record of the classical variable $i$, a process which is physically realizable in the lab \cite{Baumgratz2014}, will not produce a resource state. A few examples include the theory of \emph{entanglement} \cite{Horodecki2009}, where the class of \emph{Local Operations and Classical Communication} (LOCC) leaves the set of \emph{separable states} invariant; the resource theories of \emph{coherence} and \emph{purity}, where \emph{Incoherent} (IO) and unitary operations leave invariant the set of states which are diagonal with respect to a fixed basis and the totally mixed state respectively \cite{HorodeckiOppenheim2013}; and the theory of \emph{quantum reference frames} and \emph{asymmetry}, where a $G$-covariant operation leaves the set of symmetric states invariant, $G$ being the group describing the symmetry in question \cite{Bartlett2007,Gour2008,Gour2009}. In some cases it is also possible to consider two resources jointly; this seems to be the case for entanglement and coherence under the extended class of Local Incoherent Operations with Classical Communication (LIOCC) \cite{Streltsov2015,Chitambar2016}. Once a resource theory has been formulated in such terms it is necessary to find a way of measuring the amount of resource present in any given state. This is made possible with the help of suitable \emph{resource measures}, i.e. 
non-negative real valued functions on the set of density matrices. In order to qualify as a resource measure any candidate $\mu$ is required to have the following properties \begin{enumerate}[i)] \item $\mathcal{F}\subseteq ker(\mu)$, i.e. $\mu(\sigma)=0$, $\forall\sigma\in\mathcal{F}$ \item $\mu(U\rho U^\dagger)=\mu(\rho)$ for every unitary operation $U\in\bar{\mathcal{L}}$ \item $\mu$ must be non-increasing on average, specifically if $\Lambda(\rho)=\sum_ip_i\rho_i$ then $\sum_ip_i\mu(\rho_i)\leq\mu(\rho)$ for every $\Lambda\in\bar{\mathcal{L}}$ \item $\mu$ must be non-increasing under mixing, i.e. $\mu(\sum_ip_i\rho_i)\leq\sum_ip_i\mu(\rho_i)$ \end{enumerate} By combining properties iii) and iv), which are known as the \emph{strong monotonicity} and \emph{convexity} properties respectively, one can easily show that $\mu(\Lambda(\rho))\leq\mu(\rho)$, $\forall\Lambda\in\bar{\mathcal{L}}$. One way of constructing such measures involves considering the distance of the state from the set of free states $\mathcal{F}$. Specifically, any distance function $d(\rho,\sigma)$ on the set of density matrices induces a corresponding \emph{distance measure} $\mu_d(\rho)$ defined as \begin{equation}\label{eq1} \mu_d(\rho)=\min_{\sigma\in\mathcal{F}}d(\rho,\sigma) \end{equation} Examples of some of the most commonly used distances include the \emph{relative entropy} \cite{Vedral1998} $S(\rho|\sigma)=-\tr\rho\log\sigma+\tr\rho\log\rho$, the \emph{trace norm} $\norm{\rho-\sigma}_1=\tr\abs{\rho-\sigma}$ and the \emph{Hilbert-Schmidt} norm $\norm{\rho-\sigma}_2=\sqrt{\tr(\rho-\sigma)^2}$ \cite{bhatia1997}. Though technically speaking the relative entropy does not qualify as a distance function, since it is not symmetric in its arguments ($S(\rho|\sigma)\neq S(\sigma|\rho)$), it is frequently used because of its operational interpretation and its connection to asymptotic conversion rates \cite{HorodeckiOppenheim2013,Brandao2015}. 
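The minimization in Eq.~(\ref{eq1}) can be made concrete in the simplest case: the trace norm for a single qubit in the resource theory of coherence, where the free states are the diagonal density matrices $\sigma=\mathrm{diag}(q,1-q)$. Since $\rho$ and $\sigma$ both have unit trace, $\rho-\sigma$ is traceless with eigenvalues $\pm\sqrt{d^2+|c|^2}$, where $d=\rho_{00}-q$ and $c=\rho_{01}$, so the minimum equals $2|\rho_{01}|$. A brute-force sketch in pure Python (the state chosen is an arbitrary illustration):

```python
import math

# Distance measure of Eq. (eq1) for the trace norm in the single-qubit
# coherence theory.  Free states: sigma = diag(q, 1-q).  Both matrices
# have unit trace, so rho - sigma is traceless with eigenvalues
# +/- sqrt(d^2 + |c|^2), d = rho_00 - q, c = rho_01; the trace norm
# is the sum of the absolute eigenvalues.

def trace_norm_dist(rho00, rho01, q):
    d = rho00 - q
    return 2.0 * math.sqrt(d * d + abs(rho01) ** 2)

rho00, rho01 = 0.6, 0.25   # a valid qubit state (0.25^2 < 0.6 * 0.4)
mu = min(trace_norm_dist(rho00, rho01, k / 1000.0) for k in range(1001))

# The minimum sits at q = rho_00 and equals 2|rho_01|:
print(mu)                  # 0.5
```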
\section{Optimization free measures} We now restrict our attention to those resource theories whose set of free states is equal to the image of an idempotent and unital \emph{resource destroying map} \cite{Liu2017,Gour2017}. Namely, let $\mathcal{E}$ be a completely positive and trace preserving linear operation with the following properties \begin{itemize} \item $\mathcal{E}^2=\mathcal{E}$ (idempotency). \item $\mathcal{E}(I)=I$ (unitality). \end{itemize} With the exception of entanglement, for all of the resource theories referred to earlier the set of free states is given by the image of such a suitably chosen operation on the set of density matrices, i.e. $\mathcal{F}\equiv Im(\mathcal{E})$. This is indeed the case for the resource theory of coherence, where $\mathcal{E}(\rho)=\sum_iP_i\rho P_i$ is the state after a projective measurement is performed in a fixed basis; the resource theory of purity, where by definition $\mathcal{E}(\rho)=\frac{I}{d}$ and $d$ is the dimension of the underlying Hilbert space \footnote{Note that this is equivalent to the resource theory of asymmetry in the case of a degenerate Hamiltonian.}; and the resource theory of quantum reference frames and asymmetry, where $\mathcal{E}(\rho)=\int U(g)\rho U^\dagger(g)dg$ is the G-twirling operation associated with the group G, $U(g)$ is the unitary representation of the group and $dg$ the invariant measure. Noting that due to idempotency the image of $\mathcal{E}$ is equal to its set of fixed points, i.e. 
$Fix(\mathcal{E})=\{\sigma|\mathcal{E}(\sigma)=\sigma\}$, Equation (\ref{eq1}) can then be rewritten as \begin{equation*} \mu_d^{\mathcal{E}}(\rho)=\min_{\sigma\in Fix(\mathcal{E})}d(\rho,\sigma) \end{equation*} We now employ the \emph{Tsallis based relative entropy distance} introduced in \cite{Zhao2018} \begin{equation*} \tilde{S_a}(\rho|\sigma)= \begin{cases} \frac{(Tr\rho^a\sigma^{1-a})^{\frac{1}{a}}-1}{a-1} & a\in(0,1)\cup(1,2]\\ Tr\rho\log\rho-Tr\rho\log\sigma & a=1 \end{cases} \end{equation*} In this case we generalize their basic result about the coherence of a quantum state and show that \begin{theorem} $$\min_{\sigma\in Fix(\mathcal{E})}\tilde{S}_a(\rho|\sigma)=\frac{Tr\mathcal{E}^{\frac{1}{a}}(\rho^a)-1}{a-1}$$ \end{theorem} \begin{proof} To begin with we note that, as was already mentioned in \cite{Gour2009} (where the proof for $a=1$ was given), if $f(x)$ is analytic at $x=1$ then $f(\sigma)\in\ Fix(\mathcal{E})$, $\forall\sigma\in Fix(\mathcal{E})$. Also for any unital superoperator one can define its adjoint via $tr\mathcal{E}^\dagger(\rho)\sigma=tr\rho\mathcal{E}(\sigma)$ and further show that $Fix(\mathcal{E}^\dagger)=Fix(\mathcal{E})$. Applying this to the function $f(x)=x^{\frac{1}{a}}$ for $a\in(0,1)\cup(1,2]$ and setting $N=Tr\mathcal{E}^{\frac{1}{a}}(\rho^a)$ we find that \begin{equation*} \begin{split} Tr\rho^a\sigma^{1-a}&=Tr\rho^a\mathcal{E}^\dagger(\sigma^{1-a})\\ &=Tr\mathcal{E}(\rho^a)\sigma^{1-a}\\ &=N^aTr\left(\frac{\mathcal{E}^{\frac{1}{a}}(\rho^a)}{N}\right)^a\sigma^{1-a} \end{split} \end{equation*}% So \begin{equation*} \tilde{S}_a(\rho|\sigma)=\frac{N-1}{a-1}+\tilde{S}_a\left(\frac{\mathcal{E}^{1/a}(\rho^a)}{N}\bigg|\sigma\right) \end{equation*} and the minimum is given by $\sigma=\frac{\mathcal{E}^{\frac{1}{a}}(\rho^a)}{N}$. \end{proof} Due to its definition the Tsallis based relative entropy satisfies properties i) to iii) \cite{Vedral1998,Zhao2018}. We now proceed to show that it is also convex. 
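Before doing so, the closed form of Theorem 1 can be sanity-checked numerically in the simplest setting: the qubit dephasing map with $a=2$, where $\tilde{S}_2(\rho|\sigma)=\sqrt{Tr\rho^2\sigma^{-1}}-1$ and the free states are $\sigma=\mathrm{diag}(q,1-q)$. A minimal pure-Python sketch (the state chosen is arbitrary):

```python
import math

# Numerical sanity check of Theorem 1 for the qubit dephasing map
# E(rho) = sum_i P_i rho P_i with a = 2, where
#   S_2(rho|sigma) = sqrt(Tr rho^2 sigma^{-1}) - 1
# and the claimed minimum over diagonal sigma = diag(q, 1-q) is
#   N - 1,  N = Tr (E(rho^2))^{1/2} = sqrt((rho^2)_00) + sqrt((rho^2)_11).

rho = [[0.7, 0.3], [0.3, 0.3]]          # a valid qubit density matrix
off = rho[0][1] * rho[1][0]
rho2_00 = rho[0][0] ** 2 + off          # diagonal entries of rho^2
rho2_11 = rho[1][1] ** 2 + off

closed_form = math.sqrt(rho2_00) + math.sqrt(rho2_11) - 1.0

def s2(q):
    """Tsallis distance (a = 2) from rho to the free state diag(q, 1-q)."""
    return math.sqrt(rho2_00 / q + rho2_11 / (1.0 - q)) - 1.0

grid_min = min(s2(k / 10000.0) for k in range(1, 10000))
assert abs(grid_min - closed_form) < 1e-6
print(closed_form)                       # ~0.1858
```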
\begin{lemma} $\mu_{\tilde{S}_a}^\mathcal{E}(\rho)$ is operator convex. \end{lemma} \begin{proof} It is known that for any positive matrix $A$, $tr(XA^aX^\dagger)^\frac{1}{a}$ is concave for $ a\in(0,1]$ and convex for $a\in[1,2]$ \cite{Carlen2008}. Let $\mathcal{E}(\rho)=\sum_{i=1}^nE_i\rho E^\dagger_i$ be the operator sum representation of $\mathcal{E}$ and let \begin{equation*} X=\begin{pmatrix} E_1 & E_2 &\cdots& E_n\\ 0&0&\cdots & 0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&0\end{pmatrix} \quad A=\begin{pmatrix} \rho&0&\cdots&0\\ 0&\rho&\cdots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&\rho \end{pmatrix} \end{equation*} where $X$ and $A$ are matrices in the direct sum of $n$ copies of $\mathcal{H}$, $\mathcal{H}'=\bigoplus_{i=1}^n\mathcal{H}$. It follows that \begin{equation*} XA^aX^\dagger= \begin{pmatrix} \mathcal{E}(\rho^a)&0&\cdots&0\\ 0&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&0 \end{pmatrix} \end{equation*} and $Tr_{\mathcal{H}'}(XA^aX^\dagger)^\frac{1}{a}=Tr_{\mathcal{H}}\mathcal{E}^\frac{1}{a}(\rho^a)$. Combining this with the definition of the Tsallis based relative entropy completes the proof. \end{proof} \section{Resource theory of projective quantum measurements.} The act of measurement consists of gaining information about a system by e.g. measuring the system's position or momentum, and then using this information in order to make an estimate of the system's true state \cite{Marehand1977}. It is natural to assume that by repeating the same measurement we obtain no new information and that no more information can be extracted from a completely mixed state. Quantum mechanically both of these assumptions are satisfied by a \emph{projective quantum measurement}. This is described by a set of mutually orthogonal projectors $P_i$ summing to the identity. 
The state obtained after outcome $i$ has occurred with probability $p_i=Tr\rho P_i$ is then equal to $\rho_i=\frac{P_i\rho P_i}{p_i}$ and our estimate is given by $\Pi(\rho)=\sum_i P_i\rho P_i$ (in what follows we shall identify projective measurements by their corresponding set of projections and write $\Pi=\{P_i\}$). As was already mentioned, by keeping the set of projectors fixed we obtain the resource theory of coherence. \subsection{Coarse grained and fine grained measurements.} Let $n_i=Tr P_i$ be the degeneracy of the $i$-th projection. In a \emph{fine grained} (or Von Neumann \cite{von1955mathematical}) measurement $n_i=1$, $\forall i$, while in a \emph{coarse grained} (or L\"uders \cite{ANDP:ANDP200610207}) measurement $n_i\geq 1$. We now consider a special class of coarse grained measurements where each projection $L_i$ is a sum of fine grained projections. Specifically, let $\Pi=\{P_i\}$, $i\in I$ be a fixed fine grained measurement; then the corresponding coarse grained measurement $\bar{\Pi}=\{L_j\}$, $j\in J$ is obtained by partitioning the set of indices $i$ into separate subsets $I_j$ and setting $L_j=\sum_iP_i\chi_{I_j}(i)$, $j\in J$, where $\chi_{I_j}$ is the characteristic function of the set $I_j$. In \cite{Piani2014} the authors pointed out that for any distance $d$, $d(\rho,\Pi(\rho))\geq d(\rho,\bar{\Pi}(\rho))$. This is counter-intuitive, as we would expect that by performing fine grained measurements we gain more information about the state and that our estimate would get closer to it. Following \cite{Wehrl1977}, let us slightly modify our estimate of the state after the measurement and demand that it be equal to $\tilde{\Pi}(\rho)=\sum_ip_i\frac{L_i}{Tr L_i}$ where $p_i=Tr\rho L_i$; note that if each $L_i$ is one dimensional then $\tilde\Pi$ is equal to the fine-grained measurement. We now prove the following. 
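Before turning to the proof, the behaviour of the modified estimate $\tilde\Pi$ can be illustrated on a minimal three-level example (the coarse graining chosen below is an arbitrary illustration):

```python
# Minimal illustration of the modified coarse-grained estimate for a
# three-level system with fine-grained projectors P_1, P_2, P_3 and
# coarse graining L_1 = P_1 + P_2, L_2 = P_3.  Both Pi and Pi_tilde
# output states diagonal in the fixed basis, and Pi_tilde only reads
# the diagonal of rho, so it suffices to track diagonals.

def pi_tilde(diag):                  # diag = (rho_00, rho_11, rho_22)
    p1 = diag[0] + diag[1]           # p_j = Tr(rho L_j)
    p2 = diag[2]
    return [p1 / 2.0, p1 / 2.0, p2]  # sum_j p_j L_j / Tr(L_j)

d = [0.5, 0.2, 0.3]                  # diagonal of some state rho

# Pi_tilde is idempotent, and since the fine-grained Pi leaves the
# diagonal unchanged, Pi_tilde(Pi(rho)) = Pi_tilde(rho) as well:
out = pi_tilde(d)
again = pi_tilde(out)
assert all(abs(x - y) < 1e-12 for x, y in zip(out, again))
print(out)                           # [0.35, 0.35, 0.3]
```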
\begin{theorem} $$\mu_{\tilde{S}_a}^\Pi(\rho)\leq\mu_{\tilde{S}_a}^{\tilde{\Pi}}(\rho)$$ \end{theorem} \begin{proof} Since $\Pi^2=\Pi$, $\tilde\Pi^2=\tilde\Pi$ and $\Pi(I)=\tilde\Pi(I)=I$, according to Theorem 1 $$\mu_{\tilde{S}_a}^\Pi(\rho)=\min_{\sigma\in Fix(\Pi)}\tilde{S}_a(\rho|\sigma)$$ and $$\mu_{\tilde{S}_a}^{\tilde{\Pi}}(\rho)=\min_{\sigma\in Fix(\tilde \Pi)}\tilde{S}_a(\rho|\sigma)$$ It is now easy to see that $\Pi(\tilde{\Pi}(\rho))=\tilde{\Pi}(\rho)$ while $\tilde\Pi(\Pi(\rho))=\tilde{\Pi}(\rho)$; this means that $Im(\tilde\Pi)\subset Im(\Pi)$, from which the theorem immediately follows. \end{proof} \section{Discussion and conclusions.} We have shown that for any quantum resource theory whose set of free states is given by the image of an idempotent and unital resource destroying map $\mathcal{E}$ on the set of density matrices, the amount of resource present in a state is given by a closed form when the Tsallis based relative entropy is used as a distance based measure. By applying it to the resource theory of projective measurements we demonstrated how to properly define coarse grained measurements such that the information obtained about a system's true state is increased under a finer grained measurement. In future research \cite{Kollas} it will be shown how for bipartite systems, taking into account the set of allowed projective measurements that each observer can implement on its respective subsystem, and by employing an optimization procedure involving the set of allowed unitary operations, we can recover measures of quantum correlations such as the \emph{quantum discord}, which is known to be equal to the \emph{entropy of entanglement} in the special case where the state is pure \cite{Bravyi2003}. 
The same approach can also be applied to systems which do not admit a subsystem decomposition, such as a qutrit, revealing a novel feature of quantum mechanics, namely the restrictions on acquiring information about a state imposed by symmetry considerations. Finally, an interesting open question concerns the case when the set of free states is described by a resource destroying map which is not unital, as is the case in the resource theory of athermality and the recently developed resource theory of imaginarity \cite{Hickey2018}. \section*{Acknowledgments} The author wishes to thank Charis Anastopoulos, Konstantinos Blekos, and D. Kat for many fruitful discussions which helped in presenting the ideas in this paper. This research was supported by Grant No. E611 from the Research Committee of the University of Patras via the ``K. Karatheodoris'' program. \bibliographystyle{apsrev4-1}
\section{Introduction} Research on gravitational lensing has grown substantially during the past two decades (see,~e.g., Schneider et~al.\ 1992; Blandford \& Narayan 1992). A major reason for this attention is the prospect of obtaining an estimate of Hubble's constant, $H_0$, directly from cosmologically distant sources (Refsdal 1964, 1966), bypassing the many calibration-sensitive rungs of the cosmic distance ladder. Such an estimate requires the measurement of a difference $\Delta\tau$ between the arrival times of light from a source via two image paths, and an accurate model of the lensing mass distribution. There are now five lensing systems for which estimates of $H_0$ have been published: 0957+561; PG1115+080 (Schechter et~al.\ 1997; Barkana 1997; Impey et~al.\ 1998); B0218+357 (Biggs et~al.\ 1999); CLASS1608+656 (Fassnacht et~al.\ 1997); and HE~1104--1805 (Wisotzki et~al.\ 1998). The ``double quasar'' Q0957+561 provided the first documented case of gravitational lensing (Walsh et~al.\ 1979). This system includes two images, $A$ and $B$, separated by $\sim 6\arcsec$ on the sky, of a single background quasar at $z$=1.41 and a lensing galaxy $G1$, at $z$=0.36\,. Monitoring started almost immediately after discovery, with the goal of measuring $\Delta\tau$. However, it was found to be a very challenging measurement to make. Optical (Lloyd 1981; Keel 1982; Florentin-Nielsen 1984; Schild \& Cholfin 1986; Vanderriest et~al.\ 1989; Schild \& Thomson 1995) and radio (Leh\'ar et~al.\ 1992) monitoring programs produced extensive data, but analyses with a host of sophisticated techniques (see,~e.g., Press, Rybicki \& Hewitt 1992a, 1992b; Pelt et~al.\ 1994, 1996) could not resolve the conflict between groups obtaining delays near 400 days and those finding delays close to 540 days. 
Only recently has an optical detection of a sharp event in the light curve of each image resulted in a precise determination ($\Delta\tau=417\pm3$\,days, Kundi\'{c} et~al.\ 1997), confirming the short value of the delay first obtained by Schild \& Cholfin (1986). Additional confidence in this measurement comes from the consistency with the latest results from radio monitoring (Haarsma et~al.\ 1998). The other essential ingredient for obtaining the value of $H_0$ is a well-constrained model for the lensing mass distribution. A major complication for Q0957+561 is that a cluster of galaxies surrounding $G1$ also contributes to the lensing (Young et~al.\ 1981). Lensing mass can be exchanged in models between the cluster and $G1$ without affecting the image configuration, but yielding different $\Delta\tau$ predictions. This {\it cluster degeneracy} is an example of the mass-sheet degeneracy in lensing identified by Falco et~al.\ (1985). Thus a direct measurement of the mass of either the galaxy or the cluster is required to remove the degeneracy and provide an estimate of $H_0$. The cluster's mass distribution can be estimated from ``weak lensing'', the shape distortions of background galaxies due to lensing by the cluster. Such an estimate has been attempted for the Q0957+561 cluster (Fischer et~al.\ 1997). However, this cluster is not very massive, and thus the effect is weak and the estimate imprecise. The cluster potential can also be estimated from X-ray emissions. This measurement was made only crudely with ROSAT (Chartas et~al.\ 1998), but observations with the Advanced X-ray Astrophysics Facility (AXAF) should yield considerable new information. As for the lens galaxy, its mass can be estimated from velocity dispersion measurements. Falco et~al.\ (1997) measured a $G1$ velocity dispersion of $279\pm12$\,km\,s$^{-1}$, improving on an earlier result by Rhee (1991). 
There was, however, a difference of $50\pm20$\,km\,s$^{-1}$ between the velocity dispersion measured within or outside a radius of $0\farcs2$ from the galaxy center. In a more recent observation, Tonry \&~Franx (1998) measured a $G1$ velocity dispersion of $288\pm9$\,km\,s$^{-1}$, which confirms the Falco et~al.\ result, although they found no evidence of a velocity dispersion gradient. However, converting from these stellar dispersions to mass estimates is fraught with uncertainty. Romanowsky \& Kochanek (1998) have considered combining the velocity dispersion measurement with detailed modeling of the stellar velocity distribution in the lens galaxy $G1$, but this combination relies on assumptions regarding the velocity dispersion profile of $G1$. The most recent effort to explore models of 0957+561 was by Grogin \& Narayan (1996, hereafter GN). They considered two types of models for the lens and approximated the effect of the surrounding cluster as an additional constant shear term. The basic model type explored by GN represents the lens galaxy as a softened power-law sphere (SPLS), a density profile which allows for both a core radius and an arbitrary radial power-law index. The other type of model was adopted from earlier work by Young et~al.\ (1980) and Falco et~al.\ (1991, hereafter FGS). In this latter type, the galaxy has a King profile, a generalization of the singular isothermal sphere which switches from constant surface density at the center to an isothermal profile at large radii. These models are strongly constrained by VLBI data which resolve each of the two images into a core and several jet components (Gorenstein et~al.\ 1988; Garrett et~al.\ 1994, hereafter G94). The positions of the core and jet components were estimated by G94 from the VLBI visibility amplitudes and phases; these positions were used to determine a relative magnification matrix for the two images, with spatial gradients along the direction of the jet. 
The magnification matrix and gradients were used by GN to constrain lens models. One concern in evaluating the results of GN is the poor reduced $\chi^2$ $(\sim 4)$ of even their best-fitting lens models. Using an earlier set of VLBI constraints, Kochanek (1991) modeled the galaxy and the cluster as potentials expanded to quadrupole order. Kochanek showed that the mass model is not well constrained by the lensing data alone, and that deriving a precise estimate of Hubble's constant depends on interpreting other observations such as the stellar velocity dispersion. In this paper we reconsider the data and the lens models for Q0957+561. In \S 2 we model the radio core and jet components in the two images using the VLBI visibility data of G94 as constraints, after finding substantial problems in previous work, both in estimates of parameter values and in determinations of standard errors and correlations. In \S 3 we summarize the other observations which we use to constrain lens models, including the Hubble Space Telescope (HST) observations of Bernstein et~al.\ (1997, hereafter B97). In \S 4 we discuss the lens models that we use. We add ellipticity parameters to the spherical models used by GN and FGS, and we include a cluster in the model. We show that for a given cluster density profile the lens constraints can, in principle, determine the center of mass and total mass of the cluster. We present our results in \S 5 and discuss their implications. Finally, in \S 6 we summarize our present understanding of Q0957+561, and consider the prospects for a useful $H_0$ determination. \section{VLBI Constraints} \label{09vlbi} While optical maps of the images of Q0957+561 can yield only their positions, radio VLBI maps have resolved the images and revealed internal structures. Early VLBI observations (Porcas et~al.\ 1981) found that both components have a core-jet radio structure. 
Improved maps (Gorenstein et al.\ 1988) resolved the $A$ and $B$~images each into a compact core with several jet components, enabling a reliable determination of the relative magnification matrix of the $A$ and $B$ images. The maps of G94 further resolved the images into six components each, denoted $A_{1 \ldots 6}$ and $B_{1 \ldots 6}$ (where $A_1$ and $B_1$ denote the cores), and provided sufficient precision to estimate the spatial gradient of the relative magnification matrix. G94 modeled the flux distribution of each component as an elliptical two-dimensional Gaussian, and estimated the parameter values for each image from separate fits to the VLBI visibility data. These component positions cannot be used directly as constraints on the lens model because their spatial separation is small compared to the length scale over which the lens magnification changes appreciably. Thus, the position constraints from the six pairs of corresponding components are highly degenerate. G94 addressed this concern by introducing a second step in their analysis (following FGS), using the VLBI components to determine a relative image magnification matrix and its spatial derivatives along the jet direction. GN used this magnification matrix and its derivatives to constrain their lens models. We have found substantial problems with the VLBI component positions and error estimates determined by G94. First, G94 used the Caltech VLBI package program MODELFIT to determine the six Gaussian components. MODELFIT used only single-precision computations, and stopped searching long before having converged on a least-squares solution. Second, G94 used the program ERRFIT in the same package to estimate parameter variances and covariances. We found serious mistakes in ERRFIT, including inconsistencies with MODELFIT, and an error which caused ERRFIT to ignore one third of the data. 
Third, since $A$ and $B$ are images of a common source, and since the extended jet components are not expected to vary on time scales comparable to $\Delta\tau$, we expect the flux density ratios of corresponding components along the jet to vary smoothly with position, according to a small macroscopic magnification gradient. But because G94 used only partial flux density information, with the component positions as constraints in their second step, their jet flux density ratios deviate strongly from a smooth gradient. We have corrected the errors in both MODELFIT and ERRFIT, and have changed the estimation procedure by combining the two steps of the G94 analysis into a single step. We take two sets of elliptical Gaussian components (six components per image, each with flux density, position, and shape parameters), and restrict some of the $B$~image parameters to correspond to those of the $A$~image, through a linear magnification matrix and its spatial derivatives along the jet direction. We simultaneously fit this combined set of image component and magnification matrix parameters to the VLBI visibility data for the $A$ and $B$ images. In contrast to G94, who ignored the important correlations of the parameter estimates in the second step of their analysis, our one-step derivation of the magnification matrix and spatial derivatives fully incorporates the parameter covariances. We required the total flux densities and center positions of the $B$~image jet components to map to those of the corresponding $A$~image jet components, through the relative magnification transformation (see Appendix), but accounted for limitations of our mapping and flux model by allowing the VLBI component shapes to vary independently. Since the core flux density varies perceptibly over time scales of years, we allowed the $B$~image core flux density (i.e., $B_1$) to be independent of the $A_1$ flux density.
Thus, the overall fit involved 59 parameters: 36 $A$~image component parameters, minus 2 to fix $A_1$ at the origin; 4 magnification matrix and 2 independent spatial derivative parameters; 18 $B$~image component shape parameters; and one for the $B_1$ flux density. The overall fit yielded a reduced chi-squared of $\bar{\chi}^2=2.2$, for 21040 visibility amplitudes and phases. Although our improved analysis should produce more reliable parameter estimates, there are a number of reasons to treat the derived uncertainties conservatively. First, there is an issue of possible time dependence. Parsec-scale jet components often move outwards at apparently superluminal velocities (see,~e.g., Cawthorne 1991). With an apparent speed of, say, 6 times the speed of light, the jet components in Q0957+561 would move roughly $0.7$~milliarcseconds (mas) in the time $\Delta\tau$. Since we are comparing VLBI observations of $A$ and $B$ obtained {\it simultaneously}, while the lens models assume a stationary source, superluminal motion could affect our conclusions. Campbell et~al.\ (1995) monitored the inner part of the VLBI jet over 6 years, and found no motion of the jet components with respect to the core; any change in position of $\sim1\,$mas over this period would have been detected. This limit is still somewhat larger than our estimated errors for some of the component positions (see below), and hence we cannot exclude the possibility of superluminal motions being relevant. A~second concern is substructure in the lens galaxy (see,~e.g., Mao \& Schneider 1998). Either a globular cluster or a mass fluctuation in the lens of $\sim10^6\,M_{\odot}$ along the path of light from a jet component can deflect this component by about 1~mas. Such deflections would occur independently in each image, and would not be modeled by macro-lens models that assume smooth density distributions on arcsecond scales.
Finally, although we have followed G94 in using a flux density model of multiple elliptical components, this simple model may not represent the actual flux density distribution satisfactorily, thereby resulting in $\bar{\chi}^2>1$ and underestimated uncertainties. To account for the imperfect fit, we increase our parameter error estimates by a factor of $\sqrt{\bar{\chi}^2}$. This increase corresponds to rescaling the VLBI data uncertainties to make $\bar{\chi}^2=1$. The VLBI image component parameter values resulting from our fit are shown in Table~1. Except for the $B_1$ flux density, the $B$~image positions and flux densities are derived from their $A$~image counterparts, through the image magnification matrix and its spatial derivatives. Each image component is described by a total flux density, a center position, the major axis full-width at half-maximum, the axis ratio, and the position angle of the major axis. We give component centers in relative right ascension $\Delta\alpha$ and declination $\Delta\delta$, rather than in G94's polar coordinates. Throughout this paper our coordinates refer to epoch B1950.0, which was used for the G94 VLBI observations. The standard errors in Table~1 have been scaled to make $\bar{\chi}^2=1$. We do not show the parameter covariances, since these parameters were not used as direct constraints for lens models. Figure~1 shows the flux density contours of the $A$ and $B$ images as given by our model. Each image component is also represented in the figure by a rectangle whose dimensions coincide with the major and minor axes of the corresponding Gaussian. Note that the $B$~image components agree more closely with their $A$~image counterparts than is the case for G94's VLBI component model (see Figure~3 in G94). Table~2 presents the VLBI constraints that we use in developing lens models\footnote{The complete covariance matrix for the parameters of the VLBI fit is available at http://www.sns.ias.edu/$^{\sim}$barkana/0957.html}.
These constraints include the parameter estimates, with scaled standard errors and normalized correlation coefficients, for the magnification matrix at the core and brightest jet component positions, and separately for the positions of the brightest jet components ($A_5$ and $B_5$) relative to their respective cores. Although the jet component positions in Table~2 are not independent of the magnification matrix, they do provide the most precise information on the jet structure. Thus we follow GN by including them as direct constraints to the lens models. Our results imply a relative $A\rightarrow B$ magnification of $0.74\pm 0.06$ at the core and $0.64\pm 0.04$ at the brightest jet component. The gradients of the eigenvalues along the jet direction from $A_1$ to $A_5$ (see Appendix) are $\dot{M}_1=(-2.6\pm 1.3) \times10^{-3}\ {\rm mas}^{-1}$ and $\dot{M}_2=(4.0\pm 3.3)\times10^{-4}\ {\rm mas}^{-1}$. These gradients differ somewhat from the values estimated by G94, of $\dot{M}_1=(0.5\pm 1.7)\times 10^{-3}\ {\rm mas}^{-1}$ and $\dot{M}_2=(2.6\pm 0.9)\times 10^{-3}\ {\rm mas}^{-1}$. \section{Other Observational Constraints} \label{09s3} In addition to the information provided by the VLBI structures, there are a number of other observations that we use to constrain the lens models. The separation between the two quasar images determines the mass scale of the lens model. For the $A-B$ core separation, FGS and GN adopted the value of $(-1\farcs25271,6\farcs04662)$ with $0\farcs00004$ uncertainty from the original measurement of Gorenstein et~al.\ (1984). There seems, however, to have been a slight error in FGS in the conversion from seconds of time to arcseconds. We use the correct value of $(-1\farcs25254,6\farcs04662)$. The difference is tiny and has a negligible effect on the results. The position of the principal lens galaxy, $G1$, provides an important constraint.
GN assumed the optical center of brightness of $G1$ to be at $(0\farcs 19, 1\farcs 00)$ (Stockton 1980) from image~$B$, with an uncertainty in each component of 30 mas. The lens galaxy is also detectable at radio wavelengths, and the most precise VLBI observations of the faint radio component $G'$ yielded an estimated separation from $B$ of $(0\farcs 181, 1\farcs 029)$, with a standard error of 1~mas (Gorenstein et~al.\ 1983). Recent HST observations (B97) yield a $G1$ position of $(0\farcs 1776, 1\farcs 0186)$ with $3.5$ mas errors, only about three standard deviations away from the VLBI $G'$ position. It is not certain whether the position of the radio source or even that of the optical center coincides with the center of the lens potential within the measurement uncertainties, but as a conservative option, we have chosen to use the B97 $G1$ coordinates and errors to constrain the lens position. In addition to the VLBI constraints, G94 included two $B/A$ magnification ratios as constraints: those observed at the core and at the position of the brightest jet component. Since this jet flux ratio is incorporated in our VLBI fitting, we take only the core magnification ratio as an additional constraint. The core flux density varies over times comparable to $\Delta\tau$. To account for this, we allowed for a variable core flux ratio in the VLBI fitting, but the jet constraints alone yield a predicted magnification ratio at the core. The core magnification ratio has been independently determined to be $0.747 \pm 0.015$, from a combination of optical emission line ratios with VLA and VLBI light curve analyses (see,~e.g., Conner et~al.\ 1992). If we add the directly observed core magnification ratio, we have two constraints on the same quantity and an additional degree of freedom. Models with a smooth surface mass density for the lens produce a third image of Q0957+561, typically demagnified and near the center of the lens galaxy.
No such image has been seen down to a $5\sigma$ limit of $1/30$ the flux density of image~$B$ (Gorenstein et~al.\ 1984). We follow the approach of GN in penalizing models only when their predictions exceed this $5\sigma$ limit, which GN achieved by adding to the $\chi^2$ a term \begin{equation} \chi^2_{C/B}= \left\{ \begin{array}{ll} 0 & C/B < 1/30 \\ \frac{(C/B-1/30)^2}{(1/150)^2} & C/B > 1/30 \end{array} \right. , \end{equation} where $C/B$ refers to the third image flux density ratio with respect to the $B$~image. In the SPLS model, the core radius determines the degree of central mass concentration and is the parameter most sensitive to the third-image flux limit. In the FGS model the central point mass prevents a third image from forming. Following GN, we add this constraint only in cases like the SPLS model where the third-image limit plays a role. B97 discovered a faint arc with two bright ``Knots'' and a number of ``Blobs''; B97 noted that the Knots, which form part of a single arc, appear to be images of each other, if the arc is indeed produced by gravitational lensing, and that two Blobs (2~and~3) are also multiple images of a background galaxy. These Blobs may differ somewhat in their peak surface brightness, but this difference may be an artifact of limited angular resolution. We summarize the various constraints used in our model fitting in Table~3, which defines our fiducial, ``full'' set of constraints. Additional global constraints are given by the extended radio lobes found with the VLA (components $C$, $D$ and $E$ of Greenfield et~al.\ 1985), which must not be multiply imaged by a lens model. We check for this constraint but do not formally include it in the $\chi^2$ since our models always satisfy it easily.
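The one-sided $\chi^2_{C/B}$ penalty above maps directly to code. The following is a minimal sketch (our illustration, not the authors' implementation): zero below the $5\sigma$ limit of $1/30$, quadratic above it with an effective $1\sigma$ scale of $1/150$.

```python
def third_image_penalty(c_over_b, limit=1.0 / 30.0, sigma=1.0 / 150.0):
    """One-sided chi^2 penalty for the predicted third-image flux ratio.

    Returns zero when the predicted C/B flux ratio is at or below the
    5-sigma observational limit of 1/30, and grows quadratically above
    it with a 1-sigma scale of 1/150.
    """
    if c_over_b <= limit:
        return 0.0
    return ((c_over_b - limit) / sigma) ** 2
```

A model predicting $C/B$ at the limit incurs no penalty, while a prediction one scale unit ($1/150$) above the limit adds $\Delta\chi^2=1$.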
\section{Lens Models} \label{09s4} For modeling the gravitational lensing of Q0957+561, we consider a lens at redshift $z_L$ and a source at $z_S$, with corresponding angular diameter distances to the observer $D_L$ and $D_S$, and a lens-source distance $D_{LS}$. For a deflecting mass localized in a plane perpendicular to the line of sight, we write the lens equation (see,~e.g., Schneider et~al.\ 1992) as \begin{equation} \vec{\beta}=\vec{\theta}-\vec{\alpha}\ , \end{equation} where $\vec{\beta}$ is the source position, $\vec{\theta}$ is the image position, and $\vec{\alpha}$ is the deflection angle scaled by $D_{LS}/D_S$, all measured in the lens plane with the center of mass of the lens at the origin. We denote the mass density of the lens projected on this plane by $\Sigma$ and define a critical density $\Sigma_c=c^2 D_S/(4 \pi G D_L D_{LS})$. Then (in angular units) $\vec{\alpha}$ is the gradient of the two-dimensional potential $\psi$ which is determined by \begin{equation} \nabla^2\psi=2\frac{\Sigma} {\Sigma_c}\equiv 2\kappa\ . \end{equation} If we have one lens but multiple sources at different redshifts, then to determine the corresponding $\vec{\alpha}$, a given deflection angle must be scaled by the appropriate factor of $D_{LS}/D_S$. Therefore, to account for the HST Blob and Knot sources in Q0957+561, whose distances are unknown, we must add additional parameters $f_{\rm blob}$ and $f_{\rm knot}$ which are the $D_{LS}/D_S$ ratios for each of these sources over the same ratio for the quasar. Because of their simplicity, axisymmetric mass distributions are often used to model gravitational lenses. The Softened Power-Law Sphere (SPLS) model, defined by GN, can account for physical profiles ranging from isothermal to a point-mass, with the added possibility of a softened core. Most elliptical galaxies have central cusps in their luminosity profiles (e.g., Gebhardt et~al.\ 1996). 
The possible existence of core radii in dark matter halos is unresolved, with some simulations finding a shallow inner density profile with a large scatter among halos (Kravtsov et~al.\ 1998), while others find a density profile steeper than $1/r$ (e.g., Moore et~al.\ 1998). The SPLS model has a spherically symmetric volume density profile, \begin{equation} \rho(r)=\rho_0 \left(1+\frac{r^2}{r_c^2}\right)^{(\eta-3)/2}, \end{equation} with a corresponding projected surface density \begin{equation} \Sigma(\xi)=\Sigma_0 \left(1+\frac{\xi^2}{r_c^2}\right)^{(\eta-2)/2}\ , \end{equation} where $\Sigma_0=\rho_0 r_c B(1/2,1-\eta/2)$ and $B$ is the Euler beta function. The deflection law is \begin{equation} \vec{\alpha}(\vec{\theta}\,)=\left(\frac{\alpha_E^2}{\theta^2}\right) \left[\frac{(\theta^2+\theta_c^2)^{\eta/2}-\theta_c^{\eta}} {\alpha_E^{\eta}}\right] \vec{\theta}\ , \end{equation} where $\alpha_E=\alpha_0^{2/(2-\eta)} \theta_c^{-\eta/(2-\eta)}$ and in radians \begin{equation} \alpha_0=\left(\frac{8 \pi G \Sigma_0 r_c^2}{c^2 D \eta}\right)^{1/2}\ , \end{equation} with $D=D_L D_S /D_{LS}$. We note that the corresponding dimensionless surface density (i.e.\ convergence) is \begin{equation} \kappa(\theta)=\frac{\eta}{2} \alpha_E^{2-\eta} (\theta^2+\theta_c^2)^{\frac{\eta}{2}-1}\ . \end{equation} The parameters are thus a normalization $\alpha_E$, core radius $\theta_c$, and power-law index $\eta$. We also use the empirical model introduced by FGS, which consists of a King profile and a central point mass. FGS adopted an analytic approximation introduced by Young et~al.\ (1981) for the deflection law of the King profile: \begin{eqnarray} \vec{\alpha}(\vec{\theta}\,)\ [{\rm radians}] & = & \left(\frac{D_L}{D}\right) \left(\frac{\sigma_v^2}{c^2}\right) \alpha_*(\theta)\ \hat{\theta}\ , \\ \alpha_*(\theta) & = & 53.2468\, f\left(1.155 \frac{\theta}{\theta_c}\right) - 44.0415\, f\left(0.579 \frac{\theta}{\theta_c}\right)\ , \nonumber \\ f(x) & = & \frac{\sqrt{1+x^2}-1}{x}\ . 
\nonumber \end{eqnarray} The parameters are a velocity dispersion $\sigma_v$ and a core radius $\theta_c$. The corresponding convergence is \begin{eqnarray} \kappa(\theta) & = & \left(\frac{D_L}{D}\right)\left(\frac{\sigma_v^2} {c^2}\right)\left\{ \frac{30.75}{\theta_c}g\left(1.155 \frac{\theta}{\theta_c}\right)- \frac{12.75}{\theta_c}g\left(0.579 \frac{\theta}{\theta_c}\right) \right\}\ , \label{kapfgs} \\ g(x) & = & \frac{1}{\sqrt{1+x^2}}\ . \nonumber \end{eqnarray} In order to fit the data, FGS also included a point mass $M_{\rm bh}$ at the center of the galaxy, which yields \begin{equation} \vec{\alpha}(\vec{\theta}\,)=\left(\frac{\alpha_{\rm bh}^2}{\theta^2}\right) \vec{\theta}\ , \end{equation} where the Einstein radius is \begin{equation} \alpha_{\rm bh}=\left(\frac{4 G M_{\rm bh}}{c^2 D}\right)^{1/2}=0\farcs 91 \left(\frac{M_{\rm bh}}{10^{11}\, h^{-1}\, M_{\odot}}\right)^{1/2}\ . \end{equation} Fitted models imply this point mass is $\sim 10^{11}\,M_{\odot}$, much larger than expected for black holes, so this term should be interpreted as correcting the King profile, which by itself is not steep enough near the center of the lens. Of course, this central mass may be redistributed in any axisymmetric manner (inside the $B$ image radius) without affecting the lensing, so the FGS model is not necessarily unrealistic. This ambiguity of the FGS model with respect to the central distribution of mass in the lens galaxy $G1$ makes it difficult to utilize velocity dispersion measurements to break the $H_0$ degeneracy. Since galaxies are usually not observed to be axisymmetric, elliptical mass distributions offer more general and realistic lens models. They are difficult to use, however, since the deflection angle obtained by Schramm (1990) for general elliptical models requires the evaluation of rather slow numerical integrals. To add ellipticity to the lens model while avoiding this difficulty, GN used an elliptical potential model.
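As a numerical sanity check on the axisymmetric SPLS profile above, the deflection and convergence can be sketched as follows (our illustration, not code from the paper). In the singular isothermal limit ($\eta=1$, $\theta_c=0$) the deflection magnitude reduces to the constant $\alpha_E$, and the pair satisfies the axisymmetric Poisson relation $(1/\theta)\,d(\theta\alpha)/d\theta=2\kappa$.

```python
def spls_deflection(theta, alpha_E, theta_c, eta):
    """Deflection magnitude |alpha|(theta) of the softened power-law sphere."""
    return (alpha_E ** 2 / theta) * (
        (theta ** 2 + theta_c ** 2) ** (eta / 2.0) - theta_c ** eta
    ) / alpha_E ** eta

def spls_kappa(theta, alpha_E, theta_c, eta):
    """SPLS convergence (dimensionless surface density)."""
    return 0.5 * eta * alpha_E ** (2.0 - eta) * (
        theta ** 2 + theta_c ** 2
    ) ** (eta / 2.0 - 1.0)

# Singular isothermal limit (eta = 1, theta_c = 0): the deflection
# magnitude is the constant alpha_E, and kappa = alpha_E / (2 theta),
# as expected for a singular isothermal sphere.
```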
The imaging properties of elliptical potentials have been investigated extensively (Kovner 1987, Blandford \& Kochanek 1987 and Kochanek \& Blandford 1987). They become identical to elliptical densities for very small ellipticities and produce similar image configurations even for moderate ellipticity (Kassiola \& Kovner 1993). However, elliptical potentials cannot represent mass distributions with ellipticities exceeding about $0.5$ because the corresponding density contours acquire the artificial feature of a dumbbell shape, and the density can also become negative in some cases (Kochanek \& Blandford 1987, Kassiola \& Kovner 1993, Barkana 1998). Because of this, GN restricted their model to the small ellipticity of $e=0.3$ measured for the lens galaxy light profile by Bernstein et~al.\ (1993). However, the more recent observations by B97 found that the isophotal ellipticity increases with radius, from $0.1$ to $0.4$. Furthermore, there is no guarantee that the dark matter has the same shape as the light profile, so it is interesting to test the ability of the lensing data to constrain the dark matter ellipticity directly. We use the SPLS density profile with elliptical isodensity contours, a model which may be called a softened power-law elliptical mass distribution (SPEMD). We calculate the deflection angle and magnification matrix of this family of models using the fast method of Barkana (1998), which avoids the numerical integrations. We parameterize the SPEMD convergence analogously to the SPLS, as \begin{equation} \kappa(\vec{\theta}\,)=\frac{\eta}{2} \alpha_E^{2-\eta} \left[(x^2/a^2+y^2+\theta_c^2)^{\frac{\eta}{2}-1} \right]\ , \end{equation} where we write $\vec{\theta}=(x,y)$, $a$ is the axis ratio (related to the ellipticity $e=1-a$), and we assume the major axis lies along the $y$-axis. More generally, the major axis is rotated at an angle $\varphi_a$, which we measure from North through East, consistent with Bernstein et~al.\ (1993, 1997).
The SPEMD thus adds $a$ and $\varphi_a$ to the set of parameters of the SPLS. We also explore an elliptical density model based on the FGS profile, keeping the point mass and adding ellipticity parameters to the King profile. As we did with the SPLS, we first take the axisymmetric convergence of the FGS model and substitute $(x^2/a^2+y^2)$ for $r^2$, and then rotate the major axis by an angle $\varphi_a$. When made elliptical, the convergence (Equation~\ref{kapfgs}) in the approximation of Young et~al.\ (1981) yields the difference of two terms, each of which corresponds to the special case of an isothermal SPEMD. The deflection angle and magnification of such a softened {\it isothermal} elliptical mass distribution has been computed analytically in terms of complex numbers by Kassiola \& Kovner (1993), so it is easy to perform lens modeling with the FGS elliptical mass distribution, or FGSE. The lensing galaxy in 0957+561 is a massive galaxy near the center of a galaxy cluster. Following FGS, we assume that the cluster deflection varies on a scale large compared to the image separation, so we expand the cluster deflection about the center of the lens galaxy and assume it has a linear deflection law, $\alpha_i=M_{ij} \theta^j$. The traceless part of the matrix $M_{ij}$ is a shear $\gamma$ with direction $\varphi_{\gamma}$, where \begin{equation} M=\gamma \left(\begin{array}{cc} \cos 2\varphi_{\gamma} & -\sin 2\varphi_{\gamma} \\ -\sin 2\varphi_{\gamma} & -\cos 2\varphi_{\gamma} \end{array}\right)\ . \end{equation} Note that GN denoted the shear angle $\phi$, and we have defined $\varphi_{\gamma}=-\phi$ for consistency with measuring the position angle of a possible corresponding cluster from North through East. 
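A short numerical sketch (ours, for illustration) of the external-shear matrix above confirms that $M$ is symmetric and traceless, with eigenvalues $\pm\gamma$ for any shear direction $\varphi_{\gamma}$:

```python
import numpy as np

def shear_matrix(gamma, phi_gamma):
    """Linear cluster deflection matrix M, with alpha_i = M_ij theta_j."""
    c2, s2 = np.cos(2.0 * phi_gamma), np.sin(2.0 * phi_gamma)
    return gamma * np.array([[c2, -s2],
                             [-s2, -c2]])

# Being symmetric and traceless, M stretches the image plane along one
# principal axis and compresses it along the orthogonal axis by the
# same amount gamma, whatever the shear angle.
```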
The trace part is a convergence $\kappa$, which corresponds to the degeneracy identified by Falco et~al.\ (1985): Given any lens model, if we multiply the deflection $\vec{\alpha}(\vec{\theta}\,)$ by the factor $(1-\kappa)$ and at the same time include a convergence $\kappa$ in the model, the relative image positions and magnifications remain unchanged. The time delay changes, however, by the factor $(1-\kappa)$, inducing an uncertainty in the derived $H_0$ unless $\kappa$ can be determined. GN note that, because of this, models only determine the scaled shear $\gamma' = \gamma/(1-\kappa)$, and (for a given measured time delay) a scaled value of $h$, which we denote $h'$, where $H_0 = 100\, h\, {\rm km\,s^{-1}\,Mpc^{-1}}$ is standard notation and we also have \begin{equation} H_0 = 100\, h' \, (1-\kappa)\, {\rm km\,s^{-1}\,Mpc^{-1}}\ . \end{equation} In models which include external shear but no explicit convergence, the fitted mass of the lens galaxy is also related to the physical mass by the same factor of $(1-\kappa)$. This is true for $\alpha_E^{2-\eta}$ of the SPLS and $\sigma_v^2$ and $M_{\rm bh}$ of the FGS model. As noted above, a direct measurement of the mass of the lens galaxy or the cluster can determine $\kappa$. Hereafter we use the symbol $\kappa$ to refer to the convergence produced by the cluster only. As an independent attempt to determine $\kappa$, we also model the cluster as a Singular Isothermal Sphere (SIS) with a variable position, letting the fit determine the position as well as the velocity dispersion. For this model, $\rho(r) \propto 1/r^2$, $\Sigma(\xi) \propto 1/\xi$, and \begin{equation} \vec{\alpha}(\vec{\theta}\,)=b_{\rm cl} \hat{\theta}'\ , \ \ \ \ \ b_{\rm cl}=4 \pi\left(\frac{\sigma_{\rm cl}}{c}\right)^2 \frac{D_L}{D}= 17\farcs3 \left(\frac{\sigma_{\rm cl}}{1000\, {\rm km\,s}^{-1}}\right)^2\ , \end{equation} where $\sigma_{\rm cl}$ is the velocity dispersion of the cluster and $\vec{\theta}'=\vec{\theta}-\vec{\theta}_{\rm cl}$.
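To connect the SIS normalization above to numbers: since $D_L/D = D_{LS}/D_S$, the geometric prefactor $4\pi(\sigma_{\rm cl}/c)^2$ alone is about 28.8 arcsec for $\sigma_{\rm cl}=1000$ km\,s$^{-1}$, so the quoted $17\farcs3$ corresponds to a distance ratio $D_{LS}/D_S$ of roughly $0.6$. A minimal sketch (ours; the $0.6$ ratio is an illustrative assumed value, not taken from the paper, since it depends on the lens and source redshifts and the cosmology):

```python
import math

C_KM_S = 299792.458                      # speed of light [km/s]
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def sis_scale_arcsec(sigma_km_s, dls_over_ds):
    """SIS deflection scale b = 4 pi (sigma/c)^2 (D_LS/D_S), in arcsec."""
    return (4.0 * math.pi * (sigma_km_s / C_KM_S) ** 2
            * dls_over_ds * RAD_TO_ARCSEC)

# For sigma = 1000 km/s the prefactor alone gives ~28.8 arcsec; an
# assumed D_LS/D_S of ~0.6 brings this to the ~17.3 arcsec quoted above.
```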
The cluster parameters in this case are thus $\sigma_{\rm cl}$ and the coordinates $(x_{\rm cl},y_{\rm cl})$ of the cluster center $\vec{\theta}_{\rm cl}$ with respect to the lens galaxy position. GN considered this type of profile for the cluster but did not use it as part of their lens model. Bernstein et~al.\ (1993) included an isothermal cluster in some of their models. Some information can be obtained from lens modeling about the cluster position, because of the influence of terms of higher order than the shear. However, fitted models imply a cluster far from the lens galaxy and the results are similar to those obtained for a cluster at an infinite distance. Therefore, in the external shear model, while $h'$ (which does not include the effect of the cluster convergence) gives only an upper limit to the value of $h$, we can obtain an estimate of $h$ by assuming an SIS cluster at infinite distance, i.e., at a distance large compared to the image separation. In this case, since the external shear model determines $\gamma'$ while an SIS cluster has $\kappa=\gamma$, we obtain an estimate for $h$ of \begin{equation} h_{\rm SIS}=h'/(1+\gamma')\ . \end{equation} We can now count the number of degrees of freedom (ndof) for various models. We have 8 position constraints (core and jet $(x,y)$ positions in images $A$ and $B$, all relative to an observed lens position) and 6 magnification constraints (relative magnification matrix at the brightest jet component plus the two eigenvalues at the core). We add an independent core flux ratio constraint, and there is one more constraint for non-singular models which produce a third image. The SPLS and FGS models have 9 parameters: 3 for the lens galaxy profile, 2 for the external shear, and 4 for the two source positions. Two more parameters are added to the elliptical models, and one more when the SIS cluster is used instead of external shear.
If we interpret the B97 Blob and Knot components as two additional pairs of lensed images, then each pair adds 4 position constraints and one flux ratio. Each pair also adds to the model a source position (2 parameters) and a variable source redshift, since the redshifts of these faint sources have not been measured. Thus, e.g., the ndof is 6 for the SPLS model fit only to the VLBI data and the third image flux limit, and 8 for the FGSE model fit to the VLBI data, the core flux ratio, and the HST Knots and Blobs of B97. \section{Results \& Discussion} \label{09s5} In this section we apply the lens models defined in \S 4 to the constraints described in \S\S 2 and 3, and discuss the results, which are summarized in Table~4. For each model, we use $\bar{\chi}^2$ to denote the reduced $\chi^2$, and estimate $95\%$ confidence bounds as in GN, from the conservative condition $\Delta\chi^2=4\bar{\chi}^2$. Confidence ranges are included for the FGSE model and all $H_0$ values, to illustrate the scale of our uncertainties. We also assume an Einstein-de Sitter $\Omega=1$ cosmology in deriving $H_0$ values. The effects of this assumption are small for standard cosmologies, e.g., an open $\Omega=0.3$ universe increases the $H_0$ estimate by $\sim6\%$, while a flat $\Omega_{\rm matter}=0.3$ universe with a cosmological constant yields an increase of only $\sim4\%$. Finally, we use the Kundi\'c et~al.\ (1997) time delay measurement of 417$\pm$3~days throughout. We begin with the axisymmetric models for the lens galaxy together with the external shear model for the cluster, and fit to the full set of constraints (Table~3). The first two columns of Table~4 show the best-fit parameters for the SPLS and FGS models (for $\kappa=0$). Note that some of the parameter values using our corrected constraints differ substantially from the corresponding results of GN. However, the new constraints are very poorly fit, with $\bar{\chi}^2$ values over three times those of GN.
The lens galaxy is observed to be elliptical (B97), and when we add ellipticity as a parameter the lens models gain great flexibility. The $\bar{\chi}^2$ values are considerably lower for the elliptical SPEMD and FGSE models (see Table~4), and are comparable to the GN goodness-of-fit estimates. As a check, we tried using the SPLS profile with elliptical isopotentials as used by GN, instead of the elliptical isodensity contours of the SPEMD, and this fit gave similar parameter values to the SPEMD but with a larger $\bar{\chi}^2$ of 14. The $H_0$ estimates are very different for the FGSE and SPEMD models, but the FGSE has a much lower $\bar{\chi}^2$. The FGSE model also provides a closer match to the observed galaxy orientation ($\varphi_{\rm obs}\approx40^{\circ}$, with a scatter of $\sim10^{\circ}$, see B97). Mass and light tend to align to within $\sim10^{\circ}$ in other lens systems, once external tides are accounted for (Keeton, Kochanek, \& Falco 1998). Unfortunately, the two sources of asymmetry in both models are nearly degenerate, which tends to increase the error ranges on all parameter estimates. FGSE models with zero external shear are within our 2$\sigma$ range, implying that there is no lower limit on $\gamma'$, and that $\varphi_{\gamma}$ is undetermined. For both the FGSE and SPEMD models, the ellipticity is high, and for the FGSE model $h_{\rm SIS}$ increases steadily as $a$ decreases. If we constrain the ellipticity to equal the highest value observed for the light (i.e., axis ratio set to 0.6, see B97), we decrease $h_{\rm SIS}$ to 1.01 for the FGSE model, yielding $\bar{\chi}^2=8.0$. The SPEMD with $a=0.6$ yields $h_{\rm SIS}=0.626$ with $\bar{\chi}^2=10.4$. A more accurate description of the cluster contribution could remove much of the uncertainty in the models. Note that the FGSE and SPEMD models make very different $H_0$ predictions, with the SPEMD requiring four times as much external shear.
We can model the cluster contribution by replacing the external shear with a simplified cluster mass model (see,~e.g., Kochanek 1993). For example, our FGSE+CL model combines a shear-less FGSE model with a movable SIS cluster mass distribution. The overall $\bar{\chi}^2$ is comparable to that of the FGSE, but the estimated values for some of the parameters are substantially changed. The $h$ estimate is very similar to that for the FGSE, but the uncertainty has increased from $\sim20\%$ to $\sim30\%$, demonstrating the sensitivity of the result to the assumed model. Figure~2 shows the appearance of the source and image planes for this model. We compare in Figure~3 the estimates from the FGSE and FGSE+CL lens models with the estimates from observations of the cluster center and velocity dispersion. Both of the observed cluster positions (Fischer et~al.\ 1997) are offset from $G1$ in the same direction, and agree approximately with the positions from the two lens models. Note that the external shear model does not depend on whether the cluster lies to the East or West of $G1$, but the SIS cluster model breaks this degeneracy in favor of the observed direction. Still, the observational uncertainties encompass most of the models within our 2$\sigma$ contour. Likewise, the observed velocity dispersions (Fischer et~al.\ 1997; Garrett et~al.\ 1992; Angonin-Willaime et~al.\ 1994) agree with the model estimates, and are too imprecise to distinguish between the models. The error estimates on cluster parameters also depend strongly on the effective $\gamma'$, which produces large uncertainties for FGS models due to the weaker cluster contribution. Nevertheless, it is clear that more precise measurements of the cluster properties should provide significant constraints on the lens models. 
For example, if we assume a cluster velocity dispersion of 715~km/s and place the cluster center at the same distance ($32\farcs2$) as the ``galaxies'' position, derived from number counts (see Figure~3), we obtain $h=1.05$ with $\bar{\chi}^2=49.3/9=5.5$. The same velocity dispersion and a distance of $22\farcs2$, corresponding to the ``weak'' position from weak lensing, lowers $h$ further to 0.86 with $\bar{\chi}^2=82.3/9=9.1$. More precise cluster measurements may also result in $G1$ mass estimates which are more typical of a massive elliptical galaxy. For the clusters at the ``galaxies'' and ``weak'' positions, the corresponding $G1$ velocity dispersions $\sigma_v$ are 378~km/s and 331~km/s, respectively. Thus far, we have assumed an SIS cluster profile, but other profiles would yield different estimates of $h$. We can explore the effect of different cluster profiles by approximating the cluster as an external shear $\gamma$ and convergence $\kappa$. Then, given $\gamma'$ and $h'$, there is a relation between $\gamma$ and $\kappa$ for each cluster profile and position which allows us to estimate both, and thus $h=h' (1-\kappa)$. Figure~4 illustrates the effect on the estimate of $h$ of using different profiles for the simple, hypothetical case of $\gamma'=0.20$ and $h'=1.00$\,. If we assume that the cluster is spherically symmetric and described by the SPLS profile, the estimate of $h$ depends on the cluster power-law index $\eta$ and on the distance to the cluster from the lens galaxy in units of the cluster core radius. If the cluster is singular, even large deviations in $\eta$ from the isothermal value of unity have a small effect on the estimated $h$. This insensitivity results from the estimates of $\kappa$ being small, so large fractional changes in $\kappa$ produce smaller fractional changes in $(1-\kappa)$. 
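The last point can be made concrete with a small numerical sketch of the scaling $h = h'(1-\kappa)$; the $\kappa$ values below are hypothetical, chosen only to illustrate how a large fractional change in $\kappa$ produces a much smaller fractional change in $(1-\kappa)$.

```python
def h_from_kappa(h_prime, kappa):
    # External convergence rescales the inferred Hubble constant: h = h' (1 - kappa).
    return h_prime * (1.0 - kappa)

h_prime = 1.00                       # hypothetical fit value obtained with kappa = 0
h_a = h_from_kappa(h_prime, 0.05)    # -> 0.95
h_b = h_from_kappa(h_prime, 0.10)    # kappa doubled (a 100% fractional change) ...
# ... yet h only moves from 0.95 to 0.90, a ~5% shift
```
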
On the other hand, if $G1$ is within a few cluster core radii of the cluster center, then the SPLS profile approaches a constant density sheet (corresponding to a $1/r$ density profile in 3 dimensions); so with $\gamma'$ fixed, $\kappa$ is driven toward 1 and $h$ decreases. The observations of Fischer et~al.\ (1997) imply a cluster core radius of $5\pm5\arcsec$, for an isothermal cluster. This accuracy is insufficient for use in lens modeling; the determination of the cluster's mass distribution from weak lensing is difficult because of the insufficient number of faint background sources in the small central area of the cluster. However, a more precise determination of the cluster's center and mass profile may allow us to distinguish between some models, and thus further reduce the model uncertainties. As noted in \S \ref{09s3}, the center of $G1$ has been estimated from VLBI (Gorenstein et~al.\ 1983) and HST (B97) observations; we denote these positions as $G'$ and $G1$, respectively. If we substitute the $G'$ position with its 1~mas standard errors for the $G1$ position in Table~3, then the parameter estimates are almost unchanged for the SPEMD and FGSE models. Of course, it is not clear how close the center of light --- radio or optical --- is to the mass center of the galaxy. It is correspondingly unclear what standard deviation should be used for these possible separations. Moreover, the mass distribution may not follow the shape of the light distribution, which has an ellipticity that varies with radius (B97). As an extreme case, we can assume an effectively infinite position uncertainty, by removing the lens position constraint. The resultant fit to an SPEMD model has $\bar{\chi}^2=57.9/7=8.3$, with $a=0.48$, $\varphi_a=-16\hbox{$^\circ$}$, $h_{\rm SIS}=0.796$, and the estimated lens center is displaced by $(21,-78)$~mas from $G1$'s optical center. 
The corresponding values for the FGSE model are $\bar{\chi}^2=22.9/6=3.8$, with $a=0.53$, $\varphi_a=-1\hbox{$^\circ$}$, $h_{\rm SIS}=1.01$ and the lens displaced by $(45,-44)$~mas. These large estimated lens displacements from the center of light call for a general study of how much the mass and light centers should differ in the cores of elliptical galaxies. If we allow the center of mass for $G1$ to be as far as $\sim80$~mas away from the optical center, then the estimated $h_{\rm SIS}$ changes by $\sim20\%$. Another indication of uncertainty in our models is that image $B$ is well inside the effective radius of $G1$ ($R_{\rm eff}\sim4\farcs5$, Bernstein et~al.\ 1993), while image $A$ lies just outside this radius. Although we might expect that the galaxy mass is dominated by stars out to the distance of $B$, there may be a significant dark matter contribution at the radius of $A$, requiring a more complicated model. Since our full set of constraints supplies a fairly large number of degrees of freedom, we can explore the robustness of the results by observing the effect of removing individual constraints. If, e.g., we use the FGSE model without including the HST Blobs (but including the Knots), we find $\bar{\chi}^2=9.1/6=1.5$ with $a=0.15$, $\varphi_a=73\hbox{$^\circ$}$, and $h_{\rm SIS}=0.340$. This estimated ellipticity is much higher than that of the light distribution, which suggests that the FGSE model is not well constrained without the Blobs. By contrast, the Knots are only weak constraints due to the large errors associated with their positions and fluxes. The results are thus clearly sensitive to which constraints are included. As another test of robustness, we used the FGSE model with the full set of constraints but we recomputed the VLBI constraints requiring the $B$~image component shapes also to agree with those of the $A$~image through the spatially-varying magnification transformation. 
The resulting parameter values and uncertainties are almost identical to the FGSE results in Table~4, but with $\bar{\chi}^2=10.0$, higher than before. Our models also yield estimates of the distances to the HST objects. For example, the FGSE $f_{\rm blob}$ and $f_{\rm knot}$ values correspond to redshifts of $z_{\rm blob}=1.64$ and $z_{\rm knot}=3.54$ for $\Omega=1$. Assuming an open $\Omega=0.3$ universe or a flat universe with $\Omega_{\rm matter}=0.3$ increases $z_{\rm knot}$ by about 50\% and 20\% respectively, but for both cases $z_{\rm blob}$ increases by only a few percent. The allowed 2$\sigma$ ranges are wide (e.g., $1.15 < z_{\rm blob} < 2.42$ and $1.82 < z_{\rm knot} < 12.2$ for $\Omega=1$), so the Blob and Knot sources could be at the same distance for any of these cosmologies. Note that all of our models predict additional fainter counterimages of the Knot source close to the observed Blob images (Figure~2). Such counterimages may have been marginally detected (Avruch et~al.\ 1997) in the HST images. If the Knot and Blob sources are physically associated, the additional Knot images could provide more stringent lensing constraints. Large-scale mass fluctuations along the line-of-sight to Q0957+561 can produce an additional source of uncertainty in $H_0$, which will be important if the cluster is properly modeled. Barkana (1996) shows that large-scale structure affects the determination of $H_0$ with an uncertainty $\Delta_1$, but for models which are normalized to the $G1$ velocity dispersion, the velocity dispersion effectively constrains part of the effect of large-scale structure, and a smaller uncertainty $\Delta_2$ is left over. Given the source and lens redshifts for Q0957+561, a suite of models for the power spectrum of large-scale structure (Barkana 1996 and Figure~2 of Keeton et~al.\ 1997) yields typical $2\sigma$ uncertainties of $\Delta_1=9.8\%$ and $\Delta_2=5.5\%$. 
It is often argued that $H_0$ estimates from the Q0957+561 time delay are less reliable than those obtained from other lensed systems, because the cluster contribution is important. But most of the other systems for which a time delay has been measured have significant lensing contributions from a nearby group of galaxies. Although the cluster in Q0957+561 is more dominant than a smaller group would be, its mass distribution can, in principle, be directly measured. Only limited information on the cluster is presently available, but future prospects are promising for deeper weak lensing observations and for high-resolution X-ray measurements with the AXAF satellite. It may even be possible to check the cluster for dominant substructure with AXAF. Another possibility for determining indirectly the cluster contribution to lensing was suggested by Romanowsky \& Kochanek (1998), who used the velocity dispersion measurement of $G1$ to estimate its mass distribution via detailed modeling of the stellar velocity distribution. Unfortunately, the result depends on the mass distribution near the center of $G1$, which is poorly determined by lens models, especially for the FGS profile which has a large point mass at the center. A possible solution is to construct a lens model which follows the light shape near the center but becomes an independent dark matter halo farther out. The present data cannot constrain the additional parameters necessary for such a lens model. \section{Conclusions} We have used improved data in the analysis of the gravitational lens system Q0957+561. We re-analyzed the VLBI data of G94 with corrected numerical procedures, and obtained new estimates for the components and spatial gradients of the relative magnification matrix between the $A$ and $B$ images. We also included new lensing constraints from recently discovered optical components (B97). 
The VLBI and optical constraints were used to determine more elaborate lens models than had been previously explored. In particular, we considered models with two sources of asymmetry: ellipticity in the lens galaxy and external shear from the surrounding cluster of galaxies. Models with an axially symmetric lens are unable to fit the data, yielding $\bar{\chi}^2=23$ for the SPLS and $\bar{\chi}^2=27$ for the FGS model (all with the optical $G1$ lens position). Adding ellipticity as a model parameter leads to $\bar{\chi}^2=9.9$ for the SPEMD model and $\bar{\chi}^2=6.0$ for the FGSE model. The $H_0$ estimates derived from these two models differ substantially, with $h_{\rm SIS}=0.61^{+.18}_{-.16}$ and $h_{\rm SIS}=1.23^{+.22}_{-.23}$, respectively, where the uncertainties correspond to two standard deviations when an SIS cluster is used to represent the external shear. The two models can be distinguished, in principle, since they differ greatly in predicting the lens ellipticity direction and the magnitude of the cluster shear. Direct measurements of the cluster mass distribution thus have great potential. The simple lens models that we have considered cannot be uniquely constrained by VLBI measurements alone. The HST Blobs and Knots (B97) have lines of sight that are far away from those for the $A$ and $B$ images, and could eliminate highly elliptical lens models that are permitted by the basic VLBI and core flux constraints. The discovery of more background sources in the field, or other extended radio structures (see,~e.g., Avruch et~al.\ 1997), might eventually distinguish between models, and thus narrow the allowed range of $H_0$. New structures may also provide enough constraints to permit the application of more complicated and realistic mass models which can account for all of the observations. A reliable measurement of $H_0$ may be achievable by combining the results from several such well-studied lensed systems. 
\acknowledgements We are truly grateful to Mike Garrett for sending us the VLBI data and to Gary Bernstein for making some of the HST results available to us before publication. We thank Paul Schechter and Chris Kochanek for many valuable discussions and Ed Bertschinger, Peter Schneider, Simon White, and Avi Loeb for helpful discussions. RB acknowledges support by Institute Funds and by NASA grant NAG5-2816. JL is grateful for support from NSF grant AST93-03527 and from the NASA/HST grant GO-7495. This research was supported by the Smithsonian Institution. \section{APPENDIX} \label{09ap} The two images $A$~and $B$, and hence their VLBI components, are related by a relative magnification mapping. Up to an irrelevant translation, we can describe the mapping from image $A$ to image $B$ by: \begin{equation} \bf{x}^B_{i}=\bf{M}^{BA}_{ij}\bf{x}^A_{j} +\frac{1}{2}\bf{\partial M}^{BA}_{ijk}\bf{x}^A_{j}\bf{x}^A_{k}\ \label{magmap}, \end{equation} where repeated indices are summed. Here, $\bf{x}^A$ and $\bf{x}^B$ are the respective positions in the $A$ and $B$ images, referred to the origins at the centers of their respective cores; $\bf{M}^{BA}$ is the $2\times 2$ relative magnification matrix, evaluated at the center of $A_1$; and $\bf{\partial M}^{BA}$ is the tensor that represents the next order term in a Taylor series expansion (this tensor, by definition, is symmetric with respect to its last two indices). There are thus 4 independent parameters that define $\bf{M}^{BA}$ and 6 that define $\bf{\partial M}^{BA}$. The data, however, which are confined to a relatively small region, are sensitive only to the magnification derivatives along the jet direction, and weakly even to these. When we attempt estimates which include all 6 independent components of $\bf{\partial M}^{BA}$ the resultant values for most components are much higher than is physically reasonable. Thus, we remove the parameters to which our data are insensitive. 
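Equation~\ref{magmap} is straightforward to implement; the sketch below (with purely illustrative numbers for $\bf{M}^{BA}$ and $\bf{\partial M}^{BA}$, not fit results) may be useful for checking conventions.

```python
import numpy as np

def map_A_to_B(xA, M, dM):
    # Relative magnification mapping, Eq. (magmap):
    #   xB_i = M_ij xA_j + (1/2) dM_ijk xA_j xA_k,
    # with dM symmetric in its last two indices.
    return M @ xA + 0.5 * np.einsum('ijk,j,k->i', dM, xA, xA)

M = np.array([[2.0, 0.3],
              [0.1, 1.5]])                 # illustrative relative magnification matrix
dM = np.zeros((2, 2, 2))
dM[0, 0, 1] = dM[0, 1, 0] = 1e-3           # keep the symmetry dM_ijk = dM_ikj
xA = np.array([10.0, -5.0])                # position in the A image
xB = map_A_to_B(xA, M, dM)                 # -> array([18.45, -6.5])
```
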
The mapping of Equation~\ref{magmap} describes the expansion of the relative magnification matrix $\bf{M}^{BA}(\bf{x}^A)$ about the origin: \begin{equation} \bf{M}^{BA}_{ij} (\bf{x}^A)\equiv\frac{\partial \bf{x}^B_i}{\partial \bf{x}^A_j} =\bf{M}^{BA}_{ij}+\bf{\partial M}^{BA}_{ijk}\bf{x}^A_{k}\ . \label{map2} \end{equation} For such a position-dependent magnification matrix, a Gaussian representation of flux density components in the $A$ jet is no longer mapped to a corresponding Gaussian representation in the $B$ jet. In our analysis, however, we ignore the variation of $\bf{M}^{BA}$ over the extent of a component; thus we map component $A_4$, for example, to $B_4$ using a relative magnification matrix, from Equation~\ref{map2}, evaluated at the center of $A_4$. This approximation provides another reason for our treating the error estimates conservatively. Following Gorenstein et~al.\ (1988) and G94, we decompose the matrix $\bf{M}^{BA}$ into its eigenvalues ($M_1$ and $M_2$) and the corresponding position angles of the eigenvectors ($\phi_1$ and $\phi_2$). The matrix can be represented as \begin{equation} {\bf M^{BA}}=M_1 {\bf E}(\phi_1,\phi_2)+ M_2 {\bf E}(\phi_2, \phi_1)\ , \end{equation} where \begin{equation} \bf{E}(\rho,\sigma)= \left[\begin{array}{c} \cos \rho \\ \sin \rho \end{array}\right] \cdot \left[\begin{array}{cc} -\sin \sigma & \cos \sigma \end{array}\right] \cdot \csc(\rho-\sigma)\ , \end{equation} in the notation of G94, but without rotating coordinates to align with the $A$ jet as do G94. Because of the limited sensitivity of our data, we restrict our estimation to a subset of the $\bf{\partial M}^{BA}$ parameters. We fix $\phi_1$ and $\phi_2$ to be constant along both the jet axis and the perpendicular direction, which is slightly different from the procedure of G94. 
Thus, we fix four $\bf{\partial M}^{BA}$ components through: \begin{eqnarray} \bf{\partial M}^{BA}_{221}&=&\frac{\bf{M}^{BA}_{21}} {\bf{M}^{BA}_{12}}\bf{\partial M}^{BA}_{122}\ , \nonumber \\ \bf{\partial M}^{BA}_{211}&=&\frac{\bf{M}^{BA}_{21}} {\bf{M}^{BA}_{12}}\bf{\partial M}^{BA}_{121}\ , \nonumber \\ \bf{\partial M}^{BA}_{111}&=&\frac{\bf{M}^{BA}_{21}} {\bf{M}^{BA}_{12}}\bf{\partial M}^{BA}_{122}- \frac{\bf{M}^{BA}_{22}-\bf{M}^{BA}_{11}} {\bf{M}^{BA}_{12}} \bf{\partial M}^{BA}_{121}\ , \label{restrict} \\ \bf{\partial M}^{BA}_{222}&=&\bf{\partial M}^{BA}_{121}+ \frac{\bf{M}^{BA}_{22}-\bf{M}^{BA}_{11}} {\bf{M}^{BA}_{12}}\bf{\partial M}^{BA}_{122}\ , \nonumber \end{eqnarray} leaving only two independent components of the derivative matrix.
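As a numerical sanity check on the decomposition above, one can verify that $M_1 {\bf E}(\phi_1,\phi_2) + M_2 {\bf E}(\phi_2,\phi_1)$ indeed has eigenvalues $M_1$ and $M_2$ with eigenvectors along the position angles $\phi_1$ and $\phi_2$; the numbers below are hypothetical.

```python
import numpy as np

def E(rho, sigma):
    # E(rho, sigma) = [cos rho, sin rho]^T . [-sin sigma, cos sigma] . csc(rho - sigma)
    col = np.array([np.cos(rho), np.sin(rho)])
    row = np.array([-np.sin(sigma), np.cos(sigma)])
    return np.outer(col, row) / np.sin(rho - sigma)

M1, M2 = -0.75, 1.20                               # hypothetical eigenvalues
phi1, phi2 = np.radians(25.0), np.radians(140.0)   # hypothetical eigenvector angles
M = M1 * E(phi1, phi2) + M2 * E(phi2, phi1)

# unit vectors along phi1 and phi2 are eigenvectors with eigenvalues M1 and M2
v1 = np.array([np.cos(phi1), np.sin(phi1)])
v2 = np.array([np.cos(phi2), np.sin(phi2)])
assert np.allclose(M @ v1, M1 * v1)
assert np.allclose(M @ v2, M2 * v2)
```
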
\section{Introduction} Let $G$ be an additive group and $A$ be a subset of $G$. We denote by $S_A$ the collection of subset sums of $A$: $$ S_A= \{ \sum_{x\in B}x \mid B\subset A, |B| < \infty \}. $$ The following two questions are among the most popular questions in additive combinatorics. \begin{question} \label{question:1} When is $ 0 \in S_A$? \end{question} \begin{question} \label{question:2} When is $S_A= G$? \end{question} If $S_A$ does not contain the zero element, we say that $A$ is {\it zero-sum-free}. If $S_A =G$ ($S_A \neq G$), then we say that $A$ is {\it complete (incomplete)}. In this paper, we focus on the case $G={\mathbf Z}_p$, the cyclic group of order $p$, where $p$ is a large prime. The asymptotic notation will be used under the assumption that $p \rightarrow \infty$. For $x \in {\mathbf Z}_p$, $\|x\|$ (the norm of $x$) is the distance from $x$ to $0$. (For example, the norm of $p-1$ is 1.) All logarithms have natural base and $[a,b]$ denotes the set of integers between $a$ and $b$. \subsection{A sharp bound on the maximum cardinality of a zero-sum-free set} How big can a zero-sum-free set be? This question was raised by Erd\H os and Heilbronn \cite{EH} in 1964. In \cite{Szem1}, Szemer\'edi proved that \begin{theorem} \label{theorem:Szem} There is a positive constant $c$ such that the following holds. If $A \subset {\mathbf Z}_p$ and $|A| \ge cp^{1/2}$, then $0 \in S_A$. \end{theorem} A result of Olson \cite{Olson} implies that one can set $c=2$. More than a quarter of a century later, Hamidoune and Z\'emor \cite{HZ} showed that one can set $c= \sqrt 2 +o(1)$, which is asymptotically tight. \begin{theorem} \label{theorem:HZ} If $A \subset {\mathbf Z}_p$ and $|A| \ge (2p)^{1/2} +5 \log p$, then $0 \in S_A$. \end{theorem} Our first result removes the logarithmic term in Theorem \ref{theorem:HZ}, giving the best possible bound (for all sufficiently large $p$). Let $n(p)$ denote the largest integer $n$ such that $\sum_{i=1}^{n-1}i < p$. 
\begin{theorem} \label{theorem:main1} There is a constant $C$ such that the following holds for all primes $p \ge C$. \begin{itemize} \item If $p \neq \frac{n(p)(n(p) +1)}{2} -1$, and $A$ is a subset of ${\mathbf Z}_p$ with $n(p)$ elements, then $0 \in S_A$. \item If $p = \frac{n(p)(n(p) +1)}{2} -1$, and $A$ is a subset of ${\mathbf Z}_p$ with $n(p)+1$ elements, then $0 \in S_A$. Furthermore, up to a dilation, the only zero-sum-free set with $n(p)$ elements is $\{-2,1,3,4, \dots, n(p)\}$. \end{itemize} \end{theorem} To see that the bound in the first case is sharp, consider $A=\{1, 2, \dots, n(p)-1\}$. \subsection{The structure of zero-sum-free sets with cardinality close to the maximum} Theorem \ref{theorem:main1} does not provide information about zero-sum-free sets of size slightly smaller than $n(p)$. The archetypal example of a zero-sum-free set is a set whose sum of elements (as positive integers between 1 and $p-1$) is less than $p$. The general phenomenon we would like to support here is that a zero-sum-free set with sufficiently large cardinality should be close to such a set. Deshouillers \cite{D1} showed \begin{theorem} \label{theorem:D1} Let $A$ be a zero-sum-free subset of ${\mathbf Z}_p$ of size at least $p^{1/2}$. Then there is some non-zero element $b \in {\mathbf Z}_p$ such that $$\sum_{a \in bA, a<p/2} \| a\| \le p+O(p^{3/4}\log p) $$ and $$\sum_{a \in bA, a>p/2} \| a\|=O(p^{3/4} \log p).$$ \end{theorem} The main issue here is the magnitude of the error term. In the same paper, there is a construction of a zero-sum-free set with $cp^{1/2}$ elements ($c>1$) where $$\sum_{a \in bA, a<p/2} \| a\| = p+\Omega(p^{1/2}) $$ and $$\sum_{a \in bA, a>p/2} \| a\|= \Omega (p^{1/2}).$$ \noindent It is conjectured \cite{D1} that $p^{1/2}$ is the right order of magnitude of the error term. Here we confirm this conjecture, assuming that $|A|$ is sufficiently close to the upper bound. 
\begin{theorem}\label{theorem:main2} Let $A$ be a zero-sum-free subset of ${\mathbf Z}_p$ of size at least $.99 (2p)^{1/2}$. Then there is some non-zero element $b \in {\mathbf Z}_p$ such that $$\sum_{a \in bA, a<p/2} \| a\| \le p+O(p^{1/2}) $$ and $$\sum_{a \in bA, a>p/2} \| a\|=O(p^{1/2}).$$ \end{theorem} The constant $.99$ is ad hoc and can be improved. However, we do not elaborate on this point. \subsection{Complete sets} All questions concerning zero-sum-free sets are also natural for incomplete sets. Here is a well-known result of Olson \cite{Olson}. \begin{theorem} Let $A$ be a subset of ${\mathbf Z}_p$ with more than $(4p-3)^{1/2}$ elements; then $A$ is complete. \end{theorem} Olson's bound is essentially sharp. To see this, observe that if the sum of the norms of the elements of $A$ is less than $p$, then $A$ is incomplete; call such a set {\it small}. Let $m(p)$ be the largest cardinality of a small set. One can easily verify that $m(p)= 2p^{1/2} +O(1)$. We now want to study the structure of incomplete sets of size close to $2p^{1/2}$. Deshouillers and Freiman \cite{DF} proved \begin{theorem} \label{theorem:DF} Let $A$ be an incomplete subset of ${\mathbf Z}_p$ of size at least $(2p)^{1/2}$. Then there is some non-zero element $b \in {\mathbf Z}_p$ such that $$\sum_{a \in b A} \| a\| \le p +O(p^{3/4} \log p). $$ \end{theorem} \noindent Similar to the situation with Theorem \ref{theorem:D1}, it is conjectured that the right error term has order $p^{1/2}$ (see \cite{D2} for a construction that matches this bound from below). We establish this conjecture for sufficiently large $A$. \begin{theorem} \label{theorem:main3} Let $A$ be an incomplete subset of ${\mathbf Z}_p$ of size at least $1.99 p^{1/2}$. Then there is some non-zero element $b \in {\mathbf Z}_p$ such that $$\sum_{a \in b A} \| a\| \le p +O(p^{1/2}). $$ \end{theorem} {\it Added in proof.} While this paper was being written, Deshouillers informed us that he and Prakash have obtained a result similar to Theorem \ref{theorem:main1}. 
\section{Main lemmas} The main tools in our proofs are the following results from \cite{SzemVu1}. \begin{theorem} \label{lemma:main1} Let $A$ be a zero-sum-free subset of ${\mathbf Z}_p$. Then we can partition $A$ into two disjoint sets $A'$ and $ A^{''} $ where \begin{itemize} \item $A'$ has negligible cardinality: $|A'| = O(p^{1/2} /\log^2 p).$ \item The sum of the elements of (a dilate of) $A^{''}$ is small: There is a non-zero element $b \in {\mathbf Z}_p$ such that the elements of $b A^{''} $ belong to the interval $[1, (p-1)/2]$ and their sum is less than $p$. \end{itemize} \end{theorem} \begin{theorem}\label{lemma:main2} Let $A$ be an incomplete subset of ${\mathbf Z}_p$. Then we can partition $A$ into two disjoint sets $A'$ and $ A^{''} $ where \begin{itemize} \item $A'$ has negligible cardinality: $|A'| = O(p^{1/2} /\log^2 p). $ \item The norm sum of the elements of (a dilate of) $A^{''}$ is small: There is a non-zero element $b \in {\mathbf Z}_p$ such that the sum of the norms of the elements of $b A^{''} $ is less than $p$. \end{itemize} \end{theorem} The above two theorems were proved (without being formally stated) in \cite{SzemVu1}. A stronger version of these theorems will appear in a forthcoming paper \cite{NgSzV}. We also need the following simple lemmas. \begin{lemma}\label{lemma:simple4} Let $T'\subset T$ be sets of integers with the following property. There are integers $a \le b$ such that $[a,b] \subset S_{T'}$ and the non-negative (non-positive) elements of $T\backslash T'$ are less than $b-a$ (greater than $a-b$). Then $$[a, b+ \sum_{x \in T\backslash T', x \ge 0} x] \subset S_T. $$ $$([a+ \sum_{x \in T\backslash T', x \le 0} x,b] \subset S_T. )$$ \end{lemma} \noindent The (almost trivial) proof is left as an exercise. \begin{lemma} \label{lemma:simple5} Let $K=\{k_1, \dots, k_l\}$ be a subset of ${\mathbf Z}_p$, where the $k_i$ are positive integers and $\sum_{i=1}^l k_i \le p$. 
Then $|S_K| \ge l(l+1)/2.$ \end{lemma} To verify this lemma, notice that (assuming $k_1 < \dots < k_l$) the numbers $$k_1, \dots, k_l, k_1+k_l, k_2+k_l, \dots, k_{l-1}+k_l, k_1 + k_{l-1} +k_l, \dots, k_{l-2} + k_{l-1}+k_l, \dots, k_1+\dots + k_l $$ \noindent are different and all belong to $S_K$. \section{Proof of Theorem \ref{theorem:main1}} Let $A$ be a zero-sum-free subset of ${\mathbf Z}_p$ with size $n(p)$. In fact, as there is no danger of misunderstanding, we will write $n$ instead of $n(p)$. We start with a few simple observations. Consider the partition $A= A' \cup A^{''}$ provided by Theorem \ref{lemma:main1}. Without loss of generality, we can assume that the element $b$ equals one. Thus $A^{''} \subset [1, (p-1)/2]$ and the sum of its elements is less than $p$. Let $I_n:=[1,n]$ be the set of the first $n$ positive integers. We first show that most of the elements of $A^{''}$ belong to $I_n$. \begin{lemma} \label{lemma:simple1} $|A^{''} \cap I_n | \ge n - O(n / \log n). $ \end{lemma} \begin{proof} By the definition of $n$ and the property of $A^{''}$, $$\sum_{i=1}^n i \ge p > \sum_{a \in A^{''}} a. $$ \noindent Assume that $A^{''}$ has $l$ elements in $I_n$ and $k$ elements outside. Then $$ \sum_{a \in A^{''}} a \ge \sum_{i=1}^l i + \sum_{j=1}^k (n+j). $$ \noindent It follows that $$\sum_{i=1}^n i > \sum_{i=1}^l i + \sum_{j=1}^k (n+j), $$ \noindent which, after a routine simplification, yields $$ (l+n+1) (n-l) > (2n+k) k. $$ \noindent On the other hand, $n \ge k+l= |A^{''}| \ge n-O(n/\log^2n)$, thus $n-l =k+O(n/\log^2 n)$ and $n+l+1 \le 2n-k+1$. So there is a constant $c$ such that $$(2n-k+1) (k+ cn/\log^2 n) > (2n+k) k, $$ \noindent or equivalently $$\frac{cn}{k \log^2 n } > \frac{k+1}{2n-k+1}. $$ \noindent Since $2n-k+1\le 2n+1$, a routine consideration shows that $k^2 \log^2 n =O(n^2)$ and thus $k= O(n/\log n)$, completing the proof. \end{proof} The above lemma shows that most of the elements of $A^{''}$ (and $A$) belong to $I_n$. Let $A_1 = A \cap I_n$. 
It is trivial that $$|A_1| \ge |A^{''} \cap I_n| = n -O(n /\log n). $$ \noindent Let $A_2 = A \backslash A_1$. We have $$t := |I_n \backslash A_1| = |A_2| =|A| -|A_1| = O(n /\log n). $$ \noindent Next we show that $S_{A_1}$ contains a very long interval. Set $I:= [2t+3, (n+1) (\lfloor n/2 \rfloor -t-1)]$. The length of $I$ is $(1-o(1))p$; thus $I$ almost covers ${\mathbf Z}_p$. \begin{lemma} \label{lemma:simple2} $I \subset S_{A_1}.$ \end{lemma} \begin{proof} We need to show that every element $x$ of this interval can be written as a sum of distinct elements of $A_1$. There are two cases: {\bf Case 1.} $2t+3 \le x \le n.$ In this case $A_1$ contains at least $x-1-t \ge (x+1)/2$ elements in the interval $[1,x-1]$. This guarantees that there are two distinct elements of $A_1$ adding up to $x$. {\bf Case 2.} $x= k(n+1) +r$ for some $1 \le k \le \lfloor n/2 \rfloor -t-2$ and $0 \le r \le n+1$. First, notice that since $|A_1|$ is very close to $n$ (in fact it is enough to have $|A_1|$ slightly larger than $2n/3$ here), one can find three distinct elements $a,b,c \in A_1$ such that $a+b+c = n+1+r$. Consider the set $A_1'= A_1\backslash \{a,b,c\}$. We will represent $x-(n+1+r) =(k-1) (n+1)$ as a sum of distinct elements of $A_1'$. Notice that there are exactly $\lfloor n/2 \rfloor$ ways to write $n+1$ as a sum of two different positive integers. We discard a pair if (at least) one of its two elements is not in $A_1'$. Since $|A'_1|= n-t-3$, we discard at most $t+3$ pairs. So there are at least $\lfloor n/2 \rfloor -t-3$ different pairs $(a_i,b_i)$ where $a_i, b_i \in A_1'$ and $a_i+b_i =n+1$. Thus, $(k-1)(n+1)$ can be written as the sum of $k-1$ of these pairs. Finally, $x$ can be written as the sum of $a,b,c$ together with these pairs. \end{proof} Now we investigate the set $A_2= A\backslash A_1$. This is the collection of elements of $A$ outside the interval $I_n$. Since $A$ is zero-sum-free, $0 \notin A_2 +I $ thanks to Lemma \ref{lemma:simple2}. 
It follows that $$A_2 \subset {\mathbf Z}_p \backslash (I_n \cup (-I) \cup \{0\}) \subset J_1 \cup J_2, $$ \noindent where $J_1:= [-2t-2,-1]$ and $J_2:= [(n+1), p -(n+1)(\lfloor n/2 \rfloor -t-1)] =[(n+1),q]$. We set $B:=A_2 \cap J_1$ and $C:=A_2 \cap J_2$. \begin{lemma}\label{lemma:simple3} $S_B \subset J_1.$ \end{lemma} \begin{proof} Assume otherwise. Then there is a subset $B'$ of $B$ such that $\sum_{a \in B'} a \le -2t-3$ (here the elements of $B$ are viewed as negative integers between $-2t-2$ and $-1$). Among such $B'$, take one where $\sum_{a \in B'} a$ has the smallest absolute value. For this $B'$, $-4t-4 \le \sum_{a \in B'} a \le -2t-3$. On the other hand, by Lemma \ref{lemma:simple2}, the interval $[2t+3, 4t+4]$ belongs to $S_{A_1}$. This implies that $0 \in S_{A_1} + S_B \subset S_A$, a contradiction. \end{proof} Lemma \ref{lemma:simple3} implies that $\sum_{a \in B} |a| \le 2t+2$, which yields \begin{equation} \label{boundonB} |B| \le 2 (t+1)^{1/2}. \end{equation} Set $s:=|C|$. We have $s \ge t - 2 (t+1)^{1/2}$. Let $c_1 < \dots < c_s$ be the elements of $C$ and $h_1 < \dots < h_t$ be the elements of $I_n \backslash A_1$. By the definition of $n$, $\sum_{i=1}^n i > p > \sum_{i=1}^{n-1} i$. Thus, there is a unique $h \in I_n$ such that \begin{equation} \label{missingh} p= 1+\dots + (h-1) +(h+1) + \dots + n. \end{equation} \noindent A quantity which plays an important role in what follows is $$ D:= \sum_{i=1}^s c_i -\sum_{j=1}^{t} h_j. $$ \noindent Notice that if we replace the $h_j$ by the $c_i$ in \eqref{missingh}, we represent $p+D$ as a sum of distinct elements of $A$: \begin{equation} \label{missingh1} p+D = \sum_{a\in X, X \subset A} a. \end{equation} \noindent The leading idea now is to try to cancel $D$ by throwing away a few elements from the right-hand side, or adding a few negative elements (of $A$), or both. 
If this were always possible, then we would have a representation of $p$ as a sum of distinct elements of $A$ (in other words, $0 \in S_A$), a contradiction. To conclude the proof of Theorem \ref{theorem:main1}, we are going to show that the only case when this is not possible is when $p = n(n+1)/2-1$ and $A=\{-2,1,3,4, \dots, n \}$. We consider two cases: {\bf Case 1. $h \in A_1$.} Set $A_1'= A_1 \backslash \{h \}$. Applying Lemma \ref{lemma:simple2} to $A_1'$, we conclude that $S_{A_1'}$ contains the interval $I'=[2(t+1)+3, (n+1)(\lfloor n/2 \rfloor -t-2)]$. \noindent \begin{lemma}\label{lemma:simple6} $D < 2(t+1)+3. $ \end{lemma} \begin{proof} Assume $D \ge 2(t+1)+3$. Notice that the largest element in $J_2$ (and thus in $C$) is less than the length of $I'$. So by removing the $c_i$ one by one from $D$, one can obtain a sum $D'= \sum_{i=1}^{s'} c_i - \sum_{j=1}^{t} h_j$ which belongs to $I'$, for some $s' \le s$. This implies $$\sum_{i=1}^{s'} c_i = \sum_{j=1}^t h_j + \sum_{a \in X} a $$ \noindent for some subset $X$ of $A_1'$. Since $h \notin A_1'$, the right-hand side is a subsum of the right-hand side of \eqref{missingh}. Let $Y$ be the collection of the missing elements (from the right-hand side of \eqref{missingh}). Then $Y \subset A_1$ and $\sum_{i=1}^{s'} c_i + \sum_{a \in Y} a = p$. On the other hand, the left-hand side belongs to $S_{A_1} + S_{A_2} \subset S_A$. It follows that $0 \in S_A$, a contradiction. \end{proof} \noindent Now we take a close look at the inequality $D < 2(t+1)+ 3$. First, observe that since $A$ is zero-sum-free, $-S_B \subset \{h_1, \dots, h_t \}$. By Lemma \ref{lemma:simple3}, $\sum_{a \in B} |a| \le 2t+2 <p$. As $B$ has $t-s$ elements, by Lemma \ref{lemma:simple5}, $S_B$ has at least $(t-s)(t-s+1)/2$ elements. It follows that $$\sum_{i=1}^t h_i \le (2t+2) + \sum_{j=0}^ {(t-(t-s)(t-s+1)/2) +1}(n- j). $$ \noindent On the other hand, as all elements of $C$ are larger than $n$, $$\sum_{i=1}^s c_i \ge \sum_{i=1}^s (n+i). 
$$ \noindent It follows that $D$ is at least $$ \sum_{i=1}^s (n+i) - (2t+2) - \sum_{j=0} ^{ (t-(t-s)(t-s+1)/2) +1} (n-j) . $$ \noindent If $t-s \ge 2$, then $s > t- (t-s)(t-s+1)/2$, so the last formula is of order $\Omega (n) \gg t$, and thus $D \gg 2(t+1)+3$, a contradiction. Therefore, $t-s$ is either $0$ or $1$. If $t-s=0$, then $D =\sum_{i=1}^t c_i -\sum_{i=1}^t h_i \ge t^2$. This is larger than $2t+5$ if $t \ge 4$. Thus, we have $t=0,1,2,3$. \begin{itemize} \item $t=0.$ In this case $A=I_n$ and $0 \in S_A$. \item $t=1$. In this case $A= (I_n \backslash \{h_1 \}) \cup \{c_1\}$. If $c_1 -h_1 \neq h$, then we could substitute $c_1$ for $h_1 + (c_1-h_1)$ in \eqref{missingh} and have $0 \in S_A$. This means that $h=c_1-h_1$. Furthermore, $h < 2t+5=7$ so both $c_1$ and $h_1$ are close to $n$. If $h \ge 3$, $$p= \sum_{i=1}^{h-1} i +\sum_{j=h+1}^n j = \sum_{i=2}^{h-2} i + \sum_{h+1 \le j \le n, j \neq h_1} j + c_1.$$ Similarly, if $h=1$ then $$p =1 + \sum_{3 \le j \le n, j \neq h_1} j + c_1, $$ while if $h=2$ then $$p = \sum_{1 \le j \le n,\, j \notin \{4, h_1\}} j + c_1. $$ \item $t > 1$. Since $D < 2t +5$, $h_1, \dots, h_t$ are all larger than $n-2t-4$. As $p$ is sufficiently large, we can assume $n \ge 4t +10$, which implies that $[1, 2t+5] \subset A_1$. If $h \neq 1$, then it is easy to see that $[3, 2t+5] \subset S_{A_1 \backslash \{h\}}$. As $t >1$, $D \ge t^2 \ge 4$ and can be represented as a sum of elements in ${A_1 \backslash \{h\}}$. Omitting these elements from \eqref{missingh1}, we obtain a representation of $p$ as a sum of elements of $A$. The only case left is $h=1$ and $D=4$. But $D$ can equal $4$ if and only if $t=2$, $c_1=n+1, c_2 = n+2, h_1=n-1, h_2=n$. In this case, we have $$p= \sum_{i=2}^n i = 2+3 + \sum_{5 \le i \le n+2,\, i \notin \{n-1, n\}} i .$$ \end{itemize} Now we turn to the case $t-s=1$. In this case $B$ has exactly one element in the interval $[-2t-2,-1]$ (modulo $p$) and $D$ is at least $s^2-(2t+2)= (t-1)^2 -(2t+2)$. Since $D < 2t +5$, we conclude that $t$ is at most 6.
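The two bounds on $t$ used above are elementary; as a hedged check (our own, not part of the proof), one can confirm that $t^2 < 2t+5$ forces $t \le 3$ and that $(t-1)^2-(2t+2) < 2t+5$ forces $t \le 6$:

```python
# Our arithmetic check of the two case bounds used in the proof.
ts_equal = [t for t in range(100) if t * t < 2 * t + 5]                     # case t - s = 0
ts_one = [t for t in range(100) if (t - 1) ** 2 - (2 * t + 2) < 2 * t + 5]  # case t - s = 1
assert max(ts_equal) == 3 and max(ts_one) == 6
```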
Let $-b$ be the element in $B$ (where $b$ is a positive integer). We have $b \le 2t+2 \le 14$. $A_1$ misses exactly $t$ elements from $I_n$; one of them is $b$ and all others are close to $n$ (at least $n-(2t+4)$). Using this information, we can reduce the bound on $b$ further. Notice that the whole interval $[1,b-1]$ belongs to $A_1$. So if $b \ge 3$, then there are two elements $x,y$ of $A_1$ such that $x+y=b$. Then $x+y+(-b)=0$, meaning $0 \in S_A$. It thus remains to consider $b=1$ or $2$. Now we consider a few cases depending on the value of $D$. Notice that $D \ge s^2-b \ge -2$. In fact, if $s \ge 2$ then $D \ge 2$. Furthermore, if $s=0$, then $t=1$ and $D= -h_1 =-b$. \begin{itemize} \item $D \ge 5$. Since $A_1$ misses at most one element in $[1,D]$ (the possible missing element is $b$), there are two elements of $A_1$ adding up to $D$. Omitting these elements from \eqref{missingh1}, we obtain a representation of $p$ as a sum of distinct elements of $A$. \item $D=4$. If $b =1$, write $p= \sum_{a \in X, a \neq 3 } a + (-b)$. If $b=2$, then $p= \sum_{a \in X, a \neq 1,3} a $. (Here and later $X$ is the set in \eqref{missingh1}.) \item $D = 3$. Write $p= \sum_{a \in X, a \neq 3-b} a + (-b)$. \item $D=2$. If $b=1$, then $p= \sum_{a \in X, a \neq 2} a$. If $b=2$, then $p= \sum_{a\in X} a +(-2). $ \item $D=1$. If $b=1$, then $p= \sum_{a \in X} a +(-1)$. If $b=2$, then $p= \sum_{a\in X, a \neq 1} a. $ \item $D=0$. In this case \eqref{missingh1} already provides a representation of $p$. \item $D=-1$. In this case $s < 2$. But since $h \neq b$, $s$ cannot be $0$. If $s=1$ then $b=2$ and $c_1=n+1$, $h_1= n$. By \eqref{missingh}, we have $p= \sum_{i=1}^{h-1} i + \sum_{j=h+1}^n j$ and so $$p +(h-1) = \sum_{1 \le i \le n+1, i \notin \{2, n\}} i $$ where the right-hand side consists of elements of $A$ only. If $h-1 \in A$ then we simply omit it from the sum. If $h-1 \notin A$, then $h-1=2$ and $h=3$.
In this case, we can write $$p= \sum_{1 \le i \le n+1, i \notin \{2, n\}} i +(-2).$$ \item $D=-2$. This could only occur if $s=0$ and $b=2$. In this case $A= \{-2, 1,3, \dots, n \}$. If $h=1$, then $p=\sum_{i=2}^n i = n(n+1)/2 -1$ and we end up with the only exceptional set. If $h \ge 3$, then $p+(h-2) =\sum_{1 \le i \le n, i \neq 2} i$. If $h \neq 4$, then we can omit $h-2$ from the right-hand side to obtain a representation of $p$. If $h=4$, then we can write $$p= \sum_{1 \le i \le n, i \neq 2} i + (-2). $$ \end{itemize} \vskip2mm {\bf Case 2. $h \notin A$}. In this case we can consider $A_1$ instead of $A_1'$. The consideration is similar and actually simpler. Since $h \notin A$, we only need to consider $D:= \sum_{i=1}^s c_i - \sum_{1 \le j \le t, h_j \neq h} h_j$. Furthermore, as $h \notin A$, if $s=0$ we should have $h=b$, and this forbids any exceptional structure in the case $D=-2$. The details are left as an exercise. \section{Proof of Theorem \ref{theorem:main2}} \noindent We follow the same terminology used in the previous section. Assume that $A$ is zero-sum-free and $|A|=\lambda n=\lambda (2p)^{1/2}$ with some $1\ge \lambda\ge .99.$ Furthermore, assume that the element $b$ in Theorem \ref{lemma:main1} is one. We will use the notation of the previous proof. Let the {\it core} of $A$ be the collection of all pairs $(a,a') \in A \times A$ with $a \neq a'$ and $a+a'=n+1$. Theorem \ref{theorem:main2} follows directly from the following two lemmas. \begin{lemma} \label{lemma:main2-1} The core of $A$ has size at least $.6n$. \end{lemma} \begin{lemma} \label{lemma:main2-2} Let $A$ be a zero-sum-free set whose core has size at least $(1/2+{\epsilon})n$ (for some positive constant ${\epsilon}$). Then $$\sum_{a \in A, a < p/2} a \le p + \frac{1}{{\epsilon}} (n+1) $$ \noindent and $$ \sum_{a \in A, a > p/2} \|a \| \le (\frac{1}{{\epsilon}}+1)n. $$ \end{lemma} \begin{proof} (Proof of Lemma \ref{lemma:main2-1}.)
Following the proof of Lemma \ref{lemma:simple1}, with $l=|A''\cap I_n|$ and $k=|A''\setminus I_n|$, we have $$(l+n+1)(n-l)>(2n+k)k.$$ \noindent On the other hand, $n\ge k+l=|A''|=|A|-O(n/\log^2 n),$ thus $n-l=k+n-|A|+O(n/\log^2 n)=(1-\lambda+o(1))n+k$ and $n+l\le(1+\lambda)n-k.$ Putting all these together with the fact that $\lambda$ is quite close to 1, we can conclude that $k < .1n$. It follows (rather generously) that $l=\lambda n-k-O(n/\log^2 n)>.8n$. The above shows that most of the elements of $A$ belong to $I_n$, as $$|A_1|=|A\cap I_n|\ge |A''\cap I_n|> .8n.$$ \noindent Split $A_1$ into two sets, $A_1'$ and $A_1'':=A_1\setminus A_1'$, where $A_1'$ contains all elements $a$ of $A_1$ such that $n+1-a$ also belongs to $A_1$. Recall that $A_1$ has at least $\lfloor n/2 \rfloor-t$ pairs $(a_i,b_i)$ satisfying $a_i+b_i=n+1$. This guarantees that $|A_1'|\ge 2(\lfloor n/2 \rfloor-t)\ge .6n$. On the other hand, $A_1'$ is a subset of the core of $A$. The proof is complete. \end{proof} \begin{proof} (Proof of Lemma \ref{lemma:main2-2}.) Abusing the notation slightly, we use $A_1'$ to denote the core of $A$. We have $|A_1'|\ge (1/2+\epsilon)n$. \begin{lemma}\label{lemma:simple8} Any $l\in [n(1/\epsilon+1),n(1/\epsilon+1) + n]$ can be written as a sum of $2(1/\epsilon+1)$ distinct elements of $A_1'$. \end{lemma} \begin{proof} First notice that for any $m$ in $I_{\epsilon}=[(1-\epsilon)n,(1+\epsilon)n]$, the number of pairs $(a,b)\in {A_1'}^2$ satisfying $a<b$ and $a+b=m$ is at least $\epsilon n/2$. Next, observe that any $k$, $k\in [0,n]$, is a sum of $1/\epsilon +1$ integers (not necessarily distinct) from $[0,\epsilon n]$. Consider $l$ from $[n(1/\epsilon+1),n(1/\epsilon+1) + n]$; we can represent $l-n(1/\epsilon+1)$ as a sum $a_1+\cdots+a_{1/\epsilon+1}$ where $0\le a_1,\dots, a_{1/\epsilon+1}\le \epsilon n$.
Thus $l$ can be written as a sum of $1/\epsilon +1$ elements (not necessarily distinct) of $I_{\epsilon}$, as $l=(n+a_1)+\cdots+(n+a_{1/\epsilon+1}).$ Now we represent each summand in the above representation of $l$ by two elements of $A_1'$. By the first observation, the number of available pairs is much larger than the number of summands, so we can choose the pairs so that all of their elements are distinct. \end{proof} \noindent Recall that $A_1'$ consists of pairs $(a_i',b_i')$ where $a_i'+b_i'=n+1$, so $$\sum_{a'\in A_1'}a'=(n+1)|A_1'|/2.$$ \begin{lemma}\label{lemma:simple9} $I':=[n(1/\epsilon+1), \sum_{a'\in A_1'}a'-(n+1)/\epsilon]\subset S_{A_1'}.$ \end{lemma} \begin{proof} Lemma \ref{lemma:simple8} implies that for each $x \in [n(1/\epsilon+1),n(1/\epsilon+1) + n]$ there exist distinct elements $a_1',\dots, a_{2(1/\epsilon+1)}'\in A_1'$ such that $x=\sum_{i=1}^{2(1/\epsilon+1)}a_i'$. We discard all $a_i'$ and $(n+1)-a_i'$ from $A_1'$. Thus there remain exactly $|A_1'|/2-2(1/\epsilon+1)$ different pairs $(a_i'',b_i'')$ where $a_i''+b_i''=n+1$. The sums of these pairs represent all numbers of the form $k(n+1)$ for any $0\le k \le |A_1'|/2-2(1/\epsilon+1)$. We thus obtain a representation of $x+k(n+1)$ as a sum of different elements of $A_1'$, in other words $x+k(n+1)\in S_{A_1'}$. As $x$ varies in $[n(1/\epsilon+1),n(1/\epsilon+1) + n]$ and $k$ varies in $[0,|A_1'|/2-2(1/\epsilon+1)]$, the proof is complete. \end{proof} \noindent Let $A_2=A\setminus A_1$ and set $A_2':= A_2 \cap [0,(p-1)/2]$ and $A_2'' = A_2 \backslash A_2'$. We are going to view $A_2''$ as a subset of $[-(p-1)/2,-1]$. \noindent We will now invoke Lemma \ref{lemma:simple4} several times to conclude Lemma \ref{lemma:main2-2}. First, it is trivial that the length of $I'$ is much larger than $n$, whilst elements of $A_1$ are positive integers bounded by $n$.
Thus, Lemma \ref{lemma:simple4} implies that $$I'':=[n(1/\epsilon+1),\sum_{a\in A_1}a-(n+1)/\epsilon ]\subset S_{A_1}.$$ Note that the length of $I''$ is greater than $(p-1)/2$. Indeed $n \approx (2p)^{1/2}$ and $$|I''|=\sum_{a\in A_1}a-(n+1)/\epsilon -n(1/\epsilon+1) \ge \sum_{a\in A_1'}a- O(n)$$ $$\ge (1/2+\epsilon)n(n+1)/2-O(n)>(p-1)/2.$$ Again, Lemma \ref{lemma:simple4} (applied to $I''$) yields that $$[n(1/\epsilon+1),\sum_{a\in A_1\cup A_2'}a-(n+1)/\epsilon ]\subset S_{A_1\cup A_2'}$$ and $$[\sum_{a\in A_2''}a+n(1/\epsilon+1),\sum_{a\in A_1}a-(n+1)/\epsilon ]\subset S_{A_1\cup A_2''}.$$ The union of these two long intervals belongs to $S_A$: $$[\sum_{a\in A_2''}a+n(1/\epsilon+1),\sum_{a\in A_1\cup A_2'}a-(n+1)/\epsilon ]\subset S_A.$$ On the other hand, $0\notin S_A$ implies $$\sum_{a\in A_2''}a+n(1/\epsilon+1)>0$$ and $$\sum_{a\in A_1\cup A_2'}a-(n+1)/\epsilon < p.$$ \noindent This completes the proof of Lemma \ref{lemma:main2-2}. \end{proof} \section {Sketch of the proof of Theorem \ref{theorem:main3}} Assume that $A$ is incomplete and $|A|=\lambda p^{1/2}$ with some $2\ge \lambda\ge 1.99.$ Furthermore, assume that the element $b$ in Theorem \ref{lemma:main2} is one. We are going to view ${\mathbf Z}_p$ as $[-(p-1)/2,(p-1)/2]$. \noindent To simplify the presentation, we introduce some new notation: $n=\lfloor p^{1/2}\rfloor$, $A_1:=A\cap [-n,n], A_1':=A\cap [0,n], A_1'':=A\cap [-n,-1], A_2':=A\cap [n+1,(p-1)/2], A_2'':=A\cap [-(p-1)/2,-(n+1)], t_1':=|A_1'|, t_1'':=|A_1''|, t_1:=|A_1|=t_1'+t_1''$. \noindent Notice that $|A''|$ (in Theorem \ref{lemma:main2}) is sufficiently close to the upper bound. The following holds. \begin{lemma}\label{lemma:main3-1} Most of the elements of $A''$ (and hence of $A$) belong to $[-n,n]$: \begin{itemize} \item both $t_1'$ and $t_1''$ are larger than $(1/2+\epsilon)n,$ \item $t_1$ is larger than $(2^{1/2}+\epsilon)n$ \end{itemize} with some positive constant $\epsilon$.
\end{lemma} \noindent As a consequence, both $S_{A\cap [-n,-1]}$ and $S_{A\cap [1,n]}$ contain long intervals thanks to the following lemma, which is a direct application of Lemma \ref{lemma:simple8} and the argument provided in Lemma \ref{lemma:simple2}. \begin{lemma}\label{lemma:main3-2} If $X$ is a subset of $[1,n]$ of size at least $(1/2+\epsilon)n$, then $$[(n+1)(1/\epsilon+1),(n+1)(n/2-t-c_{\epsilon})]\subset S_X$$ where $t=n-|X|$ and $c_{\epsilon}$ depends only on $\epsilon$. \end{lemma} \noindent Now we can invoke Lemma \ref{lemma:simple4} several times to conclude Theorem \ref{theorem:main3}. Lemma \ref{lemma:main3-2} implies $$I':=[(n+1)(1/\epsilon+1),(n+1)(n/2-t_1'-c_{\epsilon})]\subset S_{A_1'}$$ and $$I'':=[-(n+1)(n/2-t_1''-c_{\epsilon}),-(n+1)(1/\epsilon+1)]\subset S_{A_1''}.$$ Lemma \ref{lemma:simple4} (applied to $I'$ and $A_1''$; $I''$ and $A_1'$ respectively) yields $$[\sum_{a_1''\in A_1''}a_1''+(n+1)(1/\epsilon+1),(n+1)(n/2-t_1'- c_{\epsilon})]\subset S_{A_1}$$ and $$[-(n+1)(n/2-t_1''-c_{\epsilon}),\sum_{a_1'\in A_1'}a_1'-(n+1)(1/\epsilon+1)]\subset S_{A_1},$$ which gives $$I:=[\sum_{a_1''\in A_1''}a_1''+(n+1)(1/\epsilon+1),\sum_{a_1'\in A_1'}a_1'-(n+1)(1/\epsilon+1)]\subset S_{A_1}.$$ \noindent Note that the length of $I$ is greater than $(p-1)/2$. Again, Lemma \ref{lemma:simple4} (applied to $I$ and $A_2'$, $I$ and $A_2''$ respectively) implies $$[\sum_{a''\in A_1''\cup A_2''}a''+(n+1)(1/\epsilon+1),\sum_{a_1'\in A_1'}a_1'-(n+1)(1/\epsilon+1)] \subset S_A$$ and $$[\sum_{a_1''\in A_1''}a_1''+(n+1)(1/\epsilon+1), \sum_{a'\in A_1'\cup A_2'}a'-(n+1)(1/\epsilon+1)] \subset S_A.$$ The union of these two intervals belongs to $S_A$, $$[\sum_{a''\in A_1''\cup A_2''}a''+(n+1)(1/\epsilon+1),\sum_{a'\in A_1'\cup A_2'}a'-(n+1)(1/\epsilon+1)]\subset S_{A}.$$ On the other hand, $S_A\neq {\mathbf Z}_p$ implies $$\sum_{a'\in A_1'\cup A_2'}a'-\sum_{a''\in A_1''\cup A_2''}a''-2(n+1)(1/\epsilon+1)<p.$$ In other words, $$\sum_{a \in A} \| a\| \le p +O(p^{1/2}).$$
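To illustrate the flavour of Lemma \ref{lemma:main3-2} with a brute-force check of our own (one small concrete instance; the lemma itself is asymptotic), the subset sums of a dense subset of $[1,n]$ already fill everything they can:

```python
# Ours: for X = [1, 20] minus two elements, the subset sums of X (empty sum
# included) fill the whole range [0, sum(X)].
n = 20
X = [i for i in range(1, n + 1) if i not in (7, 13)]
mask = 1  # bit s is set iff s is a subset sum of X
for a in X:
    mask |= mask << a
total = sum(X)  # 190
assert all((mask >> s) & 1 for s in range(total + 1))
```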
\section{Introduction and context of the paper} The idea of studying ``network motifs'' in biological systems, namely simple, frequently occurring, structures from which some properties of the system may be inferred, has attracted considerable recent interest (for example \cite{alon2007,Grochow2007}). The reason is natural: systems biology proceeds dually by elucidating the functioning of subsystems while at the same time attempting to fit together these pieces to explain large-scale behaviours in biological systems. Crucial to this study is a need to know how behaviours in a dynamical system are affected by embedding this system in some larger system in well-defined ways. Some rigorous results in this area come from the literature on hyperbolicity and structural stability (\cite{Ano95}, for example), and on monotone control systems (\cite{angeli}, for example). A closely related type of question is when a behaviour known to occur in some member of a {\em family} of dynamical systems $\mathcal{F}$ can be guaranteed to occur in some member of a new family $\mathcal{F}'$ related to $\mathcal{F}$ in some natural sense. Our goal is understanding the inheritance of {\em multiple positive nondegenerate equilibria} (MPNE) and {\em multiple positive linearly stable equilibria} (MPSE) in chemical reaction networks (CRNs) which form the main component of most biological models. The literature on multiple equilibria in CRNs is extensive and we do not attempt a survey here; the recent papers \cite{muellerAll}, \cite{banajipantea}, and \cite{Joshi.2015aa} provide some references to key strands of this work. But most work, including our own, has focussed on conditions for {\em precluding} multistationarity; in some ways the opposite of the question treated here. An exception is the paper by Joshi and Shiu \cite{joshishiu}, which inspired this paper. The authors treated questions of inheritance of multistationarity primarily, but not exclusively, focussed on fully open CRNs. 
They highlighted that in studying multistationarity (and potentially other nontrivial behaviours in CRNs), we need to identify a relevant partial order on the set of all CRNs, and then search for minimal elements w.r.t. this partial order, which they termed {\em atoms}. This partial order $\leq$ should be such that if some CRN $\mathcal{R}$ displays the behaviour of interest, and $\mathcal{R} \leq \mathcal{R}'$, then $\mathcal{R}'$ must display this behaviour. In the context of multistationarity in fully open CRNs, and with certain kinetic assumptions, there is a nice result: MPNE and MPSE are inherited under the induced subnetwork partial order. This result (Corollary~\ref{coropeninduced} below) was largely known to Joshi and Shiu (Corollary~4.6 of \cite{joshishiu}). In fact, the results here allow considerably stronger claims about the inheritance of MPNE and MPSE in fully open networks (see Remark~\ref{remFObest}). Results about fully open CRNs do not generalise easily to arbitrary CRNs: building larger CRNs from smaller ones in natural ways such as adding reactions can both introduce and destroy multistationarity. Nevertheless, it is possible to formulate several operations which together define a partial order $\preceq$ on CRNs such that if $\mathcal{R}$ displays MPNE (resp., MPSE) and $\mathcal{R} \preceq \mathcal{R}'$ then $\mathcal{R}'$ displays MPNE (resp., MPSE), and it is this task which we begin in this paper. For example, we can add new reactions involving some new species while preserving MPNE and MPSE provided certain conditions are met (Theorem~\ref{thmblockadd}). Feliu and Wiuf \cite{feliuwiufInterface2013} show that multistationarity can survive when a reaction is ``split'' with the introduction of a new intermediate species. We have a result involving splitting reactions and inserting intermediate complexes (Theorem~\ref{thmintermediates}) which reproduces and generalises some elements of their results. 
Availability of a partial order relevant to MPNE or MPSE couples naturally with the task of characterising minimal networks admitting these behaviours w.r.t. this partial order. Although minimal networks need not be small, in the sense of having few species, few reactions, or reactions of low molecularity, a first step is naturally to identify {\em small} minimal networks with the desired behaviour. We do not undertake this task here, but it was begun for the induced subnetwork partial order in \cite{Joshi.2013aa} and continued in the recent work \cite{JoshiShiu2016}. Our proofs largely rely on the implicit function theorem. We do not need degree theory \cite{soule, Conradi.2016aa}, or homotopy theory \cite{CraciunHeltonWilliams}, or any nontrivial algebra, even though many of the resulting claims can be seen as claims about the zeros of polynomial equations. Our local approach, though powerful, necessarily has limitations discussed in the conclusions. The paper is laid out as follows. After introducing some background on CRNs in Section~\ref{secbackground} we present an extended example (Section~\ref{secextended}) beginning with a CRN which displays MPNE and illustrating some of the many ways one might ``build up'' a reaction network, some of which will be proved to preserve MPNE and some of which destroy it. This foreshadows the theorems to follow, while also highlighting their limitations. We then present in Section~\ref{secthms} the results, some known and some new, but defer the proofs until Section~\ref{secproofs}, after illustrating application of the results on a biologically important CRN in Section~\ref{secbioexample}. Finally, we present some discussion and conclusions. A consequence of the analysis in Section~\ref{secbioexample} is previewed in Figure~\ref{fig:MAPK}. 
\begin{figure}[h] \begin{center} \resizebox{8cm}{!}{ \begin{tikzpicture} [scale=2.4, place/.style={circle,draw=blue!50,fill=blue!20,thick,inner sep=0pt,minimum size=5.5mm}, enzyme/.style={circle,draw= blue!50,fill=white,thick,inner sep=0pt,minimum size=5.5mm}, pre/.style={<-,shorten <=1pt,>=stealth',semithick}, post/.style={->,shorten >=1pt,>=stealth',semithick}] \node (MAPK) at (-1, 0) {\bf{MAPK}}; \node (MAPK-pp) at (1,0) {\bf{MAPK-pp}}; \node (MAPK-p) at (0,0) {\bf{MAPK-p}} edge [bend left = 40, post,ultra thick] (MAPK) edge [bend right = 40, pre,ultra thick] (MAPK-pp) edge [bend right = 40, pre,ultra thick] (MAPK) edge [bend left = 40, post,ultra thick] (MAPK-pp); \node (MKK) at (-2, 1) {MKK}; \node (MKK-pp) at (0,1) {\bf{MKK-pp}}; \node (MKK-p) at (-1,1) {MKK-p} edge [bend left = 40, post] (MKK) edge [bend right = 40, pre] (MKK-pp) edge [bend right = 40, pre] (MKK) edge [bend left = 40, post] (MKK-pp); \node (MKKK) at (-2, 2) {MKKK}; \node (MKKK-p) at (-1,2) {MKKK-p} edge [bend left = 40, post] (MKKK) edge [bend right = 40, pre] (MKKK); \node (Ras/MKKKK) at (-1.5, 2.8) {E$_1$}; \node (F1) at (-1.46, 1.67) {$\text{F}_1$}; \node (F2) at (-1.46, .67) {$\text{F}_2$}; \node (F22) at (-.46, .67) {$\text{F}_2$}; \node (F3) at (-.46, -.35) {$\text{F}_{3}$}; \node (F32) at (.54, -.35) {$\text{F}_{3}$}; \draw [semithick, ->,>=stealth', rounded corners=1.5mm, ultra thick] (0,.9) -- (0,0.55) -- (0.5,0.55) -- (0.5,0.27); \draw [semithick, ->,>=stealth', rounded corners=1.5mm, ultra thick] (0,.9) -- (0,0.55) -- (-0.5,0.55) -- (-0.5,0.27); \draw [semithick, ->,>=stealth', rounded corners=1.5mm] (-1,1.9) -- (-1,1.55) -- (-0.5,1.55) -- (-0.5,1.27); \draw [semithick, ->,>=stealth', rounded corners=1.5mm] (-1,1.9) -- (-1,1.55) -- (-1.5,1.55) -- (-1.5,1.27); \draw [semithick, ->,>=stealth', rounded corners=1mm] (-1.5,2.7) -- (-1.5,2.27); \draw [densely dashed,semithick, ->,>=stealth', rounded corners=1.5mm] (1,0.1) -- (1,2.5) -- (-1.47,2.5) ; \end{tikzpicture} } \caption{MPSE in 
the Huang-Ferrell MAPK cascade with negative feedback \cite{Huang.1996aa} can be inferred from MPSE in the subnetwork in bold. The full analysis is carried out in Section~\ref{secbioexample}.}\label{fig:MAPK} \end{center} \end{figure} \section{Background on CRNs} \label{secbackground} We provide only a brief, informal introduction to CRNs, and the reader is referred to \cite{banajipantea, banajiCRNcount, banajiCRNosci} for further detail. A {\em complex} is a formal linear combination of chemical species. Given a list of species $X = (X_1, \ldots, X_n)$, and a nonnegative integer vector $a \in \mathbb{Z}^n_{\geq 0}$, we write $a \cdot X$ for the complex $a_1X_1 + a_2 X_2 + \cdots + a_nX_n$. The zero complex $0X_1 + \cdots + 0X_n$ will be denoted $0$. An irreversible {\em reaction} is an ordered pair of complexes, the {\em source} and {\em target} complexes. A CRN consists of a set of species, and a set of irreversible reactions involving these species. A CRN involving species $(X_1, \ldots, X_n)$ is {\em fully open} if it includes all the inflow-outflow reactions $0 \rightleftharpoons X_i$ ($i = 1, \ldots, n$). The {\em fully open extension} of a CRN $\mathcal{R}$ is created by adjoining to $\mathcal{R}$ any such reactions which are absent from $\mathcal{R}$. \begin{remark}[Forbidden CRNs] \label{remforbid} Although it is not required for any results here, and so is not assumed, it is common to forbid from the definition of a CRN reactions with the same complex on left and right hand sides, reactions which figure more than once, and species which participate in no reactions. \end{remark} The {\em Petri net graph (PN graph)} of a CRN $\mathcal{R}$, denoted $PN(\mathcal{R})$, is an edge-weighted bipartite digraph \cite{angelipetrinet}, closely related to other bipartite graphs associated with CRNs, notably SR and DSR graphs \cite{craciun1,banajicraciun2}.
Its two vertex sets $V_S$ (species vertices) and $V_R$ (reaction vertices) correspond to the species and the reactions of $\mathcal{R}$, and given $u \in V_S$ and $v \in V_R$, there exists an arc $uv$ (resp., $vu$) with weight $w$ if and only if the species corresponding to $u$ occurs with stoichiometry $w$ as a reactant (resp., product) in reaction corresponding to $v$. \begin{example}[CRN and its PN graph] We illustrate below the PN graph of $X+2Y \rightarrow 3Y$, $Y \rightarrow X \rightleftharpoons 0$, with the convention that arc-weights of $1$ are omitted: \begin{center} \begin{tikzpicture}[scale=1.2] \fill[color=black] (1,0.5) circle (1.5pt); \fill[color=black] (1,-0.5) circle (1.5pt); \fill[color=black] (3,0.5) circle (1.5pt); \fill[color=black] (3,-0.5) circle (1.5pt); \node at (2,0) {$X$}; \node at (4,0) {$Y$}; \draw [<-, thick] (1.85,0.15) .. controls (1.7,0.3) and (1.5,0.5) .. (1.1,0.5); \draw [->, thick] (1.85,-0.15) .. controls (1.7,-0.3) and (1.5,-0.5) .. (1.1,-0.5); \draw [->, thick] (2.15,0.15) .. controls (2.3,0.3) and (2.5,0.5) .. (2.9,0.5); \draw [<-, thick] (3.85,0.15) .. controls (3.7,0.3) and (3.5,0.5) .. (3.1,0.5); \draw [<-, thick] (2.15,-0.15) .. controls (2.3,-0.3) and (2.5,-0.5) .. (2.9,-0.5); \draw [->, thick] (3.85,-0.15) .. controls (3.7,-0.3) and (3.5,-0.5) .. (3.1,-0.5); \draw[->, thick] (3.85,0.05) .. controls (3.5, 0.05) and (3.3, 0.2) .. (3.07,0.43); \node at (3.35,0.05) {$\scriptstyle{2}$}; \node at (3.6,0.5) {$\scriptstyle{3}$}; \end{tikzpicture} \end{center} \end{example} CRNs $\mathcal{R}_1$ and $\mathcal{R}_2$ are {\em isomorphic} if $PN(\mathcal{R}_1)$ and $PN(\mathcal{R}_2)$ are isomorphic as edge-labelled digraphs, via an isomorphism which preserves the bipartition into species and reaction vertices. In other words, the species and reactions of $\mathcal{R}_1$ can be (independently) relabelled to get $\mathcal{R}_2$. An equivalence class of labelled CRNs under isomorphism is termed an {\em unlabelled CRN}. 
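A minimal computational sketch (our own representation, not from the paper) of the PN graph of this example:

```python
# Build the arcs of the PN graph of X+2Y -> 3Y, Y -> X, X -> 0, 0 -> X.
# Arcs run species -> reaction weighted by reactant stoichiometry, and
# reaction -> species weighted by product stoichiometry.
reactions = [  # (reactants, products) as species -> stoichiometry maps
    ({"X": 1, "Y": 2}, {"Y": 3}),
    ({"Y": 1}, {"X": 1}),
    ({"X": 1}, {}),
    ({}, {"X": 1}),
]
arcs = {}
for j, (lhs, rhs) in enumerate(reactions):
    for s, w in lhs.items():
        arcs[(s, f"R{j}")] = w
    for s, w in rhs.items():
        arcs[(f"R{j}", s)] = w
assert arcs[("Y", "R0")] == 2 and arcs[("R0", "Y")] == 3  # the weights 2 and 3
```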
An {\em induced subnetwork} (or ``embedded network'' in the terminology of \cite{joshishiu}) of a (labelled) CRN $\mathcal{R}$ is a CRN whose PN graph is a vertex-induced subgraph of $PN(\mathcal{R})$, namely can be obtained by deleting some vertices and all their incident arcs from $PN(\mathcal{R})$. The definition extends naturally to unlabelled CRNs and the relationship ``is an induced subnetwork'' defines a partial order on the set of unlabelled CRNs, whether or not the definition of a CRN includes additional restrictions as in Remark~\ref{remforbid}. Isomorphism is discussed in more detail in \cite{banajiCRNcount}. In order to discuss multiple equilibria, we need a brief discussion of ODE models of CRNs. \begin{notation}[Nonnegative and positive vectors] $\mathbb{R}^n_{\geq 0}$ refers to the nonnegative orthant in $\mathbb{R}^n$, namely $\{x \in \mathbb{R}^n\,|\, x_i \geq 0,\,\,i=1, \ldots, n\}$, while $\mathbb{R}^n_{\gg 0}$ refers to the positive orthant, namely $\{x \in \mathbb{R}^n\,|\, x_i > 0,\,\,i=1,\ldots, n\}$. For $x \in \mathbb{R}^n$, $x \geq 0$ (resp., $x > 0$, resp., $x \gg 0$) will mean that $x \in \mathbb{R}^n_{\geq 0}$ (resp., $x \in \mathbb{R}^n_{\geq 0}\backslash\{0\}$, resp., $x \in \mathbb{R}^n_{\gg 0}$). A vector is nonnegative (resp., positive) if each component is nonnegative (resp., positive). \end{notation} Consider $n$ chemical species $X = (X_1, \ldots, X_n)$ with concentrations $(x_1, \ldots, x_n) \in \mathbb{R}^n_{\geq 0}$, and a CRN $\mathcal{R}$ involving $r_0$ reactions between these species. Choose some order for the reactions and let $\Gamma_{ij}$ be the net change in $X_i$ when reaction $j$ occurs. The $n \times r_0$ integer matrix $\Gamma = (\Gamma_{ij})$ is termed the {\em stoichiometric matrix} of $\mathcal{R}$ and the $j$th column of $\Gamma$ is the {\em reaction vector} of reaction $j$.
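For the same example network, the stoichiometric matrix can be assembled directly from the reactant and product stoichiometries (again our own illustrative code, not from the paper):

```python
# Gamma[i][j] = net change in species i per occurrence of reaction j, for
# X+2Y -> 3Y, Y -> X, X -> 0, 0 -> X.
species = ["X", "Y"]
reactions = [
    ({"X": 1, "Y": 2}, {"Y": 3}),
    ({"Y": 1}, {"X": 1}),
    ({"X": 1}, {}),
    ({}, {"X": 1}),
]
Gamma = [[rhs.get(s, 0) - lhs.get(s, 0) for lhs, rhs in reactions] for s in species]
assert Gamma == [[-1, 1, -1, 1], [1, -1, 0, 0]]
```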
$\mathrm{im}\,\Gamma$ is the {\em stoichiometric subspace} of $\mathcal{R}$ and the nonempty intersection of a coset of $\mathrm{im}\,\Gamma$ with the nonnegative (resp., positive) orthant is a {\em stoichiometry class} (resp., {\em positive stoichiometry class}) of $\mathcal{R}$. We say that $p,q \in \mathbb{R}^n$ are {\em compatible} if $p-q \in \mathrm{im}\,\Gamma$. In spatially homogeneous, deterministic, continuous time models, the evolution of the species concentrations $x$ is governed by the ODE: \begin{equation} \label{genCRN} \dot x = \Gamma v(x)\,, \end{equation} where $v$ is the {\em vector of reaction rates} or {\em rate vector} of the CRN. We always assume that $v$ is defined and $C^1$ on $\mathbb{R}^n_{\gg 0}$ and belongs to the class of {\em positive general kinetics} \cite{banajipantea}: in brief, on the positive orthant, the rate of an irreversible reaction (i) is positive; (ii) depends only on the concentrations of its reactants; and (iii) increases strictly with the concentration of each reactant. Positive stoichiometry classes are locally invariant for (\ref{genCRN}) for positive general kinetics, and indeed under more general assumptions. Positive general kinetics includes mass action (MA) kinetics as a special case, and in fact all our inheritance results are presented to be consistent with the assumption that all reactions have MA kinetics. The emphasis on MA kinetics is for brevity and readability: we include remarks about other classes of kinetics for which the results hold -- often requiring only minor, formal modification of the proofs. {\bf Equilibria: nondegeneracy and stability (MPNE and MPSE).} \begin{def1}[Reduced Jacobian matrices, the reduced determinant \cite{banajipantea}] \label{notationreddet} Consider a differentiable function $F\colon X\subseteq \mathbb{R}^n \to \mathbb{R}^n$ such that $\mathrm{im}\,F \subseteq S \subseteq \mathbb{R}^n$, where $S$ is some $k$-dimensional linear subspace of $\mathbb{R}^n$.
Let $M$ be some matrix whose columns are a basis for $S$, so there is a unique differentiable function $\hat{F}\colon X \to \mathbb{R}^k$ s.t. $F(x) = M\hat{F}(x)$. We term $D_MF:= (D\hat{F})M$ the {\em reduced Jacobian matrix} of $F$ w.r.t. $M$. A different choice of basis for $S$, written as the columns of a matrix $M'$, leads to a reduced Jacobian matrix $D_{M'}F$ similar to $D_MF$. When interested only in the spectral properties of such reduced Jacobian matrices, we refer to the linear operator they represent as $D_SF$. We refer to $J_SF:=\mathrm{det}(D_SF)$ as the {\em reduced determinant} of $F$ w.r.t. $S$. \end{def1} A point $p \in \mathbb{R}^n_{\gg 0}$ is {\em nondegenerate} for (\ref{genCRN}) if the Jacobian determinant of the restricted vector field $\left.\Gamma v(\cdot)\right|_{p+\mathrm{im}\,\Gamma}$ is nonzero at $p$ (see also Definition~4 in \cite{craciun2}). Equivalently, $J_{\mathrm{im}\,\Gamma}(\Gamma Dv(p)) \neq 0$. $J_{\mathrm{im}\,\Gamma}(\Gamma Dv(p))$ can be computed in various ways \cite{banajipantea}. Most simply, $J_{\mathrm{im}\,\Gamma}(\Gamma Dv(p))$ is the sum of $r \times r$ principal minors of $\Gamma Dv(p)$ where $r=\mathrm{rank}\,\Gamma$. Alternatively, following Definition~\ref{notationreddet}, we may choose $\Gamma_0$ to be any matrix whose columns are a basis for $\mathrm{im}\,\Gamma$ and define $Q$ via $\Gamma = \Gamma_0Q$. Then $J_{\mathrm{im}\,\Gamma}(\Gamma Dv(p)) = \mathrm{det}(Q Dv(p) \Gamma_0)$. A CRN evolving according to (\ref{genCRN}) with kinetics in some fixed class admits {\em multiple positive equilibria} (MPE) if there exists {\em some} choice of rate function $v$ in this class and distinct $p,q \gg 0$ satisfying $p-q \in \mathrm{im}\,\Gamma$ and $\Gamma v(p) = \Gamma v(q) = 0$. Note that it is implicit in the term ``multiple equilibria'' that the equilibria are compatible.
The CRN admits {\em multiple, positive, nondegenerate equilibria} (MPNE) if, additionally, $p$ and $q$ can be chosen to be nondegenerate, namely $J_{\mathrm{im}\,\Gamma}(\Gamma Dv(p))\neq 0$ and $J_{\mathrm{im}\,\Gamma}(\Gamma Dv(q))\neq 0$. The CRN admits {\em multiple positive linearly stable equilibria} (MPSE) if, additionally, $p$ and $q$ can be chosen to be linearly stable w.r.t. their stoichiometry class, namely $D_{\mathrm{im}\,\Gamma}(\Gamma Dv(p))$ and $D_{\mathrm{im}\,\Gamma}(\Gamma Dv(q))$ are both Hurwitz stable (i.e., have eigenvalues in the open left half plane of $\mathbb{C}$). MPSE are clearly MPNE. Given a CRN $\mathcal{R}$ admitting MPNE (resp., MPSE), we will present a number of ways in which we can modify $\mathcal{R}$ to create a new CRN $\mathcal{R}'$ in such a way that $\mathcal{R}'$ displays MPNE (resp., MPSE). This will generally involve adding more reactions and/or more species, or modifying reactions in some way. A similar project can be undertaken for other dynamical behaviours, in particular for CRNs which admit hyperbolic periodic orbits \cite{banajiCRNosci}, and we remark on this after certain proofs. \section{An extended example} \label{secextended} To motivate the results to follow and highlight their limitations, we present an extended example beginning with a CRN $\mathcal{R}_0$ displaying MPNE with mass action kinetics and then performing various modifications to create CRNs $\mathcal{R}_1$ to $\mathcal{R}_{12}$. Each modification either can be proved to preserve MPNE using a result in the paper, or highlights some point about the scope of these theorems. {\bf Base reaction network.} We begin with the following CRN (see also Remark 4.4 of \cite{JoshiShiu2016}). \begin{equation} X+2Y \rightarrow 3Y, \quad Y \rightarrow X\,. \tag{\mbox{$\mathcal{R}_0$}} \end{equation} If $\mathcal{R}_0$ is given mass action kinetics with both rate constants set at $1$, we get the ODEs $\dot x = -xy^2+y,\,\,\dot y = xy^2-y$.
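As a numerical sanity check of these ODEs (our own, for one arbitrary parameter choice), the points $\left(\frac{C+\sqrt{C^2-4}}{2}, \frac{C-\sqrt{C^2-4}}{2}\right)$ with $C>2$, and the same point with coordinates swapped, are equilibria, and both are nondegenerate; since $\mathrm{rank}\,\Gamma = 1$ here, nondegeneracy amounts to the sum of the $1\times 1$ principal minors of $\Gamma Dv$ being nonzero:

```python
import math

C = 3.0  # any C > 2 works; this particular value is our arbitrary choice
tp = (C + math.sqrt(C * C - 4)) / 2
tm = (C - math.sqrt(C * C - 4)) / 2

def reduced_det(x, y):
    # Gamma = [[-1, 1], [1, -1]] and Dv = [[y^2, 2xy], [0, 1]] for mass action
    # rates v = (x*y^2, y); rank(Gamma) = 1, so the reduced determinant is the
    # sum of the 1x1 principal minors (the trace) of Gamma*Dv.
    GDv = [[-y * y, 1 - 2 * x * y], [y * y, 2 * x * y - 1]]
    return GDv[0][0] + GDv[1][1]

for x, y in [(tp, tm), (tm, tp)]:
    assert abs(-x * y * y + y) < 1e-9     # xdot = 0
    assert abs(x * y * y - y) < 1e-9      # ydot = 0
    assert abs(reduced_det(x, y)) > 1e-3  # nondegenerate
```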
Fixing any $C>2$ and defining $t_{\pm} = \frac{C \pm \sqrt{C^2-4}}{2}$, the reader may easily confirm that $p=(t_+,t_-)$ and $q=(t_-,t_+)$ are positive, nondegenerate, compatible equilibria of the system. That $\mathcal{R}_0$ admits MPNE is also an immediate application of Theorem 4.5 in \cite{JoshiShiu2016}. {\bf Modification 1: adding new linearly dependent reactions preserves MPNE.} Consider adding to $\mathcal{R}_0$ some new reaction with reaction vector which is a linear combination of existing reaction vectors. For example, we might enlarge $\mathcal{R}_0$ with the reaction $X \rightarrow Y$ to get \begin{equation} X+2Y \rightarrow 3Y, \quad Y \rightleftharpoons X\,. \tag{\mbox{$\mathcal{R}_1$}} \end{equation} As the stoichiometry classes do not change, intuition suggests that a small rate for the new reaction causes only a small perturbation to the vector field on any stoichiometry class, and hence $\mathcal{R}_1$ should admit MPNE. This is formalised as a general result (Theorem~\ref{thmnewdepreac}). {\bf Modification 2: adding new independent reactions may destroy MPNE.} Consider now adding to $\mathcal{R}_0$ the reversible reaction $0 \rightleftharpoons X$ with mass action kinetics, to get \begin{equation} X+2Y \overset{\scriptstyle{k_1}}\longrightarrow 3Y, \quad Y \overset{\scriptstyle{k_2}}\longrightarrow X, \quad 0 \overset{\scriptstyle{k_3}}{\underset{\scriptstyle{k_4}}\rightleftharpoons} X\,. \tag{\mbox{$\mathcal{R}_2$}} \end{equation} The unique stoichiometry class of this system is now $\mathbb{R}^2_{\geq 0}$, and the unique positive equilibrium is at $x=k_3/k_4,\,y=k_2k_4/(k_1k_3)$. Thus $\mathcal{R}_2$ forbids MPNE, and we conclude that adding reactions to a CRN may destroy the capacity for MPNE.
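As a quick numerical check of the claims made for $\mathcal{R}_0$ above, the following sketch takes the illustrative value $C=3$ (note that $t_+t_-=1$, so $\dot y = y(xy-1)$ vanishes at both points):

```python
from math import sqrt, isclose

C = 3.0                                  # any C > 2 works; C = 3 is illustrative
tp = (C + sqrt(C * C - 4)) / 2
tm = (C - sqrt(C * C - 4)) / 2

def f(x, y):                             # mass action ODEs for R0, rate constants 1
    return (-x * y * y + y, x * y * y - y)

# p = (tp, tm) and q = (tm, tp) are equilibria ...
for (x, y) in [(tp, tm), (tm, tp)]:
    assert all(isclose(c, 0.0, abs_tol=1e-12) for c in f(x, y))

# ... compatibility is immediate: p - q = (tp - tm)*(1, -1) lies in im(Gamma).
# Nondegeneracy: on the class x + y = C the field is g(x) = -x(C-x)^2 + (C-x),
# and g'(x) is nonzero at both equilibria
def gprime(x):
    return -(C - x) ** 2 + 2 * x * (C - x) - 1
assert abs(gprime(tp)) > 1e-9 and abs(gprime(tm)) > 1e-9
```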
{\bf Modification 3: adding new independent reactions may allow multiple degenerate equilibria.} Consider adding to $\mathcal{R}_0$ the reaction $0 \rightleftharpoons X+Y$ with mass action kinetics, to get \begin{equation} X+2Y \overset{\scriptstyle{k_1}}\longrightarrow 3Y, \quad Y \overset{\scriptstyle{k_2}}\longrightarrow X, \quad 0 \overset{\scriptstyle{k_3}}{\underset{\scriptstyle{k_4}}\rightleftharpoons} X + Y\,. \tag{\mbox{$\mathcal{R}_3$}} \end{equation} The unique stoichiometry class of $\mathcal{R}_3$ is $\mathbb{R}^2_{\geq 0}$. If $k_2/k_1 \neq k_3/k_4$, this system has no equilibria at all. On the other hand if $k_2/k_1 = k_3/k_4$, then the set of equilibria consists of solutions to $xy = k_2/k_1$, and all these equilibria are degenerate. Thus the capacity for MPNE is destroyed, even though $\mathcal{R}_3$ still allows MPE for special choices of rate constants. {\bf Modification 4: adding new independent reactions may preserve MPNE.} Consider adding to $\mathcal{R}_0$ three irreversible reactions $X+2Y \rightarrow 2X+2Y$, $3X\rightarrow 4X$ and $X \rightarrow 0$, and choosing mass action rate constants as indicated: \begin{equation} 2X+2Y \overset{\scriptstyle{1}}\longleftarrow X+2Y \overset{\scriptstyle{1}}\longrightarrow 3Y, \quad Y \overset{\scriptstyle{1}}\longrightarrow X \overset{\scriptstyle{4}}\longrightarrow 0, \quad 3X \overset{\scriptstyle{1}}\longrightarrow 4X\,. \tag{\mbox{$\mathcal{R}_4$}} \end{equation} The unique stoichiometry class of $\mathcal{R}_4$ is $\mathbb{R}^2_{\geq 0}$. $(p,1/p)$ and $(q, 1/q)$ for $p=\sqrt{2+\sqrt{3}}$ and $q=\sqrt{2-\sqrt{3}}$ are nondegenerate positive equilibria of the system. Thus $\mathcal{R}_4$ admits MPNE. However, there is no obvious {\em local} argument for this: the MPNE constructed for the new system are not in any sense derived as perturbations of original MPNE. 
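The equilibrium claims for $\mathcal{R}_4$ can be checked directly; a minimal numerical sketch follows (here the stoichiometric subspace is all of $\mathbb{R}^2$, so nondegeneracy is just a nonzero Jacobian determinant):

```python
from math import sqrt, isclose

p = sqrt(2 + sqrt(3))
q = sqrt(2 - sqrt(3))

def f(x, y):
    # mass action ODEs for R4 with the indicated rate constants; the
    # -x y^2 and +x y^2 contributions to xdot cancel
    return (y - 4 * x + x ** 3, x * y * y - y)

def jac_det(x, y):                        # det of the full 2x2 Jacobian
    return (3 * x * x - 4) * (2 * x * y - 1) - y * y

for x in (p, q):
    y = 1 / x
    assert all(isclose(c, 0.0, abs_tol=1e-12) for c in f(x, y))
    assert abs(jac_det(x, y)) > 1e-9      # nondegenerate
```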
{\bf Modification 5: adding inflows and outflows of all species preserves MPNE.} Suppose we add to $\mathcal{R}_0$ inflows and outflows of all species to get the fully open extension of $\mathcal{R}_0$: \begin{equation} X+2Y \rightarrow 3Y, \quad Y\rightarrow X, \quad X \rightleftharpoons 0 \rightleftharpoons Y\,. \tag{\mbox{$\mathcal{R}_5$}} \end{equation} The stoichiometric subspace becomes all of $\mathbb{R}^2$. This modification is well known to preserve MPNE. A variety of proofs are known and a very simple IFT-based argument is possible (Theorem~\ref{thmopenextension}). {\bf Modification 6: adding a trivial species preserves MPNE.} Suppose we add a new species $Z$ into the reactions of $\mathcal{R}_0$ in a rather trivial way: the species always figures with the same stoichiometry on both sides of any reaction in which it participates. We might get for example: \begin{equation} X+2Y+Z \rightarrow 3Y+Z, \quad Y+2Z \rightarrow X+2Z\,.\tag{\mbox{$\mathcal{R}_6$}} \end{equation} The stoichiometry classes of $\mathcal{R}_6$ are still one dimensional and are parallel to the $x\mhyphen y$ plane. An easy general result (Theorem~\ref{thmtrivial}) confirms that $\mathcal{R}_6$ must admit MPNE. {\bf Modification 7: adding new species may destroy MPNE.} Suppose we add a new species $Z$ into the reactions of $\mathcal{R}_0$ as follows: \begin{equation} X+2Y \rightarrow 3Y, \quad Y+Z \rightarrow X\,.\tag{\mbox{$\mathcal{R}_7$}} \end{equation} The concentration of $Z$ must be zero at any equilibrium, and thus $\mathcal{R}_7$ trivially forbids MPNE. It is also straightforward to find examples where adding a new species into some reactions of a CRN admitting MPNE results in a CRN with a unique positive equilibrium on each stoichiometric class. 
For example, assuming mass action kinetics, suppose we start with $\mathcal{R}_4$ above and add the new species $Z$ into two reactions to get \begin{equation} X+2Y \overset{\scriptstyle{k_1}}\longrightarrow 3Y, \quad X+2Y+2Z \overset{\scriptstyle{k_2}}\longrightarrow 2X+2Y, \quad Y \overset{\scriptstyle{k_3}}\longrightarrow X \overset{\scriptstyle{k_4}}\longrightarrow Z, \quad 3X \overset{\scriptstyle{k_5}}\longrightarrow 4X\,. \tag{\mbox{$\mathcal{R}'_4$}} \end{equation} The unique stoichiometry class of $\mathcal{R}'_4$ is all of $\mathbb{R}^3_{\geq0}$, and $\mathcal{R}'_4$ has a unique positive equilibrium at $(\alpha, \beta, \gamma)$ where $\alpha = \sqrt{k_4/(2k_5)}$, $\beta = k_3/(k_1\alpha)$, $\gamma = \sqrt{k_4/(2k_2\beta^2)}$. {\bf Modification 8: adding new species may preserve MPNE.} Suppose we add the new species $Z$ into the reactions of $\mathcal{R}_0$ as follows: \begin{equation} X+2Y+2Z \rightarrow 3Y+3Z, \quad Y+Z \rightarrow X\,.\tag{\mbox{$\mathcal{R}_8$}} \end{equation} Stoichiometry classes of $\mathcal{R}_8$ remain one dimensional and we can confirm that $(x,y,z)=(1,6,4)$ and $(x,y,z)=(3,4,2)$ are positive, nondegenerate, compatible equilibria of $\mathcal{R}_8$. Thus $\mathcal{R}_8$ admits MPNE. {\bf Modification 9: adding new reactions involving new species often preserves MPNE.} Consider adding to $\mathcal{R}_0$ a single reversible reaction $Y \rightleftharpoons 2Z$ involving a new species $Z$ to get: \begin{equation} X+2Y \rightarrow 3Y, \quad X \leftarrow Y \rightleftharpoons 2Z\,.\tag{\mbox{$\mathcal{R}_9$}} \end{equation} The evolution now occurs in $\mathbb{R}^3$, and the stoichiometric subspace of $\mathcal{R}_9$ is now $2$-dimensional. That $\mathcal{R}_9$ admits MPNE is a simple application of a much more general result (Theorem~\ref{thmblockadd}) allowing the addition of several new reactions involving several new species.
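The claims for $\mathcal{R}_8$ can be verified concretely. The text leaves the rate constants free; with the illustrative choice $k_1=1$ for the first reaction and $k_2=24$ for the second, $xyz=24$ at any positive equilibrium, and both stated points qualify:

```python
def rates(x, y, z, k1=1.0, k2=24.0):   # k1, k2 are illustrative choices
    return k1 * x * y * y * z * z, k2 * y * z

# the reaction vectors are (-1,1,1) and (1,-1,-1), so equilibrium <=> r1 = r2
for (x, y, z) in [(1, 6, 4), (3, 4, 2)]:
    r1, r2 = rates(x, y, z)
    assert abs(r1 - r2) < 1e-9

# compatibility: the difference of the two points lies in im(Gamma) = span{(-1,1,1)}
d = [1 - 3, 6 - 4, 4 - 2]
assert d[0] == -d[1] == -d[2]

# nondegeneracy: the derivative of r1 - r2 along the class direction (-1,1,1)
# is nonzero at both points
def g(x, y, z):
    return (-y * y * z * z + 2 * x * y * z * z + 2 * x * y * y * z) - (24 * z + 24 * y)
assert g(1, 6, 4) != 0 and g(3, 4, 2) != 0
```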
We remark that Theorem~\ref{thmblockadd} includes a nondegeneracy condition which in this case is equivalent to the requirement that the new species must not occur with the same stoichiometry on both sides of the new reaction. The reader may confirm, for example, that \begin{equation} X+2Y \rightarrow 3Y, \quad Y \rightarrow X, \quad Z \rightleftharpoons X+Z\,.\tag{\mbox{$\mathcal{R}'_9$}} \end{equation} forbids MPNE. {\bf Modification 10: adding a new species into reactions while also adding its inflow and outflow preserves MPNE.} Suppose we add into some reactions of $\mathcal{R}_0$ a new species $Z$ (in any way) while also adding its inflow and outflow $0 \rightleftharpoons Z$. We may get, for example, \begin{equation} X+2Y \rightarrow 3Y, \quad Y+Z \rightarrow X, \quad 0 \rightleftharpoons Z\,.\tag{\mbox{$\mathcal{R}_{10}$}} \end{equation} Intuition suggests that sufficiently large inflow-outflow rates for $Z$ should effectively hold $Z$ constant, and thus that the net effect is only to modify the rates of the reactions in which $Z$ participates. This argument is made precise in Theorem~\ref{thmnewwithopen}. {\bf Modification 11: splitting a reaction and inserting a complex involving a new species often preserves MPNE.} Suppose that we modify an irreversible reaction of $\mathcal{R}_0$ by adding a complex containing some new species $Z$ as an intermediate. For example, we might get: \begin{equation} X+2Y \rightarrow X+Z \rightarrow 3Y\,, \quad Y \rightarrow X\,.\tag{\mbox{$\mathcal{R}_{11}$}} \end{equation} The evolution now occurs in $\mathbb{R}^3$, and the stoichiometric subspace is now $2$-dimensional. That $\mathcal{R}_{11}$ admits MPNE is a consequence of a general result (Theorem~\ref{thmintermediates}) which allows adding several new species in several intermediate complexes.
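The conclusion for $\mathcal{R}_{11}$ can also be checked directly, without invoking Theorem~\ref{thmintermediates}. The sketch below sets all mass action rate constants to $1$ (an illustrative choice): equilibria then satisfy $xy=1$ and $z=y^2$, the quantity $x+y+2z$ is conserved, and two nondegenerate equilibria are exhibited on the class $x+y+2z=4$:

```python
# R11 with all mass action rate constants set to 1:
#   X + 2Y -> X + Z  (rate r1 = x y^2),  X + Z -> 3Y  (r2 = x z),  Y -> X  (r3 = y)
# Equilibrium forces r1 = r2 = r3, i.e. x y = 1 and z = y^2, while x + y + 2z is
# conserved; so equilibria on one class solve h(y) = 1/y + y + 2y^2 = const.
def h(y):
    return 1 / y + y + 2 * y * y

def bisect(f, a, b):
    for _ in range(200):
        m = (a + b) / 2
        if (f(m) > 0) == (f(a) > 0):
            a = m
        else:
            b = m
    return (a + b) / 2

y1 = 1.0                                      # h(1) = 4
y2 = bisect(lambda y: h(y) - 4.0, 0.1, 0.9)   # the second solution of h(y) = 4
assert abs(h(y2) - 4.0) < 1e-9 and abs(y2 - y1) > 0.1

def reduced_jac_sum(x, y, z):
    # J_{im Gamma}: sum of 2x2 principal minors of Gamma Dv (rank Gamma = 2)
    v = [(0, -2, 1), (-1, 3, -1), (1, -1, 0)]              # reaction vectors
    grads = [(y * y, 2 * x * y, 0), (z, 0, x), (0, 1, 0)]  # rate gradients
    DF = [[sum(v[k][i] * grads[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    m = lambda i, j: DF[i][i] * DF[j][j] - DF[i][j] * DF[j][i]
    return m(0, 1) + m(0, 2) + m(1, 2)

for y in (y1, y2):
    x, z = 1 / y, y * y
    assert abs(x * y * y - y) < 1e-12 and abs(x * z - y) < 1e-12  # equilibrium
    assert abs(reduced_jac_sum(x, y, z)) > 1e-3                   # nondegenerate
```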
{\bf Modification 12: splitting a reaction and inserting a complex involving no new species may destroy MPNE.} Suppose that we modify an existing irreversible reaction of $\mathcal{R}_0$ by adding a new complex comprised of existing species as an intermediate. For example we might get: \begin{equation} X+2Y \overset{\scriptstyle{k_1}}\longrightarrow 2X \overset{\scriptstyle{k_2}}\longrightarrow 3Y\,, \quad Y \overset{\scriptstyle{k_3}}\longrightarrow X\,.\tag{\mbox{$\mathcal{R}_{12}$}} \end{equation} The evolution occurs in $\mathbb{R}^2$, and the stoichiometric subspace is now $2$-dimensional. The reader may confirm that the only positive equilibrium of $\mathcal{R}_{12}$ is at $x = \sqrt[3]{\frac{k_3^2}{k_1k_2}}, y=\sqrt[3]{\frac{k_3k_2}{k_1^2}}$. Thus $\mathcal{R}_{12}$ forbids MPNE. {\bf Remarks on the modifications} \begin{enumerate}[align=left,leftmargin=*] \item If we add new ``dependent'' reactions, as in $\mathcal{R}_{1}$, then we can quite generally expect MPNE and MPSE to survive (Theorem~\ref{thmnewdepreac}). This is also the subject of Theorem~3.1 in \cite{joshishiu}. \item If we add new ``independent'' reactions on existing species, as in Modifications 2--5, then the consequences are unclear as illustrated by $\mathcal{R}_2$, $\mathcal{R}_3$ and $\mathcal{R}_4$. However, MPNE and MPSE are preserved in the important and well known special case of taking the fully open extension as in $\mathcal{R}_{5}$ (Theorem~\ref{thmopenextension}). \item If we add species into reactions, without adding new reactions, then MPNE and MPSE are preserved if the added species figure trivially in reactions (Theorem~\ref{thmtrivial}). More generally, the consequences are unclear as illustrated by $\mathcal{R}_{7}$ and $\mathcal{R}_{8}$. 
\item If we add new species to a CRN, then MPNE and MPSE survive in the important special case where we add an inflow and outflow reaction for each new species as in $\mathcal{R}_{10}$ (Theorem~\ref{thmnewwithopen}, and see also Theorem~4.2 in \cite{joshishiu}). If we add new reversible reactions involving some new species, without modifying any existing reactions (as in $\mathcal{R}_{9}$) then, with mild additional hypotheses, MPNE and MPSE survive (Theorem~\ref{thmblockadd}). \item Finally, $\mathcal{R}_{11}$ and $\mathcal{R}_{12}$ provide examples where we modify a network by adding intermediate complexes into reactions, effectively both deleting and adding reactions. If these involve new species, then under mild conditions, this preserves MPNE and MPSE, as in $\mathcal{R}_{11}$ (Theorem~\ref{thmintermediates}). The addition of intermediate complexes is also the subject of \cite{feliuwiufInterface2013}, and we comment later on the relationship with results in this paper. \end{enumerate} \section{Results on the inheritance of MPNE and MPSE} \label{secthms} We gather together the main results, presenting the proofs later. Theorems~\ref{thmnewdepreac}-\ref{thmnewwithopen} are relatively easy and some elements of these results are well known. Theorems~\ref{thmblockadd}~and~\ref{thmintermediates} are harder. In each case, $\mathcal{R}$ is an unknown CRN on species $X = (X_1, \ldots, X_n)$ with positive general kinetics or mass action kinetics. The reactions of $\mathcal{R}$ are assumed, w.l.o.g., to be irreversible. $\mathcal{R}'$ is created from $\mathcal{R}$ by adding reactions and/or species (Theorems~\ref{thmnewdepreac}~to~\ref{thmblockadd}), or by splitting reactions (Theorem~\ref{thmintermediates}), and is given kinetics compatible with that of $\mathcal{R}$: if $\mathcal{R}$ has MA kinetics, then so does $\mathcal{R}'$, while if $\mathcal{R}$ has positive general kinetics, then so does $\mathcal{R}'$. 
Much more general assumptions on the kinetics are possible but are omitted for brevity: the proofs and surrounding remarks make clear how the kinetic assumptions can be loosened. Theorem~\ref{thmnewdepreac} is closely related to Theorem~3.1 in \cite{joshishiu}. \begin{thm}[Adding a dependent reaction] \label{thmnewdepreac} Suppose we create $\mathcal{R}'$ from $\mathcal{R}$, by adding to $\mathcal{R}$ a new irreversible reaction with reaction vector $\alpha$ which is a linear combination of reaction vectors of $\mathcal{R}$. If $\mathcal{R}$ admits MPNE (resp., MPSE) then $\mathcal{R}'$ admits MPNE (resp., MPSE). \end{thm} \begin{remark}[Adding the reverse of a reaction or a reversible reaction] \label{newdeprev} From Theorem~\ref{thmnewdepreac}, we could add the reverse of any existing reaction to $\mathcal{R}$ as this would be a dependent reaction. Equally, we could add a new {\em reversible} dependent reaction to $\mathcal{R}$ and preserve MPNE, as this amounts to sequentially adding two new irreversible reactions to $\mathcal{R}$. \end{remark} The next theorem is well known. It appears as Theorem~2 in \cite{craciun2} proved using an approach similar to the one here, and as Lemma~B1 in \cite{banajicraciun2} proved using techniques involving the invariance of Brouwer degree, and closely related to the proof of Proposition~1 in \cite{soule}. In the case of MA kinetics or positive general kinetics adding inflows and outflows of all species, as in Theorem~\ref{thmopenextension}, is equivalent to taking the fully open extension of a CRN, and so the theorem may be stated simply as ``taking fully open extensions preserves the capacity for MPNE and MPSE''. \begin{thm}[Adding inflows and outflows of all species] \label{thmopenextension} Suppose we create $\mathcal{R}'$ from $\mathcal{R}$, by adding to $\mathcal{R}$ the reactions $0 \rightleftharpoons X_i$ for $i$ in $1, \ldots, n$. If $\mathcal{R}$ admits MPNE (resp., MPSE) then $\mathcal{R}'$ admits MPNE (resp., MPSE). 
\end{thm} The following theorem is very easy to prove, and yet surprisingly important in applications when combined with other results in this paper. \begin{thm}[Adding a trivial species] \label{thmtrivial} Suppose we create $\mathcal{R}'$ from $\mathcal{R}$, by adding into some reactions of $\mathcal{R}$ a new species $Y$ which occurs with the same stoichiometry on both sides of each reaction in which it participates. If $\mathcal{R}$ admits MPNE (resp., MPSE) then $\mathcal{R}'$ admits MPNE (resp., MPSE). \end{thm} The following theorem is closely related to Theorem~4.2 in \cite{joshishiu}: \begin{thm}[Adding a new species with inflow and outflow] \label{thmnewwithopen} Suppose we create $\mathcal{R}'$ from $\mathcal{R}$, by adding into some reactions of $\mathcal{R}$ the new species $Y$ in an arbitrary way, while also adding the new reaction $0 \rightleftharpoons Y$. If $\mathcal{R}$ admits MPNE (resp., MPSE) then $\mathcal{R}'$ admits MPNE (resp., MPSE). \end{thm} Theorem~\ref{thmnewwithopen}, combined with Theorem~\ref{thmnewdepreac} allows us to deduce an important corollary whose claim about MPNE appears, although with different terminology, in \cite{joshishiu}. \begin{cor}[Inheritance of MPNE and MPSE in the induced subnetwork partial order] \label{coropeninduced} Let $\mathcal{R}$ and $\mathcal{R}'$ be fully open CRNs with $\mathcal{R}$ an induced subnetwork of $\mathcal{R}'$. If $\mathcal{R}$ admits MPNE (resp., MPSE) then $\mathcal{R}'$ admits MPNE (resp., MPSE). 
\end{cor} \begin{proof} We can construct $PN(\mathcal{R}')$ from $PN(\mathcal{R})$ via a sequence of steps as follows: (i) first, for each absent species (if any) we add the species vertex, its inflow and outflow reaction vertices, and all missing arcs (corresponding to Theorem~\ref{thmnewwithopen}); (ii) then, for each remaining absent reaction (if any) we add the reaction vertex and all its incident arcs (corresponding to Theorem~\ref{thmnewdepreac} because a fully open CRN has stoichiometric subspace which is the whole ambient space, and hence each added reaction is now a dependent reaction). If $\mathcal{R}$ admits MPNE (resp., MPSE), then since each step preserves this property, $\mathcal{R}'$ admits MPNE (resp., MPSE). \end{proof} \begin{remark}[Notes on Corollary~\ref{coropeninduced}] The claim about MPNE in Corollary~\ref{coropeninduced} appears as Corollary~4.6 of \cite{joshishiu}. The claim about MPSE is part of Theorem~4.5 in the later work \cite{Joshi.2015aa}. However, it appears that our approach allows considerably weaker kinetic assumptions than needed for Corollary~4.6 of \cite{joshishiu}. Joshi (Theorem 4.13 of \cite{Joshi.2013aa}) showed that the fully open CRNs admitting MPNE, minimal w.r.t. the induced subnetwork partial order, and with only one non-flow reaction are the fully open extensions of either $a_1X\to a_2X$ or $X+Y\to b_1X+b_2Y$, where $a_2 >a_1 >1$, respectively $b_1>1$ and $b_2 >1$. Corollary~\ref{coropeninduced} can be improved using Theorem~\ref{thmintermediates} below as indicated in Remark~\ref{remFObest}. \end{remark} The next two theorems are somewhat harder than Theorems~\ref{thmnewdepreac}~to~\ref{thmnewwithopen}, and can be regarded as the main results of this paper. 
\begin{thm}[Adding new reversible reactions involving new species] \label{thmblockadd} Suppose we create $\mathcal{R}'$ from $\mathcal{R}$, by adding $m \geq 1$ new reversible reactions involving $k$ new species with the following nondegeneracy condition: the submatrix of the stoichiometric matrix of $\mathcal{R}'$ corresponding to the new species has rank $m$ (this implies $k \geq m$). If $\mathcal{R}$ admits MPNE (resp., MPSE) then $\mathcal{R}'$ admits MPNE (resp., MPSE). \end{thm} \begin{thm}[Adding intermediate complexes involving new species] \label{thmintermediates} Let $Y$ be a list of $k$ new species, and suppose we create $\mathcal{R}'$ from $\mathcal{R}$ by replacing each of the $m$ reactions: \[ a_i \cdot X \rightarrow b_i \cdot X \quad \mbox{with a chain}\quad a_i \cdot X \rightarrow c_i \cdot X + \beta_i\cdot Y \rightarrow b_i \cdot X,\,\,(i=1,\ldots,m)\,.\] Suppose further that the new species $Y$ enter nondegenerately into $\mathcal{R}'$ in the sense that $\beta := (\beta_1|\beta_2|\cdots|\beta_m)$ has rank $m$ (this implies $k \geq m$). The $a_i$, $b_i$ and $c_i$ are arbitrary nonnegative vectors, and any or all may coincide. If $\mathcal{R}$ admits MPNE (resp., MPSE) then $\mathcal{R}'$ admits MPNE (resp., MPSE). \end{thm} \begin{remark}[Scope of Theorem~\ref{thmintermediates}] Since we do not impose the assumptions in Remark~\ref{remforbid}, we may always formally write a single reaction as multiple reactions; for example, we may consider the single reaction $a \cdot X \rightarrow b\cdot X$ as a set of $m$ such reactions, each with rate $\frac{1}{m}$ times the original rate. This clearly does not affect the associated dynamics and can be done while remaining in any reasonable class of kinetics (in particular mass action kinetics, where each new reaction has rate constant equal to the original rate constant divided by $m$). Thus in Theorem~\ref{thmintermediates} some reactions may be split multiple times and acquire multiple intermediates.
Many (but not all) instances of Theorem~\ref{thmblockadd} also follow from Theorem~\ref{thmintermediates}. For example, adding to a CRN a trivial reaction with the same complex on both sides (with any kinetics) does not alter the vector field. Given a CRN with MPNE, we could add and then split such a trivial reaction using Theorem~\ref{thmintermediates} to create a new reversible reaction involving some new species. The resulting CRN admits MPNE by Theorem~\ref{thmintermediates}. \end{remark} \begin{remark}[Relationship between Theorem~\ref{thmintermediates} and results in \cite{feliuwiufInterface2013}] Theorem~\ref{thmintermediates} appears to generalise certain aspects of results in \cite{feliuwiufInterface2013}, while being unable to reproduce others. In particular, we allow the introduction of complexes (possibly involving existing species) rather than just new individual species as intermediates; however we forbid the introduction of intermediates in a degenerate way allowed in \cite{feliuwiufInterface2013} (although see the remarks in the conclusions). This divergence is not surprising as the techniques in \cite{feliuwiufInterface2013} are global and algebraic, while those here are local and analytical. \end{remark} Theorems~\ref{thmnewdepreac}~to~\ref{thmintermediates} collectively define a partial order $\preceq$ on the set of all CRNs as follows: $\mathcal{R}_1 \preceq \mathcal{R}_2$ if and only if $\mathcal{R}_2$ can be obtained from $\mathcal{R}_1$ by a sequence of modifications (possibly empty) as described in Theorems~\ref{thmnewdepreac}~to~\ref{thmintermediates}. Reflexivity and transitivity are trivial; antisymmetry is ensured by the fact that each modification either increases the number of species or the number of reactions or both. So, for example, the CRNs $\mathcal{R}_1$, $\mathcal{R}_5$, $\mathcal{R}_6$, $\mathcal{R}_9$, $\mathcal{R}_{10}$ and $\mathcal{R}_{11}$ in Section~\ref{secextended} are all $\succ$ $\mathcal{R}_0$. 
\begin{remark}[Improving Corollary~\ref{coropeninduced}] \label{remFObest} Theorem~\ref{thmintermediates} allows an improvement on Corollary~\ref{coropeninduced}. Given a fully open CRN $\mathcal{R}$ admitting MPNE (resp., MPSE), we may split some reactions as in Theorem~\ref{thmintermediates} to get a CRN $\mathcal{R}'$ admitting MPNE (resp., MPSE). $\mathcal{R}'$ will have some new species, say $Y$, but the stoichiometric matrix of $\mathcal{R}'$ has rank equal to the number of species in $\mathcal{R}'$ by the rank condition in Theorem~\ref{thmintermediates}. We may now add inflows and outflows of the species $Y$ to get the fully open extension $\mathcal{R}''$ of $\mathcal{R}'$. By Theorem~\ref{thmnewdepreac}, $\mathcal{R}''$ admits MPNE (resp., MPSE) since the added reactions were dependent reactions. Now $\mathcal{R}''$ may be a minimal CRN admitting MPNE (resp., MPSE) in the induced subnetwork partial order, but by construction is not minimal w.r.t. the partial order $\preceq$ here. For example, according to Theorem~4.13 of \cite{Joshi.2013aa}, the CRN $X \rightleftharpoons 0, \,\,2X\rightarrow 3X$ with MA kinetics admits MPNE and consequently, by Theorems~\ref{thmintermediates}~and~\ref{thmnewdepreac} applied in that order, \begin{equation} Y \rightleftharpoons 0 \rightleftharpoons X, \,\,2X\rightarrow X + Y \rightarrow 3X \tag{\mbox{$\mathcal{R}$}} \end{equation} admits MPNE. It is easy to confirm however, either by direct calculation or by applying Theorem~4.13 of \cite{Joshi.2013aa} and Theorem~3.6 of \cite{JoshiShiu2016}, that no fully open induced subnetwork of $\mathcal{R}$ admits MPNE. Thus $\mathcal{R}$ is a minimal fully open CRN admitting MPNE in the induced subnetwork partial order, while it is not minimal w.r.t. the partial order $\preceq$ here. \end{remark} The following corollary is of importance in the analysis of biological systems, and involves the operation of adding an enzymatic mechanism to a set of reactions. 
It will be used a number of times in the analysis of a particular system in Section~\ref{secbioexample}: \begin{cor}[Adding enzymatic mechanisms]\label{cor:addEnz} Let $E$ and $I_1, \ldots, I_m$ be new species, not involved in $\cal R$, and let $c_i \ge 0$ ($i = 1, \ldots, m$). Suppose we create ${\cal R}'$ from ${\cal R}$ by replacing each of the reactions \[ a_i\cdot X\to b_i\cdot X \quad \mbox{with a chain} \quad c_iE+ a_i\cdot X\rightleftharpoons I_i\to c_iE+b_i\cdot X,\,\,(i=1,\ldots,m)\,. \] If $\cal R$ admits MPNE (resp., MPSE), then ${\cal R}'$ admits MPNE (resp., MPSE). \end{cor} \begin{proof} We perform successive modifications that preserve the capacity for MPNE and MPSE: \begin{enumerate}[align=left,leftmargin=*] \item We add the trivial species E into all the reactions simultaneously (an application of Theorem~\ref{thmtrivial}): \[ c_iE+ a_i\cdot X\to c_iE+b_i\cdot X \text{ replaces } a_i\cdot X\to b_i\cdot X,\quad (i=1,\ldots,m). \] \item We add the set of intermediate species $I_i$ (an application of Theorem~\ref{thmintermediates}): \[ c_iE+ a_i\cdot X\to I_i\to c_iE+b_i\cdot X \text{ replaces } c_iE+ a_i\cdot X\to c_iE+b_i\cdot X,\quad (i=1,\ldots,m). \] \item We add the dependent reactions $I_i\to c_iE+a_i\cdot X$ ($m$ applications of Theorem~\ref{thmnewdepreac}): \[ c_iE+ a_i\cdot X\rightleftharpoons I_i\to c_iE+b_i\cdot X \text{ replaces } c_iE+ a_i\cdot X\to I_i\to c_iE+b_i\cdot X,\quad (i=1,\ldots,m) \] \end{enumerate} We could see step (2) as $m$ applications of Theorem~\ref{thmintermediates} or a single application of the theorem. In either case, the nondegeneracy condition of Theorem~\ref{thmintermediates} is easily shown to be met. \end{proof} \section{A biologically motivated example} \label{secbioexample} Before turning to the proofs of the theorems, we present an example indicating how the results here intersect with published work on biologically important systems. 
The mitogen-activated protein kinase (MAPK) cascade is an extensively studied network occurring in various biological processes in eukaryotic cells. The model of the MAPK cascade (Figure \ref{fig:HFfeed}(f)) was developed by Huang and Ferrell \cite{Huang.1996aa}, who showed that the cascade may exhibit ultrasensitive behaviour, making it appropriate for switch-like processes such as mitogenesis or cell-fate determination. Kholodenko \cite{Kholodenko.2000aa} demonstrated numerically that adding a negative feedback to the MAPK cascade (Figure \ref{fig:HFfeed}(g)) can cause oscillatory behaviour. Qiao et al. \cite{Qiao.2007aa} later showed numerically that the Huang-Ferrell model without the added negative feedback can also exhibit oscillatory behaviour and bistability. Bistability in the MAPK cascade has also been observed numerically and discussed in earlier papers \cite{Ferrell.1998aa, Kholodenko.2000aa}. Here we show how straightforward applications of our results prove the existence of MPNE and in fact MPSE in the MAPK cascade, with or without negative feedback. With mass action kinetics, the MAPK cascade with negative feedback (Figure \ref{fig:HFfeed}(g)) involves 25 species and 36 reactions; we start by proving that the much simpler network in Figure \ref{fig:HFfeed}(a) admits MPSE, and then apply a sequence of modifications that preserve MPSE to ultimately transform network (a) into networks (f) and (g) of Figure \ref{fig:HFfeed}.
\begin{figure}[!p] \resizebox{15cm}{!}{ \begin{tikzpicture} \node (a) at (-3,0) {\includegraphics[scale=.27]{MAPKa.png}}; \draw [draw=mygray, rounded corners, fill=light-gray, opacity=0.1] (-6,1.7) -- (-6,-1.7) -- (0,-1.7) -- (0,1.7) -- cycle; \node () at (-5.5, 1.3) {\bf (a)}; \begin{scope}[yshift=-8] \node (b) at (-3,-5) {\includegraphics[scale=.27]{MAPKb.png}}; \draw [draw=mygray, rounded corners, fill=light-gray, opacity=0.1] (-6,-3.3) -- (-6,-6.7) -- (0,-6.7) -- (0,-3.3) -- cycle; \node () at (-5.5, -3.7) {\bf (b)}; \end{scope} \node (c) at (-3,-11) {\includegraphics[scale=.27]{MAPKc.png}}; \draw [draw=mygray, rounded corners, fill=light-gray, opacity=0.1] (-6.9,-9) -- (-6.9,-12.9) -- (1,-12.9) -- (1,-9) -- cycle; \node () at (-6.5, -12.5) {\bf (c)}; \node (d) at (-3,-18) {\includegraphics[scale=.27]{MAPKd.png}}; \draw [draw=mygray, rounded corners, fill=light-gray, opacity=0.1] (-6.9,-15) -- (-6.9,-21) -- (1,-21) -- (1,-15) -- cycle; \node () at (-6.5, -20.6) {\bf (d)}; \node (e) at (7,-18) {\includegraphics[scale=.27]{MAPKe.png}}; \draw [draw=mygray, rounded corners, fill=light-gray, opacity=0.1] (3.1,-15) -- (3.1,-21) -- (11,-21) -- (11,-15) -- cycle; \node () at (3.5, -20.6) {\bf (e)}; \begin{scope}[yshift=-8] \node (f) at (7,-9.5) {\includegraphics[scale=.27]{MAPKf.png}}; \draw [draw=mygray, rounded corners, fill=light-gray, opacity=0.1] (3.1,-5.9) -- (3.1,-13.2) -- (11,-13.2) -- (11,-5.9) -- cycle; \node () at (3.5, -12.8) {\bf (f)}; \end{scope} \node (g) at (7,-1) {\includegraphics[scale=.27]{MAPKg.png}}; \draw [draw=mygray, rounded corners, fill=light-gray, opacity=0.1] (3.1,2.55) -- (3.1,-4.6) -- (11,-4.6) -- (11,2.55) -- cycle; \node () at (3.5, -4.2) {\bf (g)}; \begin{scope}[every node/.style={single arrow, draw}, rotate border/.style={shape border uses incircle, shape border rotate=#1}] \node at (1.9, -18)[minimum height=1.45cm, minimum width = 1.2cm, fill=light-gray]{{\color{light-gray} up}};{up}; \node at (7,-5.5) [minimum height=1.45cm, shape 
border rotate=90, fill=light-gray]{{\color{light-gray} up}}; \node at (7,-14.4) [minimum height=1.45cm,shape border rotate=90, fill=light-gray]{\color{light-gray}up}; \node at (-3,-13.85) [minimum height=1.45cm,shape border rotate=270, fill=light-gray]{\color{light-gray}up}; \node at (-3,-7.9) [minimum height=1.45cm,shape border rotate=270, fill=light-gray]{\color{light-gray}up}; \node at (-3,-2.5) [minimum height=1.45cm,shape border rotate=270, fill=light-gray]{\color{light-gray}up}; \end{scope} \end{tikzpicture} } \caption{The capacity for MPNE is preserved through successive modifications of the minimal network (a) into the MAPK cascade with negative feedback (g).}\label{fig:HFfeed} \end{figure} In what follows, networks (a)--(g) refer to Figure \ref{fig:HFfeed}. We use the following abbreviations in order to aid readability of the reaction networks: X = MAPK, Y = MKK, Z = MKKK. {\bf (a) A minimal multistationary subnetwork.} The first two phosphorylation steps of X (namely, MAPK) are induced by Y-pp (namely, MKK-pp), but only the first one involves an intermediate species. Both dephosphorylation steps are direct and involve no enzyme. The reactions are as follows: \begin{eqnarray}\label{eq:neta} &\text{Y-pp$\,\,+\,\,$X}\overset{k_1}{\underset{k_2}{\,\,\rightleftharpoons\,\,}} \text{Y-pp--X}\overset{k_3}{\,\,\to\,\,} \text{Y-pp+X-p}\overset{k_4}{\,\,\to\,\,}\text{Y-pp+X-pp}, \qquad \text{X-pp}\overset{k_5}{\,\,\to\,\,} \text{X-p} \overset{k_6}{\,\,\to\,\,} \text{X} \end{eqnarray} This network involves 5 species and 6 irreversible reactions. By an exact calculation (Appendix~\ref{appbio}) one can prove that the network with mass action kinetics admits MPNE. This can also be confirmed using CRNToolbox \cite{crntoolbox}. We also confirmed numerically that it admits MPSE.
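Independently of the appendix, multistationarity of network (a) can be exhibited numerically. The sketch below sets all six rate constants to $1$ and uses the illustrative conserved totals $E_T = 20$ (for Y-pp) and $X_T = 100$ (for X); these values are chosen here and are not taken from Appendix~\ref{appbio}. Writing $x$, $x_p$, $x_{pp}$, $e$, $c$ for the concentrations of X, X-p, X-pp, Y-pp and Y-pp--X, the equilibrium conditions give $c = x_p$, $e = 2x_p/x$, $x_{pp} = 2x_p^2/x$, and the totals then determine a one-variable equation with three positive roots:

```python
ET, XT = 20.0, 100.0        # illustrative totals, with all rate constants set to 1

def XT_of(x):
    # total substrate along the equilibrium branch determined by ET:
    # c = xp, e = 2*xp/x, xpp = 2*xp^2/x, and e + c = ET gives xp = ET*x/(2+x)
    xp = ET * x / (2 + x)
    return x + 2 * xp + 2 * xp * xp / x

def bisect(f, a, b):
    for _ in range(200):
        m = (a + b) / 2
        if (f(m) > 0) == (f(a) > 0):
            a = m
        else:
            b = m
    return (a + b) / 2

g = lambda x: XT_of(x) - XT
roots = [bisect(g, a, b) for a, b in [(0.5, 3), (3, 20), (20, 60)]]
assert roots[0] < roots[1] < roots[2]      # three distinct positive equilibria

for x in roots:
    xp = ET * x / (2 + x)
    e, c, xpp = 2 * xp / x, xp, 2 * xp * xp / x
    # stationarity of every species ...
    assert abs(e * x - 2 * c) < 1e-8            # Y-pp--X
    assert abs(-e * x + c + xp) < 1e-8          # X
    assert abs(c - e * xp - xp + xpp) < 1e-8    # X-p
    assert abs(e * xp - xpp) < 1e-8             # X-pp
    # ... and compatibility: all three lie on the same stoichiometry class
    assert abs((e + c) - ET) < 1e-6
    assert abs((x + xp + xpp + c) - XT) < 1e-6
```

With these totals each bisection bracket contains exactly one sign change of $X_T(x) - 100$, so the three roots are genuine; nondegeneracy and stability are not checked by this sketch.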
Moreover, it can be checked that the network cannot be constructed, using the modifications described in Theorems~\ref{thmnewdepreac}~to~\ref{thmintermediates}, from any simpler network which admits MPNE. In this sense the network is a minimal network admitting MPNE with respect to the partial order $\preceq$, and we might refer to it as an ``atom of multistationarity'', borrowing the terminology of \cite{joshishiu}. {\bf (b) Adding enzymes: the double phosphorylation network.} Network (a) is a simplified version of the much-studied double phosphorylation-dephosphorylation cycle, whose capacity for bistable behaviour has been both demonstrated by numerical simulations and shown analytically \cite{markevich, Ortega.2006ab, Wang.2008aa}. Its reaction network \begin{eqnarray}\label{eq:netb} \nonumber &\text{Y-pp\,\,+\,\,X}\,\,\,\rightleftharpoons\,\,\, \text{Y-pp--X} \,\,\,\to\,\,\,\text{Y-pp\,\,+\,\,X-p}\,\,\,\rightleftharpoons\,\,\,\text{Y-pp--X-p}\,\,\,\to\,\,\,\text{Y-pp\,\,+\,\,X-pp}\\ &\text{F$_3$\,\,+\,\,X-pp}\,\,\,\rightleftharpoons\,\,\, \text{F$_3$--X-pp}\,\,\,\to\,\,\, \text{F$_3$\,\,+\,\,X-p} \,\,\,\rightleftharpoons\,\,\, \text{F$_3$--X-p}\,\,\,\to\,\,\,\text{F$_3$\,\,+\,\,X} \end{eqnarray} is obtained from that of (\ref{eq:neta}) by adding enzymatic mechanisms as in Corollary~\ref{cor:addEnz}: the two reactions $\text{X-pp}\to \text{X-p}$ and $\text{X-p}\to \text{X}$ are replaced by the two chains $\text{F$_3+$X-pp}\rightleftharpoons\text{F$_3$--X-pp}\to\text{F$_3+$X-p}$ and $\text{F$_3+$X-p}\rightleftharpoons\text{F$_3$--X-p}\to\text{F$_3+$X}$ respectively. It follows that (b), with mass action kinetics, admits MPSE. {\bf (c) Cascading double phosphorylations I.} A simple version of the MKK double phosphorylation, not mediated by an enzymatic mechanism, is coupled upstream of the double phosphorylation mechanism of MAPK from (b).
The network is composed of the reactions in (\ref{eq:netb}), plus \begin{equation} \label{eq:netc} \text{Y}\,\,\rightleftharpoons\,\, \text{Y-p} \,\,\rightleftharpoons\,\, \text{Y-pp}\,. \end{equation} Adding these two reversible reactions, which involve the two new species Y-p (namely, MKK-p) and Y (namely, MKK), preserves the capacity for MPSE. This can be regarded as two successive applications of Theorem~\ref{thmblockadd}, or as a single application of the theorem; either way, the nondegeneracy condition is easily seen to be satisfied. Therefore (c), with mass action kinetics, admits MPSE. {\bf (d) Cascading double phosphorylations II.} Here the upstream double phosphorylation in (c) is modified to include enzymatic mechanisms. The reaction network of (d) is obtained from that of (c) by replacing the four irreversible reactions making up (\ref{eq:netc}) by the four chains \[ \begin{array}{cc} \text{Z-p$\,\,+\,\,$Y}\,\,\rightleftharpoons\,\, \text{Z-p--Y}\,\,\to\,\, \text{Z-p$\,\,+\,\,$Y-p,}&\text{Z-p$\,\,+\,\,$Y-p}\,\,\rightleftharpoons\,\, \text{Z-p--Y-p}\,\,\to\,\, \text{Z-p$\,\,+\,\,$Y-pp,}\\ \text{F$_2$$\,\,+\,\,$Y-pp}\,\,\rightleftharpoons\,\, \text{F$_2$--Y-pp}\,\,\to\,\, \text{F$_2$$\,\,+\,\,$Y-p,} & \text{F$_2$$\,\,+\,\,$Y-p}\,\,\rightleftharpoons\,\, \text{F$_2$--Y-p}\,\,\to\,\, \text{F$_2$$\,\,+\,\,$Y}. \end{array} \] Corollary~\ref{cor:addEnz} guarantees that (d) admits MPSE. {\bf (e) A three-layer cascade.} In network (e) a third layer is added upstream of cascade (d). The network is composed of all reactions of (d) and \begin{equation}\label{eq:nete} \text{Z}\,\,\rightleftharpoons\,\, \text{Z-p}\,. \end{equation} Adding this reversible reaction involving the new species Z (namely, MKKK) falls immediately under the scope of Theorem~\ref{thmblockadd}. Therefore (e), with mass action kinetics, admits MPSE.
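The stoichiometric structure of (e) can be assembled mechanically from its reaction list. The sketch below (species names follow the text, with ``--'' marking a bound complex; the string-based moiety bookkeeping is our own convention) builds the stoichiometric matrix and confirms that the five expected conserved totals (the X, Y and Z moieties, and the phosphatases F$_2$ and F$_3$) are left null vectors of it:

```python
# Assembling the stoichiometric matrix of the three-layer cascade (e) from its
# reaction list. Species names follow the text; "--" marks a bound complex.
REACTIONS = [  # (reactants, products), one entry per irreversible reaction
    # MAPK layer, from (b)
    (("Y-pp", "X"), ("Y-pp--X",)),     (("Y-pp--X",), ("Y-pp", "X")),
    (("Y-pp--X",), ("Y-pp", "X-p")),
    (("Y-pp", "X-p"), ("Y-pp--X-p",)), (("Y-pp--X-p",), ("Y-pp", "X-p")),
    (("Y-pp--X-p",), ("Y-pp", "X-pp")),
    (("F3", "X-pp"), ("F3--X-pp",)),   (("F3--X-pp",), ("F3", "X-pp")),
    (("F3--X-pp",), ("F3", "X-p")),
    (("F3", "X-p"), ("F3--X-p",)),     (("F3--X-p",), ("F3", "X-p")),
    (("F3--X-p",), ("F3", "X")),
    # MKK layer, from (d)
    (("Z-p", "Y"), ("Z-p--Y",)),       (("Z-p--Y",), ("Z-p", "Y")),
    (("Z-p--Y",), ("Z-p", "Y-p")),
    (("Z-p", "Y-p"), ("Z-p--Y-p",)),   (("Z-p--Y-p",), ("Z-p", "Y-p")),
    (("Z-p--Y-p",), ("Z-p", "Y-pp")),
    (("F2", "Y-pp"), ("F2--Y-pp",)),   (("F2--Y-pp",), ("F2", "Y-pp")),
    (("F2--Y-pp",), ("F2", "Y-p")),
    (("F2", "Y-p"), ("F2--Y-p",)),     (("F2--Y-p",), ("F2", "Y-p")),
    (("F2--Y-p",), ("F2", "Y")),
    # MKKK layer, from (e)
    (("Z",), ("Z-p",)),                (("Z-p",), ("Z",)),
]

species = sorted({s for lhs, rhs in REACTIONS for s in lhs + rhs})
index = {s: i for i, s in enumerate(species)}

def column(lhs, rhs):
    """Net stoichiometric change of one reaction, as a column vector."""
    col = [0] * len(species)
    for s in lhs:
        col[index[s]] -= 1
    for s in rhs:
        col[index[s]] += 1
    return col

stoich_cols = [column(l, r) for l, r in REACTIONS]

def moiety(base):
    """Count copies of moiety `base` (e.g. "X" or "F3") each species carries."""
    return [sum(1 for part in s.split("--") if part == base or part.startswith(base + "-"))
            for s in species]

# 18 species and 26 irreversible reactions; each moiety total is conserved,
# i.e. each moiety vector is a left null vector of the stoichiometric matrix.
assert (len(species), len(stoich_cols)) == (18, 26)
for base in ("X", "Y", "Z", "F2", "F3"):
    c = moiety(base)
    assert all(sum(ci * gi for ci, gi in zip(c, col)) == 0 for col in stoich_cols)
```

The same bookkeeping applies verbatim to the subnetworks (b)--(d), whose conserved totals are the subsets of these five involving only the species present.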
{\bf (f) MAPK cascade.} Now we modify (e) by adding enzymatic mechanisms (Corollary~\ref{cor:addEnz}) to the reversible reaction (\ref{eq:nete}), which is replaced by the two chains $\text{E$_1$$+$Z}\rightleftharpoons \text{E$_1$--Z}\to \text{E$_1$$+$Z-p}$ and $\text{F$_1$$+$Z-p}\rightleftharpoons \text{F$_1$--Z-p}\to \text{F$_1$$+$Z}.$ It follows that the new network (Huang-Ferrell MAPK cascade \cite{Huang.1996aa}, reactions below), with mass action kinetics, admits MPSE: \begin{equation} \label{eq:netf} \begin{array}{c} \text{E$_1$$\,\,+\,\,$Z}\,\,\rightleftharpoons\,\, \text{E$_1$--Z}\,\,\to\,\, \text{E$_1$$\,\,+\,\,$Z-p},\,\,\,\text{F$_1$$\,\,+\,\,$Z-p}\,\,\rightleftharpoons\,\, \text{F$_1$--Z-p}\,\,\to\,\, \text{F$_1$$\,\,+\,\,$Z}\\ \text{Z-p$\,\,+\,\,$Y}\,\,\rightleftharpoons\,\, \text{Z-p--Y}\,\,\to\,\, \text{Z-p$\,\,+\,\,$Y-p} \,\,\rightleftharpoons\,\, \text{Z-p--Y-p}\,\,\to\,\, \text{Z-p$\,\,+\,\,$Y-pp} \\ \text{F$_2$$\,\,+\,\,$Y-pp}\,\,\rightleftharpoons\,\, \text{F$_2$--Y-pp}\,\,\to\,\, \text{F$_2$$\,\,+\,\,$Y-p} \,\,\rightleftharpoons\,\, \text{F$_2$--Y-p}\,\,\to\,\, \text{F$_2$$\,\,+\,\,$Y} \\ \text{Y-pp$\,\,+\,\,$X}\,\,\rightleftharpoons\,\, \text{Y-pp--X} \,\,\to\,\,\text{Y-pp$\,\,+\,\,$X-p}\,\,\rightleftharpoons\,\,\text{Y-pp--X-p}\,\,\to\,\,\text{Y-pp$\,\,+\,\,$X-pp}\\ \text{F$_3$$\,\,+\,\,$X-pp}\,\,\rightleftharpoons\,\, \text{F$_3$--X-pp}\,\,\to\,\, \text{F$_3$$\,\,+\,\,$X-p} \,\,\rightleftharpoons\,\, \text{F$_3$--X-p}\,\,\to\,\,\text{F$_3$$\,\,+\,\,$X} \end{array} \end{equation} {\bf (g) MAPK cascade with negative feedback.} Finally, we add a negative feedback loop from X-pp as in \cite{Kholodenko.2000aa}. 
In that paper, the model describes the feedback as noncompetitive inhibition of Z phosphorylation, whose mechanism can be unpacked as (see, for example, \cite{Marangoni.2003aa}): \begin{eqnarray}\label{eq:netg} \label{eq:nf1} &\text{E}_1\,\,+\,\,\text{X-pp}\,\,\rightleftharpoons\,\, \text{E}_1\text{--X-pp}\\ \label{eq:nf2} &\text{E}_1\text{--X-pp$\,\,+\,\,$Z}\,\,\rightleftharpoons\,\, \text{E}_1\text{--X-pp--Z}\\ \label{eq:nf3} &\text{E}_1\text{--X-pp--Z}\,\,\rightleftharpoons\,\, \text{E}_1\text{--Z$\,\,+\,\,$X-pp}. \end{eqnarray} These reactions, together with those of (\ref{eq:netf}), make up network (g). First we add the reversible reactions (\ref{eq:nf1}) and (\ref{eq:nf2}), involving the new species E$_1$--X-pp and E$_1$--X-pp--Z, to the network (\ref{eq:netf}). The nondegeneracy condition is straightforward to check, and by Theorem~\ref{thmblockadd} the resulting network ${\cal R}_1$ admits MPSE. Next, let $w_1, w_2, w_3$ and $w_4$ denote the reaction vectors of (\ref{eq:nf1}), (\ref{eq:nf2}), (\ref{eq:nf3}) and E$_1$+Z $\rightleftharpoons$ $\text{E}_1\text{--Z}$, and note that $w_3=-w_1-w_2+w_4,$ making (\ref{eq:nf3}) a dependent reaction for ${\cal R}_1$. By Theorem~\ref{thmnewdepreac} (and Remark~\ref{newdeprev}), we may add (\ref{eq:nf3}) to ${\cal R}_1$, and thus the resulting network (g), with mass action kinetics, admits MPSE. \section{Proofs of the theorems} \label{secproofs} Some additional notation simplifies the presentation. \begin{notation}[Entrywise product and entrywise functions] Given two matrices $A$ and $B$ with the same dimensions, $A \circ B$ will refer to the entrywise (or Hadamard) product of $A$ and $B$, namely $(A\circ B)_{ij} = A_{ij}B_{ij}$. When we apply functions such as $\ln(\cdot)$ and $\exp(\cdot)$ with a vector or matrix argument, we mean entrywise application. Similarly, if $x = (x_1, \ldots, x_n)^\mathrm{t}$ and $y = (y_1, \ldots, y_n)^\mathrm{t}$, then $x/y$ means $(x_1/y_1, x_2/y_2,\ldots, x_n/y_n)^\mathrm{t}$.
\end{notation} \begin{notation}[Set of integers, vector of ones] We write $\langle n \rangle$ for $\{1, \ldots, n\}$. We write $\mathbf{1}$ for a vector of ones whose length is inferred from the context, and given some parameter $\epsilon$, we write $\bm{\epsilon}$ for $\epsilon \mathbf{1}$. \end{notation} \begin{notation}[Monomials, vector of monomials] Given $x=(x_1,\ldots, x_n)^\mathrm{t}$ and $a = (a_1, \ldots, a_n)$, $x^a$ is an abbreviation for the monomial $\prod_ix_i^{a_i}$. If $A$ is an $m \times n$ matrix with rows $A_1, \ldots, A_m$, then $x^A$ means the vector of monomials $(x^{A_1}, x^{A_2}, \ldots, x^{A_m})^\mathrm{t}$. \end{notation} The following three examples demonstrate how entrywise and monomial notation greatly abbreviate otherwise lengthy calculations. \begin{example}[Rules of exponentiation] Let $x \in \mathbb{R}^m_{\gg 0}$, $A,B \in \mathbb{R}^{n \times m}$ and $C \in \mathbb{R}^{k \times n}$. Let $O$ refer to the $n \times m$ matrix of zeros. Then (i) $x^O = \mathbf{1}$, (ii) $x^{A+B} = x^A \circ x^B$, and (iii) $(x^A)^C = x^{CA}$. \end{example} \begin{example}[Logarithm of monomials] Suppose $x \in \mathbb{R}^m_{\gg 0}$, $y,z \in \mathbb{R}^n_{\gg 0}$ and $A,B \in \mathbb{R}^{m \times n}$. If $w=x \circ y^A \circ z^B$, then $\ln w = \ln x + A\ln y + B \ln z$. \end{example} \begin{example}[Differentiation of monomials] Suppose $k \in \mathbb{R}^m_{\gg 0}$, $x \in \mathbb{R}^n_{\gg 0}$, $A \in \mathbb{R}^{m \times n}$. Let $w \colon \mathbb{R}^n_{\gg 0} \to \mathbb{R}^m_{\gg 0}$ be defined by $w(x) := k \circ x^A$. Then $D_xw = \mathrm{diag}(w)\,A\,\mathrm{diag}(\mathbf{1}/x)$. This expression is familiar as the Jacobian matrix of a rate function of a mass action CRN (\cite{banajiSIAM} for example.) 
\end{example} \begin{notation}[Submatrices and minors of a matrix] Given an $n \times m$ matrix $A$ and (nonempty) sets $\alpha \subseteq \langle n \rangle$ and $\beta \subseteq \langle m \rangle$, define $A(\alpha|\beta)$ to be the submatrix of $A$ with rows from $\alpha$ and columns from $\beta$. If $|\alpha| = |\beta|$, then $A[\alpha|\beta] := \mathrm{det}(A(\alpha|\beta))$. $A(\alpha)$ is shorthand for $A(\alpha|\alpha)$, and $A[\alpha]$ is shorthand for the principal minor $A[\alpha|\alpha]$. \end{notation} For our purposes here, the implicit function theorem (IFT) takes the form: \begin{lemma1}[Implicit Function Theorem] \label{lemIFT} Let $X \subseteq \mathbb{R}^n \times \mathbb{R}^m \simeq \mathbb{R}^{n+m}$ be open and $F\colon X \to \mathbb{R}^n$ be $C^1$. Suppose $F(a,b)=0$ for some $(a,b) \in X$ and the Jacobian matrix $D_1F(a,b)$ (namely with respect to the first variables) is nonsingular. Then there exist $U \subseteq \mathbb{R}^m$, $V \subseteq \mathbb{R}^n$ both open with $(a,b) \in W:=V\times U \subseteq X$, and a $C^1$ function $\phi \colon U \to V$ satisfying $\phi(b) = a$, and \[ \{(x,y) \in W\,|\, F(x,y)=0\} = \{(\phi(y),y)\,|\, y \in U\}\,, \] namely the zero set of $F$ in $W$ is precisely the graph of $\phi$. \end{lemma1} \begin{proof} See, for example, Theorem 9.28 in \cite{rudin}. \end{proof} Application of the IFT will frequently take the following form: \begin{cor}[The IFT and reduced Jacobian matrices]\label{corift}Let $X\subseteq \mathbb{R}^n$ be open and let $S \subseteq \mathbb{R}^n$ be a $k$-dimensional linear subspace of $\mathbb{R}^n$. Let $\epsilon_1>0$ and consider a $C^1$ function $F\colon X \times (-\epsilon_1,\epsilon_1) \to \mathbb{R}^n$ s.t. for each fixed $\epsilon$, $F_\epsilon := F(\cdot, \epsilon)$ satisfies $\mathrm{im}\,F_\epsilon \subseteq S$. Let $p \in X$ be a nondegenerate zero of $F_0$ w.r.t. $S$, namely, $F_0(p) = 0$ and $J_SF_0(p) \neq 0$. 
Then there exists $0< \epsilon_0\leq\epsilon_1$ and a $C^1$ curve $\phi\colon(-\epsilon_0,\epsilon_0) \to p+S$ with $\lim_{\epsilon\to 0} \phi(\epsilon) = p$ and $F_\epsilon(\phi(\epsilon)) = 0$ for $\epsilon \in (-\epsilon_0, \epsilon_0)$. Moreover, $\phi(\epsilon)$ is an isolated zero of $F_\epsilon$ on $p+S$. \end{cor} \begin{proof} As in Definition~\ref{notationreddet}, let $M$ be a matrix whose columns are a basis for $S$, and define $\hat{F}_\epsilon\colon X \to \mathbb{R}^k$ via $F_\epsilon(x) = M\hat{F}_\epsilon(x)$. Let $Y = \{y \in \mathbb{R}^k \,|\, p+My \in X\}$. Note that $Y$ is open and includes the origin. Define $G \colon Y\times (-\epsilon_1, \epsilon_1) \to \mathbb{R}^k$ by $G(y,\epsilon) = \hat{F}_\epsilon(p+My)$. Note that (fixing $\epsilon$) $D_MF_\epsilon(x) = (D\hat{F}_\epsilon(x))M = D_yG(y, \epsilon)$ (where $x = p+My$), namely the reduced Jacobian matrix of $F_\epsilon$ w.r.t. $M$ is just the Jacobian matrix of $G$ w.r.t. its first argument. As $p$ is a nondegenerate zero of $F_0$, we have $G(0,0) = F_0(p) =0$ and $D_yG(0,0) = D_MF_0(p)$ is nonsingular. We may then apply the IFT to $G$ giving an open neighbourhood $V$ of $0$ in $\mathbb{R}^k$, some $\epsilon_0>0$, and a function $\hat\phi\colon (-\epsilon_0,\epsilon_0) \to \mathbb{R}^k$ s.t. \[ \{(y,\epsilon) \in V \times (-\epsilon_0,\epsilon_0)\,|\, G(y,\epsilon)=0\} = \{(\hat\phi(\epsilon),\epsilon)\,|\, \epsilon \in (-\epsilon_0,\epsilon_0)\}\,. \] The result follows with $\phi = p+M\hat\phi$. \end{proof} \begin{lemma1}[Matrices with Hurwitz diagonal blocks] \label{lemHurwitz} Let $\epsilon \in (0, \epsilon_0)$ for some $\epsilon_0 > 0$ and suppose we have an $\epsilon$-dependent family of matrices partitioned into four blocks \[ M(\epsilon) = \left(\begin{array}{cc}A(\epsilon)&B(\epsilon)\\C(\epsilon)&D(\epsilon)\end{array}\right)\,, \] with $A$ and $D$ square. 
Suppose further that: (i) $A_0:=\lim_{\epsilon \to 0+}A(\epsilon)$ is defined and Hurwitz; (ii) $\lim_{\epsilon \to 0+}\epsilon B(\epsilon)$ and $\lim_{\epsilon \to 0+}\epsilon C(\epsilon)$ are zero matrices; (iii) $D_0 := \lim_{\epsilon \to 0+}\epsilon D(\epsilon)$ is defined and Hurwitz. Then there exists $\epsilon' > 0$ s.t. $M(\epsilon)$ is Hurwitz for all $\epsilon \in (0, \epsilon')$. \end{lemma1} \begin{proof} Let $A$ be $n \times n$ and $D$ be $m \times m$. We calculate \begin{equation} \label{eqspeca} \lim_{\epsilon \to 0+}\mathrm{det}(\epsilon M(\epsilon)-\lambda I) = \left|\begin{array}{cc}-\lambda I&0\\0&D_0-\lambda I\end{array}\right| = (-\lambda)^n\mathrm{det}(D_0-\lambda I)\,. \end{equation} By the continuous dependence of eigenvalues of a matrix on its entries, (\ref{eqspeca}) tells us that $m$ eigenvalues of $\epsilon M(\epsilon)$ approach the eigenvalues of $D_0$, and so there exist $\theta > 0$ and $\epsilon_1>0$ s.t., for $0 < \epsilon < \epsilon_1$, $m$ eigenvalues of $\epsilon M(\epsilon)$ lie in $\mathbb{C}_-\backslash B_\theta$, where $B_\theta := \{z\in\mathbb{C}\,\colon\, |z| \leq \theta\}$. On the other hand, expanding $\mathrm{det}(\epsilon M(\epsilon) - \epsilon \lambda I)$ along the top $n$ rows gives: \begin{equation} \label{eqspecb} \epsilon^{n+m}\mathrm{det}(M(\epsilon)-\lambda I) = \mathrm{det}(\epsilon M(\epsilon) - \epsilon \lambda I) = \epsilon^nG_\epsilon(\lambda)\,, \end{equation} where $G_\epsilon(\lambda) := \mathrm{det}(\epsilon D(\epsilon))\mathrm{det}\left(A(\epsilon)-\lambda I\right)+O(\epsilon)$. Namely, eigenvalues of $M(\epsilon)$ are the zeros of $G_\epsilon(\lambda)$. But $G_0(\lambda):=\lim_{\epsilon \to 0+}G_\epsilon(\lambda) = \mathrm{det}D_0\,\mathrm{det}\left(A_0-\lambda I\right)$, so (since $\mathrm{det}\,D_0 \neq 0$) the zeros of $G_0$ are the eigenvalues of $A_0$. Consider any contour $C$ in $\mathbb{C}_-$ with the spectrum of $A_0$ in its interior.
Choose $\epsilon_2>0$ such that $0<\epsilon < \epsilon_2$ implies that $|G_\epsilon - G_0|<|G_0|$ on $C$, and $\sup_{z \in \mathrm{int}\,C}\epsilon |z| < \theta$. Then by Rouch\'e's theorem (Theorem 7.7 in \cite{Priestley}, for example), $G_\epsilon$ has the same number of zeros inside $C$ as $G_0$. Thus (\ref{eqspecb}) tells us that, for $0<\epsilon < \epsilon_2$, $n$ eigenvalues of $M(\epsilon)$ lie inside $C$ and hence in $\mathbb{C}_-$ (in fact, it is easy to refine the argument to see that they approach the spectrum of $A_0$ as $\epsilon \to 0+$). The corresponding $n$ eigenvalues of $\epsilon M(\epsilon)$ lie in $\mathbb{C}_- \cap B_\theta$. In summary, for $0 < \epsilon < \epsilon':=\min\{\epsilon_1,\epsilon_2\}$, $\epsilon M(\epsilon)$ has $m$ eigenvalues in $\mathbb{C}_-\backslash B_\theta$ and $n$ eigenvalues in $\mathbb{C}_- \cap B_\theta$, and so is Hurwitz stable. The same clearly holds for $M(\epsilon)$. \end{proof} \subsection{Proofs of Theorems~\ref{thmnewdepreac}--\ref{thmnewwithopen}} In each proof below, the species $X = (X_1, \ldots, X_n)$ have corresponding concentration vector $x = (x_1, \ldots, x_n)^\mathrm{t}$. It is assumed at the outset that $\mathcal{R}$ admits MPNE and the rate vector $v(x) = (v_1(x), v_2(x), \ldots, v_{r_0}(x))^\mathrm{t}$ of $\mathcal{R}$ is fixed such that MPNE occurs. $p$ and $q$ are a pair of positive, compatible, nondegenerate equilibria of $\mathcal{R}$. The stoichiometric matrix of $\mathcal{R}$, $\Gamma$, is an $n \times r_0$ matrix with rank $r$. Columns of the matrix $\Gamma_0$ are a basis for $S:=\mathrm{im}\,\Gamma$, and $Q$ is defined via $\Gamma = \Gamma_0Q$. $S_p:=p+S$ refers to the coset of $S$ containing $p$, and by assumption $S_p = S_q$. Any new species in $\mathcal{R}'$ are termed $Y$ with concentration vector $y$. \begin{myproof}{Theorem~\ref{thmnewdepreac}} Let the new reaction be $a\cdot X \rightarrow a' \cdot X$, so that the new stoichiometric matrix is $\Gamma' = [\Gamma|\alpha]$ where $\alpha = a'-a$.
Define $c$ by $\alpha = \Gamma_0 c$. We give the new reaction mass action kinetics and set its rate constant to be $\epsilon$, so that the evolution of $\mathcal{R}'$ is governed by: \[ \dot x = \Gamma v(x) + \epsilon \alpha x^a =: F(x;\epsilon)\,. \] The reduced Jacobian matrix of $F$ w.r.t. $\Gamma_0$ is \[ D_{\Gamma_0}F(x;\epsilon) = \left(Q\,\,\,c\right)\left(\begin{array}{c}Dv(x)\\\epsilon x^a a^\mathrm{t}\mathrm{diag}(\mathbf{1}/x)\end{array}\right)\Gamma_0 = QDv(x)\Gamma_0 + \epsilon x^a ca^\mathrm{t}\mathrm{diag}(\mathbf{1}/x) \Gamma_0\,. \] So $D_{\Gamma_0}F(p;0) = QDv(p)\Gamma_0$ is nonsingular as $p$ is a nondegenerate equilibrium of $\mathcal{R}$. Consequently, by Corollary~\ref{corift}, there exists, for each sufficiently small $\epsilon > 0$, an equilibrium $p_\epsilon$ on $S_p$ such that $\lim_{\epsilon \to 0+}p_\epsilon = p$. By continuity of $D_{\Gamma_0}F$, $p_\epsilon$ is nondegenerate for sufficiently small $\epsilon > 0$, and linearly stable for $\mathcal{R}'$ if $p$ was linearly stable for $\mathcal{R}$. The same arguments apply to give a nondegenerate equilibrium $q_\epsilon$ on $S_p$ such that $\lim_{\epsilon \to 0+}q_\epsilon = q$. As $p_\epsilon$ and $q_\epsilon$ are distinct for small $\epsilon>0$, $\mathcal{R}'$ displays MPNE, and MPSE if $p$ and $q$ were linearly stable. \end{myproof} \begin{remark}[Notes on Theorem~\ref{thmnewdepreac}] \label{remnewdepreac} It is clear from the proof that the added reaction in Theorem~\ref{thmnewdepreac} can be given kinetics in any class provided the rate is a $C^1$ function of the concentrations and can be made arbitrarily small on any compact set in $\mathbb{R}^n_{\gg 0}$. The theorem generalises to other structurally stable objects, notably nondegenerate periodic orbits \cite{banajiCRNosci}. \end{remark} \begin{myproof}{Theorem~\ref{thmopenextension}} The stoichiometric matrix of $\mathcal{R}'$ can be taken to be $\Gamma' = [\Gamma|I]$ where $I$ is the $n \times n$ identity matrix.
Let $\hat{x}$ be any point on $S_p$, and let the $i$th inflow-outflow reaction (treated as a single reversible reaction) have mass action kinetics with forward and backwards rate constants $\epsilon \hat{x}_i$ and $\epsilon$ respectively, so that $\epsilon(\hat{x}_i-x_i)$ is the rate of the $i$th added reaction. Evolution of $\mathcal{R}'$ is then governed by the ODEs $\dot x = \Gamma v(x) + \epsilon (\hat{x} - x)$. Let $\Gamma_0' = [\Gamma_0|T]$ where the columns of $T$ are any basis for $S^\perp$. $\Gamma_0'$ is nonsingular and hence its columns are a basis for $\mathbb{R}^n$. Define $z = (\hat{z}, \doublehat{z}) \in \mathbb{R}^{r} \times \mathbb{R}^{n-r}$ by $x = \hat{x} + \Gamma_0'z$. Define $z_p$ and $z_q$ by $p = \hat{x} + \Gamma_0'z_p$ and $q = \hat{x} + \Gamma_0'z_q$. Note that $\Gamma = \Gamma_0'\left(\begin{array}{c}Q\\0\end{array}\right)$. In this coordinate system equilibria of $\mathcal{R}'$ occur when \[ \Gamma v(\hat{x} + \Gamma_0'z) - \epsilon \Gamma_0'z=0\,, \quad \mbox{or equivalently,} \quad F(\hat{z}, \doublehat{z};\epsilon) := \left(\begin{array}{c}Q\\0\end{array}\right) v(\hat{x} + \Gamma_0'z) - \epsilon \left(\begin{array}{c}\hat{z}\\\doublehat{z}\end{array}\right) =0, \] (as $\Gamma_0'$ is nonsingular). For $\epsilon > 0$, $F$ has the same zeros as \[ G(\hat{z}, \doublehat{z};\epsilon):=\left(\begin{array}{c}Q\\0\end{array}\right) v(\hat{x} + \Gamma_0'z) - \left(\begin{array}{c}\epsilon\hat{z}\\\doublehat{z}\end{array}\right)\,. \] Moreover $G(z_p;0) = G(z_q;0)=0$ (as $\doublehat{z} = 0$ at $z = z_p$, $\Gamma v(p) = 0 \Leftrightarrow Qv(p) = 0$, and $\Gamma v(q) = 0 \Leftrightarrow Qv(q) = 0$). Differentiating gives \[ DG(z_p;0) = \left(\begin{array}{cc}QDv(p)\Gamma_0&QDv(p)T\\0&-I\end{array}\right)\,, \] which is nonsingular as $QDv(p)\Gamma_0$ is. By the IFT there exists, for sufficiently small $\epsilon > 0$, a $C^1$ curve $\epsilon \mapsto z_{p,\epsilon}$ of zeros of $G$ satisfying $\lim_{\epsilon \to 0+}z_{p,\epsilon} = z_p$. 
The same argument holds to give a $C^1$ curve $z_{q,\epsilon}$ of zeros of $G$ satisfying $\lim_{\epsilon \to 0+}z_{q,\epsilon} = z_q$. By continuity of the determinant, they are nondegenerate zeros of $G$ for sufficiently small $\epsilon>0$, and the same holds for $F$ as a quick calculation reveals that $\mathrm{det}\,DF(z;\epsilon) = \epsilon^{n-r}\mathrm{det}\,DG(z;\epsilon)$. Since $0 \ll p \neq q \gg 0$, $p_\epsilon := \hat{x} + \Gamma_0'z_{p,\epsilon}$ and $q_\epsilon := \hat{x} + \Gamma_0'z_{q,\epsilon}$ are positive and distinct for sufficiently small $\epsilon$. Thus $\mathcal{R}'$ admits MPNE. The spectrum of $DF(z_{p,\epsilon};\epsilon)$ is precisely $-\epsilon$ with multiplicity $n-r$ with the remaining $r$ eigenvalues approaching the spectrum of $Q Dv(p)\Gamma_0$ as $\epsilon \to 0+$. Thus if $p$ is a linearly stable equilibrium of $\mathcal{R}$, then $p_\epsilon$ is a linearly stable equilibrium of $\mathcal{R}'$ for sufficiently small $\epsilon>0$, and the same holds for $q$. In other words, if $\mathcal{R}$ admits MPSE, then so does $\mathcal{R}'$. \end{myproof} \begin{remark}[Geometric interpretation of Theorem~\ref{thmopenextension}] Inflows and outflows were chosen to guarantee that $S_p$ remained locally invariant for $\mathcal{R}'$ and that for sufficiently small $\epsilon > 0$, the vector field of $\mathcal{R}'$ restricted to $S_p$ was $\epsilon$-close to that of $\mathcal{R}$ restricted to $S_p$ in a natural sense. Nondegeneracy of $p$ and $q$ for $\mathcal{R}$ ensured existence and nondegeneracy w.r.t. $S_p$ of $p_\epsilon$ and $q_\epsilon$ for sufficiently small $\epsilon >0$. Moreover the asymptotic stability of $S_p$ for $\epsilon > 0$ guaranteed nondegeneracy of $p_\epsilon$ and $q_\epsilon$ w.r.t. directions transverse to $S_p$. \end{remark} \begin{myproof}{Theorem~\ref{thmtrivial}} Let $\mathcal{R}'$ have reaction rates $w(x,y)$. 
Choose $w$ so that $w_j(x, y) = y^{s_j}v_j(x)$ where $s_j$ is the stoichiometry of $Y$ on the left of reaction $j$. We see that \[ \Gamma' = \left(\begin{array}{c}\Gamma\\0\end{array}\right) \quad \mbox{and} \quad \Gamma_0' = \left(\begin{array}{c}\Gamma_0\\0\end{array}\right) \] are the stoichiometric matrix of $\mathcal{R}'$ and a matrix whose columns are a basis for the stoichiometric subspace of $\mathcal{R}'$ respectively. $\Gamma' = \Gamma_0'Q$ and the evolution of $\mathcal{R}'$ is governed by \[ \left(\begin{array}{c}\dot x\\ \dot y\end{array}\right) = \Gamma'w(x,y) =: F(x,y). \] Since $w(p,1) = v(p)$ and $\Gamma v(p) = 0$, $\Gamma'w(p,1) = 0$. So $(p,1)$ is a positive equilibrium of $\mathcal{R}'$, and likewise for $(q,1)$. Moreover these equilibria are compatible as $(p,1) - (q,1) = (p-q,0) \in \mathrm{im}\,\Gamma \times \{0\} = \mathrm{im}\,\Gamma'$. Nondegeneracy is easy to confirm as \begin{equation} \label{eqtriv} D_{\Gamma_0'}F(p,1) = Q\,Dw(p,1)\,\Gamma_0' = Q\,(Dv(p), D_yw(p,1))\left(\begin{array}{c}\Gamma_0\\0\end{array}\right) = QDv(p)\Gamma_0\,. \end{equation} Recall that nondegeneracy of $p$ for $\mathcal{R}$ is equivalent to nonsingularity of $QDv(p)\Gamma_0$, and so $(p,1)$ is a nondegenerate equilibrium of $\mathcal{R}'$. A similar argument holds for $(q,1)$, and so $\mathcal{R}'$ displays MPNE. If $p$ is linearly stable w.r.t. $\mathrm{im}\,\Gamma$ then, by (\ref{eqtriv}), $(p,1)$ is linearly stable w.r.t. $\mathrm{im}\,\Gamma'$, and the same holds for $q$ and $(q,1)$. Thus if $\mathcal{R}$ admits MPSE then so does $\mathcal{R}'$. \end{myproof} \begin{remark}[Notes on Theorem~\ref{thmtrivial}] \label{remkintriv} In Theorem~\ref{thmtrivial}, the vector field of $\mathcal{R}'$ restricted to $\{(x,y): y = 1\}$ is precisely that of $\mathcal{R}$, and so not just MPNE, but all dynamical behaviours of $\mathcal{R}$ are reproduced on this set. 
This can be achieved with any class of kinetics where we can ensure that $w(x,1) = v(x)$ including, of course, mass action (see the discussion of ``species-extensions'' in \cite{banajiCRNosci}). \end{remark} \begin{myproof}{Theorem~\ref{thmnewwithopen}} Let $s_i$ be the net stoichiometry change of $Y$ in the $i$th reaction and $s := (s_1, s_2, \ldots, s_{r_0})$. Give the new reaction $0 \rightleftharpoons Y$ mass action kinetics with both rate constants set to be $\frac{1}{\epsilon}$. The new rates $w$ for the other reactions are set as in Theorem~\ref{thmtrivial} (consistent with, but not assuming, mass action kinetics for $\mathcal{R}$ and $\mathcal{R}'$), so that $w(x,y)$ satisfies $w(x,1) = v(x)$, and $D_xw(x, 1) = Dv(x)$. Define \[ \Gamma_0' := \left(\begin{array}{cc}\Gamma_0&0\\0&1\end{array}\right), \quad Q' := \left(\begin{array}{cc}Q&0\\\epsilon s & 1\end{array}\right)\,, \quad Q'_1 := \left(\begin{array}{cc}Q&0\\s & 1\end{array}\right)\,. \] $\Gamma' = \Gamma_0'Q'_1$ is the stoichiometric matrix of $\mathcal{R}'$. Let $S':=\mathrm{im}\,\Gamma' (=\mathrm{im}\,\Gamma_0')$. $\mathcal{R}'$ gives rise to the following singularly perturbed system: \[ \left(\begin{array}{c}\dot x\\\dot y\end{array}\right) = \left(\begin{array}{cc}\Gamma&0\\s&1\end{array}\right)\left(\begin{array}{c} w(x,y)\\\frac{1}{\epsilon}(1-y)\end{array}\right) = \Gamma_0' Q'_1\left(\begin{array}{c} w(x,y)\\\frac{1}{\epsilon}(1-y)\end{array}\right) =: F(x,y,\epsilon)\,. \] For $\epsilon > 0$, $F(x,y;\epsilon)$ has the same zeros as: \[ G(x,y;\epsilon):= \left(\begin{array}{cc}I&0\\0&\epsilon\end{array}\right)F(x,y;\epsilon) = \Gamma_0'Q'\left(\begin{array}{c} w(x,y)\\1-y\end{array}\right) = \left(\begin{array}{cc}\Gamma&0\\\epsilon s&1\end{array}\right)\left(\begin{array}{c} w(x,y)\\1-y\end{array}\right)\,. 
\] Moreover $G(p,1;0) = G(q,1;0) = 0$ and we can calculate: \begin{eqnarray*} D_{\Gamma_0'}G(x,y,\epsilon) &=& \left(\begin{array}{cc}QD_xw(x,y)\Gamma_0&QD_yw(x,y)\\\epsilon s D_xw(x,y)\Gamma_0 & \epsilon s D_yw(x,y)-1\end{array}\right)\,. \end{eqnarray*} Using the fact that $D_xw(p,1) = Dv(p)$, we calculate $J_{S'}G(p,1,0) = -\mathrm{det}(QDv(p)\Gamma_0) \neq 0$. Thus Corollary~\ref{corift} gives, for sufficiently small $\epsilon > 0$, a $C^1$ curve $p_\epsilon := (x(\epsilon), y(\epsilon))$ of zeros of $G$ on $(p,1)+\mathrm{im}\Gamma'$, such that $\lim_{\epsilon \to 0+}(x(\epsilon), y(\epsilon)) = (p,1)$. Positivity of $p_\epsilon$ for sufficiently small $\epsilon$ is immediate as $(p,1) \gg 0$. Nondegeneracy of $p_\epsilon$ for sufficiently small $\epsilon > 0$ follows from continuity of $J_{S'}G(x(\epsilon),y(\epsilon);\epsilon)$ and the fact that $J_{S'}F(x,y;\epsilon) = \frac{1}{\epsilon}\,J_{S'}G(x,y;\epsilon)$. In a similar way we have, for sufficiently small $\epsilon > 0$, a curve $q_\epsilon$ of nondegenerate positive equilibria of $\mathcal{R}'$ on $(q,1)+S'$, and such that $\lim_{\epsilon \to 0+}q_\epsilon = (q,1)$. As $(p,1)-(q,1) = (p-q,0) \in \mathrm{im}\,\Gamma'$, $p_\epsilon$ and $q_\epsilon$ are compatible equilibria of $\mathcal{R}'$. Thus $\mathcal{R}'$ admits MPNE. If $p$ is linearly stable for $\mathcal{R}$, then $p_\epsilon$ is linearly stable for $\mathcal{R}'$ for sufficiently small $\epsilon > 0$. Define \[ M(\epsilon) :=D_{\Gamma_0'}F(x(\epsilon),y(\epsilon),\epsilon) = \left(\begin{array}{cc}QD_xw(x(\epsilon),y(\epsilon))\Gamma_0&QD_yw(x(\epsilon),y(\epsilon))\\s D_xw(x(\epsilon),y(\epsilon))\Gamma_0 & s D_yw(x(\epsilon),y(\epsilon))-\frac{1}{\epsilon}\end{array}\right)\,. 
\] We can check quickly that if $p$ is linearly stable for $\mathcal{R}$, namely $\lim_{\epsilon \to 0+} QD_xw(x(\epsilon),y(\epsilon))\Gamma_0 = QD_xw(p,1)\Gamma_0 = QDv(p)\Gamma_0$ is Hurwitz, then the hypotheses of Lemma~\ref{lemHurwitz} are satisfied, and for sufficiently small $\epsilon >0$ all eigenvalues of $M(\epsilon)$ lie in $\mathbb{C}_{-}$. A similar argument applies to $q_\epsilon$ and so, if $\mathcal{R}$ admits MPSE, then so does $\mathcal{R}'$. \end{myproof} \begin{remark}[Kinetic assumptions and extensions in Theorem~\ref{thmnewwithopen}] \label{remnewwithopen} The kinetic assumptions in Theorem~\ref{thmnewwithopen} can be greatly weakened as seen in the analogous result on periodic orbits in \cite{banajiCRNosci}. \end{remark} \subsection{Proofs of Theorems~\ref{thmblockadd}~and~\ref{thmintermediates}} We begin with a lemma useful in both proofs: \begin{lemma1} \label{lemnondegen} Let $\Gamma$ be an $n \times r_0$ matrix, $\Gamma_0$ be an $n \times r$ matrix whose columns are a basis for $\mathrm{im}\,\Gamma$, and $Q$ a matrix defined by $\Gamma = \Gamma_0Q$. Let $\alpha$ be an $n \times m$ matrix, and $\beta$ a $k \times m$ matrix with rank $m$. Let $v\colon\mathbb{R}^n_{\gg 0} \to \mathbb{R}^{r_0}$ and $f\colon \mathbb{R}^n_{\gg 0} \times \mathbb{R}^k \times \mathbb{R}\to \mathbb{R}^m$ be some $C^1$ functions. Define $\Gamma'$ and $F\colon \mathbb{R}^n_{\gg 0} \times \mathbb{R}^k \times \mathbb{R}\to \mathbb{R}^n \times \mathbb{R}^k$ by \[ \Gamma' := \left(\begin{array}{cc}\Gamma&\alpha\\0&\beta\end{array}\right)\,\quad \mbox{and}\quad F(x, y;\epsilon) := \Gamma'\left(\begin{array}{c}v(x)\\f(x,y;\epsilon)\end{array}\right)\,. \] Clearly $\mathrm{im}\,F \subseteq \mathrm{im}\,\Gamma'$. 
Suppose there is a $C^1$ curve $(0, \epsilon_0) \ni \epsilon \mapsto p_\epsilon := (x_\epsilon, y_\epsilon) \in \mathbb{R}^n_{\gg 0} \times \mathbb{R}^k_{\gg 0}$ satisfying $\lim_{\epsilon \to 0+}(x_\epsilon, y_\epsilon) = (p,\underline{y})$ for some $p \in \mathbb{R}^n_{\gg 0}$, $\underline{y} \in \mathbb{R}^k_{\geq 0}$. Define $A_\epsilon := \epsilon D_xf(x_\epsilon, y_\epsilon; \epsilon)$ and $B_\epsilon := \epsilon D_yf(x_\epsilon, y_\epsilon; \epsilon)$. Assume that $\lim_{\epsilon \to 0+}A_\epsilon = 0$, and that $N_0:=\lim_{\epsilon\to 0+}B_\epsilon\beta$ is defined. \begin{enumerate}[align=left,leftmargin=*] \item[(a)] Suppose that (i) $J_{\mathrm{im}\,\Gamma}\Gamma v(p) \neq 0$, and (ii) $N_0$ is nonsingular. Then there exists $\epsilon'>0$ s.t., for $\epsilon \in (0,\epsilon')$, $J_{\mathrm{im}\,\Gamma'}F(x_\epsilon, y_\epsilon;\epsilon) \neq 0$. \item[(b)] Suppose that (i) $D_{\mathrm{im}\,\Gamma}\Gamma v(p)$ is Hurwitz stable, and (ii) $N_0$ is Hurwitz stable. Then there exists $\epsilon'>0$ s.t., for $\epsilon \in (0,\epsilon')$, $D_{\mathrm{im}\,\Gamma'}F(x_\epsilon, y_\epsilon;\epsilon)$ is Hurwitz stable. \end{enumerate} \end{lemma1} \begin{proof} Define \[ \Gamma_0' := \left(\begin{array}{cc}\Gamma_0&\alpha\\0&\beta\end{array}\right), \quad Q' := \left(\begin{array}{cc}Q&0\\0&I\end{array}\right) \] and note that $\Gamma_0'$ has rank $r+m$ and that $\Gamma' = \Gamma_0'Q'$. Let $M_\epsilon := QDv(x_\epsilon)\Gamma_0$. We are interested in the reduced Jacobian matrix: \[ D_{\Gamma_0'}F(x_\epsilon,y_\epsilon;\epsilon) = Q'\left(\begin{array}{ccc}Dv(x_\epsilon)&0\\\frac{1}{\epsilon}A_\epsilon&\frac{1}{\epsilon}B_\epsilon\end{array}\right)\Gamma_0' = \left(\begin{array}{cc}M_\epsilon&QDv(x_\epsilon)\alpha\\\frac{1}{\epsilon}A_\epsilon\Gamma_0&\frac{1}{\epsilon}A_\epsilon\alpha + \frac{1}{\epsilon}B_\epsilon\beta\end{array}\right)\,, \] which is a representative of $D_{\mathrm{im}\,\Gamma'}F(x_\epsilon, y_\epsilon;\epsilon)$. 
\begin{enumerate}[align=left,leftmargin=*] \item[(a)] If $M_0 := QDv(p)\Gamma_0$ is nonsingular then, as $v$ is $C^1$, there exists $\epsilon_1 > 0$ s.t. $M_\epsilon$ is nonsingular for $\epsilon \in (0,\epsilon_1)$. Then as $\epsilon \to 0+$, by the Schur determinant formula (p46 of \cite{gantmacher}, for example): \[ \epsilon^mJ_{\Gamma_0'}F(x_\epsilon,y_\epsilon; \epsilon) = \mathrm{det}\,\left[A_\epsilon\alpha + B_\epsilon \beta- A_\epsilon\Gamma_0(M_\epsilon)^{-1}QDv(x_\epsilon)\alpha\right]\,\mathrm{det}(M_\epsilon) \to \mathrm{det}\,N_0\mathrm{det}\,M_0 \neq 0, \] as $N_0$ is nonsingular. Consequently, there exists $\epsilon' \leq \min\{\epsilon_0, \epsilon_1\}$ s.t. $J_{\mathrm{im}\Gamma'}F(x_\epsilon,y_\epsilon; \epsilon) \neq 0$ for $0<\epsilon < \epsilon'$. \item[(b)] If $N_0$ and $QDv(p)\Gamma_0$ are Hurwitz stable, then the assumptions of Lemma~\ref{lemHurwitz} are satisfied, and $D_{\mathrm{im}\Gamma'}F(x_\epsilon,y_\epsilon; \epsilon)$ is Hurwitz stable for sufficiently small $\epsilon>0$. \end{enumerate} This completes the proof of the lemma. \end{proof} \begin{myproof}{Theorem~\ref{thmblockadd}} Let the $m$ added reversible reactions be \[ a_i\cdot X +b_i\cdot Y \rightleftharpoons a_i'\cdot X + b_i'\cdot Y,\quad (i = 1, \ldots, m)\,. \] Define $a = (a_1| a_2|\cdots|a_m)$, with $a'$, $b$ and $b'$ defined similarly. Define $\alpha = a'-a$ and $\beta = b'-b$. $\alpha$ is now an $n \times m$ matrix, and $\beta$ is a $k \times m$ matrix with rank $m$ by assumption, implying $k \geq m$. W.l.o.g., namely by reordering the species of $Y$ if necessary, let the first $m$ rows of $\beta$ form a nonsingular square matrix so that $\beta = \left(\begin{array}{c}\hat{\beta}\\\doublehat{\beta}\end{array}\right)$, where $\hat{\beta}$ is now a nonsingular $m \times m$ matrix, and $\doublehat{\beta}$ is a $(k-m) \times m$ matrix.
We have \[ \Gamma_0' := \left(\begin{array}{cc}\Gamma_0&\alpha\\0&\beta\end{array}\right),\quad Q' := \left(\begin{array}{cc}Q&0\\0&I\end{array}\right), \quad \mbox{and}\quad \Gamma' = \Gamma_0'Q' = \left(\begin{array}{cc}\Gamma&\alpha\\0&\beta\end{array}\right)\,, \] where $\Gamma'$ is the stoichiometric matrix of $\mathcal{R}'$ and the columns of $\Gamma_0'$ are a basis for $S':=\mathrm{im}\,\Gamma'$. Choose mass action kinetics for the added reactions and give the $j$th reaction forward and backward rate constants $k_j$ and $k_{-j}$ respectively. Setting $k_+ := (k_1, k_2, \ldots, k_m)^\mathrm{t}$ and $k_- := (k_{-1}, k_{-2}, \ldots, k_{-m})^\mathrm{t}$, the dynamics of the new system takes the form \[ \left(\begin{array}{c}\dot x\\\dot y\end{array}\right) = \left(\begin{array}{cc}\Gamma&\alpha\\0&\beta\end{array}\right)\left(\begin{array}{c}v(x)\\\hat{f}(x,y)\end{array}\right)\,,\quad \mbox{where}\quad \hat{f}(x,y) := k_+\circ x^{a^\mathrm{t}}\circ y^{b^\mathrm{t}} - k_{-}\circ x^{{a'}^\mathrm{t}}\circ y^{{b'}^\mathrm{t}}\,. \] As $\beta$ has rank $m$ and hence represents an injective linear transformation, positive equilibria of $\mathcal{R}'$ are precisely the solutions of $\Gamma v(x)=0, \hat{f}(x,y)=0$, namely of $\Gamma v(x) = 0, \,\,y^{\beta^\mathrm{t}} = K \circ x^{-\alpha^\mathrm{t}}$, where $K = k_+/k_-$. Write $y = (\hat{y}, \doublehat{y}) \in \mathbb{R}^m \times \mathbb{R}^{k-m}$ so that $y^{\beta^\mathrm{t}} = K \circ x^{-\alpha^\mathrm{t}}$ can be written $\hat{y}^{\hat{\beta}^\mathrm{t}}\circ \doublehat{y}^{\doublehat{\beta}^\mathrm{t}} = K\circ x^{-\alpha^\mathrm{t}}$. Taking logs gives: \[ \hat{\beta}^\mathrm{t}\ln \hat{y} = \ln K -\alpha^\mathrm{t} \ln x - \doublehat{\beta}^\mathrm{t} \ln \doublehat{y}\,. \] We multiply through by $(\hat{\beta}^\mathrm{t})^{-1}$, which exists as $\hat{\beta}$ is nonsingular, and let $\gamma := -(\alpha\,\hat{\beta}^{-1})^\mathrm{t}$, $\delta := -(\doublehat{\beta}\,\hat{\beta}^{-1})^\mathrm{t}$.
Fix $k_+(\epsilon):=\bm{\epsilon}^{-\hat{b}^\mathrm{t}}$ and $k_-(\epsilon):=\bm{\epsilon}^{-{\hat{b}'}^\mathrm{t}}$ where $\hat{b}$ and $\hat{b}'$ refer to the top $m \times m$ submatrices of $b$ and $b'$ respectively. Setting $K=K(\epsilon):=k_+(\epsilon)/k_-(\epsilon) = \bm{\epsilon}^{\hat{\beta}^\mathrm{t}}$ and exponentiating again gives, at equilibrium: \[ \hat{y} = \epsilon x^\gamma\circ \doublehat{y}^\delta\,. \] Define $g \colon \mathbb{R}^n_{\gg 0} \times \mathbb{R}^m \times \mathbb{R}^{k-m}_{\gg 0} \times \mathbb{R} \to \mathbb{R}^m$ by $g(x, \hat{y}, \doublehat{y};\epsilon) = \hat{y} - \epsilon x^\gamma\circ \doublehat{y}^\delta$. With $\epsilon>0$ fixed and rate constants chosen as above, positive equilibria of $\mathcal{R}'$ occur precisely at \[ 0 = G(x, \hat{y}, \doublehat{y};\epsilon) := \left(\begin{array}{cc}\Gamma&\alpha\\0&\hat{\beta}\\0&\doublehat{\beta}\end{array}\right)\left(\begin{array}{c}v(x)\\ g(x, \hat{y}, \doublehat{y};\epsilon)\end{array}\right)\,. \] Note that $(p,0,\mathbf{1};0)$ and $(q,0,\mathbf{1};0)$ are in the domain of $G$ (taken to be that of $g$), which is open, and that $G(p,0,\mathbf{1};0) = G(q,0, \mathbf{1};0)=0$. We compute: \begin{eqnarray*} D_{\Gamma_0'}G(x,\hat{y}, \doublehat{y};\epsilon) &=& \left(\begin{array}{cc}QDv(x)\Gamma_0&QDv(x)\alpha\\D_xg\Gamma_0&D_xg\alpha + D_{\hat{y}}g\hat{\beta}+D_{\hat{\hat{y}}}g\doublehat{\beta}\end{array}\right)\,. \end{eqnarray*} Now $D_{\hat{y}}g$ is the $m \times m$ identity matrix and we confirm that $D_xg(p, 0, \mathbf{1};0) = 0$ and $D_{\hat{\hat{y}}}g(p, 0, \mathbf{1};0) = 0$. So, by the nonsingularity of $QDv(p)\Gamma_0$ and of $\hat{\beta}$, \[ J_{S'}G(p,0, \mathbf{1};0) = \mathrm{det}(D_{\Gamma_0'}G(p,0, \mathbf{1};0)) = \mathrm{det}(QDv(p)\Gamma_0)\mathrm{det}\,\hat{\beta} \neq 0\,. 
\] Corollary~\ref{corift} gives, for sufficiently small $\epsilon>0$, a $C^1$ curve $\epsilon \mapsto p_\epsilon := (x(\epsilon), \hat{y}(\epsilon), \doublehat{y}(\epsilon))$ of zeros of $G$ on $(p,0,\mathbf{1})+S'$ such that $\lim_{\epsilon \to 0+}(x(\epsilon), \hat{y}(\epsilon), \doublehat{y}(\epsilon)) = (p,0,\mathbf{1})$. For sufficiently small $\epsilon > 0$, $x(\epsilon)\gg 0$ (as $x(0) = p \gg 0$) and $\doublehat{y}(\epsilon)\gg 0$ (as $\doublehat{y}(0) = \mathbf{1} \gg 0$), and consequently $\hat{y}(\epsilon) \gg 0$, since \begin{equation} \label{eqyhat} \hat{y}(\epsilon) = \epsilon x(\epsilon)^\gamma\circ \doublehat{y}(\epsilon)^\delta. \end{equation} Thus each $p_\epsilon$ is a positive equilibrium of $\mathcal{R}'$. An identical argument replacing $p$ with $q$ gives a curve of positive equilibria $q_\epsilon$ on $(q,0,\mathbf{1})+S'$ approaching $(q,0,\mathbf{1})$ as $\epsilon \to 0+$. Since $(p,0,\mathbf{1}) - (q, 0, \mathbf{1}) = (p-q, 0, 0) \in S'$, $(p,0,\mathbf{1})$ and $(q, 0, \mathbf{1})$ are compatible, and hence $p_\epsilon$ and $q_\epsilon$ are compatible. We now infer nondegeneracy of $p_\epsilon$ using Lemma~\ref{lemnondegen}(a). Define \[ f(x,y;\epsilon):=k_+(\epsilon)\circ x^{a^\mathrm{t}}\circ y^{b^\mathrm{t}} - k_{-}(\epsilon)\circ x^{{a'}^\mathrm{t}}\circ y^{{b'}^\mathrm{t}} \quad \mbox{and} \quad F(x,y;\epsilon) := \Gamma'\left(\begin{array}{c}v(x)\\f(x,y;\epsilon)\end{array}\right)\,. \] Define $T\colon (0, \epsilon_0) \to \mathbb{R}^m$ by $T(\epsilon) = k_+(\epsilon)\circ x(\epsilon)^{a^\mathrm{t}}\circ y(\epsilon)^{b^\mathrm{t}}$. Then, using (\ref{eqyhat}), we compute that $T_0:=\lim_{\epsilon\to 0+}T(\epsilon) = p^{a^\mathrm{t}+\hat{b}^\mathrm{t}\gamma} \gg 0$. 
Observe that \[ \begin{array}{l} A_\epsilon:=\epsilon D_xf(x(\epsilon), y(\epsilon); \epsilon) = -\epsilon \mathrm{diag}(T(\epsilon))\alpha^\mathrm{t}\mathrm{diag}(\mathbf{1}/x(\epsilon)),\\ B_\epsilon:=\epsilon D_yf(x(\epsilon), y(\epsilon); \epsilon) = -\mathrm{diag}(T(\epsilon))\beta^\mathrm{t}\mathrm{diag}(\bm{\epsilon}/y(\epsilon)). \end{array} \] Set $N_1:=\mathrm{diag}(T_0)$, $C_0 := \mathrm{det}(N_1)$, $N_2:= \lim_{\epsilon \to 0+}\mathrm{diag}(\bm{\epsilon}/y(\epsilon))$ (which, by (\ref{eqyhat}), is defined and nonnegative), and $N_0:=\lim_{\epsilon \to 0+}B_\epsilon\beta =-N_1\beta^\mathrm{t}N_2\beta$. Note that $C_0 > 0$. We compute that, as required in Lemma~\ref{lemnondegen}, $A_\epsilon \to 0 \,\,\mbox{as}\,\, \epsilon \to 0+$ since $\mathrm{diag}(T(\epsilon)) \to N_1$ as $\epsilon \to 0+$ and $\mathrm{diag}(\mathbf{1}/x(\epsilon)) \to \mathrm{diag}(\mathbf{1}/p)$ as $\epsilon\to 0+$. By (\ref{eqyhat}), $K' :=N_2[\langle m \rangle] = p^{-\mathbf{1}^\mathrm{t}\gamma} > 0$, while $N_2[\theta] = 0$ for $\theta \subseteq \langle k \rangle$, $|\theta|=m$, $\theta \neq \langle m \rangle$. Applying the Cauchy-Binet formula (\cite{gantmacher} for example) to $N_0$ gives, \[ \mathrm{det}(N_0) = (-1)^mC_0\sum_{\theta \subseteq \langle k \rangle,\,\, |\theta| = m}N_2[\theta]\beta[\theta|\langle m \rangle]^2 = (-1)^mC_0K'\beta[\langle m \rangle]^2\neq 0, \] and so $N_0$ is nonsingular. All the conditions of Lemma~\ref{lemnondegen}(a) are met, and we conclude that for each sufficiently small $\epsilon>0$, $p_\epsilon$ is a nondegenerate equilibrium of $\mathcal{R}'$. An identical argument applies to $q_\epsilon$. Thus $\mathcal{R}'$ admits MPNE. To see inheritance of MPSE observe that $N_0 = -N_1^{1/2}AA^\mathrm{t}N_1^{-1/2}$, where $A := N_1^{1/2}\beta^\mathrm{t}N_2^{1/2}$. $N_0$ is hence similar to the negative semidefinite matrix $-AA^\mathrm{t}$. As $N_0$ has already been shown to be nonsingular, it must in fact be Hurwitz. 
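The Cauchy-Binet evaluation of $\mathrm{det}(N_0)$ above, and the subsequent similarity to a negative semidefinite matrix, can both be verified numerically on small matrices; the dimensions and entries below are illustrative only.

```python
import numpy as np
from itertools import combinations

# Check det(-N1 beta^t N2 beta) = (-1)^m det(N1) * sum over m-subsets theta
# of N2[theta] * det(beta[theta,:])^2, where N2 is diagonal so N2[theta] is
# the product of its theta-entries (Cauchy-Binet applied to N2^{1/2} beta).
m, k = 2, 4
rng = np.random.default_rng(0)
N1 = np.diag(rng.uniform(0.5, 2.0, size=m))   # plays the role of diag(T0)
N2 = np.diag(rng.uniform(0.0, 1.0, size=k))   # nonnegative diagonal limit
beta = rng.integers(-2, 3, size=(k, m)).astype(float)

N0 = -N1 @ beta.T @ N2 @ beta
lhs = np.linalg.det(N0)
rhs = (-1) ** m * np.linalg.det(N1) * sum(
    np.prod(np.diag(N2)[list(th)]) * np.linalg.det(beta[list(th), :]) ** 2
    for th in combinations(range(k), m)
)
assert np.isclose(lhs, rhs)

# N0 is similar to -A A^t with A = N1^{1/2} beta^t N2^{1/2}, so its
# eigenvalues are real and nonpositive.
assert np.all(np.linalg.eigvals(N0).real <= 1e-9)
```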
If $p$ is linearly stable for $\mathcal{R}$, namely $D_{\mathrm{im}\,\Gamma}\Gamma v(p)$ is Hurwitz, then the conditions of Lemma~\ref{lemnondegen}(b) are met, and we conclude that for each sufficiently small $\epsilon > 0$, $p_\epsilon$ is linearly stable for $\mathcal{R}'$. An identical argument applies to $q_\epsilon$ and so, if $\mathcal{R}$ admits MPSE, then so does $\mathcal{R}'$. \end{myproof} \begin{myproof}{Theorem~\ref{thmintermediates}} The proof is similar to that of Theorem~\ref{thmblockadd}. Recall that we create $\mathcal{R}'$ from $\mathcal{R}$ by replacing each of the $m$ reactions: \[ a_i \cdot X \rightarrow b_i \cdot X\quad \mbox{with a chain}\quad a_i \cdot X \rightarrow c_i \cdot X + \beta_i\cdot Y \rightarrow b_i \cdot X,\,\,(i=1,\ldots,m)\,.\] As we are assuming that $\beta:=(\beta_1|\beta_2|\cdots|\beta_m)$ has rank $m$, as in the proof of Theorem~\ref{thmblockadd} we may write $\beta = \left(\begin{array}{c}\hat{\beta}\\\doublehat{\beta}\end{array}\right)$ where $\hat{\beta}$ is a nonsingular $m \times m$ matrix, and $\doublehat{\beta}$ is a $(k-m) \times m$ matrix. Also w.l.o.g. (namely, by writing a reaction as multiple reactions and/or reordering the reactions of $\mathcal{R}$ as necessary) let the reaction $a_i \cdot X \rightarrow b_i\cdot X$ figure as the $i$th reaction in the original CRN, proceeding with rate $v_i(x)$, and let $\underline{v}(x) = (v_1(x), \ldots, v_m(x))^\mathrm{t}$. Set the rate of each new reaction $a_i \cdot X \rightarrow c_i \cdot X + \beta_i \cdot Y$ to be $v_i(x)$ (this is consistent with mass action kinetics if the original reaction had mass action kinetics) and set the second added reaction in the $i$th chain to have mass action kinetics with rate constant $k_i$ ($i = 1, \ldots, m$). Define $k := (k_1, \ldots, k_m)^\mathrm{t}$ and $c:=(c_1|c_2|\cdots|c_m)$. Note that for $x \gg 0$, $v_i(x) > 0$ under the assumption of positive general kinetics, and in particular $v_i(p) > 0$.
Define $\alpha_i := c_i - b_i$, $(i = 1, \ldots, m)$, $\alpha := (\alpha_1|\alpha_2|\cdots |\alpha_m)$, \[ \hat{f}(x,y) := \underline{v}(x) - k \circ x^{c^\mathrm{t}} \circ y^{\beta^\mathrm{t}},\quad\Gamma_0' := \left(\begin{array}{cc}\Gamma_0&\alpha\\0&\beta\end{array}\right)\quad \mbox{and}\quad Q' := \left(\begin{array}{cc}Q&0\\0&I\end{array}\right) \] so that $\Gamma' = \Gamma_0'Q'$. $\Gamma'$ is the stoichiometric matrix of $\mathcal{R}'$ and the columns of $\Gamma_0'$ are a basis for $S':=\mathrm{im}\,\Gamma'$. The dynamics of $\mathcal{R}'$ is governed by \[ \left(\begin{array}{c}\dot x\\\dot y\end{array}\right) = \Gamma'\left(\begin{array}{c}v(x)\\\hat{f}(x,y)\end{array}\right) = \left(\begin{array}{cc}\Gamma&\alpha\\0&\beta\end{array}\right)\left(\begin{array}{c}v(x)\\\hat{f}(x,y)\end{array}\right)\,. \] Thus, with the kinetic choices made so far, the net effect on the vector field of ``inserting'' the complexes $c_i \cdot X + \beta_i\cdot Y$ ($i=1,\ldots,m$) is the same as adding to $\mathcal{R}$ the $m$ new (pseudo)-reactions with stoichiometric matrix $\left(\begin{array}{c}\alpha\\\beta\end{array}\right)$ and rate vector $\underline{v}(x)-k \circ x^{c^\mathrm{t}}\circ y^{\beta^\mathrm{t}}$. As $\beta$ has rank $m$ (and hence corresponds to an injective linear transformation), positive equilibria of $\mathcal{R}'$ are solutions of $\Gamma v(x) = 0, \,\,\hat{f}(x,y)=0$. Write $y = (\hat{y}, \doublehat{y}) \in \mathbb{R}^m \times \mathbb{R}^{k-m}$. The condition $\hat{f}(x,y)=0$ reads $\underline{v}(x)-k \circ x^{c^\mathrm{t}}\circ \hat{y}^{\hat{\beta}^\mathrm{t}}\circ \doublehat{y}^{\doublehat{\beta}^\mathrm{t}}=0$. We now perform some manipulations similar to those in the proof of Theorem~\ref{thmblockadd}.
Setting $k = k(\epsilon):= \bm{\epsilon}^{-\hat{\beta}^\mathrm{t}}$, and defining $\gamma := -(c(\hat{\beta})^{-1})^\mathrm{t}$, $\delta := -(\doublehat{\beta}(\hat{\beta})^{-1})^\mathrm{t}$, and $V(x) := (\underline{v}(x))^{(\hat{\beta}^{-1})^\mathrm{t}}$, we end up with, at equilibrium, \[ \hat{y} = \epsilon V(x) \circ x^\gamma \circ \doublehat{y}^\delta\,. \] Define $g \colon \mathbb{R}^n_{\gg 0} \times \mathbb{R}^m \times \mathbb{R}^{k-m}_{\gg 0} \times \mathbb{R} \to \mathbb{R}^m$ by $g(x, \hat{y}, \doublehat{y}; \epsilon) = \hat{y} - \epsilon V(x) \circ x^\gamma \circ \doublehat{y}^\delta$. Then, for $\epsilon > 0$ and reaction rates chosen as above, positive equilibria of $\mathcal{R}'$ occur precisely at \[ 0 = G(x, \hat{y}, \doublehat{y};\epsilon) := \left(\begin{array}{cc}\Gamma&\alpha\\0&\hat{\beta}\\0&\doublehat{\beta}\end{array}\right)\left(\begin{array}{c}v(x)\\ g(x, \hat{y}, \doublehat{y};\epsilon)\end{array}\right)\,. \] Note that $(p,0,\mathbf{1};0)$ and $(q,0,\mathbf{1};0)$ are in the domain of $G$ (taken to be that of $g$), which is open, and that $G(p,0,\mathbf{1};0) = G(q,0, \mathbf{1};0)=0$. We compute: \[ D_{\Gamma_0'}G(x,\hat{y}, \doublehat{y};\epsilon) = \left(\begin{array}{cc}QDv(x)\Gamma_0&QDv(x)\alpha\\D_xg\Gamma_0&D_xg\alpha + D_{\hat{y}}g\hat{\beta}+D_{\hat{\hat{y}}}g\doublehat{\beta}\end{array}\right)\,. \] Now $D_{\hat{y}}g = I$, $D_xg(p, 0, \mathbf{1};0) = 0$, $D_{\hat{\hat{y}}}g(p, 0, \mathbf{1};0) = 0$, and so, as $\hat{\beta}$ and $QDv(p)\Gamma_0$ are nonsingular, $J_{S'}G(p,0, \mathbf{1};0) = \mathrm{det}(QDv(p)\Gamma_0)\mathrm{det}(\hat{\beta}) \neq 0$. Corollary~\ref{corift} gives, for sufficiently small $\epsilon>0$, a curve $\epsilon \mapsto p_\epsilon := (x(\epsilon), \hat{y}(\epsilon), \doublehat{y}(\epsilon))$ of zeros of $G$ on $(p,0,\mathbf{1})+S'$ such that $\lim_{\epsilon \to 0+}(x(\epsilon), \hat{y}(\epsilon), \doublehat{y}(\epsilon)) = (p,0,\mathbf{1})$. 
For sufficiently small $\epsilon>0$, $x(\epsilon)\gg 0$, $V(x(\epsilon)) \gg 0$ (as $x(\epsilon) \to p \gg 0$) and $\doublehat{y}(\epsilon)\gg 0$ (as $\doublehat{y}(\epsilon) \to \mathbf{1} \gg 0$), and consequently $\hat{y}(\epsilon) \gg 0$ as \begin{equation} \label{eqyhat1} \hat{y}(\epsilon) = \epsilon V(x(\epsilon)) \circ x(\epsilon)^\gamma \circ \doublehat{y}(\epsilon)^\delta. \end{equation} Thus the curve $p_\epsilon$ consists of positive equilibria of $\mathcal{R}'$. An identical argument replacing $p$ with $q$ gives positive equilibria $q_\epsilon$ on $(q,0,\mathbf{1})+S'$ approaching $(q,0,\mathbf{1})$ as $\epsilon \to 0+$. Since $(p,0,\mathbf{1}) - (q, 0, \mathbf{1}) = (p-q, 0, 0) \in S'$, $p_\epsilon$ and $q_\epsilon$ are compatible. We can now infer nondegeneracy of $p_\epsilon$ and $q_\epsilon$ using Lemma~\ref{lemnondegen}(a). Set \[ f(x,y;\epsilon) := \underline{v}(x) - k(\epsilon) \circ x^{c^\mathrm{t}} \circ y^{\beta^\mathrm{t}} \quad \mbox{and}\quad F(x,y;\epsilon):= \Gamma'\left(\begin{array}{c}v(x)\\f(x,y;\epsilon)\end{array}\right). \] Define \[ \begin{array}{l} A_\epsilon := \epsilon D_xf(x(\epsilon), y(\epsilon);\epsilon)=\epsilon\left[D\underline{v}(x(\epsilon)) - \mathrm{diag}(\underline{v}(x(\epsilon)))c^\mathrm{t}\mathrm{diag}(\mathbf{1}/x(\epsilon))\right]\,,\\ B_\epsilon:=\epsilon D_yf(x(\epsilon), y(\epsilon); \epsilon) = -\mathrm{diag}(\underline{v}(x(\epsilon)))\beta^\mathrm{t}\mathrm{diag}(\bm{\epsilon}/y(\epsilon))\,. \end{array} \] Set $N_1:=\mathrm{diag}(\underline{v}(p))$, $C_0 := \mathrm{det}(N_1)$ and $N_2:= \lim_{\epsilon \to 0+}\mathrm{diag}(\bm{\epsilon}/y(\epsilon))$ (which, by (\ref{eqyhat1}), is defined and nonnegative). Note that $C_0 > 0$. It is clear that $A_\epsilon \to 0$ as $\epsilon \to 0+$, since the quantity in the square brackets approaches the constant matrix $\left[D\underline{v}(p) - N_1c^\mathrm{t}\mathrm{diag}(\mathbf{1}/p)\right]$.
From (\ref{eqyhat1}), $K':=N_2[\langle m \rangle]$ is a positive constant, while $N_2[\theta] = 0$ for $\theta \subseteq \langle k \rangle$, $|\theta|=m$, $\theta \neq \langle m \rangle$. Then $N_0:=\lim_{\epsilon \to 0+}B_\epsilon\beta = -N_1\beta^\mathrm{t}N_2\beta$, and by the Cauchy-Binet formula, \[ \mathrm{det}(N_0) = (-1)^mC_0\sum_{\theta \subseteq \langle k \rangle,\,\, |\theta| = m}N_2[\theta](\beta[\theta|\langle m \rangle])^2 = (-1)^mC_0K'\beta[\langle m \rangle]^2\neq 0, \] and so $N_0$ is nonsingular. All the conditions of Lemma~\ref{lemnondegen}(a) are met, and we conclude that for each sufficiently small $\epsilon>0$, $p_\epsilon$ is a nondegenerate equilibrium of $\mathcal{R}'$. An identical argument applies to $q_\epsilon$. Almost identically to the calculation in the proof of Theorem~\ref{thmblockadd}, $N_0$ can be seen to be Hurwitz. If $p$ is linearly stable for $\mathcal{R}$, namely $D_{\mathrm{im}\,\Gamma}\Gamma v(p)$ is Hurwitz, then by Lemma~\ref{lemnondegen}(b), for each sufficiently small $\epsilon>0$, $p_\epsilon$ is linearly stable with respect to $\mathrm{im}\,\Gamma'$. An identical argument applies to $q_\epsilon$, and so if $\mathcal{R}$ admits MPSE, then so does $\mathcal{R}'$. \end{myproof} \begin{remark}[Kinetic assumptions in Theorems~\ref{thmblockadd}~and~\ref{thmintermediates}] \label{remkin} Examining the proofs of Theorems~\ref{thmblockadd}~and~\ref{thmintermediates} we see that mass action kinetics does not really play a fundamental role. Most important are the equations (\ref{eqyhat}) and (\ref{eqyhat1}), namely that we can sufficiently control the rates of the added reactions to ensure that the $\hat{y}$ values at the new equilibria approach $0$ as some parameter $\epsilon \to 0+$. The results admit generalisation in this direction.
\end{remark} \section{Conclusions} We have begun to describe how a CRN may be enlarged while maintaining the properties of having multiple positive nondegenerate equilibria (MPNE), or multiple positive linearly stable equilibria (MPSE). The modifications in Theorems~\ref{thmnewdepreac}~to~\ref{thmintermediates} collectively define a partial order $\preceq$ on the set of all CRNs. In the biologically motivated example of Section~\ref{secbioexample} we identify an ``atom of MPSE'', namely a CRN which admits MPSE and which is minimal w.r.t. $\preceq$, and show how this atom occurs in various published models, immediately implying MPSE in these models. We suspect that often the small multistationary networks described in \cite{JoshiShiu2016}, minimal w.r.t. the induced subnetwork partial order, are also minimal w.r.t. $\preceq$, and hence form natural building-blocks of CRNs displaying MPNE. This remains to be confirmed. Our approach was local: the main theoretical tool used was the IFT and, informally speaking, we mostly constructed $\mathcal{R}'$ as a {\em perturbation} of $\mathcal{R}$ in some sense. The results presented here certainly do not exhaust the possibilities in this approach. We have chosen not to include a number of partial or isolated results in the same vein as Theorems~\ref{thmblockadd}~and~\ref{thmintermediates}. These theorems themselves can probably be strengthened; for example, with some added effort, reversibility of the added reactions in Theorem~\ref{thmblockadd} can probably be replaced with a weak reversibility requirement. It also seems that insisting that the original CRN should have mass action kinetics may allow modifications which our less restrictive assumptions would not permit because it forces certain relationships to hold between reaction rates: in particular, the ratio of the rates of two reactions with the same source complex would be constant. 
Going beyond the techniques here, it seems likely that algebraic approaches may reveal network modifications which preserve MPE or MPNE, but which could not be predicted using a local approach: some of the modifications in Section~\ref{secextended} and the results in \cite{feliuwiufInterface2013} point in this direction. Ultimately the existence of MPE for mass action systems is about the cardinality of the intersection of certain algebraic varieties: it may be that adding new species or reactions in certain ways can be proved to allow MPE, but not by a local approach. Working towards the ``best'' partial order on CRNs from the point of view of the inheritance of MPNE is a project to be continued.
\section{Introduction} \subsection{Motivation}\label{secmotivation} Originally, super Brownian motion arises as the limit of branching random walks; see \cite{[D93], [CDP01], [P02]}. Recently, it has been shown that many interacting particle systems with very different dynamics, when suitably rescaled, all converge to super Brownian motion. Such examples include the voter model, the contact process, interacting diffusion processes and the spatial Lotka-Volterra model; see \cite{[CDP01],[DP99],[CK03], [CP05],[CP08]}. Donsker's invariance principle is deeply involved in those results; see \cite{[S02]} for an excellent nontechnical introduction. So if we assume that the kernel of the underlying motion has finite variance, super Brownian motion is obtained as the limit process. On the other hand, the general class of stable distributions was introduced and given this name by the famous French mathematician Paul L\'{e}vy. The inspiration for L\'{e}vy was the desire to generalize the Central Limit Theorem, which is the foundation of Donsker's principle. Thus we can expect that if we let the kernel of the underlying motion be in the domain of attraction of a stable law, the limit process could be a super stable process. A motivation for proving such limit theorems is to actually use them in the study of complicated approximating systems. For example, the Lotka-Volterra invariance principle established in \cite{[CP05]} was used to study the coexistence and survival problem of the Lotka-Volterra model; see \cite{[CP07]}. Cox and Perkins \cite{[CP04]} used the voter invariance principle to give a probabilistic proof of the asymptotics for the voter model obtained in \cite{[BG80]}. In this paper, we will show that rescaled stochastic spatial Lotka-Volterra models can converge to super stable processes and also use those limit theorems to get some new results on the asymptotics for the voter model.
Coexistence and survival for the Lotka-Volterra model will be discussed in a future work. \subsection{Our model}\label{ourmodel} A stochastic spatial version of the Lotka-Volterra model was first introduced and studied by Neuhauser and Pacala \cite{[NP99]}. In this paper, we follow the construction of the model suggested by \cite{[CP05]} but we assume that the kernel of the model is in the domain of attraction of a symmetric stable law. We first briefly describe the model. Let $\{p(x,y)\}$ be a random walk kernel on ${\mathbb{Z}}^d$ (the $d$-dimensional integer lattice). Suppose at each site of ${\mathbb Z}^d$ there is a plant of one of two types. We label the two types 0 and 1. At random times plants die and are replaced by new plants. The times and the types depend on the configuration of surrounding plants. We denote by $\xi_t$, an element of $\{0,1\}^{{\mathbb Z}^d}$, the state of the system at time $t$, and $\xi_t(x)$ gives the type of the plant at $x$ at time $t$. To describe the evolution of the system, for $\xi\in\{0,1\}^{{\mathbb Z}^d}$, define \begin{equation} \label{1.1} f_i(x,\xi)=\sum_{y\in {\mathbb Z}^d}p(x,y)1_{\{\xi(y)=i\}},\quad i=0,1. \end{equation} Let $\alpha_0$, $\alpha_1$ be nonnegative parameters. Define the Lotka-Volterra \textit{rate function} $c(x,\xi)$ by \begin{eqnarray*} c(x,\xi)=\left\{\begin{array}{lll}f_1(f_0+\alpha_0f_1)\quad\textrm{if }\xi(x)=0,\\ f_0(f_1+\alpha_1f_0)\quad\textrm{if }\xi(x)=1. \end{array}\right. \end{eqnarray*} The Lotka-Volterra process $\xi_t$ is the unique $\{0,1\}^{{\mathbb Z}^d}$-valued Feller process with rate function $c(x,\xi)$, meaning that the generator of $\xi_t$ is the closure of the operator $\Omega$ $$ \Omega\phi(\xi)=\sum_xc(x,\xi)(\phi(\xi^x)-\phi(\xi)) $$ on the set of functions $\phi:\{0,1\}^{{\mathbb Z}^d}\rightarrow\mathbb R$ depending on only finitely many coordinates, where $\xi^x(y)=\xi(y)$ for $y\neq x$ and $\xi^x(x)=1-\xi(x)$. Note that $f_0+f_1=1$.
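The rate function above lends itself to a simple Gillespie-type simulation. The sketch below is purely illustrative and not part of the construction in the paper: it assumes a nearest-neighbour kernel $p(x,x\pm1)=1/2$ on a small one-dimensional torus, and the lattice size, horizon and seed are arbitrary choices.

```python
import numpy as np

# One Gillespie step of the Lotka-Volterra spin-flip dynamics: a site x
# flips 0 -> 1 at rate f1(f0 + alpha0 f1) and 1 -> 0 at rate
# f0(f1 + alpha1 f0), with f_i the local density of type-i neighbours.
def lv_step(xi, alpha0, alpha1, rng):
    L = len(xi)
    rates = np.empty(L)
    for x in range(L):
        f1 = 0.5 * (xi[(x - 1) % L] + xi[(x + 1) % L])
        f0 = 1.0 - f1
        rates[x] = f1 * (f0 + alpha0 * f1) if xi[x] == 0 else f0 * (f1 + alpha1 * f0)
    total = rates.sum()
    if total == 0.0:                    # absorbed in an all-0 or all-1 state
        return xi, np.inf
    x = rng.choice(L, p=rates / total)  # site of the next flip
    xi = xi.copy()
    xi[x] = 1 - xi[x]
    return xi, rng.exponential(1.0 / total)

rng = np.random.default_rng(1)
xi = rng.integers(0, 2, size=50)
t = 0.0
for _ in range(200):
    xi, dt = lv_step(xi, alpha0=1.0, alpha1=1.0, rng=rng)  # voter-model case
    if not np.isfinite(dt):
        break
    t += dt
assert set(int(v) for v in np.unique(xi)) <= {0, 1}
```

With $\alpha_0=\alpha_1=1$ this reduces to the voter model discussed later.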
The dynamics of $\xi_t$ can now be described as follows: at site $x$ in configuration $\xi$, the coordinate $\xi(x)$ makes transitions \begin{eqnarray*} 0\rightarrow1\quad\quad\textrm{at rate}\quad f_1(f_0+\alpha_0f_1)=f_1+(\alpha_0-1)f_1^2,\cr 1\rightarrow0\quad\quad\textrm{at rate}\quad f_0(f_1+\alpha_1f_0)=f_0+(\alpha_1-1)f_0^2. \end{eqnarray*} These rates are interpreted in \cite{[NP99]} as follows. A plant of type $i$ at site $x$ dies at rate $f_i+\alpha_if_{1-i}$, and is replaced by a plant of type $\xi(y)$ where $y$ is chosen with probability $p(x,y)$. $\alpha_i$ measures the strength of interspecific competition of type $i$ and we set the self-competition parameter equal to one. In \cite{[CDP01]} an invariance principle was proved for the voter model. That is, appropriately rescaled voter models converge to super-Brownian motion. Thus we can expect that when the parameters $\alpha_i$ are close to one a similar result holds for the Lotka-Volterra model. The results in \cite{[CP05]} and \cite{[CP08]} show that this is true. The intuition behind the voter invariance principle is that when appropriately rescaled, the dependence on the local density of particles gets washed out and the rescaled voter models should behave like the rescaled branching random walk. The asymptotic behavior of the latter is well known: it approaches super-Brownian motion. On the other hand, if the kernel of the underlying motion is in the domain of attraction of a stable law, appropriately rescaled branching random walk could approach a super stable process; see Theorem II.5.1 of \cite{[P02]}. The above reasoning suggests that suitably rescaled Lotka-Volterra models should approach a super stable process. Our main results in this paper will show that this is the case. Let $M(\mathbb R^d)$ denote the space of finite measures on $\mathbb R^d$, endowed with the topology of weak convergence of measures.
Let $\Omega_D=D([0,\infty),M(\mathbb R^d))$ be the Skorohod space of c\`{a}dl\`{a}g paths taking values in $M(\mathbb R^d)$. Let $\Omega_C$ be the space of continuous $M(\mathbb R^d)$-valued paths with the topology of uniform convergence on compact sets. We denote by $X_t(\omega)=\omega_t$ the coordinate function. We write $\mu(\phi)$ for $\int\phi d\mu$. For $1\leq n\leq\infty$ let $C_b^n(\mathbb R^d)$ be the space of bounded continuous functions whose partial derivatives of order less than $n+1$ are also bounded and continuous, and let $C_0^n(\mathbb R^d)$ be the space of those functions in $C_b^n(\mathbb R^d)$ with compact support. An $\mathbb R^d$-valued L\'{e}vy process $Y_t$ is said to be a symmetric $\alpha$-stable process with index $\alpha\in(0,2]$ and diffusion speed $\sigma^2>0$ if \begin{equation}\label{stablelaw}\Psi(\eta):={ E}(e^{i\eta\cdot Y_1})=e^{-\sigma^2|\eta|^\alpha},\end{equation} where $|y|$ is the Euclidean norm of $y$. The distribution of $Y_1$ will be called the $(\sigma^2,\alpha)$-stable law. When $\alpha=2$, $Y_t\in \mathbb R^d$ is a $d$-dimensional $\sigma^2$-Brownian motion whose generator is ${\cal A}\phi=\frac{\sigma^2\Delta\phi}{2}$ for $\phi\in C_b^2({\mathbb R^d})$. When $0<\alpha<2$, the generator of $Y_t$ is given by $$ {\cal A}\phi(x)=\frac{\sigma^2\Delta^{\alpha/2}\phi(x)}{2}= \sigma^2\int\left[\phi(x+y)-\phi(x)-\frac{1}{1+|y|^2} \sum_{j=1}^dy_jD_j\phi(x)\right]\nu(dy) $$ for $\phi\in C_b^2(\mathbb R^d)$ and $D_j=\frac{\partial}{\partial x_j}$, where $$\nu(dy)=c|y|^{-d-\alpha}1_{\{|y|\neq0\}}(dy)$$ for an appropriate $c>0$; see \cite{[S99]} for details. In both cases, $C_b^{\infty}(\mathbb R^d)$ is a core for $\cal A$ in that the $bp$-closure of $\{(\phi,{\cal A}\phi):\phi\in C_b^{\infty}\}$ contains $\{(\phi,{\cal A}\phi):\phi\in{\cal D}({\cal A})\}$, where ${\cal D}({\cal A})$ denotes the domain of the weak generator for the process $Y$; see \cite{[P02]}.
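In the Gaussian case $\alpha=2$, $d=1$, the characteristic function (\ref{stablelaw}) says that $Y_1\sim N(0,2\sigma^2)$, since $E e^{i\eta Y}=e^{-\mathrm{Var}(Y)\eta^2/2}$. This can be checked by Monte Carlo; the sketch below is illustrative only and its sample sizes and tolerances are arbitrary.

```python
import numpy as np

# Empirical check of Psi(eta) = exp(-sigma^2 |eta|^alpha) for alpha = 2,
# d = 1: sample Y ~ N(0, 2 sigma^2) and compare the empirical
# characteristic function with exp(-sigma^2 eta^2).
rng = np.random.default_rng(2)
sigma2 = 0.7
Y = rng.normal(0.0, np.sqrt(2.0 * sigma2), size=200_000)

for eta in (0.3, 1.0, 2.0):
    empirical = np.exp(1j * eta * Y).mean().real
    assert abs(empirical - np.exp(-sigma2 * eta ** 2)) < 0.01
```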
An adapted a.s.-continuous $M(\mathbb R^d)$-valued process $\{X_t:t\geq0\}$ on a complete filtered probability space $(\Omega, {\cal F}, {\cal F}_t, P)$ is said to be a \textit{super symmetric $\alpha$-stable process with branching rate $b\geq0$, drift $\theta\in \mathbb R$ and diffusion coefficient $\sigma^2>0$ starting at $X_0\in M(\mathbb R^d)$} if it solves the following martingale problem: \noindent{\narrower For all $\phi\in C_b^{\infty}(\mathbb R^d)$, \begin{eqnarray} \label{MP1} M_t(\phi)=X_t(\phi)-X_0(\phi)-\int_0^tX_s\left( \frac{\sigma^2\Delta^{\alpha/2}\phi(x)}{2} \right)ds -\theta\int_0^tX_s(\phi)ds \end{eqnarray} is a continuous $({\cal F}_t)$-martingale, with $M_0(\phi)=0$ and predictable square function \begin{equation}\label{MP2} \langle M(\phi)\rangle_t=\int_0^tX_s(b\phi^2)ds.\end{equation} \par} \noindent The existence and uniqueness in law of a solution to this martingale problem is well known; see Theorem II.5.1 and Remark II.5.13 of \cite{[P02]}. Let $P^{b,\theta,\sigma^2,\alpha}_{X_0}$ denote the law of the solution on $\Omega_C$. So $b$ and $\theta$ can be regarded as branching parameters and the parameters $\sigma$ and $\alpha$ determine the underlying motion. Let $\{Z_n:n\geq1\}$ be a discrete time random walk on $\mathbb{Z}^d$, $$ Z_n=z_0+\sum_{i=1}^nU_i, $$ where $z_0\in\mathbb{Z}^d$ and the random variables $(U_i:i\geq1)$ are independent identically distributed on $\mathbb{Z}^d$. Let $\{p(x,y)\}$ be a random walk kernel. In the remainder of this paper we assume that \noindent {\narrower {\bf(A1)}: $p(x,y)=p(x-y)$ is an irreducible, symmetric, random walk kernel on ${\mathbb{Z}}^d$ and $p(0)=0$.
For $\alpha\in(0,2]$ and $\sigma^2>0$, $\{p(x)\}$ is in the domain of attraction of a symmetric $(\sigma^2,\alpha)$-stable law; i.e., $$ P(U_1=x)=p(x) $$ and there exists a function $b(n)$ of regular variation of index $1/\alpha$ such that \begin{equation} \label{defDA} b(n)^{-1}\sum_{i=1}^nU_i\xrightarrow{(d)}Y_1\quad\textrm{ as }n\rightarrow\infty, \end{equation} where $Y_1$ is determined by (\ref{stablelaw}) and the symbol $\xrightarrow{(d)}$ means convergence in distribution. \par} \noindent We will call a random walk (discrete time or continuous time) with kernel satisfying assumption (A1) a stable random walk. In the remainder of this paper, we always assume that $$\textit{(A1) holds for some } \sigma>0 \textit{ and } \alpha\in(0,2].$$ \begin{remark} \label{remark1.1} Without loss of generality, we may and will assume that the function $b$ is continuous and monotonically increasing from $\mathbb R^{+}$ onto $\mathbb R^{+}$ and $b(0)=0$; see \cite{[LR91]} or \cite{[F71]}. We also have that $$ b(x)=x^{1/\alpha}s(x),\quad x>0, $$ where $s:(0,\infty)\rightarrow(0,\infty)$ is a slowly varying function, meaning that for any $c>0$, $$ \lim_{x\rightarrow\infty}\frac{s(cx)}{s(x)}=1 $$ where the convergence holds uniformly when $c$ varies over the interval $[\epsilon, 1/\epsilon]$ for any $\epsilon>0$; see Lemma 2 of VIII.8 of \cite{[F71]}. \end{remark} \begin{remark} \label{transient} According to Proposition 2.5 of \cite{[LR91]} and its proof, we have that under (A1), the random walk $\{Z_n\}$ is transient if and only if $$ \sum_{k=1}^{\infty}b(k)^{-d}<\infty. $$ By Lemma 2 in Section VIII.8 of \cite{[F71]}, the random walk is always transient when $d>\alpha$. In particular, when $d=\alpha=1$, the random walk is recurrent if and only if $$ \sum_{k=1}^{\infty}\frac{1}{ks(k)}=\infty. $$ \end{remark} Now, we are ready to define our rescaled Lotka-Volterra models.
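Before setting up the rescaling, assumption (A1) can be illustrated in its simplest instance: for the symmetric $\pm1$ walk on $\mathbb Z$ one has $\alpha=2$ and $b(n)=\sqrt n$, and $b(n)^{-1}\sum_{i=1}^n U_i$ is approximately $N(0,1)$. The simulation parameters below are arbitrary illustrative choices.

```python
import numpy as np

# The normal (alpha = 2) attraction in (A1) for the +/-1 walk on Z:
# b(n)^{-1} S_n with b(n) = sqrt(n) has mean 0 and variance exactly 1, and
# its law is close to N(0, 1) for large n.
rng = np.random.default_rng(3)
n, trials = 500, 20_000
steps = 2 * rng.integers(0, 2, size=(trials, n)) - 1   # i.i.d. +/-1 steps
S = steps.sum(axis=1) / np.sqrt(n)                     # b(n)^{-1} S_n

assert abs(S.mean()) < 0.03        # centred
assert abs(S.var() - 1.0) < 0.05   # unit variance, as in the N(0,1) limit
```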
For $N=1,2,\cdots,$ let $$\mathbb{S}_N=\mathbb{Z}^d/b(N).$$ Define the kernel $p_N$ on $\mathbb{S}_N$ by $$ p_N(x)=p(xb(N)),\quad\quad x\in\mathbb{S}_N. $$ For $\xi\in\{0,1\}^{\mathbb{S}_N}$, define the densities $f_i^N=f_i^N(\xi)=f_i^N(x,\xi)$ by $$ f_i^N(x,\xi)=\sum_{y\in\mathbb{S}_N}p_N(y-x)1_{\{\xi(y)=i\}},\quad\quad i=0,1. $$ Let $\alpha_i=\alpha_i^N$ depend on $N$ and let $\xi_t^N$ be the process taking values in $\{0,1\}^{\mathbb{S}_N}$ determined by the rates: at site $x$ in configuration $\xi$, the coordinate $\xi(x)$ makes transitions \begin{eqnarray*} 0\rightarrow1\quad\quad\textrm{at rate}\quad Nf_1^N(f_0^N+\alpha_0^Nf_1^N),\cr 1\rightarrow0\quad\quad\textrm{at rate}\quad Nf_0^N(f_1^N+\alpha_1^Nf_0^N). \end{eqnarray*} That is, $\xi^N_t$ is the rate-$N$ Lotka-Volterra process determined by the parameters $\alpha_i^N$ and the kernel $p_N$. More precisely, if we set \begin{eqnarray*} c_N(x,\xi)=\left\{\begin{array}{lll}Nf_1^N(f_0^N+\alpha_0^Nf_1^N) \quad\textrm{if }\xi(x)=0,\\ Nf_0^N(f_1^N+\alpha_1^Nf_0^N)\quad\textrm{if }\xi(x)=1, \end{array}\right. \end{eqnarray*} $\xi^N_t$ is the unique Feller process taking values in $\{0,1\}^{\mathbb{S}_N}$ whose generator is the closure of the operator $$ \Omega_N\phi(\xi)=\sum_{x\in\mathbb{S}_N}c_N(x,\xi)(\phi(\xi^x)-\phi(\xi)) $$ on the set of functions $\phi:\{0,1\}^{\mathbb{S}_N}\rightarrow\mathbb R$ depending on only finitely many coordinates. Here $\xi^x(y)=\xi(y)$ for $y\neq x$ and $\xi^x(x)=1-\xi(x)$. \begin{remark} If we assume $\sum_{x\in\mathbb{Z}^d}x^ix^jp(x)=\delta_{ij}\sigma^2<\infty$, then $p(x)$ is in the domain of attraction of a normal law. That is the case $\alpha=2$. So we recover the fixed kernel models in \cite{[CP05]}. For the critical case, since there are significant differences between the case $d=\alpha=1$ and the case $d=\alpha=2$, we only consider the case $d=\alpha=1$. For $d=\alpha=2$, see \cite{[CP08]}.
\end{remark} Define $$g(x)=\int_1^x b(s)^{-1}ds$$ for $d=\alpha=1$ and $x\geq0$. According to Remark \ref{transient}, the one-dimensional random walk $Z$ is recurrent if and only if $\lim_{x\rightarrow\infty} g(x)=\infty.$ Set \begin{eqnarray*} N'=\begin{cases}N,&\textrm{if }d>\alpha,\\ N,&\textrm{if }d=\alpha=1 \textrm{ and } \lim_{x\rightarrow\infty}g(x)<\infty ,\\ N/{g(N)},&\textrm{if }d=\alpha=1\textrm{ and }\lim_{x\rightarrow\infty} g(x)=\infty. \end{cases} \end{eqnarray*} That is, $N'=N$ when the stable random walk is transient, and $N'=N/g(N)$ when it is recurrent. We define the corresponding measure-valued process $X_t^N$ by \begin{equation} \label{resMV} X_t^N=\frac{1}{N'}\sum_{x\in\mathbb{S}_N}\xi_t^N(x)\delta_x. \end{equation} As in \cite{[CP05]} and \cite{[CP08]}, we make the following assumptions: \begin{align} \label{A2} &(1)~\sum_{x\in\mathbb{S}_N}\xi_0^N(x)<\infty.\notag\\ &(2)~X_0^N\rightarrow X_0\quad\quad \textrm{in }~ M(\mathbb R^d) \quad\textrm{ as }N\rightarrow\infty. \tag{${\bf A2}$}\\ &(3)~\theta_i^N=N'(\alpha_i^N-1)\rightarrow\theta_i\in\mathbb R\quad\quad \textrm{as }N\rightarrow\infty,\quad i=0,1.\notag \end{align} Now, we are ready to describe our main results. \subsection{Main results}\label{MR} To describe the limit process, we introduce a coalescing random walk system $\{\hat{B}_t^x,x\in \mathbb{Z}^d\}$. Each $\hat{B}_t^x$ is a rate 1 random walk on $\mathbb{Z}^d$ with kernel $p$, with $\hat{B}_0^x=x$. The walks move independently until they collide, and then move together after that. For finite $A\subset\mathbb{Z}^d$, let $$\hat{\tau}(A)=\inf\{t:|\{\hat{B}_t^x:x\in A\}|=1\}$$ be the time at which the particles starting from $A$ coalesce into a single particle, and write $\hat{\tau}(a,b,\cdots)$ when $A=\{a,b,\cdots\}$. Note that when the stable random walk is transient, we can define the ``escape'' probability by $$ \gamma_e=\sum_{e\in \mathbb{Z}^d}p(e)P(\hat{\tau}(0,e)=\infty).
$$ We also define \begin{eqnarray*} &&\beta=\sum_{e,e'\in\mathbb Z^d}p(e)p(e') P(\hat{\tau}(e,e')<\infty,\hat{\tau}(0,e)=\hat{\tau}(0,e')=\infty),\cr &&\delta=\sum_{e,e'\in\mathbb Z^d}p(e)p(e') P(\hat{\tau}(0,e)=\hat{\tau}(0,e')=\infty). \end{eqnarray*} We also need a collection of independent (noncoalescing) rate-1 continuous time random walks with step function $p$, which we will denote $\{B_t^{x}:x\in \mathbb Z^d \}$, such that $B_0^{x}=x$. Define the collision times $$ \tau(x,y)=\inf\{t\geq0:B_t^x=B_t^y\},\quad x,y\in \mathbb Z^d.$$ Let $P_N$ denote the law of $X^N_{\cdot}$. Our first result is the following. \begin{theorem} \label{mainUP} Assume (A1), (A2) and $d\geq\alpha$. If the stable random walk is transient, then $$P_N\xrightarrow{(d)}P_{X_0}^{2\gamma_e,\theta,\sigma^2,\alpha}$$ as $N\rightarrow\infty$, where $\theta=\theta_0\beta-\theta_1\delta$. \end{theorem} Note that if we assume $\sum_{x\in\mathbb{Z}^d}x^ix^jp(x)=\delta_{ij}\sigma^2<\infty$, then $\{p(x)\}$ is in the domain of attraction of a normal law with $b(N)=\sqrt{N}$. So Theorem \ref{mainUP} generalizes Theorem 1.2 in \cite{[CP05]}. Next, we consider the recurrent case. For technical reasons we need to assume that $\{p(x)\}$ is in the \textit{domain of normal attraction} of the $(\sigma^2,1)$-stable law; see Remark \ref{LVreasoning} below. To state our result, we introduce the one-dimensional potential kernel $a(x)$, \begin{equation} \label{Pa} a(x)=\int_0^{\infty}\left[P(B_t^0=0)-P(B_t^x=0)\right]dt. \end{equation} We will discuss the existence of $a(x)$ later. Note that $a(x)\geq0$. Let $\{p_t(x):t\geq0,x\in\mathbb R\}$ denote the transition density of $\{Y_t\}$. Now we define \begin{eqnarray} \label{Gamma} \gamma^{\ast}=(p_1(0))^{-1}\int_0^\infty\sum_{x,y,e,e'}p(e)p(e') P({\tau}(0,e)&\wedge&{\tau}(0,e')>{\tau}(e,e')\in du,\cr && B_u^0=x,B_u^e=y)a(y-x).
\end{eqnarray} Our critical Lotka-Volterra invariance principle is the following. \begin{theorem} \label{main2} Assume (A2), $d=\alpha=1$, (A1) holds with $b(t)=t$ and $N'=N/\log N$. Then $$P_N\xrightarrow{(d)}P_{X_0}^{2\hat{p},\theta,\sigma^2,1}$$ as $N\rightarrow\infty$, where $\theta=\gamma^{\ast}(\theta_0-\theta_1)$ and $\hat{p}=(p_1(0))^{-1}$. \end{theorem} \begin{remark} According to Remark \ref{transient}, the assumption that (A1) holds with $b(t)=t$ implies that the stable random walk is recurrent. \end{remark} Now, we consider applications of the convergence theorems. One can see from the form of the rate function that if we set $\alpha_0=\alpha_1=1$, then $\xi_t$ is just the well known \textit{voter model}. Identify $\xi_t$ with the set $\{x:\xi_t(x)=1\}$ and let $\xi_t^A$ denote the voter model starting from 1's exactly on $A$, $\xi_0^A=A$. Write $\xi_t^x$ for $\xi_t^{\{x\}}$. The usual additive construction of the voter model yields $$ \xi_t^A=\bigcup_{x\in A}\xi_t^x. $$ The fact that $|\xi_t^0|=\sum_x\xi_t^0(x)$ is a martingale tells us that $|\xi_t^0|$ hits 0 eventually with probability 1. Letting $p_t=P(|\xi_t^0|>0)$, it follows that $p_t\rightarrow0$ as $t\rightarrow\infty$. A natural problem is to determine the rate at which $p_t\rightarrow0$. By using a result in \cite{[Sa79]}, Bramson and Griffeath \cite{[BG80]} were able to obtain precise asymptotics under the assumption that the underlying motion is a simple random walk. By means of the voter model invariance principle, Cox and Perkins \cite{[CP04]} reproved the main result in \cite{[BG80]} under the weaker assumption that the jump kernel has finite variance. In this paper, as an application of the convergence theorems above, we determine the rate at which $p_t\rightarrow0$ under the assumption (A1). By the notation $f(t)\sim g(t)$ as $t\rightarrow\infty$ we mean that $\lim_{t\rightarrow\infty}f(t)/g(t)=1$. Our result is the following theorem.
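The extinction and martingale facts recalled above can be probed by simulation. The sketch below is purely illustrative and uses the nearest-neighbor kernel on $\mathbb Z$ (finite variance, hence not the stable regime of the theorem below): for that kernel the set of 1's started from a single site stays an interval, so the embedded jump chain of $|\xi_t^0|$ is a simple random walk absorbed at $0$.

```python
import random

def size_process_stats(n_steps, n_trials, seed=0):
    """Embedded jump chain of |xi_t^0| for the nearest-neighbor voter model
    on Z: a simple random walk started at 1 and absorbed at 0.  Returns
    (estimated survival probability over n_steps jumps,
     sample mean of the stopped value S at time min(n_steps, T_0))."""
    rng = random.Random(seed)
    survived, total = 0, 0.0
    for _ in range(n_trials):
        s = 1
        for _ in range(n_steps):
            s += 1 if rng.random() < 0.5 else -1
            if s == 0:
                break  # extinction: |xi^0| is absorbed at 0
        survived += s > 0
        total += s
    return survived / n_trials, total / n_trials
```

With the seed fixed, the survival estimate at $n=100$ jumps is close to $\sqrt{2/(\pi n)}\approx0.08$ (the $n^{-1/2}$ decay of the finite-variance case), while the sample mean of the stopped value stays close to $1$, the optional-stopping value of the martingale $|\xi_t^0|$.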
\begin{theorem} \label{voterasy} Assume $d\geq\alpha$ and (A1) holds with $b(t)=t^{1/\alpha}$; i.e., $\{p(x)\}$ is in the domain of normal attraction of the $(\sigma^2,\alpha)$-stable law. Let $\gamma_{1}=p_1(0)^{-1}$ for $d=\alpha$. Then as $t\rightarrow\infty$ \begin{alignat*}{2} p_t&\sim\frac{\log t}{\gamma_1t}\qquad &d=\alpha,\\ &\sim{(\gamma_et)^{-1}}\qquad &d>\alpha. \end{alignat*} Moreover, $$ P\left(p_t|\xi_t^0|>u\big{|}|\xi_t^0|>0\right)\xrightarrow{t\rightarrow\infty} e^{-u},\quad u>0. $$ \end{theorem} \medskip Finally, we introduce some notation which will play an important role in our proofs of the main results. First, according to \cite{[F71]}, for $0<\underline{\alpha}<\alpha$ we can define $$ |p|_{\underline{\alpha}}:=\sum_{x\in\mathbb Z^d}|x|^{\underline{\alpha}}p(x)<\infty. $$ By (A2), we may define $$\bar{\theta}=1\vee\sup_{N,i}N'|\alpha_i^N-1|<\infty.$$ For $D\subset\mathbb R^d$ and $\phi: D\rightarrow\mathbb R$, define $$ ||\phi||_{\textrm{Lip}}=||\phi||_{\infty}+\sup_{x\neq y}\frac{|\phi(x)-\phi(y)|}{|x-y|}. $$ For $0<\underline{\alpha}\leq1$, let \begin{eqnarray*} ||\phi||_{\underline{\alpha}}=\left\{\begin{array}{ll}0,&\text{if }\phi\equiv c\text{ for some constant } c\in\mathbb R,\\ \sup_{x\neq y,|x-y|\leq1}\frac{|\phi(x)-\phi(y)|}{|x-y|^{\underline{\alpha}}} \vee2||\phi||_{\infty},& \text{otherwise},\end{array}\right. \end{eqnarray*} and for $\underline{\alpha}>1$ let $$||\phi||_{\underline{\alpha}}=2||\phi||_{\text{Lip}}.$$ Note that for $\underline{\alpha}\leq 1$, $$ \sup_{x\neq y,|x-y|\leq1}\frac{|\phi(x)-\phi(y)|}{|x-y|^{\underline{\alpha}}} \leq\sup_{x\neq y}\frac{|\phi(x)-\phi(y)|}{|x-y|}. $$ Thus for any $\underline{\alpha}>0$ { \begin{equation} \label{LIP} ||\phi||_{\underline{\alpha}}\leq 2||\phi||_{\textrm{Lip}}\quad\textrm{and }\quad|\phi(x)-\phi(y)|\leq ||\phi||_{\underline{\alpha}}|x-y|^{\underline{\alpha}}.
\end{equation}} \begin{remark} \label{holder} Since $p(\cdot)$ in this paper may not have a finite first moment, we cannot use the Lipschitz norm in our estimates. Thus a `H\"{o}lder' norm is introduced. \end{remark} \medskip The remainder of this paper is organized as follows. In Section \ref{Pre}, we first give some random walk estimates and then deduce the semimartingale decompositions for the approximating processes. Finally, we prove a key result, the uniform convergence of the random walk generators to the generator of the symmetric stable process. In Section 3 and Section 4, we follow the strategy in \cite{[CP05]} and \cite{[CP08]} to prove our convergence theorems, Theorem \ref{mainUP} and Theorem \ref{main2}. Our proofs are more involved due to the lack of higher moments, and we will carry out in detail only the parts that differ. Theorem \ref{voterasy} will be proved in Section 5. \section{Preliminaries}\label{Pre} \subsection{Random walk estimates} Recall that $\{B_t^x,x\in\mathbb Z^d\}$ is a collection of independent rate-1 stable random walks with $B_0^x=x$. Let $p_t(x,y)=P(B_t^x=y)$ denote the transition function of $\{B_t^x\}$. We denote by $l$ the inverse of $b$. Define the characteristic function of the step function $p(\cdot)$ by $$ \psi(\eta)=\sum_xp(x)e^{-ix\cdot\eta}\quad\textrm{ for }\quad\eta\in T^d:=(-\pi,\pi]^d. $$ Since $p$ is symmetric, $\psi(\eta)$ is real. So \begin{eqnarray}\label{A3} p_t(0,x)\leq p_t(0,0). \end{eqnarray} The following proposition is taken from \cite{[LR91]}. \begin{proposition} \label{DAeqv} The following are equivalent: \begin{enumerate} \item[(1)]$p(\cdot)$ is in the domain of attraction of the $(\sigma^2,\alpha)$-stable law. \item[(2)]$\psi(\eta)=1-\frac{\sigma^2}{l(1/|\eta|)} +o\left(\frac{1}{l(1/|\eta|)}\right)$ as $|\eta|$ tends to 0. \item[(3)] $\psi\left(\frac{\eta}{b(n)}\right)^n\xrightarrow{n\rightarrow\infty}\Psi(\eta)$, \quad$\eta\in\mathbb R^d$.
\end{enumerate} \end{proposition} We also have that $l$ is of regular variation of index $\alpha$ and $$ l(x)=x^{\alpha}t(x), $$ where $$ t(x)=s(l(x))^{-\alpha}. $$ By Lemma 2.1 in \cite{[LR91]}, for any $\epsilon>0$, there exist two positive constants $C_{\epsilon}, C'_{\epsilon}$ such that, for any $1\leq y\leq z$, \begin{equation} \label{regular} C_{\epsilon}y^{\alpha-\epsilon}\leq l(y)\leq C'_{\epsilon}y^{\alpha+\epsilon} \quad \textrm{ and }\quad C_{\epsilon}\left(\dfrac{z}{y}\right)^{\alpha-\epsilon}\leq\frac{l(z)}{l(y)}\leq C_{\epsilon}'\left(\dfrac{z}{y}\right)^{\alpha+\epsilon}. \end{equation} A similar result also holds for $b$, with $\alpha$ replaced by $1/\alpha$. Since $p(\cdot)$ is symmetric and irreducible, $\psi$ is real and $\psi(\eta)=1$ if and only if $\eta=0$; see \cite{[Sp76]}. According to Proposition \ref{DAeqv}, we may assume that there exists a constant $C>0$ such that $$ \frac{C}{l(1/|\eta|)}\leq1-\psi(\eta)\leq2 $$ for every $\eta\in T^d$. (\ref{regular}) tells us that for $b(t)\geq d\pi$ and $0<\epsilon<\alpha,$ \begin{equation} \label{boundCha} t(1-\psi(\frac{\eta}{b(t)}))\geq \frac{Cl\left(b(t)\right)}{l\left(b(t)/|\eta|\right)}\geq C''_{\epsilon}\left(|\eta|^{\alpha+\epsilon}\wedge|\eta|^{\alpha-\epsilon}\right) \end{equation} for some constant $C''_{\epsilon}>0$. Recall that $\{p_t(x):t\geq0,x\in\mathbb R^d\}$ denotes the transition density of $\{Y_t\}$. The local limit theorem for the stable random walk, which plays an important role in our proofs of the main results, is given in the following proposition. \begin{proposition} \label{CPtostable} If (A1) holds, then \begin{equation} \label{tranapp} \lim_{t\rightarrow\infty}\sup_{x\in\mathbb Z^d} \left|b(t)^dp_t(0,x)-p_1\left(\frac{x}{b(t)}\right) \right|=0 \end{equation} and there exists a constant $C$ depending on $p(\cdot)$ such that for every $t\geq0$ and $x\in \mathbb R^d$, \begin{equation} \label{boundtransition} p_t(0,x)\leq C b(t)^{-d}.
\end{equation} Moreover, if $b(t)=t$ and $d=1$, \begin{equation} \label{7.6} \sup_{x\in \mathbb Z}P(B_t^0=x)\leq C_{\ref{7.6}}(t+1)^{-1}. \end{equation} \end{proposition} {\bf Proof.} Since $l$ is a function of regular variation, by Proposition \ref{DAeqv}, for each $|\eta|>0$, \begin{eqnarray} \label{concha} \lim_{t\rightarrow\infty} t\left(1-\psi\left(\frac{\eta}{b(t)}\right)\right) =\lim_{t\rightarrow\infty}\frac{l(b(t))}{l(b(t)/|\eta|)} (\sigma^2+o(1))=\sigma^2|\eta|^{\alpha}. \end{eqnarray} Then \begin{eqnarray*} &&\left|b(t)^dp_t(0,x)-p_1\left(\frac{x}{b(t)}\right) \right|\\ &&\quad\leq(2\pi)^{-d}\left|\int_{b(t)T^d}e^{-ix\cdot(\eta/b(t))} \exp\left\{-t\left(1-\psi\left(\frac{\eta}{b(t)}\right)\right)\right\}d\eta -\int_{b(t)T^d}e^{-i(x/b(t))\cdot\eta}\Psi(\eta)d\eta\right|\\ &&\quad\quad+(2\pi)^{-d}\int_{\mathbb R^d\setminus b(t)T^d}\exp\left\{-\sigma^2|\eta|^{\alpha}\right\}d\eta\\ &&\quad\leq(2\pi)^{-d}\int_{b(t)T^d}\left| \exp\left\{-t\left(1-\psi\left(\frac{\eta}{b(t)}\right)\right)\right\} -\exp\left\{-\sigma^2|\eta|^{\alpha}\right\}\right|d\eta\\ &&\quad\quad+(2\pi)^{-d}\int_{\mathbb R^d\setminus b(t)T^d}\exp\left\{-\sigma^2|\eta|^{\alpha}\right\}d\eta. \end{eqnarray*} Then the Dominated Convergence Theorem with (\ref{boundCha}) yields (\ref{tranapp}). For (\ref{boundtransition}), when $b(t)\geq d\pi$, \begin{eqnarray*} p_t(0,x)&=&(2\pi)^{-d}\int_{T^d}e^{-ix\cdot\eta} \exp\left\{-t\left(1-\psi(\eta)\right)\right\}d\eta\\ &\leq&(2\pi)^{-d}b(t)^{-d}\int_{b(t)T^d} \exp\left\{-t\left(1-\psi\left(\frac{\eta}{b(t)}\right)\right)\right\}d\eta\\ &\leq&(2\pi)^{-d}b(t)^{-d}\int_{\mathbb R^d} \exp\{-C''_{\epsilon}(|\eta|^{\alpha+\epsilon}\wedge|\eta|^{\alpha-\epsilon})\}d\eta\\&\leq& Cb(t)^{-d}, \end{eqnarray*} where the second inequality follows from (\ref{boundCha}). Since $p_t(0,x)\leq1$, after enlarging $C$ if necessary, (\ref{boundtransition}) holds for every $t\geq0$. Finally, (\ref{7.6}) follows from (\ref{boundtransition}) with $b(t)=t$ and $d=1$, together with the trivial bound $p_t(0,x)\leq1$. We complete the proof.
\hfill$\Box$\medskip \smallskip The following two propositions consider the growth of the stable random walk. \begin{proposition} \label{lemma7.3} (a) If $z_T\in \mathbb Z^d$ and $t_T>0$ satisfy \begin{equation} \label{7.8} \lim_{T\rightarrow\infty}\frac{z_T}{b(T)}=z \textrm{ and }\lim_{T\rightarrow\infty} \frac{t_T}{T}=s>0, \end{equation} then \begin{equation} \label{7.9} \lim_{T\rightarrow\infty}b(T)^dP(B_{t_T}^0=z_T)=\frac{p_1(z/s)}{s^d}. \end{equation} (b) For each $K>0$, there is a constant $C_{\ref{7.10}}(K)>0$ such that \begin{equation} \label{7.10} \liminf_{T\rightarrow\infty}\inf_{|x|\leq Kb(T)}b(T)^dP(B_T^0=x)\geq C_{\ref{7.10}}(K). \end{equation} \end{proposition} {\bf Proof. } By (\ref{7.8}) and Remark \ref{remark1.1}, we have $\lim_{T\rightarrow\infty}\frac{b(t_T)}{b(T)}=s$. Then (\ref{7.9}) follows from (\ref{tranapp}). For (b), when $\alpha=2$, by (\ref{tranapp}), the desired result is immediate. When $0<\alpha<2$, recall that $\{p_t(x):t\geq0, x\in \mathbb R^d\}$ is the transition density of a symmetric $\alpha$-stable process. By the arguments after Remark 5.3 of \cite{[BL02]}, there exist two positive constants $c_1$ and $c_2$ such that \begin{equation} \label{estistable} c_1\left(t^{-d/\alpha}\wedge\frac{t}{|x|^{d+\alpha}}\right)\leq p_t(x)\leq c_2\left(t^{-d/\alpha}\wedge\frac{t}{|x|^{d+\alpha}}\right). \end{equation} By the above bounds and (\ref{tranapp}), \begin{eqnarray*} \liminf_{T\rightarrow\infty}\inf_{|x|\leq Kb(T)}b(T)^dP(B_T^0=x)& =& \liminf_{T\rightarrow\infty}\inf_{|x|\leq Kb(T)}p_1(x/b(T))\\ &\geq& c\left(1\wedge K^{-(d+\alpha)}\right). \end{eqnarray*} The desired result follows readily. \hfill$\Box$\medskip \smallskip \begin{proposition} \label{propincrease} Assume $d=1$.
If $g_1$ and $g_2$ are two positive functions on $\mathbb R^+$ such that $g_1(x)\rightarrow+\infty$ and $g_2(x)\rightarrow+\infty$ as $x\rightarrow+\infty$, then there exists a constant $C_{\ref{increasein}}$, which only depends on $p$, such that \begin{eqnarray}\label{increasein} P\left(|B_{g_1(N)}^0|\geq g_2(N)\right)\leq \frac{C_{\ref{increasein}}g_1(N)}{l(g_2(N))}. \end{eqnarray} \end{proposition} {\bf Proof. }First, $$P\left(|B_{g_1(N)}^{0}|\geq g_2(N)\right)\leq P\left(\max_{ u\leq {g_1(N)}}|B_u^0|\geq g_2(N)\right).$$ Note that $\{B_u^0:u\geq0\}$ is a compound Poisson process whose L\'{e}vy measure is given by $$ \nu_0(dz):=\sum_{y\in \mathbb Z^d}p(y)\delta_y(dz), $$ which is a symmetric measure. According to the arguments in Section 3 of \cite{[Pr81]}, $$ P\left(\max_{ u\leq {g_1(N)}}|B_u^0|\geq g_2(N)\right)\leq Cg_1(N)\left(\nu_0(z:|z|>g_2(N))+g_2(N)^{-2}\int_{|z|\leq {g_2(N)}}z^2\nu_0(dz)\right), $$ where $C$ is a positive constant; see (3.2) of \cite{[Pr81]}. Since $p(\cdot)$ is in the domain of attraction of the $(\sigma^2,\alpha)$-stable law, we have \begin{equation} \label{for3} \frac{x^2[\nu_0(z:|z|>x)]}{\int_{|z|\leq x}z^2\nu_0(dz)}\longrightarrow\frac{2-\alpha}{\alpha} \end{equation} and \begin{equation} \label{for2} \frac{x\int_{|z|\leq b(x)}z^2\nu_0(dz)}{b(x)^2}\longrightarrow C_0 \end{equation} as $x\rightarrow\infty$ for some constant $C_0>0$; see (5.16) and (5.23) in Chapter XVII of \cite{[F71]}. By (\ref{for3}) there exists a constant $C_1$ independent of $N$ such that $$ \nu_0\left(z:|z|>g_2(N)\right)\leq C_1{g_2(N)^{-2}}{\int_{|z|\leq g_2(N)}z^2\nu_0(dz)}. $$ According to (\ref{for2}), there exists another constant $C_2$ independent of $N$ such that \begin{eqnarray*} {g_2(N)^{-2}}{\int_{|z|\leq g_2(N)}z^2\nu_0(dz)} \leq \frac{C_2}{l(g_2(N))}. \end{eqnarray*} (Recall that $l$ is the inverse function of $b$.)
Thus $$ P\left(\max_{ u\leq {g_1(N)}}|B_u^0|\geq g_2(N)\right)\leq CC_2(C_1+1)\frac{g_1(N)}{l(g_2(N))}, $$ which yields the desired result. \hfill$\Box$\medskip \subsection{Semimartingale decompositions}\label{secsemi} Some results in this subsection are exactly the same as those in Section 3 of \cite{[CP08]}. For completeness, we list them here. Let $\xi_t^N$ be the rescaled Lotka-Volterra model constructed in Section \ref{ourmodel}. As in \cite{[CP08]}, we introduce the following notation. If $$ \phi=\phi_s(x),\quad \dot{\phi}_s(x)\equiv\frac{\partial}{\partial s}\phi(s,x)\in C_b([0,T]\times\mathbb S_N), $$ and $s\leq T$, define \begin{eqnarray} \label{generatorN} {\cal A}_N(\phi_s)(x)&=&\sum_{y\in \mathbb S_N}Np_N(y-x)(\phi_s(y)-\phi_s(x)),\\ D_t^{N,\,1}(\phi)&=&\int_0^tX_s^N({\cal A}_N\phi_s+\dot{\phi_s})ds,\\ D_t^{N,\,2}(\phi)&=&\frac{N(\alpha_0^N-1)}{N'}\int_0^t \sum_{x\in\mathbb S_N}\phi_s(x)1_{\{\xi_s^N(x)=0\}}(f_1^N(x,\xi_s^N))^2ds,\\ D_t^{N,\,3}(\phi)&=&\frac{N(\alpha^N_1-1)}{N'}\int_0^t \sum_{x\in\mathbb S_N}\phi_s(x)1_{\{\xi_s^N(x)=1\}}(f_0^N(x,\xi_s^N))^2ds, \end{eqnarray} \begin{eqnarray} \langle M^N(\phi)\rangle_{1,\,t}&=&\frac{N}{(N')^2}\int_0^t\sum_{x\in \mathbb S_N}\phi_s^2(x)\sum_{y\in\mathbb S_N}p_N(y-x)(\xi_s^N(y)-\xi_s^N(x))^2ds,\\ \langle M^N(\phi)\rangle_{2,\,t}&=&\frac{1}{(N')^2}\int_0^t\sum_{x\in \mathbb S_N}\phi_s^2(x)\big{[}(\alpha_0^N-1)1_{\{\xi_s^N(x)=0\}} (f_1^N(x,\xi_s^N))^2\cr &&\quad +(\alpha_1^N-1)1_{\{\xi_s^N(x)=1\}}(f_0^N(x,\xi_s^N))^2\big{]}ds. \end{eqnarray} If $X_{\cdot}$ is a process, let $({\cal F}_t^X,t\geq0)$ be the right-continuous filtration generated by $X_{\cdot}$. The following proposition is a version of Proposition 3.1 of \cite{[CP08]}. For its proof, see Section 2 of \cite{[CP05]}.
\begin{proposition} \label{semidec} For $\phi, \dot{\phi}\in C_b([0,T]\times\mathbb S_N)$ and $t\in [0,T]$, \begin{equation} \label{XNdec} X_t^N(\phi_t)=X_0^N(\phi_0)+D_t^N(\phi)+M_t^N(\phi), \end{equation} where \begin{equation} \label{Driftdec} D_t^N(\phi)=D_t^{N,1}(\phi)+D_t^{N,2}(\phi)-D_t^{N,3}(\phi) \end{equation} and $M_t^N(\phi)$ is an ${\cal F}_t^{X^N}$-square-integrable martingale with predictable square function \begin{equation} \label{premartN} \langle M^N(\phi)\rangle_t=\langle M^N(\phi)\rangle_{1,t}+\langle M^N(\phi)\rangle_{2,t}. \end{equation} \end{proposition} The following lemma is a generalization of Lemma 3.5 of \cite{[CP05]} and Lemma 4.8 of \cite{[CP08]}. \begin{lemma} \label{estimateMNt} There is a constant $C$ such that if $\phi:[0,T]\times\mathbb S_N\rightarrow\mathbb R$ is a bounded measurable function, then (a) $\langle M^N(\phi)\rangle_{2,t}=\int_0^tm_{2,s}^N(\phi)ds$, where \begin{equation} \label{m2st} |m_{2,s}^N(\phi)|\leq C\frac{||\phi_s||_{\infty}^2}{(N')^2}X_s^N(1). \end{equation} (b) For $\underline{\alpha}<1\wedge\alpha,$ \begin{equation} \label{MN1tdec} \langle M^N(\phi)\rangle_{1,t}=2\int_0^tX_s^N((N/N')\phi_s^2f_0^N(\xi_s^N))ds +\int_0^tm_{1,s}^N(\phi_s)ds, \end{equation} where \begin{equation} \label{m1sNest} |m_{1,s}^N(\phi)|\leq \left[X_s^N(1)\frac{2N||\phi||^2_{\underline{\alpha}} |p|_{\underline{\alpha}}}{N'b(N)^{\underline{\alpha}}} \right] \wedge\left[\frac{2N||\phi||^2_{\infty}X_s^N(1)}{N'}\right]. \end{equation} (c) For $i=2,3$, $D_t^{N,i}(\phi)=\int_0^td_s^{N,i}(\phi)ds$ for $t\leq T$, where for all $N$ and $s\leq T$, $$ |d_s^{N,i}(\phi)|\leq C||\phi_s||_{\infty}X_s^N\left( (N/N')f_0^N(\xi_s^N)\right). $$ \end{lemma} \begin{remark} Note that when $N'=N$, since $f_0^N\leq1$, $$ |d_s^{N,i}(\phi)|\leq C||\phi_s||_{\infty}X_s^N( 1),\quad i=2,3. $$ \end{remark} {\bf Proof.} (a) Throughout this proof, $C$ denotes a positive constant which may change from line to line.
Since $f_0^N\leq1$, $f_1^N\leq1$ and $1_{\{\xi_s^N(x)=1\}}=\xi_s^N(x)$, the definition of $\langle M^N(\phi)\rangle_{2,t}$ and the fact that $f_0^N+f_1^N=1$ imply \begin{eqnarray*} |m_{2,s}^N(\phi)|&\leq&\frac{||\phi||^2_{\infty}\sup_N N'|\alpha_0^N-1|} {(N')^3} \sum_{x\in\mathbb S_N}(f_1^N(x,\xi_s^N))1_{\{\xi_s^N(x)=0\}} \\&& +\frac{||\phi||^2_{\infty}\sup_N N'|\alpha_1^N-1|}{(N')^2}X_s^N(1)\\ &\leq&\frac{C||\phi||^2_{\infty}}{(N')^3}\sum_{x,y}p_N(x-y) (1-1_{\{\xi_s^N(x)=1\}})1_{\{\xi_s^N(y)=1\}} +\frac{C||\phi||^2_{\infty}}{(N')^2}X_s^N(1)\\ &\leq&\frac{C||\phi||^2_{\infty}}{(N')^2}X_s^N(1), \end{eqnarray*} where the second inequality follows from (A2). For (b), note that \begin{eqnarray*} \langle M^N(\phi)\rangle_{1,t}&=&\frac{1}{(N')^2}\int_0^t\sum_{x\in \mathbb S_N}\phi_s^2(x)\sum_{y\in\mathbb S_N}Np_N(y-x)(\xi_s^N(y)-\xi_s^N(x))^2ds\\ &=&\frac{1}{(N')^2}\int_0^t\sum_{x\in \mathbb S_N}\phi_s^2(x)\sum_{y\in\mathbb S_N}Np_N(y-x)\left(2\xi_s^N(x)(1-\xi_s^N(y))\right)ds\\ &&\quad+\frac{1}{(N')^2}\int_0^t\sum_{x\in \mathbb S_N}\phi_s^2(x)\sum_{y\in\mathbb S_N}Np_N(y-x)\left(\xi_s^N(y)-\xi_s^N(x)\right)ds. \end{eqnarray*} Thus (\ref{MN1tdec}) holds with \begin{eqnarray*} m_{1,s}^N(\phi)&=&\frac{N}{(N')^2}\sum_{x\in\mathbb S_N}\phi^2_s(x)\sum_{y\in \mathbb S_N}p_N(y-x)(\xi_s^N(y)-\xi_s^N(x))\\ &=&\frac{N}{(N')^2}\sum_{x\in\mathbb S_N}\phi^2_s(x)\sum_{y\in \mathbb S_N}p_N(y-x)(\xi_s^N(y)1_{\{\xi_s^N(x)=0\}}-\xi_s^N(x)1_{\{\xi_s^N(y)=0\}})\\ &=&\frac{N}{(N')^2}\sum_{x,y\in\mathbb S_N} p_N(y-x)(\phi^2_s(x)-\phi_s^2(y))\xi_s^N(y)(1-\xi_s^N(x))\\ &\leq&\frac{2N||\phi||^2_{\infty}X_s^N(1)}{N'}. \end{eqnarray*} On the other hand, $$|\phi^2_s(x)-\phi_s^2(y)|\leq 2||\phi||^2_{\underline{\alpha}}|x-y|^{\underline{\alpha}}$$ for $\underline{\alpha}<1\wedge\alpha.$ Thus \begin{eqnarray*} m_{1,s}^N(\phi)&\leq& 2(N/N')||\phi||^2_{\underline{\alpha}}\frac{1}{N'} \sum_y\xi_s^N(y)\sum_x|y-x|^{\underline{\alpha}}p_N(y-x)\\ &\leq& X_s^N(1)\frac{2N||\phi||^2_{\underline{\alpha}} |p|_{\underline{\alpha}}}{N'b(N)^{\underline{\alpha}}}. \end{eqnarray*} Since the same estimates bound $-m_{1,s}^N(\phi)$, this proves (\ref{m1sNest}) and completes the proof of (b). For (c), according to (A2), the fact that both $f_0^N$ and $f_1^N$ are less than 1 yields \begin{eqnarray*} |d_s^{N,i}(\phi)|&\leq&\frac{N\sup_N N'|\alpha^N_{i-2}-1|}{N'}||\phi_s||_{\infty} \frac{1}{N'}\sum_x\sum_yp_N(y-x)\xi_s^N(x)(1-\xi_s^N(y))\\ &\leq& C||\phi_s||_{\infty}X_s^N((N/N')f_0^N(\xi_s^N)). \end{eqnarray*} We are done. \hfill$\Box$\medskip \subsection{Convergence of Generators} In this subsection we consider the uniform convergence of ${\cal A}_N$. Recall the definition of the generators of symmetric stable processes and the stable random walk $Z_n$ defined in Section \ref{ourmodel}. For each $N>1$, let $\{P_t^{(N)}:t\geq0\}$ be a rate-$N$ Poisson process which is independent of $\{U_i:i\geq1\}$. Then $$ \hat{Z}_t^N=b(N)^{-1}\sum_{i=1}^{P_t^{(N)}}U_i $$ is a compound Poisson process on $\mathbb R^d$ whose L\'{e}vy measure is given by $$ \nu_N(dy):=\sum_{z\in \mathbb S_N}Np_N(z)\delta_z(dy); $$ see \cite{[S99]}. Note that both the law of $\hat{Z}_1^N$ and the $(\sigma^2,\alpha)$-stable law are infinitely divisible distributions. We also have that $$ {\bf E}\left(e^{-i\hat{Z}_1^N\cdot\eta}\right) =\exp\left\{-N\left(\psi\left(\frac{\eta}{b(N)}\right)-1\right)\right\}. $$ By (\ref{concha}), $$ \hat{Z}_1^N\xrightarrow{(d)}Y_1\quad\textrm{as}\quad N\rightarrow\infty.
$$ According to Theorem 8.7 of \cite{[S99]} and its proof, we see that $$ \rho_N(dy):=\frac{|y|^2}{1+|y|^2}\nu_N(dy)\rightarrow\rho(dy):= \frac{\sigma^2|y|^2}{1+|y|^2}\nu(dy)\quad\textrm{ in }M(\mathbb R^d).$$ For $f\in C_b(\mathbb R^d)$, define $$ ||f||_{BL}=\sup_x|f(x)|\vee\sup_{x\neq y}\frac{|f(x)-f(y)|}{|x-y|}. $$ Let ${\cal P},{\cal Q}$ be two probability measures on $\mathbb R^d$. Set $$ ||{\cal P}-{\cal Q}||_{BL}:=\sup_{||f||_{BL}=1}\left|\int fd{\cal P}-\int fd{\cal Q}\right|. $$ It is easy to see that \begin{equation} \label{probcon2} ||{\cal P}-{\cal Q}||_{BL}=\sup_{||f||_{BL}<\infty}\frac{\left|\int fd{\cal P}-\int fd{\cal Q}\right|}{||f||_{BL}}. \end{equation} By Problem 3.11.2 of \cite{[EK86]}, \begin{equation}\label{probcon} ||{\cal P}-{\cal Q}||_{BL}\leq 3 {\cal M}({\cal P},{\cal Q}), \end{equation} where $\cal M$ denotes the Prohorov metric; see Chapter 3 of \cite{[EK86]}. \begin{lemma} \label{conge} For $\phi\in C_b^{1,3}([0,T]\times\mathbb R^d),$ $$ \lim_{N\rightarrow\infty}\sup_{s\leq T}||{\cal A}_N\phi_s-\frac{\sigma^2\Delta^{\alpha/2}\phi_s}{2}||_{\infty}=0. $$ Moreover, for each $R<\infty$, the rate of convergence is uniform on \begin{eqnarray*} H_R:=\left\{\phi\in C_b^{1,3}([0,T]\times\mathbb R^d): \sup_{s,i,j,k}(||\phi_s||_{\infty}+||(\phi_s)_i||_{\infty} +||(\phi_s)_{ij}||_\infty+||(\phi_s)_{ijk}||_\infty)<R\right\}, \end{eqnarray*} where the subscripts $i,j,k$ indicate partial derivatives with respect to the spatial variable. \end{lemma} {\bf Proof.} Recall that $D_j=\frac{\partial}{\partial x_j}$. Define $$ g_s(x,y)=\left[\phi_s(x+y)-\phi_s(x)-\frac{1}{1+|y|^2} \sum_{j=1}^dy_jD_j\phi_s(x)\right]\cdot \frac{1+|y|^2}{|y|^2}. $$ Since $p_N$ is symmetric, we may rewrite $$ {\cal A}_N\phi_s(x)=\int g_s(x,y)\rho_N(dy), $$ and we also have that $$ \frac{\sigma^2\Delta^{\alpha/2}\phi_s(x)}{2}=\int g_s(x,y)\rho(dy).
$$ Let $h:\mathbb R^d\rightarrow[0,1]$ be a $C_b^{\infty}$ function such that $$ B(0,1)\subset\{x:h(x)=0\}\subset\{x:h(x)<1\}\subset B(0,2) $$ and $$ B(0,2)^c\subset\{x:h(x)=1\}. $$ Define $ h_k(x)=h(kx)$ for $k\geq1$. Let $$ g_k(s,x,y):=h_k(y)g_s(x,y). $$ Then $g_k(s,x,y)=g_s(x,y)$ for $|y|>2/k$. One can check that $$ \sup_k\sup_{\phi\in H_R}\sup_s\sup_x\left(||g_k(s,x,\cdot)||_{\infty} +||g_s(x,\cdot)||_{\infty}\right)<C_d R $$ and, for each $k\geq1$, $$ \sup_{\phi\in H_R}\sup_s\sup_x\Big\|\sum_{j=1}^d \Big|\frac{\partial g_k(s,x,\cdot)}{\partial y_j}\Big|\,\Big\|_{\infty}<kC_d R, $$ where $C_d$ is a constant which only depends on $d$. In particular, for each $k\geq1$, $$ \sup_{\phi\in H_R}\sup_s\sup_x||g_k(s,x,\cdot)||_{BL}<(k+1)C_d R. $$ By (\ref{probcon2}) and (\ref{probcon}), we obtain \begin{eqnarray*} &&\sup_{\phi\in H_R}\sup_{s\leq T}\sup_x\left|\frac{\int g_k(s,x,y)\rho_N(dy)}{\rho_N(\mathbb R^d)}-\frac{\int g_k(s,x,y)\rho(dy)}{\rho(\mathbb R^d)}\right|\\&&\quad\quad\leq(k+1)C_d R \cdot3{\cal M}\left(\frac{\rho_N}{\rho_N(\mathbb R^d)},\frac{\rho}{\rho(\mathbb R^d)}\right)\\&&\quad\quad\rightarrow0,\quad \textrm{ as }N\rightarrow\infty. \end{eqnarray*} By the triangle inequality, \begin{eqnarray*} &&\sup_{\phi\in H_R}\sup_{s\leq T}\sup_x\left|{\int g_k(s,x,y)\rho_N(dy)}-{\int g_k(s,x,y)\rho(dy)}\right|\\&&\quad\quad\leq C_dR\left|\rho_N(\mathbb R^d)-\rho(\mathbb R^d)\right|\\&&\quad\quad\quad+\rho(\mathbb R^d)\sup_{\phi\in H_R}\sup_{s\leq T}\sup_x\left|\frac{\int g_k(s,x,y)\rho_N(dy)}{\rho_N(\mathbb R^d)}-\frac{\int g_k(s,x,y)\rho(dy)}{\rho(\mathbb R^d)}\right|\\&&\quad\quad\rightarrow0,\quad \textrm{ as }N\rightarrow\infty.
\end{eqnarray*} Using the triangle inequality again, \begin{eqnarray*} && \sup_{\phi\in H_R}\sup_{s\leq T}||{\cal A}_N\phi_s-\frac{\sigma^2\Delta^{\alpha/2}\phi_s}{2}||_{\infty}\\ &&\quad\quad \leq\sup_{\phi\in H_R}\sup_{s\leq T}\sup_x\left|\int g_s(x,y)\rho_N(dy)-\int g_k(s,x,y)\rho_N(dy)\right|\\&&\quad\quad\quad +\sup_{\phi\in H_R}\sup_{s\leq T}\sup_x\left|\int g_k(s,x,y)\rho_N(dy)-\int g_k(s,x,y)\rho(dy)\right|\\&&\quad\quad\quad +\sup_{\phi\in H_R}\sup_{s\leq T}\sup_x\left|\int g_k(s,x,y)\rho(dy)-\int g_s(x,y)\rho(dy)\right|\\ &&\quad\quad\leq C_d R\rho_N(\{y:|y|\leq2/k\})+C_d R\rho(\{y:|y|\leq2/k\})\\&&\quad\quad\quad +\sup_{\phi\in H_R}\sup_{s\leq T}\sup_x\left|\int g_k(s,x,y)\rho_N(dy)-\int g_k(s,x,y)\rho(dy)\right|. \end{eqnarray*} Note that $\rho(dy)$ is absolutely continuous with respect to the Lebesgue measure. Letting $N$ go to infinity above yields $$ \limsup_{N\rightarrow\infty}\sup_{\phi\in H_R}\sup_{s\leq T}||{\cal A}_N\phi_s-\frac{\sigma^2\Delta^{\alpha/2}\phi_s}{2}||_{\infty} \leq2C_d R\rho(\{y:|y|\leq2/k\}). $$ Then, since $\rho(\{0\})=0$, the desired result follows readily if we let $k\rightarrow\infty$. \hfill$\Box$\medskip \section{Proof of Theorem \ref{mainUP}} In this section, we assume the stable random walk $Z$ is transient, which is equivalent to $$\int_1^{\infty}\frac{dx}{b(x)^d}<\infty.$$ When $d=\alpha=1$, the above condition implies that $s(x)\rightarrow\infty$ as $x\rightarrow\infty$. The strategy of the proof is the same as that used in \cite{[CP05]}, where the authors worked with a more general class of particle systems which they called voter perturbations. We will specialize the setting there for the reader's convenience. Let $\{\hat{B}_t^{N,x}:x\in\mathbb S_N\}$ denote a rate-$N$ continuous time coalescing random walk system on $\mathbb S_N$ with step function $p_N$ such that $\hat{B}_0^{N,x}=x$.
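These coalescing systems are amenable to direct simulation. Since $p$ is symmetric, the jump chain of the difference of two independent walkers is itself a $p$-random walk, so the escape probability $\gamma_e=\sum_e p(e)P(\hat{\tau}(0,e)=\infty)$ from Section \ref{MR} can be estimated by Monte Carlo once the infinite horizon is truncated. The sketch below assumes the nearest-neighbor kernel in $d=3$ (so $\alpha=2$, $d>\alpha$, and the walk is transient); it is an illustration, not part of the proof.

```python
import random

def escape_probability(dim=3, max_steps=500, n_trials=2000, seed=0):
    """Crude Monte Carlo estimate of gamma_e = sum_e p(e) P(tau_hat(0,e)=oo)
    for the nearest-neighbor kernel: sample e ~ p, run the difference walk
    (itself a p-walk, by symmetry of p), and declare escape if it has not
    returned to 0 within max_steps jumps (a truncation of t = infinity)."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_trials):
        pos = [0] * dim
        axis = rng.randrange(dim)
        pos[axis] += rng.choice((-1, 1))      # start at e ~ p
        for _ in range(max_steps):
            axis = rng.randrange(dim)
            pos[axis] += rng.choice((-1, 1))
            if not any(pos):
                break                          # coalescence of the two walkers
        escaped += any(pos)
    return escaped / n_trials
```

For this kernel $\gamma_e$ equals one minus the return probability of three-dimensional simple random walk, roughly $0.66$; the truncation at \texttt{max\_steps} biases the estimate slightly upward.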
For a finite set $A\subset \mathbb S_N$, let $$\hat{\tau}^N(A)=\inf\{t\geq0:|\{\hat{B}_t^{N,x},x\in A\}|=1\}.$$ We also need a collection of independent (noncoalescing) rate-$N$ continuous time random walks on $\mathbb S_N$ with step function $p_N$, which we will denote $\{B_t^{N,x}:x\in \mathbb S_N \}$, such that $B_0^{N,x}=x$. For any finite subset $A$ of $\mathbb Z^d$, we abuse notation and write $\hat{\tau}^N(A)=\hat{\tau}^N(A/b(N))$. We first check the kernel assumptions in Section 1.2 of \cite{[CP05]}. \begin{lemma} \label{Kerlemma} There exists a positive sequence $\{\epsilon_N^{*}\}$ with $\epsilon_N^*\rightarrow0$ and $N\epsilon_N^*\rightarrow\infty$ such that the following hold: \begin{eqnarray} \lim_{N\rightarrow\infty}NP(B_{\epsilon_N^*}^{N,0}=0)&=&0,\\ \lim_{N\rightarrow\infty}\sum_{e\in\mathbb S_N}p_N(e)P(\hat{\tau}^N(\{0,e\})\in(\epsilon_N^*,t])&=&0\quad \textrm{ for all }t>0,\cr \lim_{N\rightarrow\infty}\sum_{e\in\mathbb S_N}p_N(e)P(\hat{\tau}^N(\{0,e\})>\epsilon_N^*)&=&\gamma_e, \end{eqnarray} and if we define $\sigma_N(A)=P(\hat{\tau}^N(A)\leq\epsilon_N^*)$ for any finite subset $A$ of $\mathbb Z^d$, then \begin{equation}\label{K3}\lim_{N\rightarrow\infty}\sigma_N(A)=\sigma(A)\quad\textrm{ exists}. \end{equation} \end{lemma} {\bf Proof }. First, consider the case $d>\alpha$. We may assume $\epsilon_N^*=N^{-\epsilon^*}$ for some $0<\epsilon^*<1$; we need to find a suitable condition on $\epsilon^*$. Recall that $b$ is a function of regular variation with index $1/\alpha$. Given $0<\epsilon<1/2$, there exist two positive constants $C_{\epsilon}$, $C_{\epsilon}'$ such that for $y\geq1$, $$ C_{\epsilon}y^{1/\alpha-\epsilon}\leq b(y)\leq C_{\epsilon}'y^{1/\alpha+\epsilon}. $$ By (\ref{boundtransition}), we see \begin{eqnarray*} NP(B^{N,0}_{\epsilon_N^*}=0)=NP(B^{0}_{N\epsilon_N^*}=0)\leq C N b(N\epsilon_N^*)^{-d}\leq \frac{C}{C_{\epsilon}^d}\frac{N (N\epsilon_N^*)^{d\epsilon}} {(N\epsilon_N^*)^{d/\alpha}}.
\end{eqnarray*} A simple calculation shows that given $0<\epsilon<1/2$, we can set \begin{equation} \label{ezNstar} \epsilon_N^*=N^{-\epsilon^*}\quad \textrm{ for } \epsilon^*<1-\frac{\alpha}{d-\alpha d\epsilon}<1. \end{equation} Then $ NP(B^{N,0}_{\epsilon_N^*}=0)\rightarrow0$ as $N\rightarrow\infty$. When $d=\alpha=1$, since $s(x)\rightarrow\infty$ as $x\rightarrow\infty$, we can set $x(0)=0$ and, for each $k\geq1$, choose $x(k)>x(k-1)$ such that $s(x)>k$ whenever $x>x(k)$. Then $x(k)\rightarrow\infty$ as $k\rightarrow\infty$. Define the function $s'$ on $\mathbb R^+$ such that $s'(x)=1$ for $0\leq x\leq x(1)$ and $$s'(x)=k \quad\textrm{ for }x(k)< x\leq x(k+1) \text{ and } k\geq 1. $$ It is easy to see that $s'(x)\uparrow\infty$ as $x\rightarrow\infty$ and that $s'(x)<s(x)$ for all $x>x(1)$. Define $$ \epsilon_N^*:=\left((\log N)\wedge\sqrt{s'(N/\log N)}\right)^{-1}. $$ Then $N\epsilon_N^*\geq N/\log N$ and $N\epsilon_N^*\rightarrow\infty$ as $N\rightarrow\infty$. Thus when $N$ is large enough ($N\epsilon_N^*>x(1)$), $$ \epsilon_N^*s(N\epsilon_N^*)\geq s'(N\epsilon_N^*)/\sqrt{s'(N/\log N)}\geq \sqrt{s'(N/\log N)}\xrightarrow{N\rightarrow\infty}\infty. $$ We have that $$ NP(B^{N,0}_{\epsilon_N^*}=0)\leq C N b(N\epsilon_N^*)^{-1}=\frac{C}{\epsilon_N^*s(N \epsilon_N^*)}\rightarrow0 $$ as $N\rightarrow\infty$. Next, \begin{eqnarray*} \sum_{e\in\mathbb S_N}p_N(e)P({\hat{\tau}}^N(\{0,e\})>\epsilon_N^*) &=&\sum_{e\in \mathbb Z^d}p(e)P(\hat{\tau}(0,e)>N\epsilon_N^*)\\ &\rightarrow&\sum_{e\in \mathbb Z^d}p(e)P(\hat{\tau}(0,e)=\infty)=\gamma_e. \end{eqnarray*} Note that $$P\left(\hat{\tau}^N(\{0,e\})\in(\epsilon_N^*,t]\right) =P(\hat{\tau}^N(\{0,e\})>\epsilon_N^*)-P(\hat{\tau}^N(\{0,e\})>t).$$ Then the second limit also holds. For any finite set $A\subset\mathbb Z^d$, $$ \sigma_N(A)=P(\hat{\tau}^N(A)\leq\epsilon_N^*)=P(\hat{\tau}(A)\leq N\epsilon_N^*)\rightarrow P(\hat{\tau}(A)<\infty)=\sigma(A). $$ We are done. \hfill$\Box$\medskip \smallskip Next, we consider the `perturbation' term.
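Before turning to the perturbation term, the choice (\ref{ezNstar}) can be sanity-checked numerically. Assume, for this illustration only, the domain of normal attraction, $b(t)=t^{1/\alpha}$, and take $d=2$, $\alpha=1$, $\epsilon^*=0.3<1-\alpha/d$; then the time scale $N\epsilon_N^*$ diverges while the bound $CNb(N\epsilon_N^*)^{-d}$ on $NP(B^{N,0}_{\epsilon_N^*}=0)$ vanishes (the constant $C$ is omitted below).

```python
def kernel_scales(N, d=2, alpha=1.0, eps_star=0.3):
    """With b(t) = t**(1/alpha) (normal attraction, an assumption made only
    here) and eps_N_star = N**(-eps_star), return the pair
    (N * eps_N_star, N * b(N * eps_N_star)**(-d))."""
    t = N * N ** (-eps_star)          # the time scale N * eps_N_star
    return t, N * t ** (-d / alpha)   # the second entry should tend to 0

scales = [kernel_scales(10 ** k) for k in (2, 4, 6, 8)]
```

The time scales increase ($\approx25,\ 631,\ \ldots$) while the bounds decrease ($\approx0.16,\ 0.025,\ 0.004,\ 0.0006$), consistent with $N\epsilon_N^*\rightarrow\infty$ and $NP(B^{N,0}_{\epsilon_N^*}=0)\rightarrow0$.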
As in \cite{[CP05]}, let $P_F$ denote the set of finite subsets of $\mathbb Z^d$. For $A\in P_F$, $x\in\mathbb S_N$, $\xi\in\{0,1\}^{\mathbb S_N}$, define $$ \chi_N(A,x,\xi)=\prod_{e\in A/{b(N)}}\xi(x+e). $$ We also define \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \beta_N(A)=\left\{\begin{array}{lll} \theta_0^N(p(e))^2,& A=\{e\},\\ 2\theta_0^Np(e)p(e'),&A=\{e,e'\},\\ 0,&\textrm{otherwise,} \end{array}\right. \eeqnn and \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \delta_N(A)=\left\{\begin{array}{lll} \theta_1^N,& A=\emptyset,\\ \theta_1^N[(p(e))^2-2p(e)],&A=\{e\},\\ 2\theta_1^Np(e)p(e'),& A=\{e,e'\},\\ 0,&\textrm{otherwise}. \end{array}\right. \eeqnn \begin{remark} \label{Perburtion} According to the arguments in Section 1.2 of \cite{[CP05]}, the `Perturbation assumptions' (P1) to (P5) there are satisfied by the above coefficients with $l_N=b(N)$. \end{remark} The following proposition is exactly the same with Proposition 3.3 of \cite{[CP05]}. The Proposition 3.3 of \cite{[CP05]} was proved in Section 4 there in which the proof of the results did not use any of the kernel assumptions. Thus we can state the following proposition without proof. \smallskip \begin{proposition} \label{prop3.3} For $K,T>0$, there exists a finite constant $C_1(K,T)$ such that if $\sup_NX_0^N(1)\leq K$, then $$ \sup_N E\left(\sup_{t\leq T}X_t^N(1)^2\right)\leq C_1(K,T). $$ \end{proposition} \smallskip This bound allows us to employ the $L^2$ arguments of \cite{[CP05]}. Next, we consider another technical result, a version of Proposition 3.4 of \cite{[CP05]}. For $A\in P_F$, $\phi:[0,T]\times S_N\longrightarrow\mathbb R$ bounded and measurable, $K>0$ and $t\in[0,T]$, define \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} &&\mathcal{E}_N(A,\phi,K,t)\\ &&\quad=\sup_{X_0^N(1)\leq K}E\left(\left(\int_0^t\left[ \frac{1}{N}\sum_x\phi_s(x)\chi_N(A,x,\xi_s^N)-\sigma_N(A)X_s^N(\phi_s) \right]ds\right)^2\right). 
\eeqnn Set $c_{\beta}=\sup_N|\theta_0^N|\sum_{e,e'\in\mathbb Z^d}p(e)p(e')$ and $\bar{c}=c_{\beta}+k_{\delta}$, where $k_{\delta}=\sup_N|\theta_1^N|$. The following proposition is a version of Proposition 3.4 of \cite{[CP05]}. \begin{proposition} \label{Prop3.4} There are a positive sequence $\epsilon_N\lra0$ as $N\longrightarrow\infty$ and a constant $\underline{\alpha}\leq1\wedge\alpha$ such that for any $K,T>0$ there is a constant $C_2(K,T)>0$ with the following property: for any $\phi\in C_b([0,T]\times\mathbb S_N)$ satisfying $\sup_{s\leq T}||\phi_s||_{\textrm{Lip}}\leq K$, nonempty $A\in P_F$, $\bar{a}\in A$, $J\geq1$ and $0\leq t\leq T$, \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \mathcal{E}_N(A,\phi,K,t)\leq C_2(K,T)&\bigg{[}&\epsilon_N^* e^{\bar{c}\epsilon_N^*}+J^{-2}\\ &&\quad+J^2\left(\epsilon_N|A|+(\sigma_N(A)\wedge(\epsilon_N +\left|\frac{\bar{a}}{b(N)}\right|^{\underline{\alpha}})) \right)\bigg{]}. \eeqnn In particular, $\lim_{N\rightarrow\infty}\sup_{t\leq T}\mathcal{E}_N(A,\phi,K,t)=0.$ \end{proposition} {\bf Proof. } We can follow the arguments in Section 5 and Section 6 of \cite{[CP05]}. In fact, only a small trick is needed. For $\alpha\in(0,2]$ and $d>\alpha$, we may find an $\underline{\alpha}<\alpha$ which is close enough to $\alpha$ so that \begin{equation} \label{Nezlimit} E(|B_{\epsilon_N^*}^{N,0}|^{\underline{\alpha}}) =\frac{N\epsilon_N^*|p|_{\underline{\alpha}}}{b(N)^{\underline{\alpha}}} \lra0\quad\textrm{ as } N\longrightarrow\infty. \end{equation} (Note that $b$ is a function of regular variation with index $1/\alpha$; recall the choice of $\epsilon_N^*$ in Lemma \ref{Kerlemma} when $d>\alpha$.) Fix this $\underline{\alpha}$.
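\begin{remark}
For the reader's convenience, here is a sketch of the exponent calculation behind (\ref{Nezlimit}); it relies only on the assumption that $b$ is regularly varying with index $1/\alpha$, which yields a Potter-type bound $b(N)\geq C_{\epsilon}N^{1/\alpha-\epsilon}$ for all large $N$. Hence, with $\epsilon_N^*=N^{-\epsilon^*}$ as in (\ref{ezNstar}),
$$
\frac{N\epsilon_N^*|p|_{\underline{\alpha}}}{b(N)^{\underline{\alpha}}}
\leq \frac{|p|_{\underline{\alpha}}}{C_{\epsilon}^{\underline{\alpha}}}\,
N^{1-\epsilon^*-\underline{\alpha}(1/\alpha-\epsilon)},
$$
which tends to zero as soon as $\underline{\alpha}(1/\alpha-\epsilon)>1-\epsilon^*$. As $\underline{\alpha}\uparrow\alpha$ and $\epsilon\downarrow0$, the left-hand side of this condition tends to $1>1-\epsilon^*$, so the condition indeed holds once $\epsilon$ is small enough and $\underline{\alpha}$ is close enough to $\alpha$.
\end{remark}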
For $||\phi||_{\textrm{Lip}}\leq K$, (\ref{LIP}) implies \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} E\left(\left|\phi\left(y-\frac{\bar{a}}{b(N)}+B_s^{N,0}\right) -\phi(y)\right|\right)&\leq& 2K E\left|B_s^{N,0}-\frac{\bar{a}}{b(N)}\right|^{\underline{\alpha}\wedge1}\\ &\leq&2KE\left(|B_{s}^{N,0}|^{\underline{\alpha}\wedge1}\right) +2K\left|\frac{\bar{a}}{b(N)}\right|^{\underline{\alpha}\wedge1}.\eeqnn When $\alpha>1$, we may take $\underline{\alpha}\geq1$, so that $\underline{\alpha}\wedge1=1$. Then (\ref{Nezlimit}) shows that $$ E\left(|B_{\epsilon_N^*}^{N,0}|^{\underline{\alpha}\wedge1}\right)\lra0\quad\textrm{ as } N\longrightarrow\infty. $$ When $d=\alpha=1$, for any $\underline{\alpha}<1$, by (\ref{LIP}), \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} && E\left(\left|\phi\left(y-\frac{\bar{a}}{b(N)}+B_s^{N,0}\right) -\phi(y)\right|\right)\\ &&\quad\leq 2K E\left(|B_s^{N,0}-{\bar{a}}/{b(N)}|^{\underline{\alpha}}; |B_s^{N,0}|<s(N)^{-1} \right) +2||\phi||_{\infty}P\left(|B_s^{N,0}|\geq s(N)^{-1}\right)\\ &&\quad\leq \frac{2K}{s(N)^{\underline{\alpha}}} +2K\left|\frac{\bar{a}}{b(N)}\right|^{\underline{\alpha}} +2KP\left(|B_s^{N,0}|\geq s(N)^{-1}\right) .\eeqnn We want to estimate the last term above for $s=\epsilon_N^*$. First, $$P\left(|B_{\epsilon_N^*}^{N,0}|\geq s(N)^{-1}\right) =P\left(|B_{N\epsilon_N^*}^{0}|\geq N\right).$$ By Proposition \ref{propincrease} and (\ref{regular}), $P(|B_{\epsilon_N^*}^{N,0}|\geq s(N)^{-1})$ is bounded by $$ C_{\ref{increasein}}\frac{N\epsilon_N^*}{l(N)}= C_{\ref{increasein}}\frac{l(N\epsilon_N^*s(N\epsilon_N^*))}{l(N)}\leq \frac{C_{\ref{increasein}}s}{C_{\epsilon}(\epsilon_N^*s(N\epsilon_N^*))^{1-\epsilon}}. $$ Recall the choice of $\epsilon_N^*$ in Lemma \ref{Kerlemma} when $d=\alpha=1$. The last term above goes to zero as $N\rightarrow\infty$.
Set $$\epsilon_N=2KE(|B_{\epsilon_N^*}^{N,0}|^{\underline{\alpha}\wedge1})\quad\textrm{ for }d>\alpha$$ and $$\epsilon_N=\frac{2K}{s(N)^{\underline{\alpha}}} +2KP(|B_{\epsilon_N^*}^{N,0}|\geq s(N)^{-1})\quad\textrm{ for }d=\alpha=1.$$ Then $\epsilon_N\rar0$ as $N\rightarrow\infty$ and \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{keyineq}E\left(\left|\phi\left(y-\frac{\bar{a}}{b(N)} +B_{\epsilon_N^*}^{N,0}\right) -\phi(y)\right|\right)\leq \epsilon_N +2K\left|\frac{\bar{a}}{b(N)}\right|^{\underline{\alpha}\wedge1}.\eeqlb With (\ref{keyineq}) in mind, the reader may go back to \cite{[CP05]} for the proof of this proposition. In fact, as in \cite{[CP05]}, we first define $\eta_N$ as in (5.1) of \cite{[CP05]} and decompose it into four error terms $\eta_i^N$, $i=1,2,3,4$, and then decompose $\eta_3^N$ into two terms, $\eta_{3,1}^N$ and $\eta_{3,2}^N$, as in (5.15) and (5.16) of \cite{[CP05]}, respectively. (\ref{keyineq}) will be used when we estimate $\eta_{3,2}^N(s)$ as on p.944 of \cite{[CP05]}. Only part of the proof at the end of Section 5 of \cite{[CP05]} needs to be modified. When estimating $\eta_{3,1}^N$, we also need (\ref{A3}).\hfill$\Box$\medskip The following technical lemma will be used in checking the Compact Containment Condition. \begin{lemma} \label{CCCtech}Let $P_t^N$ denote the semigroup associated with the generator ${\cal A}_N$. We have $$ X_0^N\left(P_s^N(1_{B(0,n)^c})\right)\lra0\quad \textrm{ as } n\longrightarrow\infty $$ uniformly in $N$ and $s\leq t$. \end{lemma} {\bf Proof. } Since $$ X_0^N\left(P_s^N(1_{B(0,n)^c})\right)\leq X_0^N\left(B(0,n/2)^c\right)+X_0^N(1)P(|B_s^{N,0}|>n/2), $$ and (A2) holds, it suffices to show that $P(|B_s^{N,0}|>n/2)$ goes to 0 as $n\rightarrow\infty$, uniformly in $N$ and $s\leq t$. For $0<c<1$, note that \begin{equation} \label{for1}P(|B_s^{N,0}|>cn)=P(|B_{Ns}^0|>cnb(N)). \end{equation} When $\alpha=2$, the desired result follows from Chebyshev's inequality. We only need to consider the case of $\alpha<2$.
Clearly, we can deal separately with the different coordinates of $B_s^{N,0}$, and the distribution of each coordinate of $Y_1$ is a one-dimensional $(\sigma^2,\alpha)$-stable distribution. (A1) implies that each coordinate of $p(\cdot)$ is in the domain of attraction of the one-dimensional $(\sigma^2,\alpha)$-stable distribution. Thus, for this proof only, we can assume $d=1$ (here we drop the assumption $d\geq \alpha$). By Proposition \ref{propincrease} and (\ref{regular}), the right-hand side of (\ref{for1}) is bounded by $$ C_{\ref{increasein}}\frac{Ns}{l(cnb(N))}= C_{\ref{increasein}}\frac{l(b(N))s}{l(cnb(N))} \leq \frac{C_{\ref{increasein}}s}{C_{\epsilon}(cn)^{\alpha-\epsilon}}, $$ where the inequality holds for $cn>1$. The desired result is then immediate. \hfill$\Box$\medskip \noindent {\bf Proof of Theorem \ref{mainUP}.} First, we check the compact containment condition. Let $h_n:\mathbb R^d\rightarrow[0,1]$ be a $C^{\infty}$ function such that $$ B(0,n)\subset\{x:h_n(x)=0\}\subset\{x:h_n(x)<1\}\subset B(0,n+1) $$ and $$ \sup_n\sum_{i,j,k\leq d}||(h_n)_i||_{\infty}+||(h_n)_{ij}||_{\infty} +||(h_n)_{ijk}||_{\infty}\equiv C_h<\infty. $$ Let $\phi_n=\sigma^2|\Delta^{\alpha/2}h_n|/2$. Using Taylor's formula and the dominated convergence theorem, we see that there exists a constant $C>0$ such that $$\sup_n\sum_{i\leq d}||(\Delta^{\alpha/2}h_n)_i||_{\infty}<C.$$ Thus $\sup_n||\phi_n||_{\textrm{Lip}}<C$. We may define $\delta_N^1$, $\delta_N^2$ and $d_0^N$ as on p.927 of \cite{[CP05]}. With Proposition \ref{Prop3.4} in hand, one can check that both Lemma 3.6 and Proposition 3.8 in \cite{[CP05]} are available. To establish the Compact Containment Condition, we may follow the proof of Proposition 3.9 of \cite{[CP05]}. In fact, the argument above and Lemma \ref{CCCtech} show that $$ \lim_{(N,n)\rightarrow\infty}E\left( \int_0^tX_s^N (|{\cal A}_Nh_n| )ds\right)=0.
$$ The remaining argument for the compact containment condition is exactly the same as that in \cite{[CP05]}. Next, with Lemma \ref{conge}, Proposition \ref{prop3.3} and Lemma \ref{estimateMNt} in hand, the proof of C-tightness is analogous to that of Proposition 3.7 of \cite{[CP05]}. By Proposition \ref{prop3.3}, we see that the $L^2$-method in \cite{[CP05]} is available. Thus, we may use the arguments in the proof of Proposition 3.2 in \cite{[CP05]} with some trivial modifications to obtain the desired convergence theorem, Theorem \ref{mainUP}. \hfill$\Box$\medskip \section{Proof of Theorem \ref{main2}} In this section we assume that $$ d=\alpha=1\qquad\textrm{ and }\qquad b(t)=t. $$ By $\underline{\alpha}$ we always mean a constant that is strictly less than 1. We can adapt some of the arguments of \cite{[CP08]} to prove results analogous to those in \cite{[CP08]} without using the fact that $p(\cdot)$ is in the domain of attraction of a stable law. We will refer the reader to these results as we use them. \subsection{Characterization of $\gamma^*$} Recall the definitions of $\hat{\tau}$ and $\tau$ in Section \ref{MR}. For $e,e'\in \mathbb Z$ define the event $\Gamma_T(e,e')=\{\hat{\tau}(e,e')<T,\hat{\tau}(0,e)\wedge \hat{\tau}(0,e')>T\}$, and let \begin{equation} \label{QT} q_T=\sum_{e,e'}p(e)p(e')P(\Gamma_T(e,e')). \end{equation} We have the following characterization of $\gamma^*$. \begin{proposition} \label{Prop2.1} \begin{equation}\label{gamma*} \gamma^*=\lim_{T\rightarrow\infty} (\log T)q_T<\infty. \end{equation} \end{proposition} To prove Proposition \ref{Prop2.1}, we follow the arguments in Section 2 of \cite{[CP08]}. Let $\tau_x=\inf\{t\geq0:B_t^0=x\}$, and write $P^x$ to indicate the law of the walk $B^x_{\cdot}$. Let $\tilde{P}(\cdot)=\sum_ep(e)P^e(\cdot)$, and define \begin{equation} \label{Ht} H(t)=\tilde{P}(\tau_0>t). \end{equation} The following proposition is a version of Proposition 2.2 of \cite{[CP08]}.
\begin{proposition} \label{prop2.2} \begin{equation} \label{2.4} \lim_{t\rightarrow\infty}H(t)\log t=p_1(0)^{-1}. \end{equation} \begin{equation} \label{2.5} \frac{P^x(\tau_0>t)}{H(t)}\leq 2a(x)\quad\textrm{for all}\quad x\in\mathbb Z,\,t>0. \end{equation} \begin{equation} \label{2.6} \lim_{t\rightarrow\infty}\frac{P^x(\tau_0>t)}{H(t)}=a(x)\quad\textrm{for all}\quad x\in \mathbb Z. \end{equation} \begin{equation} \label{2.7} a(x)/|x|,\, x\neq0\textrm{ is bounded on }\mathbb Z. \end{equation} \end{proposition} {\bf Proof. }For (\ref{2.4}), let $G(t)=\int_0^tp_s(0,0)ds$. Proposition \ref{CPtostable} implies $G(t)\sim p_1(0)\log t$ as $t\rightarrow\infty$ in $d=1$. Then one can follow the arguments in the proof of Lemma A.3 in \cite{[CDP01]}, using the last exit time decomposition of Lemma A.2 there and with (A.7) replaced by (\ref{boundtransition}), to obtain that $G(t)H(t)\rar1$ as $t\rightarrow\infty$; see the arguments after (A.8) of \cite{[CDP01]}. Then (\ref{2.4}) holds. Recall that $\{Z_n:n=0,1,2,\cdots\}$ denotes the discrete-time stable random walk defined in Section \ref{ourmodel}. With a slight abuse of notation, let $P^x$ denote the law of the walk starting at $Z_0=x$. Let $\sigma_x=\inf\{n\geq1:Z_n=x\}$. By T29.1 of \cite{[Sp76]}, $$ a(x)=\lim_{n\rightarrow\infty}\sum_{k=0}^n[P^0(Z_k=0)-P^0(Z_k=x)]<\infty \quad\textrm{exists for all }x\textrm{ in }\mathbb Z. $$ Note that P11.1, P11.2 and P11.3 in Chapter III of \cite{[Sp76]} are available for one-dimensional recurrent random walks; see the arguments before P28.1 of \cite{[Sp76]}. Meanwhile, according to T29.1 and P30.1 of \cite{[Sp76]}, (i)' and (ii)' on page 116 in Chapter III of \cite{[Sp76]} also hold for one-dimensional random walks. Then we can check that both P11.4 and P11.5 in Chapter III of \cite{[Sp76]} are also available.
Thus we have $$P^0(\sigma_x<\sigma_0)=\frac{1}{2a(x)}.$$ Since the sequence of states visited by the walk $B_t^0$ is equal in law to the sequence visited by the walk $Y_n$ (with $Y_0=0$), we have $\tilde{P}(\tau_x<\tau_0)=\frac{1}{2a(x)}$. The strong Markov property implies that $$ H(t)\geq\sum_ep(e)P^e(\tau_x<\tau_0,\tau_0>t)\geq\sum_e p(e)P^e(\tau_x<\tau_0)P^x(\tau_0>t) $$ and then (\ref{2.5}) follows. For (\ref{2.6}), by T32.1 of \cite{[Sp76]}, \begin{equation} \label{2.8} \lim_{n\rightarrow\infty}\frac{P^x(\sigma_0>n)}{P^0(\sigma_0>n)}=a(x). \end{equation} Define $$ h(n)=\sum_{0\leq k\leq n}P^0(Y_k=0). $$ Then \begin{equation} \label{asmhn} h(n)\sim p_1(0)\sum_{k=1}^n\frac{1}{k} \quad\textrm{as }n\rightarrow\infty; \end{equation} see page 696 of \cite{[LR91]}. We also have that $$ P^0(\sigma_0>n)=\frac{1}{h(n)}+o\left(\frac{1}{h(n)^2}\right);$$ see the proof of Theorem 6.9 of \cite{[LR91]}. Thus \begin{equation} \label{2.9} P^0(\sigma_0>n)\log n\rightarrow p_1(0)^{-1}. \end{equation} According to a standard large deviations estimate for a rate-1 Poisson process, say $S(t)$, we have $e^{Ct}P(S(t) \notin[t/2,2t])\rar0$ as $t\rightarrow\infty$ for some constant $C>0$. Then the fact that $Y_{S(\cdot)}$ is a realization of $B^0_{\cdot}$ yields $$ (1-o(e^{-Ct}))P^x(\sigma_0>2t)\leq P^x(\tau_0>t)\leq o(e^{-Ct})+P^x(\sigma_0>t/2). $$ The inequalities above, together with (\ref{2.8}) and (\ref{2.9}), imply \begin{equation} \label{2.10} \lim_{t\rightarrow\infty}\frac{P^x(\tau_0>t)}{P^x(\sigma_0>t)}=1. \end{equation} By (\ref{2.4}) we see $H(t)/P^0(\sigma_0>t)\rar1$ as $t\rightarrow\infty$. Then (\ref{2.8}) and (\ref{2.10}) show that (\ref{2.6}) holds. Finally, (\ref{2.7}) follows from the fact that $$\lim_{|x|\rightarrow\infty}\frac{a(x)}{|x|}=0;$$ see P29.3 of \cite{[Sp76]}. We have completed the proof. \hfill$\Box$\medskip The proof of Proposition \ref{Prop2.1} is now exactly the same as that of Proposition 2.1 in Section 2 of \cite{[CP08]}. We omit it here.
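\begin{remark}
The large deviations estimate for the Poisson process invoked in the proof of Proposition \ref{prop2.2} follows from a standard Chernoff bound; we sketch the calculation here for completeness. Since $E(e^{\lambda S(t)})=e^{t(e^{\lambda}-1)}$ for all $\lambda\in\mathbb R$, Markov's inequality with $\lambda=\log 2$ and $\lambda=-\log 2$, respectively, gives
$$
P(S(t)\geq 2t)\leq e^{t(e^{\lambda}-1)-2\lambda t}\Big|_{\lambda=\log 2}=e^{-(2\log 2-1)t},
\qquad
P(S(t)\leq t/2)\leq e^{t(e^{\lambda}-1)-\lambda t/2}\Big|_{\lambda=-\log 2}=e^{-\frac{1}{2}(1-\log 2)t}.
$$
Thus $e^{Ct}P(S(t)\notin[t/2,2t])\rar0$ as $t\rightarrow\infty$ for any constant $0<C<\frac{1}{2}(1-\log 2)$.
\end{remark}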
\subsection{Voter and Biased Voter Estimates} In this subsection, we consider voter and biased voter bounds. We follow the arguments in Section 5 of \cite{[CP08]} step by step. For $b,\nu\geq0$, the 1-biased voter model $\bar{\xi}_t$ is the Feller process taking values in $\{0,1\}^{\mathbb Z}$, with rate function \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.1} \bar{c}(x,\xi)=\begin{cases} (\nu+b)f_1(x,\xi)& \textrm{ if }\xi(x)=0,\cr \nu f_0(x,\xi)& \textrm{ if }\xi(x)=1, \end{cases} \eeqlb where $f_i(x,\xi)$ is as in (\ref{1.1}). The 0-biased voter model is the Feller process $\underline{\xi}_t$ taking values in $\{0,1\}^{\mathbb Z}$ with rate function \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.2} \underline{c}(x,\xi)=\begin{cases} \nu f_1(x,\xi)& \textrm{ if }\xi(x)=0,\cr (\nu+b) f_0(x,\xi)& \textrm{ if }\xi(x)=1. \end{cases} \eeqlb The voter model $\hat{\xi}_t$ is the 1-biased voter model with bias $b=0$. Then by Theorem III.1.5 of \cite{[L85]}, assuming $\underline{\xi}_0=\hat{\xi}_0=\bar{\xi}_0$, we may define $\underline{\xi}_t, \hat{\xi}_t$ and $\bar{\xi}_t$ on a common probability space so that \begin{equation} \label{5.3}\underline{\xi}_t\leq\hat{\xi}_t\leq\bar{\xi}_t\textrm{ for all }t\geq0.\end{equation} For $\xi,\zeta\in\{0,1\}^{\mathbb Z}$, $\xi\leq\zeta$ means $\xi(x)\leq\zeta(x)$ for all $x\in \mathbb Z$. Let us recall the voter model duality; see \cite{[L85]}. Recall also the coalescing random walk system $\{\hat{B}_t^x:x\in \mathbb Z\}$ defined in Subsection \ref{MR}. The duality equation for the rate-1 ($\nu=1$) voter model is: for finite $A\subset\mathbb Z$, \begin{equation} \label{5.4} P(\hat{\xi}_t(x)=1\ \forall\, x\in A)=P(\hat{\xi}_0(\hat{B}_t^x)=1\ \forall\, x\in A).
\end{equation} Define the mean range of the random walk $B_t^0$ by $$ R(t)=E\left(\sum_x1_{\{B_s^0=x \textrm{ for some } s\leq t\}}\right).$$ By a result for the range of the discrete-time stable random walk in \cite{[LR91]}, \begin{equation} \label{5.5} \lim_{t\rightarrow\infty}\frac{R(t)}{t/\log t}=p_1(0)^{-1}; \end{equation} see (1.e) of \cite{[LR91]} and recall (\ref{asmhn}) for the asymptotic behavior of $h(n)$. First, we consider the voter estimates. Let $P_t$, $t\geq0$, be the semigroup of a rate-1 random walk with step distribution $p(\cdot)$. Recall the definition of $|p|_{\underline{\alpha}}$ in Section 3. For $\phi:\mathbb Z\rightarrow\mathbb R$ and $\xi\in\{0,1\}^{\mathbb Z}$, let $$\xi(\phi)=\sum_x\phi(x)\xi(x).$$ \begin{lemma} \label{lemma5.1} Let $\hat{\xi}_t$ denote the rate-$\nu$ voter model. Then for all bounded $\phi:\mathbb Z\rightarrow\mathbb R^+$, $0<\underline{\alpha}<1$ and $t\geq0$, \begin{equation}\label{5.8} E(\hat{\xi}_t(\phi f_0(\hat{\xi}_t)))\leq(\nu t|p|_{\underline{\alpha}}H(2\nu t))^{1/2}||\phi||_{\underline{\alpha}/2}|\hat{\xi}_0|+H(2\nu t)\hat{\xi}_0(\phi). \end{equation} \end{lemma} \begin{remark} \label{remark4.1} (\ref{5.8}) is just a version of (5.8) in Lemma 5.1 of \cite{[CP08]}. We slightly abuse our notation here; moreover, the other statements in Lemma 5.1 of \cite{[CP08]} ((5.6), (5.7) and (5.9) there) hold without modifying any arguments of their proofs. \end{remark} \begin{remark} Recall the definition of $||\phi||_{\underline{\alpha}}$ in Section 3. Note that for $\phi=1$, the right-hand side of (\ref{5.8}) is just $H(2\nu t)|\hat{\xi}_0|$. \end{remark} {\bf Proof. } It suffices to consider $\nu=1$. Using the voter duality equation (\ref{5.4}) and following the arguments in the proof of (5.8) of \cite{[CP08]}, we have $$E(\hat{\xi}_t(\phi f_0(\hat{\xi}_t)))\leq\sum_{e,z}\hat{\xi}_0(z)p(e) E\left(\phi(z+B_t^0)1_{\{\tau(0,e)>t\}}\right).
$$ For any $z$ and $0<\underline{\alpha}<1$, \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} &&\sum_ep(e)E\left(\phi(z+B_t^0)1_{\{\tau(0,e)>t\}}\right)\cr &&\quad\quad\leq\sum_ep(e) E\left(\left(||\phi||_{\underline{\alpha}/2}|B_t^0|^{\underline{\alpha}/2}+\phi(z)\right) 1_{\{\tau(0,e)>t\}}\right)\cr &&\quad\quad\leq ||\phi||_{\underline{\alpha}/2}\left(E(|B_t^0|^{\underline{\alpha}}) \sum_ep(e)P(\tau(0,e)>t)\right)^{1/2}\cr &&\quad\quad\quad+\phi(z)\sum_ep(e)P(\tau(0,e)>t). \eeqnn Since $E(|B_t^0|^{\underline{\alpha}})\leq t|p|_{\underline{\alpha}}$, this proves (\ref{5.8}). \hfill$\Box$\medskip Next, we give some biased voter model bounds. Let $\bar{\xi}_t$ be the 1-biased voter model with rate function (\ref{5.1}). By the same arguments as in Section 4 of \cite{[CP05]}, we can prove the following inequalities without using any of the kernel assumptions. \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.12} E(|\bar{\xi}_t|)&\leq& e^{bt}|\bar{\xi}_0|,\\ \label{5.13} E(|\bar{\xi}_t|^2)&\leq& e^{2bt}\left(|\bar{\xi}_0|^2 +\frac{2\nu+b}{b}(1-e^{-bt})|\bar{\xi}_0|\right)\\ \label{5.14} &\leq& e^{2bt}\left(|\bar{\xi}_0|^2 +(2\nu+b)t|\bar{\xi}_0|\right). \eeqlb In Subsection 4.3 below, we will compare the Lotka-Volterra model $\xi_t^N$ with the biased voter models $\underline{\xi}_t^N,\bar{\xi}_t^N$ on $\mathbb S_N$. In order to construct the coupling $\underline{\xi}_t^N\leq \xi_t^N\leq \bar{\xi}_t^N$, we assume that the voting and bias rates $\nu_N$ and $b_N$ are \begin{equation} \label{5.15} \nu=\nu_N=N-\bar{\theta}\log N \textrm{ and }b=b_N=2\bar{\theta}\log N. \end{equation} As in \cite{[CP08]}, we need improved versions of (\ref{5.12}) and (\ref{5.13}).
For $p\geq2$ and $0<\underline{\alpha}<1$ define \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \kappa_p&=&\kappa_p(b,\nu)=3(b H(2\nu/b^p)+e^2)\textrm{ and }\kappa=\kappa_3,\cr A&=& A(b,\nu)=bR(2\nu/b^3)+3e^2(1+2\nu/b),\cr B_p&=& B_p(b,\nu,\underline{\alpha})=(|p|_{\underline{\alpha}}\nu b^{2-p}H(2\nu/b^p))^{1/2}+b H(2\nu/b^p)(|p|_{\underline{\alpha}}(\nu/b^p+1))^{1/2} \eeqnn and \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} h_1(b,\nu)(t)&=& e^2t^{-1/3}+2\kappa e^{2+2\kappa t},\cr h_2(b,\nu)(t)&=& e^2t^{-1/3}(1+2\nu/b)+5\kappa A e^{1+3\kappa t}. \eeqnn Put $P\phi(x)=\sum_yp(y-x)\phi(y)$ and define the operators \begin{equation} \label{defA*} \bar{\cal A}\phi=\nu(P\phi-\phi)\textrm{ and }{\cal A}^*=(1+b/\nu)\bar{\cal A} \end{equation} and denote the associated semigroups by $\bar{P}_t$ and $P^*_t$, respectively. \begin{remark} Comparing the constants and functions defined above with those defined in (5.16) and (5.17) of \cite{[CP08]}, we see that only $B_p$ is different: we have replaced $2\sigma^2$ there by $|p|_{\underline{\alpha}}$. \end{remark} \begin{remark} \label{remark5.3} For the parameters $\nu=\nu_N$, $b=b_N$ in (\ref{5.15}), (\ref{2.4}) and (\ref{5.5}) imply that $\kappa_p=O(1)$, $A=O(N/\log N)$ and $B_p=O(N^{1/2}(\log N)^{(1-p)/2})$ as $N\rightarrow\infty$. \end{remark} \begin{remark} \label{LVreasoning} The estimates in Remark \ref{remark5.3} will play important roles in the following proofs. This is why we are forced to assume that $\{p(x)\}$ is in the domain of normal attraction of a stable law. Otherwise we would need to replace $\log N$ by $\int_1^N b(s)^{-1}ds$, and then the estimates in Remark \ref{remark5.3} would no longer be available. \end{remark} The following proposition is a version of Proposition 5.4 of \cite{[CP08]}. \begin{proposition} \label{prop5.4} Assume $b\geq1$ and $p\geq2$.
For all $t\geq0$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.19} E(|\bar{\xi}_t|)&\leq& e^{b^{1-p}+\kappa_pt}|\bar{\xi}_0|,\\ \label{5.20} E(|\bar{\xi}_t|^2)&\leq& e^{2+2\kappa t}|\bar{\xi}_0|^2+4Ae^{1+3\kappa t}|\bar{\xi}_0|,\\ \label{5.21} b E(\bar{\xi}_t(f_0(\bar{\xi}_t)))&\leq& h_1(t)|\bar{\xi}_0|,\\ \label{5.22} b E(|\bar{\xi}_t|\bar{\xi}_t(f_0(\bar{\xi}_t))) &\leq& h_1(t)|\bar{\xi}_0|^2+h_2(t)|\bar{\xi}_0|. \eeqlb For all bounded $\phi:\mathbb Z\rightarrow[0,\infty)$, $p\geq3$ and $0<\underline{\alpha}<1$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.23} E(\bar{\xi}_t(\phi))\leq e^{b^{1-p}+(1+\kappa_p)t} \left(\bar{\xi}_0(P_t^*(\phi))+\left[\kappa_pb^{2-p} ||\phi||_{\infty}+B_p||\phi||_{\underline{\alpha}/2} \right]|\bar{\xi}_0| \right). \eeqlb \end{proposition} \begin{remark} \label{remark4.5} Proposition 5.4 of \cite{[CP08]} was proved with the help of Lemma 5.1, Lemma 5.5 and Lemma 5.6 there. We can adapt the arguments in \cite{[CP08]} to obtain analogues of Lemma 5.5 and Lemma 5.6 of \cite{[CP08]}. With a slight abuse of notation, in what follows we assume that these two lemmas are available. \end{remark} \begin{remark} The only difference between Proposition \ref{prop5.4} and Proposition 5.4 of \cite{[CP08]} is that inequality (\ref{5.23}) is different from inequality (5.23) there. In fact, the key reason is that when proving inequality (\ref{5.23}), we use estimate (\ref{5.8}) of Lemma \ref{lemma5.1} of this paper in place of estimate (5.8) of Lemma 5.1 of \cite{[CP08]}. \end{remark} {\bf Proof. }According to Remark \ref{remark4.1}, Remark \ref{remark4.5} and the coupling (\ref{5.3}), we can follow the arguments in \cite{[CP08]} to obtain that (5.36), (5.37) and (5.38) there are available; they will be used in the following proof. Put $\epsilon=b^{-p}$ and assume $\phi\geq0$.
We also have that \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.39} E(|\bar{\xi}_{\epsilon}(b\phi f_0(\bar{\xi}_{\epsilon}))&-&\hat{\xi}_{\epsilon}(b\phi f_0(\hat{\xi}_{\epsilon}))|)\cr &&\leq2b||\phi||_{\infty}E(|\bar{\xi}_{\epsilon}|-|\hat{\xi}_{\epsilon}|)\leq 2b(e^{b\epsilon}-1)||\phi||_{\infty}|\bar{\xi}_0| \eeqlb which is just a version of (5.39) of \cite{[CP08]} (in fact, they are the same). The voter model estimate (\ref{5.8}) tells us \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.40} E(\bar{\xi}_{\epsilon}(b\phi f_0(\bar{\xi}_{\epsilon})))&\leq& 2eb^2\epsilon||\phi||_{\infty}|\bar{\xi}_0|\cr &&+b(|p|_{\underline{\alpha}}\nu\epsilon H(2\nu\epsilon))^{1/2}||\phi||_{\underline{\alpha}/2}|\bar{\xi}_0|+b H(2\nu\epsilon)\bar{\xi}_0(\phi). \eeqlb By the Markov property, we see that for $s\geq\epsilon$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.41} && E(\bar{\xi}_{s}(b\phi f_0(\bar{\xi}_{s}))|{\cal F}_{s-\epsilon})\cr &&\quad \leq\left(2eb^2\epsilon||\phi||_{\infty}+b(|p|_{\underline{\alpha}}\nu\epsilon H(2\nu\epsilon))^{1/2}||\phi||_{\underline{\alpha}/2}\right)|\bar{\xi}_{s-\epsilon}|+b H(2\nu\epsilon)\bar{\xi}_{s-\epsilon}(\phi). \eeqlb Taking expectations in (\ref{5.41}) with $\phi=1$ and recalling the definition of $||\phi||_{\underline{\alpha}}$ in Section 3, we have for $s\geq\epsilon$ \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.42} E(\bar{\xi}_{s}(b f_0(\bar{\xi}_{s})))\leq \kappa_pE(|\bar{\xi}_{s-\epsilon}|). \eeqlb Using this inequality in (5.36) of \cite{[CP08]} yields for $t\geq\epsilon$, $$ E(|\bar{\xi}_t|)\leq E(|\bar{\xi}_{\epsilon}|) +\kappa_p\int_{\epsilon}^tE(|\bar{\xi}_{s-\epsilon}|)ds\leq e^{b\epsilon}|\bar{\xi}_0|+\kappa_p\int_0^tE(|\bar{\xi}_s|)ds, $$ where the second inequality follows from (5.38) of \cite{[CP08]}. This bound also holds for $t\leq \epsilon$. Then Gronwall's inequality (together with $b\epsilon=b^{1-p}$) implies that (\ref{5.19}) holds.
Again using (5.38) of \cite{[CP08]} gives that for $\psi:\mathbb Z\rightarrow\mathbb R^+$, $$ |E(\bar{\xi}_{\epsilon}(\psi))-\bar{\xi}_0(\psi)|\leq (e^{b\epsilon}-1)\bar{\xi}_0(P^*_{\epsilon}\psi)+|\bar{\xi}_0(P^*_{\epsilon}\psi) -\bar{\xi}_0(\psi)|. $$ Note that $$ |P_{\epsilon}^*\psi(x)-\psi(x)|\leq||\psi||_{\underline{\alpha}/2} E(|B^0_{\nu\epsilon(1+b/\nu)}|^{\underline{\alpha}/2}) \leq||\psi||_{\underline{\alpha}/2}(\epsilon(\nu+b)|p|_{\underline{\alpha}})^{1/2}. $$ Thus $$|E(\bar{\xi}_{\epsilon}(\psi))-\bar{\xi}_0(\psi)|\leq \left(eb\epsilon||\psi||_{\infty}+ ||\psi||_{\underline{\alpha}/2}(\epsilon(\nu+b)|p|_{\underline{\alpha}})^{1/2}\right)|\bar{\xi}_0|. $$ Then by the Markov property, for $s\geq\epsilon$, $$ E(\bar{\xi}_{s-\epsilon}(\psi))\leq E(\bar{\xi}_s(\psi))+ \left(eb\epsilon||\psi||_{\infty}+ ||\psi||_{\underline{\alpha}/2}(\epsilon(\nu+b)|p|_{\underline{\alpha}})^{1/2}\right) E(|\bar{\xi}_{s-\epsilon}|). $$ Since $||P_{t-s}^*\phi||_{\underline{\alpha}/2}\leq||\phi||_{\underline{\alpha}/2}$, using the above inequality in (\ref{5.41}) with $\psi=P_{t-s}^*\phi$ in place of $\phi$, we have for $s\geq\epsilon$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{5.43} E(\bar{\xi}_{s}(bP^*_{t-s}\phi f_0(\bar{\xi}_{s}))) \leq\left(\kappa_p b^2\epsilon||\phi||_{\infty}+B_p||\phi||_{\underline{\alpha}/2}\right) E(|\bar{\xi}_{s-\epsilon}|)+\kappa_p E(\bar{\xi}_{s}(P^*_{t-s}\phi)), \eeqlb which is a version of (5.43) of \cite{[CP08]}. Then the remaining arguments for proving (\ref{5.23}) are very similar to those after (5.43) in \cite{[CP08]}. We have proved (\ref{5.19}) and (\ref{5.23}). The other statements in the proposition can be proved in a similar way to that used to prove their counterparts in \cite{[CP08]} (recall Remarks \ref{remark4.1} and \ref{remark4.5}). We omit the details. \hfill$\Box$\medskip \begin{remark} \label{remarksec5} We have followed the arguments in Section 5 of \cite{[CP08]} to obtain some voter and biased voter estimates.
In fact, we have only replaced (5.8) and (5.23) in Section 5 of \cite{[CP08]} by (\ref{5.8}) and (\ref{5.23}), respectively, and modified the arguments in the proofs of (5.19) and (5.23) of \cite{[CP08]}; compare (\ref{5.40})-(\ref{5.43}) with their counterparts (5.40)-(5.43) in Section 5 of \cite{[CP08]}. We can also adapt the arguments there to obtain analogues of all the other statements in Section 5 of \cite{[CP08]} without using the fact that $p(\cdot)$ is in the domain of attraction of a stable law. In the next subsection, we will refer to them directly. \end{remark} \subsection{Four Key Results} In this subsection, we give results analogous to Propositions 4.3, 4.4, 4.5 and 4.7 of \cite{[CP08]}. We first list these results; their proofs are given later. Let \begin{equation} \label{4.1} g(s)=C_{\ref{4.1}}s^{-1/3}e^{C_{\ref{4.1}}s},\end{equation} where $C_{\ref{4.1}}$ will be chosen later. \begin{proposition} \label{prop4.3} (a) For $T>0$ there is a constant $C_{\ref{4.2}}(T)$ such that for all $N\in \mathbb N$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{4.2} \sup_{t\leq T}E(X_t^N(1))&\leq& C_{\ref{4.2}}(T)X_0^N(1),\\ \label{4.3} E\left(\sup_{t\leq T}X_t^N(1)^2\right)&\leq& C_{\ref{4.2}}(T)(X_0^N(1)^2+X_0^N(1)). \eeqlb (b) For all $s>0$ and $N\in\mathbb N$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{4.4} (\log N)E(X_s^N(f_0^N(\cdot,\xi_s^N)))&\leq& g(s)X_0^N(1),\\ \label{4.5} (\log N)E(X_s^N(1)X_s^N(f_0^N(\cdot,\xi_s^N)))&\leq& g(s)(X_0^N(1)^2+X_0^N(1)). \eeqlb \end{proposition} Let ${\cal A}^*_N(\psi)=\frac{1}{N}(N+\bar{\theta}\log N){\cal A}_N(\psi)$ with semigroup $P_t^{N,*}$.
\begin{proposition} \label{prop4.4} For $p\geq 3$ there is a constant $C_{\ref{4.6}}(p)$ such that for any $t\geq0$ and $\phi:\mathbb R\rightarrow\mathbb R^+$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{4.6} E(X_t^N(\phi))&\leq& e^{(\log N)^{1-p}}e^{C_{\ref{4.6}}t} X_0^N(P_t^{N,*}\phi)\cr &&\quad\quad+C_{\ref{4.6}}e^{C_{\ref{4.6}}t}||\phi||_{1/2} (\log N)^{(1-p)/2}X_0^N(1). \eeqlb \end{proposition} \begin{proposition} \label{prop4.5} For $p\geq 3$ there is a constant $C_{\ref{4.7}}(p)$ such that for all $\phi:\mathbb R\rightarrow\mathbb R^+$, if $\epsilon=(\log N)^{-p}$, then \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{4.7} E(X_{\epsilon}^N(\log N\phi f_0^N(\cdot,\xi_{\epsilon}^N)))\leq C_{\ref{4.7}}X_0^N(1)||\phi||_{1/2}(\log N)^{(1-p)/2}+C_{\ref{4.7}}X_0^N(\phi). \eeqlb \end{proposition} Let $\sup_{K,T}$ indicate a supremum over all $X_0^N\in M(\mathbb S_N)$, $\phi:\mathbb R\rightarrow\mathbb R$ and $t\geq0$ satisfying $X_0^N(1)\leq K$, $||\phi||_{\textrm{Lip}}\leq K$ and $t\leq T$. \begin{remark} Note that if $||\phi||_{\textrm{Lip}}\leq K$, then $||\phi||_{\underline{\alpha}}\leq 2K$ for any $0<\underline{\alpha}<1$. \end{remark} \begin{proposition} \label{prop4.7} For every $K,T>0$ and $0<p<2$, \begin{equation} \label{4.7a}\lim_{N\rightarrow\infty}\sup_{K,T}E\left(\left|\int_0^t\left[X_s^N(\log N\phi^2f_0^N(\cdot,\xi_s^N))-p_1(0)^{-1}X_s^N(\phi^2)\right]ds\right|^p\right)=0 \end{equation} and for $i=2,\,3$, \begin{equation} \label{4.7b} \lim_{N\rightarrow\infty}\sup_{K,T} E\left(\left|D_t^{N,i} -\int_0^t\theta_{i-2}\gamma^*X_s^N(\phi)ds\right|^p\right)=0. \end{equation} \end{proposition} Recall the rescaled Lotka-Volterra models in Section \ref{ourmodel} and assume (A2) holds. Also recall the 1-biased voter model and 0-biased voter model with rates $\nu=\nu_N$ and $b=b_N$ defined in the last subsection. Set $\bar{\xi}_t^N(x)=\bar{\xi}_t(Nx)$ and $\underline{\xi}_t^N(x)=\underline{\xi}_t(Nx)$ for $x\in \mathbb S_N$.
Thus the rate function of $\bar{\xi}_t^N$ is given by\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \bar{c}(x,\xi)=\begin{cases} (\nu_N+b_N)f_1^N(x,\xi)& \textrm{ if }\xi(x)=0,\cr \nu_N f_0^N(x,\xi)& \textrm{ if }\xi(x)=1 \end{cases} \eeqnn and the rate function of $\underline{\xi}_t^N$ is given by \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \underline{c}(x,\xi)=\begin{cases} \nu_N f_1^N(x,\xi)& \textrm{ if }\xi(x)=0,\cr (\nu_N+b_N) f_0^N(x,\xi)& \textrm{ if }\xi(x)=1. \end{cases} \eeqnn Assume $N$ is large enough ($N\geq N_0$) so that $\nu_N>0$ and $b_N>1$. As in the last subsection, we may construct the three processes on one probability space so that $\underline{\xi}^N_0=\hat{\xi}^N_0=\bar{\xi}^N_0$ and \begin{equation} \label{6.1}\underline{\xi}^N_t\leq\hat{\xi}^N_t\leq\bar{\xi}^N_t\textrm{ for all }t\geq0.\end{equation} Define $$\bar{X}_t^N=\frac{1}{N'}\sum_{x\in\mathbb S_N}\bar{\xi}_t^N(x)\delta_{x} \textrm{ and }\underline{X}_t^N=\frac{1}{N'}\sum_{x\in\mathbb S_N}\underline{\xi}_t^N(x)\delta_{x}. $$ It follows that \begin{equation} \label{6.2} \underline{X}_t^N\leq X_t^N\leq\bar{X}_t^N\textrm{ for all }t\geq0. \end{equation} Keep Remark \ref{remark5.3} in mind. Applying Proposition \ref{prop5.4} gives that there are constants $C_{\ref{6.3}}$ and $C_{\ref{4.1}}$ such that for all $N\geq N_0$ and $t\geq0$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{6.3} E(\bar{X}_t^N(1))&\leq& C_{\ref{6.3}}e^{C_{\ref{6.3}}t}\bar{X}_0^N(1),\\ \label{6.4} E(\bar{X}_t^N(1)^2)&\leq& C_{\ref{6.3}}e^{C_{\ref{6.3}}t}(\bar{X}_0^N(1)^2 +\bar{X}_0^N(1)) \eeqlb and if $g$ is as in (\ref{4.1}), then \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{6.5} (\log N)E(\bar{X}_t^N(f_0^N(\cdot,\bar{\xi}_t^N))) &\leq& g(t)X_0^N(1),\\ \label{6.6} (\log N)E(\bar{X}_t^N(1)\bar{X}_t^N(f_0^N(\cdot,\bar{\xi}_t^N)))&\leq& g(t)(X_0^N(1)^2+X_0^N(1)).
\eeqlb Similarly, there exists a constant $C_{\ref{6.7}}$ such that \begin{equation} \label{6.7} E(\bar{X}_t^N(1))-E(\underline{X}_t^N(1))\leq C_{\ref{6.7}}[(\log N)^{-2}+t]X_0^N(1),\quad0\leq t\leq1 \end{equation} whose counterpart in \cite{[CP08]} is (6.7). We first prove Proposition \ref{prop4.3}; in fact, we only give an outline. {\textit{ Proof of Proposition \ref{prop4.3}}.} With inequalities (\ref{6.3}), (\ref{6.4}) and the coupling (\ref{6.2}) in hand, part (a) follows from the strong $L^2$ inequality for non-negative submartingales and the fact that $\bar{X}_t^N(1)^2$ is a submartingale; see Remark \ref{remarksec5} and (5.29) of \cite{[CP08]}. For part (b), if we have analogues of the results in Proposition 6.1 of \cite{[CP08]}, then part (b) follows from Remark \ref{remark5.3}. And the proof of Proposition 6.1 of \cite{[CP08]} works here once we replace (5.40) there by (\ref{5.40}) of the last subsection; see Remark \ref{remarksec5}. \hfill$\Box$\medskip {\textit{ Proof of Proposition \ref{prop4.4}}}. Recall that $\bar{\xi}_t$ is the biased voter model with rates $\nu=N-\bar{\theta}\log N$ and $b=2\bar{\theta}\log N$, and $\bar{\xi}_t^N(x)=\bar{\xi}_t(Nx)$, $x\in\mathbb S_N$. For $\psi:\mathbb R\rightarrow\mathbb R^+$, define $\phi:\mathbb Z\rightarrow\mathbb R^+$ by $\phi(x)=\psi(x/N)$.
Then $||\phi||_{\infty}=||\psi||_{\infty}$ and for $0<\underline{\alpha}<1$,
\begin{eqnarray*}
\sup_{x\neq y,|x-y|\leq1}\frac{|\phi(x)-\phi(y)|}{|x-y|^{\underline{\alpha}/2}}&\leq& \sup_{x\neq y,|x-y|\leq1}\frac{|\phi(x)-\phi(y)|}{|x-y|^{1/2}}\cr &\leq& N^{-1/2}\sup_{x\neq y,|x-y|\leq1/N}\frac{|\psi(x)-\psi(y)|}{|x-y|^{1/2}}.
\end{eqnarray*}
Thus $||\phi||_{\underline{\alpha}/2}\leq N^{-1/2}||\psi||_{1/2}.$ Note that ${\cal A}^*_N\psi(x)=(N+\bar{\theta}\log N)\sum_{y\in\mathbb S_N}p_N(y-x)\psi(y)$ with semigroup $P_t^{N,*}$ and ${\cal A}^*\phi(x)=(N+\bar{\theta}\log N)\sum_yp(y-x)\phi(y)$ with semigroup $P_t^*$; see (\ref{defA*}) for the definition of ${\cal A}^*$. We have that $P_t^*\phi(x)=P_t^{N,*}\psi(x/N)$ and $\bar{\xi}^N_t(\psi)=\bar{\xi}_t(\phi)$. According to (\ref{5.23}), we obtain
\begin{eqnarray*}
E(\bar{\xi}^N_t(\psi))\leq e^{b^{1-p}+(1+\kappa_p)t} \left(\bar{\xi}^N_0(P_t^{N,*}(\psi))+\left[\kappa_pb^{2-p} ||\psi||_{\infty}+B_pN^{-1/2}||\psi||_{1/2} \right]|\bar{\xi}^N_0| \right).
\end{eqnarray*}
Since $p\geq3$, Remark \ref{remark5.3} implies $\kappa_pb^{2-p}+B_pN^{-1/2}=O((\log N)^{(1-p)/2})$ as $N\rightarrow\infty$ (indeed, $b^{2-p}=O((\log N)^{2-p})$ and $2-p\leq(1-p)/2$ for $p\geq3$). Then the fact that $\bar{\theta}\geq1$ (which implies $b\geq\log N$) and the coupling (\ref{6.2}) yield the desired inequality (\ref{4.6}). \hfill$\Box$\medskip {\textit{ Proof of Proposition \ref{prop4.5}}}. Let $\epsilon=b^{-p}$. According to Remark \ref{remarksec5}, we may use (5.32) of \cite{[CP08]} to obtain that $$ E(X_{\epsilon}^N(b\phi f_0(\xi_{\epsilon}^N)))\leq E(\bar{X}_{\epsilon}^N(b\phi f_0(\bar{\xi}_{\epsilon}^N)))+2b||\phi||_{\infty} (E(\bar{X}_{\epsilon}^N(1)-X_{\epsilon}^N(1))). $$ Applying (5.62) of \cite{[CP08]} and (\ref{5.40}) gives $$ E(X_{\epsilon}^N(b\phi f_0(\xi_{\epsilon}^N)))\leq (6eb^{2-p}||\phi||_{\infty}+B_p N^{-1/2}||\phi||_{1/2})X_0^N(1) +\kappa_p X_0^N(\phi). $$ Then Remark \ref{remark5.3} yields (\ref{4.7}).
\hfill$\Box$\medskip We will give the proof of Proposition \ref{prop4.7} in the final subsection. In the next subsection, with the help of the four propositions in this subsection, we prove Theorem \ref{main2}. \subsection{Convergence Theorem} In this subsection, we follow the strategy in Section 4 of \cite{[CP08]} to obtain Theorem \ref{main2}. First, we check the compact containment condition. \begin{proposition} \label{Prop4.12} For all $\epsilon>0$ there is an $n\in \mathbb N$ so that $$ \sup_NP\left(\sup_{t\leq\epsilon^{-1}}X_t^N(B(0,n)^c)>\epsilon\right)<\epsilon. $$ \end{proposition} {\bf Proof. }The proof is similar to that of Proposition 4.12 of \cite{[CP08]}; we only give an outline here. Recall that $b(N)=N$. Let $h_n:\mathbb R^d\rightarrow[0,1]$ be a $C^{\infty}$ function such that $$ 1_{\{|x|>n+1\}}\leq h_n(x)\leq 1_{\{|x|>n\}} $$ and $$ \sup_n\sum_{i,j,k\leq d}||(h_n)_i||_{\infty}+||(h_n)_{ij}||_{\infty} +||(h_n)_{ijk}||_{\infty}\equiv C_h<\infty. $$ By the semimartingale decomposition, $$ \sup_{t\leq T}X_t^N(h_n)\leq X_0^N(h_n)+\sum_{i=1}^3 \sup_{t\leq T}|D_t^{N,i}(h_n)|+\sup_{t\leq T}|M_t^N(h_n)|. $$ We need to check that the right-hand side tends to zero as $N,n\rightarrow\infty$. Let $$ \eta_N:=\sup_n||{\cal A}_N(h_n)-\frac{\sigma^2\Delta^{1/2}h_n}{2}||_{\infty}. $$ Then $\lim_{N\rightarrow\infty}\eta_N=0$ by Lemma \ref{conge}. Note that
\begin{eqnarray*}
&&\frac{1}{N'}\sum_{x,y}|h_n(x)-h_n(y)|p_N(x-y)\xi_s^N(y)\\ &&\qquad\leq \frac{||h_n||_{\underline{\alpha}}}{N'}\sum_{y}\sum_{x}|x-y|^{\underline{\alpha}}p_N(x-y)\xi_s^N(y)\\ &&\qquad\leq\frac{C_h|p|_{\underline{\alpha}}}{N^{\underline{\alpha}}}X_s^N(1).
\end{eqnarray*}
Set $\eta'_N(T)=C_{\ref{4.2}}(T)(\eta_N+\bar{\theta}C_h\log N|p|_{\underline{\alpha}}/{N^{\underline{\alpha}}})T$.
We have, as in the derivation of (4.17) in \cite{[CP08]},
\begin{eqnarray}
\label{4.16} && E\left(\sup_{t\leq T}X_t^N(h_n)\right)\leq X_0^N(h_n)+2(E\langle M^N(h_n)\rangle_T)^{1/2}+\eta'_N(T) X_0^N(1)\cr &&\quad+C_h\int_0^TE(X_s^N(h_{n-1}))ds+2\bar{\theta}\int_0^TE(X_s^N(h_n\log Nf_0^N(\xi_s^N)))ds.
\end{eqnarray}
Applying Proposition \ref{prop4.5} and (\ref{4.2}), we obtain that the last integral above is bounded by \begin{equation} \label{4.17} \eta''_N(T)X_0^N(1)+C_{\ref{4.7}}\int_0^TE(X_s^N(h_n))ds, \end{equation} where $\eta''_N(T)=C_{\ref{4.2}}(T)[(\log N)^{-2}+C_{\ref{4.7}}C_h T/\log N].$ By Lemma \ref{estimateMNt} and (\ref{4.2}), there is a constant $C_{\ref{4.11}}(T)$ such that if $\phi_s=\psi$, then for any $\underline{\alpha}<1$ and $0\leq s\leq T,$ \begin{equation} \label{4.11} E(|m_{1,s}^N|+|m_{2,s}^N|)\leq C_{\ref{4.11}}(T)||\phi||^2_{\underline{\alpha}} (\log N/N^{\underline{\alpha}})X_0^N(1). \end{equation} Then the above inequality, (\ref{4.17}) and Lemma \ref{estimateMNt} give (recall $N/N'=\log N$) \begin{equation} \label{4.18} E(\langle M^N(h_n)\rangle_T)\leq \eta_N'''(T)X_0^N(1)+2C_{\ref{4.7}}\int_0^TE(X_s^N(h_n))ds, \end{equation} where $\eta_N'''(T)=2\eta_N''(T)+C_{\ref{4.11}}(T)TC_h^2\log N/N^{\underline{\alpha}}.$ Finally, let $B_t^{N,*}$ be the continuous-time random walk with semigroup $P_t^{N,*}$ defined before Proposition \ref{prop4.4}, with $B_0^{N,*}=0$. Note that $$ P\left(|B_s^{N,*}|\geq \frac{n-1}{2}\right)=P\left(|B^0_{(N+\bar{\theta}\log N)s}|\geq \frac{N(n-1)}{2}\right). $$ Since $b(t)=l(t)=t$, Proposition \ref{propincrease} yields that the left-hand side above goes to 0 uniformly in $N\in\mathbb N$ and $0\leq s\leq T$ as $n\rightarrow\infty$. Thus with the help of Proposition \ref{prop4.4} and the inequalities (\ref{4.16}), (\ref{4.17}) and (\ref{4.18}) we can conclude: for any $T,\epsilon>0$ there is an $N_0$ such that $$ E\Big(\sup_{t\leq T}X_t^N(h_n)\Big)<\epsilon \quad\textrm{for all } N\geq N_0,\ n\geq N_0. $$ The desired result is immediate.
\hfill$\Box$\medskip {\textit{ Proof of Theorem \ref{main2}. }} All the necessary ingredients have now been assembled. First, with (\ref{4.4}) and (\ref{4.5}) in hand, by the same arguments as those in the proof of Lemma 4.10 of \cite{[CP08]}, there exists a constant $C_{\ref{4.12}}(T)$ such that for all $0\leq s\leq t\leq T$, \begin{equation} \label{4.12} E\left(\left[\int_s^t X_r^N(\log N f_0^N(\xi_r^N))dr\right]^2\right) \leq C_{\ref{4.12}}(T)(t-s)^{4/3}(X_0^N(1)^2+X_0^N(1)). \end{equation} Now, recall the decomposition of $X_t^N(\phi_t)$ in Section \ref{secsemi}. With the help of Lemma \ref{estimateMNt} and (\ref{4.12}), by the same arguments as those in the proof of Proposition 4.11 of \cite{[CP08]}, for each $\phi\in C_b^{1,3}({\mathbb R_+}\times\mathbb R)$, each of the families $\{X_{\cdot}^N(\phi),N\in\mathbb N\}$, $\{D_{\cdot}^{N,i},N\in\mathbb N\}$, $i=1,2,3,$ $\{\langle M^N(\phi)\rangle_{\cdot},N\in\mathbb N\}$, and $\{M_{\cdot}^N(\phi),N\in\mathbb N\}$ is C-tight in $D([0,\infty),\mathbb R)$. The C-tightness of $\{P_N,N\in\mathbb N\}$ is now immediate from Proposition \ref{Prop4.12} and Theorem II.4.1 of \cite{[P02]}. Then, to check that any limit point of $\{P_N\}$ is the law claimed in the theorem, one can follow the same arguments as those in the proof of Proposition 4.2 of \cite{[CP08]}, using Proposition \ref{prop4.7} above. \hfill$\Box$\medskip \subsection{Proof of Proposition \ref{prop4.7}} For $N$ fixed, let $\hat{\xi}_t$ be the rate $\nu_N=N-\bar{\theta}\log N$ voter model on $\mathbb Z$ with rates as in (\ref{5.1}) for $b=0$ and $\nu=\nu_N$. Define $\hat{\xi}_t^N(x)=\hat{\xi}_t(xN)$, $x\in\mathbb S_N$, the rate $\nu_N$ voter model on $\mathbb S_N$. Recall the independent and coalescing random walk systems $\{B_t^x\}$ and $\{\hat{B}_t^x\}$ defined in Section \ref{MR}.
We need to introduce their rescaled versions as follows: for $x,y\in\mathbb S_N$, \begin{equation} \label{7.13} B_t^{N,x}=B^{xN}_{\nu_Nt}/N,\quad \hat{B}_t^{N,x}=\hat{B}_{\nu_Nt}^{xN}/N, \end{equation} and $$ \tau^N(x,y)=\tau(Nx,Ny)/\nu_N,\quad\hat{\tau}^N(x,y)=\hat{\tau}(Nx,Ny)/\nu_N. $$ Define $$ \varepsilon(t)=\sup_{x\in\mathbb Z}|tp_t(0,x)-p_1(x/t)|\vee(1/t^2). $$ By Proposition \ref{CPtostable}, $\varepsilon(t)\rar0$ as $t\rightarrow\infty$. Then for each $k\in\mathbb Z^+$, there exists a $t(k)$ such that for $t>t(k)$, $\varepsilon(t)\leq 1/k$. Define
\begin{eqnarray}
\varepsilon'(t)=\begin{cases}1,&0\leq t\leq t(1),\\ 1/k,& t(k)<t\leq t(k+1). \end{cases}
\end{eqnarray}
Then $\varepsilon'(t)\downarrow0$ as $t\rightarrow\infty$ and $\varepsilon'(t)\geq\varepsilon(t)$ for $t>t(1)$. Let $\hat{\eta}_N=e^{-\sqrt{\log N}}$, $a_N=\nu_N(2-\hat{\eta}_N)/\log N$ and $$ \epsilon_N'=({\log\log N})^{-1}\vee\sqrt{\varepsilon'(a_N/\log\log N)}. $$ Then
\begin{eqnarray*}
&&\epsilon_N:=\left(\varepsilon(a_N\epsilon'_N)/\epsilon'_N +\frac{\log\log N}{\log N}\right)\\&&\quad\leq\varepsilon'(a_N\epsilon'_N) \left(\sqrt{\varepsilon'(a_N/\log\log N)}\right)^{-1}+\frac{\log\log N}{\log N}\\ &&\quad\leq\sqrt{\varepsilon'(a_N/\log\log N)}+\frac{\log\log N}{\log N} \rar0
\end{eqnarray*}
as $N\rightarrow\infty$. Define the sequences \begin{equation} \label{7.3} t_N=\frac{\epsilon'_N}{\log N},\quad K_N=(\log N)^{1/2},\quad\delta_N=K_Nt_N. \end{equation} We assume that $N$ is large enough so that $\epsilon'_N\vee t_N\vee \delta_N\leq 1$; this holds for large $N$ since $\epsilon'_N\rar0$ and $t_N\vee\delta_N\leq\epsilon'_N$. Note also that $$ \frac{\delta_N}{\epsilon'_N}=\frac{K_Nt_N}{\epsilon'_N}=(\log N)^{-1/2}\rar0 \quad\textrm{as }N\rightarrow\infty. $$ The following lemma is a version of Lemma 7.6 of \cite{[CP08]}.
\begin{lemma} \label{lemma7.6} There is a constant $C_{\ref{7.14}}$ such that
\begin{eqnarray}
\label{7.14} &&\frac{\log N}{N'}\sum_{x,e}p_N(e)P\left(\hat{\xi}_0^N(B_{t_N}^{N,x})= \hat{\xi}_0^N(B_{t_N}^{N,x+e})=1,\tau^N(x,x+e)>t_N\right)\cr &&\quad\quad\leq C_{\ref{7.14}}(\epsilon'_N)^{-1}\int\int_{|w-z|\leq\delta_N}d \hat{X}^N_0(w)d\hat{X}^N_0(z)+C_{\ref{7.14}}\epsilon_N \hat{X}_0^N(1)^2.
\end{eqnarray}
\end{lemma} {\bf Proof. }By translation invariance and symmetry, the left side of (\ref{7.14}) equals
\begin{eqnarray}
\label{7.15} &&(N')^{-2}\sum_{w,z}\hat{\xi}_0^N(w)\hat{\xi}_0^N(z)\sum_ep_N(e)\cr &&\qquad\quad\times\left[\sum_x NP(B_{t_N}^{N,0}=w-x, B_{t_N}^{N,e}=z-x, \tau^N(0,e)>t_N)\right]\cr &&\quad=(N')^{-2}\sum_{w,z}\hat{\xi}_0^N(w)\hat{\xi}_0^N(z) \sum_ep_N(e)NP(B_{2t_N}^{N,e}=z-w,\tau_0^{N,e}>2t_N)\cr &&\quad\equiv\Sigma_d^N+\Sigma_c^N,
\end{eqnarray}
where $\tau_0^{N,e}=\inf\{s:B_s^{N,e}=0\}$, and $\Sigma_d^N$, respectively $\Sigma_c^N$, denotes the contribution to (\ref{7.15}) from $w,z$ satisfying $|w-z|\leq K_Nt_N$, respectively $|w-z|>K_Nt_N$. Let $$ \tilde{P}((B_{\cdot}^N,\tau_0^N)\in\cdot)=\sum_ep_N(e)P((B^{N,e}_{\cdot}, \tau_0^{N,e})\in\cdot). $$ For $\Sigma_d^N$, use (\ref{boundtransition}) and the Markov property at time $t_N$ to see that
\begin{eqnarray*}
&& N\tilde{P}(B_{2t_N}^N=z-w,\tau_0^N>2t_N)\\ &&\quad\quad\leq N\tilde{E}(P(B_{t_N}^{N,0}=z-w-B_{t_N}^N); \tau_0^N>t_N)\\ &&\quad\quad\leq CN\tilde{P}(\tau_0^N>t_N)(\nu_Nt_N)^{-1}\\ &&\quad\quad\leq C\frac{NH(\nu_Nt_N)}{\nu_Nt_N}.
\end{eqnarray*}
By (\ref{2.4}), there is a constant $C_{\ref{7.16}}$ such that \begin{equation} \label{7.16} \Sigma_d^N\leq C_{\ref{7.16}}(\epsilon'_N)^{-1}\int\int_{|w-z|\leq K_Nt_N} d\hat{X}^N_0(w)d\hat{X}^N_0(z). \end{equation} It is more complicated to bound $\Sigma_c^N$.
Using the Markov property at time $\hat{\eta}_Nt_N$ gives
\begin{eqnarray*}
&&\tilde{P}\left(B_{2 t_N}^N=w-z,\tau_0^N>2t_N\right)\\ &&\qquad\leq\tilde{P}\left(\tau_0^N>\hat{\eta}_Nt_N, |B_{\hat{\eta}_Nt_N}^N|>\frac{K_Nt_N}{2}\right)\sup_{x'} P\left(B_{(2-\hat{\eta}_N)t_N}^{N,0}=x'\right)\cr &&\qquad\quad+\tilde{E}\left( P\left(B_{(2-\hat{\eta}_N)t_N}^{N,0}=w-z-B_{\hat{\eta}_Nt_N}^N\right); \tau_0^N>\hat{\eta}_Nt_N, |B_{\hat{\eta}_Nt_N}^N|\leq\frac{K_Nt_N}{2}\right)\cr &&\qquad=\Sigma_{1c}^N+\Sigma_{2c}^N,\quad\qquad\textrm{say}.
\end{eqnarray*}
Note that $$\tilde{P}\left(|B_{\hat{\eta}_Nt_N}^N|>\frac{K_Nt_N}{2}\right)= \sum_ep_N(e)P\left(|B_{N\hat{\eta}_Nt_N}^0+e|>\frac{NK_Nt_N}{2}\right),$$ which is bounded by $$ \frac{2|p|_{1/2}}{(NK_Nt_N)^{1/2}} +P\left(|B_{N\hat{\eta}_Nt_N}^0|>\frac{NK_Nt_N}{4}\right). $$ By Proposition \ref{propincrease}, $$ P\left(|B_{N\hat{\eta}_Nt_N}^0|>\frac{NK_Nt_N}{4}\right)\leq \frac{4C_{\ref{increasein}}N\hat{\eta}_Nt_N}{{NK_Nt_N}}= 4C_{\ref{increasein}}\hat{\eta}_N/K_N. $$ (Note that $l(t)=b(t)=t$.) Thus by (\ref{7.6}),
\begin{eqnarray}
\label{1c} \Sigma_{1c}^N\leq \frac{C\left(\hat{\eta}_N/K_N+1/(NK_Nt_N)^{1/2}\right)}{\nu_N(2-\hat{\eta}_N)t_N}.
\end{eqnarray}
Let us consider $\Sigma_{2c}^N$. By the definition of $\varepsilon(t)$ and (\ref{estistable}) (recall $d=\alpha=1$),
\begin{eqnarray}
\label{inequ4.7} p_t(0,x)&\leq& \frac{\varepsilon(t)}{t}+\frac{p_1(x/t)}{t}\cr &\leq& \frac{1}{t}\left({\varepsilon(t)}+c_2\left(1\wedge \left|\frac{t}{x}\right|^2\right)\right).
\end{eqnarray}
Note that for $|w-z|>K_Nt_N$, on $\left\{|B_{\hat{\eta}_Nt_N}^N|\leq\frac{K_Nt_N}{2}\right\}$, $$|w-z-B_{\hat{\eta}_Nt_N}^N|^{-1}\leq\frac{2}{K_Nt_N}. $$ Thus by inequality (\ref{inequ4.7}), $\Sigma_{2c}^N$ is less than $$ \left(\varepsilon(\nu_N(2-\hat{\eta}_N)t_N) +c_2\left(1\wedge\left(\frac{2\nu_N(2-\hat{\eta}_N)}{NK_N}\right)^2\right)\right) \frac{H(\nu_N\hat{\eta}_Nt_N)}{\nu_N(2-\hat{\eta}_N)t_N}.
$$ Thus by $a_N\epsilon'_N=\nu_N(2-\hat{\eta}_N)t_N$ and (\ref{2.4}),
\begin{eqnarray}
\label{2c} \Sigma_{2c}^N&\leq& C\left(\varepsilon(a_N\epsilon'_N)+1/K_N^2\right) \frac{\log N}{\nu_N\epsilon'_N\log (\nu_N\hat{\eta}_Nt_N)}\cr &\leq& C\left(\varepsilon(a_N\epsilon'_N)/(N\epsilon_N')+(N\log N\epsilon_N')^{-1}\right)\cr &\leq &C\left(\varepsilon(a_N\epsilon'_N)/(N\epsilon_N')+\frac{\log\log N}{N\log N}\right)\cr &=& C\epsilon_N/N,
\end{eqnarray}
where $C$ may change its value from line to line and the second inequality follows from $$ \log(\nu_N\hat{\eta}_Nt_N)=\log (\epsilon_N')+\log (\nu_N)-\log\log N-\sqrt{\log N} $$ and $\lim_{N\rightarrow\infty}\frac{N}{\nu_N}=1.$ With (\ref{7.16}), (\ref{1c}) and (\ref{2c}) in hand, (\ref{7.15}) yields the desired result, (\ref{7.14}). \hfill$\Box$\medskip For $\phi:\mathbb R\rightarrow\mathbb R$, $\zeta\in\{0,1\}^{\mathbb S_N}$ and $X(\phi)=(1/N')\sum_x\phi(x)\zeta(x)$, define
\begin{eqnarray*}
\Delta_1^{N,+}(\phi,\zeta)&=& X(\log N\phi^2f_0^N(\cdot,\zeta)),\\ \Delta_2^{N,+}(\phi,\zeta)&=& \frac{1}{N'}\sum_x(1-\zeta(x))\phi(x)\log N f_1^N(x,\zeta)^2,\\ \Delta_3^{N,+}(\phi,\zeta)&=& X(\log N\phi f_0^N(\cdot,\zeta)^2)
\end{eqnarray*}
and, with $m(1)=2$ and $m(2)=m(3)=1$, $$ \Delta_j^N(\phi,\zeta)=\Delta_j^{N,+}(\phi,\zeta)-\gamma_j X(\phi^{m(j)}),\quad j=1,2,3, $$ where $\gamma_1=p_1(0)^{-1}$ and $\gamma_2=\gamma_3=\gamma^*$; thus $\Delta_j^N$ measures the deviation of $\Delta_j^{N,+}$ from its limiting value. The following proposition is a version of Proposition 7.5 of \cite{[CP08]}.
\begin{proposition} \label{prop7.5} There is a constant $C_{\ref{7.12}}$ and a sequence $\eta_{\ref{7.12}}(N)\downarrow0$ such that for $j=1,2,3,$ if $\phi:\mathbb R\rightarrow\mathbb R,$ then for any $0<\underline{\alpha}<1$
\begin{eqnarray}
\label{7.12} |E(\Delta_j^N(\phi,\hat{\xi}_{t_N}^N))| &\leq&\eta_{\ref{7.12}}(N)\left(\hat{X}_0^N(1)+\hat{X}_0^N(1)^2\right) ||\phi||_{\underline{\alpha}}^{m(j)}\cr &&\quad+\frac{C_{\ref{7.12}}||\phi||_{\infty}^{m(j)}}{\epsilon_N'} \int\int_{|w-z|\leq\delta_N}d\hat{X}_0^N(w)d\hat{X}_0^N(z).
\end{eqnarray}
\end{proposition} {\bf Proof. } To prove the proposition, we can define $\Sigma_j^{i,N}$, $i=1,2$ for $j=1$ and $i=1,2,3$ for $j=2,3$, as in (7.20), (7.21) and (7.22) of \cite{[CP08]} and decompose each $E(\Delta_j^{N,+})$ into a sum of those terms. We omit the definitions and decompositions here, since they are the same. By Lemma \ref{lemma7.6}, we can show that
\begin{eqnarray}
\label{7.23} &&\Sigma_j^{2,N}\leq C_{\ref{7.14}}||\phi||_{\infty}^{m(j)} \left[(\epsilon'_N)^{-1}\int\int_{|w-z|\leq\delta_N}d \hat{X}^N_0(w)d\hat{X}^N_0(z)+\epsilon_N\hat{X}_0^N(1)^2\right].
\end{eqnarray}
For $\Sigma_j^{3,N}$, $j=2,3$, with Proposition \ref{prop2.2} in hand, one can check that a conclusion similar to that in Lemma 2.5 of \cite{[CP08]} is available. Following the proof of Proposition 7.5 of \cite{[CP08]}, we see that there exists a constant $C_{\ref{7.24}}$, depending on $p(\cdot)$, such that \begin{equation} \label{7.24}\Sigma_2^{3,N}+\Sigma_3^{3,N}\leq C_{\ref{7.24}} ||\phi||_{\infty}\hat{X}_0^N(1)(\log N)^{-1/2}. \end{equation} Now, we need to establish that there is a sequence $\eta(N)\rar0$ such that for $j=1,2,3$, \begin{equation} \label{7.19} |\Sigma_j^{1,N}-\gamma_j\hat{X}_0^N(\phi^{m(j)})|\leq\eta(N)||\phi||_{\underline{\alpha}}^{m(j)} \hat{X}_0^N(1). \end{equation} Let $e$ denote an independent random variable with law $p(\cdot)$.
First, $$P\left( |{B}_{t_N}^{N,e}|>\sqrt{\epsilon_N'}\right)=P\left( |{B}_{\nu_Nt_N}^{0}+e|>N\sqrt{\epsilon_N'}\right).$$ We also have
\begin{eqnarray}
\label{bound1} P\left( |{B}_{\nu_Nt_N}^{0}+e|>N\sqrt{\epsilon_N'}\right)&\leq& \frac{2|p|_{\underline{\alpha}}}{(N\sqrt{\epsilon_N'})^{\underline{\alpha}}}+P\left( |{B}_{\nu_Nt_N}^{0}|>N\sqrt{\epsilon_N'}/2\right)\cr &\leq&\frac{2|p|_{\underline{\alpha}}}{(N\sqrt{\epsilon_N'})^{\underline{\alpha}}}+ \frac{C_{\ref{increasein}}\nu_Nt_N}{N\sqrt{\epsilon_N'}},
\end{eqnarray}
where the second inequality follows from Proposition \ref{propincrease}. Similarly, we have \begin{equation} \label{bound2} P\left(|{B}_{t_N}^{N,0}|>\sqrt{\epsilon_N'}\right)\leq \frac {C_{\ref{increasein}}\nu_Nt_N}{N\sqrt{\epsilon_N'}} = \frac{C_{\ref{increasein}}\nu_N\sqrt{\epsilon_N'}}{N\log N}. \end{equation} Now, we consider the case $j=2$. By the same arguments as in \cite{[CP08]}, we can show
\begin{eqnarray*}
&&|\Sigma_2^{1,N}-\gamma^*\hat{X}_0^N(\phi)|\\ &&\leq \frac{1}{N'}\sum_w\hat{\xi}_0^N(w)\log N E\bigg{(}|\phi(w-\hat{B}_{t_N}^{N,e})-\phi(w)|; \hat{\tau}^N(0,e)\wedge\hat{\tau}^N(0,f)>t_N,\\ &&\qquad\quad\hat{\tau}^N(e,f)\leq t_N\bigg{)}+\left|\frac{1}{N'}\sum_w\hat{\xi}_0^N(w)\phi(w) (q_{\nu_Nt_N}\log N -\gamma^*)\right|\\ &&\leq ||\phi||_{\underline{\alpha}}\hat{X}_0^N(1)\log N\left(\sqrt{\epsilon_N'}\right)^{\underline{\alpha}}q_{\nu_Nt_N} +2||\phi||_{\infty}\hat{X}_0^N(1)\log N P\left(|{B}_{t_N}^{N,e}|>\sqrt{\epsilon_N'}\right)^{1/2}q_{\nu_Nt_N}^{1/2}\\ &&\qquad+||\phi||_{\infty}\hat{X}_0^N(1)|\log N q_{\nu_Nt_N}-\gamma^*|,
\end{eqnarray*}
where the second inequality follows from the Cauchy--Schwarz inequality and considering the cases $|{B}_{t_N}^{N,e}|>\sqrt{\epsilon_N'}$ and $|{B}_{t_N}^{N,e}|\leq \sqrt{\epsilon_N'}$.
Thus by (\ref{2.4}), (\ref{bound1}) and Proposition \ref{Prop2.1}, there exists a sequence $\eta_{\ref{7.26}}(N)$ which goes to 0 as $N\rightarrow\infty$ such that \begin{equation} \label{7.26} |\Sigma_2^{1,N}-\gamma^*\hat{X}_0^N(\phi)|\leq \eta_{\ref{7.26}}(N)||\phi||_{\underline{\alpha}}\hat{X}_0^N(1). \end{equation} By replacing $\hat{B}_{t_N}^{N,e},{B}_{t_N}^{N,e}$ with $\hat{B}_{t_N}^{N,0},{B}_{t_N}^{N,0}$, respectively, the same argument as above gives the same bound for $|\Sigma_3^{1,N}-\gamma^*\hat{X}_0^N(\phi)|$; in this case inequality (\ref{bound1}) may be replaced by the simpler bound (\ref{bound2}). Next, we turn to $\Sigma_1^{1,N}$. Following the strategy of the proof for $\Sigma_2^{1,N}$, we have that
\begin{eqnarray*}
&&|\Sigma_1^{1,N}-p_1(0)^{-1}\hat{X}_0^N(\phi^2)|\\ &&\quad=\left|\frac{1}{N'}\sum_w\hat{\xi}_0^N(w)\left[\log NE\left(\phi^2(w-B_{t_N}^{N,0});\tau^N(0,e)>t_N\right) -p_1(0)^{-1}\phi^2(w)\right]\right|\\ &&\quad\leq \frac{1}{N'}\sum_w\hat{\xi}_0^N(w)\left[\log NE\left(\left|\phi^2(w-B_{t_N}^{N,0})-\phi^2(w)\right|;\tau^N(0,e)>t_N\right) \right]\\ &&\qquad+\frac{1}{N'}\sum_w\hat{\xi}_0^N(w)\phi^2(w)|\log NP(\tau^N(0,e)>t_N)-p_1(0)^{-1}|\\ &&\quad\leq \bigg{(} 2||\phi||_{\underline{\alpha}}\log N\left(\sqrt{\epsilon_N'}\right)^{\underline{\alpha}}H({\nu_Nt_N}) +2||\phi||_{\infty}\log N P\left(|{B}_{t_N}^{N,0}|>\sqrt{\epsilon_N'}\right)^{1/2}H({\nu_Nt_N})^{1/2}\\ &&\qquad+||\phi||_{\infty} |\log NH({\nu_Nt_N})-p_1(0)^{-1}|\bigg{)}||\phi||_{\infty}\hat{X}_0^N(1).
\end{eqnarray*}
According to (\ref{bound2}) and (\ref{2.4}), we can conclude \begin{equation} \label{7.27} |\Sigma_1^{1,N}-p_1(0)^{-1}\hat{X}_0^N(\phi^2)|\leq \eta_{\ref{7.27}}(N)\hat{X}_0^N(1)||\phi||_{\underline{\alpha}}^2, \end{equation} where $\eta_{\ref{7.27}}(N)\rar0$ as $N\rightarrow\infty$. Thus we obtain (\ref{7.19}).
By the decompositions in (7.18) of \cite{[CP08]}, we obtain the desired result.\hfill$\Box$\medskip With Proposition \ref{prop7.5} in hand, Proposition \ref{prop4.7} follows from the following two propositions, which are analogous to Propositions 7.1 and 7.2 in \cite{[CP08]}, and an argument similar to that in Section 8 of \cite{[CP08]}. \begin{proposition} \label{prop7.1} There is a constant $C_{\ref{7.4}}(K)$ and a sequence $\eta_{\ref{7.4}}(N)\downarrow0$ such that for all $\phi:\mathbb R\rightarrow[0,\infty)$ satisfying $||\phi||_{\textrm{Lip}}\vee X_0^N(1)\leq K$ and $j=1,2,3$,
\begin{eqnarray}
\label{7.4} |E(\Delta_j^N(\phi,\xi_{t_N}^N))|&\leq& C_{\ref{7.4}}(K)\bigg{(}\eta_{\ref{7.4}}(N)\left(X_0^N(1)+X_0^N(1)^2\right)\cr &&\qquad+(\epsilon_N')^{-1}\int\int_{|w-z|\leq\delta_N}dX_0^N(w)dX_0^N(z)\bigg{)}.
\end{eqnarray}
\end{proposition} {\bf Proof. }First, we can follow the strategy in the proof of Lemma 7.8 of \cite{[CP08]} to obtain an analogous result. Then with our coupling, (\ref{6.7}) and Proposition \ref{prop7.5} in hand, following the argument in \cite{[CP08]}, one can get the desired result. \hfill$\Box$\medskip \begin{proposition} \label{prop7.2} There is a constant $C_{\ref{7.5}}$ such that for all $0\leq t\leq T$,
\begin{eqnarray}
\label{7.5} &&E\bigg{(}\int\int_{|w-z|\leq\delta_N}dX_t^N(w)dX_t^N(z)\bigg{)}\cr &&\qquad\leq C_{\ref{7.5}}e^{C_{\ref{7.5}}T}(X_0^N(1)+X_0^N(1)^2)\cr &&\qquad\qquad\times\left[\frac{\delta_N}{\delta_N+t}(1+t^{2/3})+\delta_Nt^{-1/3} \log \left(1+\frac{t}{\delta_N}\right) \right].
\end{eqnarray}
\end{proposition} The proof of Proposition \ref{prop7.2} is also exactly the same as that of Proposition 7.2 of \cite{[CP08]}. In fact, we only need to prove the following random walk estimate, which is a version of Corollary 7.9 of \cite{[CP08]} and can be deduced directly from (\ref{7.6}) and Proposition \ref{lemma7.3}.
Let $B_t^{N,*}$ be the random walk with semigroup $(P_t^{N,*},t\geq0)$ from Proposition \ref{prop4.4}; that is, $B_{\cdot}^{N,*}$ jumps at rate $\nu_N+b_N=N+\bar{\theta}\log N$, takes steps distributed according to $p_N(\cdot)$, and $B_0^{N,*}=0$. \begin{corollary} \label{cor7.9} (a) For all $x\in \mathbb S_N$ and $t\geq0$, \begin{equation} \label{7.30} P(B_t^{N,*}=x)\leq \frac{C_{\ref{7.6}}}{1+Nt}. \end{equation} (b) Assume $\delta_N'\downarrow0$ and $N\delta_N'\rightarrow\infty$. For each $K>0$ there is a constant $C_{\ref{7.31}}(K)>0$ such that \begin{equation} \label{7.31} \inf_{N\geq1,w\in\mathbb S_N,|w|\leq K\delta_N'}N\delta_N'P(B_{2\delta_N'}^{N,*}=w)\geq C_{\ref{7.31}}(K)>0. \end{equation} \end{corollary} Now, one follows the argument in \cite{[CP08]} to get Proposition \ref{prop7.2}. To obtain Proposition \ref{prop4.7}, the remaining arguments are similar to those in Section 8 of \cite{[CP08]}; we omit them here. \section{Voter Model's Asymptotics} In this section, we will prove Theorem \ref{voterasy}, and we assume that assumption (A1) holds with $b(t)=t^{1/\alpha}$. Recall that $p_t=P(|\xi_t^0|>0)$. Our first objective is to prove that \begin{alignat}{2} \label{obj1} p_t&=O\left(\frac{\log t}{t}\right)\quad&\textrm{as }~t\rightarrow\infty\quad& d=\alpha,\cr &=O(t^{-1})\quad&\textrm{as }~t\rightarrow\infty\quad& d>\alpha. \end{alignat} The asymptotics above are similar to the results in Theorem 1 of \cite{[BG80]}. Note that Theorem 1 of \cite{[BG80]} was proved under the assumption that the underlying motion has finite variance; to drop this assumption one only needs to modify the proof of Lemma 5 of \cite{[BG80]}; see Lemma 2 of \cite{[BCL01]}. For our purpose we also need to generalize the asymptotic results in (14) of \cite{[BG80]}. \medskip Recall that $\{B_t^x,x\in\mathbb Z^d\}$ is a collection of rate-one independent stable random walks with $B_0^x=x$. Let $p_t(x,y)=P(B_t^x=y)$ denote the transition function of $\{B_t^x\}$.
Define the mean range of the stable random walk $B_t^0$ by $$ R(t)=E\left(\sum_x1_{\{B_s^0=x \textrm{ for some } s\leq t\}}\right).$$ By the results for the range of discrete time stable random walks in \cite{[LR91]}, we see that \begin{alignat}{2} \label{Range} \lim_{t\rightarrow\infty}\frac{R(t)}{t/\log t}&=p_1(0)^{-1}\qquad&d=\alpha,\cr \lim_{t\rightarrow\infty}\frac{R(t)}{t}&=\gamma_e\qquad&d>\alpha. \end{alignat} With this in hand, one can generalize the asymptotic results in (14) of \cite{[BG80]}. Now, to prove (\ref{obj1}) we only need to prove some results analogous to those in Lemma 5 of \cite{[BG80]}. Set $G_t(x)=\int_0^tp_s(0,x)ds$, let $\tau(x)=\inf\{t\geq0:B_t^x=0\}$, and define $H_t(x)={P}(\tau(x)\leq t).$ \begin{lemma} \label{aymgreen} If $x\in \mathbb Z^d$ with $|x|=r$, then there is a constant $C_{d,\alpha}>0$ such that \begin{alignat*}{2} H_{r^{\alpha}}(x)&\geq C_{d,\alpha}/\log r\qquad &d=\alpha,\\ &\geq C_{d,\alpha} r^{\alpha-d}\qquad &~d>\alpha. \end{alignat*} \end{lemma} {\bf Proof. } We first consider the asymptotics for the Green's function. According to (\ref{tranapp}) and (\ref{estistable}), when $r$ is large enough, $$ G_{r^{\alpha}}(x)=\int_0^{r^{\alpha}}p_s(0,x)ds\geq c_1\int_{r^{\alpha}/2}^{r^{\alpha}}\frac{s}{r^{d+\alpha}} ds-\int_{r^{\alpha}/2}^{r^{\alpha}}s^{-d/\alpha}ds. $$ A bit of calculation shows that there exists a constant $\bar{C}_{d,\alpha}>0$ such that \begin{alignat*}{2} G_{r^{\alpha}}(x)&\geq \bar{C}_{d,\alpha}r^{\alpha-d}\quad&d>\alpha,\\ &\geq \bar{C}_{d,\alpha}\quad&d=\alpha. \end{alignat*} By (\ref{boundtransition}), we see that there exists a constant $\underline{C}_{d,\alpha}>0$ such that \begin{alignat*}{2} G_{r^{\alpha}}(0)&\leq\underline{C}_{d,\alpha}\quad&d>\alpha,\\ &\leq\underline{C}_{d,\alpha}\log r\quad&d=\alpha. \end{alignat*} Then the desired result follows from the inequality $ H_t(x)\geq G_t(x)/G_t(0).
$\hfill$\Box$\medskip Now, one can follow the arguments in Section 3 of \cite{[BG80]} to obtain (\ref{obj1}). (Note that when proving an analogous result to that in Lemma 4 of \cite{[BG80]}, one may need to set $s_t=d[(2p_t^{-1})^{1/d}]^{\alpha}$.) With (\ref{obj1}), Theorem \ref{mainUP} and Theorem \ref{main2} in hand, the proof of Theorem \ref{voterasy} is exactly the same as that in \cite{[CP04]}; we leave it to the interested reader. The intuition is that the underlying motion has nothing to do with the total mass process. \ \bigskip \ \bigskip \bigskip \textbf{References} \begin{enumerate} \renewcommand{\labelenumi}{[\arabic{enumi}]} \bibitem{[BL02]} {} Bass, Richard F.; Levin, David A. (2002): Transition probabilities for symmetric jump processes. \textit{Transactions of the American Mathematical Society} {\bf 354}, no. 7, 2933-2953. \bibitem{[BCL01]} {}Bramson, M.; Cox, J. T. and Le Gall, J.-F. (2001): Super-Brownian limits of voter model clusters. \textit{Ann. Probab.} {\bf 29}, 1001-1032. \bibitem{[BG80]} {} Bramson, M. and Griffeath, D. (1980): Asymptotics for interacting particle systems on $\mathbb{Z}^d$. \textit{Z. Wahrsch. Verw. Gebiete} {\bf{53}}, 183-196. \bibitem{[CDP01]} {} Cox, J. T.; Durrett, Richard; Perkins, E. A. (2000): Rescaled voter models converge to super-Brownian motion. \textit{Ann. Probab.} {\bf{28}}, no. 1, 185--234. \bibitem{[CK03]} {}Cox, J. T.; Klenke, Achim (2003): Rescaled interacting diffusions converge to super Brownian motion. \textit{Ann. Appl. Probab.} {\bf 13}, no. 2, 501-514. \bibitem{[CP04]} {} Cox, J. T.; Perkins, E. A. (2004): An application of the voter model--super-Brownian motion invariance principle. \textit{Ann. Inst. H. Poincar\'{e} Probab. Statist.} {\bf 40}, no. 1, 25--32. \bibitem{[CP05]} {} Cox, J. T.; Perkins, E. A. (2005): Rescaled Lotka-Volterra models converge to super-Brownian motion. \textit{Ann. Probab.} {\bf{33}}, no. 3, 904-947. \bibitem{[CP07]} {} Cox, J. T.; Perkins, E. A.
(2007): Survival and coexistence in stochastic spatial Lotka-Volterra models. \textit{Probability Theory and Related Fields} {\bf{139}}, 89-142. \bibitem{[CP08]} {} Cox, J. T.; Perkins, E. A. (2008): Renormalization of the two-dimensional Lotka-Volterra model. \textit{Ann. Appl. Probab.} {\bf {18}}, no. 2, 747-812. \bibitem{[D93]} {} Dawson, Donald A. (1993): Measure-valued Markov processes, in: Lecture Notes in Math., 1541, Springer, Berlin, pp. 1-260. \bibitem{[DP99]} {}Durrett, Richard; Perkins, Edwin A. (1999): Rescaled contact processes converge to super-Brownian motion in two or more dimensions. \textit{Probab. Theory Related Fields} {\bf114}, no. 3, 309--399. \bibitem{[EK86]} {} Ethier, S. N.; Kurtz, T. G. (1986): Markov Processes: Characterization and Convergence, John Wiley \& Sons, Inc., New York. \bibitem{[F71]} {} Feller, W. (1971): \textit{An Introduction to Probability Theory and Its Applications} {\bf 2}, 2nd ed., Wiley, New York. \bibitem{[GK54]} {}Gnedenko, B. V.; Kolmogorov, A. N. (1954): Limit Distributions for Sums of Independent Random Variables, Addison-Wesley, Cambridge, Mass. [English transl. from the Russian edition (1949), with notes by K. L. Chung, revised (1968)] \bibitem{[LR91]} {} Le Gall, J.-F.; Rosen, Jay (1991): The range of stable random walks. \textit{Ann. Probab.} {\bf{19}}, no. 2, 650--705. \bibitem{[L85]} {} Liggett, T. M. (1985): \textit{Interacting Particle Systems}, Springer-Verlag, New York. \bibitem{[NP99]} {} Neuhauser, C.; Pacala, S. (1999): An explicitly spatial version of the Lotka-Volterra model with interspecific competition. \textit{Ann. Appl. Probab.} {\bf 9}, 1226-1259. \bibitem{[P02]} {} Perkins, Edwin A. (2002): Dawson-Watanabe superprocesses and measure-valued diffusions, in: \textit{Lectures on probability theory and statistics (Saint-Flour, 1999)}, 125--324, Lecture Notes in Math., 1781, Springer, Berlin. \bibitem{[Pr81]} {}Pruitt, W. E. (1981): The growth of random walks and L\'{e}vy processes. \textit{ Ann.
Probab.} {\bf 9}, no. 2, 948-956. \bibitem{[S99]} {}Sato, K. (1999): \textit{L\'{e}vy Processes and Infinitely Divisible Distributions}, English edition, Cambridge University Press. \bibitem{[Sa79]} {} Sawyer, S. (1979): A limit theorem for patch sizes in a selectively-neutral migration model. \textit{J. Appl. Probab.} \textbf{16}, 482-495. \bibitem{[S02]} {} Slade, Gordon (2002): Scaling limits and super-Brownian motion. \textit{Notices Amer. Math. Soc.} {\bf{49}}, no. 9, 1056--1067. \bibitem{[Sp76]} {} Spitzer, F. L. (1976): \textit{Principles of Random Walk}, 2nd ed., Springer-Verlag, New York. \end{enumerate} \end{document}
1,314,259,994,683
arxiv
\section{Introduction} Exclusive electroproduction is a powerful tool for the study of nucleon structure. In contrast to inclusive $(e,e')$ or photoproduction measurements, the transverse momentum of a scattering constituent (and thus its transverse size is proportional to $1/\sqrt{-t}$) can be probed in addition to its longitudinal momentum, and independent of the momentum transfer \mbox{$Q^2$}\ to this constituent. Exclusive {\em forward pion} electroproduction is especially interesting because the longitudinal and transverse virtual photon polarizations act as a filter on the spin and hence the type of the participating constituents. By detecting the charge of the pion, even the flavor of the constituents can be tagged. Finally, {\em ratios} of separated response functions can be formed for which nonperturbative corrections may cancel, yielding insight into soft-hard factorization at the modest \mbox{$Q^2$}\ to which exclusive measurements will be limited for the foreseeable future. The full potential of pion electroproduction is only now being realized due to the availability of high-luminosity, multi-GeV beams at the Thomas Jefferson National Accelerator Facility (Jefferson Lab, or JLab). Four amplitudes contribute to pion electroproduction from a nucleon in the Born approximation, where a single virtual photon $\gamma^*$ emitted by the electron couples to the hadronic system: pion-pole, nucleon-pole, crossed nucleon-pole and contact term. The first three amplitudes correspond to Mandelstam $t$, $s$ and $u$-channel processes, respectively, Fig. \ref{fig:stu_channels}. The contact term is used to restore gauge invariance. Born-amplitude based models ~\cite{Ber70,Gut72} indicate that for values of the invariant mass $W$ above the resonance region and for not too large values of \mbox{$Q^2$}, the longitudinal part \mbox{$\sigma_L$}\ of the cross section for pion electroproduction at small values of $-t$ is dominated by the $t$-channel process. 
The other response functions (transverse \mbox{$\sigma_T$}\ and interference terms \mbox{$\sigma_{LT}$}\ and \mbox{$\sigma_{TT}$}) are relatively small. In this regime, the process can be viewed as quasi-elastic scattering of the electron from a virtual pion and thus is sensitive to the pion form factor, $F_{\pi}$. At values of $t$ approaching the pion mass squared (the so-called $t$-pole), the longitudinal response function becomes approximately proportional to the square of the charged pion form factor \begin{equation} \sigma_{L} \approx \frac{-t Q^2}{(t-M^2_{\pi})^2} g^2_{\pi NN}(t) F^2_{\pi}(Q^2, t). \end{equation} Here, the factor $g_{\pi NN}(t)$ comes from the $\pi NN$ vertex and represents the probability amplitude to have a virtual charged $\pi$ meson inside the nucleon at a given $t$. \begin{figure} \begin{center} \vskip 0.5cm \includegraphics[angle=0,width=3.25in]{stu_channels.eps} \caption{\label{fig:stu_channels} Born diagrams for $\pi^+$ electroproduction from a proton.} \end{center} \end{figure} In order to reliably extract $F_{\pi}$, the $t$-pole process should be dominant in the kinematic region under study. This dominance can be studied experimentally through the ratio of longitudinal $\gamma^*_L n \to \pi^- p$ and $\gamma^*_L p \to \pi^+ n$ cross sections, which can be expressed in terms of contributions from isoscalar $A_S$ and isovector $A_V$ photon amplitudes: \begin{equation} R_L \equiv \frac{\gamma_L^{*}n \to \pi^- p}{\gamma_L^{*}p \to \pi^+ n} = \frac{|A_{V}-A_{S}|^2}{|A_{V}+A_{S}|^2}. \end{equation} The $t$-channel process proceeds purely via isovector amplitudes. Interference terms between the isoscalar and isovector photon amplitudes have opposite signs for $\pi^+$ and $\pi^-$ production, which leads to a difference in the cross sections if significant isoscalar contributions are present. Hence, where the $t$-pole dominates (small $-t$), the ratio $R_L$ is expected to be close to unity.
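To illustrate this sensitivity numerically, the ratio $R_L=|A_V-A_S|^2/|A_V+A_S|^2$ can be evaluated for a small isoscalar admixture; the amplitude values below are purely illustrative, not fitted to any data:

```python
def R_L(A_V: complex, A_S: complex) -> float:
    """Longitudinal pi-/pi+ ratio from isovector and isoscalar amplitudes."""
    return abs(A_V - A_S) ** 2 / abs(A_V + A_S) ** 2

# Pure isovector (t-pole) production gives R_L = 1 exactly:
print(R_L(1.0, 0.0))              # 1.0

# An illustrative 5% in-phase isoscalar admixture already pulls the
# ratio well below unity:
print(round(R_L(1.0, 0.05), 3))   # 0.819
```

Because the isoscalar piece enters with opposite sign in numerator and denominator, even a few-percent admixture produces a deviation from unity at the ten-percent level, which is why $R_L$ is a sensitive probe of non-pole backgrounds.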
A departure from $R_L=1$ would indicate the presence of isoscalar backgrounds arising from mechanisms such as $\rho$ meson exchange \cite{VGL1} or perturbative contributions due to transverse quark momentum \cite{Milana}. Such physics backgrounds may be expected to be larger at higher $-t$ (due to the drop-off of the pion pole contribution) or non-forward kinematics (due to angular momentum conservation) \cite{raskin}. Because previous data \cite{Brauel1} were unseparated, no firm conclusions about possible deviations of $R_L$ from unity could be drawn. One can also use such hard exclusive processes to investigate the range of applicability of QCD factorization and scaling theorems. The most important of these is the handbag factorization, where only one parton participates in the hard subprocess, and the soft physics is encoded in generalized parton distributions (GPDs). The handbag approach applies to deep exclusive meson production, where \mbox{$Q^2$}\ is large and $-t$ is small \cite{collins,eides}. For longitudinal photons with $Q^2>10$ \mbox{GeV$^2$}\ and $-t\ll M_N^2$, this theorem allows one to relate exclusive $N(e,e'\pi^{\pm})N$ observables to integrals over the quark flavor-dependent GPDs. Pseudoscalar meson-production observables not dominated by the pion pole term, such as \mbox{$\sigma_T$}\ in exclusive $\pi^{\pm}$ electroproduction, have also been identified as being especially sensitive to the chiral-odd transverse GPDs \cite{ahmad,gk10}. However, large higher-order corrections \cite{Belitsky} have delayed the application of GPDs to pion electroproduction until recently. The model of Refs.~\cite{gk10,gk13} uses a modified perturbative approach based on GPDs, incorporating the empirical pion electromagnetic form factor and significant contributions from the twist-3 transversity GPD, $H_T$, which gives substantial strength to the transverse cross section.
In the transition region between low $-t$ (where a description of hadronic degrees of freedom in terms of effective hadronic Lagrangians is valid) and large $-t$ (where the degrees of freedom are quarks and gluons), $t$-channel exchange of a few Regge trajectories permits an efficient description of the energy-dependence and the forward angular distribution of many real- and virtual-photon-induced reactions. The VGL Regge model \cite{VGL,VGL1} has provided a good and consistent description of a wide variety of $\pi^{\pm}$ photo- and electroproduction data above the resonance region, as well as of the $p(e,e'\pi^+)n$ reaction using longitudinally polarized virtual photons. However, the model has consistently failed to provide a good description of $p(e,e'\pi^+)n$ \mbox{$\sigma_T$}\ data \cite{Blok08}. The VGL Regge model was recently extended \cite{kaskulov, vrancx} by the addition of a hard deep-inelastic scattering (DIS) of virtual photons off nucleons. The DIS process dominates the transverse response at moderate and high \mbox{$Q^2$}, providing a better description of \mbox{$\sigma_T$}. By assuming that the exclusive \mbox{$\sigma_T$}\ cross section behaves as $\sigma_T^{\text{DIS}}(Q^2)\propto F_1^p(x,Q^2)$, the authors predict that at moderate \mbox{$Q^2$} \begin{equation} R_T\equiv\frac{\sigma_T^{\pi^-}}{\sigma_T^{\pi^+}}\approxeq\frac{F_1^n}{F_1^p} \approx\frac{F_2^n}{F_2^p}<1. \end{equation} Our purpose was to perform a complete $L$/$T$/$LT$/$TT$ separation in exclusive forward $\pi^{\pm}$ electroproduction on the proton and neutron. Because there are no practical free neutron targets, the $^2$H$(e,e'\pi^{\pm})NN_s$ reactions (where $N_s$ denotes the spectator nucleon) were used. As those reactions proceed via quasi-free production, the results can be used to compare $\pi^-$ production on the neutron to $\pi^+$ production on the proton, particularly if ratios are formed. 
However, due to binding effects, the $\pi^+$ results on the deuteron may differ from those on the proton, which were taken in the same kinematics. The data were obtained in Hall C at JLab as part of the two pion form factor experiments presented in detail in Ref.~\cite{Blok08}. The purpose of this paper is to describe the experiment and analysis of these data in detail, concentrating on those parts that differ from our $^1$H$(e,e'\pi^+)n$ study of Ref.~\cite{Blok08}. This paper is organized as follows. Sec.~\ref{sec:expt} describes the experiment and the determination of the various efficiencies that are applied in calculating the cross sections. Sec.~\ref{sec:analysis} presents the determination of the unseparated cross sections, their separation into the $L$/$T$/$LT$/$TT$ structure functions, and the systematic uncertainties. Sec.~\ref{sec:results} discusses these results and compares them with various theoretical calculations. The paper is concluded with a short summary. \section{Experiment and Data Analysis \label{sec:expt}} The analysis procedures applied here were also used in our recent letter on $^2$H$(e,e'\pi^{\pm})NN_s$ results \cite{Huber14}. For details of the experiment and the analysis not discussed here, we refer the reader to the discussion of our $^1$H experiment \cite{Blok08}. \subsection{Experiment and Kinematics \label{sec:kinematics}} The two $F_{\pi}$ experiments were carried out in 1997 (F$_{\pi}$-1) and 2003 (F$_{\pi}$-2) in Hall C at JLab. For the measurements presented here, the unpolarized electron beam from the CEBAF accelerator was incident on a liquid deuterium target. Two moderate-acceptance magnetic focusing spectrometers were used to detect the particles of interest.
The spectrometer settings correspond to either $^2$H$(e,e'\pi^+)nn_s$ or $^2$H$(e,e'\pi^-)pp_s$ kinematics, where the Short Orbit Spectrometer (SOS) was always used to detect the scattered electron, and the High Momentum Spectrometer (HMS) was used to detect the high momentum $\pi^+$ or $\pi^-$. The choice of kinematics for the two experiments was based on maximizing the range in \mbox{$Q^2$}\ for a value of the invariant mass $W$ above the resonance region, while still enabling a longitudinal-transverse separation. The value $W$=1.95 GeV used in the first experiment is high enough to suppress most $s$-channel baryon resonance backgrounds, but this suppression should be even more effective at the $W$=2.2 GeV of the second experiment. For each \mbox{$Q^2$}, data were taken at two values of the virtual photon polarization, $\epsilon$, with $\Delta\epsilon>$0.25. This allowed for a separation of the longitudinal and transverse cross sections. Constraints on the kinematics were imposed by the maximum available electron energy, the maximum central momentum of the SOS, and the minimum HMS angle. In parallel kinematics, i.e., when the pion spectrometer is situated in the direction of the $\vec{q}$ vector, the acceptances of the two spectrometers do not provide a uniform coverage in $\phi_{\pi}$. Thus, to attain full coverage in $\phi_{\pi}$ and allow a separation of the interference $LT$ and $TT$ cross sections, additional data were taken in most cases with the HMS at a slightly smaller and larger angle compared to the central angle for the high $\epsilon$ settings. At low $\epsilon$, only the larger angle setting was possible. The kinematic settings are summarized in Table \ref{tab:kin}. 
\begin{table*} \begin{center}
\begin{tabular}{|c|c|c|}\hline
 & $^2$H$(e,e'\pi^+)nn$ & $^2$H$(e,e'\pi^-)pp$\\ \hline
\multicolumn{3}{|c|}{F$_{\pi}$-1 Settings}\\ \hline
\multicolumn{3}{|c|}{$Q^2$=0.6 GeV$^2$, $W=$1.95 GeV}\\ \hline
$\epsilon$=0.37, $E_{e}=$2.445 GeV & 3 HMS settings: $\Theta_{\pi q}$=+0.5, +2.0, +4.0$^\circ$ & 2 HMS settings: $\Theta_{\pi q}$=+0.5, +4.0$^\circ$\\
$\epsilon$=0.74, $E_{e}=$3.548 GeV & 4 HMS settings: $\Theta_{\pi q}$=-2.7, 0.0, +2.0, +4.0$^\circ$ & 1 HMS setting: $\Theta_{\pi q}$=0.0$^\circ$\\ \hline
\multicolumn{3}{|c|}{$Q^2$=0.75 GeV$^2$, $W=$1.95 GeV}\\ \hline
$\epsilon$=0.43, $E_{e}=$2.673 GeV & 2 HMS settings: $\Theta_{\pi q}$=0.0, +4.0$^\circ$ & 2 HMS settings: $\Theta_{\pi q}$=0.0, +4.0$^\circ$\\
$\epsilon$=0.70, $E_{e}=$3.548 GeV & 3 HMS settings: $\Theta_{\pi q}$=-4.0, 0.0, +4.0$^\circ$ & No data\\ \hline
\multicolumn{3}{|c|}{$Q^2$=1.0 GeV$^2$, $W=$1.95 GeV}\\ \hline
$\epsilon$=0.33, $E_{e}=$2.673 GeV & 2 HMS settings: $\Theta_{\pi q}$=0.0, +4.0$^\circ$ & 2 HMS settings: $\Theta_{\pi q}$=0.0, +4.0$^\circ$\\
$\epsilon$=0.65, $E_{e}=$3.548 GeV & 3 HMS settings: $\Theta_{\pi q}$=-4.0, 0.0, +4.0$^\circ$ & 1 HMS setting: $\Theta_{\pi q}$=0.0$^\circ$\\ \hline
\multicolumn{3}{|c|}{$Q^2$=1.6 GeV$^2$, $W=$1.95 GeV}\\ \hline
$\epsilon$=0.27, $E_{e}=$3.005 GeV & 2 HMS settings: $\Theta_{\pi q}$=0.0, +4.0$^\circ$ & Same settings as $\pi^+$\\
$\epsilon$=0.63, $E_{e}=$4.045 GeV & 3 HMS settings: $\Theta_{\pi q}$=-4.0, 0.0, +4.0$^\circ$ & Same settings as $\pi^+$\\ \hline
\multicolumn{3}{|c|}{F$_{\pi}$-2 Settings}\\ \hline
\multicolumn{3}{|c|}{$Q^2$=2.45 GeV$^2$, $W=$2.2 GeV}\\ \hline
$\epsilon$=0.27, $E_{e}=$4.210 GeV & 2 HMS settings: $\Theta_{\pi q}$=+1.35, +3.0$^\circ$ & Same settings as $\pi^+$\\
$\epsilon$=0.55, $E_{e}=$5.248 GeV & 3 HMS settings: $\Theta_{\pi q}$=-3.0, 0.0, +3.0$^\circ$ & Same settings as $\pi^+$\\ \hline
\end{tabular}
\caption{A summary of the $^2$H kinematic settings taken in the two pion form factor experiments.
The angle $\Theta_{\pi q}$ refers to the lab angle between the pion spectrometer and the central $\vec{q}$-vector as defined by the beam energy and the angle of the electron spectrometer. \label{tab:kin}} \end{center} \end{table*} For each \mbox{$Q^2$}, $\epsilon$ setting, the electron spectrometer angle and momentum, as well as the pion spectrometer momentum, were kept fixed. The HMS magnetic polarity was reversed between $\pi^+$ and $\pi^-$ running, with the quadrupoles and dipole magnets cycled according to a standard procedure, then set to the final values by current (in the case of the quadrupoles) or by NMR probe (in the case of the dipole). Kinematic offsets in spectrometer angle and momentum, as well as in beam energy, were previously determined using elastic $ep$ coincidence data taken during the same run, and the reproducibility of the optics was checked \cite{Blok08}. For the deuterium data sets studied here, elastic runs on $^1$H were used to check the validity of the HMS and SOS corrections for several momentum ranges. The reproducibility of the optics was checked during electron running with sieve slits and by the position of the missing mass peak for $^2$H$(e,e'\pi^+)nn_s$ or $^2$H$(e,e'\pi^-)pp_s$. No shifts beyond the expected calibration residuals $\pm$2 MeV were observed \cite{Volmerthesis,Tanjathesis}. \subsection{HMS Tracking and Tracking Efficiency \label{sec:trackeff}} The HMS singles rates were much higher for the $\pi^-$ settings than the $\pi^+$ settings because of the large electron background at negative spectrometer polarity, so accurate HMS track reconstruction at high rates is needed. Charged particle trajectories are measured by two drift chambers, each with six planes \cite{chambers}. All data presented here used the track selection criterion that 5 out of 6 planes in each drift chamber for both spectrometers should have a valid signal. 
This criterion is much better suited to high rate data (in this case the $\pi^-$ channel data) than the analysis of our earlier F$_{\pi}$-1 $\pi^+$ data from a hydrogen target \cite{volmer,tadevosyan}, which used a 4/6 tracking selection criterion for HMS and 5/6 for SOS tracking. The HMS tracking algorithm used here is the same as used in our earlier F$_{\pi}$-2 analysis from a liquid hydrogen target \cite{hornt}. The algorithm has several requirements: \begin{itemize} \item {If the program reconstructed only one track, then that track was used.} \item {If two or more tracks were reconstructed, then the track that projects to the blocks in the calorimeter measuring the energy deposit (i.e. the cluster) was used. The calorimeter cut used was quite loose, intended only to eliminate ``noise'' tracks in the chambers.} \item{If two or more tracks hit the cluster in the calorimeter (or none of them did), then additional criteria based on which hodoscope bar was hit were used to select the correct track.} \end{itemize} The above criteria ensured that the chosen track was the most likely one to have resulted from the trigger for that event and greatly reduced the number of events improperly excluded from the analysis. The fiducial tracking inefficiencies were 2-9\% for HMS rates up to 1.4 MHz. The tracking efficiency is defined as the ratio of the number of events for which an actual track is found, to the `events' that pass through the drift chambers. This ratio is extracted from events in a fiducial area where it is extremely likely that the scintillator hits are due to particles that also traversed the chambers. The tracking efficiency depends on both the drift chamber hit efficiency and the tracking algorithm used in finding the track. In order to accurately calculate the tracking efficiency, tight particle identification (PID) requirements were applied to select a pure data sample. These requirements are stricter than those used in the regular analysis.
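The efficiency ratio defined above carries a binomial statistical uncertainty; a minimal sketch (the counts are purely illustrative, not taken from this experiment):

```python
import math

def tracking_efficiency(n_tracked: int, n_fiducial: int):
    """Fiducial tracking efficiency: tracked events over clean fiducial
    scintillator events, with a binomial statistical error."""
    eff = n_tracked / n_fiducial
    err = math.sqrt(eff * (1.0 - eff) / n_fiducial)
    return eff, err

# Illustrative counts only: 95 500 reconstructed tracks out of
# 100 000 clean fiducial scintillator events.
eff, err = tracking_efficiency(95_500, 100_000)
print(f"{eff:.3f} +/- {err:.4f}")   # 0.955 +/- 0.0007
```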
In the HMS, the particle identification requirements used to select pions in the tracking efficiency calculation consisted of cuts on the gas \u{C}erenkov and the calorimeter for F$_{\pi}$-1 data, while for F$_{\pi}$-2 an additional cut on the aerogel \u{C}erenkov was applied. The fiducial tracking efficiency analysis also incorporates a cut on the integrated pulse (ADC) from the scintillator hodoscope PMTs, to exclude events with multiple hits per scintillator plane. In the case where there are multiple tracks in the same scintillator plane, this cut places a bias on the event sample used to calculate the tracking efficiency. Since 2-track events have a lower efficiency than 1-track events, the resulting bias causes the HMS tracking efficiency to be overestimated. To obtain a better understanding of the HMS tracking efficiencies, in F$_{\pi}$-2 a study of singles yields from a carbon target versus HMS rate and beam current was performed. The normalized yields from a carbon target should present no significant beam current- or rate-dependence if the various efficiencies are calculated correctly. Unfortunately, no luminosity scans on carbon target were taken at different beam currents in the F$_{\pi}$-1 experiment, so any conclusions obtained from the F$_{\pi}$-2 study have to be applied also to the F$_{\pi}$-1 data. Since the probability of a second particle traversing the HMS during the event resolving time is greater at high rates, a tight electron PID cut might introduce its own deadtime not due to tracking efficiency, causing the rate-dependence to be underestimated. Therefore, only HMS fiducial acceptance cuts were applied in this study. Normalized yields from the carbon target were computed from the number of events passing cuts, the integrated beam charge, the electronic and CPU data acquisition livetimes and the HMS tracking efficiency. They are plotted versus rate in Fig. \ref{fig:Carbon_boiling_rate}. 
The error bars include statistical uncertainties and an estimated systematic uncertainty of 0.3\% added in quadrature, to take into account beam steering on the target and other sensitive effects when no PID cut is applied. Data from the two kinematic settings were separately fit versus rate (dashed red and dash-dot blue curves in the figure) and normalized to unity at zero rate. The two data sets, thus normalized, were then fit together, yielding the solid black curve. The observed rate-dependence suggests that the fiducial HMS tracking efficiencies $htr$, as determined using the procedure described at the start of this section, should be corrected in the following manner \begin{equation} htr^{\prime}=htr (e^{-6.76236\cdot 10^{-5}\times \text{HMSrate(kHz)}}). \label{eqn:tracking_corr} \end{equation} This is particularly important for the F$_{\pi}$-1 $\pi^-$ runs, which were taken at higher HMS rates. \begin{figure} \begin{center} \includegraphics[angle=89.9,width=3.25in]{gh_c_boiling_paper.ps} \caption{\label{fig:Carbon_boiling_rate} (Color online) Normalized yields (no PID cut) from the carbon target versus HMS singles rate. As the tracking efficiency calculation uses a data sample where multiple-track events are rejected, the HMS tracking efficiencies are overestimated at high rates, leading to an effective drop in normalized yield versus rate. The HMS tracking efficiencies for both of the F$_{\pi}$-1 and F$_{\pi}$-2 data sets are corrected with the rate-dependent function shown here, leading to a normalized yield that is independent of rate. } \end{center} \end{figure} The systematic uncertainties in the HMS tracking efficiencies were estimated as follows.
In the F$_{\pi}$-2 hydrogen analysis, the tracking efficiencies were assigned a 1.0\% scale and a 0.4\% $\epsilon$-uncorrelated systematic uncertainty, where the first is the scale uncertainty common to all settings, and the second is due to a variety of factors that may affect high and low $\epsilon$ settings differently, as evidenced by the greater scatter exhibited by the tracking efficiencies at high rates (see Refs.~\cite{Tanjathesis,Blok08} and Sec.~\ref{sec:syst}). There is an additional uncertainty of 0.2\%/MHz due to the tracking efficiency correction shown in Fig.~\ref{fig:Carbon_boiling_rate}. Since the maximum rate variation for all F$_{\pi}$-2 $\pi^{\pm}$ settings, as well as the F$_{\pi}$-1 $\pi^+$ settings, is about 400~kHz, this gives a total $\epsilon$-uncorrelated systematic uncertainty of 0.45\%. The F$_{\pi}$-1 $\pi^-$ $\epsilon$-uncorrelated systematic uncertainty is somewhat larger. Since the high rate scatter in these $\pi^-$ tracking efficiencies is approximately $\pm 1.25\%$ at 1.3 MHz, we assign an $\epsilon$-uncorrelated systematic uncertainty for these settings of 1.3\%. In addition to the above tracking efficiencies, the experimental yields were also corrected for data acquisition electronic and CPU dead time. The correction ranged from 1-11\% with minimal uncertainty, as discussed in Refs.~\cite{Blok08,Volmerthesis}. \subsection{Cryotarget Boiling Correction \label{sec:ld2_boiling} } When the electron beam hits a liquid target, it deposits a large power per unit target area and as a result induces localized density fluctuations referred to as ``target boiling.'' In order to reduce these fluctuations, the beam was rastered over a small area rather than localizing it at one point on the target. The target boiling effect can be measured by comparing the yields at fixed kinematics and varying beam current.
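Such a boiling measurement amounts to a straight-line fit of normalized yield against beam current; a minimal sketch, with yields and currents invented purely to illustrate a $\sim$5\%/100~$\mu$A slope:

```python
def fit_line(x, y):
    """Unweighted least-squares straight line y = a + b*x (pure python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return a, b

# Hypothetical normalized yields at beam currents in microamps,
# invented to illustrate a ~5%/100 uA boiling slope:
currents = [20.0, 40.0, 60.0, 80.0, 100.0]
yields = [0.990, 0.980, 0.970, 0.960, 0.950]

a, b = fit_line(currents, yields)
loss_per_100uA = -b * 100.0 / a    # fractional density loss per 100 uA
print(f"{loss_per_100uA:.1%} per 100 uA")   # 5.0% per 100 uA
```

In the real analysis the yields are first normalized by integrated charge, livetimes, and the rate-corrected tracking efficiency before the fit is made.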
During both experiments (F$_{\pi}$-1 and F$_{\pi}$-2), dedicated luminosity elastic runs were taken for both liquid targets (hydrogen and deuterium). The two experiments used cryotargets with significantly different geometries, as well as significantly different beam raster patterns, leading to very different boiling effects. F$_{\pi}$-2 used the ``tuna can'' cryotarget geometry\footnote{Cylindrical cryotarget with its axis vertical, transverse to the beam.} and circular beam raster design, which are expected to result in boiling corrections $<1\%$~\cite{Tanjathesis}. To determine the appropriate correction when the corrected HMS tracking efficiencies are used, data were acquired in dedicated runs with a wide variety of electron beam currents for all $\pi^-$ kinematic settings except \mbox{$Q^2$}=2.45~\mbox{GeV$^2$}, high $\epsilon$, $E_e=5.25$ GeV, $\theta_{\text{HMS}}=13.61^{\circ}$. Only fiducial acceptance cuts were applied in this study, and normalized singles yields from these $^2$H negative polarity HMS data were computed from the number of counts passing cuts, the integrated beam charge, electronic and CPU data acquisition livetimes, and the HMS tracking efficiencies corrected via Eqn.~\ref{eqn:tracking_corr}. The observed current-dependence suggests that no correction should be applied, which is similar to the conclusion reached in Ref.~\cite{Tanjathesis} for a liquid $^1$H target of the same shape and dimensions. \begin{figure} \begin{center} \includegraphics[angle=90,width=3.25in]{gh_ld2_boiling_offset_fpi1_paper.ps} \caption{\label{fig:Fpi1_LD2_boiling} (Color online) Normalized HMS yields from F$_{\pi}$-1 $^2$H elastics data taken with an electron trigger plotted as a function of beam current. A +0.2$~\mu$A beam current offset is applied, as described in the text. 
The error bars include statistical uncertainties and an estimated systematic uncertainty of 0.3\% added in quadrature.} \end{center} \end{figure} F$_{\pi}$-1 used the so-called ``soda can'' cryotarget geometry\footnote{Cylindrical cryotarget with its axis horizontal, in the direction of the beam.} and ``bed post'' beam rastering\footnote{Uneven rastering over a rectangular area, with sinusoidal motion in $x$ and $y$, leading to the beam spending more time on the four corners and less time in the middle, see Fig.~3.3 of Ref.~\cite{Volmerthesis}.}, which leads to a significant boiling correction. The magnitude of this correction is sensitive to the rate-dependent correction applied to the HMS tracking efficiencies. The HMS tracking efficiencies were corrected via Eqn.~\ref{eqn:tracking_corr} and normalized yields calculated in the same manner as in the F$_{\pi}$-2 cryotarget boiling study. In analyzing these data, it was found that the slope of yield versus beam current was overly sensitive to the inclusion of the lowest current points in the fit. The beam current calibration has an inherent 0.2~$\mu$A uncertainty due to noise in the Unser monitor. A significantly reduced sensitivity to these low current points was obtained with the addition of a +0.2~$\mu$A beam current offset, which was determined via a $\chi^2$ minimization technique and subsequently applied in all F$_{\pi}$-1 yield calculations. A similar current offset was used in Ref.~\cite{Gaskellthesis}. The corrected data were thus fit versus current and normalized to unity at zero current, yielding the black curve in Fig.~\ref{fig:Fpi1_LD2_boiling}, and a $^2$H target density correction of $(4.72\pm 0.27\%)/100~\mu$A. This correction is particularly important for the F$_{\pi}$-1 $\pi^+$ data. Since the HMS detector rates were lower when the HMS was set at positive polarity compared to negative polarity, higher incident electron beam currents were often used for the $\pi^+$ runs.
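Applying such a density correction to a measured yield is straightforward; a sketch using the +0.2~$\mu$A offset and the $(4.72\pm 0.27\%)/100~\mu$A slope quoted above (the helper name and run values are illustrative):

```python
def corrected_yield(raw_yield: float, current_uA: float,
                    slope_per_100uA: float = 0.0472,
                    offset_uA: float = 0.2) -> float:
    """Scale a measured yield back to zero-current target density.

    slope_per_100uA: the (4.72 +/- 0.27)%/100 uA deuterium boiling slope;
    offset_uA:       the +0.2 uA beam current offset described in the text.
    """
    i = current_uA + offset_uA
    density_factor = 1.0 - slope_per_100uA * i / 100.0
    return raw_yield / density_factor

# A hypothetical run at 80 uA loses ~3.8% of its target density,
# so its yield is scaled up accordingly:
print(round(corrected_yield(1.0, 80.0), 3))   # 1.039
```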
The resulting cryotarget boiling correction is similar to the $(6\pm 1\%)/100~\mu$A correction determined for the F$_{\pi}$-1 $^1$H cell in Ref.~\cite{Volmerthesis}. \subsection{HMS \u{C}erenkov Blocking Correction \label{sec:cerblock}} The potential contamination by electrons when the pion spectrometer is set to negative polarity, and by protons when it is set to positive polarity, introduces some differences in the $\pi^{\pm}$ data analyses, which were carefully examined. For most negative HMS polarity runs, electrons were rejected at the trigger level by a gas \u{C}erenkov detector containing C$_4$F$_{10}$ at atmospheric pressure acting as a veto, in order to avoid high DAQ deadtime due to large $e^{-}$ rates in the HMS. There is a loss of pions due to electrons passing through the HMS gas \u{C}erenkov within $\sim$100~ns after a pion has traversed the detector, resulting in the pion event being misidentified as an electron and eliminated by the applied PID cuts (\u{C}erenkov blocking). To reduce this effect, the beam current was significantly reduced during $\pi^-$ running. Two independent studies were performed to determine the correction that should be applied to both experiments. \begin{figure} \begin{center} \vskip -0.2cm \includegraphics[angle=0,width=3.7in]{tdc_tworuns.eps} \vskip -0.2cm \caption{\label{fig:Cerenkov_TDC_2runs} (Color online) HMS \u{C}erenkov Trigger multi-hit TDC histograms for two F$_{\pi}$-2 runs with \u{C}erenkov veto disabled. The top panel is a low rate run, and the bottom panel is a high rate run. HMS singles events, subject to a variety of indicated \u{C}erenkov cuts, are used in both spectra. The TDC scale is 100~ps/chan. Please see the text for further information.} \end{center} \end{figure} In our first study, the timing spectra features of the \u{C}erenkov signal into the HMS trigger were investigated for a variety of F$_{\pi}$-2 $\pi^-$ runs with HMS singles rates between 7~kHz and $\sim$600~kHz.
The multi-hit TDC is started by the HMS pretrigger signal and can be stopped multiple times by the retimed (i.e. delayed and discriminated) \u{C}erenkov signal (Fig.~\ref{fig:Cerenkov_TDC_2runs}). The main peak corresponds to signals (primarily electrons) that result in the trigger, starting the TDC. Events not associated with the original trigger (other electrons or pions) appear as additional events to the left and right of the main electron peak. The second peak to the right is due to a second electron arriving within the timing window, but after the discriminator ``dead window'' of $\sim$40~ns (caused by the length of the discriminator pulse). The backgrounds to the left and right of the two peaks are due to earlier and later electrons, while the tail extending to channel 4096 is due to pedestal noise that crosses the discriminator threshold. The peak at channel 4096 is the accumulation of very late TDC stops, while zeros correspond to electrons (or pions) that did not give a stop. As indicated by the differences between the low rate and high rate runs plotted in Fig.~\ref{fig:Cerenkov_TDC_2runs}, the main peak to pedestal ratio degrades with increasing rate, and the second peak to first peak ratio becomes larger. The width of the portion of the TDC spectrum corresponding to electrons traversing the detector coincident with or after the original trigger particle indicated that the effective \u{C}erenkov TDC gate width was $116.4\pm 6.3$~ns for the F$_{\pi}$-2 $\pi^-$ runs, where the uncertainty is estimated from the slopes and widths of the TDC spectra features. We confirmed that the basic features of the TDC spectra are the same for HMS singles and HMS+SOS coincidences. We also compared the TDC spectra for five pairs of $\pi^-$ runs, where for each pair the beam and rate conditions were identical but in one run the HMS \u{C}erenkov veto was disabled and in the other it was enabled.
The spectra for runs with the \u{C}erenkov trigger veto enabled had a much greater proportion of events where no TDC stop was recorded, due to the \u{C}erenkov signal being below the discriminator threshold. From the normalized differences of these pairs of runs we estimated that the \u{C}erenkov trigger was about 90\% efficient at vetoing electrons. A comparison of $\pi^-$ runs with the same rate but different trigger conditions can also be used to determine the effective threshold of the \u{C}erenkov trigger veto. The normalized difference of \u{C}erenkov photoelectron (ADC) spectra was formed for each pair of runs, and the excess of counts at a large number of photoelectrons when the veto was disabled indicated an effective veto threshold of approximately 2.5 photoelectrons. Because PMT gain variations and pile-up effects will cause the actual veto threshold to vary with rate, a slightly more restrictive software threshold on the number of photoelectrons detected in the HMS \u{C}erenkov, $hcer_{\text{npe}}<2.0$, was uniformly applied in the F$_{\pi}$-2 data analysis to cut out electrons. In our second study, we made use of the same dedicated F$_{\pi}$-2 $\pi^-$ runs already used to determine the liquid deuterium cryotarget boiling correction. The \u{C}erenkov veto was disabled in all of these runs, and the beam current was varied over a wide range for each $\pi^-$ kinematic setting except for the high $\epsilon$ setting at \mbox{$Q^2$}=2.45~\mbox{GeV$^2$}, $E_e=5.25$~GeV, $\theta_{\text{HMS}}=13.61^{\circ}$. HMS fiducial and $hcer_{\text{npe}}<2.0$ cuts were applied to these HMS singles data, and the normalized $\pi^-$ yields (with HMS tracking efficiency corrected by Eqn.~\ref{eqn:tracking_corr}) were plotted versus HMS electron rate. The normalized pion yield is expected to drop with rate because of electrons passing through the \u{C}erenkov detector within the trigger gate width after a pion has traversed the detector.
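This expected drop is an exponential survival fraction in the electron rate; a minimal sketch, assuming an effective gate width of 115 ns (the order of the values extracted in this analysis):

```python
import math

def blocking_correction(electron_rate_hz: float, tau_s: float) -> float:
    """Survival fraction of pions against Cerenkov blocking,
    delta = exp(-electron_rate * tau_eff)."""
    return math.exp(-electron_rate_hz * tau_s)

# With an assumed ~115 ns effective gate width, a 500 kHz electron
# rate blocks about 6% of the pion events:
survival = blocking_correction(500e3, 115e-9)
print(f"blocked fraction: {1.0 - survival:.1%}")   # blocked fraction: 5.6%
```

Dividing the measured pion yield by this survival fraction recovers the rate-independent yield.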
The rate-dependences of the normalized pion yields at each kinematic setting were consistent within their (large) uncertainties, and yielded an average gate width of $\tau=139\pm 19$~ns. Note that this study depends upon the tracking efficiency and cryotarget boiling corrections used, while the first study based on the \u{C}erenkov TDC spectra does not. Finally, since the $\tau$ value from the second study was determined with singles events, it needs to be adjusted to yield the effective gate width for coincidence events. This correction is determined from the portion of the \u{C}erenkov TDC spectrum corresponding to early electrons passing through the detector before the particle associated with the trigger, yielding $99.2\pm 19$~ns. The two F$_{\pi}$-2 \u{C}erenkov blocking studies (TDC gate width of $116.4\pm 6.3$~ns and corrected singles value of $99.2\pm 19$~ns) are consistent within uncertainties. It is difficult to tell which one is more definitive, so the error-weighted average $\tau_{\text{eff}}=114.7\pm 6.0$~ns is used to compute the \u{C}erenkov blocking correction $\delta_{\text{CCblock}}=e^{-(\text{ELECTRONrate})\cdot\tau_{\text{eff}}}$ for the F$_{\pi}$-2 $\pi^-$ analysis. For the F$_{\pi}$-2 $\pi^-$ data, the HMS electron rate varied from nearly zero to $\sim$600~kHz, resulting in a \u{C}erenkov blocking correction of 0-6\%. The $\pm 6.0$~ns uncertainty gives an uncorrelated systematic uncertainty of 0.3\% at 500~kHz, while the 17~ns difference in $\tau$ values from the two methods gives a scale uncertainty of 0.8\%. $\pi^-$ data without \u{C}erenkov veto at different rates were unfortunately not taken during the F$_{\pi}$-1 experiment, so the \u{C}erenkov blocking correction cannot be directly determined for those data. We therefore modify the \u{C}erenkov blocking correction determined from F$_{\pi}$-2 data for use in the F$_{\pi}$-1 analysis according to the following procedure.
An HMS \u{C}erenkov photoelectron histogram for a carbon elastics run taken at the very beginning of F$_{\pi}$-1, immediately before the first $\pi$ data run, indicates that the effective veto threshold in the F$_{\pi}$-1 experiment is slightly lower than that used in F$_{\pi}$-2. Therefore, a slightly more restrictive software threshold of $hcer_{\text{npe}}<1.5$ was applied in the analysis of the F$_{\pi}$-1 data. This histogram also indicates that the \u{C}erenkov veto would be about 80\% efficient for this run. \begin{figure} \begin{center} \vskip -0.2cm \includegraphics[angle=0,width=3.65in]{tdc_fpi1_fpi2_comparo.eps} \vskip -0.2cm \caption{\label{fig:Cerenkov TDC} (Color online) HMS \u{C}erenkov Trigger TDC histogram for the one F$_{\pi}$-1 $\pi^-$ run with \u{C}erenkov veto disabled (blue), compared to a F$_{\pi}$-2 run with the same trigger at a similar rate (red). HMS singles events, subject to a $hcer_{\text{npe}}>2.0$ \u{C}erenkov cut, are used in both spectra. The TDC scale is 100~ps/chan.} \end{center} \end{figure} We therefore reanalyzed the dedicated F$_{\pi}$-2 $\pi^-$ runs without \u{C}erenkov veto, this time applying the $hcer_{\text{npe}}<1.5$ \u{C}erenkov particle-identification cut appropriate to the F$_{\pi}$-1 analysis. The dependence of normalized pion singles yields on rate yielded a value of $\tau=162\pm 19$ ns, which was then adjusted to give an effective gate width for coincidence events of $116\pm 20$ ns. Finally, we used the TDC timing information from the only F$_{\pi}$-1 ``open trigger'' run taken just before the main data taking to estimate the scaling with respect to the F$_{\pi}$-2 timing information. As shown in Fig. \ref{fig:Cerenkov TDC}, the TDC timing window used during F$_{\pi}$-1 is wider than in F$_{\pi}$-2. Comparing the equivalent features of the two spectra gives a scale factor of $1.19\pm 0.084$.
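Applying this scale factor assumes the two factors are uncorrelated, so their relative errors add in quadrature; a minimal sketch of the propagation:

```python
import math

def product_with_error(a, da, b, db):
    """Product of two uncorrelated measurements; relative errors
    add in quadrature."""
    value = a * b
    relative = math.hypot(da / a, db / b)
    return value, value * relative

# Corrected singles gate width (ns) times the F_pi-1/F_pi-2 TDC scale factor
tau, dtau = product_with_error(115.7, 20.0, 1.19, 0.084)
```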
Application of this scale factor to the $\tau$ value determined from the F$_{\pi}$-2 data yields $\tau=(115.7\pm 20)\times (1.19\pm 0.084)=137.7\pm 26$~ns. The two values compare well (TDC gate width of $138.4\pm 6.3$~ns and corrected singles value of $137.7\pm 26$~ns), and thus the error-weighted average $\tau_{\text{eff}}=138.4\pm 6.1$~ns of the two was taken as the effective $\tau$ value to compute the \u{C}erenkov blocking corrections for the F$_{\pi}$-1 data normalization. For the F$_{\pi}$-1 $\pi^-$ data, the HMS electron rate varied from nearly zero to $\sim$1.2~MHz, resulting in a \u{C}erenkov blocking correction of 0-15\%. The $\pm 6.1$~ns uncertainty gives an uncorrelated systematic uncertainty of 0.7\% at 1.2~MHz, and scaling the 0.8\% F$_{\pi}$-2 scale uncertainty to 1.2~MHz gives a scale uncertainty of 1.0\%. \subsection{Other Particle Identification Corrections \label{sec:betacorr}} Fig. \ref{fig:beta_HMS_for_Fpi-1_data_set} shows the HMS particle speed, $\beta=v/c$, which is calculated from the time-of-flight difference between two scintillator planes in the HMS detector stack. The upper-band events are $\pi^+$ in the HMS, with the 2~ns beam structure of the incident electron beam clearly visible. The lower-band events are protons. In both F$_{\pi}$-1 and F$_{\pi}$-2, a cut $\beta>0.95$ was used to eliminate the protons. Additionally, in the F$_{\pi}$-2 experiment an aerogel \u{C}erenkov detector was used to separate protons and $\pi^+$ for HMS central momenta above 3 GeV/c. \begin{figure} \vskip -0.3cm \begin{center} \includegraphics[angle=0,width=3.8in]{beta_vs_MM.eps} \vskip -0.3cm \caption{\label{fig:beta_HMS_for_Fpi-1_data_set} (Color online) HMS+SOS coincidence time versus $\beta_{\text{HMS}}$ for a representative F$_{\pi}$-1 $\pi^+$ run. The dashed line indicates the $\beta >0.95$ cut used to separate pions from protons.
The solid lines indicate the region (for $\pi^-$ runs, without proton contamination) used to compute the $\beta$ cut correction. See the text for more details.} \end{center} \end{figure} Figure \ref{fig:beta_HMS_for_Fpi-1_data_set} also displays a ``tail'' at low ${\beta_{\text{HMS}}}$ due to pions undergoing nuclear interactions in the scintillators, \u{C}erenkov detector material, and, in the case of the F$_{\pi}$-2 experiment, the aerogel \u{C}erenkov detector material. A correction for pion events at lower $\beta$ eliminated by the $\beta>0.95$ cut was applied. In F$_{\pi}$-1 this correction was extracted from the $\pi^-$ data and was applied to both the $\pi^-$ and $\pi^+$ data sets. The correction was 4.89\%, with an uncertainty of 0.41\% determined from the standard deviation of the corrections extracted at the different $\pi^-$ kinematic settings. For the F$_{\pi}$-2 data, the same procedure was used, except that the aerogel \u{C}erenkov detector permitted the separation of protons from pions, leading to a cleaner pion sample. For each $\pi^+$ and $\pi^-$ kinematic setting, ``$\beta$ cut corrections'' were extracted in the same fashion, yielding average $\beta$ cut corrections of $2.42\%\pm 0.12\%$ and $2.51\%\pm 0.18\%$ for $\pi^+$ and $\pi^-$, respectively. A correction of 1-2\% for pions lost to nuclear interactions and true absorption in the HMS exit window and detector stack was also applied. For details on how this correction was determined, see Ref.~\cite{Blok08}. A comprehensive summary of the various corrections applied to the data is given in Table \ref{tab:eff_summary}.
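The quoted $\beta$ cut corrections are averages over kinematic settings, with the spread between settings taken as the uncertainty. A sketch of that bookkeeping, with purely illustrative per-setting values (not the experiment's):

```python
import statistics

def beta_cut_correction(per_setting_corrections):
    """Average beta-cut correction over kinematic settings (percent);
    the sample standard deviation serves as its uncertainty."""
    return (statistics.mean(per_setting_corrections),
            statistics.stdev(per_setting_corrections))

# Illustrative values only (percent), one entry per kinematic setting
corr, dcorr = beta_cut_correction([2.30, 2.55, 2.41])
```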
\begin{table*} \begin{center} \begin{tabular}{|l|c|c|}\hline \multicolumn{3}{|c|}{Summary of F$_{\pi}$-1 Correction Factors}\\ \hline\hline HMS tracking efficiency correction & $1-(0.0676\pm 0.002)/$S1Xrate(MHz) & Sec.~\ref{sec:trackeff}\\ LD$_2$ Cryotarget Boiling & $1-(0.0472\pm 0.003)/100~\mu$A & Sec.~\ref{sec:ld2_boiling}\\ Beam Current Offset & $0.2\mu$A & Sec.~\ref{sec:ld2_boiling}\\ HMS \u{C}erenkov blocking & $e^{-(\text{ELECTRONrate})\cdot(138.4\pm 6.1~\text{ns})}$ & Sec.~\ref{sec:cerblock}\\ $\beta_{cut}$ correction ($\pi^{\pm}$) & $4.89\% \pm 0.41\%$ & Sec.~\ref{sec:betacorr}\\ Pion Absorption & $1\%\pm 1\%$ & Sec.~\ref{sec:betacorr}, Ref.~\cite{Blok08}\\ \hline SOS \u{C}erenkov efficiency & $99.92\%\pm 0.02\%$ & Ref.~\cite{Tanjathesis}\\ SOS Calorimeter efficiency & $99.5\%\pm 0.1\%$ & Ref.~\cite{Tanjathesis}\\ HMS \u{C}erenkov efficiency & $99.6\%\pm 0.05\%$ & Ref.~\cite{Tanjathesis}\\ Coincidence Time Blocking & $e^{-(\text{Total pretrig rate})\cdot(140~\text{ns})}$ & Ref.~\cite{Volmerthesis}\\ HMS electronic live time & $1 - 5/6(N_{h60}-N_{h120})/N_{\text{hELREAL}}$ & Ref.~\cite{Volmerthesis}\\ SOS electronic live time & $1 - 5/6(N_{s60}-N_{s120})/N_{\text{sELREAL}}$ & Ref.~\cite{Volmerthesis}\\ \hline\hline \multicolumn{3}{|c|}{Summary of F$_{\pi}$-2 Correction Factors}\\ \hline\hline HMS tracking efficiency correction & $1-(0.0676\pm 0.002)/$S1Xrate(MHz) & Sec.~\ref{sec:trackeff}\\ LD$_2$ Cryotarget Boiling & No correction.
$\pm 0.3\%/100~\mu A$ & Sec.~\ref{sec:ld2_boiling}\\ HMS \u{C}erenkov blocking & $e^{-(\text{ELECTRONrate})\cdot(114.7\pm 6.0~\text{ns})}$ & Sec.~\ref{sec:cerblock}\\ $\beta_{cut}$ correction ($\pi^-$) & $2.51\% \pm 0.18\%$ & Sec.~\ref{sec:betacorr}\\ $\beta_{cut}$ correction ($\pi^+$) & $2.42\% \pm 0.12\%$ & Sec.~\ref{sec:betacorr}\\ Pion Absorption & $2\%\pm 1\%$ & Sec.~\ref{sec:betacorr}, Ref.~\cite{Blok08}\\ \hline SOS \u{C}erenkov efficiency & $99.92\%\pm 0.02\%$ & Ref.~\cite{Tanjathesis}\\ SOS Calorimeter efficiency & $99.5\%\pm 0.1\%$ & Ref.~\cite{Tanjathesis}\\ HMS \u{C}erenkov efficiency & $99.6\%\pm 0.05\%$ & Ref.~\cite{Tanjathesis}\\ HMS Aerogel efficiency & $99.5\%\pm 0.02\%$ & Ref.~\cite{Tanjathesis}\\ Coincidence Time Blocking & $e^{-(\text{SOS pretrig rate})\cdot(92~\text{ns})}$ & Ref.~\cite{Tanjathesis}\\ HMS electronic live time & $1 - 6/5(N_{h100}-N_{h150})/N_{h100}$ & Ref.~\cite{Tanjathesis}\\ SOS electronic live time & $1 - 6/5(N_{s100}-N_{s150})/N_{s100}$ & Ref.~\cite{Tanjathesis}\\ \hline \end{tabular} \end{center} \vskip -.5cm \caption{Summary of corrections applied to the deuterium data. In addition, HMS and SOS tracking efficiencies and computer live times are applied on a run-by-run basis. The electronic live times are measured by counting pretrigger signals with different gate widths $N_X$. \label{tab:eff_summary}} \end{table*} \subsection{Backgrounds} The coincidence timing structure between unrelated electrons and protons or pions from any two beam bursts is peaked every 2~ns, due to the accelerator timing structure. Real and random $e$-$\pi$ coincidences were selected with a coincidence time cut of $\pm 1$~ns. The random coincidence background (2-10\% during F$_{\pi}$-1, depending on the kinematic setting, and 1-2\% during F$_{\pi}$-2) was subtracted on a bin-by-bin basis.
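The bin-by-bin random subtraction can be sketched as follows, with randoms estimated from off-peak 2~ns beam buckets (each the same width as the $\pm 1$~ns real window) and averaged before subtraction; the histograms and the number of sampled buckets are illustrative:

```python
import numpy as np

def subtract_randoms(reals_plus_randoms, offpeak_sum, n_buckets):
    """Bin-by-bin random-coincidence subtraction.

    reals_plus_randoms : per-bin yields inside the +/-1 ns coincidence window
    offpeak_sum        : per-bin yields summed over n_buckets off-peak
                         2 ns beam buckets (randoms only)
    """
    randoms_per_bucket = offpeak_sum / n_buckets
    return reals_plus_randoms - randoms_per_bucket

# Illustrative: 4 off-peak buckets estimate the randoms under the peak
reals = subtract_randoms(np.array([110.0, 205.0]), np.array([40.0, 20.0]), 4)
```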
The contribution of background events from the aluminum cell walls was estimated using dedicated runs with two ``dummy'' aluminum targets placed at the appropriate locations. These data were analyzed in the same way as the cryotarget data and the yields (2-4\% of the total yield) were subtracted from the cryotarget yields, taking into account the different thicknesses (about a factor of seven) of the target-cell walls and dummy target. The contribution of the subtraction to the total uncertainty is negligible. \section{Cross Section Determination and Systematic Uncertainties \label{sec:analysis}} \subsection{Method} Following our earlier procedure \cite{Blok08}, we write the unpolarized pion electroproduction cross section as the product of a virtual photon flux factor and a virtual photon cross section, \begin{equation} \frac{d^5 \sigma}{d \Omega_e dE_e^\prime d \Omega_{\pi}} = J\left(t,\phi \rightarrow \Omega_{\pi}\right) \Gamma_v \frac{d^2 \sigma}{dt d \phi}, \end{equation} where $J\left(t,\phi \rightarrow \Omega_{\pi}\right)$ is the Jacobian of the transformation from $dtd\phi$ to $d\Omega_{\pi}$, $\phi$ is the azimuthal angle between the scattering and the reaction plane, and $\Gamma_v=\frac{\alpha}{2 \pi^2} \frac{E^\prime_e}{E_e} \frac{1}{Q^2} \frac{1}{1-\epsilon} \frac{W^2-M^2}{2 M}$ is the virtual photon flux factor. The (reduced) cross section can be expressed in terms of contributions from transversely and longitudinally polarized photons, \begin{eqnarray} \label{eqn:unsep} 2\pi \frac{d^2 \sigma}{dt d\phi} & = & \frac{d \sigma_T}{dt} + \epsilon \frac{d \sigma_L}{dt} + \sqrt{2 \epsilon (1 + \epsilon)} \frac{d \sigma_{LT}}{dt} \cos \phi \\ \nonumber & + & \epsilon \frac{d \sigma_{TT}}{dt} \cos 2 \phi.
\end{eqnarray} Here, $\epsilon=\left(1+2\frac{|\vec{q}|^2}{Q^2}\tan^2\frac{\theta}{2}\right)^{-1}$ is the virtual photon polarization, where $\vec{q}$ is the three-momentum transferred to the quasi-free nucleon, $\theta$ is the electron scattering angle, and $\phi$ has already been defined. In order to separate the different structure functions, one has to determine the cross section both at high and at low $\epsilon$ as a function of $\phi$ for fixed values of $W$, \mbox{$Q^2$}\ and $t$. Since the $t$-dependence is important, this should be done for various values of $t$ at every central \mbox{$Q^2$}\ setting. Therefore, the data are binned in $t$ and $\phi$, thus integrating over $W$ and \mbox{$Q^2$}\ within the experimental acceptance, and also over $\theta_\pi$ (the latter is of relevance, since the interference structure functions include a dependence on $\sin \theta_\pi$). However, the average values of $W$, \mbox{$Q^2$}, and $\theta_\pi$ generally are not the same for different $\phi$ and for low and high $\epsilon$. Moreover, the average values of $W$, \mbox{$Q^2$}, $t$, and $\theta_\pi$, only three of which are independent, may be inconsistent. Both problems can be avoided by comparing the measured yields to the results of a Monte-Carlo simulation for the actual experimental setup, in which a realistic model of the cross section is implemented. At the same time, effects of finite experimental resolution, pion decay, radiative effects, etc., can be taken into account. When the model describes the dependence of the four structure functions on $W$, \mbox{$Q^2$}, $t$, $\theta_\pi$ sufficiently well, i.e. 
when the ratio of experimental to simulated yields is close to unity within the statistical uncertainty, the cross section for any value of $\overline{W}$, $\overline{Q^2}$ within the acceptance can be determined as \begin{equation} \label{eqn:ratio_to_sigma} \left( \frac{d^2 \sigma}{dt d\phi}(t,\phi) \right)^{\mathrm{exp}}_{\overline{W},\overline{Q^2}} =\frac{Y_{\mathrm{exp}}}{Y_{\mathrm{sim}}} \left( \frac{d^2 \sigma}{dt d\phi}(t,\phi) \right)^{\mathrm{model}}_{\overline{W},\overline{Q^2}}, \end{equation} where $Y$ is the yield over $W$ and \mbox{$Q^2$}, with common values of $\overline{W}$, $\overline{Q^2}$ (if needed, different for different values of $t$) for all values of $\phi$, and for the high and low $\epsilon$ data, so as to enable a separation of the structure functions. In practice, the data at both high and low $\epsilon$ were binned in 4-6 $t$-bins and 16 $\phi$-bins, and the cross section was evaluated at the center of each bin. The overlined values in the expression above were taken as the acceptance-weighted averages over all $\phi$-bins (at both high and low $\epsilon$) together, which makes them slightly different for each $t$-bin. \subsection{Description of the Simulation Model and Kinematic Variables \label{sec:model}} The Hall C Monte Carlo package, SIMC, is used as the simulation package for this experiment. A detailed description of the program is given in Refs. \cite{Blok08,Volmerthesis, Gaskellthesis}. For each event, the program generates the coordinates of the interaction vertex ($x,y,z$) and the three-momenta of the scattered electron and the produced pion for the $^2$H$(e,e'\pi^{\pm})NN_s$ reaction. In the SIMC event generator, the following off-shell prescription was taken to determine the kinematics.
The ``spectator'' nucleon was taken to be ``on-shell'' in the initial state, while the struck nucleon was taken to be ``off-shell'' with the requirement that the total momentum of the nucleus is zero, and the total energy is the mass of a deuteron, $M_{D}$. The nucleon on which the pion is produced thus has a certain momentum (Fermi motion), taken from a deuteron wave function calculated with the Bonn $NN$-potential \cite{Koltenukthesis}. The outgoing particles are followed on their way through the target, spectrometer and detector stack, taking into account energy loss and multiple scattering. Possible radiation of the incoming and outgoing electron and the pion is included \cite{Blok08,ent00}. This leads to `experimental' values for the three-momenta of the scattered electron and the produced pion. Together with the value for the incoming electron, these are used to calculate kinematic quantities such as \mbox{$Q^2$}, $W$, $t$, $\theta_\pi$, and $\phi_\pi$, just as for the experimental data. Because experimentally the momentum of the struck nucleon is not observable, the kinematic quantities $t$, missing mass $M_X$, and $\theta_\pi$ were reconstructed (both for the experimental data and for the SIMC data) assuming quasi-free pion electroproduction, $\gamma^* N \rightarrow \pi^{\pm} N'$, where the virtual photon interacts with a nucleon at rest. The Mandelstam variable $t$ is calculated as $t=(p_{\text{target}}-p_{\text{recoil}})^2$. (In the limit of perfect resolution and no radiative effects, for $^1$H this formula gives the same result as $(p_{\gamma}-p_{\pi})^2$, but for $^2$H it does not, because of binding effects.) 
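The off-shell prescription above fixes the struck nucleon's four-momentum once the spectator momentum is drawn: the pair must sum to zero total momentum and to the deuteron mass in energy. A minimal sketch (the spectator is taken to be a neutron here purely for illustration, as would be the case for $\pi^+$ production):

```python
import math

M_D = 1.875613  # deuteron mass (GeV)
M_S = 0.939565  # on-shell spectator mass (GeV); a neutron, for illustration

def struck_nucleon(p_spectator):
    """Off-shell struck-nucleon four-momentum for an on-shell spectator.

    The pair is constrained to total momentum zero and total energy M_D,
    so the struck nucleon takes momentum -p_spectator and energy
    M_D - E_spectator (off shell)."""
    p2 = sum(p * p for p in p_spectator)
    e_spectator = math.sqrt(M_S**2 + p2)
    return M_D - e_spectator, tuple(-p for p in p_spectator)

# Illustrative Fermi momentum (GeV); real events sample the Bonn wave function
e_struck, p_struck = struck_nucleon((0.05, 0.0, 0.10))
```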
The missing mass $M_X$ was calculated according to: \begin{eqnarray} \vec{p}_{\text{missing}}&=&\vec{q}-\vec{p}_{\pi},\nonumber\\ E_{\text{missing}}&=&\nu+m_N-E_{\pi},\\ M_X^2&=&E_{\text{missing}}^2-p_{\text{missing}}^2\nonumber \end{eqnarray} where $m_N$ equals the free proton mass for $\pi^+$ production and the free neutron mass for $\pi^-$ production. See Fig. \ref{fig:MMplot} for a representative example. Finally, the center-of-mass (CM) frame azimuthal angle $\phi_{\text{CM}}$ in Eqn.~\ref{eqn:unsep} equals the experimentally reconstructed $\phi_{\pi q}$, and $\theta_{\text{CM}}$ is calculated by boosting to the photon-plus-nucleon-at-rest system. Event weighting in the simulation used a model cross section that depends on the values of \mbox{$Q^2$}, $W$, $t$, $\theta_\pi$, and $\phi_\pi$, calculated in the same way as for the (experimental and simulated) data, but using the vertex values of $k_{e'}$ and $k_{\pi}$. An iterative fitting procedure, discussed in Sec.~\ref{sec:modelxsec}, was used to determine this model cross section. It should be stressed that because of the quasi-free assumption with an initial nucleon at rest, the extracted cross sections and structure functions are effective ones, which cannot be directly compared to those from $^1$H. It was considered better to study the influence of off-shell effects (and possible other mechanisms in $^2$H) separately, using cross sections determined in a well-defined way, than to incorporate off-shell effects in some fashion into the extracted cross sections from the start. (Although the differences in practice may not be large.) \begin{figure} \begin{center} \includegraphics[width=3.25in]{mm_160_27_0000.ps} \end{center} \caption{(Color online) Missing mass of the undetected nucleon, calculated assuming quasi-free pion electroproduction, for a representative $\pi^+$ setting. The diamonds are experimental data, and the red line is the quasi-free Monte Carlo simulation.
The vertical line indicates the $M_X$ cut upper limit.} \label{fig:MMplot} \end{figure} In extracting the deuterium cross sections, it is desirable to keep as much of the missing-mass tail as possible (up to the two-pion threshold of 1.1 GeV), to maximize the acceptance of the ``quasi-free'' distribution, and to minimize the systematic uncertainty associated with the missing mass cut. The thick collimators of the HMS and SOS are very effective at stopping electrons, but a non-negligible fraction of the pions undergo multiple scattering and ionization energy loss and consequently end up contributing to the experimental yield~\cite{Gaskellthesis}. These pion (hadron) punch-through events have been observed in earlier experiments, and corrections are needed for a precise yield extraction. Since the pions in F$_{\pi}$-1 and F$_{\pi}$-2 are detected in the HMS, the simulation of collimator punch-through events was implemented for only this arm. The HMS event simulation therefore takes into account the probability that a pion interacts hadronically with the collimator (allowing the pion to undergo multiple scattering and ionization energy loss). After implementing the pion punch-through events in SIMC, the $M_X$ cut upper limit was determined by the value where the missing mass peak is no longer well reproduced by a quasi-free Monte Carlo simulation including all known detector effects, indicating the presence of additional backgrounds, such as two-pion production. The missing mass cut was taken to be 0.875$\leq M_X\leq$1.03 GeV. It is wider than the one used in the analysis of the hydrogen data because of Fermi motion in the deuteron. Compared to hydrogen, the backgrounds from target windows and random coincidences are generally larger due to the wider $M_X$ cut.
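The quasi-free missing-mass reconstruction described above can be sketched directly from its defining relations; the four-vectors below are illustrative, not experimental values:

```python
import math

def missing_mass(nu, q_vec, e_pi, p_pi_vec, m_N):
    """Quasi-free missing mass, target nucleon of mass m_N at rest:
    E_miss = nu + m_N - E_pi,  p_miss = q - p_pi,
    M_X^2 = E_miss^2 - |p_miss|^2."""
    p_miss = [q - p for q, p in zip(q_vec, p_pi_vec)]
    e_miss = nu + m_N - e_pi
    return math.sqrt(e_miss**2 - sum(p * p for p in p_miss))

# Illustrative kinematics (GeV); m_N is the proton mass for pi+ production
m_pi, m_p = 0.13957, 0.93827
p_pi = (0.1, 0.0, 1.8)
e_pi = math.sqrt(m_pi**2 + sum(p * p for p in p_pi))
mx = missing_mass(2.0, (0.0, 0.0, 2.2), e_pi, p_pi, m_p)
```

In the analysis, events with $M_X$ above the cut limit would be rejected as candidates for additional backgrounds such as two-pion production.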
\subsection{Determination of Separated Structure Functions} \label{sec:modelxsec} \begin{figure} \begin{center} \includegraphics[angle=0,width=3.0in]{exp_kinematics_mnw.eps} \caption{\label{fig:ExpKin} (Color online) Normalized experimental $\pi^-$ yield (black diamonds) in comparison to the quasi-free Monte Carlo simulation (red lines) for one HMS+SOS setting at \mbox{$Q^2$}=0.60 \mbox{GeV$^2$}, low $\epsilon$.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=3.0in]{plot_unsep_mn.eps} \caption{\label{fig:Phifit} (Color online) Unseparated experimental $\pi^-$ cross sections as a function of azimuthal angle $\phi$ at \mbox{$Q^2$}=0.60 \mbox{GeV$^2$}, low $\epsilon$ (black triangles) and high $\epsilon$ (blue inverted triangles). The curves shown represent the fit of the measured values of the cross section to Eqn.~\ref{eqn:unsep}. In this fit, all four parameters $\sigma_{L, T, LT, TT}$ are extracted simultaneously, separately for each $-t$ bin. } \end{center} \end{figure} The SIMC model cross section and the final separated structure functions were determined in the same (iterative) procedure. The model cross section was taken as the product of a global function describing the $W$-dependence times (a sum of) \mbox{$Q^2$}- and $t$-dependent functions for the different structure functions. For the $LT$ and $TT$ parts, their leading-order dependence on $\sin\theta_{\text{CM}}$ was taken into account \cite{raskin}. The $W$-dependence was taken as $(W^2-M_N^2)^{-2}$, where $M_N$ is the struck nucleon mass, based on analyses of experimental data from Refs.~\cite{Brauel1,beb78}. For the parts depending on \mbox{$Q^2$}\ and $t$, phenomenological forms were used and the parameters were fitted.
For all $t$-bins at every (central) \mbox{$Q^2$}\ setting, $\phi$-dependent cross sections were determined both at high and low $\epsilon$ for chosen values of $\overline{W}$, $\overline{Q}^2$ (and corresponding values of $\theta_{\pi}$ and $\epsilon$) according to Eqn.~\ref{eqn:ratio_to_sigma}. The iteration procedure was repeated until satisfactory agreement between the experimental and simulated distributions was obtained, the values of $\sigma_{L,T,LT,TT}$ (and the associated fit parameters) were stable in subsequent iterations, and the parameters fitted at the individual \mbox{$Q^2$}-settings did not change much with \mbox{$Q^2$}. A representative example of some relevant variables and of the fit of the experimental cross section as a function of $\phi_\pi$ is shown in Figs.~\ref{fig:ExpKin} and~\ref{fig:Phifit}. The cosine structure from the interference terms is clearly visible in Fig.~\ref{fig:Phifit}. This procedure was carried out independently for $\pi^+$ and $\pi^-$ at each \mbox{$Q^2$}, in order to have optimal descriptions in the different kinematic ranges covered.
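Because the unseparated cross section is linear in $\sigma_{T}$, $\sigma_{L}$, $\sigma_{LT}$, and $\sigma_{TT}$, the simultaneous four-parameter fit over $\phi$ bins at two $\epsilon$ values reduces to a linear least-squares solve. A sketch with noise-free synthetic data (bin counts, $\epsilon$ values, and the generating parameters are illustrative):

```python
import numpy as np

def design_matrix(eps, phi):
    """One row per phi bin: coefficients of (sig_T, sig_L, sig_LT, sig_TT)
    in 2*pi*d2sigma/(dt dphi) at virtual-photon polarization eps."""
    return np.column_stack([
        np.ones_like(phi),                           # sigma_T
        np.full_like(phi, eps),                      # eps * sigma_L
        np.sqrt(2 * eps * (1 + eps)) * np.cos(phi),  # LT interference
        eps * np.cos(2 * phi),                       # TT interference
    ])

phi = (np.arange(16) + 0.5) * 2 * np.pi / 16   # 16 phi-bin centers
truth = np.array([10.0, 20.0, 1.5, -4.0])      # synthetic sig_{T,L,LT,TT}
A = np.vstack([design_matrix(0.33, phi), design_matrix(0.85, phi)])
y = A @ truth                                  # noise-free pseudo-data
fit, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With real data each row would be weighted by the inverse uncertainty of the corresponding $\phi$ bin.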
The parameterizations used in the F$_{\pi}$-1 $\pi^+$ analysis are: \begin{eqnarray} \frac{d\sigma_{L}}{dt} &=& g(W) \bigl(p_{1}+p_{2} \ln(Q^2)\bigr) e^{(p_{3}+p_{4}\ln(Q^2))(-t)},\nonumber\\ \frac{d\sigma_{T}}{dt} &=& g(W) \biggl( p_{5}+p_{6}\ln(Q^2)\nonumber\\ &+&\bigl(p_{7}+p_{8}\ln(Q^2)\bigr)\biggl(\frac{|t|-|t_{\text{ave}}|}{|t_{\text{ave}}|}\biggr)\biggr),\nonumber\\ \frac{d\sigma_{LT}}{dt} &=& g(W) p_{9} e^{p_{10}(-t)} \sin\theta_{\text{CM}},\\ \frac{d\sigma_{TT}}{dt} &=& g(W) f(t) \frac{p_{11}}{Q^2 }e^{-Q^2} \sin^2\theta_{\text{CM}},\nonumber \label{eqn:mc_model_pl} \end{eqnarray} where $g(W)=1/(W^2-m_p^2)^2$ is the assumed $W$-dependence discussed earlier, $f(t)=-t/(-t-m_{\pi}^2)^2$ is the pion pole factor, $|t_{\text{ave}}|$ is the average $-t$ value for a given kinematic setting, given by $|t_{\text{ave}}|=(0.105+0.04 \ln(Q^2))Q^2$, and $p_{i}$ are the fit parameters. For the F$_{\pi}$-1 $\pi^-$ analysis, a slightly different parameterization (because \mbox{$\sigma_T$}\ and \mbox{$\sigma_{TT}$}\ showed a stronger \mbox{$Q^2$}-dependence) yielded a better fit: \begin{eqnarray} \frac{d\sigma_{L}}{dt} &=& g(W) \bigl(p_{1}+p_{2} \ln(Q^2)\bigr) e^{(p_{3}+p_{4}\ln(Q^2))(-t)},\nonumber \\ \frac{d\sigma_{T}}{dt} &=& g(W) \biggl( p_{5}+\frac{p_{6}}{Q^4+0.1}\nonumber\\ &+&\bigl(p_{7}+p_{8}\ln(Q^2)\bigr)\biggl( \frac{|t|-|t_{\text{ave}}|}{|t_{\text{ave}}|}\biggr)\biggr),\nonumber \\ \frac{d\sigma_{LT}}{dt} &=& g(W) p_{9} e^{p_{10}(-t)} \sin\theta_{\text{CM}},\\ \frac{d\sigma_{TT}}{dt} &=& g(W) f(t) \biggl( \frac{p_{11}}{Q^2 }+\frac{p_{12}}{Q^4+0.2}\biggr) \sin^2\theta_{\text{CM}}.
\nonumber \label{eqn:mc_model_mn} \end{eqnarray} In the F$_{\pi}$-2 analyses, a common parameterization (similar to those in F$_{\pi}$-1) was used for both $\pi^+$ and $\pi^-$: \begin{eqnarray} \frac{d\sigma_{L}}{dt} &=& g(W) \bigl(p_{1}+p_{2} \ln(Q^2)\bigr) e^{(p_{3}+p_{4}\ln(Q^2))(-t-0.2)},\nonumber \\ \frac{d\sigma_{T}}{dt} &=& g(W) \biggl( p_{5}+p_{6}\ln(Q^2)\nonumber\\ &+&\bigl(p_{7}+p_{8}\ln(Q^2)\bigr)\biggl( \frac{|t|-|t_{\text{ave}}|}{|t_{\text{ave}}|}\biggr)\biggr),\nonumber \\ \frac{d\sigma_{LT}}{dt} &=& g(W) \biggl( p_{9} e^{p_{10}(-t)} + \frac{p_{11}}{(-t)} \biggr) \sin\theta_{\text{CM}},\\ \frac{d\sigma_{TT}}{dt} &=& g(W) f(t) \frac{p_{12}}{Q^2 }e^{-Q^2} \sin^2\theta_{\text{CM}}, \nonumber \label{eqn:mc_model_fpi2} \end{eqnarray} where $|t_{\text{ave}}|=\bigl(0.0735+0.028 \ln(Q^2)\bigr)Q^2$ and $p_4=0$. \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|}\hline Correction & Uncorrelated & $\epsilon$ uncorr. & Correlated \\ & (pt-to-pt) & {\it t} corr. & (scale) \\ & [\%] & [\%] & [\%] \\ \hline \hline d$\theta_{e}$ & 0.1 & 0.7-1.1 & \\ d$E_{\text{beam}}$ & 0.1 & 0.2-0.3 & \\ d$P_{e}$ & 0.1 & 0.1-0.3 & \\ d$\theta_{\pi}$ & 0.1 & 0.2-0.3 & \\ Radiative corr & & 0.4 & 2.0 \\ HMS $\beta$ corr & 0.4 & & \\ Particle ID & & 0.2 & \\ Pion absorption & & & 1.0 \\ Pion decay & 0.03 & & 1.0 \\ HMS Tracking ($\pi^+$) & & 0.4 & 1.0 \\ HMS Tracking ($\pi^-$) & & 1.3 & 1.0 \\ SOS Tracking & & 0.2 & 0.5 \\ Charge & & 0.3 & 0.5 \\ Target Thickness & & 0.3 & 1.0 \\ CPU dead time & & 0.2 & \\ HMS Trigger & & 0.1 & \\ SOS Trigger & & 0.1 & \\ Electronic DT & & 0.1 & \\ HMS Cer. block. ($\pi^-$) & 0.7 & & 1.0 \\ Acceptance & 1.0 & 0.6 & 1.0 \\ \hline\hline Total ($\pi^+$) & 1.1 & 1.3-1.6 & 3.1 \\ Total ($\pi^-$) & 1.3 & 1.8-2.0 & 3.2 \\ \hline \end{tabular} \end{center} \vskip -.5cm \caption{\label{table:Fpi1_syst_unc_pl} Systematic uncertainties for F$_{\pi}$-1. 
Those items not discussed explicitly in preceding sections are assumed to be the same as for the published $^1$H analysis. These are the uncertainties in: kinematic offsets, radiative corrections, pion decay, SOS tracking, trigger efficiency, CPU and electronic dead time, and acceptance. The systematic uncertainties in each column are added in quadrature to obtain the total systematic uncertainty.} \end{table} \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|}\hline Correction & Uncorrelated & $\epsilon$ uncorr. & Correlated \\ & (pt-to-pt) & {\it t} corr. & (scale) \\ & [\%] & [\%] & [\%] \\ \hline \hline d$\theta_{e}$ & 0.1 & 0.7-1.1 & \\ d$E_{\text{beam}}$ & 0.1 & 0.2-0.3 & \\ d$P_{e}$ & 0.1 & 0.1-0.3 & \\ d$\theta_{\pi}$ & 0.1 & 0.2-0.3 & \\ Radiative corr & & 0.4 & 2.0 \\ HMS $\beta$ corr ($\pi^+$)& 0.12 & & \\ HMS $\beta$ corr ($\pi^-$)& 0.18 & & \\ Particle ID & & 0.2 & \\ Pion absorption & & & 1.0 \\ Pion decay & 0.03 & & 1.0 \\ HMS Tracking ($\pi^+$) & & 0.3 & 0.5 \\ HMS Tracking ($\pi^-$) & & 0.45 & 0.75 \\ SOS Tracking & & 0.2 & 0.5 \\ Charge & & 0.3 & 0.5 \\ Target Thickness & & 0.2 & 0.8 \\ CPU dead time & & 0.2 & \\ HMS Trigger & & 0.1 & \\ SOS Trigger & & 0.1 & \\ Electronic DT & & 0.1 & \\ HMS Cer. block. ($\pi^-$) & 0.3 & & 0.8 \\ Acceptance & 0.6 & 0.6 & 1.0 \\ \hline\hline Total ($\pi^+$) & 0.6 & 1.2-1.5 & 2.9 \\ Total ($\pi^-$) & 0.7 & 1.3-1.6 & 3.1 \\ \hline \end{tabular} \end{center} \vskip -.5cm \caption{\label{table:Fpi2_syst_unc_pl} Systematic uncertainties for F$_{\pi}$-2, similar to Table \ref{table:Fpi1_syst_unc_pl}.
Those items not discussed explicitly in preceding sections are assumed to be the same as for the published $^1$H analysis.} \end{table} \subsection{Systematic Uncertainties due to Missing Mass Cut and SIMC Model Dependence} Since the extracted separated cross sections depend in principle on the cross section model, there is a ``model systematic uncertainty.'' This uncertainty was studied by extracting \mbox{$\sigma_L$}\ and \mbox{$\sigma_T$}\ with different cross section models. There is a second, related uncertainty due to the modeling of the missing mass distribution. The combined systematic uncertainty due to both effects was estimated by modifying the missing mass cuts and SIMC model parameters $p_i$ and investigating the resulting differences in the separated cross sections. To estimate the missing mass cut dependence, the experimental and simulated data were analyzed with two tighter missing mass cuts, $M_X<0.98,\ 1.00$~GeV. A detailed comparison of the separated cross sections for each $t$-bin indicated that the $\pi^-$ separated \mbox{$\sigma_L$}\ cross sections for higher $-t$ at \mbox{$Q^2$}=0.6, 1.0~\mbox{GeV$^2$}\ were extremely sensitive to the applied $M_X$ cut and/or the disabling of the collimator pion punch-through routine in the SIMC simulations. We believe this is a result of the incomplete $\phi$ coverage for these settings, as listed in Table \ref{tab:kin}. The data for any $\pi^-$ $t$-bin were discarded if \mbox{$\sigma_L$}\ changed by significantly more than the statistical uncertainty when the nominal $M_X<$1.03 GeV cut was replaced with a $M_X<$1.00 GeV cut in both the experimental and simulation analyses. For the remaining $\pi^+$ and $\pi^-$ data, the differences between the ``final'' separated cross sections and those determined with tighter $M_X$ cuts were computed and the standard deviation was tabulated for each $-t$ bin at each \mbox{$Q^2$}.
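The discard criterion described above amounts to a significance test on the shift in \mbox{$\sigma_L$}\ under the tighter $M_X$ cut; a minimal sketch with illustrative numbers (the one-sigma threshold here is an assumption reflecting the wording ``significantly more than the statistical uncertainty''):

```python
def keep_t_bin(sig_l_nominal, sig_l_tight, stat_unc, n_sigma=1.0):
    """Keep a t-bin only if sigma_L under the tighter M_X cut agrees
    with the nominal value within n_sigma statistical uncertainties."""
    return abs(sig_l_tight - sig_l_nominal) <= n_sigma * stat_unc

# Illustrative: a 0.5 shift survives a 0.8 uncertainty; a 3.0 shift does not
kept = keep_t_bin(sig_l_nominal=12.0, sig_l_tight=11.5, stat_unc=0.8)
dropped = not keep_t_bin(sig_l_nominal=12.0, sig_l_tight=9.0, stat_unc=0.8)
```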
These standard deviations for the remaining F$_{\pi}$-1 $\pi^-$ data are in almost all cases larger than for the corresponding $\pi^+$ data, generally comparable to the statistical errors. The standard deviations are typically smallest at or near $-t_{\text{min}}$ and grow with increasing $-t$. The cross section model dependence was estimated in a similar manner. Since the longitudinal and transverse cross sections in the model reproduce the experimental values to within 10\%, these two terms were independently increased and decreased by 10\% in the model. Independent of this, the separated cross sections were also determined by alternately setting \mbox{$\sigma_{LT}$}=0 and \mbox{$\sigma_{TT}$}=0 in the model. Unseparated cross sections were calculated using Eqn.~\ref{eqn:ratio_to_sigma} and a fit performed using Eqn.~\ref{eqn:unsep} to extract $L$/$T$/$LT$/$TT$. The differences between the ``final'' separated cross sections minus the six independent variations were computed and the standard deviations tabulated for each $-t$ bin at each \mbox{$Q^2$}\ in the same manner as the missing mass cut study. The model sensitivities of the $L$,$T$ cross sections are generally similar to each other, and exhibit a weaker $t$-dependence than the $M_X$ cut sensitivities. The observed variations are relatively small, about half the statistical uncertainties in these cross sections (per $t$-bin) of 5-10\%. The reason is that \mbox{$\sigma_L$}\ and \mbox{$\sigma_T$}\ are effectively determined by the $\phi$-integrated cross section, which reduces the model uncertainty. The sensitivities of the $TT$ interference response functions are strongly $t$-dependent, being smaller for the lowest $-t$ bins at each \mbox{$Q^2$}\ and increasing for the larger $-t$ bins. These higher $-t$ bins have relatively poorer statistics as well as incomplete $\phi$ coverage at low $\epsilon$ (as well as at high $\epsilon$ for $\pi^-$ \mbox{$Q^2$}=0.6, 1.0~\mbox{GeV$^2$}). 
The $LT$ model sensitivities are smaller than for $TT$, and show no obvious trends. The standard deviations for each \mbox{$Q^2$}, $t$ bin from the two above studies were combined in quadrature to obtain the combined systematic uncertainty due to the missing mass cut and SIMC model dependence (labeled henceforth as ``model-dependence'' for brevity). The uncertainties computed in this manner are shown as error bands, presented along with the data in Sec.~\ref{sec:results}, and the values for each bin are listed as the second uncertainty in Tables \ref{tab:xsec_mn}, \ref{tab:xsec_pl}. \subsection{Systematic Uncertainties \label{sec:syst}} The various systematic uncertainties determined in Secs.~\ref{sec:expt}, ~\ref{sec:analysis} are listed in Tables \ref{table:Fpi1_syst_unc_pl},~\ref{table:Fpi2_syst_unc_pl}. Those items not discussed explicitly in these sections are assumed to be the same as for the previously published $^1$H analyses. The systematic uncertainties are subdivided into correlated and uncorrelated contributions. The correlated uncertainties, i.e., those that are the same for both $\epsilon$ points, such as target thickness corrections, are attributed directly to the separated cross sections. Uncorrelated uncertainties are attributed to the unseparated cross sections, with the result that in the separation of \mbox{$\sigma_L$}\ and \mbox{$\sigma_T$}\ they are inflated, just like the statistical uncertainties, by the factor $1/\Delta\epsilon$ (for \mbox{$\sigma_L$}), which is about three. The uncorrelated uncertainties can be further subdivided, since some of them differ in size between $\epsilon$ points but may influence the $t$-dependence at a fixed value of $\epsilon$ in a correlated way. The largest contributions to this ``$t$-correlated'' uncertainty are acceptance and kinematic offsets, and as a result they are the dominant systematic uncertainties for, e.g., \mbox{$\sigma_L$}.
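For two $\epsilon$ points, the separation of \mbox{$\sigma_L$}\ and \mbox{$\sigma_T$}\ (with the interference terms already fitted away) reduces to a two-point solve, which is why uncorrelated errors enter \mbox{$\sigma_L$}\ inflated by $1/\Delta\epsilon$. A minimal sketch with illustrative inputs:

```python
import math

def separate_lt(sig_lo, dsig_lo, eps_lo, sig_hi, dsig_hi, eps_hi):
    """Solve sig = sig_T + eps * sig_L at two epsilon points,
    propagating uncorrelated uncertainties into sigma_L."""
    d_eps = eps_hi - eps_lo
    sig_l = (sig_hi - sig_lo) / d_eps
    dsig_l = math.hypot(dsig_hi, dsig_lo) / d_eps  # inflated by 1/d_eps
    sig_t = sig_lo - eps_lo * sig_l
    return sig_l, dsig_l, sig_t

# Illustrative unseparated cross sections (arbitrary units) at two epsilons
sig_l, dsig_l, sig_t = separate_lt(14.0, 0.5, 0.30, 20.0, 0.5, 0.80)
```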
In addition to the uncertainties listed below, there are the uncertainties in the separated cross sections at each $-t$, \mbox{$Q^2$}\ setting due to the $M_X$ cut and SIMC model ``model-dependence''. \section{Results and Discussion \label{sec:results}} \subsection{$\bf ^2$H$\bf(e,e'\pi^{\pm})NN_s$ Separated Cross Sections and Ratios} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{dsig_sep6_1h2h.13sep23.ps} \caption{\label{fig:xsec} (Color online) Separated cross sections as a function of $-t$. {\em $\pi^-$~from~$^2$H:} [red circles], {\em $\pi^+$~from~$^2$H:} [black squares], {\em $\pi^+$~from~$^1$H:} [blue triangles]. The error bars include both statistical and uncorrelated systematic uncertainties. The ``model-dependences'' of the $L$, $T$, $LT$, $TT$ cross sections are indicated by the shaded bands, by which all data points move collectively. The $^1$H data have not been scaled to the mean $\overline{Q^2}$, $\overline{W}$ values for each $-t$ bin of $^2$H data. In addition, please keep in mind the issues relating to $^2$H off-shell effects before directly comparing the $^1$H and $^2$H data.} \end{center} \end{figure*} \begin{table*} \begin{center} \begin{tabular}{||c|c|c|c|c|c|c||} \hline $\overline{W}$& $\overline{Q^2}$ & $-t$& $\sigma_T$ & $\sigma_L$ & $\sigma_{TT}$ & $\sigma_{LT}$ \\ (GeV) & (GeV$^2$) & (GeV$^2$) & ($\mu$b/GeV$^2$)& ($\mu$b/GeV$^2$)& ($\mu$b/GeV$^2$)& ($\mu$b/GeV$^2$)\\ \hline\hline \multicolumn{7}{|c|}{$^2$H$(e,e'\pi^-)pp_s$} \\ \hline\hline \multicolumn{7}{|c|}{$Q^2=0.60$ GeV$^2$ $W=1.95$ GeV} \\ \hline 1.9733 & 0.5505 & 0.026 & 13.07 $\pm$ 1.44 $\pm$ 0.69 & 34.74 $\pm$ 2.39 $\pm$ 1.03 & 0.47 $\pm$ 1.17 $\pm$ 0.13 & 1.19 $\pm$ 0.70 $\pm$ 0.26 \\ 1.9568 & 0.5765 & 0.038 & 12.31 $\pm$ 0.69 $\pm$ 0.17 & 19.71 $\pm$ 1.13 $\pm$ 0.36 & -3.95 $\pm$ 0.48 $\pm$ 0.74 & 1.54 $\pm$ 0.24 $\pm$ 0.42 \\ 1.9452 & 0.6048 & 0.050 & 13.88 $\pm$ 0.62 $\pm$ 0.66 & 8.53 $\pm$ 1.03 $\pm$ 1.01 & -6.13 $\pm$ 0.43 $\pm$ 1.44 & 2.00 $\pm$ 0.20 $\pm$
0.48 \\ \hline \multicolumn{7}{|c|}{$Q^2=1.00$ GeV$^2$ $W=1.95$ GeV} \\ \hline 1.9864 & 0.9095 & 0.060 & 5.47 $\pm$ 1.29 $\pm$ 0.17 & 20.25 $\pm$ 2.25 $\pm$ 0.29 & -0.50 $\pm$ 0.80 $\pm$ 0.06 & -0.30 $\pm$ 0.48 $\pm$ 0.12 \\ 1.9703 & 0.9483 & 0.080 & 5.85 $\pm$ 0.68 $\pm$ 0.16 & 10.27 $\pm$ 1.16 $\pm$ 0.47 & -2.31 $\pm$ 0.38 $\pm$ 0.50 & 0.16 $\pm$ 0.19 $\pm$ 0.19 \\ 1.9489 & 0.9977 & 0.100 & 5.56 $\pm$ 0.51 $\pm$ 0.46 & 5.75 $\pm$ 0.91 $\pm$ 1.31 & -3.09 $\pm$ 0.32 $\pm$ 1.01 & -0.08 $\pm$ 0.15 $\pm$ 0.33 \\ \hline \multicolumn{7}{|c|}{$Q^2=1.60$ GeV$^2$ $W=1.95$ GeV} \\ \hline 2.0116 & 1.4345 & 0.135 & 2.51 $\pm$ 0.39 $\pm$ 0.02 & 4.31 $\pm$ 0.66 $\pm$ 0.12 & 0.22 $\pm$ 0.16 $\pm$ 0.04 & 0.13 $\pm$ 0.07 $\pm$ 0.03 \\ 1.9867 & 1.5064 & 0.165 & 1.58 $\pm$ 0.24 $\pm$ 0.08 & 3.64 $\pm$ 0.40 $\pm$ 0.09 & 0.33 $\pm$ 0.10 $\pm$ 0.04 & -0.00 $\pm$ 0.05 $\pm$ 0.02 \\ 1.9644 & 1.5650 & 0.195 & 1.83 $\pm$ 0.18 $\pm$ 0.05 & 1.82 $\pm$ 0.30 $\pm$ 0.05 & 0.25 $\pm$ 0.08 $\pm$ 0.05 & 0.04 $\pm$ 0.04 $\pm$ 0.03 \\ 1.9433 & 1.6178 & 0.225 & 1.52 $\pm$ 0.16 $\pm$ 0.10 & 1.53 $\pm$ 0.27 $\pm$ 0.11 & 0.29 $\pm$ 0.08 $\pm$ 0.09 & 0.04 $\pm$ 0.03 $\pm$ 0.03 \\ 1.9229 & 1.6664 & 0.255 & 1.52 $\pm$ 0.18 $\pm$ 0.15 & 0.80 $\pm$ 0.29 $\pm$ 0.10 & 0.19 $\pm$ 0.09 $\pm$ 0.16 & 0.03 $\pm$ 0.03 $\pm$ 0.03 \\ \hline \multicolumn{7}{|c|}{$Q^2=2.45$ GeV$^2$ $W=2.22$ GeV} \\ \hline 2.2978 & 2.1619 & 0.150 & 0.85 $\pm$ 0.11 $\pm$ 0.01 & 1.46 $\pm$ 0.22 $\pm$ 0.02 & -0.13 $\pm$ 0.10 $\pm$ 0.01 & 0.18 $\pm$ 0.04 $\pm$ 0.01 \\ 2.2695 & 2.2598 & 0.190 & 0.67 $\pm$ 0.05 $\pm$ 0.01 & 0.90 $\pm$ 0.10 $\pm$ 0.04 & -0.07 $\pm$ 0.05 $\pm$ 0.05 & 0.16 $\pm$ 0.03 $\pm$ 0.01 \\ 2.2400 & 2.3537 & 0.230 & 0.51 $\pm$ 0.03 $\pm$ 0.02 & 0.67 $\pm$ 0.07 $\pm$ 0.01 & -0.07 $\pm$ 0.04 $\pm$ 0.03 & 0.15 $\pm$ 0.02 $\pm$ 0.02 \\ 2.2154 & 2.4289 & 0.270 & 0.47 $\pm$ 0.03 $\pm$ 0.01 & 0.39 $\pm$ 0.06 $\pm$ 0.08 & -0.13 $\pm$ 0.04 $\pm$ 0.03 & 0.13 $\pm$ 0.02 $\pm$ 0.02 \\ 2.1932 & 2.4993 & 0.310 & 0.41 $\pm$ 0.02 $\pm$ 
0.01 & 0.22 $\pm$ 0.06 $\pm$ 0.06 & -0.07 $\pm$ 0.03 $\pm$ 0.04 & 0.14 $\pm$ 0.02 $\pm$ 0.02 \\ 2.1688 & 2.5753 & 0.350 & 0.31 $\pm$ 0.02 $\pm$ 0.03 & 0.21 $\pm$ 0.06 $\pm$ 0.04 & -0.15 $\pm$ 0.03 $\pm$ 0.06 & 0.13 $\pm$ 0.02 $\pm$ 0.02 \\ \hline \end{tabular} \end{center} \caption{\label{tab:xsec_mn} Separated cross sections for the $^2$H$(e,e'\pi^-)pp_s$ reaction. The first uncertainties listed are statistical only. The second uncertainties listed are the $M_X$ cut and SIMC model ``model-dependences''. In addition to these, the systematic uncertainties listed in Tables \ref{table:Fpi1_syst_unc_pl} and \ref{table:Fpi2_syst_unc_pl} must be applied.} \end{table*} \begin{table*} \begin{center} \begin{tabular}{||c|c|c|c|c|c|c||} \hline $\overline{W}$& $\overline{Q^2}$ & $-t$& $\sigma_T$ & $\sigma_L$ & $\sigma_{TT}$ & $\sigma_{LT}$ \\ (GeV) & (GeV$^2$) & (GeV$^2$) & ($\mu$b/GeV$^2$)& ($\mu$b/GeV$^2$)& ($\mu$b/GeV$^2$)&($\mu$b/GeV$^2$)\\ \hline\hline \multicolumn{7}{|c|}{$^2$H$(e,e'\pi^+)nn_s$} \\ \hline\hline \multicolumn{7}{|c|}{$Q^2=0.60$ GeV$^2$ $W=1.95$ GeV} \\ \hline 1.9702 & 0.5445 & 0.026 & 1.32 $\pm$ 1.49 $\pm$ 0.10 & 49.44 $\pm$ 2.51 $\pm$ 0.56 & 0.80 $\pm$ 1.11 $\pm$ 0.21 & -0.40 $\pm$ 0.53 $\pm$ 0.08 \\ 1.9572 & 0.5736 & 0.038 & 6.15 $\pm$ 0.64 $\pm$ 0.06 & 33.17 $\pm$ 1.18 $\pm$ 0.16 & -1.06 $\pm$ 0.56 $\pm$ 0.24 & 0.32 $\pm$ 0.26 $\pm$ 0.07 \\ 1.9495 & 0.5953 & 0.050 & 8.15 $\pm$ 0.51 $\pm$ 0.12 & 23.94 $\pm$ 0.97 $\pm$ 0.47 & -3.33 $\pm$ 0.46 $\pm$ 0.65 & -0.61 $\pm$ 0.20 $\pm$ 0.10 \\ 1.9444 & 0.6092 & 0.062 & 8.76 $\pm$ 0.54 $\pm$ 0.17 & 19.08 $\pm$ 0.99 $\pm$ 0.54 & -3.73 $\pm$ 0.49 $\pm$ 1.02 & -0.25 $\pm$ 0.21 $\pm$ 0.11 \\ 1.9423 & 0.6146 & 0.074 & 10.73 $\pm$ 0.64 $\pm$ 0.48 & 14.08 $\pm$ 1.15 $\pm$ 1.90 & -5.99 $\pm$ 0.61 $\pm$ 2.04 & 0.19 $\pm$ 0.23 $\pm$ 0.17 \\ 1.9411 & 0.6206 & 0.086 & 12.25 $\pm$ 0.81 $\pm$ 1.29 & 11.18 $\pm$ 1.45 $\pm$ 0.53 & -7.84 $\pm$ 0.83 $\pm$ 2.19 & 0.30 $\pm$ 0.29 $\pm$ 0.18 \\ \hline \multicolumn{7}{|c|}{$Q^2=0.75$ 
GeV$^2$ $W=1.95$ GeV} \\ \hline 1.9894 & 0.6668 & 0.037 & 8.76 $\pm$ 1.22 $\pm$ 0.15 & 21.76 $\pm$ 2.03 $\pm$ 0.48 & 2.13 $\pm$ 0.68 $\pm$ 0.18 & 0.67 $\pm$ 0.29 $\pm$ 0.02 \\ 1.9691 & 0.6978 & 0.051 & 10.82 $\pm$ 0.80 $\pm$ 0.29 & 15.90 $\pm$ 1.32 $\pm$ 0.39 & -0.54 $\pm$ 0.42 $\pm$ 0.38 & 0.42 $\pm$ 0.18 $\pm$ 0.07 \\ 1.9579 & 0.7259 & 0.065 & 10.34 $\pm$ 0.66 $\pm$ 0.34 & 14.41 $\pm$ 1.11 $\pm$ 0.28 & -3.70 $\pm$ 0.38 $\pm$ 0.75 & 0.54 $\pm$ 0.15 $\pm$ 0.10 \\ 1.9467 & 0.7483 & 0.079 & 9.36 $\pm$ 0.64 $\pm$ 0.29 & 16.06 $\pm$ 1.08 $\pm$ 1.65 & -6.93 $\pm$ 0.42 $\pm$ 1.42 & 0.22 $\pm$ 0.13 $\pm$ 0.11 \\ 1.9404 & 0.7640 & 0.093 & 9.75 $\pm$ 0.69 $\pm$ 0.37 & 15.82 $\pm$ 1.18 $\pm$ 4.73 & -9.57 $\pm$ 0.52 $\pm$ 2.41 & 0.39 $\pm$ 0.15 $\pm$ 0.23 \\ 1.9357 & 0.7805 & 0.107 & 11.10 $\pm$ 0.81 $\pm$ 0.58 & 13.76 $\pm$ 1.38 $\pm$ 7.22 &-12.50 $\pm$ 0.69 $\pm$ 3.45 & 1.12 $\pm$ 0.16 $\pm$ 0.41 \\ \hline \multicolumn{7}{|c|}{$Q^2=1.00$ GeV$^2$ $W=1.95$ GeV} \\ \hline 1.9970 & 0.8941 & 0.060 & 4.24 $\pm$ 0.82 $\pm$ 0.06 & 22.87 $\pm$ 1.55 $\pm$ 0.28 & 2.13 $\pm$ 0.71 $\pm$ 0.08 & 0.17 $\pm$ 0.31 $\pm$ 0.04 \\ 1.9802 & 0.9305 & 0.080 & 3.78 $\pm$ 0.50 $\pm$ 0.05 & 18.16 $\pm$ 0.95 $\pm$ 0.12 & -0.42 $\pm$ 0.41 $\pm$ 0.31 & -0.25 $\pm$ 0.18 $\pm$ 0.04 \\ 1.9602 & 0.9745 & 0.100 & 4.68 $\pm$ 0.40 $\pm$ 0.14 & 13.00 $\pm$ 0.76 $\pm$ 0.45 & -2.07 $\pm$ 0.35 $\pm$ 0.59 & -0.23 $\pm$ 0.13 $\pm$ 0.06 \\ 1.9458 & 1.0061 & 0.120 & 4.74 $\pm$ 0.37 $\pm$ 0.09 & 10.60 $\pm$ 0.72 $\pm$ 0.20 & -2.93 $\pm$ 0.36 $\pm$ 1.01 & -0.20 $\pm$ 0.12 $\pm$ 0.06 \\ 1.9349 & 1.0320 & 0.140 & 5.72 $\pm$ 0.44 $\pm$ 0.20 & 7.10 $\pm$ 0.83 $\pm$ 0.45 & -3.07 $\pm$ 0.43 $\pm$ 2.03 & -0.36 $\pm$ 0.13 $\pm$ 0.05 \\ 1.9247 & 1.0602 & 0.160 & 6.00 $\pm$ 0.62 $\pm$ 0.55 & 6.04 $\pm$ 1.14 $\pm$ 1.05 & -3.44 $\pm$ 0.58 $\pm$ 2.69 & -0.22 $\pm$ 0.16 $\pm$ 0.21 \\ \hline \multicolumn{7}{|c|}{$Q^2=1.60$ GeV$^2$ $W=1.95$ GeV} \\ \hline 2.0112 & 1.4353 & 0.135 & 3.43 $\pm$ 0.22 $\pm$ 0.03 & 6.38 $\pm$ 0.43 $\pm$ 0.03 
& 0.34 $\pm$ 0.22 $\pm$ 0.09 & -0.05 $\pm$ 0.09 $\pm$ 0.01 \\ 1.9884 & 1.4998 & 0.165 & 3.52 $\pm$ 0.17 $\pm$ 0.07 & 5.00 $\pm$ 0.34 $\pm$ 0.12 & -1.01 $\pm$ 0.16 $\pm$ 0.23 & -0.03 $\pm$ 0.06 $\pm$ 0.02 \\ 1.9669 & 1.5553 & 0.195 & 3.43 $\pm$ 0.15 $\pm$ 0.05 & 4.44 $\pm$ 0.30 $\pm$ 0.31 & -1.70 $\pm$ 0.16 $\pm$ 0.44 & 0.16 $\pm$ 0.05 $\pm$ 0.05 \\ 1.9463 & 1.6082 & 0.225 & 3.44 $\pm$ 0.15 $\pm$ 0.12 & 3.74 $\pm$ 0.30 $\pm$ 0.26 & -1.70 $\pm$ 0.17 $\pm$ 0.65 & 0.16 $\pm$ 0.05 $\pm$ 0.06 \\ 1.9276 & 1.6568 & 0.255 & 3.63 $\pm$ 0.18 $\pm$ 0.18 & 3.15 $\pm$ 0.36 $\pm$ 0.43 & -2.10 $\pm$ 0.20 $\pm$ 0.86 & -0.02 $\pm$ 0.05 $\pm$ 0.13 \\ 1.9097 & 1.7025 & 0.285 & 4.29 $\pm$ 0.25 $\pm$ 0.36 & 1.97 $\pm$ 0.48 $\pm$ 0.30 & -2.24 $\pm$ 0.26 $\pm$ 1.20 & 0.04 $\pm$ 0.07 $\pm$ 0.25 \\ \hline \multicolumn{7}{|c|}{$Q^2=2.45$ GeV$^2$ $W=2.22$ GeV} \\ \hline 2.3017 & 2.1503 & 0.150 & 1.40 $\pm$ 0.12 $\pm$ 0.01 & 1.90 $\pm$ 0.26 $\pm$ 0.03 & -0.04 $\pm$ 0.12 $\pm$ 0.02 & 0.15 $\pm$ 0.05 $\pm$ 0.01 \\ 2.2719 & 2.2518 & 0.190 & 1.23 $\pm$ 0.06 $\pm$ 0.02 & 1.22 $\pm$ 0.14 $\pm$ 0.08 & -0.21 $\pm$ 0.07 $\pm$ 0.04 & 0.18 $\pm$ 0.03 $\pm$ 0.01 \\ 2.2448 & 2.3391 & 0.230 & 1.26 $\pm$ 0.04 $\pm$ 0.01 & 0.65 $\pm$ 0.11 $\pm$ 0.03 & -0.12 $\pm$ 0.06 $\pm$ 0.03 & 0.23 $\pm$ 0.03 $\pm$ 0.02 \\ 2.2197 & 2.4180 & 0.270 & 1.22 $\pm$ 0.04 $\pm$ 0.03 & 0.35 $\pm$ 0.09 $\pm$ 0.02 & -0.16 $\pm$ 0.05 $\pm$ 0.04 & 0.26 $\pm$ 0.02 $\pm$ 0.02 \\ 2.1977 & 2.4878 & 0.310 & 1.16 $\pm$ 0.04 $\pm$ 0.05 & 0.18 $\pm$ 0.10 $\pm$ 0.06 & -0.19 $\pm$ 0.06 $\pm$ 0.05 & 0.22 $\pm$ 0.02 $\pm$ 0.02 \\ 2.1750 & 2.5570 & 0.350 & 1.19 $\pm$ 0.05 $\pm$ 0.04 &-0.10 $\pm$ 0.11 $\pm$ 0.04 & -0.22 $\pm$ 0.06 $\pm$ 0.05 & 0.23 $\pm$ 0.02 $\pm$ 0.04 \\ \hline \end{tabular} \end{center} \caption{\label{tab:xsec_pl} Separated cross sections for the $^2$H$(e,e'\pi^+)nn_s$ reaction. The first uncertainties listed are statistical only. The second uncertainties listed are the $M_X$ cut and SIMC model ``model-dependences''. 
In addition to these, the systematic uncertainties listed in Tables \ref{table:Fpi1_syst_unc_pl} and \ref{table:Fpi2_syst_unc_pl} must be applied.} \end{table*} \begin{figure*} \begin{center} \includegraphics[angle=90,width=6.5in]{ltratio.13sep23.ps} \caption{\label{fig:ltratio} (Color online) $L$/$T$ separated cross section ratios as a function of $-t$ for $\pi^+$ [black squares] and $\pi^-$ [red circles] production on $^2$H, and for $\pi^+$ on $^1$H [blue triangles]. The model-dependences of the ratios are indicated by the shaded bands, by which all data points move collectively.} \end{center} \end{figure*} The $\pi^{\pm}$ separated cross sections from $^2$H are shown in Fig.~\ref{fig:xsec} and are listed in Tables~\ref{tab:xsec_mn} and \ref{tab:xsec_pl}. Also shown for comparison are our previously published $\pi^+$ data from $^1$H \cite{Blok08}. Please keep in mind the issues relating to $^2$H off-shell effects discussed in Sec.~\ref{sec:model} before directly comparing the $^1$H and $^2$H data, particularly at higher $-t$, where the effect of Fermi momentum is larger. In the $L$ response of Fig.~\ref{fig:xsec}, the pion pole is evident from the sharp rise at small $-t$. The cross sections for $\pi^-$ and $\pi^+$ from $^2$H are similar to each other and to those from $^1$H, but there is a general tendency for the $\pi^-$ \mbox{$\sigma_L$}\ to drop more rapidly with $-t$ than the $\pi^+$ \mbox{$\sigma_L$}. The $T$ responses are much flatter versus $t$. With the exception of the lowest two $-t$ bins at \mbox{$Q^2$}=0.6 \mbox{GeV$^2$}, the $\pi^+$ \mbox{$\sigma_T$}\ from $^2$H are generally within the uncertainties of the \mbox{$\sigma_T$}\ from $^1$H. We have looked very carefully at the analysis of these two low $-t$ bins, but we were unable to identify a specific reason for this behavior; hence, we do not believe it to be an artifact of the analysis.
We note that these two $-t$ bins correspond to the smallest relative momentum of the two recoil nucleons in our data set ($<$170 MeV/c), where nucleonic final-state interactions absent for $^1$H may be relevant. It is also seen that the $\pi^-$ \mbox{$\sigma_T$}\ are significantly lower than the $\pi^+$ \mbox{$\sigma_T$}\ at \mbox{$Q^2$}=1.6, 2.45 \mbox{GeV$^2$}. The suppression of $\sigma_T^{\pi^-}$ relative to $\sigma_T^{\pi^+}$ may benefit future measurements of $F_{\pi}(Q^2)$, since the larger $L$/$T$ ratio in $^2$H$(e,e'\pi^-)pp_s$ would enjoy reduced error magnification compared to $p(e,e'\pi^+)n$. This enhancement in the $L$/$T$ ratio at higher \mbox{$Q^2$}\ is seen more clearly in Fig.~\ref{fig:ltratio}. The interference \mbox{$\sigma_{LT}$}, \mbox{$\sigma_{TT}$}\ cross sections are shown in the bottom two rows of Fig.~\ref{fig:xsec}. Interestingly, at higher \mbox{$Q^2$}\ the $\pi^-$ interference cross sections are more similar to the $\pi^+$ cross sections from $^1$H than from $^2$H. Also note that the model-dependence of the interference cross sections grows dramatically with $-t$, particularly for the $\pi^+$ cross sections from $^2$H. The model-dependences from $^1$H are not shown, but are significantly smaller. \begin{figure} \begin{center} \includegraphics[height=3.25in]{ratioL_overlay.ps} \end{center} \caption{(Color online) The ratios $R_L$ and $R_T$ versus $-t$ for four \mbox{$Q^2$}\ settings from this work. The model-dependences of the ratios are indicated by the shaded bands, and the error bars include statistical and uncorrelated systematic uncertainties. Also shown are the ratios at \mbox{$Q^2$}=0.4 \mbox{GeV$^2$}\ in the resonance region from Refs. \cite{Gaskellthesis, Gaskell01}, and $R_T$ from the $E_{\gamma}$=3.4 GeV photoproduction data of Ref.~\cite{heide}. \label{fig:Rlt_plot1}} \end{figure} $\pi^-/\pi^+$ ratios of the separated cross sections were formed, in which nuclear binding and rescattering effects are expected to cancel.
(No corrections have been made for electromagnetic FSI or two-photon exchange effects, but these are expected to be small.) Many experimental normalization factors cancel to a high degree in the ratio (acceptance, target thickness, pion decay and absorption in the detectors, radiative corrections, etc.). The principal remaining uncorrelated systematic errors are in the tracking inefficiencies, target boiling corrections (due to different beam currents used), and \v{C}erenkov blocking corrections. Figure \ref{fig:Rlt_plot1} shows the values of the separated cross section $\pi^-/\pi^+$ ratios. $R_L$ is approximately 0.8 near $-t_{\text{min}}$ at each \mbox{$Q^2$}\ setting, as predicted in the large $N_c$ limit calculation of Ref.~\cite{frankfurt}. Under the not necessarily realistic assumption that the isoscalar and isovector amplitudes are real, $R_L=0.8$ gives $A_S/A_V=6\%$. This is relevant for the extraction of the pion form factor from electroproduction data, which uses a model including some isoscalar background. It is difficult at this stage to make a more quantitative conclusion, but this result is qualitatively in agreement with the findings of our pion form factor analyses \cite{Huber08,volmer}, which found evidence of a small additional contribution to \mbox{$\sigma_L$}\ not taken into account by the VGL Regge Model in our \mbox{$Q^2$}=0.6-1.6~\mbox{GeV$^2$}\ data at $W=1.95$ GeV, but little evidence for any additional contributions in our \mbox{$Q^2$}=1.6-2.45 \mbox{GeV$^2$}\ data at $W=2.2$ GeV. The main conclusion to be drawn is that pion exchange dominates the forward longitudinal response even $\sim 10\ m_{\pi}^2$ away from the pion pole. The $R_L$ results from Gaskell et al.\ \cite{Gaskellthesis, Gaskell01} at \mbox{$Q^2$}=0.4~\mbox{GeV$^2$}, $W<1.7$ GeV, are above 1, presumably because of significant resonance contributions.
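The step from $R_L=0.8$ to $A_S/A_V\approx6\%$ can be made explicit. Writing $\sigma_L^{\pi^\mp}\propto|A_V\mp A_S|^2$ with real amplitudes gives $R_L=[(1-r)/(1+r)]^2$ for $r=A_S/A_V$. A minimal sketch (the sign convention assigning the relative minus sign to $\pi^-$ is an assumption of this illustration):

```python
import math

def isoscalar_over_isovector(r_l):
    """Invert R_L = ((1 - r)/(1 + r))**2 for r = A_S/A_V, assuming real
    amplitudes with sigma_L(pi-) ~ |A_V - A_S|^2 and
    sigma_L(pi+) ~ |A_V + A_S|^2 (sign convention assumed here)."""
    s = math.sqrt(r_l)
    return (1.0 - s) / (1.0 + s)

ratio = isoscalar_over_isovector(0.8)  # ~0.056, i.e. A_S/A_V of about 6%
```

For $R_L=1$ the inversion returns $r=0$, as it must when the isoscalar amplitude vanishes.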
Also in Fig.~\ref{fig:Rlt_plot1} are our $R_T$ results, following a nearly universal curve with $-t$, and exhibiting only a small \mbox{$Q^2$}-dependence. Interestingly, above $-t=0.15$ \mbox{GeV$^2$}, the photoproduction $R_T$ at $E_{\gamma}$=3.4 GeV from Heide et al.\ \cite{heide} are very close in value to our ratios from electroproduction. Of the \mbox{$Q^2$}=0.4~\mbox{GeV$^2$}\ data from Refs. \cite{Gaskellthesis, Gaskell01}, the higher $-t$ point [$-t=0.21$ \mbox{GeV$^2$}\ at $W=1.15$ GeV, below the $\Delta(1232)$] is closer to the `universal curve', while the lower $-t$ point [$-t=0.04$ \mbox{GeV$^2$}\ at $W=1.6$ GeV, in the resonance region] is well below it. At the highest \mbox{$Q^2$}\ and $-t$, $R_T$ reaches $0.26\pm 0.02$, which is consistent with Nachtmann's prediction for the $s$-channel knockout of valence quarks \cite{nachtmann}, \begin{equation} \frac{\gamma^*_T n\rightarrow\pi^-p}{\gamma^*_T p\rightarrow\pi^+n}=\Bigl(\frac{e_d}{e_u}\Bigr)^2=\frac{1}{4}, \end{equation} at sufficiently large $-t$. This value is reached at a much lower value of $-t$ than for the unseparated ratios of Ref.~\cite{Brauel1}. A value of $-t=0.3$ \mbox{GeV$^2$}\ seems quite a low value for quark charge scaling arguments to apply directly. This might indicate the partial cancellation of soft QCD corrections in the formation of the $\pi^-/\pi^+$ ratio. Data at larger $-t$ are needed to see if this interpretation is correct. Photoproduction data \cite{Gao} at $-t\geq3$ \mbox{GeV$^2$}\ have hinted at quark-partonic behavior, based on the combination of constituent scaling and experimental results for $R_T$. However, the experimental photoproduction cross sections are much larger than can be accounted for by one-hard-gluon-exchange diagrams in a handbag factorization calculation, even at $s\sim 10$ \mbox{GeV$^2$}\ \cite{huang00}.
Either the vector meson dominance contribution is still large, or the leading-twist generation of the meson underestimates the handbag contribution \cite{kroll02}. However, by forming the $\pi^-/\pi^+$ ratio the nonperturbative components represented by the form factors and meson distribution amplitude may be divided out, allowing the perturbative contribution to be observed more readily. In the limit that the soft contributions are completely divided out, the one-hard-gluon-exchange calculation predicts \cite{kroll02} the simple scaling behavior \begin{displaymath} \frac{d\sigma(\gamma n\rightarrow\pi^- p)}{d\sigma (\gamma p\rightarrow \pi^+n)} \approx \Bigl[ \frac{e_d (u-m_p^2)+ e_u(s-m_p^2)}{e_u (u-m_p^2)+ e_d(s-m_p^2)} \Bigr]^2. \end{displaymath} The recent JLab data at $\theta_{CM}=90^\circ$ and above $-t=3$ \mbox{GeV$^2$}\ are in agreement with the above expression, while those at smaller $\theta_{CM}$ are not \cite{Gao}. A possible explanation for the relatively early perturbative behavior in transverse electroproduction is that the quasi-free process $e q\rightarrow e q$ has the minimal total number of elementary fields (4) \cite{brodsky73} and so requires only a single photon exchange. The fact that only a single pion is created may be crucial to this quasi-free picture, since it implies that the string tension never greatly exceeds O($m_{\pi}$). By contrast, the photoproduction reaction $\gamma q\rightarrow q$ at high $-t$ can only proceed if the initial quark is far off its mass shell. The required strong binding to other quarks leads to the larger number of active elementary fields in $\gamma N \rightarrow \pi N$ (9) and hence $s^{2-n}=s^{-7}$ scaling. \begin{figure*} \begin{center} \includegraphics[angle=90,width=6.5in]{ratioTT.13sep23.ps} \caption{\label{fig:tttratio} (Color online) $TT$/$T$ separated cross section ratios as a function of $-t$.
The legend is the same as in Fig.~\protect{\ref{fig:ltratio}}.} \end{center} \end{figure*} Another prediction of the quark-parton mechanism \cite{nachtmann} is the suppression of $\sigma_{TT}/\sigma_T$ due to $s$-channel helicity conservation. Our data support this hypothesis in that \mbox{$\sigma_{TT}$}\ decreases more rapidly than \mbox{$\sigma_T$}\ with increasing \mbox{$Q^2$}. This is particularly true for $\pi^+$ electroproduction on both $^2$H and $^1$H, where $\sigma_{TT}/\sigma_T\simeq (-19\pm 1)\%$ at our highest \mbox{$Q^2$}, $-t$ setting (see Fig.~\ref{fig:tttratio}). The $\sigma_{TT}/\sigma_T$ ratios for $\pi^-$ production are generally consistent with those for $\pi^+$, once one takes into account the respective error bars and model-dependences. \subsection{Comparison of Various Models with the Data} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{dsig_sep6_models1.ps} \caption{\label{fig:xsec_models1} (Color online) Comparison of separated cross sections as a function of $-t$ with various models. {\em $\pi^-$~from~$^2$H:} [red circles], {\em $\pi^+$~from~$^2$H:} [black squares]. The data error bars and bands are as in Fig.~\ref{fig:xsec}. The dotted black curves are predictions of the VGL Regge model \protect{\cite{VGL1}} using the values $\Lambda_{\pi}^2$=0.394, 0.411, 0.455, 0.491~\mbox{GeV$^2$}\ and $\Lambda_{\rho}^2$=1.50~\mbox{GeV$^2$}, as determined from fits to our $^1$H data \protect{\cite{Huber08}}. The short-dashed green curves are predictions by Kaskulov and Mosel \protect{\cite{kaskulov}}, and the dot-dashed blue curves are the predictions by Vrancx and Ryckebusch \protect{\cite{vrancx}}; both models are evaluated at the nominal kinematics.
In all cases, the thick lines are the model predictions for $\pi^+$ and the thin lines are the predictions for $\pi^-$.} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{dsig_sep6_models2.ps} \caption{\label{fig:xsec_models2} (Color online) Comparison of separated cross sections as a function of $-t$ with various models. The symbols are as in Fig.~\ref{fig:xsec_models1}. The long-dashed magenta curves are the predictions of the MAID07 model \protect{\cite{maid07}}, and the solid red curves are predictions by Goloskokov and Kroll \protect{\cite{gk13}}. Both models are evaluated at the same $\overline{W}$, $\overline{Q^2}$ as the data. The thick lines are the model predictions for $\pi^+$ and the thin lines are the predictions for $\pi^-$.} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[height=6.5in, angle=90.]{vgl_ratio3_prc.ps} \end{center} \caption{(Color online) The ratios $R_L$, $R_T$ and $R_{TT}\equiv\sigma_{TT}^{\pi^-}/\sigma_{TT}^{\pi^+}$ versus $-t$ for four \mbox{$Q^2$}\ settings. The error bars include statistical and uncorrelated systematic uncertainties. The model-dependences of the ratios are indicated by the shaded bands. The model legend is the same as Figs. \ref{fig:xsec_models1}, \ref{fig:xsec_models2}; i.e. dotted black curves are the VGL Regge model \protect{\cite{VGL1}}, short-dashed green curves are Kaskulov and Mosel \protect{\cite{kaskulov}}, dot-dashed blue curves are Vrancx and Ryckebusch \protect{\cite{vrancx}}, long-dashed magenta curves are the MAID07 model \protect{\cite{maid07}}, and solid red curves are Goloskokov and Kroll \protect{\cite{gk13}}. \label{fig:Rlt_plot}} \end{figure*} The separated cross section data are compared to a variety of models in Figs. \ref{fig:xsec_models1},~\ref{fig:xsec_models2}, and our $R_L$, $R_T$ and $R_{TT}\equiv\sigma_{TT}^{\pi^-}/\sigma_{TT}^{\pi^+}$ ratios are compared to the same models in Fig.~\ref{fig:Rlt_plot}. 
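As an aside, the two quark-charge ratio limits quoted in the previous subsection, Nachtmann's $(e_d/e_u)^2=1/4$ and the one-hard-gluon-exchange expression, are straightforward to evaluate numerically. The sketch below is illustrative only; the quark charges are the sole physics input and the Mandelstam values are hypothetical, not taken from the data.

```python
def nachtmann_limit():
    """Transverse pi-/pi+ ratio for s-channel knockout of a valence
    quark at sufficiently large -t: (e_d/e_u)^2."""
    e_u, e_d = 2.0 / 3.0, -1.0 / 3.0
    return (e_d / e_u) ** 2  # = 1/4

def one_hard_gluon_ratio(s, u, m_p=0.938272):
    """One-hard-gluon-exchange estimate of
    d(sigma)(gamma n -> pi- p) / d(sigma)(gamma p -> pi+ n)."""
    e_u, e_d = 2.0 / 3.0, -1.0 / 3.0
    mp2 = m_p ** 2
    num = e_d * (u - mp2) + e_u * (s - mp2)
    den = e_u * (u - mp2) + e_d * (s - mp2)
    return (num / den) ** 2

# hypothetical kinematics, not from the data: s = 10 GeV^2, u = -8 GeV^2
ratio = one_hard_gluon_ratio(s=10.0, u=-8.0)
```

Note that for $u=s$ the one-hard-gluon ratio is unity by symmetry, since numerator and denominator coincide.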
The VGL Regge model, which does well for photoproduction \cite{VGL1} and longitudinal electroproduction \cite{Blok08}, fails to describe the magnitude or the \mbox{$Q^2$}-dependence of \mbox{$\sigma_T$}. For any choice of the $\rho \pi \gamma$ monopole mass, it underpredicts the transverse cross sections by a large factor, which increases with \mbox{$Q^2$}. As briefly mentioned in the introduction, the VGL Regge model was extended by Kaskulov and Mosel (KM) \cite{kaskulov} and more recently by Vrancx and Ryckebusch (VR) \cite{vrancx}. KM add to the Regge model a hadronic model, which incorporates DIS $\pi^{\pm}$ electroproduction at the amplitude level. This DIS process dominates the transverse response at moderate and high \mbox{$Q^2$}, increasing the predicted \mbox{$\sigma_T$}. In this approach, the residual effects of nucleon resonances in the proton electromagnetic transition form factor are treated as dual to partons, i.e., ``resonance-parton (R-P) contributions''. The VR model differs from the KM model by using an alternative R-P transition form factor, which better describes the deep-inelastic $N(e,e'\pi^{\pm})$ data. The VGL model parameters used here are taken from the fits to our $^1$H \mbox{$\sigma_L$}\ data shown in Ref.~\cite{Huber08}. Similarly, the KM and VR models base their parameterization of the pion electromagnetic form factor upon fits to our $^1$H \mbox{$\sigma_L$}\ data. Not surprisingly, the VGL and KM models predict nearly identical \mbox{$\sigma_L$}\ in Fig.~\ref{fig:xsec_models1}, while the VR values are a bit higher. For \mbox{$\sigma_T$}, the KM and VR models are much closer to the experimental values than VGL, but they predict a steeper $t$-dependence than exhibited by the data. Of these three models, KM also provides the best description of the $\pi^+$ \mbox{$\sigma_{LT}$}\ and \mbox{$\sigma_{TT}$}\ data.
The $R_L$ predictions of the VGL, KM and VR models are nearly identical at \mbox{$Q^2$}=0.6, 1.0 \mbox{GeV$^2$}, with some differences becoming apparent at larger \mbox{$Q^2$}\ and $-t$. With the exception of the highest $-t$ points at \mbox{$Q^2$}=2.45 \mbox{GeV$^2$}, the models generally predict $R_L$ ratios that are too large in comparison to the data. As already discussed, the reason for this discrepancy for the three \mbox{$Q^2$}\ taken at $W=1.95$ GeV is believed to be a small resonance contribution in the longitudinal channel that is not included in these models. The VGL, KM and VR models also generally underpredict $R_T$, particularly at $-t_{\text{min}}$. However, the KM and VR models predict systematically larger $R_T$ values than VGL due to the addition of the DIS mechanism to the transverse channel. In fact, the VR model comes quite close to the data at higher $-t$, and \mbox{$Q^2$}, validating their improvements to the R-P transition form factor, such as a softer proton Dirac form factor. The MAID model is a phenomenological fit to pion electroproduction data in the canonical resonance region ($W<2$ GeV). This model incorporates Breit-Wigner fits to nucleon resonances and also includes (unitarized) non-resonant backgrounds. Originally introduced in 1998 \cite{maid98}, MAID has undergone incremental improvements. Shown here are the results of the most recent version of the MAID model from 2007 \cite{maid07}. For these calculations, we have used the MAID07 standard parameter set, although some parameters (such as relative strengths of resonances, the charged pion form factor, etc.) can be adjusted. Finally, note that we apply the MAID model to some kinematics with $W>2$ GeV. Strictly speaking, the model is not constrained in this regime and the results plotted represent an extrapolation of a calculation fit at lower $W$. 
For \mbox{$\sigma_L$}, the MAID07 predictions are slightly higher than the VGL, KM and VR models, while the \mbox{$\sigma_T$}\ predictions are midway between the purely Regge-based VGL and the VGL+DIS KM and VR. In terms of $\pi^-/\pi^+$ ratios, MAID07 provides by far the best description of $R_L$, providing further evidence that the disagreement between the pion-pole dominated models and the $R_L$ data is due to small resonant contributions in the longitudinal channel. MAID07 also provides a fairly good description of $R_T$ at \mbox{$Q^2$}=1.6 \mbox{GeV$^2$}, although it undershoots $R_T$ at \mbox{$Q^2$}=0.6, 1.0 \mbox{GeV$^2$}\ and overshoots it at \mbox{$Q^2$}=2.45 \mbox{GeV$^2$}. The overshoot is probably due to the significant extrapolation from the optimized parameter region $W<2$ GeV. We further investigated the impact of resonances in the MAID07 model on the $\pi^-/\pi^+$ ratios. With all resonances turned off (Born term and meson exchange contributions on), the model gives $R_L\approx 1$ and $R_T$ far below the data ($R_T\approx$0.5 at \mbox{$Q^2$}=0.6, 1.0 \mbox{GeV$^2$}, $R_T\approx$0.2 at \mbox{$Q^2$}=1.6, 2.45 \mbox{GeV$^2$}). Even though the data are acquired near $W=2$ GeV or higher, turning on only the P$_{33}$(1232) resonance has a significant effect on $R_T$ (increasing it to $\approx$1.5 at \mbox{$Q^2$}=0.6, 1.0 \mbox{GeV$^2$}, and $\approx$0.8 at \mbox{$Q^2$}=1.6, 2.45 \mbox{GeV$^2$}), but it has only a small effect on $R_L$. Progressively turning on the other resonances yields no clear trend in the behavior of either ratio. Curiously, turning off only the highest three resonances, F$_{37}$(1950), P$_{31}$(1910), F$_{35}$(1905), results in virtually no change from the nominal case. In summary, no clear single resonance seems to account for the global behavior of the separated ratios in the MAID07 model. It would be extremely interesting to see the result if the model parameters could be optimized for higher $W$.
The Goloskokov-Kroll (GK) GPD-based model \cite{gk10,gk13} is a modified perturbative approach, incorporating the full pion electromagnetic form factor (as determined by fits to our $F_{\pi}$ data \cite{Huber08}) in the longitudinal channel and the $H_T$ transversity GPD dominating the transverse channel. The GK model is in good agreement with our $R_T$ data at $-t_{\text{min}}$, but predicts a $t$-dependence that is too flat. The predictions for $R_L$ are very similar to the pion-pole dominated VGL, KM and VR models. It is extremely important to keep in mind that the parameters in the GK model are optimized for small skewness ($\xi<0.1$) and large $W>4$ GeV, and have not been adjusted at all for the kinematics of our data. This limitation becomes apparent when comparing the GK-predicted \mbox{$\sigma_L$}\ and \mbox{$\sigma_T$}\ to our data in Fig.~\ref{fig:xsec_models2}. The predicted \mbox{$\sigma_T$}\ are too large in magnitude, being entirely off the plotting scale at \mbox{$Q^2$}=1.0 \mbox{GeV$^2$}, and dropping very rapidly with $-t$ to come close to the data for the highest $-t$ at \mbox{$Q^2$}=1.6, 2.45 \mbox{GeV$^2$}. The predicted \mbox{$\sigma_L$}\ are generally similar to, but slightly smaller in magnitude than, the VGL, KM and VR models. All four models use our $^1$H $\pi^+$ data as a constraint in one manner or another. The reasonable agreement between the GPD-based model and our data is sufficiently encouraging in our view to justify further effort to better describe the larger $\xi$, smaller $W$ regime such as that covered by our data. \section{Summary} We present $L$/$T$/$LT$/$TT$ separated cross sections for the $^2$H$(e,e'\pi^{\pm})NN_s$ reactions, at \mbox{$Q^2$}=0.6-1.6 \mbox{GeV$^2$}, $W=1.95$ GeV and \mbox{$Q^2$}=2.45 \mbox{GeV$^2$}, $W=2.2$ GeV. The data were acquired with the HMS+SOS spectrometers in Hall C of Jefferson Lab, with the exclusive production of a single pion assured via a missing mass cut.
The separated cross sections have typical statistical uncertainties per $t$-bin of 5-10\%. The dominant systematic uncertainties are due to HMS tracking at high rates ($\pi^-$), HMS \v{C}erenkov blocking ($\pi^-$), cryotarget boiling at high current ($\pi^+$), spectrometer acceptance modeling, radiative corrections, and pion absorption and decay. These data represent a substantial advance over previous measurements, which were either unseparated at \mbox{$Q^2$}=0.7 \mbox{GeV$^2$}\ \cite{Brauel1}, or separated but over a limited kinematic range in the resonance region \cite{Gaskell01,Gaskellthesis}. In comparison to our previously published $\pi^+$ data from $^1$H \cite{Blok08}, the $\pi^+$ $L$/$T$ ratios from $^2$H are higher at \mbox{$Q^2$}=0.6, 1.0 \mbox{GeV$^2$}\ but fall more steeply with $-t$, are nearly the same as from $^1$H at \mbox{$Q^2$}=1.6 \mbox{GeV$^2$}, and lower at \mbox{$Q^2$}=2.45 \mbox{GeV$^2$}. In contrast, the $\pi^-$ longitudinal cross sections are lower than for $\pi^+$ at \mbox{$Q^2$}=0.6, 1.0 \mbox{GeV$^2$}, but the drop with increasing \mbox{$Q^2$}\ is less drastic, and by \mbox{$Q^2$}=2.45 \mbox{GeV$^2$}\ the $\pi^-$ $L$/$T$ ratio is slightly more favorable than for $\pi^+$. If this trend continues to higher \mbox{$Q^2$}, this larger $L$/$T$ ratio would benefit future planned $L$/$T$-separations of the $^2$H$(e,e'\pi^-)pp_s$ reaction \cite{12gev} due to a smaller error magnification factor. \mbox{$\sigma_{LT}$}\ is nearly zero for all kinematic settings, and we also observe a significant suppression of \mbox{$\sigma_{TT}$}\ compared to \mbox{$\sigma_T$}, particularly at \mbox{$Q^2$}=2.45 \mbox{GeV$^2$}. Our data for $R_L$ trend toward 0.8 at low $-t$, indicating the dominance of isovector processes in forward kinematics, which is consistent with our earlier findings when extracting the pion form factor from $^1$H data at the same kinematics \cite{Huber08}.
Although higher order corrections in the longitudinal cross section are expected to be quite large even at \mbox{$Q^2$}=10 \mbox{GeV$^2$}, these corrections may largely cancel in ratios of longitudinal observables such as $R_L$ \cite{Belitsky,frankfurt}. Since the transverse target asymmetry is difficult to separate from significant non-longitudinal contaminations at $Q^2=5-10$ \mbox{GeV$^2$}, $R_L$ may be the only practical ratio for constraining the polarized GPDs. In addition to the longitudinal cross section, $R_L$ is one of the few realistically testable predictions of the GPD model, particularly if higher order corrections cancel at a relatively low \mbox{$Q^2$}\ value of 2.45 \mbox{GeV$^2$}. The evolution of $R_T$ with $-t$ shows a rapid fall-off with apparently very little \mbox{$Q^2$}-dependence above $-t=0.1$ \mbox{GeV$^2$}\ within the range covered by our data. Even the old photoproduction data above $-t$=0.15 \mbox{GeV$^2$}\ from DESY \cite{heide} follow this universal curve. The $R_T$ value at the highest $-t$ is consistent with $s$-channel quark knockout. However, it is unclear if this indicates a transition from nucleon and meson degrees of freedom to quarks and gluons, as such quark-partonic behavior is at variance with theoretical expectations of large higher twist effects in exclusive measurements \cite{Berger}, and the MAID \cite{maid07} results suggest important soft effects. Measurements at larger values of $-t$ and \mbox{$Q^2$}\ and further theoretical work are clearly needed to better understand the observed ratios. If $R_T$ is still $\simeq$1/4 to $\pm$10\% at higher \mbox{$Q^2$}\ and similar $x_B$, the hypothesis of a quark knockout reaction mechanism will be strengthened, since there is no natural mechanism for generating $R_T$=1/4 in a Regge model over a wide range of \mbox{$Q^2$}. 
Since $R_T$ is not dominated by the pion pole term, this observable is likely to play an important role in future transverse GPD programs planned after the completion of the JLab 12 GeV upgrade. The larger energy bites will also permit simultaneous separations of electroproduction of other exclusive transitions, such as $\gamma_v+N\rightarrow K^+\Lambda$ and $\Sigma$ \cite{12gev-K}. \begin{acknowledgments} The authors thank Drs. Goloskokov and Kroll for the unpublished model calculations at the kinematics of our experiment, and Drs. Guidal, Laget, and Vanderhaeghen for modifying their computer program for our needs. This work is supported by DOE and NSF (USA), NSERC (Canada), FOM (Netherlands), NATO, and NRF (Republic of Korea). Additional support from Jefferson Science Associates and the University of Regina is gratefully acknowledged. At the time these data were taken, the Southeastern Universities Research Association (SURA) operated the Thomas Jefferson National Accelerator Facility for the United States Department of Energy under contract DE-AC05-84ER40150. \end{acknowledgments}
\section{Introduction}\label{intro} Designing a network to optimize its reliability is one of the most classic problems in the optimization literature, with applications in many aspects of our lives and certainly in many more in the future. In the recent special issue to celebrate the 50-year anniversary of the \emph{Networks} journal, a beautiful and exhaustive review by Brown et al.~\cite{brown2021network} was published, which covers the different notions, theory and applications of network reliability, along with future directions. In a similar way, P\'erez-Ros\'es~\cite{SixtyYears} revisited the 60 years of this problem, initially started by E.F. Moore and C.E. Shannon~\cite{moore1956reliable} in 1956. This paper proposes a new methodology to advance in one of the most classical and challenging problems in this area: the exact reliability optimization of a network. Consider a simple undirected graph $G=(V,E)$ representing a network whose edges fail with known probabilities---these failures are assumed to be independent. We consider one of the most classical notions of reliability: the \emph{all-terminal reliability} of $G$, which is defined as the probability that $G$ remains connected. The exact reliability evaluation belongs to the class of $\mathcal{NP}$-hard problems~\cite{Ball1983}, even when only two terminals are required to be connected~\cite{Valiant1979} or under identical link failure probabilities~\cite{Rosenthal}. However, for certain specific families of graphs, the problem can be solved in polynomial time. In the particular case of series-parallel graphs, Satyanarayana and Wood~\cite{Satya1985} provide a set of reliability-preserving reductions that simplify the graph and its structure, making it possible to compute the reliability of a series-parallel network in linear time. This work is our starting point. This type of reduction can also be applied to general graphs to reduce the size and complexity of the problem. 
For example, a recent work~\cite{goharshady2020efficient} extended these results to obtain a parametrized algorithm for computing the reliability of graphs with small treewidth in linear time. Nonetheless, this approach has not been thoroughly exploited in the literature. As Brown et al.~\cite{brown2021network} note in their recent review, one of the most relevant and less-traveled paths is the optimization of network reliability. That is, given a limited set of resources (e.g., number of edges), how should one select a subgraph of $G$ that maximizes its reliability? Given the hardness of computing the exact reliability of a network, using this metric as the objective function for an optimization problem can be highly impractical in general. However, since the computation of the reliability in certain families of graphs is tractable, and due to the sharp progress we have witnessed in mixed-integer nonlinear optimization technology, an efficient method for reliability optimization for particular classes of graphs is plausible. A survey on reliability optimization is provided in~\cite{Boesch2009}. Most previous works have assumed either identical link costs or independent link failures with identical probability; in these cases, the optimal topologies are highly symmetrical~\cite{zia2010redundancy}. A foundational work on this problem for the all-terminal reliability problem was carried out by Boesch~\cite{BoeschHello}. In this work, a fixed budget is considered under identical costs (i.e., the network has a precise number of links), and identical independent link failures with probability $\rho$ are assumed. The goal is to design a graph whose all-terminal reliability is maximized in a uniform sense, i.e., over the whole compact set $\rho \in [0,1]$, among all the graphs with the same number of nodes and links. These graphs are known as \emph{uniformly most-reliable graphs} or UMRGs. 
It is known that UMRGs must have the maximum tree number and maximum connectivity, providing evidence of the symmetry of the optimal graphs under the strong assumptions of independence and identical costs. Nevertheless, several historical conjectures regarding the construction and existence of UMRGs remain open~\cite{UMRG2019}. For the restricted class of series-parallel graphs, UMRGs are characterized in~\cite{neufeld1985most}. One way of tackling the difficulties associated with exact reliability computation is to use meta-heuristics to provide a \emph{good} solution for the problem; this approach was studied extensively during the 1990s~\cite{aboelfotoh2001neural,dengiz1997efficient,deeter1998economic}. Later work on this subject includes hybrid ACO~\cite{dengiz2010design} and self-tuning heuristics~\cite{dengiz2015self}. More recently, \cite{ozkan2020reliable} combines heuristic techniques with branch-and-bound methods. However, the main drawback of these techniques is that heuristics cannot provide guarantees on the optimality of their solutions. A different approach to overcome this problem is to use simulations of the failure process and embed these sampled scenarios into an optimization model, known as the sample average approximation method~\cite{kleywegt2002sample}. This approach allows one to consider failure models without assuming independent or identical failure probabilities, but it yields an approximate formulation. For example, \cite{SAA} provides a powerful methodology to minimize the cost of a network meeting two-terminal reliability constraints, which considers probabilistic cuts in the branch-and-bound tree. Similarly, \cite{TopologicalRel2015} presents a reliability optimization model with dependent link failures ruled by the Marshall-Olkin copula~\cite{MarshalOlkinParameters,MarshalOlkin2}. In \cite{TOP}, a stochastic network flow model using scenarios for the two-terminal reliability case is provided. 
Since the nonlinear functions representing the reliability of the problem are neither convex nor concave, straightforward mathematical optimization formulations seem to be of little use in reliability optimization problems. Nonetheless, one can rely on convexification techniques to approximate the problem with a tractable alternative and exploit spatial branch-and-bound to further refine such approximations. This is a key component of state-of-the-art mixed-integer nonlinear programming (MINLP) technology and, to the best of our knowledge, it has not been exploited for the general network reliability problem. For example, convex envelopes of functions can be efficiently computed in special cases, which yield strong convex relaxations. This idea is used in~\cite{ye2018mixed} to optimize the design of a reliable chemical plant, which can be represented as a series graph. Similar techniques have been used for other network-related problems, for example, on AC optimal power flow problems~\cite{bynum2018strengthened}. Our paper builds on this idea. We consider the reliability optimization problem with independent failures but with possibly different failure probabilities among edges. In this setting, the reduction techniques remain valid~\cite{Satya1985}. Based on these reliability-preserving series-parallel reductions, a convex MINLP formulation for the reliability optimization problem is obtained for series-parallel graphs. In this formulation, each reduction generates a constant number of new constraints (a linear number in total). We provide tight convex envelopes for the functions appearing in the reduction process, which, combined with the refinements carried out in spatial branch-and-bound trees, allows us to obtain the (exact) optimal solution efficiently. This idea could also be extended to other families of graphs: for instance, one can rely on series-parallel reductions to decrease the size of a problem. This article is organized as follows. 
Section~\ref{sec:concepts} presents series and parallel reductions that preserve all-terminal reliability, following the work of Satyanarayana and Wood~\cite{Satya1985}. Section~\ref{sec:main} presents the main contributions of this work. Specifically, a nonlinear and nonconvex formulation of the reliability optimization problem is introduced in Subsection~\ref{basic}, considering series-parallel reductions. Subsection~\ref{sec:envelopes} introduces convex envelopes associated with the series-parallel reductions, using classical McCormick envelopes \cite{mccormick1976computability} and a novel envelope for series-type reductions (Theorem~\ref{thm:main}). Further improvements to the resulting convex optimization problem are addressed in Subsection~\ref{improvements}. The computational effectiveness of our proposal is studied in Section~\ref{sec:results}. Finally, Section~\ref{sec:conclusions} presents concluding remarks and directions for future work. \section{Definitions and reliability-preserving reductions}\label{sec:concepts} Consider an undirected graph $G=(V,E)$. Nodes are perfectly reliable, but links may fail with independent probabilities $q_e$ for $e=1,\ldots |E|$. Let us denote by $p_e=1-q_e$ the elementary reliability of the link $e$. We denote by $\mathcal{R}_G(p)$ the \emph{all-terminal reliability} of graph $G$: the probability that $G$ is connected. Given two graphs $G_1$ and $G_2$ with two distinguished vertices $s(G_i)$ and $t(G_i)$, a \emph{series composition} of $G_1$ and $G_2$, denoted by $G_1 +_S G_2$, is the disjoint union of both graphs, merging $t(G_1)$ with $s(G_2)$. In this case, $s(G_1 +_S G_2) = s(G_1)$ and $t(G_1 +_S G_2) = t(G_2)$. Similarly, a \emph{parallel composition} $G_1 +_P G_2$ is the disjoint union of both graphs, merging $s(G_1)$ with $s(G_2)$ (thus $s(G_1) = s(G_2) = s(G_1 +_P G_2)$) and merging $t(G_1)$ with $t(G_2)$ (thus $t(G_1)=t(G_2)=t(G_1 +_P G_2)$). 
A graph $G$ is a \emph{series-parallel graph} if it can be obtained from a sequence of series-parallel compositions starting from the single edges $G_e=\{e\}$ for $e= 1\ldots |E|$. Formally, let $\mathcal{G}_0=\{ G_e : e=1\ldots |E|\}$ be the set of single edges. Iteratively, we construct $\mathcal{G}_i = \left(\mathcal{G}_{i-1} \setminus \{G_j, G_k\}\right) \cup \{G_{|E|+i}\}$, where $G_{|E|+i}=G_j+_\odot G_k$ is either a series or parallel composition of graphs $G_j$ and $G_k$ in $\mathcal{G}_{i-1}$. Note that the number of connected components in $\mathcal{G}_i$ is $|E|-i$ because this number decreases by one in each iteration. Therefore, $\mathcal{G}_{|E|-1}$ contains only one element, which is a connected series-parallel graph. We denote by $\mathcal{S}_G = \left[G_{|E|+i}=G_j+_\odot G_k\right]_{i=1}^{|E|-1}$ this sequence of series-parallel compositions to construct $G$. Note that this sequence is not unique for a given graph $G$. Satyanarayana and Wood~\cite{Satya1985} presented a set of reliability-preserving transformations for computing the reliability of series-parallel graphs based on a sequence of series-parallel compositions. These transformations can also be used in general graphs to reduce their size in their reliability computation. When two edges $e_j,e_k$ are \emph{in parallel}, with reliabilities $p_j,p_k$, these edges can be replaced by a new single edge $e_i$ with reliability $p_i = 1-(1-p_j)(1-p_k)$, which is the probability that at most one of these two links fails. If two edges $e_j,e_k$ are \emph{in series}, with reliabilities $p_j,p_k$, at least one of them must remain operational to keep the graph connected. Thus, if we replace these two edges with a new edge $e_i$, the reliability of this edge must consider this event. Let $\mathcal{A}$ be the event that $e_j$ and $e_k$ do not fail simultaneously. 
Hence, the reliability of the graph satisfies \begin{align*} \mathcal{R}_G &= \mathbb{P}[G \text{ is connected} | \mathcal{A}]\cdot \mathbb{P}[\mathcal{A}] + \underbrace{\mathbb{P}[G \text{ is connected} | \mathcal{A}^C]}_{0} \cdot \mathbb{P}[\mathcal{A}^C] \\ &= \mathbb{P}[G \text{ is connected} | \mathcal{A}]\cdot \mathbb{P}[\mathcal{A}] \end{align*} Therefore, in the case of a series reduction replacing edges $e_j$ and $e_k$ with a new edge $e_i$, the reliability of the resulting graph must be multiplied by $\mathbb{P}[\mathcal{A}] = 1-(1-p_j)(1-p_k)$, and the reliability of the new edge $e_i$ is the probability that both edges are operational conditional on the event that at least one of them remains operational, that is, $p_j \cdot p_k$ normalized by the probability of $\mathcal{A}$: $$p_i = \frac{p_j p_k}{1-(1-p_j)(1-p_k)}.$$ Finally, note that the sequence $\mathcal{S}_G$ that \emph{constructs} the graph $G$ can be used to compute the reliability of $G$. That is, if $G_{|E|+i}=G_j+_\odot G_k$ and both $G_j$ and $G_k$ correspond to edges, then $G_{|E|+i}$ is a new edge that replaces the original two edges by their series/parallel composition. Therefore, applying this sequence iteratively, at each step of $\mathcal{S}_G$ the subgraphs $G_j,G_k$ in a composition are single edges. We formalize the reliability computation of a series-parallel graph $G$ in Algorithm~\ref{alg:computeReliability}. 
\begin{algorithm} \caption{Compute the reliability of a series-parallel graph $G$ from its composition sequence $\mathcal{S}_G$.} \label{alg:computeReliability} \begin{algorithmic} \Require Composition sequence $\mathcal{S}_G = \left[G_{|E|+i}=G_j+_\odot G_k\right]_{i=1}^{|E|-1}$ \Require $p_e$ reliabilities of edges $e\in 1\ldots |E|$ \For {$i=1\ldots |E|$} \State $G_i \gets \{i\}$ \State $Y_i \gets p_i$ \State $\Omega_i \gets 1$ \EndFor \For {$i=1\ldots |E|-1$} \If{$G_{|E|+i}=G_j+_P G_k$} \State $Y_{|E|+i} \gets 1-(1-Y_j)(1-Y_k)$ \State $\Omega_{|E|+i} \gets 1$ \ElsIf{$G_{|E|+i}=G_j+_S G_k$} \State $Y_{|E|+i} \gets \frac{Y_j Y_k}{1-(1-Y_j)(1-Y_k)} $ \State $\Omega_{|E|+i} \gets 1-(1-Y_j)(1-Y_k)$ \EndIf \EndFor \State \Return $\mathcal{R}_G := Y_{2|E|-1} \cdot \prod_{i=1}^{2|E|-1} \Omega_i$ \end{algorithmic} \end{algorithm} In other words, $Y_i$ represents the reliability of edge $i$ for $i\leq |E|$ or, for $i>|E|$, the reliability of the edge resulting from the reduction $G_i=G_j+_\odot G_k$. Similarly, $\Omega_{|E|+i}$ represents the reliability factor from the reduction $G_{|E|+i}$, which is either $\Omega_{|E|+i}=1$ (parallel composition) or $\Omega_{|E|+i}=1-(1-Y_j)(1-Y_k)$ (series composition). When the graph has been reduced to a single edge, the reliability of $G$ is equal to the reliability of this edge ($Y_{2|E|-1}$) multiplied by all the factors $\Omega_i$. Algorithm~\ref{alg:computeReliability} allows one to compute the all-terminal reliability of $G$ in linear time. \section{Optimizing the reliability of a series-parallel graph}\label{sec:main} \subsection{A nonlinear optimization model for a series-parallel graph}\label{basic} The aforementioned results provide a procedure to compute the resulting reliability $\mathcal{R}_G$ of a series-parallel graph $G$. Our main interest is in studying the network design problem of selecting the subgraph that maximizes reliability given a set of constraints. 
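As a concrete illustration of the reduction procedure above, the following minimal Python sketch implements Algorithm~\ref{alg:computeReliability} (illustrative only; the function name and the `(op, j, k)` encoding of $\mathcal{S}_G$ are our own conventions, not part of the original implementation). For a triangle with elementary reliabilities $(0.9, 0.8, 0.7)$, a series reduction of two edges followed by a parallel reduction reproduces the closed-form all-terminal reliability $p_1(1-q_2 q_3)+q_1 p_2 p_3 = 0.902$.

```python
import math

def reliability(num_edges, p, sequence):
    """All-terminal reliability of a series-parallel graph via
    reliability-preserving reductions (Algorithm 1).

    p        -- elementary reliabilities p_e, indexed 0..num_edges-1
    sequence -- list of (op, j, k) triples with op in {'S', 'P'}, where
                j, k are 0-based indices of previously built components
                (the composition sequence S_G)
    """
    Y = list(p)                    # Y_i: reliability of component i's edge
    Omega = [1.0] * num_edges      # correction factors (1 for single edges)
    for op, j, k in sequence:
        if op == 'P':              # parallel: new edge fails only if both fail
            Y.append(1.0 - (1.0 - Y[j]) * (1.0 - Y[k]))
            Omega.append(1.0)
        else:                      # series: condition on >= 1 edge surviving
            factor = 1.0 - (1.0 - Y[j]) * (1.0 - Y[k])
            Y.append(Y[j] * Y[k] / factor if factor > 0.0 else 0.0)
            Omega.append(factor)
    return Y[-1] * math.prod(Omega)

# Triangle with p = (0.9, 0.8, 0.7): series-reduce edges 1 and 2 (index 3),
# then a parallel reduction with edge 0.  Closed form: p1*(1-q2*q3) + q1*p2*p3.
R = reliability(3, [0.9, 0.8, 0.7], [('S', 1, 2), ('P', 0, 3)])
assert abs(R - 0.902) < 1e-12
```

Note that a pure series pair recovers the expected product rule: the factor $\Omega$ cancels the normalization, so two edges in series give $\mathcal{R}_G = p_j p_k$.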
Specifically, given a graph $G=(V,E)$, we are interested in the selection of a subset of edges $F\subseteq E$ satisfying the given constraints such that the reliability of the graph $(V,F)$ is maximized. We now proceed to formulate this problem as an MINLP. Let $X_e\in \{0,1\}$ for $e\in E$ be binary variables indicating whether $e\in F$, and let $A X \leq b$ be a given set of arbitrary linear constraints that any valid $X$ must satisfy. These can be, for example, an upper bound on the number of edges to be considered. Let $\mathcal{R}(p)$ be the reliability of network $G$ given the probability vector $p\in[0,1]^E$---each component $p_e$ is the reliability of $e\in E$. We note that if a link $e\in E$ is not considered in $F$, this is equivalent to assuming that its elementary reliability is 0. Then, we can formulate our problem as: \begin{align*} \max &\ \mathcal{R}(p_1X_1,p_2X_2,\ldots ,p_{|E|} X_{|E|})\\ s.t. \, \, & A X \leq b \\ & X_{e} \in \{0,1\} \quad \forall e\in 1\ldots |E| \end{align*} If $G$ is a series-parallel graph, we can apply the reliability-preserving reductions over the composition sequence $\mathcal{S}_G = \left[G_{|E|+i}=G_j+_\odot G_k\right]_{i=1}^{|E|-1}$ that constructs $G$. Following the idea and notation behind Algorithm~\ref{alg:computeReliability}, we define the continuous variables $Y_i\in[0,1]$ and $\Omega_i\in[0,1]$ for each $i=1\ldots 2|E|-1$ to represent the reliability and the correction factor of each step of the sequence. Additionally, we define the continuous variables $\bar\Omega_i\in[0,1]$ to represent the product of the correction factors. 
Using these variables, we can formulate the problem as follows: \begin{subequations}\label{eq:mainmodel} \begin{align} \max\ & R &\label{eq:original_start}\\ A X &\leq b & \label{eq:sideConstraints}\\ Y_i &= p_i X_i & & & i=1\ldots |E| \label{eq1:original}\\ \Omega_i &= 1 & & & i=1\ldots |E| \label{eq1:original2}\\ Y_{|E|+i} &= 1-(1-Y_j)(1-Y_k) & & & i=1\ldots |E|-1 : G_{|E|+i} = G_j +_P G_k\label{eq1:parallel}\\ \Omega_{|E|+i} &= 1 & & & i=1\ldots |E|-1 : G_{|E|+i} = G_j +_P G_k\label{eq1:parallel2}\\ Y_{|E|+i} &= \frac{Y_j Y_k}{1-(1-Y_j)(1-Y_k)} & & & i=1\ldots |E|-1 : G_{|E|+i} = G_j +_S G_k \label{eq1:series}\\ \Omega_{|E|+i} &= 1-(1-Y_j)(1-Y_k) & & & i=1\ldots |E|-1 : G_{|E|+i} = G_j +_S G_k \label{eq1:series2}\\ \bar{\Omega}_1 &= \Omega_1 & \label{eq1:cumulative}\\ \bar{\Omega}_i &= \bar{\Omega}_{i-1} \cdot \Omega_i & & & i=2\ldots 2|E|-1 \label{eq1:cumulative2}\\ R &= Y_{2|E|-1} \cdot \bar{\Omega}_{2|E|-1} & \label{eq1:objetive}\\ Y_i, \Omega_i, \bar{\Omega}_i&\in[0,1] & & & i=1\ldots 2|E|-1\\ X_i &\in \{0,1\} && & i=1\ldots |E|\label{eq:original_finish} \end{align} \end{subequations} Constraints~\eqref{eq1:original}-\eqref{eq1:original2} correspond to the main decision variables, indicating whether an edge $e$ is considered in the subgraph $F$; if it is not, edge $e$ has elementary reliability equal to zero. Constraints~\eqref{eq1:parallel}-\eqref{eq1:parallel2} model a parallel reduction $G_{|E|+i} = G_j +_P G_k$, in which case the new edge has reliability $1-(1-Y_j)(1-Y_k)$ and there is no reliability correction factor ($\Omega_{|E|+i}=1$). Constraints~\eqref{eq1:series}-\eqref{eq1:series2} model a series reduction, where the new edge has reliability $\frac{Y_j Y_k}{1-(1-Y_j)(1-Y_k)}$ and the reliability correction factor is $\Omega_{|E|+i}=1-(1-Y_j)(1-Y_k)$. 
Constraints~\eqref{eq1:cumulative}-\eqref{eq1:cumulative2} ensure that $\bar{\Omega}_i$ is equal to the cumulative product of the factors $\Omega_i$, that is, $\bar{\Omega}_i = \prod_{j\leq i} \Omega_j$. Finally, constraint~\eqref{eq1:objetive} provides the reliability of the graph $R$, which is the operational probability after the last reduction, i.e., when $G$ has been reduced to a single edge. Note that this problem can be seen as a \emph{mixed-integer quadratically constrained program} (MIQCP); using~\eqref{eq1:series2}, constraint~\eqref{eq1:series} can also be written as $Y_{|E|+i}\cdot \Omega_{|E|+i} = Y_j\cdot Y_k$. However, all quadratic constraints are nonconvex, and thus this model can be challenging for most current nonlinear optimization solvers. \subsection{Convex envelopes for the problem}\label{sec:envelopes} The main issue with model \eqref{eq:mainmodel} is that the resulting constraints involve nonconvex and nonconcave functions. In fact, the three bivariate functions $f_1(x,y)=xy$, $f_2(x,y) = 1-(1-x)(1-y)$ and $f_3(x,y)=\tfrac{xy}{1-(1-x)(1-y)}$ appearing in \eqref{eq:mainmodel} are neither convex nor concave\footnote{Without loss of generality, we assume that $f_3(0,0) = \lim_{(x,y)\rightarrow (0,0)} f_3(x,y) = 0$.}. One common approach for generating a (possibly strong) convex relaxation is to use the \emph{envelopes} of these functions. The concave envelope of $f(x,y)$ over a given domain $D$ is the smallest concave overestimator $\ave{f}(x,y)\geq f(x,y)$ for all $(x,y)\in D$ and can be used to relax a constraint of type $f(x,y)\geq z$ with a convex constraint $\ave{f}(x,y)\geq z$. Similarly, the convex envelope of $f(x,y)$ is the largest convex underestimator $\vex{f}(x,y)\leq f(x,y)$ for all $(x,y)\in D$ and can be used to relax a constraint $f(x,y)\leq z$. In our setting, this implies that we can relax an equality constraint $z=f_i(x,y)$ with two convex constraints $ \vex{f}_i(x,y) \leq z \leq \ave{f}_i(x,y)$. 
Since our optimization problem only considers equality constraints, in principle, we should aim at computing both convex and concave envelopes. However, due to the structure of our problem, constraints $\vex{f}_i(x,y) \leq z$ are not necessary. We show below that $\ave{f}_i(x,y)$ are all increasing functions in both variables, and considering that we are maximizing reliability, along with the simple structure of our constraints, $z \leq \ave{f}_i(x,y)$ will always be active in an optimal solution of the resulting convex relaxation. This is expected, as $f_1(x,y), f_2(x,y)$ and $f_3(x,y)$ are increasing functions in both variables in the square $[0,1]\times[0,1]$. For the case of $f_1(x,y)=x\cdot y$, its envelopes are well-known and can be obtained by the McCormick envelopes~\cite{mccormick1976computability}. Let us assume that $x,y\in [0,1]$, and let $L_x,L_y$ and $U_x,U_y$ be lower and upper bounds for $x$ and $y$; then, the concave envelope of $f_1(x,y)$ is given by: \begin{equation} f_1(x,y) = x\cdot y \leq \begin{cases} U_x \cdot y + x\cdot L_y - U_x\cdot L_y & \text{if } y - L_y \leq \frac{U_y-L_y}{U_x-L_x} \cdot (x - L_x) \\ x \cdot U_y + L_x\cdot y - L_x \cdot U_y & \text{if not} \end{cases} \end{equation} This envelope is increasing in both variables when the variable bounds are nonnegative. For the case of $f_2(x,y)=1-(1-x)(1-y) = x + y - x\cdot y$, its concave envelope can be obtained using the convex envelope of $f_1(x,y)$, resulting in the following piecewise linear function: \begin{equation} f_2(x,y) = 1-(1-x)(1-y) \leq \begin{cases} x \cdot (1-L_y) + y\cdot (1-L_x) + L_x L_y & \text{if }U_y - y \geq \frac{U_y-L_y}{U_x-L_x} \cdot (x - L_x)\\ x \cdot (1-U_y) + y\cdot (1-U_x) + U_x U_y & \text{if not} \end{cases} \end{equation} Since in our case both variable bounds are less than 1, this envelope is also increasing in both variables. For the case of $f_3(x,y)=\tfrac{x\cdot y}{x+y-xy}$, an explicit formula for its concave envelope is not known. 
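Both piecewise-linear envelopes above are simply the pointwise minimum of two overestimating planes, which makes them easy to check numerically. The following Python sketch (illustrative only; the function names are our own) evaluates the concave envelopes of $f_1$ and $f_2$ and verifies overestimation on a grid of the unit box, with equality at the four corners.

```python
def ave_f1(x, y, Lx, Ux, Ly, Uy):
    """Concave (McCormick) envelope of f1(x, y) = x*y over the box
    [Lx, Ux] x [Ly, Uy]: the minimum of the two overestimating planes."""
    return min(Ux * y + Ly * x - Ux * Ly,
               Uy * x + Lx * y - Lx * Uy)

def ave_f2(x, y, Lx, Ux, Ly, Uy):
    """Concave envelope of f2(x, y) = x + y - x*y over the same box."""
    return min((1 - Ly) * x + (1 - Lx) * y + Lx * Ly,
               (1 - Uy) * x + (1 - Ux) * y + Ux * Uy)

# Overestimation everywhere on the unit box ...
grid = [i / 10 for i in range(11)]
for x in grid:
    for y in grid:
        assert ave_f1(x, y, 0, 1, 0, 1) >= x * y - 1e-12
        assert ave_f2(x, y, 0, 1, 0, 1) >= x + y - x * y - 1e-12
# ... and tightness at the corners, as an envelope must satisfy.
for x in (0, 1):
    for y in (0, 1):
        assert abs(ave_f1(x, y, 0, 1, 0, 1) - x * y) < 1e-12
        assert abs(ave_f2(x, y, 0, 1, 0, 1) - (x + y - x * y)) < 1e-12
```

Writing the envelopes as a `min` of planes avoids having to track the case condition explicitly, which is also how they would typically enter an optimization model (as two separate `<=` constraints).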
Here, we provide its concave envelope for the case of $L_x=L_y=0$. \begin{theorem}\label{thm:main} The concave envelope of $f_3(x,y)=\tfrac{x\cdot y}{x+y-xy}$ for $(x,y)\in[0, U_x]\times [0, U_y]$ for $U_x,U_y\leq 1$ is given by: \begin{equation} \ave{f}_3(x,y) = \begin{cases} \frac{x\cdot y}{x+y-U_x\cdot y} & \text{if } x/U_x \geq y/U_y \\ \frac{x\cdot y}{x+y-x\cdot U_y} & \text{if not} \end{cases} \end{equation} \end{theorem} \begin{proof} It is easy to see that $\ave{f}_3(x,y) \geq f_3(x,y)$. On the other hand, note that for $(x,y)\in [0,U_x]\times[0,U_y]$, $$\ave{f}_3(x,y)= \min\left\{\frac{x\cdot y}{x+y- U_x y}, \frac{x\cdot y}{x+y-x U_y}\right\}$$ The Hessian matrix of $\frac{x\cdot y}{x+y- U_x y}$ is: \[ \mathbf{H} \left(\frac{x\cdot y}{x+y-U_x\cdot y}\right) = \frac{2 (1-U_x)}{(x+y-U_x\cdot y)^3} \begin{bmatrix} -y^2 & x y \\ x y & -x^2 \end{bmatrix} \] which is a negative semidefinite matrix, and thus it is a concave function in $ [0,U_x]\times[0,U_y]$. The same result can be obtained for $\frac{x\cdot y}{x+y-x\cdot U_y}$ by exchanging $x$ and $y$. Since $\ave{f}_3(x,y)$ is the minimum of these two concave functions, we conclude that $\ave{f}_3(x,y)$ is concave over $[0,U_x]\times[0,U_y]$. It remains to show that $\ave{f}_3(x,y)$ is the \emph{smallest} concave overestimator of $f_3(x,y)$. We note that if $y=\lambda x$, then the function $f_3(x,\lambda x) = \tfrac{\lambda x}{1+\lambda-\lambda x}$ is a convex function in $x$. In fact, $\tfrac{\partial^2 f_3(x,\lambda x)}{\partial x^2} = 2\lambda^2(1+\lambda) / (1+\lambda-\lambda x)^3$, which is positive for any $\lambda>0$. Therefore, the best possible concave overestimator over the line $y=\lambda x$ is given by the linear function interpolating the origin and the intersection of $(x,\lambda x)$ with either $x=U_x$ or $y=U_y$. We show that $\ave{f}_3(x,y)$ is a function satisfying this condition. 
In fact, if $U_y/U_x \geq \lambda$, then $y=\lambda x$ intersects first with $x=U_x$ and $\ave{f}_3(x,\lambda x) = \tfrac{\lambda x}{1+\lambda - \lambda U_x}$, which is a linear function, and $f_3(U_x,\lambda U_x) = \ave{f}_3(U_x,\lambda U_x)$. Otherwise, if $U_y/U_x \leq \lambda$, then $y=\lambda x$ intersects first with $y=U_y$ and $\ave{f}_3(x,\lambda x) = \tfrac{\lambda x}{1+\lambda - U_y}$, which is a linear function such that $f_3(U_y/\lambda, U_y) = \ave{f}_3(U_y/\lambda, U_y)$. \qed\end{proof} This idea of exploiting the convexity of $f_3$ over the rays $f_3(x,\lambda x)$ can also be extended to find concave envelopes for other functions satisfying this property; this is further elaborated in parallel work~\cite{barrera2021convex}. Note that, as anticipated, $\ave{f}_3(x,y)$ is an increasing function in both variables in $[0,U_x]\times[0,U_y]$; it can be easily verified that \[\nabla \left(\frac{x\cdot y}{x+y-U_x\cdot y}\right) =\frac{1}{(-U_x y+x+y)^2} \left((1-U_x) y^2,x^2\right) \] which is a nonnegative vector whenever $U_x\leq 1$. The other part of the definition of $\ave{f}_3(x,y)$ can be verified similarly. In Figure \ref{fig:threeave}, we show the three concave envelopes we have discussed in this section. Using these envelopes, we can formulate a mixed-integer convex nonlinear problem that can be solved more efficiently than the original model. This provides a tractable overestimation of the reliability of the resulting graph. 
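The two properties established above, overestimation everywhere and tightness along the upper-boundary rays, can be checked numerically. The following Python sketch (illustrative only; function names are our own) evaluates $\ave{f}_3$ from Theorem~\ref{thm:main} and verifies both properties on a sample box with $U_x=0.9$, $U_y=0.8$.

```python
def f3(x, y):
    # f3(x, y) = x*y / (x + y - x*y), with f3(0, 0) := 0 by continuity
    return x * y / (x + y - x * y) if x + y > 0 else 0.0

def ave_f3(x, y, Ux, Uy):
    """Concave envelope of f3 over [0, Ux] x [0, Uy] (Theorem 1),
    assuming Ux, Uy <= 1."""
    if x == 0 and y == 0:
        return 0.0
    if x * Uy >= y * Ux:               # i.e., x/Ux >= y/Uy
        return x * y / (x + y - Ux * y)
    return x * y / (x + y - x * Uy)

Ux, Uy = 0.9, 0.8
for i in range(21):
    for j in range(21):
        x, y = i / 20 * Ux, j / 20 * Uy
        # overestimation everywhere on the box ...
        assert ave_f3(x, y, Ux, Uy) >= f3(x, y) - 1e-12
# ... and exactness on the upper boundary, e.g. on the face x = Ux:
assert abs(ave_f3(Ux, 0.3, Ux, Uy) - f3(Ux, 0.3)) < 1e-12
```

Note that on each ray $y=\lambda x$ the active branch of `ave_f3` is linear in $x$, which is exactly the chord property used in the proof.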
\begin{figure}[tbp] \centering \includegraphics[height=3.5cm]{f1.pdf}% \includegraphics[height=3.5cm]{f2.pdf}% \includegraphics[height=3.5cm]{f3.pdf} \caption{Concave envelopes for $x\cdot y$, $x+y-x\cdot y$ and $\tfrac{xy}{x+y-x\cdot y}$ for $0\leq x \leq 1$ and $0\leq y \leq 1$.} \label{fig:threeave} \end{figure} \subsection{A mixed-integer convex approximation}\label{sec:mip} Using the concave envelopes of the bivariate functions resulting from the series and parallel reductions, we can replace the corresponding constraints in model \eqref{eq:original_start}-\eqref{eq:original_finish} and obtain the following mixed-integer convex optimization approximation model: \begin{subequations}\label{eq:convexrelax} \begin{align} & \max\ R & \\ & A X \leq b & \\ Y_i &= p_i X_i, \quad \Omega_i=1,\quad \bar\Omega_i=1 & & & i=1\ldots |E| \\ Y_{|E|+i} &\leq Y_j (1-U_k)+ (1-U_j) Y_k + U_j U_k & & & i=1\ldots |E|-1: G_{|E|+i} = G_j +_P G_k \label{eq:final:parallel1}\\ Y_{|E|+i} &\leq Y_j (1-L_k) +(1-L_j) Y_k + L_j L_k& & & i=1\ldots |E|-1: G_{|E|+i} = G_j +_P G_k \label{eq:final:parallel2}\\ \Omega_{|E|+i} &= 1,\quad \bar\Omega_{|E|+i} = \bar\Omega_{|E|+i-1} & & & i=1\ldots |E|-1: G_{|E|+i} = G_j +_P G_k \\ Y_{|E|+i} &\leq \begin{cases} \frac{Y_j Y_k}{Y_j+Y_k - U_j Y_k} & \text{if } Y_j/U_j \geq Y_k/U_k \\ \frac{Y_j Y_k}{Y_j+Y_k - Y_j U_k} & \text{if not} \end{cases} & && i=1\ldots |E|-1: G_{|E|+i} = G_j +_S G_k \label{eq:final:concave}\\ \Omega_{|E|+i} &\leq Y_j (1-U_k)+ (1-U_j) Y_k + U_j U_k &&& i=1\ldots |E|-1: G_{|E|+i} = G_j +_S G_k \\ \Omega_{|E|+i} &\leq Y_j (1-L_k)+ (1-L_j) Y_k + L_j L_k &&& i=1\ldots |E|-1: G_{|E|+i} = G_j +_S G_k \\ \bar\Omega_{|E|+i} &\leq U^{\bar\Omega}_{|E|+i-1}\cdot \Omega_{|E|+i} + \bar\Omega_{|E|+i-1}\cdot L^\Omega_{|E|+i} - U^{\bar\Omega}_{|E|+i-1}\cdot L^\Omega_{|E|+i} & && i=1\ldots |E|-1: G_{|E|+i} = G_j +_S G_k \\ \bar\Omega_{|E|+i} &\leq L^{\bar\Omega}_{|E|+i-1}\cdot \Omega_{|E|+i} + \bar\Omega_{|E|+i-1}\cdot U^\Omega_{|E|+i} - L^{\bar\Omega}_{|E|+i-1}\cdot U^\Omega_{|E|+i} & && i=1\ldots |E|-1: G_{|E|+i} = G_j +_S G_k \\ R &\leq 
U_{2|E|-1}\cdot \bar\Omega_{2|E|-1} + Y_{2|E|-1}\cdot L^{\bar\Omega}_{2|E|-1} - U_{2|E|-1}\cdot L^{\bar\Omega}_{2|E|-1} & && \\ R &\leq L_{2|E|-1}\cdot \bar\Omega_{2|E|-1} + Y_{2|E|-1}\cdot U^{\bar\Omega}_{2|E|-1} - L_{2|E|-1}\cdot U^{\bar\Omega}_{2|E|-1} & && \\ Y_i&, \Omega_i, \bar{\Omega}_i\in[0,1] & & & i=1\ldots 2|E|-1\\ X_i &\in \{0,1\} &&& i=1\ldots |E| \end{align} \end{subequations} where the constants $L_i$ and $U_i$ are valid lower and upper bounds for $Y_i$, and $L^\Omega_i$, $U^\Omega_i$, $L^{\bar\Omega}_i$ and $U^{\bar\Omega}_i$ are valid lower and upper bounds for the variables $\Omega_i$ and $\bar\Omega_i$, respectively. These bounds can be precomputed by assigning $U_i = p_i$ and $L_i=0$ for all $i\in 1\ldots |E|$ and then applying the corresponding functions $f_1$, $f_2$ or $f_3$ to these bounds for each series or parallel composition in $\mathcal{S}_G$. All constraints in the previous model are linear, except for inequality~\eqref{eq:final:concave}. However, since $\ave{f}_3(x,y)$ is concave, we can enforce this nonlinear constraint with linear constraints given by its tangent hyperplanes. Given a point $(x^*,y^*)$, we upper bound $\ave{f}_3(x,y)$ by the linear function $\ave{f}_3(x^*,y^*) + \frac{\partial\ave{f}_3}{\partial x}(x^*,y^*) \cdot (x-x^*) + \frac{\partial\ave{f}_3}{\partial y}(x^*,y^*) \cdot (y-y^*)$; since $\ave{f}_3$ is positively homogeneous of degree one, its tangent planes pass through the origin and the constant terms cancel, which gives \begin{align} \ave{f}_3(x,y)&\leq \begin{cases} \left(\frac{y^*}{x^*+y^*-U_x\cdot y^*}\right)^2 \cdot (1-U_x)\cdot x + \left(\frac{x^*}{x^*+y^*-U_xy^*}\right)^2 \cdot y & \text{if } x/U_x \geq y/U_y \\ \left(\frac{y^*}{x^*+y^*-x^*U_y}\right)^2 \cdot x + \left(\frac{x^*}{x^*+y^*-x^*U_y}\right)^2 \cdot (1-U_y) \cdot y & \text{if not} \end{cases} \label{eq:cuts} \end{align} These linear constraints can be added dynamically to optimization solvers during their optimization procedures. 
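The tangent cuts~\eqref{eq:cuts} are straightforward to compute. The following Python sketch (illustrative only; the helper name `cut_f3` is our own) builds the cut coefficients for the first branch of $\ave{f}_3$ at a chosen linearization point and checks that the cut touches the envelope at that point and overestimates $f_3$ on the region where that branch is active.

```python
def f3(x, y):
    return x * y / (x + y - x * y) if x + y > 0 else 0.0

def cut_f3(xs, ys, Ux):
    """Coefficients (a, b) of the tangent-plane cut z <= a*x + b*y for the
    branch g(x, y) = x*y / (x + y - Ux*y) of the concave envelope,
    linearized at (xs, ys).  Because g is positively homogeneous of
    degree one, its tangent plane passes through the origin."""
    d = xs + ys - Ux * ys
    return (ys / d) ** 2 * (1 - Ux), (xs / d) ** 2

Ux, Uy = 0.9, 0.8
a, b = cut_f3(0.5, 0.4, Ux)
# The cut touches the envelope branch at the linearization point ...
assert abs(a * 0.5 + b * 0.4 - 0.2 / 0.54) < 1e-12
# ... and overestimates f3 wherever this branch is the active one.
for i in range(21):
    for j in range(21):
        x, y = i / 20 * Ux, j / 20 * Uy
        if x * Uy >= y * Ux:           # region x/Ux >= y/Uy
            assert a * x + b * y >= f3(x, y) - 1e-12
```

In a solver, cuts of this form would be separated at the incumbent relaxation point whenever constraint~\eqref{eq:final:concave} is violated, exactly as the dynamic addition described above.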
\subsection{Computational experiments}\label{sec:results} \subsubsection{Instances and implementation} To test the effectiveness of our methodology, we generated random series-parallel graphs with $|E|=50$ and $|E|=100$ edges in the following way. The elementary reliabilities $p_e$ are generated uniformly at random between $0.9$ and $1.0$ for each edge $e$. To generate each instance, we start with a perfect bipartite matching. Clearly, the number of connected components in $G$ is initially $|E|$. Then, we iteratively select two components uniformly at random and connect them either in series or parallel (with equal probability), which diminishes the number of connected components in $G$ by one. The process is repeated until the resulting graph is connected. This procedure is detailed in Algorithm~\ref{alg:genGraphs}. For the additional constraints (Eq.~\ref{eq:sideConstraints}), we impose a cardinality constraint that only an $\alpha$ fraction of the edges can be selected: $\sum_{e\in E} X_e \leq \alpha |E|$. Low values of $\alpha$ ($\alpha < 0.5$) lead to either infeasible problems or a reduced set of feasible solutions, which makes the problem easy to solve. Similarly, high values of $\alpha$ ($\alpha > 0.8$) encourage the solution to include most of the edges, also making the problem easy to solve. For these reasons, in our experiments, we use $\alpha=0.8$, which is a high value where our models behave very well, and $\alpha=0.6$, which yields the most challenging instances of our problem. In our computational experiments, we compare two different configurations: \begin{enumerate} \item[Convex envelope cuts:] The model described in Section~\ref{sec:mip}, including the cuts from \eqref{eq:cuts} to approximate our concave envelope from Theorem~\ref{thm:main}. \item[Without cuts:] The model from Section~\ref{sec:mip}, considering a simpler constraint to bound the nonlinear concave constraint~\eqref{eq:final:concave}, which we describe next.
\end{enumerate} The second configuration is constructed to understand the effectiveness of the concave approximation provided in Theorem~\ref{thm:main} for series compositions. This simpler general approximation for $f_3(x,y)$ is given by constraints $f_3(x,y) \leq x$ and $f_3(x,y) \leq y$; these constraints are obtained when replacing either $x$ or $y$ in the denominator of $f_3(x,y)$ with 1 or by replacing $U_x$ or $U_y$ with 1 in $\ave{f}_3(x,y)$. This corresponds to the tangent hyperplanes of $\ave{f}_3(x,y)$ at $(x,y) = (0,U_y)$ and $(x,y)=(U_x,0)$. \begin{algorithm} \caption{Generation of random instances of size $k$}\label{alg:genGraphs} \begin{algorithmic} \State $E \gets \{e_1,\ldots e_k\}$ (disjoint edges) \State $p_e \gets U[0.9,1]$ for all $e\in E$ \Comment{Random initial reliabilities} \State $\mathcal{G} \gets E$ \While{$|ConnectedComponents(\mathcal{G})|> 1$} \State $g_1,g_2 \gets$ random subgraphs from $\mathcal{G}$ (chosen uniformly at random) \State $u \gets U[0,1]$ \If {$u<0.5$} \State $g = g_1 +_P g_2$ \Comment{Parallel connection} \Else \State $g = g_1 +_S g_2$ \Comment{Series connection} \EndIf \State $\mathcal{G} \gets \mathcal{G} \setminus \{g_1 \cup g_2\} \cup g$ \Comment{Replace $g_1$ and $g_2$ in $\mathcal{G}$ by $g$} \EndWhile \end{algorithmic} \end{algorithm} These models were implemented with Python 3.7 using the IBM\textsuperscript{\tiny\textregistered} Decision Optimization CPLEX\textsuperscript{\tiny\textregistered} Modeling for Python (DOcplex.MP) v2.11 of CPLEX Studio v12.10~\cite{docplex}. All CPLEX parameters have their default values, and no cut manager was implemented for the additional cuts~\eqref{eq:cuts}. All computations were made on Linux machines with x86\_64 architecture, using a single thread.
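Algorithm~\ref{alg:genGraphs} can be sketched in a few lines of Python. In this illustrative sketch (the names and the nested-tuple encoding are ours), a component is represented by its series-parallel composition tree, so the returned tree also records the sequence of reductions applied:

```python
import random

def generate_instance(k, seed=0):
    """Random series-parallel instance with k edges, mirroring Algorithm
    genGraphs. A component is either ('e', i) for elementary edge i, or
    (op, left, right) with op in {'S', 'P'} for a series/parallel
    composition of two components."""
    rng = random.Random(seed)
    p = [rng.uniform(0.9, 1.0) for _ in range(k)]   # elementary reliabilities
    components = [('e', i) for i in range(k)]       # k disjoint edges
    while len(components) > 1:                      # one merge per iteration
        g1, g2 = rng.sample(components, 2)          # two components, u.a.r.
        components.remove(g1)
        components.remove(g2)
        op = 'P' if rng.random() < 0.5 else 'S'     # equal probability
        components.append((op, g1, g2))
    return p, components[0]                         # reliabilities + tree
```

Each merge reduces the number of components by one, so the loop runs exactly $k-1$ times and the final tree has $2k-1$ nodes, matching the $2|E|-1$ variables indexed by the model.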
\subsubsection{Computational results} \label{sec:firstexperiments} \begin{figure}[htbp] \centering \includegraphics[height=4.5cm]{fig_rep_vs_true_point.pdf}~% \includegraphics[height=4.5cm]{fig_rep_vs_optimal_point.pdf} \caption{Comparison of the reported objective value with the true reliability (left) and the optimal reliability (right) of the resulting network}\label{fig:results} \end{figure} Figure~\ref{fig:results} (left) compares the objective value reported by our model (x-axis) for each instance versus the true reliability of the resulting solution for the two configurations described. Similarly, Figure~\ref{fig:results} (right) compares the objective value reported versus the true optimal solution of the problem, for the cases where the latter is known.\footnote{True optimal solutions are obtained using the results of the following sections but presented here for illustrative purposes.} Each subfigure is divided into four, depending on the number of edges (horizontal) and the budget (vertical) of the problem. In these experiments, we can see that both configurations behave extremely well for a budget of $\alpha=0.8$: the resulting objective value of the model is not only very close to the real reliability of the solution but also very close to the true optimal solution of the problem. These relative errors are less than $1\%$ for all instances and configurations evaluated. For $\alpha=0.6$, this behavior changes drastically. The reported objective value of the problem (an upper bound on the reliability) differs considerably for many instances in both configurations. Moreover, it seems that this effect is more pronounced for lower reliabilities.
A similar effect is observed when we compare the reported reliability with the optimal reliability of the problem.

These experiments show that, in challenging instances, the overestimation of the reliability provided by the concave envelopes alone is not good enough for estimating the true reliability of the resulting graphs, leading to suboptimal solutions for our problem. Moreover, they indicate that little is gained from using cutting planes to iteratively approximate $\ave{f}_3$, in comparison to its simple approximation based on two tangent hyperplanes, at least in this setting. In the next section, we study different improvements that avoid these issues and in which the cutting planes associated with $\ave{f}_3$ have considerably more impact. These improvements lead the model to the true optimal solution in most instances. \subsection{Further improvements to the model}\label{improvements} \subsubsection{Providing the true reliability of the solution}\label{sec:trueReliability} The previous model considers overestimators of the nonlinear functions, which can lead to overestimations of the true reliability of the resulting network. However, based on the selected edges of the graph (the values of the $X$ variables), we can compute the ``true'' reliability of the resulting graph, and we can improve our model by introducing \emph{combinatorial Benders cuts}~\cite{codato2006combinatorial} to avoid this problem.
Specifically, given a feasible solution $\hat{X}$ we can compute the resulting reliability $\mathcal{R}_{\hat{X}}=\mathcal{R}(p_1\hat{X}_1,\ldots,p_{|E|}\hat{X}_{|E|})$ in linear time using Algorithm~\ref{alg:computeReliability} and obtain the following valid inequality---a combinatorial Benders cut associated with $\hat{X}$---that bounds the value of $R$ \begin{equation} R \leq \mathcal{R}_{\hat{X}} + \sum_{e\in E : \hat{X}_e=1} (1-X_e) + \sum_{e\in E : \hat{X}_e=0} X_e \end{equation} This constraint implies that if $\hat{X}$ is the optimal solution to the problem, then $R=\mathcal{R}_{\hat{X}}$. For other feasible solutions $X^*\neq \hat{X}$, this constraint is trivially satisfied because $R \leq 1$. We can strengthen this inequality using the following observation. Note that the all-terminal reliability of a graph $\mathcal{R}(p_1X_1,p_2X_2,\ldots ,p_{|E|} X_{|E|})$ only increases when a new edge is added. In other words, the reliability of any solution $X$ such that $\hat{X}_e = 0 \Rightarrow X_e = 0$ must be smaller than that of $\hat{X}$. Therefore, the combinatorial cuts can be strengthened to \begin{equation} R \leq \mathcal{R}_{\hat{X}} + \sum_{e\in E : \hat{X}_e=0} X_e. \label{eq:CombinatorialCut} \end{equation} By the same reasoning, if a feasible solution of the problem contains all edges selected in $\hat{X}$, then its reliability will be at least $\mathcal{R}_{\hat{X}}$. Hence, we can enforce this lower bound for the reliability by adding the following constraint each time that an incumbent solution $\hat{X}$ has been found: \begin{equation} R \geq \mathcal{R}_{\hat{X}} - \sum_{e\in E : \hat{X}_e=1} (1-X_e). \label{eq:CombinatorialCutLB} \end{equation} This reasoning can also be expanded to other variables. The monotonicity exhibited by the reliability function also holds for $Y_i$, $\Omega_i$ and $\bar{\Omega}_i$ as functions of $X$, because all three functions $f_1(x,y)$, $f_2(x,y)$ and $f_3(x,y)$ are nondecreasing in both dimensions. 
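As linear expressions in $X$, the cuts \eqref{eq:CombinatorialCut} and \eqref{eq:CombinatorialCutLB} are straightforward to generate inside a solver callback once $\mathcal{R}_{\hat{X}}$ has been computed. A minimal sketch (names are illustrative, not from our implementation):

```python
def combinatorial_cuts(x_hat, r_hat):
    """Given an incumbent x_hat (0/1 list) with true reliability r_hat,
    return evaluators of the right-hand sides of the strengthened cuts:
      upper:  R <= r_hat + sum_{e : x_hat_e = 0} X_e
      lower:  R >= r_hat - sum_{e : x_hat_e = 1} (1 - X_e)"""
    zeros = [e for e, v in enumerate(x_hat) if v == 0]
    ones = [e for e, v in enumerate(x_hat) if v == 1]

    def upper(X):   # bound relaxes by 1 per edge added outside supp(x_hat)
        return r_hat + sum(X[e] for e in zeros)

    def lower(X):   # bound relaxes by 1 per edge of x_hat that is dropped
        return r_hat - sum(1 - X[e] for e in ones)

    return upper, lower
```

At $X=\hat{X}$ both bounds collapse to $R=\mathcal{R}_{\hat{X}}$; solutions adding edges to $\hat{X}$ relax only the upper cut (their reliability may exceed $\mathcal{R}_{\hat{X}}$), while solutions dropping edges relax only the lower cut.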
Therefore, given a solution $\hat{X}$, we can apply Algorithm~\ref{alg:computeReliability} to compute the values of variables $Y$, $\Omega$ and $\bar{\Omega}$ associated with this solution and derive similar cuts for all variables of the problem each time that a new incumbent solution $\hat{X}$ is found during the branch-and-bound process. \begin{remark} Including these inequalities during a branch-and-bound procedure ensures that we obtain an optimal solution to the original problem \eqref{eq:mainmodel} at the end of the optimization routine, unless a time limit is reached. Whenever a feasible solution $\hat{X}$ to the relaxation is found, the combinatorial Benders cut ensures that $R=\mathcal{R}_{\hat{X}}$ if $\hat{X}$ is reported optimal. The resulting optimization routine might be impractical though, as it might resort to a costly enumeration if the relaxation \eqref{eq:convexrelax} is not tight enough. The next improvements aim at better approximating the problem and making the tree-search more efficient. \end{remark} \subsubsection{Improving inequalities on the branch-and-bound tree}\label{sec:dynamicCuts} Our proposed model considers the best possible concave overestimators for each function. These envelopes depend on the lower and upper bounds for each variable, and even if these bounds are tight, the envelopes may not provide a tight approximation of the functions in the whole feasible region. Nevertheless, the branch-and-bound procedure of MILP solvers is based on imposing new bounds on the variables while branching and thus improving the relaxations in each node. These bounds are only valid locally, but we can use them to obtain the concave envelopes based on these new bounds, which yields better local approximations of each function. If the branch-and-bound process fixes a variable $X_e$ to 0 or 1, we can propagate this decision to improve the lower and upper bounds for all variables corresponding to reductions that include this edge. 
For instance, if the fixed variable is $X_j=1$, then $Y_j=p_j$. Thus, for the case of a parallel reduction, we can improve \eqref{eq:final:parallel1} and \eqref{eq:final:parallel2} by adding the local linear constraint $Y_i = 1-(1-p_j)(1-Y_k)$. In the case of a series reduction, we can improve \eqref{eq:final:concave} and its associated cuts \eqref{eq:cuts} by adding the inequality \begin{equation} Y_i \leq \left(\frac{p_j}{Y_k^*+p_j-Y^*_k\cdot p_j}\right)^2 \cdot (Y_k-Y_k^*) + \frac{p_j Y_k^*}{Y_k^*+p_j-Y^*_k\cdot p_j} \label{eq:convcutImproved} \end{equation} where $Y_k^*$ is the current solution of variable $Y_k$ at the node of the branch-and-bound tree. This is possible because the function $f_3(p,y)$ is concave on $y$ for any fixed value of $p$, so \eqref{eq:convcutImproved} corresponds to the gradient of this function on $y=Y_k^*$. Similar improvements can be included for the remaining equations involving $\Omega_i$ and $\bar{\Omega}_i$. Note that our concave envelopes for series reduction (Theorem~\ref{thm:main}) only apply when the lower bounds are equal to $0$, so this approximation cannot be improved if the lower bounds are improved. However, we can still add linear constraints in this case: since $Y_i=f_3(Y_j,Y_k)$ is increasing and concave for a fixed $Y_j$ or $Y_k$, we can overestimate this function by two hyperplanes tangent to the point $(L_j,L_k)$. \begin{align*} Y_i &\leq \left.\frac{\partial f_3}{\partial x}\right|_{(L_j,L_k)} \cdot (Y_j-L_j) + \left.\frac{\partial f_3}{\partial y}\right|_{(U_j,L_k)} \cdot (Y_k-L_k) + f_3(L_j,L_k) \\ &= \frac{L_k^2}{(L_j + L_k - L_j L_k)^2}\cdot (Y_j-L_j) + \frac{U_j^2}{(U_j + L_k - U_j L_k)^2} \cdot(Y_k-L_k) + \frac{L_j L_k}{L_j+L_k-L_jL_k} \end{align*} To see that these cuts are valid, note that function $f_3(x,y)$ is concave on $x$ for a fixed $y$, so in particular, this is a valid upper bound for $y=L_k$. 
On the other hand, the partial derivative $\frac{\partial f_3}{\partial y}$ is increasing with respect to $x$, attaining its maximum value at $x=U_j$. Since $f_3$ is also concave for a fixed $x$, it is a valid bound at $x=U_j$ and hence for all $(x,y)\in [L_j,U_j]\times [L_k,U_k]$. Interchanging the roles of $x$ and $y$ in $f_3(x,y)$, we can also bound $Y_i$ for a series composition by \begin{align*} Y_i &\leq \left.\frac{\partial f_3}{\partial x}\right|_{(L_j,U_k)} \cdot (Y_j-L_j) + \left.\frac{\partial f_3}{\partial y}\right|_{(L_j,L_k)} \cdot (Y_k-L_k) + f_3(L_j,L_k) \\ &= \frac{U_k^2}{(L_j + U_k - L_j U_k)^2}\cdot (Y_j-L_j) + \frac{L_j^2}{(L_j + L_k - L_j L_k)^2}\cdot (Y_k-L_k) + \frac{L_j L_k}{L_j+L_k-L_jL_k} \end{align*} See Figure~\ref{fig:envelopeImproved} for an example of how these hyperplanes improve the overestimation of $f_3$ at $(L_j,L_k)$. \begin{figure}[tbp] \centering \includegraphics[height=3.5cm]{f3p1.pdf} \hspace{1cm}% \includegraphics[height=3.5cm]{f3p2.pdf} \caption{Function $\tfrac{xy}{x+y-x\cdot y}$ for $0.3\leq x \leq 0.8$ and $0.4\leq y \leq 0.9$ (in orange), its concave envelope over $[0,0.8]\times [0,0.9]$ (in blue) and the additional tangent hyperplanes at $(0.3,0.4)$ (in green)}\label{fig:envelopeImproved} \end{figure} \subsection{Computational experiments for the true optimal solution of the problem} In this second set of experiments, we now include the combinatorial Benders cut \eqref{eq:CombinatorialCut} to ensure that the optimal solution provides the true reliability for the problem. We include these cuts for the previous two configurations \textbf{convex envelope cuts} and \textbf{without cuts}, and we add a third configuration: \begin{description} \item[Improved envelope cuts:] The model described in Section~\ref{sec:mip}, including the improvements based on the local bounds provided by the branch-and-bound tree (see Subsection~\ref{sec:dynamicCuts}).
\end{description} Combinatorial cuts for computing the exact reliability (\S\ref{sec:trueReliability}) are implemented as a \texttt{LazyConstraintCallback}, and the improved cuts in branch-and-bound (\S\ref{sec:dynamicCuts}) are implemented as a \texttt{UserCutCallback}. For this set of experiments, we set a time limit of 3 hours for each problem. Additionally, to benchmark our models with other solvers, we solve the original model \eqref{eq:original_start}-\eqref{eq:original_finish} using the MINLP solver SCIP v7.02~\cite{scip} compiled with the parameters for better performance on this kind of nonlinear nonconvex problem. \begin{figure}[htbp] \centering \includegraphics[height=5.5cm]{PP50_time.pdf}~% \includegraphics[height=5.5cm]{PP50_gap.pdf} \caption{Performance profiles of the different configurations ($|E|=50$)}\label{fig:resultsReal50} \end{figure} Figure~\ref{fig:resultsReal50} shows the performance profiles of the different configurations for instances with $|E|=50$ edges. The figure on the left shows the percentage of instances solved up to optimality before a given time ($x$-axis). For the instances that are not solved within the time limit, the figure on the right shows the percentage of instances attaining a given optimality gap (in log scale). Let us first analyze the instances for $\alpha=0.8$. As shown in previous experiments, for these cases, all methods provide a good approximation of the true and optimal reliabilities. Therefore, the configurations without the improved envelope cuts behave well, solving 90\% of the instances within the time limit of 3 hours, and the unsolved instances present a small optimality gap---a 2\% gap in the worst case. However, note that there is a nonnegligible portion of instances that require more than an hour to solve; this indicates that the model is able to find a good solution but that it cannot quickly prove its optimality because it needs to visit a large branch-and-bound tree to discard all other potential solutions.
On the other hand, the improved envelope cuts behave drastically differently, solving all instances in less than one minute. This can be explained by this configuration's ability to locally adapt concave envelopes during the branch-and-bound tree, providing better estimations and thus better bounds, which yield a smaller branch-and-bound tree. This better approximation of the nonconcave functions is even more relevant for $\alpha=0.6$. While the combinatorial Benders cuts help in fixing the mismatch between the reported reliability and the true reliability discussed in Section \ref{sec:firstexperiments}, the weak estimations provided by the global approximation of the functions prevent the solver from finding better solutions and/or proving optimality efficiently: only 14\% of the instances are solved within the time limit when the improved envelope cuts are not included. This number increases to 88\% when these cuts are included, with most of these instances being solved in just a few minutes. \begin{figure}[htbp] \centering \includegraphics[height=5.5cm]{PP100_time.pdf}~% \includegraphics[height=5.5cm]{PP100_gap.pdf} \caption{Performance profiles of the different configurations ($|E|=100$)}\label{fig:resultsReal100} \end{figure} The dominance of the improved envelope cuts also occurs for the instances with $|E|=100$ edges. We present these results in Figure~\ref{fig:resultsReal100}. We first note that the problem is considerably harder to solve. For instance, the configuration \emph{without cuts} can solve only 31\% of the instances for $\alpha=0.8$ within the time limit of three hours. Adding the convex envelopes of Theorem~\ref{thm:main} improves this metric, but only marginally. Nevertheless, the optimality gap obtained by these configurations is good, with more than 95\% of the instances finishing with a gap of less than 1\%. 
In the improved envelope cuts setting, 96\% of the instances are solved for $\alpha=0.8$, and the remaining instances finish with an optimality gap less than $0.1\%$. The problem becomes much more challenging for $\alpha=0.6$ and 100 edges. This is expected: due to the sequential construction process of the graph, the differences between the nonlinear functions and their concave envelopes are propagated into the overall approximation quality and become more relevant when the number of steps in the construction sequence (i.e., the number of edges) is large. In fact, without considering the improved envelope cuts, the solver is not able to solve any instance, and the optimality gaps are substantial for most of the cases. The performance improves when including the improved envelope cuts, resulting in 17\% of the instances being solved and obtaining better optimality gaps for the unsolved instances. \subsubsection{Comparison with MINLP solver} To benchmark the proposed model against current state-of-the-art solvers for nonlinear optimization models, we also solve the problem using the SCIP solver. SCIP is among the best general-purpose solvers that are able to deal with nonconvex constraints. It implements multiple bounding techniques, some of which are similar to those studied in this paper, along with spatial branch-and-bound based on linear outer-approximations of the problem. For more details, see~\cite{vigerske2018scip}. Figures~\ref{fig:resultsReal50}~and~\ref{fig:resultsReal100} show performance profiles of SCIP in comparison with our approach. We can see that for $|E|=50$, SCIP behaves in a similar way to the improved envelope cuts, being slightly slower for $\alpha=0.8$. However, for larger problems with $|E|=100$ edges, SCIP's performance decreases considerably, and it is outperformed by our proposed improved envelope cuts.
SCIP can solve only half of the instances for $\alpha=0.8$ and only 6\% of the instances for $\alpha=0.6$, reaching the time limit with optimality gaps that are worse than the basic configuration without cuts, in most cases. \begin{figure}[htbp] \centering \includegraphics[width=.48\linewidth]{fig_nodes_vs_gap_cuts.pdf}~% \includegraphics[width=.48\linewidth]{fig_nodes_vs_gap_scip.pdf} \caption{Final optimality gaps and number of nodes traversed in the branch-and-bound tree ($|E|=100$, $\alpha=0.6$)}\label{fig:resultNodes} \end{figure} Figure~\ref{fig:resultNodes} shows the optimality gaps (log scale) versus the best-bound objective value obtained by the improved envelope cuts setting and SCIP on the unsolved instances for the case of $100$ edges and $\alpha=0.6$. This figure indicates that the problems become harder when the reliability of the problem is lower: the optimality gaps are large when the objective bound is low. This can be explained because this is the region where the difference between the nonconvex functions and their concave envelopes is largest (recall Figure~\ref{fig:results}), so the approximation is not sufficiently tight to lead the solver to prove the optimality of the solutions. This is also correlated with the number of nodes traversed in the branch-and-bound: the number of branch-and-bound nodes is smaller in harder instances, indicating that the subproblems at each node are harder to solve (probably because they include a larger number of cuts). Interestingly, similar behaviors also occur in SCIP, even if the latter is able to visit 10 times more nodes of the branch-and-bound tree. \section{Conclusions and further extensions}\label{sec:conclusions} We provide an optimization framework to solve network design problems for maximizing the all-terminal reliability on series-parallel graphs when failure probabilities are independent but not identical.
Our approach exploits the use of concave envelopes of the nonconcave functions that can be implemented successfully using current optimization solvers, something that has not been explored thus far in this context. The special properties of the functions that appear in reliability optimization allow us to derive envelopes that can be refined and exploited in the solution process. Computational experiments show that it is highly beneficial to perform such refinements of the concave envelopes along the branch-and-bound process and thus provide better local approximations for the nonlinear functions. If this is not done, the solver faces difficulties in obtaining good solutions or proving optimality. These techniques can be extended to more general contexts of network reliability optimization. For example, similar ideas can be used for $K$-terminal reliabilities, where the functions associated with other reliability-preserving reductions (see \cite{Satyanarayana}) could also be approximated by their concave envelopes in a similar way. However, these concave envelopes are not known and can be difficult to find in closed form. Therefore, developing new techniques, such as those presented in~\cite{barrera2021convex}, that can handle these functions is a promising future direction that can considerably widen the applicability of our proposed framework. Additionally, the use of convex/concave envelopes for reliability optimization can also be applied to more general families of graphs. In fact, the reductions discussed in this paper apply to any graph and allow us to reduce the size of the problem. The smaller problem can be solved using other optimization techniques such as sample average approximation. This appears very promising, in particular for graphs with small treewidth, as recently discussed in~\cite{goharshady2020efficient}. \bibliographystyle{amsplain}
\section{Conclusion}\label{sec-con} In this study, we present a new prediction methodology for time series data, based on option theory in finance when the underlying dynamics is assumed to follow the GBM process. To characterize time-varying patterns, we allow the GBM model parameters to vary over time and update the parameter values using recent observations. We formulate the prediction problem with unequal overestimation and underestimation penalties as a stochastic optimization problem and provide its solution procedure. We demonstrate the prediction capability of the proposed approach in various applications. Our approach appears to work well in the manufacturing application when the order size varies over time. For more highly volatile processes such as stock prices and wind speeds, the proposed model exhibits much stronger prediction capability compared to alternative ARIMA-based models. In the future, we plan to investigate other parameter updating schemes. In this study, we update parameters in a rolling-horizon manner using maximum likelihood estimation. Another possibility is to use Kalman filtering or its variants. Long-term predictions are beyond the scope of this study, but we plan to extend the approach presented here for obtaining accurate long-term predictions. We will also incorporate prediction results into managerial decision-making in several applications such as power grid operation with renewable energy \cite{Bouffard2008}. \bibliographystyle{IEEEtran}
\section{INTRODUCTION} \label{sec:intro} A key aspect of measuring total star formation rates is to assess how a galaxy's spectral energy distribution is modulated by the effects of interstellar dust. The ability to quantify dust attenuation at high redshift is hindered by the limited dynamic range in luminosity at wavelengths that are sensitive to dust emission. While the Lyman Break technique \citep{steidel95} has proven to be the most effective method of identifying galaxies across a large dynamic range in luminosity and lookback time, it requires observations in the rest-frame UV: the massive stars giving rise to the UV continuum also produce much of the dust that attenuates this continuum. The diminution of UV light is compounded by the fact that much of this radiation is re-emitted in the far-infrared where current instrumental sensitivity is insufficient to directly detect typical star-forming galaxies at $z\ga 2$. It has therefore become common to use local relations between monochromatic and bolometric luminosity to infer the star formation rates and dust attenuation of high-redshift galaxies. Prior to the era of large-scale multi-wavelength surveys like GOODS \citep{dickinsongoods03,giavalisco04}, inferring the dust attenuation at high redshift commonly entailed using relations between the variation of the UV continuum slope with dustiness as found for local starburst galaxies \citep{meurer99, calzetti00, heckman98}. However, such correlations were previously untested at high redshift. The correlation between the redness of the UV slope ($\beta$) and dust attenuation has limited applicability when examined over a larger range in galaxy stellar population and luminosity. Galaxies with older and less massive stars contributing significantly to the UV emission can also exhibit a redder spectral slope \citep{calzetti97, kong04, buat05}. 
Further, the UV slope decouples from extinction for very luminous galaxies where virtually all of the star formation is obscured, such as is the case for low redshift ultra-luminous infrared galaxies (ULIRGs; \citealt{goldader02}). Nonetheless, the local trend between UV slope and dustiness is still applied widely to galaxies at high redshift; this correlation is often the only means by which one can infer the dust attenuation of galaxies at $z\ga 3$, where the dust emission from a typical galaxy may be several orders of magnitude below the sensitivity limits of current infrared instruments. Incremental progress in determining dust attenuation at high redshift was achieved with the first ultra-deep radio and X-ray surveys (e.g., \citealt{richards00, alexander03}). Though the sensitivity (even at GOODS-depth) at these wavelengths was insufficient to individually detect typical star-forming galaxies at $z\ga 2$, these surveys did allow for estimates of the mean dust attenuation based on stacking analyses. Initial studies suggested a general agreement between UV and radio/X-ray inferences of dust attenuation at $z\sim 2-3$ \citep{nandra02, seibert02, reddy04, reddy05a, reddy06a, daddi07a, pannella09}. The launch of the {\em Spitzer Space Telescope} enabled the first direct detection of the dust emission in non-lensed $L^{\ast}$ galaxies at $z\sim 2$; at these redshifts, {\em Spitzer}'s MIPS $24$\,$\mu$m band is sensitive to the strongest dust emission feature (at $7.7$\,$\mu$m) in star-forming galaxies. This feature arises from the stochastic UV photo-heating of small dust grains and hydrocarbons (e.g., \citealt{puget89, tielens99}) and is found to correlate with the UV radiation from OB stars (e.g., \citealt{forster04, roussel01}), albeit with significant scatter (e.g., \citealt{kennicutt09, hogg05, helou01, engelbracht05, normand95}). 
Several studies have demonstrated that the dust attenuation inferred from the UV spectral slope, $\beta$, for typical $z\sim 2$ galaxies is in general agreement with that inferred from MIPS $24$\,$\mu$m (e.g., \citealt{reddy06a, reddy10a, daddi07a}). Regardless, local studies of resolved galaxies have emphasized the complexity of the $8$\,$\mu$m emission and the need to combine it with other obscured (IR) and unobscured (UV, H$\alpha$) tracers of star formation in order to obtain the most reliable measure of the bolometric luminosity \citep{kennicutt09}. To this end, we take advantage of the improved sensitivity of {\em Herschel}/PACS at $100$ and $160$\,$\mu$m \citep{pilbratt10, poglitsch10} to measure directly for the first time the thermal dust emission from a large sample of typical ($L^{\ast}$) star-forming galaxies at $z\sim 2$. We make use of the deep PACS data made possible by the GOODS-{\em Herschel} Open Time Key project (PI: D. Elbaz). We supplement these data with deep Very Large Array (VLA) $1.4$\,GHz continuum imaging in the GOODS-North field \citep{morrison10}. The primary aim is to constrain the average infrared luminosities of galaxies at high redshift, particularly those selected by their rest-frame UV colors, and to compare them to UV-based inferences. We begin in Section~\ref{sec:sample} by discussing the UV-selected galaxies and previous efforts to measure their stellar populations and dust attenuation. We also provide a brief description of the {\em Herschel} and radio data. Our stacking method and stacking simulations are described in Section~\ref{sec:stacking}, followed by a discussion of the dust spectral energy distribution (SED) fits to the stacked fluxes and the total infrared luminosities in Section~\ref{sec:luminosities}. 
In Section~\ref{sec:discussion} we proceed to compare the {\em Herschel}-based inferences of the dust attenuation with that inferred from the UV slope and discuss systematics in the dust obscuration with stellar population age and bolometric and UV luminosity. For ease of comparison with other studies, we assume a \citet{salpeter55} initial mass function (IMF) and adopt a cosmology with $H_{0}=70$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$, and $\Omega_{\rm m}=0.3$. \section{SAMPLE SELECTION AND {\em Herschel} PACS 100 and 160\,$\mu$m Data} \label{sec:sample} \subsection{Rest-Frame UV-Selected Sample of Star-Forming Galaxies at $z\sim 2-3$} The GOODS-North field was targeted as part of an ongoing imaging and spectroscopic survey of UV-selected galaxies at $z\sim 2-3$ \citep{steidel04}, primarily to take advantage of the extensive multi-wavelength data that exist for this field. Spectroscopic catalogs and analysis of the GOODS-North sample, including the stellar masses and dust attenuation of $L^{\ast}$ galaxies at $z\sim 2-3$, are presented in \citet{reddy06b}. Briefly, the galaxies were selected to lie at redshifts $1.5\la z\la 3.4$ based on the ``BM,'' ``BX,'' and Lyman Break galaxy (LBG) color criteria \citep{steidel03, steidel04, adelberger04}. Extensive spectroscopic followup of candidates brighter than ${\cal R} = 25.5$ was conducted using the blue arm of the Low Resolution Imaging Spectrograph (LRIS; \citealt{steidel04}) on the Keck I telescope. The criteria used to construct our sample are identical to those of \citet{reddy10a}. Only galaxies with secure spectroscopic redshifts $1.5\le z_{\rm spec}\le 2.6$ were considered as this is the redshift range over which the MIPS band is sensitive to rest-frame $8$\,$\mu$m emission. Galaxies with any AGN signatures based on optical emission lines (Ly$\alpha$, CIV, NV) or a power-law SED through the IRAC and MIPS $24$\,$\mu$m bands were excluded. 
Additionally, galaxies were included only if we were able to obtain robust point spread function (PSF) fits to their $24$\,$\mu$m emission; in practice, this meant that a few galaxies were rejected because they are confused with nearby neighbors (see \citealt{reddy10a} for the parameter used to measure the degeneracy of the PSF fits). This sample forms the basis of our stacking analysis, and includes a total of 146 galaxies, approximately $30\%$ of which are detected with greater than $3$\,$\sigma$ significance in the GOODS-N $24$\,$\mu$m data. \subsection{Properties of the Sample and SED Modeling} \citet{reddy09} use a larger spectroscopic and photometric sample (the latter extending to ${\cal R}\sim 27.0$) in 31 independent fields to quantify the selection function for the UV sample and determine the UV luminosity function at $z\sim 2$. This analysis indicates that UV-selected galaxies commonly targeted for spectroscopy with ${\cal R}\la 25.5$ have luminosities similar to $L^{\ast}$ of the UV luminosity function, where $L^{\ast}_{\rm UV}\approx 4\times 10^{10}$\,L$_{\odot}$ \citep{reddy09}. Throughout the text, $L^{\ast}$ refers to the characteristic luminosity of the UV luminosity function, unless stated otherwise. The stellar populations of these galaxies have been modeled previously, and we refer the reader to \citet{reddy10a} for a detailed description of the modeling procedure. Briefly, Charlot \& Bruzual (in prep.) stellar population models with a range of exponentially-declining star formation histories $\tau =$ 10, 20, 50, 100, 200, 500, 1000, 2000, and 5000\,Myr, as well as a constant star formation history, were fit to the observed $U_{\rm n}G{\cal R}$, $JK_{\rm s}$, and {\em Spitzer}/IRAC data. In addition, we considered ages spaced roughly logarithmically between 70 and 5000\,Myr, excluding any that are older than the age of the Universe at the redshift of each galaxy. 
The lower limit to the allowed ages ($70$\,Myr) was adopted to reflect the typical dynamical time-scale as inferred from velocity dispersion and size measurements of $z\sim 2$ LBGs \citep{erb06c, law07}. Reddening was incorporated by employing the \citet{calzetti00} attenuation curve and allowing $E(B-V)$ to range between 0.0 and 0.6. We adopt the best-fit stellar population parameters obtained when assuming a constant star formation (CSF) history, unless an exponentially declining ($\tau$) model gives a significantly better fit to the broadband data. Generally, the $\chi^2$ values assuming the CSF model were similar to those obtained when $\tau$ is allowed to vary freely. Further, more extreme star formation histories, where the ratio of the age to $\tau$ is much greater than unity, i.e., $t_{\rm age}/\tau \gg 1$, can be ruled out based on the presence of O star and Wolf-Rayet features in the composite UV spectra of $z\sim 2$ galaxies, as well as the fact that such models often predict ages that are implausibly young compared with the dynamical timescale at $z\sim 2$. For the present analysis, we use the results of the stellar population modeling to distinguish those galaxies that have young star formation ages $\la 100$\,Myr. Such galaxies have been shown previously to depart from the UV attenuation curve established for more typical galaxies at $z\sim 2$ \citep{reddy10a, reddy06a}, and we wish to explore this difference with the {\em Herschel} data. \subsection{Construction of Subsamples} The sample of 146 UV-selected galaxies was subdivided into subsamples in order to investigate the differences in dust attenuation between galaxies with blue and red UV spectral slopes, those with high bolometric luminosities ($L_{\rm bol} \equiv L_{\rm IR} + L_{\rm UV}$), and those with young stellar population ages.
The properties of these subsamples, including the criteria used to construct them, the number of galaxies in each subsample, and their $\beta$ and redshift distributions, are summarized in Table~\ref{tab:stacks}. As noted above, the ages are estimated from stellar population modeling of the rest-frame UV through {\em Spitzer} IRAC photometry. For the purpose of constructing the subsamples, we estimated bolometric luminosities based on the \citet{reddy10a} calibration between $8$\,$\mu$m and infrared luminosity (see next section). Finally, UV slopes $\beta$ were determined from the $G-{\cal R}$ colors of galaxies as follows. We generated power laws in $f(\lambda)\propto \lambda^\beta$ for $-2.5\le \beta\le 1.0$ with $\Delta\beta = 0.01$. These were attenuated by the Ly$\alpha$ forest opacity assuming the \citet{madau95} prescription and multiplied by the $G$ and ${\cal R}$ transmission filters. The $G$ band is affected by the Ly$\alpha$ forest only for redshifts $z\ga 2.5$; statistical fluctuations in the forest will not affect $\beta$ as most of the galaxies considered here lie at redshifts $z\le 2.5$. For this same reason, Ly$\alpha$ lies outside of the $G$-band filter and will therefore not affect $\beta$. The UV slope for a given galaxy is taken to be the one which gives the closest match in $G-{\cal R}$ color to the observed value. The error in UV slope is related directly to the error in color and is typically $\sigma_\beta\simeq 0.11$.
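As a concrete illustration, the color-matching step above can be sketched in Python. This is a minimal sketch: the top-hat $G$ and ${\cal R}$ passbands below are hypothetical stand-ins for the true filter transmission curves, and the Ly$\alpha$ forest attenuation is omitted (appropriate for the $z\le 2.5$ galaxies that dominate the sample).

```python
import numpy as np

# Hypothetical top-hat passbands (angstroms); the actual calculation uses
# the full G and R filter transmission curves.
G_BAND = (4300.0, 5300.0)
R_BAND = (6200.0, 7200.0)

def synthetic_color(beta, band_blue=G_BAND, band_red=R_BAND):
    """AB-like G-R color of a power law f_lambda ~ lambda**beta."""
    lam = np.linspace(3000.0, 8000.0, 5000)
    flam = lam ** beta

    def mean_fnu(band):
        sel = (lam >= band[0]) & (lam <= band[1])
        # f_nu ~ f_lambda * lambda**2; constant factors cancel in the color
        return np.trapz(flam[sel] * lam[sel] ** 2, lam[sel]) / (band[1] - band[0])

    return -2.5 * np.log10(mean_fnu(band_blue) / mean_fnu(band_red))

def beta_from_color(g_minus_r):
    """Grid search over beta, as in the text: -2.5 <= beta <= 1.0, step 0.01."""
    betas = np.arange(-2.5, 1.0 + 1e-9, 0.01)
    colors = np.array([synthetic_color(b) for b in betas])
    return betas[np.argmin(np.abs(colors - g_minus_r))]
```

Since the mapping from $\beta$ to color is monotonic, the quoted $\sigma_\beta\simeq 0.11$ follows from propagating the photometric color uncertainty through this grid.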
\begin{deluxetable*}{llll} \tabletypesize{\scriptsize} \tablewidth{0pc} \tablecaption{Properties of the Stacks I: $100$\,$\mu$m Fluxes, Redshifts, and UV Slopes} \tablehead{ \colhead{} & \colhead{} & \colhead{$f_{\rm 100}$} & \colhead{$f_{\rm 100}$} \\ \colhead{Sample} & \colhead{Criterion~\tablenotemark{a}} & \colhead{$r_{\rm exc}=3\farcs43$~\tablenotemark{b}} & \colhead{No Exclusion~\tablenotemark{c}}} \startdata {\bf A.} & All & $(3.0\pm0.5)\times 10^{-4}$\,Jy & $(3.0\pm0.4)\times 10^{-4}$\,Jy \\ {\bf ALL} & & $N=106$ & $N=146$ \\ {\bf UV-} & & $\langle \beta\rangle=-1.41\pm0.42$ & $\langle \beta\rangle=-1.41\pm0.41$ \\ {\bf SELECTED} & & $\langle z\rangle = 2.08\pm0.24$ & $\langle z\rangle = 2.08\pm0.26$ \\ \\ {\bf B.} & Age $\ga 100$\,Myr; & $(2.4\pm0.5)\times 10^{-4}$\,Jy & $(2.8\pm0.5)\times 10^{-4}$\,Jy \\ {\bf $L^{\ast}$} & $L_{\rm bol}\le 10^{12}$\,L$_{\odot}$ & $N=83$ & $N=114$ \\ {\bf GALAXIES} & & $\langle \beta\rangle=-1.48\pm0.37$ & $\langle \beta\rangle=-1.46\pm0.38$ \\ & & $\langle z\rangle=2.09\pm0.24$ & $\langle z\rangle=2.09\pm0.25$ \\ \\ {\bf C.} & Age $\ga 100$\,Myr; & $(1.9\pm0.7)\times 10^{-4}$\,Jy & $(2.1\pm0.7)\times 10^{-4}$\,Jy \\ {\bf BLUE} & $L_{\rm bol}\le 10^{12}$\,L$_{\odot}$; & $N=51$ & $N=67$ \\ {\bf UV-SLOPES} & $\beta < -1.4$ & $\langle \beta\rangle = -1.61\pm0.21$ & $\langle \beta\rangle = -1.59\pm0.21$ \\ & & $\langle z\rangle = 2.05\pm0.24$ & $\langle z \rangle = 2.05\pm0.26$ \\ \\ {\bf D.} & Age $\ga 100$\,Myr; & $(3.8\pm0.9)\times 10^{-4}$\,Jy & $(3.4\pm0.6)\times 10^{-4}$\,Jy \\ {\bf RED} & $L_{\rm bol}\le 10^{12}$\,L$_{\odot}$; & $N=32$ & $N=47$ \\ {\bf UV-SLOPES} & $\beta \ge -1.4$ & $\langle \beta\rangle = -1.10\pm0.26$ & $\langle \beta\rangle = -1.15\pm0.26$ \\ & & $\langle z\rangle = 2.19\pm0.22$ & $\langle z \rangle = 2.14\pm0.24$ \\ \\ {\bf E.} & Age $\ga 100$\,Myr; & $(11.0\pm1.7)\times 10^{-4}$\,Jy & $(9.9\pm1.2)\times 10^{-4}$\,Jy \\ {\bf ULIRGs} & $L_{\rm bol}>10^{12}$\,L$_{\odot}$ & $N=9$ & $N=12$ \\ & & 
$\langle \beta\rangle=-0.84\pm0.48$ & $\langle\beta\rangle=-0.84\pm0.46$ \\ & & $\langle z\rangle=2.03\pm0.27$ & $\langle z\rangle=2.22\pm0.29$ \\ & & $\langle L_{\rm bol}\rangle=(1.3\pm0.1)\times 10^{12}$\,L$_{\odot}$ & $\langle L_{\rm bol}\rangle=(1.3\pm0.1)\times 10^{12}$\,L$_{\odot}$ \\ & & $L_{\rm bol}^{\rm max} = 1.8\times 10^{12}$\,L$_{\odot}$ & $L_{\rm bol}^{\rm max} = 1.8\times 10^{12}$\,L$_{\odot}$ \\ \\ {\bf F.} & Age $\la 100$\,Myr & 3\,$\sigma$: $<4.4\times10^{-4}$\,Jy & 3\,$\sigma$: $<3.2\times10^{-4}$\,Jy \\ {\bf YOUNG} & & $N=14$ & $N=20$ \\ {\bf GALAXIES} & & $\langle \beta\rangle= -1.17\pm0.40$ & $\langle \beta\rangle= -1.30\pm0.40$ \\ & & $\langle z\rangle=2.00\pm0.26$ & $\langle z\rangle = 2.00\pm 0.28$ \\ \enddata \tablenotetext{\,}{NOTE. -- Each entry includes (a) the stacked flux at $100$\,$\mu$m and its measurement uncertainty; (b) the number of galaxies contributing to the stack; and (c) the mean and sample dispersion of the UV slopes ($\beta$) and redshifts of those galaxies. We assign 3\,$\sigma$ upper limits to fluxes in cases where the $1$\,$\sigma$ measurement uncertainty is larger than the stacked flux. For Sample E (ULIRGs), we include the mean and error in the mean of the bolometric luminosity ($L_{\rm bol}$; derived using the calibration of \citealt{reddy10a}) and the maximum bolometric luminosity ($L_{\rm bol}^{\rm max}$) contributing to the stack.} \tablenotetext{a}{The criteria used to select each subsample are listed here. The ages were determined from SED-fitting to the broadband photometry (see text) and the bolometric luminosities, $L_{\rm bol}$, were determined from our previous calibration between $24$\,$\mu$m and infrared luminosity \citep{reddy10a}.} \tablenotetext{b}{Criteria for excluding galaxies from the stack.
If any galaxy lies within a distance $r_{\rm exc}$ of a nearby optical, $K_{\rm s}$-band, or IRAC source, it is excluded from the stack.} \tablenotetext{c}{No exclusion radius adopted; all galaxies are median stacked.} \label{tab:stacks} \end{deluxetable*} \subsection{Previous MIPS Results} To provide a context for our present analysis, we briefly summarize previous efforts to constrain the dust emission and bolometric luminosities of galaxies at $z\sim 2-3$. \citet{reddy06a} investigated the variation in $24$\,$\mu$m flux with dust-corrected UV and X-ray measures of the bolometric luminosities of the same UV-selected galaxies analyzed here. Extending upon this result, \citet{reddy10a} used a sample of 90 Lyman Break galaxies to examine the relationship between rest-frame $8$\,$\mu$m ($\nu L_{\nu}[8\mu {\rm m}]\equiv L_{\rm 8}$) and H$\alpha$ luminosity at $z\sim 2$, finding a tight trend between the two, with a scatter of $\approx 0.24$\,dex. Using H$\alpha$ luminosity as a proxy for total star formation rate -- after accounting for dust with a \citet{calzetti00} attenuation curve -- then allowed these authors to establish a relationship between $L_{\rm 8}$ and dust obscured SFR, or $L_{\rm IR}$. The previous studies found that typical ($L^{\ast}$) star-forming galaxies at $z\sim 2$ have dust attenuations, or infrared-to-UV luminosity ratios, $L_{\rm IR}/L_{\rm UV} \simeq 5$, similar to the values predicted from the UV slope, $\beta$, using the local correlation between $\beta$ and $L_{\rm IR}/L_{\rm UV}$ \citep{meurer99}. Investigations of the $24$\,$\mu$m emission of galaxies selected at rest-frame optical wavelengths -- resulting in samples that are not orthogonal to the one analyzed here -- have reached similar conclusions regarding the validity of UV-based dust corrections for moderately-luminous galaxies at $z\sim 2$ \citep{daddi07a}. 
These results have been extended to higher redshift: \citet{magdis10a} demonstrate that UV-based dust corrections for LBGs at $z\sim 3$ yield bolometric luminosities comparable to those inferred from infrared, radio, and millimeter measures. The $160$~$\mu$m emission from $z\sim 3$ LBGs also corresponds to $L_{\rm IR}$ that are similar to those obtained with $24$\,$\mu$m estimates \citep{magdis10b}. While this is important validation of our understanding of star formation and dust obscuration at high redshift, the LBGs that \citet{magdis10b} detected in stacked $160$\,$\mu$m data have significantly higher luminosities ($\langle L_{\rm IR}\rangle \approx 1.6\times 10^{12}$\,L$_{\odot}$) and star formation rates ($> 100$\,M$_{\odot}$\,yr$^{-1}$) than those of the typical, $L^{\ast}$, galaxies analyzed here. While this agreement is encouraging, and instills confidence in our ability to recover the dust attenuation of typical galaxies from measurements of their UV continuum slopes, uncertainties in the {\em k}-corrections and the conversion between $8$\,$\mu$m and infrared luminosity (as well as conversion between X-ray emission and SFR, for the X-ray stacking analyses) no doubt introduce some scatter in the inferred dust obscurations. The primary goal of our present analysis is to obtain more direct measures of the total infrared luminosities of $z\sim 2$ galaxies using {\em Herschel} data, which are described below. \subsection{{\em Herschel} and VLA Data} The GOODS-{\em Herschel} Open Time Key Program (PI: D. Elbaz) includes $\approx 124$\,hrs of $100$ and $160$\,$\mu$m PACS imaging and $31$\,hrs of SPIRE imaging in the GOODS-North field. The $3$\,$\sigma$ depths of the PACS 100 and $160$\,$\mu$m images are 1.1\,mJy and 2.6\,mJy, respectively. None of the galaxies in the sample analyzed here are detected to these depths. Further details on the data reduction are given in \citet{elbaz11}.
PSFs were constructed based on the catalogs of {\em Herschel} detections discussed in \citet{elbaz11}. When performing PSF photometry, we adjusted all fluxes upward by multiplicative factors of 1.37 and 1.29 at $100$ and $160$\,$\mu$m respectively, to account for missing flux in the wings of the PSFs. The corresponding factor for the MIPS $24$\,$\mu$m PSF is 1.22. The VLA radio 1.4\,GHz data are described in \citet{morrison10}. Briefly, a total of $165$\,hrs of observations, including $42$\,hr from \citet{richards00}, were combined to produce a radio map of the GOODS-North field. The noise level of the image is $\sim 3.9$\,$\mu$Jy\,beam$^{-1}$ near the center and $\sim 8$\,$\mu$Jy\,beam$^{-1}$ at 15$\arcmin$ from the center. The synthesized beamsize of the radio data is $\sim 1\farcs7 \times 1\farcs6$, corresponding to a physical scale of $14.1\times 13.3$\,kpc at $z=2.30$. The radio map used in the stacking analysis is primary beam corrected. \begin{figure} \plotone{f1.eps} \caption{PSF-convolved median stacked images for ``Sample B'' ($L^{\ast}$ galaxies) at $24$, $100$, $160$\,$\mu$m, and $1.4$\,GHz, with pixel scales of $1\farcs2$, $1\farcs2$, $2\farcs4$, and $0\farcs5$, respectively.} \label{fig:stackpost} \end{figure} \section{Stacking Method and Simulations} \label{sec:stacking} \subsection{General Stacking Procedure} The method used to stack the {\em Herschel} $100$ and $160$\,$\mu$m, {\em Spitzer} $24$\,$\mu$m, and the VLA radio data is as follows. We extracted an area $50\arcsec \times 50\arcsec$ around each object to be stacked. The area was chosen to be large enough to obtain a reliable estimate of the local background.
These cutouts were then median combined after rotating each sub-image by $90\degr$ relative to the previous sub-image contributing to the stack, in order to minimize image defects that are aligned with the scanning orientation of the data acquisition.\footnote{To test for any systematic effect associated with asymmetries in the PSF, we also stacked without rotating each sub-image. The results obtained with and without rotating the sub-images were identical within the uncertainties of the stacked flux measurements.} We adopted the median stacks to ensure the results are not biased by bright outliers. We stacked the $24$\,$\mu$m data in exactly the same way as was done for the $100$ and $160$\,$\mu$m data, irrespective of whether a galaxy was directly detected at $24$\,$\mu$m, in order to ensure the most consistent results across wavelengths.\footnote{For the most conservative estimate of the stacked radio flux, we did not correct for bandwidth smearing (e.g., \citealt{pannella09}); rather, we fit an elliptical Gaussian to the stacked light profile and adopted the peak value for the radio flux.} To test the effects of a non-negligible amount of flux in the wings of the stacked signal relative to the PSF (due to positional uncertainty), we performed both PSF-fitting photometry and simple aperture photometry on the stacked flux. The background and measurement uncertainty in the stacked image are taken respectively to be the mean and $1$\,$\sigma$ dispersion in flux computed by fitting many PSFs at random positions in the stacked image. We simultaneously fit the target flux (with a PSF) and a constant value to account for the background level in the stacked image. We found no significant systematic bias in the aperture versus PSF-derived flux relative to the measurement uncertainties, and we adopted the PSF-determined flux measurements. For the radio data, the PSF-measured flux and total integrated flux differ by $\approx 20-30\%$ due to bandwidth smearing.
To account for this effect, we have adopted the fluxes computed from fitting the best-fit elliptical Gaussian profile (with peak intensity normalized to unity) to the stacked radio emission, yielding the integrated flux density. \subsection{Residual Images} Residual maps at $100$ and $160$\,$\mu$m were constructed in order to ensure the most robust determinations of the stacked fluxes.\footnote{We did not construct a residual map for the VLA data given the higher resolution of these data and the lower $1.4$\,GHz source surface density.} Sources detected at $>3$\,$\sigma$ were subtracted from the science mosaics (none of the galaxies in our sample are detected at $100$ and $160$\,$\mu$m at this level). We considered detections based both on a blind catalog (``blind-subtracted''), where a detection algorithm was used directly on the PACS mosaics, as well as a catalog constructed using $24$\,$\mu$m priors to define the positions of sources in the longer wavelength PACS bands (``prior-subtracted''). In both cases, we used PSF photometry to determine the fluxes of sources and subtract them from the science images. To ensure that there are no systematic effects that may bias the flux measurement made on the residual maps, we performed the same stacking analysis on the science images themselves. Comparison between the stacks on the blind-residual maps, prior-residual maps, and the science images, showed that the stacked fluxes were within $10\%$ of each other. Typically, the stacked fluxes measured on the science images were {\em lower} than those measured on the residual images. This effect is attributed to the higher background level in the stack derived from the science images due to the PSF wings of nearby sources. The similarity in flux regardless of the image used for the stack suggests that the median combination of sub-images is robust to bright outliers, as expected.
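The rotation-and-median-combination step, and its robustness to bright outliers, can be sketched as follows. This is a schematic with a hypothetical list of square `cutouts`; the noise estimate uses simple apertures at random positions as a stand-in for the random PSF fits described above.

```python
import numpy as np

def median_stack(cutouts):
    """Median-combine equal-size square sub-images, rotating each by 90 deg
    relative to the previous one to suppress scan-aligned image defects."""
    rotated = [np.rot90(img, k=i % 4) for i, img in enumerate(cutouts)]
    return np.median(rotated, axis=0)

def background_and_noise(stacked, n_samples=1000, box=5, rng=None):
    """Estimate the background level and 1-sigma flux uncertainty from many
    small apertures at random positions in the stacked image (a stand-in for
    the random PSF fits described in the text)."""
    rng = np.random.default_rng(rng)
    ny, nx = stacked.shape
    fluxes = []
    for _ in range(n_samples):
        y = rng.integers(0, ny - box)
        x = rng.integers(0, nx - box)
        fluxes.append(stacked[y:y + box, x:x + box].sum())
    fluxes = np.array(fluxes)
    return fluxes.mean(), fluxes.std()
```

Because the combination is a median, a single anomalously bright sub-image leaves the stacked value essentially unchanged, which is the behavior noted above.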
The flux measurements obtained by stacking on the prior-fit residual maps are adopted for further analysis. The median stacked data for $L^{\ast}$ galaxies (Sample B; Table~\ref{tab:stacks}) at $24$, $100$, and $160$\,$\mu$m, and $1.4$\,GHz, are shown in Figure~\ref{fig:stackpost}. \subsection{Exclusion Tests and Simulations} A common concern in stacking data with relatively poor resolution is the potential contribution from unrelated sources within the beam that are formally undetected (i.e., with a $<3\,\sigma$ significance) and which therefore remain in the residual images, but which may contribute significantly to the stacked flux. If these unrelated sources have a random spatial distribution, then their contribution to the stacked flux will be accounted for when we subtract the background in the stacked images. However, if the unrelated sources are clustered with respect to the UV-selected galaxies, they will contribute to the stacked fluxes. To test for the effect of clustering on the stacked fluxes, we considered a number of tests, as we describe below. 
\begin{deluxetable*}{lrrrr} \tabletypesize{\scriptsize} \tablewidth{0pc} \tablecaption{Properties of the Stacks II: Summary of Mid-IR, IR, and Radio Fluxes} \tablehead{ \colhead{} & \colhead{$f_{\rm 24}$} & \colhead{$f_{\rm 100}$} & \colhead{$f_{\rm 160}$} & \colhead{$f_{\rm 1.4}$} \\ \colhead{Sample} & \colhead{(Jy)} & \colhead{(Jy)} & \colhead{(Jy)} & \colhead{(Jy)}} \startdata {\bf A.} & $(33\pm4)\times 10^{-6}$ & $(3.0\pm0.4)\times 10^{-4}$ & $(7.3\pm1.2)\times 10^{-4}$ & $(2.1\pm0.5)\times 10^{-6}$ \\ {\bf B.} & $(27\pm2)\times 10^{-6}$ & $(2.8\pm0.5)\times 10^{-4}$ & $(5.9\pm1.5)\times 10^{-4}$ & $(2.0\pm0.4)\times 10^{-6}$ \\ {\bf C.} & $(21\pm3)\times 10^{-6}$ & $(2.1\pm0.7)\times 10^{-4}$ & $(5.4\pm1.6)\times 10^{-4}$ & $(1.6\pm0.6)\times 10^{-6}$ \\ {\bf D.} & $(35\pm3)\times 10^{-6}$ & $(3.4\pm0.6)\times 10^{-4}$ & $(6.6\pm2.1)\times 10^{-4}$ & $(2.6\pm0.7)\times 10^{-6}$ \\ {\bf E.} & $(130\pm4)\times 10^{-6}$ & $(9.9\pm1.2)\times 10^{-4}$ & $(22.4\pm3.8)\times 10^{-4}$ & $(6.6\pm1.3)\times 10^{-6}$ \\ {\bf F.} & $(10\pm2)\times 10^{-6}$ & 3\,$\sigma$: $<3.2\times 10^{-4}$ & $(4.6\pm3.3)\times 10^{-4}$ & 3$\,\sigma$: $<3.1\times 10^{-6}$ \\ \enddata \tablenotetext{\,}{NOTE. 
-- The quoted errors reflect measurement uncertainty in the stacked fluxes.} \label{tab:fluxes} \end{deluxetable*} \begin{deluxetable*}{lcccc} \tabletypesize{\scriptsize} \tablewidth{0pc} \tablecaption{Properties of the Stacks III: Biases and Errors in Stacked Fluxes} \tablehead{ \colhead{} & \colhead{$24$\,$\mu$m} & \colhead{$100$\,$\mu$m} & \colhead{$160$\,$\mu$m} & \colhead{$1.4$\,GHz} \\ \colhead{Sample} & \colhead{1-Bias~\tablenotemark{a} / Error~\tablenotemark{b}} & \colhead{1-Bias~\tablenotemark{a} / Error~\tablenotemark{b}} & \colhead{1-Bias~\tablenotemark{a} / Error~\tablenotemark{b}} & \colhead{1-Bias~\tablenotemark{a} / Error~\tablenotemark{b}}} \startdata {\bf A.} & 0.94 / 0.002 & 0.96 / 0.012 & 0.92 / 0.014 & 1.03 / 0.016 \\ {\bf B.} & 0.94 / 0.003 & 0.95 / 0.017 & 0.92 / 0.022 & 1.02 / 0.024 \\ {\bf C.} & 0.94 / 0.015 & 0.96 / 0.039 & 0.92 / 0.040 & 1.03 / 0.054 \\ {\bf D.} & 0.94 / 0.004 & 0.96 / 0.035 & 0.92 / 0.045 & 1.03 / 0.038 \\ {\bf E.} & 0.95 / 0.009 & 0.95 / 0.046 & 0.93 / 0.058 & 1.05 / 0.075 \\ \enddata \tablenotetext{a}{Average bias of stacked flux, defined as the ratio of the mean measured flux to the simulated flux, $\langle f^{meas}\rangle / f^{sim}$.} \tablenotetext{b}{Fractional uncertainty in the mean stacked flux, taken as the ratio of the error in the mean of the measured stacked flux ($\sigma/\sqrt{N}$) to the median measured flux, i.e., $\sigma (f^{meas})/(\sqrt{N}\langle f^{meas}\rangle)$.} \label{tab:biaserr} \end{deluxetable*} \subsubsection{Exclusion Radius} The most conservative measure of the median flux can be achieved by stacking only those galaxies without any nearby sources. The full-width-half-max (FWHM) of the PSF at $100$\,$\mu$m is small enough ($\simeq 6\arcsec$) such that we can adopt an ``exclusion radius'' $r_{\rm exc}=$FWHM$/2$, and still have enough galaxies to stack. We first constructed a catalog containing all optical, $K_{\rm s}$-band, and IRAC detections in the GOODS-North field. 
The $3$\,$\sigma$ sensitivities, as measured in a $2\arcsec$ diameter aperture in the optical and $K_{\rm s}$-band images, are ${\cal R}\simeq 27.6$ and $K_{\rm s}\simeq 24.55$. The $3$\,$\sigma$ sensitivities of the GOODS-North IRAC data, as measured in a $4\arcsec$ diameter aperture, are 26.56, 26.00, 24.17, and 24.16, for the 4 IRAC channels, respectively. Any galaxies in our sample that lie within a distance $r_{\rm exc}$ of any source detected in the optical, near-IR, or with IRAC, are excluded from the stack. The sample itself was selected such that galaxies that were confused with any nearby MIPS $24$\,$\mu$m sources were excluded. The median $100$\,$\mu$m fluxes obtained with and without adopting an exclusion radius are listed in Table~\ref{tab:stacks}. The differences between these fluxes are smaller than the $1$\,$\sigma$ measurement uncertainties in the stacked fluxes. The similarity in flux between the exclusion and no-exclusion cases implies that any objects that may cluster around the UV-selected galaxies do not contribute significantly to the stacked far-infrared fluxes of the UV-selected galaxies. We note that there may exist very faint sources that are undetected in the optical, near-IR, and IRAC data and which may lie close to our targets. However, we consider it unlikely for sources that are faint at virtually all other wavelengths (optical, near-IR, and the {\em Spitzer} IRAC and MIPS bands) to be bright enough at $100$\,$\mu$m to contribute significantly to the stacked flux of the UV-selected galaxies. Performing a similar test at $160$\,$\mu$m is not possible, as the larger FWHM results in an exclusion radius that precludes any galaxies from being stacked. 
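The exclusion test just described can be sketched as below. The flat-sky arcsecond coordinates are a hypothetical simplification; the actual test matches each target against the combined optical/$K_{\rm s}$/IRAC catalog (with the target's own catalog entry assumed to have been removed).

```python
import numpy as np

R_EXC = 3.43  # exclusion radius in arcsec (= FWHM/2 at 100 microns)

def passes_exclusion(target_xy, catalog_xy, r_exc=R_EXC):
    """True if no catalog source lies within r_exc (arcsec) of the target.
    Coordinates are tangent-plane offsets in arcsec; for separations of a
    few arcsec this flat-sky distance is adequate."""
    if len(catalog_xy) == 0:
        return True
    offsets = np.asarray(catalog_xy) - np.asarray(target_xy)
    d = np.hypot(offsets[:, 0], offsets[:, 1])
    return bool(d.min() > r_exc)

def select_for_stack(targets, catalog, r_exc=R_EXC):
    """Keep only targets with no optical/K_s/IRAC neighbor inside r_exc."""
    return [t for t in targets if passes_exclusion(t, catalog, r_exc)]
```

Comparing the median stacked flux of `select_for_stack(...)` with that of the full sample is the test summarized in Table~\ref{tab:stacks}.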
Because the $100$ and $160$\,$\mu$m emission arises from the same mechanism (dust emission), one would conclude that the effects of clustering are unlikely to be significant at $160$\,$\mu$m if the $100$\,$\mu$m stacked emission yields similar infrared luminosities to those obtained from the stacked $160$\,$\mu$m data. In Section~\ref{sec:luminosities} we present evidence that suggests that an additional contribution to the $160$\,$\mu$m flux from other sources must be negligible compared to the flux from the UV-selected galaxies of interest here. Based on this evidence, and to take advantage of all the galaxies in our sample, we proceed by adopting the fluxes obtained without using the exclusion radius; these values are given in Table~\ref{tab:fluxes}. Below, we discuss two additional tests used to verify these stacked fluxes. \subsubsection{Comparison with Random Stacks} The probability of a chance measurement that results in fluxes as high as the ones obtained by stacking on the positions of UV-selected galaxies (i.e., the target stacks) can be determined by stacking at random positions in the images (i.e., random stacks). We performed this random stack test on the radio and $24$, $100$, and $160$\,$\mu$m data (we did not exclude positions that correspond to detected sources). The results for the mid-IR and IR simulations for Sample B (stacking on $N=114$ positions) are shown in Figure~\ref{fig:ranflux}. For this sample, the $24$ and $100$\,$\mu$m fluxes measured for 10,000 random stacks were never as high as those obtained for the target stacks. For the $160$\,$\mu$m data, 15 out of 10,000 random stacks result in fluxes that were within the $1$\,$\sigma$ measurement uncertainty of the target stack.
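A null test of this kind can be sketched as a simple Monte Carlo. The `measure_flux` callable below is a hypothetical stand-in for the PSF photometry described above; only the logic of the significance estimate is illustrated.

```python
import numpy as np

def random_stack_pvalue(image, measure_flux, target_flux, n_positions,
                        n_trials=10_000, rng=None):
    """Empirical probability that a median stack at n_positions random
    locations yields a flux at least as high as the target stack."""
    rng = np.random.default_rng(rng)
    ny, nx = image.shape
    hits = 0
    for _ in range(n_trials):
        ys = rng.integers(0, ny, n_positions)
        xs = rng.integers(0, nx, n_positions)
        flux = np.median([measure_flux(image, y, x) for y, x in zip(ys, xs)])
        if flux >= target_flux:
            hits += 1
    return hits / n_trials
```

For Sample B this procedure returns $0/10{,}000$ at $24$ and $100$\,$\mu$m and $15/10{,}000$ at $160$\,$\mu$m, i.e., chance probabilities $\la 0.2\%$.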
The random stacks indicate a very low probability ($\la 0.2\%$) of accidentally recovering stacked fluxes as high as the ones observed for the sample of $L^{\ast}$ galaxies at $z\sim 2$, and these probabilities are consistent with those expected based on the measurement uncertainties of the stacked fluxes. \begin{figure*} \plotone{f2.eps} \caption{Comparison of measured stacked fluxes (mean and uncertainty indicated by the solid lines and hashed regions, respectively) for Sample B and the distribution of fluxes obtained by stacking on $N=114$ random positions, repeated 10000 times (histograms).} \label{fig:ranflux} \end{figure*} \begin{figure*} \plotone{f3.eps} \caption{Distribution of stacked fluxes for artificial galaxies added to the $24$, $100$, and $160$\,$\mu$m images (histograms) for Samples A through E, relative to the simulated flux (numbers in panels).} \label{fig:simflux} \end{figure*} \subsubsection{Stacks of Simulated Galaxies} Building on the random stack tests, point sources of known flux density were added at random locations in the residual images (using the same PSFs that are used to obtain photometry\footnote{We performed another test to determine if asymmetries in the intrinsic {\em Herschel} PSF result in systematic differences in the photometry obtained for simulated sources. For this test, we redid the simulations by adding point sources of known flux density, where we assumed the flux profile given by the PACS PSF measured by observing the asteroid Vesta. The sources were then recovered using the PSF measured from the GOODS-N {\em Herschel} images. Based on this second set of simulations, we find less than $5\%$ systematic offset between the difference of input and output flux relative to the difference obtained when assuming the same PSF, both for adding sources to the images and recovering their fluxes.}) and were recovered by a stacking analysis. 
We added the same number of point sources to the residual images as are used in constructing the stacks for Samples A through E. Flux densities were assigned based on a Gaussian distribution with a mean equivalent to the median stacked flux for each Sample (the stacks are insensitive to the dispersion in simulated fluxes). The ratio of the measured stacked flux to the simulated mean flux is shown in Figure~\ref{fig:simflux} for the different samples, with the numbers of sources and input mean fluxes indicated, for the $24$, $100$, and $160$\,$\mu$m data. This ratio is close to, but not exactly, equal to $1$, with a bias of just a few percent. The biases and dispersions measured for the different stacks are summarized in Table~\ref{tab:biaserr}. The error in the mean recovered stacked flux from the simulations is much smaller than the measurement uncertainty. Therefore, for the subsequent discussion we assume a flux uncertainty that is equal to the measurement uncertainty of the stacked flux. Finally, we corrected the observed fluxes (listed in Table~\ref{tab:fluxes}) by the bias factors given in Table~\ref{tab:biaserr} before inferring the total infrared luminosities, as we discuss in the next section. \subsubsection{Summary of Tests} We have performed several tests to validate our measures of the stacked fluxes for UV-selected galaxies. The method employed here is able to recover the stacked fluxes of galaxies with a bias of just a few percent relative to the known fluxes of artificial sources added to the 1.4\,GHz, and $24$, $100$, and $160$\,$\mu$m images. These tests also imply a very low probability ($\la 0.2\%$) of accidentally recovering stacked fluxes that are as high as the target stacked fluxes. Finally, we test for the effect of source clustering on the stacked flux at $100$\,$\mu$m.
By excluding galaxies from the stack which have any nearby sources, we find that the remaining galaxies have a median flux that is identical, within the measurement errors, to the median flux inferred for the entire sample of galaxies. This indicates that any sources that may cluster around the UV-selected galaxies (if they exist) do not affect the stacking results. \section{Infrared Luminosities and Dust SEDs} \label{sec:luminosities} \subsection{Dust SED Templates and Extrapolation to the Radio} To infer the total infrared luminosities $L(8\,-\,1000\,\mu{\rm m})\equiv L_{\rm IR}$ of the $z\sim 2$ galaxies, we employed several publicly available dust SED templates including those of \citet{chary01} (``CE01''), \citet{dale02} (``DH02''), and \citet{rieke09} (``Rieke+09''). We also include the median templates presented in \citet{elbaz11}. Specifically, these authors define an ``infrared main sequence'' of galaxies, where the ratio of $L_{\rm IR}$ to $L_{\rm 8}$ (IR8) is universal for most star-forming galaxies at $z\la 2.0$, having a value of $L_{\rm IR}/L_{\rm 8} = 4.9^{+2.9}_{-2.2}$. The ``infrared starbursts'' are considered to be those galaxies with IR8 ratios in excess of $\approx 15$. \citet{elbaz11} demonstrate that variations in IR8 may be due primarily to differences in the infrared luminosity surface density of galaxies, such that main sequence galaxies are characterized by star formation that is more extended than that present in starbursts (see Section~\ref{sec:discussion}). We fit the median IR SED of these main sequence (``Elbaz+11-MS'') and starburst galaxies (``Elbaz+11-SB'') to the stacked fluxes. The \citet{reddy10a} calibration between rest-frame $8$\,$\mu$m emission and infrared luminosity is considered as well. In this case, $L_{\rm 8}$ is computed by k-correcting the $24$\,$\mu$m flux using the average of 12 local galaxy mid-IR SEDs specified in \citet{reddy06a}.
The correlation between $L_{\rm 8}$ and H$\alpha$ luminosity is then used to infer the $L_{\rm 8}$--$L_{\rm IR}$ conversion, based on using H$\alpha$ as an independent probe of the star formation \citep{reddy10a}. Finally, we make use of the radio-infrared correlation to provide an independent estimate of $L_{\rm IR}$, assuming that this relation does not evolve with redshift, as is consistent with the evidence, at least at redshifts $z\la 3$ (e.g., \citealt{appleton04, ivison10, sargent10, bourne11, mao11}). For the subsequent analysis, we use the radio-infrared correlation published in \citet{bell03}. The sample of \citet{yun01} is roughly a factor of 10 larger than the one analyzed by \citet{bell03}. However, \citet{yun01} calibrate the $60$\,$\mu$m luminosity, $L_{\rm 60}$, with radio luminosity, and the former requires some assumption about the relation between $L_{\rm 60}$ and $L_{\rm IR}$. To make the minimum number of assumptions, we therefore adopted the \citet{bell03} calibration which directly relates the total infrared luminosities to the specific luminosity at $1.4$\,GHz. The CE01 and Rieke+09 models are parameterized such that we can relate any template to a given total infrared luminosity. The Elbaz+11-MS/SB templates are normalized to an infrared luminosity of $L_{\rm IR}=10^{11}$\,L$_{\odot}$. The DH02 models are presented as a function of radiation field intensity and, following the literature (e.g., \citealt{papovich07}), we associate infrared luminosities with each of the DH02 templates assuming the empirical calibration of \citet{marcillac06}. The CE01 and Rieke+09 models also include an extrapolation to radio wavelengths by assuming some version of the local radio-far-infrared correlation \citep{condon92, yun01, bell03}. 
For a consistent treatment, we have adjusted the models (or added to them, in the case of DH02 and Elbaz+11-MS/SB) for a radio spectrum with index $\gamma=-0.8$ \citep{condon92}, normalized to have a specific luminosity at $1.4$\,GHz corresponding to the $L_{\rm IR}$ for that template, assuming the \citet{bell03} calibration. \subsection{Fitting Procedure} Because the far-infrared peak of the dust emission in the SED templates shifts to shorter wavelengths at higher luminosities, we can either (1) fit these templates directly to the observed infrared and radio fluxes (``luminosity-matched'' fitting), or (2) find the template that best matches the infrared colors and then normalize this template to the observed fluxes (``color-matched'' fitting). The color-matched fitting yields larger uncertainties in $L_{\rm IR}$ because the color errors are larger than those for individual flux measurements. We have adopted both methods for comparison purposes. The Elbaz+11-MS/SB templates are based on the composite IR spectral energy distribution of the ``main sequence'' and ``starburst'' galaxy samples of \citet{elbaz11}, and are normalized to $L_{\rm IR}=10^{11}$\,L$_{\odot}$. We assume the same spectral shape (i.e., no color dependence) when fitting these templates to the observed fluxes. Further, in finding the best-fit template, we include results where all fluxes have been weighted equally, as well as being weighted by their errors. The equal weighting is done to ensure that the higher S/N $24$\,$\mu$m data (and thus the smaller measurement uncertainties at $24$\,$\mu$m) do not unduly skew the template fits. \subsection{Infrared Luminosities} Table~\ref{tab:lircomp} lists the $L_{\rm IR}$ corresponding to the best-fit templates for different combinations of the stacked fluxes for Samples A through F, after taking into account the biases and dispersions in these fluxes (see discussion above and Table~\ref{tab:biaserr}). 
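The two weighting schemes used in the fits can be illustrated with the normalization step for a fixed-shape template. This is a sketch with invented template shapes and fluxes, not our actual fitting code; selecting among templates adds an outer loop over the grid that keeps the scale with the minimum $\chi^2$:

```python
# Invented model fluxes (mJy) for a fixed-shape template normalized to
# L_IR = 1e11 Lsun; keys are band centers (micron, or GHz for the radio).
template = {24: 0.011, 100: 0.60, 160: 1.1, 1.4: 0.009}

def best_scale(obs, err, tmpl, equal_weight=False):
    """Scale A minimizing chi^2 = sum_b w_b (f_b - A t_b)^2, which has the
    analytic solution A = sum(w f t) / sum(w t^2)."""
    num = den = 0.0
    for band, f in obs.items():
        w = 1.0 if equal_weight else 1.0 / err[band] ** 2
        num += w * f * tmpl[band]
        den += w * tmpl[band] ** 2
    return num / den

obs = {24: 0.024, 100: 1.3, 160: 2.4, 1.4: 0.020}   # "stacked" fluxes (mJy)
err = {24: 0.002, 100: 0.3, 160: 0.5, 1.4: 0.004}   # 1-sigma errors (mJy)

L_IR_weighted = 1e11 * best_scale(obs, err, template)        # error-weighted
L_IR_equal    = 1e11 * best_scale(obs, err, template, True)  # equal weights
```

With error weighting, the high S/N $24$\,$\mu$m point dominates the sums; equal weighting lets the far-infrared points pull the normalization, which is the motivation for reporting both.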
Fitting for different combinations of the infrared and radio data allows us to determine if the inclusion of any of the stacked fluxes results in significant changes in the inferred infrared luminosities. Also included in this table are the results from using the \citet{reddy10a} calibration between $L_{\rm 8}$ and $L_{\rm IR}$, and the $L_{\rm IR}$ corresponding to the radio luminosity, assuming the \citet{bell03} radio-IR correlation. \begin{deluxetable*}{llccccrc} \tabletypesize{\scriptsize} \tablewidth{0pc} \tablecaption{Comparison of Infrared Luminosities ($L_{\rm IR}$)~\tablenotemark{a}} \tablehead{ \colhead{Template} & \colhead{$\lambda$, $\nu$~\tablenotemark{b}} & \colhead{{\bf Sample A}} & \colhead{{\bf Sample B}} & \colhead{{\bf Sample C}} & \colhead{{\bf Sample D}} & \colhead{{\bf Sample E}} & \colhead{{\bf Sample F}}} \startdata R10a~\tablenotemark{c} & 24 & $2.0\pm0.3$ & $1.8\pm0.1$ & $1.2\pm0.2$ & $2.7\pm0.2$ & $15.7\pm0.5$ & $0.3\pm0.1$\\ \\ Bell03~\tablenotemark{d} & 1.4 & $2.2\pm0.5$ & $2.2\pm0.4$ & $1.7\pm0.6$ & $2.9\pm0.8$ & $7.9\pm1.7$ & 3\,$\sigma$: $<3.0$ \\ \\ Elbaz+11-MS Lum~\tablenotemark{e} & 24 & $1.9\pm0.2$ & $1.6\pm0.1$ & $1.2\pm0.1$ & $2.2\pm0.2$ & $9.0\pm0.3$ & $0.5\pm0.1$ \\ ... & 100 & $3.0\pm0.4$ & $2.9\pm0.5$ & $2.0\pm0.6$ & $3.8\pm0.7$ & $12.7\pm1.4$ & 3\,$\sigma$: $<2.8$ \\ ... & 160 & $3.0\pm0.4$ & $2.4\pm0.6$ & $2.1\pm0.6$ & $2.9\pm0.9$ & $11.1\pm1.7$ & $1.7\pm1.1$ \\ ... & 100, 160 & $3.0\pm0.3$ & $2.7\pm0.4$ & $2.1\pm0.4$ & $3.5\pm0.5$ & $12.0\pm1.0$ & 3\,$\sigma$: $<2.8$ \\ ... & 24, 100, 160 & $2.3\pm0.3$ & $1.7\pm0.2$ & $1.3\pm0.2$ & $2.3\pm0.3$ & $9.2\pm0.4$ & 3\,$\sigma$: $<2.8$ \\ ...
& 24, 100, 160, 1.4 & $2.3\pm0.3$ & $1.7\pm0.2$ & $1.3\pm0.2$ & $2.3\pm0.3$ & $9.2\pm0.5$ & 3\,$\sigma$: $<2.8$ \\ Elbaz+11-MS Lum-Eq Weight~\tablenotemark{f} & 24, 100, 160, 1.4 & $3.0\pm0.2$ & $2.5\pm0.2$ & $2.1\pm0.2$ & $3.0\pm0.2$ & $11.3\pm0.3$ & 3\,$\sigma$: $<2.8$ \\ \\ Elbaz+11-SB Lum~\tablenotemark{e} & 24 & $3.7\pm0.5$ & $3.1\pm0.2$ & $2.3\pm0.3$ & $4.3\pm0.4$ & $17.7\pm0.5$ & $1.0\pm0.2$ \\ ... & 100 & $2.4\pm0.3$ & $2.3\pm0.4$ & $1.6\pm0.5$ & $3.0\pm0.5$ & $10.0\pm1.1$ & 3\,$\sigma$: $<2.2$ \\ ... & 160 & $2.3\pm0.3$ & $1.9\pm0.5$ & $1.7\pm0.5$ & $2.3\pm0.7$ & $8.6\pm1.3$ & $1.3\pm0.9$ \\ ... & 100, 160 & $2.4\pm0.2$ & $2.1\pm0.3$ & $1.6\pm0.3$ & $2.7\pm0.4$ & $9.4\pm0.9$ & 3\,$\sigma$: $<2.2$ \\ ... & 24, 100, 160 & $2.6\pm0.3$ & $2.7\pm0.3$ & $2.0\pm0.3$ & $3.6\pm0.4$ & $15.5\pm0.7$ & 3\,$\sigma$: $<2.2$ \\ ... & 24, 100, 160, 1.4 & $2.6\pm0.4$ & $2.6\pm0.3$ & $2.0\pm0.4$ & $3.5\pm0.5$ & $15.0\pm0.8$ & 3\,$\sigma$: $<2.2$ \\ Elbaz+11-SB Lum-Eq Weight~\tablenotemark{f} & 24, 100, 160, 1.4 & $2.3\pm0.2$ & $2.0\pm0.1$ & $1.7\pm0.2$ & $2.4\pm0.2$ & $8.8\pm0.2$ & 3\,$\sigma$: $<2.2$ \\ \\ CE01 Lum~\tablenotemark{e} & 24 & $3.4\pm0.4$ & $2.6\pm0.2$ & $1.7\pm0.2$ & $4.1\pm0.3$ & $41.3\pm1.2$ & $0.4\pm0.1$ \\ ... & 100 & $2.4\pm0.3$ & $2.3\pm0.4$ & $1.6\pm0.5$ & $3.1\pm0.6$ & $10.9\pm1.2$ & 3\,$\sigma$: $<2.3$ \\ ... & 160 & $2.2\pm0.3$ & $1.8\pm0.4$ & $1.5\pm0.4$ & $2.1\pm0.6$ & $7.9\pm1.2$ & $1.2\pm0.8$ \\ ... & 100, 160 & $2.3\pm0.3$ & $2.0\pm0.3$ & $1.6\pm0.3$ & $2.7\pm0.4$ & $9.4\pm0.9$ & 3\,$\sigma$: $<2.3$ \\ ... & 24, 100, 160 & $2.4\pm0.3$ & $2.3\pm0.3$ & $1.6\pm0.3$ & $3.3\pm0.4$ & $12.3\pm0.4$ & 3\,$\sigma$: $<2.3$ \\ ... 
& 24, 100, 160, 1.4 & $2.4\pm0.4$ & $2.3\pm0.3$ & $1.6\pm0.4$ & $3.2\pm0.4$ & $12.1\pm0.6$ & 3\,$\sigma$: $<2.3$ \\ CE01 Lum-Eq Weight~\tablenotemark{f} & 24, 100, 160, 1.4 & $2.2\pm0.2$ & $1.8\pm0.1$ & $1.5\pm0.2$ & $2.3\pm0.2$ & $8.2\pm0.2$ & 3\,$\sigma$: $<2.3$ \\ CE01 Col~\tablenotemark{g} & 100, 160 & $2.1\pm0.4$ & $1.8\pm0.5$ & $2.1\pm1.0$ & $1.7\pm0.6$ & $7.9\pm1.5$ & ... \\ ... & 24, 100, 160 & $2.3\pm0.6$ & $2.2\pm0.8$ & $1.6\pm0.9$ & $3.0\pm1.2$ & $10.4\pm2.3$ & ... \\ CE01 Col-Eq Weight~\tablenotemark{h} & 24, 100, 160, 1.4 & $2.4\pm0.7$ & $2.2\pm0.8$ & $1.6\pm0.9$ & $3.0\pm1.2$ & $12.1\pm2.7$ & ... \\ \\ DH02 Lum~\tablenotemark{e} & 24 & $3.0\pm0.4$ & $2.3\pm0.2$ & $1.7\pm0.2$ & $3.6\pm0.3$ & $21.5\pm0.6$ & $0.6\pm0.1$ \\ ... & 100 & $2.1\pm0.3$ & $2.0\pm0.4$ & $1.5\pm0.5$ & $2.7\pm0.5$ & $7.8\pm0.9$ & 3\,$\sigma$: $<2.0$ \\ ... & 160 & $2.2\pm0.3$ & $1.8\pm0.4$ & $1.6\pm0.4$ & $2.1\pm0.6$ & $7.4\pm1.2$ & $1.3\pm0.9$ \\ ... & 100, 160 & $2.1\pm0.2$ & $1.9\pm0.2$ & $1.5\pm0.3$ & $2.5\pm0.4$ & $7.7\pm0.6$ & 3\,$\sigma$: $<2.0$ \\ ... & 24, 100, 160 & $2.3\pm0.3$ & $2.2\pm0.2$ & $1.6\pm0.2$ & $3.0\pm0.4$ & $13.6\pm0.7$ & 3\,$\sigma$: $<2.0$ \\ ... & 24, 100, 160, 1.4 & $2.3\pm0.3$ & $2.2\pm0.3$ & $1.6\pm0.3$ & $3.0\pm0.5$ & $13.2\pm0.8$ & 3\,$\sigma$: $<2.0$ \\ DH02 Lum-Eq Weight~\tablenotemark{f} & 24, 100, 160, 1.4 & $2.2\pm0.2$ & $1.8\pm0.1$ & $1.6\pm0.2$ & $2.2\pm0.2$ & $7.4\pm0.2$ & 3\,$\sigma$: $<2.0$ \\ DH02 Col~\tablenotemark{g} & 100, 160 & $2.2\pm0.4$ & $1.5\pm0.5$ & $1.7\pm0.7$ & $1.7\pm0.6$ & $6.9\pm1.3$ & ... \\ ... & 24, 100, 160 & $2.4\pm0.7$ & $2.1\pm0.7$ & $1.5\pm0.8$ & $2.8\pm1.1$ & $10.5\pm2.4$ & ... \\ DH02 Col-Eq Weight~\tablenotemark{h} & 24, 100, 160, 1.4 & $2.3\pm0.6$ & $2.2\pm0.8$ & $1.6\pm0.9$ & $3.6\pm1.5$ & $9.5\pm2.2$ & ... \\ \\ Rieke+09 Lum~\tablenotemark{e} & 24 & $5.0\pm0.6$ & $3.6\pm0.3$ & $2.0\pm0.2$ & $6.1\pm0.5$ & $46.1\pm1.3$ & $0.5\pm0.1$ \\ ... 
& 100 & $2.5\pm0.3$ & $2.3\pm0.4$ & $1.8\pm0.6$ & $3.0\pm0.5$ & $8.2\pm0.9$ & 3\,$\sigma$: $<2.3$ \\ ... & 160 & $2.2\pm0.3$ & $1.9\pm0.4$ & $1.7\pm0.5$ & $2.1\pm0.6$ & $6.1\pm0.9$ & $1.4\pm1.0$ \\ ... & 100, 160 & $2.3\pm0.2$ & $2.1\pm0.2$ & $1.7\pm0.2$ & $2.5\pm0.3$ & $7.1\pm0.6$ & 3\,$\sigma$: $<2.3$ \\ ... & 24, 100, 160 & $2.3\pm0.2$ & $2.3\pm0.3$ & $1.8\pm0.3$ & $3.1\pm0.4$ & $11.2\pm0.7$ & 3\,$\sigma$: $<2.3$ \\ ... & 24, 100, 160, 1.4 & $2.3\pm0.3$ & $2.3\pm0.3$ & $1.8\pm0.4$ & $3.0\pm0.5$ & $11.0\pm0.9$ & 3\,$\sigma$: $<2.3$ \\ Rieke+09 Lum-Eq Weight~\tablenotemark{f} & 24, 100, 160, 1.4 & $2.2\pm0.2$ & $1.9\pm0.1$ & $1.7\pm0.2$ & $2.2\pm0.2$ & $6.3\pm0.2$ & 3\,$\sigma$: $<2.3$ \\ Rieke+09 Col~\tablenotemark{g} & 100, 160 & $3.1\pm0.6$ & $2.8\pm0.8$ & $2.4\pm1.3$ & $3.6\pm1.3$ & $12.5\pm2.7$ & ... \\ ... & 24, 100, 160 & $2.7\pm0.7$ & $2.4\pm0.8$ & $1.8\pm1.0$ & $3.2\pm1.3$ & $11.9\pm2.7$ & ... \\ Rieke+09 Col-Eq Weight~\tablenotemark{h} & 24, 100, 160, 1.4 & $2.6\pm0.7$ & $2.3\pm0.8$ & $1.8\pm0.9$ & $3.1\pm1.2$ & $12.1\pm2.7$ & ... \\ \enddata \tablenotetext{a}{\,Luminosities are in units of $10^{11}$\,L$_{\odot}$. 
Errors in luminosities are derived from the uncertainties in the stacked fluxes (see text).} \tablenotetext{b}{Wavelengths (in micron) or frequency (in GHz) used to compute the best-fit $L_{\rm IR}$.} \tablenotetext{c}{$L_{\rm IR}$ determined from the calibration of $24$\,$\mu$m with $L_{\rm IR}$ of \citet{reddy10a}, which is based upon the correlation between $8$\,$\mu$m and H$\alpha$ luminosity for $L^{\ast}$ galaxies at $z\sim 2$.} \tablenotetext{d}{$L_{\rm IR}$ determined from the radio-IR correlation of \citet{bell03}, and assuming a radio spectral slope of $\gamma=-0.8$.} \tablenotetext{e}{$L_{\rm IR}$ for the \citet{elbaz11} main sequence/starburst, \citet{chary01}, \citet{dale02}, or \citet{rieke09} template that best matches the observed fluxes (weighted by the flux errors), irrespective of the infrared colors.} \tablenotetext{f}{$L_{\rm IR}$ for the \citet{elbaz11} main sequence/starburst, \citet{chary01}, \citet{dale02}, or \citet{rieke09} template that best matches the observed fluxes (including 1.4\,GHz), with all fluxes given equal weight in the fitting, irrespective of the infrared colors.} \tablenotetext{g}{$L_{\rm IR}$ for the \citet{chary01}, \citet{dale02}, or \citet{rieke09} template that best fits the observed infrared colors and is normalized to match the observed fluxes, with colors and fluxes weighted by their errors.} \tablenotetext{h}{$L_{\rm IR}$ for the \citet{chary01}, \citet{dale02}, or \citet{rieke09} template that best fits the observed infrared colors and is normalized to match the observed fluxes (including 1.4\,GHz), with colors and fluxes weighted equally.} \label{tab:lircomp} \end{deluxetable*} The systematic uncertainties in $\log(L_{\rm IR})$ for the different combinations of templates, fluxes, and fitting methods for each of the subsamples are typically within $0.1$\,dex. 
We note that the $L_{\rm IR}$ derived from $160$\,$\mu$m is not systematically larger than that inferred from $100$\,$\mu$m, counter to what one might expect if the larger $160$\,$\mu$m beam included more sources clustered around the UV-selected galaxies that contribute to the measured far-infrared flux. The stacked radio data have significantly higher resolution ($\sim 1\farcs7$ at $1.4$\,GHz versus $\sim 11\arcsec$ at $160$\,$\mu$m) and yield a radio flux that implies an $L_{\rm IR}$ that is also no smaller than the value inferred from the $160$\,$\mu$m data. Based on this evidence, we conclude that the $160$\,$\mu$m emission from sources proximate to the UV-selected galaxies is negligible compared to the emission from the UV-selected galaxies themselves. The notable outliers listed in Table~\ref{tab:lircomp} are the $L_{\rm IR}$ determined from the $24$\,$\mu$m flux density alone, which tend to overpredict $L_{\rm IR}$, particularly for the highest luminosity subsamples D and E, relative to cases where we include the $100$ and $160$\,$\mu$m data in most of the template fits (Figure~\ref{fig:lirvlir}). Conversely, with the Elbaz+11-MS template, $24$\,$\mu$m-only determinations result in systematically lower $L_{\rm IR}$. These differences are due to the intrinsic variation in the ratio of $L_{\rm 8}$ to $L_{\rm IR}$ present in the templates relative to the observed SED. The systematic overestimation of $L_{\rm IR}$ based on $L_{\rm 8}$ alone has been noted before for ultraluminous infrared galaxies studied with {\em Spitzer} \citep{papovich07, daddi07a, papovich09, murphy11, magnelli11} and {\em Herschel} \citep{nordon10, elbaz10}. For the samples considered here, the same $24$\,$\mu$m bias is apparent for all templates (except for Elbaz+11-MS) even at LIRG luminosities, where the difference between the best-fit $L_{\rm IR}$ and that determined from $24$\,$\mu$m alone is similar to the $1$\,$\sigma$ uncertainties in $L_{\rm IR}$.
\begin{figure} \plotone{f4.eps} \caption{Comparison between the $L_{\rm IR}$ computed from the $24$, $100$, $160$\,$\mu$m, and $1.4$\,GHz equally-weighted fluxes, and the $L_{\rm IR}$ computed from the $24$\,$\mu$m data only. The heavy black points denote the $L_{\rm IR}$ computed from the \citet{reddy10a} calibration versus the $L_{\rm IR}$ obtained from the best-fit DH02 template that includes all the data ($24$, $100$, $160$\,$\mu$m, and $1.4$\,GHz), equally-weighted. Error bars reflect the uncertainty in the $L_{\rm IR}$ inferred from the stacked measurements, except for the high luminosity subsample (Sample E), where the errors include the intrinsic dispersion in $24$\,$\mu$m fluxes of objects that contribute to the sample. For clarity, error bars are not shown for the Elbaz+11-MS/SB points.} \label{fig:lirvlir} \end{figure} Using {\em Spitzer} and {\em Herschel} data for galaxies at redshifts $z\la 2.0$, \citet{elbaz11} investigate the physical reasons for the overestimation of $L_{\rm IR}$ based on rest-frame $L_{\rm 8}$ alone. These authors point out that the SED templates used to infer $L_{\rm IR}$ are calibrated to match local ULIRGs, which are starbursts with compact, high surface density star formation. This contrasts with ULIRGs at high redshift ($z\ga 2.0$) which have more extended star formation occurring over longer timescales. These differences in star formation surface densities and timescales lead to noticeable differences in the IR SEDs because of the variations in the spatial distribution of dust with respect to the massive stars that are heating this dust. In Section~\ref{sec:discussion}, we discuss the $L_{\rm IR}/L_{\rm 8}$ ratios found here and place them in the context of the ratios found for other star-forming galaxies with similar luminosities but at lower redshifts.
We conclude by noting that the \citet{reddy10a} calibration, which is based on the correlation between $L_{\rm 8}$ and dust-corrected H$\alpha$ luminosity at $z\sim 2$, predicts $L_{\rm IR}$ for LIRGs that are similar within the errors to those computed using the combined {\em Spitzer}, {\em Herschel}, and VLA data. Finally, the infrared luminosities based on the radio data alone are in excellent agreement with the infrared luminosities inferred from fitting the dust templates to the observed $24$, $100$, and $160$\,$\mu$m fluxes. \subsection{Comparison of Dust SEDs} Figure~\ref{fig:sedcomp} compares the dust SEDs found for the luminosity-matched fitting to the $24$, $100$, and $160$\,$\mu$m, and 1.4\,GHz data, weighted by their errors. The dust SEDs for the different models are broadly consistent with each other; one difference is in the stronger silicate absorption at $9$\,$\mu$m and additional aromatic features longward of $10$\,$\mu$m in the Rieke+09 model fit. The Rieke+09 templates are calibrated using {\em Spitzer}/IRS spectra, rather than photometry, between $5$ and $36$\,$\mu$m, and will understandably include mid-IR spectral features that are not present in the other models. \begin{figure} \plotone{f5.eps} \caption{Comparison of the \citet{chary01}, \citet{dale02}, \citet{rieke09}, and \citet{elbaz11} template fits to the observed $24$, $100$, and $160$\,$\mu$m and $1.4$\,GHz measurements, derived by scaling the template that best matches the observed fluxes, for $L^{\ast}$ galaxies at $z\sim 2$. The total infrared luminosities for the three templates are identical within the uncertainties (Table~\ref{tab:lircomp}).} \label{fig:sedcomp} \end{figure} The most notable difference between the templates can be seen in Figure~\ref{fig:sedcomp}: the Elbaz+11 and DH02 templates exhibit a broader range of dust temperatures with a colder dust component, relative to CE01 and Rieke+09. 
Resolving the full shape of the SED, and the average dust temperatures, will require larger stacked samples and deeper data in the submillimeter and millimeter regime. In any case, while some obvious differences remain between the model templates given the data at our disposal, the integrals of these SEDs, namely the total $L_{\rm IR}$, are essentially identical within the uncertainties (Table~\ref{tab:lircomp}). For the subsequent analysis, we assume the $L_{\rm IR}$ determined from the DH02 model that best fits the error-weighted fluxes at 1.4\,GHz, $24$, $100$, and $160$\,$\mu$m. Assuming the $L_{\rm IR}$ derived using any of the other models does not affect our conclusions. \begin{figure*} \plotone{f6.eps} \caption{Best-fit stellar population (CB08) and dust (DH02) SEDs for typical $L^{\ast}$ galaxies at $z\sim 2$. Included are $U_{\rm n}G{\cal R}$+$JK_{\rm s}$, {\em Spitzer}/IRAC $3.6\,-\,8.0$\,$\mu$m, {\em Spitzer} MIPS $24$\,$\mu$m, {\em Herschel} PACS $100$ and $160$\,$\mu$m, and VLA 1.4\,GHz stacked measurements.} \label{fig:avesed} \end{figure*} \subsection{The Dust SED of Typical Star-Forming Galaxies at $z\sim 2$} The average stellar population and dust SEDs for $L^{\ast}_{\rm UV}$ galaxies at $z\sim 2$ are shown in Figure~\ref{fig:avesed}. The optical photometry indicates that an $L^{\ast}_{\rm UV}$ galaxy at $z\sim 2$ has a UV luminosity of $L_{\rm UV} \simeq 3.1\times10^{10}$\,L$_{\odot}$ and the dust SED indicates a total infrared luminosity of $L_{\rm IR} \simeq 2.2\times 10^{11}$\,L$_{\odot}$. The {\em Herschel} and VLA data confirm the previous finding that UV-selected galaxies at $z\sim 2$ with ${\cal R}<25.5$ are luminous infrared galaxies (LIRGs; \citealt{reddy05a, reddy06a, reddy10a, adel00}); in particular, the median $L_{\rm IR}$ found here is virtually identical to that found by \citet{reddy10a} based on an analysis of the $24$\,$\mu$m, H$\alpha$, and UV emission for UV-selected galaxies at $z\sim 2$. 
We also point out that the median value of $L_{\rm IR}$ for $L^{\ast}_{\rm UV}$ galaxies is similar (to within a factor of $\approx 3$) to the value of $L^{\ast}_{\rm IR}$ deduced from direct {\em Spitzer} determinations of the IR LF at $z\sim 2$ \citep{magnelli11}. The necessity of stacking the UV-selected galaxies, even for GOODS-depth {\em Herschel} PACS data, is illustrated in Figure~\ref{fig:detlims}. Typical $L^{\ast}$ galaxies at $z\sim 2$ are easily detected at wavelengths blueward of observed $24$\,$\mu$m given the depths of the data considered here. Redward of this point, however, the dust emission is not sufficient for directly detecting these galaxies; thus, stacking in the infrared and radio bands is required. Fortunately, the {\em Herschel} PACS sensitivity and resolution are such that we can for the first time significantly detect the average thermal emission of non-lensed $L^{\ast}$ galaxies at $z\sim 2$. \begin{figure} \plotone{f7.eps} \caption{Detection limits ($3$\,$\sigma$) for the ground-based optical ($U_{\rm n}G{\cal R}$), ground-based near-IR ($JK_{\rm s}$), {\em Spitzer}/IRAC ($3.6\,-\,8.0$\,$\mu$m), {\em Spitzer}/MIPS ($24$\,$\mu$m), {\em Herschel}/PACS (100, 160\,$\mu$m), and VLA 1.4\,GHz data in the GOODS-North field, relative to the SED of an $L^{\ast}$ galaxy at $z\sim 2$.} \label{fig:detlims} \end{figure} \subsection{Variation of $L_{\rm IR}$ with UV Slope and Bolometric Luminosity} The stacks for the different subsamples indicate that galaxies with bluer UV spectral slopes are on average less infrared luminous than those with red UV spectral slopes (Table~\ref{tab:lircomp}). This systematic effect is approximately equal in magnitude to the uncertainties in the stacked flux measurements.
Not surprisingly, those galaxies that are inferred to have $L_{\rm bol} \ge 10^{12}$\,L$_{\odot}$ based on the MIPS $24$\,$\mu$m data (Sample E; Table~\ref{tab:lircomp}) have median stacked fluxes at $100$ and $160$\,$\mu$m that are a factor of $3-4$ times larger than the corresponding fluxes for the $L^{\ast}$ sample. The median infrared luminosity for Sample E is not as large as that computed from $24$\,$\mu$m alone and hence not as large as the value of $L_{\rm bol}$ used to construct this subsample (e.g., Figure~\ref{fig:lirvlir}). However, the best-fit template to the mid-IR, IR, and radio fluxes indicates that galaxies in Sample E are still more infrared luminous than galaxies in other subsamples. These galaxies correspond to low-luminosity ULIRGs or higher luminosity LIRGs. In the next section, we compare the infrared and UV luminosities for each of the subsamples of UV-selected galaxies. \section{Discussion} \label{sec:discussion} In the following, we discuss our results on $L^{\ast}$ galaxies at $z\sim 2$ in the context of the infrared properties of star-forming galaxies with similar luminosities at lower redshifts. We then discuss the combined UV and IR luminosity measurements for $z\sim 2$ galaxies and the implication for their average dust attenuation. We also compare the measured dust attenuation with that inferred from the local correlation between dustiness and UV slope. Finally, we use the {\em Herschel} data to determine the median bolometric luminosities and star formation rates of galaxies in our sample, the correlation between bolometric luminosity and dust attenuation, and the variation in dust attenuation with UV luminosity. \subsection{Ratio of Infrared to Mid-Infrared Luminosity} For a consistent comparison with \citet{elbaz11}, we have recomputed $L_{\rm 8}$ using the mid-IR SED of M82 to {\em k}-correct the $24$\,$\mu$m flux. 
The implied {\em k}-corrections are not substantially different from those obtained from the average mid-IR SED of the 12 local star-forming galaxies listed in \citet{reddy06a}. Adopting the CE01 luminosity-matched, error-weighted value of $L_{\rm IR}$, we compute IR8\,$= 7.7\pm1.6$, $8.9\pm1.3$, $8.3\pm2.4$, $8.9\pm1.4$, and $8.4\pm0.5$, for Samples A, B, C, D, and E, respectively. Assuming the $L_{\rm IR}$ computed from the $100$ and $160$\,$\mu$m data only results in ratios that are not significantly different from the ones quoted above. These values imply that the galaxies in our sample predominantly lie on the boundary between the ratios found for infrared main sequence galaxies (IR8$=4.9^{+2.9}_{-2.2}$), and those found for starburst galaxies which have an exponential tail distribution extending to IR8\,$=15\,-\,20$. The relatively high IR8 ratios may also explain why the typically-adopted templates (e.g., CE01) fail less severely for galaxies in our sample than for more luminous ULIRGs at $z\sim 2$ when inferring $L_{\rm IR}$ from $L_{\rm 8}$ alone (e.g., \citealt{reddy06a, reddy10a}). A detailed comparison between the morphologies of the $z\sim 2$ galaxies studied here and local star-forming galaxies is beyond the scope of this paper. However, we can still make some general inferences regarding the degree of ``compactness'' of the IR emission in these galaxies and how it may affect their IR8 ratios. We compute the IR luminosity surface density following \citet{elbaz11}: \begin{eqnarray} \Sigma_{\rm IR} \equiv \frac{L_{\rm IR}/2}{\pi r^2_{\rm IR}}. \end{eqnarray} The typical UV half-light radius of galaxies in our sample is $r_{\rm UV}\simeq 2$\,kpc. If we assume that the IR half-light radius is roughly half this value, as suggested by \citet{elbaz11}, then $r_{\rm IR}\approx 1$\,kpc, implying $\Sigma_{\rm IR} \approx 3\times 10^{10}$\,L$_\odot$\,kpc$^{-2}$.
On the other hand, if $r_{\rm IR} \approx r_{\rm UV}$ as might be expected for galaxies at high redshift where the UV emission is dominated by OB stars, then $\Sigma_{\rm IR} \approx 8\times 10^{9}$\,L$_\odot$\,kpc$^{-2}$. Even this lower value is a factor of 4 larger than the $\Sigma_{\rm IR} \la 2\times 10^{9}$\,L$_\odot$\,kpc$^{-2}$ typical of the infrared main sequence galaxies with extended star formation in the \citet{elbaz11} sample. The relevance for the present discussion is that while $L^{\ast}$ galaxies at $z\sim 2.3$ appear to have IR8 ratios that are a factor of $\approx 2$ larger than for main sequence galaxies at lower redshifts, they also exhibit $\Sigma_{\rm IR}$ that are larger than those found for main sequence galaxies, and hence are more compact for their IR luminosity. Formally, galaxies in our sample would lie on the ``boundary'' between IR main sequence and IR starburst galaxies. These larger IR8 ratios result in the over-prediction of $L_{\rm IR}$ based on $L_{\rm 8}$ alone when using the standard templates, as noted in Section~\ref{sec:luminosities}. One possibility is that the IR8 ratio for typical star-forming ($L^{\ast}$) galaxies may evolve with redshift, transitioning from IR8\,$\simeq 4.9$ at $z\la 2.0$ to IR8\,$\simeq 8$ at $z\sim 2.3$. This effect is likely due to the smaller sizes of high redshift galaxies for their IR luminosities relative to lower redshift galaxies. Finally, it is worth noting that the {\em star formation rate} surface density appears to be roughly constant for $L^{\ast}$ galaxies at higher redshifts ($z\sim 4\,-\,7$; $\Sigma_{\rm SFR} \simeq 1.9$\,M$_\odot$\,yr$^{-1}$\,kpc$^{-2}$; e.g., \citealt{oesch10}), suggesting that there may be a similar commonality in the IR8 ratios (if one could measure them) for typical star-forming galaxies at very high redshift, just as is observed for main sequence galaxies at $z\la 2.0$ \citep{elbaz11}. 
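The surface-density estimates above follow directly from the equation for $\Sigma_{\rm IR}$; a quick numerical check, using $L_{\rm IR}=2.2\times 10^{11}$\,L$_\odot$ for the $L^{\ast}$ stack and the two bracketing half-light radii, reproduces the quoted values:

```python
import math

def sigma_IR(L_IR_Lsun, r_IR_kpc):
    """IR luminosity surface density (Lsun kpc^-2): half the IR luminosity
    is emitted within the half-light radius r_IR."""
    return (L_IR_Lsun / 2.0) / (math.pi * r_IR_kpc ** 2)

L_IR = 2.2e11                      # Lsun, typical L* stack (see above)
compact  = sigma_IR(L_IR, 1.0)     # r_IR ~ r_UV/2 = 1 kpc -> ~3e10
extended = sigma_IR(L_IR, 2.0)     # r_IR ~ r_UV   = 2 kpc -> ~8e9
# Even the extended case exceeds the ~2e9 Lsun/kpc^2 typical of the
# Elbaz et al. (2011) main sequence galaxies by a factor of ~4.
```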
\subsection{Dust Obscuration of Typical Star-Forming Galaxies at $z\sim 2$} \subsubsection{Definitions Relevant to Dust Obscuration} Before proceeding, it is useful to define several terms that have been typically used interchangeably in the literature. First, we define ``dust obscuration'', or attenuation, as $L_{\rm IR}/L_{\rm UV}$. Note that this ratio is not equivalent to the ratio of obscured to unobscured star formation rate, SFR$_{\rm IR}$/SFR$_{\rm UV}$, given the difference in scaling required to convert the UV and IR luminosities to star formation rates. We also define the ``dust correction factor'' needed to recover the total star formation rate from that computed based on the unobscured UV luminosity as (SFR$_{\rm IR}$+SFR$_{\rm UV}$)/SFR$_{\rm UV}$ $\equiv 1 +$ SFR$_{\rm IR}$/SFR$_{\rm UV}$. For the $L^{\ast}$ sample, the median dust obscuration is $L_{\rm IR}/L_{\rm UV} = 7.1\pm1.1$, the ratio of obscured to unobscured SFR is SFR$_{\rm IR}$/SFR$_{\rm UV} = 4.2\pm0.6$, and the dust correction factor is $5.2\pm 0.6$. The $L_{\rm UV}$, $L_{\rm IR}$, ratio of obscured to unobscured SFR, and total SFR for each subsample are listed in Table~\ref{tab:lums}. For the conversion to SFR, we assume a \citet{salpeter55} initial mass function with limits from 0.1 to 100\,M$_\odot$ and the \citet{kennicutt98} conversions between UV/IR luminosity and SFR. The results indicate that roughly $80\%$ of the star formation is obscured for $L^{\ast}$ galaxies at $z\sim 2$.
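The quoted ratios can be reproduced directly from the \citet{kennicutt98} calibrations. In the check below, the effective UV wavelength ($1600$\,\AA) and the adopted value of the solar luminosity are our assumptions, not values taken from the text; the input luminosities are those of Sample A in Table~\ref{tab:lums}:

```python
# Kennicutt (1998) calibrations (Salpeter IMF):
#   SFR_UV = 1.4e-28 * L_nu [erg/s/Hz],  SFR_IR = 4.5e-44 * L_IR [erg/s]
L_SUN  = 3.846e33                    # erg/s (assumed value)
NU_UV  = 2.998e18 / 1600.0           # Hz at an assumed 1600 Angstroms

L_UV = 0.32e11 * L_SUN               # Sample A: nu*L_nu in the UV
L_IR = 2.3e11 * L_SUN                # Sample A: DH02-based L_IR

sfr_uv = 1.4e-28 * (L_UV / NU_UV)    # ~9 Msun/yr (unobscured)
sfr_ir = 4.5e-44 * L_IR              # ~40 Msun/yr (obscured)
ratio  = sfr_ir / sfr_uv             # ~4.3, cf. the quoted 4.2 +/- 0.6
corr   = 1.0 + ratio                 # dust correction factor ~5.3
total  = sfr_uv + sfr_ir             # ~49 Msun/yr, cf. the tabulated value
frac_obscured = sfr_ir / total       # ~0.8, i.e., ~80% obscured
```

Note that the SFR ratio exceeds $L_{\rm IR}/L_{\rm UV}$ divided by any single constant only because the UV and IR calibrations scale the luminosities differently, which is the distinction drawn in the definitions above.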
\begin{deluxetable*}{lcrcc} \tabletypesize{\scriptsize} \tablewidth{0pc} \tablecaption{Properties of the Stacks IV: Derived Quantities} \tablehead{ \colhead{} & \colhead{$L_{\rm UV}$~\tablenotemark{a}} & \colhead{$L_{\rm IR}$~\tablenotemark{b}} & \colhead{} & \colhead{SFR(UV)+SFR(IR)~\tablenotemark{d}} \\ \colhead{Sample} & \colhead{($10^{11}$\,L$_\odot$)} & \colhead{($10^{11}$\,L$_\odot$)} & \colhead{1+SFR(IR)/SFR(UV)~\tablenotemark{c}\tablenotemark{d}} & \colhead{(M$_\odot$\,yr$^{-1}$)}} \startdata {\bf A.} & $0.32\pm0.02$ & $2.3\pm0.3$ & $5.3\pm0.6$ & $49\pm6$ \\ {\bf B.} & $0.31\pm0.02$ & $2.2\pm0.3$ & $5.2\pm0.6$ & $47\pm6$ \\ {\bf C.} & $0.33\pm0.03$ & $1.6\pm0.3$ & $3.9\pm0.6$ & $37\pm6$ \\ {\bf D.} & $0.28\pm0.02$ & $3.0\pm0.5$ & $7.2\pm1.1$ & $60\pm9$ \\ {\bf E.} & $0.35\pm0.07$ & $13.2\pm0.8$ & $20.9\pm3.2$ & $238\pm 15.5$ \\ {\bf F.} & $0.38\pm0.05$ & 3\,$\sigma$: $<2.0$ & 3\,$\sigma$: $<2.4$ & 3\,$\sigma$: $<58$ \\ \enddata \tablenotetext{a}{Mean and error in mean of UV luminosity in units of $10^{11}$\,L$_\odot$.} \tablenotetext{b}{Infrared luminosity, in units of $10^{11}$\,L$_\odot$, derived from color-matching and normalizing the \citet{dale02} models to the observed fluxes. For Sample F, we assume the upper limit in $L_{\rm IR}$ implied by the observed fluxes at $24$ and $160$\,$\mu$m and the upper limit at $100$\,$\mu$m and $1.4$\,GHz.} \tablenotetext{c}{Dust correction factor required to recover the total SFR from the UV-determined SFR.} \tablenotetext{d}{Median star formation rates in M$_\odot$\,yr$^{-1}$ assuming a \citet{salpeter55} IMF from $0.1$ to $100$\,M$_\odot$ and the \citet{kennicutt98} relations between UV/IR luminosity and star formation rate. For the ``young'' subsample (Sample F), we multiply the UV SFR determined from the \citet{kennicutt98} relation by a factor of 2. 
This is done to account for the fact that the mix of O and B stars contributing to the UV continuum emission has not equilibrated for ages $\la 100$\,Myr (assuming a constant star formation); thus, the \citet{kennicutt98} conversion between UV luminosity and SFR will underpredict the total SFR for such ``young'' galaxies.} \label{tab:lums} \end{deluxetable*} The dust obscuration varies between $L_{\rm IR}/L_{\rm UV} = 4.8\pm1.0$ for the subsample with blue UV slopes (Sample C) and $L_{\rm IR}/L_{\rm UV}=10.7\pm1.9$ for the subsample with red UV slopes (Sample D), and is as high as $L_{\rm IR}/L_{\rm UV} \approx 37.7\pm7.9$ for the most bolometrically-luminous galaxies (Sample E). These observations imply a trend in dustiness with both UV slope and bolometric luminosity. \subsubsection{Validity of the UV Attenuation Curve for $L^{\ast}$ Galaxies at $z\sim 2$} In particular, we show the dust obscuration derived for these samples as a function of UV slope, $\beta$, in Figure~\ref{fig:ebmv100}. Up to luminosities of $L_{\rm IR} \approx 10^{12}$\,L$_{\odot}$, we find that galaxies with redder $\beta$ are on average dustier. Furthermore, the correlation between dustiness and UV slope is essentially identical to that found for local starburst galaxies \citep{meurer99, calzetti00}. This result has been found by several other investigations targeting moderately luminous galaxies and using a variety of star formation tracers at $z\sim 2$ (e.g., \citealt{reddy04, reddy06a, reddy10a, daddi07a, pannella09}) and $z\sim 3$ (e.g., \citealt{seibert02, nandra02, magdis10a, magdis10b}). Infrared selection, e.g., such as $24$\,$\mu$m selection, generally results in samples where the bulk of galaxies do not abide by the \citet{meurer99} or \citet{calzetti00} attenuation curves (e.g., \citealt{murphy11}). 
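For reference, the \citet{meurer99} local starburst relation plotted in Figure~\ref{fig:ebmv100} is commonly parameterized as $A_{1600} = 4.43 + 1.99\beta$; the slope value used below is purely illustrative and is not a measurement from our sample:

```python
def A_1600(beta):
    """Meurer et al. (1999) local starburst attenuation curve:
    UV attenuation (mag) at 1600 A as a function of UV slope beta."""
    return 4.43 + 1.99 * beta

# An illustrative galaxy with beta = -1.5 would have A_1600 ~ 1.4 mag,
# i.e., its emergent UV flux is suppressed by a factor of ~3.8:
a_mag  = A_1600(-1.5)
factor = 10 ** (0.4 * a_mag)
# The relation predicts zero attenuation at beta ~ -2.23, the intrinsic
# slope assumed for an unreddened starburst.
```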
As discussed in \citet{reddy06a} and \citet{reddy10a}, the correlation between UV slope and dust attenuation breaks down for more infrared luminous galaxies at $z\sim 2$, as well as younger galaxies with ages $\la 100$~Myr at the same redshifts, where the latter tend to follow a steeper attenuation curve (i.e., they are less reddened at a given UV slope than predicted by the \citealt{meurer99} relation). Systematic deviations from the local starburst attenuation curve have also been observed at lower redshift ($z\la 2$) based on {\em Herschel}/PACS and SPIRE data (e.g., \citealt{buat10, burgarella11}) and {\em Akari} data \citep{buat11}. \begin{figure} \plotone{f8.eps} \caption{Mean dust attenuation ($L_{\rm IR}/L_{\rm UV}$) versus UV slope ($\beta$) for different subsamples of $z\sim 2$ galaxies. Also shown are attenuation curves for the SMC and for local UV starbursts from \citet{meurer99}, and the $3$\,$\sigma$ upper limit and stacked $24$\,$\mu$m implied value ({\em cyan} point) of the dust attenuation for the youngest galaxies in our sample.} \label{fig:ebmv100} \end{figure} Much of the aforementioned deviation from the local starburst attenuation curve can be understood in the context of the range of bolometric luminosity probed by the different UV and IR selections. UV color selection is sensitive to galaxies with moderate ($L^{\ast}$) luminosities and lower dust extinction than those selected in the infrared. Because dust attenuation is a strong function of bolometric luminosity and the validity of the \citet{meurer99} relation is luminosity dependent \citep{meurer99, goldader02, reddy06a, reddy10a}, it is natural to expect departures from this relation for galaxies that may be selected via their infrared emission. 
Deviations may also be observed in infrared luminous galaxies that have large stellar masses; in this case, the UV continuum associated with the massive OB stars may be extinguished relative to the UV emission from less massive stars, resulting in a redder UV continuum slope for a given dust obscuration (e.g., \citealt{murphy11, buat10}). These results stress that one must take care in applying {\em any} starburst attenuation curve to high redshift galaxies, with the realization that such relations may apply in one regime but fail in another depending on the properties of the galaxies in one's sample. Here, we have shown that galaxies that lie around $L^{\ast}$ of the UV luminosity function have dust obscuration -- as measured from {\em Spitzer}, {\em Herschel}, and VLA data -- that correlates with their UV slopes, and that this correlation is similar to that observed for local starburst galaxies. The stacked {\em Herschel} data do not directly indicate the intrinsic dispersion in the relation between UV slope and dustiness. However, an indirect estimate of this scatter comes from an analysis of the $24$\,$\mu$m data. Specifically, 109 of 311 galaxies in the larger (and multiple field) sample of \citet{reddy10a} are detected at $24$\,$\mu$m. Using a survival analysis to take into account both detections and non-detections, \citet{reddy10a} found a dispersion of $\approx 0.40$\,dex between dust attenuation, $L_{\rm IR}/L_{\rm UV}$, and rest-UV slope, $\beta$. If the scatter in the $L_{\rm 8}$-to-$L_{\rm IR}$ ratio is similar to that found by \citet{elbaz11} for lower redshift ($z\la 2.0$) galaxies with $L_{\rm IR}\la 10^{12}$\,L$_{\odot}$ (having a $1$\,$\sigma$ dispersion of $\approx 0.1$\,dex), then the implied total dispersion in the relation between dustiness and UV slope is $\approx 0.45$\,dex.
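To put this logarithmic scatter on a linear scale, note that a dispersion quoted in dex converts to a multiplicative factor as $10^{\sigma}$; a minimal check:

```python
sigma_dex = 0.45            # total scatter in L_IR/L_UV at fixed UV slope
factor = 10 ** sigma_dex    # multiplicative scatter, roughly 2.8x
print(round(factor, 2))     # prints 2.82
```

That is, the stacked data constrain the obscuration at a given UV slope to within a factor of roughly three.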
The correspondence between $L_{\rm IR}/L_{\rm UV}$ and $\beta$ to within a factor of $\approx 3$ thus verifies the applicability of the \citet{meurer99} and \citet{calzetti00} attenuation curves for galaxies with $L_{\rm IR} \la 10^{12}$\,L$_{\odot}$ at $z\sim 2$. The UV attenuation curve is sensitive primarily to the geometry of dust and stars within galaxies, and/or variations in dust composition. Hence, the correspondence of the UV attenuation curves between local starbursts and $z\sim 2$ $L^{\ast}$ galaxies implies a remarkable similarity in the processes that give rise to the spatial distribution of dust and stars and the dust composition in galaxies over $\approx 10$\,billion years of cosmic history. \subsection{Comparison with Recent Studies} Using the {\em Herschel} data to directly probe the thermal dust emission, we have shown that the local correlation between UV slope and dust obscuration remains valid for typical star-forming galaxies at $z\sim 2$. In the following, we compare our results with several recent studies of the dust attenuation of high redshift galaxies. \subsubsection{Radio Emission from UV-selected Galaxies} \citet{carilli08} stack the radio emission of $z\sim 3$ LBGs in the COSMOS field and deduce that SFR$_{\rm radio}$/SFR$_{\rm UV} = 1.8\pm 0.4$, where SFR$_{\rm radio}$ is derived using the calibration of \citet{yun01}. This calibration is based on equating the local star formation rate density with the integral of the radio luminosity function, so in this case the radio SFR should represent a total SFR (but see below), including obscured and unobscured components. The factor of $1.8$ from the \citet{carilli08} study is significantly smaller than the factor of $\approx 5$ computed for our UV-selected sample at $z\sim 2$. \citet{carilli08} discuss several possibilities for the suppression of radio flux with respect to SFR at $z\sim 3$. For a consistent comparison, we assess their results using the same calibration used in this analysis.
In particular, we employ the \citet{bell03} correlation between total infrared luminosity ($L_{\rm IR}$) and specific luminosity at 1.4\,GHz. Doing so, the stacked median radio luminosity of LBGs in the COSMOS field, $L_{1.4} = 5.1\times 10^{29}$\,erg\,s$^{-1}$\,Hz$^{-1}$, corresponds to $L_{\rm IR} \approx 2.2\times 10^{11}$\,L$_{\odot}$. These values are essentially identical to those determined for our $L^{\ast}$ sample at $z\sim 2$: $L_{1.4} = (5.2\pm1.0)\times 10^{29}$\,erg\,s$^{-1}$\,Hz$^{-1}$ and $L_{\rm IR} = (2.2\pm0.3)\times 10^{11}$\,L$_{\odot}$ (Table~\ref{tab:lircomp}). The {\em obscured} SFR corresponding to the $L_{\rm IR}$ for the COSMOS LBGs, assuming the \citet{kennicutt98} relation, is $\approx 38$\,M$_{\odot}$\,yr$^{-1}$. Hence, the factor that we compute to recover the total star formation rate from the UV star formation for the \citet{carilli08} sample, assuming their value of the unobscured SFR of $17$\,M$_{\odot}$\,yr$^{-1}$, is $1+38/17 \approx 3.2$. This is close to a factor of two larger than the value of 1.8 given in \citet{carilli08}. This discrepancy results from the fact that while the \citet{yun01} correlation between radio luminosity and star formation rate gives a star formation rate that is in good agreement with that inferred from $L_{\rm IR}$ (SFR\,$\simeq 38$\,M$_{\odot}$\,yr$^{-1}$), it underestimates the total SFR (SFR$_{\rm IR}$+SFR$_{\rm UV}$) of $\simeq 55$\,M$_{\odot}$\,yr$^{-1}$. To illustrate this point, we plot in Figure~\ref{fig:sfrir} the relationship between {\em total} star formation rate and infrared luminosity, adopting the \citet{kennicutt98} conversions between UV/IR luminosity and star formation rate, for the sample of 392 $z\sim 2$ galaxies of \citet{reddy10a}, and for the local samples of \citet{bell03} and \citet{huang09}. 
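The dust-correction arithmetic quoted above is straightforward to reproduce. A short sketch (the only input beyond the quoted numbers is the \citet{kennicutt98} IR coefficient expressed in solar units, $\approx 1.73\times 10^{-10}$\,M$_{\odot}$\,yr$^{-1}$\,L$_{\odot}^{-1}$):

```python
L_IR = 2.2e11                 # stacked infrared luminosity (L_sun)
SFR_IR = 1.73e-10 * L_IR      # Kennicutt (1998) conversion -> obscured SFR
SFR_UV = 17.0                 # unobscured UV SFR of the COSMOS LBGs (Msun/yr)

dust_correction = 1.0 + SFR_IR / SFR_UV
print(round(SFR_IR), round(dust_correction, 1))  # prints 38 3.2
```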
The top axis in Figure~\ref{fig:sfrir} indicates the radio luminosity that corresponds to $L_{\rm IR}$ assuming the \citet{yun01} calibration, and the solid line shows the relationship between radio luminosity and star formation rate derived in that study. The \citet{yun01} calibration is valid if most of the star formation is obscured, as is the case for ULIRGs at both $z\sim 2$ and $z\sim 0$. However, a substantial fraction of the LIRGs in the $z\sim 2$ sample have a significant contribution from unobscured star formation, where the unobscured component is at least $50\%$ of the obscured star formation. The same is true for the COSMOS LBG sample of \citet{carilli08}. The ratio of obscured to unobscured star formation rate is a strong function of total star formation rate or bolometric luminosity (e.g., Figure~\ref{fig:sfrir}; \citealt{reddy10a}), and the UV component obviously cannot be neglected for objects that have significant UV emission. Figure~\ref{fig:sfrir} shows that one would significantly underestimate the total star formation rate of LIRGs at $z\sim 2$ based on their IR emission alone (the \citealt{kennicutt98} relation between SFR and $L_{\rm IR}$ is only valid in the optically thick limit), or based on using the \citet{yun01} relationship between radio luminosity and star formation rate. \begin{figure} \plotone{f9.eps} \caption{ Total star formation rate versus infrared luminosity ($L_{\rm IR}$) for the sample of 392 UV-selected $z\sim 2$ galaxies of \citet{reddy10a} ({\em red circles and orange upper limits}), and the local samples of \citet{bell03} and \citet{huang09} ({\em dark green circles}). The top axis shows the radio luminosity that corresponds to $L_{\rm IR}$ assuming the radio-IR correlation of \citet{yun01}, and the solid line denotes the relationship between radio luminosity and star formation rate derived in that study. 
The shaded region covers the area where the unobscured star formation is at least $50\%$ of the obscured star formation. The large open star denotes the position of the COSMOS LBGs from \citet{carilli08}.} \label{fig:sfrir} \end{figure} Note also that \citet{carilli08} combine the {\em median} radio SFR with the {\em mean} unobscured UV SFR to determine the effect of dust. In general, mean luminosities will be larger than median ones for a population drawn from a \citet{schechter76} luminosity function, modulo sample incompleteness. For our $L^{\ast}$ sample, the mean UV luminosity is about $15\%$ larger than the median. More importantly, the {\em mean} unobscured UV SFR of the \citet{carilli08} sample, $17$\,M$_{\odot}$\,yr$^{-1}$, is about a factor of two larger than the {\em median} UV SFR for our $L^{\ast}$ sample, $\approx 8.5$\,M$_{\odot}$\,yr$^{-1}$. It is possible that the relatively UV-bright LBGs of the \citet{carilli08} sample are somewhat less attenuated than more typical (and less UV-luminous) LBGs at $z\sim 2-3$, though this is contrary to what has been found for UV-selected galaxies at these redshifts \citep{reddy10a}. Without further analysis of their candidates, we conclude that the lower UV obscuration factor deduced by \citet{carilli08} is likely due to the brighter unobscured UV luminosity of their candidates, relative to the obscured star formation. What is clear from our spectroscopic sample is that the stacked radio flux for UV-selected galaxies at $z\sim 2$ predicts an $L_{\rm IR}$ that is identical to that obtained using direct measurements of thermal dust emission from the {\em Herschel} data, and this value of $L_{\rm IR}$ implies a dust correction of a factor of $\approx 5$.
\subsubsection{Investigations of the Extragalactic Background Light (EBL)} A second and more recent study that has suggested obscuration factors that are different from those predicted from the \citet{meurer99} relation comes from an analysis of the infrared background based on stacked {\em Spitzer} and BLAST data by \citet{chary10}. These authors estimate the extragalactic background light (EBL) contributed from galaxies at $z\ga 1$ and conclude that the UV-based dust corrections for typical star-forming galaxies (e.g., those selected by their UV emission) must be lower than the \citet{meurer99} prediction in order that their integrated emission does not violate the background light constraints. However, directly probing the thermal dust emission of $L^{\ast}$ galaxies at $z\sim 2$, as we have done here, shows conclusively that these galaxies have dust attenuations that are similar, on average, to those computed based on the \citet{meurer99} and \citet{calzetti00} attenuation curves (Figure~\ref{fig:ebmv100}). How can we reconcile these two results? The EBL is sensitive to the average dust attenuation of {\em all} galaxies, not just the UV-bright ones studied here. Indeed, \citet{reddy09} and \citet{reddy10a} use physical arguments and stacked {\em Spitzer} data to show that the dust obscuration of UV-faint galaxies is lower than in UV-bright ones (see also next section), and that this luminosity dependence implies that the average dust obscuration, integrated over the entire luminosity function, can be close to a factor of two lower than the mean dust obscuration found for just the UV bright galaxies (see Table~5 of \citealt{reddy09}). This does not necessarily imply a failure of the \citet{meurer99} relation for typical star-forming galaxies at high redshift; it simply means that the rest-frame UV slope also becomes bluer, on average, for UV-faint galaxies, as the empirical evidence seems to indicate at $z\sim 2.5$ \citep{bouwens09}. 
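The luminosity-function argument can be illustrated with a toy calculation. This is illustrative only: the faint-end slope and the assumed decline of obscuration toward faint galaxies below are placeholder values, not fits from this paper.

```python
import math

ALPHA = -1.7                 # assumed faint-end slope of the UV LF
IRX_BRIGHT = 7.1             # measured L_IR/L_UV of UV-bright L* galaxies

def schechter(x):            # x = L_UV / L*
    return x ** ALPHA * math.exp(-x)

def irx(x):                  # toy model: obscuration falls below L*
    return IRX_BRIGHT * min(1.0, x)

xs = [0.05 + 0.02 * i for i in range(200)]       # 0.05 <= L/L* <= 4
weights = [schechter(x) * x for x in xs]         # weight by luminosity
mean_irx = sum(w * irx(x) for w, x in zip(weights, xs)) / sum(weights)
# mean_irx comes out well below IRX_BRIGHT, illustrating how the
# LF-integrated obscuration can undershoot the UV-bright value.
```

With these toy inputs, the luminosity-weighted mean lands roughly a factor of two below the UV-bright value, in line with the estimate of \citet{reddy09}.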
What is clear from the present analysis is that the direct measurements of the thermal dust emission of UV-selected galaxies at $z\sim 2$ imply that the local UV attenuation curve remains valid for these galaxies. \subsubsection{Stacked {\em Herschel}/SPIRE Measurements for $z\sim 2$ UV-selected Galaxies} Strong source confusion in the {\em Herschel}/SPIRE 250, 350 and 500\,$\mu$m data makes it very difficult to carry out stacking experiments capable of reaching flux limits as faint as those we expect for $L^\ast$ UV-selected galaxies at $z\sim2$ (approximately 1\,mJy -- see Figure~\ref{fig:avesed}). After some experimentation, we have chosen not to include those data in our analysis. Nevertheless, \citet{rigopoulou10} stack SPIRE 250\,$\mu$m data for a smaller sample of brighter, 24\,$\mu$m-detected, UV-selected galaxies at $z\sim 2$. These were taken from the sample of \citet{reddy06b}, also used here, but were limited to 69 objects with individual 24\,$\mu$m detections. That subsample is therefore likely to be more IR-luminous, on average, than the larger sample considered here, where individual detection at 24\,$\mu$m is not required. \citet{rigopoulou10} measure a stacked 250\,$\mu$m flux of $f_{250} = 2.7\pm0.8$\,mJy, corresponding to a total infrared luminosity $L_{\rm IR} \approx 4.2\times 10^{11}$\,L$_\odot$ at $\langle z \rangle \approx 2$.\footnote{We note that the stacked 250\,$\mu$m flux shown in Figure~2 of \citet{rigopoulou10} appears to be nearly 10$\times$ fainter than the value cited in the text.
The authors state that they derive a total infrared luminosity $L_{\rm IR} = (1.5\pm 0.5) \times 10^{11}$\,L$_\odot$ using the CE01 dust templates, but we are unable to reproduce this, finding instead a luminosity that is 2.8$\times$ larger, based on the value $f_{250} = 2.7$\,mJy cited in the text.} This value is similar to what we predict at 250\,$\mu$m when we stack the 100 and 160\,$\mu$m data for the same set of galaxies (i.e., the brighter objects with individual 24\,$\mu$m detections). Given that we have (1) controlled for issues of clustering and confusion at $100$\,$\mu$m, and cross-checked the $160$\,$\mu$m fluxes with those obtained at $100$\,$\mu$m, and (2) performed detailed simulations to demonstrate the robustness of the stacked fluxes (Section~\ref{sec:stacking}), we believe the PACS constraints on the median IR SED to be robust. With the PACS data we are able to constrain the median IR SED for $L^\ast$ galaxies with lower average infrared luminosities, irrespective of whether they are individually detected at 24\,$\mu$m. Future observations with the Atacama Large Millimeter Array (ALMA) should provide higher resolution and more sensitive observations of the Rayleigh-Jeans emission from typical star-forming galaxies at high redshift. \begin{figure*} \plottwo{f10a.eps}{f10b.eps} \caption{Left: Bolometric luminosity, $L_{\rm bol} \equiv L_{\rm IR} + L_{\rm UV}$, as a function of $L_{\rm IR}/L_{\rm UV}$ for a sample of local galaxies from \citet{bell03} and \citet{huang09}, shown by the open diamonds. The sample of 392 UV-selected galaxies at $1.5\le z<2.6$ analyzed in \citet{reddy10a} are shown by the open circles and upper limits. The relationship between $L_{\rm bol}$ and $L_{\rm IR}/L_{\rm UV}$ for Samples A, C, D, and E, based on the stacked {\em Herschel}, {\em Spitzer}, and VLA data, are denoted by the large filled squares (error bars are not shown for clarity). 
The same quantity for the $L^{\ast}$ sample at $z\sim 2$ (Sample B) is shown by the large open square. Right: Correlation between $L_{\rm bol}$ and $L_{\rm UV}$, where the cyan points denote the results using $24$\,$\mu$m data and the \citet{reddy10a} calibration between $L_{\rm 8}$ and $L_{\rm IR}$. The purple points indicate results from incorporating the {\em Spitzer}/MIPS $24$\,$\mu$m, {\em Herschel}/PACS 100 and 160\,$\mu$m, and VLA 1.4GHz data to determine the median IR SED as a function of UV luminosity.} \label{fig:bol} \end{figure*} \subsection{Dependence of Dust Attenuation on Stellar Population Age and Luminosity} \subsubsection{Galaxies with ``Young'' ($\la 100$\,Myr) Stellar Population Ages} Roughly $14\%$ of the galaxies in our entire sample are identified as having a ``young'' stellar population with inferred ages of $\la 100$\,Myr. Stacking these galaxies results in a formal nondetection (S/N\,$\la 3$) in the {\em Herschel} and VLA stacks (Table~\ref{tab:fluxes}). The implied $3$\,$\sigma$ upper limit to the dust obscuration suggests that these galaxies follow an attenuation curve that is ``steeper'' than the typically assumed \citet{meurer99} and \citet{calzetti00} attenuation curves (Figure~\ref{fig:ebmv100}), in the sense that they exhibit a redder UV slope for a given obscuration. More stringent constraints on the dust attenuation of these galaxies come from their stacked $24$\,$\mu$m emission. Assuming the DH02 template that best matches the median $24$\,$\mu$m flux of the ``young'' galaxies implies $L_{\rm IR}/L_{\rm UV} \approx 1.55\pm 0.35$. The median UV slope of these galaxies, $\beta = -1.30$, then implies that their dust obscurations are consistent with the SMC attenuation curve. The deviation of such young, high-redshift galaxies from the standard attenuation curve is a result first noted in \citet{reddy06a} and further investigated in \citet{reddy10a}.
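The size of this deviation can be checked against the \citet{meurer99} fit, $A_{1600} = 4.43 + 1.99\beta$, using the simple energy-balance approximation $L_{\rm IR}/L_{\rm UV} \approx 10^{0.4 A_{1600}} - 1$ (the approximation, not the paper's fitting procedure, is assumed here):

```python
beta = -1.30                           # median UV slope of the "young" subsample
A_1600 = 4.43 + 1.99 * beta            # Meurer et al. (1999) attenuation (mag)
irx_meurer = 10 ** (0.4 * A_1600) - 1  # implied L_IR/L_UV under energy balance
irx_measured = 1.55                    # stacked 24um estimate from the text
print(round(A_1600, 2), round(irx_meurer, 1))  # prints 1.84 4.5
```

The measured ratio thus sits roughly a factor of three below the Meurer prediction at the same slope, consistent with a steeper, SMC-like curve.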
IRS spectroscopy of a couple of young lensed LBGs provides additional evidence that such young galaxies are less dusty at a given UV slope \citep{siana08, siana09} than their older and more typical counterparts at the same redshifts \citep{reddy10a}. The small carbonaceous dust grains giving rise to the mid-IR emission are believed to be produced in AGB stars over a longer timescale than the Type II SNe responsible for the large grains that emit thermally in the infrared (e.g., \citealt{galliano08}). It is therefore relevant to ask whether evolution in the small-to-large dust grain ratio may be responsible for the observed differences in attenuation curve with stellar population age. Based on an X-ray stacking analysis and an examination of the theoretical PAH metallicity from dust models, \citet{reddy10a} argued that the small-to-large dust grain ratio is unlikely to change significantly over the relatively narrow range in metallicity probed by the UV-selected sample. IRS spectroscopy of at least a couple of young lensed LBGs at high-redshift also indicates that their ratios of mid-to-far IR emission are similar to those found for local starburst galaxies \citep{siana08, siana09} and similar to the ratios observed for $z\sim 2$ galaxies \citep{reddy06a, reddy10a}. The {\em Herschel} observations, which probe the thermal dust emission at $z\sim 2$, are consistent with these findings based on $24$\,$\mu$m emission alone. If the difference in attenuation curve is due purely to geometrical effects, then it suggests that the dust covering fraction of high-redshift galaxies evolves significantly with time. A possible simple scenario is one in which the first generation of stars quickly pollutes the ISM with dust and metals over a dynamical timescale of a few tens of millions of years, at which point much of the dust is foreground to the stars and the dust covering fraction is high.
As star formation proceeds and either becomes spatially extended or is able to drive outflows sufficient in momentum to perturb the dust/gas distribution of the ISM, the resulting attenuation becomes patchier. So, we might expect in this situation that galaxies would gradually transition from a steep attenuation curve like that of the SMC to a grayer starburst attenuation curve (e.g., \citealt{meurer99, calzetti00}; see also discussion in \citealt{buat11}). Detailed studies of the variation of the interstellar absorption lines (as a proxy for the dust covering fraction) with dust attenuation will be needed to test this scenario. \subsubsection{Variation in Dust Attenuation with Bolometric and UV Luminosity} MIPS $24$\,$\mu$m studies have shown that $z\sim 2$ galaxies follow a tight trend between bolometric luminosity and dust attenuation (e.g., \citealt{reddy10a} and references therein), similar to that observed locally and at $z\sim 1$ \citep{burgarella09, buat07, buat09, wang96}. However, the normalization of this trend depends on redshift, such that at a given bolometric luminosity, galaxies at high redshift are on average less dusty than local ones \citep{reddy06a, reddy08, reddy10a}. This result is not unique to our UV-selected sample. Rest-frame optical, near-IR, and submillimeter selected galaxies at $z\sim 2$ also show a similar offset in obscuration per unit star formation compared with local galaxies \citep{reddy06a}. \citet{reddy10a} demonstrate that this relationship is likely driven by metallicity evolution. The stacked {\em Herschel} and radio data confirm these previous results (Figure~\ref{fig:bol}). This is not surprising given the similarity in $L_{\rm IR}$ and dust attenuation inferred in this study with those inferred for $L^{\ast}$ galaxies based on $24$\,$\mu$m data alone. 
The combined {\em Herschel}, {\em Spitzer}, and VLA data confirm that $L^{\ast}_{\rm UV}$ galaxies at $z\sim 2$ are a factor of $\approx 10$ less dusty than galaxies with similar bolometric luminosities in the local universe. Similarly, $L^{\ast}_{\rm UV}$ galaxies at $z\sim 2$ are a factor of $\approx 20-30$ more bolometrically luminous than local galaxies with a similar dust obscuration (Figure~\ref{fig:bol}). These evolutionary effects can also be seen in Figure~\ref{fig:sfrir}. In the local universe, it is only for galaxies with $L_{\rm IR}\la 10^{11}$\,L$_{\odot}$ (i.e., galaxies fainter than LIRGs) that the unobscured star formation begins to contribute significantly to the total star formation rate. In contrast, most lower luminosity LIRGs at $z\sim 2$ have a significant fraction of unobscured star formation, as indicated by the plume of $z\sim 2$ galaxies extending away from the SFR$_{\rm bol}=$SFR$_{\rm IR}$ line in Figure~\ref{fig:sfrir}; that is, LIRGs at $z\sim 2$ are more UV-transparent than LIRGs in the local Universe. Finally, we note that \citet{reddy10a} use a $24$\,$\mu$m analysis of a larger sample of UV-selected galaxies at $z\sim 2$ to show that the fraction of $24$\,$\mu$m detections of these galaxies decreases by a factor of two proceeding from UV-bright ($L_{\rm UV} \approx 10^{11}$\,L$_\odot$) to UV-faint ($L_{\rm UV} \approx 10^{10}$\,L$_\odot$) galaxies (see Figure~14 in \citealt{reddy10a}). This UV luminosity trend in $24$\,$\mu$m detection fraction implies that UV-faint galaxies are on average less infrared-luminous than their UV-bright counterparts, a result that is further confirmed by stacking the $24$\,$\mu$m emission of galaxies in bins of UV luminosity. A stacking of the combined {\em Spitzer}, {\em Herschel} and VLA data as a function of UV luminosity shows a similar trend, indicating that UV-faint galaxies are also less bolometrically luminous than UV-bright galaxies (Figure~\ref{fig:bol}).
The actual dust attenuation, $L_{\rm IR}/L_{\rm UV}$, is roughly constant (within the uncertainties) with decreasing UV luminosity, as both UV and IR luminosities decrease in tandem. In any case, it is clear from these observations that care must be taken when inferring the average dust obscuration and $L_{\rm IR}$, and their effect on global quantities like the star formation rate density, given the UV luminosity dependence of these quantities \citep{reddy08, reddy09}. \section{Conclusions} We have used the deep $100$ and $160$\,$\mu$m data of the GOODS-{\em Herschel} Open Time Key Program, supplemented with deep {\em Spitzer}/MIPS $24$\,$\mu$m and VLA 1.4\,GHz radio imaging, to investigate the infrared luminosities and dust obscuration of typical star-forming galaxies at high redshift. We focus on the median stacked mid-infrared, far-infrared, and radio fluxes of a sample of $146$ UV-selected galaxies with spectroscopic redshifts $1.5\le z_{\rm spec}<2.6$ in the GOODS-North field. These galaxies have luminosities around $L^{\ast}$ of the UV luminosity function at these redshifts. Because these galaxies are individually undetected at $100$, $160$\,$\mu$m, and $1.4$\,GHz, we perform median stacking analyses to measure their average fluxes. Three tests are performed to verify the robustness of our stacked results. The first test measured the effects of clustering. The second test determined the chance probability of measuring a stacked flux as high as the one obtained for the target galaxies. The third test measured the biases and uncertainties in stacked flux measurements by stacking on the positions of artificial sources added to the images. To interpret these fluxes, we consider a variety of dust templates, including those of \citet{elbaz11}, \citet{chary01}, \citet{dale02}, and \citet{rieke09}, as well as the $24$\,$\mu$m-$L_{\rm IR}$ and radio-infrared correlations of \citet{reddy10a} and \citet{bell03}, respectively.
Fitting these templates to the bias-corrected fluxes and infrared colors reveals that $L^{\ast}_{\rm UV}$ galaxies at $z\sim 2$ with UV luminosities $L_{\rm UV} \ga 10^{10}$\,L$_{\odot}$ have a median infrared luminosity of $L_{\rm IR}= (2.2\pm 0.3)\times 10^{11}$\,L$_{\odot}$. Galaxies in our sample exhibit $L_{\rm IR}/L_{\rm 8}$ (IR8) ratios that are a factor of $\approx 2$ larger than those found for most star-forming galaxies at lower redshifts, likely because these UV-selected galaxies are relatively compact for their infrared luminosity. One possibility is that the roughly constant IR8 ratio observed for most (main-sequence) galaxies at $z\la 2.0$ shifts towards larger values at $z\ga 2.0$, because galaxies at higher redshift are on average smaller for their IR luminosity than their lower-redshift counterparts. Stacking the {\em Spitzer}, {\em Herschel}, and VLA data as a function of UV spectral slope, $\beta$, and bolometric luminosity, $L_{\rm bol}$, indicates that galaxies with redder $\beta$ and higher $L_{\rm bol}$ have on average larger infrared luminosities, in accord with expectations based on trends found for local star-forming galaxies. Based on the $L_{\rm IR}$, we proceed to examine the dust attenuation, $L_{\rm IR}/L_{\rm UV}$, and the dust correction factor (i.e., the factor required to recover the bolometric SFR from the unobscured UV SFR), $1+$SFR$_{\rm IR}$/SFR$_{\rm UV}$, for galaxies in our sample. For $L^{\ast}$ galaxies at $z\sim 2$, we find a dust attenuation of $L_{\rm IR}/L_{\rm UV} = 7.1\pm1.1$, which corresponds to a dust correction factor of $5.2\pm 0.6$, implying that $\simeq 80\%$ of the star formation is obscured. This result is consistent with those found in previous studies of the dust-corrected UV and H$\alpha$, $24$\,$\mu$m, and radio and X-ray stacks of the same galaxies examined here \citep{reddy04, reddy06a, reddy10a}.
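The obscured fraction quoted above follows directly from the definition of the dust correction factor; a one-line check:

```python
dust_correction = 5.2                    # 1 + SFR_IR / SFR_UV for L* galaxies
obscured_fraction = 1 - 1 / dust_correction
print(round(100 * obscured_fraction))    # prints 81, i.e. roughly 80% obscured
```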
We have examined the relationship between the UV spectral slope, $\beta$, and dustiness for $z\sim 2$ galaxies. A small fraction ($\approx 14\%$) of galaxies are identified as having stellar population ages $\la 100$\,Myr from fitting stellar population SEDs to the observed optical through near-IR photometry. The upper limit in dust attenuation for these ``young'' galaxies suggests that they may follow a ``steeper'' attenuation curve than the one observed for local starburst galaxies \citep{meurer99, calzetti00}, in the sense that they are less dusty (i.e., have lower $L_{\rm IR}/L_{\rm UV}$ ratios) for a given $\beta$ than older and more typical galaxies at the same redshift. This result was first noted in the $24$\,$\mu$m analyses of \citet{reddy06a} and further investigated in \citet{reddy10a}, and may be attributable to the higher dust covering fractions in young galaxies. When considering more typical galaxies with ages $\ga 100$\,Myr, we find that their median $L_{\rm IR}/L_{\rm UV}$ increases as $\beta$ becomes redder, and that this correlation is essentially identical to that found for local starbursts \citep{meurer99, calzetti00}. Comparison of the {\em Herschel} results at $z\sim 2$ with measurements of local galaxies confirms the previously found trends that imply that LIRGs at $z\sim 2$ are more UV transparent (i.e., less dusty) than LIRGs in the local universe, and that UV-faint galaxies at $z\sim 2$ have lower $L_{\rm IR}$ and hence fainter bolometric luminosities than UV-bright galaxies at $z\sim 2$ (e.g., see also \citealt{reddy10a}). Based on the direct {\em Herschel} measurements of the rest-frame $\simeq 30$ and $50$\,$\mu$m thermal dust emission, we find that the local UV attenuation curve holds at $z\sim 2$ for galaxies with $L_{\rm bol} \la 10^{12}$\,L$_{\odot}$, suggesting a remarkable similarity in the processes governing star formation and dust production over the last $10$\,billion years of cosmic time. 
We have made significant progress in evaluating the effects of dust in typical star-forming galaxies at $z\sim 2$ as traced by thermal infrared emission. However, even the superb sensitivity and resolution of {\em Herschel} are not sufficient to individually detect $L^{\ast}$ galaxies at these redshifts. The most accurate estimate of dust luminosity, and hence total star formation rate, can only come from combining {\em individual} measurements of UV and IR luminosities. Though the observations will be targeted, ALMA promises to extend our knowledge of the dust continuum luminosities of {\em individual} star-forming galaxies at $z\sim 2$, thus providing the key ingredient to combine with UV measurements. These developments point to a wealth of forthcoming information on the properties of dust in typical star-forming galaxies at high redshift. Ultimately, a robust characterization of the relationship between infrared and short wavelength (UV, H$\alpha$) emission will be crucial for inferring dust attenuation and total star formation rates of the very faint and/or very high redshift galaxies that will be inaccessible to even the next generation of long wavelength observatories. \acknowledgements Support for NAR was provided by NASA through Hubble Fellowship grant HST-HF-01223.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. This work is based on observations made with the {\em Herschel Space Observatory}, a European Space Agency Cornerstone Mission with significant participation by NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech.
\section{Introduction} Following a turbulent election season, 2016's digital footprint is awash with hate speech. Apart from censorship, the goals of enabling computers to understand inflammatory language are many. Sensing increased proliferation of hate speech can elucidate public opinion surrounding polarizing events. Identifying hateful declarations can bolster security in revealing individuals harboring malicious intentions towards specific groups. Recent studies on supervised methods for online hate speech detection \cite{waseem2016hateful,nobata2016abusive} have relied on manually annotated datasets, which are not only costly to create but also likely to be insufficient for obtaining wide-coverage hate speech detection systems. This is mainly because online hate speech is relatively infrequent (among large amounts of online content) and tends to transform rapidly following a new ``trigger'' event. Our pilot annotation experiment with 5,000 randomly selected tweets shows that around 0.6\% (31 tweets) of tweets are hateful. The mass-scale (Yahoo! Finance online comments) hate speech annotation effort from Yahoo! \cite{nobata2016abusive} revealed that only 5.9\% of online comments contained hate speech. Therefore, large amounts of online texts need to be annotated to adequately identify hate speech. In recent studies \cite{waseem2016hateful,kwok2013locate}, the data selection methods and annotations are often biased towards a specific type of hate speech or hate speech generated in certain scenarios in order to increase the ratio of hate speech content in the annotated data sets, which, however, makes the resulting annotations too distorted to reflect the true distribution of hate speech. Furthermore, inflammatory language changes dramatically following new hate ``trigger'' events, which will significantly devalue annotated data.
To address the various limitations of supervised hate speech detection methods, we present a weakly supervised two-path bootstrapping approach for online hate speech detection that requires minimal human supervision and can be easily retrained and adapted to capture new types of inflammatory language. Our two-path bootstrapping architecture consists of two learning components, an explicit slur term learner and a neural net classifier (LSTMs \cite{hochreiter1997long}), that can capture both explicit and implicit phrasings of online hate speech. Specifically, our bootstrapping system starts with automatically labeled online hateful content that is identified by matching a large collection of unlabeled online content with several hateful slur terms. Then two learning components will be initiated simultaneously. A slur term learner will learn additional hateful slur terms from the automatically identified hateful content. Meanwhile, a neural net classifier will be trained using the automatically labeled hateful content as positive instances and randomly sampled online content as negative instances. Next, both string matching with the newly learned slur terms and the trained neural net classifier will be used to recognize new hateful content from the large unlabeled collection of online content. Then the hateful content newly identified by each of the two learning components will be used to augment the initially identified hateful content, which will be used to learn more slur terms and retrain the classifier. The whole process iterates. The design of the two-path bootstrapping system is mainly motivated by the need to capture both explicit and implicit inflammatory language. Explicit hate speech is easily identifiable by recognizing a clearly hateful word or phrase. For example: \vspace{.05in} \noindent (1) {\it Don't talk to me from an anonymous account you faggot coward, whither up and die.} \vspace{.05in} \noindent (2) {\it And that's the kind of people who support Trump!
Subhumans!} \noindent In contrast, implicit hate speech employs circumlocution, metaphor, or stereotypes to convey hatred of a particular group, in which hatefulness can be captured by understanding its overall compositional meaning. For example: \vspace{.05in} \noindent (3) {\it Hillary's welfare army doesn't really want jobs. They want more freebies.} \vspace{.05in} \noindent (4) {\it Affirmative action means we get affirmatively second rate doctors and other professionals.} Furthermore, our learning architecture has a flavor of co-training \cite{blum1998combining} in maintaining two learning components that concentrate on different properties of inflammatory language. By modeling distinct aspects of online hate speech, such a learning system is better equipped to combat semantic drift, which often occurs in self-learning when the learned model drifts away from the intended track. Moreover, training two complementary models simultaneously and utilizing both models to identify hate speech of different properties in each iteration of the learning process is important to maintain the learning momentum and to generate models with wide coverage. Indeed, our experimental results have shown that the two-path bootstrapping system is able to jointly identify many more hate speech texts (214,997 vs. 52,958 vs. 112,535) with a significantly higher F-score (48.9\% vs. 19.7\% vs. 26.1\%), when compared to the bootstrapping systems with only the slur term learner and only the neural net classifier. In addition, the evaluation shows that the two-path bootstrapping system identifies 4.4 times more hateful texts than hate speech detection systems that are trained using manually annotated data in a supervised manner. \section{Related Work} Previous studies on hate speech recognition mostly used supervised approaches.
Due to the sparsity of hate speech overall in reality, the data selection methods and annotations are often biased towards a specific type of hate speech or hate speech generated in certain scenarios. For instance, \citet{razavi2010offensive} conducted their experiments on 1,525 annotated sentences from a company's log file and a certain newsgroup. \citet{warner2012detecting} annotated around $9,000$ paragraphs from Yahoo!'s news group posts and the American Jewish Congress's website, with the labeling restricted to anti-Semitic hate speech. \citet{sood2012profanity} studied the use of profanity on a dataset of 6,500 labeled comments from Yahoo! Buzz. \citet{kwok2013locate} built a balanced corpus of 24,582 tweets consisting of anti-black and non-anti-black tweets. The tweets were manually selected from Twitter accounts that were believed to be racist based upon their reactions to anti-Obama articles. \citet{burnap2014hate} collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. \citet{waseem2016hateful} collected tweets using hateful slurs, specific hashtags as well as suspicious user IDs. Consequently, all of the 1,972 racist tweets are by 9 users, and the majority of sexist tweets are related to an Australian TV show. \citet{djuric2015hate} were the first to study hate speech using a large-scale annotated data set. They annotated 951,736 online comments from Yahoo! Finance, with 56,280 comments labeled as hateful. \citet{nobata2016abusive} followed \citet{djuric2015hate}'s work. In addition to the Yahoo! Finance annotated comments, they also annotated 1,390,774 comments from Yahoo! News. Comments in both data sets were randomly sampled from their corresponding websites, with a focus on comments by users who were reported to have posted hateful comments. We instead aim to detect hate speech w.r.t.\ its real distribution, using a weakly supervised method that does not rely on large amounts of annotations.
The commonly used classification methods in previous studies are logistic regression and Naive Bayes classifiers. \citet{djuric2015hate} and \citet{nobata2016abusive} applied neural network models for training word embeddings, which were further used as features in a logistic regression model for classification. We will instead train a neural net classifier \cite{kim2014convolutional,lai2015recurrent,zhou2015c} in a weakly supervised manner in order to capture implicit and compositional hate speech expressions. \citet{xiang2012detecting} is related to our research because they also used a bootstrapping method to discover offensive language from a large-scale Twitter corpus. However, their bootstrapping model is driven by mining hateful Twitter users, instead of content analysis of tweets as in our approach. Furthermore, they recognize hateful Twitter users by detecting explicit hateful indicators (i.e., keywords) in their tweets, while our bootstrapping system aims to detect both explicit and implicit expressions of online hate speech. \section{The Two-path Bootstrapping System for Online Hate Speech Detection} \subsection{Overview} \begin{figure}[ht] \centering \includegraphics[width=7.6cm,height=10cm,keepaspectratio]{final-model.png} \caption{Diagram of co-training model}\label{model} \end{figure} Figure \ref{model} illustrates our weakly supervised hate speech detection system, which starts with a few pre-identified slur terms as seeds and a large collection of unlabeled data instances. Specifically, we experiment with identifying hate speech from tweets. Hateful tweets will be automatically identified by matching the large collection of unlabeled tweets with slur term seeds. Tweets that contain one of the seed slur terms are labeled as hateful.
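The initial labeling step reduces to simple string matching. A minimal sketch follows; the seed terms and tweets shown are illustrative placeholders, not the actual seed list or corpus:

```python
import re

def label_initial(tweets, seed_slurs):
    """Label a tweet as hateful if it contains any seed slur,
    in singular or plural form (whole-word, case-insensitive match)."""
    forms = [s + suffix for s in seed_slurs for suffix in ("", "s")]
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, forms)) + r")\b", re.IGNORECASE)
    return [t for t in tweets if pattern.search(t)]
```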
The two-path bootstrapping system consists of two learning components, an explicit slur term learner and a neural net classifier (LSTMs \cite{hochreiter1997long}), that can capture both explicit and implicit descriptions of online hate speech. Using the hateful tweets labeled with the initial seed slur terms, the two learning components will be initiated simultaneously. The slur term learner will continue to learn additional hateful slur terms. Meanwhile, the neural net classifier will be trained using the automatically labeled hateful tweets as positive instances and randomly sampled tweets as negative instances. Next, both the newly learned slur terms and the trained neural net classifier will be used to identify new hateful content from the unlabeled large collection of tweets. The hateful tweets newly labeled by each of the two learning components will be used to augment the initial collection of hateful tweets identified by the seed slur terms, which will be used to learn more slur terms and retrain the classifier in the next iteration. The whole process then iterates. After each iteration, we determine whether a stopping criterion is met and the bootstrapping process should terminate. In general, a tuned threshold score is applied or a small annotated dataset is used to evaluate the learned classifiers. We adopt the latter method. Specifically, the bootstrapping system stops when the precision of the LSTM classifier is lower than $0.6$ when evaluated using an existing small annotated tweet set \cite{waseem2016hateful}. \subsection{Automatic Data Labeling of Initial Data} Seeing a hate slur term in a tweet strongly indicates that the tweet is hateful. Therefore, we use 20 manually selected slur terms to match with a large unlabeled tweet collection in order to quickly construct the initial small set of hateful tweets. Table \ref{seeds} shows the $20$ seed slurs we used.
\begin{table}[ht] \begin{center} \scalebox{0.9}{ \begin{tabular}{ l l l l l} \hline bimbo & chink & commie & coon & cunt \\ fag & faggot & feminazi & honky & islamist \\ libtard & muzzie & negro & nigger & paki \\ skank & subhuman & tranny & twat & wanker \\ \hline \end{tabular}} \end{center} \caption{Seed slurs }\label{seeds} \end{table} We obtained our initial list of slurs from Hatebase\footnote{https://www.hatebase.org}, the Racial Slurs Database\footnote{http://www.rsdb.org}, and a page of LGBT slang terms\footnote{https://en.wikipedia.org/wiki/List\_of\_LGBT\_slang\_terms}. We ranked the slur terms by their frequencies in tweets, eliminating ambiguous and outdated terms. The slur ``gypsy'', for example, refers derogatorily to people of Roma descent, but in current popular usage it idealizes a trendy bohemian lifestyle. The word ``bitch'' is ambiguous: sometimes a sexist slur, but other times innocuously self-referential or even friendly. For these reasons, we only selected the top $20$ terms we considered reliable (shown in Table \ref{seeds}). We use both the singular and the plural form for each of these seed slur terms. \subsection{Slur Term Learner} The slur term learning component extracts individual words from a set of hateful tweets as new slurs. Intuitively, if a word occurs significantly more frequently in hateful tweets than in randomly selected tweets, this term is more likely to be a hateful slur term. Following this intuition, we assign a score to each unique unigram that appears $10$ or more times in hateful tweets, and the score is calculated as the relative ratio of its frequency in the labeled hateful tweets over its frequency in the unlabeled set of tweets. Then the slur term learner recognizes a unigram with a score higher than a certain threshold as a new slur. Specifically, we use a threshold score of $100$ in identifying individual word slur terms.
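The scoring rule above can be sketched as follows. The add-one smoothing for words unseen in the unlabeled set is our assumption (the text does not specify how such words are handled), and whitespace tokenization is a simplification:

```python
from collections import Counter

def learn_slurs(hateful_tweets, unlabeled_tweets, min_count=10, threshold=100):
    """Return unigrams appearing >= min_count times in hateful tweets whose
    relative frequency ratio (hateful vs. unlabeled) exceeds the threshold."""
    hate = Counter(w for t in hateful_tweets for w in t.lower().split())
    background = Counter(w for t in unlabeled_tweets for w in t.lower().split())
    n_hate, n_bg = sum(hate.values()), sum(background.values())
    new_slurs = []
    for word, count in hate.items():
        if count < min_count:
            continue
        # relative ratio of frequencies; the +1 guards against division
        # by zero for words unseen in the unlabeled set (our assumption)
        ratio = (count / n_hate) / ((background[word] + 1) / n_bg)
        if ratio > threshold:
            new_slurs.append(word)
    return new_slurs
```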
The newly identified slur terms will be used to match with unlabeled tweets in order to identify additional hateful tweets. A tweet that contains one of the slur terms is deemed to be a hateful tweet. While we are aware of other more sophisticated machine learning models, one purpose of this research is to detect and learn new slur terms from constantly generated user data. Therefore, the simple and clean string matching based slur learner is designed to attentively look for specific words that alone can indicate hate speech. In addition, this is in contrast with the second learning component, which uses a whole tweet and models its compositional meaning in order to recognize implicit hate speech. These two learners are complementary in the two-path bootstrapping system. \subsection{The LSTM Classifier} We aim to recognize implicit hate speech expressions and capture the composite meanings of tweets using a sequence neural net classifier. Specifically, our LSTM classifier has a single layer of LSTM units. The output dimension size of the LSTM layer is $100$. A sigmoid layer is built on top of the LSTM layer to generate predictions. The input dropout rate and recurrent state dropout rate are both set to $0.2$. In each iteration of the bootstrapping process, the training of the LSTM classifier runs for $10$ epochs. The input to our LSTM classifier is a sequence of words. We pre-process and normalize tokens in tweets following the steps suggested in \cite{pennington2014glove}. In addition, we used the pre-processing of emoji and smileys described in a preprocessing tool\footnote{https://pypi.python.org/pypi/tweet-preprocessor/0.4.0}. Then we retrieve word vector representations from the downloaded\footnote{https://code.google.com/archive/p/word2vec/} pre-trained word2vec embeddings \cite{mikolov2013distributed}.
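To make the architecture concrete, the sketch below implements the forward pass of a single LSTM layer followed by a sigmoid output in plain NumPy. This is illustrative only: weights are random, and the training details described above (10 epochs, 0.2 dropout, word2vec inputs) are omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Illustrative single-layer LSTM + sigmoid classifier: input is a
    sequence of word vectors, output is a score in (0, 1)."""

    def __init__(self, input_dim, hidden_dim=100, seed=0):
        rng = np.random.default_rng(seed)
        # one stacked weight matrix for the four LSTM gates (i, f, o, g)
        self.W = rng.standard_normal((4 * hidden_dim, input_dim + hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        self.w_out = rng.standard_normal(hidden_dim) * 0.1
        self.hidden_dim = hidden_dim

    def predict(self, seq):
        """seq: list of input vectors; returns the sigmoid output score."""
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        for x in seq:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return float(sigmoid(self.w_out @ h))
```

In the paper's setting the final score would be thresholded at $0.9$ to decide whether a tweet is tagged as hateful.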
The LSTM classifier is trained using the automatically labeled hateful tweets as positive instances and randomly sampled tweets as negative instances, with a POS:NEG ratio of 1:10. Then the classifier is used to identify additional hateful tweets from the large set of unlabeled tweets. The LSTM classifier will deem a tweet hateful if the tweet receives a confidence score of $0.9$ or higher. Both the low POS:NEG ratio and the high confidence score are applied to increase the precision of the classifier in labeling hateful tweets and to control semantic drift in the bootstrapping learning process. To further combat semantic drift, we applied weighted binary cross-entropy as the loss function in the LSTM. \subsection{One vs. Two Learning Paths} As shown in Figure \ref{model}, if we remove one of the two learning components, the two-path learning system is reduced to a usual self-learning system with a single learning path. For instance, if we remove the LSTM classifier, the slur learner will learn new slur terms from the initially seed-labeled hateful tweets and then identify new hateful tweets by matching the newly learned slurs with unlabeled tweets. The newly identified hateful tweets will be used to augment the initial hateful tweet collection, and additional slur terms can be learned from the enlarged hateful tweet set. The process then iterates. However, as shown later in the evaluation section, single-path variants of the proposed two-path learning system are unable to receive the additional fresh hateful tweets identified by the other learning component and lose learning momentum quickly. \subsection{Tackling Semantic Drift} Semantic drift is the most challenging problem in distant supervision and bootstrapping. First of all, we argue that the proposed two-path bootstrapping system with two significantly different learning components is designed to reduce semantic drift.
According to the co-training theory \cite{blum1998combining}, the more different the two components are, the better. In the evaluation, we will show that such a system outperforms single-path bootstrapping systems. Furthermore, we have applied several strategies to control noise and imbalance in the automatically labeled data, e.g., the high frequency and high relative frequency thresholds enforced in selecting hate slur terms, as well as the low POS:NEG training sample ratio and the high confidence score of 0.9 used in selecting new data instances for the LSTM classifier. \section{Evaluations} \subsection{Tweets Collection} We randomly sampled 10 million tweets from 67 million tweets collected from Oct. 1st to Oct. 24th using the Twitter API. These 10 million tweets were used as the unlabeled tweet set in bootstrapping learning. Then we continued to collect 62 million tweets spanning from Oct. 25th to Nov. 15th, essentially two weeks before the US election day and one week after the election. The 62 million tweets will be used to evaluate the performance of the bootstrapped slur term learner and LSTM classifier. The timestamps of all these tweets were converted to EST. The tweets collected through the Twitter API were randomly sampled to prevent bias in the data set. \subsection{Supervised Baselines} We trained two supervised models using the 16 thousand annotated tweets that were used in a recent study \cite{waseem2016hateful}. The annotations distinguish two types of hateful tweets, sexism and racism, but we merge both categories and only distinguish hateful from non-hateful tweets. First, we train a traditional feature-based classification model using logistic regression (LR). We apply the same set of features as mentioned in \cite{waseem2016hateful}. The features include character-level bigrams, trigrams, and four-grams.
In addition, for direct comparisons, we train an LSTM model using the 16 thousand annotated tweets, using exactly the same settings as the LSTM classifier in our two-path bootstrapping system. \subsection{Evaluation Methods} We apply both the supervised classifiers and our weakly supervised hate speech detection systems to the 62 million tweets in order to identify hateful tweets that were posted before and after the US election day. We evaluate both precision and recall for both types of systems. Ideally, we could easily measure precision as well as recall for each system if we had ground truth labels for each tweet. However, it is impossible to obtain annotations for such a large set of tweets. The actual distribution of hateful tweets in the 62 million tweets is unknown. Instead, to evaluate each system, we randomly sampled 1,000 tweets from the whole set of tweets that {\it had been tagged as hateful} by the corresponding system. Then we annotate the sampled tweets and use them to estimate the precision and recall of the system. In this case, \[ precision = \frac{n}{ 1000 } \] \[ recall \propto precision \cdot N \] Here, $n$ refers to the number of hateful tweets that human annotators identified in the 1,000 sampled tweets, and $N$ refers to the total number of hateful tweets the system tagged in the 62 million tweets. We further calculated system recall by normalizing the product, $precision \cdot N$, with an estimated total number of hateful tweets in the 62 million tweets, which was obtained by multiplying the estimated hateful tweet rate of 0.6\%\footnote{We annotated 5,000 tweets that were randomly sampled during election time and 31 of them were labeled as hateful; therefore the estimated hateful tweet rate is 0.6\% (31/5,000).} with the exact number of tweets in the test set. Finally, we calculate the F-score using the calculated recall and precision.
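The estimation procedure above can be written out directly. As a sanity check, plugging in the Union system's sample counts (422 hateful tweets in the 1,000-tweet sample, 509,897 tagged tweets, 62 million test tweets) reproduces the reported figures up to rounding:

```python
def estimate_metrics(n_hateful_in_sample, sample_size, n_tagged,
                     n_total_tweets, hateful_rate=0.006):
    """Estimate precision, recall, and F-score from an annotated sample of
    system-tagged tweets, using the estimated overall hateful rate (0.6%)."""
    precision = n_hateful_in_sample / sample_size
    est_true_tagged = precision * n_tagged            # precision * N
    est_total_hateful = hateful_rate * n_total_tweets  # estimated ground truth
    recall = est_true_tagged / est_total_hateful
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```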
For all statistical classifiers, including both the logistic regression and LSTM models, only tweets that received a confidence score over $0.9$ were tagged as hateful. \begin{table*}[ht] \begin{center} \scalebox{0.94}{ \begin{tabular}{|l|c|c|c|c|c|} \hline \bf Classifier & \bf Precision & \bf Recall & \bf F1 & \bf \# of Predicted Tweets & \bf \# of Estimated Hateful \\ \hline \multicolumn{6}{|c|}{Supervised Baselines} \\ \hline Logistic Regression & 0.088 & 0.328 & 0.139 & \bf{1,380,825} & 121,512 \\ LSTMs & \bf{0.791} & 0.132 & 0.228 & 62,226 & 49,221 \\ \hline \multicolumn{6}{|c|}{The Two-path Weakly Supervised Learning System} \\ \hline LSTMs & 0.419 & 0.546 & 0.474 & 483,298 & 202,521 \\ Slur Matching & 0.565 & 0.398 & 0.468 & 261,183 & 147,595 \\ Union & 0.422 & \bf{0.580} & \bf{0.489} & 509,897 & \bf{214,997} \\ \hline Union* & 0.626* & 0.258* & 0.365* & - & - \\ \hline \multicolumn{6}{|c|}{Variations of the Two-path Weakly Supervised Learning System} \\ \hline Slur Matching Only & 0.318 & 0.143 & 0.197 & 166,535 & 52,958 \\ LSTMs Only & 0.229 & 0.303 & 0.261 & 491,421 & 112,535 \\ \hline \end{tabular} } \end{center} \caption{\label{pm} Performance of Different Models } \end{table*} \subsection{Human Annotations} When we annotate system-predicted tweet samples, we essentially adopt the same definition of hate speech as used in \cite{waseem2016hateful}, which considers tweets that explicitly or implicitly propagate stereotypes targeting a specific group, whether it is the initial expression or a meta-expression discussing the hate speech itself (i.e., a paraphrase). In order to ensure our annotators had a complete understanding of online hate speech, we asked two annotators to first discuss a very detailed annotation guideline of hate speech, then annotate separately. This process went through several iterations.
Then we asked the two annotators to annotate the 1,000 tweets that were randomly sampled from all the tweets tagged as hateful by the supervised LSTM classifier. The two annotators reached an inter-annotator agreement (Cohen's Kappa \cite{cohen1960coefficient}) of 85.5\%. Because one of the annotators became unavailable later in the project, the other annotator annotated the remaining sampled tweets. \subsection{Experimental Results} \noindent {\bf Supervised Baselines} The first section of Table \ref{pm} shows the performance of the two supervised models when applied to the 62 million tweets collected around election time. We can see that the logistic regression model suffers from an extremely low precision, which is less than 10\%. While this classifier aggressively labeled a large number of tweets as hateful, only 121,512 tweets are estimated to be truly hateful. In contrast, the supervised LSTM classifier has a high precision of around 79\%; however, this classifier is too conservative and only labeled a small set of tweets as hateful. \begin{table}[t] \begin{center} \begin{tabular}{|l|r|r|r|} \hline \bf Iter & \bf Prev & \bf Slur Match & \bf LSTMs \\ \hline 1 & 8,866 & 422 & 3,490 \\ 2 & 12,776 & 4,890 & 13,970 \\ 3 & 27,274 & 6,299 & 21,579 \\ 4 & 50,721 & 9,895 & 22,768 \\ \hline \end{tabular} \end{center} \caption{Number of Labeled Tweets in Each Iteration }\label{boot} \end{table} \noindent {\bf The Two-path Bootstrapping System} Next, we evaluate our weakly supervised classifiers, which were obtained using only $20$ seed slur terms and a large set of unlabeled tweets. The two-path weakly supervised bootstrapping system ran for four iterations. The second section of Table \ref{pm} shows the results for the two-path weakly supervised system. The first two rows show the evaluation results for each of the two learning components in the two-path system, the LSTM classifier and the slur learner, respectively. The third row shows the results for the full system.
We can see that the full system {\bf Union} is significantly better than the supervised LSTM model in terms of recall and F-score. Furthermore, we can see that a significant portion of hateful tweets were identified by both components, and the weakly supervised LSTM classifier is especially capable of identifying a large number of hateful tweets. The slur matching component obtains a precision of around 56.5\% and identifies roughly three times as many hateful tweets as the supervised LSTM classifier. The last row of this section ({\bf Union*}) shows the performance of our model on a collection of human-annotated tweets introduced in previous work \cite{waseem2016hateful}. The recall is rather low because the data we used to train our model is quite different from this dataset, which contains many tweets related to a TV show \cite{waseem2016hateful}. The precision is only slightly lower than that of previous supervised models trained using the same dataset. Table \ref{boot} shows the number of hateful tweets our bootstrapping system identified in each iteration during training. Specifically, the columns {\bf Slur Match} and {\bf LSTMs} show the number of hateful tweets identified by the slur learning component and the weakly supervised LSTM classifier, respectively. We can see that both learning components steadily label new hateful tweets in each iteration and the LSTM classifier often labels more tweets as hateful compared to slur matching. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|} \hline \bf Intersection & \bf LSTM Only & \bf Slur Only \\ \hline 234,584 & 248,714 & 26,599 \\ \hline \end{tabular} \end{center} \caption{Number of Hateful Tweets in Each Segment} \label{perf} \end{table} Furthermore, we found that many tweets were labeled as hateful by both slur matching and the LSTM classifier.
Table \ref{perf} shows the number of hateful tweets in each of the three segments: hateful tweets that were labeled by both components, as well as hateful tweets that were labeled by one component only. Note that the three segments of tweets are mutually exclusive. We can see that many tweets were labeled by both components and each component separately labeled some additional tweets as well. This demonstrates that hateful tweets often contain both explicit hate indicator phrases and implicit expressions. Therefore, in our two-path bootstrapping system, the hateful tweets identified by slur matching are useful for improving the LSTM classifier, and vice versa. This also explains why our two-path bootstrapping system learns to identify a variety of hate speech expressions in practice. \iffalse We randomly sampled 1,000 tweets from each of the three segments of system predicted hateful tweets and have them annotated. We use the three sets of annotated tweet samples to calculate estimated number of truly hateful tweets and hence estimated precision and recall for each of the two components in the bootstrapping system as well as the system overall. First, we estimate the number of truly hateful tweets in each segment based on the precision calculated using tweet samples for the same segment. Then we obtain the estimated number of truly hateful tweets labeled by each of the two hate detection components by summing up the estimated number of truly hateful tweets from the intersection segment and from one other segment corresponding to the component. Next, we obtain the number of predicted tweets by each of the two components by summing up the number of predicted tweets from the intersection segment and from the segment corresponding to the component. Then the estimated precision of each component is the ratio of the estimated number of truly hateful tweets over the predicted number of tweets by each component.
In addition, we obtain these three metrics when applying the complete system (Union) by considering tweet samples across the three segments. \fi \noindent {\bf One-path Bootstrapping System Variants} In order to understand how necessary it is to maintain two learning paths for online hate speech detection, we also ran two experiments with one learning component removed from the loop each time. Therefore, the reduced bootstrapping systems can only repeatedly learn explicit hate speech (with the slur learner) or implicit hateful expressions (with the LSTM classifier). The third section of Table \ref{pm} shows the evaluation results of the two single-path variants of the weakly supervised system. We can see that the estimated precision, recall, and F-score, as well as the estimated number of truly hateful tweets, are significantly lower for both systems than for the complete two-path bootstrapping system, which suggests that our two-path learning system can effectively capture diverse descriptions of online hate speech, maintain learning momentum, and effectively combat noise in online texts. \section{Analysis} \subsection{Analysis of the Learned Hate Indicators} \vspace{.1in} \begin{table}[ht] \begin{center} \scalebox{0.9}{ \begin{tabular}{ l l l l} \hline berk & chavs & degenerates & douches\\ facist & hag & heretics & jihadists\\ lesbo & pendejo & paedo & pinche\\ retards & satanist & scum & scumbag\\ slutty & tards & unamerican & wench\\ \hline \end{tabular}} \end{center} \caption{New slurs learned by our model}\label{new slur} \end{table} \vspace{-.1in} We have learned 306 unigram phrases using the slur term learning component. Among them, only 45 phrases were seen in existing hate slur databases, while the other 261 phrases were only identified in real-world tweets. Table \ref{new slur} shows some of the newly discovered hate-indicating phrases.
Our analysis shows that 86 of the newly discovered hate indicators are strong hate slur terms, and the remaining 175 indicators are related to discussions of identity and politics, such as ``supremacist'' and ``Zionism''. \subsection{Analysis of LSTM Identified Hateful Tweets} The LSTM labeled 483,298 tweets as hateful, and 172,137 of them do not contain any of the original seed slurs or our learned indicator phrases. The following are example hateful tweets that have no explicit hate indicator phrase: \vspace{.05in} \noindent (1) {\it @janh2h The issue is that internationalists keep telling outsiders that they're just as entitled to the privileges of the tribe as insiders.} \vspace{.05in} \noindent (2) {\it This is disgusting! Christians are very tolerant people but Muslims are looking to wipe us our and dominate us! Sen https://t.co/7DMTIrOLyw} We can see that the hatefulness of these tweets is determined by their overall compositional meaning rather than a hate-indicating slur. \subsection{Error Analysis} The errors of our model come from semantic drift in bootstrapping learning, which partially results from the complexity and dynamics of language. Specifically, we observed shifting word senses of slurs and natural drift in word semantics. Many slur terms are ambiguous and have multiple word senses. For instance, ``Chink'', an anti-Asian epithet, can also refer to a patch of light from a small aperture. Similarly, ``Negro'' is a toponym in addition to a racial slur. Further, certain communities have reclaimed slur words. Though the word ``dyke'' is derogatory towards lesbians, for example, some use it self-referentially to destigmatize it, a phenomenon we sometimes encountered. \subsection{Temporal Distributions of Tagged Hateful Tweets} By applying our co-training model to the 62-million-tweet corpus, we found around 510 thousand tweets labeled as hateful in total.
\begin{figure}[ht] \centering \includegraphics[width=7.6cm,keepaspectratio]{trend.png} \caption{Temporal Distribution of Hateful Tweets} \label{trend} \end{figure} Figure \ref{trend} displays the temporal distribution of hateful tweets. There is a spike in hateful tweets from Nov. 7th to Nov. 12th, in both the number of hateful tweets and the ratio of hateful tweets to total tweets. \iffalse \begin{figure}[ht] \centering \includegraphics[width=7.6cm,keepaspectratio]{geo1.png} \caption{Geographical Distribution of Hateful Tweets} \label{geo} \end{figure} \fi \subsection{Most Frequent Mentions and Hashtags of Tagged Hateful Tweets} Tables \ref{mention} and \ref{hashtag} show the top 30 most frequent mentions and hashtags in hateful tweets. They are ranked by frequency from left to right and from top to bottom. It is clear that the majority of mentions found in tweets tagged as hateful address polarizing political figures (i.e., @realDonaldTrump and @HillaryClinton), indicating that hate speech is often fueled by partisan warfare. Other common mentions include news sources, such as Politico and MSNBC, which further supports the observation that ``trigger'' events in the news can generate inflammatory responses among Twitter users. Certain individual Twitter users also received a sizable number of mentions. @mitchellvii is a conservative activist whose tweets lend unyielding support to Donald Trump. Meanwhile, Twitter user @purplhaze42 is a self-proclaimed anti-racist and anti-Zionist. Both figured among the most popular recipients of inflammatory language. Table \ref{hashtag} shows that the majority of hashtags also indicate the political impetus behind hate speech, with hashtags such as \#Trump and \#MAGA (Make America Great Again, Trump's campaign slogan) among the most frequent.
Specific televised events also engender proportionally large amounts of hateful language, as they are commonly experienced by television-owning Americans and therefore provide a widely available target for hateful messages. \begin{table}[ht] \begin{center} \scalebox{0.8}{ \begin{tabular}{ l l l} \hline @realDonaldTrump & @HillaryClinton & @megynkelly \\ @CNN & @FoxNews & @newtgingrich \\ @nytimes & @YouTube & @POTUS\\ @KellyannePolls & @MSNBC & @seanhannity \\ @washingtonpost & @narendramodi & @CNNPolitics \\ @PrisonPlanet & @guardian & @JoyAnnReid \\ @BarackObama & @thehill & @BreitbartNews \\ @politico & @ABC & @AnnCoulter \\ @jaketapper & @ArvindKejriwal & @FBI \\ @mitchellvii & @purplhaze42 & @SpeakerRyan \\ \hline \end{tabular}} \end{center} \caption{List of Top 30 Mentions in Hateful Tweets During Election Days}\label{mention} \end{table} \begin{table}[ht] \begin{center} \scalebox{0.8}{ \begin{tabular}{ l l l} \hline \#Trump & \#ElectionNight & \#Election2016 \\ \#MAGA & \#trndnl & \#photo \\ \#nowplaying & \#Vocab & \#NotMyPresident \\ \#ElectionDay & \#trump & \#ImWithHer \\ \#halloween & \#cdnpoli & \#Latin \\ \#Hillary & \#WorldSeries & \#1 \\ \#Brexit & \#Spanish & \#auspol \\ \#notmypresident & \#C51 & \#NeverTrump \\ \#hiring & \#bbcqt & \#USElection2016 \\ \#tcot & \#TrumpProtest & \#XFactor \\ \hline \end{tabular}} \end{center} \caption{List of Top 30 Hashtags in Hateful Tweets During Election Days}\label{hashtag} \end{table} \vspace{-0.1in} \section{Conclusions} Our work focuses on the need to capture both explicit and implicit hate speech from an unbiased corpus. To address these issues, we proposed a weakly supervised two-path bootstrapping model to identify hateful language in randomly sampled tweets. Starting from 20 seed slur terms, we found around 215 thousand hateful tweets in 62 million tweets collected during the election.
Our analysis shows a strong correlation between temporal distributions of hateful tweets and the election time, as well as the partisan impetus behind large amounts of inflammatory language. In the future, we will look into linguistic phenomena that often occur in hate speech, such as sarcasm and humor, to further improve hate speech detection performance.
In recent studies \cite{waseem2016hateful,kwok2013locate}, the data selection methods and annotations are often biased towards a specific type of hate speech, or towards hate speech generated in certain scenarios, in order to increase the ratio of hateful content in the annotated data sets; this, however, makes the resulting annotations too distorted to reflect the true distribution of hate speech. Furthermore, inflammatory language changes dramatically following new hate ``trigger'' events, which significantly devalues annotated data. To address the various limitations of supervised hate speech detection methods, we present a weakly supervised two-path bootstrapping approach for online hate speech detection that requires minimal human supervision and can be easily retrained and adapted to capture new types of inflammatory language. Our two-path bootstrapping architecture consists of two learning components, an explicit slur term learner and a neural net classifier (LSTMs \cite{hochreiter1997long}), that can capture both explicit and implicit phrasings of online hate speech. Specifically, our bootstrapping system starts with automatically labeled online hateful content that is identified by matching a large collection of unlabeled online content against several hateful slur terms. Then two learning components will be initiated simultaneously. A slur term learner will learn additional hateful slur terms from the automatically identified hateful content. Meanwhile, a neural net classifier will be trained using the automatically labeled hateful content as positive instances and randomly sampled online content as negative instances. Next, both string matching with the newly learned slur terms and the trained neural net classifier will be used to recognize new hateful content from the large unlabeled collection of online content.
Then the hateful content newly identified by each of the two learning components will be used to augment the initially identified hateful content, which will in turn be used to learn more slur terms and retrain the classifier. The whole process iterates. The design of the two-path bootstrapping system is mainly motivated by the need to capture both explicit and implicit inflammatory language. Explicit hate speech is easily identifiable by recognizing a clearly hateful word or phrase. For example: \vspace{.05in} \noindent (1) {\it Don't talk to me from an anonymous account you faggot coward, whither up and die.} \vspace{.05in} \noindent (2) {\it And that's the kind of people who support Trump! Subhumans!} \noindent In contrast, implicit hate speech employs circumlocution, metaphor, or stereotypes to convey hatred of a particular group, in which hatefulness can only be captured by understanding its overall compositional meaning. For example: \vspace{.05in} \noindent (3) {\it Hillary's welfare army doesn't really want jobs. They want more freebies.} \vspace{.05in} \noindent (4) {\it Affirmative action means we get affirmatively second rate doctors and other professionals.} Furthermore, our learning architecture has a flavor of co-training \cite{blum1998combining} in maintaining two learning components that concentrate on different properties of inflammatory language. By modeling distinct aspects of online hate speech, such a learning system is better equipped to combat semantic drift, which often occurs in self-learning when the learned model drifts away from the intended track. Moreover, training two complementary models simultaneously and utilizing both models to identify hate speech of different properties in each iteration of the learning process is important to maintain the learning momentum and to generate models with wide coverage.
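In outline, the two-path bootstrapping loop just described can be sketched as follows. This is a minimal toy sketch: the helper names (match_slurs, learn_slurs, classify) are illustrative stand-ins for the two components detailed in later sections, not our actual implementation.

```python
# Toy sketch of the two-path bootstrapping loop (assumed helper names).

def match_slurs(tweets, slurs):
    """Label a tweet as hateful if it contains any known slur token."""
    return {t for t in tweets if set(t.lower().split()) & slurs}

def bootstrap(unlabeled, seed_slurs, learn_slurs, classify, iterations=4):
    slurs = {s.lower() for s in seed_slurs}
    hateful = match_slurs(unlabeled, slurs)           # seed-labeled pool
    for _ in range(iterations):
        slurs |= learn_slurs(hateful, unlabeled)      # path 1: slur learner
        new_by_slurs = match_slurs(unlabeled, slurs)
        new_by_clf = classify(hateful, unlabeled)     # path 2: classifier
        grown = hateful | new_by_slurs | new_by_clf   # both paths augment pool
        if grown == hateful:                          # no new data: stop early
            break
        hateful = grown
    return hateful, slurs
```

Each iteration both learners are refreshed from the shared, growing pool of hateful tweets, which is what keeps the two paths feeding each other.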
Indeed, our experimental results have shown that the two-path bootstrapping system is able to jointly identify many more hate speech texts (214,997 vs. 52,958 vs. 112,535) with a significantly higher F-score (48.9\% vs. 19.7\% vs. 26.1\%), when compared to the bootstrapping systems with only the slur term learner and only the neural net classifier. In addition, the evaluation shows that the two-path bootstrapping system identifies 4.4 times more hateful texts than hate speech detection systems that are trained using manually annotated data in a supervised manner. \section{Related Work} Previous studies on hate speech recognition mostly used supervised approaches. Due to the overall sparsity of hate speech in reality, the data selection methods and annotations are often biased towards a specific type of hate speech or hate speech generated in certain scenarios. For instance, \citet{razavi2010offensive} conducted their experiments on 1,525 annotated sentences from a company's log file and a certain newsgroup. \citet{warner2012detecting} annotated around $9000$ paragraphs from Yahoo!'s news group posts and the American Jewish Congress's website, with labeling restricted to anti-Semitic hate speech. \citet{sood2012profanity} studied the use of profanity on a dataset of 6,500 labeled comments from Yahoo! Buzz. \citet{kwok2013locate} built a balanced corpus of 24,582 tweets consisting of anti-black and non-anti-black tweets. The tweets were manually selected from Twitter accounts that were believed to be racist based upon their reactions to anti-Obama articles. \citet{burnap2014hate} collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. \citet{waseem2016hateful} collected tweets using hateful slurs, specific hashtags as well as suspicious user IDs. Consequently, all of the 1,972 racist tweets are by 9 users, and the majority of sexist tweets are related to an Australian TV show.
\citet{djuric2015hate} is the first to study hate speech using a large-scale annotated data set. They annotated 951,736 online comments from Yahoo!~Finance, with 56,280 comments labeled as hateful. \citet{nobata2016abusive} followed \citet{djuric2015hate}'s work. In addition to the Yahoo!~Finance annotated comments, they also annotated 1,390,774 comments from Yahoo!~News. Comments in both data sets were randomly sampled from their corresponding websites, with a focus on comments by users who had been reported to have posted hateful comments. We instead aim to detect hate speech w.r.t. its real distribution, using a weakly supervised method that does not rely on large amounts of annotations. The commonly used classification methods in previous studies are logistic regression and Naive Bayes classifiers. \citet{djuric2015hate} and \citet{nobata2016abusive} applied neural network models for training word embeddings, which were further used as features in a logistic regression model for classification. We will instead train a neural net classifier \cite{kim2014convolutional,lai2015recurrent,zhou2015c} in a weakly supervised manner in order to capture implicit and compositional hate speech expressions. \citet{xiang2012detecting} is related to our research because they also used a bootstrapping method to discover offensive language from a large-scale Twitter corpus. However, their bootstrapping model is driven by mining hateful Twitter users, instead of content analysis of tweets as in our approach. Furthermore, they recognize hateful Twitter users by detecting explicit hateful indicators (i.e., keywords) in their tweets, while our bootstrapping system aims to detect both explicit and implicit expressions of online hate speech.
\section{The Two-path Bootstrapping System for Online Hate Speech Detection} \subsection{Overview} \begin{figure}[ht] \centering \includegraphics[width=7.6cm,height=10cm,keepaspectratio]{final-model.png} \caption{Diagram of co-training model}\label{model} \end{figure} As Figure \ref{model} illustrates, our weakly supervised hate speech detection system starts with a few pre-identified slur terms as seeds and a large collection of unlabeled data instances. Specifically, we experiment with identifying hate speech in tweets. Hateful tweets will be automatically identified by matching the large collection of unlabeled tweets against the slur term seeds. Tweets that contain one of the seed slur terms are labeled as hateful. The two-path bootstrapping system consists of two learning components, an explicit slur term learner and a neural net classifier (LSTMs \cite{hochreiter1997long}), that can capture both explicit and implicit descriptions of online hate speech. Using the hateful tweets initially labeled with the seed slur terms, the two learning components will be initiated simultaneously. The slur term learner will continue to learn additional hateful slur terms. Meanwhile, the neural net classifier will be trained using the automatically labeled hateful tweets as positive instances and randomly sampled tweets as negative instances. Next, both the newly learned slur terms and the trained neural net classifier will be used to identify new hateful content from the large unlabeled collection of tweets. The hateful tweets newly labeled by each of the two learning components will be used to augment the hateful tweet collection initially identified with the seed slur terms, which will be used to learn more slur terms and retrain the classifier in the next iteration. The whole process then iterates. After each iteration, we determine whether a stopping criterion has been met and the bootstrapping process should terminate.
In general, a tuned threshold score is applied or a small annotated dataset is used to evaluate the learned classifiers. We adopt the latter method. Specifically, the bootstrapping system stops when the precision of the LSTM classifier falls below $0.6$ when evaluated on an existing small annotated tweet set \cite{waseem2016hateful}. \subsection{Automatic Data Labeling of Initial Data} Seeing a hate slur term in a tweet strongly indicates that the tweet is hateful. Therefore, we use 20 manually selected slur terms to match against a large unlabeled tweet collection in order to quickly construct the initial small set of hateful tweets. Table \ref{seeds} shows the $20$ seed slurs we used. \begin{table}[ht] \begin{center} \scalebox{0.9}{ \begin{tabular}{ l l l l l} \hline bimbo & chink & commie & coon & cunt \\ fag & faggot & feminazi & honky & islamist \\ libtard & muzzie & negro & nigger & paki \\ skank & subhuman & tranny & twat & wanker \\ \hline \end{tabular}} \end{center} \caption{Seed slurs}\label{seeds} \end{table} We obtained our initial list of slurs from Hatebase\footnote{https://www.hatebase.org}, the Racial Slurs Database\footnote{http://www.rsdb.org}, and a page of LGBT slang terms\footnote{https://en.wikipedia.org/wiki/List\_of\_LGBT\_slang\_terms}. We ranked the slur terms by their frequencies in tweets, eliminating ambiguous and outdated terms. The slur ``gypsy'', for example, refers derogatorily to people of Roma descent, but in current popular usage evokes an idealization of a trendy bohemian lifestyle. The word ``bitch'' is ambiguous, sometimes a sexist slur but other times innocuously self-referential or even friendly. For these reasons, we only selected the top $20$ terms we considered reliable (shown in Table \ref{seeds}). We use both the singular and the plural form of each of these seed slur terms. \subsection{Slur Term Learner} The slur term learning component extracts individual words from a set of hateful tweets as new slurs.
Intuitively, if a word occurs significantly more frequently in hateful tweets than in randomly selected tweets, it is more likely to be a hateful slur term. Following this intuition, we assign a score to each unique unigram that appears $10$ or more times in hateful tweets; the score is calculated as the relative ratio of its frequency in the labeled hateful tweets over its frequency in the unlabeled set of tweets. The slur term learner then recognizes a unigram with a score above a certain threshold as a new slur. Specifically, we use a threshold score of $100$ when identifying individual word slur terms. The newly identified slur terms will be used to match against unlabeled tweets in order to identify additional hateful tweets. A tweet that contains one of the slur terms is deemed to be a hateful tweet. While we are aware of more sophisticated machine learning models, one purpose of this research is to detect and learn new slur terms from constantly generated user data. Therefore, the simple and transparent string-matching slur learner is designed to look specifically for words that alone can indicate hate speech. This is in contrast with the second learning component, which uses a whole tweet and models its compositional meaning in order to recognize implicit hate speech. These two learners are complementary in the two-path bootstrapping system. \subsection{The LSTM Classifier} We aim to recognize implicit hate speech expressions and capture the composite meanings of tweets using a sequence neural net classifier. Specifically, our LSTM classifier has a single layer of LSTM units. The output dimension size of the LSTM layer is $100$. A sigmoid layer is built on top of the LSTM layer to generate predictions. The input dropout rate and recurrent state dropout rate are both set to $0.2$. In each iteration of the bootstrapping process, the training of the LSTM classifier runs for $10$ epochs.
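The slur term learner's frequency-ratio scoring described above can be sketched as follows. This is a minimal sketch under two assumptions: the function and variable names are ours, and the "relative ratio" is taken over normalized relative frequencies (hateful tweets being drawn from the unlabeled pool, so every candidate word has a nonzero denominator in practice).

```python
from collections import Counter

def learn_slur_terms(hateful_tweets, unlabeled_tweets,
                     min_count=10, threshold=100.0):
    """Score each unigram by its relative frequency in hateful tweets over
    its relative frequency in the unlabeled pool; keep unigrams appearing
    at least min_count times whose score exceeds the threshold."""
    hate_freq = Counter(w for t in hateful_tweets for w in t.lower().split())
    all_freq = Counter(w for t in unlabeled_tweets for w in t.lower().split())
    n_hate = sum(hate_freq.values())
    n_all = sum(all_freq.values())
    slurs = set()
    for word, count in hate_freq.items():
        if count < min_count or all_freq[word] == 0:
            continue
        score = (count / n_hate) / (all_freq[word] / n_all)
        if score > threshold:
            slurs.add(word)
    return slurs
```

The defaults mirror the thresholds stated in the text (minimum frequency 10, ratio threshold 100); both are exposed as parameters so the rule can be tuned.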
The input to our LSTM classifier is a sequence of words. We pre-process and normalize tokens in tweets following the steps suggested in \cite{pennington2014glove}. In addition, we pre-process emojis and smileys as described in a tweet preprocessing tool\footnote{https://pypi.python.org/pypi/tweet-preprocessor/0.4.0}. Then we retrieve word vector representations from the downloaded\footnote{https://code.google.com/archive/p/word2vec/} pre-trained word2vec embeddings \cite{mikolov2013distributed}. The LSTM classifier is trained using the automatically labeled hateful tweets as positive instances and randomly sampled tweets as negative instances, with a POS:NEG ratio of 1:10. The classifier is then used to identify additional hateful tweets from the large set of unlabeled tweets. The LSTM classifier deems a tweet hateful if the tweet receives a confidence score of $0.9$ or higher. Both the low POS:NEG ratio and the high confidence score are applied to increase the precision of the classifier in labeling hateful tweets and to control semantic drift in the bootstrapping learning process. To further combat semantic drift, we applied weighted binary cross-entropy as the loss function in the LSTM. \subsection{One vs. Two Learning Paths} As shown in Figure \ref{model}, if we remove one of the two learning components, the two-path learning system is reduced to a usual self-learning system with a single learning path. For instance, if we remove the LSTM classifier, the slur learner will learn new slur terms from the initially seed-labeled hateful tweets and then identify new hateful tweets by matching the newly learned slurs against unlabeled tweets. The newly identified hateful tweets will be used to augment the initial hateful tweet collection, and additional slur terms can be learned from the enlarged hateful tweet set. The process then iterates.
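The weighted binary cross-entropy used as the LSTM's loss function in the previous subsection can be written in its standard form. The exact weighting scheme is not spelled out in the text, so up-weighting the positive class to offset the 1:10 POS:NEG ratio is an assumption in this sketch:

```python
import math

def weighted_bce(y_true, y_prob, pos_weight=10.0):
    """Mean binary cross-entropy with the positive (hateful) class
    up-weighted by pos_weight; pos_weight=10 would counter a 1:10
    POS:NEG training ratio (an assumed choice, not the paper's)."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        total -= pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(y_true)
```

With pos_weight set to 1 this reduces to plain binary cross-entropy; larger values penalize missed hateful tweets more heavily, pushing the classifier against the class imbalance.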
However, as shown later in the evaluation section, single-path variants of the proposed two-path learning system do not receive the additional fresh hateful tweets identified by the other learning component and quickly lose learning momentum. \subsection{Tackling Semantic Drifts} Semantic drift is the most challenging problem in distant supervision and bootstrapping. First of all, we argue that the proposed two-path bootstrapping system, with two significantly different learning components, is designed to reduce semantic drift. According to co-training theory \cite{blum1998combining}, the more different the two components are, the better. In the evaluation, we will show that such a system outperforms single-path bootstrapping systems. Furthermore, we have applied several strategies to control noise and imbalance in the automatically labeled data, e.g., the high frequency and high relative-frequency thresholds enforced in selecting hate slur terms, as well as the low POS:NEG training sample ratio and the high confidence score of 0.9 used in selecting new data instances for the LSTM classifier. \section{Evaluations} \subsection{Tweets Collection} We randomly sampled 10 million tweets from 67 million tweets collected from Oct. 1st to Oct. 24th using the Twitter API. These 10 million tweets were used as the unlabeled tweet set in bootstrapping learning. We then continued to collect 62 million tweets spanning Oct. 25th to Nov. 15th, essentially two weeks before the US election day and one week after the election. The 62 million tweets are used to evaluate the performance of the bootstrapped slur term learner and LSTM classifier. The timestamps of all these tweets were converted into EST. The tweets were randomly sampled via the Twitter API to prevent bias in the data set. \subsection{Supervised Baselines} We trained two supervised models using the 16 thousand annotated tweets that were used in a recent study \cite{waseem2016hateful}.
The annotations distinguish two types of hateful tweets, sexism and racism, but we merge both categories and only distinguish hateful from non-hateful tweets. First, we train a traditional feature-based classification model using logistic regression (LR). We apply the same set of features as mentioned in \cite{waseem2016hateful}. The features include character-level bigrams, trigrams, and four-grams. In addition, for direct comparison, we train an LSTM model using the 16 thousand annotated tweets, with exactly the same settings as we use for the LSTM classifier in our two-path bootstrapping system. \subsection{Evaluation Methods} We apply both the supervised classifiers and our weakly supervised hate speech detection systems to the 62 million tweets in order to identify hateful tweets that were posted before and after the US election day. We evaluate both precision and recall for both types of systems. Ideally, we could easily measure precision as well as recall for each system if we had ground-truth labels for each tweet. However, it is impossible to obtain annotations for such a large set of tweets, and the actual distribution of hateful tweets in the 62 million tweets is unknown. Instead, to evaluate each system, we randomly sampled 1,000 tweets from the whole set of tweets that {\it had been tagged as hateful} by the corresponding system. Then we annotate the sampled tweets and use them to estimate the precision and recall of the system. In this case, \[ precision = \frac{n}{ 1000 } \] \[ recall \propto precision \cdot N \] Here, $n$ refers to the number of hateful tweets that human annotators identified in the 1,000 sampled tweets, and $N$ refers to the total number of hateful tweets the system tagged in the 62 million tweets.
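In code, this sample-based estimation, together with the base-rate normalization of recall described in the following paragraph, amounts to the computation below. The function name and the example numbers are illustrative; the 0.6\% base rate is the paper's estimate.

```python
def estimate_metrics(n, sample_size, n_tagged, corpus_size, base_rate=0.006):
    """n: hateful tweets found among sample_size annotated samples;
    n_tagged: tweets the system tagged as hateful in the whole corpus (N);
    base_rate: estimated fraction of truly hateful tweets in the corpus."""
    precision = n / sample_size
    est_true_positives = precision * n_tagged            # precision * N
    recall = est_true_positives / (base_rate * corpus_size)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For instance, a system with sample precision 0.4 that tags 500,000 of 62 million tweets would be credited with an estimated 200,000 truly hateful tweets and a recall of about 0.54 under the 0.6\% base rate.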
We further calculated system recall by normalizing the product, $precision \cdot N$, with an estimated total number of hateful tweets in the 62 million tweets, obtained by multiplying the estimated hateful tweet rate of 0.6\%\footnote{We annotated 5,000 tweets that were randomly sampled during election time and 31 of them were labeled as hateful, therefore the estimated hateful tweet rate is 0.6\% (31/5,000).} by the exact number of tweets in the test set. Finally, we calculate the F-score from the estimated precision and recall. Consistently across the statistical classifiers, including both the logistic regression classifier and the LSTM models, only tweets that received a confidence score over $0.9$ were tagged as hateful. \begin{table*}[ht] \begin{center} \scalebox{0.94}{ \begin{tabular}{|l|c|c|c|c|c|} \hline \bf Classifier & \bf Precision & \bf Recall & \bf F1 & \bf \# of Predicted Tweets & \bf \# of Estimated Hateful \\ \hline \multicolumn{6}{|c|}{Supervised Baselines} \\ \hline Logistic Regression & 0.088 & 0.328 & 0.139 & \bf{1,380,825} & 121,512 \\ LSTMs & \bf{0.791} & 0.132 & 0.228 & 62,226 & 49,221 \\ \hline \multicolumn{6}{|c|}{The Two-path Weakly Supervised Learning System} \\ \hline LSTMs & 0.419 & 0.546 & 0.474 & 483,298 & 202,521 \\ Slur Matching & 0.565 & 0.398 & 0.468 & 261,183 & 147,595 \\ Union & 0.422 & \bf{0.580} & \bf{0.489} & 509,897 & \bf{214,997} \\ \hline Union* & 0.626* & 0.258* & 0.365* & - & - \\ \hline \multicolumn{6}{|c|}{Variations of the Two-path Weakly Supervised Learning System} \\ \hline Slur Matching Only & 0.318 & 0.143 & 0.197 & 166,535 & 52,958 \\ LSTMs Only & 0.229 & 0.303 & 0.261 & 491,421 & 112,535 \\ \hline \end{tabular} } \end{center} \caption{\label{pm} Performance of Different Models } \end{table*} \subsection{Human Annotations} When we annotate system-predicted tweet samples, we essentially adopt the same definition of hate speech as used in \cite{waseem2016hateful}, which considers tweets that explicitly or
implicitly propagate stereotypes targeting a specific group, whether it is the initial expression or a meta-expression discussing the hate speech itself (i.e., a paraphrase). In order to ensure our annotators had a complete understanding of online hate speech, we asked two annotators to first discuss a very detailed annotation guideline for hate speech and then annotate separately. This process went through several iterations. We then asked the two annotators to annotate the 1,000 tweets that were randomly sampled from all the tweets tagged as hateful by the supervised LSTM classifier. The two annotators reached an inter-annotator agreement (Cohen's kappa \cite{cohen1960coefficient}) of 85.5\%. Because one of the annotators became unavailable later in the project, the other annotator annotated the remaining sampled tweets. \subsection{Experimental Results} \noindent {\bf Supervised Baselines} The first section of Table \ref{pm} shows the performance of the two supervised models when applied to the 62 million tweets collected around election time. We can see that the logistic regression model suffers from an extremely low precision of less than 10\%. While this classifier aggressively labeled a large number of tweets as hateful, only 121,512 tweets are estimated to be truly hateful. In contrast, the supervised LSTM classifier has a high precision of around 79\%; however, this classifier is too conservative and only labeled a small set of tweets as hateful.
\begin{table}[t] \begin{center} \begin{tabular}{|l|r|r|r|} \hline \bf Iter & \bf Prev & \bf Slur Match & \bf LSTMs \\ \hline 1 & 8,866 & 422 & 3,490 \\ 2 & 12,776 & 4,890 & 13,970 \\ 3 & 27,274 & 6,299 & 21,579 \\ 4 & 50,721 & 9,895 & 22,768 \\ \hline \end{tabular} \end{center} \caption{Number of Labeled Tweets in Each Iteration}\label{boot} \end{table} \noindent {\bf The Two-path Bootstrapping System} Next, we evaluate our weakly supervised classifiers, which were obtained using only $20$ seed slur terms and a large set of unlabeled tweets. The two-path weakly supervised bootstrapping system ran for four iterations. The second section of Table \ref{pm} shows the results for the two-path weakly supervised system. The first two rows show the evaluation results for each of the two learning components in the two-path system, the LSTM classifier and the slur learner, respectively. The third row shows the results for the full system. We can see that the full system {\bf Union} is significantly better than the supervised LSTM model in terms of recall and F-score. Furthermore, we can see that a significant portion of hateful tweets were identified by both components, and the weakly supervised LSTM classifier is especially capable of identifying a large number of hateful tweets. The slur matching component obtains a precision of around 56.5\% and identifies roughly 3 times as many hateful tweets as the supervised LSTM classifier. The last row of this section ({\bf Union*}) shows the performance of our model on a collection of human-annotated tweets introduced in previous work \cite{waseem2016hateful}. The recall is rather low because the data we used to train our model is quite different from this dataset, which contains tweets related to a TV show \cite{waseem2016hateful}. The precision is only slightly lower than that of previous supervised models trained using the same dataset.
Table \ref{boot} shows the number of hateful tweets our bootstrapping system identified in each iteration during training. Specifically, the columns {\bf Slur Match} and {\bf LSTMs} show the number of hateful tweets identified by the slur learning component and the weakly supervised LSTM classifier, respectively. We can see that both learning components steadily label new hateful tweets in each iteration, and the LSTM classifier often labels more tweets as hateful than slur matching does. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|} \hline \bf Intersection & \bf LSTM Only & \bf Slur Only \\ \hline 234,584 & 248,714 & 26,599 \\ \hline \end{tabular} \end{center} \caption{Number of Hateful Tweets in Each Segment} \label{perf} \end{table} Furthermore, we found that many tweets were labeled as hateful by both slur matching and the LSTM classifier. Table \ref{perf} shows the number of hateful tweets in each of the three segments: hateful tweets that were labeled by both components, and hateful tweets that were labeled by one component only. Note that the three segments of tweets are mutually exclusive. We can see that many tweets were labeled by both components, and each component also labeled some additional tweets separately. This demonstrates that hateful tweets often contain both explicit hate indicator phrases and implicit expressions. Therefore, in our two-path bootstrapping system, the hateful tweets identified by slur matching are useful for improving the LSTM classifier, and vice versa. This also explains why our two-path bootstrapping system learns to identify varieties of hate speech expressions well in practice. \noindent {\bf One-path Bootstrapping System Variants} In order to understand how necessary it is to maintain two learning paths for online hate speech detection, we also ran two experiments with one learning component removed from the loop each time. The reduced bootstrapping systems can thus only repeatedly learn explicit hate speech (with the slur learner) or implicit hateful expressions (with the LSTM classifier). The third section of Table \ref{pm} shows the evaluation results of the two single-path variants of the weakly supervised system.
We can see that the estimated precision, recall, and F-score, as well as the estimated number of truly hateful tweets, are all significantly lower for the two single-path systems than for the complete two-path bootstrapping system, which suggests that our two-path learning system can effectively capture diverse descriptions of online hate speech, maintain learning momentum, and effectively combat noise in online texts. \section{Analysis} \subsection{Analysis of the Learned Hate Indicators} \vspace{.1in} \begin{table}[ht] \begin{center} \scalebox{0.9}{ \begin{tabular}{ l l l l} \hline berk & chavs & degenerates & douches\\ facist & hag & heretics & jihadists\\ lesbo & pendejo & paedo & pinche\\ retards & satanist & scum & scumbag\\ slutty & tards & unamerican & wench\\ \hline \end{tabular}} \end{center} \caption{New slurs learned by our model}\label{new slur} \end{table} \vspace{-.1in} We learned 306 unigram phrases using the slur term learning component. Among them, only 45 phrases were seen in existing hate slur databases, while the other 261 phrases were only identified in real-world tweets. Table \ref{new slur} shows some of the newly discovered hate-indicating phrases. Our analysis shows that 86 of the newly discovered hate indicators are strong hate slur terms, and the remaining 175 indicators are related to discussions of identity and politics, such as ``supremacist'' and ``Zionism''. \subsection{Analysis of LSTM Identified Hateful Tweets} The LSTM labeled 483,298 tweets as hateful, and 172,137 of them do not contain any of the original seed slurs or our learned indicator phrases. The following are example hateful tweets that have no explicit hate indicator phrase: \vspace{.05in} \noindent (1) {\it @janh2h The issue is that internationalists keep telling outsiders that they're just as entitled to the privileges of the tribe as insiders.} \vspace{.05in} \noindent (2) {\it This is disgusting!
Christians are very tolerant people but Muslims are looking to wipe us our and dominate us! Sen https://t.co/7DMTIrOLyw} We can see that the hatefulness of these tweets is determined by their overall compositional meanings rather than by a hate-indicating slur. \subsection{Error Analysis} The errors of our model come from semantic drift in bootstrapping learning, which partially results from the complexity and dynamics of language. Specifically, we observed shifting word senses of slurs and natural drift in word semantics. Many slur terms are ambiguous and have multiple word senses. For instance, ``Chink'', an anti-Asian epithet, can also refer to a patch of light from a small aperture. Similarly, ``Negro'' is a toponym in addition to a racial slur. Further, certain communities have reclaimed slur words. Though the word ``dyke'' is derogatory towards lesbians, for example, some use it self-referentially to destigmatize it, a phenomenon we sometimes encountered. \subsection{Temporal Distributions of Tagged Hateful Tweets} By applying our co-training model to the 62 million tweet corpus, we found around 510 thousand tweets labeled as hateful in total. \begin{figure}[ht] \centering \includegraphics[width=7.6cm,keepaspectratio]{trend.png} \caption{Temporal Distribution of Hateful Tweets} \label{trend} \end{figure} Figure \ref{trend} displays the temporal distribution of hateful tweets. There is a spike in hateful tweets from Nov. 7th to Nov. 12th, in terms of both the number of hateful tweets and the ratio of hateful tweets to total tweets. \subsection{Most Frequent Mentions and Hashtags of Tagged Hateful Tweets} Tables \ref{mention} and \ref{hashtag} show the top 30 most frequent mentions and hashtags in hateful tweets, respectively. They are ranked by frequency from left to right and from top to bottom.
It is clear that the majority of mentions found in tweets tagged as hateful address polarizing political figures (i.e., @realDonaldTrump and @HillaryClinton), indicating that hate speech is often fueled by partisan warfare. Other common mentions include news sources, such as Politico and MSNBC, which further supports the view that ``trigger'' events in the news can generate inflammatory responses among Twitter users. Certain individual Twitter users also received a sizable number of mentions. @mitchellvii is a conservative activist whose tweets lend unyielding support to Donald Trump. Meanwhile, Twitter user @purplhaze42 is a self-proclaimed anti-racist and anti-Zionist. Both figured among the most popular recipients of inflammatory language. Table \ref{hashtag} shows that the majority of hashtags also indicate the political impetus behind hate speech, with hashtags such as \#Trump and \#MAGA (Make America Great Again, Trump's campaign slogan) among the most frequent. Specific televised events also engender proportionally large amounts of hateful language, as they are commonly experienced by all television-owning Americans and are therefore widely available targets for hateful messages.
\begin{table}[ht] \begin{center} \scalebox{0.8}{ \begin{tabular}{ l l l} \hline @realDonaldTrump & @HillaryClinton & @megynkelly \\ @CNN & @FoxNews & @newtgingrich \\ @nytimes & @YouTube & @POTUS\\ @KellyannePolls & @MSNBC & @seanhannity \\ @washingtonpost & @narendramodi & @CNNPolitics \\ @PrisonPlanet & @guardian & @JoyAnnReid \\ @BarackObama & @thehill & @BreitbartNews \\ @politico & @ABC & @AnnCoulter \\ @jaketapper & @ArvindKejriwal & @FBI \\ @mitchellvii & @purplhaze42 & @SpeakerRyan \\ \hline \end{tabular}} \end{center} \caption{List of Top 30 Mentions in Hateful Tweets During Election Days}\label{mention} \end{table} \begin{table}[ht] \begin{center} \scalebox{0.8}{ \begin{tabular}{ l l l} \hline \#Trump & \#ElectionNight & \#Election2016 \\ \#MAGA & \#trndnl & \#photo \\ \#nowplaying & \#Vocab & \#NotMyPresident \\ \#ElectionDay & \#trump & \#ImWithHer \\ \#halloween & \#cdnpoli & \#Latin \\ \#Hillary & \#WorldSeries & \#1 \\ \#Brexit & \#Spanish & \#auspol \\ \#notmypresident & \#C51 & \#NeverTrump \\ \#hiring & \#bbcqt & \#USElection2016 \\ \#tcot & \#TrumpProtest & \#XFactor \\ \hline \end{tabular}} \end{center} \caption{List of Top 30 Hashtags in Hateful Tweets During Election Days}\label{hashtag} \end{table} \vspace{-0.1in} \section{Conclusions} Our work focuses on the need to capture both explicit and implicit hate speech from an unbiased corpus. To address these issues, we proposed a weakly supervised two-path bootstrapping model to identify hateful language in randomly sampled tweets. Starting from 20 seed rules, we found 210 thousand hateful tweets from 62 million tweets collected during the election. Our analysis shows a strong correlation between temporal distributions of hateful tweets and the election time, as well as the partisan impetus behind large amounts of inflammatory language. 
In the future, we will look into linguistic phenomena that often occur in hate speech, such as sarcasm and humor, to further improve hate speech detection performance.
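Although the implementation details of the weakly supervised model are not spelled out in this section, the two-path bootstrapping idea (two learners that each consume the other's confident labels, starting from a handful of seed rules) can be sketched in a few lines. The toy corpus, seed phrase, promotion rule, and thresholds below are illustrative placeholders of ours, not the authors' actual system:

```python
from collections import Counter

def bootstrap(tweets, seed_phrases, rounds=3, min_count=2):
    """Toy two-path bootstrapping sketch (illustrative only).

    Path 1: a phrase matcher labels any tweet containing a known phrase.
    Path 2: a word-frequency learner promotes words occurring at least
    `min_count` times in the labeled pool and more often there than in the
    unlabeled pool; promoted words extend the phrase list, so each path
    trains on the other's output.
    """
    phrases = set(p.lower() for p in seed_phrases)
    labeled = set()
    for _ in range(rounds):
        # Path 1: label tweets matching the current phrase list.
        labeled |= {t for t in tweets if any(p in t.lower() for p in phrases)}
        # Path 2: learn new indicative words from the labeled pool.
        pos = Counter(w for t in labeled for w in t.lower().split())
        neg = Counter(w for t in tweets if t not in labeled
                      for w in t.lower().split())
        phrases |= {w for w, c in pos.items()
                    if c >= min_count and c > neg[w]}
    return labeled, phrases
```

The sketch also makes the failure mode discussed in the error analysis visible: a promoted word that is merely correlated with the labeled pool (rather than genuinely hateful) drags in unrelated tweets on the next round, which is exactly the semantic drift described above.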
\section{Introduction} After forty years of studies and applications it is now clear that density functional theory (DFT)~\cite{H&K, Mermin} constitutes a formidable tool for the understanding of matter. Nowadays, DFT-based total energy calculations~\cite{Dreizler&daProvidencia,Gross&Dreizler,Dreizler&Gross, Gonis} and Car-Parrinello molecular dynamics simulations~\cite{CarParrinello} are used in a growing number of scientific fields, ranging from physics to chemistry to biology. The reason for such ubiquitous success is that these methods are {\it ab initio}, in the sense that the only underlying models are the fundamental interaction laws and quantum mechanics. However, precisely because of their {\it ab initio} nature, DFT-based methods generally require large computational resources. In spite of the availability of faster and faster computers, this circumstance sets the limits to the applicability of these methods. Most DFT implementations are based on the Kohn-Sham scheme~\cite{K&S} and require the solution of the appropriate Kohn-Sham Schroedinger equation for the wave-functions. This usually implies the orthogonalization or the inversion of large matrices and, hence, a number of operations scaling, in principle, as $N^3$, where $N$ is the number of atoms in the system. While for semiconductors and insulators the localization of the wave-functions quite naturally leads to sparse problems, the case of metals appears to be the most challenging. For metallic systems, in fact, the computational effort required by wave-function based approaches remains $O(N^3)$. Nevertheless, even for metals, approaches based on the direct minimization of the Hohenberg-Kohn functional wrt. the charge density~\cite{WeitaoYang,Cortona,Krajewski&Parrinello} could achieve $O(N)$ scaling. In this case, the basic strategy consists in partitioning the system under consideration into a collection of weakly interacting fragments~\cite{Harris,Goedecker}.
Even with today's computers, the scaling properties of DFT algorithms wrt. the size of the system remain a crucial issue since they determine which classes of phenomena can be studied by {\it ab initio} methods. Among DFT implementations, the oldest methods using such a {\it 'divide and conquer'} strategy for metallic systems are perhaps the self-consistent versions of the Korringa~\cite{Korringa}, Kohn and Rostoker\cite{Kohn&Rostoker} multiple scattering theory (MST). The MST method~\cite{Gonis} views the system under consideration as a collection of fragments (usually in one-to-one correspondence with lattice sites) whose scattering properties are determined by solving a Kohn-Sham Schroedinger equation. Once the fragment (or single-site) scattering matrices are determined, they are assembled together with the free electron propagator in order to obtain the scattering matrix or, equivalently, the Green's function of the system. Both the determination of the fragment scattering matrices and the potential reconstruction are $O(N)$, while, in principle, the solution for the global scattering matrix is an $O(N^3)$ problem, as it corresponds to the determination of the appropriate boundary conditions for the wave-functions in each fragment. However, a number of algorithms have been devised~\cite{LSMS,LSGF,swisscheese} that are able to obtain $O(N)$ scaling by mapping the determination of the system's Green function onto a sparse problem. This is usually obtained by setting the electronic propagator to zero outside the so-called Local Interaction Zone (LIZ) of each fragment. If the free electron propagator is used, about ten neighbor shells must be included in the LIZ~\cite{LSMS}, while using screened propagators~\cite{Lodder} allows for much smaller LIZs: typically one or two neighbor shells\cite{LSGF} are sufficient.
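As a toy illustration of how a short-ranged propagator turns the inversion into a sparse, $O(N)$ problem, the following scalar sketch compares the exact site-diagonal elements of $(t^{-1}\delta_{ij}-G^0_{ij})^{-1}$ with those obtained by inverting only a small block around each site. The one-dimensional chain, the real exponentially decaying 'propagator' and all numerical values are our own illustrative choices, not an actual screened KKR propagator:

```python
import numpy as np

def tau_diag_full(tinv, g0):
    """Exact site-diagonal elements of tau = (t^{-1} - G0)^{-1}:
    one dense O(N^3) inversion."""
    return np.diag(np.linalg.inv(np.diag(tinv) - g0))

def tau_diag_liz(tinv, g0, liz):
    """LIZ approximation: for each site, invert only the block of sites
    within `liz` neighbors; O(N) for a fixed LIZ size."""
    n = len(tinv)
    out = np.empty(n)
    for i in range(n):
        idx = np.arange(max(0, i - liz), min(n, i + liz + 1))
        block = np.diag(tinv[idx]) - g0[np.ix_(idx, idx)]
        out[i] = np.linalg.inv(block)[i - idx[0], i - idx[0]]
    return out

# A 1D chain with a rapidly decaying (screened-like) toy propagator.
n = 40
sites = np.arange(n)
g0 = 0.3 ** np.abs(sites[:, None] - sites[None, :])
np.fill_diagonal(g0, 0.0)
tinv = np.full(n, 2.0)
```

With a decay constant of 0.3, a LIZ of four neighbor shells already reproduces the dense result to better than $10^{-3}$ in this toy; with a slowly decaying free propagator a much larger LIZ would be needed, which is the point made above.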
Another remarkable feature of the MST method is that, being based on Green functions rather than on wave-functions, it can easily deal with disordered systems and ensemble statistical averages. For this reason, for many years the Coherent Potential Approximation (CPA) theory~\cite{soven} for disordered alloys has been used in conjunction with the MST~\cite{KKRCPA} and the DFT~\cite{DFTKKRCPA1}. The present paper shall be concerned with the study of metallic alloys in which the nuclei are assumed to occupy the positions of an ordered lattice, while substitutional disorder may be present. For these systems, in spite of the apparent complexity of the DFT algorithmic implementations, the analysis of large supercell calculations has allowed for the identification of remarkably simple trends. Namely, the charge excesses associated with each lattice site appear to be linear functions of the electrostatic potentials at the same site~\cite{FWS1,FWS2}. These simple relationships, referred to in the following as the 'qV' laws, allow one to describe the 'atoms' of each chemical species in an extended metallic system in terms of two parameters, say, the slope and the intercept of the above linear functions\cite{CPALF,CEF}, and appear to be the appropriate generalization of Pauling's concept of electronegativity~\cite{Pauling} to solid state physics. We have already suggested that the 'qV' laws can lead to important simplifications for total energy calculations in metallic alloys~\cite{CEF}. In the present paper, we shall introduce the class of the Generalized CPA's (GCPA) for dealing both with ordered and disordered metallic alloys. From the computational point of view, GCPA schemes present $O(N)$ scaling.
Their principal virtue, however, is that, as we shall demonstrate, the GCPA functional {\it exactly} reduces to a {\it function} of the relevant charge multipole moments at the various lattice sites, thus constituting a {\it coarse grained} approximate version of the original DFT. At a further level of approximation, the GCPA density functional leads to a Ginzburg-Landau functional, the Charge Excesses Functional (CEF)~\cite{CEF}, which is equivalent to the above linear 'qV' laws and computationally inexpensive. The predictions of the GCPA and the CEF about the 'qV' laws and total energies shall be compared with full-potential Linearized Augmented Plane Waves (LAPW) calculations~\cite{Andersenlinmet,WIEN2Kb} for 62 ordered crystal structures~\cite{Curtarolo,Curtarolothesis}. Our conclusions shall be that, at least for the systems considered, both GCPA and CEF are generally able to correctly identify the system ground state and to reproduce fairly well the energy differences between ordered structures in a fixed concentration ensemble. The rest of this paper is organized as follows. In Sect. II we shall briefly review the MST version of the DFT. In order to have a functional form as localized as possible, the relevant electrostatic contributions shall be rewritten using an exact multipole expansion. In Sect. III, we shall introduce the class of the GCPA theories and investigate the analytical properties of the corresponding approximate density functional. Moreover we shall obtain, as a further approximation to the GCPA, the CEF theory, already obtained in a much more phenomenological context~\cite{CEF}, which is offered here in a generalized form suitable for the inclusion of dipole or quadrupole interactions. In Sect. IV we shall compare the numerical results obtained from the GCPA and CEF approximations with those from full-potential LAPW calculations.
CEF and GCPA calculations appear {\it numerically indistinguishable} from one another and both theories appear able to reproduce fairly well the LAPW total energies. In the final Sect. V, we shall draw our conclusions and briefly discuss the possible developments of CEF and GCPA theories. \section{Review of The Density Functional Multiple Scattering Theory} \label{SectII} \subsection{The MST formalism} \label{SectIIA} In this subsection we shall briefly review the grand canonical ensemble formulation of the MST-DFT~\cite{Mermin,Dreizler&Gross}. Our aim is to develop a common ground for dealing both with ordered and substitutionally disordered systems. Although finite temperature, relativistic and magnetic generalizations could straightforwardly be carried out~\cite{Dreizler&Gross}, in this paper we focus on the non-relativistic, non spin-polarized case at $T=0$. Furthermore, unless otherwise stated, we shall consider the Local Density Approximation (LDA)~\cite{Dreizler&Gross} to the DFT and assume to have ions of charge $+eZ_i$ fixed at the lattice positions $\mathbf{R}_i$. In our discussion, the relevant density functional is the electronic grand potential~\cite{ DFTKKRCPA1,DFTKKRCPA2}, \begin{eqnarray} \label{gpotNmu} &\Omega&(T=0,V,\mu)=E_{TOT}-\mu N(\mu)= \nonumber \\ &-&\int_{-\infty}^{\mu} d \varepsilon \; N(\varepsilon;\mu) +\int_{-\infty}^{\mu} d \mu^\prime \int_{-\infty}^{\mu^\prime} d \varepsilon \; \frac{d N(\varepsilon;\mu^\prime)}{d \mu^\prime} \nonumber \\ &+&\frac{e^2}{2} \sum_{i,j \; (i \ne j)} \frac{Z_i Z_j}{R_{ij}} \end{eqnarray} where $V$ is the volume of the system, $\mu$ is the chemical potential and $E_{TOT}$ is the sum of the total electronic energy and the nuclei electrostatic interaction.
$N(\varepsilon;\mu)$ is the integrated density of states, which is related to the electronic density of states (DOS), $n(\varepsilon,\mu)$, through the following relationship: \begin{equation} \label{NmuT0} N(\varepsilon,\mu)=\int_{-\infty}^{\varepsilon} d \varepsilon^\prime \; n(\varepsilon^\prime,\mu) \end{equation} The notation highlights the implicit $\mu$ dependence of the DOS that arises from the effective Kohn-Sham potential. In a frozen-ion treatment, of course, the nuclear interaction term is just a constant that is included here for future convenience. The basic idea underlying the MST is partitioning the system into 'small' scattering volumes, $v_i$, $\sum_i v_i=V$, which in most implementations are 'centred' at the nuclei positions. Although at this stage the partitioning is quite arbitrary, as we shall see in the following, there is a natural choice for it. Using Lloyd's formula~\cite{Lloyd,FaulknerLloyd}, the integrated DOS, $N(\varepsilon;\mu)$, can be expressed as the excess with respect to the corresponding free electron quantity, $N^0(\varepsilon)$: \begin{eqnarray} \label{Lloyd1} &&N(\varepsilon;\mu)=N^0(\varepsilon)-\frac{1}{\pi} Im \ln \det \underline{\underline{M}}(\varepsilon)= \nonumber \\ &&N^0(\varepsilon)+\frac{1}{\pi} Im \sum_i Tr (\ln \underline{\underline{\tau}}(\varepsilon))_{ii} \end{eqnarray} where the trace is taken only over the angular momentum components. In Eq.~(\ref{Lloyd1}) the multiple scattering matrix, $\underline{\underline{M}}$, or the scattering-path matrix~\cite{Gyorffy&Stott}, $\underline{\underline{\tau}}=\underline{\underline{M}}^{-1}$, are defined in terms of the single-site scattering matrices \footnote{Matrices in the angular momentum space are denoted by a single underline, double underline is used for matrices both in the angular momentum and in the site spaces.
Angular momentum components are denoted by capital letters, $L \equiv (\ell,m)$, $L^\prime \equiv(\ell^\prime,m^\prime)$, $\dots$, site components by small Latin letters, $i, j, \dots$.}, $\underline{t}_i(\varepsilon)$, and the free electron propagator, $\underline{G}_{ij}^0(\varepsilon)$, as follows: \begin{equation} \label{MKKR} \underline{M}_{ij}(\varepsilon)=\underline{t}_i^{-1}(\varepsilon) \delta_{ij}- \underline{G}^0_{ij}(\varepsilon) \end{equation} It is convenient to recall here that the single-site scattering matrices convey the information about the phase shifts at the surfaces delimiting each scattering volume. The continuity of the wave-functions at the same surfaces is ensured by the construction of the scattering-path matrix, $\underline{\underline{\tau}}$; this is accomplished by the numerical inversion of the multiple scattering matrix $\underline{\underline{M}}$. Since the size of $\underline{\underline{M}}$ is proportional to the number of scatterers in the problem, its inversion is the source of the $O(N^3)$ scaling in the MST version of the DFT. Within the MST the link between the electronic density and the scattering matrices is provided by the Green function~\cite{Gonis} \begin{eqnarray} \label{greenfun} &&G_{ij,LL^\prime}(\mathbf{r},\mathbf{r}^\prime,\varepsilon)= Z_{i,L}(\mathbf{r},\varepsilon) \tau_{ij,LL^\prime}(\varepsilon) Z_{j,L^\prime}(\mathbf{r}^\prime,\varepsilon) \nonumber \\ &&- \Big[ \theta(r-r^\prime) Z_{i,L}(\mathbf{r},\varepsilon) J_{j,L^\prime}(\mathbf{r}^\prime,\varepsilon) \nonumber \\ &&+\theta(r^\prime-r) J_{i,L}(\mathbf{r},\varepsilon) Z_{j,L^\prime}(\mathbf{r}^\prime,\varepsilon) \Big] \delta_{L L^\prime} \delta_{i j} \end{eqnarray} where $\mathbf{r} \; \epsilon \; v_i$, $\mathbf{r}^\prime \; \epsilon \; v_j$.
$Z_{i,L}(\mathbf{r},\varepsilon)$ and $J_{i,L}(\mathbf{r},\varepsilon)$ are, respectively, the solutions of the KS Schroedinger equation for the energy $\varepsilon$ that are regular and irregular at $\mathbf{r}=0$\footnote{In the present paper the convention is used that for any function of the position $f_i(\mathbf{r})$ stands for $f(\mathbf{r-R}_i)$ with $\mathbf{r} \; \epsilon \; v_i$.}. For real energies both $Z_{i,L}(\mathbf{r},\varepsilon)$ and $J_{i,L}(\mathbf{r},\varepsilon)$ are real functions. The (site resolved) charge densities, the DOS and the net charges at the $i$-th site can be obtained by integrating the Green function over the energy and/or the appropriate volumes and by taking the trace over the angular momentum indices as follows: \begin{equation} \label{rhoi} \rho_i(\mathbf{r} ; \mu)=-\frac{1}{\pi} \int_{-\infty}^{\mu} d \varepsilon \sum_L Im \{ G_{ii,LL}(\mathbf{r},\mathbf{r}^\prime=\mathbf{r};\varepsilon) \} \end{equation} \begin{equation} \label{ni} n_i(\varepsilon;\mu)=-\frac{1}{\pi} \int_{v_i} d \mathbf{r} \sum_L Im \{G_{ii,LL}(\mathbf{r},\mathbf{r}^\prime=\mathbf{r};\varepsilon) \} \end{equation}
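A minimal numerical illustration of Eqs.~(\ref{Lloyd1}) and~(\ref{MKKR}) in a scalar (s-wave-only) setting: each site is represented by a single complex number instead of a matrix in $L$-space, and all entries below are arbitrary illustrative values of ours. The sketch builds $\underline{\underline{M}}$, inverts it to get $\underline{\underline{\tau}}$, and checks the determinant identity $\det \underline{\underline{\tau}} \cdot \det \underline{\underline{M}} = 1$, which underlies rewriting $-\ln\det\underline{\underline{M}}$ as $\sum_i Tr(\ln \underline{\underline{\tau}})_{ii}$ in Eq.~(\ref{Lloyd1}):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Toy inverse single-site scattering 'matrices' (complex scalars) and a
# symmetric real 'free propagator' with vanishing site-diagonal part.
tinv = 1.5 + 0.2j + 0.1 * rng.standard_normal(n)
g0 = 0.2 * rng.standard_normal((n, n))
g0 = 0.5 * (g0 + g0.T)
np.fill_diagonal(g0, 0.0)

m = np.diag(tinv) - g0    # Eq. (MKKR): M_ij = t_i^{-1} delta_ij - G0_ij
tau = np.linalg.inv(m)    # scattering-path matrix: the dense O(N^3) step

# det(tau) * det(M) = 1, hence ln det tau = -ln det M (mod 2*pi*i),
# the identity used in the second line of the Lloyd formula.
identity_check = np.linalg.det(tau) * np.linalg.det(m)
```

The single dense inversion here is exactly the step whose cost the LIZ-based and CPA-based approximations discussed in this paper are designed to avoid.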
As shown in Refs.~\onlinecite{Janak}, \onlinecite{DFTKKRCPA1}, \onlinecite{DFTKKRCPA2}, the Hohenberg-Kohn density functional, Eq.~(\ref{gpotNmu}), can be more conveniently rewritten within the MST formalism as the sum of a kinetic and a potential energy functional, as follows: \begin{equation} \label{varform} \Omega(T=0,V,\mu)=T - \mu N + U \end{equation} where the two contributions are given by the following expressions: \begin{eqnarray} \label{Ti} T-\mu N = -\int_{-\infty}^{\mu} d \varepsilon N(\varepsilon;\mu) -\int_V d \mathbf{r} \rho(\mathbf{r};\mu) v^{eff}(\mathbf{r};\mu) \end{eqnarray} \begin{eqnarray} \label{Ualt} U&=&\frac{e^2}{2}\int_V d \mathbf{r} \int_V d \mathbf{r}^\prime \frac{\rho(\mathbf{r}) \rho(\mathbf{r}^\prime)} {|\mathbf{r}-\mathbf{r}^\prime|} -\sum_{j} \int_V d \mathbf{r} \frac{e^2 Z_j \rho(\mathbf{r})}{|\mathbf{r}-\mathbf{R}_j|} \nonumber \\ &+&\int_V d \mathbf{r} \rho(\mathbf{r}) e^{XC}(\mathbf{r}, [\rho]) +\frac{e^2}{2}\sum_{ij \; (i \neq j)} \frac{Z_i Z_j}{R_{ij}} \end{eqnarray} The effective potential in Eq.~(\ref{Ti}), $v^{eff}(\mathbf{r})$, is specified by the Kohn-Sham equation, \begin{eqnarray} \label{effpot} v^{eff}(\mathbf{r})&=&\int_{V} d \mathbf{r}^\prime \frac{e^2 \rho(\mathbf{r}^\prime)}{|\mathbf{r}-\mathbf{r}^\prime |} -\sum_j \frac{e^2 Z_j}{|\mathbf{r}-\mathbf{R}_{j}|} \nonumber \\ &+&v^{XC}(\mathbf{r}, [\rho]) \end{eqnarray} It consists of the Coulomb potential due to the electronic and ionic charges and of the exchange-correlation potential, $v^{XC}(\mathbf{r}, [\rho])=\delta E^{XC}[\rho] / \delta \rho(\mathbf{r})$, where $E^{XC}[\rho]$ is the third term on the RHS of Eq.~(\ref{Ualt}). In the local density approximation (LDA) $v^{XC}$ is assumed to depend locally on the electronic density, i.e., $v^{XC}(\mathbf{r}, [\rho])=v^{XC}(\rho(\mathbf{r}))$ and $e^{XC}(\mathbf{r}, [\rho])=e^{XC}(\rho(\mathbf{r}))$. We wish to highlight a useful consequence of the above partitioning of the system volume.
The density functional defined by Eq.~(\ref{varform}), which is, of course, variational wrt the global charge density, $\rho(\mathbf{r})$, turns out to be variational also wrt the charge densities in each scattering volume $v_i$; in formulae, \begin{equation} \label{varfun} \frac{\delta \Omega} {\delta \rho_i(\mathbf{r})}=0 \end{equation} Furthermore, it is possible to show\cite{DFTKKRCPA2} that \begin{equation} \label{pass4} \frac{\delta U}{\delta \rho_i(\mathbf{r})}= v_i^{eff}(\mathbf{r};\mu) \end{equation} and that \begin{equation} \label{pass3} \frac{\delta (T -\mu N)}{\delta \rho_i(\mathbf{r})}= -v_i^{eff}(\mathbf{r};\mu) \end{equation} It is interesting to observe that the expression for the site-resolved DOS, Eq.~(\ref{ni}), allows one to recast the integrated DOS and the electronic grand potential as sums of site-resolved contributions. These contributions, however, involve the site-diagonal part of the system Green function or scattering matrix, $\underline{G}_{ii}$, or $\underline{\tau}_{ii}$, and are therefore non-trivially coupled together through the boundary conditions. If this coupling were neglected, as is done, for instance, in the case of the Harris-Foulkes density functional\cite{Harris}, $O(N)$ scaling could be obtained. Fortunately, similar numerical performance can be achieved with less drastic approximations. A sensible alternative is to impose {\it random} boundary conditions at the fragment surfaces~\cite{Krajewski&Parrinello}. In this paper we shall follow a different approach and use {\it averaged} boundary conditions. As we shall see in the following Section, this allows for a tractable form of the coupling in the kinetic part of the density functional and permits $O(N)$ algorithms. Although it was proposed with a different aim, one of the oldest methods applying such mean boundary conditions is the CPA, a generalized version of which shall be offered in the next Section.
Before that, however, we need to discuss a different source of coupling that is present in the potential energy part of the functional, namely, the electrostatic interactions between fragments. In the past this subject has received little consideration and has been dismissed by invoking the screening properties of metals. However, nowadays there is a general consensus that careful estimates of these interactions are necessary in order to obtain accurate total energies for metallic alloys. \subsection{Multipole expansions for the effective potentials and the potential energy} \label{SectIIB} We have shown in the previous Sect.~\ref{SectIIA} that the DF-MST theory is variational wrt the local charge densities, $\rho_i(\mathbf{r})$, of each fragment or scattering volume. In this subsection we shall see how the multipole expansion used by most numerical implementations of the theory has the conceptual advantage of giving expressions for the effective Kohn-Sham potentials and the potential energy in which different scattering volumes are coupled together only through simple functions of the multipole moments. The relevant formulae can be obtained by splitting the volume integrals in Eqs.~(\ref{Ualt}) and~(\ref{effpot}) into sums of integrals extending over the scattering volumes, $v_i$, and by expanding the denominators in spherical harmonics. Although they require some labour, the derivations are very straightforward and need not be reported here.
The resulting expressions for the potential energy and the effective potentials are listed below: \begin{equation} \label{Ualtsm} U=\sum_i \left[ u_i(\rho_i(\mathbf{r}))+\frac{e^2}{2} \; \sum_L q_{i,L} V^{MAD}_{i,L} \right] \end{equation} and \begin{eqnarray} \label{effpotsm} v_i^{eff}(\mathbf{r})&=&e^2 \int_{v_i} d \mathbf{r}^\prime \frac{\rho_i(\mathbf{r}^\prime)}{|\mathbf{r}-\mathbf{r}^\prime |} -\frac{e^2 Z_i}{r}+ v^{XC}(\rho_i(\mathbf{r})) \nonumber \\ &+& e^2 V^{MAD}_i(\mathbf{r}) \end{eqnarray} In Eqs.~(\ref{Ualtsm}) and~(\ref{effpotsm}) we have introduced the local multipole moments, \begin{equation} \label{qim} q_{i,L}=\int_{v_i} d \mathbf{r} p_L(\mathbf{r}) \rho_i(\mathbf{r}) - Z_i \delta_{L,(0,0)} \end{equation} and the Madelung potentials, \begin{equation} \label{vmadm} V^{MAD}_i(\mathbf{r}) = \sum_L V^{MAD}_{i,L} p_L(\mathbf{r}) \end{equation} where \begin{equation} \label{vmadm2} V^{MAD}_{i,L}=\sum_{j \ne i} \sum_{L^\prime} M_{ij,LL^\prime} q_{j,L^\prime} \end{equation} The coefficients $M_{ij,LL^\prime}=M_{LL^\prime}(\mathbf{R}_{ji})$ are given by \begin{equation} \label{madgen} M_{LL^\prime}(\mathbf{R})=4 \pi \sum_{L^{\prime\prime}\; (\ell^{\prime\prime}=\ell+\ell^\prime)} C_{LL^\prime}^{L^{\prime\prime}} \frac{(2\ell^{\prime\prime}+1)!!}{(2\ell^{\prime\prime}+1)} \frac{Y_{L^{\prime\prime}}(\hat{\mathbf{R}})}{R^{\ell^{\prime\prime}+1}} \end{equation} $C_{LL^\prime}^{L^{\prime\prime}}$ are the Gaunt numbers~\cite{Condon&Shortley} and the functions $p_L(\mathbf{r})$ in Eq.~(\ref{effpotsm}) are defined as, \begin{equation} \label{pdef} p_L(\mathbf{r})=\frac{\sqrt{4 \pi}}{(2 \ell +1)!!} r^\ell Y^*_L(\mathbf{r}) \end{equation} The only values that are relevant for spherical approximations are $p_{00}(\mathbf{r})=1$ and $M_{00,00}(R)=1/R$.
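For the monopole ($\ell=0$) component, since $p_{00}=1$ and $M_{00,00}(R)=1/R$, the Madelung potential reduces to $V^{MAD}_i=\sum_{j \ne i} q_j/R_{ij}$ (in units with $e^2=1$). A hedged sketch for a finite cluster of point-like charge excesses follows; a periodic lattice would instead require an Ewald-type summation, which is not attempted here:

```python
import numpy as np

def madelung_monopole(positions, charges):
    """Monopole Madelung potentials V_i = sum_{j != i} q_j / R_ij
    (units e^2 = 1) for a finite cluster of point-like charge excesses.
    Illustrative only: an infinite lattice needs an Ewald summation."""
    pos = np.asarray(positions, dtype=float)
    q = np.asarray(charges, dtype=float)
    # Pairwise distance matrix R_ij.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    inv = np.zeros_like(d)
    off = ~np.eye(len(q), dtype=bool)   # exclude the i = j self-term
    inv[off] = 1.0 / d[off]
    return inv @ q
```

For two opposite charge excesses $\pm q$ a distance $R$ apart this gives $V_1=-q/R$ and $V_2=+q/R$: an electron-rich site sits in the potential generated by its electron-poor neighbor, the sign structure behind the linear 'qV' laws discussed in the Introduction.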
In Eq.~(\ref{Ualtsm}), the contribution from the $i$-th lattice site to the potential energy is denoted as $u_i([\rho_i(\mathbf{r})])$ and given by \begin{widetext} \begin{equation} \label{Ualts2} u_i([\rho_i(\mathbf{r})]) = \frac{e^2}{2}\int_{v_i} d \mathbf{r} \int_{v_i} d \mathbf{r}^\prime \frac{\rho_i(\mathbf{r}) \rho_i(\mathbf{r}^\prime)} {|\mathbf{r}-\mathbf{r}^\prime|} - \int_{v_i} d \mathbf{r} \frac{e^2 Z_i \rho_i(\mathbf{r})}{r} +\int_{v_i} d \mathbf{r} \rho_i(\mathbf{r}) e^{XC}(\rho_i(\mathbf{r})) \end{equation} \end{widetext} Within the LDA, $u_i$ depends on the electronic density at the $i$-th fragment only, while for non-local approximations to the DFT there could be some dependence on the density at sites $j \ne i$. Much published work has been done within spherical approximations (SA), namely the muffin-tin (MT) or the atomic sphere approximation (ASA). In that case only the first terms, $\ell=0$, of the multipole expansions are included. Thus, Eqs.~(\ref{Ualtsm}) and~(\ref{effpotsm}) must be replaced by the following expressions, \begin{equation} \label{Ualts} U=\sum_i \left[ u_i([\rho_i(\mathbf{r})])+\frac{e^2}{2} \; q_i V^{MAD}_i \right] \end{equation} and \begin{equation} \label{effpots} v_i^{eff}(\mathbf{r})=\int_{v_i} d \mathbf{r}^\prime \frac{e^2 \rho_i(\mathbf{r}^\prime)}{|\mathbf{r}-\mathbf{r}^\prime |} -\frac{e^2 Z_i}{r}+v^{XC}(\rho_i(\mathbf{r}))+e^2 V^{MAD}_i \end{equation} Thus, for SA's, the only relevant multipole moments are the local charge excesses, \begin{equation} \label{qi} q_i\equiv q_{i,00}=\int_{v_i} d \mathbf{r} \rho_i(\mathbf{r}) - Z_i \end{equation} and the Madelung potentials are constant within each scattering volume to the values \begin{equation} \label{vmad} V^{MAD}_i \equiv V^{MAD}_{i,00}=\sum_{j \ne i} \frac{q_j}{R_{ij}} \end{equation} Remarkably, in Eqs.~(\ref{Ualtsm}) and~(\ref{effpotsm}), or in their SA counterparts, Eqs.~(\ref{Ualts}) and~(\ref{effpots}), the charge densities at different sites, 
$\rho_i(\mathbf{r})$, are coupled only with the Madelung potentials at the same sites, $V^{MAD}_i(\mathbf{r})$. Of course, the latter quantities contain information about the charge densities at all crystal sites. \begin{figure} \includegraphics[width=7cm]{fig01.eps} \caption{Triangular inequalities that must be satisfied in order to have a convergent multipole expansion: $r<|\mathbf{r}-(\mathbf{R}_2-\mathbf{R}_1)|$, $r<|\mathbf{R}_2-\mathbf{R}_1|$. The partition of the system volume in Voronoi polyhedra, marked by the lines, guarantees that the inequalities hold.} \label{triang} \end{figure} We wish to highlight that the multipole expansion does not converge for arbitrary partitions of the system. Actually, convergence requires that, for any pair of scattering centers, $\mathbf{R}_i$ and $\mathbf{R}_j$, and for any point $\mathbf{r}$ belonging to the scattering volume $v_i$, the triangular inequality illustrated in Fig.~\ref{triang} must be satisfied. It is easy to realize that partitioning the system into Voronoi polyhedra according to the Wigner-Seitz construction guarantees that the above condition is fulfilled everywhere except for the zero-measure set of points constituted by the surfaces of the polyhedra, thus ensuring the convergence of the theory. The Wigner-Seitz construction, therefore, constitutes a natural choice for the partitioning. \section{Generalized Coherent Potential Approximations (GCPA) and Charge Excesses Functional theory (CEF)} \subsection{Generalized Coherent Potential Approximations (GCPA) for the scattering matrices} \label{GCPA} In this Section we shall discuss a whole class of approximations for systems with atoms lying on a regular lattice, where, however, substitutional disorder is allowed for. Metallic alloys, both ordered intermetallic compounds and random alloys, constitute the most relevant example of such systems. Other examples are crystals with empty, or 'vacancy', sites.
Although, in general, these systems do not have translational invariance, the underlying 'geometrical lattice' does. Forty years ago, this consideration led Soven to formulate the Coherent Potential Approximation or CPA~\cite{soven}. Since then, the CPA has enjoyed considerable success. Its crucial virtue is that, by introducing an effective crystal in a 'mean field' fashion, it allows the use of many techniques designed for ordered systems that were already well developed at the time the theory was proposed. For many years, the DFT implementations of the CPA~\cite{F&S,DFTKKRCPA1} have been based on the assumption (in the following referred to as the single-site approximation or SS) that sites occupied by atoms of the same chemical species are characterized by the same effective Kohn-Sham potentials. Although the DFT-SS-CPA has proved able to accurately determine the electronic structure and the spectral properties of many alloy systems~\cite{Faulknerrev, Abrikosov_cpa,CPALF}, it nevertheless leads to an incorrect description of the electrostatics and of the total energies in metallic alloys~\cite{Magri}. Due to its mean field nature, in fact, the SS approximation neglects the fluctuations of the charge transfers and the electrostatic energy contributions associated with them. This failure has stimulated many authors, who have envisaged CPA generalizations aimed at including the effects of different chemical environments~\cite{isomorphous,Ujfalussy,SIMCPAI,SIMCPAII,ccCPA}. In this paper we define a {\it class of approximations} for DFT-based electronic theories in which most of the above CPA generalizations can be included. We shall refer to the approximations belonging to this class as Generalized CPA's (GCPA). A theory belonging to the GCPA class shall be identified by: (a) a theory specific {\it 'external model'}, i.e.
a rule for determining the effective Kohn-Sham 'site' potentials and the statistical weights $w_i$ to be assigned to each 'site', and (b) an approximate form for the kinetic part of the density functional, specified by Eqs.~(\ref{CPA0}-\ref{CPA3}) below. The latter feature is common to all the theories belonging to the GCPA class. Before discussing the ansatz for the kinetic functional, we wish to illustrate what a GCPA 'external model' can be on the basis of a few examples. The first example of a GCPA theory is, of course, the DFT implementation of the SS-CPA in Refs.~\onlinecite{DFTKKRCPA1} and ~\onlinecite{DFTKKRCPA2}. Its external model is the SS assumption (identical effective potentials for atoms of the same atomic species and weights proportional to the respective atomic concentrations). Another example is the Polymorphous CPA (PCPA) of Ujfalussy et al.~\cite{Ujfalussy, Faulkerphilmag,PCPA2001}. The external model is constructed using an auxiliary supercell containing $N$ atoms, usually hundreds or thousands, each with the same weight. The effective site potentials are reconstructed on the same supercell via Eq.~(\ref{effpot}); thus atoms of the same chemical species are allowed to have different potentials depending on their environments. This specific choice for the external model appears to be the reason why the PCPA theory substantially improves the alloy electrostatics while maintaining all the advantages of the standard SS-CPA for the spectral properties~\cite{PCPAapplications}. Other existing CPA-based approaches, e.g., the Non-Local CPA~\cite{NLCPAI,NLCPAII} or the SIM-CPA~\cite{SIMCPAI,SIMCPAII}, can also be considered as particular cases of GCPA's. We shall now introduce the kinetic ansatz that is common to all GCPA theories. For this purpose we prefer not to start from the definition of the functional. Rather, we shall follow a path closer to physical intuition and to the spirit of Soven's original CPA formulation\cite{soven}.
In analogy with SS-CPA calculations, the GCPA defines an effective periodic crystal whose sites are occupied by effective 'coherent' scatterers characterized by the single-site scattering matrix $\underline{t}^c(\varepsilon)$; the corresponding Green function shall be denoted $\underline{G}^c(\underline{t}^c)$. Then, if we consider the Green function of a single substitutional impurity with a single-site scattering matrix $\underline{t}_i$ embedded in the above effective crystal, $\underline{G}_{ii}(\underline{t}_i,\underline{t}^c)$, the GCPA consists in requiring that \begin{equation} \label{CPA0} \sum_i \frac{w_i}{N} \; \underline{G}_{ii}(\underline{t}_i,\underline{t}^c) = \underline{G}^c(\underline{t}^c) \end{equation} In other words, the weighted average of the impurity Green functions must be equal to the 'coherent' Green function $\underline{G}^c(\underline{t}^c)$. In Eq.~(\ref{CPA0}) the energy dependences have been dropped for the sake of simplicity and $N$ stands for the number of different scatterers in the model. \begin{figure} \includegraphics[width=7cm]{fig02.eps} \caption{\label{fig2} A pictorial illustration of the SS-CPA for a binary, A$_{c_A}$B$_{c_B}$, alloy (top) and of the GCPA (bottom). The rectangular frames have the meaning "the Green function of what is inside".} \end{figure} Eq.~(\ref{CPA0}) is illustrated in Fig.~\ref{fig2}.
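The self-consistency condition Eq.~(\ref{CPA0}) can be made concrete in the simplest scalar setting: a single-band alloy with a semicircular model DOS, where the coherent medium is described by a complex self-energy $\Sigma$ rather than by $\underline{t}^c$. This toy (the site energies, concentrations and band width are arbitrary choices of ours, and the model is not the DFT-MST implementation discussed in the text) iterates until the weighted average of the impurity Green functions equals the coherent one:

```python
import numpy as np

def semicircle_g(zeta, w=1.0):
    """Green function of a semicircular DOS of half-width w; the branch is
    chosen so that G ~ 1/zeta at large |zeta| and Im G < 0 for Im zeta > 0."""
    return 2.0 * (zeta - np.sqrt(zeta - w) * np.sqrt(zeta + w)) / w**2

def cpa_sigma(z, eps, weights, w=1.0, mix=0.5, tol=1e-12, itmax=2000):
    """Scalar analogue of Eq. (CPA0): find Sigma such that
    sum_i w_i G_ii(eps_i, Sigma) = G_c(Sigma), by damped fixed-point steps."""
    eps = np.asarray(eps, dtype=float)
    weights = np.asarray(weights, dtype=float)
    sigma = complex(np.dot(weights, eps))   # virtual-crystal starting guess
    for _ in range(itmax):
        gc = semicircle_g(z - sigma, w)
        # Impurity Green function: G_i = 1 / (1/G_c - (eps_i - Sigma)).
        gbar = np.sum(weights / (1.0 / gc - (eps - sigma)))
        new = sigma + 1.0 / gc - 1.0 / gbar
        if abs(new - sigma) < tol:
            return new
        sigma += mix * (new - sigma)
    return sigma
```

At self-consistency the averaged impurity Green function coincides with the coherent one, which is precisely the content of Eq.~(\ref{CPA0}); a GCPA differs from this toy in carrying the full $L$-space matrix structure and a site-resolved external model.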
In terms of the 'coherent' scattering-path matrix of the effective lattice, $\underline{\tau}^c$, and of the CPA 'projectors', $\underline{D}_i$, it can be rearranged as follows: \begin{equation} \label{CPA1} \sum_i \frac{w_i}{N} \underline{D}_i =\underline{1} \end{equation} \begin{equation} \label{CPA2} \underline{D}_i = \left[\underline{1}+\left(\left(\underline{t}_i\right)^{-1}- \left(\underline{t}^c\right)^{-1}\right) \underline{\tau}^c \right]^{-1} \end{equation} A GCPA theory is then an approximation for the $\underline{\underline{\tau}}$ matrix, whose diagonal elements are given by \begin{equation} \label{CPA3} \underline{\tau}_{ii}=\underline{D}_i \; \underline{\tau}^c \end{equation} while the diagonal matrix elements of the Green function are given by Eq.~(\ref{greenfun}) with $i=j$. Within the approximation defined by Eqs.~(\ref{CPA0}-\ref{CPA3}) above, the MST reviewed in the previous Section allows one to calculate the charge densities and the integrated DOS, $N(\varepsilon,\mu)$, and, through Eq.~(\ref{Ti}), the kinetic part of the Hohenberg-Kohn functional. Here we need not trace all the intermediate steps, which can be reproduced following the scheme of Ref.~\cite{DFTKKRCPA2}. The GCPA approximate version of the Lloyd formula, Eq.~(\ref{MKKR}), is given by: \begin{eqnarray} \label{Lloyd3} \frac{N(\varepsilon;\mu)}{N} &=&\frac{N^0(\varepsilon)}{N}+\frac{1}{\pi} Im \sum_i w_i Tr \ln \underline{\tau}^c(\varepsilon) \nonumber \\ &+&\frac{1}{\pi} Im \sum_i w_i Tr \ln \underline{D}_i(\varepsilon) \end{eqnarray} It has the very remarkable property that the integrated DOS $N(\varepsilon;\mu)$ and, hence, the kinetic functional are variational~\cite{DFTKKRCPA2} wrt both $\underline{t}^c$ and $\underline{\tau}^c$. In Sec.
II A, we have mentioned that in the exact MST the contributions to the integrated DOS associated with each lattice site and proportional to $(\ln \underline{\tau})_{ii}$ are coupled together because each element of the $\underline{\underline{\tau}}$ matrix depends on the scattering properties of all the lattice sites. Within the GCPA, the only source of coupling is $\underline{\tau}^c$: each local contribution depends on $\underline{\tau}^c$ and on the local potential, but the Lloyd formula contains no term that directly couples the potentials at different sites. As a consequence, the integrated DOS reduces to a sum of local contributions, coupled together only through $\underline{\tau}^c$, \begin{equation} \label{Lloyd4} N(\varepsilon;\mu) = \sum_i w_i N_i(\varepsilon;\mu) \end{equation} We shall call this very controlled and tractable kind of coupling {\it marginal coupling}. In view of further developments, it is convenient to isolate in Eq.~(\ref{Lloyd4}) two distinct terms. The first arises from the first two addends in Eq.~(\ref{Lloyd3}); it is identical for all sites and related to the effective background defined by the GCPA medium. The second depends on the local CPA projectors and, through them, on the local potentials.
In formulae: \begin{equation} \label{Lloydterm} N_i(\varepsilon;\mu) =N_i^{back}(\varepsilon;\mu)+\frac{1}{\pi} Im Tr \log \underline{D}_i(\varepsilon) \end{equation} Implementing the GCPA within the DFT gives for the kinetic functional of Eq.~(\ref{Ti}) the following marginally coupled form: \begin{eqnarray} \label{kin1st} T-\mu N = T^{back}(\mu) + \sum_i w_i T_i ([\rho_i],\mu) \end{eqnarray} where \begin{equation} \label{Tback} T^{back}(\mu) = - \int_{-\infty}^\mu d\varepsilon N^{back}(\varepsilon;\mu) \end{equation} and \begin{eqnarray} \label{Tigcpa} T_i([\rho_i],\mu)&=& \frac{1}{\pi} Im Tr \int_{-\infty}^\mu d\varepsilon \log \underline{D}_i(\varepsilon) \nonumber \\ &-&\int_{v_i} d \mathbf{r} \rho_i(\mathbf{r};\mu) v_i^{eff}(\mathbf{r};\mu) \end{eqnarray} As mentioned in Sect. II, in MST-based DFT calculations the only source of $O(N^3)$ scaling is the inversion of the multiple scattering matrix, Eq.~(\ref{MKKR}), required to obtain the scattering-path matrix $\underline{\underline{\tau}}$. This step is bypassed in a GCPA theory by approximating the relevant matrix elements $\underline{\tau}_{ii}$ via Eq.~(\ref{CPA3}) in terms of the local scattering properties and the coherent scattering matrix $\underline{\tau}^c$, the last of which is, in turn, obtained by an averaging process. For this reason, GCPA theories are $O(N)$, allowing for very substantial savings of computing time. Of course, the price for these savings is paid by the approximation implied by Eq.~(\ref{CPA3}). A diagrammatic analysis of these errors can be found in Ref.~\onlinecite{Gonis}. It is necessary to make a couple of remarks about the physical meaning of the GCPA in the present context and to highlight the differences with respect to the traditional way in which CPA-based theories have been introduced in the past. In the first place, the GCPA has been introduced here as an approximation for the Hohenberg-Kohn density functional.
As an approximation, it may well be used to describe an ordered alloy: its range of applicability is by no means confined to the realm of random alloys. In the second place, the introduction of the weights $w_i$ to be assigned to each scatterer makes GCPA theories suitable for dealing with sophisticated pictures of the order (or the disorder) in metallic alloys. This, of course, requires what we have called an 'external model'. In a forthcoming paper we shall discuss a (to some extent) self-consistent way to define an external model that is able to provide a picture of ordering phenomena in metallic alloys as a function of the temperature. \subsection{The DFT-MST-GCPA functional: the 'marginal coupling' property} In the present subsection we shall analyze certain formal properties of the GCPA approximations introduced in the previous subsection. All the above discussion can be summarized in the following approximate density functional: \begin{eqnarray} \label{gpot} \Omega^{GCPA}&=&T^{back}(\mu)+\sum_i w_i \bigg[ \omega^{GCPA}_i([\rho_i],\mu) \nonumber \\ &+&\frac{e^2}{2} \sum_L q_{i,L} V^{MAD}_{i,L} \bigg] \end{eqnarray} In Eq.~(\ref{gpot}) the $q_{i,L}$ are defined by Eq.~(\ref{qim}) and the {\it local part} of the GCPA {\it functional} by \begin{equation} \label{gpoti} \omega^{GCPA}_i([\rho_i],\mu)= T_i([\rho_i],\mu)+u_i([\rho_i],\mu) \end{equation} where the terms $T_i([\rho_i],\mu)$ and $u_i([\rho_i],\mu)$ are given by Eqs.~(\ref{Tigcpa}) and~(\ref{Ualts2}) above. We note that the local GCPA functional, $\omega^{GCPA}_i$, depends also on the atomic number of the atom at $\mathbf{R}_i$, $Z_i$, and on the volume and the shape of the $i$-th Voronoi polyhedron through the local potential energy term, $u_i$. In the following we shall make the simplifying assumption of identical Voronoi polyhedra for all the sites considered. In Eq.~(\ref{gpot}) the coupling potentials, $V^{MAD}_{i,L}$, are provided by the specific external model.
In the remainder of this Section we shall assume: \begin{equation} \label{vmadlam} V^{MAD}_{i,L}= \sum_{j \ne i} \sum_{L^\prime} \lambda_L \lambda_{L^\prime} w_j M_{ij,LL^\prime} q_{j,L^\prime} \end{equation} Appropriate choices of the coefficients $\lambda_L$ and of the weights, $w_i$, then give the SS-CPA or the PCPA. Furthermore, Eq.~(\ref{vmadlam}) can be used both for spherical and for full-potential charge reconstructions. As mentioned in Sect. II A, Eq.~(\ref{varfun}), the density functional is variational not only wrt the global charge density, $\rho(\mathbf{r})$, and the chemical potential $\mu$, but also wrt the charge densities in each scattering volume, $\rho_i(\mathbf{r})$. Moreover, as discussed in Sec. III A and in Ref.~\onlinecite{DFTKKRCPA2}, the GCPA density functional is variational wrt the effective medium scattering matrix, $\underline{\tau}^c$. Furthermore, in a GCPA theory, the background kinetic term, $T^{back}(\mu)$ in Eq.~(\ref{gpot}), depends on the electronic density only through $\underline{\tau}^c$ and $\mu$. Thus, the functional differentiation of Eq.~(\ref{gpot}) wrt the local densities, $\rho_i(\mathbf{r})$, gives the following set of coupled equations: \begin{equation} \label{varGCPA} \frac{\delta \omega^{GCPA}_i}{\delta \rho_i(\mathbf{r})} + V^{MAD}_i(\mathbf{r}) =0 \end{equation} where we have used Eqs.~(\ref{qim}), (\ref{vmadm}), (\ref{vmadm2}) and~(\ref{pdef}). Within a GCPA theory, solving the set of Euler-Lagrange equations~(\ref{varGCPA}), one for each scattering center, together with the equations that determine the chemical potential and the coherent scattering matrix $\underline{\tau}^c$, is completely equivalent to the minimization of the density functional. As is apparent, these Euler-Lagrange equations are coupled to each other {\it only} through the Madelung potentials, $\underline{\tau}^c$ and $\mu$.
Moreover, the functionals $\omega^{GCPA}_i(\rho_i(\mathbf{r}))$ are identical for sites occupied by the same chemical species. In order to understand the consequences of the above result let us consider, for instance, an alloy sample constituted by a large supercell. One may wish to calculate, {\it in the given sample}, the properties of the different 'atoms', first of all the charge densities, $\rho_i(\mathbf{r})$. Inside the sample, $\underline{\tau}^c$, $\mu$ and the cell geometry are fixed, thus the set of the $\rho_i(\mathbf{r})$ is completely determined by the values of the Madelung potentials $V^{MAD}_i(\mathbf{r})$ and by the atomic number $Z_i$ of the ion at the position ${\mathbf{R}_i}$. More generally, inside the given sample, any site-diagonal property $\Pi_i$ shall be completely determined by $Z_i$ and by the set of Madelung potentials. We can express this result as follows: \begin{equation} \label{Pii} \Pi_i=\Pi(Z_i,V_i^{MAD}(\mathbf{r})) \end{equation} Examples of such site-diagonal properties are the local contributions to the grand potential, the multipole moments, and the local DOS. The functional forms, one for each alloying species, given by Eq.~(\ref{Pii}) can often be fitted numerically with ease and then constitute a useful tool for the evaluation of the site quantities in the given sample. They are the source of the simple laws, such as the 'qV' laws, found empirically in calculations for extended metallic systems. This notwithstanding, GCPA theories are able to predict complex trends for certain site-diagonal properties, e.g. the site-resolved DOS's~\cite{PCPAapplications,PCPADOS}.
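As an illustration of how the functional forms in Eq.~(\ref{Pii}) can be fitted in practice, the following sketch recovers, species by species, the coefficients of a linear 'qV' law from synthetic site data; the numbers are invented for the example and merely stand in for the output of an actual supercell calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data standing in for supercell output: for each site, a species
# label, the monopole Madelung potential V_i and the charge excess q_i
species = np.array(['A'] * 50 + ['B'] * 50)
true_a = {'A': 1.8, 'B': 2.4}     # hypothetical slopes, one per species
true_k = {'A': 0.10, 'B': -0.10}  # hypothetical intercepts
V = rng.normal(0.0, 0.05, size=species.size)
q = np.array([(true_k[s] - v) / true_a[s] for s, v in zip(species, V)])

# fit the law  a_alpha * q_i + V_i = k_alpha  separately for each species
fits = {}
for alpha in ('A', 'B'):
    m = species == alpha
    slope, intercept = np.polyfit(q[m], V[m], 1)  # V = slope * q + intercept
    fits[alpha] = (-slope, intercept)             # (a_alpha, k_alpha)
```

A real application would replace the synthetic arrays by the site-resolved moments and potentials extracted from a converged GCPA run, one fit per alloying species.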
Since Eq.~(\ref{Pii}) allows one to evaluate, among other properties, the charge density of each fragment and, hence, the full charge density, $\rho(\mathbf{r})$, it follows, by virtue of the Hohenberg and Kohn theorem, that any ground state observable in the given sample is a {\it functional} of the set of the Madelung potentials at all the crystal sites, $V^{MAD}_i(\mathbf{r})$, and of the set of the atomic numbers {\it only}. Since the latter is, again, specified by the sample, the following theorem holds: {\it any ground state observable in the given sample is a functional of the Madelung potentials only}, or, equivalently, {\it a function of the set of coefficients}, $\{V^{MAD}\}$, that completely determine the Madelung potentials. \footnote{ In our notation $\{V^{MAD}_i\}$ stands for the set of all the Madelung coefficients of the $i$-th site, $\{V^{MAD}_{i,L_1}, V^{MAD}_{i,L_2}, \cdots\}$ and $\{V^{MAD}\}$ for the set of all the Madelung coefficients at all crystal sites, i.e. $\{V^{MAD}\}=\{V^{MAD}_{i_1}\} \cup \{V^{MAD}_{i_2}\} \cup \cdots$. The notations $\{q_i\}$ and $\{q\}$ have a similar meaning.} Since, by virtue of Eq.~(\ref{vmadlam}), the coefficients in the set $\{V^{MAD}\}$ are linear functions of the set of the multipole moments, $\{q\}$, the above theorem implies the corollary that any ground state property of the sample is a function of the same moments. Within the GCPA and for the specific sample given, it is then possible to reformulate the DFT in terms of the charge multipole moments.
By neglecting a constant term with the physical meaning of the grand potential contribution due to the mean GCPA 'atom', Eq.~(\ref{gpot}) can be written as \begin{eqnarray} \label{gpotcg} &&\widetilde{\Omega}^{GCPA}(\{q\},\mu)=\sum_i w_i \widetilde{\omega}^{GCPA}_i(\{q_i\}, Z_i) \nonumber \\ &&+\frac{e^2}{2} \sum_{i,j,L,L^\prime} w_i w_j \lambda_L \lambda_{L^\prime} M_{ij,LL^\prime} q_{i,L} q_{j,L^\prime} \nonumber \\ &&- \mu \sum_i w_i q_{i,00} \end{eqnarray} In deriving Eq.~(\ref{gpotcg}), we have used Eq.~(\ref{vmadlam}) and the fact that, since $\omega^{GCPA}_i$ is completely determined by the local density, $\rho_i(\mathbf{r})$, it cannot depend on the multipole moments at other sites. Moreover, we have isolated the contribution proportional to the chemical potential, $\mu$. Having introduced an explicit dependence on $\mu$, the last term in Eq.~(\ref{gpotcg}) can be thought of as a way of enforcing the global electroneutrality. This is unnecessary if we consider a specified sample in which, of course, $\mu$ has a precise, fixed value. However, since the term proportional to $\mu$ has precisely the form it must have, its introduction is equivalent to extending the validity of Eq.~(\ref{gpotcg}) to {\it all samples specified by the same mean atomic concentrations and by the same value for} $\underline{\tau}^c$. To summarize, we have established the following results. Within the GCPA class of approximations the Hohenberg and Kohn density functional can be recast in the form of Eq.~(\ref{gpotcg}). It consists of (a) local terms, $\widetilde{\omega}^{GCPA}_i$ for the $i$-th scattering site, consisting of functions of the charge multipole moments that are identical for sites with the same chemical occupation; (b) a bilinear form coupling the charge multipole moments at different sites, with coupling coefficients $M_{ij,LL^\prime}$ defined by the crystal geometry; (c) a term proportional to the chemical potential that ensures the global electroneutrality.
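The structure of Eq.~(\ref{gpotcg}) can be made concrete by a toy minimization in which, purely for illustration, the local terms are taken quadratic in the monopole moments, the coupling matrix is an invented $1/d$-type stand-in for the geometric Madelung coefficients, and all weights are set to one; stationarity wrt the chemical potential enforces the electroneutrality.

```python
import numpy as np

def minimize_cg(a, q0, M):
    """Minimize 0.5*sum_i a_i (q_i - q0_i)^2 + 0.5*q.M.q - mu*sum_i q_i
    subject to sum_i q_i = 0, with mu acting as a Lagrange multiplier.
    Stationarity gives (diag(a) + M) q = a*q0 + mu*1."""
    F = np.diag(a) + M                        # Hessian of the quadratic form
    x = np.linalg.solve(F, a * q0)            # stationary point at mu = 0
    y = np.linalg.solve(F, np.ones_like(a))   # linear response to mu
    mu = -x.sum() / y.sum()                   # enforce electroneutrality
    return x + mu * y, mu

# toy 4-site 'sample': local curvatures, zero-field moments and a ring-like
# invented coupling matrix playing the role of the Madelung coefficients
a = np.array([2.0, 3.0, 2.0, 3.0])
q0 = np.array([0.1, -0.1, 0.1, -0.1])
d = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]], float)
M = np.where(d > 0, 0.5 / np.maximum(d, 1), 0.0)
q, mu = minimize_cg(a, q0, M)
# stationarity: a_i (q_i - q0_i) + (M q)_i = mu at every site, and sum_i q_i = 0
```

The same linear-algebra structure survives when the local terms are general functions of the moments; only the local solve becomes nonlinear.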
The functional defined by Eq.~(\ref{gpotcg}) is identical for all the alloy samples characterized by the same mean atomic concentrations and the same value for the coherent scattering-path matrix $\underline{\tau}^c$. Evidently, it constitutes a {\it coarse grained} version of the DFT because the mathematical definition of the multipole moments, Eq.~(\ref{qim}), does not completely determine the charge density. The latter is determined by the multipole moments only within the GCPA theory. This reduction of the relevant information has been obtained at the price (a) of the GCPA approximation and (b) of having restricted the consideration to a specific sample. Nevertheless, no restriction has been made about the size of the sample, which can therefore be chosen in such a way as to guarantee an appropriate description for a fixed concentration ensemble, as we shall discuss at the end of the present section. Having recast the GCPA {\it functional} as a sum of {\it functions} of the charge multipole moments has obvious mathematical advantages. However, we have not yet completely determined the functional form of the local energetic contributions $\widetilde{\omega}^{GCPA}_i(\{q_i\},Z_i)$. In order to do this, we need to make the hypothesis that, in the sample considered, the distribution of the Madelung potential coefficients, $\{V^{MAD}\}$, is continuous in the range of values that the same potentials assume in the sample. This is consistent with the observations in Refs.~\onlinecite{FWS1}, \onlinecite{FWS2}, \onlinecite{CEF}. Let us consider two scattering sites, say $i$ and $j$, occupied by the same chemical species, $\alpha$, at which the Madelung potentials take very close values, $V^{MAD}_i(\mathbf{r})=V^{MAD}(\mathbf{r})$ and $V^{MAD}_j(\mathbf{r})=V^{MAD}(\mathbf{r})+\Delta V^{MAD}(\mathbf{r})$.
The local energetic contributions, the charge densities and the local multipole moments shall be: $\tilde{\omega}^{GCPA}_i=\tilde{\omega}^{GCPA}_\alpha$, $\rho_i(\mathbf{r})=\rho(\mathbf{r})$ and $q_{i,L}=q_L$ for the $i$-th site, and $\tilde{\omega}^{GCPA}_j=\tilde{\omega}^{GCPA}_\alpha+\Delta \tilde{\omega}^{GCPA}_\alpha$, $\rho_j(\mathbf{r})=\rho(\mathbf{r})+\Delta \rho(\mathbf{r})$ and $q_{j,L}=q_L+\Delta q_L$ for the $j$-th site. To first order in $\Delta \rho(\mathbf{r})$ we have: \begin{eqnarray} \Delta \tilde{\omega}^{GCPA}_\alpha & = &\int_{v_i=v_j} d \mathbf{r} \bigg( \frac{\delta \tilde{\omega}^{GCPA}_\alpha}{\delta \rho(\mathbf{r})} \bigg)_{\rho_i(\mathbf{r})=\rho(\mathbf{r})} \Delta \rho(\mathbf{r}) \nonumber \\ & = &- \int_{v_i=v_j} d \mathbf{r} V^{MAD}(\mathbf{r}) \Delta \rho(\mathbf{r}) \nonumber \end{eqnarray} where Eq.~(\ref{varGCPA}) has been used. Substituting the expansion of the Madelung potential, Eq.~(\ref{vmadm}), and the expressions for the charge multipole moments, Eq.~(\ref{qim}), then gives: \begin{equation} \label{domega} \Delta \tilde{\omega}^{GCPA}_\alpha = -\sum_L V^{MAD}_L \Delta q_L \end{equation} Once integrated over the moments, Eq.~(\ref{domega}) gives \begin{equation} \label{intomega} \tilde{\omega}^{GCPA}_\alpha(\{q\}) =\tilde{\omega}^{GCPA}_\alpha(\{q^0\}) -\sum_L \int_{\{q^0\}}^{\{q\}} V^{MAD}_{L,\alpha}(\{q^\prime\}) d q_L^\prime \end{equation} Eq.~(\ref{intomega}) can easily be evaluated numerically from the 'qV' data, $V^{MAD}_{L,\alpha}=V^{MAD}_{L,\alpha}(\{q\})$, obtained as an output of GCPA calculations. Up to a constant with the meaning of the local energy contribution at $\{q\}=\{q^0\}$, it determines the local energies for each chemical species $\alpha$. Eqs.~(\ref{domega}) and~(\ref{intomega}) have been obtained under very broad conditions: the differentiability of the kinetic functional~\cite{lieb,Englisch} and the monotonicity of the 'qV' laws.
The first is the usual requirement for the convergence of the Kohn-Sham scheme of the DFT, while the second condition is certainly verified by all the GCPA calculations reported in the literature, including those executed at extremely high values of the Madelung potential (see Fig. 7 in the next subsection and the related discussion). In the remainder of the present Section we shall make a few general comments about the validity of the framework defined by the GCPA theory in comparison with the exact density functional. (i) The fact that the effective potential and the potential energy functional can be decomposed into site contributions coupled together only through the Madelung potentials is an exact consequence of the LDA and has nothing to do with the GCPA. This kind of coupling is {\it 'marginal'} in the sense that, although not necessarily small, it has the simple and tractable functional form which arises from the bilinear terms involving the multipole moments in Eq.~(\ref{gpotcg}). Although this is beyond the purpose of the present paper, we notice that most non-local density functionals offered in the literature~\cite{Dreizler&Gross,Dreizler&daProvidencia} are actually local wrt the density gradients, thus most non-local schemes will remain {\it marginally} coupled in the above sense. \begin{figure} \includegraphics[width=6.8cm,height=3.06cm]{fig03.eps} \caption{\label{comparison} The Local Interaction Zones (LIZ) used in PCPA (left frame) and LSGF (right frame) calculations are marked by the dark areas. For each scattering site, the kinetic functional is evaluated using the appropriate single-site scattering matrices, $t_i$, inside the LIZ, and the effective CPA scattering matrix, $t^c$, outside.} \end{figure} (ii) Splitting the kinetic functional into local contributions marginally coupled through the coherent scattering matrix $\underline{\tau}^c$ is a simplification due to the GCPA.
In fact, it has been obtained by assuming averaged boundary conditions at the surfaces of the Voronoi polyhedra through Eq.~(\ref{CPA0}). An estimate of the errors so induced can be obtained by comparing PCPA vs. Locally Self-Consistent Green Function (LSGF) calculations~\cite{LSGF} executed on the same supercell. As sketched in Fig.~(3), both calculations evaluate the kinetic contribution from the $i$-th site to the functional by solving the problem of a single impurity, in the case of the PCPA, or of an impurity cluster, the Local Interaction Zone (LIZ), for the LSGF. In both cases the scattering matrices outside the LIZ are set to the coherent scattering matrix, $\underline{t}^c$. PCPA calculations can then be viewed as LSGF calculations with only one atom in the LIZ. This argument also suggests that, wrt GCPA calculations, exact DFT results include, for each site, corrections depending on its chemical environment. (iii) We have already seen that the coarse grained version of the GCPA functional, Eq.~(\ref{gpotcg}), holds for all the alloy configurations characterized by a specified value of $\underline{\tau}^c$ in a fixed concentration ensemble. This could appear a serious limitation, since it looks unlikely that, e.g., $\underline{\tau}^c$ could have the same functional dependence on the energy for two different systems. In general, in a GCPA theory, $\underline{\tau}^c$ is a ground state property determined not only by the mean concentrations but also by the distributions of the Madelung potentials for each alloying species. As opposed to the SS model, where these distributions are trivial, more sophisticated external models, e.g. the PCPA, give complicated charge and Madelung potential distributions. How, then, could the GCPA functional be useful in such cases?
\begin{figure} \includegraphics[width=7cm]{fig04.eps} \caption{The PCPA supercells shown in frames (a) and (b) contain, respectively, $N \rightarrow \infty$ atoms in a random alloy configuration and $n$ atoms in an ordered configuration, both at the same mean atomic concentrations. $N$ is large enough to guarantee an appropriate description of a random alloy within a GCPA theory. Similarly, $n$ has been chosen to permit the description of an ordered alloy up to some length scale $l$. The supercell in frame (c) is identical to that in (a) except for the dashed region, which contains $n$ atoms in the same ordered configuration as in (b). The cell (c) is therefore able to describe an ordering fluctuation up to the scale set by $l$. } \end{figure} As argued by Faulkner et al.~\cite{PCPAmath}, the PCPA theory applied to ideal random alloys gives well-defined values for all physical properties. This is because, in perfect random alloys, the distribution of the chemical environments is easily obtained by statistical considerations and is given by the appropriate multinomial distributions. Therefore, the PCPA random alloy constitutes a privileged reference system whose physical properties, including $\underline{\tau}^c$, can be approximated up to an arbitrary accuracy by letting the number of atoms in the PCPA supercell, $N$, go to infinity (see Fig. 4a). We believe that the same $\underline{\tau}^c$ obtained for a random alloy at a given concentration can be used for building a physically clean, though approximate, theory also for ordered alloys at the same concentration. In the next section we shall provide numerical evidence for this; here we present a more formal argument. Imagine that an ordered array containing $n$ atoms (Fig. 4b) is able to account for the properties of some ordered alloy configuration, up to a length scale, $l$, that can be made large at will in the $n \rightarrow \infty$ limit. In Fig.
4c we draw a supercell, a part of which is constituted by the supercell of Fig. 4b, while the remaining $N-n$ sites are occupied as in the random alloy supercell of Fig. 4a. We can think of the supercell in Fig. 4c as representing a fluctuation of an ordered phase in a random alloy matrix and as describing the physical properties of such a fluctuation up to the same length scale $l$ as in Fig. 4b. We are implicitly using the common idea of 'locality' in physics, or, in a more specific context, of the 'nearsightedness' of the DFT\cite{nearsightedness}. However, as Eq.~(\ref{CPA1}) implies, the difference between the coherent scattering-path matrices corresponding to Figs. 4a and 4c, $\underline{\tau}^c_a - \underline{\tau}^c_c$, is proportional to the ratio $n/N$ and can therefore be made small at will in the $N \rightarrow \infty$ limit, for {\it any} value of $n$. We conclude that the coherent scattering-path matrix of the random alloy, $\underline{\tau}^c_a$, is able to account for the physical properties of the ordered configuration considered. The limitation $n/N \ll 1$ that comes from the above argument does not impose any upper bound on the maximum length scale at which chemical fluctuations can be studied, and it is of no practical importance provided that $N$ is large enough to ensure a good approximation for $\underline{\tau}^c_a$. As reported in the literature~\cite{PCPAapplications,Faulkerphilmag}, it seems that $N$ of about 100 is already enough. (iv) Although we have suggested that the coherent scattering matrix from random alloy GCPA calculations can be used for ordered alloys too, we are aware of the limitations of such a physical picture. For instance, a GCPA theory always implies finite quasiparticle lifetimes~\cite{physrep} and, hence, a smearing of the peaks of the Bloch spectral function (BSF).
\subsection{A generalized version of the Charge Excess Functional Theory} As a matter of fact, the analysis of DFT supercell calculations for metallic alloys suggests the existence of simple relationships between the charge excesses at the lattice sites, $q_{i,00}$, and the Madelung potentials at the same sites, $V^{MAD}_{i,00}$. Namely, simple linear laws, one for each alloying species, have been found to hold, say \begin{equation} \label{qv} a_i q_{i,00} + V^{MAD}_{i,00} = k_i \end{equation} where $a_i$ and $k_i$ have the same numerical values for atoms of the same chemical species in the given supercell. Examples of the linear 'qV' laws obtained from PCPA calculations for a binary and a quaternary alloy are reported in Figs. 5 and 6. \begin{figure}\label{binqV} \includegraphics[width=7cm]{fig05.eps} \caption{'qV' relationships for a bcc random $Cu_{0.50}Zn_{0.50}$ alloy. The excesses of electrons, $q_i=q_{i,00}$, are plotted vs. the Madelung potentials, $V^{MAD}_i$. The results have been obtained from $\ell_{MAX}=0$ PCPA calculations for a supercell containing 432 atoms at lattice constant $a=5.50$ a.u.. Circles represent Cu atoms and triangles Zn atoms. Note that positive values for $q_i$ correspond to negative net charges and vice versa. } \end{figure} \begin{figure} \includegraphics[width=7cm]{fig06.eps} \caption{\label{quaternqV} 'qV' relationships for an fcc random $Al_{0.25}Cu_{0.25}Ni_{0.25}Zn_{0.25}$ alloy. The excesses of electrons, $q_i=q_{i,00}$, are plotted vs. the Madelung potentials, $V^{MAD}_i$. The results have been obtained from $\ell_{MAX}=0$ PCPA calculations for a supercell containing 108 atoms at lattice constant $a=6.88$ a.u.. Circles, squares, triangles and crosses stand for Cu, Ni, Zn and Al atoms.
} \end{figure} It is interesting to observe that similar linear 'qV' laws can be derived from the GCPA functional, Eq.~(\ref{gpotcg}), for random alloys by a second order series expansion about the zero Madelung field multipole moments, $\{q^0\}$, which can be obtained by solving the following set of equations: \begin{equation} \label{q0ldef} \frac{\partial \widetilde{\omega}^{GCPA}_i}{\partial q_{i,L}} =0 \end{equation} This procedure leads to a Ginzburg-Landau configurational 'Hamiltonian' in which the relevant fields are constituted by the values of the multipole moments at each lattice site. In formulae: \begin{widetext} \begin{equation} \label{gpotcef} \widetilde{\Omega}^{CEF}(\{q\}, \mu)= \frac{1}{2}\sum_{i,L,L^\prime} w_i a_{i,LL^\prime} (q_{i,L}-q^0_{i,L}) (q_{i,L^\prime}-q^0_{i,L^\prime}) +\frac{1}{2} \sum_{i,j,L,L^\prime} w_i w_j \lambda_L \lambda_{L^\prime} M_{ij,LL^\prime} q_{i,L} q_{j,L^\prime} - \mu \sum_i q_{i,00} \end{equation} \end{widetext} where we have omitted the term $\widetilde{\Omega}^{CEF}(\{q^0\}, \mu=0)$, which represents the GCPA energy at zero Madelung field and chemical potential and is constant in a fixed concentration ensemble. The coefficients $a_{i,LL^\prime}$ are given by the second derivatives of the GCPA functional \begin{equation} \label{alldef} a_{i,LL^\prime}= \bigg( \frac{\partial^2 \widetilde{\omega}^{GCPA}_i(\{q_i\})} {\partial q_{i,L} \partial q_{i,L^\prime}} \bigg)_{\{q_i\}=\{q^0_i\}} \end{equation} The functional of Eq.~(\ref{gpotcef}) constitutes a generalization of the Charge Excess Functional (CEF) proposed in Ref.~\onlinecite{CEF} for discussing the charge transfers in metallic alloys and shall be referred to in the following as the CEF. The novel feature here is that Eq.~(\ref{gpotcef}) includes not only the charge excesses, $q_{i,00}$, but also the charge multipole moments with $\ell>0$. The minimization of the CEF functional $\widetilde{\Omega}^{CEF}$ wrt
its variables, the set of the multipole moments, $\{q\}$, and the chemical potential, $\mu$, gives \begin{equation} \label{gleq} \sum_{L^\prime} a_{i,LL^\prime} (q_{i,L^\prime}-q^0_{i,L^\prime}) + \sum_{j \ne i} \sum_{L^\prime} M_{ij,LL^\prime} \; q_{j,L^\prime} = \mu \, \delta_{L,00} \end{equation} and \begin{equation} \label{electroneut} \sum_i q_{i,00}=0 \end{equation} Using the definition of the Madelung potentials, Eq.~(\ref{vmadlam}), and setting \begin{equation} \label{kdef} k_{i,L}=\sum_{L^\prime} a_{i,LL^\prime} q^0_{i,L^\prime} + \mu \, \delta_{L,00} \end{equation} it is easy to show that Eq.~(\ref{gleq}) for $L=(0,0)$ coincides with the linear laws given by Eq.~(\ref{qv}). Versions of Eqs.~(\ref{gleq}), (\ref{electroneut}) and (\ref{kdef}) with the angular momentum summations truncated at $\ell=0$ can be found in Ref.~\onlinecite{CEF}. We wish to highlight that the CEF derivation from the GCPA functional is based on the assumption, common to all Ginzburg-Landau theories~\cite{Khachaturyan,S2}, that the homogeneously disordered phase, in the present case the random alloy phase, can be the starting point for a perturbative treatment of ordering or segregation phenomena. As discussed in the previous subsection, in the GCPA context, this amounts to conjecturing that the coherent scattering-path matrix $\underline{\tau}^c$ of a random alloy can be used for obtaining a physical picture of concentration fluctuations or, in other words, that it is able to {\it represent} such fluctuations. \begin{figure} \includegraphics[width=7cm]{fig07.eps} \caption{\label{lightqv}'qV' relationships for light substitutional impurities (vacancies, H and Li atoms) dissolved in bcc Al, from CPA+LF calculations. The quantity $Q$ is the number of valence electrons at the impurity site; therefore $Q=0$ corresponds to 0 electrons for vacancies and H atoms and 2 electrons for Li.
The linear behaviors observed for small fields ($V^{MAD}< 1$ a.u.) are superseded by power law trends (see the log-log plot in the inset) for very high fields.} \end{figure} We wish to close this Section with a few comments. The principal result of this paragraph, the CEF functional of Eq.~(\ref{gpotcef}), has been obtained by a series expansion of the GCPA functional about the values that the multipole moments would have in the absence of coupling. The series has been terminated at the lowest order at which differences with respect to SS approximations are expected. This, not surprisingly, is enough to obtain a physical picture of the charge transfers in metallic alloys. For a given alloy configuration, the linear Euler-Lagrange equations obtained by minimizing the CEF can be easily solved for the charge multipole moments. The procedure requires the inversion of the matrix $\underline{\underline{F}}$ of elements \begin{equation} \label{fmatrix} F_{ij,LL^\prime} = a_{i,LL^\prime} \; \delta_{ij} + M_{ij,LL^\prime} \end{equation} As we have shown elsewhere~\cite{CEF,brunomatsci}, for a given alloy configuration, the value of the CEF functional at its minimum has the physical meaning of the total energy of that configuration. The ambiguity due to the presence of the above mentioned concentration dependent constant can be resolved by comparing CEF and GCPA calculations for a single configuration in a fixed concentration ensemble. In the previous subsection we have described a general procedure, based on the numerical integration of the 'qV' laws, for evaluating the functional form of $\tilde{\omega}_i^{GCPA}(\{q_i\})$. Of course, if the random alloy $\underline{\tau}^c$ were able to represent concentration fluctuations and the 'qV' laws were linear, the GCPA and the CEF functionals would coincide. We do not think that the 'qV' laws can be truly linear. The argument is as follows.
The local excesses of electrons, $q_{i,00}$, in accordance with physical intuition and with the results plotted in Figs.~5 and 6, are non-increasing functions of the Madelung potential $V^{MAD}_{i,00}$. If the 'qV' laws were really linear, $q_{i,00}$ would decrease indefinitely and eventually reach unphysical values, $q_{i,00}<-Z_i$, corresponding to negative charge densities. Actually we expect that the linear laws can no longer be valid when all valence electrons are expelled from the site. This circumstance would correspond to some critical value for the charge excess, say $q_{i,00}^{crit}$. Before this critical value is reached, the 'qV' laws should exhibit a crossover to an asymptotic behavior, say $q_{i,00} \rightarrow q_{i,00}^{crit}$ as $V^{MAD}_{i,00} \rightarrow \infty$. We have tested this conjecture by executing CPA+LF calculations~\cite{CPALF} for single impurities, vacancies, H or Li atoms, embedded in Al. The results are shown in Fig. 7, where we plot $Q=q_{i,00}-q_{i,00}^{crit}$ for the impurity site as a function of the relevant Madelung fields. In all the cases considered, a linear regime is clearly visible at low fields. At very high fields, a crossover to a power-law dependence is observed, with the number of electrons tending to the critical value from above. The crossover field is comparable with the host band width. We recall that in CPA+LF calculations $\underline{\tau}^c$ is that of the host while the Madelung potential is just an adjustable parameter. While this is a sensible way of studying the response of the impurity to the perturbing field, it does not imply that the whole range of perturbations considered is physically meaningful. We do not think that such high fields, corresponding to the tunneling regime at the impurity site, could occur in real systems, as this would require too large a deficit of electrons at the impurity nearest neighbors. Hence, Fig.
7, while supporting the view that the linearity of the 'qV' laws and the CEF are just approximations, does not support the possibility that, at least for metallic systems, appreciable deviations from linearity or failures of the CEF are likely to occur. Another point we wish to address concerns the value of the chemical potential $\mu$ in Eq.~(\ref{gpotcef}). In a recent paper, Drchal et al.~\cite{Drchal} argued that $\mu$ should always be zero since the Fourier transform of the Madelung coefficients with $L=L^\prime=(0,0)$ diverges as $k \rightarrow 0$, implying that the sum of the charge excesses $\sum q_{i,00}$ must vanish, automatically satisfying the electroneutrality constraint. The observation of Drchal et al. is correct for infinite systems, while for finite supercells, even with periodic boundary conditions, the same Fourier transform always remains finite. $k$, in fact, can take only the values of the reciprocal space vectors that constitute the tiling of the supercell considered~\cite{NLCPAII}. The set of the allowed values for $k$ includes $0$ only for infinitely large supercells. In most practical calculations, then, $\mu$ is necessary, although usually it takes small non-zero values. \section{Numerical results} In this Section we present a series of numerical tests designed to study the limits of validity of the GCPA and CEF theoretical frameworks. The central issue here is to investigate the realm of validity of the linear 'qV' laws, Eq.~(\ref{qv}) or~(\ref{gleq}), and of the energetics implied by the CEF functional, Eq.~(\ref{gpotcef}). Furthermore, we shall try to answer two questions: (i) to what extent is the CEF able to approximate GCPA calculations and (ii) how do the predictions from the CEF and the GCPA compare with 'exact' DFT calculations for ordered systems. The GCPA theory chosen for these tests is the PCPA~\cite{Ujfalussy}, which, being based on a supercell approach, allows for easy comparison with
'exact' DFT calculations. Several kinds of calculations shall be presented in this Section. The 'exact' DFT results used for comparison are LDA full-potential LAPW calculations produced using the WIEN2K {\it ab initio} package~\cite{WIEN2Ka,WIEN2Kb}. They are referred to below as LAPW. In all cases about 10$^4$ k-points in the full Brillouin zone have been used, the spherical harmonics expansion of the potentials in the muffin-tin spheres has been truncated at $\ell=6$ and the parameter $R_{MT} \cdot K_{MAX}$ has been set to 7. The PCPA calculations have been performed by a conveniently modified version of our KKR-CPA code~\cite{numerics}. All PCPA calculations are based on the ASA approximation for the site potentials, use several thousand k-points in the full Brillouin zone and 31 energies over a complex integration contour. For both LAPW and PCPA calculations, the treatment of core electrons is fully relativistic while a non-relativistic approximation is used for valence states. Finally, we present CEF calculations~\cite{CEF,philmag} with the charge multipolar expansion truncated at $\ell=0$. The concentration-dependent parameters required by the CEF have been obtained from the linear regressions of the 'qV' data generated from supercells with random occupancies and the required mean atomic concentrations; they are reported in Tables~\ref{tabi} and~\ref{tabii}. Depending on the source of the parameters, the CEF calculations shall be referred to as CEF-PCPA or CEF-LAPW. Using the formalism of the previous Section, for both PCPA and CEF calculations we set $w_i=1$ for all lattice sites, $\lambda_{00}=1$ and $\lambda_{\ell m}=0$ for $\ell >0$. \begin{table} \caption{\label{tabi} CEF parameters obtained by the linear regression of the 'qV' data from PCPA calculations for random Cu$_c$Zn$_{1-c}$ alloys in bcc or fcc lattices.
$C_{Cu}$ and $C_{Zn}$ are defined as the difference between 1 and the correlations obtained from the regressions for the Cu and Zn site charges. All the quantities are expressed in atomic units. CEF calculations using the coefficients presented in this Table are referred to as CEF-PCPA. } \begin{ruledtabular} \begin{tabular}{cc|ccccc} & c & $a_{Cu}$ & $a_{Zn}$ & $k_{Cu}-k_{Zn}$ & $C_{Cu}$ & $C_{Zn}$ \\ \hline & 0.20 & 1.223 & 1.211 & 0.146 & 3~10$^{-7}$ & 1~10$^{-6}$ \\ & 0.25 & 1.225 & 1.214 & 0.147 & 4~10$^{-7}$ & 1~10$^{-6}$ \\ & 0.33 & 1.223 & 1.215 & 0.148 & 5~10$^{-7}$ & 2~10$^{-6}$ \\ bcc & 0.50 & 1.219 & 1.214 & 0.146 & 5~10$^{-7}$ & 2~10$^{-6}$ \\ & 0.67 & 1.215 & 1.214 & 0.144 & 3~10$^{-7}$ & 2~10$^{-6}$ \\ & 0.75 & 1.214 & 1.214 & 0.144 & 3~10$^{-7}$ & 2~10$^{-6}$ \\ & 0.80 & 1.213 & 1.214 & 0.144 & 3~10$^{-7}$ & 9~10$^{-7}$ \\ \hline & 0.20 & 1.220 & 1.212 & 0.138 & 3~10$^{-7}$ & 1~10$^{-6}$ \\ & 0.25 & 1.221 & 1.214 & 0.140 & 3~10$^{-7}$ & 9~10$^{-7}$ \\ & 0.33 & 1.222 & 1.216 & 0.142 & 3~10$^{-7}$ & 1~10$^{-6}$ \\ fcc & 0.50 & 1.222 & 1.217 & 0.143 & 5~10$^{-7}$ & 2~10$^{-6}$ \\ & 0.67 & 1.223 & 1.222 & 0.145 & 5~10$^{-7}$ & 1~10$^{-6}$ \\ & 0.75 & 1.222 & 1.223 & 0.145 & 3~10$^{-7}$ & 1~10$^{-6}$ \\ & 0.80 & 1.222 & 1.222 & 0.145 & 3~10$^{-7}$ & 2~10$^{-6}$ \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{\label{tabii} CEF parameters obtained by the linear regression of the 'qV' data from LAPW calculations for random Cu$_c$Zn$_{1-c}$ alloys in bcc or fcc lattices. $C_{Cu}$ and $C_{Zn}$ are defined as the difference between 1 and the correlations obtained from the regressions for the Cu and Zn site charges. All the quantities are expressed in atomic units. CEF calculations using the coefficients presented in this Table are referred to as CEF-LAPW. 
} \begin{ruledtabular} \begin{tabular}{cc|ccccc} & c & $a_{Cu}$ & $a_{Zn}$ & $k_{Cu}-k_{Zn}$ & $C_{Cu}$ & $C_{Zn}$ \\ \hline & 0.25 & 2.968 & 2.181 & 0.456 & 3~10$^{-2}$ & 2~10$^{-2}$ \\ & 0.33 & 2.704 & 2.327 & 0.445 & 4~10$^{-3}$ & 8~10$^{-3}$ \\ bcc & 0.50 & 2.811 & 2.307 & 0.413 & 9~10$^{-3}$ & 5~10$^{-2}$ \\ & 0.67 & 3.388 & 3.351 & 0.590 & 3~10$^{-3}$ & 7~10$^{-3}$ \\ & 0.75 & 2.586 & 2.652 & 0.432 & 3~10$^{-2}$ & 1~10$^{-2}$ \\ \hline & 0.25 & 2.457 & 1.949 & 0.360 & 5~10$^{-3}$ & 9~10$^{-4}$ \\ fcc & 0.50 & 2.287 & 2.130 & 0.350 & 9~10$^{-3}$ & 3~10$^{-3}$ \\ & 0.75 & 2.646 & 2.317 & 0.399 & 4~10$^{-3}$ & 2~10$^{-4}$ \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{qV laws} In Sect. III we have presented the CEF functional as an approximation for the GCPA functional and have shown how this is equivalent to assuming the linearity of the 'qV' laws. In Figs.~5 and 6, we plot the 'qV' curves from our PCPA calculations for the binary bcc Cu$_{0.50}$Zn$_{0.50}$ and the quaternary Al$_{0.25}$Cu$_{0.25}$Ni$_{0.25}$Zn$_{0.25}$ fcc random alloys. It is surprising to observe how accurately the PCPA data can be fitted by straight lines. The correlation coefficients obtained from the linear regression of the same data differ from unity by about $10^{-6}$. Similar very high correlations are always obtained from the analysis of PCPA 'qV' data, as is evident from Table~\ref{tabi}. As shown in Table~\ref{tabii}, the LAPW data also present high correlations, although the corresponding linear fits are not perfect and their correlations deviate from unity by $10^{-2}$ or $10^{-3}$. This notwithstanding, as argued in Sect. III C, we believe that the linearity of the 'qV' relationships within the PCPA is just an approximation. In order to check how accurate it is, we have studied one of the most difficult realistic cases, that of a high-charge-transfer ordered alloy, namely the CuZn system.
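The regressions behind Tables~\ref{tabi} and~\ref{tabii} amount to ordinary least squares on per-species $(q_i, V_i)$ pairs. A minimal sketch (synthetic, exactly linear data; the value $a=1.22$ is borrowed from Table~\ref{tabi} for illustration only):

```python
import numpy as np

def fit_qv(q, v):
    """Least-squares line through per-species 'qV' data.

    Returns the slope, the intercept and the deviation of the squared
    correlation coefficient from unity (the quantity C of Tables I-II).
    """
    slope, intercept = np.polyfit(v, q, 1)
    r = np.corrcoef(v, q)[0, 1]
    return slope, intercept, 1.0 - r ** 2

# Synthetic, noise-free data: the linear law a*q + V = k implies a
# slope of -1/a; here a = 1.22 and k = 0.073, for illustration only.
rng = np.random.default_rng(0)
v = rng.uniform(-0.2, 0.2, size=64)   # Madelung potentials (a.u.)
q = (0.073 - v) / 1.22                # charge excesses from the linear law
slope, intercept, c_dev = fit_qv(q, v)
print(slope, intercept, c_dev)        # slope ~ -1/1.22, c_dev ~ 0
```

On real PCPA or LAPW data the quantity $1-r^2$ plays the role of the deviations $C_{Cu}$ and $C_{Zn}$ listed in the Tables.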
This system has been studied with many different theoretical approaches~\cite{Althoff,Crisan,alphabrass,cunizn,optcuzn,Tulip}. It is also relevant, for our present concerns, that the total energy differences between fcc and bcc geometrical alloy arrangements are relatively small. We have executed calculations for the whole set of 62 bcc- and fcc-based structures reported in Refs.~\onlinecite{Curtarolo} and~\onlinecite{Curtarolothesis}. These structures include several ordered crystals for each of the following Cu atomic concentrations: 0.20, 0.25, 0.33, 0.50, 0.67, 0.75 and 0.80. In order to facilitate the comparison, the lattice constants have been kept fixed at 5.5 and 6.9 a.u. for bcc- and fcc-based lattices, respectively. The results for bcc- and fcc-based alloys are reported in Tables~\ref{tabiii} and~\ref{tabiv}, respectively. \begingroup \squeezetable \begin{table*} \caption{\label{tabiii} \scriptsize{Charge excesses and total energies per atom for bcc-based Cu$_c$Zn$_{1-c}$ alloys. The first two columns on the left give, for each system, the mean Cu atomic concentration ($c$) and, when available, the supercell identifier in the database of Ref.~\cite{Curtarolothesis} (conf). "R" followed by a number, e.g., R16, stands for a quasirandom supercell containing the corresponding number of atoms, not included in the database. In the third column $n_{unl}$ indicates the mean number of unlike nearest neighbors of Zn sites. The columns from 4th to 7th report the MSD of the charge excesses, $<(\Delta q)^2>$, obtained by the comparison of different theories: PCPA vs. CEF-PCPA (a), LAPW vs. CEF-PCPA (b), LAPW vs. CEF-LAPW (c), LAPW vs. the model of Ref.~\onlinecite{Magri} (d). Columns from 8th to 11th: total energies per atom from PCPA, CEF-PCPA, LAPW and CEF-LAPW calculations.
The energy zero is given, for each concentration, by the PCPA prediction for the ground state.}} \begin{ruledtabular} \begin{tabular}{ccc|cccc|cccc} & & & \multicolumn{4}{c|}{$<(\Delta q)^2>$} & \multicolumn{4}{c}{$\Delta E$ (mRy)} \\ \cline{4-11} c & conf & $n^{unl}$ & \textbf{a} & \textbf{b} & \textbf{c} & \textbf{d} & PCPA & CEF-PCPA & LAPW & CEF-LAPW \\ \hline & 92 & 1.5 & 5~10$^{-11}$ & 1~10$^{-4}$ & - & - & 0.038 & 0.040 & -0.015 & - \\ 0.20 & 98 & 2.0 & 2~10$^{-9}$ & 4~10$^{-4}$ & - & - & 0.000 & 0.000 & 0.000 & - \\ \hline & 69 & 2.0 & 3~10$^{-9}$ & 4~10$^{-5}$ & 2~10$^{-6}$ & - & 0.741 & 0.742 & 0.991 & 0.556 \\ & 72 & 1.3 & 9~10$^{-9}$ & 4~10$^{-4}$ & 2~10$^{-5}$ & - & 1.966 & 1.971 & 3.951 & 2.437 \\ & 75 & 2.0 & 1~10$^{-9}$ & 1~10$^{-5}$ & 1~10$^{-6}$ & - & 0.824 & 0.823 & 1.089 & 0.631 \\ 0.25 & 78 & 2.7 & 1~10$^{-9}$ & 4~10$^{-4}$ & 5~10$^{-7}$ & - & 0.865 & 0.864 & 0.980 & 0.838 \\ & 81 & 2.7 & 3~10$^{-8}$ & 2~10$^{-4}$ & 4~10$^{-7}$ & - & 0.353 & 0.353 & 0.050 & 0.246 \\ & 83 & 2.7 & 2~10$^{-8}$ & 3~10$^{-4}$ & 1~10$^{-6}$ & - & 0.327 & 0.327 & 0.194 & 0.260 \\ & 86 & 2.7 & 6~10$^{-8}$ & 4~10$^{-4}$ & 2~10$^{-5}$ & - & 0.000 & 0.000 & 0.000 & 0.000 \\ & R16 & 2.2 & 1~10$^{-8}$ & 1~10$^{-4}$ & 3~10$^{-6}$ & - & 0.797 & 0.799 & 1.117 & 0.669 \\ \hline & 63 & 3.0 & 3~10$^{-8}$ & 2~10$^{-4}$ & 2~10$^{-5}$ & - & 0.000 & 0.000 & 0.000 & 0.000 \\ 0.33 & 65 & 2.0 & 2~10$^{-9}$ & 3~10$^{-4}$ & 2~10$^{-5}$ & - & 1.741 & 1.747 & 2.950 & 2.129 \\ & 67 & 4.0 & 2~10$^{-8}$ & 2~10$^{-4}$ & 3~10$^{-5}$ & - & 0.078 & 0.078 & -0.519 & 0.068 \\ & R18 & 3.3 & 6~10$^{-9}$ & 3~10$^{-4}$ & 4~10$^{-5}$ & - & 0.558 & 0.562 & 0.729 & 0.838 \\ \hline & 60 & 4.0 & 4~10$^{-9}$ & 4~10$^{-5}$ & 5~10$^{-6}$ & 3~10$^{-4}$ & 1.661 & 1.662 & 3.457 & 1.188 \\ & 61 & 8.0 & 3~10$^{-9}$ & 1~10$^{-3}$ & 6~10$^{-6}$ & 3~10$^{-3}$ & 0.000 & 0.000 & 0.000 & 0.000 \\ & 71 & 2.0 & 2~10$^{-9}$ & 1~10$^{-3}$ & 4~10$^{-5}$ & 9~10$^{-4}$ & 4.107 & 4.115 & 8.657 & 5.089 \\ 0.50 & 74 & 4.0 & 
3~10$^{-9}$ & 5~10$^{-7}$ & 4~10$^{-7}$ & 3~10$^{-4}$ & 1.823 & 1.824 & 3.666 & 1.342 \\ & 77 & 4.0 & 4~10$^{-11}$ & 2~10$^{-4}$ & 1~10$^{-6}$ & 7~10$^{-5}$ & 2.736 & 2.739 & 4.804 & 2.404 \\ & 80 & 6.0 & 7~10$^{-9}$ & 4~10$^{-4}$ & 8~10$^{-7}$ & 3~10$^{-4}$ & 0.885 & 0.883 & 1.666 & 0.557 \\ & 85 & 4.0 & 7~10$^{-9}$ & 2~10$^{-4}$ & 1~10$^{-5}$ & 7~10$^{-4}$ & 1.007 & 1.006 & 2.757 & 0.646 \\ & R16 & 4.3 & 3~10$^{-9}$ & 2~10$^{-4}$ & 3~10$^{-6}$ & 4~10$^{-4}$ & 1.989 & 1.989 & 3.806 & 1.613 \\ \hline & 62 & 6.0 & 7~10$^{-10}$ & 2~10$^{-4}$ & 6~10$^{-7}$ & - & 0.000 & 0.000 & 0.000 & 0.000 \\ 0.67 & 64 & 4.0 & 8~10$^{-12}$ & 3~10$^{-4}$ & 2~10$^{-5}$ & - & 1.698 & 1.703 & 2.671 & 2.021 \\ & 66 & 8.0 & 2~10$^{-10}$ & 1~10$^{-4}$ & 2~10$^{-7}$ & - & 0.076 & 0.077 & -0.650 & 0.061 \\ & R18 & 6.7 & 3~10$^{-9}$ & 3~10$^{-4}$ & 1~10$^{-6}$ & - & 0.545 & 0.549 & 0.508 & 0.870 \\ \hline & 68 & 6.0 & 2~10$^{-11}$ & 2~10$^{-5}$ & 1~10$^{-6}$ & - & 0.726 & 0.732 & 1.364 & 0.566 \\ & 70 & 4.0 & 3~10$^{-9}$ & 6~10$^{-4}$ & 3~10$^{-5}$ & - & 1.927 & 1.935 & 3.395 & 2.505 \\ & 73 & 6.0 & 2~10$^{-10}$ & 9~10$^{-6}$ & 2~10$^{-6}$ & - & 0.806 & 0.812 & 1.390 & 0.642 \\ 0.75 & 76 & 8.0 & 1~10$^{-9}$ & 5~10$^{-4}$ & 4~10$^{-6}$ & - & 0.850 & 0.852 & 1.022 & 0.873 \\ & 79 & 8.0 & 3~10$^{-9}$ & 2~10$^{-4}$ & 1~10$^{-6}$ & - & 0.345 & 0.349 & 0.464 & 0.251 \\ & 82 & 8.0 & 4~10$^{-9}$ & 3~10$^{-4}$ & 2~10$^{-6}$ & - & 0.322 & 0.323 & 0.462 & 0.269 \\ & 84 & 8.0 & 3~10$^{-8}$ & 4~10$^{-4}$ & 4~10$^{-6}$ & - & 0.000 & 0.000 & 0.000 & 0.000 \\ & R16 & 6.8 & 1~10$^{-9}$ & 2~10$^{-4}$ & 3~10$^{-6}$ & - & 0.782 & 0.788 & 1.278 & 0.685 \\ \hline & 87 & 6.0 & 5~10$^{-10}$ & 1~10$^{-4}$ & - & - & 0.033 & 0.038 & 0.327 & - \\ 0.80 & 93 & 8.0 & 4~10$^{-10}$ & 6~10$^{-4}$ & - & - & 0.000 & 0.000 & 0.000 & - \\ \end{tabular} \end{ruledtabular} \end{table*} \endgroup The charges from the CEF-PCPA are, in practice, identical to those obtained from the PCPA theory for the ordered systems. 
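The mean square displacement used throughout Tables~\ref{tabiii} and~\ref{tabiv} to compare two sets of site charges is, explicitly (the charge values below are invented for illustration):

```python
import numpy as np

def msd(q_a, q_b):
    """Mean square displacement <(Delta q)^2> between two sets of site
    charge excesses computed with different theories."""
    q_a = np.asarray(q_a, dtype=float)
    q_b = np.asarray(q_b, dtype=float)
    return float(np.mean((q_a - q_b) ** 2))

# Invented charge sets for a four-site cell, differing by ~1e-4 per site.
q_pcpa = [0.0910, 0.0880, -0.0900, -0.0890]
q_cef  = [0.0911, 0.0879, -0.0899, -0.0891]
print(msd(q_pcpa, q_cef))   # 1e-8 up to rounding
```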
In order to represent the size of these tiny differences, we report in Tables~\ref{tabiii} and~\ref{tabiv} the mean square displacement between the two sets of calculated charges, $<(\Delta q)^2>$. In the worst case, the bcc-based DO$_2$ structure, identified in Table~\ref{tabiii} by the number 86, we find $<(\Delta q)^2>=6~10^{-8}$. Such an excellent agreement has been obtained for the whole set of ordered structures considered, in spite of the fact that the CEF input has been obtained from random supercells. \begingroup \squeezetable \begin{table*} \caption{\label{tabiv} \scriptsize{Charge excesses and total energies per atom for fcc-based Cu$_c$Zn$_{1-c}$ alloys. The first two columns on the left give, for each system, the mean Cu atomic concentration ($c$) and, when available, the supercell identifier in the database of Ref.~\cite{Curtarolothesis} (conf). "R" followed by a number, e.g., R16, stands for a quasirandom supercell containing the corresponding number of atoms, not included in the database. In the third column $n_{unl}$ indicates the mean number of unlike nearest neighbors of Zn sites. The columns from 4th to 7th report the MSD of the charge excesses, $<(\Delta q)^2>$, obtained by the comparison of different theories: PCPA vs. CEF-PCPA (a), LAPW vs. CEF-PCPA (b), LAPW vs. CEF-LAPW (c), LAPW vs. the model of Ref.~\onlinecite{Magri} (d). Columns from 8th to 11th: total energies per atom from PCPA, CEF-PCPA, LAPW and CEF-LAPW calculations.
The energy zero is given, for each concentration, by the PCPA prediction for the ground state.}} \begin{ruledtabular} \begin{tabular}{ccc|cccc|cccc} & & & \multicolumn{4}{c|}{$<(\Delta q)^2>$} & \multicolumn{4}{c}{$\Delta E$ (mRy)} \\ \cline{4-11} c & conf & $n^{unl}$ & \textbf{a} & \textbf{b} & \textbf{c} & \textbf{d} & PCPA & CEF-PCPA & LAPW & CEF-LAPW \\ \hline & 35 & 2.5 & 3~10$^{-10}$ & 2~10$^{-5}$ & - & - & 0.500 & 0.503 & 0.926 & - \\ 0.20 & 39 & 3.0 & 7~10$^{-9}$ & 7~10$^{-5}$ & - & - & 0.000 & 0.000 & 0.000 & - \\ \hline & 12 & 3.3 & 2~10$^{-9}$ & 7~10$^{-5}$ & 4~10$^{-6}$ & - & 0.515 & 0.516 & 1.078 & 0.438 \\ & 15 & 2.7 & 1~10$^{-8}$ & 2~10$^{-4}$ & 4~10$^{-7}$ & - & 1.379 & 1.386 & 2.151 & 1.658 \\ & 18 & 3.3 & 2~10$^{-9}$ & 2~10$^{-5}$ & 2~10$^{-6}$ & - & 0.566 & 0.568 & 1.054 & 0.471 \\ 0.25 & 21 & 3.3 & 2~10$^{-9}$ & 1~10$^{-4}$ & 8~10$^{-6}$ & - & 0.680 & 0.685 & 1.510 & 0.616 \\ & 24 & 4.0 & 2~10$^{-8}$ & 2~10$^{-4}$ & 5~10$^{-6}$ & - & 0.000 & 0.000 & 0.000 & 0.000 \\ & 26 & 4.0 & 2~10$^{-8}$ & 2~10$^{-4}$ & 3~10$^{-7}$ & - & 0.068 & 0.069 & 0.073 & 0.051 \\ & 29 & 2.0 & 5~10$^{-9}$ & 4~10$^{-4}$ & 2~10$^{-5}$ & - & 1.817 & 1.824 & 4.202 & 2.349 \\ & R16 & 3.0 & 1~10$^{-8}$ & 2~10$^{-4}$ & 1~10$^{-6}$ & - & 1.015 & 1.020 & 1.705 & 1.169 \\ \hline & 6 & 4.0 & 1~10$^{-10}$ & 5~10$^{-5}$ & - & - & 1.179 & 1.185 & 1.016 & - \\ 0.33 & 8 & 5.0 & 1~10$^{-8}$ & 1~10$^{-4}$ & - & - & 0.000 & 0.000 & 0.000 & - \\ & 10 & 3.0 & 6~10$^{-11}$ & 3~10$^{-4}$ & - & - & 1.800 & 1.807 & 3.300 & - \\ \hline & 3 & 8.0 & 7~10$^{-9}$ & 3~10$^{-4}$ & 5~10$^{-6}$ & 1~10$^{-4}$ & 0.139 & 0.144 & -0.075 & 0.111 \\ & 4 & 6.0 & 2~10$^{-9}$ & 5~10$^{-6}$ & 2~10$^{-5}$ & 6~10$^{-5}$ & 1.075 & 1.081 & 2.141 & 0.961 \\ & 14 & 4.0 & 1~10$^{-9}$ & 6~10$^{-4}$ & 5~10$^{-6}$ & 2~10$^{-4}$ & 2.895 & 2.905 & 4.803 & 3.657 \\ 0.50 & 17 & 7.0 & 3~10$^{-9}$ & 3~10$^{-5}$ & 2~10$^{-6}$ & 1~10$^{-6}$ & 0.717 & 0.720 & 1.121 & 0.605 \\ & 20 & 6.0 & 5~10$^{-10}$ & 5~10$^{-5}$ & 1~10$^{-7}$ & 
5~10$^{-5}$ & 1.429 & 1.434 & 2.464 & 1.353 \\ & 23 & 8.0 & 7~10$^{-9}$ & 2~10$^{-4}$ & 1~10$^{-6}$ & 4~10$^{-5}$ & 0.000 & 0.000 & 0.000 & 0.000 \\ & 28 & 3.0 & 2~10$^{-9}$ & 9~10$^{-4}$ & 6~10$^{-5}$ & 5~10$^{-4}$ & 3.339 & 3.350 & 6.745 & 4.694 \\ & R16 & 6.8 & 3~10$^{-9}$ & 2~10$^{-4}$ & 2~10$^{-6}$ & 6~10$^{-5}$ & 0.913 & 0.918 & 1.519 & 0.972 \\ \hline & 5 & 8.0 & 3~10$^{-10}$ & 9~10$^{-5}$ & - & - & 1.102 & 1.224 & 1.283 & - \\ 0.67 & 7 & 10.0 & 1~10$^{-8}$ & 1~10$^{-4}$ & - & - & 0.000 & 0.000 & 0.000 & - \\ & 9 & 6.0 & 3~10$^{-9}$ & 3~10$^{-4}$ & - & - & 1.743 & 1.866 & 3.224 & - \\ \hline & 11 & 10.0 & 3~10$^{-10}$ & 9~10$^{-5}$ & 2~10$^{-5}$ & - & 0.549 & 0.550 & 1.009 & 0.501 \\ & 13 & 8.0 & 5~10$^{-9}$ & 5~10$^{-4}$ & 3~10$^{-6}$ & - & 1.479 & 1.482 & 2.230 & 1.986 \\ & 16 & 10.0 & 3~10$^{-10}$ & 2~10$^{-5}$ & 1~10$^{-5}$ & - & 0.604 & 0.605 & 1.054 & 0.535 \\ 0.75 & 19 & 10.0 & 5~10$^{-10}$ & 1~10$^{-4}$ & 5~10$^{-6}$ & - & 0.729 & 0.731 & 1.083 & 0.709 \\ & 22 & 12.0 & 3~10$^{-9}$ & 2~10$^{-4}$ & 5~10$^{-6}$ & - & 0.000 & 0.000 & 0.000 & 0.000 \\ & 25 & 12.0 & 2~10$^{-9}$ & 2~10$^{-4}$ & 5~10$^{-6}$ & - & 0.073 & 0.073 & -0.193 & 0.057 \\ & 27 & 6.0 & 1~10$^{-8}$ & 5~10$^{-4}$ & 1~10$^{-5}$ & - & 1.944 & 1.949 & 3.654 & 2.799 \\ & R16 & 3.0 & 2~10$^{-9}$ & 3~10$^{-4}$ & 2~10$^{-6}$ & - & 1.088 & 1.091 & 1.790 & 1.409 \\ \hline & 30 & 10.0 & 1~10$^{-9}$ & 6~10$^{-5}$ & - & - & 0.544 & 0.550 & -4.023 & - \\ 0.80 & 36 & 12.0 & 2~10$^{-8}$ & 7~10$^{-5}$ & - & - & 0.000 & 0.000 & 0.000 & - \\ \end{tabular} \end{ruledtabular} \end{table*} \endgroup In a previous Letter~\cite{CEF} we have shown that the CEF is able to accurately reproduce the charges from LSMS calculations. Moreover, the parameters extracted from ordered structure calculations can be used to predict the charges for random structures and vice versa. The quality of the CEF predictions was very good in both directions, with $<(\Delta q)^2>$ of the order of $10^{-6}$, i.e.
about three orders of magnitude larger than what we have found in the comparison of CEF and PCPA. Since the LSMS calculations presented in Ref.~\onlinecite{CEF} were based on the ASA, we surmise that the modest loss of accuracy of CEF predictions for LSMS with respect to PCPA calculations constitutes a measure of the importance of the scattering effects from nearest neighbors. These effects, in fact, can be accounted for only in a mean-field fashion by the PCPA. We have also investigated the effects of the spherical approximation for the atomic potentials by executing full-potential LAPW calculations. In Fig. 8 we plot the site charge excesses obtained from LAPW vs. the number of unlike nearest neighbors of the same sites, for all the structures corresponding to equimolar concentrations. In Tables~\ref{tabiii} and~\ref{tabiv} we report the results for $<(\Delta q)^2>$ at all the concentrations. As apparent from Fig. 8, the trends of $q_i$ are not easily accounted for by the nearest-neighbor environment alone~\cite{Magri}, especially for bcc-based structures. \begin{figure}\label{magrifig7} \includegraphics[width=7cm]{fig08.eps} \caption{ Charge excesses, $q_i$, vs. the number of unlike nearest neighbors of the corresponding site, $n^{unl}_i$, from LAPW calculations for many bcc- and fcc-based, ordered and disordered configurations of Cu$_{0.50}$Zn$_{0.50}$ alloys. Circles and triangles represent charges on Cu and Zn sites, respectively. Left frame: bcc-based alloys; right frame: fcc-based alloys. The straight lines in each panel indicate the best fits obtained by the model of Magri et al.~\cite{Magri}. } \end{figure} This notwithstanding, CEF-PCPA calculations reasonably account for the LAPW charges. As can be seen in the columns marked as (b) of Tables~\ref{tabiii} and~\ref{tabiv}, $<(\Delta q)^2>$ is usually of the order of $10^{-4}$, sometimes less, and about $10^{-3}$ in the worst case.
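The straight-line fits of Fig. 8, in the spirit of the model of Magri et al., assume that the site charge depends only on the number of unlike nearest neighbors; such a fit, together with its residual MSD, can be sketched as follows (the Cu-site data below are invented for illustration):

```python
import numpy as np

def magri_fit(n_unl, q):
    """Fit q = lam * n_unl + q0, the single-parameter trend of the model
    of Magri et al., and return the residual MSD of the charges."""
    lam, q0 = np.polyfit(n_unl, q, 1)
    resid = q - (lam * n_unl + q0)
    return lam, q0, float(np.mean(resid ** 2))

# Invented Cu-site data: a linear trend in the number of unlike nearest
# neighbors plus small environment-dependent deviations.
n_unl = np.array([2.0, 4.0, 4.0, 6.0, 8.0, 8.0])
q_cu = 0.02 * n_unl + 0.01 + np.array([0.0, 0.002, -0.002, 0.0, 0.001, -0.001])
lam, q0, resid_msd = magri_fit(n_unl, q_cu)
print(lam, q0, resid_msd)
```

The residual MSD of such a fit is the quantity reported in columns (d) of the Tables.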
In order to understand how much these results can be affected by the PCPA input coefficients, we have repeated the CEF calculations by fitting the coefficients from LAPW 'qV' data for the random alloy configurations corresponding to the relevant stoichiometries and reported in Tables~\ref{tabiii} and~\ref{tabiv}. As shown in the columns (c) of Tables~\ref{tabiii} and~\ref{tabiv}, this reduces $<(\Delta q)^2>$ by about one order of magnitude. \begin{figure}\label{dqs8} \includegraphics[width=7cm]{fig09.eps} \caption{ Charge excesses on Cu sites, $q_i$, for the same Cu$_{0.50}$Zn$_{0.50}$ alloys as in Fig. 8. Circles, triangles and squares represent the values calculated by CEF-PCPA, CEF-LAPW and the model of Ref.~\onlinecite{Magri}, respectively. On the abscissa the charge excesses obtained by LAPW calculations are reported. Left frame: bcc-based alloys; right frame: fcc-based alloys. In order to improve readability we also plot the straight lines $q=q^{LAPW}$. The deviations from these lines measure the accuracy of the various calculations. } \end{figure} Interestingly, the present CEF-LAPW calculations confirm the observations about the transferability of CEF parameters made in Ref.~\onlinecite{CEF}, where the CEF charges have been compared with LSMS results. As a typical example, let us consider the results for $c=0.50$ reported in Table~\ref{tabiii}. The $<(\Delta q)^2>$ obtained are always small: $4 \; 10^{-5}$ in the worst case and $4\; 10^{-7}$ in the best, corresponding respectively to structures 71 and 74, while an intermediate value, $3 \; 10^{-6}$, is found for the structure $R16$, from which the CEF-LAPW coefficients have been obtained. The same holds for all the concentrations, both for bcc and fcc structures. A look at the columns (c) in Tables~\ref{tabiii} and~\ref{tabiv}, in fact, shows that while excellent results have been obtained for all the supercells, the random structure from which the CEF coefficients have been extracted is not necessarily the best performing.
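At $\ell=0$, a CEF charge prediction of the kind discussed above reduces to solving the linear equations $a_i q_i + \sum_j M_{ij} q_j = k_i + \mu$ under the constraint $\sum_i q_i = 0$, with $\mu$ acting as a Lagrange multiplier (here $k_i$ denotes the $\mu$-independent part of Eq.~(\ref{kdef})). A minimal sketch with an invented two-site Madelung matrix:

```python
import numpy as np

def cef_charges(a, k, M):
    """Solve the ell=0 CEF equations a_i q_i + sum_j M_ij q_j = k_i + mu
    together with the electroneutrality constraint sum_i q_i = 0.

    mu enters as a Lagrange multiplier; the bordered linear system below
    determines the site charges q_i and mu simultaneously.
    """
    n = len(a)
    bordered = np.zeros((n + 1, n + 1))
    bordered[:n, :n] = np.diag(a) + M   # the matrix F of Eq. (fmatrix)
    bordered[:n, n] = -1.0              # -mu moved to the left-hand side
    bordered[n, :n] = 1.0               # electroneutrality row
    rhs = np.append(np.asarray(k, dtype=float), 0.0)
    sol = np.linalg.solve(bordered, rhs)
    return sol[:n], sol[n]

# Invented two-site example (a Cu-like and a Zn-like sublattice) with a
# toy symmetric Madelung matrix; numbers are illustrative, not physical.
a = np.array([1.22, 1.21])              # a_Cu, a_Zn as in Table I
k = np.array([0.073, -0.073])           # k_Cu - k_Zn ~ 0.146
M = np.array([[0.0, -0.4], [-0.4, 0.0]])
q, mu = cef_charges(a, k, M)
print(q, mu, q.sum())                   # the charges sum to zero
```

For a real supercell the same solve runs over all sites and, if needed, over the higher multipole channels.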
The above arguments about the charges should not lead to the conclusion that, for this purpose, any fit is comparable with any other. This is clearly shown in Fig. 9, where the performances of PCPA, CEF-PCPA, CEF-LAPW and the model of Magri et al.~\cite{Magri} are compared for the equiatomic concentration alloys. In the same Figure, the distances from the diagonal lines measure the differences between the LAPW charges and those from the various approximations, for all the Cu sites in the supercells with $c=0.5$. It is evident there that the results by CEF-LAPW, marked by open triangles, are much better than those by the other approximations. \subsection{Total energies} In Tables~\ref{tabiii} and~\ref{tabiv} we compare the total energies obtained for CuZn alloys by CEF, PCPA and LAPW calculations. We have used the same extended set of bcc- and fcc-based structures listed in Ref.~\onlinecite{Curtarolothesis}. Since the CEF energies contain a concentration-dependent constant, we report the quantity $\Delta E$, defined as the energy difference between the structure at hand and the structure that, at the same concentration, has the lowest energy according to PCPA calculations. The same $\Delta E$ is plotted in Figs. 10 and 11 for the Cu concentrations $c=0.25$, $0.50$, and $0.75$, for which the database of Ref.~\onlinecite{Curtarolothesis} contains a number of structures sufficient to identify trends. \begin{figure} \includegraphics[width=7cm]{fig10.eps} \caption{\label{fig10} Total energy differences with respect to the PCPA predicted ground state, $\Delta E$, for bcc-based Cu$_c$Zn$_{1-c}$ alloys. The labels on the abscissa identify the various configurations in the database of Ref.~\onlinecite{Curtarolothesis}; $R$ stands for structures with randomly generated chemical occupations containing 16 atoms with mean Cu contents c=0.25, 0.5 and 0.75.
Open triangles, open circles, open squares and filled triangles indicate LAPW, CEF-LAPW, CEF-PCPA and PCPA calculations, respectively. Lines are a guide for the eye. } \end{figure} Our first observation is that the total energies obtained by PCPA and CEF-PCPA calculations perfectly overlap on the scale of Figs. 10 and 11, where they are represented as filled triangles and open squares, respectively. As reported in Tables~\ref{tabiii} and~\ref{tabiv}, in fact, the values obtained by the two methods differ by a few $\mu$Ry per atom, which is comparable with the accuracy of the calculations. Thus, PCPA and CEF-PCPA give indistinguishable results both for the charges (as discussed in the previous subsection) and for the total energies. Therefore, it is compelling to conclude that the CEF theory is a numerically excellent and powerful tool to reproduce GCPA electronic structure calculations with much less effort. Moreover, since CEF-PCPA calculations use as an input the 'qV' data obtained from {\it random} supercells, the perfect agreement obtained for the properties of so many different {\it ordered} structures that have not been used to fit the CEF coefficients has only one possible explanation. In accordance with the discussion in Sects. III B and III C, both the following conditions must be fulfilled: (i) the coherent scattering-path matrix $\underline{\tau}^c$ of the random alloy configuration used as an input must be representative of the whole set of ordered structures considered; (ii) the linearity of the 'qV' laws must be almost perfectly observed over the whole range of values that the charge excesses and the Madelung potentials take for the structures considered. In Sect. III we have offered several arguments supporting the validity of both points above, but we have not been able to provide an analytical demonstration. We think that the numerical evidence found is very strong and compelling.
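The quantity $\Delta E$ tabulated above is simply the energy measured, within each concentration ensemble, from the structure that PCPA predicts to be the ground state; for instance (values illustrative):

```python
import numpy as np

def delta_e(energies, ref_energies):
    """Energies measured from the structure with the lowest *reference*
    (here: PCPA) energy at the given concentration; applied to another
    method's energies, a negative entry signals a different minimum."""
    energies = np.asarray(energies, dtype=float)
    i0 = int(np.argmin(ref_energies))   # PCPA-predicted ground state
    return energies - energies[i0]

# Illustrative values (mRy per atom) for one concentration ensemble.
e_pcpa = [0.741, 1.966, 0.824, 0.000]
e_lapw = [0.991, 3.951, 1.089, 0.000]
print(delta_e(e_pcpa, e_pcpa))
print(delta_e(e_lapw, e_pcpa))
```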
\begin{figure} \includegraphics[width=7cm]{fig11.eps} \caption{\label{fig11} Total energy differences with respect to the PCPA predicted ground state, $\Delta E$, for fcc-based Cu$_c$Zn$_{1-c}$ alloys. The labels on the abscissa identify the various configurations in the database of Ref.~\onlinecite{Curtarolothesis}; $R$ stands for structures with randomly generated chemical occupations containing 16 atoms with mean Cu contents c=0.25, 0.5 and 0.75. Open triangles, open circles, open squares and filled triangles indicate LAPW, CEF-LAPW, CEF-PCPA and PCPA calculations, respectively. Lines are a guide for the eye. } \end{figure} In accordance with the discussion in Sect. III, the success of the CEF theory in reproducing the charges or, equivalently, the Madelung potentials guarantees the reproducibility of {\it any} ground state property within the GCPA theory through Eq.~(\ref{Pii}). Hence, even spectral properties such as the DOS or the Bloch spectral functions, although buried, are contained in the CEF functional, which, if the input parameters are extracted from a GCPA theory, inherits all the good and the bad features of the same GCPA theory. The comparison with LAPW calculations is more difficult, for two different reasons. In the first place, these calculations do not assume mean boundary conditions for the wave functions and use a procedure equivalent to the full calculation of the $\underline{\underline{\tau}}$ matrix. In the second place, within LAPW calculations, the charge multipole summation is truncated at some high $\ell$ value. With these clarifications, the agreement between LAPW and PCPA or CEF-PCPA calculations (there is no reason for discussing the last two models separately) is quite good. As a general rule, the two sets of calculations find the same ground states at the concentrations considered.
In the few exceptions (which correspond to the negative figures in the LAPW columns of Tables~\ref{tabiii} and~\ref{tabiv}) the disagreement can be explained by the fact that the structures indicated as the ground state by the two theories are almost degenerate in energy. Also the general trends for the total energies are well reproduced, as is visible in Figs. 10 and 11, though the PCPA generally underestimates the energy differences. Fitting the CEF parameters from the LAPW 'qV' laws generally improves the agreement. At variance with what was found for the charges, however, the improvement is quite modest. In summary: the CEF appears able to reproduce GCPA calculations essentially perfectly for both ordered and disordered metallic systems. The reasons why the agreement is so excellent are not yet completely understood. Although CEF and GCPA theories are both coarse-grained versions of the DFT, contrary to what the numerical results suggest, they are not the {\it same} theory. In fact, as discussed in Sect. III, in order to coincide with the CEF, GCPA theories should (i) exactly observe linear 'qV' laws and (ii) lead to coherent scattering matrices independent of the configuration in a fixed concentration ensemble. For metallic alloys, these conditions appear plausible and the numerical evidence strongly supports the view that both are nearly satisfied. However, we must highlight that condition (i) is not verified for pathologically high values of the Madelung field. The comparison with LAPW calculations suggests that both coarse-grained theories, GCPA and CEF, are able to reproduce semiquantitatively the total energies of the alloy configurations considered. In particular, the results by the coarse-grained theories are strongly correlated with those by LAPW. This fact is better elucidated by Figs. 10 and 11, where configurations belonging to the same fixed concentration ensemble are ordered in such a way as to have increasing PCPA total energies.
If the same ordering were not observed by some other method for some configuration, this would show up as a local minimum in the corresponding curve. The most visible of such events occurs in Fig. 11 for $c=0.75$, where the curve corresponding to LAPW calculations presents a very weak local minimum at configuration 25. The examination of Figs. 10 and 11 suggests that the coarse grained theories are able to give qualitatively correct predictions about ordering for the alloys considered, while the fact that they generally underestimate the corresponding energies could imply incorrect estimates of the corresponding transition temperatures. \section{Conclusions} We wish to conclude this paper with a summary and a few comments. We have introduced the class of the GCPA theories, which are characterized by (i) a specific ansatz for the kinetic part of the density functional, common to all CPA-based theories, and (ii) an external model that determines the way in which the atomic effective potentials should be reconstructed and the statistical weights to be assigned to each. The GCPA class of approximations includes most existing CPA-based density functional theories, to mention a few: the CPA prototype, i.e. the single site CPA~\cite{DFTKKRCPA1,DFTKKRCPA2}, the Screened Impurity Model CPA (SIM-CPA)~\cite{SIMCPAI,SIMCPAII}, the Polymorphous CPA (PCPA)~\cite{Ujfalussy}, the CPA including Local Fields (CPA+LF)~\cite{CPALF}, and the Non Local CPA (NL-CPA)~\cite{NLCPAI}. The ansatz (i) consists in applying averaged boundary conditions at the surfaces of each scattering volume and naturally leads to algorithms requiring a number of operations that scales as $N$. As discussed by Abrikosov and Johansson~\cite{Abrikosov_cpa}, CPA-based approximations allow for a faithful picture of the spectral properties of metallic alloys.
The much-criticized results of the SS-CPA for the total alloy energies can be remedied by external models that consider the charge distribution in the system. We have shown how this can be done systematically by writing the relevant energetic contributions as a series involving the charge multipole moments in each scattering volume. The truncation errors of this series are probably already quite small when only the first term is included, as in the case of spherical approximations. We have derived an expression for the GCPA density functional that, together with the above multipole sums, includes local 'atomic' terms, completely determined by the atomic number of the ion in the volume and by the geometry of the same volume. The local term at the $i$-th site is coupled to the others only through the coherent scattering matrix $\underline{\tau}^c$ and the Madelung potential at the same site. Although this kind of coupling, which we have called {\it marginal coupling}, is not necessarily weak, it is analytically tractable and is the source of the $O(N)$ scaling in GCPA theories. We have demonstrated that in a GCPA theory all ground state properties within a specific sample are functions of the appropriate coupling Madelung potential {\it only}, or, equivalently, of the charge multipole moments at each lattice site. In other words: we have demonstrated that the GCPA approximations realize a {\it coarse graining} of the Hohenberg-Kohn density functional, since only a part of the information conveyed by the electronic density field, namely the charge multipole moments, actually enters the approximate GCPA functional. Moreover, we have suggested that the explicit form of the dependence of the GCPA functional on the multipole moments can be obtained in a fixed concentration ensemble by the numerical integration of the 'qV' relationships for a random alloy configuration belonging to the same ensemble.
The above procedure does not rely on the linearity of the 'qV' laws. We have re-derived the CEF~\cite{CEF} as a sensible approximation of the GCPA theories, with which it would coincide provided the 'qV' laws were exactly linear, as claimed by many groups. The present derivation allows for the inclusion of higher order multipole moments. A very remarkable feature of the CEF theory is that it shares the same structure as the MST. In fact, the minimization of the CEF requires the solution of a set of Euler-Lagrange equations that has the same structure as the Korringa-Kohn-Rostoker (KKR) matrix at zero energy and wave-vector~\cite{brunomatsci,Drchal}. More specifically, as can be seen by comparing Eqs.~(\ref{MKKR}) and~(\ref{fmatrix}), the site-diagonal response functions, $a_{i,LL^\prime}$, and the Madelung coefficients, $M_{ij,LL^\prime}$, in the CEF theory play the role of the site-diagonal scattering matrices and the KKR structure constants in the MST theory. The correspondence is not only formal, since the $a_{i,LL^\prime}$ are single-site quantities in the same sense as the SS scattering matrices~\cite{brunomatsci} and, in close analogy with them, are related to the SS response to the appropriate perturbing field~\cite{CPALF}. In the present paper we have provided several formal arguments and strong numerical evidence that CEF and GCPA theories lead to very similar results, the discrepancies being of the order of the numerical errors. We have also shown that CEF and GCPA theories are able to reproduce the charges and the total energies for many ordered alloy configurations. In our view the {\it coarse grained theories}, GCPA and CEF, constitute a valuable alternative to full DFT calculations. Although the CPA was proposed many years ago with the purpose of dealing with substitutionally disordered alloys, we have shown that today GCPA theories are able to deal with ordered intermetallic compounds too.
Therefore, the fact that CPA-based theories are able to cope with sophisticated models of disorder is not an original sin but, rather, an added value. The computational performance of the CEF has been discussed in more detail elsewhere~\cite{CEF}. Here we would like to mention that the possibility of evaluating total energies for thousands of atoms in a few seconds of CPU time could constitute a substantial enlargement of the domain of application of the DFT. \begin{acknowledgments} We acknowledge financial support from MURST (PRIN grant no. 2004023079/004). Discussions with Professors B. Ginatempo and I.A. Abrikosov are also gratefully acknowledged. \end{acknowledgments}
\section{Introduction} In this paper we propose a Question Paraphrase Retrieval (QPR) \cite{bernhard2008answering} system that can operate at industrial scale. A QPR system retrieves a set of paraphrase questions for a given input, enabling existing question answering systems to answer rare formulations present in incoming questions. QPR finds natural applications in question answering systems, and is especially relevant to community Question Answering (cQA) systems. Common cQA websites such as Quora or Yahoo Answers are platforms on which users interact by asking and answering questions. The community-driven nature of these platforms leads to problems such as question duplication. Having a way to identify paraphrases can therefore reduce clutter and improve the user experience. Question duplication can be prevented by presenting users with a set of candidate paraphrase questions retrieved from the set of questions already in the system. Open-domain QA systems provide answers to a user's questions with or without human intervention. Such systems are employed by virtual assistants such as Alexa, Siri, Cortana and Google Assistant. Some virtual assistants use noisy channels, such as speech, to interact with users. Questions that are the output of an Automated Speech Recognition (ASR) system can contain errors such as truncations and misinterpretations. Transcription errors are more likely to occur for rarer or grammatically non-standard formulations of a question. For instance, `Where Michael Jordan at?' could be a reformulation of `Where is Michael Jordan?'. A QPR system tries to mitigate the impact of this noise by identifying an answerable paraphrase of the noisy query, and hence improves the overall performance of the system. Paraphrase Identification (PI) \cite{mihalcea2006corpus,islam2009semantic,he2015multi} is a related task where the objective is to recognize whether a pair of sentences are paraphrases.
The largest dataset for this task was released by Quora.com\footnote{https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs}. State-of-the-art approaches on this dataset use neural architectures with attention mechanisms across both the query and candidate questions \cite{parikh2016decomposable,wang2017bilateral}. However, these systems are impractical for large scale applications with millions of candidates, since they involve a quadratic number of vector comparisons per question pair, which are non-trivial to parallelize. Information Retrieval (IR) systems have been very successful at operating at scale on such tasks. However, standard IR systems, such as BM25 \cite{bm25}, are based on lexical overlap rather than on deep semantic understanding of the questions \cite{robertson2009probabilistic}, making them unable to recognize paraphrases that lack significant lexical overlap. In recent years, the focus of the IR community has moved towards neural network based systems that can provide a better representation of the object to be retrieved, while maintaining the performance of the standard model. Neural representations can capture latent syntactic and semantic information from the text, overcoming the shortcomings of systems based purely on lexical information. Moreover, representations trained using a neural network can be task specific, allowing them to encode domain specific information that helps them outperform generic systems. The major components of a Neural Information Retrieval (NIR) system are a neural encoder and a k-Nearest Neighbour (kNN) index \cite{mitra2017neural}. The encoder is a neural network capable of transforming an input example, in our case a question, into a fixed size vector representation. In a standard setting, the encoder is trained via triplet loss \cite{schroff2015facenet} to make the distance between two paraphrase vectors smaller than the distance between a paraphrase vector and a non-paraphrase vector.
After being trained for this task, the encoder is used to embed the questions that can later be retrieved at inference time. The encoded questions are added to the kNN index for efficient retrieval. The input question is encoded and used as a query to the index, returning the top k most similar questions. Public datasets, such as Quora Question Pairs, are built to train and evaluate classifiers that identify paraphrases, rather than to evaluate retrieval systems. Additionally, the Quora dataset is not manually curated, and thus contains false negative question paraphrases. This problem introduces noise into the training procedure when minimizing the triplet loss. The noise is further exacerbated in training procedures that exploit the concept of hard negatives, i.e., mining the non-paraphrase samples that are close to paraphrase samples in the vector space \cite{manmatha2017sampling}. In this work, we propose a loss function that minimizes the effect of false negatives in the training data. The proposed loss function uses label smoothing to assign some probability mass to negative examples, thus mitigating the impact of false negatives. The proposed technique is evaluated on two datasets: a distantly supervised dataset of questions collected from a popular virtual assistant system, and a modified version of the Quora dataset that allows models to be evaluated in a retrieval setting. The effect of our proposed loss and the impact of the smoothing parameters are analysed in Section 4. \section{Question Paraphrase Retrieval} In QPR the task is to retrieve a set of candidate paraphrases for a given query. Formally, given a new query $q_{new}$, the task is to retrieve the k questions, $Q_k$ ($|Q_k| = k$), that are most likely to be paraphrases of the original question. The questions need to be retrieved from a given set of questions $Q_{all}$ such that $ Q_k \subseteq Q_{all}$, e.g., questions already answered in a cQA website.
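As a concrete illustration of this retrieval setting, the following is a minimal brute-force sketch in Python; the toy hashing encoder and the example questions are placeholders for illustration only, not the trained encoder or the index described in this paper:

```python
import numpy as np

def toy_encode(question, dim=64):
    """Placeholder encoder: hash each token into a fixed-size count vector.
    The real system uses a trained convolutional encoder phi."""
    vec = np.zeros(dim)
    for tok in question.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec

def retrieve(q_new, q_all, k=2):
    """Return the k questions in q_all closest to q_new in embedding space."""
    index = np.stack([toy_encode(q) for q in q_all])  # offline encoding step
    dists = np.linalg.norm(index - toy_encode(q_new), axis=1)
    return [q_all[i] for i in np.argsort(dists, kind="stable")[:k]]

candidates = ["where is michael jordan",
              "who won the world cup",
              "where michael jordan at"]
print(retrieve("where is michael jordan located", candidates, k=1))
```

In the actual system, the brute-force distance computation is replaced by an approximate kNN index so that $Q_{all}$ can contain millions of questions.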
\subsection{System overview} The QPR system described in this paper is made of two core components: an encoder and an index. The encoder $\phi$ is a function ($\phi: Q \rightarrow \mathbb{R}^{n}$) that takes as input a question $q \in Q$ and maps it to an $n$-dimensional vector representation. The index is defined as the encoded set of all the questions that can be retrieved $\{ \phi(q') | q' \in Q_{all} \}$ using the standard kNN search mechanism. \subsubsection{Encoder} The encoder $\phi$ used by our system is a neural network that transforms the input question into a fixed size vector representation. To this end, we use a convolutional encoder since it scales better (is easily parallelizable) than a recurrent neural network encoder, while maintaining similar performance on sentence matching tasks \cite{yin2017comparative}. The encoder uses a three-step process: (i) An embedding layer maps each word $w_i$ in the question $q$ to its corresponding word embedding $x_i \in \mathbb{R}^{e_{dim}}$, thereby generating a sentence matrix $X_q \in \mathbb{R}^{l \times e_{dim}}$, where $l$ is the number of words in the question. We also use the hashing trick of \cite{weinberger2009feature} to map rare words to $m$ bins via random projection, to reduce the number of false matches at retrieval time. (ii) A convolutional layer \cite{kim2014convolutional} takes the question embedding matrix $X_q$ as input and applies a trained convolutional filter $W \in \mathbb{R}^{e_{dim} \times win}$ iteratively, taking at each timestep $i$ a set of $win$ word embeddings. This results in the output $h^{win}_i = \sigma(W x_{i-\frac{win}{2}:i+\frac{win}{2}} + b)$, where $\sigma$ is a non-linearity, $\tanh$ in our case, and $b \in \mathbb{R}$ is the bias parameter. Iterating over the whole sentence produces a feature map $\textbf{h}^{win} = [h^{win}_1, .., h^{win}_l]$.
(iii) A global max pooling operation is applied over the feature map ($\hat{h}^{win} = max(\textbf{h}^{win})$) to reduce it to a single feature value. The convolutional step described above is applied multiple times ($c_{dim}$ times) with varying window sizes, with the resultant $\hat{h}$ values concatenated to get a feature vector $h \in \mathbb{R}^{c_{dim}}$, which is then linearly projected to an $n$-dimensional output vector using a learned weight matrix $W_p \in \mathbb{R}^{n \times c_{dim}}$. \subsubsection{kNN Index} For our system we use FAISS \cite{JDH17} as an approximate kNN index for performance reasons. All the questions ($Q_{all}$) are encoded offline using the encoder $\phi$ and added to the index. At retrieval time a new question is encoded and used as a query to the index. FAISS uses a predefined distance function (e.g. Euclidean distance) to retrieve the nearest questions in the vector space. \section{Training} Typical approaches for training the encoder use triplet loss \cite{schroff2015facenet}. This loss attempts to minimize the distance between positive examples while maximizing the distance between positive and negative examples. The loss is formalized as follows: \begin{equation} \sum_i^N[ \lVert \phi(q_i^a) - \phi(q_i^p) \rVert^2_2 - \lVert \phi(q_i^a) - \phi(q_i^n) \rVert^2_2 + \alpha]_+ \end{equation} where $q_i^a$ is a positive (anchor) question, $q_i^p$ is a positive match to the anchor (a valid paraphrase), $q_i^n$ is a negative match (i.e. a non-paraphrase), $\alpha$ is a margin parameter and $N$ is the batch size. In recent work, \citealt{manmatha2017sampling} found that better results could be obtained by training the above objective with hard negative samples. These hard negatives are samples from the negative class that are the closest in vector space to the positive samples, and hence most likely to be misclassified.
However, in our case, and in other cases with noisy training data, this technique negatively impacts the performance of the model, since it starts focusing disproportionately on any false negative samples in the data (i.e. positive examples labelled as negative due to noise), making the learning process faulty. \subsection{Smoothed Deep Metric Learning} In this paper we propose a new loss function that overcomes the limitation of triplet loss in the noisy setting. Instead of minimizing the distance between positive examples with respect to negative examples, we view the problem as a classification problem. Ideally we would like to classify the paraphrases of the original question amongst all other questions in the dataset. This is infeasible due to time and memory constraints. We can, however, approximate this general loss by identifying a valid paraphrase in a set of randomly sampled questions \cite{kannan2016smart}. We map vector distances into probabilities, similar to \citealt{goldberger2005neighbourhood}, by applying a softmax operation over the negative squared Euclidean distance: \begin{equation} \hat{p}(a,i) = \frac{e^{-\lVert \phi(q^a) - \phi(q^i) \rVert^2_2}}{\sum_j^{N} e^{-\lVert \phi(q^a) - \phi(q^j) \rVert^2_2}} \end{equation} where $q^a$ is an anchor question and $q^i$ and $q^j$ are questions belonging to a batch of size $N$ containing one paraphrase and $N-1$ randomly sampled non-paraphrases. The network is then trained to assign a higher probability to pairs of questions that are paraphrases. Additionally, we apply the label smoothing regularization technique \cite{szegedy2016rethinking} to reduce the impact of false negatives. This technique reduces the probability of the ground truth by a smoothing factor $\epsilon$ and redistributes it uniformly across all other values, i.e., \begin{equation} p'(k|a) = (1 - \epsilon ) p(k|a) + \frac{\epsilon}{N} \end{equation} where $p(k|a)$ is the probability for the gold label.
The new smoothed labels computed in this way are used to train the network using Cross-Entropy (CE) or Kullback-Leibler (KL) divergence loss.\footnote{In this setting CE loss and KL divergence loss are equivalent in expected values. However, we use the KL divergence loss for performance reasons.} A standard cross-entropy loss tries to force the Euclidean distance between all random pairs of points to infinity, which may not be feasible and can lead to noisy training. Instead, assigning a constant probability to random interactions tries to position random points on the surface of a hypersphere around the anchor. The sampling required for this formulation can be easily implemented in frameworks like PyTorch \cite{paszke2017automatic} or MXNet \cite{chen2015mxnet} using a batch of positive pairs $<q_{1,j}, q_{2,j}>$ derived from a shuffled dataset, as depicted in Figure~\ref{fig:loss}. In this setting, each question $q_{1,i}$ has exactly one paraphrase, i.e., $q_{2,i}$, while the $N-1$ other questions $q_{2, j}$, $j \neq i$, serve as counter-examples. This batched implementation reduces training time and makes sampling tractable by avoiding sampling $N$ questions for each example, reducing the number of forward passes required to encode the questions in a batch from $\mathcal{O}(N^2)$ in a naive implementation to $\mathcal{O}(2N)$. \begin{figure} \centering \includegraphics{distribution.pdf} \caption{Batched implementation of the loss with smoothing parameter $\epsilon = 0.3$ and batch size $N = 3$. Each paraphrase pair $<q_{1,j}, q_{2,j}>$ in the batch is compared with all the other questions in the batch. } \label{fig:loss} \end{figure} \section{Experiments} In this section, we present the experimental setup used to validate our approach to QPR using the Smoothed Deep Metric Learning (SDML) technique.
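For concreteness, the smoothed batched objective of Eqs. 2 and 3 can be sketched as a forward-pass-only NumPy computation (no autograd; the toy embeddings below stand in for encoder outputs, and this is an illustrative sketch rather than the production implementation):

```python
import numpy as np

def sdml_loss(emb1, emb2, eps=0.3):
    """Smoothed batched loss: emb1[i] and emb2[i] embed a paraphrase pair;
    every emb2[j] with j != i acts as a random negative for emb1[i]."""
    n = emb1.shape[0]
    # pairwise squared Euclidean distances, shape (n, n)
    d2 = ((emb1[:, None, :] - emb2[None, :, :]) ** 2).sum(-1)
    # softmax over negative distances: p_hat[i, j] = prob. that j matches i
    logits = -d2 - (-d2).max(axis=1, keepdims=True)  # numerical stability
    p_hat = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # label smoothing: keep 1 - eps on the gold (diagonal) label,
    # spread eps uniformly over the batch
    target = (1.0 - eps) * np.eye(n) + eps / n
    # cross-entropy between smoothed targets and predicted distribution
    return float(-(target * np.log(p_hat + 1e-12)).sum(axis=1).mean())

# well-separated toy embeddings; row i of each matrix forms a "paraphrase pair"
E = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
print(sdml_loss(E, E, eps=0.0))               # matched pairs: near-zero loss
print(sdml_loss(E, E[::-1].copy(), eps=0.0))  # mismatched pairs: large loss
```

In a real training loop this quantity would be computed on the $2N$ encoder outputs of a batch and backpropagated, which this NumPy sketch deliberately omits.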
\subsection{Datasets} In order to generate a dataset for question paraphrase retrieval, we propose a technique that uses distant supervision to create it automatically from high-precision question-answer (QA) logs. Due to the proprietary nature of our internal dataset, we also report numbers on a modified version of the Quora paraphrase identification dataset that has been adapted for the paraphrase retrieval task. \label{datasets} \subsubsection{Open Domain QA dataset} Our open domain Q\&A dataset is created by a weak supervision method using high precision QA logs of a large scale industrial virtual assistant. From the logs we retrieve `clusters' of questions that are mapped to the same answer. However, we notice that this may generate clusters in which unrelated questions are mapped to a generic answer. For instance, many different math questions may map to the same answer: a given number. To further refine these clusters, the data is filtered using a heuristic based on an intra-cluster similarity metric that we call cluster \textit{coherence}, denoted as $c$. We define this metric as the mean Jaccard similarity \cite{levandowsky1971distance} of each question in a cluster to the cluster taken as a whole. Mathematically, for a given cluster $ \mathbb{A} =\{q_1, q_2 ... q_n\}$ and defining $\mathbb{T}_{q_i} = \{w_{i_1}, w_{i_2}, ... w_{i_k}\}$ as shorthand for the set of unique tokens present in a given question, the coherence of the cluster is defined as: \begin{equation} \mathbb{S} = \bigcup_{i=1}^{n} \mathbb{T}_{q_i} \end{equation} \begin{equation} c = \frac{1}{n}\Sigma_{i=1}^{n} \frac{|\mathbb{T}_{q_i}\cap \mathbb{S}|} {|\mathbb{S}|} \end{equation} In practice we found that even a mild coherence filter ($c < 0.1$) is able to eliminate all incoherent question clusters. Our approach to weak supervision can be considered a generalized instance of the candidate-generation noise-removal pipeline paradigm used by \citealt{kim2018efficient}.
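The coherence filter can be implemented directly from the definition above; a minimal sketch, where whitespace tokenization and lowercasing are our assumptions rather than details stated in the paper:

```python
def coherence(cluster):
    """Mean Jaccard similarity of each question's unique-token set to the
    union of all token sets in the cluster (Eqs. 4 and 5)."""
    token_sets = [set(q.lower().split()) for q in cluster]
    union = set().union(*token_sets)
    return sum(len(t & union) / len(union) for t in token_sets) / len(token_sets)

coherent = ["what is the capital of france", "capital of france"]
incoherent = ["what is two plus two", "how many legs does a spider have"]
print(coherence(coherent))    # tokens largely shared -> higher coherence
print(coherence(incoherent))  # disjoint vocabularies -> lower coherence
```

Note that since each $\mathbb{T}_{q_i} \subseteq \mathbb{S}$, the metric for $n$ questions with fully disjoint vocabularies equals $1/n$; the $c < 0.1$ threshold is therefore most discriminative for larger clusters.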
Once the incoherent clusters are removed from the dataset, the remaining clusters are randomly split in an 80:10:10 ratio into training, validation and test sets, and question pairs are generated from them\footnote{The open domain QA dataset contains on the order of 100k - 1M training clusters, 10-100k clusters each for validation and testing, and a search index of size $\approx 10M$.}. A second filter is applied to remove questions in the validation and test sets that overlap with questions in the training set. The final output of the weak supervision process is a set of silver-labelled clusters with $>99\%$ accuracy, based on spot checking a random sample of 200 clusters. \subsubsection{Quora dataset} We introduce a variant of the Quora dataset for the QPR task. The original dataset consists of pairs of questions with a positive label if they are paraphrases, and a negative label if they are not. We identify question clusters in the dataset by exploiting the transitive property of the paraphrase relation in the original pairs, i.e., if $q_1$ and $q_2$ are paraphrases, and $q_2$ and $q_3$ are paraphrases, then $q_1$ and $q_3$ are also paraphrases; hence $q_1$, $q_2$, and $q_3$ belong to the same cluster. After iterating over the entire dataset we identified $60,312$ question clusters. The question clusters are split into training, validation and test sets such that the resulting validation and test sets contain roughly $5,000$ question pairs each, and the training set contains $219,369$ question pairs\footnote{The code to generate the splits will be released upon acceptance.}. The kNN index is composed of all the questions in the original Quora dataset (including questions that appear only as negatives, and are thus not part of any cluster), for a total of $556,107$ questions. \subsection{Experimental setup} We described the architecture of our encoder in Section 2.1.1. For experimentation we randomly initialized the word embeddings.
The vocabulary size for the Quora dataset is fixed at 50,000, whereas for the open domain QA dataset we used a vocabulary of size 100,000. To map rare words we use 5,000 bins for the Quora dataset and 10,000 bins for the QA dataset. We set the dimensionality of the word embeddings to 300 (i.e., $e_{dim} = 300$); the convolutional layer uses a window size of $5$ (i.e., $win = 5$) and the encoder outputs a vector of size $n = 300$. For triplet loss, the network is trained with margin $\alpha = 0.5$. The default batch size for all the experiments is 512 (i.e., $N=512$) and the smoothing factor for SDML, $\epsilon$, is 0.3. For all experiments, training is performed using the Adam optimizer with learning rate $\lambda = 0.001$ until the model stops improving on the validation set, using early stopping \cite{prechelt1998early} on the ROC AUC metric \cite{bradley1997use}. \paragraph{Evaluation.} We use the \textit{IVF2000, Flat} configuration of the FAISS library as our index: a hierarchical index whose top level is an index of k-means centroids. For evaluation we retrieve $20$ questions with an average query time of $<10$ ms. These questions are used to measure the system performance via standard information retrieval metrics, Precision@N ($P@N$) and Mean Reciprocal Rank (MRR). $P@N$ measures whether at least one of the first $N$ retrieved questions is a paraphrase, and MRR is the average reciprocal rank (position) at which the first retrieved paraphrase is encountered. \subsection{Results} In the first set of experiments we measured the impact of varying the smoothing factor $\epsilon$. The results for the Quora validation set are presented in Table~\ref{epsilonsearch}. We observe that the presence of smoothing leads to a significant increase over the baseline (simple cross entropy loss), and increasing this parameter has a positive impact up to $\epsilon = 0.3$.
In our second experiment, we hold $\epsilon$ constant at $0.3$ and vary the number of negative samples. Table~\ref{Batch Size} shows the effect of an increase in the number of negative examples in a batch. The model's performance reaches its maximum value at $N=512$, i.e., with $511$ negative samples for each positive sample. We would like to point out that we limited our exploration to 1024 due to memory constraints. However, better performance may be achieved by further increasing the number of examples, since the batch becomes a better approximation of the true distribution. Tables~\ref{quora_dev} and~\ref{quora_test} compare the proposed loss with triplet loss with random sampling, TL(Rand). We compared the proposed approach with two variants of triplet loss that use different distance functions: Euclidean distance (EUC) and sum of squared distances (SSD). The Euclidean distance is the standard distance function for the triplet loss implementations present in the popular deep learning frameworks PyTorch and MXNet, whereas SSD is the distance function used in the original paper of \citealt{schroff2015facenet}. Our approach improves over the original triplet loss considerably on both datasets. The SSD distance also outperforms the EUC implementation of the loss. \begin{table}[t!] \begin{center} \begin{tabular}{|l|rcl|} \hline \bf $\epsilon$ & \bf P@1 & \bf P@10 & \bf MRR \\ \hline 0 & 0.5568 & 0.7381 & 0.6217\\ 0.1 & 0.5901 & 0.7841 & 0.6591 \\ 0.2 & 0.6030 & 0.8090 & 0.6762\\ 0.3 & \textbf{0.6133} & 0.8113 & \textbf{0.6837} \\ 0.4 & 0.6107 & \textbf{0.8144} & 0.6815 \\ \hline \end{tabular} \end{center} \caption{\label{epsilonsearch} Impact of smoothing factor $\epsilon$ on the Quora validation set.} \end{table} \begin{table}[t!]
\begin{center} \begin{tabular}{|l|rcl|} \hline \bf N & \bf P@1 & \bf P@10 & \bf MRR \\ \hline 32 & 0.5389 & 0.7444 & 0.6103\\ 64 & 0.5710&0.7726& 0.6410 \\ 128 & 0.6093 &0.8085 & 0.6777 \\ 256 &0.6112 &\textbf{0.8141}& 0.6833\\ 512 & \textbf{0.6133} & 0.8113 & \textbf{0.6837} \\ 1024 & 0.6081&0.8008&0.6764 \\ \hline \end{tabular} \end{center} \caption{\label{Batch Size} Impact of the batch size $N$ on the Quora validation set. For computing SDML a batch consists of a paraphrase and $N - 1 $ negative examples.} \end{table} \begin{table}[t!] \begin{center} \begin{tabular}{|l|l|rcl|} \hline \bf Loss & \bf Dist & \bf P@1 & \bf P@10 & \bf MRR \\ \hline TL (Rand) & EUC & 0.4742 & 0.6509 & 0.5359 \\ TL (Rand) & SSD & 0.5763 & 0.7640 & 0.6421 \\\hline SDML & SSD &\textbf{0.6133} & \textbf{0.8113} & \textbf{0.6837}\\ \hline \end{tabular} \end{center} \caption{\label{quora_dev} Comparison of different loss functions on Quora validation set.} \end{table} \begin{table}[t!] \begin{center} \begin{tabular}{|l|l|rcl|} \hline \bf Loss & \bf Dist & \bf P@1 & \bf P@10 & \bf MRR \\ \hline TL (Rand) & EUC & 0.4641 & 0.6523 & 0.5297 \\ TL (Rand) & SSD & 0.5507 & 0.7641 & 0.6265 \\\hline SDML & SSD &\textbf{0.6043} & \textbf{0.8179} & \textbf{0.6789}\\ \hline \end{tabular} \end{center} \caption{\label{quora_test} Comparison of different loss functions on Quora test set.} \end{table} \begin{table}[t!] \begin{center} \begin{tabular}{|l|l|rcl|} \hline \bf Loss & \bf Dist & \bf P@1 & \bf P@10 & \bf MRR \\ \hline TL (Rand) & EUC &0.5738 &0.7684&0.6428\\ TL (Rand) & SSD &0.6506 &0.8579 &0.7252\\ \hline TL (Hard) & EUC &0.5549&0.7534&0.6256\\ TL (Hard) & SSD & 0.5233 & 0.7077 & 0.5870\\ \hline SDML & EUC & 0.6526 & \textbf{0.8832} & 0.7361 \\ SDML & SSD & \textbf{0.6745} & 0.8817 & \textbf{0.7491} \\ \hline \end{tabular} \end{center} \caption{\label{od_dev}Comparison of different loss functions on open domain QA dataset validation set.} \end{table} \begin{table}[t!] 
\begin{center} \begin{tabular}{|l|l|rcl|} \hline \bf Loss & \bf Dist & \bf P@1 & \bf P@10 & \bf MRR \\ \hline TL (Rand) & EUC &0.5721 &0.7695&0.6431\\ TL (Rand) & SSD &0.6538 & 0.8610&0.7271\\ \hline TL (Hard) & EUC & 0.5593 & 0.7593 & 0.6304\\ TL (Hard) & SSD & 0.5201 & 0.7095 & 0.5863\\ \hline SDML & EUC & 0.6545 & \textbf{0.8846} & 0.7382 \\ SDML & SSD & \textbf{0.6718} & 0.8830 & \textbf{0.7480} \\ \hline \end{tabular} \end{center} \caption{\label{od_test} Comparison of different loss functions on the open domain QA dataset test set.} \end{table} Tables~\ref{od_dev} and \ref{od_test} show the results on the open domain QA validation and test sets. TL(Rand) is triplet loss with random sampling of negative examples, whereas TL(Hard) is a variant with hard negative mining. In both cases SDML outperforms triplet loss by a considerable margin. It is important to note that, since our dataset contains noisy examples, triplet loss with random sampling outperforms the hard sampling setting, in contrast to the results presented in \citealt{manmatha2017sampling}. The results presented in this section are consistent with our expectations based on the design of the loss function. \section{Conclusion} \label{sec:length} We investigated a variant of the paraphrase identification task, large scale question paraphrase retrieval, which is of particular importance in industrial question answering applications. We devised a weak supervision algorithm to generate training data from the logs of an existing high precision question-answering system, and introduced a variant of the popular Quora dataset for this task. In order to solve this task efficiently, we developed a neural information retrieval system consisting of a convolutional neural encoder and a fast approximate nearest neighbour search index. Triplet loss, a standard baseline for the learning-to-rank setting, tends to overfit to noisy examples in training.
To deal with this issue we designed a new loss function inspired by label smoothing, which assigns a small constant probability to randomly paired question utterances in a training mini-batch, resulting in a model that demonstrates superior performance. We believe that our batch-wise smoothed loss formulation will be applicable to a variety of metric learning and information retrieval problems for which triplet loss is currently popular. The loss function framework we describe is also flexible enough to experiment with different priors, e.g., allocating probability mass based on the distances between the points.
\section{Introduction} The Milky Way is known to be a spiral galaxy, and its structure has been intensively studied for many decades (e.g., Oort, Kerr \& Westerhout 1958; Dame \etal\ 1987, 2001). However, there is still little agreement on the detailed spiral pattern, including the number of spiral arms (e.g., Cohen \etal\ 1980; Drimmel 2000; Russeil 2003; Benjamin \etal\ 2005; Dame \& Thaddeus 2008; Hou \etal\ 2009). Spiral arms are regions of active star formation and are traced primarily by H\emissiontype{II} regions, where young stellar populations (hot OB stars) ionize the surrounding gas. The major difficulty in revealing the precise spiral structure of the Galaxy arises from the lack of accurate distances to the H\emissiontype{II} regions. Optical distance measurements, such as those obtained from photometric studies, are limited in the Galactic disk by the large opacity due to dust. Instead, the most widely used method to map the Galaxy is to adopt kinematic distances, which are derived by matching the observed radial velocities (obtained from the Doppler shift in observed frequencies) with respect to the local standard of rest (LSR) with line-of-sight velocities expected from a Galactic rotation model (e.g., Schmidt 1965; Brand \& Blitz 1993). The famous work by Georgelin \& Georgelin (1976) adopts this method (with the help of optical observations where available) to map H\emissiontype{II} regions in the Galaxy. However, significant unmodelled deviations from circular motions can cause large distance errors (Burton \& Bania 1974). Accurate and direct distance measurements without any assumptions about Galactic rotation are thus of the greatest importance to delineate the true Galactic structure. 
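To make the kinematic-distance method concrete, the following sketch implements it under the simplest assumptions: purely circular motion and a flat rotation curve. The constants `R0` and `THETA0` are illustrative round numbers, not values adopted in this paper.

```python
import numpy as np

# Illustrative Galactic constants (assumed, not the values used in this paper).
R0 = 8.5        # kpc, distance from the Sun to the Galactic center
THETA0 = 220.0  # km/s, circular rotation speed (flat rotation curve)

def kinematic_distances(l_deg, v_lsr):
    """Near and far kinematic distances (kpc) for a source in the Galactic plane.

    Assumes v_lsr = THETA0 * (R0/R - 1) * sin(l), i.e. circular rotation with
    a flat rotation curve; unmodelled peculiar motions translate directly
    into distance errors, as noted in the text.
    """
    l = np.radians(l_deg)
    # Galactocentric radius implied by the observed radial velocity.
    R = R0 / (1.0 + v_lsr / (THETA0 * np.sin(l)))
    # Line-of-sight distances where the sight line crosses the circle of radius R.
    root = np.sqrt(R**2 - (R0 * np.sin(l))**2)
    return R0 * np.cos(l) - root, R0 * np.cos(l) + root
```

For G14.33$-$0.64 ($l=14.^\circ 33$, $V_{\rm LSR}\simeq 22$~km~s$^{-1}$) these assumed constants give a near distance of roughly 2.6~kpc and a far distance of roughly 13.9~kpc; the near/far ambiguity inside the solar circle is one more reason why direct parallaxes are preferable.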
\begin{table*}[tp] \begin{center} \caption{VERA Observations of G14.33$-$0.64} \begin{tabular}{clcccc} \hline \multicolumn{1}{c}{Epoch} & Date & Day of Year & Time Range [UT] & Beam [mas] & Beam $_{{\rm EL}>35^\circ}$ [mas]\\ \multicolumn{1}{c}{(1)}&(2)&(3)&(4)&(5)&(6) \\ \hline \hline 1 & 2006 Oct 27 & 2006/300 & 03:00$-$12:00 & 1.87$\times$0.89 @ $-25.^\circ 5$ & 2.70$\times$0.75 @ $-7.^\circ 3$ \\ 2 & 2006 Nov 26 & 2006/330 & 01:00$-$08:45 & 1.81$\times$0.83 @ $-26.^\circ 7$ & 2.78$\times$0.73 @ $-5.^\circ 9$ \\ 3 & 2007 Jan 7 & 2007/007 & 22:00$-$05:45 & 1.87$\times$0.86 @ $-26.^\circ 1$ & --- \\ 4 & 2007 Feb 14 & 2007/045 & 20:00$-$03:43 & 1.92$\times$0.86 @ $-24.^\circ 2$ & 2.48$\times$0.81 @ $-4.^\circ 5$ \\ 5 & 2007 Mar 27 & 2007/086 & 17:00$-$00:43 & 2.10$\times$0.82 @ $-28.^\circ 3$ & 2.67$\times$0.76 @ $-5.^\circ 8$ \\ 6 & 2007 May 6 & 2007/126 & 14:00$-$21:43 & 2.24$\times$0.82 @ $-26.^\circ 0$ & 2.49$\times$0.82 @ $-8.^\circ 1$ \\ 7 & 2007 Aug 8 & 2007/220 & 08:00$-$15:50 & (1.82$\times$0.92 @ $-22.^\circ 0$) & 2.64$\times$0.77 @ $-1.^\circ 6$ \\ 8 & 2007 Oct 10 & 2007/283 & 04:00$-$11:55 & (1.72$\times$0.92 @ $-24.^\circ 6$) & 2.72$\times$0.75 @ $-3.^\circ 5$ \\ 9 & 2008 Jan 16 & 2008/016 & 21:00$-$04:55 & (1.79$\times$0.92 @ $-27.^\circ 5$) & 2.49$\times$0.81 @ $-6.^\circ 4$ \\ 10 & 2008 Apr 14 & 2008/105 & 15:00$-$22:55 & (2.00$\times$0.85 @ $-30.^\circ 9$) & 2.69$\times$0.75 @ $-8.^\circ 6$ \\ 11 & 2008 Jul 21 & 2008/203 & 08:30$-$16:15 & (1.89$\times$0.84 @ $-24.^\circ 5$) & 3.03$\times$0.70 @ $-8.^\circ 1$ \\ \hline \multicolumn{6}{@{}l@{}}{\hbox to 0pt{\parbox{150mm}{\footnotesize (1) Epoch number. (2) The date of observation start time in universal time (UT). (3) Day of year of observation. (4) Start time and end time in UT. (5) Beam size (major and minor axes) and its position angle (PA) east of north in single-beam images (with no data flagged). 
Parentheses indicate epochs not used in relative proper-motion measurements since the reference spot 4b and feature 4 were not detected. (6) Beam size and its PA east of north in dual-beam phase-referenced images, where data with antenna elevations below 35$^\circ$ were flagged. }\hss}} \end{tabular} \end{center} \end{table*} It has become feasible to map Galactic structure with VLBI (Very Long Baseline Interferometry) techniques, notably with the phase-referencing VLBI technique, by directly measuring trigonometric parallaxes of strong maser sources in star-forming regions associated with H\emissiontype{II} regions throughout the Galaxy. In addition to precise distances and absolute sky positions that locate the source in 3 dimensions in the Galaxy, measurements of absolute proper motions yield the full 3-dimensional space motions (i.e., secular proper motions and source distances together give tangential velocities), which enables one to obtain full source information for Galactic structure and dynamics. Reid \etal\ (2009b) recently refined our knowledge of the Galactic spiral structure and kinematics by integrating early results from VLBI astrometry of the Galaxy for a total of 18 high-mass star-forming regions (HMSFRs) with methanol, H$_2$O, and SiO maser and continuum emission, carried out with the NRAO Very Long Baseline Array (VLBA) and with the Japanese VERA project. VERA (VLBI Exploration of Radio Astrometry) is the first VLBI array dedicated to phase-referencing VLBI for Galactic astrometry, consisting of four antennas (each 20~m in diameter) across Japan (e.g., Honma \etal\ 2000). The recent VERA results for Galactic astrometry through maser parallax measurements are reported by Honma \etal\ (2007), Hirota \etal\ (2007, 2008a, 2008b), Imai \etal\ (2007), Sato \etal\ (2008), Kim \etal\ (2008), Choi \etal\ (2008), Nakagawa \etal\ (2008) and Oh \etal\ (2009). 
The object of this study, G14.33$-$0.64 (IRAS 18159$-$1648), is a Galactic star-forming region and is VERA's first target source toward the Sagittarius spiral arm in the inner Galaxy, which is an important step toward our goal of mapping the structure of the Galaxy. In particular, located at a low Galactic longitude of $l=14.^\circ 33$ (with a latitude of $b=-0.^\circ 64$ within the Galactic plane), G14.33$-$0.64 is expected to trace the closest part of the Sagittarius arm to the Sun, and thus is an important target for determining the direct distance to the arm. G14.33$-$0.64 was initially discovered as a far-infrared (FIR) source in a 70-$\mu$m survey of the Galactic plane by Jaffe, Stier, \& Fazio (1982). It was soon followed by the first detection of H$_2$O maser emission at 22~GHz associated with the FIR source by Jaffe, G\"usten, \& Downes (1981). Later the H$_2$O maser emission was identified with an IRAS point source by Scalise, Rodriguez, \& Mendoza-Torres (1989). Both class I and II methanol (CH$_3$OH) maser sources were also found in the region: class II emission at 6.7~GHz (Walsh \etal\ 1995, 1997) and class I emission at 36~GHz, at 44~GHz (Slysh \etal\ 1994, 1999), at 84~GHz (Kalenski\u{\i} \etal\ 2001), and at 95~GHz (Val'tts \etal\ 2000). G14.33$-$0.64 has also been observed to display an OH thermal absorption line at 1665~MHz (Wouterloot \etal\ 1993), NH$_3$ (1,1) and (2,2) inversion transition lines at 23.7~GHz (Molinari \etal\ 1996), CS($J=2\rightarrow 1$) and CS($J=5\rightarrow 4$) rotational transition lines at 98.0~GHz (Bronfman \etal\ 1996) and at 244.9~GHz (Shirley \etal\ 2003), respectively, and 1.2-mm continuum emission (Fa\'{u}ndez \etal\ 2004). The radial velocities observed for many molecular lines of G14.33$-$0.64 are in good agreement at $V_{\rm LSR}\simeq 22$~km~s$^{-1}$. 
In the present study, we report on our successful determination of the parallax of G14.33$-$0.64 with VERA as a step toward revealing the structure of the Sagittarius spiral arm in the inner Galaxy. \section{VERA Observations} VERA observations of the 22-GHz H$_2$O maser source (the 6$_{16}\rightarrow$5$_{23}$ rotational transition) in G14.33$-$0.64 were carried out at 11 epochs between 2006 October and 2008 July as listed in Table~1. Using VERA's dual-beam mode for phase referencing (e.g., Honma \etal\ 2003, 2008a), we simultaneously observed the H$_2$O maser source in G14.33$-$0.64 and the extragalactic position-reference quasar (phase calibrator) J1825$-$1718 with an angular separation of $1.^\circ 7$ at a position angle (PA) of 108$^\circ$ east of north relative to G14.33$-$0.64. The flux density of the phase calibrator J1825$-$1718 was up to $\sim$140~mJy. A nominal H$_2$O maser position for G14.33$-$0.64 was used as the reference center both for observation and for correlation: $\alpha_{2000}=$18$^{\rm h}$18$^{\rm m}$53.$^{\rm s}$8 and $\delta_{2000}=-$16$^\circ$\timeform{47'}\timeform{50.''0} (Comoretto \etal\ 1990). The position of J1825$-$1718 was adopted from the second VLBA Calibrator Survey by Fomalont \etal\ (2003): $\alpha_{2000}=$18$^{\rm h}$25$^{\rm m}$36.$^{\rm s}$532283 and $\delta_{2000}=-$17$^\circ$\timeform{18'}\timeform{49.''84781}. The ICRF source NRAO~530 (J1733$-$1304; Ma \etal\ 1998) was also observed as a bright calibrator source (fringe finder) in hourly 7-minute scans in each beam. The instrumental phase difference between the two beams was calibrated by recording the real-time phase data with an artificial noise source in each beam (Kawaguchi \etal\ 2000; Honma \etal\ 2008a). Left-hand circularly polarized signals were digitized at 2-bit sampling and recorded at a data rate of 1024~Mbps. In the total bandwidth of 256~MHz (16$\times$16~MHz), one of the sixteen 16-MHz IF channels was assigned to the H$_2$O maser lines in G14.33$-$0.64. 
The other 15 IF channels were for the continuum emission of the phase calibrator J1825$-$1718, with the central IF channel set at the maser frequency, using the VERA digital filter unit (Iguchi \etal\ 2005). The data correlation was performed with the Mitaka FX correlator (Chikada \etal\ 1991). In order to obtain sufficient resolution for the H$_2$O maser lines, only the central 8~MHz (of the 16-MHz IF channel) for the maser lines was split into 512 spectral points, yielding frequency and velocity resolutions of 15.625~kHz and 0.21~km~s$^{-1}$, respectively. Due to the spectral splitting method, one of the other 15 IF channels for J1825$-$1718 was also split into 512 spectral points (together with the maser channel); this channel was not used in the data reduction. The other 14 IF channels were split into 64 spectral points each and used in the data reduction. The system noise temperatures at the zenith were typically $T_{\rm {sys}}=$150$-$300~K for the first 5 epochs. For the last 6 epochs, one or two antennas showed higher system noise temperatures of $T_{\rm {sys}}=300-800$~K due to bad weather, while the other antennas remained at $T_{\rm {sys}}=$150$-$300~K. \section{Data Reduction} Visibility calibration and imaging were performed in a standard manner with the NRAO Astronomical Image Processing System (AIPS) package (Greisen 2003). The observed frequencies of the maser lines were converted to radial (line-of-sight) velocities with respect to the local standard of rest (LSR), $V_{\rm LSR}$, using a rest frequency of 22.235080~GHz (Pickett \etal\ 1998) for the H$_2$O 6$_{16}\rightarrow$5$_{23}$ transition. We first searched for the relative positions of all H$_2$O maser spots in the single-beam data (i.e., without phase-referencing to the calibrator J1825$-$1718 in the other beam) of the third epoch and found maser emission over several spectral components (see figure~1). 
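As a quick cross-check, the frequency and velocity resolutions quoted above follow directly from the correlator setup and this rest frequency:

```python
# Numbers from the text: central 8 MHz split into 512 spectral points,
# converted to velocity at the 22.235080-GHz H2O rest frequency.
bandwidth_hz = 8.0e6
n_channels = 512
rest_freq_hz = 22.235080e9
c_km_s = 299792.458  # speed of light

df_hz = bandwidth_hz / n_channels        # frequency resolution
dv_km_s = c_km_s * df_hz / rest_freq_hz  # velocity resolution (Doppler)

print(df_hz)              # 15625.0 (= 15.625 kHz)
print(round(dv_km_s, 2))  # 0.21
```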
At this epoch, the brightest H$_2$O maser channel was at $V_{\rm LSR}=26.6$~km~s$^{-1}$ (feature~7 in table~3), and the visibilities of all maser channels were first phase-referenced to this channel by fringe fitting (AIPS task FRING) on it and then applying the resulting phase solutions to all the maser channels. In order to find the maser spot distribution, we imaged all channels with the AIPS task IMAGR with a wide field of view of $\sim2$\timeform{''}$\times 2$\timeform{''} around the reference maser spot (feature~7), with 2048 pixels $\times$ 2048 pixels of size 1~mas. Many of the maser spots were outside this field ($\sim5$\timeform{''} offset from feature~7 as seen in table~3) and were found by fringe rate mapping with the AIPS task FRMAP and by shifting the image center accordingly. \begin{figure}[t] \begin{center} \FigureFile(50mm,60mm){figure1.eps} \end{center} \caption{Spectral evolution of H$_2$O maser emission in G14.33$-$0.64. Numbers show the observed year and day of year. Scalar-averaged cross-power spectra are shown between the VERA Mizusawa and Iriki stations. The radial velocities for many molecular lines of G14.33$-$0.64 are at $V_{\rm LSR}\simeq 22$~km~s$^{-1}$. }\label{fig:spect} \end{figure} \subsection{Phase Referencing for Parallax Measurements} Next, we obtained absolute-position maps of bright maser spots by phase-referencing the visibilities of G14.33$-$0.64 to those of J1825$-$1718. For each epoch, phase solutions from fringe fitting with J1825$-$1718 were applied to the H$_2$O maser channels of G14.33$-$0.64 at the corresponding frequencies. The instrumental phase difference between the two beams was also corrected using the real-time phase-calibration data recorded during each observation. Visibility phase errors caused by the Earth's atmosphere were calibrated based on GPS measurements of the atmospheric zenith delay caused by tropospheric water vapor (Honma \etal\ 2008b). 
Since a nominal reference center of G14.33$-$0.64 was used for correlation, we first imaged the phase-referenced maser data to find the positional offset of each maser {`}feature{'} (i.e., a group of maser spots in the same position over adjacent velocity channels) from the reference center, and then recalculated and corrected the delays for the obtained absolute positions of the maser features until the features fell within 10~mas of the map center. After correcting the absolute position of the reference center for each maser feature, we used the same map center at all epochs for the same maser feature. We imaged the detected maser spots with the AIPS task IMAGR for a field of view of 25.6~mas $\times$ 25.6~mas around each map center, with 512 pixels $\times$ 512 pixels of size 0.05~mas. Maser positions were fitted with elliptical Gaussian distributions with the task JMFIT. RMS noise levels in each image per channel were typically 200$-$600 mJy/beam. We performed least-squares fitting to simultaneously solve for the sinusoidal parallax curve and linear proper motion in right ascension (RA) for maser spots at consecutive velocity channels for two features that were persistent over more than a year. We did not solve for the parallax in declination, because positional errors due to tropospheric zenith delay residuals were larger in declination, as in other measurements (e.g., Sato \etal\ 2007, 2008), because the angular resolution was lower in declination than in RA for G14.33$-$0.64 (see table~1 for beam size), and because the parallax ellipse was smaller in declination. Instead, we removed the parallax signature obtained from the RA fits before fitting the linear proper motion in declination. 
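The simultaneous fit described above is linear in the unknowns once the parallax factor is known. The sketch below idealizes the RA parallax factor as a unit one-year sinusoid with a hypothetical phase (the real factor follows from the source coordinates and the Earth's orbit), which is enough to show the structure of the fit:

```python
import numpy as np

def fit_parallax_ra(t_yr, x_mas, phase=0.0):
    """Linear least-squares fit of offset, proper motion and parallax to RA
    offsets: x(t) = x0 + mu_x * t + plx * sin(2*pi*t + phase).

    `phase` is a hypothetical parameter standing in for the true parallax
    factor, which depends on the source coordinates and the Earth's orbit.
    """
    A = np.column_stack([
        np.ones_like(t_yr),                # constant offset x0
        t_yr,                              # linear proper motion mu_x
        np.sin(2 * np.pi * t_yr + phase),  # one-year parallax signature
    ])
    (x0, mu_x, plx), *_ = np.linalg.lstsq(A, x_mas, rcond=None)
    return x0, mu_x, plx
```

Fitting synthetic positions generated with, e.g., $\mu_x=6.5$~mas~yr$^{-1}$ and $\pi=0.9$~mas over 1.8~yr recovers the input values (numbers chosen only to be of the same order as those in table~2).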
Since image distortion and positional errors due to tropospheric zenith delay residuals are severe for sources at low elevation angles associated with low source declinations, including G14.33$-$0.64 (e.g., Honma \etal\ 2008b), we attempted four different elevation cutoff values of $25^\circ$, $30^\circ$, $35^\circ$, and $39^\circ$, below which we flagged the data with the AIPS task UVFLG. For cutoffs above $39^\circ$, imaging became difficult with high sidelobes due to insufficient data. We adopted an elevation cutoff of $35^\circ$ to obtain the best fitting result. For example, a typical error in the position measurement with one maser spot was reduced from 0.18~mas to 0.14~mas by changing the elevation cutoff from $30^\circ$ to $35^\circ$. Flagging the low-elevation data elongated the beam in declination; however, the beam size in RA remained almost unchanged or even slightly smaller (see table~1). \begin{table*}[t] \begin{center} \caption{Parallax fitting for G14.33$-$0.64 with an elevation cutoff of 35 degrees.} \begin{tabular}{ccclccc} \hline \multicolumn{1}{c}{Feature ID} & $V_{\rm{LSR}}$ &$N_{\rm{epochs}}$& Detected Epochs & RA Parallax, $\pi$ & $\mu_X$ & $\mu_Y$ \\ \multicolumn{1}{c}{$\#$} & [km s$^{-1}$] & & & [mas] & [mas~yr$^{-1}$] & [mas~yr$^{-1}$] \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) &(7) \\ \hline \hline 1a & 14.6 & 6 & $-$$-$$-$\hspace{1.4pt}4\hspace{1.4pt}5\hspace{1.4pt}6\hspace{1.4pt}7\hspace{1.4pt}8$-$10\hspace{1.4pt}$-$ & 0.931 (0.124) & 6.13 (0.27) & $-$4.50 (0.37)\\ 1b & 14.8 & 8 & $-$$-$$-$\hspace{1.4pt}4\hspace{1.4pt}5\hspace{1.4pt}6\hspace{1.4pt}7\hspace{1.4pt}8\hspace{1.4pt}9\hspace{1.4pt}10\hspace{1.4pt}11 & 0.936 (0.151) & 6.47 (0.19) & $-$4.15 (0.26)\\ 1c & 15.0 & 8 & $-$$-$$-$\hspace{1.4pt}4\hspace{1.4pt}5\hspace{1.4pt}6\hspace{1.4pt}7\hspace{1.4pt}8\hspace{1.4pt}9\hspace{1.4pt}10\hspace{1.4pt}11 & 0.950 (0.141) & 6.49 (0.19) & $-$4.12 (0.26)\\ 1d & 15.2 & 6 & 
$-$$-$$-$\hspace{1.4pt}4\hspace{1.4pt}5$-$7\hspace{1.4pt}8\hspace{1.4pt}9\hspace{1.4pt}10\hspace{1.4pt}$-$ & 1.004 (0.135) & 6.28 (0.26) & $-$4.23 (0.35)\\ 1 combined & & & & 0.954 (0.130) & & \\ \hline 4a & 21.4 & 6 & \hspace{1.5pt}1\hspace{1.5pt}2\hspace{1.4pt}$-$\hspace{1.4pt}4\hspace{1.4pt}5\hspace{1.4pt}6$-$\hspace{0.1pt}$-$\hspace{0.1pt}$-$10\hspace{1.4pt}$-$ & 0.629 (0.171) & $-$1.60 (0.23) & $-$0.26 (0.30)\\ 4b & 21.6 & 6 & \hspace{1.5pt}1\hspace{1.5pt}2\hspace{1.4pt}$-$\hspace{1.4pt}4\hspace{1.4pt}5\hspace{1.4pt}6$-$$-$$-$10\hspace{1.4pt}$-$ & 0.631 (0.162) & $-$1.58 (0.23) & $-$0.42 (0.30)\\ 4c & 21.8 & 6 & \hspace{1.5pt}1\hspace{1.5pt}2\hspace{1.4pt}$-$\hspace{1.4pt}4\hspace{1.4pt}5$-$\hspace{0.3pt}$-$8\hspace{0.3pt}$-$10\hspace{1.4pt}$-$ & 0.900 (0.151) & $-$1.70 (0.21) & $-$0.12 (0.28)\\ 4 combined & & & & 0.768 (0.160) & & \\ \hline 1\&4 combined & & & & 0.893 (0.101) & & \\ \hline \multicolumn{7}{@{}l@{}}{\hbox to 0pt{\parbox{150mm}{\footnotesize (1) Feature/spot ID, corresponding to table~3. (2) LSR velocity of the maser spot. (3) Total number of detected epochs. (4) Detected epochs. (5) Measured parallax in right ascension in mas (with estimated errors in parentheses). (6) (7) Proper motions in right ascension and in declination, respectively. The results presented here were obtained by fitting with a single parallax of 0.893~mas (the final result from the combined RA parallax fit). }\hss}} \end{tabular} \end{center} \end{table*} \subsection{Single-Beam Analysis for Relative Proper Motions} We also measured relative proper motions from the single-beam data to study the internal motions of H$_2$O maser spots. 
Since the H$_2$O maser emission in G14.33$-$0.64 was variable over the observing period (figure~1), the phase-reference maser channel used for fringe fitting differed epoch to epoch: we used the brightest velocity channel at each epoch as the phase reference, excluding the channels around $V_{\rm LSR}\sim 26$~km~s$^{-1}$ (feature~7 in table~3) because the maser spots in this velocity range were 5\timeform{''} away from the other spots. We imaged each maser spot with the AIPS task IMAGR for a field of view of 25.6~mas $\times$ 25.6~mas (512 pixels $\times$ 512 pixels of size 0.05 mas) by shifting the map center. The FWHM beam size of each epoch is shown in table 1. RMS noise levels in each image per channel were typically 50$-$110 mJy/beam. Maser positions were fitted with elliptical Gaussian distributions with the task JMFIT and were measured relative to the reference spot chosen at each epoch. In order to obtain relative proper motions of all spots, we calculated all maser positions relative to the maser spot at $V_{\rm LSR}\sim 21.6$~km~s$^{-1}$ (spot~4b in table~3) by subtracting the position of this spot from the maser positions at each epoch. Since feature~4 (including spot~4b) was only persistent over the first 6 epochs, relative proper motions were measured over the first 6 epochs. Our criteria for detection of a maser feature are: (1) a signal-to-noise ratio higher than 7 is obtained in the map at more than two consecutive velocity channels, (2) the spots are identified at three or more epochs for detecting relative proper motions, and (3) their positions agree with those expected from the fitted proper motions within 1 mas. In table~3, we also list the strong feature at $V_{\rm LSR}\sim 26$~km~s$^{-1}$, even though it has no measured proper motion. 
\section{Results} Figure~1 shows the spectral evolution of H$_2$O maser emission in G14.33$-$0.64 over the observing period: scalar-averaged spectra are shown for the baseline between the VERA Mizusawa and Iriki stations. \subsection{Parallax Measurements} Table~2 summarizes the results from measurements of the parallax $\pi$ in RA ($X$) and the absolute proper motions $\mu_X$ and $\mu_Y$ in RA ($X$) and Dec ($Y$). We used a total of seven maser spots of two maser features (features 1 and 4; feature IDs in table~2 correspond to those in table~3). The absolute maser positions used for the measurements were: $\alpha_{2000}=$18$^{\rm h}$18$^{\rm m}$54.$^{\rm s}$67444 and $\delta_{2000}=-$16$^\circ$\timeform{47'}\timeform{50.''2640} for feature~1; $\alpha_{2000}=$18$^{\rm h}$18$^{\rm m}$54.$^{\rm s}$65341 and $\delta_{2000}=-$16$^\circ$\timeform{47'}\timeform{50.''0650} for feature~4. Errors in the measurements are indicated in parentheses in table~2. For single-spot measurements, errors were estimated from the residuals of the least-squares fitting with uniform weights for all epochs. For combined fits, where different spots or features were simultaneously fitted with a single parallax and with different proper motions, we estimated upper limits on the errors. We will discuss error estimates in further detail in $\S 5.1$. The final value of the parallax (from the combined fit with all seven spots) is $\pi=0.893\pm0.101$~mas. This corresponds to a source distance of $d=1.12\pm 0.13$~kpc. Absolute proper motions $\mu_X$ and $\mu_Y$ of the seven spots listed in table~2 were derived using this final value of $\pi$, instead of using different $\pi$ values from single-spot measurements. \begin{figure*}[hp] \begin{center} \FigureFile(135mm,135mm){figure2.eps} \end{center} \caption{Parallax and absolute proper-motion measurements for G14.33$-$0.64. 
Filled and open circles show positional evolution of maser features 1 and 4, respectively, with respect to the reference positions at origin. (a) East offset ($X$) in mas from the reference positions of RA(J2000)$=$18$^{\rm h}$18$^{\rm m}$54.$^{\rm s}$674440 for maser feature 1 and RA(J2000)$=$18$^{\rm h}$18$^{\rm m}$54.$^{\rm s}$653410 for feature 4, as a function of time in days since the first epoch. Best-fitting models for parallax and proper motion are shown in solid curves and gray lines, respectively. Additional shifts are given for clarity: $\Delta X=$ $+$6, 3.5, 1, $-$1.5 mas for 1a, 1b, 1c, 1d; $-$5, $-$7, $-$9 mas for 4a, 4b, 4c, respectively. (b) North offset ($Y$) in mas from the reference position of Dec(J2000)$=-$16$^\circ$\timeform{47'}\timeform{50.''26400} for feature 1 and Dec(J2000)$=-$16$^\circ$\timeform{47'}\timeform{50.''06500} for feature 4, as a function of time in days since the first epoch. Additional shifts are given for clarity: $\Delta Y=$ $+$9, 6.5, 4, 1.5 mas for 1a, 1b, 1c, 1d; $-$9, $-$11.5, $-$14 mas for 4a, 4b, 4c, respectively. }\label{fig:parallax1} \end{figure*} \begin{figure}[th] \begin{center} \FigureFile(70mm,70mm){figure3.eps} \end{center} \caption{Trajectory of maser positions on the sky. Reference positions are the same as in figure~2 for each feature. Additional shifts are given for clarity: $\Delta Y=$ $+$9, 5, 1, $-$3 mas for 1a, 1b, 1c, 1d; $-$7, $-$9.5, $-$12 mas for 4a, 4b, 4c, respectively.}\label{fig:parallax2} \end{figure} Figures~2 and 3 show the position measurements of the seven spots used for parallax and absolute proper motion fitting. Numbers indicate feature IDs corresponding to those in table~2. Figures~2a and 2b show eastward ($X$) and northward ($Y$) positional offsets versus time, respectively, for seven maser spots. Additional constant offsets are added to each maser spot in the figures for clarity. 
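The distance quoted above follows from inverting the combined parallax; a one-line check with first-order error propagation:

```python
# Combined RA parallax fit from table 2: pi = 0.893 +/- 0.101 mas.
plx_mas, plx_err = 0.893, 0.101

d_kpc = 1.0 / plx_mas         # distance in kpc for a parallax in mas
d_err = plx_err / plx_mas**2  # first-order propagation of the parallax error

print(round(d_kpc, 2))  # 1.12
print(round(d_err, 2))  # 0.13
```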
The best-fit models for the single parallax (solid curves) and different proper motions (gray lines) are plotted for the seven spots from the combined fit. Error bars are plotted for the standard deviation ($\sigma$) of the post-fit residuals from the least-squares fitting. Figure~3 shows the trajectory on the sky. \subsection{Proper Motions} \begin{table*}[tbh] \begin{center} \caption{Relative proper motions.} \begin{tabular}{ccclrrrr} \hline \multicolumn{1}{c}{Feature ID} & $V_{\rm{LSR}}$ &$N_{\rm{epochs}}$& Epochs & $x_1$ & $y_1$ & $\mu_x$ & $\mu_y$ \\ \multicolumn{1}{c}{$\#$} & [km s$^{-1}$] & & & [mas] & [mas] & [mas~yr$^{-1}$] & [mas~yr$^{-1}$] \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) &(8) \\ \hline \hline 1 & 12.5 & 3 & $--$345$-$ & 302.7 & $-$199.2 & 7.05 (0.16) & $-$3.69 (0.39) \\ 1 & 12.7 & 3 & $--$345$-$ & 302.8 & $-$199.3 & 7.08 (0.19) & $-$3.59 (0.11) \\ 1 & 12.9 & 3 & $--$345$-$ & 302.9 & $-$199.1 & 6.63 (0.10) & $-$3.70 (0.38) \\ 1 & 13.2 & 3 & $--$345$-$ & 302.8 & $-$199.0 & 7.04 (0.23) & $-$4.15 (0.14) \\ 1 & 13.4 & 3 & $--$345$-$ & 303.0 & $-$199.0 & 6.53 (0.02) & $-$3.86 (0.03) \\ 1 & 14.2 & 3 & $-$$-$$-$456 & 303.2 & $-$198.1 & 6.31 (0.41) & $-$6.46 (1.93) \\ 1 & 14.4 & 3 & $-$$-$$-$456 & 303.2 & $-$198.1 & 6.22 (0.29) & $-$6.65 (1.83) \\ 1a & 14.6 & 4 & $--$3456 & 303.0 & $-$198.8 & 6.70 (0.53) & $-$5.11 (2.01) \\ 1b & 14.8 & 4 & $--$3456 & 303.1 & $-$198.8 & 6.66 (0.23) & $-$4.22 (0.85) \\ 1c & 15.0 & 3 & $--$345$-$ & 302.9 & $-$199.3 & 7.33 (0.06) & $-$2.14 (0.13) \\ 1 w-mean & & & & 303.0 & $-$199.0 & 6.64 (0.02) & $-$3.79 (0.03) \\ \hline 2 & 17.6 & 4 & $-$23$-$56 & 187.3 & $-$134.1 & 2.23 (0.23) & $-$3.69 (0.19) \\ 2 & 17.8 & 5 & 123$-$56 & 187.3 & $-$134.2 & 2.12 (0.09) & $-$3.64 (0.10) \\ 2 & 18.0 & 4 & 123$--$6 & 187.4 & $-$134.2 & 2.05 (0.19) & $-$4.06 (0.26) \\ 2 & 18.2 & 3 & 123$-$$-$$-$& 187.4 & $-$134.2 & 1.41 (0.46) & $-$3.43 (0.28) \\ 2 w-mean & & & & 187.3 & $-$134.1 & 2.10 (0.08) & $-$3.67 (0.08) \\ \hline 
3 & 18.0 & 4 & $--$3456 & 183.7 & $-$129.4 & $-$0.14 (0.39) & $-$1.57 (0.45) \\ 3 & 18.2 & 4 & $--$3456 & 183.7 & $-$129.5 & $-$0.35 (0.23) & $-$0.75 (0.18) \\ 3 & 18.4 & 3 & $--$345$-$ & 183.8 & $-$129.6 & $-$0.57 (0.47) & $-$0.42 (0.69) \\ 3 & 18.6 & 3 & $--$345$-$ & 183.9 & $-$129.8 & $-$0.90 (0.28) & { }{ }0.18 (0.80)\\ 3 w-mean & & & & 183.8 & $-$129.5 & $-$0.50 (0.15) & $-$0.80 (0.16) \\ \hline 4 & 20.9 & 6 & 123456 & 0.1 & $-$0.1 & $-$0.07 (0.05) & $-$0.17 (0.16) \\ 4 & 21.2 & 6 & 123456 & 0.0 & $-$0.1 & $-$0.03 (0.02) & $-$0.14 (0.07) \\ 4a & 21.4 & 6 & 123456 & 0.0 & { }{ }0.0 & 0.07 (0.03) & { }{ }0.02 (0.05) \\ 4b & 21.6 & 6 & 123456 & --- & --- & --- & --- \\ 4c & 21.8 & 5 & 12345$-$ & 0.0 & 0.0 & $-$0.01 (0.01) & 0.03 (0.07) \\ 4 & 22.0 & 5 & 12345$-$ & $-$0.1 & 0.0 & 0.09 (0.07) & 0.04 (0.07) \\ 4 & 22.2 & 4 & 1234$--$ & $-$0.1 & 0.1 & 0.33 (0.19) & $-$0.27 (0.27) \\ 4 w-mean & & & & 0.0 & 0.0 & $-$0.01 (0.01) & $-$0.02 (0.03) \\ \hline 5 & 21.6 & 5 & 12345$-$ & $-$0.3 & 3.1 & 0.50 (0.16) & 0.61 (0.25) \\ 5 & 21.8 & 5 & 12345$-$ & $-$0.2 & 3.0 & 0.22 (0.03) & 0.75 (0.08) \\ 5 & 22.0 & 5 & 12345$-$ & $-$0.3 & 3.1 & 0.32 (0.10) & 0.63 (0.08) \\ 5 & 22.2 & 5 & 12345$-$ & $-$0.1 & 3.2 & $-$0.11 (0.19) & 0.61 (0.14) \\ 5 & 22.4 & 5 & 12345$-$ & 0.0 & 3.4 & 0.11 (0.12) & $-$0.13 (0.64) \\ 5 w-mean & & & & $-$0.2 & 3.1 & 0.23 (0.03) & 0.67 (0.05) \\ \hline 6 & 22.6 & 3 & 123$-$$-$$-$& 0.0 & $-$16.0 & 0.73 (0.14) & $-$1.33 (0.01) \\ 6 & 22.8 & 6 & 123456 & 0.1 & $-$16.0 & 0.10 (0.09) & $-$1.44 (0.08) \\ 6 & 23.1 & 6 & 123456 & 0.0 & $-$16.0 & 0.19 (0.02) & $-$1.26 (0.05) \\ 6 & 23.3 & 6 & 123456 & 0.0 & $-$15.9 & 0.25 (0.01) & $-$1.43 (0.12) \\ 6 & 23.5 & 6 & 123456 & 0.0 & $-$15.9 & 0.17 (0.05) & $-$1.24 (0.07) \\ 6 & 23.7 & 6 & 12345$-$ & 0.0 & $-$15.8 & 0.30 (0.12) & $-$1.45 (0.10) \\ 6 w-mean & & & & 0.0 & $-$16.0 & 0.23 (0.01) & $-$1.32 (0.01) \\ \hline 7 & 25.4$-$27.7 & & & $-$4810 & 319 & & \\ \hline 1,23,456 u-mean & & & & & & 2.53 & $-$2.08 \\ \hline 
\multicolumn{8}{@{}l@{}}{\hbox to 0pt{\parbox{150mm}{\footnotesize (1) Feature/spot ID. {`}w-mean{'} denotes the error-weighted mean of all spots of each feature and {`}u-mean{'} the unweighted mean of all features (see text). (2) LSR velocity of the maser spot. (3) Number of detected epochs. (4) Detected epochs. (5) (6) Positional offset in mas toward east ($X$) and north ($Y$), respectively, from the reference spot 4b expected from the linear fit at the first epoch. (7) (8) Relative proper motion of the spot in $X$ and $Y$, respectively, with respect to spot 4b. Estimated errors are shown in parentheses. }\hss}} \end{tabular} \end{center} \end{table*} \begin{figure*}[tphb] \begin{center} \FigureFile(150mm,150mm){figure4.eps} \end{center} \caption{(a) (b) Radio maps of G14.33$-$0.64 showing H$_2$O (dot) and methanol (triangle) maser positions superimposed on contours of 6-cm continuum emission obtained from VLA archive data (program AH361). The angular resolution is \timeform{3''} for the VLA observation (Hughes \& MacLeod 1994) and the beam size is shown in gray at the left corner of each panel. The image noise level is $\sigma=0.07$~mJy/beam, and contours are linearly spaced and correspond to 4$\sigma$, 6$\sigma$, 8$\sigma$, $\cdots$, 22$\sigma$. The peak intensity is 1.6~mJy/beam. The map origin is at the IRAS source position of $\alpha_{2000}=$18$^{\rm h}$18$^{\rm m}$53.$^{\rm s}$9, $\delta_{2000}=-$16$^\circ$\timeform{47'}\timeform{39''}. Dots represent our VLBI absolute positions of H$_2$O maser features in table~3. Numbers correspond to feature IDs in table~3. Our absolute position errors essentially come from errors in the position of the reference quasar J1825$-$1718, which are 1.23~mas in RA and 1.97~mas in Dec (Fomalont \etal\ 2003). Triangles show the positions of 44-GHz methanol masers mapped with the VLA by Slysh \etal\ (1999) with position errors of \timeform{0.''2}. Colors indicate the LSR velocity of the spots for both methanol and H$_2$O maser emission. 
(c) Absolute proper motions of H$_2$O maser features without correction for the Solar motion and Galactic rotation. The map origin (reference spot~4b) is at $\alpha_{2000}=$18$^{\rm h}$18$^{\rm m}$54.$^{\rm s}$653181 and $\delta_{2000}=-$16$^\circ$\timeform{47'}\timeform{50.''07668} (i.e., the position of spot~4b at the first epoch). (d) Internal motions of all maser features with the mean motion of the features of ($\bar{\mu_X}$, $\bar{\mu_Y}$)$=$(0.95, $-$2.50) mas~yr$^{-1}$ removed (without correction for the Solar motion and Galactic rotation). The map origin is the same as in figure~4c. A proper motion of 1.00~mas~yr$^{-1}$ corresponds to a linear velocity of 5.31~km~s$^{-1}$ at a source distance of 1.12~kpc. }\label{fig:map} \end{figure*} Table~3 lists the results from the relative position and proper-motion measurements from the first 6 epochs. A total of 6 maser features are presented here with proper motions detected over 3 or more epochs. Errors of the relative proper motions (shown in parentheses for $\mu_x$ and $\mu_y$ in table~3) for each spot were estimated from the formal fitting uncertainties scaled by the RMS residuals of the spot positions. For each feature ($\#$1 through 6), the relative position and proper motion were calculated as error-weighted means of those for all detected spots of the feature, as denoted by {`}w-mean{'} in table~3. Figure~4 shows the maser distribution and proper motions for G14.33$-$0.64. Figures~4a and 4b are radio maps of the region with contours showing continuum emission at 6-cm wavelength (C-band) from VLA archive data (program AH361) observed in the {`}C{'} configuration at an angular resolution of 3\timeform{''} (Hughes \& MacLeod 1994). Our VLBI absolute positions of H$_2$O maser features are also shown. Our absolute position accuracy is essentially limited by the position errors of the position-reference quasar J1825$-$1718, which are 1.23~mas in RA and 1.97~mas in Dec (Fomalont \etal\ 2003). 
Hughes \& MacLeod (1994) originally associated the H$_2$O maser emission in G14.33$-$0.64 with brighter radio continuum emission, offset \timeform{1.'16} toward northeast from the IRAS position (at the origin) as seen in figure 4a. Our new VLBI map finds the H$_2$O maser emission (feature~7 in particular) associated (within \timeform{5''}) with a different and closer radio continuum source and yields the first precise distribution of H$_2$O maser spots in G14.33$-$0.64. Figure~4c shows the absolute proper motions of maser features 1 to 6, which were obtained by adding relative proper motions (in table~3) to the absolute proper motion of the reference spot 4b (in table~2). These absolute proper motions are not corrected for the apparent motions due to the Solar motion and the Galactic rotation, which add to the peculiar motion of the source. The map origin is the position of the reference spot~4b at the first epoch: $\alpha_{2000}=$18$^{\rm h}$18$^{\rm m}$54.$^{\rm s}$653181 and $\delta_{2000}=-$16$^\circ$\timeform{47'}\timeform{50.''07668}. Figure~4d shows the internal motions of the maser features relative to the mean motion of the features. The mean motion of all features 1 to 6 was obtained by averaging the obtained proper motions over 3 distinct regions: (1) feature~1; (2) features~2 and 3; and (3) features~4, 5 and 6. We took an unweighted mean of relative motions of maser features in each region, and then took an unweighted mean of the 3 regions, as listed in table~3 by {`}1,23,456 u-mean{'}. We obtained a mean relative motion of ($\bar{\mu_x}$, $\bar{\mu_y}$)$=$(2.53, $-$2.08) mas~yr$^{-1}$ (the bar symbols indicate mean values). By adding the absolute proper motion of the reference spot 4b, ($\mu_X$, $\mu_Y$)$_{\rm{4b}}=$($-$1.58, $-$0.42) mas~yr$^{-1}$, we obtained the absolute mean motion ($\bar{\mu_X}$, $\bar{\mu_Y}$)$=$(0.95, $-$2.50) mas~yr$^{-1}$.
Note that a proper motion of 1.00~mas~yr$^{-1}$ corresponds to a linear velocity of 5.31~km~s$^{-1}$ at a source distance of 1.12~kpc. In $\S5.5$, we will adopt this mean motion to discuss the systemic motion of G14.33$-$0.64, by taking errors into account to allow for the possibility that the mean maser motion does not trace the systemic motion. \section{Discussion} \subsection{Astrometric Error Sources} In this section, we will discuss possible error sources in our parallax and proper-motion measurements and how we estimated the errors. The first possible error source in individual position measurements is thermal errors due to noise, which can be approximated by the halfwidth (HWHM) of the beam size divided by the signal-to-noise ratio of the maser map. We find that thermal errors can account for the errors of relative position measurements in the single-beam data. Thermal errors are $\sim 0.01-0.1$~mas (i.e., beam size $\sim1$~mas, signal $\sim1-10$~Jy/beam, and noise $\sim$0.1~Jy/beam) and agree well with errors in the relative proper motions as listed in table~3, which were estimated from standard deviations from the post-fit residual from the least-squares fits (see $\S3.2$ and $\S4.2$). However, for parallax and proper-motion measurements in the dual-beam data, errors in the measurements are larger than expected from thermal errors of $\sim 0.1$~mas (i.e., beam size $\sim1$~mas, signal $\gtrsim3$~Jy/beam, and noise $\sim 0.3$~Jy/beam). The standard deviations from the fits were $\sigma=0.26$~mas and thus are larger than thermal noise errors. Here we do not consider the reference quasar as the predominant error source since it did not show any resolved structure.
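The thermal rule of thumb quoted above (position error $\approx$ beam HWHM divided by signal-to-noise ratio) can be sketched as follows; the beam and flux values are the representative numbers from the text, not actual measurements, and the results come out at hundredths of a mas, the order quoted above.

```python
def thermal_position_error(beam_fwhm_mas, peak_jy_per_beam, noise_jy_per_beam):
    """Rule-of-thumb astrometric error: beam halfwidth (HWHM) / SNR."""
    hwhm = 0.5 * beam_fwhm_mas
    snr = peak_jy_per_beam / noise_jy_per_beam
    return hwhm / snr

# Representative single-beam values from the text: ~1 mas beam,
# 1-10 Jy/beam signal, ~0.1 Jy/beam noise.
err_weak = thermal_position_error(1.0, 1.0, 0.1)     # weak spot
err_strong = thermal_position_error(1.0, 10.0, 0.1)  # strong spot
```

The same formula with the dual-beam numbers (signal $\gtrsim3$~Jy/beam, noise $\sim0.3$~Jy/beam) gives $\sim0.05$~mas, well below the observed $\sigma=0.26$~mas scatter, which is why non-thermal error sources must be considered.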
Also, even though the accuracy of the maser absolute position is limited by the uncertainties of the reference quasar position, the positional error of the reference quasar only adds a constant offset to the maser spot position at each epoch and does not contribute to uncertainties in parallax and proper-motion measurements. One of the likely sources that would cause large errors in the parallax and proper-motion measurements is errors in modeling of tropospheric zenith delay (see Sato \etal\ 2008 and references therein). Indeed, the fact that a high elevation cutoff of $35^\circ$ yielded the best-fit result for the parallax fitting for G14.33$-$0.64 indicates that this low-declination source is subject to tropospheric delay errors. However, if errors in modeling of tropospheric zenith delay are the predominant error source, then all maser features at the same epoch should show systematic errors in the position measurements. As can be seen in figure 2, the deviations from the parallax fits clearly differ for the two different maser features at each epoch (features 1 and 4), which indicates that errors are random for different features at the same epoch. Therefore, it is likely that errors in modeling of tropospheric zenith delay are not the predominant error source in the remaining data after having removed as much of the effects of the tropospheric delay errors as possible by adopting a high elevation cutoff. Another likely error source in the parallax measurements is variation in maser structure. In figure 2, the tendency of deviations from the parallax fits is similar for maser spots in the same feature but different between different features (features 1 and 4). This is consistent with the fact that the variation of maser structure causes positional errors that are uncorrelated for different features but might be correlated between maser spots in adjacent velocity channels within the same feature.
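The practical consequence of such correlations is that the formal error of a fitted parallax shrinks with the number of truly independent measurements, not with the number of spots. A minimal sketch of this $1/\sqrt{N}$ scaling, using a hypothetical per-feature uncertainty (not a value from the fits):

```python
import math

def scaled_error(single_measurement_error, n_independent):
    """Formal error of the mean of n independent, equal-error measurements."""
    return single_measurement_error / math.sqrt(n_independent)

# Hypothetical per-feature parallax uncertainty (mas):
sigma1 = 0.143
# Conservative counting: spots within a feature fully correlated, so only
# the features (here 2 of them) count as independent measurements.
err_2_features = scaled_error(sigma1, 2)
# Optimistic counting: all 7 spots treated as independent.
err_7_spots = scaled_error(sigma1, 7)
```

The two countings differ by a fixed factor $\sqrt{7/2}\approx1.9$, which is why the choice between them matters for the quoted parallax error.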
In our parallax measurements, the variation of maser structure is likely the predominant error source. We estimated errors of the parallax measurements from the post-fit residuals from the least-squares fitting. For different spots within the same maser feature (e.g., spots 1a, 1b, 1c, and 1d in feature \#1), we allow for the possibility that errors due to variation in maser structure may be partially correlated. As a conservative approach, we assumed errors of all spots within the same maser feature at the same epoch are 100\% correlated (but random for different features). This means that, even though we used 7 maser spots of 1a, 1b, 1c, 1d, 4a, 4b, 4c for the measurements, we assume that only the 2 different maser features contribute as 2 independent measurements in reducing the errors. We obtained an error of $\sigma_\pi=0.101$~mas. Instead, if we assume errors due to variation in maser structure are random and uncorrelated for all the 7 spots (1a$-$1d and 4a$-$4c), the errors would reduce to $\sigma_\pi '=0.060$~mas. In reality, errors of different spots in the same feature are not likely to be 100\% correlated but only partially correlated (if not uncorrelated). Therefore, the error estimate of $\sigma_\pi=0.101$~mas in our parallax measurements is an upper limit on the errors, adopted as a conservative approach. \subsection{Distance to the Sagittarius Spiral Arm} \begin{figure*}[tp] \begin{center} \FigureFile(125mm,125mm){figure5.eps} \end{center} \caption{Model of the Galaxy by Georgelin \& Georgelin (1976), overlaid with the modified model by Taylor \& Cordes (1993) shown in gray. {`}$\odot${'} indicates the location of the Sun and {`}GC{'} the position of the Galactic center. The red square shows the new location of G14.33$-$0.64 based on our parallax measurements, while the yellow square is the previously estimated position of G14.33$-$0.64 based on kinematic distances.
Three star-forming regions, G35.20$-$0.74 (blue diamond), G35.20$-$1.74 (pink triangle) and W51~IRS2 (green hexagon), possibly belonging to the Sagittarius spiral arm are also indicated with parallactic distances measured by Zhang \etal\ (2009) and by Xu \etal\ (2009) with the VLBA for 12-GHz methanol maser emission. Errors for all parallactic distances are also shown, which are within the size of dots for G14.33$-$0.64 and for G35.20$-$0.74.}\label{fig:mw} \end{figure*} \begin{figure*}[bp] \begin{center} \FigureFile(150mm,60mm){figure6.eps} \end{center} \caption{Fits for the pitch angle of the Sagittarius spiral arm. The logarithm of Galactocentric radius $R$ (measured in kpc) is plotted against Galactocentric longitude $\beta$ (in degrees). The Sun-center distance of 8.5~kpc was adopted. (a) G14.33$-$0.64 (red square), G35.20$-$0.74 (blue diamond), G35.20$-$1.74 (pink triangle) and W51~IRS2 (green hexagon) are plotted with parallaxes and associated uncertainties from this study, Zhang \etal\ (2009) and Xu \etal\ (2009). Gray lines show the best-fit straight lines from unweighted linear least-squares fitting to the data. The pitch angle $i$ is obtained by taking the negative of the arctangent of the line slopes. (Note that we need to express $\ln R$ in natural logarithm and $\beta$ in radians to calculate the pitch angle.) Line A shows the fitting result from G14.33$-$0.64, G35.20$-$0.74 and G35.20$-$1.74, while line B is from G14.33$-$0.64, G35.20$-$0.74 and W51~IRS2. (b) Same as (a), but with five sources (cyan dots) in the Local (Orion) arm (spur) also shown with precise parallax measurements. Line C is an unweighted straight line fit to W51~IRS2 and the five sources in the Local arm (see text). }\label{fig:logR} \end{figure*} \begin{figure}[tp] \begin{center} \FigureFile(70mm,70mm){figure7.eps} \caption{ Galactic maser source locations in the Sagittarius and Local (Orion) arms, superimposed on artist's conception (R.\ Hurt: NASA/JPL-Caltech/SSC). 
{`}$\odot${'} indicates the location of the Sun and {`}$*${'} the position of the Galactic center. The red square shows the new location of G14.33$-$0.64 based on our parallax measurements. Three star-forming regions, G35.20$-$0.74 (blue diamond), G35.20$-$1.74 (pink triangle) and W51~IRS2 (green hexagon), possibly belonging to the Sagittarius spiral arm, are also indicated with parallactic distances. The positions of five sources in the Local (Orion) {`}arm{'} or spur are indicated by cyan dots with precise parallactic distances (see text). Errors for all parallactic distances are also shown, which are mostly smaller than the size of the symbols.}\label{fig:mw_overlay} \end{center} \end{figure} Our parallax measurement for G14.33$-$0.64 reveals the source distance to be $d=1.12\pm 0.13$~kpc, which is less than half of previously derived kinematic distances. The kinematic distances for G14.33$-$0.64 are, for example, 2.5~kpc by Molinari \etal\ (1996) from the NH$_3$ (1,1) and (2,2) lines; 2.6~kpc both by Walsh \etal\ (1997) and Val'tts \etal\ (2000) from 6.7-GHz and 95-GHz methanol maser lines, respectively; 3.1~kpc from the H110$\alpha$ line and 2.6~kpc from H$_2$CO absorption lines by Sewilo \etal\ (2004). All of the kinematic distances above were derived using the Galactic rotation model by Brand \& Blitz (1993). Palagi \etal\ (1993) derived 2.7~kpc from H$_2$O maser lines with a peak at $V_{\rm LSR}=22.8$~km~s$^{-1}$ using the rotation curve of Brand (1986). The good agreement among previous kinematic distances is a result of using the same rotation model and similar radial velocities $V_{\rm LSR}\simeq 22$~km~s$^{-1}$ observed at different wavelengths. The most persistent H$_2$O maser feature in our measurements, feature~4, also showed a radial velocity of $V_{\rm LSR}\simeq 22$~km~s$^{-1}$, which agrees well with the systemic radial velocity of G14.33$-$0.64, but several other spectral components differed by up to 10~km~s$^{-1}$ in radial velocity.
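The quoted distance and its asymmetric error follow directly from inverting the parallax, $d\,[{\rm kpc}]=1/\pi\,[{\rm mas}]$. A short numerical check, using the implied parallax for $d=1.12$~kpc together with the $\sigma_\pi=0.101$~mas error estimate discussed above:

```python
def parallax_to_distance_kpc(parallax_mas):
    """d [kpc] = 1 / parallax [mas]."""
    return 1.0 / parallax_mas

# Implied parallax for d = 1.12 kpc, with the conservative error
# sigma_pi = 0.101 mas quoted in the error discussion:
pi_mas, sigma_pi = 1.0 / 1.12, 0.101
d = parallax_to_distance_kpc(pi_mas)
d_near = parallax_to_distance_kpc(pi_mas + sigma_pi)  # -1 sigma distance
d_far = parallax_to_distance_kpc(pi_mas - sigma_pi)   # +1 sigma distance
```

The resulting interval is roughly $1.01-1.26$~kpc, i.e., the symmetrized $\pm0.13$~kpc quoted in the text, and it excludes the $2.5-3.1$~kpc kinematic distances by a wide margin.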
Figure~5 shows the classic model of the Galaxy by Georgelin \& Georgelin (1976). Gray lines show the modified model by Taylor \& Cordes (1993). Note that a shift toward the Galactic center in the position of the Sagittarius arm was introduced by Taylor \& Cordes (1993), to correspond better with the kinematic distances of Downes \etal\ (1980). Downes \etal\ (1980) estimated the kinematic distances to Galactic H\emissiontype{II} regions from radio observations of H110$\alpha$ and H$_2$CO lines using the Schmidt (1965) model, with typical errors of $\pm1$ to 2 kpc for galactic longitudes $l=20^\circ$ to $60^\circ$, which can be more than $\sim$50\% errors for the Sagittarius arm at lower galactic longitudes. Although G14.33$-$0.64 was not in the catalog by Downes \etal\ (1980), it was in the catalog by Sewilo \etal\ (2004) in H110$\alpha$ and H$_2$CO line observations. It can be clearly seen in figure~5 that the kinematic distance (shown as the yellow square) places G14.33$-$0.64 as well as the interpolated Sagittarius arm further toward the Galactic center, like other sources in the arm. However, our direct parallax measurements (red square in figure~5) reveal the location of G14.33$-$0.64 to be closer to the Sun and farther out in the Galaxy, in good agreement with the Sagittarius arm originally modeled by Georgelin \& Georgelin (1976), without the {`}bump{'} toward the Galactic center. In figure~5, three other star-forming regions, G35.20$-$0.74 (blue diamond), G35.20$-$1.74 (pink triangle) and W51~IRS2 (green hexagon), possibly in the Sagittarius spiral arm, are also plotted with parallax distances of $2.19^{+0.24}_{-0.20}$~kpc, $3.27^{+0.56}_{-0.42}$~kpc (Zhang \etal\ 2009) and $5.1^{+2.9}_{-1.4}$~kpc (Xu \etal\ 2009) from the VLBA for 12-GHz methanol maser emission. It is most likely that the {`}bump{'} in the Sagittarius spiral arm toward the Galactic center suggested in Taylor \& Cordes (1993) is due to errors of kinematic distances.
A more recent model by Cordes \& Lazio (2002), which is built upon the Taylor \& Cordes (1993) model, also retains the {`}bump{'} of the Sagittarius arm toward the Galactic center. Both Taylor \& Cordes (1993) and Cordes \& Lazio (2002) give models for the distribution of free electrons in the Galaxy, upon which most pulsar distances are determined using the observed dispersion measures (DM), i.e., the column density of electrons toward the pulsars (Frail \& Weisberg 1990). These models are built by numerically fitting predicted and observed dispersion measures for pulsars with known {`}independent distance estimates{'} (Taylor \& Cordes 1993), most of which come from uncertain kinematic distances. In particular, kinematic distances are more severely affected by errors of the radial velocities for sources at low galactic longitudes than at high longitudes. For example, for the simplest assumption of circular Galactic rotation with a source distance $d$ in the solar neighborhood ($d\ll R_0$, where $R_0$ is the distance to the Galactic center from the Sun), the kinematic distance can be approximated by $d_{\rm kin}\approx V_{\rm LSR}/(A\sin(2l))$ using Oort's constant $A$ (see e.g., Karttunen \etal\ 2007). Errors in the kinematic distances $\sigma_{d_{\rm kin}}$ are thus proportional to the errors in the radial velocities divided by $\sin (2l)$: $\sigma_{d_{\rm kin}} \propto \sigma_{V_{\rm LSR}}/\sin(2l)$. Therefore, the kinematic distances toward the Sagittarius arm in the inner Galaxy are expected to be particularly uncertain. Taylor \& Cordes (1993) acknowledge that pulsar distances derived from previous models generally tend to be overestimated for $|l|<30^\circ$ and underestimated for $l=50^\circ - 70^\circ$ (although they claim their own model has no significant dependence of distance errors on $l$), which can account for the {`}bump{'} of the Sagittarius arm toward the Galactic center at low galactic longitudes.
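The $1/\sin(2l)$ amplification of velocity errors above can be made concrete numerically. The sketch below evaluates it at the longitude of G14.33$-$0.64 and at $l=60^\circ$ for comparison; the Oort constant value is an assumed round number, not taken from the paper.

```python
import math

def kinematic_distance_kpc(v_lsr_kms, gal_longitude_deg, oort_A=14.8):
    """Nearby-source approximation d_kin ~ V_LSR / (A sin 2l).

    oort_A is Oort's constant in km/s/kpc (an assumed representative value).
    """
    return v_lsr_kms / (oort_A * math.sin(math.radians(2.0 * gal_longitude_deg)))

def distance_error_amplification(gal_longitude_deg):
    """sigma_d / sigma_V scales as 1 / sin(2l)."""
    return 1.0 / math.sin(math.radians(2.0 * gal_longitude_deg))

# At l = 14.33 deg the factor is ~2.1; at l = 60 deg it is only ~1.15,
# so low-longitude kinematic distances are far more velocity-sensitive.
amp_low = distance_error_amplification(14.33)
amp_high = distance_error_amplification(60.0)
```

With these assumptions, a modest 5~km~s$^{-1}$ velocity error at $l\approx14^\circ$ already maps to a distance error of order 0.7~kpc, comparable to the discrepancy discussed here.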
Our results as shown in figure~6 indicate that the previously expected {`}bump{'} in the Sagittarius arm toward the Galactic center is most likely due to errors that arise from kinematic distances. Russeil (2003) points out that the nearest part of the Sagittarius arm is placed at $\sim 2$~kpc based on kinematic distances (using the rotation curve of Brand \& Blitz 1993), while a fitted regular logarithmic arm, also based on kinematic distances, passes at $\sim 1$~kpc, indicating the possibility that the Galaxy does not have a regular design. However, our parallax measurements suggest that the nearest part of the Sagittarius arm, indeed, lies at $\sim 1$~kpc. The disagreement between the arm fitting and the kinematic distance is likely due to errors of kinematic distances, rather than an irregular design of the Sagittarius arm. Direct determination of distances is of great importance and is required to obtain a true map of the Galaxy and, in particular, of the Sagittarius arm. Our parallax measurement of G14.33$-$0.64 with VERA reveals the location of the Sagittarius arm to be closer to the Sun than previously thought. \subsection{Pitch angle of the Sagittarius arm} We attempted to fit the pitch angle $i$ of the Sagittarius arm using our parallax measurement of G14.33$-$0.64 with three other parallax measurements of sources shown in figure~5, which may lie in the Sagittarius arm: G35.20$-$0.74, G35.20$-$1.74 (Zhang \etal\ 2009) and W51~IRS2 (Xu \etal\ 2009). The pitch angle $i$ is defined as the angle between the arm and the tangent to a Galactocentric circular orbit. For an ideal logarithmic spiral arm, it can be expressed as $\ln (R_1/R_2) = - (\beta_1 - \beta_2) \tan i$ for two sources 1 and 2 (indicated by subscripts) in the arm, where $R$ is the Galactocentric radius at a Galactocentric longitude $\beta$ (0 toward the Sun and increasing with galactic longitude; see Reid \etal\ 2009b).
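Applying this relation requires converting each measured $(d, l)$ pair into Galactocentric $(R, \beta)$. A plane-geometry sketch (ignoring the small galactic latitude, and with $R_0=8.5$~kpc as adopted below):

```python
import math

def galactocentric(d_kpc, l_deg, R0=8.5):
    """Galactocentric radius R and longitude beta (beta = 0 toward the Sun,
    increasing with galactic longitude), for a source taken to lie
    in the Galactic plane."""
    l = math.radians(l_deg)
    x = d_kpc * math.sin(l)       # toward l = 90 deg
    y = R0 - d_kpc * math.cos(l)  # along the Sun-center line, from the GC
    R = math.hypot(x, y)
    beta = math.degrees(math.atan2(x, y))
    return R, beta

# G14.33-0.64 at the parallax distance d = 1.12 kpc, l = 14.33 deg:
R, beta = galactocentric(1.12, 14.33)
```

For G14.33$-$0.64 this places the source at $R\approx7.4$~kpc and a small Galactocentric longitude $\beta\approx2^\circ$, the leftmost point on the figure~6 plots.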
Figure~6a shows a plot of $\log_{10} (R/{\rm kpc})$ vs.\ $\beta$ (in degrees) for G14.33$-$0.64 (red square), G35.20$-$0.74 (blue diamond), G35.20$-$1.74 (pink triangle) and W51~IRS2 (green hexagon). Here we adopted the Sun-center distance of $R_0 =8.5$~kpc. Errors are indicated for each source with parallax uncertainties of $\pm 1\sigma$ from this study, Zhang \etal\ (2009) and Xu \etal\ (2009). We attempted linear least-squares fitting to the sources with unweighted straight lines. (Note that we need to express $\ln R$ in natural logarithm and $\beta$ in radians to calculate the pitch angle.) As seen in figure~6a, the four sources do not lie in a straight line, and we attempted fitting with two possible combinations of three sources, which are shown in gray lines in the figure. Line A shows a best-fit straight line for G14.33$-$0.64, G35.20$-$0.74 and G35.20$-$1.74 (excluding W51~IRS2), which yields a pitch angle of $i=34.^\circ 7 \pm 2.^\circ 7$. Line B is a fitting result from G14.33$-$0.64, G35.20$-$0.74 and W51~IRS2 (excluding G35.20$-$1.74), which yields a smaller pitch angle of $i = 11.^\circ 2 \pm 10.^\circ 5$. This pitch angle $i\sim 11^\circ$ agrees well with the four-arm Milky Way model by Vall\'ee (1995) with a best-fit pitch angle of $i=12.^\circ 1\pm1$. Figure~7 is a plot of the positions of the four sources superimposed on an artist's conception of the Milky Way. For comparison, five sources in the Local (Orion) {`}arm{'} or spur are also shown with precise parallax measurements: G59.7$+$0.1 (Xu \etal\ 2009), Cep~A (Moscadelli \etal\ 2009), Orion (Hirota \etal\ 2007; Menten \etal\ 2007; Kim \etal\ 2008), G232.6+1.0 (Reid \etal\ 2009a), and VY~CMa (Choi \etal\ 2009; Reid \etal\ 2009c). With the five sources, Reid \etal\ (2009b) fitted the pitch angle of the Local arm to be $27.^\circ 8 \pm 4.^\circ 7$, which is larger than the pitch angles they fitted for other spiral arms, e.g., $16.^\circ 5 \pm 3.^\circ 1$ for the Perseus spiral arm.
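The fitting procedure described above reduces to an unweighted straight-line fit of $\ln R$ against $\beta$ (in radians), with the pitch angle read off the slope. The sketch below verifies the arithmetic on synthetic points placed exactly on a logarithmic spiral; the numbers are illustrative, not the measured source positions.

```python
import math

def pitch_angle_deg(points):
    """Pitch angle from an unweighted straight-line fit of ln R against
    beta (converted to radians).  For a logarithmic spiral,
    ln(R1/R2) = -(beta1 - beta2) tan i, so i = arctan(-slope)."""
    xs = [math.radians(beta) for beta, _ in points]
    ys = [math.log(r) for _, r in points]
    n = float(len(points))
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.degrees(math.atan(-slope))

# Synthetic check: three (beta [deg], R [kpc]) points placed exactly on a
# spiral with a 12-degree pitch angle (hypothetical values):
true_i = 12.0
pts = [(beta, 7.0 * math.exp(-math.radians(beta) * math.tan(math.radians(true_i))))
       for beta in (2.0, 20.0, 40.0)]
i_fit = pitch_angle_deg(pts)
```

The fit recovers the input pitch angle exactly for collinear points; with only three real sources and their parallax errors, the same slope carries the large uncertainties quoted for lines A and B.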
In figure~6b, we also attempted a straight line fitting (line C) to the five Local arm sources (marked by cyan dots) plus W51~IRS2 (green hexagon), which yields a pitch angle of $26.^\circ 1 \pm 12.^\circ 3$, consistent with the pitch angle fitted with only the five sources above. Thus the Local arm/spur may branch from the Sagittarius arm near the position of W51~IRS2, which is often considered to be at the tangent point of the Sagittarius arm. One possible interpretation is that the Sagittarius arm bifurcates near the position of W51~IRS2 into the Local spur (line C) at a pitch angle of $i\sim 26^\circ$ and into the other arm traced by G14.33$-$0.64 and G35.20$-$0.74 (line B) at a pitch angle of $i\sim11^\circ$. Another possibility is that the Sagittarius arm is traced by G14.33$-$0.64, G35.20$-$0.74 and G35.20$-$1.74 (line A) and branches from the interior (Scutum-Crux) arm at a large pitch angle of $i\sim34^\circ$. However, more sources with precise parallaxes are needed to establish clear spiral arm structure. Ongoing and future parallax measurements with VERA and with the VLBA are expected to reveal the structure of the Sagittarius arm and other spiral arms of the Galaxy further in detail. \subsection{Magnetic Field Reversals and the Sagittarius Arm} It is of interest to compare our results for the distance to the Sagittarius arm with studies of Galactic magnetic field reversals. The Galactic magnetic field has been probed most often by Faraday rotation measure (RM) observations of linearly polarized emission from both pulsars (e.g., Noutsos \etal\ 2008) and extragalactic radio sources (e.g., Brown \etal\ 2007). A common conclusion in many pulsar polarization studies is that the magnetic field in the Local arm is clockwise while it is counterclockwise in the first quadrant ($0^\circ \le l \le 90^\circ$) component of the Sagittarius arm, indicating the existence of a magnetic field reversal between the arms.
Weisberg \etal\ (2003) found from pulsar polarimetry a null in the magnetic field of a width less than 0.5~kpc extending from near the Sun over 7~kpc toward $l\sim 60^\circ$ (figure~4 in Weisberg \etal\ 2003), located midway between the Local and Sagittarius arms, which is most likely the field reversal region. Weisberg \etal\ (2003) noted a {`}1-kpc wide strip{'} of steady magnetic field from the local reversal (midway between the Local and Sagittarius arms) into the Sagittarius arm, based on the Sagittarius arm model by Cordes \& Lazio (2002). As previously discussed, our parallax measurements demonstrate the Sagittarius arm lies at a closer distance of $\sim1$~kpc, instead of the previously estimated $\sim 2-3$~kpc from kinematic distances, and we find that G14.33$-$0.64 (this study) and G35.20$-$0.74 (Zhang \etal\ 2009) trace out the near side of the Sagittarius arm, which lie outside of the {`}bump{'} delineated in Taylor \& Cordes (1993) as well as in Cordes \& Lazio (2002). Our parallax measurements thus indicate that the strip of steady magnetic field found by Weisberg \etal\ (2003) is likely in the Sagittarius arm, rather than in an inter-arm region exterior to the arm. This lends support to the picture that the magnetic field in the Sagittarius arm is steadily and dominantly counterclockwise, and is further evidence for the conclusion of Weisberg \etal\ (2003) that the field maxima tend to lie along the spiral arms, while the field reversals occur between the arms. \subsection{Motion of G14.33$-$0.64 and the Galactic Rotation} As seen in figures 4c and 4d, the internal motions of the H$_2$O masers in G14.33$-$0.64 show a bipolar jet-like motion on the sky, with deviations of $\simeq 1-2$~mas~yr$^{-1}$ from the mean, which correspond to a linear velocity of $5-10$~km~s$^{-1}$ at a distance of 1.12~kpc.
The central radial velocity $V_{\rm LSR}\simeq 22$~km~s$^{-1}$ of the maser emission agrees well with other molecular line velocities, and the deviations up to 10~km~s$^{-1}$ from the central radial velocity agree with the proper motions. From the parallax, proper motion, radial velocity and the sky position of the H$_2$O maser source, we can now calculate the full three-dimensional position and velocity of the source in the Galaxy. By following the methods described in detail by Reid \etal\ (2009b) to convert from the heliocentric reference frame to a reference frame that rotates with the Galaxy, we obtain the peculiar motion of the source with respect to the Galactic rotation. We adopt the mean absolute proper motion (the reference frame in figure~4d) of ($\bar{\mu_X}$, $\bar{\mu_Y}$)$=$(0.95, $-$2.50) mas~yr$^{-1}$ as the systemic motion of the source (before the correction of the solar motion and the Galactic rotation), with uncertainties of $\pm$2~mas~yr$^{-1}\simeq \pm$10~km~s$^{-1}$ in each of the eastward ($X$) and northward ($Y$) directions to allow for the possibility that the mean maser motion does not trace the systemic motion. For the radial velocity, we adopted $V_{\rm LSR}=22\pm 10$~km~s$^{-1}$. 
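The conversions between angular and linear velocity used above all follow from the standard factor $v_t = 4.74\,\mu\,d$ (with $\mu$ in mas~yr$^{-1}$ and $d$ in kpc), which reproduces both the 5.31~km~s$^{-1}$ per mas~yr$^{-1}$ scale and the adopted $\pm2$~mas~yr$^{-1}\simeq\pm10$~km~s$^{-1}$ uncertainty:

```python
def transverse_velocity_kms(mu_mas_per_yr, d_kpc):
    """v_t = 4.74 * mu [mas/yr] * d [kpc]; 4.74 km/s is 1 AU per year."""
    return 4.74 * mu_mas_per_yr * d_kpc

# 1 mas/yr at the measured distance of 1.12 kpc:
v_unit = transverse_velocity_kms(1.0, 1.12)   # ~5.31 km/s
# The adopted +/-2 mas/yr proper-motion uncertainty:
v_err = transverse_velocity_kms(2.0, 1.12)    # ~10.6 km/s
```

The same factor shows why a 0.13~kpc distance error changes the derived velocities by only $\sim$12\%, small compared with the adopted motion uncertainties.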
Adopting the {\it Hipparcos} solar motion values of $U_\odot=10.0\pm 0.36$~km~s$^{-1}$ (radially toward the Galactic center), $V_\odot=5.25\pm0.62$~km~s$^{-1}$ (in the local direction of Galactic rotation) and $W_\odot = 7.17\pm0.38$~km~s$^{-1}$ (vertically upwards, i.e., toward the north Galactic pole perpendicularly to the Galactic plane) from Dehnen \& Binney (1998) with the recent best-fit results for the Galactic constants of $R_0 = 8.4\pm0.6$~kpc and $\Theta_0 = 254\pm16$~km~s$^{-1}$ by Reid \etal\ (2009b), and assuming a flat rotation of the Galaxy (i.e., rotational velocity $\Theta$ at the source location is the same as at the Sun, $\Theta \simeq \Theta_0$), the peculiar velocity components of G14.33$-$0.64 are obtained to be $U_s = 11\pm10$~km~s$^{-1}$ toward the Galactic center at the source position, $V_s = -1 \pm 11$~km~s$^{-1}$ in the local direction of the Galactic rotation, and $W_s = -4 \pm 11$~km~s$^{-1}$ vertically out of the Galactic plane toward the north Galactic pole. Here the uncertainties of $10-11$~km~s$^{-1}$ in the derived peculiar motion are directly due to the uncertainties for the proper motion and radial velocity of G14.33$-$0.64. The contribution from uncertainties in the Galactic constants $R_0$ and $\Theta_0$ is negligible, because the Galactic rotation term is almost canceled out in the differential calculation. If we adopt the IAU standard values of $R_0 = 8.5$~kpc and $\Theta_0 = 220$~km~s$^{-1}$ instead, the resulting peculiar motion becomes $U_s = 12\pm10$~km~s$^{-1}$, $V_s = -1 \pm 11$~km~s$^{-1}$, and $W_s = -4 \pm 11$~km~s$^{-1}$. Therefore, the peculiar motion of G14.33$-$0.64 is not significant in the direction of Galactic rotation ($V_s$) or in the direction out of the Galactic plane ($W_s$).
For the source location of G14.33$-$0.64 relative to the Sun in the Galaxy, the larger peculiar velocity component of G14.33$-$0.64 toward the Galactic center ($U_s$) reflects a radial velocity larger than expected from the circular rotation model, which has led to the larger kinematic distances derived in the previous studies. Overall, G14.33$-$0.64 shows no significant peculiar motion and is consistent with the circular Galactic rotation model. \vspace{5mm} We are deeply grateful to an anonymous referee for his/her careful reading of the paper and for a number of invaluable suggestions. We would like to express our sincere gratitude to all staff members and students at VERA and Kagoshima University for their continuous support. MS gratefully acknowledges the financial support from the Research Fellowships of the Japan Society for the Promotion of Science (JSPS) for Young Scientists.
\section{Introduction} The search ever deeper into the interior of matter, successfully started by Rutherford's discovery of atomic structure, now continues at much smaller scales (below 10$^{-13}$ cm) at high-energy accelerators. The interaction region of colliding protons can be quantitatively explored with the help of the unitarity condition if experimental data on their elastic scattering are used. With only these two ingredients at hand we are able to show that the energy evolution of the inelastic interaction region demonstrates quite surprising features. \section{The unitarity condition} From the theoretical side, the most reliable information comes from the unitarity condition. The unitarity of the $S$-matrix, $SS^+=1$, relates the amplitude of elastic scattering $f(s,t)$ to the amplitudes of inelastic processes $M_n$. In the $s$-channel they are subject to the integral relation (for more details see, e.g., \cite{PDG, anddre1, ufnel}) which can be written symbolically as \begin{equation} {\rm Im}f(s,t)= I_2(s,t)+g(s,t)= \int d\Phi _2 ff^*+\sum _n\int d\Phi _n M_nM_n^*. \label{unit} \end{equation} The variables $s$ and $t$ are the squared energy and transferred momentum of colliding protons in the center of mass system $s=4E^2=4(p^2+m^2)$, $-t=2p^2(1-\cos \theta)$ at the scattering angle $\theta $. The non-linear integral term represents the two-particle intermediate states of the incoming particles. The second term represents the shadowing contribution of inelastic processes to the imaginary part of the elastic scattering amplitude. Following \cite{hove} it is called the overlap function. This terminology is ascribed to it because the integral there defines the overlap within the corresponding phase space $d\Phi _n$ between the matrix element $M_n$ of the $n$-th inelastic channel and its conjugated counterpart with the collision axis of initial particles deflected by an angle $\theta $ in proton elastic scattering.
It is positive at $\theta =0$ but can change sign at $\theta \neq 0$ due to the relative phases of inelastic matrix elements $M_n$'s. At $t=0$ it leads to the optical theorem \begin{equation} {\rm Im}f(s,0)=\sigma _{tot}/4\sqrt {\pi} \label{opt} \end{equation} and to the general statement that the total cross section is the sum of cross sections of elastic and inelastic processes \begin{equation} \sigma _{tot}=\sigma _{el}+\sigma _{in}, \label{telin} \end{equation} i.e., that the total probability of all processes is equal to one. \section{The geometry of the interaction region} Here, we show that it is possible to study the space structure of the interaction region of colliding protons using information about their elastic scattering within the unitarity condition. The whole procedure is simplified because in the space representation one gets an algebraic relation between the elastic and inelastic contributions to the unitarity condition in place of the more complicated non-linear integral term $I_2$ in Eq. (\ref{unit}). To define the geometry of the collision we must express all characteristics presented by the angle $\theta $ and the transferred momentum $t$ in terms of the transverse distance between the trajectories of the centers of the colliding protons - namely the impact parameter, $b$. This is easily carried out using the Fourier -- Bessel transform of the amplitude $f$ which retranslates the momentum data to the corresponding transverse space features and is written as \begin{equation} i\Gamma (s,b)=\frac {1}{2\sqrt {\pi }}\int _0^{\infty}d\vert t\vert f(s,t) J_0(b\sqrt {\vert t\vert }). \label{gamm} \end{equation} The unitarity condition in the $b$-representation reads \begin{equation} G(s,b)=2{\rm Re}\Gamma (s,b)-\vert \Gamma (s,b)\vert ^2. \label{unit1} \end{equation} The left-hand side (the overlap function in the $b$-representation) describes the transverse impact-parameter profile of inelastic collisions of protons. 
It is just the Fourier -- Bessel transform of the overlap function $g$. It satisfies the inequalities $0\leq G(s,b)\leq 1$ and determines how absorptive the interaction region is, depending on the impact parameter (with $G=1$ for full absorption and $G=0$ for complete transparency). The profile of elastic processes is determined by the subtrahend in Eq. (\ref{unit1}). If $G(s,b)$ is integrated over all impact parameters, it leads to the cross section for inelastic processes. The terms on the right-hand side would produce the total cross section and the elastic cross section, correspondingly, as should be the case according to Eq. (\ref{telin}). The overlap function is often discussed in relation with the opacity (or the eikonal phase) $\Omega (s,b)$ such that $G(s,b)=1-\exp (-\Omega (s,b))$. Thus, full absorption corresponds to $\Omega =\infty $ and complete transparency to $\Omega =0$. The most prominent feature of elastic scattering is the rapid decrease of the differential cross section with increasing transferred momentum, $\vert t\vert $, in the diffraction peak. As a first approximation, at present energies, it can be described by the exponential shape with the slope $B(s)$: \begin{equation} \frac {d\sigma }{dt}=\frac {\sigma ^2_{tot}}{16\pi }\exp (-B(s)\vert t\vert ). \label{expB} \end{equation} The diffraction cone contributes predominantly to the Fourier - Bessel transform of the amplitude. Using the above formulae, one can write the dimensionless $\Gamma $ as \begin{equation} i\Gamma (s,b)=\frac {\sigma _t}{8\pi }\int _0^{\infty}d\vert t\vert \exp (-B\vert t\vert /2 )(i+\rho )J_0(b\sqrt {\vert t\vert }). \label{gam2} \end{equation} Here, the diffraction cone approximation (\ref{expB}) is inserted. 
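The Gaussian impact-parameter profile that follows from the exponential cone can be checked by carrying out the Fourier--Bessel transform numerically. The sketch below evaluates ${\rm Re}\,\Gamma(b)=(\sigma_{tot}/8\pi)\int_0^\infty d|t|\,e^{-B|t|/2}J_0(b\sqrt{|t|})$ with $\rho$ neglected and compares it with the closed form $\zeta\exp(-b^2/2B)$; the input values ($B=20$~GeV$^{-2}$, $\sigma_{tot}=4\pi B$ so that $\zeta=1$) are illustrative LHC-like numbers, not a fit.

```python
import math

def j0(x, terms=40):
    """Bessel function J0 via its power series (adequate for |x| < ~10)."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(x * x / 4.0) / ((k + 1) ** 2)
    return s

def re_gamma_numeric(b, B, sigma_tot, n=4000, t_max=4.0):
    """Trapezoidal evaluation of the Fourier-Bessel transform of the
    diffraction-cone amplitude (GeV-based units, rho neglected)."""
    dt = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end weights
        total += w * math.exp(-B * t / 2.0) * j0(b * math.sqrt(t))
    return sigma_tot / (8.0 * math.pi) * total * dt

def re_gamma_analytic(b, B, sigma_tot):
    """Closed form: zeta * exp(-b^2 / 2B) with zeta = sigma_tot / (4 pi B)."""
    zeta = sigma_tot / (4.0 * math.pi * B)
    return zeta * math.exp(-b * b / (2.0 * B))

# Illustrative LHC-like inputs: B = 20 GeV^-2, sigma_tot chosen so zeta = 1.
B, sigma_tot = 20.0, 4.0 * math.pi * 20.0
num = re_gamma_numeric(3.0, B, sigma_tot)   # b = 3 GeV^-1
ana = re_gamma_analytic(3.0, B, sigma_tot)
```

The exponential in $|t|$ thus maps exactly onto a Gaussian in $b$ with width $\sqrt{B}$, which is what makes the profile scale as a function of $b/\sqrt{2B}$.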
From this, one calculates \begin{equation} {\rm Re}\Gamma (s,b)= {\zeta }{\exp (-\frac {b^2}{2B})}, \label{rega} \end{equation} where we introduce the dimensionless parameter \begin{equation} \zeta=\frac {\sigma _{tot}}{4\pi B}= \frac {4\sigma _{el}}{(1+\rho ^2)\sigma _{tot}} \approx \frac {4\sigma _{el}}{\sigma _{tot}}, \label{ze} \end{equation} which is proportional to the ratio of the total cross section to the diffraction cone slope and, equivalently, to the ratio of the elastic to the total cross section. The ratio $\sigma _{el}/\sigma _{tot}$ defines the survival probability of the initial protons. The approximation sign refers to the neglected factor $1+\rho ^2$, where $\rho $ is the ratio of the real to the imaginary part of the amplitude in the diffraction cone. In what follows we neglect $\rho $ according to experimental data (with $\rho (7\;{\rm TeV}, 0)\approx 0.145$) and theoretical considerations which favor its decrease inside the diffraction cone. Thus one gets \begin{equation} G(s,b)= \zeta \exp (-\frac {b^2}{2B})[2-\zeta \exp (-\frac {b^2}{2B})]. \label{ge} \end{equation} The inelastic profile depends on two measured quantities: the diffraction cone slope $B(s)$ and the parameter $\zeta (s)$. It scales as a function of $b/\sqrt {2B}$. For central collisions with $b=0$ one gets \begin{equation} G(s,b=0)= \zeta (2-\zeta). \label{gZ} \end{equation} This formula is very significant because it shows that the darkness at the very center is fully determined by a single parameter, $\zeta $, which is a ratio of experimentally measured quantities: the total cross section and the slope of the diffraction cone $B$ (or, equivalently, the elastic and total cross sections). The energy evolution of these quantities defines the evolution of the absorption value. The interaction region becomes completely absorptive, $G(s,0)=1$, at the center only at $\zeta =1$, and the absorption diminishes for other values of $\zeta $. However, for small variations $\zeta =1\pm \epsilon $ the value $G(s,0)=1-\epsilon ^2$ varies even less.
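As a quick numerical cross-check of Eqs. (\ref{ge}) and (\ref{gZ}), the profile is easy to evaluate directly. The following Python sketch is illustrative only (the function name and sample values are our own; $b$ is in GeV$^{-1}$ and $B$ in GeV$^{-2}$):

```python
import math

def G(b, zeta, B):
    """Inelastic overlap function of Eq. (ge):
    G = g*(2 - g), with g = zeta * exp(-b^2 / 2B)."""
    g = zeta * math.exp(-b * b / (2.0 * B))
    return g * (2.0 - g)

# Central absorption, Eq. (gZ): G(s,0) = zeta*(2 - zeta)
assert abs(G(0.0, 1.0, 20.0) - 1.0) < 1e-12      # zeta = 1: full central absorption
assert abs(G(0.0, 0.75, 20.0) - 0.9375) < 1e-12  # ISR-like zeta = 0.75 gives 0.94
eps = 0.05
assert abs(G(0.0, 1.0 + eps, 20.0) - (1.0 - eps**2)) < 1e-12  # 1 +/- eps -> 1 - eps^2
```

For $\zeta =0.75$ the central value is $0.9375\approx 0.94$, matching the Table below, and a small deviation $\zeta =1\pm\epsilon $ changes $G(s,0)$ only at second order.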
In the Table, we show the energy evolution of $\zeta $ and $G(s,0)$ for $pp$ and $p\bar p$ scattering as calculated from experimental data on the total cross section and the diffraction cone slope at the corresponding energies. \medskip \begin{table} \medskip Table. $\;\;$ The energy behavior of $\zeta $ and $G(s,0)$. \medskip \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline $\sqrt s$, GeV&2.70&4.11&4.74&7.62&13.8&62.5&546&1800&7000\\ \hline $\zeta $ &1.56&0.98&0.92&0.75&0.69&0.67&0.83&0.93&1.00--1.02 \\ $G(s,0)$ &0.68&1.00&0.993&0.94&0.904&0.89&0.97&0.995&1.00 \\ \hline \end{tabular} \end{table} Let us point out that starting from ISR energies the value of $\zeta $ increases systematically and at LHC energies becomes equal to 1 within the accuracy of the measurements of $B$ and $\sigma _{tot}$. The impact parameter distribution of $G(s,b)$ (\ref{ge}) has its maximum at $b_m^2=2B\ln \zeta$ with full absorption $G(b_m)=1$. Its position depends on both $B$ and $\zeta $. Note that, for $\zeta <1$ (which is the case, e.g., at ISR energies), one gets incomplete absorption $G(s,b)<1$ at any physical $b\geq 0$, with the largest value reached at $b=0$, because the maximum appears at non-physical values of $b$. The disk is semi-transparent. At $\zeta =1$, which is reached at 7 TeV, the maximum is positioned exactly at $b=0$, and full absorption occurs there, i.e. $G(s,0)=1$. The disk center becomes impenetrable (black). The strongly absorptive core of the inelastic interaction region grows in size, as we see from the expansion of Eq. (\ref{ge}) at small impact parameters: \begin{equation} G(s,b)= \zeta [2-\zeta -\frac {b^2}{B}(1-\zeta )-\frac {b^4}{4B^2}(2\zeta -1)]. \label{gb} \end{equation} The term proportional to $b^2$ vanishes at $\zeta =1$, and $G(b)$ develops a plateau which extends to quite large values of $b$ (about 0.5 fm). The plateau is very flat because the last term starts to play a role at 7 TeV (where $B\approx 20$ GeV$^{-2}$) only for larger values of $b$.
At $\zeta >1$, the maximum shifts to positive physical impact parameters. A dip is formed at $b=0$, leading to a concave-shaped inelastic interaction region, approaching a toroidal shape. This dip becomes deeper at larger $\zeta $. The limiting value $\zeta =2$ leads to complete transparency at the center $b=0$. \begin{figure} \caption{ The evolution of the inelastic interaction region in terms of the survival probability. The values $\zeta =0.7$ and $1.0$ correspond to ISR and LHC energies and agree well with the result of detailed fitting to the elastic scattering data \cite{amal, dnec, mart}. A further increase of $\zeta $ leads to the toroid-like shape with a dip at $b=0$. The value $\zeta =1.5$ is proposed in \cite{kfk, fms} and $\zeta =1.8$ in \cite{roy} as corresponding to asymptotic regimes. The value $\zeta =2$ corresponds to the ``black disk'' regime ($\sigma _{el}=\sigma _{inel}= 0.5\sigma _{tot}$).} \centerline{\includegraphics[width=\textwidth, height=9cm]{fig2_better.jpg}} \end{figure} All these cases are demonstrated in Fig. 1, where $G(s,b)$ is plotted as a function of the scaling variable $b/\sqrt {2B}$ for different values of the parameter $\zeta $ according to Eq. (\ref{ge}). The line with $\zeta =0.7$ corresponds to ISR results and that with $\zeta =1$ to the LHC. Earlier it was shown that the results of analytical calculations according to (\ref{ge}) and the computation with experimental data directly inserted in the unitarity condition practically coincide (see Fig. 1 in \cite{ads}). What can we expect at higher energies? The profiles shown in Fig. 1 are valid so long as we can assume that the differential cross section of elastic scattering decreases exponentially with $\vert t\vert $ within the diffraction cone. They can change if this traditional behavior is no longer valid at higher energies. Slope variations of the order of 1 per cent found at 8 TeV by TOTEM \cite{totem} are still not significant.
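The geometry described above, a ring of full absorption at $b_m^2=2B\ln \zeta $ surrounding a central dip for $\zeta >1$, can be verified numerically. The sketch below is illustrative; $B=20$ GeV$^{-2}$ is the 7 TeV slope quoted in the text, while $\zeta =1.5$ is the hypothetical asymptotic value of \cite{kfk, fms}:

```python
import math

def G(b, zeta, B):
    """Inelastic profile of Eq. (ge): G = g*(2 - g), g = zeta*exp(-b^2/2B)."""
    g = zeta * math.exp(-b * b / (2.0 * B))
    return g * (2.0 - g)

B = 20.0     # diffraction cone slope near 7 TeV, GeV^-2 (from the text)
zeta = 1.5   # hypothetical asymptotic value discussed above
b_m = math.sqrt(2.0 * B * math.log(zeta))  # maximum at b_m^2 = 2B ln(zeta)

assert abs(G(b_m, zeta, B) - 1.0) < 1e-12  # full absorption on the ring b = b_m
assert G(0.0, zeta, B) == 0.75             # central dip: zeta*(2 - zeta) < 1
assert G(0.0, 2.0, B) == 0.0               # zeta = 2: complete central transparency
```

For $\zeta <1$ the same formula places $b_m$ at imaginary values, which is the analytic statement that the maximum is unphysical and the profile peaks at $b=0$.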
Only guesses can be obtained by extrapolating results at lower energies to new regimes, and experience shows how uncertain and even erroneous such extrapolations can be. First, one may assume that $\zeta $ will increase without crossing 1 but approaching it asymptotically. That would imply that its precise value at 7 TeV is still slightly lower than 1 within the present experimental errors\footnote{The value of $\sigma _{el}/\sigma _{tot}=0.257\pm0.005$ reported in \cite{totem1} would imply $\zeta =1.01$ with an uncertainty of $\sim2\%$.}. Then the inelastic profile shown in Fig. 1 for $\zeta =1$ will be quite stable, with a slow approach to complete blackness in central collisions and a steady increase of its range. This situation seems most appealing to our theoretical intuition. However, given the experimentally observed increase of the share of elastic scattering from ISR to LHC energies, it is tempting to consider another intriguing possibility: that there could be a further increase at still higher energies. Then the interaction region inevitably acquires a toroid-like shape with a dip at the very center ($b=0$). Some extrapolations of fits at lower energies are presented in \cite{kfk, fms} and theoretical speculations are discussed in \cite{roy}. The line with $\zeta =1.5$ describes the profile of the inelastic interaction region according to the asymptotic expectations predicted in \cite{kfk, fms}, where successful fits to present experimental data are reported. The new proposal of \cite{roy} is shown at $\zeta =1.8$. The dip deepens at larger $\zeta $ and reaches the very bottom, $G(0)=0$, for $\zeta =2$. Strangely enough, this situation with $\sigma _{el}= \sigma _{inel}=0.5\sigma _{tot}$ is usually referred to as the ``black disk'' limit \cite{blha}. For $\zeta =2$, protons become impenetrable at $b=0$ and undergo only elastic scattering there. This regime is discussed in Ref. \cite{trt}.
This condition results in purely backward scattering (as in head-on collisions of billiard balls). In conclusion, we can state that, by analyzing the unitarity condition, we have found a special role for the ratio of the elastic to the total cross section being equal to 1/4 in 7 TeV $pp$ interactions and have described the consequences of its energy evolution. This role could be attributed to an equal share of processes with and without exchange of quantum numbers in particle collisions. Then elastic processes constitute one half of the no-exchange share. The other half would be attributed to inelastic diffraction processes. That would lead to saturation of the Pumplin bound \cite{pump}, which states that their sum is less than or equal to half of the total cross section. However, there is still no consensus among experiments about the saturation of the bound at 7 TeV (see \cite{CMS, ALICE}). Although the accuracy of the experiments currently permits a large latitude for the inelastic diffractive cross sections, it was pointed out in \cite{lilu} that the value presented in \cite{ALICE} corresponds to perfect agreement with the above picture (i.e., the above-mentioned saturation). In general, inelastic diffraction is determined by the dispersion of the matrix elements, while only their averages enter into Eqs. (10) and (11). Some models have to be invoked in order to predict the dispersion. On a qualitative level it looks as though the absorptive structure of protons is extremely inhomogeneous \cite{fimi}. That could explain the behavior of the inelastic profile described above. \medskip {\bf Acknowledgments} \medskip I.D. is grateful for support by the RFBR grant 14-02-00099 and the RAS-CERN program.
\section{Clumping and Surface Densities in ISM Maps} \label{sec:intro} Observations of atomic and molecular gas now achieve spatial resolution of several hundred parsecs to a few kiloparsecs in large samples of nearby galaxies \citep[e.g.][]{HELFER03,WALTER08,LEROY09} or even small sets of high redshift galaxies \citep{HODGE12,TACCONI12}. Such observations probe key physical conditions such as stellar surface density, metallicity, or the interstellar radiation field. However, with a few exceptions, these observations still do not resolve individual clouds of atomic and molecular gas, which are often considered to be the fundamental units of the interstellar medium (ISM). The interpretation of these observations often utilizes predictions from models that treat the surface density of gas {\em on the scale of individual clouds}. For example the models of \citet{KRUMHOLZ09A,KRUMHOLZ09B}, \citet{WOLFIRE10}, \citet{FELDMANN12}, and \citet{NARAYANAN12} all consider the structure of individual photodissociation regions or atomic-molecular complexes to explain observations on the scale of galaxies. In these models, the surface density of an individual cloud represents a key parameter, often because it indicates the degree of shielding from the ambient radiation field. Because these models focus on cloud structure, the mapping between the readily observed average, or ``area-weighted,'' surface density at $\sim$ kpc resolution and the cloud-scale, ``mass-weighted,'' surface density represents an essential component of comparing observations and theory. This mapping is often referred to as ``clumping'' and quantified via a ``clumping factor.'' For the most part the adopted clumping factors represent guesses informed by our coarse knowledge of ISM structure and giant molecular clouds (GMCs) but are not directly based on observations. However, this factor can also be directly measured from high spatial resolution data.
In this letter we collect a large set of observations to measure the relationship between the surface density of the ISM averaged over large ($\sim$ kpc) scales and the ``true'' small scale surface density. We consider both atomic (\mbox{\rm \ion{H}{1}} ) and molecular (\mbox{\rm H$_2$} , traced by CO) gas and discuss the implications of our calculation for the comparison to models. We cast this discussion in terms of three quantities: the {\em mass-weighted} average surface density, \mbox{$\left< \Sigma \right>^{\rm M}$} , the {\em area-weighted} average surface density, \mbox{$\left< \Sigma \right>^{\rm A}$} , and a clumping factor, $c$, relating the two. The {\em mass-weighted} average surface density is \begin{equation} \label{eq:iwt} \mbox{$\left< \Sigma \right>^{\rm M}$} = \frac{\int_A \Sigma \times \Sigma~dA }{\int_A \Sigma~dA} = \frac{ \int_A \Sigma^2~dA}{\int_A \Sigma~dA} \end{equation} \noindent where $\Sigma$ is the true gas mass surface density along a line of sight and the integral occurs over some area element $A$. The denominator is simply the total mass of gas in that area, so the calculation returns the mass-weighted average surface density over the area. That is, \mbox{$\left< \Sigma \right>^{\rm M}$}\ is the column density at which most of the mass resides. Contrast this quantity with what is observed by a telescope for which a resolution element has size $A$, \begin{equation}\label{eq:awt} \mbox{$\left< \Sigma \right>^{\rm A}$} = \frac{\int_A \Sigma~dA}{\int_A~dA}~. \end{equation} \noindent That is, the telescope observes the area-weighted average surface density within the beam. These two quantities, \mbox{$\left< \Sigma \right>^{\rm M}$}\ and \mbox{$\left< \Sigma \right>^{\rm A}$} , will be the same for a smooth medium. They differ for a clumpy medium with most of the mass in small, high column density regions spread over large, low column density areas.
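The distinction between the two averages can be made concrete with a toy discretized map in which each pixel is an equal area element. This Python sketch uses invented map values for illustration; it is not the survey pipeline described below:

```python
import numpy as np

def averages(sigma):
    """Mass-weighted (Eq. 1) and area-weighted (Eq. 2) mean surface
    density for a discretized map of equal-area pixels."""
    sigma = np.asarray(sigma, dtype=float)
    mass_weighted = (sigma**2).sum() / sigma.sum()  # sum(Sigma^2) / sum(Sigma)
    area_weighted = sigma.mean()                    # sum(Sigma) / area
    return mass_weighted, area_weighted

smooth = [10.0, 10.0, 10.0, 10.0]  # uniform medium
clumpy = [40.0, 0.0, 0.0, 0.0]     # same mean, all mass in one clump
assert averages(smooth) == (10.0, 10.0)  # smooth: the two averages agree
assert averages(clumpy) == (40.0, 10.0)  # clumpy: mass-weighted average is 4x larger
```

Both toy maps contain the same total mass and the same area-weighted average; only the clumpy one hides a much higher mass-weighted surface density.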
In this case, \mbox{$\left< \Sigma \right>^{\rm A}$}\ may be much lower than \mbox{$\left< \Sigma \right>^{\rm M}$} . We define a {\em clumping factor}, $c$, to quantify this distinction as: \begin{equation} \label{eq:clumping} c \equiv \frac{\mbox{$\left< \Sigma \right>^{\rm M}$}}{\mbox{$\left< \Sigma \right>^{\rm A}$}}~. \end{equation} \noindent $c$ will be high for a clumpy, inhomogeneous medium and low for a smooth medium. It will never fall below unity. In practice, \mbox{$\left< \Sigma \right>^{\rm M}$}\ will be derived at finite resolution, so that $c$ could be more rigorously defined as $c_{a}^{b}$, the clumping factor calculated at final resolution $b$ with \mbox{$\left< \Sigma \right>^{\rm M}$}\ derived from data with intrinsic resolution $a$. In this paper, $b$ will always be 1~kpc; $a$ will vary from data set to data set. The clumping factor, $c$, \mbox{$\left< \Sigma \right>^{\rm A}$} , and \mbox{$\left< \Sigma \right>^{\rm M}$}\ give us a formalism to ask several questions related to the structure of the ISM and the interpretation of $\sim$ kpc resolution elements: \begin{itemize} \item How does the mass-weighted $\mbox{$\left< \Sigma \right>^{\rm M}$}$ relate to the area-weighted, observable $\mbox{$\left< \Sigma \right>^{\rm A}$}$ for kpc-resolution measurements of atomic and molecular gas in galaxies? What are typical clumping factors? \item Is clumping the same for atomic and molecular gas, so that the H$_2$-to-\mbox{\rm \ion{H}{1}}\ ratio at large scales may be readily interpreted in terms of cloud structure? \item Can one reliably predict the surface density of individual regions --- relevant to PDR and cloud structure calculations --- from coarse resolution measurements? 
\end{itemize} \begin{figure} \plotone{iwt_vs_awt.eps} \caption{Mass-weighted surface density at 1-kpc resolution ($y$-axis), Equation \ref{eq:iwt}, as a function of area-weighted surface density, Equation \ref{eq:awt}, at the same resolution ($x$-axis) --- note that no inclination corrections are applied, broadening the spread of apparent column densities. That is, the true surface density from which most emission arises as a function of the column density that would be measured at 1~kpc resolution. Blue points show \mbox{\rm \ion{H}{1}}\ data, red points show \mbox{\rm H$_2$}\ estimates from CO emission. Light points show individual lines of sight, dark points show averages for whole data sets. Gray lines show fixed clumping factors spaced by a factor of 2. \mbox{\rm \ion{H}{1}}\ shows good tracking between \mbox{$\left< \Sigma \right>^{\rm M}$}\ and \mbox{$\left< \Sigma \right>^{\rm A}$}\ with very low clumping factors, seldom above a factor of two. Conversely, CO exhibits a high degree of clumping, almost never less than a factor of two but often more than a factor of ten, and a scattered, non-universal relation between \mbox{$\left< \Sigma \right>^{\rm M}$}\ and \mbox{$\left< \Sigma \right>^{\rm A}$} .} \label{fig:col_vs_col} \end{figure} \begin{figure} \plotone{cfac_vs_res.eps} \caption{Clumping factor ($y$-axis), Equation \ref{eq:clumping}, as a function of the linear resolution of the data used to derive the mass-weighted surface density, \mbox{$\left< \Sigma \right>^{\rm M}$} , i.e., the native resolution of the data. The color scheme follows Figure \ref{fig:col_vs_col}. Again, \mbox{\rm H$_2$}\ traced by CO appears much more strongly clumped than \mbox{\rm \ion{H}{1}} , exhibiting a wide range of clumping factors and tending to show a high degree of clumping. \mbox{\rm \ion{H}{1}} , by contrast, appears remarkably smooth, seldom exceeding clumping factors of two even at high spatial resolution.
As in Figure \ref{fig:col_vs_col}, the difference in structure of the atomic and molecular medium and the variable clumpiness of the molecular medium are clearly evident.} \label{fig:clumping} \end{figure} \section{Data and Calculations} \label{sec:data} We assemble all readily available high spatial resolution ($\lesssim 500$~pc) CO and \mbox{\rm \ion{H}{1}}\ maps of nearby galaxies and use these to calculate \mbox{$\left< \Sigma \right>^{\rm M}$} , \mbox{$\left< \Sigma \right>^{\rm A}$} , and $c$. We make use of three recent \mbox{\rm \ion{H}{1}}\ surveys of nearby galaxies: THINGS \citep{WALTER08}, LITTLE THINGS \citep{HUNTER12}, and VLA ANGST \citep{OTT12}. We supplement these with a collection of \mbox{\rm \ion{H}{1}}\ data obtained to complement the HERACLES CO survey \citep[presented in][]{LEROY12,SCHRUBA11,SANDSTROM12} and WSRT maps of M33 \citep{DEUL87} and M31 \citep{BRINKS84}. Whenever possible, we use the naturally weighted data. We include all galaxies from these surveys that have linear resolution better than 500~pc and inclination less than $50\arcdeg$ (we exempt M31 and M33 from the inclination requirement). For the \mbox{\rm \ion{H}{1}}\ calculation we consider only regions inside $r_{25}$ with column densities $N ({\rm H}) > 10^{20}$~cm$^{-2}$. We use the integrated intensity maps provided by each survey in its data release. High spatial resolution CO data remains harder to come by than high resolution \mbox{\rm \ion{H}{1}}\ data because nearby dwarf galaxies tend to be faint in CO emission and the sensitivity of mm-wave telescopes was limited before the advent of ALMA. This scarcity leads us to assemble a heterogeneous collection of high resolution CO.
This includes the MAGMA \citep{WONG11} and NANTEN \citep{FUKUI99} surveys of the LMC, the IRAM 30-m survey of CO in M31 \citep{NIETEN06}, the combined BIMA and FCRAO survey of M33 \citep{ROSOLOWSKY07}, ALMA science verification data on the Antennae galaxies, and a handful of the brightest and nearest galaxies from BIMA SONG \citep[NGC 2903, 3627, 5194, and 6946]{HELFER03}. We supplement these with two new datasets: high resolution CARMA mapping of select fields in M31 (PI: A. Schruba, Schruba et al. in prep.) and the Plateau de Bure Arcsecond Whirlpool Survey (PAWS, Schinnerer et al. submitted, Pety et al. accepted) of M51. Except for the ALMA Antennae data, all of these data sets target the CO $J=1\rightarrow0$ line and include (sometimes exclusively) short-spacing data; the Antennae data target the CO $J=3\rightarrow2$ and CO $J=2\rightarrow1$ transitions\footnote{For purposes of calculating surface densities, we assume these lines to be thermalized. In actuality, they are likely somewhat subthermal, but this uncertainty is likely offset by a somewhat lower $\alpha_{\rm CO}$ in the Antennae. In any case, these conversion factors effectively divide out when calculating $c$.}. Note that we have multiple data sets on several galaxies (M31, the LMC, M51) and treat each data set, rather than each galaxy, as a separate measurement. For comparison, we also calculate $c$ from the composite CO survey of the Milky Way by \citet{DAME01}, considering only intermediate latitude ($30\arcdeg > |b| > 5\arcdeg$) gas. We smooth their data with a $1.25\arcdeg$ kernel to minimize sampling issues and integrate only over areas covered by the surveys. {\em Calculating Moment Zero Maps:} Because of the $\Sigma^2$ term in Equation \ref{eq:iwt}, \mbox{$\left< \Sigma \right>^{\rm M}$}\ is not robust to the inclusion of noise in the calculation. We must therefore mask the data before carrying out our calculations.
This is mostly an issue for the CO data, as the integrated intensity maps provided by the \mbox{\rm \ion{H}{1}}\ surveys have sufficient S/N for our purposes. For the CO, we create new masks and re-derive integrated intensity maps for each data set. Typically we estimate the noise from the empty regions of the cube, identify a core of high-significance emission (often two successive channels with $S/N > 5$), and expand this high-significance core to include fainter but still significant emission. We integrate the masked data cube to produce a moment 0 map, which we use in further analysis. For the LMC maps from MAGMA and NANTEN and the M33 map, the signal-to-noise ratio at the native resolution is too low to yield a high-quality masked integrated intensity map. Therefore we convolve the data to a slightly worse resolution before masking and analysis. This exercise produces maps well-suited to derive \mbox{$\left< \Sigma \right>^{\rm M}$} , but the process of masking at the native resolution does remove the possibility of picking up any contribution from a low S/N diffuse component (e.g., Pety et al., accepted). Fundamentally, this is a limitation of the data themselves, and future, more sensitive surveys capable of detecting diffuse emission over individual lines of sight will improve this situation and quantify the contribution of faint, pervasive CO emission to the total molecular gas budget\footnote{This effect matters but does not appear to dominate our results. For example, if we add a pervasive CO component to the PAWS M51 data with magnitude $\sim 0.5$ times our sensitivity --- an aggressive scenario --- then $c$ drops from $\approx 6$ to $\approx 5.1$ over the region that we consider. Fainter regions, which we avoid, will be more affected.}. When sampling the CO emission, we restrict ourselves to areas that include significant emission within the mask.
Roughly, our criterion is that in the map smoothed to 1~kpc resolution, the average brightness is such that we could have detected that line of sight at the original, higher resolution. That is, we consider areas where our sensitivity at high resolution is sufficient to detect the average brightness. This allows us to avoid ``edge'' or ``clipping'' effects in which only one small patch of bright emission is included in the beam, leading to high \mbox{$\left< \Sigma \right>^{\rm M}$}\ but low \mbox{$\left< \Sigma \right>^{\rm A}$} . Because these ``edges'' are mostly present (within $r_{25}$) in the CO and not \mbox{\rm \ion{H}{1}}\ maps, including them would make our conclusions more extreme. {\em Deriving \mbox{$\left< \Sigma \right>^{\rm M}$} , \mbox{$\left< \Sigma \right>^{\rm A}$} , and $c$:} We assume that 21-cm and CO emission linearly trace the surface density of atomic and molecular gas, i.e., $\Sigma \propto I$, and calculate \mbox{$\left< \Sigma \right>^{\rm M}$}\ and \mbox{$\left< \Sigma \right>^{\rm A}$}\ following Equations \ref{eq:iwt} and \ref{eq:awt}. We convert from intensity to surface density adopting a fixed $\alpha_{\rm CO} = 4.35$~\mbox{M$_\odot$ pc$^{-2}$ (K km s$^{-1}$)$^{-1}$}\ and $N \left( \mbox{\rm \ion{H}{1}} \right)~\left[ {\rm cm}^{-2} \right] = 1.823 \times 10^{18} I_{\rm HI}~\left[{\rm K~km~s^{-1}}\right]$. Note that these factors divide out when calculating $c$. To calculate \mbox{$\left< \Sigma \right>^{\rm A}$} , we smooth from the native resolution to 1~kpc using a normalized Gaussian kernel. To calculate \mbox{$\left< \Sigma \right>^{\rm M}$} , we calculate $\Sigma^2$ at the native resolution, convolve this map to 1~kpc resolution using a normalized Gaussian kernel, and then divide that map by \mbox{$\left< \Sigma \right>^{\rm A}$}\ following Equation \ref{eq:iwt}. 
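The smoothing procedure just described can be sketched in one dimension. The kernel construction below is an illustrative assumption (a truncated, normalized Gaussian) and the map values are invented; for a perfectly smooth map the mass-weighted map recovers the true surface density everywhere, so the implied clumping factor is unity:

```python
import numpy as np

def weighted_maps(sigma, fwhm_pix):
    """Convolve Sigma and Sigma^2 with a normalized Gaussian kernel and
    divide, following Eq. (1): <Sigma>_M = conv(Sigma^2) / conv(Sigma)."""
    s = fwhm_pix / np.sqrt(8.0 * np.log(2.0))  # FWHM -> Gaussian sigma
    half = int(4.0 * s) + 1                    # truncate the kernel at ~4 sigma
    x = np.arange(-half, half + 1)
    kern = np.exp(-x**2 / (2.0 * s**2))
    kern /= kern.sum()                         # normalized kernel
    awt = np.convolve(sigma, kern, mode="same")           # area-weighted map
    mwt = np.convolve(sigma**2, kern, mode="same") / awt  # mass-weighted map
    return mwt, awt

sigma = np.full(200, 5.0)  # a perfectly smooth toy map
mwt, awt = weighted_maps(sigma, fwhm_pix=20)
assert np.allclose(mwt, 5.0)            # smooth gas: <Sigma>_M equals Sigma
assert np.all(mwt / awt >= 1.0 - 1e-9)  # implied clumping factor never below 1
```

Adding a narrow spike to the toy map drives the ratio of the two maps, and hence the clumping factor, well above unity near the spike.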
We record \mbox{$\left< \Sigma \right>^{\rm M}$} , \mbox{$\left< \Sigma \right>^{\rm A}$} , and the clumping factor, $c$, for a hexagonally-spaced set of Nyquist-sampled (at 1~kpc resolution) points. After this exercise, we have $\approx 50,000$ data points from 46 galaxies for \mbox{\rm \ion{H}{1}}\ and $\approx 1,000$ data points from 15 data sets in 8 galaxies for CO. As these numbers make clear, CO data represent the limiting reagent in this calculation, though thanks to ALMA their prospect for short-term improvement is excellent. \section{Results} \label{sec:results} \begin{deluxetable}{lcc} \tablecaption{Clumping Factors for ISM Maps} \tablehead{ \colhead{Data Set} & \colhead{res.} & \colhead{$\left< c \right>$} \\ \colhead{} & \colhead{[pc]} & \colhead{} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} } \startdata CO Data & & \\ \hline M31 IRAM 30-m & 87 & $3.7_{-1.3}^{+1.6}$ \\ M31 CARMA ``Brick 9'' & 22 & $8.4_{-2.0}^{+3.4}$ \\ M31 CARMA ``Brick 15'' & 21 & $6.4_{-1.4}^{+1.3}$ \\ M33 BIMA+FCRAO & 98 & $13_{-5.7}^{+6.9}$ \\ LMC NANTEN & 58 & $33_{-12}^{+15}$ \\ LMC MAGMA & 15 & $31_{-13}^{+8.0}$ \\ M51 PAWS & 39 & $6.0_{-2.7}^{+3.4}$ \\ NGC 2903 BIMA SONG & 264 & $4.2_{-1.5}^{+4.2}$ \\ NGC 3627 BIMA SONG & 295 & $2.7_{-0.9}^{+2.2}$ \\ M51 BIMA SONG & 200 & $3.5_{-1.6}^{+2.4}$ \\ NGC 6946 BIMA SONG & 145 & $6.2_{-2.2}^{+7.1}$ \\ Antennae CO(2-1) North & 135 & $13_{-7.4}^{+6.7}$ \\ Antennae CO(2-1) South & 127 & $7.1_{-2.7}^{+6.8}$ \\ Antennae CO(3-2) North & 95 & $7.1_{-9.6}^{+12.4}$ \\ Antennae CO(3-2) South & 87 & $9.8_{-2.9}^{+5.6}$ \\ Local Milky Way ($30\arcdeg > \left|b\right| > 5\arcdeg$) & \nodata & $\approx 6$ \\ \hline \\ HI ensemble & & $1.26_{-0.15}^{+0.46}$ \\ ... 0--250 pc resolution & & $1.34_{-0.17}^{+0.32}$ \\ ... 250--500 pc resolution & & $1.19_{-0.11}^{+0.18}$ \\ \enddata \label{tab:clumping} \end{deluxetable} Figures \ref{fig:col_vs_col} and \ref{fig:clumping} and Table \ref{tab:clumping} report our results.
Figure \ref{fig:col_vs_col} shows \mbox{$\left< \Sigma \right>^{\rm M}$}\ as a function of \mbox{$\left< \Sigma \right>^{\rm A}$}\ for CO (red) and \mbox{\rm \ion{H}{1}}\ (blue) data. Figure \ref{fig:clumping} plots the clumping factor, $c$, as a function of the linear resolution of the original data set used to calculate \mbox{$\left< \Sigma \right>^{\rm M}$} . In both figures, light points show individual lines of sight and dark, solid points show averages for whole data sets. Error bars on the whole galaxy points in Figure \ref{fig:clumping} show the $1\sigma$ range for that data set. Table \ref{tab:clumping} reports the native resolution (after convolution to increase the S/N in M33 and the LMC) and median clumping factor with $1\sigma$ range for each CO data set. We report results for the ensemble of \mbox{\rm \ion{H}{1}}\ data, which Figures \ref{fig:col_vs_col} and \ref{fig:clumping} demonstrate to be uniform. These figures illustrate three points: \begin{enumerate} \item {\em \mbox{\rm \ion{H}{1}}\ and \mbox{\rm H$_2$}\ (traced by CO) exhibit different clumping factors.} Both Figure \ref{fig:col_vs_col} and Figure \ref{fig:clumping} show that essentially {\em all} of our CO data is more highly clumped than {\em all} of our \mbox{\rm \ion{H}{1}}\ data. The median clumping factor for a CO data set is $c=7$, while the median clumping factor for an \mbox{\rm \ion{H}{1}}\ data set is $c=1.26$. The specific cases of M33 and M31 illustrate this point cleanly. The M31 IRAM CO map shows clumping factor $\approx 4$; the M33 BIMA+FCRAO map shows clumping factor $\approx 13$. Both the M31 and M33 \mbox{\rm \ion{H}{1}}\ maps show clumping factor $\approx 1.3$. As a direct result, a ratio of H$_2$-to-\mbox{\rm \ion{H}{1}}\ surface densities obtained at large scales does not translate trivially into a ratio of surface densities at small scales.
The assumption that the large scale surface density in galaxies reflects the small scale surface density in the same way for \mbox{\rm H$_2$}\ and \mbox{\rm \ion{H}{1}}\ underlies the application of the \citet{KRUMHOLZ09A} model to explain \mbox{\rm H$_2$} -to-\mbox{\rm \ion{H}{1}}\ ratios in galaxies. Though the physics of the model appear to apply successfully to individual clouds or regions \citep{BOLATTO11,LEE12}, we suggest that more than a single ``clumping'' factor is necessary to make a rigorous comparison of the model to observations of large parts of galaxies. Our calculation does not invalidate the kpc-resolution ratio of $\Sigma_{\rm H2}/\Sigma_{\rm HI}$ as an interesting measurement. It simply suggests that this be viewed as a measure of mass balance among ISM phases over a large area and not indicative of small-scale ISM structure. \item {\em \mbox{\rm \ion{H}{1}}\ is very smooth.} This conclusion leaps out of both figures. Even with high linear resolution, \mbox{\rm \ion{H}{1}}\ column density maps remain smooth and only weakly clumped. This can be explained by most 21-cm emission originating not from clumped bound clouds, but from a diffuse medium with a high volume filling factor. The \mbox{\rm \ion{H}{1}}\ clumping factor does depend weakly on scale; a reasonable functional form is $c = 392~\left(l_{\rm pc} + 100\right)^{-1.27} + 1$, where $l_{\rm pc}$ is the (FWHM) linear resolution in parsecs. \item {\em CO is clumpy with a wide range of $c$, making it hard to predict \mbox{$\left< \Sigma \right>^{\rm M}$}\ from \mbox{$\left< \Sigma \right>^{\rm A}$} .} In contrast to \mbox{\rm \ion{H}{1}} , \mbox{\rm H$_2$}\ traced by CO emission appears clumpy with a wide range of $c$. The median among all data sets is $\approx 7$ with a factor of $2$--$3$ rms scatter among measurements.
This is also close to the value that we estimate for the Solar Neighborhood from intermediate latitude gas, but we caution that we expect $c$ to change with improving native resolution of the data. Because we consider only bright regions, this represents a conservative estimate; the edges and faint regions that we exclude tend to have high $c$. The molecule-poor systems (M33 and the LMC) in the sample show the highest $c$, perhaps because they contain more isolated clouds and perhaps because their somewhat low metallicities lead any diffuse H$_2$ component to emit less in CO (below some metallicity the clumping of CO emission and $\Sigma_{\rm H2}$ will dramatically diverge). \\ \end{enumerate} \noindent We also note two less secure points implied by the data but requiring more aggressive assumptions about the CO-to-H$_2$ conversion factor: \begin{enumerate} \setcounter{enumi}{3} \item {\em The total (\mbox{\rm H$_2$} $+$ \mbox{\rm \ion{H}{1}} ) clumping factor must vary significantly among and within galaxies.} We can only calculate the clumping factor for the total (\mbox{\rm H$_2$} $+$ \mbox{\rm \ion{H}{1}} ) gas in M31, M33, and the LMC. In each case the median $c$ is low ($\sim 1.3$, $\sim 1.3$, and $\sim 1.5$), resembling that of the \mbox{\rm \ion{H}{1}} . This is not surprising because for fixed $\alpha_{\rm CO}$, the \mbox{\rm \ion{H}{1}}\ mass exceeds the H$_2$ mass in these galaxies by more than an order of magnitude, with H$_2$ making up most of the gas along only a small fraction of the lines of sight at our resolution (M31 is more molecule-rich than the other two, but still \mbox{\rm \ion{H}{1}}\ dominated). Generally, we expect that across most of the area in dwarf galaxies, which tend to be low-metallicity and \mbox{\rm \ion{H}{1}} -dominated, $c$ will resemble the $\sim 1.3$ that we measure for \mbox{\rm \ion{H}{1}} . 
The outer parts of most spirals also tend to be \mbox{\rm \ion{H}{1}}\ dominated and should show similar values, while in the molecule-rich central parts of actively star-forming galaxies the values will more closely resemble the higher $c$ that we find for M51, NGC 2903, NGC 3627, and NGC 6946. \item {\em CO exhibits a wide range of \mbox{$\left< \Sigma \right>^{\rm M}$} .} In contrast to the common assumption that CO emerges from a population of fixed surface density clouds, the CO data in Figure \ref{fig:col_vs_col} span two orders of magnitude in \mbox{$\left< \Sigma \right>^{\rm M}$} . The figure provides no good evidence that CO emerges from a fixed \mbox{$\left< \Sigma \right>^{\rm M}$}\ at high spatial resolution. In fact, the highest resolution data sets span roughly an order of magnitude (for fixed $\alpha_{\rm CO}$) in \mbox{$\left< \Sigma \right>^{\rm M}$}\ from the LMC ($\sim 25$~M$_\odot$~pc$^{-2}$) to M51 ($\sim 330$~M$_\odot$~pc$^{-2}$). The spread in \mbox{$\left< \Sigma \right>^{\rm M}$}\ may arise from sources other than the cloud scale surface density: variations in inclination and the conversion factor, superposition of fixed surface density clouds, the convolution of bound clouds with a diffuse background, or a lack of spatial resolution matched to individual clouds. However, our best guess is that the large range in apparent \mbox{$\left< \Sigma \right>^{\rm M}$}\ visible in Figure \ref{fig:col_vs_col} in fact reflects a systematic dependence of cloud surface density on environment. This reinforces the thorough analysis of Hughes et al. (accepted), who compare CO maps of M51 (PAWS), the LMC (MAGMA), and M33 and conclusively demonstrate fundamental differences in the volume and surface density PDFs among and within the three galaxies \citep[see also the spread in Milky Way $\Sigma$ discussed by][]{BOLATTO13}.
\end{enumerate} \noindent Our calculations show that the structures of the molecular and atomic ISM are more complex than has been assumed while vetting recent models. We suggest the ``clumping factor'' approach defined in \S \ref{sec:intro} to quantify this structure and aid interpretation of lower resolution observations. With ALMA now able to easily obtain high resolution, high sensitivity ISM maps we expect such calculations to be feasible in many systems over the coming years. \acknowledgments We thank the referee for a constructive report and Scott Schnee and Mark Krumholz for feedback on drafts. We acknowledge the BIMA SONG, LITTLE THINGS, and VLA ANGST collaborations for making their data public. We thank IRAM for making the moment 0 map of M31 public. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00003.SV. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. AB acknowledges partial support from grants NSF AST-0838178, NSF AST-0955836, and a Cottrell Scholar award from the Research Corporation for Science Advancement. KS is supported by a Marie Curie International Incoming fellowship. AH acknowledges funding from the Deutsche Forschungsgemeinschaft via grants SCHI 536/5-1 and SCHI 536/7-1 as part of the priority program SPP 1573 'ISM-SPP: Physics of the Interstellar Medium.' JP was partially funded by the grant ANR-09-BLAN-0231-01 from the French {\it Agence Nationale de la Recherche} as part of the SCHISM project.
\section{#1}} \newcommand{\PRsection}[1]{} \pagestyle{plain} \begin{document} \title{Pressure-Driven Evaporative Cooling in Atom Guides} \author{Spencer E. Olson} \affiliation{ Air Force Research Laboratory, Space Vehicles Directorate, \\ 3550 Aberdeen Ave. SE, Kirtland Air Force Base, NM 87117-5776 } \author{Georg Raithel} \affiliation{ Physics Department, University of Michigan\\ 2477 Randall Lab, 450 Church Street, Ann Arbor, Michigan 48109-1120 } \author{Andrew J. Christlieb} \affiliation{ Mathematics Department, Michigan State University\\ D304 Wells Hall, East Lansing, Michigan 48824-1027 } \date{\today} \begin{abstract} We study steady-state evaporation in an atom guide via Monte Carlo simulations. The evaporation surface follows a specific profile as a function of longitudinal guide location. We demonstrate that the choice of evaporation profile significantly impacts the performance of the evaporation. Our simulations also demonstrate a significant performance boost in the evaporation when using a longitudinally compressed guide. We show that for a purely pressure-driven atom beam, it should be possible to reach degeneracy within a $0.5~\m$ guide for experimentally feasible, albeit challenging, loading conditions. \end{abstract} \keywords{Guided atom beam, Evaporative cooling, Evaporation surface, DSMC} \pacs { 64.70.fm, 03.75.Pp } \maketitle \printgloss[symb]{symbols} \ifthenelse{\boolean{publish}}{ \input{paper.sym.bbl.tex} }{} \PRsection{Introduction} The past several years have seen significant research to develop a continuous-wave (CW) atom laser~\cite{Robins:2013:alp}. Analogous to the impacts the CW optical laser had on precision measurements, a CW atom laser is expected to impact precision atom-based metrology via longer coherence lengths and greater continuity of temporal measurement coverage. A CW laser is identified by a continuously loaded and leaked macroscopic occupation of a quantum wave-function. 
The macroscopic-occupation state for an atom laser is a Bose-Einstein Condensate (BEC), where ultra-cold atoms can condense into the single ground state of a reservoir trap. Thus, in order for a CW atom laser to be established, atoms must be continuously cooled to sub-microkelvin temperatures, transported, and loaded into the BEC. One of the primary tools for obtaining sub-microkelvin temperatures in atomic systems is forced evaporative cooling~\cite{Ketterle1996a}. Evaporative cooling consists of removing the most energetic particles of a system, thereby lowering its total energy. With forced evaporative cooling, the evaporation threshold is strategically lowered in order to maintain the energy-removal rate. As thermal energy is removed from an ensemble of atoms, the first-order coherence within the sample increases, often improving the signal-to-noise ratio and precision of atomic measurements. This technique was key to the first formation of Bose-Einstein condensates and has proven useful in many other cold-atom applications. \begin{figure}[htb] \centerline{ \includegraphics[width=\columnwidth]{concept} } \caption[Conceptual figure] { \label{fig:concept} Continuous evaporation is distributed in space and maintained in time by continually narrowing an effective evaporative surface in towards the center of the atom beam. } \end{figure} Some approaches for developing a CW atom laser attempt to establish a steady-state evaporative cooling process along the longitudinal direction of a guided cold atom beam. In this manner, an atomic beam is transferred into and transported by a magnetic guide~\cite{Olson:cpe2006} where the beam is cooled as it travels through the guide~\cite{Lahaye2005a}. Beam temperature is lowered using an evaporation threshold that varies as a function of longitudinal position. Fig.~\ref{fig:concept} shows a conceptual picture of distributed evaporative cooling. 
An atomic beam, injected at $z=0$, travels down the length of a magnetic guide. During this travel, an evaporation edge gradually removes more and more atoms from the beam. By maintaining collisionality throughout the guide, the beam is locally thermally equilibrated at all longitudinal positions, resulting in a continual decrease of the beam temperature~\cite{Mandonnet:eca2000,Lahaye:2006:kec}. The goal of this paper is to demonstrate that a realistic and efficient strategy exists for establishing steady-state forced evaporation along the length of an atom beam. While the use of a supersonic beam of cold atoms is usually justified in order to inhibit thermal shortcuts between longitudinally distant atoms, we demonstrate that a pressure-driven flow has a unique advantage for establishing steady-state cooling: not only are thermal shortcuts eliminated via collisionally viscous flow, but evaporative processes can be tuned within significantly shorter distances. We study a basic set of longitudinally varying evaporation surfaces that might be applied in an experiment. Simulation results are presented to depict the relative performance among the various strategies, and a clear relative optimum is demonstrated. By employing a method known as Direct Simulation Monte Carlo (DSMC)~\cite{Bird:mgd1994}, we simulate the evaporative cooling process in the magnetic guide. Simulations offer a reliable preview of the performance of a particular evaporation design. The DSMC method has been used successfully to simulate cold-atom collision processes such as evaporative cooling~\cite{Wu:dsec1996,Mandonnet:eca2000} and s/d-wave collision statistics~\cite{Wade:dsmc:2011}. In this work, we use a new gridless DSMC algorithm that provides uniform accuracy independent of density modulations. Our algorithm, its advantages, and its testing are discussed thoroughly in Ref.~\cite{Olson:gdsmc:2008}. 
Simulations of the atom guide presented here were done using a parallel algorithm discussed in Ref.~\cite{Olson:pid:2010}. \PRsection{Key Concepts} There are two key ingredients for establishing an efficient temperature gradient along the length of an atom guide using evaporative cooling: rethermalization and thermal isolation. Rethermalization occurs as a locally disturbed gas undergoes collisions to reach local thermal equilibrium. Thermal isolation in a guide, on the other hand, pertains to disallowing direct energy exchange between longitudinally distant groups of atoms. As an example of rethermalization, consider a uniform gas of $^{87}$Rb atoms with initial conditions given by $n_{\rm i} = \sci{1.25}{8}~\cm^{-3}$ and $T_{\rm i} = 100~\uK$, where $n_{\rm i}$ and $T_{\rm i}$ are the initial number density and temperature of the sample, respectively. The velocity distribution of this gas can be represented by a normal distribution as shown in the top dashed curve in Fig.~\ref{fig:retherm}. \begin{figure}[htb] \centerline{ \includegraphics[width=\columnwidth]{vbin-waterfall} } \caption[Rethermalization for an initially velocity-truncated distribution] { \label{fig:retherm} The velocity distribution of a $100~\uK$ sample of $^{87}$Rb atoms is truncated and allowed to rethermalize via collisions. Approximately $20\%$ of the initial atom population is removed as the wings of the thermal distribution are truncated. After roughly $10$ collisions per atom, the velocity distribution deviates, on average, by less than $(1/100)\%$ from a thermal distribution with $T = 47~\uK$. } \end{figure} A disturbance is created by suddenly removing the $20\%$ most energetic particles such that the velocity distribution is truncated as approximately represented by the highest solid curve in Fig.~\ref{fig:retherm}. As shown in Fig.~\ref{fig:retherm}, the rethermalization process begins immediately with the first collisions after the truncation. 
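The energy bookkeeping behind this example can be illustrated with a simple free-gas Monte Carlo estimate (ours, not the DSMC method used in the paper): sample Maxwell-Boltzmann velocities, discard the $20\%$ most energetic atoms, and infer the temperature the remainder would reach after rethermalizing. This one-shot truncation yields a higher temperature than the $47~\uK$ of the full simulation, where collisions keep repopulating the truncated wings so that evaporation continues during rethermalization.

```python
import random
import statistics

kB = 1.380649e-23              # Boltzmann constant, J/K
m = 86.909 * 1.66054e-27       # 87Rb mass, kg
Ti = 100e-6                    # initial temperature, K

random.seed(1)
sigma = (kB * Ti / m) ** 0.5   # 1-D thermal velocity spread, m/s

# Kinetic energies of a Maxwell-Boltzmann sample, sorted ascending
ke = sorted(
    0.5 * m * sum(random.gauss(0.0, sigma) ** 2 for _ in range(3))
    for _ in range(200_000)
)

# Truncate once: discard the 20% most energetic atoms
kept = ke[: int(0.8 * len(ke))]

# After rethermalization, the temperature follows from the mean kinetic energy
Tf = (2.0 / 3.0) * statistics.fmean(kept) / kB
print(f"{Tf * 1e6:.1f} uK")    # about 67 uK for this idealized one-shot case
```

The gap between this free-gas estimate and the simulated value is a measure of how much extra energy the ongoing, collision-fed evaporation removes.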
Each collision quickly brings the velocity distribution of the gas closer and closer to a Boltzmann distribution until the sample reaches thermal equilibrium. Because energy was removed via the truncation, the resulting Boltzmann distribution is narrower and more peaked, representing a colder collection of atoms. Depending on the final error admissible, Fig.~\ref{fig:retherm} shows that the rethermalization time is in the range of $2$--$10 \symb{gamma_coll}^{-1}$ where $\symb{gamma_coll}$ is the average collision rate~\cite{Wu:dsec1996,Snoke:pdb1989,Monroe:mcs1993}. Thermal isolation in the guide direction, $z$, is critical to ensure that hotter, upstream atoms cannot reach, collide with, and heat downstream portions of the atomic beam. Problematic longitudinal heat conduction arises from (1) low stream densities that result in collisionless flow or (2) high-angular-momentum trajectories of beam atoms through the guide~\cite{Meppelink:ehf:2009}. Avoiding heat conduction is made possible first by maintaining collisionality and second by removing atoms that reach large guiding radii. High collisionality ensures that atoms exchange momentum only with other atoms of similar kinetic energy in nearly the same portion of the beam. By choosing an injection velocity $\symb{vlong}_0$ on the same order as the initial thermal velocity, flow is pressure driven instead of being primarily due to the inertia of the incident flow. Evaporative removal of all atoms outside of a $z$-dependent, critical radius ensures that high-angular-momentum states are disallowed. \PRsection{Evaporation Function} Assuming that a particular temperature gradient can be established, the goal becomes to identify the type of gradient which results in the highest final phase-space density \symb{nl3} where $n$ is the number density and \symb{l_th} is the thermal de~Broglie wavelength given by \[ \symbi{l_th} = \sqrt{ \frac{2\pi\hbar^{2}} {\symbi{mass} \symbi{K_B} T} } \; . 
\] Neglecting the longitudinal potential as a degree of freedom, each particle in the system has an average energy equal to $5\symb{K_B}T/2$. For simplicity, we assume that the edge of the near-thermal spatial distribution is at a transverse potential energy of $15\symb{K_B}T/2$. The exact value of this assumption is arbitrary but must result in a nearly total inclusion of the number and kinetic-energy distributions. For the choice of $15\symb{K_B}T/2$, $99.9\%$ of the distribution with $99.5\%$ of the kinetic energy lies within this range. By removing atoms that reach $U(\vec{\bf x}) = 15\symb{K_B}T/2 + \Delta$, where $\Delta$ is the minimum energy in the center of the trap given by $\Delta = g_{F} m_{F} \mu_{B} \symb{B0}$, we can thus force the guide to support a temperature no greater than $T$. To establish a particular temperature gradient through the guide, we define the evaporation threshold function \symb{evap_func} as the potential energy $U(\vec{\bf x})$ at which atoms are removed at a longitudinal location $z$ in the guide. This results in an atomic distribution that increasingly narrows as the beam progresses down the guide. Fig.~\ref{fig:atoms} shows such an atom distribution and the corresponding evaporation threshold surface that causes this narrowing, where $L$ is the length of the guide over which the evaporation threshold changes. \begin{figure}[htb] \centerline{ \includegraphics[width=\columnwidth]{particles} } \caption[Picture of atoms in the forced evaporation guide] { \label{fig:atoms} Snapshot of the guided atom beam under the influence of forced evaporative cooling. The 3D surface indicates the location of the evaporation surface. 
} \end{figure} To evaluate the effect of the evaporation threshold function $\symb{evap_func}$ on the final phase-space density $\symb{nl3}$, we explore a (somewhat arbitrary) basic set given by: \begin{equation} \label{eq:evap_func} \symb{evap_func} \sim \left\{ \begin{array}{l} T_{\rm f} + \left(T_{\rm i} - T_{\rm f}\right) \left[1 - \left(z/L\right)\right]^{1/2}, \\ T_{\rm i} + \left(T_{\rm f} - T_{\rm i}\right) \left(z/L\right)^{2}, \\ T_{\rm i} + \left(T_{\rm f} - T_{\rm i}\right) \left(z/L\right), \\ T_{\rm i} + \left(T_{\rm f} - T_{\rm i}\right) \left(z/L\right)^{1/2}, \\ T_{\rm f} + \left(T_{\rm i} - T_{\rm f}\right) \left[1 - \left(z/L\right)\right]^{2} \end{array} \right\} \,,\quad 0 \leq z \leq L \end{equation} Each of the basic $\symb{evap_func}$ functions in Eq.~\ref{eq:evap_func}, shown in Fig.~\ref{fig:evap-types}, serves to evaluate a particular strategy for evaporative cooling. Each represents a different level of compromise between the need to establish thermal isolation (by removing high energy atoms earlier) and the need to achieve a high density in the downstream portions of the guide. \begin{figure}[htb] \centerline{ \includegraphics[width=\columnwidth]{evap-types-T} } \caption[Evaporation strategies of interest] { \label{fig:evap-types} Evaporation strategies of interest for this paper. The curves here depict the radial position $\rho$ at which atoms are removed from the system. By lowering the barrier in the forward direction, forced evaporative cooling is imposed. The curves (a)--(e) represent the lines of \symb{evap_func} (Eq.~\ref{eq:evap_func}), respectively, in order. For $z \leq 0$ and $z \geq L$, $\symb{evap_func}$ is held constant. These strategies provide a basic set of evaporation-surface types that might be used experimentally. 
} \end{figure} As the atoms travel down the guide, the evaporation process depletes the atom number and can eventually stall (on the timescale of the guide traversal) as the collision rate decreases. To prevent the forced evaporation from stalling, the atoms are magnetically squeezed to enhance the collision rate. In a two-wire magnetic guide, this is done by simply decreasing the separation between the two wires as a function of $z$. The computed magnetic field used for this work corresponds to two parallel currents ($I = 150~\amps$) with a separation of $5.175~\mm$ at $z=0$ linearly decreasing to $4.175~\mm$ at $z = 50~\cm$. This separation results in gradients similar to the apparatus in Ref.~\cite{Olson:cpe2006}. Compression enhances the density and thus helps to maintain the collision rate as the atom number diminishes through the evaporative cooling process. Compression as a function of longitudinal position in the guide is analogous to temporal compression performed in standard BEC formation~\cite{Hess:ecm1986, Monroe:mcs1993}. It should be noted that the work done in Ref.~\cite{Mandonnet:eca2000} simulated evaporation in a non-compressed magnetic guide. After forced evaporation ends at $z = L = 40~\cm$, the atoms continue until they reach an elastically reflecting barrier at $z = 50~\cm$. As described in Ref.~\cite{Olson:cpe2006}, this wall could be created by a blue-detuned sheet of light, such that a three-dimensional trap is established wherein a condensate can form. \PRsection{Configuration} To simulate the atom guide, $^{87}$Rb atoms are injected at $z=0$ into an initially empty guide at a rate of $\sci{3}{9}~\s^{-1} \Delta t$ per timestep $\Delta t$ with an average stream velocity of $\symb{vlong}=0$ and a temperature $T = 100~\uK$. For these simulations, $L = 40~\cm$. 
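The degeneracy figure of merit \symb{nl3} defined in the previous section is straightforward to evaluate numerically; a minimal helper using CODATA constants is sketched below. The density used in the final line is the value from the rethermalization example, chosen purely for illustration (it is not the guide density):

```python
import math

h = 6.62607015e-34              # Planck constant, J s
kB = 1.380649e-23               # Boltzmann constant, J/K
m_Rb87 = 86.909 * 1.66054e-27   # 87Rb mass, kg

def lambda_th(T):
    """Thermal de Broglie wavelength (m) at temperature T (K)."""
    return h / math.sqrt(2.0 * math.pi * m_Rb87 * kB * T)

def phase_space_density(n, T):
    """Dimensionless n * lambda_th^3 for number density n (m^-3)."""
    return n * lambda_th(T) ** 3

# At the 100 uK injection temperature the wavelength is ~19 nm, so the
# density must grow enormously before n*lambda^3 approaches unity.
print(lambda_th(100e-6))                      # ~1.87e-8 m
print(phase_space_density(1.25e14, 100e-6))   # ~8e-10
```

The steep $T^{-3/2}$ scaling of $\lambda_{\rm th}^3$ is why cooling, rather than compression alone, dominates the gain in \symb{nl3} along the guide.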
As allowed by the Boltzmann equation, each simulated particle is scaled to represent $\symb{FN}\ge1$ rubidium atoms, such that each simulated particle has a cross section $\symb{FN} \symb{scatT}$ where \symb{scatT} is the total cross section of a single particle. Scaled representative particles decrease the computational burden and are typical in gas dynamics simulations. The figure of merit for choosing \symb{FN} is the ratio of the average distance between colliding particles to the mean free path~\cite{Olson:gdsmc:2008}. A large \symb{FN} results in higher values of this ratio. If this ratio is too large ($\gtrsim 1$), the collisionality of the simulation is significantly diminished. For this work, \symb{FN} was chosen as high as possible ($\sci{5}{3}$) without significantly diminishing the collision processes. It should be noted that a decrease in \symb{FN} will only result in a more collisional system and a larger temperature gradient along the length of the guide. These simulations therefore represent a lower bound of the evaporative-cooling performance. By using a very low input stream velocity, it is expected that the pressure driven flow will allow a very short guide that is still able to maintain a strong temperature gradient. The input source is assumed to be in the $\left|\symb{F}=2,\symb{mF}=2\right>$ stretched state. Thus, atoms are guided by a potential $U(\vec{\bf x})$ given by \begin{equation} \label{eq:U} U(\vec{\bf x}) = m g \vec{\bf x}\cdot \hat{\bf y} + \symb{mu_B} \left|\vec{\bf \symb{B0}} + \symb{B}\right| \end{equation} where $\symb{B}$ is the magnetic field resulting from two parallel wire currents and $\symb{B0}$ is a small longitudinal bias (0.5~\G) to prevent non-adiabatic Majorana spin flips. 
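The guiding potential of Eq.~\ref{eq:U} can be evaluated directly for the two-wire geometry quoted above. The sketch below is ours, not the simulation code: it assumes the wires run along $z$ and are separated along $x$, that gravity acts along $-y$, and that each wire produces the textbook infinite-straight-wire field; for the stretched state $g_F m_F = 1$, so the magnetic moment is one Bohr magneton.

```python
import math

mu0 = 4e-7 * math.pi        # vacuum permeability, T m/A
muB = 9.274e-24             # Bohr magneton, J/T (g_F * m_F = 1 for |2,2>)
m, g = 1.443e-25, 9.81      # 87Rb mass (kg), gravitational acceleration
I, sep = 150.0, 5.175e-3    # wire current (A) and separation (m) at z = 0
B0 = 0.5e-4                 # longitudinal bias field, T (0.5 G)

def wire_field(x, y, x0):
    """Transverse B (T) of an infinite z-directed wire located at (x0, 0)."""
    dx, dy = x - x0, y
    pref = mu0 * I / (2.0 * math.pi * (dx * dx + dy * dy))
    return -pref * dy, pref * dx

def potential(x, y):
    """Guiding potential U(x, y) in J for the stretched state, Eq. (2)."""
    b1 = wire_field(x, y, -sep / 2)
    b2 = wire_field(x, y, +sep / 2)
    Bx, By = b1[0] + b2[0], b1[1] + b2[1]
    return m * g * y + muB * math.sqrt(B0 ** 2 + Bx ** 2 + By ** 2)
```

Because the two currents are parallel, their fields cancel on the axis, leaving a field minimum of $|B_0|$ there; low-field-seeking atoms are therefore confined transversely, and the bias keeps the minimum nonzero to suppress spin flips.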
The injected atom stream is mode-matched to the guide according to the distribution $P(\vec{\bf x},\vec{\bf v}) |_{z=0}$ given by \begin{equation} \label{eq:P} P(\vec{\bf x},\vec{\bf v}) |_{z=0} \propto \exp \left[ -\frac{\symb{mass} \left| \vec{\bf v} \right|^{2} } { 2 \symb{K_B} T} -\frac{ U(\vec{\bf x}) |_{z=0} } { \symb{K_B} T} \right]\, . \end{equation} Each timestep of the simulation is broken into two major sub-steps: collisionless motion followed by momentum-exchanging collisions within nearest-neighbor atom groups, as described in Ref.~\cite{Olson:gdsmc:2008}. \PRsection{Results} To quantify the results of the simulated evaporative cooling, we examine the increase in \symb{nl3} over the length of the guide. For all cases, $\symb{nl3} \sim 10^{-4}$ at $z=0$. Fig.~\ref{fig:results-nl3} compares the value of \symb{nl3} at $z=L$ for each instance of \symb{evap_func} in Eq.~\ref{eq:evap_func}. From the results shown in Fig.~\ref{fig:results-nl3}, it is clear that the performance depends greatly on the form of \symb{evap_func}, with $\symb{evap_func}\sim(1- (z/L))^2$ being the best candidate and $\symb{evap_func}\sim(1-(z/L))^{1/2}$ being the least promising. Fig.~\ref{fig:results-nl3} also compares the following cases for each type of \symb{evap_func}: (a) a non-compressed guide with a wall at $-4~\cm$, (b) a compressed guide with a wall at $-4~\cm$, and (c) a compressed guide with {\it no} wall at $-4~\cm$. For cases (a--b), the reflecting barrier at $-4~\cm$ boosts the density throughout the guide and provides a best-case scenario for comparison. This comparison shows that the compressed guide does indeed result in a higher final value of \symb{nl3}. \begin{figure} \centerline{ \includegraphics[width=\columnwidth]{results-nl3} } \caption[\symb{nl3} of the different evaporation strategies] { \label{fig:results-nl3} Final phase-space density $\symb{nl3}$ of the different evaporation strategies. 
(a) Non-compressed guide has a constant magnetic field gradient of $750~\G/\cm$. A reflecting barrier is also placed at $-4~\cm$. (b) Compressed guide begins with a gradient of $750~\G/\cm$ and ends with $1500~\G/\cm$. The reflecting barrier at $-4~\cm$ is also present here. (c) Compressed guide ($750~\G/\cm \rightarrow 1500~\G/\cm$) without reflective barrier at $-4~\cm$. } \end{figure} \PRsection{Conclusion and Discussion} Our simulations show that an optimum strategy does exist for cooling atoms in a guide. By appropriately choosing the evaporation threshold function $\symb{evap_func}$, it should be possible to achieve degeneracy even with short atom guides. Furthermore, as stated earlier, these simulations are expected to show only increased thermal isolation in the $z$ direction and hence greater evaporative-cooling performance as the representative particle size $\symb{FN} \rightarrow 1$. In addition, as $\symb{nl3} \rightarrow 1$, Bose statistics are expected to accelerate the condensation process of atoms into the ground state of the atom guide. We therefore conclude that a more accurate model of the physics would predict even higher evaporative cooling efficiency. For high-precision metrology, one might use the evaporation profiles described here to obtain a steady-state (stationary in time) BEC at the end of the guide. For this, it is apparent that additional controls must be introduced for reducing the forward stream velocity. The most practical control variable for removing the longitudinal energy is the tilt applied to the guide, such that atoms are forced to climb a gravitational potential. Other methods of removing longitudinal energy could include moving magnetic~\cite{Reinaudi:2006:mmm,Bethlem:2008:ghd,imhof2013chip} or perhaps optical potentials. 
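As a rough order-of-magnitude illustration of the gravitational-tilt option (our estimate, not a result of the simulations): equating longitudinal kinetic energy to the gravitational potential gained along a tilted guide gives the rise needed to stop a given stream velocity.

```python
g = 9.81  # gravitational acceleration, m/s^2

def climb_height(v):
    """Height gain (m) that removes the longitudinal kinetic energy (1/2) v^2."""
    return v * v / (2.0 * g)

# A stream moving at the ~100 uK thermal-speed scale for 87Rb (~0.17 m/s)
# is stopped by a rise of only a couple of millimetres.
print(climb_height(0.17))   # ~1.5e-3 m
```

For the slow, pressure-driven beams considered here, a tilt of only a few milliradians over a half-metre guide is therefore already significant.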
\begin{acknowledgments} This work was supported by the Naval Research Laboratory, Department of Defense High Performance Computing Modernization Program, and the Army Research Office (Project number 42791-PH). A.~J. Christlieb was supported by AFOSR grants FA9550-11-1-0281 and FA9550-12-1-0343. \end{acknowledgments} \ifthenelse{\boolean{publish}}% { \input{paper.bbl.tex} }{
\section{Introduction} The Earth's magnetopause exists in a delicate balance between the forces exerted by the impinging solar wind and the Earth's intrinsic magnetic field. The subsolar magnetopause is typically located approximately ten Earth radii (R$_E$) upstream but, during periods of enhanced solar wind forcing, this can be compressed to half this distance and inside the drift paths of radiation belt electrons and protons \citep{Shprits06} and the orbits of geosynchronous satellites \citep{Cahill99}. Moreover, magnetopause motion can drive global ultra-low-frequency (ULF) pulsations \citep{Li97,Green04} and intense ionospheric and ground induced current systems \citep{Fujita03,Smith19}. The dynamics and location of the magnetopause are therefore of wide relevance to the understanding of planetary magnetospheres and to space weather forecasting. {The location and shape of the magnetopause were initially predicted theoretically to depend on the pressure exerted by a stream of charged particles from the Sun \citep{Chapman1931} and its three dimensional geometry was derived based on solar wind dynamic pressure alone \citep{Mead64}. Measurements with in-situ spacecraft broadly confirmed these predictions and were then used to derive a large suite of empirical models of the magnetopause location \citep[e.g.][and references therein]{Shue1998} based on elliptical and parabolic functions. These empirical studies revealed additional influences from the Interplanetary Magnetic Field (IMF) orientation, which modulates magnetic reconnection and the \citet{Dungey1961} cycle, solar wind magnetic pressure and dipole tilt \citep{Lin2010}, IMF cone angle \citep{Merka2003}, and ionospheric conductivity and solar wind velocity \citep{Nemecek2016}}. 
These best-fit models are, however, static and can deviate when compared to specific observations \citep{Samsonov2019}, particularly during extreme solar wind conditions with discrepancies of $>$1 R$_E$ observed when located less than 8 R$_E$ upstream \citep{Staples2020}. Satellite observations have revealed that the magnetopause boundary exists in a perpetual state of motion \citep{Bowe90}. Solar wind pressure variations drive the magnetopause response, which results in fast magnetosonic waves that can couple to poloidal and toroidal Alfv\'en modes of the large-scale magnetospheric fields \citep{Southward1974,Kivelson1984}. Bow shock- and magnetosheath-generated phenomena, including hot flow anomalies \citep{Burgess1989}, magnetosheath jets \citep{Hietala09}, foreshock cavities \citep{Sibeck2002} and bubbles \citep{Omidi2010}, similarly produce pressure fluctuations which elicit magnetopause motion. Only four studies have formally examined the directly driven response of the magnetopause to upstream pressure variations. \citet{Smit1968} initially formulated magnetopause motion as a simple harmonic oscillator consisting of inertial, damping, and restoring forces. \citet{Freeman1995} and \citet{Freeman1998} subsequently used the Newton-Busemann approximation to develop a formally consistent theory of the magnetopause as an elastic membrane which could be applied locally. \citet{Borve2011} similarly modelled the magnetopause response to solar wind pressure pulses and found qualitative agreement with 2-D MHD simulations. \citet{Freeman1995} and \citet{Borve2011} notably predict magnetopause oscillations to be strongly damped. These studies, however, focussed on small perturbations in solar wind dynamic pressure. 
Fast-forward inter-planetary (IP) shocks, such as occur at the front of Interplanetary Coronal Mass Ejections and corotating-interaction-regions, can rapidly compress the magnetosphere in just a few minutes \citep{Smith76,Araki94}, and present a {further} regime for studying magnetopause motion. Global magnetohydrodynamic (MHD) codes are able to self-consistently model the dynamic solar wind-magnetosphere interaction for a wide variety of solar wind conditions and, in this study, we test theoretical predictions using global MHD simulations to constrain nonlinear magnetopause behaviour across extreme scenarios for which spacecraft observations are limited or unavailable. This letter is organised as follows: Section 2 describes the Gorgon Global MHD model, the simulation parameters including the IP shocks considered, and the theory of the magnetopause. Section 3 then describes the simulations conducted and the comparison to theory. Section 4 concludes with a summary discussed in relation to space weather forecasting. \section{Method} \subsection{Global-MHD} Gorgon is a 3-D simulation code with resistive MHD and hydrodynamic capabilities, originally developed to study high-energy-density laboratory plasmas \citep{Chittenden2004,Ciardo2007}. Gorgon has been adapted and applied to planetary magnetospheres in several contexts, including: the inclined and rotating Neptunian magnetosphere \citep{Mejnertsen2016}, the variable motion of the terrestrial bow shock \citep{Mejnertsen2018}, and the effects of dipole-tilt on terrestrial magnetopause reconnection and ionospheric current systems \citep{Eggington2020}. The MHD equations are implemented to represent a fully ionised quasi-neutral hydrogen plasma on a 3-D uniform Eulerian Cartesian grid. A second order finite volume Van Leer advection scheme uses a vector potential representation of the magnetic field on a staggered \citet{Yee1966} grid which maintains a divergence free magnetic field to machine-precision. 
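The divergence-free property of the staggered \citet{Yee1966} discretization is purely algebraic: finite differences commute, so the discrete divergence of the discrete curl telescopes to zero regardless of the vector potential. A standalone numerical check using a generic edge/face staggering (an illustration of the principle, not Gorgon's actual data structures):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 10, 0.5                      # grid points per side, spacing

# Edge-centered vector potential components (arbitrary random field)
Ax, Ay, Az = (rng.standard_normal((M, M, M)) for _ in range(3))

# Face-centered B = curl A via one-sided differences on the staggered grid
Bx = (np.diff(Az, axis=1)[:, :, :-1] - np.diff(Ay, axis=2)[:, :-1, :]) / d
By = (np.diff(Ax, axis=2)[:-1, :, :] - np.diff(Az, axis=0)[:, :, :-1]) / d
Bz = (np.diff(Ay, axis=0)[:, :-1, :] - np.diff(Ax, axis=1)[:-1, :, :]) / d

# Cell-centered divergence: the mixed second differences cancel pairwise
divB = (np.diff(Bx, axis=0) + np.diff(By, axis=1) + np.diff(Bz, axis=2)) / d
print(np.abs(divB).max())           # zero to machine precision (~1e-15)
```

Because the cancellation is exact in the difference operators, $\nabla\cdot\vec{B}$ stays at the round-off floor for any $\vec{A}$, which is why evolving $\vec{A}$ rather than $\vec{B}$ avoids the need for divergence cleaning.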
The system is closed assuming an ideal gas and stepped forward with a variable time-step using a second order Runge-Kutta scheme. These numerics conserve the internal energy, rather than total energy, which precludes unphysical negative pressures. A split magnetic field is implemented \citep{Tanaka1994} where the curl-free dipole-component is omitted from the induction equation which reduces discretisation errors within the magnetosphere. A \citet{Boris1970} correction is used to limit the Alfv\'en speed in the presence of a reduced speed of light, and a Von Neumann artificial viscosity is applied to accurately capture shock physics and improve energy conservation \citep{Benson1992}. Due to its heritage in simulating laboratory plasmas, Gorgon also includes individual pressure terms for protons and electrons, Ohmic heating based upon the Spitzer resistivity, optically thin radiative loss terms, and electron-proton energy exchange. These are, however, vanishingly small within collisionless magnetospheric plasmas. Magnetic reconnection therefore develops through numerical diffusion alone. The simulation domain extends from -20 to 100 R$_E$ in X and -40 to 40 R$_E$ in Y and Z, with a uniform grid spacing of 1/2 R$_E$; this corresponds to GSM coordinates with the Sun in the $-$X direction. An inflow boundary condition is located on the sun-ward edge ($-$X) where the solar wind propagates into the domain, and outflow boundary conditions are used at the tailward X, and Y and Z boundaries. The dipole is located at the origin and the inner ionospheric boundary is located at $3$~R$_E$ with a 370 cm$^{-3}$ fixed density of cold 0.1 eV plasma which diffuses outward to form a rudimentary plasmasphere. The ionosphere at the inner boundary \citep{Eggington2018} is represented by a thin conducting shell, upon which the generalized Ohm's law is solved for a given ionospheric conductance profile to obtain an electrostatic potential \citep{Ridley2004}. 
The corresponding electric field then modifies the plasma flow via the associated drift velocity. The simulation is initialised with a dipole field with an exponentially decreasing low plasma density through the domain and with a mirror dipole within the solar wind to produce a B$_x$=0 surface \citep{Raeder03}. Constant solar wind conditions of n$_0$ = 5 cm$^{-3}$, B$_z$ = -2 nT, T$_i$ = T$_e$ = 5 eV, v$_x$ = 400 km s$^{-1}$, as shown in Table 1, are run for two hours with geomagnetic dipole moment M$_z$ = 7.94$\cdot$10$^{22}$ Am$^2$ to produce a fully formed magnetosphere. \subsection{Interplanetary Shocks} {Interplanetary shocks are produced at the interface of plasma regimes in the solar wind when the relative speed of the shock structure to the ambient solar wind exceeds the magnetosonic velocity} \citep{Kennel85}. Fast-forward shocks are characterised by an increase in velocity, density, pressure and magnetic field strength, as produced at the leading edge of impulsive phenomena such as interplanetary coronal mass ejections \citep{Burlaga71} and between fast and slow solar wind streams as these boundaries steepen into corotating interaction region-driven shocks \citep{Smith76}. \vspace{-2em} \begin{center} \begin{table}[ht] \centering \caption{Rankine-Hugoniot jump conditions for four fast-forward perpendicular IP shocks corresponding to four Gorgon simulations with the same initial solar wind conditions. 
} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{} & \textbf{\begin{tabular}[c]{@{}c@{}}n\\ {[}cm$^{-3}${]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}v$_x$ \\ {[}km s$^{-1}${]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}D$_p$ \\ {[}nPa{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}T\\ {[}eV{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}B\\ {[}nT{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}v$_{shock}$\\ {[}km s$^{-1}${]}\end{tabular}}\\ \hline \textbf{Solar Wind} & 5 & 400 & 1.34 & 5.0 & [0, 0, -2] & - \\ \hline \textbf{Shock I} & 7.5 & 500 & 3.14 & 210.1 & [0, 0, -3] & 700 \\ \hline \textbf{Shock II} & 10 & 600 & 6.03 & 416.3 & [0, 0, -4] & 800 \\ \hline \textbf{Shock III} & 15 & 800 & 16.1 & 830.1 & [0, 0, -6] & 1000 \\ \hline \textbf{Shock IV} & 20 & 1000 & 33.5 & 1244.3 & [0, 0, -8] & 1200 \\ \hline \end{tabular} \label{tableshocks} \end{table} \end{center} \vspace{-1em} Four perpendicular fast-forward shocks of varying strengths are injected into the solar wind within four separate Gorgon simulations in order to characterise the magnetospheric response to impulsive events of varying magnitude. {Perpendicular shocks denote shock geometries where {the magnetic field is orthogonal to the shock normal}. The jump in solar wind conditions therefore manifests as a spatially uniform front.} The shocks are calculated in accordance with the Rankine-Hugoniot conditions {\citep{Priest14}} with the four jumps from the same initial solar wind, as shown in Table 1. Shock I shows a modest jump in all parameters representative of the median southward IP shock properties observed at 1 au during solar minimum \citep{Echer03}. The solar wind number density, n, jumps from 5 to 7.5 cm$^{-3}$, southward IMF, B$_z$, from -2 to -3 nT and solar wind velocity, v$_x$, from 400 to 500 km s$^{-1}$. 
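The $D_p$ column of Table 1 follows from the density and velocity columns as $D_p = n m_p v_x^2$; a quick consistency check (assuming a pure proton plasma, with SI unit conversions):

```python
m_p = 1.6726e-27  # proton mass, kg

def dyn_pressure_nPa(n_cm3, v_kms):
    """Solar wind dynamic pressure D_p = n m_p v^2, returned in nPa."""
    return (n_cm3 * 1e6) * m_p * (v_kms * 1e3) ** 2 * 1e9

# (n [cm^-3], v_x [km/s]) for each row of Table 1
rows = {"SW": (5, 400), "I": (7.5, 500), "II": (10, 600),
        "III": (15, 800), "IV": (20, 1000)}
for name, (n, v) in rows.items():
    print(name, round(dyn_pressure_nPa(n, v), 2))
# Reproduces the D_p column (1.34, 3.14, 6.03, 16.1, 33.5 nPa) to within rounding
```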
Shocks II, III and IV represent increasingly stronger cases up to the maximum possible four-fold increase in the solar wind density and magnetic field for Shock IV, with a solar wind velocity jump of 400 to 1000 km s$^{-1}$. All shocks travel at 200 km s$^{-1}$ in the solar wind frame, which is at the upper bound of the 50--200 km s$^{-1}$ range {typically observed at 1 au \citep[e.g.][]{Berdichevsky00}}. The parameters of Shock IV are based on an estimate \citep{Hudson97} of the extreme IP shock of 24 March 1991, which rapidly compressed the magnetosphere over the course of minutes and promptly formed a new radiation belt in the slot region \citep{Blake1992,Horne15}. Space weather events of this severity are rare \citep{Riley12,Meredith17}, but not unique, as there are other examples where the magnetopause has been observed inside geosynchronous orbit, as low as 5.2 R$_E$ \citep{Cahill77}. It is also important to note that greater shock velocities, of over twice that of Shock IV, are possible. For example, on 23 July 2012, the STEREO-A spacecraft observed a non-Earth directed fast-forward shock with a velocity of $\approx$ 2250 km s$^{-1}$ \citep{Russell2013}, and theoretical studies have highlighted the possibility of shock velocities over 3000 km s$^{-1}$ emerging from the solar corona \citep{Yashiro04,Gopalswamy05,Tsurutani14} with corresponding velocities of up to $\approx$ 2750 km s$^{-1}$ manifesting at 1 au \citep{Desai20}. \subsection{Theory} \label{SHM} To understand the motion of the subsolar magnetopause in response to an IP shock, it is useful to consider the forces acting upon it. The following is based on the theory of \citet{Freeman1995} and \citet{Freeman1998}, and references therein.
In steady state, the geocentric distance to the subsolar magnetopause, R, is well approximated by a balance between the pressure exerted on the magnetopause by the shocked solar wind and the magnetic pressure of the compressed dipole magnetic field of the Earth \begin{equation} s \rho u^2 = \frac{f^2B^2_{eq} R^6_E}{2 \mu_0 R^6}, \label{numerical} \end{equation} where $\rho$ and $u$ are the solar wind density and speed, respectively, and $s$ = 1 in the Newtonian approximation. B$_{eq}$ = 31100 nT is the equatorial magnetic field strength at $1$ R$_E$, and $\mu_0$ is the permeability of free space. $f \approx 2.44$ is the typical dipole compression factor, but this can theoretically vary between $f = 2$ for a plane magnetopause and $f = 3$ for a spherical magnetopause {\citep{Mead64}}. When the solar wind conditions change abruptly, this balance is broken and the magnetopause accelerates according to \begin{equation} m \frac{d^2 R}{d t^2} = \frac{f^2B^2_{eq} R^6_E}{2 \mu_0 R^6} - s \rho (u_\infty+\frac{dR}{dt})^2, \label{numerical2} \end{equation} where the final term is the Newtonian pressure applied to the now-moving magnetopause and the subscript $\infty$ denotes the constant post-shock solar wind values. The inertial mass per unit area, m, is expected to be that of the subsolar magnetosheath column. Writing m = c $\rho_{\infty}$ R$_{\infty}$, where R$_{\infty}$ is the final equilibrium position, we estimate c $\approx$ 1.2 in this case. Also rewriting the magnetic pressure term using the final equilibrium version of Equation \ref{numerical}, Equation \ref{numerical2} becomes \begin{equation} \frac{d^2 R}{d t^2} + \frac{s}{c R_\infty} \left[ \left( u_\infty + \frac{dR}{dt} \right) ^2 - u_\infty^2 \left(\frac{R_\infty}{R}\right)^6 \right] = 0 \label{numerical3}.
\end{equation} Linearising Equation \ref{numerical3} by substituting R(t) = R$_\infty$ + r(t), assuming r $\ll$ R$_\infty$, and retaining only first-order terms, the equation of motion becomes \begin{equation} \frac{d^2 r}{d t^2} + \left( \frac{2}{K \tau} \right) \frac{d r}{d t} + \left( \frac{6}{K \tau^2} \right) r = 0, \label{linear} \end{equation} where $\tau$ = R$_\infty$ / u$_\infty$ is the characteristic system time scale, and K = c/s. The homogeneous second-order ordinary differential Equation \ref{linear} is that of a damped simple harmonic oscillator whose solution is an exponentially-decaying sinusoid, \begin{equation} r = A e^{-bt}\cos(\omega t + \phi), \label{linear2} \end{equation} where b = 1 / (K $\tau$) and $\omega = b \sqrt{6K-1}$. For a stationary pre-shock magnetopause at position R$_0$, we have $\tan\phi$ = -b / $\omega$ and $A\cos\phi$ = R$_0$ - R$_\infty$. \section{Results} \label{23july2012} \subsection{Shock-Magnetosphere Interaction} Figure \ref{compression} shows the Gorgon pressure at six stages during the simulation of Shock IV, starting within the upstream solar wind, then at four stages within the magnetosphere, and then some time after, when the system has reached a new compressed steady state. Selected magnetic field lines are depicted in white and the shock moves through the domain shown in just over 200 seconds. \begin{figure*}[ht] \hspace*{-2cm} \includegraphics[width=1.25\textwidth]{pressure.png} \caption{Gorgon pressure in the x--y plane at six instances corresponding to before, during, and after, IP Shock IV impacts the simulated magnetosphere. Selected magnetic field lines are depicted in white and the shock parameters are listed in Table 1. \label{compression}} \end{figure*} The IP shock slows down upon passing through the bow shock and panel (b) shows that it develops a curved front as it propagates through the dense magnetosheath \citep{Samsonov06,Andreeova11}.
The subsequent impact on the magnetopause disrupts the pressure-balanced equilibrium, which initiates the sudden commencement phase associated with geomagnetic storms \citep{Smith86,Araki94}. The initial magnetospheric state shows pressures below 1 nPa and the enhanced solar wind pressure consequently produces magnetosheath pressures over an order of magnitude higher. The tailward propagating magnetosonic pulse, panels (d--e), subsequently produces enhanced plasma sheet pressures, thins the tail current sheet and induces near-Earth tail reconnection {\citep{Oliveira14}}. The enhanced dynamic pressure in the solar wind compresses the magnetopause boundary from its initial position near --10 R$_E$ to its final position near --6 R$_E$. \subsection{Subsolar Magnetopause} The magnetopause possesses a finite thickness, ranging from a few ion gyroradii, i.e. several hundred kilometres \citep{Le94}, to over half an Earth radius \citep{Kaufmann73}. The different plasma conditions on either side result in an asymmetric structure. Along the sub-solar line the shocked solar wind first slows and diverts at the fluopause \citep{Palmroth03} which, based on the gradient of the velocity streamlines, is initially determined to be at $\approx$ --10.9 R$_E$. The southward oriented magnetic field then passes through zero at --10.75 R$_E$ as it tends to the significantly larger positive magnetospheric fields. Further inward, the peak in the magnetopause current density is located at --10.45 R$_E$, which is then followed by a local depletion in the plasma density at --10.1 R$_E$. In this study we determine the magnetopause position using the B$_z$ = 0 condition, which provides a consistent measure for southward IMF regardless of solar wind conditions and stand-off distance.
\begin{figure*}[ht] \centering \hspace*{-1cm} \includegraphics[width=1.1\textwidth]{magnetopause_4subplots_revised.png} \caption{Simulated subsolar magnetopause stand-off distance compared to linear and nonlinear theoretical predictions for the four Shocks listed in Table 1. For reference the location of geosynchronous orbit is annotated with an arrow. \label{subsolar}} \end{figure*} Figure \ref{subsolar} shows traces of the Gorgon subsolar magnetopause stand-off distances (solid lines) over time for the four shocks simulated. The motion of the magnetopause proceeds in three distinct phases. The first involves an acceleration as the inertia of the magnetosheath is overcome. The second is a rapid compressive phase which comprises the majority of the change in stand-off distance. The end of this rapid compression marks the third stage of large-scale oscillatory motion, with amplitudes of the order of an Earth radius, before the magnetopause reaches pressure-balanced equilibrium. Shock I has the smallest compressive phase, as the final oscillations around pressure balance appear of a comparable magnitude to the total stand-off distance travelled. For increasing shock strengths, the duration of the compressive phase increases while the amplitudes of the oscillations appear to decrease and their frequencies to increase. The underlying position about which the oscillations occur shifts Earthward as the oscillations are damped away, which may be attributed to changing conditions within the sheath (see Figure \ref{compression}). The oscillations also appear more strongly damped for the stronger shocks, with Shock IV producing magnetopause oscillations for approximately 300 seconds compared to Shock I, whose oscillations last four times as long. Shocks I and II also feature more oscillations than III and IV, indicating that they have a smaller damping ratio.
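The dominant frequencies of such strongly damped oscillations are quantified in the frequency analysis below from the peak of each signal's autocorrelation. A minimal sketch of that estimator on a synthetic signal (the frequency and damping rate are illustrative assumptions, not simulation output):

```python
import numpy as np

# Synthetic damped oscillation with an assumed frequency f0 and damping
# rate b, chosen to resemble the Shock I oscillations (illustrative only).
dt = 1.0                                   # sampling interval [s]
t = np.arange(0.0, 1200.0, dt)
f0, b = 3.3e-3, 1.0 / 300.0                # [Hz], [1/s]
x = np.exp(-b * t) * np.cos(2 * np.pi * f0 * t)

# Autocorrelation, keeping non-negative lags only
x = x - x.mean()
ac = np.correlate(x, x, mode="full")[len(x) - 1:]

# The first non-zero-lag maximum of the autocorrelation sits near one
# oscillation period; search beyond the central peak (lags > 100 s here).
lo, hi = 100, 600
lag = lo + int(np.argmax(ac[lo:hi]))
f_est = 1.0 / (lag * dt)                   # estimated frequency [Hz]
```

For this level of damping the autocorrelation peak is pulled slightly below one period, so the estimator recovers the assumed frequency to within roughly ten per cent.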
Also shown in Figure \ref{subsolar} are the subsolar magnetopause motions of the four shocks predicted by the nonlinear numerical solution of Equation \ref{numerical3} (dashed lines) and the linear solution given by Equation \ref{linear2} (dotted lines), using c=1.2, s=1, and values for $\rho_\infty$, u$_\infty$, R$_0$, and R$_\infty$ taken from the simulations. As expected, the linear solution is most similar to the nonlinear numerical solution for Shock I, where the approximation r $\ll$ R$_\infty$ is most valid. The difference increases with shock strength, especially in the initial phase when the second-order (dR/dt)$^2$ term in Equation \ref{numerical3} is not negligible. Nevertheless, the linear theory is instructive in explaining the qualitative response characteristics of a finite magnetopause response time, overshoot, and decaying oscillation. For the nonlinear theory solution, the oscillation period of the simulation {is reproduced well} in all cases, but the initial response time and oscillation damping rate are both progressively overestimated relative to the simulation as the shock strength weakens. This suggests that the second term in Equation \ref{numerical3}, and particularly the (dR/dt)$^2$ term within it, may be an oversimplification in the weak shock limit, because the linear theory, which neglects (dR/dt)$^2$, actually captures the initial simulation response better than the nonlinear theory for Shocks I and II. It should also be noted that the initial response is very sensitive to the initial conditions in the magnetosheath (not shown), which may differ in the simulation from those assumed in the theory. The linear theory thus captures the essential physics of the magnetopause response.
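The nonlinear numerical solution referred to above can be reproduced with a few lines of code. The sketch below integrates the equation of motion, with the magnetic pressure term scaling as $(R_\infty/R)^6$, for an illustrative Shock-I-like case; the parameter values are assumptions for demonstration, not the simulation inputs.

```python
import numpy as np

# Sketch: numerical solution of the magnetopause equation of motion
# (nonlinear damped-oscillator form). All parameter values are
# illustrative assumptions.
R_E = 6.371e6              # Earth radius [m]
u_inf = 500e3              # post-shock solar wind speed [m/s]
R_inf = 9.0 * R_E          # final equilibrium stand-off distance [m]
R0 = 1.02 * R_inf          # pre-shock magnetopause position [m]
s, c = 1.0, 1.2            # Newtonian and inertial coefficients

def accel(R, Rdot):
    """d^2R/dt^2 from the imbalance of magnetic and Newtonian pressures."""
    return -(s / (c * R_inf)) * ((u_inf + Rdot) ** 2
                                 - u_inf ** 2 * (R_inf / R) ** 6)

# Fixed-step RK4 on the equivalent first-order system (R, V = dR/dt)
dt, n_steps = 0.5, 4000
t = np.arange(n_steps + 1) * dt
R = np.empty(n_steps + 1)
V = np.empty(n_steps + 1)
R[0], V[0] = R0, 0.0
for i in range(n_steps):
    k1r, k1v = V[i], accel(R[i], V[i])
    k2r, k2v = V[i] + 0.5*dt*k1v, accel(R[i] + 0.5*dt*k1r, V[i] + 0.5*dt*k1v)
    k3r, k3v = V[i] + 0.5*dt*k2v, accel(R[i] + 0.5*dt*k2r, V[i] + 0.5*dt*k2v)
    k4r, k4v = V[i] + dt*k3v, accel(R[i] + dt*k3r, V[i] + dt*k3v)
    R[i + 1] = R[i] + (dt / 6.0) * (k1r + 2*k2r + 2*k3r + k4r)
    V[i + 1] = V[i] + (dt / 6.0) * (k1v + 2*k2v + 2*k3v + k4v)

# Linear-theory damping rate and angular frequency for comparison
K, tau = c / s, R_inf / u_inf
b = 1.0 / (K * tau)
omega = b * np.sqrt(6 * K - 1)
```

For this small initial perturbation the numerical solution overshoots R$_\infty$ and rings at close to the linear frequency $\omega/2\pi \approx 2.9$ mHz before settling, mirroring the qualitative behaviour in Figure \ref{subsolar}.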
{The nonlinear theoretical solutions of Equation \ref{numerical3} provide a means to extend this to larger perturbations}, but the solutions are {still} necessarily dependent on the choice of coefficients and the assumptions behind these, such as the shape of the magnetopause surface and constant sheath thickness. Further effects, such as magnetic reconnection, magnetosheath heating, finite solar wind Mach numbers and wave speeds, {and the reflection of the pulse at the inner boundary back onto the magnetopause \citep{Li1993,Samsonov07}}, are also not accounted for. The time-dependent and self-consistent numerical solutions to the MHD equations, as solved by Gorgon, instead provide the means of testing {the theory outlined in Section \ref{SHM}} for realistic nonlinear {system-scale} scenarios of strong fast-forward IP shock-induced magnetopause motion. Large-scale periodic magnetopause motions, consistent with those described here, have been observed following the arrival of strong fast-forward IP shocks. During the impact of the August 1972 ICME, when the sub-solar magnetopause was compressed to less than 5.2 R$_E$, the Explorer 45 satellite experienced multiple magnetopause crossings in rapid succession \citep{Cahill77}. Similarly, during the extreme event of March 1991, the GOES-6 satellite experienced six inward-outward periodic movements of the magnetopause over a 30 minute period \citep{Cahill92}. The lack of an upstream solar wind monitor does, however, complicate further direct comparison to these events. \subsection{Frequency Analysis} The response of the magnetopause in the Gorgon simulations requires time-frequency analysis suitable for non-stationary and nonlinear processes. Figure \ref{EEMD} uses ensemble empirical mode decomposition (EEMD) \citep{Wu04,Torres11} to derive the statistically significant modes associated with the magnetopause motion, and a Hilbert transform spectrum shows the associated characteristic frequencies.
\begin{figure*}[ht] \centering \hspace*{-1cm} \includegraphics[width=1\textwidth]{modes_hilbert_combined.pdf} \caption{(a) The original traces shown in Figure \ref{subsolar}, (b) their decomposition into empirical modes and (c) a Hilbert transform of their instantaneous frequencies. \label{EEMD}} \end{figure*} These show the oscillations as a superposition of up to four statistically significant modes, the primary of which exhibit frequencies of 2--13 mHz, with the frequencies of the dominant modes increasing with shock strength. The instantaneous frequencies initially increase from zero as the inertial phase begins and then plateau somewhat during the compressive phase. They then rapidly increase during the first magnetopause rebound before relaxing back to values between 2 and 5 mHz. The instantaneous frequency is the time-derivative of the phase at each moment and this distinct peak is therefore interpreted as evidence of nonlinear phase steepening. Due to the strong damping, the instantaneous frequencies do not always provide a reliable measure of the overall periodicity of the oscillations in each mode. Taking the auto-correlation of each mode and finding its peak, we find slightly higher overall frequencies of 3.3, 4.1, 5.4 and 5.8 mHz for Shocks I--IV, respectively. The primary empirical modes appear strongly damped, with Shocks I and II inducing approximately three total periods of diminishing amplitude whereas Shocks III and IV induce less than two such periods. For both Shocks I and II, the eventual periods of the oscillations are much longer than the first few. These are picked up by the secondary mode, at 1.8 and 2.3 mHz respectively, and fall in the frequency range of magnetopause surface eigenmodes reflected between the northern and southern ionospheres \citep{Chen74}, as seen in high-resolution global MHD simulations \citep{Hartinger15}.
For Shocks III and IV these are not apparent, possibly because the grid resolution does not sufficiently resolve field-aligned currents near the inner boundary, and because magnetopause reconnection at the subsolar point under the strong southward driving prohibits a surface eigenmode from forming \citep{Plaschke11,Archer19}. Further modes are apparent extending up to 0.1 Hz, but these likely correspond to nonlinear higher-order terms. { The simulated magnetopause frequencies at the subsolar point lie where the theory outlined in Section \ref{SHM} places the natural frequencies of the magnetopause. These oscillations notably occur at the lower end of the ULF range observed throughout the magnetosphere \citep{Menk2011}, and \citet{Freeman1995} point out that the linear theory predicts that the magnetopause acts as a low-pass filter of compressional waves due to solar wind dynamic pressure variations; resonances may thus be selectively enhanced at the natural eigenfrequency and suppressed at higher frequencies. Higher frequency waves, however, could well exist further within the magnetosphere, for example via field line resonances excited by the fast magnetosonic pulse during the compression phase. The reproduction of ULF waves in global-MHD simulations can, however, be sensitive to numerical effects \citep{Claudepierre09} and an exploration of the magnetospheric ULF counterparts to IP shocks is therefore left for a future endeavour}. \section{Conclusions} \label{summary} This study has examined the magnetospheric and magnetopause response to four synthetic IP shocks of varying magnitudes using global-MHD simulations. While previous studies \citep{Smit1968,Freeman1995,Borve2011} focussed on small-scale dynamic pressure changes in the upstream driver, {we developed nonlinear theory suitable for large perturbations and compared it to} self-consistent global MHD simulations.
This approach enabled the characterisation of magnetopause motion for extreme scenarios representative of fast-forward shocks striking the magnetosphere, as occur at the forefront of coronal mass ejections. In response to the IP shocks, the simulated magnetopause notably featured large-scale oscillatory motion of the order of an Earth radius prior to reaching pressure balance. This was readily explained by considering the driving, inertial and restoring forces associated with the theory of the magnetopause as a forced, damped simple harmonic oscillator. The frequencies of the oscillations occurred in the range 2--13 mHz, predominantly between 2 and 5 mHz. {The response times and oscillation periods seen in the simulations were quantitatively consistent with the nonlinear theory, and the damping time of the oscillation was also quantitatively consistent with nonlinear theory for the stronger shocks but underestimated by theory for the weaker shocks. The initial magnetopause response was also best predicted by linear theory for the weaker shocks and by nonlinear theory for the strongest shock, which is consistent with the assumptions behind the derivation of the linearised solutions.} These large-amplitude oscillations provide an explanation for the periodic magnetopause motion observed following the impact of strong interplanetary shocks during the extreme space weather events of August 1972 \citep{Cahill77} and March 1991 \citep{Cahill92}. The time delay in the magnetopause response due to the inertia of the magnetosheath, combined with the large-scale oscillatory motion, also helps explain why static models of the magnetopause break down during periods of strong solar wind driving \citep[e.g.][]{Staples2020}.
Furthermore, the varying structure throughout a given Earth-bound coronal mass ejection, combined with the dynamic magnetopause response, could well mean that the magnetopause rarely settles into highly compressed equilibrium states, which would also introduce a significant bias to in-situ measurements of its location. Rapid inward motion of the magnetopause has been observed to consistently produce enhancements and dropouts in the radiation belt phase space distributions \citep{Reeves03,Schiller16}, to drive an abundance of global ultra-low-frequency wave activity \citep{Li97,Green04} and to enhance ionospheric and ground-induced currents \citep{Fujita03,Smith19}. These phenomena therefore present further observables that could be affected by, and tested for \citep[e.g.][]{Wang10}, the large-scale magnetopause oscillations described herein. \section*{Acknowledgements} RTD, JPE and JPC acknowledge funding from NERC grant NE/P017347/1 (Rad-Sat). MPF was supported by NERC grant NE/P016693/1 (SWIGS). JWBE is funded by a UK Science and Technology Facilities Council (STFC) Studentship (ST/R504816/1). MOA holds a UKRI (STFC / EPSRC) Stephen Hawking Fellowship EP/T01735X/1. Research into magnetospheric modelling at Imperial College London is also supported by Grant NE/P017142/1 (SWIGS). NM and RH would like to acknowledge the Natural Environment Research Council Highlight Topic grant NE/P10738X/1 (Rad-Sat) and the NERC grants NE/V00249X/1 (Sat-Risk) and NE/R016038/1. IJR and FAS acknowledge STFC grants ST/V006320/1 and NE/P017185/1. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 870452 (PAGER). This work used the Imperial College High Performance Computing Service (doi: 10.14469/hpc/2232).
\newline \section*{Data Availability Statement} The simulation data used in this paper is openly available on the UK Polar Data Centre (UK PDC): https://doi.org/10.5285/3774fa5b-f2fb-42c3-9091-5b11ac9744ea \bibliographystyle{abbrvnat}
\section{\label{sec:level1}Introduction} The interactions between electrons determine the structural and electronic properties of solids and depend strongly on the distance between the atomic nuclei. Ultrashort laser pulses are able to drive coherent vibrational motions of the atoms (coherent phonons) with large amplitudes, which in turn can lead to emergent phenomena, such as light-induced superconductivity \cite{Mankowsky2014,Mitrano2016,Liu2020}, ferroelectricity \cite{Nova2019,Li2019}, and phonon control of magnetic order \cite{Nova2017,Afanasiev2019,Disa2020,Juraschek2020_2}. Conventionally, coherent phonons are created by ultrashort laser pulses in the visible spectral region which interact predominantly with interband electronic transitions. Electron-phonon coupling then results in an effective time-dependent force that drives coherent motion of the atoms~\cite{Merlin1997}. If the photon energy is far from a direct electronic resonance, the force arises from electronic Raman scattering (ERS) that can be described as an $E^2 Q_\mathrm{R}$ term in the second-order dipole coupling, where $E$ is the electric field and $Q_\mathrm{R}$ is the normal coordinate of the Raman-active phonon. Although in principle one could achieve arbitrarily high coherent phonon amplitudes by increasing the peak electric field of the pump, in reality competing interactions such as multi-photon interband absorption lead to elevated temperatures and even irreversible damage to the material~\cite{Zukerstein2019}. As an alternative mechanism, infrared-active phonons can replace electronic states in the scattering process. This so-called ionic Raman scattering (IRS) mechanism was predicted half a century ago \cite{Maradudin1970}.
Only recently has it become possible to unequivocally observe the generation of coherent phonons via IRS, largely because of new technologies to generate intense sources of radiation in the THz and mid-infrared frequency ranges~\cite{Forst2011}. IRS is based on nonlinear interactions between phonons due to anharmonicities in the potential energy landscape of the crystal lattice. An infrared-active phonon is initially coherently excited by a terahertz or mid-infrared pulse through infrared absorption and is subsequently scattered to a Raman-active phonon. The scattering process often involves a displacement of the potential energy surface of the Raman-active phonon toward a new quasi-equilibrium lattice structure~\cite{Mankowsky:2015,Mankowsky:2017,Mankowsky_2:2017}. The leading-order nonlinear phononic interaction in ionic Raman scattering can be described by a $Q_\textrm{IR}^2 Q_\mathrm{R}$ term in the potential energy of the lattice, where $Q_\textrm{IR}$ is the coordinate of an infrared-active phonon driven by the pump~\cite{subedi:2014,subedi:2015,fechner:2016,Subedi2017,juraschek:2017,Gu2017,VonHoegen2018}. This interaction is often invoked to explain a large variety of lattice-driven phenomena demonstrated in recent years. Due to the low photon energy of the pump relative to electronic transitions in insulating materials, IRS has proven to be highly selective \cite{Liu2020} and potentially less dissipative than ERS~\cite{Nicoletti2016}. The goal of our study is to explicitly compare ERS and IRS without the interference from other order parameters that have been present in the vast majority of IRS demonstrations to date. In order to reduce possible extraneous influences from such interference, we study both mechanisms of coherent phonon generation in lanthanum aluminate (LaAlO$_3$), which we choose because it does not exhibit any magnetic, ferroelectric, or complex electronic order at any temperature.
We measure an oscillatory response from coherent excitation of the $\sim$ 1~THz Raman-active $E_g$ mode in LaAlO$_3$ over a wide range of pump frequencies in the mid- and near-infrared. Although the oscillations are measurable over the entire range of pump frequencies used, the amplitude of the response is maximized when the frequency of the laser is tuned into resonance with the high-frequency infrared-active $E_u$ mode at 20.6~THz. Our results are complementary to a recent study by Hortensius \textit{et al.}~\cite{Hortensius2020}, which reports similar measurements in LaAlO$_3$ with a focus on acoustic phonons. \section{\label{sec:exp_setup}Experiment} LaAlO\(_3\) is a wide band-gap insulator (\(E_{\rm gap}\) = 5.5~eV \cite{Lim2002}) that adopts the cubic perovskite structure with a low-symmetry rhombohedral distortion (space group R$\bar{3}$c), characterized by alternating rotations of the AlO\(_6\) octahedra at temperatures below 813~K \cite{Hayward2005}. The samples used in our experiments are double-side polished bulk crystals with a thickness of 0.5~mm grown by the Czochralski method (MTI Corporation). The surface orientation of these crystals is (1~0~0) with respect to the pseudocubic unit cell. A pump-probe scheme measures the response of the samples to mid-infrared excitation (see Appendix A). An amplified Ti:Al\(_2\)O\(_3\) laser system consisting of an oscillator, regenerative amplifier and single-pass amplifiers creates femtosecond pulses of light (1~kHz, 800~nm, 110~fs). A pair of optical parametric amplifiers (OPAs) with a common white light seed generates independently tunable near-infrared signal pulses in a wavelength range between 1.2 and 1.6~\(\mu\)m. These signal pulses in turn generate tunable, carrier-envelope phase-stable mid-infrared light via difference-frequency generation in GaSe~\cite{Junginger2010}. An off-axis parabolic mirror focuses the generated mid-infrared light to a full width at half maximum (FWHM) spot diameter of 100~\(\mu\)m on the sample.
For center frequencies below 33~THz, electro-optic sampling using another GaSe crystal of 30~\(\mu\)m thickness and a 15~fs probe pulse at 650~nm from a non-collinear OPA measures the electric field waveform. For higher central frequencies a commercial Fourier transform interferometer characterizes the frequency content. The inset to Fig.~\ref{fig:fluence_dep}(a) shows the power spectral density from an example electro-optic sampling measurement of a pump pulse with center frequency near 20~THz. Fig.~\ref{fig:fluence_dep}(a) shows the corresponding time-domain electro-optic sampling measurement, where a Fourier filter is applied to cut off frequencies below 15~THz and above 22~THz to isolate the mid-infrared pulse component. The mid-infrared pulses have a FWHM duration (in intensity) of approximately 180~fs and are tuned to central frequencies between around 17 and 40~THz. The maximum incident fluences depend strongly on the generated spectrum due to absorption in GaSe \cite{Mandal2008} and vary from several mJ/cm\(^2\) below 19~THz to more than 100~mJ/cm\(^2\) above 30~THz. The probe pulses (650~nm, 15~fs) are transmitted through a small hole in the parabolic mirror and focused onto the sample at normal incidence to a FWHM spot diameter of 25~\(\mu\)m. The reflected pulses then propagate back through the parabolic mirror and are partially split into a balanced detection scheme that measures changes to the ellipticity $S$ (see Appendix A). Both pump and probe pulses are sent to the sample linearly polarized along [0~1~1] (pseudocubic). Since LaAlO\(_3\) is non-magnetic and non-absorbing at 650~nm \cite{Nelson2012}, the measured changes in ellipticity arise from the part of the probe beam that is transmitted into the sample and then reflected from the backside. \section{\label{sec:results}Results} We acquired pump-probe data for a variety of pump fluences \(F\), center frequencies \(\nu_{\rm p}\), and positions on the sample.
As an example, Fig.~\ref{fig:fluence_dep}(b) shows the measured changes in ellipticity \(S\) over delay time \(t\) when applying different pump fluences \(F\) for a center frequency of 19~THz. Here we use the absorbed fluence \begin{equation} F = \frac{4 \ln 2 (1-R) U}{\pi d^2},\label{eq:fluence} \end{equation} where $U$ is the pulse energy, and $d = 100\,\mu\textrm{m}$ is the FWHM of the spot on the sample. Here $R$ is an average of the reflectivity over a Gaussian-distributed range of frequencies with a center frequency and FWHM matching the pump pulse parameters, calculated using complex permittivity values from the literature~\cite{Willett-Gies2014}. The pump pulse shown in panel (a) corresponds to the trace at \(F = 29.9\)~mJ/cm\(^2\). The data are characterized predominantly by an oscillation with a period of approximately 1~ps around a displaced value of \(S\). At pump-probe delay times corresponding to near-overlap (\(t = 0\)) there are additional, sharper features. To better isolate and quantify the oscillations, we fit the function \begin{equation} S = A\sin\left(2\pi\nu t - \phi\right)e^{-t/\tau_\gamma} + B\left(1-e^{-t/\tau_{\rm R}}\right)\label{eq:model} \end{equation} to all data sets for times \(t > 1.2\) ps, well beyond the times when the pump and probe pulses overlap. The first term describes the oscillations with amplitude \(A\), frequency \(\nu\), phase \(\phi\), and damping constant \(\tau_\gamma\). The second term represents a slower, non-coherent contribution with amplitude \(B\) that recovers with an exponential time constant \(\tau_{\rm R}\). In Fig.~\ref{fig:fluence_dep}(b) the model curves resulting from the fit are shown as black lines. The parameters \(A\) and \(B\) are adjusted to best fit each data set individually, while \(\phi\) and \(\tau_{\rm R}\) are fit globally to each group of data sets for which the center frequency of the pump remained the same.
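For concreteness, the absorbed-fluence expression and the fit model above can be evaluated directly; the pulse energy and reflectivity used below are hypothetical placeholders, not measured values from this work.

```python
import numpy as np

# Hypothetical example values (not measured quantities from this work)
U = 1.0e-6       # pulse energy [J]
Rrefl = 0.3      # spectrally averaged reflectivity (placeholder)
d = 100e-4       # FWHM spot diameter [cm]

# Absorbed fluence: F = 4 ln2 (1 - R) U / (pi d^2)
F = 4 * np.log(2) * (1 - Rrefl) * U / (np.pi * d**2)   # [J/cm^2]
F_mJ = 1e3 * F                                          # [mJ/cm^2]

# Fit model: damped oscillation plus a slower non-coherent rise
def model(t, A, nu, phi, tau_g, B, tau_R):
    return (A * np.sin(2*np.pi*nu*t - phi) * np.exp(-t/tau_g)
            + B * (1 - np.exp(-t/tau_R)))

# Evaluate over the fitted delay range t > 1.2 ps with placeholder parameters
t = np.linspace(1.2, 10.0, 500)                         # delay [ps]
S = model(t, A=1.0, nu=0.95, phi=0.3, tau_g=1.4, B=0.2, tau_R=5.0)
```

With these placeholder inputs the absorbed fluence evaluates to roughly 6.2 mJ/cm\(^2\), and at long delays the model correctly relaxes to the non-coherent amplitude \(B\) as the oscillation damps out.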
The parameters \(\nu = 0.95\pm 0.04\)~THz and \(\tau_\gamma = 1.4\pm 0.3\)~ps are determined by a global fit to all data. These parameters agree well with previous measurements if we identify the oscillations as a coherent excitation of a degenerate pair of phonon modes with \(E_g\)-symmetry at room temperature~\cite{Scott1969}. These \(E_g\)-modes are soft modes of the structural transition at 813~K. The inset of Fig.~\ref{fig:fluence_dep}(b) shows the scaling of the amplitude \(A\) with the fluence \(F\) for the data shown in the main panel. While the behavior of \(A\) for the lower three fluences is consistent with a linear relationship \(A = a F\) (fit shown by the red line), the highest fluence at 29.9 mJ/cm$^2$ deviates from this proportionality. Fig.~\ref{fig:scaling_various} shows the corresponding fluence dependence on a log-log scale for a variety of pump frequencies. Deviations from linearity at these high fluences are also evident for these measurements. For fluences less than $F_c$ = 28 mJ/cm$^2$ we fit these data to \(A = a F\) to extract a pump-frequency-dependent parameter $a$ that gives a quantitative measure of the sensitivity of the coherent phonon response to the pump at a given absorbed fluence. \begin{figure} \includegraphics[scale=0.7]{fluence_dependence} \caption{\label{fig:fluence_dep} (a) Temporal evolution of the pump field according to Fourier-filtered electro-optic sampling (see text for details). The inset shows the power spectral density. The yellow-shaded area is the support of the Fourier filter for the temporal evolution. (b) Time-dependence of the ellipticity change \(S\) for various absorbed pump fluences. The data are shown together with fits (black lines, see text). The curves are offset for clarity. The inset shows how the amplitude \(A\) changes with fluence \(F\).
Both panels share the delay time axis and the pump pulse displayed in (a) corresponds to an absorbed fluence of 29.9~mJ/cm\(^2\).} \end{figure} \begin{figure*} \begin{center} \includegraphics[scale=0.7]{LAO_fluence_limit} \caption{\label{fig:scaling_various} Absorbed fluence dependence of the oscillations at different pump frequencies. (a) Scaling of the \(E_g\) mode amplitude \(A\) with the absorbed fluence calculated using Eq.~\ref{eq:fluence} for various pump spectra including linear fits up to the fluence \(F_{\rm c}\) (dashed vertical line). (b) Different normalized pump spectra featuring Gaussian fits (black lines). The colors of the spectra in (b) correspond to the data points and fits shown in (a).} \end{center} \end{figure*} The rhombohedral axis of the low-symmetry, room-temperature structure of LaAlO$_3$ can lie along any of the body diagonals of the high-temperature cubic cell, giving rise to distinct structural domains, which can lead to twinning~\cite{Yao1992, Hayward2002}. The different relative orientation of these domains results in a small amount of optical contrast. Fig.~\ref{fig:grid}(a) shows an image of a 100~\(\mu\)m \(\times\) 100~\(\mu\)m area of the sample taken using an optical microscope. Here \(y\) corresponds to [0~1~1] and \(x\) to [0~-1~1] of the pseudocubic cell. Stripes of varying optical contrast along [0~1~0] with a width on the order of 10~\(\mu\)m are evident, suggesting the presence of lamella-like domains. We investigated how these domains affect the oscillatory response of the system and mapped the 100~\(\mu\)m \(\times\) 100~\(\mu\)m area by scanning the pump and probe focus positions in steps of 20~\(\mu\)m. The applied pump pulses had a central frequency of 23.8~THz and a FWHM bandwidth of 9.1~THz. For each position we fit Eq.~\ref{eq:model} to the resulting \(S\) traces to extract \(A\) and \(\phi\) as a function of the spot location.
Fig.~\ref{fig:grid}(b) shows the respective map of \(A\) on a logarithmic color scale, while panel (c) shows the map of \(\phi\) on a linear scale. Over the mapped area \(A\) varies by a factor of more than 30, while \(\phi\) varies between -100\(^\circ\) and 180\(^\circ\). Stripes along [010] are evident in both parameter maps, suggesting a strong influence of the domain structure on the measured oscillations. \begin{figure*} \includegraphics[scale=0.275]{LAO_grid_combined2} \caption{\label{fig:grid} Spatially-resolved maps of a selected area of the sample. (a) optical microscope image showing stripes oriented along the [010] direction (indicated by the arrow), (b) amplitude \(A\) on logarithmic color scale, and (c) phase \(\phi\) on a linear color scale, extracted from fits of transient data taken at each point to Eq.~\ref{eq:model}.} \end{figure*} \section{\label{sec:discussion}Discussion} While the parameter \(a\) that relates the absorbed fluence to the oscillation amplitude in principle reflects the relative sensitivity of the \(E_g\) mode to different excitation frequencies, the oscillation amplitude of the coordinate \(Q_{\rm R}\) also depends on other quantities that change strongly with the pump frequency. Both the pump and probe pulses interact with the material with a strong dependence on both time and distance from the surface. These spatial and temporal dynamics of the pump pulse vary considerably with frequency, necessitating a correction to the value of $a$ in order to genuinely compare the dependence of the \(Q_\textrm{R}\) on the absorbed pump fluence for different pump wavelengths. 
In order to estimate the required correction factor, we assume that the time- and space-dependent coordinate of the Raman active mode is given by \begin{equation} Q_R(z,t) = \int\limits_{-\infty}^\infty dt^\prime \left|E_\textrm{pump}(z,t^\prime)\right|^2 G_{\nu_p}(t-t^\prime) \end{equation} where \begin{align} E_\textrm{pump}(z,t) = & E_0 \tau \sqrt{\frac{\pi}{2 \ln 2}} \int\limits_{-\infty}^\infty d\nu \frac{2}{1+n(\nu)} \nonumber\\ & \times \mathrm{exp}\left[-\frac{\pi^2\tau^2(\nu-\nu_p)^2}{2\ln 2}\right] \nonumber\\ & \times \mathrm{exp}\left[i 2 \pi \nu \left[t - n(\nu) \frac{z}{c}\right]\right] \end{align} is the complex-valued electric field inside the crystal from a Gaussian-profile pump with peak incident amplitude $E_0$ at a depth $z$ and a time $t$, $n(\nu)$ is the complex-valued index of refraction at the frequency $\nu$, $\tau$ is the FWHM pulse duration, and $G_{\nu_p}$ is an impulse response function that depends on the pump frequency $\nu_p$. Assuming that the probe pulse duration is small compared to the dynamics of \(Q_R\), the probe ellipticity change is \begin{equation} S(t) \propto \int\limits_0^D dz Q_R\left(z,t+\frac{z}{v_g}\right) \end{equation} where \(D\) is the thickness of the sample and \(v_g\) is the group velocity of the probe. Here we are concerned only with the component of $S$ that oscillates at the phonon frequency \(\nu = 0.95\,\textrm{THz}\). We then have \begin{equation}\label{eq:lin_fit} A \propto |\tilde{G}_{\nu_p}(\nu) f(\nu)| F_\textrm{abs} \end{equation} as a relation between the observed oscillation amplitude $A$ and the Fourier transform \(\tilde{G}_{\nu_p}\) of the impulse response function, where \begin{align} f(\nu) = & \frac{1}{1-R} \int\limits_0^D dz e^{i 2 \pi \nu z/v_g} \\ & \times \int\limits_{-\infty}^\infty dt \left|\frac{E_\textrm{pump}(z,t)}{E_0}\right|^2 e^{-i2 \pi \nu t} \label{eq:f} \end{align} is a correction factor. 
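As a rough numerical illustration of the correction factor of Eq.~\ref{eq:f}, the sketch below assumes a separable pump intensity profile \(|E_\textrm{pump}(z,t)/E_0|^2 \approx e^{-z/\delta}\,g(t)\) with a Gaussian envelope \(g(t)\), and evaluates the double integral by simple Riemann sums. The penetration depth \(\delta\), probe group velocity \(v_g\), thickness \(D\), and reflectivity \(R\) are placeholder values, not the measured optical constants of Ref.~\onlinecite{Willett-Gies2014}.

```python
import numpy as np

# Illustrative placeholder parameters (SI units), not fitted material values.
nu = 0.95e12          # phonon frequency (Hz)
tau = 100e-15         # pump intensity FWHM (s)
delta = 5e-6          # assumed penetration depth (m)
D = 100e-6            # sample thickness (m)
v_g = 1.2e8           # assumed probe group velocity (m/s)
R = 0.1               # assumed reflectivity

# Time part: Fourier component of the Gaussian intensity envelope at nu.
t = np.linspace(-1e-12, 1e-12, 4001)
dt = t[1] - t[0]
g = np.exp(-4 * np.log(2) * t**2 / tau**2)
time_part = np.sum(g * np.exp(-2j * np.pi * nu * t)) * dt

# Depth part: exponential absorption profile weighted by the probe phase.
z = np.linspace(0.0, D, 4001)
dz = z[1] - z[0]
depth_part = np.sum(np.exp(-z / delta) * np.exp(2j * np.pi * nu * z / v_g)) * dz

f = depth_part * time_part / (1.0 - R)   # correction factor f(nu)
```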
We then define a new quantity \begin{equation} a_\textrm{corr} \equiv a / |f(\nu)|\label{eq:acorr} \end{equation} that depends on $\nu_p$ only via $\tilde{G}_{\nu_p}(\nu)$. We calculate $a_\textrm{corr}$ from Eqs.~\ref{eq:f} and \ref{eq:acorr} for our experiment taking optical constants from Ref.~\onlinecite{Willett-Gies2014} and assuming Gaussian pump pulses with a 100~fs FWHM. Values of $a_\textrm{corr}$ as a function of pump frequency for one particular region of the sample are shown in Fig.~\ref{fig:resonance}. There is a significant enhancement of $a_\textrm{corr}$ at frequencies near 20~THz. We also observe a smaller enhancement when the pump is tuned to near 40~THz. Due to the domain structure of the sample we do not attempt to compare the absolute magnitude of the response measured at different regions, although the relative response for a subset of pump frequencies and fluences was reproduced at several different sites. Two mechanisms could lead to coherent oscillations of the \(E_g\) mode. The first is electronic Raman scattering (ERS). The two most prominent manifestations of ERS are electronically-driven displacive excitation and impulsive stimulated Raman scattering \cite{Merlin1997,Stevens2002}. In the former case of displacive excitation, electrons are excited to higher bands in metals or semiconductors, which displaces the effective potential energy surface of the lattice. This causes the ions to move coherently, oscillating about a new equilibrium position \cite{Zeiger1992}. In the latter case of impulsive Raman scattering, the energy of the photons is not large enough to persistently excite valence electrons into higher bands. The resulting effective force on the ions exists only during the pump pulse interaction~\cite{Stevens2002,Glerean2019}. For the measurements reported here the photon energy is very far from an electric-dipole allowed transition, and so we are in the impulsive Raman scattering regime.
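A minimal sketch of the impulsive limit: the phonon coordinate is modeled as a damped harmonic oscillator driven by a force proportional to the pump intensity envelope, which is nonzero only during the pulse. The units (ps, THz) and the force normalization are illustrative; the phonon frequency and damping are taken from the fitted \(\nu\) and \(\tau_\gamma\) quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped oscillator Q'' + 2*gamma*Q' + omega0^2 * Q = F(t), with F(t)
# proportional to a short Gaussian intensity envelope (impulsive drive).
omega0 = 2 * np.pi * 0.95    # phonon angular frequency (rad/ps), nu = 0.95 THz
gamma = 1.0 / (2 * 1.4)      # damping rate from tau_gamma = 1.4 ps
tau_p = 0.1                  # pump intensity FWHM (ps), much shorter than 1/nu

def force(t):
    # Impulsive drive: nonzero essentially only during the pump pulse.
    return np.exp(-4 * np.log(2) * t**2 / tau_p**2)

def rhs(t, y):
    Q, Qdot = y
    return [Qdot, force(t) - 2 * gamma * Qdot - omega0**2 * Q]

sol = solve_ivp(rhs, (-1.0, 10.0), [0.0, 0.0], max_step=0.01,
                dense_output=True)
# Sample the coordinate well after the pulse: a damped free oscillation
# launched by the impulsive momentum kick delivered during the pulse.
Q_late = sol.sol(np.linspace(2.0, 10.0, 400))[0]
```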
It has previously been shown that at higher pump frequencies this mechanism can excite measurable $E_g$-mode oscillations in LaAlO$_3$~\cite{Liu1995}. In the impulsive limit we would expect the phase of the oscillations $\phi$ to be close to zero, and the magnitude of the oscillations to be nearly frequency independent, since the photon energy is very small compared to the band gap of 5.5~eV. The non-zero values of $\phi$ and the strong pump-frequency dependence of $a_\textrm{corr}$ both indicate that ERS alone is not sufficient to explain our data. A second possible mechanism is ionic Raman scattering arising from coupling via potential energy terms proportional to \(Q_{\rm IR}^2 Q_{\rm R}\), where $Q_\textrm{IR}$ is the normal coordinate of an infrared-active phonon mode at 20.6 THz~\cite{Scott1969}. To quantify the relative contributions of the two mechanisms, we perform simulations of both interactions using a combination of density-functional theory calculations and phenomenological modeling (see Appendix B). For the simulations we assume a constant envelope FWHM duration of 250~fs and peak electric fields of 12~MV/cm with varying central frequency \(\nu_{\rm p}\). From these we compute the maximum amplitudes of both the \(E_u\) mode \(Q_{\rm IR, 0}\) and the \(E_g\) mode \(Q_{\rm R, 0}\). \(Q_{\rm IR, 0}\) was calculated through the coupling between the pump field and the mode effective charge. For \(Q_{\rm R, 0}\) both IRS and ERS are taken into account. The simulations assume that the pump-field polarization is aligned orthogonal to the c-axis of the crystal, and we compute the amplitude of the $E_g$ mode that couples most strongly to the pump. This is a simplification of the experimental conditions, where the pump and probe polarizations depend strongly on the domain orientation and are a priori not known because of the complex twinning.
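The essence of the IRS channel can be sketched with a pair of classical oscillators coupled through the \(Q_{\rm IR}^2 Q_{\rm R}\) term, rather than the full DFT-based simulation: the \(E_u\) coordinate is driven directly by the pump field and, once excited, exerts a force \(\propto Q_{\rm IR}^2\) on the \(E_g\) coordinate. The damping rates and the coupling constant \(c\) below are illustrative placeholders, not the computed DFT values.

```python
import numpy as np
from scipy.integrate import solve_ivp

nu_R, nu_IR = 0.95, 20.6           # mode frequencies (THz)
omega = 2 * np.pi * nu_R           # E_g angular frequency (rad/ps)
Omega = 2 * np.pi * nu_IR          # E_u angular frequency (rad/ps)
g_ir, g_r, c = 2.0, 0.36, 10.0     # damping rates and coupling (illustrative)
tau_env = 0.25                     # pump envelope FWHM (ps)

def drive(t, nu_p):
    # Pump field: carrier at nu_p under a Gaussian envelope.
    return np.cos(2 * np.pi * nu_p * t) * np.exp(-4 * np.log(2) * t**2 / tau_env**2)

def max_Qr(nu_p):
    # Coupled equations: Q_IR driven by the field, Q_R driven by -c*Q_IR^2.
    def rhs(t, y):
        qir, vir, qr, vr = y
        return [vir, drive(t, nu_p) - 2 * g_ir * vir - Omega**2 * qir,
                vr, -c * qir**2 - 2 * g_r * vr - omega**2 * qr]
    sol = solve_ivp(rhs, (-1.0, 4.0), [0.0, 0.0, 0.0, 0.0],
                    max_step=0.002, rtol=1e-8, atol=1e-12)
    return np.max(np.abs(sol.y[2]))

A_res = max_Qr(20.6)   # pump resonant with the E_u mode
A_off = max_Qr(30.0)   # pump detuned from the E_u mode
```

Comparing a pump resonant with the \(E_u\) mode (20.6~THz) against a detuned one (30~THz) reproduces, in this toy model, the strong resonant enhancement of the \(E_g\) amplitude.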
This makes the calculated amplitude of the phonon response difficult to directly compare with experiment, but should give approximate indications of the relative efficiency of phonon generation as a function of pump frequency on the same location on the sample. The dependence of the simulated responses on \(\nu_{\rm p}\) is shown in Fig.~\ref{fig:resonance} with the relevant scale on the right-hand axis. \(Q_{\rm IR, 0}\) is shown as a solid red line and \(Q_{\rm R, 0}\) as a dashed blue line, reproducing the resonant behavior of the coherent phonon amplitude for pump frequencies near $Q_\textrm{IR}$. The strong enhancement of $a_\textrm{corr}$ in the vicinity of the $E_u$ mode at 20.6~THz and the agreement with the predictions of the simulations provide strong evidence that IRS is the dominant mechanism for the coherent driving of the 1.1 THz $E_g$ mode for pump frequencies ranging from 16 to 25~THz. Outside this range ERS plays a more significant role, and is largely characterized by a nearly frequency independent impulsive excitation. Note that this does not explain the enhancement in $a_\textrm{corr}$ at 40~THz. Measurements on other locations on the sample (using fewer pump frequencies) reproduced the enhancement near 20~THz but not the one at 40~THz, and so we do not consider the 40~THz enhancement further. We suspect that this apparent enhancement may be the result of a small change in the probe spot location in the sample that leads to different domain contributions that may affect the assumptions used to estimate $a_\textrm{corr}$. We also note that, even discounting the 40~THz pump data, there is a discrepancy between the experimental results and the simulations regarding the relative magnitudes of the IRS and ERS signals.
While the simulations show a difference of about a factor of 20 between the amplitudes of IRS and ERS driven coherent phonons, the experiment shows an enhancement of nearly 100 between the peak response at 20 THz pump and the response at pump frequencies near 15~THz or 30~THz. Some of this discrepancy may arise from the fact that the polarization of the pump in the experiment differs from that assumed in the simulations, since components of the pump polarization orthogonal to the c axis can affect the efficiency of ERS (see Appendix B). \begin{figure} \includegraphics[scale=0.825]{spectral_response} \caption{\label{fig:resonance} Fluence-corrected scalings, \(a_{\rm corr}(\nu_{\rm p})\) (points, left axis), of the \(E_g\)-phonon amplitude and calculated phonon-amplitude responses (lines, right axis) on logarithmic scales. For \(a_{\rm corr}\), the \(\nu_{\rm p}\) values are set according to the center of mass of the absorbed excitation spectrum, of which the horizontal error bars indicate the FWHM. We show the calculated frequency-dependent amplitudes of both the Raman-active \(E_g\)-mode \(Q_{\rm R, 0}\) and the infrared-active \(E_u\)-mode \(Q_{\rm IR, 0}\).} \end{figure} One striking feature of our data is the observation that for fluences above 28 mJ/cm$^2$ the amplitude of oscillations $A$ is no longer proportional to the absorbed fluence. This result deviates from the expectation of our simple models of ERS and IRS. At most pump wavelengths the measured result for higher fluences is sublinear, although at the 40~THz pump frequency we observed slightly superlinear behavior at these fluences. The value of the pump fluence at which deviations from linear behavior become measurable is relatively frequency independent, which suggests that the nonlinearity is not related to a nonlinear phononic interaction, but instead to an optical phenomenon.
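The linearity test can be made quantitative by fitting a power law \(A = aF^{p}\) on the log-log scale: \(p \approx 1\) indicates the linear regime, while \(p < 1\) signals the sublinear behavior seen at the highest fluence. The sketch below uses synthetic numbers chosen to mimic the trend in Fig.~\ref{fig:fluence_dep}(b), not the measured data.

```python
import numpy as np

# Synthetic amplitude-vs-fluence data: the first three points lie exactly on
# A = 0.1*F, while the highest-fluence point falls below the linear trend.
F = np.array([5.0, 10.0, 20.0, 29.9])     # absorbed fluence (mJ/cm^2)
A = np.array([0.50, 1.00, 2.00, 2.55])    # oscillation amplitude (arb. units)

# Power-law exponent p from a linear fit in log-log coordinates.
p, log_a = np.polyfit(np.log(F), np.log(A), 1)

# Linear-scaling coefficient a from the points below F_c = 28 mJ/cm^2 only.
a_lin = np.polyfit(F[F < 28.0], A[F < 28.0], 1)[0]
```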
The fact that our photon energies are less than $5$\% of the band gap suggests that these effects may be due to impurities or other defects that create a small number of carriers that then interact with the intense pump field. These effects could modify the magnitude of $Q_R$ and/or the phase matching conditions that allow us to detect the coherent phonon via ellipticity changes. \section{\label{sec:conclusion}Conclusion} Our data show clear evidence for identifying IRS as the dominant mechanism for coherent excitation of soft $E_g$ modes in LaAlO$_3$ when pumped with frequencies between 18 and 23 THz, with ERS playing a significant role at frequencies between 25 and 40 THz. Future experiments using ultrafast x-ray diffraction on detwinned samples could be used to further test the quantitative correspondence between the predictions of density-functional-theory-based simulations and experiment. Interestingly, despite the fact that the photon energies of the pump pulses were an extremely small fraction of the optical band gap, we observed nonlinear responses at fluences above 28~mJ/cm$^2$. These fluences are considerably less than those used in recent experiments where IRS was considered to be the only mechanism of interaction due to the band-gap mismatch~\cite{Mankowsky2017,VonHoegen2018}. In such experiments, it may therefore become necessary to consider additional interactions in this excitation range, even for wide band-gap materials. Finally, we note that comparisons of electronic and ionic excitation mechanisms have recently been discussed both theoretically \cite{Maehrlein2017,Juraschek2018a} and experimentally \cite{Maehrlein2017,Melnikov2018,Johnson2019,Knighton2019,Melnikov2020} for the sum-frequency counterparts of ERS and IRS. 
These sum-frequency mechanisms are fundamentally two-photon and two-phonon absorption processes (compared to the difference-frequency Raman scattering processes here) and follow the same theoretical formalism as we use in this study, however at different spectral ranges.
\section{Introduction} \lettrine{C}{ognitive} radio (CR) is a promising paradigm that can resolve the problem of spectrum scarcity \cite{Haykin}. A cognitive radio network (CRN) consists of primary users (PUs), who are licensed users of the spectrum, and secondary users (SUs), who can access the spectrum only when they do not cause harmful interference to PUs \cite{Goldsmith}. SUs can access the licensed spectrum via three paradigms: interweave, underlay and overlay \cite{Goldsmith}. Interference alignment (IA) is a powerful technique with the potential to provide higher degrees of freedom (DoFs), i.e., interference-free signaling dimensions at the receivers \cite{galym1,galym2,GS1, GS2,Cadambe}. The promising performance of IA can also be attained in CR by suppressing interference at the PUs. By using this technique, SUs can easily coexist with primary networks (PNs), achieving higher data rates while not causing harmful interference to PUs \cite{Amir}. Many previous studies investigated the impact of IA on the performance of the underlay CR \cite{Yi,Xu,Zhao}. A multiple-input multiple-output (MIMO) CRN with a cooperative relay was presented in \cite{Tang1}, where the DoFs of the network were increased by using IA. The authors in \cite{Chen} and \cite{Ding} introduced energy harvesting (EH) as a promising technology to prolong the lifetime of battery-powered wireless devices. There are two main EH architectures: the time-switching (TS) and power-splitting (PS) protocols \cite{Nasir}. In \cite{Zhao1}, the authors proposed a common framework to jointly study IA and simultaneous wireless information and power transfer (SWIPT) in MIMO networks, where the users were dynamically chosen as EH terminals to improve the performance of wireless power transfer (WPT) and information transmission. The work in \cite{Park} considered a spectrum sensing policy for the EH-based CR to ensure that SUs can harvest energy from the PUs' signals in the TS mode.
In \cite{Zheng}, the same system was investigated with optimal information and energy cooperation methods between SUs and PUs, where SUs harvest energy from PUs and then use that energy to transmit their own and the PUs' signals. Both \cite{Park} and \cite{Zheng} assumed that all nodes in the CR have perfect channel state information (CSI); however, in practice it is more common to deal with imperfect CSI due to channel estimation errors \cite{Wang}. Unlike the existing studies and to the best of our knowledge, no previous work has jointly studied these three important areas, namely, CRN, WPT and IA. Therefore, this work focuses on studying an underlay IA-based CRN where an energy-constrained relay node operates in the time-switching relaying (TSR) and power-splitting relaying (PSR) modes. The performance of the primary receivers and the EH-based relay of the secondary network (SN) in an interference environment is analyzed. Interference at the PUs and the relay is cancelled by the well-known IA technique. The network capacity for these protocols is evaluated with different settings of the TS/PS portions and under various CSI scenarios. \section{System Model} \label{sec:system model} \begin{figure}[!t] \centering \includegraphics[width=0.4\columnwidth]{system_model.eps} \caption{The IA- and EH-based CRN with two PUs and one SU sharing the spectrum simultaneously.} \label{system_model} \end{figure} We consider a system model consisting of two pairs of PUs and three SUs. Within the PN, each transmitter communicates with its corresponding receiver and causes interference to the other primary receiver and the secondary relay node. It is assumed that there is no interference temperature constraint at the SUs, which may imply interference at the primary receivers. The SN consists of a source ($S$), a relay ($R$) and a destination ($D$) node.
$S$ communicates with $D$ through the assistance of the energy-constrained $R$ that operates in a half-duplex mode and decodes and forwards (DF) the signal of $S$ to $D$ within two time slots. To support this, $R$ harvests energy from the received signals, while $S$ and $D$ are assumed to have external power sources. Furthermore, $D$ is assumed to be located far from the PUs and does not receive any interference. Each node of the network is assumed to be deployed with multiple antennas as shown in Fig.~\ref{system_model}, where solid lines indicate the direct channels while the dotted lines denote the interfering links. We also assume that the channels remain constant during a transmission block time $T$, but vary independently from one block to another. Channel links between nodes are defined as follows. For the PN case, $\mathbf{H}_{j,i}^{[k]}\in\mathbb{C}^{N_j\times M_i},~\forall i,j \in \{1,2\}$ denotes the channel between receiver $j$th ($R_j$) and transmitter $i$th ($T_i$), where superscript $k$ indicates a certain time period (TP) when the data transmission occurs. $N_j$ and $M_i$ are the numbers of antennas at $R_j$ and $T_i$, respectively. For the SN case, $\mathbf{H}_{R,S}$ and $\mathbf{H}_{D,R}$ denote the channel links related to the $S$-$R$ and $R$-$D$ transmissions while the inter-network channels are given by $\mathbf{H}_{j,R}\in\mathbb{C}^{N_j\times N_R}$, $\mathbf{H}_{j,S}\in\mathbb{C}^{N_j\times N_S}$ and $\mathbf{H}_{R,i}\in\mathbb{C}^{N_R\times M_i}$, where $N_S$, $N_R$ and $N_D$ denote the numbers of antennas at $S$, $R$ and $D$, respectively. Each entry of any matrix $\mathbf{H}$ is assumed to be an independent and identically distributed (i.i.d.) random variable according to $\mathcal{CN}(0,1)$, where $\mathcal{CN}(0,1)$ denotes the complex normal distribution with zero mean and unit variance.
Also, note that each channel link is characterized by the corresponding distance and path-loss exponent denoted by $d_{m,n}$ and $\tau_{m,n},~\forall m\in\{1,2,R,D\},~\forall n\in\mathcal{A} = \{1,2,S,R\}$, respectively. Moreover, global CSI is also assumed. We assume that the SN and PN nodes exploit the IA technique to mitigate the interference at $R$ and $R_{i}$, respectively. Thus, any transmit node $l$ with power $P_l$ utilizes a transmit beamforming (BF) matrix $\mathbf{V}_{l}\in\mathbb{C}^{(M_l~\textrm{or}~N_l)\times f_l}$, with $\textrm{trace}\{\mathbf{V}_l\mathbf{V}_l^H\}=1,~\forall l \in \mathcal{A}$, where $f_l$ is the number of the transmitted data streams. At the same time, all receive nodes (except $D$) exploit the receive BF matrix $\mathbf{U}_t \in\mathbb{C}^{N_t\times f_t},~\forall t\in\{1,2,R\}$, where $f_t$ is the number of data streams to be decoded at the corresponding receiver. For simplicity, we assume that each node is deployed with $N$ antennas ($M_i=N_j=N_{S}=N_{R}=N_{D} = N$). The primary receivers, within two TPs, obtain the following signal \begin{multline} \label{y_j} \mathbf{y}_{j}^{[k]} =\underbrace{\sqrt{\frac{P_{j}}{d_{j,j}^{\tau_{j,j}}}} \mathbf{U}^{[k]H}_{j} \mathbf{H}^{[k]}_{j,j}\mathbf{V}^{[k]}_{j}\mathbf{s}_{j}}_{\text{desired signal}} + \underbrace{\mathbf{T}^{[k]}}_{\text{interference from SN}} + \underbrace{ \sqrt{\frac{P_{i}}{d_{j,i}^{\tau_{j,i}}}} \mathbf{U}^{[k]H}_{j}\mathbf{H}^{[k]}_{j,i}\mathbf{V}^{[k]}_{i}\mathbf{s}_{i}}_{\text{interference from PN, \( i\neq j \)}}+ {\tilde{\mathbf{n}}}^{[k]}_{j},~k\in\{1,2\}, \end{multline} where $\tilde{\mathbf{n}}^{[k]}_{j}=\mathbf{U}^{[k]\,H}_{j}\mathbf{n}^{[k]}_{j}$ is the effective zero-mean additive white Gaussian noise (AWGN) vector at the output of the beamformer, with $\mathbb{E}\{\tilde{\mathbf{n}}^{[k]}_{j}\tilde{\mathbf{n}}^{[k]H}_{j}\}=\sigma^2_{\tilde{n}_j}\mathbf{I}$, where $\mathbb{E}\{\cdot\}$ denotes an expectation operator.
Since we assume that $\mathbf{s}_l$ is the vector containing the symbols drawn from i.i.d. Gaussian input signalling and chosen from a desired constellation, we have $\mathbb{E}\{\mathbf{s}_l\mathbf{s}_l^H\}=\mathbf{I}$, with $l\in\mathcal{A}$. Meeting all these conditions satisfies the average power constraint at each transmit node. Regarding the SN, its interference to the PN can be defined as \begin{equation} \mathbf{T}^{[k]} = \begin{cases} \sqrt{\frac{P_{S}}{d_{j,S}^{\tau_{j,S}}}} \mathbf{U}^{[k]H}_{j}\mathbf{H}_{j,S}\mathbf{V}_S\mathbf{s}_S,~\textrm{if}~k=1,\\ \sqrt{\frac{P_{R}}{d_{j,R}^{\tau_{j,R}}}} \mathbf{U}^{[k]H}_{j}\mathbf{H}_{j,R}\mathbf{V}_R\mathbf{s}_R,~\textrm{if}~k=2. \end{cases} \end{equation} During the $S$-$R$ transmission, the received signal at $R$ can be written as \begin{equation} \mathbf{y}_{R} = \underbrace{ \sqrt{\frac{P_{S}}{d_{R,S}^{\tau_{R,S}}}} \mathbf{U}_{R}^{H}\mathbf{H}_{R,S}\mathbf{V}_S\mathbf{{s}}_S}_{\text{desired signal}} + \underbrace{ \sum_{i=1}^{2} \sqrt{\frac{P_{i}}{d_{R,i}^{\tau_{R,i}}}} \mathbf{U}_{R}^{H}\mathbf{H}_{R,i}\mathbf{V}^{[1]}_{i}\mathbf{s}_{i}}_{\text{interference from PN}} + \tilde{\mathbf{n}}_{R}, \end{equation} where $\tilde{\mathbf{n}}_{R} = \mathbf{U}_{R}^{H}\mathbf{n}_{R}$ is the effective noise after receive beamforming at the relay. Then, $R$ decodes the desired signal $\mathbf{s}_S$ and forwards the information signal to $D$. Hence, $D$ receives the following signal \begin{align} \mathbf{y}_{D} = \sqrt{\frac{P_{R}}{d_{D,R}^{\tau_{D,R}}}} \mathbf{H}_{D,R}\mathbf{V}_{R}\mathbf{s}_R+\mathbf{n}_{D}, \end{align} where $\mathbf{n}_{D}$ is the AWGN vector, with $\mathbb{E}\{\mathbf{{n}}_{D}\mathbf{{n}}^H_{D}\}=\sigma^2_{{D}}\mathbf{I}$.
The interference is assumed to be completely eliminated if the following conditions are satisfied at $R_j$ as \cite{galym3,galym4} \begin{subequations} \label{Rj condition} \begin{align} &\mathbf{U}^{[k]H}_{j}\mathbf{H}^{[k]}_{j,i}\mathbf{V}^{[k]}_{i} = \mathbf{0},~\forall i,j\in\{1,2\},~\forall i\not=j,\\ &\mathbf{U}^{[k]H}_{j} \mathbf{J}^{[k]} = \mathbf{0}, ~\text{where}~ \mathbf{J}^{[k]} = \begin{cases}\mathbf{H}_{j,S} \mathbf{V}_{S},~\text{if}~k = 1,\\ \mathbf{H}_{j,R} \mathbf{V}_{R},~\text{if}~k = 2, \end{cases}\\ &\textrm{rank}\left(\mathbf{U}_{j}^{[k]H}\mathbf{H}^{[k]}_{j,j}\mathbf{V}^{[k]}_{j}\right) = f_j,~\forall j\in\{1,2\}, \end{align} \end{subequations} and at $R$ as \begin{subequations} \label{R condition} \begin{align} &\mathbf{U}_{R}^{H}\mathbf{H}_{R,i}\mathbf{V}^{[1]}_{i} = \mathbf{0},~\forall i\in\{1,2\},\\ &\textrm{rank}\left(\mathbf{U}_{R}^{H}\mathbf{H}_{R,S}\mathbf{V}_{S}\right) = f_S. \end{align} \end{subequations} \subsection{Beamforming Design} \label{sec:Transmit Beamforming Design} The existing interfering signals need to be orthogonalized to ${\mathbf{U}_{j}^{[1]}}$ and ${\mathbf{U}_{j}^{[2]}}$ at $R_j$ during two TPs to satisfy conditions \eqref{Rj condition}. Then, the interference at $R$ needs to be orthogonalized to $\mathbf{U}_{R}$, as $R$ receives the interference only during the $S$-$R$ transmission. To decode the desired signal from the received signal, the interference space should be linearly independent from the desired signal space. Thus, the precoding matrices need to be designed in such a way that the interfering signals at each receiver span the same subspace.
Thereby, the interference at $R_1$, $R_2$ and $R$, in the first TP, is aligned as $span(\mathbf{H}_{1,2}^{[1]}\mathbf{V}_{2}^{[1]}) = span(\mathbf{H}_{1,S}\mathbf{V}_{S})$, $span(\mathbf{H}_{2,1}^{[1]}\mathbf{V}_{1}^{[1]})=span(\mathbf{H}_{2,S}\mathbf{V}_{S})$ and $span(\mathbf{H}_{R,1}\mathbf{V}_{1}^{[1]}) = span(\mathbf{H}_{R,2}\mathbf{V}_{2}^{[1]})$, where $span(\mathbf{A})$ denotes the vector space spanned by the column vectors of $\mathbf{A}$. From these definitions, precoding matrices $\mathbf{V}_{1}^{[1]}$, $\mathbf{V}_{2}^{[1]}$ and $\mathbf{V}_{S}$ can be derived as \cite{Sung} \begin{subequations}\label{V matirces_1} \begin{align} &\mathbf{V}_{2}^{[1]}=(\mathbf{H}_{R,2})^{-1}\mathbf{H}_{R,1}\mathbf{V}_{1}^{[1]},\\ &\mathbf{V}_{S}=(\mathbf{H}_{2,S})^{-1}\mathbf{H}_{2,1}^{[1]}\mathbf{V}_{1}^{[1]}, \end{align} \end{subequations} where $\mathbf{V}_{1}^{[1]}=eig\left(\mathbf{A}\right)$ and $\mathbf{A}=(\mathbf{H}_{R,1})^{-1}\mathbf{H}_{R,2}(\mathbf{H}_{1,2}^{[1]})^{-1}\mathbf{H}_{1,S}(\mathbf{H}_{2,S})^{-1}\mathbf{H}_{2,1}^{[1]}$; $eig\left(\mathbf{A}\right)$ denotes the eigenvectors of $\mathbf{A}$. Now, we derive the corresponding receive BF matrices as \begin{subequations} \label{U matirces_1} \begin{align} &\mathbf{U}_{j}^{[k]} = null\left( [\mathbf{H}_{j,i}^{[k]}\mathbf{V}_{i}^{[k]}]^H\right),~j\not=i,\\ &\mathbf{U}_{R} = null\left( [\mathbf{H}_{R,1}\mathbf{V}_{1}^{[1]}]^H\right), \end{align} \end{subequations} where $null(\mathbf{A})$ denotes an orthonormal basis for the null space of $\mathbf{A}$. During the $R$-$D$ transmission, $S$ stays silent while $R$ establishes its own communication. Therefore, it is clear that the BF matrices design follows the same steps given by \eqref{V matirces_1}--\eqref{U matirces_1}, and hence they are omitted for the sake of brevity.
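The closed-form design of Eqs.~\eqref{V matirces_1}--\eqref{U matirces_1} can be verified numerically. The following sketch (with $N=2$ antennas, a single data stream per node, and randomly generated hypothetical channel realizations) constructs $\mathbf{V}_1^{[1]}$ from an eigenvector of $\mathbf{A}$, derives $\mathbf{V}_2^{[1]}$, $\mathbf{V}_S$ and the receive filters from null spaces, and checks that the zero-forcing conditions \eqref{Rj condition} and \eqref{R condition} hold.

```python
import numpy as np

rng = np.random.default_rng(1)

def cn(m, n):
    # i.i.d. CN(0,1) channel matrix.
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

N = 2  # antennas per node, one data stream each
H = {key: cn(N, N) for key in ['11', '12', '21', '22', 'R1', 'R2', 'RS', '1S', '2S']}

inv = np.linalg.inv
# A = H_R1^{-1} H_R2 H_12^{-1} H_1S H_2S^{-1} H_21 (first-TP channels).
A = inv(H['R1']) @ H['R2'] @ inv(H['12']) @ H['1S'] @ inv(H['2S']) @ H['21']
w, vecs = np.linalg.eig(A)
V1 = vecs[:, [0]]                       # one eigenvector = one stream
V2 = inv(H['R2']) @ H['R1'] @ V1        # aligns PN interference at R
VS = inv(H['2S']) @ H['21'] @ V1        # aligns SN interference at R_2
for V in (V1, V2, VS):
    V /= np.linalg.norm(V)              # trace{V V^H} = 1

def null1(M):
    # Unit vector orthogonal to the single column of M (N = 2 case).
    _, _, vh = np.linalg.svd(M.conj().T)
    return vh.conj().T[:, [1]]

U1 = null1(H['12'] @ V2)                # receive filter at R_1
U2 = null1(H['21'] @ V1)                # receive filter at R_2
UR = null1(H['R1'] @ V1)                # receive filter at R

# Residual interference leakage at all receivers (should vanish).
leak = max(np.abs(U1.conj().T @ H['12'] @ V2).item(),
           np.abs(U1.conj().T @ H['1S'] @ VS).item(),
           np.abs(U2.conj().T @ H['21'] @ V1).item(),
           np.abs(U2.conj().T @ H['2S'] @ VS).item(),
           np.abs(UR.conj().T @ H['R1'] @ V1).item(),
           np.abs(UR.conj().T @ H['R2'] @ V2).item())
# Desired links survive the filtering (rank conditions).
desired = np.abs(U1.conj().T @ H['11'] @ V1).item()
desiredR = np.abs(UR.conj().T @ H['RS'] @ VS).item()
```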
\subsection{Imperfect CSI} The model for imperfect CSI can be written as \cite{galym2,Aquilina} \begin{align} \hat{\mathbf{H}}=\mathbf{H}+\mathbf{E}, \end{align} where $\hat{\mathbf{H}}$ is the observed mismatched channel, $\mathbf{H}\sim \mathcal{CN}(0,\mathbf{I})$ represents the real channel matrix and $\mathbf{E}$ is the error matrix which represents the degree of inaccuracy in the estimated CSI. It is also assumed that $\mathbf{E}$ is independent of $\mathbf{H}$. Considering the nominal signal-to-noise ratio (SNR), $\theta$, $\mathbf{E}$ is described as \begin{align} \mathbf{E}\sim \mathcal{CN}(0,\lambda\mathbf{I}) {\text{~with~}} \lambda=\psi{\theta}^{-\kappa}, \end{align} where $\lambda$ is an error variance, $\kappa\geq 0$ and $\psi>0$ determine various CSI scenarios. Finally, the real channel matrix, conditioned on $\hat{\mathbf{H}}$ \cite{Kay}, can be described as \begin{align} \label{Imperfect CSI} \mathbf{H}=\frac{1}{1+\lambda} \hat{\mathbf{H}}+\tilde{\mathbf{H}}, \end{align} where $\tilde{\mathbf{H}}\sim\mathcal{CN}(0, \frac{\lambda}{1+\lambda} \mathbf{I})$ is independent of $\hat{\mathbf{H}}$. \section{Time-Switching Relaying} \label{sec:TSR} The time used for the $S$-$D$ information transmission is given by $T$, and the time fraction devoted for EH purposes is given by $\alpha T$, with $0\leq\alpha\leq1$. The remaining time $(1-\alpha)T$ is formed by two equal time phases to support the $S$-$R$ and $R$-$D$ transmissions \cite{Nasir}. At the same time, the PN adopts its one-hop transmission policy according to the SN time frame architecture as follows. The first data transmission occurs during the $(1+\alpha)T/2$ time because within this TP the network transmission is performed by the primary transmitters and the source node only. The remaining time $(1-\alpha)T/2$ is dedicated for the second PN transmission when the source node is replaced by the relay in the network transmission.
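The imperfect-CSI model of \eqref{Imperfect CSI} can be checked with a quick Monte Carlo sketch: generating $\hat{\mathbf{H}} = \mathbf{H} + \mathbf{E}$ and forming $\tilde{\mathbf{H}} = \mathbf{H} - \hat{\mathbf{H}}/(1+\lambda)$ should yield a residual with per-entry variance $\lambda/(1+\lambda)$ that is uncorrelated with $\hat{\mathbf{H}}$. The CSI-quality parameters $\psi$, $\kappa$ and $\theta$ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 2, 100_000
psi, kappa, theta = 1.0, 0.7, 10.0     # illustrative CSI-quality parameters
lam = psi * theta**(-kappa)            # error variance lambda = psi * theta^(-kappa)

def cn(shape, var=1.0):
    # i.i.d. CN(0, var) entries.
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

H = cn((trials, N, N))                 # true channels, CN(0,1)
E = cn((trials, N, N), var=lam)        # estimation errors, CN(0,lambda)
H_hat = H + E                          # observed mismatched channels
H_tilde = H - H_hat / (1 + lam)        # residual of the conditional decomposition

# Per-entry variance of H_tilde should equal lambda/(1+lambda), and
# H_tilde should be uncorrelated with H_hat.
var_tilde = np.mean(np.abs(H_tilde) ** 2)
corr = np.abs(np.mean(H_tilde * np.conj(H_hat)))
```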
Hence, the received signal at $R$ during the EH phase can be written as \begin{align} \mathbf{y}_{R} = \sqrt{\frac{P_S}{d^{\tau_{R,S}}_{R,S}}}\mathbf{H}_{R,S}\mathbf{V}_{S}\mathbf{s}_S+\sum_{i=1}^{2}\sqrt{\frac{P_i}{d^{\tau_{R,i}}_{R,i}}}\mathbf{H}_{R,i}\mathbf{V}^{[1]}_{i}\mathbf{s}_i + \mathbf{n}_{R}. \end{align} Now, neglecting the power harvested from the noise, the harvested energy at $R$ is derived as \cite{Nasir} \begin{equation} \label{eh_tsr} E_H^{TSR} = \eta \alpha T\left( \frac{P_S}{d^{\tau_{R,S}}_{R,S}}\left| \left|\mathbf{H}_{R,S}\mathbf{V}_S\right| \right| ^2 + \sum_{i=1}^{2}\frac{P_i}{d^{\tau_{R,i}}_{R,i}}\left| \left| \mathbf{H}_{R,i}\mathbf{V}^{[1]}_i\right| \right|^2\right), \end{equation} where $\parallel\cdot\parallel$ denotes the Euclidean norm. The relay transmit power relates to the harvested energy as $P^{TSR}_R = E_H^{TSR} / ((1 - \alpha)T/2)$ and can be further rewritten as \begin{equation} P^{TSR}_R =\frac{2\alpha\eta}{1-\alpha} \left( \frac{P_S}{d^{\tau_{R,S}}_{R,S}}\left| \left|\mathbf{H}_{R,S}\mathbf{V}_S\right| \right| ^2 +\sum_{i=1}^{2}\frac{P_i}{d^{\tau_{R,i}}_{R,i}}\left| \left| \mathbf{H}_{R,i}\mathbf{V}^{[1]}_i\right| \right|^2\right), \end{equation} where $\eta$ ($0<\eta<1$) is the EH conversion efficiency. During the $S$-$R$ information transmission, taking into account imperfect CSI given by \eqref{Imperfect CSI} and receive BF matrices, the information signal at the relay can be written as \begin{align} \label{y_it_r} \mathbf{y}_R^{IT} = \sqrt{\frac{P_S}{d^{\tau_{R,S}}_{R,S}}}{\mathbf{U}_R^H}\left( \frac{1}{1+\lambda}{\hat{\mathbf{H}}}_{R,S}+{\tilde{\mathbf{H}}}_{R,S}\right) \mathbf{V}_{S}\mathbf{s}_S + \sum_{i=1}^{2}\sqrt{\frac{P_i}{d^{\tau_{R,i}}_{R,i}}}{\mathbf{U}_R^H}\left( \frac{1}{1+\lambda}{\hat{\mathbf{H}}}_{R,i}+{\tilde{\mathbf{H}}}_{R,i}\right)\mathbf{V}^{[1]}_{i}\mathbf{s}_i + {\tilde{\mathbf{n}}}_{R}. 
\end{align} After some manipulation, the corresponding signal-to-interference-plus-noise ratio (SINR) at $R$ can be expressed as \begin{align} \label{gamma_r_ts} \gamma_R = \frac{\frac{P_S}{d^{\tau_{R,S}}_{R,S} (1+\lambda)^2}|| \mathbf{U}_R^H\hat{\mathbf{H}}_{R,S}\mathbf{V}_S||^2}{\frac{P_S}{d^{\tau_{R,S}}_{R,S}} || \mathbf{U}_R^H\tilde{\mathbf{H}}_{R,S}\mathbf{V}_S||^2 + I_{PN} + \sigma^2_{\tilde{n}_R}}, \end{align} where $I_{PN} = \sum_{i=1}^{2} \frac{P_i}{d^{\tau_{R,i}}_{R,i}} || \mathbf{U}_R^H\tilde{\mathbf{H}}_{R,i}\mathbf{V}^{[1]}_i||^2$ indicates the interference from the PN, and $\sigma^2_{\tilde{n}_R}$ stands for the noise power. Since it is assumed that the PN does not interfere with the destination node, the signal received at $D$ can be written as \begin{align} \label{y_d} \mathbf{y}_{D}=&\sqrt{\frac{P^{TSR}_R}{d^{\tau_{D,R}}_{D,R}}}\left( \frac{1}{1+\lambda}\hat{\mathbf{H}}_{D,R}+{\tilde{\mathbf{H}}}_{D,R}\right) \mathbf{V}_{R}\mathbf{{s}}_R+\mathbf{n}_{D}. \end{align} Then, the respective received SINR at $D$ is calculated as \begin{align} \label{gamma_d_ts} \gamma_D = \frac{\frac{P^{TSR}_R}{d^{\tau_{D,R}}_{D,R} (1+\lambda)^2 } || {\hat{\mathbf{H}}}_{D,R}\mathbf{V}_R||^2}{ \frac{P^{TSR}_R}{d^{\tau_{D,R}}_{D,R}} || \tilde{\mathbf{H}}_{D,R}\mathbf{V}_R||^2+ \sigma^2_{D} } , \end{align} where $\sigma^2_{D}$ is the noise power.
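Putting the TSR pieces together, the sketch below evaluates the relay power, the two-hop SINRs \eqref{gamma_r_ts} and \eqref{gamma_d_ts}, and a DF throughput of the standard form $C = \frac{1-\alpha}{2}\log_2\left(1+\min(\gamma_R,\gamma_D)\right)$, which we assume for the capacity evaluation. Scalar effective channel gains stand in for the beamformed MIMO links, and all parameter values are placeholders.

```python
import numpy as np

alpha, eta = 0.3, 0.8                  # TS fraction and EH conversion efficiency
P_S, P_i = 1.0, 1.0                    # transmit powers (normalized)
d, tau = 2.0, 2.7                      # common distance and path-loss exponent
pl = d ** (-tau)                       # common path-loss factor (placeholder)
lam = 0.05                             # CSI error variance lambda
sigma2 = 1e-3                          # noise power

# |effective channel|^2 samples for the beamformed links (placeholders).
g_RS, g_R1, g_R2, g_DR = 1.2, 0.8, 0.9, 1.0
# Residual-CSI leakage powers, each with variance lambda/(1+lambda).
e_RS, e_R1, e_R2, e_DR = (lam / (1 + lam),) * 4

# Relay power: P_R = 2*alpha*eta/(1-alpha) * (captured signal powers).
P_R = 2 * alpha * eta / (1 - alpha) * pl * (P_S * g_RS + P_i * (g_R1 + g_R2))

# Per-hop SINRs with the 1/(1+lambda)^2 desired-signal penalty.
gamma_R = (P_S * pl * g_RS / (1 + lam) ** 2) / (
    P_S * pl * e_RS + P_i * pl * (e_R1 + e_R2) + sigma2)
gamma_D = (P_R * pl * g_DR / (1 + lam) ** 2) / (P_R * pl * e_DR + sigma2)

# DF end-to-end throughput over the (1-alpha)T information-transfer time.
C = (1 - alpha) / 2 * np.log2(1 + min(gamma_R, gamma_D))
```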
Finally, the received SINR for the primary receivers $R_j$ is derived as \begin{align} \label{gamma_j} \gamma^{[k]}_j=\frac{\frac{P_j}{d^{\tau_{j,j}}_{j,j} (1+\lambda)^2}|| {\mathbf{U}^{[k]H}_j}\hat{\mathbf{H}}^{[k]}_{j,j}\mathbf{V}^{[k]}_j||^2}{ B^{[k]} + F^{[k]}+ {\sigma_{\tilde{n}_j}^2}^{[k]}}, \end{align} where $B^{[k]}= \frac{P_j}{d^{\tau_{j,j}}_{j,j}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}^{[k]}_{j,j}\mathbf{V}^{[k]}_j||^2 + \frac{P_i}{d^{\tau_{j,i}}_{j,i}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}^{[k]}_{j,i}\mathbf{V}^{[k]}_i||^2_{i\not=j}$ is the intra-network interference of the PN due to the CSI mismatch while the inter-network interference from the SN is expressed by \begin{equation} F^{[k]} = \begin{cases} {\frac{P_S}{d^{\tau_{j,S}}_{j,S}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}_{j,S}\mathbf{V}_S||^2},~\textrm{if}~k=1,\\ {\frac{P^{TSR}_R}{d^{\tau_{j,R}}_{j,R}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}_{j,R}\mathbf{V}_R|| ^2},~\textrm{if}~k=2. \end{cases} \end{equation} \section{Power-Splitting Relaying} \label{sec:PSR} The PSR protocol exploits the time $T$ divided into two equal parts to support the $S$-$R$ and $R$-$D$ information transmissions \cite{GN}. In the first TP, $R$ utilizes a portion of the received signal power for EH purposes, $\rho$, while the rest of the power, $(1 - \rho)$, is dedicated for the $S$-$R$ data transmission \cite{Arzykulov1}. Thus, the relay utilizes the energy harvested from the received signal given by \begin{equation} \mathbf{y}_R^{EH}=\sqrt{\frac{\rho P_S}{d^{\tau_{R,S}}_{R,S}}}\mathbf{H}_{R,S}\mathbf{V}_{S}\mathbf{s}_S + \sum_{i=1}^{2}\sqrt{\frac{\rho P_i}{d^{\tau_{R,i}}_{R,i}}}\mathbf{H}_{R,i}\mathbf{V}^{[1]}_{i}\mathbf{s}_i+\sqrt{\rho}\mathbf{n}_{R}. 
\end{equation} By assuming that the received signal $\mathbf{y}_R^{EH}$ at $R$ is used only for WPT and the noise power is neglected, the instantaneous harvested energy can be expressed as \cite{Nasir} \begin{equation} \label{E_h} E_H^{PSR} = \frac{\eta\rho T}{2}\left( \frac{P_S}{d^{\tau_{R,S}}_{R,S}}\left| \left|\mathbf{H}_{R,S}\mathbf{V}_S\right| \right| ^2 +\sum_{i=1}^{2}\frac{P_i}{d^{\tau_{R,i}}_{R,i}}\left| \left| \mathbf{H}_{R,i}\mathbf{V}^{[1]}_i\right| \right|^2 \right), \end{equation} where $0<\rho<1$ is the signal power portion dedicated to EH purposes. The relay transmit power as a function of the harvested energy is given by $P^{PSR}_R=2 E_H^{PSR}/T$, and it can then be written as \cite{Nasir} \begin{equation} P^{PSR}_R=\eta\rho\left( \frac{P_S}{d^{\tau_{R,S}}_{R,S}}\left| \left|\mathbf{H}_{R,S}\mathbf{V}_S\right| \right| ^2 +\sum_{i=1}^{2}\frac{P_i}{d^{\tau_{R,i}}_{R,i}}\left| \left| \mathbf{H}_{R,i}\mathbf{V}^{[1]}_i\right| \right|^2 \right). \end{equation} Using \eqref{Imperfect CSI}, the same signal, but with the power portion of $(1-\rho)$ and with the receive BF matrix $\mathbf{U}_R$ applied, can be received at the information decoder terminal as shown in \eqref{y_IT} at the top of the next page. \begin{figure*}[!t] \small \begin{align} \label{y_IT} \mathbf{y}^{IT}_R =& \sqrt{1-\rho} \left( \sqrt{\frac{P_S}{d^{\tau_{R,S}}_{R,S}}} \mathbf{U}^H_R\left( \frac{1}{1+\lambda}\hat{\mathbf{H}}_{R,S} + \tilde{\mathbf{H}}_{R,S}\right) \mathbf{V}_{S}\mathbf{s}_S + \sum_{i=1}^{2} \sqrt{\frac{P_i}{d^{\tau_{R,i}}_{R,i}}} \mathbf{U}^H_R \left( \frac{1}{1+\lambda}\hat{\mathbf{H}}_{R,i}+\tilde{\mathbf{H}}_{R,i} \right) \mathbf{V}^{[1]}_{i}{\mathbf{s}_i} + \tilde{\mathbf{n}}_{R} \right).
\end{align} \hrulefill \end{figure*} Now, by using \eqref{y_IT}, the SINR of the $S$-$R$ link can be derived as \begin{align} \label{gamma_r_ps} \gamma_R=\frac{\frac{P_S(1-\rho)}{d^{\tau_{R,S}}_{R,S} (1+\lambda)^2}|| \mathbf{U}_R^H\hat{\mathbf{H}}_{R,S}\mathbf{V}_S|| ^2}{\frac{P_S (1-\rho)}{d^{\tau_{R,S}}_{R,S}} || \mathbf{U}_R^H\tilde{\mathbf{H}}_{R,S}\mathbf{V}_S||^2 + I_{PN} + \sigma^2_{\tilde{n}_R}}, \end{align} where $I_{PN} = \sum_{i=1}^{2} \frac{P_i (1-\rho)}{d^{\tau_{R,i}}_{R,i}} || \mathbf{U}_R^H\tilde{\mathbf{H}}_{R,i}\mathbf{V}^{[1]}_i||^2$ denotes the interference from the PN. Then, the corresponding SINR at $D$ is calculated as \begin{align} \label{gamma_d_ps} \gamma_D = \frac{\frac{P^{PSR}_R}{d^{\tau_{D,R}}_{D,R} (1+\lambda)^2 } || {\hat{\mathbf{H}}}_{D,R}\mathbf{V}_R||^2}{ \frac{P^{PSR}_R}{d^{\tau_{D,R}}_{D,R}} || \tilde{\mathbf{H}}_{D,R}\mathbf{V}_R||^2+ \sigma^2_{D} } , \end{align} where $\sigma^2_{D}$ is the noise power. Finally, the SINR of the primary users can be calculated as \begin{align} \label{gamma_j_ps} \gamma^{[k]}_j=\frac{\frac{P_j}{d^{\tau_{j,j}}_{j,j} (1+\lambda)^2}|| {\mathbf{U}^{[k]H}_j}\hat{\mathbf{H}}^{[k]}_{j,j}\mathbf{V}^{[k]}_j||^2}{B^{[k]} + F^{[k]} + {\sigma_{\tilde{n}_{j}}^2}^{[k]}}, \end{align} where $B^{[k]} = \frac{P_j}{d^{\tau_{j,j}}_{j,j}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}^{[k]}_{j,j}\mathbf{V}^{[k]}_j ||^2 + \frac{P_i}{d^{\tau_{j,i}}_{j,i}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}^{[k]}_{j,i}\mathbf{V}^{[k]}_i ||^2_{i\not=j}$ is the intra-network interference of the PN due to the CSI mismatch, while the inter-network interference from the SN is expressed by \begin{equation} F^{[k]} = \begin{cases} {\frac{P_S}{d^{\tau_{j,S}}_{j,S}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}_{j,S}\mathbf{V}_S|| ^2},~\textrm{if}~k=1,\\ {\frac{P^{PSR}_R}{d^{\tau_{j,R}}_{j,R}} || {\mathbf{U}^{[k]H}_j}\tilde{\mathbf{H}}_{j,R}\mathbf{V}_R||^2},~\textrm{if}~k=2.
\end{cases} \end{equation} \section{Ergodic Capacity} \label{sec:capacity} For the PN receivers, the instantaneous capacity can be written as \cite{Tang} \begin{align} \mathcal{C}_j = \log_2(1+\gamma_j), \end{align} where $\gamma_j$ is the instantaneous SINR at $R_j$. Regarding the SN operating in the DF mode, the end-to-end capacity at $D$ is determined by the weaker of the $S$-$R$ and $R$-$D$ links and can be written as \cite{Arzykulov} \begin{align} \label{capacity} \mathcal{C}_D = \frac{1}{2} \log_2\left(1+\min\left(\gamma_R, \gamma_D\right)\right), \end{align} where $\gamma_R$ and $\gamma_D$ denote the instantaneous SINR at $R$ and $D$, respectively. Now, we derive an expression for the capacity of the TSR-based system. Using \eqref{gamma_r_ts}, \eqref{gamma_d_ts} and \eqref{capacity}, the capacity of the destination node can be written as \begin{align} \label{C_d_tsr} \mathcal{C}_D = \frac{1-\alpha}{2}\log_2\left(1 + \min\left(\gamma_R,\gamma_D\right)\right). \end{align} Regarding the PN, the capacity of $R_j$ can be written as \begin{equation} \mathcal{C}_j = \sum_{k=1}^{2} E^{[k]} \log_2(1+\gamma_j^{[k]}),~j\in\{1,2\}, \end{equation} where ${E}^{[k]} = \begin{cases} {\frac{1+\alpha}{2}},~\textrm{when}~k=1,\\ {\frac{1-\alpha}{2}},~\textrm{otherwise}, \end{cases}$ with $k\in\{1,2\}$. Using \eqref{gamma_r_ps}, \eqref{gamma_d_ps} and \eqref{capacity}, the capacity of the destination node for the PSR-based system can be written as \begin{align} \label{C_d_psr} \mathcal{C}_D = \frac{1}{2}\log_2\left(1 + \min\left(\gamma_R,\gamma_D\right)\right). \end{align} Regarding the PN, the capacity of $R_j$ can be written using \eqref{gamma_j_ps} as \begin{equation} \mathcal{C}_j = \frac{1}{2} \sum_{k=1}^{2} \log_2(1+\gamma_j^{[k]}),~j\in\{1,2\}. \end{equation} We evaluate the capacity of the optimized TSR and PSR-based systems as a function of SNR.
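To make the trade-off in \eqref{C_d_tsr} concrete, the optimal EH time fraction can also be located by a simple grid search. The scalar-channel model below is a deliberately simplified sketch (the gains, powers and noise levels are assumed values, not the MIMO setup analyzed above).

```python
import numpy as np

# Assumed toy parameters: scalar channel gains, unit powers, fixed noise.
eta, P_S, g_SR, g_RD, sigma2 = 0.8, 1.0, 1.0, 1.0, 1e-2

gamma_R = P_S * g_SR / sigma2              # S-R SINR does not depend on alpha

def capacity_D(alpha):
    # Relay power from TSR harvesting: E_H = eta*alpha*T*P_S*g_SR,
    # spent over the (1-alpha)*T/2 transmission phase.
    P_R = 2 * eta * alpha * P_S * g_SR / (1 - alpha)
    gamma_D = P_R * g_RD / sigma2
    return (1 - alpha) / 2 * np.log2(1 + min(gamma_R, gamma_D))

alphas = np.linspace(0.01, 0.99, 99)
alpha_star = max(alphas, key=capacity_D)
print(alpha_star, capacity_D(alpha_star))
```

A small $\alpha$ starves the relay of power, while a large $\alpha$ wastes the time fraction available for information transfer, so the capacity peaks at an interior $\alpha^*$.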
To begin with, we find the optimal values of $\alpha$ and $\rho$ given by $\alpha^*$ and $\rho^*$ with $\eta = 0.8$ for the corresponding TSR and PSR protocols by solving $d \mathcal{C}_D / d\alpha= 0$ and $d \mathcal{C}_D / d\rho= 0$, respectively. \section{Simulation Results} \label{sec:numerical} In this section, we present numerical examples for the capacity expressions derived above. The adopted system parameters are as follows: the channel distances and path loss exponents are identical and given by $d = 3$ m and $\tau = 2.7$, respectively; the EH conversion efficiency $\eta = 0.8$; the fixed transmit powers are assumed to be equal ($P_1 = P_2 = P_S$). The calculated optimal EH time fraction $\alpha$ and power-splitting factor $\rho$ are taken as 0.19 and 0.75, respectively. Finally, the values of $(\kappa,\psi)$ given by $(1.5,15),~(1,10),~(0,0.001)$ are considered to describe various CSI mismatch scenarios. \begin{figure*}[!h] \centering \subfloat[Perfect CSI vs. CSI mismatch $(\kappa=0,~\psi=0.001)$.]{ \label{subfig:cap_snr_perf} \includegraphics[width=0.4\columnwidth]{cap_vs_snr_perf_k0psi0001}}~~~~ \subfloat[CSI mismatch: $(\kappa=1.5,~\psi=15)$ vs. $(\kappa=1,~\psi=10)$.]{ \label{subfig:Nakagami_d} \includegraphics[width=0.4\columnwidth]{cap_vs_snr_imperfect}} \caption{Capacity vs. SNR of the primary user and the destination node operating in the TSR and PSR protocols under different CSI scenarios.} \label{results1} \end{figure*} Fig. \ref{results1} presents an insight into how the CSI mismatch affects the obtainable capacity of the PUs and the SU of the considered system model. When $\kappa=0$, it can be noticed that $\psi$ has a significant impact on the system capacity. At $\psi=0.001$ and SNR of 20 dB, the capacity loss of the PUs equals 1.85 bit/s/Hz while the capacity losses of the destination node for the TSR and PSR protocols are given by 0.09 and 0.17 bit/s/Hz, respectively.
Hence, it is obvious that the CSI mismatch severely degrades the performance of the PUs, which can be explained by the higher interference level caused by the larger number of transmitting nodes. In general, the capacity of the destination node of the PSR-based system always outperforms that of the TSR protocol, even though the former relaying method experiences a worse performance degradation than the latter. When $\kappa\not=0$, it can be noticed that the capacity is poor in the low SNR region. On the other hand, the capacity grows as the SNR increases, and a larger $\kappa$ makes the slope of the curve sharper. \begin{figure}[!t] \centering \includegraphics[width=0.4\columnwidth]{cap_rho} \caption{Capacity vs. the EH time fraction and power splitting factor for the TSR and PSR at 20 dB, respectively.} \label{results2} \end{figure} Fig. \ref{results2} illustrates some simulated results for the capacities as a function of $\alpha$ and $\rho$ for various CSI acquisition scenarios. The results for the TSR and PSR systems are obtained from \eqref{C_d_tsr} and \eqref{C_d_psr}, respectively. For the case of the PSR-based system, allocating an insufficient portion of the power for EH results in poor capacity. At the other extreme, a too large $\rho$ harvests more energy than necessary at the cost of the received signal power, again leading to poor capacity. Similarly, this justification applies to $\alpha$ in the TSR-based system. \section{Conclusion} \label{sec:Conclusion} In this paper, we analyzed the capacity of a wireless powered IA-based DF CRN under two EH protocols, namely, TSR and PSR. Three special scenarios of imperfect CSI, given by $(1.5,15),~(1,10),~(0,0.001)$, were studied to analyze the impact of the CSI quality on the system capacity.
The presented results demonstrated that an optimal selection of the power-splitting factor in the PSR protocol and the EH time fraction in the TSR protocol is important for achieving the best performance. Finally, the optimized PSR-based system was shown to outperform the optimized TSR-based system. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Motivation}\label{secintro} It is still a widespread belief that a complete description of a composite entangled quantum system cannot be obtained by descriptions of the parts, if those are expressed independently of what happens to other parts. This apparently holistic feature of entangled quantum states entails violation of Bell inequalities~\cite{bell1964, aspect1982experimental} and quantum teleportation~\cite{bennett1993teleporting}, which are repeatedly invoked to sanctify the ``non-local'' character of quantum theory. But this widespread belief was proven false more than twenty years ago by Deutsch and Hayden~\cite{deutsch2000information}, who by the same token provided an entirely local explanation of Bell-inequality violations and teleportation. Descriptions of dynamically isolated --- but possibly entangled --- systems~$A$ and~$B$ are \emph{local} if that of~$A$ is unaffected by any process system~$B$ may undergo, and vice versa. After Bell, it has become conventional wisdom to equate locality with a possible explanation by a local hidden variable theory. However, local hidden variables are only one way in which locality can be instantiated~\cite{brassard2019parallel}. Here, locality is taken in its crudest form, the one advocated by Einstein: ``the real factual situation of the system~$S_2$ is independent of what is done with the system~$S_1$, which is spatially separated from the former'' \cite{schilppalbert1970}. Descriptions of individual systems~$A$ and~$B$ are \emph{complete} if, when put together, they can predict the distributions of any measurement performed on the whole system~$AB$. For instance, if~$AB$ is in a pure entangled state~$\ket{\Psi}^{AB}$, the reduced density matrices \begin{equation*} \rho^A= \mathrm{tr}_{B} \ketbra{\Psi}{\Psi} \disand \rho^B = \mathrm{tr}_{A} \ketbra{\Psi}{\Psi} \end{equation*} are local but incomplete descriptions.
This is because~$\rho^A$ is left unaffected regardless of what happens to system~$B$; however, since~$\ket{\Psi}^{AB}$ is entangled, it or its associated density matrix~$\ketbra{\Psi}{\Psi}$ can no longer be recovered from~$\rho^A$ and~$\rho^B$. Some information that could prove crucial for computing the distribution of some joint measurements has been discarded in the tracing out. If instead the descriptions of~$A$ and~$B$ are both taken to be the global wave function~$\ket{\Psi}^{AB}$, then one finds a complete but non-local account. We seem to be stuck in a dichotomy, apparently forced to describe quantum systems either non-locally or incompletely. But the dichotomy is false. Following Gottesman's~\cite{gottesman1999heisenberg} quantum computation in the Heisenberg picture, Deutsch and Hayden defined so-called \emph{descriptors} for individual qubits and showed this mode of description to be both local and complete, hence vindicating the locality of quantum theory. In other words, even entangled systems admit a separable description. When such a bold foundational result collects a mere 190 citations in more than 20 years, it is evidence that a large portion of the community of quantum foundations is unaware of the idea, or worse, does not understand it. This is the problem that this paper addresses, and it does so by providing a detailed and self-contained explanation of how and why descriptors work. The paper culminates with the superdense coding protocol being revisited in the established framework. It is aimed at both experts and non-experts in quantum theory. A background in physics is optional; only introductory knowledge in quantum information theory is required. \section{A Question of Picture} In quantum theory, computations leading to statistics of measurable quantities all take the same form, namely, that of Dirac's celebrated bra-ket notation,~$\bra{\cdots} \cdots \ket{\cdots}$.
Physicists recognize this kind of computation as the expected value of some observable. Quantum information scientists, bear with me for another 10 lines. An observable $\mathcal{O}$ is represented by a hermitian operator which admits a spectral decomposition \begin{equation*} \mathcal{O} = \sum_i \lambda_i \Pi_i \,, \end{equation*} where $\lambda_i \in \bbR$ is an eigenvalue corresponding to the measurement outcome and $\Pi_i$ is the corresponding projector on the eigensubspace. If the system is in state~$\ket{\psi}$, the expected value of such an observable is given by~$\bra{\psi} \mathcal{O} \ket{\psi}$, since \begin{equation*} \label{eqev} \bra{\psi} \mathcal{O} \ket{\psi} = \bra{\psi} \sum_i \lambda_i \Pi_i \ket{\psi} = \sum_i \bra{\psi} \Pi_i \ket{\psi} \lambda_i = \sum_i p_i \lambda_i \,, \end{equation*} where $p_i$ can be thought of as the probability of measuring outcome $\lambda_i$. While this type of computation is routine for physicists, quantum information scientists usually compute probabilities of measurement outcomes. An $n$-qubit network in the state \begin{equation*} \sum_{j=0}^{2^n-1} \alpha_j \ket j \end{equation*} has a probability $|\alpha_l|^2$ of returning the classical value \doublequote{$l$}. But \begin{equation*} |\alpha_l|^2 = \bra \psi \ketbra{l}{l} \ket \psi \, \end{equation*} is nothing but the expectation value of the observable $\ketbra{l}{l}$. Hence, the reader who is unfamiliar with observables can simply keep in mind projectors of the form $\ketbra ll$ required to compute probabilities, but this\footnote{ The most general observable can be thought of as a choice of basis~$\{\ket{\phi_i} \}_i$, with a real number~$\lambda_i$ corresponding to each basis vector. Indeed, \mbox{$\mathcal{O} = \sum_i \lambda_i \ketbra{\phi_i}{\phi_i}$} defines a generic hermitian operator, so a generic observable.
Constructing an observable in this way makes ``measuring an observable'' clearer for quantum information scientists: It corresponds to measuring in the basis~$\{\ket{\phi_i} \}_i$ with the measurement outcomes labelled by~$\lambda_i$. Of course, this measurement can be done by performing the unitary that maps the basis~$\{\ket{\phi_i} \}_i$ to the computational basis~$\{ \ket i \}_i$, before measuring in this last basis.} footnote explains further. A generic state $\ket \psi$ arises from the evolution of an initial state that shall be denoted $\ket{0}$. If $U$ is the unitary operator representing this evolution,~\mbox{$\ket \psi = U \ket 0 $}, so the computations carried to predict measurable quantities all have the form \begin{equation} \label{eq:sand} \bra 0 U^\dagger \mathcal{O} U \ket 0 \,. \end{equation} The \emph{Schr\"odinger picture} is about viewing the sandwich equation \eqref{eq:sand} as if the bread evolves and the meat stays constant, namely, \begin{equation*} \left(\bra 0 U^\dagger \right) \, \mathcal{O} \, \left(U\vphantom{^\dagger} \ket 0\right) \,. \end{equation*} With such a viewpoint, the initial state $\ket 0$ evolves to the final state \mbox{$\ket \psi= U\ket 0$} and the observable $\mathcal{O}$ remains constant. The \emph{Heisenberg picture} is about regarding the sandwich equation as if the meat evolves but the bread remains constant, \begin{equation*} \label{eq:bread} \bra 0 \left( U^\dagger \mathcal{O} U \right) \ket 0 \,. \end{equation*} In this picture, the state vector remains fixed to $\ket 0$ but the observable $\mathcal{O}$ evolves to $ U^\dagger \mathcal{O} U $. Therefore, in the Heisenberg picture, the term `state', which refers to a quantity that is fixed to $\ket 0$, becomes a misnomer. It will thus be called the \emph{reference vector}. But then, in the Heisenberg picture, can the quantum information of the system at a given time be encoded in a single mathematical object? Yes: It is precisely what the \emph{descriptor} does. 
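The equivalence of the two bracketings of \eqref{eq:sand} is easy to verify numerically. A minimal numpy sketch, taking $U=H$ (the Hadamard gate) and the observable $\mathcal{O} = \ketbra{0}{0}$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # evolution U = Hadamard
O = np.array([[1.0, 0.0], [0.0, 0.0]])         # observable |0><0|
ket0 = np.array([1.0, 0.0])

# Schrodinger picture: evolve the state, keep the observable fixed.
psi = H @ ket0
schrodinger = psi.conj() @ O @ psi

# Heisenberg picture: evolve the observable, keep the reference vector fixed.
O_t = H.conj().T @ O @ H
heisenberg = ket0.conj() @ O_t @ ket0

print(schrodinger, heisenberg)   # both equal 1/2, up to float rounding
```

The two numbers agree by construction: the pictures differ only in where the parentheses are placed in the same sandwich.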
\section{Tracking Observables} In the Heisenberg picture, a quantum system shall no longer be described by its state vector, but rather by an object that encodes the information about \emph{all} the evolved observables of the system. This is a tall order since there is an uncountable number of such observables. Things are greatly simplified once it is realized that observables are linear operators and that the latter form a vector space. Since the evolution $\mathcal{O} \to U^\dagger \mathcal{O} U$ is linear, one does not need to track the evolution of infinitely many observables: \emph{Only a basis} of the linear operators suffices. Indeed, if $\mathcal{O} = \sum_j a_j B_j$, then $U^\dagger \mathcal{O} U = \sum_j a_j U^\dagger B_j U$, so it suffices to track how each operator $B_j$ of the basis evolves by $U$ to then compute how any observable evolves. \subsection*{The Descriptor of a $1$-Qubit Network} In the case of a single qubit, the Pauli matrices together with the identity, \begin{equation*} \bol \sigma = (\sigma_x, \sigma_y, \sigma_z) = \left( \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \right) ~\et~ \sigma_0 = \mathds{1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{equation*} form a basis of the $2\times2$ matrices, if the linear combinations are taken over complex numbers. Following the evolution of $\mathds{1}$ is trivial, \mbox{$U^\dagger \mathds{1} U = \mathds{1}$}, so it can be neglected. This means that one only needs to follow the evolution of $\boldsymbol \sigma$, to then be able to recover any evolved observable, or the expectation value thereof. Hence, for a single qubit quantum network, the \emph{descriptor} of the qubit at time $t$ is given by $$\bol q(t) = U^\dagger \boldsymbol \sigma U\,,$$ where $U$ is the unitary operator that represents the evolution undergone by the quantum network between time~$0$ and time~$t$.
\begin{example} \label{ex:H} Consider the following quantum circuit \begin{center} \begin{picture}(80,35)(-40,-7) \put(-40,0){\makebox(20,20){$\ket 0$}} \put(-20,10){\line(1,0){20}} \put(0,0){\framebox(20,20){$H$}} \put(20,10){\line(1,0){20}} \multiput(-10,-2)(0,2){13} {\line(0,1){1}} \put(-19,-11){\footnotesize{$t=0$}} \put(20,10){\line(1,0){20}} \multiput(30,-2)(0,2){13} {\line(0,1){1}} \put(21,-11){\footnotesize{$t=1$}} \end{picture} \hspace{20pt} \raisebox{10 pt}{where~$\qquad H= \frac{1}{\sqrt 2} \begin{bmatrix} 1&1\\1&-1 \end{bmatrix}$} \end{center} is the Hadamard gate. At time $t=0$, the descriptor is~\mbox{$\bol q(0)= \bol \sigma = (\sigma_x, \sigma_y, \sigma_z)$}, while at time $t=1$, the descriptor is \begin{equation} \label{eq:ex1} \bol q(1) = H^\dagger \bol \sigma H = H^\dagger(\sigma_x,\sigma_y, \sigma_z)H = (\sigma_z, -\sigma_y, \sigma_x)\,. \end{equation} The Heisenberg picture and the expression for~$\bol q(1)$ can be used to compute the probability of measuring the outcome ``$0$''. Representing $\ket 0$ and $\ket 1$ respectively by $ \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $ \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, \begin{eqnarray*} \bra 0 H^\dagger \ketbra 00 H \ket 0 &=& \bra 0 H^\dagger \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} H \ket 0 \\ &=& \bra 0 H^\dagger \left ( \frac{\mathds{1} + \sigma_z}{2} \right) H \ket 0 \\ &=& \bra 0 H^\dagger \left ( \frac{\mathds{1} + q_z(0)}{2} \right )H \ket 0 \\ &=& \bra 0 \frac{\mathds{1} + q_z(1)}{2} \ket 0 \\ &=& \frac{\bra 0 \mathds{1} \ket 0 + \bra 0 \sigma_x \ket 0}{2} \\ &=& \frac 12 \,. \end{eqnarray*} \end{example} \subsection*{Descriptors of an $n$-Qubit Network} Consider now, and for the rest of the paper, the case of $n$ interacting qubits in a quantum computational network. Suppose that the qubits are initialized at time~$0$ in the state~$\ket 0^{\otimes n}$, which, when more conveniently denoted $\ket 0$, corresponds to the Heisenberg reference vector so far invoked.
Although this network seems like a restricted system, its ability to simulate any other quantum system to arbitrary accuracy~\cite{deutsch1989quantum} makes it completely general. Moreover, no generality is lost by assuming that each gate in the network requires exactly one unit of time, so that the state of the network needs only be specified at integer values of time. Let again~$U$ be the unitary operator representing the evolution of the network between time~$0$ and time~$t$. A natural basis of the space of all operators on $n$ qubits is the product of Pauli operators, namely, \begin{equation*} \label{eq:myset} \mathcal B \equiv \left \{ \sigma_{\mu_1} \otimes \sigma_{\mu_2} \otimes \ldots \otimes \sigma_{\mu_n} \, \colon \, \mu_i \in \{ 0,x,y,z\} \right \} \,. \end{equation*} There are $4^n$ such matrices, and they are linearly independent, so, indeed, they form a basis of the $(2^n)^2 = 4^n$ dimensional complex\footnote{In fact,~$\mathcal B$ is a basis of hermitian operators if~\emph{real} linear combinations are considered. However, in the present context, it is more relevant to think of~$\mathcal B$ as a basis of all linear operators.} vector space of linear operators on $n$ qubits. This means that if one knows how each observable of the basis $\mathcal B$ evolves by the action of~$U$, \begin{equation*} \sigma_{\mu_1} \otimes \sigma_{\mu_2} \otimes \ldots \otimes \sigma_{\mu_n} \to U^\dagger \sigma_{\mu_1} \otimes \sigma_{\mu_2} \otimes \ldots \otimes \sigma_{\mu_n} U \,, \qquad \mu_i \in \{ 0,x,y,z\} \,, \end{equation*} then one knows, by linearity, how each observable evolves. \subsection*{The Main Simplification} A great simplification is to track the evolution of only the set of observables \begin{equation} \label{eq:q0} \bol q_i(0) = \mathds{1}^{i-1} \otimes \boldsymbol \sigma \otimes \mathds{1}^{n-i}\,, \qquad i= 1, \ldots , n \,, \end{equation} where $\mathds{1}^k$ stands for the tensor product of $k$ copies of the identity.
Note that for each~$i$,~$\bol q_i(0)$ has 3 components, each of them being an operator acting on the whole Hilbert space. The $n$-tuple whose components are the~$\bol q_i(0)$ is denoted~$\bol q(0)$. Bold quantities are vectors, so for instance one writes~$\bol q_i(0)$, but~$ q_{ix}(0)$. This~$\bol q_i(0)$ is \emph{the descriptor of qubit~$i$ at time~$0$}. The descriptor at time~$t$ is then given by \begin{equation} \label{eq:qt} \bol q_i(t) = U^\dagger \bol q_i(0) U \,. \end{equation} Importantly, note that $\bol q(0)$ contains many fewer components than $\mathcal B$ contains elements. In fact, instead of tracking the $4^n$ operators of~$\mathcal B$, only $3n$ are tracked here. The reason is that these~$3n$ operators can be multiplied to generate any of the $4^n$ basis operators. Moreover, this multiplicative structure is preserved by the evolution~$U$, namely, if an observable is generated multiplicatively by~$q_{iw}(0)q_{jw'}(0)$, then the evolved observable is given by \begin{equation*} U^\dagger q_{iw}(0)q_{jw'}(0) U = U^\dagger q_{iw}(0) U U^\dagger q_{jw'}(0)U = q_{iw}(t)q_{jw'}(t) \,. \end{equation*} This observation obviously extends to larger products, as well as to sums of products of components of~$\bol q(0)$. \begin{example} Considering a 2-qubit network, the observable~$\ketbra{01}{01}$ can be expanded in the basis~$\mathcal B= \left \{\sigma_\mu \otimes \sigma_\nu \suchthat \mu, \nu \in \{0,x,y,z\} \right \}$, and then expressed in terms of $\bol q_1(0)$ and~$\bol q_2(0)$. Indeed, \begin{eqnarray*} \ketbra{01}{01} &=& (\ketbra{0}{0} \otimes \mathds{1}) (\mathds{1} \otimes \ketbra{1}{1}) \\ &=& \left( \frac{\mathds{1} + \sigma_z}{2} \otimes \mathds{1} \right) \left( \mathds{1} \otimes \frac{\mathds{1} - \sigma_z}{2} \right )\\ &=& \frac 14 \left( \mathds{1}^2 - \mathds{1} \otimes \sigma_z + \sigma_z \otimes \mathds{1} - \sigma_z \otimes \sigma_z \right ) \\ &=& \frac 14 \left( \mathds{1}^2 - q_{2z}(0) + q_{1z}(0) - q_{1z}(0)q_{2z}(0) \right ) \,.
\end{eqnarray*} This can then be used to express in terms of~$\bol q(t)$ the time-evolved counterpart of the observable, $U^\dagger \ketbra{01}{01} U$, under an evolution~$U$ between time~$0$ and~$t$: \begin{eqnarray*} U^\dagger \ketbra{01}{01} U &=& \frac 14 \left( U^\dagger \mathds{1}^2 U - U^\dagger q_{2z}(0) U + U^\dagger q_{1z}(0) U - U^\dagger q_{1z}(0) U U^\dagger q_{2z}(0) U \right ) \\ &=& \frac 14 \left( \mathds{1}^2 - q_{2z}(t) + q_{1z}(t) - q_{1z}(t)q_{2z}(t) \right ) \,. \end{eqnarray*} \end{example} \subsection*{The Algebra of Descriptors} The addition and multiplication of components of descriptors endow them with an algebraic structure. \begin{remark} The operators of $\bol q(0)$ satisfy the $\mathfrak{su}(2)^{\otimes n}$ algebra, namely \begin{eqnarray*} \label{eq:su2n} [q_{iw} (0), q_{jw'}(0)] &=& 0\hphantom{q_{z}(0)} \qquad (i \neq j \text{ and } \forall w, w') \nonumber \\ q_{ix}(0) q_{iy}(0) &=& i q_{iz}(0) \qquad(\text{and cyclic permutations}) \\ q_{iw}(0)^2 &=& \mathds{1}\hphantom{q_{z}(0)} \qquad( \forall w)\,. \nonumber \end{eqnarray*} \end{remark} In the first line, the bracket denotes the commutator, $[A,B] = AB - BA$. The above algebraic relations follow from those of the Pauli matrices and from the factorized form of the descriptors at time~$0$, displayed in equation~\eqref{eq:q0}. After evolving by~$U$, the descriptors~$\bol q_i(t)$ shall in general lose their direct connection with Pauli matrices, as well as their factorized form, but still, they preserve their algebraic relations.
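This preservation is also easy to check numerically. A small numpy sketch for $n=2$, where the unitary is drawn at random via a QR decomposition (an illustrative construction, not one used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Initial descriptors of a 2-qubit network, as in eq. (eq:q0).
q1 = [np.kron(s, I2) for s in (sx, sy, sz)]
q2 = [np.kron(I2, s) for s in (sx, sy, sz)]

# Random 4x4 unitary: QR of a complex Gaussian matrix gives unitary Q.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)

ev = lambda M: U.conj().T @ M @ U          # M -> U^dagger M U
q1t = [ev(M) for M in q1]
q2t = [ev(M) for M in q2]

# The su(2) x su(2) relations survive the evolution:
assert np.allclose(q1t[0] @ q2t[2], q2t[2] @ q1t[0])   # cross commutation
assert np.allclose(q1t[0] @ q1t[1], 1j * q1t[2])       # q_x q_y = i q_z
assert np.allclose(q1t[0] @ q1t[0], np.eye(4))         # squares to identity
print("algebra preserved")
```

The evolved components are no longer tensor products of Pauli matrices, yet every algebraic relation holds exactly, as the remark above establishes in general.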
\begin{remark} \label{remt} For any~$t$, $\bol q(t)$ satisfies the $\mathfrak{su}(2)^{\otimes n}$ algebra: \begin{eqnarray*} [ q_{iw}(t), q_{jw'}(t)] &=& q_{iw}(t) q_{jw'}(t) - q_{jw'}(t) q_{iw}(t) \\ &=& U^\dagger q_{iw}(0) U U^\dagger q_{jw'}(0) U - U^\dagger q_{jw'}(0) U U^\dagger q_{iw}(0) U \\ &=& U^\dagger q_{iw}(0) q_{jw'}(0) U - U^\dagger q_{jw'}(0) q_{iw}(0) U \\ &=& U^\dagger [q_{iw}(0), q_{jw'}(0)] U \\ &=& 0 \,\hspace{16ex} (i \neq j \text{ and } \forall w, w') \\ &&\vphantom{ [ } \\ q_{ix}(t) q_{iy}(t) &=& U^\dagger q_{ix}(0) U U^\dagger q_{iy}(0) U \\ &=& U^\dagger q_{ix}(0) q_{iy}(0) U \\ &=& U^\dagger i q_{iz}(0) U \\ &=& i q_{iz}(t) \hspace{12ex} (\text{and cyclic permutations})\\ &&\vphantom{ [ } \\ q_{iw}(t)^2 &=& U^\dagger q_{iw}(0) U U^\dagger q_{iw}(0) U \\ &=& U^\dagger q_{iw}(0) q_{iw}(0) U \\ &=& U^\dagger \mathds{1} U \\ &=& \mathds{1} \hspace{16ex} ( \forall w)\,. \hspace{15ex} \end{eqnarray*} \end{remark} One might object that unitary evolution is but a special case of a larger class of processes represented by \emph{completely positive and trace preserving} maps. Such processes include for instance noisy channels or maps that do not preserve the dimensionality of the system (and hence do not preserve the system's algebra). These processes are, however, sub-processes of unitary evolutions on larger systems. In fact, by the Stinespring dilation theorem, such processes can not only be \emph{mathematically} understood as sub-processes of a larger unitary evolution; they \emph{physically} are. Real quantum processes are unitary evolutions. \subsection*{One More Simplification} Following Gottesman~\cite{gottesman1999heisenberg}, the generating tuple $\bol q(0)$ could be reduced to $2n$ elements by noticing a redundancy due to the $\mathfrak{su}(2)^{\otimes n}$ algebra.
In fact, for any $i$, only two of the triplet of operators $(q_{ix}(0), q_{iy}(0), q_{iz}(0))$ are required, since the omitted operator can be recovered by the product of the selected two. In what follows, the notation will not be modified, but one will happily use this shortcut to avoid tracking the observables~$q_{iy}(t)$, keeping in mind that~\mbox{$q_{iy}(t)= i q_{ix}(t)q_{iz}(t)$}. To sum up, the Heisenberg picture is about tracking the evolution $\mathcal{O} \to U^\dagger \mathcal{O} U$ of uncountably many initial observables~$\mathcal{O}$. This can be done by instead tracking the evolution~$\bol q(0) \to \bol q(t) = U^\dagger \bol q(0) U$ of only~$2n$ observables ($q_{iy}$ is omitted). In fact,~$\bol q(t)$ allows one to infer, by multiplication, the evolution of the $4^n$ observables of $\mathcal B$, which in turn determines, by linearity, the evolution of any observable. \section{Evolution from the Future?!}\label{subsec:wayout} Although $\bol q(0) \to \bol q(t) = U^\dagger \bol q(0) U$ looks like a completely fine way in which observables should evolve, when $U$ is broken down into different gates, for instance \mbox{$U= G_t \dots G_2 G_1$}, one finds that the observables of the descriptors evolve in the wrong order! In fact, the order in which the gates are applied is first $G_1$, then $G_2$, and so on, until the last gate $G_t$ is applied. However, the descriptors evolve as \begin{equation} \label{eq:prob} \bol q(0) \to G_1^\dagger G_2^\dagger \dots G_t^\dagger \bol q(0) G_t \dots G_2 G_1 \,. \end{equation} The evolution of observables appears to occur from the last gate of the network to the first, which is inconvenient, since the whole network needs to be specified before one can start to compute anything. Much worse, it does not reflect the actual dynamics that the system is undergoing, so this kind of evolution from the future simply cannot be the right explanation.
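The reversed bracketing of \eqref{eq:prob} can be made concrete on a single qubit. In the numpy check below (the two gates are an arbitrary illustrative choice), applying first $H$ and then the negation $N$ gives $U = NH$, yet in the evolved observable the \emph{last} gate sits innermost, next to the initial observable:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # G1, applied first
N = np.array([[0.0, 1.0], [1.0, 0.0]])         # G2, applied second
sz = np.array([[1.0, 0.0], [0.0, -1.0]])       # initial observable q_z(0)

U = N @ H                                      # overall evolution G2 G1
lhs = U.conj().T @ sz @ U                      # q_z(2) = U^dagger sigma_z U
rhs = H.conj().T @ (N.conj().T @ sz @ N) @ H   # gates read from last to first
assert np.allclose(lhs, rhs)
print("last gate acts on the observable first")
```

The two expressions agree by associativity; the point is that the bracketing forces $G_2$ to act on $\sigma_z$ before $G_1$ does, opposite to the order in which the gates are physically applied.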
The way out of this conundrum is to notice that inasmuch as observables are linear operators generated by some set~$\bol q(0)$ of operators, the evolution operators --- or gates --- are too. They are generated multiplicatively and additively by the same set~$\bol q(0)$, since questions of hermiticity versus unitarity do not arise. \subsection*{The Functional Representation of a Gate} For a fixed gate with matrix representation~$G$, its multiplicative and additive generation by $\bol q(0)$ defines a function~$\Uf_{G}(\cdot)$ through \begin{equation*} \label{eqgateq0} G=\Uf_{G}(\bol q(0))\,. \end{equation*} The function~$\Uf_G(\cdot)$ takes value in unitary operators and will be referred to as the~\emph{functional representation} of the gate~$G$. Its functionality encodes the multiplicative and linear generation of~$G$ by the elements of~$\bol q(0)$. In other words, any matrix~$G$ can be expressed as a polynomial in the $2n$ matrices $q_{1x}(0), q_{1z}(0), \dots, q_{nz}(0)$, and~$\Uf_G(\cdot)$ is one such polynomial. Now, when $\bol q(t)$ varies with $t$, the matrix representation $\Uf_G(\bol q(t))$ varies accordingly, but as we shall see in the next section, it is the fixed functionality of~$\Uf_G$ that plays a central algebraic role when performing computations in the Heisenberg picture. \begin{example} In the case of a single qubit network, the negation and Hadamard gates are described by \begin{eqnarray*} N = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} = \sigma_x = q_{x}(0) \qquad \text{and} \qquad H = \frac{1}{\sqrt 2} \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix} = \frac{q_x(0) + q_z(0)}{\sqrt 2} \,, \end{eqnarray*} so their functional representations are \begin{equation*} \Uf_N(\bol q(t)) = q_x(t) \disand \Uf_H(\bol q(t))= \frac{q_x(t) + q_z(t)}{\sqrt 2} \,. 
\end{equation*} The counterclockwise rotation of a state vector in the $\ket 0$ \& $\ket 1$ plane\footnote{ Note that this operation represents the rotation of a polarized photon, but not exactly that of the spin of an electron. The reason for this is that a $\pi / 2$ rotation of a photon takes the horizontal polarization $\ket \leftrightarrow \equiv \ket 0$ to the vertical polarization $\ket{\updownarrow} \equiv \ket{1}$. However, the spin of an electron needs a~$\pi$ rotation to take the $\ket{\uparrow_z} \equiv \ket 0 $ to $\ket{\downarrow_z} \equiv \ket 1$.} is described by \begin{eqnarray*} R_\theta = \begin{bmatrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} = \cos \theta~ \mathds{1} - i \sin \theta~ \sigma_y = \cos \theta~ \mathds{1} + \sin \theta~ q_{x}(0)q_{z}(0) \,, \end{eqnarray*} which defines its functional representation~$\Uf_{R_\theta}(\cdot)$. In the case of an~$n$-qubit network, if such a unary gate, say~$H$, is applied on qubit~$i$, while all other qubits are left invariant, then the matrix representation of the corresponding evolution operator is $$ H_i \equiv \mathds{1}^{i-1} \otimes H \otimes \mathds{1}^{n-i} = \frac{q_{ix}(0) + q_{iz}(0)}{\sqrt 2} \,, $$ so its corresponding functional representation is $\Uf_{H_i}(\bol q(t))= \frac{q_{ix}(t) + q_{iz}(t)}{\sqrt 2}$. \end{example} \subsection*{Back in order!} The apparently reversed-ordered evolution of equation~\eqref{eq:prob} can then be transformed back in the right order. 
Denoting \mbox{$V=G_{t-1}\dots G_2 G_1$}, one finds \begin{eqnarray*} \bol q (0) \to V^\dagger G_t^\dagger \bol q (0) G_t V &=& V^\dagger \Uf_{G_t}^\dagger(\bol q(0) ) \bol q (0) \Uf_{G_t}(\bol q(0) ) V \\ &=& V^\dagger \Uf_{G_t}^\dagger(\bol q(0) ) V V^\dagger \bol q (0) V V^\dagger \Uf_{G_t}(\bol q(0) ) V \\ &=& \Uf_{G_t}^\dagger \left(V^\dagger \bol q(0) V \right)~ V^\dagger \bol q (0) V~ \Uf_{G_t}\left(V^\dagger \bol q(0) V\right) \\ &=& \Uf_{G_t}^\dagger( \bol q(t-1)) V^\dagger \bol q (0) V \Uf_{G_t}( \bol q(t-1) ) \,. \end{eqnarray*} In the second last line, the function $\Uf_{G_t}$ (and its hermitian conjugate) is applied to the components of $\bol q(0)$ that are sandwiched by $V^\dagger$ and $V$. The equality holds because if $\Uf_{G_t}$ contains products of components of $\bol q(0)$, the inner $V^\dagger$ and $V$ in the expansion of~$\Uf_{G_t}\left(V^\dagger \bol q(0) V\right)$ shall cancel out, leaving only the outer ones, which can then be factored out to retrieve the line before. At this stage, the computation can be continued in two different ways. First, remembering that $V=G_{t-1}\dots G_2 G_1$, the argument can be iterated on both sides of the equation. This makes explicit that the problem of the order in which the observables evolve in the Heisenberg picture is solved by introducing the functional representation of the gates. Indeed, evolving the observables by the matrix representation of the gates acting in the wrong order, \begin{equation*} G_1^\dagger G_2^\dagger \dots G_t^\dagger \bol q (0) G_t \dots G_2 G_1 \,, \end{equation*} is equivalent to the right ordering of the functional representation of the gates evaluated at the corresponding times, \emph{i.e.}, \begin{equation*} \Uf_{G_t}^\dagger (\bol q(t-1)) \dots \Uf_{G_2}^\dagger (\bol q(1)) \Uf_{G_1}^\dagger (\bol q(0)) ~ \bol q (0)~ \Uf_{G_1} (\bol q(0)) \Uf_{G_2} (\bol q(1)) \ldots \Uf_{G_t} (\bol q(t-1))\,. 
\end{equation*} Another way to continue the previous calculation is to invoke equation~\eqref{eq:qt} on both sides of the equation to find \begin{equation} \label{eq:qt2} \bol q(t) = \Uf_{G_t}^\dagger( \bol q(t-1)) \bol q (t-1) \Uf_{G_t}( \bol q(t-1) ) \,. \end{equation} This is the way in which descriptors are prescribed to evolve in Ref.~\cite{deutsch2000information}. It is in fact correct and equivalent to equation \eqref{eq:qt}, although the equivalence is not trivially recognized. \section{The Action on Descriptors}\label{sub:action} Evolving the descriptor in a step-by-step fashion, as prescribed by equation~\eqref{eq:qt2}, makes it possible to find out how a specific gate affects the different descriptors,~\emph{i.e.}, the action of the gate on the descriptors. A gate $G_t$ transforms the~$2n$ components of~$\bol q(t-1)$ in the following way: \begin{equation*} G_t \colon q_{iw}(t-1) \to q_{iw}(t) = \Uf_{G_t}^\dagger(\bol q(t-1) ) q_{iw}(t-1) \Uf_{G_t}(\bol q(t-1)) \,. \end{equation*} Leveraging the fact that the descriptors at time~$t-1$ satisfy the~$\mathfrak{su}(2)^{\otimes n}$ algebra (\emph{c.f.} Remark~\ref{remt}), the functional representation~\mbox{$\Uf_{G_t}(\bol q(t-1))$} can be expanded, and the algebraic relations between the many components of~$\bol q(t-1)$ that crop up can be used to simplify the expression. As shall be seen, the locality of the applied gate renders trivial most of those~$2n$ computations. \begin{example} Between time~$t-1$ and~$t$, let a Hadamard gate~$H$ be performed on the $i$-th qubit, so $G_t = H_i$. What is the action of~$H_i$ on~$\bol q_i$? And on~$\bol q_j$, with $j\neq i$?
Recalling that \begin{equation*} \Uf_{H_i}(\bol q(t-1)) = \frac{q_{ix}(t-1)+q_{iz}(t-1)}{\sqrt 2} \,, \end{equation*} the action on descriptor~$\bol q_i$ is then \begin{eqnarray*} H_i \,\colon \, (q_{ix}(t-1), q_{iz}(t-1)) &\to& (q_{ix}(t), q_{iz}(t))\\ &=& \Uf^\dagger_{H_i}(\bol q(t-1)) (q_{ix}(t-1), q_{iz}(t-1)) \Uf_{H_i}(\bol q(t-1)) \\ &=& \frac{q_{ix}+q_{iz}}{\sqrt 2} (q_{ix}, q_{iz}) \frac{q_{ix}+q_{iz}}{\sqrt 2} \\ &=& \frac 12 (q_{ix}+q_{iz}+q_{iz}-q_{ix}, -q_{iz}+q_{ix}+q_{ix}+q_{iz})\\ &=& (q_{iz}, q_{ix}) \,. \end{eqnarray*} When the context does not require them, the time labels can be omitted; here, from the third line onwards, the ``$(t-1)$'' has been discarded. One can then simply denote the action of the gate on the descriptors as~$H_{i} \,\colon \, (q_{ix}, q_{iz}) \to (q_{iz}, q_{ix}) $ without insisting on the time labels, since the calculation relies only on the time-independent algebra of descriptors. Notice that the result is analogous to what has been computed in Example~\ref{ex:H}, equation~\eqref{eq:ex1}, but here, no matrix multiplication was involved, only the algebra of descriptors. More specifically, the properties~$q_{iw}^2 = \mathds{1}$ and~$q_{iz}q_{ix} = i q_{iy} = -q_{ix}q_{iz}$ have been used. How about the action of~$H_i$ on all other~$\bol q_j$, with $j \neq i$? Since~$\Uf_{H_i}(\bol q)$ depends only on $q_{ix}$ and~$q_{iz}$ (time labels removed), it commutes with~$\bol q_j$, leaving it invariant, $$ \Uf^\dagger_{H_i}(\bol q) (q_{jx}, q_{jz}) \Uf_{H_i}(\bol q) = \Uf^\dagger_{H_i}(\bol q) \Uf_{H_i}(\bol q) (q_{jx}, q_{jz}) = (q_{jx}, q_{jz}) \,. $$ \end{example} \subsection*{Locality and Completeness} The fact that $\Uf_{H_i}$ depends only on $\bol q_i$ --- and so leaves invariant the descriptors of all qubits but qubit~$i$ --- is precisely due to the fact that $H_i$ is a gate that acts only on qubit~$i$.
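Both the swap $(q_{ix}, q_{iz}) \to (q_{iz}, q_{ix})$ and the invariance of the other descriptors can be confirmed by brute-force matrix conjugation. The following numpy sketch (an illustration only) does so for a two-qubit network with the Hadamard applied on qubit~1:

```python
import numpy as np

qx = np.array([[0, 1], [1, 0]], dtype=complex)
qz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Two-qubit network, Hadamard applied on qubit 1 only.
H1 = np.kron((qx + qz) / np.sqrt(2), I2)

q1x, q1z = np.kron(qx, I2), np.kron(qz, I2)
q2x, q2z = np.kron(I2, qx), np.kron(I2, qz)

def conj(U, O):
    # Heisenberg conjugation O -> U^dagger O U.
    return U.conj().T @ O @ U

# Action on qubit 1: (q_1x, q_1z) -> (q_1z, q_1x).
assert np.allclose(conj(H1, q1x), q1z)
assert np.allclose(conj(H1, q1z), q1x)

# Locality: qubit 2's descriptor is left invariant.
assert np.allclose(conj(H1, q2x), q2x)
assert np.allclose(conj(H1, q2z), q2z)
```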
More generally, if the gate~$G_t$ acts only on qubits of the subset $I\subseteq \{1,2,\ldots,n\}$, then its functional representation~$\Uf_{G_t}$ shall only depend on components of~$\bol q_k(t-1)$, for~$k \in I$. For~$j \notin I$, the descriptor~$\bol q_{j}(t-1)$ shall then commute with~$\Uf_{G_t}(\bol q(t-1))$, so it will remain unchanged between times~$t-1$ and~$t$. Hence, anything that is done to any system that does not concern qubit~$j$ leaves its descriptor invariant, namely, \emph{the descriptors are a local description of quantum systems.} The descriptors are also \emph{complete}, in that the expectation value of any time-evolved observable~$U^\dagger \mathcal{O} U$ that concerns only qubits of~$I$ can be determined by the descriptors~$\bol q_k(t)$, with~$k\in I$. This can be seen more clearly at time~$0$, where an observable~$\mathcal{O}$ on the qubits of~$I$ is a linear (hermitian) operator that acts non-trivially \emph{only} on the qubits of~$I$. Any such operator can be generated additively and multiplicatively by the components of~$\bol q_k(0)$, with~$k\in I$, thereby defining a polynomial~$f_\mathcal{O}(\cdot)$ for which \begin{equation*} \mathcal{O} = f_{\mathcal{O}}(\{\bol q_k(0)\}_{k\in I}) \,, \qquad \text{and so} \qquad U^\dagger \mathcal{O} U = f_{\mathcal{O}}(\{\bol q_k(t)\}_{k\in I})\,. \end{equation*} \begin{example} Determine the action of~$N = \sigma_x$ and of~$\sigma_z$ on the descriptor of the qubit that is acted upon. \begin{eqnarray*} N \,\colon \, (q_{x}(t-1), q_{z}(t-1)) &\to& (q_{x}(t), q_{z}(t))\\ &=& \Uf^\dagger_{N}(\bol q(t-1)) (q_{x}(t-1), q_{z}(t-1)) \Uf_{N}(\bol q(t-1)) \\ &=& q_{x} (q_{x}, q_{z}) q_{x} \\ &=& (q_{x}, - q_{z}) \,. \end{eqnarray*} Similarly, and with a lighter, time-independent notation, \begin{eqnarray*} \sigma_z \,\colon \, (q_{x}, q_{z}) &\to& \Uf^\dagger_{\sigma_z}(\bol q) (q_{x}, q_{z}) \Uf_{\sigma_z}(\bol q) \\ &=& q_{z} (q_{x}, q_{z}) q_{z} \\ &=& ( - q_{x}, q_{z}) \,. 
\end{eqnarray*} \end{example} \subsection*{The $\text{Cnot}$} The controlled not gate, denoted~$\text{Cnot}$, is a two qubit gate of great importance. Not only does it represent a perfect measurement, but when the~$\text{Cnot}$ is supplemented by arbitrary unary gates, it forms a universal gate set. This means that any unitary transformation can be realized by a circuit with gates chosen solely among this set. Consider a $\text{Cnot}$ gate where the qubit $c$ controls the target qubit $t$. Restricting to the subspace acted upon, the linear transformation is represented by \begin{equation*} \text{Cnot} = \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{bmatrix} \,. \end{equation*} The functional representation~$\Uf_{\text{Cnot}}(\cdot)$ is established by expressing the above matrix in terms of the components of the descriptor at time~$0$, \begin{eqnarray*} \text{Cnot} &=& \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix} + \begin{bmatrix} 0&0&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{bmatrix} \\ &=& \vphantom{\int^A }\ketbra 00 \otimes \mathds{1} + \ketbra 11 \otimes N \\ &=& \frac{\mathds{1} + \sigma_z}{2} \otimes \mathds{1} + \frac{\mathds{1} - \sigma_z}{2} \otimes \sigma_x \\ &=& \frac 12 \left ( \mathds{1}^2 + q_{cz}(0) + q_{tx}(0) - q_{cz}(0)q_{tx}(0) \right) \,. \end{eqnarray*} The functional representation of $\text{Cnot}$ ($c$ controls $t$) is thus given by \begin{equation*} \Uf_{\text{Cnot}}(\bol q (t)) = \frac 12 (\mathds{1} + q_{cz}(t) + q_{tx}(t) - q_{cz}(t)q_{tx}(t) ) \,. \end{equation*} The action of the~$\text{Cnot}$ on the descriptors that it affects can be found to be \begin{eqnarray*} \text{Cnot} \, \colon \, \left \{ \hspace{-5pt} \begin{array}{c} (q_{cx} , q_{cz}) \\ (q_{tx} , q_{tz}) \end{array} \hspace{-5pt} \right \} \to \left \{ \begin{array}{c} \hspace{-5pt} (q_{cx} q_{tx}, q_{cz}) \\ (q_{tx} , q_{cz} q_{tz}) \end{array} \hspace{-5pt} \right \} \,. 
\end{eqnarray*} For example, the calculation of $q_{cx} (t)$ is done below. \begin{eqnarray*} q_{cx}(t-1) &\to& q_{cx}(t) \\ &=& \Uf^\dagger_{\text{Cnot}}(\bol q (t-1)) q_{cx}(t-1)\Uf_{\text{Cnot}}(\bol q (t-1)) \\ &=& \frac 14 \left ( \mathds{1} + q_{cz} + q_{tx} - q_{cz}q_{tx} \right ) q_{cx} \left (\mathds{1} + q_{cz} + q_{tx} - q_{cz}q_{tx} \right) \\ &=& \frac 14 ( q_{cx} + q_{cx} q_{cz} + q_{cx} q_{tx} - q_{cx} q_{cz} q_{tx} \\ && \hspace{15pt}+ q_{cz} q_{cx} + q_{cz} q_{cx} q_{cz} + q_{cz} q_{cx} q_{tx} - q_{cz} q_{cx} q_{cz} q_{tx} \\ && \hspace{15pt}+ q_{tx} q_{cx} + q_{tx} q_{cx} q_{cz} + q_{tx} q_{cx} q_{tx} - q_{tx} q_{cx} q_{cz} q_{tx} \\ && \hspace{15pt}-q_{cz} q_{tx} q_{cx} -q_{cz} q_{tx} q_{cx} q_{cz} -q_{cz}q_{tx} q_{cx} q_{tx} + q_{cz} q_{tx} q_{cx} q_{cz} q_{tx}) \\ &=& \frac 14 ( q_{cx} + q_{cx} q_{cz} + q_{cx} q_{tx} - q_{cx} q_{cz} q_{tx} \\ && \hspace{15pt}- q_{cx} q_{cz} - q_{cx} - q_{cx} q_{cz} q_{tx} + q_{cx} q_{tx} \\ && \hspace{15pt}+ q_{cx} q_{tx} + q_{cx} q_{cz}q_{tx} + q_{cx} - q_{cx} q_{cz} \\ && \hspace{15pt}+ q_{cx} q_{cz} q_{tx} + q_{cx} q_{tx} +q_{cx} q_{cz}- q_{cx}) \\ &=& q_{cx}q_{tx} \,, \end{eqnarray*} where the dependency on $t-1$ has again been discarded. The action of a gate on a descriptor can also be found directly from the matrix representation of the gate, without the detour through its functional representation and the gymnastics of the $\mathfrak{su}(2)^{\otimes n}$ algebra. Let's exemplify the method with the case of the $\text{Cnot}$, which here consists of calculating \begin{equation*} \text{Cnot}^\dagger \left \{ \begin{array}{c} \bol q_c(0) \\ \bol q_t(0) \end{array} \right \} \text{Cnot} \,.
\end{equation*} For the $q_{cx}$ element, this yields \begin{eqnarray*} \text{Cnot}^\dagger (\sigma_x \otimes \mathds{1}) \text{Cnot} &=& \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{bmatrix} \begin{bmatrix} 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0 \end{bmatrix} \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{bmatrix} \\ &=& \begin{bmatrix} 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0 \end{bmatrix}\\ &=& \sigma_x \otimes \sigma_x \\ &=& q_{cx}(0)q_{tx}(0)\,, \end{eqnarray*} consistent with the previous approach. But why does this work? In fact, what has been computed is \begin{equation*} q_{cx}(1) = \Uf^\dagger_{\text{Cnot}}(\bol q(0)) q_{cx} (0) \Uf_{\text{Cnot}}(\bol q(0)) = q_{cx}(0)q_{tx}(0)\,. \end{equation*} The leap to the general case, \textit{i.e.}, to have $t$ and $t-1$ instead of $1$ and $0$ in the above equation, follows from observing that the calculation \emph{could have been} done by expanding~$\Uf_{\text{Cnot}}(\bol q(0))$ as a polynomial in the components of~$\bol q(0)$, and then using the~$\mathfrak{su}(2)^{\otimes n}$ algebraic relations at time~$0$. But since the algebraic relations are preserved,~$\bol q(0)$ could then invariably have been changed to~$\bol q(t-1)$, to obtain that, generically, $q_{cx}(t) = q_{cx}(t-1)q_{tx}(t-1)$\,. \section{Superdense Coding, Revisited} In the Schr\"odinger picture, superdense coding~\cite{bennett1992communication} may appear to hinge on ``non-local'' properties of the wave-function. See Figure~\ref{densecoding}.
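Before turning to the protocol, the two routes just used for the $\text{Cnot}$, the functional representation with the descriptor algebra and the direct matrix conjugation, can be cross-checked numerically. The numpy sketch below (an illustration only) verifies the functional representation of the $\text{Cnot}$ and its action on the four descriptor components:

```python
import numpy as np

qx = np.array([[0, 1], [1, 0]], dtype=complex)
qz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Cnot with qubit 1 as control (c) and qubit 2 as target (t).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

qcx, qcz = np.kron(qx, I2), np.kron(qz, I2)
qtx, qtz = np.kron(I2, qx), np.kron(I2, qz)

# Functional representation evaluated at time 0.
assert np.allclose(CNOT, 0.5 * (np.eye(4) + qcz + qtx - qcz @ qtx))

def conj(U, O):
    return U.conj().T @ O @ U

# Action on descriptors: q_cx -> q_cx q_tx and q_tz -> q_cz q_tz,
# while q_cz and q_tx are left invariant.
assert np.allclose(conj(CNOT, qcx), qcx @ qtx)
assert np.allclose(conj(CNOT, qtz), qcz @ qtz)
assert np.allclose(conj(CNOT, qcz), qcz)
assert np.allclose(conj(CNOT, qtx), qtx)
```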
\begin{figure}[h] \centering \begin{tikzpicture} \node (ket1) at (0,1){$\ket 0$}; \node (ket2) at (0,0){$\ket 0$}; \node[draw,rectangle, minimum width=0.7cm, minimum height=0.7cm ] (H1) at (1.5,1) {$H$}; \node[draw, circle, minimum width=0.25cm] (T1) at (2.5,0) {}; \filldraw (2.5,1) circle (1.8pt); \draw (2.5,1) --(T1.south); \node[draw,rectangle, minimum width=0.7cm, minimum height=0.7cm] (H2) at (8.5,1){$H$}; \node[draw, circle, minimum width=0.25cm] (T2) at (7.5,0) {}; \filldraw (7.5,1) circle (1.8pt); \draw (7.5,1) --(T2.south); \node[draw,rectangle,minimum width=0.7cm, minimum height=0.7cm, right=2.1cm of H1] (xi) {$\sigma_x^j$}; \node[draw,rectangle,minimum width=0.7cm, minimum height=0.7cm, left=2.1cm of H2] (zj) {$\sigma_z^i$}; \node[above left = 0.7cm and 0.7cm of xi] (p1) {}; \node[below left = 0.1cm and 0.7cm of xi] (p2) {}; \node[below right = 0.1cm and 0.7cm of zj] (p3) {}; \node[above right = 0.7cm and 0.7cm of zj] (p4) {}; \draw[dashed] (p1) -- (p2); \draw[dashed] (p2) -- (p3); \draw[dashed] (p3) -- (p4); \node[below left = 0cm and 0.1cm of p4] (alice) {Alice}; \node[below right = 0cm and 0.1cm of p4] (bob) {Bob}; \node (fin1) at (9.7,1){}; \node (fin2) at (9.7,0){}; \draw (ket1) -- (H1); \draw (H1) -- (xi); \draw (xi) -- (zj); \draw (zj) -- (H2); \draw (H2) -- (fin1); \draw (ket2) -- (fin2); \node at (0.8,-0.2) {\scriptsize{0}}; \node at (2,-0.2) {\scriptsize{1}}; \node at (3.2,-0.2) {\scriptsize{2}}; \node at (5,-0.2) {\scriptsize{3}}; \node at (6.8,-0.2) {\scriptsize{4}}; \node at (8,-0.2) {\scriptsize{5}}; \node at (9.2,-0.2) {\scriptsize{6}}; \end{tikzpicture} \caption{Network representing the superdense coding protocol.} \label{densecoding} \end{figure} The Schr\"odinger state at time~$2$ is given by the Bell state $$\ket{\Phi^+} = \frac{\ket{00}+\ket{11}}{\sqrt 2}\,.$$ The local operations performed by Alice on her qubit shall evolve the system to one of the four Bell states in accordance with the bits~$i$ and~$j$ that she wants to transmit. 
The latter are then revealed by a Bell measurement. See Table~\ref{tablecoding}. \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \hline Bits $i,j$ & State at time $4$& State at time $6$\\ \hline \hline $0,0$ & $\ket{\Phi^+}= \frac{\ket{00}+\ket{11}}{\sqrt 2}$ \vphantom{$\frac{\int_A^A}{\int_A^A}$}& $\ket{00}= \ket{ij}$\\ \hline $0,1$ & $\ket{\Psi^+}= \frac{\ket{01}+\ket{10}}{\sqrt 2}$ \vphantom{$\frac{\int_A^A}{\int_A^A}$}& $\ket{01}= \ket{ij}$\\ \hline $1,0$ & $\ket{\Phi^-}=\frac{\ket{00}-\ket{11}}{\sqrt 2}$ \vphantom{$\frac{\int_A^A}{\int_A^A}$} & $\ket{10}= \ket{ij}$\\ \hline $1,1$ & $\ket{\Psi^-}=\frac{\ket{01}-\ket{10}}{\sqrt 2}$ \vphantom{$\frac{\int_A^A}{\int_A^A}$} & $\ket{11}= \ket{ij}$\\ \hline \end{tabular} \caption{The Schr\"odinger state in relation to the bits to transmit.} \label{tablecoding} \end{table} The protocol is now revisited in the language of descriptors. Denoting the descriptor at time~$0$ without any time labels, the computation can be done as follows. \begin{eqnarray*} \hphantom{\to} &\bol q (0)& \equiv \left \{ \hspace{-5pt} \begin{array}{rl} (q_{1x} ,& q_{1z}) \vspace{2pt}\\ (q_{2x} ,& q_{2z}) \end{array} \hspace{-5pt} \right \} \\ \stackrel{H}{\to}& \bol q (1) & = \left \{ \begin{array}{rl} \hspace{-5pt} (q_{1z},& q_{1x}) \vspace{2pt}\\ (q_{2x},& q_{2z}) \end{array} \hspace{-5pt} \right \} \\ \stackrel{\tiny{\text{Cnot}}}{\to} & \bol q (2) & = \left \{ \begin{array}{ll} \hspace{-5pt} (q_{1z} q_{2x},& q_{1x}) \vspace{2pt}\\ (q_{2x} ,& q_{1x} q_{2z}) \end{array} \hspace{-5pt} \right \} \\ \stackrel{\sigma_x^j}{\to} & \bol q (3) & = \left \{ \begin{array}{ll} \hspace{-5pt} ( q_{1z} q_{2x},& (-1)^j q_{1x}) \vspace{2pt}\\ (q_{2x} ,&q_{1x} q_{2z}) \end{array} \hspace{-5pt} \right \} \\ \stackrel{\sigma_z^i}{\to} & \bol q (4) & = \left \{ \begin{array}{ll} \hspace{-5pt} ((-1)^i q_{1z} q_{2x}, &(-1)^j q_{1x}) \vspace{2pt}\\ (q_{2x} , &q_{1x} q_{2z}) \end{array} \hspace{-5pt} \right \} \\ \stackrel{\text{Cnot}}{\to} & \bol q (5) & = \left 
\{ \begin{array}{ll} \hspace{-5pt} ((-1)^i q_{1z} ,& (-1)^j q_{1x}) \vspace{2pt}\\ (q_{2x} ,& (-1)^j q_{2z}) \end{array} \hspace{-5pt} \right \} \\ \stackrel{H}{\to} & \bol q (6) & = \left \{ \begin{array}{ll} \hspace{-5pt} ( (-1)^j q_{1x},& (-1)^iq_{1z} ) \vspace{2pt}\\ (q_{2x} ,& (-1)^j q_{2z}) \end{array} \hspace{-5pt} \right \} \,. \end{eqnarray*} Denoting by~$U^{(ij)}$ the evolution throughout the protocol, the probability of measuring an outcome ``$i'$'' on the first qubit is given by $$ \bra{00} U^{(ij)\dagger} (\ketbra{i'}{i'} \otimes \mathds{1}) U^{(ij)} \ket{00}\,. $$ In the Heisenberg picture, this computation is performed from the middle outwards. The initial observable is expressed in terms of descriptors as $$ \ketbra{i'}{i'} \otimes \mathds{1}= \frac{\mathds{1}^2 + (-1)^{i'} q_{1z}}{2} \,, $$ which evolves under $U^{(ij)}$ to \begin{equation*} \frac{\mathds{1}^2 + (-1)^{i'} q_{1z}(6)}{2} = \frac{\mathds{1}^2 + (-1)^{i'+i} q_{1z}}{2} \,. \end{equation*} The expectation value with the reference vector~$\ket{00}$ thus yields \begin{equation*} \frac{1 + (-1)^{i'+i}}{2} = \delta_{ii'} \,. \end{equation*} Similarly, the probability of measuring ``$j'$'' on qubit 2 is given by $\delta_{jj'}$, and hence the system shall deterministically return the value of the bits $i$ and~$j$. When revisited with the help of descriptors, the superdense coding of two bits into a single qubit appears quite natural: Alice's qubit's descriptor has precisely two slots in which bits can be encoded. When Alice transmits her qubit to Bob, measurements on that qubit alone could not leak any information about~$i$ or~$j$.
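The whole protocol can also be simulated by brute force, to confirm both claims at once: Bob's final measurement returns $(i,j)$ deterministically, while the transmitted qubit at time~$4$, taken alone, is maximally mixed whatever the bits are. A numpy sketch (an illustration only):

```python
import numpy as np

qx = np.array([[0, 1], [1, 0]], dtype=complex)
qz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = (qx + qz) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
ket00 = np.array([1, 0, 0, 0], dtype=complex)

for i in (0, 1):
    for j in (0, 1):
        # Alice encodes: sigma_x^j at time 3, then sigma_z^i at time 4.
        encode = np.linalg.matrix_power(qz, i) @ np.linalg.matrix_power(qx, j)
        psi4 = np.kron(encode, I2) @ CNOT @ np.kron(H, I2) @ ket00
        # No local leak: the reduced state of the transmitted qubit at
        # time 4 is maximally mixed, whatever (i, j) is.
        rho = np.outer(psi4, psi4.conj()).reshape(2, 2, 2, 2)
        assert np.allclose(np.trace(rho, axis1=1, axis2=3), I2 / 2)
        # Bob decodes at times 5 and 6: the outcome |ij> is deterministic.
        psi6 = np.kron(H, I2) @ CNOT @ psi4
        assert np.isclose(abs(psi6[2 * i + j]) ** 2, 1.0)
```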
In fact, any observable on Alice's qubit at time~$4$ is a linear combination of~$\mathds{1}$, $q_{1x}(4)$, $q_{1y}(4) = i q_{1x}(4)q_{1z}(4)$ and $q_{1z}(4)$, and since \begin{equation*} \bra{00} (\mathds{1}, q_{1x}(4), q_{1y}(4), q_{1z}(4)) \ket{00} = (1,0,0,0) \,, \end{equation*} the expectation value of any observable on that qubit alone is independent of~$i$ and~$j$. However, the information about the bits~$i$ and~$j$ is contained in the transmitted qubit at time~$4$, since not only does~$\bol q_1(4)$ depend on~$i$ and~$j$, but those bits eventually become accessible to measurement. This kind of information, present in a system but unretrievable by measurements on the system alone, has been called \emph{locally inaccessible} by Deutsch and Hayden. In step~$5$ of the protocol, Bob's qubit serves as a key as well as an extra capacity: It unlocks the bit~$i$ by getting rid of the obfuscating~$q_{2x}$ while copying the bit $j$ in its $z$ component. Finally, notice that between time $2$ and time $4$, only the descriptor of the first qubit is affected, which invalidates the idea that the superdense coding protocol relies on non-local properties of entanglement. Indeed, there is an important asymmetry to be underlined: The existence of a local way in which a phenomenon (or more generally, a theory) can be explained makes the phenomenon (or theory) local. But this doesn't hold for the attribute ``non-local''; otherwise, all phenomena and all theories would qualify as non-local by considering ad hoc non-local explanations. \section{Conclusions} The formalism of descriptors has been re-explained in this paper in what I hope is a more complete exposition. I re-showed that the Heisenberg picture entails a local and complete way of describing quantum systems, and I used the approach to revisit superdense coding.
By the way, in quantum field theory, locality in the sense advocated here as no-action-at-a-distance, as well as Lorentz invariance, are also recognized in the Heisenberg picture. The reader who is curious to unravel the mysteries of Bell inequality violations and of quantum teleportation is referred to~\S4 and~\S5 of the article by Deutsch and Hayden (\emph{op. cit.}). When I explained in terms of descriptors the teleportation process to one of its pioneers, Gilles Brassard told me enthusiastically that it was the most satisfactory elucidation he had ever heard of his own invention. The best explanations of quantum processes are unlocked by the Heisenberg picture, which is manifestly local, but remain hidden in the widespread Schr\"odinger picture. \section*{Acknowledgements} I am deeply grateful to Gilles Brassard for his benevolent support and his valuation of my research autonomy. I am also grateful to Charles H. Bennett, Xavier Coiteux-Roy, Samuel Ducharme, Samuel Kuypers, Chiara Marletto, Pierre McKenzie, Lodovico Scarpa, William Schober and Nicetu Tibau Vidal for fruitful discussions and comments on earlier versions of this paper. I also wish to thank Stefan Wolf as well as the Institute for Quantum Optics and Quantum Information of Vienna, in particular Marcus Huber's group, for their warm welcome and inspiring discussions. This work was supported in part by the Fonds de recherche du Qu\'ebec -- Nature et technologie (FRQNT), the Swiss National Science Foundation (SNF), the National Centre for Competence in Research ``Quantum Science and Technology'' (NCCR \emph{QSIT}), the Natural Sciences and Engineering Research Council of Canada (NSERC) as well as Qu\'ebec's Institut transdisciplinaire d'information quantique (INTRIQ). \bibliographystyle{unsrt}
\section{Constructing Pseudo Data for Pre-training}\label{section:pseudo data} We constructed the pseudo data for CNNDM with {\bf Lead}. We also conducted a simple data cleaning procedure on the self-supervised pre-training corpus. First, we cleaned away irrelevant information, such as media names, reporter names or dates, from the summaries. Second, for those summaries with fewer than 50 tokens, we iteratively appended the first sentence of the remaining text to the pseudo summary, until the length of the summary reached 70. This procedure was set up to prevent the target text from being too short to form a meaningful summary. Third, those samples in which the source document is shorter than its summary were filtered out. For XSum, we constructed the pseudo data for pre-training following {\bf GSG}. The top-1 most important sentence was selected as the pseudo summary. Then we filtered out those pseudo summaries that are not relevant enough to the pseudo passages. In particular, we leveraged the hand-written summaries in the few-shot dataset to determine the filtering threshold of the pseudo data. We calculated the ROUGE-1 F1 between each ground-truth summary and its corresponding passage, denoted $R_i$. Then we calculated the mean and variance of $R_i$: $\epsilon = \frac{1}{n}\sum_{i=1}^nR_i$, $\sigma^2 = \frac{1}{n}\sum_{i=1}^n(R_i-\epsilon)^2$, and $\epsilon-\sigma^2$ was used as a lower-bound threshold to filter out low-quality pseudo data. Those pseudo samples for which the ROUGE-1 F1 between the pseudo summary and the pseudo passage is lower than the threshold $\epsilon-\sigma^2$ were filtered out. Finally, we conducted pre-training on our soft prompts with the filtered pseudo data. Table~\ref{tab:pseduo-data-statistics} shows the statistics for the pre-training data corpus.
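The threshold-and-filter step can be sketched in a few lines of Python. The scorer below is a simplified unigram-overlap F1 standing in for a real ROUGE-1 implementation, and the helper names are illustrative assumptions, not our actual pipeline:

```python
from collections import Counter

def rouge1_f1(summary, passage):
    # Simplified unigram-overlap F1, standing in for a real ROUGE-1 scorer.
    s, p = Counter(summary.split()), Counter(passage.split())
    overlap = sum((s & p).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(s.values()), overlap / sum(p.values())
    return 2 * prec * rec / (prec + rec)

def filtering_threshold(gold_scores):
    # epsilon - sigma^2: mean minus variance of the ROUGE-1 F1 scores
    # computed on the hand-written few-shot summaries.
    n = len(gold_scores)
    eps = sum(gold_scores) / n
    var = sum((r - eps) ** 2 for r in gold_scores) / n
    return eps - var

def filter_pseudo_pairs(pairs, threshold):
    # Keep a (passage, pseudo_summary) pair only if the pseudo summary
    # is relevant enough to its passage.
    return [(p, s) for (p, s) in pairs if rouge1_f1(s, p) >= threshold]
```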
\begin{table}[t] \begin{tabular}{cclcl} \toprule \multirow{2}{*}{} & \multicolumn{2}{c}{CNNDM} & \multicolumn{2}{c}{XSum} \\ & \multicolumn{2}{c}{Pseudo Corpus} & \multicolumn{2}{c}{Pseudo Corpus} \\ \midrule \# of Original Passages & \multicolumn{2}{c}{287,113} & \multicolumn{2}{c}{204,017} \\ \# of Pre-training Data & \multicolumn{2}{c}{284,177} & \multicolumn{2}{c}{158,499} \\ \bottomrule \end{tabular} \caption{Pseudo-summarization corpus statistics. ``\# of Original Passages'' means the number of original passages in the training set; ``\# of Pre-training Data'' means the number of pseudo data after data cleaning.} \label{tab:pseduo-data-statistics} \end{table} \section{Implementation Details} We first split sentences with the Stanford CoreNLP toolkit~\cite{manning2014stanford}, and the input documents were truncated to 1024 BPE tokens. We adopted BART-base for all the experiments. Our implementation was based on the Hugging Face Transformers models~\cite{wolf2020transformers}. We used a mini-batch size of 8 with gradient accumulation over 10 iterations. We used the Adam optimizer with momentum $\beta_1$ = 0.9, $\beta_2$ = 0.998 and Noam decay. In the pre-training stage, the peak learning rate was 1e-3, and we set the warm-up ratio to 10\%. During fine-tuning, the peak learning rate was 3e-4, and we set the warm-up steps to 100 with 400 epochs. In the decoding stage, we used beam search with a beam size of 4. The decoding process does not stop until an end-of-sequence (EOS) token is emitted or the length of the generated summary reaches 256 tokens. All models were trained on 4 TITAN RTX GPUs. \section{The Performance of Pre-training on Prefix-Tuning} A crucial strategy for PSP is the pre-training of soft prompts. For a fair comparison, we performed prefix pre-training for Prefix-Tuning in the same way as for PSP. The results are shown in Table~\ref{tab:pretrain prefixs}.
We can find that the Prefix-Tuning model obtains slight improvements on the XSum dataset after adopting the pre-training strategy, and even underperforms the original one on the CNNDM dataset. It indicates that Prefix-Tuning shows limited potential for these tasks compared to our model. \begin{table}[t] \centering \resizebox{.9999\linewidth}{!}{ \begin{tabular}{lrrrrrr} \toprule \multirow{2}*{Method} & \multicolumn{3}{c}{CNNDM} & \multicolumn{3}{c}{XSum} \\ \cmidrule(r{4pt}){2-4} \cmidrule{5-7} ~ & R-1 & R-2 & R-L & R-1 & R-2 & R-L \\ \midrule Prefix-Tuning & 36.18 & 15.58 & 25.14 & 33.10 & 11.47 & 25.96 \\ Prefix-Tuning w/ Pre. & 36.01 & 14.96 & 24.85 & 33.49 & 11.69 & 26.12 \\ \bottomrule \end{tabular}} \caption{Test set results of Prefix-Tuning. ``w/ Pre.'' means that we pre-trained the prefix with pseudo data constructed as described in Section~\ref{section:pseudo data}. ``R-1'' is short for ``ROUGE-1'', ``R-2'' for ``ROUGE-2'', and ``R-L'' for ``ROUGE-L''.} \label{tab:pretrain prefixs} \end{table} \section{The Universality of GSG to Construct Pseudo-data} To demonstrate the universality of using the GSG method to construct pseudo data for prompt pre-training, we conducted a complementary experiment to verify its effect on CNNDM\footnote{We do not conduct ablation experiments on XSum, as there is no ``lead bias'' in this dataset, so it is inappropriate to take the first sentences of the passage as the pseudo summary.}. Specifically, we selected $m=3$ important sentences. Results in Table~\ref{tab:ablation_pseudo} indicate that the PSP model pre-trained by GSG is as effective as the original PSP$_{\tt Lead}$, showing that the GSG can be universally employed to pre-train soft prompts for abstractive summarization.
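For reference, the sentence-selection step of GSG can be sketched as follows. The scorer is a simplified unigram-overlap F1 standing in for a real ROUGE implementation, and the function names are illustrative assumptions:

```python
from collections import Counter

def unigram_f1(a, b):
    # Simplified unigram-overlap F1 between two token lists,
    # standing in for a real ROUGE-1 scorer.
    ca, cb = Counter(a), Counter(b)
    overlap = sum((ca & cb).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(ca.values()), overlap / sum(cb.values())
    return 2 * prec * rec / (prec + rec)

def gsg_select(sentences, m=1):
    # Score each sentence against the rest of the document and return
    # the top-m scorers, in document order, as the pseudo summary.
    scores = []
    for i, sent in enumerate(sentences):
        rest = [tok for j, s in enumerate(sentences) if j != i
                for tok in s.split()]
        scores.append((unigram_f1(sent.split(), rest), -i))
    top = sorted(scores, reverse=True)[:m]   # ties broken by position
    picked = sorted(-neg_i for _, neg_i in top)
    return [sentences[i] for i in picked]
```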
\begin{table}[t] \begin{center} \centering \resizebox{.9\linewidth}{!}{ \begin{tabular}{lrrr} \toprule & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule PSP$_{\tt Lead}$ (w/o inner-prompts) & 37.66 & 15.07 & 24.52 \\ PSP$_{\tt GSG}$ (w/o inner-prompts) & 37.04 & 15.04 & 25.20 \\ \bottomrule \end{tabular} } \end{center} \caption{Results on CNNDM by using the Lead and the GSG to construct pseudo-data for prompt pre-training.} \label{tab:ablation_pseudo} \end{table} \section{Human Evaluation} We conducted a human evaluation study. To this end, we randomly selected 20 instances from the test set of each dataset. Ten graduate students with high levels of fluency in English were asked to assess the generated summaries and golden summaries from independent perspectives: {\it Informativeness} (how much useful information does the summary provide?), {\it Relevance} (how well does the summary reflect the input document?), and {\it Fluency} (how grammatically correct are the summary sentences and how easy are they to read?). Scoring followed the Best-Worst Scaling method~\cite{kiritchenko2017best}. Participants were asked to select the best and worst summaries from each perspective. The scores were computed as the percentage of times a summary was chosen as the best minus the percentage of times it was selected as the worst, and range from -1 (worst) to 1 (best). Results are shown in Table~\ref{tab:human evaluation}. Qualitatively, we show several examples generated by different models and the reference in Table~\ref{tab:cases cnndm} and Table~\ref{tab:cases xsum}. Compared with all baselines, the summaries generated by PSP are always more fluent and relevant to the source document, consistent with the results of the human evaluation. Furthermore, we found that summaries generated by PSP and Prefix-Tuning are always similar in sentence patterns and expressions. However, Prefix-Tuning tends to generate texts shorter than PSP's, which often leads to a lack of information.
For example, Prefix-Tuning missed the point of the ``£15million deal'', while PSP encapsulates all the important information of the document. \begin{table}[t] \centering \resizebox{.98\linewidth}{!}{ \begin{tabular}{lrrrrrr} \toprule \multirow{2}*{Methods} & \multicolumn{3}{c}{CNNDM} & \multicolumn{3}{c}{XSum} \\ \cmidrule(r{4pt}){2-4} \cmidrule{5-7} ~ & IF & RL & FL & IF & RL & FL \\ \midrule PSP & {\bf 0.500}&{\bf 0.708}&{\bf 0.667} & {\bf 0.217}&{\bf 0.275}&{\bf 0.492} \\ Prompt Tuning & -0.317&-0.758&-0.975 & -0.336&-0.400&-0.867 \\ Prefix-Tuning & -0.233&0.067&0.158 & 0.017&-0.008&0.292 \\ Full-Model Tuning &0.067 &-0.025&0.075 & 0.117&0.092&0.075 \\ \bottomrule \end{tabular}} \caption{Human evaluation results. ``IF'' stands for Informativeness, ``RL'' for Relevance, and ``FL'' for Fluency. Best results are in bold.} \label{tab:human evaluation} \end{table} \section{Related Work} \paragraph{Few-Shot Abstractive Summarization} In practical application scenarios, the lack of manually constructed document-summary pairs makes data-driven neural models perform badly. Fabbri \textit{et al.}~\shortcite{fabbri2020improving} condense characteristics of the target dataset into Wikipedia data to construct pseudo-summaries. Bražinskas \textit{et al.}~\shortcite{bravzinskas2020few} introduce plug-in networks to reproduce characteristics of the target dataset with only a small set of labeled examples. Bai \textit{et al.}~\shortcite{bai-etal-2021-cross} conduct cross-lingual summarization in a low-resource setting. Yu \textit{et al.}~\shortcite{yu2021adaptsum} design a second phase of pre-training on large-scale generative models before fine-tuning. In this paper, we construct a pseudo-summary corpus with heuristic rules, providing a better parameter initialization for soft prompts under few-shot settings. More importantly, we design summarization-oriented soft prompts to help the model produce few-shot summaries. \paragraph{Prompt Learning} The emergence of GPT-3~\cite{brown2020language} introduces the concept of {\it ``prompting''}.
One only needs to assemble a task description and a few examples into a prompt, and then prepend it to the task input. With its large-scale parameters frozen, a pre-trained model can generate the output without any task-specific tuning. However, writing a task description is error-prone, and there is no unified, explicit, and effective way to build these hard prompts manually~\cite{logan2021cutting}. Hence, several works~\cite{gao2020making,jiang2020can,shin2020autoprompt} have been proposed to generate prompts automatically, but they all restrict prompts to discrete spaces. These discrete prompts are less expressive and sub-optimal. To overcome the shortcomings of hard prompts, Li and Liang~\shortcite{li-liang-2021-prefix} propose ``Prefix-Tuning''. This method only tunes prefix activations prepended to all transformer layers, and keeps the LM parameters frozen. To simplify further, Prompt Tuning~\cite{lester-etal-2021-power} only prepends tunable tokens to the encoder input, and keeps all other parameters frozen. Logan {\it et al.}~\shortcite{logan2021cutting} and Gu {\it et al.}~\shortcite{gu2021ppt} propose to use pre-training to boost the low performance of Prompt Tuning for few-shot learning. In this work, we fit the structure of Prompt Tuning to text generation models, proposing encoder prompts, decoder prompts, and inner prompts. We successfully apply prompt tuning methods to the few-shot abstractive summarization task. \begin{table*} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{lp{17cm}} \toprule Source & Manchester City are monitoring Juventus striker Alvaro Morata. The Spain international was a target for Arsenal a year ago when he left Real Madrid but opted for a move to Italy instead. Real have a buy-back option set at around £15million but are unlikely to take that up. Manchester City are keeping tabs on Juventus striker Alvaro Morata ahead of possible summer bid . Morata closes down Lazio goalkeeper Federico Marchetti during Juve's Serie A win on Saturday .
City also remain keen on Juventus midfielder Paul Pogba but face competition from Paris Saint Germain . Morata has been a success at Juventus but the Italians are always susceptible to offers at the right price and are making plans to bring in Edinson Cavani from Paris Saint Germain despite the French club's resistance. PSG also are one of the frontrunners to sign Juve midfielder Paul Pogba. City remain one of the clubs pushing to sign the France international too and Juventus have City's Edin Dzeko, Aleksandar Kolarov and Stevan Jovetic on their list of potential targets for the summer. \\ \midrule PSP & Manchester City are keeping tabs on Juventus striker Alvaro Morata . The Spain international was a target for Arsenal a year ago when he left Real Madrid but opted for a move to Italy instead . Juventus have a buy-back option set at around £15million but are unlikely to take that up . \\ Prefix-Tuning & Manchester City are keeping tabs on Juventus striker Alvaro Morata ahead of possible summer bid. Morata closes down Lazio goalkeeper Federico Marchetti during Juve's Serie A win on Saturday. \\ Prompt Tuning & LMorata is a target for Real Madrid, who have a buy - back option set at around £ 15million . He has been a success at Real Madrid but opted for a move to Italy instead . The Italian club are unlikely to take that up . . .. . . ." . .." . ." ." ." . ."." . .". . ."". . ." . . " . . ... . . [and] . . (He] had a great season at Real but who more had been about” . . , but who else had been close enough to sign him? . . He had her all after." . .) . . She didn't year when when when two over people A lot more time than two theret. . ." —LastJustInOther \\ Full-Model Tuning & NEW City are monitoring Juventus striker Alvaro Morata . The Spain international was a target for Arsenal a year ago when he left Real Madrid . 
Morata has a buy-back option set at around £15million but is unlikely to be offered by the French side .\\ Reference & Alvaro Morata had attracted interest from Arsenal before joining Juventus . Spain international made move to Italy in £15million deal from Real Madrid . Manchester City are monitoring the 22-year-old after impressive season .\\ \toprule Source & Nairobi, Kenya (CNN)University of Nairobi students were terrified Sunday morning when they heard explosions -- caused by a faulty electrical cable -- and believed it was a terror attack, the school said. Students on the Kikuyu campus stampeded down the halls of the Kimberly dormitory, and some jumped from its fifth floor, the university said. Hundreds were injured and were taken to hospitals. One person died, according to the school. The confusion and panic came less than two weeks after Al-Shabaab slaughtered 147 people at a college in Garissa, Kenya. Kenyan teachers and students have said they fear being targeted by the Somalia-based terrorists. On Sunday, as many as 108 students from the University of Nairobi were admitted to Kenyatta National Hospital. Among them, at least 63 students have been discharged, and at least four are slated for surgery, the school said. Almost all of the 54 students being treated at PCEA Kikuyu Hospital have been released, the university said. Kenya Power authorities and its CEO are at the school and looking into the electrical issue. Normal power supply will resume after repairs, the university said. "As we mourn the unfortunate loss of the departed student, we are also praying for the quick recovery of those who were injured," said Vice Chancellor Peter M.F. Mbithi in a statement. He called on the students, staff and public to remain calm. CNN's Lillian Leposo reported from Nairobi and Ashley Fantz wrote this story in Atlanta. 
\\ \midrule PSP & University of Nairobi students were terrified Sunday morning when they heard explosions -- caused by a faulty electrical cable -- and believed it was a terror attack . Hundreds were injured and were taken to hospitals . One person died, according to the school . The confusion and panic came less than two weeks after Al- Shabaab slaughtered 147 people at a college in Garissa .\\ Prefix-Tuning & University of Nairobi students were terrified Sunday morning when they heard explosions -- caused by a faulty electrical cable -- and believed it was a terror attack.\\ Prompt Tuning & L Hundreds were injured and taken to hospitals . Hundreds of students were evacuated from the dormitory . The school said they feared being targeted by al-Shabaab . Hundreds were evacuated and treated in hospitals . The university said it is investigating the cause of the explosion . . . The explosion was caused by a faulty electrical cable. . .. . ." . . ." ." ." . ."." . .." . .""People were terrified," said the school's vice chancellor . "People were screaming, but who more had been were about” . "We had no idea what was going on but who else had been about to blow her all after." ... .. ." .."." ..""They were terrified at the time than two overtakes" —LastJustIn3\\ Full-Model Tuning & NEW students panicked when they heard explosions -- caused by a faulty electrical cable -- and believed it was a terror attack, university says . As many as 108 students from University of Nairobi were admitted to Kenyatta National Hospital . One person died, according to the school .\\ Reference & Students stampeded; some jumped from a fifth story at a dorm; one student died, school officials say . The blasts were caused by faulty electrical cable, and Kenya Power is at the school . 
The panic came less than two weeks after terrorists attacked Kenya's Garissa University .\\\bottomrule \end{tabular}} \caption{Qualitative examples of CNNDM.} \label{tab:cases cnndm} \end{table*} \begin{table*} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{lp{17cm}} \toprule Source & Brunon Kwiecien, 48, was convicted of planning a terrorist attack, illegal weapons possession and inciting two students to carry out an attack.He suggested he had been manipulated by Polish intelligence agents.Kwiecien was said to be fascinated with Norwegian mass killer Anders Behring Breivik.Right-wing extremist Breivik killed 77 people in a bombing and shooting rampage in Norway in July 2011.Kwiecien, a former professor at Krakow's University of Agriculture, was arrested in 2012.Investigators believe he wanted to target parliament with four tonnes of explosives while then-President Bronislaw Komorowski and former Prime Minister Donald Tusk were due to attend, the trial heard."If Brunon Kwiecien hadn't been stopped, we would be talking amid the ruins of the state today," said judge Aleksandra Almert, according to the AFP agency.While admitting he planned to carry out the attack, he also said he was subject to "provocation" by the intelligence services.Kwiecien is the first Pole to be accused of terrorism, Polish media reported. He has no known links to established extremist groups. \\ \midrule PSP & A Pole has been convicted of planning a terrorist attack in Poland, a court heard. \\ Prefix-Tuning & A Pole has been convicted of planning to carry out a terrorist attack in Poland. \\ Prompt Tuning & AA Polish man has been convicted of planning a terrorist attack in the Polish capital, Warsaw, on Thursday. 
\\ Full-Model Tuning & A Pole has been found guilty of planning a terrorist attack in the Polish state of Krakow.\\ Reference & A Polish university lecturer has been sentenced to 13 years in jail for plotting to ram a car packed with explosives into parliament.\\ \toprule Source & Schmidt was sent off by the referee for insulting Hoffenheim's Julian Nagelsmann in Saturday's 3-0 home loss."That was nothing, what sort of a nutcase are you? Just shut your mouth," Schmidt shouted after going 2-0 down.The 49-year-old has been banned for two games and handed a 15,000 euros (£13,373) fine.The German was sanctioned after triggering a suspended sentence from February this year.He had been banned for three games, with a further two in the event of a repeat offence before June 2017, for refusing a referee's order to leave the sidelines during a 1-0 defeat to Borussia Dortmund.Schmidt will be unable to have any contact with the team for half an hour before, during and after Tuesday's German Cup second-round match against Lotte and Saturday's league match against Wolfsburg.Leverkusen's director of sport Rudi Voller has sought a meeting with the head of the disciplinary committee. 
\\ \midrule PSP & Leverkusen defender Christian Schmidt has been banned for two games for insulting the referee.\\ Prefix-Tuning & Leverkusen midfielder Matthias Schmidt has been banned for two games after refusing to leave the sidelines during a match against Wolfsburg.\\ Prompt Tuning & ALeverkusen midfielder Christian Schmidt has been banned for two games for insulting the referee in a game against Hoffenheim on Saturday..'\\ Full-Model Tuning & Aeverkusen manager Gerhard Schmidt has been banned for two games for insulting the head of the German national team.\\ Reference & Bayer Leverkusen head coach Roger Schmidt has been banned and fined for calling an opposing manager "a nutcase" during a Bundesliga game.\\\bottomrule \end{tabular}} \caption{Qualitative examples of XSum.} \label{tab:cases xsum} \end{table*} \bibliographystyle{named} \section{Introduction} Given the high labor costs of obtaining quality abstractive summaries, few-shot abstractive summarization is in high demand and highly challenging. A widely accepted paradigm for almost all NLP tasks is to fine-tune the entire set of parameters of a large pre-trained language model to suit the target task~\cite{liu2019text,liu2020multilingual}. However, fine-tuning with few-shot examples usually leads to disappointing results, especially on generation tasks like abstractive summarization~\cite{fabbri2020improving,yu2021adaptsum}. The likely outcome is an overfit model. Further, for every specific task, a large number of pre-trained parameters need to be updated and stored, which is inefficient. Pre-trained language models are few-shot learners; e.g., GPT-3~\cite{brown2020language} can surprisingly perform generation tasks from a few examples without any further gradient updates. Although it lacks a rigorous theoretical proof, prompt learning inherits this few-shot property \cite{li-liang-2021-prefix,schick2020few,jin2021good,liu2021gpt}.
Commonly, this type of learning is considered to retrieve relevant knowledge from frozen language models, only tuning continuous prompts to quickly adapt to new tasks with very few examples. More recently, Prompt Tuning~\cite{lester-etal-2021-power} has received much attention. With large frozen language models (say, $>$10 billion parameters), Prompt Tuning simply adds a tunable soft prompt to the input of the encoder, achieving results that are comparable to full-model tuning. Yet our empirical results in Section \ref{section:pilot} demonstrate that Prompt Tuning for abstractive summarization yields simply abysmal performance. Prefix-Tuning~\cite{li-liang-2021-prefix} extends the use of prompt learning to the natural language generation area. With this technique, continuous prompts are applied to every layer of the pre-trained model, and the method even shows gains over fine-tuning in few-shot generation tasks. Yet the training process is unstable, and the per-layer updates add to the memory and training costs.\footnote{See more related work in Section F of the supplementary file.} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{intro.png} \caption{The comparison between PSP and previous methods. ``E'' and ``D'' represent the encoder and the decoder, respectively.} \label{fig:intro} \end{figure} Given the shortcomings of these two methods, we have developed a soft prompt tuning method that is specifically designed for summarization. The structure is given in Figure \ref{fig:intro}. The method is capable of performing a few-shot language generation task (i.e., abstractive summarization) with a small number of trainable parameters. Prompt tokens are added before the decoder input tokens to guide the generation process toward the target summary. Moreover, we have designed three kinds of inner prompts (interval, sequential, and fixed-length), one of which is placed among the source input tokens.
The aim is to capture the structure of the source document and aid in understanding its semantics, so as to better prompt the model to generate document-related content. Each kind of inner prompt focuses on different semantic units (e.g., phrases, sentences, etc.), differentiating important units from non-informative ones. To bolster the summarization ability of the model and help the prompts understand the documents, prompt pre-training on self-supervised pseudo-data is performed before the tuning process. As a last step, all the prompts are fine-tuned with few-shot training examples. Experiments conducted on two commonly used datasets, CNNDM~\cite{see2017get} and XSum~\cite{xsum-emnlp}, demonstrate that our method outperforms full-model tuning under few-shot settings with only 0.1\% of the parameters. It also surpasses naive Prompt Tuning by a large margin. Our model also yields performance competitive with Prefix-Tuning using only 3\% of its trainable parameters. A detailed analysis shows that the designed prompt pre-training phase and the inner prompts are effective for few-shot text summarization. Thus, the major contributions of this work include: 1) A novel soft prompt architecture for few-shot abstractive summarization. With the well-designed prompts in the embedding layer, our model fulfills the task effectively and efficiently; 2) A prompt pre-training strategy that benefits soft-prompt models for few-shot summarization and shows excellent zero-shot capabilities; 3) Experiments that investigate the effect of different prompts by probing the attention weights. The results show our model is able to: extract knowledge from the encoder language model; understand the discourse in the document; and guide the decoder language model to generate fluent summaries.
\section{Pilot Experiments}\label{section:pilot} In a pilot study, we experimented with Prompt Tuning under 300-shot settings to find reasonable clues as to how to design summary-prompts for the task. Our findings follow. Consider an encoder-decoder language model $p_{\theta}(y|x)$ based on the Transformer architecture~\cite{Vaswani2017AttentionIA} (e.g., BART~\cite{lewis2020bart}) and parameterized by $\theta$. To conduct a few-shot summarization task, we have a small set of training pairs of a document $X = \{x_1, x_2, \dots, x_{|X|}\}$ and a corresponding summary $Y= \{y_1, y_2, \dots, y_{|Y|}\}$. Specifically, we divided $X$ into different subsets with \textbf{sentences}\footnote{Note that, throughout this work, a ``sentence'' can be an arbitrary span of contiguous text (e.g., a fixed length of 10 tokens), or an actual linguistic sentence.} as our unit, $X = \{x^1_1, \dots x^i_j,\dots, x^n_m \}$, where $x^i_j$ denotes the $j_{\rm th}$ token in the $i_{\rm th}$ sentence. First, naive Prompt Tuning is applied by concatenating a series of prompt tokens ${P}_{en}$, parameterized by $\theta_{p_{en}}$, to the encoder input $X_{en} = \{e^1_1, \dots, e^i_j, \dots e^n_m\}$, where $e$ represents the embedding of each token. The gradients are backpropagated through the prompts while the weights $\theta$ of the language model are frozen. In this way, the model maximizes the likelihood of the output $Y$: \begin{equation} \small p_{\theta;\theta_{p_{en}}}(Y|[P_{en};X_{en}]) \end{equation}% The result of this naive tuning is shown on the first line of Table \ref{tab:pt variants}, where we see it severely underperforms full-model tuning. In further experiments, we added a series of prompts $P_{de}$ to the decoder inputs $X_{de}$, following the generation $p_{\theta;\theta_{p_{de}}}(Y|X_{en},P_{de})$. Here, we found the results to be even worse than before.
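As a concrete illustration of the mechanics above, the following is a minimal numpy sketch (not our actual implementation; all sizes and variable names are illustrative) of how naive Prompt Tuning prepends a trainable soft prompt $P_{en}$ to the frozen token embeddings $X_{en}$:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, D_MODEL, N_PROMPT = 100, 16, 5  # toy sizes; we use 100 prompt tokens

# Frozen token-embedding table of the pre-trained LM (stand-in values).
embed_table = rng.normal(size=(VOCAB, D_MODEL))

# Trainable soft prompt P_en, initialized from sampled vocabulary embeddings
# (following Lester et al., 2021); these are the ONLY parameters updated.
P_en = embed_table[rng.integers(0, VOCAB, size=N_PROMPT)].copy()

def encoder_input(token_ids):
    """Build [P_en; X_en]: prompt embeddings prepended to token embeddings."""
    X_en = embed_table[token_ids]  # frozen lookup, no gradient flows here
    return np.concatenate([P_en, X_en], axis=0)

doc_ids = [7, 42, 13]
print(encoder_input(doc_ids).shape)  # (N_PROMPT + len(doc_ids), D_MODEL)
```

During training, only `P_en` would receive gradient updates; the embedding table and the rest of the language model stay fixed.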
\paragraph{Necessary Prompts for Generation} For generation-based tasks, prompts in both the encoder and the decoder are equally important. Therefore, our model employs a combination of the two series of prompts mentioned above, and generates $Y$ conditioning on $X_{en}$, $P_{en}$ and $P_{de}$: \begin{equation} \small p_{\theta;\theta_{p_{en}};\theta_{p_{de}}}(Y|[P_{en};X_{en}],P_{de}) \end{equation} \begin{table} \centering \resizebox{.8\linewidth}{!}{ \begin{tabular}{lccc} \toprule {Model} & {ROUGE-1} & {ROUGE-2} & {ROUGE-L} \\ \midrule Prompt in encoder & 32.87 & 11.92 & 21.73 \\ Prompt in decoder & 26.77 & 11.73 & 16.71 \\ Prompt in en.\&de. & 36.37 & 14.41 &\textbf{24.46} \\ Full-Model Tuning & \textbf{37.01} & \textbf{14.49} & 23.91 \\ \bottomrule \end{tabular} } \caption{Results of BART-base on the CNN/DailyMail dataset. Best results are bold. } \label{tab:pt variants} \end{table} The result on the third line of Table~\ref{tab:pt variants} verifies our hypothesis. Prompts across the encoder and decoder even achieve results comparable with full-model tuning under few-shot settings. This verifies two things for us. First, prepending simple prompts to only the input embedding layer is effective and efficient for few-shot abstractive summarization. Second, prompts across the encoder and decoder are both necessary for generation tasks. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{nopretrain_noinner_attn.png} \caption{Visualization of the encoder-decoder attention weights. The x-axis is the encoder input, including the prompts across the encoder $P_{en}$ and the source document $X$. The y-axis is the decoder input, including the prompts across the decoder $P_{de}$ and the target summary $Y$. The area in the red box represents the attention of $P_{de}$ assigned to $P_{en}$. The area in the yellow box represents the attention of $Y$ assigned to $X$.
Darker colors indicate stronger associations between tokens.} \label{fig:nopretrain_noinner_attn} \end{figure} \paragraph{Lack of Attention on the Document} We further explored the encoder-decoder attention to investigate the effect of the prompts and of freezing the language model. From Figure \ref{fig:nopretrain_noinner_attn}, we find that the generated output mainly attends to the soft prompts, with little attention given to the document itself. This outcome is detrimental to summarization, which requires understanding the semantics and inner discourse structure of documents. Without associations between target summaries and source documents, it is impossible to obtain high-quality summaries using the current prompt architectures. From Figure \ref{fig:nopretrain_noinner_attn}, we can also observe that prompts in the encoder and those in the decoder are consistently and directly associated with each other. We speculate that the mechanism is as follows: encoder prompts retrieve relevant knowledge from the frozen encoder language model as a document representation, and decoder prompts copy the encoder's behavior, guiding the decoder language model to generate text. \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{architecture.png} \caption{Architecture and training scheme of PSP. Squares in blue and red indicate frozen and tuned parameters, respectively. } \label{fig:arch} \end{figure} \section{Method}\label{section:method} In light of our findings about the current architectures, we developed a new architecture of pre-trained soft prompts for few-shot abstractive summarization, called PSP. The framework includes continuous prompts across the encoder and decoder inputs, as well as inner-prompts that capture the dependencies between documents and target summaries. To better understand a given document, we add a prompt pre-training process before few-shot tuning. This also provides a good initialization for the prompts.
The overall architecture and training scheme are illustrated in Figure~\ref{fig:arch}. \subsection{Encoder-Decoder Basic Prompts} As mentioned in Section~\ref{section:pilot}, in the training phase of current architectures, $P_{en}$ is responsible for extracting knowledge from the encoder's frozen language model as a document representation. Meanwhile, $P_{de}$ mostly copies the behavior of $P_{en}$ and guides the frozen decoder's language model to generate fluent text as a summary. To strengthen the model’s ability to understand a document, the dependencies and attentions given to the source document need to be embodied in the prompt architecture. \subsection{Inner-Prompts for Document Understanding} To achieve our goal, we propose the notion of adding inner-prompts within the source document, denoted as $P_{in}= \{p_{in}^{0}, p_{in}^{1}, \dots, p_{in}^{n}\}$ with the parameters $\theta_{P_{in}}$ to be updated. Each $p_{in}^i$ corresponds to a single sentence. These inner-prompts are added to the corresponding token embedding, which gives rise to a new $X'_{in}$: \begin{equation} \small X'_{in} = \{e^1_1 + p^1_{in}, e^1_2 + p^1_{in}, \dots, e^i_j + p^{i}_{in}, \dots, e^n_m + p^n_{in} \} \end{equation} We believe that by prompting different semantic units (e.g., sentences, phrases, etc.), more attention can be given to understanding the document’s discourse. Furthermore, the inner-prompts help the model to quickly interpret the document by strengthening the associations between outputs and documents. What follows are three different strategies for incorporating the three different inner-prompts. Note that there is more discussion on this point in Section \ref{sec:analysis on inner}. \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{inner-2.png} \caption{Different inner prompts for one example source document. 
Different colors indicate different inner prompt embeddings.} \label{fig:inner} \end{figure} \paragraph{Interval} Following \cite{liu2019text}, the interval strategy uses two inner-prompt tokens; one of the two is assigned to each sentence $sent_i$ depending on whether $i$ is odd or even. Specifically, \begin{equation} \small P_{in}= \{p_{in}^{0}, p_{in}^{1}, p_{in}^{0}, \dots, p_{in}^{n \bmod 2}\} \end{equation} In this way, the model can identify individual sentences and encode the document at the sentence level. \paragraph{Sequential} To highlight the complex discourse structure of documents, sentence positions need to be considered. Therefore, a distinct token is assigned to each sentence according to its position, formulated as: \begin{equation} \small P_{in}= \{p_{in}^{0}, p_{in}^{1}, \dots, p_{in}^{n}\} \end{equation} \paragraph{Fixed-length} To discover more fine-grained semantic units, a text span of fixed length $k$ is treated as a new ``sentence'', and a corresponding sequential token is assigned to it. Thus, prompts are assigned to the newly divided sentences [$sent_{1}$, $sent_{2}$, ..., $sent_{n}$], as $\{p_{in}^{0}, p_{in}^{1}, \dots, p_{in}^{n}\}$. Figure~\ref{fig:inner} illustrates examples of the above strategies.
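The three assignment schemes can be sketched as index functions that map each sentence (or token) position to an inner-prompt embedding index; the embedding at that index is then added to every token embedding of the corresponding unit. This is a toy sketch, and the function names are ours, not from any released code:

```python
def interval_ids(n_sents):
    # Interval: alternate between two inner-prompt embeddings (i mod 2).
    return [i % 2 for i in range(n_sents)]

def sequential_ids(n_sents, max_n):
    # Sequential: one embedding per sentence position; sentences beyond the
    # max_n-th all share one extra embedding (see the Setup section).
    return [min(i, max_n) for i in range(n_sents)]

def fixed_length_ids(n_tokens, k, max_n):
    # Fixed-length: re-segment the document into spans of k tokens, then
    # index the spans sequentially.
    return [min(t // k, max_n) for t in range(n_tokens)]

print(interval_ids(5))            # [0, 1, 0, 1, 0]
print(sequential_ids(5, 3))       # [0, 1, 2, 3, 3]
print(fixed_length_ids(7, 3, 9))  # [0, 0, 0, 1, 1, 1, 2]
```

For example, under the sequential scheme, every token of sentence $i$ receives $p_{in}^{i}$ added to its embedding, as in the definition of $X'_{in}$ above.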
\begin{table*}[t] \begin{center} \resizebox{.8\linewidth}{!}{ \begin{tabular}{llrrrrrrrr} \toprule \multicolumn{2}{c}{} & \multicolumn{4}{c}{CNNDM} & \multicolumn{4}{c}{XSum} \\ \multicolumn{2}{l}{Model} & ROUGE-1 & ROUGE-2 & ROUGE-L & PPL & ROUGE-1 & ROUGE-2 & ROUGE-L & PPL \\ \midrule \multicolumn{2}{l}{Prompt Tuning} & $30.58_{2.07}$ & {$11.93_{0.46}$} & $21.73_{1.86}$ & $141.56$ & $29.63_{1.21}$ & $8.84_{0.55}$ & $22.00_{1.23}$ & $101.96$\\ \multicolumn{2}{l}{Prefix-Tuning} & $37.12_{0.15}$ & ${\bf 16.59_{0.09}}$ & ${\bf 26.28_{0.06}}$ & $52.59$ & $32.18_{0.16}$ & $11.13_{0.08}$ & $25.50_{0.14}$ & $39.58$ \\ \multicolumn{2}{l}{Full-Model Tuning} & $38.03_{0.56}$ & $16.01_{0.79}$ & $25.21_{0.70}$ & $65.73$ & $32.85_{0.25}$ & $10.52_{0.24}$ & $25.15_{0.29}$ & $51.63$ \\ \midrule \multicolumn{2}{l}{PSP$_{\tt{Interval}}$} & $37.82_{0.29}$ & {$15.40_{0.31}$} & $25.10_{0.36}$ &${\bf 45.54}$ & $\underline{\bf 32.86_{0.21}}$ & $\underline{\bf 11.27_{0.08}}$ & $\underline{\bf 25.64_{0.11}}$ & $44.25$\\ \multicolumn{2}{l}{PSP$_{\tt{Sequential}}$} & $37.82_{0.39}$ & {$15.58_{0.32}$} & $25.16_{0.32}$ & $48.10$ & $32.57_{0.11}$ & $\underline{10.97_{0.07}}$ & $\underline{25.39_{0.05}}$ &${\bf 35.70}$\\ \multicolumn{2}{l}{PSP$_{\tt{Fixed-k}}$} & $\underline{\bf 38.31_{0.15}}$ & $15.94_{0.21}$ & $\underline{25.41_{0.25}}$ & $58.50$ & $32.81_{0.10}$ & $\underline{11.15_{0.10}}$ & $\underline{25.48_{0.13}}$ & $52.10$ \\ \bottomrule \end{tabular} } \end{center} \caption{Results on CNNDM and XSum Datasets. The experiments are conducted with 300 training samples and 300 validation samples on each dataset. We select $k$ = 10 for {PSP$_{\tt{Fixed-k}}$}. We report the mean value and the standard deviation over 5 sampled datasets. ``PPL" represents the perplexity of generated summaries. A low perplexity indicates the summaries are fluent. Best results are bold. Underline means our models outperform Full-model tuning. 
} \label{tab:automatic evaluation} \end{table*} \begin{table}[t] \centering \resizebox{.8\linewidth}{!}{ \begin{tabular}{lrrrrrr} \toprule \multirow{2}*{Datasets} & \multicolumn{3}{c}{CNNDM} & \multicolumn{3}{c}{XSum} \\ \cmidrule(r{4pt}){2-4} \cmidrule{5-7} ~ & train & dev & test & train & dev & test \\ \midrule Avg.Passage & 626.45&647.64&717.92 & 396.53&387.62&380.55 \\ Avg.Sum & 45.91&47.97&58.62 & 22.90&23.29&22.11 \\ Labeled data & 300&300&11,490 & 300&300&11,333 \\ \bottomrule \end{tabular}} \caption{Datasets statistics. ``Avg.Passage'' means the average length of passages and ``Avg.Sum'' means the average length of summaries.} \label{tab:Datasets statistics} \end{table} \subsection{Self-supervised Prompt Pre-training} To improve the ability of the prompts to understand documents and to help the model adapt to summarization tasks, soft prompts are further pre-trained on the corpus using summarization-oriented self-supervised objectives. Doing this also means that the prompts are well initialized for few-shot tuning. We tested two strategies for constructing the self-supervised data. Each strategy was designed to suit a particular type of writing bias in the document. These are ``lead'' and ``gap sentences generation''. \paragraph{Lead} Lead bias is common in news articles, which usually follow an inverted pyramid structure where the first few sentences contain the most salient information~\cite{see2017get,yang2020ted}. With this type of bias, we select the first three sentences as the pseudo summary and treat the rest of the document as the source text. Through this pre-training process, the model learns to infer the salient information from the remaining text. \paragraph{GSG} Gap sentences generation applies to all documents that do not follow the lead bias structure (e.g., XSum~\cite{xsum-emnlp}).
The strategy used here follows Zhang~\textit{et al.}~\shortcite{zhang2020pegasus}, where we used ROUGE-1 F1~\cite{lin2004rouge} between each sentence $x_{i}$ and the rest of the document as a proxy for the principal score, $s_{i}={\rm rouge}(x_{i},D \setminus \{x_{i}\}),\ \forall{i}$. The top-$m$ most important sentences are selected according to $s_i$ and removed from the document. These $m$ sentences are then concatenated in the same order as in the original text to form a pseudo summary. The remainder of the text is treated as a pseudo document. With the constructed data, our designed prompts can be pre-trained and further tuned with few-shot examples. \subsection{Training Objective} The model is trained with maximum likelihood estimation (MLE). Given a ground-truth summary $Y = [y_{1}, y_{2}, ..., y_{|Y|}]$ for an input passage $X$, the objective is to minimize the negative log-likelihood of the target word sequence: \begin{equation} \small \mathcal{L} = -\sum_{t=1}^{|Y|}\log p_{\theta^{*}}(y_{t}|[P_{en};X'_{in}],[P_{de};y_{1},...y_{t-1}]) \end{equation} \begin{equation} \small \theta^{*} = \{\theta;\theta_{p_{en}};\theta_{p_{de}};\theta_{p_{in}}\} \end{equation} Note that only the prepended-prompt parameters ($\theta_{p_{en}}$, $\theta_{p_{de}}$) and the inner-prompt parameters ($\theta_{p_{in}}$) are optimized; the language model parameters ($\theta$) remain frozen. \section{Experiments} \paragraph{Datasets} We experimented with the CNN/DailyMail (CNNDM) dataset~\cite{hermann2015teaching} and the XSum dataset~\cite{xsum-emnlp}. We chose these datasets because they differ in abstraction level and text length, which helps to show the generalization ability of our method. We constructed the self-supervised pre-training data for CNNDM with Lead, and for XSum with GSG. We show details in Section A of the supplementary file. Our few-shot training set $D_{train}$ contained 300 document-summary pairs randomly sampled from the original training data.
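As a concrete illustration of the Lead and GSG pseudo-data constructions described above, the following is a self-contained sketch. Here `rouge1_f1` is a simple unigram-overlap F1 used as a stand-in for a full ROUGE-1 F1 implementation, and the segmentation is simplified to pre-split sentences:

```python
from collections import Counter

def rouge1_f1(sent, rest):
    # Unigram-overlap F1: a stand-in for a proper ROUGE-1 F1 implementation.
    a, b = Counter(sent.lower().split()), Counter(rest.lower().split())
    overlap = sum((a & b).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(a.values()), overlap / sum(b.values())
    return 2 * p * r / (p + r)

def lead_pseudo(sents):
    # Lead: first three sentences -> pseudo summary, remainder -> pseudo document.
    return " ".join(sents[:3]), " ".join(sents[3:])

def gsg_pseudo(sents, m):
    # GSG: score each sentence against the rest of the document, remove the
    # top-m as the pseudo summary (kept in original order).
    scores = [rouge1_f1(s, " ".join(sents[:i] + sents[i + 1:]))
              for i, s in enumerate(sents)]
    top = sorted(sorted(range(len(sents)), key=lambda i: -scores[i])[:m])
    summary = " ".join(sents[i] for i in top)
    document = " ".join(s for i, s in enumerate(sents) if i not in top)
    return summary, document
```

Applied to a corpus, either function yields (pseudo document, pseudo summary) pairs on which the soft prompts can be pre-trained before few-shot tuning.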
To tune the hyper-parameters and select the best checkpoint, we composed a validation set $D_{dev}$ from the original validation data. Here, we were careful to ensure that $\lvert D_{train} \rvert = \lvert D_{dev} \rvert$ so that it fit into a true few-shot learning setting, following Perez \textit{et al.}~\shortcite{perez2021true}. Since few-shot learning may have high variance, we sampled the examples with 5 different random seeds. We used the original test set to report our results, including the mean value and the standard deviation. Table \ref{tab:Datasets statistics} shows the statistics of the pre-processed corpus. \paragraph{Setup} The base version of BART was used in our work. Following Lester~\textit{et al.}~\shortcite{lester-etal-2021-power}, we used 100 prompt tokens for both the encoder inputs and the decoder inputs. These prompts were randomly initialized from vocabulary embeddings. The sequential and fixed-length inner-prompts require a maximum number of tokens. Hence, we counted the number of sentences in each document and divided the results into two groups: the 85\% with the fewest sentences (Group A) and the 15\% with the most sentences (Group B)\footnote{We made our division at 85\% to ensure all embeddings of inner-prompt tokens could be fully trained, because sentences after the $n$-th only exist in 15\% of the data.}. We then set the number of prompts to the maximum number of sentences in Group A plus one, i.e., $n + 1$. For CNNDM, that number was 61 and, for XSum, it was 33. In this way, one inner-prompt token was assigned to each sentence up to the $n$-th. For the excessively long documents in Group B, the text after $n$ sentences was assigned an $(n+1)$-th token. Further, we drew from a normal distribution $\mathcal{N}(0,0.05)$ to initialize the inner-prompt embeddings\footnote{More information about implementation details is shown in Section B of the supplementary file.}.
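The way the number of inner-prompt embeddings is derived from the corpus can be sketched as follows. This is a simplified reading of the 85\%-coverage rule above; the exact cutoff convention used here is illustrative:

```python
def num_inner_prompts(sent_counts, coverage=0.85):
    """Number of inner-prompt embeddings: the maximum sentence count n among
    the `coverage` fraction of shortest documents, plus one shared token
    (the (n+1)-th) for the tail of longer documents."""
    counts = sorted(sent_counts)
    cutoff = max(1, int(len(counts) * coverage))
    n = counts[cutoff - 1]
    return n + 1

# Toy corpus: documents with 1..100 sentences; the 85% cutoff gives n = 85.
print(num_inner_prompts(list(range(1, 101))))  # 86
```

On the real corpora this procedure yields 61 embeddings for CNNDM and 33 for XSum, as reported above.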
Taking CNNDM as an example, all the tunable parameters that need to be stored amount to only $2\times 10^5$, compared with the $1.4\times10^8$ parameters of full-model tuning. That is, only around 0.1\% of the parameters need to be tuned and stored for each dataset. \paragraph{Evaluation Metrics} We adopted ROUGE~\cite{lin2004rouge} to measure the quality of the summaries produced in our experiments. The F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L between the ground-truth and the generated summaries are each reported. \paragraph{Baseline Models} We compared PSP to: {\bf Prompt Tuning}~\cite{lester-etal-2021-power}, which only concatenates soft prompts into the encoder input; {\bf Prefix Tuning}~\cite{li-liang-2021-prefix}, which adds a prefix to all the encoder layers, cross-attention layers, and the decoder layers; and {\bf Full-Model Tuning}, which does not have any prompts and fine-tunes all the parameters of the pre-trained language model. \begin{table}[t] \small \centering \resizebox{.78\linewidth}{!}{ \begin{tabular}{lrrrrrr} \toprule \multirow{2}*{$k$} & \multicolumn{3}{c}{$D_{dev}$} & \multicolumn{3}{c}{$D_{test}$} \\ \cmidrule(r{4pt}){2-4} \cmidrule{5-7} & R-1 & R-2 & R-L & R-1 & R-2 & R-L \\ \midrule 5 & 34.27 & 11.90 & 26.41 & 31.90 & 10.28 & 24.20 \\ 10 & {\bf 35.31} & {\bf 12.88} & {\bf 26.85} & {\bf 32.89} & {\bf 11.13} & {\bf 25.51} \\ 15 & 34.98 & 11.68 & 26.45 & 32.11 & 10.46 & 24.72 \\ 30 & 34.48 & 12.57 & 26.55 & 32.20 & 11.03 & 25.30 \\ \bottomrule \end{tabular}} \caption{Results of different fixed lengths $k$ on validation set $D_{dev}$ and test set $D_{test}$ of XSum.
``R-1'' is short for ``ROUGE-1'', ``R-2'' for ``ROUGE-2'', and ``R-L'' for ``ROUGE-L''.} \label{tab:fixed length} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{k-shot.png} \caption{$k$-shot summarization results on XSum.} \label{fig:xsum kshot} \end{figure} \subsection{Experimental Results of Our Method} Table~\ref{tab:automatic evaluation} presents the results of all PSP variants and baselines across the CNNDM and XSum datasets. With the exception of the ROUGE-2 and ROUGE-L scores for Prefix-Tuning on the CNNDM dataset, our proposed PSP outperforms the others; even there, PSP delivers a competitive result with only 3\% of the parameters. To our surprise, we observe that 50\% of PSP's results surpass full-model tuning, especially on XSum, as underlined in the table. Besides, results on the PPL metric show that PSP generates more fluent summaries than the other models. These results indicate that fine-tuning large language models is not necessarily a good or efficient choice for few-shot generation. They also show that soft prompts with frozen language models are effective for few-shot abstractive summarization, and statistically verify that PSP with its three inner-prompt strategies is effective. Other supplementary experiments, including the performance of the pre-training operation on Prefix-Tuning, a demonstration of the universality of GSG for constructing pseudo-data, and quantitative and qualitative human evaluations, are shown in Sections C, D, and E of the supplementary file, respectively. \paragraph{Efficiency vs. effectiveness.} We give an overall comparison with the baseline models on effectiveness and memory efficiency, evaluated by ROUGE and the number of parameters, respectively. The results are shown in Table~\ref{tab:efficient}.
Prompt Tuning has the fewest parameters, but its capacity is limited and it lacks control over the decoder side, so it cannot perform natural language generation tasks well. Substantial gains are made when going from vanilla Prompt Tuning to PSP. Conversely, even though Prefix-Tuning has nearly thirty times more parameters than ours, it yields only a marginal improvement, or even a performance decrease on some metrics. Besides, Prefix-Tuning relies on a reparameterization trick to stabilize training, i.e., it adds an MLP with a large number of parameters to the training stage. Our method provides the best effectiveness-efficiency trade-off: it outperforms full-model tuning with only 0.1\% of the parameters and presents competitive results against Prefix-Tuning with 3\% of its parameters. \begin{table}[t] \small \centering \resizebox{.95\linewidth}{!}{ \begin{tabular}{lcccc} \toprule \multirow{2}{*}{Model} & \multirow{2}{*}{\# Train} & \multirow{2}{*}{\# Store} & \multicolumn{2}{c}{ROUGE-1} \\ \cmidrule(r{4pt}){4-4} \cmidrule{5-5} & & & CNNDM & XSUM \\ \midrule PSP & $2.0\times10^5$ & $2.0\times10^5$ & {\bf 38.32} & {\bf 32.86} \\ Prefix-Tuning & $2.4\times10^7$ & $5.5\times10^6$ & 37.12 & 32.18 \\ Prompt Tuning & $7.7\times10^4$ & $7.7\times10^4$ & 30.58 & 29.63 \\ Full-Model Tuning & $1.4\times10^8$ & $1.4\times10^8$ & 38.03 & 32.85 \\ \bottomrule \end{tabular}} \caption{Comparison with baseline models on effectiveness and efficiency. ``\# Train'' means the number of tuned parameters during training. ``\# Store'' means the number of stored parameters.
Best results are bold.} \label{tab:efficient} \end{table} \begin{table}[t] \centering \small \resizebox{.99\linewidth}{!}{ \begin{tabular}{lrrrrrr} \toprule \multirow{2}*{Model} & \multicolumn{3}{c}{CNNDM} & \multicolumn{3}{c}{XSum} \\ \cmidrule(r{4pt}){2-4} \cmidrule{5-7} & R-1 & R-2 & R-L & R-1 & R-2 & R-L \\ \midrule Soft prompts (en.\&de., 100) & 36.89 & 14.96 & 24.63 & 29.36 & 9.90 & 22.92 \\ Soft prompts (en.\&de., 150) & 35.71 & 14.86 & 23.97 &28.94 &9.52 &22.24 \\ Soft prompts (en.\&de.\&ip., 100) & {\bf 37.87} & {\bf 15.83} & {\bf 25.37} & {\bf 31.95} &{\bf 10.52} &{\bf 24.80} \\ \bottomrule \end{tabular}} \caption{Results of different architectures of soft prompts on CNNDM and XSum, where ``en.'', ``de.'', and ``ip.'' are short for encoder, decoder and inner prompts, respectively. Numbers in parentheses represent the number of prompt tokens we prepended before the encoder and decoder input.} \label{tab:analyses on inner} \end{table} \paragraph{Selection of fixed length $k$.} As shown in Table~\ref{tab:automatic evaluation}, {PSP$_{\tt{Fixed-k}}$} performs consistently well on both datasets. So we further explored the influence of different lengths $k$, i.e., $k=5, 10, 15, 30$, for inner-prompt tokens of the PSP$_{\tt Fixed-k}$\footnote{The average number of tokens per sentence in both datasets was about 18, so we did not consider a fixed length of 20, given its similarity to the PSP$_{\tt Sequential}$.}. Table~\ref{tab:fixed length} presents the results of the variants on XSum. We observe that segmented spans of 10 tokens achieve the best performance. Interestingly, this suggests that, to understand a document, the model can reorganize its sentences into semantic units of about 10 tokens on average. We also report results of different $k$ on our validation set in Table~\ref{tab:fixed length}. The ranking is consistent with the test set.
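The PSP$_{\tt Fixed-k}$ assignment itself is straightforward; a minimal sketch (our own, assuming the overflow span beyond the $n$-th shares one token, mirroring the sentence-level scheme) is:

```python
def fixed_k_prompt_ids(tokens, k, n):
    """Token j falls into span j // k; spans beyond the n-th all share
    the overflow id n."""
    return [min(j // k, n) for j in range(len(tokens))]
```

With $k = 10$, a 25-token document is segmented into spans of 10, 10, and 5 tokens, each span sharing one inner-prompt embedding.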
From a practical perspective, when applying PSP to a new dataset, we can choose the best $k$ based on performance on the validation set. \begin{table}[t] \small \begin{center} \resizebox{.75\linewidth}{!}{ \begin{tabular}{lrrr} \toprule Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule Full-Model Tuning & 11.69 & 2.67 & 7.74 \\ Prefix-Tuning & 11.76 & 2.63 & 7.93 \\ Prompt Tuning & 9.40 & 1.86 & 6.19 \\ PSP-Interval & {\bf 17.16} & {\bf 3.36} & {\bf 12.65} \\ \bottomrule \end{tabular}} \end{center} \caption{Zero-shot results on XSum. Best results are bold.} \label{tab:zero-shot} \end{table} \subsection{Analyses on Soft Prompts} \label{sec:analysis on inner} \paragraph{Does our model attend to the document content?} Complementing Figure \ref{fig:nopretrain_noinner_attn}, we present the encoder-decoder attention distribution of the advanced PSP; the comparison is visualized in Figure~\ref{fig:pretrain+inner_attn}. We find the following enhancements from introducing the inner prompts. First, the PSP model strengthens the associations between the encoder prompts and the decoder prompts compared to the original model. Second, the soft prompt $P_{en}$ is more often related to the output $Y$, indicating the semantic relations between them. Third, the output $Y$ assigns more attention to the source document $X$. This suggests that the hidden structure of the document is emphasized, increasing the model's capability of understanding its semantics. As such, these prompts can properly select salient information from the document and prompt the model to generate the output. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{attention_contrast.png} \caption{Visualization of the encoder-decoder attention weights of the model with only prompts across the encoder and the decoder (left) and PSP (right).
Detailed descriptions refer to Figure \ref{fig:nopretrain_noinner_attn}.} \label{fig:pretrain+inner_attn} \end{figure} \paragraph{Do inner prompts help the model understand the content of documents, or do they simply increase the model's capacity?} Instead of using inner-prompts, we prepended additional tunable tokens (i.e., 150 tokens) in front of the encoder and the decoder inputs. Comparison results are shown in Table~\ref{tab:analyses on inner}. Despite the larger capacity, soft prompts with 150 tunable tokens before the input performed the worst, denoted as \textit{soft prompts (en.\&de., 150)}. This suggests that the inner-prompts, with few parameters, do help to understand the document by prompting its structure, rather than simply adding more trainable parameters to increase the model's capacity. \paragraph{Further insight on soft prompts across the encoder and the decoder.} To verify our hypothesis that the decoder prompts largely copy the behaviour of the encoder's prompts, we shared the embeddings of the soft prompts before the encoder and the decoder. In Table~\ref{tab:share}, we observe that the Soft prompts (en.\&de., shared) and the Soft prompts (en.\&de., separate) perform almost identically. Although the parameters are only half of those of the original model, the performance remains competitive. This shows that the shared prompts can extract important information from the document and further guide the language model to generate consistently good summaries more efficiently.
\begin{table}[t] \small \begin{center} \resizebox{.9\linewidth}{!}{ \begin{tabular}{lrrr} \toprule Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule Soft prompts (en.\&de., shared) & 36.06 & 14.30 & 24.24 \\ Soft prompts (en.\&de., separate) & 36.37 & 14.41 & 24.46 \\ \bottomrule \end{tabular}} \end{center} \caption{Results of basic soft prompts on CNNDM.} \label{tab:share} \end{table} \begin{table}[t] \small \centering \resizebox{.99\linewidth}{!}{ \begin{tabular}{lrrrrrr} \toprule \multirow{2}*{Method} & \multicolumn{3}{c}{CNNDM} & \multicolumn{3}{c}{XSum} \\ \cmidrule(r{4pt}){2-4} \cmidrule{5-7} ~ & ROUGE-1 & ROUGE-2 & ROUGE-L & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule {PSP$_{\tt{Fixed-k}}$} & $38.31_{0.15}$ & $15.94_{0.21}$ & $25.41_{0.25}$ & $32.81_{0.10}$ & $11.15_{0.10}$ & $25.48_{0.13}$ \\ \quad w/o PP & {$37.30_{0.56}$} & $15.45_{0.39}$ & $24.93_{0.38}$ & $32.17_{0.16}$ & $10.69_{0.13}$ & $25.02_{0.21}$ \\ \quad w/o IP & $37.76_{0.28}$ & {$15.22_{0.31}$} & $24.80_{0.40}$ & $32.59_{0.17}$ & $11.14_{0.17}$ & $25.46_{0.24}$ \\ \quad w/o PP \& IP & $36.88_{0.42}$ & {$14.96_{0.45}$} & $24.63_{0.40}$ & $29.35_{1.5}$ & $9.87_{0.43}$ & $22.89_{1.19}$ \\ \bottomrule \end{tabular}} \caption{Ablation study of PSP on two datasets. ``w/o'' means without. ``PP'' and ``IP'' are short for Prompt Pre-training and Inner-Prompts, respectively. The variance of each result is provided.} \label{tab:ablation_methods} \end{table} \subsection{Analysis on Few-shot and Zero-shot Summarization} To examine the performance of the different methods under few-shot conditions, we further randomly sampled \{50, 100, 200\} examples as additional settings. Figure~\ref{fig:xsum kshot} reports a more detailed overview of all models' performance across a range of different few-shot settings. The ROUGE scores of our model generally outperform those of the other baselines and remain steady across scenarios.
In particular, PSP with only 50 examples achieves the most significant improvements, while Prefix-Tuning (tuned on BART$_{\tt base}$) fails entirely, possibly due to training instability. Moreover, we report the zero-shot results on XSum in Table~\ref{tab:zero-shot}. Benefiting from the knowledge gained in the pre-training phase, our model shows a significant advantage in zero-shot adaptation, generating quality summaries. \subsection{Ablation Study} We conducted experiments to examine the effectiveness of the major components of our model, and Table \ref{tab:ablation_methods} shows the ablation results across the two datasets. We observe that both the prompt pre-training operation and the inner-prompts component contribute to the main model. Notably, with the removal of each component, the model becomes considerably unstable, as indicated by the variance shown in the ablation results. Comparably, prompt pre-training contributes more on the XSum dataset, whose summaries have a higher abstraction level (we assume it is more ``difficult'') than CNNDM. In sum, these two components support the performance and stability of our model in terms of summarization adaptation (by prompt pre-training) and structural document understanding (by inner-prompts). \section{Conclusion} In this paper, we present a novel pre-trained soft prompts architecture (PSP) specifically designed for few-shot abstractive summarization. We design continuous input embeddings across an encoder and a decoder, alongside several kinds of inner-prompts placed in the text, helping the model better understand documents and guiding accurate generation. Empirical results confirm the necessity of prompt pre-training for few-shot/zero-shot abstractive summarization. Extensive experiments and analyses show that the proposed PSP provides the best effectiveness-efficiency trade-off among all the baseline methods. \bibliographystyle{named}
\section{Introduction} Explaining structure formation in the expanding Universe is one of the major topics in cosmology and astrophysics. According to the current main-stream understanding, dark matter (DM) and dark energy (DE) are the dynamically dominating components of the Universe \cite{wmap,wmap12,planck}. Baryons contribute only a small fraction of less than 5\% to the cosmic energy budget. The standard $\Lambda$CDM model does well in fitting most observational data but there is an ongoing interest in alternative models within and beyond General Relativity. A class of alternative models within General Relativity ``dynamizes'' the cosmological constant, resulting in so-called $\Lambda(t)$CDM models. Taking the cosmological principle for granted, cosmic structures represent inhomogeneities in the matter distribution on an otherwise spatially homogeneous and isotropic background. Dynamical DE models, of which $\Lambda(t)$CDM models are a subclass, have to deal with inhomogeneities of the DE component in addition to the matter inhomogeneities to which they are coupled. This makes these models technically more complex than the standard model. Ignoring perturbations of the DE component altogether may lead to inconsistencies and unreliable conclusions concerning the interpretation of observational data \cite{Park-Hwang}. Whether or not DE perturbations are relevant has to be decided on a case-by-case basis. The directly observed inhomogeneities are of baryonic nature. From the outset it is not clear that the inhomogeneities in the baryonic matter coincide with the inhomogeneities of the DM distribution. In particular, if DM interacts nongravitationally with DE, which happens in $\Lambda(t)$CDM models, while baryonic matter is in geodesic motion, this issue has to be clarified.
A reliable description of the observed matter distribution has to consider the perturbation dynamics of the baryon fraction even though the latter only marginally influences the homogeneous and isotropic cosmic background dynamics. Then, in models with dynamical DE, the perturbations of baryonic matter will necessarily be coupled to the inhomogeneities of both DM and DE. In a general context, the importance of including the physics of the baryon component in the cosmic dynamics has been emphasized recently \cite{nature}. In this paper we extend a previously established decaying vacuum model \cite{humberto,julio,zimdahl,saulo,saulochap} by including a separately conserved baryon fluid with a four-velocity that differs from the four-velocity of the DM component. The basic ingredient of this model is a DE component with an energy density proportional to the Hubble rate. Moreover, it is characterized by an equation-of-state (EoS) parameter $-1$ for vacuum. Equivalently, the resulting dynamics can be understood as a scenario of DM particle production at a constant rate \cite{saulo} or as the dynamics of a non-adiabatic Chaplygin gas \cite{saulochap}. DE perturbations for this model are explicitly related to DM perturbations and their first derivative with respect to the scale factor in a scale-dependent way. It has been shown that on scales that are relevant for structure formation, DE fluctuations are smaller than the DM fluctuations by several orders of magnitude \cite{zimdahl}. Our analysis will be performed within a gauge-invariant formalism in terms of variables adapted to comoving observers \cite{VDF}. We shall derive a set of two second-order equations that couple the total fractional energy-density perturbations of the cosmic medium to the difference between these total perturbations and the fractional baryonic perturbations. The perturbations of the baryon fluid are then found as a suitable linear combination. 
As far as the background dynamics is concerned, our updated tests against observations from SNIa, BAO and the position of the first acoustic peak of the CMB spectrum confirm previous results \cite{Pigozzo2011}. Including the LSS data improves the concordance of the model compared with the case without a separately conserved baryon component. The joint analysis allows us to predict the baryon abundance of the Universe independently of the DM abundance. The corresponding probability density function (PDF) exhibits a pronounced peak at about 5\% for this abundance. This is a new feature which entirely relies on a separate consideration of the baryon fluid. The paper is organized as follows. In Sec.~\ref{model} we establish the basic relations of our three-component model of DE, DM and baryons. In Sec.~\ref{background} we recall the homogeneous and isotropic background dynamics of this model. Sec.~\ref{perturbations} is devoted to a gauge-invariant perturbation analysis which results in an explicit expression for the energy-density perturbations of the baryon fluid. In Sec.~\ref{observations} we test the model against observations using both background and LSS data. Our results are summarized in Sec.~\ref{conclusions}. \section{The model} \label{model} We describe the cosmic medium as a perfect fluid with a conserved energy momentum tensor \begin{equation} T_{ik} = \rho u_{i}u_{k} + p h_{ik}\ , \qquad T_{\ ;k}^{ik} = 0\,, \label{T} \end{equation} where $u^{i}$ is the cosmic four-velocity, $h _{ik}=g_{ik} + u_{i}u_{k}$ and $g_{ik}u^{i}u^{k} = -1$. Here, $\rho$ is the energy density for a comoving (with $u^{i}$) observer and $p$ is the fluid pressure. Latin indices run from $0$ to $3$. Let us consider a three--component system by assuming a split of the total energy-momentum tensor in (\ref{T}) into a DM component (subindex M), a DE component (subindex X) and a baryonic component (subindex B), \begin{equation}\label{Ttot} T^{ik} = T_{M}^{ik} + T_{X}^{ik} + T_{B}^{ik}\,. 
\end{equation} Each of the components is also modeled as a perfect fluid with ($A = M, X, B$) \begin{equation}\label{TA} T_{A}^{ik} = \rho_{A} u_A^{i} u^{k}_{A} + p_{A} h_{A}^{ik} \ ,\qquad\ h_{A}^{ik} = g^{ik} + u_A^{i} u^{k}_{A} \,. \end{equation} DM and baryonic matter are assumed to be pressureless. In general, each component has its own four-velocity with $g_{ik}u_{A}^{i}u_{A}^{k} = -1$. According to the model to be studied here we include a (so far unspecified) interaction between the dark components: \begin{equation}\label{Q} T_{M\ ;k}^{ik} = Q^{i}\,,\qquad T_{X\ ;k}^{ik} = - Q^{i}\,. \end{equation} Then, the energy-balance equations of the dark components are \begin{equation} -u_{Mi}T^{ik}_{M\ ;k} = \rho_{M,a}u_{M}^{a} + \Theta_{M}\rho_{M} = -u_{Ma}Q^{a}\ \label{eb1} \end{equation} and \begin{equation} -u_{Xi}T^{ik}_{X\ ;k} = \rho_{X,a}u_{X}^{a} + \Theta_{X} \left(\rho_{X} + p_{X}\right) = u_{Xa}Q^{a}\,. \label{eb2} \end{equation} The baryonic component is separately conserved, \begin{equation} -u_{Bi}T^{ik}_{B\ ;k} = \rho_{B,a}u_{B}^{a} + \Theta_{B}\rho_{B} = 0\,. \label{ebb} \end{equation} The quantities $\Theta_{A}$ are defined as $\Theta_{A} = u^{a}_{A;a}$. For the homogeneous and isotropic background we assume $u_{M}^{a} = u_{X}^{a} = u_{B}^{a} = u^{a}$. Likewise, we have the momentum balances \begin{equation} h_{Mi}^{a}T^{ik}_{M\ ;k} = \rho_{M}\dot{u}_{M}^{a} = h_{M i}^{a} Q^{i}\,, \label{mb1} \end{equation} \begin{equation} h_{Xi}^{a}T^{ik}_{X\ ;k} = \left(\rho_{X} + p_{X}\right)\dot{u}_{X}^{a} + p_{X,i}h_{X}^{ai} = - h_{X i}^{a} Q^{i}\,, \label{mb2} \end{equation} and \begin{equation} h_{Bi}^{a}T^{ik}_{B\ ;k} = \rho_{B}\dot{u}_{B}^{a} = 0\,, \label{mbb} \end{equation} where $\dot{u}_{A}^{a} \equiv u_{A ;b}^{a}u_{A}^{b}$.
The source term $Q^{i}$ is split into parts proportional and perpendicular to the total four-velocity according to \begin{equation} Q^{i} = u^{i}Q + \bar{Q}^{i}\,, \label{Qdec} \end{equation} where $Q = - u_{i}Q^{i}$ and $\bar{Q}^{i} = h^{i}_{a}Q^{a}$ with $u_{i}\bar{Q}^{i} = 0$. The contribution $T_{X}^{ik}$ is supposed to describe some form of DE. In the simple case of an EoS $p_{X} = - \rho_{X}$, where $\rho_{X}$ is not necessarily constant, we have \begin{equation} T_{X}^{ik} = - \rho_{X}g^{ik}\,. \label{Tx} \end{equation} Dynamically, an energy-momentum tensor like this corresponds to a time-dependent cosmological term. Various approaches to such type of $\Lambda(t)$ cosmology term can be found in the literature \cite{Lambda(t)}. Since the only time scale in a homogeneous and isotropic universe is the Hubble time $H^{-1}$, the simplest phenomenological guess here is $\rho_{X} \propto H$. Interestingly, this guess has some support from particle physics. The QCD vacuum condensate associated to the chiral phase transition leads to a vacuum density proportional to $H$ \cite{QCD}. It is a dynamics along this line which we intend to study here, albeit in an entirely phenomenological context. An obvious covariant generalization of a cosmological term that, in the homogeneous and isotropic background, decays linearly with the Hubble rate $H$, i.e., $\rho_{X} \propto H$, is \begin{equation} \rho_{X} = \frac{\sigma}{3}\Theta\,,\qquad p_{X} = - \frac{\sigma}{3}\Theta\,, \label{rX} \end{equation} where $\Theta \equiv u^{a}_{;a}$ is the expansion scalar and $\sigma$ is a constant. In the homogeneous and isotropic background one has $\Theta = 3H$ and recovers $\rho_{X} \propto H$. 
\section{Background dynamics} \label{background} The homogeneous and isotropic background dynamics is governed by Friedmann's equation \begin{equation} 3 H^{2} = 8\pi G \rho = 8\pi G \left(\rho_{M} + \rho_{X} + \rho_{B}\right) = 8\pi G \left(\rho_{M} + \rho_{B} + \sigma H\right)\ \label{fried} \end{equation} and \begin{equation} \dot{H} = - 4\pi G \left(\rho + p\right) = - 4\pi G \left(\rho_{M} + \rho_{B}\right) \,. \label{dH} \end{equation} Combining Eqs.~(\ref{fried}) and (\ref{dH}) we obtain \begin{equation} \dot{H} = - \frac{3}{2}H^{2} + 4\pi G \sigma H\,. \label{dH1} \end{equation} Changing to the scale factor $a$ as independent variable, the solution of Eq.~(\ref{dH1}) is \begin{equation} H = \frac{8\pi G }{3}\sigma + \left(H_{0} - \frac{8\pi G }{3}\sigma\right) a^{-3/2} \,, \label{Hsol} \end{equation} where a subindex 0 indicates the present value of the corresponding quantity and where we put $a_{0} = 1$. With \begin{equation} 3H_{0}^{2} = 8\pi G \rho_{0}\,,\quad \Omega_{M0}\equiv \frac{\rho_{M0}}{\rho_{0}}\,,\quad \Omega_{B0}\equiv \frac{\rho_{B0}}{\rho_{0}}\,,\quad\sigma = \frac{\rho_{0}}{H_{0}}\left(1 - \Omega_{M0} - \Omega_{B0}\right)\,, \label{sig} \end{equation} the Hubble rate (\ref{Hsol}) may be written as \begin{equation} H = H_{0}\left[1 - \Omega_{M0} - \Omega_{B0} + \left(\Omega_{M0} + \Omega_{B0}\right)a^{-3/2}\right] \,. \label{Hom} \end{equation} The existence of the last relation in (\ref{sig}) implies that $\sigma$ is not an additional parameter. The limit of a vanishing $\sigma$ is the Einstein-de Sitter universe, not the $\Lambda$CDM model. There is no $\Lambda$CDM limit of the dynamics described by the Hubble rate (\ref{Hom}). 
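As a quick numerical cross-check (our own sketch, in units where $8\pi G = 3$ and $H_0 = 1$, so that $(8\pi G/3)\sigma \to \sigma$ and $4\pi G\sigma \to \tfrac{3}{2}\sigma$), Eq.~(\ref{dH1}) can be rewritten as $dH/da = -\tfrac{3}{2}(H-\sigma)/a$ and integrated directly, reproducing the closed form (\ref{Hsol}):

```python
def analytic_H(a, sigma, H0=1.0):
    """Closed-form Hubble rate in units 8*pi*G = 3:
    H(a) = sigma + (H0 - sigma) * a**(-3/2)."""
    return sigma + (H0 - sigma) * a**-1.5

def integrate_H(a_end, sigma, H0=1.0, steps=10000):
    """RK4 integration of dH/da = -1.5*(H - sigma)/a from a = 1, H = H0."""
    a, H = 1.0, H0
    da = (a_end - 1.0) / steps
    f = lambda a, H: -1.5 * (H - sigma) / a
    for _ in range(steps):
        k1 = f(a, H)
        k2 = f(a + da / 2, H + da / 2 * k1)
        k3 = f(a + da / 2, H + da / 2 * k2)
        k4 = f(a + da, H + da * k3)
        H += da / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        a += da
    return H
```

With $\sigma = 0.7$ (i.e., $\Omega_{M0} + \Omega_{B0} = 0.3$), the numerical and analytic Hubble rates agree to high precision, illustrating the exactness of the background solution.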
The background source terms are \begin{equation} u_{a}Q^{a} = - Q = -\dot{p}_{X} = \sigma\dot{H}\quad \mathrm{and} \quad \bar{Q}^{a} = 0\ \label{Q01} \end{equation} and the energy densities $\rho_{M}$ and $\rho_{X}$ are given by \begin{equation} \frac{\rho_{M}}{\rho_{0}} = \left(\Omega_{M0} + \Omega_{B0}\right)a^{-3/2}\left[1 - \Omega_{M0} - \Omega_{B0} + \left(\Omega_{M0} + \Omega_{B0} - \frac{\Omega_{B0}}{\Omega_{M0} + \Omega_{B0}} \right)a^{-3/2}\right] \ \label{rmom} \end{equation} and \begin{equation} \frac{\rho_{X}}{\rho_{0}} = \left(1 - \Omega_{M0} - \Omega_{B0}\right)\left[1 - \Omega_{M0} - \Omega_{B0} + \left(\Omega_{M0} + \Omega_{B0}\right)a^{-3/2}\right] \,, \label{rxom} \end{equation} respectively. The baryon energy density is \begin{equation} \frac{\rho_{B}}{\rho_{0}} = \Omega_{B0}a^{-3} \,. \label{rb} \end{equation} With (\ref{Hom})--(\ref{rb}) the background dynamics for the three-component system is exactly solved. An additional radiation component (subscript R) can be included approximately \cite{saulorad}: \begin{equation}\label{Happr} H = H_{0}\left[\left[1 - \Omega_{M0} - \Omega_{B0} + \left(\Omega_{M0} + \Omega_{B0}\right)a^{-3/2}\right]^{2} + \Omega_{R0}a^{-4}\right]^{1/2} \,. \end{equation} (Notice that this is an exact solution of the dynamics only for $\Omega_{R0} = 0$.) It can be shown that for the standard-model values of $\Omega_{M0}$, $\Omega_{B0}$ and $\Omega_{R0}$ the deviation of (\ref{Happr}) from the exact numerical solution for the Hubble rate is only of the order of 0.6\%. \section{Perturbations} \label{perturbations} \subsection{Balance and conservation equations} First-order perturbations will be denoted by a hat symbol. While for the background $u_{M}^{a} = u_{B}^{a} = u_{X}^{a} = u^{a}$ is assumed to be valid, the first-order perturbations of these quantities are different, in general.
The perturbed time components of the four-velocities, however, still coincide: \begin{equation} \hat{u}_{0} = \hat{u}^{0} = \hat{u}_{M}^{0} = \hat{u}_{B}^{0} = \hat{u}_{X}^{0} = \frac{1}{2}\hat{g}_{00}\,. \label{u0} \end{equation} According to the perfect-fluid structure of both the total energy-momentum tensor (\ref{T}) and the energy-momentum tensors of the components in (\ref{TA}), and with $u_{M}^{a} = u_{B}^{a} = u_{X}^{a} = u^{a}$ in the background, we have first-order energy-density perturbations $\hat{\rho} = \hat{\rho}_{M} + \hat{\rho}_{B} + \hat{\rho}_{X}$, pressure perturbations $\hat{p} = \hat{p}_{M} + \hat{p}_{B} + \hat{p}_{X} = \hat{p}_{X}$ and \begin{equation} \hat{T}^{0}_{\alpha} = \hat{T}^{0}_{M\alpha} + \hat{T}^{0}_{B\alpha} +\hat{T}^{0}_{X\alpha}\quad\Rightarrow\quad \left(\rho + p\right)\hat{u}_{\alpha} = \rho_{M}\hat{u}_{M\alpha} + \rho_{B}\hat{u}_{B\alpha} + \left(\rho_{X} + p_{X}\right)\hat{u}_{X\alpha} \,. \label{T0al} \end{equation} For $p_{X} = - \rho_{X}$ it follows \begin{equation} p_{X} = - \rho_{X} \ \Rightarrow\ \rho + p = \rho_{M} + \rho_{B} \ \Rightarrow\ \hat{u}_{\alpha} = \frac{\rho_{M}}{\rho_{M} + \rho_{B}}\hat{u}_{M\alpha} + \frac{\rho_{B}}{\rho_{M} + \rho_{B}}\hat{u}_{B\alpha} \,. \label{ual} \end{equation} The perturbations of the time derivatives of the spatial components of the four-velocities differ from the time derivatives of the perturbations by the spatial gradient of $g_{00}$: \begin{equation}\label{} \hat{\dot{u}}_{\alpha} = \dot{\hat{u}}_{\alpha} - \frac{1}{2}g_{00,\alpha}\,, \qquad \hat{\dot{u}}_{M\alpha} = \dot{\hat{u}}_{M\alpha} - \frac{1}{2}g_{00,\alpha}\,, \qquad \hat{\dot{u}}_{B\alpha} = \dot{\hat{u}}_{B\alpha} - \frac{1}{2}g_{00,\alpha}\,. 
\end{equation} The total first-order energy conservation reads \begin{equation}\label{ebaltot} \dot{\hat{\rho}} + \dot{\hat{\rho}}\hat{u}^{0} + \hat{\Theta}\left(\rho_{M} + \rho_{B}\right) + \Theta\left(\hat{\rho} + \hat{p}\right) = 0\,, \end{equation} while the separate balances are \begin{equation}\label{ebalM} \dot{\hat{\rho}}_{M} + \dot{\rho}_{M}\hat{u}^{0} + \hat{\Theta}_{M}\rho_{M} + \Theta\hat{\rho}_{M} = Q = - \left(u_{Ma}Q^{a}\right)^{\hat{}}\,, \end{equation} \begin{equation}\label{ebalX} \dot{\hat{\rho}}_{X} + \dot{\rho}_{X}\hat{u}^{0} + \Theta\left(\hat{\rho}_{X} + \hat{p}_{X}\right) = \left(u_{Xa}Q^{a}\right)^{\hat{}}\ \end{equation} and \begin{equation}\label{ebalB} \dot{\hat{\rho}}_{B} + \dot{\rho}_{B}\hat{u}^{0} + \hat{\Theta}_{B}\rho_{B} + \Theta\hat{\rho}_{B} = 0\,. \end{equation} Comparing the total first-order energy conservation (\ref{ebaltot}) with the sum of the separate balances (\ref{ebalM}), (\ref{ebalX}) and (\ref{ebalB}) results in \begin{equation}\label{hTheta} \hat{\Theta}\left(\rho_{M} + \rho_{B}\right) = \hat{\Theta}_{M}\rho_{M} + \hat{\Theta}_{B}\rho_{B} + \left(u_{Ma}Q^{a}\right)^{\hat{}} - \left(u_{Xa}Q^{a}\right)^{\hat{}}\,. \end{equation} To be consistent with the last equation in (\ref{ual}), the last two terms on the right-hand side of (\ref{hTheta}) have to cancel each other. This establishes a relation between the perturbations of the projected interaction terms. 
We shall restrict ourselves to scalar perturbations which are described by the line element \begin{equation} \mbox{d}s^{2} = - \left(1 + 2 \phi\right)\mbox{d}t^2 + 2 a^2 F_{,\alpha }\mbox{d}t\mbox{d}x^{\alpha} + a^2\left[\left(1-2\psi\right)\delta _{\alpha \beta} + 2E_{,\alpha \beta} \right] \mbox{d}x^\alpha\mbox{d}x^\beta \,.\label{ds} \end{equation} We also define the three-scalar quantities $v$, $v_{M}$ and $v_{B}$ by \begin{equation} a^2\hat{u}^\mu + a^2F_{,\mu} = \hat{u}_\mu \equiv v_{,\mu}\,,\quad a^2\hat{u}_{M}^\mu + a^2F_{,\mu} = \hat{u}_{M\mu} \equiv v_{M,\mu}\,,\quad a^2\hat{u}_{B}^\mu + a^2F_{,\mu} = \hat{u}_{B\mu} \equiv v_{B,\mu}\,. \label{} \end{equation} With the abbreviation \begin{equation} \chi \equiv a^2\left(\dot{E} -F\right) \,, \label{} \end{equation} the perturbed scalars $\Theta_{M}$, $\Theta_{B}$ and $\Theta$ are \begin{equation} \hat{\Theta}_{M} = \frac{1}{a^2}\left(\Delta v_{M} +\Delta \chi\right) - 3\dot{\psi} - 3 H\phi \ , \quad \hat{\Theta}_{B} = \frac{1}{a^2}\left (\Delta v_{B} +\Delta \chi\right) - 3\dot{\psi} - 3 H\phi \ \label{Thetaexp1} \end{equation} and \begin{equation} \hat{\Theta} = \frac{1}{a^2}\left (\Delta v +\Delta \chi\right) - 3\dot{\psi} - 3 H\phi\,,\label{Thetaexp} \end{equation} respectively, where $\Delta$ denotes the three-dimensional Laplacian. The last relation of (\ref{ual}) then implies \begin{equation}\label{} \left(\rho_{M} + \rho_{B}\right)v = \rho_{M}v_{M} + \rho_{B}v_{B}\,. \end{equation} Moreover, as already mentioned, consistency with (\ref{hTheta}) requires \begin{equation}\label{} \left(u_{Ma}Q^{a}\right)^{\hat{}} = \left(u_{Xa}Q^{a}\right)^{\hat{}}\,. 
\end{equation} In terms of the fractional quantities \begin{equation}\label{} \delta = \frac{\hat{\rho}}{\rho}\,,\quad \delta_{M} = \frac{\hat{\rho}_{M}}{\rho_{M}}\,,\quad\delta_{X} = \frac{\hat{\rho}_{X}}{\rho_{X}}\,,\quad\delta_{B} = \frac{\hat{\rho}_{B}}{\rho_{B}}\,, \end{equation} the energy balances (\ref{ebaltot}), (\ref{ebalM}), (\ref{ebalX}) and (\ref{ebalB}) transform into \begin{equation}\label{ebaldelta} \dot{\delta} + \frac{\dot{\rho}}{\rho}\hat{u}^{0} + \hat{\Theta}\frac{\rho_{M} + \rho_{B}}{\rho} + \Theta\frac{p}{\rho}\left(\frac{\hat{p}}{p} - \delta\right) = 0\,, \end{equation} \begin{equation}\label{ebalMdelta} \dot{\delta}_{M} + \frac{\dot{\rho}_{M}}{\rho_{M}}\hat{u}^{0} + \hat{\Theta}_{M} = \frac{\hat{Q}}{\rho_{M}} - \frac{Q}{\rho_{M}}\delta_{M} \,, \end{equation} \begin{equation}\label{ebalXdelta} \dot{\delta}_{X} + \Theta\left(\frac{\hat{p}_{X}}{\rho_{X}} + \delta_{X}\right) = \frac{1}{\rho_{X}}\left(u_{Xa}Q^{a}\right)^{\hat{}} + \frac{Q}{\rho_{X}}\left(\delta_{X} + \hat{u}^{0}\right) \ \end{equation} and \begin{equation}\label{ebalBdelta} \dot{\delta}_{B} + \frac{\dot{\rho}_{B}}{\rho_{B}}\hat{u}^{0} + \hat{\Theta}_{B} = 0 \,, \end{equation} respectively. The total momentum conservation reads (recall that $p_{X} = - \rho_{X}$) \begin{equation}\label{mbt} \left(\rho_{M} + \rho_{B}\right)\dot{u}^{a} + p_{X,i}h^{ai} = 0\,. \end{equation} The DM and DE momentum balances are given by (\ref{mb1}) and (\ref{mb2}), respectively, with $p_{X} = - \rho_{X}$. The baryon-fluid motion is geodesic according to (\ref{mbb}). Our aim is to calculate the energy-density perturbations of the baryon component. In the following subsection we establish, in a first step, an equation for the perturbations of the total energy density. Subsequently, we shall derive an equation for the difference between total and baryonic density perturbations. From the solutions of this system of coupled second-order equations we then obtain the desired perturbations of the baryon fluid. 
\subsection{Perturbations of the total energy density} To obtain an equation for the total energy-density perturbations it is convenient to introduce gauge-invariant quantities, adapted to an observer that is comoving with the total fluid four-velocity, \begin{equation} \delta^{c} \equiv \delta + \frac{\dot{\rho}}{\rho} v\,, \quad \hat{\Theta}^{c} \equiv \hat{\Theta} + \dot{\Theta} v\,, \quad \hat{p}^{c} \equiv \hat{p} + \dot{p}v \,. \label{gi} \end{equation} Then, the total energy and momentum conservations (\ref{ebaltot}) and (\ref{mbt}), respectively, can be combined into \begin{equation}\label{balcomb} \dot{\delta}^{c} - \Theta\frac{p}{\rho}\delta^{c} + \hat{\Theta}^{c}\left(1+\frac{p}{\rho}\right) = 0\,. \end{equation} The perturbation $\hat{\Theta}$ has to be determined from the Raychaudhuri equation \begin{equation} \dot{\Theta} + \frac{1}{3}\Theta^{2} - \dot{u}^{a}_{;a} + 4\pi G \left(\rho + 3 p\right) = 0\,,\label{Ray} \end{equation} where we have neglected shear and vorticity. At first order we have \begin{equation}\label{dThetacfin} \dot{\hat{\Theta}}^{c} + \frac{2}{3}\Theta\hat{\Theta}^{c} + 4\pi G\rho\delta^{c} + \frac{1}{a^2}\frac{\Delta \hat{p}^{c}}{\rho + p} = 0\,. \end{equation} Combining Eqs.~(\ref{balcomb}) and (\ref{dThetacfin}) and changing to $a$ as independent variable ($\delta^{c\prime} \equiv \frac{d \delta^{c}}{d a}$), we obtain \begin{equation} \delta^{c\prime\prime} + \left[\frac{3}{2}-\frac{15}{2}\frac{p}{\rho}+ 3\frac{\dot{p}}{\dot{\rho}}\right]\frac{\delta^{c\prime}}{a} - \left[\frac{3}{2} + 12\frac{p}{\rho} - \frac{9}{2}\frac{p^{2}}{\rho^{2}} - 9\frac{\dot{p}}{\dot{\rho}} \right]\frac{\delta^{c}}{a^{2}} + \frac{k^{2}}{a^{2}H^{2}}\frac{\hat{p}^{c}}{\rho a^{2}} = 0\,, \label{dddeltak} \end{equation} where $k$ is the comoving wavenumber. According to (\ref{rX}), for the present model \begin{equation}\label{} \hat{p}^{c} = - \frac{\sigma}{3}\hat{\Theta}^{c} \end{equation} is valid. 
With the help of (\ref{balcomb}) we find that the pressure perturbation is not just proportional to the energy-density perturbation but also to the derivative of $\delta^{c}$: \begin{equation}\label{hatpc1} \hat{p}^{c} = - \frac{1}{3}\frac{p}{1 + \frac{p}{\rho}} \left[a\delta^{c\prime} - 3\frac{p}{\rho}\delta^{c}\right]\,. \end{equation} For the gauge-invariant combination $\hat{p}^{c} - \frac{\dot{p}}{\dot{\rho}}\rho\delta^{c}$, which will become important later, we have \begin{equation}\label{nad} \hat{p}_{nad} \equiv \hat{p} - \frac{\dot{p}}{\dot{\rho}}\rho\delta = \hat{p}^{c} - \frac{\dot{p}}{\dot{\rho}}\rho\delta^{c} = - \frac{1}{3}\frac{p}{1 + w} \left[a\delta^{c\prime} + \frac{3}{2}\left(1 - w\right)\delta^{c}\right]\,. \end{equation} This quantity describes the non-adiabatic pressure perturbations. With the expression (\ref{hatpc1}) for the pressure perturbations, Eq.~(\ref{dddeltak}) takes the final form \begin{equation} \delta^{c\prime\prime} + \left[\frac{3}{2}- 6w - \frac{1}{3} \frac{w}{1+ w}\frac{k^{2}}{a^{2}H^{2}}\right]\frac{\delta^{c\prime}}{a} - \left[\frac{3}{2} + \frac{15}{2}w - \frac{9}{2}w^{2} - \frac{w^{2}}{1+ w}\frac{k^{2}}{a^{2}H^{2}}\right]\frac{\delta^{c}}{a^{2}} = 0\,. \label{dddeltak1} \end{equation} Here, the total EoS parameter $w = \frac{p}{\rho}$ is explicitly given by \begin{equation}\label{w(a)} w = \frac{p}{\rho} = - \frac{\sigma H}{\rho} = - \frac{1}{1 + r a^{-3/2}}\,, \end{equation} where \begin{equation}\label{} r \equiv \frac{\Omega_{M0} + \Omega_{B0}}{1 - \Omega_{M0} - \Omega_{B0}}\ \end{equation} is the present-time ratio of total matter (DM and baryonic matter) to DE. It is remarkable that a scale dependence appears in the $\delta^{c\prime}$ term of Eq.~(\ref{dddeltak1}). A similar feature holds in bulk-viscous models, which are characterized by a non-adiabatic dynamics as well \cite{VDF}.
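Eq.~(\ref{dddeltak1}) can be integrated numerically once $w(a)$ and $H(a)$ are specified. A minimal sketch follows (assuming the Hubble rate of the $\Lambda(t)\propto H$ model, $H/H_{0} = 1-\Omega_{m0}+\Omega_{m0}\,a^{-3/2}$ with $\Omega_{m0}=\Omega_{M0}+\Omega_{B0}$, for Eq.~(\ref{Hom}); $k$ is in units of $H_{0}$ with $c=1$, and the value $\Omega_{m0}=0.37$ is illustrative only):

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.37                         # Omega_M0 + Omega_B0 (illustrative)
r = Om0 / (1.0 - Om0)              # matter-to-DE ratio r

def w(a):                          # total EoS, Eq. (w(a))
    return -1.0 / (1.0 + r * a**-1.5)

def H(a):                          # assumed background H/H0 (Eq. (Hom))
    return 1.0 - Om0 + Om0 * a**-1.5

def rhs(a, y, k):                  # y = (delta_c, d delta_c / da)
    d, dp = y
    wa, kk = w(a), k**2 / (a * H(a))**2
    A = 1.5 - 6.0 * wa - (wa / (1.0 + wa)) * kk / 3.0
    B = 1.5 + 7.5 * wa - 4.5 * wa**2 - (wa**2 / (1.0 + wa)) * kk
    return [dp, -A * dp / a + B * d / a**2]   # Eq. (dddeltak1)

a_i = 1e-3                         # deep in the Einstein-de Sitter regime
# growing-mode initial condition: delta_c = a, delta_c' = 1
sol = solve_ivp(rhs, (a_i, 1.0), [a_i, 1.0], args=(50.0,),
                rtol=1e-8, atol=1e-10, dense_output=True)
print(sol.y[0, -1])                # delta_c today for this k
```

The initial condition $\delta^{c}=a$, $\delta^{c\prime}=1$ implements the Einstein-de Sitter growing mode of (\ref{prpreds}); the chosen $k$ is sub-horizon today.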
At high redshifts with $a\ll 1$ the EoS parameter $w$ tends to zero and (\ref{dddeltak1}) approaches \begin{equation} \delta^{c\prime\prime} + \frac{3}{2}\frac{\delta^{c\prime}}{a} - \frac{3}{2}\frac{\delta^{c}}{a^{2}} = 0 \qquad (a \ll 1)\,, \label{prpreds} \end{equation} i.e., we recover the equation for density perturbations in an Einstein-de Sitter universe. \subsection{Relative energy-density perturbations} As already mentioned, we shall calculate the baryonic matter perturbations via the total energy-density perturbations, governed by Eq.~(\ref{dddeltak1}), and the relative energy perturbations $\frac{\hat{\rho}}{\rho + p} - \frac{\hat{\rho}_{B}}{\rho_{B}}$. It is the dynamics of this difference which we shall consider in the present subsection. To this purpose, let us consider equations (\ref{ebaldelta}) and (\ref{ebalBdelta}). In (\ref{ebaldelta}) we introduce \begin{equation}\label{} D\equiv \frac{\hat{\rho}}{\rho + p}\quad\Rightarrow \quad \delta = D\left(1+\frac{p}{\rho}\right)\,, \end{equation} in terms of which Eq.~(\ref{ebaldelta}) reads \begin{equation}\label{dotD} \dot{D} + \Theta \left(\frac{\hat{p}}{\rho + p} - \frac{\dot{p}}{\dot{\rho}}D\right) + \hat{\Theta} - \Theta\hat{u}^{0} = 0\,. \end{equation} Combining the conservation equation (\ref{dotD}) for the total energy with the energy conservation (\ref{ebalBdelta}) of the baryons and defining $S_{B} \equiv D - \delta_{B}$, we obtain \begin{equation}\label{D-} \dot{S}_{B} + \left(\hat{\Theta} - \hat{\Theta}_{B}\right) + \Theta \left(\frac{\hat{p}}{\rho + p} - \frac{\dot{p}}{\dot{\rho}}D\right) = 0\,. \end{equation} In the following we shall derive an equation for $S_{B}$ in which this quantity is coupled to the total energy-density perturbations $\delta^{c}$. While the physical meaning of $\delta^{c}$ is obvious, the situation seems less clear for $S_{B}$.
Simply from the definition one has \begin{equation}\label{} S_{B} = \frac{\rho_{X}}{\rho_{M} + \rho_{B}}\delta_{X} + \frac{\rho_{M}}{\rho_{M} + \rho_{B}}\left(\delta_{M} - \delta_{B}\right)\,. \end{equation} If the DE perturbations can be neglected, which is the case in many situations (cf. \cite{zimdahl}), one has $S_{B} \propto \delta_{M} - \delta_{B}$. Thus it represents a measure for the difference in the fractional perturbations of DM and baryonic matter. It is useful as an auxiliary quantity since both the total energy-momentum and the baryon energy-momentum are conserved. According to the expressions (\ref{Thetaexp1}) and (\ref{Thetaexp}) the difference between the quantities $\hat{\Theta}$ and $\hat{\Theta}_{B}$ is \begin{equation}\label{hatT-} \hat{\Theta} - \hat{\Theta}_{B} = \frac{1}{a^{2}}\Delta\left(v - v_{B}\right)\,. \end{equation} Differentiating equation (\ref{D-}) and using the definition of $\hat{p}_{nad}$ in (\ref{nad}) results in \begin{equation}\label{D-.} \ddot{S}_{B} + \left(\hat{\Theta} - \hat{\Theta}_{B}\right)^{\displaystyle \cdot} + \left[\Theta \frac{\hat{p}_{nad}}{\rho + p}\right]^{\displaystyle \cdot} = 0\,. \end{equation} To deal with the time-derivative of expression (\ref{hatT-}) we consider the momentum conservations (\ref{mbt}) and (\ref{mbb}) which, at first order, can be written as \begin{equation}\label{} \dot{v} + \phi = - \frac{\hat{p}^{c}}{\rho + p}\ \quad \mathrm{and } \quad \dot{v}_{B} + \phi = 0\,, \end{equation} respectively. It follows that \begin{equation}\label{diffv} \left(v - v_{B}\right)^{\displaystyle \cdot} = - \frac{\hat{p}^{c}}{\rho + p}\,. \end{equation} With (\ref{diffv}) and (\ref{hatT-}) the resulting $k$-space equation for $S_{B}$ is \begin{equation}\label{D-..k} \ddot{S}_{B}+ 2 H \dot{S}_{B} + \frac{k^{2}}{a^{2}}\frac{\hat{p}^{c}}{\rho + p} + \left[3H \frac{\hat{p}^{c}_{nad}}{\rho + p}\right]^{\displaystyle \cdot} + 6H^{2}\frac{\hat{p}^{c}_{nad}}{\rho + p} = 0\,. 
\end{equation} Introducing the explicit expressions (\ref{hatpc1}) and (\ref{nad}), use of (\ref{dddeltak1}) to eliminate the second derivative of $\delta^{c}$ provides us with \begin{eqnarray} S_{B}^{\prime\prime} + \frac{3}{2}\left(1-w\right)\frac{S_{B}^{\prime}}{a}&=& \frac{w}{\left(1 + w\right)^{2}} \left[ \left(3 + \frac{3}{2}w + \frac{1}{3}\frac{1 + 2w}{1+w}\frac{k^{2}}{a^{2}H^{2}}\right)\frac{\delta^{c\prime}}{a}\qquad\qquad\qquad \right. \nonumber \\ &&\left. \qquad\qquad + \left(\frac{9}{2} - \frac{9}{4}w - \frac{9}{4}w^{2} - w\frac{1 + 2w}{1+w}\frac{k^{2}}{a^{2}H^{2}}\right)\frac{\delta^{c}}{a^{2}}\right]\,. \label{Sprpr} \end{eqnarray} The total density perturbation $\delta^{c}$ and its first derivative appear as inhomogeneities in the equation for $S_{B}$. Eqs.~(\ref{dddeltak1}) and (\ref{Sprpr}) are the key equations of this paper. In the next section we demonstrate how a solution of the coupled system (\ref{dddeltak1}) and (\ref{Sprpr}) will allow us to obtain the perturbations of the baryon fluid. It is expedient to notice that for $a \ll 1$ one has $w\approx 0$ and the total cosmic medium behaves as dust. Under this condition the right-hand side of Eq.~(\ref{Sprpr}) vanishes and we can use $S_{B} = $ const $\approx 0$ as initial condition for the numerical analysis. \subsection{Baryonic energy-density perturbations} By definition, the fractional baryonic energy-density perturbations $\delta_{B} = \frac{\hat{\rho}_{B}}{\rho_{B}}$ are determined by $D$ and $S_{B}$, \begin{equation}\label{} \delta_{B} = D - S_{B}\,. \end{equation} Since $S_{B}$ is gauge-invariant by itself, we may write \begin{equation}\label{} S_{B} = D - \frac{\hat{\rho}_{B}}{\rho_{B}} = \frac{\delta^{c}}{1+w} - \delta_{B}^{c}\,, \end{equation} where \begin{equation}\label{} \delta_{B}^{c} = \delta_{B} + \frac{\dot{\rho}_{B}}{\rho_{B}}v = \delta_{B} - \Theta v\,. 
\end{equation} Consequently, the comoving (with $v$) baryon energy-density perturbations are given by the combination \begin{equation}\label{deltaB} \delta_{B}^{c} = \frac{\delta^{c}}{1+w} - S_{B}\,. \end{equation} \begin{figure}[!htb] \subfloat[]{ \includegraphics[width=7cm]{bao+cmb+snemlcs.eps} \label{fig1a} } \quad \subfloat[]{ \includegraphics[width=7cm]{bao+cmb+snemlcs+lss.eps} \label{fig1b} } \caption{(a) Constitution data set with MLCS17 fitter combined with BAO and the position of the first acoustic peak. (b) The same as in (a) with LSS data added. The dashed and continuous contour lines refer to the $1\sigma$ and $2\sigma$ confidence regions, respectively. The blue regions indicate the results of the joint tests at the $2\sigma$ level.} \label{fig1} \end{figure} It seems more convenient, however, to consider the perturbations of the baryon fluid with respect to the velocity potential $v_{B}$ of the baryon component itself. These perturbations are obtained via \begin{equation}\label{vcB} \delta_{B}^{c_{B}} \equiv \delta_{B} - \Theta v_{B} = \delta_{B}^{c} + \Theta\left(v - v_{B}\right)\,. \end{equation} Use of (\ref{D-}) with (\ref{nad}) and (\ref{hatT-}) leads to \begin{equation}\label{} \frac{k^{2}}{a^{2}}\left(v - v_{B}\right) = \dot{S}_{B} + 3H \frac{\hat{p}_{nad}}{\rho + p}\,. \end{equation} For $\delta_{B}^{c_{B}}$ we obtain \begin{equation}\label{ccB} \delta_{B}^{c_{B}} = \delta_{B}^{c} + 3 \frac{a^{2}H^{2}}{k^{2}}\left[aS_{B}^{\prime} + \frac{\hat{p}_{nad}}{\rho + p}\right]\,. \end{equation} Equation (\ref{ccB}) establishes a relation between perturbations measured by an observer, comoving with the baryon fluid and perturbations measured by an observer, comoving with the total velocity of the cosmic substratum. Obviously, the difference between both quantities depends on the perturbation scale. On small scales $\frac{a^{2}H^{2}}{k^{2}} \ll 1$ one has $\delta_{B}^{c_{B}} \approx \delta_{B}^{c}$, i.e., the difference is negligible. 
Explicitly, $\delta_{B}^{c_{B}}$ is given in terms of $\delta^{c}$ and $S_{B}$ and their first derivatives by \begin{equation}\label{deltacB} \delta_{B}^{c_{B}} = \frac{\delta^{c}}{1+w} - S_{B} + 3 \frac{a^{2}H^{2}}{k^{2}}\left[aS_{B}^{\prime} - \frac{w}{3}\frac{1}{\left(1+w\right)^{2}} \left(a\delta^{c\prime} + \frac{3}{2}\left(1-w\right)\delta^{c}\right)\right]\,. \end{equation} One now has to solve Eq.~(\ref{dddeltak1}) for $\delta^{c}$ and afterwards equation (\ref{Sprpr}) for $S_{B}$, in which $\delta^{c}$ and its first derivative appear as inhomogeneities. The coefficients are given by (\ref{Hom}) and (\ref{w(a)}). The initial conditions at high redshift are determined by the Einstein-de Sitter type behavior (\ref{prpreds}) with $S_{B} \approx 0$, equivalent to an almost adiabatic behavior. The perturbations of the baryonic component are then found from the combinations (\ref{deltaB}) or (\ref{deltacB}). As already mentioned, because of the factor $\frac{a^{2}H^{2}}{k^{2}}$ in front of the last term on the right-hand side of (\ref{deltacB}) one expects negligible differences between $\delta_{B}^{c_{B}}$ and $\delta_{B}^{c}$ on sub-horizon scales $k^{2} \gg a^{2}H^{2}$. \begin{figure}[!h] \subfloat[]{ \includegraphics[width=7cm]{bao+cmb+snesalt.eps} \label{fig2a} } \quad \subfloat[]{ \includegraphics[width=7cm]{bao+cmb+snesalt+lss.eps} \label{fig2b} } \caption{Data as in Fig.~1, here with SALT II fitter.} \label{fig2} \end{figure} \section{Observational analysis} \label{observations} As far as the background dynamics is concerned, the explicit inclusion of a baryon component does not significantly change the Hubble rate (\ref{Hom}). It is only the combination $\Omega_{M0} + \Omega_{B0}$ which matters.
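This statement can be checked directly. A short numerical sketch (assuming the form $H/H_{0}=1-\Omega_{m0}+\Omega_{m0}\,a^{-3/2}$ for the Hubble rate (\ref{Hom}), with $\Omega_{m0}\equiv\Omega_{M0}+\Omega_{B0}$) verifies that this background, together with $p=-\sigma H$ and $\rho\propto H^{2}$, reproduces the EoS (\ref{w(a)}) and depends on the two abundances only through their sum:

```python
import numpy as np

OM0, OB0 = 0.30, 0.05              # illustrative abundances
Om0 = OM0 + OB0                    # only the sum enters the background
r = Om0 / (1.0 - Om0)

def H(a):                          # assumed form of Eq. (Hom): H/H0
    return 1.0 - Om0 + Om0 * a**-1.5

def w_eos(a):                      # Eq. (w(a))
    return -1.0 / (1.0 + r * a**-1.5)

# p = -sigma*H and rho ~ H^2 imply w = p/rho = -(1 - Om0)/(H/H0)
a = np.logspace(-3, 0, 50)
assert np.allclose(w_eos(a), -(1.0 - Om0) / H(a))
print("EoS and Hubble rate are consistent")
```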
For our background tests, which in part are updates of previous studies, we have considered data from SNIa (Constitution \cite{Hicken} and Union 2.1 \cite{suzuki}), BAO \cite{eisenstein,tegmark,percival} and the position of the first acoustic peak of the CMB spectrum \cite{hinshaw,spergel}. For a more complete analysis of the SNIa samples and to test the robustness of the results, we use both the fitters Multicolor Light Curve Shapes (MLCS) \cite{riess1} and Spectral Adaptive Lightcurve Template (SALT II) \cite{guy,guy1}. As is well known, SNIa tests use the luminosity distance modulus \begin{equation} \mu=5\log d_L(z)+\mu_0 \ \label{modulo} \end{equation} with $\mu_0=42.384-5\log h $, where \begin{eqnarray} d_{L}=\left(z+1\right)H_0\int_{0}^{z}\frac{dz'}{H\left(z'\right)}\, \label{luminosidade} \end{eqnarray} and $h$ is given by $H_0 = 100\, h\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$. Tests against BAO data are based on the geometric quantity \cite{eisenstein,tegmark,percival} \begin{equation} D_{v}\left(z\right)=\left[\left(1+z\right)^{2}d_{A}^{2}\frac{z}{H\left(z\right)}\right]^{1/3}\,, \end{equation} where $d_{A}$ is the angular-diameter distance. Concerning the position of the first acoustic peak of the CMB anisotropy spectrum, we rely on the distance scale \cite{hu1,sethi}, \begin{figure}[!t] \subfloat[]{ \includegraphics[width=7cm]{bao+cmb+sneunion.eps} \label{fig3a} } \quad \subfloat[]{ \includegraphics[width=7cm]{bao+cmb+sneunion+lss.eps} \label{fig3b} } \caption{(a) Union 2.1 data set with SALT II fitter combined with BAO and the position of the first acoustic peak. (b) The same as in (a) with LSS data added. The dashed and continuous contour lines refer to the $1\sigma$ and $2\sigma$ confidence regions, respectively. The blue regions indicate the results of the joint tests at the $2\sigma$ level.} \label{fig3} \end{figure} \begin{equation} l_{1}=l_{A}\left(1-\delta_{1}\right)\,.
\end{equation} Here, $l_{A}$ is the acoustic scale, the ratio of the comoving distance to the last-scattering surface (at redshift $z_{ls}$) to the comoving sound horizon at last scattering ($c_{s}$ is the baryon-photon sound speed), \begin{eqnarray} l_{A}=\pi \frac{\int_{0}^{z_{ls}}\frac{dz}{H(z)}}{\int_{z_{ls}}^{\infty} c_{s}\frac{dz}{H(z)}}, \qquad c_{s}=\left(3+\frac{9\Omega_{B0}}{4\Omega_{R0}}z^{-1}\right)^{-1/2}\, \end{eqnarray} and $\delta_{1}\approx 0.267\left(\frac{10\Omega_{R_{0}}}{3\Omega^{2}_{m_{0}}}\right)^{1/10}$ is a correction term, adapted to the decaying vacuum model \cite{Pigozzo2011}. At the perturbative level we consider the LSS data of Ref.~\cite{sdss} and calculate the baryonic power spectrum $P_k\propto |\delta_B|^2$. \begin{figure}[!h] \subfloat[]{ \includegraphics[width=6.5cm]{powerspectrum.eps} \label{fig4a} } \subfloat[]{ \includegraphics[width=6.5cm]{pslcdm.eps} \label{fig4b} } \caption{Left panel: baryonic matter-power spectrum with different values of $\Omega_{M0}$. Values between $0.28$ and $0.36$ are in reasonable agreement with the LSS data (SDSS DR7). Notice that these values are considerably lower than those found in \cite{Pigozzo2011,zimdahl,saulochap} ($\sim 0.37 -0.43$) without a separate baryon component. Right panel: best-fit power spectra for the $\Lambda(t)$CDM and $\Lambda$CDM models.} \label{fig4} \end{figure} \noindent For our tests we perform a $\chi^2$ analysis, using \begin{eqnarray} \chi^{2}(\theta)=\sum\limits_{i=1}^{N}\frac{\left[y_{i}-y\left(x_{i}\vert\theta\right)\right]^{2}}{\sigma_{i}^{2}}\,. \label{chi2} \end{eqnarray} Here, the $y_{i}$ are the observational data (SNIa, CMB, BAO, LSS) which are compared with the theoretical predictions $y(x_{i}\vert\theta)$, where $\theta$ represents a set of model parameters and $\sigma_i$ denotes the error bars. From $\chi^{2}$ in (\ref{chi2}) one defines the probability distribution function (PDF) $\mathcal{P}\propto\exp\left(-\frac{\chi^{2}(\theta)}{2}\right)$. For the present model the set of parameters is $\theta=(h, \Omega_{B0}, \Omega_{M0})$. In a first step, however, we fix the baryon abundance in agreement with primordial nucleosynthesis.
Under this condition the free parameters are the same as in the $\Lambda$CDM model, namely $h$ and the DM abundance $\Omega_{M0}$. Our results are presented in figures 1 - 3. The dashed and continuous contour lines in all these figures refer to the $1\sigma$ and $2\sigma$ confidence levels (CL), respectively. Fig. 1(a) shows the $h$ - $\Omega_{M_{0}}$ plane based on the Constitution data with MLCS17 fitter combined with data from BAO and the position of the first acoustic peak of the CMB. In Fig. 1(b) we have added LSS data to the background tests of Fig.~1(a). In both cases blue regions mark the results of the joint tests at the $2\sigma$ CL. Figures 2(a) and 2(b) visualize the $h$ - $\Omega_{M_{0}}$ plane for the same data as in Figs. 1(a) and 1(b), but with SALT II fitter. In Figs. 3(a) and 3(b) the corresponding curves for the Union 2.1 sample are presented. Again, in both cases blue regions indicate the results of the joint tests at the 2$\sigma$ CL. Our background tests largely reproduce previous results \cite{Pigozzo2011}. The only difference is that our value for the position of the first acoustic peak differs slightly from that of \cite{Pigozzo2011}: in our case the baryon abundance is fixed both in the Hubble rate and in the expression for the sound speed, whereas in \cite{Pigozzo2011} it is fixed only for calculating the sound speed. The best-fit values for the background tests alone are summarized in Table I, where we compare our model with the $\Lambda$CDM model via their $\chi_{\nu}^{2}$ values (reduced $\chi^{2}$ values). For the joint background and LSS tests we find the best-fit values in Table II.
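The statistical procedure behind these tables can be sketched as a simple grid scan of the $\chi^{2}$ function (\ref{chi2}). The data below are synthetic distance moduli generated for illustration only (the actual analysis uses the SNIa, BAO, CMB and LSS sets described above), and the background is the assumed form $H/H_{0}=1-\Omega_{m0}+\Omega_{m0}(1+z)^{3/2}$:

```python
import numpy as np
from scipy.integrate import quad

# synthetic "observations": (z, mu, sigma) -- illustration only
z_obs  = np.array([0.1, 0.3, 0.5, 0.8, 1.0])
mu_obs = np.array([38.3, 41.0, 42.3, 43.6, 44.2])
sigma  = np.full_like(mu_obs, 0.2)

def mu_model(z, h, Om0):           # Eqs. (modulo) and (luminosidade)
    I, _ = quad(lambda zp: 1.0 / (1.0 - Om0 + Om0 * (1.0 + zp)**1.5), 0.0, z)
    dL = (1.0 + z) * I             # dimensionless H0 * d_L (c = 1)
    return 5.0 * np.log10(dL) + 42.384 - 5.0 * np.log10(h)

def chi2(h, Om0):                  # Eq. (chi2)
    res = mu_obs - np.array([mu_model(z, h, Om0) for z in z_obs])
    return np.sum((res / sigma)**2)

hs  = np.linspace(0.60, 0.75, 31)
Oms = np.linspace(0.20, 0.50, 31)
grid = np.array([[chi2(h, Om) for Om in Oms] for h in hs])
i, j = np.unravel_index(grid.argmin(), grid.shape)
pdf = np.exp(-0.5 * (grid - grid.min()))     # PDF ~ exp(-chi^2/2)
print(hs[i], Oms[j])                          # best-fit (h, Omega_m0)
```

The confidence contours of the figures correspond to level sets of this two-dimensional PDF.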
\begin{table} \caption{Best fit values at the 2$\sigma$ CL using background tests (SNIa, BAO, CMB).} {\scriptsize{\begin{tabular}{c|ccc|ccc} \hline \hline &\multicolumn{3}{c|}{\textbf{$\Lambda$CDM} } & \multicolumn{3}{|c}{\textbf{$\Lambda(t)$CDM}} \\\hline \hline Test &h&$\Omega_{m0}$&$\chi^2_{\nu}$ & h&$\Omega_{m0}$&$\chi^2_{\nu}$ \\ \hline SNIa Constitution (MLCS17) &$0.650^{+0.009}_{-0.009}$&$\,\, 0.324^{+0.056}_{-0.054}$&$\,\,1.087$ & $0.648^{+0.009}_{-0.010}$&$\,\,0.399^{+0.066}_{-0.062}$&$\,\,1.087$ \\ \hline SNIa Constitution (SALT II) & $0.649^{+0.010}_{-0.009}$&$0.282^{+0.057}_{-0.060}$ &$0.979$ & $0.647^{+0.011}_{-0.012}$& $0.355^{+0.072}_{-0.066}$& $0.983$ \\ \hline SNIa Union 2.1 & $0.700^{+0.008}_{-0.008}$&$0.278^{+0.032}_{-0.040}$ & $0.973$& $0.697^{+0.009}_{-0.009}$& $0.348^{+0.041}_{-0.051}$&$0.975$ \\ \hline BAO+CMB+SNIa Constitution (MLCS17)& $0.656^{+0.010}_{-0.011}$ & $0.255^{+0.015}_{-0.012}$& $1.090$&$0.651^{+0.011}_{-0.016}$ &$0.377^{+0.017}_{-0.018}$&$1.094$\\ \hline BAO+CMB+SNIa Constitution (SALT II)& $0.652^{+0.012}_{-0.014}$ & $0.257^{+0.013}_{-0.012}$ &$0.992$ &$0.645^{+0.016}_{-0.019}$ &$0.382^{+0.018}_{-0.018}$ &$0.996$\\ \hline BAO+CMB+SNeIa Union 2.1& $0.701^{+0.008}_{-0.007}$ &$0.242^{+0.009}_{-0.008}$ &$0.969$ & $0.699^{+0.007}_{-0.014}$&$0.328^{+0.013}_{-0.016}$ &$0.974$\\ \hline \end{tabular}}} \label{table1} \end{table} \begin{figure}[!b] \begin{center} \includegraphics[width=1.0\textwidth]{barions.eps} \caption{PDFs for the baryon fraction $\Omega_{B0}$ (left panel) and the DM fraction $\Omega_{M0}$ (central panel) based on the LSS data. The right panel shows the $\Omega_{B0}$-$\Omega_{M0}$ plane with the $1\sigma$, $2\sigma$ and $3\sigma$ contour lines. 
The dot indicates the best-fit values $\Omega_{B0}= 0.05\pm 0.02$ and $\Omega_{M0}=0.35\pm0.03$ at the $2\sigma$ CL.} \end{center} \label{fig5} \end{figure} Our analysis confirms that the decaying $\Lambda$ model predicts a higher value of the current DM abundance than the $\Lambda$CDM model. Interestingly, from the LSS data alone we find (at the $2\sigma$ CL) $\Omega_{M0}=0.32\pm 0.04$, a lower value than for the model without a separate baryon component \cite{Pigozzo2011,zimdahl,saulochap}, although still higher than in the $\Lambda$CDM model. The $\chi^2_{\nu}$ values in Table I reveal that, as far as the background dynamics is concerned, our $\Lambda(t)$CDM model is competitive with the $\Lambda$CDM model. On the other hand, comparing the results for the baryon power spectrum, the situation changes. While for the data from the 2dFGRS project \cite{cole} we find $\chi_{\nu}^{2} \approx 0.91$ for the $\Lambda(t)$CDM model and $\chi_{\nu}^{2} \approx 0.96$ for $\Lambda$CDM, the SDSS DR7 data with their much smaller error bars clearly favor the $\Lambda$CDM model with $\chi_{\nu}^{2} \approx 0.93$ compared with $\chi_{\nu}^{2} = 3.63$ of the decaying $\Lambda$ model. The left panel of Fig.~4 visualizes the baryonic power spectrum confronted with the SDSS DR7 data for different values of $\Omega_{M0}$. The best-fit power spectra for both models are shown in Fig. 4. One should keep in mind here that in obtaining the spectrum the BBKS transfer function \cite{bbks} was used which naturally favors the $\Lambda$CDM model. \begin{figure}[!t] \subfloat[]{ \includegraphics[width=5.5cm]{barions-mlcs.eps} \label{fig6a} } \subfloat[]{ \includegraphics[width=5.5cm]{barions-salt.eps} \label{fig6b} } \subfloat[]{ \includegraphics[width=5.5cm]{barions-union.eps} \label{fig6c} } \caption{Two-dimensional contour plots for the abundances of baryons and DM. (a) Joint analysis with data from LSS, CMB, BAO and Constitution SNIa data with SALT II fitter. 
(b) Same data as in (a) with MLCS17 fitter. (c) Joint analysis with data from LSS, CMB, BAO and Union 2.1 SNIa data. The results for the baryon abundance (see Table III) are in agreement with primordial nucleosynthesis. } \label{fig6} \end{figure} In the tests so far the baryon fraction $\Omega_{B0}$ was assumed to be given. Now we relax this assumption and consider $\Omega_{B0}$ and $\Omega_{M0}$ to be free parameters. Performing a statistical analysis of the LSS data with $h=0.7$ as a prior (in concordance with our result for the Union 2.1-based background test in Tab. I), we obtain the two-dimensional curves in the right panel of Fig.~5 with the best-fit values $\Omega_{B0}= 0.05\pm 0.02$ ($2\sigma$ CL) and $\Omega_{M0}= 0.35\pm 0.03$ ($2\sigma$ CL). The one-dimensional PDF for $\Omega_{B0}$ (left panel of Fig.~5) is then found by fixing $\Omega_{M0}= 0.35$, the corresponding plot for $\Omega_{M0}$ (central panel) by fixing $\Omega_{B0}= 0.05$. The same PDFs follow for a prior $h=0.65$, indicating that these results do not depend strongly on the specific choice of the prior. Remarkably, the best-fit value $\Omega_{B0}= 0.05\pm 0.02$ ($2\sigma$ CL) is found to be in agreement with the result from nucleosynthesis and, at the same time, also demonstrates the consistency of our approach. As a next step we performed an enlarged analysis using the entire set of data (SNIa, CMB, BAO and LSS). This enlarged analysis (see Fig.~6) confirms the LSS-based results of Fig.~5. The left panel of Fig.~6 shows the two-dimensional contour plots resulting from a joint test with LSS, CMB, BAO and the Constitution data with SALT II fitter. Figure~6(b) was obtained with the same data but now with MLCS17 fitter. On the basis of the Union 2.1 data we found the results in Fig.~6(c). The best-fit values for the baryon and DM abundances are summarized in Table III.
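The marginalization step just described (fixing one abundance at its best-fit value and reading off the one-dimensional PDF of the other) can be sketched with a toy $\chi^{2}$ surface whose minimum mimics Fig.~5 (the Gaussian shape below is an illustration only; the actual analysis uses the LSS likelihood):

```python
import numpy as np

OB = np.linspace(0.00, 0.12, 121)      # Omega_B0 grid
OM = np.linspace(0.20, 0.50, 121)      # Omega_M0 grid
B, M = np.meshgrid(OB, OM, indexing="ij")

# toy chi^2 surface with its minimum at (0.05, 0.35), mimicking Fig. 5
chi2 = ((B - 0.05) / 0.01)**2 + ((M - 0.35) / 0.015)**2

# 1D PDF for Omega_B0: fix Omega_M0 at its best-fit value (a slice,
# as in the text), then normalize exp(-chi^2/2) on the grid
j = np.argmin(chi2.min(axis=0))        # index of the best-fit Omega_M0
pdf_B = np.exp(-0.5 * chi2[:, j])
pdf_B /= pdf_B.sum() * (OB[1] - OB[0]) # simple Riemann normalization
print(OB[np.argmax(pdf_B)])            # peaks at the best-fit value 0.05
```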
We conclude that our results for the baryon abundance are in agreement with the results from nucleosynthesis at the 2$\sigma$ CL. The consistent reproduction of the cosmic baryon abundance on the basis of data from LSS and background tests is a main achievement of this paper. \ \\ {\scriptsize{ \begin{table} \caption{Best fit values at the 2$\sigma$ CL using joint tests (SNIa, BAO, CMB, LSS).} {\scriptsize{\begin{tabular}{c|cc|cc} \hline \hline &\multicolumn{2}{c|}{\textbf{$\Lambda$CDM} } & \multicolumn{2}{|c}{\textbf{$\Lambda(t)$CDM}} \\\hline \hline Test &$\Omega_{M0}$&$\chi^2_{\nu}$ & $\Omega_{M0}$&$\chi^2_{\nu}$ \\ \hline LSS& $0.292^{+0.025}_{-0.023}$& $0.929$&$0.363^{+0.032}_{-0.031}$&$3.634$\\ \hline BAO+CMB+SNIa Constitution (MLCS17)+LSS &$\,\, 0.315^{+0.026}_{-0.024}$&$\,\,0.970$ &$\,\,0.375^{+0.034}_{-0.036}$&$\,\,1.352$ \\ \hline BAO+CMB+SNIa Constitution (SALT II)+LSS&$0.310^{+0.025}_{-0.022}$ &$0.975$ & $0.375^{+0.038}_{-0.040}$& $1.352$ \\ \hline BAO+CMB+SNIa Union 2.1+LSS &$0.284^{+0.021}_{-0.021}$ & $0.963$& $0.330^{+0.032}_{-0.035}$&$1.240$ \\ \hline \end{tabular}}} \label{table2} \end{table} }} {\scriptsize{ \begin{table} \caption{Best-fit values for the $\Lambda$(t)CDM model at the 2$\sigma$ CL using data from SNIa, CMB, BAO and LSS, considering DM and baryon abundances as free parameters.} \begin{tabular}{c|cc} \hline \hline Test &$\Omega_{B_{0}}$ &$\Omega_{M_{0}}$ \\ \hline\hline LSS& $0.054^{+0.023}_{-0.018}$ & $0.347^{+0.023}_{-0.025}$ \\ \hline BAO+CMB+SNe Ia Constitution (MLCS17)+LSS& $0.026^{+0.013}_{-0.008}$ & $0.324^{+0.015}_{-0.012}$ \\ \hline BAO+CMB+SNe Ia Constitution (SALT II)+LSS& $0.051^{+0.010}_{-0.010}$ & $0.325^{+0.015}_{-0.010}$ \\ \hline BAO+CMB+SNe Ia Union 2.1 (SALT II)+LSS& $0.083^{+0.032}_{-0.041}$ & $0.317^{+0.028}_{-0.018}$ \\ \hline \end{tabular} \label{tabela3} \end{table} }} \section{Conclusions} \label{conclusions} The components of the cosmological dark sector, DM and DE, are dominating the overall dynamics of the 
Universe. The small baryonic fraction of presently less than 5\% of the energy budget only marginally influences the homogeneous and isotropic expansion history. With the help of data from SNIa, BAO and the position of the first peak of the CMB anisotropy spectrum we updated and confirmed previous results for the background. But as far as structure formation is concerned, the situation is different. The directly observed inhomogeneous matter distribution in the Universe is the distribution of visible, i.e., baryonic matter. While the standard scenario, according to which the baryons after radiation decoupling are falling into the potential wells created by the DM inhomogeneities, may suggest a similar distribution of DM and baryonic matter, the situation is less clear if DM is in (non-gravitational) interaction with DE, while the (directly) observed baryon component is separately conserved. We have carried out a detailed gauge-invariant perturbation analysis for the baryon fluid in a $\Lambda(t)$CDM cosmology in which a cosmological term is decaying into DM linearly with the Hubble rate. Our key result is an expression for the fractional baryon energy-density perturbation for an observer comoving with the baryon fluid. Using the LSS data of the SDSS DR7 project we obtained the PDF for the baryon abundance of the Universe independently of the DM abundance. The best-fit value of this abundance is $\Omega_{B0} = 0.05 \pm 0.02$ (2$\sigma$ CL), in remarkable agreement with the result from primordial nucleosynthesis. A combined analysis, including also data from SNIa, BAO and CMB, confirms this result. For the best-fit value of the DM abundance we found $\Omega_{M0}=0.32\pm0.02$ ($2\sigma$ CL) from the combined analysis (LSS+BAO+SNIa(Union2.1)+CMB) and $\Omega_{M0}= 0.35\pm 0.03$ ($2\sigma$ CL) from the LSS data alone.
These values are higher than those for the standard model but smaller than the corresponding value for a $\Lambda(t)$CDM model without a separately conserved baryon component. Generally, the explicit inclusion of the baryon fluid improves the concordance between background and perturbation dynamics. Our results indicate that the investigated $\Lambda(t)$CDM cosmology, which does not have a $\Lambda$CDM limit, has a competitive background dynamics but as far as the baryon matter power spectrum is concerned, the $\Lambda$CDM model is clearly favored. \acknowledgments{We thank Saulo Carneiro and J\'{u}lio Fabris for helpful discussions. Financial support by CAPES, FAPES and CNPq is gratefully acknowledged. WSHR is thankful to FAPES for the grant (BPC No 476/2013), under which this work was carried out.}
\subsection{Basics} \subsection{Differential Privacy (DP)} DP is a statistical disclosure control technique ensuring that the outputs of queries do not leak information about the individuals found in a dataset. It injects a certain amount of noise into the replies of queries so that nothing can be inferred about any single individual, while the output of the query remains ``almost'' the same. In other words, the query results of a data release algorithm on two closely similar data sets give approximately the same answer. The formal definition of $\epsilon$-differential privacy is formulated as follows~\cite{dwork2014algorithmic}: \begin{definition} A randomized algorithm $\mathcal{M}$ is $\epsilon$-differentially private if for all data sets $D$ and $D'$ differing on at most one element and all $S \subseteq Range(\mathcal{M})$, \begin{equation} Pr[\mathcal{M}(D) \in S] \leq \exp(\epsilon) \times Pr[\mathcal{M}(D') \in S], \end{equation} where $Range(\mathcal{M})$ denotes all possible outputs of the function (query) $f$. \end{definition} The definition states that two adjacent data sets $D$ and $D'$, which differ in at most one element, behave approximately the same under a query\footnote{The queries or functions correspond to the predictions in the statistical models.} answered by a given mechanism $\mathcal{M}$. $\epsilon$ can be considered the degree of the privacy guarantee, and the amount of information that can be learned from the result of a single query is bounded by $\exp(\epsilon)$. When $\epsilon$ is small, a meaningful guarantee is preserved even across consecutive queries, since the privacy losses of successive queries compose additively. Differential privacy works on the release mechanism and does not modify the data or the format of the data in any way. The parameter $\epsilon$, called the \textit{privacy budget}, is the main parameter to tune the balance between privacy and accuracy. Decreasing $\epsilon$ increases the privacy guarantees while decreasing the accuracy.
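To make the definition concrete, the following is a minimal sketch of an $\epsilon$-differentially private counting query. It perturbs the true count with noise drawn from a Laplace distribution of scale $1/\epsilon$ (the mechanism formalized next); a counting query has sensitivity $1$, since adding or removing one record changes the count by at most one:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_count(db, predicate, epsilon):
    """epsilon-DP count: true count plus Lap(sensitivity/epsilon) noise.

    A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for row in db if predicate(row))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# hypothetical toy database of ages
ages = [34, 29, 51, 42, 67, 23, 58]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # randomized, near 4
```

Smaller $\epsilon$ means a larger noise scale $1/\epsilon$ and hence stronger privacy at the cost of accuracy, exactly the trade-off described above.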
The common mechanism to control the amount of noise that needs to be added is the \textit{Laplace Mechanism} (LM). In this \revision{case}, the noise is drawn from a Laplace distribution. The probability density function of LM is as follows: \begin{equation} \label{eqn:laplace} Lap(x|b)=\frac{1}{2b} \exp\left(-\frac{|x|}{b}\right), \end{equation} for scale $b$ and center $0$. It is shown that LM preserves $\epsilon$-differential privacy~\cite{dwork2014algorithmic}. \begin{definition} Given any function $f: \mathbb{N}^{|\mathcal{X}|} \to \mathbb{R}^{k}$, the mechanism $\mathcal{M}$ is a Laplace Mechanism if: \begin{equation} \mathcal{M}(x)=f(x)+ \eta, \end{equation} where $x \in \mathcal{X}$ and $\eta$ is a vector of independent and identically distributed random variables drawn from Lap($\Delta f / \epsilon$). \end{definition} In addition to $\epsilon$, \textit{sensitivity} is another important parameter in DP for determining the optimum noise amount. It is defined as follows: \begin{definition} For a function $f:D \rightarrow R^k$, the sensitivity of $f$ is \begin{equation} \Delta f = \max_{D,D'} \parallel f(D)-f(D') \parallel \end{equation} for all $D, D'$ differing in at most one element. \end{definition} The sensitivity captures the maximum amount by which the output of $f$ can change between two databases that differ in at most one element. \vspace{3pt} \noindent\textbf{Functional Mechanism (FM)-} FM is an algorithm that is used to provide differential privacy guarantees for a set of linear models~\cite{zhang2012functional}. It is an extension of the Laplace Mechanism. The goal of the algorithm is to inject noise into the polynomial coefficients of a model's objective function. This is accomplished with the mechanism of \textit{objective perturbation}~\cite{chaudhuri2011differentially}. The optimization of the noisy objective function gives new model parameters that ensure $\epsilon$-differential privacy for each element in a database.
Algorithm~\ref{alg:func_mech}~\cite{zhang2012functional} presents the functional mechanism. \begin{algorithm} \caption{\cite{zhang2012functional} Functional Mechanism ($D$, $\mathcal{L}$, $\epsilon$)}\label{alg:func_mech} \begin{algorithmic}[1] \Require Let $\mathcal{L} (f,D)=\displaystyle \sum_{j=1}^J \sum_{\phi \in \Phi_j} \sum_{i=1}^n \lambda_{\phi_i} \phi (w)$ \State Set $\Delta=2 \displaystyle \max_{\substack{w}} \sum_{i=1}^n ||\lambda_{\phi_i}||_1 $ \For{each $j \in \{0,...,J\}$} \For{each ${\phi \in \Phi_j}$} \State $\lambda_{\phi}=\sum_{i=1}^n \lambda_{\phi_i}+Lap(\frac{\Delta}{\epsilon})$ \Comment{$noise~inject$} \EndFor \EndFor \State Compute new $w^*= \displaystyle\arg \min_{\substack{w}} \mathcal{L} (f,D)$ \Comment{$optimize$} \\ \Return $w^*$ \end{algorithmic} \end{algorithm} \begin{figure*} \centering \includegraphics[width=.75\textwidth,trim= 4cm 1.5cm 8cm 2cm]{framework_final} \caption{Secure Multiparty Distributed Differentially Private (SM-DDP) protocol for the computation of a linear model coefficients. The parties create a ring topology and the Data Collector (DC) initiates the protocol. The protocol can be applied to any statistical model function that allows independent calculation of local statistics. \vspace{-10pt} \label{fig:framework}} \end{figure*} As illustrated in Algorithm~\ref{alg:func_mech}, FM takes a dataset $D$, the polynomial representation of the objective function $L$, and the privacy budget $\epsilon$ as inputs and it returns the differentially private model coefficients $w^*$. It firstly injects noise drawn from a Laplacian distribution ($Lap(\frac{\Delta}{\epsilon})$) into all the coefficients $\lambda_{\phi_i}$ of the polynomial representation of the objective function and then the optimization is performed using noisy coefficients. 
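To make the two steps of the functional mechanism concrete for least squares, the following minimal sketch perturbs the polynomial coefficients of $\|Xw-y\|^2$ and then optimizes the noisy objective via the normal equations. It is not the full algorithm of~\cite{zhang2012functional}: the regularization and spectral trimming steps are omitted, and the toy data, the seed, and the sensitivity value $\Delta=2(d+1)^2$ (borrowed from our setup phase) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy normalized data: n samples, d attributes (FM assumes normalized inputs).
n, d = 2000, 3
X = rng.uniform(-0.5, 0.5, size=(n, d))
w_true = np.array([0.3, -0.2, 0.1])
y = X @ w_true + rng.normal(0.0, 0.01, size=n)

# Polynomial coefficients of the least-squares objective ||Xw - y||^2.
P = X.T @ X              # quadratic coefficients
V = X.T @ y              # linear coefficients

# Step 1 (noise injection): perturb every coefficient with Lap(Delta/epsilon).
epsilon = 1.0
Delta = 2 * (d + 1) ** 2           # assumed sensitivity, as in our setup phase
P_star = P + rng.laplace(0.0, Delta / epsilon, size=P.shape)
V_star = V + rng.laplace(0.0, Delta / epsilon, size=V.shape)

# Step 2 (optimize): minimize the noisy objective via the normal equations.
w_star = np.linalg.solve(P_star, V_star)
```

Without the regularization step of the original algorithm, `P_star` can in general be ill-conditioned; the fixed seed keeps this sketch well-behaved.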
It is shown that FM satisfies $\epsilon$-differential privacy~\cite{zhang2012functional}, {i.e.},~ predictions made using $w^*$ do not leak any information about an individual in the database. For example, suppose we have a quadratic objective function in the matrix form $w^\top \mathcal{P} w+w^\top \mathcal{V}+\mathcal{O}$, where $\mathcal{P}$, $\mathcal{V}$, and $\mathcal{O}$ are the coefficients of the polynomial representation of the objective function. FM first injects noise into the coefficients, which results in $w^\top \mathcal{P}^* w+w^\top \mathcal{V}^*+\mathcal{O}^*$. Then, the optimization problem (i.e., $w^*= \displaystyle\arg \min_{\substack{w}} \mathcal{L} (f,D)$) is solved using $\mathcal{P}^*$, $\mathcal{V}^*$, and $\mathcal{O}^*$. \section{Discussion} \label{sec:discuss} \vspace{-5pt} The preceding analysis showed how to achieve secure multiparty computation and differential privacy in distributed settings, focusing on linear regression on horizontally distributed data. That is, parties do not see each others' inputs and furthermore cannot infer individuals' data from the final constructed model. A limitation of our algorithm is that we assume parties do not collaborate to learn a target party's input. \revision{However, if the party that generates the key pair conspires with the parties that are neighbors of a target in the ring topology, the noisy local statistics ($\xi $, $\kappa $, $\delta $) of the victim can be extracted.} More generally, this is known as \textit{active corruption}, where the data collector is an active attacker and has control over the other corrupted parties. \revision{Our protocol in Fig.~\ref{fig:framework} achieves only a collusion threshold of 1, but the distributed DP algorithm that we present here can easily be adapted to work with recent solutions in SMC such as~\cite{damgaard2012multiparty}, which is secure in the presence of an active adversary corrupting up to $n-1$ of the $n$ parties.
To extend our work with these more secure SMC schemes, it suffices to use the noisy output of the functional mechanism instead of using the local statistics directly as input to the underlying SMC algorithm.} In our evaluation, we used HElib, an implementation of fully homomorphic encryption, to compute the results. It supports both addition and multiplication; however, \revision{while computing the linear regression coefficients, we only used the addition operation.} The performance of secure computation can be improved by using other libraries such as the Paillier cryptosystem~\cite{paillier1999public}, which is \revision{only} an additively homomorphic cryptosystem. Finally, our algorithms can be easily extended to other algorithms such as logistic regression in a supervised classification setting. In logistic regression, each party independently computes a score vector $u_i$ and an information matrix $\mathcal{I}_i$. Instead of injecting noise into the local statistics as in linear regression, noise can be injected into the $u_i$ vectors and $\mathcal{I}_i$ matrices. However, the optimization of the objective function differs in logistic regression, as it requires several iterations. Fortunately, there exist techniques that allow implementing the iterations for computing secure multi-site logistic regression~\cite{el2013secure}. Combining this secure multi-site logistic regression algorithm with FM would solve this issue. We defer the detailed application of this method to future work. \section{Performance Evaluation} \label{sec:evaluation} In this section, we give the experimental results for the application of our SM-DDP protocol to linear regression. Table~\ref{table:notations} presents the notations used throughout the experiments. We first demonstrate how we set the parameters that are introduced in the distributed setting.
Particularly, the success probability $p$ of the geometric random variable in Equation~\ref{eqn:lap_sum} and $\alpha$ introduced in Algorithm~\ref{distributed} are investigated. After experimentally tuning these two parameters, we test the final protocol on a different dataset, used directly as collected, without random sampling. During the evaluation, we focus on the following questions: (\rmnum{1}) Can we obtain a differentially private global linear regression model from differentially private local statistics? (\rmnum{2}) Does our approach support up to 100 parties? (\rmnum{3}) How long does it take to complete the protocol? (\rmnum{4}) Does it guarantee the security and privacy of both data and individuals? We analyze and discuss each of these questions in Sections~\ref{subsec:accuracy}-\ref{subsec:security}. \vspace{3pt} \noindent\textbf{Dataset-} We used two real-world datasets to evaluate the algorithms of our protocol. Both datasets include highly sensitive data. The first dataset is the \textit{Integrated Public Use Microdata Series} (IPUMS)~\cite{ipums}. It contains 370K decennial census records of people living in the US with 14 attributes, 7 of which are demographic information and the rest are working hours per week, the number of years residing in the current location, the number of children, the number of automobiles, and the annual income. The attributes are used to predict the \textit{annual income} of a person. The second dataset is the warfarin dataset collected by the International Warfarin Pharmacogenetics Consortium (IWPC)~\cite{international2009estimation}. The dataset contains clinical and genetic data of patients to predict the stable therapeutic dose of warfarin. Clinical data includes demographics, background, and phenotypic attributes. Genetic data includes genotype variants of CYP2C9 (*1, *2 and *3) and VKORC1 (one of seven single nucleotide polymorphisms in linkage disequilibrium).
21 sites in 9 countries on four continents contributed to the dataset. We used a subset of this dataset wherein patient samples include no missing attributes. Overall, we used 1400 complete patient samples from seven medical institutions. We used the IPUMS dataset to experimentally set the parameters of our protocol, and we tested the final protocol with the IWPC dataset, where each party corresponds to a medical institution in the dataset. \vspace{3pt} \noindent\textbf{Evaluation Metrics-} We applied stratified cross-validation to split the dataset into training and test sets. To evaluate the model's prediction accuracy, we used the \textit{Mean Squared Error} (MSE), as it is a commonly used metric for linear regression analysis. It is calculated as $\frac{1}{n} \sum_{i=1}^n (\hat{y}_i-y_i)^2$, which gives the average of the squared errors between the actual ($y_i$) and predicted ($\hat{y}_i$) values over $n$ data samples. Lower MSE values indicate better predictions. Finally, it is worth mentioning that all experiments were repeated for 100 independent runs and their average is reported in this work.
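For reference, the MSE metric as used here can be computed as follows; the sample values are illustrative.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: (1/n) * sum_i (y_pred_i - y_true_i)^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_pred - y_true) ** 2))

err = mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])   # (0 + 0 + 4) / 3
```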
\begin{table} \centering \caption{Abbreviations and notations used in experiments} \label{table:notations} \begin{tabular}{|p{.7cm}|p{3.3cm}|l|} \hline Notation & Description & Range \\ \hline \hline DDP & Distributed Differential Privacy & - \\ \hline NoDP & No Differential Privacy & - \\ \hline CDP & Centralized Differential Privacy & - \\ \hline $\epsilon$ & global privacy budget & \{0.1,0.2,0.4,0.8,1.6,3.2,6.4,12.8\} \\ \hline $\epsilon_i$ & local privacy budget & $\epsilon_i=\alpha \epsilon$ \\ \hline $\alpha$ & local privacy ratio {i.e.},~ $\alpha = \epsilon_i / \epsilon$ & \{1,10,100\} \\ \hline p & success probability of the geometric random variable, $a_p$ & \{0.1,0.5,0.9\} \\ \hline n & number of parties & {[}1,100{]} \\ \hline L & number of levels in HElib & \{4,6\} \\ \hline $nslots$ & number of slots in HElib & calculated by HElib \\ \hline s & minimum of $nslots$ & \{$8^2,16^2,24^2,32^2,40^2$\} \\ \hline \end{tabular} \end{table} \noindent\textbf{Experimental Setup-} To evaluate the computational overhead, we used the open-source HE library HElib~\cite{helib}, which implements the BGV homomorphic cryptosystem~\cite{brakerski2012leveled}, and we ran the experiments on a 16-core Intel Xeon CPU at 1.90 GHz running Linux Server. In BGV, the level $L$ should be set before initiating the computation. In addition to the level $L$, HElib also has a parameter $nslots$, which defines the number of slots for the utilization of SIMD techniques~\cite{smart2010fully,smart2014fully}. HElib allows encrypting multiple messages at one time through its SIMD features by packing the messages into the independent slots of an array. We note that the parameter $L$ affects not only the number of allowed homomorphic operations but also all the other timings and the key size. Therefore, the parameter $L$ should be optimized so that the minimum $L$ is set without failure of the decryption.
To do so, we first calculated a table of the number of homomorphic operations supported at each level $L$ and used the minimum level for each number of parties. Furthermore, in our experiments, the encrypted data is the local statistics, {i.e.},~ not the raw data. The size of the local statistics is considered the same for all the parties. The homomorphic operation computed for linear regression is the element-wise matrix addition. To take advantage of the HElib library's SIMD features, we converted the matrices into arrays, and the minimum number of $nslots$ was set to the length of the array for each statistic. This prevents data loss during the conversion. We did not utilize any multi-threading technique during our experiments in order to see the lower bound of the performance of our protocol. Thus, our results are a lower bound and can be improved with the use of any multi-threading technique. \vspace{5pt} \subsection{Accuracy Analysis} \label{subsec:accuracy} \begin{figure} \centering \includegraphics[width=.4\textwidth,trim=1cm 0cm 17.5cm 0cm]{dp_exp1.eps} \caption{Tuning $p$ \label{fig:dp_1}. Variation of the error is tested for several values of $p$. As a result, $p=0.1$ is not stable or convergent; $p=0.5$ is convergent, but the error is much higher than CDP, especially for small $\epsilon$ values. Hence, we chose $p=0.9$ as the best case.} \end{figure} \begin{figure} \centering \includegraphics[width=.4\textwidth,trim=1cm 0cm 17.5cm 0cm]{dp_exp2.eps} \caption{Tuning $\epsilon_i$. Variation of the error is tested for several values of the local privacy budget $\epsilon_i$ for $\alpha=\epsilon_i / \epsilon$. For $\alpha=1$, the error is too high for small $\epsilon$ values. For $\alpha=10$, the error is lower than CDP and converges to the same value as NoDP. For $\alpha=100$, the error is low, but it converges to a value higher than NoDP.
Hence, we chose $\alpha=10$ as the best case.\label{fig:dp_2}} \vspace{-7.5pt} \end{figure} We evaluate the accuracy-privacy trade-off of the distributed evaluation of differential privacy on linear regression. Specifically, we compare our results with the centralized approach. In \textit{Centralized Differential Privacy} (CDP), the accuracy of the regression depends only on the global privacy budget $\epsilon$. However, in \textit{Distributed Differential Privacy} (DDP), each party has its own local privacy budget $\epsilon_i$ and DDP is applied independently by each party. We note that this is enabled by a particular property of FM: in FM, the data is first normalized and the optimum noise amount is determined only by the number of attributes, which is the same for all parties. Therefore, the size and the range of the local statistics are the same for all the parties; they do not depend on the number of tuples in the local database. Since all parties are identical, we choose the same local privacy budget $\epsilon_i$ for all the parties. Finally, in our first three experiments (Fig.~\ref{fig:dp_1},~\ref{fig:dp_2}, and~\ref{fig:dp_3}), we used the IPUMS dataset and split it into parties using random sampling. In the last experiment, we used the IWPC dataset for accuracy evaluation and split it based on the given medical institutions (see Fig.~\ref{fig:warfarin}). \begin{figure} \centering \includegraphics[width=.4\textwidth,trim=1cm 0cm 17.5cm 0cm]{warfarin.eps} \caption{A real test: the warfarin dataset with 7 parties, $\epsilon_i=n\epsilon$, and $p=0.9$. Exactly the same trade-off as centralized differential privacy is obtained. \label{fig:warfarin}} \vspace{-7.5pt} \end{figure} The first set of experiments was conducted to analyze the optimum value of $p$, which is a parameter of the geometric random variable $a_p$ given in Equation~\ref{eqn:lap_sum}.
In theory, $a_p$ is required to obtain a Laplace distribution in the global model, and thereby to satisfy an $\epsilon$-differentially private model. To present the impact of the parameter $p$ on the accumulated global noise, we kept the number of parties constant for several values of $p$ and various $\epsilon$ values ($\epsilon_i=\epsilon$). To do so, each party multiplies the noise drawn from the Laplace distribution with a random variable $a_p$, which is a geometric random variable with success probability $p$. We compared the error rates of the CDP, DDP, and NoDP algorithms in terms of MSE. Fig.~\ref{fig:dp_1} illustrates the error and privacy budget trade-off for various values of $p$. We varied $p$ over $\{0.1,0.5,0.9\}$. We found that DDP with $p=0.1$ does not converge to a value while increasing the value of $\epsilon$. However, $p=0.5$ and $p=0.9$ converge to the same value as NoDP, as desired, and when $p$ is $0.9$, it gives similar results to CDP. In the sequel, we set $p=0.9$ and used it in our experiments. In the second set of experiments, we were interested in finding the optimal local privacy budget $\epsilon_i$ for a predetermined global privacy budget. In other words, we assume all parties agree on a global privacy budget according to the sensitivity of the dataset, which is indeed calculated from the number of attributes. We denote the ratio of the local privacy budget to the global privacy budget as $\alpha$, {i.e.},~ $\alpha=\epsilon_i / \epsilon$. We first tried values of $\alpha$ less than $1$; the result of DDP was much worse than that of CDP. This is because a smaller $\epsilon_i$ means more noise injected locally by each party than in the centralized approach. This noise decreases the accuracy significantly. Therefore, we varied $\alpha$ over $\{1,10,100\}$ and compared the results with the CDP and NoDP mechanisms. The results are presented in Fig.~\ref{fig:dp_2}.
We found that if $\alpha$ equals the number of parties, which is $10$ in this experiment, the plot gets closer to CDP and the error converges to that of NoDP, which is the desired case. Therefore, in the rest of the experiments, we set $\alpha=n$, where $n$ is the number of parties. \begin{figure} \centering \includegraphics[width=.4\textwidth,trim=1cm 0cm 17.5cm 0cm]{dp_exp3.eps} \caption{Impact of the number of parties in the collaboration for $\epsilon_i=n\epsilon$ and $p=0.9$. \label{fig:dp_3}} \vspace{-10pt} \end{figure} \begin{figure*} \centering \includegraphics[width=.95\textwidth,trim=0cm 1cm 1cm 0cm]{three_figure.eps} \vspace{10pt} \caption{Performance evaluation of SM-DDP computations of the linear regression algorithm.\label{fig:performance}} \vspace{-5pt} \end{figure*} So far, we tuned the parameters of our approach experimentally. Now, in our last experiment, we evaluated the efficiency of our protocol using a dataset collected from multiple sources (the IWPC dataset). We applied DP locally on each party's dataset and calculated the global model and error. Our goal was to test the feasibility of our approach in a real case. In this experiment, we set $\epsilon_i=n\epsilon$ and $p=0.9$, as found in the earlier experiments. We compared the performance of the CDP, DDP, and NoDP algorithms. Fig.~\ref{fig:warfarin} shows the MSE rates for varying $\epsilon$. We found that the same trade-off as CDP can be achieved by applying DP while training the classifiers locally. We note that DDP also converges to the error of NoDP when $\epsilon$ approaches infinity, as desired. \subsection{Scalability Analysis} In this set of experiments, we evaluated the scalability of our proposed protocol. We set $\epsilon_i= n \epsilon$, where $n$ is the number of parties, since we found $\alpha=n$ to be optimal, and for each number of parties, we split the dataset into $n$ partitions using random sub-sampling.
Then, each party applies DP locally, but we note that the pooled dataset is still the same. The Laplace distribution is infinitely divisible~\cite{kotz2012laplace}; therefore, the accumulated error of the global model should not be affected by the number of parties. We ran the analysis for a number of parties ranging from 1 to 100 and present the results in Fig.~\ref{fig:dp_3}. The results demonstrate an interesting point: when $\epsilon=0.01$, even though CDP is not stable, DDP is. On the other hand, when $\epsilon$ is 1 or 100, the error rate stays the same even for 100 parties. This means our protocol is scalable even for 100 parties. \subsection{Computational Overhead Analysis} In this subsection, we evaluate the computational overhead of the linear regression presented in Algorithm~\ref{distributed}. We found that the DP algorithms do not introduce computational overhead. Therefore, we only evaluate the computational overhead of our SMC algorithm, which consists of three main parts: key generation for HE, secure min-max computation, and secure regression computation. Fig.~\ref{fig:performance} shows the computation time for different dimension sizes. Fig.~\ref{fig:performance}a presents the time for the secure computation of finding the global min-max of each attribute. It increases quadratically with the number of parties. However, this algorithm runs in the setup phase, so it is performed before initiating the computations. There are two interesting results worth noting. First, the time of the secure regression computation increases linearly as the number of parties in the collaboration increases, but with a different slope for each dimension, as illustrated in Fig.~\ref{fig:performance}. The reason for the linear increase is that the number of encryptions and homomorphic evaluations scales directly with the number of parties in the group.
Second, similar results hold for the overall computation time (see Fig.~\ref{fig:performance}c), with a minor difference, since the key generation time shifts the lines along the y-axis and also increases the scale. However, similar to the secure min-max computation, the execution of the key generation algorithm does not require all parties in the group to be online, since it occurs in the setup phase. On the other hand, we also note that the size of the local database of each party does not have an impact on the total computational time, since parties only share the local statistics, which depend on the attribute size, instead of the raw data. As can be seen in Fig.~\ref{fig:performance}c, the overall computation of the protocol, including both offline and online phases, for 20 parties with 32 attributes and 10K samples takes less than a minute. Hence, our SM-DDP protocol yields minimal computational overhead. \vspace{-5pt} \subsection{Security and Privacy Analysis} \label{subsec:security} In this section, we discuss the security and privacy guarantees of the SM-DDP protocol given in Fig.~\ref{fig:framework}. As all the communication among the parties is encrypted, the security of the algorithm simply reduces to the security of the underlying HE scheme. A leak can occur \revision{only} if DC is corrupted, since the data is encrypted using the public key generated by DC. However, \revision{even in this case, DC will only obtain the noisy local statistics, not the raw data, and at the end of the protocol,} DC only has control over the aggregated data while reconstructing the global model and cannot know which party contributed what to the result. While the protocol is running, the view of all the other parties consists of homomorphically encrypted data. Therefore, if the given homomorphic encryption scheme is semantically secure, the parties cannot distinguish the corresponding plaintexts.
Thus, the computation is private even under the honest-but-curious adversary model presented in~\cite{goldreich2009foundations}. Therefore, data privacy is preserved. On the other hand, we showed both theoretically (Section~\ref{subsec:case}) and experimentally (Fig.~\ref{fig:warfarin}) that a differentially private global model can be obtained through locally applied DP. Therefore, an untrusted data collector cannot infer information about the individuals. Furthermore, the collaboration comes at a price, as the local parties use $\epsilon_i$ instead of $\epsilon$. Therefore, the local privacy guarantee is decreased by $\alpha$ ({i.e.},~ $\epsilon_i$ is increased by $\alpha$), even though the global model's guarantee is still the same, meaning that data privacy against an untrusted DC is still preserved and the local privacy guarantee matters only if the underlying SMC is bypassed. Finally, since we set $\alpha$ to the number of parties in the collaboration, each party should take this into consideration while deciding on the global privacy budget. \section{Secure and Differentially-private Distributed Computations}\label{sec:framework} \begin{algorithm*} \caption{Computation of Linear Regression using the SM-DDP protocol}\label{distributed} \begin{algorithmic}[1] \Require Each party holds a database in the format of $D_i=(x_i,y_i)_{i=1}^n$, i.e., horizontally partitioned \Statex \hspace{0.4cm} The global privacy budget $\epsilon$.
\Ensure The differentially private global regression model of $D=\cup_{i=1}^n D_i$ \Algphase{Setup: Runs at the party $P_i$ (DC)} \State $({pk}_i, {sk}_i) \leftarrow KeyGen()$ \Comment{generate the key pair of HE} \State $\eta_{max}$, $\eta_{min} \leftarrow ComputeMinMax(D)$ \Comment{calculate the global max and min of each attribute via~\cite{ccetin2015depth}} \State $\Delta \gets 2 (d+1)^2$ \Comment{calculate the global sensitivity, $d$ is the number of attributes} \Algphase{Secure Regression Protocol: each party $P_j$ runs locally} \hspace{0.4cm} \Require Received aggregate statistics from all previous parties as: \hspace{0.4cm} \Statex $\xi$: ${E_{{pk}_i}(\sum_{k=1}^{j-1}\mathcal{P}_k^*})$ \hspace{0.4cm} \Statex $\kappa $: ${E_{{pk}_i}(\sum_{k=1}^{j-1}\mathcal{V}_k^*})$ \hspace{0.4cm} \Statex $\delta $: ${E_{{pk}_i}(\sum_{k=1}^{j-1}\mathcal{O}_k^*})$ \vspace{0.3cm} \State $D_j^{norm} \gets (D_j- \eta_{min})/(\eta_{max}-\eta_{min})$ \Comment{perform min-max normalization} \State $\mathcal{P}_j \gets \textbf{X}_j^\top \textbf{X}_j $, $\mathcal{V}_j \gets \textbf{X}_j^\top \textbf{Y}_j $, and $\mathcal{O}_j \gets \textbf{Y}_j^\top \textbf{Y}_j $ \Comment{compute local statistics} \State $\epsilon_i \gets \alpha \epsilon$ \Comment{compute its share of the global privacy budget} \State $({\mathcal{P}_j^*},{\mathcal{V}_j^*},{\mathcal{O}_j^*}) \gets FM .
NoiseInject({\mathcal{P}_j},{\mathcal{V}_j},{\mathcal{O}_j})$ \Comment{apply FM noise injection} \State $C_{j}^*=\big({E_{{pk}_i}(\mathcal{P}_j^*}),E_{{pk}_i}({\mathcal{V}_j^*}),E_{{pk}_i}({\mathcal{O}_j^*})\big)$ \Comment{perform encryption} \Statex \Comment{add its own encrypted local statistics to the received aggregate statistics} \State ${E_{{pk}_i}(\sum_{k=1}^{j}\mathcal{P}_k^*}) \gets {E_{{pk}_i}(\mathcal{P}_j^*})+ \xi $ \State ${E_{{pk}_i}(\sum_{k=1}^{j}\mathcal{V}_k^*}) \gets {E_{{pk}_i}(\mathcal{V}_j^*})+ \kappa $ \State ${E_{{pk}_i}(\sum_{k=1}^{j}\mathcal{O}_k^*}) \gets {E_{{pk}_i}(\mathcal{O}_j^*})+ \delta $ \State \textbf{Send(~}${E_{{pk}_i}(\sum_{k=1}^{j}\mathcal{P}_k^*}),~ {E_{{pk}_i}(\sum_{k=1}^{j}\mathcal{V}_k^*}),~ {E_{{pk}_i}(\sum_{k=1}^{j}\mathcal{O}_k^*}) $\textbf{~)} to $P_{j+1}$ \Comment{send updated aggregate statistics to the next party.} \vspace{0.1cm} \Algphase{Reconstruction: runs at the party $P_i$ (DC)} \hspace{0.4cm} \Require Received aggregate statistics for all parties as: \hspace{0.4cm} \Statex $\xi $: ${E_{{pk}_i}(\sum_{k=1}^{n}\mathcal{P}_k^*})$ \hspace{0.4cm} \Statex $\kappa $: ${E_{{pk}_i}(\sum_{k=1}^{n}\mathcal{V}_k^*})$ \hspace{0.4cm} \Statex $\delta $: ${E_{{pk}_i}(\sum_{k=1}^{n}\mathcal{O}_k^*})$ \vspace{0.3cm} \State $\mathcal{P}^* \gets D_{{sk}_i}\Big({\xi}\Big)$ \Comment{acquire the cleartext} \State $\mathcal{V}^* \gets D_{{sk}_i}\Big({\kappa}\Big)$ \Comment{acquire the cleartext} \State $\mathcal{O}^* \gets D_{{sk}_i}\Big({\delta}\Big)$ \Comment{acquire the cleartext} \State $({\mathcal{P}^*},{\mathcal{V}^*},{\mathcal{O}^*}) \gets FM .
Optimize({\mathcal{P}^*},{\mathcal{V}^*},{\mathcal{O}^*})$\Comment{apply optimization} \State $w^* \gets {\mathcal{P}^*}^{-1} {\mathcal{V}^*}$ (i.e., $ w^*= \displaystyle\arg \min_{\substack{w}} w^\top \mathcal{P}^* w+w^\top \mathcal{V}^*+\mathcal{O}^*$) \Comment{compute the global parameters} \State $ Err \gets {w^*}^\top {\mathcal{P}^*} {w^*}+ {w^*}^\top \mathcal{V}^*+\mathcal{O}^*$ \State \textbf{Publish(~}$w^*,~Err$\textbf{~)} to all parties. \Statex \Comment{Use of Model:} \State $f(x_i,w^*) \gets \sum_{k=1}^d x_{ik} {w^*}_k$ for an input $x_i \in \textbf{X}_i$ \Comment{compute the normalized predictions} \State $y_{pred} \gets f(x_i,w^*)(\eta_{max}-\eta_{min})+\eta_{min}$\Comment{perform de-normalization to get actual values} \end{algorithmic} \end{algorithm*} In this section, we propose a novel protocol for secure multiparty distributed and differentially private (SM-DDP) computations through the use of homomorphic encryption (HE) and the functional mechanism (FM). We evaluate its application to linear regression and discuss its extension to logistic regression, which can be used in a supervised classification setting. Consider $n$ parties $P_1,\dots,P_n$, each of which holds a private, horizontally partitioned database $D_1,\dots,D_n$. Each database consists of a certain number of tuples in the format $t_i=(x_i,y_i)$. The parties would like to jointly build a linear model of the pooled database $f(D)$, where $D=\cup_{i=1}^{n} D_{i}$, so that the security guarantees of both SMC and DP are preserved. Before running the protocol, each party in the collaboration agrees on the function to be computed and computes a collection of local statistics $M_i=(L_{i_1},\dots,L_{i_t})$. We assume the linear model can be computed using the local statistics generated by each party independently, i.e., $\eta_{global}=f(M_1,\dots,M_i,\dots,M_n)$.
We define the guarantees and goals of our protocol as follows: \begin{itemize} \item \textit{Individual privacy:} No information leaks about the individuals in the private databases held by the parties, {i.e.},~ the tuples $t_i$ are not leaked. \item \textit{Data privacy:} No information about the statistics of the data in the databases held by the parties leaks, {i.e.},~ the statistics $M_i$ about the data are not leaked. \item \textit{Correctness:} The parties receive the correct output of the model. \end{itemize} We note that using only SMC would violate individual privacy, while using only DP would violate data privacy. In our combined protocol, we achieve individual privacy through FM and data privacy through HE, and since all operations in the protocol are deterministic, correctness is satisfied by design. We note that we assume there is a secure channel between parties to exchange messages. Fig.~\ref{fig:framework} illustrates our protocol for performing SM-DDP computations. It is initiated by one of the parties, called the \textit{data collector} (DC). In the setup phase, DC generates a key pair $({pk}_i,{sk}_i)$ and computes its own local statistics $M_i$ independently of the other parties. Then, in the next phase, DC applies DP by injecting (adding) noise drawn from a random distribution that satisfies $\epsilon$-differential privacy into its local statistics. The encryption of the noisy local statistics is transmitted to the next party $P_{i+1}$. The next party $P_{i+1}$ also computes its local statistics and injects noise into them. The result is encrypted with $pk_i$ and the function is evaluated homomorphically with the inputs of parties $P_i$ and $P_{i+1}$. The protocol continues in the same way, with the parties arranged in a ring topology. At the final step, the securely evaluated function result is received by party $P_i$, which decrypts it with ${sk}_i$. In the end, $P_i$ reveals the differentially private global model.
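The dataflow described above can be sketched as a toy simulation under strong simplifying assumptions: the homomorphic encryption layer is replaced by plain sums (a real run adds HElib ciphertexts around the ring), and the data, seed, and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

d, n_parties, epsilon = 2, 3, 1.0
alpha = n_parties                  # local privacy ratio, epsilon_i = alpha*eps
Delta = 2 * (d + 1) ** 2           # global sensitivity from the setup phase

def local_statistics(X, y):
    """Each party's local statistics for linear regression."""
    return X.T @ X, X.T @ y, float(y @ y)

def noise_inject(P, V, O, eps_i):
    """FM-style noise injection applied locally before encryption."""
    b = Delta / eps_i
    return (P + rng.laplace(0.0, b, P.shape),
            V + rng.laplace(0.0, b, V.shape),
            O + rng.laplace(0.0, b))

# Stand-in for the homomorphic aggregation around the ring: in the real
# protocol these running sums are computed over ciphertexts, never in clear.
agg_P, agg_V, agg_O = np.zeros((d, d)), np.zeros(d), 0.0
for _ in range(n_parties):
    X = rng.uniform(0.0, 1.0, size=(100, d))        # normalized local data
    y = X @ np.array([0.5, -0.25]) + rng.normal(0.0, 0.01, size=100)
    Pj, Vj, Oj = noise_inject(*local_statistics(X, y), eps_i=alpha * epsilon)
    agg_P, agg_V, agg_O = agg_P + Pj, agg_V + Vj, agg_O + Oj

# Reconstruction at the DC: decrypt the aggregates, then solve for w*.
w_star = np.linalg.solve(agg_P, agg_V)
```

The DC only ever sees the aggregated noisy statistics, mirroring the guarantee that it cannot attribute contributions to individual parties.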
\vspace{-.3cm} \subsection{Case Study: Linear Regression} \label{subsec:case} In this subsection, we show how to compute linear regression using our protocol proposed in Fig.~\ref{fig:framework}. Particularly, we use the functional mechanism shown in Algorithm~\ref{alg:func_mech} by splitting it into two parts: $NoiseInject()$ and $Optimize()$. In $NoiseInject()$, the noise drawn from the Laplace distribution (Equation~\ref{eqn:laplace}) is injected into each coefficient of the polynomial representation of the objective function. Then, in $Optimize()$, the optimization problem of the objective function is solved by applying the regularization and spectral trimming introduced in~\cite{zhang2012functional} in order to avoid an unbounded noisy objective function. Moreover, FM assumes that $\sqrt{\sum_{k=1}^{d}x_{ik}^2} \leq 1$ for every tuple. Therefore, a secure min-max computation is performed to calculate $\eta_{min}$ and $\eta_{max}$ in the setup phase of Algorithm~\ref{distributed}, where $\eta_{min}$ (resp. $\eta_{max}$) is a vector consisting of the global minimum (resp. maximum) of each attribute. Before applying FM, each party normalizes its database using the global maximum and minimum values. This guarantees that the local sensitivity of the parties is always the same as the global sensitivity, since we focus on horizontally distributed data. Algorithm~\ref{distributed} illustrates the computation of the linear regression algorithm using the protocol presented in Fig.~\ref{fig:framework}. In linear regression, the global model is calculated by simply aggregating the locally calculated noisy statistics. While aggregating the local statistics, the noise of each party is aggregated as well. Therefore, it is necessary to make sure the final model neither violates $\epsilon$-differential privacy nor causes unbounded noise. Particularly, the noise is injected into each coefficient as follows: \begin{equation} {\mathcal{P}_i}^*=\mathcal{P}_i+Lap\Big(\frac{\Delta}{\epsilon_i}\Big).
\end{equation} Then, when the DC computes the global model, the local statistics are summed up as follows: \begin{equation} \mathcal{P}^*=\sum_{i=1}^n {\mathcal{P}_i}^*= \sum_{i=1}^n \bigg(\mathcal{P}_i+Lap\Big(\frac{\Delta}{\epsilon_i}\Big)\bigg)= \mathcal{P}+ \sum_{i=1}^n Lap\Big(\frac{\Delta}{\epsilon_i}\Big). \end{equation} Moreover, $\mathcal{V}^*$ and $\mathcal{O}^*$ can be computed similarly. In all of $\mathcal{P}^*$, $\mathcal{V}^*$, and $\mathcal{O}^*$, the noise term is $\sum_{i=1}^n Lap\big(\frac{\Delta}{\epsilon_i}\big)$. In order to make sure that the accumulated noise also follows a Laplacian distribution, we use the following theorem. \begin{theorem} Let $Y$, $Y_1$, $Y_2,\dots$ be non-degenerate and symmetric i.i.d. random variables with variance $\sigma^2>0$, and let $\nu_p$ be a geometric random variable with mean $1/p$, independent of the $Y_i$'s. Then, the following statements are equivalent (the proof is given in~\cite{kotz2012laplace}): \\ (i) $Y$ is stable with respect to geometric summation, i.e., there exist constants $a_p>0$ and $b_p \in \mathbb{R}$, such that \begin{equation} \label{eqn:lap_sum} a_p \sum_{i=1}^{\nu_p} (Y_i+b_p)=Y \quad \forall p \in(0,1) \end{equation} (ii) $Y$ possesses the Laplace distribution with mean zero and variance $\sigma^2$. Moreover, the constants $a_p$ and $b_p$ must be of the form $a_p=\sqrt{p}$, $b_p=0$. \end{theorem} From the theorem above, a Laplace distribution can be obtained by summing up several Laplace random variables in a certain form. In other words, the sequence of partial sums $a_p \sum_{i=1}^{\nu_p} (Y_i+b_p)$ converges to a Laplace distribution. We addressed the requirements of the theorem in Algorithm~\ref{distributed} by multiplying the noise distribution of the local parties with a number drawn from the geometric distribution, i.e., $a_p \sum_{i=1}^n Lap\big(\frac{\Delta}{\epsilon_i}\big)$, where $a_p$ is a geometric random variable.
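A short simulation can illustrate both the entrywise noise injection of FM and the geometric-stability property used above; the name `noise_inject` is illustrative and the HE layer is omitted:

```python
import math
import random

def lap(scale, rng):
    # Zero-mean Laplace sample via inverse CDF; Var = 2 * scale**2.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noise_inject(P, delta, eps, rng):
    # FM-style perturbation: P* = P + Lap(delta/eps), applied entrywise.
    s = delta / eps
    return [[v + lap(s, rng) for v in row] for row in P]

def geometric(p, rng):
    # Number of trials until first success; E[nu_p] = 1/p.
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(1)
noisy_P = noise_inject([[1.0, 2.0], [2.0, 5.0]], delta=1.0, eps=0.5, rng=rng)

# A single Lap(b) has variance 2*b**2 (here b = 2, so variance 8).
samples = [lap(2.0, rng) for _ in range(200000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

# Geometric stability: sqrt(p) times a geometric sum of i.i.d. Laplace
# terms keeps the Laplace variance, so the aggregate noise stays bounded.
p = 0.3
stabilized = [math.sqrt(p) * sum(lap(2.0, rng) for _ in range(geometric(p, rng)))
              for _ in range(50000)]
stab_var = sum(x * x for x in stabilized) / len(stabilized)
```

The empirical variance of the stabilized geometric sums matches that of a single Laplace term, which is the bounded-noise behaviour the theorem guarantees.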
\section{Introduction} Secure and private computation of statistical models is increasingly used in different operational settings, from healthcare~\cite{kimura2016evaluation,celikPatient,DBLP:journals/corr/CelikAASUM17} to finance~\cite{bogdanov2012deploying} and security-sensitive applications~\cite{freudiger2015controlled,celikForensic}. Given the distributed nature of these applications, security and privacy are mostly achieved by utilizing Secure Multiparty Computation (SMC). SMC allows distributed parties to jointly compute an agreed function over their private inputs without revealing those inputs to other parties. Each party learns the final result, but no other information. However, SMC has a major privacy concern for a targeted individual, as it does not guarantee that the final result of the distributed computation would not leak any information about an individual in a sensitive dataset. The privacy of individuals and their data can be easily violated~\cite{el2013secure,narayanan2008robust,ganta2008composition}. Therefore, there is a need for a mechanism where individual parties do not see each others' inputs and, further, cannot infer their data from the final constructed model. Indeed, combining SMC with Differential Privacy (DP) could solve this privacy problem, as DP introduces sufficient noise into the final result to prevent any leakage about a single individual. However, combining SMC with DP is not a trivial task. In \revision{an} ideal case, a trusted data collector\footnote{A data collector is either one of the parties or a third party. Every discussion here applies to both types.} can collect the data, aggregate them, and add calibrated noise to the results of the queries (predictions) (Centralized DP (CDP) in Fig.~\ref{fig:com}). However, a trusted party does not exist in many real-life scenarios. This technique would easily leak the model of the sensitive data to an untrusted data collector who collects the final model of the data.
Even for scenarios with a trusted data collector, relying on the centralized entity makes it a single point of failure for the entire data collection mechanism. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth,trim= 0cm 2cm 0cm 0cm]{fig2.pdf} \caption{Illustration of secure multiparty computation with distributed and centralized differential privacy methods.} \label{fig:com} \end{figure} On the other hand, another mechanism involves applying a data sanitization technique (Distributed DP (DDP) in Fig.~\ref{fig:com}) directly on the local data held by the parties. In this case, the untrusted data collector cannot infer individuals' data, since sufficient noise is injected by DP to hide the individuals' data. However, this mechanism requires a meticulous analysis, since it may lead to a divergent or excessive amount of accumulated noise due to DP at the data collector end. As such, this process may lead to a significant accuracy loss in the final models, which may cause catastrophic consequences in, for example, the healthcare domain. Therefore, enabling distributed differential privacy on local data with differential privacy guarantees on the final results is a challenging problem. In this paper, we are motivated to provide a solution to this problem. Specifically, we propose a novel protocol for achieving Secure Multiparty Distributed Differentially Private (SM-DDP) computations on sensitive data. The protocol provides the guarantees of both SMC and DP. SMC is provided through Homomorphic Encryption (HE)~\cite{gentry2009fully}, while DP is provided via the Functional Mechanism (FM)~\cite{zhang2012functional}. An important characteristic of FM is that it injects noise into the feature matrices ({i.e.},~ the coefficients of the objective function), which can be computed independently by each party in a multiparty computational environment.
We explore this feature of FM and apply it to linear regression using our SM-DDP protocol, but it can be applied to the computation of any statistical model function that allows independent calculation from the local statistics. We show that the accumulated noise in our protocol is still bounded and convergent by using the infinite divisibility property of the Laplacian distribution~\cite{kotz2012laplace}. Finally, we evaluated the SM-DDP protocol's computational efficacy on linear regression using two real-world datasets. We compare our results with the use of Centralized DP (CDP) in a multiparty setting as in Fig.~\ref{fig:com}. The intuition is that the distributed setting of DP (DDP), which is proposed in this paper, would cause a greater accuracy loss than the typical client-server setting of SMC systems. However, we show that exactly the same trade-off can be achieved using the SM-DDP protocol that is presented in Fig.~\ref{fig:framework}. The extensive evaluation results indicate that the proposed SM-DDP protocol yields only minimal computational overhead---less than a minute for 20 parties with 32 attributes and 10K samples. The individual parties obtain better accuracy than would be obtained from a single-party model. Finally, SM-DDP is scalable while providing security and privacy guarantees. \noindent\textbf{Contributions:} In this paper, we summarize our contributions as follows: \begin{itemize} \item We proposed a novel Secure Multiparty Distributed Differentially Private (SM-DDP) protocol to achieve secure and differentially private computations in distributed multiparty settings. This protocol can be applied to any statistical model function that allows the calculation of the global model from independent local statistics. \item We implemented the SM-DDP protocol on linear models. We showed that SM-DDP allows parties \revision{to} compute a regression model on pooled data while providing secure computation and differential privacy guarantees.
\item We showed that the accumulated noise in our protocol is bounded and convergent. This allows parties to build a model function which offers individual-level privacy against an untrusted data collector. \item We evaluated the performance of the proposed protocol using two different datasets. The results demonstrated that the parties compute the models in less than a minute while preserving the security guarantees of SMC and DP. \end{itemize} \noindent\textbf{Organization:} The rest of the paper is organized as follows: We present the related work in Section~\ref{sec:related}. In Section~\ref{sec:tech}, we give the technical preliminaries about the SMC and DP methods that we utilized. Then, in Section~\ref{sec:linear}, background about regression analysis, and specifically the distributed calculation of linear regression, is given. In Section~\ref{sec:framework}, a novel protocol for the SM-DDP computation of a statistical model function $f$ and its application to linear regression is presented. Furthermore, we give the experimental results for the application of our protocol to linear regression in terms of accuracy, scalability, computational overhead, and security trade-offs in Section~\ref{sec:evaluation}. Finally, we discuss some of the related issues in Section~\ref{sec:discuss} and then conclude the paper in Section~\ref{sec:Conclusion}. \input{related_work} \input{regression} \section{Technical Preliminaries} \label{sec:tech} Preserving the privacy of users and data is a long-studied problem in the area of cryptography~\cite{shi2011privacy,damgaard2012multiparty,dankar2015privacy,chaudhuri2011differentially,sanil2004privacy,dwork2006our}. As a result of these long-term studies, there are several theoretically well-studied tools that can be employed to protect data and user privacy, such as Secure Multiparty Computation (SMC)~\cite{damgaard2012multiparty} and Differential Privacy (DP)~\cite{dwork2008differential}.
In this section, we introduce the essentials of the secure computation and differential privacy primitives needed to understand the implementation of SM-DDP algorithms. Particularly, we introduce Homomorphic Encryption (HE) to provide SMC and the Functional Mechanism (FM) to provide DP guarantees. \input{secure_computation} \input{differential_privacy} \input{framework} \input{evaluation} \input{discussion} \section{Conclusion}\label{sec:Conclusion} \vspace{5pt} In this work, we have proposed a novel Secure Multiparty Distributed Differentially Private (SM-DDP) protocol to achieve private computations in a multiparty environment, with an application to linear regression. Using homomorphic encryption and the functional mechanism, we first presented a protocol to provide the guarantees of secure multiparty computation and differential privacy. Then, we built the algorithms that allow distributed parties to compute a global model while preserving the privacy of their data and of the individuals found in the dataset. Any statistical model function that can be independently calculated by sharing the local statistics of the parties can be computed through this protocol. Finally, we evaluated the performance of the proposed protocol on two datasets, namely, warfarin dose and budget predictions. Our findings show that a party can achieve individual-level privacy via our proposed protocol for distributed differential privacy, which is independently applied by each party in a distributed fashion. Moreover, the experimental results demonstrated that the proposed SM-DDP protocol is both feasible and scalable; that is, its computational overhead is minimal and the overall computation time is sub-linear in the number of parties. Indeed, the SM-DDP protocol provides security and privacy guarantees while being feasible and scalable. Our future work will extend the algorithms beyond linear models and investigate the accuracy and performance trade-offs of other algorithms.
\revision{We are also planning to compare the performance of the Laplacian mechanism used in FM with other DP mechanisms such as the Exponential Mechanism~\cite{mcsherry2007mechanism} and Sample-and-aggregate~\cite{nissim2007smooth}.} \section*{Acknowledgment} \revision{This work was partly supported by US NSF-CAREER-CNS-1453647 and Army Research Laboratory under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.} \newpage \bibliographystyle{IEEEtran} \section{Linear Models} \label{sec:linear} In this section, we start by introducing linear models. We then show how to compute linear regression in a distributed fashion. \subsection{Background} Assume a database $D$ consists of $n$ observations $\{x_i,y_i\}_{i=1}^n$, where $x_i$ is a vector of $d$ attributes (i.e., $x_i=(x_{i1},x_{i2},\dots,x_{id})$) and $y_i$ is a scalar response. The aim is to find a \emph{model function} $f: X \to Y$ that can predict $y_i \in Y$ as close as possible to its actual value using the attributes $x_i \in X$. The type of the regression model is decided by the type of the model function. For instance, in linear regression, the model function is simply a straight line. The model function $f$ takes the model coefficients $w=(w_1,w_2,\dots,w_d)$ and $x_i$ as inputs and outputs a prediction for the value of $y_i$. The deviations between the predicted value and the actual response value are calculated through a \emph{loss function} $\ell: Y \times Y \to \mathbb{R}$. The global value of $w$ over the training data $D$ is calculated by the objective function.
We denote the objective function by $\mathcal{L}$ and it is calculated as follows: \begin{equation} \mathcal{L} (f,D)=\sum_{i=1}^n \ell(f(x_i,w),y_i). \end{equation} \subsection{Distributed Linear Regression} Regression is a statistical approach that explores the relationships between a set of independent variables called \textit{attributes} and one dependent variable called the \textit{response}. In regression, the relationship between the attributes and the response is modeled using a prediction function. In linear regression, the $L_2$-norm objective ({i.e.},~ $\ell(f(x_i,w),y_i)=(w \cdot x_i-y_i)^2$) is minimized, in matrix form, as follows: \begin{equation} \label{eqn:obj:linear} w^*= \displaystyle\arg \min_{\substack{w}} \mathcal{L} (f,D)= \displaystyle\arg \min_{\substack{w}} \sum_{i=1}^m (w \cdot x_i-y_i)^2, \end{equation} where $m$ is the number of tuples in the database. To calculate the regression in a distributed way, we solve the regression objective via \textit{Maximum Likelihood Estimation} (MLE). MLE allows us to obtain the global solution of Equation~\ref{eqn:obj:linear} as follows\footnote{A unique solution only exists if $\textbf{X}^\top \textbf{X}$ is non-singular. In other cases, there are techniques for solving Equation~\ref{eqn:obj:linear}~\cite{mohriintroduction}; however, they are out of the scope of this paper.}: \begin{equation} w^*= (\textbf{X}^\top \textbf{X})^{-1} \textbf{X}^\top \textbf{Y} . \end{equation} We characterize the model parameter $w$ of each party using three parameters: \begin{center} \begin{equation} \mathcal{P}_i= \textbf{X}_i^\top \textbf{X}_i,\quad \mathcal{V}_i= \textbf{X}_i^\top \textbf{Y}_i,\quad \mathcal{O}_i= \textbf{Y}_i^\top \textbf{Y}_i \end{equation} \end{center} Each party computes its \textit{local statistics} $\langle\mathcal{P}_i,\mathcal{V}_i,\mathcal{O}_i\rangle$ and shares them with the other parties.
Then, the global values of $\mathcal{P}$,$\mathcal{V}$ and $\mathcal{O}$ are computed using the shared local statistics as follows: $$\mathcal{P}= \textbf{X}^\top \textbf{X}=\begin{bmatrix} X^\top_{i_1}|...|X^\top_{i_n} \end{bmatrix} \begin{bmatrix} X_{i_1} \\ \vdots \\ X_{i_n} \end{bmatrix} = \sum_{k=1}^n \textbf{X}_{i_k}^\top \textbf{X}_{i_k}= \sum_{k=1}^n \mathcal{P}_k$$ $$\mathcal{V}= \textbf{X}^\top \textbf{Y}=\begin{bmatrix} X^\top_{i_1}|...|X^\top_{i_n} \end{bmatrix} \begin{bmatrix} Y_{i_1} \\ \vdots \\ Y_{i_n} \end{bmatrix} = \sum_{k=1}^n \textbf{X}_{i_k}^\top \textbf{Y}_{i_k}= \sum_{k=1}^n \mathcal{V}_k$$ $$\mathcal{O}= \textbf{Y}^\top \textbf{Y}=\begin{bmatrix} Y^\top_{i_1}|...|Y^\top_{i_n} \end{bmatrix} \begin{bmatrix} Y_{i_1} \\ \vdots \\ Y_{i_n} \end{bmatrix} = \sum_{k=1}^n \textbf{Y}_{i_k}^\top \textbf{Y}_{i_k}= \sum_{k=1}^n \mathcal{O}_k,$$ where $n$ is the number of parties in the collaboration. Using this, the global coefficients can be computed as follows: \begin{equation} w^*= (\textbf{X}^\top \textbf{X})^{-1} \textbf{X}^\top \textbf{Y} = \mathcal{P}^{-1} \mathcal{V}. \end{equation} In order to calculate the error of the global function, we rewrite the objective function in Equation~\ref{eqn:obj:linear} in terms of the local statistics ({i.e.},~ matrix form) as follows: \begin{equation} \begin{split} \sum_{i=1}^m (w \cdot x_i-y_i)^2 &= (\textbf{X}w-\textbf{Y})^\top (\textbf{X}w-\textbf{Y})\\ &=||(\textbf{X}w-\textbf{Y})||^2 \\ &= w^\top \textbf{X}^\top \textbf{X} w- 2 w^\top \textbf{X}^\top \textbf{Y} + \textbf{Y}^\top \textbf{Y} \\ &= w^\top \mathcal{P} w-2 w^\top \mathcal{V}+\mathcal{O}, \end{split} \end{equation} where $||\cdot||$ denotes the Euclidean norm. We note that even though we do not need $\mathcal{O}$ to calculate the global coefficients, it is used for computing the error of the model. 
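The aggregation above can be sketched with a hand-rolled two-attribute example (illustrative data, not the paper's implementation): each party computes $\mathcal{P}_i=\mathbf{X}_i^\top\mathbf{X}_i$ and $\mathcal{V}_i=\mathbf{X}_i^\top\mathbf{Y}_i$, the sums are formed, and $w^*=\mathcal{P}^{-1}\mathcal{V}$ matches the pooled solution exactly.

```python
def xtx(X):
    # P = X^T X for a row-major matrix X.
    d = len(X[0])
    return [[sum(r[i] * r[j] for r in X) for j in range(d)] for i in range(d)]

def xty(X, y):
    # V = X^T Y.
    d = len(X[0])
    return [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(d)]

def solve2(P, V):
    # Closed-form solution of the 2x2 system P w = V.
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [(P[1][1] * V[0] - P[0][1] * V[1]) / det,
            (P[0][0] * V[1] - P[1][0] * V[0]) / det]

# Two parties, each holding some rows of (X, Y); here y = x1 + 2*x2 exactly.
X1, y1 = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]], [5.0, 4.0, 11.0]
X2, y2 = [[4.0, 0.5], [0.5, 3.0]], [5.0, 6.5]

# Aggregate the local statistics, then solve once.
P = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(xtx(X1), xtx(X2))]
V = [a + b for a, b in zip(xty(X1, y1), xty(X2, y2))]
w_dist = solve2(P, V)                                  # from shared statistics
w_pool = solve2(xtx(X1 + X2), xty(X1 + X2, y1 + y2))   # pooled baseline
```

Because the statistics are sums over rows, horizontal partitioning changes nothing in the result: `w_dist` and `w_pool` coincide up to floating-point error.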
\section{Related Work} \label{sec:related} There have been many works on the secure computation of linear regression over distributed databases~\cite{karr2005secure,du2004privacy,karr2009privacy,sanil2004privacy,hall2011secure}. In these works, the threat model considers a third party that does not have access to the data but is curious about it. However, one of the parties may want to release the model function after computing the function securely, which still poses threats to the individuals~\cite{narayanan2008robust,ganta2008composition,el2013secure}. DP copes with this problem, as it injects a certain amount of noise into the results of the queries to mask the individuals in the database. Indeed, there have been different works on DP~\cite{dwork2006our,dwork2006calibrating,dwork2008differential,mcsherry2007mechanism} and particularly on differentially private linear regression~\cite{chaudhuri2011differentially,bassily2014private,duchi2013local,fredrikson2014privacy,jain2013differentially,zhang2012functional,smith2017interaction}. However, these works consider DP without SMC. Although they are useful, they only provide privacy guarantees that the output of the queries does not carry information about the individuals. Approaches combining SMC and DP to provide both individual-level privacy and secure computation would be more secure. However, combining DP and SMC is not trivial; indeed, it is a rather challenging task, since the application of centralized DP just after SMC in client-server settings would leak the model to \revision{an} untrusted data collector, which results in a privacy violation of the individuals in the database. Applying distributed DP directly on the local data held by the parties is more secure, but \revision{if each user independently injects noise randomly,} it may lead to an excessive or \revision{uncontrollable} amount of accumulated noise at the data collector end.
Recent works focused on combining SMC and DP~\cite{goryczka2013secure,clifton2013challenges,shi2011privacy}, but none of them focused on linear regression. \revision{As pointed out in~\cite{zhang2012functional}, the main reason behind this is that regression analysis involves an optimization problem, which makes it harder to control the required amount of noise, and if the data is also distributed among parties, it becomes much more difficult to control the privacy-accuracy trade-off introduced by DP.} \revision{In another relevant work}~\cite{pathak2010multiparty}, a combination of SMC and DP is proposed for aggregate classifiers. However, this approach injects the noise into the optimum model parameters, which results in excessive noise in the global model and a significant loss in accuracy. \revision{Particularly, the experimental evaluation shows that, when the classifiers are locally trained, the error rate obtained from them is higher than the optimum error rates that could be obtained from a centralized approach.} In our work, we take a different approach. We deploy FM~\cite{zhang2012functional}, which adds noise to the local statistics and thus provides the same model as the centralized approach. \revision{Lastly, even though a similar idea is proposed in~\cite{aono2015fast}, it is not analyzed in detail.} \begin{comment} We can, in principle, combine both approaches and at the same time attempt to make the output satisfy a formal definition of privacy such as “$\epsilon$-differential privacy” approach due to Dwork [12] and Nissim [27]. Dwork et al. [11] discuss efficient ways to do this for several problems involving the secure evaluation of sums, whereas our protocols involve calculation of secure sums and products. This combined secure-private approach would involve computing some form of perturbed regression coefficients and statistics for assessing goodness of fit.
A very different approach involves carrying out data sanitization directly on the data held by the parties. This would entail the parties each adding random noise to their data in an effort to preserve individuals’ privacy, while maintaining some form of utility in the data. Next, the parties would share these sanitized databases among themselves, at which point they could perform whatever statistical analysis they wanted. This approach requires a formal definition of privacy 17 to be achieved via the sanitization process, e.g., using “$\epsilon$-differential privacy.” Were we to insist on this cryptographic definition of privacy, the use of a sanitization scheme would thwart the data merger, except in the case of horizontal partitioning, and even then it would affect the regression coefficients and related goodness of fit statistics. There is no developed theory that would allow us to carry out accurate statistical inference under such a scheme. Therefore, although the approach is a conceptually appealing alternative, we would need to do further work before it would be practical for multi-party statistical calculations especially in moderate to high dimensional problems. \end{comment} \subsection{Secure Multiparty Computation} SMC allows the computation of a function with multiple inputs from different users while keeping the users' inputs hidden from each other. For instance, each party $P_i$ in an $n$-party environment holding input $x_i$ learns nothing but the output $f(x_1,...,x_n)$ of the computation. In the literature, SMC schemes are mostly achieved via either Yao's garbled circuits~\cite{yao1982protocols} or Homomorphic Encryption (HE)~\cite{gentry2009fully}. In the following, we use HE to provide the guarantees of secure computation. \vspace{3pt} \noindent\textbf{Homomorphic Encryption (HE)-} HE provides the ability to evaluate functions directly on encrypted data while keeping the data confidential.
The primary advantage of HE is that it does not require any interaction between the parties other than the data exchange. That is, there is no additional communication complexity. However, it may introduce computational overhead on large plaintexts. Recent works improved its performance significantly by introducing new techniques like single instruction, multiple data (SIMD) operations~\cite{smart2014fully} or using different mathematical assumptions like learning with errors (LWE)~\cite{brakerski2012leveled,brakerski2014efficient} (see~\cite{2017arXiv170403578A} for a recent survey about HE). \begin{figure} \centering \begin{tikzpicture}[>=stealth,thick, scale=0.4] \node[] (A) {$m_1,m_2$}; \node[right=2cm of A] (B) {$f(m_1,m_2)$}; \node[above=2cm of A] (C) {$c_1,c_2$}; \node[right=2cm of C] (D) {$f(c_1,c_2)$}; \draw[shorten <=0cm,shorten >=1.5cm,-latex,line width=0.6mm] (C.0)--node[above left]{$Eval(...)$}(D.0); \draw[shorten <=0cm,shorten >=0.5cm,-latex,line width=0.6mm] (A.100)--node[left]{$Enc_{pk}(...)$}(C.100); \draw[shorten <=0cm,shorten >=0.5cm,-latex,line width=0.5mm] (D.-100)--node[right]{$Dec_{sk}(...)$}(B.-130); \draw[shorten <=0cm,shorten >=1.8cm,-latex,line width=0.5mm] (A.0)--node[below left]{$f(...)$}(B.0); \end{tikzpicture} \caption{HE operations of encryption, evaluation, and decryption ($pk$ is the public key, $sk$ is the secret key, and $f$ is the function desired to be computed). \label{diagram}} \end{figure} An HE scheme is primarily characterized by four operations: key generation ($KeyGen$), encryption ($Enc$), decryption ($Dec$), and evaluation ($Eval$). $KeyGen$ is the operation that is used to generate a secret and public key pair for the asymmetric version of HE, or a single key for the symmetric version. $KeyGen$, $Enc$ and $Dec$ are similar to the ones used in conventional encryption schemes. However, $Eval$ is an HE-specific operation, which takes ciphertexts as input and outputs a ciphertext corresponding to the function applied to the underlying plaintexts.
Fig.~\ref{diagram} illustrates a commutative diagram depicting the relationship among the four major operations. The simplified version of the diagram shows only a single homomorphic evaluation with two ciphertexts~\cite{gentry2014computing}.
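For concreteness, the four operations can be instantiated with a toy Paillier cryptosystem, which is additively homomorphic (multiplying ciphertexts adds the plaintexts). This is a sketch, not the paper's implementation, and the tiny hard-coded primes are for illustration only; they provide no security.

```python
import math
import random

def keygen(p=1789, q=1861):
    # Toy Paillier KeyGen with tiny hard-coded primes (insecure, demo only).
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)        # valid because g = n + 1 gives L(g^lam) = lam
    return (n,), (lam, mu, n)   # pk = (n,), sk = (lam, mu, n)

def enc(pk, m, rng):
    (n,) = pk
    nsq = n * n
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    # c = g^m * r^n mod n^2, with the standard choice g = n + 1.
    return (pow(n + 1, m, nsq) * pow(r, n, nsq)) % nsq

def dec(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n      # L(x) = (x - 1) / n

def eval_add(pk, c1, c2):
    # Eval: the ciphertext product decrypts to the plaintext sum.
    (n,) = pk
    return (c1 * c2) % (n * n)

rng = random.Random(3)
pk, sk = keygen()
c1, c2 = enc(pk, 123, rng), enc(pk, 456, rng)
```

Here `dec(sk, eval_add(pk, c1, c2))` recovers $123+456$ without ever decrypting the individual inputs, which is exactly the $Eval$ arrow of Fig.~\ref{diagram} for an additive $f$.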
\section{\protect\bigskip Introduction} In the past few years there has been remarkable progress in the determination of fermion masses and mixing \cite{mix}, involving advances both in theory and experiment. In the quark sector, input from experiment includes the knowledge of $|V_{us}|$, $|V_{cb}|$, $|V_{ub}/V_{cb}|$, $|V_{td}|$, $|V_{ts}|$, together with the measurement of the rephasing-invariant angles $\beta $ and $\gamma $. In the framework of the Standard Model (SM), these results constrain the location of the vertex of the unitarity triangle to a small region. The measurement of the angle $\gamma $ is especially important since it provides clear evidence that the Cabibbo-Kobayashi-Maskawa (CKM) matrix \cite{ckm} is complex, even if one allows for the presence of New Physics beyond the SM \cite{bsm}. In the leptonic sector, non-vanishing neutrino masses and mixing have been established \cite{neut0}. However, there are still important open questions, such as the nature of the neutrino mass spectrum (normal hierarchy, inverted hierarchy or quasi-degenerate), the determination of whether neutrinos are Majorana or Dirac particles, as well as the search for leptonic CP violation. In this respect, the recent evidence in favour of a non-vanishing value of $U_{e3}$ provides the hope of discovering leptonic CP violation in neutrino oscillations. It is well known that a non-vanishing $U_{e3}$ is a necessary requirement in order to have leptonic Dirac-type CP violation, which is detectable in neutrino oscillations. In spite of these developments, one does not yet have a standard theory of flavour. One may adopt a bottom-up approach and try to discover a symmetry principle from the observed pattern of fermion masses and mixing. One of the difficulties that one encounters in following this approach stems from the fact that there is a large redundancy in the Yukawa couplings $Y_{u}$, $Y_{d}$ which generate the quark masses $M_{u}$, $M_{d}$.
One can make weak-basis (WB) transformations which change $M_{u}$, $M_{d}$, but do not alter their physical content. The above considerations also apply to flavour models which postulate the existence of so-called ``texture zeros''. It is clear that these zeros only exist in a particular WB. The above redundancy in Yukawa couplings and fermion mass matrices motivates the use of WB invariants, i.e. functions of the quark mass matrices which do not change when one performs a WB transformation. These WB invariants are very useful in the analysis of CP violation, where they have been derived from first principles \cite{principles} and have been applied to both the quark \cite{cp-quark} and lepton \cite{cp-lepton} sectors, including leptogenesis, as well as to the Higgs sector \cite{cp-higgs}. In this paper, we show that the main features of the pattern of fermion masses and mixing can be expressed in terms of simple relations involving only WB invariants. We introduce the concept of ``alignment'', which can be understood in the following way. For definiteness, let us consider the quark sector, where small flavour mixing is indicated by experiment. Small mixing implies that there is a WB where both $M_{u}$ and $M_{d}$ are close to the diagonal form. However, experiment shows more than that: it tells us that the quark mass matrices are aligned in flavour space, meaning that there is a basis where $M_{d}=diag(m_{d},m_{s},m_{b})$ and $M_{u}$ is close to $diag(m_{u},m_{c},m_{t})$ and not to $diag(m_{t},m_{c},m_{u})$, for example. Obviously, only the relative ordering of the eigenvalues of $M_{u}$, $M_{d}$ is physically meaningful, since by making a WB transformation one can change simultaneously the eigenvalue ordering in $M_{u}$ and $M_{d}$. At this point, it is worth recalling that in the context of the SM, the Yukawa couplings leading to $M_{u}$ and $M_{d}$ are entirely independent; there is no ``dialog'' between $Y_{u}$ and $Y_{d}$.
Therefore, in the context of the SM, or its minimal supersymmetric extension, alignment is in no way more ``natural'' than misalignment. Quite on the contrary, if one considers the manifold of matrices $M_{u}$, $M_{d}$ leading to ``small mixing'' as previously defined, the probability of having alignment is only 1/6. In this paper we will show how WB invariants can distinguish not only between small mixing and large mixing, but also between alignment and misalignment. The paper is organized as follows. In the next section, we consider the quark sector, where we illustrate the usefulness of WB invariants for two and three generations. In section 3, we apply WB invariants to the study of some ans\"{a}tze for quark mass matrices, while in section 4 we briefly study the lepton sector, showing in particular how the observed pattern of leptonic mixing can be obtained through a set of conditions on WB invariants. Our conclusions are contained in section 5. \section{The quark sector} \subsection{Two quark generations} For simplicity and in order to explain the concept of alignment and its connection to weak-basis invariants, we consider first the case of two generations. The three important features of quark masses and mixing are: \begin{description} \item (i) Hierarchical quark masses, \item (ii) Small mixing, \item (iii) Alignment. \end{description} We consider separately each one of these features, emphasizing that they are logically independent, at least in the context of the SM. We shall identify how each one of these features can be expressed in terms of weak-basis invariants. \textbf{(i) Hierarchical masses} \noindent For two generations, the fact that quark masses are hierarchical can be expressed in terms of invariants as \begin{equation} r_{1}\equiv \frac{Det[H]}{\left( \frac{1}{2}Tr[H]\right) ^{2}}\ll 1 \label{r1} \end{equation} where $H\equiv MM^{\dagger }$ denotes either $H_{d}$ or $H_{u}$. The case of exact degeneracy corresponds to $r_{1}=1$.
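As a quick numerical illustration of Eq.~(\ref{r1}) (round placeholder squared masses, arbitrary units; not fitted values):

```python
def r1(m1sq, m2sq):
    # r_1 = Det[H] / (Tr[H]/2)^2 for H = diag(m1^2, m2^2).
    return (m1sq * m2sq) / ((0.5 * (m1sq + m2sq)) ** 2)

hier = r1(2.5e-5, 1e-2)    # hierarchical masses: r_1 << 1
degen = r1(1e-2, 1e-2)     # exact degeneracy: r_1 = 1
```

The invariant is basis independent by construction, so evaluating it on the diagonal form loses no generality.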
\textbf{(ii) Small mixing} \noindent It can be readily verified that the following relation holds \begin{equation} Tr\left( [H_{u},H_{d}]^{2}\right) =-\frac{1}{2}\left( \Delta _{12}^{d}\right) ^{2}\left( \Delta _{12}^{u}\right) ^{2}\sin ^{2}(2\theta ) \label{i1} \end{equation} where $\Delta _{12}^{d}=m_{d}^{2}-m_{s}^{2}$, $\Delta _{12}^{u}=m_{c}^{2}-m_{u}^{2}$ and $\theta $ denotes the Cabibbo angle. Let us consider the following invariant ratio \begin{equation} r_{2}\equiv \frac{\left| Tr\left( [H_{u},H_{d}]^{2}\right) \right| }{\frac{1}{2}\left( Tr[H_{d}]\right) ^{2}\left( Tr[H_{u}]\right) ^{2}} \label{r2} \end{equation} Assuming that quark masses are hierarchical, which can be guaranteed through the invariant condition of Eq. (\ref{r1}), it is clear from Eq. (\ref{i1}) that \begin{equation} r_{2}\approx \sin ^{2}(2\theta ) \label{r2s} \end{equation} Therefore, small mixing can be achieved through the invariant condition \begin{equation} r_{2}\ll 1 \label{r2ss} \end{equation} Maximal mixing corresponds to $\theta =45^{\circ}$, i.e. \begin{equation} r_{2}=1 \label{r2l} \end{equation} \textbf{(iii) Alignment} Small mixing means that there is a weak basis where both $H_{d}$ and $H_{u}$ are close to the diagonal form. As mentioned before, this is not sufficient to have ``alignment'', since it does not guarantee the same ``ordering'' in both $H_{d}$ and $H_{u}$. Alignment means, of course, that in a WB where $H_{d}$ is close to $diag(m_{d}^{2},m_{s}^{2})$, $H_{u}$ is close to $diag(m_{u}^{2},m_{c}^{2})$ and not to $diag(m_{c}^{2},m_{u}^{2})$. As previously emphasized, one can change the ordering simultaneously in the up and down sectors through a WB transformation. Only the relative ordering in the up and down quark sectors is physically meaningful.
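The relations of (ii) and (iii) can be checked numerically with a short script. It verifies Eq.~(\ref{i1}) exactly, confirms $r_{2}\approx\sin^{2}(2\theta)$ for hierarchical masses, and evaluates the alignment invariant $I_{1}=1-Tr[H_{u}H_{d}]/(Tr[H_{u}]\,Tr[H_{d}])$ (defined in Eq.~(\ref{I10}) below) for the aligned and the swapped up-quark ordering. The squared masses are round placeholder values in arbitrary units, not fitted ones.

```python
import math

def mul(A, B):
    # 2x2 real matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def rot(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

theta = 0.227                      # Cabibbo-like angle (illustrative)
md2, ms2 = 0.005**2, 0.1**2        # placeholder squared down-sector masses
mu2, mc2 = 0.002**2, 1.27**2       # placeholder squared up-sector masses

Hd = [[md2, 0.0], [0.0, ms2]]
R, Rt = rot(theta), rot(-theta)
Hu = mul(mul(R, [[mu2, 0.0], [0.0, mc2]]), Rt)   # aligned ordering

# Eq. (i1): Tr([H_u, H_d]^2) = -1/2 (Delta_d)^2 (Delta_u)^2 sin^2(2 theta)
C = sub(mul(Hu, Hd), mul(Hd, Hu))
lhs = tr(mul(C, C))
rhs = -0.5 * (md2 - ms2) ** 2 * (mu2 - mc2) ** 2 * math.sin(2 * theta) ** 2
r2 = abs(lhs) / (0.5 * tr(Hd) ** 2 * tr(Hu) ** 2)   # ~ sin^2(2 theta)

def I1(Hu, Hd):
    # I_1 = (Tr[Hu] Tr[Hd] - Tr[Hu Hd]) / (Tr[Hu] Tr[Hd])
    return 1.0 - tr(mul(Hu, Hd)) / (tr(Hu) * tr(Hd))

aligned = I1(Hu, Hd)                                    # small: alignment
Hu_swapped = mul(mul(R, [[mc2, 0.0], [0.0, mu2]]), Rt)  # reversed up ordering
misaligned = I1(Hu_swapped, Hd)                         # close to 1
```

With these hierarchical inputs, `aligned` comes out close to $\sin^{2}\theta$ while `misaligned` is close to 1, so the invariant separates the two orderings even though both correspond to the same small mixing angle.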
In the case of two generations, assuming hierarchical quark masses and small mixing, one has alignment if the following invariant \begin{equation} \begin{array}{l} I_{1}\equiv \frac{Tr[H_{u}]\ Tr[H_{d}]-Tr[H_{u}H_{d}]}{Tr[H_{u}]\ Tr[H_{d}]}=\sin ^{2}(\theta )+ \\ \\ +\left( \frac{\left( m_{d}/m_{s}\right) ^{2}}{\left( 1+\left( m_{d}/m_{s}\right) ^{2}\right) }+\frac{\left( m_{u}/m_{c}\right) ^{2}}{\left( 1+\left( m_{u}/m_{c}\right) ^{2}\right) }-\frac{2\left( m_{d}/m_{s}\right) ^{2}\left( m_{u}/m_{c}\right) ^{2}}{\left( 1+\left( m_{d}/m_{s}\right) ^{2}\right) \left( 1+\left( m_{u}/m_{c}\right) ^{2}\right) }\right) \cos (2\theta ) \end{array} \label{I10} \end{equation} satisfies the condition \begin{equation} I_{1}\ll 1 \label{I1} \end{equation} On the contrary, assuming again hierarchical quark masses and small mixing, misalignment implies: \begin{equation} I_{1}\approx 1 \label{I1-a} \end{equation} \subsection{Three quark generations} \subsubsection{Hierarchy of quark masses} The hierarchy of quark masses in both the up and down quark sectors, namely \begin{equation} m_{1}^{2}\ll m_{2}^{2}\ll m_{3}^{2} \label{ms} \end{equation} can be translated into invariant conditions. We introduce the Hermitian quark mass matrices $H\equiv MM^{\dagger }$\ and the corresponding invariants $\det (H)$, $Tr[H]$ together with the third invariant $\chi \lbrack H]$\ which stands for $\chi \lbrack H]\equiv m_{1}^{2}m_{2}^{2}+m_{1}^{2}m_{3}^{2}+m_{2}^{2}m_{3}^{2}$. Note that for an Hermitian $3\times 3$ matrix $H$, one has \begin{equation} \chi \lbrack H]=\frac{1}{2}\left( \left( Tr[H]\right) ^{2}-Tr[H^{2}]\right) \label{ki1} \end{equation} The following invariant condition \begin{equation} R_{1}\equiv \frac{\chi \lbrack H]}{Tr[H]^{2}}\ll 1 \label{hier2} \end{equation} implies that one of the eigenvalues of $H$ is much larger than the other two.
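The discriminating power of the two-generation alignment invariant $I_{1}$ of Eq. (\ref{I10}) can be illustrated numerically (a hedged sketch, not part of the original text; all inputs illustrative):

```python
import numpy as np

# Sketch: I1 = (Tr[Hu]Tr[Hd] - Tr[Hu Hd]) / (Tr[Hu]Tr[Hd]) is small for
# aligned, hierarchical matrices with small mixing, and close to 1 for
# misaligned ones.  Illustrative two-generation masses (GeV) and angle.
def I1(Hu, Hd):
    return (np.trace(Hu) * np.trace(Hd) - np.trace(Hu @ Hd)) \
           / (np.trace(Hu) * np.trace(Hd))

theta = 0.227
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Hu = np.diag([0.002**2, 1.27**2])
Hd_aligned = R @ np.diag([0.005**2, 0.1**2]) @ R.T      # same ordering
Hd_misaligned = R @ np.diag([0.1**2, 0.005**2]) @ R.T   # opposite ordering
```

For the aligned case $I_{1}\approx \sin^{2}\theta$, while swapping the ordering of the down-type eigenvalues drives $I_{1}$ close to 1.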
Finally, it can be readily verified that the condition \begin{equation} R_{2}\equiv \frac{Tr[H]\ Det[H]}{\left( \chi \lbrack H]\right) ^{2}}\ll 1 \label{hier3} \end{equation} together with Eq. (\ref{hier2}) implies that of the two smaller eigenvalues, one of them is much smaller than the other one, i.e. $m_{1}^{2}\ll m_{2}^{2}$. \subsubsection{Invariants and the pattern of mixing} \bigskip Previously \cite{previously}, invariants were used to study specific ans\"{a}tze where the quark mass matrices were written in an Hermitian basis. Here, we consider WB invariants which can be applied in an arbitrary basis, not necessarily Hermitian. It is convenient to introduce the following dimensionless matrices with unit trace, $Tr[h_{u,d}]=1$: \begin{equation} h_{u}=\frac{H_{u}}{Tr[H_{u}]}\qquad ;\qquad h_{d}=\frac{H_{d}}{Tr[H_{d}]} \label{tr1} \end{equation} and their difference: \begin{equation} A\equiv h_{d}-h_{u} \label{diffh} \end{equation} By construction, one has $Tr[A]=0$, which in turn implies \begin{equation} \chi \left[ A\right] =-\frac{1}{2}Tr[A^{2}] \label{chia} \end{equation} There is a relation between $\chi \left[ A\right] $ and $I_{1}$, defined in Eq. (\ref{I10}). Indeed, from Eqs. (\ref{I10}, \ref{tr1}), one obtains \begin{equation} I_{1}=1-Tr[h_{u}h_{d}] \label{reli1} \end{equation} while Eqs. (\ref{diffh}, \ref{chia}) lead to \begin{equation} \chi \left[ A\right] =Tr[h_{u}h_{d}]-\frac{1}{2}Tr[h_{u}^{2}]-\frac{1}{2}Tr[h_{d}^{2}] \label{chia1} \end{equation} From Eqs. (\ref{reli1}, \ref{chia1}), one therefore gets \begin{equation} \chi \left[ A\right] =1-I_{1}-\frac{1}{2}Tr[h_{u}^{2}]-\frac{1}{2}Tr[h_{d}^{2}] \label{reli2} \end{equation} Assuming hierarchy of the quark masses, which can be implemented through the invariants of Eqs.
(\ref{hier2}, \ref{hier3}), one obtains \begin{equation} \begin{array}{l} Tr[h_{d}^{2}]=1-2\left( \frac{m_{s}}{m_{b}}\right) ^{2}+O\left( \left( \frac{m_{s}}{m_{b}}\right) ^{4}\right) \\ \\ Tr[h_{u}^{2}]=1-2\left( \frac{m_{c}}{m_{t}}\right) ^{2}+O\left( \left( \frac{m_{c}}{m_{t}}\right) ^{4}\right) \end{array} \label{trhud} \end{equation} On the other hand, an explicit evaluation of $I_{1}$ for three generations in terms of $\left| V_{ij}\right| $ and quark mass ratios gives \begin{equation} I_{1}=\left| V_{23}\right| ^{2}+\left| V_{13}\right| ^{2}+\left( \frac{m_{s}}{m_{b}}\right) ^{2}+\left( \frac{m_{c}}{m_{t}}\right) ^{2}+O\left( \left( \frac{m_{s}}{m_{b}}\right) ^{4}\right) \label{i10} \end{equation} Using Eqs. (\ref{reli2}, \ref{trhud}, \ref{i10}), one finally gets: \begin{equation} \left|\chi \left[ A\right]\right| =\left| V_{23}\right| ^{2}+\left| V_{13}\right| ^{2}+O\left( \left( \frac{m_{s}}{m_{b}}\right) ^{4}\right) \label{chia3} \end{equation} The usefulness of $\chi \left[ A\right] $\ is clear from Eq. (\ref{chia3}): it gives to an excellent approximation the value of $\left| V_{23}\right| ^{2}+\left| V_{13}\right| ^{2}$. At this stage, it is worth recalling that one of the main features of quark mixing is the fact that the 3rd generation almost decouples from the other two. The deviation from exact decoupling is given by the size of $\left\vert V_{23}\right\vert ^{2}+\left\vert V_{13}\right\vert ^{2}$.
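The accuracy of Eq. (\ref{chia3}) can be checked with CKM-like input (a hedged numerical sketch, not part of the original text; masses in GeV and mixing angles are illustrative):

```python
import numpy as np

# Sketch: |chi[A]| should reproduce |V23|^2 + |V13|^2 to excellent
# approximation for hierarchical masses.  All numerical inputs illustrative.
def rot(i, j, th):
    R = np.eye(3)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = -np.sin(th), np.sin(th)
    return R

V = rot(1, 2, 0.041) @ rot(0, 2, 0.0036) @ rot(0, 1, 0.227)  # CKM-like
Hu = np.diag([0.002**2, 1.27**2, 173.0**2])
Hd = V @ np.diag([0.005**2, 0.095**2, 4.18**2]) @ V.T

hu, hd = Hu / np.trace(Hu), Hd / np.trace(Hd)
A = hd - hu
chi_A = 0.5 * (np.trace(A)**2 - np.trace(A @ A))   # chi of a traceless matrix
target = V[1, 2]**2 + V[0, 2]**2                   # |V23|^2 + |V13|^2
```

With these inputs the residual is of relative order $(m_{s}/m_{b})^{2}$ or smaller, far below the value of the invariant itself.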
The experimental measurement of $\left\vert V_{23}\right\vert $ and $\left\vert V_{13}\right\vert $ shows that: \begin{equation} \left\{ \left\vert V_{23}\right\vert ^{2}+\left\vert V_{13}\right\vert ^{2}\right\} ^{\exp .}=O\left( \left( \frac{m_{s}}{m_{b}}\right) ^{2}\right) \label{expv23} \end{equation} This input from experiment can be written in terms of a simple relation among WB invariants \begin{equation} \chi \left[ A\right] =O\left( \frac{\chi \left[ H_{d}\right] }{\left( Tr[H_{d}]\right) ^{2}}\right) =O\left( \chi \left[ h_{d}\right] \right) \label{expchia} \end{equation} It is worth emphasizing that, for three generations, $\chi \left[ A\right] $ is also a measure of the alignment of the down and up quark mass matrices. Working in a WB where the up quarks are diagonal, one can choose, without loss of generality, to order the up quarks in such a way that $H_{u}=diag(m_{u}^{2},m_{c}^{2},m_{t}^{2})$. In the context of small mixing, alignment means that, in the above basis, $H_{d}$ is close to $diag(m_{d}^{2},m_{s}^{2},m_{b}^{2})$. In this case, $\chi \left[ A\right] $ is small. In fact, if we take the limit $m_{t}\rightarrow \infty $, $m_{b}\rightarrow \infty $, one has $h_{u}=diag(0,0,1)$, $h_{d}=diag(0,0,1)$ and $A$ vanishes, so $\chi \left[ A\right] =0$. On the other hand, if there is small mixing but no alignment, in the WB where $H_{u}=diag(m_{u}^{2},m_{c}^{2},m_{t}^{2})$, one may have $H_{d}$ close to $diag(m_{b}^{2},m_{s}^{2},m_{d}^{2})$. In this case, $\chi \left[ A\right] $ is large. Indeed in the limit $m_{t}\rightarrow \infty $, $m_{b}\rightarrow \infty $, one has, for this case, $h_{u}=diag(0,0,1)$ but $h_{d}=diag(1,0,0)$, which leads to $\left|\chi \left[ A\right]\right| =1$, signalling total misalignment. Next, we address the question of how to use invariants to constrain separately $\left| V_{23}\right| ^{2}$ and $\left| V_{13}\right| ^{2}$. This is a more difficult task, involving more complicated invariants, as was to be expected.
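The two decoupling limits described above give a compact numerical sanity check (a hedged sketch, not part of the original text):

```python
import numpy as np

# Sketch of the m_t, m_b -> infinity limits: chi[A] = 0 for alignment and
# |chi[A]| = 1 for total misalignment.
def chi(M):
    return 0.5 * (np.trace(M)**2 - np.trace(M @ M))

hu = np.diag([0.0, 0.0, 1.0])
A_aligned = np.diag([0.0, 0.0, 1.0]) - hu     # h_d with the same ordering
A_misaligned = np.diag([1.0, 0.0, 0.0]) - hu  # h_d with inverted ordering
```

Here $A_{\rm misaligned}=diag(1,0,-1)$ is traceless with $Tr[A^{2}]=2$, so $\chi[A]=-1$, reproducing the value quoted in the text.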
In order to constrain $\left| V_{13}\right| $, let us consider the following WB invariant: \begin{equation} I_{2}\equiv 1-\frac{Tr[H_{u}]\ Tr[H_{u}H_{d}]-Tr[H_{u}^{2}H_{d}]}{\chi [H_{u}]\ Tr[H_{d}]} \label{i2} \end{equation} This invariant can be readily calculated and one obtains in the chiral limit, i.e. when $m_{u},m_{d}=0$: \begin{equation} I_{2}=\frac{\frac{m_{s}^{2}}{m_{b}^{2}}\ |V_{12}|^{2}+|V_{13}|^{2}}{1+\frac{m_{s}^{2}}{m_{b}^{2}}} \label{i2-limit} \end{equation} It is clear from Eq. (\ref{i2-limit}) that if we constrain $I_{2}$ to be of order $\lambda ^{6}$, with $\lambda $\ denoting the sine of the Cabibbo angle, then $|V_{13}|$\ is at most of order $\lambda ^{3}$. It can be shown that this conclusion holds when one does not assume the chiral limit. Indeed, an exact calculation of $I_{2}$ gives: \begin{equation} \begin{array}{l} I_{2}=\frac{1}{\left[ 1+\left( \frac{m_{s}}{m_{b}}\right) ^{2}+\left( \frac{m_{d}}{m_{b}}\right) ^{2}\right] \left[ 1+\left( \frac{m_{u}}{m_{c}}\right) ^{2}+\left( \frac{m_{u}}{m_{t}}\right) ^{2}\right] }\ \cdot \\ \\ (\ |V_{13}|^{2}+\left( \frac{m_{u}}{m_{c}}\right) ^{2}|V_{23}|^{2}+\left( \frac{m_{u}}{m_{t}}\right) ^{2}|V_{33}|^{2}+ \\ \\ +\left( \frac{m_{s}}{m_{b}}\right) ^{2}\left[ |V_{12}|^{2}+\left( \frac{m_{u}}{m_{c}}\right) ^{2}|V_{22}|^{2}+\left( \frac{m_{u}}{m_{t}}\right) ^{2}|V_{32}|^{2}\right] + \\ \\ +\left( \frac{m_{d}}{m_{b}}\right) ^{2}\left[ |V_{11}|^{2}+\left( \frac{m_{u}}{m_{c}}\right) ^{2}|V_{21}|^{2}+\left( \frac{m_{u}}{m_{t}}\right) ^{2}|V_{31}|^{2}\right] \ ) \end{array} \label{i2-exact} \end{equation} From Eq. (\ref{i2-exact}), and given the quark mass hierarchy, one concludes that putting $I_{2}\approx \lambda ^{6}$ constrains $|V_{13}|$ to be at most of order $\lambda ^{3}$. Then, from Eq. (\ref{chia3}), it follows that setting $\chi [A]\approx \lambda ^{4}$ constrains $|V_{23}|$\ to be of order $\lambda ^{2}$, as indicated by experiment.
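In the chiral limit the relation of Eq. (\ref{i2-limit}) is an exact identity, which makes it easy to verify numerically (a hedged sketch, not part of the original text; inputs illustrative):

```python
import numpy as np

# Sketch: verify Eq. (i2-limit), i.e. in the chiral limit m_u = m_d = 0,
#   I2 = ((m_s^2/m_b^2)|V12|^2 + |V13|^2) / (1 + m_s^2/m_b^2).
# CKM-like angles and masses (GeV) below are illustrative.
def rot(i, j, th):
    R = np.eye(3)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = -np.sin(th), np.sin(th)
    return R

V = rot(1, 2, 0.041) @ rot(0, 2, 0.0036) @ rot(0, 1, 0.227)
ms2, mb2 = 0.095**2, 4.18**2
Hu = np.diag([0.0, 1.27**2, 173.0**2])        # m_u = 0
Hd = V @ np.diag([0.0, ms2, mb2]) @ V.T       # m_d = 0

chi_u = 0.5 * (np.trace(Hu)**2 - np.trace(Hu @ Hu))
I2 = 1 - (np.trace(Hu) * np.trace(Hu @ Hd) - np.trace(Hu @ Hu @ Hd)) \
         / (chi_u * np.trace(Hd))
rhs = ((ms2 / mb2) * V[0, 1]**2 + V[0, 2]**2) / (1 + ms2 / mb2)
```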
We have thus shown how to fix separately $|V_{23}|$ and $|V_{13}|$ through WB invariants. In order to constrain $|V_{12}|$, it is convenient to use WB invariants involving $H_{u,d}^{-1}$. Let us define \begin{equation} \widehat{A}=\widehat{h}_{d}-\widehat{h}_{u} \label{diffinv} \end{equation} where \begin{equation} \widehat{h}_{u}=\frac{H_{u}^{-1}}{Tr[H_{u}^{-1}]}\qquad ;\qquad \widehat{h}_{d}=\frac{H_{d}^{-1}}{Tr[H_{d}^{-1}]} \label{hinv} \end{equation} We have assumed that none of the quark masses vanish, as indicated by experiment and theory. In the weak basis where the up quark mass matrix is diagonal, we have \begin{equation} \begin{array}{l} \widehat{h}_{u}=\frac{1}{\left( 1+\frac{m_{u}^{2}}{m_{c}^{2}}+\frac{m_{u}^{2}}{m_{t}^{2}}\right) }\ diag\left( 1,\frac{m_{u}^{2}}{m_{c}^{2}},\frac{m_{u}^{2}}{m_{t}^{2}}\right) \\ \\ \widehat{h}_{d}=\frac{1}{\left( 1+\frac{m_{d}^{2}}{m_{s}^{2}}+\frac{m_{d}^{2}}{m_{b}^{2}}\right) }\ V\cdot diag\left( 1,\frac{m_{d}^{2}}{m_{s}^{2}},\frac{m_{d}^{2}}{m_{b}^{2}}\right) \cdot V^{\dagger } \end{array} \label{hhoed} \end{equation} Note that the eigenvalues of $\widehat{h}_{u,d}$, denoted by $\widehat{\mu }_{i}$, satisfy an inverted hierarchy: $\widehat{\mu }_{1}\gg \widehat{\mu }_{2}\gg \widehat{\mu }_{3}$. We evaluate now $\chi \lbrack \widehat{A}]$, obtaining: \begin{equation} \left\vert \chi (\widehat{A})\right\vert =\left\vert V_{12}\right\vert ^{2}+\left\vert V_{13}\right\vert ^{2}-2\left( \frac{m_{d}}{m_{s}}\right) ^{2}\left\vert V_{12}\right\vert ^{2}+O(\lambda ^{8}) \label{ahoed} \end{equation} If one constrains $\chi \lbrack \widehat{A}]$ to be of order $\lambda ^{2}$, one necessarily has $\left\vert V_{12}\right\vert \cong \lambda $, taking into account that $\left\vert V_{13}\right\vert ^{2}$ was already constrained to be of order $\lambda ^{6}$. In order to complete the determination of $V^{CKM}$ through WB invariants, we have to address the question of CP violation.
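The inverse-matrix expansion of Eq. (\ref{ahoed}) can also be probed numerically (a hedged sketch, not part of the original text; all masses and angles illustrative):

```python
import numpy as np

# Sketch: check |chi[A_hat]| ≈ |V12|^2 + |V13|^2 - 2(m_d/m_s)^2 |V12|^2,
# built from the normalized inverse mass matrices.  Illustrative inputs.
def rot(i, j, th):
    R = np.eye(3)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = -np.sin(th), np.sin(th)
    return R

V = rot(1, 2, 0.041) @ rot(0, 2, 0.0036) @ rot(0, 1, 0.227)
md, ms, mb = 0.005, 0.095, 4.18
Hu = np.diag([0.002**2, 1.27**2, 173.0**2])
Hd = V @ np.diag([md**2, ms**2, mb**2]) @ V.T

Hu_inv, Hd_inv = np.linalg.inv(Hu), np.linalg.inv(Hd)
A_hat = Hd_inv / np.trace(Hd_inv) - Hu_inv / np.trace(Hu_inv)
chi_hat = 0.5 * (np.trace(A_hat)**2 - np.trace(A_hat @ A_hat))
approx = V[0, 1]**2 + V[0, 2]**2 - 2 * (md / ms)**2 * V[0, 1]**2
```

The residual is of order $\lambda^{8}$, i.e. several orders of magnitude below $|V_{12}|^{2}$ itself.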
It has been shown \cite {principles} from first principles that the vanishing of the following WB invariant is a necessary condition for CP invariance in the SM, for an arbitrary number of generations: \begin{equation} I^{CP}\equiv Tr\left( [H_{u},H_{d}]^{3}\right) \label{icp} \end{equation} For three generations, $I^{CP}=0$ is a necessary and sufficient condition for CP invariance. In terms of physical quantities, one has \begin{equation} I^{CP}=G\ \mathrm{Im}[V_{12}V_{23}V_{13}^{*}V_{22}^{*}] \label{icp0} \end{equation} where $ G=6i(m_{b}^{2}-m_{s}^{2})(m_{b}^{2}-m_{d}^{2})(m_{s}^{2}-m_{d}^{2})(m_{t}^{2}-m_{c}^{2})(m_{t}^{2}-m_{u}^{2})(m_{c}^{2}-m_{u}^{2}) $. If we now set $\frac{1}{G}I^{CP}$ to be of order $\lambda ^{6}$, and take into account that $\mathrm{Im}[V_{12}V_{23}V_{13}^{*}V_{22}^{*}]= \left| V_{12}\right| \left| V_{23}\right| \left| V_{13}\right| \left| V_{22}\right| \sin (\phi )$ (with $\phi =\arg [V_{12}V_{23}V_{13}^{*}V_{22}^{*}]$), then we conclude that this constrains $\sin (\phi )$ to be of order one. We have thus shown, without going through any process of diagonalization of the quark mass matrices, how a set of WB invariants can completely fix the pattern of mixing and the strength of CP violation present in $V^{CKM}$. \section{Applying invariants to various ans\"{a}tze} \subsection{General remark} Next, we show the usefulness of the WB invariants introduced in the previous section and apply these invariants to some specific ans\"{a}tze.
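The key properties of the CP-odd invariant of Eq. (\ref{icp}) can be verified numerically (a hedged sketch, not part of the original text: the CKM-like parameterization with a single phase $\delta$ and all masses are illustrative, and only the magnitude of Eq. (\ref{icp0}) is tested, leaving sign conventions aside):

```python
import numpy as np

# Sketch: Tr([Hu,Hd]^3) is purely imaginary, vanishes for a real
# (CP-conserving) mixing matrix, and its magnitude matches |G Im[quartet]|.
def ckm(t12, t13, t23, d):
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    ep, em = np.exp(1j * d), np.exp(-1j * d)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
         c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13]])

def icp(V, up_sq, dn_sq):
    Hu = np.diag(up_sq).astype(complex)
    Hd = V @ np.diag(dn_sq).astype(complex) @ V.conj().T
    C = Hu @ Hd - Hd @ Hu
    return np.trace(C @ C @ C)

mu2, mc2, mt2 = 0.002**2, 1.27**2, 173.0**2
md2, ms2, mb2 = 0.005**2, 0.095**2, 4.18**2
V = ckm(0.227, 0.0036, 0.041, 1.2)
I_cp = icp(V, [mu2, mc2, mt2], [md2, ms2, mb2])
G = 6j * (mb2 - ms2) * (mb2 - md2) * (ms2 - md2) \
       * (mt2 - mc2) * (mt2 - mu2) * (mc2 - mu2)
quartet = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
I_cp_real = icp(ckm(0.227, 0.0036, 0.041, 0.0), [mu2, mc2, mt2],
                [md2, ms2, mb2])
```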
First, we derive some general results which apply to any flavour model where both the up and down Hermitian squared quark mass matrices with trace normalized to unity are equal to some fixed matrix $\Delta _{o}$ of order one plus some small perturbation denoted by $\left( \varepsilon A\right) _{u,d}$: \begin{equation} h_{d}=\Delta _{o}+\varepsilon _{d}A_{d}\qquad ;\qquad h_{u}=\Delta _{o}+\varepsilon _{u}A_{u} \label{small} \end{equation} An example could be the case where $\Delta _{o}$ stands for the so-called democratic matrix, where all elements are equal, but our results apply to a broader class of flavour matrices. It is clear that Eq. (\ref{small}) is a sufficient condition to obtain alignment, since it follows from Eq. (\ref{small}) that $A=h_{d}-h_{u}=\varepsilon _{d}A_{d}-\varepsilon _{u}A_{u}$ is small. Let us consider now those ans\"{a}tze where the following further conditions are satisfied \begin{equation} \begin{array}{l} \left| \varepsilon _{u}\right| \ll \left| \varepsilon _{d}\right| \qquad ;\qquad Tr[A_{u,d}]=0\qquad ;\qquad \left( Tr[\Delta _{o}]\right) ^{2}=Tr[\Delta _{o}^{2}] \\ \\ Tr[A_{u,d}\ \Delta _{o}]\leq O(\varepsilon _{u,d})\qquad \end{array} \label{valid} \end{equation} It follows from Eq. (\ref{valid}) that, to a good approximation, $A\approx \varepsilon _{d}A_{d}$ and one obtains \begin{equation} \left| \chi (A)\right| =\varepsilon _{d}^{2}\ \left| \chi [A_{d}]\right| =\frac{1}{2}\varepsilon _{d}^{2}Tr[A_{d}^{2}]=O\left( \varepsilon _{d}^{2}\right) \label{apchi} \end{equation} Furthermore, using the conditions of Eq.
(\ref{valid}) and computing $\chi (h_{d})$, one gets \begin{equation} \begin{array}{l} \left| \chi (h_{d})\right| = \\ \\ =\frac{1}{2}\left| \left( Tr[\Delta _{o}]\right) ^{2}-Tr[\Delta _{o}^{2}]-\varepsilon _{d}Tr[A_{d}\ \Delta _{o}+\Delta _{o}A_{d}]-\varepsilon _{d}^{2}Tr[A_{d}^{2}]\right| =O\left( \varepsilon _{d}^{2}\right) \end{array} \label{trhd} \end{equation} Therefore, one finds \begin{equation} \left| \chi (A)\right| =O\left( \left| \chi (h_{d})\right| \right) =O\left( \left( \frac{m_{s}}{m_{b}}\right) ^{2}\right) \label{cases} \end{equation} Note that Eq. (\ref{cases}) coincides with Eq. (\ref{expchia}) and therefore, for the whole class of ans\"{a}tze satisfying the generic conditions of Eqs. (\ref{small}, \ref{valid}), one has the correct prediction \begin{equation} \left| V_{23}\right| ^{2}+\left| V_{13}\right| ^{2}=O\left( \left( \frac{m_{s}}{m_{b}}\right) ^{2}\right) \label{result0} \end{equation} This is a remarkable result. Using WB invariants, one can show that a whole class of ans\"{a}tze for $M_{u}$, $M_{d}$ satisfying the generic conditions of Eq. (\ref{valid}) satisfies Eq. (\ref{result0}), which is one of the experimentally observed salient features of $V^{CKM}$. \subsection{The USY ansatz} We now apply our invariants to the hypothesis of Universality of Strength of Yukawa (USY) couplings \cite{usy}, \cite{usy1}, \cite{usy2}, where all Yukawa couplings have equal moduli, the flavour dependence being all contained in their phases.
For definiteness, let us consider the case where $M_{u,d}$ have the symmetric form: \begin{equation} M_{u,d}=c_{u,d}\left( \begin{array}{ccc} 1 & 1 & e^{i(\alpha -\beta )} \\ 1 & 1 & e^{i(\alpha )} \\ e^{i(\alpha -\beta )} & e^{i(\alpha )} & e^{i(\alpha )} \end{array} \right) _{u,d} \label{usy1} \end{equation} Computing the invariants of the associated $h_{u,d}$, \begin{equation} \begin{array}{l} Det[h]=\frac{4^{2}}{9^{3}}\sin ^{4}(\frac{\beta }{2}) \\ \\ \chi [h]=\frac{4}{9^{2}}\left[ \sin ^{2}(\frac{\alpha }{2})+4\sin ^{2}(\frac{\beta }{2})+\sin ^{2}(\frac{\alpha -2\beta }{2})+2\sin ^{2}(\frac{\alpha -\beta }{2})\right] \end{array} \label{inv1} \end{equation} we find that in leading order the parameters $\alpha _{u,d}$ and $\beta _{u,d}$ are small, \begin{equation} \begin{array}{c} \left| \alpha _{d}\right| =\frac{9}{2}\frac{m_{s}}{m_{b}}\quad ;\quad \left| \beta _{d}\right| =3\sqrt{3}\frac{\sqrt{m_{d}m_{s}}}{m_{b}} \\ \\ \left| \alpha _{u}\right| =\frac{9}{2}\frac{m_{c}}{m_{t}}\quad ;\quad \left| \beta _{u}\right| =3\sqrt{3}\frac{\sqrt{m_{u}m_{c}}}{m_{t}} \end{array} \label{ab} \end{equation} and that the $h_{u,d}$ computed from Eq. (\ref{usy1}) have the form: \begin{equation} h_{u,d}=\frac{\Delta }{3}+\left( \varepsilon A\right) _{u,d}\quad ;\quad \Delta =\left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right) \label{demo} \end{equation} where $\Delta $ is the democratic mass matrix and $\left( \varepsilon A\right) _{u,d}$ are matrices of order $\alpha _{u,d}$ and $\beta _{u,d}$. Up to second order in the largest parameter $\alpha $, we find \begin{equation} \left( \varepsilon A\right) _{u,d}=\frac{1}{9}\left( \begin{array}{lll} 0 & -i\beta & -2i\alpha -\alpha ^{2} \\ i\beta & 0 & -2i\alpha -\alpha ^{2}+i\beta \\ 2i\alpha -\alpha ^{2} & 2i\alpha -\alpha ^{2}-i\beta & 0 \end{array} \right) _{u,d} \label{eps} \end{equation} The form of $h_{u,d}$ corresponds to our general conditions in Eqs.
(\ref{small}, \ref{valid}) and we find that $\left| \chi (A)\right| $ is indeed small. Thus, the USY scenario implies alignment. Furthermore, we find that in leading order \begin{equation} \left| \chi (A)\right| =\left| \chi (\varepsilon _{d}A_{d})\right| \label{chiusyd} \end{equation} and that \begin{equation} \left| \chi (\varepsilon _{d}A_{d})\right| =2\left| \chi (h_{d})\right| \label{chiusyd1} \end{equation} Therefore, combining with Eq. (\ref{chia3}), we obtain in leading order \begin{equation} \left| V_{23}\right| ^{2}+\left| V_{13}\right| ^{2}=2\left( \frac{m_{s}}{m_{b}}\right) ^{2} \label{usyv23} \end{equation} In addition, one obtains for the invariant $I_{2}$ associated with $\left\vert V_{13}\right\vert $ of Eq. (\ref{i2}), in the limit $m_{u}=0$, the exact result \begin{equation} I_{2}=\frac{2}{9}\sin ^{2}\left( \frac{\beta _{d}}{2}\right) \label{i2-usy} \end{equation} which combined with Eqs. (\ref{i2-limit}, \ref{ab}) leads, in leading order, to the following expression: \begin{equation} \frac{m_{s}^{2}}{m_{b}^{2}}\ |V_{12}|^{2}+|V_{13}|^{2}=\frac{3}{2}\frac{m_{d}m_{s}}{m_{b}^{2}} \label{i2-usy0} \end{equation} The results expressed in Eqs. (\ref{usyv23}, \ref{i2-usy0}) are in agreement with the results which were obtained for this ansatz \cite{usy2}, where in leading order it was found that $\left\vert V_{23}\right\vert =\sqrt{2}\frac{m_{s}}{m_{b}}$. With respect to $|V_{12}|$ and the second invariant in Eq.
(\ref{ahoed}), we compute $\widehat{h}_{u}=\frac{H_{u}^{-1}}{Tr[H_{u}^{-1}]}$, $\widehat{h}_{d}=\frac{H_{d}^{-1}}{Tr[H_{d}^{-1}]}$ and find in leading order \begin{equation} \widehat{h}_{u,d}=\frac{1}{2}\left( \begin{array}{ccc} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right) +\frac{1}{2}\left( \frac{\beta }{\alpha }\right) _{u,d}\left( \begin{array}{ccc} 1 & 0 & -1 \\ 0 & -1 & 1 \\ -1 & 1 & 0 \end{array} \right) =\Delta _{o}+\widehat{\varepsilon }_{u,d}\widehat{A}_{u,d} \label{ahoed1} \end{equation} which then leads to \begin{equation} \left\vert \chi (\widehat{A})\right\vert =\left\vert \chi (\widehat{\varepsilon }_{d}\widehat{A}_{d})\right\vert =\frac{3}{4}\left( \frac{\beta _{d}}{\alpha _{d}}\right) ^{2} \label{usy12} \end{equation} Combining with Eqs. (\ref{ab}, \ref{ahoed})\ and with the results already obtained for Eqs. (\ref{usyv23}, \ref{i2-usy0}), we find in leading order \begin{equation} \left\vert V_{12}\right\vert ^{2}=\frac{m_{d}}{m_{s}} \label{usy12new} \end{equation} which corresponds exactly to what was known for this USY ansatz. Finally, putting together Eqs. (\ref{usyv23}, \ref{i2-usy0}, \ref{usy12new}), one obtains the correct USY approximate expression $|V_{13}|=\frac{1}{\sqrt{2}}\frac{\sqrt{m_{d}m_{s}}}{m_{b}}$.
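The exact USY invariants of Eq. (\ref{inv1}) and the leading-order relation of Eq. (\ref{chiusyd1}) lend themselves to a direct numerical check (a hedged sketch, not part of the original text; the phase parameter values are illustrative):

```python
import numpy as np

# Sketch: for the USY form, Det[h] and chi[h] are given exactly by Eq.
# (inv1) for any alpha, beta; for small hierarchical parameters one also
# expects |chi(A)| ≈ 2 |chi(h_d)| in leading order.
def usy_h(alpha, beta):
    a = np.exp(1j * alpha)
    ab = np.exp(1j * (alpha - beta))
    M = np.array([[1, 1, ab], [1, 1, a], [ab, a, a]])
    H = M @ M.conj().T
    return H / np.trace(H).real         # unit-trace h

def chi(M):
    return (0.5 * (np.trace(M)**2 - np.trace(M @ M))).real

# exact invariant formulas, checked at generic parameter values
al, be = 0.7, 0.3
h = usy_h(al, be)
det_pred = (16.0 / 729.0) * np.sin(be / 2)**4
chi_pred = (4.0 / 81.0) * (np.sin(al / 2)**2 + 4 * np.sin(be / 2)**2
                           + np.sin((al - 2 * be) / 2)**2
                           + 2 * np.sin((al - be) / 2)**2)

# leading-order relation for small, hierarchical parameters (illustrative)
ad, bd = 0.02, 1e-4
hd = usy_h(ad, bd)
hu = usy_h(ad / 100, bd / 100)
chi_A = chi(hd - hu)
```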
\subsection{Asymmetry in the NNI Weak Basis} It has been shown \cite{nni} that, starting with arbitrary quark mass matrices $M_{u}^{\circ }$, $M_{d}^{\circ }$, in the framework of the SM, it is possible to make a WB transformation such that $M_{u}$, $M_{d}$ acquire the Nearest Neighbour Interaction (NNI) form: \begin{equation} M_{u}=c_{u}\left( \begin{array}{lll} 0 & a_{u} & 0 \\ \widehat{a}_{u} & 0 & b_{u} \\ 0 & \widehat{b}_{u} & 1 \end{array} \right) \qquad ;\qquad M_{d}=c_{d}\ K\cdot \left( \begin{array}{lll} 0 & a_{d} & 0 \\ \widehat{a}_{d} & 0 & b_{d} \\ 0 & \widehat{b}_{d} & 1 \end{array} \right) \label{nni} \end{equation} where all $a_{u,d},\widehat{a}_{u,d},b_{u,d},\widehat{b}_{u,d},c_{u,d}$ are real and the matrix $K=diag(1,e^{i\phi _{1}},e^{i\phi _{2}})$. In the limit that $\widehat{a}_{u,d}=a_{u,d}$ and $\widehat{b}_{u,d}=b_{u,d}$, one obtains the Fritzsch ansatz \cite{frit}, which has been eliminated by experiment, namely by the large value of the top quark mass and the observed value of $|V_{cb}|$. In the following, we use our invariants to find out the minimal asymmetry which is required in $M_{u}$, $M_{d}$, when written in the NNI basis, in order to conform with experiment. Let us define the asymmetries \begin{equation} \varepsilon _{u}\equiv \frac{\widehat{b}_{u}-b_{u}}{\widehat{b}_{u}+b_{u}}\qquad ;\qquad \varepsilon _{d}\equiv \frac{\widehat{b}_{d}-b_{d}}{\widehat{b}_{d}+b_{d}} \label{eud} \end{equation} and the total asymmetry \begin{equation} \varepsilon =\sqrt{\varepsilon _{u}^{2}+\varepsilon _{d}^{2}} \label{aeud} \end{equation} Note that alignment and hierarchy of the quark mass matrices are guaranteed in Eq. (\ref{nni}) by taking $(a,b)_{u,d}$, $(\widehat{a},\widehat{b})_{u,d}$ much smaller than 1.
Computing the invariants associated with $h_{u}$ and $h_{d}$ as in Eq. (\ref{tr1}), and taking into account the hierarchy of the quark mass matrices, one obtains in good approximation \begin{equation} \left| a\right| ^{2}\ \left| \widehat{a}\right| ^{2}=\frac{\left| m_{1}m_{2}\right| }{m_{3}^{2}}\qquad ;\qquad \left| b\right| ^{2}\ \left| \widehat{b}\right| ^{2}=\left( \frac{m_{2}}{m_{3}}\right) ^{2} \label{ab1} \end{equation} Then, combining Eqs. (\ref{eud}, \ref{ab1}) we obtain: \begin{equation} b_{u}^{2}=\frac{m_{c}}{m_{t}}\left( \frac{1-\varepsilon _{u}}{1+\varepsilon _{u}}\right) \qquad ;\qquad b_{d}^{2}=\frac{m_{s}}{m_{b}}\left( \frac{1-\varepsilon _{d}}{1+\varepsilon _{d}}\right) \label{bud} \end{equation} Now, computing $\chi (A)$ as in Eq. (\ref{diffh}) with $h_{u}$ and $h_{d}$ obtained from the NNI form in Eq. (\ref{nni}), and using Eq. (\ref{ab1}), we get an expression which relates the experimental value for $\left| V_{cb}\right| $ and $\left| \chi (A)\right| $ as in Eq. (\ref{chia3}) in terms of the parameters of the NNI form. We find in leading order \begin{equation} b_{d}^{2}-2b_{u}b_{d}\cos (\phi )+b_{u}^{2}-b_{d}^{4}=\left| V_{cb}\right| ^{2}+2\left( \frac{m_{s}}{m_{b}}\right) ^{2} \label{bud1} \end{equation} where $\phi =\phi _{1}-\phi _{2}$ is a phase resulting from the diagonal matrix $K$ in Eq. (\ref{nni}). This expression is obtained taking into account that $(a,\widehat{a})=O\left( \frac{\sqrt{\left| m_{1}m_{2}\right| }}{m_{3}}\right) $ and that $(b,\widehat{b})=O\left( \sqrt{\left| \frac{m_{2}}{m_{3}}\right| }\right) $, as implied by Eqs. (\ref{ab1}, \ref{bud}). From Eqs. (\ref{bud}, \ref{bud1}) we find that there is a connection between the required asymmetries $\varepsilon _{u},\varepsilon _{d}$ of the up and down quark sectors in order to conform to experiment. This connection can be understood as follows. Take the case when $\phi =0$ and $\varepsilon _{d}=0$, then from the second relation in Eq.
(\ref{bud}) it follows that $b_{d}=\sqrt{\frac{m_{s}}{m_{b}}}$, but then the expression of Eq. (\ref{bud1}) forces also $b_{u}=\sqrt{\frac{m_{s}}{m_{b}}}$ in leading order, and therefore, from the first relation in Eq. (\ref{bud}), one gets $\varepsilon _{u}\approx -1+2\left( \frac{m_{c}}{m_{t}}\right) /\left( \frac{m_{s}}{m_{b}}\right) $. Therefore, when the asymmetry in the down sector is small, the required asymmetry in the up sector is large, and vice versa. It can be readily verified that when $\phi \neq 0$, this result also holds. Indeed, by eliminating from Eqs. (\ref{bud}, \ref{bud1}) both $b_{u}$ and $b_{d}$, one finds $\varepsilon _{u}$ as a function of $\varepsilon _{d}$ and $\phi $ \begin{equation} \varepsilon _{u}=\varepsilon _{u}(\varepsilon _{d},\phi ) \label{feud} \end{equation} and one then computes the total asymmetry $\varepsilon $ in Eq. (\ref{aeud}). This total asymmetry can thus be written as a function of $\varepsilon _{d}$ and $\phi $. One finds that it increases for all values of $\phi \neq 0$, and it has a minimum for a certain value of $\varepsilon _{d}$ (and $\phi =0$). We have plotted the total asymmetry $\varepsilon $ in Fig. 1 as a function of $\varepsilon _{d}$ for typical values of $\frac{m_{c}}{m_{t}}$, $\frac{m_{s}}{m_{b}}$ and $\left\vert V_{cb}\right\vert $ at $M_{Z}$. As can be seen from the plot, the minimal required total asymmetry is about $\varepsilon =0.2$, which indicates clearly that ans\"{a}tze, written in the NNI basis, require quark mass matrices with a considerable amount of asymmetry in order to conform to experiment. This finding agrees with the result previously obtained \cite{emagus} by explicitly diagonalizing the quark mass matrices written in the NNI basis. \begin{figure}[!t] \begin{center} \includegraphics[width=10cm,height=7cm]{koek.eps} \end{center} \caption{Total required asymmetry $\varepsilon$ as a function of the down quark mass matrix asymmetry $\varepsilon_d $.
The full line is for the values $m_s = 60\ MeV$, $\phi=0$, the tiny dashed line for $m_s=100\ MeV$, $\phi=0$ and the large dashed line for $m_s=80\ MeV$, $\phi=0.35$. For all curves, we took the values $m_b=3.0\ GeV$, $m_c=680\ MeV$, $m_t=181\ GeV$ and $|V_{cb}|=0.037$ at $M_Z$. } \label{fig:asym} \end{figure} \section{Leptons} The hierarchy of lepton masses may also be expressed in terms of invariants of $H_{l},H_{\nu }$. For the charged leptons one may use essentially the same invariants as for the quarks. For the neutrinos, the invariant \begin{equation} R_{1}\equiv \frac{4\chi [H_{\nu }]}{Tr[H_{\nu }]^{2}} \label{hier2n} \end{equation} may distinguish normal hierarchy, corresponding to $R_{1}\ll 1$, from inverted hierarchy ($R_{1}=1$) and degeneracy ($R_{1}=\frac{4}{3}$). The invariant \begin{equation} R_{2}\equiv \frac{3Tr[H_{\nu }]\ Det[H_{\nu }]}{\left( \chi [H_{\nu }]\right) ^{2}} \label{hier3n} \end{equation} may also be used to distinguish inverted hierarchy, when $R_{2}$ is small, from degeneracy, when $R_{2}=1$. Furthermore, for normal hierarchy, it can distinguish the case when one of the two smaller masses is much smaller than the other one, $R_{2}\ll 1$, from the case when these two small masses are of the same order. In this case $R_{2}$ is of order one. Thus, we have \begin{equation} \begin{array}{lllllll} Normal_{1} & & Normal_{2} & & Inverted & & Degenerate \\ R_{1}\ll 1 & & R_{1}\ll 1 & & R_{1}=1 & & R_{1}=\frac{4}{3} \\ & & & & & & \\ R_{2}\ll 1 & & R_{2}=O(1) & & R_{2}\ll 1 & & R_{2}=1 \end{array} \label{tab} \end{equation} \subsection{Normal Hierarchy} For definiteness, let us assume that neutrinos have normal hierarchy, with $m_{1}^{2}\ll m_{2}^{2}\ll m_{3}^{2}$. Next, we show that one can use a set of WB invariants of the lepton sector in order to fix the leptonic mixing matrix.
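The classification of Eq. (\ref{tab}) can be checked numerically (a hedged sketch, not part of the original text; mass values are illustrative and in arbitrary units):

```python
import numpy as np

# Sketch: limiting values of R1 = 4 chi[H]/Tr[H]^2 and
# R2 = 3 Tr[H] Det[H]/chi[H]^2 for the neutrino mass patterns of the text.
def R1R2(masses_sq):
    H = np.diag(masses_sq)        # both invariants are weak-basis invariant
    tr = np.trace(H)
    c = 0.5 * (tr**2 - np.trace(H @ H))
    return 4 * c / tr**2, 3 * tr * np.linalg.det(H) / c**2

R1_inv, R2_inv = R1R2([1.0, 1.0, 1e-6])   # inverted hierarchy (m3 smallest)
R1_deg, R2_deg = R1R2([1.0, 1.0, 1.0])    # exact degeneracy
R1_nrm, R2_nrm = R1R2([1e-4, 1e-2, 1.0])  # normal hierarchy, m1 << m2
```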
In the limit $m_{e}=0$, one obtains: \begin{equation} I_{\nu _{2}}\equiv 1-\frac{Tr[H_{l}]\ Tr[H_{l}H_{\nu }]-Tr[H_{l}^{2}H_{\nu }]}{\chi [H_{l}]\ Tr[H_{\nu }]}=\frac{\frac{m_{1}^{2}}{m_{3}^{2}}\ |V_{12}|^{2}+|V_{13}|^{2}}{1+\frac{m_{1}^{2}}{m_{2}^{2}}} \label{limit-v13} \end{equation} The experimental data show that the leptonic mixing matrix is close to tribimaximal mixing \cite{tribi}. In particular, it is shown that $|V_{13}|^{2}\equiv |U_{e3}|^{2}$ is small. From Eq. (\ref{limit-v13}), it follows that this can be guaranteed by having $I_{\nu _{2}}\ll 1$. On the other hand, the associated invariant $I_{\nu _{2}}^{\prime }$, obtained by interchanging $H_{l}$ and $H_{\nu }$, yields in the limit $m_{1}=0$: \begin{equation} I_{\nu _{2}}^{\prime }\equiv 1-\frac{Tr[H_{\nu }]\ Tr[H_{\nu }H_{l}]-Tr[H_{\nu }^{2}H_{l}]}{\chi [H_{\nu }]\ Tr[H_{l}]}=\frac{\frac{m_{e}^{2}}{m_{\tau }^{2}}\ |V_{21}|^{2}+|V_{31}|^{2}}{1+\frac{m_{e}^{2}}{m_{\tau }^{2}}} \label{v13n} \end{equation} By putting $I_{\nu _{2}}^{\prime }\approx \frac{1}{6}$, one has $|V_{31}|^{2}\approx \frac{1}{6}$ as in tribimaximal mixing. Then, computing the leptonic invariant equivalent to that of Eqs. (\ref{i10}, \ref{chia3}), one obtains: \begin{equation} \left| \chi \left[ A_{\nu }\right]\right| =\left| V_{23}\right| ^{2}+\left| V_{13}\right| ^{2}+O\left( \left( \frac{m_{2}}{m_{3}}\right) ^{4}\right) \label{chian} \end{equation} which must be near $\frac{1}{2}$ for tribimaximal mixing. With $I_{\nu _{2}}$, $I_{\nu _{2}}^{\prime }$ and $\chi \left[ A_{\nu }\right] $ and the unitarity of $V^{PMNS}$, one can ensure that the mixing is near to tribimaximal mixing: if $|V_{13}|^{2}$ is small and $\left| V_{23}\right| ^{2}$ is near $\frac{1}{2}$, then $\left| V_{33}\right| ^{2}$ must also be near to $\frac{1}{2}$.
Since $\left| V_{31}\right| ^{2}$ is near to $\frac{1}{6}$, it follows that $\left| V_{32}\right| ^{2}$ is near to $\frac{1}{3}$, and then $|V_{11}|^{2}$ and $|V_{12}|^{2}$ are near to $\frac{2}{3}$ and $\frac{1}{3}$, respectively. We have thus shown how to obtain the observed pattern of leptonic mixing through a set of invariant conditions. \section{Conclusions} We pointed out that the use of weak-basis invariants can avoid the well-known redundancy of free parameters in the flavour structure of mass matrices. These invariants are specially useful when one opts for a bottom-up approach to the study of the flavour structure of Yukawa couplings and fermion mass matrices. In particular, we have shown that the pattern of fermion mixing, both in the quark and lepton sectors, can be expressed in terms of relations only involving weak-basis invariants. We have also pointed out that the observed alignment of the up and down quark mass matrices in flavour space can be guaranteed through a weak-basis invariant condition. It was emphasized that in the context of the SM, the above alignment in no way follows automatically from the Yukawa coupling structure, since $Y_{u}$ and $Y_{d}$\ are independent. On the other hand, this alignment may arise naturally, e.g. in left-right symmetric theories or in $SO(10)$, where $Y_{u}$ and $Y_{d}$\ may be approximately proportional to each other. In summary, WB invariants may play an important r\^{o}le in a systematic search for patterns of fermion mass matrices consistent with experiment and may thus help to uncover a possible flavour symmetry chosen by nature. \section{Acknowledgments} This work was partially supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT, Portugal) through the projects CERN/FP/11638/2010, PTDC/FIS/098188/2008 and CFTP-FCT Unit 777, which are partially funded through POCTI (FEDER), and by the Marie Curie Initial Training Network ``UNILHC'' PITN-GA-2009-237920.
\section{Introduction} Consider the two-stage linear Adjustable Robust Optimization (ARO) problem with an ellipsoidal uncertainty set \begin{mini} {\substack{\under{\vect{x},\vect{y}(\cdot)}}}{\vect{c}^T\vect{x}}{}{\name{$P_0$}} \addConstraint{A(\vect{z})\vect{x} + B\vect{y}(\vect{z})}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z}} \end{mini} where $\mathcal{Z} = \left\{\vect{z}\in\mathbb{R}^l : \norm{\vect{z}}^2 \leq r^2, r > 0\right\}$ is the user specified ellipsoidal uncertainty set, $\vect{x}\in \mathbb{R}^n$ is the first-stage ``here and now'' decision that is made before $\vect{z}\in\mathbb{R}^l$ is realized, $\vect{y}(\vect{z})\in\mathbb{R}^k$ is the second-stage ``wait and see'' decision that can be adjusted according to the actual data; the coefficient matrix $A\in \mathbb{R}^{m\times n}$ and the right hand side vector $\vect{d}\in\mathbb{R}^m$ depend on the uncertainty parameter $\vect{z}$, and the (fixed recourse) coefficient matrix $B=(\vect{b}_1, \ldots, \vect{b}_m)^T, \; \vect{b}_i\in\mathbb{R}^k $ does not depend on $\vect{z}$. The ARO approach, which employs ARO model problems of the form $(P_0)$, is less conservative than the traditional Robust Optimization (RO) methodology, pioneered by Ben-Tal et al. \cite{robustbook,7,34,siamRevRobust,Goberna-Jeya-Li15,jeya-optimL}, as it yields more flexible decisions that can be adjusted according to the realized portion of the data at a given stage, and so allows multi-stage decision-making in practical applications \cite{De-tutorial15}. Moreover, ARO provides optimal objective values that are at least as good as those of the standard RO approach \cite{robustbook,dendick-18}. However, the two-stage ARO problem $(P_0)$ is a challenging optimization problem to study, theoretically and numerically, because a linear function is optimized over $\vect{y}(\cdot)$, which are mappings $\vect{y}:\mathcal{Z}\rightarrow \mathbb{R}^k$, rather than vectors.
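To illustrate the basic mechanism behind robust counterparts over a ball (a hedged sketch that is not taken from the paper; all numbers are illustrative): for a single constraint that depends affinely on the uncertainty $\vect{z}$, the worst case admits the closed form $\max_{\norm{\vect{z}}\leq r}\,(c_0 + \vect{c}^T\vect{z}) = c_0 + r\norm{\vect{c}}$, attained at $\vect{z}^* = r\vect{c}/\norm{\vect{c}}$:

```python
import numpy as np

# Sketch: closed-form worst case of an affine function over the ball
# ||z|| <= r, cross-checked against a Monte-Carlo lower bound.
# The data (c0, c, r) are illustrative.
rng = np.random.default_rng(0)
c0, c, r = 1.5, np.array([0.3, -0.4, 1.2]), 2.0

closed_form = c0 + r * np.linalg.norm(c)
z_star = r * c / np.linalg.norm(c)       # maximizing uncertainty realization

# sample points on the sphere of radius r (plus z* itself)
Z = rng.standard_normal((2000, 3))
Z = r * Z / np.linalg.norm(Z, axis=1, keepdims=True)
sampled = max(np.max(c0 + Z @ c), c0 + z_star @ c)
```

This norm bound is the building block that turns a robust linear constraint over an ellipsoid into a second order cone constraint.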
It is generally hard to obtain a numerically tractable characterization of the system with a mapping $\vect{y}(\cdot)$ unless the mapping is restricted to satisfy some special rules, called ``decision rules". Traditionally, $\vect{y}(\cdot)$ is assumed to satisfy an Affine Decision Rule (ADR), such as $\vect{y}(\vect{z})= \vect{y}_0 + W\vect{z}$, where $\vect{y}_0\in \mathbb{R}^k$ and $W\in \mathbb{R}^{k\times l}$ are the coefficients of the decision rule that are to be optimized \cite{robustbook,Chen-09-OR}. In many cases, in particular for affinely parameterized ARO problems \cite{Ben-tal-MP-04,Survey-19}, affine decision rules result in computationally tractable reformulations and are known to give optimal or near-optimal solutions for broad classes of practical problems, e.g., inventory management \cite{robustbook}. On the other hand, transformations of two-stage ARO problems with nonlinear decision rules to single-stage robust problems often result in hard non-convex optimization problems \cite{robustbook}. Consequently, the study of the computational tractability and applicability of these problems with nonlinear decision rules is of great interest in robust optimization. In this paper we examine {\it affinely parameterized two-stage adjustable robust linear optimization problems} with quadratic decision rules under an ellipsoidal uncertainty set and make the following contributions. \begin{itemize} \item[(i)] We introduce a new parameterized Quadratic Decision Rule (QDR), generalizing the commonly employed affine decision rule, and show that affinely parameterized linear ARO problems with QDRs are numerically tractable by presenting exact conic reformulations. In particular, we establish exact second order cone program (SOCP) reformulations for linear ARO problems under a special class of separable QDRs.
We do this by generalizing the approach of \cite{34,robustbook,jeyaChuong} for ADRs and employing the $\mathcal{S}$-lemma \cite{Ben-tal01} and the Schur complement. We further show how exact conic programming reformulations can be derived from our results for ARO problems with adjustable variables also in their objective functions, as they appear in many practical decision-making models of optimization, such as the lot-sizing problem with uncertain demand. Various nonlinear decision rules, such as the homogeneous \cite{arxiv} and non-homogeneous quadratic decision rules \cite{thesis,robustbook}, and polynomial decision rules \cite{bert}, have also recently been used to approximate and reformulate ARO problems. Our results readily yield corresponding exact conic program reformulations for affinely parameterized linear ARO problems \cite{Ben-tal-MP-04} with affine decision rules as well as with homogeneous and non-homogeneous quadratic decision rules. \item[(ii)] We employ our SDP and SOCP reformulations to solve the lot-sizing problem with uncertain demand and compare the performance of these techniques by contrasting their optimal solutions both in the worst-case sense and after simulated realisations of the uncertain demand. Numerical experiments on lot-sizing problems demonstrate that the quadratic decision rule outperforms affine decision rules in both cases, whilst the time taken to solve problems with quadratic decision rules is significantly greater (due to the larger number of variables) than for those with affine decision rules. \end{itemize} \medskip In section 2 we present the parameterized quadratic decision rule, an extension of the affine decision rule, and present exact SDP and SOCP reformulations for two-stage ARO problems. In section 3, we derive exact conic programming reformulations for ARO problems with adjustable variables also in their objective functions.
In section 4, we employ our reformulation schemes to solve the lot-sizing problem and show that the QDR solution is both consistent with the ADR solution and improves upon it. In section 5 we present concluding remarks with a brief discussion on further research. \medskip \section{Quadratic Decision Rules \& Exact Conic Program Reformulations} We begin by fixing some preliminaries. The notation $\mathbb{R}^n$ signifies the Euclidean space for each $n\in\mathbb N:=\{1,2,\ldots\}$ and $\mathbb{S}_l$ is the space of all real $l \times l$ symmetric matrices. As usual, the symbol $I_n$ stands for the $(n\times n)$ identity matrix, while $\mathbb{R}_+:=[0,+\infty)\subset \mathbb{R}.$ The inner product in $\mathbb{R}^n$ is defined by $\langle x,y\rangle:=x^T y$ for all $x, y\in\mathbb{R}^n.$ A symmetric $(n\times n)$ matrix $A$ is said to be positive semi-definite, denoted by $A\succeq 0$, whenever $x^T Ax\ge 0$ for all $x\in\mathbb{R}^n.$ \medskip In this section, we present numerically tractable conic linear program reformulations of the affinely adjustable case of the two-stage robust linear optimization problem $(P_0)$ under a parameterized quadratic decision rule (QDR), which is defined as follows: \begin{definition}[{\bf Quadratic Decision Rule}] Let $\theta\in [0,1]$. The ARO problem $(P_0)$ is said to satisfy the parameterized quadratic decision rule whenever the mapping $\vect{y}(\cdot)$ is restricted to mappings of the form \[ \vect{y}(\vect{z}) = \theta(\vect{y}_0 + W\vect{z}) + (1-\theta) \mymat{c}{\vect{z}^TQ_1\vect{z} \\ \vect{z}^T Q_2\vect{z} \\ \vdots \\ \vect{z}^T Q_k \vect{z}}. \] \end{definition} To simplify notation, we define the operator \[ \vect{z}^T \mathcal{Q}_k \vect{z} = \mymat{c}{\vect{z}^T Q_1 \vect{z} \\ \vdots \\ \vect{z}^T Q_k \vect{z}} \] so that our QDR is $\vect{y}(\vect{z}) = \theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z}$. \medskip \noindent\textbf{QDRs and SDP Reformulations}.
Consider the following affinely parameterized version of ARO problem $(P_0)$ with the parameterized QDR, \begin{mini} {\substack{\under{\vect{x},\vect{y}_0},\\W,Q_j,j=1,\dots,k}}{\vect{c}^T\vect{x}}{}{(P)} \addConstraint{A(\vect{z})\vect{x} + B\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right)}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z},} \end{mini} where $\mathcal{Z} = \left\{\vect{z}\in\mathbb{R}^l : \norm{\vect{z}}^2 \leq r^2\right\}$ is the ellipsoidal uncertainty set; $\vect{c}\in\mathbb{R}^n$; $B=(\vect{b}_1, \ldots, \vect{b}_m)^T$, $\vect{b}_i\in\mathbb{R}^k$; $A(\vect{z}) = (\vect{a}_1+A_1\vect{z}, \dots, \vect{a}_m + A_m\vect{z})^T,\; \vect{a}_i\in\mathbb{R}^{n},\; A_i\in\mathbb{R}^{n\times l}$, $\vect{d}(\vect{z}) = (d_{0,1}+\vect{d}_1^T\vect{z},\dots,d_{0,m} + \vect{d}_m^T\vect{z})^T,\; d_{0,i}\in\mathbb{R}$, $\vect{d}_i\in\mathbb{R}^{l}$ and $\theta \in [0, 1]$. We associate with ($P$) the following semi-definite program \begingroup \begin{mini*} {\substack{\under{\vect{x},\vect{y}_0, \vect{\lambda}},\\W,Q_j,j=1,\dots,k}}{\vect{c}^T\vect{x}}{}{(P-QDR)} \addConstraint{ \lambda_i\geq 0, \; i=1,\dots,m}, \mymat{ccc}{ P_1 & & \\ & \ddots \\ & & P_m }\succeq 0, \end{mini*} where $\vect{x}\in\mathbb{R}^{n},\;\vect{y}_0\in\mathbb{R}^{k},\;W\in\mathbb{R}^{k\times l},\; Q_j\in\mathbb{S}_l,j=1,\dots,k$ and \[ P_i = \mymat{cc}{d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0-\lambda_i r^2 & \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W) \\ \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W)^T & \lambda_i I_l -(1-\theta)\displaystyle\sum_{j=1}^k{(\vect{b}_i)_j Q_j}},\quad i = 1,\dots,m. \] \endgroup We first show that the problem (P) admits an exact SDP reformulation in the sense that the objective values of (P) and (P-QDR) are equal and their constraint systems are equivalent.
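Before the formal proof, the algebra behind the blocks $P_i$ can be sanity-checked numerically: with $u=(1,\vect{z})$, the slack of the $i$-th robust constraint at $\vect{z}$ equals the quadratic form $u^T R_i u$, where $R_i$ is $P_i$ without the $\lambda_i$ terms. A short numpy sketch with toy data (all sizes and names are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, l, r, theta = 3, 2, 4, 1.0, 0.5   # toy sizes and rule parameter

# data for a single constraint row, plus a candidate decision
a, A = rng.standard_normal(n), rng.standard_normal((n, l))
b = rng.standard_normal(k)
d0, d = rng.standard_normal(), rng.standard_normal(l)
x, y0 = rng.standard_normal(n), rng.standard_normal(k)
W = rng.standard_normal((k, l))
Qs = [rng.standard_normal((l, l)) for _ in range(k)]
Qs = [(Q + Q.T) / 2 for Q in Qs]        # symmetrize, as Q_j lies in S_l

def y_of_z(z):
    """Parameterized QDR: y(z) = theta*(y0 + W z) + (1-theta)*[z^T Q_j z]_j."""
    return theta * (y0 + W @ z) + (1 - theta) * np.array([z @ Q @ z for Q in Qs])

def slack(z):
    """d(z) - [A(z)-row applied to x + b^T y(z)] for this constraint row."""
    return (d0 + d @ z) - ((a + A @ z) @ x + b @ y_of_z(z))

# the matrix R from the proof: P_i with the lambda_i terms removed
S = sum(bj * Q for bj, Q in zip(b, Qs))
c00 = d0 - a @ x - theta * b @ y0
c = d - A.T @ x - theta * W.T @ b
R = np.block([[np.array([[c00]]), 0.5 * c[None, :]],
              [0.5 * c[:, None], -(1 - theta) * S]])

# the slack at z coincides with u^T R u for u = (1, z)
for _ in range(100):
    z = rng.standard_normal(l)
    z *= r * rng.random() / np.linalg.norm(z)   # random point of Z
    u = np.concatenate(([1.0], z))
    assert np.isclose(slack(z), u @ R @ u)
```

The $\mathcal{S}$-Lemma then converts nonnegativity of this quadratic form over the ball into the linear matrix inequality defining $P_i$.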
To do this, we first recall the celebrated $\mathcal{S}$-Lemma \cite{Ben-tal01}, which is a useful tool in nonconvex quadratic optimization. \begin{lemma}[$\mathcal{S}$-Lemma] Let $A$, $B$ be two symmetric matrices for which there exists $\vect{z}_0$ with $\vect{z}_0^T A \vect{z}_0> 0$. Then, \[ \vect{z}^T A \vect{z}\geq 0 \implies \vect{z}^T B \vect{z}\geq 0 \] holds true if and only if \[ \exists\lambda\geq 0 : B-\lambda A\succeq 0. \] \end{lemma} The following theorem provides an exact SDP reformulation result for the linear ARO problem (P). \begin{theorem}[{\bf General QDRs and Exact SDP Reformulations}] \label{thm:1} Let $\theta \in [0, 1]$. Consider the linear ARO problem (P) with the parameterized quadratic decision rule and its associated semi-definite program (P-QDR). Then, problem (P) and the semi-definite program (P-QDR) are equivalent, in the sense that $(\vect{x},\vect{y}_0,W,Q_1,\ldots,Q_k)$ is a solution for (P) if and only if there exists $\vect{\lambda} \in \mathbb{R}^m_+$ such that $(\vect{x},\vect{y}_0,\vect{\lambda},W,Q_1,\ldots,Q_k)$ is a solution for (P-QDR). Moreover, $ \min{\text{\emph{(P)}}} = \min{\text{\emph{(P-QDR)}}}.
$ \end{theorem} \begin{proof} The constraint system of (P) \[ A(\vect{z})\vect{x} + B\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k\vect{z}\right)\leq\vect{d}(\vect{z}),\quad \forall\vect{z}\in\mathcal{Z} \] is equivalently rewritten as the following semi-infinite system of $m$ constraints: \begin{equation}\label{eq:mconstraintsA} (\vect{a}_i + A_i\vect{z})^T\vect{x} + \vect{b}_i^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \leq d_{0,i} + \vect{d}_i^T\vect{z},\;\forall\vect{z}\in\mathcal{Z}, \; i=1,2,\ldots, m. \end{equation} For each $i=1,2,\ldots, m$, we claim that the system \[ (\vect{a}_i + A_i\vect{z})^T\vect{x} + \vect{b}_i^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \leq d_{0,i} + \vect{d}_i^T\vect{z},\;\forall\vect{z}\in\mathcal{Z} \] is equivalent to the linear matrix inequality: \begin{equation}\label{eq:LMI} \exists \lambda_i \geq 0, \; \; \; \; \mymat{cc}{d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0-\lambda_i r^2 & \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W) \\ \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W)^T & \lambda_i I_l -(1-\theta)\displaystyle\sum_{j=1}^k{(\vect{b}_i)_j Q_j}}\succeq 0. \end{equation} Granting this, we obtain that $(\vect{x},\vect{y}_0,W,Q_j) \in \mathbb{R}^n \times \mathbb{R}^{k} \times \mathbb{R}^{k \times l} \times \mathbb{S}^l$, $j=1,\dots,k$, satisfies the system of constraints in \eqref{eq:mconstraintsA} if and only if there exists $\vect{\lambda} \in \mathbb{R}^m_+$ such that $(\vect{x},\vect{y}_0,\vect{\lambda},W,Q_j) \in \mathbb{R}^n \times \mathbb{R}^{k} \times \mathbb{R}^m \times \mathbb{R}^{k \times l} \times \mathbb{S}^l$ satisfies the semi-definite constraint system of (P-QDR).
As the objective functions of both problems (P) and (P-QDR) are the same, we see that problem (P) and the semi-definite program (P-QDR) are equivalent and $\min{\text{(P)}} = \min{\text{(P-QDR)}}$. Then, the conclusion of this theorem follows. We now turn to the proof of the claim. Fix $i \in \{1,\ldots,m\}$. Then, \begingroup \allowdisplaybreaks \begin{eqnarray*} & & (\vect{a}_i + A_i\vect{z})^T\vect{x} + \vect{b}_i^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \leq d_{0,i} + \vect{d}_i^T\vect{z},\;\forall\vect{z}\in\mathcal{Z} \\ & \iff & (\vect{a}_i + A_i\vect{z})^T\vect{x} + \vect{b}_i^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\mymat{c}{\vect{z}^TQ_1\vect{z} \\ \vdots \\ \vect{z}^T Q_k \vect{z}}\right) \leq d_{0,i} + \vect{d}_i^T\vect{z},\quad\forall\vect{z}\in\mathcal{Z} \\ & \iff & (d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0) + \left(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W\right)\vect{z}- \vect{z}^T\left((1-\theta)\displaystyle\sum_{j=1}^k{(\vect{b}_i)_j Q_j}\right)\vect{z}\geq 0, \forall\vect{z}\in\mathcal{Z} \end{eqnarray*} \endgroup which is, in turn, equivalent to the implication: \begin{align} \begin{split} r^2-\vect{z}^T\vect{z}\geq 0 \implies (d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0) + \left(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W\right)\vect{z} \\ -\vect{z}^T\left((1-\theta)\displaystyle\sum_{j=1}^k{(\vect{b}_i)_j Q_j}\right)\vect{z}\geq 0. 
\end{split} \end{align} Letting $\vect{u} = \mymat{c}{1\\\vect{z}}$ we can write the above implication as \[ \vect{u}^T P \vect{u}\geq 0 \implies \vect{u}^T R_i \vect{u}\geq 0, \] where \[ P = \mymat{cc}{r^2 & 0\\ 0 & -I_l},\quad R_i = \mymat{cc}{d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0 & \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W) \\ \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W)^T & -(1-\theta)\displaystyle\sum_{j=1}^k{(\vect{b}_i)_j Q_j}}. \] Clearly $P$ and $R_i$ are symmetric matrices. If we choose $\vect{u}_0 = \mymat{cc}{1 & \vect{0}}^T$ then $\vect{u}_0^T P \vect{u}_0 = r^2 > 0$ and so the $\mathcal{S}$-Lemma \cite{Ben-tal01} applies. Hence, the $i$-th constraint in \eqref{eq:mconstraintsA} is equivalent to the linear matrix inequality: \begingroup \allowdisplaybreaks \begin{align*} & \exists\lambda_i \geq 0 : R_i-\lambda_i P\succeq 0 \\ & \iff \lambda_i\geq 0, \mymat{cc}{d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0-\lambda_i r^2 & \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W) \\ \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W)^T & \lambda_i I_l -(1-\theta)\displaystyle\sum_{j=1}^k{(\vect{b}_i)_j Q_j}}\succeq 0. \end{align*} \endgroup Thus, the claim follows. \end{proof} \begin{remark}[{\bf Exact SDPs for affine and other known quadratic decision rules}] It is worth noting that Theorem \ref{thm:1} readily yields exact SDP reformulations for linear ARO problems with affine decision rules \cite{robustbook} by setting $\theta =1$, with homogeneous quadratic decision rules \cite{arxiv} by setting $\theta =0$, and with non-homogeneous quadratic decision rules \cite{thesis} by setting $\theta =\frac{1}{2}$. \end{remark} \medskip \noindent\textbf{Separable QDRs and SOCP Reformulations}.
We now show that, if we consider a restricted version of the quadratic decision rule (see Definition 2.1), then the ARO problem can be equivalently reformulated as a second order cone programming problem. Second order cone programming reformulations for classes of nonconvex quadratic optimization problems and robust optimization problems have been of great interest in recent years \cite{bental1,JL1}. This is because the second order cone programming method has proved to be a powerful scheme for solving various classes of practical optimization problems, and advanced commercial software is available to solve SOCPs. \begin{definition}[{\bf Separable Quadratic Decision Rule}]\label{def2} Let $\theta\in [0,1]$. The ARO problem $(P_0)$ is said to satisfy the parameterized separable quadratic decision rule whenever the mapping $\vect{y}(\cdot)$ is restricted to mappings of the form \[ \vect{y}(\vect{z}) = \theta(\vect{y}_0 + W\vect{z}) + (1-\theta) \mymat{c}{\vect{z}^TQ_1\vect{z} \\ \vect{z}^T Q_2\vect{z} \\ \vdots \\ \vect{z}^T Q_k \vect{z}}=\theta(\vect{y}_0 + W\vect{z}) + (1-\theta) \mymat{c}{\displaystyle\sum_{p=1}^l q_{p,1}z_p^2 \\ \displaystyle \sum_{p=1}^l q_{p,2}z_p^2 \\ \vdots \\ \displaystyle \sum_{p=1}^l q_{p,k}z_p^2 }, \] where $Q_j$, $j=1,\ldots,k$, are diagonal matrices whose diagonal elements are $q_{1,j},\ldots,q_{l,j}$.
\end{definition} We now consider the following affinely parameterized version of ARO problem $(P_0)$ with the separable quadratic decision rule: \begin{mini} {\substack{\under{\vect{x},\vect{y}_0},\\W,Q_j,j=1,\dots,k}}{\vect{c}^T\vect{x}}{}{\name{$P_s$}} \addConstraint{A(\vect{z})\vect{x} + B\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right)}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z},} \end{mini} where $\vect{z}^T \mathcal{Q}_k \vect{z} = \mymat{c}{\vect{z}^T Q_1 \vect{z} \\ \vdots \\ \vect{z}^T Q_k \vect{z}}$ and each $Q_j$, $j=1,\ldots,k$, is a diagonal matrix whose diagonal elements are $q_{1,j},\ldots,q_{l,j}$. Other assumptions on $(P_s)$ are the same as on $(P)$. To derive the SOCP reformulation, we first show that, using a linear transformation and the Schur complement, the constraints of $(P_s)$ (with the separable quadratic decision rule) can be characterized in terms of second order cone constraints. \begin{proposition}[{\bf Equivalent Second-order Cone Constraints}] \label{lemma:2.3} Let $\theta \in [0, 1]$; let $\vect{a}\in \mathbb{R}^n, \; A\in \mathbb{R}^{n\times l}, \; \vect{b}\in \mathbb{R}^k, \; \vect{d}\in\mathbb{R}^l, \; d_0\in\mathbb{R}$; let $\vect{x}\in\mathbb{R}^{n},\;\vect{y}_0\in\mathbb{R}^{k},\; W\in\mathbb{R}^{k\times l},\; Q_j\in\mathbb{S}_l,\; j=1,\dots,k$. Let $\mathcal{Z}$ be an ellipsoidal uncertainty set, defined by $\mathcal{Z} = \left\{\vect{z}\in \mathbb{R}^l : \norm{\vect{z}}^2\leq r^2\right\}$. Suppose that each $Q_j$, $j=1,\ldots,k$, is a diagonal matrix whose diagonal elements are $q_{1,j},\ldots,q_{l,j}$.
Then, the following systems are equivalent: \begin{enumerate} \item[{\rm (I)}] $ (\vect{a} + A\vect{z})^T\vect{x} + \vect{b}^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \leq d_0 + \vect{d}^T\vect{z},\quad\forall\vect{z}\in\mathcal{Z} $ \item[{\rm (II)}] There exist $\lambda \in \mathbb{R}_+$ and $s_p \in \mathbb{R}_+$, $p=1,\ldots,l$, such that \[ \left\{ \begin{array}{l} \displaystyle \sum_{p=1}^l s_p \le d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2, \\ \lambda -(1-\theta)\sigma_p \ge 0, \ \ \ p=1,\ldots,l, \\ \left\|\left(\left(\vect{d}-A^T\vect{x} -\theta W^T\vect{b}\right)_p,\,s_p-\lambda +(1-\theta)\sigma_p \right) \right\| \le s_p +\lambda -(1-\theta)\sigma_p,\ \ \ p=1,\ldots,l. \end{array} \right. \] {\rm Here, $\sigma_p=\sum_{j=1}^k b_j q_{p,j}$, $p=1,\ldots,l$ are the diagonal elements of $\sum_{j=1}^k{b_j Q_j}$}. \end{enumerate} \end{proposition} \begin{proof} Following the same line of arguments as in the proof of Theorem \ref{thm:1} we can prove, using the $\mathcal{S}$-Lemma, that {\rm (I)} is equivalent to the semi-definite inequality: \begin{equation}\label{eq:use1} \exists \, \lambda \ge 0, \mbox{ such that } \mymat{cc}{d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2 & \displaystyle\frac{1}{2}(\vect{d}^T-\vect{x}^T A-\theta\vect{b}^T W) \\ \displaystyle\frac{1}{2}(\vect{d}^T-\vect{x}^T A-\theta\vect{b}^T W)^T & \lambda I_l -(1-\theta)\displaystyle\sum_{j=1}^k{b_j Q_j}} \succeq 0. \end{equation} {\bf [${\rm (I)} \Rightarrow {\rm (II)}$]} We now show that \eqref{eq:use1} implies {\rm (II)}. Observe that \eqref{eq:use1} implies that, for each $p=1,\ldots,l$, the following $(2 \times 2)$ matrix \[ \mymat{cc}{d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2 & \displaystyle\frac{1}{2}\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p \\ \displaystyle\frac{1}{2}\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p & \lambda -(1-\theta)\sigma_p} \succeq 0.
\] So, $d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2 \ge 0$; for each $p=1,\ldots,l$, $\lambda -(1-\theta)\sigma_p \ge 0$, and \begin{equation}\label{eq:00} \left(d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2\right)\left(\lambda -(1-\theta)\sigma_p\right) \ge \left[\displaystyle\frac{1}{2}\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p\right]^2. \end{equation} Now, define the index set $L$ by \begin{equation}\label{def:L} L=\{p \in \{1,\ldots,l\}: \lambda -(1-\theta)\sigma_p> 0\}, \end{equation} and let \[ s_p=\left\{\begin{array}{ccc} 0 & \mbox{ if } & p \notin L, \\ \displaystyle \frac{\left[\displaystyle\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p\right]^2}{4(\lambda -(1-\theta)\sigma_p)} & \mbox{ if } & p \in L. \end{array} \right. \] Then it follows that, for all $p \notin L$, $ \lambda -(1-\theta)\sigma_p=0$ and \eqref{eq:00} gives us that \[ \displaystyle\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p =0. \] So, from the construction of $s_p$, we obtain that $s_p \ge 0$, $p=1,\ldots,l$, and \[ \left[\displaystyle\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p\right]^2 \le 4 s_p(\lambda -(1-\theta)\sigma_p), \ p=1,\ldots,l. \] Using the following well-known equivalence \begin{equation}\label{eq:trick} t^2 \le 4 \alpha \beta, \ \alpha,\beta \ge 0 \ \Leftrightarrow \ \|(t,\alpha-\beta)\| \le \alpha+\beta, \end{equation} we obtain that $d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2 \ge 0$, and for each $p=1,\ldots,l,$ $\lambda -(1-\theta)\sigma_p \ge 0$, and \[ \left\|\left(\left(\vect{d}-A^T\vect{x} -\theta W^T\vect{b}\right)_p,s_p-\lambda +(1-\theta)\sigma_p \right) \right \| \le s_p +\lambda -(1-\theta)\sigma_p. \] Let $M={\rm diag}(\lambda -(1-\theta)\sigma_1,\ldots,\lambda -(1-\theta)\sigma_l) \in \mathbb{R}^{l \times l}$ and $u=\vect{d}-A^T\vect{x} - \theta W^T\vect{b} \in \mathbb{R}^l$.
For the index set $L \subseteq \{1,\ldots,l\}$ defined as before, let $M_L=(M_{\alpha \beta})_{\alpha,\beta \in L}$ and $u_L=(u_\alpha)_{\alpha\in L}$. Then, \eqref{eq:use1} gives us that \begin{equation}\label{eq:99} \mymat{cc}{d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2 & \frac{1}{2}u_L^T \\ \frac{1}{2} u_L & M_L }\succeq 0. \end{equation} Note from the definition of $L$ that $M_L \succ 0$. The Schur complement together with \eqref{eq:99} implies that \[ \displaystyle (d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2) -\frac{1}{4} u_L^T M_L^{-1} u_L \ge 0. \] It follows from the definitions of $M_L$ and $u_L$ that \begin{eqnarray*} 0 &\le & (d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2) -\frac{1}{4} \sum_{p \in L} \frac{\left[\left(\vect{d}-A^T\vect{x} -\theta W^T\vect{b}\right)_p\right]^2}{\lambda -(1-\theta)\sigma_p} \\ &=& (d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2) - \sum_{p \in L} s_p \\ &=& (d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2) - \sum_{p=1}^l s_p, \end{eqnarray*} where the first equality follows from the definition of $s_p$, $p=1,\ldots,l$, and the second equality follows from the fact that $s_p=0$ for all $p \notin L$. So, {\rm (II)} holds. {\bf [${\rm (II)} \Rightarrow {\rm (I)}$]} Suppose that {\rm (II)} holds. Define the index set $L$ as in \eqref{def:L}. The last relation in {\rm (II)} shows that \[ \left[\displaystyle\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p\right]^2 \le 4 s_p \, \left(\lambda -(1-\theta)\sigma_p\right), \ p=1,\ldots,l. \] So, $u_p=\displaystyle\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p=0$ for all $p \notin L$, and for all $p \in L$ \[ s_p \ge \frac{\left[\displaystyle\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p\right]^2}{4(\lambda -(1-\theta)\sigma_p)}.
\] This together with the first relation in {\rm (II)} gives us that \[ d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2 \ge \sum_{p=1}^l s_p \ge \sum_{p \in L} s_p \ge \sum_{p \in L}\frac{\left[\displaystyle\left(\vect{d}-A^T\vect{x} - \theta W^T\vect{b}\right)_p\right]^2}{4(\lambda -(1-\theta)\sigma_p)}=\frac{1}{4} u_L^T M_L^{-1} u_L, \] where the second inequality follows by noting that $s_p \ge 0$ for all $p=1,\ldots,l$, and the equality follows from the definitions of $M_L$ and $u_L$. This shows that \eqref{eq:99} holds. As for all $p \notin L$, $u_p=0$ and $\lambda -(1-\theta)\sigma_p=0$, it follows that \begin{equation} \mymat{cc}{d_0-\vect{a}^T\vect{x}-\theta\vect{b}^T\vect{y}_0-\lambda r^2 & \frac{1}{2}u^T \\ \frac{1}{2} u & M }\succeq 0, \end{equation} and so, \eqref{eq:use1} holds. Hence, {\rm (I)} follows. \end{proof} We now associate with $(P_s)$ the following second order cone program: {\small \begin{eqnarray*} (P_s\mbox{-QDR}) & \displaystyle \min_{\substack{\under{\vect{x}, \, \vect{y}_0,} \, W, \, \lambda_i, \, s_{p,i}, \, \sigma_{p,i}}} & \vect{c}^T\vect{x} \\ & \mbox{s.t.} & \lambda_i\geq 0, \, s_{p,i} \ge 0, \; i=1,\dots,m, \, p=1,\ldots,l, \\ & & \displaystyle \sum_{p=1}^l s_{p,i} \le d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0-\lambda_i r^2, \; i=1,\dots,m, \\ & & \lambda_i -(1-\theta)\sigma_{p,i} \ge 0, \ \ \ i=1,\ldots,m, \, p=1,\ldots,l,\\ & & \left\|\left(\left(\vect{d}_i-A_i^T\vect{x} -\theta W^T\vect{b}_i\right)_p,s_{p,i}-\lambda_i +(1-\theta)\sigma_{p,i} \right) \right \| \le s_{p,i} +\lambda_i -(1-\theta)\sigma_{p,i},\\ & & \ \ \ \ \ \ \ \ \ \ \ \ i=1,\ldots,m, \ p=1,\ldots,l, \end{eqnarray*}} where $\vect{x}\in\mathbb{R}^{n},\;\vect{y}_0\in\mathbb{R}^{k},\;W\in\mathbb{R}^{k\times l}, \; \lambda_i \in \mathbb{R}, \; s_{p,i} \in \mathbb{R},\; \sigma_{p,i}\in\mathbb{R},\; p=1,\dots,l, \, i=1,\ldots,m$.
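The passage from the $2\times 2$ positive semi-definiteness conditions to the norm constraints above rests on the scalar equivalence \eqref{eq:trick}, $t^2 \le 4\alpha\beta$ with $\alpha,\beta\ge 0$ if and only if $\|(t,\alpha-\beta)\|\le\alpha+\beta$. A quick brute-force numerical check of this equivalence over a grid (a sketch, not a proof; the grid points are exactly representable quarter-integers, so no floating-point tolerance is needed):

```python
import numpy as np

def parabolic_form(t, alpha, beta):
    """t^2 <= 4*alpha*beta together with alpha, beta >= 0."""
    return alpha >= 0 and beta >= 0 and t * t <= 4 * alpha * beta

def soc_form(t, alpha, beta):
    """||(t, alpha - beta)|| <= alpha + beta, written with squares
    (equivalent to the norm form when alpha + beta >= 0)."""
    return t * t + (alpha - beta) ** 2 <= (alpha + beta) ** 2

# exhaustive check on a grid of quarter-integer values
for t in np.linspace(-2.0, 2.0, 17):
    for alpha in np.linspace(0.0, 2.0, 9):
        for beta in np.linspace(0.0, 2.0, 9):
            assert parabolic_form(t, alpha, beta) == soc_form(t, alpha, beta)
```

The identity behind the check is $(\alpha+\beta)^2-(\alpha-\beta)^2=4\alpha\beta$, which is exactly how each rotated-cone condition in $(P_s\mbox{-QDR})$ encodes a $2\times 2$ PSD block.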
Using Proposition \ref{lemma:2.3}, we now show that the problem $(P_s)$ admits an exact SOCP reformulation in the sense that the objective values of $(P_s)$ and $(P_s\mbox{-QDR})$ are equal and their constraint systems are equivalent. \begin{theorem}[{\bf Separable QDRs and Exact SOCP Reformulations}]\label{thm:2} Let $\theta \in [0, 1]$. Consider the linear ARO problem $(P_s)$ with the parameterized separable quadratic decision rule and its associated second order cone program $(P_s\mbox{-QDR})$. Then, problem $(P_s)$ and the second order cone program $(P_s\mbox{-QDR})$ are equivalent, in the sense that $(\vect{x},\vect{y}_0,W,Q_1,\ldots,Q_k)$ is a solution for $(P_s)$ with $Q_j={\rm diag}(q_{1,j},\ldots,q_{l,j})$, $j=1,\ldots,k$, if and only if there exist $\lambda_i \ge 0$, $s_{p,i} \ge 0$ and $\sigma_{p,i}$, $p=1,\ldots,l$, $i=1,\ldots,m$, such that $(\vect{x},\vect{y}_0, W, \lambda_i, \, s_{p,i}, \, \sigma_{p,i})$ is a solution for $(P_s\mbox{-QDR})$ with $\sigma_{p,i}=\sum_{j=1}^k (\vect{b_i})_j q_{p,j}$, $p=1,\ldots,l$, $i=1,\ldots,m$. Moreover, $\min (P_s) =\min (P_s\mbox{-QDR})$. \end{theorem} \begin{proof} The constraint system of $(P_s)$ \[ A(\vect{z})\vect{x} + B\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k\vect{z}\right)\leq\vect{d}(\vect{z}),\quad \forall\vect{z}\in\mathcal{Z} \] can be equivalently rewritten as the following system of $m$ constraints: \[ (\vect{a}_i + A_i\vect{z})^T\vect{x} + \vect{b}_i^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \leq d_{0,i} + \vect{d}_i^T\vect{z},\;\forall\vect{z}\in\mathcal{Z}, \; i=1,2,\ldots, m.
\] It now follows from Proposition \ref{lemma:2.3} that, for each $i=1,2,\ldots, m$, the system \[ (\vect{a}_i + A_i\vect{z})^T\vect{x} + \vect{b}_i^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \leq d_{0,i} + \vect{d}_i^T\vect{z},\;\forall\vect{z}\in\mathcal{Z} \] {\small is equivalent to \begin{eqnarray*} & &\exists \lambda_i \geq 0, \; s_{p,i} \ge 0, \; p=1,\ldots,l, \mbox{ such that} \\ & & \left\{ \begin{array}{l} \displaystyle \sum_{p=1}^l s_{p,i} \le d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0-\lambda_i r^2, \\ \lambda_i -(1-\theta)\sigma_{p,i} \ge 0, \ \ \ p=1,\ldots,l,\\ \left\|\left(\left(\vect{d}_i-A_i^T\vect{x} -\theta W^T\vect{b}_i\right)_p,s_{p,i}-\lambda_i +(1-\theta)\sigma_{p,i} \right) \right \| \le s_{p,i} +\lambda_i -(1-\theta)\sigma_{p,i}, \; p=1,\ldots,l, \end{array} \right. \end{eqnarray*} where,} for each $i=1,\ldots,m$, $\sigma_{p,i}$, $p=1,\ldots,l$, are the diagonal elements of $\sum_{j=1}^k{(\vect{b_i})_j Q_j}$, that is, $\sigma_{p,i}=\sum_{j=1}^k (\vect{b_i})_j q_{p,j}$, $p=1,\ldots,l$, $i=1,\ldots,m$.
As the objective functions of both problems $(P_s)$ and $(P_s\mbox{-QDR})$ are the same, we obtain that $(\vect{x},\vect{y}_0,W,Q_1,\ldots,Q_k)$ is a solution for $(P_s)$ if and only if there exist $\lambda_i \ge 0$, $s_{p,i} \ge 0$ and $\sigma_{p,i}$, $p=1,\ldots,l$, $i=1,\ldots,m$, such that $(\vect{x},\vect{y}_0,W, \lambda_i, \, s_{p,i}, \, \sigma_{p,i})$ is a solution for $(P_s\mbox{-QDR})$ with $\sigma_{p,i}=\sum_{j=1}^k (\vect{b_i})_j q_{p,j}$, $p=1,\ldots,l$, $i=1,\ldots,m$, and $\min (P_s) =\min (P_s\mbox{-QDR}).$ \end{proof} \begin{remark} Note that, in the SDP formulation, the decision variable is $(\vect{x},\vect{y}_0,W,Q_1,\ldots,Q_k)$ which is of dimension $n+k+k l + k \frac{l(l+1)}{2}$; while in the SOCP formulation, the decision variable is $(\vect{x},\vect{y}_0, W, \lambda_i, \, s_{p,i}, \, \sigma_{p,i})$ which is of dimension $n+k+k l + m + 2ml$. \end{remark} \section{ARO Problems with Objective and Constraint Adjustable Variables} In this section we establish exact conic program reformulations for the following affinely parameterized version of ARO problem $(P_0)$ with adjustable variables also in the objective function: \begin{mini} {\substack{\under{\vect{x},\vect{y}(\cdot)}}}{\vect{c}^T\vect{x}+ \max_{\vect{z}\in\mathcal{Z}} \{\vect{w}^T\vect{y}(\vect{z})\}}{}{\name{$\overline{P_0}$}} \addConstraint{A(\vect{z})\vect{x} + B\vect{y}(\vect{z})}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z}} \end{mini} where $\mathcal{Z} = \left\{\vect{z}\in\mathbb{R}^l : \norm{\vect{z}}^2 \leq r^2\right\}$ is the ellipsoidal uncertainty set; $\vect{c}\in\mathbb{R}^n$; $\vect{w} \in \mathbb{R}^k$; $B=(\vect{b}_1, \ldots, \vect{b}_m)^T$, $\vect{b}_i\in\mathbb{R}^k$; $A(\vect{z}) = (\vect{a}_1+A_1\vect{z}, \dots, \vect{a}_m + A_m\vect{z})^T, \vect{a}_i\in\mathbb{R}^{n}, A_i\in\mathbb{R}^{n\times l}$ and $\vect{d}(\vect{z}) = (d_{0,1}+\vect{d}_1^T\vect{z},\dots,d_{0,m} + \vect{d}_m^T\vect{z})^T,\; d_{0,i}\in\mathbb{R},\; \vect{d}_i\in\mathbb{R}^{l}$.
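For the affine part of the rule ($\theta=1$), the inner maximum in the objective of $(\overline{P_0})$ admits the closed form $\max_{\|\vect{z}\|\le r}\vect{w}^T(\vect{y}_0+W\vect{z})=\vect{w}^T\vect{y}_0+r\|W^T\vect{w}\|$, which is precisely the value that the epigraph variable $\tau$ introduced below must dominate. A small numpy sketch (toy data, hypothetical names) confirming this closed form by sampling the boundary sphere of $\mathcal{Z}$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, l, r = 3, 4, 2.0   # hypothetical sizes and radius
w = rng.standard_normal(k)
y0 = rng.standard_normal(k)
W = rng.standard_normal((k, l))

g = W.T @ w                              # gradient of z -> w^T (y0 + W z)
tau = w @ y0 + r * np.linalg.norm(g)     # closed-form worst-case objective term
z_star = r * g / np.linalg.norm(g)       # maximizer on the boundary of Z

# the closed form is attained at z_star ...
assert np.isclose(w @ (y0 + W @ z_star), tau)

# ... and no sampled z in Z does better
Z = rng.standard_normal((5000, l))
Z = r * Z / np.linalg.norm(Z, axis=1, keepdims=True)
assert max(w @ (y0 + W @ z) for z in Z) <= tau + 1e-9
```

For general $\theta$ the quadratic term makes the inner maximum nontrivial, which is why the epigraph constraint is handled by the same $\mathcal{S}$-Lemma machinery as the $m$ ordinary constraints.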
The problem $(\overline{P_0})$ with the quadratic decision rule as in Definition 2.1 takes the form: \begin{mini} {\substack{\under{\vect{x},\vect{y}_0},\\W,Q_j,j=1,\dots,k}}{\vect{c}^T\vect{x}+ \max_{\vect{z}\in\mathcal{Z}} \{\vect{w}^T(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z})\}}{}{(\overline{P})} \addConstraint{A(\vect{z})\vect{x} + B\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right)}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z}.} \end{mini} We associate with $(\overline{P})$ the following semi-definite program \begingroup \begin{mini*} {\substack{\under{\vect{x},\vect{y}_0, \vect{\lambda}, \tau},\\W,Q_j,j=1,\dots,k}}{\vect{c}^T\vect{x} + \tau}{}{({\overline{P}\mbox{-QDR}})} \addConstraint{ \lambda_i\geq 0, \; i=1,\dots,m+1}, \mymat{ccc}{ P_1 & & \\ & \ddots \\ & & P_{m+1} }\succeq 0, \end{mini*} where $\vect{x}\in\mathbb{R}^{n},\;\vect{y}_0\in\mathbb{R}^{k},\;\tau\in\mathbb{R},\; W\in\mathbb{R}^{k\times l},\; Q_j\in\mathbb{S}_l,j=1,\dots,k$ and \[ P_i = \left\{\begin{array}{cl} \mymat{cc}{d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0-\lambda_i r^2 & \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W) \\ \displaystyle\frac{1}{2}(\vect{d}_i^T-\vect{x}^T A_i-\theta\vect{b}_i^T W)^T & \lambda_i I_l -(1-\theta)\displaystyle\sum_{j=1}^k{(\vect{b}_i)_j Q_j}} & \quad i=1,\ldots,m, \\ \mymat{cc}{\tau-\theta\vect{w}^T\vect{y}_0-\lambda_{m+1} r^2 & \displaystyle\frac{1}{2}(-\theta\vect{w}^T W) \\ \displaystyle\frac{1}{2}(-\theta\vect{w}^T W)^T & \lambda_{m+1} I_l -(1-\theta)\displaystyle\sum_{j=1}^k{w_j Q_j}} & \quad i=m+1. \end{array} \right. \] \endgroup \begin{corollary} Let $\theta \in [0, 1]$. Consider the linear ARO problem $(\overline{P})$ with the parameterized quadratic decision rule and its associated semi-definite program $(\overline{P}\mbox{-QDR})$.
Then, problem $(\overline{P})$ and the semi-definite program $(\overline{P}\mbox{-QDR})$ are equivalent, in the sense that $(\vect{x},\vect{y}_0,W,Q_1,\ldots,Q_k)$ is a solution for $(\overline{P})$ if and only if there exist $\vect{\lambda} \in \mathbb{R}^{m+1}_+$ and $\tau\in\mathbb{R}$ such that $(\vect{x},\vect{y}_0,\vect{\lambda},\tau,W,Q_1,\ldots,Q_k)$ is a solution for $(\overline{P}\mbox{-QDR})$. Moreover, $\min (\overline{P}) = \min (\overline{P}\mbox{-QDR})$. \end{corollary} \begin{proof} The problem $(\overline{P})$ can be equivalently rewritten as \begin{mini} {\substack{\under{\vect{x},\vect{y}_0},\\W,Q_j,j=1,\dots,k, \tau}}{\vect{c}^T\vect{x}+\tau}{}{} \addConstraint{A(\vect{z})\vect{x} + B\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right)}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z},} \addConstraint{ \vect{w}^T(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z}) \le \tau, \quad }{\forall \vect{z} \in \mathcal{Z}.} \end{mini} The constraints of the above problem can equivalently be rewritten as the following system of $m+1$ constraints: \begin{equation}\label{eq:mconstraints} (\vect{a}_i + A_i\vect{z})^T\vect{x} + \vect{b}_i^T\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \leq d_{0,i} + \vect{d}_i^T\vect{z},\;\forall\vect{z}\in\mathcal{Z}, \; i=1,2,\ldots, m \end{equation} and \[ \vect{w}^T \left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right) \le \tau,\;\forall\vect{z}\in\mathcal{Z}. \] So, the conclusion follows by the same line of arguments as in Theorem \ref{thm:1}.
\end{proof} Now, consider the following ARO problem with the parameterized separable quadratic decision rule as in Definition \ref{def2}: \begin{mini} {\substack{\under{\vect{x},\vect{y}_0} \\ W, Q_j, j=1,\dots,k}}{\vect{c}^T\vect{x}+ \max_{\vect{z}\in\mathcal{Z}} \{\vect{w}^T(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z})\}}{}{(\overline{P}_s)} \addConstraint{A(\vect{z})\vect{x} + B\left(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T \mathcal{Q}_k \vect{z}\right)}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z},} \end{mini} where the assumptions on $(\overline{P}_s)$ are the same as those on $(P_s)$. We associate with $(\overline{P}_s)$ the following second order cone program: {\small \begin{eqnarray*} ({\overline{P}_s\mbox{-QDR}}) & \displaystyle \min_{\substack{\under{\vect{x}, \, \vect{y}_0,} \, W, \, \tau, \\ \, \lambda_i, \, s_{p,i}, \, \sigma_{p,i}}} & \vect{c}^T\vect{x} +\tau \\ & \mbox{s.t.} & \lambda_i\geq 0, \, s_{p,i} \ge 0, \; i=1,\dots,m+1, \, p=1,\ldots,l, \\ & & \displaystyle \sum_{p=1}^l s_{p,i} \le d_{0,i}-\vect{a}_i^T\vect{x}-\theta\vect{b}_i^T\vect{y}_0-\lambda_i r^2, \; i=1,\dots,m, \\ & & \sum_{p=1}^l s_{p,m+1} \le \tau-\theta\vect{w}^T\vect{y}_0-\lambda_{m+1} r^2, \\ & & \lambda_i -(1-\theta)\sigma_{p,i} \ge 0, \ \ \ i=1,\ldots,m+1, \, p=1,\ldots,l,\\ & & \left\|\left(\left(\vect{d}_i-A_i^T\vect{x} -\theta W^T\vect{b}_i\right)_p,s_{p,i}-\lambda_i +(1-\theta)\sigma_{p,i} \right) \right \| \le s_{p,i} +\lambda_i -(1-\theta)\sigma_{p,i}, \\ & & \ \ \ \ \ \ \ \ \ \ \ \ i=1,\ldots,m, \ p=1,\ldots,l, \\ & & \left\|\left((-\theta W^T\vect{w})_p,s_{p,m+1}-\lambda_{m+1} +(1-\theta)\sigma_{p,m+1} \right) \right \| \le s_{p,m+1} +\lambda_{m+1} -(1-\theta)\sigma_{p,m+1}, \\ & & \ \ \ \ \ \ \ \ \ \ \ \ p=1,\ldots,l. \end{eqnarray*}} \begin{corollary} Let $\theta \in [0, 1]$.
Consider the linear ARO problem $(\overline{P}_s)$ with the parameterized separable quadratic decision rule and its associated second order cone program $(\overline{P}_s\mbox{-QDR})$. Then, problem $(\overline{P}_s)$ and the second order cone program $(\overline{P}_s\mbox{-QDR})$ are equivalent, in the sense that $(\vect{x},\vect{y}_0,W,Q_1,\ldots,Q_k)$ is a solution for $(\overline{P}_s)$ with $Q_j={\rm diag}(q_{1,j},\ldots,q_{l,j})$, $j=1,\ldots,k$, if and only if there exist $\tau\in\mathbb{R}$ and $\lambda_i, \, s_{p,i}, \, \sigma_{p,i} \ge 0$, $p=1,\ldots,l$, $i=1,\ldots,m+1$, such that $(\vect{x},\vect{y}_0, W, \tau, \, \lambda_i, \, s_{p,i}, \, \sigma_{p,i})$ is a solution for $(\overline{P}_s\mbox{-QDR})$ with $\sigma_{p,i}=\sum_{j=1}^k (\vect{b}_i)_j q_{p,j}$, $p=1,\ldots,l$, $i=1,\ldots,m$, and $\sigma_{p,m+1}=\sum_{j=1}^k w_j q_{p,j}$, $p=1,\ldots,l$. Moreover, $ \min (\overline{P}_s) = \min (\overline{P}_s\mbox{-QDR}). $ \end{corollary} \begin{proof} As we have seen in Corollary 2.8, the problem $(\overline{P}_s)$ can be equivalently rewritten as \begin{mini} {\substack{\under{\vect{x},\vect{y}_0, W, \tau} \\ Q_j, j=1,\dots,k}}{\vect{c}^T\vect{x} + \tau }{}{} \addConstraint{A(\vect{z})\vect{x} + B(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z})}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z},} \addConstraint{ \vect{w}^T(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z}) \le \tau, \quad }{\forall \vect{z} \in \mathcal{Z}.} \end{mini} Now, the conclusion follows from Proposition \ref{lemma:2.3}. \end{proof} \begin{remark} We note that our exact SDP (resp.\ SOCP) reformulation continues to hold in the general case where the cost vector $\vect{c}$ is also uncertain and belongs to the norm uncertainty set $\mathcal{U}:=\{\vect{c}: \|\vect{c}-\vect{c}_0\| \le \rho\}$ for some $\vect{c}_0 \in \mathbb{R}^n$ and $\rho \ge 0$. Here $\|\cdot \|$ denotes a norm on $\mathbb{R}^n$.
Indeed, in this case, problem $(\overline{P})$ becomes \begin{mini} {\substack{\under{\vect{x},\vect{y}_0} \\ W,Q_j,j=1,\dots,k}}{ \max_{\vect{c} \in \mathcal{U}}\vect{c}^T\vect{x}+ \max_{\vect{z}\in\mathcal{Z}} \{\vect{w}^T(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z})\}}{}{} \addConstraint{A(\vect{z})\vect{x} + B(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z})}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z},} \end{mini} which, since $\max_{\vect{c} \in \mathcal{U}}\vect{c}^T\vect{x} = \vect{c}_0^T\vect{x}+\rho\|\vect{x}\|_*$ with $\|\cdot\|_*$ the dual norm of $\|\cdot\|$, can be further rewritten as \begin{mini} {\substack{\under{\vect{x},\vect{y}_0, \tau_1,\tau_2}\\ W, Q_j, j=1,\dots,k}}{\tau_1+\tau_2 }{}{} \addConstraint{A(\vect{z})\vect{x} + B(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z})}{\leq\vect{d}(\vect{z}),\quad}{\forall\vect{z}\in\mathcal{Z}} \addConstraint{ \vect{w}^T(\theta(\vect{y}_0 + W\vect{z}) + (1-\theta)\vect{z}^T\mathcal{Q}_k \vect{z}) \le \tau_1, \quad }{\forall \vect{z} \in \mathcal{Z}} \addConstraint{ \vect{c}_0^T\vect{x} +\rho \|\vect{x}\|_* \le \tau_2. }{ } \end{mini} Thus, the conclusion follows by employing the same line of arguments as in the proofs of the preceding two corollaries. \end{remark} \section{Lot-Sizing Problem: Worst-case \& Uncertainty-Realisation Comparisons} In the lot-sizing problem on a network, we consider $N$ stores, for which stock allocations must be determined to fulfill the demand at each store. Stock can be delivered at the beginning of the day and stored, or transported from another store at a later point in time. Let $x_i$ denote the quantity of stock to initially deliver to store $i$, with unit storage cost $c_i$. Each store can hold up to $\Gamma$ units of stock at any time. Let $y_{ij}$ denote the quantity of stock to transport from store $i$ to store $j$, with unit transportation cost $t_{ij}$. Note that $t_{ii} = 0$ and that $t_{ij}$ is not necessarily equal to $t_{ji}$.
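To make this setup concrete, the following Python sketch generates one random problem instance in the spirit of the experiments reported below: storage and transportation costs drawn uniformly on $[0,1000]$ with $t_{ii}=0$, and a demand vector lying in the ball of squared radius $\Gamma^2/2$. The function name and the restriction to nonnegative demand components are illustrative choices, not part of the original experimental code.

```python
import numpy as np

def random_instance(N, gamma, seed=None):
    """Illustrative sketch: one random lot-sizing instance.

    Storage costs c_i and transportation costs t_ij are uniform on
    [0, 1000] with t_ii = 0.  The demand vector d is drawn from the
    nonnegative part of the ball ||z||^2 <= gamma^2 / 2 (nonnegativity
    is our own choice, for interpretability of demands).
    """
    rng = np.random.default_rng(seed)
    c = rng.uniform(0.0, 1000.0, size=N)            # unit storage costs
    t = rng.uniform(0.0, 1000.0, size=(N, N))       # unit transport costs
    np.fill_diagonal(t, 0.0)                        # t_ii = 0
    direction = np.abs(rng.standard_normal(N))      # nonnegative direction
    direction /= np.linalg.norm(direction)
    radius = (gamma / np.sqrt(2.0)) * rng.uniform() # stay inside the ball
    d = radius * direction
    return c, t, d
```

Any instance produced this way satisfies $\vect{d}\in\mathcal{Z}$ by construction, since $\|\vect{d}\| \le \Gamma/\sqrt{2}$.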
In general, the demand at store $i$, denoted $z_i$, is uncertain at the beginning of the day and is only known to reside in some uncertainty set $\mathcal{Z}$. Hence, we formulate the problem as a two-stage adjustable robust problem by allowing the transportation decisions $y_{ij}$ to become wait-and-see variables. That is, an initial stock delivery $\vect{x}$ is sent to all stores at the beginning of the day, and once the demand $\vect{z}$ is revealed, the transportation decisions $y_{ij}(\vect{z}),\; i,j=1,\dots,N$, are implemented to fulfill the demand at each store. Since we wish to minimize the total cost, $\sum_{i=1}^{N}{c_ix_i} + \sum_{i,j=1}^{N}{t_{ij}y_{ij}(\vect{z})}$, this gives the following ARO formulation: \begin{mini*} {\substack{\under{\vect{x}\in \mathcal{X}\subset\mathbb{R}^N,}\\\under{y_{ij}:\mathcal{Z}\subseteq\mathbb{R}^N\rightarrow\mathbb{R},}\\i,j=1,\dots,N}}{\sum_{i=1}^{N}{c_i x_i} + \max_{\under{\vect{z}}\in\mathcal{Z}}{\left\{\sum_{i,j=1}^{N}{t_{ij}y_{ij}(\vect{z})}\right\}}}{}{\text{(LS)}} \addConstraint{x_i+\sum_{j=1}^{N}{y_{ji}(\vect{z})}-\sum_{j=1}^{N}{y_{ij}(\vect{z})}}{\geq z_i,\quad}{\forall \vect{z}\in\mathcal{Z},\quad i=1,\dots,N,} \addConstraint{y_{ij}(\vect{z})}{\geq 0,\quad}{\forall \vect{z}\in\mathcal{Z},\quad i,j=1,\dots,N,} \end{mini*} where the $x_i$ are the here-and-now decisions, with $\vect{x}\in\mathcal{X} = \{\vect{x}\in\mathbb{R}^{N} : 0\leq x_i\leq \Gamma,\; i=1,\dots,N\}$, the $y_{ij}$ are the wait-and-see variables, and the uncertainty set is $\mathcal{Z} = \{\vect{z}\in\mathbb{R}^{N} : \norm{\vect{z}}^2\leq\frac{\Gamma^2}{2}\}$ (for further details, see \cite{Zhen-Motzkin-18}). We wish to compare the solution methods of direct ADR substitution, QDR via SDP as in Corollary 3.1, and separable QDR via SOCP as in Corollary 3.2.
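Under the ball uncertainty set above, substituting an affine rule $y(\vect{z})=\vect{y}_0+W\vect{z}$ collapses each semi-infinite constraint to a single conic constraint, because the worst case of a linear term $\vect{g}^T\vect{z}$ over $\|\vect{z}\|\le r$ is $r\|\vect{g}\|_2$ by the Cauchy--Schwarz inequality (in the lot-sizing model, $r=\Gamma/\sqrt{2}$). The following Python snippet is a minimal numerical sanity check of that identity; it is illustrative only and not part of the experimental code.

```python
import numpy as np

# Verify that max over ||z|| <= r of g^T z equals r * ||g||_2,
# with g = W^T b playing the role of the decision-rule term b^T W z.
rng = np.random.default_rng(1)
N, r = 5, 3.0
b = rng.standard_normal(N)
W = rng.standard_normal((N, N))
g = W.T @ b                                     # gradient of z -> b^T W z

closed_form = r * np.linalg.norm(g)             # Cauchy-Schwarz bound

# Monte Carlo points on the sphere of radius r never exceed the bound ...
zs = rng.standard_normal((20000, N))
zs = r * zs / np.linalg.norm(zs, axis=1, keepdims=True)
sampled_max = (zs @ g).max()
assert sampled_max <= closed_form + 1e-9

# ... and the analytic maximiser z* = r g / ||g|| attains it exactly.
z_star = r * g / np.linalg.norm(g)
assert np.isclose(z_star @ g, closed_form)
```

The same Cauchy--Schwarz step is what produces the norm terms in the SOCP reformulations of Section 3.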
We will first compare their solutions after realisation of the uncertain demand $\vect{d}$, by comparing the realised cost to the true solution given by \begin{mini*} {\under{\vect{x}, y_{ij}}}{\sum_{i=1}^{N}{c_i x_i} + \sum_{i,j=1}^{N}{t_{ij} y_{ij}}}{}{\text{(TD)}\quad} \addConstraint{x_i + \sum_{j=1}^{N}{y_{ji}} - \sum_{j=1}^{N}{y_{ij}} \geq d_i,\quad i=1,\dots,N} \addConstraint{0 \leq x_i \leq \Gamma,\quad i=1,\dots,N} \addConstraint{y_{ij}\geq 0,\quad i,j=1,\dots,N.} \end{mini*} We will then compare our solution methods to the worst-case solution, given by substituting into (TD) the component-wise worst-case value of $d_i$ over $\mathcal{Z}$, namely $d_i = \frac{\Gamma}{\sqrt{2}}$: \begin{mini*} {\under{\vect{x}, y_{ij}}}{\sum_{i=1}^{N}{c_i x_i} + \sum_{i,j=1}^{N}{t_{ij} y_{ij}}}{}{\text{(WC)}\quad} \addConstraint{x_i + \sum_{j=1}^{N}{y_{ji}} - \sum_{j=1}^{N}{y_{ij}} \geq \frac{\Gamma}{\sqrt{2}},\quad i=1,\dots,N} \addConstraint{0 \leq x_i \leq \Gamma,\quad i=1,\dots,N} \addConstraint{y_{ij}\geq 0,\quad i,j=1,\dots,N.} \end{mini*} Note that the direct ADR substitution into (LS) is solved via an SOCP (see, e.g., \cite[Theorem 3.1]{7}). We create 50 random instances of the lot-sizing problem by generating storage and transportation costs from the uniform distribution on $[0, 1000]$. We produce a random demand $\vect{d}\in\mathcal{Z}$ for each instance. We then compare the methods by calculating the following \emph{percentage difference} metrics: \begin{align*} & m_1 = 100\cdot\frac{v-t}{v}, && m_2 = 100\cdot\frac{w-v}{w}, \end{align*} where $v$ is the optimal (realised/worst-case) value produced by solving (LS) via the given method, $t$ is the optimal value of (TD), $w$ is the optimal value of (WC), $m_1$ is the comparison metric against (TD), and $m_2$ is the comparison metric against (WC). We also compute the average time taken to solve a single problem instance for each of these methods.
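As a concrete illustration of the benchmark, the deterministic problem (TD) for a realised demand is a small linear program. The following Python sketch (illustrative helper names; SciPy assumed available, whereas the reported experiments used MATLAB with CVX) solves (TD) and computes the percentage-difference metric defined above.

```python
import numpy as np
from scipy.optimize import linprog

def solve_td(c, t, d, gamma):
    """Solve the deterministic lot-sizing LP (TD) for a realised demand d.

    Variables are ordered as [x_1, ..., x_N, y_11, y_12, ..., y_NN], with
    y_ij at index N + i*N + j.  The demand constraints are written in
    linprog's A_ub @ v <= b_ub form by negating both sides.
    """
    N = len(c)
    cost = np.concatenate([np.asarray(c, float), np.asarray(t, float).ravel()])
    A = np.zeros((N, N + N * N))
    for i in range(N):
        A[i, i] = -1.0                      # -x_i
        for j in range(N):
            A[i, N + j * N + i] -= 1.0      # -y_ji (inflow to store i)
            A[i, N + i * N + j] += 1.0      # +y_ij (outflow from store i)
    bounds = [(0.0, gamma)] * N + [(0.0, None)] * (N * N)
    res = linprog(cost, A_ub=A, b_ub=-np.asarray(d, float),
                  bounds=bounds, method="highs")
    return res.fun

def pct_diff(big, small):
    """In the notation above: m_1 = pct_diff(v, t), m_2 = pct_diff(w, v)."""
    return 100.0 * (big - small) / big
```

For example, with two stores, $c=(1,10)$, $t_{12}=t_{21}=1$, unit demands, and $\Gamma=5$, the optimum is to deliver two units to store 1 and transport one unit to store 2, at total cost $3$.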
Note that the lower the calculated $m_1$ and the higher the calculated $m_2$, the better the performance of the method. All computations were performed on a 3.2 GHz Intel(R) Core(TM) i7-8700 with 16 GB of RAM, running MATLAB R2019b. All problem instances, being conic programs, were solved using the CVX toolbox (see, e.g., \cite{CVX-sofware14}). \begin{table}[H] \centering \captionsetup{font = footnotesize} \begin{tabu}{| l |c | c | c |} \hline \textbf{True Solution} & ADR via SOCP \cite{7} & QDR via SDP & Separable QDR via SOCP\Tstrut\Bstrut\\ \hline \hline $N = 2$: \% Diff. & 67.7072 & 64.7297 & 64.9984 \Tstrut\Bstrut\\ \hline \hline $N = 3$: \% Diff. & 67.5402 & 63.9634 & 64.4285\Tstrut\Bstrut\\ \hline \hline $N = 4$: \% Diff. & 70.6617 & 65.9177 & 66.6341\Tstrut\Bstrut\\ \hline \hline $N = 5$: \% Diff. & 68.8926 & 63.6860 & 64.5545\Tstrut\Bstrut\\ \hline \hline $N = 8$: \% Diff. & 71.8619 & 64.1862 & 65.3614\Tstrut\Bstrut\\ \hline \end{tabu} \caption{ Results for the true-solution comparison. \% Diff.\ represents the average percentage difference between the solution to (TD) and the realised solution for each method ($m_1$). Time is measured in seconds. } \end{table} \begin{figure}[hbt!] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth]{rand4} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth]{rand8} \end{minipage} \end{figure} \clearpage \begin{table}[H] \centering \captionsetup{font = footnotesize} \begin{tabu}{| l |c | c | c |} \hline \textbf{Worst-Case} & ADR via SOCP \cite{7} & QDR via SDP & Separable QDR via SOCP\Tstrut\Bstrut\\ \hline \hline $N = 2$: \% Diff. & 17.2541 & 20.7104 & 20.6353 \Tstrut\Bstrut\\ \hline \hline $N = 3$: \% Diff. & 30.3298 & 36.4901 & 35.6099\Tstrut\Bstrut\\ \hline \hline $N = 4$: \% Diff. & 42.1128 & 49.4073 & 48.2685\Tstrut\Bstrut\\ \hline \hline $N = 5$: \% Diff.
& 46.9065 & 55.1508 & 53.9234\Tstrut\Bstrut\\ \hline \hline $N = 8$: \% Diff. & 63.2694 & 71.2344 & 70.2952\Tstrut\Bstrut\\ \hline \end{tabu} \caption{ Results for the worst-case comparison with random costs. \% Diff.\ represents the average percentage difference between the worst-case solution of (WC) and the worst-case solution for each method ($m_2$). The average time is not presented, as it was already reported in Table 1. } \end{table} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth]{wrand4} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth]{wrand8} \end{minipage} \end{figure} \clearpage \begin{table}[H] \centering \captionsetup{font = footnotesize} \begin{tabu}{| l |c | c | c |} \hline \textbf{Problem Sizes} & ADR via SOCP \cite{7} & QDR via SDP & Separable QDR via SOCP\Tstrut\Bstrut\\ \hline \hline \,\,$N = 2$: Variables: & 25 & 78 & 154 \Tstrut\Bstrut\\ \hline \rowfont{\color{blue}} Constraints: & 14 & 27 & 73\Tstrut\Bstrut \\ \hline \hline \,\,$N = 3$: Variables: & 63 & 200 & 370 \Tstrut\Bstrut\\ \hline \rowfont{\color{blue}} Constraints: & 25 & 84 & 186\Tstrut\Bstrut \\ \hline \hline \,\,$N = 4$: Variables: & 123 & 435 & 728 \Tstrut\Bstrut\\ \hline \rowfont{\color{blue}} Constraints: & 41 & 215 & 383 \Tstrut\Bstrut \\ \hline \hline \,\,$N = 5$: Variables: & 213 & 840 & 1264 \Tstrut\Bstrut\\ \hline \rowfont{\color{blue}} Constraints: & 61 & 468 & 688 \Tstrut\Bstrut \\ \hline \hline \,\,$N = 8$: Variables: & 723 & 3890 & 4813 \Tstrut\Bstrut\\ \hline \rowfont{\color{blue}} Constraints: & 145 & 1271 & 2322 \Tstrut\Bstrut \\ \hline \end{tabu} \caption{ Representation of the problem size for each method. Number of optimisation variables and number of equality constraints as reported by CVX after solving.
} \end{table} Based on the numerical experiments, we can conclude that both the SDP- and SOCP-reformulation-based methods for solving affinely parameterized ARO problems with a quadratic decision rule exceed the performance of the classical ADR approach. With our state-of-the-art conic programming solver, we were only able to solve instances up to size $N=8$, as demonstrated in Table 1. \section{Conclusion and Outlook} In this paper we have shown that affinely parameterized linear adjustable robust optimization problems with the new parametric QDRs under ellipsoidal uncertainty are numerically tractable, by establishing exact semi-definite program and second order cone program reformulations. We have also demonstrated, via numerical experiments on lot-sizing problems with uncertain demand, that adjustable robust linear optimization problems with QDRs improve upon affine decision rules both in the worst-case sense and after simulated realisation of the uncertain demand relative to the true solution. It is of great interest to study the computational tractability of adjustable robust linear optimization problems with QDRs in the presence of uncertainty sets expressed as intersections of ellipsoids; this will be examined in a forthcoming study.
This class depends on the following packages for its proper functioning: \begin{enumerate} \item \file{natbib.sty} for citation processing; \item \file{geometry.sty} for margin settings; \item \file{fleqn.clo} for left aligned equations; \item \file{graphicx.sty} for graphics inclusion; \item \file{txfonts.sty} optional font package, if the document is to be formatted with Times and compatible math fonts; \item \file{hyperref.sty} optional packages if hyperlinking is required in the document; \item \file{endfloat.sty} optional packages if floats to be placed at end of the PDF. \end{enumerate} All the above packages (except some optional packages) are part of any standard \LaTeX{} installation. Therefore, the users need not be bothered about downloading any extra packages. Furthermore, users are free to make use of \textsc{ams} math packages such as \file{amsmath.sty}, \file{amsthm.sty}, \file{amssymb.sty}, \file{amsfonts.sty}, etc., if they want to. All these packages work in tandem with \file{elsarticle.cls} without any problems. \section{Major Differences} Following are the major differences between \file{elsarticle.cls} and its predecessor package, \file{elsart.cls}: \begin{enumerate}[\textbullet] \item \file{elsarticle.cls} is built upon \file{article.cls} while \file{elsart.cls} is not. 
\file{elsart.cls} redefines many of the commands in the \LaTeX{} classes/kernel, which can possibly cause surprising clashes with other contributed \LaTeX{} packages; \item provides preprint document formatting by default, and optionally formats the document as per the final style of models $1+$, $3+$ and $5+$ of Elsevier journals; \item some easier ways for formatting \verb+list+ and \verb+theorem+ environments are provided while people can still use \file{amsthm.sty} package; \item \file{natbib.sty} is the main citation processing package which can comprehensively handle all kinds of citations and works perfectly with \file{hyperref.sty} in combination with \file{hypernat.sty}; \item long title pages are processed correctly in preprint and final formats. \end{enumerate} \section{Installation} The package is available at author resources page at Elsevier (\url{http://www.elsevier.com/locate/latex}). It can also be found in any of the nodes of the Comprehensive \TeX{} Archive Network (\textsc{ctan}), one of the primary nodes being \url{http://tug.ctan.org/tex-archive/macros/latex/contrib/elsarticle/}. Please download the \file{elsarticle.dtx} which is a composite class with documentation and \file{elsarticle.ins} which is the \LaTeX{} installer file. When we compile the \file{elsarticle.ins} with \LaTeX{} it provides the class file, \file{elsarticle.cls} by stripping off all the documentation from the \verb+*.dtx+ file. The class may be moved or copied to a place, usually, \verb+$TEXMF/tex/latex/elsevier/+, or a folder which will be read by \LaTeX{} during document compilation. The \TeX{} file database needs updation after moving/copying class file. Usually, we use commands like \verb+mktexlsr+ or \verb+texhash+ depending upon the distribution and operating system. 
\section{Usage}\label{sec:usage} The class should be loaded with the command: \begin{vquote} \documentclass[<options>]{elsarticle} \end{vquote} \noindent where the \verb+options+ can be the following: \begin{description} \item [{\tt\color{verbcolor} preprint}] default option which format the document for submission to Elsevier journals. \item [{\tt\color{verbcolor} review}] similar to the \verb+preprint+ option, but increases the baselineskip to facilitate easier review process. \item [{\tt\color{verbcolor} 1p}] formats the article to the look and feel of the final format of model 1+ journals. This is always single column style. \item [{\tt\color{verbcolor} 3p}] formats the article to the look and feel of the final format of model 3+ journals. If the journal is a two column model, use \verb+twocolumn+ option in combination. \item [{\tt\color{verbcolor} 5p}] formats for model 5+ journals. This is always of two column style. \item [{\tt\color{verbcolor} authoryear}] author-year citation style of \file{natbib.sty}. If you want to add extra options of \file{natbib.sty}, you may use the options as comma delimited strings as arguments to \verb+\biboptions+ command. An example would be: \end{description} \begin{vquote} \biboptions{longnamesfirst,angle,semicolon} \end{vquote} \begin{description} \item [{\tt\color{verbcolor} number}] numbered citation style. Extra options can be loaded with\linebreak \verb+\biboptions+ command. \item [{\tt\color{verbcolor} sort\&compress}] sorts and compresses the numbered citations. For example, citation [1,2,3] will become [1--3]. \item [{\tt\color{verbcolor} longtitle}] if front matter is unusually long, use this option to split the title page across pages with the correct placement of title and author footnotes in the first page. \item [{\tt\color{verbcolor} times}] loads \file{txfonts.sty}, if available in the system to use Times and compatible math fonts. 
\item [{\tt\color{verbcolor} reversenotenum}] Use alphabets as author--affiliation linking labels and use numbers for author footnotes. By default, numbers will be used as author--affiliation linking labels and alphabets for author footnotes. \item [{\tt\color{verbcolor} lefttitle}] To move title and author/affiliation block to flushleft. \verb+centertitle+ is the default option which produces center alignment. \item [{\tt\color{verbcolor} endfloat}] To place all floats at the end of the document. \item [{\tt\color{verbcolor} nonatbib}] To unload natbib.sty. \item [{\tt\color{verbcolor} doubleblind}] To hide author name, affiliation, email address etc. for double blind refereeing purpose. \item[] All options of \file{article.cls} can be used with this document class. \item[] The default options loaded are \verb+a4paper+, \verb+10pt+, \verb+oneside+, \verb+onecolumn+ and \verb+preprint+. \end{description} \section{Frontmatter} There are two types of frontmatter coding: \begin{enumerate}[(1)] \item each author is connected to an affiliation with a footnote marker; hence all authors are grouped together and affiliations follow; \pagebreak \item authors of same affiliations are grouped together and the relevant affiliation follows this group. \end{enumerate} An example of coding the first type is provided below. 
\begin{vquote} \title{This is a specimen title\tnoteref{t1,t2}} \tnotetext[t1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[t2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \begin{vquote} \author[1]{Jos Migchielsen\corref{cor1}% \fnref{fn1}} \ead{J.Migchielsen@elsevier.com} \author[2]{CV Radhakrishnan\fnref{fn2}} \ead{cvr@sayahna.org} \author[3]{CV Rajagopal\fnref{fn1,fn3}} \ead[url]{www.stmdocs.in} \end{vquote} \begin{vquote} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \fntext[fn3]{Yet another author footnote.} \address[1]{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands} \address[2]{Sayahna Foundations, JWRA 34, Jagathy, Trivandrum 695014, India} \address[3]{STM Document Engineering Pvt Ltd., Mepukada, Malayinkil, Trivandrum 695571, India} \end{vquote} The output of the above \TeX{} source is given in Clips~\ref{clip1} and \ref{clip2}. The header portion or title area is given in Clip~\ref{clip1} and the footer area is given in Clip~\ref{clip2}. \deforange{blue!70} \src{Header of the title page.} \includeclip{1}{130 612 477 707}{1psingleauthorgroup.pdf \deforange{orange} \deforange{blue!70} \src{Footer of the title page.} \includeclip{1}{93 135 499 255}{1pseperateaug.pdf \deforange{orange} Most of the commands such as \verb+\title+, \verb+\author+, \verb+\address+ are self explanatory. Various components are linked to each other by a label--reference mechanism; for instance, title footnote is linked to the title with a footnote mark generated by referring to the \verb+\label+ string of the \verb=\tnotetext=. 
We have used similar commands such as \verb=\tnoteref= (to link title note to title); \verb=\corref= (to link corresponding author text to corresponding author); \verb=\fnref= (to link footnote text to the relevant author names). \TeX{} needs two compilations to resolve the footnote marks in the preamble part. Given below are the syntax of various note marks and note texts. \begin{vquote} \tnoteref{<label(s)>} \corref{<label(s)>} \fnref{<label(s)>} \tnotetext[<label>]{<title note text>} \cortext[<label>]{<corresponding author note text>} \fntext[<label>]{<author footnote text>} \end{vquote} \noindent where \verb=<label(s)>= can be either one or more comma delimited label strings. The optional arguments to the \verb=\author= command holds the ref label(s) of the address(es) to which the author is affiliated while each \verb=\address= command can have an optional argument of a label. In the same manner, \verb=\tnotetext=, \verb=\fntext=, \verb=\cortext= will have optional arguments as their respective labels and note text as their mandatory argument. The following example code provides the markup of the second type of author-affiliation. \begin{vquote} \author{Jos Migchielsen\corref{cor1}% \fnref{fn1}} \ead{J.Migchielsen@elsevier.com} \address{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands} \author{CV Radhakrishnan\fnref{fn2}} \ead{cvr@sayahna.org} \address{Sayahna Foundations, JWRA 34, Jagathy, Trivandrum 695014, India} \author{CV Rajagopal\fnref{fn1,fn3}} \ead[url]{www.stmdocs.in} \address{STM Document Engineering Pvt Ltd., Mepukada, Malayinkil, Trivandrum 695571, India} \end{vquote} \vspace*{-.5pc} \begin{vquote} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. 
But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \end{vquote} The output of the above \TeX{} source is given in Clip~\ref{clip3}. \deforange{blue!70} \src{Header of the title page..} \includeclip{1}{119 563 468 709}{1pseperateaug.pdf \deforange{orange} \pagebreak Clip~\ref{clip4} shows the output after giving \verb+doubleblind+ class option. \deforange{blue!70} \src{Double blind article} \includeclip{1}{124 567 477 670}{elstest-1pdoubleblind.pdf \deforange{orange} \vspace*{-.5pc} The frontmatter part has further environments such as abstracts and keywords. These can be marked up in the following manner: \begin{vquote} \begin{abstract} In this work we demonstrate the formation of a new type of polariton on the interface between a .... \end{abstract} \end{vquote} \vspace*{-.5pc} \begin{vquote} \begin{keyword} quadruple exiton \sep polariton \sep WGM \end{keyword} \end{vquote} \noindent Each keyword shall be separated by a \verb+\sep+ command. \textsc{msc} classifications shall be provided in the keyword environment with the commands \verb+\MSC+. \verb+\MSC+ accepts an optional argument to accommodate future revisions. eg., \verb=\MSC[2008]=. The default is 2000.\looseness=-1 \subsection{New page} Sometimes you may need to give a page-break and start a new page after title, author or abstract. Following commands can be used for this purpose. \begin{vquote} \newpageafter{title} \newpageafter{author} \newpageafter{abstract} \end{vquote} \begin{itemize} \leftskip-2pc \item [] {\tt\color{verbcolor} \verb+\newpageafter{title}+} typeset the title alone on one page. \item [] {\tt\color{verbcolor} \verb+\newpageafter{author}+} typeset the title and author details on one page. \item [] {\tt\color{verbcolor} \verb+\newpageafter{abstract}+} typeset the title, author details and abstract \& keywords one one page. 
\end{itemize} \section{Floats} {Figures} may be included using the command, \verb+\includegraphics+ in combination with or without its several options to further control graphic. \verb+\includegraphics+ is provided by \file{graphic[s,x].sty} which is part of any standard \LaTeX{} distribution. \file{graphicx.sty} is loaded by default. \LaTeX{} accepts figures in the postscript format while pdf\LaTeX{} accepts \file{*.pdf}, \file{*.mps} (metapost), \file{*.jpg} and \file{*.png} formats. pdf\LaTeX{} does not accept graphic files in the postscript format. The \verb+table+ environment is handy for marking up tabular material. If users want to use \file{multirow.sty}, \file{array.sty}, etc., to fine control/enhance the tables, they are welcome to load any package of their choice and \file{elsarticle.cls} will work in combination with all loaded packages. \section[Theorem and ...]{Theorem and theorem like environments} \file{elsarticle.cls} provides a few shortcuts to format theorems and theorem-like environments with ease. In all commands the options that are used with the \verb+\newtheorem+ command will work exactly in the same manner. \file{elsarticle.cls} provides three commands to format theorem or theorem-like environments: \begin{vquote} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newdefinition{rmk}{Remark} \newproof{pf}{Proof} \newproof{pot}{Proof of Theorem \ref{thm2}} \end{vquote} The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font, bold font for theorem heading and theorem number at the right hand side of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. \begin{vquote} \begin{thm} For system (8), consensus can be achieved with $\|T_{\omega z}$ ... \begin{eqnarray}\label{10} .... 
\end{eqnarray} \end{thm} \end{vquote} Clip~\ref{clip5} will show you how some text enclosed between the above code\goodbreak \noindent looks like: \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newtheorem}} \includeclip{2}{1 1 453 120}{jfigs.pdf} \deforange{orange} The \verb+\newdefinition+ command is the same in all respects as its\linebreak \verb+\newtheorem+ counterpart except that the font shape is roman instead of italic. Both \verb+\newdefinition+ and \verb+\newtheorem+ commands automatically define counters for the environments defined. \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newdefinition}} \includeclip{1}{1 1 453 105}{jfigs.pdf} \deforange{orange} The \verb+\newproof+ command defines proof environments with upright font shape. No counters are defined. \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newproof}} \includeclip{3}{1 1 453 65}{jfigs.pdf} \deforange{orange} Users can also make use of \verb+amsthm.sty+ which will override all the default definitions described above. \section[Enumerated ...]{Enumerated and Itemized Lists} \file{elsarticle.cls} provides an extended list processing macros which makes the usage a bit more user friendly than the default \LaTeX{} list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.', so that the item counter will be suffixed by a period. \item You can use `a)' for alphabetical counter and '(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. 
\end{vquote} \deforange{blue!70} \src{List -- Enumerate} \includeclip{4}{1 1 453 185}{jfigs.pdf} \deforange{orange} Further, the enhanced list environment allows one to prefix a string like `step' to all the item numbers. \begin{vquote} \begin{enumerate}[Step 1.] \item This is the first step of the example list. \item Obviously this is the second step. \item The final step to wind up this example. \end{enumerate} \end{vquote} \deforange{blue!70} \src{List -- enhanced} \includeclip{5}{1 1 313 83}{jfigs.pdf} \deforange{orange} \section{Cross-references} In electronic publications, articles may be internally hyperlinked. Hyperlinks are generated from proper cross-references in the article. For example, the words \textcolor{black!80}{Fig.~1} will never be more than simple text, whereas the proper cross-reference \verb+\ref{tiger}+ may be turned into a hyperlink to the figure itself: \textcolor{blue}{Fig.~1}. In the same way, the words \textcolor{blue}{Ref.~[1]} will fail to turn into a hyperlink; the proper cross-reference is \verb+\cite{Knuth96}+. Cross-referencing is possible in \LaTeX{} for sections, subsections, formulae, figures, tables, and literature references. \section[Mathematical ...]{Mathematical symbols and formulae} Many physical/mathematical sciences authors require more mathematical symbols than the few that are provided in standard \LaTeX. A useful package for additional symbols is the \file{amssymb} package, developed by the American Mathematical Society. This package includes such oft-used symbols as $\lesssim$ (\verb+\lesssim+), $\gtrsim$ (\verb+\gtrsim+) or $\hbar$ (\verb+\hbar+). Note that your \TeX{} system should have the \file{msam} and \file{msbm} fonts installed. If you need only a few symbols, such as $\Box$ (\verb+\Box+), you might try the package \file{latexsym}. Another point which would require authors' attention is the breaking up of long equations. 
When you use \file{elsarticle.cls} for formatting your submissions in the \verb+preprint+ mode, the document is formatted in single column style with a text width of 384pt or 5.3in. When this document is formatted for final print and if the journal happens to be a double column journal, the text width will be reduced to 224pt for a $3+$ double column journal, and correspondingly for a $5+$ journal. All the nifty fine-tuning in equation breaking done by the author goes to waste in such cases. Therefore, authors are requested to check for this problem by typesetting their submissions in the final format as well, to see whether their equations are broken at appropriate places. This is done by changing the appropriate options in the document class loading command, which is explained in section~\ref{sec:usage}, \nameref{sec:usage}, and it allows authors to fix any equation breaking problem before submission for publication. \file{elsarticle.cls} supports formatting the author submission in different types of final format. This is further discussed in section \ref{sec:final}, \nameref{sec:final}.

\subsection*{Displayed equations and double column journals}

Many Elsevier journals print their text in two columns. Since the preprint layout uses a larger line width than such columns, the formulae are too wide for the line width in print. Here is an example of an equation (see equation 6) which is perfect in a single column preprint format:

\bigskip \setlength\Sep{6pt} \src{See equation (6)} \deforange{blue!70} \includeclip{4}{105 500 500 700}{1psingleauthorgroup.pdf} \deforange{orange} \noindent When this document is typeset for publication in a model 3+ journal with double columns, the equation will overlap the second column text matter if the equation is not broken at the appropriate location.
\vspace*{6pt} \deforange{blue!70} \src{See equation (6) overprints into second column} \includeclip{3}{59 421 532 635}{elstest-3pd.pdf} \deforange{orange} \vspace*{6pt} \noindent The typesetter will try to break the equation, but the chosen break point need not be to the liking of the author and, as it happens, may even be semantically incorrect. Therefore, authors should check their submissions for the incidence of such long equations and break the equations at the correct places so that the final typeset copy will be as they wish.

\section{Bibliography}

Three bibliographic style files (\verb+*.bst+) are provided --- \file{elsarticle-num.bst}, \file{elsarticle-num-names.bst} and \file{elsarticle-harv.bst} --- the first one can be used for the numbered scheme, the second for the numbered scheme with the new options of \file{natbib.sty}, and the third for the author-year scheme.

In \LaTeX{} literature, references are listed in the \verb+thebibliography+ environment. Each reference is a \verb+\bibitem+ and each \verb+\bibitem+ is identified by a label, by which it can be cited in the text: \verb+\bibitem[Elson et al.(1996)]{ESG96}+ is cited as \verb+\citet{ESG96}+. \noindent In connection with cross-referencing and possible future hyperlinking it is not a good idea to collect more than one literature item in one \verb+\bibitem+. The so-called Harvard or author-year style of referencing is enabled by the \LaTeX{} package \file{natbib}. With this package the literature can be cited as follows: \begin{enumerate}[\textbullet] \item Parenthetical: \verb+\citep{WB96}+ produces (Wettig \& Brown, 1996). \item Textual: \verb+\citet{ESG96}+ produces Elson et al. (1996). \item An affix and part of a reference: \verb+\citep[e.g.][Ch. 2]{Gea97}+ produces (e.g. Governato et al., 1997, Ch. 2). \end{enumerate} In the numbered scheme of citation, \verb+\cite{<label>}+ is used, since \verb+\citep+ or \verb+\citet+ has no relevance in the numbered scheme.
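To make the author-year scheme concrete, a minimal setup might look as follows. This is only a sketch: the citation keys and the bibliography file name \verb+mybibfile+ are placeholders, and journal-required front matter is omitted.

\begin{verbatim}
\documentclass[authoryear]{elsarticle}
\begin{document}
... as shown by \citet{ESG96}, and later
confirmed \citep{WB96} ...
\bibliographystyle{elsarticle-harv}
\bibliography{mybibfile}
\end{document}
\end{verbatim}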
The \file{natbib} package is loaded by \file{elsarticle} with \verb+numbers+ as the default option. You can change this to the author-year or Harvard scheme by adding the option \verb+authoryear+ in the class loading command. If you want to use more options of the \file{natbib} package, you can do so with the \verb+\biboptions+ command, which is described in section \ref{sec:usage}, \nameref{sec:usage}. For details of the various options of the \file{natbib} package, please take a look at the \file{natbib} documentation, which is part of any standard \LaTeX{} installation.

In addition to the above standard \verb+.bst+ files, there are 10 journal-specific \verb+.bst+ files also available. Instructions for using these \verb+.bst+ files can be found at \href{http://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files} {http://support.stmdocs.in}

\section{Graphical abstract and highlights}

A template for adding a graphical abstract and highlights is now available. These will appear as the first two pages of the PDF before the article content begins. \pagebreak Please refer below to see how to code them.

\begin{vquote}
....
....
\end{abstract}

\begin{graphicalabstract}
\end{graphicalabstract}

\begin{highlights}
\item Research highlight 1
\item Research highlight 2
\end{highlights}

\begin{keyword}
....
....
\end{vquote}

\section{Final print}\label{sec:final}

The authors can format their submission to the page size and margins of their preferred journal. \file{elsarticle} provides four class options for this purpose. Note, however, that these options cannot emulate the exact page layout of the final print copy.

\lmrgn=3em
\begin{description}
\item [\texttt{1p}:] $1+$ journals with a text area of 384pt $\times$ 562pt or 13.5cm $\times$ 19.75cm or 5.3in $\times$ 7.78in, single column style only.
\item [\texttt{3p}:] $3+$ journals with a text area of 468pt $\times$ 622pt or 16.45cm $\times$ 21.9cm or 6.5in $\times$ 8.6in, single column style.
\item [\texttt{twocolumn}:] should be used along with the 3p option if the journal is $3+$ with the same text area as above, but double column style.
\item [\texttt{5p}:] $5+$ journals with a text area of 522pt $\times$ 682pt or 18.35cm $\times$ 24cm or 7.22in $\times$ 9.45in, double column style only.
\end{description}

The following pages contain clippings of different parts of the title page of different journal models typeset in the final format. Models $1+$ and $3+$ have the same look and feel in the typeset copy when presented in this document. That is also the case with the double column $3+$ and $5+$ journal article pages. The only difference is the wider text width of the higher models. Therefore we will look at the different portions of a typical single column journal page and those of a double column article in the final format.

\begin{center} \hypertarget{bsc}{} \hyperlink{sc}{ {\bf [Specimen single column article -- Click here]} } \hypertarget{bsc}{} \hyperlink{dc}{ {\bf [Specimen double column article -- Click here]} } \end{center}

\src{}\hypertarget{sc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{88 120 514 724}{elstest-1p.pdf}} \deforange{orange} \src{}\hypertarget{dc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{27 61 562 758}{elstest-5p.pdf}} \deforange{orange} \end{document}

\section{Introduction and features}

This \LaTeX{} library provides a standard set of environments for writing optimization problems. The most important features are: \begin{enumerate} \item It can reference optimization problems using three different policies: no equation is referenced, the problem is referenced with a single label, or each equation has an individual reference. For more details refer to Sections \ref{sec:syntax} and \ref{sec:environments}. \item It defines two problem size formats: a long format and a short format. For more details refer to Sections \ref{sec:syntax} and \ref{sec:longshort}. \item It allows four different outputs for the location of the constraints.
For more details refer to Sections \ref{sec:syntax} and \ref{sec:format}. \item It allows the definition of a limitless number of constraints. For more details refer to Section \ref{subsec:syntax}. \item Four different types of problems: \textit{minimize}, \textit{maximize}, \textit{arg min} and \textit{arg max}. For more details refer to Sections \ref{sec:syntax} and \ref{sec:environments}. \item The optimization problem can be broken across several pages without compromising the alignment or the structure of the problem. For more details refer to Section \ref{sec:breakpages}. \item The objective function can be broken across several lines without compromising the alignment or the structure of the problem. For more details refer to Section \ref{sec:breakObj}. \end{enumerate}

\section{Using the package}

The package can be imported by directly adding
\begin{lstlisting}
\usepackage{optidef}
\end{lstlisting}
to the document preamble. When importing the package, three options can be used, \verb|short|, \verb|nocomma|, and either \verb|c1|, \verb|c2|, or \verb|c3|:
\begin{lstlisting}
\usepackage[short,c1|c2|c3,nocomma]{optidef}
\end{lstlisting}
The first option changes the default long format of the optimization problems to a shorter format; for a better explanation (including examples) of the \verb|short| option check Section \ref{sec:longshort}. The options \verb|c1|, \verb|c2|, and \verb|c3| change the default format of the constraints; the default format is format 0 (as defined in Section \ref{sec:format}); \verb|c1|, \verb|c2|, and \verb|c3| respectively change the default constraint arrangement to format 1, 2, and 3. For a better explanation of the four formats including examples, we refer to Section \ref{sec:format}. For the \verb|nocomma| option check Section \ref{sec:comma}. For a detailed description of how to use the package, keep reading the next section.
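As a quick start, a minimal document using the package could look like the following sketch (the functions $f$ and $g$ are placeholders):

\begin{verbatim}
\documentclass{article}
\usepackage{optidef}
\begin{document}
\begin{mini*}
  {w}{f(w)}
  {}{}
  \addConstraint{g(w)}{=0}
\end{mini*}
\end{document}
\end{verbatim}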
\section{Environment Syntax Definition} \label{sec:syntax} Considering that \verb|Const.i| stands for constraint $i$, \verb|LHS.i| stands for the left-hand-side of constraint $i$, and \verb|RHS.i| for the right-hand-side counterpart, the basic structure to define a general optimization problem with $N$ constraints is: \begin{verbatim} \begin{mini#}|sizeFormat|[constraintFormat]<break> {optimizationVariable} {objectiveFunction\label{objective}} {\label{optimizationProblem}} {optimizationResult} \addConstraint{LHS.1}{RHS.1\label{Const1}}{extraConst1} \addConstraint{LHS.2}{RHS.2\label{Const2}}{extraConst2} . . \addConstraint{LHS.N}{RHS.N\label{ConstN}}{extraConstN} \end{mini#} \end{verbatim} \subsection{Definition of Problem parameters} \begin{enumerate}[label=(\roman*)] \item \verb|mini#|: defines the type of environment and reference used. There are four environments: \verb|mini|, \verb|maxi|, \verb|argmini|, and \verb|argmaxi|. There are three types of referencing: \verb|mini|, \verb|mini*| and \verb|mini!|. Consult Section \ref{sec:environments} for more details. \item (Optional) \verb|sizeFormat|: optional parameter to define the size format of the problem. The possible values are: \begin{itemize} \item l: for the long format as defined in Section \ref{sec:longshort}. \item s: for the short format as defined in Section \ref{sec:longshort}. \end{itemize} \item (Optional) \verb|constraintFormat|: optional parameter to change the format of the constraints. The parameter \verb|constraintFormat| can take the following values: \begin{itemize} \item 0: for the Standard definition in Section \ref{sec:format}. \item 1: for Alternative 1 in Section \ref{sec:format}. \item 2: for Alternative 2 in Section \ref{sec:format} \item 3: for Alternative 3 in Section \ref{sec:format} \end{itemize} \item (Optional) \verb|break|: optional parameter to allow the optimization problem to break across multiple pages. For details on this feature, check Section \ref{sec:breakpages}. 
\item \verb|optimizationVariable|: variable to be optimized in the problem, e.g. $w \in \Re^N$. \item \verb|objectiveFunction\label{objective}|: function to be minimized/maximized as a function of the optimization variable, e.g. $\|w\|_2$. If required, the objective function label should also be included within this term. \item \verb|\label{optimizationProblem}|: it defines the main and general reference for the optimization problem. It is used for the \verb|mini| and \verb|mini!| environments. In the \verb|mini*| environment it should be left blank, i.e.~\{\}, but \textbf{not omitted}. \item \verb|optimizationResult|: a term expressing the result of the optimization problem, e.g. $J(w^*)~=$. If not needed, leave it blank, but \textbf{do not omit it}. \end{enumerate}

The last two problem parameters, \verb|\label{optimizationProblem}| and \verb|optimizationResult|, could have been made optional. However, in order to improve the problem readability, line breaking between the 7 parameters was implemented; unfortunately, line breaking and optional parameters are not compatible, so these two parameters had to be made mandatory.

\subsection{Adding Constraints} \label{subsec:syntax}

After the definition of the problem parameters, the environment accepts the definition of an infinite number of constraints. For these definitions the following command is used: ~\\ \verb|\addConstraint{LHS.k}{RHS.k\label{Const.k}}{extraConst.k}| ~\\ The command accepts three different parameters: \begin{enumerate} \item \verb|LHS.k|: the left-hand side of constraint $k$, e.g. $3w^\top w$. \item (Optional) \verb|RHS.k\label{Const.k}|: the right-hand side of constraint $k$ if the equations should be aligned at the equality or inequality signs, e.g. $\leq \|w\|_\infty$. If required, the constraint label should also be included in this term. \item (Optional) \verb|extraConst.k|: optional parameter to add an extra alignment point for additional constraint information.
An example would be the constraint names. See Example \ref{ex:extra} or Section \ref{sec:extraAlign}. \end{enumerate}

\subsubsection{Constraints referencing}

Notice that the label for the constraints is always included in the right-hand-side expression, and it only makes sense when using the \verb|mini!| environment. The label of the objective function can also be included in a similar way.

\section{Environment Types} \label{sec:environments}

There are four basic environments depending on the type of referencing that should be used. \begin{enumerate} \item The \textbf{mini} environment for defining problems with a single reference label: \begin{mini} {w}{f(w)+R(w+6x)} {\label{eq:Ex1}}{} \breakObjective{+L(x)} \addConstraint{g(w)}{=0} \end{mini} \item The \textbf{mini*} environment if the problem does not have to be referenced: \begin{mini*} {w}{f(w)+ R(w+6x)} {}{} \addConstraint{g(w)}{=0} \end{mini*} \item The \textbf{mini!} environment if each equation should be referenced: \begin{mini!} {w}{f(w)+ R(w+6x)\label{eq:Ex2}} {\label{eq:Ex1}}{} \addConstraint{g(w)}{=0} \end{mini!} \item The \textbf{minie} environment: same functionality as the \textbf{mini!} environment; it replaces \textbf{mini!} when using the \texttt{optidef} library with some languages of the babel package. For further details we refer to Section \ref{sec:babel}.
\end{enumerate} \noindent Additionally, there are four basic definitions of optimization problems: \begin{enumerate} \item The \textbf{mini} environment: \begin{mini} {w}{f(w)+ R(w+6x)} {}{} \addConstraint{g(w)}{=0} \end{mini} \item The \textbf{maxi} environment: \begin{maxi} {w}{f(w)+ R(w+6x)} {}{} \addConstraint{g(w)}{=0} \end{maxi} \item The \textbf{argmini} environment: \begin{argmini} {w}{f(w)+ R(w+6x)} {}{} \addConstraint{g(w)}{=0} \end{argmini} \item The \textbf{argmaxi} environment: \begin{argmaxi} {w}{f(w)+ R(w+6x)} {}{} \addConstraint{g(w)}{=0} \end{argmaxi} \end{enumerate}

\section{Long and Short Output Formats} \label{sec:longshort}

The library permits the definition of two different problem sizes: a long format and a short format.

\subsection{Long Format}

Selected by \verb|sizeFormat|=l. It makes use of \textit{subject to} and \textit{minimize/maximize}: \begin{mini*}|l| {w}{f(w)+ R(w+6x)}{}{} \addConstraint{g(w)}{=0} \end{mini*}

\subsection{Short Format}

Selected by \verb|sizeFormat|=s. It uses instead the shorter \textit{s.t.} and \textit{min/max}: \begin{mini*}|s| {w}{f(w)+ R(w+6x)}{}{} \addConstraint{g(w)}{=0} \end{mini*}

\noindent By default, the long format is used. To change the default to the short format the package must be imported with the \verb|short| option:
\begin{lstlisting}
\usepackage[short]{optidef}
\end{lstlisting}

\section{Output Formats for the Constraints} \label{sec:format}

There are four basic output formats for the location of the constraints. They are controlled by the environment parameter \verb|constraintFormat|.

\subsection{Alternative 0}

In this format option, the constraints are located to the right of \textit{subject to} and aligned with the objective function. It also has a second alignment point at the $=,~\leq,~\geq$ signs: \begin{mini} {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)+h(w)}{=0} \addConstraint{t(w)}{=0.} \end{mini} \noindent It is the default format if no format option is provided.
Alternatively, it can also be set by selecting \verb|constraintFormat|=0.

\subsection{Alternative 1}

Selected by \verb|constraintFormat|=1. It locates the constraints below \textit{subject to} and keeps them aligned at the inequality/equality signs: \begin{mini}[1] {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)+h(w)}{=0} \addConstraint{t(w)}{=0.} \end{mini}

\subsection{Alternative 2}

Selected by \verb|constraintFormat|=2. It aligns all the constraints with the objective function. \begin{mini}[2] {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)+h(w)}{=0} \addConstraint{t(w)}{=0.} \end{mini}

\subsection{Alternative 3}

Selected by \verb|constraintFormat|=3. It aligns all the constraints below \textit{subject to}: \begin{mini}[3] {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)+h(w)}{=0} \addConstraint{t(w)}{=0.} \end{mini}

\subsection{Extra alignment alternative} \label{sec:extraAlign}

By default, the constraints have 2 aligned elements. However, a third alignment point can be used to set some constraint features. A clear example could be the constraint names: \begin{mini*} {w}{f(w)+ R(w+6x)}{}{} \addConstraint{g(w)+h(w)}{=0,}{\text{(Topological Constraint)}} \addConstraint{l(w)}{=5w,\quad}{\text{(Boundary Constraint)}} \end{mini*} or the index of the constraints: \begin{mini*} {w,u}{f(w)+ R(w+6x)}{}{} \addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1} \addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1} \end{mini*} This extra alignment point can be added using a third input parameter of the \verb|\addConstraint| command. An example using the last constraint of the previous example would be:
\begin{lstlisting}
\addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1}
\end{lstlisting}

\subsection{Default format}

The default format is alternative 0. To change the default format across the whole document, the package can be imported using one of the three options \verb|c1|, \verb|c2|, \verb|c3|, i.e.:
\begin{lstlisting}
\usepackage[c1|c2|c3]{optidef}
\end{lstlisting}

\section{Breaking the optimization problem across multiple pages} \label{sec:breakpages}

It is common to encounter an optimization problem that is too long to fit on a single page. In such cases, optidef can automatically break the problem across multiple pages by simply using the optional argument \verb|<b>|. For example:
\begin{lstlisting}
\begin{mini*}<b>
{w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{}
\addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1}
\addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1}
\end{mini*}
\end{lstlisting}
However, when using this option \verb|<b>|, it is important to note that the labeling of equations is no longer automatic. To create the number/label, the command \verb|\labelOP{label}| should be used: in the equation/constraint of the optimization problem where the label/number should be located, simply add \verb|\labelOP{label}|. For example, the following code:
\begin{lstlisting}
\begin{mini}<b>
{w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{}
\addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1 \labelOP{eq:label}}
\addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1}
\end{mini}
\end{lstlisting}
\noindent would display this: \begin{mini}<b> {w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{} \addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1 \labelOP{eq:label}} \addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1} \end{mini} The option \verb|<b>| automatically breaks the optimization problem when the problem is too large to fit in one page (e.g.\ see an example in \ref{ex:break}). However, manual breaks at selected locations are also possible using the \verb|\displaybreak| command.
Just add \verb|\displaybreak| between the two constraints that need to be broken, e.g.:
\begin{lstlisting}
\begin{mini}<b>
{w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{}
\breakObjective{-g(w^3-x^2*200+10000*w^5)}
\addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1 \labelOP{eq:label}}
\displaybreak
\addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1}
\end{mini}
\end{lstlisting}
\noindent would display: \begin{mini}<b> {w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{} \breakObjective{-g(w^3-x^2*200+10000*w^5)} \addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1 \labelOP{eq:label}} \displaybreak \addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1} \end{mini}

\section{Breaking the objective across several lines} \label{sec:breakObj}

It is common to encounter an optimization problem whose objective function is too long to be set in a single line. In such cases, a line breaking that respects the rest of the problem syntax is desirable. To account for that, the command \verb|\breakObjective| can be used. The idea is that, if the objective function is to be split into $n$ different functions, e.g.~$f_1,\ldots,f_n$, the default objective parameter includes just $f_1$, and then the $n-1$ statements \verb|\breakObjective|($f_k$), $k=2,\ldots,n$, are included right before the \verb|\addConstraint| commands. Let's illustrate this with an example.
We could consider the example from before: \begin{mini} {w,u}{f(w)+ R(w+6x)}{}{} \addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1} \addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1} \end{mini} If now the cost function were too long, i.e.: \[ f(w)+ R(w+6x)+ H(100w-x*w/500)-g(w^3-x^2*200+10000*w^5) \] we could split it as: \begin{mini} {w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{} \breakObjective{-g(w^3-x^2*200+10000*w^5)} \addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1} \addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1} \end{mini} by simply using the following code:
\begin{lstlisting}
\begin{mini*}
{w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{}
\breakObjective{-g(w^3-x^2*200+10000*w^5)}
\addConstraint{g(w_k)+h(w_k)}{=0,}{k=0,\ldots,N-1}
\addConstraint{l(w_k)}{=5u,\quad}{k=0,\ldots,N-1}
\end{mini*}
\end{lstlisting}
It is important to notice the specific location of the \verb|\breakObjective| command. In order to work properly, it has to be placed right after the definition of the environment parameters and right before the \verb|\addConstraint| commands; i.e.~the command should be used right after defining the first part of the objective function, but never before the definition of the mandatory environment parameters is finished.

\section{Default comma at the end of the constraint} \label{sec:comma}

By default, the package adds a comma at the end of every constraint except the last one. This feature was implemented for the sake of correct mathematical notation.
However, this behavior can be removed by adding the option \verb|nocomma| when importing the package:
\begin{lstlisting}
\usepackage[nocomma]{optidef}
\end{lstlisting}

\section{Long Optimization Variables}

The standard appearance for long optimization variables is as follows: \begin{mini!} {x_0,u_0,x_1,\hdots,u_{N-1},x_N} {\sum_{k=0}^{N-1} L(x_k,u_k)\!\!+\!\!E(x_N)\label{OCPobj}} {\label{eq:OCP}}{} \addConstraint{x_{k+1}-f(x_k,u_k)}{= 0, \label{dOCP:modelc}\quad k=0,\dots,N-1} \addConstraint{h(x_k,u_k)}{\leq 0, \quad k=0,\dots,N-1} \addConstraint{r(x_0,x_N)}{= 0. \label{dOCP:boundary}} \end{mini!} \noindent A possible way to reduce the large variable spacing is to stack the variables with the command:
\begin{verbatim}
\substack{x_0,u_0,x_1,\hdots,\\u_{N-1},x_N}
\end{verbatim}
\begin{mini!} {\substack{x_0,u_0,x_1,\hdots,\\ u_{N-1},x_N}} {\sum_{k=0}^{N-1} L(x_k,u_k)\!\!+\!\!E(x_N)\label{OCPobj}} {\label{eq:OCP}}{} \addConstraint{x_{k+1}-f(x_k,u_k)}{= 0, \label{dOCP:modelc}\quad k=0,\dots,N-1} \addConstraint{h(x_k,u_k)}{\leq 0, \quad k=0,\dots,N-1} \addConstraint{r(x_0,x_N)}{= 0. \label{dOCP:boundary}} \end{mini!}

\section{Compatibility issues with other packages}

Issues with three different packages have been reported: cleveref, babel, and mathabx.

\subsection{Cleveref}

When using the cleveref package together with the optidef package, two measures have to be taken for the packages to work properly: \begin{enumerate} \item As also indicated in the cleveref documentation, the optidef package has to be loaded before the cleveref package. \item To avoid crashes, the \verb|\label| commands in the optidef environments have to be replaced by their protected counterparts \verb|\protect\label|. This is required because of the standard \LaTeX{} issue of moving arguments and fragile commands\footnote{\url{goo.gl/wmKbNU}}.
\end{enumerate} \noindent A code example taking into account both measures is the following:
\begin{verbatim}
\documentclass{article}
\usepackage{optidef}
\usepackage{cleveref}
\begin{document}
\begin{mini!}
{w}{f(w)+ R(w+6x) \protect\label{eq:ObjectiveExample1}}
{\label{eq:Example1}}{}
\addConstraint{g(w)}{=0 \protect\label{eq:C1Example3}}
\addConstraint{n(w)}{= 6 \protect\label{eq:C2Example1}}
\addConstraint{L(w)+r(x)}{=Kw+p \protect\label{eq:C3Example1}}
\end{mini!}
Example labels: \cref{eq:Example1} and
\cref{eq:ObjectiveExample1}.
\end{document}
\end{verbatim}
As an alternative to the second step, i.e.~protecting the \verb|\label| command, the command can be robustified in the document preamble, and then \verb|\protect| is no longer needed. To robustify the \verb|\label| command, the following has to be added to the preamble:
\begin{verbatim}
\usepackage{etoolbox}
\robustify{\label}
\end{verbatim}

\subsection{Babel} \label{sec:babel}

When importing the package babel with some specific languages, e.g.~French, the \verb|mini!| environment clashes because of the exclamation mark. This issue has been resolved starting from Optidef 2.7, where a working alternative to the \verb|mini!| environment is included: the \verb|minie| environment. Both environments have the same functionality, but when using the babel package it is recommended to use the \verb|minie| environment to avoid issues.

\subsection{Mathabx}

When using the mathabx package together with the optidef package, the optidef package must be loaded first in order to avoid malfunction of the mathabx package. In addition, the amsmath package should also be loaded before both of them.
The preamble should look like:
\begin{verbatim}
\usepackage{amsmath}
\usepackage{mathabx}
\usepackage{optidef}
\end{verbatim}

\section{Examples}

\subsection{Example 1 - mini environment}

The code:
\begin{verbatim}
\begin{mini}
{w}{f(w)+ R(w+6x)}
{\label{eq:Example1}}{}
\addConstraint{g(w)}{=0}
\addConstraint{n(w)}{= 6}
\addConstraint{L(w)+r(x)}{=Kw+p}
\addConstraint{h(x)}{=0.}
\end{mini}
\end{verbatim}
\noindent outputs: \begin{mini} {w}{f(w)+ R(w+6x)} {\label{eq:Ex11}}{} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini}

\subsection{Example 2 - mini* environment}

On the other hand:
\begin{verbatim}
\begin{mini*}
{w}{f(w)+ R(w+6x)}
{}{}
\addConstraint{g(w)}{=0}
\addConstraint{n(w)}{= 6,}
\addConstraint{L(w)+r(x)}{=Kw+p}
\addConstraint{h(x)}{=0.}
\end{mini*}
\end{verbatim}
\noindent outputs almost the same, only without the reference: \begin{mini*} {w}{f(w)+ R(w+6x)} {}{} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini*}

\subsection{Example 3 - mini! environment}

\noindent Finally, the multireferencing environment outputs:
\begin{verbatim}
\begin{mini!}
{w}{f(w)+ R(w+6x) \label{eq:ObjectiveExample1}}
{\label{eq:Example1}}{}
\addConstraint{g(w)}{=0 \label{eq:C1Example3}}
\addConstraint{n(w)}{= 6 \label{eq:C2Example1}}
\addConstraint{L(w)+r(x)}{=Kw+p \label{eq:C3Example1}}
\addConstraint{h(x)}{=0.
\label{eq:C4Example1}} \end{mini!} \end{verbatim} \begin{mini!} {w}{f(w)+ R(w+6x)\label{eq:ObjectiveExample3}} {\label{eq:Example3}} {} \addConstraint{g(w)}{=0 \label{eq:C1Example3}} \addConstraint{n(w)}{= 6 \label{eq:C2Example3}} \addConstraint{L(w)+r(x)}{=Kw+p \label{eq:C3Example3}} \addConstraint{h(x)}{=0.\label{eq:C4Example3}} \end{mini!} \subsection{Example 4 - Problem Result} \noindent Adding the problem result: \begin{verbatim} \begin{mini} {w}{f(w)+ R(w+6x)} {\label{eq:Example1}} {J(w^*)=} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini} \end{verbatim} \noindent outputs: \begin{mini} {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{J(w^*)~=~} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini} \subsection{Example 5 - Short Format} \noindent Adding the short format parameter: \begin{verbatim} \begin{mini}|s| {w}{f(w)+ R(w+6x)} {\label{eq:Example1}} {} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini} \end{verbatim} \noindent outputs: \begin{mini}|s| {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini} \subsection{Example 6 - Alternative 1 for Constraints} \noindent If including a 1 as optional parameter, the first constraint will appear aligned to the left right below \textit{subject to}. 
\begin{verbatim}
\begin{mini}[1]
{w}{f(w)+ R(w+6x)}
{\label{eq:Example1}}
{}
\addConstraint{g(w)}{=0}
\addConstraint{n(w)}{= 6}
\addConstraint{L(w)+r(x)}{=Kw+p}
\addConstraint{h(x)}{=0.}
\end{mini}
\end{verbatim}
\noindent outputs: \begin{mini}[1] {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini}

\subsection{Example 7 - Alternative 2 for Constraints}

\noindent If including a 2 as optional parameter, the constraints will appear to the right of \textit{subject to}, but with a single alignment point.
\begin{verbatim}
\begin{mini}[2]
{w}{f(w)+ R(w+6x)}
{\label{eq:Example1}}
{}
\addConstraint{g(w)}{=0}
\addConstraint{n(w)}{= 6}
\addConstraint{L(w)+r(x)}{=Kw+p}
\addConstraint{h(x)}{=0.}
\end{mini}
\end{verbatim}
\noindent outputs: \begin{mini}[2] {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini}

\subsection{Example 8 - Alternative 3 for Constraints}

\noindent If including a 3 as optional parameter, the first constraint will appear aligned to the left, right below \textit{subject to}, and with a single alignment point.
\begin{verbatim}
\begin{mini}[3]
{w}{f(w)+ R(w+6x)}
{\label{eq:Example1}}
{}
\addConstraint{g(w)}{=0}
\addConstraint{n(w)}{= 6}
\addConstraint{L(w)+r(x)}{=Kw+p}
\addConstraint{h(x)}{=0.}
\end{mini}
\end{verbatim}
\noindent outputs: \begin{mini}[3] {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p} \addConstraint{h(x)}{=0.} \end{mini}

\subsection{Example 9 - Breaking a long objective}

\begin{lstlisting}
\begin{mini*}
{w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{}
\breakObjective{-g(w^3-x^2*200+10000*w^5)}
\addConstraint{g(w_k)+h(w_k)}{=0,}
\addConstraint{l(w_k)}{=5u,\quad}
\end{mini*}
\end{lstlisting}
outputs: \begin{mini} {w,u}{f(w)+ R(w+6x)+ H(100w-x*w/500)}{}{} \breakObjective{-g(w^3-x^2*200+10000*w^5)} \addConstraint{g(w_k)+h(w_k)}{=0} \addConstraint{l(w_k)}{=5u.} \end{mini}

\subsection{Example 10 - Extra Alignment in the Constraints} \label{ex:extra}

Adding an optional alignment point for constraint names:
\begin{verbatim}
\begin{mini*}
{w}{f(w)+ R(w+6x)}
{}{}
\addConstraint{g(w)}{=0,}{ \quad \text{(Dynamic constraint)}}
\addConstraint{n(w)}{= 6,}{ \quad \text{(Boundary constraint)}}
\addConstraint{L(w)+r(x)}{=Kw+p,}{ \quad \text{(Random constraint)}}
\addConstraint{h(x)}{=0,}{ \quad \text{(Path constraint).}}
\end{mini*}
\end{verbatim}

\subsection{Example 11 - The \textit{argmini} Environment}

The environments \verb|argmini|, \verb|argmini*| and \verb|argmini!| use the same syntax as the \verb|mini|, \verb|mini*| and \verb|mini!| environments, but the output is slightly different:
\begin{verbatim}
\begin{argmini}
{w}{f(w)+ R(w+6x)}
{\label{eq:Example1}}{w^*=}
\addConstraint{g(w)}{=0}
\addConstraint{n(w)}{= 6}
\addConstraint{L(w)+r(x)}{=Kw+p}
\addConstraint{h(x)}{=0.}
\end{argmini}
\end{verbatim}
\noindent outputs: \begin{argmini} {w}{f(w)+ R(w+6x)} {\label{eq:Ex1}}{w^*~=~} \addConstraint{g(w)}{=0} \addConstraint{n(w)}{= 6} \addConstraint{L(w)+r(x)}{=Kw+p}
\addConstraint{h(x)}{=0.}
\end{argmini}
\subsection{Example 12 - The \textit{maxi} and \textit{argmaxi} Environments}
These environments have exactly the same syntax and definition as the previous ones, but define maximization problems instead. The following code serves as illustration:
\begin{verbatim}
\begin{maxi}
  {w}{f(w)+ R(w+6x)}
  {\label{eq:Example1}}{}
  \addConstraint{g(w)}{=0}
  \addConstraint{n(w)}{= 6}
  \addConstraint{L(w)+r(x)}{=Kw+p}
  \addConstraint{h(x)}{=0.}
\end{maxi}
\end{verbatim}
\noindent outputs:
\begin{maxi}
  {w}{f(w)+ R(w+6x)}
  {\label{eq:Example1}}{}
  \addConstraint{g(w)}{=0}
  \addConstraint{n(w)}{= 6}
  \addConstraint{L(w)+r(x)}{=Kw+p}
  \addConstraint{h(x)}{=0.}
\end{maxi}
\subsection{Example 13 - Breaking optimization problem}
\label{ex:break}
\begin{lstlisting}
\begin{mini}<b>
  {w}{f(w)+ R(w+6x)}
  {\label{eq:Example1}}{}
  \addConstraint{g(w)}{=0}
  \addConstraint{p(w)}{=0}
  \addConstraint{q(w)}{=0}
  \addConstraint{r(w)}{=0\labelOP{testLabel}}
  \addConstraint{n(w)}{= 6}
  \addConstraint{L(w)+r(x)}{=Kw+p}
  \addConstraint{h(x)}{=0.}
\end{mini}
\end{lstlisting}
outputs:
\begin{mini}<b>
  {w}{f(w)+ R(w+6x)}
  {\label{eq:Example1}}{}
  \addConstraint{g(w)}{=0}
  \addConstraint{p(w)}{=0}
  \addConstraint{q(w)}{=0}
  \addConstraint{r(w)}{=0\labelOP{testLabel}}
  \addConstraint{n(w)}{= 6}
  \addConstraint{L(w)+r(x)}{=Kw+p}
  \addConstraint{h(x)}{=0.}
\end{mini}
\subsection{Example 14 - All Possible Parameters}
\begin{verbatim}
\begin{mini!}|s|[2]<b>
  {w}{f(w)+ R(w+6x)\label{eq:ObjectiveExample3}}
  {\label{eq:Example3}}
  {w^*=}
  \addConstraint{g(w)}{=0 \label{eq:C1Example3}}
  \addConstraint{n(w)}{= 6 \label{eq:C2Example3}}
  \addConstraint{L(w)+r(x)}{=Kw+p \label{eq:C3Example3}}
  \addConstraint{h(x)}{=0.\label{eq:C4Example3}}
\end{mini!}
\end{verbatim}
\begin{mini!}|s|[2]<b>
  {w}{f(w)+ R(w+6x)\label{eq:ObjectiveExample3}}
  {\label{eq:Example3}}
  {w^*=}
  \addConstraint{g(w)}{=0 \label{eq:C1Example3}}
  \addConstraint{n(w)}{= 6 \label{eq:C2Example3}}
  \addConstraint{L(w)+r(x)}{=Kw+p \label{eq:C3Example3}}
\addConstraint{h(x)}{=0.\label{eq:C4Example3}}
\end{mini!}
\section{Reporting bugs and feature requests}
To report a bug or request a feature, please use the issues section of the GitHub repository: \url{https://github.com/jeslago/optidef/issues}.
\end{document}
\section*{\texttt{pzccal} package test}
The script (calligraphic) math alphabet with Zapf Chancery (\verb+\mathpzc+):
\[ \mathpzc{ABCDEFGHIJKLMNOPQRSTUVWXYZ\, abcdefghijklmnopqrstuvwxyz\, 1234567890} \]
The script (calligraphic) math alphabet with Euler Script (\verb+\EuScript+):
\[ \EuScript{ABCDEFGHIJKLMNOPQRSTUVWXYZ\,abc\,123} \]
The script (calligraphic) math alphabet with Ralph Smith's Formal Script (\verb+\mathrsfs+):
\[ \mathrsfs{ABCDEFGHIJKLMNOPQRSTUVWXYZ\,abc\,123} \]
The script (calligraphic) math alphabet with \verb+\mathcal+:
\[ \mathcal{ABCDEFGHIJKLMNOPQRSTUVWXYZ\,abc\,123} \]
The script (calligraphic) math alphabet with \verb+\mathscr+:
\[ \mathscr{ABCDEFGHIJKLMNOPQRSTUVWXYZ\,abc\,123} \]
Compare script letters (CM, CMcal, pzcm, EuScript, rsfs):
\[ H \CMcal{H} \mathpzc{H} \EuScript{H} \mathrsfs{H} Z \CMcal{Z} \mathpzc{Z} \EuScript{Z} \mathrsfs{Z} F \CMcal{F} \mathpzc{F} \EuScript{F} \mathrsfs{F} \]
Usage examples with \verb+\mathcal+:
\[ \mathcal{F}\left\{ s(x)\right\} =\int_{-\infty}^{\infty}s(x) \mathrm{e}^{\mathrm{i}\omega_{x}x}\,\mathrm{d}{x} \]
The Hamiltonian is the Legendre transform of the Lagrangian $\mathcal L(t,q,\dot q)$, which depends on the generalized coordinates and their velocities $\dot q=(\dot q_1,\dot q_2\dots \dot q_n)$:
\[ \mathcal H(t,q,p)= \sum_{k=1}^n \dot q_k\, p_k - \mathcal L(t, q,\dot q) \]
\makeatother
\end{document}
\section{Introduction}
With the advent of precise cosmological data, it is now possible to constrain models of inflation by the measured magnitude and scale-dependence of correlated temperature perturbations in the cosmic microwave background (CMB), and by tracking dark matter density perturbations through measurements of the Large Scale Structure (LSS) of our universe. These observations show that the primordial perturbations coming from inflation are Gaussian to a remarkable accuracy, in agreement with the predictions of most single field models of inflation. Non-Gaussianity (NG) can be quantified by the magnitude of the bispectrum, denoted $f_{NL}$ (this is usually quoted at the equilateral point in momentum space where all three momenta are equal). For most slow-roll models, $f_{NL}$ is smaller than 1~\cite{Maldacena:2002vr, Acquaviva:2002ud}. By comparison, the most recent constraints from WMAP5 \cite{Komatsu:2008hk} data are $-4<f_{NL}<80$ for the local shape and $-125<f_{NL}^{equi}<435$ for the equilateral shape \cite{Senatore:2009gt}. The Planck satellite is expected to improve the bounds to $\Delta f_{NL} < 7$ \cite{Cooray:2008xz}. There are also a large number of ongoing and upcoming experiments probing LSS scales (such as LSST, DES, SDSS, etc.) and they may eventually allow us to probe non-Gaussianity on smaller scales.

In this note, we shall consider multi-field models with a large bispectrum (three-point correlation function) that is strongly scale dependent\footnote{There has been much recent work in calculating the bispectrum and trispectrum in multi-field inflation, for some recent references see \cite{Langlois:2008vk, Gao:2009at, Arroja:2008yy, Huang:2009xa, Huang:2009vk, Byrnes:2009qy, Battefeld:2009ym}.}. The running is positive (or blue, meaning that the NG grows as $k$ increases) and can be achieved while keeping the power spectrum nearly scale invariant.
It arises from loops (or higher order terms in the local ansatz), and the shape of the bispectrum is very well approximated by the local shape multiplied by a logarithm. We provide a consistent setup where the 1-loop effect dominates the bispectrum while giving a subdominant contribution to the power spectrum, and where higher loop contributions can be neglected. Since the running is positive, we can engineer a set-up where the curvature perturbations on CMB scales are extremely Gaussian while there is detectable NG on LSS scales.

Running NG has already been considered in the context of DBI inflation \cite{Alishahiha:2004eh, Silverstein:2003hf}. This model can have a strong NG signal due to a small and varying sound speed for the inflaton fluctuations~\cite{Chen:2005fe}. The amplitude of the three-point function can run strongly with scale if the sound speed varies, but the running of the sound speed is exactly cancelled by the quickly varying Hubble constant along the trajectory. This is the key point of this type of model, where the potential is steep but the inflaton moves slowly because of a speed limit. This causes the power spectrum to be scale invariant while the bispectrum can run wildly \cite{ArmendarizPicon:2003ht, Khoury:2008wj}.

The prospect of detecting large NG with large scale structure data has spurred much activity recently. LoVerde et al \cite{LoVerde:2007ri} have examined the possibility of using cluster counts and the galaxy bispectrum to constrain running $f_{NL}$. It was also realized in \cite{Dalal:2007cu, Matarrese:2008nc} that NG of the local shape can induce a scale dependence of the galaxy/halo bias (see also \cite{Slosar:2008hx, Afshordi:2008ru, McDonald:2008sc, Seljak:2008xr,Taruya:2008pg, Grossi:2009an}). This effect can be easily searched for in the data and it results in a competitive bound on NG of the local shape, $ -29 < f_{NL} < 70$ \cite{Slosar:2008hx}.
At the time of this writing, there exists no significant experimental bound on the running of NG with scale. Recently, Sefusatti et al \cite{Sefusatti:2009xu} argued that Planck could bound $n_{NG}$, the running of non-Gaussianity, with a precision $\Delta n_{NG} \sim 0.1 $ $(0.3)$ for a local (equilateral) shape of non-Gaussianity. In our models, we find NG with a (nearly) local shape with a scale dependence such that the NG signal grows on small scales. The magnitude of the bispectrum grows with $k$ with a model-independent running of $n_{NG} \sim 0.2$ at CMB scales and $0.1$ at LSS scales. The strongest constraint on the magnitude of NG arises from $n_s$. We find that $f_{NL} \sim 100$ can be achieved in principle. We also calculate the trispectrum $\tau_{NL}$, which also runs. Before getting into the details, we summarize the basic idea and results.

\section{Scale Dependence from Loops}
Local shape NG can be obtained in multi-field models of inflation, where each field is Gaussian but a non-linear relation between the inflaton perturbations and curvature perturbations induces NG. The original definition of the local ansatz for the curvature perturbation was given in real space \cite{Komatsu:2001rj}
\begin{equation}\label{localansatz} \zeta(\vec{x},t) = \zeta_{Gauss} + \frac{3}{5}f_{NL} (\zeta_{Gauss}^2 - \Expect{\zeta_{Gauss}^2})\; , \end{equation}
where $\zeta_{Gauss}$ is the Gaussian piece of the curvature perturbation. $f_{NL}$ in this formula is by definition scale invariant.
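As a quick numerical illustration of Eq.~(\ref{localansatz}) (a sketch, not part of the derivation; the value of $f_{NL}$ and the unit variance of $\zeta_{Gauss}$ are purely illustrative), one can draw a Gaussian field point by point, apply the local ansatz, and verify that the resulting $\zeta$ has zero mean but a positive skewness for $f_{NL}>0$:

```python
import random

random.seed(1)

f_nl = 10.0        # illustrative value
a = 0.6 * f_nl     # a = (3/5) f_NL
n = 200_000

# zeta = zeta_g + (3/5) f_NL (zeta_g^2 - <zeta_g^2>), with <zeta_g^2> = 1 here
zeta = []
for _ in range(n):
    g = random.gauss(0.0, 1.0)
    zeta.append(g + a * (g * g - 1.0))

mean = sum(zeta) / n
var = sum((z - mean) ** 2 for z in zeta) / n
skew = sum((z - mean) ** 3 for z in zeta) / n / var ** 1.5

# Analytically <zeta> = 0, <zeta^2> = 1 + 2a^2 and <zeta^3> = 6a + 8a^3,
# so the skewness is positive whenever f_NL > 0.
print(mean, skew)
```

The subtraction of $\Expect{\zeta_{Gauss}^2}$ is what enforces $\Expect{\zeta}=0$; the same constant subtraction reappears in the multi-field expansion below.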
In momentum space, the above ansatz leads to the following bispectrum
\begin{eqnarray} \label{fnldef2} \Expect{\zeta_{\vec{k}_1} \zeta_{\vec{k}_2}\zeta_{\vec{k}_3}} & = & \frac{3}{5} f_{NL} \Expect{\zeta_{\vec{k}_1} \zeta_{\vec{k}_2}(\zeta \star \zeta)_{\vec{k}_3}}\nonumber\\ &=& (2\pi)^7\delta^3(\sum\vec{k}_i) \frac{3}{10} f_{NL} (\mathcal{P}^\zeta)^2 \frac{\sum k_i^3} {\prod k_i^3}\; , \end{eqnarray}
where $(\zeta \star \zeta)_{\vec{k}_3}$ denotes a convolution, $\mathcal{P}^\zeta$ is the power spectrum (which is assumed to be scale invariant, for simplicity) and $\frac{\sum k_i^3}{\prod k_i^3}$ defines the local shape. Many multi-field models (such as curvatons \cite{Linde:1996gt, Lyth:2001nq}) have local scale invariant NG of this type. The NG can also be scale dependent even if the shape is nearly local; for example, this is expected to happen when the NG is generated throughout the whole trajectory as opposed to simply at some fixed later time, such as in curvaton models. A particular model with this feature was considered by Byrnes et al \cite{Byrnes:2008zy, Byrnes:2008wi}, where the scale-dependence arises from the dependence of $f_{NL}$ on the (time-dependent) slow-roll and Hubble parameters. In their case, the NG decreases on small scales.

We instead look for scale dependence coming from loops and higher order terms. Indeed, it was realized early on \cite{Lyth:2005fi} that an additional contribution to the bispectrum in the ansatz Eq.~(\ref{localansatz}) comes from
\begin{equation} \Expect{\zeta_{\vec{k}_1} \zeta_{\vec{k}_2}\zeta_{\vec{k}_3}} = \left(\frac{3}{5} f_{NL}\right)^3\Expect{(\zeta \star \zeta)_{\vec{k}_1}(\zeta \star \zeta)_{\vec{k}_2} (\zeta \star \zeta)_{\vec{k}_3}} \; . \end{equation}
This higher order contribution to the bispectrum has a structure similar to a loop contribution, as it involves an integral over internal momenta. The integral converges in the UV but contains IR divergences if the power spectrum is nearly scale invariant.
One can `regulate' this divergence by introducing an IR cutoff in momenta $1/L$ \footnote{These loops have been called c-loops~\cite{Lyth:2006qz}. They must not be confused with q-loops, or loops coming from the expansion of the quantum evolution operator prior to horizon crossing \cite{Weinberg:2005vy}. There has been much discussion recently on the physical significance of the IR divergences in loop calculations in inflation. For c-loops, this IR cutoff is physical and depends on the observational probe and on how we measure the zero mode of curvature perturbations. We will justify this point of view in more detail in Sec.~(\ref{IR}).}. Doing so, the shape of this term is close to local up to a log \cite{Lyth:2005fi, Boubekeur:2005fj}
\begin{equation} \Expect{\zeta_{\vec{k}_1} \zeta_{\vec{k}_2}\zeta_{\vec{k}_3}} \propto \ln(\rm{Min}[k_i] L) \frac{\sum k_i^3}{\prod k_i^3}\; . \end{equation}
If this term dominates the bispectrum, we will have a scale dependence with a running of order $n_{NG} \sim \frac{1}{\ln kL}$. As we will show later, the cutoff $L$ is well approximated by the size of the universe today, such that $\ln kL \sim 5$ around CMB scales and $n_{NG} \sim 0.2$. The NG grows with wavenumber, becoming more important at smaller wavelengths. Needless to say, this is the interesting case as it gives rise to a stronger signal for LSS.

Recently, Cogollo et al \cite{Cogollo:2008bi} and Rodriguez et al \cite{Rodriguez:2008hy} have argued that loops can dominate in a particular 2-brid model. While their idea is very similar to what we propose, their particular model suffers from a problem pointed out in \cite{Byrnes:2008zy}. One of the fields that is assumed to follow a smooth classical trajectory is actually dominated by its quantum fluctuations, undermining part of their analysis. As we will show, the field that gives rise to NG in our model is also dominated by its quantum fluctuations.
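The quoted values of the running follow from simple arithmetic (a sketch, with the $\ln kL$ estimates discussed in Sec.~(\ref{IR})): for a bispectrum amplitude proportional to $\ln(kL)$, the running $n_{NG} = d\ln B/d\ln k$ equals $1/\ln(kL)$, which a finite difference confirms:

```python
import math

def n_ng(ln_kL, h=1e-6):
    """Running n_NG = d ln B / d ln k for an amplitude B(k) proportional
    to ln(kL), computed by a central finite difference in ln k."""
    ln_B = lambda t: math.log(t)   # t = ln(kL)
    return (ln_B(ln_kL + h) - ln_B(ln_kL - h)) / (2 * h)

n_cmb = n_ng(5.0)    # ln(kL) ~ 5 at CMB scales
n_lss = n_ng(10.0)   # ln(kL) ~ 10 at LSS scales
print(n_cmb, n_lss)  # ~0.2 and ~0.1, i.e. n_NG = 1/ln(kL)
```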
But this field plays no role in the inflationary trajectory and there is no inconsistency. We consider multi-field models of hybrid inflation where the inflationary trajectory is dictated by a single field but the surface of reheating (determined by when an extra waterfall/tachyon field starts condensing) fluctuates due to two fields \cite{Dutta:2008if, Dutta:2007cr} (as originally envisioned by \cite{Alabidi:2006wa, Alabidi:2006hg} -- see also \cite{Bernardeau:2007xi, Sasaki:2008uc, Naruko:2008sq} for similar models).

In section 3, we describe the detailed set-up for the model and discuss the infra-red momentum cutoff. In section 4 we compute the power spectrum, and in section 5 we compute the bispectrum and trispectrum. We conclude in section 6 with a discussion of these results.

\section{Multi-Field Model}
A simple way to move beyond single field slow-roll and generate NG is to have multiple fields. This type of model can quickly become very complicated, and in order to simply illustrate the main physical effect of interest (namely large scale dependent NG from loops), we will consider a very simplified set-up. More general models and an in-depth analysis of the model we present are left for future work.

Consider a model of hybrid inflation with two real light scalar fields ($\phi$ and $\chi$) and a waterfall field $T$ which ends inflation when it becomes tachyonic and condenses. In this paper we will consider a rather general action; a more detailed worked example is given in Appendix \ref{details}. The action is (we follow the notation of~\cite{Dutta:2007cr}):
\begin{eqnarray} S & = &\frac12 \int\sqrt{g}[M_p^2 R - (\partial\phi)^2- (\partial T)^2 - (\partial\chi)^2 -2 V]\; , \nonumber\\ V &=& V_{\rm{inf}}(\phi) + V_{\rm{hid}}(\chi) + V_{\rm{mess}}(\phi,\chi,T)\; . \end{eqnarray}
The only coupling between $\phi$ and $\chi$ is through the tachyon, which acts as a mediator or messenger.
The form of $V_{\rm{mess}}$ is taken to be
\begin{equation}\label{Vmess} V_{\rm{mess}} \propto T^2 f(\phi, \chi) + \mathcal{O}(T^n)\; , \quad n>2 \; . \end{equation}
The function $f$ interpolates from large and positive values (in Hubble units) during inflation to negative values after the system crosses a critical line in field space. Therefore during inflation, $T$ has a large positive mass, its vev is driven to zero and its potential vanishes. Because of its large mass, this field will not fluctuate and it can be integrated out of the theory. In this model, inflation ends suddenly when the mass of the tachyon vanishes, which occurs on a line in field space parameterized by
\begin{equation}\label{SurfaceReheat} f(\phi_e,\chi_e) = 0 \; , \end{equation}
where the index ``$e$" denotes the value of the fields at the end of inflation.

During the inflationary phase, $\phi$ and $\chi$ have no direct coupling. To simplify further, we assume that $V_{hid}(\chi) \ll V_{inf}(\phi)$ and we refer to $\phi$ as the inflaton from now on. The Hubble scale is then approximately given by
\begin{equation} H^2 \approx \frac{V_{\rm{inf}}}{3M_p^2} \end{equation}
and $\chi$ is a ``hidden" field during inflation which fluctuates but without much impact on the total energy density of the Universe. Nevertheless, its quantum fluctuations are still important as they will be felt as ripples on the surface of reheating. Indeed, at different points in space, the (slightly) different values of $\chi_e$ will mean a different critical value $\phi_e$ for the inflaton, resulting in more or less inflation in these different regions. This translates directly into curvature perturbations (see Fig.~\ref{surfacereheat}).
\begin{figure}[ht] \centering \includegraphics[width=3.5in]{surfacereheat} \caption{This figure depicts the trajectory in field space. The blue (dashed) line denotes the surface of reheating defined by $f(\phi_e,\chi_e)=0$ and it is assumed to be thin.
The classical trajectory is in the $\phi$ direction (red/dotted line) but both $\delta\phi$ and $\delta\chi$ will induce curvature perturbations.}\label{surfacereheat} \end{figure}

Since the quantum perturbations of $\chi$ mainly affect the surface of reheating, this system is well amenable to analysis through the $\delta N$ (or separate universe) formalism~\cite{Sasaki:1995aw}. The idea is that the curvature perturbation on large scales is simply given by the perturbation in the number of efolds for each trajectory
\begin{equation}\label{deltaN} \zeta(\vec{x},t) = \delta N(\vec{x},t)\; , \end{equation}
where the curvature perturbation $\zeta$ is given by fluctuations of the scale factor $a(\vec{x},t) = a(t) e^{\zeta(\vec{x},t)}$ and the difference in the number of efolds is taken from an initial flat hypersurface to a uniform energy density final hypersurface. This formula does not take into account possible interactions between the various fields inside the horizon (on small scales) and it is only valid after horizon crossing, where the evolution of the curvature perturbation is classical\footnote{The $\delta N$ formalism will not account correctly for multi-field effects for modes inside the horizon. In our case, because the fields are uncoupled during inflation, we can solve for $\delta\phi$ and $\delta\chi$ at horizon exit independently and follow the subsequent evolution of $\zeta$ with the $\delta N$ formalism.}. The surface where inflation ends, Eq.~(\ref{SurfaceReheat}), is not a uniform energy density hypersurface and a correction term must be included, as discussed in \cite{Vernizzi:2006ve, Sasaki:2008uc}. The correction term is very small in the hybrid scenario, where the potential is very flat, and it will be dropped in what follows. The number of efolds is given by $dN = -Hdt$.
For the case where the classical trajectory is determined by a single field $\phi$, one has
\begin{equation} N = - \int_{\phi_*}^{\phi_e(\chi)} \frac{H}{\dot\phi}d\phi^\prime \; , \end{equation}
where the critical value of $\phi$ depends on the value of the field $\chi$ at the end of inflation (we drop the subscript $e$, and $\chi = \chi_e$ unless otherwise specified\footnote{The field $\chi$ is evolving stochastically and the value of the field at the end of inflation is the sum of all fluctuations created for each mode as they exit the horizon.}) and $*$ refers to horizon crossing for a given mode. Varying $\phi_* \rightarrow \phi_* + \delta\phi$ and $\phi_e(\overline{\chi}+\delta\chi) = \phi_e + \gamma\delta\chi + \gamma_{,\chi}\delta\chi^2/2 + \cdots$ with
\begin{equation} \gamma(\overline\chi) = \frac{\partial\phi_e}{\partial\overline\chi} \end{equation}
where we denote the zero mode of $\chi$ by $\overline\chi$, that is $\chi(\vec{x},t) = \overline\chi(t) + \delta\chi(\vec{x},t)$ (for notational simplicity, the bar is omitted in any derivative subscript), we get at second order (using $\frac{H}{\dot\phi} = - N^\prime$)
\begin{equation}\label{eqN} \delta N = N^\prime\delta\phi\big|_* - N^\prime \gamma \delta\chi\big|_e + \frac12N^{\prime\prime}\delta\phi^2\big|_* - \frac12N^\prime\gamma_{,\chi}\delta\chi^2\big|_e - \frac12N^{\prime\prime}\gamma^2\delta\chi^2\big|_e \; , \end{equation}
where $'$ denotes derivatives with respect to $\phi$. This can be reproduced using the formula of Vernizzi and Wands \cite{Vernizzi:2006ve} for the case $\epsilon^\chi \ll \epsilon^\phi$, although they implicitly assume that all fields obey their equations of motion, which is not true here for the field $\chi$.
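The structure of this second-order expansion is easy to verify on a toy trajectory (an illustrative sketch with made-up functions and numbers, not the model of Appendix \ref{details}, and with signs fixed by the toy $N$ rather than by the conventions above): for an end-of-inflation line $\phi_e(\chi)$ quadratic in $\chi$, the expansion through $\delta\chi^2$ reproduces the exact change in $N$:

```python
# Toy check of the second-order delta-N expansion. All functions and numbers
# are illustrative. Here N(phi_*, chi) = c * (phi_e(chi) - phi_*) with a
# constant c, and phi_e(chi) = phi0 + 0.5 * g2 * chi**2, so that
# gamma = d phi_e / d chi = g2 * chi and gamma_{,chi} = g2.

c, phi0, g2 = 60.0, 1.0, 0.4
phi_star, chibar = 0.2, 0.05

def N(p, x):
    return c * (phi0 + 0.5 * g2 * x * x - p)

def dN_quadratic(dp, dx):
    gamma = g2 * chibar
    N_p = -c              # dN/dphi_*
    N_x = c * gamma       # dN/dchi     (the N' gamma term)
    N_xx = c * g2         # d^2N/dchi^2 (the N' gamma_{,chi} term)
    return N_p * dp + N_x * dx + 0.5 * N_xx * dx * dx

dp, dx = 1e-3, 2e-3
exact = N(phi_star + dp, chibar + dx) - N(phi_star, chibar)
approx = dN_quadratic(dp, dx)
print(exact, approx)
```

For this toy $N$, which is exactly quadratic in $\chi$, the two numbers agree to machine precision; for a general trajectory they differ at cubic order, which is the accuracy to which the second-order expansion is used.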
It is simple to show that $N' = \partial N / \partial \phi = 1/(\sqrt{2\epsilon^\phi} M_p)$ where the slow-roll parameters are
\begin{align} \e^\phi &= \frac12 M_p^2 \left(\frac{V_{\rm{inf},\phi}}{V}\right)^2 \; ,& \e^\chi &= \frac12 M_p^2 \left(\frac{V_{\rm{hid},\chi}}{V}\right)^2 \; . \end{align}
The terms with $N''$ involve derivatives of slow-roll parameters and will therefore be suppressed. To simplify the formula and the analysis, we will consider the case where the slow-roll parameters at horizon crossing and at the end of inflation are equal, $\e_e^\phi = \e_*^\phi$. This is not true in many models and we will discuss at the end how that would affect our results. We thus drop all subscripts referring to the time of evaluation.

The mean of Eq.~(\ref{eqN}) is non-zero, and as it stands it would generate a one-point function. To ensure that the mean is zero we can subtract a constant piece (keeping only the leading terms)
\begin{eqnarray}\label{curveRealSpace} \zeta & = &N^\prime\delta\phi - N^\prime \gamma \delta\chi - \frac12N^\prime\gamma_{,\chi} \delta\chi^2 + \frac12N^\prime\gamma_{,\chi} \Expect{\delta\chi^2}\; , \end{eqnarray}
which is of the form of Eq.~(\ref{localansatz}),
\begin{equation} \zeta = \zeta_1 + \zeta_2 - \Expect{\zeta_2} \; . \end{equation}
Note that this series terminates if
\begin{enumerate} \item the function $\gamma$ is such that $\gamma_{,\chi\chi}$ and higher derivatives are small. \item $N''$ and higher derivative contributions are small. \end{enumerate}
In this type of model, the function $\gamma$ could be anything, and in the case where $\gamma_{,\chi}\delta\chi > \gamma$ the quadratic piece in $\delta\chi$ will dominate over the linear piece (in $\delta\chi$), which ensures that the loop contribution to the bispectrum will dominate
\begin{equation} \Expect{\zeta^3} \propto \gamma_{,\chi}^3 \Expect{(\delta\chi^2)^3}\; , \end{equation}
as we advocated earlier.
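The dominance condition $\gamma_{,\chi}\delta\chi > \gamma$ can be illustrated with a quick Monte Carlo (a sketch; all numbers are illustrative): taking $\gamma = \gamma_{,\chi}\overline\chi$, as in a model with $\gamma \propto \chi$, the variance of the quadratic term $\tfrac12\gamma_{,\chi}\delta\chi^2$ is $\gamma_{,\chi}^2\sigma^4/2$, which exceeds the variance $\gamma_{,\chi}^2\overline\chi^2\sigma^2$ of the linear term once $\sigma > \sqrt{2}\,\overline\chi$:

```python
import random

random.seed(2)

gchi = 1.0            # gamma_{,chi}, illustrative
chibar = 1.0          # zero mode of chi at the end of inflation
sigma = 3.0 * chibar  # spread of delta-chi, chosen > sqrt(2) * chibar
n = 200_000

lin, quad = [], []
for _ in range(n):
    dchi = random.gauss(0.0, sigma)
    lin.append(gchi * chibar * dchi)        # linear term, gamma * dchi
    quad.append(0.5 * gchi * dchi * dchi)   # quadratic term

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

v_lin, v_quad = var(lin), var(quad)
# Analytically Var(lin) = chibar^2 sigma^2 = 9 and Var(quad) = sigma^4/2 = 40.5
# in these units, so the quadratic (loop-generating) piece dominates.
print(v_lin, v_quad)
```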
In order for the power spectrum to be nearly scale invariant, we will still need the $\delta\phi$ piece to be the dominant contribution to the power spectrum. There is no contradiction since the linear perturbation in $\phi$ does not contribute to the bispectrum (or gives a very small slow-roll suppressed contribution). Furthermore, in the case where the higher derivatives of $\gamma$ are suppressed, the higher loop contributions can be neglected, ensuring a consistent truncation.

Another important point is that for the loop to dominate, the zero mode of $\chi$ at the end of inflation ($\overline\chi_e$, which is the mean averaged over the size of the universe at the end of inflation) must be smaller than the 1-$\sigma$ value of the perturbations around the mean. Taking the quantum perturbation to be of order $\delta\chi \sim H$, we must have $\overline\chi_e < \delta\chi$. This is better seen in a specific model such as the one presented in Appendix \ref{details}. There, we use a model where $\phi_e = f(\chi^2)$ such that $\gamma \propto \chi$ and $\gamma_{,\chi} \sim \rm{cst}$, and the series truncates. In such models it is clear that the quadratic term dominates over the linear piece when
\begin{equation} \frac{\gamma_{,\chi}\delta\chi}{\gamma} \sim \frac{\delta\chi}{\chi} > 1\; . \end{equation}
It is then clear that the field $\chi$ has to behave stochastically and is in no way following a classical equation of motion. The fact that $\chi$ has essentially no effect on the inflationary dynamics prior to reheating tells us that the stochastic behavior is unimportant during inflation. The value of $\overline\chi_e$, being stochastic, could take any value and it is therefore a free parameter. Before going into more detail, we need to discuss the choice of IR cutoff in the loop calculation.
\subsection{The IR cutoff}
\label{IR}
There has been much discussion in the literature about the choice of cutoff that should be used in loop calculations. For the calculation of quantum loops in the in-in formalism (prior to horizon exit), the correlation functions of scalars appear to be sensitive to this choice of cutoff, and there is no clear understanding of how this cutoff should be set. But for the c-loops which we consider in this paper, the situation is considerably simpler and there is a natural choice of cutoff~\cite{Lyth:2007jh}. We will define the observed zero modes of the fields $\phi,\chi$ as
\begin{eqnarray} \phi_0={1\over L^3}\int_{-L/2}^{L/2}d^3x \phi\; ,\qquad \qquad \chi_0={1\over L^3} \int_{-L/2}^{L/2}d^3x \, \chi \; , \end{eqnarray}
where $L$ is the largest scale over which we have measured the fields. The perturbations of the fields are then defined as $\delta\phi=\phi-\phi_0, \delta\chi=\chi-\chi_0$. When computing correlators of $\d N$, we are actually interested in the correlation functions of the perturbations, e.g. $\langle \d\phi_{\V{k}_1} \d\phi_{\V{k}_2} \rangle$. From the definition of the perturbations, we see that the effect of subtracting the zero mode is to remove all Fourier modes with momentum $k<L^{-1}$. Hence $\d\phi_{\V{k}}=\phi_{\V{k}}$ for $k>L^{-1}$, and zero otherwise. Similarly, we find
\begin{eqnarray} \langle \d\phi_{\V{k}_1} \d\phi_{\V{k}_2} \rangle=\left\{ \begin{array}{c}(2\pi)^3 \delta^3(\vec{k}_1 + \vec{k}_2) 2\pi^2 {\mathcal{P}_*\over k_1^3}\qquad k>L^{-1} \\ 0\qquad k <L^{-1}\end{array}\right. \; . \end{eqnarray}
The effect is to include a cutoff $L^{-1}$ on any momentum integral. Due to the cutoff, the correlation functions will have an explicit dependence on $L$. This can be traced back directly to the fact that we are calculating correlation functions of perturbations like $\d\phi=\phi-\phi_0$, which have a direct dependence on $L$ through $\phi_0$.
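This zero-mode subtraction has a simple discrete analogue (a toy sketch on a periodic one-dimensional grid, not part of the paper): subtracting the box average removes exactly the $k=0$ Fourier mode and leaves every $k\neq0$ mode untouched:

```python
import cmath
import random

random.seed(3)

n = 16
phi = [random.gauss(0.0, 1.0) for _ in range(n)]  # field samples on a ring
phi0 = sum(phi) / n                               # "zero mode": box average
dphi = [p - phi0 for p in phi]                    # perturbation delta-phi

def dft(xs):
    """Plain discrete Fourier transform."""
    m = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * j / m)
                for j, x in enumerate(xs)) for k in range(m)]

F, dF = dft(phi), dft(dphi)
print(abs(dF[0]))                                   # k = 0 mode is removed
print(max(abs(F[k] - dF[k]) for k in range(1, n)))  # k != 0 modes unchanged
```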
In this formalism, it is clear that all the dependence on $L$ comes from the variation in the zero mode as a function of $L$, as was discussed in more detail in~\cite{Lyth:2007jh} (see also \cite{Enqvist:2008kt}). To summarize, there is a natural cutoff $L$ determined by the biggest scale on which we are able to measure the background zero mode of curvature. This is at most the size of the universe today, $L\sim 1/H_0$. This coincides with the lowest-$k$ perturbations that can be observed today. Since there are about 5 efolds between when the lowest observable wavenumber leaves the horizon and when CMB scales leave the horizon, we have $k_{CMB} L\sim e^5$. LSS wavenumbers are about two orders of magnitude greater than CMB ones, giving $k_{LSS} L \sim e^{10}$.

\section{The Power Spectrum}
We will first consider the two-point function $\langle \zeta_{k_1} \zeta_{k_2} \rangle$. For the scalar fields, we have
\begin{eqnarray} \Expect{\delta\chi^2_{\V{k}}} &= & \Expect{\delta\phi^2_{\V{k}}} = (2\pi)^3 \delta^3(\sum_i \vec{k}_i) P(k)\; ,\nonumber\\ P(k) & = & \frac{2\pi^2\mathcal{P}}{{k^3}}\; , \end{eqnarray}
and we consider a model where these expectation values are approximately constant and where $\mathcal{P}$ is scale invariant (independent of $k$). We will also assume that any intrinsic odd-point functions are negligible, $\Expect{(\d\phi_k)^n}\approx 0$ and $\Expect{(\d\chi_k)^n}\approx 0$ for $n$ odd.
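Everything that follows reduces to Wick contractions of these Gaussian fields, and the combinatoric factors can be spot-checked numerically (an illustrative sketch, not from the paper): for two unit-variance Gaussians with correlation $\rho$, Wick's theorem gives $\Expect{u^2v^2}-\Expect{u^2}\Expect{v^2}=2\rho^2$, the same factor of 2 that appears in the one-loop contraction:

```python
import math
import random

random.seed(4)

rho = 0.6
n = 400_000

su2 = sv2 = su2v2 = 0.0
for _ in range(n):
    g1 = random.gauss(0.0, 1.0)
    g2 = random.gauss(0.0, 1.0)
    u = g1
    v = rho * g1 + math.sqrt(1.0 - rho * rho) * g2  # corr(u, v) = rho
    su2 += u * u
    sv2 += v * v
    su2v2 += (u * u) * (v * v)

cov = su2v2 / n - (su2 / n) * (sv2 / n)
# Wick's theorem for Gaussians: <u^2 v^2> - <u^2><v^2> = 2 <u v>^2 = 2 rho^2
print(cov, 2 * rho * rho)
```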
In Fourier space the curvature perturbation is given by (from Eq.~(\ref{curveRealSpace}))
\begin{equation} \zeta_{\V{k}} = N^\prime\delta\phi_{\V{k}} - N^\prime \gamma \delta\chi_{\V{k}} - \frac12N^\prime \gamma_{,\chi} \int \frac{d^3\V{k}'}{(2\pi)^3}\delta\chi_{\V{k}-\V{k}'}\delta\chi_{\V{k}'} + \frac12N^\prime \gamma_{,\chi} \Expect{\delta\chi_{\V{k}}^2}\; . \end{equation}
The ``tree-level" contribution to the power spectrum arises from linear terms in the expansion of $\delta N$, and it is easily seen to give
\begin{eqnarray} \Expect{\zeta^2_{\V{k}}}_{tree} &= &N'^2 (\Expect{\delta\phi^2_{\V{k}}} + \gamma^2 \Expect{\delta \chi^2_{\V{k}}} )\; ,\nonumber\\ &= &N'^2 (1 + \gamma^2)(2\pi)^3 \delta^3(\sum_i \vec{k}_i) P(k)\; . \end{eqnarray}
However, there is also a ``one-loop" contribution which arises from the non-linear terms in the $\delta N$ expansion and leads to
\begin{eqnarray} \Expect{\zeta_{\V{k}_1}\zeta_{\V{k}_2}}_{loop} & = & N'^2\frac{\gamma_{,\chi}^2}{4} \int \frac{d^3\V{k}'}{(2\pi)^3}\frac{d^3\V{k}''}{(2\pi)^3} \Expect{\delta \chi_{\V{k}_1-\V{k}'}\delta \chi_{\V{k}'} \delta \chi_{\V{k}_2-\V{k}''}\delta \chi_{\V{k}''}}\; , \nonumber\\ & = & N'^2\frac{\gamma_{,\chi}^2}{4} (2\pi)^3 \delta^3(\sum_i \vec{k}_i) \int \frac{d^3k'}{(2\pi)^3} \frac{(2) (2\pi^2 \mathcal{P})^2}{|\vec{k} - \vec{k'}|^3k'^3}\; , \end{eqnarray}
where the factor of $(2)$ is from the combinatorics. For a scale invariant power spectrum $\mathcal{P}$, the integral is approximately
\begin{equation} \int_{1/L} ^{k} \frac{d^3\V{k}'}{(2\pi)^3} \frac{1}{|\vec{k}-\vec{k'}|^3k'^3}\; , \end{equation}
where we use $k$ as the upper limit because for $k' > k$ the denominator goes as $k'^n$ with $n>3$, and the integrand drops rapidly. The integrand has two simple poles which give logarithmic divergences. We regulate these by putting an IR cutoff on the integral.
Hence for this example, we get \begin{equation} \int_{1/L} ^{k} \frac{d^3\V{k}'}{(2\pi)^3} \frac{1}{|\vec{k}-\vec{k'}|^3k'^3} \sim 2 \frac{\ln(kL)} {2\pi^2}\; . \end{equation} This contribution will depend on the IR limit of the momentum integration. This limit is given by the size of the observable universe today, $L\sim H_0^{-1}$ as we discussed in Sec.~(\ref{IR}). Modes of longer wavelength are already summed in the background value of the field. We thus find \begin{eqnarray} \Expect{\zeta^2_{\V{k}}}_{loop} & = & N'^2\gamma_{,\chi}^2 (2\pi)^3 \delta^3(\sum_i \vec{k}_i)\frac{2\pi^2\mathcal{P}^2 \ln(kL)}{k^3}\; . \end{eqnarray} Combining these terms yields \begin{eqnarray} \label{powerspectrum} \Expect{\zeta^2_k} & = & (2\pi)^3 \delta^3(\sum_i \vec{k}_i)\frac{2\pi^2\mathcal{P}^\zeta}{{k^3}}\; , \\ &= & N'^2 (2\pi)^3 \delta^3(\sum_i \vec{k}_i)P \left[ 1 + \gamma^2 +\gamma_{,\chi}^2 \mathcal{P} \ln(kL)\right]\; . \end{eqnarray} We have defined the power spectrum for curvature with the superscript $\zeta$. The spectral index $n_s - 1 = \frac{d\ln \mathcal{P}^\zeta}{d\ln k}$ is \begin{eqnarray} n_s -1 &=& {\gamma_{,\chi}^2 \mathcal{P} \over {1+\gamma^2 + \gamma_{,\chi}^2 \mathcal{P} \ln kL }}\; . \end{eqnarray} Note that the log contribution is positive (blue) and if this is the only contribution, we cannot match to the currently observed value of $n_s \sim 0.96$ \cite{Komatsu:2008hk}. For now, we simply impose that the log contribution contribute no more than a percent correction to $n_s$ \begin{equation} \gamma_{,\chi}^2 \mathcal{P} \mathrel {\vcenter {\baselineskip 0pt \kern 0pt \hbox{$<$} \kern 0pt \hbox{$\sim$} }} 10^{-2}\; , \end{equation} which in turn implies that the non-linear contribution to the 2-point function must be subleading if $\log (kL) \sim 1$. \section{Higher Point Functions} \subsection{Bispectrum}\label{bispectrum} We now compute the 3-point function $\langle \zeta_{\V{k}_1} \zeta_{\V{k}_2} \zeta_{\V{k}_3} \rangle$. 
Again, we find that this correlation function can easily be computed by expanding $\delta N $ in terms of $\delta \phi$ and $\delta \chi$. Since $\delta \phi$ and $\delta \chi$ are Gaussian fields, the only non-trivial contributions will come from non-linearities in the $\delta N$ expansion. As in the case of the 2-point function, there is a natural separation into ``tree-level" and ``loop" contributions~\cite{Zaballa:2006pv}. The contribution which is of lowest order in $\gamma_{,\chi}$ is \begin{eqnarray}\label{bitree} \langle \zeta_{\V{k}_1} \zeta_{\V{k}_2} \zeta_{\V{k}_3} \rangle_{tree} &=& -\gamma^3 N'^3 {1\over 2}{\gamma_{,\chi} \over \gamma} (3) \int { d^3 \V{k}' \over (2\pi)^3} \langle \delta \chi_{\V{k}_1} \delta \chi_{\V{k}_2} \delta \chi_{\V{k}_3-\V{k}'} \delta \chi_{k'}\rangle\; , \nonumber\\ &=&-\gamma^3 N'^3 (2\pi)^3 \delta^3 (\sum_i \vec{k}_i) {\gamma_{,\chi} \over \gamma} {(2\pi^2\mathcal{P})^2} \frac{\sum_i k_i^3}{\prod_ik_i^3}\; . \end{eqnarray} The next term in the $\gamma_{,\chi}$ expansion is \begin{eqnarray} \langle \zeta_{\V{k}_1} \zeta_{\V{k}_2} \zeta_{\V{k}_3} \rangle_{loop} &=& -\gamma^3 N'^3 {1\over 8}{\gamma_{,\chi}^3 \over \gamma^3} \int \frac{d^3 \V{k}'d^3 \V{k}''d^3 \V{k}'''}{(2\pi)^9} \langle (\delta \chi_{\V{k}_1-\V{k}'} \delta \chi_{\V{k}'}) (\delta \chi_{\V{k}_2-\V{k}''} \delta \chi_{\V{k}''}) (\delta \chi_{\V{k}_3-\V{k}'''} \delta \chi_{\V{k}'''})\rangle\; , \nonumber\\ &=&-\gamma^3N'^3 (2\pi)^3 \delta^3 (\sum_i \vec{k}_i) {1\over 8}{\gamma_{,\chi}^3 \over \gamma^3} \int{d^3 \V{k}' \over (2\pi)^3 } \left({(2\pi^2 \mathcal{P})^3 \over k'^3 |\vec{k_1} +\vec{k'}|^3 |\vec{k_2} -\vec{k'}|^3 } + 7 \; \rm{perms}\right)\; ,\nonumber \\ &=&-\gamma^3N'^3 (2\pi)^3 \delta^3 (\sum_i \vec{k}_i) {1\over 8}{\gamma_{,\chi}^3 \over \gamma^3} (2\pi^2 \mathcal{P})^3 B(\vec{k_1},\vec{k_2},\vec{k_3})\; . 
\end{eqnarray} Now the loop integral involves two different momenta \begin{equation}\label{shape} B(\vec{k_1},\vec{k_2},\vec{k_3}) = \int \frac{d^3\V{k}'}{(2\pi)^3}\left( {1 \over k'^3 |\vec{k_1} +\vec{k'}|^3 |\vec{k_2} -\vec{k'}|^3 } + 7 \; \rm{perms} \right)\; . \end{equation} Diagrammatically this is equivalent to a triangular loop of scalars (see Fig.~(\ref{tria})). \begin{figure}[ht] \centering \includegraphics[width=5in]{triangle} \caption{The 1-loop diagram. In our case, each vertex is accompanied by a factor of $N'^3\gamma_{,\chi}^3$ while each internal propagator is given by $\frac{2\pi^2\mathcal{P}}{p^3}$. More detailed Feynman rules for use with the $\delta N$ expansion (which we are not carefully describing here) can be found in \cite{Byrnes:2007tm}.}\label{tria} \end{figure} We note that near the poles at $\vec{k}' = 0,\vec{k}_2,-\vec{k}_1$, we get logarithmic divergences which are cut off by the IR scale $L$. This logarithmic dependence breaks scale invariance. So our shape $B$ is a function of three variables, which we choose to be the norms of the three vectors, $k_1, k_2, k_3$. An estimate of the shape can be obtained by simply evaluating the integral around each pole, cutting off the momentum integration in the infrared at scale $1/L$. So for example, the integral \begin{equation} \int \frac{d^3 \V{k}'}{(2\pi)^3} {1 \over k'^3 |\vec{k_1} +\vec{k'}|^3 |\vec{k_2} -\vec{k'}|^3} \end{equation} has a pole around $\vec{k'} = 0$, and the integrand falls off rapidly when $k'$ becomes of the same order as $k_1$ or $k_2$. Hence we can approximate the integral around that pole as \begin{equation} \int \frac{d^3 \V{k}'}{(2\pi)^3} {1 \over k'^3 |\vec{k_1} +\vec{k'}|^3 |\vec{k_2} -\vec{k'}|^3} = \frac{\ln(\rm{Min}(k_1,k_2) L )}{2\pi^2 k_1^3k_2^3} + \cdots \; . \end{equation} The same thing can be done for the other poles and for the various permutations. There are also points in parameter space where the integrand has a pole of order 4.
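The single-pole estimate can be checked numerically. The sketch below (our own illustration) integrates the exact integrand over the region around the $\vec{k}'=0$ pole for an equilateral configuration $k_1=k_2=k_3=1$ and compares with $\ln(\mathrm{Min}(k_1,k_2)L)/(2\pi^2 k_1^3 k_2^3)$; the values $kL=10^6$, the outer radius $\mathrm{Min}(k_1,k_2)/2$, and the grid sizes are arbitrary choices.

```python
import math

# equilateral configuration k1 + k2 + k3 = 0 with |k_i| = 1
k1 = (1.0, 0.0, 0.0)
k2 = (-0.5, math.sqrt(3.0) / 2.0, 0.0)

kL = 1.0e6                     # illustrative value: IR cutoff eps = 1/L, k = 1
eps, rmax = 1.0 / kL, 0.5      # keep only the region around the k' = 0 pole
nr, nth, nph = 120, 24, 24     # arbitrary grid sizes

# precompute angular sample directions and weights (midpoint rule on the sphere)
dirs = []
for j in range(nth):
    th = math.pi * (j + 0.5) / nth
    for l in range(nph):
        ph = 2.0 * math.pi * (l + 0.5) / nph
        w = math.sin(th) * (math.pi / nth) * (2.0 * math.pi / nph)
        dirs.append((math.sin(th) * math.cos(ph),
                     math.sin(th) * math.sin(ph), math.cos(th), w))

# d^3k'/k'^3 = dΩ d(ln r): a log-spaced radial grid resolves the 1/r pole
du = math.log(rmax / eps) / nr
total = 0.0
for i in range(nr):
    r = eps * math.exp((i + 0.5) * du)
    for nx, ny, nz, w in dirs:
        a2 = (k1[0] + r * nx)**2 + (k1[1] + r * ny)**2 + (r * nz)**2
        b2 = (k2[0] - r * nx)**2 + (k2[1] - r * ny)**2 + (r * nz)**2
        total += w * du / (a2**1.5 * b2**1.5)
total /= (2.0 * math.pi)**3

approx = math.log(kL) / (2.0 * math.pi**2)   # claimed pole term, k1 = k2 = 1
print(total, approx, total / approx)         # ratio comes out close to 1
```

The agreement is at the level of the neglected $O(1)$ (non-logarithmic) terms, as expected for an estimate of this kind.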
These poles occur in the squeezed limit where $\vec{k_1} = -\vec{k_2}$ and hence $\vec{k}_3 \rightarrow \vec{0}$. This shows that the bispectrum diverges in the squeezed limit, as is usual for the local shape. In principle, we can only measure $k$ to a resolution $\sim 1/L$ and the bispectrum, while large, is finite and of order $L^3/(3 k_i^3)$ in this limit. Hence the stronger poles that we have neglected are only important in the squeezed limit and they give contributions of the same order as the log terms in that limit. The full approximate shape is \begin{equation}\label{approximateshape} B(k_1,k_2,k_3) \approx \frac{8}{2\pi^2}\left(\frac{\ln (\rm{Min}(k_1,k_2) L)+1/3}{ k_1^3k_2^3} + \rm{2\;\;perm.}\right)\; . \end{equation} The $1/3$ term is only relevant for wavenumbers smaller than $k \sim e^{1/3} \frac{1}{L}$. For larger $k$ the shape is very well approximated by \begin{equation} B(k_1,k_2,k_3) \approx \frac{8}{2\pi^2}\ln (\rm{Min}(k_i) L)\frac{\sum_i k_i^3}{\prod_ik_i^3}\; . \end{equation} We show numerically in Appendix \ref{sectionshape} that this is a good approximation. In Figure (\ref{shape}), we plot the shape given by Eq.~(\ref{approximateshape}) in terms of the usual variables $x_2 = k_2/k_1$ and $x_3= k_3/k_1$. When the bispectrum is scale invariant, $k_1$ can be fixed to 1 (arbitrarily), but here we plot the shape for different values of $k_1$. As the figure clearly shows, the graph is very close to local and the magnitude grows as $k_1$ increases. \begin{figure}[ht] \centering \includegraphics[width=6in]{scaledependence2} \caption{Plot of the approximate shape $B(k_1,k_1x_2,k_1x_3) x_2^2 x_3^2 k_1^6$ (with $B(k_1,k_2,k_3)$ given by Eq.~(\ref{approximateshape})) in terms of $x_2 = \frac{k_2}{k_1}$ and $x_3= \frac{k_3}{k_1}$ for $k_1 = 0.5$ (left) and $k_1 = 1.5$ (right).
The shape was restricted to be in the quadrant defined by $k_1 (1-x_2) < k_1 x_3 < k_1 x_2$ due to momentum conservation and to avoid overcounting identical triangle configurations (see \cite{Babich:2004gb}). The shape is clearly very close to local with the strongest signal in the squeezed limit when $k_3 = k_1 x_3 \rightarrow 0$. The overall magnitude of NG increases with the wavenumber $k_1$ or as we consider smaller wavelengths.}\label{shape} \end{figure} At the equilateral point $k_1= k_2= k_3 \equiv k$, the loop contribution to the bispectrum simplifies to \begin{eqnarray}\label{equilateral} \Expect{\zeta^3}&=&-\gamma^3 N'^3 (2\pi)^3 \delta^3 (\sum_i \vec{k}_i) {\gamma_{,\chi}^3 \over \gamma^3} \ln(kL) (2\pi^2)^2 \mathcal{P}^3 \frac{3}{k^6}\; . \end{eqnarray} Comparing the standard parameterization for local non-Gaussianities (Eqns.~(\ref{localansatz}) and (\ref{fnldef2})) at the equilateral point to Eq.~(\ref{bitree}) and Eq.~(\ref{equilateral}), and using the approximation $\mathcal{P}^\zeta \approx N'^2 \mathcal{P}$, we have \begin{equation} f_{NL} \approx - {5\over 6} {\gamma^2 \gamma_{,\chi} \over N'} \left(1 +\frac{\gamma_{,\chi}^2}{\gamma^2} \ln(kL) \mathcal{P}\right)\; , \end{equation} where the first term is the tree-level contribution, and the second term is the one-loop contribution. In the case of the two-point function, experimental bounds on the spectral index required the loop contribution to be subleading. But there is no such requirement for the bispectrum. The loop contribution will dominate if \begin{equation} \frac{\gamma_{,\chi}^2}{\gamma^2} \mathcal{P} \ln(kL) >1\; .
\end{equation} In this limit we have \begin{eqnarray} |f_{NL}| \approx {5\over 6} {(\gamma_{,\chi}^2 \mathcal{P})^{3\over 2} \over N' \mathcal{P}^{1\over 2}} \ln(kL) \mathrel {\vcenter {\baselineskip 0pt \kern 0pt \hbox{$<$} \kern 0pt \hbox{$\sim$} }} 100 \ln (kL)\; , \end{eqnarray} where we have utilized the bound $\gamma_{,\chi}^2 \mathcal{P} < 10^{-2}$ and the normalization $\mathcal{P}_\zeta^{1/2} \sim N'\mathcal{P}^{1/2} \sim 10^{-5}$ from COBE data. We thus find, in this scenario, that one can easily generate local non-Gaussianity which is not ruled out by WMAP5 and can potentially be probed at Planck. Note that the magnitude of the non-Gaussianity increases logarithmically with momentum, suggesting that non-Gaussianity can have an important impact on the formation of structure at smaller scales. If we define the running of $f_{NL}$ at the equilateral point \begin{equation} n_{NG} =\left. \frac{d\ln f_{NL}}{d\ln k}\right|_{k_i = k} \end{equation} one gets in the loop dominated limit \begin{equation} n_{NG} \simeq {1\over \ln(kL)}\; . \end{equation} In the limit where non-linearities dominate, the running of $f_{NL}$ is thus independent of $N'$, $\gamma$ and $\gamma_{,\chi}$. \subsection{Trispectrum} As in the case of the 3-point function, the only non-vanishing contributions will arise from the non-linear dependence of $\delta N$ on $\delta \chi$, so we can ignore $\delta \phi$ fluctuations. To simplify notation, we define \begin{equation} \zeta_{\V{k}} = A \delta\chi_{\V{k}} +B \int \frac{d^3\V{k}'}{(2\pi)^3} \delta\chi_{\V{k} - \V{k}'}\delta \chi_{\V{k}'} -B\Expect{\delta\chi_{\V{k}}^2}\; , \end{equation} where $A = - N^\prime \gamma$ and $B= - \frac12N^\prime\gamma_{,\chi}$. The last term ensures that we only keep the connected part of every diagrams. 
The tree-level contribution (the term of lowest order in $B$) is \begin{eqnarray} \Expect{\zeta_{\V{k}_1}\zeta_{\V{k}_2}\zeta_{\V{k}_3}\zeta_{\V{k}_4}} &=& A^2B^2 \int {d^3\V{k}'\over (2\pi)^3 } {d^3\V{k}'' \over (2\pi)^3} \Expect{\delta \chi_{\V{k}_1}\delta \chi_{\V{k}_2}\delta \chi_{\V{k}_3-\V{k}'}\delta \chi_{\V{k}'}\delta \chi_{\V{k}_4-\V{k}''}\delta \chi_{\V{k}''}} + 5\;\rm{perm}\; , \nonumber\\ &=& (2\pi)^3 4 A^2B^2 \delta^3\left(\sum \vec{k_i}\right) \left[P(k_{13})P(k_1)P(k_2) + 11\;\rm{perm} \right]\; , \nonumber\\ &=& 4A^2B^2 (2\pi)^3 \delta^3\left(\sum \vec{k_i}\right) \frac{T(k_i)}{N'^6}\; , \end{eqnarray} where we have used that $\mathcal{P}^\zeta \sim N'^2\mathcal{P}$ and the shape is given by \begin{equation} T(k_i) = \left(\frac{(2\pi^2\mathcal{P}^\zeta)^3}{(k_1k_{13} k_2)^3} + 11\;\rm{perm}\right)\; \end{equation} with the notation $k_{ij} = |\vec{k_i} + \vec{k_j}|$. The magnitude of the trispectrum is usually given by two numbers ($\tau_{NL}$ and $g_{NL}$) corresponding to two distinct shapes: \begin{eqnarray} \Expect{\zeta^4} = (2\pi)^3\delta^3\left(\sum \vec{k_i}\right) \left[\tau_{NL} T(k_i) +{54\over 25} g_{NL} (P^\zeta(k_2) P^\zeta(k_3) P^\zeta(k_4) + 3\;\rm{perm})\right]\; . \end{eqnarray} The lowest order contribution thus corresponds to $g_{NL}=0$ and $\tau_{NL}=4A^2 B^2 / N'^6$. The 1-loop contribution comes from the following term \begin{eqnarray} \Expect{\zeta^4}_{1-loop} & = & B^4 \int {d^3\V{k}' \cdots d^3\V{k}^{iv} \over (2\pi)^{12}} \Expect{\delta \chi_{\V{k}_1-\V{k}'}\delta \chi_{\V{k}'}\cdots \delta \chi_{\V{k}_4-\V{k}^{iv}}\delta \chi_{\V{k}^{iv}}}\; ,\\ & = & (2\pi)^3 (16) B^4 \delta^3\left(\sum \vec{k_i}\right)\left[ \int {d^3k' \over (2\pi)^3} \frac{(2\pi^2\mathcal{P})^4}{k'^3 |\V{k}_1-\V{k}'|^3|\V{k}_1+ \V{k}_2-\V{k}'|^3|\V{k}_3+\V{k}'|^3} + 5\;\rm{perm}\right]\; .
\nonumber \end{eqnarray} The integral over momentum is difficult in general, so we will only estimate its value at the equilateral point $|k_i| = k$ \begin{equation} \Expect{\zeta^4}_{1-loop} = 16 (2\pi)^3 B^4 \delta^3\left(\sum \vec{k_i}\right) (2\pi^2\mathcal{P}) {\ln(kL)\over 2\pi^2 } \frac{T(k_i)}{N'^6} \end{equation} and thus \begin{eqnarray} \tau_{NL} &=& \frac{4B^2}{N'^6} \left( A^2 + 4 B^2\mathcal{P} \ln (kL)\right)\; , \nonumber\\ &=& \frac{\gamma^2 \gamma_{,\chi}^2}{N'^2} \left(1+ \frac{\gamma_{,\chi}^2}{\gamma^2} \mathcal{P} \ln(kL)\right)\; , \nonumber\\ g_{NL} &=& 0\; . \end{eqnarray} We see that the trispectrum is dominated by the non-linear contributions in largely the same regime as the bispectrum. Given the bound from $n_s-1$, the maximum value for $\tau_{NL}$ in this loop dominated regime is \begin{eqnarray} \tau_{NL} & \sim &\frac{\gamma_{,\chi}^4 \mathcal{P} ^2}{N'^2 \mathcal{P}} \ln(kL) < 10^6\ln(kL)\; . \end{eqnarray} Interestingly, the bound from WMAP5 on this parameter is $|\tau_{NL}|<10^8$, while Planck is expected to improve this bound down to $|\tau_{NL}| < 560$. \section{Conclusions} We have studied a simple class of models in which non-Gaussianity is dominantly produced by higher-order non-linearities in the transfer of fluctuations from the fundamental scalars to the curvature. These higher-order non-linear contributions are often referred to in the literature as ``c-loops", and can dominate the lowest order ``tree-level" contribution in the limit where ${\gamma_{,\chi}^2 \over \gamma^2} \mathcal{P} \ln(kL) >1$, where $\gamma$ and $\gamma_{,\chi}$ parameterize the non-linear transfer of fluctuations. In particular, $f_{NL} \sim 100$ can be achieved in these models. We have also found in these models that the magnitude of non-Gaussianity is scale dependent, with $n_{NG} \sim 0.2$ at CMB scales and $n_{NG} \sim 0.1$ at LSS scales.
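The benchmark numbers quoted in this section follow from simple arithmetic; as a sketch (the saturated values $\gamma_{,\chi}^2\mathcal{P} = 10^{-2}$ and $N'\mathcal{P}^{1/2} = 10^{-5}$, and the efold counts $\ln(kL)\approx 5$ at CMB scales and $\approx 10$ at LSS scales, are the ones used in the text):

```python
P_chi = 1.0e-2   # gamma_{,chi}^2 * P, saturating the spectral-index bound
norm  = 1.0e-5   # N' * P^{1/2}, the COBE normalization quoted in the text

# loop-dominated bispectrum: |f_NL| ~ (5/6)(gamma_{,chi}^2 P)^{3/2}/(N' P^{1/2}) ln(kL)
fNL_per_log = (5.0 / 6.0) * P_chi**1.5 / norm
print(fNL_per_log)            # ≈ 83, i.e. |f_NL| below 100 ln(kL)

# trispectrum: tau_NL ~ (gamma_{,chi}^2 P)^2 / (N'^2 P) * ln(kL)
tauNL_per_log = P_chi**2 / norm**2
print(tauNL_per_log)          # ≈ 1e6, i.e. tau_NL below 10^6 ln(kL)

# running n_NG ~ 1/ln(kL), with ln(kL) ≈ 5 (CMB) and ≈ 10 (LSS)
print(1.0 / 5.0, 1.0 / 10.0)  # ≈ 0.2 and 0.1
```

These are exactly the $f_{NL}\lesssim 100\ln(kL)$, $\tau_{NL}<10^6\ln(kL)$, and $n_{NG}$ values stated above.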
Interestingly, the non-Gaussianity of the bispectrum is stronger at smaller scales, where it can potentially be observed by large scale structure experiments. The shape of our NG signal is very nearly local. Moreover, this class of models yields a non-trivial trispectrum (parameterized by $\tau_{NL}$) that also runs. A number of open issues remain. In our model, we have assumed that the slow-roll parameter $\epsilon$ is constant throughout inflation. This was necessary in order to have an observable effect from the end of inflation, but it requires tuning and it leads to a very flat power spectrum. It would be interesting to either relax this assumption in our scenarios or to look at a completely different set-up where the NG is not generated at the end of inflation. We expect that we can relax this assumption since we could have a case where $f_{NL}$ is very small on CMB scales but grows to be detectable on LSS scales. We note though that D-term inflation with a Coleman-Weinberg potential (as illustrated in Appendix A) has a natural regime with the required flat potential, $\epsilon_e \sim \epsilon_f$. From an effective field theory point of view (and from string theory models such as \cite{Dutta:2007cr, Haack:2008yb}), the real tuning is in keeping all other allowed terms (such as a mass term for $\phi$) subdominant to the Coleman-Weinberg potential. We have also assumed that the fundamental scalars ($\phi$ and $\chi$) are Gaussian, and that all non-Gaussianity is induced by the non-linear transfer of $\delta\chi$ fluctuations to the curvature. Non-trivial NG can also arise from non-standard kinetic terms, or a steep potential for $\chi$ (which unlike the inflaton does not have to satisfy slow-roll conditions). Loop corrections then have a richer structure although the basic idea remains the same.
Of particular interest are models like DBI inflation where the spectral index is nearly one and entropy modes being converted to curvature at the end of inflation can also be observable \cite{Leblond:2006cc}. This scenario has been analyzed recently in \cite{RenauxPetel:2009sj} based on methods developed in \cite{Langlois:2008wt, Langlois:2008qf} (see also \cite{Gao:2009gd,Gao:2009fx}) and a mixture of equilateral and local NG has been found. It would be interesting to consider the regime where the loop dominates in this kind of model. \acknowledgments We are particularly thankful to Bhaskar Dutta for early collaboration on this project. We are grateful to Niayesh Afshordi, Sarah Shandera, Martin Sloth, Xerxes Tata and Andrew Tolley for useful discussions. L.L. would like to thank the organizers of the workshop on Effective Field Theory of Inflation at the Perimeter Institute and of the Phenomenology workshop at Cooks Branch Conservancy where part of this work was presented. L.L. would also like to thank the KITP and the Aspen Institute for their hospitality. L.L. is supported in part by NSF Grant No.~PHY--0505757. A.R. is supported in part by NSF Grant No.~PHY--0653656. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the province of Ontario through the Ministry of Research \& Innovation.
\section{Cups, caps and double braiding functors}\label{sec:bimod} Throughout this section, we fix $\underline{N} = (1, 1, \dots, 1) \in \mathbb{N}^r$ and write $T^{\lambda,r} := T^{\lambda,\underline{N}}$, and $\otimes_T := \otimes_{(\oplus_{r\geq 0} T^{\lambda,r})}$. Also, when we talk about (bi)modules, we will generally mean $\mathbb{Z}^2$-graded dg-(bi)modules, assuming it is clear from the context. \subsection{Cup and cap functors}\label{sec:Tlaction} Following \cite[\S 7]{webster} (see also \cite[\S4.3]{webstersl2}), we define the \emph{cup bimodule $B_i$} for $1 \leq i \leq r+1$ as the $(T^{\lambda,r+2},0)$-$(T^{\lambda,r},0)$-bimodule generated by the diagrams \begin{equation}\label{eq:cupbim} \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} --(0,1); \draw (.5,0) -- (.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$b_0$} (1.6,-.35); \draw[stdhl] (2,0) node[below]{\small $1$} --(2,1); % \draw (2.5,0) -- (2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw (3.5,0) -- (3.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.4,-.35) -- node {$b_1$} (3.6,-.35); \draw[stdhl] (4,0) node[below]{\small $1$} --(4,1); % \node[red] at (5,.5) {\dots}; % % \draw[stdhl] (6,0) node[below]{\small $1$} --(6,1); \draw (6.5,0) -- (6.5,1); \node at(7,.5) {\tiny$\dots$}; \draw (7.5,0) -- (7.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (6.4,-.35) -- node {$b_{i-1}$} (7.6,-.35); % \draw (8.5,.5) -- (8.5,1); \draw [stdhl] (8,1) .. controls (8,.25) and (9,.25) ..
(9,1); % \draw (9.5,0) -- (9.5,1); \node at(10,.5) {\tiny$\dots$}; \draw (10.5,0) -- (10.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (9.4,-.35) -- node {$b_{i-1}'$} (10.6,-.35); \draw[stdhl] (11,0) node[below]{\small $1$} --(11,1); % % \node[red] at (12,.5) {\dots}; % \draw[stdhl] (13,0) node[below]{\small $1$} --(13,1); \draw (13.5,0) -- (13.5,1); \node at(14,.5) {\tiny$\dots$}; \draw (14.5,0) -- (14.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (13.4,-.35) -- node {$b_{r}$} (14.6,-.35); } \end{equation} for all $(b_0, \dots ,b_{i-2}, b_{i-1}, b_{i-1}', b_{i}, \dots, b_r) \in \mathbb{N}^{r+2}$. Here, generated means that elements of $B_i$ are given by taking the diagram above and gluing any diagram of $T^{\lambda,r+2}$ on the top, and any diagram of $T^{\lambda,r}$ on the bottom. The diagrams in $B_i$ are considered up to graded braid-like planar isotopy, with the cup being in homological degree $0$, and subject to the same local relations as the dg-enhanced KLRW algebra \eqref{eq:dotredstrand}-\eqref{eq:redR3} and \eqref{eq:relNail}, together with the following extra local relations: \begin{align} \tikzdiag[xscale=-1]{ \draw (3.5,1.25) .. controls (3.5,.875) and (4.5,.875) .. (4.5,.5) ; \draw [stdhl] (4,1.25) -- (4,1) .. controls (4,.25) and (5,.25) .. (5,1) -- (5,1.25); } \ &= \ 0, & \tikzdiag{ \draw (3.5,1.25) .. controls (3.5,.875) and (4.5,.875) .. (4.5,.5) ; \draw [stdhl] (4,1.25) -- (4,1) .. controls (4,.25) and (5,.25) .. (5,1) -- (5,1.25); } \ &= \ 0, \label{eq:killcup} \\ \tikzdiag{ \draw (4.5,.5) -- (4.5,1.25); \draw (3.5,0) .. controls (3.5,.75) and (5.5,1) .. (5.5,1.25); \draw [stdhl] (4,1.25) -- (4,1) .. controls (4,.25) and (5,.25) .. (5,1) -- (5,1.25); } \ &= \ \tikzdiag{ \draw (4.5,.75) -- (4.5,1.25); \draw (3.5,0) .. controls (3.5,.25) and (5.5,.5) .. (5.5,1.25); \draw [stdhl] (4,1.25) .. controls (4,.5) and (5,.5) .. (5,1.25); } & \tikzdiag[xscale=-1]{ \draw (4.5,.5) -- (4.5,1.25); \draw (3.5,0) .. 
controls (3.5,.75) and (5.5,1) .. (5.5,1.25); \draw [stdhl] (4,1.25) -- (4,1) .. controls (4,.25) and (5,.25) .. (5,1) -- (5,1.25); } \ &= - \ \tikzdiag[xscale=-1]{ \draw (4.5,.75) -- (4.5,1.25); \draw (3.5,0) .. controls (3.5,.25) and (5.5,.5) .. (5.5,1.25); \draw [stdhl] (4,1.25) .. controls (4,.5) and (5,.5) .. (5,1.25); } \label{eq:cupstrandslides} \end{align} We set the $\mathbb{Z}^2$-degree of the generator in~\eqref{eq:cupbim} as \[ \deg_{q,\lambda}\left( \tikzdiag{ \draw (4.5,.5) -- (4.5,1); \draw [stdhl] (4,1) .. controls (4,.25) and (5,.25) .. (5,1); } \right) \ := \ (0,0). \] \smallskip Similarly, we define the \emph{cap bimodule $\overline{B}_i$} by taking the mirror along the horizontal axis of $B_i$. However, we declare that the cap is in homological degree $-1$, and with $\mathbb{Z}^2$-degree given by \[ \deg_{q,\lambda}\left( \tikzdiag[yscale=-1]{ \draw (4.5,.5) -- (4.5,1); \draw [stdhl] (4,1) .. controls (4,.25) and (5,.25) .. (5,1); } \right) \ := \ (-1,0). \] Note that since the red cap has a $-1$ homological degree, it anticommutes with the nails when applying a graded planar isotopy. \smallskip From this, one defines the \emph{coevaluation} and \emph{evaluation dg-functors} as \begin{align*} \B_i := B_i \otimes^{\Lderiv}_T - : \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r+2},0), \\ \overline{\B}_i := \overline{B}_i \otimes^{\Lderiv}_T - : \mathcal{D}_{dg}(T^{\lambda,r+2},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0). \end{align*} \subsubsection{Biadjointness} Note that \[ \overline{\B}_i\cong q \RHOM_T( B_i ,-) [1], \] by \cref{prop:cofBi} below. Thus, $q^{-1}\overline{\B}_i [-1]$ is right adjoint to $\B_i$. Similarly, we obtain that $q \overline{\B}_i [1]$ is left-adjoint to $\B_i$. 
The unit and counit of $\B_i \dashv q^{-1}\overline{\B}_i [-1]$ gives a pair of maps of bimodules \begin{align*} \eta_i &: q(T^{\lambda,r})[1] \rightarrow \overline{B}_i \otimes^{\Lderiv}_T B_i, & \varepsilon_i &: B_i \otimes^{\Lderiv} \overline{B}_i \rightarrow q(T^{\lambda,r})[1], \intertext{and similarly $q \overline{\B}_i [1] \dashv \B_i$ gives} \overline{\eta}_i &: q^{-1} (T^{\lambda,r})[-1] \rightarrow B_i \otimes^{\Lderiv} \overline{B}_i, & \overline{\varepsilon}_i &: \overline{B}_i \otimes^{\Lderiv}_T B_i \rightarrow q^{-1} (T^{\lambda,r})[-1]. \end{align*} \subsubsection{Tightened basis}\label{sec:tightbasisBi} Take $\kappa=(b_0,\ldots,b_{r+2}) \in \mathcal{P}_{b}^{r+2}$ and $\rho \in \mathcal{P}_{b}^{r}$. Let $\bar \kappa^i$ be given by $(b_0, b_1, \dots, b_{i-2}, b_{i-1} + b_i - 1 + b_{i+1} , \widehat b_i, \widehat b_{i+1}, b_{i+2}, \dots b_r)\in\mathcal{P}_{b}^{r}$. For each $1 \leq \ell \leq b_i$, consider the map \[ g_\ell : q^{b_i+1-2\ell} (1_{\bar \kappa^i} T^{\lambda,r} 1_\rho) \rightarrow 1_\kappa B_i 1_\rho, \] given by gluing on the top the following element: \[ \tikzdiag{ \draw[vstdhl] (-6,-.5) node[below]{$\lambda$} -- (-6,1); % \draw (-5.5,1) -- (-5.5,-.5); \node at(-5,.85){\small $\dots$}; \node at(-5,-.35){\small $\dots$}; \draw (-4.5,1) -- (-4.5,-.5); \tikzbraceop{-5.5}{-4.5}{1}{\small $b_{0}$}; % \draw[stdhl] (-4,-.5) node[below]{1} -- (-4,1); % \node[red] at(-3,.125) {$\dots$}; % \draw[stdhl] (-2,-.5) node[below]{1} -- (-2,1); % \draw (-1.5,1) -- (-1.5,-.5); \node at(-1,.85){\small $\dots$}; \node at(-1,-.35){\small $\dots$}; \draw (-.5,1) -- (-.5,-.5); \tikzbraceop{-1.5}{-.5}{1}{\small $b_{i-1}$}; % % \draw (3.5,1) .. controls (3.5,.25) and (2.5,.25) .. (2.5,-.5); \node at(3,.85){\small $\dots$}; \node at(2,-.35){\small $\dots$}; \draw (2.5,1) .. controls (2.5,.25) and (1.5,.25) .. (1.5,-.5); \tikzbraceop{2.5}{3.5}{1}{\small $\ell - 1$}; % \draw (2,1) .. controls (2,.75) and (3.5,.75) .. (3.5,-.35); % \draw (1.5,1) .. 
controls (1.5,.25) and (1,.25) .. (1,-.5); \node at(1,.85){\small $\dots$}; \node at(.5,-.35){\small $\dots$}; \draw (.5,1) .. controls (.5,.25) and (0,.25) .. (0,-.5); \tikzbraceop{.5}{1.5}{1}{\small $b_i - \ell$}; % \draw[stdhl] (4,1) -- (4,0) .. controls (4,-.5) and (3,-.5) .. (3,0) .. controls (3,.5) and (0,.5) .. (0,1); % % \draw (4.5,1) -- (4.5,-.5); \node at(5,.85){\small $\dots$}; \node at(5,-.35){\small $\dots$}; \draw (5.5,1) -- (5.5,-.5); \tikzbraceop{4.5}{5.5}{1}{\small $b_{i+1}$}; % % \draw[stdhl] (6,-.5) node[below]{1} -- (6,1); % \node[red] at(7,.125) {$\dots$}; % \draw[stdhl] (8,-.5) node[below]{1} -- (8,1); % \draw (8.5,1) -- (8.5,-.5); \node at(9,.85){\small $\dots$}; \node at(9,-.35){\small $\dots$}; \draw (9.5,1) -- (9.5,-.5); \tikzbraceop{8.5}{9.5}{1}{\small $b_{r}$}; } \] Recall the basis ${}_\kappa B_\rho$ of \cref{thm:Tbasis}. We claim that \begin{equation}\label{eq:basisBi} \bigsqcup_{\ell = 1}^{b_i} g_\ell({}_{\overline \kappa^i} B_\rho), \end{equation} is a basis for $1_\kappa B_i 1_\rho$. We postpone the proof of this for later. \subsection{Cofibrant replacement of $B_i$}\label{sec:cofBi} As explained in~\cite[\S4.3]{webstersl2}, $B_i$ admits an easily describable cofibrant replacement as a left module. But before describing it, let us introduce some extra notations. 
Let $T_{i, \ \tikzRBR}$ be the left $(T^{\lambda,r},0)$-module generated by the elements \[ \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} --(0,1); \draw (.5,0) -- (.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$b_0$} (1.6,-.35); \draw[stdhl] (2,0) node[below]{\small $1$} --(2,1); % \draw (2.5,0) -- (2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw (3.5,0) -- (3.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.4,-.35) -- node {$b_1$} (3.6,-.35); \draw[stdhl] (4,0) node[below]{\small $1$} --(4,1); % \node[red] at (5,.5) {\dots}; % \draw[stdhl] (6,0) node[below]{\small $1$} --(6,1); % \draw[stdhl] (6.5,0) node[below]{\small $1$} --(6.5,1); \draw (7,0) node[below]{\small $1$} --(7,1); \draw[stdhl] (7.5,0) node[below]{\small $1$} --(7.5,1); % \draw (8,0) -- (8,1); \node at(8.5,.5) {\tiny$\dots$}; \draw (9,0) -- (9,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (7.9,-.35) -- node {$b_{i-1}$} (9.1,-.35); \draw[stdhl] (9.5,0) node[below]{\small $1$} --(9.5,1); % \draw (10,0) -- (10,1); \node at(10.5,.5) {\tiny$\dots$}; \draw (11,0) -- (11,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (9.9,-.35) -- node {$b_{i}$} (11.1,-.35); \draw[stdhl] (11.5,0) node[below]{\small $1$} --(11.5,1); % \node[red] at (12.5,.5) {\dots}; % \draw[stdhl] (13.5,0) node[below]{\small $1$} --(13.5,1); \draw (14,0) -- (14,1); \node at(14.5,.5) {\tiny$\dots$}; \draw (15,0) -- (15,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (13.9,-.35) -- node {$b_{n}$} (15.1,-.35); } \] for all $(b_0, b_1, \dots, b_{r})$. We define similarly $T_{i,\ \tikzBRR}$ and $T_{i,\ \tikzRRB}$. \smallskip Let $\br B_i$ be the left $(T^{\lambda,r+2},0)$-module given by the dg-module \[ \br B_i := \begin{tikzcd}[row sep = 1ex] & q (T_{i,\ \tikzBRR}) [1] \ar{dr} \ar[no head]{dr}{ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. 
controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }}} & \\ q^2 (T_{i,\ \tikzRBR}) [2] \ar{ur} \ar[no head]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (2,0) -- (2,1); }}} \ar{dr} \ar[no head,swap]{dr}{ -\ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); }}} & \oplus & T_{i,\ \tikzRBR} \\ & q (T_{i,\ \tikzRRB}) [1] \ar{ur} \ar[no head,swap]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }}} & \end{tikzcd} \] where the differential is given by the arrows, which are the maps given by adding the term in the label at the bottom of $\tikzRBR$ , $\tikzRRB$ or $\tikzBRR$. Similarly, we define a right cofibrant replacement $\overline{B}_i \rb \overset{\simeq}{\twoheadrightarrow}\overline{B}_i$ by taking the symmetric along the horizontal line and shifting everything by $q^{-1}(-) [-1]$. \begin{prop}\label{prop:cofBi} There is a surjective quasi-isomorphism of left $\mathbb{Z}^2$-graded $(T^{\lambda,r+2},0)$-modules \[ \br B_i \overset{\simeq}{\twoheadrightarrow} B_i. \] \end{prop} \begin{proof} Consider the surjective map $T_{i,\ \tikzRBR} \twoheadrightarrow B_i$ that closes the elements $\tikzRBR$ at the bottom by a cup: \[ \tikzdiag{ \draw[stdhl](0,0) -- (0,1);\draw (.5,0) -- (.5,1);\draw[stdhl](1,0) -- (1,1); } \quad \mapsto \quad \tikzdiag{ \draw (4.5,.5) -- (4.5,1); \draw [stdhl] (4,1) .. controls (4,.25) and (5,.25) .. (5,1); } \] This map is indeed surjective since any black strand going to the left of the cap factors through a black strand going to the right, using \cref{eq:cupstrandslides}. 
Then the claim follows by observing that \[ \begin{tikzcd}[row sep = 1ex] & & q T_{i,\ \tikzBRR} \ar[hookrightarrow]{dr} \ar[no head]{dr}{ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }}} & & &\\ 0 \ar{r} & q^2 T_{i,\ \tikzRBR} \ar[hookrightarrow]{ur} \ar[no head]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (2,0) -- (2,1); }}} \ar[hookrightarrow]{dr} \ar[no head,swap]{dr}{ -\ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); }}} & \oplus & T_{i,\ \tikzRBR} \ar[twoheadrightarrow]{r} \ar[no head]{r}{ {\tikzdiag[scale=.5]{ \draw (4.5,.5) -- (4.5,1); \draw [stdhl] (4,1) .. controls (4,.25) and (5,.25) .. (5,1); }}} & B_i \ar{r} & 0,\\ && q T_{i,\ \tikzRRB} \ar[hookrightarrow]{ur} \ar[no head,swap]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }}} & & & \end{tikzcd} \] is an exact sequence. Indeed, by \cref{thm:Tbasis}, we know that adding a black/red crossing is an injective operation, and thus the sequence is exact on $q^2 T_{i,\ \tikzRBR}$. For the same reason we also have that \[ \ker\left( \begin{tikzcd}[row sep = 1ex] q T_{i,\ \tikzBRR} \ar[hookrightarrow]{dr} \ar[no head]{dr}{ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }}} & \\ \oplus & T_{i,\ \tikzRBR} \\ q T_{i,\ \tikzRRB} \ar[hookrightarrow]{ur} \ar[no head,swap]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. 
(0,1); \draw[stdhl] (-1,0) -- (-1,1); }}} & \end{tikzcd} \right) \cong T_{i,\ \stikzdiag{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }} \cap T_{i,\ \stikzdiag{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }}. \] By \cref{thm:Tbasis}, we know that if an element can be written as a diagram with a black strand crossing a red strand on the left, and as a different diagram with the same black strand crossing a red strand on the right, then it can be rewritten as a diagram with the same strand going straight, but carrying a dot. These elements correspond exactly to the image of the preceding map in the complex, which is thus exact at the second position. Finally, we observe that \[ B_i \cong T_{i,\ \tikzRBR} \ /\ \bigl( T_{i,\ \stikzdiag{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }} + T_{i,\ \stikzdiag{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }} \bigr), \] by constructing an inverse to the map that adds a cup at the bottom: the inverse is given by pulling the cup down to the bottom of the diagram. It is not hard, but a bit lengthy, to check that this inverse respects the defining relations of $B_i$ in the quotient $T_{i,\ \tikzRBR} \ /\ \bigl( T_{i,\ \stikzdiag{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }} + T_{i,\ \stikzdiag{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }} \bigr)$. 
\end{proof} \begin{cor}\label{thm:basisBi} The elements in \cref{eq:basisBi} form a $\mathbb{Z}\times\mathbb{Z}^2$-graded $\Bbbk$-basis for $1_\kappa B_i 1_\rho$. \end{cor} \begin{proof} As in \cref{thm:Tbasis}, one can show that the elements in \cref{eq:basisBi} span the space $1_\kappa B_i 1_\rho$, mainly using \cref{eq:cupstrandslides} and \cref{eq:redR3}. Linear independence follows from a dimensional argument, using \cref{prop:cofBi} and \cref{thm:Tbasis}. The computation of the dimensions can be done at the non-categorified level, and thus is a consequence of \cref{eq:caponk} of \cref{lem:explicitaction}. \end{proof} Therefore, the map $\sum g_\ell : \oplus_{\ell = 1}^{b_i} q^{b_i+1-2\ell} (1_{\bar \rho^i} T^{\lambda,r}) \xrightarrow{\simeq} 1_\rho B_i$ of right modules is an isomorphism, where $\bar \rho^{i}$ and $g_\ell$ are as in \cref{sec:tightbasisBi}. In particular, $B_i $ is a cofibrant right dg-module. With \cref{thm:K0} in mind, this means that $\overline{\B}_i$ acts on ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0)$ as the cap of $\mathcal{B}$ on $M \otimes V^r$ (see \cref{eq:caponk}), and \cref{prop:cofBi} means that $\B_i$ acts as the cup (see \cref{eq:cuponk}). \subsection{Double braiding functor}\label{sec:dbbraiding} Inspired by the definition of the braiding functor in~\cite[\S6]{webster} (see also~\cite[\S4.1]{webstersl2}), we introduce a double braiding functor that will play the role of a categorification of the action of $\xi$ on $M \otimes V^r$. \begin{defn}\label{def:dbbraiding} The \emph{double braiding bimodule $X$} (see \cref{rem:dbbraiding} for an explanation about the terminology) is the $(T^{\lambda,r},0)$-$(T^{\lambda,r},0)$-bimodule generated by the diagrams \[ \tikzdiagh{0}{ \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.25) .. (-1,.5) .. controls (0,.75) .. 
(0,1); \draw[fill=white, color=white] (-1.1,.5) circle (.1cm); \draw[vstdhl] (-1,0) node[below]{\small $\lambda$} -- (-1,1); % \draw (.5,0) -- (.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$b_0$} (1.6,-.35); \draw[stdhl] (2,0) node[below]{\small $1$} --(2,1); % \draw (2.5,0) -- (2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw (3.5,0) -- (3.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.4,-.35) -- node {$b_1$} (3.6,-.35); \draw[stdhl] (4,0) node[below]{\small $1$} --(4,1); % \node[red] at (5,.5) {\dots}; % \draw[stdhl] (6,0) node[below]{\small $1$} --(6,1); \draw (6.5,0) -- (6.5,1); \node at(7,.5) {\tiny$\dots$}; \draw (7.5,0) -- (7.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (6.4,-.35) -- node {$b_r$} (7.6,-.35); } \] for all $(b_0, \dots, b_r) \in \mathbb{N}^{r+1}$. We consider diagrams in $X$ up to graded braid-like planar isotopy with the generators being in homological degree $0$, and subject to the relations\eqref{eq:dotredstrand}-\eqref{eq:redR3} and \eqref{eq:relNail}, and the extra local relations \begin{align}\label{eq:nailslidedcross} \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.3) .. (0,-.1) .. controls (.5,.1) .. (.5,.3) -- (.5,1.5); \draw[stdhl] (1,-.5) node[below]{\small $1$} -- (1,0) .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1) -- (1,1.5); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5) node[pos=.2,nail]{}; } \ &= \ \tikzdiagh[yscale=-1]{0}{ \draw (.5,-.5) .. controls (.5,-.3) .. (0,-.1) .. controls (.5,.1) .. (.5,.3) -- (.5,1.5); \draw[stdhl] (1,-.5)-- (1,0) .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1) -- (1,1.5) node[below]{\small $1$} ; \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) -- (0,1.5) node[pos=.2,nail]{} node[below]{\small $\lambda$}; } & \tikzdiagh{0}{ \draw (1,-.5) .. controls (1,-.3) .. 
(0,-.1) .. controls (1,.1) .. (1,.3) -- (1,1.5); \draw[stdhl] (.5,-.5) node[below]{\small $1$} -- (.5,0) .. controls (.5,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) -- (.5,1.5); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5) node[pos=.2,nail]{}; } \ &= \ \tikzdiagh[yscale=-1]{0}{ \draw (1,-.5) .. controls (1,-.3) .. (0,-.1) .. controls (1,.1) .. (1,.3) -- (1,1.5); \draw[stdhl] (.5,-.5) -- (.5,0) .. controls (.5,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) -- (.5,1.5) node[below]{\small $1$}; \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) -- (0,1.5) node[pos=.2,nail]{} node[below]{\small $\lambda$}; } \end{align} We set the $\mathbb{Z}^2$-degree of the generator as \begin{align*} \deg_{q,\lambda}\left( \tikzdiag{ \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); } \right) &:= (0,-1). \end{align*} \end{defn} We define the \emph{double braiding functor} as \[ \Xi := X \otimes^{\Lderiv}_T - : \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0). \] \subsubsection{Tightened basis} Let us now describe a basis of the bimodule $X$, similar to the basis of $T^{\lambda,r}_b$ given in \cref{thm:Tbasis}. We fix two elements $\kappa$ and $\rho$ of $\mathcal{P}^r_b$ and recall the set ${}_\kappa S_\rho$ defined in \cref{ssec:basisTl}. 
For each $w\in {}_\kappa S_\rho,\ \underline{l}=(l_1,\ldots,l_b)\in \{0,1\}^b$ and $\underline{a}=(a_1,\ldots,a_b)\in\mathbb{N}^b$ we define an element $x_{w,\underline{l},\underline{a}}\in 1_\kappa X 1_\rho$ as follows: \begin{enumerate} \item choose a left-reduced expression of $w$ in terms of diagrams as above, \item for each $1\leq i \leq b$, if $l_i=1$, nail the $i$-th black strand (counting on the top from the left) on the blue strand by pulling it from its leftmost position, \item for each $1\leq i \leq b$, add $a_i$ dots on the $i$-th black strand at the top, \item finally, attach the first red strand to the blue strand by pulling it from its leftmost position. \end{enumerate} \begin{defn}\label{def:unbraidingmap} Define the \emph{unbraiding map} \[ u : \lambda X \rightarrow T^{\lambda,r}, \] as the map given by removing the double braiding \[ \tikzdiagh{0}{ \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); } \mapsto \tikzdiagh{0}{ \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) and (.25,.25) .. (.25,.5) .. controls (.25,.75) and (1,.75) .. (1,1); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); } \] \end{defn} Note that the unbraiding map is a map of $(T_b^{\lambda,r},0)$-$(T_b^{\lambda,r},0)$-bimodules. \begin{thm}\label{thm:X0basis} The set $\left\{x_{w,\underline{l},\underline{a}}\ \middle\vert w\in {}_\kappa S_\rho,\ \underline{l}\in \{0,1\}^b,\ \underline{a}\in\mathbb{N}^b\right\}$ is a $\mathbb{Z}\times\mathbb{Z}^2$-graded $\mathbb{\Bbbk}$-basis of $1_\kappa X 1_\rho$. \end{thm} \begin{proof} Showing that this set generates $1_\kappa X 1_\rho$ is similar to \cite[Proposition 3.13]{naissevaz3} and we leave the details to the reader. 
To show that the elements $(x_{w,\underline{l},\underline{a}})_{w,\underline{l},\underline{a}}$ are linearly independent, we consider a linear combination $\sum_{w,\underline{l},\underline{a}}\alpha_{w,\underline{l},\underline{a}}x_{w,\underline{l},\underline{a}}=0$ and apply the unbraiding map $u$. We then pull the first red strand back to its original position before the last step of the construction of $x_{w,\underline{l},\underline{a}}$. This has the effect of adding dots on some black strands, because of \cref{eq:redR2}. We now rewrite $u\left(\sum_{w,\underline{l},\underline{a}}\alpha_{w,\underline{l},\underline{a}}x_{w,\underline{l},\underline{a}}\right)=0$ in terms of the tightened basis of $T^{\lambda,r}_b$. We look carefully at the terms with the highest number of crossings: by pulling the dots to the top, we obtain distinct elements of the tightened basis of $T^{\lambda,r}_b$, plus terms with a lower number of crossings. From the freeness of the tightened basis of $T^{\lambda,r}_b$, we deduce that the coefficients of the terms with the highest number of crossings must be zero, and we conclude by a descending induction on the number of crossings. \end{proof} \begin{cor}\label{cor:uinj} The unbraiding map $u : \lambda X \rightarrow T^{\lambda,r}$ is injective. \end{cor} \begin{proof} The matrix of $u$ in terms of the tightened bases can be put in column echelon form with all pivots equal to $1$. \end{proof} \subsection{Cofibrant replacement of $X$}\label{sec:cofX} We now construct a left cofibrant replacement for $X$. Take $\rho = (b_2, \dots, b_{r}) \in \mathcal{P}_b^{r-2}$ and consider the idempotent $1_{k,\ell,\rho} := 1_{k,\ell,b_2,\dots,b_{r}}$. 
We also write \[ \bar 1_{\ell,\rho} := \tikzdiagh{0}{ \draw (.5,0) -- (.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$\ell$} (1.6,-.35); \draw[stdhl] (2,0) node[below]{\small $1$} --(2,1); % \draw (2.5,0) -- (2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw (3.5,0) -- (3.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.4,-.35) -- node {$b_2$} (3.6,-.35); \draw[stdhl] (4,0) node[below]{\small $1$} --(4,1); % \node[red] at (5,.5) {\dots}; % \draw[stdhl] (6,0) node[below]{\small $1$} --(6,1); \draw (6.5,0) -- (6.5,1); \node at(7,.5) {\tiny$\dots$}; \draw (7.5,0) -- (7.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (6.4,-.35) -- node {$b_{r}$} (7.6,-.35); } \] so that for example \[ 1_{0,k+\ell,\rho} = \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); \draw[stdhl] (0,0) node[below]{\small $1$} --(0,1); \draw (.5,0) -- (.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$k$} (1.6,-.35); } \otimes \bar 1_{\ell,\rho}. \] For $k \geq 0$, $\ell \geq 0$ and $\rho \in \mathcal{P}_b^{r-2}$, we define \begin{align*} Y^1_{k,\ell,\rho} &:= \bigoplus_{t=0}^{k-1} Y^{1,t}_{k,\ell,\rho}, & Y^{1,t}_{k,\ell,\rho} &:= \lambda q^{k-2t+1} (T_b^{\lambda,r} 1_{1,k+\ell-1,\rho})[1], \end{align*} \begin{align*} Y^0_{k,\ell,\rho} &:= {Y'}^0_{k,\ell,\rho} \oplus \bigoplus_{t = 0}^{k-1} Y^{0,t}_{k,\ell,\rho}, & {Y'}^0_{k,\ell,\rho} := \lambda^{-1} q^{k} (T_b^{\lambda,r} 1_{0,k+\ell,\rho}), \quad Y^{0,t}_{k,\ell,\rho} &:= \lambda q^{k-2t} (T_b^{\lambda,r} 1_{0,k+\ell,\rho})[1]. \end{align*} Note that $Y^1_{0,\ell,\rho} = 0$ and $Y^0_{0,\ell,\rho} = \lambda^{-1} (T_b^{\lambda,r} 1_{0,\ell,\rho})$. 
We write \begin{align*} X_k &:= \bigoplus_{\ell \geq 0, \rho \in \mathcal{P}_b^{r-2}} X 1_{k,\ell,\rho}, & Y^1_{k} &:= \bigoplus_{\ell \geq 0, \rho \in \mathcal{P}_b^{r-2}} Y^1_{k,\ell,\rho}, & Y^0_{k} &:= \bigoplus_{\ell \geq 0, \rho \in \mathcal{P}_b^{r-2}} Y^0_{k,\ell,\rho}, \end{align*} and similarly for $Y^{1,t}_{k}$, ${Y'}^0_{k}$ and $Y^{0,t}_{k}$. Define the cofibrant $(T^{\lambda,r}_b,0)$-module $\br X_k$ given by the mapping cone \[ \br X_k := \cone\bigl( Y^1_k \xrightarrow{\ \imath_k\ } Y^0_k \bigr), \] where $\imath_k := \sum_{t = 0}^{k-1} \imath_k^t$ for \begin{align*} \imath_k^t :& Y^{1,t}_k \rightarrow {Y'}^0_k \oplus Y^{0,t}_k, \\ & \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); \draw (0,0) -- (0,1); \draw[stdhl] (.5,0) node[below]{\small $1$} --(.5,1); } \otimes \bar 1_{k+\ell-1,\rho} \ \mapsto \ \left( -\ \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls(0,.5) and (.5,.5) .. (.5,1); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $t$} (1.1,-.35); } \otimes \bar 1_{\ell+k-1-t,\rho}, \ \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls(0,.5) and (.5,.5) .. (.5,1); } \otimes \bar 1_{\ell+k-1,\rho} \right) \end{align*} Note that each $\imath_k^t$ is injective, and therefore so is $\imath_k$. 
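To make the construction concrete, here is the smallest nontrivial case, obtained by unwinding the definitions above for $k=1$ (so that $t=0$ is the only index appearing): for each $\ell \geq 0$ and $\rho \in \mathcal{P}_b^{r-2}$ we have
\[
Y^1_{1,\ell,\rho} = \lambda q^{2}\, (T_b^{\lambda,r} 1_{1,\ell,\rho})[1],
\qquad
Y^0_{1,\ell,\rho} = \lambda^{-1} q\, (T_b^{\lambda,r} 1_{0,1+\ell,\rho}) \oplus \lambda q\, (T_b^{\lambda,r} 1_{0,1+\ell,\rho})[1],
\]
so that $\br X_1$ is the cone of the single map $\imath_1^0$, whose two components are the diagrams depicted above with $t = 0$.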
Then, consider the left module map \[ \gamma_k : \br X_k \rightarrow X_k, \] given by $\gamma_k := \gamma_k' + \sum_{t = 0}^{k-1} \gamma_k^t$ where \begin{align*} \gamma_k' : & {Y'}^0_k \rightarrow X_k, \\ & \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); \draw[stdhl] (0,0) node[below]{\small $1$} --(0,1); \draw (.5,0) -- (.5,1); \node at(.75,.5) {\tiny$\dots$}; \draw (1,0) -- (1,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$k$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho} \ \mapsto \ \tikzdiagh{0}{ \draw (0,0) .. controls (0,.5) and (.5,.5) .. (.5,1); \node at(.75,.75) {\tiny$\dots$}; \draw (.5,0) .. controls (.5,.5) and (1,.5) .. (1,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.35) -- node {$k$} (.6,-.35); % \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.6,.5) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1); % } \otimes \bar 1_{\ell,\rho}, \end{align*} and \begin{align*} \gamma_k^t : & Y^{0,t}_k \rightarrow X_k, \\ & \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); \draw[stdhl] (0,0) node[below]{\small $1$} --(0,1); \draw (.5,0) -- (.5,1); \node at(.75,.5) {\tiny$\dots$}; \draw (1,0) -- (1,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$k$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho} \ \mapsto \ \tikzdiagh{0}{ % \draw (0,-.5) .. controls (0,0) and (.75,0) .. (.75,1); \node at(1,.75) {\tiny$\dots$}; \draw (.5,-.5) .. controls (.5,0) and (1.25,0) .. (1.25,1); \draw (.75,-.5) .. controls (.75,-.25) .. (-.5,0) .. controls (.5,.5) .. (.5,1); \draw (1,-.5) .. controls (1,0) and (1.5,0) .. (1.5,1); \node at(1.75,.75) {\tiny$\dots$}; \draw (1.5,-.5) .. controls (1.5,0) and (2,0) .. 
(2,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {$t$} (.6,-.85); % \draw[stdhl] (2,-.5) node[below]{\small $1$} -- (2,0) .. controls (2,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.6,.5) circle (.1cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.33,nail]{}; } \otimes \bar 1_{\ell,\rho}, \end{align*} for all $0 \leq t \leq k-1$. \begin{lem}\label{lem:gammasurjective} The map $\gamma_k : \br X_k \rightarrow X_k$ is surjective. \end{lem} \begin{proof} The statement can be proved by observing that $X_k$ is generated as a left $(T_b^{\lambda,r},0)$-module by the elements \begin{align*} \tikzdiagh{0}{ \draw (0,0) .. controls (0,.5) and (.5,.5) .. (.5,1); \node at(.75,.75) {\tiny$\dots$}; \draw (.5,0) .. controls (.5,.5) and (1,.5) .. (1,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.35) -- node {$k$} (.6,-.35); % \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.6,.5) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1); % } &\otimes \bar 1_{\ell,\rho}, & \tikzdiagh{0}{ % \draw (0,-.5) .. controls (0,0) and (.75,0) .. (.75,1); \node at(1,.75) {\tiny$\dots$}; \draw (.5,-.5) .. controls (.5,0) and (1.25,0) .. (1.25,1); \draw (.75,-.5) .. controls (.75,-.25) .. (-.5,0) .. controls (.5,.5) .. (.5,1); \draw (1,-.5) .. controls (1,0) and (1.5,0) .. (1.5,1); \node at(1.75,.75) {\tiny$\dots$}; \draw (1.5,-.5) .. controls (1.5,0) and (2,0) .. (2,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {$t$} (.6,-.85); % \draw[stdhl] (2,-.5) node[below]{\small $1$} -- (2,0) .. controls (2,.25) .. (-.5,.5) .. controls (0,.75) .. 
(0,1); \draw[fill=white, color=white] (-.6,.5) circle (.1cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.33,nail]{}; } &\otimes \bar 1_{\ell,\rho}, \end{align*} for all $0 \leq t \leq k-1$. The details can be found in \cref{sec:proofsofsecbimod}. \end{proof} \begin{lem}\label{lem:sesX0} The sequence \[ 0 \rightarrow Y^1_k \xrightarrow{\imath_k} Y^0_k \xrightarrow{\gamma_k} X_k \rightarrow 0, \] is a short exact sequence of left $\mathbb{Z}^2$-graded $(T^{\lambda,r}, 0)$-modules. \end{lem} \begin{proof} Since we already have a complex with an injection and a surjection, it is enough to show that \[ \gdim X_k = \gdim Y^0_k - \gdim Y^1_k, \] where $\gdim$ is the graded dimension in the form of a Laurent series in $\mathbb{N}\llbracket h^{\pm 1}, \lambda^{\pm 1}, q^{\pm 1} \rrbracket$. This can be shown by induction on $k$, and the details are in~\cref{sec:proofsofsecbimod}. \end{proof} From that, we induce a right $(T^{\lambda,r},0)$-$A_\infty$-action on $\br X := \bigoplus_{k \geq 0} \br X_k$ (see \cref{sec:Ainftyaction}), turning it into a $\mathbb{Z}^2$-graded $(T^{\lambda,r},0)$-$(T^{\lambda,r},0)$-$A_\infty$-bimodule, and we obtain: \begin{prop}\label{prop:gammaqi} The map $\gamma := \sum_{k \geq 0} \gamma_k : \br X \twoheadrightarrow X$ is a quasi-isomorphism of $\mathbb{Z}^2$-graded $(T^{\lambda,r},0)$-$(T^{\lambda,r},0)$-$A_\infty$-bimodules. \end{prop} \begin{proof} It is an immediate consequence of \cref{lem:sesX0}. \end{proof} Again, having \cref{thm:K0} in mind, it means $\Xi$ acts on ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0)$ as the element $\xi$ of $\mathcal{B}$ on $M \otimes V^r$ (see \cref{eq:xionk}). \section{A categorification of $M(\lambda)\otimes V(\protect\underline{N})$}\label{sec:catTensProd} In this section we explain how derived categories of $(T^{\lambda,\underline{N}}_b,0)$-dg-modules categorify the $U_q(\mathfrak{sl}_2)$-module $M(\lambda)\otimes V(\underline{N})$. 
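Before going into the details, let us recall the decategorified picture guiding this section. In $U_q(\mathfrak{sl}_2)$ one has the commutator relation
\[
EF - FE = \frac{K - K^{-1}}{q - q^{-1}},
\]
whose right-hand side acts on a vector of weight $n$ as multiplication by the quantum integer $[n]_q = \frac{q^{n} - q^{-n}}{q - q^{-1}}$. The dg-functors $\F_b$ and $\E_b$ defined below lift the action of $F$ and $E$ on $M(\lambda) \otimes V(\underline{N})$, and \cref{thm:sl2comqi} lifts this commutator relation, with the formal parameter $\lambda$ keeping track of the highest weight of $M(\lambda)$.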
Since the construction is very similar to the one in~\cite{naissevaz2} and~\cite{naissevaz3}, we will assume some familiarity with~\cite{naissevaz2} and~\cite{naissevaz3}, and we will refer to these papers for several details. \smallskip We introduce the notations \begin{align*} \oplus_{[k]_q} (-)& := \bigoplus_{p = 0}^{k-1} q^{k-1-2p} (-), \\ \oplus_{[\beta+k]_q} (-) & := \bigoplus_{p \geq 0} \lambda q^{1+2p+k} (-) [1] \oplus \lambda^{-1} q^{1+2p-k}(-), \end{align*} where we recall that $q^a \lambda^b(-)$ is a shift up by $(a,b)$ in the $\mathbb{Z}^2$-grading, and $(-)[1]$ is a shift up by $1$ in the homological grading. We write $\otimes$ for $\otimes_\Bbbk$, and $\otimes_b$ for $\otimes_{(T^{\lambda,\underline{N}}_b,0)}$. We also write $\mathcal{D}_{dg}(T^{\lambda,\underline{N}}_b, 0)$ for the dg-enhanced derived category of $\mathbb{Z}^2$-graded dg-modules over $(T^{\lambda,\underline{N}}_b, 0)$ (see \cref{sec:dgdercat} for a precise definition). \subsection{Categorical action} Let $1_{b,1} \in T^{\lambda,\underline{N}}_{b+1}$ be the idempotent given by \[ 1_{b,1} := \sum_{\rho \in \mathcal{P}_b^r} \ \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} --(0,1); \draw (.5,0) -- (.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$b_0$} (1.6,-.35); \draw[stdhl] (2,0) node[below]{\small $N_1$} --(2,1); % \draw (2.5,0) -- (2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw (3.5,0) -- (3.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.4,-.35) -- node {$b_1$} (3.6,-.35); \draw[stdhl] (4,0) node[below]{\small $N_2$} --(4,1); % \node[red] at (5,.5) {\dots}; % \draw[stdhl] (6,0) node[below]{\small $N_{r}$} --(6,1); \draw (6.5,0) -- (6.5,1); \node at(7,.5) {\tiny$\dots$}; \draw (7.5,0) -- (7.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (6.4,-.35) -- node {$b_r$} (7.6,-.35); \draw (8,0) -- (8, 1); } \] There is a (non-unital) map of algebras $ 
T^{\lambda,\underline{N}}_b \rightarrow T^{\lambda,\underline{N}}_{b+1} $ given by adding a vertical black strand to the right of a diagram from $T^{\lambda,\underline{N}}_b$: \begin{equation}\label{eq:addblackstrand} \tikzdiagh[xscale=1.25]{0}{ \draw [vstdhl] (-.25,0) node[below]{\small $\lambda$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \draw [stdhl] (.75,0) node[below]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,.125) { $\dots$}; \node[red] at(1.125,.875) { $\dots$}; % \draw [stdhl] (1.5,0) node[below]{\small $N_{r-1}$} -- (1.5,1); % \draw (1.75,0) -- (1.75,1); \node at(2,.125) {\tiny $\dots$}; \node at(2,.875) {\tiny $\dots$}; \draw (2.25,0) -- (2.25,1); % \draw [stdhl] (2.5,0) node[below]{\small $N_{r}$} -- (2.5,1); % \draw (2.75,0) -- (2.75,1); \node at(3,.125) {\tiny $\dots$}; \node at(3,.875) {\tiny $\dots$}; \draw (3.25,0) -- (3.25,1); % \filldraw [fill=white, draw=black] (-.375,.25) rectangle (3.375,.75) node[midway] { $D$}; } \ \mapsto \ \tikzdiagh[xscale=1.25]{0}{ \draw [vstdhl] (-.25,0) node[below]{\small $\lambda$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \draw [stdhl] (.75,0) node[below]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,.125) { $\dots$}; \node[red] at(1.125,.875) { $\dots$}; % \draw [stdhl] (1.5,0) node[below]{\small $N_{r-1}$} -- (1.5,1); % \draw (1.75,0) -- (1.75,1); \node at(2,.125) {\tiny $\dots$}; \node at(2,.875) {\tiny $\dots$}; \draw (2.25,0) -- (2.25,1); % \draw [stdhl] (2.5,0) node[below]{\small $N_{r}$} -- (2.5,1); % \draw (2.75,0) -- (2.75,1); \node at(3,.125) {\tiny $\dots$}; \node at(3,.875) {\tiny $\dots$}; \draw (3.25,0) -- (3.25,1); % \filldraw [fill=white, draw=black] (-.375,.25) rectangle (3.375,.75) node[midway] { $D$}; % \draw (3.5,0) -- (3.5,1); } \end{equation} sending the unit $1 \in T^{\lambda,\underline{N}}$ to the 
idempotent $1_{b,1}$. This map gives rise to derived induction and restriction dg-functors \begin{align*} \Ind_b^{b+1} &: \mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b+1},0), &&\Ind_b^{b+1}(-) := (T^{\lambda,\underline{N}}_{b+1},0) 1_{b,1} \otimes^{\Lderiv}_b (-),\\[1ex] \Res_b^{b+1} &: \mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b+1},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b},0), &&\Res_b^{b+1}(-) := \RHOM_{b}(-,1_{b,1}(T^{\lambda,\underline{N}}_{b+1},0)), \end{align*} which are adjoint (see \cref{sec:deriveddghomtensor}). By \cref{prop:Tdecomp}, we know that $(T^{\lambda,\underline{N}}_{b+1},0)$ is a cofibrant dg-module over $(T^{\lambda,\underline{N}}_{b},0)$, so that we can replace derived tensor products (resp. derived homs) by usual tensor products \begin{align*} \Ind_b^{b+1}(-) &\cong (T^{\lambda,\underline{N}}_{b+1},0) 1_{b,1} \otimes_b (-), & \Res_b^{b+1}(-) &\cong 1_{b,1} (T^{\lambda,\underline{N}}_{b+1},0) \otimes_{b+1} (-). \end{align*} Then, we define \begin{align*} \F_b &:= \Ind_b^{b+1}, & \E_b &:= \lambda^{-1} q^{1+2b-|\underline{N}|} \Res_b^{b+1}, \end{align*} and $\text{Id}_b$ is the identity dg-functor on $\mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b},0)$. \begin{thm}\label{thm:sl2comqi} There is a quasi-isomorphism \[ \cone(\F_{b-1}\E_{b-1} \rightarrow \E_b\F_b) \xrightarrow{\cong} \oplus_{[\beta+|\underline{N}|-2b]_q} \text{Id}_b, \] of dg-functors. \end{thm} \begin{proof} Consider the map \[ \psi : q^{-2} (T^{\lambda,\underline{N}}_{b} 1_{b-1,1}\otimes_{b-1} 1_{b-1,1} T^{\lambda,\underline{N}}_b) \rightarrow1_{b,1} T^{\lambda,\underline{N}}_{b+1} 1_{b,1}, \] given by \[ x \otimes_{b-1} y \mapsto x \tau_b y, \] where $\tau_b$ is a crossing between the $b$-th and $(b+1)$-th black strands. 
Diagrammatically, one can picture it as \[ \tikzdiag[xscale=.75,yscale=.75]{ \draw (0,-1.25) -- (0,1.25); \draw (.5,-1.25) -- (.5,1.25); \draw (1.5,-1.25) -- (1.5,1.25); \draw (2,-1.25) -- (2,-.5) .. controls (2,-.25) .. (2.25,-.25); \draw (2,1.25) -- (2,.5) .. controls (2,.25) .. (2.25,.25); \node at(1,1.2) {\small $\dots$}; \filldraw [fill=white, draw=black] (-.25,-1) rectangle (2.25,-.5) \node at(1,0) {\small $\dots$}; \filldraw [fill=white, draw=black] (-.25,.5) rectangle (2.25,1) \node at(1,-1.2) {\small $\dots$}; } \ \mapsto \ \tikzdiag[xscale=.75,yscale=.75]{ \draw (0,-1.25) -- (0,1.25); \draw (.5,-1.25) -- (.5,1.25); \draw (1.5,-1.25) -- (1.5,1.25); \draw (2,-1.25) -- (2,-.5) .. controls (2,0) and (2.5,0) .. (2.5,.5) -- (2.5,1.25) \draw (2,1.25) -- (2,.5) .. controls (2,0) and (2.5,0) .. (2.5,-.5) -- (2.5,-1.25) \node at(1,1.2) {\small $\dots$}; \filldraw [fill=white, draw=black] (-.25,-1) rectangle (2.25,-.5) \node at(1,0) {\small $\dots$} \filldraw [fill=white, draw=black] (-.25,.5) rectangle (2.25,1) \node at(1,-1.2) {\small $\dots$}; } \] where the bent black strands informally depict the induction/restriction functors. 
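As a sanity check on the grading shift in the source of $\psi$, here is a sketch of the degree count, assuming the usual KLR-type convention that a crossing of two black strands has $q$-degree $-2$:
\[
\deg_q( x \tau_b y ) = \deg_q(x) + \deg_q(y) + \deg_q(\tau_b) = \deg_q(x) + \deg_q(y) - 2,
\]
which is compensated by the shift $q^{-2}$ on $T^{\lambda,\underline{N}}_{b} 1_{b-1,1}\otimes_{b-1} 1_{b-1,1} T^{\lambda,\underline{N}}_b$, making $\psi$ degree-preserving.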
Then, as in \cite[Theorem 5.1]{naissevaz3}, we obtain an exact sequence of $(T^{\lambda,\underline{N}}_{b},T^{\lambda,\underline{N}}_{b})$-bimodules \begin{align*} 0 \rightarrow q^{-2} (T^{\lambda,\underline{N}}_{b} 1_{b-1,1}\otimes_{b-1} 1_{b-1,1} T^{\lambda,\underline{N}}_b) &\xrightarrow{\ \psi\ }1_{b,1} T^{\lambda,\underline{N}}_{b+1} 1_{b,1} \\ &\xrightarrow{\ \phi\ } \bigoplus_{p\geq 0} q^{2p} (T^{\lambda,\underline{N}}_{b}) \oplus \lambda^2 q^{2p+2|\underline{N}|-4b} (T^{\lambda,\underline{N}}_b) \rightarrow 0, \end{align*} where $\phi$ is the projection onto the following summands \begin{align*} \bigoplus_{p \geq 0} \ \tikzdiag[xscale=1.25]{ % \draw (0,-.5) -- (0,1); \node at(.25,-.35) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,-.35) { $\dots$}; % \draw [stdhl] (1.5,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} -- (1.5,1); % \draw (1.75,-.5) -- (1.75,1); \node at(2,-.35) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $b_{r}-1$} (2.35,-.85); % \draw (2.75, -.5) -- (2.75, 1.25) node[pos=.825,tikzdot]{} node[pos=.825, xshift=1.5ex, yshift=1ex]{\small $p$}; % \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1); % \filldraw [fill=white, draw=black] (-.375,.5) rectangle (2.375,1.25) node[midway] { $T_{b-1}^{\lambda,\underline{N}}$}; } \oplus \ \tikzdiag[xscale=1.25]{ % \draw (0,-.5) -- (0,1); \node at(.25,-.35) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % \draw (2.75, -.5) .. controls (2.75,-.25) .. (-.5,0) .. controls (2.75,.25) .. 
(2.75, .75) ; % \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,-.35) { $\dots$}; % \draw [stdhl] (1.5,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} -- (1.5,1); % \draw (1.75,-.5) -- (1.75,1); \node at(2,-.35) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $b_{r}-1$} (2.35,-.85); % \draw (2.75, .75) -- (2.75, 1.25) node[pos=.5,tikzdot]{} node[midway, xshift=1.5ex, yshift=1ex]{\small $p$}; % \draw[fill=white, color=white] (-.35,0) circle (.15cm); \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1) node[pos=.33, nail]{}; % \filldraw [fill=white, draw=black] (-.375,.5) rectangle (2.375,1.25) node[midway] { $T_{b-1}^{\lambda,\underline{N}}$}; } \end{align*} of \cref{prop:Tdecomp} (i.e. when $i=r$ and $t=b_r-1$). Note that, a priori, this only defines a map of left modules. Fortunately, by arguments similar to those in~\cite[Lemma 5.4]{naissevaz3}, one can show that it is in fact a map of bimodules. Exactness follows from a dimensional argument using \cref{prop:Tdecomp}. \end{proof} \subsubsection{Recovering $V(N)\otimes V(\protect\underline{N})$} Introducing the differential $d_N$ from \cref{sec:dgenh} in the picture, the map \cref{eq:addblackstrand} lifts to a map of dg-algebras $(T^{\lambda,\underline{N}}_b,d_N) \rightarrow (T^{\lambda,\underline{N}}_{b+1},d_N)$. Then we define dg-functors \begin{align*} \F_b^N(-) &:= (T^{\lambda,\underline{N}}_{b+1},d_N) 1_{b,1} \otimes_b (-), & \E_b^N(-) &:= q^{2b-|\underline{N}|-N} 1_{b,1}(T^{\lambda,\underline{N}}_{b+1},d_N) \otimes_{b+1} (-). \end{align*} These correspond to the derived induction and (shifted) derived restriction dg-functors along \cref{eq:addblackstrand}, by \cref{prop:Tdecomp} again. Recall the notion of a strongly projective dg-module from \cite{moore} (or see \cref{sec:stronglyproj}). 
\begin{prop}\label{prop:stronglyproj} As $(T^{\lambda,\underline{N}}_b,d_N)$-module, $(T^{\lambda,\underline{N}}_{b+1},d_N)$ is strongly projective. \end{prop} \begin{proof} As in \cite[Proposition 5.15]{naissevaz3}, and omitted. \end{proof} By \cref{prop:stronglyproj}, \cref{thm:sl2comqi} can be seen as a quasi-isomorphism of mapping cones \[ \cone\bigl(\F_{b-1}^N\E_{b-1}^N \rightarrow \E_b^N\F_b^N\bigr) \xrightarrow{\simeq} \cone\bigl(\bigoplus_{p \geq 0} q^{1+2p+N+|\underline{N}|-2b} \text{Id}_b \xrightarrow{h_N} \bigoplus_{p \geq 0} q^{1+2p-N-|\underline{N}|+2b} \text{Id}_b \bigr), \] where $h_N$ is given by multiplication by the element \[ \tikzdiag[xscale=1.25]{ % \draw (0,-1) -- (0,1); \node at(.25,-.85) {\tiny $\dots$}; \node at(.25,.85) {\tiny $\dots$}; \draw (.5,-1) -- (.5,1); % % \node[red] at(1.125,-.85) { $\dots$}; \node[red] at(1.125,.85) { $\dots$}; % % \draw (1.75,-1) -- (1.75,1); \node at(2,-.85) {\tiny $\dots$}; \node at(2,.85) {\tiny $\dots$}; \draw (2.25,-1) -- (2.25,1); % \draw (2.5,-1) .. controls (2.5,-.25) and (-.5,-.25) .. (-.25,0) node[pos=1,tikzdot]{} node[pos=1,xshift=-1.5ex,yshift=.75ex]{\small $N$} .. controls (-.5,.25) and (2.5,.25) .. (2.5,1); % \draw [stdhl] (1.5,-1) node[below,yshift={-1ex}]{\small $N_r$} -- (1.5,1); \draw [stdhl] (.75,-1) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); \draw [vstdhl] (-.75,-1) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.75,1) ; } \] \begin{prop}\label{prop:actionN} There is a quasi-isomorphism \[ \cone\bigl(\bigoplus_{p \geq 0} q^{1+2p+N+|\underline{N}|-2b} \text{Id}_b \xrightarrow{h_N} \bigoplus_{p \geq 0} q^{1+2p-N-|\underline{N}|+2b} \text{Id}_b \bigr) \xrightarrow{\cong} \oplus_{[N+|\underline{N}|-2b]_q} \text{Id}_b, \] where $\oplus_{[-k]_q} M := \oplus_{[k]_q} M[1]$. \end{prop} \begin{proof} As in \cite[Proposition 5.9]{naissevaz3}, and omitted. 
\end{proof} \subsubsection{Induction along red strands}\label{sec:redind} Take $\underline{N} = (N_1, \dots, N_r)$ and $\underline N ' = (\underline N, N_{r+1})$. Consider the (non-unital) map of algebras $T^{\lambda,\underline{N}}_b \rightarrow T^{\lambda,\underline{N'}}_{b}$ that consists of adding a vertical red strand labeled $N_{r+1}$ at the right of a diagram: \begin{equation*} \tikzdiagh[xscale=1.25]{0}{ \draw [vstdhl] (-.25,0) node[below]{\small $\lambda$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \draw [stdhl] (.75,0) node[below]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,.125) { $\dots$}; \node[red] at(1.125,.875) { $\dots$}; % \draw [stdhl] (1.5,0) node[below]{\small $N_{r-1}$} -- (1.5,1); % \draw (1.75,0) -- (1.75,1); \node at(2,.125) {\tiny $\dots$}; \node at(2,.875) {\tiny $\dots$}; \draw (2.25,0) -- (2.25,1); % \draw [stdhl] (2.5,0) node[below]{\small $N_{r}$} -- (2.5,1); % \draw (2.75,0) -- (2.75,1); \node at(3,.125) {\tiny $\dots$}; \node at(3,.875) {\tiny $\dots$}; \draw (3.25,0) -- (3.25,1); % \filldraw [fill=white, draw=black] (-.375,.25) rectangle (3.375,.75) node[midway] { $D$}; } \ \mapsto \ \tikzdiagh[xscale=1.25]{0}{ \draw [vstdhl] (-.25,0) node[below]{\small $\lambda$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \draw [stdhl] (.75,0) node[below]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,.125) { $\dots$}; \node[red] at(1.125,.875) { $\dots$}; % \draw [stdhl] (1.5,0) node[below]{\small $N_{r-1}$} -- (1.5,1); % \draw (1.75,0) -- (1.75,1); \node at(2,.125) {\tiny $\dots$}; \node at(2,.875) {\tiny $\dots$}; \draw (2.25,0) -- (2.25,1); % \draw [stdhl] (2.5,0) node[below]{\small $N_{r}$} -- (2.5,1); % \draw (2.75,0) -- (2.75,1); \node at(3,.125) {\tiny $\dots$}; \node at(3,.875) {\tiny $\dots$}; \draw (3.25,0) -- (3.25,1); % \filldraw [fill=white, draw=black]
(-.375,.25) rectangle (3.375,.75) node[midway] { $D$}; % \draw [stdhl] (3.5,0) node[below]{\small $N_{r+1}$} -- (3.5,1); } \end{equation*} Let $\mathfrak{I} : \mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,\underline{N}'}_{b},0)$ be the corresponding induction dg-functor, and let $ \mathfrak{\bar I} : \mathcal{D}_{dg}(T^{\lambda,\underline{N}'}_{b}, 0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b},0) $ be the restriction dg-functor. \begin{prop}\label{prop:indresredstrand} There is an isomorphism $ \mathfrak{\bar I} \circ \mathfrak{I} \cong \text{Id}$. \end{prop} \begin{proof} The statement follows from \cref{prop:Tdecomp}. \end{proof} \subsection{Categorification theorem} In this section we suppose that $\Bbbk$ is a field. Recall the notion of an asymptotic Grothendieck group $\boldsymbol{K}_0^\Delta$ from \cite[\S 8]{asympK0} (or see \cref{sec:asympK0}). Since $(T^{\lambda,\underline{N}}_{b},0)$ is a positive c.b.l.f. dimensional $\mathbb{Z}^2$-graded dg-algebra (see \cref{def:positivecblfdgalg}), we have by \cref{thm:triangtopK0genbyPi} that $\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}}_{b},0)$ is a free $\mathbb{Z}\pp{q,\lambda}$-module generated by the classes of projective $T^{\lambda,\underline{N}}_{b}$-modules with a trivial differential. Let ${}_\mathbb{Q} \boldsymbol{K}_0^\Delta(-) := \boldsymbol{K}_0^\Delta(-) \otimes_{\mathbb{Z}\pp{q,\lambda}} \mathbb{Q}\pp{q,\lambda}$. \smallskip For each $\rho \in \mathcal{P}_{b}^{r}$, there is a projective $T^{\lambda,\underline{N}}_{b}$-module given by \[ {\boldsymbol{P}}_{\rho} := T^{\lambda,\underline{N}}_{b}1_\rho. \] Recall the inclusion $\eta_\rho \colon A_{b_0}\otimes \nh_{b_1} \otimes \cdots \otimes \nh_{b_r} \hookrightarrow T_b^{\lambda,\underline{N}}$ defined in \cref{eq:defetainclusion}.
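In the decompositions below we freely use quantum integers and the notation $\oplus_f$. As a reminder, we record the standard balanced conventions (the formal definitions given in the earlier sections take precedence in case of any discrepancy):
\begin{align*}
[n]_q &:= q^{n-1} + q^{n-3} + \cdots + q^{1-n} = \frac{q^n - q^{-n}}{q - q^{-1}}, &
[n]_q! &:= [n]_q [n-1]_q \cdots [1]_q,
\end{align*}
and for $f = \sum_i a_i q^i \in \mathbb{N}[q,q^{-1}]$ we write $\oplus_f M := \bigoplus_i (q^i M)^{\oplus a_i}$.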
It is well-known (see for example~\cite[\S~2.2.3]{KL1}) that $\nh_n$ admits a unique primitive idempotent up to equivalence given by \[ e_{n} := \tau_{\vartheta_n} x_1^{n-1} x_2^{n-2} \cdots x_{n-1} \in \nh_n, \] where $\vartheta_n \in S_n$ is the longest element, $\tau_{w_1w_2\cdots w_k} := \tau_{w_1}\tau_{w_2}\cdots\tau_{w_k}$, with $\tau_i$ being a crossing between the $i$-th and $(i+1)$-th strands, and $x_i$ is a dot on the $i$-th strand. There is a similar result for $\nh_{b_0} \subset A_{b_0}$ (see \cite[\S 2.5.1]{naissevaz2}). Moreover, for degree reasons, any primitive idempotent of $T^{\lambda,\underline{N}}_{b}$ is the image of a collection of idempotents under the inclusion $\eta_\rho $ for some $\rho$, and thus is of the form \[ e_{\rho} := \eta_\rho \left( e_{b_0} \otimes e_{b_1} \otimes \cdots \otimes e_{b_r} \right). \] It is also well-known (see for example~\cite[\S~2.2.3]{KL1}) that there is a decomposition \[ \nh_n \cong q^{n(n-1)/2} \bigoplus_{[n]_q!} \nh_n e_n, \] as left $\nh_n$-modules. For the same reasons, we obtain \[ {\boldsymbol{P}}_\rho \cong q^{\sum_{i=0}^r b_i(b_i-1)/2} \bigoplus_{\prod_{i=0}^r [b_i]_q!} T^{\lambda,\underline{N}}_{b}e_\rho. \] \subsubsection{Categorified Shapovalov form} Let $T^{\lambda,\underline{N}} := \bigoplus_{b \geq 0} T_b^{\lambda,\underline{N}}$. As in \cite[\S2.5]{KL1}, let $\psi : T^{\lambda,\underline{N}} \rightarrow \opalg{(T^{\lambda,\underline{N}})}$ be the map that takes the mirror image of diagrams along the horizontal axis. Given a left $(T^{\lambda,\underline{N}},0)$-module $M$, we obtain a right $(T^{\lambda,\underline{N}},0)$-module $M^\psi$ with action given by $m^\psi \cdot r := (-1)^{\deg_h(r) \deg_h(m)} \psi(r) \cdot m$ for $m \in M$ and $r \in T^{\lambda,\underline{N}}$.
Then we define the dg-bifunctor \begin{align*} (-,-) &: \mathcal{D}_{dg}(T^{\lambda,\underline{N}},0) \times \mathcal{D}_{dg}(T^{\lambda,\underline{N}},0) \rightarrow \mathcal{D}_{dg}(\Bbbk, 0), & (W,W') := W^\psi \otimes^{\Lderiv}_{(T^{\lambda,\underline{N}},0)} W'. \end{align*} \begin{prop}\label{prop:catshap} The dg-bifunctor defined above satisfies: \begin{itemize} \item $((T^{\lambda,\underline{N}}_0,d_N),(T^{\lambda,\underline{N}}_0,d_N)) \cong (\Bbbk,0)$; \item $(\Ind_b^{b+1} M,M') \cong (M, \Res_b^{b+1} M')$ for all $M,M' \in \mathcal{D}_{dg}(T^{\lambda,\underline{N}},0)$; \item $(\oplus_f M,M') \cong (M, \oplus_f M') \cong \oplus_f (M,M')$ for all $f \in \mathbb{Z}\pp{q,\lambda}$; \item $(M,M') \cong (\mathfrak{I}(M), \mathfrak{I}(M'))$. \end{itemize} \end{prop} \begin{proof} Straightforward, except for the last point, which follows from \cref{prop:indresredstrand} together with the adjunction $\mathfrak{I} \vdash \mathfrak{\bar I}$. \end{proof} Comparing \cref{prop:catshap} to \cref{sec:shepfortensor}, we deduce that $(-,-)$ has the same properties on the asymptotic Grothendieck group of $(T^{\lambda,r},0)$ as the Shapovalov form on $M\otimes V^r$. \subsubsection{The categorification theorem} Let $\E := \bigoplus_{b \geq 0} \E_b$ and $\F := \bigoplus_{b \geq 0} \F_b$. By \cref{thm:sl2comqi} and \cref{prop:cblfbiminduceKO}, we know that ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0)$ is a $U_q(\mathfrak{sl}_2)$-module, with action given by the pair $[\F], [\E]$. \begin{lem} \label{lem:rankcompare} We have $\dim_{\mathbb{Q}\pp{q,\lambda}} \bigl( {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}}_b,0) \bigr) \leq \dim_{\mathbb{Q}\pp{q,\lambda}} \bigl( M(\lambda) \otimes V(\underline{N})_{\lambda q^{r-2b}} \bigr)$. Moreover, ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0)$ is spanned by the classes $\{ [{\boldsymbol{P}}_\rho] \}_{\rho \in \mathcal{P}_b^{r,\und N}}$.
\end{lem} \begin{proof} It is well-known (see for example \cite[Lemma 7.2]{naissevaz1}) that whenever $k > n$, the unit element in $\nh_k$ can be rewritten as a combination of elements having $n$ consecutive dots somewhere on the left-most strand. Thus, for any $\rho' \in \mathcal{P}_b^r$, we obtain that $1_{\rho'}$ can be rewritten as a combination of elements factorizing through elements in $\{1_{\rho}\}_{\rho \in \mathcal{P}_b^{r,\und N}}$. \end{proof} We consider $ M(\lambda) \otimes V(\underline{N})$ over the ground ring $\mathbb{Q}\pp{q,\lambda}$ instead of $\mathbb{Q}(q,\lambda)$. \begin{thm}\label{thm:K0} There are isomorphisms of $U_q(\mathfrak{sl}_2)$-modules \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0) \cong M(\lambda) \otimes V(\underline{N}), \] and \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},d_N)\cong V(N) \otimes V(\underline{N}), \] for all $N \in \mathbb{N}$. \end{thm} \begin{proof} We have a $\mathbb{Q}\pp{q,\lambda}$-linear map \[ M(\lambda) \otimes V(\underline{N}) \rightarrow {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0), \quad v_\rho \mapsto [{\boldsymbol{P}}_\rho]. \] By \cref{lem:rankcompare}, this map is surjective. It commutes with the action of $K^{\pm 1}$ and $E$ because of \cref{prop:Tdecomp}. By \cref{prop:catshap}, the map intertwines the Shapovalov form with the bilinear form induced by the bifunctor $(-,-)$ on ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0)$. Since the Shapovalov form is non-degenerate, the map is also injective, and thus a $\mathbb{Q}\pp{q,\lambda}$-linear isomorphism. Since the map intertwines the Shapovalov form with the bifunctor $(-,-)$, and commutes with the action of $E$ and $K^{\pm 1}$, we deduce by non-degeneracy of the Shapovalov form that it also commutes with the action of $F$. Thus, it is a map of $U_q(\mathfrak{sl}_2)$-modules. The case ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},d_N) $ follows from \cref{thm:dNformal} together with \cite[Theorem 4.38]{webster}.
\end{proof} \section{A categorification of the blob algebra}\label{sec:catTLB} As in~\cite[\S7]{webster}, the cup and cap functors respect a categorical instance of the Temperley--Lieb algebra relations \eqref{eq:TLrels}--\eqref{eq:TLloopremov}. We additionally show that the double braiding functor respects a categorical version of the blob relations \eqref{eq:TLBloopremov} and \eqref{eq:TLBdoublebraid}. Note that Webster also proves that the cup and cap functors intertwine the categorical $U_q(\mathfrak{sl}_2)$-action, which categorifies the fact that the Temperley--Lieb algebra describes morphisms of $U_q(\mathfrak{sl}_2)$-modules. We start by proving the same for these functors in the dg-setting as well as for the double braiding functors: \begin{prop}\label{prop:catactioncommutes} We have natural isomorphisms $\E \circ \Xi \cong \Xi \circ \E$ and $\F \circ \Xi \cong \Xi \circ \F$, and also $\E \circ\B_i \cong \B_i \circ\E$, $\F \circ\B_i \cong \B_i \circ\F$, and similarly for $\overline\B_i$. \end{prop} \begin{proof} Since $\E$ and $\F$ are given by derived tensor product with a dg-bimodule that is cofibrant both as left and as right module, all compositions are given by usual tensor product of dg-bimodules. Then, the first isomorphism is equivalent to \[ 1_{b,1}(T^{\lambda,r}_{b+1}) \otimes_{b+1} X_{b+1} \cong X_{b} \otimes_{b}1_{b,1}(T^{\lambda,r}_{b+1}), \] which in turn follows from \cref{thm:X0basis} and \cref{prop:Tdecomp}. The case with $\F$ is identical, and so is the proof for $\B_i$ using \cref{thm:basisBi}. \end{proof} Then, we use all this to show that compositions of the functors $\B_i,\bar \B_i$ and $\Xi$ realize a categorification of $\mathcal{B}$. \subsection{Temperley--Lieb relations}\label{sec:catTLaction} This section is an extension of Webster's results~\cite[\S7]{webster} for the dg-enhanced KLRW algebra $T^{\lambda,r}$.
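For the reader's convenience, we recall the shape of the relations being categorified. In a standard presentation of the Temperley--Lieb algebra (the precise normalizations, in particular the value of the loop parameter $\delta$, are the ones fixed in \eqref{eq:TLrels}--\eqref{eq:TLloopremov}), the cup-cap generators $u_i$ satisfy
\begin{align*}
u_i u_{i \pm 1} u_i &= u_i, & u_i u_j &= u_j u_i \quad (|i - j| > 1), & u_i^2 &= \delta\, u_i.
\end{align*}
Below, \cref{cor:TLbiadj} lifts the first relation to the functors $\B_i, \overline\B_i$, while \cref{cor:TLloop} lifts the loop removal, with the loop parameter categorified by $q\,\text{Id}[1] \oplus q^{-1}\,\text{Id}[-1]$.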
\begin{prop} There is an isomorphism \[ \bar B_{i \pm 1} \otimes^{\Lderiv}_T B_i \cong T^{\lambda,r}, \] of $\mathbb{Z}^2$-graded $(T^{\lambda,r}, 0)$-$(T^{\lambda,r},0)$-$A_\infty$-bimodules. \end{prop} \begin{proof} We prove $\bar B_{i - 1} \otimes^{\Lderiv}_T B_i \cong T^{\lambda,r}$, the other case follows similarly. Using \cref{prop:catactioncommutes} and the fact that $\B_i \circ \mathfrak{I} \cong \mathfrak{I} \circ \B_i$ for $ i < r-1$ (where we recall $\mathfrak{I}$ is the induction along a red strand defined in \cref{sec:redind}), we can work locally, supposing that $i = r-1$ and $b_i = 0$. Then, we have that $\bar B_{i - 1} \otimes_T \br B_i $ looks like \[ \begin{tikzcd}[row sep = 1ex] & q\left( \tikzdiag[scale=.75]{ \draw (1,0) -- (1,1.5); % \filldraw [fill=white, draw=black] (-.25,.3) rectangle (2.25,1); \filldraw [fill=white, draw=white, dotted] (-.25,.3) rectangle (.15,1); % \draw [stdhl] (.5,0) -- (.5,1) .. controls (.5,1.75) and (1.5,1.75) .. (1.5,1)--(1.5,0); \draw [stdhl] (2,0) -- (2,1.5); % } \right)[1] \ar{dr} \ar[no head]{dr}{ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }}} & \\ q^2\left( \tikzdiag[scale=.75]{ \draw (1.5,0) -- (1.5,.5) (1,1) -- (1,1.5); % \filldraw [fill=white, draw=black] (-.25,.3) rectangle (2.25,1); \filldraw [fill=white, draw=white, dotted] (-.25,.3) rectangle (.15,1); % \draw [stdhl] (.5,0) -- (.5,1) .. controls (.5,1.75) and (1.5,1.75) .. (1.5,1) .. controls (1.5,.5) and (1,.5) .. (1,0); \draw [stdhl] (2,0) -- (2,1.5); % } \right)[2] \ar{ur} \ar[no head]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (2,0) -- (2,1); }}} \ar{dr} \ar[no head,swap]{dr}{ -\ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); \draw[stdhl] (0,0) .. 
controls (0,.5) and (1,.5) .. (1,1); }}} & \oplus & \tikzdiag[scale=.75]{ \draw (1.5,0) -- (1.5,.5) (1,1) -- (1,1.5); % \filldraw [fill=white, draw=black] (-.25,.3) rectangle (2.25,1); \filldraw [fill=white, draw=white, dotted] (-.25,.3) rectangle (.15,1); % \draw [stdhl] (.5,0) -- (.5,1) .. controls (.5,1.75) and (1.5,1.75) .. (1.5,1) .. controls (1.5,.5) and (1,.5) .. (1,0); \draw [stdhl] (2,0) -- (2,1.5); % } \\ & q\left( \tikzdiag[scale=.75]{ \draw (2,0) -- (2,.5) (1,1) -- (1,1.5); % \filldraw [fill=white, draw=black] (-.25,.3) rectangle (2.25,1); \filldraw [fill=white, draw=white, dotted] (-.25,.3) rectangle (.15,1); % \draw [stdhl] (.5,0) -- (.5,1) .. controls (.5,1.75) and (1.5,1.75) .. (1.5,1) .. controls (1.5,.5) and (1,.5) ..(1,0); \draw [stdhl] (1.5,0) .. controls (1.5,.5) and (2,.5) .. (2,1) -- (2,1.5); % } \right)[1] \ar{ur} \ar[no head,swap]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }}} & \end{tikzcd} \] which is isomorphic to \[ \begin{tikzcd}[row sep = 1ex] & T^{\lambda,r} \ar{dr} & \\ 0 \ar{ur} \ar{dr} & \oplus & 0 \\ & 0 \ar{ur} & \end{tikzcd} \] because of \cref{eq:killcup}. Note that it is an isomorphism of dg-bimodules, since all the higher composition maps of the $A_\infty$-structure must be zero by degree reasons, concluding the proof. \end{proof} \begin{cor}\label{cor:TLbiadj} There is a natural isomorphism $\bar \B_{i \pm 1} \circ \B_i \cong \text{Id}$. \end{cor} \begin{prop}\label{prop:distinguishedBitriangle} There is a distinguished triangle \[ q (T^{\lambda,r})[1] \xrightarrow{\eta_i} \overline{B}_i \otimes^{\Lderiv}_T B_i \xrightarrow{\overline \varepsilon_i} q^{-1} (T^{\lambda,r})[-1] \xrightarrow{0} \] of $\mathbb{Z}^2$-graded $(T^{\lambda,r}, 0)$-$(T^{\lambda,r},0)$-$A_\infty$-bimodules. 
\end{prop} \begin{proof} We have \[ \bar B_{i} \otimes^{\Lderiv}_T B_i \cong \bar B_{i} \rb \otimes_T B_i, \] which looks like \[ \begin{tikzcd}[row sep = 1ex] & 0 \ar{dr} & \\ q ( B_{i}^{\tikzRBR})[1] \ar{ur} \ar{ur} \ar{dr} & \oplus & q^{-1}( B_{i}^{\tikzRBR})[-1]. \\ & 0 \ar{ur} & \\ \end{tikzcd} \] Thus, since $B_{i}^{\tikzRBR} \cong T^{\lambda,r}$, we have that \[ H(\bar B_{i} \otimes^{\Lderiv}_T B_i) \cong q(T^{\lambda,r})[1] \oplus q^{-1}(T^{\lambda,r})[-1]. \] In order to compute $\eta_i$, recall (or see \cref{sec:unitandcounit}) that the unit of the adjunction $(B_i \otimes^{\Lderiv}_T -) \vdash (\RHOM_T(B_i, -))$ is given by \[ \eta_i' : T^{\lambda,r} \rightarrow \RHOM_T( B_i, B_i \otimes^{\Lderiv}_T T^{\lambda,r}) \cong \HOM_T( \br B_i, B_i), \quad t \mapsto \left[ x \mapsto \overline{x} \cdot t \right], \] where $\overline{x}$ is the image of $x$ under the map $\br B_i \twoheadrightarrow B_i$. Moreover, $\HOM_T( \br B_i, B_i)$ is given by \[ \begin{tikzcd}[row sep = 1ex] & 0 \ar[leftarrow]{dr} & \\ q^{-2} \HOM_T(T_{i,\ \tikzRBR}, B_i)[-2] \ar[leftarrow]{ur} \ar[leftarrow]{ur} \ar[leftarrow]{dr} & \oplus & \HOM(T_{i,\ \tikzRBR}, B_i), \\ & 0 \ar[leftarrow]{ur} & \\ \end{tikzcd} \] and then, $\eta_i'$ is the map $T^{\lambda,r} \xrightarrow{\simeq} \HOM(T_{i,\ \tikzRBR}, B_i) \cong B_i^{\tikzRBR}$ that adds a cup on the top. Thus, $\eta_i$ identifies $q(T^{\lambda,r})[1]$ with $q(B_i^{\tikzRBR})[1] \subset H(\bar B_{i} \otimes^{\Lderiv}_T B_i)$ in homology. Similarly, the counit of the adjunction $(\overline{B}_i \otimes^{\Lderiv} -) \vdash (\RHOM_T(\overline{B}_i, -))$ is \[ \overline \varepsilon' : \overline{B}_i \otimes^{\Lderiv}_T \RHOM_T (\overline{B}_i, T^{\lambda,r}) \cong \overline{B}_i \rb \otimes_T \HOM_T(\overline{B}_i, T^{\lambda,r}) \rightarrow T^{\lambda,r}, \quad t \otimes f \mapsto f(\overline t). 
\] Then, we obtain that $\overline{B}_i \rb \otimes_T \HOM_T(\overline{B}_i, T^{\lambda,r})$ is isomorphic to \[ \begin{tikzcd}[row sep = 1ex] & 0 \ar{dr} & \\ q^2 ( B_{i}^{\tikzRBR})[2] \ar{ur} \ar{ur} \ar{dr} & \oplus & B_{i}^{\tikzRBR}, \\ & 0 \ar{ur} & \\ \end{tikzcd} \] and thus, $\overline \varepsilon'$ is the isomorphism $B_{i}^{\tikzRBR} \xrightarrow{\simeq} T^{\lambda,r}$. Therefore, $\overline \varepsilon$ identifies $ q^{-1} (T^{\lambda,r})[-1] $ with $q^{-1}(B_i^{\tikzRBR})[-1] \subset H(\bar B_{i} \otimes^{\Lderiv}_T B_i)$ in homology. \end{proof} Because the connecting morphism in \cref{prop:distinguishedBitriangle} is zero, the triangle splits and we have \[ \overline{B}_i \otimes^{\Lderiv}_T B_i \cong q (T^{\lambda,r})[1] \oplus q^{-1} (T^{\lambda,r})[-1]. \] \begin{cor}\label{cor:TLloop} There is a natural isomorphism \[ \overline \B_i \circ \B_i \cong q \text{Id} [1] \oplus q^{-1} \text{Id} [-1]. \] \end{cor} \subsection{Blob relations} Proving the blob relations requires some preparation. \subsubsection{Quadratic relation} We define recursively the following element by setting $z_0 := 0$, \begin{align}\label{eq:defzn} z_1 &:= \ \tikzdiag{ \draw (0,-.5) -- (0,1.5); \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (0.25,.8) node[midway] { $z_1$}; } \ :=\ \tikzdiag{ \draw (0,-.5) -- (0,1.5); }\ , & z_{t+2} &:=\ \tikzdiagh{0}{ \draw (0,-.5) -- (0,1.5); \node at (.5,-.25) {\tiny $\dots$}; \node at (.5,1.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1.5); \draw (1.5,-.5) -- (1.5,1.5); \draw (2,-.5) -- (2,1.5); \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (2.25,.8) node[midway] { $z_{t+2}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (-0.15,-.85) -- node {$t$} (1.15,-.85) } \ := \ \tikzdiagh{0}{ \draw (0,-.5) -- (0,1.5); \node at (.5,-.25) {\tiny $\dots$}; \node at (.5,1.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1.5); \draw (1.5,-.5) -- (1.5,1) .. controls (1.5,1.25) and (2,1.25) .. 
(2,1.5); \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (1.5,1.25) .. (1.5,1.5); \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (1.75,.8) node[midway] { $z_{t+1}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (-0.15,-.85) -- node {$t$} (1.15,-.85) } \ + \ \tikzdiagh{0}{ \draw (0,-.5) -- (0,1.5); \node at (.5,-.25) {\tiny $\dots$}; \node at (.5,1.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1.5); \draw (2,-.5) .. controls (2,-.25) and (1.5,-.25) .. (1.5,0) -- (1.5,1) .. controls (1.5,1.25) and (2,1.25) .. (2,1.5); \draw (1.5,-.5) .. controls (1.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (1.5,1.25) .. (1.5,1.5) node [near end, tikzdot]{}; \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (1.75,.8) node[midway] { $z_{t+1}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (-0.15,-.85) -- node {$t$} (1.15,-.85) } \end{align} for all $t \geq 0$. Note that $z_2$ is given by a single crossing \[ \tikzdiag{ \draw (0,0) -- (0,1); \draw (.5,0) -- (.5,1); \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (0.75,.8) node[midway] { $z_2$}; } \ =\ \tikzdiag{ \draw (0,0) .. controls (0,.5) and (.5,.5) .. (.5,1); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1); } \] since the second term is zero in this case. One easily sees that $\deg_q(z_t) = 2-2t$. Define a map of left modules \[ \varphi_k^1 : \lambda q^2 (X_k)[1] \rightarrow X \otimes_T Y^1_k , \] as $\varphi_k^1 := \sum_{t = 0}^{k-1} \varphi_k^{1,t}$, where each \begin{align*} \varphi_k^{1,t}: &\lambda q^2 (X_k) [1] \rightarrow X \otimes_T Y^{1,t}_k \qquad \bigl(\cong \bigoplus_{\ell,\rho} \lambda q^{k-2t+1} (X 1_{1,k+\ell-1,\rho})[1] \bigr) , \end{align*} is given by multiplication on the bottom by \[ \tikzdiag{ \draw (.5,0) .. controls (.5,.5) and (1.25,.5) .. (1.25,1) -- (1.25,1.5); \draw (1.5,0) .. controls (1.5,.5) and (.5,.5) .. 
(.5,1) -- (.5,1.5) node[midway,tikzdot]{}; \node at(1.75,.15) {\tiny $\dots$}; \draw (2,0) .. controls (2,.5) and (1,.5) .. (1,1) -- (1,1.5) node[midway,tikzdot]{}; \draw (2.25,0) .. controls (2.25,.5) and (1.5,.5) .. (1.5,1) -- (1.5,1.5); \node at(2.5,.15) {\tiny $\dots$}; \draw (2.75,0) .. controls (2.75,.5) and (2,.5) .. (2,1) -- (2,1.5); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.35,-.35) -- node {$t$} (2.15,-.35); \draw[decoration={brace,raise=-8pt},decorate] (.4,1.85) -- node {$k$} (2.125,1.85); % \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.5) and (2.75,.5) .. (2.75,1) -- (2.75,1.5); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1.5); % \filldraw [fill=white, draw=black,rounded corners] (1.125,.9) rectangle (2.125,1.4) node[midway] { $z_{k-t}$}; } \otimes \bar 1_{\ell,\rho}. \] Also define a map of left modules \[ \varphi_k^0 : \bigoplus_{\ell,\rho}q^2 (T_b^{\lambda,r}1_{k,\ell,\rho} )[1] \rightarrow X \otimes_T Y^0_k, \] as $\varphi_k^0 := {\varphi_k^0}' + \sum_{t = 0}^{k-1} \varphi_k^{0,t}$, where each \begin{align*} {\varphi_k^0}' &: \bigoplus_{\ell,\rho}q^2 (T_b^{\lambda,r}1_{k,\ell,\rho} )[1] \rightarrow X \otimes_T {Y'}^0_k \qquad \bigl( \cong \bigoplus_{\ell, \rho }\lambda^{-1} q^k (X 1_{0,k+\ell,\rho}) \bigr), \\ &\tikzdiagh{0}{ \draw[stdhl] (1.5,0) node[below]{\small $1$} -- (1.5,1); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); % \draw (.5,0) -- (.5,1); \node at (.75,.5) {\tiny $\dots$}; \draw (1,0) --(1,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$k$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho} \mapsto - \sum_{t=0}^{k-1} \tikzdiag{ \draw (.5,-.5) .. controls (.5,.25) and (0,.25) .. (0,1.5) node[pos=.9,tikzdot]{}; \node at(.75,-.25) {\tiny $\dots$}; \draw (1,-.5) .. controls (1,.25) and (.5,.25) .. (.5,1.5) node[pos=.9,tikzdot]{}; % \draw (1.25,-.5) .. controls (1.25,.25) .. (-.5,.375) .. controls (.75,.5) .. (.75,1.5); % \draw (1.5,-.5) .. 
controls (1.5,.5) and (1,.5) .. (1,1.5); \node at(1.75,-.25) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,.5) and (1.5,.5) .. (1.5,1.5); % \filldraw [fill=white, draw=black,rounded corners] (.625,.9) rectangle (1.625,1.4) node[midway] { $z_{k-t}$}; % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.5,0) .. controls (2,.0) .. (2,1) -- (2,1.5) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1.5) node[pos=.45,nail]{}; % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.85) -- node {$t$} (1.1,-.85); \draw[decoration={brace,raise=-8pt},decorate] (-.1,1.85) -- node {$k$} (1.6,1.85); } \otimes \bar 1_{\ell,\rho}, \end{align*} and where \begin{align*} \varphi_k^{0,t} &: \bigoplus_{\ell,\rho}q^2 (T_b^{\lambda,r}1_{k,\ell,\rho} )[1] \rightarrow X \otimes_T Y^{0,t}_k \qquad \bigl(\cong \bigoplus_{\ell,\rho} \lambda q^{k-2t} (X 1_{0,k+\ell,\rho})[1] \bigr), \\ &\tikzdiagh{0}{ \draw[stdhl] (1.5,0) node[below]{\small $1$} -- (1.5,1); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); % \draw (.5,0) -- (.5,1); \node at (.75,.5) {\tiny $\dots$}; \draw (1,0) --(1,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$k$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho} \mapsto \tikzdiag{ % \draw (.5,-.5) .. controls (.5,0) and (0,0) .. (0,.5) .. controls (0,.75) and (.75,.75) .. (.75,1) -- (.75, 1.5); % \draw (.75,-.5) .. controls (.75,.25) and (0,.75) .. (0,1) -- (0,1.5) node[midway, tikzdot]{}; \node at(1,-.35) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,.25) and (.5,.75) .. (.5,1) -- (.5,1.5) node[midway, tikzdot]{}; % \draw (1.5,-.5) .. controls (1.5,.25) and (1,.75) .. (1,1) -- (1,1.5); \node at(1.75,-.35) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,.25) and (1.5,.75) .. 
(1.5,1) -- (1.5,1.5); % \filldraw [fill=white, draw=black,rounded corners] (.625,.9) rectangle (1.625,1.4) node[midway] { $z_{k-t}$}; % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.5,0) .. controls (2,.25) .. (2,.5) -- (2,1.5) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1.5) ; % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.65,-.85) -- node {$t$} (1.35,-.85); \draw[decoration={brace,raise=-8pt},decorate] (-.1,1.85) -- node {$k$} (1.6,1.85); } \otimes \bar 1_{\ell,\rho}. \end{align*} Recall that the unbraiding map (\cref{def:unbraidingmap}) \[ u : \lambda X \hookrightarrow T_b^{\lambda,r}, \] is given by \[ \tikzdiagh{0}{ \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); } \mapsto \tikzdiagh{0}{ \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) and (.25,.25) .. (.25,.5) .. controls (.25,.75) and (1,.75) .. (1,1); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); } \] \begin{lem}The diagram \[ \begin{tikzcd} X \otimes_T Y^1_k \ar[hookrightarrow]{r}{1 \otimes \imath_k} & X \otimes_T Y^0_k \ar[twoheadrightarrow]{r}{u \otimes \gamma_k} & \lambda^{-1} X \\ \lambda q^2 (X_k)[1] \ar{u}{\varphi_k^1} \ar[hookrightarrow]{r}{u} & \bigoplus_{\ell,\rho} q^2 (T_b^{\lambda,r}1_{k,\ell,\rho})[1] \ar{u}{\varphi_k^0} \ar{r} & 0 \ar{u} \end{tikzcd} \] commutes. \end{lem} \begin{proof} The proof is a straightforward computation using \cref{eq:dotredstrand} and \cref{eq:crossingslidered} together with \cref{eq:nailslidedcross}. We leave the details to the reader. 
\end{proof} Thus, there is an induced map \[ \varphi_k : \cone(\lambda q^2 (X_k) [1] \xrightarrow{u} \bigoplus_{\ell,\rho} q^2 (T_b^{\lambda,r} 1_{k,\ell,\rho})[1] )[1] \rightarrow \cone(X \otimes^{\Lderiv}_T X_k \xrightarrow{1\otimes u} \lambda^{-1} X_k), \] as left modules. \begin{thm}\label{thm:catdoublebraidphiiso} The map \[ \varphi := \sum_{k = 0}^{m} (-1)^k \varphi_k : \cone(\lambda q^2 X [1] \xrightarrow{u} q^2 T_b^{\lambda,r} [1] )[1] \rightarrow \cone(X \otimes^{\Lderiv}_T X \xrightarrow{1\otimes u} \lambda^{-1} X), \] is a quasi-isomorphism. \end{thm} \begin{proof} The statement can be proven by showing that $\cone(\varphi)$ has trivial homology, and thus is acyclic. This is done in detail in~\cref{sec:proofofacyclicity}. \end{proof} The next step is to prove that $\varphi$ defines a map of $A_\infty$-bimodules. Luckily, by the following proposition, we do not need to use any $A_\infty$-structure here. \begin{prop}\label{prop:XoLXisXoX} The map \[ X \otimes_T \br X \xrightarrow{1 \otimes \gamma} X \otimes_T X, \] is a quasi-isomorphism of $A_\infty$-bimodules. \end{prop} \begin{proof} Tensoring on the left is right exact, so \cref{lem:sesX0} gives us an exact sequence \[ X \otimes_T Y^1_k \xrightarrow{1 \otimes \imath_k} X \otimes_T Y^0_k \xrightarrow{1 \otimes \gamma_k} X_k \rightarrow 0. \] It is not hard to see that $1 \otimes \imath_k$ is injective, and thus we have a short exact sequence \[ 0 \rightarrow X \otimes_T Y^1_k \xrightarrow{1 \otimes \imath_k} X \otimes_T Y^0_k \xrightarrow{1 \otimes \gamma_k} X_k \rightarrow 0, \] so that $1 \otimes \gamma_k$ is a quasi-isomorphism. \end{proof} Taking a mapping cone preserves quasi-isomorphisms. Thus, we have a quasi-isomorphism \begin{equation}\label{eq:qimappingconegamma} \cone(X \otimes^{\Lderiv}_T X \xrightarrow{1\otimes u} \lambda^{-1} X) \xrightarrow{\simeq} \cone(X \otimes_T X \xrightarrow{1\otimes u} \lambda^{-1} X).
\end{equation} Let \[ \tilde \varphi : \cone\bigl(\lambda q^2 X [1] \xrightarrow{u} q^2 T_b^{\lambda,r}[1]\bigr)[1] \rightarrow \cone(X \otimes_T X \xrightarrow{1\otimes u} \lambda^{-1} X) \] be the map given by composing $\varphi$ with the quasi-isomorphism in~\cref{eq:qimappingconegamma}. We also write $\tilde \varphi^0 := (1\otimes \gamma) \circ \varphi^0$. Therefore, by \cref{lem:indAinftymap}, proving that $\varphi$ is a map of $A_\infty$-bimodules ends up being the same as proving that $\tilde \varphi^0$ is a map of dg-bimodules. \begin{thm}\label{thm:phiisAinfty} The map $\varphi$ is a map of $\mathbb{Z}^2$-graded $(T^{\lambda,r},0)$-$(T^{\lambda,r},0)$-$A_\infty$-bimodules. \end{thm} \begin{proof} The statement follows by proving that $\tilde \varphi^0$ is a map of dg-bimodules, which is done in detail in \cref{sec:proofofbimodulemap}. \end{proof} \begin{cor} There is an exact sequence \[ 0 \rightarrow \lambda q^2 (X) [1] \xrightarrow{u} q^2 (T_b^{\lambda,r})[1] \xrightarrow{\tilde \varphi^0} X \otimes_T X \xrightarrow{1\otimes u} \lambda^{-1} X \rightarrow 0, \] of dg-bimodules. \end{cor} \begin{cor}\label{cor:qi-Xquadratic} There is a quasi-isomorphism \[ \cone\bigl(\lambda q^2 \Xi [1] \rightarrow q^2 \text{Id} [1]\bigr)[1] \xrightarrow{\simeq} \cone( \Xi \circ \Xi \rightarrow \lambda^{-1} \Xi), \] of dg-functors. \end{cor} \subsubsection{Inverse of $\Xi$} Recall the notations from \cref{sec:cofX}. \begin{lem}\label{lem:Xklgenerated} As a right $(T^{\lambda,r},0)$-module, $1_{1,k+\ell-1,\rho}X$ is generated by the elements \begin{align}\label{eq:Xklgenerator} \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) ..
(.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-1$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho}, &&\text{and} && \tikzdiagh{0}{ \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); } \otimes \bar 1_{\ell+k-1,\rho}. \end{align} \end{lem} \begin{proof} The statement can be proven using an induction on $k$, as done in detail in \cref{sec:proofsofcatTLB}. \end{proof} \begin{lem}\label{lem:surjcircimath} The map \[ (- \circ \imath_k) : \HOM_T(Y^0_k, X) \twoheadrightarrow \HOM_T(Y^1_k, X), \] is surjective. \end{lem} \begin{proof} We have \begin{align*} \HOM_T(Y^0_k, X) &\cong \bigoplus_{\ell,\rho}\left( \lambda q^{-k} (1_{0,k+\ell,\rho} X) \oplus \bigoplus_{t=0}^{k-1} \lambda^{-1}q^{-(k-2t)}(1_{0,k+\ell,\rho}X)[-1]\right), \\ \HOM_T(Y^1_k, X) &\cong \bigoplus_{\ell,\rho}\left( \bigoplus_{t=0}^{k-1} \lambda^{-1} q^{-(k-2t+1)} (1_{1,k+\ell-1,\rho}X)[-1]\right). \end{align*} Then, the map \[ (- \circ \imath_k) : \lambda q^{-k} (1_{0,k+\ell,\rho} X) \oplus \lambda^{-1}q^{-(k-2t)}(1_{0,k+\ell,\rho}X)[-1] \rightarrow \lambda^{-1} q^{-(k-2t+1)} (1_{1,k+\ell-1,\rho}X)[-1] \] is given by gluing \[ \left( -\ \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls(0,.5) and (.5,.5) .. (.5,1); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5)..
(1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $t$} (1.1,-.35); } \otimes \bar 1_{\ell+k-1-t,\rho}, \ \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls(0,.5) and (.5,.5) .. (.5,1); } \otimes \bar 1_{\ell+k-1,\rho} \right) \] on the top of diagrams, for all $0 \leq t \leq k-1$. Then we observe that the map $(- \circ \imath_k) : \lambda q^{-k} (1_{0,k+\ell,\rho} X) \rightarrow \lambda^{-1} q^{-(k-2t+1)} (1_{1,k+\ell-1,\rho}X)[-1]$ sends \[ \tikzdiag[xscale=.5]{ \draw (1,0) -- (1,1); \node at(1.5,.5) {\tiny $\dots$}; \draw (2,0) -- (2,1); % \draw (2.5,0) .. controls (2.5,.5) and (3,.5) .. (3,1); \node at(3,.15) {\tiny $\dots$}; \node at(3.5,.85) {\tiny $\dots$}; \draw (3.5,0) .. controls (3.5,.5) and (4,.5) .. (4,1); % \draw (4,0) .. controls (4,.5) and (2.5,.5) .. (2.5,1); % \draw[stdhl] (.5,0) node[below]{\small $1$} .. controls (.5,.25) .. (-.5,.5) .. controls (.5,.75) .. (.5,1); \draw[fill=white, color=white] (-.6,.5) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1); % \tikzbrace{1}{2}{0}{\small $s$}; \tikzbraceop{1}{4}{1}{\small $k$}; % } \otimes \bar 1_{\ell,\rho}, \ \mapsto \ \begin{cases} 0, & \text{if $ s< t$}, \\ \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. 
(1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-1$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho}, & \text{if $s = t$}, \\ \tikzdiag{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); % \draw (1.5,0) -- (1.5,1); \node at (1.75,.5){\tiny $\dots$}; \draw (2,0) -- (2,1); % \draw (2.25,0) .. controls (2.25,.5) and (2.5,.5) .. (2.5,1); \node at (2.5,.15){\tiny $\dots$}; \draw (2.75,0) .. controls (2.75,.5) and (3,.5) .. (3,1); \draw (3,0) .. controls (3,.5) and (2.25,.5) .. (2.25,1); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $t$} (1.1,-.35); \tikzbraceop{.75}{2}{1}{\small $s-1$}; } \otimes \bar 1_{\ell,\rho}, &\text{if $s> t$}, \end{cases} \] for all $0 \leq s \leq k-1$. Thus, $(- \circ \imath_k) : \HOM_T(Y^0_k, X) \twoheadrightarrow \HOM_T(Y^1_k, X)$ has a triangular form when applied to the elements above, and is therefore surjective by \cref{lem:Xklgenerated}. \end{proof} \begin{prop}\label{prop:Xi-autoequiv} The functor $\Xi : \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0)$ is an autoequivalence, with inverse given by $\Xi^{-1} := \RHOM_T(X,-): \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0).$ \end{prop} \begin{proof} By \cref{lem:surjcircimath} and \cref{prop:gammaqi}, we have \[ \RHOM_T(X 1_\rho, X 1_{\rho'}) \cong \HOM_T(X 1_\rho, X 1_{\rho'}).
\] Then, we compute \[ \gdim \HOM_T(X 1_\rho, X 1_{\rho'}) = \gdim \HOM_T({\boldsymbol{P}}_\rho, {\boldsymbol{P}}_{\rho'}), \] using the fact that $\Xi$ decategorifies to the action of $\xi$. More precisely, as in \cite[\S4.7]{webster}, the bifunctor $\RHOM_T(-,-)$ decategorifies to a sesquilinear version of the Shapovalov form when restricted to a particular subcategory of $\mathcal{D}_{dg}(T^{\lambda,r},0)$, and this sesquilinear form satisfies $(\xi w, \xi w') = (w,w')$. Finally, we observe that the map \[ \HOM_T({\boldsymbol{P}}_\rho, {\boldsymbol{P}}_{\rho'}) \xhookrightarrow{\text{Id}_X \otimes (-)} \HOM_T(X 1_\rho, X 1_{\rho'}), \] is injective, since the map ${\boldsymbol{P}}_\rho \rightarrow X 1_\rho$ given by gluing \[ \tikzdiag{ \draw (.5,0) .. controls (.5,.25) and (.75,.25) .. (.75,.5) .. controls (.75,.75) and (.5,.75) .. (.5,1); \node at (1,.15) {\small $\dots$}; \node at (1,.85) {\small $\dots$}; \draw (1.5,0) .. controls (1.5,.25) and (1.75,.25) .. (1.75,.5) .. controls (1.75,.75) and (1.5,.75) .. (1.5,1); % \draw[stdhl] (2,0) node[below]{\small $1$} .. controls (2,.25) .. (0,.5) .. controls (2,.75) .. (2,1); \draw[fill=white, color=white] (-.25,.5) circle (.2cm); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); % \node[red] at(2.5,.5) {$\dots$}; } \] on the top of diagrams is injective. This can be seen by composing the above map ${\boldsymbol{P}}_\rho \rightarrow X 1_\rho$ with the injection $u : X 1_\rho \rightarrow {\boldsymbol{P}}_\rho$, and observing that the composite is injective. Therefore, $\RHOM_T(X 1_\rho, X 1_{\rho'}) \cong 1_\rho T^{\lambda,r} 1_{\rho'}$, and $\Xi$ is an autoequivalence. \end{proof} \subsubsection{Categorification of relation \cref{eq:TLBloopremov}.} \begin{lem}\label{lem:XB1qi} There is a quasi-isomorphism \[ X \otimes^{\Lderiv}_T B_1 \cong X \otimes_T \br B_1 \xrightarrow{\simeq} X \otimes_T B_1, \] of $A_\infty$-bimodules. \end{lem} \begin{proof} Let us write $X_{\tikzRRB} := X \otimes_T T_{1,\tikzRRB}$.
Then we have \[ X \otimes_T \br B_1 \cong \begin{tikzcd}[row sep = 1ex] & q (X_{\tikzBRR}) [1] \ar{dr} \ar[no head]{dr}{ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }}} & \\ q^2 (X_{\tikzRBR}) [2] \ar{ur} \ar[no head]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (2,0) -- (2,1); }}} \ar{dr} \ar[no head,swap]{dr}{ -\ {\tikzdiag[scale=.5]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); }}} & \oplus & X_{\tikzRBR} \\ & q (X_{\tikzRRB}) [1] \ar{ur} \ar[no head,swap]{ur}{ {\tikzdiag[scale=.5]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }}} & \\ \end{tikzcd} \] The statement follows by observing that the first map is injective, and its image coincides with the kernel of the second one. \end{proof} Our goal will be to show the following: \begin{prop}\label{prop:blobbubble} There is a quasi-isomorphism \[ \lambda q (T^{\lambda,r})[1] \oplus \lambda^{-1} q^{-1} (T^{\lambda,r}) [-1] \xrightarrow{\simeq} \bar B_1 \otimes^{\Lderiv}_T X \otimes^{\Lderiv}_T B_1, \] of $A_\infty$-bimodules. \end{prop} For this, we will need to understand the left $A_\infty$-action on $B_1 \rb$: \[ B_1 \rb := \begin{tikzcd}[row sep = 1ex] & T^{\ \tikzBRR} \ar{dr} \ar[no head]{dr}{ {\tikzdiag[scale=.5,yscale=-1]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (2,0) -- (2,1); }}} & \\ q (T^{\ \tikzRBR}) [1] \ar{ur} \ar[no head]{ur}{ {\tikzdiag[scale=.5,yscale=-1]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. 
(0,1); \draw[stdhl] (2,0) -- (2,1); }}} \ar{dr} \ar[no head,swap]{dr}{ -\ {\tikzdiag[scale=.5,yscale=-1]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); }}} & \oplus & q^{-1}(T^{\ \tikzRBR})[-1]. \\ & T^{\ \tikzRRB} \ar{ur} \ar[no head,swap]{ur}{ {\tikzdiag[scale=.5,yscale=-1]{ \draw (0,0) .. controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); }}} & \\ \end{tikzcd} \] We start by constructing a composition map $T \otimes B_1 \rb \rightarrow B_1 \rb$, by defining it on each generator of $T$. We extend it by first rewriting elements of $T$ in terms of basis elements and then recursively applying the definition on generating elements (so that it is well-defined). Dots and crossings act on each of the summands by simply adding the three missing vertical strands between the $\lambda$-strand and the rest of the diagram, and gluing on top.
For example in $q^{-1} (T^{\ \tikzRBR}) [-1]$, we have \[ \tikzdiagh[xscale=1.25]{0}{ \draw [vstdhl] (-.25,0) node[below]{\small $\lambda$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \draw [stdhl] (.75,0) node[below]{\small $1$} -- (.75,1); % \node[red] at(1.125,.125) { $\dots$}; \node[red] at(1.125,.875) { $\dots$}; % \draw [stdhl] (1.5,0) node[below]{\small $1$} -- (1.5,1); % \draw (1.75,0) -- (1.75,1); \node at(2,.125) {\tiny $\dots$}; \node at(2,.875) {\tiny $\dots$}; \draw (2.25,0) -- (2.25,1); % \draw [stdhl] (2.5,0) node[below]{\small $1$} -- (2.5,1); % \draw (2.75,0) -- (2.75,1); \node at(3,.125) {\tiny $\dots$}; \node at(3,.875) {\tiny $\dots$}; \draw (3.25,0) -- (3.25,1); % \filldraw [fill=white, draw=black] (-.125,.25) rectangle (3.375,.75) node[midway] { $D$}; } \ \mapsto \ \tikzdiagh[xscale=1.25]{0}{ \draw [vstdhl] (-1,0) node[below]{\small $\lambda$} -- (-1,1); % \draw[stdhl] (-.75,0) node[below]{\small $1$} -- (-.75,1); \draw (-.5,0) -- (-.5,1); \draw[stdhl] (-.25,0) node[below]{\small $1$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \draw [stdhl] (.75,0) node[below]{\small $1$} -- (.75,1); % \node[red] at(1.125,.125) { $\dots$}; \node[red] at(1.125,.875) { $\dots$}; % \draw [stdhl] (1.5,0) node[below]{\small $1$} -- (1.5,1); % \draw (1.75,0) -- (1.75,1); \node at(2,.125) {\tiny $\dots$}; \node at(2,.875) {\tiny $\dots$}; \draw (2.25,0) -- (2.25,1); % \draw [stdhl] (2.5,0) node[below]{\small $1$} -- (2.5,1); % \draw (2.75,0) -- (2.75,1); \node at(3,.125) {\tiny $\dots$}; \node at(3,.875) {\tiny $\dots$}; \draw (3.25,0) -- (3.25,1); % \filldraw [fill=white, draw=black] (-.125,.25) rectangle (3.375,.75) node[midway] { $D$}; } \] The action of the nail is a bit trickier. 
On $q (T^{\ \tikzRBR}) [1]$ and on $q^{-1} (T^{\ \tikzRBR}) [-1]$ it acts by gluing \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ \mapsto \ \tikzdiagh{0}{ \draw (2,-.5) .. controls (2,-.25) .. (0,0) .. controls (2,.25) .. (2,.5); % \draw[stdhl] (.5,-.5) -- (.5,.5); \draw (1,-.5) -- (1,.5); \draw[stdhl] (1.5,-.5) -- (1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \] on the top of the diagrams. On $T^{\ \tikzBRR}$ it acts by \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ \mapsto \ \left( \tikzdiagh{0}{ \draw (2,-.5) .. controls (2,-.25) .. (0,0) .. controls (2,.25) .. (2,.5); % \draw (.5,-.5) -- (.5,.5); \draw[stdhl] (1,-.5) -- (1,.5); \draw[stdhl] (1.5,-.5) -- (1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ - \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); % \draw (2,-.5) -- (2,.5); \draw[stdhl] (1,-.5) -- (1,.5); \draw[stdhl] (1.5,-.5) -- (1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ ,\ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (2,.25) .. (2,.5); % \draw[stdhl] (1,-.5) .. controls (1,0) and (.5,0) .. (.5,.5); \draw[stdhl] (1.5,-.5) .. controls (1.5,0) and (1,0) .. (1,.5); \draw (2,-.5) .. controls (2,0) and (1.5,0) .. 
(1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \right) \ \in T^{\ \tikzBRR} \oplus T^{\ \tikzRRB}, \] and on $T^{\ \tikzRRB}$ by \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ \mapsto \ \left( \tikzdiagh[yscale=-1]{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (2,.25) .. (2,.5); % \draw[stdhl] (1,-.5) .. controls (1,0) and (.5,0) .. (.5,.5); \draw[stdhl] (1.5,-.5) .. controls (1.5,0) and (1,0) .. (1,.5); \draw (2,-.5) .. controls (2,0) and (1.5,0) .. (1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) -- (0,.5) node [midway,nail]{} node[below]{\small $\lambda$}; } \ ,\ \tikzdiagh{0}{ \draw (2,-.5) .. controls (2,-.25) .. (0,0) .. controls (2,.25) .. (2,.5); % \draw[stdhl] (.5,-.5) -- (.5,.5); \draw[stdhl] (1,-.5) -- (1,.5); \draw (1.5,-.5) -- (1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \right) \ \in T^{\ \tikzBRR} \oplus T^{\ \tikzRRB}. \] One can easily verify that this respects the differential in $B_1 \rb$. The higher multiplication maps $T \otimes B_1 \rb \otimes T \rightarrow B_1 \rb$ and $T \otimes T \otimes B_1 \rb \rightarrow B_1 \rb$ measure the failure of the map $T \otimes B_1 \rb \rightarrow B_1 \rb$ to be a left $T$-action. Concretely, this means that we can compute these higher multiplication maps by looking at how both sides of each defining relation of $T$ act on $B_1 \rb$. For example, the relation \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) node[midway, tikzdot]{} .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ = \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) ..
controls (.5,.25) .. (.5,.5) node[midway, tikzdot]{}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \] is respected on $q^{-1} (T^{\ \tikzRBR}) [-1]$ up to adding the elements appearing on the right-hand side of the following equation: \[ \tikzdiagh{0}{ \draw (2,-.5) .. controls (2,-.25) .. (0,0) node[pos=.3,tikzdot]{} .. controls (2,.25) .. (2,.5); % \draw[stdhl] (.5,-.5) -- (.5,.5); \draw (1,-.5) -- (1,.5); \draw[stdhl] (1.5,-.5) -- (1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ = \ \tikzdiagh{0}{ \draw (2,-.5) .. controls (2,-.25) .. (0,0) .. controls (2,.25) .. (2,.5) node[pos=.7,tikzdot]{}; % \draw[stdhl] (.5,-.5) -- (.5,.5); \draw (1,-.5) -- (1,.5); \draw[stdhl] (1.5,-.5) -- (1.5,.5); % \draw[fill=white, color=white] (-.1,0) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ + \ \tikzdiagh{0}{ \draw (2,-.5) .. controls (2,-.125) .. (0,.25) .. controls (1,.375) .. (1,.5); % \draw (1,-.5) .. controls (1,0) and (2,0) .. (2,.5); \draw[stdhl] (.5,-.5) -- (.5,.5); \draw[stdhl] (1.5,-.5) .. controls (1.5,-.25) and (1,-.25) .. (1,0) .. controls (1,.25) and (1.5,.25) .. (1.5,.5); % \draw[fill=white, color=white] (-.1,.25) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [pos=.75,nail]{}; } \ - \ \tikzdiagh[yscale=-1]{0}{ \draw (2,-.5) .. controls (2,-.125) .. (0,.25) .. controls (1,.375) .. (1,.5); % \draw (1,-.5) .. controls (1,0) and (2,0) .. (2,.5); \draw[stdhl] (.5,-.5) -- (.5,.5); \draw[stdhl] (1.5,-.5) .. controls (1.5,-.25) and (1,-.25) .. (1,0) .. controls (1,.25) and (1.5,.25) ..
(1.5,.5); % \draw[fill=white, color=white] (-.1,.25) circle (.1cm); \draw[vstdhl] (0,-.5) -- (0,.5) node [pos=.75,nail]{} node[below]{\small $\lambda$}; } \] so that the higher multiplication map $T \otimes T \otimes q^{-1} (T^{\ \tikzRBR}) [-1] \rightarrow B_1 \rb$ gives \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ \otimes \ \tikzdiagh{0}{ \draw (.5,-.5) -- (.5,.5) node[midway, tikzdot]{}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5); } \ \otimes \ 1 \ \mapsto \ \left( \tikzdiagh{0}{ \draw (2,-.5) .. controls (2,-.125) .. (0,.25) .. controls (.5,.375) .. (.5,.5); % \draw (1,-.5) .. controls (1,0) and (2,0) .. (2,.5); \draw[stdhl] (.5,-.5) .. controls (.5,0) and (1,0) .. (1,.5); \draw[stdhl] (1.5,-.5) .. controls (1.5,-.25) and (1.125,-.25) .. (1.125,0) .. controls (1.125,.25) and (1.5,.25) .. (1.5,.5); % \draw[fill=white, color=white] (-.1,.25) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [pos=.75,nail]{}; } \ , - \ \tikzdiagh[yscale=-1]{0}{ \draw (2,-.5) .. controls (2,-.125) .. (0,.25) .. controls (1,.375) .. (1,.5); % \draw (1.5,-.5) .. controls (1.5,0) and (2,0) .. (2,.5); \draw[stdhl] (.5,-.5) -- (.5,.5); \draw[stdhl] (1,-.5) .. controls (1,0) and (1.5,0) .. (1.5,.5); % \draw[fill=white, color=white] (-.1,.25) circle (.1cm); \draw[vstdhl] (0,-.5) -- (0,.5) node [pos=.75,nail]{} node[below]{\small $\lambda$}; } \right) \ \in T^{\ \tikzBRR} \oplus T^{\ \tikzRRB}. \] Note that this means the higher maps only involve elements coming from \cref{eq:relNail}. Also, one can easily verify that the other two relations in \cref{eq:relNail} are already respected for the multiplication map $T \otimes q^{-1} (T^{\ \tikzRBR}) [-1] \rightarrow B_1 \rb$, so that our computation above completely determines $T \otimes T \otimes q^{-1} (T^{\ \tikzRBR}) [-1] \rightarrow B_1 \rb$.
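The higher multiplications above are computed from the defining local relations of $T$, which include nilHecke-type relations between dots and crossings. As an informal sanity check, independent of the diagrammatics, such relations can be verified in the standard polynomial representation of the nilHecke algebra on two strands, where a dot acts by multiplication by a variable and a crossing by the divided-difference operator. The following sympy sketch is our own illustration (the name `partial` is ours, not notation from the paper):

```python
import sympy as sp

# Polynomial representation of the nilHecke algebra on two strands:
# a dot acts by multiplication by x1 (resp. x2), and a crossing acts
# by the divided-difference operator below.
x1, x2 = sp.symbols('x1 x2')

def partial(f):
    """Divided-difference operator: (f - s(f)) / (x1 - x2)."""
    swapped = f.subs({x1: x2, x2: x1}, simultaneous=True)
    return sp.cancel((f - swapped) / (x1 - x2))

f = (x1 + 2*x2)**3 + x1*x2  # arbitrary test polynomial

# Dot-slide relation: x1 * partial(f) - partial(x2 * f) = f.
assert sp.simplify(x1 * partial(f) - partial(x2 * f) - f) == 0

# A crossing squares to zero: partial(partial(f)) = 0.
assert sp.simplify(partial(partial(f))) == 0
```

The second assertion holds because $\partial(f)$ is always a symmetric polynomial and $\partial$ kills symmetric polynomials.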
There is a similar higher multiplication map $T \otimes q^{-1} (T^{\ \tikzRBR}) [-1] \otimes T \rightarrow B_1 \rb$, which is non-trivial in the case \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ \otimes \ 1 \ \otimes \ \tikzdiagh{0}{ \draw (.5,-.5) -- (.5,.5) node[midway, tikzdot]{}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5); } \] for similar reasons. We will not need to compute the other higher composition maps. \begin{proof}[Proof of \cref{prop:blobbubble}] Tensoring $B_1 \rb$ with $X \otimes_T B_1$ gives a complex where the elements are locally of the form \[ \begin{tikzcd} &0 \ar{dr}& \\ q\left( \tikzdiag[scale=.75]{ \draw (1,.5) -- (1,2); \draw[stdhl] (.5,1) .. controls (.5,1.25) .. (0,1.5) .. controls (.5,1.75) .. (.5,2); \draw [stdhl] (.5,1) .. controls (.5,.25) and (1.5,.25) .. (1.5,1)--(1.5,2); \draw[fill=white, color=white] (-.1,1.5) circle (.1cm); \draw[vstdhl] (0,.5) node[below]{\small $\lambda$} -- (0,2); } \right)[1] \ar{ur} \ar{dr} \ar[no head,swap]{dr}{ -\ {\tikzdiag[scale=.5,yscale=-1]{ \draw (1,0) .. controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (-1,0) -- (-1,1); \draw[stdhl] (0,0) .. controls (0,.5) and (1,.5) .. (1,1); }}} && q^{-1}\left( \tikzdiag[scale=.75]{ \draw (1,.5) -- (1,2); \draw[stdhl] (.5,1) .. controls (.5,1.25) .. (0,1.5) .. controls (.5,1.75) .. (.5,2); \draw [stdhl] (.5,1) .. controls (.5,.25) and (1.5,.25) .. (1.5,1)--(1.5,2); \draw[fill=white, color=white] (-.1,1.5) circle (.1cm); \draw[vstdhl] (0,.5) node[below]{\small $\lambda$} -- (0,2); } \right)[-1] \\ & { \tikzdiagh[yscale=-1,scale=.75]{0}{ \draw[stdhl] (1.5,0) -- (1.5,1) ; \draw (.5,0) .. controls (.5,.25) and (1,.25) .. (1,1) -- (1,1.5); \draw[stdhl] (1,0).. controls (1,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) ; \draw [stdhl] (.5,1) .. controls (.5,1.75) and (1.5,1.75) .. 
(1.5,1); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,0) -- (0,1.5) node[below]{\small $\lambda$}; }\ , \tikzdiagh[yscale=-1,scale=.75]{0}{ \draw[stdhl] (1.5,0) -- (1.5,1) ; \draw (.5,0) .. controls (.5,.125) .. (0,.25) .. controls (1,.5) .. (1,1) -- (1,1.5); \draw[stdhl] (1,0).. controls (1,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) ; \draw [stdhl] (.5,1) .. controls (.5,1.75) and (1.5,1.75) .. (1.5,1); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,0) -- (0,1) node[nail, pos=.25]{} -- (0,1.5)node[below]{\small $\lambda$} ; } } \ar[swap]{ur}{0} & \end{tikzcd} \] which, after eliminating the acyclic subcomplex, yields \[ \tikzdiagh[yscale=-1,scale=.75]{0}{ \draw[stdhl] (1.5,0) -- (1.5,1) ; \draw (.5,0) .. controls (.5,.125) .. (0,.25) .. controls (1,.5) .. (1,1) -- (1,1.5); \draw[stdhl] (1,0).. controls (1,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) ; \draw [stdhl] (.5,1) .. controls (.5,1.75) and (1.5,1.75) .. (1.5,1); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,0) -- (0,1) node[nail, pos=.25]{} -- (0,1.5)node[below]{\small $\lambda$} ; } \ \oplus \ q^{-1}\left( \tikzdiag[scale=.75]{ \draw (1,.5) -- (1,2); \draw[stdhl] (.5,1) .. controls (.5,1.25) .. (0,1.5) .. controls (.5,1.75) .. (.5,2); \draw [stdhl] (.5,1) .. controls (.5,.25) and (1.5,.25) .. (1.5,1)--(1.5,2); \draw[fill=white, color=white] (-.1,1.5) circle (.1cm); \draw[vstdhl] (0,.5) node[below]{\small $\lambda$} -- (0,2); } \right)[-1] \] All higher multiplication maps vanish: apart from $T \otimes T \otimes (B_1\rb \otimes_T X \otimes B_1) \rightarrow (B_1\rb \otimes_T X \otimes B_1)$ and $T \otimes ( B_1\rb \otimes_T X \otimes B_1) \otimes T \rightarrow (B_1\rb \otimes_T X \otimes B_1)$, all of them are zero for degree reasons, and these remaining two are zero by the calculations above.
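The elimination step used above is an instance of the standard Gaussian-elimination lemma for complexes: a two-term subcomplex on which the differential restricts to an isomorphism is acyclic and can be cancelled without changing the quasi-isomorphism type. As a toy linear-algebra illustration (our own sketch over $\mathbb{Q}$, not the actual bimodule computation), for a two-term complex whose differential is a matrix with a unit entry:

```python
import sympy as sp

# A two-term complex 0 -> C1 --d--> C0 -> 0 over Q, where the
# differential has a unit entry d[0,0] (an isomorphism between the
# first summands of C1 and C0, i.e. an acyclic two-term subcomplex).
d = sp.Matrix([
    [1, 2, 0],
    [3, 1, 4],
    [4, 3, 4],   # = row1 + row2, so d has rank 2
])

def homology_dims(d):
    """(dim H1, dim H0) of the complex 0 -> C1 -> C0 -> 0."""
    r = d.rank()
    return (d.cols - r, d.rows - r)

# Gaussian elimination: cancel the acyclic subcomplex spanned by the
# first basis vectors, correcting the differential by -C A^{-1} B.
A = d[0, 0]       # the unit entry
B = d[0, 1:]      # 1 x (n-1)
C = d[1:, 0]      # (m-1) x 1
D = d[1:, 1:]
d_reduced = D - C * A**-1 * B

# The quasi-isomorphism type (here: homology dimensions) is unchanged.
assert homology_dims(d) == homology_dims(d_reduced)
```

The correction term $-CA^{-1}B$ is what produces the extra nailed diagram surviving the elimination in the proof above.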
Therefore, what remains is isomorphic to $\lambda q (T^{\lambda,r})[1] \oplus \lambda^{-1} q^{-1} (T^{\lambda,r}) [-1]$, as dg-bimodules. We conclude by applying \cref{lem:XB1qi}. \end{proof} \begin{cor}\label{cor:qi-bubbleremv} There is a quasi-isomorphism \[ \lambda q (\text{Id})[1] \oplus \lambda^{-1} q^{-1} (\text{Id}) [-1] \xrightarrow{\simeq} \bar \B_1 \circ \Xi \circ \B_1, \] of dg-functors. \end{cor} \subsection{The blob 2-category}\label{sec:blob2cat} In this section, we suppose $\Bbbk$ is a field. Let $\mathfrak B(r,r')$ be the subcategory of dg-functors $\mathcal{D}_{dg}(T^{\lambda,r}, 0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r'}, 0)$ c.b.l.f. generated by all compositions of $\Xi, \B_i$ and $\bar \B_i$, and the identity functor whenever $r = r'$, where c.b.l.f. generated means it is given by certain (potentially infinite) iterated extensions of these objects (see \cref{def:cblfgenerated} for a precise definition). As explained in \cref{sec:cblfdgfunctors}, there is an induced morphism \[ {}_\mathbb{Q} \boldsymbol{K}_0^\Delta(\mathfrak B(r,r')) \xrightarrow{\eqref{eq:K0homK0}} \Hom_{\mathbb{Q}\pp{q,\lambda}}( {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,r}, 0), {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,r'}, 0)), \] sending the equivalence class of an exact dg-functor to its induced map on the asymptotic Grothendieck groups of its source and target (this is similar to the fact that an exact functor between triangulated categories induces a map on their triangulated Grothendieck groups). Recall the blob category $\mathcal{B}$, but now consider it as defined over $\mathbb{Q}\pp{q,\lambda}$ instead of $\mathbb{Q}(q,\lambda)$. \begin{thm} There is an isomorphism \[ \Hom_\mathcal{B}(r,r') \cong {}_\mathbb{Q} \boldsymbol{K}_0^\Delta(\mathfrak B(r,r')).
\] \end{thm} \begin{proof} Comparing the action of $\mathcal{B}$ on $M \otimes V^r$ from \cref{sec:blobalgebra} with the cofibrant replacement $\br X$ from \cref{sec:cofX}, and $\br B_i$ and $\br \bar B_i$ from \cref{sec:cofBi}, we deduce that there is a commutative diagram \[ \begin{tikzcd} \Hom_\mathcal{B}(r,r') \ar[hookrightarrow]{r}{\eqref{eq:BcongMV}} \ar[twoheadrightarrow,swap]{d}{f} & \Hom_{\mathbb{Q}\pp{q,\lambda}}(M \otimes V^r, M \otimes V^{r'}) \\ {}_\mathbb{Q} \boldsymbol{K}_0^\Delta(\mathfrak B(r,r')) \ar[swap]{r}{\eqref{eq:K0homK0}}& \Hom_{\mathbb{Q}\pp{q,\lambda}}( {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,r}, 0), {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,r'}, 0) ) \ar[sloped]{u}{\simeq} \end{tikzcd} \] where the arrow $f$ is the obvious surjective one, sending $\xi$ to $[\Xi]$, and cups/caps to $[\B_i]$/$[\bar \B_i]$. Since the diagram commutes, \cref{thm:BcongMV} implies that $f$ is injective, and thus an isomorphism. \end{proof} In particular, if we write $\mathfrak B_r := \mathfrak B(r,r)$, then we have: \begin{cor}\label{cor:twoblobalg} There is an isomorphism of $\mathbb{Q}\pp{q,\lambda}$-algebras \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(\mathfrak{B}_r) \cong \mathcal{B}_r. \] \end{cor} By Faonte~\cite{faonte}, we know that $A_\infty$-categories form an $(\infty,2)$-category, where the hom-spaces are given by Lurie's dg-nerves \cite{lurie} of the dg-categories of $A_\infty$-functors (or equivalently quasi-functors, see \cref{sec:dgfunctors}). Thus, we can define the following: \begin{defn} Let $\mathfrak{B}$ be the $(\infty,2)$-category defined by \begin{itemize} \item objects are non-negative integers $r \in \mathbb{N}$ (corresponding to $\mathcal{D}_{dg}(T^{\lambda,r},0)$); \item $\Hom_{\mathfrak{B}}(r,r')$ is Lurie's dg-nerve of the dg-category $\mathfrak B(r,r')$. \end{itemize} We refer to $\mathfrak{B}$ as the \emph{blob 2-category}.
\end{defn} We define ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(\mathfrak{B})$ to be the category whose objects are non-negative integers $r \in \mathbb{N}$ and whose hom-spaces are given by the asymptotic Grothendieck groups of the homotopy categories of $\Hom_{\mathfrak{B}}(r,r')$. These hom-spaces coincide with ${}_\mathbb{Q} \boldsymbol{K}_0^\Delta(\mathfrak B(r,r'))$. \begin{cor}\label{cor:twoblob} There is an equivalence of categories \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(\mathfrak{B}) \cong \mathcal{B}. \] \end{cor} \section{Detailed proofs and computations}\label{sec:computations} We give the detailed computations used to prove various results of the paper. \subsection{Proofs of \cref{sec:qsltTLB}} \begin{citelem}{lem:explicitaction} The action of $\mathcal{B}_r$ translates in terms of $v_\rho$-vectors of $M \otimes V^r$ as \begin{align} \tag{\ref*{eq:caponk}} \tikzdiagh[scale=0.75]{2}{ \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); \draw[red] (0,0) -- (0,1); \node at (.5,.5){\small $\dots$}; \draw[red] (1,0) -- (1,1); \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. (2,0) ; \draw[red] (2.5,0) -- (2.5,1); \node at (3,.5){\small $\dots$}; \draw[red] (3.5,0) -- (3.5,1); % \tikzbrace{-.5}{1}{-0.2}{$i$}; } :& v_{(\dots, b_{i-1}, b_i, b_{i+1}, b_{i+2}, \dots)} \mapsto -q^{-1} [b_i]_q v_{(\dots, b_{i-1} + b_i + b_{i+1} - 1, b_{i+2}, \dots)}, \\ \tag{\ref*{eq:cuponk}} \tikzdiagh[scale=0.75,yscale=-1]{2}{ \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); \draw[red] (0,0) -- (0,1); \node at (.5,.5){\small $\dots$}; \draw[red] (1,0) -- (1,1); \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. (2,0) ; \draw[red] (2.5,0) -- (2.5,1); \node at (3,.5){\small $\dots$}; \draw[red] (3.5,0) -- (3.5,1); % \tikzbrace{-.5}{1}{1.9}{$i$}; } :&v_{\rho} \mapsto q[2]_q v_{(\dots, b_{i-1},1 ,0 ,b_{i}, \dots)} - q v_{(\dots, b_{i-1}+1, 0, 0,b_{i}, \dots)} -q v_{(\dots, b_{i-1},0 , 1,b_{i}, \dots)}, \\ \tag{\ref*{eq:xionk}} \tikzdiag[scale=0.75]{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) ..
controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[red] (0.5,0) -- (0.5,1); \node at (1,0.5){\small $\dots$}; \draw[red] (1.5,0) -- (1.5,1); \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); } :&v_{(b_0 , b_1 ,\dots)} \mapsto (\lambda^{-1}q^{b_0} - \lambda q [b_0]_q) v_{(0,b_0+b_1,\dots)} + \lambda q^2 [b_0]_q v_{(1,b_0+b_1-1,\dots)}. \end{align} \end{citelem} \begin{proof} We start with the cap. We have \[ v_{(\ldots,b_{i-1},b_i,b_{i+1},b_{i+2},\ldots)} = [b_i]_q v_{(\ldots,b_{i-1}+b_i-1,1,b_{i+1},b_{i+2},\ldots)} - [b_i-1]_q v_{(\ldots,b_{i-1}+b_i,0,b_{i+1},b_{i+2},\ldots)}, \] and we easily check that $v_{(\ldots,b_{i-1}+b_i-1,1,b_{i+1},b_{i+2},\ldots)}$ is sent to $-q^{-1}v_{(\ldots,b_{i-1}+b_i+b_{i+1}-1,b_{i+2},\ldots)}$ and $v_{(\ldots,b_{i-1}+b_i,0,b_{i+1},b_{i+2},\ldots)}$ is sent to $0$. We now turn to the cup. It suffices to do the computation for $i=r+1$ because of the recursive definition of $v_\rho$. By definition, $v_{(b_0,\ldots,b_n)}$ is sent to $-qv_{(b_0,\ldots,b_n)} \otimes v_{1,0}\otimes F v_{1,0}+v_{(b_0,\ldots,b_n)}\otimes Fv_{1,0}\otimes v_{1,0}$. Since \[ v_{(b_0,\ldots,b_n)} \otimes v_{1,0}\otimes F v_{1,0} = v_{(b_0,\ldots,b_n,0,1)} - q^2v_{(b_0,\ldots,b_n+1,0,0)} - q v_{(b_0,\ldots,b_n)} \otimes F v_{1,0}\otimes v_{1,0}, \] and \[ v_{(b_0,\ldots,b_n)} \otimes F v_{1,0}\otimes v_{1,0} = v_{(b_0,\ldots,b_n,1,0)} - q v_{(b_0,\ldots,b_n+1,0,0)}, \] one finds the expected formula. Finally, we finish with $\xi$. Using the fact that $\xi$ is a morphism of $U_q(\mathfrak{sl}_2)$-modules, it suffices to consider the case of the vector $v_{(b_0,b_1)}$. One may check that $v_{(b_0,b_1)}=[b_0]_q v_{(1,b_0+b_1-1)}-[b_0-1]_q v_{(0,b_0+b_1)}$ and therefore \[ \xi(v_{(b_0,b_1)})=[b_0]_q F^{b_0+b_1-1}\xi(v_{(1,0)})-[b_0-1]_q F^{b_0+b_1}\xi(v_{(0,0)}). \] Using the definition of $\xi$ we have \[ \xi(v_{(0,0)}) = \lambda^{-1}v_{(0,0)}, \] and \[ \xi(v_{(1,0)}) = \lambda q^2v_{(1,0)}-q(\lambda-\lambda^{-1})v_{(0,1)}. 
\] Hence we deduce that \[ \xi(v_{(b_0,b_1)}) = \lambda q^2[b_0]_q v_{(1,b_0+b_1-1)} -(q(\lambda-\lambda^{-1})[b_0]_q+\lambda^{-1}[b_0-1]_q)v_{(0,b_0+b_1)}. \] We conclude by checking that $\lambda^{-1}q[b_0]_q-\lambda^{-1}[b_0-1]_q =\lambda^{-1}q^{b_0}$. \end{proof} \subsection{Proofs of \cref{sec:bimod}}\label{sec:proofsofsecbimod} \begin{citelem}{lem:gammasurjective} The map $\gamma_k : \br X_k \rightarrow X_k$ is surjective. \end{citelem} \begin{proof} First, we recall the following well-known relation \begin{equation}\label{eq:nhdoublecrossingid} \tikzdiag{ \draw (0,0) -- (0,1); \draw (.5,0) -- (.5,1); } \ = \ \tikzdiag{ \draw (0,0) ..controls (0,.25) and (1,.25) .. (1,.5) node[tikzdot, near start]{} ..controls (1,.75) and (0,.75) .. (0,1) ; \draw (1,0) ..controls (1,.25) and (0,.25) .. (0,.5) node[tikzdot, pos=1]{} ..controls (0,.75) and (1,.75) .. (1,1) ; } \ - \ \tikzdiag{ \draw (0,0) ..controls (0,.25) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.75) .. (0,1) ; \draw (1,0) ..controls (1,.25) and (0,.25) .. (0,.5) node[tikzdot, pos=1]{} ..controls (0,.75) and (1,.75) .. (1,1) node[tikzdot, near end]{} ; } \end{equation} which follows easily from \cref{eq:nhR2andR3} and \cref{eq:nhdotslide}. We also observe \begin{equation}\label{eq:dottednailslide} \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.3) .. (0,-.1) .. controls (1,.5) .. (1,1) -- (1,1.5) node[tikzdot,midway]{}; \draw[stdhl] (1,-.5) node[below]{\small $1$} -- (1,0) .. controls (1,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) -- (.5,1.5); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5) node[pos=.2,nail]{}; } \ \overset{\eqref{eq:redR2}}{=} \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.3) .. (0,-.1) .. controls (.75,.25) .. (.75,.5) .. controls (.75,.75) and (.25,.75) .. (.25,1) .. controls (.25,1.25) and (1,1.25) .. (1,1.5); \draw[stdhl] (1,-.5) node[below]{\small $1$} -- (1,0) .. controls (1,.25) .. (0,.5) .. 
controls (.5,.75) .. (.5,1) -- (.5,1.5); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5) node[pos=.2,nail]{}; } \ \overset{\eqref{eq:nailslidedcross}}{=} \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,0) and (.75,0) .. (.75,.5) .. controls (.75,.75) .. (0,1) .. controls (1,1.25) .. (1,1.5); \draw[stdhl] (1,-.5) node[below]{\small $1$} -- (1,0) .. controls (1,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) -- (.5,1.5); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5) node[pos=.75,nail]{}; } \end{equation} Then, we compute \begin{align*} \tikzdiagh{0}{ \draw (0,-1) .. controls (0,-.875) .. (-.5,-.75) .. controls (.75,-.25) .. (.75,0) -- (.75,1); \draw (.25,-1) .. controls (.25,-.625) .. (-.5,-.375) .. controls (.5,-.25) .. (.5,0) -- (.5,1); % \draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,-.5) .. (-.5,0) .. controls (0,.5) .. (0,1); \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.125,nail]{} node[pos=. 3125,nail]{}; } \ \overset{\eqref{eq:nhdoublecrossingid}}{=} \ \tikzdiagh{0}{ \draw (0,-1) .. controls (0,-.875) .. (-.5,-.75) .. controls (.75,-.25) .. (.75,0) .. controls (.75,.25) and (.5,.25) .. (.5,.5) node[tikzdot, pos=1]{} .. controls (.5,.75) and (.75,.75) .. (.75,1); \draw (.25,-1) .. controls (.25,-.625) .. (-.5,-.375) .. controls (.5,-.25) .. (.5,0) .. controls (.5,.25) and (.75,.25) .. (.75,.5) node[tikzdot, near start]{} .. controls (.75,.75) and (.5,.75) .. (.5,1); % \draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,-.5) .. (-.5,0) .. controls (0,.5) .. (0,1); \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.125,nail]{} node[pos=. 3125,nail]{}; } \ - \ \tikzdiagh{0}{ \draw (0,-1) .. controls (0,-.875) .. (-.5,-.75) .. 
controls (.75,-.25) .. (.75,0) .. controls (.75,.25) and (.5,.25) .. (.5,.5) node[tikzdot, pos=1]{} .. controls (.5,.75) and (.75,.75) .. (.75,1) node[tikzdot, near end]{} ; \draw (.25,-1) .. controls (.25,-.625) .. (-.5,-.375) .. controls (.5,-.25) .. (.5,0) .. controls (.5,.25) and (.75,.25) .. (.75,.5) .. controls (.75,.75) and (.5,.75) .. (.5,1);
%
\draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,-.5) .. (-.5,0) .. controls (0,.5) .. (0,1); \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.125,nail]{} node[pos=.3125,nail]{}; } \end{align*} and \begin{align*} \tikzdiagh[yscale=1.25]{0}{ \draw (0,-1) .. controls (0,-.875) .. (-.5,-.75) .. controls (.75,-.25) .. (.75,0) .. controls (.75,.25) and (.5,.25) .. (.5,.5) node[tikzdot, pos=1]{} .. controls (.5,.75) and (.75,.75) .. (.75,1) node[tikzdot, near end]{} ; \draw (.25,-1) .. controls (.25,-.625) .. (-.5,-.375) .. controls (.5,-.25) .. (.5,0) .. controls (.5,.25) and (.75,.25) .. (.75,.5) .. controls (.75,.75) and (.5,.75) .. (.5,1);
%
\draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,-.5) .. (-.5,0) .. controls (0,.5) .. (0,1); \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.125,nail]{} node[pos=.3125,nail]{}; } \ \overset{(\ref{eq:crossingslidered})}{=} \ \tikzdiagh[yscale=1.25]{0}{ \draw (0,-1) .. controls (0,-.875) .. (-.5,-.75) .. controls (.25,-.5) .. (.25,-.375) .. controls (.25,0) and (.5,0) .. (.5,.5) node[tikzdot, pos=1]{} .. controls (.5,.75) and (.75,.75) .. (.75,1) node[tikzdot, near end]{} ; \draw (.25,-1) .. controls (.25,-.625) .. (-.5,-.375) .. controls (.75,-.25) .. (.75,0) -- (.75,.5) .. controls (.75,.75) and (.5,.75) .. (.5,1);
%
\draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,0) .. (-.5,.25) .. controls (0,.5) ..
(0,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.125,nail]{} node[pos=.3125,nail]{}; } \ \overset{(\ref{eq:relNail})}{=} - \ \tikzdiagh[yscale=1.25]{0}{ \draw (.25,-1) .. controls (.25,-.875) .. (-.5,-.75) .. controls (.75,-.25) .. (.75,.5) .. controls (.75,.75) and (.5,.75) .. (.5,1); \draw (0,-1) -- (0,-.75) .. controls (0,-.625) .. (-.5,-.375) .. controls (.5,0) .. (.5,.5) node[tikzdot, pos=1]{} .. controls (.5,.75) and (.75,.75) .. (.75,1) node[tikzdot, near end]{} ;
%
\draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,0) .. (-.5,.25) .. controls (0,.5) .. (0,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.125,nail]{} node[pos=.3125,nail]{}; } \end{align*} Thus, using \cref{eq:dottednailslide}, we obtain \begin{align*} \tikzdiagh{0}{ \draw (0,-1) .. controls (0,-.875) .. (-.5,-.75) .. controls (.75,-.25) .. (.75,0) -- (.75,1.5); \draw (.25,-1) .. controls (.25,-.625) .. (-.5,-.375) .. controls (.5,-.25) .. (.5,0) -- (.5,1.5);
%
\draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,-.5) .. (-.5,0) .. controls (0,.5) .. (0,1.5); \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.125,nail]{} node[pos=.3125,nail]{} -- (-.5,1.5); } \ = \ \tikzdiagh{0}{ \draw (0,-1) .. controls (0,-.75) .. (-.5,-.5) .. controls (.75,.25) .. (.75,.5) .. controls (.75,.75) and (.5,.75) .. (.5,1) node[tikzdot, pos=1]{} .. controls (.5,1.25) and (.75,1.25) .. (.75,1.5) ; \draw (.25,-1) .. controls (.25,-.5) and (.75,-.5) .. (.75,0) .. controls (.75,.25) .. (-.5,.5) .. controls (.75,.75) .. (.75,1) .. controls (.75,1.25) and (.5,1.25) .. (.5,1.5);
%
\draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,-.5) .. (-.5,0) .. controls (0,.5) ..
(0,1) -- (0,1.5); \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.25,nail]{} node[pos=.75,nail]{} -- (-.5,1.5); } \ + \ \tikzdiagh{0}{ \draw (.25,-1) .. controls (.25,-.75) .. (-.5,-.5) .. controls (.75,.25) .. (.75,.5) .. controls (.75,.75) and (.5,.75) .. (.5,1) -- (.5,1.5); \draw (0,-1) .. controls (0,-.5) and (.75,-.5) .. (.75,0) .. controls (.75,.25) .. (-.5,.5) .. controls (.75,.75) .. (.75,1) -- (.75,1.5) node[tikzdot, midway]{} ;
%
\draw [stdhl] (.75, -1) node[below]{\small $1$} .. controls (.75,-.5) .. (-.5,0) .. controls (0,.5) .. (0,1) -- (0,1.5); \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-1) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.25,nail]{} node[pos=.75,nail]{} -- (-.5,1.5); } \end{align*} Consequently, using \cref{thm:X0basis}, we deduce that $X_k$ is generated as a left $(T_b^{\lambda,r},0)$-module by the elements \begin{align*} \tikzdiagh{0}{ \draw (0,0) .. controls (0,.5) and (.5,.5) .. (.5,1); \node at(.75,.75) {\tiny$\dots$}; \draw (.5,0) .. controls (.5,.5) and (1,.5) .. (1,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.35) -- node {$k$} (.6,-.35);
%
\draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.6,.5) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1);
%
} &\otimes \bar 1_{\ell,\rho}, & \tikzdiagh{0}{
%
\draw (0,-.5) .. controls (0,0) and (.75,0) .. (.75,1); \node at(1,.75) {\tiny$\dots$}; \draw (.5,-.5) .. controls (.5,0) and (1.25,0) .. (1.25,1); \draw (.75,-.5) .. controls (.75,-.25) .. (-.5,0) .. controls (.5,.5) .. (.5,1); \draw (1,-.5) .. controls (1,0) and (1.5,0) .. (1.5,1); \node at(1.75,.75) {\tiny$\dots$}; \draw (1.5,-.5) .. controls (1.5,0) and (2,0) ..
(2,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {$t$} (.6,-.85); % \draw[stdhl] (2,-.5) node[below]{\small $1$} -- (2,0) .. controls (2,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.6,.5) circle (.1cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.33,nail]{}; } &\otimes \bar 1_{\ell,\rho}, \end{align*} for all $0 \leq t \leq k-1$. In particular, $\gamma_k$ is surjective. \end{proof} \begin{lem}\label{prop:X0decomp} Suppose $r=1$ and $\ell = 0$. As a $\mathbb{Z}\times\mathbb{Z}^2$-graded $\Bbbk$-module, $X 1_{k,0}$ admits a decomposition \[ X 1_{k,0} \cong \lambda^{-1} q^{2k}(T_{k}^{\lambda,0}) \oplus \ssbigoplus{0 \leq t < k \\ p \geq 0} \bigl( q^{2p+1-2(k-t)} (X 1_{k-1,0}) \oplus \lambda^2 q^{2p+1-2(k+t)}(X 1_{k-1,0})[1] \bigr). \] \end{lem} \begin{proof} It follows from \cref{thm:X0basis} that we have a decomposition \[ \tikzdiag[xscale=.75]{ \draw [vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1); % \draw (0,0) -- (0,1); \node at(.5,.25) {\tiny $\dots$}; \draw (1,0) -- (1,1); % \draw [stdhl] (1.5,0) node[below]{\small $1$} -- (1.5,1); % \filldraw [fill=white, draw=black] (-.75,.5) rectangle (1.75,1) node[midway] { $X_{k}$}; } \ \cong \ \tikzdiag[xscale=.75]{ \draw (0,-.5) .. controls (0,-.25) and (.5,-.25) .. (.5,0) .. controls (.5,.25) and (0,.25).. (0,.5) -- (0,1); \node at(1,0) {\tiny $\dots$}; \draw (1,-.5) .. controls (1,-.25) and (1.5,-.25) .. (1.5,0) .. controls (1.5,.25) and (1,.25) .. (1,.5) -- (1,1); % \draw [stdhl] (1.5,-.5) node[below]{\small $1$} .. controls (1.5,-.25) .. (-.5,0) .. controls (1.5,.25) .. 
(1.5,.5) -- (1.5,1); \draw[fill=white, color=white] (-.8,0) circle (.2cm); \draw [vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1); % \filldraw [fill=white, draw=black] (-.75,.5) rectangle (1.25,1) node[midway] { $T_{k}$}; } \oplus \ssbigoplus{0 \leq t < k \\ p \geq 0} \left( \tikzdiag[xscale=.75]{ \draw [vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1); % \draw (0,-.5) -- (0,1); \node at(.5,.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {$t$} (1.1,-.85); % \draw (2,-.5) .. controls (2,0) and (1.5,0) .. (1.5,.5) -- (1.5,1); \node at(2,.25) {\tiny $\dots$}; \draw (3,-.5) .. controls (3,0) and (2.5,0) .. (2.5,.5) -- (2.5,1); % \draw (1.5,-.5) .. controls (1.5,0) and (3.5,0) ..(3.5,.5) -- (3.5,1) node[midway,tikzdot]{} node[midway, xshift=1.5ex, yshift=1ex]{\small $p$}; % \draw [stdhl] (3.5,-.5) node[below]{\small $1$} .. controls (3.5,0) and (3,0) .. (3,.5) -- (3,1); % \filldraw [fill=white, draw=black] (-.75,.5) rectangle (3.25,1) node[midway] { $X_{k-1}$}; } \oplus \tikzdiag[xscale=.75]{ % \draw (0,-.5) -- (0,1); \node at(.5,.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {$t$} (1.1,-.85); % \draw (2,-.5) .. controls (2,0) and (1.5,0) .. (1.5,.5) -- (1.5,1); \node at(2,.25) {\tiny $\dots$}; \draw (3,-.5) .. controls (3,0) and (2.5,0) .. (2.5,.5) -- (2.5,1); % \draw (1.5,-.5) .. controls (1.5,-.25) ..(-.5,0) .. controls (3.5,.25) .. (3.5,.5) -- (3.5,1) node[midway,tikzdot]{} node[midway, xshift=1.5ex, yshift=1ex]{\small $p$}; % \draw [stdhl] (3.5,-.5) node[below]{\small $1$} .. controls (3.5,0) and (3,0) .. (3,.5) -- (3,1); \draw[fill=white, color=white] (-.8,0) circle (.2cm); \draw [vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.33,nail]{}; % \filldraw [fill=white, draw=black] (-.75,.5) rectangle (3.25,1) node[midway] { $X_{k-1}$}; } \right) \] concluding the proof. 
\end{proof} \begin{citelem}{lem:sesX0} The sequence \[ 0 \rightarrow Y^1_k \xrightarrow{\imath_k} Y^0_k \xrightarrow{\gamma_k} X_k \rightarrow 0 \] is a short exact sequence of left $(T^{\lambda,r}, 0)$-modules. \end{citelem} \begin{proof} Since we already have a complex with an injection and a surjection, it is enough to show that \[ \gdim X_k = \gdim Y^0_k - \gdim Y^1_k, \] where $\gdim$ is the graded dimension in the form of a Laurent series in $\mathbb{N}\llbracket h^{\pm 1}, \lambda^{\pm 1}, q^{\pm 1} \rrbracket$. We will show this by induction on $k$. When $k=0$, this is immediate. Suppose the statement holds for $k$; we show it for $k+1$. Let \[ [\beta+t]_q^h := \frac{\lambda^{-1}q^{-t} + h \lambda q^t }{q^{-1}-q} =q \frac{\lambda^{-1}q^{-t} + h \lambda q^t }{1-q^2}. \] Note that \begin{align} [k+1]_q [\beta+t-k]_q^h &= \sum_{r=0}^{k} [\beta+t-2r]_q^h, \label{eq:defqnbrprod} \\ [k+1]_q &= q [k]_q + q^{-k}, \label{eq:qnbrplus} \\ [\beta-k+1]_q^h &= q^{-1} [\beta-k]_q^h - h \lambda q^{-k}, \label{eq:defqnbrplus} \end{align} and \begin{align} [k+1]_q [\beta-k+1]_q^h &= [k]_q [\beta-k]_q^h + q^{-1-k}[\beta-k]_q^h - h \lambda q^{-k} [k+1]_q. \label{eq:qnbrtimesdefqnbr} \end{align} We first restrict to the case $\ell = 0$ and $r=1$. By \cref{prop:X0decomp}, using \eqref{eq:defqnbrprod} followed by the induction hypothesis, we have \begin{align*} \gdim X 1_{k+1,0} =& \lambda^{-1} q^{2(k+1)} \gdim T_{k+1}^{\lambda,0} + \lambda q^{-2k} [k+1]_q[\beta-k]^h_q \gdim X 1_{k,0} \\ =& \lambda^{-1} q^{2(k+1)} \gdim T_{k+1}^{\lambda,0} + \lambda q^{-2k} [k+1]_q[\beta-k]^h_q \\ &\times \bigl( (\lambda^{-1}q^k + h \lambda q [k]_q ) \gdim T^{\lambda,1}_{k} 1_{0,k} - h \lambda q^2 [k]_q \gdim T_k^{\lambda,1} 1_{1,k-1} \bigr). \end{align*} By definition, we have \begin{align*} \gdim Y_{k+1,0}^0 &= \bigl( \lambda^{-1} q^{k+1} + h \lambda q [k+1]_q \bigr) \gdim T^{\lambda,1}_{k+1} 1_{0,k+1}, \\ \gdim Y_{k+1,0}^1 &= h \lambda q^2 [k+1]_q \gdim T^{\lambda,1}_{k+1} 1_{1,k}.
\end{align*} By \cref{prop:Tdecomp}, we have \begin{align*} \gdim T^{\lambda,1}_{k+1} 1_{0,k+1} = & q^{k+1} \gdim T^{\lambda,0}_{k+1} + \lambda q^{-2k} [k+1]_q[\beta+1-k]_q^h \gdim T^{\lambda,1}_k 1_{0,k}, \\ \gdim T^{\lambda,1}_{k+1} 1_{1,k} = & q^{k} \gdim T^{\lambda,0}_{k+1} + \lambda q^{-2k} [\beta]_q^h \gdim T^{\lambda,1}_{k} 1_{0,k} \\ &+ \lambda q^{-2k} [k]_q [\beta-k]_q^h \gdim T^{\lambda,1}_k 1_{1,k-1}. \end{align*} We now gather by $ \gdim T_{k+1}^{\lambda,0}$, $ \gdim T^{\lambda,1}_{k} 1_{0,k}$ and $\gdim T^{\lambda,1}_k 1_{1,k-1}$. For $\gdim T_{k+1}^{\lambda,0}$, we verify that \[ \lambda^{-1} q^{2(k+1)} = \bigl( \lambda^{-1} q^{k+1} + h \lambda q [k+1]_q \bigr) q^{k+1} - h \lambda q^2 [k+1]_q q^{k}. \] Gathering by $ \gdim T^{\lambda,1}_{k} 1_{0,k}$, we obtain on one hand \begin{align} \begin{split} &\lambda q^{-2k} [k+1]_q[\beta-k]^h_q (\lambda^{-1}q^k + h \lambda q [k]_q ) \\ &= q^{-k}[k+1]_q [\beta-k]_q^h + h \lambda^2 q^{1-2k}[k]_q[k+1]_q[\beta-k]_q^h, \end{split} \label{eq:dim0kLHS} \end{align} and on the other hand \begin{align*} &\bigl( \lambda^{-1} q^{k+1} + h\lambda q [k+1]_q \bigr) \lambda q^{-2k} [k+1]_q[\beta+1-k]_q^h - h\lambda q^2 [k+1]_q \lambda q^{-2k} [\beta]_q^h \\ &= q^{1-k}[k+1]_q [\beta-k+1]_q^h + h \lambda^2 q^{1-2k} [k+1]_q [k+1]_q [\beta-k+1]_q^h \\ &\quad - h \lambda^2 q^{2-2k}[k+1]_q [\beta]_q^h \\ &= q^{-k} [\beta-k]_q^h [k+1]_q - h \lambda q^{1-2k} [k+1]_q \\ &\quad + h \lambda^2 q^{1-2k} [k+1]_q \bigl( [k]_q[\beta-k]_q^h + q^{-1-k}[\beta-k]_q^h - h\lambda q^{-k}[k+1]_q \bigr) \\ &\quad - h \lambda^2 q^{2-2k}[k+1]_q [\beta]_q^h, \end{align*} using \eqref{eq:defqnbrplus} and \eqref{eq:qnbrtimesdefqnbr}. We remark that the first and third terms coincide with \eqref{eq:dim0kLHS}.
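For the reader's convenience, we spell out why \eqref{eq:qnbrtimesdefqnbr} holds: expanding its left-hand side with \eqref{eq:qnbrplus} and \eqref{eq:defqnbrplus} gives
\begin{align*}
[k+1]_q [\beta-k+1]_q^h &= \bigl( q [k]_q + q^{-k} \bigr) \bigl( q^{-1} [\beta-k]_q^h - h \lambda q^{-k} \bigr) \\
&= [k]_q [\beta-k]_q^h + q^{-1-k} [\beta-k]_q^h - h \lambda q^{1-k} [k]_q - h \lambda q^{-2k} \\
&= [k]_q [\beta-k]_q^h + q^{-1-k} [\beta-k]_q^h - h \lambda q^{-k} [k+1]_q,
\end{align*}
where the last step uses \eqref{eq:qnbrplus} once more.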
We gather the remaining terms, factoring out $h\lambda q^{-2k}[k+1]_q$, and obtain \begin{align*} & - q + \lambda q^{-k}[\beta-k]_q^h - h\lambda^2 q^{1-k}[k+1]_q - \lambda q^{2} [\beta]_q^h \\ &= \frac{1}{q^{-1}-q}\bigl( -q(q^{-1}-q) + \lambda q^{-k}(\lambda^{-1}q^k + h \lambda q^{-k}) - h \lambda^2q^{1-k}(q^{-k-1}-q^{k+1}) - \lambda q^2 (\lambda^{-1}+h\lambda) \bigr) \\&= 0. \end{align*} Finally, for $\gdim T^{\lambda,1}_k 1_{1,k-1}$, we verify that \begin{align*} \lambda q^{-2k} [k+1]_q[\beta-k]^h_q (- h \lambda q^2 [k]_q ) &= - h \lambda q^2 [k+1]_q \lambda q^{-2k} [k]_q [\beta-k]_q^h, \end{align*} concluding the proof in the case $\ell = 0$ and $r=1$. The case $\ell > 0$ follows by induction on $\ell$, with the case $\ell = 0$ as base case. Using a decomposition similar to the one in \cref{prop:X0decomp}, we obtain \begin{align*} \gdim X_{k,\ell} =& \lambda^{-1} q^{2k+\ell} \gdim T_{k+\ell}^{\lambda,0} + \lambda q^{-2\ell-2k-2} [k]_q[\beta-k+1]_q^h \gdim X_{k-1, \ell} \\ &+ \lambda q^{-2k-2\ell} [\ell]_q [\beta-2k-\ell]_q^h \gdim X_{k, \ell-1}, \end{align*} where $X_{k,\ell} := X_k 1_{k,\ell}$. Similarly, one can compute \begin{align*} \gdim T^{\lambda,1}_{k+\ell} 1_{0,k+\ell} =& q^{k+\ell} \gdim T_{k+\ell}^{\lambda,0} + \lambda q^{-2k-2\ell} [k+\ell]_q[\beta-k-\ell]_q^h \gdim T^{\lambda,1}_{k+\ell-1} 1_{0,k+\ell-1}, \\ \gdim T^{\lambda,1}_{k+\ell} 1_{1,k+\ell-1} =& q^{k+\ell-1} \gdim T_{k+\ell}^{\lambda,0} + \lambda q^{2-2(k+\ell)} [\beta]_q^h \gdim T_{k+\ell-1}^{\lambda,1} 1_{0,k+\ell-1} \\ & + \lambda q^{-2k-2\ell} [k+\ell-1]_q [\beta-k-\ell-1]_q^h \gdim T_{k+\ell-1}^{\lambda,1} 1_{1,k+\ell-2}. \end{align*} For the same reasons as above, the parts in $\gdim T_{k+\ell}^{\lambda,0}$ cancel each other. By the induction hypothesis we know that \[ \gdim X_{k, \ell-1} = \bigl( \lambda^{-1} q^{k} + h \lambda q [k]_q \bigr) \gdim T^{\lambda,1}_{k+\ell-1} 1_{0,k+\ell-1} - h \lambda q^2 [k]_q \gdim T^{\lambda,1}_{k+\ell-1} 1_{1,k+\ell-2}.
\] Using the fact that \begin{align*} [k+\ell]_q[\beta-k-\ell]_q^h &= [k]_q[\beta-k]_q^h + [\ell]_q[\beta-2k-\ell]_q^h, \\ [k+\ell-1]_q[\beta-k-\ell-1]_q^h &= [k-1]_q[\beta-k+1]_q^h + [\ell]_q[\beta-2k-\ell]_q^h, \end{align*} together with the induction hypothesis, we cancel the part given by $\gdim (X_{k, \ell-1})$ in $X_{k,\ell}$ with the part given by \[ \lambda q^{-2k-2\ell} [\ell]_q[\beta-2k-\ell]_q^h \gdim T^{\lambda,1}_{k+\ell-1} 1_{0,k+\ell-1} \] in $Y_{k, \ell}^0$ minus the part given by \[ \lambda q^{-2k-2\ell} [\ell]_q[\beta-2k-\ell]_q^h \gdim T_{k+\ell-1}^{\lambda,1} 1_{1,k+\ell-2} \] in $Y_{k,\ell}^1$. The remaining terms yield the same computation as in the case $\ell = 0$ (replacing $k+1$ by $k$), but shifted by $q^{-2\ell}$. This concludes the case $\ell > 0$. The general case follows from a similar argument, using the fact that $X$ decomposes similarly to $T^{\lambda,r}_b$ whenever $r > 1$, that is as in \cref{prop:Tdecomp}, replacing all $T$ by $X$. We leave the details to the reader. \end{proof} \subsection{Proofs of \cref{sec:catTLB}}\label{sec:proofsofcatTLB} \begin{citelem}{lem:Xklgenerated} As a right $(T^{\lambda,r},0)$-module, $1_{1,k+\ell-1,\rho}X$ is generated by the elements \begin{align}\label{eq:Xklgenerator} \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-1$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho}, &&\text{and} && \tikzdiagh{0}{ \draw (.5,0) .. controls (.5,.5) and (0,.5) ..
(0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); } \otimes \bar 1_{\ell+k-1,\rho}. \end{align} \end{citelem} \begin{proof} We prove this claim using an induction on $k$. The case $k=1$ is obvious. We suppose it is true for $k-1$, and thus it is enough to show that we can generate the element: \[ \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); % \draw (1.5,0) -- (1.5,1); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-2$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho}. \] Using \cref{eq:nhdotslide}, we have \[ \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-2$} (1.1,-.35); } = \tikzdiagh{0}{ \draw (1.6,0) .. controls (1.6,.5) .. (-.5,.75) node[pos=.52,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. 
controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw (1.35,0) .. controls (1.35,.5) and (1.6,.5).. (1.6,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-2$} (1.1,-.35); } - \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) node[pos=.2,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-1$} (1.1,-.35); } \] The second term on the right-hand side is generated by the second element in \cref{eq:Xklgenerator}. For the first term of the right-hand side, we slide the dot to the left by repeatedly using \cref{eq:nhdotslide}: \[ \tikzdiagh{0}{ \draw (1.6,0) .. controls (1.6,.5) .. (-.5,.75) node[pos=.52,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw (1.35,0) .. controls (1.35,.5) and (1.6,.5)..
(1.6,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-2$} (1.1,-.35); } = \tikzdiagh{0}{ \draw (1.3,0) .. controls (1.3,.5) .. (-.5,.75) node[pos=.75,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.55,0) .. controls (.55,.5) and (.8,.5) .. (.8,1); \node at (.8,.15){\tiny $\dots$}; \draw (1.05,0) .. controls (1.05,.5) and (1.3,.5).. (1.3,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-1$} (1.1,-.35); } - \sum_{j=1}^{k-2} \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw (2.25,0) .. controls (2.25,.5) and (1.5,.5) .. (1.5,1); \draw (1.5,0) .. controls (1.5,.5) and (1.75,.5) .. (1.75,1); \node at (1.75,.15){\tiny $\dots$}; \draw (2,0) .. controls (2,.5) and (2.25,.5) .. (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.4,-.35) -- node { \small $j$} (2.1,-.35); } \] Because of the symmetry of \cref{eq:dottednailslide}, the first term on the right-hand side is generated by the second element in \cref{eq:Xklgenerator}. We now prove that every element of the sum on the right-hand side is generated by elements in \cref{eq:Xklgenerator}. By applying the induction hypothesis, it suffices to show that for every $1 \leq j \leq k-2$, the elements \begin{align*} \tikzdiagh{0}{ \draw (.5,0) ..
controls (.5,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); \draw (.75,0) -- (.75,1); \draw (1.25,0) -- (1.25,1); \node at (1,.5){\tiny $\dots$}; \draw (2.25,0) .. controls (2.25,.5) and (1.5,.5) .. (1.5,1); \draw (1.5,0) .. controls (1.5,.5) and (1.75,.5) .. (1.75,1); \node at (1.75,.15){\tiny $\dots$}; \draw (2,0) .. controls (2,.5) and (2.25,.5) .. (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.4,-.35) -- node { \small $j$} (2.1,-.35); } && \text{and} && \tikzdiagh{0}{ \draw (2,0) .. controls (2,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.25,0) .. controls (1.25,.5) and (1.75,.5) .. (1.75,1); \node at (1.5,.15){\tiny $\dots$}; \draw (1.75,0) .. controls (1.75,.5) and (2.25,.5).. (2.25,1); \draw (2.25,0) -- (2.25,.4); \draw (2.25,.4) .. controls (2.25,.75) and (1.5,.75) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.15,-.35) -- node { \small $j$} (1.85,-.35); } \end{align*} are in the right module generated by the elements in \cref{eq:Xklgenerator}, which is clear for the first diagram. Concerning the second one, we have by \cref{eq:nhdotslide} \[ \tikzdiagh{0}{ \draw (2,0) .. controls (2,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) ..
(.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.25,0) .. controls (1.25,.5) and (1.75,.5) .. (1.75,1); \node at (1.5,.15){\tiny $\dots$}; \draw (1.75,0) .. controls (1.75,.5) and (2.25,.5).. (2.25,1); \draw (2.25,0) -- (2.25,.4); \draw (2.25,.4) .. controls (2.25,.75) and (1.5,.75) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.15,-.35) -- node { \small $j$} (1.85,-.35); } = \tikzdiagh{0}{ \draw (2.25,0) .. controls (2.25,.5) .. (-.5,.75) node[pos=.39,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.25,0) .. controls (1.25,.5) and (1.75,.5) .. (1.75,1); \node at (1.5,.15){\tiny $\dots$}; \draw (1.75,0) .. controls (1.75,.5) and (2.25,.5).. (2.25,1); \draw (2,0) .. controls (2,.25) and (2.25,.25) .. (2.25,.5); \draw (2.25,.5) .. controls (2.25,.75) and (1.5,.75) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.15,-.35) -- node { \small $j$} (1.85,-.35); } - \tikzdiagh{0}{ \draw (2.25,0) .. controls (2.25,.5) .. (-.5,.75) node[pos=.1,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. 
(.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.25,0) .. controls (1.25,.5) and (1.75,.5) .. (1.75,1); \node at (1.5,.15){\tiny $\dots$}; \draw (1.75,0) .. controls (1.75,.5) and (2.25,.5).. (2.25,1); \draw (2,0) .. controls (2,.25) and (2.25,.25) .. (2.25,.5); \draw (2.25,.5) .. controls (2.25,.75) and (1.5,.75) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.15,-.35) -- node { \small $j$} (1.85,-.35); } \] For the first term of the right-hand side, we again slide the dot to the left using \cref{eq:nhdotslide} and obtain \begin{align*} \tikzdiagh{0}{ \draw (2.25,0) .. controls (2.25,.5) .. (-.5,.75) node[pos=.39,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.25,0) .. controls (1.25,.5) and (1.75,.5) .. (1.75,1); \node at (1.5,.15){\tiny $\dots$}; \draw (1.75,0) .. controls (1.75,.5) and (2.25,.5).. (2.25,1); \draw (2,0) .. controls (2,.25) and (2.25,.25) .. (2.25,.5); \draw (2.25,.5) .. controls (2.25,.75) and (1.5,.75) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.15,-.35) -- node { \small $j$} (1.85,-.35); } &\overset{\phantom{\eqref{eq:nhR2andR3}}}{=} \tikzdiagh{0}{ \draw (2.35,0) .. controls (2.35,.5) .. (-.5,.75) node[pos=.845,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.6,0) .. controls (.6,.5) and (.85,.5) .. 
(.85,1); \node at (.85,.15){\tiny $\dots$}; \draw (1.1,0) .. controls (1.1,.5) and (1.35,.5) .. (1.35,1); \draw (1.35,0) .. controls (1.35,.5) and (1.85,.5) .. (1.85,1); \node at (1.6,.15){\tiny $\dots$}; \draw (1.85,0) .. controls (1.85,.5) and (2.35,.5).. (2.35,1); \draw (2.1,0) .. controls (2.1,.25) and (2.35,.25) .. (2.35,.5); \draw (2.35,.5) .. controls (2.35,.75) and (1.6,.75) .. (1.6,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.25,-.35) -- node { \small $j$} (1.95,-.35); } - \sum_{l=0}^{k-j-3} \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.5,0) .. controls (1.5,.5) and (1.75,.5) .. (1.75,1); \node at (1.75,.15){\tiny $\dots$}; \draw (2,0) .. controls (2,.5) and (2.25,.5) .. (2.25,1); \draw (2.25,0) .. controls (2.25,.5) and (2.75,.5) .. (2.75,1); \node at (2.5,.15){\tiny $\dots$}; \draw (2.75,0) .. controls (2.75,.5) and (3.25,.5) .. (3.25,1); \draw (3,0) .. controls (3,.25) and (3.25,.25) .. (3.25,.5); \draw (3.25,.5) .. controls (3.25,.75) and (2.5,.75) .. (2.5,1); \draw (3.25,0) .. controls (3.25,.5) and (1.5,.5) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.4,-.35) -- node { \small $l$} (2.1,-.35); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.15,-.35) -- node { \small $j$} (2.85,-.35); }\\ &\overset{\eqref{eq:nhR2andR3}}{=} \tikzdiagh{0}{ \draw (2.35,0) .. controls (2.35,.5) .. (-.5,.75) node[pos=.845,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. 
(.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.6,0) .. controls (.6,.5) and (.85,.5) .. (.85,1); \node at (.85,.15){\tiny $\dots$}; \draw (1.1,0) .. controls (1.1,.5) and (1.35,.5) .. (1.35,1); \draw (1.35,0) .. controls (1.35,.5) and (1.85,.5) .. (1.85,1); \node at (1.6,.15){\tiny $\dots$}; \draw (1.85,0) .. controls (1.85,.5) and (2.35,.5).. (2.35,1); \draw (2.1,0) .. controls (2.1,.25) and (2.35,.25) .. (2.35,.5); \draw (2.35,.5) .. controls (2.35,.75) and (1.6,.75) .. (1.6,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.25,-.35) -- node { \small $j$} (1.95,-.35); } - \sum_{l=0}^{k-j-3} \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.5,0) .. controls (1.5,.5) and (1.75,.5) .. (1.75,1); \node at (1.75,.15){\tiny $\dots$}; \draw (2,0) .. controls (2,.5) and (2.25,.5) .. (2.25,1); \draw (2.25,0) .. controls (2.25,.5) and (2.75,.5) .. (2.75,1); \node at (2.5,.15){\tiny $\dots$}; \draw (2.75,0) .. controls (2.75,.5) and (3.25,.5) .. (3.25,1); \draw (3,0) .. controls (3,.3) and (2.3,.25) .. (2.3,.5); \draw (2.3,.5) .. controls (2.3,.75) and (2.5,.75) .. (2.5,1); \draw (3.25,0) .. controls (3.25,.6) and (1.5,.5) .. 
(1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.4,-.35) -- node { \small $l$} (2.1,-.35); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.15,-.35) -- node { \small $j$} (2.85,-.35); } \end{align*} The first term is dealt with by another application of the symmetry of \cref{eq:dottednailslide}, and every term of the sum is handled through a descending induction on $j$, noting that the sum is zero if $j=k-2$. For the second term, we apply once again \cref{eq:nhR2andR3} and obtain \[ \tikzdiagh{0}{ \draw (2.25,0) .. controls (2.25,.5) .. (-.5,.75) node[pos=.1,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.25,0) .. controls (1.25,.5) and (1.75,.5) .. (1.75,1); \node at (1.5,.15){\tiny $\dots$}; \draw (1.75,0) .. controls (1.75,.5) and (2.25,.5).. (2.25,1); \draw (2,0) .. controls (2,.25) and (2.25,.25) .. (2.25,.5); \draw (2.25,.5) .. controls (2.25,.75) and (1.5,.75) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.15,-.35) -- node { \small $j$} (1.85,-.35); } = \tikzdiagh{0}{ \draw (2.25,0) .. controls (2.25,.5) .. (-.5,.75) node[pos=.1,tikzdot]{} .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5) .. (1.25,1); \draw (1.25,0) .. controls (1.25,.5) and (1.75,.5) ..
(1.75,1); \node at (1.5,.15){\tiny $\dots$}; \draw (1.75,0) .. controls (1.75,.5) and (2.25,.5).. (2.25,1); \draw (2,0) .. controls (2,.5) and (1.35,.25) .. (1.35,.5); \draw (1.35,.5) .. controls (1.35,.75) and (1.5,.75) .. (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.15,-.35) -- node { \small $j$} (1.85,-.35); } \] which has the desired form. \end{proof} \subsubsection{Acyclicity of $\cone(\varphi)$}\label{sec:proofofacyclicity} \begin{citethm}{thm:catdoublebraidphiiso} The map \[ \varphi := \sum_{k = 0}^{m} (-1)^k \varphi_k : \cone(\lambda q^2 X [1] \xrightarrow{u} q^2 T_b^{\lambda,r} [1] )[1] \rightarrow \cone(X \otimes^{\Lderiv}_T X \xrightarrow{1\otimes u} \lambda^{-1} X), \] is a quasi-isomorphism. \end{citethm} The goal of this section is to prove \cref{thm:catdoublebraidphiiso}, which we will achieve by showing that $\cone(\varphi_k)$ is acyclic. We have that $\cone(\varphi_k)$ is given by the complex \[ \begin{tikzcd}[row sep = 1ex] & X\otimes_T Y^1_k \ar[hookrightarrow]{dr}{1 \otimes \imath_k} && \\ \lambda q^2 (X_k)[1] \ar{ur}{\varphi_k^1} \ar[hookrightarrow,swap]{dr}{-u}& & X\otimes_T Y^0_k \ar[twoheadrightarrow]{r}{u \otimes \gamma_k} & \lambda^{-1} X_k. \\ & \bigoplus_{\ell,\rho} q^2 (T_b^{\lambda,r} 1_{k,\ell,\rho})[1] \ar[swap]{ur}{\varphi_k^0} && \end{tikzcd} \] The map $\varphi_k^1 - u$ is injective since $u$ is injective by \cref{cor:uinj}, and the map $u \otimes \gamma_k$ is surjective. We want to first show that $\varphi_k^0 + 1\otimes \imath_k$ is surjective on the kernel of $u \otimes \gamma_k$. This requires some preparation. \begin{lem}\label{lem:dotanddoubledots} For $k \geq 2$, the local relation \begin{equation}\label{eq:dotanddoubledots} \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.5,.5) .. (1.5,2); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.25,.5) .. 
(.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) .. controls (1.5,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.5,.25) .. (1.5,1) .. controls (1.5,1.75) and (.25,1.75) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \ - \ \sum_{s=0}^{k-2} (-1)^{s}\ \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.25,.5) .. (1.25,2); \draw (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[near start, tikzdot]{} .. controls (.25,1.5) and (.5,1.5) .. (.5,2) node[near end, tikzdot]{}; \node at (.5,1) {\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (.75,.5) .. (.75,1)node[near start, tikzdot]{} .. controls (.75,1.5) and (1,1.5) .. (1,2) node[near end, tikzdot]{}; \draw (1.25,0) .. controls (1.25,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (0,.25) .. (0,1) .. controls (0,1.75) and (.25,1.75) .. (.25,2); % \draw (1.5,0) -- (1.5,2); \node at (1.75,1) {\tiny $\dots$}; \draw (2,0) -- (2,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $s$} (1.1,-.35); } \ = (-1)^{k-1} \ \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) -- (0,2); \draw[stdhl] (.25,0) node[below]{\small $1$} -- (.25,2); \draw (.5,0) -- (.5,2); \draw (.75,0) -- (.75,2); \node at (1,1) {\tiny $\dots$}; \draw (1.25,0) -- (1.25,2); \draw (1.5,0) -- (1.5,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \end{equation} holds in $T^{\lambda,r}$. \end{lem} \begin{proof} We prove the statement by induction on $k-2$. If $k-2 = 0$, then the claim follows from \cref{eq:redR3}. Suppose by induction that \eqref{eq:dotanddoubledots} holds for $k-3$. 
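The computation below repeatedly invokes \cref{eq:nhdotslide} and \cref{eq:nhR2andR3}. As a reminder of what these diagrammatic moves amount to (stated here in the standard algebraic notation for nilHecke-type relations, with $x_i$ denoting a dot on the $i$th strand and $\partial_i$ the crossing of the strands $i$ and $i+1$; signs are subject to the conventions fixed with those relations), they read
\[
\partial_i^2 = 0, \qquad \partial_i \partial_{i+1} \partial_i = \partial_{i+1} \partial_i \partial_{i+1}, \qquad x_i \partial_i - \partial_i x_{i+1} = 1 = \partial_i x_i - x_{i+1} \partial_i.
\]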
We compute \begin{equation}\label{eq:dotanddoubledots1} \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.5,.5) .. (1.5,2); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) .. controls (1.5,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.5,.25) .. (1.5,1) .. controls (1.5,1.75) and (.25,1.75) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \ \overset{\eqref{eq:redR3}}{=} \ \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.5,.25) .. (1.5,2); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) .. controls (1.5,1.75) and (0,1.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.25,.25) .. (1.25,.6) .. controls (1.25,.8) and (1,.8) .. (1,1) .. controls (1,1.2) and (1.25,1.2) .. (1.25,1.4) .. controls (1.25,1.75) and (.25,1.75) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \ - \ \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1,.25) .. (1,1) .. controls (1,1.75) and (0,1.5) .. (0,2); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. 
controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) -- (1.5,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.25,.25) .. (1.25,.6) -- (1.25,1.4) .. controls (1.25,1.75) and (.25,1.75) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \end{equation} and \begin{align} \label{eq:dotanddoubledots2} \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1,.25) .. (1,1) .. controls (1,1.75) and (0,1.5) .. (0,2); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) -- (1.5,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.25,.25) .. (1.25,.6) -- (1.25,1.4) .. controls (1.25,1.75) and (.25,1.75) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \ &\overset{(\ref{eq:nhdotslide}, \ref{eq:nhR2andR3})}{=} \ \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.25,.5) .. (1.25,2); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \node at (.25,1) {\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (.5,.5) .. (.5,1) node[tikzdot,pos=1]{} .. controls (.5,1.5) and (1,1.5) .. (1,2); \draw (1.25,0) .. controls (1.25,1.5) and (0,1.5) .. 
(0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.25,.25) .. (1.25,1) .. controls (1.25,1.75) and (.25,1.75) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-3$} (1.1,-.35); % \draw (1.5,0) -- (1.5,2); } \\ \label{eq:dotanddoubledots3} \tikzdiagh[xscale=1.25,yscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.5,.25) .. (1.5,2); \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) .. controls (1.5,1.75) and (0,1.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.25,.25) .. (1.25,.6) .. controls (1.25,.8) and (1,.8) .. (1,1) .. controls (1,1.2) and (1.25,1.2) .. (1.25,1.4) .. controls (1.25,1.75) and (.25,1.75) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \ &\overset{\eqref{eq:redR2}}{=} \ \tikzdiagh[xscale=1.25,yscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.5,0) .. (1.5,2); \draw (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) .. controls (.25,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.5,.5) .. (.5,1) .. controls (.5,1.5) and (.75,1.5) .. (.75,2); \node at (.75,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (1,.5) .. (1,1) .. controls (1,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) .. controls (1.5,2) and (0,1.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.25) and (1.25,.25) .. (1.25,.6) .. controls (1.25,1) and (0,.8) .. (0,1) .. controls (0,1.2) and (1.25,1) .. (1.25,1.4) .. controls (1.25,1.75) and (.25,1.75) .. 
(.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \ \overset{\eqref{eq:redR3}}{=} \ \tikzdiagh[xscale=1.25,yscale=1.25]{0}{ \draw (0,0) .. controls (0,1.5) and (1.5,0) .. (1.5,2); \draw (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) .. controls (.25,1.5) and (.5,1.5) .. (.5,2); \draw (.75,0) .. controls (.75,.5) and (.5,.5) .. (.5,1) .. controls (.5,1.5) and (.75,1.5) .. (.75,2); \node at (.75,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (1,.5) .. (1,1) .. controls (1,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,0) .. controls (1.5,2) and (0,.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.1) and (1.375,.1) .. (1.375,.2) .. controls (1.375,.5) and (0,0) .. (0,1) .. controls (0,2) and (1.375,1.5) .. (1.375,1.8) .. controls (1.375,1.9) and (.25,1.9) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \ \overset{\eqref{eq:redR2}}{=} \ \tikzdiagh[xscale=1.25,yscale=1.25]{0}{ \draw (0,0) .. controls (0,1.5) and (1.5,0) .. (1.5,2); \draw (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[near start, tikzdot]{} .. controls (.25,1.5) and (.5,1.5) .. (.5,2) node[near end, tikzdot]{}; \draw (.75,0) .. controls (.75,.5) and (.5,.5) .. (.5,1) node[near start, tikzdot]{} .. controls (.5,1.5) and (.75,1.5) .. (.75,2)node[near end, tikzdot]{}; \node at (.75,1) {\tiny $\dots$}; \draw (1.25,0) .. controls (1.25,.5) and (1,.5) .. (1,1)node[near start, tikzdot]{} .. controls (1,1.5) and (1.25,1.5) .. (1.25,2)node[near end, tikzdot]{}; \draw (1.5,0) .. controls (1.5,2) and (0,.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.5) and (0,.5) .. (0,1) .. controls (0,1.5) and (.25,1.5) .. 
(.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $k-2$} (1.35,-.35); } \end{align} Applying the induction hypothesis on \cref{eq:dotanddoubledots2}, and inserting the result together with \cref{eq:dotanddoubledots3} in \cref{eq:dotanddoubledots1} gives \cref{eq:dotanddoubledots}. \end{proof} \begin{lem}\label{lem:computezn} We have \begin{equation}\label{eq:computezn} \tikzdiag{ \draw (0,0) -- (0,1); \node at (.5,.1) {\tiny $\dots$}; \node at (.5,.9) {\tiny $\dots$}; \draw (1,0) -- (1,1); \draw (1.5,0) -- (1.5,1); \draw (2,0) -- (2,1); \filldraw [fill=white, draw=black,rounded corners] (-.25,.25) rectangle (2.25,.75) node[midway] { $z_{t+2}$}; } \ = \ \tikzdiag{ \draw (0,0) .. controls (0,.25) and (2,.25) .. (2,1); \draw (2,0) .. controls (2,.75) and (0,.75) .. (0,1); % \draw (.5,0) .. controls (.5,.25) and (0,.25) .. (0,.5) .. controls (0,.75) and (.5,.75) .. (.5,1) node[pos=0,tikzdot]{}; \draw (1.5,0) .. controls (1.5,.25) and (1,.25) .. (1,.5) .. controls (1,.75) and (1.5,.75) .. (1.5,1) node[pos=0,tikzdot]{}; \node at(.5,.5){\tiny $\dots$}; } \end{equation} for all $t \geq 0$. \end{lem} \begin{proof} We prove the statement by induction on $t$. The claim is clearly true for $t=0$. Suppose it is true for $t$. We compute \begin{align*} \tikzdiag{ \draw (0,-.5) -- (0,1.5); \node at (.5,-.25) {\tiny $\dots$}; \node at (.5,1.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1.5); \draw (1.5,-.5) -- (1.5,1.5); \draw (2,-.5) -- (2,1.5); \draw (2.5,-.5) -- (2.5,1.5); \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (2.75,.8) node[midway] { $z_{t+3}$}; } \ &\overset{\eqref{eq:defzn}}{=} \ \tikzdiag{ \draw (0,-.5) -- (0,1.5); \node at (.5,-.25) {\tiny $\dots$}; \node at (.5,1.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1.5); \draw (1.5,-.5) -- (1.5,1.5); \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. 
(2,1.5); \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (2.25,.8) node[midway] { $z_{t+2}$}; } \ + \ \tikzdiag{ \draw (0,-.5) -- (0,1.5); \node at (.5,-.25) {\tiny $\dots$}; \node at (.5,1.25) {\tiny $\dots$}; \draw (1,-.5) -- (1,1.5); \draw (1.5,-.5) -- (1.5,1.5); \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5) node [near end, tikzdot]{}; \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (2.25,.8) node[midway] { $z_{t+2}$}; } \\ \ &\overset{\eqref{eq:computezn}}{=} \ \tikzdiag{ \draw (0,-.5) -- (0,0) .. controls (0,.25) and (2,.25) .. (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2,-.5) -- (2,0) .. controls (2,.75) and (0,.75) .. (0,1) -- (0,1.5); % \draw (.5,-.5) -- (.5,0) .. controls (.5,.25) and (0,.25) .. (0,.5) .. controls (0,.75) and (.5,.75) .. (.5,1) node[pos=0,tikzdot]{} -- (.5,1.5); \draw (1.5,-.5) -- (1.5,0) .. controls (1.5,.25) and (1,.25) .. (1,.5) .. controls (1,.75) and (1.5,.75) .. (1.5,1) node[pos=0,tikzdot]{} -- (1.5,1.5); \node at(.5,.5){\tiny $\dots$}; \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); } \ + \ \tikzdiag{ \draw (0,-.5) -- (0,0) .. controls (0,.25) and (2,.25) .. (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) .. controls (2,.75) and (0,.75) .. (0,1) -- (0,1.5); % \draw (.5,-.5) -- (.5,0) .. controls (.5,.25) and (0,.25) .. (0,.5) .. controls (0,.75) and (.5,.75) .. (.5,1) node[pos=0,tikzdot]{} -- (.5,1.5); \draw (1.5,-.5) -- (1.5,0) .. controls (1.5,.25) and (1,.25) .. (1,.5) .. controls (1,.75) and (1.5,.75) .. (1.5,1) node[pos=0,tikzdot]{} -- (1.5,1.5); \node at(.5,.5){\tiny $\dots$}; \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,0) -- (2.5,1) .. 
controls (2.5,1.25) and (2,1.25) .. (2,1.5) node [near end, tikzdot]{}; } \ \overset{(\ref{eq:nhR2andR3},\ref{eq:nhdotslide})}{=} \ \tikzdiag{ \draw (0,-.5) .. controls (0,0) and (2.5,0) .. (2.5,1.5); \draw (2.5,-.5) .. controls (2.5,1) and (0,1) .. (0,1.5); % \draw (.5,-.5) .. controls (.5,0) and (0,0) .. (0,.5) .. controls (0,1) and (.5,1) .. (.5,1.5) node[pos=0,tikzdot]{}; \draw (1.5,-.5) .. controls (1.5,0) and (1,0) .. (1,.5) .. controls (1,1) and (1.5,1) .. (1.5,1.5) node[pos=0,tikzdot]{}; \draw (2,-.5) .. controls (2,0) and (1.5,0) .. (1.5,.5) .. controls (1.5,1) and (2,1) .. (2,1.5) node[pos=0,tikzdot]{}; \node at(.5,.5){\tiny $\dots$}; } \end{align*} concluding the proof. \end{proof} \begin{lem}\label{lem:allcrossingzniszero} We have \[ \tikzdiag{ \draw (0,0) -- (0,1) .. controls (0,1.25) and (.5,1.25) .. (.5,1.5); \node at (.5,.1) {\tiny $\dots$}; \node at (.5,1) {\tiny $\dots$}; \draw (1,0) -- (1,1) .. controls (1,1.25) and (1.5,1.25) .. (1.5,1.5); \draw (1.5,0) -- (1.5,1) .. controls (1.5,1.25) and (2,1.25) .. (2,1.5); \draw (2,0) -- (2,1) .. controls (2,1.25) and (0,1.25) .. (0,1.5); \draw (2.5,0) -- (2.5,1.5); \node at (3,.1) {\tiny $\dots$}; \node at (3,1) {\tiny $\dots$}; \draw (3.5,0) -- (3.5,1.5); \filldraw [fill=white, draw=black,rounded corners] (-.25,.3) rectangle (3.75,.8) node[midway] { $z_{t+2}$}; } \ = \ 0, \] for all $t \geq 0$. \end{lem} \begin{proof} It is a direct consequence of \cref{lem:computezn} together with \cref{eq:nhR2andR3}. \end{proof} \begin{lem}\label{lem:phizeroimathgenker} We have \begin{equation}\label{eq:phizeroimathgenker} \begin{aligned} \varphi_k^0&\left( \tikzdiagh[xscale=1.25]{0}{ \draw[vstdhl] (-.25,.75) node[below]{\small $\lambda$} -- (-.25,2); % \draw (0,.75) .. controls (0,1.25) and (.5,1.25) .. (.5,2); \node at (.25,.85) {\tiny $\dots$}; \draw (.5,.75) .. controls (.5,1.25) and (1,1.25) .. (1,2); \draw (.75,.75) .. controls (.75,1.25) and (0,1.25) .. (0,2); % \draw[stdhl] (1,.75) node[below]{\small $1$} .. 
controls (1,1.5) and (.25,1.5) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,.4) -- node {\small $k-1$} (.6,.4); } \otimes \bar 1_{\ell,\rho} \right) - \sum_{s = 0}^{k-2} (-1)^s (1\otimes\imath_k^{k-1}) \left( \tikzdiagh[xscale=1.25]{0}{ \draw (0,0) .. controls (0,.5) and (1.25,.5) .. (1.25,2); \draw (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[near start, tikzdot]{} .. controls (.25,1.5) and (.5,1.5) .. (.5,2) node[near end, tikzdot]{}; \node at (.5,1) {\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (.75,.5) .. (.75,1)node[near start, tikzdot]{} .. controls (.75,1.5) and (1,1.5) .. (1,2) node[near end, tikzdot]{}; \draw (1.25,0) .. controls (1.25,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (.25,0) node[below]{\small $1$} .. controls (.25,.5).. (-.25,1) .. controls (.25,1.5) .. (.25,2); \draw[fill=white, color=white] (-.35,1) circle (.1cm); \draw[vstdhl] (-.25,0) node[below]{\small $\lambda$} -- (-.25,2); % \draw (1.5,0) -- (1.5,2); \node at (1.75,1) {\tiny $\dots$}; \draw (2,0) -- (2,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $s$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho} \right) \\ &= (-1)^{k-1} \left( -\ \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $k-1$} (1.1,-.35); } \otimes \bar 1_{\ell,\rho} , \tikzdiagh{0}{ \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. 
(.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); } \otimes \bar 1_{\ell+k-1,\rho} \right) \in (X\otimes_T {Y'}^0_k) \oplus (X\otimes_T Y^{0,k-1}_k). \end{aligned} \end{equation} \end{lem} \begin{proof} The case $k=1$ is clear, thus we assume $k > 1$. First, let us write $\star$ and $\star_s$ for the inputs of $\varphi_k^0$ and of $1\otimes \imath_k^{k-1}$ in \cref{eq:phizeroimathgenker}, respectively. Then, on one hand, we note that $\varphi_k^{0,t'}(\star) = 0$ by \cref{lem:allcrossingzniszero} whenever $t' \neq k-1$, because of \eqref{eq:nhR2andR3}. For $t' = k-1$, we obtain \begin{equation} \label{eq:phizeroimathgenker1} \varphi_k^{0,k-1}(\star) = \ \tikzdiagh[xscale=1.25]{0}{ \draw (.25,-.5) .. controls (.25,-.1) and (0,-.1) .. (0,.15) .. controls (0,.5) and (1.5,.5) .. (1.5,2); \draw (.5,-.5) -- (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,-.5) -- (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,-.5) -- (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,-.5) -- (1.5,0) .. controls (1.5,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.25,0) .. controls (1.5,.5) .. (1.5,1) .. controls (1.5,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,0) circle (.1cm); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.85) -- node {\small $k-1$} (1.6,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2); } \otimes \bar 1_{\ell,\rho} \end{equation} using \cref{eq:nhR2andR3} and \cref{eq:nhdotslide}. 
Similarly, using \cref{lem:allcrossingzniszero}, we get \begin{equation} \label{eq:phizeroimathgenker2} {\varphi_k^0}'(\star) = \ \tikzdiagh[xscale=1.25]{0}{ \draw (1.5,-.5) .. controls (1.5,-.4) .. (-.25,.35) .. controls (1.5,1) .. (1.5,2); \draw (.25,-.5) .. controls (.25,-.25) and (.5,-.25) .. (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.5,-.5) .. controls (.5,-.25) and (.75,-.25) .. (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1,-.5) .. controls (1,-.25) and (1.25,-.25) .. (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.25,-.5) .. controls (1.25,-.25) and (1.5,-.25) .. (1.5,0) .. controls (1.5,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.25,0) .. controls (1.5,.5) .. (1.5,1) .. controls (1.5,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,0) circle (.1cm); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.85) -- node {\small $k-1$} (1.35,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2) node[pos=.335,nail]{}; } \otimes \bar 1_{\ell,\rho} \end{equation} On the other hand, we compute \begin{equation} \label{eq:phizeroimathgenker3} \begin{aligned} (1\otimes\imath_k^{k-1})(\star_s) &= (-1)^s \left( \tikzdiagh[xscale=1.25]{0}{ \draw (.25,-.5) .. controls (.25,-.25) and (0,-.25) .. (0,0) .. controls (0,.5) and (1.25,.5) .. (1.25,2); \draw (.5,-.5) -- (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[near start, tikzdot]{} .. controls (.25,1.5) and (.5,1.5) .. (.5,2) node[near end, tikzdot]{}; \node at (.5,1) {\tiny $\dots$}; \draw (1,-.5) -- (1,0) .. controls (1,.5) and (.75,.5) .. (.75,1)node[near start, tikzdot]{} .. controls (.75,1.5) and (1,1.5) .. 
(1,2) node[near end, tikzdot]{}; \draw (1.25,-.5) -- (1.25,0) .. controls (1.25,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5)node[below]{\small $1$} .. controls (0,-.25) and (.25,-.25) ..(.25,0) .. controls (.25,.5).. (-.25,1) .. controls (.25,1.5) .. (.25,2); \draw[fill=white, color=white] (-.35,1) circle (.1cm); \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2); % \draw (1.5,-.5) -- (1.5,2); \node at (1.75,1) {\tiny $\dots$}; \draw (2,-.5) -- (2,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.85) -- node {\small $s$} (1.1,-.85); } \otimes \bar 1_{\ell,\rho} , \tikzdiagh[xscale=1.25]{0}{ \draw (2,-.5) .. controls (2,-.25) .. (-.25,0) .. controls (1.25,1) .. (1.25,2); \draw (.25,-.5) .. controls (.25,-.25) and (.5,-.25) .. (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[near start, tikzdot]{} .. controls (.25,1.5) and (.5,1.5) .. (.5,2) node[near end, tikzdot]{}; \node at (.5,1) {\tiny $\dots$}; \draw (.75,-.5) .. controls (.75,-.25) and (1,-.25) .. (1,0) .. controls (1,.5) and (.75,.5) .. (.75,1)node[near start, tikzdot]{} .. controls (.75,1.5) and (1,1.5) .. (1,2) node[near end, tikzdot]{}; \draw (1,-.5) .. controls (1,-.25) and (1.25,-.25) .. (1.25,0) .. controls (1.25,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5)node[below]{\small $1$} .. controls (0,-.25) and (.25,-.25) ..(.25,0) .. controls (.25,.5).. (-.25,1) .. controls (.25,1.5) .. (.25,2); \draw[fill=white, color=white] (-.35,1) circle (.1cm); \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2) node[nail, pos = .2]{}; % \draw (1.25,-.5) .. controls (1.25,-.25) and (1.5,-.25) .. (1.5,0) -- (1.5,2); \node at (1.75,1) {\tiny $\dots$}; \draw (1.75,-.5) .. controls (1.75,-.25) and (2,-.25) ..(2,0) -- (2,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.85) -- node {\small $s$} (.85,-.85); } \otimes \bar 1_{\ell,\rho} \right) \\ &= (-1)^s \left( \tikzdiagh[xscale=1.25]{0}{ \draw (.25,-.5) .. controls (.25,-.25) and (0,-.25) .. 
(0,0) .. controls (0,.5) and (1.25,.5) .. (1.25,2); \draw (.5,-.5) -- (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[near start, tikzdot]{} .. controls (.25,1.5) and (.5,1.5) .. (.5,2) node[near end, tikzdot]{}; \node at (.5,1) {\tiny $\dots$}; \draw (1,-.5) -- (1,0) .. controls (1,.5) and (.75,.5) .. (.75,1)node[near start, tikzdot]{} .. controls (.75,1.5) and (1,1.5) .. (1,2) node[near end, tikzdot]{}; \draw (1.25,-.5) -- (1.25,0) .. controls (1.25,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.375).. (-.25,-.25) .. controls (.25,0) .. (.25,.25) .. controls (.25,.5) and (0,.5) .. (0,1) .. controls (0,1.5) and (.25,1.5) .. (.25,2); \draw[fill=white, color=white] (-.35,-.25) circle (.1cm); \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2); % \draw (1.5,-.5) -- (1.5,2); \node at (1.75,1) {\tiny $\dots$}; \draw (2,-.5) -- (2,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.85) -- node {\small $s$} (1.1,-.85); } \otimes \bar 1_{\ell,\rho} , \tikzdiagh[xscale=1.25]{0}{ \draw (2,-.5) .. controls (2,-.25) .. (-.25,.25) .. controls (1.25,1) .. (1.25,2); \draw (.25,-.5) .. controls (.25,-.25) and (.5,-.25) .. (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[near start, tikzdot]{} .. controls (.25,1.5) and (.5,1.5) .. (.5,2) node[near end, tikzdot]{}; \node at (.5,1) {\tiny $\dots$}; \draw (.75,-.5) .. controls (.75,-.25) and (1,-.25) .. (1,0) .. controls (1,.5) and (.75,.5) .. (.75,1)node[near start, tikzdot]{} .. controls (.75,1.5) and (1,1.5) .. (1,2) node[near end, tikzdot]{}; \draw (1,-.5) .. controls (1,-.25) and (1.25,-.25) .. (1.25,0) .. controls (1.25,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.375).. (-.25,-.25) .. controls (.25,0) .. (.25,.25) .. controls (.25,.5) and (0,.5) .. (0,1) .. controls (0,1.5) and (.25,1.5) .. 
(.25,2); \draw[fill=white, color=white] (-.35,-.25) circle (.1cm); \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2) node[nail, pos = .3]{}; % \draw (1.25,-.5) .. controls (1.25,-.25) and (1.5,-.25) .. (1.5,0) -- (1.5,2); \node at (1.75,1) {\tiny $\dots$}; \draw (1.75,-.5) .. controls (1.75,-.25) and (2,-.25) ..(2,0) -- (2,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.85) -- node {\small $s$} (.85,-.85); } \otimes \bar 1_{\ell,\rho} \right) \end{aligned} \end{equation} using \cref{eq:redR2} and \cref{eq:nailslidedcross}. Then, we conclude by observing that \cref{eq:phizeroimathgenker} follows by applying \cref{lem:dotanddoubledots} on \cref{eq:phizeroimathgenker1}, \cref{eq:phizeroimathgenker2} and \cref{eq:phizeroimathgenker3} together. \end{proof} \begin{lem} As a left $(T^{\lambda,r},0)$-module, $\ker(u \otimes \gamma_k)$ is generated by the elements \begin{equation}\label{eq:kergenelem} \left( -\ \tikzdiagh{0}{ \draw (1.25,0) .. controls (1.25,.5) .. (-.5,.75) .. controls (0,.875) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. (.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1) node[pos=.75,nail]{}; \draw (.5,0) .. controls (.5,.5) and (.75,.5) .. (.75,1); \node at (.75,.15){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (1.25,.5).. (1.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node { \small $t$} (1.1,-.35); } \otimes \bar 1_{\ell+k-1-t,\rho} , \tikzdiagh{0}{ \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $1$} .. controls (0,.1) .. (-.5,.25) .. controls (.5,.7) .. 
(.5,1); \draw[fill=white, color=white] (-.6,.25) circle (.1cm); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} --(-.5,1); } \otimes \bar 1_{\ell+k-1,\rho} \right) \in (X\otimes_T {Y'}^0_k) \oplus (X\otimes_T Y^{0,t}_k) \end{equation} for all $0 \leq t \leq k-1$. \end{lem} \begin{proof} Let $K \subset X \otimes_T Y^0_k$ be the submodule generated by the elements in \cref{eq:kergenelem}. A straightforward computation shows that $K \subset \ker(u \otimes \gamma_k)$. Thus, we have a complex \begin{equation}\label{eq:SESKXTX} 0 \rightarrow K \hookrightarrow X \otimes_T Y_k^0 \overset{u \otimes \gamma_k}{\twoheadrightarrow} \lambda^{-1} X_k \rightarrow 0, \end{equation} where the left arrow is an injection and the right arrow is a surjection. Furthermore, by \cref{thm:X0basis}, we have that \begin{align*} K &\cong \lambda^{-1} Y_k^1, & X \otimes_T Y_k^0 &\cong \lambda^{-1} Y_k^0. \end{align*} Therefore, by \cref{lem:sesX0} we obtain that the sequence in \cref{eq:SESKXTX} is exact. In particular, we have $K = \ker(u \otimes \gamma_k)$. \end{proof} \begin{prop}\label{prop:kerequalsimg1} We have $\ker(u \otimes \gamma_k) = \Image(\varphi_k^0 + 1\otimes \imath_k)$. \end{prop} \begin{proof} We will show by backward induction on $t$ that the elements \cref{eq:kergenelem} are all in $\Image(\varphi_k^0 + 1\otimes \imath_k)$. The case $t = k-1$ is \cref{lem:phizeroimathgenker}. The induction step is essentially similar to the proof of \cref{lem:phizeroimathgenker}. In particular we want to show that \cref{eq:kergenelem} is in $\bigcup_{t' \geq t} \Image(\varphi_k^0 + 1\otimes \imath_k^{t'})$. For this, we write \begin{align*} \star &:= \tikzdiagh[xscale=1.25]{0}{ \draw[vstdhl] (-.25,0) node[below]{\small $\lambda$} -- (-.25,2); % \draw (0,0) .. controls (0,1) and (.5,1) .. (.5,2); \node at (.25,.25) {\tiny $\dots$}; \node at (.75,1.75) {\tiny $\dots$}; \draw (.5,0) .. controls (.5,1) and (1,1) .. (1,2); % \draw (.75,0) .. controls (.75,1) and (0,1) .. 
(0,2); % \draw (1,0) .. controls (1,1) and (1.25,1) .. (1.25,2); \node at (1.25,.25) {\tiny $\dots$}; \node at (1.5,1.75) {\tiny $\dots$}; \draw (1.5,0) .. controls (1.5,1) and (1.75,1) .. (1.75,2); % \draw[stdhl] (2,0) node[below]{\small $1$} .. controls (2,1) and (.25,1) .. (.25,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.35) -- node {\small $t$} (.65,-.35); } \otimes \bar 1_{\ell,\rho}, \end{align*} Then, we compute using \cref{lem:allcrossingzniszero} and \cref{lem:computezn} \[ \varphi_k^{0,t'}(\star) = \begin{cases} \quad 0, & \text{if $t' < t$,} \\ \tikzdiagh[xscale=1.25]{0}{ \draw (.25,-.5) .. controls (.25,-.1) and (0,-.1) .. (0,.15) .. controls (0,.5) and (1.5,.5) .. (1.5,2); \draw (.5,-.5) -- (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,-.5) -- (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,-.5) -- (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,-.5) -- (1.5,0) .. controls (1.5,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.25,0) .. controls (1.5,.5) .. (1.5,1) .. controls (1.5,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,0) circle (.1cm); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.85) -- node {\small $k-1$} (1.6,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2); } \otimes \bar 1_{\ell,\rho}, & \text{if $t' = t$,} \\ \tikzdiagh[xscale=1.25]{0}{ \draw(.25,-.5) .. controls (.25,0) and (0,0) .. (0,.25) .. controls (0,.5) and (1.75,.5) .. (1.75,.75) .. controls (1.75,1.5) and (2,1.5) .. (2,2); % \draw (.5,-.5) .. controls (.5,.25) and (0,.25) .. (0,1) .. controls (0,1.5) and (.5,1.5) .. 
(.5,2) node[pos=0,tikzdot]{}; \node at (.75,-.4) {\tiny $\dots$}; \node at (.25,1) {\tiny $\dots$}; \draw (1,-.5) .. controls (1,.25) and (.5,.25) .. (.5,1) .. controls (.5,1.5) and (1,1.5) .. (1,2) node[pos=0,tikzdot]{}; % \draw (1.25,-.5) .. controls (1.25,.25) and (.75,.25) .. (.75,1) .. controls (.75,1.5) and (0,1.5) .. (0,2) node[pos=0,tikzdot]{}; % \draw (1.5,-.5) .. controls (1.5,.25) and (1,.25) .. (1,1) .. controls (1,1.5) and (1.25,1.5) .. (1.25,2) node[pos=0,tikzdot]{}; \node at (1.75,-.4) {\tiny $\dots$}; \node at (1.25,1) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,.25) and (1.5,.25) .. (1.5,1) .. controls (1.5,1.5) and (1.75,1.5) .. (1.75,2) node[pos=0,tikzdot]{}; % \draw (2.25,-.5) .. controls (2.25,.25) and (2,.25) .. (2,1) .. controls (2,1.5) and (2.25,1.5) .. (2.25,2); \node at (2.5,-.4) {\tiny $\dots$}; \node at (2.5,1.9) {\tiny $\dots$}; \draw (2.75,-.5) .. controls (2.75,.25) and (2.5,.25) .. (2.5,1) .. controls (2.5,1.5) and (2.75,1.5) .. (2.75,2); % \filldraw [fill=white, draw=black,rounded corners] (1.625,.75) rectangle (2.625,1.25) node[midway] { $z_{k-t'}$}; % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.25,0) .. controls (2.75,.25) .. (2.75,1) -- (2.75,1.25) .. controls (2.75,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,0) circle (.1cm); % \draw[decoration={brace,raise=-8pt},decorate] (.4,2.35) -- node {\small $t$} (1.1,2.35); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.85) -- node {\small $t'$} (2.1,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2); } \otimes \bar 1_{\ell,\rho}, & \text{if $t' > t$,} \end{cases} \] for $0 \leq t' \leq k-1$, and \[ {\varphi'}_k^0(\star) = -\ \tikzdiagh[xscale=1.25]{0}{ \draw (1,-.5) .. controls (1,-.4) .. (-.25,.35) .. controls (2,1) .. (2,2); \draw (.25,-.5) .. controls (.25,-.25) and (.5,-.25) .. (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. 
(.5,2); \node at (.25,1) {\tiny $\dots$}; \draw (.75,-.5) .. controls (.75,-.25) and (1,-.25) .. (1,0) .. controls (1,.5) and (.5,.5) .. (.5,1) node[tikzdot,pos=1]{} .. controls (.5,1.5) and (1,1.5) .. (1,2); % \draw (1.25,-.5) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \node at (1,1) {\tiny $\dots$}; \draw (1.75,-.5) .. controls (1.75,.5) and (1.25,.5) .. (1.25,1) node[tikzdot,pos=1]{} .. controls (1.25,1.5) and (1.75,1.5) .. (1.75,2); % \draw (2,-.5) .. controls (2,1.75) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.25,0) .. controls (2,.5) .. (2,1) .. controls (2,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,0) circle (.1cm); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.85) -- node {\small $t$} (.85,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2) node[pos=.335,nail]{}; } \ \otimes \bar 1_{\ell,\rho} - \sum_{t' = t+1}^{k-1} \ \tikzdiagh[xscale=1.25]{0}{ \draw(2,-.5) .. controls (2,.25) .. (-.25,.375) .. controls (1.75,.5) .. (1.75,.75) .. controls (1.75,1.5) and (2,1.5) .. (2,2); % \draw (.25,-.5) .. controls (.25,.25) and (0,.25) .. (0,1) .. controls (0,1.5) and (.5,1.5) .. (.5,2) node[pos=0,tikzdot]{}; \node at (.5,-.4) {\tiny $\dots$}; \node at (.25,1) {\tiny $\dots$}; \draw (.75,-.5) .. controls (.75,.25) and (.5,.25) .. (.5,1) .. controls (.5,1.5) and (1,1.5) .. (1,2) node[pos=0,tikzdot]{}; % \draw (1,-.5) .. controls (1,.25) and (.75,.25) .. (.75,1) .. controls (.75,1.5) and (0,1.5) .. (0,2) node[pos=0,tikzdot]{}; % \draw (1.25,-.5) .. controls (1.25,.25) and (1,.25) .. (1,1) .. controls (1,1.5) and (1.25,1.5) .. (1.25,2) node[pos=0,tikzdot]{}; \node at (1.5,-.4) {\tiny $\dots$}; \node at (1.25,1) {\tiny $\dots$}; \draw (1.75,-.5) .. controls (1.75,.25) and (1.5,.25) .. (1.5,1) .. controls (1.5,1.5) and (1.75,1.5) .. 
(1.75,2) node[pos=0,tikzdot]{}; % \draw (2.25,-.5) .. controls (2.25,.25) and (2,.25) .. (2,1) .. controls (2,1.5) and (2.25,1.5) .. (2.25,2); \node at (2.5,-.4) {\tiny $\dots$}; \node at (2.5,1.9) {\tiny $\dots$}; \draw (2.75,-.5) .. controls (2.75,.25) and (2.5,.25) .. (2.5,1) .. controls (2.5,1.5) and (2.75,1.5) .. (2.75,2); % \filldraw [fill=white, draw=black,rounded corners] (1.625,.75) rectangle (2.625,1.25) node[midway] { $z_{k-t'}$}; % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.375) .. (-.25,-.25) .. controls (2.75,0) .. (2.75,.75) -- (2.75,1.25) .. controls (2.75,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,-.25) circle (.1cm); % \draw[decoration={brace,raise=-8pt},decorate] (.4,2.35) -- node {\small $t$} (1.1,2.35); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.85) -- node {\small $t'$} (1.85,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2) node[pos=.35,nail]{}; } \ \otimes \bar 1_{\ell,\rho} \] Thus, since $t' > t$, by induction hypothesis we know that \begin{align*} \varphi_k^0(\star) \equiv \left( - \ \tikzdiagh[xscale=1.25]{0}{ \draw (1,-.5) .. controls (1,-.4) .. (-.25,.35) .. controls (2,1) .. (2,2); \draw (.25,-.5) .. controls (.25,-.25) and (.5,-.25) .. (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \node at (.25,1) {\tiny $\dots$}; \draw (.75,-.5) .. controls (.75,-.25) and (1,-.25) .. (1,0) .. controls (1,.5) and (.5,.5) .. (.5,1) node[tikzdot,pos=1]{} .. controls (.5,1.5) and (1,1.5) .. (1,2); % \draw (1.25,-.5) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \node at (1,1) {\tiny $\dots$}; \draw (1.75,-.5) .. controls (1.75,.5) and (1.25,.5) .. (1.25,1) node[tikzdot,pos=1]{} .. controls (1.25,1.5) and (1.75,1.5) .. (1.75,2); % \draw (2,-.5) .. controls (2,1.75) and (0,1.5) .. 
(0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.25,0) .. controls (2,.5) .. (2,1) .. controls (2,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,0) circle (.1cm); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.85) -- node {\small $t$} (.85,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2) node[pos=.335,nail]{}; } \ \otimes \bar 1_{\ell,\rho} , \tikzdiagh[xscale=1.25]{0}{ \draw (.25,-.5) .. controls (.25,-.1) and (0,-.1) .. (0,.15) .. controls (0,.5) and (1.5,.5) .. (1.5,2); \draw (.5,-.5) -- (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[tikzdot,pos=1]{} .. controls (0,1.5) and (.5,1.5) .. (.5,2); \draw (.75,-.5) -- (.75,0) .. controls (.75,.5) and (.25,.5) .. (.25,1) node[tikzdot,pos=1]{} .. controls (.25,1.5) and (.75,1.5) .. (.75,2); \node at (.5,1) {\tiny $\dots$}; \draw (1.25,-.5) -- (1.25,0) .. controls (1.25,.5) and (.75,.5) .. (.75,1) node[tikzdot,pos=1]{} .. controls (.75,1.5) and (1.25,1.5) .. (1.25,2); \draw (1.5,-.5) -- (1.5,0) .. controls (1.5,1.5) and (0,1.5) .. (0,2); % \draw[stdhl] (0,-.5) node[below]{\small $1$} .. controls (0,-.25) .. (-.25,0) .. controls (1.5,.5) .. (1.5,1) .. controls (1.5,1.75) and (.25,1.75) .. (.25,2); \draw[fill=white, color=white] (-.35,0) circle (.1cm); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.85) -- node {\small $k-1$} (1.6,-.85); % \draw[vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,2); } \otimes \bar 1_{\ell,\rho} \right) &+ \Image(\varphi_k^0 + 1\otimes \imath_k) \\ &\in (X\otimes_T Y'_0) \oplus (X\otimes_T Y^{0,t}_k). \end{align*} Then, by the same arguments as in \cref{lem:phizeroimathgenker}, that is using \cref{eq:dotanddoubledots}, we obtain \cref{eq:kergenelem}. \end{proof} \begin{lem}\label{lem:varphi0kinjective} The map $\varphi^0_k$ is injective. 
\end{lem} \begin{proof} Since adding black/red crossings is injective, it suffices, by \cref{lem:computezn}, to show that the left $T_k^{\lambda,0}$-module map \begin{equation}\label{eq:phi0injective} T_k^{\lambda,0} \rightarrow \bigoplus_{t=0}^{k-1} q^{2(k-1-t)} T_k^{\lambda,0}, \quad \tikzdiag { \draw[vstdhl] (0,0) -- (0,1); \draw (.25,0) -- (.25,1); \node at(.5,.5) {\tiny $\dots$}; \draw (.75,0) -- (.75,1); } \mapsto \left( \tikzdiag { \draw[vstdhl]node[below]{\small$\lambda$} (0,0) -- (0,1); \draw (.25,0) .. controls (.25,.25) and (2,.25) .. (2,.5) .. controls (2,.75) and (1,.75) .. (1,1); % \draw (.5,0) .. controls (.5,.5) and (.25,.5) .. (.25,1) node[midway, tikzdot]{}; \node at (.75,.1){\tiny $\dots$}; \draw (1,0) .. controls (1,.5) and (.75,.5) .. (.75,1) node[midway, tikzdot]{}; % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $t$} (1.1,-.35); % \draw (1.25,0)-- (1.25,1) node[midway, tikzdot]{}; \node at(1.5,.1){\tiny $\dots$}; \draw (1.75,0)-- (1.75,1) node[midway, tikzdot]{}; } \right)_{0 \leq t < k} \end{equation} is injective. Since $T_k^{\lambda,0}$ is isomorphic to the dg-enhanced nilHecke algebra of \cite{naissevaz2}, we know by the results in \cite[Proposition 2.5]{naissevaz2} that there is a decomposition \begin{align}\label{eq:Andecomp} T_k^{\lambda,0} &\cong \bigoplus_{t' \geq 0} P_{k,t'} , & P_{k,t'} := \bigoplus_{p \geq 0} \tikzdiag{ \draw[vstdhl]node[below]{\small$\lambda$} (-.25,0) -- (-.25,2); % \draw (.25,0) .. controls (.25,.5) and (.5,.5) .. (.5,1) -- (.5,2); \node at(.5,.1){\tiny $\dots$}; \draw (.75,0) .. controls (.75,.5) and (1,.5) .. (1,1) -- (1,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.35) -- node {\small $t'$} (.85,-.35); % \draw (1,0) .. controls (1,.5) and (.25,.5) ..
(.25,1) -- (.25,2) node[pos=.75,tikzdot]{} node[pos=.75,xshift=-1ex,yshift=.75ex]{\small $p$}; % \draw (1.25,0) -- (1.25,2); \node at(1.5,.1){\tiny $\dots$}; \draw (1.75,0) -- (1.75,2); % \filldraw [fill=white, draw=black] (.375,1.15) rectangle (1.875,1.65) node[midway] { $\nh_{k-1}$}; \filldraw [fill=white, draw=black] (2.25,1.25) circle (1.5ex) node { $k$}; } \end{align} where the box labeled $\nh_{k-1}$ is the nilHecke algebra, and the circle labeled $k$ is the algebra generated by labeled floating dots in the rightmost region (see \cite[\S2.4]{naissevaz2}). These floating dots correspond to combinations of nails, dots and crossings, giving elements that are in the (graded w.r.t. the homological degree) center of $T_k^{\lambda,0}$. Furthermore, the map \[ \nh_{k-1} \rightarrow q^{2k-2} \nh_k, \quad \tikzdiag{ \draw(.25,1) -- (.25,2); \node at(.75,1.1){\tiny$\dots$}; \node at(.75,1.9){\tiny$\dots$}; \draw(1.25,1) -- (1.25,2); \filldraw [fill=white, draw=black] (0,1.25) rectangle (1.5,1.75) node[midway] { $\nh_{k-1}$}; } \mapsto \tikzdiag{ \draw (0,0) .. controls (0,.25) and (1.5,.25) .. (1.5,.5) .. controls (1.5,.75) and (0,.75) .. (0,1) -- (0,2); % \draw (.5,0) .. controls (.5,.25) and (0,.25) .. (0,.5) node[pos=1,tikzdot]{} .. controls (0,.75) and (.5,.75) .. (.5,1) -- (.5,2); \node at(.5,.5){\tiny $\dots$}; \draw (1.5,0) .. controls (1.5,.25) and (1,.25) .. (1,.5) node[pos=1,tikzdot]{} .. controls (1,.75) and (1.5,.75) .. (1.5,1) -- (1.5,2); % \filldraw [fill=white, draw=black] (.25,1.25) rectangle (1.75,1.75) node[midway] { $\nh_{k-1}$}; } \] is injective (this can be deduced by sliding all dots to the bottom using \cref{eq:nhdotslide}, and then using a basis theorem, as for example in \cite[Theorem 2.5]{KL1}, to see that the map takes the form of a column echelon matrix with $1$s as pivots). Then, applying \cref{eq:phi0injective} to $P_{k,t'}$ yields \[ \tikzdiag{ \draw[vstdhl]node[below]{\small$\lambda$} (-.25,0) -- (-.25,2); % \draw (.25,0) ..
controls (.25,.5) and (.5,.5) .. (.5,1) -- (.5,2); \node at(.5,.1){\tiny $\dots$}; \draw (.75,0) .. controls (.75,.5) and (1,.5) .. (1,1) -- (1,2); % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.15,-.35) -- node {\small $t'$} (.85,-.35); % \draw (1,0) .. controls (1,.5) and (.25,.5) .. (.25,1) -- (.25,2) node[pos=.75,tikzdot]{} node[pos=.75,xshift=-1ex,yshift=.75ex]{\small $p$}; % \draw (1.25,0) -- (1.25,2); \node at(1.5,.1){\tiny $\dots$}; \draw (1.75,0) -- (1.75,2); % \filldraw [fill=white, draw=black] (.375,1.15) rectangle (1.875,1.65) node[midway] { $\nh_{k-1}$}; \filldraw [fill=white, draw=black] (2.25,1.25) circle (1.5ex) node { $k$}; } \mapsto \begin{cases} 0, & \text{if $t < t'$},\\ \tikzdiag{ \draw (0,0) .. controls (0,.25) and (1.5,.25) .. (1.5,.5) .. controls (1.5,.75) and (0,.75) .. (0,1) -- (0,2) node[pos=.75,tikzdot]{} node[pos=.75,xshift=-1ex,yshift=.75ex]{\small $p$}; % \draw (.5,0) .. controls (.5,.25) and (0,.25) .. (0,.5) node[pos=1,tikzdot]{} .. controls (0,.75) and (.5,.75) .. (.5,1) -- (.5,2); \node at(.5,.5){\tiny $\dots$}; \draw (1.5,0) .. controls (1.5,.25) and (1,.25) .. (1,.5) node[pos=1,tikzdot]{} .. controls (1,.75) and (1.5,.75) .. (1.5,1) -- (1.5,2); % \filldraw [fill=white, draw=black] (.25,1.25) rectangle (1.75,1.75) node[midway] { $\nh_{k-1}$}; \filldraw [fill=white, draw=black] (2.25,1.25) circle (1.5ex) node { $k$}; }, & \text{if $t = t'$}, \\ \tikzdiag{ \draw (0,0) .. controls (0,.25) and (5.25,.25) .. (5.25,.5) .. controls (5.25,.75) and (3.5,.75) .. (3.5,1) -- (3.5,2); % \draw (.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) node[midway, tikzdot]{} .. controls (0,1.25) and (.5,1.25) .. (.5,1.5) -- (.5,2); \node at(.75,.5){\tiny$\dots$}; \draw (1.5,0) .. controls (1.5,.5) and (1,.5) .. (1,1) node[midway, tikzdot]{} .. controls (1,1.25) and (1.5,1.25) .. (1.5,1.5) -- (1.5,2); % \draw (2,0) .. controls (2,.5) and (1.5,.5) .. (1.5,1) node[midway, tikzdot]{} .. controls (1.5,1.25) and (0,1.25) .. 
(0,1.5) -- (0,2) node[pos=.5,tikzdot]{} node[pos=.5,xshift=-1ex,yshift=.75ex]{\small $p$}; % \draw (2.5,0) .. controls (2.5,.5) and (2,.5) .. (2,1) node[midway, tikzdot]{} -- (2,2); \node at(2.75,.5){\tiny$\dots$}; \draw (3.5,0) .. controls (3.5,.5) and (3,.5) .. (3,1) node[midway, tikzdot]{} -- (3,2); % \draw (4,0) .. controls (4,.25) and (3.75,.25) .. (3.75,.5) .. controls (3.75,.75) and (4,.75) .. (4,1) node[pos=0, tikzdot]{} -- (4,2); \node at(4.25,.5){\tiny$\dots$}; \draw (5,0) .. controls (5,.25) and (4.75,.25) .. (4.75,.5) .. controls (4.75,.75) and (5,.75) .. (5,1) node[pos=0, tikzdot]{}-- (5,2); % \filldraw [fill=white, draw=black] (.25,1.45) rectangle (5.25,1.95) node[midway] { $\nh_{k-1}$}; \filldraw [fill=white, draw=black] (5.75,1.25) circle (1.5ex) node { $k$}; % \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {\small $t$} (3.6,-.35); \draw[decoration={brace,raise=-8pt},decorate] (.4,2.35) -- node {$t'$} (1.6,2.35); } , & \text{if $t > t'$}. \end{cases} \] Therefore, after decomposing $T^{\lambda,0}_k$, \cref{eq:phi0injective} yields a column echelon form matrix with injective maps as pivots, and thus is injective. \end{proof} \begin{prop}\label{prop:kerequalsimg2} We have $\ker((1\otimes \imath_k) + \varphi_k^0) = \Image(\varphi_k^1 - u)$. \end{prop} \begin{proof} First, recall that $\imath_k$ is injective (as explained in \cref{sec:cofX}), so that $(1\otimes \imath_k)$ is injective as well. Since $\varphi_k^0$ is also injective by \cref{lem:varphi0kinjective}, we get \[ \ker((1\otimes \imath_k) + \varphi_k^0) \cong \Image(1\otimes \imath_k) \cap \Image(\varphi_k^0). \] We observe that $\Image(1\otimes \imath_k) \cap \Image(\varphi_k^{0}) \cap (X\otimes_T Y^{0,t}_k)$ is generated by \[ \varphi_k^{0,t}\left( \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1); \draw (0,0) .. controls (0,.5) and (.5,.5) .. (.5,1); \draw (.25,0) .. controls (.25,.5) and (.75,.5) .. (.75,1); \node at (.55,.15){\tiny $\dots$}; \draw (.75,0) ..
controls (.75,.5) and (1.25,.5) .. (1.25,1); \draw[stdhl] (1.25,0) node[below]{\small $1$} .. controls (1.25,.5) and (-.25,.5) .. (-.25,1); } \otimes \bar 1_{\ell,\rho} \right) = (1 \otimes \imath_k^{t}) \left( \tikzdiag{ % \draw (0,-.5) .. controls (0,0) and (1,0) .. (1,.5) -- (1,1); % \draw (.75,-.5) .. controls (.75,0) and (.25,0) .. (.25,.5)node[pos=.15, tikzdot]{} -- (.25,1) node[pos=.5, tikzdot]{} ; \node at(1,-.4) {\tiny $\dots$}; \node at(.55,.35) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,0) and (.75,0) .. (.75,.5)node[pos=.15, tikzdot]{} -- (.75,1) node[pos=.5, tikzdot]{} ; % \draw (1.5,-.5) .. controls (1.5,0) and (1.25,0) .. (1.25,.5)node[pos=.15, tikzdot]{} -- (1.25,1); \node at(1.75,-.4) {\tiny $\dots$}; \node at(1.55,.15) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,0) and (1.75,0) .. (1.75,.5)node[pos=.15, tikzdot]{} -- (1.75,1); % \filldraw [fill=white, draw=black,rounded corners] (.875,.4) rectangle (1.875,.9) node[midway] { $z_{k-t}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (.65,-.85) -- node {\small $t$} (1.35,-.85); % \draw[stdhl] (.375,-.5) node[below]{\small $1$} .. controls (.375,-.25) .. (-.5,0) .. controls (-.25,.25) .. (-.25,.5) -- (-.25,1); \draw[fill=white, color=white] (-.65,0) circle (.15cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) ; } \otimes \bar 1_{\ell,\rho} \right) = \tikzdiag{ % \draw (.25,-.5) .. controls (.25,0) and (1,0) .. (1,.5)node[pos=.15, tikzdot]{} -- (1,1); % \draw (.75,-.5) .. controls (.75,0) and (.25,0) .. (.25,.5)node[pos=.15, tikzdot]{} -- (.25,1) node[pos=.5, tikzdot]{} ; \node at(1,-.4) {\tiny $\dots$}; \node at(.55,.35) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,0) and (.75,0) .. (.75,.5)node[pos=.15, tikzdot]{} -- (.75,1) node[pos=.5, tikzdot]{} ; % \draw (1.5,-.5) .. controls (1.5,0) and (1.25,0) .. (1.25,.5)node[pos=.15, tikzdot]{} -- (1.25,1); \node at(1.75,-.4) {\tiny $\dots$}; \node at(1.55,.15) {\tiny $\dots$}; \draw (2,-.5) .. 
controls (2,0) and (1.75,0) .. (1.75,.5)node[pos=.15, tikzdot]{} -- (1.75,1); % \filldraw [fill=white, draw=black,rounded corners] (.875,.4) rectangle (1.875,.9) node[midway] { $z_{k-t}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (.65,-.85) -- node {\small $t$} (1.35,-.85); % \draw[stdhl] (-.25,-.5) node[below]{\small $1$} .. controls (-.25,-.25) .. (-.5,0) .. controls (-.25,.25) .. (-.25,.5) -- (-.25,1); \draw[fill=white, color=white] (-.65,0) circle (.15cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) ; } \otimes \bar 1_{\ell,\rho}. \] and by \[ \varphi_k^{0,t}\left( \tikzdiagh{0}{ \draw (0,0) .. controls (0,.125) .. (-.5,.25) .. controls (.5,.75) .. (.5,1); \draw (.25,0) .. controls (.25,.5) and (.75,.5) .. (.75,1); \node at (.55,.15){\tiny $\dots$}; \draw (.75,0) .. controls (.75,.5) and (1.25,.5) .. (1.25,1); \draw[stdhl] (1.25,0) node[below]{\small $1$} .. controls (1.25,.5) and (-.25,.5) .. (-.25,1); \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.25,nail]{}; } \otimes \bar 1_{\ell,\rho} \right) = (1 \otimes \imath_k^{t}) \left( \tikzdiag{ % \draw (0,-.5) .. controls (0,-.375) .. (-.5,-.25) .. controls (1,0) .. (1,.5) -- (1,1); % \draw (.75,-.5) .. controls (.75,0) and (.25,0) .. (.25,.5)node[pos=.15, tikzdot]{} -- (.25,1) node[pos=.5, tikzdot]{} ; \node at(1,-.4) {\tiny $\dots$}; \node at(.55,.35) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,0) and (.75,0) .. (.75,.5)node[pos=.15, tikzdot]{} -- (.75,1) node[pos=.5, tikzdot]{} ; % \draw (1.5,-.5) .. controls (1.5,0) and (1.25,0) .. (1.25,.5)node[pos=.15, tikzdot]{} -- (1.25,1); \node at(1.75,-.4) {\tiny $\dots$}; \node at(1.55,.15) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,0) and (1.75,0) .. 
(1.75,.5)node[pos=.15, tikzdot]{} -- (1.75,1); % \filldraw [fill=white, draw=black,rounded corners] (.875,.4) rectangle (1.875,.9) node[midway] { $z_{k-t}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (.65,-.85) -- node {\small $t$} (1.35,-.85); % \draw[stdhl] (.375,-.5) node[below]{\small $1$} .. controls (.375,-.25) .. (-.5,0) .. controls (-.25,.25) .. (-.25,.5) -- (-.25,1); \draw[fill=white, color=white] (-.65,0) circle (.15cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.15,nail]{}; } \otimes \bar 1_{\ell,\rho} \right) = \tikzdiag{ % \draw (.25,-.5) .. controls (.25,-.375) .. (-.5,-.25) .. controls (1,0) .. (1,.5) -- (1,1); % \draw (.75,-.5) .. controls (.75,0) and (.25,0) .. (.25,.5)node[pos=.15, tikzdot]{} -- (.25,1) node[pos=.5, tikzdot]{} ; \node at(1,-.4) {\tiny $\dots$}; \node at(.55,.35) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,0) and (.75,0) .. (.75,.5)node[pos=.15, tikzdot]{} -- (.75,1) node[pos=.5, tikzdot]{} ; % \draw (1.5,-.5) .. controls (1.5,0) and (1.25,0) .. (1.25,.5)node[pos=.15, tikzdot]{} -- (1.25,1); \node at(1.75,-.4) {\tiny $\dots$}; \node at(1.55,.15) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,0) and (1.75,0) .. (1.75,.5)node[pos=.15, tikzdot]{} -- (1.75,1); % \filldraw [fill=white, draw=black,rounded corners] (.875,.4) rectangle (1.875,.9) node[midway] { $z_{k-t}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (.65,-.85) -- node {\small $t$} (1.35,-.85); % \draw[stdhl] (-.25,-.5) node[below]{\small $1$} .. controls (-.25,0) .. (-.5,.25) .. controls (-.25,.375) .. (-.25,.5) -- (-.25,1); \draw[fill=white, color=white] (-.65,.25) circle (.15cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) node[pos=.15,nail]{}; } \otimes \bar 1_{\ell,\rho}. \] Moreover, we have \begin{align*} \varphi_k^{1,t}\left( \tikzdiagh{0}{ \draw (0,-.5) .. controls (0,0) and (.25,0) .. (.25,.5); \draw (.25,-.5) .. controls (.25,0) and (.5,0) .. 
(.5,.5); \node at(.75,.35) {\tiny $\dots$}; \draw (.75,-.5) .. controls (.75,0) and (1,0) .. (1,.5); % \draw[stdhl] (1.25,-.5) node[below]{\small $1$} .. controls (1.25,-.25) .. (-.5,0) .. controls (-.25,.25) .. (-.25,.5) ; \draw[fill=white, color=white] (-.65,0) circle (.15cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,.5) ; } \otimes \bar 1_{\ell,\rho} \right) \ &= \ \tikzdiag{ % \draw (0,-.5) .. controls (0,0) and (1,0) .. (1,.5) -- (1,1); % \draw (.75,-.5) .. controls (.75,0) and (.25,0) .. (.25,.5)node[pos=.15, tikzdot]{} -- (.25,1) node[pos=.5, tikzdot]{} ; \node at(1,-.4) {\tiny $\dots$}; \node at(.55,.35) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,0) and (.75,0) .. (.75,.5)node[pos=.15, tikzdot]{} -- (.75,1) node[pos=.5, tikzdot]{} ; % \draw (1.5,-.5) .. controls (1.5,0) and (1.25,0) .. (1.25,.5)node[pos=.15, tikzdot]{} -- (1.25,1); \node at(1.75,-.4) {\tiny $\dots$}; \node at(1.55,.15) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,0) and (1.75,0) .. (1.75,.5)node[pos=.15, tikzdot]{} -- (1.75,1); % \filldraw [fill=white, draw=black,rounded corners] (.875,.4) rectangle (1.875,.9) node[midway] { $z_{k-t}$}; \draw[decoration={brace,mirror,raise=-8pt},decorate] (.65,-.85) -- node {\small $t$} (1.35,-.85); % \draw[stdhl] (.375,-.5) node[below]{\small $1$} .. controls (.375,-.25) .. (-.5,0) .. controls (-.25,.25) .. (-.25,.5) -- (-.25,1); \draw[fill=white, color=white] (-.65,0) circle (.15cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,1) ; } \otimes \bar 1_{\ell,\rho}, \\ u\left( \tikzdiagh{0}{ \draw (0,-.5) .. controls (0,0) and (.25,0) .. (.25,.5); \draw (.25,-.5) .. controls (.25,0) and (.5,0) .. (.5,.5); \node at(.75,.35) {\tiny $\dots$}; \draw (.75,-.5) .. controls (.75,0) and (1,0) .. (1,.5); % \draw[stdhl] (1.25,-.5) node[below]{\small $1$} .. controls (1.25,-.25) .. (-.5,0) .. controls (-.25,.25) .. 
(-.25,.5) ; \draw[fill=white, color=white] (-.65,0) circle (.15cm); \draw[vstdhl] (-.5,-.5) node[below]{\small $\lambda$} -- (-.5,.5) ; } \otimes \bar 1_{\ell,\rho} \right) \ &= \ \tikzdiagh{0}{ \draw[vstdhl] (-.5,0) node[below]{\small $\lambda$} -- (-.5,1); \draw (0,0) .. controls (0,.5) and (.5,.5) .. (.5,1); \draw (.25,0) .. controls (.25,.5) and (.75,.5) .. (.75,1); \node at (.55,.15){\tiny $\dots$}; \draw (.75,0) .. controls (.75,.5) and (1.25,.5) .. (1.25,1); \draw[stdhl] (1.25,0) node[below]{\small $1$} .. controls (1.25,.5) and (-.25,.5) .. (-.25,1); } \otimes \bar 1_{\ell,\rho}. \end{align*} The case with a nail is similar, concluding the proof. \end{proof} \begin{proof}[Proof of \cref{thm:catdoublebraidphiiso}] Since $\varphi_k^1 - u$ is injective and $u \otimes \gamma_k$ is surjective, it follows from \cref{prop:kerequalsimg1} and \cref{prop:kerequalsimg2} that $\cone(\varphi_k)$ is acyclic for all $k$. Consequently, $\varphi$ is a quasi-isomorphism. \end{proof} \subsubsection{The bimodule map $\tilde \varphi$}\label{sec:proofofbimodulemap} \begin{citethm}{thm:phiisAinfty} The map $\varphi$ is a map of $\mathbb{Z}^2$-graded $(T^{\lambda,r},0)$-$(T^{\lambda,r},0)$-$A_\infty$-bimodules. \end{citethm} The goal of this section is to prove \cref{thm:phiisAinfty}. To this end, we first prove that the map $\tilde \varphi^0 : q^2 (T_b^{\lambda,r})[1] \rightarrow X \otimes_T X$ is a map of bimodules.
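The computations below repeatedly slide dots through crossings using \cref{eq:nhdotslide}. As a reminder, we record the relation in algebraic shorthand, writing $x_i$ for a dot on the $i$th strand and $\psi_i$ for the crossing of the strands $i$ and $i+1$ (this shorthand, which follows the conventions of \cite{KL1}, is only used in this remark): a dot slides through a crossing at the cost of a correction term obtained by removing the crossing,
\begin{align*}
\psi_i x_i &= x_{i+1} \psi_i + 1, & x_i \psi_i &= \psi_i x_{i+1} + 1.
\end{align*}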
\begin{prop}\label{prop:tildevarphi0rec} We have \[ \tilde \varphi^0 \left( \tikzdiag{ \draw[vstdhl] (0,0) node[below]{\small$\lambda$} -- (0,1); \draw (.5,0) -- (.5,1); \node at(1,.5){\tiny $\dots$}; \draw (1.5,0) -- (1.5,1); \draw[stdhl] (2,0) node[below]{\small$1$} -- (2,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$k$} (1.6,-.35); } \otimes \bar 1_{\ell,\rho} \right) = (-1)^k \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small$\lambda$} -- (0,1); \draw (.5,0) -- (.5,1); \node at(1,.5){\tiny $\dots$}; \draw (1.5,0) -- (1.5,1); \draw[stdhl] (2,0) node[below]{\small$1$} -- (2,1); \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.15,.75) node[midway] { $\tilde\varphi(k)$}; \node at(1,.9){\tiny $\dots$}; } \otimes \bar 1_{\ell,\rho} , \] where \begin{align*} \tilde\varphi(0) := \ \tikzdiagh{0}{ % \draw[vstdhl] (0,0) node[below]{\small$\lambda$} -- (0,1); \draw[stdhl] (1,0) node[below]{\small$1$} -- (1,1); % \filldraw [fill=white, draw=black] (-.15,.25) rectangle (1.15,.75) node[midway] { $\tilde\varphi(0)$}; } \ &:= \ 0, \\ \tilde\varphi(1) := \ \tikzdiagh{0}{ \draw (.5,0) -- (.5,1); % \draw[vstdhl] (0,0) node[below]{\small$\lambda$} -- (0,1); \draw[stdhl] (1,0) node[below]{\small$1$} -- (1,1); % \filldraw [fill=white, draw=black] (-.15,.25) rectangle (1.15,.75) node[midway] { $\tilde\varphi(1)$}; } \ &:= \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.375) .. (0,-.125) .. controls (1,.25) .. (1,.5) .. controls (1,1) and (.5,1) .. (.5,1.5); % \draw[stdhl] (1,-.5) node[below]{\small$1$} .. controls (1,0) .. (0,.25) ..controls (.5,.25) and (.5,.75) .. (0,.75) .. controls (1,1) .. (1,1.5); \draw[fill=white, color=white] (-.25,.25) circle (.2cm); \draw[fill=white, color=white] (-.25,.75) circle (.2cm); \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5) node[pos=.175,nail]{}; } \ - \ \tikzdiagh[yscale=-1]{0}{ \draw (.5,-.5) .. controls (.5,-.375) .. (0,-.125) .. controls (1,.25) .. 
(1,.5) .. controls (1,1) and (.5,1) .. (.5,1.5); % \draw[stdhl] (1,-.5) .. controls (1,0) .. (0,.25) ..controls (.5,.25) and (.5,.75) .. (0,.75) .. controls (1,1) .. (1,1.5) node[below]{\small$1$}; \draw[fill=white, color=white] (-.25,.25) circle (.2cm); \draw[fill=white, color=white] (-.25,.75) circle (.2cm); \draw[vstdhl] (0,-.5) -- (0,1.5) node[below]{\small$\lambda$} node[pos=.175,nail]{}; } \\ \tilde\varphi(t+2) := \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5); \draw (2.5,-.5) -- (2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ &:= \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) -- (2,1.5); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2.5,1.25) .. 
(2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,0) -- (3,1) node[midway,tikzdot]{} .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \end{align*} for all $t \geq 0$. \end{prop} \begin{proof} Recall that $\tilde \varphi^0 := (1 \otimes \gamma) \circ \varphi^0$. Then, we obtain \begin{align} \label{eq:gammavarphi0rec} \begin{split} (1 \otimes \gamma) \circ \varphi^0 &\left( \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small$\lambda$} -- (0,1); \draw (.5,0) -- (.5,1); \node at(1,.5){\tiny $\dots$}; \draw (1.5,0) -- (1.5,1); \draw[stdhl] (2,0) node[below]{\small$1$} -- (2,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$k$} (1.6,-.35); } \otimes \bar 1_{\ell,\rho} \right) \\ \ &= (-1)^k \sum_{t = 0}^{k-1} \tikzdiag{ % \draw (.5,-.5) .. controls (.5,0) and (0,0) .. (0,.5) .. controls (0,.75) and (.75,.75) .. (.75,1) -- (.75, 1.5); % \draw (.75,-.5) .. controls (.75,.25) and (0,.75) .. 
(0,1) -- (0,1.5) node[midway, tikzdot]{}; \node at(1,-.5) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,.25) and (.5,.75) .. (.5,1) -- (.5,1.5) node[midway, tikzdot]{}; % \draw (1.5,-.5) .. controls (1.5,.25) and (1,.75) .. (1,1) -- (1,1.5); \node at(1.75,-.5) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,.25) and (1.5,.75) .. (1.5,1) -- (1.5,1.5); % \filldraw [fill=white, draw=black,rounded corners] (.625,.9) rectangle (1.625,1.4) node[midway] { $z_{k-t}$}; % \draw[stdhl] (0,-.5) .. controls (0,-.25) .. (-.5,0) .. controls (2,.25) .. (2,.5) -- (2,1.5) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) -- (-.5,1.5) ; % % % \draw (0,-2) .. controls (0,-1.5) and (.75,-1.5) .. (.75,-.5); \draw (.5,-2) .. controls (.5,-1.5) and (1.25,-1.5) .. (1.25,-.5); \draw (.75,-2) .. controls (.75,-1.75) .. (-.5,-1.5) .. controls (.5,-1) .. (.5,-.5); \draw (1,-2) .. controls (1,-1.5) and (1.5,-1.5) .. (1.5,-.5); \draw (1.5,-2) .. controls (1.5,-1.5) and (2,-1.5) .. (2,-.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-2.35) -- node {$t$} (.6,-2.35); % \draw[stdhl] (2,-2) node[below]{\small $1$} -- (2,-1.5) .. controls (2,-1.25) .. (-.5,-1) .. controls (0,-.75) .. (0,-.5); \draw[fill=white, color=white] (-.6,-1) circle (.1cm); \draw[vstdhl] (-.5,-2) node[below]{\small $\lambda$} -- (-.5,-.5) node[pos=.33,nail]{}; } \ \otimes \bar 1_{\ell,\rho} \ - \ \tikzdiag{ \draw (.5,-.5) .. controls (.5,.25) and (0,.25) .. (0,1.5) node[pos=.9,tikzdot]{}; \node at(.75,-.5) {\tiny $\dots$}; \draw (1,-.5) .. controls (1,.25) and (.5,.25) .. (.5,1.5) node[pos=.9,tikzdot]{}; % \draw (1.25,-.5) .. controls (1.25,.25) .. (-.5,.375) .. controls (.75,.5) .. (.75,1.5); % \draw (1.5,-.5) .. controls (1.5,.5) and (1,.5) .. (1,1.5); \node at(1.75,-.5) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,.5) and (1.5,.5) .. 
(1.5,1.5); % \filldraw [fill=white, draw=black,rounded corners] (.625,.9) rectangle (1.625,1.4) node[midway] { $z_{k-t}$}; % \draw[stdhl] (0,-.5) .. controls (0,-.25) .. (-.5,0) .. controls (2,.0) .. (2,1) -- (2,1.5) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) -- (-.5,1.5) node[pos=.45,nail]{}; % % % \draw (0,-2) .. controls (0,-1.5) and (.5,-1.5) .. (.5,-.5); \draw (.5,-2) .. controls (.5,-1.5) and (1,-1.5) .. (1,-.5); \draw (.75,-2) .. controls (.75,-1.5) and (1.25,-1.5) .. (1.25,-.5); \draw (1,-2) .. controls (1,-1.5) and (1.5,-1.5) .. (1.5,-.5); \draw (1.5,-2) .. controls (1.5,-1.5) and (2,-1.5) .. (2,-.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-2.35) -- node {$t$} (.6,-2.35); % \draw[stdhl] (2,-2) node[below]{\small $1$} -- (2,-1.5) .. controls (2,-1.25) .. (-.5,-1) .. controls (0,-.75) .. (0,-.5); \draw[fill=white, color=white] (-.6,-1) circle (.1cm); \draw[vstdhl] (-.5,-2) node[below]{\small $\lambda$} -- (-.5,-.5); } \ \otimes \bar 1_{\ell,\rho}. \end{split} \end{align} We prove the statement by induction on $k$. The claim is clearly true for $k=0$ and $k=1$. Suppose it is true for $k+1$, and we will show it is true for $k+2$. By definition of $\tilde\varphi(k+2)$ and using \cref{eq:nhdotslide}, we have \begin{equation}\label{eq:altvarphik} \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5); \draw (2.5,-.5) -- (2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(k+2)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. 
(2.5,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(k+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5) node[pos=.8,tikzdot]{}; \node at(1,.9){\tiny $\dots$}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(k+1)$}; } \end{equation} Applying the induction hypothesis on \cref{eq:altvarphik}, we get \begin{align*} &\tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5); \draw (2.5,-.5) -- (2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(k+2)$}; \node at(1,.9){\tiny $\dots$}; } \\ \ &= \sum_{t=0}^{k-1} \ \left( \tikzdiag{ \draw (2, -2.5) .. controls (2,-2.25) and (2.5,-2.25) .. (2.5,-2) -- (2.5,1.5) .. 
controls (2.5,1.75) and (1.75,1.75) .. (1.75,2); % \draw (.5,-.5) .. controls (.5,0) and (0,0) .. (0,.5) .. controls (0,.75) and (.75,.75) .. (.75,1) -- (.75, 1.5) -- (.75,2); % \draw (.75,-.5) .. controls (.75,.25) and (0,.75) .. (0,1) -- (0,1.5) node[midway, tikzdot]{} -- (0,2); \node at(1,-.5) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,.25) and (.5,.75) .. (.5,1) -- (.5,1.5) node[midway, tikzdot]{} -- (.5,2); % \draw (1.5,-.5) .. controls (1.5,.25) and (1,.75) .. (1,1) -- (1,1.5) -- (1,2); \node at(1.75,-.5) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,.25) and (1.5,.75) .. (1.5,1) -- (1.5,1.5) -- (1.5,2); \draw (2.25,-.5) .. controls (2.25,.25) and (1.75,.75) .. (1.75,1) -- (1.75,1.5) .. controls (1.75,1.75) and (2,1.75) .. (2,2); % \filldraw [fill=white, draw=black,rounded corners] (.625,.9) rectangle (1.875,1.4) node[midway] { $z_{k+1-t}$}; % \draw[stdhl] (0,-.5) .. controls (0,-.25) .. (-.5,0) .. controls (2.25,.25) .. (2.25,.5) -- (2.25,1.5) .. controls (2.25,1.75) and (2.5,1.75) .. (2.5,2) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) -- (-.5,2) ; % % \draw (0,-2.5) -- (0,-2) .. controls (0,-1.5) and (.75,-1.5) .. (.75,-.5); \draw (.5,-2.5) -- (.5,-2) .. controls (.5,-1.5) and (1.25,-1.5) .. (1.25,-.5); \draw (.75,-2.5) -- (.75,-2) .. controls (.75,-1.75) .. (-.5,-1.5) .. controls (.5,-1) .. (.5,-.5); \draw (1,-2.5) -- (1,-2) .. controls (1,-1.5) and (1.5,-1.5) .. (1.5,-.5); \draw (1.5,-2.5) -- (1.5,-2) .. controls (1.5,-1.5) and (2,-1.5) .. (2,-.5); \draw (1.75,-2.5) -- (1.75,-2) .. controls (1.75,-1.5) and (2.25,-1.5) .. (2.25,-.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-2.85) -- node {$t$} (.6,-2.85); % \draw[stdhl] (2.5,-2.5 )node[below]{\small $1$} .. controls (2.5,-2.25) and (2.25,-2.25) .. (2.25,-2) -- (2.25,-1.5) .. controls (2.25,-1.25) .. (-.5,-1) .. controls (0,-.75) .. 
(0,-.5); \draw[fill=white, color=white] (-.6,-1) circle (.1cm); \draw[vstdhl] (-.5,-2.5) node[below]{\small $\lambda$}-- (-.5,-2) -- (-.5,-.5) node[pos=.33,nail]{}; } \ + \ \tikzdiag{ \draw (1.75, -2.5) .. controls (1.75,-2.25) and (2.5,-2.25) .. (2.5,-2) -- (2.5,1.5) .. controls (2.5,1.75) and (1.75,1.75) .. (1.75,2) node[pos=.8, tikzdot]{}; % \draw (.5,-.5) .. controls (.5,0) and (0,0) .. (0,.5) .. controls (0,.75) and (.75,.75) .. (.75,1) -- (.75, 1.5) -- (.75,2); % \draw (.75,-.5) .. controls (.75,.25) and (0,.75) .. (0,1) -- (0,1.5) node[midway, tikzdot]{} -- (0,2); \node at(1,-.5) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,.25) and (.5,.75) .. (.5,1) -- (.5,1.5) node[midway, tikzdot]{} -- (.5,2); % \draw (1.5,-.5) .. controls (1.5,.25) and (1,.75) .. (1,1) -- (1,1.5) -- (1,2); \node at(1.75,-.5) {\tiny $\dots$}; \draw (2,-.5) .. controls (2,.25) and (1.5,.75) .. (1.5,1) -- (1.5,1.5) -- (1.5,2); \draw (2.25,-.5) .. controls (2.25,.25) and (1.75,.75) .. (1.75,1) -- (1.75,1.5) .. controls (1.75,1.75) and (2,1.75) .. (2,2); % \filldraw [fill=white, draw=black,rounded corners] (.625,.9) rectangle (1.875,1.4) node[midway] { $z_{k+1-t}$}; % \draw[stdhl] (0,-.5) .. controls (0,-.25) .. (-.5,0) .. controls (2.25,.25) .. (2.25,.5) -- (2.25,1.5) .. controls (2.25,1.75) and (2.5,1.75) .. (2.5,2) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) -- (-.5,2) ; % % \draw (0,-2.5) -- (0,-2) .. controls (0,-1.5) and (.75,-1.5) .. (.75,-.5); \draw (.5,-2.5) -- (.5,-2) .. controls (.5,-1.5) and (1.25,-1.5) .. (1.25,-.5); \draw (.75,-2.5) -- (.75,-2) .. controls (.75,-1.75) .. (-.5,-1.5) .. controls (.5,-1) .. (.5,-.5); \draw (1,-2.5) -- (1,-2) .. controls (1,-1.5) and (1.5,-1.5) .. (1.5,-.5); \draw (1.5,-2.5) -- (1.5,-2) .. controls (1.5,-1.5) and (2,-1.5) .. (2,-.5); \draw (2,-2.5) .. controls (2,-2.25) and (1.75,-2.25) .. (1.75,-2) .. controls (1.75,-1.5) and (2.25,-1.5) .. 
(2.25,-.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-2.85) -- node {$t$} (.6,-2.85); % \draw[stdhl] (2.5,-2.5 )node[below]{\small $1$} .. controls (2.5,-2.25) and (2.25,-2.25) .. (2.25,-2) -- (2.25,-1.5) .. controls (2.25,-1.25) .. (-.5,-1) .. controls (0,-.75) .. (0,-.5); \draw[fill=white, color=white] (-.6,-1) circle (.1cm); \draw[vstdhl] (-.5,-2.5) node[below]{\small $\lambda$}-- (-.5,-2) -- (-.5,-.5) node[pos=.33,nail]{}; } \right) \ + \ \tikzdiag{ \draw (1, -2.5) .. controls (1,-2.25) and (1.5,-2.25) .. (1.5,-2) -- (1.5,1.5) .. controls (1.5,1.75) and (.75,1.75) .. (.75,2); % \draw (.5,-.5) .. controls (.5,0) and (0,0) .. (0,.5) .. controls (0,.75) and (.75,.75) .. (.75,1) -- (.75, 1.5) .. controls (.75,1.75) and (1,1.75) .. (1,2); % \draw (.75,-.5) .. controls (.75,.25) and (0,.75) .. (0,1) -- (0,1.5) node[midway, tikzdot]{} -- (0,2); \node at(1,-.5) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,.25) and (.5,.75) .. (.5,1) -- (.5,1.5) node[midway, tikzdot]{} -- (.5,2); % % % \draw[stdhl] (0,-.5) .. controls (0,-.25) .. (-.5,0) .. controls (1.25,.25) .. (1.25,.5) -- (1.25,1.5) .. controls (1.25,1.75) and (1.5,1.75) .. (1.5,2) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) -- (-.5,2) ; % % \draw (0,-2.5) -- (0,-2) .. controls (0,-1.5) and (.75,-1.5) .. (.75,-.5); \draw (.5,-2.5) -- (.5,-2) .. controls (.5,-1.5) and (1.25,-1.5) .. (1.25,-.5); \draw (.75,-2.5) -- (.75,-2) .. controls (.75,-1.75) .. (-.5,-1.5) .. controls (.5,-1) .. (.5,-.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-2.85) -- node {$k$} (.6,-2.85); % \draw[stdhl] (1.5,-2.5 )node[below]{\small $1$} .. controls (1.5,-2.25) and (1.25,-2.25) .. (1.25,-2) -- (1.25,-1.5) .. controls (1.25,-1.25) .. (-.5,-1) .. controls (0,-.75) .. 
(0,-.5); \draw[fill=white, color=white] (-.6,-1) circle (.1cm); \draw[vstdhl] (-.5,-2.5) node[below]{\small $\lambda$}-- (-.5,-2) -- (-.5,-.5) node[pos=.33,nail]{}; } \ + \ \tikzdiag{ \draw (.75, -2.5) .. controls (.75,-2.25) and (1.5,-2.25) .. (1.5,-2) -- (1.5,1.5) .. controls (1.5,1.75) and (.75,1.75) .. (.75,2) node[pos=.8, tikzdot]{}; % \draw (.5,-.5) .. controls (.5,0) and (0,0) .. (0,.5) .. controls (0,.75) and (.75,.75) .. (.75,1) -- (.75, 1.5) .. controls (.75,1.75) and (1,1.75) .. (1,2); % \draw (.75,-.5) .. controls (.75,.25) and (0,.75) .. (0,1) -- (0,1.5) node[midway, tikzdot]{} -- (0,2); \node at(1,-.5) {\tiny $\dots$}; \draw (1.25,-.5) .. controls (1.25,.25) and (.5,.75) .. (.5,1) -- (.5,1.5) node[midway, tikzdot]{} -- (.5,2); % % % \draw[stdhl] (0,-.5) .. controls (0,-.25) .. (-.5,0) .. controls (1.25,.25) .. (1.25,.5) -- (1.25,1.5) .. controls (1.25,1.75) and (1.5,1.75) .. (1.5,2) ; \draw[fill=white, color=white] (-.6,0) circle (.1cm); \draw[vstdhl] (-.5,-.5) -- (-.5,2) ; % % \draw (0,-2.5) -- (0,-2) .. controls (0,-1.5) and (.75,-1.5) .. (.75,-.5); \draw (.5,-2.5) -- (.5,-2) .. controls (.5,-1.5) and (1.25,-1.5) .. (1.25,-.5); \draw (1,-2.5) .. controls (1,-2.25) and (.75,-2.25) .. (.75,-2) .. controls (.75,-1.75) .. (-.5,-1.5) .. controls (.5,-1) .. (.5,-.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-2.85) -- node {$k$} (.6,-2.85); % \draw[stdhl] (1.5,-2.5 )node[below]{\small $1$} .. controls (1.5,-2.25) and (1.25,-2.25) .. (1.25,-2) -- (1.25,-1.5) .. controls (1.25,-1.25) .. (-.5,-1) .. controls (0,-.75) .. (0,-.5); \draw[fill=white, color=white] (-.6,-1) circle (.1cm); \draw[vstdhl] (-.5,-2.5) node[below]{\small $\lambda$}-- (-.5,-2) -- (-.5,-.5) node[pos=.33,nail]{}; } \\ & \phantom{\ = \ }\ - \ \text{(similar terms with the nail above).} \end{align*} Applying \cref{eq:defzn} to each pair of terms in the sum (including the non-displayed terms) gives the part for $0 \leq t \leq k-1$ in \cref{eq:gammavarphi0rec} for $k+2$.
The last two terms (together with the corresponding non-displayed ones) give the terms $t=k$ and $t=k+1$, since $z_2$ is a single crossing; this concludes the proof. \end{proof} With \cref{prop:tildevarphi0rec} in hand, proving \cref{thm:phiisAinfty} boils down to showing that the left and right actions by the same element of $T_b^{\lambda,r}$ on \[ \sum_{k+\ell+|\rho|=b} (-1)^k \tilde\varphi(k) \otimes \bar 1_{\ell,\rho} \] coincide. \begin{lem}\label{lem:varphitdotright} We have \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5) node[near end, tikzdot]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (2.5,-.5) node[below]{\small$1$} -- (2.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ = - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (2.5,-.5) node[below]{\small$1$} .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) ..
(2.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.15,.75) node[midway] { $\tilde\varphi(t)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5) node[near start, tikzdot]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (2.5,-.5) node[below]{\small$1$} -- (2.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \] for all $t \geq 0$. \end{lem} \begin{proof} We show the first equality; the second one follows since the definition of $\tilde\varphi(t+1)$ is symmetric under reflection along the horizontal axis. We prove the statement by induction on $t$. The case $t=0$ follows from \cref{eq:dottednailslide}. Suppose the claim is true for some $t \geq 0$. We compute using \cref{eq:altvarphik} \begin{align*} \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5); \draw (2.5,-.5) -- (2.5,1.5) node[near end, tikzdot]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ &= \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5) node[pos=.8,tikzdot]{}; \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} ..
controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5) node[pos=.8,tikzdot]{}; \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5) node[pos=.8,tikzdot]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \\ \ &= - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,0) -- (2,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2.5,1.25) .. (2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,0) .. controls (2.5,.5) and (3,.5) .. (3,1) .. controls (3,1.25) and (2.5,1.25) .. 
(2.5,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,0) .. controls (3,.5) and (2.5,.5) .. (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2,-.25) ..(2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.15,.75) node[midway] { $\tilde\varphi(t)$}; \node at(1,.9){\tiny $\dots$}; } \ - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,0) .. controls (2.5,.25) and (3,.25) .. (3,.5) .. controls (3,.75) and (2.5,.75) .. (2.5,1) node[tikzdot, pos=0]{} .. controls (2.5,1.25) and (2,1.25) .. (2,1.5) ; \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,0) .. controls (3,.25) and (2.5,.25) .. (2.5,.5) .. controls (2.5,.75) and (3,.75) .. (3,1) .. controls (3,1.25) and (2.5,1.25) .. (2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2,-.25) ..(2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.15,.75) node[midway] { $\tilde\varphi(t)$}; \node at(1,.9){\tiny $\dots$}; } \end{align*} where the last two terms cancel each other, concluding the proof.
\end{proof} \begin{lem}\label{lem:varphitdotsecondright} We have \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5) node[near end, tikzdot]{}; \draw (2.5,-.5) -- (2.5,1.5) ; % \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small $1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) node[tikzdot, pos=.2]{} -- (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5) node[tikzdot, pos=.8]{}; % \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small $1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,.25) .. controls (2.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,.25) .. controls (3,.5) and (2.5,.5) .. (2.5,.75) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small $1$} .. controls (3,-.25) and (2,-.25) ..(2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. 
(3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.15,.75) node[midway] { $\tilde\varphi(t)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5) node[near start, tikzdot]{}; \draw (2.5,-.5) -- (2.5,1.5) ; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \] for all $t \geq 0$. \end{lem} \begin{proof} By \cref{eq:altvarphik}, we have \begin{align*} \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5) node[near start, tikzdot]{}; \draw (2.5,-.5) -- (2.5,1.5) ; % \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small $1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ &= \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,0) node[pos=1, tikzdot]{} -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3,-.25) .. (3,0) -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small $1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. 
(3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,0) node[pos=.2,tikzdot]{} -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5) node[pos=.8,tikzdot]{}; % \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small $1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \end{align*} We conclude by applying \cref{lem:varphitdotright}. \end{proof} \begin{lem}\label{lem:varphitcrossingright} We have \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5) ; \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) -- (3,.75) ..
controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) --(2,1.5) ; \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,0) -- (2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \] for all $t \geq 0$. \end{lem} \begin{proof} This is immediate by applying \cref{eq:nhR2andR3} on the definition of $\tilde\varphi(t+2)$. \end{proof} \begin{lem}\label{lem:varphitcrossingsecondright} We have \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5) ; \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); % \draw (3,-.5) -- (3,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} -- (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.65,.75) node[midway] { $\tilde\varphi(t+3)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. 
(2,0) --(2,1.5) ; \draw (2,-.5) .. controls (2,-.25) and (2.5,-.25) .. (2.5,0) -- (2.5,1.5); % \draw (3,-.5) -- (3,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} -- (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.65,.75) node[midway] { $\tilde\varphi(t+3)$}; \node at(1,.9){\tiny $\dots$}; } \] for all $t \geq 0$. \end{lem} \begin{proof} By \cref{eq:altvarphik} we have \begin{align*} \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5) ; \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); % \draw (3,-.5) -- (3,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} -- (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.65,.75) node[midway] { $\tilde\varphi(t+3)$}; \node at(1,.9){\tiny $\dots$}; } \ &= \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) -- (3.5,.75) .. controls (3.5,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (3,-.25) ..(3,.25) -- (3,.75) .. controls (3,1.25) and (3.5,1.25) .. 
(3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (3,-.5) .. controls (3,-.25) and (2.5,-.25) .. (2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) -- (3.5,.75) .. controls (3.5,1.25) and (2,1.25) .. (2,1.5) node[pos=.6,tikzdot]{} ; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (3,-.25) ..(3,.25) -- (3,.75) .. controls (3,1.25) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \end{align*} Then, we compute \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) -- (3.5,.75) .. controls (3.5,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (3,-.25) ..(3,.25) -- (3,.75) .. controls (3,1.25) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. 
controls (2,1.25) and (3,1.25) .. (3,1.5); \draw (2.5,-.5) .. controls (2.5,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (3,1.25) .. (3,1.5); \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.5); \draw (2,-.5) .. 
controls (2,-.25) and (3,-.25) ..(3,.25) node[tikzdot, pos=1]{} .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \] and \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (3,-.5) .. controls (3,-.25) and (2.5,-.25) .. (2.5,0) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) -- (3.5,.75) .. controls (3.5,1.25) and (2,1.25) .. (2,1.5) node[pos=.6,tikzdot]{} ; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (3,-.25) ..(3,.25) -- (3,.75) .. controls (3,1.25) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.75); \draw (2.5,-.5) .. controls (2.5,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.5) and (2,1.5) ..(2,1.75) node[tikzdot,pos=.6]{}; \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. 
(3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5) .. controls (2,1.625) and (2.5,1.625) .. (2.5,1.75); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5) -- (3.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (3,-.5) .. controls (3,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (3,1.25) .. (3,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) node[tikzdot,pos=1]{} .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (3,-.5) .. controls (3,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.75); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) -- (3,.75) node[midway, tikzdot]{} .. controls (3,1.25) and (2,1.25) .. (2,1.5) .. controls (2,1.625) and (2.5,1.625) .. (2.5,1.75); \draw (2.5, -.5).. 
controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) -- (3.5,.75) .. controls (3.5,1.5) and (2,1.5) ..(2,1.75) node[tikzdot,pos=.6]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5) -- (3.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \] Furthermore, we compute mainly using \cref{eq:nhR2andR3} and \cref{eq:nhdotslide}, \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.75); \draw (2.5,-.5) .. controls (2.5,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.5) and (2,1.5) ..(2,1.75) node[tikzdot,pos=.6]{}; \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5) .. controls (2,1.625) and (2.5,1.625) .. (2.5,1.75); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5) -- (3.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ = - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.5); \draw (2.5,-.5) .. controls (2.5,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (3, -.5).. 
controls (3,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \] and \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (3,-.5) .. controls (3,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.75); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) -- (3,.75) node[midway, tikzdot]{} .. controls (3,1.25) and (2,1.25) .. (2,1.5) .. controls (2,1.625) and (2.5,1.625) .. (2.5,1.75); \draw (2.5, -.5).. controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) -- (3.5,.75) .. controls (3.5,1.5) and (2,1.5) ..(2,1.75) node[tikzdot,pos=.6]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5) -- (3.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ = \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (3,-.5) .. controls (3,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) node[tikzdot,pos=1]{} .. controls (3.5,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) .. 
controls (3.5,.5) and (3,.5) .. (3,.75) node[tikzdot,pos=1]{} .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \] In conclusion, we get \begin{align*} &\tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5) ; \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) .. (2,1.5); % \draw (3,-.5) -- (3,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} -- (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.65,.75) node[midway] { $\tilde\varphi(t+3)$}; \node at(1,.9){\tiny $\dots$}; } \\ \ &= \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (3,1.25) .. (3,1.5); \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. 
(3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) node[tikzdot, pos=1]{} .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (3, -.5).. controls (3,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (3,-.5) .. controls (3,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) .. controls (3.5,1.25) and (3,1.25) .. (3,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) node[tikzdot,pos=1]{} .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. 
(3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (3,-.5) .. controls (3,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) ..(3,.25) .. controls (3,.5) and (3.5,.5) .. (3.5,.75) node[tikzdot,pos=1]{} .. controls (3.5,1.25) and (2.5,1.25) .. (2.5,1.5); \draw (2.5, -.5).. controls (2.5,-.25) and(3.5,-.25) .. (3.5,.25) .. controls (3.5,.5) and (3,.5) .. (3,.75) node[tikzdot,pos=1]{} .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3.5,-.5) node[below]{\small$1$} .. controls (3.5,-.25) and (2.5,0) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3.5,1.25) .. (3.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \end{align*} which is symmetric with respect to taking the mirror image along the horizontal axis. Therefore, we obtain the same result with a crossing at the bottom of $\tilde \varphi(t+3)$, finishing the proof. \end{proof} \begin{lem}\label{lem:varphitredblackcrossing} We have \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (2.5,-.5) node[below]{\small$1$} -- (2.5,1) .. controls (2.5,1.25) and (2,1.25) ..
(2,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ = - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) ..controls (2,-.25) and (2.5,-.25) .. (2.5,0) --(2.5,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (2.5,-.5) node[below]{\small$1$} .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.15,.75) node[midway] { $\tilde\varphi(t)$}; \node at(1,.9){\tiny $\dots$}; } \] for all $t \geq 0$. \end{lem} \begin{proof} We prove the statement by induction on $t$. The case $t=0$ follows from \cref{eq:nailslidedcross}. We suppose the claim is true for $t \geq 0$. We compute using the mirror of \cref{eq:altvarphik}, \begin{align*} \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5); \draw (2.5,-.5) -- (2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} -- (3,1) .. controls (3,1.25) and (2.5,1.25) .. (2.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (3.15,.75) node[midway] { $\tilde\varphi(t+2)$}; \node at(1,.9){\tiny $\dots$}; } \ &= \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) -- (2,1.75); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) -- (3,.75) .. controls (3,1) and (2.5,1) .. (2.5,1.25) .. 
controls (2.5,1.5) and (3,1.5) ..(3,1.75); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3,1) .. (3,1.25) .. controls (3,1.5) and (2.5,1.5) .. (2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.75); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) node[tikzdot, pos=.2]{} -- (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.75); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (3,1) .. (3,1.25) .. controls (3,1.5) and (2.5,1.5) .. (2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \\ \ &= \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) -- (2,1.75); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) -- (3,.75) --(3,1.75) node[tikzdot,midway]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. 
controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) --(2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ + \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.75); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) node[tikzdot, pos=.2]{} -- (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.75); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (2,1) .. (2,1.25) .. controls (2,1.5) and (2.5,1.5) .. (2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1.75); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) node[tikzdot, pos=.2]{} -- (3,1.75) ; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) -- (2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \end{align*} Then, we have \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) .. controls (2,1.25) and (3,1.25) .. (3,1.75); \draw (2, -.5).. 
controls (2,-.25) and(3,-.25) .. (3,.25) node[tikzdot, pos=.2]{} -- (3,.75) .. controls (3,1.25) and (2,1.25) .. (2,1.75); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) .. controls (2.5,1) and (2,1) .. (2,1.25) .. controls (2,1.5) and (2.5,1.5) .. (2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ = - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2.5,-.5).. controls (2.5,-.375) and (2,-.375) .. (2,-.25) ..controls (2,0) and (2.5,0) .. (2.5,.25) --(2.5,1) .. controls (2.5,1.25) and (3,1.25) .. (3,1.5); \draw (2,-.5) .. controls (2,-.25) and (3,-.25) .. (3,.25) -- (3,1) .. controls (3,1.25) and (2,1.25) .. (2,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2,-.25) .. (2,.25) -- (2,1) .. controls (2,1.25) and (2.5,1.25) .. (2.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.15,.75) node[midway] { $\tilde\varphi(t)$}; \node at(1,.9){\tiny $\dots$}; } \ = 0, \] by induction hypothesis. Finally, we obtain \[ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1) -- (2,1.75); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) -- (3,.75) --(3,1.75) node[tikzdot,midway]{}; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. 
controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) --(2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.75); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.75); % \draw (2.5,-.5) .. controls (2.5,-.25) and (2,-.25) .. (2,0) -- (2,1.75); \draw (2, -.5).. controls (2,-.25) and(3,-.25) .. (3,.25) node[tikzdot, pos=.2]{} -- (3,1.75) ; % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.75); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) ..(2.5,.25) -- (2.5,.75) -- (2.5,1.75); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \ = - \ \tikzdiagh[xscale=.75]{0}{ \draw (.5,-.5) -- (.5,1.5); \node at(1,.5){\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,1.5); % \draw (2,-.5) -- (2,1.5); \draw (2.5,-.5) .. controls (2.5,-.25) and (3,-.25) .. (3,0) -- (3,1.5); % \draw[vstdhl] (0,-.5) node[below]{\small$\lambda$} -- (0,1.5); \draw[stdhl] (3,-.5) node[below]{\small$1$} .. controls (3,-.25) and (2.5,-.25) .. (2.5,0) --(2.5,1.5); % \node at(1,.1){\tiny $\dots$}; \filldraw [fill=white, draw=black] (-.15,.25) rectangle (2.65,.75) node[midway] { $\tilde\varphi(t+1)$}; \node at(1,.9){\tiny $\dots$}; } \] finishing the proof. \end{proof} \begin{prop} The map $\tilde\varphi^0$ is a map of dg-bimodules. \end{prop} \begin{proof} As already mentioned above, it is enough to show that the left and right action by the same element of $T_b^{\lambda,r}$ on \[ \sum_{k+\ell+|\rho|=b} (-1)^k \tilde\varphi(k) \otimes \bar 1_{\ell,\rho} \] coincide. We obtain commutation with dots and crossings by induction on $k$, using Lemmas~\ref{lem:varphitdotright}--\ref{lem:varphitredblackcrossing}. 
The commutation with a nail also comes from a straightforward induction on $k$, where the base case is immediate by \cref{eq:relNail}. \end{proof} \section{Homological toolbox}\label{sec:dgcat} The goal of this section is to recall and briefly explain the tools from homological algebra that we use in this paper. The main references for this section are \cite{keller}, \cite{toen} and \cite{asympK0} (see also \cite{toenlectures}, \cite{kellersurvey} and \cite[Appendix A]{naissevaz3}). \subsection{Derived category} Let $(A, d_A)$ be a $\mathbb{Z}^n$-graded dg-algebra (with the same conventions as in \cref{sec:conventions}). The \emph{derived category $\mathcal{D}(A,d_A)$} of $(A,d_A)$ is the localization of the category $(A,d_A)\amod$ of $\mathbb{Z}^n$-graded (left) $(A,d_A)$-dg-modules along quasi-isomorphisms. It is a triangulated category with translation functor induced by the homological shift functor $[1]$, and the distinguished triangles are those isomorphic to triangles of the form \[ (M,d_M) \xrightarrow{f} (N,d_N) \xrightarrow{\imath_N} \cone(f) \xrightarrow{\pi_{M[1]}} (M,d_M)[1], \] for maps of dg-modules $f : (M,d_M) \rightarrow (N,d_N)$. \subsubsection{(Co)fibrant replacements} A \emph{cofibrant} dg-module $(P,d_P)$ is a dg-module such that $P$ is projective as a graded $A$-module. Equivalently, it is a dg-module $(P,d_P)$ such that for every surjective quasi-isomorphism $(L,d_L) \xrightarrowdbl{\simeq} (M,d_M)$, every morphism $(P,d_P) \rightarrow (M,d_M)$ factors through $(L,d_L)$. For any dg-module $(N, d_N)$, we have \begin{align*} \Hom_{\mathcal{D}(A,d_A)}\bigl((P,d_P), (N,d_N)\bigr) \cong H^0_0 \left(\HOM_{(A,d_A)}\bigl((P,d_P), (N,d_N) \bigr) \right). \end{align*} Moreover, tensoring with a cofibrant dg-module preserves quasi-isomorphisms. Given a left (resp. right) dg-module $(M,d_M)$, there exists a cofibrant replacement $(\br M , d_{\br M })$ (resp. $( M\rb, d_{M\rb})$) together with a surjective quasi-isomorphism $\pi_M : \br M \xrightarrowdbl{\simeq} M$ (resp.
$\pi'_M : M \rb \xrightarrowdbl{\simeq} M$). Moreover, the assignment $M \mapsto \br M$ (resp. $M \mapsto M \rb$) is natural. Thus, we can compute $\Hom_{\mathcal{D}(A,d_A)}\bigl((M,d_M), (N,d_N)\bigr)$ by taking \[ H^0_0\left(\HOM_{(A,d_A)}\bigl((\br M,d_{\br M}), (N,d_N) \bigr) \right) \cong \Hom_{\mathcal{D}(A,d_A)}\bigl((M,d_M), (N,d_N)\bigr). \] \smallskip A dg-module $(I,d_I)$ is \emph{fibrant} if for every injective quasi-isomorphism $(L,d_L) \xhookrightarrow{\simeq} (M, d_M)$, every morphism $(L,d_L) \rightarrow (I,d_I)$ extends to a morphism $(M,d_M) \rightarrow (I,d_I)$. Then, we have \begin{align*} \Hom_{\mathcal{D}(A,d_A)}\bigl((M,d_M), (I,d_I)\bigr) \cong H^0_0\left(\HOM_{(A,d_A)}\bigl((M,d_M), (I,d_I) \bigr) \right). \end{align*} Again, for every dg-module $(M,d_M)$ there exists a fibrant replacement $(\fr M, d_{\fr M})$ with an injective quasi-isomorphism $ \imath_M : (M,d_M) \xhookrightarrow{\simeq} (\fr M, d_{\fr M})$. \subsubsection{Strongly projective modules}\label{sec:stronglyproj} Let $R$ be a unital commutative ring. The following was introduced in \cite{moore}, but we use the definition given in \cite{sixdgmodels}. \begin{defn}[{\cite[Definition 8.17]{sixdgmodels}}] \label{def:stronglyproj} A dg-module $(P,d_P)$ over a dg-$R$-algebra $(A,d_A)$ is \emph{strongly projective} if it is a direct summand of some dg-module $(A, d_A) \otimes_R (Q, d_Q)$ where $(Q,d_Q)$ is an $(R,0)$-dg-module such that both $H(Q,d_Q)$ and $\Image (d_Q)$ are projective $R$-modules. \end{defn} \begin{prop}[{\cite[Lemma 8.23]{sixdgmodels}}]\label{prop:stronglyproj} Let $(P,d_P)$ be a strongly projective left dg-module. For any right dg-module $(M,d_M)$, we have an isomorphism \[ H\left( (M,d_M) \otimes_{(A,d_A)} (P,d_P) \right) \cong H(M,d_M) \otimes_{H(A,d_A)} H(P,d_P). \] \end{prop} \subsubsection{$A_\infty$-action}\label{sec:Ainftyaction} Let $(B,d_B)$ be a dg-bimodule over a pair of dg-algebras $(S,d_S)$-$(R,d_R)$.
As explained in \cite[\S 2.3]{MW}, there is (in general) no right $(R,d_R)$-action on $\br B$ compatible with the left $(S,d_S)$-action. However, there is an induced $A_\infty$-action (defined uniquely up to homotopy), so that the quasi-isomorphism $\pi_B : \br B \xrightarrowdbl{\simeq} B$ can be upgraded to a map of $A_\infty$-bimodules. \begin{lem} \label{lem:indAinftymap} Let $(A,d_A)$ be a dg-algebra, and let $U$ and $V$ be dg-(bi)modules over $(A,d_A)$, with a fixed cofibrant replacement $\pi_V: \br V \rightarrow V$. Suppose $(1 \otimes \pi_V) : U \otimes_{(A,d_A)} \br V \xrightarrow{\simeq} U \otimes_{(A,d_A)} V$ is a quasi-isomorphism. If $f : Z \rightarrow U \otimes_{(A,d_A)} \br V$ is a map of complexes of graded $\Bbbk$-spaces, and $(1 \otimes \pi_V) \circ f$ is a map of dg-(bi)modules, then there is an induced map $\overline f : Z \rightarrow U \otimes^{\Lderiv} V$ of $A_\infty$-(bi)modules whose degree zero part is $f$. \end{lem} \begin{proof} We take $\overline f := (1 \otimes \pi_V)^{-1} \circ \bigl((1 \otimes \pi_V) \circ f\bigr)$, as a composition of maps of $A_\infty$-(bi)modules, since any map of (bi)modules can be considered as a map of $A_\infty$-(bi)modules with no higher composition. \end{proof} Note that the analogous statement also holds for a cofibrant replacement $U \rb \rightarrow U$ such that $(\pi_U \otimes 1) : U \rb \otimes_{(A,d_A)} V \xrightarrow{\simeq} U \otimes_{(A,d_A)} V$ is a quasi-isomorphism. \subsection{Dg-derived categories}\label{sec:dgdercat} One of the issues with triangulated categories is that the category of functors between triangulated categories is in general not triangulated. To fix this, we work with a dg-enhancement of the derived category. In particular, this allows us to talk about distinguished triangles of dg-functors. Recall that a dg-category is a category where the hom-spaces are dg-modules over $(\Bbbk,0)$, and compositions are compatible with this structure (see \cite[\S1.2]{keller} for a precise definition).
Given such a dg-category $\mathcal{C}$ with hom-spaces $\Hom_{\mathcal{C}}(X,Y) = (\bigoplus_{h \in \mathbb{Z}} \Hom^{h}(X,Y), d_{X,Y})$, we can consider its \emph{underlying category} $Z^0(\mathcal{C})$, which is given by the same objects as $\mathcal{C}$ and hom-spaces \[ \Hom_{Z^0(\mathcal{C})}(X,Y) := \ker \left( d_{X,Y} : \Hom^0(X,Y) \rightarrow \Hom^{-1}(X,Y) \right). \] Similarly, the \emph{homotopy category $H^0(\mathcal{C})$} is given by \[ \Hom_{H^0(\mathcal{C})}(X,Y) := H^0(\Hom_\mathcal{C}(X,Y)). \] A \emph{dg-enhancement} of a category $\mathcal{C}_0$ is a dg-category $\mathcal{C}$ such that $H^0(\mathcal{C}) \cong \mathcal{C}_0$. \smallskip The \emph{dg-derived category $\mathcal{D}_{dg}(A,d_A)$} of a $\mathbb{Z}^n$-graded dg-algebra $(A,d_A)$ is the $\mathbb{Z}^n$-graded dg-category with objects being cofibrant dg-modules over $(A,d_A)$, and hom-spaces being subspaces of the graded dg-spaces $\HOM_{(A,d_A)}$ from \cref{eq:dghom}, given by maps that preserve the $\mathbb{Z}^n$-grading: \[ \Hom_{\mathcal{D}_{dg}(A,d_A)}(M,N) := \HOM_{(A,d_A)}(M,N)_0, \] for $(M,d_M)$ and $(N,d_N)$ cofibrant dg-modules. By construction, we have $H^0(\mathcal{D}_{dg}(A,d_A)) \cong \mathcal{D}(A,d_A)$. Moreover, $\mathcal{D}_{dg}(A,d_A)$ is a dg-triangulated category, meaning its homotopy category is canonically triangulated (see \cite{toen} for a precise definition, or \cite[Appendix A]{naissevaz3} for a summary oriented toward categorification), and this triangulated structure matches with the usual one on $\mathcal{D}(A,d_A)$. \subsubsection{Dg-functors}\label{sec:dgfunctors} A \emph{dg-functor} between dg-categories is a functor commuting with the differentials. Given a dg-functor $F : \mathcal{C} \rightarrow \mathcal{C}'$, it induces a functor on the homotopy categories $[F] : H^0(\mathcal{C}) \rightarrow H^0(\mathcal{C}')$. 
We say that a dg-functor is a \emph{quasi-equivalence} if it gives quasi-isomorphisms on the hom-spaces, and induces an equivalence on the homotopy categories. We want to consider dg-categories up to quasi-equivalence. Let $\Hqe$ be the homotopy category of dg-categories up to quasi-equivalence, and we write $\cRHom_{\Hqe}$ for the dg-space of quasi-functors between dg-categories (see \cite{toen}, \cite{toenlectures}, or \cite[Appendix A]{naissevaz3}). These quasi-functors induce honest functors on the homotopy categories. Whenever $\mathcal{C}'$ is dg-triangulated, so is $\cRHom_{\Hqe}(\mathcal{C},\mathcal{C}')$. \begin{rem} The space of quasi-functors is equivalent to the space of strictly unital $A_\infty$-functors. \end{rem} It is in general hard to understand the space of quasi-functors. However, by the results of To\"en~\cite{toen}, if $\Bbbk$ is a field and $(A,d_A)$ and $(A',d_{A'})$ are dg-algebras, then it is possible to compute the space of `coproduct preserving' quasi-functors $\cRHom_{\Hqe}^{cop}(\mathcal{D}_{dg}(A,d_A),\mathcal{D}_{dg}(A',d_{A'}))$, in the same way as the category of coproduct preserving functors between categories of modules is equivalent to the category of bimodules. Indeed, we have a quasi-equivalence \begin{equation}\label{eq:quasifunctequiv} \cRHom_{\Hqe}^{cop}(\mathcal{D}_{dg}(A,d_A),\mathcal{D}_{dg}(A',d_{A'})) \cong \mathcal{D}_{dg}((A',d_{A'}), (A,d_A)), \end{equation} where $ \mathcal{D}_{dg}((A',d_{A'}), (A,d_A))$ is the dg-derived category of dg-bimodules. Composition of functors becomes equivalent to derived tensor product. Then, understanding the triangulated structure of $\cRHom_{\Hqe}^{cop}(\mathcal{D}_{dg}(A,d_A),\mathcal{D}_{dg}(A',d_{A'}))$ becomes as easy as understanding $\mathcal{D}((A',d_{A'}), (A,d_A))$. In particular, a short exact sequence of dg-bimodules gives a distinguished triangle of dg-functors.
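Concretely, this last statement can be unpacked as follows, where $0 \rightarrow B' \rightarrow B \rightarrow B'' \rightarrow 0$ denotes an arbitrary short exact sequence of dg-bimodules over the pair $(A',d_{A'})$-$(A,d_A)$: under \cref{eq:quasifunctequiv}, such a sequence yields a distinguished triangle of dg-functors
\[
B' \otimes^{\Lderiv}_{(A,d_A)} (-) \rightarrow B \otimes^{\Lderiv}_{(A,d_A)} (-) \rightarrow B'' \otimes^{\Lderiv}_{(A,d_A)} (-) \rightarrow B' \otimes^{\Lderiv}_{(A,d_A)} (-)[1]
\]
from $\mathcal{D}_{dg}(A,d_A)$ to $\mathcal{D}_{dg}(A',d_{A'})$.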
\subsection{Derived hom and tensor dg-functors}\label{sec:deriveddghomtensor} Let $(R,d_R)$ and $(S,d_S)$ be dg-algebras, let $M$ and $N$ be an $(R,d_R)$-module and an $(S,d_S)$-module respectively, and let $B$ be a dg-bimodule over $(S,d_S)$-$(R,d_R)$. Then, the \emph{derived tensor product} is \[ B \otimes^{\Lderiv}_{(R,d_R)} M := B \otimes \br M, \] and the \emph{derived hom space} is \[ \RHOM_{(S,d_S)}(B, N) := \HOM_{(S,d_S)}(B, \fr N). \] Note that we have quasi-isomorphisms of dg-spaces $B \otimes^{\Lderiv}_{(R,d_R)} M \cong B \rb \otimes_{(R,d_R)}\br M \cong B \rb \otimes_{(R,d_R)} M$, and $\RHOM_{(S,d_S)}(B, N) \cong \HOM_{(S,d_S)}(\br B,\fr N) \cong \HOM_{(S,d_S)}(\br B, N)$. \smallskip This in turn defines triangulated dg-functors \begin{align*} B \otimes^{\Lderiv}_{(R,d_R)} (-) &: \mathcal{D}_{dg}(R,d_R) \rightarrow \mathcal{D}_{dg}(S,d_S), \intertext{and} \RHOM_{(S,d_S)}(B, -) &: \mathcal{D}_{dg}(S,d_S) \rightarrow \mathcal{D}_{dg}(R,d_R). \end{align*} They induce a pair of adjoint functors $B \otimes^{\Lderiv}_{(R,d_R)} (-) \vdash \RHOM_{(S,d_S)}(B, -)$ between the derived categories $\mathcal{D}(R,d_R)$ and $\mathcal{D}(S,d_S)$. \subsubsection{Computing units and counits}\label{sec:unitandcounit} The natural bijection \[ \bar \Phi_{M,N}^{B} : \Hom_{\mathcal{D}(S,d_S)}( B \otimes^{\Lderiv}_{(R,d_R)} M, N ) \xrightarrow{\simeq} \Hom_{\mathcal{D}(R,d_R)}(M, \RHOM_{(S,d_S)}(B,N)), \] is obtained by making the following diagram commutative: \[ \begin{tikzcd} \Hom_{\mathcal{D}(S,d_S)}( B \otimes^{\Lderiv}_{(R,d_R)} M, N ) \ar{r}{\bar \Phi_{M,N}^B} \ar[sloped]{d}{\simeq} & \Hom_{\mathcal{D}(R,d_R)}(M, \RHOM_{(S,d_S)}(B,N)) \\ \Hom_{(S,d_S)}( B \otimes_{(R,d_R)} \br M, \fr N ) \ar[swap]{r}{\Phi_{\br M,\fr N}^B} & \Hom_{(R,d_R)}(\br M, \HOM_{(S,d_S)}(B,\fr N)), \ar[sloped]{u}{\simeq} \end{tikzcd} \] where $\Phi$ is defined in \cref{eq:homtensajd}.
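\begin{rem} Explicitly, $\Phi$ is the usual tensor-hom adjunction: a map $g : B \otimes_{(R,d_R)} \br M \rightarrow \fr N$ is sent to \[ \Phi_{\br M, \fr N}^B(g) := \bigl( m \mapsto ( b \mapsto g(b \otimes m) ) \bigr), \] and conversely $(\Phi_{\br M, \fr N}^B)^{-1}(h)(b \otimes m) := h(m)(b)$. The computations of the unit and counit below are obtained by unravelling these formulas. \end{rem}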
\smallskip For the sake of keeping notations short, we will write $\HOM$ instead of $\HOM_{(S,d_S)}$, and $\otimes$ instead of $\otimes_{(R,d_R)}$, and similarly for the derived versions. We are interested in computing the unit \[ \eta_M : M \rightarrow \RHOM(B, B \otimes^{\Lderiv} M), \] which is given by $\eta_M = \bar \Phi_{M, B \otimes^{\Lderiv} M}^B(\text{Id}_{B \otimes^{\Lderiv} M})$. Composing with the isomorphisms $\RHOM(B, B \otimes^{\Lderiv} M) \cong \HOM(B, \fr (B \otimes \br M))$ and $\br M \cong M$, we can compute $\eta_M$ as \[ \eta_M' = \Phi_{\br M,\fr(B \otimes^{\Lderiv} M)}^B( \imath_{B \otimes^{\Lderiv} M}) : \br M \rightarrow \HOM(B, \fr (B \otimes \br M)), \] which gives \[ \eta'_M(m) = (b \mapsto \imath_{B \otimes \br M}(b \otimes m)). \] Using the quasi-isomorphisms \[ \begin{tikzcd} \HOM(\br B, B \otimes^{\Lderiv} M) \ar["\imath_{B \otimes^{\Lderiv} M} \circ -", "\simeq"']{r} & \HOM(\br B, \fr(B \otimes^{\Lderiv} M)) & \ar["- \circ \pi_B"', "\simeq"]{l} \HOM(B, \fr(B \otimes^{\Lderiv} M)), \end{tikzcd} \] we can compute $\eta'_M$ through \begin{align*} \eta_M'' &: \br M \rightarrow \HOM(\br B, B \otimes \br M), & \eta_M''(m) &:= (b \mapsto \pi_B(b) \otimes m). \end{align*} This is particularly useful, since it means we do not have to compute any fibrant replacement to understand $\eta_M$. \smallskip Similarly, for the counit \[ \varepsilon_M : B \otimes^{\Lderiv} \RHOM(B,M) \rightarrow M, \] we have $\varepsilon_M = (\bar\Phi^{B}_{\RHOM(B,M),M})^{-1}(\text{Id}_{\RHOM(B,M)})$. We rewrite it as \[ \varepsilon_M ' = \Phi_{\br \HOM(B, \fr M), \fr M}^{-1}(\pi_{\HOM(B, \fr M)}) : B \otimes \br \HOM(B, \fr M) \rightarrow \fr M, \] with $\varepsilon_M'(b \otimes f) = \bigl(\pi_{\HOM(B, \fr M)}(f)\bigr)(b)$. 
We consider the quasi-isomorphisms \[ \begin{tikzcd}[column sep=10ex] B \otimes \br \HOM(B, \fr M) & \ar["\pi_B \otimes 1"',"\simeq"]{l} B \rb \otimes \br \HOM(B, \fr M) \ar["1 \otimes \pi_{\HOM(B,\fr M)}","\simeq"']{r} & B \rb \otimes \HOM(B, \fr M). \end{tikzcd} \] Therefore, we can compute $\varepsilon_M$ as \begin{align*} \varepsilon''_M &: B \rb \otimes \HOM(B, \fr M) \rightarrow \fr M, & \varepsilon''_M(b \otimes f) &:= f(\pi'_B(b)), \end{align*} where $\pi'_B : B \rb \xrightarrow{\simeq} B$ denotes the natural quasi-isomorphism. If in addition $B$ is already cofibrant as a left dg-module, then we can suppose $\br B = B$ and $\pi_B = \text{Id}_B$, and we obtain a commutative diagram \[ \begin{tikzcd} B\rb \otimes \HOM(B, \fr M) \ar{r}{\varepsilon''_M} \ar[sloped]{d}{\simeq} \ar[swap]{d}{1 \otimes (- \circ \pi_B)} & \fr M \ar[equals]{d} \\ B\rb \otimes \HOM(\br B, \fr M) \ar[sloped]{d}{\simeq} \ar{r} & \fr M \\ B\rb \otimes \HOM(\br B, M) \ar[swap]{r}{\varepsilon'''_M} \ar{u}{1 \otimes (\imath_M \circ -)} & \ar[sloped]{u}{\simeq} \ar[swap]{u}{\imath_M} M \end{tikzcd} \] where \begin{align*} \varepsilon_M''' &: B \rb \otimes \HOM(\br B, M) \rightarrow M, & \varepsilon_M'''(b \otimes f) &:= f(\pi'_B(b)). \end{align*} This is useful, since it means we can compute $\varepsilon_M$ using $\varepsilon'''_M$, which does not require any fibrant replacement. \subsection{Asymptotic Grothendieck group} \label{sec:asympK0} The usual definition of the Grothendieck group of a triangulated category does not take into account relations coming from infinite iterated extensions.
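\begin{rem} Some care is needed when passing to infinite constructions: in a triangulated category admitting arbitrary countable coproducts, the Eilenberg swindle $X \oplus (X \oplus X \oplus \cdots) \cong X \oplus X \oplus \cdots$ forces $[X] = 0$ for every object $X$, so the naive Grothendieck group collapses. The c.b.l.f. conditions introduced below rule out such unconstrained infinite direct sums, while the asymptotic Grothendieck group still records the relations coming from infinite iterated extensions. \end{rem}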
When $\mathcal{C}$ is a triangulated subcategory of a triangulated category $\mathcal{T}$ admitting countable products and coproducts, and these preserve distinguished triangles, then there exists a notion of \emph{asymptotic Grothendieck group $\boldsymbol{K}_0^\Delta(\mathcal{C})$} of $\mathcal{C}$, given by modding out relations obtained from Milnor (co)limits (see \cref{sec:cblfitext} below) in the usual Grothendieck group $K_0(\mathcal{C})$ (see \cite[\S 8]{asympK0} for a precise definition). \subsubsection{Ring of Laurent series} We follow the construction of the ring of formal Laurent series given in~\cite{laurent} (see also \cite[\S5]{asympK0}). The \emph{ring of formal Laurent series} $\Bbbk\pp{x_1,\dots, x_n}$ is constructed by first choosing an additive total order $\prec$ on $\mathbb{Z}^n$. One says that a cone $C := \{\alpha_1 v_1 + \cdots + \alpha_n v_n | \alpha_i \in \mathbb{R}_{\geq 0} \} \subset \mathbb{R}^n$, with $v_1, \dots, v_n \in \mathbb{Z}^n$, is compatible with $\prec$ whenever $0 \prec v_i$ for all $i \in \{1,\dots,n\}$. Then, we set \[ \Bbbk\pp{x_1,\dots,x_n} := \bigcup_{{\boldsymbol{e}} \in \mathbb{Z}^n} x^{{\boldsymbol{e}}} \Bbbk_{\prec}\llbracket x_1,\dots, x_n \rrbracket, \] where $\Bbbk_{\prec}\llbracket x_1,\dots, x_n \rrbracket$ consists of the formal series $\sum_{{\boldsymbol{g}} \in \mathbb{Z}^n} a_{\boldsymbol{g}} x^{{\boldsymbol{g}}}$ whose terms are contained in a cone compatible with $\prec$. It forms a ring when we equip $\Bbbk\pp{x_1,\dots,x_n}$ with the usual addition and multiplication of series. \subsubsection{C.b.l.f. structures}\label{sec:cblfstruct} We fix an arbitrary additive total order $\prec$ on $\mathbb{Z}^n$. We say that a $\mathbb{Z}^n$-graded $\Bbbk$-vector space $M = \bigoplus_{{\boldsymbol{g}} \in \mathbb{Z}^n} M_{\boldsymbol{g}}$ is \emph{c.b.l.f.
(cone bounded, locally finite) dimensional} if \begin{itemize} \item $\dim M_{\boldsymbol{g}} < \infty$ for all ${\boldsymbol{g}} \in \mathbb{Z}^n$; \item there exists a cone $C_M \subset \mathbb{R}^n$ compatible with $\prec$ and ${\boldsymbol{e}} \in \mathbb{Z}^n$ such that $M_{\boldsymbol{g}} = 0$ whenever ${\boldsymbol{g}} - {\boldsymbol{e}} \notin C_M$. \end{itemize} Let $(A,d_A)$ be a $\mathbb{Z}^n$-graded dg-algebra. Suppose that $(A,d_A)$ is concentrated in non-negative homological degrees, that is, $A_{\boldsymbol{g}}^h = 0$ whenever $h < 0$. The \emph{c.b.l.f. derived category $\mathcal{D}^{cblf}(A,d_A)$} of $(A,d_A)$ is the triangulated full subcategory of $\mathcal{D}(A,d_A)$ given by the dg-modules whose homology is c.b.l.f. dimensional for the $\mathbb{Z}^n$-grading. There is also a dg-enhanced version $\mathcal{D}_{dg}^{cblf}(A,d_A)$. We write $\boldsymbol{K}_0^\Delta(A,d) := \boldsymbol{K}_0^{\Delta}(\mathcal{D}^{cblf}(A,d_A))$. \begin{defn}\label{def:positivecblfdgalg} We say that $(A,d)$ is a \emph{positive c.b.l.f. dg-algebra} if \begin{enumerate} \item $A$ is c.b.l.f. dimensional for the $\mathbb{Z}^n$-grading; \item $A$ is non-negative for the homological grading; \item $A_0^0$ is semi-simple; \item $A_0^h = 0$ for $h >0$; \item $(A,d_A)$ decomposes as a direct sum of shifted copies of modules $P_i := A e_i$ for some idempotents $e_i \in A$, such that each $P_i$ is non-negative for the $\mathbb{Z}^n$-grading. \end{enumerate} \end{defn} In a $\mathbb{Z}^n$-graded triangulated category $\mathcal{C}$, we define the notion of \emph{c.b.l.f.
direct sum} as follows: \begin{itemize} \item take a finite collection of objects $\{K_1,\dots, K_m\}$ in $\mathcal{C}$; \item consider a direct sum of the form \begin{align*} &\bigoplus_{{\boldsymbol{g}} \in \mathbb{Z}^n} x^{{\boldsymbol{g}}} (K_{1,{\boldsymbol{g}}} \oplus \cdots \oplus K_{m,{\boldsymbol{g}}}), & &\text{ with }& K_{i,{\boldsymbol{g}}} &= \bigoplus_{j = 1}^{k_{i,{\boldsymbol{g}}}} K_i[h_{i,j,{\boldsymbol{g}}}], \end{align*} where $k_{i,{\boldsymbol{g}}} \in \mathbb{N}$ and $h_{i,j,{\boldsymbol{g}}} \in \mathbb{Z}$ are subject to the following conditions: \item there exists a cone $C$ compatible with $\prec$, and ${\boldsymbol{e}} \in \mathbb{Z}^n$ such that for all $i$ we have $k_{i,{\boldsymbol{g}}} = 0$ whenever ${\boldsymbol{g}} - {\boldsymbol{e}} \notin C$; \item there exists $h \in \mathbb{Z}$ such that $h_{i,j,{\boldsymbol{g}}} \geq h$ for all $i,j,{\boldsymbol{g}}$. \end{itemize} If $\mathcal{C}$ admits arbitrary c.b.l.f. direct sums, then $\boldsymbol{K}_0^{\Delta}(\mathcal{C})$ has a natural structure of $\mathbb{Z}\pp{x_1,\dots, x_n}$-module with \[ \sum_{{\boldsymbol{g}} \in C} a_{\boldsymbol{g}} x^{{\boldsymbol{e}} + {\boldsymbol{g}}} [X] := [\bigoplus_{{\boldsymbol{g}} \in C} x^{{\boldsymbol{g}} + {\boldsymbol{e}}} X^{\oplus a_{\boldsymbol{g}}}], \] where $X^{\oplus a_{\boldsymbol{g}}} = \bigoplus_{\ell = 1}^{|a_{\boldsymbol{g}}|} X[\alpha_{\boldsymbol{g}}]$ and $\alpha_{\boldsymbol{g}} = 0$ if $a_{\boldsymbol{g}} \geq 0$ and $\alpha_{\boldsymbol{g}} =1$ if $a_{\boldsymbol{g}} < 0$. \begin{thm}[{\cite[Theorem 9.15]{asympK0}}]\label{thm:triangtopK0genbyPi} Let $(A,d)$ be a positive c.b.l.f. dg-algebra, and let $\{P_j\}_{j \in J}$ be a complete set of indecomposable cofibrant $(A,d)$-modules that are pairwise non-isomorphic (even up to degree shift). Let $\{S_j\}_{j \in J}$ be the set of corresponding simple modules.
There is an isomorphism \begin{align*} \boldsymbol{K}_0^\Delta(A,d) & \cong \bigoplus_{j \in J} \mathbb{Z}\pp{x_1, \dots, x_n} [P_j], \end{align*} and $\boldsymbol{K}_0^\Delta(A,d)$ is also freely generated by the classes $\{[S_j]\}_{j \in J}$. \end{thm} \begin{prop}[{\cite[Proposition 9.18]{asympK0}}]\label{prop:cblfbiminduceKO} Let $(A,d)$ and $(A',d')$ be two positive c.b.l.f. dg-algebras. Let $B$ be a c.b.l.f. dimensional $(A',d')$-$(A,d)$-bimodule. The derived tensor product functor \[ F : \mathcal{D}^{cblf}(A,d) \rightarrow \mathcal{D}^{cblf}(A',d'), \quad F(X) := B \otimes^{\Lderiv}_{(A,d)} X, \] induces a map \[ [F] : \boldsymbol{K}_0^\Delta(A,d) \rightarrow \boldsymbol{K}_0^\Delta(A',d'), \] sending $[X]$ to $[F(X)]$. \end{prop} \subsubsection{C.b.l.f. iterated extensions}\label{sec:cblfitext} Recall that the \emph{Milnor colimit $\mcolim_{r \geq 0} (f_r)$} (using the terminology of~\cite{kellerweight}) of a collection of arrows $\{X_r \xrightarrow{f_r} X_{r+1}\}_{r \in \mathbb{N}}$ in a triangulated category $\mathcal{T}$ is the mapping cone fitting inside the following distinguished triangle \[ \coprod_{r \in \mathbb{N}} X_r \xrightarrow{1-f_\bullet} \coprod_{r \in \mathbb{N}} X_r \rightarrow \mcolim_{r \geq 0} (f_r) \rightarrow \] where the left arrow is given by the infinite matrix \[ 1-f_\bullet := \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots \\ -f_0 & 1 & 0 & 0 & \cdots \\ 0 & -f_1 & 1 & 0 & \cdots \\ \vdots & \ddots & \ddots & \ddots & \ddots \end{pmatrix}. \] \begin{defn} Let $\{K_1, \dots, K_m\}$ be a finite collection of objects in $\mathcal{C}$, and let $\{E_r\}_{r \in \mathbb{N}}$ be a family of direct sums of $\{K_1, \dots, K_m\}$ such that $\bigoplus_{r \in \mathbb{N}} E_r$ is a c.b.l.f. direct sum of $\{K_1, \dots, K_m\}$.
Let $\{M_r\}_{r \in \mathbb{N}}$ be a collection of objects in $\mathcal{C}$ with $M_0 = 0$, such that they fit in distinguished triangles \[ M_r \xrightarrow{f_r} M_{r+1} \rightarrow E_r \rightarrow \] Then, we say that an object $M \in \mathcal{C}$ such that $M \cong_{\mathcal{T}} \mcolim_{r\geq 0} (f_r)$ in $\mathcal{T}$ is a \emph{c.b.l.f. iterated extension of $\{K_1, \dots, K_m\}$}. \end{defn} Note that under the conditions above, we have \[ [M] = \sum_{r \geq 0} [E_r] \] in the asymptotic Grothendieck group $\boldsymbol{K}_0^\Delta(\mathcal{C})$. \begin{defn}\label{def:cblfgenerated} Let $\mathcal{T}$ be a $\mathbb{Z}^n$-graded (dg-)triangulated (dg-)category, and $\{X_j\}_{j \in J}$ be a collection of objects in $\mathcal{T}$. The subcategory of $\mathcal{T}$ \emph{c.b.l.f. generated by $\{X_j\}_{j \in J}$} is the triangulated full subcategory $\mathcal{C} \subset \mathcal{T}$ given by all objects $Y \in \mathcal{T}$ for which there exists a finite subset $\{X_k\}_{k \in K} \subset \{X_j\}_{j \in J}$ such that $Y$ is isomorphic to a c.b.l.f. iterated extension of $\{X_k\}_{k \in K}$ in $\mathcal{T}$. \end{defn} Thus, under the conditions above, $\boldsymbol{K}_0^\Delta(\mathcal{C})$ is generated as a $\mathbb{Z}\pp{x_1,\dots, x_n}$-module by the classes $\{[X_j]\}_{j \in J}$. \subsubsection{Dg-functors}\label{sec:cblfdgfunctors} Let $(R,d_R)$ and $(S,d_S)$ be ($\mathbb{Z}^n$-graded) dg-algebras. The situation of \cref{eq:quasifunctequiv} in \cref{sec:dgfunctors} restricts to the c.b.l.f. version $\mathcal{D}_{dg}^{cblf}$ of \cref{sec:cblfstruct}, so that \[ \cRHom_{\Hqe}^{cop}(\mathcal{D}_{dg}^{cblf}(R,d_R),\mathcal{D}_{dg}^{cblf}(S,d_{S})) \cong \mathcal{D}_{dg}^{cblf}((S,d_{S}), (R,d_R)).
\] Then, we obtain an induced map \begin{equation} \label{eq:K0homK0} \boldsymbol{K}_0^\Delta(\cRHom_{\Hqe}^{cop}( \mathcal{D}_{dg}^{cblf}(R,d_R), \mathcal{D}_{dg}^{cblf}(S,d_{S}))) \rightarrow \Hom_{\mathbb{Z}\pp{x_1,\dots,x_n}}(\boldsymbol{K}_0^\Delta(R,d_R), \boldsymbol{K}_0^\Delta(S,d_{S})), \end{equation} by using \cref{prop:cblfbiminduceKO}. \section{Variants and generalizations}\label{sec:losends} \subsection{Zigzag algebras}\label{sec:zigazag} In~\cite[\S4]{qi-sussan} it was proven that for ${\mathfrak{g}}=\mathfrak{sl}_2$ the KLRW algebra $T_1^{1,\dotsc,1}$ with $r$ red strands and only one black strand is isomorphic to a preprojective algebra $A^!_r$ of type $A$. It is a Koszul algebra, whose quadratic dual was used by Khovanov--Seidel in~\cite{khovanov-seidel} to construct a categorical braid group action. Let $\Bbbk$ be a field of any characteristic and let $\mathcal{Q}_r$ be the following quiver \[ \begin{tikzcd}[column sep = 1.5cm] 0 \arrow[r,bend left,"0\vert 1"] \arrow[out=210,in=140,loop,"\theta"] & 1 \arrow[l,bend left,"1\vert0"] \arrow[r,bend left, "1\vert 2"] & 2 \arrow[l,bend left,"2\vert 1"] \arrow[r,bend left] & \dotsm \arrow[r,bend left] \arrow[l,bend left] & r \arrow[l,bend left] \end{tikzcd} \] and $\Bbbk\mathcal{Q}_r$ its path algebra. We endow $\Bbbk\mathcal{Q}_r$ with a $\mathbb{Z} \times \mathbb{Z}^2$-grading by declaring that \[ \deg( i\vert i\pm 1) := (0,1,0) , \mspace{40mu} \deg(\theta) := (1,0,2) . \] We consider the first grading as homological, and the second and third gradings are called the $q$-grading and the $\lambda$-grading respectively. We denote the straight path that starts on $i_1$ and ends at $i_n$ by $(i_1\vert i_2\vert\dotsc \vert i_{n-1}\vert i_n)$ and the constant path on $i$ by $(i)$. The set $\{(0),\dotsm,(r)\}$ forms a complete set of primitive orthogonal idempotents in $\Bbbk\mathcal{Q}_r$. 
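\begin{rem} As a sanity check on the grading conventions, each arrow $(i\vert i\pm 1)$ contributes $(0,1,0)$ to the degree, so the path $(0\vert 1\vert 0)$ sits in degree $(0,2,0)$, while $\theta(0\vert 1\vert 0)$ sits in degree $(1,0,2) + (0,2,0) = (1,2,2)$. In particular, $\theta$ is the only generator with nontrivial homological degree or $\lambda$-degree. \end{rem}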
\begin{defn} Let $\mathcal{A}_r^!$ be the algebra given by the quotient of the path algebra $\Bbbk\mathcal{Q}_r$ by the relations \begin{align*} (i\vert i-1\vert i) &= (i\vert i+1\vert i), \rlap{\hspace{6ex}for $i>0$,}\\ \theta(0\vert 1\vert 0) &= (0\vert 1\vert 0)\theta , \\ \theta ^2 &= 0. \end{align*} \end{defn} We usually consider $\mathcal{A}_r^!$ as a dg-algebra $(\mathcal{A}_r^!, 0)$ with zero differential. We can also consider a version of $\mathcal{A}_r^!$ with a non-trivial differential $d$ given by \[ d(X) := \begin{cases} (0\vert 1\vert 0), & \text{if }X=\theta, \\ 0 , & \text{otherwise}, \end{cases} \] which one easily checks to be well-defined. \begin{prop} \label{prop:isozigzag} The $\mathbb{Z}\times\mathbb{Z}^2$-graded algebra $\mathcal{A}_r^!$ is isomorphic to the $\mathbb{Z} \times \mathbb{Z}^2$-graded algebra $T_1^{\lambda, r}$ with $r$ red strands and $1$ black strand, by the map sending \begin{align*}\allowdisplaybreaks (i) &\mapsto \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} --(0,1); \draw[stdhl] (.5,0) --(.5,1); \node at(1,.5) {\tiny$\dots$}; \draw[stdhl] (1.5,0) --(1.5,1); \draw (2,0) -- (2,1); \draw[stdhl] (2.5,0) --(2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw[stdhl] (3.5,0) --(3.5,1); % \tikzbrace{.5}{1.5}{0}{$i$} } \intertext{where the black strand comes right after the $i$th red strand, and} (i-1\vert i) &\mapsto \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} --(0,1); \draw[stdhl] (.5,0) --(.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (2,0) ..controls (2,.3) and (1.5,.7) .. (1.5,1); \draw[stdhl] (1.5,0) ..controls (1.5,.3) and (2,.7) .. (2,1); \draw[stdhl] (2.5,0) --(2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw[stdhl] (3.5,0) --(3.5,1); % \tikzbrace{.5}{1.5}{0}{$i$} }\\ (i+1\vert i) &\mapsto \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} --(0,1); \draw[stdhl] (.5,0) --(.5,1); \node at(1,.5) {\tiny$\dots$}; \draw[stdhl] (1.5,0) --(1.5,1); \draw (2,0) ..controls (2,.3) and (2.5,.7) ..
(2.5,1); \draw[stdhl] (2.5,0) ..controls (2.5,.3) and (2,.7) .. (2,1); \node at(3,.5) {\tiny$\dots$}; \draw[stdhl] (3.5,0) --(3.5,1); % \tikzbrace{.5}{1.5}{0}{$i$} }\\ \theta &\mapsto \tikzdiagh{0}{ \draw (.5,0) .. controls (.5,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1) node [midway,nail]{}; \draw[stdhl] (1,0) --(1,1); \node at(1.75,.5) {\tiny$\dots$}; \draw[stdhl] (2.5,0) --(2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw[stdhl] (3.5,0) --(3.5,1); } \end{align*} Furthermore, the isomorphism upgrades to isomorphisms of dg-algebras $(\mathcal{A}_r^!, 0) \cong (T_1^{\lambda, r},0)$ and $(\mathcal{A}_r^!, d) \cong (T_1^{\lambda, r},d_{1})$. \end{prop} \begin{proof} First, one can show by a straightforward computation that the map defined above respects all defining relations of $\mathcal{A}_r^!$. Moreover, by turning any dot in $T_1^{\lambda, r}$ into a double crossing using~\cref{eq:redR2}, it is not hard to construct an inverse of the map defined above. We leave the details to the reader. \end{proof} \begin{cor} The homology of $(\mathcal{A}_r^!,d)$ is concentrated in homological degree $0$ and is isomorphic to the preprojective algebra $A_r^!$. \end{cor} Moreover, by \cref{prop:isozigzag}, the results in \cref{sec:catTLB} can be pulled back to the derived category of $\mathbb{Z}^2$-graded $(\mathcal{A}_r^!,0)$-modules, endowing $\mathcal{D}_{dg}(\mathcal{A}_r^!,0) \cong \mathcal{D}_{dg}(T_1^{\lambda, r}, 0)$ with a categorical action of $\mathcal{B}_r$. \subsection{Dg-enhanced KLRW algebras: the general case}\label{sec:dgWebstergeneral} Fix a symmetrizable Kac--Moody algebra ${\mathfrak{g}}$ with set of simple roots $I$ and dominant integral weights $\underline{\mu}:=(\mu_1,\dotsc ,\mu_r)$.
\subsubsection{Dg-enhanced KLRW algebras: ${\mathfrak{g}}$ symmetrizable} Recall that the KLRW algebra~\cite[\S4]{webster} $T_b^{\underline{\mu}}({\mathfrak{g}})$ on $b$ strands is the diagrammatic $\Bbbk$-algebra generated by braid-like diagrams on $b$ black strands and $r$ red strands. Red strands are labeled from left to right by $\mu_1, \dots, \mu_r$ and cannot intersect each other, while black strands are labeled by simple roots and can intersect red strands transversally, they can intersect transversally among themselves and can carry dots. Diagrams are taken up to braid-like planar isotopy and satisfy the following local relations: \begin{itemize} \item the KLR local relations (2.5a)-(2.5g) in~\cite[Definition~2.4]{webster}; \item the local black/red relations~\eqref{eq:gdotredstrand}-\eqref{eq:gredR3} for all $\nu\in\underline{\mu}$ and for all $\alpha_j$, $\alpha_k\in I$, given below; \item a black strand in the leftmost region is $0$. \end{itemize} \begin{align} \tikzdiagh{0}{ \draw (1,0)node[below]{\small $\alpha_j$} ..controls (1,.5) and (0,.5) .. (0,1) node [near end,tikzdot]{}; \draw[stdhl] (0,0) node[below]{\small $\nu$} ..controls (0,.5) and (1,.5) .. (1,1); } \ &= \ \tikzdiagh{0}{ \draw (1,0)node[below]{\small $\alpha_j$} ..controls (1,.5) and (0,.5) .. (0,1) node [near start,tikzdot]{}; \draw[stdhl] (0,0) node[below]{\small $\nu$} ..controls (0,.5) and (1,.5) .. (1,1); } & \tikzdiagh{0}{ \draw (0,0)node[below]{\small $\alpha_j$} ..controls (0,.5) and (1,.5) .. (1,1) node [near start,tikzdot]{}; \draw[stdhl] (1,0) node[below]{\small $\nu$} ..controls (1,.5) and (0,.5) .. (0,1); } \ &= \ \tikzdiagh{0}{ \draw (0,0)node[below]{\small $\alpha_j$} ..controls (0,.5) and (1,.5) .. (1,1) node [near end,tikzdot]{}; \draw[stdhl] (1,0) node[below]{\small $\nu$} ..controls (1,.5) and (0,.5) .. (0,1); } \label{eq:gdotredstrand} \\ \tikzdiagh{0}{ \draw (1,0)node[below]{\small $\alpha_j$} ..controls (1,.25) and (0,.25) .. (0,.5)..controls (0,.75) and (1,.75) .. 
(1,1) ; \draw[stdhl] (0,0) node[below]{\small $\nu$} ..controls (0,.25) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.75) .. (0,1) ; } \ &= \ \tikzdiagh{0}{ \draw[stdhl] (0,0) node[below]{\small $\nu$} -- (0,1) ; \draw (1,0)node[below]{\small $\alpha_j$} -- (1,1) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $\nu_j$} ; } & \tikzdiagh{0}{ \draw (0,0)node[below]{\small $\alpha_j$} ..controls (0,.25) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.75) .. (0,1) ; \draw[stdhl] (1,0) node[below]{\small $\nu$} ..controls (1,.25) and (0,.25) .. (0,.5)..controls (0,.75) and (1,.75) .. (1,1) ; } \ &= \ \tikzdiagh{0}{ \draw (0,0)node[below]{\small $\alpha_j$} -- (0,1) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $\nu_j$} ; \draw[stdhl] (1,0) node[below]{\small $\nu$} -- (1,1) ; } \label{eq:gredR2} \\ \tikzdiagh{0}{ \draw (0.5,0)node[below]{\small $\alpha_j$} .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw (1,0)node[below]{\small $\alpha_k$} .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0,0) node[below]{\small $\nu$} .. controls (0,0.25) and (1, 0.5) .. (1,1); } \ &= \ \tikzdiagh[xscale=-1]{0}{ \draw (0,0)node[below]{\small $\alpha_k$} .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (0.5,0)node[below]{\small $\alpha_j$} .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw [stdhl] (1,0) node[below]{\small $\nu$} .. controls (1,0.5) and (0, 0.75) .. (0,1); } & \tikzdiagh{0}{ \draw (0,0)node[below]{\small $\alpha_j$} .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (0.5,0)node[below]{\small $\alpha_k$} .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw [stdhl] (1,0) node[below]{\small $\nu$} .. controls (1,0.5) and (0, 0.75) .. (0,1); } \ &= \ \tikzdiagh[xscale=-1]{0}{ \draw (0.5,0)node[below]{\small $\alpha_k$} .. controls (0.5,0.25) and (0, 0.25) .. 
(0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw (1,0)node[below]{\small $\alpha_j$} .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0,0) node[below]{\small $\nu$} .. controls (0,0.25) and (1, 0.5) .. (1,1); } \label{eq:gcrossingslidered} \\ \tikzdiagh{0}{ \draw (0,0)node[below]{\small $\alpha_j$} .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (1,0)node[below]{\small $\alpha_k$} .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0.5,0)node[below]{\small $\nu$} .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); } \ &= \ \tikzdiagh[xscale=-1]{0}{ \draw (0,0)node[below]{\small $\alpha_k$} .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (1,0)node[below]{\small $\alpha_j$} .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0.5,0) node[below]{\small $\nu$} .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); } \ + \delta_{j,k}\sssum{a+b= \nu_j-1} \ \tikzdiagh{0}{ \draw (0,0)node[below]{\small $\alpha_j$} -- (0,1) node[midway,tikzdot]{} node[midway,xshift=-1.5ex,yshift=.75ex]{\small $a$}; \draw (1,0)node[below]{\small $\alpha_k$} -- (1,1) node[midway,tikzdot]{} node[midway,xshift=1.5ex,yshift=.75ex]{\small $b$}; \draw [stdhl] (0.5,0)node[below]{\small $\nu$} -- (0.5,1); } \label{eq:gredR3} \end{align} Multiplication is given by concatenation of diagrams that are read from bottom to top, and it is zero if the labels do not match. The algebra $T_b^{\underline{\mu}}({\mathfrak{g}})$ is finite-dimensional and can be endowed with a $\mathbb{Z}$-grading (we refer to~\cite[Definition~4.4]{webster} for the definition of the grading). In the case of $ \underline{\mu}=\nu$ the algebra $T_b^{\nu}({\mathfrak{g}})$ contains a single red strand labeled $\nu$ and is isomorphic to the cyclotomic KLR algebra $R^{\nu}(b)$ for ${\mathfrak{g}}$ in $b$ strands. 
\begin{defn} Fix a ${\mathfrak{g}}$-weight $\lambda=(\lambda_1,\dotsc ,\lambda_{\vert I\vert})$ with each $\lambda_i$ being a formal parameter. The \emph{dg-enhanced KLRW algebra} $T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$ is defined as in \cref{def:dgwebsteralg}, with a blue strand labeled by $\lambda$ and with the $r$ red strands labeled by $\mu_1, \dots, \mu_r$ and the black strands labeled by simple roots. The black strands can carry dots and be nailed on the blue strand: \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5);\node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \] with everything in homological degree $0$, except that a nail is in homological degree $1$. The diagrams are taken up to graded braid-like planar isotopy, and are required to satisfy the same local relations as $T_b^{\underline{\mu}}({\mathfrak{g}})$, together with the following extra local relations: \begin{align*} \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) node[midway, tikzdot]{} .. controls (.5,.25) .. (.5,.5); \node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } &= \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5) node[midway, tikzdot]{}; \node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } & \tikzdiagh{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.05) .. (.5,.2) -- (.5,1);\node at (.5,-1.38) {$\alpha_j$}; \draw (1,-1) .. controls (1,0) .. (0, .4) .. controls (1,.75) .. (1,1); \node at (1,-1.38) {$\alpha_k$}; \draw [vstdhl] (0,-1) node[below]{\small $\lambda$} -- (0,1) node [pos=.3,nail] {} node [pos=.7,nail] {} ; } &= \ - \tikzdiagh[yscale=-1]{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.05) .. 
(.5,.2) -- (.5,1);\node at (.5,1.38) {$\alpha_j$}; \draw (1,-1) .. controls (1,0) .. (0, .4) .. controls (1,.75) .. (1,1); \node at (1,1.38) {$\alpha_k$}; \draw [vstdhl] (0,-1) -- (0,1) node[below]{\small $\lambda$} node [pos=.3,nail] {} node [pos=.7,nail] {} ; } & \tikzdiagh{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.4) and (.5,.4) .. (0,.4) .. controls (.5,.75) .. (.5,1);\node at (.5,-1.38) {$\alpha_j$}; \draw [vstdhl] (0,-1) node[below]{\small $\lambda$} -- (0,1) node [pos=.3,nail] {} node [pos=.7,nail] {} ; } &= \ 0 , \end{align*} for all $\alpha_j$, $\alpha_k\in I$. \end{defn} Note that we have an inclusion $T_b^{\underline{\mu}}({\mathfrak{g}})\subset T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$ by adding a vertical blue strand at the left of a diagram. The algebra $T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$ can be endowed with the $q$-grading inherited from $T_b^{\underline{\mu}}({\mathfrak{g}})$. It can be additionally endowed with a $\lambda_k$-grading for each $\alpha_k\in I$ such that $T_b^{\underline{\mu}}({\mathfrak{g}})\subset T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$ sits in $\lambda_k$-degree zero for all $k$, and \begin{align*} \deg_{q,\lambda_k} \left( \tikzdiagh{-1ex}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5);\node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \right) &:= (0,2\delta_{k,j}). \end{align*} We usually consider $T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$ as a $\mathbb{Z}^{1+|I|}$-graded dg-algebra $(T_b^{\lambda,\underline{\mu}}({\mathfrak{g}}), 0)$ with trivial differential. In the case of $ \underline{\mu}=\varnothing$ the algebra $T_b^{\lambda,\varnothing}({\mathfrak{g}})$ contains a blue strand labeled $\lambda$ and is isomorphic to the $\mathfrak{b}$-KLR algebra $R_{\mathfrak{b}}(b)$ introduced in~\cite[\S3.1]{naissevaz3}. 
The results of~\cref{sec:dgWebster} can be generalized to $T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$. In particular, one can prove it is free over $\Bbbk$ and that it admits a basis similar to the one in Theorem~\ref{thm:Tbasis}. Moreover, by using induction and restriction functors that add a black strand, we obtain a categorical action of ${\mathfrak{g}}$ on $\mathcal{D}_{dg}(T_b^{\lambda,\underline{\mu}}({\mathfrak{g}}),0)$ (in the sense of \cite{naissevaz3}), which categorifies the $U_q({\mathfrak{g}})$-action on the tensor product of a universal Verma module and several integrable modules. \smallskip Fix an integrable dominant weight $\kappa$ of ${\mathfrak{g}}$ and define a differential $d_\kappa$ on $T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$ (after specialization of the $\lambda_j$-grading to $q^{\kappa_j}$) by setting \[ d_{\kappa}\Biggl( \tikzdiagh{-1ex}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5);\node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \Biggr) \ = \ \tikzdiagh{-1ex}{ \draw (.5,-.5) -- (.5,.5) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $\kappa_j$};\node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5); } \] and $d_\kappa(t) = 0$ for all $t \in T_b^{\underline{\mu}}({\mathfrak{g}}) \subset T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$, and extending by the graded Leibniz rule w.r.t. the homological grading. A straightforward computation shows that $d_\kappa$ is well-defined. \begin{prop} The dg-algebra $(T^{\lambda,\underline{\mu}}_b({\mathfrak{g}}),d_\kappa)$ is formal with \[ H(T^{\lambda,\underline{\mu}}_b({\mathfrak{g}}),d_\kappa) \cong T^{(\kappa,\underline{\mu})}_b({\mathfrak{g}}) . \] \end{prop} \begin{proof} The proof follows by similar arguments as in~\cite[Theorem 4.4]{naissevaz3}. 
\end{proof} \subsubsection{Dg-enhanced KLRW algebras for parabolic subalgebras} Let ${\mathfrak{p}}\subseteq{\mathfrak{g}}$ be a parabolic subalgebra with partition $I=I_f\sqcup I_r$ of the set of simple roots, and $(\lambda,n)=(\lambda_i)_{i\in I}$, with $\lambda_i$ a formal parameter if $i\in I_r$, and $\lambda_i=q^{n_i}$ with $n_i\in\mathbb{N}$ if $i\in I_f$. Introduce a differential $d_{\lambda,n}$ on $T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$ (after specialization of the $\lambda_j$-grading to $q^{n_j}$ for each $\alpha_j\in I_r$) by setting \[ d_{\lambda,n}\Biggl( \tikzdiagh{-1ex}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5);\node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \Biggr) \ = \ \begin{cases} \ \ \ 0, & \text{if $\alpha_j\in I_r$}, \\[1ex] \tikzdiagh{-1ex}{ \draw (.5,-.5) -- (.5,.5) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $n_j$};\node at (.5,-.88) {$\alpha_j$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5); } &\text{if $\alpha_j\in I_f$}, \end{cases} \] and $d_{\lambda,n}(t) = 0$ for all $t \in T_b^{\underline{\mu}}({\mathfrak{g}}) \subset T_b^{\lambda,\underline{\mu}}({\mathfrak{g}})$, and extending by the graded Leibniz rule w.r.t. the homological grading. As before, a straightforward computation shows that it is well-defined. \begin{prop} \label{prop:pKLRWformal} The dg-algebra $(T^{\lambda,\underline{\mu}}_b({\mathfrak{g}}),d_{\lambda,n})$ is formal. \end{prop} \begin{proof} The proof follows by similar arguments as in~\cite[Theorem 4.4]{naissevaz3}. \end{proof} \begin{defn} We define the \emph{dg-enhanced ${\mathfrak{p}}$-KLRW algebra} as \[ T_b^{\lambda,\underline{\mu}}({\mathfrak{g}},{\mathfrak{p}}) := H(T^{\lambda,\underline{\mu}}_b({\mathfrak{g}}),d_{\lambda,n}). 
\] \end{defn} Note that by \cref{prop:pKLRWformal} we have a quasi-isomorphism $(T_b^{\lambda,\underline{\mu}}({\mathfrak{g}},{\mathfrak{p}}), 0) \cong (T^{\lambda,\underline{\mu}}_b({\mathfrak{g}}),d_{\lambda,n})$. As above, $\mathcal{D}_{dg} (T^{\lambda,\underline{\mu}}_b({\mathfrak{g}}),d_{\lambda,n})$ categorifies the tensor product of a parabolic Verma module and several integrable modules, and comes with a categorical action of ${\mathfrak{g}}$. \subsection{Dg-enhanced quiver Schur algebras}\label{sec:dgqSchur} In order to define a quiver Schur algebra of type $A_1$, we follow the approach of~\cite{KSY}, which best suits our goals. We use a slightly different definition, however, because theirs corresponds to a thick version of the KLRW algebra (see \cite[\S9.2]{KSY}), and we want to relate it to the version we use. \subsubsection{Cyclic modules and quiver Schur algebras} Recall that $\nh_b^N \cong T_b^{(N)}$ is the $N$-cyclotomic nilHecke algebra on $b$ strands. Fix $r \geq 0$ and $\und N = (N_0, N_1, \dots, N_r) \in \mathbb{N}^{r+1}$ such that $\sum_i N_i = N$. For $\rho = (b_0, b_1, \dots, b_r)$ such that $\sum_i b_i = b$, we define the element \[ x_{\rho}^{\und N} := \prod_{i=1}^{r} (x_{b_r+ \cdots + b_{i+1}+1}^{N_{i}+ \cdots + N_1} \cdots x_{b_r + \cdots + b_{i+1} + b_i}^{N_{i}+ \cdots + N_1}) \in \nh_b^N, \] where we recall that $x_b$ is a dot on the $b$th black strand. Then, we consider the cyclic right $\nh_b^N$-module defined as \[ Y^{\und N}_{\rho} := x_{\rho}^{\und N} \nh_b^N. \] The \emph{quiver Schur algebra} (of type $A_1$) is defined as the $\mathbb{Z}$-graded algebra: \[ Q^{\und N}_b := \END_{\nh_b^N}\left( \bigoplus_{\rho \in \mathcal{P}_b^r} q^{-\deg_q(x_\rho^{\und N})/2} Y^{\und N}_{\rho} \right), \] where $\END$ means the ($\mathbb{Z}$-)graded endomorphism ring. The $\mathbb{Z}$-graded algebra $Q^{\und N}_b$ is isomorphic to $T^{\und N}_b$~\cite[Proposition 5.33]{webster}.
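To make the formula for $x_{\rho}^{\und N}$ concrete, we spell out the case $r = 2$: for $\rho = (b_0,b_1,b_2)$ we get
\[
x_{\rho}^{\und N} = \bigl(x_1^{N_2+N_1} \cdots x_{b_2}^{N_2+N_1}\bigr) \bigl(x_{b_2+1}^{N_1} \cdots x_{b_2+b_1}^{N_1}\bigr),
\]
that is, each of the $b_2$ leftmost black strands carries $N_1+N_2$ dots, each of the next $b_1$ strands carries $N_1$ dots, and the remaining $b_0$ strands carry no dots.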
The \emph{reduced quiver Schur algebra} (of type $A_1$) is defined as \[ {}^{red}Q^{\und N}_b := \END_{\nh_b^N}\left( \bigoplus_{\rho \in \mathcal{P}_b^{r, \und N}} q^{-\deg_q(x_\rho^{\und N})/2} Y^{\und N}_{\rho} \right), \] where $ \mathcal{P}_{b}^{r, \und N} := \left\{ (b_0, b_1, \dots, b_r) \mid b_i \leq N_i \text{ for $0 \leq i \leq r$}\right\} \subset \mathcal{P}_{b}^{r} $. It is Morita equivalent to $Q^{\und N}_b$ (this can be shown by observing that if $b_i > N_i$ for some $i$, then $Y^{\und N}_{\rho}$ is isomorphic to a direct sum of modules from $ \{ q^{-\deg_q(x_{\rho'}^{\und N})/2} Y^{\und N}_{\rho'} \mid {\rho' \in \mathcal{P}_b^{r, \und N}} \}$), and thus to $T^{\und N}_b$. \subsubsection{Dg-enhanced cyclic modules} Our goal is to construct a dg-enhancement of $Y^{\und N}_{\rho}$ over $(T_b^{\lambda, \emptyset}, d_N)$, the dg-enhanced KLRW algebra without red strands. We will simply write $T_b^\lambda$ for $T_b^{\lambda, \emptyset}$. Recall from \cref{thm:dNformal} that $(T_b^\lambda, d_N)$ is quasi-isomorphic to $T_b^{(N)} \cong \nh_b^N$. \smallskip Let $T_b^{q^{\ell}\lambda}$ for $\ell \in \mathbb{Z}$ be the algebra defined in the same way as $T_b^\lambda$ (see \cref{def:dgwebsteralg}) except that the blue strand is labeled by $q^{\ell}\lambda$, and the nail has $\mathbb{Z}^2$-degree: \begin{align*} \deg_{q,\lambda} \left( \tikzdiagh{-1ex}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $q^{\ell}\lambda$} -- (0,.5) node [midway,nail]{}; } \right) &= (2\ell,2). \end{align*} Whenever $\ell \geq \ell'$ and $b \leq b'$, there is an inclusion of algebras \begin{equation}\label{eq:inclusionTlshift} T^{q^{\ell} \lambda}_{b} \hookrightarrow T^{q^{\ell'}\lambda}_{b'}, \end{equation} given by first turning any $q^{\ell}\lambda$-nail into a $q^{\ell'}\lambda$-nail by adding dots: \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) ..
(.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $q^{\ell}\lambda$} -- (0,.5) node [midway,nail]{}; } \ \mapsto \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5) node[midway,tikzdot]{} node[midway, xshift=4ex,yshift=.5ex]{\small $\ell - \ell'$}; \draw[vstdhl] (0,-.5) node[below]{\small $q^{\ell'}\lambda$} -- (0,.5) node [midway,nail]{}; } \] so that the blue strand labeled $q^{\ell} \lambda$ becomes labeled $q^{\ell'} \lambda$, and then adding $b'- b$ vertical black strands at the right: \[ \tikzdiagh[xscale=2]{0}{ \draw [vstdhl] (-.25,0) node[below]{\small $q^{\ell'}\lambda$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \tikzbrace{0}{.5}{0}{\small $b$}; % \filldraw [fill=white, draw=black] (-.375,.25) rectangle (.625,.75) node[midway] { $D$}; } \mapsto \tikzdiagh[xscale=2]{0}{ \draw [vstdhl] (-.25,0) node[below]{\small $q^{\ell'}\lambda$} -- (-.25,1); % \draw (0,0) -- (0,1); \node at(.25,.125) {\tiny $\dots$}; \node at(.25,.875) {\tiny $\dots$}; \draw (.5,0) -- (.5,1); % \tikzbrace{0}{.5}{0}{\small $b$}; % \draw (.75,0) -- (.75,1); \node at(1,.5) {\tiny $\dots$}; \draw (1.25,0) -- (1.25,1); % \tikzbrace{.75}{1.25}{0}{\small $b'-b$}; % \filldraw [fill=white, draw=black] (-.375,.25) rectangle (.625,.75) node[midway] { $D$}; } \] A straightforward computation shows that the map in \cref{eq:inclusionTlshift} is well-defined, and \cref{thm:Tbasis} shows that the map is injective. By restriction, the inclusion $T^{q^{\ell} \lambda}_{b} \hookrightarrow T^{q^{\ell'}\lambda}_{b'}$ defines a left action of $T^{q^{\ell} \lambda}_{b}$ on any $T^{q^{\ell'} \lambda}_{b'}$-module. 
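Note also that the inclusion in \cref{eq:inclusionTlshift} is homogeneous: with the convention that a dot has $\mathbb{Z}^2$-degree $(2,0)$, a $q^{\ell}\lambda$-nail has degree $(2\ell,2)$, while its image consists of a $q^{\ell'}\lambda$-nail of degree $(2\ell',2)$ together with $\ell - \ell'$ dots, of total degree
\[
(2\ell',2) + (\ell-\ell')(2,0) = (2\ell,2),
\]
and adding vertical black strands on the right does not change the degree.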
\begin{defn} We define the right $T_b^\lambda$-modules \begin{align*} \tilde G_{\rho}^{\und N} &:= T_{b_r}^{q^{-N_r-\cdots-N_1} \lambda} \otimes_{T_{b_r}^{q^{-N_{r-1}-\cdots-N_1} \lambda}} \cdots \otimes_{T_{b_r+\cdots+b_2}^{ q^{-N_1} \lambda}} T_{b_r+\cdots+b_1}^{q^{-N_1} \lambda} \otimes_{T_{b_r+\cdots+b_1}^{\lambda}} T_b^{\lambda}, \intertext{and} G_{\rho}^{\und N} &:= x_{\rho}^{\und N} \tilde G_{\rho}^{\und N}. \end{align*} \end{defn} Note that we can endow $G_{\rho}^{\und N} $ with either a differential of the form $d_N$ (as in \cref{sec:dgenh}) or a trivial one, making it a right dg-module over $(T_b^\lambda, d_N)$ or $(T_b^\lambda,0)$ respectively. \begin{exe} Take for example $r=2$. Then, we picture $G_{\rho}^{\und N}$ in terms of diagrams as: \[ \tikzdiag[xscale=1.25]{ \draw [vstdhl] (-.5,0) node[below]{\small $q^{-N_2-N_1}\lambda$} -- (-.5,4); % \draw (0,0) -- (0,2.25) -- (0,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $N_1$} -- (0,3.5) -- (0,4) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $N_2$}; \node at(.5,3.625) {\tiny $\dots$}; \node at(.5,.125) {\tiny $\dots$}; \draw (1,0) -- (1,2.25) -- (1,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $N_1$} -- (1,3.5) -- (1,4) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $N_2$}; % \draw (1.5,0) -- (1.5,2.25) -- (1.5,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $N_1$} -- (1.5,4); \node at(2,3.625) {\tiny $\dots$}; \node at(2,.125) {\tiny $\dots$}; \draw (2.5,0) -- (2.5,2.25) -- (2.5,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $N_1$} -- (2.5,4); % \draw (3,0) -- (3,4); \node at(3.5,3.625) {\tiny $\dots$}; \node at(3.5,.125) {\tiny $\dots$}; \draw (4,0) -- (4,4); % \filldraw [fill=white, draw=black] (-.75,2.75) rectangle (1.25,3.5) node[midway] { $T_{b_2}^{q^{-N_2-N_1}\lambda}$}; \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (2.75,2.25) 
node[midway] { $T_{b_1+b_2}^{q^{-N_1}\lambda}$}; \filldraw [fill=white, draw=black] (-.75,.25) rectangle (4.25,1) node[midway] { $T_{b}^{\lambda}$}; % \tikzbraceop{0}{1}{4}{\small $b_2$}; \tikzbraceop{1.5}{2.5}{4}{\small $b_1$}; } \] \end{exe} Note that whenever $N + \ell \geq 0$ we can equip $T^{q^\ell \lambda}_b$ with a differential $d_N$ given by \[ d_N\left( \tikzdiagh{-1ex}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $q^{\ell} \lambda$} -- (0,.5) node [midway,nail]{}; } \right) \ := \ \tikzdiagh{-1ex}{ \draw (.5,-.5) -- (.5,.5) node[midway,tikzdot]{} node[midway,xshift=4ex,yshift=.75ex]{\small $N+\ell$}; \draw[vstdhl] (0,-.5) node[below]{\small $q^{\ell} \lambda$} -- (0,.5); } \] and it is compatible with the inclusion in \cref{eq:inclusionTlshift}. We conjecture the following: \begin{conj} There is a quasi-isomorphism \[ (G_{\rho}^{\und N}, d_{N}) \xrightarrow{\simeq} (Y_{\rho}^{\und N}, 0). \] \end{conj} \begin{lem}\label{lem:Tkndecomp} There is a decomposition as graded vector spaces \[ \tikzdiag[xscale=1]{ \draw [vstdhl] (-.5,0) node[below]{\small $q^{-n}\lambda$} -- (-.5,2.75); % \draw (0,0) -- (0,2.25) -- (0,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,2.375) {\tiny $\dots$}; \node at(.5,.125) {\tiny $\dots$}; \draw (1,0) -- (1,2.25) -- (1,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \draw (1.5,0) -- (1.5,2.25) -- (1.5,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; % % \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (1.75,2.25) node[midway] { $T_{k+1}^{q^{-n}\lambda}$}; % } \ \cong \ \tikzdiag[xscale=1]{ \draw [vstdhl] (-.5,0) node[below]{\small $q^{-n}\lambda$} -- (-.5,2.75); % \draw (0,0) -- (0,2.25) -- (0,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,2.375) {\tiny $\dots$}; \node at(.5,1.25) {\tiny 
$\dots$}; \node at(.5,.125) {\tiny $\dots$}; \draw (1,0) -- (1,2.25) -- (1,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \draw (1.5,0) -- (1.5,2.25) -- (1.5,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; % % \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (1.25,2.25) node[midway] { $T_{k}^{q^{-n}\lambda}$}; \filldraw [fill=white, draw=black] (-.25,.25) rectangle (1.75,1) node[midway] { $\nh_{k+1}$}; % } \ \oplus \ \tikzdiag[xscale=1]{ % \draw (.5,0) -- (.5,1) .. controls (.5,1.25) and (0,1.25) .. (0,1.5) -- (0,2.25) -- (0,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,2.375) {\tiny $\dots$}; \node at(.75,.125) {\tiny $\dots$}; \draw (1.5,0) -- (1.5,1) .. controls (1.5,1.25) and (1,1.25) .. (1,1.5) -- (1,2.25) -- (1,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \draw (0,0) -- (0,1).. controls(0,1.125) .. (-.5,1.25) .. controls (1.5,1.375) ..(1.5,1.5)-- (1.5,2.25) -- (1.5,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; % \draw [vstdhl] (-.5,0) node[below]{\small $q^{-n}\lambda$} -- (-.5,2.75) node[pos=.45,nail]{}; % \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (1.25,2.25) node[midway] { $T_{k}^{q^{-n}\lambda}$}; \filldraw [fill=white, draw=black] (-.25,.25) rectangle (1.75,1) node[midway] { $\nh_{k+1}$}; % } \] \end{lem} \begin{proof} The claim follows from \cref{thm:Tbasis}. \end{proof} \begin{prop}\label{prop:incdgcyclmod1} Suppose $\rho$ and $\rho'$ are such that $b_i = b_i'$ for all $0 \leq i \leq r$ except $i = j$ and $i = j+1$, which satisfy $b_j = b_j' - 1$ and $b_{j+1} = b_{j+1}' + 1$. Then there is an inclusion of right dg-modules \[ G_{\rho}^{\und N} \hookrightarrow G_{\rho'}^{\und N}.
\] \end{prop} \begin{proof} We can work locally, and thus we want to prove that \[ G_2 := \tikzdiag[xscale=1.25]{ \draw [vstdhl] (-.5,1.25) node[below]{\small $q^{-n} \lambda $} -- (-.5,4); % \draw (0,1.25) -- (0,2.25) -- (0,2.75) -- (0,3.5) -- (0,4) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,3.625) {\tiny $\dots$}; \node at(.5,1.375) {\tiny $\dots$}; \draw (1,1.25) -- (1,2.25) -- (1,2.75) -- (1,3.5) -- (1,4) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \draw (1.5,1.25) -- (1.5,2.25) -- (1.5,2.75) -- (1.5,3.5) -- (1.5,4) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; % \draw (2,1.25) -- (2,2.25) -- (2,2.75) -- (2,4); \node at(2.5,3.625) {\tiny $\dots$}; \node at(2.5,1.375) {\tiny $\dots$}; \draw (3,1.25) -- (3,2.25) -- (3,2.75) -- (3,4); % % \filldraw [fill=white, draw=black] (-.75,2.75) rectangle (1.75,3.5) node[midway] { $T_{k+1}^{q^{-n}\lambda}$}; \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (3.25,2.25) node[midway] { $T_{b}^{\lambda}$}; % \tikzbraceop{0}{1}{4}{\small $k$}; } \ \subset \ \tikzdiag[xscale=1.25]{ \draw [vstdhl] (-.5,1.25) node[below]{\small $q^{-n}\lambda$} -- (-.5,4); % \draw (0,1.25) -- (0,2.25) -- (0,2.75) -- (0,3.5) -- (0,4) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,3.625) {\tiny $\dots$}; \node at(.5,1.375) {\tiny $\dots$}; \draw (1,1.25) -- (1,2.25) -- (1,2.75) -- (1,3.5) -- (1,4) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \draw (1.5,1.25) -- (1.5,2.25) -- (1.5,2.75) -- (1.5,3.5) -- (1.5,4); % \draw (2,1.25) -- (2,2.25) -- (2,2.75) -- (2,4); \node at(2.5,3.625) {\tiny $\dots$}; \node at(2.5,1.375) {\tiny $\dots$}; \draw (3,1.25) -- (3,2.25) -- (3,2.75) -- (3,4); % % \filldraw [fill=white, draw=black] (-.75,2.75) rectangle (1.25,3.5) node[midway] { $T_{k}^{q^{-n}\lambda}$}; \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (3.25,2.25) 
node[midway] { $T_{b}^{\lambda}$}; % \tikzbraceop{0}{1}{4}{\small $k$}; } =: G_1 \] We apply \cref{lem:Tkndecomp} on $T^{q^{-n}\lambda}_{k+1}$ inside $G_2$. The left summand is clearly in $G_1$. For the right summand, it is less clear since the nails in $T^{\lambda}_{b}$ all act by adding a nail and $n$ dots on the blue strand labeled $q^{-n}\lambda$. Thus, we want to show that \[ \tikzdiagh{0}{ \draw (0,-.5) .. controls (0,-.375) .. (-.5,-.25) .. controls (0,-.125) .. (0,0) .. controls (0,.5) and (1.5,.5) .. (1.5,1) -- (1.5,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \draw (.5,-.5)--(.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) -- (0,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \node at (1,-.25) {\small $\cdots$}; \node at (.65,.75) {\small $\cdots$}; \draw (1.5,-.5)--(1.5,0) .. controls (1.5,.5) and (1,.5) .. (1,1) -- (1,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; % \tikzbrace{.5}{1.5}{-.5}{\small $k$}; % \draw (2,-.5) -- (2,1.25); \node at (2.5,.375) {\small $\cdots$}; \draw (3,-.5) -- (3,1.25); % \draw[vstdhl] (-.5,-.5) node[below]{\small $q^{-n}\lambda$} -- (-.5,0) node[midway,nail]{} -- (-.5,1.25); } \ \in G_1. \] By \cref{eq:nhdotslide}, we have \begin{align*} \tikzdiagh{0}{ \draw (0,-.5) .. controls (0,-.375) .. (-.5,-.25) .. controls (0,-.125) .. (0,0) .. controls (0,.5) and (1.5,.5) .. (1.5,1) -- (1.5,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \draw (.5,-.5)--(.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) -- (0,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \node at (1,-.25) {\small $\cdots$}; \node at (.65,.75) {\small $\cdots$}; \draw (1.5,-.5)--(1.5,0) .. controls (1.5,.5) and (1,.5) ..
(1,1) -- (1,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; % \tikzbrace{.5}{1.5}{-.5}{\small $k$}; % \draw[vstdhl] (-.5,-.5) node[below]{\small $q^{-n}\lambda$} -- (-.5,0) node[midway,nail]{} -- (-.5,1.25); } & \ = \ \tikzdiagh{0}{ \draw (0,-.5) .. controls (0,-.375) .. (-.5,-.25) .. controls (0,-.125) .. (0,0) .. controls (0,.5) and (1.5,.5) .. (1.5,1) node[pos=.1,tikzdot]{} node[pos=.1, xshift=-1ex,yshift=.5ex]{\tiny $n$} -- (1.5,1.25) ; \draw (.5,-.5)--(.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) -- (0,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \node at (1,-.25) {\small $\cdots$}; \node at (.65,.75) {\small $\cdots$}; \draw (1.5,-.5)--(1.5,0) .. controls (1.5,.5) and (1,.5) .. (1,1) -- (1,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; % \tikzbrace{.5}{1.5}{-.5}{\small $k$}; % \draw[vstdhl] (-.5,-.5) node[below]{\small $q^{-n}\lambda$} -- (-.5,0) node[midway,nail]{} -- (-.5,1.25); } \ - \sum_{\ell = 0}^k \sssum{r+s \\= n-1} \ \tikzdiagh{0}{ \draw (0,-.5) .. controls (0,-.375) .. (-.5,-.25) .. controls (0,-.125) .. (0,0) .. controls (0,.5) and (1.5,.5) .. (1.5,1) node[pos=.8,tikzdot]{} node[pos=.8, xshift=1ex,yshift=-.5ex]{\tiny $r$} -- (1.5,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \draw (.5,-.5)--(.5,0) .. controls (.5,.5) and (0,.5) .. (0,1) -- (0,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \node at (1,-.25) {\small $\cdots$}; \node at (.65,.75) {\small $\cdots$}; \draw (1.5,-.5)--(1.5,0) .. controls (1.5,.5) and (1,.5) .. (1,1) -- (1,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; % \tikzbrace{.5}{1.5}{-.5}{\small $\ell$}; % \draw (2,-.5) -- (2,0) .. controls (2,.5) and (3.5,.5) .. (3.5,1) node[pos=.1,tikzdot]{} node[pos=.1, xshift=-1ex,yshift=.5ex]{\tiny $s$} -- (3.5,1.25); \draw (2.5,-.5)--(2.5,0) .. controls (2.5,.5) and (2,.5) .. 
(1,1) -- (1,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; \node at (3,-.25) {\small $\cdots$}; \node at (2.65,.75) {\small $\cdots$}; \draw (3.5,-.5)--(3.5,0) .. controls (3.5,.5) and (3,.5) .. (3,1) -- (3,1.25) node[pos=.2,tikzdot]{} node[pos=.2, xshift=1ex,yshift=.5ex]{\tiny $n$}; % \draw[vstdhl] (-.5,-.5) node[below]{\small $q^{-n}\lambda$} -- (-.5,0) node[midway,nail]{} -- (-.5,1.25); } \end{align*} The term on the left is clearly in $G_1$ since there are $n$ dots next to the nail, so that it can be obtained from a nail in $T^{\lambda}_{b}$. The terms on the right are also in $G_1$ since we can slide the nail and crossings on the left to the top, into $T_k^{q^{-n}\lambda}$. \end{proof} Consider $(x_1^n \cdots x_{k}^n) T^{q^{-n}\lambda}_k \otimes_{T^{\lambda}_k} T^{\lambda}_b$. We obtain an inclusion \[ (x_1^n \cdots x_{k}^n) T^{q^{-n}\lambda}_k \hookrightarrow (x_1^n \cdots x_k^n x_{k+1}^n) T^{q^{-n}\lambda}_{k+1}, \] of $q$-degree $2n$ by adding a vertical strand on the right on which we put $n$ dots (again, the fact that it is an inclusion follows immediately from \cref{thm:Tbasis}). In turn, this gives rise to a map of right (dg-)modules $(x_1^n \cdots x_{k}^n) T^{q^{-n}\lambda}_k \otimes_{T^{\lambda}_k} T^{\lambda}_b \rightarrow (x_1^n \cdots x_{k}^n x_{k+1}^n) T^{q^{-n}\lambda}_{k+1} \otimes_{T^{\lambda}_{k+1}} T^{\lambda}_b$.
In terms of diagrams, we can picture the inclusion above as: \[ \tikzdiag[xscale=1]{ \draw [vstdhl] (-.5,0) node[below]{\small $q^{-n}\lambda$} -- (-.5,2.75); % \draw (0,0) -- (0,2.25) -- (0,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,2.375) {\tiny $\dots$}; \node at(.5,.125) {\tiny $\dots$}; \draw (1,0) -- (1,2.25) -- (1,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; % \draw (1.5,0) -- (1.5,2.25) -- (1.5,2.75); \draw (2,0) -- (2,2.25) -- (2,2.75); \node at(2.5,2.375) {\tiny $\dots$}; \node at(2.5,.125) {\tiny $\dots$}; \draw (3,0) -- (3,2.25) -- (3,2.75); % \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (1.25,2.25) node[midway] { $T_{k}^{q^{-n}\lambda}$}; \filldraw [fill=white, draw=black] (-.75,.25) rectangle (3.25,1) node[midway] { $T_{b}^{\lambda}$}; % \tikzbraceop{0}{1}{2.75}{\small $k$}; } \ \mapsto \tikzdiag[xscale=1]{ \draw [vstdhl] (-.5,0) node[below]{\small $q^{-n}\lambda$} -- (-.5,2.75); % \draw (0,0) -- (0,2.25) -- (0,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,2.375) {\tiny $\dots$}; \node at(.5,.125) {\tiny $\dots$}; \draw (1,0) -- (1,2.25) -- (1,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; % \draw (1.5,0) -- (1.5,2.25) -- (1.5,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \draw (2,0) -- (2,2.25) -- (2,2.75); \node at(2.5,2.375) {\tiny $\dots$}; \node at(2.5,.125) {\tiny $\dots$}; \draw (3,0) -- (3,2.25) -- (3,2.75); % \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (1.25,2.25) node[midway] { $T_{k}^{q^{-n}\lambda}$}; \filldraw [fill=white, draw=black] (-.75,.25) rectangle (3.25,1) node[midway] { $T_{b}^{\lambda}$}; % \tikzbraceop{0}{1}{2.75}{\small $k$}; } \ \subset \ \tikzdiag[xscale=1]{ \draw [vstdhl] (-.5,0) node[below]{\small $q^{-n}\lambda$} -- (-.5,2.75); % \draw (0,0) -- (0,2.25) -- (0,2.75) node[midway, tikzdot]{} 
node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \node at(.5,2.375) {\tiny $\dots$}; \node at(.5,.125) {\tiny $\dots$}; \draw (1,0) -- (1,2.25) -- (1,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; % \draw (1.5,0) -- (1.5,2.25) -- (1.5,2.75) node[midway, tikzdot]{} node[midway, xshift=1.5ex, yshift=.5ex]{\tiny $n$}; \draw (2,0) -- (2,2.25) -- (2,2.75); \node at(2.5,2.375) {\tiny $\dots$}; \node at(2.5,.125) {\tiny $\dots$}; \draw (3,0) -- (3,2.25) -- (3,2.75); % \filldraw [fill=white, draw=black] (-.75,1.5) rectangle (1.75,2.25) node[midway] { $T_{k+1}^{q^{-n}\lambda}$}; \filldraw [fill=white, draw=black] (-.75,.25) rectangle (3.25,1) node[midway] { $T_{b}^{\lambda}$}; % \tikzbraceop{0}{1}{2.75}{\small $k$}; } \] This generalizes to the following proposition: \begin{prop}\label{prop:incdgcyclmod2} Under the same hypotheses as in \cref{prop:incdgcyclmod1}, we obtain a map of right dg-modules \[ G_{\rho'}^{\und N} \rightarrow G_{\rho}^{\und N}, \] of $q$-degree $2N_{j+1}$, diagrammatically given by gluing on top the dots $x_{b_r' + \cdots + b_{j+1}'+1}^{N_{j+1}}$. \end{prop} \subsubsection{Dg-quiver Schur algebra} \begin{defn} We define the \emph{dg-quiver Schur algebras} as \[ ({}_{dg}Q^{\und N}_b, d_N) := \END^{dg}_{(T^{\lambda}_b, d_N)} \left( \bigoplus_{\rho \in \mathcal{P}_b^r} q^{-\deg_q(x_\rho^{\und N})/2}(G_{\rho}^{\und N}, d_N) \right), \] and \[ ({}_{dg}Q^{\und N}_b,0) := \END^{dg}_{(T^{\lambda}_b, 0)} \left( \bigoplus_{\rho \in \mathcal{P}_b^r} q^{-\deg_q(x_\rho^{\und N})/2}(G_{\rho}^{\und N}, 0) \right), \] where $\END^{dg}$ is the $\mathbb{Z}^2$-graded ($\mathbb{Z}$-graded in the first case) dg-endomorphism ring (see \cref{sec:classicalhomandtensor}). We also define a reduced version as \[ ({}_{dg}^{red}Q^{\und N}_b,0) := \END^{dg}_{(T^{\lambda}_b, 0)} \left( \bigoplus_{\rho \in \mathcal{P}_b^{r, \und N}} q^{-\deg_q(x_\rho^{\und N})/2}(G_{\rho}^{\und N}, 0) \right).
\] \end{defn} \begin{conj}\label{conj:dgSchurCycSchur} There is a quasi-isomorphism \[ ({}_{dg}Q^{\und N}_b, d_N) \xrightarrow{\simeq} (Q^{\und N}_b,0). \] \end{conj} Our goal is to construct a graded map of algebras \[ T_b^{\lambda, \und N} \rightarrow {}_{dg}Q^{\und N}_b. \] For $\rho=(b_0,b_1,\dots,b_r)\in \mathcal{P}_b^r$, we send \[ 1_\rho \mapsto \text{Id} \in \END_{T_b^\lambda}(G_{\rho}^{\und N}) \subset {}_{dg}Q^{\und N}_b. \] Dots on the $i$th black strand (resp. black/black crossings on the $i$th and $(i+1)$th black strands) on $1_\rho$ are sent to multiplication on the left (i.e. gluing on top) by a dot on the $i$th black strand (resp. crossing) on $G_{\rho}^{\und N}$. These are indeed maps of right $T_b^\lambda$-modules since dots and crossings commute with $x_i^{n} x_{i+1}^n$ for all $n \geq 0$. Similarly, a nail on the blue strand labeled $\lambda$ in $T_b^{\lambda, \und N}$ is sent to multiplication on the left by a nail on the blue strand labeled $q^{-N_r-\cdots-N_1}\lambda$ in $G_{\rho}^{\und N}$. For a black/red crossing $\tau_i$, if the red strand goes from bottom left to top right, then we have $1_{\rho'} \tau_i 1_{\rho}$ where $\rho$ and $\rho'$ are as in \cref{prop:incdgcyclmod1}. Then, we associate to it the map $G_{\rho}^{\und N} \rightarrow G_{\rho'}^{\und N}$ of \cref{prop:incdgcyclmod1}. If the red strand goes from bottom right to top left, then we have $1_{\rho} \tau_i 1_{\rho'}$, and we associate to it the map $G_{\rho'}^{\und N} \rightarrow G_{\rho}^{\und N}$ of \cref{prop:incdgcyclmod2}. \begin{prop}\label{prop:mapdgquiverschur} The map defined above gives rise to maps of $\mathbb{Z}$-graded dg-algebras \[ (T_b^{\lambda, \und N}, d_{N_0}) \rightarrow ( {}_{dg}Q^{\und N}_b, d_N), \] and of $\mathbb{Z}^2$-graded dg-algebras \[ (T_b^{\lambda, \und N}, 0) \rightarrow ( {}_{dg}Q^{\und N}_b, 0 ).
\] \end{prop} \begin{proof} We show that the assignment given above is a map of algebras; the commutation with the differentials is clear, since the image under $d_{N_0}$ of a nail on a blue strand labeled $\lambda$ consists of $N_0$ dots on the first black strand, and the image under $d_N$ of a nail on a blue strand labeled $q^{-N_r-\cdots-N_1} \lambda$ consists of $N-N_r-\cdots-N_1 = N_0$ dots. Thus, we need to prove that the map respects all the defining relations in \cref{def:dgwebsteralg}. Relations in \cref{eq:nhR2andR3} and \cref{eq:nhdotslide} are immediate by construction. The relations in \cref{eq:dotredstrand} follow from commutations of dots. Since the map in \cref{prop:incdgcyclmod2} is multiplication by $N_{j+1}$ dots and the map in \cref{prop:incdgcyclmod1} is an inclusion, we obtain the relations in \cref{eq:redR2}. For the left side of \cref{eq:crossingslidered}, both black/red crossings are given by an inclusion, and thus commute with the multiplication on the left by the black/black crossing. For the right side, the black/red crossings give a multiplication by $x_i^{N_{j+1}} x_{i+1}^{N_{j+1}}$, which commutes with the black/black crossing. For \cref{eq:redR3}, one of the black/red crossings is an inclusion and the other is multiplication by $x_i^{N_{j+1}}$ on both sides of the equality, so that the relation follows from \cref{eq:nhdotslide}. Finally, the relation in \cref{eq:relNail} is immediate by construction. \end{proof} \begin{conj}\label{conj:WebQSchur} The maps in \cref{prop:mapdgquiverschur} are isomorphisms. \end{conj} We also conjecture that the reduced dg-quiver Schur algebra $({}_{dg}^{red}Q^{\und N}_b,0)$ is dg-Morita equivalent to the non-reduced one $({}_{dg}Q^{\und N}_b,0)$.
\section{Dg-enhanced KLRW algebras}\label{sec:dgWebster} In~ \cite{naissevaz2} and~\cite{naissevaz3} it was explained how to construct a `dg-enhancement' of cyclotomic nilHecke algebras to pass from a categorification of the integrable module $V(N)$ to a categorification of the Verma module $M(\lambda)$. This suggests that one might try to go from a categorification of $V(N) \otimes V(\underline{N})$ to a categorification of $M(\lambda) \otimes V(\underline{N})$ by constructing a dg-enhancement of KLRW algebras~\cite[\S4]{webster}, which we do next. \subsection{Preliminaries and conventions} \label{sec:conventions} Before defining the various algebras, we fix some conventions, and we recall some common facts about dg-structures (a reference for this is~\cite{keller}). First, let $\Bbbk$ be a commutative unital ring for the remainder of the paper. \subsubsection{Dg-algebras} A \emph{$\mathbb{Z}^n$-graded dg-($\Bbbk$-)algebra} $(A,d_A)$ is a unital $\mathbb{Z} \times \mathbb{Z}^n$-graded ($\Bbbk$-)algebra $A = \bigoplus_{(h,{\boldsymbol{g}}) \in \mathbb{Z} \times \mathbb{Z}^n} A_{\boldsymbol{g}}^h$, where we refer to the $\mathbb{Z}$-grading as homological (or $h$-degree) and to the $\mathbb{Z}^n$-grading as ${\boldsymbol{g}}$-degree, with a differential $d_A : A \rightarrow A$ such that: \begin{itemize} \item $d_A(A_{\boldsymbol{g}}^h) \subset A_{{\boldsymbol{g}}}^{h-1}$ for all ${\boldsymbol{g}} \in \mathbb{Z}^n, h \in \mathbb{Z}$; \item $d_A(xy) = d_A(x)y + (-1)^{\deg_h(x)} x d_A(y)$; \item $d_A^2 = 0$. \end{itemize} The \emph{homology} of $(A,d_A)$ is $H(A,d_A) := \ker(d_A)/\Image(d_A)$, which is a $\mathbb{Z} \times \mathbb{Z}^n$-graded algebra that decomposes as $\bigoplus_{h \in \mathbb{Z},{\boldsymbol{g}} \in \mathbb{Z}^n} H^h_{\boldsymbol{g}}(A, d_A) := H^h(A_{\boldsymbol{g}}, d_A)$. A morphism of dg-algebras $f: (A,d_A) \rightarrow (A', d_{A'})$ is a morphism of algebras that preserves the $\mathbb{Z} \times \mathbb{Z}^n$-grading and commutes with the differentials.
Such a morphism induces a morphism $f^* : H(A,d_A) \rightarrow H(A',d_{A'})$. We say that $f$ is a \emph{quasi-isomorphism} whenever $f^*$ is an isomorphism. Also, we say that $(A,d_A)$ is \emph{formal} if there is a quasi-isomorphism $(A,d_A) \xrightarrow{\simeq} (H(A,d_A), 0)$. \begin{rem} Note that, in contrast to \cite{keller}, the differential decreases the homological degree instead of increasing it. \end{rem} Similarly, a $\mathbb{Z}^n$-graded left dg-module is a $\mathbb{Z} \times \mathbb{Z}^{n}$-graded module $M$ with a differential $d_M$ such that: \begin{itemize} \item $d_M(M_{\boldsymbol{g}}^h) \subset M_{{\boldsymbol{g}}}^{h-1}$ for all ${\boldsymbol{g}} \in \mathbb{Z}^n, h \in \mathbb{Z}$; \item $d_M(x \cdot m) = d_A(x) \cdot m + (-1)^{\deg_h(x)} x \cdot d_M(m)$; \item $d_M^2 = 0$. \end{itemize} Homology, maps between dg-modules and quasi-isomorphisms are defined as above, and there are similar notions of $\mathbb{Z}^n$-graded right dg-modules and dg-bimodules. In our convention, a $\mathbb{Z}^m$-graded category is a category with a collection of $m$ autoequivalences, strictly commuting with each other. The category $(A,d_A)\amod$ of (left) $\mathbb{Z}^n$-graded dg-modules over a dg-algebra $(A,d_A)$ is a $\mathbb{Z} \times \mathbb{Z}^n$-graded abelian category, with kernels and cokernels defined as usual. The action of $\mathbb{Z}$ is given by the \emph{homological shift functor} $[1] : (A,d_A)\amod \rightarrow (A,d_A)\amod$ acting by: \begin{itemize} \item increasing the homological degree of all elements in a module $M$ by $1$, i.e. $\deg_h(m[1]) = \deg_h(m) + 1$; \item switching the sign of the differential $d_{M[1]} := -d_M$; \item introducing a sign in the left-action $r \cdot (m[1]) := (-1)^{\deg_h(r)} (r \cdot m)[1]$.
\end{itemize} The action of ${\boldsymbol{g}} \in \mathbb{Z}^n$ is given by increasing the $\mathbb{Z}^n$-degree of elements by ${\boldsymbol{g}}$, in the sense that \[ ({\boldsymbol{g}} M)_{{\boldsymbol{g}}_0 + {\boldsymbol{g}}} := (M)_{{\boldsymbol{g}}_0}, \] or in other terms, an element $x \in M$ of degree ${\boldsymbol{g}}_0$ becomes of degree ${\boldsymbol{g}}_0+{\boldsymbol{g}}$ in ${\boldsymbol{g}} M$. There are similar definitions for categories of right dg-modules and dg-bimodules, with the subtlety that the homological shift functor does not twist the right-action: \[ (m[1]) \cdot r := (m \cdot r)[1]. \] As usual, a short exact sequence of dg-(bi)modules induces a long exact sequence in homology. \smallskip Let $f : (M,d_M) \rightarrow (N,d_N)$ be a morphism of dg-(bi)modules. Then, one constructs the \emph{mapping cone} of $f$ as \begin{align} \label{eq:cone} \cone(f) &:= (M[1] \oplus N, d_C), & d_C &:= \begin{pmatrix} -d_M & 0 \\ f & d_N \end{pmatrix}. \end{align} The mapping cone is a dg-(bi)module, and it fits in a short exact sequence: \[ 0 \rightarrow N \xrightarrow{\imath_N} \cone(f) \xrightarrow{\pi_{M[1]}} M[1] \rightarrow 0, \] where $\imath_N$ and $\pi_{M[1]}$ are the inclusion and projection $N \xrightarrow{\imath_N} M[1] \oplus N \xrightarrow{\pi_{M[1]}} M[1]$. \subsubsection{Hom and tensor functors}\label{sec:classicalhomandtensor} Given a right dg-module $(N,d_N)$ and a left dg-module $(M,d_M)$, one constructs the tensor product \begin{equation}\label{eq:dgtens} \begin{split} (N,d_N) \otimes_{(A,d_A)} (M,d_M) &:= \bigl( (N \otimes_A M), d_{N \otimes M} \bigr), \\ d_{N \otimes M}(n \otimes m) &:= d_N(n) \otimes m + (-1)^{\deg_h(n)} n \otimes d_M(m). \end{split} \end{equation} If $(N,d_N)$ (resp. $(M,d_M)$) has the structure of a dg-bimodule, then the tensor product inherits a left (resp. right) dg-module structure.
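As a sanity check, the Koszul sign in \cref{eq:dgtens} is precisely what makes the differential on the tensor product square to zero: for homogeneous elements $n$ and $m$ we compute
\begin{align*}
d_{N \otimes M}^2(n \otimes m) &= d_N^2(n) \otimes m + (-1)^{\deg_h(n)-1} d_N(n) \otimes d_M(m) \\
&\quad + (-1)^{\deg_h(n)} d_N(n) \otimes d_M(m) + n \otimes d_M^2(m) = 0,
\end{align*}
where the two middle terms cancel because $\deg_h(d_N(n)) = \deg_h(n) - 1$.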
Given a pair of left dg-modules $(M,d_M)$ and $(N,d_N)$, one constructs the dg-hom space \begin{equation}\label{eq:dghom} \begin{split} \HOM_{(A,d_A)}\bigl( (M,d_M), (N,d_N) \bigr) &:= \bigl( \HOM_A(M,N), d_{\HOM(M,N)} \bigr), \\ d_{\HOM(M,N)}(f) &:= d_N \circ f - (-1)^{\deg_h(f)} f \circ d_M, \end{split} \end{equation} where $\HOM_A$ is the $\mathbb{Z}\times \mathbb{Z}^n$-graded hom space of maps between $\mathbb{Z}\times \mathbb{Z}^n$-graded $A$-modules. Again, if $(M,d_M)$ (resp. $(N,d_N)$) has the structure of a dg-bimodule, then it inherits a left (resp. right) dg-module structure. In particular, given a dg-bimodule $(B,d_B)$ over a pair of dg-algebras $(S,d_{S})$-$(R,d_R)$, we obtain tensor and hom functors \begin{align*} B \otimes_{(R,d_R)} (-) &: (R,d_R)\amod \rightarrow(S,d_S)\amod, \\ \HOM_{(S, d_{S})}(B, -) &: (S,d_{S})\amod \rightarrow(R,d_R)\amod, \end{align*} which form an adjoint pair $(B \otimes_{(R,d_R)} -) \vdash \HOM_{(S,d_{S})}(B, -)$. Explicitly, the natural bijection \begin{equation}\label{eq:homtensajd} \Phi_{M,N}^B : \Hom_{(S,d_S)}( B \otimes_{(R,d_R)} M, N ) \xrightarrow{\simeq} \Hom_{(R,d_R)}(M, \HOM_{(S,d_S)}(B,N)), \end{equation} is given by $(f : B \otimes_{(R,d_R)} M \rightarrow N) \mapsto \bigl(m \mapsto (b \mapsto f(b \otimes m))\bigr)$. \subsubsection{Diagrammatic algebras} We always read diagrams from bottom to top. We say that a diagram is braid-like when it is given by strands connecting a collection of points on the bottom to a collection of points on the top, without any strand turning back. Suppose these diagrams can have singularities (like dots, 4-valent crossings, or other similar decorations). A \emph{braid-like planar isotopy} is an isotopy fixing the endpoints and that does not create any critical point; in particular it means we can exchange distant singularities $f$ and $g$: \[ \tikzdiag{ \draw (0,-1) -- (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw (1,-1) -- (1,0) ..controls (1,.5) and (0,.5) ..
(0,1); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $g$}; } \quad \cdots \quad \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) -- (1,2); \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1) -- (0,2); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $f$}; } \ = \ \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) -- (1,2); \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1) -- (0,2); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $g$}; } \quad \cdots \quad \tikzdiag{ \draw (0,-1) -- (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw (1,-1) -- (1,0) ..controls (1,.5) and (0,.5) .. (0,1); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $f$}; } \] Suppose that the diagrams carry a homological degree (associated to singularities), and consider linear combinations of such diagrams. Then, a \emph{graded braid-like planar isotopy} is an isotopy that fixes the endpoints and does not create any critical point, and such that a sign appears whenever two distant singularities $f$ and $g$ are exchanged: \[ \tikzdiag{ \draw (0,-1) -- (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw (1,-1) -- (1,0) ..controls (1,.5) and (0,.5) .. (0,1); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $g$}; } \quad \cdots \quad \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) -- (1,2); \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1) -- (0,2); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $f$}; } \ = (-1)^{|f||g|} \ \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) -- (1,2); \draw (1,0) ..controls (1,.5) and (0,.5) ..
(0,1) -- (0,2); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $g$}; } \quad \cdots \quad \tikzdiag{ \draw (0,-1) -- (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw (1,-1) -- (1,0) ..controls (1,.5) and (0,.5) .. (0,1); \filldraw [fill=white, draw=black,rounded corners] (.5-.25,.5-.25) rectangle (.5+.25,.5+.25) node[midway] { $f$}; } \] where $|f|$ (resp. $|g|$) is the homological degree of $f$ (resp. $g$). \subsection{Dg-enhanced KLRW algebras} Let $\underline{N} = (N_1,\ldots,N_r)$. Recall that the KLRW algebra $T_b^{\underline{N}}$ on $b$ strands~\cite[\S4]{webster} (also called tensor product algebra) is the diagrammatic $\Bbbk$-algebra generated by braid-like diagrams on $b$ black strands and $r$ red strands. Red strands are labeled from left to right by $N_1, \dots, N_r$ and cannot intersect each other, while black strands can intersect red strands and each other transversely, and can carry dots. Diagrams are taken up to braid-like planar isotopy, and satisfy the local relations~\eqref{eq:nhR2andR3}-\eqref{eq:redR3} given below, together with the \emph{violating condition} that any diagram with a black strand in the leftmost region is $0$: \[ \tikzdiagh{0}{ \draw (0,0) -- (0,1) ; \draw[stdhl] (1,0) node[below]{\small $N_1$}-- (1,1) ; } \quad \cdots \quad \ = 0. \] We write $\widetilde T_b^{\underline{N}}$ for the same construction but without the violating condition. The following are the defining (local) relations of $T_b^{\underline{N}}$: \begin{itemize} \item The \emph{nilHecke relations}: \begin{align} \label{eq:nhR2andR3} \tikzdiag{ \draw (0,0) ..controls (0,.25) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.75) .. (0,1) ; \draw (1,0) ..controls (1,.25) and (0,.25) .. (0,.5)..controls (0,.75) and (1,.75) .. (1,1) ; } \ &=\ 0 & \tikzdiag{ \draw (0,0) .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (1,0) .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw (0.5,0) ..
controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); } \ &= \ \tikzdiag[xscale=-1]{ \draw (0,0) .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (1,0) .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw (0.5,0) .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); } \\ \label{eq:nhdotslide} \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) node [near start,tikzdot]{}; \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1); } \ &= \ \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) node [near end,tikzdot]{}; \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1); } \ + \ \tikzdiag{ \draw (0,0) -- (0,1) ; \draw (1,0)-- (1,1) ; } & \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1) node [near end,tikzdot]{}; } \ &= \ \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1) node [near start,tikzdot]{}; } \ + \ \tikzdiag{ \draw (0,0) -- (0,1) ; \draw (1,0)-- (1,1) ; } \end{align} \item The \emph{black/red relations}: \begin{align} \tikzdiagh{0}{ \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1) node [near end,tikzdot]{}; \draw[stdhl] (0,0) node[below]{\small $N_i$} ..controls (0,.5) and (1,.5) .. (1,1); } \ &= \ \tikzdiagh{0}{ \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1) node [near start,tikzdot]{}; \draw[stdhl] (0,0) node[below]{\small $N_i$} ..controls (0,.5) and (1,.5) .. (1,1); } & \tikzdiagh{0}{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) node [near start,tikzdot]{}; \draw[stdhl] (1,0) node[below]{\small $N_i$} ..controls (1,.5) and (0,.5) .. (0,1); } \ &= \ \tikzdiagh{0}{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1) node [near end,tikzdot]{}; \draw[stdhl] (1,0) node[below]{\small $N_i$} ..controls (1,.5) and (0,.5) .. (0,1); } \label{eq:dotredstrand} \\ \tikzdiagh{0}{ \draw (1,0) ..controls (1,.25) and (0,.25) .. 
(0,.5)..controls (0,.75) and (1,.75) .. (1,1) ; \draw[stdhl] (0,0) node[below]{\small $N_i$} ..controls (0,.25) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.75) .. (0,1) ; } \ &= \ \tikzdiagh{0}{ \draw[stdhl] (0,0) node[below]{\small $N_i$} -- (0,1) ; \draw (1,0) -- (1,1) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $N_i$} ; } & \tikzdiagh{0}{ \draw (0,0) ..controls (0,.25) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.75) .. (0,1) ; \draw[stdhl] (1,0) node[below]{\small $N_i$} ..controls (1,.25) and (0,.25) .. (0,.5)..controls (0,.75) and (1,.75) .. (1,1) ; } \ &= \ \tikzdiagh{0}{ \draw (0,0) -- (0,1) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $N_i$} ; \draw[stdhl] (1,0) node[below]{\small $N_i$} -- (1,1) ; } \label{eq:redR2} \\ \tikzdiagh{0}{ \draw (0.5,0) .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw (1,0) .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0,0) node[below]{\small $N_i$} .. controls (0,0.25) and (1, 0.5) .. (1,1); } \ &= \ \tikzdiagh[xscale=-1]{0}{ \draw (0,0) .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (0.5,0) .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw [stdhl] (1,0) node[below]{\small $N_i$} .. controls (1,0.5) and (0, 0.75) .. (0,1); } & \tikzdiagh{0}{ \draw (0,0) .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (0.5,0) .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw [stdhl] (1,0) node[below]{\small $N_i$} .. controls (1,0.5) and (0, 0.75) .. (0,1); } \ &= \ \tikzdiagh[xscale=-1]{0}{ \draw (0.5,0) .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); \draw (1,0) .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0,0) node[below]{\small $N_i$} .. controls (0,0.25) and (1, 0.5) .. (1,1); } \label{eq:crossingslidered} \\ \tikzdiagh{0}{ \draw (0,0) .. 
controls (0,0.25) and (1, 0.5) .. (1,1); \draw (1,0) .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0.5,0)node[below]{\small $N_i$} .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); } \ &= \ \tikzdiagh[xscale=-1]{0}{ \draw (0,0) .. controls (0,0.25) and (1, 0.5) .. (1,1); \draw (1,0) .. controls (1,0.5) and (0, 0.75) .. (0,1); \draw [stdhl] (0.5,0) node[below]{\small $N_i$} .. controls (0.5,0.25) and (0, 0.25) .. (0,0.5) .. controls (0,0.75) and (0.5, 0.75) .. (0.5,1); } \ + \sssum{k+\ell=\\N_i-1} \ \tikzdiagh{0}{ \draw (0,0) -- (0,1) node[midway,tikzdot]{} node[midway,xshift=-1.5ex,yshift=.75ex]{\small $k$}; \draw (1,0) -- (1,1) node[midway,tikzdot]{} node[midway,xshift=1.5ex,yshift=.75ex]{\small $\ell$}; \draw [stdhl] (0.5,0)node[below]{\small $N_i$} -- (0.5,1); } \label{eq:redR3} \end{align} \end{itemize} Multiplication is given by vertical concatenation of diagrams if the labels and colors of the strands agree, and is zero otherwise. As explained in \cite[\S4]{webster}, the algebra $T_b^{\underline{N}}$ is finite-dimensional and $\mathbb{Z}$-graded (we refer to this grading as $q$-grading), with \begin{align}\label{eq:KLRWgrading} \deg_q \left( \ \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1); } \ \right) &= -2, & \deg_q \left( \ \tikzdiag{ \draw (0,0) -- (0,1) node [midway,tikzdot]{}; } \ \right) & = 2, & \deg_q \left( \ \tikzdiag{ \draw (1,0) ..controls (1,.5) and (0,.5) .. (0,1); \draw[stdhl] (0,0) node[below]{\small $N_i$} ..controls (0,.5) and (1,.5) .. (1,1); } \ \right) = \deg_q \left( \ \tikzdiag{ \draw (0,0) ..controls (0,.5) and (1,.5) .. (1,1); \draw[stdhl] (1,0) node[below]{\small $N_i$} ..controls (1,.5) and (0,.5) .. (0,1); } \ \right) &= N_i. \end{align} In the case of $ \underline{N}=(N)$ the algebra $T_b^{(N)}$ contains a single red strand labeled $N$, and is isomorphic to the cyclotomic nilHecke algebra $\nh_b^N$. 
\begin{defn}\label{def:dgwebsteralg} The \emph{dg-enhanced KLRW algebra} $T_b^{\lambda,\underline{N}}$ is the diagrammatic $\Bbbk$-algebra, carrying a homological degree, generated by braid-like diagrams on $b$ black strands, $r$ red strands and a blue strand on the left. Red strands are labeled from left to right by $N_1, \dots, N_r$ and the blue strand is labeled $\lambda$. Black strands can carry dots and can intersect black and red strands transversely. Moreover, the leftmost black strand can be \emph{nailed} on the blue strand, giving a 4-valent vertex as follows: \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \] We put the crossings and the dot in homological degree $0$, while the nail is in homological degree $1$. These diagrams are taken modulo graded braid-like planar isotopy, and are subject to the local relations~\eqref{eq:nhR2andR3}-\eqref{eq:redR3} of $T_b^{\underline{N}}$, together with the local relations: \begin{align} \label{eq:relNail} \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) node[midway, tikzdot]{} .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ &= \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5) node[midway, tikzdot]{}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } & \tikzdiagh{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.05) .. (.5,.2) -- (.5,1); \draw (1,-1) .. controls (1,0) .. (0, .4) .. controls (1,.75) .. (1,1); \draw [vstdhl] (0,-1) node[below]{\small $\lambda$} -- (0,1) node [pos=.3,nail] {} node [pos=.7,nail] {} ; } \ &= \ - \tikzdiagh[yscale=-1]{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.05) .. (.5,.2) -- (.5,1); \draw (1,-1) .. controls (1,0) .. (0, .4) .. controls (1,.75) ..
(1,1); \draw [vstdhl] (0,-1) -- (0,1) node[below]{\small $\lambda$} node [pos=.3,nail] {} node [pos=.7,nail] {} ; } & \tikzdiagh{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.4) and (.5,.4) .. (0,.4) .. controls (.5,.75) .. (.5,1); \draw [vstdhl] (0,-1) node[below]{\small $\lambda$} -- (0,1) node [pos=.3,nail] {} node [pos=.7,nail] {} ; } \ &= \ 0. \end{align} \end{defn} \begin{rem} Note that there can be no black or red strand to the left of the blue strand. \end{rem} \begin{rem} Note that since nails are stuck on the left, we cannot exchange them using a graded braid-like planar isotopy. Thus, because nails are the only generators carrying a non-zero homological degree, we could as well consider diagrams up to usual braid-like planar isotopy. However, the homological degree of the nail will play an important role in the categorification of the structure constant $[\beta + k]_q$ appearing in $M(\lambda) \otimes V(\underline{N})$, and graded braid-like planar isotopy will be essential in \cref{sec:bimod}. \end{rem} Clearly, there is an injection of algebras $\widetilde T_b^{\underline{N}} \hookrightarrow T_b^{\lambda,\underline{N}}$ given by adding a vertical blue strand to the left of a diagram in $\widetilde T_b^{\underline{N}}$. We endow $T_b^{\lambda,\underline{N}}$ with an extra $\mathbb{Z}^2$-grading, the first grading being inherited from $\widetilde T_b^{\underline{N}}$ and denoted $q$, the second written $\lambda$. We declare that \begin{align*} \deg_{q,\lambda} \left( \tikzdiagh{-1ex}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \right) &:= (0,2), \end{align*} and that the elements without a nail are all in $\lambda$-degree zero and have the same $q$-degree as in \cref{eq:KLRWgrading}, so that the inclusion $\widetilde T_b^{\underline{N}} \hookrightarrow T_b^{\lambda,\underline{N}}$ preserves the $q$-grading.
One easily checks that this grading is well-defined. In the case $\underline{N}=\varnothing$, the algebra $T_b^{\lambda,\varnothing}$ contains only a blue strand labeled $\lambda$ and no red strands, and is isomorphic to the dg-enhanced nilHecke algebra introduced in~\cite[Definition 2.3]{naissevaz2}. To match the notation of~\cite{naissevaz2}, we write $A_b := T_b^{\lambda,\varnothing}$. We will often endow $T_b^{\lambda,\underline{N}}$ with a trivial differential, turning it into a $\mathbb{Z}^2$-graded dg-algebra $(T_b^{\lambda,\underline{N}}, 0)$. \subsection{Basis theorem} For any $\rho=(b_0,b_1,\dots,b_r)\in \mathcal{P}_b^r$, define the idempotent \[ 1_{\rho} := \tikzdiagh{0}{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} --(0,1); \draw (.5,0) -- (.5,1); \node at(1,.5) {\tiny$\dots$}; \draw (1.5,0) -- (1.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (.4,-.35) -- node {$b_0$} (1.6,-.35); \draw[stdhl] (2,0) node[below]{\small $N_1$} --(2,1); % \draw (2.5,0) -- (2.5,1); \node at(3,.5) {\tiny$\dots$}; \draw (3.5,0) -- (3.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.4,-.35) -- node {$b_1$} (3.6,-.35); \draw[stdhl] (4,0) node[below]{\small $N_2$} --(4,1); % \node[red] at (5,.5) {\dots}; % \draw[stdhl] (6,0) node[below]{\small $N_{r}$} --(6,1); \draw (6.5,0) -- (6.5,1); \node at(7,.5) {\tiny$\dots$}; \draw (7.5,0) -- (7.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (6.4,-.35) -- node {$b_r$} (7.6,-.35); } \] of $T_b^{\lambda,\underline{N}}$. Note that $T_b^{\lambda,\underline{N}} \cong \bigoplus_{\kappa, \rho \in \mathcal{P}_b^r} 1_\kappa T_b^{\lambda,\underline{N}} 1_\rho$ as a $\mathbb{Z}\times\mathbb{Z}^2$-graded $\Bbbk$-module.
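Note that there are \[ |\mathcal{P}_b^r| = \#\bigl\{ (b_0,\ldots,b_r) \in \mathbb{N}^{r+1} \ \big|\ b_0 + \cdots + b_r = b \bigr\} = \binom{b+r}{r} \] such idempotents, one for each way of distributing the $b$ black strands among the $r+1$ regions delimited by the red strands.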
\subsubsection{Polynomial action} We now define an action of the dg-algebra $T_b^{\lambda,\underline{N}}$ on $\Pol_b^r := \bigoplus_{\rho \in \mathcal{P}_b^r} \Pol_b \varepsilon_\rho$, the free module over the ring $\Pol_b:=\mathbb{Z}[x_1,\dots,x_b]\otimes \raisebox{0.03cm}{\mbox{\footnotesize$\textstyle{\bigwedge}$}}^{\bullet}(\omega_1,\dots,\omega_b)$ generated by $\varepsilon_\rho$ for each $\rho\in\mathcal{P}_b^r$. We recall the action of the symmetric group $S_b$ on $\Pol_b$ used in~\cite[\S2.2]{naissevaz2}. We view $S_b$ as a Coxeter group with generators $\sigma_i = (i\ i+1)$. The generator $\sigma_i$ acts on $\Pol_b$ as follows, \begin{align*} \sigma_i(x_j) &:= x_{\sigma_i(j)},\\ \sigma_i(\omega_j) &:= \omega_j + \delta_{i,j}(x_i-x_{i+1})\omega_{i+1}. \end{align*} For $\kappa,\rho \in \mathcal{P}_b^r$, an element of $1_{\kappa}T_b^{\lambda,\und N}1_{\rho}$ acts by zero on any $\Pol_b\varepsilon_{\rho'}$ for $\rho'\neq \rho$ and sends $\Pol_b\varepsilon_{\rho}$ to $\Pol_b\varepsilon_{\kappa}$. It remains to describe the action of the local generators of $T^{\lambda,\und N}_b$ on a polynomial $f\in \Pol_b$. First, similarly as in~\cite[Lemma 4.12]{webster}, we put \begin{align*} \tikzdiagh{-1.5ex}{ \node at(0,.5) {\tiny$\dots$}; \draw (0.5,0) -- (0.5,1) node [midway,tikzdot]{}; \node at(1,.5) {\tiny$\dots$}; }\cdot f &:= x_if, & \tikzdiagh{-1.5ex}{ \node at(0,.5) {\tiny$\dots$}; \draw (0.5,0) ..controls (0.5,.5) and (1.5,.5) .. (1.5,1); \draw (1.5,0) ..controls (1.5,.5) and (0.5,.5) .. (0.5,1); \node at(2,.5) {\tiny$\dots$}; }\cdot f &:= \frac{f-\sigma_i(f)}{x_i-x_{i+1}},\\ \tikzdiagh{0}{ \node at(0,.5) {\tiny$\dots$}; \draw (0.5,0) ..controls (0.5,.5) and (1.5,.5) .. (1.5,1); \draw[stdhl] (1.5,0) node[below]{\small $N$} ..controls (1.5,.5) and (0.5,.5) .. (0.5,1); \node at(2,.5) {\tiny$\dots$}; }\cdot f &:= f, & \tikzdiagh{0}{ \node at(0,.5) {\tiny$\dots$}; \draw (1.5,0) ..controls (1.5,.5) and (0.5,.5) .. 
(0.5,1); \draw[stdhl] (0.5,0) node[below]{\small $N$} ..controls (0.5,.5) and (1.5,.5) .. (1.5,1); \node at(2,.5) {\tiny$\dots$}; }\cdot f &:= x_i^{N}f, \end{align*} where we identify $x_i \in \Pol_b\varepsilon_{\rho}$ with $x_i \in \Pol_b\varepsilon_{\kappa}$, and where we have only drawn the $i$-th, or the $i$-th and $(i+1)$-th, black strands, counting from left to right. Furthermore, we put \[ \tikzdiagh{0}{ \draw (.5,0) .. controls (.5,.25) .. (0,0.5) .. controls (.5,.75) .. (.5,1); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1) node [midway,nail]{}; \node at(1,.5) {\tiny$\dots$}; }\cdot f := \omega_1 f. \] \begin{lem} The rules above define an action of $T_b^{\lambda,\underline{N}}$ on $\Pol_b^r$. \end{lem} \begin{proof} We easily check that the relations \eqref{eq:nhR2andR3}-\eqref{eq:redR3} and \eqref{eq:relNail} are satisfied. \end{proof} Fix $\rho=(b_0,\dots,b_r) \in \mathcal{P}_b^r$. Let $\nh_n$ be the nilHecke algebra on $n$ strands (the diagrammatic algebra with only black strands, which carry dots and are subject to the relations \cref{eq:nhR2andR3} and \cref{eq:nhdotslide}).
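For instance, under the action above the black/black crossing acts on $\Pol_b$ by the divided difference operator $\partial_i(f) = \frac{f - \sigma_i(f)}{x_i - x_{i+1}}$, so that \[ \partial_i(x_i^2) = \frac{x_i^2 - x_{i+1}^2}{x_i - x_{i+1}} = x_i + x_{i+1}, \qquad \partial_i(x_i + x_{i+1}) = 0, \] in accordance with the first relation of \eqref{eq:nhR2andR3}, which says that a double crossing acts by zero.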
There is a map $\eta_\rho \colon A_{b_0} \otimes \nh_{b_1} \otimes \cdots \otimes \nh_{b_r} \rightarrow T_b^{\lambda,\underline{N}}$, diagrammatically given by \begin{equation}\label{eq:defetainclusion} \tikzdiagh[xscale=1.5]{0}{ \draw [vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,.5); \draw (0,-.5) -- (0,.5); \node at(.25,-.4) {\tiny $\dots$}; \node at(.25,.4) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,.5); \filldraw [fill=white, draw=black] (-.35,-.25) rectangle (.6,.25) node[midway] {$A_{b_0}$}; } \otimes \tikzdiag[xscale=1.5]{ \draw (0,-.5) -- (0,.5); \node at(.25,-.4) {\tiny $\dots$}; \node at(.25,.4) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,.5); \filldraw [fill=white, draw=black] (-.1,-.25) rectangle (.6,.25) node[midway] { $\nh_{b_{1}}$}; } \otimes \cdots \otimes \tikzdiag[xscale=1.5]{ \draw (0,-.5) -- (0,.5); \node at(.25,-.4) {\tiny $\dots$}; \node at(.25,.4) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,.5); \filldraw [fill=white, draw=black] (-.1,-.25) rectangle (.6,.25) node[midway] { $\nh_{b_r}$}; } \ \xmapsto{\eta_\rho} \ \tikzdiagh[xscale=1.5]{0}{ \draw [vstdhl] (-.25,-.5) node[below]{\small $\lambda$} -- (-.25,.5); % \draw (0,-.5) -- (0,.5); \node at(.25,-.4) {\tiny $\dots$}; \node at(.25,.4) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,.5); % \draw [stdhl] (.75,-.5) node[below]{\small $N_1$} -- (.75,.5); % % \draw (1,-.5) -- (1,.5); \node at(1.25,-.4) {\tiny $\dots$}; \node at(1.25,.4) {\tiny $\dots$}; \draw (1.5,-.5) -- (1.5,.5); % \draw [stdhl] (1.75,-.5) node[below]{\small $N_2$} -- (1.75,.5); % \node[red] at(2.125,0) { $\dots$}; % \draw [stdhl] (2.5,-.5) node[below]{\small $N_r$} -- (2.5,.5); % \draw (2.75,-.5) -- (2.75,.5); \node at(3,-.4) {\tiny $\dots$}; \node at(3,.4) {\tiny $\dots$}; \draw (3.25,-.5) -- (3.25,.5); % \filldraw [fill=white, draw=black] (-.35,-.25) rectangle (.6,.25) node[midway] {$A_{b_0}$}; \filldraw [fill=white, draw=black] (1-.1,-.25) rectangle (1+.6,.25) node[midway] { $\nh_{b_1}$}; \filldraw [fill=white, draw=black] 
(2.75-.1,-.25) rectangle (2.75+.6,.25) node[midway] { $\nh_{b_r}$}; } \end{equation} where we recall that $A_{b_0}$ is isomorphic to the dg-enhanced nilHecke algebra of \cite{naissevaz2}, identifying the nilHecke generators with each other and the nail with the ``leftmost floating dot''. The tensor product $A_{b_0}\otimes \nh_{b_1} \otimes \cdots \otimes \nh_{b_r}$ acts on $\Pol_b^r$ through $\eta_\rho$. This action is non-zero only on $\Pol_b\varepsilon_{\rho}$, and it is readily checked that it coincides with the tensor product of the polynomial action of $A_{b_0}$ on $\mathbb{Z}[x_1,\dots,x_{b_0}]\otimes \raisebox{0.03cm}{\mbox{\footnotesize$\textstyle{\bigwedge}$}}^{\bullet}(\omega_1,\dots,\omega_{b_0}) \subset \Pol_b$ from \cite[\S2.2]{naissevaz2}, and of the usual actions of the nilHecke algebras $\nh_{b_i}$ on $\mathbb{Z}[x_{b_0+\dots+b_{i-1}+1},\dots,x_{b_0+\dots+b_i}] \subset \Pol_b$ (see for example \cite[\S2.3]{KL1}). \begin{lem}\label{lem:injectstoANH} The map $\eta_\rho$ is injective. \end{lem} \begin{proof} It follows immediately from the faithfulness of the polynomial actions of $A_{b_0}$~\cite[Corollary 3.9]{naissevaz2} and of $\nh_{b_i}$~\cite[Corollary 2.6]{KL1}. \end{proof} \subsubsection{Left-adjusted expressions} We recall the notion of a left-adjusted expression from~\cite[Section 2.2.1]{naissevaz2}: a reduced expression $\sigma_{i_1}\cdots\sigma_{i_k}$ of an element $w\in S_{r+b}$ is said to be \emph{left-adjusted} if $i_1+\cdots+i_k$ is minimal. One can obtain a left-adjusted expression of any element of $S_{r+b}$ by recursively taking representatives in the left coset decomposition \[ S_n = \bigsqcup_{t=1}^{n}S_{n-1}\sigma_{n-1}\cdots\sigma_{t}. \] As one easily confirms, if we think of permutations in terms of string diagrams, then a left-adjusted expression is obtained by pulling every strand as far as possible to the left.
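For instance, in $S_3$ the longest element admits the two reduced expressions \[ w_0 = \sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2, \] with index sums $1+2+1 = 4$ and $2+1+2 = 5$ respectively, so $\sigma_1\sigma_2\sigma_1$ is the left-adjusted one.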
\subsubsection{A basis of $T^{\lambda,\protect\underline{N}}_b$}\label{ssec:basisTl} We now turn to the diagrammatic description of a basis of $T^{\lambda,\underline{N}}_b$ similar to~\cite[Section 3.2.3]{naissevaz3}. For an element $\rho \in \mathcal{P}^r_b$ and $1 \leq k \leq b$, we define the tightened nail $\theta_k \in 1_\rho T^{\lambda,\underline{N}}_b 1_\rho$ as the following element: \[ \theta_k:=\tikzdiagh[xscale=1.25]{0}{ % \draw (0,-1) -- (0,1); \node at(.25,.85) {\tiny $\dots$}; \node at(.25,-.85) {\tiny $\dots$}; \draw (.5,-1) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-1.35) -- node {\small $b_0$} (.6,-1.35); % % \node[red] at(1.125,-.85) { $\dots$}; \node[red] at(1.125,.85) { $\dots$}; % % \draw (1.75,-1) -- (1.75,1); \node at(2,-.85) {\tiny $\dots$}; \node at(2,.85) {\tiny $\dots$}; \draw (2.25,-1) -- (2.25,1); % \draw (2.5,-1) .. controls (2.5,-.25) and (-.5,-.25) .. (-.25,0) .. controls (-.5,.25) and (2.5,.25) .. (2.5,1); % \draw (2.75,-1) -- (2.75,1); \node at(3,-.85) {\tiny $\dots$}; \node at(3,.85) {\tiny $\dots$}; \draw (3.25,-1) -- (3.25,1); % \draw [stdhl] (3.5,-1) node[below,yshift={-1ex}]{\small $N_{i+1}$} -- (3.5,1); % \node[red] at(3.875,-.85) { $\dots$}; \node[red] at(3.875,.85) { $\dots$}; % \draw [stdhl] (4.25,-1) node[below,yshift={-1ex}]{\small $N_{r}$} -- (4.25,1); % \draw (4.5,-1) -- (4.5,1); \node at(4.75,-.85) {\tiny $\dots$}; \node at(4.75,.85) {\tiny $\dots$}; \draw (5,-1) -- (5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (4.4,-1.35) -- node {\small $b_r$} (5.1,-1.35); % \draw [stdhl] (1.5,-1) node[below,yshift={-1ex}]{\small $N_i$} -- (1.5,1); \draw [stdhl] (.75,-1) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); \draw [vstdhl] (-.25,-1) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1) node[midway,nail]{}; } \] where the nailed strand is the $k$-th black strand counting from left to right. 
This element has degree $\deg_{h,q,\lambda}(\theta_k) = (1,-4(k-1)+2(N_1+\cdots+N_i),2)$. \begin{lem} \label{lem:anticom_theta} Tightened nails anticommute with each other, up to terms with a smaller number of crossings: \begin{align*} \theta_k \theta_\ell &= -\theta_\ell \theta_k + R & \theta_k^2 &= 0 + R', \end{align*} where $R$ (resp. $R'$) is a sum of diagrams with strictly fewer crossings than $\theta_k\theta_\ell$ (resp. $\theta_k^2$), for all $1 \leq k, \ell \leq b$. \end{lem} \begin{proof} Similar to \cite[Lemma 3.12]{naissevaz3}, and omitted. \end{proof} \begin{rem} If $k,\ell \leq b_0$, then we have $\theta_k\theta_\ell = -\theta_\ell \theta_k$. Moreover, if $k \not \in\{b_0+1,b_0+b_1+1,\ldots,b_0+\ldots+b_r+1\}$, then we have $\theta_k^2=0$. \end{rem} Now fix $\kappa,\rho\in\mathcal{P}^r_b$ and consider the subset of permutations ${}_\kappa S_\rho\subset S_{r+b}$, viewed as diagrams with a blue strand, $b$ black strands and $r$ red strands, such that: \begin{itemize} \item the blue strand is always on the left of the diagram, \item the strands are ordered at the bottom by $1_\rho$ and at the top by $1_\kappa$, \item for any reduced expression of $w\in {}_\kappa S_\rho$, there are no red/red crossings. \end{itemize} \begin{ex} If $\kappa=\rho=(0,1,1)$, then the set ${}_\kappa S_\rho$ has two elements, namely \[ \tikzdiagh{-1.5ex}{ \draw[vstdhl] (0,0) -- (0,1); \draw[stdhl] (.25,0) -- (.25,1); \draw (0.5,0) -- (.5,1); \draw[stdhl] (.75,0) -- (.75,1); \draw (1,0) -- (1,1); } \quad\text{and}\quad \tikzdiagh{-1.5ex}{ \draw[vstdhl] (0,0) -- (0,1); \draw[stdhl] (.25,0) -- (.25,1); \draw (.5,0) ..controls (.5,.5) and (1,.5) .. (1,1); \draw (1,0) ..controls (1,.5) and (.5,.5) .. (.5,1); \draw[stdhl] (.75,0) ..controls (.75,.35) and (1,.15) .. (1,.5); \draw[stdhl] (1,.5) ..controls (1,.85) and (.75,.65) ..(.75,1); } \] Note that the second element is not left-adjusted. 
\end{ex} For each $w\in {}_\kappa S_\rho,\ \underline{l}=(l_1,\ldots,l_b)\in \{0,1\}^b$ and $\underline{a}=(a_1,\ldots,a_b)\in\mathbb{N}^b$, we define an element $b_{w,\underline{l},\underline{a}}\in 1_\kappa T^{\lambda,\underline{N}}_b 1_\rho$ as follows: \begin{enumerate} \item we choose a left-adjusted reduced expression of $w$ in terms of diagrams as above; \item for each $1\leq i \leq b$, if $l_i=1$, then we nail the $i$-th black strand at the top, counting from the left, to the blue strand by pulling it from its leftmost position; \item finally, for each $1\leq i \leq b$, we add $a_i$ dots on the $i$-th black strand at the top. \end{enumerate} Let ${}_\kappa B_\rho := \{ b_{w,\underline{l},\underline{a}} \mid w\in {}_\kappa S_\rho,\ \underline{l}\in \{0,1\}^b, \underline{a}\in\mathbb{N}^b\}$. \begin{ex} We continue the example of $\kappa=\rho=(0,1,1)$. Choosing $\underline{l}=(1,0)$ and $\underline{a}=(0,1)$, and taking for $w$ the permutation with a black/black crossing (after left-adjusting it), we obtain \[ b_{w,\underline{l},\underline{a}} = \tikzdiagh{-1.5ex}{ \draw (.5,0) ..controls (.5,.5) and (1,.5) .. (1,1) node [pos=.85,tikzdot]{}; \draw (.5,1) ..controls (.5,.85) and (0,.90) .. (0,.75); \draw (0,.75) ..controls (0,.60) and (1,.75) .. (1,0); \draw[stdhl] (.25,0) -- (.25,1); \draw[stdhl] (.75,0) ..controls (.75,.35) and (.5,.15) .. (.5,.5); \draw[stdhl] (.5,.5) ..controls (.5,.85) and (.75,.65) ..(.75,1); \draw[vstdhl] (0,0) -- (0,1) node [pos=.75,nail]{}; } \] \end{ex} \begin{thm}\label{thm:Tbasis} The set ${}_\kappa B_\rho$ is a basis of $1_\kappa T^{\lambda,\underline{N}}_b 1_\rho$ as a $\mathbb{Z}\times\mathbb{Z}^2$-graded $\Bbbk$-module. \end{thm} \begin{proof} By \cref{lem:anticom_theta}, with arguments similar to \cite[Proposition 3.13]{naissevaz3}, one shows that this set generates $1_\kappa T^{\lambda,\underline{N}}_b 1_\rho$ as a $\Bbbk$-module.
The proof proceeds by induction on the number of crossings, applying braid moves to reduce the diagrams. To show that this set is linearly independent over $\Bbbk$, we apply \cref{lem:injectstoANH}. \end{proof} In the following, we draw $T_b^{\lambda,\underline{N}} 1_{\rho}$ with $\rho = (b_0, \dots, b_r)$ as a box diagram \[ \tikzdiag[xscale=1.25]{ \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1); % \draw (0,-.5) -- (0,1); \node at(.25,0) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,0) { $\dots$}; % \draw [stdhl] (1.5,-.5) node[below,yshift={-1ex}]{\small $N_{r-1}$} -- (1.5,1); % \draw (1.75,-.5) -- (1.75,1); \node at(2,0) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $b_{r-1}$} (2.35,-.85); % \draw [stdhl] (2.5,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} -- (2.5,1); % \draw (2.75,-.5) -- (2.75,1); \node at(3,0) {\tiny $\dots$}; \draw (3.25,-.5) -- (3.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.65,-.85) -- node {\small $b_r$} (3.35,-.85); % \filldraw [fill=white, draw=black] (-.375,.5) rectangle (3.375,1.25) node[midway] { $T_b^{\lambda,\underline{N}}$}; } \] Moreover, when we draw something like \[ \tikzdiag[xscale=1.25]{ \draw[fill=white, color=white] (-.35,0) circle (.15cm); \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1); % \draw (0,-.5) -- (0,1); \node at(.25,-.35) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,-.35) { $\dots$}; % \draw [stdhl] (1.5,-.5)
node[below,yshift={-1ex}]{\small $N_i$} -- (1.5,1); % \draw (1.75,-.5) -- (1.75,1); \node at(2,-.35) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $t$} (2.35,-.85); \draw (2.5,-.5) .. controls (2.5,0) and (5.25,0) .. (5.25,.5) -- (5.25,1.25) node[midway,tikzdot]{} node[midway, xshift=1.5ex, yshift=1ex]{\small $p$}; \draw (2.75,-.5) .. controls (2.75,0) and (2.5,0) .. (2.5,.5); \node at(3,-.35) {\tiny $\dots$}; \draw (3.25,-.5) .. controls (3.25,0) and (3,0) .. (3,.5); % \draw [stdhl] (3.5,-.5) node[below,yshift={-1ex}]{\small $N_{i+1}$} .. controls (3.5,0) and (3.25,0) .. (3.25,.5); % \node[red] at(3.875,-.35) { $\dots$}; % \draw [stdhl] (4.25,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} .. controls (4.25,0) and (4,0) .. (4,.5); % \draw (4.5,-.5) .. controls (4.5,0) and (4.25,0) .. (4.25,.5); \node at(4.75,-.35) {\tiny $\dots$}; \draw (5,-.5) .. controls (5,0) and (4.75,0) .. (4.75,.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (4.4,-.85) -- node {\small $b_r$} (5.1,-.85); % \filldraw [fill=white, draw=black] (-.375,.5) rectangle (4.875,1.25) node[midway] { $T_{b-1}^{\lambda,\underline{N}}$}; } \] with $p \geq 0$ and $0 \leq t < b_i$, we mean the subset of $T_b^{\lambda,\underline{N}} 1_{\rho}$ obtained by replacing the box labeled $T_{b-1}^{\lambda,\underline{N}}$ with any diagram of $T_{b-1}^{\lambda,\underline{N}}$ in the diagram above, viewing the result as a diagram of $T_b^{\lambda,\underline{N}} 1_{\rho}$.
\begin{cor}\label{prop:Tdecomp} As a $\mathbb{Z}\times\mathbb{Z}^2$-graded $\Bbbk$-module, $T_b^{\lambda,\underline{N}} 1_{\rho}$ decomposes as a direct sum \begin{align*} \tikzdiag[xscale=1.25]{ \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1); % \draw (0,-.5) -- (0,1); \node at(.25,0) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,0) { $\dots$}; % \draw [stdhl] (1.5,-.5) node[below,yshift={-1ex}]{\small $N_{r-1}$} -- (1.5,1); % \draw (1.75,-.5) -- (1.75,1); \node at(2,0) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $b_{r-1}$} (2.35,-.85); % \draw [stdhl] (2.5,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} -- (2.5,1); % \draw (2.75,-.5) -- (2.75,1); \node at(3,0) {\tiny $\dots$}; \draw (3.25,-.5) -- (3.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.65,-.85) -- node {\small $b_r$} (3.35,-.85); % \filldraw [fill=white, draw=black] (-.375,.5) rectangle (3.375,1.25) node[midway] { $T_b^{\lambda,\underline{N}}$}; } \ \cong& \ \tikzdiag[xscale=1.25]{ % \draw (0,-.5) -- (0,1); \node at(.25,-.35) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,-.35) { $\dots$}; % \draw [stdhl] (1.5,-.5) node[below,yshift={-1ex}]{\small $N_{r-1}$} -- (1.5,1); % \draw (1.75,-.5) -- (1.75,1); \node at(2,-.35) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $b_{r-1}$} (2.35,-.85); % \draw (2.75,-.5) .. controls (2.75,0) and (2.5,0) .. 
(2.5,.5) -- (2.5,1); \node at(3,-.35) {\tiny $\dots$}; \draw (3.25,-.5) .. controls (3.25,0) and (3,0) .. (3,.5) -- (3,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (2.65,-.85) -- node {\small $b_r$} (3.35,-.85); % \draw [stdhl] (2.5,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} .. controls (2.5,0) and (3.5,0) .. (3.5,.5) -- (3.5,1.25); \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1); % \filldraw [fill=white, draw=black] (-.375,.5) rectangle (3.125,1.25) node[midway] { $T_b^{\lambda,\underline{N'}}$}; } \\ &\oplus \bigoplus_{i=0}^r \ssbigoplus{0 \leq t < b_i \\ p \geq 0} \tikzdiag[xscale=1.25]{ \draw[fill=white, color=white] (-.35,0) circle (.15cm); \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1); % \draw (0,-.5) -- (0,1); \node at(.25,-.35) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); % \node[red] at(1.125,-.35) { $\dots$}; % \draw [stdhl] (1.5,-.5) node[below,yshift={-1ex}]{\small $N_i$} -- (1.5,1); % \draw (1.75,-.5) -- (1.75,1); \node at(2,-.35) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $t$} (2.35,-.85); \draw (2.5,-.5) .. controls (2.5,0) and (5.25,0) .. (5.25,.5) -- (5.25,1.25) node[midway,tikzdot]{} node[midway, xshift=1.5ex, yshift=1ex]{\small $p$}; \draw (2.75,-.5) .. controls (2.75,0) and (2.5,0) .. (2.5,.5); \node at(3,-.35) {\tiny $\dots$}; \draw (3.25,-.5) .. controls (3.25,0) and (3,0) .. (3,.5); % \draw [stdhl] (3.5,-.5) node[below,yshift={-1ex}]{\small $N_{i+1}$} .. controls (3.5,0) and (3.25,0) .. (3.25,.5); % \node[red] at(3.875,-.35) { $\dots$}; % \draw [stdhl] (4.25,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} .. controls (4.25,0) and (4,0) .. (4,.5); % \draw (4.5,-.5) .. controls (4.5,0) and (4.25,0) .. 
(4.25,.5); \node at(4.75,-.35) {\tiny $\dots$}; \draw (5,-.5) .. controls (5,0) and (4.75,0) .. (4.75,.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (4.4,-.85) -- node {\small $b_r$} (5.1,-.85); % \filldraw [fill=white, draw=black] (-.375,.5) rectangle (4.875,1.25) node[midway] { $T_{b-1}^{\lambda,\underline{N}}$}; } \\ &\oplus \bigoplus_{i=0}^r \ssbigoplus{0 \leq t < b_i \\ p \geq 0} \tikzdiag[xscale=1.25]{ % \draw (0,-.5) -- (0,1); \node at(.25,-.35) {\tiny $\dots$}; \draw (.5,-.5) -- (.5,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-.85) -- node {\small $b_0$} (.6,-.85); % % \node[red] at(1.125,-.35) { $\dots$}; % % \draw (1.75,-.5) -- (1.75,1); \node at(2,-.35) {\tiny $\dots$}; \draw (2.25,-.5) -- (2.25,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (1.65,-.85) -- node {\small $t$} (2.35,-.85); % \draw (2.5,-.5) .. controls (2.5,-.25) .. (-.5,0) .. controls (5.25,.25) .. (5.25,.5) -- (5.25,1.25) node[midway,tikzdot]{} node[midway, xshift=1.5ex, yshift=1ex]{\small $p$}; % \draw (2.75,-.5) .. controls (2.75,0) and (2.5,0) .. (2.5,.5); \node at(3,-.35) {\tiny $\dots$}; \draw (3.25,-.5) .. controls (3.25,0) and (3,0) .. (3,.5); % \draw [stdhl] (3.5,-.5) node[below,yshift={-1ex}]{\small $N_{i+1}$} .. controls (3.5,0) and (3.25,0) .. (3.25,.5); % \node[red] at(3.875,-.35) { $\dots$}; % \draw [stdhl] (4.25,-.5) node[below,yshift={-1ex}]{\small $N_{r}$} .. controls (4.25,0) and (4,0) .. (4,.5); % \draw (4.5,-.5) .. controls (4.5,0) and (4.25,0) .. (4.25,.5); \node at(4.75,-.35) {\tiny $\dots$}; \draw (5,-.5) .. controls (5,0) and (4.75,0) .. 
(4.75,.5); \draw[decoration={brace,mirror,raise=-8pt},decorate] (4.4,-.85) -- node {\small $b_r$} (5.1,-.85); % \draw [stdhl] (1.5,-.5) node[below,yshift={-1ex}]{\small $N_i$} -- (1.5,1); \draw [stdhl] (.75,-.5) node[below,yshift={-1ex}]{\small $N_1$} -- (.75,1); \draw[fill=white, color=white] (-.35,0) circle (.15cm); \draw [vstdhl] (-.25,-.5) node[below,yshift={-1ex}]{\small $\lambda$} -- (-.25,1) node[nail,pos=.33]{}; \filldraw [fill=white, draw=black] (-.375,.5) rectangle (4.875,1.25) node[midway] { $T_{b-1}^{\lambda,\underline{N}}$}; } \end{align*} where $\und N' = (N_1,\dots, N_{r-1})$, and the isomorphism is given by inclusion. \end{cor} \begin{proof} The claim follows immediately from \cref{thm:Tbasis}. \end{proof} \subsection{Dg-enhancement}\label{sec:dgenh} For each $N \in \mathbb{N}$, we want to define a non-trivial differential $d_N$ on $T_b^{\lambda, \underline{N}}$. First, we collapse the $\mathbb{Z}^2$-grading into a single $\mathbb{Z}$-grading, which we also call $q$-degree, through the map $\mathbb{Z}^2 \rightarrow \mathbb{Z}, (a,b) \mapsto a + bN$ (i.e. specializing $\lambda = q^N$). Then, we put \[ d_N\left( \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \right) \ := \ \tikzdiagh{0}{ \draw (.5,-.5) -- (.5,.5) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $N$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5); } \] and $d_N(t) := 0$ for all element $t$ of $\widetilde T_b^{\underline N} \subset T_b^{\lambda, \underline N}$, and extending by the graded Leibniz rule w.r.t. the homological grading. A straightforward computation shows that $d_N$ respects all the defining relations of $T_b^{\lambda, \underline N}$, and therefore is well-defined. 
\begin{thm}\label{thm:dNformal} The $\mathbb{Z}$-graded dg-algebra $(T^{\lambda,\underline{N}}_b,d_N)$ is formal with \[ H(T^{\lambda,\underline{N}}_b,d_N) \cong T^{(N,\underline{N})}_b , \] where $(N,\underline{N}) := (N, N_1, \dots, N_r) \in \mathbb{N}^{r+1}$. \end{thm} \begin{proof} The proof follows by arguments similar to those in~\cite[Theorem 4.4]{naissevaz3}, using \cref{prop:Tdecomp}. We leave the details to the reader. \end{proof} \section{Introduction}\label{sec:intro} Dualities are fundamental tools in mathematics in general and in higher representation theory in particular. For example, Stroppel's version of Khovanov homology~\cite{Str05,Str09} and Khovanov's HOMFLY--PT homology~\cite{Kh-soergel} can be seen as instances of higher Schur--Weyl duality (see also~\cite{Str-ICM} for further explanations). In this paper we construct an instance of higher Schur--Weyl duality between $U_q(\mathfrak{sl}_2)$ and the blob algebra of Martin and Saleur~\cite{Martin-Saleur} by using a categorification of the tensor product of a Verma module and several two-dimensional irreducibles. \subsection{State of the art} \subsubsection{Schur--Weyl duality, $U_q(\mathfrak{sl}_2)$ and the Temperley--Lieb algebra} Schur--Weyl duality connects finite-dimensional modules of the general linear and symmetric groups. In particular, it states that over an algebraically closed field the actions of $GL_m$ and $\mathfrak{S}_r$ on the $r$-fold tensor power of the natural module $V$ of $GL_m$ commute and are the centralizers of each other. In the quantum version, $GL_m$ and $\mathfrak{S}_r$ are replaced respectively by the quantum general linear algebra $U_q(\mathfrak{gl}_m)$ and the Hecke algebra $\mathcal{H}_r(q)$. We note that these consequences of (quantum) Schur--Weyl duality remain true if one replaces the general linear group with the special linear group.
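In symbols, quantum Schur--Weyl duality states that, for generic $q$, the two natural algebra maps
\[
U_q(\mathfrak{gl}_m) \longrightarrow \operatorname{End}_{\mathcal{H}_r(q)}\bigl(V^{\otimes r}\bigr),
\qquad
\mathcal{H}_r(q) \longrightarrow \operatorname{End}_{U_q(\mathfrak{gl}_m)}\bigl(V^{\otimes r}\bigr),
\]
are surjective, so that each action generates the centralizer of the other.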
For example, in the case of $m=2$, the centralizer of the action of $U_q(\mathfrak{sl}_2)$ on $V^{\otimes r}$ is the Temperley--Lieb algebra $TL_r$, a well-known quotient of the Hecke algebra. One of the applications of this connection is the construction of the Jones--Witten--Reshetikhin--Turaev $U_q(\mathfrak{sl}_2)$-tangle invariant as a state-sum model (a linear combination of elements of $TL_r$) called the Kauffman bracket, which was the version categorified by Khovanov~\cite{Kh-jones} in the particular case of links. \subsubsection{The blob algebra} It was shown in~\cite{swTLB} that, for a projective $U_q(\mathfrak{sl}_2)$ Verma module $M$ with highest weight $\lambda$ (in the sense that $\lambda$ is the eigenvalue), the endomorphism algebra of $M\otimes V^{\otimes r}$ is the blob algebra $\mathcal{B}_r=\mathcal{B}_r(q,\lambda)$ of Martin--Saleur~\cite{Martin-Saleur}. This algebra $\mathcal{B}_r$, which was unfortunately called Temperley--Lieb algebra of type B in~\cite{swTLB}, is in fact a quotient of the Temperley--Lieb algebra of type B~\cite{Graham,Green}. Note that the parameters $\lambda$ and $q$ in~\cite{swTLB} are not algebraically independent but can be easily made independent by working with a universal Verma module as in~\cite{LV}. The blob algebra $\mathcal{B}_r$ can be given a diagrammatic presentation in terms of $\mathbb{Q}(q,\lambda)$-linear combinations of flat tangle diagrams~\cite{swTLB} on $r+1$ strands, with generators \begin{align*} u_i &:= \ \tikzdiagh{0}{ \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); \draw[red] (0,0) -- (0,1); \node at (.5,.5){\small $\dots$}; \draw[red] (1,0) -- (1,1); \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. (2,0) ; \draw[red] (1.5, 1) .. controls (1.5,.5) and (2,.5) .. (2,1); \draw[red] (2.5,0) -- (2.5,1); \node at (3,.5){\small $\dots$}; \draw[red] (3.5,0) -- (3.5,1); % \tikzbrace{-.5}{1}{0}{$i$}; } \intertext{for $i=1,\dotsc ,r-1$, and} \xi &:= \ \tikzdiag{ \draw[red] (0,0) .. controls (0,.25) .. 
(-.5,.5) .. controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[red] (.5,0) -- (.5,1); \node at (1,.5){\small $\dots$}; \draw[red] (1.5,0) -- (1.5,1); \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); } \end{align*} taken up to planar isotopy fixing the endpoints, and subject to the usual Temperley--Lieb relation of type A: \begin{align*}\allowdisplaybreaks \tikzdiag[yscale=.75]{ \draw[red] (0, 0) .. controls (0,.5) and (.5,.5) .. (.5,0) .. controls (.5,-.5) and (0,-.5) .. (0,0); } \ &= \ -(q+q^{-1}), \intertext{and the blob relations:} % \tikzdiag[yscale=.75]{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1) .. controls (0,1.5) and (.5,1.5) .. (.5,1) -- (.5,0) .. controls (.5,-.5) and (0,-.5) .. (0,0); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[ultra thick,myblue] (-.5,-.5) -- (-.5,1.5); } \ &= -(\lambda q+\lambda^{-1}q^{-1}) \ \tikzdiag[yscale=.75]{ \draw[ultra thick,myblue] (-.5,-.5) -- (-.5,1.5); } % \\ % q^{-1} \ \tikzdiag{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1) .. controls (0,1.25) .. (-.5,1.5) .. controls (0,1.75) .. (0,2); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[fill=white, color=white] (-.52,1.5) circle (.02cm); \draw[ultra thick,myblue] (-.5,0) -- (-.5,2); } \ &= \ (\lambda q+\lambda^{-1}q^{-1}) \ \tikzdiag{ \draw[red] (0,0) -- (0,.5) .. controls (0,.75) .. (-.5,1) .. controls (0,1.25) .. (0,1.5) -- (0,2); \draw[fill=white, color=white] (-.52,1) circle (.02cm); \draw[ultra thick,myblue] (-.5,0) -- (-.5,2); } \ - q \ \tikzdiag{ \draw[red] (0,0) -- (0,2); \draw[ultra thick,myblue] (-.5,0)-- (-.5,2); } \end{align*} Note that this generators-and-relations definition of the blob algebra also makes sense over $\mathbb{Z}[q^{\pm 1},\lambda^{\pm 1}]$.
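For the reader's convenience, and modulo our diagram-to-algebra conventions, the relations above can be restated algebraically in terms of the generators $u_i$ and $\xi$ (the relations $u_i u_{i\pm 1} u_i = u_i$ and $u_i u_j = u_j u_i$ for $|i-j|>1$ being consequences of planar isotopy):
\begin{align*}
u_i^2 &= -(q+q^{-1})\, u_i, &
u_1 \xi u_1 &= -(\lambda q + \lambda^{-1} q^{-1})\, u_1, &
q^{-1} \xi^2 &= (\lambda q + \lambda^{-1} q^{-1})\, \xi - q .
\end{align*}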
\begin{rem} In \cite{Martin-Saleur}, the blob algebra is given a different presentation, where the generator of type B is pictured as a dot on the left-most strand, and is an idempotent. We use the presentation given in \cite{swTLB}, which is isomorphic to the one in \cite{Martin-Saleur} over $\mathbb{Z}(q, \lambda)$ (but not over $\mathbb{Z}[q^{\pm1}, \lambda^{\pm 1}]$). This presentation is closer to the representation theory of $U_q(\mathfrak{sl}_2)$ and is the one that arises from our categorification construction. \end{rem} More generally, we consider the category $\mathcal{B}$ with objects given by $M \otimes V^{\otimes r}$ for various $r \in \mathbb{N}$, and hom-spaces given by $U_q(\mathfrak{sl}_2)$-intertwiners. This category, which we call the \emph{blob category}, admits a diagrammatic description very similar to that of the blob algebra, where objects are collections of $r+1$ points on the horizontal line. The hom-spaces are presented by flat tangles connecting these points, with the left-most point of the source always connected to the left-most point of the target, allowing 4-valent intersections between the first two strands. These diagrams are subject to the same relations as the blob algebra. We stress that, in contrast to the Temperley--Lieb category of type A, the blob category is not monoidal w.r.t. juxtaposition of diagrams, since the blue strand in the pictures above needs to be on the left-hand side of any diagram. \subsubsection{Webster categorification} In a seminal paper~\cite{webster}, Webster constructed categorifications of tensor products of integrable modules for symmetrizable Kac--Moody algebras, generalizing the categorifications of quantum groups and of their integrable modules by Lauda~\cite{L1}, Khovanov--Lauda~\cite{KL1,KL2}, Chuang--Rouquier~\cite{CR} and Rouquier~\cite{rouquier}. Webster further used his categorifications to give a link homology theory categorifying the Witten--Reshetikhin--Turaev invariant of tangles.
The construction in~\cite{webster} involves algebras, called KLRW algebras (or tensor product algebras), that are finite-dimensional algebras presented diagrammatically, generalizing cyclotomic KLR algebras. Categories of finitely generated modules over KLRW algebras come equipped with an action of Khovanov--Lauda--Rouquier's 2-Kac--Moody category, and their Grothendieck groups are isomorphic to tensor products of integrable modules. Link invariants and categorifications of intertwiners are constructed using functors given by the derived tensor product with certain bimodules over KLRW algebras. \subsubsection{Verma categorification: dg-enhancements} In~\cite{naissevaz1,naissevaz2,naissevaz3}, the second and third authors have given a categorification of (universal, parabolic) Verma modules for (quantized) symmetrizable Kac--Moody algebras. In its more general form~\cite{naissevaz3}, the categorification is given as a derived category of dg-modules over a certain dg-algebra, similar to a KLR algebra but containing an extra generator in homological degree $1$. This dg-algebra can also be endowed with a collection of different differentials, each of them turning it into a dg-algebra whose homology is isomorphic to a cyclotomic KLR algebra. This can be interpreted as a categorification of the projection of a universal Verma module onto an integrable module. Categorification of Verma modules was used by the second and third authors in~\cite{naissevaz4} to give a quantum group higher representation theory construction of Khovanov--Rozansky's HOMFLY--PT link homology. \subsection{The work in this paper} For $\lambda$ a formal parameter, let $M(\lambda)$ be the universal $U_q(\mathfrak{sl}_2)$-Verma module with highest weight $\lambda$, and $V(\und{N}):=V(N_1)\otimes \dotsm \otimes V(N_r)$, where $V(N_j)$ is the irreducible of highest weight $q^{N_j}$, $N_j \in\mathbb{N}$. 
In this paper we combine Webster's categorification with the Verma categorification to give a categorification of $M(\lambda)\otimes V(\und{N})$. Then we construct a categorification of the blob algebra by categorifying the intertwiners of $M(\lambda)\otimes V(\und{N})$ where all the $N_j$ are 1. \subsubsection{Dg-enhanced KLRW algebras and categorification of tensor products (Sections~\ref{sec:dgWebster} and~\ref{sec:catTensProd})} Fix a commutative unital ring $\Bbbk$. The KLRW algebra is the $\Bbbk$-algebra spanned by planar isotopy classes of braid-like diagrams whose strands are of two types: there are black strands labeled by simple roots of a symmetrizable Kac--Moody algebra ${\mathfrak{g}}$ and carrying dots, and there are red strands labeled by dominant integral weights. KLRW algebras are cyclotomic algebras in the sense that they generalize cyclotomic KLR algebras to a string of dominant integral weights, where the ``\emph{violating condition}''~\cite[Definition 4.3]{webster} plays the role of the cyclotomic condition. KLRW algebras were also defined without the violating condition, in which case we call them non-cyclotomic or affine KLRW algebras. In the case of $\mathfrak{sl}_2$, for $b\in\mathbb{N}$ and $\und{N}\in\mathbb{N}^r$, we denote by $T_b^{\und{N}}$ (resp. $\widetilde T_b^{\und N}$) the (resp. affine) KLRW algebra spanned by $b$ black strands (all labeled by the simple root of $\mathfrak{sl}_2$) and $r$ red strands, labeled in order $N_1,\dotsc, N_r$ from left to right. Following a procedure analogous to~\cite{naissevaz2,naissevaz3}, we construct in~\cref{sec:dgWebster} an algebra $T_b^{\lambda,\und{N}}$, with $\lambda$ a formal parameter, that contains the affine KLRW algebra $\widetilde T_b^{\und N}$ as a subalgebra. 
In a nutshell, $T_b^{\lambda,\und{N}}$ is defined by putting a vertical blue strand labeled by $\lambda$ on the left of the diagrams of $\widetilde T_b^{\und N}$, and adding a new generator that we call a nail (this corresponds with the ``tight floating dots'' of \cite{naissevaz2,naissevaz3}). We draw this new generator as: \[ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; \node at (1.2,0) {$\dotsm$}; \draw[stdhl] (1.8,-.5) node[below]{\small $N_1$} -- (1.8,.5) ; \node at (2.65,0) {$\dotsm$}; \draw[stdhl] (3.5,-.5) node[below]{\small $N_r$} -- (3.5,.5) ; \node at (4.3,0) {$\dotsm$}; } \] Note that a nail can only be placed on the left-most strand, which is always blue. The nails are subject to the following local relations: \begin{align*} \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) node[midway, tikzdot]{} .. controls (.5,.25) .. (.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \ &= \ \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. (.5,.5) node[midway, tikzdot]{}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } & \tikzdiagh{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.05) .. (.5,.2) -- (.5,1); \draw (1,-1) .. controls (1,0) .. (0, .4) .. controls (1,.75) .. (1,1); \draw [vstdhl] (0,-1) node[below]{\small $\lambda$} -- (0,1) node [pos=.3,nail] {} node [pos=.7,nail] {} ; } \ &= \ - \tikzdiagh[yscale=-1]{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.05) .. (.5,.2) -- (.5,1); \draw (1,-1) .. controls (1,0) .. (0, .4) .. controls (1,.75) .. (1,1); \draw [vstdhl] (0,-1) -- (0,1) node[below]{\small $\lambda$} node [pos=.3,nail] {} node [pos=.7,nail] {} ; } & \tikzdiagh{0}{ \draw (.5,-1) .. controls (.5,-.75) .. (0,-.4) .. controls (.5,-.4) and (.5,.4) .. 
(0,.4) .. controls (.5,.75) .. (.5,1); \draw [vstdhl] (0,-1) node[below]{\small $\lambda$} -- (0,1) node [pos=.3,nail] {} node [pos=.7,nail] {} ; } \ &= \ 0. \end{align*} When $\und{N}=\varnothing$ is the empty sequence, we recover the dg-enhanced nilHecke algebra from~\cite{naissevaz2}. The subalgebra spanned by all diagrams without a nail is isomorphic to the affine KLRW algebra $\widetilde T_b^{\und N}$. As we will see, the algebra $T^{\lambda, \und N}_b$ can be equipped with three ($\mathbb{Z}$-)gradings: two internal gradings, one as in Webster's original definition and an additional grading (see~\cref{def:dgwebsteralg}), as well as a homological grading. The first two of these gradings categorify the parameters $q$ and $\lambda$ respectively, and we call them $q$- and $\lambda$-gradings. As usual, the homological grading allows us to categorify relations involving minus signs. We write $q^k$ (resp. $\lambda^k$) for a grading shift up by $k$ in the $q$- (resp. $\lambda$-)grading, and $[k]$ for a grading shift up by $k$ in the homological grading, for $k \in \mathbb{Z}$. We let the nail be in homological degree $1$, while diagrams without a nail are in homological degree $0$. As in the categorification of Verma modules, if we endow the algebra $T^{\lambda,\underline{N}}_b$ with a trivial differential, then it becomes a dg-algebra categorifying $M(\lambda)\otimes V(\und{N})$ (see below). We can also equip $T^{\lambda,\underline{N}}_b$ with a differential $d_N$, for $N\geq 0$, which acts trivially on diagrams without a nail, while \[ d_N\left( \tikzdiagh{-1ex}{ \draw (.5,-.5) .. controls (.5,-.25) .. (0,0) .. controls (.5,.25) .. 
(.5,.5); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5) node [midway,nail]{}; } \right) \ := \ \tikzdiagh{-1ex}{ \draw (.5,-.5) -- (.5,.5) node[midway,tikzdot]{} node[midway,xshift=1.75ex,yshift=.75ex]{\small $N$}; \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,.5); } \] and extended using the graded Leibniz rule. The dg-algebra $(T^{\lambda,\underline{N}}_b,d_N)$ is formal with homology isomorphic to the KLRW algebra $T^{(N,\underline{N})}_b$ (see~\cref{thm:dNformal}). The usual framework using the algebra map $T^{\lambda,\underline{N}}_b \rightarrow T^{\lambda,\underline{N}}_{b+1}$ that adds a black strand at the right of a diagram gives rise to induction and restriction dg-functors $\E_b$ and $\F_b$ between the derived dg-categories $\mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b},0)$ and $\mathcal{D}_{dg}(T^{\lambda,\underline{N}}_{b+1},0)$. The following describes the categorical $U_q(\mathfrak{sl}_2)$-action: \newtheorem*{thma}{\cref{thm:sl2comqi}} \begin{thma} There is a quasi-isomorphism \[ \cone(\F_{b-1}\E_{b-1} \rightarrow \E_b\F_b) \xrightarrow{\cong} \oplus_{[\beta+|\underline{N}|-2b]_q} \text{Id}_b, \] of dg-functors. \end{thma} \noindent As usual in the context of categorification, the notation $\oplus_{[\beta+|\underline{N}|-2b]_q}$ on the right-hand side is an infinite coproduct categorifying multiplication by the rational fraction $(\lambda q^{|\underline{N}|-2b}-\lambda^{-1} q^{-|\underline{N}|+2b})/(q-q^{-1})$ interpreted as a Laurent series. Turning on the differential $d_N$ gives functors $\E_b^N$, $\F_b^N$ on $\mathcal{D}_{dg}(\oplus_{b\geq 0}T^{\lambda,\underline{N}}_{b},d_N)$. In this case, the right-hand side in \cref{thm:sl2comqi} becomes quasi-isomorphic to a finite sum and we recover the usual action on categories of modules over KLRW algebras (see~\cref{prop:actionN}).
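For concreteness, the rational fraction appearing above admits the Laurent series expansion (here in negative powers of $q$)
\[
\frac{\lambda q^{m} - \lambda^{-1} q^{-m}}{q - q^{-1}}
\;=\; \sum_{k \geq 0} \bigl( \lambda q^{m-1-2k} - \lambda^{-1} q^{-m-1-2k} \bigr),
\qquad m := |\underline{N}| - 2b,
\]
whose terms record the grading shifts of the (infinitely many) copies of $\text{Id}_b$ appearing in the coproduct $\oplus_{[\beta+|\underline{N}|-2b]_q}$, the terms with negative coefficients being accounted for by homological shifts.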
In~\cite{asympK0}, the second author introduced the notion of an asymptotic Grothendieck group, which is a notion of a Grothendieck group for (multi)graded categories of objects admitting infinite iterated extensions (like infinite composition series or infinite resolutions) whose gradings satisfy some mild conditions. Denote by ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(-)$ the asymptotic Grothendieck group (tensored over $\mathbb{Z}\pp{q,\lambda}$ with $\mathbb{Q}\pp{q,\lambda}$). The categorical $U_q(\mathfrak{sl}_2)$-actions on the derived categories $\mathcal{D}_{dg}(\oplus_{b\geq 0}T^{\lambda,\underline{N}}_{b},0)$ and $\mathcal{D}_{dg}(\oplus_{b\geq 0}T^{\lambda,\underline{N}}_{b},d_N)$ descend to the asymptotic Grothendieck group and we have the main result of~\cref{sec:catTensProd}, which reads as follows: \newtheorem*{thmb}{\cref{thm:K0}} \begin{thmb} There are isomorphisms of $U_q(\mathfrak{sl}_2)$-modules \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},0) \cong M(\lambda) \otimes V(\underline{N}), \] and \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(T^{\lambda,\underline{N}},d_N)\cong V(N) \otimes V(\underline{N}), \] for all $N \in \mathbb{N}$. \end{thmb} In~\cref{sec:zigazag} we prove that in the case of $b=1$, $\und{N}=(1,\dotsc ,1)$ and $N=1$, the dg-algebra $(T^{\lambda,1,\dotsc ,1}_{1},d_1)$ is isomorphic to a dg-enhanced zigzag algebra, generalizing~\cite[\S4]{qi-sussan}. \subsubsection{The blob 2-category (Sections~\ref{sec:bimod} and~\ref{sec:catTLB})} We study the case of $\und{N}=(1,\dotsc ,1)$ in more detail. We define several functors on $\mathcal{D}_{dg}(T^{\lambda,\und{N}},0)$ commuting with the categorical action of $U_q(\mathfrak{sl}_2)$. As in~\cite{webster}, these are defined as a first step via (dg-)bimodules over the above-mentioned dg-enhancements of KLRW-like algebras. To simplify matters, let $T^{\lambda,r}$ be the dg-enhanced KLRW algebra with $r$ strands labeled 1 and a blue strand labeled $\lambda$.
The categorical Temperley--Lieb action is realized by a pair of biadjoint functors, constructed in the same way as in~\cite{webster}. They are given by derived tensoring with the $(T^{\lambda,r},T^{\lambda,r\pm 2})$-bimodules $B_i$ and $\overline{B}_i$ generated respectively by the diagram \[ \tikzdiag{ \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); \draw[stdhl] (1,0) node[below]{\small $1$} -- (1,1); \node[red] at(2,.5) { $\dots$}; \draw[stdhl] (3,0) node[below]{\small $1$} -- (3,1); \draw[decoration={brace,mirror,raise=-8pt},decorate] (-.1,-1) -- node {$i$} (3.1,-1); \draw (4.5,.5) -- (4.5,1); \draw [stdhl] (4,1) .. controls (4,.25) and (5,.25) .. (5,1); \draw[stdhl] (6,0) node[below]{\small $1$} -- (6,1); \node[red] at(7,.5) { $\dots$}; \draw[stdhl] (8,0) node[below]{\small $1$} -- (8,1); } \] and its mirror along a horizontal axis. We stress again that the blue strand is on the left. Moreover, these diagrams are subjected to some local relations (see \cref{sec:Tlaction}). Taking the derived tensor product with these bimodules defines the coevaluation and evaluation dg-functors as \begin{align*} \B_i := B_i \otimes^{\Lderiv}_T - : \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r+2},0), \\ \overline{\B}_i := \overline{B}_i \otimes^{\Lderiv}_T - : \mathcal{D}_{dg}(T^{\lambda,r+2},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0). \end{align*} In~\cref{sec:catTLaction} we extend~\cite{webster} and prove that these functors satisfy the relations of the Temperley--Lieb algebra: \newtheorem*{thmtla}{Corollaries~\ref{cor:TLbiadj} and~\ref{cor:TLloop}} \begin{thmtla} There are natural isomorphisms \begin{align*} \bar \B_{i \pm 1} \circ \B_i &\cong \text{Id}, & \overline \B_i \circ \B_i &\cong q \text{Id} [1] \oplus q^{-1} \text{Id} [-1]. 
\end{align*} \end{thmtla} We define the double braiding functor in the same vein, using the $(T^{\lambda,r},T^{\lambda,r})$-bimodule $X$ generated by the diagram \[ \tikzdiag{ \draw[stdhl] (1,0) node[below]{\small $1$} .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,0) node[below]{\small $\lambda$} -- (0,1); % \draw[stdhl] (2,0) node[below]{\small $1$} -- (2,1); \node[red] at(3,.5) { $\dots$}; \draw[stdhl] (4,0) node[below]{\small $1$} -- (4,1); } \] modulo the defining relations of $T^{\lambda,r}$, and the extra local relations \begin{align*} \tikzdiagh{0}{ \draw (.5,-.5) .. controls (.5,-.3) .. (0,-.1) .. controls (.5,.1) .. (.5,.3) -- (.5,1.5); \draw[stdhl] (1,-.5) node[below]{\small $1$} -- (1,0) .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1) -- (1,1.5); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5) node[pos=.2,nail]{}; } \ &= \ \tikzdiagh[yscale=-1]{0}{ \draw (.5,-.5) .. controls (.5,-.3) .. (0,-.1) .. controls (.5,.1) .. (.5,.3) -- (.5,1.5); \draw[stdhl] (1,-.5)-- (1,0) .. controls (1,.25) .. (0,.5) .. controls (1,.75) .. (1,1) -- (1,1.5) node[below]{\small $1$} ; \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) -- (0,1.5) node[pos=.2,nail]{} node[below]{\small $\lambda$}; } & \tikzdiagh{0}{ \draw (1,-.5) .. controls (1,-.3) .. (0,-.1) .. controls (1,.1) .. (1,.3) -- (1,1.5); \draw[stdhl] (.5,-.5) node[below]{\small $1$} -- (.5,0) .. controls (.5,.25) .. (0,.5) .. controls (.5,.75) .. (.5,1) -- (.5,1.5); \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) node[below]{\small $\lambda$} -- (0,1.5) node[pos=.2,nail]{}; } \ &= \ \tikzdiagh[yscale=-1]{0}{ \draw (1,-.5) .. controls (1,-.3) .. (0,-.1) .. controls (1,.1) .. (1,.3) -- (1,1.5); \draw[stdhl] (.5,-.5) -- (.5,0) .. controls (.5,.25) .. (0,.5) .. controls (.5,.75) .. 
(.5,1) -- (.5,1.5) node[below]{\small $1$}; \draw[fill=white, color=white] (-.1,.5) circle (.1cm); \draw[vstdhl] (0,-.5) -- (0,1.5) node[pos=.2,nail]{} node[below]{\small $\lambda$}; } \end{align*} The \emph{double braiding functor} is then defined as the derived tensor product \[ \Xi := X \otimes^{\Lderiv}_T - : \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0). \] The functors $\B_i$, $\overline{\B}_i$ and $\Xi$ intertwine the categorical $U_q(\mathfrak{sl}_2)$-action on $\mathcal{D}_{dg}(T^{\lambda,r},0)$: \newtheorem*{thmc}{\cref{prop:catactioncommutes}} \begin{thmc} We have natural isomorphisms $\E \circ \Xi \cong \Xi \circ \E$ and $\F \circ \Xi \cong \Xi \circ \F$, and also $\E \circ\B_i \cong \B_i \circ\E$, $\F \circ\B_i \cong \B_i \circ\F$, and similarly for $\overline\B_i$. \end{thmc} The first main result of~\cref{sec:catTLB} is that the blob algebra acts on $\mathcal{D}_{dg}(T^{\lambda,r},0)$. This follows from the Temperley--Lieb action in~\cref{sec:catTLaction} and~\cref{cor:qi-Xquadratic},~\cref{prop:Xi-autoequiv} and~\cref{cor:qi-bubbleremv}, summarized below. \newtheorem*{thmd}{\cref{cor:qi-Xquadratic}, \cref{prop:Xi-autoequiv} and \cref{cor:qi-bubbleremv}} \begin{thmd} The functor $\Xi : \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0)$ is an autoequivalence, with inverse given by \[ \Xi^{-1} := \RHOM_T(X,-): \mathcal{D}_{dg}(T^{\lambda,r},0) \rightarrow \mathcal{D}_{dg}(T^{\lambda,r},0) .
\] There are quasi-isomorphisms \begin{align*} \cone\bigl(\lambda q^2 \Xi [1] \rightarrow q^2 \text{Id} [1]\bigr)[1] &\xrightarrow{\simeq} \cone( \Xi \circ \Xi \rightarrow \lambda^{-1} \Xi), \intertext{and} \lambda q (\text{Id})[1] \oplus \lambda^{-1} q^{-1} (\text{Id}) [-1] &\xrightarrow{\simeq} \bar \B_1 \circ \Xi \circ \B_1, \end{align*} of dg-functors. \end{thmd} One of the main difficulties in establishing the results above is that, in order to compute derived tensor products, we have to take left (resp. right) cofibrant replacements of several dg-bimodules. As observed in~\cite[\S 2.3]{MW}, while the left (resp. right) module structure remains unchanged when passing to the left (resp. right) cofibrant replacement, the right (resp. left) module structure is preserved only in the $A_\infty$ sense. As a consequence, constructing natural transformations between compositions of derived tensor product functors often requires the use of $A_\infty$-bimodule maps. We have tried as much as possible to avoid this situation by replacing the potentially unwieldy $A_\infty$-bimodules by quasi-isomorphic dg-bimodules. Let $\mathfrak B_r$ be a certain subcategory (see \cref{sec:blob2cat}) of the derived dg-category of $(T^{\lambda,r},0)$-$(T^{\lambda,r},0)$-bimodules generated by the dg-bimodules corresponding to the dg-functors identity, $\Xi^{\pm 1}$ and $\B_i \circ \overline{\B}_i $. Given two dg-bimodules in $\mathfrak B_r$, we can compose them in the derived sense by replacing both of them with a bimodule cofibrant replacement (i.e. a cofibrant replacement as a dg-bimodule, not merely as a left or right dg-module), and taking the usual tensor product. This gives a dg-bimodule, isomorphic to the derived tensor product of the two initial dg-bimodules. In particular, it equips ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(\mathfrak{B}_r)$ with a ring structure.
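As a consistency check, the quasi-isomorphisms of \cref{cor:qi-Xquadratic} and \cref{cor:qi-bubbleremv} decategorify to the defining relations of the blob algebra. Writing $\xi := [\Xi]$ in the Grothendieck group, and using that $[\cone(A \rightarrow B)] = [B] - [A]$ and that a homological shift $[1]$ contributes a sign, the first quasi-isomorphism gives
\[
q^2 - \lambda q^2 \xi \;=\; \lambda^{-1} \xi - \xi^2,
\qquad\text{that is,}\qquad
q^{-1} \xi^2 \;=\; (\lambda q + \lambda^{-1} q^{-1})\, \xi - q ,
\]
which is the quadratic blob relation, while the second one gives $[\bar \B_1 \circ \Xi \circ \B_1] = -(\lambda q + \lambda^{-1} q^{-1})$, matching the evaluation of a blobbed circle.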
We show that $\mathfrak B_r$ is a categorification of the blob algebra $\mathcal{B}_r$ with ground ring extended to $\mathbb{Q}\pp{q,\lambda}$: \begin{citecor}{cor:twoblobalg} There is an isomorphism of $\mathbb{Q}\pp{q,\lambda}$-algebras \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(\mathfrak{B}_r) \cong \mathcal{B}_r(q,\lambda). \] \end{citecor} This result generalizes to the blob category. However, a technical issue we find here is that dg-categories up to quasi-equivalence do not form a 2-category, but rather an $(\infty,2)$-category~\cite{faonte}. Concretely in our case, we consider a sub-$(\infty,2)$-category of this $(\infty,2)$-category, where the objects are the derived dg-categories $\mathcal{D}_{dg}(T^{\lambda,r},0)$ for various $r \in \mathbb{N}$, and the $1$-hom are generated by the dg-functors identity, $\Xi^{\pm 1}$, $\overline{\B}_i $ and $\B_i$. Moreover, these 1-hom are stable $(\infty,1)$-categories, and thus their homotopy categories are triangulated (see \cite{lurie}). In particular, we write ${}_\mathbb{Q}\boldsymbol{K}_0^\Delta(\mathfrak{B}) $ for the category with the same objects as $\mathfrak{B}$ and with hom-spaces given by the asymptotic Grothendieck groups of the homotopy category of the $1$-hom of $\mathfrak{B}$. By~\cite{faonte} and~\cite{toen}, we can compute these hom-spaces by considering usual derived categories of dg-bimodules, and we obtain the following, again after extending the ground rings to $\mathbb{Q}\pp{q,\lambda}$: \begin{citecor}{cor:twoblob} There is an equivalence of categories \[ {}_\mathbb{Q}\boldsymbol{K}_0^\Delta(\mathfrak{B}) \cong \mathcal{B}. \] \end{citecor} \subsubsection{The general case: symmetrizable ${\mathfrak{g}}$} The definition of dg-enhanced KLRW algebras in~\cref{sec:dgWebster} generalizes immediately to any symmetrizable ${\mathfrak{g}}$. We indicate this generalization in~\cref{sec:dgWebstergeneral}. 
We expect that the results of~\cref{sec:dgWebster} and \cref{sec:catTensProd} extend to this case without difficulty. \subsubsection{Quiver Schur algebras} Quiver Schur algebras were introduced geometrically by Stroppel and Webster in~\cite{SW} to give a graded version of the cyclotomic $q$-Schur algebras of Dipper, James and Mathas~\cite{DJM}. Independently, Hu and Mathas~\cite{HM} constructed a graded Morita equivalent variant of the quiver Schur algebras in~\cite{SW} as graded quasi-hereditary covers of cyclotomic KLR algebras for linear quivers. While the construction in~\cite{SW} is geometric, the construction in~\cite{HM} is combinatorial/algebraic. More recently, Khovanov, Qi and Sussan~\cite{KSY} gave a variant of the quiver Schur algebras in~\cite{HM} for the case of cyclotomic nilHecke algebras, and showed that Grothendieck groups of their algebras can be identified with tensor products of integrable modules of $U_q(\mathfrak{sl}_2)$. Following similar ideas, in~\cref{sec:dgqSchur} we construct a dg-algebra, which we conjecture to be the quiver Schur variant of the dg-enhanced KLRW algebra of~\cref{sec:dgWebster} (Conjectures~\ref{conj:dgSchurCycSchur} and~\ref{conj:WebQSchur}). \subsubsection{Appendix} We have moved the most computational proofs to \cref{sec:computations}, leaving only a sketch of some of the proofs in the main text. The reader can also find in \cref{sec:dgcat} some explanations and results about homological algebra, $A_\infty$-structures and asymptotic Grothendieck groups. \subsection{Possible future directions and applications} \subsubsection{Khovanov homology for tangles of type B} The topological interpretation of the blob algebra in~\cite[\S3.4]{swTLB} gives rise to a Jones polynomial for tangles of type B (i.e. tangles in the annulus). 
We expect that by introducing braiding functors as in \cite{webster}, we obtain a link homology of type B, yielding invariants of links in the annulus akin to the ones introduced by Asaeda--Przytycki--Sikora~\cite{APS} (see also~\cite{AGW,GLW,QR-annular}). Given a link in the annulus, the invariant obtained from our construction would be a dg-endofunctor of the derived dg-category of dg-modules over the dg-enhanced KLRW algebra $(T^{\lambda,\emptyset},0)$. This means that the empty link is sent to the dg-endomorphism space of the identity functor, which coincides with the Hochschild cohomology of $T^{\lambda,\emptyset}$, and is infinite-dimensional (the center of $T^{\lambda,\emptyset}$ is already infinite-dimensional). By restricting to the subcategory of dg-modules over $(T^{\lambda,\emptyset}_0,0)$, it becomes $1$-dimensional since $T^{\lambda,\emptyset}_0 \cong \Bbbk$. With this restriction, we conjecture that our ``would-be'' invariant coincides with the usual annular Khovanov homology. The following is a work in progress with A. Wilbert. As is the case with Webster's machinery~\cite{webster}, computing the tangle invariant of type B using our framework could be unwieldy. A more computation-friendly alternative could be to use dg-bimodules over annular arc algebras constructed using the annular TQFT of~\cite{APS}, as done in~\cite[\S5.3]{annulararcalg} (see also \cite[\S5]{ehrigtubbenhauer}). Furthermore, evidence shows that there is an (at least weak) categorical action of the blob algebra on the derived category of dg-modules over these annular arc algebras. In a different direction, one could try to extend our results to construct a Khovanov invariant for links in handlebodies, in the spirit of the handlebody HOMFLY--PT-link homology of Rose--Tubbenhauer in~\cite{RT}. \subsubsection{Constructions using homotopy categories} KLRW algebras are given diagrammatically, which is often an appropriate framework for constructions with an additive flavor.
Nevertheless, the functors realizing the various intertwiners and the braiding need to pass to derived categories of modules. This makes it harder to describe explicitly the 2-categories realizing these symmetries, since a bimodule for two of those algebras induces an $A_\infty$-bimodule on the level of derived categories. This was pointed out by Mackaay and Webster in~\cite{MW}, who gave explicit constructions of categorified intertwiners in order to prove the equivalence between the several existing $\mathfrak{gl}_n$-link homologies. One of the things~\cite{MW} tells us is how to construct homotopy versions of Webster's categorifications. A construction using homotopy categories for the results in this paper seems desirable from our point of view. We hope it can be done either by mimicking~\cite{MW}, which can turn out to be a technically challenging problem, or alternatively, by a construction of dg-enhancements for redotted Webster algebras, as considered in~\cite{KhS} and \cite{KhLSY} to give a homotopical version of some of the above, but whose low-tech presentation might hide difficulties. \subsubsection{Generalized blob algebras and variants} The results of~\cite{swTLB} were extended in~\cite{LV}, where the first and third authors have computed the endomorphism algebra of the $U_q(\mathfrak{gl}_m)$-module $M^{{\mathfrak{p}}}(\Lambda)\otimes V^{\otimes n}$ for $M^{{\mathfrak{p}}}(\Lambda)$ a parabolic universal Verma module and $V$ the natural module of $U_q(\mathfrak{gl}_m)$; this endomorphism algebra is always a quotient of an Ariki--Koike algebra. As particular cases (depending on ${\mathfrak{p}}$ and the relation between $n$ and $m$) we obtain Hecke algebras of type $B$ with two parameters, the generalized blob algebra of Martin and Woodcock~\cite{generalized_blob}, or the Ariki--Koike algebra itself. With this result in mind, it is tantalizing to ask for an extension to $\mathfrak{gl}_m$ of the work in this paper.
Modulo technical difficulties, the methods in this paper could work for $\mathfrak{gl}_m$ in the case of a parabolic Verma module for a 2-block parabolic subalgebra, which is the case where the generators of the endomorphism algebra satisfy a quadratic relation. Constructing a categorification of the Ariki--Koike algebra or the generalized blob algebra as the blob 2-category in~\cref{sec:catTLB} looks quite challenging at the moment, in particular for a functor-realization of the cyclotomic relation and the relation $\tau=0$ (for the generalized blob algebra in the presentation given in~\cite[Theorem 2.24]{LV}). \smallskip \subsection*{Acknowledgments} The authors thank Catharina Stroppel for interesting discussions, and for pointing us to~\cite{Martin-Saleur}, helping to clarify the confusion with the terminology of ``blob algebra'' and ``Temperley--Lieb algebra of type B''. The authors would also like to thank the referee for his/her numerous, detailed and helpful comments. A.L. was supported by the Fonds de la Recherche Scientifique - FNRS under Grant no.~MIS-F.4536.19. G.N. was a Research Fellow of the Fonds de la Recherche Scientifique - FNRS, under Grant no.~1.A310.16, when starting to work on this project. G.N. is also grateful to the Max Planck Institute for Mathematics in Bonn for its hospitality and financial support. P.V. was supported by the Fonds de la Recherche Scientifique - FNRS under Grant no.~MIS-F.4536.19. \section{Quantum $\mathfrak{sl}_2$ and the blob algebra}\label{sec:qsltTLB} \subsection{Quantum $\mathfrak{sl}_2$} Recall that \emph{quantum $\mathfrak{sl}_2$} can be defined as the $\mathbb{Q}(q)$-algebra $U_q(\mathfrak{sl}_2)$, with generic $q$, generated by $K,K^{-1}, E$ and $F$ with relations \begin{align*} &KE = q^2EK, & &KK^{-1} = 1 = K^{-1}K, \\ &KF = q^{-2}FK, & &EF - FE = \frac{K-K^{-1}}{q-q^{-1}}.
\end{align*} Quantum $\mathfrak{sl}_2$ becomes a bialgebra when endowed with comultiplication \begin{align*} \Delta(K^{\pm 1}) &:= K^{\pm 1} \otimes K^{\pm 1}, & \Delta(E) &:= E \otimes 1 + K^{-1} \otimes E, & \Delta(F) &:= F \otimes K + 1 \otimes F, \end{align*} and with counit $\varepsilon(K^{\pm 1}) := 1$, $\varepsilon(E) := \varepsilon(F) := 0$. There is a $\mathbb{Q}(q)$-linear anti-involution $\bar \tau$ of $U_q(\mathfrak{sl}_2)$ defined on the generators by \begin{align} \bar \tau(E) &:= q^{-1}K^{-1}F, & \bar \tau(F) &:= q^{-1}KE, & \bar \tau(K) &:= K. \end{align} It is easily checked that \begin{equation} \label{eq:tauCoprod} \Delta\circ \bar \tau = (\bar \tau \otimes \bar \tau) \circ \Delta. \end{equation} \subsubsection{Integrable modules} For each $N \in \mathbb{N}$, there is a finite-dimensional irreducible $U_q(\mathfrak{sl}_2)$-module $V(N)$, called \emph{integrable module}, with basis $v_{N,0}, v_{N,1}, \dots, v_{N,N}$ and \begin{align*} K \cdot v_{N,i} &:= q^{N-2i} v_{N,i}, \\ F \cdot v_{N,i} &:= v_{N,i+1}, \\ E \cdot v_{N,i} &:= [i]_q [N-i+1]_q v_{N,i-1}, \end{align*} where $[n]_q$ is the $n$-th \emph{quantum integer} \[ [n]_q := \frac{q^n-q^{-n}}{q-q^{-1}} = q^{n-1} + q^{n-3} + \cdots + q^{1-n}. \] In particular, let $V := V(1)$ be the \emph{fundamental $U_q(\mathfrak{sl}_2)$-module}. \smallskip The module $V(N)$ can be equipped with the \emph{Shapovalov form} \[ (-,- )_N : V(N) \times V(N) \rightarrow \mathbb{Q}(q), \] which is a non-degenerate bilinear form such that $(v_{N,0}, v_{N,0})_N = 1$ and which is $\bar \tau$-Hermitian: for any $v,v' \in V(N)$ and $u \in U_q(\mathfrak{sl}_2)$, we have $(u\cdot v, v')_N= (v, \bar \tau(u)\cdot v')_N$. A computation shows that \[ (v_{N,i}, v_{N,j})_N = \delta_{i,j}q^{i(N-i)}\frac{[i]_q![N]_q!}{[N-i]_q!}, \] where $[0]_q! := 1$ and $[n]_q! := [n]_q[n-1]_q\ldots [2]_q[1]_q$. \subsubsection{Verma modules} Let $\beta$ be a formal parameter and write $\lambda := q^{\beta}$ as a formal variable.
Let ${\mathfrak{b}}$ be the standard upper Borel subalgebra of $\mathfrak{sl}_2$ and $U_q({\mathfrak{b}})$ be its quantum version. It is the subalgebra of $U_q(\mathfrak{sl}_2)$ generated by $K,K^{-1}$ and $E$. Let $K_\lambda$ be a $1$-dimensional $\mathbb{Q}(\lambda,q)$-vector space, with fixed basis element $v_\lambda$. We endow $K_\lambda$ with a $U_q({\mathfrak{b}})$-action by declaring that: \begin{align*} K^{\pm 1} v_\lambda &:= \lambda^{\pm 1} v_\lambda, & E v_\lambda &:= 0, \end{align*} extending linearly through the obvious inclusion $\mathbb{Q}(q) \hookrightarrow \mathbb{Q}(q,\lambda)$. The universal \emph{Verma module $M(\lambda)$} is the induced module \[ M(\lambda) := U_q(\mathfrak{sl}_2) \otimes_{U_q({\mathfrak{b}})} K_\lambda. \] It is irreducible and infinite-dimensional with $\mathbb{Q}(q,\lambda)$-basis $v_{\lambda,0} := v_\lambda, v_{\lambda,1}, \dots , v_{\lambda,i} , \dots$ and \begin{align*} K \cdot v_{\lambda,i} &:= \lambda q^{-2i} v_{\lambda,i}, \\ F \cdot v_{\lambda,i} &:= v_{\lambda, i+1}, \\ E \cdot v_{\lambda,i} &:= [i]_q [\beta-i+1]_q v_{\lambda, i-1}, \end{align*} where \[ [\beta+k]_q := \frac{\lambda q^k - \lambda^{-1} q^{-k}}{q-q^{-1}}. \] \smallskip The Verma module $M(\lambda)$ can also be equipped with a Shapovalov form $(\cdot,\cdot)_\lambda$, which is again a non-degenerate bilinear form such that $(v_\lambda, v_\lambda)_\lambda = 1$ and which is $\bar \tau$-Hermitian: for any $v,v' \in M(\lambda)$ and $u \in U_q(\mathfrak{sl}_2)$, we have $(u\cdot v, v')_\lambda = (v, \bar \tau(u)\cdot v')_\lambda$. One easily calculates that \[ (v_{\lambda,i}, v_{\lambda,j})_\lambda = \delta_{i,j}\lambda^iq^{-i^2}[i]_q![\beta]_q[\beta-1]_q\cdots[\beta-i+1]_q. \] \subsubsection{Tensor products} Given $W$ and $W'$ two $U_q(\mathfrak{sl}_2)$-modules, their tensor product $W \otimes W'$ is again a $U_q(\mathfrak{sl}_2)$-module with the action induced by $\Delta$.
Explicitly, \begin{align*} K^{\pm 1} \cdot (w \otimes w') &:= K^{\pm 1} w \otimes K^{\pm 1} w', \\ F \cdot (w \otimes w') &:= F w \otimes K w' + w \otimes Fw', \\ E \cdot (w \otimes w') &:= E w \otimes w' + K^{-1} w \otimes E w', \end{align*} for all $w \in W$ and $w' \in W'$. \smallskip For $\underline{N}=(N_1,\ldots,N_r)\in \mathbb{N}^r$ we write $V(\underline{N}) := V(N_1) \otimes \cdots \otimes V(N_r)$ and $M\otimes V(\underline{N}):=M(\lambda)\otimes V(N_1) \otimes \cdots \otimes V(N_r)$. In the particular case $N_1=\dotsm =N_r=1$, we write $V^r$ for the $r$-fold tensor product $V \otimes V \otimes \cdots \otimes V$. \subsubsection{Weight spaces} The module $M\otimes V(\underline{N})$ decomposes into \emph{weight spaces} \[ M\otimes V(\underline{N})_{\lambda q^{k}} := \{ v \in M\otimes V(\underline{N}) | Kv = \lambda q^k v \}. \] Note that we have $M\otimes V(\underline{N}) \cong \bigoplus_{\ell \geq 0} M\otimes V(\underline{N})_{\lambda q^{|\und N| - 2\ell}} $, where $|\und N| := \sum_i N_i$. \subsubsection{Basis} Let $\mathcal{P}_{b}^{r}$ be the set of weak compositions of $b$ into $r+1$ parts, that is: \[ \mathcal{P}_{b}^{r}:=\left\{(b_0,b_1,\dots,b_r)\in\mathbb{N}^{r+1}\ \middle\vert\ \sum_{i=0}^r b_i = b\right\}. \] Consider also \[ \mathcal{P}_{b}^{r, \und N} := \left\{ (b_0, b_1, \dots, b_r)\in\mathcal{P}_{b}^{r} | b_i \leq N_i \text{ for $1\leq i \leq r$}\right\} \subset \mathcal{P}_{b}^{r}. \] In addition to the basis induced by the tensor product, the space $M\otimes V(\underline{N})$ admits a basis that will be particularly useful for categorification. For $\rho = (b_0, \dots, b_r) \in \mathcal{P}_{b}^{r}$, we write \[ v_\rho := F^{b_r}\left( \cdots F^{b_1} \left( F^{b_0}(v_\lambda) \otimes v_{N_1,0} \right) \cdots \otimes v_{N_r,0} \right). \] Then, $M\otimes V(\underline{N})$ has a basis given by \[ \left\{ v_\rho | \rho \in \mathcal{P}_{b}^{r, \und N}, b \geq 0 \right\}.
\] In particular, we have that $M\otimes V(\underline{N})_{\lambda q^{|\und N|- 2b}}$ has a basis given by $\{ v_\rho \}_{\rho \in \mathcal{P}_{b}^{r, \und N}}$. One can describe inductively the change of basis from $\{ v_\rho \}_{\rho \in \mathcal{P}_{b}^{r, \und N}}$ to the induced basis as follows: \[ v_{(b_0,\ldots,b_r)}=\sum_{k=0}^{\min(b_r,N_r)}q^{(N_r-k)(b_r-k)}\qbinom{b_r}{k}v_{(b_0,\ldots,b_{r-1}+b_r-k)}\otimes v_{N_r,k}, \] for any $(b_0,\ldots,b_r)\in\mathcal{P}_{b}^{r}$ and \[ v_{(b_0,\ldots,b_{r-1})}\otimes v_{N_r,n} = \sum_{k=0}^n (-1)^{n-k}q^{(n-k)(N_r-n+1)}\qbinom{n}{k}v_{(b_0,\ldots,b_{r-1}+n-k,k)}, \] for any $(b_0,\ldots,b_{r-1})\in \mathcal{P}_{b}^{r-1}$ and $0\leq n \leq N_r$, with $\qbinom{n}{k}:=\frac{[n]_q!}{[k]_q![n-k]_q!}$. We can also use these formulas to inductively rewrite a vector $v_{\rho}$ with $\rho\in \mathcal{P}_{b}^{r}$ in terms of various $v_{\kappa}$ for $\kappa\in\mathcal{P}_{b}^{r,\und N}$. Indeed, we have \[ v_{(b_0,\ldots,b_r)} = \sum_{k=0}^{\min(b_r,N_r)}\frac{\displaystyle\prod_{j=0,j\neq k}^{N_r}[b_r-j]_q}{\displaystyle\prod_{j=0,j\neq k}^{N_r}[k-j]_q}v_{(b_0,\ldots,b_{r-1}+b_r-k,k)}, \] for any $(b_0,\ldots,b_r)\in\mathcal{P}_{b}^{r}$.
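For instance, for $N_r = 1$ and $b_r = 1$ the first formula can be checked directly: since $F \cdot (w \otimes v_{1,0}) = Fw \otimes Kv_{1,0} + w \otimes v_{1,1}$ and $K v_{1,0} = q\, v_{1,0}$, we get \[ v_{(b_0,\ldots,b_{r-1},1)} = F\left( v_{(b_0,\ldots,b_{r-1})} \otimes v_{1,0} \right) = q\, v_{(b_0,\ldots,b_{r-1}+1)} \otimes v_{1,0} + v_{(b_0,\ldots,b_{r-1})} \otimes v_{1,1}, \] which are precisely the terms $k=0$ and $k=1$ of the sum.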
\subsubsection{Shapovalov forms for tensor products}\label{sec:shepfortensor} Following \cite[\S4.7]{webster}, we consider a family of bilinear forms $(\cdot,\cdot)_{\lambda,\underline{N}}$ on tensor products of the form $M(\lambda)\otimes V(\underline{N})$ satisfying the following properties: \begin{enumerate} \item each form $(\cdot,\cdot)_{\lambda,\underline{N}}$ is non-degenerate; \item for any $v,v'\in M(\lambda)\otimes V(\underline{N})$ and $u\in U_q(\mathfrak{sl}_2)$ we have $(u \cdot v,v')_{\lambda,\underline{N}} = (v, \bar \tau(u)\cdot v')_{\lambda,\underline{N}}$; \item for any $f\in \mathbb{Q}(q,\lambda)$ and $v,v'\in M(\lambda)\otimes V(\underline{N})$, we have $(f v,v')_{\lambda,\underline{N}} = (v,fv')_{\lambda,\underline{N}} = f(v,v')_{\lambda,\underline{N}}$; \item if $v,v'\in M(\lambda)\otimes V(\underline{N})$, then we have $(v,v')_{\lambda,\underline{N}} = (v\otimes v_{N,0},v'\otimes v_{N,0})_{\lambda,\underline{N'}}$ where $\underline{N'}=(N_1,\ldots,N_r,N)$. \end{enumerate} Similarly to \cite[Proposition 4.33]{webster} we have: \begin{prop} There exists a unique system of such bilinear forms which are given by \[ (v , v')_{\lambda,\underline{N}} = (v, v')_{\lambda,\underline{N}}^{\Pi}, \] for every $v,v'\in M(\lambda)\otimes V(\underline{N})$ where $(\cdot,\cdot)_{\lambda,\underline{N}}^{\Pi}$ is the product of the universal Shapovalov form on $M(\lambda)$ and of the Shapovalov forms on the various $V(N_i)$. 
\end{prop} \subsection{The blob algebra}\label{sec:blobalgebra} Recall that the \emph{blob algebra} $\mathcal{B}_r$ is the $\mathbb{Q}(\lambda,q)$-algebra with generators $u_1, \dots, u_{r-1}$ and $\xi$, and with the relations of type A: \begin{align} u_i u_j &= u_j u_i, & \text{for $|i-j| > 1$,}& \label{eq:TLrels}\\ u_i u_{i+1} u_i &= u_i, & \text{for $1 \leq i \leq r-2$,}& \label{eq:TLrels2} \\ u_i u_{i-1} u_i &= u_i, & \text{for $2 \leq i \leq r-1$,}& \label{eq:TLrels3} \\ u_i^2 &= -(q+q^{-1})u_i, & \text{for $1 \leq i \leq r-1$,}& \label{eq:TLloopremov} \end{align} and the blob relations: \begin{align} \xi u_i &= u_i \xi, &\text{for $2 \leq i \leq r-1$,} \label{eq:TLBrels} \\ u_1 \xi u_1 &= -(\lambda q + \lambda^{-1} q^{-1}) u_1, \label{eq:TLBloopremov} \\ q^{-1} \xi^2 &= (\lambda q + \lambda^{-1} q^{-1}) \xi - q. \label{eq:TLBdoublebraid} \end{align} Note that $\xi$ is invertible, with inverse given by $\xi^{-1} = \lambda+q^{-2}\lambda^{-1} -q^{-2}\xi$, and that the relations \eqref{eq:TLrels}-\eqref{eq:TLloopremov} imply that the elements $u_1,\dotsc, u_{r-1}$ generate a subalgebra isomorphic to the Temperley--Lieb algebra of type $A$. The blob algebra has several well-known diagrammatic presentations. The most classical one already appeared in~\cite{Martin-Saleur}, but (a slight modification of) the one in~\cite{swTLB} is more convenient for our purposes. This presentation is given by setting \begin{align*} u_i &= \tikzdiagh{0}{ \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); \draw[red] (0,0) -- (0,1); \node at (.5,.5){\small $\dots$}; \draw[red] (1,0) -- (1,1); \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. (2,0); \draw[red] (1.5, 1) .. controls (1.5,.5) and (2,.5) .. (2,1); \draw[red] (2.5,0) -- (2.5,1); \node at (3,.5){\small $\dots$}; \draw[red] (3.5,0) -- (3.5,1); \tikzbrace{-.5}{1}{0}{$i$}; } \\ \xi &= \tikzdiag{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) .. controls (0,.75) ..
(0,1); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[red] (.5,0) -- (.5,1); \node at (1,.5){\small $\dots$}; \draw[red] (1.5,0) -- (1.5,1); \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); } \end{align*} where diagrams are taken up to planar isotopy and read from bottom to top, and with local relations \begin{align}\allowdisplaybreaks \tikzdiag{ \draw[red] (0, 0) .. controls (0,.5) and (.5,.5) .. (.5,0) .. controls (.5,-.5) and (0,-.5) .. (0,0); } \ &= \ -(q+q^{-1}), \tag{\ref{eq:TLloopremov}} \\ \tikzdiag{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1) .. controls (0,1.5) and (.5,1.5) .. (.5,1) -- (.5,0) .. controls (.5,-.5) and (0,-.5) .. (0,0); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[ultra thick,myblue] (-.5,-.5) -- (-.5,1.5); } \ &= -(\lambda q+\lambda^{-1}q^{-1}) \ \tikzdiag{ \draw[ultra thick,myblue] (-.5,-.5) -- (-.5,1.5); } \tag{\ref{eq:TLBloopremov}} \\ q^{-1} \tikzdiag{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1) .. controls (0,1.25) .. (-.5,1.5) .. controls (0,1.75) .. (0,2); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[fill=white, color=white] (-.52,1.5) circle (.02cm); \draw[ultra thick,myblue] (-.5,0) -- (-.5,2); } \ &= \ (\lambda q+\lambda^{-1}q^{-1}) \tikzdiag{ \draw[red] (0,0) -- (0,.5) .. controls (0,.75) .. (-.5,1) .. controls (0,1.25) .. (0,1.5) -- (0,2); \draw[fill=white, color=white] (-.52,1) circle (.02cm); \draw[ultra thick,myblue] (-.5,0) -- (-.5,2); } \ - q \ \tikzdiag{ \draw[red] (0,0) -- (0,2); \draw[ultra thick,myblue] (-.5,0) -- (-.5,2); } \tag{\ref{eq:TLBdoublebraid}} \end{align} corresponding to \eqref{eq:TLloopremov}, \eqref{eq:TLBloopremov} and \eqref{eq:TLBdoublebraid} (explaining why we kept the same numbering). Note that the relations \eqref{eq:TLrels} -- \eqref{eq:TLrels3} and \eqref{eq:TLBrels} are encoded by the planar isotopies. 
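Note that the formula for $\xi^{-1}$ given above is a direct consequence of \eqref{eq:TLBdoublebraid}: multiplying the relation by $q$ yields $\xi^2 = (\lambda q^2 + \lambda^{-1})\xi - q^2$, hence \[ \xi \left( (\lambda + q^{-2}\lambda^{-1}) - q^{-2}\xi \right) = 1 = \left( (\lambda + q^{-2}\lambda^{-1}) - q^{-2}\xi \right) \xi, \] so that $\xi^{-1} = \lambda + q^{-2}\lambda^{-1} - q^{-2}\xi$.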
\begin{rem}\label{rem:dbbraiding} In the graphical description of $\mathcal{B}_r$ given in~\cite{swTLB} the generator $\xi$ is presented as a double braiding (see~\cite[Figure 1]{swTLB}). We don't follow that interpretation in our diagrammatics in order to simplify pictures, but we keep the terminology (see~\S\ref{def:dbbraiding} ahead). \end{rem} \begin{rem} With respect to~\cite{swTLB} our conventions switch $(\lambda,q)$ and $(\lambda^{-1},q^{-1})$, which can be interpreted as exchanging the double braiding for the double inverse braiding. \end{rem} There is an action of $\mathcal{B}_r$ on $M \otimes V^r$ that intertwines with the quantum $\mathfrak{sl}_2$-action. This action can be described locally, identifying the first vertical strand in $\mathcal{B}_r$ with the identity on $M(\lambda)$, and the $(i+1)$-th vertical strand with the identity on the $i$-th copy of $V$ in $M \otimes V^r$. Then the action is given by the following maps \begin{align*} \tikzdiag{ \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. (2,0); } &: V \otimes V \rightarrow \mathbb{Q}(q,\lambda),\ \begin{cases} v_{1,0} \otimes v_{1,0} &\mapsto 0, \\ v_{1,0} \otimes v_{1,1} &\mapsto 1, \\ v_{1,1} \otimes v_{1,0} &\mapsto -q^{-1}, \\ v_{1,1} \otimes v_{1,1} &\mapsto 0, \\ \end{cases} \\ \tikzdiag[yscale=-1]{ \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. (2,0); } &: \mathbb{Q}(q,\lambda)\rightarrow V \otimes V, \ 1 \mapsto -q v_{1,0} \otimes v_{1,1} + v_{1,1} \otimes v_{1,0}, \\ \tikzdiag{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) .. controls (0,.75) ..
(0,1); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); } &: M \otimes V \rightarrow M \otimes V,\ \begin{cases} v_{\lambda,k} \otimes v_{1,0} &\mapsto \lambda^{-1}q^{2k}v_{\lambda,k} \otimes v_{1,0}\\ &\qquad - q(q-q^{-1})[k]_q [\beta-k+1]_q v_{\lambda,k-1} \otimes v_{1,1},\\ v_{\lambda,k} \otimes v_{1,1} &\mapsto (\lambda^{-1}+\lambda q^2-\lambda^{-1}q^{2(k+1)})v_{\lambda,k} \otimes v_{1,1}\\ &\qquad-\lambda^{-1}q^{2(k+1)}(q-q^{-1})v_{\lambda,k+1} \otimes v_{1,0}, \end{cases} \end{align*} where the formula for $\xi$ is obtained by acting twice with an $R$-matrix. In our conventions, we have $\xi=f\circ \Theta_{21} \circ f \circ \Theta$ where $\Theta$ is given by the action of \[ \sum_{n=0}^{+\infty}(-1)^nq^{-n(n-1)/2}\frac{(q-q^{-1})^n}{[n]_q!}F^n\otimes E^n, \] $\Theta_{21}$ by the action of \[ \sum_{n=0}^{+\infty}(-1)^nq^{-n(n-1)/2}\frac{(q-q^{-1})^n}{[n]_q!}E^n\otimes F^n, \] $f(v_{\lambda,k}\otimes v_{1,0}) := \lambda^{-1/2}q^{k}v_{\lambda,k}\otimes v_{1,0}$ and $f(v_{\lambda,k}\otimes v_{1,1}) := \lambda^{1/2}q^{-k}v_{\lambda,k}\otimes v_{1,1}$ for any $k\in\mathbb{N}$. The following will be useful later: \begin{lem}\label{lem:explicitaction} The action of $\mathcal{B}_r$ translates in terms of $v_\rho$-vectors of $M \otimes V^r$ as \begin{align} \label{eq:caponk} \tikzdiagh[scale=0.75]{2}{ \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); \draw[red] (0,0) -- (0,1); \node at (.5,.5){\small $\dots$}; \draw[red] (1,0) -- (1,1); \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. 
(2,0) ; \draw[red] (2.5,0) -- (2.5,1); \node at (3,.5){\small $\dots$}; \draw[red] (3.5,0) -- (3.5,1); % \tikzbrace{-.5}{1}{-0.2}{$i$}; } :& v_{(\dots, b_{i-1}, b_i, b_{i+1}, b_{i+2}, \dots)} \mapsto -q^{-1} [b_i]_q v_{(\dots, b_{i-1} + b_i + b_{i+1} - 1, b_{i+2}, \dots)}, \\ \label{eq:cuponk} \tikzdiagh[scale=0.75,yscale=-1]{2}{ \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); \draw[red] (0,0) -- (0,1); \node at (.5,.5){\small $\dots$}; \draw[red] (1,0) -- (1,1); \draw[red] (1.5, 0) .. controls (1.5,.5) and (2,.5) .. (2,0) ; \draw[red] (2.5,0) -- (2.5,1); \node at (3,.5){\small $\dots$}; \draw[red] (3.5,0) -- (3.5,1); % \tikzbrace{-.5}{1}{1.9}{$i$}; } :&v_{\rho} \mapsto q[2]_q v_{(\dots, b_{i-1},1 ,0 ,b_{i}, \dots)} - q v_{(\dots, b_{i-1}+1, 0, 0,b_{i}, \dots)} -q v_{(\dots, b_{i-1},0 , 1,b_{i}, \dots)}, \\ \label{eq:xionk} \tikzdiag[scale=0.75]{ \draw[red] (0,0) .. controls (0,.25) .. (-.5,.5) .. controls (0,.75) .. (0,1); \draw[fill=white, color=white] (-.52,.5) circle (.02cm); \draw[red] (0.5,0) -- (0.5,1); \node at (1,0.5){\small $\dots$}; \draw[red] (1.5,0) -- (1.5,1); \draw[ultra thick,myblue] (-.5,0) -- (-.5,1); } :&v_{(b_0 , b_1 ,\dots)} \mapsto (\lambda^{-1}q^{b_0} - \lambda q [b_0]_q) v_{(0,b_0+b_1,\dots)} + \lambda q^2 [b_0]_q v_{(1,b_0+b_1-1,\dots)}. \end{align} \end{lem} \begin{proof} A computational proof is given in \cref{sec:computations}. \end{proof} As a matter of fact, this completely determines $ \End_{U_q(\mathfrak{sl}_2)}(M \otimes V^r)$: \begin{thm}[{\cite[Theorem 4.9]{swTLB}}]\label{thm:BcongMV} There is an isomorphism \begin{equation}\label{eq:BcongMV} \mathcal{B}_r \cong \End_{U_q(\mathfrak{sl}_2)}(M \otimes V^r). 
\end{equation} \end{thm} The \emph{blob category $\mathcal{B}$} is the $\mathbb{Q}(\lambda,q)$-linear category given by \begin{itemize} \item objects are non-negative integers $r \in \mathbb{N}$; \item $\Hom_{\mathcal{B}}(r,r')$ is given by $\mathbb{Q}(\lambda,q)$-linear combinations of string diagrams connecting $r+1$ points on the bottom to $r'+1$ points on the top, with the first strand always connecting the left-most point to the left-most point, where the strings cannot intersect each other except for diagrams like $\xi$. Diagrams are considered up to planar isotopy and subject to the relations \eqref{eq:TLloopremov}, \eqref{eq:TLBloopremov} and \eqref{eq:TLBdoublebraid}. \end{itemize} Let $\cT\cL$ be the Temperley--Lieb category of type $A$, defined diagrammatically. It is a $\mathbb{Q}(q)$-linear monoidal category equivalent to $\mathcal{F} und(\mathfrak{sl}_2)$, the full monoidal subcategory of $ U_q(\mathfrak{sl}_2)\amod$ generated by $V$. Note that $\mathcal{B}$ can be endowed with a structure of module category over $\cT\cL$, by gluing diagrams on the right. Also consider the full subcategory $\cM\cV \subset U_q(\mathfrak{sl}_2)\amod$ given by the modules $M(\lambda) \otimes V^{\otimes r}$ for all $r \in \mathbb{N}$. It is a module category over $\mathcal{F} und(\mathfrak{sl}_2)$ by acting on the right with tensor product of $U_q(\mathfrak{sl}_2)$-modules. \begin{thm}[{\cite[Theorem 4.9]{swTLB}}] There are equivalences of categories such that \[ \begin{tikzcd}[ampersand replacement=\&] \mathcal{B} \ar{d}{\rotatebox{-90}{\(\simeq\)}} \& \ar[swap]{l}{\otimes \text{acts}} \mathcal{T}\mathcal{L} \ar{d}{\rotatebox{-90}{\(\simeq\)}} \\ \mathcal{M}\mathcal{V} \& \ar{l}{\otimes \text{acts}} \mathcal{F} und(\mathfrak{sl}_2) \end{tikzcd} \] commutes. \end{thm} \begin{rem} Note that~\cite{swTLB} considers projective Verma modules with integral highest weight. The case of universal Verma modules was studied in~\cite{LV}, albeit not in the categorical setup. \end{rem}
\section{Introduction} \label{sec:introduction} This paper is concerned with the problem of estimating entropy or mutual information of an unknown probability density $p$ over $\R^D$, given $n$ i.i.d. samples from $p$. Entropy and mutual information are fundamental information theoretic quantities, and consistent estimators for these quantities have a host of applications within machine learning, statistics, and signal processing. For example, entropy estimators have been used for goodness-of-fit testing~\citep{goria05new}, parameter estimation in semi-parametric models~\citep{Wolsztynski85minimum}, texture classification and image registration~\citep{hero01alpha,hero02applications}, change point detection~\citep{bercher00estimating}, and anomaly detection in networks~\citep{noble03graphAnomaly,nychis08empirical,berezinski15entropy}. Mutual information is a popular nonparametric measure of dependence, whose estimators have been used in feature selection~\citep{peng05feature,shishkin16efficient}, clustering~\citep{aghagolzadeh07hierarchical}, learning graphical models~\citep{chow68chowLiuTree}, fMRI data processing~\citep{chai09fMRI}, prediction of protein structures~\citep{adami04information}, boosting and facial expression recognition~\citep{shan05conditional}, and fitting deep nonlinear models~\citep{hunter16fittingDeepNonlinearModels}. Estimators for both entropy and mutual information have been used in independent component and subspace analysis \citep{radical03,szabo07undercomplete_TCC}. 
Motivated by these and other applications, several very recent lines of work (discussed in \hyperref[sec:related_work]{Section~\ref{sec:related_work}}) have studied information estimation,\footnote{We will collectively call the closely related problems of entropy and mutual information estimation \emph{information estimation}.} focusing largely on two settings: \begin{enumerate}[wide,noitemsep,topsep=0mm] \item {\bf Gaussian Setting:} If $p$ is known to be Gaussian, there exist information estimators with mean squared error (MSE) at most $-2\log \left(1 - \frac{D}{n} \right)$ and an (almost matching) minimax lower bound of $2D/n$ \citep{cai15logDetCov}. \item {\bf Nonparametric Setting:} If $p$ is assumed to lie in a nonparametric smoothness class, such as an $s$-order\footnote{Here, $s$ encodes the degree of smoothness, roughly corresponding to the number of continuous derivatives of $p$.} H\'older or Sobolev class, then the minimax MSE is of asymptotic order $\asymp \max \left\{ n\inv, n^{-\frac{8s}{4s + D}} \right\}$ \citep{birge95densityFunctionals}. \end{enumerate} In the Gaussian setting, consistent estimation is tractable even in the high-dimensional case where $D$ increases fairly quickly with $n$, as long as $D/n \to 0$. However, optimal estimators for the Gaussian setting rely heavily on the assumption of joint Gaussianity, and their performance can degrade quickly when the data deviate from Gaussian. Especially in high dimensions, it is unlikely that data are jointly Gaussian, making these estimators brittle in practice. In the nonparametric setting, the theoretical convergence rate decays exponentially with $D$, and it has been found empirically that information estimators for this setting fail to converge at realistic sample sizes in all but very low dimensions.
Also, most nonparametric estimators are sensitive to tuning of bandwidth parameters, which is challenging for information estimation, since no empirical error estimate is available for cross-validation. Given these factors, though the Gaussian and nonparametric cases are fairly well understood in theory, there remains a lack of practical information estimators for the common case where data are neither exactly Gaussian nor very low dimensional. The {\bf main goal of this paper} is to fill the gap between these two extreme settings by studying information estimation in a semiparametric compromise between the two, known as the ``nonparanormal'' (a.k.a. ``Gaussian copula'') model (see \hyperref[def:nonparanormal]{Definition \ref{def:nonparanormal}}). The nonparanormal model, analogous to the additive model popular in regression~\citep{friedman81projection}, limits the complexity of interactions among variables but makes minimal assumptions on the marginal distribution of each variable. The resulting model scales better with dimension than nonparametric models, while being more robust than Gaussian models. {\bf Paper Organization:} \hyperref[sec:problem_statement]{Section~\ref{sec:problem_statement}} gives definitions and notation to formalize the nonparanormal information estimation problem. \hyperref[sec:related_work]{Section~\ref{sec:related_work}} discusses the history of the nonparanormal model and prior work on information estimation, motivating our contributions. \hyperref[sec:estimators]{Section~\ref{sec:estimators}} proposes three estimators, while \hyperref[sec:main_results]{Section~\ref{sec:main_results}} presents our theoretical error bounds, proven in the Appendix. \hyperref[sec:empirical]{Section~\ref{sec:empirical}} provides simulation results. While most of the paper discusses mutual information estimation, \hyperref[sec:entropy]{Section~\ref{sec:entropy}} discusses additional considerations arising in entropy estimation.
\hyperref[sec:conc_and_future]{Section \ref{sec:conc_and_future}} presents some concluding thoughts and avenues for future work. \section{Problem statement and notation} \label{sec:problem_statement} There are a number of distinct generalizations of mutual information to more than two variables. The definition we consider is simply the difference between the sum of marginal entropies and the joint entropy: \begin{definition} {\bf (Multivariate mutual information)} Let $X_1,\dots,X_D$ be $\R$-valued random variables with a joint probability density $p~:~\R^D~\to~[0, \infty)$ and marginal densities $p_1,...,p_D : \R \to [0, \infty)$. The \emph{multivariate mutual information $I(X)$ of $X = (X_1,\dots,X_D)$} is defined by \begin{align} \notag I(X) & := \E_{X \sim p} \left[ \log \left( \frac{p(X)} {\prod_{j = 1}^D p_j(X_j)} \right) \right] \\ \label{eq:entropy_relation} & = \sum_{j = 1}^D H(X_j) - H(X), \end{align} where $H(X) = -\E_{X \sim p} [\log p(X)]$ denotes the entropy of $X$. \end{definition} This notion of multivariate mutual information, originally due to \citet{watanabe60totalInformation} (who called it ``total correlation''), measures total dependency, or redundancy, within a set of $D$ random variables. It has also been called the ``multivariate constraint''~\citep{garner62multivariateConstraint} and ``multi-information''~\citep{studeny98multiinformation}. Many related information theoretic quantities can be expressed in terms of $I(X)$, and can thus be estimated using estimators of $I(X)$. Examples include pairwise mutual information $I(X,Y) = I((X,Y)) - I(X) - I(Y)$, which measures dependence between (potentially multivariate) random variables $X$ and $Y$, conditional mutual information \[I(X|Z) = I((X,Z)) - \sum_{j = 1}^D I((X_j,Z)),\] which is useful for characterizing how much dependence within $X$ can be explained by a latent variable $Z$ \citep{studeny98multiinformation}, and transfer entropy (a.k.a.
directed information) $T_{X~\to~Y}$, which measures predictive power of one time series $X$ on the future of another time series $Y$. $I(X)$ is also related to entropy via Eq.~\eqref{eq:entropy_relation}, but, unlike the above quantities, this relationship depends on the marginal distributions of $X$, and hence involves some additional considerations, as discussed in \hyperref[sec:entropy]{Section~\ref{sec:entropy}}. We now define the class of nonparanormal distributions, from which we assume our data are drawn. \begin{definition} {\bf (Nonparanormal distribution, a.k.a. Gaussian copula model)} A random vector $X = (X_1,\dots,X_D)^T$ is said to have a \emph{nonparanormal distribution} (denoted $X \sim \mathcal{NPN}(\Sigma; f)$) if there exist functions $\{f_j\}_{j = 1}^D$ such that each $f_j : \R \to \R$ is a diffeomorphism \footnote{A diffeomorphism is a continuously differentiable bijection $g : \R \to R \subseteq \R$ such that $g\inv$ is continuously differentiable. } and $f(X) \sim \mathcal{N}(0, \Sigma)$, for some (strictly) positive definite $\Sigma \in \R^{D \times D}$ with $1$'s on the diagonal (i.e., each $\sigma_j = \Sigma_{j,j} = 1$). \footnote{Setting $\E \left[ f(X) \right] = 0$ and each $\sigma_j = 1$ ensures model identifiability, but does not reduce the model space, since these parameters can be absorbed into the marginal transformation $f$.} $\Sigma$ is called the \emph{latent covariance} of $X$ and $f$ is called the \emph{marginal transformation} of $X$. \label{def:nonparanormal} \end{definition} The nonparanormal family relaxes many constraints of the Gaussian family. Nonparanormal distributions can be multi-modal or heavy-tailed, can encode noisy nonlinear dependencies amongst variables, and need not be supported on $\R^D$. 
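To make the definition concrete, the following sketch (ours; the choice $f_j = \log$, giving lognormal marginals, is an illustrative assumption) draws from a nonparanormal distribution by passing latent Gaussian draws through the inverse marginal transformations:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])       # latent covariance

# Draw latent Gaussian data Z ~ N(0, Sigma), then invert the marginal
# transformations: taking f_j = log, X_j = exp(Z_j) has a lognormal marginal,
# so X ~ NPN(Sigma; f) is supported on (0, inf)^2 rather than on R^2.
Z = rng.multivariate_normal(np.zeros(2), Sigma, size=1000)
X = np.exp(Z)

# Ranks (and hence rank correlations) are invariant to the monotone f_j:
print(spearmanr(X[:, 0], X[:, 1])[0] == spearmanr(Z[:, 0], Z[:, 1])[0])  # True
```

The last line previews why rank statistics are natural here: they see through the unknown marginal transformations entirely.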
Assumptions made by a nonparanormal model on the marginals are minimal; any desired continuously differentiable marginal cumulative distribution function (CDF) $F_i$ of the variable $X_i$ corresponds to the marginal transformation $f_i(x) = \Phi\inv(F_i(x))$ (where $\Phi$ is the standard normal CDF). As an example, for a Gaussian variable $Z$, the $2$-dimensional case $X_1 \sim \mathcal{N}(0,1)$, $X_2 = T(X_1 + Z)$ is completely captured by a Gaussian copula when $T(x) = x^3$, $T = \tanh$, $T = \Phi$, or any other diffeomorphism. On the other hand, the limits of the Gaussian copula appear, for example, when $T(x) = x^2$, which is not bijective; then, if $\E[Z] = 0$, the Gaussian copula approximation of $(X_1,X_2)$ will model $X_1$ and $X_2$ as independent. We are now ready to formally state our problem: {\bf Formal Problem Statement:} \emph{Given $n$ i.i.d. samples $X_1,...,X_n \sim \mathcal{NPN}(\Sigma;f)$, where $\Sigma$ and $f$ are both unknown, we would like to estimate $I(X)$.} {\bf Other notation:} $D$ denotes the dimension of the data (i.e., $\Sigma \in \R^{D \times D}$ and $f : \R^D \to \R^D$). For a positive integer $k$, $[k] = \{1,...,k\}$ denotes the set of positive integers at most $k$. For consistency, where possible, we use $i \in [n]$ to index samples and $j \in [D]$ to index dimensions (so that, e.g., $X_{i,j}$ denotes the $j^{th}$ dimension of the $i^{th}$ sample). Given a data matrix $X \in \R^{n \times D}$, our estimators depend on the empirical rank matrix \begin{equation} R \in [n]^{n \times D} \quad \text{ with } \quad R_{i,j} := \sum_{k = 1}^n 1_{\{X_{i,j} \geq X_{k,j} \}}.
\label{def:empirical_rank_matrix} \end{equation} For a square matrix $A \in \R^{k \times k}$, $|A|$ denotes the determinant of $A$, $A^T$ denotes the transpose of $A$, and \[\|A\|_2 := \max_{\scriptsize\shortstack{$x \in \R^k$ \\ $\|x\|_2 = 1$}} \|Ax\|_2 \quad \text{ and } \quad \|A\|_F := \sqrt{\sum_{i, j \in [k]} A_{i,j}^2}\] denote the spectral and Frobenius norms of $A$, respectively. When $A$ is symmetric, $\lambda_1(A) \geq \lambda_2(A) \geq \cdots \geq \lambda_D(A)$ are its eigenvalues. \section{Related Work and Our Contributions} \label{sec:related_work} \subsection{The Nonparanormal} Nonparanormal models have been used for modeling dependencies among high-dimensional data in a number of fields, such as graphical modeling of gene expression data~\citep{liu12SKEPTIC}, of neural data~\citep{berkes09neuralDependencies}, and of financial time series~\citep{malevergne03testing,wilson10copula,hernandez13gaussianCopulaFinancial}, extreme value analysis in hydrology~\citep{renard07hydrology,aghakouchak14entropy}, and informative data compression~\citep{rey12metaGaussianIB}. Besides being more robust generalizations of Gaussians, nonparanormal distributions are also theoretically motivated in certain contexts. For example, the output $Z$ of a neuron is often modeled by feeding a weighted linear combination $Y = \sum_{k = 1}^N w_k X_k$ of inputs into a nonlinear transformation $Z = f(Y)$. When the components of $X$ are independent, the central limit theorem suggests $Y$ is approximately normally distributed, and hence $Z$ is approximately nonparanormally distributed~\citep{szabo07post}. With one recent exception~\citep{ince16I_G}, previous information estimators for the nonparanormal case~\citep{calsaverini09copulaInformation,ma11copulaEntropy,elidan13copulas} rely on fully nonparametric information estimators as subroutines, and hence suffer strongly from the curse of dimensionality.
Very recently, \citet{ince16I_G} proposed what we believe is the first mutual information estimator tailored specifically to the nonparanormal case; their estimator is equivalent to one of the estimators ($\hat I_G$, described in Section~\ref{subsec:I_G}) we study. However, they focused on its applications to neuroimaging data analysis, and did not study its performance theoretically or empirically. \subsection{Information Estimation} Our motivation for studying the nonparanormal family comes from trying to bridge two recent approaches to information estimation. The first has studied fully nonparametric entropy estimation, assuming only that data are drawn from a smooth probability density $p$, where smoothness is typically quantified by a H\"older or Sobolev exponent $s \in (0, \infty)$, roughly corresponding to the number of continuous derivatives of $p$. In this setting, the minimax optimal MSE rate has been shown by \citet{birge95densityFunctionals} to be $O \left( \max \left\{ n\inv, n^{-\frac{8s}{4s + D}} \right\} \right)$. This rate slows exponentially with the dimension $D$, and, while many estimators have been proposed \citep{pal10RenyiEntropyEstimation,sricharan10entropyConfidence,sricharan2013ensemble,singh2014exponential,singh2014generalized,krishnamurthy14Renyi,moon14fDivergence,moon14fDivergenceConfidence,singh16KNNFunctionals,moon17ensembleMI} for this setting, their practical use is limited to a few dimensions\footnote{``Few'' depends on $s$ and $n$, but \citet{kandasamy15vonMises} suggest nonparametric estimators should only be used with $D$ at most $4$-$6$. \citet{rey12metaGaussianIB} tried using several nonparametric information estimators on the \emph{Communities and Crime} UCI data set ($n = 2195, D = 10$), but found all too unstable to be useful.}. The second line of work considers the setting where data are assumed to be drawn from a truly Gaussian distribution. Here, the outlook in the high-dimensional case is far more optimistic.
While this case had been studied previously~\citep{ahmed89entropy,misra05estimation,srivastava08bayesian}, \citet{cai15logDetCov} recently provided a precise finite-sample analysis based on deriving the exact probability law of the log-determinant $\log|\hat\Sigma|$ of the scatter matrix $\hat\Sigma$. From this, they derived a deterministic bias correction, giving an estimator for which they prove an MSE upper bound of $-2\log \left( 1 - \frac{D}{n} \right)$ and a high-dimensional central limit theorem for the case $D \to \infty$ as $n \to \infty$ (but $D < n$). \citet{cai15logDetCov} also prove a minimax lower bound of $2D/n$ on MSE, with several interesting consequences. First, consistent information estimation is possible only if $D/n \to 0$. Second, since, for small $x$, $-\log(1 - x) \approx x$, this lower bound essentially matches the above upper bound when $D/n$ is small. Third, they show this lower bound holds even when restricted to diagonal covariance matrices. Since the upper bound for the general case and the lower bound for the diagonal case essentially match, it follows that Gaussian information estimation is not made easier by structural assumptions such as $\Sigma$ being bandable, sparse, or Toeplitz, as is common in, for example, stationary Gaussian process models~\citep{cai12adaptiveCovarianceMatrix}. This $2D/n$ lower bound extends to our more general nonparanormal setting. However, we provide a minimax lower bound suggesting that the nonparanormal setting is strictly harder, in that optimal rates depend on $\Sigma$. Our results imply nonparanormal information estimation \emph{does} become easier if $\Sigma$ is assumed to be bandable or Toeplitz.
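To illustrate the bias correction described above, the following simulation sketch (ours; it uses the known-mean Wishart law of $\log|\hat\Sigma|$, a simplification of the exact estimator of \citet{cai15logDetCov}) checks that subtracting the closed-form digamma-based expectation removes the bias of the plug-in log-determinant:

```python
import numpy as np
from scipy.special import digamma

def corrected_I(X):
    """Bias-corrected Gaussian plug-in estimate of I = -0.5 * log|Sigma|,
    using E[log|Sigma_hat|] = log|Sigma| + sum_j psi((n-j+1)/2) + D*log(2/n)
    for the known-mean scatter matrix (a Wishart log-determinant identity)."""
    n, D = X.shape
    S = X.T @ X / n                                # known-mean scatter matrix
    j = np.arange(1, D + 1)
    c = np.sum(digamma((n - j + 1) / 2)) + D * np.log(2.0 / n)
    return -0.5 * (np.linalg.slogdet(S)[1] - c)

rng = np.random.default_rng(5)
D, n = 5, 100
A = rng.standard_normal((D, D))
Sigma = A @ A.T + D * np.eye(D)
Sigma /= np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))   # correlation
I_true = -0.5 * np.linalg.slogdet(Sigma)[1]
err = np.mean([corrected_I(rng.multivariate_normal(np.zeros(D), Sigma, n))
               for _ in range(1000)]) - I_true
print(err)     # Monte Carlo bias of the corrected estimator is near zero
```

The correction is deterministic, so it removes bias without affecting variance.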
A closely related point is that known convergence rates for the fully nonparametric case require the density $p$ to be bounded away from $0$ or have particular tail behavior, due to singularity of the logarithm near $0$ and resulting sensitivity of Shannon information-theoretic functionals to regions of low but non-zero probability. In contrast, \citet{cai15logDetCov} need no lower-bound-type assumptions in the Gaussian case. In the nonparanormal case, we show \emph{some} such condition is needed to prove a uniform rate, but a weaker condition, a positive lower bound on $\lambda_D(\Sigma)$, suffices. The {\bf main contributions} of this paper are the following: \begin{enumerate}[noitemsep,topsep=0mm] \item We propose three estimators, $\hat I_G$, $\hat I_\rho$, and $\hat I_\tau$,\footnote{\citet{ince16I_G} proposed $\hat I_G$ for use in neuroimaging data analysis. To the best of our knowledge, $\hat I_\rho$ and $\hat I_\tau$ are novel.} for the mutual information of a nonparanormal distribution. \item We prove upper bounds of order $O(D^2/(\lambda_D^2(\Sigma)n))$ on the mean squared error of $\hat I_\rho$, providing the first upper bounds for a nonparanormal information estimator. This bound suggests nonparanormal estimators scale far better with $D$ than nonparametric estimators. \item We prove a minimax lower bound suggesting that, unlike the Gaussian case, difficulty of nonparanormal information estimation depends on the true $\Sigma$. \item We give simulations comparing our proposed estimators to Gaussian and nonparametric estimators. Besides confirming and augmenting our theoretical predictions, these help characterize the settings in which each nonparanormal estimator works best. \item We present entropy estimators based on $\hat I_G$, $\hat I_\rho$, and $\hat I_\tau$.
Though nonparanormal entropy estimation requires somewhat different assumptions from mutual information estimation, we show that entropy can also be estimated at the rate $O(D^2/(\lambda_D^2(\Sigma)n))$. \end{enumerate} \section{Nonparanormal Information Estimators} \label{sec:estimators} In this section, we present three different estimators, $\hat I_G$, $\hat I_\rho$, and $\hat I_\tau$, for the mutual information of a nonparanormal distribution. We begin with a lemma providing common motivation for all three estimators. Since mutual information is invariant to diffeomorphisms of individual variables, it is easy to see that the mutual information of a nonparanormal random variable is the same as that of the latent Gaussian random variable. Specifically: \begin{lemma} {\bf (Nonparanormal mutual information):} Suppose $X \sim \mathcal{NPN}(\Sigma; f)$. Then, \begin{equation} I(X) = -\frac{1}{2} \log|\Sigma|. \label{eq:gaussian_MI} \end{equation} \label{lemma:NPN_MI} \end{lemma} Lemma~\ref{lemma:NPN_MI} shows that the mutual information of a nonparanormal random variable depends only on the latent covariance $\Sigma$; the marginal transformations are nuisance parameters, allowing us to avoid difficult nonparametric estimation. The estimators we propose all plug different estimates of $\Sigma$ into Eq.~\eqref{eq:gaussian_MI}, after a regularization step described in Section \ref{subsec:regularization}. \subsection{Estimating $\Sigma$ by Gaussianization} \label{subsec:I_G} The first estimator $\hat \Sigma_G$ of $\Sigma$ proceeds in two steps. First, the data are transformed to have approximately standard normal marginal distributions, a process \citet{szabo07post} referred to as ``Gaussianization''. By the nonparanormal assumption, the Gaussianized data are approximately jointly Gaussian. Then, the latent covariance matrix is estimated by the empirical covariance of the Gaussianized data.
More specifically, letting $\Phi\inv$ denote the quantile function of the standard normal distribution and recalling the rank matrix $R$ defined in \eqref{def:empirical_rank_matrix}, the Gaussianized data \[\tilde{X}_{i,j} := \Phi\inv \left( \frac{R_{i,j}}{n + 1} \right) \quad (\text{for } i \in [n], j \in [D])\] are obtained by transforming the empirical CDF of each dimension to approximate $\Phi$. Then, we estimate $\Sigma$ by the empirical covariance $\hat \Sigma_G := \frac{1}{n} \sum_{i = 1}^n \tilde{X}_i \tilde{X}_i^T$. \subsection{Estimating $\Sigma$ by rank correlation} \label{subsec:I_rho_and_I_tau} The second estimator actually has two variants, $I_\rho$ and $I_\tau$, respectively based on relating the latent covariance to two classic rank-based dependence measures, Spearman's $\rho$ and Kendall's $\tau$. For two random variables $X$ and $Y$ with CDFs $F_X,F_Y : \R \to [0, 1]$, $\rho$ and $\tau$ are defined by \begin{align*} \rho(X, Y) & := \Corr(F_X(X),F_Y(Y)) \\ \text{and } \quad \tau(X, Y) & := \Corr(\sign(X - X'), \sign(Y - Y')), \end{align*} respectively, where \[\Corr(X, Y) = \frac{\E[(X - \E[X])(Y - \E[Y])]}{\sqrt{\Var[X]\Var[Y]}}\] denotes the standard Pearson correlation operator and $(X',Y')$ is an i.i.d. copy of $(X,Y)$. $\rho$ and $\tau$ generalize to the $D$-dimensional setting in the form of rank correlation matrices $\rho, \tau \in [-1,1]^{D \times D}$ with $\rho_{j,k} = \rho(X_j, X_k)$ and $\tau_{j,k} = \tau(X_j, X_k)$ for each $j, k \in [D]$. $I_\rho$ and $I_\tau$ are based on a classical result relating the correlation and rank-correlation of a bivariate Gaussian: \begin{theorem} {\bf \citep{kruskal58ordinal}:} Suppose $(X,Y)$ has a Gaussian joint distribution with covariance $\Sigma$.
Then, \[\Corr(X, Y) = 2\sin \left(\frac{\pi}{6} \rho(X, Y) \right) = \sin \left( \frac{\pi}{2} \tau(X, Y) \right).\] \label{thm:kruskal} \end{theorem} $\rho$ and $\tau$ are often preferred to Pearson correlation for their relative robustness to outliers and applicability to non-numerical ordinal data. While these are strengths here as well, the main reason for their relevance is that they are invariant to marginal transformations (i.e., for diffeomorphisms $f, g : \R \to \R$, $\rho(f(X),g(Y)) = \pm \rho(X, Y)$ and $\tau(f(X),g(Y)) = \pm \tau(X,Y)$). As a consequence, the identity provided in Theorem \ref{thm:kruskal} extends unchanged to the case $(X,Y) \sim \mathcal{NPN}(\Sigma;f)$. This suggests an estimate for $\Sigma$ based on estimating $\rho$ or $\tau$ and plugging this element-wise into the transform $x \mapsto 2\sin \left( \frac{\pi}{6} x \right)$ or $x \mapsto \sin \left( \frac{\pi}{2} x \right)$, respectively. Specifically, $\hat\Sigma_\rho$ is defined by \[\hat\Sigma_\rho := 2 \sin\left( \frac{\pi}{6} \hat \rho \right), \quad \text{ where } \quad \hat\rho = \widehat{\Corr}(R)\] is the empirical correlation of the rank matrix $R$, and sine is applied element-wise. Similarly, $\hat\Sigma_\tau := \sin\left( \frac{\pi}{2} \hat \tau \right)$, where \[\hat \tau_{j,k} := \frac{1}{\binom{n}{2}}\sum_{i < \ell \in [n]} \sign(X_{i,j} - X_{\ell,j})\sign(X_{i,k} - X_{\ell,k}).\] \subsection{Regularization and estimating $I$} \label{subsec:regularization} Unfortunately, unlike usual empirical correlation matrices, none of $\hat\Sigma_G$, $\hat\Sigma_\rho$, or $\hat\Sigma_\tau$ is almost surely strictly positive definite. As a result, directly plugging into the mutual information functional~\eqref{eq:gaussian_MI} may give $\infty$ or even be undefined. To correct for this, we propose a regularization step, in which we project each estimated latent covariance matrix onto the (closed) cone $\mathcal{S}(z)$ of symmetric matrices with minimum eigenvalue $z > 0$.
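In code, the three latent-covariance estimates can be sketched as follows (a minimal Python sketch under our own naming; the released code accompanying our experiments is MATLAB):

```python
import numpy as np
from scipy.stats import kendalltau, norm, rankdata

def latent_cov_estimates(X):
    """Sketch of the three latent-covariance estimates for X in R^{n x D}."""
    n, D = X.shape
    R = np.apply_along_axis(rankdata, 0, X)        # empirical rank matrix
    # (i) Gaussianization: push ranks through the standard normal quantile
    # function, then take the empirical covariance of the transformed data.
    Xt = norm.ppf(R / (n + 1))
    Sigma_G = Xt.T @ Xt / n
    # (ii) Spearman's rho: Pearson correlation of ranks, then the sine map.
    Sigma_rho = 2 * np.sin(np.pi / 6 * np.corrcoef(R, rowvar=False))
    # (iii) Kendall's tau: pairwise concordance, then the sine map.
    tau = np.eye(D)
    for j in range(D):
        for k in range(j + 1, D):
            tau[j, k] = tau[k, j] = kendalltau(X[:, j], X[:, k])[0]
    Sigma_tau = np.sin(np.pi / 2 * tau)
    return Sigma_G, Sigma_rho, Sigma_tau
```

All three estimates depend on the data only through the rank matrix $R$, and are therefore invariant to the unknown marginal transformations $f_j$.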
Specifically, for any $z > 0$, let \[\mathcal{S}(z) := \left\{ A \in \R^{D \times D} : A = A^T, \lambda_D(A) \geq z \right\}.\] For any symmetric matrix $A \in \R^{D \times D}$ with eigendecomposition $A = Q \Lambda Q^T$ (i.e., $QQ^T = Q^TQ = I_D$ and $\Lambda$ is diagonal), the projection $A_z$ of $A$ onto $\mathcal{S}(z)$ is defined as $A_z := Q \Lambda_z Q^T$, where $\Lambda_z$ is the diagonal matrix with $j^{th}$ diagonal entry $\left( \Lambda_z \right)_{j,j} = \max\{ z, \Lambda_{j,j} \}$. We call this a ``projection'' because $A_z$ is precisely the Frobenius norm projection of $A$ onto $\mathcal{S}(z)$ (see, e.g., \citet{henrion12semidefiniteProjection}): $A_z = {\arg\!\min}_{B \in \mathcal{S}(z)} \|A - B\|_F$. Applying this regularization to $\hat\Sigma_G$, $\hat\Sigma_\rho$, or $\hat\Sigma_\tau$ gives a strictly positive definite estimate $\hat\Sigma_{G,z}$, $\hat\Sigma_{\rho,z}$, or $\hat\Sigma_{\tau,z}$, respectively, of $\Sigma$. We can then estimate $I$ by plugging this into Equation \eqref{eq:gaussian_MI}, giving our three estimators: \begin{align*} \hat I_{G,z} := -\frac{1}{2} \log \left| \hat\Sigma_{G,z} \right|, \qquad \hat I_{\rho,z} := -\frac{1}{2} \log \left| \hat\Sigma_{\rho,z} \right| \\ \text{ and } \quad \hat I_{\tau,z} := -\frac{1}{2} \log \left| \hat\Sigma_{\tau,z} \right|. \end{align*} \section{Upper Bounds on the Error of $\hat I_{\rho,z}$} \label{sec:main_results} Here, we provide finite-sample upper bounds on the error of the estimator $\hat I_\rho$ based on Spearman's $\rho$. Proofs are given in the Appendix. We first bound the bias of the estimator: \begin{proposition} Suppose $X_1,...,X_n \stackrel{i.i.d.}{\sim} \mathcal{NPN}(\Sigma;f)$.
Then, there exists a constant $C > 0$ such that, for any $z > 0$, the bias of $\hat I_{\rho,z}$ is at most \begin{align*} \left| \E \left[ \hat I_{\rho,z} \right] - I \right| \leq C \left( \frac{D}{z\sqrt{n}} + \log \frac{|\Sigma_z|}{|\Sigma|} \right), \end{align*} where $\Sigma_z$ is the projection of $\Sigma$ onto $\mathcal{S}(z)$. \label{prop:I_rho_bias_bound} \end{proposition} The first term of the bias stems from nonlinearity of the log-determinant function in Equation~\eqref{eq:gaussian_MI}, which we analyze via Taylor expansion. The second term, \[\log \frac{|\Sigma_z|}{|\Sigma|} = \sum_{\lambda_j(\Sigma) < z} \log \left( \frac{z}{\lambda_j(\Sigma)} \right),\] is due to the regularization step, but is difficult to simplify or bound without further assumptions on the spectrum of $\Sigma$ and a choice of $z$, which we discuss later. We now turn to bounding the variance of $\hat I_{\rho,z}$. We first provide an exponential concentration inequality for $\hat I_{\rho,z}$ around its expectation, based on McDiarmid's inequality: \begin{proposition} Suppose $X_1,...,X_n \stackrel{i.i.d.}{\sim} \mathcal{NPN}(\Sigma;f)$. Then, for any $z,\e > 0$, \[\pr \left[ \left| \hat I_{\rho,z} - \E \left[ \hat I_{\rho,z} \right] \right| > \e \right] \leq 2 \exp \left( - \frac{n z^2\e^2}{18 \pi^2 D^2} \right).\] \label{prop:I_rho_exponential_concentration} \end{proposition} Such exponential concentration bounds are useful when one wants to simultaneously bound the error of multiple uses of an estimator, and hence we present this bound separately, as it may be of independent use. However, for the purpose of understanding convergence rates, we are more interested in the variance bound that follows as an easy corollary: \begin{corollary} Suppose $X_1,...,X_n \stackrel{i.i.d.}{\sim} \mathcal{NPN}(\Sigma;f)$.
Then, for any $z > 0$, the variance of $\hat I_{\rho,z}$ is at most \[\Var \left[ \hat I_{\rho,z} \right] \leq \frac{36 \pi^2 D^2}{z^2 n}.\] \label{prop:I_rho_variance_bound} \end{corollary} Given these bias and variance bounds, a bound on the MSE of $\hat I_{\rho,z}$ follows via the usual bias-variance decomposition: \begin{theorem} Suppose $X \sim \mathcal{NPN}(\Sigma;f)$. Then, there exists a constant $C$ such that \begin{align} \E \left[ \left( \hat I_{\rho,z} - I \right)^2 \right] \leq C \left( \frac{D^2}{z^2n} + \log^2 \frac{|\Sigma_z|}{|\Sigma|} \right). \label{ineq:general_I_rho_MSE_bound} \end{align} \label{thm:general_I_rho_MSE_bound} \end{theorem} A natural question is now how to optimally select the regularization parameter $z$. While the bound \eqref{ineq:general_I_rho_MSE_bound} is clearly convex in $z$, it depends crucially on the unknown spectrum of $\Sigma$, and, in particular, on the smallest eigenvalues of $\Sigma$. As a result, it is difficult to choose $z$ optimally in general, but we can do so for certain common subclasses of covariance matrices. For example, if $\Sigma$ is Toeplitz or bandable (i.e., for some $c \in (0,1)$, all $|\Sigma_{i,j}| \leq c^{|i - j|}$), then the smallest eigenvalue of $\Sigma$ can be bounded below~\citep{cai12adaptiveCovarianceMatrix}. When $\Sigma$ is bandable, as we show in the Appendix, this bound can be independent of $D$. In these cases, the following somewhat simpler MSE bound can be used: \begin{corollary} Suppose $X \sim \mathcal{NPN}(\Sigma;f)$, and suppose $z \leq \lambda_D(\Sigma)$.
Then, there exists a constant $C > 0$ such that \[\E \left[ \left( \hat I_{\rho,z} - I \right)^2 \right] \leq \frac{CD^2}{z^2n}.\] \label{corr:specific_I_rho_MSE_bound} \end{corollary} \section{Lower Bounds in terms of $\Sigma$} \label{sec:Sigma_lower_bound} When the data $X_1,...,X_n \stackrel{i.i.d}{\sim} \mathcal{N}(0,\Sigma)$ are truly Gaussian, using the plug-in estimator \[\textstyle \hat I = -\frac{1}{2} \log \left| \hat \Sigma \right|, \quad \text{ where } \quad \hat\Sigma = \frac{1}{n} \sum_{i = 1}^n X_i X_i^T\] is the empirical covariance matrix, \citet{cai15logDetCov} showed that the distribution of $\hat I - I$ is independent of the true correlation matrix $\Sigma$. This follows from the ``stability'' of Gaussians (i.e., that nonsingular linear transformations of Gaussian random variables are Gaussian). In particular, \[\hat I - I = -\frac{1}{2} \left( \log|\hat\Sigma| - \log|\Sigma| \right) = -\frac{1}{2} \log|\Sigma^{-1/2}\hat\Sigma\Sigma^{-1/2}|,\] and $\Sigma^{-1/2}\hat\Sigma\Sigma^{-1/2}$ has the same distribution as $\hat\Sigma$ does in the special case that $\Sigma = I_D$ is the identity. This property is both somewhat surprising, given that $I \to \infty$ as $|\Sigma| \to 0$, and useful, leading to a tight analysis of the error of $\hat I$ and confidence intervals that do not depend on $\Sigma$. It would be convenient if nonparanormal information estimators satisfied this property. Unfortunately, the main result of this section is a negative one, showing that this property is unlikely to hold without additional assumptions: \begin{proposition} Consider the $2$-dimensional case \begin{equation} X_1,...,X_n \stackrel{i.i.d}{\sim} \mathcal{N}(0,\Sigma), \quad \text{ with } \quad \Sigma = \begin{bmatrix} 1 & \sigma \\ \sigma & 1 \end{bmatrix}, \label{eq:2D_Sigma} \end{equation} and let $\sigma_* \in (0,1)$. Suppose an estimator $\hat I = \hat I(R)$ of $I_\sigma = -\frac{1}{2}\log(1 - \sigma^2)$ is a function of the empirical rank matrix $R \in \N^{n \times 2}$ of $X$.
Then, there exists a constant $C > 0$, depending only on $n$, such that the worst-case MSE of $\hat I$ over $\sigma \in (0,\sigma_*)$ satisfies \begin{align*} \sup_{\sigma \in (0,\sigma_*)} \E \left[ \left( \hat I(R) - I_\sigma \right)^2 \right] & \geq \frac{1}{64} \left( C - \log(1 - \sigma_*^2) \right)^2. \end{align*} \label{prop:sigma_lower_bound} \end{proposition} Clearly, this lower bound tends to $\infty$ as $\sigma_* \to 1$. As written, this result lower bounds the error of \emph{rank-based estimators} in the Gaussian case when $\sigma \approx 1$. However, to the best of our knowledge, all methods for estimating $\Sigma$ in the nonparanormal case are functions of $R$, and prior work~\citep{hoff07extending} has shown that the rank matrix $R$ is a generalized sufficient statistic for $\Sigma$ (and hence for $I$) in the nonparanormal model. Thus, it is reasonable to think of lower bounds for rank-based estimators in the Gaussian case as lower bounds for any estimator in the nonparanormal case. The proof of this result is based on the simple observation that the rank matrix can take only finitely many values. Hence, as $\sigma \to 1$, $R$ tends to be perfectly correlated, providing little information about $\sigma$, whereas the dependence of the estimand $I_\sigma$ on $\sigma$ increases sharply. This intuition is formalized in the Appendix using Le Cam's lemma for lower bounds in two-point parameter estimation problems. \section{Empirical Results} \label{sec:empirical} We compare five mutual information estimators: \begin{itemize}[itemsep=0.0mm,topsep=0mm] \item $\hat I$: Gaussian plug-in estimator with bias-correction (see \citet{cai15logDetCov}). \item $\hat I_G$: Nonparanormal estimator using Gaussianization. \item $\hat I_\rho$: Nonparanormal estimator using Spearman's $\rho$. \item $\hat I_\tau$: Nonparanormal estimator using Kendall's $\tau$. \item $\hat I_{k\text{NN}}$: Nonparametric estimator using $k$-nearest neighbor ($k$NN) statistics.
\end{itemize} For $\hat I_\rho$ and $\hat I_\tau$, we used a regularization constant $z = 10^{-3}$. We did not regularize for $\hat I_G$. Although this implies $\pr[\hat I_G~=~\infty]~>~0$, this is extremely unlikely for even moderate values of $n$ and never occurred during our experiments, which all use $n \geq 32$. We will thus omit denoting dependence on $z$. For $\hat I_{k\text{NN}}$, except as noted in Experiment 3, we set $k = 2$, based on recent analysis \citep{singh16kNNEntropy} suggesting that small values of $k$ are best for estimation. Sufficient details to reproduce experiments are given in the Appendix, and MATLAB source code is available at [Omitted for anonymity]. We report MSE based on $1000$ i.i.d. trials of each condition. $95\%$ confidence intervals were consistently smaller than plot markers and hence omitted to avoid cluttering plots. Except as specified otherwise, each experiment had the following basic structure: In each trial, a correlation matrix $\Sigma$ was drawn by normalizing a random covariance matrix from a Wishart distribution, and data $X_1,...,X_n \stackrel{i.i.d.}{\sim} \mathcal{N}(0, \Sigma)$ were drawn. All five estimators were computed from $X_1,...,X_n$, and the squared error from the true mutual information (computed from $\Sigma$) was recorded. Unless specified otherwise, $n = 100$ and $D = 25$. Since our nonparanormal information estimators are functions of ranks of the data, neither the true mutual information nor our nonparanormal estimators depend on the marginal transformations. Thus, except in Experiment 2, where we show the effects of transforming marginals, and Experiment 3, where we add outliers to the data, we perform all experiments on truly Gaussian data, with the understanding that this setting favors the Gaussian estimator. All experimental results are displayed in Figure~\ref{fig:main_experimental_results}. \begin{figure*}[t!]
\centering \begin{subfigure}[b]{0.27\textwidth} \centering \includegraphics[width=\textwidth,trim={0 0 0 0},clip]{fig3_hard.eps} \caption{Experiment 1} \label{subfig:exp_1} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth,trim={0mm 0 0 0},clip]{fig2.eps} \caption{Experiment 2} \label{subfig:exp_2} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth,trim={0mm 0 0 0},clip]{fig5.eps} \caption{Experiment 3} \label{subfig:exp_3} \end{subfigure}% ~ \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth,trim={1mm 0 0 0},clip]{fig4.eps} \caption{Experiment 4} \label{subfig:exp_4} \end{subfigure} \caption{Plots of $\log_{10}(\text{MSE})$ over (a) log-sample-size $\log_{10}(n)$, (b) fraction $\alpha$ of dimensions with non-Gaussian marginals, (c) fraction $\beta$ of outlier samples in each dimension, and (d) covariance $\Sigma_{1,2} = \Cov(X_1,X_2)$. Note that the $x$-axis in (d) is decreasing.} \label{fig:main_experimental_results} \end{figure*} {\bf Experiment 1 (Dependence on $n$):} We first show nonparanormal estimators have ``parametric'' $O(n\inv)$ dependence on $n$, unlike $\hat I_{k\text{NN}}$, which converges far more slowly. For large $n$, MSEs of $\hat I_G$, $\hat I_\rho$, and $\hat I_\tau$ are close to that of $\hat I$. {\bf Experiment 2 (Non-Gaussian Marginals):} \label{subsec:hat_I_nonrobust} Next, we show nonparanormal estimators are robust to non-Gaussianity of the marginals, unlike $\hat I$. We applied a nonlinear transformation $T$ to a fraction $\alpha \in [0, 1]$ of dimensions of Gaussian data. That is, we drew $Z_1,...,Z_n \stackrel{i.i.d.}{\sim} \mathcal{N}(0,\Sigma)$ and then used data $X_1,...,X_n$, where \[X_{i,j} = \left\{ \begin{array}{ll} T(Z_{i,j}) & \mbox{ if } j < \alpha D \\ Z_{i,j} & \mbox{ if } j \geq \alpha D \end{array} \right., \quad \forall i \in [n], j \in [D],\] for a diffeomorphism $T$.
Here, we use $T(z) = e^z$. The Appendix shows similar results for several other $T$. $\hat I$ performs poorly even when $\alpha$ is quite small. Poor performance of $\hat I_{k\text{NN}}$ may be due to discontinuity of the density at $x = 0$. {\bf Experiment 3 (Outliers):} We now show that nonparanormal estimators are far more robust to the presence of outliers than $\hat I$ or $\hat I_{k\text{NN}}$. To do this, we added outliers to the data according to the method of \citet{liu12SKEPTIC}. After drawing Gaussian data, we independently select $\lfloor \beta n \rfloor$ samples in each dimension, and replace each with a value drawn i.i.d. uniformly at random from $\{-5,+5\}$. Performance of $\hat I$ degrades rapidly even for small $\beta$. $\hat I_{k\text{NN}}$ can fail for atomic distributions, since $\hat I_{k\text{NN}} = \infty$ whenever at least $k$ samples are identical. To mitigate this, we increased $k$ to $20$ and ignored trials where $\hat I_{k\text{NN}} = \infty$, but $\hat I_{k\text{NN}}$ ceased to give any finite estimates when $\beta$ was sufficiently large. For small values of $\beta$, nonparanormal estimators surprisingly improve. We hypothesize this is due to convexity in $\Sigma$ of the mutual information functional in Eq.~\eqref{eq:gaussian_MI}. By Jensen's inequality, estimators that plug in an approximately unbiased estimate $\hat\Sigma$ of $\Sigma$ are biased towards overestimating $I$. Adding random (uncorrelated) noise reduces estimated dependence, moving the estimate closer to the true value. If this nonlinearity is indeed a major source of bias, it may be possible to derive a von Mises-type bias correction (see \citet{kandasamy15vonMises}) accounting for higher-order terms in the Taylor expansion of the log-determinant.
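For reference, the full $\hat I_{\rho,z}$ pipeline used in these comparisons amounts to only a few lines; the following sketch (ours, in Python, not the released MATLAB code) combines the rank-correlation, projection, and plug-in steps, and checks the estimate on data with one non-Gaussian marginal:

```python
import numpy as np
from scipy.stats import rankdata

def I_rho(X, z=1e-3):
    """Sketch of the Spearman-based estimate I_rho,z of mutual information."""
    n, D = X.shape
    R = np.apply_along_axis(rankdata, 0, X)
    Sigma = 2 * np.sin(np.pi / 6 * np.corrcoef(R, rowvar=False))
    w, Q = np.linalg.eigh(Sigma)                 # project onto S(z):
    Sigma_z = (Q * np.maximum(w, z)) @ Q.T       # clip eigenvalues below at z
    return -0.5 * np.linalg.slogdet(Sigma_z)[1]

rng = np.random.default_rng(2)
Sigma = np.array([[1.0, 0.7], [0.7, 1.0]])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=5000)
X[:, 1] = np.exp(X[:, 1])        # non-Gaussian marginal; I(X) is unchanged
I_true = -0.5 * np.log(np.linalg.det(Sigma))
print(I_rho(X), I_true)          # the estimate is close to the truth
```

Note that transforming the second marginal has no effect on the estimate, since $\hat I_{\rho,z}$ depends on the data only through ranks.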
{\bf Experiment 4 (Dependence on $\Sigma$):} Here, we verify our results in \hyperref[sec:Sigma_lower_bound]{Section~\ref{sec:Sigma_lower_bound}} showing that the MSE of rank-based estimators approaches $\infty$ as $|\Sigma| \to 0$, while the MSE of $\hat I$ is independent of $\Sigma$. We set $D = 2$ and $\Sigma$ as in Eq.~\eqref{eq:2D_Sigma}, varying $\sigma \in [0,1]$. Indeed, the MSE of $\hat I$ does not change, while the MSEs of $\hat I_G$, $\hat I_\rho$, and $\hat I_\tau$ all increase as $\sigma \to 1$. This increase seems mild in practice, with performance worse than that of $\hat I$ only when $\sigma > 0.99$. $\hat I_\tau$ appears to perform far better than $\hat I_G$ and $\hat I_\rho$ in this regime. Performance of $\hat I_{k\text{NN}}$ degrades far more quickly as $\sigma \to 1$. This phenomenon is explored by \citet{gao15efficient}, who lower bound the error of $\hat I_{k\text{NN}}$ in the presence of strong dependencies and propose a correction to improve performance in this case. It is also interesting that the errors of $\hat I_{\rho}$ and $\hat I_{\tau}$ drop as $\sigma \to 0$. This is likely because, in this regime, the main source of error is the variance of $\hat\rho$ and $\hat\tau$ (as $-\log(1 - \sigma^2) \approx \sigma^2$ when $\sigma \approx 0$). When $n \to \infty$ and $D$ is fixed, both $2\sin(\pi\hat\rho/6)$ and $\sin(\pi\hat\tau/2)$ are asymptotically normal estimates of $\sigma$, with asymptotic variances proportional to $(1 - \sigma^2)^2$~\citep{klaassen97bivariateNormalCopula}. By the delta method, since $\frac{dI}{d\sigma} = \frac{\sigma}{1 - \sigma^2}$, $\hat I_\rho$ and $\hat I_\tau$ are asymptotically normal estimates of $I$, with asymptotic variances proportional to $\sigma^2$ and hence vanishing as $\sigma \to 0$. \section{Estimating Entropy} \label{sec:entropy} Thus far, we have discussed estimation of mutual information $I(X)$.
Mutual information is convenient because it is invariant under marginal transformations, and hence $I(X) = I(f(X))$ depends only on $\Sigma$. While the entropy $H(X)$ does depend on the marginal transform $f$, fortunately, by Eq.~\eqref{eq:entropy_relation}, $H(X)$ differs from $I(X)$ only by a sum of univariate entropies. Univariate nonparametric estimation of entropy has been studied extensively, and there exist several estimators (e.g., based on sample spacings~\citep{beirlant97entropyOverview}, kernel density estimates~\citep{moon16improving} or $k$-nearest neighbor methods~\citep{singh16kNNEntropy}) that can estimate $H(X_j)$ at the rate $\asymp n\inv$ in MSE under relatively mild conditions on the marginal density $p_j$. While the precise assumptions vary with the choice of estimator, they are mainly (a) that $p_j$ be lower bounded on its support or have particular (e.g., exponential) tail behavior, and (b) that $p_j$ be smooth, typically quantified by a H\"older or Sobolev condition. Details of these assumptions are in the Appendix. Under these conditions, there exist estimators $\hat H_1,...,\hat H_D$ and a constant $C > 0$ such that \begin{equation} \E [(\hat H_j - H(X_j))^2] \leq C/n, \quad \forall j \in [D]. \label{ineq:MSE_of_sum_of_marginal_entropies} \end{equation} Combining these estimators with an estimator, say $\hat I_{\rho,z}$, of mutual information gives an estimator of entropy: \[\textstyle \hat H_{\rho,z} := \sum_{j = 1}^D \hat H_j - \hat I_{\rho,z}.\] If we assume $z = \lambda_D\inv(\Sigma)$ is bounded below by a positive constant, combining inequality~\eqref{ineq:MSE_of_sum_of_marginal_entropies} with Corollary~\ref{corr:specific_I_rho_MSE_bound} gives \[\E \left[ \left( \hat H_{\rho,z} - H(X) \right)^2 \right] \leq \frac{CD^2}{n},\] where the constant $C$ may differ from that in \eqref{ineq:MSE_of_sum_of_marginal_entropies} but is independent of $n$ and $D$.
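A minimal sketch of such an entropy estimator (our own illustration: we pick a Vasicek $m$-spacing estimator for the univariate entropies and the Spearman-based plug-in for the mutual information; all function names are ours):

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek m-spacing estimate of the differential entropy of a 1-D sample."""
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))
    xs = np.sort(x)
    idx = np.arange(n)
    lo = xs[np.maximum(idx - m, 0)]        # x_((i-m)), clipped at the edges
    hi = xs[np.minimum(idx + m, n - 1)]    # x_((i+m)), clipped at the edges
    return float(np.mean(np.log(n / (2.0 * m) * (hi - lo))))

def spearman_MI(X):
    """Plug-in nonparanormal mutual information from Spearman's rho."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    R = 2.0 * np.sin(np.pi * np.corrcoef(ranks, rowvar=False) / 6.0)
    np.fill_diagonal(R, 1.0)               # latent correlation estimate
    return -0.5 * np.log(np.linalg.det(R))

def nonparanormal_entropy(X):
    """H_hat = sum_j H_hat_j - I_hat, combining the two pieces above."""
    return sum(vasicek_entropy(X[:, j]) for j in range(X.shape[1])) - spearman_MI(X)
```

On Gaussian data the result can be compared with the closed-form entropy, which gives a quick sanity check of both pieces.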
\section{Conclusions and Future Work} \label{sec:conc_and_future} This paper suggests nonparanormal information estimation as a practical compromise between the difficult nonparametric case and the restrictive Gaussian case. We proposed three estimators for this problem, and provided the first upper bounds for nonparanormal information estimation. We also provided lower bounds showing how dependence on $\Sigma$ differs from the Gaussian case, and we demonstrated empirically that nonparanormal estimators are more robust than Gaussian estimators, even when the dimension is too high for fully nonparametric estimators. Collectively, these results suggest that, by scaling to moderate or high dimensionality without relying on Gaussianity, nonparanormal information estimators may be effective tools for a number of machine learning applications. While the best choice of information estimator inevitably depends on context, as a crude off-the-shelf guide for practitioners, the estimators we might suggest, in order of preference, are: \begin{itemize}[leftmargin=*,noitemsep,topsep=0pt] \item fully nonparametric if $D < 6$ and $n > \max\{100,10^D\}$. \item $\hat I_\rho$ if $D^2/n$ is small and data may have outliers. \item $\hat I_\tau$ if $D^2/n$ is small and dependencies may be strong. \item $\hat I_G$ otherwise. \item $\hat I$ only given strong belief that data are nearly Gaussian. \end{itemize} There are many natural open questions in this line of work. First, in the nonparanormal model, we focused on estimating mutual information $I(X)$, which does not depend on the marginal transforms $f$, and entropy, which decomposes into $I(X)$ and $1$-dimensional entropies. In both cases, the additional structure imposed by the nonparanormal model allows estimation in higher dimensions than fully nonparametric models.
Can nonparanormal assumptions lead to higher-dimensional estimators for the many other useful nonlinear functionals of densities (e.g., $L_p$ norms/distances and more general (e.g., R\'enyi or Tsallis) entropies, mutual informations, and divergences) that do not decompose? Second, there is a gap between our upper bound rate of $\|\Sigma\inv\|_2^2 D^2/n$ and the only known lower bound of $2D/n$ (from the Gaussian case), though we also showed that bounds for rank-based estimators depend on $\Sigma$. Is quadratic dependence on $D$ optimal? How much do rates improve under structural assumptions on $\Sigma$? Upper bounds should be derived for other estimators, such as $\hat I_G$ and $\hat I_\tau$. The $2D/n$ lower bound proof of \citet{cai15logDetCov} for the Gaussian case, based on the Cram\'er-Rao inequality~\citep{van07parameter}, is unlikely to tighten in the nonparanormal case, since Fisher information is invariant to diffeomorphisms of the data. Hence, a new approach is needed if the lower bound in the nonparanormal case is to be raised. Finally, our work also applies to estimating the log-determinant $\log|\Sigma|$ of the latent correlation matrix in a nonparanormal model. In addition to information estimation, the work of \citet{cai15logDetCov} on estimating $\log|\Sigma|$ in the Gaussian setting was motivated by the use of $\log|\Sigma|$ in several other multivariate statistical tools, including quadratic discriminant analysis (QDA) and MANOVA~\citep{anderson84multivariate}. Can our estimators lead to more robust nonparanormal versions of these procedures?
\section{Introduction\label{sec:biX1}} The existence of composite particles in semiconductors has been predicted long ago\cite{lampert1958}. Bound states made of conduction band electrons and valence band holes result from the Coulomb attraction between these carriers. To name the simplest ones, these bound states are excitons ($X$) made of one electron plus one hole, trions ($X^{\mp}$) made of two electrons plus one hole, or two holes plus one electron, and biexcitons ($XX$) made of two electrons plus two holes. More exotic composite objects made of a large number of correlated fermion pairs, called ``electron-hole droplets'', also exist, with a carrier density far larger than the one at which excitons are formed.\ The simplest composite particle, the exciton, made of one conduction electron and one valence hole, is very similar to a hydrogen atom, if we neglect interband Coulomb processes\cite{MoniqueSSC2009}. The exciton has bound and unbound (scattering) states which can be analytically determined\cite{Elliott1957}. Exciton bound states appear as large narrow peaks in the photon absorption spectrum, the peak intensity depending on the so-called ``exciton oscillator strength''. The reason bound exciton peaks are easy to observe is twofold: first, when a plane-wave photon with momentum ${\bf Q}$ transforms into a bound exciton, the coupling is quite good because the center-of-mass motion of the bound exciton also is a plane wave with the same momentum ${\bf Q}$. Second, excitons, made of an even number of fermions, have a bosonic nature; so, they can be piled up all at the same energy, the absorption peak intensity increasing linearly with the density of excitons already present in the sample. \ Observation of composite particles like trions is more complex. It has been hampered for quite a long time, partly due to the small binding energies that trions have in bulk samples, at best one order of magnitude smaller than the exciton binding energy.
Such binding energies are smaller than the usual exciton line width; so, trion peaks fall on the side of exciton lines. This small trion binding energy can be physically understood by seeing the trion as an electron or a hole bound to an exciton. The effective attraction then is dipole-like, which makes the interaction between exciton and free carrier much weaker than between elementary charges. A clear signature of trions has been obtained recently only in semiconductor quantum wells\cite{Kheng1993,Finkelstein1995,Shields1995,Buhmann1995}, the reduction of dimensionality increasing all binding energies, as seen from the exciton binding energy which goes from $R_X^{\rm (3D)}$ in 3D to $R_X^{\rm (2D)}=4R_X^{\rm (3D)}$ in 2D, and to infinity in 1D. \ What makes bound trions hard to observe has also to be traced back to their oscillator strength, which is smaller than the exciton oscillator strength by a factor of one trion volume divided by one sample volume\cite{moniqSSC2003_2}. This drastic reduction factor can be physically understood as the probability for a photocreated exciton to localize, within one trion volume, a free carrier initially spread over the whole sample. As a result, the trion peak commonly observed in heavily doped samples in which a large electron density exists should not be interpreted as a signature of an elementary trion, but rather as an exciton interacting in a coherent way with all the electrons present in the sample. Such a many-body effect is singular and leads to a broad absorption line, as experimentally shown\cite{moniqEP2005}. \ Mathematically, the derivation of trion eigenstates amounts to solving a three-body problem which has no known analytical solution. Attempts to tackle such a problem inevitably rely on some truncation scheme in addition to heavy numerics in order to possibly obtain satisfactory results. Recently, we showed how, using a physically relevant viewpoint, we can approach one trion as an exciton interacting with an electron\cite{Shiau2011}.
We have constructed a Schr\"{o}dinger equation for the trion using the electron-exciton basis and solved it by restricting this basis to the first few low-lying exciton levels, which is reasonable since the energy scale for excitation of the exciton internal motion, of the order of one Rydberg, is much larger than the other energy scales. The resulting trion binding energies we find in 2D and 3D agree reasonably well with the most accurate variational results. One important advantage of this approach is to allow reaching the trion ground and excited states on an equal footing, these excited states being out of reach from standard variational methods. \ Following Lampert's prediction\cite{lampert1958}, an even more complex composite particle, the biexciton, has been observed in bulk materials such as CuCl [Ref.~\onlinecite{Nikitine1968}], Cu$_2$O [Ref.~\onlinecite{Lvov1971}], and AgBr [Ref.~\onlinecite{Pelant1976,Hulin1977}]. More recently, the biexciton binding energy has been measured in a GaAs quantum well\cite{Miller1982} and found to be one order of magnitude larger than in bulk samples, a result supported by calculations done one year later\cite{Kleinman1983}. Since then, other aspects of biexcitons in confined structures, such as optical enhancement in biexciton formation\cite{Cingolani1988, Lovering1992} and the influence of dimensionality on the biexciton binding\cite{Miller1982,Birkedal1996,Euteneuer1997}, have been studied. Biexcitons in quantum wires have also been reported\cite{Crottini2002}. In view of the successful application of the composite boson many-body formalism to the trion\cite{Shiau2011}, we, in this work, go on along the same line to tackle the biexciton. The biexciton problem {\it a priori} is an even more complex four-body problem, with two electrons $(e_1,e_2)$ and two holes $(h_1,h_2)$ involved. The idea is to start with two electron-hole pairs bound into two excitons by the strong electron-hole Coulomb attraction.
The exciton-exciton attraction, although quite weak since it essentially is dipole-like, allows two free excitons to form a molecule with a binding energy substantially smaller than the exciton binding energy. To approach a system made of two electrons plus two holes, the exciton basis is physically quite appealing because the strong exciton binding energy is then included into the problem at the zeroth order. We are left with solving a Schr\"{o}dinger equation for the weaker biexciton binding energy. In this approach, the four-body system is pictured as two interacting excitons: one exciton is made of the $(e_1,h_1)$ pair, while the other is made of the $(e_2,h_2)$ pair, these two excitons however exchanging their carriers to be possibly made of $(e_1,h_2)$ and $(e_2,h_1)$. Such a two-exciton picture could be thought, at first sight, to lead to an easy problem because of the weak exciton-exciton attraction compared with the strong electron-hole attraction. However, this weak attraction is the one responsible for binding two excitons into a molecule. So, in order to reach bound states and find the associated poles, this exciton-exciton interaction has to be treated in an exact way. \ With this goal in mind, we here construct a biexciton Schr\"{o}dinger equation in terms of the exciton basis using the recently developed composite boson many-body theory\cite{moniqPhysRep}. In much the same spirit as Feynman diagrams for elementary particles, this theory takes advantage of ``shiva diagrams'' to visually identify many-body effects involved among composite particles. It moreover enables treating exactly the carrier exchange which results from the indistinguishability of the fermionic components of these composite particles. By restricting the exciton levels to the ground state only, it becomes possible to numerically solve the biexciton Schr\"{o}dinger equation quite easily.
The values we obtain for the biexciton ground state energies in 2D and 3D are in good agreement with variational results. One important advantage of the procedure is that the biexciton Schr\"{o}dinger equation can be cast into a generalized eigenvalue problem; so, we can reach bound and unbound excited states at once, with a single matrix diagonalization. \ In a second step, we use the obtained biexciton relative motion wave functions to calculate the photon absorption spectrum in quantum wells, assumed to be exact 2D systems. Instead of considering the biexciton as generated through two-photon absorption\cite{Hamamura1973,Arya1977,Ivanov1993,Hassan1993}, we here study one photocreated exciton interacting with a dilute exciton gas. Using similar arguments as those we used for the bound trion, we find that the biexciton oscillator strength is smaller than the exciton oscillator strength by a factor of one biexciton volume divided by one sample volume. This would make observing the biexciton line very difficult. However, biexcitons, like excitons, are boson-like particles: They can thus be packed up all at the same energy level. As a result, the biexciton absorption line increases linearly with exciton density provided that the density is low enough to possibly neglect many-body effects between the photocreated exciton and the free excitons present in the sample. The calculated photon absorption spectrum shows a small peak, with a characteristic low-energy tail, originating from the biexciton molecular state, and large peaks centered on the exciton ground levels, which are associated with exciton-exciton scattering states. Both the bound and unbound biexciton peak intensities decrease when the temperature increases. It can also be shown that the intensity of the absorption line for one biexciton made from a photocreated exciton and an exciton of the exciton gas increases linearly with photon number and exciton density.
This is in contrast to the biexciton absorption line associated with two-photon absorption, which increases quadratically with photon number and thus becomes dominant at high laser intensity. \ The present paper is organized as follows: In Sec.~\ref{sec:biX2}, we briefly discuss the relation which exists between the biexciton written in the free-carrier basis and the biexciton written in the exciton basis. We also introduce the four commutators necessary to properly handle many-body effects involving composite excitons.\ In Sec.~\ref{sec:biX3}, we study triplet biexciton states made of same-spin electrons and same-spin holes and we derive the corresponding Schr\"{o}dinger equation.\ In Sec.~\ref{sec:biX4}, we study singlet and triplet biexciton states made of opposite-spin electrons and opposite-spin holes. These biexciton states are first constructed in terms of two free electrons plus two free holes, and then in terms of two free excitons, in order to reveal important parity relations. We then concentrate on singlet biexciton states with center-of-mass momentum equal to zero and we restrict the exciton levels to the ground state. This nicely reduces the biexciton Schr\"{o}dinger equation to a 1D integral equation. In Sec.~\ref{sec:biX5}, we numerically solve this 1D integral equation to obtain the biexciton binding energies for the ground and excited states as a function of the hole-to-electron mass ratio. We also show the biexciton relative motion wave functions for the bound state as well as for a few unbound states.
Finally, we use these wave functions to calculate the photon absorption spectrum in the presence of a dilute exciton gas for various low temperatures.\ In the last section, we conclude.\ \begin{figure}[t] \centering \subfigure[]{ \includegraphics[trim=4cm 6cm 4cm 4cm,clip,width=3.2in] {fig1a.eps} \label{fig:fig1a} } \subfigure[]{ \includegraphics[trim=4cm 6cm 4cm 4cm,clip,width=3.2in] {fig1b.eps} \label{fig:fig1b} } \caption{{\small (a) $\lambda_h\big(^{\hspace{.06cm} n \hspace{.12cm} j\hspace{.05cm}}_{\hspace{.05cm} m\hspace{.09cm} i\hspace{.05cm}}\big)=\lambda\big(^{\hspace{.06cm} n \hspace{.12cm} j\hspace{.05cm}}_{\hspace{.05cm} m\hspace{.09cm} i\hspace{.05cm}}\big)$ for hole exchange, the excitons $m$ and $i$ having the same electron. (b) Pauli scattering $\lambda_e\big(^{\hspace{.06cm} n \hspace{.12cm} j\hspace{.05cm}}_{\hspace{.05cm} m\hspace{.09cm} i\hspace{.05cm}}\big)=\lambda\big(^{\hspace{.05cm} m \hspace{.08cm} j\hspace{.05cm}}_{\hspace{.06cm} n\hspace{.13cm} i\hspace{.05cm}}\big)$ for electron exchange, the excitons $m$ and $i$ having the same hole. }} \label{fig:lambda_eh} \end{figure} \section{Biexciton on the exciton basis\label{sec:biX2}} We consider a system made of two electrons and two holes in a semiconductor: the electrons carry a spin $s=\pm 1/2$ while the holes carry an angular momentum $m$ that we will also call spin. In bulk semiconductors, the hole angular momentum can be $m = (\pm 3/2,\pm 1/2)$, while in narrow quantum wells, it reduces to $m = \pm 3/2$ due to the heavy-light hole energy splitting induced by the well confinement. For simplicity, here we shall neglect the role of light holes and consider heavy holes only. Furthermore, we neglect the warping of the semiconductor valence band and approximate it by a spherical, parabolic band. 
The usual basis for such a biexciton system is then made of states with two free electrons and two free holes \begin{equation} a^\dag_{{\bf k}_{e_1}, s_1}a^\dag_{{\bf k}_{e_2} ,s_2}b^\dag_{{\bf k}_{h_1}, m_1}b^\dag_{{\bf k}_{h_2} ,m_2}|v\rangle.\label{eq:free2eh_basis} \end{equation} To transform this free-carrier basis into an exciton basis, we make use of the relations which exist between free electron-hole pair creation operators and exciton creation operators, namely, \begin{eqnarray} B^\dag_{i;s_im_i}&=&\sum_{{\bf k}_e{\bf k}_h}a^\dag_{{\bf k}_{e}, s_i}b^\dag_{{\bf k}_{h}, m_i}\langle {\bf k}_h{\bf k}_e|i\rangle,\label{eq:Btoeh}\\ a^\dag_{{\bf k}_{e}, s_i}b^\dag_{{\bf k}_{h}, m_i}&=&\sum_i B^\dag_{i;s_im_i}\langle i |{\bf k}_e{\bf k}_h\rangle,\label{eq:ehtoB} \end{eqnarray} where $|i\rangle$ denotes the $i$ exciton state. Using Eq.~(\ref{eq:ehtoB}), we can rewrite the two-free-electron-hole-pair states of Eq.~(\ref{eq:free2eh_basis}) in terms of exciton states as \begin{equation} B^\dag_{i;s_im_i}B^\dag_{j;s_jm_j}|v\rangle,\label{eq:2B_basis} \end{equation} with $(s_i,s_j)=(s_1,s_2)$ and $(m_i,m_j)=(m_1,m_2)$. Note that the bases made of $[(s_1,m_1);(s_2,m_2)]$ and $[(s_1,m_2);(s_2,m_1)]$ are equally valid. This means that, for $s_1\ne s_2$ and $m_1\ne m_2$, the basis can be made either of two bright excitons, $(-1/2,3/2)$ and $(1/2,-3/2)$, or of two dark excitons, $(1/2,3/2)$ and $(-1/2,-3/2)$, the bright exciton basis however being more convenient for problems dealing with photons. Note that bright and dark excitons are degenerate if we neglect interband Coulomb processes.
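As a consistency check, which we spell out since it is used implicitly throughout, Eq.~(\ref{eq:ehtoB}) follows from Eq.~(\ref{eq:Btoeh}) and the closure relation $\sum_i|i\rangle\langle i|=1$ over one-pair exciton states:

```latex
\begin{eqnarray}
\sum_i B^\dag_{i;s_im_i}\langle i |{\bf k}_e{\bf k}_h\rangle
&=&\sum_{{\bf k}'_e{\bf k}'_h}a^\dag_{{\bf k}'_{e}, s_i}b^\dag_{{\bf k}'_{h}, m_i}
\sum_i\langle {\bf k}'_h{\bf k}'_e|i\rangle\langle i |{\bf k}_e{\bf k}_h\rangle\nonumber\\
&=&a^\dag_{{\bf k}_{e}, s_i}b^\dag_{{\bf k}_{h}, m_i},
\end{eqnarray}
```

since the sum over the complete set of exciton states $|i\rangle$ reduces to $\delta_{{\bf k}'_e{\bf k}_e}\delta_{{\bf k}'_h{\bf k}_h}$.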
\ \begin{figure}[t] \begin{center} \includegraphics[trim=3.5cm 7cm 4cm 4cm,clip,width=3.2in] {fig2.eps} \end{center} \caption{{\small Pauli scattering $\lambda\big(^{ (\nu_n,-{\bf Q}^\prime)\hspace{0.1cm} (\nu_j,-{\bf Q})}_{ \hspace{0.1cm}(\nu_m,{\bf Q}^\prime)\hspace{0.2cm} (\nu_i,{\bf Q})\hspace{0.15cm}}\big)$ for carrier exchange between an exciton $i=(\nu_i,{\bf Q})$ and an exciton $j=(\nu_j,-{\bf Q})$ (see Eq.~(\ref{app:def_lambda})). The exciton $(\nu_i,{\bf Q})$ is a linear combination of electron-hole pairs $({\bf k}+\alpha_e{\bf Q},-{\bf k}+\alpha_h{\bf Q})$ where $\alpha_e=1-\alpha_h=m_e/(m_e+m_h)$ (see Eq.~(\ref{app:B_i_vs_eh})). }} \label{fig:lambda_h} \end{figure} While the great advantage of the exciton basis is to contain part of the electron-hole interaction, actually the strong part leading to exciton bound states, its main disadvantage is to be overcomplete; as a direct consequence, this basis is not orthogonal. It is possible to overcome the difficulties induced by the non-orthogonality of the exciton basis through the commutation technique recently developed for composite boson many-body effects\cite{moniqPhysRep}. The keys of this formalism rely on just four commutators between exciton operators $B_i^\dag$: Fermion exchanges follow from two commutators which read, in the absence of spin degrees of freedom, as \begin{eqnarray} \left[B_m,B^\dag_i\right]&=&\delta_{mi}-D_{mi},\label{eq:commut_BB}\\ \left[D_{mi},B^\dag_j\right]&=&\sum_n\left[\lambda_h\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)+\lambda_e\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\right]B_n^\dag.\label{eq:commut_DB} \end{eqnarray} $D_{mi}$ is called the ``deviation-from-boson'' operator because, without it, $B_i^\dag$ would reduce to an elementary boson operator.
The Pauli scattering $\lambda_h\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)$ corresponds to a hole exchange between excitons $(i,j)$, the excitons $m$ and $i$ having the same electron, as defined in Eq.~(\ref{app:def_lambda_h}) of the appendix and shown in the diagram of Fig.~\ref{fig:fig1a}. In the same way, the Pauli scattering $\lambda_e\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)$ corresponds to an electron exchange, the excitons $m$ and $i$ having the same hole, as defined in Eq.~(\ref{app:def_lambda_e}) and shown in the diagram of Fig.~\ref{fig:fig1b}. $\lambda_h\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)$, which is equal to $\lambda_e\left(\begin{smallmatrix} m& j\\ n& i \end{smallmatrix}\right)$, is often written as $\lambda\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)$ for simplicity. The other two commutators, which handle fermion-fermion interactions, are \begin{eqnarray} \left[H,B^\dag_i\right]&=&E_iB^\dag_i+V^\dag_i,\label{eq:commut_HB}\\ \left[V^\dag_i,B^\dag_j\right]&=&\sum_{mn}\xi^{\rm dir}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)B_m^\dag B_n^\dag.\label{eq:commut_VB} \end{eqnarray} The ``creation potential'' $V^\dag_i$ generates, through (\ref{eq:commut_VB}), the direct Coulomb scattering $\xi^{\rm dir}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$ of excitons $i$ and $j$. It consists of four Coulomb processes between the fermionic components of these two excitons: one electron-electron repulsion, one hole-hole repulsion, and two electron-hole attractions, as shown in the diagram of Fig.~\ref{fig:Xi}. The precise expression of the hole-hole part of this scattering is given in Eq.~(\ref{app:def_dirCoul_hh}) of the appendix. \ These four commutators are used to get the biexciton Schr\"{o}dinger equations derived in the next sections.
To include the electron and hole degrees of freedom, let us focus on the two relevant cases: \ (a) Carriers with same spins, $s_1=s_2$ and $m_1=m_2$. This corresponds to triplet states for the electron part $(S_e=1,S_e^z=\pm1)$ and triplet-like states for the hole part $(S_h=3,S_h^z=\pm3)$. The associated orbital wave functions then have to be odd with respect to exchange of the two electrons or the two holes in order to fulfill the Pauli exclusion principle.\ (b) Carriers with opposite spins, $s_1=-s_2$ and $m_1=-m_2$. The resulting spin configuration then depends on the way electrons and holes are linearly combined: We can either have a triplet state for the electron part $(S_e=1,S_e^z=0)$ and a triplet-like state for the hole part $(S_h=3,S_h^z=0)$, or a singlet state for the electron part $(S_e=0,S_e^z=0)$ and a singlet-like state for the hole part $(S_h=0,S_h^z=0)$. This case thus requires a more careful derivation since the orbital wave function for the triplet state must be odd, as in case (a), but even for the singlet configuration. The biexciton ground state belongs to the set of singlet states. \section{Biexciton made of electron-hole pairs with same spin $s_1=s_2, m_1=m_2$\label{sec:same_spin}\label{sec:biX3}} Let us start with triplet biexcitons made of same-spin electrons and same-spin holes and drop the spin indices to make the notations of this section lighter. We look for the biexciton eigenstates \begin{equation} (H-\mathcal{E}_\eta)|\Psi^{(\eta)}\rangle=0\label{Psi_eta} \end{equation} in the two-free-exciton basis $|ij\rangle=B^\dag_iB^\dag_j|v\rangle$, namely, \begin{equation} |\Psi^{(\eta)}\rangle=\sum_{ij}\phi_{ij}^{(\eta)}|ij\rangle=\sum_{ij}\phi_{ij}^{(\eta)}B^\dag_iB^\dag_j|v\rangle.\label{eq:Eigst_2Bbs} \end{equation} Since $B^\dag_iB^\dag_j=B^\dag_jB^\dag_i$, we can replace the above prefactor by $(\phi_{ij}^{(\eta)}+\phi_{ji}^{(\eta)})/2$.
The biexciton state then appears as in Eq.~(\ref{eq:Eigst_2Bbs}) but with the symmetry condition $\phi_{ij}^{(\eta)}=\phi_{ji}^{(\eta)}$.\ \begin{figure}[t] \begin{center} \includegraphics[trim=2.2cm 3.5cm 3cm 3cm,clip,width=3.4in] {fig3.eps} \end{center} \caption{{\small Direct Coulomb scattering $\xi^{\rm dir}\big(^{\hspace{.06cm} n \hspace{.12cm} j\hspace{.05cm}}_{\hspace{.05cm} m\hspace{.09cm} i\hspace{.05cm}}\big)$ between exciton $i$ and exciton $j$. The ``out'' exciton $m$ is made with the same electron-hole pair as the $i$ exciton. Similarly for excitons $n$ and $j$. This exciton-exciton scattering consists of four terms: two repulsive interactions between electrons and between holes, and two attractive interactions between electron and hole.}} \label{fig:Xi} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[trim=3.5cm 5.5cm 3cm 5cm,clip,width=3.4in] {fig4.eps} \end{center} \caption{{\small Part of the direct Coulomb scattering $\xi^{\rm dir}\big(^{ (\nu_n,-{\bf Q}^\prime)\hspace{0.1cm} (\nu_j,-{\bf Q})}_{ \hspace{0.1cm}(\nu_m,{\bf Q}^\prime)\hspace{0.2cm} (\nu_i,{\bf Q})\hspace{0.15cm}}\big)$ between a $i=(\nu_i,{\bf Q})$ exciton and a $j=(\nu_j,-{\bf Q})$ exciton coming from hole-hole repulsion (second diagram of Fig.~\ref{fig:Xi}).}} \label{fig:Xi_part} \end{figure} Equations (\ref{eq:commut_HB}) and (\ref{eq:commut_VB}) allow us to rewrite the biexciton Schr\"{o}dinger equation (\ref{Psi_eta}) as \begin{eqnarray} 0&=&\sum_{ij}\phi_{ij}^{(\eta)}\Big[(E_{ij}-\mathcal{E}_\eta)|ij\rangle+\sum_{rs}\xi^{\rm dir}\left(\begin{smallmatrix} s& j\\ r& i\end{smallmatrix}\right)|rs\rangle \Big]\nonumber\\ &=&\sum_{rs}\Big[(E_{rs}-\mathcal{E}_\eta)\phi_{rs}^{(\eta)}+\sum_{ij}\xi^{\rm dir}\left(\begin{smallmatrix} s& j\\ r& i\end{smallmatrix}\right)\phi_{ij}^{(\eta)}\Big]|rs\rangle\label{eq:Schrod_2Bbs1} \end{eqnarray} with $E_{rs}=E_r+E_s$. 
In the standard case, i.e., when the basis is made of orthogonal states, the above equation forces the bracket to be zero. The situation is more subtle with the exciton basis because, due to Eqs.~(\ref{eq:commut_BB}) and (\ref{eq:commut_DB}), the scalar product of two-exciton states reads as \begin{equation} \langle v|B_mB_nB^\dag_i B^\dag_j|v\rangle =\Big[\delta_{mi}\delta_{nj}-\lambda\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)\Big]+\Big[m\longleftrightarrow n\Big].\label{4Bamtrixelem} \end{equation} By projecting Eq.~(\ref{eq:Schrod_2Bbs1}) onto $\langle mn|$, we then find, since $\phi_{ij}^{(\eta)}=\phi_{ji}^{(\eta)}$, \begin{equation} 0=(E_{mn}-\mathcal{E}_\eta)\phi_{mn}^{(\eta)}+\sum_{ij}\hat{\xi}^{(\eta)}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\phi_{ij}^{(\eta)},\label{eq:Schrod_2Bbs2} \end{equation} where $\hat{\xi}^{(\eta)}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$ is defined as \begin{equation} \hat{\xi}^{(\eta)}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)=\xi^{\rm dir}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)-\xi^{\rm in}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)-\lambda\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)(E_{ij}-\mathcal{E}_\eta),\label{def:xihat} \end{equation} with $\xi^{\rm in}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)=\sum_{rs}\lambda\left(\begin{smallmatrix} n& s\\ m& r\end{smallmatrix}\right)\xi^{\rm dir}\left(\begin{smallmatrix} s& j\\ r& i\end{smallmatrix}\right)$ being the ``in'' exchange-Coulomb scattering, with Coulomb interactions taking place between the ``in'' exciton pair $(i,j)$, i.e., before hole exchange (see the diagram of Fig.~\ref{fig:Xi_in}). $\hat{\xi}^{(\eta)}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$ can thus be viewed as an effective Coulomb scattering between two excitons, although it also depends on the biexciton energy $\mathcal{E}_\eta$.
The first two contributions to this effective scattering correspond to the standard combination of Coulomb processes appearing, for example, in the time evolution of two excitons. The third term is more interesting because it only comes from the Pauli exclusion principle. Since the associated Pauli scattering $\lambda\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$ is dimensionless, it must go along with an energy to produce an energy-like scattering, this energy actually being an energy difference in order to be gap independent. \ We now introduce a similar ``out'' exchange-Coulomb scattering $\xi^{\rm out}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$, defined as $\xi^{\rm out}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)=\sum_{rs}\xi^{\rm dir}\left(\begin{smallmatrix} n& s\\ m& r\end{smallmatrix}\right)\lambda\left(\begin{smallmatrix} s& j\\ r& i\end{smallmatrix}\right)$, with Coulomb interactions taking place between the ``out'' exciton pair $(m,n)$ once exchange has occurred. It is easy to check that $\xi^{\rm in}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)=\left[\xi^{\rm out}\left(\begin{smallmatrix} j& n\\ i& m\end{smallmatrix}\right)\right]^*$.
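The ``in'' and ``out'' scatterings are moreover connected through the Hamiltonian. Letting $H$ act on the ket of the matrix element $\langle v| B_mB_nHB^\dag_iB^\dag_j|v\rangle$, Eqs.~(\ref{eq:commut_HB}) and (\ref{eq:commut_VB}) give (a schematic form we spell out for clarity)

```latex
\begin{eqnarray}
\langle v| B_mB_nHB^\dag_iB^\dag_j|v\rangle&=&E_{ij}\Big[\delta_{mi}\delta_{nj}
-\lambda\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\Big]
+\xi^{\rm dir}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\nonumber\\
&&-\,\xi^{\rm in}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)
+\Big[m\longleftrightarrow n\Big],
\end{eqnarray}
```

while letting $H$ act on the bra gives the same expression with $E_{ij}$ replaced by $E_{mn}$ and $\xi^{\rm in}$ replaced by $\xi^{\rm out}$. Equating the $\lambda$-dependent parts of the two forms links the difference $\xi^{\rm in}-\xi^{\rm out}$ to the Pauli scattering $\lambda$.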
Moreover, the ``in'' and ``out'' exchange-Coulomb scatterings are not independent: their difference is related to Pauli scattering via \begin{equation} \xi^{\rm in}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)-\xi^{\rm out}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)=(E_{mn}-E_{ij})\lambda\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right),\label{eq:rel_xi_in_out_lambda} \end{equation} as is easily recovered by calculating the matrix element $\langle v| B_mB_nHB^\dag_iB^\dag_j|v\rangle$ with $H$ acting either on the right or on the left.\ By inserting Eq.~(\ref{eq:rel_xi_in_out_lambda}) into Eq.~(\ref{def:xihat}), we can symmetrize the biexciton Schr\"{o}dinger equation (\ref{eq:Schrod_2Bbs2}) for triplet states as \begin{equation} 0=(E_{mn}-\mathcal{E}_\eta)\phi_{mn}^{(\eta)}+\sum_{ij}\hat\xi_{\rm sym}^{(\eta)}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\phi_{ij}^{(\eta)},\label{eq:Schrod_2Bbs2sym} \end{equation} with the effective Coulomb scattering, defined as \begin{eqnarray} \hat\xi_{\rm sym}^{(\eta)}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)&=&\xi^{\rm dir}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)-\frac{1}{2}\Big[\xi^{\rm in}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)+\xi^{\rm out}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\nonumber\\ &&+\lambda\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)(E_{mn}+E_{ij}-2\mathcal{E}_\eta)\Big],\label{def_xi_sym1} \end{eqnarray} now symmetrical with respect to the ``in'' and ``out'' exciton pairs $(i,j)$ and $(m,n)$. \ \begin{figure}[t] \begin{center} \includegraphics[trim=3cm 4.5cm 3cm 2cm,clip,width=3.4in] {fig5.eps} \end{center} \caption{{\small ``In'' exchange-Coulomb scattering $\xi^{\rm in}\big(^{\hspace{.06cm} n \hspace{.12cm} j\hspace{.05cm}}_{\hspace{.05cm} m\hspace{.09cm} i\hspace{.05cm}}\big)$ between exciton $i$ and exciton $j$.
The two excitons exchange their holes after Coulomb interactions, while the excitons $m$ and $i$ keep the same electron.}} \label{fig:Xi_in} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[trim=3cm 5.5cm 2.8cm 5cm,clip,width=3.4in] {fig6.eps} \end{center} \caption{{\small Part of the ``in'' exchange-Coulomb scattering $\xi^{\rm in}\big(^{ (\nu_n,-{\bf Q}^\prime)\hspace{0.1cm} (\nu_j,-{\bf Q})}_{ \hspace{0.1cm}(\nu_m,{\bf Q}^\prime)\hspace{0.2cm} (\nu_i,{\bf Q})\hspace{0.15cm}}\big)$ between an $i=(\nu_i,{\bf Q})$ exciton and a $j=(\nu_j,-{\bf Q})$ exciton, coming from hole-hole repulsion (first diagram of Fig.~\ref{fig:Xi_in}). The two excitons exchange their holes after Coulomb interactions (see Eq.~(\ref{app:inInt1})), the excitons $m$ and $i$ keeping the same electron.}} \label{fig:Xi_in_hh} \end{figure} \section{Biexciton made of electron-hole pairs with opposite spins $s_1=-s_2,~m_1=-m_2$\label{sec:opp_spin}\label{sec:biX4}} When the electron spins and hole spins are opposite, the derivation of biexciton singlet and triplet states with $S_e^z=S_h^z=0$ requires a more careful analysis of the parity condition induced by the free-carrier fermionic nature. To this end, we first construct biexciton eigenstates in the free-carrier basis and derive the parity condition imposed by the Pauli exclusion principle for singlet and triplet wave functions. We then use Eq.~(\ref{eq:ehtoB}) to rewrite the biexciton eigenstates in terms of exciton operators and rederive the parity condition in the exciton basis. Finally, we write down the biexciton Schr\"{o}dinger equation.
\subsection{Free-carrier basis} The biexciton eigenstates for opposite electron spins and opposite hole spins can be written on free-carrier states as \begin{eqnarray} \lefteqn{|\Psi_{S_e,S_h}^{(\eta)}\rangle=\sum_{{\bf k}_e,{\bf k}^\prime_{e}}\sum_{{\bf k}_{h},{\bf k}^\prime_{h}}\psi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}}\label{eq:BXeignst_diffspin1}\\ &&\times\Big[a^\dag_{{\bf k}_{e}, +1/2}a^\dag_{{\bf k}^\prime_{e}, -1/2}-(-1)^{S_e}a^\dag_{{\bf k}_{e}, -1/2}a^\dag_{{\bf k}^\prime_{e}, +1/2}\Big]\nonumber\\ &&\times\Big[b^\dag_{{\bf k}_{h}, +3/2}b^\dag_{{\bf k}^\prime_{h}, -3/2}-(-1)^{S_h}b^\dag_{{\bf k}_{h}, -3/2}b^\dag_{{\bf k}^\prime_{h}, +3/2}\Big]|v\rangle.\nonumber \end{eqnarray} This writing covers the singlet state ($S_e=S_h=0$) as well as the ``triplet'' state ($S_e=1,S_h=3$), for which we have a sum instead of a difference of pair states, the biexciton triplet state $S^z=0$ being degenerate with the ones constructed on $(s_1=s_2,~m_1=m_2)$ in Sec.~\ref{sec:biX3}.
Since $a^\dag_{{\bf k}_{e}, -1/2}a^\dag_{{\bf k}^\prime_{e}, +1/2}=-a^\dag_{{\bf k}^\prime_{e}, +1/2}a^\dag_{{\bf k}_{e}, -1/2}$, it is possible to rewrite Eq.~(\ref{eq:BXeignst_diffspin1}) as \begin{eqnarray} \lefteqn{|\Psi_{S_e,S_h}^{(\eta)}\rangle=\sum_{{\bf k}_e,{\bf k}^\prime_{e}}\sum_{{\bf k}_{h},{\bf k}^\prime_{h}}\hat\psi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}a^\dag_{{\bf k}_{e}, +1/2}a^\dag_{{\bf k}^\prime_{e}, -1/2}}\label{eq:BXeignst_diffspin2}\\ &&\times\Big[b^\dag_{{\bf k}_{h}, +3/2}b^\dag_{{\bf k}^\prime_{h}, -3/2}-(-1)^{S_h}b^\dag_{{\bf k}_{h}, -3/2}b^\dag_{{\bf k}^\prime_{h}, +3/2}\Big]|v\rangle,\nonumber \end{eqnarray} where $\hat\psi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}=\psi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}+(-1)^{S_e}\psi^{(\eta,S_e,S_h)}_{{\bf k}^\prime_e,{\bf k}_{e},{\bf k}_{h},{\bf k}^\prime_{h}}$ follows the parity condition \begin{equation} \hat\psi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}=(-1)^{S_e}\hat\psi^{(\eta,S_e,S_h)}_{{\bf k}^\prime_e,{\bf k}_{e},{\bf k}_{h},{\bf k}^\prime_{h}}. 
\end{equation} If we do the same for the hole part, we end up with \begin{eqnarray} \lefteqn{|\Psi_{S_e,S_h}^{(\eta)}\rangle=\sum_{{\bf k}_e,{\bf k}^\prime_{e}}\sum_{{\bf k}_{h},{\bf k}^\prime_{h}}\phi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}}\nonumber\\ &&\times a^\dag_{{\bf k}_{e}, +1/2}a^\dag_{{\bf k}^\prime_{e}, -1/2}b^\dag_{{\bf k}_{h}, +3/2}b^\dag_{{\bf k}^\prime_{h}, -3/2}|v\rangle,\label{eq:BXeignst_diffspin3} \end{eqnarray} where $\phi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}=\hat\psi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}+(-1)^{S_h}\hat\psi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}^\prime_{h},{\bf k}_{h}}$ has the expected parity condition, namely \begin{equation} \phi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}}=(-1)^{S_e}\phi^{(\eta,S_e,S_h)}_{{\bf k}^\prime_e,{\bf k}_{e},{\bf k}_{h},{\bf k}^\prime_{h}}=(-1)^{S_h}\phi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}^\prime_{h},{\bf k}_{h}}. \end{equation} The biexciton ground state belongs to the set of singlet states $(S_e=S_h=0)$. \subsection{Exciton basis} To write the biexciton on the exciton basis, we use Eq.~(\ref{eq:ehtoB}) to rewrite the electron-hole state of Eq.~(\ref{eq:BXeignst_diffspin3}) in terms of bright exciton operators. We find \begin{equation} |\Psi_{S_e,S_h}^{(\eta)}\rangle=\sum_{ij}\phi^{(\eta,S_e,S_h)}_{ij}B^\dag_{i,-1}B^\dag_{j,1}|v\rangle,\label{biXWF1:bright} \end{equation} where the prefactor $\phi^{(\eta,S_e,S_h)}_{ij}$, defined as \begin{equation} \phi^{(\eta,S_e,S_h)}_{ij}=\sum_{{\bf k}_e,{\bf k}^\prime_{h}}\sum_{{\bf k}^\prime_{e},{\bf k}_{h}}\langle i|{\bf k}_e{\bf k}^\prime_h\rangle\langle j|{\bf k}^\prime_e{\bf k}_h\rangle\phi^{(\eta,S_e,S_h)}_{{\bf k}_e,{\bf k}^\prime_{e},{\bf k}_{h},{\bf k}^\prime_{h}},\label{eq:def_phi_ij} \end{equation} is the biexciton wave function ``in the exciton basis". 
\ Using $\langle{\bf k}_h{\bf k}_e|{\bf p}_e{\bf p}_h\rangle=\delta_{{\bf k}_e{\bf p}_e}\delta_{{\bf k}_h{\bf p}_h}$ and the exciton closure relation, $\sum_i|i\rangle\langle i|=I$, it is easy to show that from Eqs.~(\ref{eq:def_phi_ij}), (\ref{app:def_lambda_h}), and (\ref{app:def_lambda_e}), the parity conditions for electron exchange or hole exchange read as \begin{eqnarray} \phi^{(\eta,S_e,S_h)}_{mn}&=&(-1)^{S_e}\sum_{ij}\lambda_e\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)\phi^{(\eta,S_e,S_h)}_{ij}\nonumber\\ &=&(-1)^{S_h}\sum_{ij}\lambda_h\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)\phi^{(\eta,S_e,S_h)}_{ij}.\label{eq:parity_electron_excitonbas} \end{eqnarray} By noting that $\lambda_h\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)=\lambda_e\left(\begin{smallmatrix} m& j\\ n& i \end{smallmatrix}\right)$, these two equations give the parity condition for exciton exchange as \begin{equation} \phi^{(\eta,S_e,S_h)}_{ij}=(-1)^{(S_e+S_h)}\phi^{(\eta,S_e,S_h)}_{ji}. \end{equation} \ We now turn to the biexciton Schr\"{o}dinger equation in the exciton basis. 
We find, from the commutators (\ref{eq:commut_HB}) and (\ref{eq:commut_VB}), \begin{eqnarray} 0&=&(H-\mathcal{E}_\eta^{(S_e,S_h)})|\Psi_{S_e,S_h}^{(\eta)}\rangle\\ &=&\sum_{rs}\Big\{ \left(E_{rs}-\mathcal{E}_\eta^{(S_e,S_h)}\right)\phi^{(\eta,S_e,S_h)}_{rs}\nonumber\\ &&+\sum_{ij}\xi^{\rm dir}\left(\begin{smallmatrix} s& j\\ r& i \end{smallmatrix}\right)\phi^{(\eta,S_e,S_h)}_{ij} \Big\} B^\dag_{r,-1}B^\dag_{s,1}|v\rangle.\nonumber \end{eqnarray} As $\langle v|B_{n,1}B_{m,-1}B^\dag_{r,-1}B^\dag_{s,1}|v\rangle$ reduces to $\delta_{m,r}\delta_{n,s}$ for excitons made of carriers with different spins, the projection of this equation onto $\langle v|B_{n,1}B_{m,-1}$ simply gives \begin{equation} 0=(E_{mn}-\mathcal{E}_\eta^{(S_e,S_h)})\phi^{(\eta,S_e,S_h)}_{mn}+\sum_{ij}\xi^{\rm dir}\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)\phi^{(\eta,S_e,S_h)}_{ij}.\label{eq:Schrod_diffspin} \end{equation} Note that, if we instead project it onto $\langle v|B_{m,1}B_{n,-1}$, we end up with the same equation but with $m$ and $n$ interchanged. So, the resulting Schr\"{o}dinger equation (\ref{eq:Schrod_diffspin}) has to be solved self-consistently with the parity conditions (\ref{eq:parity_electron_excitonbas}), which is not convenient numerically.\ Moreover, the Schr\"{o}dinger equation (\ref{eq:Schrod_diffspin}) for spin triplet states $(S_e=1,S_h=3)$ does not readily reduce to Eq.~(\ref{eq:Schrod_2Bbs2}), whereas the Schr\"{o}dinger equation must be the same for all triplet states since they are degenerate.
To possibly relate these two Schr\"{o}dinger equations and also avoid handling the parity conditions (\ref{eq:parity_electron_excitonbas}), we can introduce two new sets of functions $\varphi^{(c;\eta,S_e,S_h)}_{ij}$ with $c=(e,h)$ and rewrite $\phi^{(\eta,S_e,S_h)}_{ij}$ as \begin{equation} \phi^{(\eta,S_e,S_h)}_{ij}=\varphi^{(c;\eta,S_e,S_h)}_{ij}+(-1)^{S_c}\sum_{mn}\lambda_c\left(\begin{smallmatrix} j& n\\ i& m \end{smallmatrix}\right)\varphi^{(c;\eta,S_e,S_h)}_{mn},\label{parity:phi_varphi} \end{equation} so that the parity condition (\ref{eq:parity_electron_excitonbas}) is automatically fulfilled whatever $\varphi^{(c;\eta,S_e,S_h)}_{ij}$. By inserting the above equation into Eq.~(\ref{eq:Schrod_diffspin}), we find a Schr\"{o}dinger equation for $\varphi^{(c;\eta,S_e,S_h)}_{ij}$. It reads \begin{eqnarray} 0&=&\left(E_{mn}-\mathcal{E}_\eta^{(S_e,S_h)}\right)\varphi^{(c;\eta,S_e,S_h)}_{mn}\nonumber\\ &&+\sum_{ij}\Big[\xi^{\rm dir}\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)+(-1)^{S_c}\xi^{\rm out}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\label{eq:Schrod_diffspin_2}\\ &&+(-1)^{S_c}\left(E_{mn}-\mathcal{E}_\eta^{(S_e,S_h)}\right)\lambda_c\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\Big]\varphi^{(c;\eta,S_e,S_h)}_{ij}.\nonumber \end{eqnarray} If we now use Eq.~(\ref{eq:rel_xi_in_out_lambda}) to rewrite $\xi^{\rm out}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$ in terms of $\xi^{\rm in}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$, the above equation also reads \begin{equation} 0=(E_{mn}-\mathcal{E}_\eta^{(S_e,S_h)})\varphi_{mn}^{(c;\eta,S_e,S_h)}+\sum_{ij}\hat\xi^{(c;\eta)}_{\rm sym}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)\varphi_{ij}^{(c;\eta,S_e,S_h)},\label{eq:Schrod_diffspin_sym} \end{equation} where the effective scattering now has a symmetrical form with respect to ``in" and ``out" states \begin{eqnarray} \hat\xi^{(c;\eta)}_{\rm 
sym}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)&=&\xi^{\rm dir}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)+\frac{(-1)^{S_c}}{2}\Big[\xi^{\rm in}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)+\xi^{\rm out}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)\nonumber\\ &&+\lambda_c\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)\left(E_{mn}+E_{ij}-2\mathcal{E}_\eta^{(S_e,S_h)}\right)\Big].\label{def_xi_sym2} \end{eqnarray} It is then easy to see that, for the spin triplet state, for which $S_h=3$ and $c=h$, the Schr\"{o}dinger equations (\ref{eq:Schrod_diffspin_sym}) and (\ref{eq:Schrod_2Bbs2}) are indeed identical. \ Equations (\ref{eq:Schrod_2Bbs2}) and (\ref{eq:Schrod_diffspin}), in the absence of Coulomb scatterings $\xi^{\rm dir}\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$ and Pauli scatterings $\lambda\left(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\right)$, would lead to $\phi_{mn}^{(\eta)}=1$ for $E_{mn}=\mathcal{E}_\eta$ and zero otherwise. The biexciton would then reduce to two free excitons as expected. Interactions between excitons, through both Coulomb and Pauli scatterings, produce a difference between the biexciton energy and the energy of two free excitons. \ By comparing the Schr\"{o}dinger equations (\ref{eq:Schrod_2Bbs2sym}) and (\ref{def_xi_sym1}) for same-spin carriers with the Schr\"{o}dinger equations (\ref{eq:Schrod_diffspin_sym}) and (\ref{def_xi_sym2}) for opposite-spin carriers, we see that there is a sign change in front of the exchange part of the effective scattering (\ref{def_xi_sym2}), depending on whether the two excitons form a singlet or a triplet state. This sign change, which results from the Pauli exclusion principle, is the reason why two excitons in a singlet state $(S_e=S_h=0)$ can bind together into a molecular state.
This can be understood by considering energies close to the energy $2\varepsilon_{\nu_0}$ of two ground-state excitons labeled by $(\nu_0,{\bf Q}=0)$, which is the expected biexciton energy for temperatures much smaller than the excitation energy of the exciton internal motion. So, the biexciton binding energy $\delta_\eta=2\varepsilon_{\nu_0}-\mathcal{E}_\eta^{(S_e,S_h)}$ is expected to be far smaller than the exciton level differences $\varepsilon_{\nu_i}-\varepsilon_{\nu_0}$. We then note that the direct Coulomb scattering $\xi^{\rm dir}\big(\begin{smallmatrix} 0& 0\\ 0& 0\end{smallmatrix}\big)$ is equal to zero; so, $\xi^{\rm dir}$ essentially vanishes for small momentum transfer (see Eq.~(\ref{app:dirInt1})). By contrast, the ``in'' exchange-Coulomb scattering is given by \cite{moniqEPJ2003,moniqEP2007} \begin{equation} \xi^{\rm in}\big(\begin{smallmatrix} 0& 0\\ 0& 0 \end{smallmatrix}\big)= \left\{ \begin{array}{ll} \displaystyle-\left(8\pi-\frac{315\pi^3}{512}\right)\left(\frac{a_X}{L}\right)^2R_X^{\rm (3D)} & \text{ in 2D }\\ \displaystyle-\frac{26\pi}{3}\left(\frac{a_X}{L}\right)^3R_X^{\rm (3D)} & \text{ in 3D} \end{array} \right. , \end{equation} with $\xi^{\rm in}\big(\begin{smallmatrix} 0& 0\\ 0& 0 \end{smallmatrix}\big)$ equal to $\xi^{\rm out}\big(\begin{smallmatrix} 0& 0\\ 0& 0 \end{smallmatrix}\big)$ according to Eq.~(\ref{eq:rel_xi_in_out_lambda}). As a result, the sum of the ``in'' and ``out'' exchange-Coulomb scatterings, $\xi^{\rm in}\big(\begin{smallmatrix} 0& 0\\ 0& 0\end{smallmatrix}\big)+\xi^{\rm out}\big(\begin{smallmatrix} 0& 0\\ 0& 0\end{smallmatrix}\big)$, renders the effective scattering $\hat\xi^{(c;\eta)}_{\rm sym}$ strongly negative overall for small exciton momenta, while the third term of Eq.~(\ref{def_xi_sym2}), of the order of the biexciton binding energy, is small. This large negative effective scattering allows bound-state solutions to the Schr\"{o}dinger equation (\ref{eq:Schrod_diffspin_sym}).
It is then crucial to treat the exchange-Coulomb interactions adequately if one aims at getting reliable results for the ground and excited states. Detailed discussions about the dependence of Coulomb scatterings on electron-to-hole mass ratio and on relative motion momentum of the exciton pair can be found in Refs.~\onlinecite{MoniqPRB2007} and \onlinecite{LauraPRB2010}. For completeness, in \ref{app:sec2}, we have rederived the various scatterings appearing in the biexciton Schr\"{o}dinger equations.\ Equation (\ref{biXWF1:bright}), for $i=(\nu_i,{\bf k}+{\bf K}/2)$ and $j=(\nu_j,-{\bf k}+{\bf K}/2)$, ${\bf K}$ being the biexciton center-of-mass momentum and ${\bf k}$ the relative motion momentum between the exciton pair, allows us to rewrite the biexciton operator in terms of two bright excitons as (see also Eq.~(29) of Ref.~\onlinecite{moniqPRB2009}) \begin{equation} \mathbb{B}_{\eta{\bf K}}^\dag=\sum_{\nu_i\nu_j;{\bf k}}\phi^{(\eta,S_e,S_h)}_{\bf k}(\nu_i,\nu_j) B^\dag_{\nu_i,{\bf k}+{\bf K}/2;-1}B^\dag_{\nu_j,-{\bf k}+{\bf K}/2;1}.\label{BX_operator} \end{equation} Since none of the $\lambda$ and $\xi$ scatterings depends on the center-of-mass momentum ${\bf K}$, we can, without any loss of generality, set ${\bf K}=0$ from now on. Such a biexciton is then made of two excitons with opposite momenta. \ Since the parity condition (\ref{eq:parity_electron_excitonbas}) is difficult to implement numerically in the Schr\"{o}dinger equation (\ref{eq:Schrod_diffspin}) for $\phi^{(\eta,S_e,S_h)}_{\bf k}(\nu_i,\nu_j)$, we instead solve the somewhat more complicated equation (\ref{eq:Schrod_diffspin_sym}), in which the parity condition (\ref{parity:phi_varphi}) is already enforced, the function $\varphi^{(c;\eta,S_e,S_h)}_{\bf k}(\nu_i,\nu_j)$ in Eq.~(\ref{parity:phi_varphi}) being a free parameter.\ Let us from now on focus on the biexciton singlet state $(S_e=S_h=0)$ and consider $c=h$ without any loss of generality.
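The way the substitution (\ref{parity:phi_varphi}) automatically enforces the parity condition can be seen abstractly: exchanging the two holes twice restores the initial exciton pair, so the Pauli-scattering kernel acts as an involution on the pair index. The toy check below uses a random involutory matrix standing in for $\lambda_c$ (the matrix itself is arbitrary; only its involution property matters):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random involution Lam (Lam @ Lam = identity), standing in for the Pauli
# kernel lambda_c: exchanging the two holes twice restores the initial pair.
Q = rng.normal(size=(n, n))
P = np.eye(n)[[1, 0, 3, 2, 5, 4]]          # permutation of order 2
Lam = Q @ P @ np.linalg.inv(Q)
assert np.allclose(Lam @ Lam, np.eye(n))

for s in (+1, -1):                          # s plays the role of (-1)^{S_c}
    varphi = rng.normal(size=n)             # free, unconstrained function
    phi = varphi + s * Lam @ varphi         # the parity-enforcing substitution
    assert np.allclose(phi, s * Lam @ phi)  # parity condition automatically holds
```

Indeed, $s\Lambda(\varphi+s\Lambda\varphi)=s\Lambda\varphi+s^2\Lambda^2\varphi=\varphi+s\Lambda\varphi$, whatever the free function $\varphi$, which is why $\varphi$ can be varied without constraint in the numerics.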
By restricting the exciton level to the ground state $\nu_0$ and by setting $\varphi^{(h;\eta,0,0)}_{\bf k}(\nu_0,\nu_0)=\varphi^{(\eta)}_{\bf k}$, the Schr\"{o}dinger equation (\ref{eq:Schrod_diffspin_sym}) for the singlet state then reduces to \begin{eqnarray} \lefteqn{-\delta_\eta\Big[\varphi^{(\eta)}_{\bf k}+\sum_{{\bf k}^\prime}\lambda\left(\begin{smallmatrix} (\nu_0,-{\bf k})& (\nu_0,-{\bf k}^\prime)\\ (\nu_0,{\bf k})& (\nu_0,{\bf k}^\prime)\end{smallmatrix}\right)\varphi^{(\eta)}_{{\bf k}^\prime}\Big]}\nonumber\\ &&\simeq\frac{{\bf k}^2}{M_{X}}\varphi^{(\eta)}_{\bf k}+\sum_{{\bf k}^\prime}\tilde\xi\left(\begin{smallmatrix} (\nu_0,-{\bf k})& (\nu_0,-{\bf k}^\prime)\\ (\nu_0,{\bf k})& (\nu_0,{\bf k}^\prime)\end{smallmatrix}\right)\varphi^{(\eta)}_{{\bf k}^\prime},\label{eq:Schrod_2Bbs4} \end{eqnarray} where $M_X=m_e+m_h$ and \begin{eqnarray} \lefteqn{\tilde\xi\left(\begin{smallmatrix} (\nu_0,-{\bf k})& (\nu_0,-{\bf k}^\prime)\\ (\nu_0,{\bf k})& (\nu_0,{\bf k}^\prime)\end{smallmatrix}\right)=\xi^{\rm dir}\left(\begin{smallmatrix} (\nu_0,-{\bf k})& (\nu_0,-{\bf k}^\prime)\\ (\nu_0,{\bf k})& (\nu_0,{\bf k}^\prime)\end{smallmatrix}\right)}\nonumber\\ &&+\frac{1}{2}\Big[\xi^{\rm in}\left(\begin{smallmatrix} (\nu_0,-{\bf k})& (\nu_0,-{\bf k}^\prime)\\ (\nu_0,{\bf k})& (\nu_0,{\bf k}^\prime)\end{smallmatrix}\right)+\xi^{\rm out}\left(\begin{smallmatrix} (\nu_0,-{\bf k})& (\nu_0,-{\bf k}^\prime)\\ (\nu_0,{\bf k})& (\nu_0,{\bf k}^\prime)\end{smallmatrix}\right)\nonumber\\ && +\lambda\left(\begin{smallmatrix} (\nu_0,-{\bf k})& (\nu_0,-{\bf k}^\prime)\\ (\nu_0,{\bf k})& (\nu_0,{\bf k}^\prime)\end{smallmatrix}\right)\frac{({{\bf k}'}^2+{\bf k}^2)}{M_X}\Big]. \end{eqnarray} \ The scatterings $\lambda$ and $\tilde \xi$ depend on $({\bf k},{\bf k}^\prime)$ through $|{\bf k}|, |{\bf k}^\prime|$ and the angle $\theta_{{\bf k}\vk^\prime}$ between ${\bf k}$ and ${\bf k}^\prime$, the explicit values of these scatterings being given in \ref{app:sec2}. 
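Since the kernels depend on $({\bf k},{\bf k}^\prime)$ only through the two moduli and the relative angle, they can be reduced to one-dimensional kernels by an angular average when the relative motion carries no angular momentum. A minimal sketch of such an average (the quadrature sizes and function names are illustrative choices, not taken from the text):

```python
import numpy as np

def angle_average(kernel, k, kp, n_theta=64, dim=2):
    """Average a kernel f(k, k', theta) over the angle between k and k',
    as appropriate for zero-angular-momentum (s-wave) states.
    In 2D the measure is d(theta)/(2 pi) on [0, 2 pi); in 3D it is
    (1/2) sin(theta) d(theta) on [0, pi]."""
    if dim == 2:
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        return np.mean(kernel(k, kp, theta))
    # 3D: Gauss-Legendre in x = cos(theta), uniform measure dx/2 on [-1, 1]
    x, w = np.polynomial.legendre.leggauss(n_theta)
    return 0.5 * np.sum(w * kernel(k, kp, np.arccos(x)))
```

Any kernel component odd in the angle averages to zero, so only the isotropic part survives, which is what reduces the momentum-space equation to a radial one.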
Since the biexciton singlet state has a zero angular momentum, the $\varphi^{(\eta)}_{\bf k}$ function depends on $|{\bf k}|=k$ only. So, we can first average the various scatterings over the angle $\theta_{{\bf k},{\bf k}^\prime}$. Let us call these angle-averaged quantities $\lambda(k,k^\prime)$ and $\tilde \xi(k,k^\prime)$. We then end up with a 1D integral equation for the function $\varphi^{(\eta)}_k$. It reads as \begin{equation} -\delta_\eta\Big[\varphi^{(\eta)}_k+\sum_{{\bf k}^\prime}\lambda(k,k^\prime)\varphi^{(\eta)}_{k^\prime}\Big]=\frac{k^2}{M_{X}}\varphi^{(\eta)}_k+\sum_{{\bf k}^\prime}\tilde\xi(k,k^\prime)\varphi^{(\eta)}_{k^\prime}.\label{eq:Schrod_2Bbs5} \end{equation} \ This equation is solved numerically to get the binding energies of the biexciton ground and excited states for various hole-to-electron mass ratios. We can also get the $\varphi^{(\eta)}_{\bf p}$ functions, which are related to the biexciton wave functions $\phi^{(\eta)}_{\bf p}$ with proper symmetry via Eq.~(\ref{parity:phi_varphi}). The excited states are expected to mainly come from vibrational modes. To reach rotational modes, it is necessary to include $p$ and $d$ exciton levels. As a consequence, the $\lambda$ and $\xi$ scatterings, as well as the biexciton wave functions $\phi^{(\eta)}_{\bf k}$, would get an angular dependence. The precise treatment of the angular dependence in these scatterings is rather complex and definitely beyond the scope of the present work. A relatively simple, yet nontrivial, biexciton state follows from just considering two ground-state excitons with a nonzero relative-motion angular momentum.\ \begin{figure}[t] \begin{center} \epsfig{figure=fig7.eps,clip=,width=3.6 in} \end{center} \caption{(Color online) Binding energies of the 2D biexciton ground state (GS) and the first three bound excited states in $R_X^{\rm (2D)}=4R_X^{\rm (3D)}$ unit, as a function of the hole-to-electron mass ratio $m_h/m_e$.
The ground state binding energy has a minimum for $m_e=m_h$.} \label{fig:result_bindingEgy2D} \end{figure} \begin{figure}[t] \begin{center} \epsfig{figure=fig8.eps,clip=,width=3.6 in} \end{center} \caption{(Color online) Same as Fig.~\ref{fig:result_bindingEgy2D} for 3D; the energy unit now is $R_X^{\rm (3D)}$. The curves are qualitatively similar to the 2D curves, though the binding energies are significantly smaller.} \label{fig:result_bindingEgy3D} \end{figure} \section{Results and discussions\label{sec:biX5}} To solve the Schr\"{o}dinger equation (\ref{eq:Schrod_2Bbs5}) for the biexciton binding energy $\delta_\eta$, we use the normalized exciton ground-state wave functions $\langle {\bf p}|\nu_0\rangle$ in 2D and 3D \begin{eqnarray} \langle {\bf p}|1s\rangle^{\rm (2D)}&=& \left(\frac{a_X}{L}\right)\frac{\sqrt{2\pi}}{(1+a_X^2{\bf p}^2/4)^{3/2}},\\ \langle {\bf p}|1s\rangle^{\rm (3D)}&=&\left(\frac{a_X}{L}\right)^{3/2}\frac{8\sqrt{\pi}}{(1+a_X^2{\bf p}^2)^2}. \end{eqnarray} $L$ is the sample size, $a_X=\hbar^2\epsilon_{sc}/\mu_Xe^2$ is the 3D exciton Bohr radius, with $\mu_X^{-1}=m_e^{-1}+m_h^{-1}$, and $\epsilon_{sc}$ is the static semiconductor dielectric constant, one order of magnitude larger in semiconductor samples than in vacuum.\ The Schr\"{o}dinger equation (\ref{eq:Schrod_2Bbs5}) can be cast into a generalized eigenvalue problem, with matrices indexed by the ${\bf k}$ mesh points. To solve it, we sample the $k=|{\bf k}|$ values with $150$ mesh points in 2D and $100$ in 3D, according to $k_i=u_i^3$ where the $u_i$'s are equally spaced, thereby allowing for more sampling in the small $k$ region.
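The discretization just described can be sketched as follows; the kernel matrices and integration weights are assumed to be supplied by the user, so all names and sizes below are illustrative:

```python
import numpy as np
from scipy.linalg import eig

def cubic_mesh(n, k_max):
    """Mesh k_i = u_i**3 with equally spaced u_i, denser at small k."""
    u = np.linspace(0.0, k_max ** (1.0 / 3.0), n + 1)[1:]   # drop k = 0
    return u ** 3

def binding_energies(k, w, lam, xi, M_X):
    """Cast -delta [phi + lam phi] = [k^2/M_X + xi] phi into the generalized
    eigenvalue problem A phi = -delta B phi and return the delta_eta values,
    sorted from most bound to least bound.
    k: radial mesh, w: integration weights, lam/xi: angle-averaged kernels."""
    A = np.diag(k ** 2 / M_X) + xi * w[None, :]   # kinetic + effective Coulomb
    B = np.eye(len(k)) + lam * w[None, :]         # Pauli-scattering overlap
    vals = np.real(eig(A, B)[0])
    return np.sort(-vals)[::-1]                   # bound states have delta > 0
```

With vanishing kernels the spectrum reduces to the free kinetic energies, i.e., $\delta_\eta=-k^2/M_X$, so no bound state appears, as expected for two noninteracting excitons.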
The upper cutoff $k_{\rm max}$ (in $a_X^{-1}$ unit) is taken to be $10$ in 3D but $20$ in 2D because the exciton wave function has a larger radial extension (in ${\bf k}$ space) in 2D than in 3D and also because the Coulomb interaction $V_{\bf q}$ decreases as $1/q$ in 2D, i.e., more slowly than the $1/q^2$ dependence it has in 3D (see Eq.~(\ref{app:CoulombPotential_q})). \ \subsection{Biexciton binding energies for ground and excited states} Solid curves in Figs.~\ref{fig:result_bindingEgy2D} and \ref{fig:result_bindingEgy3D} show the biexciton binding energies in 2D and 3D as a function of the hole-to-electron mass ratio $m_h/m_e$. The results are expressed in terms of the corresponding effective Rydbergs, namely $R_X^{\rm (2D)}$ and $R_X^{\rm (3D)}$, with $R_X^{\rm (2D)}=4R_X^{\rm (3D)}$ and $R_X^{\rm (3D)} = (\mu_X/m_0\epsilon_{sc}^2)\,13.6\,$eV, where $m_0$ is the free electron mass. The curves for the 2D and 3D binding energies are qualitatively similar. However, the values in 2D are significantly larger than those in 3D. This is physically expected since the reduction of dimensionality allows for much stronger Coulomb interactions and for more localized wave functions with enhanced overlap. We further notice that both the 2D and 3D binding energies have a minimum in the positronium limit, i.e., at $m_h/m_e=1$; the binding energy then increases logarithmically as the mass ratio increases, until it saturates for large mass ratios.\ Our results give a ground-state biexciton binding energy equal to $0.012R_X^{\rm (3D)}$ for $m_h/m_e=1$ in 3D, which accounts for only about 40\% of the more accurate variational results \cite{Brinkman1972,Akimoto1972}, while it reaches $0.21R_X^{\rm (3D)}$ when $m_h/m_e=1000$, which accounts for about 70\%.
In 2D, our calculated binding energies give the ground state at $0.075R_X^{\rm (2D)}$ when $m_h/m_e=1$, which accounts for about 50\% of the best variational result \cite{Kleinman1983}, while it reaches $0.44R_X^{\rm (2D)}$ when $m_h/m_e=1000$, which accounts for about 80\%. All this shows that our approach gives a much better ground-state energy when $m_h/m_e\gg1$, i.e., close to the hydrogen molecule limit, possibly because the exciton wave functions are less deformed when forming a molecule than in the case of a lighter hole. \ One important advantage of the present approach over variational procedures is that it allows reaching the biexciton bound and unbound excited states as easily as the ground state. Dashed curves in Figs.~\ref{fig:result_bindingEgy2D} and \ref{fig:result_bindingEgy3D} show the binding energies of the biexciton bound states in 2D and 3D. The number of bound states increases with the mass ratio $m_h/m_e$, this number reducing to 1 for $m_h/m_e\lesssim 20$ in 2D and $m_h/m_e\lesssim30$ in 3D. For a large mass ratio $m_h/m_e=1000$, we find 9 bound states in 2D and 8 in 3D. The binding energy differences become smaller for higher excited states, evidencing a difference in the exciton-exciton interaction compared to the usual harmonic potential, which leads to equal energy spacings between eigenstates. \ \begin{figure}[t] \begin{center} \epsfig{figure=fig9.eps,clip=,width=3.4 in} \end{center} \caption{(Color online) Plot of the ground state ($\eta=\eta_0$) wave function $L^2|\langle {\bf r}_{-1}=0,{\bf r}_{1}=0, {\bf p}|\eta\rangle|^2$ as a function of $p$ in $a_X^{-1}$ unit, when the mass ratio $m_h/m_e$ is equal to 5 and 50. Its $p$ extension scales as the inverse of the biexciton Bohr radius, $a_{XX}$. We also show the same quantity for the first excited state ($\eta=\eta_1$) when $m_h/m_e=50$.
} \label{fig:WF_r=0peta0} \end{figure} \begin{figure}[t] \begin{center} \epsfig{figure=fig10.eps,clip=,width=3.4 in} \end{center} \caption{(Color online) Plot of $\ln |\langle {\bf r}_{-1}=0,{\bf r}_{1}=0, {\bf p}|\eta\rangle|^2 $ for three unbound biexciton states in a semilog plot, the mass ratio being $m_h/m_e=5$ and the momentum $p$ still in $a_X^{-1}$ unit. Note that the wave function is not rescaled here by $L^2$, so the amplitude of the wave function is significantly larger than for bound states, but its $p$ extension is significantly smaller for normalized functions.} \label{fig:WF_r=0peta3} \end{figure} \subsection{Biexciton wave function} In a previous work on the biexciton, we have shown that the wave function of a biexciton made of opposite-spin carriers, with center-of-mass momentum ${\bf K}$, relative motion index $\eta$, and electron and hole total spins $S=(S_e,S_h)$, splits as (see Eq.~(12) in Ref.~\onlinecite{moniqPRB2009}) \begin{equation} \langle {\bf r}_{e_1},{\bf r}_{e_2},{\bf r}_{h_1},{\bf r}_{h_2} |{\bf K},\eta,S\rangle=\langle {\bf R}_{XX}|{\bf K}\rangle \langle {\bf r}_{-1},{\bf r}_{1},{\bf u}|\eta,S\rangle, \end{equation} where ${\bf R}_{XX}=(m_e{\bf r}_{e_1}+m_e{\bf r}_{e_2}+m_h{\bf r}_{h_1}+m_h{\bf r}_{h_2})/2(m_e+m_h)$ is the biexciton center-of-mass coordinate, ${\bf r}_{1}={\bf r}_{e_2}-{\bf r}_{h_1}$ and ${\bf r}_{-1}={\bf r}_{e_1}-{\bf r}_{h_2}$ are the electron-to-hole distances of the two bright excitons having spins $(\pm1)$, while ${\bf u}=(m_e{\bf r}_{e_2}+m_h{\bf r}_{h_1})/(m_e+m_h)-(m_e{\bf r}_{e_1}+m_h{\bf r}_{h_2})/(m_e+m_h)$ is the distance between the centers of mass of the two bright excitons.\ For a bound biexciton, the wave function $\langle {\bf r}_{-1},{\bf r}_{1},{\bf u}|\eta,S\rangle$ has an extension of the order of the exciton size $a_X$ over ${\bf r}_{-1}$ and over ${\bf r}_{1}$, and an extension of the order of the biexciton size $a_{XX}$ over ${\bf u}$.
By contrast, for an unbound biexciton, the extension over ${\bf u}$ is as large as the sample size $L$, since an unbound biexciton very much resembles an exciton with another free exciton moving anywhere in the sample. Thus, in the case of bound states, dimensional arguments give the normalized relative motion wave functions through \begin{eqnarray} 1&=&\int d{\bf r}_{1} d{\bf r}_{-1}d{\bf u} \left|\langle {\bf r}_{-1},{\bf r}_{1},{\bf u}|\eta,S\rangle\right|^2\nonumber\\ &&\simeq a_X^D a_X^D a_{XX}^D \left|\langle 0,0,0|\eta,S\rangle\right|^2, \end{eqnarray} which leads to \begin{equation} \left|\langle{\bf r}_{-1}=0,{\bf r}_{1}=0,{\bf u}=0|\eta,S\rangle\right|^2\simeq\left(\frac{1}{a_X^2a_{XX}}\right)^D, \end{equation} while, in the case of unbound states, $a_{XX}$ is replaced by $L$; so, \begin{equation} \left|\langle{\bf r}_{-1}=0,{\bf r}_{1}=0,{\bf u}|\eta,S\rangle\right|^2\simeq\left(\frac{1}{a_X^2L}\right)^D. \end{equation} \ To obtain $\langle {\bf r}_{-1},{\bf r}_{1},{\bf p}|\eta,S\rangle$, which is of physical relevance in photon absorption, we perform a Fourier transform as \begin{equation} \langle {\bf r}_{-1},{\bf r}_{1},{\bf p}|\eta,S\rangle=\int d{\bf u}\langle{\bf p}|{\bf u}\rangle\langle {\bf r}_{-1},{\bf r}_{1},{\bf u}|\eta,S\rangle,\label{eq:wf_FT:up} \end{equation} where $\langle{\bf p}|{\bf u}\rangle=e^{i{\bf p}\cdot {\bf u}}/L^{D/2}$; so, the extension over ${\bf p}$ of $\langle {\bf r}_{-1},{\bf r}_{1},{\bf p}|\eta,S\rangle$ is of the order of $1/a_{XX}$ for bound states (see Fig.~\ref{fig:WF_r=0peta0}), and of the order of $1/L$ for unbound states (see Fig.~\ref{fig:WF_r=0peta3}).
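The scaling of the ${\bf p}=0$ weight with the bound-state extension can be checked on a toy normalized one-dimensional Gaussian of extension $a$, standing in for the ${\bf u}$ dependence of a bound state (the sample size, grid, and names are illustrative):

```python
import numpy as np

def p0_weight(a, L=200.0, n=4001):
    """|<p=0|psi>|**2 = |integral du psi(u)|**2 / L for a normalized 1D
    Gaussian of extension a (a toy stand-in for a_XX; plane wave e^{ipu}/L^{1/2})."""
    u = np.linspace(-L / 2, L / 2, n)
    psi = (np.pi * a ** 2) ** (-0.25) * np.exp(-u ** 2 / (2.0 * a ** 2))
    amp = psi.sum() * (u[1] - u[0]) / np.sqrt(L)   # Riemann sum for the integral
    return amp ** 2

# Doubling the extension a doubles the p = 0 weight, the D = 1 analog of
# |<r,r,p=0|eta>|^2 ~ (a_XX/L)^D for bound states.
ratio = p0_weight(2.0) / p0_weight(1.0)
```

This is the one-dimensional counterpart of the $D$-dimensional estimate used in the text: the ${\bf p}=0$ amplitude grows with the real-space extension of the bound relative motion.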
The same dimensional arguments then give, in the case of bound states, \begin{equation} \left|\langle {\bf r}_{-1}=0,{\bf r}_{1}=0,{\bf p}=0|\eta,S\rangle\right|^2\simeq\left(\frac{a_{XX}}{a_X^2L}\right)^D,\label{WF:DimArg:rrp} \end{equation} and, in the case of unbound states, \begin{equation} \left|\langle {\bf r}_{-1}=0,{\bf r}_{1}=0,{\bf p}|\eta,S\rangle\right|^2\simeq\left(\frac{1}{a_X^2}\right)^D. \end{equation} \ We can compute $\langle \nu_i,\nu_j,{\bf p}|\eta,S\rangle$ from $\langle {\bf r}_{-1},{\bf r}_{1},{\bf p}|\eta,S\rangle$ through a double Fourier transform ``in the exciton sense'' (see also Eq.~(31) of Ref.~\onlinecite{moniqPRB2009}), namely, \begin{equation} \langle \nu_i,\nu_j,{\bf p}|\eta,S\rangle=\int d{\bf r}_{-1}d{\bf r}_{1}\langle\nu_i|{\bf r}_{-1}\rangle \langle\nu_j|{\bf r}_{1}\rangle \langle {\bf r}_{-1},{\bf r}_{1},{\bf p}|\eta,S\rangle,\label{eq:wf_DFT:nunu} \end{equation} which also reads as \begin{equation} \langle {\bf r}_{-1},{\bf r}_{1},{\bf p}|\eta,S\rangle=\sum_{\nu_i,\nu_j}\langle{\bf r}_{-1}|\nu_i\rangle \langle{\bf r}_{1}|\nu_j\rangle\langle \nu_i,\nu_j,{\bf p}|\eta,S\rangle.\label{eq:wf_DFT:rr} \end{equation} Note that for ${\bf r}_{1}$ or ${\bf r}_{-1}$ equal to 0, the $\nu$ exciton levels that survive in the above $\nu$ sum are $s$-like states only. So, the $\langle \nu_i,\nu_j,{\bf p}|\eta,S\rangle$ function is just the function $\phi^{(\eta,S)}_{\bf p}(\nu_i,\nu_j)$ given in Eq.~(\ref{BX_operator}). Since, when numerically solving the biexciton Schr\"{o}dinger equation (\ref{eq:Schrod_2Bbs5}) for the singlet state ($S=0$), we have restricted the sum over $\nu$ to the ground state $\nu_0$, we must for consistency also keep $\nu_0$ only in the $\nu$ sum of Eq.~(\ref{eq:wf_DFT:rr}).\ Figure \ref{fig:WF_r=0peta0} shows $|\langle {\bf r}_{-1}=0,{\bf r}_{1}=0,{\bf p}|\eta,S=0\rangle|^2$ for the ground state when the mass ratio is $m_h/m_e=5$, and for two bound states when the mass ratio is $m_h/m_e=50$.
Note that Eq.~(\ref{WF:DimArg:rrp}) forces us to plot bound-state wave functions through $L^2|\langle {\bf r}_{-1}=0,{\bf r}_{1}=0,{\bf p}|\eta,S=0\rangle|^2$ in order to have a quantity independent of sample size $L$. For unbound states, the $\left|\langle {\bf r}_{-1}=0,{\bf r}_{1}=0,{\bf p}|\eta,S=0\rangle\right|^2$ function is peaked on momenta ${\bf p}_\eta$ which depend on the unbound biexciton energies. Figure \ref{fig:WF_r=0peta3} shows that this peaked function broadens when the unbound biexciton energy increases, the broadening being due to exciton-exciton interactions. \ \subsection{Biexciton absorption spectrum in pump-probe experiment} (i) Let us now first consider an initial state made of one circularly polarized photon $\sigma_+$ with momentum ${\bf Q}_{ph}$ and frequency $\omega$, and one exciton already present in the sample, this $(\nu_0,{\bf Q}_i)$ exciton having an opposite circular polarization, $\sigma_-$. After photon absorption, the final state contains two electron-hole pairs, their center-of-mass momentum being ${\bf K}_i={\bf Q}_{ph}+{\bf Q}_i$. The photocreated exciton interacts with the exciton present in the sample to possibly form a biexciton. Since we are mainly interested in low-lying biexciton states, we shall focus on singlet states $(S=0)$. The Fermi golden rule gives the photon absorption as $(-2)$ times the imaginary part of the response function to one photon $(\omega,{\bf Q}_{ph})$. This response function reads as (see Eq.~(36) of Ref.~\onlinecite{moniqPRB2009}) \begin{eqnarray} \lefteqn{S_{XX}(\omega,{\bf Q}_{ph};{\bf Q}_i)=}\nonumber\\ &&\sum_\eta \frac{f^{(\eta)}_{XX}({\bf p}_i)}{\omega+E_{\nu_0,{\bf Q}_i}-\left[\mathcal{E}_\eta+\frac{({\bf Q}_{ph}+{\bf Q}_i)^2}{4M_X}\right]+i0^+},\label{BX:repsfun} \end{eqnarray} where ${\bf p}_i=({\bf Q}_i-{\bf Q}_{ph})/2$ is the relative motion momentum of the $(X,X)$ pair and $E_{\nu_0,{\bf Q}_i}$ is the free exciton energy given by Eq.~(\ref{app:E_iQ}). 
The biexciton oscillator strength $f^{(\eta)}_{XX}({\bf p})$ in Eq.~(\ref{BX:repsfun}) is given by \begin{eqnarray} f^{(\eta)}_{XX}({\bf p})&=&|\Omega|^2L^D\Big|\sum_\nu\langle{\bf r}=0 |\nu\rangle \langle\nu_0,\nu,{\bf p}|\eta\rangle\Big|^2\nonumber\\ &=&|\Omega|^2L^D\Big|\langle\nu_0,{\bf r}=0,{\bf p}|\eta\rangle\Big|^2,\label{OS_BX} \end{eqnarray} where $\Omega$ is the vacuum Rabi coupling, and $ \langle\nu_i,\nu_j,{\bf p}|\eta\rangle$ is the singlet biexciton wave function $ \langle\nu_i,\nu_j,{\bf p}|\eta,S=0\rangle$. \ Using Eq.~(\ref{WF:DimArg:rrp}), we can show that the biexciton oscillator strength $f^{(\eta,S=0)}_{XX}({\bf p})$, for ${\bf p}$ close to zero, is related to the exciton oscillator strength \begin{equation} f_X=|\Omega|^2L^D|\langle{\bf r}=0 |\nu\rangle|^2\simeq|\Omega|^2(L/a_X)^D \end{equation} via\cite{moniqPRB2009} \begin{equation} f^{(\eta,S=0)}_{XX}\simeq (a_{XX}/L)^Df_X .\label{OS_bX_ratio} \end{equation} The prefactor $(a_{XX}/L)^D$ corresponds to the localization into a biexciton volume $a^D_{XX}$ of the exciton present in the sample and initially delocalized over a volume $L^D$. For very large sample size $L$, this {\it a priori} prevents using the same scale to draw bound and unbound biexciton absorption spectra through linear response to a photon field.\ (ii) We now turn to a more complicated situation in which one circularly polarized photon, $\sigma_+$, is absorbed in a dilute exciton gas having $N_X$ excitons with opposite circular polarization, $\sigma_-$, which is what currently happens in pump-probe experiments: one first prepares a dilute exciton gas using a circularly polarized photon beam with low pump power, and then probes this gas with a weak photon beam having the opposite polarization. We assume that these $N_X$ excitons are all in the exciton ground state $\nu_0$, the temperature being too low for excited exciton states to be populated.
For a dilute exciton gas, i.e., $N_X(a_X/L)^D\ll 1$, we may neglect exciton many-body effects, since these effects scale at least quadratically in the exciton density $n_X(=N_X/L^D)$. Indeed, the Pauli scattering for fermion exchanges between {\it two} excitons leads to terms in $n_X^2$. Such many-body effects would alter the absorption spectra presented below, because they affect the exciton energy states, and accordingly the biexciton state. In addition, the photocreated biexciton can interact with other excitons in the exciton gas. As a first approximation, we consider the dilute exciton gas as a set of noninteracting classical particles. The ${\bf Q}_i$ exciton distribution for finite temperature $T$ then is just the Boltzmann distribution \begin{equation} N({\bf Q}_i,T)=\frac{N_X}{L^D}\left(\frac{2\pi}{M_Xk_BT}\right)^{D/2}e^{-{\bf Q}_i^2/2M_Xk_BT},\label{eq:N_kT} \end{equation} normalized through $\sum_{{\bf Q}_i}N({\bf Q}_i,T)=N_X$. \ Since ${\bf Q}_{ph}\approx0$ on the characteristic electron scale, we can write the response function of $N_{ph}$ photons to a $N({\bf Q}_i,T)$ exciton distribution as \begin{equation} S^{(XX)}(\omega,T)=N_{ph}\sum_{{\bf Q}_i}N({\bf Q}_i,T)S_{XX}(\omega,0;{\bf Q}_i),\label{SI0} \end{equation} with $S_{XX}(\omega,{\bf Q}_{ph};{\bf Q}_i)$ given in (\ref{BX:repsfun}). \begin{figure}[t] \begin{center} \epsfig{figure=fig11.eps,clip=,width=3.6 in} \end{center} \caption{(Color online) Absorption amplitude $A^{(XX)}_{\eta_0}(\omega,T)$, defined in Eq.~(\ref{SI2}), for the biexciton ground state in 2D, as a function of the photon energy $\omega$ in $R_X^{\rm (3D)}$ unit for various temperatures $T$. We find an asymmetric low-energy peak at $-4.57R_X^{\rm (3D)}$, which is associated with the biexciton ground state having a binding energy $0.57R_X^{\rm (3D)}$ below $R_X^{\rm (2D)}=4R_X^{\rm (3D)}$. 
The hole-to-electron mass ratio is $m_h/m_e=5$, as in usual GaAs samples, and $8\pi|\Omega|^2N_{ph}N_X\rho/L^D$ is set equal to 1, the $L^D$ factor coming from the wave function part of this equation, due to Eq.~(\ref{WF:DimArg:rrp}). } \label{fig:result_Abs_bound} \end{figure} \subsubsection{Bound biexciton} By replacing the ${\bf Q}_i$ sum in Eq.~(\ref{SI0}) by an integral with a constant density of states $\rho$ for 2D systems, and by setting $\varepsilon={\bf Q}_i^2/2M_X$, we find that the absorption spectrum associated with a biexciton bound state $\eta_0$ is given by \begin{eqnarray} A_{\eta_0}^{(XX)}(\omega,T)&=&\frac{4\pi^2|\Omega|^2N_{ph}N_X}{M_Xk_BT}\int\rho d\varepsilon e^{-\varepsilon/k_BT}\nonumber\\ &&\times\Big|\langle\nu_0,{\bf r}=0 ,p=\sqrt{M_X\varepsilon/2}|\eta_0\rangle\Big|^2\nonumber\\ &&\times \delta\left(\omega+\varepsilon_{\nu_0}-\mathcal{E}_{\eta_0}+\varepsilon/2\right).\label{SI1} \end{eqnarray} For a photon detuning $\delta^{(\eta_0)}_{ph}=\mathcal{E}_{\eta_0}-\varepsilon_{\nu_0}-\omega>0$, this leads to \begin{eqnarray} A_{\eta_0}^{(XX)}(\omega,T)&=&\frac{8\pi^2|\Omega|^2N_{ph}N_X\rho}{M_Xk_BT} e^{-2\delta^{(\eta_0)}_{ph}/k_BT}\label{SI2}\\ &&\times\Big| \langle\nu_0,{\bf r}=0,p=\sqrt{M_X\delta^{(\eta_0)}_{ph}}|\eta_0\rangle\Big|^2.\nonumber \end{eqnarray} \ Figure \ref{fig:result_Abs_bound} shows the absorption $A^{(XX)}_{\eta_0}(\omega,T)$ (\ref{SI2}) associated with the biexciton ground state $\eta_0$ in the presence of a dilute exciton gas as a function of the photon energy $\omega$ for various temperatures. We identify the biexciton ground state with the low-energy peak lying $0.57R_X^{\rm (3D)}$---which is the biexciton binding energy---below a larger peak at $-4R_X^{\rm (3D)}$, shown in Fig.~\ref{fig:result_Abs_unbound}, that accounts for the photocreated exciton scattered by the free excitons present in the sample. Note that the $(L/a_X)^D$ factor sets the scale between the two figures.
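The asymmetric shape of Eq.~(\ref{SI2}), which vanishes above the sharp edge at $\omega=\mathcal{E}_{\eta_0}-\varepsilon_{\nu_0}$ and carries an exponential tail below it, can be sketched with a toy wave function. In this sketch the edge position, $M_X=1$, and the Gaussian model for $|\langle\nu_0,{\bf r}=0,p|\eta_0\rangle|^2$ are all illustrative, not the paper's computed values:

```python
import numpy as np

# Toy model of the bound-biexciton lineshape, Eq. (SI2):
#   A(omega, T) ~ (1/T) exp(-2*delta/T) |psi(p = sqrt(M*delta))|^2
# for photon detuning delta = E_eta0 - eps_nu0 - omega > 0, zero otherwise.
# The edge position (-4.57), M_X = 1 and the Gaussian psi are illustrative.

def A_bound(omega, T, edge=-4.57, M_X=1.0, a_XX=3.0):
    delta = edge - omega                 # photon detuning
    if delta <= 0.0:
        return 0.0                       # no absorption above the sharp edge
    p = np.sqrt(M_X * delta)
    psi_sq = np.exp(-(a_XX * p)**2)      # model |psi(p)|^2 of width 1/a_XX
    return (1.0 / T) * np.exp(-2.0 * delta / T) * psi_sq
```

The Boltzmann factor $e^{-2\delta/k_BT}$ produces the low-energy tail, and the $1/k_BT$ prefactor makes the peak height drop as the temperature rises, as in Fig.~\ref{fig:result_Abs_bound}.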
The lineshape of the ground-state biexciton peak is asymmetric with a long tail on the lower-energy side, the peak strength decreasing with increasing temperature. This low-energy tail reflects the fact that the photon energy required to meet the biexciton ground-state energy, as a result of energy conservation in Eq.~(\ref{SI1}), is smaller if the exciton already present in the sample has a larger kinetic energy. So, the biexciton binding energy corresponds to the upper sharp edge of the peak, since the tail comes from biexcitons with larger relative motion momenta. Of course, this upper sharp edge is smoothed into a high-energy tail if we include a broadening of the delta function in Eq.~(\ref{SI1}), which amounts to replacing $i0^+$ in Eq.~(\ref{BX:repsfun}) by a finite width $i\gamma$. This low-energy tail is also found in the photoluminescence spectra of 1D quantum wires \cite{Crottini2002}.\ \subsubsection{Unbound biexciton} Since the unbound biexciton energies are close to the exciton energies, photoabsorption with formation of an unbound biexciton $A^{(XX)}$ is mixed with photoabsorption with formation of an exciton $A^{(X)}$, depending on the capture rate $f$ of an exciton by the photocreated exciton. As a first approximation, one can write the resulting absorption spectrum as \begin{equation} fA^{(XX)}+(1-f)A^{(X)}=A^{(X)}+f\left[A^{(XX)}-A^{(X)}\right].\label{def:Abs_GS3} \end{equation} For $N_X$ excitons in a sample volume $L^D$, the capture rate $f$ should be of the order of the exciton volume divided by the average volume occupied by one free exciton in the sample, namely, \begin{equation} f\simeq \frac{a_X^D}{L^D/N_X}=N_X\left(\frac{a_X}{L}\right)^D.
\end{equation} However, since excitons with different ${\bf Q}_i$'s contribute differently to the biexciton absorption, as seen from Eq.~(\ref{SI0}), the absorption spectrum should in fact read, instead of Eq.~(\ref{def:Abs_GS3}), as \begin{eqnarray} A(\omega,T)&=&A^{(X)}(\omega)+\left(\frac{a_X}{L}\right)^D\sum_{{\bf Q}_i}N({\bf Q}_i,T)\nonumber\\ &&\times\left[ A^{(XX)}(\omega,{\bf Q}_i)-A^{(X)}(\omega)\right],\label{def:Abs_GS4} \end{eqnarray} with $N({\bf Q}_i,T)$ given by Eq.~(\ref{eq:N_kT}). We see that the absorption spectrum reduces to the exciton spectrum in the absence of free excitons, $N_X=0$, as physically required.\ The exciton absorption spectrum $A^{(X)}(\omega)$ is made of delta peaks centered on the exciton energies $\varepsilon^{(\nu)}$, and weighted by the value at ${\bf r}=0$ of the exciton wave function squared $|\langle {\bf r}=0|\nu\rangle|^2$. When taking into account the finite exciton lifetime, these delta peaks broaden into Lorentzian functions with finite width.\ Although, according to Eq.~(\ref{def:Abs_GS4}), photon absorption contains an exciton and a biexciton part, we have chosen to only show here the part of the spectrum coming from unbound biexcitons, namely, \begin{eqnarray} A^{(XX)}(\omega,T)&=& \frac{4\pi^2|\Omega|^2N_{ph}N_X}{M_Xk_BT}\sum_{\eta}\int\rho d\varepsilon e^{-\varepsilon/k_BT}\nonumber\\ &&\times\Big|\langle\nu_0,{\bf r}=0 ,\sqrt{M_X\varepsilon/2}|\eta\rangle\Big|^2\nonumber\\ &&\times \delta\left(\omega+\varepsilon_{\nu_0}-\mathcal{E}_\eta+\varepsilon/2\right),\label{SI3} \end{eqnarray} in order to avoid ambiguity coming from the relative weights of the exciton and biexciton lifetimes, which are sample dependent. 
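For orientation, the capture rate $f$ is tiny for typical numbers. A quick estimate with purely illustrative values ($a_X=10$ nm, $L=100\,\mu$m, $N_X=10^4$, $D=2$; none of these are taken from the paper):

```python
# Order-of-magnitude estimate of the capture rate f ~ N_X (a_X/L)^D.
# All numerical values are illustrative, not taken from the paper.

def capture_rate(N_X, a_X, L, D=2):
    """Exciton volume a_X^D divided by the volume L^D/N_X available
    per free exciton in the sample."""
    return N_X * (a_X / L)**D

f = capture_rate(N_X=1e4, a_X=10e-9, L=100e-6)   # 1e4 * (1e-4)^2 = 1e-4
```

With such a small $f$, the unbound-biexciton contribution enters the total spectrum of Eq.~(\ref{def:Abs_GS4}) with a small weight unless the exciton gas is dense.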
To compute $A^{(XX)}(\omega,T)$, we replace the discrete sum over $\eta$ by a continuous sum over ${\bf p}_\eta$ and we introduce a finite lifetime by replacing the delta function by a Lorentzian function having a small half-width $\gamma$, namely, \begin{equation} \delta(\omega)\rightarrow \frac{\gamma/\pi}{\omega^2+\gamma^2}.\label{deltaf} \end{equation} \ \begin{figure}[t] \begin{center} \epsfig{figure=fig12.eps,clip=,width=3.6 in} \end{center} \caption{(Color online) Absorption amplitude $A^{(XX)}(\omega,T)$, defined in Eq.~(\ref{SI3}), for the biexciton unbound states in 2D, as a function of the photon energy $\omega$ in $R_X^{\rm (3D)}$ units for various temperatures. We find a large peak centered on $-R_X^{\rm (2D)}=-4R_X^{\rm (3D)}$. We have taken a hole-to-electron mass ratio $m_h/m_e=5$, a broadening $\gamma=0.002R_X^{\rm (3D)}$ in Eq.~(\ref{deltaf}), and we have set $8\pi|\Omega|^2N_{ph}N_X\rho/a_X^D=1$. } \label{fig:result_Abs_unbound} \end{figure} Figure \ref{fig:result_Abs_unbound} shows the lineshape of the peak located at $-4R_X^{\rm (3D)}$, which is the $1s$ exciton level in 2D quantum wells. This peak corresponds to unbound biexcitons, i.e., exciton-exciton scattering states. We see that this peak spreads on both sides of the $1s$ exciton level, due to energy conservation enforced by the broadened delta function. The peak lineshape is essentially Lorentzian with a peak height slightly decreasing with temperature. This can be attributed to broadening at large relative motion momentum induced by exciton-exciton scatterings because contributions from large relative momenta start to weigh in when the temperature gets large. Due to energy conservation enforced by the delta function in Eq.~(\ref{SI3}), the broadening induced by temperature may lead to unbound states with a much broader lineshape on top of the sharp exciton peak.
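The replacement (\ref{deltaf}) preserves the total spectral weight, since the Lorentzian integrates to 1 over all frequencies for any $\gamma$. A quick numerical check (the value of $\gamma$ and the integration window are illustrative):

```python
import numpy as np

# Check that the broadened delta function of Eq. (deltaf),
# (gamma/pi)/(w^2 + gamma^2), carries unit weight like delta(w).

def lorentzian(w, gamma):
    return (gamma / np.pi) / (w**2 + gamma**2)

def spectral_weight(gamma, window=2000.0, n=2_000_001):
    """Trapezoidal integral over [-window*gamma, window*gamma];
    the exact value is (2/pi)*arctan(window), i.e. 1 up to O(1/window)."""
    w = np.linspace(-window * gamma, window * gamma, n)
    f = lorentzian(w, gamma)
    return (np.sum(f) - 0.5 * (f[0] + f[-1])) * (w[1] - w[0])
```

The weight is independent of $\gamma$: rescaling $\omega\to\gamma x$ maps every Lorentzian onto the same dimensionless integral.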
The broadening of the peak remains close to $0.01R_X^{\rm (3D)}$ even though the temperature changes from $0.01R_X^{\rm (3D)}$ to $0.03R_X^{\rm (3D)}$. This indicates a ``non-thermal broadening'' behavior, since, for such temperatures, the intrinsic broadening due to exciton-exciton scattering is dominant over thermal broadening. \ \section{Conclusions} We have constructed the Schr\"{o}dinger equation for the biexciton written in the exciton basis, instead of the standard free-carrier basis. The choice of such a basis is motivated by the physical fact that the Coulomb interaction between electron and hole is far stronger than between two dipole-like excitons. However, in this exciton formulation, we must handle the fact that two excitons can exchange their carriers. The exact handling of this carrier exchange, as induced by the Pauli exclusion principle, is made possible thanks to the recently developed composite boson many-body formalism. \ By restricting the exciton levels to the ground state, we have numerically solved the resulting biexciton Schr\"{o}dinger equation for the ground state, as well as for the bound and unbound excited states in 2D and 3D systems. Our results for the ground-state binding energies agree reasonably well with those obtained from variational methods. The main advantage of the present approach is to easily reach excited states, which are out of reach of usual variational procedures. We then use the obtained biexciton wave functions to calculate the optical absorption spectrum in the presence of a dilute exciton gas in 2D. The spectrum shows a small asymmetric low-energy peak associated with the biexciton ground state, and larger peaks coming from exciton-exciton scattering states. \section{Acknowledgment} This work is supported by the National Science Council of Taiwan under Contract No. NSC 101-2112-M-001-024-MY3 and Academia Sinica, Taiwan. M.C.
wishes to thank the Research Center for Applied Sciences of the Academia Sinica in Taiwan for various invitations. We also acknowledge fruitful discussions with K. V. P. Latha. \renewcommand{\thesection}{\mbox{Appendix~\Roman{section}}} \setcounter{section}{0} \renewcommand{\theequation}{\mbox{A.\arabic{equation}}} \setcounter{equation}{0} % \section{Exciton operator in momentum space\label{app:sec1}} The index $i$ in a state $|i\rangle$ of an exciton made of one electron-hole pair in a translationally invariant system refers to the pair center-of-mass momentum ${\bf Q}_i$ and the internal motion index of the exciton state at hand, $\nu_i$, i.e., $i=(\nu_i,{\bf Q}_i)$. The wave function of this $(\nu_i,{\bf Q}_i)$ exciton splits as \begin{equation} \phi_i({\bf r}_e,{\bf r}_h)=\left\langle {\bf R}|{\bf Q}_i\right\rangle\langle {\bf r}|\nu_i\rangle, \end{equation} where ${\bf R}=(m_e {\bf r}_e+m_h {\bf r}_h)/(m_e+m_h)$ is the center-of-mass coordinate of the pair and $ {\bf r}={\bf r}_e-{\bf r}_h$ is the distance between electron and hole. The center-of-mass momentum wave function is a plane wave, $\left\langle {\bf R}|{\bf Q}_i\right\rangle=e^{i{\bf Q}_i\cdot{\bf R}}/L^{D/2}$ for a size $L$ sample. By writing the relative motion wave function as $\langle {\bf r}|\nu\rangle=\sum_{{\bf p}_i}\langle {\bf r}|{\bf p}_i\rangle\langle{\bf p}_i|\nu\rangle $, we can rewrite $\phi_i({\bf r}_e,{\bf r}_h)$ as \begin{equation} \phi_i({\bf r}_e,{\bf r}_h)=\sum_{{\bf p}_i} \langle{\bf p}_i|\nu_i\rangle\frac{e^{i({\bf p}_i+\alpha_e {\bf Q}_i)\cdot {\bf r}_e}}{L^{D/2}}\frac{e^{i(-{\bf p}_i+\alpha_h {\bf Q}_i)\cdot {\bf r}_h}}{L^{D/2}},\label{eq:func_phi} \end{equation} where we have set \begin{equation} \alpha_e=\frac{m_e}{m_e+m_h}=1-\alpha_h.
\end{equation} So, the electron and hole momenta read in terms of the center-of-mass and relative motion momenta of the pair, ${\bf Q}_i$ and ${\bf p}_i$, as \begin{equation} {\bf k}_e={\bf p}_i+\alpha_e{\bf Q}_i,\qquad {\bf k}_h=-{\bf p}_i+\alpha_h{\bf Q}_i. \end{equation} \ The exciton relative motion wave function $\langle{\bf k}|\nu_i\rangle$ obeys the Schr\"{o}dinger equation \begin{equation} \left(\frac{{\bf k}^2}{2\mu_X}-\varepsilon_{\nu_i}\right) \langle{\bf k}|\nu_i\rangle-\sum_{{\bf q}\not=0}V_{\bf q} \langle{\bf k}+{\bf q}|\nu_i\rangle=0,\label{app:X_schrodeq} \end{equation} where $\mu_X^{-1}=m_e^{-1}+m_h^{-1}$ is the electron-hole pair reduced mass, while the Coulomb potential $V_{\bf q}$ is given by \begin{equation} \label{app:CoulombPotential_q} V_{\bf q} = \left\{ \begin{array}{rl} 2\pi e^2/\epsilon_{sc}L^2q & \text{ in 2D }\\ 4\pi e^2/\epsilon_{sc}L^3q^2 & \text{ in 3D} \end{array} \right. . \end{equation} \ From Eq.~(\ref{eq:func_phi}), we can deduce the creation operator of the $i$ exciton as \begin{equation} B^\dag_i=B^\dag_{\nu_i,{\bf Q}_i}=\sum_{{\bf k}}\langle{\bf k}|\nu_i\rangle a^\dag_{{\bf k}+\alpha_e {\bf Q}_i}b^\dag_{-{\bf k}+\alpha_h {\bf Q}_i},\label{app:B_i_vs_eh} \end{equation} This operator creates a single-pair eigenstate \begin{equation} (H-E_i)B^\dag_i|v\rangle=0. \end{equation} The exciton energy $E_i$ contains a relative motion part and a center-of-mass kinetic energy, namely, \begin{equation} E_i=E_{\nu_i,{\bf Q}_i}=\varepsilon_{\nu_i}+\frac{{\bf Q}_i^2}{2M_X},\label{app:E_iQ} \end{equation} where $M_X=m_e+m_h$. \renewcommand{\theequation}{\mbox{B.\arabic{equation}}} \setcounter{equation}{0} \section{On the various scatterings of two excitons\label{app:sec2}} In this appendix, we rederive various scatterings appearing in the biexciton Schr\"{o}dinger equation \cite{moniqPhysRep,MoniqPRB2007}. Without any loss of generality in calculating these scatterings, we set the biexciton center-of-mass momentum ${\bf K}$ to be zero. 
The two excitons then have opposite momenta. This just amounts to working in the reference frame of the center-of-mass of the exciton pair. \subsection{Pauli scattering} Let us first consider the Pauli scattering for hole exchange between excitons starting in ``in" states $(i,j)$ and ending in ``out" states $(m,n)$, the excitons $m$ and $i$ having the same electron, as shown in the diagram of Fig.~\ref{fig:fig1a}. This scattering is given, as read from the figure, by \begin{equation} \lambda_h\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)=\sum\langle m|{\bf k}'_e{\bf k}'_h\rangle\langle n|{\bf p}'_e{\bf p}'_h\rangle\langle{\bf p}_h{\bf p}_e|j\rangle\langle{\bf k}_h{\bf k}_e|i\rangle,\label{app:def_lambda_h} \end{equation} Similarly, the Pauli scattering for electron exchange shown in the diagram of Fig.~\ref{fig:fig1b}, with the excitons $m$ and $i$ having the same hole, is given by \begin{equation} \lambda_e\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)=\sum\langle m|{\bf p}'_e{\bf p}'_h\rangle\langle n|{\bf k}'_e{\bf k}'_h\rangle\langle{\bf p}_h{\bf p}_e|j\rangle\langle{\bf k}_h{\bf k}_e|i\rangle.\label{app:def_lambda_e} \end{equation} It is easy to check that $\lambda_e\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)=\lambda_h\left(\begin{smallmatrix} m& j\\ n& i \end{smallmatrix}\right)$, the Pauli scattering $\lambda_h\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)$ being often written as $\lambda\left(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\right)$ for simplicity. 
For two excitons $i$ and $j$ having opposite momenta, i.e., $i=(\nu_i,{\bf Q})$ and $j=(\nu_j,-{\bf Q})$, Eq.~(\ref{app:def_lambda_h}) reduces to \begin{equation} \sum\langle \nu_m|{\bf k}^\prime\rangle\langle \nu_n|{\bf p}^\prime\rangle\langle{\bf p}|\nu_j\rangle\langle{\bf k}|\nu_i\rangle,\label{app:def_lambda_h2} \end{equation} provided that the $({\bf k}^\prime,{\bf p}^\prime,{\bf k},{\bf p})$ momenta are such that $-{\bf p}^\prime-\alpha_h{\bf Q}^\prime=-{\bf k}+\alpha_h{\bf Q}$, $-{\bf k}'+\alpha_h{\bf Q}'=-{\bf p}-\alpha_h{\bf Q}$, ${\bf k}^\prime+\alpha_e{\bf Q}^\prime={\bf k}+\alpha_e{\bf Q}$, and ${\bf p}^\prime-\alpha_e{\bf Q}^\prime={\bf p}-\alpha_e{\bf Q}$, as read from Fig.~\ref{fig:lambda_h}. When inserted into Eq.~(\ref{app:def_lambda_h}), we get \begin{eqnarray} \lefteqn{\lambda_h\left(\begin{smallmatrix} (\nu_n,-{\bf Q}^\prime)& (\nu_j,-{\bf Q})\\ (\nu_m,{\bf Q}^\prime)& (\nu_i,{\bf Q}) \end{smallmatrix}\right)}\nonumber\\ &=&\sum_{\bf p}\langle \nu_m|{\bf p}+\frac{{\bf P}_-}{2}\rangle\langle \nu_n|{\bf p}-\frac{{\bf P}_-}{2}\rangle\nonumber\\ &&\times\langle {\bf p}+\frac{{\bf P}_+}{2}|\nu_i\rangle\langle {\bf p}-\frac{{\bf P}_+}{2}|\nu_j\rangle,\label{app:def_lambda} \end{eqnarray} where the momenta ${\bf P}_\pm$ are defined by ${\bf P}_\pm=\alpha_h({\bf Q}+{\bf Q}^\prime)\pm\alpha_e({\bf Q}^\prime-{\bf Q})$. \subsection{Direct Coulomb scattering} We now turn to the part of the direct Coulomb scattering resulting from hole-hole interaction. 
As seen from the second diagram of Fig.~\ref{fig:Xi}, it reads as \begin{eqnarray} \xi_{h_1 h_2}^{\rm dir}\big(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\big)&=&\sum_{{\bf q},{\bf k}_e,{\bf k}_h,{\bf p}_e,{\bf p}_h}V_{\bf q}\langle m|{\bf k}_e,{\bf k}_h-{\bf q}\rangle\label{app:def_dirCoul_hh}\\ &&\times\langle n|{\bf p}_e,{\bf p}_h+{\bf q}\rangle\langle{\bf p}_h,{\bf p}_e|j\rangle\langle{\bf k}_h,{\bf k}_e|i\rangle.\nonumber \end{eqnarray} In this direct process, the excitons keep their electron and hole components. As seen from Fig.~\ref{fig:Xi_part}, for two excitons $i=(\nu_i,{\bf Q})$ and $j=(\nu_j,-{\bf Q})$, Eq.~(\ref{app:def_dirCoul_hh}) reduces to \begin{equation} \sum V_{\bf q}\langle \nu_m|{\bf k}^\prime\rangle\langle \nu_n|{\bf p}^\prime\rangle\langle{\bf p}|\nu_j\rangle\langle{\bf k}|\nu_i\rangle,\label{app:def_dirCoul_hh2} \end{equation} provided that the $({\bf k}^\prime,{\bf p}^\prime,{\bf k},{\bf p})$ momenta are such that $-{\bf k}^\prime+\alpha_h{\bf Q}^\prime=-{\bf k}+\alpha_h{\bf Q}-{\bf q}$, $-{\bf p}^\prime-\alpha_h{\bf Q}^\prime=-{\bf p}-\alpha_h{\bf Q}+{\bf q}$, ${\bf k}^\prime+\alpha_e{\bf Q}^\prime={\bf k}+\alpha_e{\bf Q}$, and ${\bf p}^\prime-\alpha_e{\bf Q}^\prime={\bf p}-\alpha_e{\bf Q}$, as read from the figure. When inserted into Eq.~(\ref{app:def_dirCoul_hh}), this gives \begin{eqnarray} \lefteqn{\xi_{h_1h_2}^{\rm dir}\Big(\begin{smallmatrix} (\nu_n,-{\bf Q}^\prime)& (\nu_j,-{\bf Q})\\ (\nu_m,{\bf Q}^\prime)& (\nu_i,{\bf Q}) \end{smallmatrix}\Big)}\label{app:def_dirXi_hh}\\ &&=V_{{\bf P}_0}\sum_{{\bf k}{\bf p}}\langle \nu_m|{\bf k}-\alpha_e {\bf P}_0\rangle\langle\nu_n|{\bf p}+\alpha_e {\bf P}_0\rangle\langle {\bf k}|\nu_i\rangle\langle {\bf p}|\nu_j\rangle,\nonumber \end{eqnarray} where ${\bf P}_0={\bf Q}^\prime-{\bf Q}$ is the momentum transfer. 
Using the same procedure for $\xi_{e_1e_2}^{\rm dir}$, $\xi_{h_1e_2}^{\rm dir}$, and $\xi_{e_1h_2}^{\rm dir}$, we end with a direct Coulomb scattering between two excitons given by \begin{eqnarray} \lefteqn{\xi^{\rm dir}\left(\begin{smallmatrix} (\nu_n,-{\bf Q}^\prime)& (\nu_j,-{\bf Q})\\ (\nu_m,{\bf Q}^\prime)& (\nu_i,{\bf Q}) \end{smallmatrix}\right)}\label{app:def_dirXi}\\ &=&V_{{\bf P}_0}\sum_{{\bf k}{\bf p}}\Big[\langle \nu_m|{\bf k}+\alpha_h {\bf P}_0\rangle\langle\nu_n|{\bf p}-\alpha_h {\bf P}_0\rangle\nonumber\\ &&+\langle \nu_m|{\bf k}-\alpha_e {\bf P}_0\rangle\langle\nu_n|{\bf p}+\alpha_e {\bf P}_0\rangle\nonumber\\ &&-\langle \nu_m|{\bf k}+\alpha_h {\bf P}_0\rangle\langle\nu_n|{\bf p}+\alpha_e {\bf P}_0\rangle\nonumber\\ &&-\langle \nu_m|{\bf k}-\alpha_e {\bf P}_0\rangle\langle\nu_n|{\bf p}-\alpha_h {\bf P}_0\rangle\Big]\nonumber\\ &&\times\langle {\bf k}|\nu_i\rangle\langle {\bf p}|\nu_j\rangle\nonumber. \end{eqnarray} By noting that \begin{eqnarray} &&\sum_{\bf k} \big[\langle \nu^\prime|{\bf k}+\alpha_h {\bf q}\rangle-\langle \nu^\prime|{\bf k}-\alpha_e {\bf q}\rangle\big]\langle {\bf k}|\nu\rangle\nonumber\\ &&=\langle \nu^\prime|e^{i\alpha_h {\bf q}\cdot{\bf r}}-e^{-i\alpha_e {\bf q}\cdot{\bf r}}|\nu\rangle=\mathcal{T}_{\nu^\prime\nu}({\bf q}), \end{eqnarray} this direct Coulomb scattering splits as \begin{equation} \xi^{\rm dir}\left(\begin{smallmatrix} (\nu_n,-{\bf Q}^\prime)& (\nu_j,-{\bf Q})\\ (\nu_m,{\bf Q}^\prime)& (\nu_i,{\bf Q}) \end{smallmatrix}\right)=V_{{\bf P}_0}\mathcal{T}_{\nu_m\nu_i}({\bf P}_0)\mathcal{T}_{\nu_n\nu_j}(-{\bf P}_0).\label{app:dirInt1} \end{equation} For $\nu$ and $\nu^\prime$ restricted to the exciton ground state $\nu_0$, we find \begin{equation} \mathcal{T}_{\nu_0\nu_0}({\bf q})=g\left(\frac{\alpha_h a_X q}{2}\right)-g\left(\frac{\alpha_e a_X q}{2}\right), \end{equation} where, for 2D and 3D systems, $g_{2D}(p)=(1+p^2/4)^{-3/2}$ and $g_{3D}(p)=(1+p^2)^{-2}$. 
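A quick numerical check of this form factor (with $a_X=1$, so only the mass ratio matters; units are illustrative):

```python
# Numerical check of the direct-scattering form factor
#   T_{nu0 nu0}(q) = g(alpha_h a_X q / 2) - g(alpha_e a_X q / 2),
# built from the 2D and 3D ground-state functions quoted in the text.

def g2D(p):
    return (1.0 + p**2 / 4.0) ** (-1.5)

def g3D(p):
    return (1.0 + p**2) ** (-2.0)

def T00(q, mh_over_me, g=g2D, a_X=1.0):
    alpha_h = mh_over_me / (1.0 + mh_over_me)   # m_h / (m_e + m_h)
    alpha_e = 1.0 - alpha_h
    return g(alpha_h * a_X * q / 2.0) - g(alpha_e * a_X * q / 2.0)
```

The form factor vanishes at $q=0$ and for $m_h=m_e$, so the direct Coulomb scattering (\ref{app:dirInt1}) between two ground-state excitons disappears in both limits.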
We see that $\mathcal{T}_{\nu_0\nu_0}({\bf q})$ depends on the magnitude of the momentum transfer $q=|{\bf q}|$ only and that $\mathcal{T}_{\nu_0\nu_0}({\bf q}=0)=0$. We also note that $\mathcal{T}_{\nu_0\nu_0}({\bf q})=0$ for $\alpha_e=\alpha_h$, i.e., equal electron and hole masses.\ \subsection{Exchange-Coulomb scatterings} Two excitons can also have exchange-Coulomb scatterings. These are defined as \begin{eqnarray} \xi^{\rm in}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)&=&\sum_{rs}\lambda_h\big(\begin{smallmatrix} n& s\\ m& r\end{smallmatrix}\big)\xi^{\rm dir}\big(\begin{smallmatrix} s& j\\ r& i\end{smallmatrix}\big),\\ \xi^{\rm out}\big(\begin{smallmatrix} n& j\\ m& i\end{smallmatrix}\big)&=&\sum_{rs}\xi^{\rm dir}\big(\begin{smallmatrix} n& s\\ m& r\end{smallmatrix}\big)\lambda_h\big(\begin{smallmatrix} s& j\\ r& i\end{smallmatrix}\big), \end{eqnarray} depending on whether the exchange takes place after or before the Coulomb interaction. The part of the ``in'' exchange-Coulomb scattering due to hole-hole interaction, as shown in the first diagram of Fig.~\ref{fig:Xi_in}, is given by \begin{eqnarray} \xi_{h_1 h_2}^{\rm in}\big(\begin{smallmatrix} n& j\\ m& i \end{smallmatrix}\big)&=&\sum_{{\bf q},{\bf k}_e,{\bf k}_h,{\bf p}_e,{\bf p}_h}V_{\bf q}\langle m|{\bf k}_e,{\bf p}_h+{\bf q}\rangle\label{app:def_inexCoul_hh}\\ &&\times\langle n|{\bf p}_e,{\bf k}_h-{\bf q}\rangle\langle{\bf p}_h,{\bf p}_e|j\rangle\langle{\bf k}_h,{\bf k}_e|i\rangle.\nonumber \end{eqnarray} As seen from Fig.~\ref{fig:Xi_in_hh}, for $i=(\nu_i,{\bf Q})$ and $j=(\nu_j,-{\bf Q})$, Eq.~(\ref{app:def_inexCoul_hh}) reduces to the same equation as Eq.~(\ref{app:def_dirCoul_hh2}), the $({\bf k}^\prime,{\bf p}^\prime,{\bf k},{\bf p})$ momenta being now such that ${\bf k}^\prime+\alpha_h{\bf Q}^\prime={\bf p}-\alpha_h{\bf Q}+{\bf q}$, ${\bf p}^\prime-\alpha_h{\bf Q}^\prime={\bf k}+\alpha_h{\bf Q}-{\bf q}$, $-{\bf k}^\prime+\alpha_e{\bf Q}^\prime=-{\bf p}-\alpha_e{\bf Q}$, and $-{\bf
p}^\prime-\alpha_e{\bf Q}^\prime=-{\bf k}+\alpha_e{\bf Q}$. This leads to \begin{eqnarray} \lefteqn{\xi^{\rm in}_{ c_1c_2}\left(\begin{smallmatrix} (\nu_n,-{\bf Q})& (\nu_j,-{\bf Q}^\prime)\\ (\nu_m,{\bf Q})& (\nu_i,{\bf Q}^\prime) \end{smallmatrix}\right)}\nonumber\\ &=&\sum_{{\bf k},{\bf p}\not=0}V_{\bf p}\langle \nu_m|{\bf k}+\frac{{\bf P}_-+{\bf p}}{2}\rangle\langle \nu_n|{\bf k}-\frac{{\bf P}_-+{\bf p}}{2}\rangle \nonumber\\ &&\times\langle {\bf k}+\frac{{\bf P}_+\mp{\bf p}}{2}|\nu_i\rangle\langle {\bf k}-\frac{{\bf P}_+\mp{\bf p}}{2}|\nu_j\rangle,\label{app:inInt1} \end{eqnarray} with the lower sign in front of ${\bf p}$ for $c_1c_2=h_1h_2$ and the upper sign for $c_1c_2=e_1e_2$.\ The same procedure for the electron-hole part of the interaction yields \begin{eqnarray} \lefteqn{\xi^{\rm in}_{ c_1d_2}\left(\begin{smallmatrix} (\nu_n,-{\bf Q}^\prime)& (\nu_j,-{\bf Q})\\ (\nu_m,{\bf Q}^\prime)& (\nu_i,{\bf Q}) \end{smallmatrix}\right)}\nonumber\\ &=&-\sum_{{\bf k},{\bf p}\not=0}V_{\bf p}\langle \nu_m|{\bf k}+\frac{{\bf P}_-+{\bf p}}{2}\rangle\langle \nu_n|{\bf k}-\frac{{\bf P}_-+{\bf p}}{2}\rangle\nonumber\\ &&\times \langle {\bf k}+\frac{{\bf P}_+\mp{\bf p}}{2}|\nu_i\rangle\langle {\bf k}-\frac{{\bf P}_+\pm{\bf p}}{2}|\nu_j\rangle,\label{app:inInt2} \end{eqnarray} with the upper sign for $c_1d_2=e_1h_2$ and the lower sign for $c_1d_2=h_1e_2$.\ Note that we can eliminate the sum over ${\bf p}$ in the electron-hole part of the $\xi^{\rm in}_{ c_1d_2}$ scattering, by setting ${\bf k}^\prime={\bf k}-{\bf p}/2$ and by using Eq.~(\ref{app:X_schrodeq}). 
We then find \begin{eqnarray} \lefteqn{\xi^{\rm in}_{ e_1h_2}\left(\begin{smallmatrix} (\nu_n,-{\bf Q}^\prime)& (\nu_j,-{\bf Q})\\ (\nu_m,{\bf Q}^\prime)& (\nu_i,{\bf Q}) \end{smallmatrix}\right)}\nonumber\\ &=&\sum_{{\bf k}}\left(\varepsilon_{\nu_m}-\frac{({\bf k}+{\bf P}_-/2)^2}{2\mu_X}\right)\langle \nu_m|{\bf k}+\frac{{\bf P}_-}{2}\rangle\nonumber\\ &&\times \langle \nu_n|{\bf k}-\frac{{\bf P}_-}{2}\rangle\langle {\bf k}+\frac{{\bf P}_+}{2}|\nu_i\rangle\langle {\bf k}-\frac{{\bf P}_+}{2}|\nu_j\rangle.\label{app:inInt21} \end{eqnarray} Similarly, by setting ${\bf k}^\prime={\bf k}+{\bf p}/2$, we find \begin{eqnarray} \lefteqn{\xi^{\rm in}_{ h_1e_2}\left(\begin{smallmatrix} (\nu_n,-{\bf Q}^\prime)& (\nu_j,-{\bf Q})\\ (\nu_m,{\bf Q}^\prime)& (\nu_i,{\bf Q}) \end{smallmatrix}\right)}\nonumber\\ &=&\sum_{{\bf k}}\left(\varepsilon_{\nu_n}-\frac{({\bf k}-{\bf P}_-/2)^2}{2\mu_X}\right)\langle \nu_m|{\bf k}+\frac{{\bf P}_-}{2}\rangle\nonumber\\ &&\times \langle \nu_n|{\bf k}-\frac{{\bf P}_-}{2}\rangle\langle {\bf k}+\frac{{\bf P}_+}{2}|\nu_i\rangle\langle {\bf k}-\frac{{\bf P}_+}{2}|\nu_j\rangle.\label{app:inInt22} \end{eqnarray}
\section{Introduction} \emph{Actual causality} is a long-standing philosophical problem that is fundamental to the task of reasoning about and explaining observations. Given a narrative or history of events and an observed effect, solving this problem involves finding the events or actions from this history that are responsible for producing this effect, i.e.\ those that caused the effect. Also known as {\em token-level} causality, this problem is different from \emph{general} or {\em type-level} causality, where the task is to discover universal causal mechanisms. Actual causality plays a significant role in reasoning about agents. For instance, causal reasoning can be used to explain the behaviour of a group of agents, e.g.\ via causal analysis of the mental states produced by this behaviour. These mental states may include beliefs and goals of the agents whose actions are the cause of the observed behaviour as well as those of others' (see Sec.\ 5 for an example). \par Pearl \shortcite{Pearl98,Pearl00} was a pioneer in computational enquiry into actual causality. This line of research was later continued by Halpern \shortcite{Halpern00}, Halpern and Pearl \shortcite{HalpernP05}, and others \cite{EiterL02,Hopkins05,HopkinsP07,Halpern15,Halpern16}. This ``HP approach'' is based on the concept of structural equations \cite{Simon77}. HP follows the Humean counterfactual definition of causation, which states that ``an outcome $B$ is caused by an event $A$'' is the same as saying that ``had $A$ never occurred, $B$ never had existed''. This definition suffers from the problem of preemption\footnote{Preemption happens when two competing events try to achieve the same effect, and the latter of these fails to do so, as the earlier one has already achieved the effect.}: it could be the case that in the absence of event $A$, $B$ would still have occurred due to another event, which in the original trace was preempted by $A$. 
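The preemption problem can be made concrete with the classic two-thrower structural-equations toy model (our illustration, not an example from the paper): Suzy's rock shatters a bottle first and thereby preempts Billy's, yet the plain counterfactual test fails to name Suzy's throw a cause.

```python
# Classic preemption example (Suzy and Billy) as structural equations.
# Suzy throws first, so her rock shatters the bottle and preempts Billy's.

def shattered(suzy_throws, billy_throws):
    suzy_hits = suzy_throws
    billy_hits = billy_throws and not suzy_hits   # Billy's rock finds no bottle
    return suzy_hits or billy_hits

actual = shattered(suzy_throws=True, billy_throws=True)            # shattered
counterfactual = shattered(suzy_throws=False, billy_throws=True)   # still shattered
```

Since the bottle shatters even without Suzy's throw, the plain Humean test denies that her throw is a cause; the HP approach recovers it by evaluating the counterfactual under the contingency that Billy's rock does not hit.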
HP address this by performing counterfactual analysis only under carefully selected contingencies, which suspend some subset of the model's mechanisms. While their inspirational early work was shown to be useful for some practical applications, their approach based on Structural Equation Models (SEM) has been criticized for its limited expressiveness \cite{Hopkins05,HopkinsP07,GlymourDGERSSTZ10}, and researchers have attempted to expand SEM with additional features, e.g.\ \cite{Leitner-FischerL13}. Note that despite recently reported progress (e.g.\ \cite{HalpernPeters22}), many of these expressive limitations remain. Also, while there has been much work on actual causality, the vast majority of it has focused on defining causes from an objective standpoint. \par In recent years, researchers have become increasingly interested in studying causation from the perspective of agents. Among other things, this is useful for defining important concepts such as responsibility and blame. Inspired by a novel action-theoretic formalization of actual causation \cite{BatusovS18}, Khan and Lesp\'{e}rance \shortcite{KL21} (KL, henceforth) recently proposed a first account of causal knowledge that supports epistemic effects, models causal knowledge dynamics, and allows sensing actions to be causes of observed effects. To date, no other study has looked specifically at these issues. But their formalization is not expressive enough to model explanations via causal analysis of mental states, as it ignores a crucial aspect of theory of mind, namely motivations. In this paper, we build on their work to support causal reasoning about conative effects. In our framework, one can reason about causes of motivational states, and we allow motivation-altering actions to be causes of observed effects. We illustrate that this formalization, along with a model of goal recognition, can be utilized to explain agent behaviour.
\par Our contribution in this paper is three-fold. First, we show how causal reasoning about goals/intentions can be modeled. Second, using an example, we illustrate how this formalization along with a model of goal recognition can be used to explain agent behaviour in communicative multiagent contexts. The generated explanations include both direct causal explanations and higher-order, more useful indirect explanations. The latter are grounded in (multiagent) theory of mind-based causal reasoning. Finally, while doing this, we extend a previously proposed account of goal change to deal with the \emph{request} communicative action. \section{Action and Knowledge} \textbf{The Situation Calculus}\indent Our base framework for modeling causal reasoning is the situation calculus (SC) \cite{McCarthyH69} as formalized in \cite{Reiter01}. Here, a possible state of the domain is represented by a situation. The initial state is denoted by $S_0$. There is a distinguished binary function symbol $do$ where $do(a,s)$ denotes the successor situation to $s$ resulting from performing the action $a$. Thus the situations can be viewed as forming a tree, where the root of the tree is an initial situation and the arcs represent actions. As usual, a relational/functional fluent takes a situation term as its last argument. There is a special predicate $\mathit{Poss}(a,s)$ used to state that action $a$ is executable in situation $s$. We will use the abbreviation $do([\alpha_1,\cdots,\alpha_n],S_0)$ to represent the situation obtained by consecutively performing $\alpha_1,\cdots,\alpha_n$ starting from $S_0$. Also, the notation $s\sqsubset s'$ means that situation $s'$ can be reached from situation $s$ by executing a sequence of actions.
$s\sqsubseteq s'$ is an abbreviation of $s\sqsubset s'\vee s= s'.$ $s<s'$ is an abbreviation of $s\sqsubset s'\wedge \mathit{executable}(s'),$ where $\mathit{executable}(s)$ is defined as $\forall a',s'.\;do(a',s')\sqsubseteq s\supset \mathit{Poss}(a',s'),$ i.e.\ every action performed in reaching situation $s$ was possible in the situation in which it occurred. $s\leq s'$ is an abbreviation of $s<s'\vee s= s'.$ \par Our framework uses an action theory $\mathcal{D}$ that includes the following set of axioms:\footnote{We will be quantifying over formulae, and thus assume that $\mathcal{D}$ includes axioms for encoding of formulae as first-order terms, as in \cite{ShapLesLev07}.} (1) action precondition axioms (APA), one per action $a$ characterizing $\mathit{Poss}(a,s)$, (2) successor state axioms (SSA), one per fluent, that succinctly encode both effect and frame axioms and specify exactly when the fluent changes, (3) initial state axioms describing what is true initially, (4) unique name axioms for actions, and (5) domain-independent foundational axioms describing the structure of situations \cite{LevPirRei98}. \newline\noindent\textbf{Knowledge in the Situation Calculus}\indent Following \cite{Moore85,SchLev03}, we model knowledge using a possible worlds account adapted to the SC. There can now be multiple initial situations. $\mathit{Init}(s)$ means that $s$ is an initial situation. The actual initial state is denoted by $S_0$. $K(d,s',s)$ is used to denote that in situation $s$, the agent $d$ thinks that it could be in situation $s'$. Using $K$, the knowledge of an agent $d$ is defined as:\footnote{$\Phi$ can contain a placeholder {\em now} in the place of the situation terms. We often suppress {\em now} when the intent is clear from the context. Also, $\Phi[s]$ denotes the formula obtained by restoring the situation argument $s$ into all fluents in $\Phi$.
} $\mathit{Know}(d,\Phi,s)\doteq\forall s'.\;K(d,s',s)\supset \Phi[s']$, i.e.\ the agent $d$ knows $\Phi$ in $s$ if $\Phi$ holds in all of its $K$-accessible situations in $s$. We also use the abbreviations $\mathit{Kwhether}(d,\Phi,s)\doteq \mathit{Know}(d,\Phi,s)\vee \mathit{Know}(d,\neg\Phi,s),$ i.e.\ $d$ knows whether $\Phi$ holds in $s$, and $\mathit{Kref}(d,\theta,s)\doteq\exists t.\;\mathit{Know}(d,\theta=t,s)$, i.e.\ it knows who/what $\theta$ refers to in $s$. $K$ is constrained to be reflexive and Euclidean (and thus transitive) in the initial situation to capture the fact that the agent's knowledge is true, and that it has positive and negative introspection. \par In our framework, the dynamics of knowledge is specified using an SSA for $K$ that supports knowledge expansion as a result of sensing actions as well as communication actions. The information provided by a binary sensing action is specified using the predicate $SF(a, s)$. Similarly for non-binary sensing actions, the term $\mathit{sff}(a, s)$ is used to denote the sensing value returned by the action. These are specified using {\em sensed fluent axioms}; see \cite{Lev96} for details. Shapiro et al. \shortcite{ShapiroLL97} and later Lesp\'erance \shortcite{Les02} extended the SSA for $K$ to support variants of the `inform' communicative action. We will adopt the variant proposed in KL \shortcite{KL05}. The preconditions of $\mathit{inform}(\mathit{inf},\mathit{agt},\Phi)$, which can be used by $\mathit{inf}$ to inform $\mathit{agt}$ that $\Phi$, are as follows: \begin{eqnarray*} &&\mathit{Poss}(\mathit{inform}(\mathit{inf},\mathit{agt},\Phi),s)\equiv\mathit{Know}(\mathit{inf},\Phi,s)\\ &&\hspace{10 mm}\mbox{}\wedge\neg\mathit{Know}(\mathit{inf},\mathit{Know}(\mathit{agt},\Phi,\mathit{now}),s). \end{eqnarray*} We assume that its effects have been specified as in KL \shortcite{KL05}.
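As an aside, the possible-worlds reading of $\mathit{Know}$ and the knowledge-expansion effect of $\mathit{inform}$ can be prototyped in a few lines of Python. The encoding of epistemic alternatives as sets of situation-suppressed facts, and all names below, are our own illustrative simplifications, not part of the formal theory (e.g.\ only the first conjunct of the $\mathit{inform}$ precondition is checked):

```python
# Minimal possible-worlds sketch of Know and inform (illustrative only).
# An epistemic alternative is a frozen set of situation-suppressed facts;
# K maps each agent to the set of alternatives it considers possible.

def know(K, agent, phi):
    """Know(d, Phi, s): Phi holds in every K-accessible alternative."""
    return all(phi(w) for w in K[agent])

def kwhether(K, agent, phi):
    """Kwhether(d, Phi, s): the agent knows whether Phi holds."""
    return know(K, agent, phi) or know(K, agent, lambda w: not phi(w))

def inform(K, sender, receiver, phi):
    """Knowledge expansion: the receiver discards accessible alternatives
    where Phi fails (only the sender's knowledge precondition is checked)."""
    assert know(K, sender, phi), "precondition: the sender must know Phi"
    K2 = dict(K)
    K2[receiver] = {w for w in K[receiver] if phi(w)}
    return K2

# Two alternatives differing on whether there is a storm at L1.
w1 = frozenset({"TStrom(L1)"})
w2 = frozenset()
K = {"Dc": {w1}, "D1": {w1, w2}}      # D1 is unsure about the storm
storm = lambda w: "TStrom(L1)" in w

print(know(K, "D1", storm))           # False: D1 does not know yet
K = inform(K, "Dc", "D1", storm)
print(know(K, "D1", storm))           # True after the inform
```

Reflexivity of $K$ would correspond here to requiring that the actual alternative remain in each agent's set; the sketch does not enforce the accessibility constraints.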
As shown in \cite{SchLev03}, the constraints on $K$ then continue to hold after any sequence of actions since they are preserved by the SSA for $K$. A similar result can be shown for the KL \shortcite{KL05} variant of the SSA for $K$. \par Thus to model knowledge, we will use a theory that is similar to the one above, but with modified foundational axioms to allow for multiple initial epistemic states. Also, action preconditions can now include knowledge preconditions and initial state axioms can now include axioms describing the epistemic states of the agents. Finally, the aforementioned axioms for $K$ and $\mathit{inform}$ are included. See \cite{Reiter01} and \cite{KL05} for details of these. Note that like \cite{SchLev03}, we assume that actions are fully observable (even if their effects are not). This can be generalized as in \cite{BacchusHL99}. \newline\noindent\textbf{Paths in the Situation Calculus}\indent Following KL \shortcite{KL16}, we will formalize the sort of \emph{paths} in the SC. A path is essentially an infinite sequence of situations, where each situation along the path can be reached by performing some executable action in the preceding situation. We will use $\mathit{Starts}(p,s)$ to denote that $s$ is the earliest situation on path $p$ and $\mathit{OnPath}(p,s)$ to denote that $s$ is on $p$. $\mathit{Suffix}(p',p,s)$ means that path $p'$ that starts with situation $s$ is a suffix of $p$. KL \shortcite{KL15} showed how one can interpret arbitrary CTL$^*$ formulae within the SC with paths. We assume that our theory $\mathcal{D}$ includes the axiomatization for paths. \par We will use uppercase and lowercase Greek letters for state formulae (i.e.\ situation-suppressed SC formulae) and path formulae, resp.
These are inductively defined as follows: \vspace{-4 mm}\begin{small} \begin{eqnarray*} &&\hspace{0 mm}\Phi::=P(\vec{x})\mid A \phi\mid\Phi\wedge\Phi\mid\neg\Phi\mid\forall x.\;\Phi\\ &&\hspace{0 mm}\phi::=\Phi\mid\phi\wedge\phi\mid\neg\phi\mid\forall x.\;\phi\mid\bigcirc\phi\mid\phi\;\mathcal{U}\;\phi \end{eqnarray*} \end{small}\noindent Here, $\vec{x}$ and $x$ are object terms, $P(\vec{x})$ is an arbitrary situation-suppressed SC formula, and $A\phi$ (i.e.\ \emph{over all paths} $\phi$) is a path quantifier. Also, $\bigcirc\phi$ means that $\phi$ holds {\em next} over a path, while $\phi\;\mathcal{U}\;\psi$ stands for $\phi$ \emph{until} $\psi$. Finally, other logical connectives and quantifiers such as $\vee,\supset,\equiv,\exists$ and CTL$^*$ operators such as $\Diamond\phi$ (i.e.\ eventually $\phi$), $\phi\;\mathcal{B}\;\psi$ (i.e.\ $\phi$ before $\psi$), etc.\ are handled as the usual abbreviations. \par Like $\mathit{now}$ in state formulae, path formulae $\phi$ can also contain an often-suppressed path placeholder $\mathit{path}$ in the place of the path terms. The function $\llbracket\cdot\rrbracket$ translates the above-defined formulae into formulae of the SC with paths. We write $\Phi\llbracket s\rrbracket$ (and $\phi\llbracket p\rrbracket$) to mean that state formula $\Phi$ (and path formula $\phi$) holds in situation $s$ (and over path $p$, respectively). See \cite{KL15} for how $\llbracket\cdot\rrbracket$ is defined. \par We will use $\alpha$ and $\sigma$, possibly with decorations, to represent ground action and situation terms, respectively. Finally, we will use uppercase Latin letters for ground terms, and lowercase Latin letters for variables. \newline\noindent\textbf{Example}\indent For our running example, we consider two simple rescue drone agents $D_1$ and $D_2$ and their flight paths from one location to another. At any time, an agent can be in any of the four locations $L_s,L_d, L_1,$ and $L_1'$.
The geometry of the flight paths is captured using the non-fluent relation $\mathit{Route}(l,l')$, which states that there is a flight path from location $l$ to $l'$ (throughout, we assume that free variables are universally quantified from the outside):\footnote{We assume that all agents know all non-fluent facts.} \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}(a).\;\mathit{Route}(l,l')\equiv[(l=L_s\wedge l'=L_1)\vee(l=L_s\wedge l'=L_1')\\ &&\hspace{14 mm}\mbox{}\vee(l=L_1\wedge l'=L_d)\vee(l=L_1'\wedge l'=L_d)]. \end{eqnarray*} \end{small} \vspace{-4 mm} \noindent A controller agent $D_c$ is in charge of the overall mission and warns about potentially unsafe routes. Besides the $\mathit{inform}$ communicative action mentioned above, there are three additional actions in this domain. Action $\mathit{takeOff}(d,l)$ can be used by drone $d$ to take off from location $l$, $\mathit{flyTo}(d,l,l')$ takes $d$ from $l$ to $l'$, and $\mathit{land}(d,l)$ makes $d$ land at $l$. There are four fluents in this domain, $\mathit{At}(d,l,s)$, $\mathit{Flying}(d,s)$, $\mathit{Vis}(d,l,s),$ and $\mathit{TStrom}(l,s)$, representing that $d$ is located at $l$ in situation $s$, that $d$ is flying in $s$, that $d$ has visited $l$ in $s$, and that there is an ongoing thunderstorm at $l$ in $s$. \par The action preconditions in this domain are as follows: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}(b).\;\mathit{Poss}(\mathit{takeOff}(d,l),s)\equiv\mathit{At}(d,l,s)\land\neg\mathit{Flying}(d,s),\\ &&\hspace{-7 mm}(c).\;\mathit{Poss}(\mathit{flyTo}(d,l,l'),s)\equiv\mathit{At}(d,l,s)\land\mathit{Flying}(d,s)\\ &&\hspace{10 mm}\mbox{}\land\mathit{Route}(l,l')\land\neg\mathit{Know}(d,\mathit{TStrom}(l'),s),\\ &&\hspace{-7 mm}(d).\;\mathit{Poss}(\mathit{land}(d,l),s)\equiv\mathit{At}(d,l,s)\land\mathit{Flying}(d,s). 
\end{eqnarray*} \end{small} \vspace{-4 mm} \noindent Thus, e.g., $(c)$ states that a drone agent $d$ can fly from location $l$ to $l'$ in situation $s$ iff it is located at $l$ in $s$, it is flying in $s$, there is a route from $l$ to $l'$, and it does not know that there is a thunderstorm at $l'$ in $s$. \par Moreover, the SSAs for the above fluents are as follows. \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}(e).\;\mathit{At}(d,l,do(a,s))\equiv[\exists l'.\;a=\mathit{flyTo}(d,l',l)\\ &&\hspace{22 mm}\mbox{}\lor(\mathit{At}(d,l,s)\land\neg\exists l'.\;a=\mathit{flyTo}(d,l,l'))],\\ &&\hspace{-7 mm}(f).\;\mathit{Flying}(d,do(a,s))\equiv[\exists l.\;a=\mathit{takeOff}(d,l)\\ &&\hspace{22 mm}\mbox{}\lor(\mathit{Flying}(d,s)\land\neg\exists l.\;a=\mathit{land}(d,l))],\\ &&\hspace{-7 mm}(g).\;\mathit{Vis}(d,l,do(a,s))\equiv\exists l'.\;a=\mathit{flyTo}(d,l',l)\lor\mathit{Vis}(d,l,s),\\ &&\hspace{-7 mm}(h).\;\mathit{TStrom}(l,do(a,s))\equiv\mathit{TStrom}(l,s). \end{eqnarray*} \end{small} \vspace{-4 mm} \noindent Thus, e.g., Axiom $(e)$ states that $d$ is at location $l$ after executing action $a$ in situation $s$ (i.e.\ in $do(a,s)$) iff $a$ refers to $d$'s action of flying from some location $l'$ to $l$, or $d$ was already at $l$ in $s$ and $a$ is not its action of flying to a different location $l'$.
These are captured using the following initial state axioms (note that $\mathit{Know}(d,\Phi(\mathit{now}),s)\supset\Phi[s]$): \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}(i).\;\mathit{Know}(D_1,\mathit{At}(D_1,L_s),S_0),\\ &&\hspace{-7 mm}(j).\;\mathit{Know}(D_1,\neg\mathit{Flying}(D_1),S_0),\\ &&\hspace{-7 mm}(k).\;\mathit{Know}(D_1,\forall l.\;\mathit{Vis}(D_1,l)\equiv l=L_s,S_0),\\ &&\hspace{-7 mm}(l).\;\neg\mathit{Know}(D_1,\mathit{TStrom}(L_1),S_0),\\ &&\hspace{-7 mm}(m).\;\mathit{Know}(D_1,\neg\mathit{TStrom}(L_1'),S_0),\\ &&\hspace{-7 mm}(n).\;\mathit{Know}(D_1,\neg\mathit{TStrom}(L_d),S_0),\\ &&\hspace{-7 mm}(o).\;\mathit{Know}(D_c,\mathit{TStrom}(L_1),S_0),\\ &&\hspace{-7 mm}(p).\;\forall d.\;d\neq D_c\supset\neg\mathit{Know}(D_c,\mathit{Know}(d,\mathit{TStrom}(L_1)),S_0). \end{eqnarray*} \end{small} \vspace{-4 mm} \section{Formalizing Goals and Intentions} To model conative effects in the SC, we adopt the expressive formalization of prioritized goals (p-goals) and intentions proposed by KL \shortcite{KL10}. In this framework, each p-goal is specified by its own accessibility relation $G$. To deal with multiple agents, we modify KL's proposal by adding an agent argument for all goal-related predicates and relations, usually as the first argument. Given agent $d$, a path $p$ is $G$-accessible at priority level $n$ in situation $s$, denoted by $G(d,p,n,s)$, iff the goal of $d$ at level $n$ is satisfied over $p$ and $p$ starts with a situation that has the same action history as $s$. The latter requirement ensures that the agent's p-goal-accessible paths reflect the actions that have been performed so far. A smaller $n$ represents higher priority, with $0$ being the highest priority level. Thus the set of p-goals is totally ordered according to priority.
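As an illustrative aside, the $G$-relation just described can be prototyped by storing, per agent, the set of accessible paths at each priority level; a goal is then held at level $n$ exactly when it is satisfied over every path stored at $n$ (this is the $\mathit{PGoal}$ test defined formally below). Representing paths as finite tuples of visited locations, and all names here, are our own simplifications:

```python
# Illustrative sketch of prioritized goal accessibility: G[agent][n]
# is the set of accessible paths at priority level n (lower n means
# higher priority). Paths are abbreviated as tuples of locations.

def pgoal(G, agent, n, phi):
    """phi is a p-goal of the agent at level n iff phi holds
    over every accessible path at that level."""
    return all(phi(p) for p in G[agent][n])

# Candidate paths for drone D1 ("L1p" stands for L1').
paths = {("Ls", "L1", "Ld"), ("Ls", "L1p", "Ld"), ("Ls", "L1")}
eventually_at_Ld = lambda p: "Ld" in p
L1_before_Ld = lambda p: ("Ld" not in p
                          or ("L1" in p and p.index("L1") < p.index("Ld")))

G = {"D1": {
    0: {p for p in paths if eventually_at_Ld(p)},   # phi_0 at level 0
    1: {p for p in paths if L1_before_Ld(p)},       # phi_1 at level 1
}}

print(pgoal(G, "D1", 0, eventually_at_Ld))   # True
print(pgoal(G, "D1", 1, L1_before_Ld))       # True
print(pgoal(G, "D1", 0, L1_before_Ld))       # False: not entailed at level 0
```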
We say that $d$ has the p-goal that $\phi$ at level $n$ in situation $s$ iff $\phi$ holds over all paths that are $G$-accessible for $d$ at $n$ in $s$, i.e.\ $\mathit{PGoal}(d,\phi,n,s)\doteq\forall p.\;G(d,p,n,s)\supset\phi\llbracket p\rrbracket.$ \par We assume that a domain theory $\mathcal{D}$ for our framework also includes the domain-dependent initial goal axioms (see below) and the domain-independent axioms and definitions that appear throughout this paper. Like KL, we allow the agent to have infinitely many goals, some of which can be left unspecified. For instance, assume that initially, our drone agent $D_1$ has the following two p-goals: $\phi_0=\Diamond\mathit{At}(D_1,L_d)$, i.e.\ that it is eventually at $L_d$, and $\phi_1=\mathit{Vis}(D_1,L_1)\;\mathcal{B}\;\mathit{Vis}(D_1,L_d)$, i.e.\ that it visits $L_1$ before it visits $L_d$, at level $0$ and $1$, respectively. Also, $D_c$ does not have any initial p-goals. Then the initial goal hierarchy of $D_1$ and $D_c$ can be specified using the following axioms: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}(q).\;\mathit{Init}(s)\!\supset\ ((G(D_1,p,0,s)\equiv\mathit{Starts}(p,s')\!\wedge\!\mathit{Init}(s')\!\wedge\!\phi_0\llbracket p\rrbracket)\\ &&\hspace{9 mm}\mbox{}\wedge(G(D_1,p,1,s)\equiv\mathit{Starts}(p,s')\!\wedge\!\mathit{Init}(s')\!\wedge\!\phi_1\llbracket p\rrbracket)),\\ &&\hspace{-7 mm}(r).\ \mathit{Init}(s)\!\wedge\!n\geq 2\supset (G(D_1,p,n,s)\equiv\mathit{Starts}(p,s')\!\wedge\!\mathit{Init}(s')),\\ &&\hspace{-7 mm}(s).\ \mathit{Init}(s)\!\wedge\!n\geq 0\supset (G(D_c,p,n,s)\equiv\mathit{Starts}(p,s')\!\wedge\!\mathit{Init}(s')).
\end{eqnarray*} \end{small} \vspace{-4 mm} \noindent$(q)$ specifies the p-goals $\phi_0,\phi_1$ (from highest to lowest priority) of $D_1$ in the initial situations, and makes $G(D_1,p,n,s)$ true for every path $p$ that starts with an initial situation and over which $\phi_n$ holds, for $n=0,1$; each of them defines a set of initial goal paths for a given priority level, and must be consistent. $(r)$ makes $G(D_1,p,n,s)$ true for every path $p$ that starts with an initial situation for $n\geq 2$. Thus at levels $n\geq 2$, $D_1$ has the trivial p-goal that it be in an initial situation. The case for $D_c$ is similar. \par Assume that $\mathcal{D}_{dr}$ denotes our theory for the drone domain. Then in our example, we can show the following: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}\mathcal{D}_{dr}\models\mathit{PGoal}(D_1,\phi_n\!\wedge\!\mathit{Starts}(p,s)\!\wedge\!\mathit{Init}(s),n,S_0),\textup{ for }n<2,\\ &&\hspace{-7 mm}\mathcal{D}_{dr}\models\mathit{PGoal}(D_1,\mathit{Starts}(p,s)\wedge\mathit{Init}(s),n,S_0),\textup{ for any }n\geq 2. \end{eqnarray*} \end{small} \vspace{-4 mm} \par Since not all $G$-accessible paths are realistic in the sense that they start with a $K$-accessible situation, to filter the unrealistic paths out, KL defined {\em realistic} p-goal-accessible paths: \begin{eqnarray*} G_R(d,p,n,s)\doteq G(d,p,n,s)\wedge\mathit{Starts}(p,s')\wedge K(d,s',s). \end{eqnarray*} $G_R$ prunes out the paths from $G$ that are known to be impossible, and since intentions are defined in terms of realistic p-goals, this ensures that these are realistic. \par Using realistic p-goal-accessible paths, KL defined intentions as the realistic and maximal consistent prioritized intersection of the agent's goal hierarchy.
First, they specify all paths $p$ that are in this prioritized intersection $G_\cap(d,p,n,s)$:\footnote{\textbf{if} $\phi$ \textbf{then} $\delta_1$ \textbf{else} $\delta_2$ is an abbreviation for $(\phi\supset\delta_1)\wedge(\neg\phi\supset\delta_2).$} \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}G_\cap(d,p,n,s)\equiv\textbf{\textup{if}}\;(n=0)\;\textbf{\textup{then}}\\ &&\hspace{18 mm}\textbf{\textup{if}}\;\exists p'.\;G_R(d,p',n,s)\;\textbf{\textup{then}}\;G_R(d,p,n,s)\\ &&\hspace{18 mm}\textbf{\textup{else}}\;\mathit{Starts}(p,s')\wedge K(d,s',s)\\ &&\hspace{15 mm}\textbf{\textup{else}}\\ &&\hspace{18 mm}\textbf{\textup{if}}\;\exists p'.(G_R(d,p',n,s)\wedge G_\cap(d,p',n-1,s))\\ &&\hspace{22 mm}\textbf{\textup{then}}\;(G_R(d,p,n,s)\wedge G_\cap(d,p,n-1,s))\\ &&\hspace{18 mm}\textbf{\textup{else}}\;G_\cap(d,p,n-1,s). \end{eqnarray*} \end{small} \vspace{-4 mm} Using this, they defined what it means for an agent to have an intention at some level $n$:\footnote{KL used the term ``chosen goals'' (C-Goals) for this.} $\mathit{Int}(d,\phi,n,s)\doteq\forall p.\;G_\cap(d,p,n,s)\supset\phi\llbracket p\rrbracket$, i.e.\ an agent $d$ has the intention at level $n$ that $\phi$ in situation $s$ if $\phi$ holds over all paths that are in the prioritized intersection of $d$'s set of $G_R$-accessible paths up to level $n$ in $s$. Finally, intentions are defined in terms of intentions at $n$: $\mathit{Int}(d,\phi,s)\doteq\forall n.\;\mathit{Int}(d,\phi,n,s)$, i.e.\ the agent $d$ has the intention that $\phi$ in $s$ if for any level $n$, $\phi$ is $d$'s intention at $n$ in $s$. \par In our example, it can be shown that initially $D_1$ has the intention that $\phi_0$ and that $\phi_1$: $\mathcal{D}_{dr}\models\mathit{Int}(D_1,\phi_0\wedge\phi_1,S_0)$. \noindent\textbf{Goal Dynamics}\indent An agent's goals change when its knowledge changes as a result of the occurrence of an action, including exogenous events, or when it adopts or drops a goal.
KL showed how this can be formalized by specifying how p-goals change. Intentions are then computed using realistic p-goals in every new situation as above. \par Since for our example we only need to model cooperative agents that always respect the controller agent's requests, to simplify, we will modify KL's framework slightly by introducing a request communicative action and by getting rid of the actions for goal adoption and dropping. $\mathit{req}(d,d',\phi)$ can be used by an agent $d$ to request another agent $d'$ to adopt the p-goal $\phi$. The APA for this is as follows: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}\mathit{Poss}(\mathit{req}(d,d',\phi),s)\equiv\mbox{}\\ &&\hspace{-5 mm}\neg\mathit{Int}(d,\neg\exists s',p'.\;\mathit{Starts}(s')\wedge\mathit{Suffix}(p',do(\mathit{req}(d,d',\phi),s'))\\ &&\hspace{9 mm}\mbox{}\wedge\phi\llbracket p'\rrbracket,s)\land\neg\exists n.\;\mathit{PGoal}(d',\phi,n,s). \end{eqnarray*} \end{small} \vspace{-4 mm} \noindent That is, an agent $d$ can request another agent $d'$ to adopt the p-goal that $\phi$ if $d$ does not intend in $s$ that it is not the case that it executes the $\mathit{req}$ action next and $\phi$ holds afterwards, and $d'$ does not already have $\phi$ as its p-goal at some level $n$ in $s$. \par In the following, we specify the dynamics of p-goals by giving the SSA for $G$ and discuss each case, one at a time: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-5 mm}G(d,p,n,do(a,s))\equiv\\ &&\hspace{-3 mm}\forall d',\phi.\;(a\neq\mathit{req}(d',d,\phi)\wedge\mathit{Progressed}(d,p,n,a,s))\\ &&\hspace{-3 mm}\mbox{}\vee\exists d',\phi.\;(a=\mathit{req}(d',d,\phi)\wedge\mathit{Requested}(d,p,n,a,s,\phi)). \end{eqnarray*} \end{small} \vspace{-4 mm} \noindent The overall idea for this is as follows.
First of all, to handle the occurrence of a non-request action $a$ (i.e.\ a regular action or a request not directed to $d$), we progress all of $d$'s $G$-accessible paths to reflect the fact that $a$ has just happened; this is done using the $\mathit{Progressed}(d,p,n,a,s)$ construct, which replaces each of $d$'s $G$-accessible paths $p'$ with starting situation $s'$, by its suffix $p$ provided that it starts with $do(a,s')$: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}\mathit{Progressed}(d,p,n,a,s)\doteq\\ &&\hspace{-4 mm}\exists p',s'.\;G(d,p',n,s)\wedge\mathit{Starts}(p',s')\wedge\mathit{Suffix}(p,p',do(a,s')). \end{eqnarray*} \end{small} \vspace{-4 mm} \noindent Any path over which the next action performed is not $a$ is eliminated from the respective $G$-accessibility level for $d$. \par Secondly, to handle the request of a p-goal $\phi$ directed to $d$, we add a new proposition containing the p-goal to $d$'s goal hierarchy at the highest priority level by modifying the $G$-relation accordingly.\footnote{For simplicity, we assume that the requested goal is always adopted as the highest priority goal. Other, more sophisticated models, e.g.\ one where the requestee adopts the requested goal only if it comes from a trusted source and is consistent with its own set of core goals, and adopts it just below these core goals, could have been modeled as easily.} The $G$-accessible paths for $d$ at level $0$ are the ones that share the same history with $do(a,s)$ and over which $\phi$ holds. The $G$-accessible paths for $d$ at all levels below $0$ are the ones that can be obtained by progressing the level immediately above it. Thus the agent $d$ acquires the p-goal that $\phi$ at the highest priority level $0$, and all the p-goals in $s$ are pushed down one level in the hierarchy.
\vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}\mathit{Requested}(d,p,n,a,s,\phi)\doteq\\ &&\hspace{-3 mm}\textbf{if}\;(n=0)\;\textbf{then}\;\\ &&\hspace{3 mm}\exists s'.\;\mathit{Starts}(p,s')\wedge\mathit{SameHist}(s',do(a,s))\wedge\phi\llbracket p\rrbracket\\ &&\hspace{-3 mm}\textbf{else}\;\mathit{Progressed}(d,p,n-1,a,s). \end{eqnarray*} \end{small} \vspace{-4 mm} In our example, we can show that the agent $D_1$ will have the intention that $\Diamond\mathit{Vis}(D_1,L_1')$ after $D_1$ takes off from $L_s,$ $D_c$ informs $D_1$ that there is a thunderstorm at $L_1,$ and $D_c$ requests $D_1$ to eventually visit $L_1'$, starting in $S_0$, i.e.\ in situation $S_3=do([\mathit{takeOff}(D_1,L_s);\mathit{inform}(D_c,D_1,\mathit{TStrom}$ $(L_1));$ $\mathit{req}(D_c,D_1,\Diamond\mathit{Vis}(D_1,L_1'))],S_0);$ thus: \vspace{-2 mm} \begin{small} \[\mathcal{D}_{dr}\models\mathit{Int}(D_1,\Diamond\mathit{Vis}(D_1,L_1'),S_3).\] \end{small} \vspace{-4 mm} \noindent But $D_1$ will not have the intention that $\phi_1$ as it has become impossible for $D_1$ to visit $L_1$ due to its knowledge of the thunderstorm at $L_1$, i.e.\ $\mathcal{D}_{dr}\models\neg\mathit{Int}(D_1,\phi_1,S_3)$. \section{Handling Conative Effects} Given a trace of events, \emph{actual achievement causes} are the events that are behind achieving an effect.\footnote{We do not conceptually distinguish between actions and events.} To formalize reasoning about epistemic effects, KL \shortcite{KL21} introduced the notion of \emph{epistemic dynamic formulae in the SC}. An effect in their framework is thus an epistemic dynamic formula. We will extend this notion to that of \emph{intentional dynamic formulae} $\varphi$ to deal with conative effects (see below). Given an effect $\varphi,$ the actual causes are defined relative to a {\em narrative} (variously known as a {\em scenario} or a {\em trace}) $s$.
When $s$ is ground, the tuple $\langle\varphi,s\rangle$ is often called a {\em causal setting} \cite{BatusovS18}. Also, it is assumed that $s$ is executable, and $\varphi$ was false before the execution of the actions in $s$, but became true afterwards, i.e.\ $\mathcal{D}\models \mathit{executable}(s)\wedge\neg\varphi\lceil\mathit{root}(s)\rfloor\wedge\varphi\lceil s\rfloor$, where $\mathit{root}(s)\doteq\mathit{root}(s'),$ if $\exists a'.\;s=do(a',s'),$ and $\mathit{root}(s)=s,$ otherwise. Here $\varphi\lceil s\rfloor$ denotes the formula obtained from $\varphi$ by restoring the appropriate situation argument into all fluents in $\varphi$ (see Definition \ref{psiSAT}). \par Note that since all changes in the SC result from actions, the potential causes of an effect $\varphi$ are identified with a set of action terms occurring in $s$. However, since $s$ might include multiple occurrences of the same action, one also needs to identify the situations where these actions were executed. To deal with this, KL required that each situation is associated with a time-stamp. Since in the context of knowledge, we will have different $K$-accessible situations where an action occurs, using time-stamps provides a common reference/rigid designator for the action occurrence. The initial situations start at time 0 and each action increments the time-stamp by one. Thus, our theory includes the following axioms: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\mathit{Init}(s)\supset\mathit{time}(s)=0,\\ &&\forall a,s,t.\;\mathit{time}(do(a,s))=t\equiv \mathit{time}(s)=t-1. \end{eqnarray*} \end{small} \vspace{-4 mm} \noindent With this, the causes in this framework form a non-empty set of action-time-stamp pairs derived from the trace $s$ given $\varphi$. \par We now introduce the notion of \emph{intentional dynamic formulae} (IF, henceforth): \begin{definition} Let $\vec{x}$, $\theta_a$, and $\vec{y}$ respectively range over object terms, action terms, and object and action terms.
The class of \emph{situation-suppressed intentional dynamic formulae} $\varphi$ is defined inductively using the following grammar: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-5 mm}\varphi::=P(\vec{x}) \mid Poss(\theta_a)\mid\mathit{After}(\theta_a,\varphi)\mid\neg\varphi\mid\varphi_1\wedge\varphi_2\\ &&\hspace{10 mm}\mbox{}\mid\exists\vec{y}.\;\varphi\mid\mathit{Know}(\mathit{agt},\varphi)\mid\mathit{Int}(\mathit{agt},\psi). \end{eqnarray*} \end{small} \end{definition} \noindent That is, an IF can be a situation-suppressed fluent, a formula that says that some action $\theta_a$ is possible, a formula that some IF holds after some action has occurred, a formula that can be built from other IF using the usual connectives, or a formula that the agent knows that some IF holds or intends to bring about some path formula $\psi$. Note that $\varphi$ can have quantification over object and action variables, but must not include quantification over situations or ordering over situations (i.e.\ $\sqsubset$) or arbitrary $K$ or $G$-relations, i.e.\ those that do not come from the expansion of $\mathit{Know}/\mathit{Int}$. We will use $\varphi$ for IF. \par Note that the argument of $\mathit{Int}$ in the above inductive definition is a path formula $\psi$.
Thus to allow for IF in the context of $\mathit{Int}$, we need to redefine state formulae $\Phi$ to include IF $\varphi$: \vspace{-2 mm} \begin{small} \[\Phi::=P(\vec{x})\mid A \phi\mid\Phi\wedge\Phi\mid\neg\Phi\mid\forall x.\;\Phi\mid\varphi.\] \end{small} \vspace{-2 mm} We define $\varphi\lceil\cdot\rfloor$ as follows: \begin{definition}\label{psiSAT} \begin{small} \begin{eqnarray*} && \hspace{-7 mm}\varphi\lceil s\rfloor\doteq \begin{cases} P(\vec{x},s) & \textup{ if }\varphi\textup{ is }P(\vec{x})\\ \mathit{Poss}(\theta_a,s) & \textup{ if }\varphi\textup{ is }\mathit{Poss}(\theta_a)\\ \varphi'\lceil do(\theta_a,s)\rfloor & \textup{ if }\varphi\textup{ is }\mathit{After}(\theta_a,\varphi')\\ \neg(\varphi'\lceil s\rfloor) & \textup{ if }\varphi\textup{ is }(\neg\varphi')\\ \varphi_1\lceil s\rfloor\wedge\varphi_2\lceil s\rfloor & \textup{ if }\varphi\textup{ is }(\varphi_1\wedge\varphi_2)\\ \exists\vec{y}.\;(\varphi'\lceil s\rfloor) & \textup{ if }\varphi\textup{ is }(\exists\vec{y}.\;\varphi')\\ \forall s'.\;K(d,s',s)\supset(\varphi'\lceil s'\rfloor) & \textup{ if }\varphi\textup{ is }\mathit{Know}(d,\varphi')\\ \forall n.\;\mathit{Int}(d,\psi,n,s) & \textup{ if }\varphi\textup{ is }\mathit{Int}(d,\psi).\\ \end{cases} \end{eqnarray*} \end{small} \end{definition} \par We will now present the definition of causes in the SC. The idea behind how causes are computed is as follows. Given an effect $\varphi$ and scenario $s$, if some action of the action sequence in $s$ triggers the formula $\varphi$ to change its truth value from false to true relative to $\mathcal{D}$, and if there are no actions in $s$ after it that change the value of $\varphi$ back to false, then this action is an actual cause of achieving $\varphi$ in $s$. 
Such causes are referred to as {\em primary} causes:\footnote{This definition is slightly more general than that of \cite{KL21}, where the authors used $S_0$ instead of $\mathit{root}(s)$.} \begin{definition}[Primary Cause]\label{PCause} \begin{small} \begin{eqnarray*} &&\hspace{-10 mm}\mathit{CausesDirectly}(a,t,\varphi,s)\doteq\mbox{}\\ &&\hspace{3 mm}\exists s_a.\;\mathit{time}(s_a)=t\wedge(\mathit{root}(s)<do(a,s_a)\leq s)\\ &&\hspace{3 mm}\mbox{}\wedge\neg\varphi\lceil s_a\rfloor\wedge\forall s'.(do(a,s_a)\leq s'\leq s\supset\varphi\lceil s'\rfloor). \end{eqnarray*} \end{small} \end{definition} \noindent That is, $a$ executed at time $t$ is the \emph{primary cause} of effect $\varphi$ in situation $s$ iff $a$ was executed in a situation with time-stamp $t$ in scenario $s$, $a$ caused $\varphi$ to change its truth value to true, and no subsequent actions on the way to $s$ falsified $\varphi$. \par Now, note that a (primary) cause $a$ might have been non-executable initially. Also, $a$ might have only brought about the effect conditionally, and this context condition might have been false initially. Thus, earlier actions on the trace that contributed to the preconditions and the context conditions of a cause must be considered as causes as well. The following definition captures both primary and indirect causes.\footnote{In this, we need to quantify over situation-suppressed IFs. Thus we must encode such formulae as terms and formalize their relationship to the associated SC formulae. This is tedious but can be done essentially along the lines of \cite{GiacomoLL00}. 
We assume that we have such an encoding and use formulae as terms directly.} \begin{definition}[Actual Cause \cite{KL21}]\label{ACause} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}\mathit{Causes}(a,t,\varphi,s)\doteq\\ &&\hspace{-7 mm}\forall P.[ \forall a,t,s,\varphi.(\mathit{CausesDirectly}(a,t,\varphi,s)\supset P(a,t,\varphi,s))\wedge\mbox{}\\ &&\hspace{-5 mm}\forall a,t,s,\varphi.( \exists a'\!,t'\!,s'\!.(\mathit{CausesDirectly}(a'\!,t'\!,\varphi,s) \land \mathit{time}(s')\!=\!t'\\ &&\hspace{12 mm}\mbox{} \land s'<s \land P(a,t,[\mathit{Poss}(a')\wedge\mathit{After}(a',\varphi)],s')) \\ &&\hspace{23 mm}\mbox{}\supset P(a,t,\varphi,s))\\ &&\hspace{-3 mm}]\supset P(a,t,\varphi,s). \end{eqnarray*} \end{small} \end{definition} \noindent Thus, $\mathit{Causes}$ is defined to be the least relation $P$ such that if $a$ executed at time $t$ directly causes $\varphi$ in scenario $s$ then $(a,t,\varphi,s)$ is in $P$, and if $a'$ executed at $t'$ is a direct cause of $\varphi$ in $s$, the time-stamp of $s'$ is $t'$, $s'<s$, and $(a,t,[\mathit{Poss}(a')\wedge\mathit{After}(a',\varphi)],s')$ is in $P$ (i.e.\ $a$ executed at $t$ is a direct or indirect cause of $[\mathit{Poss}(a')$ $\mbox{}\wedge\mathit{After}(a',\varphi)]$ in $s'$), then $(a,t,\varphi,s)$ is in $P$. Here the effect $[\mathit{Poss}(a')\wedge\mathit{After}(a',\varphi)]$ requires $a'$ to be executable and $\varphi$ to hold after $a'$. \par With these simple modifications, the framework is now capable of dealing with conative effects. To see this, consider the following scenario $\sigma$ in our example. $\sigma=do([\mathit{takeOff}(D_1,L_s); \mathit{inform}(D_c,D_1,\mathit{TStrom}(L_1));$ $\mathit{req}(D_c,D_1,\Diamond\mathit{Vis}(D_1,L_1')); \mathit{inform}(D_c,\!D_2,\!\mathit{TStrom}(L_1));$ $\mathit{req}(D_c,D_2,\Diamond\mathit{Vis}(D_1,L_1')); \mathit{flyTo}(D_1,L_s,L_1'); \mathit{flyTo}(D_1,$ $L_1',L_d)],S_0).$ There are 7 actions in this scenario. 
For convenience, we will use $\vec{\alpha_i}$ to denote the first $i$ actions in this trace, and so $do([\vec{\alpha_5}],S_0)$ is the situation obtained from executing the first 5 actions starting in $S_0$. Now assume that we want to reason about the causes of the effect $\varphi_1=\mathit{Int}(D_1,\Diamond\mathit{Vis}(D_1,L_1'))$ in scenario $\sigma_1=do([\vec{\alpha_5}],S_0).$ Then we can show that: \vspace{-4 mm} \begin{small} \[\mathcal{D}_{dr}\models\mathit{Causes}(\mathit{req}(D_c,D_1,\Diamond\mathit{Vis}(D_1,L_1')),2,\varphi_1,\sigma_1),\] \end{small} \vspace{-4 mm} \noindent i.e.\ as expected, $D_c$'s request to $D_1$ to eventually visit $L_1'$, executed at time 2, is the cause of $D_1$'s intention that $\Diamond\mathit{Vis}(D_1,L_1').$ \section{Reasoning about Agent Behaviour} We are now ready to formalize reasoning about agent behaviour via causation. Just like causes, an explanation in our framework is also modeled using an action-time-stamp pair $(a,t)$. Agent behaviour, on the other hand, is captured using a situation $s$ and relative to an observation $\varphi$. For this, we use the predicate $\mathit{Explains}(a,t,\varphi,s)$, which means that the action $a$ executed at time $t$ explains the behaviour of the agents captured in situation $s$ relative to the observation $\varphi$. For example, $\mathit{Explains}(\alpha,\tau,\varphi_2,\sigma)$ states that the behaviour of drones as modeled by situation/scenario $\sigma$ relative to the effect that $\varphi_2=\mathit{Vis}(D_1,L_1')$ can be explained by action $\alpha$ executed at time $\tau$ (see below for the values of $\alpha$ and $\tau$). Thus, $(\alpha,\tau)$ explains why the drone $D_1$ visited the location $L_1'$. Note that, just as in the case for achievement causation, we assume here that $\neg\varphi\lceil\mathit{root}(s)\rfloor\wedge\varphi\lceil s\rfloor$. \par While explaining agent behaviour through direct causation is reasonable, it may not always be insightful. 
For instance, we can show that agent behaviour in $\sigma$ w.r.t.\ visiting $L_1'$ can be explained by its action $\mathit{flyTo}(D_1,L_s,L_1')$ executed at time $5$. However, this is obvious and hardly useful. A deeper level of explanation requires analyzing the mental states of the involved agents. \par To further explain agent behaviour, we will use an intention recognition system, which for this paper is considered to be a black-box module. We use the predicate $\mathit{RRInt}(d,\phi,a,t,s)$ to denote that agent $d$ is recognized to have the relevant intention that $\phi$ in situation $s$ w.r.t.\ the action $a$ executed at time $t$. For instance, $\mathit{RRInt}(D_1,\Diamond\mathit{Vis}(D_1,L_1'),\mathit{flyTo}(D_1,L_s,L_1'),5,\sigma)$ says that in scenario $\sigma$, agent $D_1$ is recognized to have the intention that $\Diamond\mathit{Vis}(D_1,L_1')$ for executing the action $\mathit{flyTo}(D_1,L_s,L_1')$ at time $5$. With this, we can further explain agent behaviour via the root-cause analysis of its intentions behind performing actions. In our example, since $D_1$ flew to $L_1'$ due to its intention that $\Diamond\mathit{Vis}(D_1,L_1'),$ it is reasonable to explain agent behaviour via the causes of having this intention. This will reveal that $D_1$ had this intention due to $D_c$'s request, and thus agent behaviour w.r.t.\ $D_1$ visiting $L_1'$ can be explained by this request action. 
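As an illustration of the achievement condition behind Definition \ref{PCause}, the following sketch checks it procedurally over a linear trace. It is a simplification under stated assumptions: the truth value of the effect is pre-evaluated after every prefix of the trace (instead of being derived relative to $\mathcal{D}$), and the names \texttt{primary\_cause} and \texttt{holds} are ours, not part of the formalization.

```python
def primary_cause(trace, holds):
    """Procedural reading of the CausesDirectly condition on a linear trace.

    trace: list of (action, timestamp) pairs executed from root(s) onward.
    holds: truth value of the effect after each prefix of the trace;
           holds[0] is the value at root(s), holds[i] the value after
           the first i actions.
    Returns the (action, timestamp) pair that flipped the effect to true
    with no later falsification, or None if no such action exists.
    """
    n = len(trace)
    assert len(holds) == n + 1
    # Achievement causation assumption: effect false at root(s), true at s.
    if holds[0] or not holds[n]:
        return None
    # Scan backwards: the latest situation where the effect is still false
    # marks the achieving action; all later actions kept the effect true.
    for k in range(n - 1, -1, -1):
        if not holds[k]:
            return trace[k]
    return None
```

For example, on a three-action trace where the effect only becomes (and stays) true after the third action, the third action is returned as the primary cause, matching the backward-looking reading of Definition \ref{PCause}.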
\par We now give the definition for $\mathit{Explains}$: \begin{definition}\label{Explains} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}\mathit{Explains}(a,t,\varphi,s)\doteq\neg\varphi\lceil\mathit{root}(s)\rfloor\wedge\varphi\lceil s\rfloor\wedge\\ &&\hspace{-3.5 mm}[\mathit{Causes}(a,t,\varphi,s)\vee\mbox{}\\ &&\hspace{3 mm}(\exists a',t',d',s',\psi.\;\mathit{Explains}(a',t',\varphi,s)\wedge\mathit{agent}(a')=d'\\ &&\hspace{5 mm}\mbox{}\wedge\mathit{RRInt}(d',\psi,a',t',s)\wedge s'<s\wedge\mathit{time}(s')=t'\\ &&\hspace{5 mm}\mbox{}\wedge\neg\mathit{Int}(d',\psi,\mathit{root}(s'))\wedge\mathit{Int}(d',\psi,s')\\ &&\hspace{5 mm}\mbox{}\wedge\mathit{Causes}(a,t,\mathit{Int}(d',\psi),s'))]. \end{eqnarray*} \end{small} \end{definition} \noindent Thus, agent behaviour relative to the observation that $\varphi$ in scenario $s$ can be explained by the action $a$ executed at time $t$ iff $a$ at $t$ is a cause of $\varphi$ in $s$; or some other action $a'$ executed at time $t'$ explains $\varphi$ in $s$, the agent of $a'$ is $d'$, $d'$ is recognized to have the intention that $\psi$ behind performing $a'$ at $t'$ in $s$, and $a$ executed at $t$ was the cause of this intention in $s'$. Here $\mathit{agent}(a)$ denotes the agent of the action $a$; it can be specified as usual by an axiom that returns the agent of $a$, usually the first argument of $a$, i.e.\ $\mathit{agent}(a(d,\vec{x}))=d.$ Also, $s'$ is the situation where $a'$ was executed. Finally, the two requirements that the effect was false before the execution of the actions in the scenario and became true afterwards, i.e.\ $\neg\varphi\lceil\mathit{root}(s)\rfloor\wedge\varphi\lceil s\rfloor$ and $\neg\mathit{Int}(d',\psi,\mathit{root}(s'))\wedge\mathit{Int}(d',\psi,s')$, are needed to ensure that the causes actually exist. \par Returning to our example, we now formally state the two explanations that we mentioned above and give the values for $\alpha$ and $\tau$. 
First, we can show that: \vspace{-3 mm} \begin{small} \[\mathcal{D}_{dr}\models\mathit{Explains}(\mathit{flyTo}(D_1,L_s,L_1'),5,\varphi_2,\sigma).\] \end{small} \vspace{-4 mm} \noindent But perhaps more interestingly, we can show that: \vspace{-4 mm} \begin{small} \begin{eqnarray*} &&\hspace{-7 mm}\mathcal{D}_{dr}\cup\{\mathit{RRInt}(D_1,\Diamond\mathit{Vis}(D_1,L_1'),\mathit{flyTo}(D_1,L_s,L_1'),5,\sigma)\}\models\\ &&\mathit{Explains}(\mathit{req}(D_c,D_1,\Diamond\mathit{Vis}(D_1,L_1')),2,\varphi_2,\sigma). \end{eqnarray*} \end{small} \vspace{-4 mm} It is important to note that the scenario $s$ in Definition \ref{Explains} may and will often include the actions of multiple agents, and thus explanation of agent behaviour may trigger the analysis of the mental states of multiple agents. For example, given a different scenario, recognizing the intention behind the controller agent $D_c$'s request to $D_1$ and analyzing this intention can in turn expose the causes behind its actions, e.g.\ due to its prior commitments to safety, etc. As such, the analysis performed here is truly multiagent in nature. Also, although our example only involves single-action causes and we do not consider epistemic effects, as discussed above, the framework does support secondary causes and causal knowledge dynamics; see \cite{KL21} for concrete examples. \section{Conclusion} In this paper, we formalized causal reasoning about motivations. Using this, we offer a novel take on explainable AI that is grounded in theory of mind: agent behaviour in our framework can be explained via the causal analysis of observed effects, which as we show can trigger the analysis of their mental states. This paper reports our ongoing work. Understanding the properties of our proposal and how it relates to previous work in this area is what we plan to investigate next. 
\section*{Acknowledgments} We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number RGPIN-2022-03433]. \mbox{}\newline\mbox{}\newline \noindent Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), [numéro de référence RGPIN-2022-03433]. \bibliographystyle{named}
\section{Introduction} In the span of a few years, supervised image classification has made remarkable progress and even surpassed humans on specific recognition tasks \cite{he2016deep}. Its success depends on a few important factors, namely, stochastic gradient descent to optimize millions of parameters, GPUs to accelerate high-dimensional matrix computations, and access to vast amounts of manually annotated data \cite{krizhevsky2012imagenet}. Although a particular optimization method or high-performance hardware is not theoretically essential for supervised learning methods, labeled data is. In fact, access to large-scale labeled data is vital if we want to achieve top performance \cite{sun2017revisiting,he2019rethinking}. Hence, one of the current major challenges in modern computer vision is being able to do unsupervised visual recognition. That means eliminating the costly and not always feasible process of manual labeling \cite{lin2014microsoft,deng2009imagenet}. In this context, visual representation learning has recently demonstrated great success in discarding manual labels by relying on self-supervision \cite{bachman2019learning,gidaris2018unsupervised,wu2018unsupervised,oord2018representation,chen2020simple,he2020momentum,tian2019contrastive}. We believe self-supervised representation learning has paved the way for unsupervised recognition. \begin{figure}[t] \centering \includegraphics[width=120mm]{images/mix_embe_simclr.jpg} \caption{Our proposed architecture with a mixture of embeddings module (bottom row), compared to the contrastive representation learning framework (top row). Components in green are devised in MIX'EM. } \label{fig:schematics} \end{figure} In self-supervised visual representation learning, a pretext (auxiliary) task provides a supervision signal (without manual labels) for training a representation encoder. 
One particularly successful pretext task is to treat each image instance in a dataset as a unique class and use these classes as supervision labels for training a classifier using the cross-entropy objective \cite{wu2018unsupervised}. However, it is computationally prohibitive to implement at a scale of millions of images. In practice, it is simplified such that given a mini-batch containing transformations of different images, transformations of a particular image should be classified as the same \cite{he2020momentum,chen2020simple}. By training a linear classifier on representations generated by such an encoder, we can achieve high accuracy, close to that of an end-to-end trained fully supervised ImageNet classifier: 76.5\% compared to 78.3\% in terms of top-1 accuracy, respectively \cite{chen2020simple}. They have even outperformed supervised representations on some variants of object detection and segmentation \cite{he2020momentum}. The fact that a linear layer, with very limited discriminative power \cite{asano2019critical}, can deliver such high accuracy on the complex ImageNet classification task signals the presence of powerful semantic clues in the representations. Hence, we hypothesize that, knowing only the expected number of classes, an off-the-shelf clustering method should similarly be able to deliver high-accuracy clustering. However, our experiments show that K-means trained on ``normalized'' representations generated by the recent SimCLR method \cite{chen2020simple} achieves a clustering accuracy of 70\% on STL10, compared to a top-1 accuracy of 87\% by a supervised linear classifier. Therefore, there is significant room for improvement. Our goal in this work is to impose semantic structure on the self-supervised representations to boost clustering accuracy. In other words, we want to generate representations that are already highly clustered or disentangled. 
For this purpose, we build a mixture of embeddings module into the contrastive visual representation learning framework \cite{chen2020simple}, as illustrated in figure \ref{fig:schematics}. The mixture components \cite{mclachlan1988mixture,jordan1994hierarchical,bishop1994mixture} are expected to specialize in embedding different semantic categories. For a given sample, each component should generate an embedding and predict how much it contributes to the combined final embedding. We have designed MIX'EM essentially by taking inspiration from a few recent works \cite{greff2019multi,chen2019unsupervised,li2019generating,lee2015m,ye2018occlusion,makansi2019overcoming,varamesh2020mixture} showing that mixture models can divide their input-output space in a meaningful manner without being directly supervised. MIX'EM takes advantage of the contrastive visual learning framework to guide the training of a mixture model without interfering with its mechanisms (see table \ref{table:linear}). In addition to the contrastive representation learning loss, we introduce three key techniques to successfully train MIX'EM end-to-end. A naive attempt to train using only the contrastive loss would quickly converge to a degenerate solution that assigns all samples to a single component, bypassing all other paths. To avoid this issue and achieve high accuracy, we (i) maximize the entropy of the coefficient distribution to diversify the mixture components; (ii) minimize the entropy of components conditioned on the input to enforce separation of the embedding space; and enabled by (i) and (ii), (iii) we use an associative embedding loss \cite{newell2017pixels,newell2017associative} to directly enforce semantic coherency within and across mixture components. Figure \ref{fig:tsne_fcmax} presents visualizations of the embeddings when gradually plugging in each of the loss terms. 
The resulting representations significantly boost K-means' performance up to 78\% on the STL10 dataset, without interfering with the contrastive learning process. We summarize our contributions as follows: \begin{itemize} \item We propose MIX'EM, a solution for unsupervised image classification using a mixture of embeddings module. MIX'EM disentangles visual representations semantically at the category level, such that an off-the-shelf clustering method can be applied to acquire robust image classification. \item We introduce three techniques to successfully train MIX'EM in an unsupervised manner and avoid degenerate solutions. \item We introduce a technique to initialize the K-means algorithm using the mixture components in MIX'EM and achieve significantly higher accuracy. This eliminates the need to run K-means with multiple random initializations. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=120mm]{images/accv_tsne_fcmax_fc.jpg} \caption{tSNE visualization of embeddings learned by MIX'EM on STL10. Top row: embeddings from the dominant component ($e_{m'}^i$ s.t. $m'=\argmaxA_{m'}(p_{m'}^i)$\ ); Bottom row: the final mixture embedding ($z^i$). By adding our loss terms (explained in Section~\ref{loss_terms}), the embedding space becomes increasingly disentangled at the category level. Samples are marked and color-coded based on their ground-truth label. } \label{fig:tsne_fcmax} \end{figure} \section{Related Work} Our work relates to a few different lines of research. It is most closely related to self-supervised representation learning, as our goal in the first place is to train a better representation encoder without using manually labeled data. However, beyond that, we want the representations to be highly structured such that reliable semantic clustering is possible using an off-the-shelf clustering method. 
We build our idea on the recent research \cite{chen2020simple,chen2020improved}, which empirically shows that using noise contrastive loss \cite{gutmann2010noise} and heavy augmentation for visual representation learning outperforms other popular approaches, including mutual information maximization \cite{hjelm2018learning,bachman2019learning}, generative models \cite{donahue2016adversarial,dumoulin2016adversarially}, image rotation prediction \cite{gidaris2018unsupervised}, predicting patch position \cite{doersch2015unsupervised}, clustering \cite{caron2018deep}, solving jigsaw puzzles \cite{noroozi2016unsupervised}, and image colorization \cite{zhang2016colorful}. In this work, we also advocate using contrastive loss for self-supervised representation learning. However, we are particularly interested in enforcing category-level semantic disentanglement on the representation space. To the best of our knowledge, this is the first work to set out to impose semantic structure on self-supervised visual representations learned using the contrastive loss. We show that the representations generated by MIX'EM yield high-accuracy semantic clustering simply by applying K-means to them. Existing works on unsupervised image classification using self-supervised representations \cite{van2020learning,huang2020deep,ji2019invariant} should benefit from adopting our proposed module, as it is an internal module that can be plugged in without altering the output mechanism. There have been a few recent works with the same objective as ours, that is, unsupervised image classification. IIC \cite{ji2019invariant} is the best known among them, which directly generates semantic clustering assignments using a deep network. Its loss function maximizes the mutual information between augmented versions of the same image based on the cluster assignment. The intuitive goal is to force the clustering to be decided based on invariant information across different views of an image. 
\cite{huang2020deep} proposes a max-margin clustering criterion for simultaneous clustering and representation learning, such that clustering confidence is the highest based on a defined confidence index. Finally, concurrent with our work, \cite{van2020learning} proposes a framework with multiple stages that relies on the k-nearest neighbors method to extract samples that are ideally from the same semantic categories based on their representations learned in a self-supervised fashion. They use the samples to train a clustering network, and then a classification network by treating clusters as pseudo labels. None of these works concern improving the representations directly in terms of category-level disentanglement in an unsupervised fashion. Our work also relates to clustering-based approaches for deep self-supervised representation learning \cite{caron2018deep,asano2019self,yang2016joint,yan2020clusterfit}. These models devise a clustering branch in a deep network that generates pseudo labels for training another branch of the network on a classification task. The training process either iterates between the two stages until convergence \cite{caron2018deep,asano2019self} or performs them simultaneously \cite{zhan2020online}. Generating high-level semantic labels using clustering, however, is not the goal in this line of work. Instead, they combine clustering and classification in order to build a pretext task for representation learning. Often the best representations are achieved with over-clustering. For example, \cite{caron2018deep} achieves the best mAP on the Pascal VOC 2007 object detection task when the representations are learned using a 10000-way clustering stage. Finally, \cite{sanchez2019learning} is also related to our work, where representations are split into ``shared'' and ``exclusive'' parts. It maximizes mutual information for the shared component and minimizes it for the exclusive component across paired samples. 
However, they use supervision to pair images for training. Moreover, the work is not concerned with semantic clustering. Based on their problem formulation and results, the disentanglement focuses on foreground-background separation. \section{Method} In this section, we first review the contrastive learning framework as proposed in SimCLR \cite{chen2020simple} (the principles are similar to \cite{he2020momentum,wu2018unsupervised,tian2019contrastive}). Next, we show how to integrate a mixture of embeddings module in this framework. \subsection{Contrastive learning of visual representation} Contrastive learning of visual representations is built upon the intuition that different transformations of an image should have the same characteristics, which identifies them as bearing the same semantics. In practice, this means that given a dataset with images containing a single dominant object (like ImageNet or CIFAR10/100), an ideal encoder should map different augmentations of an image to a very compact neighborhood. This interpretation implies considering every image in a dataset as a distinct class and training a classification network using cross-entropy loss \cite{wu2018unsupervised}. However, having as many classes as the number of samples in a large dataset is not scalable. A streamlined version of this idea, SimCLR \cite{chen2020simple}, is based on doing instance classification within mini-batches of images. In SimCLR, the goal is to train an encoder to generate visual representations. We denote the encoder function with $f$ such that $h^i = f(x^i)$, where $x^i \in \textit{D}$ is an RGB image from the unlabeled dataset $\textit{D}$. Encoder $f$ is implemented using a deep convolutional neural network. Training then proceeds by contrasting representations $h^i$ in order to pull together similar images in the space. $h^i$ is the representation intended to be used by downstream tasks. 
However, Chen et al.~\cite{chen2020simple} show that, before computing the contrastive loss, applying a further non-linear layer $g$ to $h^i$ results in significant improvement. So in the following definitions, the contrastive loss will be computed on $z^i = g(h^i)$. At training, given a mini-batch of N images, $\{x^i\}_{i=1}^{N}$, every image is augmented twice using a sequence of random transformations to generate 2N samples $\{\hat{x}^j\}_{j=1}^{2N}$. Then, the similarity between every pair $u$ and $v$ of the 2N samples is computed using the function $sim(u,v) = z_u^{T}z_v/(||z_u||\,||z_v||)$. Next, the contrastive loss for a positive pair (i.e.\ two augmentations of the same image $x^j$) is implemented in the form of a cross-entropy loss for a (2N-1)-way classification task, where the logits are set to the pairwise similarities of a given view with its positive counterpart, and 2N-2 views from the remaining augmented samples. The contrastive loss $l_c(\hat{x}^{j_1},\hat{x}^{j_2})$ for a positive pair $\hat{x}^{j_1}$ and $\hat{x}^{j_2}$ (two views of the image $x^j$) is shown in Equ.~(\ref{equ:ntxent}), where $\tau$ is a temperature parameter \cite{chen2020simple}. The contrastive loss is computed for both views of each of the N images. The total contrastive loss $L_{contrast}$ is shown in Equ.~(\ref{equ:ntxent_total}). \begin{equation} \begin{gathered} l_c(\hat{x}^{j_1},\hat{x}^{j_2}) = - \log{\frac{exp(sim(\hat{x}^{j_1},\hat{x}^{j_2})/\tau )}{ \sum_{k=1}^{2N} \mathbbm{1}_{[k \neq {j_1}]} exp(sim(\hat{x}^{j_1},\hat{x}^k)/\tau ) }}\\ \end{gathered} \label{equ:ntxent} \end{equation} \begin{equation} \begin{gathered} L_{contrast}= \frac{1}{2N}\sum_{k=1}^{N} { l_c(\hat{x}^{k_1},\hat{x}^{k_2}) + l_c(\hat{x}^{k_2},\hat{x}^{k_1})} \end{gathered} \label{equ:ntxent_total} \end{equation} \subsection{Mixture Embedding} \label{loss_terms} SimCLR computes the contrastive loss after embedding the target representations ($h$) into another space ($z$) via the non-linear layer, $g$. 
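To make the contrastive loss of Equations (\ref{equ:ntxent}) and (\ref{equ:ntxent_total}) concrete, a minimal NumPy sketch is given below. It is an illustration of the published formulation, not the authors' implementation; in particular, the convention that rows $2k$ and $2k{+}1$ hold the two views of image $k$ is an assumption of this sketch.

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """Contrastive (NT-Xent) loss over a batch of 2N embeddings z = g(h).

    z: array of shape (2N, d); rows 2k and 2k+1 are assumed to hold the two
    augmented views of image k (a pairing convention chosen for this sketch).
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # sim(u, v) becomes a dot product
    logits = z @ z.T / tau                            # pairwise similarities over temperature
    np.fill_diagonal(logits, -np.inf)                 # exclude the k = j1 self term
    # Row-wise log-softmax over the remaining 2N - 1 candidates (cross-entropy view).
    m = logits.max(axis=1, keepdims=True)
    log_prob = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    idx = np.arange(z.shape[0])
    # The positive of view 2k is view 2k + 1 and vice versa; average over all 2N views.
    return float(-log_prob[idx, idx ^ 1].mean())
```

As expected from the formulation, the loss is lowest when the two views of each image are more similar to each other than to the other samples in the mini-batch.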
In MIX'EM, we replace this layer with multiple parallel non-linear layers, each generating an embedding and a coefficient to determine how much the embedding contributes to the final embedding, $z^i$. Figure \ref{fig:schematics} depicts the architecture of MIX'EM and how it differs from the regular contrastive representation learning pipeline. Given input $x^i$, and representation $h^i = f(x^i)$ generated by the encoder, our model replaces $z^i = g(h^i)$ with the function $z^i = g(\psi(h^i))$, where the function $\psi$ is defined in Equ.~(\ref{equ:MIX'EM_module}). $M$ in Equ.~(\ref{equ:MIX'EM_module}) indicates the number of mixture components. $g_m$ is a non-linear layer similar to $g$ and specializes in generating embeddings for samples that component $m$ is responsible for. Mixing coefficient $p_m^i$ indicates the prior probability of sample $x^i$ being generated by the component $m$. The coefficients $p^i$ for $x^i$ are computed from $h^i$ using a non-linear layer $g_p$ and a softmax function. \begin{equation} \psi(h^i) = \sum_{m=1}^{M} {p_m^i \, g_m(h^i)} \ \ \ \ \ s.t. \ \ \ \ \ p^i = softmax(g_p(h^i)) \label{equ:MIX'EM_module} \end{equation} With the mixture module, we expect the network to distribute input samples across components, as this should make the task easier \cite{mclachlan1988mixture,lee2015m}. Each mixture component should generate embeddings for certain semantic categories and guide the backpropagation process conditioned on the input. However, if we train MIX'EM only using the contrastive loss, it will quickly lead to a degenerate solution that assigns all samples to a single component. Therefore, we devise three loss terms to avoid such degenerate solutions and adequately train MIX'EM to meet our goals. \subsubsection{Entropy maximization across components} In a degenerate solution, the coefficients $p^i$ provide the lowest information from an information theory point of view: the probability of one particular component is always equal to one. 
However, we want the model to be more diverse in the assignment of the components. We expect it to be dependent on the input image, not to ignore it. Given that we do not have any means to directly supervise the mixture module, we can instead maximize the entropy of the marginal distribution of mixtures $p$, which would take the highest value when all components are equally probable. As we will show in the experiments, this term indeed avoids the degenerate solution. Moreover, it will result in semantically meaningful components; that is, components will focus on different categories. We believe this is due to the simultaneous backpropagation of the contrastive loss, imposing minimal semantic regularization. In fact, without the contrastive loss, training would fail. The entropy maximization loss term is shown in Equ.~(\ref{equ:max_ebtropy_loss}), and is equal to the negative of the entropy $\textit{H}(p)$. \begin{equation} \begin{gathered} L_{comp-ent} = - \textit{H}(p) = \sum_{m=1}^{M}{ p_m \log{p_m}}\ \ \ \ \ s.t. \ \ \ \ \ p_m = \frac{1}{N} \sum_{i=1}^{N}{p_m^{i}} \end{gathered} \label{equ:max_ebtropy_loss} \end{equation} \subsubsection{Conditional component entropy minimization} Maximizing the entropy of the marginal $p$ diversifies the components. However, we would like to separate the representation space based on the most discriminative aspect of objects. For a given image, ideally, we want one of the mixture components to have close to full confidence so that it can be interpreted as an indicator of the true category. This, in turn, would mean reducing the entropy of the mixture components given an instance. We know that entropy would be minimized in practice if all probability mass is assigned to a single component. Therefore, given an image, we add a loss term that pushes the probability of the dominant (max) component to the highest value. Equ.~(\ref{equ:max_ebtropy_loss_cat}) shows the instance-based entropy minimization loss term. 
\begin{equation} \begin{gathered} L_{inst-ent} = \sum_{i=1}^{N}{(1- \max\{p_m^{i}\}_{m=1}^{M}) } \\ \end{gathered} \label{equ:max_ebtropy_loss_cat} \end{equation} \subsubsection{Associative embedding loss} Both entropy-based loss terms above are principled techniques to guide the training. However, they do not directly take into account the semantics. Intuitively, samples' ideal assignment to the components should pick up on visual clues that minimize the distance between samples with the same dominant mixture component. At the same time, it should maximize the distance of samples with different dominant components. In a supervised setting, it is straightforward to implement a loss function like this given the true semantic labels; however, here we do not have access to such labels. The good news, however, is that just training MIX'EM with $L_{comp-ent}$ and $L_{inst-ent}$ would result in each component specializing in one category. In quantitative terms, evaluating MIX'EM by treating the dominant component index as the cluster label, on STL10, we get an accuracy of 73\% (row (3) of the fourth column in table \ref{table:incremental}). Therefore, we introduce a third loss term to enforce semantic coherency by relying on the index of the dominant component as a pseudo-ground-truth label. This loss, called associative embedding, is inspired by the work of Newell et al.~\cite{newell2017pixels,newell2017associative} on scene graph generation and human pose estimation. Using the dominant component index as the class label, we want to pull the embeddings assigned to a component as close as possible to each other. We implement this by minimizing the distance between the embeddings generated by a component $m$ and the average embedding of that component, over samples with $m$ as their dominant component (pull loss). Simultaneously, we wish to push the average embeddings of different components away from each other (push loss). 
We implement this by directly maximizing the pairwise distances between the average embeddings of the components. Equations (\ref{equ:ae_means})--(\ref{equ:ae_push}) give the formal specification of the pull and push loss terms. Note that $E_m$ is vital for both losses, and we are able to compute it only by means of the dominant components in MIX'EM. \begin{equation} \begin{gathered} \mu_m = \frac{1}{|E_m|}\sum_{i\in E_m}{e_m^i} \ \ \ \ \ s.t. \ \ \ \ \ E_m = \{i\ | \operatorname*{arg\,max}_{m'}(p_{m'}^i)= m\} \end{gathered} \label{equ:ae_means} \end{equation} \begin{equation} \begin{gathered} L_{pull}= \frac{1}{M} \sum_{m=1}^{M}{\sum_{i \in E_m}{|| e_m^i - \mu_m||_2} } \\ \end{gathered} \label{equ:ae_pull} \end{equation} \begin{equation} \begin{gathered} L_{push}= -\frac{1}{M}\sum_{m=1}^{M}{\sum_{m'=1}^{M}{ \mathbbm{1}_{[m \neq m']}|| \mu_{m} - \mu_{m'}||_2} } \\ \end{gathered} \label{equ:ae_push} \end{equation} \subsubsection{Total loss} Equ.~(\ref{equ:total_loss}) shows the total loss we use to train MIX'EM. \begin{equation} \begin{gathered} L_{total} = L_{contrast} + \lambda_1 L_{comp-ent} + \lambda_2 L_{inst-ent} + \lambda_3 L_{push} + \lambda_4 L_{pull} \\ \end{gathered} \label{equ:total_loss} \end{equation} \subsection{Clustering to acquire classification} Once MIX'EM is trained, we apply the K-means algorithm to the representations or the embeddings to generate the final clusters. Our experiments show that K-means on the representations delivers superior performance. We also tried other off-the-shelf clustering methods, including spectral clustering \cite{von2007tutorial}, and obtained similar results. Moreover, the dominant mixture component index itself provides a highly accurate classification, as shown in the next section. \section{Experiments} We experiment with three standard datasets: STL10 \cite{coates2011analysis}, CIFAR10 \cite{krizhevsky2009learning}, and CIFAR100-20 \cite{krizhevsky2009learning} (CIFAR100 with 20 super-classes).
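Before detailing the datasets, we illustrate the entropy-based loss terms defined in the previous section with a minimal pure-Python sketch (our own illustrative code with hypothetical names, not the authors' implementation; in practice these terms are computed on batched tensors inside the training loop):

```python
from math import log

def mixem_entropy_losses(p):
    """Compute the two entropy-based MIX'EM loss terms for a batch.

    p[i][m] is the mixture coefficient of component m for sample i.
    Returns (L_comp_ent, L_inst_ent): the negative entropy of the
    marginal component distribution, and the sum of (1 - max
    coefficient) over the batch.
    """
    n, num_components = len(p), len(p[0])
    # marginal distribution over components, averaged over the batch
    marginal = [sum(p[i][m] for i in range(n)) / n for m in range(num_components)]
    comp_ent = sum(pm * log(pm) for pm in marginal if pm > 0.0)  # = -H(marginal)
    inst_ent = sum(1.0 - max(row) for row in p)
    return comp_ent, inst_ent
```

Minimizing the first term pushes the marginal toward the uniform distribution (maximal entropy), while minimizing the second pushes each sample's coefficients toward a one-hot assignment.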
STL10 is a subset of ImageNet designed to benchmark self-supervised and unsupervised methods. It includes 100k unlabeled images and labeled train/test splits with 5k/8k images. The labeled splits have ten categories, and the unlabeled split includes a similar but broader set of categories. We use ResNet-18 \cite{he2016deep} for all the experiments. Since CIFAR10 and CIFAR100-20 images are already at a small resolution (32$\times$32), we remove the down-sampling in the first convolutional layer and the max-pooling layer for experiments on them. We assume the number of categories in the datasets to be known and use the same number of mixture components for the main results, but we also provide results with a larger number of components. As a potential real-world application, consider labeling items from a set of already known classes in a supermarket \cite{han2020automatically} or in a warehouse. Nevertheless, if the number of classes is not known in advance, there are techniques to estimate it \cite{han2019learning}. For each dataset, we first train the bare SimCLR for 600 epochs with embedding dimension 256, without the mixture embedding module. For this stage, on STL10 we use the train and unlabeled splits, and on CIFAR10/100-20 we use the train split. We call this model the ``base SimCLR encoder''. Next, we continue training with/without the mixture embedding module with a lower embedding dimension to investigate various aspects under equal conditions. We set the embedding dimension to 32, as it gives slightly better results. In the rest of the paper, by ``SimCLR'' we mean the version trained further on the base SimCLR encoder. To evaluate the semantic clustering, we use three popular metrics: clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI). For ACC, we use the Hungarian method to map cluster indices to the ground-truth labels.
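The cluster-to-label mapping behind ACC can be illustrated with a small self-contained sketch (our own code, not the authors' implementation; it brute-forces the optimal one-to-one mapping over permutations for clarity, whereas with many classes one would use a Hungarian-algorithm solver such as scipy.optimize.linear_sum_assignment):

```python
from itertools import permutations

def clustering_accuracy(cluster_ids, labels):
    """Clustering accuracy (ACC): the fraction of samples correctly
    classified under the best one-to-one mapping from cluster indices
    to ground-truth labels.

    Brute force over permutations for illustration; assumes the number
    of distinct clusters does not exceed the number of classes.
    """
    clusters = sorted(set(cluster_ids))
    classes = sorted(set(labels))
    # contingency[c][k] = number of samples in cluster c with true label k
    contingency = {c: {k: 0 for k in classes} for c in clusters}
    for c, k in zip(cluster_ids, labels):
        contingency[c][k] += 1
    best = 0
    for assignment in permutations(classes, len(clusters)):
        hits = sum(contingency[c][k] for c, k in zip(clusters, assignment))
        best = max(best, hits)
    return best / len(labels)
```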
Following the standard practice for unsupervised settings \cite{ji2019invariant,yang2016joint}, we train MIX'EM on the combination of all labeled data and evaluate on the test split. To gain a better understanding of the role of data in unsupervised learning, in the ablation studies we also provide separate evaluations for the case where the test data is not used in training MIX'EM. Unless specified otherwise, the results are obtained by averaging over five separate training runs and are accompanied by the standard deviation. Training hyper-parameters, including the learning rate, mini-batch size, learning schedule, the loss weight terms ($\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$), and the augmentation configuration, are determined by trying a few different values for each dataset. Details of the training setup are provided in the supplementary material. Given enough computational resources, we believe that extending the experiments to a larger dataset like ImageNet would be straightforward. \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Effect of different MIX'EM loss terms when evaluated on STL10. Randomly initialized K-means is repeated $50$ times.
``Max Comp'' means using the dominant component indices as the final cluster labels.} \label{table:incremental} \resizebox{1.\textwidth}{!}{ \begin{tabular}{ll|cc|cc|cc} \hline & & \multicolumn{2}{c|}{\bf \thead{ACC}} & \multicolumn{2}{c|}{\bf NMI} & \multicolumn{2}{c}{\bf ARI} \\ & \bf Method & \multicolumn{1}{c}{\bf \thead{K$-$means}} & \multicolumn{1}{c|}{\bf Max Comp}& \multicolumn{1}{c}{\bf \thead{K$-$means}} & \multicolumn{1}{c|}{\bf Max Comp}& \multicolumn{1}{c}{\bf \thead{K$-$means}} & \multicolumn{1}{c}{\bf Max Comp} \\ \hline (1) & SimCLR & $65.01 \pm 1.79$ & $-$ & $66.2 \pm 0.66$ & $-$ & $43.94 \pm 1.09$ & $-$ \\ \hline (2) & $+$ representation normalization & $69.95 \pm 0.78$ & $-$ & $67.05\pm 0.33$ & $-$ & $55.37 \pm 0.78$ & $-$ \\ \hline & \multicolumn{7}{l}{ MIX'EM} \\ (3) & $+ L_{comp-ent}$ & $69.88 \pm 0.17$ & $31.83 \pm 0.29$ & $66.52 \pm 0.08$ & $20.61 \pm 0.15$ & $54.43 \pm 0.23$ & $12.15 \pm 0.17$ \\ \hline (4) & $+ L_{inst-ent}$ & $76.21 \pm 0.12$ & $73.45 \pm 0.03$ & $67.27 \pm 0.07$ & $64.3 \pm 0.04$ & $60.16 \pm 0.16$ & $55.88 \pm 0.05$ \\ \hline (5) & $+$ MIX'EM initializes K$-$means & $76.21 \pm 0.09$ & $73.45 \pm 0.03$ & $67.23 \pm 0.08$ & $64.3 \pm 0.04$ & $60.12 \pm 0.11$ & $55.88 \pm 0.05$ \\ \hline (6) & $+ L_{push} + L_{pull}$ & $\textbf{77.76} \pm \textbf{0.08}$ & $68.44 \pm 0.44$ & $\textbf{68.03} \pm \textbf{0.07}$ & $64.08 \pm 0.1$ & $\textbf{61.35} \pm \textbf{0.1}$ & $54.66 \pm 0.09$ \\ \hline (7) & $-$ MIX'EM initializes K$-$means & $70.78 \pm 0.19$ & $68.44 \pm 0.44$ & $67.57 \pm 0.42$ & $64.08 \pm 0.1$ & $55.95 \pm 0.16$ & $54.66 \pm 0.09$ \\ \hline \end{tabular} } \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \begin{figure}[t] \centering \includegraphics[width=110mm]{images/accv_tsne_rep.jpg} \caption{tSNE visualization of the representations on STL10, illustrating the category-level disentanglement. Samples are marked and colored based on their true label.
} \label{fig:tsne_rep_fc} \end{figure} \subsection{Results} We begin by analyzing the effect of each loss term on STL10. Table \ref{table:incremental} shows that, starting from the contrastive loss alone (SimCLR) \cite{chen2020simple}, gradually adding the various MIX'EM loss terms consistently improves the performance. Row (2) illustrates the importance of normalizing the representations before applying K-means. Rows (4) vs. (5), and (6) vs. (7), show that using MIX'EM to initialize K-means results in a significant improvement. In figure \ref{fig:tsne_rep_fc} we present the tSNE \cite{maaten2008visualizing} visualization of the representations for SimCLR and MIX'EM. In line with the quantitative evaluation, the MIX'EM representations are more disentangled and form more compact clusters. Using the contrastive loss alone does not adequately pull samples from similar categories toward each other, resulting in a sparse space. The mixture module guides the training process via the mixing coefficients, forcing the encoder to allocate more compact regions to different categories. In figure \ref{fig:tsne_fcmax}, the top row displays the tSNE visualization of the embeddings for the dominant component of each image as we gradually add the MIX'EM loss terms. The bottom row shows how, in turn, the mixture embeddings get more disentangled as we do so. For a category-level analysis of MIX'EM, we show the accuracy confusion matrix in figure \ref{fig:confusion}. The animal categories are clearly more difficult to discriminate and benefit the most from MIX'EM. In the supplementary material, we provide more visualizations that indicate how the correct category is very hard to recognize in some images. \begin{figure}[t] \centering \includegraphics[width=9cm]{images/confusion.jpg} \caption{Confusion matrix for prediction vs. true label on STL10. } \label{fig:confusion} \end{figure} \subsection{Comparison to the state-of-the-art} Table \ref{table:sota} compares the performance of MIX'EM to the state-of-the-art.
On the more challenging dataset of STL10, our model outperforms all other works by a large margin. On CIFAR, our model outperforms other works, except SCAN \cite{van2020learning} (concurrent work) when it is further trained with a classification objective. MIX'EM has a very low standard deviation, which is of high importance in a real-world application. On STL10 and CIFAR100-20, the standard deviation of SCAN is about an order of magnitude higher. Since MIX'EM improves the representations in terms of separability, SCAN should benefit from using MIX'EM as the representation encoder. On CIFAR100-20, the results of all models are generally worse compared to the other datasets. This is mainly due to the confusing mapping of some classes to super-classes. For example, ``bicycle'' and ``train'' are both mapped to ``vehicles 1'', and most animals are divided based on size rather than semantics. \setlength{\tabcolsep}{4pt} \begin{table}[t] \begin{center} \caption{Comparison to the state-of-the-art in unsupervised image classification.} \label{table:sota} \resizebox{1.\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{c|ccc|ccc|ccc} \hline \multicolumn{1}{c}{\bf \thead{Dataset}} &\multicolumn{3}{|c}{\bf \thead{CIFAR10}} &\multicolumn{3}{|c}{\bf \thead{CIFAR100-20}} &\multicolumn{3}{|c}{\bf \thead{STL10}} \\\hline \multicolumn{1}{c}{\bf \thead{Metric}} & \multicolumn{1}{|c}{\bf \thead{ACC}} & \multicolumn{1}{c}{\bf \thead{NMI}} & \multicolumn{1}{c}{ \bf \thead{ARI}}& \multicolumn{1}{|c}{\bf \thead{ACC}} & \multicolumn{1}{c}{\bf \thead{NMI}} & \multicolumn{1}{c}{\bf \thead{ARI}}& \multicolumn{1}{|c}{\bf \thead{ACC}} & \multicolumn{1}{c}{\bf \thead{NMI}} & \multicolumn{1}{c}{\bf \thead{ARI}} \\ \hline \thead{Linear classifier on SimCLR\\ (supervised)} & $89.6 \pm 0.2$ & $79.91 \pm 0.3$ & $79.15 \pm 0.35$ & $79.69 \pm 0.15$ & $64.38 \pm 0.15$ & $61.54 \pm 0.26$ & $87.22 \pm 0.09$ & $77.1 \pm 0.13$ & $74.88\pm 0.15$ \\ \hline DEC \cite{xie2016unsupervised} & $30.1$ & $25.7$ & $16.1$ & $18.5$ &
$13.6$ & $5.0$ & $35.9$ & $27.6$ & $18.6$ \\ DAC \cite{chang2017deep} & $52.2$ & $40.0$ & $30.1$ & $23.8$ & $18.5$ & $8.8$ & $47.0$ & $36.6$ & $25.6$ \\ DeepCluster \cite{caron2018deep} & $37.4$ & $-$ & $-$ & $18.9$ & $-$ & $-$ & $65.6$ & $-$ & $-$ \\ ADC \cite{haeusser2018associative} & $32.5$ & $-$ & $-$ & $16.0$ & $-$ & $-$ & $53$ & $-$ & $-$ \\ PICA \cite{huang2020deep} & $56.1$ & $64.5$ & $46.7$ & $-$ & $-$ & $-$ & $59.2$ & $69.3$ & $50.4$ \\ IIC \cite{ji2019invariant} & $61.7$ & $51.1$ & $41.1$ & $25.7$ & $22.5$ & $11.7$ & $59.6$ & $49.6$ & $39.7$ \\ SCAN \cite{van2020learning} & $81.8 \pm 0.3$ & $71.2 \pm 0.4$ & $66.5 \pm 0.4$ & $42.2 \pm 3.0$ & $44.1 \pm 1.0$ & $26.7 \pm 1.3$ & $75.5 \pm 2.0$ & $65.4 \pm 1.2$ & $59.0 \pm 1.6$ \\ SCAN \cite{van2020learning} w/ classification & $\textbf{87.6} \pm 0.4$ & $\textbf{78.7} \pm 0.5$ & $\textbf{75.8} \pm 0.7$ & $\textbf{45.9} \pm 2.7$ & $\textbf{46.8} \pm 1.3$ & $\textbf{30.1} \pm 2.1$ & $76.7 \pm 1.9$ & $\textbf{68.0} \pm 1.2$ & $\textbf{61.6} \pm 1.8$ \\ SimCLR + K-means & $79.72 \pm 0.22$ & $69.56 \pm 0.28$ & $62.06 \pm 0.43$ & $42.58 \pm 0.74$ & $43.7 \pm 0.59$ & $24.43 \pm 0.81$ & $69.95 \pm 0.78$ & $67.05 \pm 0.33$ & $55.37 \pm 0.78$ \\ \hline MIX'EM + K-means & $81.87 \pm 0.23$ & $70.85 \pm 0.26$ & $66.59 \pm 0.37$ & $43.77 \pm 0.51$ & $\textbf{46.41} \pm 0.11$ & $27.12 \pm 0.33$ & $\textbf{77.76} \pm 0.08$ & $\textbf{68.03} \pm 0.07$ & $\textbf{61.35} \pm 0.1$\\ MIX'EM max component& $82.19 \pm 0.21$ & $71.35 \pm 0.27$ & $67.15 \pm 0.32$ & $39.19 \pm 0.44$ & $43.59 \pm 0.14$ & $26.67 \pm 0.12$ & $68.44 \pm 0.44$ & $64.08 \pm 0.1$ & $54.66 \pm 0.09$ \\ \hline \end{tabular} \end{threeparttable} } \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \subsection{Ablation studies} \subsubsection{Number of the mixture components} Although we set the number of mixture components to be the same as the number of categories, it is not necessary to do so.
With 20 and 40 components on STL10, clustering accuracy is relatively stable: $76.22\%$ and $75.53\%$, respectively, compared to $77.76\%$ with 10 components. In these cases, where we have more mixture components than classes, we initialize K-means using the most frequent components. As MIX'EM is a solution for clustering into a known number of categories, we believe it is optimal to use that information in the design. \subsubsection{Initializing K-means using MIX'EM} \label{sec:init_kmena} K-means lacks a robust initialization method, and the standard procedure is to run K-means many times with random initializations and choose the best run in terms of inertia. We experimented with up to 100 runs and found 50 to work best on our models. However, this is neither reliable nor efficient on large-scale datasets. With random initialization, K-means is not guaranteed to find the best clustering within practical limits (see the large fluctuations in accuracy across different runs in figure \ref{fig:kmean_random}). Running K-means 50 times on representations of dimensionality 512 takes about 21 seconds on the relatively small STL10 test split (8k images and 10 classes). On the 10k images of CIFAR100-20/CIFAR100 with 20/100 classes, it takes 112/221 seconds on average. This will get worse on larger datasets with even more categories. In MIX'EM, we use the mean of the representations of each component, computed over the samples with that dominant mixture component, to initialize K-means. This eliminates the need for multiple random initializations, while consistently delivering higher accuracy. Rows (4), (5), (6), and (7) in table \ref{table:incremental} show the performance with MIX'EM initialization. In particular, rows (6) and (7) illustrate how K-means with 50 random initializations can be far worse than using MIX'EM for initialization. A single run with MIX'EM initialization takes, on average, $0.27$, $1.5$, and $3$ seconds on STL10, CIFAR100-20, and CIFAR100, respectively.
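This initialization scheme can be sketched as follows (a minimal pure-Python sketch with hypothetical names, not the authors' implementation; in practice the resulting centroids would be passed to an off-the-shelf K-means implementation, e.g. scikit-learn's KMeans with init set to this array and n_init=1):

```python
def component_mean_init(representations, dominant_component, n_clusters):
    """Initial K-means centroids from MIX'EM: the centroid of cluster m
    is the mean of the representations whose dominant mixture component
    is m. Assumes every component index in range(n_clusters) is the
    dominant component of at least one sample.
    """
    dim = len(representations[0])
    sums = [[0.0] * dim for _ in range(n_clusters)]
    counts = [0] * n_clusters
    for rep, m in zip(representations, dominant_component):
        counts[m] += 1
        for j, x in enumerate(rep):
            sums[m][j] += x
    # per-component mean = centroid
    return [[s / counts[m] for s in sums[m]] for m in range(n_clusters)]
```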
\begin{figure}[t] \centering \subfloat[SimCLR]{\label{fig:kmean_random_simclr}\includegraphics[height=2cm]{images/kmeans_random_inits.png} } \subfloat[MIX'EM]{\label{fig:kmean_random_mixem}\includegraphics[height=2cm]{images/kmeans_random_inits_mixem.png}} \caption{K-means with standard random initialization fluctuates heavily over different runs and is not guaranteed to converge to an optimal solution (on STL10). } \label{fig:kmean_random} \end{figure} \subsubsection{Effect on contrastive representation learning} In MIX'EM, the contrastive loss is vital for successful training. This raises the question of how the mixture module influences the performance of the representations in terms of the accuracy of a linear classifier trained on the frozen features generated by the encoder, which is the standard measure for evaluating self-supervised representations \cite{zhang2016colorful,oord2018representation,chen2020simple}. To answer this, we train a linear classifier on the frozen base SimCLR encoder, SimCLR, and various forms of MIX'EM. According to table~\ref{table:linear}, the mixture module neither improves nor hurts the representation quality on the linear classification task. This implies that the representations learned using SimCLR contain information rich enough to easily train a supervised linear classifier without further disentanglement of the representation. However, in the unsupervised setting, category-level disentanglement of the representation seems essential, as we observe a significant boost in clustering accuracy using MIX'EM. \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{MIX'EM does not disrupt the contrastive learning objective while imposing category-level disentanglement on the representations.
(evaluated on STL10)} \label{table:linear} \resizebox{.8\textwidth}{!}{ \begin{tabular}{llc} & \bf Model & \bf \thead{Supervised linear classifier accuracy} \\ \hline (0) & base SimCLR encoder & $86.1$\\ \hline (1) & SimCLR & $87.21 \pm 0.1$\\ \hline & MIX'EM \\ (2) &+ entropy maximization & $87.25 \pm 0.05$ \\ \hline (3) & + component entropy minimization & $87.15 \pm 0.06$ \\ \hline (4) & + associative embedding loss & $87.22 \pm 0.09$ \\ \hline \end{tabular} } \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \subsubsection{Effect of using test data in training} We investigate three scenarios regarding the data splits used for training MIX'EM and SimCLR: (1) using both the train and test splits for training, which is the standard setting, as we do not use the available labels for training \cite{ji2019invariant,yang2016joint}; (2) using only the train split for training; (3) using the train and unlabeled splits (on STL10 only) for training. Note that we always evaluate on the test split. The results are presented in table \ref{table:train_data_effect}. \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{The effect of the data splits used for training MIX'EM and SimCLR on the K-means clustering performance.
All evaluations are on the test split.} \label{table:train_data_effect} \resizebox{.8\textwidth}{!}{ \begin{tabular}{l|cc|l|l|l} \hline \bf Dataset & \bf \thead{Training splits} & \bf Method & \bf \thead{ACC} & \bf \thead{NMI} & \bf\thead{ARI} \\ \hline \multirow{6}{*}{\bf \thead{STL10}} & \multirow{2}{*}{\bf \thead{train+unlabeled}} & SimCLR & $67.44 \pm 0.71$ & $64.90 \pm 0.1$ & $51.26 \pm 0.18$ \\ & & MIX'EM & $71.04 \pm 1.13$ & $62.56 \pm 0.85$ & $52.29 \pm 1.41$ \\ \cline{2-6} & \multirow{2}{*}{\bf \thead{train}} & SimCLR & $65.57 \pm 0.4$ & $63.72 \pm 0.2$ & $50.50 \pm 0.38$ \\ & & MIX'EM & $74.20 \pm 0.06$ & $65.19 \pm 0.06$ & $55.89 \pm 0.1$ \\ \cline{2-6} & \multirow{2}{*}{\bf \thead{train+test}} & SimCLR & $69.95 \pm 0.78$ & $67.05 \pm 0.33$ & $55.37 \pm 0.78$ \\ & & MIX'EM & $77.76 \pm 0.08$ & $68.03 \pm 0.07$ & $61.35 \pm 0.1$ \\ \hline \hline \multirow{4}{*}{\bf \thead{CIFAR10}} & \multirow{2}{*}{\bf \thead{train}} & SimCLR & $77.74 \pm 0.08$ & $67.21 \pm 0.15$ & $58.54 \pm 0.16$ \\ & & MIX'EM & $79.51 \pm 0.41$ & $68.29 \pm 0.28$ & $63.29 \pm 0.44$ \\ \cline{2-6} & \multirow{2}{*}{\bf \thead{train+test}} & SimCLR & $79.72 \pm 0.22$ & $69.56 \pm 0.28$ & $62.06 \pm 0.43$ \\ & & MIX'EM & $81.87 \pm 0.23$ & $70.85 \pm 0.26$ & $66.59 \pm 0.37$ \\ \hline \end{tabular} } \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \subsubsection{Scenario (1) vs. (2)} Using the test split in training consistently improves performance, with a more significant impact on STL10. We argue that this is due to the size and visual difficulty of STL10. CIFAR10 has 50k training and 10k test images, but STL10 has only 5k training and 8k test images. Hence, on STL10, using the test split in training means 160\% additional data, while on CIFAR10 it is just a 20\% addition. In the future, a more controlled experiment that progressively removes fractions of the training data should help in drawing a more informed conclusion.
Additionally, STL10 is a subset of ImageNet and is visually more complex. On CIFAR100-20, the trend is quite similar to CIFAR10. \subsubsection{Scenario (2) vs. (3)} The unlabeled split of STL10 contains 100k images; however, we do not know the distribution of the categories, and it contains unknown distractor categories. Therefore, despite increasing the training data by a large factor, performance drops in this scenario. MIX'EM presumes access to the expected number of categories, which does not hold for the unlabeled set. We believe this is why the accuracy of K-means on SimCLR does not drop as much in this case. Nevertheless, MIX'EM is still significantly more accurate. \section{Conclusion} We presented MIX'EM, a novel solution for unsupervised image classification. MIX'EM builds a mixture-of-embeddings module into SimCLR in order to impose semantic structure on the representations. To successfully train MIX'EM, we introduced various loss terms. MIX'EM sets a new state-of-the-art unsupervised accuracy on STL10 and performs on par with current models on CIFAR. We also showed that applying K-means by itself to normalized representations from SimCLR results in impressively high accuracy. We believe this can be used as a new measure for evaluating the quality of self-supervised representation learning methods. The results we publish here could be further improved by using the latest findings in contrastive visual representation learning \cite{tian2020makes}. In the future, we would like to explore the impact of our model on image retrieval and instance segmentation tasks. Moreover, studying the theoretical aspects of MIX'EM could provide insight for further improvements. \section*{Acknowledgments} This work was partially funded by the FWO SBO project HAPPY. \bibliographystyle{splncs}
\section{Introduction} Let $(u_n)_{n \geq 0}$ be a linear recurrence over the integers, that is, $(u_n)_{n \geq 0}$ is a sequence of integers satisfying \begin{equation*} u_n = a_1 u_{n - 1} + a_2 u_{n - 2} + \cdots + a_k u_{n - k} , \end{equation*} for all integers $n \geq k$, where $a_1, \ldots, a_k \in \mathbf{Z}$ and $a_k \neq 0$. To avoid trivialities, we assume that $(u_n)_{n \geq 0}$ is not identically zero. We refer the reader to \cite[Ch.~1-8]{MR1990179} for the general terminology and theory of linear recurrences. The set \begin{equation*} \mathcal{B}_u := \{n \in \mathbf{N} : n \mid u_n \} \end{equation*} has been studied by several researchers. Assuming that $(u_n)_{n \geq 0}$ is nondegenerate and that its characteristic polynomial has only simple roots, Alba Gonz\'alez, Luca, Pomerance, and Shparlinski~\cite[Theorem~1.1]{MR2928495} proved that \begin{equation*} \#\mathcal{B}_u(x) \ll_k \frac{x}{\log x} , \end{equation*} for all sufficiently large $x > 1$. Andr\'{e}-Jeannin~\cite{MR1131414} and Somer~\cite{MR1271392} studied the arithmetic properties of the elements of $\mathcal{B}_u$ when $(u_n)_{n \geq 0}$ is a Lucas sequence, that is, $(u_0, u_1, k) = (0, 1, 2)$. In such a case, generalizing a previous result of Luca and Tron~\cite{MR3409327}, Sanna~\cite{MR3606950} proved the upper bound \begin{equation*} \#\mathcal{B}_u(x) \leq x^{1 - \left(\frac1{2} + o(1)\right) \log \log \log x / \log \log x} , \end{equation*} as $x \to +\infty$, where the $o(1)$ depends on $a_1$ and $a_2$. Furthermore, Corvaja and Zannier~\cite{MR1918678} studied the more general set \begin{equation*} \mathcal{B}_{u,v} := \{n \in \mathbf{N} : v_n \mid u_n \} , \end{equation*} where $(v_n)_{n \geq 0}$ is another linear recurrence over the integers. 
Under some mild hypotheses on $(u_n)_{n \geq 0}$ and $(v_n)_{n \geq 0}$, they proved that $\mathcal{B}_{u,v}$ has zero asymptotic density~\cite[Corollary~2]{MR1918678}, while Sanna~\cite{San_preprint} gave the bound \begin{equation*} \#\mathcal{B}_{u,v}(x) \ll_{u,v} x \cdot \left(\frac{\log \log x}{\log x}\right)^{h_{u,v}} , \end{equation*} for all $x \geq 3$, where $h_{u,v}$ is a positive integer depending only on $(u_n)_{n \geq 0}$ and $(v_n)_{n \geq 0}$. If $(F_n)_{n \geq 0}$ is the sequence of Fibonacci numbers, Leonetti and Sanna~\cite{LS_preprint} showed that the set \begin{equation*} \mathcal{G} := \{\gcd(n, F_n) : n \in \mathbf{N} \} \end{equation*} has zero asymptotic density, and that \begin{equation*} \#\mathcal{G}(x) \gg \frac{x}{\log x} , \end{equation*} for all $x \geq 2$. On the other hand, the set \begin{equation*} \mathcal{A}_u := \{n \in \mathbf{N} : \gcd(n, u_n) = 1\} \end{equation*} does not seem to have attracted as much attention. We prove the following result: \begin{thm}\label{thm:main} For any nondegenerate linear recurrence $(u_n)_{n \geq 0}$, the asymptotic density $\mathbf{d}(\mathcal{A}_u)$ of $\mathcal{A}_u$ exists. Moreover, if $(u_n / n)_{n \geq 1}$ is not a linear recurrence then $\mathbf{d}(\mathcal{A}_u) > 0$. Otherwise, $\mathcal{A}_u$ is finite and, a fortiori, $\mathbf{d}(\mathcal{A}_u) = 0$. \end{thm} We remark that, given the initial conditions and the coefficients of a linear recurrence $(u_n)_{n \geq 0}$, it is easy to test effectively whether $(u_n / n)_{n \geq 1}$ is a linear recurrence (see Lemma~\ref{lem:unovern} in \S\ref{sec:preliminaries}). \subsection*{Notation} Throughout, the letter $p$ always denotes a prime number. For a set of positive integers $\mathcal{S}$, we put $\mathcal{S}(x):=\mathcal{S}\cap [1,x]$ for all $x\ge 1$, and we recall that the asymptotic density $\mathbf{d}(\mathcal{S})$ of $\mathcal{S}$ is defined as the limit of the ratio $\#\mathcal{S}(x) / x$ as $x \to +\infty$, whenever this exists.
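As a quick numerical illustration of Theorem~\ref{thm:main} (our own sketch, not part of the proof), one can estimate the ratio $\#\mathcal{A}_u(x) / x$ when $(u_n)_{n \geq 0}$ is the Fibonacci sequence, a case where the density is positive:

```python
from math import gcd

def coprime_density_fibonacci(x):
    """Estimate #A_u(x)/x for u_n = F_n (Fibonacci): the proportion
    of n <= x with gcd(n, F_n) = 1."""
    a, b = 0, 1  # F_0, F_1
    count = 0
    for n in range(1, x + 1):
        a, b = b, a + b  # after this step, a = F_n
        if gcd(n, a) == 1:
            count += 1
    return count / x
```

For instance, $n = 5$ is not counted, since $\gcd(5, F_5) = 5$, while $n = 7$ is, since $\gcd(7, F_7) = \gcd(7, 13) = 1$.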
We employ the Landau--Bachmann ``Big Oh'' and ``little oh'' notations $O$ and $o$, as well as the associated Vinogradov symbols $\ll$ and $\gg$, with their usual meanings. Any dependence of the implied constants is explicitly stated or indicated with subscripts. \section{Preliminaries}\label{sec:preliminaries} In this section we give some definitions and collect some preliminary results needed in the later proofs. Let $f_u$ be the characteristic polynomial of $(u_n)_{n \geq 0}$, i.e., \begin{equation*} f_u(X) = X^k - a_1 X^{k - 1} - a_2 X^{k - 2} - \cdots - a_k . \end{equation*} Moreover, let $\mathbf{K}$ be the splitting field of $f_u$ over $\mathbf{Q}$, and let $\alpha_1, \ldots, \alpha_r \in \mathbf{K}$ be all the distinct roots of $f_u$. It is well known that there exist $g_1, \ldots, g_r \in \mathbf{K}[X]$ such that \begin{equation}\label{equ:genpowsum} u_n = \sum_{i = 1}^r g_i(n)\;\! \alpha_i^n , \end{equation} for all integers $n \geq 0$. We define $B_u$ as the smallest positive integer such that all the coefficients of the polynomials $B_u g_1, \ldots, B_u g_r$ are algebraic integers. We have the following easy lemma. \begin{lem}\label{lem:unovern} $(u_n / n)_{n \geq 1}$ is a linear recurrence if and only if \begin{equation}\label{equ:gizero} g_1(0) = \cdots = g_r(0) = 0 . \end{equation} In such a case, $\mathcal{A}_u$ is finite. \end{lem} \begin{proof} The first part of the lemma follows immediately from the fact that any linear recurrence can be written as a generalized power sum like (\ref{equ:genpowsum}) in a unique way (assuming the roots $\alpha_1, \ldots, \alpha_r$ are distinct, and up to the order of the addends). For the second part, if (\ref{equ:gizero}) holds, then for every positive integer $n$ we have that \begin{equation*} \frac{B_u u_n}{n} = \sum_{i = 1}^r \frac{B_u g_i(n)}{n} \, \alpha_i^n \end{equation*} is both a rational number and an algebraic integer, hence it is an integer.
Therefore, $n \mid B_u u_n$, and so $\gcd(n, u_n) = 1$ only if $n \mid B_u$, which in turn implies that $\mathcal{A}_u$ is finite. \end{proof} For the rest of this section, we assume that $(u_n)_{n \geq 0}$ is nondegenerate and that $f_u$ has only simple roots, hence, in particular, $r = k$. We write $\Delta_u$ for the discriminant of the polynomial $f_u$, and we recall that $\Delta_u$ is a nonzero integer. If $k \geq 2$, then for all integers $x_1, \ldots, x_k$ we set \begin{equation*} D_u(x_1, \ldots, x_k) := \det(\alpha_i^{x_j})_{1 \leq i, j \leq k} , \end{equation*} and for any prime number $p$ not dividing $a_k$ we define $T_u(p)$ as the greatest integer $T \geq 0$ such that $p$ does not divide \begin{equation*} \prod_{1 \leq x_2, \ldots, x_k \leq T} \max\!\left\{1, |N_\mathbf{K} (D_u(0, x_2, \ldots, x_k))| \right\} , \end{equation*} where $N_\mathbf{K}(\alpha)$ denotes the norm of $\alpha \in \mathbf{K}$ over $\mathbf{Q}$, and the empty product is equal to $1$. It is known that such $T$ exists \cite[p.~88]{MR1990179}. If $k = 1$, then we set $T_u(p) := +\infty$ for all prime numbers $p$ not dividing $a_1$. Note that $T_u(p) = 0$ if and only if $k = 2$ and $p$ divides $\Delta_u$. Finally, for all $\gamma \in {]0,1[}$, we define \begin{equation*} \mathcal{P}_{u,\gamma} := \{p : p \nmid a_k, \; T_u(p) < p^\gamma \} . \end{equation*} We are ready to state two important lemmas regarding $T_u(p)$~\cite[Lemma~2.1, Lemma~2.2]{MR2928495}. \begin{lem}\label{lem:Pugamma} For all $\gamma \in {]0,1[}$ and $x \geq 2^{1/ \gamma}$ we have \begin{equation*} \#\mathcal{P}_{u,\gamma}(x) \ll_u \frac{x^{k\gamma}}{\gamma \log x} . \end{equation*} \end{lem} \begin{lem}\label{lem:upmcong} Assume that $p$ is a prime number not dividing $a_k B_u \Delta_u$ and relatively prime with at least one term of $(u_n)_{n \geq 0}$. Then, for all $x \geq 1$, the number of positive integers $m \leq x$ such that $u_{pm} \equiv 0 \pmod p$ is \begin{equation*} O_k\!\left(\frac{x}{T_u(p)} + 1\right) . 
\end{equation*} \end{lem} Actually, in~\cite{MR2928495} both Lemma~\ref{lem:Pugamma} and Lemma~\ref{lem:upmcong} were proved only for $k \geq 2$. However, one can easily check that they are true also for $k = 1$. \section{Proof of Theorem~\ref{thm:main}} For all integers $n \geq 0$, define \begin{equation*} v_n := B_u \sum_{i = 1}^r \frac{g_i(n) - g_i(0)}{n} \;\! \alpha_i^n \quad\text{and}\quad w_n := B_u \sum_{i = 1}^r g_i(0) \;\! \alpha_i^n . \end{equation*} Note that both $(v_n)_{n \geq 0}$ and $(w_n)_{n \geq 0}$ are linear recurrences of algebraic integers, and that the characteristic polynomial of $(w_n)_{n \geq 0}$ has only simple roots. Let $\mathcal{G}$ be the Galois group of $\mathbf{K}$ over $\mathbf{Q}$. Since $u_n$ is an integer, for any $\sigma \in \mathcal{G}$ we have that \begin{align}\label{equ:nvnwn} nv_n + w_n = B_u u_n = \sigma(B_u u_n) = \sigma(n v_n + w_n) = n\sigma(v_n) + \sigma(w_n) , \end{align} for all integers $n \geq 0$. In (\ref{equ:nvnwn}) note that both $n\sigma(v_n)$ and $\sigma(w_n)$ are linear recurrences, and the first is a multiple of $n$, while the characteristic polynomial of the second has only simple roots. Since the expression of a linear recurrence as a generalized power sum is unique, from (\ref{equ:nvnwn}) we get that $w_n = \sigma(w_n)$ for any $\sigma \in \mathcal{G}$, hence $w_n$ is an integer. Thanks to Lemma~\ref{lem:unovern}, we know that $(w_n)_{n \geq 0}$ is identically zero if and only if $(u_n / n)_{n \geq 1}$ is a linear recurrence, and in such a case $\mathcal{A}_u$ is finite, so that the claim of Theorem~\ref{thm:main} is obvious. Hence, we assume that $(w_n)_{n \geq 0}$ is not identically zero. For the sake of convenience, put $\mathcal{C}_u := \mathbf{N} \setminus \mathcal{A}_u$. Thus we have to prove that the asymptotic density of $\mathcal{C}_u$ exists and is less than $1$. 
For each $y > 0$, we split $\mathcal{C}_u$ into two subsets: \begin{align*} \mathcal{C}_{u,y}^- &:= \{n \in \mathcal{C}_u : p \mid \gcd(n, u_n) \text{ for some } p \leq y\} , \\ \mathcal{C}_{u,y}^+ &:= \mathcal{C}_u \setminus \mathcal{C}_{u,y}^- . \end{align*} It is well known that $(u_n)_{n \geq 0}$ is eventually periodic modulo $p$, for any prime number $p$. Therefore, it is easy to see that $\mathcal{C}_{u,y}^-$ is a union of finitely many arithmetic progressions and a finite subset of $\mathbf{N}$. In particular, $\mathcal{C}_{u,y}^-$ has an asymptotic density. If we put $\delta_y := \mathbf{d}(\mathcal{C}_{u,y}^-)$, then it is clear that $\delta_y$ is a bounded nondecreasing function of $y$, hence the limit \begin{equation}\label{equ:limdelta} \delta := \lim_{y \to +\infty} \delta_y \end{equation} exists and is finite. We shall prove that $\mathcal{C}_u$ has asymptotic density $\delta$. Hereafter, all the implied constants may depend on $(u_n)_{n \geq 0}$ and $k$. If $n \in \mathcal{C}_{u, y}^+(x)$ then there exists a prime $p > y$ such that $p \mid n$ and $p \mid u_n$. Furthermore, $B_u u_n = n v_n + w_n$ implies that $p \mid w_n$. Hence, we can write $n = pm$ for some positive integer $m \leq x / p$ such that $w_{pm} \equiv 0 \pmod p$. For sufficiently large $y$, we have that $p$ does not divide $f_w(0) B_w \Delta_w$ and is relatively prime to at least one term of $(w_s)_{s \geq 0}$, since $(w_s)_{s \geq 0}$ is not identically zero. Therefore, by applying Lemma~\ref{lem:upmcong} to $(w_s)_{s \geq 0}$, we get that the number of possible values of $m$ is at most \begin{equation*} O\!\left(\frac{x}{pT_w(p)} + 1\right) . \end{equation*} As a consequence, \begin{equation}\label{equ:bound1} \#\mathcal{C}_{u, y}^+(x) \ll \sum_{y < p \leq x} \left(\frac{x}{pT_w(p)} + 1\right) \ll x \cdot \left(\sum_{p > y} \frac1{pT_w(p)} + \frac1{\log x}\right) , \end{equation} where we also used Chebyshev's bound for the number of primes not exceeding $x$.
Setting $\gamma := 1/(k + 1)$, by partial summation and Lemma~\ref{lem:Pugamma}, we have \begin{equation}\label{equ:bound2} \sum_{\substack{p > y \\ p \in \mathcal{P}_{w,\gamma}}} \frac1{pT_w(p)} \leq \sum_{\substack{p > y \\ p \in \mathcal{P}_{w,\gamma}}} \frac1{p} = \left[\frac{\#\mathcal{P}_{w,\gamma}(t)}{t}\right]_{t = y}^{+\infty} + \int_y^{+\infty} \frac{\#\mathcal{P}_{w,\gamma}(t)}{t^2}\mathrm{d}t \ll \frac1{y^{1 - k\gamma}} = \frac1{y^\gamma}. \end{equation} On the other hand, \begin{equation}\label{equ:bound3} \sum_{\substack{p > y \\ p \notin \mathcal{P}_{w,\gamma}}} \frac1{pT_w(p)} \leq \sum_{\substack{p > y \\ p \notin \mathcal{P}_{w,\gamma}}} \frac1{p^{1 + \gamma}} \ll \int_y^{+\infty} \frac{\mathrm{d}t}{t^{1 + \gamma}} \ll \frac1{y^{\gamma}} . \end{equation} Thus, putting together (\ref{equ:bound1}), (\ref{equ:bound2}), and (\ref{equ:bound3}), we obtain \begin{equation*} \frac{\#\mathcal{C}_{u, y}^+(x)}{x} \ll \frac1{y^\gamma} + \frac1{\log x} , \end{equation*} so that \begin{equation}\label{equ:limsup} \limsup_{x \to +\infty} \left|\frac{\#\mathcal{C}_u(x)}{x} - \delta_y\right| = \limsup_{x \to +\infty} \left|\frac{\#\mathcal{C}_u(x)}{x} - \frac{\#\mathcal{C}_{u,y}^-(x)}{x}\right| = \limsup_{x \to +\infty} \frac{\#\mathcal{C}_{u,y}^+(x)}{x} \ll \frac1{y^\gamma} , \end{equation} hence, by letting $y \to +\infty$ in (\ref{equ:limsup}) and by using (\ref{equ:limdelta}), we get that $\mathcal{C}_u$ has asymptotic density $\delta$. It remains only to prove that $\delta < 1$. 
Clearly, \begin{equation*} \mathcal{C}_{u,y}^- \subseteq \{n \in \mathbf{N} : p \mid n \text{ for some } p \leq y\} , \end{equation*} so that, by Eratosthenes' sieve and Mertens' third theorem~\cite[Ch.~I.1, Theorem~11]{Ten95}, we have \begin{equation}\label{equ:limsupCminus} \limsup_{x \to +\infty} \frac{\#\mathcal{C}_{u,y}^-(x)}{x} \leq 1 - \prod_{p \leq y}\left(1 - \frac1{p}\right) \leq 1 - \frac{c_1}{\log y} , \end{equation} for all $y \geq 2$, where $c_1 > 0$ is an absolute constant. Hence, putting together (\ref{equ:limsupCminus}) and the last part of (\ref{equ:limsup}), we get \begin{equation}\label{equ:lastlimsup} \delta = \lim_{x \to +\infty} \frac{\#\mathcal{C}_u(x)}{x} \leq \limsup_{x \to +\infty} \frac{\#\mathcal{C}_{u,y}^-(x)}{x} + \limsup_{x \to +\infty} \frac{\#\mathcal{C}_{u,y}^+(x)}{x} \leq 1 - \left(\frac{c_1}{\log y} - \frac{c_2}{y^{\gamma}}\right) , \end{equation} for all $y \geq 2$, where $c_2 > 0$ is an absolute constant. Finally, picking a sufficiently large $y$, depending on $c_1$ and $c_2$, the bound (\ref{equ:lastlimsup}) yields $\delta < 1$. The proof of Theorem~\ref{thm:main} is complete. \bibliographystyle{amsplain}
\section{{#1}}} \newcommand{\uple}[1]{\text{\boldmath${#1}$}} \def\stacksum#1#2{{\stackrel{{\scriptstyle #1}} {{\scriptstyle #2}}}} \newcommand{\uple{x}}{\uple{x}} \newcommand{\uple{y}}{\uple{y}} \newcommand{\uple{m}}{\uple{m}} \newcommand{\uple{n}}{\uple{n}} \newcommand{\mathbf{C}}{\mathbf{C}} \newcommand{\mathbf{N}}{\mathbf{N}} \newcommand{\mathbf{A}}{\mathbf{A}} \newcommand{\mathbf{B}}{\mathbf{B}} \newcommand{\mathbf{D}}{\mathbf{D}} \newcommand{\mathbf{Z}}{\mathbf{Z}} \newcommand{\mathbf{P}}{\mathbf{P}} \newcommand{\mathbf{R}}{\mathbf{R}} \newcommand{\mathbf{G}}{\mathbf{G}} \newcommand{\mathbf{G}_{m}}{\mathbf{G}_{m}} \newcommand{\mathbf{H}}{\mathbf{H}} \newcommand{\mathbf{Q}}{\mathbf{Q}} \newcommand{{\mathbf{F}_p}}{{\mathbf{F}_p}} \newcommand{{\mathbf{F}^\times_p}}{{\mathbf{F}^\times_p}} \newcommand{\mathbf{F}}{\mathbf{F}} \newcommand{\mathbf{T}}{\mathbf{T}} \newcommand{\mathbf{G}}{\mathbf{G}} \newcommand{g^\natural}{g^\natural} \newcommand{\boldsymbol{\mu}}{\boldsymbol{\mu}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\mathcal{K}\ell}{\mathcal{K}\ell} \newcommand{\overline{\mathbf{F}}}{\overline{\mathbf{F}}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\text{\boldmath$P$}}{\mathbf{P}} \newcommand{\text{\boldmath$E$}}{\mathbf{E}} \newcommand{\mathbf{V}}{\mathbf{V}} \newcommand{\mathbf{1}}{\mathbf{1}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{g^{\sharp}}{g^{\sharp}} \newcommand{y^{\sharp}}{y^{\sharp}} \newcommand{\clconj}[1]{{{#1}}^{\sharp}} \newcommand{\mods}[1]{\,(\mathrm{mod}\,{#1})} \newcommand{\sli}[1]{\underline{{#1}}} \newcommand{\ideal}[1]{\mathfrak{{#1}}} \newcommand{\widehat}{\widehat} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathbf{G}}{\mathbf{G}} \newcommand{\mathbf{B}}{\mathbf{B}} \newcommand{\mathbf{D}}{\mathbf{D}} \newcommand{\mathbf{G}^{opt}}{\mathbf{G}^{opt}} \newcommand{\hautk}[2]{\mathbf{G}_{{#1},{#2}}} 
\newcommand{\hautz}[2]{\mathbf{G}^{a}_{{#1},{#2}}} \newcommand{\hauti}[3]{\mathbf{G}^{{#1}}_{{#2},{#3}}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\skl}[1]{\sheaf{K}^{({#1})}} \newcommand{\hk}[1]{\sheaf{K}\ell_{{#1}}} \newcommand{\mutw}[3]{\mu_{{#3},{#2}}} \newcommand{\frtr}[3]{(\Tr{{#1}})({#2},{#3})} \DeclareMathOperator{\hypk}{Kl} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\rightarrow}{\rightarrow} \newcommand{\longrightarrow}{\longrightarrow} \newcommand{\twoheadrightarrow}{\twoheadrightarrow} \newcommand{\hookrightarrow}{\hookrightarrow} \newcommand{\Longleftrightarrow}{\Longleftrightarrow} \newcommand{\fleche}[1]{\stackrel{#1}{\longrightarrow}} \newcommand{\barre}[1]{\overline{{#1}}} \DeclareMathOperator{\spec}{Spec} \DeclareMathOperator{\Vol}{Vol} \DeclareMathOperator{\proj}{Proj} \DeclareMathOperator{\Card}{Card} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\res}{Res} \DeclareMathOperator{\reg}{reg} \DeclareMathOperator{\ord}{ord} \DeclareMathOperator{\cl}{Cl} \DeclareMathOperator{\Div}{Div} \DeclareMathOperator{\divg}{divg} \DeclareMathOperator{\Pic}{Pic} \DeclareMathOperator{\vol}{Vol} \DeclareMathOperator{\Imag}{Im} \DeclareMathOperator{\Reel}{Re} \DeclareMathOperator{\syms}{Sym^{2}} \DeclareMathOperator{\symk}{Sym} \DeclareMathOperator{\li}{li} \DeclareMathOperator{\frob}{\mathrm{Fr}} \DeclareMathOperator{\tr}{\mathrm{tr}} \DeclareMathOperator{\Gal}{Gal} \DeclareMathOperator{\Ind}{Ind} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\im}{Im} \DeclareMathOperator{\Tr}{tr} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\varia}{Var} \DeclareMathOperator{\argu}{Arg} \DeclareMathOperator{\spect}{Spec} \DeclareMathOperator{\disc}{disc} \DeclareMathOperator{\swan}{Swan} \DeclareMathOperator{\bb}{B} \DeclareMathOperator{\codim}{codim} \DeclareMathOperator{\ft}{FT} \DeclareMathOperator{\cond}{cond} \DeclareMathOperator{\Ad}{Ad} \DeclareMathOperator{\dual}{D} 
\newcommand{\varepsilon}{\varepsilon} \renewcommand{\rho}{\varrho} \DeclareMathOperator{\SL}{SL} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\PGL}{PGL} \DeclareMathOperator{\rmT}{T} \DeclareMathOperator{\rmG}{G} \DeclareMathOperator{\rmN}{N} \DeclareMathOperator{\rmU}{U} \DeclareMathOperator{\PSL}{PSL} \DeclareMathOperator{\Sp}{Sp} \DeclareMathOperator{\GSp}{GSp} \DeclareMathOperator{\SO}{SO} \DeclareMathOperator{\Ort}{O} \DeclareMathOperator{\SU}{SU} \DeclareMathOperator{\Un}{U} \DeclareMathOperator{\USp}{USp} \newcommand{{\textstyle{\frac{1}{2}}}}{{\textstyle{\frac{1}{2}}}} \newcommand{{\textstyle{\frac{1}{4}}}}{{\textstyle{\frac{1}{4}}}} \newcommand{{\textstyle{\frac{3}{2}}}}{{\textstyle{\frac{3}{2}}}} \newcommand{\avg}[1]{A[{#1}]} \newcommand{\underline{O}}{\underline{O}} \newcommand{O}{O} \newcommand{\sheaf}[1]{\mathcal{{#1}}} \newcommand{M}{M} \newcommand{linearly disjoint}{linearly disjoint} \newcommand{\sheafm}[1]{\tilde{\sheaf{{#1}}}_{\ell}} \DeclareMathSymbol{\gena}{\mathord}{letters}{"3C} \DeclareMathSymbol{\genb}{\mathord}{letters}{"3E} \def\mathop{\sum \Bigl.^{\flat}}\limits{\mathop{\sum \Bigl.^{\flat}}\limits} \def\mathop{\sum \sum}\limits{\mathop{\sum \sum}\limits} \def\mathop{\sum \sum \sum \sum}\limits{\mathop{\sum \sum \sum \sum}\limits} \def\mathop{\sum\cdots \sum}\limits{\mathop{\sum\cdots \sum}\limits} \def\mathop{\sum\bigl.^{\flat}}\limits{\mathop{\sum\bigl.^{\flat}}\limits} \def\mathop{\sum \Bigl.^{*}}\limits{\mathop{\sum \Bigl.^{*}}\limits} \def\mathop{\sum\sum \Bigl.^{*}}\limits{\mathop{\sum\sum \Bigl.^{*}}\limits} \def\mathop{\sum\sum \Bigl.^{\sharp}}\limits{\mathop{\sum\sum \Bigl.^{**}}\limits} \def\mathop{\sum\sum \Bigl.^{\sharp}}\limits{\mathop{\sum\sum \Bigl.^{\sharp}}\limits} \def\mathop{\prod \Bigl.^{*}}\limits{\mathop{\prod \Bigl.^{*}}\limits} \def\mathop{\sum \Bigl.^{h}}\limits{\mathop{\sum \Bigl.^{h}}\limits} \def\frac{1}{2i\pi}\mathop{\int}\limits{\frac{1}{2i\pi}\mathop{\int}\limits} 
\def\mathop{\oplus}\limits{\mathop{\oplus}\limits} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem*{theorem*}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{variant}[theorem]{Variant} \theoremstyle{remark} \newtheorem*{convention}{Conventions} \newtheorem*{warning}{Warning} \newtheorem*{rem}{Remark} \newtheorem*{rems}{Remarks} \newtheorem*{property}{Properties} \theoremstyle{definition} \newtheorem*{claim}{Claim} \newtheorem{definition}[theorem]{Definition} \newtheorem{assumption}[theorem]{Assumption} \newtheorem*{question}{Question} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem*{application}{Application} \newtheorem{xca}{Exercise} \newcommand{\indic}[1]{[\underline{Hint}:\ {#1}]} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\blankbox}[2]{% \parbox{\columnwidth}{\centering \setlength{\fboxsep}{0pt}% \fbox{\raisebox{0pt}[#2]{\hspace{#1}}}% }% } \newcommand{w}{w} \newcommand{\mathfrak{p}}{\mathfrak{p}} \newcommand{$g$-equivalent}{$g$-equivalent} \newcommand{$g$-equivalence}{$g$-equivalence} \newcommand{G^g}{G^g} \newcommand{\Psi}{\Psi} \newcommand{\Upsilon}{\Upsilon} \newcommand{(\sieve,\siftable)}{(\Psi,\Upsilon)} \newenvironment{epigraph} {\hfill\begin{minipage}{0.6\linewidth}\raggedleft\footnotesize}{\end{minipage}\bigskip\bigskip} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{J}}{\mathcal{J}} \newcommand{\mathcal{I}}{\mathcal{I}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathfrak{a}}{\mathfrak{a}} \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} 
\renewcommand{\Re}{\mathfrak{Re}\,} \renewcommand{\Im}{\mathfrak{Im}\,} \newcommand{\eqref}{\eqref} \newcommand{\backslash}{\backslash} \newcommand{\ov}[1]{\overline{#1}} \newcommand{\peter}[1]{\langle{#1}\rangle} \newcommand\sumsum{\mathop{\sum\sum}\limits} \newcommand\delval{1/8} \newcommand\delvaln{1/16} \newcommand\finalexponent{1/24} \begin{document} \title{Algebraic trace functions over the primes} \author{\'Etienne Fouvry} \address{Universit\'e Paris Sud, Laboratoire de Math\'ematique\\ Campus d'Orsay\\ 91405 Orsay Cedex\\France} \email{etienne.fouvry@math.u-psud.fr} \author{Emmanuel Kowalski} \address{ETH Z\"urich -- D-MATH\\ R\"amistrasse 101\\ CH-8092 Z\"urich\\ Switzerland} \email{kowalski@math.ethz.ch} \author{Philippe Michel} \address{EPFL/SB/IMB/TAN, Station 8, CH-1015 Lausanne, Switzerland } \email{philippe.michel@epfl.ch} \date{\today,\ \thistime} \thanks{Ph. M. was partially supported by the SNF (grant 200021-137488) and the ERC (Advanced Research Grant 228304). \'E. F. thanks ETH Z\"urich, EPF Lausanne and the Institut Universitaire de France for financial support. } \subjclass[2010]{11N05, 11N13, 11N32, 11N35, 11F11, 11T23, 11L05} \keywords{Sums over primes, M\"obius function, Eisenstein series, trace functions of $\ell$-adic sheaves, Riemann Hypothesis over finite fields} \begin{abstract} We study sums over primes of trace functions of $\ell$-adic sheaves. Using an extension of our earlier results on algebraic twist of modular forms to the case of Eisenstein series and bounds for Type II sums based on similar applications of the Riemann Hypothesis over finite fields, we prove general estimates with power-saving for such sums. We then derive various concrete applications. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} Let $f(X)=P(X)/Q(X)$, where $P,Q\in\mathbf{Z}[X]$, be a non-constant rational function. 
If $p$ is a prime large enough so that $f(X)$ defines a rational function on ${\mathbf{F}_p}$ by reduction modulo $p$, it follows from the work of Weil that we have the estimate $$ \sum_\stacksum{1\leq n\leq p}{(Q(n),p)=1}e\Bigl(\frac{P(n)\ov{Q(n)}}p\Bigr)\ll p^{1/2} $$ (where $Q(n)\ov{Q(n)}=1\mods p$) which exhibits considerable cancellation in this exponential sum. It is a natural question, with many potential applications, to ask whether such cancellation persists when the sum is restricted to prime numbers $q$, either less than $p$ or over shorter intervals (longer intervals being usually easier to handle). In \cite[Théorème 1.1]{FMAnn}, Fouvry and Michel proved that this is indeed almost always the case: \begin{theorem}\label{primesumthmFM} Let $f=P/Q$, with $P$, $Q\in\mathbf{Z}[X]$ coprime monic polynomials. For every prime $p$ such that the reduction of $f$ modulo $p$ is not a polynomial of degree $\leq 1$, for every $X\leq p$ and every $\eta<1/32$, we have $$ \sum_\stacksum{q\leq X,\ prime}{(Q(q),p)=1} e\Bigl(\frac{P(q)\ov{Q(q)}}p\Bigr) \ll X\Bigl(\frac{p}X\Bigr)^{7/32}p^{-\eta}, $$ where the implicit constant depends only on $\eta$ and on the degrees of $P$ and $Q$. \end{theorem} Similar estimates were already known when $f$ is a polynomial (of degree $>1$), but with the exponent $\eta$ depending on the degree of $f$ and tending to $0$ as the latter increased (see, e.g.,~\cite{Hua,Harman}). Thus an important new feature in~\cite{FMAnn} was to allow for the most general possible rational fractions, and for a uniform $p$-power saving. One of the key inputs of the proof was an essential use of Deligne's theory of higher dimensional algebraic exponential sums. There are, however, many other functions defined over ${\mathbf{F}_p}$ for which one would like to have similar results. 
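Weil's estimate for the complete sum can be checked numerically for small primes. The following sketch is our own illustration (not part of the paper): it takes $f(X) = X + 1/X$, so that the complete sum is the classical Kloosterman sum $S(1,1;p)$, for which Weil's bound takes the explicit form $|S(1,1;p)| \leq 2\sqrt{p}$.

```python
# Numerical check (our illustration) of Weil's bound for the complete sum
# with f(X) = X + 1/X: this is the Kloosterman sum S(1,1;p), and Weil's
# bound reads |S(1,1;p)| <= 2 sqrt(p).
import cmath
import math

def e_p(a, p):
    """The additive character e(a/p)."""
    return cmath.exp(2j * math.pi * (a % p) / p)

def weil_sum(p):
    # sum over 1 <= n <= p-1 of e((n + nbar)/p), nbar the inverse of n mod p
    return sum(e_p(n + pow(n, -1, p), p) for n in range(1, p))

for p in [101, 499, 1009]:
    S = weil_sum(p)
    print(p, abs(S), 2 * math.sqrt(p))   # |S| stays below 2 sqrt(p)
```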
For instance, with $f(X)=P(X)/Q(X)$ with $P$, $Q$ in $\mathbf{Z}[X]$ as above, one may naturally want to consider $$ K(n)= \begin{cases} \chi(f(n))&\text{if $(p,Q(n))=1$ },\\ 0&\hbox{if}\ p|Q(n), \end{cases} $$ for some non-trivial Dirichlet character $\chi$ modulo $p$ of order $h\geq 2$, provided the rational function $f$ is not proportional to an $h$-th power. Nevertheless, the only examples we are aware of concern the case where $f$ is a split polynomial of degree $\leq 2$ not vanishing at $0$, which was considered by Karatsuba~\cite{Kar,Kar2,Kar3} (see also \cite{FGS}). In Corollary~\ref{cor-mult-car}, we will prove a non-trivial bound for an (almost) arbitrary rational function $f$. Beyond additive and multiplicative characters, there are other functions defined over finite fields which are now common tools in number theory. A nice example is given by the (normalized) hyper-Kloosterman sums in $m-1$ variables, introduced by Deligne and studied by Katz in great detail in \cite{GKM}, which are defined by $$ K(n)=\hypk_m(n;p)=\frac{1}{p^{\frac{m-1}2}}\mathop{\sum\cdots \sum}\limits_\stacksum{x_1\cdots x_m=n}{x_i\in{\mathbf{F}_p}} e\Bigl(\frac{x_1+\cdots+x_m}p\Bigr), $$ for some integer $m\geq 2$. This example was first considered by the third author in~\cite{Michelthese,MichelInv,MichelDMJ}, who obtained a modest (yet non-trivial) saving of $\frac{\log\log p}{\log p}$ over the trivial bound $O(p/\log p)$ for the sum over primes $q<p$ of $\hypk_m(q;p)$. \begin{remark}\label{introHNY} One can also wonder about the very recent generalizations of Kloosterman sums $\hypk^\rho_{\check\mathbf{G}}$ associated to the general Kloosterman sheaves defined, for an arbitrary split reductive group $\check\mathbf{G}$ and a representation $\rho$ of it, by Heinloth, Ng\^o and Yun~\cite{HNY}, where the case of the hyper-Kloosterman sums above corresponds to $\check\mathbf{G}=\GL_n$ with its standard representation. 
It is a sign of the generality of our results that they do apply very straightforwardly to this case, although the corresponding trace functions have not (yet) been made explicit for all\footnote{\ As Ng\^o kindly informed us, Yun has computed these sums explicitly for $\check\mathbf{G}=\mathrm{SO}(2n+1)$ and its standard representation: $\hypk_{\mathrm{SO}(3)}$ is the symmetric square of $\hypk_2$ and $\hypk_{\mathrm{SO}(2n+1)}$ for $n\geq 2$ is essentially the multiplicative convolution of $\hypk_{\mathrm{SO}(3)}$ and of two Kloosterman sums $\hypk_n$, see~\cite{Yun}.} $\check\mathbf{G}$! \end{remark} The common link between all these functions is that they are special cases of the general class of functions we called {\em trace weights} in \cite{FKM}, with bounded conductors. Precisely, we have the following definition: \begin{definition}[Trace weights]\label{def-admissible} For a prime $p$ and a prime $\ell\not=p$, an {\em isotypic trace sheaf modulo} $p$ is a geometrically isotypic $\ell$-adic Fourier sheaf $\mathcal{F}$ on $\mathbf{A}^1_{{\mathbf{F}_p}}$, in the sense of~\cite[Def. 8.2.2]{GKM}, which is pointwise pure of weight $0$. \par An {\em isotypic trace weight modulo} $p$ is the trace function $$ K(x)=\iota(\frtr{\sheaf{F}}{{\mathbf{F}_p}}{x}) $$ for $x\in {\mathbf{F}_p}$ of an isotypic trace sheaf $\mathcal{F}$, this trace function being seen as complex-valued by means of some fixed isomorphism $\iota\,:\,\bar{\mathbf{Q}}_{\ell}\rightarrow \mathbf{C}$. \end{definition} To any middle-extension sheaf $\mathcal{F}$ on $\mathbf{A}^1_{{\mathbf{F}_p}}$ is associated its \emph{analytic conductor}, a numerical invariant which measures the complexity of $\mathcal{F}$. 
This is a positive integer defined by $$ \cond(\mathcal{F})=\rank(\mathcal{F})+\sum_{x}(1+\swan_x(\mathcal{F})), $$ where $x$ ranges over the (finitely many) singularities of $\mathcal{F}$ in $\mathbf{P}^1(\ov{\mathbf{F}_p})$, i.e., those $x$ where $\mathcal{F}$ is not lisse, and $\swan_x(\mathcal{F})\geq 0$ is the Swan conductor of $\sheaf{F}$ at $x$ (see~\cite{GKM}). For an isotypic trace weight $K(n)$, we define the conductor as the minimal conductor of an isotypic trace sheaf $\sheaf{F}$ with trace function equal to $K(n)$ on ${\mathbf{F}_p}$. \begin{remark} For example: \begin{itemize} \item[-] If $K(n)=e(P(n)/p)$ for a polynomial $P\in{\mathbf{F}_p}[X]$ of degree $<p$, the associated sheaf has conductor $\cond(\mathcal{F})=\deg P+2$; \item[-] If $K(n)=\chi(P(n))$ where $\chi$ is multiplicative and $P\in{\mathbf{F}_p}[X]$ a polynomial, then $\cond(\mathcal{F})$ is bounded by $2$ plus the number of distinct zeros of $P$ in $\ov{{\mathbf{F}_p}}$; \item[-] For the hyper-Kloosterman sums $K(n)=\hypk_m(n;p)$, the conductor is $m+3$; \item[-] For the trace function of the $\ell$-adic Kloosterman sheaf associated to the adjoint representation of the split reductive group $\check\mathbf{G}$ in~\cite{HNY}, the conductor is bounded by $\dim(\check\mathbf{G})+2+r(\check\mathbf{G})$, where $r(\check\mathbf{G})$ is the rank of $\check\mathbf{G}$ (by~\cite[p. 4, (3)]{HNY}: this sheaf is of dimension $\dim \Ad=\dim\check\mathbf{G}$, lisse on $\mathbf{G}_m$, tame at $0$ and with Swan conductor $r(\check\mathbf{G})$ at $\infty$). \end{itemize} \end{remark} Our main result in this paper is an estimate for any sum over primes $q$ of an isotypic trace function modulo $p$, which is universal in quality and gives power-saving whenever the length of the sum is roughly comparable with $p$ on a logarithmic scale, in particular allowing some sums over shorter intervals $q<p^{1-\theta}$ for some $\theta>0$. 
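As a concrete numerical check on the hyper-Kloosterman sums recalled above (our own illustrative sketch, not part of the paper's argument), one can evaluate $\hypk_m(n;p)$ directly from the definition, using the constraint $x_1\cdots x_m=n$ to solve for the last variable, and compare with Deligne's bound $|\hypk_m(n;p)|\leq m$. The naive evaluation takes $O(p^{m-1})$ time, so this is only feasible for tiny $p$.

```python
# Naive evaluation (our illustration) of the normalized hyper-Kloosterman
# sum Kl_m(n;p) from the displayed definition: the last variable x_m is
# determined by x_1 ... x_m = n (mod p).  Deligne's bound: |Kl_m(n;p)| <= m.
import cmath
import math
from itertools import product

def hyper_kloosterman(m, n, p):
    s = 0
    for xs in product(range(1, p), repeat=m - 1):
        prod_x = 1
        for x in xs:
            prod_x = prod_x * x % p
        x_m = n * pow(prod_x, -1, p) % p       # forced by x_1 ... x_m = n
        s += cmath.exp(2j * math.pi * ((sum(xs) + x_m) % p) / p)
    return s / p ** ((m - 1) / 2)

p = 53
vals = [abs(hyper_kloosterman(3, n, p)) for n in range(1, p)]
print(max(vals))   # Deligne: at most m = 3
```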
The only weights we cannot handle are those where the corresponding estimate would be tantamount to a ``quasi-Riemann Hypothesis'', i.e., a zero-free strip for some Dirichlet $L$-functions. \par It will therefore be natural to say that $K(n)$ is an \emph{exceptional weight modulo $p$} (for sums over primes) if it is proportional to a weight $$ K_{\chi,\psi}(n)=\chi(n)\psi(n) $$ where $\chi$ (resp. $\psi$) is a multiplicative (resp. additive) character modulo $p$, where either or both may be trivial. \par Similarly, a sheaf $\mathcal{F}$ will be called \emph{exceptional} if it is geometrically isotypic and geometrically isomorphic to a sum of copies of a tensor product $\sheaf{L}_{\chi}\otimes\sheaf{L}_{\psi}$ of a Kummer sheaf with an Artin-Schreier sheaf, so that its trace function is exceptional. \par We state the results both for standard and for smoothed sums over primes. We will consider smooth test functions $V$, compactly supported in $[1/2,2]$, such that \begin{equation}\label{Vcond} x^jV^{(j)}(x)\ll Q^j \end{equation} for some $Q\geq 1$ and for any integer $j\geq 0$, where the implicit constant depends on $j$. \begin{theorem}[Trace weights vs. primes]\label{primesumthm} Let $K$ be an isotypic trace weight on ${\mathbf{F}_p}$ associated to some sheaf $\mathcal{F}$, and assume that $\mathcal{F}$ is not exceptional. Let $V$ be a smooth function as above satisfying~\emph{(\ref{Vcond})} for some parameter $Q\geq 1$. \par For any $X\geq 2$, we have \begin{align}\label{primesumsmooth} \sum_{q\ \text{prime}}K(q)V\Bigl(\frac{q}X\Bigr)&\ll QX(1+p/X)^{1/6}p^{-\eta},\\ \label{primesuminterval} \sum_\stacksum{q\ \text{prime}}{q \leq X}K(q)&\ll X(1+p/X)^{1/12}p^{-\eta/2}, \end{align} for any $\eta<1/24$. The implicit constants depend only on $\eta$, $\cond(\mathcal{F})$ and the implicit constants in~\emph{(\ref{Vcond})}. Moreover, the dependency on $\cond(\mathcal{F})$ is at most polynomial. 
\end{theorem} \begin{remark} For $X=p$ one gets $$ \sum_\stacksum{q\ \text{prime}}{q <p}K(q)\ll p^{1-1/48+\varepsilon}, $$ and for general $X$ these bounds are non-trivial as long as the conductor of $\mathcal{F}$ remains bounded and the range $X$ is greater than $p^{3/4+\varepsilon}$ for some $\varepsilon>0$. Stronger results are available by different methods for special $K$. For instance, Bourgain~\cite{bourgainmore} and Bourgain-Garaev \cite{BG} have obtained bounds for $$ K(n)=e\Bigl(\frac{an+b{\ov{n}}^{k}}{p}\Bigr),\ k\in\mathbf{N}-\{0\},\ (b,p)=1, $$ which are non-trivial as long as $X\geq p^{1/2+\varepsilon}$ (see also \cite{FoSh} for a survey of existing methods). \end{remark} Closely related to Theorem \ref{primesumthm} is the following estimate: \begin{theorem}[Trace weights vs. M\"obius] \label{moebiussumthm} Let $\mu$ denote the M\"obius function. With the same notations and hypotheses as in Theorem~\ref{primesumthm}, we have for $X\geq 2$ \begin{align*} \sum_{n}\mu(n)K(n)V\Bigl(\frac{n}X\Bigr)&\ll QX(1+p/X)^{1/6}p^{-\eta},\\ \sum_{n \leq X}\mu(n)K(n)&\ll X(1+p/X)^{1/12}p^{-\eta/2}, \end{align*} for any $\eta<1/24$, where the implicit constants depend only on $\eta$, $\cond(\mathcal{F})$ and the implicit constants in~\emph{(\ref{Vcond})}, and the dependency on $\cond(\mathcal{F})$ is at most polynomial. \end{theorem} \begin{remark} The discrepancy in the power-saving exponents between the smoothed and unsmoothed sums in Theorems \ref{primesumthm} and \ref{moebiussumthm} is due to the growth of the Sobolev norms of the smooth functions approximating the characteristic function of the interval $[1,X]$, which is measured by the parameter $Q$ of the smoothed sums. This dependency shows up in the treatment of the sums of Type $I$ and $I_2$ below, through Theorem \ref{typeIsumthm} and the shape of the packet of Eisenstein series involved in the analysis. 
Any improvement in the exponent of the parameter $Q_W$ in these statements will yield an improvement for the unsmoothed sums, but it is also quite possible that other more advanced arguments could give such improvements. One should also remark, however, that the smallest range of $X$ for which the sums can be bounded non-trivially ($X\approx p^{3/4}$) is the same for the smoothed and unsmoothed sums (for a fixed smooth function $V$). \end{remark} \begin{remark} Jean Bourgain pointed out that Theorem 2 of~\cite{BSZ} (along with the Note following it) can be used in conjunction with~(\ref{eq-correl}) and Proposition~\ref{correlationprop} of the present paper to prove that if $K$ is an isotypic trace weight, we have, for any $\varepsilon>0$ and for any $X\geq p^{1/2+\varepsilon}$ $$ \sum_{n \leq X}\mu(n)K(n)=o(X), $$ where the implicit constant depends on $\cond(\mathcal{F})$ and $\varepsilon$. However, this approach does not seem to yield a power saving, and does not seem to apply if $\mu$ is replaced by $\Lambda$ or by the characteristic function of the primes. \end{remark} \begin{remark} This theorem expresses an orthogonality property (i.e., the absence of correlation) between the M\"obius function and any isotypic trace weight modulo $p$ with bounded conductor. This fits with the philosophy of the ``M\"obius randomness principle'', formulated vaguely in~\cite[p. 338]{KI}, and with Sarnak's recent precise formulation in terms of orthogonality of the M\"obius function against functions with low complexity, in the sense of entropy (see~\cite{sarnak}). Our result is in a slightly different context than Sarnak's conjecture, however, since the trace weights $K(n)$ are defined modulo $p$, and an asymptotic statement follows only by taking, for each $p$, a different weight with some bound on the complexity, as measured by the conductor in our case. 
Thus, our results are closer in spirit to those of Green~\cite{green} and Bourgain~\cite{bourgain}, which prove asymptotic orthogonality of the M\"obius function against, respectively, bounded depth boolean functions, and monotone boolean functions on binary hypercubes $\{0,1\}^N$ identified with $\{1,\ldots, 2^N\}$. \par In fact, it seems to be a very intriguing question (suggested by Peter Sarnak) to understand which functions can arise as small linear combinations of trace functions of low conductor (which, by linearity, still satisfy Theorems~\ref{primesumthm} and~\ref{moebiussumthm}). Another natural question is whether trace functions have low complexity in a more algorithmic sense, and this does not seem to be easy to answer. Only functions such as $$ K(n)=e\Bigl(\frac{P(n)}p\Bigr) \text{ or } K(n)=\Bigl(\frac{P(n)}p\Bigr) $$ (for $P(X)\in\mathbf{Z}[X]$ fixed and $(\frac\cdot p)$ the Legendre symbol) seem to be obviously of low complexity for most meanings of the term, and such functions as $$ K(n)=\hypk_2(n;p) $$ are far from being understood in this respect. One can certainly expect the class of trace weights (and linear combinations with small coefficients) to be very rich and fascinating (in this respect, there are already hints in Deligne's conjecture about the number of trace functions with various conditions, see \cite{deligne-drinfeld,EK,FKM1.5} for this topic.) \end{remark} \subsection{First applications} We present here some corollaries of Theorem~\ref{primesumthm} which are obtained by applying it to specific weights $K$. This is only a selection and we expect many more applications. We leave it to the reader to write down the corresponding statements involving the M\"obius function, which follow from Theorem~\ref{moebiussumthm}. \par In the first result, we obtain a power saving in the sum of the error term in the prime number theorem in arithmetic progressions over residue classes modulo a prime which are values of some fixed polynomial. 
Precisely, we define $E(X;p,a)$ by $$ \pi(X;p,a) =\frac{\delta_p(a)}{\varphi(p)}\pi(X)+E(X;p,a), $$ where $\delta_p(a)=0$ if $(a,p)\not=1$ and is $1$ otherwise. \par We will prove in \S~\ref{sec:primepolynomials}: \begin{corollary}\label{cor-poly-error-terms} Let $P\in \mathbf{Z}[X]$ be a polynomial whose reduction modulo $p$ is squarefree and non-constant. \par \emph{(1)} We have $$ \sum_{n\in{\mathbf{F}_p}}{E(X;p,P(n))}\ll X(1+p/X)^{1/12}p^{-\eta} $$ for any $\eta<1/48$, where the implicit constant depends only on $\eta$ and $\deg P$. \par \emph{(2)} We have $$ \sum_{a\in P({\mathbf{F}_p})}{E(X;p,a)}\ll X(1+p/X)^{1/12}p^{-\eta} $$ for any $\eta<1/48$, where the implicit constant depends only on $\eta$ and $\deg P$. \end{corollary} Note that this corollary is trivial if $P$ is linear. The restriction to a squarefree polynomial could be relaxed, but some condition is needed in the current state of knowledge since for $P=X^2$ we have the interpretation $$ \sum_{n\in{\mathbf{F}_p}}{E(X;p,n^2)}= \sum_{q\leq X}\Bigl(\frac{q}{p}\Bigr)+O(1) $$ in terms of the average of the Legendre symbol over primes, from which we cannot get power saving without using a quasi-Riemann Hypothesis for the corresponding $L$-function. \par On the other hand, if we take $P=X^2-1$, the study of either of these sums becomes equivalent to that of the sum $$ \sum_{q<X}{\chi(q+1)} $$ where $q$, again, runs over primes. This was estimated, as we recalled, by Karatsuba~\cite{Kar}. \par We can now considerably generalize this result of Karatsuba: \begin{corollary}[Character sums over polynomially-shifted primes] \label{cor-mult-car} Let $f=P/Q$ be a rational function represented as a ratio of integral polynomials. Let $\chi$ be a non-trivial Dirichlet character of prime modulus $p$ and order $h\geq 2$. Assume that $f$ modulo $p$ is not of the form $$ cX^kg(X)^h $$ for some $c\in{\mathbf{F}_p}^{\times}$, some $k\in\mathbf{Z}$ and some $g(X)\in{\mathbf{F}_p}(X)$. 
We then have \begin{align*} \sum_{q\ \text{prime}}{\chi(f(q))V(q/X)}&\ll X(1+p/X)^{1/6}p^{-\eta}\\ \sum_\stacksum{q\ \text{prime}}{q\leq X}{\chi(f(q))}&\ll X(1+p/X)^{1/12}p^{-\eta/2} \end{align*} for any $\eta<1/24$, where the implicit constant depends only on $\eta$, $V$ and the degrees of $P$ and $Q$. \end{corollary} \begin{proof} We will show that Theorem~\ref{primesumthm} is applicable. We first recall (see \cite[Chap. 4]{GKM}) that an Artin-Schreier sheaf $\mathcal{L}_\psi$ (for $\psi$ non-trivial) is wildly ramified at $\infty$ and unramified on $\mathbf{A}^1_{{\mathbf{F}_p}}$, while a Kummer sheaf $\mathcal{L}_{\chi}$ (for $\chi$ non-trivial) is tamely ramified at $0$ and $\infty$ and unramified on ${\mathbf{G}_{m}}_{,{\mathbf{F}_p}}$. Both types of sheaves have rank $1$. \par The weight $K(n)=\chi(f(n))$ is the trace function of the tame Kummer sheaf $\sheaf{L}_{\chi(f)}$, which has (at most) $\deg P+\deg Q$ singularities, hence conductor bounded in terms of $\deg P$ and $\deg Q$. Theorem~\ref{primesumthm} therefore applies when $\sheaf{L}_{\chi(f)}$ is not exceptional. We now determine when this is so. \par The sheaf $\sheaf{L}_{\chi(f)}$ is tamely ramified at $x\in\mathbf{P}^1(\ov{\mathbf{F}_p})$ if and only if $x$ is a zero or a pole of $f$ of order not divisible by $h$. This implies that if $\sheaf{L}_{\chi(f)}$ is geometrically isomorphic to some $\mathcal{L}_\psi\otimes\mathcal{L}_{\chi'}$ then $\psi$ is trivial (otherwise $\sheaf{L}_{\chi(f)}$ would be wildly ramified at $\infty$) and the zeros or poles of $f$ distinct from $0$ and $\infty$ have order divisible by $h$; this means precisely that $f$ is of the form $cX^kg(X)^h$. \end{proof} One can unify this corollary with Theorem~\ref{primesumthmFM} for $$ K(n)=\chi(f(n))e\Bigl(\frac{g(n)}{p}\Bigr), $$ getting cancellation for sums over primes provided $f$ (resp. $g$) satisfies the assumptions of Corollary~\ref{cor-mult-car} (resp. Theorem~\ref{primesumthmFM}). 
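The cancellation asserted by Corollary~\ref{cor-mult-car} is easy to observe numerically in Karatsuba's original case. The following sketch is our own illustration (not from the paper): it takes $\chi$ to be the Legendre symbol modulo $p$ and $f(X)=X+1$, and compares $\sum_{q<p}\chi(q+1)$ with the trivial bound $\pi(p)$.

```python
# Illustrative numerics (not from the paper) for Karatsuba's case of the
# corollary: chi = Legendre symbol mod p, f(X) = X + 1, so we sum
# chi(q+1) over primes q < p and compare with the trivial bound pi(X).
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def primes_below(X):
    """Simple sieve of Eratosthenes returning all primes < X."""
    sieve = bytearray([1]) * X
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(X ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = b"\x00" * len(sieve[i * i::i])
    return [i for i in range(X) if sieve[i]]

p = 10007                       # a prime modulus (our choice)
qs = primes_below(p)            # primes q < p
S = sum(legendre(q + 1, p) for q in qs)
print(len(qs), S)               # |S| is much smaller than pi(X) = len(qs)
```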
\subsection{Kloosterman sums at prime arguments} Our last application involves the weights $K(n)$ which are related to Kloosterman or hyper-Kloosterman sums $\hypk_m$. We first spell out two very specific estimates for the standard Kloosterman sum in one variable: \begin{corollary} For every $0 < \eta < 1/48$ there exists $C(\eta)$ such that for every $p$, every $X\geq 2$ and every integer $n$ coprime with $p$, one has the inequalities $$ \Bigl\vert \, \sum_{q<X,\, q\text{ prime} }\hypk_2(nq;p)\log q \,\Bigr\vert \leq C(\eta) X(1+p/X)^{1/12}p^{-\eta} $$ and $$ \Bigl\vert \, \sum_{q<X, \, q\text{ prime }}\hypk_2(n^2q^2;p)e\Bigl(\frac{2nq}p\Bigr)\log q \,\Bigr\vert \leq C(\eta) X(1+p/X)^{1/12}p^{-\eta}. $$ \end{corollary} These two bounds improve~\cite[Lemmas 6.1, 6.2, 6.3]{ILS} when $c=p$ is a prime and when $X$ is near and possibly a bit smaller than $p$ (these results were proved in~\cite{ILS} assuming the Generalized Riemann Hypothesis for Dirichlet characters). Using the methods of \cite{ILS}, one can combine the second bound with the Petersson formula to increase the size of the support of the Fourier transform $\widehat \Phi$ of the test functions $\Phi$ in the problem of computing the distribution of low-lying zeros (average $1$-level density) of the symmetric square $L$-functions $L(\mathrm{sym}^2f,s)$ for $f$ in the family of holomorphic newforms of prime level $p\rightarrow +\infty$ and weight $k$: with notation as in~\cite{ILS} (except that they denote the level by $N$), \emph{there exists $\delta>0$ such that for any $\Phi\in\mathcal{S}(\mathbf{R})$ with the support of $\widehat\Phi$ in $]-1/2-\delta,1/2+\delta[$, one has} $$ \lim_{p\rightarrow\infty}\frac{1}{|H^{\star}_k(p)|}\sum_{f\in H^{\star}_k(p)}D(\mathrm{sym}^2f,\Phi)=\int_\mathbf{R} \Phi(x)W(\mathrm{Sp})(x)dx. $$ \par The possibility of such an improvement was known to the authors of \cite{ILS} (see \cite[Remark C, p. 61]{ILS}), though their method was different. 
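The first bound above can be illustrated numerically (our own sketch, with $n=1$ and a small prime $p$): the weighted sum of $\hypk_2(q;p)\log q$ over primes $q<p$ is far smaller than the trivial bound $2\sum_{q<p}\log q$ coming from Weil's estimate $|\hypk_2|\leq 2$.

```python
# Illustrative numerics for the first bound of the corollary (n = 1):
# the sum of Kl_2(q;p) log q over primes q < p shows cancellation
# against the trivial bound 2 * sum log q coming from Weil's |Kl_2| <= 2.
import cmath
import math

def kl2(a, p):
    # normalized Kloosterman sum p^{-1/2} sum_x e((x + a*xbar)/p); real-valued
    s = sum(cmath.exp(2j * math.pi * ((x + a * pow(x, -1, p)) % p) / p)
            for x in range(1, p))
    return s.real / math.sqrt(p)

p = 1009
primes = [q for q in range(2, p)
          if all(q % d for d in range(2, int(q ** 0.5) + 1))]
S = sum(kl2(q, p) * math.log(q) for q in primes)
trivial = 2 * sum(math.log(q) for q in primes)
print(S, trivial)   # |S| is far below the trivial bound
```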
The consideration of powers of hyper-Kloosterman sums allows us to strengthen the results~\cite{MichelInv,MichelDMJ} concerning the existence of hyper-Kloosterman sums with large absolute value modulo a product of two primes: \begin{corollary}\label{largesums} For any $m\geq 2$, there exists a constant $\alpha_m>0$ such that $$ \sum_{c\leq X}\Lambda_2(c)|\hypk_m(1;c)|\geq (\alpha_m+o_m(1))X\log X, $$ where $\Lambda_2(c)=(\mu\star\log^2)(c)$ denotes the von Mangoldt function of order $2$, which is supported on integers with at most two prime factors. \end{corollary} This corollary shows that the normalized hyper-Kloosterman sums $\hypk_m(1;c)$ whose modulus is a product of at most two primes have size $\gg_m 1$ for a positive proportion of such moduli (when these are weighted by $\Lambda_2$). In~\cite{Michelthese,MichelInv,MichelDMJ}, the lower bound was of order $X$, and by adding the missing logarithmic factor, we obtain the right order of magnitude. This answers a question of Bombieri to the third author from 1996. For $\Lambda_2$ replaced by $\Lambda_3$, a corresponding (easier) statement was proven in \cite{FoMiPac}. \par Another potential application of this corollary (or rather of the techniques used to prove it) is to reduce the value of the constant $\omega$ in the following statement, which was first established by Fouvry and Michel for $\omega=23$ in \cite{FMAnnals}, subsequently improved to $\omega=18$ by Sivak-Fischler~\cite{Siv1,Siv} and to $\omega=15$ by Matom\"aki~\cite{Mat}: \begin{theorem*} The sequence $(\hypk_2(1;c))_{c\geq 1}$ changes sign infinitely often as $c$ varies over squarefree moduli with at most $\omega$ prime factors. \end{theorem*} \subsection{Principle of the proof of Theorem~\ref{primesumthm}: the combinatorics of sums over primes} We start from a general perspective before explaining what features are specific to our case and what our new ingredients are.
Suppose we are given some oscillatory arithmetic function $K$, bounded by $1$ in modulus, some smooth function $V$, compactly supported in $]0,+\infty[$ and some $X\geq 2$; we wish to obtain non-trivial bounds for the sum $$ \sum_{n}\Lambda(n)K(n)V\Bigl(\frac{n}X\Bigr), $$ where $\Lambda$ denotes the von Mangoldt function. \par Using Heath-Brown's identity (see, e.g.,~\cite[Prop. 13.3]{KI}) and a smooth partition of unity, this sum decomposes essentially into a linear combination of sums of the shape \begin{multline}\label{eq-hb-sum} \sumsum_{m_1,\cdots,m_k}\alpha_1(m_1)\cdots\alpha_k(m_k)\sumsum_{n_1,\cdots,n_k} V_1(n_1)\cdots V_k(n_k)\\ V\Bigl(\frac{m_1\cdots m_k n_1\cdots n_k}{X}\Bigr)K(m_1\cdots m_k n_1\cdots n_k) \end{multline} for some integral parameter $k\geq 2$, where the $\alpha_i(m)$ are essentially bounded arithmetic functions supported in dyadic intervals (say $[M_i/2,M_i]$) of short range (i.e. $M_i\leq X^{1/k}$), whereas the $V_i(n)$ are smooth functions supported in dyadic intervals with arbitrary range (say, $[N_i/2,N_i]$ with $N_i\in[1/2,2X]$), and where $$ \prod_i M_iN_i\asymp X. $$ \par We refer to the $n_i$ as the ``smooth'' variables and the $m_i$ as the ``non-smooth'' variables, as one is usually unable to exploit the specific shape of the functions $\alpha_i$, except for the fact that they are supported in short ranges. 
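Heath-Brown's identity invoked above can be checked numerically in small cases. The following brute-force sketch (illustrative only; the truncation parameter $Z=10$ is arbitrary) verifies the case $k=2$, namely $$\Lambda(n)=2\sum_{\substack{m\leq Z\\ m\mid n}}\mu(m)\log\frac nm-\sum_{\substack{m_1,m_2\leq Z\\ m_1m_2\mid n}}\mu(m_1)\mu(m_2)\sum_{d\mid n/(m_1m_2)}\log d,$$ valid for $n\leq Z^2$:

```python
from math import log

def mobius(n):
    # Moebius function by trial division
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    return -res if n > 1 else res

def von_mangoldt(n):
    # Lambda(n) = log p if n = p^k, else 0
    if n < 2:
        return 0.0
    d = 2
    while d * d <= n:
        if n % d == 0:
            while n % d == 0:
                n //= d
            return log(d) if n == 1 else 0.0
        d += 1
    return log(n)   # n is prime

def heath_brown_k2(n, Z):
    # right-hand side of Heath-Brown's identity for k = 2 (valid for n <= Z^2)
    t1 = 2 * sum(mobius(m) * log(n // m)
                 for m in range(1, Z + 1) if n % m == 0)
    t2 = 0.0
    for m1 in range(1, Z + 1):
        for m2 in range(1, Z + 1):
            if n % (m1 * m2):
                continue
            rest = n // (m1 * m2)
            t2 += mobius(m1) * mobius(m2) * sum(
                log(d) for d in range(1, rest + 1) if rest % d == 0)
    return t1 - t2

Z = 10
for n in range(2, Z * Z + 1):
    assert abs(heath_brown_k2(n, Z) - von_mangoldt(n)) < 1e-9
```

Here the inner sum $\sum_{d\mid \mathrm{rest}}\log d$ plays the role of the smooth variables $n_1$, $n_2$ in the decomposition above.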
Depending on which estimates and methods are available to bound these sums, according to the location of the point $(M_1,\cdots,M_k,N_1,\cdots,N_k)$ in the $2k$-dimensional cube $[1/2,2X]^{2k}$, it is useful to classify them into different (not necessarily disjoint) categories, based on the number of ``long'' smooth variables which are available: \begin{itemize} \item[-] If there is one very long smooth variable, say $n_1$, one usually speaks of {\em sums of type} $I$, with the remaining (smooth and non-smooth) variables combined into a single non-smooth variable, $m$, which means that the original sum~(\ref{eq-hb-sum}) may be written $$ \sum_{m\asymp M}\beta_m\sum_{n_1\asymp N_1} V_1(n_1)V\Bigl(\frac{mn_1}X\Bigr)K(mn_1). $$ \item[-] If there are two relatively long smooth variables, say $n_1, n_2$, one speaks of sums of type $I_2$; after combining the remaining (smooth and non-smooth) variables into a single non-smooth variable, the sum can now be rewritten $$ \sum_{m\asymp M}\alpha_m\sumsum_\stacksum{n_1\asymp N_1}{n_2\asymp N_2} V_1(n_1)V_2(n_2)V\Bigl(\frac{mn_1n_2}X\Bigr)K(mn_1n_2). $$ \item[-] And if there are three relatively long smooth variables, say $n_1, n_2, n_3$, we will speak of sums of type $I_3$, and so on. \end{itemize} \par This classification appears more or less explicitly in the work~\cite{FouvryAM} of Fouvry, in the context of the average distribution of primes in arithmetic progressions to large moduli. The implementation of this strategy depends on the possibility of dealing with the sums of type $I_r$ for $r$ as large as possible, a question which becomes increasingly difficult as $r$ increases, since the range of the smooth variables decreases.\footnote{\ For instance, in \cite{FouvryAM}, it is shown that one could prove results on the distribution of primes in long arithmetic progressions on average, beyond the Bombieri-Vinogradov Theorem, if one could treat the corresponding sums of type $I_r$ for $r=1$, \ldots, $6$.
Currently, the sums of type $I_1$, $I_2$ and $I_3$ can be handled \cite{FrIw}.} All remaining sums then belong to the class of {\em sums of type} $II$. The most direct treatment of these sums --there may be other treatments available, depending on the original problem-- consists in combining these (short) variables in subsets to form variables with larger ranges, in order to obtain bilinear forms involving two non-smooth variables of the type $$ \sum_{m\asymp M}\sum_{n\asymp N}\alpha_m\beta_n K(mn),\quad\text{ where } MN\asymp X. $$ \par One can then ``smoothen'' one of the variables, say $n$, by an application of the Cauchy-Schwarz inequality, leading to a quadratic form whose coefficients are {\em multiplicative correlation sums} of the function $K$, namely $$ \mathop{\sum \sum}\limits_{m_1,m_2}\ov{\alpha_{m_1}}\alpha_{m_2} \sum_{n}\ov{K(m_1n)}{K(m_2n)}. $$ \par Notice here that the fact that the original variables are rather short actually helps, since it offers some flexibility in the ways they may be combined to tailor the relative ranges of $M$ and $N$. This is the strategy we will follow in this paper. \subsection{Sums of type $I_2$} We can now come to our specific situation and explain our new results for sums over primes of trace weights. \par We will give estimates for sums of type $I$, $I_2$ and $II$. In fact, the starting point of this work is a very general estimate for sums of type $I_2$ (two long smooth variables of approximately equal size) when $K$ is a trace weight, which follows relatively easily from the results of our earlier paper~\cite{FKM}. Indeed, using Mellin inversion, the estimation of sums of type $I_2$ can be reduced to that of sums of the shape \begin{equation}\label{eq-eisenstein-twist} \mathcal{S}_{V,X}(it,K)=\sum_{n}K(n)d_{it}(n)V\Bigl(\frac{n}X\Bigr) \end{equation} where $t\in\mathbf{R}$, and (for any $u\in\mathbf{C}$) we denote by $$ d_{u}(n)=d_{-u}(n)=\sum_{ab=n}\Bigl(\frac{a}{b}\Bigr)^{u} $$ the twisted divisor function.
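The twisted divisor function can be computed directly from this definition; the short sketch below (illustrative only) checks the symmetry $d_u=d_{-u}$, the fact that $d_{it}(n)$ is real for $t\in\mathbf{R}$, that $d_0$ is the usual divisor function, and that $d_{it}(q)=q^{it}+q^{-it}=2\cos(t\log q)$ at primes $q$:

```python
import math

def d_twisted(u, n):
    # twisted divisor function d_u(n) = sum_{ab = n} (a/b)^u
    return sum(complex(a / (n // a)) ** u
               for a in range(1, n + 1) if n % a == 0)

t = 1.3
for n in (12, 60, 97):
    z = d_twisted(1j * t, n)
    assert abs(z.imag) < 1e-9                      # d_{it}(n) is real
    assert abs(z - d_twisted(-1j * t, n)) < 1e-9   # symmetry d_u = d_{-u}
assert abs(d_twisted(0, 60) - 12) < 1e-9           # d_0 = usual divisor function
q = 97
assert abs(d_twisted(1j * t, q) - 2 * math.cos(t * math.log(q))) < 1e-9
```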
\par We observe that the arithmetic function $n\rightarrow d_{it}(n)$ is (up to suitable normalization) the Fourier coefficient of the non-holomorphic unitary Eisenstein series $$ E(z,s)=\frac12\sum_{(c,d)=1}\frac{y^s}{|cz+d|^{2s}}, $$ for $s=\frac{1}{2}+it$. The main result of our previous paper (\cite[Thm 1.2]{FKM}) is a universal non-trivial bound for the analogue of $\mathcal{S}_{V,X}(it,K)$ where $d_{it}(n)$ is replaced with the Fourier coefficients of a classical cusp form (either holomorphic or not). We will extend the proof to Eisenstein series, obtaining the following result: \begin{theorem}[Algebraic twists of Eisenstein series] \label{eisensteinsumthm} Let $K$ be an isotypic trace weight associated to the $\ell$-adic sheaf $\mathcal{F}$ modulo $p$. Let $V$ be a smooth function satisfying~\emph{(\ref{Vcond})} with parameter $Q\geq 1$. If $\mathcal{F}$ is not geometrically trivial, then for any $X\geq 1$, we have $$ \mathcal{S}_{V,X}(it,K)= \sum_{n}K(n)d_{it}(n)V\Bigl(\frac{n}X\Bigr)\ll (1+|t|)^AQ X\Bigl(1+\frac{p}{X}\Bigr)^{1/2}p^{-\eta} $$ for any $\eta< 1/8$ and some $A\geq 1$ possibly depending on $\eta$. The implicit constant depends only on $\eta$, on the implicit constants in~\emph{(\ref{Vcond})}, and polynomially on the conductor of $\sheaf{F}$. \end{theorem} In fact, the proof of this theorem will be intertwined with the proof of the following estimate on sums of type $I_2$: \begin{theorem}[Type $I_2$ sums of trace weights] \label{typeIsumthm} Let $K$ be an isotypic trace weight associated to the $\ell$-adic sheaf $\mathcal{F}$ modulo $p$. Let $M,N, X\geq 1$ be parameters with $X/4\leq MN\leq X$. Let $U$, $V$, $W$ be smooth functions satisfying condition~\eqref{Vcond} with respective parameters $Q_U,Q_V$ and $Q_W$, all $\geq 1$. 
We then have $$ \sum_{m,n}K(mn)\Bigl( \frac{m}n\Bigr)^{it} U\Bigl(\frac{m}M\Bigr)V\Bigl(\frac{n}N\Bigr) W\Bigl(\frac{mn}{X}\Bigr) \ll (1+|t|)^A(Q_U+Q_V)^{B} Q_W X\Bigl(1+\frac{p}{X}\Bigr)^{1/2}p^{-\eta} $$ for $t\in\mathbf{R}$ and for any $\eta< 1/8$ and some constants $A,B\geq 1$ depending on $\eta$ only. The implicit constant depends only on $\eta$, on the implicit constants in~\emph{(\ref{Vcond})}, and polynomially on the conductor of $\sheaf{F}$. \end{theorem} \begin{rem} (1) Through the techniques of~\cite{FKM}, this result depends on deep results of algebraic geometry, including Deligne's general form of the Riemann Hypothesis over finite fields, and the theory of the $\ell$-adic Fourier transform of Deligne, Laumon and Katz. \par (2) The Polya-Vinogradov method would yield a non-trivial bound for the sum above as long as $\mathop{\mathrm{Max}}\limits(M,N)\gg p^{1/2}\log p$. Here we obtain non-trivial estimates for $MN\gg p^{3/4+\varepsilon}$, in particular when $M,N\gg p^{3/8+\varepsilon}$. \par (3) From our point of view, the main innovation in this result, which promises to have other applications, is that we handle the divisor function in a fully automorphic manner, instead of attempting to use its combinatorial structure as a Dirichlet convolution. \end{rem} \subsection{Sums of type $I$ and $II$} Our second main result is a general estimate for sums of type $II$, which gives non-trivial bounds as long as one of the variables has range slightly greater than $p^{1/2}\log p$ and the other has non-trivial range. Precisely: \begin{theorem}\label{typeIIsumthm} Let $K$ be a \emph{non-exceptional} trace weight modulo $p$ associated to an isotypic $\ell$-adic sheaf $\mathcal{F}$. Let $M,N\geq 1$ be parameters, and let $(\alpha_m)_{m}$, $(\beta_n)_n$ be sequences supported on $[M/2,2M]$ and $[N/2,2N]$ respectively.
\par \emph{(1)} We have \begin{equation} \label{typeIIeq} \sumsum_\stacksum{m,n}{(m,p)=1} \alpha_m\beta_n K(mn) \ll\|\alpha\|\|\beta\|(MN)^{1/2} \Bigl(\frac{1}{p^{1/4}}+\frac{1}{M^{1/2}}+ \frac{p^{1/4}\log^{1/2}p}{N^{1/2}}\Bigr), \end{equation} where $$ \|\alpha\|^2=\sum_m|\alpha_m|^2,\ \|\beta\|^2=\sum_n|\beta_n|^2. $$ \par \emph{(2)} We have \begin{equation} \label{typeIeq} \sum_{(m,p)=1}\alpha_m\sum_{n\leq N} K(mn)\ll \Bigl(\sum_{m}|\alpha_m|\Bigr)N \Bigl(\frac{1}{p^{1/2}}+\frac{p^{1/2}\log p}{N}\Bigr). \end{equation} \par In both estimates, the implicit constants depend only, and at most polynomially, on the conductor of $\mathcal{F}$. \end{theorem} This theorem constitutes a significant generalization of results like \cite[Cor.~2.11]{MichelInv} or \cite[Prop.~1.3]{FMAnn}, which were obtained for very specific weights (symmetric powers of Kloosterman sums and additive characters of rational functions, respectively). The main difference is that we do not require any knowledge of the geometric monodromy group of $\mathcal{F}$. Instead, it turns out that we can build on the same ideas used in~\cite{FKM} to handle algebraic twists of cusp forms. A crucial role is played again by the $\ell$-adic Fourier transform, and by a geometric invariant of $\mathcal{F}$ which we introduced in~\cite{FKM}, namely its \emph{Fourier-M\"obius group}, which controls the correlation of the trace function of the Fourier transform of $\mathcal{F}$ with its pullbacks under automorphisms of the projective line. In fact, it is the intersection of this group with the standard Borel subgroup of upper-triangular matrices of $\PGL_2({\mathbf{F}_p})$ which we must understand, the essential point being that this intersection is of size bounded in terms of the conductor of $\mathcal{F}$ \emph{unless} $\mathcal{F}$ is exceptional. This is the origin of this restriction in Theorem~\ref{typeIIsumthm}. 
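The diagonal versus off-diagonal dichotomy underlying the proof of Theorem~\ref{typeIIsumthm} can be illustrated by a toy example. In the sketch below (illustrative only: the weight $K(n)=e(n^2/p)$ is a very special trace weight, for which the complete multiplicative correlation sums reduce to quadratic Gauss sums, not one of the general sheaf-theoretic weights of the theorem), the correlation sum has size $p$ on the diagonal $m_1\equiv\pm m_2\pmod p$ and exhibits square-root cancellation, of size exactly $p^{1/2}$, off the diagonal:

```python
import cmath

p = 101   # an arbitrary odd prime

def K(n):
    # toy trace weight: the additive character K(n) = e(n^2 / p)
    return cmath.exp(2j * cmath.pi * (n * n % p) / p)

def correlation(m1, m2):
    # complete multiplicative correlation sum  sum_{n mod p} conj(K(m1 n)) K(m2 n);
    # here it equals a quadratic Gauss sum with parameter m2^2 - m1^2 mod p
    return sum(K(m2 * n % p) * K(m1 * n % p).conjugate() for n in range(p))

# diagonal terms m1 = +-m2 mod p: no cancellation, the sum has size p
assert abs(correlation(3, 3)) > p - 1e-6
assert abs(correlation(3, p - 3)) > p - 1e-6
# off-diagonal terms: square-root cancellation, |sum| = p^{1/2}
assert abs(abs(correlation(3, 5)) - p ** 0.5) < 1e-6
```

The point of the type $II$ estimate is that such square-root cancellation off a small diagonal holds for all non-exceptional trace weights.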
It is rather remarkable that the upper-triangular matrices in the Fourier-M\"obius group were precisely those which do \emph{not} cause any difficulty in~\cite{FKM} (hence in Theorem~\ref{typeIsumthm}). \begin{remark} For the purpose of Theorem~\ref{primesumthm}, it is indeed enough to handle sums of type $I_2$ and to deal with all others as sums of type $II$. Other problems may require direct treatment of sums of type $I_r$ with $r\geq 3$ (see for instance the beautiful recent work of N. Pitt \cite{Pitt}). One might expect that this involves the theory of automorphic forms on $\GL_r$. \end{remark} \begin{remark} In~\cite{FMAnn}, the first and third authors obtained bounds which could be stronger than \eqref{typeIIeq} and \eqref{typeIeq}, in particular in ranges of $M$, $N$ which are shorter than the Polya-Vinogradov range $p^{1/2}$ (see \cite[Prop. 1.2 and Thm~1.4]{FMAnn}). These bounds were established only for very special weights associated to rank one sheaves (additive characters of specific rational functions). It is quite conceivable that these results remain valid for more general trace weights, and we hope to come back to this question in a later work. From our current level of understanding at least, it seems that, instead of the Fourier-M\"obius group (or in addition to it), we would need to involve more precise information on the underlying sheaf, for example concerning fine details of its ramification behavior, and/or its geometric monodromy group. \end{remark} \subsection{Acknowledgments} Part of this work was done during the 60th birthday conference of Roger Heath-Brown at Oxford. We would like to thank the organizers, Tim Browning, David Ellwood and Jonathan Pila, for this rich and pleasant week. Roger Heath-Brown has obtained fundamental results in the theory of the primes; it is no surprise that, once more, the celebrated ``Heath-Brown identity'' makes a crucial appearance in the present work.
\par This paper has benefited from discussions and comments from Jean Bourgain, Satadal Ganguly, Paul Nelson, Richard Pink, Ng\^o Bao Ch\^au, Peter Sarnak, Igor Shparlinski, Akshay Venkatesh, and Zhiwei Yun and it is a pleasure to thank them for their input. Finally, we are thankful to the referees for their careful reading of the manuscript. \section{Algebraic twists of Eisenstein series and sums of type $I_2$} In this section, we will prove Theorem~\ref{eisensteinsumthm} and Theorem~\ref{typeIsumthm} simultaneously. Indeed, the two results are very closely related, as we will first clarify. \par Let $M,N, X\geq 1$ with $X/4\leq MN\leq 4X$. Let $t\in\mathbf{R}$ be given, as well as three smooth functions $U$, $V$, $W$ satisfying \eqref{Vcond} with respective parameters $Q_U$, $Q_V$, $Q_W$, all $\geq 1$. We package these parameters by denoting $$ \uple{P}=(U,V,W,M,N,X), $$ and we denote $$ \mathcal{S}_{\uple{P}}(it,K) =\sum_{m,n}K(mn)\Bigl(\frac{m}n\Bigr)^{it} U\Bigl(\frac{m}M\Bigr)V\Bigl(\frac{n}N\Bigr) W\Bigl(\frac{mn}{X}\Bigr), $$ which is the sum involved in Theorem~\ref{typeIsumthm}. For later use, we state formally here the trivial bound \begin{equation}\label{eq-trivial-bound} \mathcal{S}_{V,X}(it,K)\ll X(\log X) \end{equation} for the sums $\mathcal{S}_{V,X}(it,K)$. \par We start with a lemma relating the sums of type $\mathcal{S}_{V,X}(\cdot,K)$ and $\mathcal{S}_{\uple{P}}(\cdot,K)$. \begin{lemma}\label{lm-relations} We adopt the above notations and for $s\in {\mathbb C}$ and $x >0$, let $$ W_{s}(x):=W(x)x^{-s}. $$ \emph{(1)} For every $\varepsilon >0$, there exists $C=C(\varepsilon)$, such that we have $$ \mathcal{S}_{\uple{P}}(it,K)\ll_\varepsilon (Q_U+Q_V)^C+ \iint_{|t_1|,|t_2|\leq X^{\varepsilon}}{ |\mathcal{S}_{W_{t_1},X}(it_2+it,K)|dt_1dt_2 }. 
$$ \emph{(2)} For every $\varepsilon >0$, one has $$ \mathcal{S}_{V,X}(it,K)\ll_\varepsilon X^{\varepsilon}\mathop{\mathrm{Max}}\limits_{\uple{P}=(U_1,V_1,V,M,N,X)} |\mathcal{S}_{\uple{P}}(it,K)|, $$ where $\uple{P}$ runs over parameters $(U_1,V_1,V,M,N,X)$ as above with $Q_{U_1}=Q_{V_1}=1$. \end{lemma} \begin{proof} (1) Denote by $\hat U$ and $\hat V$ the Mellin transforms of the smooth functions $U$ and $V$. These are entire functions, which satisfy \begin{equation} \label{UVdecay} \hat U(s),\hat V(s)\ll \Bigl(\frac{Q_U+Q_V}{1+|s|}\Bigr)^k, \end{equation} for any $k\geq 0$, where the implicit constants depend on $k$, $\Re s$ and the implicit constants in~(\ref{Vcond}). \par We then have \begin{align*} \mathcal{S}_{\uple{P}}(it,K) &= \frac{1}{(2i\pi)^2} \int_{(0)}\int_{(0)}\hat{U}(u)\hat{V}(v)\mathcal{T}_W(u,v)M^uN^vdudv \end{align*} by Mellin inversion, where $$ \mathcal{T}_W(u,v)= \sum_{m,n\geq 1}{K(mn)m^{it-u}n^{-it-v}W\Bigl(\frac{mn}{X}\Bigr)}. $$ \par This sum can be expressed as a twist of Eisenstein series~(\ref{eq-eisenstein-twist}), namely $$ \mathcal{T}_W(u,v)=X^{-\theta_1}\mathcal{S}_{W_{\theta_1},X}(\theta_2+it,K), $$ where \begin{gather*} \theta_1=\frac{u+v}{2},\quad\quad \theta_2=\frac{-u+v}{2}. \end{gather*} \par Thus, by a change of variable, we get $$ \mathcal{S}_{\uple{P}}(it,K)= \frac{2}{(2i\pi)^2} \int_{(0)}\int_{(0)} \hat{U}(\theta_1-\theta_2) \hat{V}(\theta_1+\theta_2) \Bigl(\frac{N}{M}\Bigr)^{\theta_2} \Bigl(\frac{MN}{X}\Bigr)^{\theta_1} \mathcal{S}_{W_{\theta_1},X}(\theta_2+it,K)d\theta_2d\theta_1. $$ \par The function $W_{\theta_1}$ is smooth and compactly supported on $[1/2,2]$. For $\Re \theta_1=0$, it satisfies~(\ref{Vcond}) with parameter \begin{equation}\label{eq-qw1} Q(\theta_1)\ll Q_W+|\theta_1|, \end{equation} where the implicit constant is absolute.
\par Using~(\ref{UVdecay}) for $k$ large enough, and the trivial bound~(\ref{eq-trivial-bound}), the contribution to this double integral of the region where $|\theta_1|\geq X^{\varepsilon}$ or $|\theta_2|\geq X^{\varepsilon}$ is $$ \ll (Q_U+Q_V)^C $$ for some $C\geq 0$ depending only on $\varepsilon$, which concludes the proof. \par (2) By a dyadic partition of unity (using Lemma \ref{dyadic} below), and taking into account the support condition, we can decompose $\mathcal{S}_{V,X}(it,K)$ into $O(\log X)$ sums of the shape $$ \mathcal{S}_{\uple{P}}(it,K) $$ where $$ \uple{P}=(U_1,V_1,V,M,N,X) $$ with $X/4\leq MN\leq 4X$, and furthermore the functions $U_1$, $V_1$ satisfy condition \eqref{Vcond} with parameters $Q_{U_1}=Q_{V_1}=1$. The result is then immediate. \end{proof} \subsection{A simple bound} We start with the following simple ``convexity'' bound for the Eisenstein twists $\mathcal{S}_{V,X}(it,K)$, which is useful for $X\geq p$, and will indeed imply both Theorem~\ref{typeIsumthm} and Theorem~\ref{eisensteinsumthm} for $X\geq p^{5/4+\varepsilon}$. \begin{lemma}\label{convexlem} With the notation and assumptions of Theorem \ref{eisensteinsumthm}, we have for any $\varepsilon>0$, \begin{equation} \label{convexeq} \mathcal{S}_{V,X}(it,K)\ll \bigl(pQX(1+|t|)\bigr)^\varepsilon (1+|t|)^{1/2}QX\Bigl(\frac{1}{p}+\frac{p}{X}\Bigr)^{1/2}, \end{equation} where the implicit constant depends on $\varepsilon$ and polynomially on $\cond(\mathcal{F})$. \end{lemma} \begin{proof} This is relatively standard, so we will be fairly brief: the idea is to use periodicity of $K$ and to represent it in terms of Dirichlet characters, reducing then to easy estimates for moments of Dirichlet $L$-functions. \par First of all, the contribution to $\mathcal{S}_{V,X}(it,K)$ of the integers $n$ divisible by $p$ is $$ \sum_{n\equiv 0\mods{p}}K(0)d_{it}(n)V\Bigl(\frac{n}X\Bigr) \ll_{\cond(\mathcal{F})} p^{-1}X\log X. 
$$ \par Next, for $(n,p)=1$, we can write $$ K(n)=\frac{1}{(p-1)^{1/2}}\sum_{\chi}\tilde K(\chi)\chi(n) $$ where $\chi$ runs over the Dirichlet characters modulo $p$ and $$ \tilde K(\chi)=\frac{1}{(p-1)^{1/2}}\sum_{m\in{\mathbf{F}^\times_p}}K(m)\ov\chi(m) $$ is the finite-field Mellin transform of $K$. Thus we get $$ \sum_{(n,p)=1}K(n)d_{it}(n)V\Bigl(\frac{n}X\Bigr) =\frac{1}{(p-1)^{1/2}}\sum_{\chi}\tilde K(\chi)\sum_{n}\chi(n)d_{it}(n)V\Bigl(\frac{n}{X}\Bigr). $$ \par The contribution of the trivial character $\chi_0$ to this sum is estimated by $$ \frac{1}{(p-1)^{1/2}}\tilde K(\chi_0) \sum_{n}\chi_0(n)d_{it}(n)V\Bigl(\frac{n}{X}\Bigr)\ll_{\cond(\mathcal{F})} p^{-1/2}X\log X $$ (indeed, since $K$ is a trace weight, we have $$ (p-1)^{1/2}\tilde K(\chi_0)=\sum_{m\in{\mathbf{F}_p}}K(m)-K(0)=p^{1/2}\hat K(0)-K(0)\ll_{\cond(\mathcal{F})} p^{1/2}, $$ where $\hat K$ denotes the unitarily normalized Fourier transform of $K$, so that $-\hat{K}$ is also an isotypic trace weight, associated to a sheaf with conductor bounded in terms of that of $\sheaf{F}$ only, by the properties of the Fourier transform of $\ell$-adic sheaves, as explained in~\cite[\S 1.4, Prop. 8.2]{FKM}.) \par For $\chi$ non-trivial, denoting by $\hat{V}(s)$ the Mellin transform of $V$, we have $$ \sum_{\chi\not=\chi_0}\tilde K(\chi)\sum_{n}\chi(n)d_{it}(n)V\Bigl(\frac{n}{X}\Bigr)= \frac{1}{2i\pi}\mathop{\int}\limits_{\Re s=1/2}\sum_{\chi\not=\chi_0}\tilde K(\chi) L(\chi,s+it)L(\chi,s-it)\hat{V}(s)X^sds, $$ by a standard application of Mellin inversion and a contour shift. \par From~(\ref{Vcond}), we get \begin{equation}\label{eq-117} \hat{V}(s)\ll_j \Bigl(\frac Q{1+|s|}\Bigr)^j \end{equation} for all $j\geq 0$, where the implicit constant depends on $j$. 
Now, for any fixed $\varepsilon>0$, let $S=Q$, and split the $s$-integral into $$ \frac{1}{2i\pi}\mathop{\int}\limits_{\Re s=1/2}\cdots=\frac{1}{2i\pi}\mathop{\int}\limits_\stacksum{\Re s=1/2}{|\Im s|\leq S}\cdots+\frac{1}{2i\pi}\mathop{\int}\limits_\stacksum{\Re s=1/2}{|\Im s|> S}\cdots = I_1+I_2, $$ say. To handle $I_1$, we apply Cauchy's inequality to obtain $$ I_1^2\ll \Bigl\{XS\sum_{\chi}{|\tilde{K}(\chi)|^2}\Bigr\} \times \int_\stacksum{\Re s=1/2}{|\Im(s)|\leq S}{ \sum_{\chi\not=\chi_0} | L(\chi,s-it)L(\chi,s+it) |^2|ds| }. $$ \par By the Parseval identity, the first factor on the right-hand side is $\ll pXS$. For the second factor, we apply the approximate functional equation to bound the product $L(\chi,s-it)L(\chi,s+it)$ by sums of the shape $$ \sum_{n}\frac{\chi(n)d_{it}(n)}{n^s}W\Bigl(\frac{n}{N}\Bigr) $$ for $W$ rapidly decreasing and $N\ll p(1+S+|t|)$ (see, e.g.,~\cite[Th. 5.3]{KI}). By a hybrid-large sieve estimate (see~\cite[Th. 6.4]{montgomery}, compare with~\cite[Th. 7.34]{KI}), we can get the Lindel\"of conjecture on average for the integral and sum, and therefore derive $$ \int_\stacksum{\Re s=1/2}{|\Im(s)|\leq S}{ \sum_{\chi\not=\chi_0} | L(\chi,s-it)L(\chi,s+it) |^2|ds| }\ll_\varepsilon (pS(1+|t|))^{1+\varepsilon}. $$ \par To estimate $I_2$, we split the interval of integration $|\Im(s)|\geq S$ into segments $2^\nu S\leq |\Im(s)| \leq 2^{\nu+1}S$ for $\nu\geq 0$. Applying to each segment the last inequality just above with $S$ replaced by $2^{\nu} S$ together with the rapid decay of $\hat{V}(s)$ (see~(\ref{eq-117})), we obtain the same type of bound for $I_2$ after summation over $\nu$. \end{proof} For $X\geq p^{3/2}$, for instance, the bound in this lemma is stronger than the one claimed in Theorem~\ref{eisensteinsumthm}. Using Lemma~\ref{lm-relations} (1), we then also deduce Theorem~\ref{typeIsumthm} in this range. We will therefore assume for the remainder of this section that $X\leq p^{3/2}$. 
Similarly, comparing the bounds of Theorems~\ref{eisensteinsumthm} and \ref{typeIsumthm} with the trivial bounds $$ \mathcal{S}_{V,X}(it,K)\ll X\log X,\quad \quad \mathcal{S}_{\uple{P}}(it,K)\ll X\log X, $$ we may assume that $X\geq p^{3/4}$. \subsection{Spectral theory and amplification} The most important ingredient in the proof of Theorems~\ref{eisensteinsumthm} and \ref{typeIsumthm} is the following lemma, which is proved with the methods of~\cite{FKM}, based on the amplification method and Kuznetsov's formula. It is an averaged version of a bound for the amplified second moment of the sums $\mathcal{S}_{V,X}(it,K)$. Recall that $K$ is an isotypic trace weight. \par For $\tau\in\mathbf{R}$, $L\geq 1$ and $u\in\mathbf{C}$, let $$ B_{i\tau}(u)=\sum_\stacksum{\ell\leq 2L}{\ell\text{ prime}} \mathrm{sign}(d_{i\tau}(\ell))d_{u}(\ell), $$ which is the amplifier (of length $2L$) adapted to the Eisenstein series $E(z,1/2+i\tau)$. \begin{lemma}\label{lm-average-bound} For any $\varepsilon>0$ there exists $b=b(\varepsilon)\geq 0$ such that \begin{equation} \label{eq-average-bound} \int_\mathbf{R}\min(|t|^{2},|t|^{-2-2b})|B_{i\tau}(it)\mathcal{S}_{V,X}(it,K)|^2dt\ll_\varepsilon p^\varepsilon\bigl(pLQX+p^{1/2}L^3QX(X/p+Q)^2\bigr), \end{equation} provided $$ p^{\varepsilon}LQ<p^{1/4},\quad\quad 1\leq L\leq X. $$ \end{lemma} \begin{proof} As in the cuspidal case in~\cite{FKM}, we use the amplification method and the Kuznetsov formula, exploiting the fact that, for any given $\tau\in\mathbf{R}$, the Eisenstein series $$ \frac{1}{(p+1)^{1/2}}E(z,1/2+i\tau) $$ occurs in the continuous spectrum of Hecke eigenforms of level $p$.
More precisely, we have the Fourier expansion $$ E(z,1/2+it)=y^{1/2+it}+\frac{\theta(1/2-it)}{\theta(1/2+it)}y^{1/2-it}+ \frac{1}{\theta(1/2+it)}\sum_{n\not=0}d_{it}(|n|)|n|^{-1/2}W_{it}(4\pi |n|y)e(n x), $$ where $$ \theta(s)=\pi^{-s}\Gamma(s)\zeta(2s), $$ and \begin{equation} \label{Wit} W_{it}(y)= \frac{e^{-y/2}}{\Gamma(it+\frac12)} \int_0^\infty e^{-x}x^{it-1/2}\Bigl(1+\frac{x}y\Bigr)^{it-1/2}dx \end{equation} denotes the Whittaker function (see for instance \cite[(3.29)]{IwaIntro}). \par We assume that the conditions of the lemma are met. Using the notation of~\cite[Section 4]{FKM} and taking there $P=Xp^{-1}$, we obtain as in~\cite[Prop. 4.1, (4.10)]{FKM} (with the parameter $M$ given by applying~\cite[Th. 1.14]{FKM} to the trace weight $K$, so that $M=aN^s$ for some absolute constants $a>0$ and $s\geq 1$) the bound \begin{multline} \label{meansquarebound} \frac{1}{p+1} \int_\mathbf{R}\tilde\phi_{a,b}(t)\frac{1}{\cosh(\pi t)|\theta(1/2+it)|^2} |B_{i\tau}(it)|^2|\mathcal{S}_{V,X}(it,K)|^2dt \\ \leq M(L)-2\sum_{k>a-b }\dot\phi(k)(k-1)M(L;k)\ll p^\varepsilon\Bigl( LQX+\frac{L^3QX}{p^{1/2}}\Bigl(\frac Xp+Q\Bigr)^2\Bigr) \end{multline} for any $\varepsilon>0$, where $2\leq b<a$ are odd integers depending on $\varepsilon$ and $\tilde\phi(t)=\tilde{\phi}_{a, b}(t)$ denotes a positive function such that $$ \tilde\phi(t)\asymp_{a, b}\, (1+|t|)^{ - 2b-2}. $$ \par We then obtain the desired average estimate from this, using Stirling's formula and the bound $$ \zeta(1+2it)\ll \frac{1}{|t|}+\log(1+|t|). $$ \end{proof} In order to apply this, we need some lower bound for the amplifier $B_{i\tau}(it)$. The following lemma gets a suitable bound for $t$ close enough to $\tau$. \begin{lemma}\label{Flemma} For $L$ large enough, we have $$ B_{i\tau}(i t)\gg\frac{L}{\log^6 L}, $$ uniformly for $t$ and $\tau\in\mathbf{R}$ satisfying $$ |t-\tau|\ll \frac{1}{\log^7 L},\quad\text{ and } \quad |\tau|\leq L^\frac{1}{3}.
$$ \end{lemma} \begin{proof} We observe first that for any prime $\ell\leq 2L$ and $|t-\tau|\ll \log^{-7} L$, we have \begin{align*} |B_{i\tau}(it)-B_{i\tau}(i\tau)|&\leq \sum_\stacksum{\ell\leq 2L}{\ell\ \text{prime}}|d_{i\tau}(\ell)-d_{it}(\ell)|\\ &=2\sum_\stacksum{\ell\leq 2L}{\ell\ \text{prime}} |\cos(\tau\log\ell)-\cos(t\log \ell)|\\ &\leq 2|t-\tau|\sum_\stacksum{\ell\leq 2L}{\ell\ \text{prime}}\log \ell \ll \frac{L}{\log^7 L}, \end{align*} and hence it suffices to prove the lower bound for $t=\tau$. \par Furthermore, we may clearly assume that $\tau>0$ (by parity) and that $L\geq 3$. We then have $$ B_{i\tau}(i\tau)= \sum_{\stacksum{\ell\leq 2L}{\ell\text{ prime}}} {\mathrm{sign}(d_{i\tau}(\ell))d_{i\tau}(\ell)}= \sum_{\stacksum{\ell\leq 2L}{\ell\text{ prime}}} |d_{i\tau}(\ell)| =2\sum_{\stacksum{\ell\leq 2L}{\ell\text{ prime}}} |\cos(\tau\log\ell)|, $$ and since $|\cos(\tau\log \ell)|\leq 1$ it is enough to prove that $$ \sum_{\ell\sim L}\cos^2(\tau\log \ell) \gg \frac{L}{\log^6 L} $$ (where $\ell$ ranges over primes $L<\ell\leq 2L$) under the assumption of the lemma. We do this by finding suitable sub-intervals where $\tau\log \ell$ is sufficiently far away from $\pi/2$ modulo $\pi\mathbf{Z}$. \par Consider the function $$ g(x)=\tau\log x $$ for $x\in[L,2L]$. It is non-decreasing and satisfies $$ g(2L)-g(L)=\tau\log 2,\ g'(x)\in\left[\frac{\tau}{2L},\frac{\tau}{L}\right] \text{ for } x\in [L,2L]. $$ \par In particular, if $\tau\log 2\geq 2\pi$, the preimage $g^{-1}([-\pi/4,\pi/4]+\pi\mathbf{Z})$ intersects $[L,2L]$ in $\gg \tau$ intervals of length $\geq \frac{\pi L}{2\tau}$. From Huxley's Theorem on primes in short intervals (see, e.g.,~\cite[Th. 10.4, Th. 10.5]{KI} and note that any of the variants of~\cite[Th. 10.5]{KI}, going back to Hoheisel, would be enough) the number of primes in any such interval is $\gg \frac{L}{\tau\log L}$, provided $\tau\leq L^{5/12-\varepsilon}$ for some fixed $\varepsilon>0$ and $L$ is large enough. 
Therefore (taking $\varepsilon=1/12$ and summing over these intervals) we obtain $$ |B_{i\tau}(i\tau)|\gg \frac{L}{\log L} $$ provided $2\pi/\log 2\leq \tau\leq L^{1/3}$. \par At the other extreme, if $0\leq \tau \leq \frac{1}{100\log L}$, we have $$ \cos^2(\tau\log \ell)\geq \cos^2\Bigl(\frac{1}{50}\Bigr) $$ for every $L\leq \ell\leq 2L$, and hence $B_{i\tau}(i\tau)\gg L/\log L$ also in that case. \par Suppose now that $$ \frac{1}{100\log L}\leq\tau\leq \frac{2\pi}{\log 2}. $$ \par In that case, $g([L,2L])$ is an interval of length at least $1/(200\log L)$. It is then easy to see (the worst case is when the interval is symmetric around $\pi/2+k\pi$ for some integer $k$) that there exists $x_0\in [L,2L]$ such that $$ \cos^2(\tau\log(x_0))\geq \frac{1}{2\cdot 400^2\log^2 L}. $$ \par Using the prime number theorem with a sufficiently precise error term, we know that the interval $[L,2L]\cap[x_0-L(\log L)^{-3},x_0+L(\log L)^{-3}]$ contains $\gg L/(\log L)^4$ primes, and since $$ |\cos^2(\tau\log (\ell))-\cos^2(\tau\log (x_0))|\ll |\log(\ell/x_0)|\ll (\log L)^{-3} $$ for these primes, we have $$ \cos^2(\tau\log (\ell))\gg \log^{-2} L, $$ and therefore by summing over these $\ell$ we get $$ B_{i\tau}(i\tau)\gg \frac{L}{\log^{6} L} $$ in that last case, which concludes the proof. \end{proof} This lemma and the average bound~\eqref{eq-average-bound} allow us to deduce a first good upper-bound for the twists of Eisenstein series, averaged in rather short intervals.
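The lower bound of Lemma~\ref{Flemma} is easy to probe numerically. The following sanity check (not a proof; the sample values of $\tau$ and the threshold $0.3$ are arbitrary) evaluates $B_{i\tau}(i\tau)=2\sum_{\ell\leq 2L}|\cos(\tau\log\ell)|$ over primes $\ell$ produced by a sieve, and observes that in practice it is of size comparable to $\pi(2L)$, far above the guaranteed $L/\log^6L$:

```python
from math import cos, log, pi

def primes_upto(n):
    # sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for q in range(2, int(n ** 0.5) + 1):
        if sieve[q]:
            sieve[q * q :: q] = bytearray(len(sieve[q * q :: q]))
    return [q for q in range(2, n + 1) if sieve[q]]

def amplifier(tau, L):
    # B_{i tau}(i tau) = sum_{ell <= 2L prime} |d_{i tau}(ell)|
    #                  = 2 sum_{ell <= 2L prime} |cos(tau log ell)|
    return 2 * sum(abs(cos(tau * log(ell))) for ell in primes_upto(2 * L))

L = 10 ** 4
num_primes = len(primes_upto(2 * L))
for tau in (0.0, 0.003, 1.0, pi, 10.0):
    # numerically the amplifier is comparable to pi(2L), far above L / log^6 L
    assert amplifier(tau, L) > 0.3 * num_primes
```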
It will be convenient for later purposes to introduce the notation \begin{gather}\label{eq-lmax-tau} I(\tau,p)=\{t\in \mathbf{R}\,\mid\, |t-\tau|\leq \log^{-7}p\},\quad \mathcal{M}(\tau,p)=\mathop{\mathrm{Max}}\limits_{t\in I(\tau,p)}|\mathcal{S}_{V,X}(it,K)|,\quad\\ M(Q,X)=QX\Bigl(1+\frac{p}{X}\Bigr)^{1/2}p^{-1/8}, \end{gather} so that, for instance, Theorem~\ref{eisensteinsumthm} claims that $$ \mathcal{S}_{V,X}(it,K)\ll p^{\varepsilon}(1+|t|)^A M(Q,X) $$ for any $\varepsilon>0$ and $A\geq 1$ depending on $\varepsilon$. \par For our next result, we recall that we work under the assumptions $p^{3/4}<X<p^{3/2}$ and $1\leq Q \leq p$. We then have: \begin{proposition}\label{shortaveragebound} For any $\varepsilon>0$, there exists $B\geq 5$, depending only on $\varepsilon$, such that for any $\tau\in\mathbf{R}$ we have \begin{equation}\label{eq-first-average} \int_{I(\tau,p)}\min(|t|^2,1)|\mathcal{S}_{V,X}(i t,K)|^2dt\ll_\varepsilon p^\varepsilon(1+|\tau|)^{B}M(Q,X)^2, \end{equation} where the implied constant depends only on $\varepsilon$. \end{proposition} \begin{proof} Let $$ L=\frac{p^{1/4-\varepsilon}}{X/p+Q} $$ as in~\cite[(4.2)]{FKM}. If $L\ll 1$ or if $|\tau|>L^{1/3}$, the trivial bound~(\ref{eq-trivial-bound}) or the convexity bound \eqref{convexeq} yield stronger results than~(\ref{eq-first-average}) (since $B\geq 5$). With this definition of $L$, and the above reduction, we may apply Lemmas~\ref{lm-average-bound} and~\ref{Flemma} to the remaining cases. We obtain \begin{align*} \int_{|t-\tau|\leq \log^{-7}p}\min(|t|^2,|t|^{-2-2b}) |\mathcal{S}_{V,X}(it,K)|^2dt &\ll_\varepsilon p^\varepsilon\Bigl(\frac{pXQ}{L}+p^{1/2}XQL\Bigl(\frac{X}{p}+Q\Bigr)^2 \Bigr)\\ &\ll p^\varepsilon Q^2X^2\Bigl(1+\frac{p}{X}\Bigr)p^{-1/4}, \end{align*} for some $b$ depending on $\varepsilon$. 
It only remains to note the inequality $$ \min(|t|^2, 1) \ll (1+|\tau|)^{2b+2} \min(|t|^2, |t|^{-2b-2}), $$ for $t \in I(\tau,p)$, and to choose $B(\varepsilon)=\mathop{\mathrm{Max}}\limits (5, 2 b +2)$, in order to complete the proof of~(\ref{eq-first-average}) in all cases. \end{proof} The remaining objective is to derive a pointwise bound for $\mathcal{S}_{V,X}(it,K)$, and to do so we must relax the zero of order $2$ of the weight $\min(|t|^2,1)$ at the origin (for similar issues in the context of $L$-functions, see e.g.~\cite{blomer}; similar methods could be used here, but at the cost of expressing our sums in terms of $L$-functions, so we instead work with the sums directly and resort to an iterative argument). \par The basic mechanism is the following consequence of Proposition~\ref{shortaveragebound}: \begin{corollary}\label{corshortaverage} For any $\tau\in\mathbf{R}$ and $\varepsilon>0$, with notation as above, we have $$ \int_{I(\tau,p)}|\mathcal{S}_{V,X}(i t,K)|dt\ll_\varepsilon \begin{cases} p^\varepsilon(1+|\tau|)^{B}M(Q,X)&\text{ if } |\tau|\geq 1,\\ p^\varepsilon \mathcal{M}(\tau,p)^{1/3}M(Q,X)^{2/3}&\text{ if } |\tau|\leq 1. \end{cases} $$ \end{corollary} \begin{proof} If $|\tau|\geq 1$, we just apply the Cauchy-Schwarz inequality to get \begin{align*} \int_{I(\tau,p)}|\mathcal{S}_{V,X}(i t,K)|dt &\leq \Bigl(\int_{I(\tau,p)}|\mathcal{S}_{V,X}(i t,K)|^2dt\Bigr)^{1/2} \Bigl(\int_{I(\tau,p)}dt\Bigr)^{1/2}\\ &\ll \Bigl(\int_{I(\tau,p)}\min(|t|^2,1)|\mathcal{S}_{V,X}(i t,K)|^2dt\Bigr)^{1/2} \ll p^{\varepsilon}(1+|\tau|)^BM(Q,X) \end{align*} by Proposition~\ref{shortaveragebound}. \par Now assume $|\tau|<1$. Let $0<\alpha<1/3$ be some parameter.
By H\"older's inequality, we get \begin{align*} \int_{I(\tau,p)}|\mathcal{S}_{V,X}(i t,K)|dt&\leq \mathcal{M}(\tau,p)^{1-2\alpha} \int_{I(\tau,p)}|\mathcal{S}_{V,X}(i t,K)|^{2\alpha}dt \\ &\leq \mathcal{M}(\tau,p)^{1-2\alpha} \Bigl(\int_{I(\tau,p)}|t|^2|\mathcal{S}_{V,X}(i t,K)|^{2}dt\Bigr)^{\alpha} \Bigl(\int_{0}^2|t|^{-2\alpha/(1-\alpha)}dt\Bigr)^{1-\alpha} \\ &\ll_{\varepsilon,\alpha} p^\varepsilon \mathcal{M}(\tau,p)^{1-2\alpha}M(Q,X)^{2\alpha}\\ &\ll_{\varepsilon,\alpha} p^\varepsilon \mathcal{M}(\tau,p)^{1-2\alpha}M(Q,X)^{2/3}, \end{align*} (since $2\alpha/(1-\alpha)<1$ and $M(Q,X)\geq 1$). By the trivial bound~(\ref{eq-trivial-bound}), we have $$ \mathcal{M}(\tau,p)^{1-2\alpha}\ll \mathcal{M}(\tau,p)^{1/3}(X\log X)^{1-2\alpha-1/3}, $$ and we conclude by taking $\alpha=1/3-\varepsilon$. \end{proof} \subsection{An iterative bound} The following lemma establishes an iterative bound for Eisenstein twists. \begin{lemma} Assume that $\beta>0$ is such that \begin{equation}\label{eq-iter-assumption} \mathcal{S}_{V,X}(it,K)\ll p^{\varepsilon}(1+|t|)^AX^{\beta}M(Q,X)^{1-\beta} \end{equation} for $X\leq p^{3/2}$, any $\varepsilon>0$, and some $A\geq 1$ depending on $\varepsilon$. Then for any $\varepsilon>0$, we have \begin{align} \mathcal{S}_{V,X}(it,K)&\ll p^{\varepsilon}(1+|t|)^{A_1}X^{\beta/3}M(Q,X)^{1-\beta/3} \label{eq-1}\\ \mathcal{S}_{\uple{P}}(it,K)&\ll p^{\varepsilon} (Q_U+Q_V)^B(1+|t|)^{A_1}X^{\beta/3}M(Q_W,X)^{1-\beta/3}\label{eq-2} \end{align} for $A_1$, $B\geq 1$ depending on $\varepsilon$. \end{lemma} \begin{proof} We first combine Lemma~\ref{lm-relations} (1) with the assumption to estimate $\mathcal{S}_{\uple{P}}(it,K)$. For each $t_1$, we split the integral over $|t_2|\leq p^{\varepsilon}$ into $\ll p^{\varepsilon}$ integrals over intervals of length $\log^{-7}p$.
For an interval $I$ centered at some $\tau$ with $|\tau|\leq 1$, the integral is bounded by $$ \ll p^{\varepsilon}\mathcal{M}^{1/3}M(Q_W+|t_1|,X)^{2/3} $$ by Corollary~\ref{corshortaverage} applied to $W_{t_1}$ (see~(\ref{eq-qw1})), where $$ \mathcal{M}=\mathop{\mathrm{Max}}\limits_{t\in I} |\mathcal{S}_{W_{t_1},X}(it,K)|\ll p^{\varepsilon} X^{\beta}M(Q_W+|t_1|,X)^{1-\beta} $$ by~(\ref{eq-iter-assumption}) and~(\ref{eq-qw1}). Thus each such integral is $$ \ll p^{\varepsilon} X^{\beta/3}M(Q_W+|t_1|,X)^{1-\beta/3}. $$ \par For intervals centered at $\tau$ with $1\leq |\tau|\leq p^{\varepsilon}$, we obtain the bound $\ll p^{\varepsilon}(1+|\tau|)^AM(Q_W+|t_1|,X)$, which is better, and integrating over $|t_1|\leq p^{\varepsilon}$, we get~(\ref{eq-2}) (note that $Q\mapsto M(Q,X)$ is linear). \par Now, applying Lemma~\ref{lm-relations} (2), we immediately deduce~(\ref{eq-1}). \end{proof} We are now done: for $p^{3/4}\leq X\leq p^{3/2}$, we can start applying this lemma with $\beta=1$ by the trivial bound~(\ref{eq-trivial-bound}). We deduce that, for any integer $k\geq 1$, we have \begin{align*} \mathcal{S}_{V,X}(it,K)&\ll p^{\varepsilon}(1+|t|)^AX^{3^{-k}}M(Q,X)^{1-3^{-k}} \\ \mathcal{S}_{\uple{P}}(it,K)&\ll p^{\varepsilon} (Q_U+Q_V)^B(1+|t|)^AX^{3^{-k}}M(Q_W,X)^{1-3^{-k}}. \end{align*} \par Since \begin{align*} X^{3^{-k}}M(Q,X)^{1-3^{-k}}&= XQ^{1-3^{-k}}\Bigl(1+\frac{p}{X}\Bigr)^{(1-3^{-k})/2}p^{-(1-3^{-k})/8}\\ &\leq XQ(1+p/X)^{1/2}p^{-1/8}p^{3^{-k}/8}, \end{align*} we obtain Theorems~\ref{eisensteinsumthm} and~\ref{typeIsumthm} by taking $k$ large enough that $3^{-k}\leq 8\varepsilon$, so that the extra factor $p^{3^{-k}/8}$ is at most $p^{\varepsilon}$. \section{Estimating sums of type $II$} In this section we prove Theorem \ref{typeIIsumthm}. We will leave the proof of the simpler bound~(\ref{typeIeq}) to the reader, and consider~(\ref{typeIIeq}), proceeding along classical lines.
Denoting by $$ T=\sumsum_\stacksum{m,n}{(m,p)=1} \alpha_m\beta_n K(mn) $$ the bilinear form to be estimated, we apply the Cauchy-Schwarz inequality and deduce that \begin{equation}\label{eq-norm-bil} |T|^2\leq \|\beta\|^2 \mathop{\sum \sum}\limits_{\stacksum{M/2\leq m_1,m_2\leq 2M}{p\nmid m_1m_2}} \ov{\alpha_{m_1}}\alpha_{m_2}\sum_{N/2\leq n\leq 2N}\ov{K(m_1n)}K(m_2n). \end{equation} \par The inner correlation coefficients are then treated by completion (i.e., by the P\'olya-Vinogradov method), which gives \begin{equation}\label{eq-correl} \sum_{N/2\leq n\leq 2N}\ov{K(m_1n)}K(m_2n) \ll \frac{N}p|\mathcal{C}(m_1,m_2,0,K)|+ \sum_{0<|h|\leq p/2}\min\Bigl(\frac{1}{|h|},\frac{N}{p}\Bigr) |\mathcal{C}(m_1,m_2,h,K)| \end{equation} where $$ \mathcal{C}(m_1,m_2,h,K)=\sum_{z\in{\mathbf{F}_p}}\ov{K(m_1z)}K(m_2 z)e\Bigl(\frac{hz}p\Bigr), $$ a sum which satisfies the relation $$ \mathcal{C}(m_1,m_2,h,K)=\mathcal{C}(m_1/m_2,1,h/m_2,K). $$ \par For a trace weight, we have the trivial bound $$ |\mathcal{C}(m_1,m_2,h,K)|\leq \cond(\mathcal{F})^2 p, $$ but this is not sharp in most cases. In fact, the crucial point is to show that for most parameters $(m_1,m_2,h)$, we have a better estimate with square-root cancellation. We provide such a result in Theorem~\ref{th-bound-bad} in Section~\ref{sec-prelim}, building on our earlier work in~\cite{FKM}. \begin{proposition}[Paucity of large correlations]\label{correlationprop} Let $K$ be an irreducible trace weight modulo $p$ which is not $p$-exceptional, associated to the sheaf $\mathcal{F}$. Then there exist $C\geq 1$ and $D\geq 0$, depending only polynomially on $\cond(\sheaf{F})$, such that $$ |\mathcal{C}(m,1,h,K)|\leq Cp^{1/2} $$ for every pair $(m,h)\in{\mathbf{F}^\times_p}\times{\mathbf{F}_p}$ except for those in a set of pairs of cardinality at most $D$.
\end{proposition} After inserting~(\ref{eq-correl}) in~(\ref{eq-norm-bil}), the contribution of all triples $(m_1,m_2,h)$ for which $$ |\mathcal{C}(m_1,m_2,h,K)|\leq Cp^{1/2} $$ is at most $$ \ll \|\alpha\|^2\|\beta\|^2\Bigl(\frac{MN}{p^{1/2}}+Mp^{1/2}\log p\Bigr). $$ \par For the remaining triples, we sum over $m_1$ first. For each $m_1$, the proposition shows that the possible $(m_1/m_2,h/m_2)$ that can occur lie, modulo $p$, in a finite set $\mathcal{E}$ of size bounded in terms of the conductor of $\mathcal{F}$ only, i.e., $m_2$ modulo $p$ and $h$ are determined by $m_1$ up to a finite number of possibilities. We use the trivial bounds $$ |\mathcal{C}(m_1,m_2,h,K)|\leq \cond(\sheaf{F})^2p,\quad\quad \min\Bigl(\frac{1}{|h|},\frac{N}{p}\Bigr)\leq \frac{N}{p}, $$ and obtain that the contribution of these terms to the right-hand side of \eqref{eq-norm-bil} is \begin{gather*} \ll \|\beta\|^2N\sum_{(t,h)\in\mathcal{E}} \mathop{\sum\sum}\limits_\stacksum{m_1, m_2}{m_2\equiv tm_1\mods{p}}|\alpha_{m_1}||\alpha_{m_2}|\\ \ll \|\beta\|^2N\sum_{(t,h)\in\mathcal{E}} \mathop{\sum\sum}\limits_\stacksum{m_1, m_2}{m_2\equiv tm_1\mods{p}}(|\alpha_{m_1}|^2+|\alpha_{m_2}|^2)\\ \ll N\Bigl(1+\frac{M}{p}\Bigr)\|\beta\|^2\|\alpha\|^2, \end{gather*} where the implicit constant depends only (polynomially) on $\cond(\sheaf{F})$. \par Combining the two, we get $$ T\ll \|\alpha\|\|\beta\| (MN)^{1/2}\Bigl(\frac{1}{p^{1/4}}+\frac{1}{M^{1/2}}+\frac{p^{1/4}\log^{1/2}p} {N^{1/2}}\Bigr), $$ where the implicit constant depends only on the conductor of $\sheaf{F}$. This completes the proof of Theorem \ref{typeIIsumthm}. \section{Sums over primes} We now finally prove Theorem \ref{primesumthm}, our main result on sums over primes. \subsection{Smooth sums} We start with the smooth version \eqref{primesumsmooth}. Clearly, it is enough to estimate the sum $$ \mathcal{S}_{V,X}(\Lambda,K)=\sum_{n}\Lambda(n)K(n)V\Bigl(\frac{n}{X}\Bigr), $$ and we begin by recalling two lemmas.
The first one is Heath-Brown's identity for the von Mangoldt function~\cite{HB}: \begin{lemma}[Heath-Brown] For any integer $J\geq 1$ and $n< 2X$, we have $$ \Lambda(n)=-\sum_{j=1}^J(-1)^j\binom{J}{j} \sum_{m_1,\cdots, m_j\leq Z}\mu(m_1)\cdots\mu(m_j) \sum_{m_1\cdots m_jn_1\cdots n_j=n}\log n_1, $$ where $Z=X^{1/J}$. \end{lemma} \begin{remark} Using instead the analogous formula $$ \mu(n)=-\sum_{j=1}^J(-1)^j\binom{J}{j} \sum_{m_1,\cdots, m_j\leq Z}\mu(m_1)\cdots\mu(m_j) \sum_{m_1\cdots m_jn_1\cdots n_{j-1}=n}1, $$ for the M\"obius function (valid under the same conditions), one proves Theorem~\ref{moebiussumthm} using exactly the same arguments, so we will not say more about the proof of that result. \end{remark} The second lemma provides a smooth partition of unity (see, e.g.,~\cite[Lemma 2]{FouvryCrelle}). \begin{lemma} \label{dyadic} There exists a sequence $(V_l)_{l\geq 0}$ of smooth functions on $[0,+\infty[$ such that \begin{itemize} \item[-] For any $l$, $V_{l}$ is supported in $]2^{l-1},2^{l+1}[$; \item[-] For any $k,l\geq 0$, we have $$ x^kV_l^{(k)}(x)\ll_k 1, $$ where the implicit constant depends only on $k$; \item[-] For any $x\geq 1$, $$ \sum_{l\geq 0}V_l(x)=1. $$ \end{itemize} \end{lemma} Fix some $J\geq 2$. 
Applying these two lemmas, we see that $\mathcal{S}_{V,X}(\Lambda,K)$ decomposes into a linear combination, with coefficients bounded by $O_J(\log X)$, of $O(\log^{2J}X)$ sums of the shape \begin{multline} \Sigma(\uple{M},\uple{N})= \mathop{\sum\cdots \sum}\limits_{m_1,\cdots,m_J}\alpha_{1}(m_1)\alpha_{2}(m_2)\cdots \alpha_{J}(m_J)\\ \times\mathop{\sum\cdots \sum}\limits_{n_1,\cdots,n_J}V_{1}({n_1})\cdots V_{J}({n_J}) V\Bigl(\frac{m_1\cdots m_Jn_1\cdots n_J}{X}\Bigr) K(m_1\cdots m_Jn_1\cdots n_J) \end{multline} where \begin{itemize} \item[-] $\uple{M}=(M_1,\cdots,M_J)$, $\uple{N}=(N_1,\cdots,N_J)$ are $J$-tuples of parameters in $[1/2,2X]^{2J}$ which satisfy $$ N_1\geq N_2\geq \cdots \geq N_J,\quad\quad M_i\leq X^{1/J},\quad\quad M_1\cdots M_JN_1\cdots N_J\asymp_J X; $$ \item[-] the arithmetic functions $m\mapsto \alpha_{i}(m)$ are bounded and supported in $[M_i/2,2M_i]$; \item[-] the smooth functions $V_{i}(x)$ are compactly supported in $[N_i/2,2N_i]$, and their derivatives satisfy $$ y^{k}V_{i}^{(k)}(y)\ll 1, $$ for all $y\geq 1$, where the implicit constants depend only on $k$. \end{itemize} We will state different bounds for $\Sigma(\uple{M},\uple{N})$, depending on the relative sizes of the parameters, and then optimize the result. \par For $J\geq 2$, we obtain, by Theorem~\ref{typeIsumthm} applied to $n_1,\ n_2$ and trivial summation over the remaining variables, the bound \begin{equation} \label{eqbound2} \Sigma(\uple{M},\uple{N})\ll (pQ)^\varepsilon QX\Bigl(1+\frac{p}{N_1N_2}\Bigr)^{1/2}p^{-1/8}, \end{equation} for any $\varepsilon>0$, the implicit constant depending on $\varepsilon$ and $\cond(\mathcal{F})$.
\par On the other hand, from \eqref{typeIIeq} with an integration by parts, we have the bound \begin{equation} \label{eqbound3} \Sigma(\uple{M},\uple{N}) \ll (pQ)^\varepsilon QX\Bigl(\frac{1}{p^{1/4}}+\frac1{M^{1/2}}+\frac{p^{1/4}}{(X/M)^{1/2}} \Bigr), \end{equation} for any factorization $$ M_1\cdots M_JN_1\cdots N_J=M\times N $$ where $M$ and $N$ are products of some of the $M_i$ and $N_{j}$. \par Our goal is to choose the best of the two bounds~(\ref{eqbound2}) and~(\ref{eqbound3}) for each such configuration of the parameters $(\uple{M},\uple{N})$. By taking logarithms (in base $p$), we readily see that the proof of \eqref{primesumsmooth} is reduced to the optimization problem of the next section. \subsection{An optimization problem} We consider here the following optimization problem. We are given a real number $x>0$ (we have in mind $x=\log X/\log p$), an integer $J\geq 3$, and parameters $$ (\uple{m},\uple{n})=(m_1,\cdots,m_J,n_1,\cdots,n_J)\in[0,x]^{2J} $$ such that \begin{equation} \label{simplex} \sum_{i}m_i+\sum_j n_j=x,\quad\quad m_i\leq x/J,\quad\quad n_1\geq n_2\geq \cdots\geq n_J. \end{equation} \par We want to estimate from below the quantity \begin{equation}\label{eq-first-max} \eta(\uple{m},\uple{n})=\mathop{\mathrm{Max}}\limits\Bigl\{\mathop{\mathrm{Max}}\limits_{\sigma}\min\Bigl(\frac{1}4,\frac{\sigma}2,\frac{x-\sigma }2-\frac14\Bigr),\ \frac 18-\mathop{\mathrm{Max}}\limits\Bigl(0,\frac 12(1-(n_1+n_2))\Bigr) \Bigr\}, \end{equation} where $\sigma $ ranges over all possible sub-sums of the $m_i$ and $n_j$ for $1\leq i,j\leq J$, that is over the sums $$ \sigma=\sum_{i\in \mathcal{I}}m_i+\sum_{j\in \mathcal{J}}n_j $$ for $\mathcal{I}$, $\mathcal{J}$ ranging over all possible subsets of $\{1,\cdots,J\}$. \begin{remark} One could also try to exploit the estimate~(\ref{typeIeq}) to improve the result, but we will not actually use it.
\end{remark} The number $\eta(\uple{m},\uple{n})$ represents the maximal power of $p$ that we save over the trivial bound using \eqref{eqbound2} and \eqref{eqbound3}. The outcome of the discussion in the previous section is that, for $x=(\log X)/(\log p)$ and $J\geq 3$, we have $$ \Sigma(\uple{M},\uple{N})\ll (pQ)^{\varepsilon}QXp^{-\eta(\uple{m},\uple{n})}. $$ \par By Heath-Brown's identity, it follows that $$ \mathcal{S}_{V,X}(\Lambda,K)\ll (pQ)^{\varepsilon}QXp^{-\eta} $$ where $$ \eta=\min_{(\uple{m},\uple{n})}\eta(\uple{m},\uple{n}). $$ \par We will show: \begin{proposition}\label{pr-optimize} Let $x>3/4$ be given. Provided $J$ is large enough in terms of $x$, we have the inequality $$ \eta(\uple{m},\uple{n})\geq \min\Bigl(\frac{1}{24},\frac{4x-3}{24}\Bigr). $$ \end{proposition} Combining this lower bound with the above estimates concludes the proof of Theorem~\ref{primesumthm}, noting that $x\leq 1$ means that $X\leq p$, and that $$ Xp^{-(4x-3)/24}=X\Bigl(\frac{p}{X}\Bigr)^{1/6}p^{-1/24}. $$ \begin{proof}[Proof of Proposition~\ref{pr-optimize}] Let $\delta$ be a parameter such that \begin{equation}\label{eq-delta} 0<\delta < \min\Bigl(\frac{4x-3}{12},\frac{x-1/2}{6},\frac{1}{4}\Bigr). \end{equation} \par The interval $$ I_{\delta}=\Bigl[2\delta,x-\frac12-2\delta\Bigr] $$ is then non-empty (since $6\delta<x-1/2$). If we can find a subsum $\sigma$ such that $\sigma \in I_\delta$, we then deduce immediately from the definition~(\ref{eq-first-max}) that \begin{equation}\label{eq-first-bound} \eta(\uple{m},\uple{n})\geq \mathop{\mathrm{Max}}\limits_\sigma\min\Bigl(\frac14,\frac{\sigma}2,\frac{x}2-\frac14-\frac \sigma 2\Bigr) \geq \delta. \end{equation} \par We now assume that such a subsum $\sigma$ does \emph{not} exist, and attempt to get a lower bound on $\eta(\uple{m},\uple{n})$ using the second term in the maximum~(\ref{eq-first-max}).
First of all, we claim that, in that case, we have \begin{equation} \label{xibound} \sum_{i\leq J}m_i<2\delta, \end{equation} provided \begin{equation}\label{eq-provided} \frac{x}{J}\leq x-\frac12-4\delta=\mathrm{length}(I_\delta), \end{equation} a condition which we assume from now on. \par Indeed, if~(\ref{xibound}) were false, using the fact that $m_i\leq x/J$ and that $x/J$ is then less than the length of the interval $I_\delta$, we would be able to find some subsum $\sigma$ (formed only with some $m_i$'s) which is contained in $I_\delta$, contradicting our current assumption. \par From \eqref{simplex} and (\ref{xibound}), we get in particular the inequality \begin{equation}\label{eq-818} \sum_{j}n_j\geq x-2\delta. \end{equation} \par Since, under our assumption~(\ref{eq-delta}) on $\delta$, we have $$ 2\delta\leq x-\frac12-4\delta=\mathrm{length}(I_\delta), $$ this implies that $$ n_j\leq 2\delta $$ for any $j\geq 3$ (because otherwise, we would have $$ x-\frac12-2\delta \leq n_3\leq n_2\leq n_1 $$ since $n_j\notin I_{\delta}$, and then, in view of~(\ref{eq-delta}), we would get $$ n_1+n_2+n_3> 3x-\frac32-6\delta\geq x, $$ a contradiction). But now it follows that \begin{equation} \label{yjbound} \sum_{j\geq 3}n_j<2\delta, \end{equation} because otherwise, using $4\delta\leq x-1/2-2\delta$, we could again obtain a subsum of the $n_j$'s, $j\geq 3$, in $I_{\delta}$. \par Combining \eqref{eq-818} and \eqref{yjbound}, we obtain $$ n_1+n_2\geq x-4\delta $$ and hence $$ \frac 18-\mathop{\mathrm{Max}}\limits\Bigl(0,\frac 12(1-(n_1+n_2))\Bigr) \geq \min\Bigl(\frac18,\frac{4x-3}8-2\delta\Bigr). 
$$ \par Combining this with~(\ref{eq-first-bound}), it follows that for $\delta$ satisfying~(\ref{eq-delta}) and $J$ large enough in terms of $x$ and $\delta$ so that~(\ref{eq-provided}) holds, we have $$ \eta(\uple{m},\uple{n})\geq\min\Bigl({\delta},\ \min\Bigl(\frac18,\frac{4x-3}8-2\delta\Bigr)\Bigr). $$ \par For $x>3/4$, we take $$ \delta=\min\Bigl(\frac{4x-3}{24},\frac{1}{24}\Bigr) $$ and Proposition \ref{pr-optimize} follows. \end{proof} \subsection{Sums over intervals}\label{ssec-intervals} We can now also easily deduce from~(\ref{primesumsmooth}) the estimate~(\ref{primesuminterval}) for sums over primes in the interval $2\leq q\leq X$ (below all sums over $q$ are restricted to $q$ prime). By a dyadic decomposition of the interval $[1,X]$, we are reduced to proving that \begin{equation} \label{primesumdyadic} \sum_{X\leq q\leq 2X}K(q)\ll_{\eta,\cond(\mathcal{F})} X(1+p/X)^{1/12}p^{-\eta/2} \end{equation} for $X\geq 2$ and for any $\eta<1/24$. Since the right-hand side of this bound increases with $X$, this is sufficient to conclude the proof of \eqref{primesuminterval}. Let $\Delta<1$ be some parameter and let $V$ be a smooth function defined on $[0,+\infty[$ such that $$ \supp(V)\subset [1-\Delta,2+\Delta],\quad\quad 0\leq V\leq 1,\quad\quad V(x)=1\text{ for } 1\leq x\leq 2, $$ and which satisfies $$ x^{j}V^{(j)}(x)\ll_j Q^{j}, $$ with $Q=\Delta^{-1}$. \par By applying \eqref{primesumsmooth} to $V$, we get \begin{align*} \sum_{X\leq q\leq 2X}K(q)& \ll X\Delta + \sum_{q}K(q)V\Bigl(\frac{q}X\Bigr)\\ &\ll_{\eta,\cond(\mathcal{F})} X(\Delta +\Delta^{-1}(1+p/X)^{1/6}p^{-\eta}) \end{align*} for any $\eta<1/24$. \par If $X>p^{1-6\eta}$, we can take $$ \Delta=(1+p/X)^{1/12}p^{-\eta/2}<1 $$ and we obtain \eqref{primesumdyadic}. On the other hand, if $X\leq p^{1-6\eta}$, the bound \eqref{primesumdyadic} is weaker than the trivial bound $2X$ for $p$ large enough.
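For the record, the value of $\Delta$ chosen above is the one which balances the two terms in the last bound: $$ X\Delta=X\Delta^{-1}\Bigl(1+\frac{p}{X}\Bigr)^{1/6}p^{-\eta} \quad\Longleftrightarrow\quad \Delta=\Bigl(1+\frac{p}{X}\Bigr)^{1/12}p^{-\eta/2}, $$ and for this choice both terms are equal to $X(1+p/X)^{1/12}p^{-\eta/2}$, which is precisely the right-hand side of \eqref{primesumdyadic}.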
\section{Applications} \subsection{Primes represented by a polynomial modulo $p$} \label{sec:primepolynomials} In this section we prove Corollaries~\ref{cor-poly-error-terms} and~\ref{cor-mult-car}. \par For the former, we fix a non-constant polynomial $P\in\mathbf{Z}[X]$, and we consider a prime $p$ such that $P$ is non-constant modulo $p$. \par For Corollary~\ref{cor-poly-error-terms}, (1), we are dealing with \begin{align*} \sum_{n\in{\mathbf{F}_p}}{E(X;p,P(n))}&= \sum_{n\in{\mathbf{F}_p}}{\pi(X;p,P(n))} -\frac{1}{p-1}\sum_\stacksum{n\in{\mathbf{F}_p}}{P(n)\not\equiv 0\mods{p}}{\pi(X)}. \end{align*} \par We denote $$ N_P(x)=\sum_{\stacksum{n\in{\mathbf{F}_p}}{P(n)=x}}{1}-1 $$ the ``centered'' number of representations of $x$ as a value of $P$ modulo $p$. The formula above allows us to write $$ \sum_{n\in{\mathbf{F}_p}}{E(X;p,P(n))} =\sum_{q\leq X}{N_P(q)} +\sum_{q\leq X}\Bigl(1-\frac{1}{p-1}|\{n\in{\mathbf{F}_p}\,\mid\, P(n)\not=0\}|\Bigr) $$ (where $q$ runs over primes, as before). \par The second term of the previous expression is trivially bounded by $\ll p^{-1}X+1$, since $P$ has at most $\deg P$ zeros modulo $p$. Thus Corollary~\ref{cor-poly-error-terms}, (1) follows from Theorem~\ref{primesumthm} and from the fact -- recalled in Section~\ref{subsec-decompositions-poly} below -- that $N_P$ is a trace function for an $\ell$-adic sheaf with no exceptional Jordan-H\"older factor (i.e. no such factor is geometrically isomorphic to a tensor product of a Kummer sheaf and an Artin-Schreier sheaf). \par For Corollary~\ref{cor-poly-error-terms}, (2), we write $\mathbf{1}_{P({\mathbf{F}_p})}$ for the characteristic function of the set $P({\mathbf{F}_p})$ of values of $P$ modulo $p$, and we will denote $P^*({\mathbf{F}_p})=P({\mathbf{F}_p})-\{0\}$, the set of non-zero values of $P$ modulo $p$. 
A reasoning similar to the previous one leads to $$ \sum_{a\in P({\mathbf{F}_p})}{E(X;p,a)}= \sum_{q\leq X}\mathbf{1}_{P({\mathbf{F}_p})}(q) -\frac{|P^*({\mathbf{F}_p})|}{p-1}\pi(X). $$ \par Applying Proposition~\ref{pr-decomp-poly} of Section~\ref{subsec-decompositions-poly}, the first term on the right-hand side becomes $$ c_1\pi(X) +\sum_{2\leq i\leq k}\sum_{q\leq X}{c_i K_i(q)}+ O({p^{-1}}{X}+1) $$ where the implicit constant depends only on $\deg P$, using the notation of that proposition (the error term corresponds to the contribution of those $q$ such that $q\mods{p}$ is in one of the residue classes in the set $S$ of Proposition~\ref{pr-decomp-poly}; its size is bounded in terms of $\deg P$ only). \par Using the asymptotic formula~(\ref{eq-size-c1}) for the constant $c_1$, we get $$ \sum_{a\in P({\mathbf{F}_p})}{E(X;p,a)}=\sum_{2\leq i\leq k}\sum_{q\leq X}{c_iK_i(q)} +O(p^{-1/2}X), $$ and Theorem~\ref{primesumthm} concludes the proof. \subsection{Large Kloosterman sums with almost prime modulus} In this section we prove Corollary \ref{largesums}. It is sufficient to prove the following: \begin{proposition}\label{largesumsII} For any $m\geq 2$ and any $\delta$ such that $0<\delta<1/2$, there exists a constant $\beta_m>0$ such that $$ |\{(p,q),\ p,q\text{ primes }\geq X^{\delta},\ pq\leq X,\ |\hypk_m(1;pq)|\geq\beta_m\}|\gg \frac{X}{\log X}, $$ where the implicit constants depend on $m$ and $\delta$ only. \end{proposition} We recall first the basic strategy from \cite{MichelInv}. By the Chinese remainder theorem, we have the twisted multiplicativity \begin{equation} \label{twistedmultiplicativity} \hypk_m(1;pq)=\hypk_m(\ov q^m;p)\hypk_m(\ov p^m;q), \end{equation} when $p$ and $q$ are distinct primes.
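The identity \eqref{twistedmultiplicativity} is obtained by splitting the variables; here is a sketch, written with the usual normalization $\hypk_m(a;c)=c^{-(m-1)/2}\sum_{x_1\cdots x_m\equiv a\mods{c}}e((x_1+\cdots+x_m)/c)$ in mind. Writing $e(x/pq)=e(\ov q x/p)e(\ov p x/q)$ and decomposing each variable modulo $p$ and modulo $q$, the unnormalized sum of modulus $pq$ factors as $$ \sum_\stacksum{x_1,\cdots,x_m}{x_1\cdots x_m\equiv 1\mods{pq}} e\Bigl(\frac{x_1+\cdots+x_m}{pq}\Bigr)= \sum_\stacksum{u_1,\cdots,u_m}{u_1\cdots u_m\equiv \ov q^m\mods{p}} e\Bigl(\frac{u_1+\cdots+u_m}{p}\Bigr) \sum_\stacksum{v_1,\cdots,v_m}{v_1\cdots v_m\equiv \ov p^m\mods{q}} e\Bigl(\frac{v_1+\cdots+v_m}{q}\Bigr), $$ after the changes of variable $u_i\mapsto qu_i$ modulo $p$ and $v_i\mapsto pv_i$ modulo $q$, which absorb $\ov q$ and $\ov p$ from the phases into the multiplicative constraints; the normalizing factors match since $(pq)^{(m-1)/2}=p^{(m-1)/2}q^{(m-1)/2}$.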
Therefore, in order to prove the existence of pairs of primes $(p,q)$ for which $|\hypk_m(1;pq)|$ is large, it is sufficient to show that there exist two sets of pairs of primes for which $|\hypk_m(\ov q^m;p)|$ and $|\hypk_m(\ov p^m;q)|$ are both large, and that these two sets intersect non-trivially. This leads us to proving that, for pairs $(p,q)$ in suitable ranges, the hyper-Kloosterman sums $\hypk_m(\ov q^m;p)$ and $\hypk_m(\ov p^m;q)$ become equidistributed in the interval $[-m,m]$ with respect to a suitable measure. Such a statement is an instance of the vertical (or average) Sato-Tate laws of Katz and Deligne, but specialized to prime arguments. \par To state these equidistribution statements properly, we recall that for any prime number $p$ and auxiliary prime $\ell\not=p$, and for any isomorphism $\iota:\ov{\mathbf{Q}_\ell}\hookrightarrow \mathbf{C}$, there exists a $\ov\mathbf{Q}_{\ell}$-adic sheaf $\mathcal{K}\ell_m$ on $\mathbf{P}^1_{\mathbf{F}_p}$ (constructed by Deligne and studied by Katz in~\cite{GKM}) such that: \begin{enumerate} \item The sheaf $\mathcal{K}\ell_m$ has rank $m$ and is lisse on ${\mathbf{G}_{m}}_{,{\mathbf{F}_p}}$, tamely ramified at $0$ with a single Jordan block and wildly ramified at $\infty$ with Swan conductor $1$ (in particular, we have $\cond(\mathcal{K}\ell_m)=m+3$); \item The sheaf $\mathcal{K}\ell_m$ is geometrically irreducible, and its geometric monodromy group is equal to $\rmG_m=\SL_{m}$ or $\Sp_{m}$ depending on whether $m$ is odd or even; \item The sheaf $\mathcal{K}\ell_m$ is pointwise pure of weight $0$, and for any $a\in{\mathbf{F}^\times_p}$, the trace of the Frobenius at $a$ equals $$ \iota(\tr(\frob_a|\mathcal{K}\ell_m))=(-1)^{m-1}\hypk_m(a;p), $$ and moreover, for any choice of maximal compact subgroup $K_m$ of $\rmG_m(\mathbf{C})$, $(\frob_a|\mathcal{K}\ell_m)$ defines a unique conjugacy class $g^\natural_m(a;p)$ in $K_m^\natural$ (the space of conjugacy classes of $K_m$) whose trace is equal to
$(-1)^{m-1}\hypk_m(a;p)$. \end{enumerate} It will be easy to prove the following result using Theorem \ref{primesumthm}: \begin{theorem}[Sato-Tate equidistribution]\label{equidthm} Given $\delta>0$, $A\geq 1$, $P,Q\geq 2$ such that $$ P^{3/4+\delta}\leq Q\leq P^A, $$ the set of conjugacy classes $$ \{g^\natural_m({\ov q^m;p}),\ p\not=q\text{ primes, } (p,q)\in[P,2P]\times[Q,2Q]\}\subset K_m^\natural, $$ becomes equidistributed as $P\rightarrow +\infty$ with respect to the (image of the) probability Haar measure $\mu_{m}$ on $K_m^\natural$. \end{theorem} \begin{remark} A similar Sato-Tate equidistribution result over the primes holds for the generalized Kloosterman sheaves of Heinloth, Ng\^o and Yun~\cite{HNY} already mentioned in Remark \ref{introHNY}. \end{remark} We will sketch the proof below, but for the moment we can conclude from this the proof of Corollary~\ref{largesums}. We pick $\alpha_m>0$ small enough such that $$ \mu_{m}(\{g^\natural\in K_m^\natural,\ |\tr(g^\natural)|\geq\alpha_m\})\geq 0.51, $$ (such an $\alpha_m$ exists because the direct image of the measure $\mu_{m}$ under the trace map $g^\natural\mapsto |\tr(g^\natural)|$ is absolutely continuous with respect to the Lebesgue probability measure on $[0,m]$). \par Now let $\delta>0$ be given, let $P$ be large enough and consider $Q$ such that $$ P^{3/4+\delta}\leq Q\leq P^{4/3-\delta}.
$$ \par We then have $$ Q^{3/4+\delta'}\leq P\leq Q^{4/3-\delta'} $$ for some $\delta'>0$ depending only on $\delta$, and we can apply Theorem~\ref{equidthm} twice to show that \emph{both} sets \begin{gather*} \mathcal{P}_1=\{(p,q)\in[P,2P]\times[Q,2Q],\ p\not=q\text{ primes, } |\tr(g^\natural_m({\ov q^m;p}))|\geq \alpha_m\}\\ \mathcal{P}_2= \{(p,q)\in[P,2P]\times[Q,2Q],\ p\not=q\text{ primes, } |\tr(g^\natural_m({\ov p^m;q}))|\geq \alpha_m\} \end{gather*} satisfy, as $P\rightarrow +\infty$, the limit $$ \frac{|\mathcal{P}_i|}{|\{(p,q)\in [P,2P]\times[Q,2Q]\}|}\longrightarrow \mu_{m}(\{g^\natural\in K_m^\natural,\ |\tr(g^\natural)|\geq\alpha_m\})\geq 0.51. $$ \par In particular, the two sets have a non-empty intersection for $P$ large enough, and in fact $$ |\mathcal{P}_1\cap \mathcal{P}_2|\gg \frac{P}{\log P}\frac{Q}{\log Q}. $$ \par By~\eqref{twistedmultiplicativity}, it follows that $$ |\{(p,q)\in[P,2P]\times[Q,2Q],\ p\not=q\text{ primes, } |\hypk_m(1;pq)|\geq \alpha^2_m\}|\gg \frac{P}{\log P}\frac{Q}{\log Q}. $$ \par Then we obtain by an easy argument of dyadic partition that for $X$ large enough, we have $$ |\{(p,q),\ p\not=q\text{ primes, } \ p,q\geq X^{4/9},\ pq\in[X,2X],\ |\hypk_m(1;pq)|\geq \alpha^2_m\}|\gg \frac{X}{\log X}, $$ as claimed. \begin{proof}[Proof of Theorem \ref{equidthm}] This is a direct application of the Weyl criterion. Let $$ X_{P,Q}=\{ p\not=q\text{ primes, } (p,q)\in[P,2P]\times[Q,2Q]\}. $$ \par It is enough to prove that if $\rho$ is a non-trivial irreducible representation of $\rmG_m$, we have \begin{equation}\label{eq-weyl} \frac{1}{|X_{P,Q}|} \sum_{(p,q)\in X_{P,Q}} \tr\rho(g^\natural_m({\ov q^m;p})) \longrightarrow 0 \end{equation} as $P\rightarrow +\infty$. \par Now, for each $p$, we can interpret the sum over $q$ as the sum of the weight $$ K_{\rho}(q)=\tr\rho(g^\natural_m({\ov q^m;p})) $$ modulo $p$. 
We claim that, for each $\rho\not=1$, the weight $K_{\rho}$ is a non-exceptional irreducible trace weight modulo $p$ with conductor bounded by a constant depending only on $m$ and $\rho$. Assuming this, Theorem~\ref{primesumthm} (see~\eqref{primesuminterval}) gives $$ \sum_{(p,q)\in X_{P,Q}} \tr\rho(g^\natural_m({\ov q^m;p})) \ll \frac{PQ}{\log P}P^{-\eta}\Bigl(1+\frac{P}{Q}\Bigr)^{1/12} $$ for any $\eta<1/48$. Dividing by $|X_{P,Q}|\asymp PQ/((\log P)(\log Q))$, we get $$ \frac{1}{|X_{P,Q}|} \sum_{(p,q)\in X_{P,Q}} \tr\rho(g^\natural_m({\ov q^m;p}))\ll (\log Q)(1+P/Q)^{1/12}P^{-\eta}, $$ which tends to $0$ provided $P^{3/4+\delta}<Q<P^A$ for some $\delta>0$, $A\geq 1$. \par To check the claim, we first define $$ \mathcal{K}\ell_m'=[x\mapsto x^{-m}]^*\mathcal{K}\ell_m $$ so that, for $a\in{\mathbf{F}^\times_p}$, we have the trace function $$ \iota(\frtr{\mathcal{K}\ell'_m}{{\mathbf{F}_p}}{a})=(-1)^{m-1}\hypk_m(a^{-m};p). $$ \par The function $$ K_{\rho}\,:\, a\mapsto \tr(\rho(g^\natural_m(a^{-m};p))) $$ is then (the restriction to ${\mathbf{F}^\times_p}$ of) the irreducible trace weight associated to the sheaf $\rho(\mathcal{K}\ell'_m)$ obtained by composing the monodromy representation of $\mathcal{K}\ell'_m$ with $\rho$. In particular, this sheaf is also lisse and geometrically irreducible on $\mathbf{G}_m$, and of rank $\dim\rho$. It is tame at $\infty$, and its Swan conductor at $0$ is bounded in terms of $m$ and $\dim\rho$ only (by bounding the largest slope, see e.g.~\cite{MichelInv}), so the conductor is bounded in terms of $m$ and $\dim\rho$ only. Finally, because $\rho(\mathcal{K}\ell'_m)$ is irreducible of rank $\dim\rho\geq 2$ (we use here the fact that both $\SL_m$ and $\Sp_m$ have no non-trivial representations of dimension $1$), it follows that $\rho(\mathcal{K}\ell'_m)$ is not $p$-exceptional.
\end{proof} \section{Results from algebraic geometry}\label{sec-prelim} \subsection{Properties of the Fourier-M\"obius group} The goal of this section is to prove Proposition~\ref{correlationprop}. In order to do so, we must first recall the definition of the Fourier-M\"obius group of an isotypic sheaf $\sheaf{F}$, and establish a few of its properties which were not necessary in~\cite{FKM}. \par Let $p$ be a prime. Let $\ell\not=p$ be an auxiliary prime number, $\iota\,:\, \bar{\mathbf{Q}}_{\ell}\simeq \mathbf{C}$ an isomorphism. Let $\psi$ be the $\ell$-adic additive character such that $\iota(\psi(x))=e(x/p)$ for $x\in{\mathbf{F}_p}$. \par Given any middle-extension sheaf $\sheaf{F}$ on $\mathbf{A}^1_{{\mathbf{F}_p}}$, any finite extension $k/{\mathbf{F}_p}$ and any $x\in \mathbf{P}^1(k)$, we denote by $$ \frtr{\sheaf{F}}{k}{x} $$ the trace of the geometric Frobenius of $k$ acting on the stalk of $\sheaf{F}$ at $x$. We also denote by $\dual(\sheaf{F})$ the middle-extension dual of $\sheaf{F}$ given by $j_*(\check{j^*\sheaf{F}})$, where $j\,:\, U\hookrightarrow \mathbf{P}^1$ is the inclusion of any dense open set $U$ on which $\sheaf{F}$ is lisse. \par If $\sheaf{F}$ is any Fourier sheaf (in the sense of~\cite[Def. 8.2.2]{GKM}) on $\mathbf{A}^1_{{\mathbf{F}_p}}$, we denote by $\ft(\sheaf{F})$ the Fourier transform of $\sheaf{F}$, computed by means of $\psi$, which satisfies $$ \frtr{\ft(\sheaf{F})}{{\mathbf{F}_p}}{y}= -\sum_{x\in {\mathbf{F}_p}}{\frtr{\sheaf{F}}{{\mathbf{F}_p}}{x}\psi(xy)} $$ for any $y\in{\mathbf{F}_p}$. It follows from \cite[8.4.1]{GKM} that $\ft(\sheaf{F})$ is geometrically isotypic (resp. geometrically irreducible) if $\sheaf{F}$ is isotypic (resp. geometrically irreducible). \par Let now $\sheaf{F}$ be an isotypic trace sheaf modulo $p$ as in Definition~\ref{def-admissible}.
In~\cite{FKM}, we defined the Fourier-M\"obius group of $\sheaf{F}$ by $$ \mathbf{G}_{\sheaf{F}}=\{\gamma\in\PGL_2(\bar{\mathbf{F}}_p)\,\mid\, \gamma^*(\ft(\sheaf{F}))\simeq \ft(\sheaf{F})\}, $$ where $\simeq$ denotes geometric isomorphism (see~\cite[Def. 1.14]{FKM}). Furthermore, we defined the correlation sums of $\sheaf{F}$ by $$ \mathcal{C}(\sheaf{F};\gamma)= \frac{1}{p} \sum_{x\in{\mathbf{F}_p}}{\frtr{\ft(\sheaf{F})}{{\mathbf{F}_p}}{\gamma\cdot x} \overline{\frtr{\ft(\sheaf{F})}{{\mathbf{F}_p}}{x}}} $$ for $\gamma\in\PGL_2({\mathbf{F}_p})$. \par The crucial link between these two notions is the following result (see~\cite[Cor. 9.2]{FKM}) which follows from the Riemann Hypothesis over finite fields, and from bounds for the conductor of the Fourier transforms of Fourier sheaves. \begin{proposition} Let $p$ be a prime and let $\mathcal{F}$ be an isotypic trace sheaf modulo $p$. There exists $M\geq 1$, which depends only, polynomially, on $\cond(\sheaf{F})$, such that $$ |\iota(\mathcal{C}(\sheaf{F};\gamma))|\leq M\sqrt{p} $$ for all $\gamma\notin \mathbf{G}_{\sheaf{F}}$. \end{proposition} Let then $$ \mathbf{B}_{\sheaf{F}}=\mathbf{G}_{\sheaf{F}}\cap \mathbf{B}, $$ where $\mathbf{B}\subset \PGL_2$ is the upper-triangular Borel subgroup. We deduce from the proposition above: \begin{proposition} Let $p$ be a prime, let $\sheaf{F}$ be an isotypic trace sheaf modulo $p$, and let $K(x)=\iota(\frtr{\sheaf{F}}{{\mathbf{F}_p}}{x})$ denote the trace function of $\sheaf{F}$ on ${\mathbf{F}_p}$. There exists $M\geq 1$, depending only, polynomially, on $\cond(\sheaf{F})$, such that $$ \Bigl|\sum_{x\in{\mathbf{F}_p}}{K(x)\overline{K(ax)}e\Bigl(\frac{bx}{p}\Bigr)} \Bigr|\leq M\sqrt{p} $$ if \begin{equation}\label{eq-borel} \begin{pmatrix} a&b\\0&1 \end{pmatrix}\notin \mathbf{B}_{\sheaf{F}}. 
\end{equation} \end{proposition} \begin{proof} By means of the Plancherel formula for the finite-field Fourier transform, we check easily that $$ \sum_{x\in{\mathbf{F}_p}}{K(x)\overline{K(ax)}e\Bigl(\frac{bx}{p}\Bigr)} =\iota(\mathcal{C}(\sheaf{F};\gamma)) $$ where $\gamma$ is the upper-triangular matrix in~(\ref{eq-borel}). Hence the proposition gives the result. \end{proof} It follows now that Proposition~\ref{correlationprop} is a consequence of the next theorem: \begin{theorem}\label{th-bound-bad} Let $p$ be a prime and let $\sheaf{F}$ be an isotypic sheaf. At least one of the following four properties holds: \par \emph{(1)} The trace function of $\sheaf{F}$ is proportional to a delta function at some point $a\in{\mathbf{F}_p}$, or to the trace function of a sheaf $\sheaf{L}_{\psi(aX)}$ for some $a\in{\mathbf{F}_p}$, i.e., to an additive character; \par \emph{(2)} The group $\mathbf{B}_{\sheaf{F}}$ has dimension $\geq 1$ and $\sheaf{F}$ is $p$-exceptional, i.e., its unique geometrically irreducible component is a tensor product $\sheaf{L}_{\chi}\otimes\sheaf{L}_{\eta}$ for some non-trivial Kummer sheaf $\sheaf{L}_{\chi}$ and some possibly trivial additive character $\eta$; \par \emph{(3)} The group $\mathbf{B}_{\sheaf{F}}$ is finite and $$ |\mathbf{B}_{\sheaf{F}}({\mathbf{F}_p})|\leq 10\cond(\sheaf{F})^2; $$ \par \emph{(4)} The conductor of $\sheaf{F}$ is at least $(p/10)^{1/2}$. \end{theorem} To prove this, we first prove two basic properties of the Fourier-M\"obius group and one lemma concerning Swan conductors. \begin{proposition}\label{pr-algebraic} Let $k$ be a finite field, and let $\sheaf{F}$ be an $\ell$-adic isotypic trace sheaf on $\mathbf{A}^1_k$. Let $\sheaf{G}$ be its Fourier transform. Then the subgroup $\mathbf{G}_{\sheaf{F}}\subset \PGL_2(\bar{k})$ is an algebraic subgroup defined over $k$. \par In particular, for $\sheaf{F}$ over ${\mathbf{F}_p}$, $\mathbf{B}_{\sheaf{F}}$ is an algebraic subgroup of $\mathbf{B}$ defined over ${\mathbf{F}_p}$. 
\end{proposition} We thank R. Pink for explaining to us how to prove this proposition. \begin{proof} Let $S\subset \mathbf{P}^1$ be the divisor of singularities of $\sheaf{G}$, so that $U=\mathbf{P}^1-S$ is the largest open set on which it is lisse. Because $\sheaf{G}$ is non-constant (the sheaf $\sheaf{F}$ would have to be a Dirac delta sheaf supported on a single point for this to happen, and such a sheaf is not a Fourier sheaf), we have $S\not=\emptyset$. Let $G\subset \PGL_2$ be the stabilizer of $S$, which is a proper algebraic subgroup of $\PGL_2$ defined over $k$. Then we have a first inclusion $\mathbf{G}_{\sheaf{F}}\subset G$. \par Now we work over $\bar{k}$, and just denote by $U$ its base-change to $\bar{k}$. We consider the action morphism $$ \mu\,:\,\begin{cases} G\times U\longrightarrow U\\ (\gamma,x)\mapsto \gamma\cdot x \end{cases} $$ and the projections $p_1\,:\, G\times U\longrightarrow G$ and $p_2\,:\, G\times U\longrightarrow U$, and we define the sheaf $$ \sheaf{E}=\mu^*\sheaf{G}\otimes p_2^*\dual(\sheaf{G}) $$ on $G\times U$ and the higher direct-image $\sheaf{I}=R^2p_{1,!}\sheaf{E}$, which is a sheaf on the algebraic group $G/\bar{k}$. By the base-change theorem for higher-direct images with compact support~\cite[Arcata, IV, Th. 5.4]{deligne}, the stalk of $\sheaf{I}$ at a geometric point $\gamma\in G(\bar{k})$ is naturally isomorphic to $H^2_c(U,\gamma^*\sheaf{G}\otimes\dual(\sheaf{G}))$. \par Furthermore, the constructibility theorem for higher direct images with compact support~\cite[Arcata, IV, Th. 6.2]{deligne} shows that $\sheaf{I}$ is a constructible $\ell$-adic sheaf on $G$. This implies (see also~\cite[Rapport, Prop. 2.5]{deligne}) that for any $d\geq 0$, the set $$ \{\gamma\in G(\bar{k})\,\mid\, \dim \sheaf{I}_{\gamma}=\dim H^2_c(U, \gamma^*\sheaf{G}\otimes\dual(\sheaf{G}))=d\} $$ is constructible in $G(\bar{k})$, i.e., is a finite union of locally-closed subsets.
In particular, the set of all $\gamma$ where $$ H^2_c(U,\gamma^*\sheaf{G}\otimes\dual(\sheaf{G}))\not=0, $$ is constructible. But this set is exactly $\mathbf{G}_{\sheaf{F}}$ by the co-invariant formula for $H^2_c$ on a curve (see~\cite[Th. 9.1]{FKM}). Since it is well-known that a constructible subgroup of an algebraic group is Zariski-closed (see, e.g.,~\cite[Ch. I, Prop. 1.3]{borel}), we therefore conclude that $\mathbf{G}_{\sheaf{F}}$ is a closed subgroup of $\PGL_2$. \par Finally, $\mathbf{G}_{\sheaf{F}}$ is defined over $k$: since $\sheaf{F}$ is invariant under the Frobenius automorphism of $k$, the definition implies that if $\gamma\in \mathbf{G}_{\sheaf{F}}$, then the image of $\gamma$ under the Frobenius automorphism also lies in $\mathbf{G}_{\sheaf{F}}$. \end{proof} Next we need to understand when $\mathbf{G}_{\sheaf{F}}$ can be ``large''. We prove here a bit more than what we need for the sake of completeness. We use the notation $\rmT^{x,y}$ for the maximal torus in $\PGL_2$ defined as the pointwise stabilizer of $\{x,y\}\subset \mathbf{P}^1$ (for $x\not=y$) and $\rmU^x$ for the unipotent radical of the Borel subgroup $\mathbf{B}^x$ which is the stabilizer of $x\in \mathbf{P}^1$. \begin{proposition}\label{pr-big-fourier} Let $\sheaf{F}$ be a geometrically isotypic $\ell$-adic Fourier sheaf on $\mathbf{A}^1_{\mathbf{F}_p}$, with Fourier transform $\sheaf{G}=\ft_{\psi}(\sheaf{F})$ with respect to some non-trivial additive character $\psi$. \par \emph{(1)} If there exists $x\in \mathbf{P}^1$ such that $\mathbf{G}_{\sheaf{F}}\supset \rmU^x$, then $\sheaf{G}$ is geometrically isomorphic to a direct sum of copies of $\sheaf{L}_{\psi_0(\gamma_0(X))}$ for some non-trivial additive character $\psi_0$, where $\gamma_0\in \PGL_2$ is such that $\gamma_0\cdot x=\infty$. In that case, we have $\mathbf{G}_{\sheaf{F}}=\rmU^x$.
\par \emph{(2)} If there exist $x\not=y$ in $\mathbf{P}^1$ such that $\mathbf{G}_{\sheaf{F}}\supset \rmT^{x,y}$, then $\sheaf{G}$ is geometrically isomorphic to a direct sum of copies of $\sheaf{L}_{\chi_0(\gamma_0(X))}$ for some non-trivial multiplicative character $\chi_0$, where $\gamma_0\in \PGL_2$ is such that $\gamma_0\cdot x=0$, $\gamma_0\cdot y=\infty$. In that case, we have $\mathbf{G}_{\sheaf{F}}=\rmT^{x,y}$ if $\chi_0$ is not of order $2$, and $\mathbf{G}_{\sheaf{F}}=\rmN^{x,y}$, the normalizer of $\rmT^{x,y}$, if $\chi_0^2=1$. \end{proposition} \begin{proof} (1) The ``if'' direction is immediate. For the converse, we may first assume that $x=\infty$, by conjugation with a matrix $\gamma_0$ with $\gamma_0\cdot x=\infty$. The assumption is then that $$ \begin{pmatrix}1&t\\ 0&1\end{pmatrix}^*\sheaf{G}\simeq \sheaf{G}, $$ for any $t\in\bar{\mathbf{F}}_p$, where the symbol $\simeq$ denotes geometric isomorphism. Since $\sheaf{G}$ is geometrically isotypic, we also have $$ \begin{pmatrix}1&t\\ 0&1\end{pmatrix}^*\sheaf{G}_1\simeq \sheaf{G}_1, $$ for $t\in \bar{\mathbf{F}}_p$, where $\sheaf{G}_1$ is the geometrically irreducible component of $\sheaf{G}$. We can then apply~\cite[Lemma 2.6.13]{katz-rls} to deduce that $$ \sheaf{G}_1\simeq\sheaf{L}_{\psi_0(X)} $$ (geometrically) for some additive $\ell$-adic character $\psi_0$, and hence $\sheaf{G}$ is a direct sum of copies of this Artin-Schreier sheaf. Furthermore, it follows from the classification of Artin-Schreier sheaves that if $\psi_0$ is non-trivial and $\gamma\notin \rmU^{\infty}$, we do not have $\gamma^*\sheaf{L}_{\psi_0(X)}\simeq \sheaf{L}_{\psi_0(X)}$, and therefore the Fourier-M\"obius group is exactly equal to $\rmU^{\infty}$. 
\par (2) As before, we may first conjugate using some $\gamma_0$ to reduce to the case where $x=0$, $y=\infty$, and we may reduce to the case where $\sheaf{F}$ and $\sheaf{G}$ are geometrically irreducible, so that the assumption is $$ \mathbf{G}_{\sheaf{F}}\supset \rmT=\rmT^{0,\infty}=\Bigl\{\begin{pmatrix}a&0\\0&d \end{pmatrix}\,\mid\, a,d\in\bar{k}^{\times}\Bigr\}. $$ By~\cite[Lemma 2.6.13]{katz-rls}, again, there exists a multiplicative character $\chi_0$ such that $$ \sheaf{G}\simeq \sheaf{L}_{\chi_0(X)}. $$ \par This character is non-trivial since $\sheaf{G}$ is a Fourier sheaf. Now to finish the computation of $\mathbf{G}_{\sheaf{F}}$, we use the fact that $\sheaf{L}_{\chi_0(X)}$ is tamely ramified at $0$ and $\infty$, and hence $$ \mathbf{G}_{\sheaf{F}}\subset \rmN=\rmN^{0,\infty}=\rmT\cup \Bigl\{ \begin{pmatrix} 0&b\\c&0 \end{pmatrix} \Bigr\}, $$ the normalizer of $\rmT$ in $\PGL_2$. Clearly, $\rmT\subset \mathbf{G}_{\sheaf{L}_{\chi_0(X)}}$. If $\gamma\in \rmN-\rmT$, on the other hand, we have $\gamma^*\sheaf{L}_{\chi_0(X)}\simeq \sheaf{L}_{\chi_0(X^{-1})}$, and by the classification of Kummer sheaves, it follows that $\gamma\in \mathbf{G}_{\sheaf{L}_{\chi_0(X)}}$ if and only if $\chi_0=\chi_0^{-1}$, i.e., if $\chi_0$ is of order $2$. \end{proof} The second lemma concerns the size of Swan conductors of lisse sheaves on $\mathbf{G}_m$ with some non-trivial (multiplicative) translation-invariance property. \begin{lemma}\label{lm-mult-invariant} Let $k$ be an algebraic closure of a finite field of characteristic $p$, and let $\sheaf{F}$ be an $\ell$-adic sheaf for some $\ell\not=p$ which is lisse on $\mathbf{G}_{m,k}$. If there exists $a\not=1$ in $\mathbf{G}_m(k)$ such that $\sheaf{F}\simeq [\times a]^*\sheaf{F}$, then $m\mid \swan_{\infty}(\sheaf{F})$, where $m$ is the multiplicative order of $a$. In particular, if $\sheaf{F}$ is not tame at $\infty$, we have $\swan_{\infty}(\sheaf{F})\geq m$.
\end{lemma} \begin{proof} Let $V$ be the generic stalk of $\sheaf{F}$, seen as a representation of the inertia group $I=I(\infty)$ at $\infty$, and let $$ V=\bigoplus_{\alpha\in A}{V_{\alpha}} $$ be the decomposition of $V$ into $I$-isotypic subspaces. Let $W_{\alpha}$ denote the irreducible $I$-representation such that $V_{\alpha}$ is a multiple of $W_{\alpha}$. \par The finite cyclic subgroup $G\subset \mathbf{G}_m(k)$ of order $m$ generated by $a$ acts on the index set $A$, corresponding to the fact that $[\times a]^*V=V$ as $I$-representation: we have $$ [\times a^j]^*V_{\alpha}=V_{a^j\cdot \alpha}, $$ for any integer $j\geq 0$, and in fact even $$ [\times a^j]^*W_{\alpha}=W_{a^j\cdot \alpha}, $$ since $W_{\alpha}$ is uniquely determined by $V_{\alpha}$. \par Let $B\subset A$ be one of the orbits of $G$. Its size $|B|$ is a divisor of $m$, and if $\alpha\in B$, we have an isomorphism $[\times a^{|B|}]^*W_{\alpha}=W_{\alpha}$. Since $W_{\alpha}$ is irreducible, we can apply~\cite[Prop. 4.1.6 (2)]{GKM} to deduce that $$ \swan_{\infty}(W_{\alpha})\equiv 0\mods{m/|B|}. $$ \par Since multiplicative translation by $a$ is an automorphism, it follows that $$ \swan_{\infty}(W_{a^j\cdot \alpha})=\swan_{\infty}(W_{\alpha})\equiv 0\mods{m/|B|} $$ for any $a^j\in G$. Summing over the orbit, we get $$ \swan_{\infty}\Bigl(\bigoplus_{\alpha\in B}{V_{\alpha}}\Bigr)\equiv 0\mods{m}, $$ and then summing over the orbits we get $$ \swan_{\infty}(V)\equiv 0\mods{m}. $$ \par If $\sheaf{F}$ is wild at infinity, then $\swan_{\infty}(V)\not=0$, and therefore it must be $\geq m$. \end{proof} Having dealt with these preliminaries, we can now prove the theorem. \begin{proof}[Proof of Theorem~\ref{th-bound-bad}] The group $B=\mathbf{B}_{\sheaf{F}}({\mathbf{F}_p})$ is a finite subgroup of $\mathbf{B}\cap \PGL_2({\mathbf{F}_p})$. We distinguish three situations in turn.
\par (1) If $B$ contains a non-trivial unipotent element $g$, then since $g$ fixes $\infty$, the reasoning in~\cite[\S 9, Proof of Th. 1.12]{FKM} shows that either $\cond(\sheaf{G})\geq p$, in which case the fourth case holds by~\cite[Prop. 8.2 (1)]{FKM}, or otherwise the trace function of the Fourier transform $\ft_{\psi}(\sheaf{F})$ is proportional to an additive character, so that the trace function of $\sheaf{F}$ is proportional to a delta function, and we are in the first case. \par Now, if $B$ contains no unipotent elements, the unipotent radical of $\mathbf{B}_{\sheaf{F}}$ must also be trivial (otherwise it would have non-trivial ${\mathbf{F}_p}$-points). So, by the structure of $\mathbf{B}$, the connected component of the identity $\mathbf{B}_{\sheaf{F}}^{\circ}$ of $\mathbf{B}_{\sheaf{F}}$ is contained in a conjugate (say $\mathbf{D}$) of the diagonal subgroup in $\mathbf{B}$. Since $\mathbf{D}$ has dimension $1$, there are two further possibilities: \par (2) If $\mathbf{B}_{\sheaf{F}}^{\circ}=\mathbf{D}$, so that $\mathbf{B}_{\sheaf{F}}\supset \mathbf{D}$, we deduce from Proposition~\ref{pr-big-fourier} (2) that the Fourier transform of $\sheaf{F}$ is geometrically isomorphic to a direct sum of copies of $\sheaf{L}_{\chi(\gamma(X))}$ for some multiplicative character $\chi$ and some $\gamma\in \mathbf{B}$. By Fourier transform, this implies that $\sheaf{F}$ is geometrically isomorphic to a direct sum of copies of the tensor product $\sheaf{L}_{\chi}\otimes\sheaf{L}_{\eta}$ for some multiplicative character $\chi$ and some additive character $\eta$. Here $\chi$ must be non-trivial because otherwise $\sheaf{F}$ would not be a Fourier sheaf, and we are in the second case of the statement of the proposition. \par (3) Otherwise, $\mathbf{B}_{\sheaf{F}}$ is a finite group so that its finite subgroup $B\subset \mathbf{D}$ is cyclic, and there exists $x_0\in\mathbf{A}^1$ such that all elements of $B$ fix $\infty$ and $x_0$. 
Let $\sheaf{G}$ be the Fourier transform of $\sheaf{F}$. Replacing $\sheaf{G}$ with $\sheaf{G}_0=[-x_0]^*\sheaf{G}$, which has the same conductor as $\sheaf{G}$, we can assume that $x_0=0$, and hence that $B$ can be identified with a finite cyclic subgroup of ${\mathbf{F}^\times_p}$ acting on $\mathbf{P}^1$ by multiplication. Let $a\in {\mathbf{F}^\times_p}$ be a generator of $B\subset {\mathbf{F}^\times_p}$. There are two subcases: \par -- (3.1) If $\sheaf{G}_0$ is not lisse on $\mathbf{G}_m$, there is a non-zero singularity $s\in\mathbf{G}_m$ of $\sheaf{G}_0$; the geometric isomorphism $\sheaf{G}_0\simeq [\times a]^*\sheaf{G}_0$ implies that the orbit of $s$ under multiplication by powers of $a$ is also contained in the set $S$ of singularities of $\sheaf{G}_0$. This set contains $\geq |B|$ elements, and therefore $$ \cond(\sheaf{G})=\cond(\sheaf{G}_0)\geq |S|\geq |B| $$ in that case, and by~\cite[Prop. 8.2 (1)]{FKM}, we get $$ |B|\leq \cond(\sheaf{G})\leq 10\cond(\sheaf{F})^2, $$ i.e., case (3) of the theorem. \par -- (3.2) If $\sheaf{G}_0$ is lisse on $\mathbf{G}_m$, we first note that $\sheaf{G}_0$ cannot be tame at both $0$ and $\infty$, since the tame fundamental group of $\mathbf{G}_m$ is abelian and $\sheaf{G}_0$ would then be a Kummer sheaf, which we excluded by assuming that $\mathbf{B}_{\sheaf{F}}$ is finite (again from Proposition~\ref{pr-big-fourier}, (2)). Up to applying a further automorphism $x\mapsto x^{-1}$, we can assume that $\sheaf{G}_0$ is wildly ramified at $\infty$. We can then apply Lemma~\ref{lm-mult-invariant} to $\sheaf{G}_0$, and deduce that $$ \swan_{\infty}(\sheaf{G}_0)\geq |B|, $$ and hence we get again $$ \cond(\sheaf{G})=\cond(\sheaf{G}_0)\geq \swan_{\infty}(\sheaf{G}_0)\geq |B|, $$ and conclude as before. \end{proof} \subsection{Decomposition of characteristic functions} \label{subsec-decompositions-poly} In this section, we explain the necessary properties of the trace weights underlying Corollary~\ref{cor-poly-error-terms}. 
We recall especially the decomposition of the characteristic function of the set of values $P(n)$ of a polynomial $P\in{\mathbf{F}_p}[X]$ with $n\in{\mathbf{F}_p}$ in terms of trace functions. These types of results are well-known, but we give the full proof since we require some quantitative information concerning this decomposition. \begin{proposition}\label{pr-decomp-poly} Let $p$ be prime and let $P\in{\mathbf{F}_p}[X]$ be a non-constant polynomial of degree $\deg P<p$. Let $\mathcal{P}$ be the set of values of $P$ modulo $p$ and let $\mathbf{1}_P$ be its characteristic function. \par There exist a finite set $S\subset {\mathbf{F}_p}$ with order at most $\deg P$, an integer $k\geq 1$ and a finite number of trace functions $K_i$ associated to middle-extension sheaves $\sheaf{F}_i$, $1\leq i\leq k$, which are pointwise pure of weight $0$, and algebraic numbers $c_i\in\bar{\mathbf{Q}}$, such that \begin{equation}\label{eq-decomp-charfun} \sum_{i}c_iK_i(x)=\mathbf{1}_{P}(x) \end{equation} for all $x\in {\mathbf{F}_p}-S$, and with the following properties: \par -- The constants $k$, $|c_i|$ and $\cond(\sheaf{F}_i)$ are bounded in terms of $\deg P$ only; \par -- The sheaf $\sheaf{F}_1$ is trivial and none of the $\sheaf{F}_i$ for $i\not=1$ are geometrically trivial, and furthermore \begin{equation}\label{eq-size-c1} c_1=\frac{|\mathcal{P}|}{p}+O(p^{-1/2}), \end{equation} where the implicit constant depends only on $\deg P$; \par -- If $P$ is squarefree, no $\sheaf{F}_i$, $i\not=1$, contains an exceptional sheaf as a Jordan-Hölder factor. \end{proposition} \begin{proof} Let $K(x)$, for $x\in {\mathbf{F}_p}$, denote the characteristic function of the set of values $P(y)$ for $y\in {\mathbf{F}_p}$, so that we are trying to express $K$ as a linear combination of trace weights. 
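Before entering the general argument, the simplest non-trivial case $P(X)=X^2$ may help fix ideas: there the decomposition~(\ref{eq-decomp-charfun}) is the classical identity $\mathbf{1}_P(x)=\tfrac{1}{2}+\tfrac{1}{2}\chi_2(x)$ for $x\notin S=\{0\}$, where $\chi_2$ is the Legendre symbol (the trace function of the quadratic Kummer sheaf), so that $c_1=\tfrac{1}{2}$ and $|\mathcal{P}|=(p+1)/2$. A short numerical verification in Python (the prime $103$ is an arbitrary choice):

```python
p = 103  # any odd prime

# Set of values P(n) = n^2 mod p: the quadratic residues together with 0.
values = {(n * n) % p for n in range(p)}
assert len(values) == (p + 1) // 2  # |P| = (p+1)/2

def legendre(x):
    """Legendre symbol (x/p) via Euler's criterion; this is the trace
    function of the quadratic Kummer sheaf L_{chi_2}."""
    if x % p == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

# Decomposition 1_P(x) = 1/2 + (1/2) chi_2(x), valid away from the set of
# critical values S = {0}; here c_1 = 1/2 = |P|/p + O(1/p).
for x in range(1, p):
    indicator = 1 if x in values else 0
    assert indicator == (1 + legendre(x)) / 2
print("decomposition verified for p =", p)
```
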
\par Let $\tilde{D}\subset \mathbf{A}^1$ be the set of critical points of $P$, $\tilde{S}=P(\tilde{D})\subset \mathbf{A}^1$ the set of critical values, so that $P$ restricts to a finite \'etale covering $$ V=\mathbf{A}^1-\tilde{D}\longrightarrow U=\mathbf{A}^1-\tilde{S} $$ and let $$ W\fleche{\pi} V\longrightarrow U $$ be the Galois closure of $V$. The Galois group $G=\Gal(W/U)$ contains the subgroup $H=\Gal(W/V)$, and has order dividing $\deg(P)!$, hence coprime to $p$. \par For any $x\in U({\mathbf{F}_p})$, the Galois group $G$ permutes the points of the fiber $\pi^{-1}(x)\subset W$, and this Galois action is isomorphic to the left-translation action on $G/H$. The Frobenius $\frob_{x,p}$ at $x$, seen as an element of $G$, also permutes the points of the fiber, and the subset of rational points $\pi^{-1}(x)\cap W({\mathbf{F}_p})$ corresponds bijectively to the fixed points of $\frob_{x,p}$, and hence the number of fixed points of $\frob_{x,p}$ acting on $G/H$ is equal to the number of conjugates of $\frob_{x,p}$ that are in $H$. \par More generally, if we consider the function $$ \theta\,:\, \begin{cases} G\longrightarrow \bar{\mathbf{Q}}_{\ell}\\ g\mapsto \begin{cases} 1&\text{ if $g$ is conjugate to \emph{some} $h\in H$}\\ 0&\text{ otherwise,} \end{cases} \end{cases} $$ the same argument implies that we have $$ K(x)=\theta(\frob_{x,p}) $$ for all $x\in U({\mathbf{F}_p})$. \par The function $\theta$ is invariant under $G$-conjugation. Hence, by character theory (since $\ell\not=p$, the $\bar{\mathbf{Q}}_{\ell}$-linear representations of $G$ can be identified with the $\mathbf{C}$-linear representations) there exist coefficients $c_{\rho}$ such that $$ \theta=\sum_{\rho}{c_{\rho}\chi_{\rho}} $$ where $\rho$ runs over isomorphism classes of irreducible $\bar{\mathbf{Q}}_{\ell}$-linear representations $$ \rho\,:\, G\longrightarrow \GL(V_{\rho}) $$ of $G$ and $\chi_{\rho}=\Tr\rho$ denotes the character of $\rho$.
By composition $$ \Lambda_{\rho}\,:\, \pi_1(U)\longrightarrow \pi_1(U)/\pi_1(W)\simeq G\fleche{\rho} \GL(V_{\rho}) $$ each $\rho$ determines an $\ell$-adic lisse sheaf $\Lambda_{\rho}$ on $U$ which is pointwise pure of weight $0$ and satisfies $$ \chi_{\rho}(\frob_{x,p})=\frtr{\Lambda_{\rho}}{{\mathbf{F}_p}}{x} $$ for all $x\in U({\mathbf{F}_p})$. We therefore obtain $$ K(x)=\sum_{\rho}{c_{\rho}K_{\rho}(x)} $$ for $x\in U({\mathbf{F}_p})$, where $K_{\rho}$ is the trace function of $\Lambda_{\rho}$. \par We rearrange this slightly for convenience. Let $\mathcal{T}$ denote the set of $\rho$ such that $\Lambda_{\rho}$ is geometrically trivial. We know that $K_{\rho}$ is a constant of weight $0$, say $\alpha_{\rho}$, for $\rho\in\mathcal{T}$, and we define $\sheaf{F}_1=\bar{\mathbf{Q}}_{\ell}$, so $K_1(x)=1$, and $$ c_1=\sum_{\rho\in\mathcal{T}}{c_{\rho}\alpha_{\rho}}. $$ \par Then we enumerate arbitrarily $$ \{\rho\notin\mathcal{T}\}=\{\rho_2,\ldots, \rho_k\} $$ and take $\sheaf{F}_i=j_*\Lambda_{\rho_i}$ where $j\,:\, U\hookrightarrow \mathbf{A}^1$ is the open immersion, and $c_i=c_{\rho_i}$. This gives the desired decomposition~(\ref{eq-decomp-charfun}) with $S=\tilde{S}({\mathbf{F}_p})$, which has $\leq |\tilde{S}|\leq \deg P$ elements. \par We now bound the numerical invariants in this decomposition. First, note that the number of non-zero summands is at most the number of $\rho$, i.e., the number of conjugacy classes in $G$, and hence is bounded in terms of $\deg P$ only. For any $\rho$ we have $$ |c_{\rho}|=\Bigl|\frac{1}{|G|} \sum_{g\in G}{\theta(g)\overline{\chi_{\rho}(g)}} \Bigr|\leq \dim\rho\leq \sqrt{|G|} $$ which is bounded in terms of $\deg P$ only (using very trivial bounds $|\chi_{\rho}(g)|\leq \dim \rho$, $|\theta(g)|\leq 1$ and the fact that the sum of squares of $\dim\rho$ is equal to $|G|$).
And since $p\nmid |G|$, all sheaves $\Lambda_{\rho}$ are tame, and since they are unramified outside $S$, we get $$ \cond(\Lambda_{\rho})\leq |S|+\dim\rho $$ which is again bounded in terms of $\deg P$ only. \par Moreover, none of the sheaves $\Lambda_{\rho}$ can contain a Jordan-H\"older factor geometrically isomorphic to $\sheaf{L}_{\chi(X)}\otimes\sheaf{L}_{\psi(X)}$ with $\psi$ non-trivial, since the $\Lambda_{\rho}$ are tamely ramified everywhere. If we assume that $P$ is squarefree, $0$ is not a critical value, and all the sheaves $\Lambda_{\rho}$ are unramified at $0$ and therefore cannot have a non-trivial Kummer sheaf as (geometric) Jordan-Hölder factor. Thus the sheaf $\sheaf{F}_i$ does not contain an exceptional factor in this case. \par We conclude by proving~(\ref{eq-size-c1}): we have \begin{align*} |\mathcal{P}\cap U({\mathbf{F}_p})|&=\sum_{x\in U({\mathbf{F}_p})}\theta(\frob_{x,p})\\ &=\sum_{\rho}c_{\rho}\sum_{x\in U({\mathbf{F}_p})}{K_{\rho}(x)}\\ &=c_1|U({\mathbf{F}_p})|+\sum_{\rho\notin \mathcal{T}} {c_{\rho}\sum_{x\in U({\mathbf{F}_p})}{K_{\rho}(x)}}. \end{align*} \par For each $\rho$ which is not geometrically trivial, we can apply the Riemann Hypothesis to the inner sum, which shows it is $\ll p^{1/2}$ with an implicit constant that depends only on $\deg P$ (since the conductor of $\Lambda_{\rho}$ is bounded in terms of $\deg P$ only). Since the number of $\rho$ and the constants $c_{\rho}$ are also bounded in terms of $\deg P$ only, we obtain $$ c_1=\frac{|\mathcal{P}\cap U({\mathbf{F}_p})|}{|U({\mathbf{F}_p})|}+O(p^{1/2}|U({\mathbf{F}_p})|^{-1}), $$ hence the result since $p-\deg P\leq |U({\mathbf{F}_p})|\leq p$. \end{proof} \begin{bibdiv} \begin{biblist} \bib{blomer}{article}{ author={Blomer, V.}, title={Subconvexity for twisted L-functions on GL(3)}, journal={Am. Journ. 
Math.}, volume={134}, date={2012}, number={5}, pages={1385--1421}, } \bib{borel}{book}{ author={Borel, A.}, title={Linear algebraic groups}, publisher={Springer}, series={Graduate Texts in Math.}, volume={126}, year={1991}, } \bib{bourgainmore}{article}{ author={Bourgain, J.}, title={More on the sum-product phenomenon in prime fields and its applications}, journal={Int. J. Number Theory}, volume={1}, date={2005}, number={1}, pages={1--32} } \bib{bourgain}{article}{ author={Bourgain, J.}, title={On the Fourier-Walsh spectrum of the Moebius function}, journal={Israel J. of Math.}, status={to appear}, } \bib{BG}{article}{ author={Bourgain, J.}, author={Garaev, M. Z.}, title={Sumsets of reciprocals in prime fields and multilinear Kloosterman sums}, journal={\url{arXiv:1211.4184 }}, date={2012}, } \bib{BSZ}{incollection}{ author={Bourgain, J.}, author={Sarnak, P.}, author={Ziegler, T.}, title={Disjointness of Moebius from horocycle flows}, booktitle={From Fourier analysis and number theory to Radon transforms and geometry}, series={Dev. Math.}, volume={28}, pages={67--83}, publisher={Springer}, year={2013}, } \bib{deligne}{book}{ author={Deligne, P.}, title={Cohomologie \'etale, SGA $4\ 1/2$}, publisher={Springer}, series={Lecture Notes in Mathematics}, volume={569}, year={1977}, } \bib{deligne-drinfeld}{misc}{ author={Deligne, P.}, title={letter to V. Drinfeld}, date={dated June 18, 2011}, pages={9 pages}, } \bib{EK}{article}{ author={Esnault, H.}, author={Kerz, M.}, title={A finiteness theorem for Galois representations of function fields over finite fields (after Deligne)}, journal={Acta Math. Vietnam}, volume={37}, date={2012}, pages={531--562}, } \bib{FouvryAM}{article}{ author={Fouvry, {\'E}.}, title={Autour du th\'eor\`eme de Bombieri-Vinogradov}, journal={Acta Math.}, volume={152}, date={1984}, number={3-4}, pages={219--244}, } \bib{FouvryCrelle}{article}{ author={Fouvry, {\'E}.}, title={Sur le probl\`eme des diviseurs de Titchmarsh}, journal={J. reine angew. 
Math.}, volume={357}, date={1985}, pages={51--76}, } \bib{FKM}{article}{ author={Fouvry, {\'E}.}, author={Kowalski, E.}, author={Michel, Ph.}, title={Algebraic twists of modular forms and Hecke orbits}, journal={Preprint \url{arXiv:1207.0617}}, date={2012}, } \bib{FKM1.5}{article}{ author={Fouvry, {\'E}.}, author={Kowalski, E.}, author={Michel, Ph.}, title={Counting sheaves using spherical codes}, journal={Math. Res. Letters}, status={to appear}, } \bib{FMAnn}{article}{ author={Fouvry, {\'E}.}, author={Michel, Ph.}, title={Sur certaines sommes d'exponentielles sur les nombres premiers}, journal={Ann. Sci. \'Ecole Norm. Sup. (4)}, volume={31}, date={1998}, number={1}, pages={93--130}, } \bib{FMAnnals}{article}{ author={Fouvry, {\'E}}, author={Michel, Ph.}, title={Sur le changement de signe des sommes de Kloosterman}, journal={Ann. of Math. (2)}, volume={165}, date={2007}, number={3}, pages={675--715}, } \bib{FoMiPac}{article}{ author={Fouvry, {\'E}.}, author={Michel, Ph.}, title={Sommes de modules de sommes d'exponentielles}, language={French, with English summary}, journal={Pacific J. Math.}, volume={209}, date={2003}, number={2}, pages={261--288}, doi={10.2140/pjm.2003.209.261}, } \bib{FoSh}{article}{ author={Fouvry, {\'E}.}, author={Shparlinski, I. E.}, title={On a ternary quadratic form over primes}, journal={Acta Arith.}, volume={150}, date={2011}, number={3}, pages={285--314}, } \bib{FGS}{article}{ author={Friedlander, J. B.}, author={Gong, K.}, author={Shparlinski{\u\i}, I.}, title={Character sums over shifted primes}, language={Russian, with Russian summary}, journal={Mat. Zametki}, volume={88}, date={2010}, number={4}, pages={605--619}, translation={ journal={Math. Notes}, volume={88}, date={2010}, number={3-4}, pages={585--598}, issn={0001-4346}, }, } \bib{FrIw}{article}{ author={Friedlander, J. B.}, author={Iwaniec, H.}, title={Incomplete Kloosterman sums and a divisor problem}, note={With an appendix by B.J. Birch and E. Bombieri}, journal={Ann. of Math. 
(2)}, volume={121}, date={1985}, number={2}, pages={319--350}, } \bib{green}{article}{ author={Green, B.J.}, title={On (not) computing the Moebius function using bounded depth circuits}, journal={Combinatorics, Probability and Computing}, volume={21}, date={2012}, pages={942--951}, } \bib{Harman}{article}{ author={Harman, G.}, title={Trigonometric sums over primes. I}, journal={Mathematika}, volume={28}, date={1981}, number={2}, pages={249--254 (1982)}, } \bib{HB}{article}{ author={Heath-Brown, D. R.}, title={Prime numbers in short intervals and a generalized Vaughan identity}, journal={Canad. J. Math.}, volume={34}, date={1982}, number={6}, pages={1365--1377}, } \bib{HNY}{article}{ author={Heinloth, J.}, author={Ng\^o, B.-C.}, author={Yun, Z.}, title={Kloosterman sheaves for reductive groups}, journal={Ann. of Math.}, date={2013}, volume={177}, pages={241--310}, } \bib{Hua}{book}{ author={Hua, L. K.}, title={Additive theory of prime numbers}, series={Translations of Mathematical Monographs, Vol. 13}, publisher={American Mathematical Society}, place={Providence, R.I.}, date={1965}, pages={xiii+190}, } \bib{IwaIntro}{book}{ author={Iwaniec, H.}, title={Introduction to the spectral theory of automorphic forms}, series={Biblioteca de la Revista Matem\'atica Iberoamericana}, publisher={Revista Matem\'atica Iberoamericana}, place={Madrid}, date={1995}, pages={xiv+247}, } \bib{KI}{book}{ author={Iwaniec, H.}, author={Kowalski, E.}, title={Analytic number theory}, series={American Mathematical Society Colloquium Publications}, volume={53}, publisher={American Mathematical Society}, place={Providence, RI}, date={2004}, pages={xii+615}, } \bib{ILS}{article}{ author={Iwaniec, H.}, author={Luo, W.}, author={Sarnak, P.}, title={Low lying zeros of families of $L$-functions}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={91}, date={2000}, pages={55--131 (2001)}, } \bib{Kar}{article}{ author={Karatsuba, A. 
A.}, title={Sums of characters with prime numbers}, journal={Izv. Akad. Nauk SSSR Ser. Mat.}, volume={34}, date={1970}, pages={299--321}, } \bib{Kar2}{article}{ author={Karatsuba, A. A.}, title={Sums of Legendre symbols of quadratic polynomials with prime numbers}, language={Russian}, journal={Izv. Akad. Nauk SSSR Ser. Mat.}, volume={42}, date={1978}, number={2}, pages={315--324, 470}, issn={0373-2436}, } \bib{Kar3}{article}{ author={Karatsuba, A. A.}, title={Distribution of pairs of residues and nonresidues of special form}, language={Russian}, journal={Izv. Akad. Nauk SSSR Ser. Mat.}, volume={51}, date={1987}, number={5}, pages={994--1009, 1117--1118}, issn={0373-2436}, translation={ journal={Math. USSR-Izv.}, volume={31}, date={1988}, number={2}, pages={307--323}, issn={0025-5726}, }, } \bib{GKM}{book}{ author={Katz, N. M.}, title={Gauss sums, Kloosterman sums, and monodromy groups}, series={Annals of Mathematics Studies}, volume={116}, publisher={Princeton University Press}, place={Princeton, NJ}, date={1988}, pages={x+246}, isbn={0-691-08432-7}, isbn={0-691-08433-5}, } \bib{katz-rls}{book}{ author={Katz, N. M.}, title={Rigid local systems}, series={Annals of Mathematics Studies}, volume={139}, publisher={Princeton University Press}, place={Princeton, NJ}, date={1993}, } \bib{Mat}{article}{ author={Matom{\"a}ki, K.}, title={A note on signs of Kloosterman sums}, journal={Bull. Soc. Math. France}, volume={139}, date={2011}, number={3}, pages={287--295}, } \bib{Michelthese}{article}{ author={Michel, Ph.}, title={Autour des conjectures de Sato-Tate }, journal={Th\`ese de Doctorat \`es Sciences, Universit\' e de Paris-Sud}, date={1995}, } \bib{MichelInv}{article}{ author={Michel, Ph.}, title={Autour de la conjecture de Sato-Tate pour les sommes de Kloosterman. I}, journal={Invent. math.}, volume={121}, date={1995}, number={1}, pages={61--78}, } \bib{MichelDMJ}{article}{ author={Michel, Ph.}, title={Minorations de sommes d'exponentielles}, journal={Duke Math. 
J.}, volume={95}, date={1998}, number={2}, pages={227--240}, } \bib{montgomery}{book}{ author={Montgomery, H. L.}, title={Topics in multiplicative number theory}, series={Lecture Notes in Mathematics}, volume={227}, publisher={Springer}, date={1971}, } \bib{Pitt}{article}{ author={Pitt, N.}, title={On an analogue of Titchmarsh’s divisor problem for holomorphic cusp forms}, journal={J. Amer. Math. Soc.}, doi={\url{http://dx.doi.org/10.1090/S0894-0347-2012-00750-4}}, volume={26}, date={2013}, pages={735--776}, } \bib{sarnak}{article}{ author={Sarnak, P.}, title={Moebius randomness and dynamics}, journal={Not. S. African Math. Soc}, volume={43}, date={2012}, pages={89--97}, } \bib{Siv1}{article}{ author={Sivak-Fischler, J.}, title={Crible \'etrange et sommes de Kloosterman}, language={French}, journal={Acta Arith.}, volume={128}, date={2007}, number={1}, pages={69--100}, doi={10.4064/aa128-1-4}, } \bib{Siv}{article}{ author={Sivak-Fischler, J.}, title={Crible asymptotique et sommes de Kloosterman}, journal={Bull. Soc. Math. France}, volume={137}, date={2009}, number={1}, pages={1--62}, } \bib{Yun}{article}{ author={Yun, Z.}, title={Examples of Kloosterman sheaves}, journal={manuscript}, date={2009}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} \label{sec:intro} \input{01-introParticle.tex} \section{Particle Filtering} \label{sec:particle} \input{02-particleFiltering.tex} \section{Unrolling Particles} \label{sec:unrolling} \input{03-unrolling.tex} \section{Numerical Experiments} \label{sec:sims} \input{04-sims.tex} \vspace{0.75cm} \section{Conclusions} \label{sec:conclusions} \input{05-conclusionsParticle.tex} \vfill\pagebreak \balance \bibliographystyle{bibFiles/IEEEbib} \subsection{Linear system with Gaussian noise} \label{subsec:linear} We start with the case of a linear model given by \begin{equation} \label{eq:linearGaussianModel} \begin{aligned} & \vcx_{t} = \mtA \vcx_{t-1} + \vcv_{t} \quad , \quad \vcy_{t} = \mtC \vcx_{t}+ \vcw_{t}, \\ & \vcx_{0} \sim \gaussDist(\vcmu^{0}, \mtSigma^{0}) \ , \ \vcv_{t} \sim \gaussDist(\vcZeros, \mtSigma_{v}) \ , \ \vcw_{t} \sim \gaussDist(\vcZeros, \mtSigma_{w}), \end{aligned} \end{equation} where $\vcx_{t} \in \fdR^{N}$ and $\vcy_{t} \in \fdR^{M}$, so that $\mtA \in \fdR^{N \times N}$ and $\mtC \in \fdR^{M \times N}$. This implies that $\vcmu^{0} \in \fdR^{N}, \mtSigma^{0} \in \fdR^{N \times N}$, $\mtSigma_{v} \in \fdR^{N\times N}$, and $\mtSigma_{w} \in \fdR^{M \times M}$. We assume that the noise processes are white (i.e., $\vcv_{t}, \vcv_{t'}$ are uncorrelated for $t \neq t'$) and uncorrelated with each other. This model serves as a baseline, since all quantities of interest can be computed in closed form, namely the posterior distribution $\fnp(\vcx_{0:t}|\vcy_{0:t})$ and the optimal SIS sampling distribution $\fnp(\vcx_{t}|\vcx_{t-1},\vcy_{t})$. We set $N = 10$ and $M = 8$, and consider trajectories that evolve for $12$ time steps. We set $\mtA$ to be the adjacency matrix of a random planar graph with normalized spectral norm (i.e., it acts as a diffusion process) and we set $\mtC$ to be such that each measurement looks at the value of $\vcx_{t}$ at two different nodes and aggregates them, before adding the measurement noise.
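To make the setup concrete, the following Python sketch generates one trajectory from the state-space model above; the planar-graph construction of $\mtA$ and the node-aggregation structure of $\mtC$ are replaced here by generic random matrices, and the covariances are taken diagonal, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 10, 8, 12  # state dim., measurement dim., trajectory length

# Illustrative stand-ins for the model matrices: a random A with normalized
# spectral norm (the experiments use a planar-graph adjacency matrix) and a
# random C; diagonal covariances for simplicity.
A = rng.standard_normal((N, N))
A /= np.linalg.norm(A, 2)          # ||A||_2 = 1, diffusion-like dynamics
C = rng.standard_normal((M, N))
Sigma_v = 0.1 * np.eye(N)
Sigma_w = 0.1 * np.eye(M)

# Draw x_0 ~ N(mu0, Sigma0) and unroll x_t = A x_{t-1} + v_t, y_t = C x_t + w_t.
x = rng.multivariate_normal(np.ones(N), np.eye(N))
xs, ys = [], []
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(N), Sigma_v)
    y = C @ x + rng.multivariate_normal(np.zeros(M), Sigma_w)
    xs.append(x)
    ys.append(y)
xs, ys = np.stack(xs), np.stack(ys)
print(xs.shape, ys.shape)  # (12, 10) (12, 8)
```
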
We set $\vcmu^{0} = \vcOnes_{10}$ and $\mtSigma^{0} = \mtI_{10}$, while the covariance matrices $\mtSigma_{v},\mtSigma_{w}$ are selected at random. We define the SNR as $\|\vcmu^{0}\|_{2}^{2}/\|\mtSigma_{v}\|_{2}$ and simulate for different values of it, ranging from $0\text{dB}$ to $10\text{dB}$. We set $\|\mtSigma_{v}\|_{2} = \|\mtSigma_{w}\|_{2}$. \begin{figure*}[htb] \centering \subfloat[Linear]{% \label{subfig:linear}% \includegraphics[width=0.65\columnwidth]{figures/l2errorSNRlinear.pdf}% }% \hfill \subfloat[Nonlinear]{% \label{subfig:nonlinear}% \includegraphics[width=0.65\columnwidth]{figures/l2errorSNRnonlinear.pdf}% }% \hfill \subfloat[Non-Gaussian]{% \label{subfig:nongaussian}% \includegraphics[width=0.65\columnwidth]{figures/l2errorSNRuniform.pdf}% }% \caption{Performance of the algorithms as a function of the SNR, measured by the relative RMSE. \protect\subref{subfig:linear}~Linear model with Gaussian noise. \protect\subref{subfig:nonlinear}~Nonlinear model with linear measurements and Gaussian noise. \protect\subref{subfig:nongaussian}~Linear model with uniform noise. In all cases it is observed that the learned SIS particle filter performs better. The error bars represent one-third of the estimated standard~deviation.} \label{fig:l2errorSNR} \end{figure*} We consider $5$ algorithms, namely: (i)~the application of the law of large numbers (LLN), sampling directly from $\fnp(\vcx_{0:t}|\vcy_{0:t})$, which serves as the baseline; (ii)~the minimum-degeneracy SIS particle filter sampling from $\fnp(\vcx_{t}|\vcx_{t-1},\vcy_{t})$, without resampling; (iii)~the same filter with resampling; (iv)~the SIS particle filter with the learned sampling distribution of Sec.~\ref{sec:unrolling}, without resampling; and (v)~the same with resampling. In Fig.~\ref{subfig:linear} we show the performance of the five algorithms as a function of the SNR. 
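The curves in Fig.~\ref{fig:l2errorSNR} report a relative RMSE; one plausible convention (the exact normalization is an assumption here, not spelled out in the text) averages the per-step $\ell_2$ error normalized by the state norm:

```python
import numpy as np

def relative_rmse(x_true, x_hat):
    """Per-step l2 error normalized by the state norm, averaged over time.

    x_true, x_hat have shape (T, N). This particular normalization is an
    assumption; other conventions (e.g., normalizing by the total energy)
    are equally plausible.
    """
    err = np.linalg.norm(x_true - x_hat, axis=1)
    ref = np.linalg.norm(x_true, axis=1)
    return float(np.mean(err / ref))

traj = np.ones((12, 10))
perfect = relative_rmse(traj, traj)  # 0.0 for an exact estimate
```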
First, it is immediate that the learnable SIS particle filter exhibits better performance than the minimum-degeneracy (\texttt{MinDeg}) particle filter over the entire SNR range. Second, we note that while resampling significantly improves the \texttt{MinDeg} particle filter, it does not offer much improvement in the learnable case, likely because the distribution is trained to minimize degeneracy. Third, we note that the performance is overall robust, improving slightly for larger values of SNR. Fourth, we see that the performance is comparable to the LLN benchmark. \vspace{0.4cm} \subsection{Nonlinear system with Gaussian noise} \label{subsec:nonlinear} For the second scenario, we add a nonlinear function $\fnphi:\fdR^{N} \to \fdR^{N}$ to the transition rule as \begin{equation} \label{eq:nonlinearGaussianModel} \begin{aligned} & \vcx_{t} = \fnphi(\mtA \vcx_{t-1}) + \vcv_{t}, \\ & \vcy_{t} = \mtC \vcx_{t}+ \vcw_{t}, \\ & \vcx_{0} \sim \gaussDist(\vcmu^{0}, \mtSigma^{0}) \ , \ \vcv_{t} \sim \gaussDist(\vcZeros, \mtSigma_{v}) \ , \ \vcw_{t} \sim \gaussDist(\vcZeros, \mtSigma_{w}), \end{aligned} \end{equation} with all the other quantities defined analogously to Sec.~\ref{subsec:linear}. We choose the nonlinear function $\fnphi$ to be a pointwise absolute value, i.e., $[\fnphi(\vcx)]_{i} = |[\vcx]_{i}|$. We note that we cannot simulate the LLN estimate because we do not have access to the posterior $\fnp(\vcx_{0:t}|\vcy_{0:t})$. Recall that the minimum-degeneracy sampling distribution $\fnp(\vcx_{t}|\vcx_{t-1},\vcy_{t})$ is a multivariate normal with mean and covariance given in \cite[eq. (13)]{Doucet2000-ParticleFiltering}, so it can easily be sampled from. Results in Fig.~\ref{subfig:nonlinear} show that the learnable SIS particle filter performs better than the minimum-degeneracy one. We also see that while resampling helps the \texttt{MinDeg} filter, it does not make a difference for the learned filter. 
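One SIS update for the nonlinear model \eqref{eq:nonlinearGaussianModel} can be sketched as follows. For brevity this is a bootstrap-style simplification: the proposal samples from the transition rather than from the minimum-degeneracy distribution $\fnp(\vcx_{t}|\vcx_{t-1},\vcy_{t})$, and the isotropic noise levels and random matrices are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 10, 8, 500          # state dim., measurement dim., particles

A = rng.standard_normal((N, N))
A /= np.linalg.norm(A, 2)     # normalized spectral norm, as in Sec. 4.1
C = rng.standard_normal((M, N))
sv2 = sw2 = 0.1               # isotropic noise variances (a simplification)

def transition(X):
    """Nonlinear rule x_t = |A x_{t-1}| + v_t, one particle per row of X."""
    return np.abs(X @ A.T) + np.sqrt(sv2) * rng.standard_normal(X.shape)

def log_likelihood(y, X):
    R = y - X @ C.T
    return -0.5 * np.sum(R * R, axis=1) / sw2

def sis_step(X, logw, y, resample=True):
    X = transition(X)                      # propose from the transition
    logw = logw + log_likelihood(y, X)     # reweight by the likelihood
    logw -= logw.max()                     # stabilize before exponentiating
    w = np.exp(logw)
    w /= w.sum()
    ess = 1.0 / np.sum(w * w)              # effective sample size
    if resample:                           # multinomial resampling
        idx = rng.choice(K, size=K, p=w)
        X, w = X[idx], np.full(K, 1.0 / K)
    return X, np.log(w), w, ess

# One filtering step on synthetic data.
X = rng.standard_normal((K, N)) + 1.0
logw = np.full(K, -np.log(K))
x_true = np.abs(A @ np.ones(N))
y = C @ x_true + np.sqrt(sw2) * rng.standard_normal(M)
X, logw, w, ess = sis_step(X, logw, y)
x_hat = w @ X                              # posterior-mean estimate
```

Resampling when the effective sample size drops is the standard remedy for weight degeneracy; in the learned filter of Sec.~\ref{sec:unrolling} the proposal itself is trained to keep the weights balanced.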
We also observe robustness with respect to varying the SNR, with slightly better performance for larger values. \vspace{0.4cm} \subsection{Linear system with non-Gaussian noise: Model mismatch} \label{subsec:nongaussian} For the last experiment we investigate the robustness to model mismatch. The model is given by \eqref{eq:linearGaussianModel}, except that now the initial state $\vcx_{0}$ as well as the noise $\vcv_{t}, \vcw_{t}$ are uniform, instead of Gaussian, with i.i.d. entries. The covariance matrices are given by $\mtSigma_{v} = \sigma^{2}\mtI_{10}$ and $\mtSigma_{w} = \sigma^{2}\mtI_{8}$ so that the SNR is $\|\vcmu^{0}\|_{2}^{2}/\sigma^{2}$. To investigate model mismatch, we consider the minimum-degeneracy SIS filter assuming a multivariate normal distribution, given by the same sampling distribution $\fnp(\vcx_{t}|\vcx_{t-1},\vcy_{t})$ as in Sec.~\ref{subsec:linear}. Likewise, we do not modify the parametrization of the sampling distribution of Sec.~\ref{sec:unrolling}, which is also given by a multivariate normal [cf. \eqref{eq:MVNormalParam}]. The results in Fig.~\ref{subfig:nongaussian} show the greater robustness of the learned distribution. In fact, in this scenario, the difference between the performance of the \texttt{MinDeg} SIS filter with resampling and the learned SIS filter is much larger than in the previous two scenarios.
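The mismatch only changes the law of the noise, not its second moments: i.i.d. $\mathrm{Uniform}[-a,a]$ entries with $a = \sigma\sqrt{3}$ reproduce $\mtSigma_{v} = \sigma^{2}\mtI$, since a $\mathrm{Uniform}[-a,a]$ variable has variance $a^{2}/3$. A quick numerical check (the sample size and $\sigma^{2}$ below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 0.5                  # target per-entry variance (arbitrary)
a = np.sqrt(3.0 * sigma2)     # Uniform[-a, a] has variance a^2 / 3 = sigma2

v = rng.uniform(-a, a, size=(200_000, 10))
emp_var = v.var(axis=0)       # close to sigma2 in every coordinate
emp_mean = v.mean(axis=0)     # close to 0
```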
\section{Introduction}\label{S1} \subsection{Statement of Results} In this paper we are concerned with understanding the rectifiability properties of doubling measures. Our ultimate goal is to characterize rectifiable doubling measures. Recently Tolsa provided such a characterization for 1-rectifiable measures with upper density bounded below (see \cite{To}). His conditions are expressed in terms of the properties of the density ratios. We are interested in whether self-similarity properties yield some sort of rectifiability. Roughly speaking, we analyze how the distance between a measure and its appropriately rescaled dilations yields information about the structure of its support. We provide a criterion which ensures that the support of a doubling measure can be decomposed as a union of rectifiable pieces of different dimensions. In a previous paper \cite{ADTprep}, similar decompositions were obtained by looking at conditions that were expressed in terms of the properties of the \emph{local} distance between the measure and flat measures (that is, multiples of Hausdorff measures restricted to affine subsets of Euclidean space). In both cases a minor variant of the $L^1$ Wasserstein distance is used to estimate the \emph{good features} of a measure. To present our results we need to define local distances between measures as well as several quantities which describe the self-similar character of a measure. In this paper, $\mu$ denotes a Radon measure on ${\mathbb{R}}^n$ (i.e., a locally finite positive Borel measure), and $\Sigma = \Sigma_{\mu}$ denotes its support. That is, \begin{equation} \label{1.1} \Sigma = \big\{ x\in {\mathbb{R}}^n \, ; \, \mu(B(x,r)) > 0 \text{ for } r > 0\big\}, \end{equation} where $B(x,r)$ denotes the open ball centered at $x$ and with radius $r$. We say that $\mu$ is \textit{doubling} when there is a constant $C_\delta>0$ for which \begin{equation}\label{1.2} \mu (B(x,2r)) \leq C_\delta \, \mu(B(x,r)) \mbox{ for all $x\in \Sigma$ and } r>0. 
\end{equation} Let ${\mathbb{B}} = B(0,1)$ denote the unit ball in ${\mathbb{R}}^n$. For $M \geq 0$, denote by $\text{Lip}_M({\mathbb{B}})$ the set of functions $\psi : {\mathbb{R}}^n \to {\mathbb{R}}$ that are $M$-Lipschitz, i.e., such that \begin{equation}\label{1.3} |\psi(x)-\psi(y)| \leq M |x-y| \ \text{ for } x,y\in {\mathbb{R}}^n, \end{equation} and for which \begin{equation}\label{1.4} \psi(x)=0 \ \text{ for } x\in {\mathbb{R}}^n \setminus {\mathbb{B}}. \end{equation} \begin{definition} \label{t1.1} Let $\mu$ and $\nu$ be measures on ${\mathbb{R}}^n$, whose restrictions to ${\mathbb{B}}:= B(0,1)$ are probability measures. We set \begin{equation} \label{1.5} {\mathbb {W}}_1(\mu,\nu):=\sup_{\psi \in \text{Lip}_1({\mathbb{B}})}\av{\int\psi d\mu-\int\psi d\nu}. \end{equation} \end{definition} Thus ${\mathbb {W}}_1(\mu,\nu)$ only measures some distance between the restrictions to ${\mathbb{B}}$ of $\mu$ and $\nu$. This quantity is similar to the usual $L^1$-Wasserstein distance, which by the Kantorovich duality theorem has the same definition as ${\mathbb {W}}_1$ except that the supremum ranges over all $1$-Lipschitz nonnegative functions in ${\mathbb{B}}$. Note that ${\mathbb {W}}_1$ has appeared before in the study of rectifiability of measures; see for example \cite{Preiss87}, \cite{Tolsa-uniform-rectifiability}, \cite{Tolsa-mass-transport}, and \cite{ADTprep}. In Section \ref{S5}, we replace ${\mathbb {W}}_1$ with a smoother version of the local distance, ${\mathbb {W}}_\varphi$, which is easier to manipulate. Lemmas \ref{t5.1} and \ref{t5.3} state that ${\mathbb {W}}_1$ and ${\mathbb {W}}_\varphi$ are essentially comparable. We refer to \cite{Villani} for a detailed introduction to Wasserstein distances and their properties. \medskip To estimate the self-similarity properties of $\mu$ we use several groups of affine transformations of ${\mathbb{R}}^n$. 
Denote by ${\mathscr{R}}$ the group of affine isometries of ${\mathbb{R}}^n$ (i.e., compositions of translations, rotations, and symmetries). Then let ${\mathscr{G}}$ denote the group of similarity transformations, defined by \begin{equation} \label{1.6} {\mathscr{G}} = \big\{ \lambda R \, ; \, \lambda > 0 \text{ and } R \in {\mathscr{R}} \big\}. \end{equation} For $G \in {\mathscr{G}}$, we denote by $\lambda(G)$ the unique positive number such that $G = \lambda(G) R$ for some $R\in {\mathscr{R}}$. Denote by ${\mathscr{D}}$ the group of translations and dilations: \begin{equation} \label{1.7} {\mathscr{D}} = \big\{ \lambda I + a \, ; \, \lambda > 0 \text{ and } a \in {\mathbb{R}}^n \big\}, \end{equation} where $I$ denotes the identity on ${\mathbb{R}}^n$. The transformations that map a given $x\in {\mathbb{R}}^n$ to the origin are denoted by \begin{equation} \label{1.8} {\mathscr{G}}(x) = \big\{ G \in {\mathscr{G}} ; \, G(x) = 0 \big\} \ \text{ and } \ {\mathscr{D}}(x) = \big\{ D \in {\mathscr{D}} ; \, D(x) = 0 \big\}. \end{equation} To each $G\in {\mathscr{G}}$, we associate the measure $\mu^G = G_\sharp \mu$, which is defined by \begin{equation} \label{1.9} \mu^G(A) = \mu(G^{-1}(A)) \ \text{ for every Borel set } A \subset {\mathbb{R}}^n. \end{equation} When $G \in {\mathscr{G}}(x)$ for some $x\in \Sigma$, we may normalize $\mu^G$ and set \begin{equation} \label{1.10} \mu^G_0 = {\mu^G \over \mu^G({\mathbb{B}})} = {\mu^G \over \mu(G^{-1}({\mathbb{B}}))} = {\mu^G \over \mu(B(x,\lambda(G)^{-1}))} \end{equation} because $\mu(B(x,\lambda(G)^{-1})) > 0$. This normalization is needed if we want to compute ${\mathbb {W}}_1$-distances. A special case of this is when $G = T_{x,r}$, the element of ${\mathscr{D}}$ that maps $B(x,r)$ to ${\mathbb{B}}$; then $\mu^G$ and $\mu^G_0$ are denoted by $\mu^{x,r}$ and $\mu^{x,r}_0$ respectively. 
That is, \begin{equation} \label{1.11} \mu^{x,r}(A) = \mu(x + rA) \text{ for } A \subset {\mathbb{R}}^n, \ \text{ and } \ \mu^{x,r}_0 = {\mu^{x,r} \over \mu(B(x,r))}. \end{equation} To measure the self-similar nature of $\mu$ we introduce two quantities $\alpha_{{\mathscr{G}}}$ and $\alpha_{{\mathscr{D}}}$. We fix two parameters $1 < \lambda_1 < \lambda_2 < \infty$. Set \begin{equation} \label{1.12} {\mathscr{G}}(x,r) = \big\{ G\in {\mathscr{G}}(x) \, ; \, \lambda_1 r \leq \lambda(G)^{-1} \leq \lambda_2 r \big\} \end{equation} and then \begin{equation} \label{1.13} \alpha_{{\mathscr{G}}}(x,r) = \inf\big\{ {\mathbb {W}}_1(\mu_0^{G},\mu_0^{x,r}) \, ; \, G \in {\mathscr{G}}(x,r) \big\}. \end{equation} Thus, if $\alpha_{{\mathscr{G}}}(x,r)$ is small, this means that in ${\mathbb{B}}$, $\mu^{x,r}_0$ is close to some measure $\mu^G_0$, obtained via a transformation $G$ that contracts more than $T_{x,r}$ and possibly rotates as well. After composition with $T_{x,r}^{-1}$, the fact that $\alpha_{{\mathscr{G}}}(x,r)$ is small can be interpreted as saying that in $B(x,r)$, $\mu$ is quite close to the measure $a \mu^{G'}$, where $G' = T_{x,r}^{-1} \circ G$ is a contracting element of ${\mathscr{G}}$ that fixes $x$ and $a >0$ is a normalizing constant. It is important to note that even though we allow some flexibility in the choice of $G$ and $G'$, we demand that $G'(x) = x$. This is the reason why the usual fractal measures do not satisfy the conditions below. We also use the analogue of $\alpha_{{\mathscr{G}}}(x,r)$ for the smaller group ${\mathscr{D}}$. That is, set \begin{equation} \label{1.14} {\mathscr{D}}(x,r) = \big\{ D\in {\mathscr{D}}(x) \, ; \, \lambda_1 r \leq \lambda(D)^{-1} \leq \lambda_2 r \big\} \end{equation} and \begin{equation} \label{1.15} \alpha_{{\mathscr{D}}}(x,r) = \inf\big\{ {\mathbb {W}}_1(\mu_0^{D},\mu_0^{x,r}) \, ; \, D \in {\mathscr{D}}(x,r) \big\}. 
\end{equation} For $\alpha_{{\mathscr{D}}}(x,r)$ we only compare $\mu$ with its image by some dilation centered at $x$. Obviously $\alpha_{{\mathscr{D}}}(x,r) \geq \alpha_{{\mathscr{G}}}(x,r)$. Thus conditions on $\alpha_{{\mathscr{D}}}(x,r)$ are more restrictive than those on $\alpha_{{\mathscr{G}}}(x,r)$. Our goal is to get a control on the part of $\Sigma$ (see \eqref{1.1}) where either $\alpha_{{\mathscr{G}}}(x,r)$ or $\alpha_{{\mathscr{D}}}(x,r)$ is sufficiently small. More precisely, we want to control the sets \begin{equation} \label{1.16} \Sigma_1 = \big\{ x\in \Sigma \, ; \, \int_0^1 \alpha_{{\mathscr{D}}}(x,r) {dr \over r } < \infty \big\} \end{equation} and \begin{equation} \label{1.17} \Sigma_2 = \big\{ x\in \Sigma \, ; \, \int_0^1 \alpha_{{\mathscr{G}}}(x,r) {\log(1/r) dr \over r} < \infty \big\}. \end{equation} \medskip \begin{theorem}\label{t1.2} Let $\mu$ be a doubling measure on ${\mathbb{R}}^n$, and denote by $\Sigma$ its support. Let $1 < \lambda_1 < \lambda_2 < \infty$ be given, and define the sets $\Sigma_1$ and $\Sigma_2$ as above. Then there are sets ${\mathscr{S}}_{0},\ldots,{\mathscr{S}}_{n} \subset \Sigma$, such that \begin{equation} \label{1.18} \mu\Big( (\Sigma_1 \cup \Sigma_2) \setminus \Big(\bigcup_{d=0}^{n} {\mathscr{S}}_{d} \Big)\Big) =0, \end{equation} and moreover \begin{itemize} \item ${\mathscr{S}}_{0}$ is the set of points where $\mu$ has an atom; it is at most countable, and every point of ${\mathscr{S}}_{0}$ is an isolated point of $\Sigma$. \item For $1 \leq d \leq n$, if $x\in {\mathscr{S}}_d$, there exists a $d$-dimensional vector space $V_x$ such that all the tangent measures to $\mu$ at $x$ (defined below) are multiples of the Lebesgue measure on $V_x$. \item For $1 \leq d \leq n$, ${\mathscr{S}}_{d}$ is $d$-rectifiable, and it can be covered by a countable family of Lipschitz graphs of dimension $d$. 
\end{itemize} \end{theorem} \medskip We may see Theorem \ref{t1.2} as a structural decomposition of the good parts of $\Sigma$. Tangent measures will play an important role in the proof and in the definition of the ${\mathscr{S}}_{d}$. Recall that the set of {\it tangent measures} to $\mu$ at $x$, which will be denoted by $\text{Tan}(\mu,x)$, is the set of non-zero Radon measures $\sigma$ for which there exist a sequence $\{ r_k \}$, with $\lim_{k \to \infty} r_k = 0$, and a sequence $\{ a_k \}$ of nonnegative numbers, such that \begin{equation} \label{1.19} \sigma \text{ is the weak limit of the measures } a_k \mu^{x,r_k}, \end{equation} where the $\mu^{x,r_k}$ are as in \eqref{1.11}. That is, for every continuous function $f$ with compact support, \begin{equation} \label{1.20} \int f d\sigma = \lim_{k \to \infty} a_k \int f d\mu^{x,r_k}. \end{equation} Note that since here $\mu$ is doubling, $\text{Tan}(\mu,x)$ is not empty (see for instance the proof of Lemma~2.1 in \cite{ADTprep}). Furthermore if $\mu$ satisfies \eqref{1.2} and $\sigma\in \text{Tan}(\mu,x)$ then $\sigma$ is also doubling with a constant at most $C_\delta^2$. A priori $\text{Tan}(\mu,x)$ may be large. Nevertheless Theorem \ref{t1.2} ensures that for $x\in {\mathscr{S}}_d$, $\text{Tan}(\mu,x)$ is of dimension~$1$, i.e., all tangent measures at $x$ are multiples of a single measure. A Lipschitz graph of dimension $d$ is a set $\Gamma_A$ such that $$ \Gamma_A = \big\{ x+A(x) \, ; \, x\in V \big\}, $$ where $V$ is a vector space of dimension $d$, $A : V \to V^\perp$ is a Lipschitz map and $V^\perp$ denotes the $(n-d)$-dimensional vector space perpendicular to $V$. In the statement of Theorem \ref{t1.2}, ${\mathscr{S}}_d$ can be covered by Lipschitz graphs where the corresponding function $A$ has Lipschitz constant less than $\varepsilon$, where $\varepsilon > 0$ is any small number given in advance. 
Note that this yields that ${\mathscr{S}}_d$ is $d$-rectifiable while providing additional information, in the sense that ${\mathscr{S}}_d$ is completely covered by Lipschitz graphs, not simply up to a set of ${\mathscr{H}}^d$-measure zero. Let us make a few more remarks on Theorem \ref{t1.2} and its proof. The advantage of using the quantities $\alpha_{\mathscr{G}}$ and $\alpha_{\mathscr{D}}$ is that they yield information not only about the geometry of the support but also about how the measure is distributed on it. The decomposition of $\Sigma_1 \cup \Sigma_2$ into pieces of different dimensions is possible once we prove that for $\mu$-almost every point $x\in \Sigma_1 \cup \Sigma_2$, $\text{Tan}(\mu,x)$ is entirely composed of flat measures of a common dimension depending on $x$. Recall that {\it flat measures} are multiples of Lebesgue measures on vector subspaces of ${\mathbb{R}}^n$; that is, for each integer $d\in [0,n]$, set \begin{equation} \label{1.21} {\mathscr{F}}_d = \big\{ c {\mathscr{H}}^d\res{ V} \, ; \, c \geq 0 \text{ and } V\in G(d,n) \big\}, \end{equation} where ${\mathscr{H}}^d$ denotes the $d$-dimensional Hausdorff measure (see \cite{Mattila} or \cite{Federer}) and $G(d,n)$ is the set of $d$-planes in ${\mathbb{R}}^n$. The set of flat measures is ${\mathscr{F}} = \bigcup_{0 \leq d \leq n} {\mathscr{F}}_d$. It is natural to use the self-similarity properties of $\mu$ to get information on the structure of $\Sigma$, as in Theorem \ref{t1.2}. In particular the numbers $\alpha_{{\mathscr{G}}}(x,r)$ provide an intrinsic way to measure the regularity of $\mu$. We contrast this approach with the one taken in \cite{ADTprep}, where we were interested in the local approximation of the measure by flat measures. As we shall see in Section \ref{S8}, the additional logarithm in \eqref{1.17} is used to sum a series which allows us to control the density of $\mu$ on most of $\Sigma_2$. It may well be an artifact of the proof. 
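\medskip As a side remark (this computation is our illustration and is not taken from the text), one can check directly that elements of different classes ${\mathscr{F}}_d$ are uniformly separated for ${\mathbb {W}}_1$. Test against $\psi(y) = \max\big(0, \tfrac12 - |y|\big) \in \text{Lip}_1({\mathbb{B}})$: if $\sigma \in {\mathscr{F}}_d$ with $d \geq 1$ and $\sigma_0$ denotes its normalized restriction to ${\mathbb{B}}$, the radial distribution of $\sigma_0$ is $d\, t^{d-1}\, dt$, so
\begin{equation*}
\int \psi \, d\sigma_0 = \int_0^{1/2} \Big(\frac12 - t\Big)\, d\, t^{d-1}\, dt = \frac{2^{-(d+1)}}{d+1},
\end{equation*}
while for $d = 0$ we get $\psi(0) = \frac12$, consistent with the same formula. This value depends only on $d$ (by the rotation invariance of $\psi$) and is strictly decreasing in $d$, so ${\mathbb {W}}_1(\sigma_0, \sigma'_0) \geq \frac{2^{-(d+1)}}{d+1} - \frac{2^{-(d'+1)}}{d'+1} > 0$ whenever $\sigma \in {\mathscr{F}}_d$ and $\sigma' \in {\mathscr{F}}_{d'}$ with $d < d'$.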
To prove Theorem \ref{t1.2} we need to find a set $\Sigma_0$ which covers almost all $\Sigma_1\cup\Sigma_2$ and such that all tangents to $\mu$ at points in $\Sigma_0$ are flat. To accomplish this we define an average analogue of the numbers $\alpha_{{\mathscr{G}}}(x,r)$ by \begin{equation} \label{1.23} \alpha_{{\mathscr{G}}}^\ast(x,r) = \fint_{B(x,r)} \fint_{r}^{2r} \alpha_{{\mathscr{G}}}(y,t) d\mu(y)dt \end{equation} (where $\fint$ is our notation for an average) and consider the set \begin{equation} \label{1.24} \Sigma_0 = \big\{ x\in \Sigma \, ; \, \lim_{r \to 0} \alpha_{{\mathscr{G}}}^\ast(x,r) = 0 \big\}. \end{equation} \medskip \begin{theorem}\label{t1.3} Let $\mu$ be a doubling measure on ${\mathbb{R}}^n$, and denote by $\Sigma$ its support. Let $1 < \lambda_1 < \lambda_2 < \infty$ be given, and define the functions $\alpha_{{\mathscr{G}}}(x,r)$ and $\alpha_{{\mathscr{G}}}^\ast(x,r)$ and the set $\Sigma_0$ as above (see \eqref{1.13}, \eqref{1.23}, and \eqref{1.24}). Then \begin{equation} \label{1.25} \text{Tan}(\mu,x) \subset {\mathscr{F}} \ \text{ for every } x\in \Sigma_0. \end{equation} \end{theorem} \medskip A consequence of \eqref{1.25} and the fact that elements of different ${\mathscr{F}}_d$ are far away from each other is that for each $x\in \Sigma_0$, there is an integer $d \in [0,n]$ such that \begin{equation} \label{1.26} \text{Tan}(\mu,x) \subset {\mathscr{F}}_d. \end{equation} This is not too hard to prove. In the case of Theorem \ref{t1.2}, we obtain more than \eqref{1.26} directly, thus we omit the proof of this fact. \medskip To deduce Theorem \ref{t1.2} from Theorem~\ref{t1.3}, we shall first check that \begin{equation} \label{1.22} \mu\big((\Sigma_1 \cup \Sigma_2) \setminus \Sigma_0 \big) = 0. \end{equation} This uses standard techniques from measure theory including the Lebesgue density theorem. 
Then we show that for each $x\in \Sigma_0 \cap (\Sigma_1 \cup \Sigma_2)$, \begin{equation} \label{1.27} \text{Tan}(\mu,x) = \big\{ c \sigma \, ; \, c >0 \big\} \ \text{ for some } \sigma \in {\mathscr{F}}. \end{equation} Let us now say a few words about the definition of the ${\mathscr{S}}_d$. Set \begin{equation} \label{1.28} {\mathscr{S}}_d = \big\{ x\in \Sigma_0 \cap (\Sigma_1 \cup \Sigma_2) \, ; \, \text{Tan}(\mu,x) \subset {\mathscr{F}}_d \big\}; \end{equation} these sets are disjoint, and by \eqref{1.27} or \eqref{1.26} \begin{equation} \label{1.29} \Sigma_0 \cap (\Sigma_1 \cup \Sigma_2) = \bigcup_{d=0}^n {\mathscr{S}}_d. \end{equation} The special set ${\mathscr{S}}_0$ is easily dealt with at the beginning of Section \ref{S7}, and Theorem~\ref{t1.2} follows as soon as we prove that for $d \geq 1$, \begin{eqnarray} \label{1.30} &&\text{${\mathscr{S}}_d$ can be covered by a countable collection} \\ &&\hskip4cm \text{of Lipschitz graphs of dimension $d$.} \nonumber \end{eqnarray} The fact that information on the tangent measures may imply rectifiability properties for the measure is much better understood since D. Preiss \cite{Preiss87} showed that if $\mu$ is a Radon measure, not necessarily doubling, such that for $\mu$-almost every $x\in {\mathbb{R}}^n$, the $d$-density $\lim_{r\rightarrow 0}\frac{\mu(B(x,r))}{r^{d}}$ exists and is positive and finite, then ${\mathbb{R}}^n$ may be covered, up to a set of $\mu$-measure zero, by a countable collection of $d$-dimensional Lipschitz graphs. He deduced this from the hypothesis on the $d$-density and the fact that at $\mu$-almost every $x\in \Sigma$, $\text{Tan}(\mu,x) \subset {\mathscr{F}}$. In our case, we are unable to use \cite{Preiss87} because we are not given any information on the density of $\mu$. We shall use the fact that since $\mu$ is doubling, \eqref{1.27} implies the existence of a tangent $d$-plane to $\Sigma$ at $x$, and then \eqref{1.30} for the set where \eqref{1.27} holds. 
We include a proof of these simple observations in Section \ref{S7}. To prove \eqref{1.27}, we shall use the numbers $\alpha_{{\mathscr{D}}}(x,r)$ and $\alpha_{{\mathscr{G}}}(x,r)$ to control the variations of the measures $\mu^{x,r}$ on $\Sigma_1$ and $\Sigma_2$. Eventually we compare them to the tangent measures. For points of $\Sigma_1$ we use the triangle inequality and the summability of the $\alpha_{\mathscr{D}}(x,r)$, to show that the distance between two different tangent measures at $x$ is controlled by integrals that tend to $0$. To deal with some of the technical complications that arise with the distance ${\mathbb {W}}_1$, we shall introduce in Section \ref{S5} a smoother variant ${\mathbb {W}}_\varphi$ of the Wasserstein distance, study it briefly, and then use it in Section \ref{S6} to prove \eqref{1.27} on $\Sigma_1 \cap \Sigma_0$. For points of $\Sigma_2$, we'll use the bounds on the numbers $\alpha_{\mathscr{G}}(x,r)$ to compute the ${\mathbb {W}}_\varphi$-distance between the $\mu^{x,r}$ and the tangent measures. This time we can only work modulo rotations, but this is enough to control the ${\mathbb {W}}_1$-distance from the $\mu_0^{x,r}$ to flat measures, and apply Theorem~1.5 in \cite{ADTprep}. This yields additional information on $\Sigma_2\cap\Sigma_0$. In particular, it guarantees that on the sets $\Sigma_2 \cap {\mathscr{S}}_d$, $\mu$ is absolutely continuous with respect to the Hausdorff measure ${\mathscr{H}}^d$, with a density that can be computed from the measure of balls, and that some local mutual absolute continuity of $\mu$ and ${\mathscr{H}}^d_{| \Sigma_2 \cap {\mathscr{S}}_d}$ holds. See near \eqref{8.19} for a statement, and the rest of Section \ref{S8} for the proof. \medskip There is a significant difference between \eqref{1.25} (or even \eqref{1.26}) and the stronger \eqref{1.27}. 
For instance, let $\Sigma$ be an asymptotically flat snowflake in ${\mathbb{R}}^2$, constructed in the usual way but with angles that tend slowly to $0$. Put on $\Sigma$ the natural measure $\mu$, coming from the parameterization of $\Sigma$ (see \cite{DKT}). In this case for $\mu$-almost every $x\in \Sigma$, $\text{Tan}(\mu,x)={\mathscr{F}}_1$. Of course $\Sigma$ is not rectifiable, and Theorem \ref{t1.2} says that $\Sigma_1 \cup \Sigma_2$ is $\mu$-negligible. \medskip The definitions \eqref{1.23} and \eqref{1.24} ensure that for $x\in \Sigma_0$, the local self-similarity character of $\mu$ improves as the balls get smaller and smaller, which yields self-similar tangent measures at all points of $\Sigma_0$. That is, we show that if $x\in\Sigma_0$, $\sigma \in \text{Tan}(\mu,x)$ and $y$ lies in the support of $\sigma$, there is a transformation $H\in {\mathscr{G}}$ such that $\lambda_1 \leq \lambda(H)^{-1} \leq \lambda_2$, $H(y)=y$, and $H_\sharp \sigma = c \sigma$ for some $c> 0$. See Lemma~\ref{t3.1}, in Section \ref{S3}. Once we prove this, showing that $\sigma$ is flat is mostly a matter of playing with the invariance properties of the support and the measure; see Section \ref{S4}. \subsection{Acknowledgements} The authors are grateful to Alessio Figalli and Xavier Tolsa for helpful discussions. Parts of this work were done while the first author was visiting IPAM and while the second author was visiting the University of Washington. \section{Control of the averages $\alpha_{\mathscr{G}}$} \label{S2} The main goal of this section is to prove \eqref{1.22}. To this effect, define the set \begin{equation} \label{2.1} \Sigma_3 = \big\{ x\in \Sigma \, ; \, \int_0^1 \alpha_{{\mathscr{G}}}(x,r) {dr \over r} < \infty \big\}. \end{equation} Notice that $\Sigma_1 \cup \Sigma_2 \subset \Sigma_3$, by \eqref{1.16}, \eqref{1.17}, and because $\alpha_{{\mathscr{G}}}(x,r) \leq \alpha_{{\mathscr{D}}}(x,r)$. 
Thus \eqref{1.22} follows once we prove that \begin{equation} \label{2.2} \mu(\Sigma_3 \setminus \Sigma_0) = 0. \end{equation} For $N > 0$ large and $k \geq 2$, let \begin{equation} \label{2.3} \Sigma_3(N) = \big\{ x\in \Sigma_3 \cap B(0,N) \, ; \, \int_0^1 \alpha_{{\mathscr{G}}}(x,r) {dr \over r} \leq N \big\} \end{equation} and \begin{equation} \label{2.4} \varepsilon_k = \int_{\Sigma_3(N)} \int_{ 2^{-k}}^{2^{-k+2}} \alpha_{{\mathscr{G}}}(y,r) {d\mu(y) dr \over r}. \end{equation} Then \begin{equation} \label{2.5} \sum_{k \geq 2}\varepsilon_k \leq 2 \int_{ \Sigma_3(N)} \int_0^1 \alpha_{{\mathscr{G}}}(x,r) {d\mu(x) dr \over r} \leq 2N \mu(\Sigma_3(N)) < \infty. \end{equation} Choose a decreasing sequence $\{ \gamma_k \}$ such that \begin{equation} \label{2.6} \lim_{k \to \infty} \gamma_k = 0 \ \text{ and } \ \sum_{k \geq 2} \gamma_k^{-1} \varepsilon_k < \infty. \end{equation} For $x\in \Sigma_3(N)$, define auxiliary functions $\alpha_k$ by \begin{equation} \label{2.7} \alpha_k(x) = \int_{\Sigma_3(N) \cap B(x,2^{-k+1})} \int_{ 2^{-k}}^{2^{-k+2}} \alpha_{\mathscr{G}}(y,r) {d\mu(y) dr \over r}. \end{equation} Consider the bad sets \begin{equation} \label{2.8} Z_k = \big\{ x\in \Sigma_3(N) \, ; \, \alpha_k(x) \geq \gamma_k \mu(B(x,2^{-k+1})) \big\}. \end{equation} Our goal is to show that $Z_k$ is small. Let $X \subset Z_k$ be a maximal subset whose points lie at distance at least $2^{-k+2}$ from each other. Thus the balls $\overline B(x,2^{-k+2})$, $x\in X$, cover $Z_k$, so by \eqref{1.2} and \eqref{2.8} \begin{eqnarray} \label{2.9} \mu(Z_k) &\leq & \sum_{x\in X} \mu(\overline B(x,2^{-k+2})) \leq C_\delta^2 \sum_{x\in X} \mu(B(x,2^{-k+1})) \nonumber\\ &\leq &C_\delta^2 \gamma_k^{-1} \sum_{x\in X} \alpha_k(x). \end{eqnarray} Since the balls $B(x,2^{-k+1})$, $x\in X$, are disjoint, $$ \sum_{x\in X} \alpha_k(x) \leq \varepsilon_k $$ (compare \eqref{2.4} and \eqref{2.7}); thus $\mu(Z_k) \leq C_\delta^2 \gamma_k^{-1} \varepsilon_k$. 
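For concreteness, one admissible choice in \eqref{2.6} (this particular construction is our illustration; any sequence with these properties works) is to set
\begin{equation*}
r_l = 2^{-l} + \sum_{j \geq l} \varepsilon_j \qquad \text{and} \qquad \gamma_l = r_l^{1/2}.
\end{equation*}
Then $\{ \gamma_l \}$ is decreasing and tends to $0$ by \eqref{2.5}, and since $\varepsilon_l \leq r_l - r_{l+1} \leq 2 r_l^{1/2} \big( r_l^{1/2} - r_{l+1}^{1/2} \big)$, the series telescopes:
\begin{equation*}
\sum_{l \geq 2} \gamma_l^{-1} \varepsilon_l \leq 2 \sum_{l \geq 2} \big( r_l^{1/2} - r_{l+1}^{1/2} \big) \leq 2\, r_2^{1/2} < \infty.
\end{equation*}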
We are not interested in the precise bound, but merely the fact that $\sum_k \mu(Z_k) < \infty$, from which we deduce that if we set $Z^\ast_l = \bigcup_{k \geq l} Z_k$ for $l \geq 2$, then $\lim_{l \to \infty} \mu(Z^\ast_l) = 0$. Thus for $\mu$-almost every $x\in \Sigma_3(N)$ there is $k_x\in {\mathbb{N}}$ such that \begin{equation} \label{2.10} x \in \Sigma_3(N) \setminus Z_k \hbox{ for } k\ge k_x. \end{equation} By the Lebesgue differentiation theorem applied to the doubling measure $\mu$, we have that for $\mu$-almost every $x\in \Sigma_3(N)$ \begin{equation} \label{2.11} \lim_{r\to 0} {\mu(B(x,r) \cap \Sigma \setminus \Sigma_3(N)) \over \mu(B(x,r) \cap \Sigma)} = 0; \end{equation} see for instance Corollary 2.14 in \cite{Mattila}. Let $x\in \Sigma_3(N)$ satisfy \eqref{2.10} and \eqref{2.11}; we want to estimate $\alpha_{{\mathscr{G}}}^\ast(x,r)$ for $r$ small. Choose $k$ such that $2^{-k} \leq r \leq 2^{-k+1}$; then $k\ge k_x$ for $r$ small. Recall from \eqref{1.23} that \begin{equation} \label{2.12} \alpha_{{\mathscr{G}}}^\ast(x,r) = \mu(B(x,r))^{-1}\int_{y\in B(x,r)} \fint_{r}^{2r} \alpha_{{\mathscr{G}}}(y,t) d\mu(y)dt. \end{equation} We decompose the domain of integration above into two parts and estimate each one separately. By \eqref{1.2}, \eqref{2.7}, \eqref{2.10}, and \eqref{2.8}, \begin{eqnarray} \label{2.13} &\,& \hskip-2cm \mu(B(x,r))^{-1} \int_{\Sigma_3(N) \cap B(x,r)} \fint_{r}^{2r} \alpha_{{\mathscr{G}}}(y,t) d\mu(y)dt \nonumber \\ &\leq& 4\mu(B(x,2^{-k}))^{-1} \int_{\Sigma_3(N) \cap B(x,r)} \int_{2^{-k}}^{2^{-k+2}} \alpha_{{\mathscr{G}}}(y,t) {d\mu(y)dt \over t} \nonumber \\ &\leq& 4 C_\delta \mu(B(x,2^{-k+1}))^{-1} \alpha_k(x) \leq 4 C_\delta \gamma_k. \end{eqnarray} This term tends to $0$ when $r$ tends to $0$, by \eqref{2.6}. 
For the second part, we notice that $\alpha_{{\mathscr{G}}}(y,t) \leq 2$ by definition (see \eqref{1.13} and \eqref{1.5}), so \begin{eqnarray} \label{2.14} &\,& \hskip-3cm\mu(B(x,r))^{-1} \int_{\Sigma\cap B(x,r) \setminus \Sigma_3(N)} \fint_{r}^{2r} \alpha_{{\mathscr{G}}}(y,t) d\mu(y)dt \nonumber\\ &\leq& 2 \mu(B(x,r))^{-1} \mu(\Sigma\cap B(x,r) \setminus \Sigma_3(N)), \end{eqnarray} which tends to $0$ by \eqref{2.11}. Combining \eqref{2.13} and \eqref{2.14} we get that $$ \lim_{r \to 0} \alpha_{{\mathscr{G}}}^\ast(x,r) = 0 \ \text{ for $\mu$-almost every } x\in \Sigma_3(N). $$ In other words, $\mu(\Sigma_3(N) \setminus \Sigma_0) = 0$ (see \eqref{1.24}); \eqref{2.2} follows easily, and so does \eqref{1.22}. \section{Tangent measures are self-similar} \label{S3} In this section we start the proof of Theorem \ref{t1.3} and prove the basic self-similarity estimate for tangent measures. \begin{lemma} \label{t3.1} Let $\mu$ be a doubling measure, let $\Sigma$ denote its support, let $\Sigma_0\subset \Sigma$ be as in \eqref{1.24}, and let $\sigma \in \text{Tan}(\mu,x)$ be a tangent measure of $\mu$ at a point $x\in \Sigma_0$. For each $y \in \Xi$, the support of $\sigma$, there exists $H \in {\mathscr{G}}$ such that $H(y) = y$, \begin{equation} \label{3.1} \lambda_1 \leq \lambda(H)^{-1} \leq \lambda_2 \end{equation} and \begin{equation} \label{3.2} H_\sharp \sigma = c \sigma \ \text{ for some } c > 0. \end{equation} \end{lemma} \medskip The numbers $\lambda_1$ and $\lambda_2$ are the same as in \eqref{1.12}, the dilation number $\lambda(H)$ is defined below \eqref{1.6}, and $H_\sharp \sigma$, the push-forward image of $\sigma$ by $H$, is defined as in \eqref{1.9}. \begin{proof} We may assume, without loss of generality, that $x=0$. 
Since $\sigma \in \text{Tan}(\mu,x)$ there are coefficients $a_k \geq 0$ and radii $r_k > 0$, such that $ \lim_{k \to \infty} r_k = 0$, and $\sigma$ is the weak limit of the measures $\{ \sigma_k \}$, where \begin{equation} \label{3.4} \sigma_k = a_k \mu^{0,r_k} = a_k \mu^{R_k}\hbox{ with } R_k(u) = r_k^{-1} u \ \text{ for } u \in {\mathbb{R}}^n. \end{equation} Note that $R_k$ maps $B(x,r_k)=B(0,r_{k})$ to ${\mathbb{B}}$. See \eqref{1.11} for the definition of $\mu^{x,r}$. Let \begin{equation} \label{3.8} \alpha_k = \sup\big\{ \alpha_{\mathscr{G}}^\ast(0,r) \, ; \, 0 < r < \sqrt{r_k} \big\}; \end{equation} then since $x\in\Sigma_0$ (see \eqref{1.24}) \begin{equation} \label{3.9} \lim_{k \to \infty} \alpha_k = 0. \end{equation} By \eqref{3.8}, if for $k$ large \begin{equation} \label{3.7} r_k < \rho_k < \sqrt{r_k} \end{equation} then $\alpha_{\mathscr{G}}^\ast(0,\rho_k) \leq \alpha_k$ for these $k$. Since $\sigma$ is the weak limit of the $\sigma_k$, for each $y\in\Xi:=\mathop\mathrm{supp} \sigma$ we can find points $y_k \in \mathop\mathrm{supp}(\sigma_k) = r_k^{-1} \Sigma$ such that \begin{equation} \label{3.12} \lim_{k \to \infty} |y_k-y| = 0. \end{equation} Let $\{\eta_k\}_{k\ge1}$ and $\{\rho_k\}_{k\ge 1}$ be sequences such that \eqref{3.7} holds for $k$ large, and also \begin{equation} \label{3.6} \lim_{k \to \infty} {\rho_k \over r_k} = \infty \ \text{ and } \ \lim_{k \to \infty} \eta_k = 0. \end{equation} Consider \begin{equation} \label{3.13} A_k = \fint_{ B(r_k y_k, \eta_k r_k)} \fint_{ \rho_k}^{2\rho_k} \alpha_{{\mathscr{G}}}(z,t) d\mu(z)dt. \end{equation} For $k$ large, the domain of integration $B(r_k y_k, \eta_k r_k)$ is contained in $B(0,\rho_k)$ (because $y_k$ tends to $y$, $\eta_k$ tends to $0$, and $r_k^{-1}\rho_k$ tends to $+\infty$; see \eqref{3.12} and \eqref{3.6}). 
Recall that $B(0,\rho_k) \times [\rho_k,2\rho_k]$ is the domain of integration in the definition of $\alpha_{\mathscr{G}}^\ast(0,\rho_k)$ (see \eqref{1.23}); hence for $k$ large \begin{equation} \label{3.14} A_k \leq {\mu(B(0,\rho_k)) \over \mu(B(r_k y_k, \eta_k r_k))} \, \alpha_{\mathscr{G}}^\ast(0,\rho_k) \leq C_\delta^{2 + \log_2(\rho_k/(\eta_k r_k))} \alpha_k \end{equation} by \eqref{3.13}, \eqref{1.23}, \eqref{3.7}, \eqref{3.8}, and the doubling property \eqref{1.2}. Later on, we will choose $\rho_k$ and $\eta_k$, depending on $\alpha_k$, so that $A_k$ is still small enough. By Chebyshev's inequality there exist \begin{equation} \label{3.15} z_k \in \Sigma \cap B(r_k y_k, \eta_k r_k) \ \text{ and } \ t_k \in [\rho_k,2\rho_k] \end{equation} such that \begin{equation} \label{3.16} \alpha_{{\mathscr{G}}}(z_k,t_k) \leq A_k. \end{equation} By the definition of $\alpha_{{\mathscr{G}}}$ (see \eqref{1.13}) there exists $G_k \in {\mathscr{G}}(z_k,t_k)$ such that \begin{equation} \label{3.17} {\mathbb {W}}_1(\mu_0^{G_k},\mu_0^{z_k,t_k}) \leq 2 A_k, \end{equation} which by \eqref{1.5} means that \begin{equation} \label{3.18} \av{\int\psi d\mu_0^{G_k}-\int\psi \,d\mu_0^{z_k,t_k}} \leq 2 A_k \ \text{ for any } \psi \in \text{Lip}_1({\mathbb{B}}). \end{equation} Our next goal is to interpret \eqref{3.18} in terms of $\sigma_k$. Let $T_k \in {\mathscr{D}}$ be such that for $u\in {\mathbb{R}}^n$ \begin{equation} \label{3.19} T_k(u) = {u-z_k \over t_k} \ \hbox{ and so }\ T_k(B(z_k,t_k)) = {\mathbb{B}}. \end{equation} The definitions \eqref{1.10} and \eqref{1.11} yield \begin{equation} \label{3.20} \mu_0^{G_k} = e_k \mu^{G_k} \ \text{ and } \ \mu_0^{z_k,t_k} = e'_k \mu^{z_k,t_k} = e'_k \mu^{T_k}, \end{equation} where $e_k$ and $e'_k$ come from the normalization, and are given by \begin{equation} \label{3.21} e_k = \mu^{G_k}({\mathbb{B}})^{-1} \ \text{ and } \ e'_k = \mu^{T_k}({\mathbb{B}})^{-1}.
\end{equation} Let $\psi$ be any Lipschitz function supported on ${\mathbb{B}}$ and set $I_k = e_k \int_{G_k(\Sigma)}\psi d\mu^{G_k}$; by \eqref{3.20} and \eqref{1.9}, \begin{equation} \label{3.23} I_k = e_k \int_{G_k(\Sigma)}\psi d\mu^{G_k} = e_k \int_\Sigma\psi(G_k(\xi)) d\mu(\xi) = e_k \int\psi \circ G_k \, d\mu. \end{equation} By \eqref{3.4}, $\sigma_k = a_k \mu^{R_k} = a_k (R_k)_\sharp \mu$, hence $\mu = a_k^{-1} (R_k^{-1})_\sharp \sigma_k$. Thus a similar computation to the one in \eqref{3.23} yields \begin{equation} \label{3.24} I_k = e_k a_k^{-1} \int \psi \circ G_k \circ R_k^{-1} d\sigma_k. \end{equation} A similar computation, with $G_k$ replaced by $T_k$, yields \begin{equation} \label{3.25} I'_k := \int\psi \,d\mu_0^{z_k,t_k}= e'_k a_k^{-1} \int \psi \circ T_k \circ R_k^{-1} d\sigma_k. \end{equation} We want to apply \eqref{3.24} and \eqref{3.25} to special functions $\psi$. Let $\varphi$ be a compactly supported 1-Lipschitz function. Let \begin{equation} \label{3.27} \psi = \varphi \circ R_k \circ T_k^{-1}. \end{equation} Note that $\psi$ is a Lipschitz function with constant less than or equal to $r_k^{-1}t_k\le 2r_k^{-1}\rho_k$ (see \eqref{3.15}). If $\varphi$ is supported in $B(0,R)$, for $k$ large enough the support of $\psi$ is contained in $$\begin{aligned} T_k \circ R_k^{-1}(B(0,R)) &= T_k(B(0,r_kR)) = B\big({-z_k \over t_k}, t_k^{-1} r_k R\big) \\ &\subset B(0, r_kt_k^{-1}(R+\eta_k+|y_k|))\subset B(0,1), \end{aligned} $$ where we have used \eqref{3.15}, \eqref{3.6}, and \eqref{3.12}. Because of this bound on the Lipschitz constant of $\psi$, the function $\widetilde \psi = (2r_k^{-1} \rho_k)^{-1}\psi$ is $1$-Lipschitz; then \eqref{3.18} applies to $\widetilde \psi$, and \eqref{3.14} yields \begin{equation} \label{3.29} |I'_k - I_k| \leq 4 A_k r_k^{-1} \rho_k \leq 4 r_k^{-1} \rho_k C_\delta^{2 + \log_2(\rho_k/(\eta_k r_k))} \alpha_k =: \widetilde \alpha_k \end{equation} where $I_k$ and $I'_k$ are as in \eqref{3.23} and \eqref{3.25} with $\psi$ coming from \eqref{3.27}.
The final identity is the definition of $\widetilde \alpha_k$. Notice that even though $\varphi$ does not depend on $k$, $\psi$ does, but this is not an issue. Note that by \eqref{3.27} and \eqref{3.25}, we have \begin{equation} \label{3.30} I'_k = e'_k a_k^{-1} \int \varphi d\sigma_k. \end{equation} Similarly, by \eqref{3.24} and \eqref{3.27} we have \begin{equation} \label{3.31} I_k = e_k a_k^{-1} \int \varphi \circ R_k \circ T_k^{-1} \circ G_k \circ R_k^{-1} d\sigma_k. \end{equation} Set \begin{equation}\label{3.31A} H_k = R_k \circ T_k^{-1} \circ G_k \circ R_k^{-1}. \end{equation} Then $H_k \in {\mathscr{G}}$, and by \eqref{3.19}, its dilation factor $\lambda(H_k)$ (which is also the $n$-th root of its Jacobian determinant) is such that \begin{equation} \label{3.32} \lambda(H_k)^{-1} = \lambda(T_k) \lambda(G_k)^{-1} = t_k^{-1} \lambda(G_k)^{-1} \in [\lambda_1, \lambda_2], \end{equation} because $G_k \in {\mathscr{G}}(z_k,t_k)$, and by the definition \eqref{1.12}. Note that by \eqref{3.4}, the fact that $G_k \in {\mathscr{G}}(z_k,t_k)$ (see \eqref{1.12} and \eqref{1.8}), and \eqref{3.19}, we have \begin{eqnarray} \label{3.33} H_k(r_k^{-1} z_k) &= & R_k \circ T_k^{-1} \circ G_k \circ R_k^{-1}(r_k^{-1}z_k) = R_k \circ T_k^{-1} \circ G_k (z_k)\nonumber\\ &=&R_k \circ T_k^{-1}(0)= R_k(z_k)= r_k^{-1} z_k. \end{eqnarray} Notice also that by \eqref{3.15} $$ |r_k^{-1} z_k - y| \leq |r_k^{-1} z_k - y_k| + |y_k-y| = r_k^{-1} | z_k - r_k y_k| + |y_k-y| \leq \eta_k + |y_k-y|. $$ Thus $|r_k^{-1} z_k - y| $ tends to $0$ by \eqref{3.12} and \eqref{3.6}, so \begin{equation} \label{3.34} \lim_{k \to \infty} r_k^{-1} z_k = y. \end{equation} Combining \eqref{3.32}, \eqref{3.33}, and \eqref{3.34}, we deduce that $H_k$ lies in a compact subset of ${\mathscr{G}}$. Thus we can replace $\{ r_k \}$ by a subsequence for which the $H_k$ converge to a limit $H$. 
In addition, \eqref{3.32}, \eqref{3.33} and \eqref{3.34} imply that \begin{equation} \label{3.35} \lambda(H)^{-1} \in [\lambda_1, \lambda_2] \ \text{ and } \ H(y) = y. \end{equation} Combining \eqref{3.29}, \eqref{3.30}, \eqref{3.31}, and \eqref{3.31A} we see that if $\varphi$ is a compactly supported $1$-Lipschitz function, then for $k$ large \begin{equation} \label{3.36} \Big| e_k a_k^{-1} \int \varphi \circ H_k \, d\sigma_k - e'_k a_k^{-1} \int \varphi d\sigma_k \Big| \leq \widetilde\alpha_k. \end{equation} By \eqref{3.21}, \eqref{1.9}, and \eqref{3.19}, \begin{equation} \label{3.37} e'_k = \mu^{T_k}({\mathbb{B}})^{-1} = \mu(T_k^{-1}({\mathbb{B}}))^{-1} = \mu(B(z_k,t_k))^{-1}. \end{equation} Similarly notice that $G_k(z_k) = 0$ because $G_k \in {\mathscr{G}}(z_k,t_k) \subset {\mathscr{G}}(z_k)$ (see \eqref{1.12} and \eqref{1.8}); then \eqref{3.21} and \eqref{1.9} yield \begin{equation} \label{3.37A} e_k = \mu^{G_k}({\mathbb{B}})^{-1} = \mu(G_k^{-1}({\mathbb{B}}))^{-1} = \mu(B(z_k,\lambda(G_k)^{-1}))^{-1}. \end{equation} In addition, \begin{equation} \label{3.37B} B(z_k,\lambda_1 t_k) \subset G_k^{-1}({\mathbb{B}}) = B(z_k,\lambda(G_k)^{-1}) \subset B(z_k,\lambda_2 t_k) \end{equation} because $G_k(z_k) = 0$ and by \eqref{1.12}. Then \eqref{1.2}, \eqref{3.37}, \eqref{3.37A}, \eqref{3.37B}, and the fact that $\lambda_1>1$ yield \begin{equation} \label{3.38} C^{-1} e'_k \leq e_k \leq e'_k \end{equation} for some constant $C$ that depends on $C_\delta$ and $\lambda_2$. To estimate $a_k$, consider a test function $f$ such that $ {\mathds 1}_{{\mathbb{B}}}\le f\le {\mathds 1}_{2{\mathbb{B}}}$. By definition of $\sigma$, $\int f d\sigma = \lim_{k \to \infty} \int f d\sigma_k$. By \eqref{1.2}, \eqref{3.4} and the definition above \eqref{1.11}, we have \begin{equation}\label{3.38A} a_k \mu(B(0,r_k)) = \sigma_k({\mathbb{B}})\le \int f d\sigma_k \le \sigma_k(2{\mathbb{B}}) = a_k \mu(B(0,2r_k)) \le C_\delta a_k \mu(B(0,r_k)). 
\end{equation} Moreover, since $\sigma$ is also doubling (see the remark below \eqref{1.20}), we have that \begin{equation}\label{3.38B} \sigma({\mathbb{B}})\le \int f d\sigma \le \sigma(2{\mathbb{B}})\le C_\delta ^2\sigma({\mathbb{B}}). \end{equation} Thus by \eqref{3.38A}, \eqref{3.38B} and the definition of $\sigma$ there exists $C>1$ such that for $k$ large, \begin{equation}\label{3.38C} C^{-1} a_k \mu(B(0,r_k))\le \sigma({\mathbb{B}})\le CC_\delta^2 a_k \mu(B(0,r_k)). \end{equation} Recall that $\rho_k \leq t_k \leq 2\rho_k$ by \eqref{3.15}, and that $r_k^{-1} z_k$ is bounded by \eqref{3.34}; since $r_k^{-1} \rho_k$ tends to $+\infty$ by \eqref{3.6} we get that for $k$ large, $|z_{k}|<Cr_{k}<\rho_{k}\leq t_{k}$, and so $B(z_k,t_k) \subset B(0, 2t_k) \subset B(0, 4\rho_k)$. By \eqref{3.37}, since $0 \in \Sigma$, and by \eqref{1.2}, \begin{equation}\label{3.38D} (e'_k)^{-1}=\mu(B(z_k,t_k)) \le \mu(B(0,4\rho_k))\leq C_\delta^{3+\log_2(\rho_k/r_k)} \mu(B(0,r_k)). \end{equation} Combining \eqref{3.38C} and \eqref{3.38D} we obtain \begin{equation} \label{3.40} a_k \leq C C_\delta^{5+\log_2(\rho_k/r_k)} e'_k \cdot \sigma({\mathbb{B}}). \end{equation} Return to \eqref{3.36}, set $b_k = e_k/e'_k$, and observe that by \eqref{3.40} \begin{equation} \label{3.41} \Big| b_k \int \varphi \circ H_k \, d\sigma_k - \int \varphi d\sigma_k \Big| \leq {a_k \widetilde\alpha_k\over e'_k} \leq C C_\delta^{5+\log_2(\rho_k/r_k)} \widetilde\alpha_k \sigma({\mathbb{B}}). \end{equation} Now we choose $\rho_k$ and $\eta_k$. Denote by $$ \beta_k = C C_\delta^{5+\log_2(\rho_k/r_k)} \widetilde\alpha_k = C C_\delta^{5+\log_2(\rho_k/r_k)} \cdot 4 r_k^{-1} \rho_k C_\delta^{2 + \log_2(\rho_k/(\eta_k r_k))} \alpha_k $$ the right-hand side of \eqref{3.41} (see \eqref{3.29}). 
Since $\alpha_k$ tends to $0$ by \eqref{3.9}, we can choose $\rho_k$ and $\eta_k$ so that the constraints \eqref{3.6} and \eqref{3.7} hold, but the convergence in \eqref{3.6} is slow enough so that \begin{equation} \label{3.42} \lim_{k \to \infty} \beta_k = 0. \end{equation} Recall from \eqref{3.38} that $C^{-1} \leq b_k \leq 1$, hence modulo passing to a subsequence (which we relabel) we can guarantee that $\lim_{k \to \infty} b_k = b > 0$. Since $\sigma$ is the weak limit of the $\sigma_k$ and $\varphi$ is Lipschitz with compact support, we have \begin{equation} \label{3.42A} \lim_{k \to \infty} \int \varphi d\sigma_k = \int \varphi d\sigma. \end{equation} Since $\varphi$ is Lipschitz and compactly supported, so is $\varphi\circ H$ and \begin{equation} \label{3.42B} \lim_{k \to \infty} \int \varphi\circ H \, d\sigma_k = \int \varphi\circ H \, d\sigma \end{equation} because $\sigma$ is the weak limit of the $\sigma_k$. Note that there is also a ball $B$ such that for $k$ large $\varphi\circ H(x) = \varphi\circ H_k(x) = 0$ for $x\in {\mathbb{R}}^n \setminus B$; then \begin{align}\label{3.42C} \int |\varphi\circ H - \varphi\circ H_k| \, d\sigma_k &\leq ||\varphi||_{lip} \int_B |H - H_k| \, d\sigma_k \nonumber \\ &\leq ||\varphi||_{lip} ||H - H_k||_{L^\infty(B)}\sigma_k(B)\nonumber \\ &\leq 2 ||\varphi||_{lip} ||H - H_k||_{L^\infty(B)}\sigma(2B). \end{align} Thus \begin{equation}\label{3.42D} \lim_{k\rightarrow \infty}\left|\int \varphi\circ H_k\, d\sigma_k -\int \varphi\circ H\, d\sigma_k \right|=0. \end{equation} Combining \eqref{3.41}, \eqref{3.42}, \eqref{3.42A}, \eqref{3.42B}, \eqref{3.42C} and \eqref{3.42D} we obtain that for any $1$-Lipschitz function $\varphi$ with compact support, \begin{equation} \label{3.43} b \int \varphi \circ H \, d\sigma = \int \varphi \,d\sigma. \end{equation} Since the Radon measure $\sigma$ is regular, \eqref{3.43} also holds for characteristic functions of Borel sets. Hence $b H_\sharp \sigma = \sigma$.
Recall that $\lambda(H)^{-1} \in [\lambda_1, \lambda_2]$ and $H(y) = y$ by \eqref{3.35}; thus the conclusion of Lemma \ref{t3.1} holds, with $c=b^{-1}$. \end{proof} \section{Self-similar measures are flat} \label{S4} In this section we complete the proof of Theorem \ref{t1.3}. Using the notation in Section \ref{S3}, our goal is to show that if $\sigma \in \text{Tan}(\mu,x_0)$, where $x_0\in \Sigma_0$ (see \eqref{1.24}), then $\sigma$ is a flat measure. Lemma~\ref{t3.1} guarantees that for each $y\in \Xi$ (the support of $\sigma$), there is a transformation $H(y) \in {\mathscr{G}}$ and a constant $c(y) > 0$ such that \begin{equation} \label{4.1} \lambda(y) := \lambda(H(y)) \in [\lambda_2^{-1},\lambda_1^{-1}] \end{equation} and \begin{equation} \label{4.2} H(y)_\sharp \sigma = c(y) \sigma. \end{equation} By definition of ${\mathscr{G}}$ (see \eqref{1.6}), the linear part of $H(y)$ is of the form $\lambda(y) R(y)$, where $R(y)$ is a linear isometry. Since $H(y)$ fixes $y$, this means that \begin{equation} \label{4.3} H(y)(u) = y + \lambda(y) R(y)(u-y) \ \text{ for } u\in {\mathbb{R}}^n. \end{equation} The next lemma allows us to replace $H(y)$ with one of its large powers, chosen so that its isometric part is very close to the identity. \begin{lemma} \label{t4.1} For each choice of $\varepsilon > 0$, there is an integer $m_0$, depending only on $\varepsilon$ and $n$, such that for each $y\in \Xi$ and each integer $\ell \geq 1$, there is an integer $m(y) \in [1,m_0]$ such that \begin{equation} \label{4.4} || R(y)^{m(y)\ell} - I || \leq \varepsilon. \end{equation} \end{lemma} \medskip \begin{proof} Here $I$ is the identity mapping. We use the compactness of the group of linear isometries of ${\mathbb{R}}^n$ to choose $m_0$ large enough so that if $R_1, \ldots, R_{m_0}$ are linear isometries, we can find integers $m_1, m_2$ such that $1 \leq m_1 < m_2 < m_0$ and $|| R_{m_2} - R_{m_1}|| \leq \varepsilon$.
We apply this with $R_m = R(y)^{m \ell}$ to find $m_1$ and $m_2$ such that $|| R(y)^{m_2 \ell} - R(y)^{m_1 \ell}|| \leq \varepsilon$. Then $|| R(y)^{(m_2-m_1)\ell} - I|| \leq \varepsilon$, as needed. \end{proof} \medskip Next we study elementary properties of $\Xi$. Notice that by \eqref{4.2}, $H(y)(\Xi) = \Xi$, and iterations also yield \begin{equation} \label{4.5} H(y)^m(\Xi) = \Xi \ \text{ for } m \geq 1. \end{equation} \begin{lemma} \label{t4.2} The set $\Xi$ is convex. \end{lemma} \medskip \begin{proof} Let $x, y \in \Xi$ be given. Our goal is to show that the segment $[x,y]$ is contained in $\Xi$. For each $\varepsilon > 0$ and $\ell \geq 1$, we construct a sequence $\{ y_k \}$ in $\Xi$ (depending on $\varepsilon$ and $\ell$) which will allow us to estimate how far $[x,y]$ is from $\Xi$. We start with $y_0 = y$. If $k \geq 0$ and $y_k \in \Xi$ has been defined, we define $y_{k+1}$ as follows. Set \begin{equation} \label{4.6} H_k = H(y_k)^{m(y_k) \ell} \ \text{ and } \ y_{k+1} = H_k(x). \end{equation} By \eqref{4.5} and since $x\in\Xi$, $y_{k+1} \in \Xi$. Let us show that for $\ell$ large and $\varepsilon$ small, the $y_k$ stay close to the segment $[x,y]$ and converge (slowly) to $x$. First observe that by iterations of \eqref{4.3}, $H_k$ is given by \begin{equation} \label{4.7} H_k(u) = y_k + \lambda'_k R_k(u-y_k) \ \text{ for } u\in {\mathbb{R}}^n, \end{equation} with \begin{equation} \label{4.8} \lambda'_k = \lambda(H(y_k))^{m(y_k) \ell} \in [\lambda_2^{-m_0 \ell},\lambda_1^{-\ell}]\hbox{ and hence }\lambda'_k<1. \end{equation} In addition, $R_k = R(y_k)^{m(y_k) \ell}$ and therefore, by \eqref{4.4}, $||R_k -I || \leq \varepsilon$. Set $r_k = |x-y_k|$ and $y_{k+1}^\ast = y_k + \lambda'_k (x-y_k)$. Notice that by \eqref{4.7} and \eqref{4.4}, \begin{eqnarray} \label{4.9} |y_{k+1}-y_{k+1}^\ast| &=& \big|[y_k+\lambda'_k R_k (x-y_k)] - [y_k+\lambda'_k(x-y_k)] \big| \nonumber\\ &=& \big|\lambda'_k [R_k - I](x-y_k)\big| \leq \varepsilon \lambda'_k r_k. 
\end{eqnarray} Since \begin{equation} \label{4.10} |y_{k+1}^\ast-x| = (1-\lambda'_k) |y_{k}-x| = (1-\lambda'_k) r_k, \end{equation} we also get that if $\varepsilon < 1/2$, \begin{eqnarray} \label{4.11} r_{k+1} &=& |y_{k+1}-x| \leq |y_{k+1}^\ast-x|+|y_{k+1}-y_{k+1}^\ast| \nonumber\\ &\leq& (1-\lambda'_k) r_k + \varepsilon \lambda'_k r_k \leq (1-\lambda'_k/2) r_k. \end{eqnarray} Then since $y_{k+1}^\ast \in [x,y_k]$, and by \eqref{4.9} and \eqref{4.11} we have \begin{equation} \label{4.12} \mathop\mathrm{dist}(y_{k+1},[x,y_k]) \leq |y_{k+1}-y_{k+1}^\ast| \leq \varepsilon \lambda'_k r_k \leq 2 \varepsilon (r_k - r_{k+1}). \end{equation} By elementary geometry, \begin{equation} \label{4.13} \mathop\mathrm{dist}(y_{k+1},[x,y]) \leq \mathop\mathrm{dist}(y_{k+1},[x,y_k])+\mathop\mathrm{dist}(y_{k},[x,y]). \end{equation} An iteration of \eqref{4.12} combined with \eqref{4.13} yields \begin{equation} \label{4.14} \mathop\mathrm{dist}(y_{k+1},[x,y]) \leq 2 \varepsilon \sum_{0 \leq j \leq k} (r_j - r_{j+1}) \leq 2 \varepsilon r_0 = 2 \varepsilon |x-y|. \end{equation} Notice that by \eqref{4.11} and \eqref{4.8} $r_k$ tends to $0$. Thus we have constructed a sequence $\{ y_k \}$ in $\Xi$, which goes from $y=y_0$ to $x = \lim_{k \to \infty} y_k$. The points $y_k$ lie within $2 \varepsilon |x-y|$ from $[x,y]$. Using the definition of $y_{k+1}^\ast$, \eqref{4.9}, and \eqref{4.8} we can estimate their successive distances \begin{eqnarray} \label{4.15} |y_{k+1}-y_{k}| &\leq& |y_{k+1}-y_{k+1}^\ast| + |y_{k+1}^\ast-y_{k}| \leq \varepsilon \lambda'_k r_k + \lambda'_k r_k \nonumber\\ &\leq& 2 \lambda'_k r_k \leq 2 \lambda_1^{-\ell} |x-y|. \end{eqnarray} Let $z_k$ be the orthogonal projection of $y_k$ into the line containing $x$ and $y$. Note that by \eqref{4.14} $z_k\in \big[x-2\varepsilon|x-y|\frac{y-x}{|x-y|}, y+2\varepsilon|x-y|\frac{y-x}{|x-y|}\big]$. By \eqref{4.15}, $|z_{k+1}-z_k|\le 2 \lambda_1^{-\ell} |x-y|$ and by the definition of $z_k$, $|z_k-y_k|\le 2 \varepsilon |x-y|$. 
Therefore every point of $[x,y]$ lies within $2 (\varepsilon + \lambda_1^{-\ell}) |x-y|$ of some $y_k$, that is, each point of $[x,y]$ is at most $2 (\varepsilon + \lambda_1^{-\ell}) |x-y|$ away from $\Xi$. Choosing $\varepsilon$ arbitrarily small and $\ell$ arbitrarily large, we get that $[x,y] \subset \Xi$. Lemma~\ref{t4.2} follows. \end{proof} \medskip \begin{lemma} \label{t4.3} The set $\Xi$ is a vector subspace of ${\mathbb{R}}^n$. \end{lemma} \begin{proof} Let $V$ be the smallest affine subspace of ${\mathbb{R}}^n$ that contains $\Xi$, and let $d$ denote its dimension. Choose $d+1$ affinely independent points $y_0, \ldots, y_{d}$ in $\Xi$ (this means that the vectors $y_j-y_0$, $j \geq 1$, are linearly independent; such points exist in $\Xi$ because $V$ is the smallest affine subspace containing $\Xi$), and set $y = {1 \over d+1} \sum_{j=0}^d y_j$. By Lemma~\ref{t4.2}, $\Xi$ contains the convex hull of the $y_j$, so there is a small radius $r > 0$ such that $V \cap B(y,r) \subset \Xi$. Recall from \eqref{4.5} that $H(y)^m(\Xi) = \Xi$ for $m \geq 1$. Applying the bijection $H(y)^{-m}$ to both sides, we see that $H(y)^{-m}(\Xi) = \Xi$ for $m \geq 1$, so \begin{equation} \label{4.16} H(y)^{-m}(V \cap B(y,r)) \subset \Xi. \end{equation} We know that $H(y)^{-m}(V \cap B(y,r))$ is a nontrivial open subset of the affine space $H(y)^{-m}(V)$, and since $\Xi \subset V$ we get that $H(y)^{-m}(V) \subset V$. Then by a dimension count $H(y)^{-m}(V) = V$, and by the description \eqref{4.3} of $H(y)$, we see that \begin{equation} \label{4.17} H(y)^{-m}(V \cap B(y,r)) = V \cap B(y, \lambda(H(y))^{-m} r). \end{equation} Recall that $\lambda(H(y)) < 1$; then $\lambda(H(y))^{-m} r$ is as large as we want by picking $m$ as large as we need. Hence \eqref{4.16} guarantees that $\Xi$ contains $V$, and this means that $\Xi = V$. Note that Remark 14.4 (2) in \cite{Mattila} ensures that $0\in\Xi$, as $\Xi$ is the support of $\sigma$ and $\sigma\in\text{Tan}(\mu,x_0)$ where $\mu$ is a doubling measure and $x_0\in \Sigma$. Hence $\Xi = V$ is a vector space.
This completes our proof of Lemma \ref{t4.3}. \end{proof} \medskip Now we study the distribution of $\sigma$ on $\Xi$. If $\Xi$ is reduced to the origin, then $\sigma$ is a Dirac mass, and Dirac masses lie in ${\mathscr{F}}_0$. Thus in this case $\sigma$ is trivially flat. We may now assume that $\Xi$ is a vector space of dimension $d > 0$. \begin{lemma} \label{t4.4} There is a dimension $D \geq 0$ such that $\sigma$ is Ahlfors-regular of dimension $D$, which means that \begin{equation} \label{4.19} C^{-1} \rho^D \leq \sigma(B(y,\rho)) \leq C\rho^D \ \text{ for $y\in \Xi$ and $\rho > 0$} \end{equation} for some constant $C \geq 1$. \end{lemma} \medskip \begin{proof} Set $\lambda(y) = \lambda(H(y))$, and recall from \eqref{4.1} that $1 < \lambda_1 \leq \lambda(y)^{-1} \leq \lambda_2$. Notice that by \eqref{4.2}, for $y\in \Xi$ and $r > 0$ \begin{eqnarray} \label{4.20} \sigma(B(y,\lambda(y)^{-1} r)) &=& \sigma(H(y)^{-1}(B(y,r))) = H(y)_\sharp \sigma(B(y,r)) \nonumber \\ &=& c(y) \sigma(B(y,r)). \end{eqnarray} Iterating we obtain for $m\ge 0$ \begin{equation} \label{4.21} \sigma(B(y,\lambda(y)^{-m} r)) = c(y)^m \sigma(B(y,r)). \end{equation} Applying \eqref{4.21} to $\lambda(y)^{m} r$ instead of $r$ we have that \eqref{4.21} also holds for $m\le 0$. Observe that since $\lambda(y) < 1$ and $\sigma(B(y,r)) > 0$ when $y\in \Xi$, \eqref{4.21} yields $c(y) \geq 1$. If $c(y) = 1$, then $\sigma(B(y,\lambda(y)^{-m} r))=\sigma(B(y,r))$ for all $m\in {\mathbb{Z}}$, and $\sigma$ is a Dirac mass. This case was excluded before the statement of the lemma, so $c(y) > 1$. Now let $\rho >0$ be given, and choose $m$ such that \begin{equation} \label{4.22} \lambda(y)^{-m} \leq \rho \leq \lambda(y)^{-m-1}. 
\end{equation} By \eqref{4.21} applied to $r=1$, \begin{equation} \label{4.23} c(y)^m \sigma(B(y,1)) \leq \sigma(B(y,\rho)) \leq c(y)^{m+1} \sigma(B(y,1)), \end{equation} hence, letting $\ell = \log(\sigma(B(y,1)))$, \eqref{4.23} yields \begin{equation}\label{4.23A} m \log(c(y)) + \ell \leq \log(\sigma(B(y,\rho))) \leq (m+1) \log(c(y)) + \ell. \end{equation} By \eqref{4.22} \begin{equation}\label{4.22A} m \log(\lambda(y)^{-1}) \leq \log\rho \leq (m+1) \log(\lambda(y)^{-1}). \end{equation} Hence combining \eqref{4.23A} and \eqref{4.22A} we have \begin{equation} \label{4.24} \lim_{\rho \to +\infty} {\log(\sigma(B(y,\rho))) \over \log\rho} = {\log(c(y)) \over \log(\lambda(y)^{-1})}=: D(y). \end{equation} We claim that $D(y)$ does not depend on $y$. Indeed, if $z \in \Xi$, observe that $B(y,\rho) \subset B(z,\rho+|z-y|)$, hence \begin{eqnarray} \label{4.25} D(y) &=& \lim_{\rho \to +\infty} {\log(\sigma(B(y,\rho))) \over \log\rho} \leq \liminf_{\rho \to +\infty} {\log(\sigma(B(z,\rho+|z-y|))) \over \log\rho} \nonumber \\ &=& \liminf_{\rho \to +\infty} {\log(\sigma(B(z,\rho+|z-y|))) \over \log(\rho+|z-y|)} = D(z); \end{eqnarray} the opposite inequality also holds exchanging the roles of $y$ and $z$. Let $D$ be the common value of the $D(y)$ for $y\in \Xi$. By definition (see \eqref{4.24}), $\lambda(y)^{-D} = c(y)$, and using \eqref{4.22} we can rewrite \eqref{4.23} as \begin{equation} \label{4.26} \lambda(y)^{-m D} \sigma(B(y,1)) \leq \sigma(B(y,\rho)) \leq \lambda(y)^{-D} \lambda(y)^{-m D} \sigma(B(y,1)). \end{equation} Thus, by \eqref{4.22}, \begin{equation} \label{4.27} \lambda(y)^{D} \rho^D \sigma(B(y,1)) \leq \sigma(B(y,\rho)) \leq \lambda(y)^{-D} \rho^{D} \sigma(B(y,1)), \end{equation} which yields \eqref{4.19} with $C = \lambda_2^D \max(\sigma(B(y,1)), \sigma(B(y,1))^{-1})$ (because \eqref{4.1} guarantees that $\lambda(y)^{-1} \leq \lambda_2$). 
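\medskip As a quick consistency check (not needed for the argument), consider the model case where $\sigma = c_0 {\mathscr{H}}^d\res V$ for a $d$-dimensional vector space $V$, and $H(y)(u) = y + \lambda(y) R(y)(u-y)$ with $R(y)$ a linear isometry preserving $V$. Then for $y \in V$ and $r > 0$, $$ c(y)\, \sigma(B(y,r)) = H(y)_\sharp \sigma(B(y,r)) = \sigma(B(y,\lambda(y)^{-1} r)) = \lambda(y)^{-d} \sigma(B(y,r)), $$ so $c(y) = \lambda(y)^{-d}$, and \eqref{4.24} returns $D(y) = \log(c(y))/\log(\lambda(y)^{-1}) = d$, as expected for a flat measure.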
\end{proof} \medskip \begin{lemma} \label{t4.5} Let $d$ be the dimension of the vector space $\Xi$, and denote by $\nu={\mathscr{H}}^d\res\Xi$ the restriction of ${\mathscr{H}}^d$ to $\Xi$. Let $D$ be as in \eqref{4.19}. Then $d=D$, and there exists a constant $c_0>0$ such that $\sigma=c_0 \nu$. \end{lemma} \medskip \begin{proof} Since $\sigma$ is Ahlfors regular of dimension $D$ (by Lemma \ref{t4.4}), a standard covering argument (see for instance Lemma 18.11 in \cite{MSbook}) guarantees that there exists a constant $C >0$ such that \begin{equation} \label{4.28} C^{-1} {\mathscr{H}}^D\res{\Xi} \leq \sigma \leq C {\mathscr{H}}^D\res{\Xi}, \end{equation} and $\Xi$ is a $D$-dimensional Ahlfors regular set. Hence $D=d$. By \eqref{4.28}, $\sigma$ is absolutely continuous with respect to $\nu$, and the Radon-Nikodym derivative of $\sigma$ with respect to $\nu$ is bounded. Thus there is a bounded function $f$ on $\Xi$ such that $\sigma = f \nu$. We now show that $f$ is constant. First observe that since $d = D = {\log(c(y)) \over \log(\lambda(y)^{-1})}$ (by \eqref{4.24}), \eqref{4.21} guarantees that for $y\in \Xi$, $r > 0$, and $m \in {\mathbb{Z}}$ \begin{equation} \label{4.29} \sigma(B(y,\lambda(y)^{-m} r)) = c(y)^m \sigma(B(y,r)) = \lambda(y)^{- md} \sigma(B(y,r)). \end{equation} Since $\nu(B(y,\lambda(y)^{-m} r)) = \lambda(y)^{- md} \nu(B(y,r))$, we may rewrite \eqref{4.29} as \begin{equation} \label{4.30} {\sigma(B(y,\lambda(y)^{-m} r)) \over \nu(B(y,\lambda(y)^{-m} r))} = {\sigma(B(y, r)) \over \nu(B(y, r))}. \end{equation} The Lebesgue differentiation theorem says that for $\nu$-almost every $y\in \Xi$, \begin{equation} \label{4.31} f(y) = \lim_{\rho \to 0} {\sigma(B(y,\rho)) \over \nu(B(y,\rho))}. \end{equation} For such a $y$ and every $r > 0$, by \eqref{4.31} and \eqref{4.30} applied with $-m$ in place of $m$, \begin{equation} \label{4.32} f(y) = \lim_{m \to \infty} {\sigma(B(y,\lambda(y)^{m} r)) \over \nu(B(y,\lambda(y)^{m}r))} = {\sigma(B(y,r)) \over \nu(B(y,r))}.
\end{equation} That is, \begin{equation} \label{4.33} \sigma(B(y,r)) = f(y) \nu(B(y,r)) \ \text{ for } r > 0. \end{equation} If $z \in \Xi$ is another Lebesgue point of $f$, since $B(z,r) \subset B(y,r+|y-z|)$ we have \begin{eqnarray} \label{4.34} f(z) &=& \lim_{r \to \infty} {\sigma(B(z,r)) \over \nu(B(z,r))} \leq \liminf_{r \to \infty} {\sigma(B(y,r+|y-z|)) \over \nu(B(z,r))} \nonumber \\ &=& \liminf_{r \to \infty} {\sigma(B(y,r+|y-z|)) \over \nu(B(y,r+|y-z|))} = f(y). \end{eqnarray} Similarly $f(y) \leq f(z)$, and $f$ is constant. \end{proof} This completes the proof of Theorem~\ref{t1.3}. In fact we have proved that if $\sigma\in \text{Tan}(\mu, x_0)$ with $x_0\in\Sigma_0$ (see \eqref{1.24}) then $\sigma=c_0{\mathscr{H}}^d\res\Xi$ for some vector space $\Xi$. \section{A smoother version of the Wasserstein ${\mathbb {W}}_1$ distance} \label{S5} So far we managed to work with the distance ${\mathbb {W}}_1$ defined by \eqref{1.5}, but for the proof of Theorem~\ref{t1.2}, it is more convenient to use a slightly smoother variant, which attenuates the possible discontinuities in $r>0$ of the normalizing factors $\mu^{x,r}(B(0,1))^{-1} = \mu(B(x,r))^{-1}$. Let us choose a smooth radial function $\varphi$ such that \begin{equation} \label{5.1} {\mathds 1}_{B(0,1/2)} \leq \varphi \leq {\mathds 1}_{B(0,1)}. \end{equation} If $\mu$ and $\nu$ are two Radon measures such that \begin{equation} \label{5.2} \mu(B(0,1/2)) > 0 \ \text{ and } \ \nu(B(0,1/2)) > 0, \end{equation} we define a new distance ${\mathbb {W}}_{\varphi}(\mu,\nu)$ by \begin{equation} \label{5.3} {\mathbb {W}}_{\varphi}(\mu,\nu) = \sup_{\psi \in \text{Lip}_1({\mathbb{B}})} \av{{\int\psi \varphi d\mu \over \int \varphi d\mu} -{\int\psi \varphi d\nu \over \int \varphi d\nu}}. \end{equation} Recall that $\text{Lip}_1({\mathbb{B}})$ is defined near \eqref{1.3}. The distance ${\mathbb {W}}_1$ above essentially corresponds to $\varphi = {\mathds 1}_{{\mathbb{B}}}$ here.
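\medskip As a simple illustration (not used in the sequel), suppose $\mu = \delta_a$ and $\nu = \delta_b$ are Dirac masses with $a, b \in B(0,1/2)$. Then $\varphi(a) = \varphi(b) = 1$ by \eqref{5.1}, the two quotients in \eqref{5.3} reduce to $\psi(a)$ and $\psi(b)$, and $$ {\mathbb {W}}_{\varphi}(\delta_a,\delta_b) = \sup_{\psi \in \text{Lip}_1({\mathbb{B}})} |\psi(a)-\psi(b)| \leq |a-b|, $$ with equality when $|a-b| \leq 1/2$ (test with $\psi(x) = \max\big(0, {1 \over 2} - |x-a|\big)$, which is $1$-Lipschitz and supported in $B(a,1/2) \subset {\mathbb{B}}$).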
We required \eqref{5.2} (and \eqref{5.1}) to make sure that we do not divide by $0$. But notice that even when $\mu(B(0,1/2))$ or $\nu(B(0,1/2))$ is very small, we always get that \begin{equation} \label{5.4} {\mathbb {W}}_{\varphi}(\mu,\nu) \leq 2, \end{equation} because $|\int\psi \varphi d\mu| \leq \int \varphi d\mu$ and similarly for $\nu$. Note that \begin{equation} \label{5.5} {\mathbb {W}}_{\varphi}(a\mu,b\nu) = {\mathbb {W}}_{\varphi}(\mu,\nu) \ \text{ for } a, b > 0, \end{equation} so we do not need to normalize $\mu$ and $\nu$ in advance. Finally observe that ${\mathbb {W}}_{\varphi}$ satisfies the triangle inequality. That is, if $\sigma$ is a third Radon measure such that $\sigma(B(0,1/2)) > 0$, it follows at once from the definition that \begin{equation} \label{5.6} {\mathbb {W}}_{\varphi}(\mu,\sigma) \leq {\mathbb {W}}_{\varphi}(\mu,\nu) + {\mathbb {W}}_{\varphi}(\nu,\sigma). \end{equation} Let us check that if we restrict to measures that are not too small on $B(0,1/2)$, then ${\mathbb {W}}_1$ controls ${\mathbb {W}}_{\varphi}$. \begin{lemma} \label{t5.1} Let $\mu$ and $\nu$ be Radon measures such that \eqref{5.2} holds and \begin{equation} \label{5.7} \mu({\mathbb{B}}) = \nu({\mathbb{B}}) =1. \end{equation} Then \begin{equation} \label{5.8} {\mathbb {W}}_{\varphi}(\mu,\nu) \leq {1+ 2 ||\varphi||_{lip} \over \mu(B(0,1/2))} \, {\mathbb {W}}_1(\mu,\nu), \end{equation} where $||\varphi||_{lip}$ denotes the Lipschitz norm of $\varphi$. \end{lemma} \medskip The fact that the estimate is not symmetric is not an issue. In particular we shall apply \eqref{5.8} to doubling measures $\mu$ and $\nu$; in this case $\mu(B(0,1/2))\sim\mu({\mathbb{B}})=1=\nu({\mathbb{B}})\sim\nu(B(0,1/2))$. \begin{proof} Let $\psi \in \text{Lip}_1({\mathbb{B}})$ be given.
The definition \eqref{1.5}, applied to $\psi \varphi$, yields \begin{equation} \label{5.9} \Big|\int \psi \varphi d\mu - \int \psi \varphi d\nu \Big| \leq ||\psi \varphi||_{lip} {\mathbb {W}}_1(\mu,\nu) \leq (1+||\varphi||_{lip}) {\mathbb {W}}_1(\mu,\nu). \end{equation} The same definition, applied to $\varphi$ itself, yields \begin{equation} \label{5.10} \Big|\int \varphi d\mu - \int \varphi d\nu \Big| \leq ||\varphi||_{lip} {\mathbb {W}}_1(\mu,\nu). \end{equation} Set \begin{equation} \label{5.11} \Delta = \av{{\int\psi \varphi d\mu \over \int \varphi d\mu} -{\int\psi \varphi d\nu \over \int \varphi d\nu}} \end{equation} and write \begin{equation} \label{5.12} \Delta = \av{ {a \over b} - {c \over d}} = \av{{ad-bc \over bd}} = {| d(a-c)+c(d-b)| \over bd} \end{equation} with $a = \int\psi \varphi d\mu$, $b= \int \varphi d\mu$, $c= \int\psi \varphi d\nu$, and $d=\int \varphi d\nu$. Notice that $|c| \leq d$ because $\psi \in \text{Lip}_1({\mathbb{B}})$ and $\varphi \geq 0$. Also, $b \geq \mu(B(0,1/2))$ by \eqref{5.1}. Hence by \eqref{5.9} and \eqref{5.10}, we have \begin{equation} \label{5.13} \Delta \leq {|a-c| + |d-b| \over b} \leq {1+ 2 ||\varphi||_{lip} \over \mu(B(0,1/2))} \, {\mathbb {W}}_1(\mu,\nu). \end{equation} Taking the supremum over $\psi \in \text{Lip}_1({\mathbb{B}})$ in \eqref{5.13} (recall \eqref{5.11}) yields \eqref{5.8}. \end{proof} The following lemma specifies the sense in which ${\mathbb {W}}_{\varphi}$ is more stable than ${\mathbb {W}}_1$. \begin{lemma} \label{t5.2} Let $\mu$ and $\nu$ be Radon measures and let $\theta\in (0, 1/2]$ be such that \begin{equation} \label{5.14} \mu(B(0,\theta/2)) > 0 \ \text{ and } \ \nu(B(0,\theta/2)) > 0, \end{equation} and define new measures $\mu_1$ and $\nu_1$ by \begin{equation} \label{5.15} \mu_1(A) = \mu(\theta A) \text{ and } \nu_1(A) = \nu(\theta A) \ \text{ for } A \subset {\mathbb{R}}^n.
\end{equation} Then \begin{equation} \label{5.16} {\mathbb {W}}_{\varphi}(\mu_1,\nu_1) \leq \theta^{-1}(1+4||\varphi||_{lip}) \, {\mu(B(0,1)) \over \mu(B(0,\theta/2))} \, {\mathbb {W}}_{\varphi}(\mu,\nu). \end{equation} \end{lemma} \medskip As in \eqref{5.8} the estimate is not symmetric in $\mu$ and $\nu$, but is nonetheless valid. We require that $\mu(B(0,\theta/2)) \neq 0$ and $\nu(B(0,\theta/2)) \neq 0$ to make sure that ${\mathbb {W}}_{\varphi}(\mu_1,\nu_1)$ is easily defined. Often $\mu$ is the restriction to ${\mathbb{B}}$ of a doubling measure and its support contains the origin; then ${\mu(B(0,1)) \over \mu(B(0,\theta/2))} \geq C^{-1}$, for some $C$ that depends on $\theta$ and the doubling constant $C_\delta$. \begin{proof} Let $\psi \in \text{Lip}_1({\mathbb{B}})$ be given; we want to control the quantity \begin{equation} \label{5.17} \Delta = \av{{\int\psi \varphi d\mu_1 \over \int \varphi d\mu_1} -{\int\psi \varphi d\nu_1 \over \int \varphi d\nu_1}} = : \av{ {a \over b} - {c \over d}} \end{equation} (as above, but with integrals relative to $\mu_1$ and $\nu_1$). Notice that by \eqref{5.15}, \begin{equation} \label{5.18} a = \int\psi \varphi d\mu_1 = \int\psi(\theta^{-1} x) \varphi(\theta^{-1} x) d\mu(x) = \int\psi(\theta^{-1} x) \varphi(\theta^{-1} x) \varphi(x)^2 d\mu(x), \end{equation} where we just use the fact that $\varphi(\theta^{-1} x)= 0$ when $x\in {\mathbb{R}}^n \setminus B(0,1/2)$, and the special shape of $\varphi$ in \eqref{5.1}, to add an extra $\varphi^2(x)$. Similarly, \begin{equation} \label{5.19} c = \int\psi \varphi d\nu_1 = \int\psi(\theta^{-1} x) \varphi(\theta^{-1} x) \varphi^2(x) d\nu(x). \end{equation} The same computations without $\psi$ yield \begin{equation} \label{5.20} b = \int \varphi d\mu_1 = \int \varphi(\theta^{-1} x) \varphi(x)^2 d\mu(x), \end{equation} \begin{equation} \label{5.21} d = \int \varphi d\nu_1 = \int \varphi(\theta^{-1}x) \varphi(x)^2 d\nu(x). 
\end{equation} It is also useful to introduce \begin{equation} \label{5.22} e = \int \varphi d\mu \ \text{ and } \ f = \int \varphi d\nu. \end{equation} Let us first estimate $\delta_1 = {a \over e} - {c \over f}$. We want to apply the definition of ${\mathbb {W}}_{\varphi}(\mu,\nu)$ to the function $\Psi$ defined by $\Psi(x) = \psi(\theta^{-1}x) \varphi(\theta^{-1} x) \varphi(x)$. Notice that $\Psi$ is supported in ${\mathbb{B}}$ (this is why we added $\varphi(x)$), and its Lipschitz norm is at most $\theta^{-1}(1+||\varphi||_{lip}) + ||\varphi||_{lip} \leq \theta^{-1}(1+2||\varphi||_{lip})$. Thus \eqref{5.3} yields \begin{equation} \label{5.23} |\delta_1|= \av{ {a \over e} - {c \over f}} \leq ||\Psi ||_{lip} {\mathbb {W}}_{\varphi}(\mu,\nu) \leq \theta^{-1}(1+2||\varphi||_{lip}) {\mathbb {W}}_{\varphi}(\mu,\nu). \end{equation} We can also apply the definition of ${\mathbb {W}}_{\varphi}(\mu,\nu)$ to $\varphi(\theta^{-1} x) \varphi(x)$, whose Lipschitz norm is at most $2\theta^{-1} ||\varphi||_{lip}$, and we get that \begin{equation} \label{5.24} |\delta_2| :=\av{ {b \over e} - {d \over f}} \leq 2\theta^{-1} ||\varphi||_{lip} {\mathbb {W}}_{\varphi}(\mu,\nu). \end{equation} Thus $$ {a \over b} = {e \over b} {a\over e}= {e \over b}\Big({c \over f} + \delta_1\Big) = {e \over b}{c \over d} {d \over f} + {e \delta_1\over b} = {e \over b}{c \over d}\Big({b \over e}+\delta_2 \Big)+ {e \delta_1\over b} = {c \over d} + {e c \delta_2 \over b d} + {e \delta_1\over b}, $$ where $\delta_1$ and $\delta_2$ are as in \eqref{5.23} and \eqref{5.24}. Thus \begin{equation} \label{5.25} \Delta = \av{ {a \over b} - {c \over d}} \leq {|e c \delta_2| \over b d} + {|e \delta_1|\over b}. \end{equation} Now $|c| \leq d$ because $\varphi \geq 0$ and $|\psi| \leq 1$, $e = \int \varphi d\mu \leq \mu(B(0,1))$, and $b = \int \varphi(\theta^{-1} x) \varphi(x)^2 d\mu(x) \geq \mu(B(0,\theta/2))$ because of \eqref{5.1}.
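For the reader's convenience, let us spell out this last lower bound; it uses only the shape of $\varphi$ in \eqref{5.1} (recall that $\varphi = 1$ on $B(0,1/2)$) and the fact that $\theta \leq 1/2$. If $|x| \leq \theta/2$, then $|x| \leq 1/2$ and $|\theta^{-1} x| \leq 1/2$, so that $\varphi(\theta^{-1} x) = \varphi(x) = 1$ and hence
$$
b = \int \varphi(\theta^{-1} x) \varphi(x)^2 d\mu(x) \geq \int_{B(0,\theta/2)} \varphi(\theta^{-1} x) \varphi(x)^2 d\mu(x) = \mu(B(0,\theta/2)).
$$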
Thus we have \begin{equation} \label{5.26} \Delta \leq (\delta_1 + \delta_2) {e \over b} \leq\theta^{-1} (1+4||\varphi||_{lip}) {\mathbb {W}}_{\varphi}(\mu,\nu) \, {\mu(B(0,1)) \over \mu(B(0,\theta/2))}. \end{equation} Noting \eqref{5.17} and taking the supremum over $\psi \in \text{Lip}_1({\mathbb{B}})$, we obtain \eqref{5.16}. \end{proof} The next lemma is used in Section \ref{S8}. It shows that the distance function ${\mathbb {W}}_{\varphi}$ also controls ${\mathbb {W}}_1$ in some averaged way. Thus ${\mathbb {W}}_1$ and ${\mathbb {W}}_{\varphi}$ are basically interchangeable. \begin{lemma} \label{t5.3} Let $\mu$ and $\nu$ are Radon measures such that $\mu(B(0,1/4)) > 0$ and $\nu(B(0,1/4))>0$. Define $\mu_t$ and $\nu_t \,$, $1/4 \leq t \leq 1/2$, by \begin{equation} \label{5.27} \mu_t(A) = {\mu(tA) \over \mu(B(0,t))} \ \text{ and } \ \nu_t(A) = {\mu(tA) \over \nu(B(0,t))} \ \text{ for } A \subset {\mathbb{R}}^n. \end{equation} Then \begin{equation} \label{5.28} \fint_{1/4}^{1/2} {\mathbb {W}}_1(\mu_t,\nu_t) dt \leq {(8+||\varphi||_{lip})\mu({\mathbb{B}}) \over \mu(B(0,1/4))} \, {\mathbb {W}}_{\varphi}(\mu,\nu). \end{equation} \end{lemma} \medskip \begin{proof} First notice that the statement does not change when we multiply $\mu$ and $\nu$ by positive constants. So we may assume that \begin{equation} \label{5.29} \int \varphi d\mu = \int \varphi d\nu = 1. \end{equation} Next fix $t \in [1/4,1/2]$ and $\psi \in \text{Lip}_1({\mathbb{B}})$. We want to estimate \begin{equation} \label{5.30} \int \psi d\mu_t - \int \psi d\nu_t = {\int \psi(t^{-1}x) d\mu(x) \over \mu(B(0,t))} - {\int \psi(t^{-1}x) d\nu(x) \over \nu(B(0,t))} = : {a \over b} -{c \over d}. \end{equation} As before, \begin{equation} \label{5.31} \Delta = {|ad-bc| \over bd} = {|d(a-c) - c(b-d)| \over bd} \leq {|a-c| + |b-d| \over b} \end{equation} because $|c| = |\int \psi(t^{-1}x) d\nu(x)| \leq ||\psi||_\infty \nu(B(0,t)) \le d$. 
Notice that $\varphi(x) = 1$ when $\psi(t^{-1}x) \neq 0$ since this implies that $|t^{-1}x|\le 1$ and hence $|x|\le t\le 1/2$. Thus \begin{eqnarray} \label{5.32} |a-c| &=& \av{\int \psi(t^{-1}x) d\mu(x)-\int \psi(t^{-1}x) d\nu(x)} \\ &=& \av{\int \psi(t^{-1}x) \varphi^2(x) d\mu(x)-\int \psi(t^{-1}x) \varphi^2(x) d\nu(x)}. \nonumber \end{eqnarray} We apply the definition \eqref{5.3} of ${\mathbb {W}}_\varphi$ with the function $x \to \psi(t^{-1}x) \varphi(x)$, which is supported in ${\mathbb{B}}$ and $(t^{-1}+||\varphi||_{lip})$-Lipschitz. We obtain that \begin{equation} \label{5.33} |a-c| \leq (t^{-1}+||\varphi||_{lip}){\mathbb {W}}_\varphi(\mu,\nu). \end{equation} Notice also that \begin{equation} \label{5.34} {1 \over b} = {\int \varphi d\mu \over \mu(B(0,t))} \leq {\mu({\mathbb{B}}) \over \mu(B(0,t))} \leq {\mu({\mathbb{B}}) \over \mu(B(0,1/4))}. \end{equation} Thus \begin{eqnarray} \label{5.35} \Delta &\leq& {|a-c| + |b-d| \over b} \leq {\mu({\mathbb{B}}) \over \mu(B(0,1/4))} \, [(t^{-1}+||\varphi||_{lip}){\mathbb {W}}_\varphi(\mu,\nu) + |b-d|] \hskip0.8cm\, \nonumber \\ &=& {\mu({\mathbb{B}}) \over \mu(B(0,1/4))} \, \big[(t^{-1}+||\varphi||_{lip}){\mathbb {W}}_\varphi(\mu,\nu) + |\mu(B(0,t)) - \nu(B(0,t))| \big]. \end{eqnarray} We take the supremum over $\psi \in \text{Lip}_1({\mathbb{B}})$ and get that \begin{equation} \label{5.36} {\mathbb {W}}_1(\mu_t,\nu_t) \leq {\mu({\mathbb{B}}) \over \mu(B(0,1/4))} \, \big[(t^{-1}+||\varphi||_{lip}){\mathbb {W}}_\varphi(\mu,\nu) + |\mu(B(0,t)) - \nu(B(0,t))| \big]. \end{equation} Since $t^{-1}\in[2,4]$, \eqref{5.28} will follow as soon as we prove that \begin{equation} \label{5.37} \int_{[1/4,1/2]} |\mu(B(0,t))-\nu(B(0,t))| dt \leq {\mathbb {W}}_\varphi(\mu,\nu). \end{equation} Let $h$ be a bounded measurable function, defined on $[1/4,1/2]$; we want to evaluate \begin{equation} \label{5.38} I_h = \int_{[1/4,1/2]} h(t) [\mu(B(0,t))-\nu(B(0,t))] dt.
\end{equation} Observe that by Fubini $$ \int_{[1/4,1/2]} h(t) \mu(B(0,t)) dt = \int_{x\in B(0,1/2)} \Big\{\int {\mathds 1}_{t\in [1/4,1/2]} {\mathds 1}_{t > |x|} h(t) dt \Big\} d\mu(x), $$ and similarly for $\nu$. Set $\psi_h(x) = \int {\mathds 1}_{t\in [1/4,1/2]} {\mathds 1}_{t > |x|} h(t) dt$. This is a $||h||_\infty$-Lipschitz function of $|x|$, which vanishes when $|x| \geq 1/2$, so by \eqref{5.1}, \eqref{5.3} and the normalization in \eqref{5.29} we have \begin{eqnarray} \label{5.39} | I_h | &=& \av{\int_{B(0,1/2)} \psi_h d\mu - \int_{B(0,1/2)} \psi_h d\nu } \\ &=& \av{ \int \psi_h \varphi d\mu - \int \psi_h \varphi d\nu } \leq ||h||_\infty {\mathbb {W}}_\varphi(\mu,\nu). \nonumber \end{eqnarray} Thus \eqref{5.39} holds for all bounded measurable functions $h$ defined on $[1/4,1/2]$, and \eqref{5.37} follows by duality (take $h = \mathrm{sign} \, [\mu(B(0,\cdot))-\nu(B(0,\cdot))]$, for which $||h||_\infty \leq 1$). We saw earlier that \eqref{5.37} implies \eqref{5.28}. Lemma \ref{t5.3} follows. \end{proof} We conclude this section with an easy observation concerning the behavior of ${\mathbb {W}}_\varphi(\mu,\nu)$ when taking weak limits. \medskip \begin{lemma} \label{t5.4} Let $\mu$ and $\nu$ satisfy \eqref{5.2}, suppose that $\mu$ is the weak limit of some sequence $\{\mu_k \}$, and that $\nu$ is the weak limit of some sequence $\{\nu_k \}$. Then \begin{equation} \label{5.40} {\mathbb {W}}_\varphi(\mu,\nu) \leq \liminf_{k \to \infty} {\mathbb {W}}_\varphi(\mu_k,\nu_k). \end{equation} \end{lemma} \medskip \begin{proof} Set $L_k = {\mathbb {W}}_\varphi(\mu_k,\nu_k)$ and $L = \liminf_{k \to \infty} L_k$. Notice that $\int\varphi d\mu = \lim_{k \to \infty} \int \varphi d\mu_k$ by weak convergence; by \eqref{5.2}, this implies that $\int \varphi d\mu_k > 0$ for $k$ large. The same argument applied to a continuous function $0 \le f \leq {\mathds 1}_{B(0,1/2)}$ such that $\int f d\mu > 0$ shows that $\mu_k(B(0,1/2)) > 0$ for $k$ large. Similar observations hold for $\nu$ and $\nu_k$.
For each $\psi \in \text{Lip}_1({\mathbb{B}})$, the weak convergence yields $\int \psi\varphi d\mu = \lim_{k \to \infty} \int \psi\varphi d\mu_k$. For $k$ large, $$ \av{{\int\psi \varphi d\mu_k \over \int \varphi d\mu_k} -{\int\psi \varphi d\nu_k \over \int \varphi d\nu_k}} \leq L_k. $$ Since each term has a limit and the denominators are bounded away from $0$, taking a $\liminf$ we have that $$ \av{{\int\psi \varphi d\mu \over \int \varphi d\mu} -{\int\psi \varphi d\nu \over \int \varphi d\nu}} \leq L. $$ Taking the supremum over $\psi\in \text{Lip}_1({\mathbb{B}})$ we conclude that \eqref{5.40} holds. \end{proof} \section{Uniqueness of the tangent measure at ``good points''} \label{S6} In this section we show that for $x\in \Sigma_0 \cap \Sigma_1$, $\text{Tan}(\mu,x)$ is a one-dimensional set of flat measures. Recall that $\Sigma_1$ and $\Sigma_0$ were defined in \eqref{1.16} and \eqref{1.24} respectively. \begin{lemma} \label{t6.1} Let $\mu$ be a doubling measure, and let $x\in \Sigma_0 \cap \Sigma_1$. Then there is a nonzero flat measure $\sigma$ such that $\text{Tan}(\mu,x) = \big\{ c\sigma \, ; \, c > 0 \big\}$. \end{lemma} \medskip \begin{proof} Fix $\mu$ and $x\in \Sigma_0 \cap \Sigma_1$; without loss of generality, we may assume that $x=0$. By Theorem \ref{t1.3}, we know that \begin{equation} \label{6.1} \text{Tan}(\mu,0) \subset {\mathscr{F}} \end{equation} where ${\mathscr{F}}$ denotes the set of flat measures (see \eqref{1.21}). By definition of $\Sigma_1$, \begin{equation} \label{6.2} \int_0^1 \alpha_{{\mathscr{D}}}(0,r) {dr \over r } < \infty. \end{equation} In view of \eqref{6.1}, it only remains to show that $\text{Tan}(\mu,0)$ is one-dimensional. Our first goal is to bound the ${\mathbb {W}}_{\varphi}$ distance between two different rescaled dilations of $\mu$ by $\alpha_{{\mathscr{D}}}(0,\cdot)$ at the appropriate scale.
For each $r \in (0,1/4)$ we use Chebyshev's inequality to find $r_+ \in [2r, 4r]$ such that \begin{equation} \label{6.3} \alpha_{{\mathscr{D}}}(0,r_+) \leq (2r)^{-1} \int_{2r}^{4r} \alpha_{{\mathscr{D}}}(0,t) dt \leq 2\int_{2r}^{4r} \alpha_{{\mathscr{D}}}(0,t) {dt\over t}. \end{equation} By the definition of $\alpha_{{\mathscr{D}}}(0,r_+)$ (see \eqref{1.15}), there is a transformation $G \in {\mathscr{D}}(0,r_+)$ such that \begin{equation} \label{6.4} {\mathbb {W}}_1(\mu_0^{G},\mu_0^{0,r_+}) \leq 2 \alpha_{{\mathscr{D}}}(0,r_+). \end{equation} By the definition \eqref{1.14} of ${\mathscr{D}}(0,r_+)$, $G$ is simply given by $G(u) = \lambda u$, with $\lambda^{-1} = \lambda(G)^{-1} \in [\lambda_1r_+,\lambda_2 r_+]$. Set \begin{equation} \label{6.5} r^\ast = \lambda^{-1} \in [\lambda_1r_+,\lambda_2 r_+]; \end{equation} notice that $G$ is the homothety that sends $B(0,r^\ast)$ to ${\mathbb{B}}$, so $\mu_0^{G} = \mu_0^{0,r^\ast}$ (see near \eqref{1.11}) and now \eqref{6.4} says that \begin{equation} \label{6.6} {\mathbb {W}}_1(\mu_0^{0,r^\ast},\mu_0^{0,r_+}) \leq 2 \alpha_{{\mathscr{D}}}(0,r_+). \end{equation} First apply Lemma \ref{t5.1} to the measures $\mu_0^{0,r^\ast}$ and $\mu_0^{0,r_+}$; \eqref{6.6} yields that \begin{equation} \label{6.9} {\mathbb {W}}_\varphi(\mu_0^{0,r^\ast},\mu_0^{0,r_+}) \leq C \alpha_{{\mathscr{D}}}(0,r_+) \end{equation} (where we no longer keep track of the dependence on $\varphi$ or $C_\delta$). Then we apply Lemma \ref{t5.2} to the same measures, with $\theta = r/r^\ast$. Notice that $\theta \leq 1/2$ because $r^\ast \geq r_+ \geq 2r$. Recall from \eqref{5.15} that $\mu_1$ is defined by \begin{eqnarray}\label{6.10A} \mu_1(A) &=& \mu_0^{0,r^\ast}(\theta A) = \mu_0^{0,r^\ast}(r A/r^\ast) = \frac{\mu(r^\ast r A/r^\ast)}{\mu(B(0,r^\ast))} \nonumber\\ &=& \frac{\mu(rA)}{\mu(B(0,r^\ast))} = \frac{\mu(B(0,r))}{\mu(B(0,r^\ast))} \, \mu^{0,r}_0(A) \end{eqnarray} by \eqref{1.11}.
Similarly, $\nu_1$ is defined by \begin{eqnarray}\label{6.10B} \nu_1(A) &=& \mu_0^{0,r_+}(\theta A) = \mu_0^{0,r_+}(r A/r^\ast) = \frac{\mu(r_+ r A/r^\ast)}{\mu(B(0,r_+))} \nonumber\\ &=& \frac{\mu(\rho(r)A)}{\mu(B(0,r_+))} = \frac{\mu(B(0,\rho(r)))}{\mu(B(0,r_+))} \, \mu^{0,\rho(r)}_0(A) \end{eqnarray} where \begin{equation} \label{6.7} \rho(r) = { r r_+ \over r^\ast} \in [\lambda_2^{-1}r,\lambda_1^{-1} r]. \end{equation} By \eqref{5.5}, \eqref{6.10A}, \eqref{6.10B}, Lemma \ref{t5.2}, the fact that $\mu_0^{0,r^\ast}$ is also doubling with the same constant as $\mu$ (which controls the mass ratio in \eqref{5.16}), and \eqref{6.9}, \begin{equation} \label{6.11} {\mathbb {W}}_\varphi(\mu_0^{0,r},\mu_0^{0,\rho(r)}) = {\mathbb {W}}_\varphi(\mu_1,\nu_1) \leq C {\mathbb {W}}_\varphi(\mu_0^{0,r^\ast},\mu_0^{0,r_+}) \leq C \alpha_{{\mathscr{D}}}(0,r_+). \end{equation} \medskip In order to show that $\text{Tan}(\mu,0)$ is one-dimensional, we define a specific sequence of measures $\mu^{0,r_j}$, which will be used to approximate all the tangent measures of $\text{Tan}(\mu,0)$ up to a multiplicative constant. We start with $r_0 = 1/4$, and define $r_j$ by induction, taking $r_{j+1} = \rho(r_j)$ for $j \geq 0$. Note that for every choice of integers $0 \leq k \leq l$, by \eqref{5.6}, \eqref{6.11}, and \eqref{6.3} we have \begin{eqnarray} \label{6.12} {\mathbb {W}}_\varphi(\mu_0^{0,r_{k}},\mu_0^{0,r_{l+1}}) &\leq& \sum_{k \leq j \leq l} {\mathbb {W}}_\varphi(\mu_0^{0,r_j},\mu_0^{0,r_{j+1}}) \leq C \sum_{k \leq j \leq l} \alpha_{{\mathscr{D}}}(0,(r_j)_+) \nonumber\\ &\leq& 2C \sum_{k \leq j \leq l} \int_{2r_j}^{4r_j} \alpha_{{\mathscr{D}}}(0,t) {dt\over t}. \end{eqnarray} Recall that $r_{j+1} = \rho(r_j) \leq \lambda_1^{-1}r_j$ (see \eqref{6.7}), thus the $r_j$'s decay at a definite rate.
Therefore the intervals $[2r_j,4r_j]$ have bounded overlap, and since they are all contained in $(0,4r_k]$, we obtain \begin{equation} \label{6.13} {\mathbb {W}}_\varphi(\mu_0^{0,r_{k}},\mu_0^{0,r_{l+1}}) \leq C \int_{0}^{4r_k} \alpha_{{\mathscr{D}}}(0,t) {dt\over t}. \end{equation} \medskip Let $\sigma \in \text{Tan}(\mu,0)$ be given. There exist sequences $\{\rho_k\}$ and $\{a_k\}$ such that $\rho_k \in (0,1/4]$, $\lim _{k\to\infty}\rho_k =0$, $a_k>0$, and \begin{equation} \label{6.14} \sigma_k = a_k \mu^{0,\rho_k} \ \text{ converges weakly to } \sigma. \end{equation} Let $j = j(k)$ denote the largest integer such that $r_{j} \geq \rho_k$. Thus $j \geq 0$ (because $r_0 = 1/4$), and $r_{j+1}<\rho_k$; since $r_{j+1} = \rho(r_j) \in [\lambda_2^{-1}r_j,\lambda_1^{-1} r_j]$ (by \eqref{6.7}), we get that \begin{equation} \label{6.15} \lambda_2^{-1} r_{j(k)}<\rho_k \leq r_{j(k)}. \end{equation} Set $\theta_k = \rho_k /r_{j(k)}\in [\lambda_2^{-1},1]$; we may replace $\{ \rho_k \}$ by a subsequence such that the $\theta_k$ converge to a limit $\theta$. Consider the dilation $D_k$ defined by $D_k(u) = \theta_k u$, and set $D(u) = \theta u$. Notice that by \eqref{6.14} and \eqref{1.11} \begin{equation} \label{6.16} [D_k]_\sharp \sigma_k = a_k [D_k]_\sharp \mu^{0,\rho_k} = a_k \mu^{0,\rho_k/\theta_k} = a_k \mu^{0,r_{j(k)}}. \end{equation} Also, the $[D_k]_\sharp \sigma_k$ converge weakly to $D_\sharp \sigma$. In fact for $f$ continuous and compactly supported, \begin{eqnarray}\label{6.16A} \lim_{k\to\infty}\int f \, d[D_k]_\sharp \sigma_k &= &\lim_{k\to\infty}\int f(\theta_k^{-1} x) d\sigma_k(x) = \lim_{k\to\infty}\int f(\theta^{-1} x) d\sigma_k(x) \nonumber\\ &= &\int f(\theta^{-1} x) d\sigma(x) = \int f \, d[D]_\sharp \sigma \end{eqnarray} (use the uniform continuity of $f$ and local uniform bounds on the $\sigma_k$). 
By \eqref{6.1}, $\sigma$ is a flat measure, say $\sigma \in {\mathscr{F}}_d$; then $D_\sharp \sigma = \theta^{-d} \sigma$, and \eqref{6.16A} shows that, after replacing $a_k$ with $\theta^{d} a_k$, \begin{equation} \label{6.17} \{ a_k \mu^{0,r_{j(k)}} \} \text{ converges weakly to } \sigma. \end{equation} If $\sigma'$ is another nonzero element of $\text{Tan}(\mu,0)$, we can find other sequences $\{ j'(k) \}$ and $\{ a'_k \}$, with $\lim_{k \to \infty} j'(k) = \infty$ (by the analogue of \eqref{6.15}), such that \begin{equation} \label{6.18} \{ a'_k \mu^{0,r_{j'(k)}} \} \text{ converges weakly to } \sigma'. \end{equation} By Lemma \ref{t5.4}, then \eqref{5.5}, and then \eqref{6.13} and \eqref{6.2}, \begin{eqnarray} \label{6.19} {\mathbb {W}}_{\varphi}(\sigma,\sigma') \hskip-0.2cm &\leq& \liminf_{k \to \infty} {\mathbb {W}}_{\varphi}(a_k \mu^{0,r_{j(k)}},a'_k \mu^{0,r_{j'(k)}}) \nonumber = \liminf_{k \to \infty} {\mathbb {W}}_{\varphi}(\mu^{0,r_{j(k)}},\mu^{0,r_{j'(k)}}) \\ &\leq& C \liminf_{k \to \infty} \int_{0}^{4 \max(r_{j(k)},r_{j'(k)})} \alpha_{{\mathscr{D}}}(0,t) {dt\over t} = 0. \end{eqnarray} Then $\sigma'$ is a positive multiple of $\sigma$ (recall from \eqref{5.5} that ${\mathbb {W}}_\varphi$ is scale invariant, and that two flat measures at ${\mathbb {W}}_\varphi$-distance $0$ are proportional), and this completes our proof of Lemma \ref{t6.1}. \end{proof} \section{The decomposition of the ``good set'' in $\Sigma_1$} \label{S7} In \eqref{1.28}-\eqref{1.29} we announced a decomposition of $\Sigma_0 \cap (\Sigma_1 \cup \Sigma_2)$ into pieces ${\mathscr{S}}_d$ ($0 \leq d \leq n$), which satisfy the property that for each $x\in{\mathscr{S}}_d$, $\text{Tan}(\mu,x)\subset {\mathscr{F}}_d$. In this section we check that the pieces $\Sigma_1 \cap {\mathscr{S}}_d$ satisfy the requirements of Theorem \ref{t1.2}. The remaining sets $\Sigma_2 \cap {\mathscr{S}}_d$ will be treated in Section \ref{S8}. We start with $d=0$. Set \begin{equation} \label{7.1} {\mathscr{S}}_0 = \big\{ x\in \Sigma \, ; \, \text{Tan}(\mu,x) \subset {\mathscr{F}}_0 \big\}.
\end{equation} We claim (as in the statement of Theorem \ref{t1.2}) that ${\mathscr{S}}_0$ is the set of points where $\mu$ has an atom, and that every point of ${\mathscr{S}}_0$ is an isolated point of $\Sigma$. Suppose that $\mu$ has an atom at $x$. Then since $\mu$ is doubling, $x$ is an isolated point of $\Sigma$ (Lemma 2.3 in \cite{ADTprep}). We can check by hand that $\text{Tan}(\mu,x)$ is the set ${\mathscr{F}}_0$ of multiples of the Dirac measure at the origin, and that $x\in \Sigma_0 \cap \Sigma_1 \cap \Sigma_2$ (because $\alpha_{\mathscr{D}}(x,r) = 0$ for $r$ small). Conversely, suppose that $\text{Tan}(\mu,x) \subset {\mathscr{F}}_0$, and let us check that $x$ is an isolated point of $\Sigma$. Suppose instead that we can find a sequence $\{ x_k \}$ in $\Sigma \setminus \{ x \}$ that converges to $x$. Set $r_k = 2|x-x_k|$. Since $\mu$ is doubling, there is a subsequence of $\{ \mu_0^{x,r_k} \}$ which converges weakly to a measure $\sigma$. Since $\sigma \in \text{Tan}(\mu,x)$, $\sigma$ is a Dirac mass. Let $\zeta$ be a smooth function such that ${\mathds 1}_{{\mathbb{B}} \setminus B(0,1/4)}\le \zeta\le {\mathds 1}_{B(0,2)\setminus B(0,1/8)}$. Then $\int \zeta d\sigma = 0$, so $\lim_{k \to \infty} \int \zeta \, d\mu_0^{x,r_k} = 0$. On the other hand, by \eqref{1.11} and \eqref{1.2} \begin{eqnarray} \label{7.2} \int \zeta d\mu_0^{x,r_k} &=& \mu(B(x,r_k))^{-1} \int \zeta d\mu^{x,r_k} \\ &=& \mu(B(x,r_k))^{-1} \int \zeta(r_k^{-1} (y-x)) d\mu(y) \nonumber \\ &\geq& \mu(B(x,r_k))^{-1} \mu(B(x_k,r_k/4)) \geq C_\delta^{-3} \nonumber \end{eqnarray} because $\zeta(r_k^{-1} (y-x))=1$ for $y\in B(x_k,r_k/4)$. This contradiction shows that if $x\in {\mathscr{S}}_0$, then $x$ is an isolated point in $\Sigma$, and hence $\mu$ has a Dirac mass at $x$. This gives the desired description of ${\mathscr{S}}_0$; the fact that ${\mathscr{S}}_0$ is at most countable is easy to see. We may now concentrate on exponents $d \in[1,n]$.
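Before doing so, let us sketch the ``check by hand'' used above in the atomic case. Suppose that $\mu(\{ x \}) > 0$ and that $B(x,r_x) \cap \Sigma = \{ x \}$ for some $r_x > 0$. Then for $R \geq 1$ and $0 < r < r_x/R$, the definitions near \eqref{1.11} give
$$
\mu^{x,r} \res B(0,R) = \mu(\{ x \}) \, \delta_0 \ \text{ and } \ \mu_0^{x,r} \res B(0,R) = {\mu(\{ x \}) \over \mu(B(x,r))} \, \delta_0,
$$
where $\delta_0$ denotes the Dirac mass at the origin. Since $\mu(B(x,r))$ tends to $\mu(\{ x \}) > 0$ when $r$ tends to $0$, every weak limit of normalized blow-ups of $\mu$ at $x$ is a positive multiple of $\delta_0$; that is, $\text{Tan}(\mu,x) = {\mathscr{F}}_0$.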
Set \begin{equation} \label{7.3} {\mathscr{S}}'_d = \big\{ x\in \Sigma_0 \cap \Sigma_1 \, ; \, \text{Tan}(\mu,x) \subset {\mathscr{F}}_d \big\} = {\mathscr{S}}_d \cap \Sigma_1, \end{equation} where the last equality comes from \eqref{1.28}. Together with ${\mathscr{S}}_0$, these sets are disjoint and cover $\Sigma_1 \cap \Sigma_0$ (by \eqref{1.29}), hence also $\mu$-almost all of $\Sigma_1$ (by \eqref{1.22}). By Lemma~\ref{t6.1}, the only part of Theorem \ref{t1.2} concerning $\Sigma_1$ that remains to be proven is the fact that ${\mathscr{S}}'_d$ is rectifiable for $1 \leq d \leq n$, and more precisely \begin{eqnarray} \label{7.4} &&\text{${\mathscr{S}}'_d$ can be covered by a countable family} \\ &&\hskip3cm \text{of $d$-dimensional Lipschitz graphs.} \nonumber \end{eqnarray} (This is slightly more precise because we do not need to add an ${\mathscr{H}}^d$-negligible set.) This follows from the following lemma, which is essentially known, but which we prove for the reader's convenience. \begin{lemma} \label{t7.1} Suppose that $\mu$ is a doubling measure, $\Sigma$ is its support, $d\in \{1,...,n\}$, and $E \subset\Sigma$ is such that for all $x\in E$, there is a vector space $V_x$ of dimension $d$ such that $\text{Tan}(\mu,x)=\big\{c{\mathscr{H}}^{d}|_{V_{x}} \, ; \, c>0\big\}$. Then $E$ can be covered by a countable family of $d$-dimensional Lipschitz graphs. \end{lemma} \medskip The fact that $E = {\mathscr{S}}'_d$ satisfies the assumption of the lemma comes from Lemma~\ref{t6.1}. \begin{proof} If $d=n$, ${\mathbb{R}}^n$ is a $d$-dimensional Lipschitz graph that covers $E$, thus we assume that $d < n$. We claim that for $x\in E$ \begin{equation} \label{7.5} x+V_x \text{ is a tangent plane to $\Sigma$ at $x$.} \end{equation} If not, there is a sequence $\{ y_k \}$ in $\Sigma \setminus \{ x\}$ that tends to $x$ and such that \begin{equation} \label{7.6} \mathop\mathrm{dist}(y_k, x+V_x) \geq c |y_k -x| \end{equation} for some $c > 0$.
Set $r_k = 2 |y_k -x|$, and replace $\{ y_k \}$ with a subsequence for which the $\{ \mu_0^{x,r_k} \}$ converges weakly to a measure $\sigma\in \text{Tan}(\mu,x)$. Let $\zeta$ be a smooth compactly supported non-negative function such that $\zeta = 0$ on $V_x$, but \begin{equation} \label{7.7} \zeta(u) = 1 \ \text{ for $u\in {\mathbb{B}}$ such that } \mathop\mathrm{dist}(u,V_x) \geq c/4. \end{equation} By assumption, $\sigma$ is supported on $V_x$ and so $\int \zeta d\sigma = 0$. Thus $\lim_{k \to \infty} \int \zeta \, d\mu_0^{x,r_k} = 0$. On the other hand, \eqref{7.6} says that for $y\in B(y_k, cr_k/4)$, $$ \mathop\mathrm{dist}(r_k^{-1} (y-x), V_x) = r_k^{-1} \mathop\mathrm{dist}(y, x+V_x) \geq r_k^{-1} [\mathop\mathrm{dist}(y_k, x+V_x) - {cr_k \over 4}] \geq c/4, $$ hence $\zeta(r_k^{-1} (y-x))=1$ by \eqref{7.7}, and \eqref{1.11} and \eqref{1.2} imply that \begin{eqnarray} \label{7.8} \int \zeta d\mu_0^{x,r_k} &=& \mu(B(x,r_k))^{-1} \int \zeta d\mu^{x,r_k} \\ &=& \mu(B(x,r_k))^{-1} \int \zeta(r_k^{-1} (y-x)) d\mu(y) \nonumber \\ &\geq& \mu(B(x,r_k))^{-1} \mu(B(y_k, c r_k/4)) \geq C^{-1}. \nonumber \end{eqnarray} This contradiction proves \eqref{7.5}. \medskip For $\varepsilon > 0$ small and $x\in E$, choose an integer $j = j(x) \geq 0$ such that \begin{equation} \label{7.9} \mathop\mathrm{dist}(y,x+V_x) \leq \varepsilon |y-x| \ \text{ for } y\in \Sigma \cap B(x,2^{-j(x)}). \end{equation} On the Grassmann manifold $G(d,n)$ of vector spaces of dimension $d$ in ${\mathbb{R}}^n$, let us for instance use the distance defined by $\mathop\mathrm{dist}(V,W) = ||\pi_V - \pi_W||$, where $\pi_V$ and $\pi_W$ denote the orthogonal projections on $V$ and $W$. With this distance, $G(d,n)$ is compact. Choose a finite family ${\mathscr{V}}$ in $G(d,n)$ such that $\mathop\mathrm{dist}(V, {\mathscr{V}}) \leq \varepsilon$ for $V \in G(d,n)$.
Set \begin{equation} \label{7.10} E(V,j) = \big\{x\in E \, ; \, j(x) = j \text{ and } \mathop\mathrm{dist}(V_x, V)\leq \varepsilon \big\} \end{equation} for $V\in {\mathscr{V}}$ and $j \geq 0$. We now cover each $E(V,j)$ with a countable collection of $d$-dimensional Lipschitz graphs. We claim that for each ball $B$ of radius $2^{-j-1}$, \begin{equation} \label{7.11} E(V,j) \cap B \ \text{ is contained in a Lipschitz graph over $V$} . \end{equation} Lemma \ref{t7.1} follows from this claim because the $E(V,j)$ cover $E$. To prove the claim, let $x, y \in E(V,j)\cap B$ be given. Observe that $|x-y| < 2^{-j}$ and $y \in E \subset \Sigma$, so \eqref{7.9} guarantees that $\mathop\mathrm{dist}(y,x+V_x) \leq \varepsilon |y-x|$. Then $$ |\pi_{V^\perp}(y) - \pi_{V^\perp}(x)| \leq |\pi_{V_x^\perp}(y) - \pi_{V_x^\perp}(x)| + ||\pi_{V^\perp}-\pi_{V_x^\perp}|| \, |x-y| \leq 2\varepsilon |x-y|, $$ because $|\pi_{V_x^\perp}(y) - \pi_{V_x^\perp}(x)| = \mathop\mathrm{dist}(y,x+V_x) \leq \varepsilon |y-x|$ and $||\pi_{V^\perp}-\pi_{V_x^\perp}|| = ||\pi_{V}-\pi_{V_x}|| \leq \varepsilon$; for $\varepsilon$ small, this yields \eqref{7.11}. This completes our proof of Lemma \ref{t7.1}. \end{proof} \section{The decomposition of the ``good set'' in $\Sigma_2$} \label{S8} Our goal in this section is to apply Theorem 1.5 in \cite{ADTprep} to the set $\Sigma_0 \cap \Sigma_2$, to obtain the desired decomposition. For the reader's convenience we include the necessary background below. \begin{theorem} [Theorem 1.5, \cite{ADTprep}] \label{t8.1} Let $\mu$ be a doubling measure in ${\mathbb{R}}^n$, denote by $\Sigma$ its support, and set \begin{equation*} \Sigma^0 = \big\{ x\in \Sigma \, ; \, \int_{0}^{1}\alpha(x,r) \frac{dr}{r} < \infty \big\}, \end{equation*} where \begin{equation*} \alpha(x,r)= \min_{d=0,1,...,n} \alpha_{d}(x,r), \end{equation*} and \begin{equation*} \alpha_d(x,r) = \inf\big\{ {\mathbb {W}}_1(\mu_0^{x,r},\nu_V) \, ; \, V \in A'(d,n) \big\}. \end{equation*} Here $A'(d,n)$ denotes the set of $d$-dimensional affine spaces which intersect $B(0,1/2)$ and $\nu_V = c_V {\mathscr{H}}^d\res{V} = c_V {\mathds 1}_{V} {\mathscr{H}}^d$, with $ c_V = {\mathscr{H}}^d(V\cap {\mathbb{B}})^{-1}$.
Then there are disjoint Borel sets $\Sigma^0(d) \subset \Sigma$, $0 \leq d \leq n$, such that \begin{equation*} \Sigma^0 = \bigcup_{d=0}^n \Sigma^0(d), \end{equation*} with the following properties. \begin{enumerate} \item First, $\Sigma^0(0)$ is the set of points of $\Sigma$ where $\mu$ has an atom; it is at most countable and each of its point is an isolated point of $\Sigma$. \item For $1 \leq d \leq n$ and $x\in \Sigma^0(d)$, the limit \begin{equation*} \theta_d(x) := \lim_{r \to 0} r^{-d} \mu(B(x,r)) \end{equation*} exists, and $0 < \theta_d(x) < \infty$. \item For $1 \leq d \leq n$ and $x\in \Sigma^0(d)$, $\Sigma$ has a tangent $d$-plane at $x$, $W$, and set $W^\ast = W-x$. Then $\text{Tan}(x,\mu)= \{c{\mathscr{H}}^{d}\res{W^\ast} \, ; \, c > 0\}$. In addition, the measures $\mu_0^{x,r}$ converge weakly to ${\mathscr{H}}^{d}\res{W^\ast}$. \item Further decompose $\Sigma^0(d)$, $1 \leq d \leq n$, into the sets \begin{equation*} \Sigma^0(d,k) = \big\{ x\in \Sigma^0(d) \, ; \, 2^k \leq \theta_d(x) < 2^{k+1} \big\}, \ k\in {\mathbb{Z}}; \end{equation*} then each $\Sigma^0(d,k)$ is a rectifiable set of dimension $d$, with ${\mathscr{H}}^d(\Sigma^0(d,k) \cap B(0,R)) < \infty$ for every $R > 0$, $\mu$ and ${\mathscr{H}}^d$ are mutually absolutely continuous on $\Sigma^0(d,k)$, and $\mu = \theta_d {\mathscr{H}}^d$ there. \end{enumerate} \end{theorem} We want to apply Theorem \ref{t8.1}, so we need to show that for each $x\in \Sigma_0 \cap \Sigma_2$, \begin{equation} \label{8.1} \int_0^1 \alpha(x,t) {dt \over t} < \infty. \end{equation} Let $x \in \Sigma_0 \cap \Sigma_2$ be given. By Theorem \ref{t1.3} every tangent measure $\sigma \in \text{Tan}(\mu,x)$ is flat. To estimate to the distance from $\mu_0^{x,r}$ to $\sigma$ we proceed as in Section \ref{S6} except that we work with the whole group ${\mathscr{G}}$ rather than ${\mathscr{D}}$. We now follow that argument, without some of the details but we do emphasize the differences. 
Without loss of generality, we assume that $x=0$. We use the definition of $\alpha_{\mathscr{G}}$ and Chebyshev's inequality to associate to each $r \in (0,1/4]$ a radius $r_+ \in [2r,4r]$ such that \begin{equation} \label{8.2} \alpha_{{\mathscr{G}}}(0,r_+) \leq 2\int_{2r}^{4r} \alpha_{{\mathscr{G}}}(0,t) {dt\over t} \end{equation} (see \eqref{6.3}). By the definition \eqref{1.13}, there exists $G \in {\mathscr{G}}(0,r_+)$ such that \begin{equation} \label{8.3} {\mathbb {W}}_1(\mu_0^{G},\mu_0^{0,r_+}) \leq 2 \alpha_{{\mathscr{G}}}(0,r_+) \end{equation} (see \eqref{6.4}). By \eqref{1.6}, $G = \lambda R$ for some isometry $R$, and since $G \in {\mathscr{G}}(x)$, \eqref{1.8} guarantees that $G(0) = 0$ and hence $R(0) = 0$. That is, $R$ is a linear isometry. We still have that $\lambda^{-1} = \lambda(G)^{-1} \in [\lambda_1r_+,\lambda_2 r_+]$, and if we set \begin{equation} \label{8.4} r^\ast = \lambda^{-1} \in [\lambda_1r_+,\lambda_2 r_+], \end{equation} as in \eqref{6.5}, we have that \begin{equation} \label{8.5} G(B(0,r^\ast)) = {\mathbb{B}} \end{equation} and $\mu_0^{G}$ is the image of $\mu_0^{0,r^\ast}$ by a linear isometry. That is, $\mu_0^{G} = R_\sharp \mu_0^{0,r^\ast}$, and \eqref{8.3} only yields \begin{equation} \label{8.6} {\mathbb {W}}_1(R_\sharp\mu_0^{0,r^\ast},\mu_0^{0,r_+}) \leq 2 \alpha_{{\mathscr{G}}}(0,r_+) \end{equation} instead of \eqref{6.6}. We still multiply the radii by $r /r^\ast$, set \begin{equation} \label{8.7} \rho(r) = { r r_+ \over r^\ast} \in [\lambda_2^{-1}r,\lambda_1^{-1} r] \end{equation} as in \eqref{6.7}, and deduce from \eqref{8.6} that \begin{equation} \label{8.8} {\mathbb {W}}_\varphi(R_\sharp\mu_0^{0,r},\mu_0^{0,\rho(r)}) \leq C \alpha_{{\mathscr{G}}}(0,r_+), \end{equation} using the same proof, which involves Lemmas \ref{t5.1} and \ref{t5.2} (the extra rotation does not affect the argument). Inequality \eqref{8.8} is the analogue of \eqref{6.11}. Let us write this slightly differently.
Set $R^r = R^{-1}$ (where $R$ is the rotation associated to $r$ above); then by \eqref{8.8} \begin{equation} \label{8.9} {\mathbb {W}}_\varphi(\mu_0^{0,r},R^r_\sharp\mu_0^{0,\rho(r)}) \leq C \alpha_{{\mathscr{G}}}(0,r_+), \end{equation} since the ${\mathbb {W}}_{\varphi}$-distance is invariant under linear isometries. \medskip Given $r_0 \leq 1/4$, we can construct a decreasing sequence $\{ r_j \}$ as we did before, defined by $r_{j+1} = \rho(r_j)$. Let us keep track of the rotations: set $S^0 = I$ and $S^{j+1} = S^j R^{r_j}$. For $k\ge 0$ we want to estimate the numbers \begin{equation} \label{8.10} \delta_k = {\mathbb {W}}_\varphi(\mu_0^{0,r_0},S^{k+1}_\sharp\mu_0^{0,r_{k+1}}). \end{equation} Let us check by induction that \begin{equation} \label{8.11} \delta_k \leq C \sum_{0\leq j \leq k} \alpha_{{\mathscr{G}}}(0,(r_j)_+). \end{equation} When $k=0$, this is \eqref{8.9} for $r_0$. If $k \geq 1$ and \eqref{8.11} holds for $k-1$, the triangle inequality \eqref{5.6} yields \begin{eqnarray} \label{8.12} \delta_k &\leq& \delta_{k-1} + {\mathbb {W}}_\varphi(S^{k}_\sharp\mu_0^{0,r_{k}},S^{k+1}_\sharp\mu_0^{0,r_{k+1}}) \\ &=& \delta_{k-1} + {\mathbb {W}}_\varphi(S^{k}_\sharp\mu_0^{0,r_{k}},[S^{k}R^{r_k}]_\sharp\mu_0^{0,r_{k+1}}) \nonumber \\ &=& \delta_{k-1} + {\mathbb {W}}_\varphi(\mu_0^{0,r_{k}},R^{r_k}_\sharp\mu_0^{0,\rho(r_{k})}) \leq \delta_{k-1} + C \alpha_{{\mathscr{G}}}(0,(r_k)_+) \nonumber \end{eqnarray} by definition of $S^{k+1}$, the invariance of ${\mathbb {W}}_\varphi$ under linear isometries, and \eqref{8.9}. This proves \eqref{8.11}. Then \eqref{8.2} and the same argument as in \eqref{6.12}-\eqref{6.13} yield \begin{eqnarray} \label{8.13} \delta_k &\leq& C \sum_{0\leq j \leq k} \alpha_{{\mathscr{G}}}(0,(r_j)_+) \leq C \sum_{0\leq j \leq k} \int_{2r_j}^{4r_j} \alpha_{{\mathscr{G}}}(0,t) {dt\over t} \nonumber\\ &\leq& C\int_{0}^{4r_0} \alpha_{{\mathscr{G}}}(0,t) {dt\over t}. \end{eqnarray} The final integral is finite because $0\in \Sigma_2$ (see the definition \eqref{1.17}).
The measures $\mu_0^{0,r_k}$ are suitably normalized, so there is a subsequence which converges weakly to some measure $\sigma$ (again see Lemma 2.1 in \cite{ADTprep} for a little more detail). There is also a further subsequence for which the $S^k$ converge to an isometry $S$, and then the $S^k _\sharp\mu_0^{0,r_k}$ converge to $S_\sharp \sigma$ (proceed as for \eqref{6.16A}). By Lemma \ref{t5.4}, \eqref{8.10}, and \eqref{8.13}, \begin{eqnarray} \label{8.14} {\mathbb {W}}_\varphi(\mu_0^{0,r_0},S_\sharp \sigma) &\leq& \liminf_{k \to \infty} {\mathbb {W}}_\varphi(\mu_0^{0,r_0},S^{k}_\sharp\mu_0^{0,r_{k}}) =\liminf_{k \to \infty} \delta_k \nonumber \\ &\leq& C\int_{0}^{4r_0} \alpha_{{\mathscr{G}}}(0,t) {dt\over t}. \end{eqnarray} We now use Lemma \ref{t5.3} to translate estimate \eqref{8.14} into an upper bound for $\int_0^1 \alpha(x,r) \frac{dr}{r}$. For $t \in [1/4,1/2]$, the measure $\mu_t$ that is defined by \eqref{5.27} with $\mu$ replaced by $\mu_0^{0,r_0}$ is just $\mu_0^{0,tr_0}$. Since $\sigma$ is a flat measure, so is $S_\sharp \sigma$. Hence the measure $\nu_t$ built from $\nu = S_\sharp \sigma$ as in \eqref{5.27} is also a flat measure supported on a $d$-plane $V$ passing through the origin. We use $\nu_t$ to estimate $\alpha(x,r)$. By \eqref{5.28}, the fact that $\mu$ is doubling, and \eqref{8.14}, we have \begin{eqnarray} \label{8.17} \int_{1/4}^{1/2} \alpha(x,tr_0) dt &\leq& \int_{1/4}^{1/2} {\mathbb {W}}_1(\mu_0^{0,tr_0},\nu_t) dt =\int_{1/4}^{1/2} {\mathbb {W}}_1(\mu_t,\nu_t) dt \\ &\leq& (8+||\varphi||_{lip}) C_\delta^2 {\mathbb {W}}_\varphi(\mu_0^{0,r_0},S_\sharp \sigma) \leq C \int_{0}^{4r_0} \alpha_{{\mathscr{G}}}(0,t) {dt\over t}. \nonumber \end{eqnarray} Note that \eqref{8.17} holds for $r_0 \leq 1/4$. We are now ready to prove that for $x\in \Sigma_0\cap \Sigma_2$, $\int_0^1 \alpha(x,r) \frac{dr}{r}<\infty$. Recall that we are assuming $x=0$.
By \eqref{8.17} and the definition \eqref{1.17} of $\Sigma_2$ we have \begin{eqnarray} \label{8.18} \int_0^{1/8} \alpha(x,s) {ds \over s} &=& \sum_{k \geq 2} \int_{2^{-k-2}}^{2^{-k-1}} \alpha(0,s) {ds \over s} = \sum_{k \geq 2} \int_{1/4}^{1/2} \alpha(0,t2^{-k}) {dt \over t} \nonumber \\ &\leq& 4 \sum_{k \geq 2} \int_{1/4}^{1/2} \alpha(0,t2^{-k}) dt \nonumber\\ &\leq& 4C \sum_{k \geq 2} \int_{0}^{2^{-k+2}} \alpha_{{\mathscr{G}}}(0,t) {dt\over t} \\ &=& 4C \int_{0}^{1} \alpha_{{\mathscr{G}}}(0,t) \Big\{ \sum_{k \geq 2} {\mathds 1}_{\{ k : 2^{-k+2} > t \}}(k) \Big\} {dt\over t} \nonumber\\ &\leq& C \int_{0}^{1} \alpha_{{\mathscr{G}}}(0,t) {\log(2/t) dt\over t} < \infty. \nonumber \end{eqnarray} A brutal estimate shows that $\alpha(x,r) \leq 2$ for $r>0$, hence $\int_{1/8}^1 \alpha(x,s) {ds \over s}< \infty$. Thus the hypotheses of Theorem \ref{t8.1} hold. We obtain a decomposition of the set $\Sigma_0 \cap \Sigma_2$ into subsets ${\mathscr{S}}''_{d}$, $0 \leq d \leq n$, which satisfy all the requirements for Theorem \ref{t1.2}. In fact we get some additional information which we record here. First, for every point $x\in {\mathscr{S}}''_{d}$, $d \geq 1$, $\text{Tan}(\mu,x)$ is the vector space of dimension $1$ spanned by some flat measure of dimension $d$ (see \eqref{1.27}), which implies \begin{equation} \label{8.19} {\mathscr{S}}''_{d} = \big\{ x\in \Sigma_0 \cap \Sigma_2 \, ; \, \text{Tan}(\mu,x) \subset {\mathscr{F}}_d \big\} = \Sigma_2 \cap {\mathscr{S}}_d, \end{equation} where ${\mathscr{S}}_d$ is as in \eqref{1.28}. Moreover, once we know that $\text{Tan}(\mu,x)$ is the space of dimension $1$ spanned by some flat measure $\sigma \in {\mathscr{F}}_d$, we have that $\Sigma$ has a tangent $d$-plane at $x$ whose direction is given by the support of $\sigma$; see \eqref{7.5}. Then ${\mathscr{S}}''_{d}$ is rectifiable, and even satisfies \eqref{7.4}; see the proof above, and also \cite{ADTprep}. This completes the proof of Theorem \ref{t1.2}.
As mentioned in Section \ref{S1}, we have additional control on the size of $\mu$ on $\Sigma_2$ and the behavior of $\mu$ on ${\mathscr{S}}''_{d}$. For $1 \leq d \leq n$ and every point $x\in {\mathscr{S}}''_{d} = \Sigma_2 \cap {\mathscr{S}}_d$, the density of $\mu$ exists, that is \begin{equation} \label{8.20} \theta_d(x) = \lim_{r \to 0} r^{-d} \mu(B(x,r)) \in (0,\infty) \end{equation} (see \cite{ADTprep}). Moreover, we have the further decomposition of ${\mathscr{S}}''_{d}$ into sets ${\mathscr{S}}''_{d} \cap \Sigma^0(d,k)$ where $\mu$ and ${\mathscr{H}}^d$ are mutually absolutely continuous, as in item 4 of Theorem \ref{t8.1}. \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}} \providecommand{\href}[2]{#2}
\section{Introduction} In a generic supergravity theory, the soft supersymmetry breaking scalar masses, gaugino masses and $A$ and $B$ terms are free parameters. On the other hand, if the supergravity theory is the low energy limit of an orbifold compactification of the heterotic string, then these parameters are calculable in principle \cite{cvetic,ibanez,carlos,brig,brig2} since string theory has only one free parameter, namely the string scale. However, the soft supersymmetry breaking parameters depend on the moduli of the orbifold model (including the dilaton expectation value) and the values of the moduli cannot be determined without a detailed knowledge of the non-perturbative superpotential probably responsible for supersymmetry breaking. One possible approach \cite{brig} is to accept for the time being that we lack this detailed knowledge and to absorb this uncertainty into an angle, $\theta$, which is a measure of the relative size of the auxiliary fields for the dilaton and the overall $T$ modulus (with the vacuum energy taken to be zero). In such an approach \cite{brig} the supersymmetric particle spectrum has been derived as a function of this angle $\theta$ when unification of gauge coupling constants at $2 \times 10^{16}$ GeV is due to large moduli dependent string loop corrections and also when it is due to extra matter states close to the unification scale. Here, we shall explore the robustness of the qualitative features of the spectrum obtained in the former case when various assumptions about the orbifold model are varied. In particular, we shall consider the effect of making one or more of the following changes to the assumptions in Brignole {\em et al}.\ \cite{brig}: \newline {\bf a.} Using the modular weights allowed \cite{cvetic,bl1} for states in the twisted sectors of those abelian orbifolds which possess three $N=2$ moduli, $T_i$.
Then it is possible to adopt a single overall modulus model with $T_1=T_2=T_3=T$ as in ref.\cite{brig} with all three moduli on the same footing if, as is the case in gaugino condensate models, only the $N=2$ moduli occur in the non-perturbative superpotential. The possible choices of modular weights are then further restricted by requiring that the string loop threshold corrections to the gauge coupling constants allow unification of all three observable sector gauge coupling constants at a single energy scale. The value of the overall modulus $T$ is determined by requiring that this energy scale is $2 \times 10^{16}$ GeV. This is an alternative to choosing the simplest set of modular weights \cite{brig} which will achieve the gauge coupling unification without reference to any particular orbifold. \newline {\bf b.} Adopting the values for the Green-Schwarz parameters, $\delta_{GS}$, suggested by the above orbifold models. \newline {\bf c.} Taking account of the possible moduli dependence of the Yukawa couplings when all three states are in twisted sectors of the orbifold. \newline {\bf d.} In the case that the $\mu$ parameter originates from K\"ahler potential mixing, using the moduli dependence of $\mu$ suggested by the discussion of ref.\cite{antoniadis} rather than taking $\mu$ to be moduli independent, and in addition, \newline {\bf e.} taking account of the radiative corrections to the tree level effective potential in calculating the Higgs scalar expectation values $v_1$ and $v_2$ which affect the supersymmetric particle spectrum and also in calculating the Higgs scalar masses \cite{ellis,ellis2}.
We do not consider the effect of more than one independent modulus expectation value which has been considered elsewhere \cite{brig2}, nor do we consider the M-theory regime of strong ten dimensional string coupling \cite{horava} for which gauge coupling constant unification at the `observed' energy scale may occur without large string loop threshold corrections if there is a large eleventh dimension \cite{witten}. The organisation of the paper is as follows. In section 2 all possible choices of modular weights for the standard model states in abelian orbifold compactifications with three $N=2$ moduli $T_i$ are obtained. The choice is restricted by demanding consistency with gauge coupling constant unification with $T_1=T_2=T_3=T$. The corresponding value of $T$ is also given. In section 3 the soft supersymmetry breaking terms are presented as functions of the overall modulus $T$ and the angle $\theta$ introduced in ref.\cite{brig}. In section 4 the relevant renormalisation group equations for the running of the coupling constants and soft supersymmetry breaking parameters from the unification scale to the electroweak scale are displayed and the strategy for choosing the various string theoretic parameters and ensuring the correct electroweak breaking scale is discussed. In section 5 the resulting supersymmetric particle spectrum is explored, including the effect of radiative corrections to the effective potential. Finally, in section 6 we present our conclusions and make comparisons with the work of ref.\cite{brig}. \section{Choices of modular weights} As will be seen in section 3, the values of the soft supersymmetry breaking parameters at the string scale depend on the modular weights of the matter states \cite{cvetic,ibanez,carlos,brig,brig2}. Let us first establish our conventions.
In general we shall write the K\"ahler potential $K$ to quadratic order in the matter fields in the form \begin{equation} K=-\ln{Y}-\sum_i\ln{(T_i + \bar{T}_i)}+\sum_{\alpha}\tilde{K}_{\alpha} |\phi_\alpha|^2 +(Z\phi_1 \phi_2 + h.c.) \label{K} \end{equation} with \begin{eqnarray} Y&=&S+\bar{S}-\sum_i \delta_i \ln{(T_i + \bar{T}_i)} \\ \delta_i &=& \frac{\delta_{GS}^i}{8 \pi^2} \end{eqnarray} and \begin{equation} \tilde{K}_\alpha=\prod_i (T_i + \bar{T}_i)^{n^i_\alpha} \label{Kprod} \end{equation} In (\ref{K})-(\ref{Kprod}) any $U$ moduli associated with $Z_2$ planes are included as additional $T_i$ moduli, $\delta_{GS}^i$ are Green-Schwarz parameters, $\phi_{\alpha}$ are matter fields and the $Z \phi_1 \phi_2$ term is present when the orbifold has a $Z_2$ plane (ie. when the action of the point group in that plane is as $Z_2$). The matter fields $\phi_1$ and $\phi_2$ are untwisted states associated with the $T$ and $U$ moduli for the $Z_2$ plane. The powers $n^i_{\alpha}$ are the modular weights for the matter fields $\phi_\alpha$. In the case of a single overall modulus \begin{equation} T=T_1=T_2=T_3 \end{equation} these expressions reduce to \begin{equation} K=-\ln{Y}-3\ln{(T + \bar{T})}+\sum_{\alpha}\tilde{K}_{\alpha} |\phi_\alpha|^2 +(Z\phi_1 \phi_2 + h.c.) \end{equation} with \begin{equation} Y=S+\bar{S}-\tilde{\delta}_{GS}\ln{(T+\bar{T})} \end{equation} where \begin{eqnarray} \delta_{GS}&=&\sum_i \delta_{GS}^i \label{gs}\\ \tilde{\delta}_{GS}&=& \frac{\delta_{GS}}{8 \pi^2} \label{gst} \end{eqnarray} and \begin{equation} \tilde{K}_\alpha=(T+\bar{T})^{n_\alpha} \end{equation} with overall modular weights \begin{equation} n_\alpha=\sum_i n^i_\alpha \label{n}\ \ . \end{equation} The only abelian orbifolds that possess three $N=2$ moduli $T_i$ are $Z_2 \times Z_6$ and $Z_3 \times Z_6$, the former orbifold having in addition a single $U$ modulus. 
All possible modular weights for massless matter states in the twisted (and untwisted) sectors of abelian orbifolds can be determined using the approach of refs. \cite{ibanez} and \cite{bl1}. For $Z_2 \times Z_6$ the allowed modular weights are \begin{equation} \mbox{(Q, u, e) : } n_\alpha=0,-1,-2 \label{mw1} \end{equation} and \begin{equation} \mbox{(L, d, H) : } n_\alpha=+1,0,-1,-2,-3 \label{mw2} \end{equation} where Q, L, and H denote quark, lepton and Higgs $SU(2)_L$ doublets and u, d and e denote quark and lepton singlets. For $Z_3 \times Z_6$, the possible modular weights are \begin{equation} \mbox{(Q, u, e) : } n_\alpha=0,-1,-2 \label{mw3} \end{equation} and \begin{equation} \mbox{(L, d, H) : } n_\alpha=+1,0,-1,-2,-3,-4 \label{mw4} \end{equation} For a single overall modulus $T$ the conditions for unification of the $SU(3)_C \times SU(2)_L \times U(1)$ gauge coupling constants $g_3$, $g_2$ and $\tilde{g}_1$ at a scale less than $10^{18}$ GeV may be taken to be \cite{deren,ibanez2,ibanez} \begin{equation} \frac{b'_3-b'_2}{b_3-b_2} <0 \label{bc1} \end{equation} and \begin{equation} \frac{b'_3-b'_2}{b'_3-\tilde{b}'_1} =\frac{5}{12} \label{bc2} \end{equation} where the standard model renormalisation group coefficients are \begin{equation} b_3=-3 \ ,\ b_2=1 \ ,\ \tilde{b}_1=\mfrac{33}{5} \end{equation} and the $b'_i \ ,\ i=1,2,3$ which occur in the string loop threshold corrections \cite{deren,kaplun,dixon} are given by \begin{eqnarray} b'_3&=&9+2\sum^3_{g=1}(n_{Q(g)}+\frac{1}{2}n_{u(g)}+\frac{1}{2}n_{d(g)}) \label{b3}\\ b'_2&=&15+\sum^3_{g=1}(3n_{Q(g)}+n_{L(g)}) +n_{H_1} +n_{H_2} \label{b2} \end{eqnarray} and \begin{equation} \tilde{b}'_1=\frac{99}{5}+\frac{1}{5}\sum^3_{g=1}(n_{Q(g)} +8n_{u(g)} +2n_{d(g)}+3n_{L(g)}+6n_{e(g)} ) +\frac{3}{5}(n_{H_1}+n_{H_2}) \label{b1} \end{equation} where the sum over $g$ is a sum over generations. 
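The conditions (\ref{bc1}) and (\ref{bc2}), with the coefficients (\ref{b3})-(\ref{b1}), are straightforward to check by computer for any candidate set of generation-universal modular weights. A minimal sketch in Python (using exact rational arithmetic; the five weight sets are those that will be listed in table \ref{modws}):

```python
from fractions import Fraction as F

b3, b2, b1t = -3, 1, F(33, 5)  # standard model RG coefficients

def bprimes(nQ, nu, nd, nL, ne, nH1, nH2):
    """Threshold coefficients b'_a for generation-universal modular weights."""
    b3p = 9 + 2*3*(F(nQ) + F(nu, 2) + F(nd, 2))
    b2p = 15 + 3*(3*nQ + nL) + nH1 + nH2
    b1p = F(99, 5) + F(3, 5)*(nQ + 8*nu + 2*nd + 3*nL + 6*ne) + F(3, 5)*(nH1 + nH2)
    return b3p, b2p, b1p

# the five weight sets of table \ref{modws}: (n_Q, n_u, n_d, n_L, n_e, n_H1, n_H2)
weight_sets = [(0, -2, 1, -3, -1, -1, -1), (0, -1, 0, -3, -2, -1, -1),
               (0, -1, 1, -2, -2, -1, -1), (0, -2, 0, -4, -1, -1, -1),
               (0, -1, -1, -4, -2, -1, -1)]
for w in weight_sets:
    b3p, b2p, b1p = bprimes(*w)
    assert (b3p - b2p)/(b3 - b2) < 0            # unification below 10^18 GeV
    assert (b3p - b2p)/(b3p - b1p) == F(5, 12)  # correct sin^2 θ_W prediction
```

All five sets pass both tests; for the first set, for example, $(b'_3,b'_2,\tilde{b}'_1)=(6,4,6/5)$.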
Here the $U(1)$ coupling constant $\tilde{g}_1$ is normalised so that all three coupling constants are equal at the unification scale. Assuming generation universality to avoid flavour changing neutral currents, \begin{equation} n_{Q(1)}=n_{Q(2)}=n_{Q(3)}=n_Q \end{equation} and similarly for the modular weights of the other states, the solutions of (\ref{bc1}) and (\ref{bc2}) with modular weights given by (\ref{mw1}) and (\ref{mw2}) or (\ref{mw3}) and (\ref{mw4}) are given in table \ref{modws} with \begin{equation} M_{string} \approx 0.53\times g_{string} \times 10^{18} \mbox{GeV} \label{Mstr} \end{equation} and \begin{equation} g_{string}\approx 0.7\ . \end{equation} \begin{table} \[ \begin{tabular}{ccccccc} $n_{QL}$&$n_{UR}$&$n_{DR}$&$n_{LL}$&$n_{ER}$&$n_{H_1}$&$n_{H_2}$\\ \hline 0 & -2 & 1 & -3 & -1 & -1 & -1 \\ 0 & -1 & 0 & -3 & -2 & -1 & -1 \\ 0 & -1 & 1 & -2 & -2 & -1 & -1 \\ 0 & -2 & 0 & -4 & -1 & -1 & -1 \\ 0 & -1 & -1& -4 & -2 & -1 & -1 \\ \hline \end{tabular} \] \caption{Modular weights \label{modws}} \end{table} The corresponding value of $T$ for which unification takes place at \begin{equation} M_X \approx 2 \times 10^{16} \mbox{GeV} \end{equation} is given by \cite{deren,ibanez2,ibanez} \begin{equation} \frac{M_X}{M_{string}}=[(T+\bar{T})| \eta(T)|^4 ]^{(b'_3-b'_2)/2(b_3-b_2)} \label{unif} \end{equation} and we find $T=14.5$ is suitable for all choices of modular weights of table \ref{modws}, and also gives the gauge couplings as $\alpha_s(m_Z)=0.115$ and $\sin^2\theta_W(m_Z)=0.2315$. We have restricted $n_{H_1}$ and $n_{H_2}$ to take the value $-1$ for consistency with the two mechanisms for generating the $\mu$ parameter that we shall discuss in the next section, both of which require the Higgs fields to be untwisted sector states. 
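The quoted value $T=14.5$ can be checked numerically from (\ref{unif}). A minimal sketch (assumptions: the product representation $\eta(T)=e^{-\pi T/12}\prod_{n\geq 1}(1-e^{-2\pi nT})$ for real $T$, and the exponent taken with magnitude $|b'_3-b'_2|/2|b_3-b_2|=1/4$ for the weight sets of table \ref{modws}, with the sign for which the large-$T$ threshold corrections lower $M_X$ below $M_{string}$):

```python
import math

def log_eta(T, nmax=50):
    # log of the Dedekind function η(T) = e^{-πT/12} Π_n (1 - e^{-2πnT}), T real
    return -math.pi*T/12 + sum(math.log1p(-math.exp(-2*math.pi*n*T))
                               for n in range(1, nmax + 1))

def M_X(T, M_string=0.53*0.7*1e18, p=0.25):
    # (\ref{unif}) with T + T̄ = 2T and exponent magnitude p = 1/4
    return M_string*math.exp(p*(math.log(2*T) + 4*log_eta(T)))

print(M_X(14.5))   # ≈ 1.9×10^16 GeV
```

With this convention $M_X$ decreases as $T$ grows, and $T=14.5$ indeed reproduces $M_X \approx 2\times 10^{16}$ GeV.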
\section{Soft supersymmetry breaking terms at the string scale} The soft supersymmetry breaking scalar masses, gaugino masses and $A$ and $B$ terms which occur in a supergravity theory may be calculated from the low energy limit of an orbifold compactification of the heterotic string given the K\"ahler potential, the superpotential and the gauge kinetic function derived from the string theory \cite{cvetic,ibanez,carlos,brig,brig2}. In view of our current lack of detailed knowledge of the non-perturbative superpotential responsible for supersymmetry breaking, a possible approach \cite{brig} is to absorb this uncertainty into an angle $\theta$ which measures the relative contributions of the dilaton and $T$ modulus auxiliary fields to supersymmetry breaking. In this section we summarize the resulting formulae \cite{brig} for the soft supersymmetry breaking terms and discuss the choice of values for the Green-Schwarz parameters, the moduli dependence of the Yukawa couplings that occur in the $A$ term and the $\mu$ parameter that occurs in the $B$ term. The soft supersymmetry breaking terms are expressed in terms of the angle $\theta$ defined by \cite{brig} \begin{equation} \label{FS} F^S-\tilde{\delta}_{GS}(T+\bar{T})^{-1}F^T=\sqrt{3}\,C\, Y m_{3/2} \sin{\theta} \end{equation} and \begin{equation} \left( \frac{Y-\frac{\tilde{\delta}_{GS}}{3}}{Y} \right)^{1/2} F^T=C\,(T+\bar{T})\,m_{3/2} \cos{\theta} \end{equation} where the auxiliary fields $F^S$ and $F^T$ for the dilaton and the overall $T$ modulus are given in terms of $G \equiv K+\ln{|W|^2}$ by \begin{equation} F^S -\tilde{\delta}_{GS}(T+\bar{T})^{-1}F^T=Y^2 m_{3/2} \frac{\partial G}{\partial S} \end{equation} and \begin{equation} F^T=\frac{ (T+\bar{T})^2 Y m_{3/2}}{3\left(Y-\frac{\tilde{\delta}_{GS}}{3}\right)} \left( \frac{\partial G}{\partial T}+\tilde{\delta}_{GS}(T+\bar{T})^{-1} \frac{\partial G}{\partial S}\right)\ \ .
\label{FT} \end{equation} It has been assumed that all three moduli $T_i,\ i=1,2,3$, are on the same footing in the K\"ahler potential and superpotential, and possible (CP-violating) phases have been dropped for present purposes. The vacuum energy $V_0$ is given by \begin{equation} V_0=3(C^2 -1)m^2_{3/2} \end{equation} where \begin{equation} m_{3/2}=e^{G/2} \label{grav} \end{equation} at the minimum of the effective potential. Thus, if the vacuum energy is identified with the cosmological constant we should take $C=1$. This we shall do throughout. The soft supersymmetry breaking scalar masses, $m_\alpha$, are given by \begin{equation} m^2_\alpha=(3C^2-2)m^2_{3/2}+n_\alpha C^2 m^2_{3/2} \frac{Y} {\left(Y-\frac{\tilde{\delta}_{GS}}{3}\right)}\cos^2{\theta} \label{scalars} \end{equation} with overall modular weights $n_\alpha$ as in (\ref{n}). The gaugino masses $M_a$ are given by \begin{equation} 2m^{-1}_{3/2}(\mbox{Re}f_a) M_a= \sqrt{3} C Y \sin{\theta}+ \left(\frac{Y}{Y-\frac{\tilde{\delta}_{GS}}{3}}\right)^{1/2} C (T+\bar{T}) \cos{\theta} \frac{(b'_a-\delta_{GS})}{16 \pi^3} \,\hat{G}_2(T,\bar{T}) \label{gauginos} \end{equation} where \begin{eqnarray} \hat{G}_2(T,\bar{T})&=&G_2(T)-2\pi(T+\bar{T})^{-1} \\ G_2(T)&=&-4\pi \frac{d \ln{\eta}}{dT} \end{eqnarray} where $\eta(T)$ is the Dedekind function and, for the standard model, $b'_a, \ a=1,2,3$, are given by (\ref{b3})-(\ref{b1}). The real part of the gauge kinetic function Re$f_a$ is given by \begin{equation} \mbox{Re}f_a=g^{-2}_a(M_{string}) \end{equation} and is determined from the gauge coupling constant $g_a(m_Z)$ at the electroweak scale $Q=m_Z$ by \begin{equation} g^{-2}_a(M_{string})-g^{-2}_a(m_Z)=\frac{b_a}{16\pi^2} \ln \left(\frac{m_Z^2} {M^2_{string}}\right) \end{equation} with $M_{string}$ as in (\ref{Mstr}).
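To get a feeling for (\ref{scalars}): with $C=1$ the scalar masses interpolate between the universal dilaton-dominated value $m_\alpha=m_{3/2}$ at $\theta=\pi/2$ and strongly non-universal values in the moduli-dominated limit, where large negative modular weights can drive $m^2_\alpha$ negative. A rough numerical sketch (the values $Y\approx 2/g^2_{string}$ and $\delta_{GS}=-40$ are illustrative assumptions, not outputs of the model):

```python
import math

C = 1.0                           # zero vacuum energy
Y = 2/0.7**2                      # assumed Y ≈ 2/g_string² ≈ 4.1
dGS_t = -40/(8*math.pi**2)        # δ̃_GS for δ_GS = -40 (Z2 × Z6)
f = Y/(Y - dGS_t/3)               # factor multiplying cos²θ in (\ref{scalars})

def m2_ratio(n_alpha, theta):
    """m²_α / m²_{3/2} for modular weight n_α and goldstino angle θ."""
    return (3*C**2 - 2) + n_alpha*C**2*f*math.cos(theta)**2

# dilaton-dominated limit: universal, n_α-independent scalar masses
assert abs(m2_ratio(-3, math.pi/2) - 1.0) < 1e-12
# moduli-dominated limit with n_α = -3 (e.g. n_L in the first weight set):
# tachyonic at the string scale, so such values of θ must be excluded
assert m2_ratio(-3, 0.0) < 0
```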
The soft supersymmetry breaking $A$ terms $A_{\alpha\beta\gamma}$ are given by \begin{equation} A_{\alpha\beta\gamma}=-\sqrt{3} C m_{3/2} \sin\theta- Cm_{3/2} \left(\frac{Y}{Y-\frac{\tilde{\delta}_{GS}}{3}}\right)^{1/2}\cos\theta\,\omega_{\alpha\beta \gamma}(T) \end{equation} where \begin{equation} \omega_{\alpha\beta\gamma}(T)=3+n_\alpha+n_\beta+n_\gamma-(T+\bar{T}) \frac{\partial \ln h_{\alpha\beta\gamma}}{\partial T} \end{equation} and the trilinear term $\tilde{W}_3$ in the perturbative superpotential has been written as \begin{equation} \tilde{W}_3=h_{\alpha\beta\gamma} \phi_{\alpha} \phi_{\beta} \phi_{\gamma}\ \ . \end{equation} The modular weights $n_\alpha$, $n_\beta$ and $n_\gamma$ are chosen to correspond to one of the solutions for gauge coupling constant unification (for the $Z_2 \times Z_6$ or $Z_3 \times Z_6$ orbifold) discussed in the previous section. Since we are assuming large values of Re$T$ in order to reduce the unification scale to $2 \times 10^{16}$ GeV, we shall use the asymptotic form of $h_{\alpha\beta\gamma}$ valid for large Re$T$, \begin{equation} h_{\alpha\beta\gamma} \sim \exp\left( - \frac{\pi}{3}\lambda_{\alpha\beta \gamma} T \right) \label{htop} \end{equation} where $\lambda_{\alpha\beta\gamma}$ is an integer in the range 0 to 4 for the $Z_2 \times Z_6$ orbifold and in the range 0 to 10 for the $Z_3 \times Z_6$ orbifold \cite{bl2}. The constant of proportionality in (\ref{htop}) is expected to be of order $g_{string}$. Here and in (\ref{scalars}) and (\ref{gauginos}) we shall use the Green-Schwarz parameter obtained by inserting $\delta_{GS}^i,\ i=1,2,3$, for the $Z_2 \times Z_6$ or $Z_3 \times Z_6$ orbifold in (\ref{gs}) and (\ref{gst}). Although, in general, the Green-Schwarz parameters $\delta_{GS}^i$ have different values for the different complex planes, contradicting the assumption that all three complex planes are on the same footing in $G$, this better approximates the situation than neglecting the Green-Schwarz parameters. 
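For the trilinear terms, the moduli dependence of the Yukawa couplings enters only through $\omega_{\alpha\beta\gamma}$. A hedged sketch (with $C=1$, $T=14.5$, and the illustrative values $Y\approx 2/g^2_{string}$ and $\delta_{GS}=-40$ of (\ref{gsvals})):

```python
import math

C, T = 1.0, 14.5
Y = 2/0.7**2                         # assumed Y ≈ 2/g_string²
dGS_t = -40/(8*math.pi**2)           # δ̃_GS for δ_GS = -40
root = math.sqrt(Y/(Y - dGS_t/3))

def A(n_sum, lam, theta):
    """A_{αβγ}/m_{3/2} for h_{αβγ} ∝ exp(-πλT/3), n_sum = n_α+n_β+n_γ."""
    # (T+T̄) ∂_T ln h = -2πλT/3, so ω_{αβγ}(T) = 3 + n_sum + 2πλT/3
    omega = 3 + n_sum + 2*math.pi*lam*T/3
    return -math.sqrt(3)*C*math.sin(theta) - C*root*math.cos(theta)*omega

# e.g. n_Q + n_u + n_H2 = -3 (first weight set) with λ = 0: ω vanishes
# and A_t = -√3 m_{3/2} sinθ, independent of T
print(A(-3, 0, math.pi/2))
```

Note that a twisted-sector coupling with $\lambda > 0$ produces an $\omega$ of order $2\pi\lambda T/3 \sim 30$, so the moduli-dominated contribution to $A$ is strongly $T$-enhanced in that case.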
A simple model is to take a pure gauge hidden sector with $E_8$ gauge group. Then \cite{deren,bl3} \begin{equation} \delta_{GS}^i=\frac{b_a}{3} \left(1-2\frac{|G_i|}{|G|}\right) \end{equation} where $a$ now refers to the hidden sector gauge group and the $i$th complex plane is left unrotated by the subgroup $G_i$ of the point group $G$. With $b_a=-90$ for $E_8$ we have \begin{equation} \delta_{GS}=-40\mbox{ or }-50 \label{gsvals} \end{equation} for $Z_2 \times Z_6$ or $Z_3 \times Z_6$ respectively. The soft supersymmetry breaking $B$ term is more model dependent because of different possible origins for the $\mu$ parameter. If the $\mu$ term is generated non-perturbatively as an explicit superpotential term $\mu_W \phi_1 \phi_2$, where $\phi_1$ and $\phi_2$ are the superfields for the Higgs scalars $H_1$ and $H_2$, then the $B$ term, which in this case we denote by $B_W$, is given by \begin{eqnarray} m^{-1}_{3/2}B_W&=&-1-\sqrt{3}C\sin\theta\left(1-Y\frac{\partial\ln\mu_W}{\partial S}\right) -\left(\frac{Y}{Y-\frac{\tilde{\delta}_{GS}}{3}}\right)^{1/2} C \cos\theta\left(\frac{}{}3+ n_{H_1}+n_{H_2}\right.\nonumber\\ & &\ \ \ \ \ \left.-(T+\bar{T})\frac{\partial\ln\mu_W}{\partial T}-\tilde{\delta}_{GS}\frac{\partial\ln\mu_W} {\partial S} \right) \ \ . \label{BW} \end{eqnarray} If the $\mu$ parameter is gaugino condensate induced \cite{antoniadis} then \begin{equation} \mu_W \propto W_{np}\frac{\partial\ln\eta(T_3)}{\partial T_3}\frac{\partial \ln\eta(U_3)}{\partial U_3} \label{muW} \end{equation} where $W_{np}$ is the non-perturbative superpotential and the orbifold is assumed to possess a $Z_2$ plane, taken to be the third complex plane with associated moduli $T_3$ and $U_3$ and untwisted matter fields $\phi_1$ and $\phi_2$. Such a mechanism is possible for the $Z_2 \times Z_6$ orbifold though not for the $Z_3 \times Z_6$ orbifold which does not possess a $Z_2$ plane. In the case of $Z_3 \times Z_6$ we take $\mu_W$ constant as in ref.\cite{brig}.
Because $\phi_1$ and $\phi_2$ are then necessarily untwisted states, the modular weights $n_{H_1}$ and $n_{H_2}$ should be taken to be $-1$. It is somewhat problematic to employ this mechanism in the context of the simple model with only a single overall modulus $T$ being considered here. However, if we neglect the auxiliary field for $U_3$, or equivalently assume that $U_3$ does not contribute significantly to the supersymmetry breaking, then (\ref{BW}) is correct when $T_1$, $T_2$ and $T_3$ are on the same footing. There is also the difficulty that gaugino condensate models in general produce a negative vacuum energy $V_0$ rather than zero vacuum energy, as we have assumed after (\ref{grav}). Nonetheless, we think it worthwhile to study this mechanism to obtain a flavour of the effect on the supersymmetric particle spectrum of the kind of moduli dependence of the $\mu$ parameter that can occur in physically motivated models. After evaluating $\frac{\partial\ln\mu_W}{\partial S}$ and $\frac{\partial\ln\mu_W}{\partial T}$ we obtain \begin{equation} m^{-1}_{3/2}B_W=3C^2-1-\left(\frac{Y}{Y-\frac{\tilde{\delta}_{GS}}{3}}\right)^{1/2} C \cos \theta \left(n_{H_1}+n_{H_2}-(T+\bar{T}) \left( \frac{\partial\ln\eta(T)}{\partial T} \right)^{-1}\frac{\partial^2\ln\eta(T)}{\partial T^2} \right) \end{equation} Here $\frac{\partial\ln\mu_W}{\partial S}$ and $\frac{\partial\ln\mu_W}{\partial T}$ have been written in terms of $F^T$ and $F^S$ and so in terms of $\theta$ using (\ref{FS})-(\ref{FT}), and because $T_i,\ i=1,2,3$, are not on the same footing in (\ref{muW}), $(T+\bar{T})\frac{\partial\ln\mu_W}{\partial T}$ has been interpreted as $(T_3+\bar{T}_3)\frac{\partial\ln\mu_W}{\partial T_3}$ evaluated at $T_3=T$.
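Numerically, the $\eta$-dependent piece of this expression is invisible at the large modulus values used here: assuming the product representation $\eta(T)=e^{-\pi T/12}\prod_n(1-e^{-2\pi nT})$ for real $T$, the second log-derivative is exponentially suppressed. A quick check:

```python
import math

T = 14.5
# log-derivatives of η(T) = e^{-πT/12} Π_n (1 - e^{-2πnT}) for real T
d1, d2 = -math.pi/12, 0.0
for n in range(1, 100):
    q = math.exp(-2*math.pi*n*T)
    d1 += 2*math.pi*n*q/(1 - q)
    d2 -= (2*math.pi*n)**2*q/(1 - q)**2

term = 2*T*d2/d1        # the (T+T̄)(∂_T ln η)^{-1} ∂²_T ln η piece
print(abs(term))        # utterly negligible at T = 14.5
```

So for $T=14.5$, $n_{H_1}=n_{H_2}=-1$ and $C=1$, $B_W$ is numerically indistinguishable from $m_{3/2}\big[2+2(Y/(Y-\frac{\tilde{\delta}_{GS}}{3}))^{1/2}\cos\theta\big]$.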
On the other hand if the $\mu$ parameter is generated by a term of the form $(Z \phi_1 \phi_2+h.c.)$ in the K\"ahler potential \cite{antoniadis} mixing the Higgs superfields then the tree level form of $Z$ is \begin{equation} Z=(T_3+\bar{T}_3)^{-1} (U_3+\bar{U}_3)^{-1} \end{equation} if the third complex plane is the $Z_2$ plane with whose moduli, $T_3$ and $U_3$, the untwisted matter fields $\phi_1$ and $\phi_2$ are associated for this mechanism. The effective $\mu$ parameter $(\mu_Z)_{eff}$ derived from the Higgsino mass term is \begin{equation} (\mu_Z)_{eff}=|W_{np}|Z(1+C\cos\theta) \label{muZ} \end{equation} and the final form of the $B$ term, which in this case we denote by $B_Z$, is given by \begin{equation} m^{-1}_{3/2} B_Z= \frac{[2(1+C\cos\theta)-3(C^2-1)]}{(1+C \cos\theta)}\ \ . \label{BZ} \end{equation} In particular, when $V_0$ is zero so that $C=1$, as we are assuming throughout, $B_Z$ takes the constant value $2m_{3/2}$. This compares with $2(1+\cos\theta)m_{3/2}$ in ref. \cite{brig}, where $Z$ was taken to be a moduli independent constant. In arriving at (\ref{muZ}) and (\ref{BZ}) we have again assumed that there is no significant supersymmetry breaking due to the $U$ modulus, so as to be able to neglect the auxiliary field for the $U$ modulus. Also, here and elsewhere in this section the usual rescaling by a factor $e^{K/2}\frac{\bar{W}_{np}}{|W_{np}|}$ required to go from the supergravity theory derived from the orbifold compactification of the string theory to the globally supersymmetric theory has been carried out, together with normalisation of the matter fields (see, for example, ref.\cite{brig}). \section{Running of coupling constants and supersymmetry breaking parameters} The method for running coupling constants and supersymmetry breaking parameters from the unification scale $M_X$ to the electroweak scale is well known. (See, for example, refs. \cite{ibanez3} and \cite{ibanez4}.)
The relevant renormalisation group equations for our purposes are summarized in appendix A and the relevant solutions in appendix B, with the bottom quark and $\tau$ lepton Yukawa couplings, as well as the first and second generation Yukawa couplings, neglected but the effect of the $\mu$ parameter retained. The top Yukawa, $h_t$, and the $\mu$ parameter have been defined through the superpotential terms \begin{equation} W=h_tQ_tt^cH_2-\mu H_1H_2 \label{W} \end{equation} where $Q_t$ is the doublet $(t\ b)_L$ for the top and bottom quarks, $t^c$ is the corresponding singlet and the Higgs doublets are \begin{equation} H_1=\left(\begin{array}{c} H^0_1 \\H^-_1 \end{array}\right) \ ,\ H_2=\left(\begin{array}{c} H^+_2 \\H^0_2 \end{array}\right) \end{equation} where $H^0_1$ and $H^0_2$ have expectation values $v_1$ and $v_2$ respectively. In (\ref{W}), $Q_t t^c H_2$ is shorthand for $Q_t^T i \tau^2 H_2 t^c$ and $\mu H_1 H_2$ for $\mu H^T_1 i \tau^2 H_2$. The tree level Higgs scalar potential $V_{eff}$ in terms of the above expectation values is \begin{equation} V_{eff}=\mu^2_1 v^2_1+\mu^2_2 v^2_2-2\mu^2_3v_1v_2+\frac{1}{8}(g^2_1+g^2_2) (v^2_1-v^2_2)^2 \end{equation} where \begin{equation} \mu^2_1=m^2_1+\mu^2 \ ,\ \mu^2_2=m^2_2+\mu^2 \ ,\ \mu^2_3=-\mu B m_{3/2} \end{equation} and $m_1$ and $m_2$ are the soft supersymmetry breaking masses for $H_1$ and $H_2$. Minimisation of the tree level effective potential gives \begin{equation} \omega^2 = \frac{\mu_1^2+\frac{1}{2}m_Z^2}{\mu_2^2+\frac{1}{2}m_Z^2} \label{min1} \end{equation} and \begin{equation} \frac{\omega}{\omega^2+1}=\frac{\mu_3^2}{\mu_1^2+\mu_2^2} \label{min2} \end{equation} at the electroweak scale $Q=m_Z$ where \begin{equation} \omega^{-1}=\tan\tilde{\theta}=\frac{v_1}{v_2}\ \ .
\end{equation} Also the following inequalities must hold \begin{eqnarray} \mu_1^2+\mu_2^2&>&2|\mu_3^2| \label{con1}\\ \mu_3^4&>&\mu_1^2\mu_2^2 \label{con2}\\ \mu_2^2+m_{QL}^2+m_{UR}^2&>&m^2(2|A_t|-3)\label{ccb} \end{eqnarray} as explained in ref.\cite{ibanez3}. Our strategy for fixing some of the parameters in the models is as follows. Knowing the values $m_1^2(0)$, $m_2^2(0)$ and $B(0)$ of the soft supersymmetry breaking parameters at the gauge coupling constant unification scale $M_X$ (which differ little from their values at the string scale) and assuming values for the gravitino mass $m_{3/2}$ in the range 100 GeV to 10 TeV, (\ref{min1}) and (\ref{min2}) are a pair of equations that can be solved for $\mu(0)$ and $\omega$. Then \begin{equation} m_W^2=\frac{g_2^2}{2}(v_1^2+v_2^2) \end{equation} determines $v_1$ and $v_2$. In addition \begin{equation} m_t=h_t v_2 \end{equation} fixes $h_t$ at the electroweak scale and eqn.(\ref{Yt}) determines $h_t (0)$. We run all renormalisation group equations from the gauge coupling constant unification scale $M_X$ and ignore small effects due to the difference between $M_X$ and the string scale. The supersymmetry breaking parameters at the string scale are calculated as in $\S 3$. Then the predictions for the supersymmetric particle spectrum to be discussed in the next section are parameterised by the angle $\theta$ which measures the relative contribution of the dilaton $S$ and the modulus $T$ to supersymmetry breaking and the gravitino mass (assuming zero vacuum energy $V_0$ so that $C=1$). In addition, the outcome for the spectrum depends on the choice of modular weights $n_\alpha$ from amongst the sets allowed for the $Z_2 \times Z_6$ and $Z_3 \times Z_6$ orbifolds, as in table \ref{modws}, and on the mechanism adopted to generate the $\mu$ parameter, which influences the form of the $B$ term. The choice of modular weights also fixes the value of $T$ from (\ref{unif}). 
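As a consistency check on (\ref{min1}) and (\ref{min2}): choosing $\omega$ and $\mu_3^2$ and reconstructing $\mu_1^2$ and $\mu_2^2$ from stationarity of the tree level potential, both relations hold at the minimum. A sketch with purely illustrative numbers (the value taken for $\bar{g}^2 \equiv g_1^2+g_2^2$ is an assumption):

```python
import math

mZ, gsq = 91.19, 0.54            # illustrative: m_Z in GeV, ḡ² = g1² + g2²
omega, mu3sq = 3.0, 5.0e3        # illustrative choice of ω = v2/v1 and μ3² (GeV²)

vsq = 2*mZ**2/gsq                # from m_Z² = ½ ḡ² (v1² + v2²)
v1 = math.sqrt(vsq/(1 + omega**2)); v2 = omega*v1
D = 0.5*mZ**2*(1 - omega**2)/(1 + omega**2)
mu1sq = mu3sq*omega - D          # stationarity of V_eff in v1
mu2sq = mu3sq/omega + D          # stationarity of V_eff in v2

def V(a, b):                     # tree level Higgs potential
    return mu1sq*a*a + mu2sq*b*b - 2*mu3sq*a*b + gsq/8*(a*a - b*b)**2

h = 1e-4                         # numerical gradient vanishes at (v1, v2)
assert abs(V(v1 + h, v2) - V(v1 - h, v2)) < 1e-2
assert abs(V(v1, v2 + h) - V(v1, v2 - h)) < 1e-2
# the minimisation relations quoted in the text
assert abs(omega**2 - (mu1sq + 0.5*mZ**2)/(mu2sq + 0.5*mZ**2)) < 1e-9
assert abs(omega/(omega**2 + 1) - mu3sq/(mu1sq + mu2sq)) < 1e-12
```

In practice the logic runs the other way: the relations are solved for $\mu(0)$ and $\omega$ given the soft parameters, as described above.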
The Green-Schwarz parameters $\delta_{GS}$ are taken from (\ref{gsvals}). The above discussion neglects radiative corrections to the effective potential. When these are included \cite{ellis,ellis2} the strategy for obtaining the expectation values $v_1$ and $v_2$ has to be amended. Those supersymmetric particle masses that depend on $v_1$ and $v_2$ are then modified as well as the Higgs scalar masses. We will discuss these points in detail in the next section. We have not considered the radiative corrections to (\ref{con1})-(\ref{ccb}) which may exclude some values of $\theta$, in particular the dilaton dominated case \cite{casas}. \section{The supersymmetric particle spectrum} The expressions for the masses of the supersymmetric partners of standard model states in terms of the soft supersymmetry breaking parameters are well known. For the first two generations of quarks and leptons the Yukawa couplings and $A$ terms are negligible and the corresponding squark and slepton mass terms are simply the soft supersymmetry breaking scalar masses. For the third generation it is necessary to allow for a non-negligible top Yukawa coupling and the top squark masses are given by \begin{equation} m^2_{\tilde{t}_{h,l}}=m_t^2+\frac{1}{2}\left(m^2_Q+m^2_U \pm \left[(m^2_Q- m^2_U)^2+4m_t^2(A_tm_{3/2}+\mu\omega^{-1})^2\right]^{1/2}\right) \end{equation} where $m_Q$ and $m_U$ refer to the scalar partners of the quark doublet and one of the quark singlets for the third generation respectively, and the D term has been neglected. The gluino mass is given by the Majorana mass term. However, the Wino and Zino mix with the Higgsinos.
The chargino mass matrix has eigenvalues $m_{c_{h,l}}$ given by \begin{equation} 2m^2_{c_{h,l}}=M_2^2+\mu^2+2m_W^2 \pm \Delta^{1/2} \end{equation} where \begin{equation} \Delta = (M_2^2-\mu^2)^2 +4m_W^2(M_2^2 +\mu^2 +2 M_2 \mu \sin{2\tilde{\theta}}) +4m_W^4 \cos^2 2\tilde{\theta} \end{equation} The neutralino mass matrix has the form \begin{equation} \begin{array}{cc} &\begin{array}{cccc}\ i\tilde{W}^3\ \ &\ i\tilde{B}\ \ &\ \ \tilde{h}_2^0\ &\ \ \tilde{h}_1^0\ \end{array} \\ \begin{array}{c}i\tilde{W}^3\\i\tilde{B}\\\tilde{h}_2^0\\\tilde{h}_1^0 \end{array} & \left(\begin{array}{cccc}-M_2&0&-\frac{g_2v_2}{\sqrt{2}}&\frac{g_2 v_1}{\sqrt{2}}\\ 0&-M_1&\frac{g_1v_2}{\sqrt{2}}&-\frac{g_1v_1}{\sqrt{2}}\\ -\frac{g_2v_2}{\sqrt{2}}&\frac{g_1v_2}{\sqrt{2}}&0&\mu\\ \frac{g_2 v_1}{\sqrt{2}}&-\frac{g_1v_1}{\sqrt{2}}&\mu&0\end{array}\right) \end{array} \ \ +h.c. \end{equation} In addition the charged Higgs has mass \begin{equation} m_{H^\pm}^2 =m_W^2 +\mu_1^2 +\mu_2^2 \end{equation} and the neutral Higgses have masses \begin{equation} m_c^2=\mu_1^2+\mu_2^2 \end{equation} and \begin{equation} m_{a,b}^2=\frac{1}{2}\left(m_c^2+m_Z^2\pm\left[(m_c^2+m_Z^2)^2-4m_c^2m_Z^2 \cos^2 2\tilde{\theta} \right]^{1/2}\right) \ . \label{neutralhiggs} \end{equation} In our detailed calculations, the mass $m_b$ given by (\ref{neutralhiggs}) is generically lower than the experimental bound. However, the one loop radiative corrections to the Higgs scalar masses are substantial \cite{ellis,ellis2} and we shall use the one loop Higgs scalar effective potential in what follows. The one loop corrected formulae for the Higgs masses can be found in ref.\cite{ellis2}. 
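The quoted chargino eigenvalues follow from the $2\times 2$ chargino mass matrix; since $\sin 2\tilde{\theta}$ and $\cos^2 2\tilde{\theta}$ are symmetric under $v_1 \leftrightarrow v_2$, they may equally be written in terms of $\tan\beta = v_2/v_1$. A small numerical cross-check (illustrative parameter values):

```python
import math

M2, mu, mW, tb = 300.0, 450.0, 80.4, 4.0   # illustrative GeV values; tb = v2/v1
s2 = 2*tb/(1 + tb**2)                      # sin 2θ̃
c2 = (1 - tb**2)/(1 + tb**2)               # cos 2θ̃ (up to an irrelevant sign)

# eigenvalues of X X† for the chargino matrix X = [[M2, √2 mW sinβ],
#                                                  [√2 mW cosβ, μ]]
tr = M2**2 + mu**2 + 2*mW**2
det = M2*mu - mW**2*s2
lam_h = 0.5*(tr + math.sqrt(tr**2 - 4*det**2))
lam_l = 0.5*(tr - math.sqrt(tr**2 - 4*det**2))

# closed form quoted in the text
Delta = (M2**2 - mu**2)**2 + 4*mW**2*(M2**2 + mu**2 + 2*M2*mu*s2) + 4*mW**4*c2**2
mch_sq = 0.5*(tr + math.sqrt(Delta))
mcl_sq = 0.5*(tr - math.sqrt(Delta))

assert abs(lam_h - mch_sq) < 1e-6*lam_h and abs(lam_l - mcl_sq) < 1e-6*lam_l
print(math.sqrt(mcl_sq), math.sqrt(mch_sq))   # light and heavy chargino masses
```

The agreement is an algebraic identity: $\Delta$ is just $(\mathrm{tr}\,XX^\dagger)^2 - 4(\det X)^2$ rewritten.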
When the one loop corrections to the effective potential are included the minimisation conditions (\ref{min1}) and (\ref{min2}) for the expectation values $v_1$ and $v_2$ are modified with the result that \begin{equation} \omega^2= \frac{2\mu_1^2+m_Z^2+\frac{1}{v_1}\frac{ \partial\Delta V_1}{\partial v_1}-\frac{v_2}{v_1^2}\frac{ \partial\Delta V_1}{\partial v_2} }{2\mu_2^2+m_Z^2} \label{1lpmin1} \end{equation} and \begin{equation} \frac{\omega}{\omega^2+1}=\frac{2\mu_3^2}{2\mu_2^2 +2\mu_1^2 + \frac{1}{v_1}\frac{\partial \Delta V_1}{\partial v_1}+\frac{1}{v_2}\frac{ \partial \Delta V_1}{\partial v_2}} \label{1lpmin2} \end{equation} where $\Delta V_1$ is the one loop correction to the effective potential evaluated at $m_Z$ and \begin{eqnarray} \frac{\partial\Delta V_1}{\partial v_i}&=&\frac{3}{16\pi^2}\left[ m_{\tilde{t}_h}^2 \frac{\partial m_{\tilde{t}_h}^2}{\partial v_i} \left( \ln{\frac{m_{\tilde{t}_h}^2}{m_Z^2}}-1 \right) + m_{\tilde{t}_l}^2 \frac{\partial m_{\tilde{t}_l}^2}{\partial v_i} \left( \ln{\frac{m_{\tilde{t}_l}^2}{m_Z^2}}-1 \right) \right. \nonumber\\ & &\left.-2m_t^2 \frac{\partial m_t^2}{\partial v_i} \left(\ln{\frac{m_t^2}{m_Z^2}}-1\right)\right] \ ,\ i=1,2\ \ . \end{eqnarray} The strategy for fixing the parameters in the models is essentially that described in $\S 4$ except that (\ref{1lpmin1}) and (\ref{1lpmin2}) should now be regarded as a pair of equations for $v_1$, $v_2$ and $\mu(0)$ given the soft supersymmetry breaking parameters $m_Q^2$, $m_U^2$ and $A_t$ at the string scale and given values for $m_t$ and $m_{3/2}$, rather than as a pair of equations that can be solved for $\mu(0)$ and $\omega$. In deriving the possible supersymmetric particle spectrum we have insisted on no negative squared masses at the string scale to avoid high scale symmetry breaking in the standard model. We have also insisted on the following experimental constraints.
From LEP1.5 data, there are no charged or coloured sparticles with masses below 65 GeV, the lightest Higgs is heavier than 65 GeV, and the lower bound on the charginos is 80~GeV. Tevatron data indicates that the gluino mass is above 175 GeV, but should not exceed 1.5 TeV (to avoid reintroducing the hierarchy problem). The top quark mass is known to be 175$\pm$6 GeV. The vev of the Higgs responsible for the top quark mass has a maximum value given by \begin{equation} v_1^2+v_2^2 = \frac{2 m_Z^2}{(g^2+g'^2)} \end{equation} with $v_1=0$ implying $v_2$(max)=173.3 GeV. Since $m_t=h_t v_2$ this puts a lower limit on $h_t$ if $m_t=175 \pm6$ GeV is to be obtained. Specifically the value at $M_X$ is $h_t$(min)$=0.52$ and so it is appropriate to set $\lambda=0$ in (\ref{htop}) for the top Yukawa coupling. One loop minimisation conditions have been used throughout and the Higgs masses are one loop corrected. The parameter $\omega$ is found to be never greater than 6, justifying the neglect of the b-quark contribution. The D terms have been included in the mass of the lightest sleptons, the right selectron and the left sneutrino. In the figures the following notation is used for the masses: \newline $c_h$ , $c_l$ : heavy and light charginos \newline $t_h$ , $t_l$ : heavy and light stops \newline $H_a$ , $H_b$ : heavy and light CP-even Higgses respectively \newline $m_t$ : top quark \newline $E_R$ , $V_L$ : right selectron and left sneutrino respectively \newline $N1$ : lightest neutralino \newline $g$ : gluino \newline Particles not displayed are the three neutralinos which are degenerate with the charginos, the charged and CP-odd Higgses which are degenerate with $H_a$, the remaining squarks which are all only slightly less massive than the gluino, and the left selectron which is always heavier than $V_L$. Several models will now be presented that are representative of the variety of the supersymmetric particle spectra that can occur.
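The bound $v_2$(max)=173.3 GeV and the implied weak-scale floor on $h_t$ can be checked in a few lines. The electroweak couplings below are assumed approximate values at $m_Z$; the quoted string-scale value $h_t$(min)$=0.52$ additionally requires the renormalisation group running up to $M_X$, which this sketch does not reproduce:

```python
import math

# Approximate electroweak inputs at m_Z (assumed values for illustration).
mZ = 91.19   # GeV
g2 = 0.652   # SU(2) coupling g
g1 = 0.357   # U(1)_Y coupling g'

# v_1 = 0 saturates v_1^2 + v_2^2 = 2 m_Z^2 / (g^2 + g'^2).
v2_max = math.sqrt(2.0) * mZ / math.hypot(g2, g1)

# m_t = h_t v_2 then implies a weak-scale lower bound on h_t for m_t = 175 GeV.
ht_min_weak = 175.0 / v2_max
```

The weak-scale bound exceeds unity; running it up to the string scale brings it down to the quoted 0.52.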
\begin{figure}[p] \[ \epsfxsize=6in \epsfysize=3.5in \epsffile{fig1.eps} \] \caption{$\delta_{GS}=-50$, $m_{3/2}=100$ GeV, $T=14.5$, $h_t=0.7$, $B \equiv B_W$, modular weights as in line 2 of table \ref{modws}. \label{modA4}} \end{figure} \begin{figure}[p] \[ \epsfxsize=6in \epsfysize=3.5in \epsffile{fig2.eps} \] \caption{$\delta_{GS}=-40$, $m_{3/2}=100$ GeV, $T=14.5$, $h_t=0.9$, $B \equiv B_W$, modular weights as in line 2 of table \ref{modws}. \label{modB2}} \end{figure} In figure \ref{modA4} the resulting mass spectrum is shown for the $Z_3 \times Z_6$ orbifold, characterised here by $\delta_{GS}=-50$, with $m_{3/2}=100$ GeV, $h_t=0.7$, modular weights as in line 2 of table \ref{modws} and with the $B$ term given by $B_W$ with $\mu_W$ constant. The only allowed region is bounded on the left by $E_R$ acquiring too low a mass, and on the right by $c_l$ becoming too light, while $m_t$ is always in the vicinity of 175 GeV. Further, the acceptable part of the spectrum is limited by the requirement of positive squared scalar masses at the string scale, which confines it to the regions between the two pairs of vertical lines shown in the figure, centred on the dilaton dominated limits ($\theta=\frac{\pi}{2}, \frac{3\pi}{2}$). The part of the spectrum around $\theta=\frac{\pi}{2}$ is ruled out because $m_t$ is unacceptably low there and the electroweak symmetry is unbroken. As in ref.\cite{brig}, there is a clear division of the spectrum into a heavy and a light group, although now there is a far greater variation in masses as $\theta$ is varied than was the case in ref.\cite{brig}. This latter effect is attributable directly to the magnitude of $\delta_{GS}$ and the effect it has on the gaugino masses, which feed through to all sparticles.
At the dilaton dominated limit the spectrum is qualitatively similar to that in ref.\cite{brig}, but away from this limit we see that the gluino (and squarks) are often heavier than the heavy stop (in \cite{brig} the gluino mass was fixed). Particularly noticeable, and attributable to $|\delta_{GS}|$, is the mass of the lightest neutralino (which is mostly $M_1$), which often exceeds 100 GeV. This is worth noting because, as seen in figure \ref{modA4}, on the left hand limit of the allowed region its role as the `lightest supersymmetric particle' is jeopardised in favour of the right selectron. This is why the D term has been included in $E_R$ (it can add 10 GeV or more). The lightest Higgs, $H_b$, is also in the region of 100 GeV and the light chargino can be lighter than the sleptons. A change in $m_{3/2}$ will scale all masses (except $m_t$). Decreasing $m_{3/2}$ narrows the allowed region by virtue of $E_R$ and $c_l$ becoming too light on the right hand edge, giving an approximate effective minimum of $m_{3/2}\simeq70$ GeV, below which the dilaton dominated limit is unreachable. Increasing $m_{3/2}$ rapidly pushes the gluino mass well above the 1.5 TeV limit. For $m_{3/2}\geq250$ GeV even the dilaton dominated limit is excluded. The allowed regions may not be extended to the right significantly, even with a high gravitino mass, because $c_l$ remains too light there. A variation in $h_t$ affects all masses due to its appearance in the one-loop effects, and an increase in $h_t$ will increase all masses slightly. However, adjustment of $h_t$ is restricted by $m_t$ and it may not deviate far from $0.7$ without pushing $m_t$ out of the experimental bounds. A different choice of modular weights from table \ref{modws} will not change the spectrum very much other than by changing the acceptable width at the string scale.
The modular weights from line 3 of table \ref{modws} are the least restrictive at the string scale, the `heaviest' weight being $-2$, and so the acceptable part of the spectrum is widened slightly, while lines 4 and 5 from table \ref{modws} have the opposite effect. Thus the spectrum displayed in figure \ref{modA4} is very typical and deviations from it are small. \begin{figure}[tb] \[ \epsfxsize=6in \epsfysize=3.5in \epsffile{fig3.eps} \] \caption{$\delta_{GS}=-40$, $m_{3/2}=100$ GeV, $T=14.5$, $h_t=0.9$, $B \equiv B_Z$, modular weights as in line 2 of table \ref{modws}. \label{modJ2}} \end{figure} Figure \ref{modB2} shows the spectrum obtained for a $Z_2 \times Z_6$ orbifold ($\delta_{GS}=-40$) with $\mu_W$ as in (\ref{muW}); it has two valid regions. The inclusion of the derivatives of $\mu_W$ in $B_W$ is instrumental in obtaining this result; were they neglected, we would obtain only one valid region similar to figure \ref{modA4}. Each region is qualitatively similar to that shown in figure \ref{modA4}, although both regions are bounded on the right by $m_t$ becoming too low. The dilaton dominated limit is not reachable at $\theta=\frac{3\pi}{2}$ for this reason, while that at $\theta=\frac{\pi}{2}$ is reachable. Note that here $h_t$ cannot deviate far from 0.9 without $m_t$ being pushed outside the experimental bounds. Conversely, obtaining an acceptable $m_t$ beyond the displayed regions would require an unacceptably high value of $h_t$. We also find $70<m_{3/2}<250$ GeV for an acceptable spectrum, as before. Concerning the other form of the $B$ term, $B_Z$, which is only valid for the \mbox{$Z_2 \times Z_6$} orbifold because it requires a $Z_2$ plane, an example is shown in figure \ref{modJ2}. Comparison with figure \ref{modB2} shows some differences. In the right hand region the $H_a$-$H_b$ splitting is increased and in both regions the $t_h$-$t_l$ splitting is reduced. There is near degeneracy between $c_h$, $t_l$ and $H_a$ in the right hand region.
The light group remains relatively unaffected, although the top quark is, on average, heavier in the left hand region than in the right hand region. To obtain central values of $m_t$ the left and right regions require $h_t=0.8, 1.0$ respectively. The dilaton dominated limits at $\theta=\frac{\pi}{2},\frac{3\pi}{2}$ are respectively included and excluded as in figure \ref{modB2}. \section{Conclusions} We have studied the supersymmetric particle spectrum in orbifold compactifications of string theory where unification of gauge coupling constants at $2\times 10^{16}$ GeV is due to large moduli dependent string loop threshold corrections, making a number of changes to the assumptions in Brignole {\em et al}.\ \cite{brig} in order to explore the robustness of the qualitative features of the spectrum obtained. The specific orbifold models considered here show that the inclusion of the derivatives of $\mu_W$ in $B_W$ is important in obtaining acceptable spectra in two separate regions, and that $h_t\approx0.9$ is also necessary to obtain correct values for the top quark mass. For $Z_2 \times Z_6$ orbifold models there is then little resultant difference between the $B_Z$ and $B_W$ mechanisms. For both mechanisms the gravitino mass is restricted approximately to the range 70-250 GeV in order to satisfy the upper and lower mass limits imposed in $\S5$. It is also apparent that the effects of the Green-Schwarz coefficient are not negligible. In the examples presented here, $|\delta_{GS}|$ is substantial enough to shift the spectrum away from the dilaton dominated limit and partly out of the regions allowed at the string scale, resulting in a considerably narrower acceptable range for $\theta$. The lightest neutralino is often heavier than usually assumed ($\sim$100 GeV as opposed to $\sim$50 GeV \cite{brig}), and can be of similar mass to (or heavier than) the light Higgs and the right selectron.
In addition $|\delta_{GS}|$ induces a large variation in the masses as $\theta$ is varied, particularly for the heavy group. In principle this should make the goldstino angle, $\theta$, easier to determine if sparticles are eventually discovered. \section*{Acknowledgments} We are grateful to George Kraniotis for helpful discussions. This research was supported in part by PPARC and P.S. was supported by a Royal Holloway studentship.
\section{Introduction} Many extensions to the standard model of particle physics contain an extra U(1) gauge factor corresponding to a hidden sector of particles~\cite{abel, goodsell}. The only interaction between this hidden sector and normal matter is a weak kinetic mixing between the photon $\gamma$ and hidden sector photon $\gamma^\prime$~\cite{okun,holdom}. This allows for oscillations to occur between the two particles. The hidden sector photon is likely very light and belongs to a class of hypothetical particles known as WISPs (Weakly Interacting Sub-eV Particles)~\cite{jaeckel_lowenergy}. It is proposed that indirect observations of the hidden sector photon can be made by photon regeneration experiments in a similar way to ALP (Axion Like Particle) experiments~\cite{axionexp1,axionexp2,axionexp3,axionexp4}. Typically these have been laser `light shining through a wall' (LSW) experiments~\cite{BFRT1,BFRT2,BMV,ahlers_paraphoton,GammeV,ahlers_laser,afanasev_2008,fouche,afanasev_2009,ALPS,ALPS2010}. Recently however a new method for detecting hidden sector photons using microwave cavities was proposed~\cite{jaeckel_cavity}. In microwave cavity LSW we use two resonance matched cavities separated by a wall that is impenetrable by normal photons. Electromagnetic radiation is injected into one of the cavities (the emitter cavity) and a small portion oscillates into hidden sector photons. These hidden sector photons do not interact with normal matter and are able to pass straight through the normally impervious wall. If some of these particles then oscillate back into photons inside the other (detector) cavity a signal detection could be made. 
The probability of this transmission taking place is~\cite{jaeckel_cavity} \begin{eqnarray} \mathrm{P}_\mathrm{trans} =& \frac{P_\mathrm{det}}{P_\mathrm{emit}} = \chi^4 \: Q_\mathrm{emit} \: Q_\mathrm{det} \: \left(\frac{m_{\gamma \prime} \: c^2}{\hbar \: \omega_\gamma}\right)^8 \: |\mathcal{G}|^2 \label{eq:Ptrans} \\ =& \chi^4 \: Q_\mathrm{emit} \: Q_\mathrm{det} \: \left(1 - \frac{k_{\gamma \prime}^{\: 2}}{k_\gamma^{\: 2}} \right)^4 \: |\mathcal{G}|^2 , \nonumber \end{eqnarray} where $P_\mathrm{det}$ and $P_\mathrm{emit}$ are the powers in and out of the respective cavities, $\chi$ is the kinetic mixing parameter, $Q$ is the cavity quality factor, $m_{\gamma \prime}$ is the hidden sector photon mass, $\omega_\gamma$ is the angular (and cavity resonance) frequency of the photons, $k_\gamma$ is the photon wavenumber, $k_{\gamma \prime}$ is the hidden sector photon wavenumber and $\mathcal{G}$ is a dimensionless function that encodes the geometric setup of the two cavities. The function $\mathcal{G}$ is a six-dimensional integral given by~\cite{jaeckel_cavity} \begin{multline} \mathcal{G} \left( \frac{k_{\gamma \prime}}{k_\gamma} \right) = k_\gamma^{\:2} \int\limits_{V_\mathrm{emit}} \int\limits_{V_\mathrm{det}} \frac{\exp(i \; k_{\gamma \prime} \: |\textbf{\textsl{x}}-\textbf{\textsl{y}}|)}{4 \pi |\textbf{\textsl{x}}-\textbf{\textsl{y}}|} \\ \textbf{\textsl{A}}_\mathrm{emit} (\textbf{\textsl{y}}) \: \cdot \: \textbf{\textsl{A}}_\mathrm{det} (\textbf{\textsl{x}}) \; d^3 \textbf{\textsl{x}} \; d^3 \textbf{\textsl{y}} \label{eq:G} , \end{multline} where $V$ represents the respective cavity volumes and $\textbf{\textsl{A}}$ is the normalized spatial part of the resonance electromagnetic gauge field inside the cavities satisfying \begin{equation} \int\limits_V \lvert \textbf{\textsl{A}} (\textbf{\textsl{x}}) \rvert^2 \: d^3 \textbf{\textsl{x}} = 1 \nonumber . 
\end{equation} The cavity Q-factors also contain geometric dependencies from the cavity geometric factor $G$, \begin{align} Q=\frac{G}{R_S}, & & G = \omega_0 \frac{\int\limits_V \mu \lvert \textbf{\textsl{H}} (\textbf{\textsl{x}})\rvert^2 \; d^3 \textbf{\textsl{x}}}{\int\limits_S \lvert \textbf{\textsl{H}}_\mathrm{T} (\textbf{\textsl{y}}) \rvert^2 \; d^2 \textbf{\textsl{y}}}\nonumber , \end{align} where $R_S$ is the surface resistance, $\omega_0$ is the angular resonance frequency, $\mu$ is the permeability inside the cavity, $S$ is the surface of the cavity and $\textbf{\textsl{H}}_\mathrm{T}$ is the component of $\textbf{\textsl{H}}$ tangential to the cavity surface. To further study $\mathrm{P}_\mathrm{trans}$ it is convenient to define a new function that encompasses all of the geometric, electromagnetic and $k_{\gamma \prime}/k_\gamma$ dependencies. Hence we define the `full geometric function' \begin{equation} \mathcal{F}^2 \left( \frac{k_{\gamma \prime}}{k_\gamma} \right) = G_\mathrm{emit} \: G_\mathrm{det} \: \left(1 - \frac{k_{\gamma \prime}^{\: 2}}{k_\gamma^{\: 2}} \right)^4 \: |\mathcal{G}|^2 \label{eq:F2}, \end{equation} measured in $\Omega^2$, such that \begin{equation} \mathrm{P}_\mathrm{trans} = \chi^4 \: R_{S_\mathrm{emit}}^{\;-1} \: R_{S_\mathrm{det}}^{\;-1} \: \mathcal{F}^2\nonumber . \end{equation} In this paper we investigate the behavior of $\mathcal{F}^2$ for axially stacked cylinders as well as provide results of our first experimental test of microwave cavity LSW. \section{Full geometric function} For the best chance of detection $\mathcal{F}^2$ needs to be maximized. Currently, however, very little is known about the behavior of this function or its constituents. In this section we study the full geometric function through its dependence on the electromagnetic mode, aspect ratio and cavity separation. Here we only consider symmetrical cavities stacked axially as depicted in Fig.~\ref{fig:cylinders}.
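The transmission probability in the $Q$-factor form of Eq.~\eqref{eq:Ptrans} is trivial to code once $|\mathcal{G}|^2$ is supplied; a minimal sketch (the function name and inputs are illustrative):

```python
def p_trans(chi, Q_emit, Q_det, k_ratio, G2):
    """P_trans = chi^4 Q_emit Q_det (1 - k'^2/k^2)^4 |G|^2,
    with k_ratio = k_gamma'/k_gamma and G2 = |G|^2 the squared
    geometry integral (to be evaluated numerically)."""
    return chi**4 * Q_emit * Q_det * (1.0 - k_ratio**2) ** 4 * G2
```

Note the $(1-k_{\gamma\prime}^{\,2}/k_\gamma^{\,2})^4$ factor kills the signal in the massless limit $k_{\gamma\prime}/k_\gamma=1$.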
Both cylinders are assumed to be of the exact same dimensions. The aspect ratio ($AR$) is defined to be the diameter ($2a$) divided by the length ($L$) of each cylinder ($AR=2a/L$) and the separation distance ($d$) is defined to be the axial distance between the inside boundaries of the two cylinders. A separation distance of zero refers to the ideal and unrealizable case where the two cavities have infinitesimally thin walls and sit directly on top of each other. \begin{figure}[t] \includegraphics[width=0.3\textwidth]{fig1.eps} \caption{Diagram of cavity setup with radius $a$, length $L$ and separation distance $d$.\label{fig:cylinders}} \end{figure} Before we can calculate $\mathcal{G}$ and hence $\mathcal{F}^2$ we first need to know the electromagnetic gauge field $\textbf{\textsl{A}}$ inside the cavities. Solving Maxwell's equations inside a cylinder of radius $a$ and length (or height) $L$ we find two classes of resonance modes. The Transverse Magnetic modes with azimuthal mode number $m$, radial mode number $n$ and axial mode number $p$ are ($\mathrm{TM}_{\,m\,n\,p}$): \begin{multline} \textbf{\textsl{E}}_\textbf{TM} = \begin{pmatrix} E_r \\ E_{\phi} \\ E_z \end{pmatrix} = E_0 \: e^{i \: \omega \: t} \\ \begin{pmatrix} - \frac{p \pi}{L} \frac{a}{\varsigma_{m,n}} J_m^\prime \left( \frac{\varsigma_{m,n}}{a} r \right) \cos(m \: \phi) \sin(\frac{p \pi}{L} z)\\ \frac{m}{r} \frac{p \pi}{L} \frac{a^2}{(\varsigma_{m,n})^2} J_m \left( \frac{\varsigma_{m,n}}{a} r \right) \sin(m \: \phi) \sin(\frac{p \pi}{L} z)\\ J_m \left( \frac{\varsigma_{m,n}}{a}r \right) \cos (m \: \phi) \cos(\frac{p \pi}{L} z) \end{pmatrix} \nonumber, \end{multline} \begin{multline} \textbf{\textsl{H}}_\textbf{TM} = \begin{pmatrix} H_r \\ H_{\phi} \\ H_z \end{pmatrix} = -i \: \omega \: \varepsilon \: a \: E_0 \: e^{i \: \omega \: t} \\ \begin{pmatrix} \frac{m}{r} \frac{a}{(\varsigma_{m,n})^2} J_m \left( \frac{\varsigma_{m,n}}{a} r \right) \sin(m \: \phi) \cos(\frac{p \pi}{L} z)\\ 
\frac{1}{\varsigma_{m,n}} J_m^\prime \left( \frac{\varsigma_{m,n}}{a} r \right) \cos(m \: \phi) \cos(\frac{p \pi}{L} z)\\ 0 \end{pmatrix} \nonumber. \end{multline} The Transverse Electric (TE) modes with azimuthal mode number $m$, radial mode number $n$ and axial mode number $p$ are ($\mathrm{TE}_{\,m\,n\,p}$): \begin{multline} \textbf{\textsl{E}}_\textbf{TE} = \begin{pmatrix} E_r \\ E_{\phi} \\ E_z \end{pmatrix} = i \: \omega \: \mu \: a \: H_0 \: e^{i \: \omega \: t} \\ \begin{pmatrix} \frac{m}{r} \frac{a}{(\varsigma^\prime_{m,n})^2} J_m \left( \frac{\varsigma^\prime_{m,n}}{a} r \right) \sin(m \: \phi) \sin(\frac{p \pi}{L} z)\\ -\frac{1}{\varsigma^\prime_{m,n}} J_m^\prime \left( \frac{\varsigma^\prime_{m,n}}{a} r \right) \cos(m \: \phi) \sin(\frac{p \pi}{L} z)\\ 0 \end{pmatrix} \nonumber, \end{multline} \begin{multline} \textbf{\textsl{H}}_\textbf{TE} = \begin{pmatrix} H_r \\ H_{\phi} \\ H_z \end{pmatrix} = H_0 \: e^{i \: \omega \: t} \\ \begin{pmatrix} \frac{p \pi}{L} \frac{a}{\varsigma^\prime_{m,n}} J_m^\prime \left( \frac{\varsigma^\prime_{m,n}}{a} r \right) \cos(m \: \phi) \cos(\frac{p \pi}{L} z)\\ - \frac{m}{r} \frac{p \pi}{L} \frac{a^2}{(\varsigma^\prime_{m,n})^2} J_m \left( \frac{\varsigma^\prime_{m,n}}{a} r \right) \sin(m \: \phi) \cos(\frac{p \pi}{L} z)\\ J_m \left( \frac{\varsigma^{\prime}_{m,n}}{a}r \right) \cos (m \: \phi) \sin(\frac{p \pi}{L} z) \end{pmatrix} \nonumber. \end{multline} Here $\varsigma_{m,n}$ (unitless) is the $n$'th root of the Bessel J function of order $m$ and $\varsigma^\prime_{m,n}$ (unitless) is the $n$'th root of the derivative of the Bessel J function of order $m$. 
The parameter $\varepsilon$ is the permittivity, $\mu$ is the permeability and \begin{eqnarray} \omega_\mathrm{TM} &=& \sqrt{ \Bigl( \frac{\varsigma_{m,n}}{a} \Bigr)^2 + \Bigl( \frac{p \pi}{L} \Bigr)^2} \; c \: \nonumber ,\\ \omega_\mathrm{TE} &=& \sqrt{ \Bigl( \frac{\varsigma^\prime_{m,n}}{a} \Bigr)^2 + \Bigl( \frac{p \pi}{L} \Bigr)^2} \; c \: \nonumber , \end{eqnarray} are the resonance angular frequencies of the cavity. $E_0$ is a constant in units of $\mathrm{V}/\mathrm{m}$ and $H_0$ is a constant in units of $\mathrm{A}/\mathrm{m}$. Finally we find the gauge field inside the cavity satisfying both the Lorenz and Coulomb conditions to be \begin{equation} \textbf{\textsl{A}} = \frac{i}{\omega} \textbf{\textsl{E}} \nonumber . \end{equation} Thus the normalized spatial part of the gauge potential $\textbf{\textsl{A}}$ appearing in Eq.~\eqref{eq:G} can be taken as the normalized spatial part of the electric field $\textbf{\textsl{E}}$ with units $\mathrm{m}^{-3/2}$. We now have an explicit definition of $\mathcal{G}$ for any particular resonance mode we operate the pair of cavities in ($k_\gamma = \omega_\gamma / c$). Unfortunately the integral of Eq.~\eqref{eq:G} cannot be solved analytically. To understand its behavior, large numbers of numerical calculations had to be carried out. To do this, a \textsl{Mathematica}~\cite{mathematica} program was created utilizing its numerical integration features. To improve results the integrals were distributed over the component terms of $\textbf{\textsl{A}}_\mathrm{emit} \cdot \textbf{\textsl{A}}_\mathrm{det}$ and calculated separately. Furthermore the integration domains were split at the zeros of the field equations. Once evaluated, these results were then used to obtain $\mathcal{F}^2$ as in Eq.~\eqref{eq:F2}. In Fig.~\ref{fig:TM01p} some typical plots of the full geometric function are given.
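The resonance frequencies above are straightforward to evaluate numerically. In the sketch below the first Bessel zero is hardcoded from standard tables (in general \texttt{scipy.special.jn\_zeros} supplies them to full precision), and the cavity dimensions are roughly those of the experiment described later:

```python
import math

# First zero of J0' (equal to the first zero of J1), from standard tables.
BESSEL_J1_ZERO_1 = 3.8317060

C = 299792458.0  # speed of light, m/s

def f_TE(zero_prime, p, a, L):
    """Resonance frequency (Hz) of a TE_{mnp} cylindrical-cavity mode,
    with a = radius, L = length; the TM formula is identical with a
    zero of J_m in place of the zero of J_m'."""
    return C / (2.0 * math.pi) * math.hypot(zero_prime / a, p * math.pi / L)

# TE011 mode of an a = 2 cm, L = 4 cm cavity (roughly the later experiment).
f011 = f_TE(BESSEL_J1_ZERO_1, 1, 0.02, 0.04)
```

The result lands in the X-band, consistent with the $\sim9.6\,\mathrm{GHz}$ resonance quoted for the experimental cavities; note also the $1/\alpha$ scaling of frequency under proportional scaling of $a$ and $L$.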
On the scale of wavenumber ratios, $k_{\gamma \prime}/k_\gamma=0$ represents a massive hidden sector photon whose rest mass uses all of the energy of the initial photon and $k_{\gamma \prime}/k_\gamma=1$ represents a massless hidden sector photon. The $(1-k_{\gamma \prime}^{\,2} / k_\gamma^{\,2})^4$ factor in Eq.~\eqref{eq:F2} means $\mathcal{F}^2$ will always be greatly diminished for higher wavenumber ratios. \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{fig2.eps} \caption{ Curves for the $\mathrm{TM}_{\,0\,1\,p}$ mode family labeled ($p$) where $p$ corresponds to the axial mode number. Each was calculated for a cavity aspect ratio of one and zero separation distance between the cavities. \label{fig:TM01p}} \end{figure} \subsection{Mode dependency} \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{fig3.eps} \caption{Lines of $\mathcal{F}^2$ maximums for various families of modes against relevant mode numbers. All for a cavity aspect ratio of one and zero separation distance between the cavities. \label{fig:pnm}} \end{figure} The full geometric function can be very different between modes. In the investigation of mode dependencies we fix the cavity aspect ratio to one, i.e. diameter equal to length, and separation distance to zero. Figure~\ref{fig:TM01p} shows $\mathcal{F}^2$ for the family of $\mathrm{TM}_{\,0\,1\,p}$ modes and allows us to observe how $\mathcal{F}^2$ responds to varying $p$. Taking the peak of each curve we can obtain a characteristic line for the maximum $\mathcal{F}^2$ against axial mode number. The max $\mathcal{F}^2$ line for the $\mathrm{TM}_{\,0\,1\,p}$ family of modes, as well as the $\mathrm{TM}_{\,0\,2\,p}$, $\mathrm{TE}_{\,0\,1\,p}$ and $\mathrm{TE}_{\,0\,2\,p}$ families, is shown in Fig.~\ref{fig:pnm}~(a). At higher axial mode numbers $\mathcal{F}^2$ generally tends to increase with mode number. For the TM modes however the full geometric function initially decreases to some minimum value.
The same investigation can be done for the radial mode number. The $\mathrm{TM}_{\,0\,n\,0}$, $\mathrm{TE}_{\,0\,n\,1}$ and $\mathrm{TE}_{\,0\,n\,2}$ families of modes are compared in Fig.~\ref{fig:pnm}~(b). In all cases the full geometric function increases with higher radial mode number. Lastly, dependence on the azimuthal mode number is investigated. In Fig.~\ref{fig:pnm}~(c) max $\mathcal{F}^2$ lines are shown for the $\mathrm{TE}_{\,m\,1\,1}$, $\mathrm{TM}_{\,m\,1\,0}$ and $\mathrm{TM}_{\,m\,1\,1}$ families of modes. The full geometric function tends to decrease with higher azimuthal mode number, most notably for the $\mathrm{TE}_{\,m\,1\,1}$ mode. For the TM modes however the change in $\mathcal{F}^2$ is much more subtle. It is uncertain whether TM whispering gallery modes with high order azimuthal mode number (often used in high-Q oscillators~\cite{locke_rsi}) in an axial stack configuration will be sensitive to hidden sector photons. The full geometric function for each of the different modes generally occupies different regions of $k_{\gamma \prime}/k_\gamma$. This allows a range of hidden sector photon masses to be probed by different modes. We also have the option of simultaneously exciting multiple modes in the emitter cavity and covering a wider range of hidden sector photon masses at once (although at different sensitivities). \subsection{Aspect ratio dependency} When the two cavities are perfectly adjacent with zero separation distance, the total size of the cavities becomes unimportant and only the aspect ratio between diameter and length affects the full geometric function. For each resonance mode the effect of varying the aspect ratio is different. Figure~\ref{fig:ARTM012} gives an example of how the full geometric function changes with different aspect ratios, in this case for the $\mathrm{TM}_{\,0\,1\,2}$ mode. In general, changing the aspect ratio not only changes the maximal value for $\mathcal{F}^2$ but also the shape and position of the peak.
Nevertheless we can plot trends of the maximum $\mathcal{F}^2$ for a set of various modes as in Fig.~\ref{fig:AR}. The full geometric function seems to increase with larger aspect ratios for all modes except those with axial mode number $p=0$, which have limited $L$ dependence. Practically, extreme aspect ratio cavities are difficult to couple to and may not be usable. It is unclear if there is an optimal aspect ratio for each or some modes. \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{fig4.eps} \caption{Curves of $\mathcal{F}^2$ for the $\mathrm{TM}_{\,0\,1\,2}$ mode with aspect ratios (diameter divided by length) from (1)$1/3$ to (17)$5$, and zero separation distance between the cavities. \label{fig:ARTM012}} \end{figure} \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{fig5.eps} \caption{Trends of the full geometric function maximum against aspect ratio for various modes. In all cases the separation distance between the cavities is zero. \label{fig:AR}} \end{figure} \clearpage \subsection{Separation dependency} Generally, as expected, the full geometric function decreases with greater separation distances between the two cavities. The amount by which it decreases however is different for each mode and also depends on the aspect ratio and total size of the cavities. Using the $\mathrm{TE}_{\,0\,1\,2}$ mode as an example, Fig.~\ref{fig:dTE012} shows the typical dependence of $\mathcal{F}^2$ on separation distance. For any particular mode, both $\mathcal{G}$ and the full geometric function remain constant under proportional scaling of the cavity radius, length and separation distance, \begin{equation} \mathcal{F}^2_\mathrm{mode}(a,L,d)=\mathcal{F}^2_\mathrm{mode}(\alpha\:a,\alpha\:L,\alpha\:d), \nonumber \end{equation} where $a$, $L$ and $d$ are the cavity radius, length and separation distance respectively, and $\alpha$ is a real number greater than zero.
When $d=0$ and $a$ and $L$ are scaled together (keeping the same aspect ratio) then $\mathcal{F}^2$ remains constant as previously stated. In Fig.~\ref{fig:dTE012L} we plot $\mathcal{F}^2$ maximums against separation distance over cavity length ($d/L$) and compare different aspect ratios. We find that when the aspect ratio is lower (i.e. length greater than diameter) the decay in $\mathcal{F}^2$ is faster, whilst when the aspect ratio is higher (i.e. diameter greater than length) the decay in $\mathcal{F}^2$ is slower. The results are similar for other modes except when the axial mode number $p=0$, in which case the trend is reversed. Thus if the cavities are to be separated at large distances then a larger cavity with a greater aspect ratio is favourable. \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{fig6.eps} \caption{Plots of $\mathcal{F}^2$ for the $\mathrm{TE}_{\,0\,1\,2}$ mode in cavities of size $\mathrm{length} = \mathrm{diameter} = 4 \,\mathrm{cm}$ (aspect ratio 1) with a separation distance labeled $(d)\,\mathrm{cm}$ from 0 to 10. \label{fig:dTE012}} \end{figure} \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{fig7.eps} \caption{ Trend lines of maximal $\mathcal{F}^2$ against distance over length for the $\mathrm{TE}_{\,0\,1\,2}$ mode with aspect ratios as labeled. \label{fig:dTE012L}} \end{figure} \subsection{Optimal configuration} From our findings in Fig.~\ref{fig:pnm} the full geometric function is optimized with the use of a $\mathrm{TM}_{\,0\,n\,0}$ mode with high radial mode number ($n$) or possibly also a $\mathrm{TE}_{\,0\,n\,p}$ mode with high axial and radial mode number. Figure~\ref{fig:AR} suggests that a lower aspect ratio may be better with a $\mathrm{TM}_{\,0\,n\,0}$ mode and a higher aspect ratio with a $\mathrm{TE}_{\,0\,n\,p}$ mode. Practical considerations will place limitations on the mode numbers and dimensions of our cavity.
Firstly, we need to be able to couple effectively to the cavity and this may be difficult with obscure or extreme dimensions. Secondly, we have to consider the microwave components being used with the cavities. It is most convenient to operate in the X-band ($8-12\,\mathrm{GHz}$) range of frequencies as these are readily supported. The choice of mode and frequency will also depend on what range of hidden sector photon masses is to be explored. For large and flat cavities the relative dependence on separation distance is the weakest. Whilst the separation should still be kept minimal, the problems of microwave leakage make it favourable to increase the separation distance to allow for better electromagnetic shielding between the cavities. Following these guidelines it should be possible to construct an experiment exploiting a peak $\mathcal{F}^2 \sim 10^6 \; \Omega^2$ with a decent separation of $10\,\mathrm{cm}$. \section{First experiment} \subsection{Experimental setup} To demonstrate the viability of microwave cavity LSW we conducted a simple experiment using two cylindrical copper cavities at room temperature. Our cavities have an internal radius of approximately $2\;\mathrm{cm}$ and internal length of approximately $4\;\mathrm{cm}$. The $\mathrm{TE}_\mathrm{\,0\,1\,1}$ mode was used to excite the cavities. \begin{figure*}[!t] \includegraphics[width=0.65\textwidth]{fig9.eps} \caption{Diagram of the microwave circuit used in our cavity experiment. \label{fig:expcircuit}} \end{figure*} \begin{figure}[!b] \includegraphics[width=0.45\textwidth]{fig8.eps} \caption{Schematic of the experimental setup. \label{fig:expsetup}} \end{figure} A single loop probe was inserted in the middle of the side wall of each cavity and aligned and adjusted to maximize coupling to the $\mathrm{TE}_\mathrm{\,0\,1\,1}$ resonance mode. 
Operating in this mode the cavities have quality factors of $9060$ and $8370$, resonance frequencies of $9.58806\;\mathrm{GHz}$ and $9.58794\;\mathrm{GHz}$, resonance bandwidths of $1.01\;\mathrm{MHz}$ and $1.17\;\mathrm{MHz}$, and coupling coefficients of $0.97$ and $0.83$. The difference in resonance frequencies between the two cavities is $0.12\;\mathrm{MHz}$, well within their resonance bandwidth of $\sim 1\;\mathrm{MHz}$. The cavities were stacked axially on top of each other inside a vacuum chamber and temperature-controlled to maintain the resonance frequency match. They were clamped down to provide good thermal contact. Isolation between the cavities was provided only by their individual cavity walls, with no extra shielding being employed. A diagram of the cavities in the vacuum chamber is shown in Fig.~\ref{fig:expsetup}. This setup has a peak $\mathcal{F}^2 = 9825 \; \Omega^2$ at $k_{\gamma \prime}/k_\gamma=0.3$. To excite the emitter cavity a signal generator was used at the cavity's resonance frequency. To measure the resulting signal in the detector cavity the microwave circuit shown in Fig.~\ref{fig:expcircuit} was used. The output of the detector cavity passed through a low noise amplifier and was then mixed against a second signal generator set a few MHz off the cavity resonance frequency. This provided a signal at the offset frequency which was put through a low pass filter and preamplifier before being measured by an FFT spectrum analyzer. \begin{figure}[!b] \includegraphics[width=0.45\textwidth]{fig10.eps} \caption{Power spectral density showing the thermal noise of our detector cavity. \label{fig:noise}} \end{figure} \subsection{Limiting sensitivity} The best possible sensitivity of our experiment will depend on the thermal noise floor of the detector cavity.
The theoretical amount of Nyquist thermal noise is \begin{equation} N = \frac{k_B \: T}{2} \lvert \mathcal{T}(i \: \omega) \rvert^2 \nonumber, \end{equation} where \begin{equation} \mathcal{T}(i \: \omega) = \frac{2 \sqrt{\beta}}{(1+\beta)(1+2\:i\:Q\:(\omega-\omega_0)/\omega_0)} \nonumber\label{eq:transcoef} \end{equation} is the transmission coefficient, in which $\beta$ is the cavity coupling coefficient, $\omega_0$ is the angular resonance frequency and $i=\sqrt{-1}$. Thus when measuring the cavity's noise spectral density ($\mathrm{Q}=8370$, $T=295\;\mathrm{K}$, $\beta=0.83$) we expect to see a Lorentzian centered at the resonance frequency with a peak value of $-176.9\;\mathrm{dBm/Hz}$. Using the setup of Fig.~\ref{fig:expcircuit} the actual thermal noise measured, with an uncertainty of $\pm1.5\;\mathrm{dB}$, is shown in Fig.~\ref{fig:noise} and agrees with our prediction. \begin{figure*}[!t] \includegraphics[width=0.65\textwidth]{fig11.eps} \caption{Limits from this experiment against current hidden sector photon bounds from Ref.~\cite{goodsell}. \label{fig:expexclusion2}} \end{figure*} For a signal-to-noise ratio of one, Eq.~\eqref{eq:Ptrans} gives us a maximum sensitivity of \begin{equation} \chi = \left( \frac{k_B \: T}{2 \: \tau \: P_\mathrm{emit}} \right)^{\frac{1}{4}} \sqrt{\frac{R_S}{\mathcal{F}}} \label{eq:sens} \end{equation} where $\tau$ is the integration time and $R_S=\sqrt{R_{S_\mathrm{emit}}\:R_{S_\mathrm{det}}}$. For our experimental setup the peak $\mathcal{F}^2 / R_S^{\;2} = 1.2 \times 10^6$. If $1\,\mathrm{W}$ of incident power and an integration time of 1 week is used, then Eq.~\eqref{eq:sens} allows us to probe $\chi = 7.2 \times 10^{-9}$. \subsection{Experimental results} To operate the experiment, various power levels from the driving signal generator ranging between $0$ and $20\;\mathrm{dBm}$ were input to the emitter cavity. As expected, microwave leakage was a major problem in this simple setup.
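Before turning to the measured signal, note that the two headline numbers of the limiting-sensitivity discussion, the $\approx -177\,\mathrm{dBm/Hz}$ noise floor and the $\chi \approx 7\times 10^{-9}$ reach, follow from the quoted parameters alone. A minimal numeric sketch (the value of $k_B$ is an assumption of the sketch, not from the text):

```python
import math

# cavity and measurement parameters quoted in the text
kB = 1.380649e-23        # Boltzmann constant, J/K (assumed)
T = 295.0                # cavity temperature, K
beta = 0.83              # detector coupling coefficient

# peak Nyquist noise: (kB*T/2)*|T(i w0)|^2, with |T(i w0)|^2 = 4*beta/(1+beta)^2
noise_peak = (kB * T / 2) * 4 * beta / (1 + beta) ** 2   # W/Hz
noise_dbm_hz = 10 * math.log10(noise_peak / 1e-3)

# limiting sensitivity, Eq. (sens), with the quoted peak F^2/R_S^2
F2_over_RS2 = 1.2e6                  # dimensionless
tau = 7 * 24 * 3600.0                # one week of integration, s
P_emit = 1.0                         # incident power, W
chi = (kB * T / (2 * tau * P_emit)) ** 0.25 * F2_over_RS2 ** (-0.25)

print(f"noise floor {noise_dbm_hz:.1f} dBm/Hz, chi = {chi:.1e}")
```

Both results reproduce the figures quoted above to the stated precision.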
With an incident power of $0\;\mathrm{dBm}$, a reading of approximately $-66\;\mathrm{dBV}_{\mathrm{RMS}}$ was obtained from the spectrum analyzer and this scaled proportionally with higher power inputs. Taking into account the amplification and measurement system the detector cavity power output was measured to be on average $120.35 \pm 1.50 \;\mathrm{dB}$ below the power input of the emitter cavity. This still relatively large signal is most likely due to microwave leakage inside the common vacuum chamber, probably through the necessary pinhole in each cavity for vacuum pumping, unmatched SMA connections and coupling probes. We are unable to distinguish this signal from possible hidden sector photons and a limit can only be placed down to the strength of this signal. That is, hidden sector photons which would produce a signal greater than that measured are not observed. Using Eq.~\eqref{eq:Ptrans} we can place an upper limit on the kinetic mixing parameter $\chi$ from this experiment which peaks at $\chi=2.9\times10^{-5}$ when $m_{\gamma \prime}=3.788\times10^{-5}\;\mathrm{eV}$. A comparison of these results to previous limits on the hidden sector photon is given in Fig.~\ref{fig:expexclusion2}. \subsection{Future work} Our results from this prototype experiment are not an improvement on previous hidden sector photon bounds~\cite{goodsell}, but do provide promise for the future of microwave cavity LSW. Great improvements on this experiment can be made and a reduction in the $\chi$ limit by multiple orders of magnitude is possible. The two main areas for improvement are microwave leakage suppression and higher Q-factor cavities. By separating our cavities into individual vacuum chambers we can greatly reduce the amount of leakage and hence be able to place a tighter limit on the mixing parameter. This extra separation comes at the cost of reducing $\mathcal{F}^2$ but overall produces a better experiment. 
We have been able to determine geometries which maintain $\mathcal{F}^2 \sim 10^6 \; \Omega^2$ at separations of $10\;\mathrm{cm}$. Higher Q-factor (lower $R_S$) emitter and detector cavities can be used to increase the probability of transmission and hence the sensitivity to $\chi$. To reap the full benefits of higher Q-factors, however, we need to be able to closely match and maintain the resonance frequency of our cavities. If two cavities with $\mathrm{Q}=10^8$ can be matched in resonance frequency at cryogenic temperatures then a fundamental sensitivity of $\chi \sim 10^{-12}$ can be achieved. Further improvements and methods of positive signal detection from Ref.~\cite{caspers} could also be incorporated. \begin{acknowledgments} This work was supported by the Australian Research Council. \end{acknowledgments}
\section{The new superchiral superconnection $\widetilde{A}$} The de Rham complex over a 4-dimensional differentiable manifold is the space of all differential exterior-forms of all degrees from 0 to 4: $\widetilde{A} = \phi + a + b + c + e$. In Yang-Mills theory, the scalars $\phi$ are considered as zero-forms, i.e. ordinary functions, and the Yang-Mills vector $a_{\mu}$ can be identified with the components of a Lie algebra-valued Cartan connection one-form $a = a^a_{\mu}\lambda_adx^{\mu}$ where the $\lambda_a$ are the generators of the Lie algebra. The connection $a$ defines the parallel transport on the manifold and specifies how to rotate the fields in internal space under an infinitesimal displacement in the base space by replacing the Cartan exterior differential $d$ by the covariant exterior differential $D = d + a$. Since exterior-forms of even degree ($\phi,b,e)$ commute, and exterior-forms of odd degree $(a,c)$ anticommute, it is natural (see Ne'eman-Thierry-Mieg \cite {NTM82}, Quillen \cite {Quillen,MQ86}) to associate the $\mathbb{Z}_2$ grading of the exterior-forms to the $\mathbb{Z}_2$ grading of a superalgebra (appendix A) and to try to define a superconnection as a globally odd form, that is to keep only the odd exterior-forms of degree 1 and 3, $a + c =(a^a_{\mu}dx^{\mu} + c^a_{\mu\nu\rho}dx^{\mu}dx^{\nu}dx^{\rho}/6)\lambda_a$ which are valued in the even Lie subalgebra, together with the even forms of degree 0, 2 and 4 ($\phi^i + b^i_{\mu\nu}dx^{\mu}dx^{\nu}/2 +e^i_{\mu\nu\rho\sigma}dx^{\mu}dx^{\nu}dx^{\rho}dx^{\sigma}/24)\lambda_i$ which are valued in the odd module of the superalgebra. In \cite{TM20a}, we have shown that this definition is incomplete because the odd and even forms commute $\phi^ia^a = a^a\phi^i$, whereas we need (A.3) to generate the antisymmetric commutator of the even and odd matrices. 
The paradox is resolved by invoking the superalgebra charge chirality matrix $\chi$ (see the details in appendix A), which defines the supertrace of the superalgebra, commutes with the even matrices and anticommutes with the odd matrices: \begin{equation} \label{eq3} STr (M) = Tr (\chi\;M)\;,\;\;[\chi,\lambda_a] = 0\;,\;\;\{\chi,\lambda_i\} = 0\;. \end{equation} Our final definition of the superconnection is \begin{equation} \begin{array}{c} \widetilde{A} = (\phi + b + e)^i\lambda_i + \chi \;(a + c)^a\lambda_a\;, \\ \widetilde{d} = \chi\,d\;,\;\;\widetilde{D} = \widetilde{d} + \widetilde{A}\;,\;\;\widetilde{F} = \widetilde{d}\widetilde{A} + \widetilde{A}\AT\;. \end{array}\end{equation} The presence of the superalgebra-grading-matrix $\chi$ ensures that the signs arising in the construction of the curvature polyform $\widetilde{F}$, and in the action of $\widetilde{D}$ on all fields, are always consistent with the brackets and structure relations of the superalgebra \cite{TM20a}. As a result, the curvature $\widetilde{F}$ defined as the square of the covariant differential $\widetilde{F} = \widetilde{D}\DT$ is valued in the adjoint representation of the superalgebra, defines a linear map, and satisfies the Bianchi identity $\widetilde{D}\widetilde{F} = 0$, which in turn implies that the covariant differential is associative $(\widetilde{D}\DT)\widetilde{D} = \widetilde{D}(\widetilde{D}\DT)$. This geometric construction is satisfactory, but it does not yet explain the structure of the electroweak interactions. The new concept presented here is to consider, in Minkowski 4-dimensional space-time with signature $(-+++)$, a self-dual superconnection $\widetilde{A} = ^*\widetilde{A}$, where $^*$ denotes the Hodge duality which maps $p$-forms onto $(4-p)$-forms (appendix C). 
In Yang-Mills theory, the connection $a$ is a 1-form, its dual $^*a$ is a 3-form, so a Yang-Mills connection cannot be self-dual and we are only familiar with the self-dual topological theories satisfying $F = ^*F$. But because a superconnection is composed of exterior-forms of all degrees, its 1-form component $a$ can be the dual $a = ^*c$ of its 3-form component $c$ and the concept of a self-dual superconnection makes sense. This constraint has a remarkable consequence when we consider the action of the superconnection on chiral spinors. To construct this action, we saturate the Lorentz indices of the component $p$-forms with Dirac $\gamma$ matrices, effectively defining a map in spinor space using the Dirac-Feynman slash operator. The classic Dirac mapping $a = a_{\mu}dx^{\mu}\;\Rightarrow\slashed{a} = a_{\mu} ({\sigma}^{\mu} + {\overline{\sigma}}^{\mu})$ is generalized to antisymmetric tensors of any rank, for example $b = {\frac{1}{2}} b_{\mu\nu}dx^{\mu}dx^{\nu}\;\Rightarrow\slashed{b} = {\frac{1}{2}} b_{\mu\nu}({\overline{\sigma}}^{\mu}{\sigma}^{\nu}+{\sigma}^{\mu}{\overline{\sigma}}^{\nu})$. As all our spinors are chiral, we use the $\gamma_5$ diagonal notation $\gamma_{\mu} (1+\gamma_5)/2 + \gamma_{\mu} (1 - \gamma_5)/2 \rightarrow {\sigma}_{\mu} + {\overline{\sigma}}_{\mu}$ as explained in appendix C. However, the anti-symmetric product of $p$ Pauli matrices (appendix B) can be rewritten as a product of $4 - p$ Pauli matrices contracted with the antisymmetric Levi-Civita $\epsilon$ symbol (B.7). Therefore the Dirac operator associated to a $p$-form $\omega$ can be rewritten as $\pm$ the Dirac operator associated to its Hodge dual $^*\omega$, where the sign depends on the helicity of the two-component Fermion on which we act (C.8).
For example, if a 3-form $c$ acts on left Fermions, this can as well be expressed in terms of the dual 1-form $^*c$ (C.6) : \begin{equation} \slashed{c} \; \frac {1 - \gamma_5}{2} = \frac{1}{6} c_{\mu\nu\rho}{\overline{\sigma}}^{\mu}{\sigma}^{\nu}{\overline{\sigma}}^{\rho} = \frac{i}{6} c_{\mu\nu\rho}\epsilon^{\mu\nu\rho\sigma} {\overline{\sigma}}_{\sigma} = (^*c)_{\mu}{\overline{\sigma}}^{\mu} \end{equation} Applying this transformation to the 2, 3 and 4 forms $(b,c,e)$, the Dirac operator associated to the superconnection $\widetilde{A}$ acting on the left Fermions can be rewritten as \begin{equation} \widetilde{\slashed{A}} \; \frac {1 - \gamma_5}{2} = (\phi + ^*e) \frac {1 - \gamma_5}{2}+ (a + ^*c)_{\mu}\;{\overline{\sigma}}^{\mu} + \frac{1}{2}\;(b + ^*b)_{\mu\nu}\;{\sigma}^{\mu}{\overline{\sigma}}^{\nu} \;, \end{equation} whereas the Dirac operator associated to the superconnection acting on the right Fermions can be rewritten as \begin{equation} \widetilde{\slashed{A}} \; \frac {1 + \gamma_5}{2} = (\phi - ^*e) \frac {1 + \gamma_5}{2}+ (a - ^*c)_{\mu}\;{\sigma}^{\mu} + \frac{1}{2}\;(b - ^*b)_{\mu\nu}\;{\overline{\sigma}}^{\mu}{\sigma}^{\nu} \;. \end{equation} Each parenthesized term pairs a $p$-form to the dual of the matching $(4-p)$-form. As a result, see the details in appendix C, a self-dual superconnection annihilates the right Fermions and \textit{mutatis mutandis} an anti-self-dual superconnection annihilates the left Fermions \begin{equation} \widetilde{A} = ^*\widetilde{A} \Rightarrow \widetilde{\slashed{A}}\;\psi_R = 0\;,\;\;\, \widetilde{A} = -^*\widetilde{A} \Rightarrow \widetilde{\slashed{A}}\;\psi_L = 0\;. \end{equation} To describe the electroweak interactions, we need to act both on left and on right Fermions, but with different kinds of forces. 
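Explicitly, the self-duality constraint equates the components of the polyform pairwise, $\phi = ^*e$, $a = ^*c$, $b = ^*b$, so each parenthesis in the right-handed Dirac operator (2.5) vanishes term by term:
\[
\widetilde{A} = ^*\widetilde{A} \;\Rightarrow\; \widetilde{\slashed{A}} \; \frac {1 + \gamma_5}{2} = (\phi - ^*e) \frac {1 + \gamma_5}{2} + (a - ^*c)_{\mu}\;{\sigma}^{\mu} + \frac{1}{2}\;(b - ^*b)_{\mu\nu}\;{\overline{\sigma}}^{\mu}{\sigma}^{\nu} = 0\;,
\]
and in the anti-self-dual case the sums appearing in the left-handed operator (2.4) vanish in the same way.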
In a superalgebra framework, the charge chirality operator $\chi$ (2.1) that we have already introduced in the definition (2.2) of the superconnection provides this distinction and we postulate that our superconnection should be superchiral \begin{equation} \widetilde{A} = ^*\widetilde{A}\;\chi\;. \end{equation} This beautiful equation (1.1) correlates the orientation of space, which is hidden in the definition of the Hodge duality, denoted by the $^*$, to the charge chirality $\chi$ of the superalgebra, defined in the internal charge space, and in consequence constrains the chirality of the charged Fermions. We illustrate the outcome of these constraints on the specific case of $SU(m/n)$ viewed as a chiral superalgebra. In the $SU(m)$ sector, the potential (in the $SU(m/n)$ fundamental representation), is accompanied by the sign $\chi = +1$, so $a = ^*c$ (C.7). Hence in the Dirac operator only the term $(a + ^*c)_{\mu}\;{\overline{\sigma}}^{\mu}$ survives (C.6-8) and it acts only on the left Fermion (B.3) and (C.9) and (2.6). Reciprocally, in the $SU(n)$ sector ($\chi=-1$) , $\widetilde{A}$ is anti-self-dual and the Dirac operator $(a - ^*c)_{\mu}\;{\sigma}^{\mu}$ annihilates the left Fermions and only acts on the right Fermions. The $U(1)$ operator of $SU(m/n)$ is special in that the corresponding matrix in the fundamental representation acts at the same time on $\chi = 1$ and $\chi = -1$ states and satisfies $a = ^*c \chi$ accordingly. In consequence we get an Abelian vector multiplying the supertraceless $U(1)$ matrix acting both on the left and right Fermions via $a_{\mu} ((1 + \chi){\overline{\sigma}}^{\mu} + (1 - \chi){\sigma}^{\mu})$. Returning to the case of $SU(2/1)$ we get $diagonal({\overline{\sigma}},{\overline{\sigma}};2\,{\sigma})$ exactly as postulated \textit{ex nihilo} in 1979 by Ne'eman \cite{N1} and Fairlie \cite{F1}. 
\begin{equation} \begin{array}{c} \slashed{A} = \frac{1}{4}\;A^a_{\mu}\lambda_a\;({\overline{\sigma}}^{\mu}(1+\chi)(1-\gamma_5) + {\sigma}^{\mu} (1-\chi)(1+\gamma_5)) \end{array}\end{equation} For the scalar fields, we have $\overline{\Phi} = \phi + ^*e$ and $\Phi = \phi - ^*e$ which act as \begin{equation} \begin{array}{c} \overline{\Phi} = \frac{1}{4}\;\overline{\Phi}^i\lambda_i\;(1+\chi)(1-\gamma_5) \;,\\ \Phi = \frac{1}{4}\;\Phi^i\lambda_i\;(1-\chi)(1+\gamma_5) \;, \end{array}\end{equation} exactly as we postulated \textit{ex nihilo} in \cite{TM20b}. The 2-form $b$ follows a similar pattern. Separating the self-dual and anti-self-dual parts $\overline{B} = b + ^*b$ and $B = b - ^*b$ the Dirac operator acts via the combinations \begin{equation} \begin{array}{c} \slashed{\overline{B}} = \frac{1}{8}\;\overline{B}^i_{\mu\nu}\lambda_i\;{\sigma}^{\mu}{\overline{\sigma}}^{\nu}\;(1+\chi)(1-\gamma_5) \;,\\ \slashed{B} = \frac{1}{8}\;B^i_{\mu\nu}\lambda_i\;{\overline{\sigma}}^{\mu}{\sigma}^{\nu}\;(1-\chi)(1+\gamma_5) \;. \end{array}\end{equation} $\Phi$ and $B$ absorb right Fermions and emit left Fermions, and their antiparticles $\overline{\Phi}$ and $\overline{B}$ absorb left Fermions and emit right Fermions, as illustrated below in the Feynman diagrams presented in sections 3 and 4. Our point is that the superchiral constraint allows us to derive from first principles the same interactions that had to be imposed in the previous $SU(2/1)$ literature to force the gauge superalgebra to look like the standard model. The price we pay is the appearance of a new scalar sector represented by the $\overline{B} B$ fields. 
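These chirality assignments are easy to verify numerically. The following is a minimal sketch, assuming the $su(2/1)$ fundamental representation with $\chi = \mathrm{diag}(1,1,-1)$ and Gell-Mann-like generators (the matrix names are illustrative, not from the text):

```python
import numpy as np

# assumed su(2/1) fundamental representation: chi = diag(1,1,-1) grades
# the 2 (doublet) + 1 (singlet) charge space
chi = np.diag([1.0, 1.0, -1.0])
lam3 = np.diag([1.0, -1.0, 0.0])                          # even (su(2)) generator
lam4 = np.zeros((3, 3)); lam4[0, 2] = lam4[2, 0] = 1.0    # odd generator

# defining relations (2.1): chi commutes with even, anticommutes with odd
assert np.allclose(chi @ lam3, lam3 @ chi)
assert np.allclose(chi @ lam4, -lam4 @ chi)

# the charge projectors appearing in (2.8)-(2.10) split the charge space
P_left = (np.eye(3) + chi) / 2     # diag(1,1,0): doublet sector
P_right = (np.eye(3) - chi) / 2    # diag(0,0,1): singlet sector
assert np.allclose(P_left + P_right, np.eye(3))
assert np.allclose(P_left @ P_right, 0)
print(np.diag(P_left), np.diag(P_right))
```

In this sketch $(1+\chi)/2$ selects the two charges that couple via ${\overline{\sigma}}$ to left Fermions and $(1-\chi)/2$ the singlet that couples via ${\sigma}$ to right Fermions, as in (2.8).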
The reader should notice that the 2-form component of the curvature polyform $\widetilde{F}$ (2.2) reads in these new notations \begin{equation} \breve{F} = (dA^a + \frac{1}{2} (f^a_{bc}A^bA^c + d^a_{ij} (\overline{\Phi}^i B^j + \Phi^i \overline{B}^j)))\;\lambda_a\;, \end{equation} generating inside the Lagrangian $\widetilde{F}^2$ a new scalar-vector-tensor interaction $F (\{\overline{B},\Phi\} + \{\overline{\Phi}, B\})$. As shown below, this term plays a crucial role in the self-consistency of the theory. Given these algebraic and geometrical definitions, let us now study how the Dirac action of the superconnection on the chiral Fermions gets promoted in the quantum field theory into the definition of the propagators and interactions of its components: the complex scalar field $\Phi$, the vector $A$, and the complex self-dual and anti-self-dual antisymmetric tensors $\overline{B}$ and $B$, all correctly satisfying the spin-statistics relation. \section {The Avdeev-Chizhov propagator is induced by the Fermion loop} In their seminal study \cite{AC94}, Avdeev and Chizhov have introduced a new type of quantum field: a self-dual and anti-self-dual antisymmetric tensor $B$ and $\overline{B}$ satisfying in Minkowski space the conditions \begin{equation} B = - ^*B \;,\;\;\;\overline{B} = ^*\overline{B}\;, \end{equation} where $^*$ denotes the Hodge dual (C.1) in 4-dimensional Minkowski space-time with signature $(-+++)$: \begin{equation} \begin{array}{c} B = \frac {1}{2} B_{\mu\nu}\;dx^{\mu}dx^{\nu} \;,\;\;^*B = -\frac{i}{2}\;\epsilon_{\mu\nu\rho\sigma} B^{\mu\nu}\;dx^{\rho}dx^{\sigma} \end{array}\end{equation} and $\epsilon$ is the fully antisymmetric symbol with $\epsilon_{0123} = 1$. These fields coincide with the antisymmetric tensor fields identified in (2.10) as part of the superchiral superconnection $\widetilde{A}$: compare (3.2) with (C.1,C.5) and the definition of the Hodge dual of the field components in (C.6) and (C.8).
Until their discovery, the existence of a Lagrangian compatible with the self-duality condition seemed unlikely and its structure appeared at first complicated. Some efforts were needed to demonstrate that the Avdeev-Chizhov tensors describe a complex scalar field with one real degree of freedom for $B$ and one for $\overline{B}$ and to delineate their possible interactions \cite{LRS95,Wet08}. With hindsight, we can reconstruct the model just from the rules of quantum field theory. The possible couplings of a 2-tensor to a chiral Fermion are strongly constrained by Lorentz invariance. The $\mu\nu$ indices must act on the Fermions via the antisymmetrized product of two Pauli matrices (see appendix B for our precise notations) and this product is by itself self-dual: \begin{equation} {\sigma}{\overline{\sigma}} = P^+\;{\sigma}{\overline{\sigma}}\;,\;\;{\overline{\sigma}}{\sigma} = P^-\;{\overline{\sigma}}{\sigma}\;,\;\; \end{equation} where $P^{\pm}$ are the self-duality projectors \begin{equation} \begin{array}{c} P^{\pm}_{\mu\nu\rho\sigma} = \frac {1}{4} (g_{\mu\rho}g_{\nu\sigma} - g_{\mu\sigma}g_{\nu\rho} \mp i\,\epsilon_{\mu\nu\rho\sigma})\;, \\ (P^+)(P^+) = P^+\;,\;\;(P^-)(P^-) = P^-\;,\;\;(P^+)(P^-) = (P^-)(P^+) = 0\;.\;\; \end{array}\end{equation} Therefore, the only antisymmetric tensors which can couple to chiral Fermions are self- or anti-self-dual.
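The projector algebra can be checked with a short numerical sketch. The conventions below are assumptions of the sketch (consistent with the text): $\epsilon_{0123}=1$, indices raised with the inverse Minkowski metric, and the duality factor $-\frac{i}{2}\epsilon$ of (3.2).

```python
import numpy as np
from itertools import permutations

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-+++)
ginv = np.linalg.inv(g)

# Levi-Civita symbol with eps_{0123} = +1 (all indices down)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[p] = sign

# identity on antisymmetric rank-2 tensors, and the two duality projectors
I = 0.5 * (np.einsum('mr,ns->mnrs', g, g) - np.einsum('ms,nr->mnrs', g, g))
Pp = 0.5 * I - 0.25j * eps   # P+ (self-dual)
Pm = 0.5 * I + 0.25j * eps   # P- (anti-self-dual)

def compose(A, B):
    """Contract the last index pair of A with the first pair of B, raising with g."""
    return np.einsum('mnab,ac,bd,cdrs->mnrs', A, ginv, ginv, B)

assert np.allclose(compose(Pp, Pp), Pp)
assert np.allclose(compose(Pm, Pm), Pm)
assert np.allclose(compose(Pp, Pm), 0)
assert np.allclose(compose(Pm, Pp), 0)
```

The asserts verify the idempotency and mutual orthogonality of $P^{\pm}$ stated in the displayed equation.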
The anti-self-dual field $B$ absorbs right states and emits left states, and the self-dual field $\overline{B}$ absorbs left states and emits right states according to the Feynman diagrams: $\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a) {\(\overline{B}^i_{\mu\nu}\)}; \vertex [left = of a, label=\(\lambda_i\)] (x); \vertex [below left=of x] (b){\(\psi_L\)}; \vertex [above left=of x] (c){\(\overline{\psi_R}\)}; \diagram* { (a) -- [gluon] (x), (x) -- [anti fermion](b), (x) -- [fermion](c), }; \vertex [right = of a] (a2) {\(B_{\rho\sigma}^i\)}; \vertex [right = of a2, label=\(\lambda_i\)] (x2); \vertex [below right=of x2] (b2){\(\psi_R\)}; \vertex [above right=of x2] (c2){\({\overline{\psi}}_L\)}; \diagram* { (a2) -- [gluon] (x2), (x2) -- [anti fermion](b2), (x2) -- [fermion](c2), }; \end{feynman} \end{tikzpicture} Assuming the standard propagator for the chiral Fermions defined by the Lagrangian \begin{equation} \begin{array}{c} \mathcal{L} = i \overline {(\psi_R)} \;{\sigma}^{\mu}\partial_{\mu}\;\psi_R + i \overline {(\psi_L)} \; {\overline{\sigma}}^{\mu}\partial_{\mu}\;\psi_L \;, \end{array}\end{equation} the knowledge of these 2 vertices is sufficient to compute the pole part of the propagator of the $\overline{B} B$ field by closing the Fermion loop: $\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a) {\(B^i_{\mu\nu}\)}; \vertex [right=of a] (b); \vertex [right=of b] (c); \vertex [right=of c] (d){\(\overline{B}^j_{\rho\sigma}\)}; \diagram* { (a) -- [gluon] (b), (b) -- [anti fermion, half left, edge label =\(\psi_R\) ](c), (c) -- [anti fermion, half left, edge label =\(\psi_L\) ] (b), (d) -- [gluon] (c), }; \end{feynman} \end{tikzpicture} Carefully computing this Feynman diagram (appendix E), we recover the tensorial structure of Avdeev-Chizhov propagator \cite{AC94} \begin{equation} \mathcal{L}_B = - \kappa_{ij}g^{\mu\nu} \;\;\partial^{\alpha} \overline{B}^i_{\alpha\mu}\;\;\partial^{\beta} 
B^j_{\beta\nu}\;\;. \end{equation} However, as noted in \cite{TM20b}, an unexpected consequence of the chiral couplings of the $\overline{B} B$ fields (2.10) is that the $\kappa_{ij}$ metric is calculated as a chiral trace: \begin{equation} \kappa_{ij} = \frac{1}{2}\; Tr ((1+\chi)\lambda_i\lambda_j) = \frac{1}{2}\;Tr (\lambda_i\lambda_j) + \frac{1}{2}\;STr (\lambda_i\lambda_j) \;\;. \end{equation} The theory hesitates between a Lie-algebra-like metric, $Tr(\lambda_i\lambda_j)$, and a Lie-Kac superalgebra supermetric, $STr(\lambda_i\lambda_j)$. The resolution of this dilemma depends on the number and types of chiral Fermions described by the model and is discussed below in section 6. \section {The Bosonic interaction terms are induced by the Fermion loop} Following our above discussion of the Avdeev-Chizhov fields, we now extend the method to determine the propagators and self-interactions of the remaining components of the superchiral superconnection. We postulate the generalized Dirac Lagrangian \begin{equation} \begin{array}{c} \mathcal{L} = i\; \overline {(\psi)} \;\widetilde{\slashed{D}}\;\psi\;, \end{array}\end{equation} where $\widetilde{D} = \chi d + \widetilde{A}$, and $\widetilde{A}$ is our new superchiral superconnection (2.7).
The renormalization of the wave function upon inclusion of a Fermion loop as above gives the well-known propagator of the scalars and the photon, as well as the Avdeev-Chizhov propagator \cite{AC94} as derived in (3.6): \begin{equation} \begin{array}{c} \mathcal{L}_{\Phi} = - \kappa_{ij}\;g^{\mu\nu} \;\;\partial_{\mu} \overline{\Phi}^i\;\;\partial_{\nu} \Phi^j\;,\;\; \\ \mathcal{L}_A = - \frac {1}{4}\;\kappa_{ab}g^{\mu\rho}g^{\nu\sigma} \;\;(\partial_{\mu}A^a_{\nu} - \partial_{\nu}A^a_{\mu}) \;\;(\partial_{\rho}A^b_{\sigma} - \partial_{\sigma}A^b_{\rho})\;, \\ \mathcal{L}_{B} = - \kappa_{ij}\;g^{\mu\nu} \;\;\partial^{\alpha} \overline{B}^i_{\alpha\mu}\;\;\partial^{\beta} B^j_{\beta\nu}\;, \end{array}\end{equation} where the computed $\kappa_{ij}$ metrics controlling the scalar and tensor propagators (3.7) are identical. The vector metric $\kappa_{ab} = g_{ab} = \frac{1}{2}\;Tr (\lambda_a\lambda_b)$ is the only term that is purely algebraic and does not hesitate; we clarify this point in section 5. The interaction terms are given by the pole part of the Fermion loops with 3 external fields.
The Feynman diagrams $\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(\Phi^i\)}; \vertex [above right=of c](c1) {\(\overline{\Phi}^j\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [charged scalar] (b), (c) -- [charged scalar] (c1), (a) -- [fermion, in=150,out=90, edge label =\(\psi_L\) ](c), (c) -- [fermion, in=30,out=-30, edge label =\(\psi_R\) ] (b), (b) -- [fermion, in=270,out=210, edge label =\(\psi_L\) ] (a), }; \end{feynman} \end{tikzpicture} \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(\Phi^i\)}; \vertex [above right=of c](c1) {\(\overline{\Phi}^j\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [charged scalar] (b), (c) -- [charged scalar] (c1), (a) -- [anti fermion, in=150,out=90, edge label =\(\psi_R\) ](c), (c) -- [anti fermion, in=30,out=-30, edge label =\(\psi_L\) ] (b), (b) -- [anti fermion, in=270,out=210, edge label =\(\psi_R\) ] (a), }; \end{feynman} \end{tikzpicture} $\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(B^i\)}; \vertex [above right=of c](c1) {\(\overline{B}^j\)}; \diagram* { (a1) -- [photon] (a), (b) -- [gluon] (b1), (c1) -- [gluon] (c), (a) -- [fermion, in=150,out=90, edge label =\(\psi_L\) ](c), (c) -- [fermion, in=30,out=-30, edge label =\(\psi_R\) ] (b), (b) -- [fermion, in=270,out=210, edge label =\(\psi_L\) ] (a), }; \end{feynman} \end{tikzpicture} \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(B^i\)}; \vertex [above 
right=of c](c1) {\(\overline{B}^j\)}; \diagram* { (a1) -- [photon] (a), (b) -- [gluon] (b1), (c1) -- [gluon] (c), (a) -- [anti fermion, in=150,out=90, edge label =\(\psi_R\) ](c), (c) -- [anti fermion, in=30,out=-30, edge label =\(\psi_L\) ] (b), (b) -- [anti fermion, in=270,out=210, edge label =\(\psi_R\) ] (a), }; \end{feynman} \end{tikzpicture} induce the expected covariant derivative minimal coupling \begin{equation} \mathcal{L} = - D_{\mu}\overline{\Phi}\;D_{\mu}\Phi - D^{\alpha}\overline{B}_{\alpha\mu}\;D^{\beta}B_{\beta\mu} \end{equation} with a caveat \cite{TM20b}: since the orientation of the loop is correlated with the chirality of the looping Fermions, the interaction term hidden in the definition of the covariant derivative \begin{equation} \begin{array}{c} D_{\mu}\Phi_i = \partial_{\mu}\Phi_i + t_{aij} A^a_{\mu}\Phi^j\;, \end{array}\end{equation} \begin{equation} \begin{array}{c} D^{\alpha}B_{i\alpha\mu} = \partial^{\alpha}B_{i\alpha\mu} + t_{aij} A^{a\alpha}B^j_{\alpha\mu}\;, \end{array}\end{equation} is given by the chiral trace \begin{equation} t_{aij} = Tr ((1+\chi) \;\lambda_a\lambda_i\lambda_j - (1-\chi) \;\lambda_a\lambda_j\lambda_i) = Tr (\lambda_a\;[\lambda_i,\lambda_j]) + STr (\lambda_a\;\{\lambda_i,\lambda_j\})\;. \end{equation} As found for the tensor propagators (3.7 and 4.2), the $t_{aij}$ interaction terms (4.6) are neither fish nor fowl. They hesitate between a Lie algebra trace and a Lie-Kac supertrace. They are not universal. They depend on the Fermion content of the model.
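The hesitation can be made concrete in a small numerical sketch. This is an illustration under assumed conventions (the $su(2/1)$ fundamental representation with $\chi = \mathrm{diag}(1,1,-1)$ and Gell-Mann-like generators), not a derivation:

```python
import numpy as np

# assumed su(2/1) fundamental representation, chi = diag(1,1,-1)
chi = np.diag([1.0, 1.0, -1.0])
lam3 = np.diag([1.0, -1.0, 0.0])                          # even generator
lam4 = np.zeros((3, 3)); lam4[0, 2] = lam4[2, 0] = 1.0    # odd generator

Tr = lambda M: np.trace(M)
STr = lambda M: np.trace(chi @ M)                          # supertrace via chi

# propagator metric (3.7): half trace plus half supertrace
kappa_44 = 0.5 * Tr(lam4 @ lam4) + 0.5 * STr(lam4 @ lam4)

# vertex (4.6): commutator trace plus anticommutator supertrace
comm = lam4 @ lam4 - lam4 @ lam4                           # [lam4, lam4] = 0
acomm = 2 * (lam4 @ lam4)                                  # {lam4, lam4}
t_344 = Tr(lam3 @ comm) + STr(lam3 @ acomm)

print(kappa_44, t_344)
```

For this particular choice of indices the propagator metric ($\kappa_{44} = 1$) is saturated by the ordinary trace, while the vertex ($t_{344} = 2$) is pure supertrace: neither tensor alone reproduces both.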
Another novelty is the appearance of a new mixed $AB\overline{\Phi}$ coupling, which must be considered as a genuine component of the superchiral minimal coupling, and is induced by the Feynman diagrams: $\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(B^i\)}; \vertex [above right=of c](c1) {\(\overline{\Phi}^j\)}; \diagram* { (a1) -- [photon] (a), (b) -- [gluon] (b1), (c) -- [charged scalar] (c1), (a) -- [fermion, in=150,out=90, edge label =\(\psi_L\) ](c), (c) -- [fermion, in=30,out=-30, edge label =\(\psi_R\) ] (b), (b) -- [fermion, in=270,out=210, edge label =\(\psi_L\) ] (a), }; \end{feynman} \end{tikzpicture} \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(B^i\)}; \vertex [above right=of c](c1) {\(\overline{\Phi}^j\)}; \diagram* { (a1) -- [photon] (a), (b) -- [gluon] (b1), (c) -- [charged scalar] (c1), (a) -- [anti fermion, in=150,out=90, edge label =\(\psi_R\) ](c), (c) -- [anti fermion, in=30,out=-30, edge label =\(\psi_L\) ] (b), (b) -- [anti fermion, in=270,out=210, edge label =\(\psi_R\) ] (a), }; \end{feynman} \end{tikzpicture} $\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(\Phi^i\)}; \vertex [above right=of c](c1){\(\overline{B}^j\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [charged scalar] (b), (c1) -- [gluon] (c), (a) -- [fermion, in=150,out=90, edge label =\(\psi_L\) ](c), (c) -- [fermion, in=30,out=-30, edge label =\(\psi_R\) ] (b), (b) -- [fermion, in=270,out=210, edge label =\(\psi_L\) ] (a), }; \end{feynman} \end{tikzpicture} \begin{tikzpicture} \begin{feynman} \vertex (a1)
{\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(\Phi^i\)}; \vertex [above right=of c](c1) {\(\overline{B}^j\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [charged scalar] (b), (c1) -- [gluon] (c), (a) -- [anti fermion, in=150,out=90, edge label =\(\psi_R\) ](c), (c) -- [anti fermion, in=30,out=-30, edge label =\(\psi_L\) ] (b), (b) -- [anti fermion, in=270,out=210, edge label =\(\psi_R\) ] (a), }; \end{feynman} \end{tikzpicture} The tensorial structure of these counterterms is unusual because the propagator (3.6) of the $\overline{B} B$ field has a rather complex structure \begin{equation} P^+_{\mu\nu\alpha\beta}\;k^{\alpha} g^{\beta\gamma}k^{\delta}\;P^-_{\gamma\delta\rho\sigma}/(k^2)^2\;. \end{equation} When we perform the calculation, we get with the same strength as in (4.3) the interaction: \begin{equation} \mathcal{L}_{AB\Phi} = \frac{1}{4}\;t_{aij} \;F^a_{\mu\nu} (\overline{B}^i_{\mu\nu}\Phi^j + B^i_{\mu\nu}\overline{\Phi}^j)\;. \end{equation} This is the only term which is Lorentz invariant and invariant under the Lie subalgebra. The coupling matrix $t_{aij}$ is the same mixture (4.6) of trace and supertrace which appeared above in $D\Phi$ and $DB$, and is common to $A\overline{\Phi}\Phi$, $A\overline{B} B$ and $AB\overline{\Phi}$ because the $\Phi$ and the $B$ fields have the same chiral interactions to the Fermions (2.9,2.10). Regrouping all terms we get \begin{equation} \mathcal{L}_{B\Phi} = - \kappa_{ij} \;\;D_{\alpha} \overline{B}^i_{\alpha\mu}\;\;D_{\beta} B^j_{\beta\mu}\;\; - \kappa_{ij} \;\;D_{\alpha} \overline{\Phi}^i\;\;D_{\alpha} \Phi^j\;\; - \frac{1}{4}\,t_{aij} \;F^a_{\mu\nu} (\overline{B}^i_{\mu\nu}\Phi^j + B^i_{\mu\nu}\overline{\Phi}^j)\;. \end{equation} The interesting point is that the $F$ coupling cannot be freely adjusted.
It comes as a consequence of the $\widetilde{\slashed{D}}$ coupling of all the connection fields to the Fermion and should be considered as an indispensable part of the minimal coupling of the Avdeev-Chizhov fields. The same coupling appears in (2.11) as part of the classic Lagrangian $\breve{F}^2$. \section {The Adler-Bell-Jackiw vector anomaly viewed as superalgebraic} The main surprise of the previous calculations is that the theory seems to hesitate between a Lie algebra and a Lie superalgebra structure. The scalar propagator $\kappa_{ij}$ (3.7) and the vector-scalar or vector-tensor vertex $t_{aij}$ (4.6) contain a Lie algebra and a Lie superalgebra tensor, which cannot both be well defined at the same time. But \textit{a posteriori}, this is not so surprising; this situation is actually very well known in physics. If we compute, just as before, the chiral Fermion loop contributions to the triple vector interaction: $\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(A^b_{\nu}\)}; \vertex [above right=of c](c1) {\(A^c_{\rho}\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [photon] (b), (c1) -- [photon] (c), (a) -- [anti fermion, in=150,out=90, edge label =\(\psi_L\) ](c), (c) -- [anti fermion, in=30,out=-30, edge label =\(\psi_L\) ] (b), (b) -- [anti fermion, in=270,out=210, edge label =\(\psi_L\) ] (a), }; \end{feynman} \end{tikzpicture} \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(A^b_{\nu}\)}; \vertex [above right=of c](c1) {\(A^c_{\rho}\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [photon] (b), (c1) -- [photon] (c), (a) -- [anti fermion, in=150,out=90, edge label =\(\psi_R\) ](c), (c) -- [anti fermion, in=30,out=-30, edge label =\(\psi_R\) ] (b), (b) --
[anti fermion, in=270,out=210, edge label =\(\psi_R\) ] (a), }; \end{feynman} \end{tikzpicture} $\;\;\;\;\;\;\;\;\;\;$ \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(A^b_{\nu}\)}; \vertex [above right=of c](c1) {\(A^c_{\rho}\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [photon] (b), (c1) -- [photon] (c), (a) -- [fermion, in=150,out=90, edge label =\(\psi_L\) ](c), (c) -- [fermion, in=30,out=-30, edge label =\(\psi_L\) ] (b), (b) -- [fermion, in=270,out=210, edge label =\(\psi_L\) ] (a), }; \end{feynman} \end{tikzpicture} \begin{tikzpicture} \begin{feynman} \vertex (a1) {\(A^a_{\mu}\)}; \vertex [right= of a1] (a); \vertex [below right=of a] (b); \vertex [above right=of a] (c); \vertex [below right=of b](b1) {\(A^b_{\nu}\)}; \vertex [above right=of c](c1) {\(A^c_{\rho}\)}; \diagram* { (a1) -- [photon] (a), (b1) -- [photon] (b), (c1) -- [photon] (c), (a) -- [fermion, in=150,out=90, edge label =\(\psi_R\) ](c), (c) -- [fermion, in=30,out=-30, edge label =\(\psi_R\) ] (b), (b) -- [fermion, in=270,out=210, edge label =\(\psi_R\) ] (a), }; \end{feynman} \end{tikzpicture} we also obtain two types of counterterms: \begin{equation} \begin{array}{c} Z_f = Tr(\lambda_a\;[\lambda_b,\lambda_c])\;\;\;A^{a \mu}A^{b \nu}\partial_{\mu}A^c_{\nu} \;,\\ Z_d = STr(\lambda_a\;\{\lambda_b,\lambda_c\})\;\;\;\epsilon^{\mu\nu\rho\sigma}\;A^a_{\mu}A^b_{\nu}\partial_{\rho}A^c_{\sigma} \;. \end{array}\end{equation} The $f_{abc} = Tr(\lambda_a\;[\lambda_b,\lambda_c])$ term is the expected counterterm to the Lie algebra triple vector vertex contained in the classic Yang-Mills Lagrangian $Tr(F^2)$.
The $d_{abc} = STr(\lambda_a\;\{\lambda_b,\lambda_c\})$ term is the surprise Adler-Bell-Jackiw \cite{Adler,BJ} counterterm to the superalgebra triple vector vertex contained in the undesirable topological Lagrangian $STr(^*FF)$, where the supertrace is defined as the trace over the left Fermions minus the trace over the right Fermions. Using the superchirality condition (1.1), we can reinterpret this helicity supertrace in the sense of Hermann Weyl (B.1) as the internal superalgebra supertrace in the sense of Kac (1.2) and (A.1), and identify the vector anomaly with the even part of the rank-3 super-Casimir operator (A.7) of the superalgebra. The occurrence of the Hodge dual $^*F$ is also consistent with our superchiral condition (1.1) which correlates the chirality of the spinor bundle with the Hodge duality of the exterior bundle (1.3). We can now clarify why we found in (4.2) that the vector metric is purely algebraic. The vector Lagrangian also hesitates between a $trace$ and a $supertrace$, but in a particular way. For the scalars, $Tr(\lambda_i\lambda_j)$ is symmetric in $(ij)$ whereas $STr(\lambda_i\lambda_j)$ is antisymmetric and $\kappa_{ij}$ has mixed symmetry. For the vectors, $Tr(\lambda_a\lambda_b)$ and $STr(\lambda_a\lambda_b)$ are both symmetric in $(ab)$, so $\kappa_{ab}$ is symmetric. The difference is that the trace leads to a positive-definite metric whereas the supertrace does not. For $SU(m/n)$, the $SU(n)$ sector is negative-definite. But the Fermion helicity, which generates the supertrace in the Feynman diagrams, is correlated with duality via equation (2.10). Therefore the supertrace metric is correlated with the topological Lagrangian $STr (^*F F)$, whose quadratic contribution to the propagator is an exact form: it can be eliminated by integration by parts. Hence $\kappa_{ab}$ (4.2) only contains the $trace$ term.
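The contrast between the antisymmetric $f$ tensor and the symmetric $d$ tensor in (5.1) is easy to probe numerically. The following sketch (our own illustration, not part of the text's formalism) computes $\max_{abc}|Tr(\lambda_a\{\lambda_b,\lambda_c\})|$ for the Pauli and the Gell-Mann matrices: the symmetric tensor vanishes identically for $SU(2)$, which has no Casimir of rank 3, but not for $SU(3)$.

```python
import numpy as np

# Pauli matrices: generators of su(2) (overall normalization is irrelevant here)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
su2 = [s1, s2, s3]

# Gell-Mann matrices: generators of su(3)
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex)
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex)
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)
su3 = [l1, l2, l3, l4, l5, l6, l7, l8]

def max_d(gens):
    """Largest |Tr(la_a {la_b, la_c})| over all index triples."""
    return max(abs(np.trace(a @ (b @ c + c @ b)))
               for a in gens for b in gens for c in gens)

print(max_d(su2))  # vanishes: su(2) has no rank-3 Casimir
print(max_d(su3))  # nonzero: su(3) = A_2 does
```

Only the relative statement matters: the $d$ tensor is a property of the algebra, not of the chosen normalization.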
In the $SU(2/1)$ case, this offers a solution to the long-standing problem of the metric of the photon: is it negative relative to the weak $SU(2)$ metric as the supertrace dictates, or is it positive as Ne'eman hoped? In our new superconnection model, we conclude that the metric of the photon propagator uses the positive trace and that the non-positive-definite supertrace metric is only involved in the anomalous topological Lagrangian. To conclude, the triple-vector vertex (5.1) also hesitates between a Lie algebra and a Lie superalgebra structure. The Adler-Bell-Jackiw anomalous counterterm (5.1) is superalgebraic in nature and cancels out if the supertrace of the Casimir of rank 3 (A.7) of the Lie subalgebra vanishes. The Fermion loop counterterms to the quartic vertices $A^4$, $A^2\overline{\Phi}\Phi$, $A^2\overline{B} B$, $A^2\overline{\Phi} B$, $A^2 \overline{B} \Phi$ also contain anomalies, but they automatically follow the structure of the cubic terms because of the Lie algebra Ward identities. For example, the classic and the anomalous $A^4$ vertices are respectively the complements of the $A^3$ vector terms in the classic Yang-Mills Lagrangian $Tr(F^2)$ and in the topological anomalous Lagrangian $STr(^*FF)$. The $(A^2 \overline{B} \Phi)$ counterterm is the complement of the $(A \overline{B}\Phi)$ term in the $(F \{\overline{B},\Phi\})$ Lagrangian. The quartic potentials $\overline{\Phi}^2\Phi^2$, $\overline{\Phi}\Phi\overline{B} B$ and $\overline{B}^2B^2$ remain to be studied. \section {Classification of the anomaly-free superchiral superconnections} We have identified three obstructions to the construction of the quantum field theory: (3.7), (4.6) and (5.1). We wish to show here that these hesitations between trace and supertrace are resolved in many superchiral models. Consider first the scalar anomalies.
Since the trace operator is invariant under circular permutation, we can use the closure relation (A.3) of the superalgebra to rewrite the trace term in (3.7) as \begin{equation} Tr (\lambda_i\lambda_j) = Tr (\lambda_j\lambda_i) = \frac{1}{2}Tr (\{\lambda_i,\lambda_j\}) = \frac{1}{2} d^a_{ij} Tr (\lambda_a)\;. \end{equation} In the same way, we can rewrite (4.6) as \begin{equation} \begin{array}{c} Tr (\lambda_a\;[\lambda_i,\lambda_j]) = Tr (\lambda_a\lambda_i\lambda_j -\lambda_a\lambda_j\lambda_i) \\ = Tr (\lambda_i\lambda_j\lambda_a -\lambda_i\lambda_a\lambda_j) = - Tr (\lambda_i\;[\lambda_a,\lambda_j]) \\ = - f^k_{aj} Tr (\lambda_i\lambda_k) = - \frac{1}{2} \; d^b_{ik}f^k_{aj}\;Tr(\lambda_b)\;. \end{array}\end{equation} Hence if all the even generators satisfy the constraint \begin{equation} Tr (\lambda_a) = 0\;, \end{equation} the theory is superalgebraic: the propagators of the scalars and of the Avdeev-Chizhov tensors are controlled by the odd part of the super-Killing metric $\kappa_{ij} = \frac{1}{2} STr(\lambda_i\lambda_j)$ and their interactions with the vectors are governed by the symmetric structure constants of the superalgebra $t_{aij} = d_{aij}$. Consider now the vector anomaly (5.1). The simple Lie algebras are of type $A$, $B$, $C$, $D$, $E$, $F$ and $G$. Among those, only $A_m = SU(m+1), \;m\geq 2$, including $A_3 = D_3 = SO(6) = SU(4)$, admit a Casimir of rank 3. In addition, we can have a $U(1)$ algebra, denoted $Y$, which generates two supplementary Casimirs of rank 3: $Y^3$ and $Y C_2$ where $C_2$ is a rank-2 Casimir of any other Lie algebra present in the model. A superchiral model associated to the simple superalgebras $G(3)$ or $OSp(m/n)$ with $m>7$ has no Casimir of rank 3 and no $U(1)$ factor, so it cannot have a vector anomaly. As all its generators are traceless, it has no scalar anomaly either. Therefore the model is superalgebraic and anomaly-free.
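As a concrete check of the tracelessness condition (6.3), one can tabulate the hypercharges of one standard-model family as discussed in section 7; the sketch below (an illustration of ours, assuming the hypercharge normalization of the text, with $Tr(Y) = -4$ for the lepton triplet) verifies both the linear and the cubic cancellations in exact rational arithmetic.

```python
from fractions import Fraction as F

# hypercharges per family in the convention of the text
leptons_L = [F(-1), F(-1)]          # (nu_L, e_L) doublet
leptons_R = [F(-2)]                 # e_R
quarks_L  = [F(1, 3), F(1, 3)]      # (u_L, d_L) doublet, per color
quarks_R  = [F(4, 3), F(-2, 3)]     # u_R, d_R, per color
n_colors = 3

# linear condition: the lepton triplet balances the colored quark quadruplets
trY_leptons = sum(leptons_L + leptons_R)
trY_quarks  = n_colors * sum(quarks_L + quarks_R)
print(trY_leptons, trY_quarks)      # -4 4

# cubic Y^3 vector anomaly: helicity supertrace = (left) - (right)
Y3 = (sum(y**3 for y in leptons_L) - sum(y**3 for y in leptons_R)
      + n_colors * (sum(y**3 for y in quarks_L)
                    - sum(y**3 for y in quarks_R)))
print(Y3)                           # 0: BIM cancellation per family
```

The two printed traces cancel against each other, and the cubic supertrace vanishes family by family, mirroring the BIM mechanism.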
It is nevertheless chiral because the non-Abelian charges of the left and right Fermions in general differ. \section {Anomaly cancellation in the superchiral SU(2/1) model of leptons and quarks} It is also possible to cancel the scalar anomalies by combining several irreducible representations. For example, in the $SU(2/1)$ model of the electroweak interactions, the superchirality condition (1.1) implies that $SU(2)$ only acts on the left doublets (2.8). The cumulated hypercharge of the \{right-electron/(left-electron,left-neutrino)\} triplet $Tr(Y) = -4$ (\cite{TM20b} appendix B) is compensated by the cumulated hypercharge of the three colored (up,down) quark quadruplets $Tr(Y) = 3 \times 4/3$ (\cite{TM20b} appendix D). This is equivalent to the observation that the electric charge of the hydrogen atom (one electron plus three quarks) vanishes. Using (6.1) and (6.2), the scalar anomalies (3.7) and (4.6) cancel out. As found in 1972 by Bouchiat, Iliopoulos and Meyer (BIM \cite{BIM}), the four vector anomalies $Y^3$, $Y\;SU(2)^2$, $Y\; SU(3)^2$ and $SU(3)^3$ (5.1) also cancel out in the standard model, separately for each family. \section {Discussion} The concept of a superconnection defined as an odd polyform, a linear combination of exterior-forms of all degrees valued in a Lie superalgebra (2.2), was first introduced in 1982 \cite{NTM82} in terms of the primitive forms $(\phi,a,b,c,e)$ as \begin{equation} \widetilde{A} = (\phi + b + e)^i\lambda_i + (a + c)^a\lambda_a\;, \end{equation} then developed by Quillen and Mathai in their seminal papers \cite{Quillen,MQ86} and recently modified in \cite{TM20a}. In Quillen \cite{Quillen,MQ86}, the covariant differential is defined as $\widetilde{D} = d + A + L$, where $L = L^i\lambda_i$ is, as for us, a mixed exterior-form of even degree valued in the odd module of the superalgebra.
But because $L$ must be odd relative to the differential calculus to ensure that the curvature $\widetilde{F} = \widetilde{D}\DT$ defines a linear map, Quillen assumes that the components $L^i$ of $L$ are valued in another graded algebra whose elements anticommute with the exterior-forms. The difficulty is that these partially anticommuting $L^i$ cannot be represented in quantum field theory by commuting scalar fields. This is probably why the works of Ne'eman and Sternberg \cite {NSF} or of the Marseille-Mainz group (see for example \cite{CQ0,HS}), who have all adopted the Quillen formalism, stop short of the quantum theory. In our construction \cite{TM20a}, the components $\phi^i$ of $\phi = \phi^i\lambda_i$ are just ordinary commuting functions. Nevertheless $\phi$ is odd with respect to our differential calculus as required by Quillen \cite{Quillen}, because the $\lambda_i$ matrices anticommute with the chirality $\chi$ (2.1) which decorates our exterior differential $\widetilde{d} = \chi d$ (2.2). As a result, the commuting $\phi^i$ can be represented by Bose scalars and we can develop a quantum field theory formalism as done here. This modifies the calculation of the superconnection cohomology \cite{MQ86}, which should be reexamined, and we conjecture that the Adler-Bell-Jackiw quantum anomalies play a role as obstructions in this purely geometrical context. However, a drawback of the superconnection formalism is the excessive number of component fields. The new self-dual superchiral constraint $\widetilde{A} = ^*\widetilde{A}\; \chi$ introduced in the present work as equation (1.1) provides a subtle way to eliminate the higher forms. The Hodge duality conspires with the chiral properties of the Pauli matrices such that the action of the Dirac operator is focused on a single chirality (2.6). If $\widetilde{A}$ is self-dual, it only absorbs left Fermions; if it is anti-self-dual, it only absorbs right Fermions.
Coupling the Hodge duality signs (C.1) with the superalgebra supertrace operator $\chi$, defined as $STr(...) = Tr (\chi ...)$ (1.2 and appendix A), links the spinor bundle (appendix B) to the exterior bundle (appendix C) and focuses the action of the superconnection (2.8-10) on the $CP$ positive Fermions: $\widetilde{\slashed{A}} = \widetilde{\slashed{A}} \;(1 \pm \chi)(1 \mp \gamma_5)/4$. The constraint also eliminates the primitive fields (2.2) in favor of the self-chiral fields $(\Phi,A,B)$ (2.8-10). Applied to the $SU(2/1)$ superalgebraic model of the elementary particles, we recover exactly the vector interactions postulated \textit{ex nihilo} by Ne'eman \cite{N1} and Fairlie \cite{F1} and the scalar interactions postulated in \cite{TM20b}. The superchirality focuses, as observed in Nature, the action of the $SU(2)$ vector Bosons on the left leptons and quarks (section 7). Because the left and right leptons and quarks do not carry the same $SU(2)$ or $U(1)$ charges, the Yang-Mills sector is subject to the Adler-Bell-Jackiw anomaly \cite {Adler,BJ} which curiously (section 5) has the structure of a would-be superalgebraic symmetric structure constant $d_{abc} = STr (\lambda_a\;\{\lambda_b,\lambda_c\})$ involving three Lie algebra even indices $(abc)$. Our new interpretation of this well-known result is that the chiral supertrace, in the sense of Adler-Bell-Jackiw: left minus right Fermions, is equivalent to the charge supertrace in the sense of Kac (1.2), so the supertrace terms in (5.1) match the definition of the even part of the super-Casimir (A.7). In a similar way, the counterterms to the scalar and tensor interactions (4.6) involve would-be algebraic antisymmetric structure constants $f_{aij} = Tr (\lambda_a\;[\lambda_i,\lambda_j])$ although $(ij)$ are odd indices. Our result is that all these unwanted terms cancel out if all the even generators are traceless (6.3) and if the even part of the super-Casimir operator of rank 3 (5.1 and A.7) vanishes.
This is true in particular in the standard model of the fundamental interactions when we apply the BIM \cite{BIM} mechanism whereby each lepton family is balanced by its pair of quarks (section 7). Only the representation-independent universal couplings $f_{abc} = Tr (\lambda_a\;[\lambda_b,\lambda_c])$ and $d_{aij} = STr (\lambda_a\;\{\lambda_i,\lambda_j\})$ survive. The resulting scalar-vector-tensor theory is therefore, at one-loop, superalgebraic and anomaly-free. Another very interesting consequence of our superchiral structure is the induction by the Fermion loops of a new scalar-vector-tensor triple interaction (4.8) which reproduces, if and only if we apply the BIM mechanism, the structure of the square of the geometric supercurvature (2.11). Once again, Differential Geometry and Quantum Field Theory agree, conditional on the elimination of the Adler-Bell-Jackiw anomaly. These results are tantalizing from a theoretical point of view, yet very surprising: the couplings (4.8-10) of the vectors $\chi \lambda_a$ and of the scalars $\lambda_i (1 \pm \chi)$ are outside the naive superalgebra generated by the $(\lambda_a, \lambda_i)$ matrices (appendix A); the couplings to the Fermions are all even (they transform Fermions into Fermions, not Fermions into Bosons as in Wess-Zumino supersymmetry); yet the signs induced by the helicity of the Fermion propagators restore the superalgebraic structure (4.6), if and only if the model is anomaly-free. The expected minimal coupling of the vectors to the scalars and the tensors, via the covariant derivatives, is also necessarily completed by a new scalar-vector-tensor vertex (4.9) which modifies the asymptotic behavior of the coupling of the scalars and tensors to the Fermions. A deeper understanding of these equations must be possible.
These results are also resolutely bizarre from a phenomenological point of view, even if the occurrence of the superalgebraic structure is a direct consequence of the experimentally verified BIM mechanism whereby the chiral quantum anomalies are canceled by the balance of the leptons against the quarks. We stress that the model is highly constrained and offers no choice. The field content is defined by the differential geometry, the dynamics are induced by the Fermion loops, and there is no free structural choice and no free parameter, except that the Cabibbo-Kobayashi-Maskawa angles can be understood as specifying the details of the 3-generation indecomposable representations of $SU(2/1)$ (see \cite{CQ0,HS} and \cite{TM20b}, appendix H). We must face the model as it comes and then try to devise a physical interpretation. Many problems remain. Having established the self-interactions of the Boson fields (4.9), one has to examine whether the theory is renormalizable and in particular whether the counterterms involving Boson loops have the correct Lorentz structure, which seems likely, and the correct algebraic structure, which is non-trivial as we only have Lie-algebra Ward identities. The scalar potential has to be evaluated. The symmetry-breaking pattern of the model must be studied. Finally, the crucial open question is the possible existence of a symmetry associated with the odd generators of the superalgebra. \section*{Acknowledgments} This research was supported by the Intramural Research Program of the National Library of Medicine, National Institutes of Health. We are grateful to Danielle Thierry-Mieg for clarifying the presentation and to Andre Neveu for stimulating discussions.
\section{Introduction} The discovery of high-temperature superconductivity in H$_3$S at extreme pressures~\cite{drozdov2015conventional} stimulated an intense hunt for novel hydride compounds with even higher $T_\text{c}$'s, spearheaded by computational material discovery~\cite{Boeri_2019,Lilia_2022,FLORESLIVAS20201,Zurek_Hilleke2022,Zurek_chemPrecomp2022}. One prominent example is LaH$_{10}$, which has been shown to superconduct up to temperatures of \SI{265}{K} at pressures of $\sim$\SI{190}{GPa}~\cite{drozdov2019superconductivity,somayazulu2019evidence}. While it is very tempting to continue searching for materials with record-breaking $T_\text{c}$'s~\cite{peng2017hydrogen, grockowiak2020hot, DiCataldo_2022}, lowering the required stabilization pressures is even more important in view of technological applications~\cite{pickard2020superconducting,lv2020theory,di2020phase,DiCataldo_2021,shipley2021high,zhang2021design,PhysRevB.104.134501,Lucrezi2022,DiCataldo_2022,Sun_2022}. In a recent paper~\cite{DiCataldo_2021} some of us proposed a strategy to bring the stabilization pressures of high-$T_\text{c}$\ hydrides closer to ambient pressure, based on the concept of an optimized \textit{chemical precompression}. In fact, we identified LaBH$_8$, a hydride superconductor with a $T_\text{c}$\ $>$ \SI{100}{K}, dynamically stable at an unprecedentedly low pressure. In a follow-up work, we showed that other hydride superconductors with the same $Fm\bar{3}m$\ $XY$H$_8$\ structural template can be identified with even lower critical pressures of stability, such as SrSiH$_8$\ and BaSiH$_8$~\cite{Lucrezi2022}. The latter is particularly interesting, as it remains dynamically stable down to \SI{3}{GPa}. We note in passing that these estimates of dynamical stability were based on anharmonic frozen-phonon calculations. All of the mentioned $Fm\bar{3}m$\ $XY$H$_8$\ compounds are thermodynamically stable only at pressures above $\sim$\SI{100}{GPa}, and metastable below. 
A conceivable route to synthesize these materials would hence be to obtain them at high pressures where they are thermodynamically stable, and quench them to lower pressures. The standard criterion employed in the literature to estimate how far a metastable phase can be quenched down in pressure is \textit{dynamical} (phonon) stability. However, dynamical stability indicates only that a structure is in a \textit{local minimum} of the potential energy surface. To estimate its actual \textit{lifetime} (\textit{kinetic} stability), one needs also to estimate the height of the barriers that separate the current minimum from other minima. In Ref.~\cite{Lucrezi2022}, some of us introduced a rigorous method to assess the kinetic stability pressure $p_\text{kin}$\ by explicitly calculating the energy barrier protecting the metastable $Fm\bar{3}m$\ structure from decomposition as a function of pressure, using the variable-cell nudged elastic band method~\cite{QIAN20132111}. For BaSiH$_8$, for example, we found a $p_\text{kin}$\ of $\sim$\SI{30}{GPa}, significantly higher than the dynamical value $p_\text{dyn}$\ = \SI{3}{GPa}. It was argued in a recent work~\cite{Errea_LaBH8} that quantum lattice effects treated within the stochastic self-consistent harmonic approximation (\mbox{SSCHA}) drastically increase the dynamical stabilization pressure $p_\text{dyn}$\ for LaBH$_8$\ and it was further suggested that a similar increase in $p_\text{dyn}$\ should be expected for other $Fm\bar{3}m$\ $XY$H$_8$\ hydrides. To investigate this, we apply the \mbox{SSCHA}\ formalism to BaSiH$_8$, which, so far, has the lowest $p_\text{dyn}$\ among all $Fm\bar{3}m$\ $XY$H$_8$\ hydrides. In \mbox{SSCHA}, a major bottleneck is the need to use large supercells and large numbers of individuals if one wants to fully converge the calculation.
We overcome this problem by employing machine-learned moment tensor potentials (MTP)~\cite{Shapeev2015-MTP,GUBAEV2019148} that allow us to obtain total energies, forces, and stresses with DFT accuracy but at a fraction of the computational cost~\cite{ranalli_SSCHA,D1NR08359G,MLIP_eval}. To our knowledge, this work represents the first combination of MTPs with the \mbox{SSCHA}\ method. In addition, we introduce a method to discern the contributions of quantum ionic (QI) effects from those of anharmonic and phonon-phonon (ph-ph) effects. We find that $p_\text{dyn}$\ increases from \SI{3}{GPa} to about \SI{20}{GPa} within the \mbox{SSCHA}\ and that this rise can almost entirely be attributed to QI effects, with actual anharmonic and ph-ph effects playing only a subordinate role. In fact, the same crystal structure that minimizes the free energy within \mbox{SSCHA}\ can already be obtained in density-functional theory (DFT) by including zero-point energies (ZPE). Importantly, we demonstrate that even after including QI, anharmonic, and ph-ph effects within the framework of \mbox{SSCHA}, the actual limit of stability is still set by $p_\text{kin}$\ ($\sim$\SI{30}{GPa}), as stated in our previous work~\cite{Lucrezi2022}. \section{Results} \subsection{Ab-initio machine-learned interatomic potentials} In the self-consistent harmonic approximation, the full anharmonic and interacting lattice is mapped onto an auxiliary harmonic system and the free energy $\mathcal{F}$ of the full system is approximated by the minimum of the free energy of the auxiliary harmonic system~\cite{Born1951,SCHA_Hooton,PhysRevLett.17.89,PhysRevB.1.572}. In the \mbox{SSCHA}, this minimization is performed stochastically via Monte-Carlo summation and importance sampling over several consecutive ensembles (populations) of a large number of individuals.
Each individual here corresponds to a supercell structure with displaced atomic positions, where the supercell size determines the density of phonon wave vectors in the Brillouin zone~\cite{PhysRevB.89.064302}. More details are provided in the Method section, the Supplementary Material (SM)~\footnote{The Supplemental Material is available on the journal website.}, and in Refs.~\cite{PhysRevLett.111.177002,PhysRevB.96.014111,PhysRevB.97.014306,PhysRevB.98.024106,PhysRevLett.122.075901,errea2020quantum,Monacelli_2021}. In practice, to calculate accurate phonon frequencies within \mbox{SSCHA}, in particular for slow-converging soft modes, one needs to consider population sizes of several tens or hundreds of thousands of individuals. In addition, one also needs to converge the supercell size. Doing this fully at a DFT level is computationally prohibitive, which is why we made use of MTPs in this work. For every pressure, MTPs were trained on DFT results of 50 structures randomly chosen out of the \mbox{SSCHA}\ random-displacement individuals in 2$\times$2$\times$2 supercells. We then validated the trained MTPs for all other individuals by comparing the total energies, forces and stress components. This validation is shown in Fig.~\ref{fig:MLIP_valid}, demonstrating the exceptional accuracy of the MTPs used (see Supplementary Fig.~2 in the SM for other pressures, as well as forces and stresses). As can be appreciated in this figure, the root-mean-squared error (RMSE) is below \SI{1}{meV/atom}, i.e., at the same level as the error in DFT. The inset also shows that the potential energy surface of the slow-converging $T_{2g}$ mode at $\Gamma$ is reproduced very nicely with the MTPs. As a final validation, we compare the \mbox{SSCHA}\ phonon dispersions obtained using only DFT with those employing only MTPs (see Supplementary Fig.~3 in the SM) and find very good agreement, with only minor differences in the $T_{2g}$ mode at $\Gamma$ and the $E_g$ mode at $X$.
To fully converge these modes within \SI{1}{meV}, we increased the population sizes within MTP-\mbox{SSCHA}\ up to \SI{100000}{} individuals compared to \SI{10000}{} for the DFT-\mbox{SSCHA}\ calculations. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{fig1_MTP.png} \caption{\textbf{MTP validation:} MTP total energy ($E_\mathrm{MTP}$) versus DFT total energy ($E_\mathrm{DFT}$) for all \SI{19250}{} individuals of a \mbox{SSCHA}\ calculation for a lattice constant $a=\SI{6.242}{\angstrom}$ (blue scatter plot). $E_\mathrm{ref}$ is the DFT total energy of the high-symmetry structure with undisplaced H positions. The diagonal black line serves as a guide to the eye. The inset shows the full frozen-phonon potential of the lowest $T_{2g}$ mode at $\Gamma$ obtained with DFT (solid black line) and MTP (dotted blue line), as well as the harmonic potential (dashed grey line).} \label{fig:MLIP_valid} \end{figure} The use of MTPs not only substantially speeds up the calculations~\footnote{We found MTPs to be a factor of about \SI{20000}{} faster than DFT in the case of 2$\times$2$\times$2 supercells of BaSiH$_8$.}, but also gives access to larger supercells. In this work, we performed additional \mbox{SSCHA}\ calculations using MTPs on $n\times n\times n$ supercells with $n=1,2,3,4$ at all studied pressures. The convergence of the free energy, the structural parameters, and the phonon dispersions with respect to the supercell size is provided in Supplementary Figs.~4-6 in the SM. An overview of all performed \mbox{SSCHA}\ runs is given in Supplementary Tab.~1 in the SM. Unless stated otherwise, all \mbox{SSCHA}\ results presented in the following have been obtained with MTPs for \SI{100000}{} individuals in 4$\times$4$\times$4 supercells.
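The variational principle underlying the \mbox{SSCHA}\ can be illustrated with a deliberately minimal one-dimensional toy model (a sketch of ours, not the actual \mbox{SSCHA}\ algorithm): for an anharmonic potential, the free energy of an auxiliary harmonic oscillator is estimated stochastically over a fixed population of Gaussian displacements and then minimized with respect to the auxiliary frequency.

```python
import numpy as np

# Toy self-consistent-harmonic minimization for V(x) = x^2/2 + g*x^4
# (hbar = m = 1, T = 0). The trial density is the Gaussian ground state
# of an auxiliary harmonic oscillator of frequency omega.
g = 0.1
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, 200_000)   # fixed "population" of displacements,
                                    # reused at every omega for a smooth scan

def free_energy(omega):
    """Variational energy: analytic kinetic part omega/4 plus a
    Monte-Carlo estimate of <V> over the Gaussian with <x^2> = 1/(2*omega)."""
    x = np.sqrt(0.5 / omega) * z
    return omega / 4 + np.mean(0.5 * x**2 + g * x**4)

# minimize the auxiliary free energy on a frequency grid
omegas = np.linspace(0.8, 1.8, 101)
w_best = min(omegas, key=free_energy)
print(w_best)   # anharmonicity renormalizes the bare frequency 1.0 upward
```

Reusing the same random population at every trial frequency (common random numbers) is what makes the grid scan smooth despite the stochastic estimator; the real \mbox{SSCHA}\ achieves the analogous effect with importance-sampling reweighting across populations.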
\subsection{Structural parameters and electronic dispersion} The $Fm\bar{3}m$\ phase of BaSiH$_8$\ has a face-centered cubic unit cell with 10 atoms in the primitive cell, where Ba and Si occupy Wyckoff $4a/b$ sites and the H atoms occupy $32f$ sites. The eight H atoms form rhombicuboctahedral cages around the Ba atoms and cubic cages around the Si atoms. The structure has only two free parameters, namely the lattice constant $a$ and the Wyckoff coordinate $x$ of the $32f$ sites, defining the H-H distance $d_{\text{H-H}}=2a\cdot x$ (side length of the cubic cage) and the H-Si distance $d_{\text{H-Si}}=a\sqrt{3}\cdot x$ (half the space diagonal of the cubic cage). Relaxing the structure within DFT to target pressures of 10, 20, 25, and \SI{30}{GPa}, we obtained lattice constants between \SI{6.5}{\angstrom} and \SI{6.2}{\angstrom}, and H-Si distances of about \SI{1.6}{\angstrom}, as shown in Table~\ref{tab:sructpar}. An extensive list of the structural parameters from ambient pressure up to \SI{100}{GPa}, as well as the fit to the Birch-Murnaghan equation of state, can be found in Supplementary Note 1 in the SM. Starting from the atomic positions obtained in DFT and the harmonic dynamical matrices obtained in density-functional perturbation theory (DFPT) calculations at each pressure, we performed constant-volume \mbox{SSCHA}\ relaxation calculations~\footnote{Within \mbox{SSCHA}, the stress tensor is obtained as $P_{\alpha\beta}=-V_\text{sc}^{-1}\left(\partial\mathcal{F}/\partial\varepsilon_{\alpha\beta}\right)|_{\varepsilon=0}$, where $V_\text{sc}$ is the supercell volume, $\mathcal{F}$ the free energy functional, and $\varepsilon_{\alpha\beta}$ the strain tensor~\cite{PhysRevB.98.024106}. The pressure is then obtained as $\tilde{p}=\sum_\alpha P_{\alpha\alpha}/3$.}. The corresponding parameters, indicated by $\tilde{x}, \tilde{d}$, and $\tilde{p}$, are reported in Table~\ref{tab:sructpar}.
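The cage geometry follows directly from the two free parameters. The short helper below (illustrative only) reproduces the $d_{\text{H-Si}}$ entry of the $a=\SI{6.242}{\angstrom}$ DFT row of Table~\ref{tab:sructpar}.

```python
import math

def cage_distances(a, x):
    """H-H and H-Si distances of the Fm-3m XYH8 cage from the lattice
    constant a (Angstrom) and the 32f Wyckoff coordinate x."""
    d_HH = 2.0 * a * x               # side length of the cubic H8 cage
    d_HSi = math.sqrt(3.0) * a * x   # half the cage space diagonal
    return d_HH, d_HSi

# DFT row of the table: a = 6.242 A, x = 0.1483
d_HH, d_HSi = cage_distances(6.242, 0.1483)
print(round(d_HSi, 3))  # 1.603
```

The same two-parameter reconstruction applies to every row, for both the DFT and the SSCHA-relaxed values.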
\begin{table}[t] \caption{\textbf{Structural parameters:} Lattice constant $a$, Wyckoff parameter $x$, H-Si distance $d_{\text{H-Si}}$, and pressure $p$ after the relaxation with respect to the DFT total energy and after the constant-volume relaxation within \mbox{SSCHA}\ ($\tilde{x}, \tilde{d}$, and $\tilde{p}$).} \label{tab:sructpar} \begin{tabular}{c|ccc|ccc} $a\,/\,\text{\AA}$ & $x$ & $d_{\text{H-Si}}\,/\,\text{\AA}$ & $p\,/\,\text{GPa}$ & $\tilde{x}$ & $\tilde{d}_{\text{H-Si}}\,/\,\text{\AA}$ & $\tilde{p}\,/\,\text{GPa}$\\ \toprule 6.541 & 0.1434 & 1.625 & 10 & 0.1459 & 1.653 & 12.2\\ 6.323 & 0.1471 & 1.611 & 20 & 0.1498 & 1.640 & 22.6 \\ 6.242 & 0.1483 & 1.603 & 25 & 0.1510 & 1.633 & 27.9 \\ 6.171 & 0.1494 & 1.597 & 30 & 0.1521 & 1.626 & 33.1 \end{tabular} \end{table} We observe an elongation of $d_{\text{H-H/Si}}$ of about \SI{30}{\milli\angstrom} (2\%) for all pressures and an increase in pressure of about 2 to \SI{3}{GPa}, i.e. $\sim20\%$ at \SI{10}{GPa} and $\sim10\%$ at \SI{30}{GPa}. The change in atomic positions introduces only small changes in the electronic structure, as demonstrated in Fig.~\ref{fig:el}, where we compare the electronic bands and densities of states (DOS) for $x$ and $\tilde{x}$. The largest differences are found above and below the Fermi energy, whereas electronic bands and DOS at the Fermi energy, and hence the Fermi surface, remain essentially unchanged. \begin{figure}[h] \includegraphics[width=1\columnwidth]{fig2_el_bands.png} \caption{\textbf{Difference in electronic properties:} \textbf{a} electronic bands, and \textbf{b} density of states for the structure with H positions defined by $x$ (DFT minimum, blue line) and $\tilde{x}$ (\mbox{SSCHA}\ minimum, red line) for $a=\SI{6.242}{\angstrom}$.} \label{fig:el} \end{figure} \subsection{Phonon dispersions and lattice instability} Moving on, we evaluate and compare the DFPT and \mbox{SSCHA}\ phonon dispersions at all studied pressures, as shown in Fig.~\ref{fig:hessian_ph}. 
Similar to the results for LaBH$_8$\ in Ref.~\cite{Errea_LaBH8}, we find that the high-energy optical modes are strongly renormalized to lower frequencies. In particular, a significant softening occurs for the threefold degenerate $T_{2g}$ mode at $\Gamma$ (harmonic values around \SI{50}{meV} in Fig.~\ref{fig:hessian_ph}), which becomes imaginary and indicates a (dynamic) lattice instability for lattice constants \mbox{$a>\SI{6.323}{\angstrom}$}, corresponding to $p=\SI{20}{GPa}$ and $\tilde{p}=\SI{22.6}{GPa}$. Thus, the inclusion of quantum lattice effects within the \mbox{SSCHA}\ shifts the dynamical stability pressure from the anharmonic frozen-phonon value \mbox{$p_\text{dyn}$\ = \SI{3}{GPa}} to $\tilde{p}_\text{dyn} = \SI{20}{GPa}$ (see Supplementary Fig.~7 in the SM). This $\sim$\SI{17}{GPa} difference is substantial, but considerably smaller than the $\sim$\SI{40}{GPa} shift reported for LaBH$_8$~\cite{Errea_LaBH8}. \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{fig3_ph_bands.png} \caption{\textbf{Harmonic and \mbox{SSCHA}\ phonon dispersions:} Phonon dispersions for various lattice constants $a$ along a high-symmetry path of the BZ. The dashed black lines correspond to harmonic calculations and the solid red lines to the \mbox{SSCHA}\ results. The indicated pressures correspond to the DFT (\mbox{SSCHA}) pressures for relaxed atomic positions defined by $x$ ($\tilde{x}$)~\cite{foot_phonon_interp}.} \label{fig:hessian_ph} \end{figure*} We want to note in passing that we also calculated the fourth-order corrections to the phonon frequencies in \mbox{SSCHA}\ and find, in contrast to Ref.~\cite{Errea_LaBH8}, but in accordance with other works employing \mbox{SSCHA}~\cite{Monacelli_2021,errea2020quantum,PhysRevB.96.014111,PhysRevLett.122.075901,PhysRevB.97.014306}, only minor differences from the results obtained up to third order. The maximum phonon energy differences are of the order of \SI{1}{meV} for all pressures.
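The restriction of the fourth-order calculations to 2$\times$2$\times$2 supercells is a pure memory issue; a quick order-of-magnitude estimate (assuming a dense double-precision array with no exploitation of symmetries; the function name is ours) makes the scaling explicit:

```python
# Rough RAM estimate for storing the dense fourth-order force-constant
# tensor of an n x n x n supercell: (3 * n^3 * N_atoms)^4 entries,
# 8 bytes each (double precision, no symmetry reduction assumed).
def fourth_order_ram_tib(n, atoms_per_uc, bytes_per_entry=8):
    dof = 3 * n**3 * atoms_per_uc          # Cartesian degrees of freedom
    return dof**4 * bytes_per_entry / 1024**4

print(f"2x2x2: {fourth_order_ram_tib(2, 10):.3f} TiB")  # feasible
print(f"3x3x3: {fourth_order_ram_tib(3, 10):.2f} TiB")  # ~3.13 TiB
```

For 10 atoms per unit cell, going from $n=2$ to $n=3$ inflates the array from a few tens of GiB to the $\sim$3 TiB quoted in the text, beyond a single shared-memory node.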
The phonon dispersions in the 2$\times$2$\times$2 supercells~\footnote{Calculating fourth-order corrections on larger supercells is computationally unfeasible with our currently available resources. This calculation in an $n\times n\times n$ supercell requires 4D arrays with \mbox{$(3\cdot n^3\cdot <\text{number of atoms in uc}>)^4$} elements. For 10 atoms in the unit cell and $n=3$, this yields $810^4$ double-precision entries, for which \SI{3.13}{TB} of RAM are needed on a single shared-memory node.} with second (auxiliary), third, and fourth-order terms for all studied pressures are shown in Supplementary Fig.~8 in the SM. \subsection{Different effects contributing to frequency shifts} The observed changes in the phonon dispersions when employing \mbox{SSCHA}\ and the resulting different dynamical stabilization pressures result from a combination of several effects that are not included at the level of standard DFT and DFPT. These are, most importantly, the vibrational contributions of the ions to the free energy, phonon anharmonicity, and ph-ph interactions. In the following, we present an attempt to disentangle these effects and determine the importance of each of them for BaSiH$_8$. \textbf{QI effects: } First, we want to look at the contributions to the total energy originating from the so-called \textit{zero-point} vibrations, i.e., vibrations of the ions around their equilibrium positions due to the quantum mechanical treatment of the nuclei, absent in the classical, \textit{clamped-nuclei} picture~\cite{BOA}. In the Born-Oppenheimer approximation, the total energy $E_\text{tot}[\mathbf{R}]$ (at $T=\SI{0}{K}$) for ionic positions $\mathbf{R}$ is given by the sum of the internal electronic energy $E_\text{el}[\mathbf{R}]$ and the ZPE contributions of the nuclei $E_\text{ZP}[\mathbf{R}]$. In most solids, $E_\text{ZP}$ is much smaller than $E_\text{el}$ and can be safely neglected.
However, due to the small mass of H and the resulting high phonon frequencies in hydrides, the ZPE can become substantial and thus cause a modification of the equilibrium crystal structure. At the harmonic level, the true ZPE can be approximated via $E_\text{ZP}[\mathbf{R}] \approx \int_0^\infty\mathrm{d}\omega\rho_\mathbf{R}(\omega)\hbar\omega/2$, where $\rho_\mathbf{R}(\omega)$ is the DFPT phonon density of states and $\hbar\omega/2$ the ZPE of a quantum harmonic oscillator. At constant volume, the only free parameter in the $Fm\bar{3}m$\ structure is the Wyckoff parameter $x$, for which we have plotted $E_\text{tot}$, $E_\text{el}$, and $E^\text{harm}_\text{ZP}$ in Fig.~\ref{fig:ZPE_vs_free_dist} for a lattice constant of $a=\SI{6.242}{\angstrom}$. The results for other lattice constants, i.e., pressures, are provided in Supplementary Fig.~9 in the SM. As can be appreciated in this figure, the inclusion of the ZPE, even at the harmonic level, shifts the position of the minimum of the total energy considerably and puts it almost exactly at the minimum position $\tilde{x}$ predicted by the \mbox{SSCHA}. The differences in $d_\text{H-Si}$ between the \mbox{SSCHA}\ calculations and the ZPE analysis are, in fact, of the order of \SI{1}{m\angstrom}, i.e., well within the observed stochastic noise in \mbox{SSCHA}. We want to note that the same is true for LaBH$_8$\ (see Supplementary Fig.~10 of the SM). Furthermore, inclusion of ZPE reduces the total energy at its minimum by $\sim$\SI{30}{meV/uc}, agreeing very nicely with the result from \mbox{SSCHA}\ ($\sim$\SI{27}{meV/uc}). This is a quite remarkable result as it demonstrates the importance of QI effects of the light H ions on the dynamic stability of the hydride materials, and shows that the minimum structure from \mbox{SSCHA}\ can already be obtained at the level of harmonic ZPE corrections, at least for this class of materials. 
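As a toy illustration of this harmonic ZPE estimate, the integral $\int_0^\infty\mathrm{d}\omega\,\rho(\omega)\hbar\omega/2$ can be evaluated numerically with a Debye density of states standing in for the actual DFPT phonon DOS (all numbers below are illustrative assumptions, not the BaSiH$_8$\ values):

```python
import numpy as np

# Toy evaluation of E_ZP = int rho(w) * (hbar*w/2) dw with a Debye DOS
# rho(w) = 9*N*w^2/w_D^3 for w <= w_D, normalized to 3N modes.
# Energies are in meV; N and the Debye energy are assumed toy values.
N = 10                  # atoms per unit cell (as in BaSiH8)
w_D = 100.0             # assumed Debye energy, meV

w = np.linspace(0.0, w_D, 20001)
rho = 9.0 * N * w**2 / w_D**3

def trapz(f, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

modes = trapz(rho, w)                    # -> 3N = 30 modes
E_zp = trapz(rho * w / 2.0, w)           # -> (9/8) N w_D = 1125 meV
print(f"{modes:.1f} modes, E_ZP = {E_zp:.1f} meV")
```

For the real material, $\rho(\omega)$ would be the DFPT phonon DOS interpolated on a dense $\mathbf{q}$-grid, as described in the Methods.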
\begin{figure}[t] \includegraphics[width=\columnwidth]{fig4_Etot_ZPE.png} \caption{\textbf{Electronic total energy, ZPE and resulting total energy} as a function of H-Si distance, where the DFT minimum $x$ and the \mbox{SSCHA}\ minimum $\tilde{x}$ are marked explicitly, the latter coinciding with the minimum position of $E_\text{tot}$. The three energy curves are plotted relative to their respective values at $x$. The solid lines represent a cubic spline for $E_\text{el}$ and second order polynomial fits for the other energies.} \label{fig:ZPE_vs_free_dist} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{fig5_effects_ph.png} \caption{\textbf{Different effects contributing to shifts in the phonon dispersions:} \textbf{a} Harmonic dispersion for atomic positions defined by $x$ (dashed black) and $\tilde{x}$ (dotted blue). \textbf{b} \mbox{SSCHA}-obtained dispersions for fixed atomic positions defined by $x$ (dotted green lines) and harmonic reference (dashed black). \textbf{c} Dispersions for \mbox{SSCHA}-relaxed atomic positions defined by $\tilde{x}$ (solid red), obtained from $C_\text{sum}$ (dotted grey, see text for more details), and for the harmonic case (dashed black).} \label{fig:effects_ph} \end{figure*} Having established that the ZPE has a crucial effect on the structure, we investigate the effect of the changed structure on the phonon dispersions. In Fig.~\mbox{\ref{fig:effects_ph}\textbf{a}}, we present the harmonic dispersions for atomic positions defined by $x$ and $\tilde{x}$. We observe large differences for the high-energy optical modes above \SI{150}{meV}, but also for the low $T_{2g}$ mode at $\Gamma$. The energy shifts for these modes are between 15 and \SI{25}{meV}. \textbf{Anharmonicity and ph-ph interaction effects:} Having established the QI effects on the structure and the phonon dispersions, we now want to assess the contributions of phonon anharmonicity (anh) and ph-ph interactions. 
To do that, we perform a \mbox{SSCHA}\ calculation while keeping the ions fixed at the DFT equilibrium positions, thus qualitatively~\footnote{In the case of fixed ions, we obtain non-vanishing forces within \mbox{SSCHA}. As the individual force components on the H atoms are still small ($\sim$\SI{150}{meV\per\angstrom} in the cell with $a=\SI{6.242}{\angstrom}$, for example), a fixed-ion calculation seems applicable for a qualitative insight. The forces on atom $i$ are obtained as $\mathbf{F}_i=-\partial\mathcal{F}/\partial\mathcal{\mathbf{R}}_i$.} removing \textit{structural} effects on the phonon frequencies. The phonon dispersions obtained from this calculation are presented in Fig.~\mbox{\ref{fig:effects_ph}\textbf{b}}, where we find that all H-dominated optical modes in the whole BZ experience a sizable frequency renormalization. It is worth noting that fixed-ion \mbox{SSCHA}\ calculations indicate the onset of dynamical instability between 5 and \SI{10}{GPa} (8.4 and \SI{12.2}{GPa}, respectively, for the \mbox{SSCHA}\ pressure $\tilde{p}$, see Supplementary Fig.~11 in the SM), which is reasonably close to the frozen-phonon anharmonic (harmonic) \mbox{$p_\text{dyn}$\ = \SI{3(5)}{GPa}}~\cite{Lucrezi2022}. \textbf{Combining QI, anh, and ph-ph effects:} Having attempted to separate the different contributions from QI effects, anharmonicity, and ph-ph interactions, it is tempting to combine the individual contributions to the phonon dispersion as a simple sum and compare it to the full \mbox{SSCHA}\ calculation.
We approach this task in real space via adding the force constants $C_\text{sum} = C_\text{harm} + \Delta C_\text{QI} + \Delta C_\text{anh/phph}$, where $C_\text{harm}$ is the force-constant matrix for harmonic phonons of structure $x$, $\Delta C_\text{QI}$ are the force-constant contributions due to the structural changes upon including the ZPE (\textbf{QI effects}), and $\Delta C_\text{anh/phph}$ are the force-constant contributions due to the \textbf{anharmonic} and \textbf{ph-ph} interaction effects (see Supplementary Note 3 in the SM). The dispersions obtained from $C_\text{sum}$ are shown in Fig.~\mbox{\ref{fig:effects_ph}\textbf{c}}, where we also present as reference the phonon dispersions from a full \mbox{SSCHA}\ calculation. Again, we find very good agreement between these results, providing further \textit{a posteriori} justification and support for the qualitatively introduced separation Ansatz. In Table~\ref{tab:effects}, we summarize the considered effects, methods, and the corresponding physical description of the ions. \begin{table}[h] \caption{\textbf{Overview of ionic treatment:} The separate cases are classified according to the structural and phonon treatment. The phonons are obtained either via DFPT or \mbox{SSCHA}. The ground-state (GS) structure is determined by minimizing either the electronic energy within DFT or the total energy including the ZPE (using DFPT or \mbox{SSCHA}).} \label{tab:effects} \begin{tabular}{c||c|c} $^{\text{\hspace{0.1\columnwidth}phonon}\rightarrow}_{\downarrow\text{structure}}$ & DFPT & \mbox{SSCHA} \\ \toprule $x$ $\hat{=}$ min($E_\text{el}$) & Classical ions & Interact. quantum ions \\ DFT & in el. GS & in el. GS \\ & \textbf{standard} & \textbf{anh}+\textbf{ph-ph} \\ \hline $\tilde{x}$ $\hat{=}$ min($E_\text{tot}$) & Quantum ions & Interact.
quantum ions \\ ZPE/\mbox{SSCHA} & in lattice GS & in lattice GS \\ & \textbf{QI} & \textbf{QI}+\textbf{anh}+\textbf{ph-ph} \end{tabular} \end{table} \subsection{Superconductivity} As BaSiH$_8$\ is potentially a very promising high-$T_\text{c}$ superconductor, we also want to assess the implications of the above-mentioned effects on its superconducting (SC) properties. To do that, we solved the anisotropic Migdal-Eliashberg (ME) equations as implemented in EPW~\cite{ponce_epw_2016} for the four cases in Table~\ref{tab:effects}. Details about the calculation within EPW are provided in the Methods section and in the SM; at this point we only want to highlight that for each case we used the corresponding force constants to compute the dynamical matrices, and computed the electron-phonon ($ep$) coupling matrix elements as the self-consistent first-order variation of the potential using the equilibrium positions as defined in Table~\ref{tab:effects}~\footnote{A more rigorous treatment would require the replacement of the polarization vectors $\mathbf{e}_{\mathbf{q}\nu}$, as well as either the inclusion of the force term arising in the DF(P)T description of the structure defined by $\tilde{x}$, or an $\mathcal{F}$-based description of the $ep$ matrix elements within the \mbox{SSCHA}.}. In Table~\ref{tab:SCprops}, we summarize the obtained values for quantities characterizing the SC state, i.e., the $ep$ coupling strength $\lambda$, the logarithmic average of the phonon frequencies $\omega_\text{log}$, and the SC critical temperature $T_\text{c}$~\footnote{The $ep$ coupling strength $\lambda(\omega) = 2\int_0^\omega\mathrm{d}\omega' \frac{\alpha^2F(\omega')}{\omega'}$ and the logarithmic average phonon frequency $\omega_\text{log} = \exp\left(\frac{2}{\lambda}\int_0^\infty\mathrm{d}\omega\frac{\alpha^2F(\omega)\ln\omega}{\omega}\right)$ are obtained from the Eliashberg spectral function $\alpha^2 F(\omega)$.}.
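The two spectral integrals defined in the footnote can be evaluated for any spectral function; the sketch below (toy numbers, not the computed BaSiH$_8$\ spectrum) uses a narrow Gaussian $\alpha^2F$ mimicking a single Einstein mode, for which $\lambda\to\lambda_0$ and $\omega_\text{log}\to\omega_E$ in the delta-function limit:

```python
import numpy as np

# lambda and omega_log evaluated from a model Eliashberg function:
# a narrow Gaussian alpha^2 F centered at an Einstein frequency w_E
# with nominal coupling lam0 (illustrative values only).
lam0, w_E, sigma = 1.25, 50.0, 1.0       # meV

w = np.linspace(1e-3, 120.0, 200001)
a2F = (lam0 * w_E / 2.0) / (sigma * np.sqrt(2.0 * np.pi)) \
      * np.exp(-((w - w_E) ** 2) / (2.0 * sigma ** 2))

def trapz(f, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

lam = 2.0 * trapz(a2F / w, w)                               # ~ lam0
w_log = np.exp(2.0 / lam * trapz(a2F * np.log(w) / w, w))   # ~ w_E
print(f"lambda = {lam:.3f}, omega_log = {w_log:.2f} meV")
```

The tabulated $\lambda$ and $\omega_\text{log}$ follow from the computed $\alpha^2F(\omega)$ by the same quadratures.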
The corresponding Eliashberg spectral functions $\alpha^2F(\omega)$ and the cumulative coupling strengths $\lambda(\omega)$ are shown in Supplementary Fig.~12 in the SM. We want to stress that the provided values for $T_\text{c}$\ are obtained by solving the full ME equations. The distribution of the SC gap function $\Delta_\mathbf{k}$ indicates no change in the distinct two-gap shape calculated in Ref.~\cite{Lucrezi2022}, and is therefore not shown here. \begin{table}[h] \caption{\textbf{SC properties from ME theory:} Critical temperature $T_\text{c}$, $ep$ coupling strength $\lambda$, and logarithmic phonon frequency average $\omega_\text{log}$ for the cases discussed in the text.} \label{tab:SCprops} \begin{tabular}{cc|ccc} effect & (struct., phon.) & $\omega_\mathrm{log}\,/\,\mathrm{meV}$ & $\lambda$ & $T_\text{c}$\,/\,K\\ \toprule \textbf{standard} & ($x$, harm.) & 54 & 1.25 & 84 \\ \textbf{QI} & ($\tilde{x}$, harm.) & 47 & 1.43 & 82 \\ \textbf{anh}+\textbf{ph-ph} & ($x$, \mbox{SSCHA}) & 54 & 1.38 & 90 \\ \textbf{QI}+\textbf{anh}+\textbf{ph-ph} & ($\tilde{x}$, \mbox{SSCHA}) & 28 & 2.44 & 96 \end{tabular} \end{table} The differences in $\omega_\text{log}$, $\lambda$, and $T_\text{c}$\ are of the order of 10-15\%, except for the full \mbox{SSCHA}\ calculation, where we see a considerable increase in $\lambda$ to almost double the harmonic value, but also a decrease in $\omega_\text{log}$ that partly compensates the enhancement of $\lambda$. The resulting $T_\text{c}$\ is increased from 84 to \SI{96}{K}, showing that the full inclusion of all discussed effects results in only a $\sim$15\% change in $T_\text{c}$\ for BaSiH$_8$. \section{Discussion} In this work, we study the effects of quantum lattice dynamics within the \mbox{SSCHA}\ framework on the structure and the dynamical stability of the $Fm\bar{3}m$\ phase of BaSiH$_8$.
The \mbox{SSCHA}\ structure relaxation suggests a 2\% elongation of the H-H and H-Si bonds for the studied pressure range of 10 to \SI{30}{GPa} ($\sim\SI{30}{\milli\angstrom}$). In the phonon dispersions, we find an overall softening of the high optical modes, as well as a dynamic lattice instability characterized by imaginary \mbox{SSCHA}\ phonon frequencies in the $T_{2g}$ mode at $\Gamma$ below \SI{20}{GPa}, setting the estimate for the critical dynamical pressure to $\tilde{p}_\text{dyn}\approx\SI{20}{GPa}$. We have further demonstrated the importance of QI effects over anharmonicity and ph-ph interactions, and found that the change in structure, and consequently in pressure, can already be understood by considering harmonic ZPE corrections to the total electronic energy of the system alone (which can be obtained much faster and more easily than by performing a full \mbox{SSCHA}\ calculation). We are now left with the question: what is the stability boundary of $Fm\bar{3}m$-BaSiH$_8$? In our previous work on BaSiH$_8$~\cite{Lucrezi2022}, we challenged the common practice of assuming the range of metastability of high-pressure hydride phases to coincide with the range of (an)harmonic dynamical stability, a practice which systematically underestimates the stabilization pressures needed to synthesize these materials in reality~\cite{PhysRevB.99.220502,Kong2021,Errea2016,Einaga2016}. Dynamical stability is only a prerequisite for thermodynamic metastability, which is characterized by the existence of a distinctive enthalpy barrier that protects a metastable phase from decomposition into other phases (kinetic stability). We calculated the enthalpy transition path to the thermodynamic ground state at different pressures (corresponding to a decomposition of the $Fm\bar{3}m$\ BaSiH$_8$\ phase into \mbox{BaSiH$_6$ + H$_2$} in molecular form), and could estimate the barrier height from the intermediate structures.
In combination with the calculated convex hulls for the Ba-Si-H system, we can argue with confidence that the $Fm\bar{3}m$\ BaSiH$_8$\ phase could be synthesized above \SI{100}{GPa}, and retained down to $\sim\SI{30}{GPa}$, where a distinctive enthalpy barrier still exists. At lower pressures, metastable $Fm\bar{3}m$\ BaSiH$_8$\ will decompose, even though (anharmonic) lattice dynamics calculations predict it to be stable. Hence, kinetic stability poses a stricter bound for synthesizability than dynamical stability. \textbf{In conclusion}, employing \textit{ab-initio} machine-learned MTPs, we were able to perform \mbox{SSCHA}\ calculations for BaSiH$_8$\ at various pressures for supercells up to $4 \times 4 \times 4$ and more than \num{100000} individuals. The inclusion of QI effects, anharmonicity, and ph-ph interactions within the \mbox{SSCHA}\ framework increases the pressure of dynamical stability from $p_\text{dyn}\approx\SI{3}{GPa}$ to $\tilde{p}_\text{dyn}\approx\SI{20}{GPa}$. We identified the change in structure due to QI effects as the main driving force here, something that can already be captured to good approximation at the level of harmonic zero-point energy corrections. Most importantly, the determined $\tilde{p}_\text{dyn}\approx\SI{20}{GPa}$ is still below the $p_\text{kin}\approx\SI{30}{GPa}$ posed by the concept of kinetic stability; thus the latter represents a much stricter bound for the stability and realizability of these materials. \section{Methods} \subsection{DF(P)T calculations} All DFT and DFPT calculations of electronic and vibrational properties were carried out using the plane-wave pseudopotential code \textsc{Quantum Espresso}~\cite{giannozzi_advanced_2017}, scalar-relativistic optimized norm-conserving Vanderbilt pseudopotentials~\cite{hamann_optimized_2013}, and the \textsc{Pbe}-\textsc{Gga} exchange and correlation functional~\cite{perdew_generalized_1996}.
The unit cell calculations are done in the \textit{fcc} primitive unit cell with 10 atoms, a 12$\times$12$\times$12 $\mathbf{k}$-grid, and a plane-wave cutoff energy of \SI{80}{Ry}. The 2$\times$2$\times$2 supercell calculations were done on a 6$\times$6$\times$6 $\mathbf{k}$-grid. Further details are provided in Supplementary Method 1 in the SM. \subsection{\mbox{SSCHA}\ calculations} The calculations in the \mbox{SSCHA}\ are done in the constant-volume relaxation mode, i.e., minimizing the free energy with respect to the average atomic positions $\mathcal{R}$ and the force constants $\Phi$, as implemented in the \mbox{SSCHA}\ Python package \cite{Monacelli_2021}. We use the DFT equilibrium atomic positions and the DFPT dynamical matrices on a 2$\times$2$\times$2 $\mathbf{q}$-grid as initial guesses for $\mathcal{R}$ and $\Phi$, respectively. The starting point for the larger supercells is obtained by interpolating the previously converged auxiliary dynamical matrices. The total energies, forces, and stress tensors for the individuals are obtained from DFT calculations or from machine-learned interatomic potentials in the framework of MTPs, see below. At the end of a minimization run, a new population with a higher number of individuals $N$ is generated from the minimized trial density matrix, until convergence. We set two stopping criteria for the minimization loops: a Kong-Liu ratio for the effective sample size~\footnote{The effective sample size is calculated as $N_\text{eff}=\left(\sum_i\rho_i\right)^2/\sum_i\rho_i^2$, where the $\rho_i$ are the importance-sampling weighting factors.} of 0.2, and a ratio of $<10^{-7}$ between the free energy gradient with respect to the auxiliary dynamical matrix and its stochastic error. In calculations based on DFT we increased $N$ up to $10^4$ individuals, for the MTP cases up to $10^5$.
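A minimal implementation of this stopping criterion, using the standard Kong-Liu form $N_\text{eff}=(\sum_i\rho_i)^2/\sum_i\rho_i^2$ (toy weights; the function name is ours):

```python
import numpy as np

# Kong-Liu effective-sample-size ratio N_eff / N for importance-sampling
# weights rho_i, with N_eff = (sum rho)^2 / sum rho^2.  The ratio is 1
# for a freshly generated population (uniform weights) and decreases as
# the trial density drifts away from the one the ensemble was drawn from.
def kong_liu_ratio(rho):
    rho = np.asarray(rho, dtype=float)
    n_eff = rho.sum() ** 2 / np.sum(rho ** 2)
    return n_eff / rho.size

print(kong_liu_ratio(np.ones(1000)))                 # 1.0, fresh population
rng = np.random.default_rng(0)
print(kong_liu_ratio(rng.exponential(size=1000)))    # < 1 after reweighting
```

When the ratio drops below the chosen threshold (0.2 here), the current population is considered exhausted and a new one is generated.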
The anharmonic phonon dispersions are obtained from the positional free-energy Hessians without the fourth-order term, unless explicitly specified otherwise. The final atomic positions are obtained from the converged average atomic positions $\mathcal{R}$, and the pressure as the derivative of the converged free energy with respect to a strain tensor. The free energy difference between the last two populations in the 2$\times$2$\times$2 cells is well below \SI{1}{meV/u.c.}, and well below \SI{0.1}{meV/u.c.} for the larger supercells. The total forces in the last population are well below $10^{-6}\,\mathrm{meV}\text{\AA}^{-1}$. The physical phonon frequency differences between the last two populations are below \SI{5}{meV} for the DFT cases and well below \SI{1}{meV} for the MTP case ($T_{2g}$ and $A_{2u}$ converge more slowly, see SM). All calculations were carried out at zero temperature. We note in passing that with these settings, we could reproduce all LaBH$_8$\ results from Ref.~\cite{Errea_LaBH8}. Further details are provided in Supplementary Method 2 in the SM. \subsection{Moment tensor potentials} The MTPs were trained and evaluated using the MLIP package \cite{Shapeev2015-MTP,Novikov_2021}. We choose a functional form of level 26, eight radial basis functions, $R_\text{cut}=\SI{5.0}{\angstrom}$ and $R_\text{min}=\SI{1.2}{\angstrom}$, and trained on 50 structures in a 2$\times$2$\times$2 supercell randomly chosen out of all individuals of the DFT \mbox{SSCHA}\ calculations for a given pressure. We validated the potentials on all individuals in the DFT \mbox{SSCHA}\ calculations and find an RMSE on the total energy of 0.5-\SI{0.6}{meV/atom}, 45-\SI{50}{\milli\electronvolt\per\angstrom} for the force components, and 0.3-\SI{0.4}{GPa} for the diagonal stress tensor components. We further validated the MTPs on 30 randomly chosen individuals in a 3$\times$3$\times$3 supercell and achieve similar RMSEs.
The validations and RMSEs for each pressure are shown in Supplementary Fig.~2 and Supplementary Note 2 in the SM. \subsection{ZPE and total energy} The internal electronic energy $E_\mathrm{el}[\mathbf{R}(x)]$ is obtained from DFT calculations at fixed volume by varying the Wyckoff parameter $x$ of the H positions. The explicit positions $\mathbf{R}(x)$ are given in the SM. The phonon density of states $\rho_{\mathbf{R}(x)}(\omega)$ is obtained using DFPT on a 2$\times$2$\times$2 $\mathbf{q}$-grid, interpolated onto a 16$\times$16$\times$16 $\mathbf{q}$-grid. Smooth ZPE and total energy curves are obtained by second-order polynomial fits in $x$. Due to the shift away from the DFT equilibrium structure, forces on the individual H atoms arise at the DFT level. Around the total energy minimum, the force components are of the order of \SI{150}{\milli\electronvolt\per\angstrom}, i.e., small enough to warrant the use of linear-response theory to gain qualitative and systematic insights. The DFT diagonal stress tensor components (pressures) are decreased by about \SI{2}{GPa}. \subsection{Migdal-Eliashberg theory} The Wannier interpolation of the $ep$ matrix elements onto dense $\mathbf{k}$- and $\mathbf{q}$-grids, and the subsequent self-consistent solution of the fully anisotropic Migdal-Eliashberg equations, were done in \textsc{Epw}~\cite{margine_anisotropic_2013,ponce_epw_2016} for all the cases in Table~\ref{tab:SCprops}. We used coarse 6$\times$6$\times$6 and fine 30$\times$30$\times$30 $\mathbf{k}$- and $\mathbf{q}$-grids, a Matsubara frequency cutoff of \SI{1}{eV}, and a Morel-Anderson pseudopotential $\mu^*=0.10$. Further details are provided in Supplementary Method 3 in the SM. \section*{Acknowledgments} We thank Pedro Pires Ferreira, Carla Verdi, and Luigi Ranalli for fruitful discussions. This work was supported by the Austrian Science Fund (FWF), projects P30269 and P32144.
Calculations have been performed at the dCluster of TU Graz, as well as on the Vienna Scientific Cluster (VSC4 and VSC5). L.B. acknowledges support from Fondo Ateneo-Sapienza 2018-2022. S.D.C. acknowledges computational resources from CINECA, proj. IsC90-HTS-TECH\_C and IsC99-ACME-C. \section*{Author contributions} R.L., E.K., and S.D.C. performed the calculations, M.A. introduced the idea of MTP, and C.H. and L.B. conceived and supervised the project. All authors contributed to the discussion of the results and participated in preparing the manuscript. \section*{Competing interests} The authors declare no competing interests. \section*{Data availability} The authors confirm that the data supporting the findings of this study are available within the article and its supplementary materials. Further information is available upon request.
\section{Introduction}\label{Intro} This paper addresses the problem of the global history of star formation and chemical enrichment of the whole Universe, otherwise known as the baryon budget in galactic halos or the history of the so-called star formation rate density SFRD(z). Since the seminal studies by \citet{Tinsley1980} and \citet{Madau_1996}, the cosmic star formation has been the subject of countless papers that are impossible to recall here. The evolution of the SFRD(z) over cosmic times is crucial to understand galaxy formation and evolution and to constrain any theory devoted to this subject \citep{Hopkins2004, Wilkinsetal2008, Guoetal2011, Bouwensetal2012, Cucciatietal2012, Tescarietal2014, KatsianisTescarietal2017, Abramsonetal2016}. The evolution of the SFRD(z) is nowadays known with unprecedented accuracy up to the distant Universe thanks to the multi-wavelength surveys carried out by many groups, among which we recall \citet{Bernardietal2010, Gonzalezetal2011, Bouwensetal2012, Lee_etal2011, Smitetal2012, Santinietal2012, Schenkeretal2013, vandenBurgetal2010, Gruppionietal2013, Parsaetal2016, Reddyetal2008, Magnellietal2011, Sobraletal2013, Alavietal2014, Cucciatietal2012, Lyetal2011}. The situation has recently been systematically summarized and reviewed by \citet{MadauDickinson2014} and \citet{KatsianisTescarietal2017}, to whom we refer for all details. The interpretation of the cosmic SFRD(z) has been addressed by many theoretical studies, among which we recall \citet{RaseraTeyssier2006}, \citet{HernquistSpringel2003}, and \citet{KatsianisTescarietal2017}, with either analytical, semi-analytical, or hydrodynamical simulations. In particular, they investigated the effect of the energy feedback from supernova explosions, stellar winds, and AGN activity on modeling the cosmic star formation.
They made use of an improved version of the P-GADGET3 code of \citet{Springel2005} with chemical enrichment \citep{Tornatore-etal2007}, supernova energy and momentum-driven galactic winds \citep{PuchweinSpringel2013}, AGN feedback \citep{SpringelDiMatteo2005,Planellesetal2013}, metal-line cooling \citep{Wiersma2009a,Wiersma2009b} plus molecule/metal cooling \citep{Maioetal2007}, supernova-driven galactic winds with feedback \citep{Baraietal2013}, thermal conduction \citep{Dolagetal2004}, and other more technical details \citep[see][for a more exhaustive description]{Tescarietal2014}. In general, the shape of the SFRD(z) as a function of the redshift is reproduced by the models. However, according to \citet{Tescarietal2014}, the SFRD(z) is insensitive to feedback at $z > 5$, unlike what happens at lower redshifts. They find that the key factor for reproducing the observational SFRD is a combination of strong supernova-driven winds and early AGN feedback in low mass galaxies. Similar conclusions are reached by \citet{KatsianisTescarietal2017}, in the sense that the AGN feedback they adopt decreases the SFRD at $z <3$ but not sufficiently at higher redshift. According to them, the kind of feedback one would need to reconcile things is a strong feedback at high redshifts and a less efficient one at low redshift. They also show that variable galactic winds, which are efficient at decreasing the star formation rate (SFR) of low mass galaxies, are quite successful in reproducing the observational data. \textsf{Aims of this study}.
Although the theoretical SFRD(z) obtained in those studies nicely reproduces the observational one (which is not a surprise, since some important physical ingredients, such as the energy feedback from AGNs via galactic winds on a galaxy's SFR, have yielded the sought variation of the cosmic SFRD with the redshift), we still feel that the results are not yet conclusive as far as the key physical process shaping the cosmic SFRD(z) is concerned. Casting the question in a different way, we would like to understand whether the cosmic SFRD(z) is driven more by causes of external or internal nature. Among the external causes, chief is the gravitational building up of structures (the proto-galaxies made of dark and baryonic matter) via hierarchical aggregation, which leads to a mass function of galaxies that is not the same at different redshifts. The numerical simulations of cosmic mass aggregation show that the halo mass distribution function, i.e., the relative number of galaxies per mass interval, on the one hand gets steeper and steeper with mass at increasing redshift; on the other hand, and even more importantly, several different solutions are found \citep[and references therein]{Murray_etal2013}, all worth being explored. Among the internal causes, chief are the star formation, how this varies with the total mass and the mean density of the galaxy, how the SFR varies with time within a galaxy, and the physical properties of the interstellar medium.
Another important issue is whether in a galaxy the SFR always starts at maximum efficiency and declines with time, so that some ``quenching mechanisms'' must be invoked at the very early epochs to explain the decline of the SFRD(z) at increasing redshift, or rather it starts low, grows to a maximum and then declines (typical of spheroidal systems), or alternatively it remains mild and nearly constant (as in disk-like objects), or in some cases it goes through a series of episodes (the so-called bursting model, typical of low mass galaxies). Finally, we would like to quantify the relative weight of the hierarchical aggregation compared to the intrinsic SFR. Most likely both concur to shape the SFRD(z), but to what extent? Investigations based on large scale numerical simulations that are possible with P-GADGET3-like codes have the gravitational aggregation built in by default, so that only the effect of different prescriptions for the star formation and associated energy injection and feedback can be tested. On the other hand, testing the effects of the various physical ingredients with direct hydrodynamical large scale simulations is expensive and time consuming. For these reasons it is still useful and interesting to address the problem in a simple fashion by means of semi-analytical models able to capture the essence of the problem. The plan of the paper is as follows. In Section \ref{CSFR} we shortly review the present-day observational picture of the SFRD(z), recalling the main ways of measuring it, the various points of uncertainty, in particular the role of the stellar initial mass function (IMF), the variation of the star formation histories in galaxies of different type, and the distance determinations.
In Section \ref{strategy} we present the strategy of the present study aimed at deriving the SFRD(z) from three elementary building blocks: (i) the current hierarchical view of galaxy formation providing the expected number of galaxies (made of dark and baryonic matter) per unit volume (usually a $Mpc^3$) presented in Section \ref{first_block}, (ii) simple models of galaxy formation and evolution for different values of total mass and morphological type that are presented in Section \ref{second_block} (they provide the rate of star formation, mass in stars, gas content, metallicity and other useful properties of individual galaxies), and (iii) finally, the evolutionary population synthesis technique that is used to derive the magnitudes and colors of the model galaxies as function of time (redshift) that is presented in Section \ref{third_block}. In Section \ref{SFRD_theory} we fold together the results of the previous sections, derive the cosmic SFRD and compare it to observational data. In order to highlight and single out the role played by the galaxy number density distribution and the galaxy SFR at different epochs, we perform some ad hoc simulations by varying some key assumptions and illustrate the results. Finally in Section \ref{Conclusions} some conclusive remarks are made. \begin{figure*} \centering{ {\includegraphics[width=7.5cm]{figure1a.ps} } {\includegraphics[width=7.5cm]{figure1b.ps} } } \caption{ The history of cosmic star formation according to \citet[][their Fig.8]{MadauDickinson2014}. The left panel shows the rest-frame FUV+IR data (blue and red dots respectively), whereas in the right panels the same data are plotted separately. The sources of data are those listed in Table 1 of \citet{MadauDickinson2014}. The solid line in the three panels is the analytical best fit of the data given by \citet{MadauDickinson2014}. 
} \label{fig_madau_dickinson} \end{figure*} \section{The cosmic star formation rate}\label{CSFR} The SFRD(z) we intend to investigate and reproduce is the one presented by \citet[][and references therein]{MadauDickinson2014}. In general, to infer the SFRD(z) from the fluxes measured in suitable pass-bands (typically UV and near and far infrared, NIR and FIR, respectively) and to express it in masses per unit time and unit volume of space, one needs some assumptions about the correlation between the measured fluxes and the SFR, the corrections for the effect of dust, which absorbs part of the UV light and re-emits it in the NIR and FIR, the IMF together with some hints about its constancy or variation with time and space, the kind of star formation at work on cosmic scales and over the Hubble time, and other details. \textsf{Major uncertainties}. A number of problems affect the determination of the SFRD(z), among which we briefly recall: \textit{Stellar mass census}. Deriving the mass in stars (and hence the underlying IMF) from their light is a cumbersome affair because it requires information on the mass to light ratio (M/L) of the stellar populations, which in turn depends on the age, the history of star formation and the amount of surrounding dust (extinction). In general, the conversion from light to mass is made through population synthesis models which provide the relationship between the mass in stars (both luminous and faint, hence invisible), the light emitted by these, and the relative number of stars born in different generations, all contributing to the light and the mass at the present day; in other words, the history of star formation. Among other things, these models depend on the IMF. On the other hand, the IMF is difficult to determine directly from the observational data for a number of reasons that do not need to be examined here \citep[see][for a detailed discussion of the issue]{MadauDickinson2014}. The obvious way out is to assume a certain IMF.
The most popular one is the \citet{Salpeter1955} law, even though it is known to predict M/L ratios higher than observed, thus requiring a deep revision of the IMF at the low mass end \citep[see][]{Kroupa_etal_1993,Chabrier2015, HennebelleChabrier2011}. Another difficulty affecting the stellar mass census is the detection of low mass and dusty galaxies: a great portion of the stellar mass could be missing in current data. \textit{Variations of the star formation history in galaxies}. The star formation history (SFH) of a galaxy may change a lot over the years, both on short and long timescales. As a matter of fact, young stars outshine the old ones, thus affecting the total spectral energy distribution; hence the total mass of the old stars may be largely underestimated and their effect can hardly be singled out. \textit{Distances}. Finally, the sources of observational data change with the distance, so that homogeneous data sets extending from the local Universe all the way up to redshift $z \leq 10$ are not possible: for instance, in the local Universe ($0<z<1$) most of the IR data are not due to dust in star-forming regions but to dust in the ISM. This trend tends to decrease with the distance. In the redshift interval $1<z<4$, no IR data are available for individual sources except for the hyper-luminous ones, thus heavily affecting the evaluation of the IR luminosity density. At larger redshifts essentially only data for hyper-luminous sources are available, thus worsening the problem. For all these reasons, \citet{MadauDickinson2014} limit their analysis to the redshift interval $0<z<8$. \textsf{Analytical Fits}. In this work, we will make use of the analytical fits derived by \citet{MadauDickinson2014} and \citet{MadauFragos2017}.
Both have similar functional dependencies, given by \begin{equation} SFRD(z)=\gamma_0 \frac{(1+z)^{\gamma_1}}{ 1 + (\frac{1+z}{\gamma_2})^{\gamma_3} }\,\, M_\odot yr^{-1} Mpc^{-3} \label{eq_madau_dickinson} \end{equation} The relation of \citet{MadauDickinson2014} has $\gamma_0=0.015$, $\gamma_1=2.7$, $\gamma_2=2.9$, and $\gamma_3=5.6$, and it is shown in the panels of Fig.\ref{fig_madau_dickinson} together with the original data from the same source. In recent times, the above relationship has been slightly revised by \citet{MadauFragos2017}, who used the \citet{Kroupa2001} IMF. The new coefficients and exponents are $\gamma_0=0.01$, $\gamma_1=2.6$, $\gamma_2=3.2$, and $\gamma_3=6.2$. It is easy to check that the old and new relationships agree to within a factor of about two. Both represent the foot-print of the past star and galaxy formation history of the Universe that needs to be deciphered (see Fig.\ref{two_fits} for a comparison). \section{Strategy: deriving the SFRD(z) from fundamental building blocks}\label{strategy} In this study, we intend to derive the observational SFRD(z) from a small number of hypotheses or ``building blocks'': 1) The cosmic scenario and the hierarchical building up of bound structures, which provide the number density of DM halos of mass $M_{DM}$ and radius $R_{DM}$ as a function of the redshift, $N(M_{DM}, z)$. 2) The aggregation of BM in DM halos, which provides the visible component of galaxies and their star formation and chemical enrichment. This gives rise to a complicated interplay among several important physical processes, chief among them the gravitational contraction and collapse together with gas heating, gas cooling and star formation. All this requires a suitable timescale to occur, so that the building up of the stellar component of a galaxy cannot be instantaneous. The simplest model apt to describe this situation is the so-called ``infall model'' developed by \citet{Chiosi1980}.
3) The spectro-photometric properties of the stellar populations of galaxies, which provide the evolution of the spectral energy distribution as a function of time, SFH and chemical enrichment. This gives us magnitudes and colors of the stellar populations in galaxies as a function of time and/or redshift for any photometric system in use. The SFRD(z) results from folding together the building blocks above: at each redshift we know the number density of DM halos and associated BM galaxies born at redshifts between $z_{f}$ and $z$, where $z_{f}$ is the redshift at which the first galaxies are supposed to form ($z_{f}=20$ in our case). At each redshift we calculate the number density of galaxies per $Mpc^3$ as a function of the mass of the DM halo (this immediately sets the mass of the BM galaxy hosted by a DM halo). For this ideal sample of galaxies we calculate the total and mean cosmic density of star formation, metallicity, mass in stars, and luminosity emitted in any pass-band according to \begin{equation} [ \mathcal{F} ]_{T} = \int \int \mathcal{F}(M_{DM}, z, z_{f} ) \times N(M_{DM},z,z_{f} )dM_{DM} dz \end{equation} \begin{equation} < \mathcal{F} > = \frac { \int \int \mathcal{F}(M_{DM}, z, z_{f}) \times N(M_{DM},z,z_{f})dM_{DM} dz } {\int \int N(M_{DM}, z, z_{f}) dM_{DM} dz} \end{equation} \noindent where $\mathcal{F}$ stands for any of the physical quantities listed above and the integrals are carried out over the adopted range of $M_{DM}$ and over $z_{f} \geq z \geq 0$. The correspondence between the halo mass $M_{DM}$ and the mass $M_{BM}$ of the BM galaxy inside is fixed by the cosmological model of the Universe (see below).
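As a quick numerical cross-check of the analytical fits recalled in Section \ref{CSFR}, the sketch below (purely illustrative Python, not part of the modeling machinery) evaluates equation (\ref{eq_madau_dickinson}) with both sets of coefficients; it recovers the SFRD peak near $z \simeq 2$ and the factor-of-two agreement between the two calibrations.

```python
# Evaluate the Madau & Dickinson (2014) and Madau & Fragos (2017)
# analytical fits to the cosmic SFRD, in Msun yr^-1 Mpc^-3.

def sfrd(z, g0, g1, g2, g3):
    """SFRD(z) = g0 * (1+z)^g1 / (1 + ((1+z)/g2)^g3)."""
    return g0 * (1.0 + z) ** g1 / (1.0 + ((1.0 + z) / g2) ** g3)

MD14 = (0.015, 2.7, 2.9, 5.6)   # Madau & Dickinson (2014) coefficients
MF17 = (0.01, 2.6, 3.2, 6.2)    # Madau & Fragos (2017) coefficients

# Scan redshift on a fine grid to locate the peak of the SFRD(z).
zs = [0.01 * i for i in range(0, 801)]           # 0 <= z <= 8
vals = [sfrd(z, *MD14) for z in zs]
z_peak = zs[vals.index(max(vals))]

print(f"SFRD(0)   = {sfrd(0.0, *MD14):.4f}")     # ~0.015
print(f"peak at z = {z_peak:.2f}")               # ~1.9
print(f"MD14/MF17 at z=2: {sfrd(2, *MD14) / sfrd(2, *MF17):.2f}")
```

The two calibrations indeed remain within a factor of about two of each other over the whole interval $0 \leq z \leq 8$.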
\begin{table*} \begin{center} \caption{ Coefficients of the polynomial interpolation of the relation (\ref{Lukic_interp}), which provides the number density of halos $N(M_{DM}, z)$ per (Mpc/h)$^3$.} \label{coef_lukic} \begin{tabular}{|c|r|r|r|r|r|} \hline Mass $[M_\odot/h]$ & A$_4$ & A$_3$ & A$_2$ & A$_1$ & A$_0$ \\ \hline 5e7 &-2.34275e-5 & 1.28686e-3 & -2.97961e-2 & 2.11295e-1 & 2.02908 \\ 5e8 &-2.76999e-5 & 1.49291e-3 & -3.47013e-2 & 2.13274e-1 & 1.13553 \\ 5e9 &-1.31118e-5 & 6.50876e-4 & -2.36972e-2 & 1.31993e-1 & 0.23807 \\ 5e10 &-1.18729e-5 & 6.65488e-4 & -3.17079e-2 & 1.30360e-1 & -0.59744 \\ 5e11 &-1.47246e-5 & 8.10097e-4 & -4.65279e-2 & 1.13790e-1 & -1.44571 \\ 5e12 & 6.59657e-5 & -7.19134e-4 & -6.99445e-2 & 1.06782e-1 & -2.45684 \\ 5e13 &-7.34568e-4 & 9.99022e-3 & -1.65888e-1 & -9.48292e-2 & -3.11701 \\ \hline \end{tabular} \end{center} \end{table*} \section{First building block: number of DM halos at different redshifts}\label{first_block} We assume the $\Lambda$-CDM concordance cosmology, with values inferred from the WMAP-5 data \citep{Hinshawaetal2009}: flat geometry, $H_0=70.5$ km/s/Mpc, $\Omega_{\Lambda} = 0.72$, $\Omega_m=0.28$, $\Omega_b=0.046$ (giving a baryon ratio $ \Omega_b/\Omega_m \simeq 0.164$), $\sigma_8=0.817$, and $n=0.96$. These values of $\Omega_m$ and $\Omega_b$ fix the corresponding ratio between the baryonic and dark matter masses of individual galaxies, $M_{BM} /M_{DM} \simeq 0.16$, and conversely $ M_{DM} / M_{BM} \simeq 6.1$. As already mentioned, the standard approach to investigate the cosmic SFRD is based on large scale cosmological N-body simulations in the framework of a given cosmological model of the Universe ($\Lambda$-CDM in our case), so that the appearance, growth and subsequent aggregation of perturbations of all scales can be suitably described \citep[e.g.][]{Springeletal2005}. The formation of DM halos and of the BM galaxies inside them is automatically taken into account in the simulations.
The price to pay is a large computational cost, so that the analysis is limited to a few paradigmatic cases. Alternatively, one may adopt the strategy used by \citet{Lukicetal2007}. Starting from the \citet[][]{Warren_etal_2006} halo mass function (HMF), they derive the halo growth function (HGF) in the concordance $\Lambda$-CDM model over a wide range of redshifts, from $z\simeq 20$ to the present (see their Fig.2). The HGF $N(M_{DM}, z)$ gives the number density of halos of different masses per (Mpc $h^{-1})^3$ resulting from all creation/destruction events. By performing a large suite of nested-box N-body simulations with careful convergence and error controls, they determine the mass function and its evolution with statistical and systematic errors within a few percent over most of the considered redshift and mass range. The advantage of the \citet{Lukicetal2007} study is that it provides a halo mass distribution function, $N(M_{DM}, z)$, easy to use in all cases, like the present one, in which galaxy evolution has to be framed in a cosmological context. In order to make use of the \citet{Lukicetal2007} distribution in our analysis, we fit their results with a fourth order polynomial \begin{equation} \log N(M_{DM}, z) = \sum_{j=0}^4 A_j(M_{DM}) \times z^j \label{Lukic_interp}. \end{equation} \noindent The coefficients $A_j(M_{DM})$ are listed in Table \ref{coef_lukic}. The interpolated distribution function for the number density $N(M_{DM},z)$ of halos per $Mpc^3$ as a function of the mass and redshift is shown in Fig. \ref{fig_lukic}. As expected, it closely matches the original one by \citet{Lukicetal2007}.
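For illustration, equation (\ref{Lukic_interp}) is straightforward to evaluate; the short sketch below (an illustrative Python transcription, using only the row of Table \ref{coef_lukic} for $M_{DM}=5\times 10^{11}\,M_\odot/h$) returns the number density of such halos per (Mpc/h)$^3$ at a given redshift.

```python
# Evaluate log N(M_DM, z) = sum_j A_j z^j with the fourth-order
# coefficients fitted to the Lukic et al. (2007) HGF.
# The tuple below is the 5e11 Msun/h row of the coefficients table,
# ordered (A0, A1, A2, A3, A4).
A = {5e11: (-1.44571, 1.13790e-1, -4.65279e-2, 8.10097e-4, -1.47246e-5)}

def log_n_halo(mass, z):
    """log10 of the halo number density per (Mpc/h)^3."""
    return sum(a * z ** j for j, a in enumerate(A[mass]))

n0 = 10.0 ** log_n_halo(5e11, 0.0)    # number density today
n10 = 10.0 ** log_n_halo(5e11, 10.0)  # number density at z = 10
print(f"N(5e11, z=0)  = {n0:.3e}")    # ~3.6e-2 per (Mpc/h)^3
print(f"N(5e11, z=10) = {n10:.3e}")   # orders of magnitude smaller
```

Consistently with the trends discussed below, halos of this mass are orders of magnitude rarer at $z=10$ than at the present epoch.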
Although what we are going to say is well known, see the pioneering study of \citet{PressSchechter1974} and \citet[][for ample referencing]{Lukicetal2007}, for the sake of clarity and relevance for our discussion we note the following: (i) for each halo mass (or mass interval) the number density is small at high redshift and increases towards the present; depending on the halo mass, it either reaches a maximum at a certain redshift followed by a decrease (typical of low-mass halos) or keeps increasing (as in the case of high-mass halos); in other words, at first the creation of halos of a given mass (by spontaneous growth of perturbations to the collapse regime or by mergers) overwhelms their destruction (by mergers), whereas for low mass halos the opposite occurs past a certain value of the redshift; (ii) at any redshift high mass halos are orders of magnitude less frequent than the low mass ones; (iii) at any redshift, the mass distribution of halos has a typical interval of existence, whose upper mass end (cut-off mass) increases at decreasing redshift. Finally, it is worth recalling that both the number densities $N(M_{DM},z)$ of \citet{Lukicetal2007} and the SFRD(z) of \citet{MadauDickinson2014} are per $Mpc^3$, so that comparing theory with observations is less of a problem. However, owing to the many uncertainties affecting the observational data and the crudeness of the theoretical models, small adjustments of the order of a few units can be tolerated in the final comparison. \begin{figure} \centering \includegraphics[width=8.5cm]{figure2.ps} \caption{ The HGF reproduced from \citet{Lukicetal2007}. The number density of halos (in logarithmic scale) is per (Mpc $h^{-1})^3$, where $h=H_0/100$. Each line refers to halos with DM mass in solar units as indicated.} \label{fig_lukic} \end{figure} \begin{table*} \begin{center} \caption{ Expected number densities of DM halos per $ \, [Mpc\, h^{-1}]^{3}$ as a function of the DM mass and redshift $z$.
The DM masses are in $M_{\odot}h^{-1}$.} \label{count_lukic} \begin{tabular}{|c c c c c c c c| } \hline z &$10^7$ & $10^8$ & $10^9$ & $10^{10}$ &$10^{11}$& $10^{12}$ &$10^{13}$ \\ \hline 20.00 & 0.113E+01 & 0.159E-01 & 0.482E-04 & 0.857E-08 & 0.346E-14 & 0.506E-24 & 0.000E+00 \\ 15.00 & 0.658E+01 & 0.215E+00 & 0.390E-02 & 0.111E-04 & 0.930E-09 & 0.335E-16 & 0.120E-45 \\ 10.00 & 0.240E+02 & 0.152E+01 & 0.753E-01 & 0.180E-02 & 0.762E-05 & 0.566E-09 & 0.176E-18 \\ 8.00 & 0.347E+02 & 0.274E+01 & 0.168E+00 & 0.756E-02& 0.104E-03 & 0.103E-06 & 0.691E-13 \\ 6.00 & 0.433E+02 & 0.415E+01 & 0.293E+00 & 0.219E-01& 0.778E-03 & 0.601E-05 & 0.568E-09 \\ 4.00 & 0.433E+02 & 0.485E+01 & 0.389E+00 & 0.419E-01& 0.304E-02 & 0.994E-04 & 0.312E-06 \\ 2.00 & 0.319E+02 & 0.395E+01 & 0.376E+00 & 0.507E-01& 0.585E-02 & 0.437E-03 & 0.190E-04 \\ 1.00 & 0.235E+02 & 0.299E+01 & 0.322E+00 & 0.461E-01& 0.610E-02 & 0.554E-03 & 0.638E-04 \\ 0.80 & 0.218E+02 & 0.278E+01 & 0.309E+00 & 0.445E-01& 0.601E-02 & 0.559E-03 & 0.755E-04 \\ 0.60 & 0.202E+02 & 0.257E+01 & 0.295E+00 & 0.427E-01& 0.586E-02 & 0.556E-03 & 0.869E-04 \\ 0.40 & 0.185E+02 & 0.237E+01 & 0.280E+00 & 0.408E-01& 0.568E-02 & 0.546E-03 & 0.973E-04 \\ 0.20 & 0.169E+02 & 0.217E+01 & 0.266E+00 & 0.387E-01& 0.545E-02 & 0.529E-03 & 0.106E-03 \\ 0.00 & 0.157E+02 & 0.201E+01 & 0.254E+00 & 0.371E-01& 0.525E-02 & 0.512E-03 & 0.111E-03 \\ \hline \end{tabular} \end{center} \end{table*} \section{Second building block: the BM Galaxies inside DM halos}\label{second_block} Given the mass distribution of DM halos as a function of the redshift and knowing the mass $M_{BM}$ of baryons inside thanks to the cosmological proportions $M_{BM}/M_{DM} \simeq 0.16$, one needs a prescription to get a model galaxy out of this lump of matter. DM and BM undergo gravitational collapse, baryons cool down, accumulate towards the core, and form stars. The visible galaxy is gradually built up. 
The timescale needed to reach the stage of nearly complete generation of the stellar content goes from the typical free-fall time (say about 0.5 Gyr or so) to significantly longer than this (say about 2 Gyr or even longer), depending on the galaxy type (mass). The NB-TSPH simulations of \citet{ChiosiCarraro2002} in a monolithic-like scheme and those of \citet{MerlinChiosi2006,MerlinChiosi2007,Merlinetal2012} in the early hierarchical one show that the SFR shifts from a single prominent early episode to an ever more continuous, bursting-like mode as the galaxy mass and/or the over-density of the initial perturbation decreases, and that in massive and intermediate mass galaxies (with $M_{BM}$ from $10^{10}$ to $10^{11} M_\odot$ or more) the building-up of the stellar component is complete up to 90\% or so before $z \simeq 2$. These overall trends and timescales of galaxy formation have been independently found and confirmed by \citet{Thomasetal2005} from their analysis of the line absorption indices in a large sample of galaxies. See also the review of \citet{Renzini2006}. Over the years, the infall models of galactic chemical evolution have reached a very high degree of complexity and sophistication, have been applied to galaxies of different morphological type, going from early types to disks and irregulars, and have proved to successfully explain many observational properties of galaxies such as chemical abundances, gas and stellar content and, with the aid of photometric synthesis tools, also magnitudes and colors. The situation has been widely reviewed by \citet{Matteucci2012,Matteucci2016}: we limit ourselves to mention here those developed by \citet{Bressanetal1994,Chiosietal1998,Tantaloetal1996,Tantaloetal1998} for early type galaxies, and by \citet{PortinariChiosi1999,PortinariChiosi2000,Fattore2009} for spherical and disk galaxies with radial flows of gas.
In the following, we will use models adapted to the one zone description (fully adequate to our purposes) from those elaborated by \citet{Tantaloetal1998} in spherical symmetry. Over the years many important physical phenomena have been incorporated in the chemical models, for instance gas heating by supernova explosions (both Type II and Type Ia) and stellar winds, and gas cooling by radiative emission, needed to correctly evaluate the thermal content of the gas eventually triggering the galactic winds, and finally the radial flows of gas. Only recently have the same physical processes been included also in the N-body simulations of galaxy formation and evolution. Owing to the scarce communication between the two scientific communities, the strong predictive power of the inexpensive chemical models with respect to the heavy, time consuming numerical simulations has often been ignored. The essence of all infall models is the assumption of gas accretion onto the central region of the proto-galaxy at a suitable rate (driven by the timescale $\tau$) and of gas consumption by a Schmidt-like law of star formation. The gas accretion and consumption coupled together give rise to a time dependence of the SFR closely resembling the one resulting from the N-body simulations and the line absorption indices diagnostics. We will come back to this important issue later on in this study. In the framework of the infall models, the luminous mass $M_{BM}$ increases with time according to \begin{equation} \frac{dM_{BM}(t)} { dt} = {\dot{M}}_{BM,0}\,exp(-t/\tau) \label{inf} \end{equation} where $\tau$ is the accretion timescale.
The constant $ \dot{M}_{BM,0} $ is obtained by imposing that at the galaxy age ${T_{G}}$ the value ${M_{BM}(T_{G})}$ is reached: \begin{equation} \dot{M}_{BM,0} = \frac{M_{BM}(T_{G})} {\tau [1 - exp(- T_{G}/\tau)]} \label{mdot} \end{equation} \noindent Therefore, integrating the accretion law, the time dependence of ${M_{BM}(t)}$ is \begin{equation} { M_{BM}(t) = { \frac{M_{BM}(T_{G})} {[1-exp (-T_{G}/\tau)] } } [1 - exp(-t/\tau)] } \label{mas-t} \end{equation} The above formalism allows us to immediately recover the {\it closed box} approximation by letting $\tau \rightarrow 0$. The parameter $\tau$ sets the timescale over which the present-day mass $M_{BM}(T_{G})$ is reached. In this scheme the total mass of a galaxy at the present time is $M_G = M_{DM} + M_{BM}(T_G)$. \subsection{ Basic equations of the model} We denote with ${X_{i}(t)}$ the current mass abundance of an element $i$ and introduce the dimensionless variables \begin{equation} { G(t)=M_{g}(t)/M_{BM}(T_{G}) } \label{gas_fra} \end{equation} \noindent and \begin{equation} { G_{i}(t)=G(t)X_{i}(t), } \label{gas_i} \end{equation} \noindent where by definition ${\Sigma_i X_i(t)=1}$. The equations governing the time variation of the ${G_{i}(t)}$, and hence of the ${X_{i}(t)}$, are \begin{eqnarray}\label{degas_i} \frac{dG_{i}(t)}{dt} &=& - X_{i}(t) \psi(t) + \nonumber \\ && [ {d G_{i}(t) \over dt} ]_{star} + [ {d G_{i}(t) \over dt} ]_{inf} - [{d G_{i}(t) \over dt }]_{win} \end{eqnarray} where $\psi(t)$ is the normalized rate of star formation to be defined below, and $t$ is the current galaxy age. The four terms on the right hand side are, in order: the rate of gas consumption by star formation, the rate of gas restitution (ejecta) by stars formed in previous epochs, the rate of mass accretion by infall of primordial gas onto the system, and finally the rate at which enriched gas leaves the system.
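As a side check, the accretion law of equations (\ref{inf})-(\ref{mas-t}) can be sketched numerically; the short Python fragment below is purely illustrative (the mass, timescale and galaxy age are hypothetical values, not those of the model grids).

```python
import math

# Infall law: dM/dt = Mdot0 * exp(-t/tau), with Mdot0 normalized so
# that the assigned present-day baryonic mass is reached at t = T_G.

def m_bm(t, m_tg, tau, t_g):
    """Accreted baryonic mass at age t (closed-form integral of the law)."""
    return m_tg * (1.0 - math.exp(-t / tau)) / (1.0 - math.exp(-t_g / tau))

M_TG, TAU, T_G = 1.0e11, 2.0, 13.0   # Msun, Gyr, Gyr (illustrative values)

print(m_bm(0.0, M_TG, TAU, T_G))     # 0: no baryons accreted yet
print(m_bm(T_G, M_TG, TAU, T_G))     # 1e11: full mass at the galaxy age
# Letting tau -> 0, the mass is assembled almost instantaneously,
# recovering the closed-box limit:
print(m_bm(0.1, M_TG, 0.001, T_G))   # ~1e11 already at t = 0.1 Gyr
```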
The infall rate is given by \begin{equation} [ { dG_{i}(t) \over dt } ]_{inf} = { X_{inf} \over M_{BM}(T_G) } [ {dM(t) \over dt } ]_{inf} \label{gas_inf} \end{equation} \noindent and it is easily derived from eqn.(\ref{inf}). The rate of gas ejection by galactic winds is formally given by \begin{equation} [ { dG_{i}(t) \over dt } ]_{win} = { X_{i}(t) \over M_{BM}(T_G) } [ {dM(t) \over dt } ]_{win} \label{gas_wind} \end{equation} \noindent where the wind rate $[dM(t)/dt]_{win}$ is usually taken to be very high (nearly instantaneous ejection of all the heated-up gas). The rate of gas restitution by stars is more complicated to calculate. The correct definition of this quantity can be found in \citet{Bressanetal1994}, \citet{Tantaloetal1996}, and \citet{Tantaloetal1998}. Suffice it here to mention that (i) it requires an integration over the IMF to account for the different contributions from stars of different mass and lifetime $\tau_M$; (ii) the stellar yields are calculated according to the so-called Q-formalism \citep[cf.][]{TalbotArnett1971,TalbotArnett1973}; (iii) at any age $t$, the rate of star formation $\psi(t)$ weighting the contribution from stars of different mass $M$ must be evaluated at $t_M=t - \tau_M$. The inclusion of Type Ia supernovae is made according to \citet{MatteucciGreggio1986}: it requires the specification of the mass interval and mass ratios of the binary star progenitors of Type Ia supernovae, together with the distribution function $f(\mu)$ of their mass ratios and the percentage of such binary systems with respect to the total. The contribution from Type II supernovae is straightforward and is incorporated in the Q-formalism. The stellar ejecta are from \citet{Marigoetal1996,Marigoetal1998}, \citet{Portinari1998}, and \citet{Portinarietal1998}, to whom we refer for all details. The stellar lifetimes $\tau_M$ are from the tabulations by \citet{Bertellietal1994} and take the dependence of $\tau_M$ on the initial chemical composition into account.
Finally, for the purposes of this study we follow in detail only the total metallicity (the sum of the abundances by mass of all elements heavier than $\rm ^{4}He$), shortly indicated by $Z = \sum_{j > He} X_j$. Lastly, we write the equation for the current mass of a galaxy in the form of stars, $M_{s}(t)$: at any age $t$ this is given by \begin{equation} M_{s}(t) = M_{BM}(t) - M_g(t) \label{mass_stars} \end{equation} with obvious meaning of the symbols. \subsection{The stellar initial mass function} For this we choose the \citet{Salpeter1955} law by number \begin{equation} \phi(M)= M^{-x} \label{imf} \end{equation} \noindent where $ x=2.35$. The IMF is normalized by choosing the fraction $\zeta$ of stars more massive than $M_n$, i.e. the mass interval contributing most to the chemical enrichment over the whole Hubble time \begin{equation} { \zeta = \frac{\int_{M_{n}}^{M_u}\phi(M){\times}M{\times}dM} {\int_{M_l}^{M_u}\phi(M){\times}M{\times}dM} } \label{zeta} \end{equation} \noindent With $M_u$ and $M_n$ fixed and equal to $M_u =100 M_{\odot}$ and $M_{n} \simeq 1 M_{\odot}$, the lowest mass limit $M_l$ is left free. Following \citet{Bressanetal1994,Tantaloetal1996,Tantaloetal1998}, good choices for $\zeta$ range from 0.3 to 0.5 (the values of $M_l$ being derived accordingly). \subsection{ The star formation rate} The rate of star formation is assumed to depend on the gas mass according to \begin{equation} { \Psi(t)= \frac{dM_g}{dt} = \nu M_{g}(t)^{k} } \label{sfr} \end{equation} \noindent where $\nu$ and $k$ are adjustable parameters. The SFR normalized to ${ M_{BM}(T_G)}$ becomes \begin{equation} { \psi(t)= \nu M_{BM}(T_G)^{k-1} G(t)^{k} } \label{sfr1} \end{equation} Linear and quadratic dependencies of the SFR on the gas content, $k=1$ and $k=2$ respectively, were first proposed by \citet{Schmidt1959} and have been adopted ever since because of their simplicity \citep[see][for a classical review of the subject]{Larson1991}. We adopt here $k=1$.
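Returning for a moment to the IMF normalization of equation (\ref{zeta}): for a pure power law the mass integral has a closed form, so the lower limit $M_l$ corresponding to a given $\zeta$ is easy to obtain. The sketch below (illustrative Python; the $M_l$ values are examples, not the ones adopted in the models) shows that $\zeta$ between 0.3 and 0.5 corresponds to $M_l$ of roughly 0.05 to 0.2 $M_\odot$.

```python
# Mass fraction zeta above M_n for a Salpeter IMF phi(M) = M^-x
# (the normalization of the chemical models). For x = 2.35 the
# mass-weighted integral of the IMF has a closed form.

X = 2.35
M_U, M_N = 100.0, 1.0   # fixed upper limit and normalization mass (Msun)

def mass_integral(a, b, x=X):
    """Integral of M * phi(M) dM between a and b for phi = M^-x."""
    p = 2.0 - x                     # exponent of the antiderivative
    return (b ** p - a ** p) / p

def zeta(m_l):
    """Fraction of the IMF mass locked in stars more massive than M_n."""
    return mass_integral(M_N, M_U) / mass_integral(m_l, M_U)

print(f"zeta(M_l=0.19) = {zeta(0.19):.3f}")   # close to 0.5
print(f"zeta(M_l=0.05) = {zeta(0.05):.3f}")   # close to 0.3
```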
With the law of star formation of equation (\ref{sfr}), the resulting time dependence of ${\rm \psi(t)}$ is driven by the rate of mass accretion onto the system. In the closed-box description, the SFR is maximum at the beginning and thereafter continuously decreases until galactic winds occur. In the infall model, owing to the competition between the rate of gas infall and gas consumption by star formation, the rate of star formation starts small, increases to a maximum and then declines. The age at which the peak occurs, shortly indicated by $T_P$, approximately corresponds to the infall timescale $\tau$. Finally, $\nu$ is the efficiency parameter of the star forming process. Its physical meaning is better understood by casting the SFR in a slightly different fashion. One can identify $dt$ with the timescale $\tau$ of the mass accretion rate and assume $k=1$ \begin{eqnarray} \Psi(t) = \frac{dM_g}{dt} = \nu M_{g}(t)^{k} \quad \Rightarrow \quad \frac{\Delta M_g}{M_g} = \tau \, \nu \quad && \nonumber \\ \Rightarrow \quad \frac{\Delta M_s} {M_{g}} \simeq \tau \, \nu && \label{sfr_nor} \end{eqnarray} \noindent where the ratio $\Delta M_s/M_{g}$ is the mass of gas already converted into stars relative to the mass of the left-over gas. Furthermore, if the accretion timescale $\tau$ is identified with the infall timescale $t_{ff}$ of a galaxy, we may get rough estimates of the specific star formation efficiency. The infall timescale of a galaxy can be approximated by the collapse timescale of primordial perturbations, which depends on the redshift but is independent of the galaxy mass. Rough estimates of $\tau$ yield values ranging from 0.05 to 1 Gyr when the galaxy formation redshift is in the interval 20 to 1. This in turn implies $\nu \simeq$ 20 to 1 to assemble a typical $10^{12}\, M_\odot$ galaxy. This efficiency is lowered by a factor of ten at least if the mass assembly is diluted over the Hubble time.
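The rise-to-peak-and-decline behaviour described above can be verified with a minimal numerical sketch (illustrative Python, forward-Euler integration with $k=1$, gas return from dying stars neglected, all parameter values hypothetical):

```python
import math

# Gas budget under the infall model, in its simplest form:
#   dM_g/dt = Mdot0 * exp(-t/tau) - nu * M_g
# The SFR = nu * M_g starts small, peaks, then declines.

TAU, NU, T_G = 1.0, 2.0, 13.0         # Gyr, 1/Gyr, Gyr (illustrative)
DT, MDOT0 = 1.0e-3, 1.0               # time step (Gyr), accretion scale

t, m_gas = 0.0, 0.0
history = []                          # (time, SFR) samples
while t < T_G:
    infall = MDOT0 * math.exp(-t / TAU)
    sfr = NU * m_gas
    history.append((t, sfr))
    m_gas += (infall - sfr) * DT
    t += DT

t_peak = max(history, key=lambda p: p[1])[0]
print(f"SFR peaks at t = {t_peak:.2f} Gyr")   # of the order of tau
print(f"SFR(T_G)/SFR(peak) = {history[-1][1] / max(s for _, s in history):.6f}")
```

With these values the SFR peaks near $t \simeq \tau$ and has dropped by orders of magnitude by the galaxy age, in line with the qualitative discussion above.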
According to \citet{Cassara_etal2016}, the shape of the SFR as a function of time can be schematically grouped according to the value taken by the ratio of the two parameters $\tau$ and $\nu$ \citep[see Fig.2 of][]{Cassara_etal2016}. With the above laws of gas accretion and star formation, they are able to model two main types of objects: (i) bulge-like models, characterized by high values of $\nu$ and low values of $\tau$ (ratios $\tau/\nu \leq 0.1$), in which the SFR increases to a peak on a relatively short timescale (on average 0.5 Gyr) and soon after declines. These models reproduce the chemical pattern in the gas of early-type galaxies at both low \citep{Piovanetal2006a,Piovanetal2006b,Piovanetal2006c,PipinoMatteucci2011} and high redshift \citep[e.g.][]{MatteucciPipino2002, PipinoMatteucci2011}; (ii) disk-like models, characterized by low values of $\nu$ together with high values of $\tau$ (ratios $\tau/\nu \geq 1$), in which the SFR shows a slow rise followed by a slow decline. These models could well mimic disks and, to some extent, also irregular galaxies in the local Universe \citep{Piovanetal2006a,Piovanetal2006b,Piovanetal2006c,Pipinoetal2013}. Finally, we would like to mention that a functional form for the SFR that mimics the above systematic variation with galaxy type (mass) is the so-called delayed exponentially declining law \begin{equation} \Psi(t) \propto \frac{t}{\tau} exp(-\frac{t}{\tau} )\, . \label{psidelay} \end{equation} \noindent In this framework, the Schmidt law is the link between the gas accretion by infall and the gas consumption by star formation. By varying the parameters $\tau$ and $\nu$ we may model different types of galaxies \citep{Buzzoni2002}.
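It is immediate to verify that the delayed exponentially declining law of equation (\ref{psidelay}) reaches its maximum exactly at $t=\tau$, since \begin{equation} \frac{d\Psi}{dt} \propto \frac{1}{\tau} \left( 1 - \frac{t}{\tau} \right) exp(-\frac{t}{\tau}) = 0 \quad \Rightarrow \quad t = \tau \, , \end{equation} \noindent so that in this parameterization the timescale $\tau$ directly sets the epoch of maximum star formation activity.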
Based on these considerations, and taking the results of the NB-TSPH simulations by \citet{ChiosiCarraro2002}, \citet{MerlinChiosi2006,MerlinChiosi2007}, and \citet{Merlinetal2012} as reference templates for the SFH in galaxies of different mass and morphological type, we calculate chemical models for different combinations of $\tau$ and $\nu$ that are meant to cover the whole Hubble sequence of galaxies. \subsection{Onset of galactic winds: energy feedback and gas heating-cooling } Long ago \citet{Larson1974} postulated that the present-day Color-Magnitude Relations of elliptical galaxies \citep[see][and references]{Boweretal1992,Kodamaetal1999,Kodamaetal2001,Terlevichetal2001} could be the result of galactic winds powered by supernova explosions, thus initiating a long series of chemo-spectro-photometric models of elliptical galaxies based on this idea \citep[][and references therein]{Saito1979a,Saito1979b,MatteucciTornambe1987,ArimotoYoshii1987, AngelettiGiannone1990, MiharaTakahara1994, Matteucci1994, Gibson1998, GibsonMatteucci1997}. In brief, gas is allowed to escape from the galaxy and star formation is supposed to halt when the total thermal energy of the gas equals its gravitational binding energy. The same scheme is adopted here in the models that take galactic winds into account, i.e. the term $[{d G_{i}(t) \over dt }]_{win}$ in eqn. (\ref{degas_i}) is at work. The thermal energy of the gas is mainly due to three contributions, namely Type Ia and Type II supernovae and stellar winds from massive stars: \begin{equation} E_{th}(t) = \sum_{J} E_{th}(t)_J \label{Eth_tot} \end{equation} \noindent where $J\equiv$ SNI for Type Ia supernovae, $J\equiv$ SNII for Type II supernovae, and $J\equiv W$ for stellar winds; each term has a similar dependence \begin{equation} E_{th}(t)_{J} = \int_{0}^{t} \epsilon_{J}(t-t') R_{J}(t') M_{BM}(T_{G}) dt' \label{Esni_w} \end{equation} \noindent with obvious meaning of the symbols.
The quantities $\epsilon_{J}(t-t')$ and $R_{J}(t)$ are the energies emitted by a single supernova and/or stellar wind event and the corresponding production rates, respectively. As the production rates are functions of the dimensionless variables $G_{i}(t)$, the normalization factor $M_{BM}$ is required to calculate the energy in physical units. The production rates can be easily derived from the equations governing the chemical evolution. The emitted energies incorporate the cooling laws of supernova remnants and stellar winds by radiative cooling processes, according to the expressions used by \citet[][]{Tantaloetal1998}. Finally, star formation and chemical enrichment of the model galaxies are halted, and the remaining gas is supposed to be expelled (galactic wind), when the condition \begin{equation} E_{th}(t) \geq \left| \Omega_{g}(t)\right| \label{eth_omg} \end{equation} \noindent is verified. \subsection{Gravitational potential of DM and BM} To calculate the gravitational energy of the gas we make use of the analytical dynamical models of \citet{Bertinetal1992} and \citet{Sagliaetal1992} and adapt them to our case. DM is supposed to be already in situ, whereas the BM falls into the gravitational well of the former and quickly reaches an equilibrium configuration, so that at each instant the description of \citet{Bertinetal1992} and \citet{Sagliaetal1992} can be applied.
In this description of the galactic structure, the mass and radius of the DM, $M_{DM}$ and $R_{DM}$ respectively, are related to those of the BM, $M_{BM}$ and $R_{BM}$, by the relation \begin{equation} { {M_{BM}(t) \over M_{DM} } \geq {1\over 2\pi} ({R_{BM}(t)\over R_{DM}}) [1 + 1.37 ( {R_{BM}(t) \over R_{DM} } )] } \label{dark} \end{equation} \noindent and the binding gravitational energy of the gas is given by \begin{equation} { \Omega_{g}(t)=-{\alpha}_{BM} G {M_{g}(t) M_{BM}(t)\over R_{BM}(t) } - G {M_{g}(t) M_{DM} \over R_{BM}(t) } \Omega'_{BDM} } \label{gas_pot} \end{equation} \noindent where $G$ is the gravitational constant, ${\rm M_g(t)}$ is the current value of the gas mass, $\alpha_{BM}$ is a numerical factor $\simeq 0.5$, and \begin{equation} { \Omega'_{BDM}= {1\over 2\pi} ({R_{BM}(t)\over R_{DM}}) [1 + 1.37 ( {R_{BM}(t) \over R_{DM} } )] } \label{dark_pot} \end{equation} \noindent is the contribution to the gravitational energy due to the presence of DM. According to \citet{Bertinetal1992} and \citet{Sagliaetal1992}, in equilibrium conditions ${M_{BM}/M_{DM}} \simeq {R_{BM}/R_{DM}}$. With the current estimates of $M_{DM}$ and $M_{BM}$ of the $\Lambda$-CDM cosmogony, both ratios are equal to 0.16. With these values, the factor $\Omega'_{BDM}=0.04$, so that the total correction to the gravitational energy of the gas (eq. \ref{gas_pot}) does not exceed 0.3 of the term for the luminous mass. \subsection{General remarks on the galactic models} \textsf{Mass homology}. It is worth noting that with the above formalism, in the absence of galactic winds all the models are \textit{homologous in mass}, in the sense that the same solution (current fractional gas and star mass) applies to galaxies of different mass provided it is suitably rescaled to the total asymptotic baryonic mass, i.e. the total baryonic mass reached at $t=T_G$. The same technique can also be used in the presence of galactic winds by suitably rescaling the asymptotic mass to the real value, i.e.
diminished by the amount of gas mass definitively lost by the system in the form of galactic winds. \textsf{Specific star formation rate}. It is also worth noticing that with the above assumptions the SFR in use is the specific star formation rate (SFR per unit baryonic mass, SSFR), which depends on three parameters, i.e. $\nu$, $\tau$ and $T_P$. Since $\tau$ and $T_P$ are correlated, each galaxy model is characterized only by the parameters $\nu$ and $\tau$. \textsf{Groups of galaxy models}. For the purposes of this study we have calculated three groups of models, labeled A, B and C: \textit{Group A}. In the models of group A, we assume that all galaxies begin their evolutionary history at redshift $z$=20 and suppose that the mass accretion timescale $\tau$ corresponds to the free-fall timescale for the $\rho_{200}$ over-density of the proto-galaxy with respect to the surrounding medium at this value of the redshift. The free-fall timescale is given by \begin{equation} t_{ff}(z) = \sqrt{ \frac{3\pi} {32 G \rho_{200} } } \label{free_fall} \end{equation} \noindent for the homologous collapse of a homogeneous sphere of gas. This timescale is the same for all galaxies independently of their mass. The free-fall timescale $t_{ff}(z)$ goes from about $3.5 \times 10^7$ yr at $z$=20 to about $1.0\times 10^9$ yr at $z$=1. For the star formation efficiency parameter $\nu$ we adopt the constant value $\nu=10$. Owing to the very short mass accretion timescale and the high value of $\nu$, these models are very similar to the ideal situation of the closed-box approximation. The baryonic mass of the models spans the range $10^7$ to $10^{12}\, M_\odot$. \textit{Group B}. In light of the considerations on the type of SFH in real galaxies, in the models of group B we adopt values of the accretion timescale $\tau$ ranging from 6 Gyr to 2 Gyr as the baryonic mass increases from $10^7\, M_\odot$ to $10^{12}\, M_\odot$, but keep unchanged the value of the parameter $\nu$, i.e.
$\nu=10$ for all the models. Since the ratio $\tau/\nu$ of the models of groups A and B is always lower than one, they are best suited to represent early-type objects of any mass, going from dwarfs to bulges and massive ellipticals \citep{Cassara_etal2016}. \textit{Group C}. Finally, the models of Group C more closely follow the classification by \citet{Cassara_etal2016}, who rank the galaxy SFR by means of the ratio $\tau/\nu$. First of all, we stretch the interval of the accretion timescale $\tau$ assigned to each BM mass: it now goes from 2 Gyr for the $M_{BM}=10^{12}\, M_\odot$ galaxy to 10 Gyr for the $M_{BM}=10^{7}\, M_\odot$ one. Second, for each value of $\tau$ we explore three values of $\nu$, namely $\nu=0.1$, 1, and 10. In this way we can model galaxies along the whole Hubble sequence by varying the ratio $\tau/\nu$ depending on the galaxy mass: low values for the most massive ones, corresponding to the massive early types; intermediate values for the less massive ones (going from intermediate early types to massive spirals); and high values for the least massive galaxies, like small spirals and irregulars. Group C partially overlaps group B. \begin{table*} \begin{center} \caption{Parameters adopted in the chemical models. Masses are in $M_\odot$, the timescale $\tau$ is in Gyr, and the radii $R_{BM}$ and $R_{DM}$ are in kpc.
} \label{parchem} \begin{tabular}{llrrrrrrrrrrrr} \hline \multicolumn{1}{l}{$M_{BM}$} & \multicolumn{1}{l}{$M_{DM}$} & \multicolumn{1}{c}{$R_{BM}$} & \multicolumn{1}{c}{$R_{DM}$} & \multicolumn{1}{c}{$\zeta$} & \multicolumn{1}{c}{$\tau$} & \multicolumn{1}{c}{$\nu$} & \multicolumn{1}{c}{$\tau/\nu$} & \multicolumn{1}{c}{$\tau$} & \multicolumn{1}{c}{$\nu$} & \multicolumn{1}{c}{$\tau/\nu$} & \multicolumn{1}{c}{$\tau$} & \multicolumn{1}{c}{$\nu$} & \multicolumn{1}{c}{$\tau/\nu$} \\ \hline \multicolumn{5}{c}{ } & \multicolumn{3}{c}{Group A} & \multicolumn{3}{c}{Group B} & \multicolumn{3}{c}{Group C} \\ \hline & & & & & & & & & & & 10.0 & 0.1 & 100 \\ $10^{7} $ & $6\times 10^{7}$ & 0.13 & 1.35 & 0.3 & 0.01 & 10.0 & 0.003 & 6.0 & 10.0 & 0.6 & 10.0 & 1.0 & 10 \\ & & & & & & & & & & & 10.0 &10.0 & 1 \\ \hline & & & & & & & & & & & 8.0 & 0.1 & 80 \\ $10^{8} $ & $6\times 10^{8}$ & 0.28 & 2.90 & 0.3 & 0.01 & 10.0 & 0.003 & 5.0 & 10.0 & 0.5 & 8.0 & 1.0 & 8 \\ & & & & & & & & & & & 8.0 &10.0 &0.8 \\ \hline & & & & & & & & & & & 6.0 & 0.1 & 60 \\ $10^{9} $ & $6\times 10^{9}$ & 0.61 & 6.26 & 0.3 & 0.01 & 10.0 & 0.003 & 4.0 & 10.0 & 0.4 & 6.0 & 1.0 & 6 \\ & & & & & & & & & & & 6.0 &10.0 & 0.6\\ \hline & & & & & & & & & & & 4.0 & 0.1 & 40 \\ $10^{10}$ & $6\times 10^{10}$ & 1.32 & 13.48 & 0.3 & 0.01 & 10.0 & 0.003 & 3.0 & 10.0 & 0.3 & 4.0 & 1.0 & 4 \\ & & & & & & & & & & & 4.0 &10.0 & 0.4 \\ \hline & & & & & & & & & & & 3.0 & 0.1 & 30 \\ $10^{11}$ & $6\times 10^{11}$ & 2.85 & 29.04 & 0.3 & 0.01 & 10.0 & 0.003 & 2.5 & 10.0 & 0.2 & 3.0 & 1.0 & 3 \\ & & & & & & & & & & & 3.0 &10.0 & 0.3 \\ \hline & & & & & & & & & & & 2.0 & 0.1 & 20 \\ $10^{12}$ & $6\times 10^{12}$ & 6.13 & 62.53 & 0.3 & 0.01 & 10.0 & 0.003 & 2.0 & 10.0 & 0.2 & 2.0 & 1.0 & 2 \\ & & & & & & & & & & & 2.0 &10.0 & 0.2 \\ \hline \end{tabular} \end{center} \end{table*} It goes without saying that other combinations of two out of the three parameters ($\tau$, $\nu$, and hence $\tau/\nu$) are possible. 
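As a numerical check of eq. (\ref{free_fall}), the free-fall timescales quoted above for Group A can be reproduced with a short script. This is a sketch under assumed cosmological parameters (not stated here in the text): $\Omega_m \simeq 0.27$, $H_0 \simeq 71$ km s$^{-1}$ Mpc$^{-1}$, with the over-density taken as 200 times the mean matter density at redshift $z$.

```python
from math import pi, sqrt

# Sketch: free-fall timescale of eq. (free_fall) for a 200x over-density.
# Assumed cosmology (an assumption of this sketch): Omega_m = 0.27,
# H0 = 71 km/s/Mpc; rho_200 = 200 * Omega_m * rho_crit,0 * (1+z)^3.
G = 6.674e-11                           # m^3 kg^-1 s^-2
H0 = 71.0 * 1e3 / 3.086e22              # s^-1
OMEGA_M = 0.27
RHO_CRIT0 = 3.0 * H0**2 / (8.0 * pi * G)   # present critical density, kg/m^3
YR = 3.156e7                            # seconds per year

def t_ff(z):
    """Free-fall time t_ff = sqrt(3*pi / (32*G*rho_200)), in years."""
    rho_200 = 200.0 * OMEGA_M * RHO_CRIT0 * (1.0 + z)**3
    return sqrt(3.0 * pi / (32.0 * G * rho_200)) / YR

print(t_ff(20.0))   # a few 10^7 yr, as quoted in the text
print(t_ff(1.0))    # about 10^9 yr
```

With these inputs the script returns roughly $3\times 10^7$ yr at $z$=20 and $10^9$ yr at $z$=1, consistent with the values quoted for the models of Group A.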
Since the aim here is to calculate galaxy models whose SFR and SFH closely resemble the ones observed in real galaxies, the choice we have made is fully adequate to our purposes. Finally, we would like to point out that the models of Group A are meant to represent a sort of reference sample corresponding to the ideal situation of the closed-box approximation. They will be used only to evaluate the effects on the SFRD(z) of an exponentially decreasing rate of star formation. Table \ref{parchem} lists all the parameters adopted for the chemical models in use. \subsection{Results for the chemical models} The model galaxies are calculated from $z_{f}=20$ to $z=0$, i.e. from the rest-frame age $t=0$ Gyr to the maximum age of $T_{G}=13.75$ Gyr, where $T_G=t_u(z=0) - t_u(z=z_f)$, with $t_u(z)$ being the age of the Universe for the adopted cosmological model. If, for any reason, we need to change the redshift of galaxy formation from $z_f$ to $z_{f}^{*} \le z_f$ (keeping unchanged all other input parameters), the same models can be used provided their rest-frame age is simply limited to the interval from $t=0$ at $z_{f}^{*}$ to $T_{G}^{*}= T_{G,(z_{f}=20)} - t_u(z_{f}^{*})$, where $t_u(z_{f}^{*})$ is the age of the Universe at $z=z_{f}^{*}$. In other words, at $z=0$ the new galaxy is younger than the previous one. On purpose, in this first step of the analysis the occurrence of galactic winds is not considered. This means that the energy input from Type II and Type Ia supernovae is neglected and galactic winds are turned off, so that star formation can proceed over the whole life of the galaxies. Finally, the discussion below is limited to the models of case B. Those of cases A and C have similar trends and behaviour. \textsf{Star formation}. The specific (in units of $\rm yr^{-1}$) and true star formation rates (in $M_\odot\,\rm yr^{-1}$) of the model galaxies are shown in the left and right panels of Fig. \ref{sfr_models_B}.
As expected, the SSFRs look very similar to each other, whereas the true rates may significantly change with the galaxy mass. From now on, different values of $M_G$ are identified in all figures with the following colors: blue ($10^{7} M_{\odot}$), magenta ($10^{8} M_{\odot}$), olive green ($10^{9} M_{\odot}$), green ($10^{10} M_{\odot}$), orange ($10^{11} M_{\odot}$) and red ($10^{12} M_{\odot}$). \begin{figure*} \centering{ {\includegraphics[width=8.0cm,height=8.0cm]{figure3a.ps} } {\includegraphics[width=8.0cm,height=8.0cm]{figure3b.ps} } } \caption{\textsf{Left Panel}: The SSFR of models B in $\rm yr^{-1}$ for the galaxies of different $M_{BM}$, different accretion (collapse) timescale $\tau$ and efficiency $\nu = 10$. The mass $M_{BM}$ increases from $10^7$ to $10^{12}\, M_\odot$ from the bottom to the top. No galactic winds are supposed to occur. The time is the age of the galaxy in the rest-frame. \textsf{Right Panel}: The same as in the left panel but for the true SFR in units of $M_\odot\,\rm yr^{-1}$. } \label{sfr_models_B} \end{figure*} \textsf{Metallicity}. The temporal variation of the metallicity $Z$ for the model galaxies is shown in Fig. \ref{met_models}. Owing to the rather high value of $\nu$ and of the parameter $\zeta$ of the IMF normalization, high metallicities are built up in the galaxies. This is not a serious problem: by lowering $\nu$ and/or $\zeta$ one would obtain similar results but with lower present-day metallicities, without changing the overall behavior of the solution. \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure4.ps} } } \caption{ The metallicity vs time relation for the galaxies of group B with different $M_{BM}$, different accretion timescale $\tau$ and efficiency $\nu = 10$. No galactic winds are supposed to occur. The mass $M_{BM}$ increases from $10^7$ to $10^{12}\, M_\odot$ from the bottom to the top. } \label{met_models} \end{figure} \textsf{Gas and Star contents}.
Finally, in Fig. \ref{gas_stars} we show the temporal variation of the fractional masses of gas (bottom panel) and stars (top panel) for the models of group B. The timescale of mass accretion varies with the galaxy mass as reported in Table \ref{parchem}. The intrinsic efficiency of star formation is the same for the models on display (i.e. $\nu$=10). Because of the high efficiency of star formation, all the models have the peak of activity within the first Gyr of their lifetime. \begin{figure} \centering{ {\includegraphics[width=7.50cm]{figure5.ps} } } \caption{ The gas and star content vs time relationships for the galaxies belonging to group B with different $M_{BM}$, different accretion timescale $\tau$ and efficiency $\nu = 10$. The mass $M_{BM}$ increases from $10^7$ to $10^{12}\, M_\odot$ from the bottom to the top. } \label{gas_stars} \end{figure} \textsf{The role of $\nu$}. Concluding this section, it is worth commenting on the role of the intrinsic efficiency $\nu$ in shaping the final time-dependence of the SFR in infall models of galaxy formation and evolution. So far the discussion of the results for groups B and C has been limited to models with efficient SFR, represented here by all the cases with $\nu=10$. The peak of activity is always confined within the first Gyr. Clearly, for $\nu=10$ case C does not differ much from case B. We give preference to this particular choice for the parameter $\nu$ in view of the discussion below concerning the SFRD(z). In any case it is worth emphasizing that the role of $\nu$ is of paramount importance in shaping the overall time-dependence of the SFR. The situation is best illustrated in Fig. \ref{nu_tau_C}, which displays the SFR vs time of the $1\times10^{12} M_\odot$ and $1\times10^8 M_\odot$ galaxies of Group C (three values of $\nu$ for each case).
The models gradually change their SFR from early-peaked to ever-continuing according to the value of the ratio $\tau/\nu$, in other words along the Hubble sequence of galaxies, passing from early types (low ratios $\tau/\nu$) to disk-like objects (intermediate ratios $\tau/\nu$), and finally to irregulars (high ratios $\tau/\nu$). This trend of the star formation was suggested by \citet{Sandage1986}, examining the SFR in galaxies of different types, and more recently confirmed by studies of SFHs based on absorption line indices by \citet{Thomasetal2005}, and by NB-TSPH numerical models of galaxy formation and evolution \citep{ChiosiCarraro2002,Merlinetal2012}. \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure6.eps} } } \caption{ The SFR versus time relationship for the galaxies of different $M_{BM}$ belonging to group C at varying $\tau$ and $\nu$. Two values of the galaxy mass (baryonic component) are considered, namely $10^{8} M_\odot$ and $10^{12} M_\odot$. The values of $\tau$ and $\nu$ are listed in Table \ref{parchem}. } \label{nu_tau_C} \end{figure} \subsection{Remarks on the star formation rate} As already mentioned, the time dependence of our SFR is the delayed exponential law, see eq. (\ref{psidelay}), which is implicit to the galactic chemical models with gas accretion. The reasons why the simple exponential law, \begin{equation} \Psi(t) \propto \frac{1}{\tau} \exp\left(- \frac{t}{\tau}\right) , \label{sfr_delay} \end{equation} \noindent adopted long ago by \citet{Tinsley1972} and in use even today, must be abandoned have been discussed many times (see the classical studies by \citet{LyndenBell1975} and \citet{Chiosi1980}, and the recent review by \citet{Matteucci2016}), so that they are not repeated here.
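Since eq. (\ref{psidelay}) is given earlier in the paper, the sketch below assumes the common delayed form $\Psi(t) \propto (t/\tau^2)\exp(-t/\tau)$ (an assumption here, not necessarily the exact normalization adopted in the text) and contrasts it with the simple law of eq. (\ref{sfr_delay}): the delayed law rises and peaks at $t=\tau$, whereas the simple law decreases monotonically from $t=0$.

```python
from math import exp

# Sketch: simple exponential SFR of eq. (sfr_delay) vs a delayed exponential.
# The delayed normalization (t/tau^2)*exp(-t/tau) is an assumed form.
def psi_simple(t, tau):
    return exp(-t / tau) / tau

def psi_delayed(t, tau):
    return (t / tau**2) * exp(-t / tau)

tau = 3.0                                  # Gyr, illustrative value
ts = [0.01 * i for i in range(1, 1301)]    # grid from 0.01 to 13 Gyr
peak_simple = max(ts, key=lambda t: psi_simple(t, tau))
peak_delayed = max(ts, key=lambda t: psi_delayed(t, tau))
# The simple law is always decreasing (maximum at the start of the grid),
# while the delayed law first rises and reaches its maximum at t = tau.
print(peak_simple, peak_delayed)
```

The delayed law therefore naturally produces the early peak followed by a slow decline that characterizes the infall models.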
In favor of the time-delayed exponential law were the original models by \citet{Chiosi1980} and the long list of studies dedicated to the evolution of chemical elements in galaxies of different morphological type, going from bulges and early-type objects to disk and even irregular galaxies \citep[see][for exhaustive reviews and referencing of this issue]{Matteucci2012,Matteucci2016}. Also in support of the delayed exponential law is the study of \citet{Gavazzietal2002}, based on the spectro-photometric data of galaxies in the Virgo cluster. These classical analytical representations of the SFR have been recently questioned by \citet{Oemleretal2013}, basing their analysis on the SSFR in the redshift interval $z\leq 1$. They concluded that the standard laws cannot explain both the tail of high specific SFR at $z$=1 and the low value we see today at $z=0$. They also argued that the starburst hypothesis cannot solve the problem. \citet{Gladdersetal2013} argue that a log-normal SFH of galaxies successfully describes the SFRD over cosmic times, the present-day distribution of the SSFR of galaxies, and the evolution of this quantity up to $z\simeq 1$. The log-normal SFR law they assume is \begin{equation} SFRD(t) \propto \frac{1}{t \tau} \exp\left[ - \frac {(\ln t - T_0)^2} {2\tau} \right] \label{sfr_logno} \end{equation} \noindent where $T_0$ and $\tau$ (not to be mistaken for the timescale of gas accretion in galaxies) are the cosmic SFRD's half-mass time and width [in units of $\ln$(time)]. Based on the notion that log-normal laws seem to be ubiquitous in Nature \citep{Limpert2001}, they take the SDSS sample of local galaxies (2094 objects), assign them a log-normal SFR, and derive for each object the SFR (i.e. the parameters $T_0$ and $\tau$), while ensuring that the ensemble of SFRs sums to the SFRD. \citet{Abramsonetal2016} go further along this line of reasoning.
Adding and folding together a large number of log-normal SFHs parameterized by $T_0$ and $\tau$, they argue that this simple-minded model reproduces (i) the stellar mass functions at $z\leq 8$; (ii) the slope of the SFR vs stellar mass relation (the Galaxy Main Sequence) at $z \leq 6$; (iii) galaxy downsizing; and (iv) a correlation between the formation timescale and the SSFR(M$_s$,t). In our view, the straight inference of a log-normal SFR in single galaxies contributing to the total cosmic SFRD is somewhat arbitrary and misleading. The cosmic SFRD is not the simple summation of many individual galaxy SFRs, because each galaxy may differ from the others and not all galaxy types occur in equal numbers; rather, it results from the number-weighted summation of many objects of different types and SFHs (see below). The SFR of a galaxy might not be log-normal, and yet the cumulative effect of many of them may turn out to look like a log-normal distribution. For these reasons we prefer to describe the SFR of galaxies as independent entities with the time-delayed law. \section{ Third building block: photometry }\label{third_block} The integrated monochromatic flux generated by the stellar content of a galaxy of age $T$ is defined as \begin{equation} F_{\lambda}(T) = \int_0^T \Psi[t,Z(t)]\ {\it sp}_{\lambda}[\tau',Z(\tau')]\ dt \label{popsynt1} \end{equation} \noindent where $\Psi[t,Z(t)]$ is the SFR at the current age $t$ and metal content $Z$ (chemical composition in general), $ {\it sp}_{\lambda}[\tau',Z(\tau')]$ the integrated monochromatic flux of a single stellar population (i.e. of a coeval, chemically homogeneous assembly of stars, SSP for short) with age $\tau'$ and metallicity $Z(\tau')$, and finally $\tau'=T-t$.
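The integral of eq. (\ref{popsynt1}) is in practice a discrete convolution of the SFR with the SSP fluxes. A minimal sketch follows; both the SFR and the SSP flux are toy functions assumed here for illustration (real SSP libraries, such as those cited below, would replace {\it sp}$_\lambda$), and the metallicity dependence is dropped.

```python
from math import exp

# Sketch of eq. (popsynt1): F_lambda(T) = int_0^T Psi(t) sp_lambda(T - t) dt,
# discretized with the trapezoidal rule. Psi and sp_lam are toy functions,
# not the paper's actual SFR or SSP library; metallicity is ignored.
def psi(t, tau=3.0):
    """Toy delayed-exponential SFR (assumed form), t in Gyr."""
    return (t / tau**2) * exp(-t / tau)

def sp_lam(age):
    """Toy monochromatic SSP flux per unit mass, fading with SSP age."""
    return 1.0 / (0.1 + age)**0.8

def flux(T, n=2000):
    """F_lambda(T): convolve the SFR with the SSP flux over the galaxy age."""
    dt = T / n
    ts = [i * dt for i in range(n + 1)]
    vals = [psi(t) * sp_lam(T - t) for t in ts]
    return dt * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# The integrated flux grows while star formation is active, peaks near the
# epoch of maximum activity, and declines as the populations age.
print(flux(1.0), flux(3.0), flux(13.7))
```

The same convolution structure underlies the cosmological magnitudes discussed next, once the fluxes are passed through a filter response.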
The flux of a SSP is in turn given by \begin{equation} {\it sp}_{\lambda}[\tau',Z(\tau')] = \int_{M_l}^{M_u} \phi(M)\ f_{\lambda}[M,\tau',Z(\tau')]\ dM \label{popsynt2} \end{equation} \noindent where $\phi(M)$ is the stellar IMF and $f_{\lambda}(M,\tau',Z)$ the monochromatic flux of a star of mass $M$, metallicity $Z$, and age $\tau'=T-t$. $M_l$ and $M_u$ define, respectively, the mass range within which stars are generated by each event of star formation. \noindent The metallicity dependence of the rate of star formation $\Psi(t,Z)$ is customarily neglected, as are the time and metallicity dependencies of the IMF. The flux of a SSP, ${\it sp}_{\lambda}(\tau',Z)$, is calculated by integrating equation (\ref{popsynt2}) along an isochrone of age $\tau'$ populated by ``virtual stars'' with luminosity $L$, effective temperature $T_{eff}$, mass $M$, age $\tau'$, and composition $Z$. For any star along an isochrone, the relations connecting luminosity, effective temperature, and age are derived from the library of stellar models, while the flux $f_{\lambda}(M,\tau',Z)$ emitted by such a star is obtained from the library of stellar spectra. Sources of stellar tracks, isochrones, spectra, and SSPs in different photometric systems are from \citet{Bertellietal2008,Bertellietal2009}. \textsf{Cosmological evolution of magnitudes and colours}. In the course of this study, we need the magnitudes and colors of the galaxies not only in the rest-frame but also as a function of the redshift.
Following \citet{GuiderdoniRoccaVolmerange1987}, the apparent magnitude of a galaxy at redshift $z$ in a pass-band $\Delta\lambda$ is \begin{equation} m(z)=(m-M)_{bol}(z)+K(z)+E(z)+M(0,t_0) \end{equation} \noindent where $K(z)$ and $E(z)$ are the cosmological and evolutionary corrections \begin{equation} K(z)= M(z,t_0) - M(0,t_0) \end{equation} \begin{equation} E(z)= M(z,t_z) - M(z,t_0) \end{equation} \noindent and where $M(0,t_0)$ is the absolute magnitude in the pass-band $\Delta\lambda$ derived from the rest-frame spectrum of the galaxy at the current time, $M(z,t_0)$ is the absolute magnitude in the pass-band $\Delta\lambda$ derived from the spectrum of the galaxy at the current time but redshifted to $z$, and $M(z,t_z)$ is the absolute magnitude in the pass-band $\Delta\lambda$ obtained from the spectrum of the galaxy at the time $t_z$ and redshifted to $z$. \begin{figure} \centering{ {\includegraphics[width=7.5cm,height=8.0cm]{figure7.eps} } } \caption{ The expected number of DM halos as a function of the halo mass $M_{DM}$ at three selected values of the redshift, namely z=20 (short dashed line in dark green), z=6 (long dashed line in red), and z=0 (solid blue line). } \label{lukic_mass_histo} \end{figure} \section{The Cosmic Star Formation Rate from theory }\label{SFRD_theory} It is worth emphasizing from the very beginning that in the course of the analysis and companion discussion we will use two mass scales: (i) The scale of the halo masses, i.e. we will refer to galaxies by their halo mass, which is roughly coincident with the total mass ($M_{G} \equiv M_{DM} + M_{BM} \simeq M_{DM}$), to determine the number of halos per unit volume as a function of the halo mass and redshift. (ii) The scale of the baryonic component hosted by a halo, made of gas and stars. This scale will be used to read from the sample of chemical models their SFR and photometric properties (magnitudes and colors) as a function of the BM mass, age, redshift, etc.
The relationship between the two mass scales is given in the first two columns of Table \ref{parchem}. Second, we will use the grids of models of group B, choosing the one appropriate to the halo mass, according to the mass scale $M_{BM} \simeq M_{DM}/6$. We have already described these models in the previous section. However, we recall here that for each model we know the SFR and the SSFR, the abundance of metals $Z(t)$ (for the present aims the total metallicity is fully adequate), the mass in gas $M_g(t)$ and the mass in stars $M_s(t)$, the integrated magnitudes $M_{\Delta\lambda}$ in the pass-bands of the Johnson-Cousins and/or HST-WFPC photometric systems, and the cosmological evolution of these magnitudes, i.e. the $K_{\Delta\lambda}$ and $E_{\Delta\lambda}$ corrections as a function of the redshift. It is worth recalling here that these models not only fit the main average properties of the galaxies in the local and distant Universe, see for instance \citet{Bressanetal1994}, \citet{Tantaloetal1998}, and \citet{Tantaloetal2010}, but also that their SFHs agree with the results from NB-TSPH simulations of galaxy formation in a cosmological context, according to the so-called early hierarchical scheme \citep{ChiosiCarraro2002, MerlinChiosi2006, MerlinChiosi2007, Merlinetal2012}. Therefore, the simple infall models presented here can be safely used to study the mean properties of galaxies in the context of the early hierarchical view of galaxy formation and evolution. \subsection{Distribution of halos in number and mass} We start the analysis by looking at the mass distribution of the DM halos at each value of the redshift. This is simply derived as the number of DM halos within a small interval $\Delta z$ centered at a few selected values of the redshift ($\Delta z = 0.02$). The results are listed in Table \ref{count_lukic} and are plotted in Fig. \ref{lukic_mass_histo}. The visual inspection of Fig.
\ref{fig_lukic} yields a qualitative estimate of the maximum value of the mass distribution at each redshift, which means that DM halos with mass in excess of this value have such a low probability of occurrence that they can be neglected for any practical purpose. The histograms of Fig. \ref{lukic_mass_histo} show the comoving number density of halos as a function of the halo mass for three selected values of the redshift, namely $z$=20 (short dashed line), $z$=6 (long dashed line) and $z$=0 (solid line). The mass distribution for all other values of the redshift can be easily derived from the entries of Table \ref{count_lukic}. It is worth calling attention to the steeper decrease of the number of halos at increasing halo mass and increasing redshift. While at $z$=0 each step of the histogram decreases by roughly the same amount, this is not the case at $z$=6 and higher, where the steps decrease more and more at increasing halo mass. This behavior of the number frequency distribution will have far-reaching consequences. \begin{figure} \centering \includegraphics[width=7.5cm]{figure8.eps} \caption{ Three groups of data are displayed: (i) the integrated absolute U magnitude of the model galaxies as a function of the redshift (on logarithmic scale); each galaxy is indicated by a solid line with a different color code according to the mass of the BM component. The galaxies on display have BM masses of $10^8$ to $10^{12} M_\odot$ from the bottom to the top of the Figure (blue, magenta, yellow, green, red). (ii) The total magnitude in the U pass-band of all the galaxies present in the ideal sample of $1\, Mpc^3$ volume according to the \citet{Lukicetal2007} statistics (the blue dotted line with filled circles). The y-axis for the magnitudes is at the left-hand side of the panel. (iii) The SFRD(z) for the same sample of galaxies (the red solid line). Finally, the analytical fit of the \citet{MadauDickinson2014} SFRD (the black dashed line).
The y-axis for the SFRD is at the right-hand side of the figure.} \label{Umag_gal} \end{figure} \subsection{The reference case for the SFRD(z)} In Fig. \ref{Umag_gal} we present three groups of data: (i) The integrated absolute U magnitude of the model galaxies as a function of redshift (on logarithmic scale); each galaxy is indicated with a different color code. As expected, the absolute magnitude first decreases (the luminosity increases) and then increases (the luminosity decreases) as the redshift decreases toward zero. The peak in the luminosity occurs when the rate of star formation is maximum. (ii) The total luminosity and total magnitude in the U pass-band of all the galaxies present in the ideal sample contained in $1\, Mpc^3$ according to the \citet{Lukicetal2007} statistics. The total U flux is given by \begin{equation} [F_{\Delta\lambda}(z_j)]_T = \sum_i N_{i}(M_{DM}, z_j) F_{{\Delta\lambda}, i} {\Delta z_{j}} \end{equation} where $F_{{\Delta\lambda}, i}$ is the flux in the chosen pass-band of the generic galaxy $i$ of mass $M_{G,i} = M_{BM,i} + M_{DM,i}$, and $\Delta z_j$ is the generic redshift range centered on $z_j$ and defined by $0.5\times[z_{j} - z_{j-1}]$ and $0.5\times[z_{j+1} - z_{j}]$. The number of galaxies $N_{i}(M_{DM,i}, z_{j})$ at the generic redshift $z_j$ is calculated using the mass scale of the DM halos, whereas the photometric properties are obtained using the mass scale of the BM, and more precisely the mass in stars existing in the galaxy at the time $t$ or redshift $z$. The indices $i$ and $j$ run over the whole grids of masses and redshifts under consideration. The total U magnitude is shown by the dotted blue line in Fig. \ref{Umag_gal}. The magnitude scale along the y-axis is on the left-hand side of the figure. (iii) A similar procedure is applied to derive the total true SFR for the galaxies in the same ideal sample contained in a volume of $1 \, \rm Mpc^3$.
\begin{equation} [SFRD(z_j)]_T = \sum_i N_{i}(M_{DM}, z_{j} ) SFR_{i}(z_{j}) {\Delta z_{j}} \label{sfr_N} \end{equation} where the indices $i$ and $j$ run over the whole mass and redshift grids as before. The SFRD(z) is the red solid curve. The scale for the SFR is along the y-axis at the right-hand side of the figure. The SFRD(z) has the same trend as the total U-magnitude, thus confirming that the UV light is a good tracer of star formation. \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure9.ps}} } \caption{The theoretical SFRD(z) predicted from galaxy models of group B (solid black line) compared with the observational data (blue and red filled circles with error bars) and the analytical fit of \citet{MadauDickinson2014} (dotted line).} \label{sfr_madau} \end{figure} The results are shown in Fig. \ref{sfr_madau} and compared with the data and the empirical best-fit relation of \citet{MadauDickinson2014} given by eq. (\ref{eq_madau_dickinson}) (the dotted black line). Theory and data nearly agree in the location of the peak (redshift $z \simeq 2$) and on the low-redshift side (descending branch), whereas they may differ by up to a factor of three beyond the peak towards the past. The provisional conclusion we can derive at this stage is that the theoretical SFRD(z) and the data of \citet{MadauDickinson2014} fairly agree with each other, thus indicating that our simple model of the cosmic SFR reproduces the observational data well. It is worth emphasizing here that, for each bin of redshift, the SFRD(z) is obtained by summing up the contribution from galaxies of different mass, and in particular of different history and stage of star formation. For instance, in the redshift interval $1 < z < 3$ we may have both galaxies with increasing SFR and galaxies with decreasing SFR. The change in the slope of the SFRD(z) at $z\simeq 2$ implies a change in the slope of the mean SFR in the galaxy population.
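The number-weighted summation of eq. (\ref{sfr_N}) can be sketched as follows. The halo counts and per-galaxy SFRs below are toy functions assumed for illustration only; the real inputs are the \citet{Lukicetal2007} halo statistics and the SFRs of the group B models.

```python
from math import exp

# Sketch of eq. (sfr_N): SFRD(z_j) = sum_i N_i(M_DM, z_j) SFR_i(z_j) dz_j.
# Halo counts and per-galaxy SFRs are toy numbers, not the paper's grids.
halo_masses = [1e9, 1e10, 1e11, 1e12, 1e13]          # M_DM in Msun

def n_halos(m_dm, z):
    """Toy comoving halo counts per Mpc^3: power law with an exponential
    cut-off that moves to lower masses at higher redshift."""
    m_cut = 1e14 / (1.0 + z)**2
    return 1e12 / m_dm * exp(-m_dm / m_cut)

def sfr_galaxy(m_dm, z, beta=6.0, tau=3.0):
    """Toy SFR (Msun/yr) of the BM galaxy hosted by a halo of mass m_dm:
    M_BM = M_DM/beta with an assumed delayed-exponential time dependence."""
    t = 13.7 / (1.0 + z)          # crude age proxy in Gyr, for illustration
    m_bm = m_dm / beta
    return 1e-10 * m_bm * (t / tau**2) * exp(-t / tau)

def sfrd(z, dz=1.0):
    """Number-weighted sum over the halo mass grid, as in eq. (sfr_N)."""
    return sum(n_halos(m, z) * sfr_galaxy(m, z) * dz for m in halo_masses)

print([round(sfrd(z), 3) for z in (6.0, 2.0, 0.0)])
```

Even with these toy inputs, the summed SFRD peaks at intermediate redshift and declines toward both $z=0$ and high $z$, mimicking the shape discussed in the text.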
At $z > 2$ galaxies with increasing SFR dominate, at $z < 2$ the opposite holds, and the two balance each other at $z \simeq 2$. The SFRD(z) does not tell the behavior of individual galaxies, but only the current mean behavior of the whole galaxy population (see also the discussion in Section \ref{dissect} below). \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure10.eps} } } \caption{Contribution to the SFRD(z) from galaxies of different mass at varying redshifts. The red lines are for redshifts $z$=20, 15, 10, 8 and 6, the green lines for $z$=4, 2, 1, 0.8 and 0.6, the orange lines for $z$=0.4 and 0.2, finally the blue dashed line is for $z$=0. } \label{sfr_mass_redshift} \end{figure} It is also interesting to see the contribution to the SFRD by galaxies of different mass at different redshifts. This is displayed in Fig. \ref{sfr_mass_redshift}, where we show the product $SFR(z, M_{G}) \times N(z, M_{G})$ as a function of the total galaxy mass $M_G$ and redshift. Each line is at constant redshift. The color code bins the lines into four groups of redshift (the red lines are for redshifts $z$=20, 15, 10, 8 and 6; the green lines for $z$=4, 2, 1, 0.8 and 0.6; the orange lines for $z$=0.4 and 0.2; finally, the blue dashed line is for $z$=0). At high redshifts the dominant contribution is from the low-mass galaxies, it shifts to that from higher-mass galaxies at intermediate redshifts, and it gradually goes back to the low-mass range for redshifts tending to zero. Looking at the case $z$=0 in the $[SFR(M_G)\times N(M_G)]$ vs $M_G$ plane (Fig. \ref{sfr_mass_redshift}), the slope $d\log[SFR(M_G) \times N(M_G)] / d\log M_G \simeq -0.3$ for $M_G$ passing from $10^8$ to $10^{12}\, M_\odot$, i.e. it mildly decreases with the galaxy mass. \begin{figure} \begin{center} \includegraphics[width=7.5cm]{figure11.ps} \caption{ \textsf{Top Panel}: The SFR vs the galaxy stellar mass $M_s$ at different epochs in galaxies of different BM mass.
Since these models are without galactic winds, $M_s \simeq 0.97\times M_{BM}$. The blue solid line is for $z$=15, the green line for $z$=1.0, the orange line for $z$=0.6, and finally the dark red line for $z$=0. The solid red line labeled RP15 shows the observational data from \citet{RenziniPeng2015} for active galaxies. Finally, the red dotted line is the RP15 relation decreased by a factor of 10 (see text for details). \textsf{Bottom Panel}: the product $SFR(0)\times N(M_G,0)$ vs the mass in stars $M_s$ of each galaxy in a comoving $Mpc^3$. } \label{sfr_mass_number} \end{center} \end{figure} To strengthen the above conclusion, we look at the correlations of the SFR and SFRD(z) with the star mass $M_s$ and/or $M_{BM}$ as a function of the redshift. Thanks to the high efficiency of star formation ($\nu$) in all the models, $M_s \simeq 0.97 M_{BM}$. The relationships in question are shown in the two panels of Fig. \ref{sfr_mass_number}. The top panel shows the SFR vs $M_s$ at different redshifts, whereas the bottom panel displays the SFRD(z) at $z$=0. The slope and zero point of the SFR vs $M_s$ relationships change with the redshift. The slope gets smaller at decreasing redshift: specifically at $z$=15 (the blue line), $z$=1.0 and $z$=0.6 (the green and orange lines, respectively), and $z$=0 (the dark red line). These theoretical relationships are compared with the observational one by \citet[][]{RenziniPeng2015}, the red solid line labelled RP15. The theoretical relation at $z$=0 has the same slope as the one by \citet{RenziniPeng2015} but differs in the zero point: it coincides with the RP15 relation lowered by a factor of 10 (red dotted line). Model galaxies at $z$=0 have a minimal value of star formation, and therefore belong either to the group of so-called ``green valley'' galaxies or even to that of quiescent objects. They could rise to the values of \citet{RenziniPeng2015} by allowing the formation redshifts to span a wider range of values.
The key result of this panel is that the slope of the SFR vs $M_s$ relation is the same as that of the observational data over a wide range of redshifts ($0 \leq z < 1$). In the bottom panel, the product SFR$\times N(M_G,0)$ decreases at increasing star mass of the galaxies. Furthermore, there may be a qualitative agreement with the data of \citet{Speagle_etal2014} (their Figure 8), who find that the Main Sequence slope of the star-forming galaxies increases with the redshift, i.e. the conversion of gas into stars decreases with time for all masses, the massive ones in particular. This feature, otherwise known as ``downsizing'' from the observational point of view, appears after a mere application of $N(M_G,z)$, and thus it agrees with the concordance $\Lambda$-CDM cosmogony. Finally, there is a point to note in the bottom panel: at first glance, the case with $M_s=10^{11} \, M_\odot$ seems to deviate from the expected trend because of its apparently higher value. To single out its cause is a cumbersome affair. Most likely it is due to inaccuracies in the derivation of the galaxy number densities for galaxy masses in the high-mass end of the distribution. Although it is not in plain contrast with the other values of the stellar mass, we plan to investigate the issue using other halo mass functions in the literature \citep[see][and references therein]{Murray_etal2013}. We conclude this section with the provisional result that our simplified model for the evolution of the SFRD(z) nicely agrees with the observational data. However, it would be of general interest to single out the physical ingredients that are ultimately responsible for this result. This will be the subject of a few ad hoc designed experiments that are briefly illustrated in the following sections. \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure12.ps} } } \caption{The contribution to the total SFRD(z) of Fig. \ref{sfr_madau} from galaxies of different mass.
The top solid line is the theoretical SFRD(z) whereas the dashed line is the analytical fit of the data by \citet{MadauDickinson2014}. Finally, the remaining lines are the partial contributions to the SFRD(z) by galaxies of different BM mass from $10^{12} \, M_\odot$ (bottom) to $10^7\, M_\odot$ (top). } \label{dissect_sfrd} \end{figure} \subsection{Dissecting the SFRD(z)}\label{dissect} The first test to perform is to dissect the total $SFRD(z)$ into its components, i.e. to single out the functions $SFRD(z)_{M_{G}}$ whose sum at each $z$ yields back the total $SFRD(z)$. These are shown in Fig. \ref{dissect_sfrd}. Remarkably, all functions peak at $z \simeq 2$. The decrease of the partial SFRDs at both sides of the peaks cannot be attributed only to the number density (see the curves in \citet{Lukicetal2007} and our Fig. \ref{fig_lukic}) because either they are still increasing toward their peak value (case of the high-mass galaxies) or they have already reached their peak at higher redshifts ($z \simeq 5$) as in the case of the low-mass galaxies (this mirrors the combined effects of the gravitational collapse and their destruction in the hierarchical aggregation). The only plausible explanation is that the SFRD(z) peak mirrors the superposition of the $SFR[t(z)]$s in existing galaxies, which reach the peak of their star forming activity roughly at the same time. In more detail, back in the past ($z > 5-6$) the dominant contribution came from low-mass objects; around the peak interval all galaxies contribute in nearly equal amounts even though those with masses in the range $10^{10}$ to $10^{11}$ $M_\odot$ are more important; finally, the low-mass ones are again dominating the contribution at low redshifts ($z < 1-2$). \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure13.ps} } } \caption{Changing the relation $M_{BM} = \beta^{-1} M_{DM}$.
The black dotted line is the SFRD(z) derived from the arbitrary assumption that $\beta=1$, i.e. equal amounts of DM and BM per galaxy. The red dashed line is the analytical best fit of the observational data of \citet{MadauDickinson2014} and the solid black line the theoretical SFRD(z) of Fig. \ref{sfr_madau}, obviously obtained with $\beta=6$. } \label{BM_DM_re} \end{figure} \subsection{Changing the ratio $M_{DM}$ to $M_{BM}$} It is worth examining the effect of adopting a different ratio $\beta = M_{DM}/M_{BM}$. Among the various possibilities, one is particularly interesting, i.e. $\beta= 1$: DM and BM in equal amounts. The effects of this assumption on the first and second building blocks (the number of DM halos of given mass as a function of the redshift, and the galaxy chemical models) are easy to foresee. The $N(M_{DM}, z)$ distribution remains the same and no particular remark has to be made. The chemical models describing the evolution of the BM component within the DM halos remain unchanged, at least in our simplified picture in which the dynamical interaction of DM and BM is neglected. Some effect would occur on the onset of galactic winds (if present) because the gravitational potential of the gas depends on the ratios $M_{BM}/M_{DM}$ and $R_{BM}/R_{DM}$; the effect is however small, not exceeding a few percent. The major difference caused by the new relationship between BM and DM shows up when calculating $[SFRD(z)]_T = \sum_i N(M_{DM}, z) SFR_{i}(M_{DM}, z) {\Delta z}$ because the $SFR_{i}(M_{DM}, z)$, which was the SFR of the BM galaxy with mass $M_{BM}=M_{DM}/\beta$, is now the one of the BM galaxy with $M_{BM} = M_{DM}$: there can be a large factor in between that depends on the redshift. In other words, at given total mass $M_G$ there is more BM to consider, so the SFR is more intense at early epochs.
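To make the bookkeeping explicit, the sum above can be sketched numerically. In the toy example below every number and functional form (the power-law number density and the linear SFR) is hypothetical and serves only to illustrate how lowering $\beta$ raises the SFR at fixed halo mass:

```python
import numpy as np

# Toy illustration of [SFRD(z)]_T = sum_i N(M_DM,i, z) * SFR_i(M_DM,i, z)
# at a fixed redshift, and of the effect of beta = M_DM / M_BM.
# All numbers and scalings here are hypothetical, not the paper's models.

halo_masses = np.logspace(7, 12, 6)          # M_DM grid [M_sun]
number_density = 1.0e6 * halo_masses**-0.9   # toy N(M_DM) per comoving Mpc^3

def toy_sfr(m_bm):
    """Hypothetical SFR rising linearly with baryonic mass [M_sun/yr]."""
    return 1.0e-10 * m_bm

def sfrd(beta):
    """Sum the toy SFRs over the halo population with M_BM = M_DM / beta."""
    return np.sum(number_density * toy_sfr(halo_masses / beta))

sfrd_std = sfrd(6.0)   # reference case, beta = 6
sfrd_eq = sfrd(1.0)    # equal amounts of DM and BM
print(sfrd_eq / sfrd_std)   # -> 6.0: a linear toy SFR scales directly with 1/beta
```

With the actual (non-linear, redshift-dependent) SFRs of the galaxy models the boost is not a constant factor, which is why the $\beta=1$ curve changes shape rather than just normalization.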
The new cosmic SFRD(z) and its comparison with the reference one and the observational SFRD(z) of \citet{MadauDickinson2014} are shown in Fig. \ref{BM_DM_re}. The new SFRD(z) closely resembles that of the reference case and nearly coincides with it on the tail from $z$=1 to $z$=0, but at higher redshifts it flattens out and runs well above the reference case (it peaks at about $z$=3 instead of $z$=1 to $z$=2 and exceeds it by a factor of about three at higher redshifts). One is tempted to argue that the cosmic SFRD(z) could be a good tracer of the amount of DM with respect to BM. \begin{table*} \begin{center} \caption{Characteristic quantities of the models of Group B at the onset of the galactic wind. The following quantities are shown: the baryonic mass $M_{BM}$ in solar units, the accretion timescale $\tau$ in Gyr, the efficiency of star formation $\nu$, the time $t_{GW}$ in Gyr of the occurrence of the galactic wind, the gas fraction $G_{g,GW}$, the star mass fraction $G_{s,GW}$, the current metallicity $Z_{GW}$, the mean metallicity $<Z_{GW}>$, the SFR $SFR_{GW}$, the gas gravitational potential energy $\Omega_{g,GW}$, and the gas thermal energy $E_{th,GW}$ at the onset of the galactic wind (both are per unit mass of the galaxy and in $erg\, g^{-1}$). The top models refer to the case of the standard rate of star formation and condition for galactic wind. The bottom models refer to the case in which the thermal budget given to the interstellar medium is artificially cooled down to a suitable value and at the same time the SFR is lowered by means of $\nu_{eff}$ so that the galactic wind can occur only at the present time. See the text for details.
} \label{keydata_winds} \begin{tabular}{ |r c| c| c| c| c| c| c| c| c| c|} \hline \multicolumn{11}{c}{Standard Galactic Winds and SFR}\\ \hline \multicolumn{1}{|r}{$M_{BM}$ } & \multicolumn{1}{c|}{$\tau$ } & \multicolumn{1}{c|}{$\nu$ } & \multicolumn{1}{c|}{$t_{GW}$ } & \multicolumn{1}{c|}{$G_{g,GW}$} & \multicolumn{1}{c|}{$G_{s,GW}$} & \multicolumn{1}{c|}{$Z_{GW}$ } & \multicolumn{1}{c|}{$<Z_{GW}>$} & \multicolumn{1}{c|}{$SFR_{GW}$} & \multicolumn{1}{c|}{|$\Omega_{g,GW}$|} & \multicolumn{1}{c|}{$E_{th, GW}$} \\ \hline 1.0$\times 10^{7}$ & 6 & 10 & 0.007 & 0.010 & 0.001 & 0.0001 & 0.0001 &1.05E-03 & 1.72E-02 & 9.13E+00 \\ 1.0$\times 10^{8}$ & 5 & 10 & 0.007 & 0.011 & 0.001 & 0.0001 & 0.0001 &1.06E-02 & 1.75E+00 & 9.27E+01 \\ 1.0$\times 10^{9}$ & 4 & 10 & 0.010 & 0.011 & 0.001 & 0.0001 & 0.0001 &1.09E-01 & 1.80E+02 & 9.15E+02 \\ 1.0$\times 10^{10}$ & 3 & 10 & 0.010 & 0.012 & 0.001 & 0.0008 & 0.0008 &1.21E+00 & 2.01E+04 & 2.13E+04 \\ 1.0$\times 10^{11}$ & 2 & 10 & 0.100 & 0.030 & 0.019 & 0.0135 & 0.0083 &2.95E+01 & 5.22E+06 & 5.24E+06 \\ 1.0$\times 10^{12}$ & 2 & 10 & 1.010 & 0.040 & 0.362 & 0.0385 & 0.0311 &4.03E+02 & 5.17E+08 & 5.32E+08 \\ \hline \end{tabular} \begin{tabular}{ |r c| c| c| c| c| c| c| c| c| c|} \multicolumn{11}{c}{Modified SFR and Conditions for the onset of Galactic Winds }\\ \hline \multicolumn{1}{|r}{$M_{BM}$ } & \multicolumn{1}{c|}{$\tau$ } & \multicolumn{1}{c|}{$\nu$ } & \multicolumn{1}{c|}{$\eta_{th}$ } & \multicolumn{1}{c|}{$G_{g,GW}$} & \multicolumn{1}{c|}{$G_{s,GW}$} & \multicolumn{1}{c|}{$Z_{GW}$ } & \multicolumn{1}{c|}{$<Z_{GW}>$} & \multicolumn{1}{c|}{$SFR_{GW}$} & \multicolumn{1}{c|}{$\Omega_{g,GW}$} & \multicolumn{1}{c|}{$E_{th, GW}$} \\ \hline 1.0$\times 10^{7}$ & 6 & 10 & 0.00001 & 0.033 & 0.920 & 0.0440 & 0.0357 &2.89E-04 & 1.87E-01 & 1.58E-01 \\ 1.0$\times 10^{8}$ & 5 & 10 & 0.00010 & 0.020 & 0.928 & 0.0445 & 0.0365 &2.14E-03 & 1.12E+01 & 9.07E+00 \\ 1.0$\times 10^{9}$ & 4 & 10 & 0.00500 & 0.040 & 0.903 & 0.0488 & 0.0348 &1.58E-02 
& 2.24E+03 & 2.07E+03 \\ 1.0$\times 10^{10}$ & 3 & 10 & 0.01000 & 0.007 & 0.928 & 0.0485 & 0.0371 &6.14E-02 & 3.99E+04 & 3.39E+04 \\ 1.0$\times 10^{11}$ & 2 & 10 & 0.10000 & 0.007 & 0.923 & 0.0514 & 0.0370 &3.42E-01 & 2.79E+06 & 2.54E+06 \\ 1.0$\times 10^{12}$ & 2 & 10 & 0.30000 & 0.008 & 0.913 & 0.0556 & 0.0366 &1.51E+00 & 9.24E+07 & 8.45E+07 \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure14.ps} } } \caption{Changing the $N(M_{DM}, z)$ relationship. See the text for details. } \label{NDM_zeta} \end{figure} \subsection{Changing the $N(M_{DM}, z)$ relationship} At each redshift the gravitational aggregation of lumps of DM and BM into objects of larger and larger total mass is described by the function $N(M_{DM+BM}, z)$, whose mass dependence is customarily approximated by $N(M_{DM},z)$ thanks to the large ratio $M_{DM}/M_{BM}$. However, the exact shape of the function $N(M_{DM},z)$ is still uncertain, even if the one we have adopted may be a good approximation of the real one. Based on these considerations, the question naturally arises: how would the cosmic SFRD(z) change if the underlying mass function of DM halos were different from the one currently in use? To answer the question without venturing into arbitrary speculations, we perform a simple numerical experiment. We assume that the mass distribution does not vary with the redshift but only with mass, and test four mass distributions: namely $N(M_{DM},0)$, $N(M_{DM},2)$, $N(M_{DM},6)$ and $N(M_{DM},10)$. Therefore, while the rate of star formation in the model galaxies varies with the redshift, their number does not. With this recipe, we calculate the corresponding SFRD(z). We will refer to these as the \textit{false SFRDs}. The results are shown in Fig. \ref{NDM_zeta}. No renormalization has been applied to the false SFRDs to make their peak values coincide with the observational one by \citet{MadauDickinson2014}.
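A toy numerical version of this experiment (all functional forms below are hypothetical stand-ins, not the \citet{Lukicetal2007} HGF or the paper's galaxy models) shows the mechanics of freezing the number densities at one redshift while letting the SFRs evolve:

```python
import numpy as np

# "False SFRD" experiment in miniature: convolve redshift-dependent toy
# SFRs with a number density frozen at a single redshift z_freeze.
# Every functional form and number below is an illustrative assumption.

masses = np.logspace(7, 12, 6)               # galaxy masses [M_sun]

def n_of_m(m, z):
    """Toy evolving number density per comoving Mpc^3 (not Lukic et al.)."""
    return 1.0e6 * m**-0.9 * np.exp(-z / 5.0)

def sfr(m, z):
    """Toy delayed-tau SFR expressed in redshift via a crude t(z) proxy."""
    t = 13.5 / (1.0 + z)                     # toy time coordinate [Gyr]
    tau = 3.0
    return 1.0e-10 * m * (t / tau) * np.exp(-t / tau)

def sfrd(z, z_freeze=None):
    """Total SFRD at z; optionally freeze the mass function at z_freeze."""
    zn = z if z_freeze is None else z_freeze
    return sum(n_of_m(m, zn) * sfr(m, z) for m in masses)

# The false curve coincides with the true one at the freezing redshift
# and deviates elsewhere, mirroring the behavior seen in Fig. NDM_zeta.
print(sfrd(2.0), sfrd(2.0, z_freeze=2.0))    # identical by construction
```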
It is interesting to note that the false SFRDs resemble the real one at low redshift ($z \leq 2$), strongly deviate from it at intermediate redshifts, and eventually tend again to it at high redshifts. Since the SFRs of the model galaxies are the same as those of the reference case, this clearly shows that $N(M_{DM}, z)$ is the term that mainly drives the shape of the cosmic SFRD(z). The gravitational building up of galaxies at early epochs ($z \geq 1-2$) yields the rising branch whereas in more recent epochs ($z \leq 1-2$) the decline of the mean SFR in galaxies by gas consumption most likely prevails. \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure15.ps} } } \caption{The effect of the intrinsic efficiency $\nu$ on the SFRD(z) derived from models of the groups C (dashed lines) and B (solid line) and the comparison with observational data (filled circles with error bars) and their analytical fit (dotted line) of \citet{MadauDickinson2014}. The three lines for the models of group C refer to the different $\nu$ under consideration (0.1, 1 and 10 from the bottom to the top of the figure).} \label{sfr_nu} \end{figure} \subsection{Changing the efficiency of star formation $\nu$} The rate of star formation we have adopted also contains the efficiency parameter $\nu$, whose effects are worth investigating. Figure \ref{sfr_nu} shows the SFRD(z) expected for models of type C in which the efficiency parameter $\nu$ of the SFR is decreased from $\nu=10$ (top long dashed line) to $\nu=1$ (middle long dashed line) and even to $\nu=0.1$ (bottom long dashed line). Together with the observational data (filled circles) we plot the analytical fit (dotted line) by \citet{MadauDickinson2014}, and finally the theoretical SFRD(z) for models B (the solid black line). It is soon evident that models with too low a SFR ($\nu=0.1$) can be ruled out because they fall too far below the observational data.
The agreement between theory and the observational data is good for case B ($\nu=10$) and also for case C models with high efficiencies of star formation ($\nu=1$ and $\nu=10$); both cases, however, are somewhat higher than observed on the descending branch towards $z$=0. \subsection{Changing the SFH of galaxies} It has repeatedly been said that an important requisite to get the observed SFRD(z) is that the star formation in galaxies starts very small, increases to a peak value and then declines because of gas consumption. How legitimate is the kind of temporal dependence of the SFR we have been using so far? In other words, can we obtain the same SFRD(z) using different types of SFR? To test this point, we explore here two different alternatives: (i) in each galaxy the rate of star formation is constant and equal to a suitable value so that the desired amount of stars is obtained; (ii) the rate of star formation is a mere exponentially decreasing function from a maximum value at the beginning to the present-day value. \begin{figure} \centering \includegraphics[width=7.5cm]{figure16.ps} \caption{The cosmic SFRD(z) predicted by galaxy models whose rate of star formation is constant with time and equal to the mean value expected for models of group B (black solid line). The mean SFR is different for each galaxy. The value is calculated from $M_s(T_G) = \int_0^{T_G} \Psi(t)\, dt = <SFR>\, T_G$, where $M_s(T_G)$ is the present-day mass in stars of the galaxy with the same total mass but the normal time-varying $\Psi(t)$, and $T_G$ is the present-day age of the galaxy. The blue dotted line is the SFRD(z) for models of group B and the red dashed line is the observational fit by \citet{MadauDickinson2014}. } \label{sfr_const} \end{figure} \textsf{Constant Star formation}.
The analysis is made by means of models B for which we calculate the mean SFR as \begin{equation} <SFR> = \frac{ \int_0^{T_{G}} \Psi(t) dt} {T_{G} } \Rightarrow M_s(T_G) = <SFR> \times T_{G} \label{mean_sfr} \end{equation} where $T_G$ is the galaxy age, $\Psi(t)$ the current SFR, and $M_s(T_G)$ the total mass in stars at the galaxy age $T_G$. All these quantities are known from the previous calculation of models B. Since no galactic winds are considered in models of group B and nearly all BM mass is converted into stars ($\simeq 90 \%$), for all practical purposes their $<SFR>$ can be estimated inserting in eq.(\ref{mean_sfr}) $M_s(T_G) = 0.9\times M_{BM}$, $T_G \simeq 13.5$ Gyr, and expressing it in $M_\odot\, yr^{-1}$. With the aid of these $<SFR>$, we derive the new SFRD(z) with the usual procedure and compare it with the observational one. The result is shown in Fig.\ref{sfr_const}. As expected, now the cosmic SFRD(z) simply increases with decreasing redshift, thus mirroring the underlying increasing mean number density of galaxies of different mass. This finding lends strong support to our previous conclusion about the time dependence of the SFR taking place in each galaxy. \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure17.ps} } } \caption{The predicted SFRD(z) for the closed-like models of type A (see text for details). The thin lines show the partial contribution to the SFRD(z) from galaxies of different mass: from top to bottom the galaxies are labelled by their BM mass scale from $10^7$ to $10^{12} \, M_\odot$. The heavy solid line is the total SFRD(z). Finally, the dashed line is the analytical fit of the data by \citet{MadauDickinson2014} } \label{quench} \end{figure} \textsf{Exponentially decreasing star formation rate}. To cast light on this issue we make use of the models of group A. In all these models the timescale of mass accretion and intrinsic star formation efficiency are $\tau= 0.01$ Gyr and $\nu=10$. 
These models closely mimic the closed-box approximation. With these assumptions, the SFR of these models is essentially a simple exponential law. Consequently, the maximum star formation occurs at the beginning of the SFH and declines ever since. The resulting SFRD(z) is shown in Fig. \ref{quench}, which shows the partial contribution to the SFRD(z) from galaxies of different mass (thin lines), the total SFRD(z) (heavy solid line), and the observational fit. Looking at the total SFRD(z) we note that it is very high (actually much higher than the observational one) at high redshift, it has a lull at intermediate redshifts and it remains lower than the observational one at low redshifts. The reason for this awkward behavior can be accounted for by examining the partial contributions from galaxies of different mass. First of all, the contributions of galaxies with $M_G > 10^{10}\, M_\odot$ first increase with decreasing redshift, reach a peak value and then decrease again. At z=0 their contributions are comparable within a factor of five. The galaxies of lower mass have, at low redshift, contributions smaller than the massive ones; the opposite happens at high redshift, and they are also responsible for the intermediate-redshift lull. All this can be attributed to the time dependence of the SFR in each galaxy, which is a simple exponential law, the same for all galaxy models. The net conclusion of this experiment is that an always decreasing SFR from an initial maximum to the present-day value cannot generate the desired SFRD(z) unless other physical effects are introduced. \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure18.ps} } } \caption{The predicted SFRD(z) for models of type B with galactic winds whose key data are reported in Table \ref{keydata_winds} and comparison with the observational data (filled circles) and their analytical fit (dashed line) by \citet{MadauDickinson2014}.
} \label{wind} \end{figure} \subsection{Introducing galactic winds} All the galaxy models used so far have been calculated ignoring the possible presence of galactic winds (i.e. the condition (\ref{eth_omg}) for the onset of galactic winds has not been applied). In this section, we take the energy injection by supernova explosions and stellar winds into account and apply condition (\ref{eth_omg}). In this view of the whole issue of galactic winds, the above prescription implies that when condition (\ref{eth_omg}) is satisfied, the remaining gas is supposed to escape the galaxy and further star formation no longer occurs. The evolution of the remnant galaxy is a passive one, and all the gas shed by stars formed in the previous epochs either in the form of stellar winds or supernova explosions no longer generates new stars. In addition to this, owing to the different gravitational potential wells of massive galaxies with respect to the low-mass ones, the threshold energy for galactic winds is reached earlier in low-mass galaxies than in the massive ones. All this is inherent to the \citet{Larson1974} model of galactic winds, which has been superseded by more sophisticated treatments of the wind process with the aid of NB-TSPH models of galaxy formation and evolution \citep{ChiosiCarraro2002, MerlinChiosi2006,MerlinChiosi2007,Merlinetal2012}. To illustrate the point, we show in the top part of Table \ref{keydata_winds} a few key quantities for models of Group B evolved in the presence of galactic winds according to the straight prescription of \citet{Larson1974}. With this prescription, galactic winds occur very early so that the stellar content of a galaxy is hardly built. The problem can be partly cured either by decreasing the efficiency of star formation (lower values of $\nu$) or by invoking a lower quantity of energy actually injected by supernova explosions and stellar winds into the interstellar medium (more efficient cooling of this energy).
Since in doing this a certain degree of arbitrariness is unavoidable owing to the lack of suitable constraints on the galaxy models in use, we prefer to adopt a different strategy. This modeling of the galactic winds is not realistic because the numerical NB-TSPH simulations have indicated that galactic winds are not instantaneous but take place on long timescales. Gas heated up by supernova explosions and stellar winds and cooled down by radiative processes not only gradually reaches the escape velocity but also affects the efficiency of star formation because the hot gas is continuously subtracted from it. All this cannot be easily incorporated in the simple-minded galaxy model we are using here. To cope with this difficulty we modify our model as follows. First of all, feeling that the cooling algorithm we are using is not as good as the one currently adopted in NB-TSPH models, we introduce an efficiency parameter $\eta_{th}$ ranging from 0 (no energy feed-back) to 1 (full energy feed-back) and accordingly change condition (\ref{eth_omg}) into the new one \begin{equation} \eta_{th} \times E_{th}(t) \geq |\Omega_{g}(t)| \label{new_th_cond} \end{equation} Second, we change the star formation law, redefining the parameter $\nu$ as an effective efficiency given by \begin{equation} \nu_{eff} = \nu \times \frac{|\eta_{th} \times E_{th} - |\Omega_{g}||}{\eta_{th} \times E_{th} + |\Omega_{g}|} \label{nu_eff} \end{equation} where $\nu$ is the usual efficiency. By decreasing the efficiency of star formation at increasing $E_{th}$ we intend to mimic the fact that hot gas is likely less prone to generate stars by gravitational collapse. As a consequence, the threshold stage for the onset of galactic winds may occur much later in time or even be avoided altogether. Less gas is turned into stars, as if part of the gas were continuously escaping from the galaxy. The net SFR decreases with obvious consequences on the SFRD(z).
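A minimal numerical sketch of eq. (\ref{nu_eff}) follows; reading the $\Omega_g$ in the denominator through its absolute value, consistently with the wind condition, is our interpretation, and the numbers are illustrative rather than taken from Table \ref{keydata_winds}:

```python
# Effective star-formation efficiency damped as the reduced thermal
# energy eta_th * E_th approaches the gas binding energy |Omega_g|.
# Using |Omega_g| in the denominator is our reading of the text.

def nu_eff(nu, eta_th, e_th, omega_g):
    """nu * |x - y| / (x + y) with x = eta_th * E_th and y = |Omega_g|."""
    x = eta_th * e_th
    y = abs(omega_g)
    return nu * abs(x - y) / (x + y)

nu = 10.0
print(nu_eff(nu, 0.5, 1.0e4, -5.0e6))  # thermal energy negligible: close to nu
print(nu_eff(nu, 0.5, 1.0e7, -5.0e6))  # eta_th*E_th == |Omega_g|: 0, wind onset
```

The efficiency thus interpolates smoothly between the standard value $\nu$ (cold gas) and zero at the wind-onset threshold.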
New models of Group B are calculated using the effective efficiency $\nu_{eff}$ of eq. (\ref{nu_eff}) and the new condition (\ref{new_th_cond}) for the onset of galactic winds. The values of $\eta_{th}$, and hence of $\nu_{eff}$, are chosen in such a way that the galactic winds occur only at the present age or later. These parameters are listed in the bottom part of Table \ref{keydata_winds}. Throughout their history these models have an SFR lower than their standard counterparts, thus mimicking the most important effect of the energy feedback of evolving stars, i.e. heating up part of the gas and subtracting it from star formation. The SFRD(z) expected from these models and the comparison with the observational one of \citet{MadauDickinson2014} is shown in Fig. \ref{wind}. Theory and observations agree beyond expectations. Although our treatment of galactic winds is very crude, we suspect that galactic winds should play only a marginal role in shaping the SFRD(z). \begin{figure} \centering{ {\includegraphics[width=7.5cm]{figure19.ps} } } \caption{The theoretical SFRD(z) from models of group B (solid black line) compared with the observational data (blue and red filled circles with error bars), the old analytical fit by \citet{MadauDickinson2014} (long dashed line), the original empirical relationship by \citet{MadauFragos2017} (short dashed line) and the same shifted by the factor 0.66 to compensate for the different assumptions for the stellar IMF (dashed dotted line). See the text for details. } \label{two_fits} \end{figure} \subsection{Changing the analytical best fit} We conclude the analysis by comparing the theoretical models with the new analytical best fit of the observational data by \citet{MadauFragos2017}, who take into account recent data in the redshift interval ($4 \leq z \leq 10$) and also adopt the IMF by \citet{Kroupa2001} instead of the \citet{Salpeter1955} one. Changing the IMF introduces a factor of 0.66 when passing from the old to the new one. The comparison is shown in Fig. \ref{two_fits}.
Correcting for this factor as appropriate, the agreement between theory and observation remains. \subsection{Comparison with N-Body cosmological simulations} Recent attempts to model the SFRD(z) in the framework of large scale simulations of hierarchical galaxy formation in $\Lambda$-CDM cosmogony including both DM and BM have been made possible by the new generation of numerical codes developed by \citet[][and references therein]{Hernquist_Springel_2003, Springel_Hernquist_2003a,Springel_Hernquist_2003b,Vogelsberger_etal_2012,PuchweinSpringel2013,Baraietal2013}, in which much effort is devoted to including radiative cooling and heating in the presence of a UV background radiation field, star formation and the associated feedback processes. The SFRD(z) in particular has been addressed by \citet[][see their Fig. 7]{KatsianisTescarietal2017} and \citet{Pillepich_etal_2017}. The key results are the comoving mean SFR and cosmic SFRD as functions of look-back time and/or redshift, which are very similar to those we have used here. It is worth emphasizing that the mean SFH and SFRD(z) refer to the whole slab of Universe under examination, and not to any galaxy in particular. In our study we have taken a different perspective: starting from galaxies of which we follow in detail the SFH, we integrate over the whole population of galaxies in the same Universe slab (whose number is derived from the hierarchical growth of structures in the $\Lambda$-CDM cosmogony) and we derive the total SFRD(z). In other words, starting from individual objects, we reconstruct the mean SFRD(z). In this context, the results of the present study are in perfect agreement with those obtained from extensive and time consuming cosmological simulations. The novelty of the present study is that we arrive at the same conclusions with a much simpler approach, in which all physical foundations of the cosmic SFRD can be changed and separately analyzed with almost no computational time needed.
\section{General remarks and conclusions}\label{Conclusions} Prior to any consideration, we point out that (a) the HGF and the galaxy SFR are the starring actors of the whole problem and (b) no specific assumption is made to force the galaxy models in use to reproduce the cosmic SFRD (the choice of their leading parameters is suggested by other independent arguments). Based on the present analysis, we may conclude: (i) The shape of the SFRD(z) is primarily driven by the cosmic mass distribution of galaxies, i.e. the function $N(M_{G}, z)$ in place at each value of the redshift. The galaxy mass distribution function in turn partly results from the growth of primordial fluctuations to the collapse stage, and partly from the aggregation of existing objects with active or quiescent star formation into new ones of larger mass (the classical hierarchical view). (ii) The second important ingredient is the rate of star formation taking place in individual galaxies. Only the so-called time-delayed SFR, i.e. a rate of star formation that starts small, grows to a maximum and then declines, can yield the desired SFRD(z). In the formalism of the infall models, the BM component (in the form of gas) flows into the gravitational potential well of the DM at a suitable rate with an exponential time dependence, $\dot{M}_{BM} \propto \exp(-t/\tau)$, and is gradually converted into stars by the law $\dot{M}_{s} = \nu M_g$, giving rise to the time-delayed star formation $\dot{M}_s \propto \frac{t}{\tau} \exp(-t/\tau)$. This kind of SFR is able to reproduce the one inferred in galaxies of different morphological type \citep[see][]{Sandage1986,Thomasetal2005} and also the SFR resulting from detailed numerical NB-TSPH simulations of galaxies \citep[][]{ChiosiCarraro2002,Merlinetal2012}. Constant and exponentially declining SFRs cannot yield the observed SFRD(z).
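Where the delayed shape comes from can be seen in a minimal closed-form sketch (instantaneous-recycling limit, gas return neglected, and $\nu$ treated here as an inverse timescale):

```latex
% Exponential infall feeding a gas reservoir consumed by a linear Schmidt law:
\begin{displaymath}
  \dot{M}_g = A\, e^{-t/\tau} - \nu M_g , \qquad \dot{M}_s = \nu M_g ,
\end{displaymath}
% which, with M_g(0) = 0, integrates to
\begin{displaymath}
  M_g(t) = \frac{A}{\nu - 1/\tau}\left( e^{-t/\tau} - e^{-\nu t} \right) ,
\end{displaymath}
% and in the limit \nu -> 1/\tau reduces to M_g(t) = A t e^{-t/\tau}, so that
\begin{displaymath}
  \dot{M}_s = \nu M_g \propto \frac{t}{\tau}\, e^{-t/\tau} ,
\end{displaymath}
% a rate that starts at zero, peaks at t = \tau, and then declines
% by gas consumption.
```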
However, also the intrinsic efficiency of star formation (the parameter $\nu$) has an important role because, together with the timescale of mass accretion, it eventually drives the temporal dependence of star formation, passing from the one peaking at early epochs (high values of $\nu$) to that more skewed towards the present (low values of $\nu$), through the interesting case of nearly constant star formation. We plan to better investigate this issue in a forthcoming study. (iii) The best galaxy models to use are those of type B or even type C with minor adjustments with respect to those in use here, which tend to produce too high metallicities. The problem can be easily solved either by simply changing the net metal enrichment per stellar generation (the parameter $\zeta$ in eqn.(\ref{zeta})) to lower values or by playing with the other model parameters $\tau$, $\nu$, and $\nu_{eff}$. Since this issue is marginal to our discussion we leave it to future investigations. However, the agreement shown by type B and C models imposes a strong constraint on the type of star formation taking place in galaxies. It cannot be too diluted over the Hubble time but instead should be peaked at early epochs. (iv) At early and late epochs (i.e. high and low redshifts) the major contribution to the SFRD(z) comes from galaxies of relatively low mass, whereas at intermediate redshifts the contribution from intermediate-mass galaxies may parallel or even exceed that from the low-mass ones. Although always present at all epochs, the contribution from high-mass galaxies is always smaller than that from low- and intermediate-mass ones. (v) The energy feedback to the interstellar gas is due only to supernovae and stellar winds; no AGN feedback has been considered. Radiative cooling of the injected energy is taken into account, albeit in a simplified fashion. This point needs to be improved.
The present galaxy models are not the best ones to investigate the effect of galactic winds because, owing to the one-zone approximation, the onset of galactic winds at a certain time means a sudden interruption of the star formation process, whereas in real galaxies, and also in numerical 3D simulations of galaxy formation and evolution, galactic winds take place locally and over very long timescales without halting star formation in the whole system. To cope with this, we preferred to decrease the efficiency of star formation as the thermal content of the gas, despite the radiative cooling, tends to approach and eventually overwhelm the gravitational potential energy of the gas. In general, galactic winds, even if they improve the overall agreement of the models with the observational data, are found to play a secondary role in the context of the temporal evolution of the cosmic SFRD(z). (vi) The SFRD(z) does not represent the instantaneous SFR in individual galaxies $\Psi_G[t(z)]$, but measures the mean SFR of the population of galaxies in a unit volume. Therefore, it mirrors the product $\Psi_{G}[t(z)]\times N[M_G, t(z)]$, where $M_G$ is the total mass of a galaxy and $t(z)$ is the particular time-redshift relation of the cosmological model of the Universe that is adopted. Using the SFRD(z) instead of $\Psi_G[t(z)]$ to model the history of single galaxies may lead to wrong results. The opposite is also true. (vii) We have adopted the HGF of \citet{Lukicetal2007}, which in turn stems from the HMF of \citet{Warren_etal_2006}, simply because it is an easy-to-use tool for our purposes. However, owing to the well-known problem of the non-universality of the fitting function $f(\sigma)$ \citep{Tinker_etal2008}, other models for the HMF can be found \citep{Murray_etal2013}. We plan to investigate this issue by using different HMFs. (viii) The present approach yields results that fully agree with those from the highly sophisticated large-scale numerical simulations.
Therefore it should be considered as a complementary tool for exploring different assumptions concerning basic physical processes such as the star formation law and the nature and efficiency of the energy feedback. (ix) We plan to refine the present modeling of the SFRD(z) history by replacing the simple galaxy models with a library of 3D N-body simulations of galaxy formation and evolution, and by deriving the number density evolution of galaxies of different mass, i.e. the functions $N(M_G,z)$, with the aid of ad hoc designed Monte-Carlo simulations. Finally, we will follow the photometric evolution of the galaxies to investigate the relationship between the SSFR and the stellar mass content in galaxies of different mass, redshift and colors. (x) As final conclusions we would like to briefly answer a few important questions that could be raised, such as for instance: Why is the SFRD(z) small at high and low redshift? Is the quenching of SF at $z \leq 2$ associated with a decreasing gas supply at late epochs? Why is star formation inefficient at early times even in the absence of feedback? Why is it possible to reproduce the data without AGN feedback? What is the meaning of the particular combinations of parameters $\nu$ and $\tau$ required to reproduce the data? The time (redshift) dependence of the SFR in the model galaxies is the result of the cross-effect of two physical processes: the gas accretion at a suitable rate onto the galaxy potential well and the gas consumption by star formation according to a Schmidt-like law. By controlling these two parameters the galaxy models can be tuned to match the gross features of real galaxies all along the Hubble sequence. The key feature of these galaxy models is that, independently of the galaxy mass, the SFR starts small, grows to a maximum and then declines as a function of time.
However, the SFR is strong and peaked at early epochs in massive objects (whose observational counterparts are the early-type galaxies), mild and prolonged in the intermediate-mass galaxies (observational counterparts: the disk galaxies), and very mild and likely stretching (perhaps in recurrent bursts of activity, not considered here) all over the Hubble time (observational counterparts: the irregular galaxies). As already mentioned, this scheme is strongly supported by the body of observational data on galaxies and by N-body simulations of them. This tuning of the galaxy models in usage here has been made over the years independently of the cosmic SFRD issue. At low redshift, the ``quenching'' of star formation is simply caused by the fact that individual galaxies tend to run out of fuel (gas) for their star-forming activity. At high redshift, a similar trend is recovered because galaxies are still in the gas accumulation phase and little gas has yet reached the threshold density required for star formation to occur \citep[it is worth recalling here that stars form in very dense environments]{Krumholz_2015}. So at these very early epochs, the natural expectation is that the star formation activity is low but growing with time. This trend would mimic the effect of some quenching at early epochs. Our reference SFRD(z) of Fig. \ref{sfr_madau}, obtained with standard energy feedback from supernovae and stellar winds, with no AGNs and no galactic winds, is already in rather good agreement with the observational one from $z=0$ to $z=2$ (the reference case simply mirrors the picture outlined above for the natural behavior of SFR in galaxies), whereas it tends to depart from it at increasing redshift. At redshift $z\simeq 10$ it is about a factor of 2 to 3 higher than expected. The presence of galactic winds slightly improves the agreement in the latter region (see Fig. \ref{wind}), and perhaps some other effects like mild quenching by AGNs could completely remove the discrepancy.
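The rise-peak-decline behavior of the SFR invoked above can be illustrated with a toy model. This is only an illustrative sketch, not the paper's actual galaxy models: we assume gas falls onto the galaxy potential well at a rate $\propto e^{-t/\tau}$ and is consumed by a Schmidt-like law $\mathrm{SFR}=\nu\,M_{\rm gas}$, with $\nu$ and $\tau$ playing the roles of the efficiency and accretion-timescale parameters mentioned in the text; the function name and parameter values are our own.

```python
import math

# Illustrative toy model only (NOT the paper's actual galaxy models):
# gas falls in at a rate proportional to exp(-t/tau) and is consumed by
# a Schmidt-like law SFR = nu * M_gas.  The resulting SFR starts small,
# grows to a maximum, and then declines, as described in the text.

def sfr_history(nu, tau, t_end=13.0, dt=0.001):
    """Euler-integrate dM_gas/dt = exp(-t/tau) - nu*M_gas; return (times, SFRs)."""
    m_gas, t = 0.0, 0.0
    times, sfrs = [], []
    while t <= t_end:
        sfr = nu * m_gas
        times.append(t)
        sfrs.append(sfr)
        m_gas += (math.exp(-t / tau) - sfr) * dt
        t += dt
    return times, sfrs

# A short tau and a large nu mimic a massive early-type galaxy (early,
# peaked SFR); a long tau and a small nu mimic a disk or irregular galaxy.
times, sfrs = sfr_history(nu=2.0, tau=1.0)
peak = sfrs.index(max(sfrs))
print(times[peak])  # the SFR peaks at an intermediate epoch (ln 2 for these values)
```

Varying $\nu$ and $\tau$ shifts the peak epoch and its width, which is the qualitative behavior exploited along the Hubble sequence.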
Our provisional conclusion is that strong and exotic quenching of the star formation in the interval $2\leq z \leq 8$ \citep[see for instance][and references therein]{Tescarietal2014, Renzini2016} is not strictly needed. The only case in which strong quenching, dust obscuration, or both are required is when an exponential SFR is used. However, the resulting SFRD(z) differs from the observed one in many other details and has to be discarded. Therefore, quenching does not likely play an important role in shaping the observed SFRD(z) as compared to the combined effect of the HGF $N(M_{DM},z)$ and of the $\Psi(t)$ modulated by the gradual accumulation of gas within the total gravitational potential well and conversion of it into stars. It goes without saying that AGNs and galactic winds are not excluded from the above picture, but they are simply suspected to play a less important role than customarily claimed. \section*{Acknowledgments} We would like to thank the anonymous referee for his/her useful critical comments that helped us to amend and improve the first version of the paper. C. Chiosi, F. Brotto, R. De Michele, and V. Politino are deeply grateful to the Heraeus Foundation for the financial support to attend the Heraeus Summer School 2016 ``Origins of Stars and Planets'' (August 2016, Florence, Italy) where this study was presented for the first time. C.C. would like to thank the Physics \& Astronomy Department of the Padova University for the kind hospitality and computing support. \bibliographystyle{mn2e}
\section{Introduction} An instance of the envy-free cake-cutting problem consists of the following. We are given $n$ players, numbered from $1$ to $n$, and a cake to be divided among them. Since the cuts are achieved with parallel knives, the cake is identified with the segment $[0,1]$, so that knife cuts are just points of this segment. A {\em division} of the cake is a partition of $[0,1]$ into finitely many intervals of positive length, which we call {\em nonempty pieces} in this context. For each player $i$, there is a {\em preference function} $p_i$ which, given a division $\mathcal{D}$ of the cake, returns a subset of $\mathcal{D}\cup\{\varnothing\}$. A nonempty piece $I$ being in $p_i(\mathcal{D})$ means that player $i$ is happy to get $I$; the empty set $\varnothing$ being in $p_i(\mathcal{D})$ means that player $i$ is happy to get an {\em empty piece}, i.e., to get nothing (e.g., each nonempty piece is partially burnt, a situation the player prefers to avoid). An {\em envy-free division} of the cake is a division $\mathcal{D}$ such that there exists a map $\pi\colon[n]\rightarrow\mathcal{D}\cup\{\varnothing\}$ (matching players and pieces) that satisfies the following three conditions: \begin{enumerate}[label=(\roman*)] \item\label{i} $\pi(i)\in p_i(\mathcal{D})$ for all $i\in[n]$. \item\label{ii} $\mathcal{D}\subseteq\pi([n])$. \item\label{iii} $\pi(i)=\pi(i')\Longrightarrow (i=i' \mbox{ or } \pi(i)=\varnothing)$. \end{enumerate} Condition~\ref{i} means that each player gets a piece she is happy with, condition~\ref{ii} means that each piece is assigned to a player, and condition~\ref{iii} means that the same nonempty piece cannot be assigned to two distinct players. We now present two assumptions on the preference functions. When $n$ is a prime number or is equal to $4$, they will be enough to ensure the existence of an envy-free division. Consider a converging sequence $(\mathcal{D}^k)$ of divisions with a fixed number of nonempty pieces.
The preference function $p_i$ satisfies the {\em closed preferences} assumption if the following holds: when there is a converging sequence of pieces $(P^k)$ (empty or not) with $P^k\in p_i(\mathcal{D}^k)$ for all $k$, then $P^{\infty}\in p_i(\mathcal{D}^{\infty})$, where $\mathcal{D}^{\infty}$ and $P^{\infty}$ are respectively the limits of $(\mathcal{D}^k)$ and $(P^k)$. Here, we have to explain which metrics are used for the convergence, but to ease the readability, we postpone this discussion to the end of the introduction. We already emphasize that, with the chosen metric, whether or not the endpoints belong to the intervals does not matter. The second assumption, which we call the {\em full division} assumption, ensures that when $|\mathcal{D}|=n$, then $\varnothing\notin p_i(\mathcal{D})$ for every player $i$. In other words, under this assumption, when the cake has been divided into $n$ intervals of positive length, then no player is happy with an empty piece. The following theorem is our main result. Without the condition on $n$, it has been conjectured by Erel Segal-Halevi~\cite{segal2017fairly}, who proved it for at most $n=3$ players. \begin{theorem}\label{main} Consider an instance of the cake-cutting problem with the number $n$ of players being either a prime number or $4$. If the preference function of every player satisfies the closed preferences assumption and the full division assumption, then there exists an envy-free division. \end{theorem} We do not know whether this theorem is still valid if $n$ is neither $4$ nor a prime number.
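Conditions~\ref{i}--\ref{iii} are easy to check mechanically. The following sketch uses our own illustrative encoding, not anything from the paper: a piece is an endpoint pair, `None` stands for the empty piece, a preference profile maps each player to her set $p_i(\mathcal{D})$ of acceptable pieces, and `pi` is the candidate assignment.

```python
# A minimal sketch (illustrative encoding, not the paper's) of the three
# conditions (i)-(iii) defining an envy-free division.  Pieces are
# (left, right) endpoint pairs and None stands for the empty piece.

def is_envy_free(division, preferences, pi):
    players = range(len(preferences))
    # (i) every player gets a piece she is happy with
    if any(pi[i] not in preferences[i] for i in players):
        return False
    # (ii) every nonempty piece is assigned to some player
    if any(piece not in pi.values() for piece in division):
        return False
    # (iii) a nonempty piece is assigned to at most one player
    assigned = [pi[i] for i in players if pi[i] is not None]
    return len(assigned) == len(set(assigned))

# Two players, cake cut at 0.4; player 0 is happy with the left piece,
# player 1 with the right piece.
division = [(0.0, 0.4), (0.4, 1.0)]
preferences = {0: {(0.0, 0.4)}, 1: {(0.4, 1.0)}}
print(is_envy_free(division, preferences, {0: (0.0, 0.4), 1: (0.4, 1.0)}))  # True
print(is_envy_free(division, preferences, {0: (0.0, 0.4), 1: (0.0, 0.4)}))  # False
```

The second call fails condition~\ref{i} (and would also fail \ref{ii} and \ref{iii}): the right piece is unassigned and player $1$ does not accept the left one.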
The classical envy-free cake division theorem, attributed in the recent literature to Walter Stromquist~\cite{stromquist1980cut}, but found independently by Douglas Woodall~\cite{woodall1980dividing}, ensures the same conclusion with the additional assumption that $\varnothing\notin p_i(\mathcal{D})$ for every player $i$ in any case, even when $|\mathcal{D}|<n$, i.e., a player always strictly prefers a nonempty piece to an empty piece. This additional assumption is usually called the ``hungry players'' assumption and it has always been considered as crucial for the conclusion to hold. The divisions into at most $n$ pieces can be identified with points in the standard $(n-1)$-dimensional simplex $\Delta^{n-1}$ (these points being the cut positions). The approach by Francis Su~\cite{su1999rental} consists of constructing a triangulation of $\Delta^{n-1}$ and labeling it. The ``hungry players'' assumption implies that the labeling is a ``Sperner labeling'' and that Sperner's lemma can be applied. It was thus very surprising to discover that this assumption is unnecessary when $n\leq 3$, and that it might even be unnecessary for larger $n$. \begin{example} For a player $i$, a standard preference function is obtained with $$p_i(\mathcal{D})=\left\{I\in\mathcal{D}\colon\mu_i(I)=\max_{I'\in\mathcal{D}}\mu_i(I')\right\},$$ where $\mu_i$ is any absolutely continuous measure on $[0,1]$. (This is actually the original type of preference functions considered by Stromquist and by Woodall.) Call it an {\em attraction preference function}. In other words, player $i$ has her own way to weigh the pieces of the cake (encoded by $\mu_i$) and, facing a division of the cake, she is happy to get any of the heaviest pieces.
Another simple preference function satisfying the two assumptions required by Theorem~\ref{main}, yet making player $i$ sometimes happy with the empty piece, is obtained as follows: $$ p_i(\mathcal{D})=\left\{\begin{array}{l@{\hspace{1cm}}l} \displaystyle{\left\{I\in\mathcal{D}\colon\mu_i(I)=\min_{I'\in\mathcal{D}}\mu_i(I')\right\}} & \mbox{if $|\mathcal{D}|=n$} \bigskip\\ \{\varnothing\} & \mbox{otherwise.}\end{array}\right. $$ In other words, if the cake is divided into $n$ pieces, she is happy with any of the lightest pieces, and if the cake is divided into fewer than $n$ pieces, she always strictly prefers to get nothing. It models a situation where, for instance, she does not like the cake at all but will anyway take a piece when there are $n$ nonempty pieces, in order not to offend the cook. Call such a $p_i$ a {\em rejection preference function}. In the two cases where all the players possess attraction preference functions or all the players possess rejection preference functions, an envy-free division exists, without any condition on $n$: in the rejection case, this can be seen by a simple adaptation of Su's rental harmony proof~\cite{su1999rental}, and in the attraction case, this is the Stromquist-Woodall theorem. Theorem~\ref{main} shows that when $n$ is a prime number or $4$, an envy-free division exists, even if preference functions of both types are simultaneously present. \end{example} Without the full division assumption, an envy-free division (in the sense of conditions~\ref{i},~\ref{ii}, and~\ref{iii} above) is not necessarily achievable. Imagine for instance that the preference functions $p_i$ are such that, no matter which division $\mathcal{D}$ is chosen, we have $p_i(\mathcal{D})=\{\varnothing\}$ for all $i$: every player strictly prefers to get nothing instead of a piece of positive length (e.g., the cake has been poisoned).
Nevertheless, a division satisfying conditions~\ref{i} and~\ref{iii} always exists in this case, as shown by an easy adaptation of Su's proof, which we leave to the reader. Our main step toward the proof of Theorem~\ref{main} is given by Sperner-type results, where the usual boundary condition of the Sperner lemma is replaced by a symmetry condition. This symmetry comes from the fact that, in the representation of divisions by points in $\Delta^{n-1}$, divisions into at most $n-1$ pieces admit several representatives. These Sperner-type results, close to a conjecture by Segal-Halevi \cite{segal2017fairly}, are Theorem~\ref{thm:sperner-symmetry-prime} and Proposition~\ref{prop:sperner-symmetry-4}, stated and proved in Section~\ref{sec:sperner}. Our paper provides further evidence of the importance of combinatorial topology in studying social-choice problems. Recent literature abounds in more examples, especially in the area of fair division. These include a ``secretive-player'' version of the classical envy-free cake division theorem \cite{frick2017achieving} (see also Remark~\ref{secretive} in Section~\ref{proofs}), discrete versions of this theorem~\cite{discrete-connected,mirzakhani2015sperner}, and consensus-halving~\cite{SiSu03}. In particular, our paper contributes to the current vibrant research activity focused on variations and generalizations of Sperner's lemma. Examples, with many references, are discussed in a survey by De Loera et al.~\cite{DGMM}. \subsection*{Algorithmic aspects} Our proof is constructive in a purely logical sense but does not directly yield an algorithm for computing a desired division. Usually, finding such an algorithm in envy-free division problems is a byproduct of applying a Sperner-like theorem, which often provides a path-following method for approximating the required division (see e.g., Su~\cite[Section 5]{su1999rental} and Frick et al.~\cite[Section 5]{frick2017achieving}).
Thus, it would be interesting to find an algorithmic version of our proof, especially one using a path-following method. Such an algorithmic proof would not only provide a practical way to compute a desired envy-free division in our setting, but would also make the associated computational problem -- ``given polytime preference functions, find an envy-free division'' -- amenable to complexity study. The relevant complexity class here is the {\em PPAD class}, which is, very roughly speaking, the class of problems that can be solved by a path-following method. The PPAD class admits {\em PPAD-complete} problems, i.e., PPAD problems that are at least as hard as any other PPAD problem. The computational problem associated with the classical envy-free cake division theorem is PPAD-complete, as was shown by Deng et al.~\cite{deng2012algorithmic} (this reference also provides an accessible introduction to the PPAD complexity class). The PPAD-completeness of the classical envy-free cake division problem implies that our problem is PPAD-hard (since it is more general), but not necessarily PPAD-complete, since its membership in the PPAD class has not been established. Almost all the computational problems associated with Sperner's lemma are PPAD-complete; see e.g.,~\cite{chen2009complexity} and~\cite{kiraly2013ppad}, and the references therein. However, finding a fully-labeled simplex in the computational problem associated with our Theorem~\ref{thm:sperner-symmetry-prime} (which is our main Sperner-type result) is {\em not} more general than finding a fully-labeled simplex in the usual Sperner's lemma setting. Thus we cannot directly conclude that the computational problem associated with our Sperner-type result is PPAD-hard.
\subsection*{Convergence of intervals and divisions} The metric we consider on the measurable subsets of $[0,1]$ is the {\em symmetric difference metric} $\delta$, which is defined for two measurable subsets $A,B$ of $[0,1]$ by $\delta(A,B)=\mu(A\triangle B)$, where $\mu$ is the Lebesgue measure. Note that with the symmetric difference metric, $\varnothing$ is the limit of a sequence of intervals whose lengths go to $0$. The distance considered between two divisions $\mathcal{D}$ and $\mathcal{D}'$ is then simply the Hausdorff distance induced by $\delta$ between $\mathcal{D}\cup\{\varnothing\}$ and $\mathcal{D}'\cup\{\varnothing\}$. As already mentioned, divisions are usually identified with points in the standard simplex $\Delta^{n-1}$ and not with partitions into intervals. It is mainly to ease the definition of convergence and to avoid dealing with several representatives of the limit that we chose to work with partitions into intervals. Nevertheless, our theorem does imply the Stromquist-Woodall theorem when $n$ is a prime number or $4$, and Segal-Halevi's result for $n=2$. Indeed, if the closed preferences assumption is satisfied for the definition with points in the simplex, it is also satisfied for our definition. \begin{remark} Another option for defining the divisions and the topology would have been to consider $$D^{n-1}=\big\{(z_1,\ldots,z_{n-1})\in\mathbb{R}^{n-1}\colon 0\leq z_1\leq\cdots\leq z_{n-1}\leq 1\big\}/\sim,$$ where $\sim$ is the equivalence relation defined on $\mathbb{R}^{n-1}$ by $$(z_1,\ldots,z_{n-1})\sim(z_1',\ldots,z_{n-1}')\quad\mbox{if and only if}\quad \{z_1,\ldots,z_{n-1}\}=\{z_1',\ldots,z_{n-1}'\}.$$ Two points in $\mathbb{R}^{n-1}$ are equivalent if the sets consisting of their coordinates are equal. The space $D^{2}$ is the classical dunce hat space. The spaces $D^{n-1}$ have been more systematically studied by Andersen, Marjanovi\'c, and Schori~\cite{andersen1993symmetric}, and generalized by Kozlov~\cite{koz}.
There is a one-to-one correspondence between the points in $D^{n-1}$ and the divisions of the cake into at most $n$ pieces: the coordinates of a point in $D^{n-1}$ are the endpoints of the intervals (except $0$ and $1$). The Hausdorff metric on $\mathbb{R}^{n-1}$ is compatible with the equivalence relation $\sim$ and then induces a metric on $D^{n-1}$. Convergence of divisions could have been considered according to this metric: it does not change the topology. But we have not made this choice because we think it makes the description of the problem less intuitive and more cumbersome. \end{remark} \section{``Sperner lemmas'' with symmetries}\label{sec:sperner} The purpose of this section is to state and prove combinatorial fixed point results (Theorem~\ref{thm:sperner-symmetry-prime} and Proposition~\ref{prop:sperner-symmetry-4}) in the spirit of the Sperner lemma. One of these results, which will play a crucial role in our proof of Theorem~\ref{main}, is close to Conjecture 4.15 in the paper by Segal-Halevi \cite{segal2017fairly}, who realized that it would be the main step toward a proof of his conjecture on cake division. It deals with triangulations of $\Delta^{n-1}$ -- the standard $(n-1)$-dimensional simplex -- that satisfy some symmetry. \subsection{Statements} For each $j\in[n]$, we introduce two maps. The first map is the permutation $\rho^j$ on $[n]$ defined by $$\rho^j(i)=\left\{\begin{array}{ll} j & \mbox{if $i=1$} \\ i-1 & \mbox{if $2\leq i\leq j$} \\ i & \mbox{if $i\geq j+1$.}\end{array}\right.$$ The second map, denoted $r^j$, is the linear self-map of $\mathbb{R}^n$ defined by $r^j(\boldsymbol{e}_i)=\boldsymbol{e}_{\rho^j(i)}$ for $i=1,\ldots,n$, where the $\boldsymbol{e}_i$ are the unit vectors of the standard basis of $\mathbb{R}^n$. Note that $r^j$ is a bijection that induces a permutation of the facets of $\Delta^{n-1}$, and that when $j=1$, both maps are the identity map.
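These definitions lend themselves to a quick computational sanity check (an illustrative sketch with $n=5$; the helper names are ours): $\rho^j$ is the cycle sending $1$ to $j$ and shifting $2,\ldots,j$ down by one, and the determinant of the permutation matrix $r^j$ is the sign of $\rho^j$, namely $(-1)^{j-1}$, as established in Lemma~\ref{lem:detr} below.

```python
# A small computational sketch of the maps rho^j and r^j (n=5 chosen for
# illustration).  Since r^j is the permutation matrix of rho^j, its
# determinant is the sign of rho^j, namely (-1)^(j-1).

def rho(j, i, n):
    """The permutation rho^j on {1,...,n}: 1 -> j, i -> i-1 for 2 <= i <= j."""
    if i == 1:
        return j
    return i - 1 if i <= j else i

def sign(perm):
    """Sign of a permutation given as the list of images of 1,...,n."""
    n = len(perm)
    inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                     if perm[a] > perm[b])
    return -1 if inversions % 2 else 1

n = 5
for j in range(1, n + 1):
    images = [rho(j, i, n) for i in range(1, n + 1)]
    assert sorted(images) == list(range(1, n + 1))   # rho^j is a permutation
    assert sign(images) == (-1) ** (j - 1)           # det(r^j) = (-1)^(j-1)
print([rho(1, i, n) for i in range(1, n + 1)] == list(range(1, n + 1)))  # True: rho^1 is the identity
```

Since $\rho^j$ is the $j$-cycle $(1\;j\;j-1\;\cdots\;2)$, its sign is indeed $(-1)^{j-1}$.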
We identify $\Delta^{n-1}$ with $\{(x_1,\ldots,x_n)\in\mathbb{R}^n_+\colon\sum_{i=1}^nx_i=1\}$. The facet of $\Delta^{n-1}$ defined by $x_j=0$ is the {\em $\widehat{j}$-facet}. Note that the vertices of $\Delta^{n-1}$ are the $\boldsymbol{e}_i$'s. For a point $\boldsymbol{x}\in\Delta^{n-1}$, we define $J_{\boldsymbol{x}}$ to be the set of all $j$ such that $\boldsymbol{x}$ belongs to the $\widehat{j}$-facet. Triangulations and labelings considered in the paper will in general enjoy certain symmetries. We say that a triangulation $\mathsf{T}$ of $\Delta^{n-1}$ is {\em nice} if $r^j(\sigma)\in\mathsf{T}$ for every $j\in[n]$ and every simplex $\sigma\in\mathsf{T}$ included in the $\widehat{1}$-facet. (Such triangulations are called ``friendly'' in Segal-Halevi's paper.) Similarly, consider a labeling $\Lambda$ of the vertices $v$ of $\mathsf{T}$ with subsets of $[n]$. We say such a labeling is {\em nice} if $\Lambda\big(r^j(v)\big)=\rho^j\big(\Lambda(v)\big)$ for every $j\in [n]$ and every vertex $v$ of $\mathsf{T}$ included in the $\widehat{1}$-facet. \begin{theorem}\label{thm:sperner-symmetry-prime} Let $\mathsf{T}$ be a nice triangulation of $\Delta^{n-1}$ and let $\Lambda$ be a nice labeling of its vertices with nonempty proper subsets of $[n]$. If $n$ is a prime number, then there is an $(n-1)$-dimensional simplex $\tau\in\mathsf{T}$ such that it is possible to pick a distinct label in each $\Lambda(u)$ when $u$ runs over the vertices of $\tau$. \end{theorem} We do not know whether Theorem~\ref{thm:sperner-symmetry-prime} is true for any nonprime $n$. Up to additional conditions on the labeling, we are however able to prove that a special case holds when $n=4$. The {\em supporting face} $\operatorname{supp}(\boldsymbol{x})$ of a point $\boldsymbol{x}$ in $\Delta^{n-1}$ is the inclusion-minimal face of $\Delta^{n-1}$ containing this point. Two faces are {\em comparable by inclusion} if one of them is included in the other (they can be equal). 
\begin{proposition}\label{prop:sperner-symmetry-4} Let $\mathsf{T}$ be a nice triangulation of $\Delta^{3}$ such that the supporting faces of any two adjacent vertices are comparable by inclusion. Suppose that $\Lambda$ is a nice labeling of its vertices with nonempty subsets of $\{1,2,3,4\}$ such that for every vertex $v$, the subset $\Lambda(v)$ is either $J_v$ or a singleton $\{i\}$ with $i\notin J_v$. Then there is a $3$-dimensional simplex $\tau\in\mathsf{T}$ such that it is possible to pick a distinct label in each $\Lambda(u)$ when $u$ runs over the vertices of $\tau$. \end{proposition} Replacing $4$ by a prime number $n$ in the statement of Proposition~\ref{prop:sperner-symmetry-4} leads to a valid result, since such a result is a direct consequence of Theorem~\ref{thm:sperner-symmetry-prime}: this latter theorem requires much weaker constraints on the triangulation and the labeling. For the proof of Theorem~\ref{main}, such a result would actually be enough, but we think that Theorem~\ref{thm:sperner-symmetry-prime} has its own merit. \subsection{Preliminaries} For the proofs, we assume basic knowledge in algebraic topology; see the book by Munkres~\cite{Mun}, and especially Chapter 1 for the notions used hereafter (abstract simplicial complexes, chains, chain maps, etc.). To those traditional tools, we add the following ones. Let $\mathsf{R}$ be the abstract simplicial complex whose vertices are the points of $\mathbb{R}^n$ and whose maximal simplices are all possible $n$-subsets of $\mathbb{R}^n$. (Note that since $\mathsf{R}$ is an abstract simplicial complex, two $(n-1)$-dimensional simplices may have intersecting convex hulls, and the vertices of a simplex can be aligned.) For an oriented $(n-1)$-dimensional simplex $[\boldsymbol{v}_1,\ldots,\boldsymbol{v}_n]$ of $\mathsf{R}$, we define $\operatorname{det}_{\sharp}([\boldsymbol{v}_1,\ldots,\boldsymbol{v}_n])$ to be $\operatorname{det}(\boldsymbol{v}_1,\ldots,\boldsymbol{v}_n)$.
We then extend the definition of $\operatorname{det}_{\sharp}$ by linearity to all elements of $C_{n-1}(\mathsf{R},\mathbb{Z})$. The following generalization of the Sperner lemma will play a key role in the proofs of Theorem~\ref{thm:sperner-symmetry-prime} and Proposition~\ref{prop:sperner-symmetry-4}. Contrary to the latter results, there is no particular assumption on the triangulation $\mathsf{T}$ or on $n$. \begin{lemma}\label{lem:sperner_det} Let $\mathsf{T}$ be a coherently oriented triangulation of $\Delta^{n-1}$. Consider a labeling $\lambda$ of the vertices in $V(\mathsf{T})$ with points in $\Delta^{n-1}$, such that, for every vertex $v\in V(\mathsf{T})$, the point $\lambda(v)$ lies in the affine hull of $\operatorname{supp}(v)$. Then $$\left|\sum_{[v_1,\ldots,v_n]}\operatorname{det}(\lambda(v_1),\ldots,\lambda(v_n))\right|=1,$$ where $[v_1,\ldots,v_n]$ runs over the positively oriented $(n-1)$-dimensional simplices of $\mathsf{T}$. \end{lemma} \begin{proof} We proceed by induction on $n$. If $n=1$, the result is immediate: $\lambda(\boldsymbol{e}_1)=\boldsymbol{e}_1$. Suppose $n\geq 2$ and let $t$ be the element of $C_{n-1}(\mathsf{T},\mathbb{Z})$ that is the formal sum of all positively oriented $(n-1)$-dimensional simplices of $\mathsf{T}$. Our goal is to prove that $\left|(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t)\right|=1$, where we interpret $\lambda$ as a simplicial map from $\mathsf{T}$ to $\mathsf{R}$. Let $\operatorname{proj}\colon\mathbb{R}^n\rightarrow\mathbb{R}^{n-1}$ be the projection on the first $n-1$ coordinates, defined by $$\operatorname{proj}(\boldsymbol{e}_i)=\left\{\begin{array}{ll}\boldsymbol{e}'_i& \mbox{if $i\neq n$} \\ \boldsymbol{0} & \mbox{otherwise,}\end{array}\right.$$ where the $\boldsymbol{e}_i'$ are the unit vectors of the standard basis of $\mathbb{R}^{n-1}$.
We interpret $\operatorname{proj}$ as a simplicial map from $\mathsf{R}$ to $\mathsf{R}'$ too, where $\mathsf{R}'$ is the abstract simplicial complex whose vertices are the points of $\mathbb{R}^{n-1}$ and whose maximal simplices are the $(n-1)$-subsets of $\mathbb{R}^{n-1}$. \begin{claim} The following equality holds: $(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})=(-1)^{n-1}(\operatorname{det}_{\sharp}\circ \operatorname{proj}_{\sharp}\circ\partial\circ \lambda_{\sharp})$. \end{claim} \begin{proof} We prove the equality for an oriented simplex $[v_1,\ldots,v_n]$, which is enough to get the conclusion. Define $\boldsymbol{a}_i=(a_{1,i},\ldots,a_{n,i})$ to be $\lambda(v_i)$ and $\boldsymbol{a}_i'$ to be $\operatorname{proj}(\boldsymbol{a}_i)$. By definition of $\operatorname{proj}$, we have $\boldsymbol{a}'_i=(a_{1,i},\ldots,a_{n-1,i})$. If two of the $\boldsymbol{a}_i$'s are equal, the left-hand and right-hand terms of the equality to prove are both equal to zero when applied to the considered oriented simplex. We can thus assume that all $\boldsymbol{a}_i$'s are distinct. Compute the left-hand term on the considered oriented simplex: $$\begin{array}{rcl} (\operatorname{det}_{\sharp}\circ\lambda_{\sharp})([v_1,\ldots,v_n]) & = & \displaystyle{\operatorname{det}\left(\boldsymbol{a}_1,\ldots,\boldsymbol{a}_n\right)}\smallskip \\ & = & \displaystyle{\left|\begin{array}{ccc} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{n-1,1} & \cdots & a_{n-1,n} \\ 1 & \cdots & 1 \end{array}\right|}\smallskip\\ & = & \displaystyle{\sum_{i=1}^n(-1)^{n-i}\operatorname{det}\left(\boldsymbol{a}'_1,\ldots,\widehat{\boldsymbol{a}'_i},\ldots,\boldsymbol{a}'_n\right)}, \end{array} $$ where the second equality follows from row operations and the fact that $\sum_{j=1}^n a_{j,i} = 1$. 
Similarly, compute the right-hand term on the considered oriented simplex: $$\begin{array}{rcl} (\operatorname{det}_{\sharp}\circ \operatorname{proj}_{\sharp}\circ\partial\circ\lambda_{\sharp})([v_1,\ldots,v_n]) & = & \displaystyle{(\operatorname{det}_{\sharp}\circ \operatorname{proj}_{\sharp})\left(\sum_{i=1}^n(-1)^{i-1}[\boldsymbol{a}_1,\ldots,\widehat{\boldsymbol{a}_i},\ldots,\boldsymbol{a}_n]\right)} \\ & = & \displaystyle{\sum_{i=1}^n(-1)^{i-1}\operatorname{det}\left(\operatorname{proj}(\boldsymbol{a}_1),\ldots,\widehat{\operatorname{proj}(\boldsymbol{a}_i)},\ldots,\operatorname{proj}(\boldsymbol{a}_n)\right)}. \end{array} $$ We have thus in any case $(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})([v_1,\ldots,v_n])=(-1)^{n-1}(\operatorname{det}_{\sharp}\circ \operatorname{proj}_{\sharp}\circ\partial\circ\lambda_{\sharp})([v_1,\ldots,v_n])$. \end{proof} According to this claim and the commutation of $\partial$, we have $$(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t)=(-1)^{n-1}(\operatorname{det}_{\sharp}\circ \operatorname{proj}_{\sharp}\circ \lambda_{\sharp})\left(\sum_{j=1}^nt_j\right),$$ where $t_j$ is the term in $\partial t$ supported by the $\widehat{j}$-facet of $\Delta^{n-1}$. For $j\neq n$, the $j$th coordinate of $(\operatorname{proj}\circ\lambda)(u)$ for any vertex $u\in V(\mathsf{T})$ on the $\widehat{j}$-facet is equal to $0$. Hence, $(\operatorname{det}_{\sharp}\circ \operatorname{proj}_{\sharp}\circ \lambda_{\sharp})(t_j)=0$ when $j\neq n$. Therefore, $(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t)=(-1)^{n-1}(\operatorname{det}_{\sharp}\circ \operatorname{proj}_{\sharp}\circ \lambda_{\sharp})(t_n)$. Let $\mathsf{T}'$ be the triangulation of the $\widehat{n}$-facet of $\Delta^{n-1}$ induced by $\mathsf{T}$. We identify this facet with $\Delta^{n-2}=\{(x_1,\ldots,x_{n-1})\in\mathbb{R}_+^{n-1}\colon\sum_{i=1}^{n-1}x_i=1\}$. 
The map $\operatorname{proj}\circ\lambda$ is a labeling of the vertices of $\mathsf{T}'$ with elements of $\Delta^{n-2}$, which satisfies the condition of the lemma for $n-1$. The chain $t_n$ is the formal sum of all positively oriented simplices of $\mathsf{T}'$. By induction, we thus have $\left|(\operatorname{det}_{\sharp}\circ\operatorname{proj}_{\sharp}\circ \lambda_{\sharp})(t_n)\right|=1$, which implies $\left|(\operatorname{det}_{\sharp}\circ \lambda_{\sharp})(t)\right|=1$. \end{proof} A lemma by Yakar Kannai~\cite[Lemma 3]{kannai2013using} has the same condition and almost the same conclusion as Lemma~\ref{lem:sperner_det}. Regarding the conclusion, the lemma of Kannai ensures the existence of a subdivision of $\mathsf{T}$ for which the formula of Lemma~\ref{lem:sperner_det} holds, while we now know that it holds for $\mathsf{T}$ itself. Florian Frick (personal communication) noted that the approach by Andrew McLennan and Rabee Tourky~\cite{mclennan2008using} for proving the Sperner lemma can also be used here to provide an alternative proof of Lemma~\ref{lem:sperner_det}. For a nonempty subset $S$ of $[n]$, we define the point $\boldsymbol{b}^S=(b_1^S,\ldots,b_n^S)$ of $\Delta^{n-1}$ by $$b_i^S=\left\{\begin{array}{ll}\displaystyle{ \frac 1 {|S|}}& \mbox{if $i\in S$}\medskip \\ 0 & \mbox{otherwise.}\end{array}\right.$$ The point $\boldsymbol{b}^S$ is the barycenter of the face whose vertex set is $\{ \boldsymbol{e}_i\colon i\in S\}$. These points are the subject of two easy lemmas, which will be useful in the sequel. \begin{lemma}\label{lem:cover} Suppose that $S_1,\ldots,S_n$ are nonempty subsets of $[n]$ such that $\operatorname{det}(\boldsymbol{b}^{S_1},\ldots,\boldsymbol{b}^{S_n})\neq 0$. Then it is possible to pick in each $S_j$ a distinct element of $[n]$.
\end{lemma} \begin{proof} Since the determinant is nonzero, there is a permutation $\pi$ of $[n]$ such that $b_{\pi(i)}^{S_i}$ is nonzero for all $i\in[n]$. We have therefore $\pi(i)\in S_i$ for all $i$ and all $\pi(i)$'s are distinct. \end{proof} \begin{lemma}\label{lem:sym_b} The equality $r^j(\boldsymbol{b}^S)=\boldsymbol{b}^{\rho^j(S)}$ holds for every $j\in[n]$ and every nonempty subset $S$ of $[n]$. \end{lemma} \begin{proof} We have $$r^j(\boldsymbol{b}^S)=\frac 1 {|S|} \sum_{i\in S}r^j(\boldsymbol{e}_i)=\frac 1 {|S|} \sum_{i\in S}\boldsymbol{e}_{\rho^j(i)}=\frac 1 {|\rho^j(S)|} \sum_{i\in \rho^j(S)}\boldsymbol{e}_i=\boldsymbol{b}^{\rho^j(S)},$$ where the penultimate equality comes from the fact that $\rho^j$ is a permutation of $[n]$. \end{proof} We end this section with a simple property of the map $r^j$. \begin{lemma}\label{lem:detr} We have $\operatorname{det}(r^j)=(-1)^{j-1}$ for all $j$. \end{lemma} \begin{proof} The determinant is the sign of the permutation $\rho^j$. Denote by $\tau^j$ the transposition that interchanges $j$ and $j+1$. We have $\rho^{j+1}=\tau^j\circ\rho^j$. The conclusion follows from the fact that the sign of $\rho^1$ is $1$ (it is the identity). \end{proof} \subsection{Proofs}\label{proofs} \begin{figure} \begin{center} \includegraphics[width=15cm]{triangulation.pdf} \caption{Illustration of the construction of the triangulation $\mathsf{T}'$ from the triangulation $\mathsf{T}$ in the proof of Theorem~\ref{thm:sperner-symmetry-prime} and Proposition~\ref{prop:sperner-symmetry-4}. On the right-hand figure, the labels on the vertices in $V(\mathsf{T}')\setminus V(\mathsf{T})$ have been displayed.} \label{fig:triangulation} \end{center} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:sperner-symmetry-prime}] We put a shrunk copy of $\mathsf{T}$ inside $\Delta^{n-1}$. 
The non-triangulated part of $\Delta^{n-1}$ admits a natural decomposition into $n$ cells, each being the convex hull $C_j$ of the $\widehat{j}$-facet and its shrunk copy. We complete $\mathsf{T}$ into a triangulation $\mathsf{T}'$ of $\Delta^{n-1}$ such that $r^j(\sigma)\in\mathsf{T}'$ for every $j\in[n]$ and every simplex $\sigma\in\mathsf{T}'$ included in $C_1$. In particular, $\mathsf{T}'$ is nice. The triangulation of $C_j$ induced by $\mathsf{T}'$ is denoted $\mathsf{C}_j$. (Such a triangulation $\mathsf{T}'$ is easily achieved by keeping on $\partial\Delta^{n-1}$ the triangulation induced by $\mathsf{T}$ before it was shrunk, and then proceeding to a subdivision of the prisms induced by the pairs $\sigma',\sigma$, where $\sigma'$ is an $(n-2)$-dimensional simplex of $\partial\Delta^{n-1}$ and $\sigma$ is its shrunk copy. This latter subdivision can be performed without adding new vertices in those simplices $\sigma,\sigma'$.) Now, we label the vertices $v$ of $\mathsf{T}$ by $\lambda(v)=\boldsymbol{b}^{\Lambda(v)}$. For the other vertices $u$ in $V(\mathsf{T}')\setminus V(\mathsf{T})$, we proceed as follows: we consider the radial projection $u'$ of $u$ from the barycenter of $\Delta^{n-1}$ onto $\partial\Delta^{n-1}$. We then label $u$ by $\lambda(u)=\boldsymbol{b}^{\{i(u)\}}=\boldsymbol{e}_{i(u)}$, where $i(u)$ is the minimal index of a nonzero coordinate of the projection $u'$. (The construction of $\mathsf{T}'$ and of $\lambda$ is illustrated for $n=3$ in Figure~\ref{fig:triangulation}.) The map $\lambda$ obviously satisfies the condition of Lemma~\ref{lem:sperner_det}. We thus have \begin{equation}\label{tprime} \left|(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t')\right|=1,\end{equation} where $t'$ is the formal sum of all positively oriented $(n-1)$-dimensional simplices of $\mathsf{T}'$. (As in the proof of Lemma~\ref{lem:sperner_det}, the map $\lambda$ is seen as a simplicial map $\mathsf{T}'\rightarrow\mathsf{R}$.)
On the other hand, we have by linearity \begin{equation}\label{decomp} (\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t')=(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t)+\sum_{j=1}^n(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(c_j), \end{equation} where $t$ is the formal sum of all positively oriented $(n-1)$-dimensional simplices of $\mathsf{T}$ and $c_j$ the formal sum of all positively oriented $(n-1)$-dimensional simplices of $\mathsf{C}_j$. As with other linear maps already encountered in this paper, we also interpret $r^j$ as a simplicial self-map of $\mathsf{R}$. Moreover, $r^j$ satisfies the following property. \begin{claim} The relation $(\lambda\circ r^j)(v)=(r^j\circ\lambda)(v)$ is satisfied for all $j$ and all $v\in V(\mathsf{C}_1)$. \end{claim} \begin{proof} To ease the reading, we denote by $\Omega$ the barycenter of $\Delta^{n-1}$ (which is denoted $\boldsymbol{b}^{[n]}$ elsewhere in the paper). For $\alpha>0$, the map $a_{\alpha}$ that sends a point $\boldsymbol{x}$ of $\Delta^{n-1}$ to the point $\boldsymbol{x}'=\Omega+\alpha(\boldsymbol{x}-\Omega)$ commutes with $r^j$. This is because $r^j(\Omega)=\Omega$. When we shrink $\mathsf{T}$, we are applying such a map, and Lemma~\ref{lem:sym_b} then implies that $(\lambda\circ r^j)(v)=(r^j\circ\lambda)(v)$ for all $v\in V(\mathsf{C}_1)\cap V(\mathsf{T})$. Consider now a vertex $v\in V(\mathsf{C}_1)$ that is on the boundary of $\mathsf{T}'$ and an integer $j\in[n]$. The $i$th and $i'$th coordinates of $v$ are nonzero if and only if the $\rho^j(i)$th and $\rho^j(i')$th coordinates of $r^j(v)$ are nonzero. In such a case, we have moreover that $i<i'$ if and only if $\rho^j(i)<\rho^j(i')$. It implies that $(\lambda\circ r^j)(v)=(r^j\circ\lambda)(v)$ for all $v\in V(\mathsf{C}_1)$ that are on the boundary of $\mathsf{T}'$. Finally, since $a_{\alpha}$ and $r^j$ commute, this holds for any other vertex in $V(\mathsf{C}_1)$ as well.
\end{proof} We have $$\begin{array}{rcl} \displaystyle{\sum_{j=1}^n(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(c_j)} & = & \displaystyle{\sum_{j=1}^n(-1)^{j-1}(\operatorname{det}_{\sharp}\circ\lambda_{\sharp}\circ r^j_{\sharp})(c_1)}\smallskip \\ & = & \displaystyle{\sum_{j=1}^n(-1)^{j-1}(\operatorname{det}_{\sharp}\circ\, r^j_{\sharp}\circ\lambda_{\sharp})(c_1)}\smallskip \\ & = & \displaystyle{\sum_{j=1}^n(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(c_1)}\smallskip\\ & = & n(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(c_1), \end{array}$$ where the first equality is a consequence of the relation $r^j_{\sharp}(c_1)=(-1)^{j-1}c_j$, due to the definition of $r^j$ and to Lemma~\ref{lem:detr}, the second equality comes from the claim above, and the third one is again a consequence of Lemma~\ref{lem:detr}. Since $\Lambda(v)$ is always a proper subset of $[n]$, the rational number $(\operatorname{det}\circ\lambda)(\sigma)$ can always be written as a fraction of integers, with a denominator being a product of integers smaller than $n$. With Equalities~\eqref{tprime} and~\eqref{decomp}, the fact that $n$ is a prime implies that $(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t)$ is nonzero: otherwise $n(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(c_1)$ would be equal to $\pm 1$, and $n$ would divide a product of integers smaller than $n$, which is impossible for a prime number. There is therefore at least one $(n-1)$-dimensional simplex $\tau=[v_1,\ldots,v_n]$ of $\mathsf{T}$ such that $\operatorname{det}(\lambda(v_1),\ldots,\lambda(v_n))\neq 0$. Lemma~\ref{lem:cover} then shows that we can pick distinct labels in the $\Lambda(v_i)$'s when $i$ runs over the integers $1,\ldots,n$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:sperner-symmetry-4}] The proof is exactly the same as the one of Theorem~\ref{thm:sperner-symmetry-prime}, except for the last paragraph. Under the condition of Proposition~\ref{prop:sperner-symmetry-4}, the rational number $(\operatorname{det}\circ\lambda)(\sigma)$ can always be written as a fraction of integers, with a denominator equal to $3!=6$.
With Equalities~\eqref{tprime} and~\eqref{decomp}, the fact that $4$ does not divide $6$ implies that $(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t)$ is nonzero. The conclusion is then identical. \end{proof} \begin{remark} The proof above for Proposition~\ref{prop:sperner-symmetry-4} can be extended to values of $n$ different from $4$, but to conclude that $(\operatorname{det}_{\sharp}\circ\lambda_{\sharp})(t)$ is nonzero we need that $n$ does not divide $(n-1)!$, which happens precisely when $n$ is a prime number or is equal to $4$. Therefore, it does not seem that we can reach other values of $n$ with the current approach. \end{remark} \begin{remark}\label{secretive} A ``secretive-player'' generalization of the classical envy-free cake division theorem has recently drawn some attention. It was originally proved by Woodall~\cite{woodall1980dividing} and rediscovered with a much simpler proof by Asada et al.~\cite{asada2017fair}, who also gave it its picturesque name. It states that an envy-free division can be achieved in the classical setting without taking into account the preferences of one fixed (``secretive'') player: there is a division such that no matter which piece is chosen by this player, there will be an envy-free assignment of the remaining pieces to the other players. It is reasonable to expect that Theorem~\ref{main} also admits a ``secretive'' generalization. However, we do not know how to prove this using our approach. The natural adaptation of the proof by Asada et al. to our setting would require more from the simplex $\tau$ found in Theorem~\ref{thm:sperner-symmetry-prime}: denoting its vertices by $v_1,\ldots,v_n$, it would require that the barycenter of $\Delta^{n-1}$ is in the convex hull of the $\lambda(v_i)$ (where $\lambda$ is defined as in the proof of Theorem~\ref{thm:sperner-symmetry-prime}).
In our proof we get that the determinant of the matrix whose columns are the $\lambda(v_i)$ is nonzero, but this does not imply this additional required property. \end{remark} \section{Proof of the main theorem} With Theorem~\ref{thm:sperner-symmetry-prime} and Proposition~\ref{prop:sperner-symmetry-4}, the proof of Theorem~\ref{main} is more or less routine. We start with a lemma showing that triangulations satisfying the symmetry condition of Theorem~\ref{thm:sperner-symmetry-prime} and Proposition~\ref{prop:sperner-symmetry-4} exist and can have arbitrarily small mesh size. This lemma shows moreover that we can label the vertices of the triangulation with the players in a way compatible with the symmetry. A labeling with the players is called ``owner-labeling'' by Su~\cite{su1999rental}, and ``ownership-assignment'' by Segal-Halevi~\cite{segal2017fairly}. \begin{lemma}\label{lem:sym-own} There exists a nice triangulation $\mathsf{T}$ of $\Delta^{n-1}$ of arbitrarily small mesh size and a labeling $o\colon V(\mathsf{T})\rightarrow[n]$ satisfying $o(r^j(v))=o(v)$ for every vertex $v$ of $\mathsf{T}$ in the $\widehat{1}$-facet and such that adjacent vertices in $\mathsf{T}$ get distinct labels via $o$. Moreover, such a triangulation can be built so that the supporting faces of any two adjacent vertices are comparable by inclusion. \end{lemma} \begin{proof} We repeat the barycentric subdivision operation starting with $\Delta^{n-1}$ as many times as needed to get a triangulation $\mathsf{T}$ with a sufficiently small mesh size. This triangulation is clearly nice. The triangulation $\mathsf{T}$ is thus of the form $\operatorname{sd}^N\left(\Delta^{n-1}\right)$ for some positive integer $N$. Each vertex $v$ of $\mathsf{T}$ corresponds to a simplex of $\operatorname{sd}^{N-1}\left(\Delta^{n-1}\right)$. Defining $o(v)$ to be the dimension of this simplex plus one shows that this labeling $o$ is as required.
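(To illustrate with $n=3$ and $N=1$: a vertex of $\operatorname{sd}(\Delta^{2})$ is the barycenter of a vertex, of an edge, or of the whole triangle $\Delta^{2}$, and receives accordingly the label $1$, $2$, or $3$; two adjacent vertices of $\operatorname{sd}(\Delta^{2})$ are the barycenters of two faces, one of which is a proper face of the other, so their dimensions, and hence their labels, differ. The symmetry condition holds as well, since $r^j$ permutes the vertices of $\Delta^{n-1}$ and therefore maps each simplex of $\operatorname{sd}^{N-1}\left(\Delta^{n-1}\right)$ to a simplex of the same dimension.)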
\end{proof} \begin{proof}[Proof of Theorem~\ref{main}] Consider a nice triangulation $\mathsf{T}$ with a labeling $o$ as in Lemma~\ref{lem:sym-own}. For a point $\boldsymbol{x}$ in $\Delta^{n-1}$, we denote by $\mathcal{D}(\boldsymbol{x})$ the division of the cake obtained when the cake is cut at positions $X_1,\ldots,X_n$, where $X_{\ell}=\sum_{i=1}^{\ell}x_i$. (Remember that because of our metrics, whether or not the endpoints belong to the pieces does not matter.) We now define a labeling $\Lambda$ of the vertices of $\mathsf{T}$, to which we will apply Theorem~\ref{thm:sperner-symmetry-prime} when $n$ is a prime number and Proposition~\ref{prop:sperner-symmetry-4} when $n=4$. This labeling is defined with the help of a set-valued map $L_i\colon\Delta^{n-1}\rightarrow 2^{[n]}\setminus\{\varnothing\}$, which we introduce now. Consider a point $\boldsymbol{x}$ in $\Delta^{n-1}$. For each nonempty piece $I$ in $p_{i}\left(\mathcal{D}(\boldsymbol{x})\right)$, we put in $L_i(\boldsymbol{x})$ the smallest index $j$ such that $X_j$ is the right endpoint of $I$. If $\varnothing\in p_{i}\left(\mathcal{D}(\boldsymbol{x})\right)$, then we add to $L_i(\boldsymbol{x})$ the elements of $J_{\boldsymbol{x}}$. Because of the full division assumption, $L_i(\boldsymbol{x})\neq\varnothing$. We then define $$\Lambda_i(\boldsymbol{x})=\left\{\begin{array}{ll} J_{\boldsymbol{x}} & \mbox{if $L_i(\boldsymbol{x})=J_{\boldsymbol{x}}$} \\ \{\min\left(L_i(\boldsymbol{x})\setminus J_{\boldsymbol{x}}\right)\} & \mbox{otherwise.}\end{array}\right.$$ Note that if $L_i(\boldsymbol{x})\neq J_{\boldsymbol{x}}$, then $L_i(\boldsymbol{x})\setminus J_{\boldsymbol{x}}\neq\varnothing$, which ensures that $\Lambda_i(\boldsymbol{x})$ is always either $J_{\boldsymbol{x}}$ or a singleton made of an element not in $J_{\boldsymbol{x}}$.
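As an illustration of these definitions, take $n=3$ and $\boldsymbol{x}=(1/2,0,1/2)$, so that $\mathcal{D}(\boldsymbol{x})$ consists of the pieces $(0,1/2)$, $\varnothing$, and $(1/2,1)$, and $J_{\boldsymbol{x}}=\{2\}$. For a (hypothetical) player $i$ with $p_i\left(\mathcal{D}(\boldsymbol{x})\right)=\{(1/2,1)\}$, we get $L_i(\boldsymbol{x})=\{3\}$ and $\Lambda_i(\boldsymbol{x})=\{3\}$, while for a player $i$ with $p_i\left(\mathcal{D}(\boldsymbol{x})\right)=\{\varnothing\}$, we get $L_i(\boldsymbol{x})=J_{\boldsymbol{x}}$ and thus $\Lambda_i(\boldsymbol{x})=J_{\boldsymbol{x}}=\{2\}$.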
\begin{claim} Given $\boldsymbol{x}$ in the $\widehat{1}$-facet of $\Delta^{n-1}$, the equality $\Lambda_i(r^j(\boldsymbol{x}))=\rho^j\left(\Lambda_i(\boldsymbol{x})\right)$ holds for all $i$ and $j$. \end{claim} \begin{proof} Let $\boldsymbol{y}=r^j(\boldsymbol{x})$ and $Y_{\ell}=\sum_{i=1}^{\ell}y_i$. Note that $\mathcal{D}(\boldsymbol{x})=\mathcal{D}(\boldsymbol{y})$. Suppose first that there is at least one nonempty piece in $p_i(\mathcal{D}(\boldsymbol{x}))$. If $\ell$ is the smallest index such that $X_{\ell}$ is the right endpoint of a nonempty piece, then $\rho^j(\ell)$ is the smallest index $\ell'$ such that $Y_{\ell'}$ is the right endpoint of that same nonempty piece. This shows that in this case $\Lambda_i(\boldsymbol{y})=\rho^j\left(\Lambda_i(\boldsymbol{x})\right)$. Suppose now that $p_i(\mathcal{D}(\boldsymbol{x}))=\{\varnothing\}$. We have $\Lambda_i(\boldsymbol{x})=J_{\boldsymbol{x}}$ and $\Lambda_i(\boldsymbol{y})=J_{\boldsymbol{y}}$. Since $J_{\boldsymbol{y}}=\rho^j(J_{\boldsymbol{x}})$ by definition of $\rho^j$, we have again $\Lambda_i(\boldsymbol{y})=\rho^j\left(\Lambda_i(\boldsymbol{x})\right)$. \end{proof} For a vertex $v\in V(\mathsf{T})$ of coordinate $\boldsymbol{x}$, we define $\Lambda(v)$ to be $\Lambda_{o(v)}(\boldsymbol{x})$. If $v$ is in the $\widehat{1}$-facet, we have $$\Lambda(r^j(v))=\Lambda_{o(r^j(v))}(r^j(\boldsymbol{x}))=\Lambda_{o(v)}(r^j(\boldsymbol{x}))=\rho^j\left(\Lambda_{o(v)}(\boldsymbol{x})\right)=\rho^j\left(\Lambda(v)\right),$$ where $\boldsymbol{x}$ is the coordinate vector of $v$. The first equality is by definition, the second is by Lemma~\ref{lem:sym-own}, the third is the preceding claim, and the last is again by definition. Thus, $\Lambda$ is a nice labeling. Moreover, it satisfies the additional condition of Proposition~\ref{prop:sperner-symmetry-4}. 
Theorem~\ref{thm:sperner-symmetry-prime} and Proposition~\ref{prop:sperner-symmetry-4} can be applied and their conclusion holds: there is an $(n-1)$-dimensional simplex $\tau$ of $\mathsf{T}$ such that it is possible to pick a distinct label in each $\Lambda(u)$ when $u$ runs over the vertices of $\tau$. Lemma~\ref{lem:sym-own} allows us to choose $\mathsf{T}$ of arbitrarily small mesh size. Compactness and the following claim imply that there is a point $\boldsymbol{x}^*=(x_1^*,\ldots,x_n^*)$ of $\Delta^{n-1}$ such that it is possible to select a distinct element from each $L_i(\boldsymbol{x}^*)$ when $i$ goes from $1$ to $n$. \begin{claim} Let $(\boldsymbol{x}^k)$ be a sequence of points in $\Delta^{n-1}$ converging to a point $\boldsymbol{x}^{\infty}$. If $j\in\Lambda_i(\boldsymbol{x}^k)$ for all $k$, then $j\in L_i(\boldsymbol{x}^{\infty})$. \end{claim} \begin{proof} We assume without loss of generality that all $J_{\boldsymbol{x}^k}$ are equal (for finite $k$) and denote by $J$ this subset. We have $J_{\boldsymbol{x}^{\infty}}\supseteq J$. Consider a $j$ that is in all $\Lambda_i(\boldsymbol{x}^k)$. Suppose first that $j\notin J_{\boldsymbol{x}^{\infty}}$. The intervals $(X_{j-1}^k,X_j^k)$ and $(X_{j-1}^{\infty},X_j^{\infty})$ are all of positive length. The interval $(X_{j-1}^k,X_j^k)$ thus belongs to $p_i(\mathcal{D}(\boldsymbol{x}^k))$ for all $k$, and since $$\lim_{k\rightarrow+\infty}\delta\left((X_{j-1}^k,X_j^k),(X_{j-1}^{\infty},X_j^{\infty})\right)=0,$$ the interval $(X_{j-1}^{\infty},X_j^{\infty})$ belongs to $p_i(\mathcal{D}(\boldsymbol{x}^{\infty}))$ (closed preferences assumption). The interval $(X_{j-1}^{\infty},X_j^{\infty})$ being of positive length, we get $j\in L_i(\boldsymbol{x}^{\infty})$. Suppose now that $j\in J_{\boldsymbol{x}^{\infty}}\setminus J$. The intervals $(X_{j-1}^k,X_j^k)$ are all of positive length. The interval $(X_{j-1}^k,X_j^k)$ thus belongs to $p_i(\mathcal{D}(\boldsymbol{x}^k))$ for all $k$.
The fact that $j\in J_{\boldsymbol{x}^{\infty}}$ means that $X_{j-1}^{\infty}=X_j^{\infty}$, which implies that $$\lim_{k\rightarrow+\infty}\delta\left((X_{j-1}^k,X_j^k),\varnothing\right)=0.$$ The closed preferences assumption then implies that $\varnothing\in p_i(\mathcal{D}(\boldsymbol{x}^{\infty}))$. In such a case, by definition of $L_i$, we have $j\in L_i(\boldsymbol{x}^{\infty})$. Suppose finally that $j\in J$. By definition of $\Lambda_i$, it means that $\varnothing\in p_i(\mathcal{D}(\boldsymbol{x}^k))$ for all $k$. Since $\delta(\varnothing,\varnothing)=0$, we get that $\varnothing\in p_i(\mathcal{D}(\boldsymbol{x}^{\infty}))$ and thus $j\in L_i(\boldsymbol{x}^{\infty})$. \end{proof} We finish the proof by showing that $\mathcal{D}(\boldsymbol{x}^*)$ is an envy-free division. Denote by $j_i$ the pairwise distinct elements selected from the $L_i(\boldsymbol{x}^*)$ when $i$ goes from $1$ to $n$. If $(X_{j_i-1}^*,X_{j_i}^*)\in p_i(\mathcal{D}(\boldsymbol{x}^*))$, define $\pi(i)=(X_{j_i-1}^*,X_{j_i}^*)$. Otherwise, define $\pi(i)=\varnothing$. By definition of $L_i$, we have $\pi(i)\in p_i(\mathcal{D}(\boldsymbol{x}^*))$ (item~\ref{i} is satisfied). Since $\{j_i\colon i\in[n]\}=[n]$, each nonempty piece is equal to some $\pi(i)$ (item~\ref{ii} is satisfied). Finally, if $\pi(i)=\pi(i')$, we have $\pi(i)=\varnothing$, because otherwise $j_i$ would be equal to $j_{i'}$ (item~\ref{iii} is satisfied). \end{proof} \subsection*{Acknowledgments} The authors are grateful to the referee for his thorough reading and his suggestions and questions that helped improve the paper. This work was initiated while the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2017 semester. This material is thus partially based upon work supported by the National Science Foundation under Grant No. DMS-1440140. \bibliographystyle{plain}
\section*{Title and length} \subsection*{Title} Sequential/Session-based Recommendations: Challenges, Methods and Opportunities \subsection*{Proposed Length of the Tutorial} Half day (3 hours plus breaks) \section*{Tutorial format} Online \section*{Intended Audience and Prerequisite Knowledge} \subsection*{Intended Audience} This tutorial aims to provide both academic and practical audiences with a comprehensive understanding of sequential/session-based recommender systems (SRSs and SBRSs) and with relevant techniques for applying state-of-the-art machine learning approaches to build more powerful SRSs and SBRSs. In this tutorial, we will present a systematic review of the general problem statement, the diversified data characteristics and challenges, and a taxonomy of sequential/session-based recommendations. After this tutorial, the audience can walk away with: \begin{itemize} \item Introductory: A review of the problem statement, data characteristics and challenges of SRSs and SBRSs, and the basic approaches to building SRSs and SBRSs; \item Intermediate: The recent developments in advanced SRSs and SBRSs built on state-of-the-art machine learning methods, and the deep insights behind the corresponding sequence and session modeling; \item Advanced: Practical approaches to customizing and building advanced SRSs and SBRSs over the audience's own ideas and data, using the models and techniques learned from this tutorial. \end{itemize} \subsection*{Prerequisite Knowledge} Nothing specific, but a rudimentary knowledge of RSs and some machine learning methods will be helpful, including: \begin{itemize} \item Recommender systems \item Latent representation models \item Deep learning \end{itemize} \section*{Presenters} \subsection*{Dr.
Shoujin Wang (Main Presenter)} \textbf{Position}: Research Fellow\\ \textbf{Affiliation}: School of Computing Technologies, RMIT University; School of Computing, Macquarie University, Australia\\ \textbf{Postal Address}: Level 3, 4 Research Park Drive (BD Building), Macquarie University, Sydney, NSW 2109, Australia\\ \textbf{Email}: shoujin.wang@mq.edu.au\\ \textbf{Tel}: +61-452139866\\ \textbf{Homepage}: \url{https://sites.google.com/view/shoujinwanghome} \noindent \textbf{Shoujin Wang} is a postdoctoral research fellow in the Department of Computing, Macquarie University, Australia. His research interests include data mining, machine learning, and their general applications in recommender systems. He has published a number of papers in top-rank international conferences including IJCAI, AAAI and ECML-PKDD, and in journals such as IEEE Transactions on Systems, Man, and Cybernetics: Systems (IEEE TSMC: Systems) and IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), in areas such as data mining, machine learning, and recommender systems. He has served the community as a program committee member of international conferences such as AAAI, IJCAI, KDD, ICDM, CIKM, PAKDD and DSAA, and as a reviewer of prestigious journals including Machine Learning, IEEE Intelligent Systems, IEEE Transactions on Cybernetics and IEEE Transactions on Knowledge and Data Engineering (TKDE).\\ He was invited to present several tutorials at top conferences including IJCAI-19, IJCAI-20 and AAAI-21 to popularize recommender systems to top conference audiences, and to give talks introducing recommender systems to industry and government. He has rich teaching experience, including teaching recommender systems in an undergraduate course at the Department of Computing, Macquarie University, and delivering an industry training series on machine learning at Vantage FX Pty Ltd. \subsection*{Dr.
Qi Zhang} \textbf{Position}: Research Scientist\\ \textbf{Affiliation}: DeepBlue Academy of Sciences, China\\ \textbf{Postal Address}: 369 Weining Rd, Changning District, Shanghai 200240, China\\ \textbf{Email}: zhangqi@deepblueai.com\\ \textbf{Tel}: +86-18301367053\\ \textbf{Homepage}: \url{https://sites.google.com/view/qizhang-bit-uts/home}\\ \noindent \textbf{Qi Zhang} received his first Ph.D. degree in computer science from the Department of Computer Science and Engineering, Beijing Institute of Technology, China in 2020. The title of his dissertation is Case-Based Reasoning on Complex Data. Currently, he serves as an AI scientist with the DeepBlue Academy of Sciences. His research interests include recommender systems, learning to hash, machine learning and general artificial intelligence. He has published several papers in top-rank international conferences and journals in the area of recommender systems, including TKDE, TNNLS, AAAI, ECAI, ESWA, KBS and DSAA. He has served the community as a program committee member or reviewer for IJCAI, ICDM, CIKM, PAKDD, DSAA, IEEE Intelligent Systems, IEEE Transactions on Systems, Man, and Cybernetics: Cybernetics (TSMC), IEEE Transactions on Knowledge and Data Engineering (TKDE), and the Journal of Data Science. He also served as a teaching assistant for the Machine Learning and Frontier of Computer Science courses while studying at Beijing Institute of Technology. \subsection*{Professor Liang Hu} \textbf{Affiliation}: Tongji University, China\\ \textbf{Postal Address}: No. 1239, Siping Road, Shanghai 200092, China\\ \iffalse \textbf{Position}: Chief Scientist at AI Research Institute\\ \textbf{Affiliation}: DeepBlue Academy of Sciences, Shanghai, China\\ \textbf{Postal Address}: No.
369, Weining Road, Xinjing Town, Changning District, Shanghai, 200240, China\\ \fi \textbf{Email}: lianghu@tongji.edu.cn\\ \textbf{Tel}: +86-13918633966\\ \textbf{Homepage}: \url{https://sites.google.com/view/lianghu/home}\\ \noindent \textbf{Liang Hu} is a full professor with Tongji University, China, and also chief AI scientist with the DeepBlue Academy of Sciences, China. He received his first Ph.D. degree in computer application technology from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China in 2015, and his second Ph.D. degree in Analytics from the Advanced Analytics Institute, University of Technology Sydney, Australia, in 2019. He has published more than 40 papers in top-rank international conferences and journals in the area of recommender systems, including WWW, IJCAI, AAAI, ICDM, ICWS, TOIS and JWSR.\\ Dr Liang Hu has successfully delivered 8 tutorials, including at AAAI-18, AAAI-20, IJCAI-19, IJCAI-20, IJCAI-21, KDD-18, PAKDD-18 and ICDM-21, and several invited talks to main conference/workshop audiences and public seminars to industry and government. He has been invited as a program committee member for more than 30 top-rank AI international conferences, including AAAI, IJCAI, ICDM, CIKM, and KDD. He also serves as a reviewer for more than ten AI and data science-related international journals, including ACM Computing Surveys, IEEE TKDE, ACM TOIS, IEEE TPAMI, etc. As co-chair, he has organized workshops on AI, neural networks and recommender systems at ICDM. In addition, he has presented six tutorials on recommender systems and machine learning at top-rank AI conferences including IJCAI, AAAI, and ICDM.
\subsection*{Professor Xiuzhen Zhang} \textbf{Affiliation}: School of Computing Technologies, RMIT University, Australia\\ \textbf{Postal Address}: 14.09.05 - Bld 14 Level 9 Room 5, Bundoora Campus, RMIT University, Melbourne, VIC 3000, Australia\\ \textbf{Tel}: +61-399252774 \\ \textbf{Email}: xiuzhen.zhang@rmit.edu.au\\ \textbf{Homepage}: \url{https://www.rmit.edu.au/contact/staff-contacts/academic-staff/z/zhang-professor-jenny}\\ \noindent \textbf{Xiuzhen Zhang} is currently a Professor with the School of Computing Technologies, RMIT University, Australia. Her research interests are in data mining and data analytics. She currently supervises several PhD student research projects in these areas. She teaches courses in the areas of databases, data analytics and data mining. She is the Associate Dean for Higher Degrees by Research and Technology. \subsection*{Professor Yan Wang} \textbf{Affiliation}: School of Computing, Macquarie University, Australia\\ \textbf{Postal Address}: BD Building, Room 354, Macquarie University, Sydney, NSW 2109 Australia\\ \textbf{Tel}: +61-298509539 \\ \textbf{Email}: yan.wang@mq.edu.au\\ \textbf{Homepage}: \url{http://web.science.mq.edu.au/~yanwang/}\\ \noindent \textbf{Yan Wang} is currently a Professor with the Department of Computing, Macquarie University, Australia. His research interests include recommender systems, trust management/computing, social computing and services computing. He has authored or co-authored over 100 journal and conference papers in the above areas, all at top venues. He has served as general chair and PC chair for several international conferences and workshops, such as IEEE Cloud2017, IEEE SCC2018 and SOSE2018. \subsection*{Dr. Charu Aggarwal} \textbf{Position}: Distinguished Research Staff Member\\ \textbf{Affiliation}: IBM T. J.
Watson Research Center, United States\\ \textbf{Postal Address}: 1101 Kitchawan Rd, Yorktown, NY 10598, United States\\ \textbf{Email}: charu@us.ibm.com\\ \textbf{Homepage}: \url{http://www.charuaggarwal.net/}\\ \noindent \textbf{Charu Aggarwal} is a Distinguished Research Staff Member (DRSM) at the IBM T. J. Watson Research Center in Yorktown Heights, New York. He completed his Bachelor of Technology in Computer Science from the Indian Institute of Technology at Kanpur in 1993 and his PhD in Operations Research from the Massachusetts Institute of Technology in 1996. He has worked extensively in the field of data mining, with particular interests in data streams, privacy, uncertain data and social network analysis. He has authored 9 books, over 400 papers in refereed venues, and has applied for or been granted over 80 patents. His h-index is 134. He has received two best paper awards and an EDBT Test-of-Time Award (2014). He is a recipient of the IEEE ICDM Research Contributions Award (2015) and the ACM SIGKDD Innovation Award (2019), which are the two most prestigious awards for influential research in data mining. He is also a recipient of the W. Wallace McDowell Award, the highest award given by the IEEE Computer Society across the field of computer science. He has served as the general or program co-chair of the IEEE Big Data Conference (2014), the ICDM Conference (2015), the ACM CIKM Conference (2015), and the KDD Conference (2016). He has served as the editor-in-chief of the ACM SIGKDD Explorations and is currently an editor-in-chief of the ACM Transactions on Knowledge Discovery and Data Mining as well as that of ACM Books. He is serving or has served as associate editor/action editor of several premier journals including the IEEE TKDE, the IEEE TBD, DMKD, and KIS. He is a fellow of the IEEE (2010), ACM (2013), and the SIAM (2015) for ``contributions to knowledge discovery and data mining algorithms.''
\newpage \section{Motivation} Recommender systems (RSs) have been playing an increasingly important role in informed consumption, services, and decision-making in the current era of information explosion and digitized economy~\cite{wang2021survey}. In recent years, sequential recommender systems (SRSs) and session-based recommender systems (SBRSs) have emerged as a new paradigm of RSs to capture users' short-term but dynamic preferences for enabling more timely and accurate recommendations~\cite{wang2019sequential}. The recommendations performed by SRSs and SBRSs are referred to as sequential recommendations (SR) and session-based recommendations (SBR) respectively. SR and SBR have become an important and popular research area in recent years, attracting much attention from both academia and industry. SR and SBR are highly correlated and similar in terms of the input, output and recommendation mechanism, and most of the representative approaches for building SRSs and SBRSs are very similar. Therefore, we present this tutorial to cover both SR (SRS) and SBR (SBRS), which will be collectively referred to as SR/SBR (SRS/SBRS) in the rest of this tutorial abstract. The detailed relationships between SRSs and SBRSs will be discussed in Section~\ref{SRSvsSBRS}. The key challenge of performing SR or SBR lies in how to accurately learn the complex dependencies embedded in sequence or session data. In recent years, there has been some promising progress in tackling this challenge. Although sequential or session-based recommender systems have become widespread in various domains including e-commerce and stream media, and many related studies have been conducted, there are many inconsistencies in this area caused by the diverse descriptions, settings, assumptions and application domains. There is no unified framework that categorizes them well, and there is no unified statement of the research problem.
Nowadays, the renaissance of artificial intelligence (AI) has attracted huge attention from every corner of the world. In particular, machine learning approaches have been deeply involved in AI research in almost all areas, e.g., natural language processing (NLP), computer vision (CV) and planning. Recommender systems (RSs), as probably one of the most widely used AI systems, have been integrated into every part of our daily lives. In this AI age, on the one hand, state-of-the-art machine learning approaches, e.g., deep learning, have become the primary choice to build RSs; on the other hand, both the theories and applications of RSs are developing rapidly. \section{Objectives} The proposed tutorial will systematically review the progress of research in sequential and session-based recommender systems with an emphasis on the frameworks, problem statement, data characteristics and challenges, approaches and algorithms, and prospects. The tutorial will provide (1) a unified framework to categorize the studies on sequential and session-based recommender systems, which provides an overview of the research in this area, (2) a unified statement of the research problem in the area, (3) a comprehensive overview of the unique characteristics of the data used for sequential/session-based recommender systems as well as the challenges faced by this new recommender system paradigm, (4) a systematic classification and comparison of approaches for building sequential/session-based recommender systems, (5) a brief summary and introduction of the representative approaches for sequential/session-based recommender systems from each class of approaches, (6) a summary of representative classical and emerging real-world application scenarios of sequential and session-based recommender systems, and (7) a summary of open research issues and prospects in the area of sequential and session-based recommender systems.
The tutorial should be appealing to research students, researchers and practitioners who are working on recommender systems or who plan to step into this vibrant area. This tutorial is especially suitable for those who are interested in obtaining a comprehensive view of all the key research problems and concepts, the main data and its characteristics, the key challenges, representative and state-of-the-art approaches, models and algorithms, classic and emerging application scenarios, main datasets and prospects in the area of sequential/session-based recommender systems. The most recent advances achieved in the past decades will also be demonstrated in the tutorial. Attendees will learn how to formalize sequential/session-based recommendation mathematically. They will also learn in detail about the representative and state-of-the-art formal models, techniques, and algorithms for building sequential/session-based recommender systems. Then, they will learn the classical and emerging applications of sequential/session-based recommender systems and the representative open-source real-world experimental datasets used for sequential/session-based recommendations. Finally, they will learn the main technology trends and open research directions in this area. \iffalse Classic RS, e.g., collaborative filtering and content-based filtering, are mainly conducted on the users' feedback or items' contents to promote the items by predicting the users' explicit preference (e.g., ratings) over items in the e-commerce area. This not only downgrade the performance of RS by ignoring other relevant information, but also greatly limits the application scenarios and domains of RS. This actually motivates the necessity of new theories and approaches for building next-generation RS, as well as developing the advanced applications of RS.
In practice, in addition to explicit preference prediction in the e-commerce area, RS are more and more widely used in many emerging scenarios (e.g., implicit ranking prediction over items) and domains (e.g., FashionAI) for smarter decisions in recent years. This not only broadens the application scope of RS, but also benefits us from nearly every aspects including eating, dressing, living and traveling. The goal of this tutorial aims to enable both academic and practical audience with a comprehensive understanding and relevant techniques of how to apply state-of-the-art machine learning approaches to build more sensible next-generation RSs in contexts with various heterogeneous data and complex relations. In this tutorial, we will present a systematic review and applications of recent advanced machine learning techniques to build real-life intelligent RSs. After this tutorial, the audience can walk away with: \begin{itemize} \item The insight into recent development and evolution of recommendation techniques; \item The machine learning methods to model complex couplings over heterogeneous recommendation data in a comprehensive way; \item The various development of advanced RSs built on the state-of-the-art machine learning methods; \item The practical approaches to customize and build advanced RSs over audience's own complex data with the ideas, models and techniques learned from this tutorial. \end{itemize} \fi \section{Relevance to SIGIR} Recommender systems (RSs) are one of the well-recognized important topics in the information retrieval (IR) research community due to the high relevance and similarity between RSs and IR. This can be demonstrated by the fact that recommendation has been one of the major topics mentioned in the calls for papers (CFPs) of ACM SIGIR conferences\footnote{https://sigir.org/sigir2022/call-for-papers/}.
Sequential or session-based recommender systems, as one of the most representative recommender system paradigms to emerge in recent years, are highly relevant to the IR community. A considerable number of research papers published in ACM SIGIR conferences are related to sequential or session-based recommender systems, covering topics such as sequential recommendation, session-based recommendation, next-item recommendation, next-basket recommendation/prediction, and sequential user behaviour modelling. As such, the tutorial is expected to be broadly appealing to many attendees of ACM SIGIR. \section{Related Tutorials} The proposed tutorial is extended from the following tutorials: \begin{itemize} \item \textit{Massimo Quadrana, Paolo Cremonesi. Sequence Aware Recommender Systems, ACM RecSys 2018.}\\ \textit{Massimo Quadrana, Dietmar Jannach, Paolo Cremonesi. Sequence Aware Recommender Systems, WWW 2019.}\\ This tutorial discusses the class of sequence-aware recommender systems. It introduces the problem formulation, sketches a number of computational tasks, reviews existing algorithmic approaches, and finally discusses evaluation aspects of sequence-aware recommender systems. \item \textit{Hui Fang, Guibing Guo, Danning Zhang, Yiheng Shu. Deep Learning-Based Sequential Recommender Systems: Concepts, Algorithms, and Evaluations, ICWE 2019.}\\ This tutorial carefully addresses the definitions, the challenges, the solutions, and the key factors influencing deep-learning-based (DL-based) sequential recommendation, and provides a comprehensive overview of DL-based sequential recommender systems. \item \textit{Liang Hu, Shoujin Wang, Qi Zhang, Zhong Yuan Lai and Dora D. Liu. 
Complement, Composite and Context: The 3C-Law to Build Multidomain Recommender Systems, ICDM 2021.}\\ This tutorial presents state-of-the-art theories and approaches to building multidomain recommender systems (RSs), including the latest and most advanced theories, methods, models, data, and applications. \item \textit{Shoujin Wang, Liang Hu, Yan Wang, Longbing Cao, Michael Sheng and Mehmet Orgun. Towards Ubiquitous Recommender Systems: Data, Approaches, and Applications, AAAI 2021.}\\ This tutorial presents state-of-the-art theories and approaches to conducting recommendations, i.e., the latest and most advanced RSs, and their wide applications. It focuses on three typical theories and approaches for building the most advanced RSs: (1) sequential or session-based RSs, (2) graph learning based RSs, and (3) interactive and conversational RSs, together with their prototypes. \item \textit{Shoujin Wang, Liang Hu, Yan Wang, Longbing Cao, Michael Sheng and Mehmet Orgun. Next-Generation Recommender Systems and Their Advanced Applications, IJCAI 2020.}\\ This tutorial presents state-of-the-art theories and approaches to equip the next-generation recommender systems (RSs). It illustrates three typical theories and approaches for building next-generation RSs: (1) sequential RSs, (2) graph learning based RSs, and (3) interactive and conversational RSs, together with their prototypes. \item \textit{Liang Hu, Shoujin Wang, Longbing Cao, Songlei Jian. Coupling Everything: A Universal Guideline for Building State-of-The-Art Recommender Systems, IJCAI 2019.}\\ This tutorial presents state-of-the-art RSs that enhance their capabilities by coupling users, items, contexts, data modalities and evaluation criteria. Specifically, the data characteristics and business needs are systematically analyzed from a non-independent and identically distributed (non-IID) perspective to present the challenges and difficulties in building advanced RSs. 
\item \textit{Liang Hu, Longbing Cao, Jian Cao and Songlei Jian. When Advanced Machine Learning Meets Intelligent Recommender Systems, AAAI 2018.}\\ This tutorial addresses the topic of how to apply advanced machine learning models to build recommender systems, focusing on how to incorporate advanced machine learning models into RSs in theory. \end{itemize} \section{Format and Schedule} The tutorial will be a 3-hour half-day tutorial, with breaks scheduled by the SIGIR organizers. This tutorial will conduct a systematic and extensive review of the most notable works to date on sequential and session-based recommender systems. It will first introduce a unified framework to organise the existing works in this area, followed by a unified problem statement of the research problem in this area. Then, we will thoroughly analyze the characteristics of the data used for sequential and session-based recommender systems and the main challenges faced by sequential and session-based recommendations. A classification scheme will then be introduced to classify and organize the existing approaches for building sequential and session-based recommender systems, where the most recent advances in each class of approaches will be highlighted. Afterwards, we will introduce the traditional and emerging real-world applications of sequential and session-based recommender systems and the commonly used real-world datasets for the experiments in this area. Finally, we will discuss some of the most pressing open issues and promising directions. \subsection{Background} This part will briefly provide the necessary background information to the audience, including a high-level historical review of research in recommender systems (RS), and sequential recommender systems (SRS)/session-based recommender systems (SBRS). 
Through this, we will provide the audience with an understanding of the major research milestones in RS and SRS/SBRS, especially the main issues for SRS/SBRS existing in the research community (e.g., inconsistency of concepts and problem statements), which motivate the necessity of this tutorial. \subsubsection{Why do we need sequential/session-based recommender systems?} This part will convince the audience of the necessity of SRS/SBRS, a new recommendation paradigm, by demonstrating the drawbacks of existing classic RS, including collaborative filtering (e.g., rating prediction) and content-based filtering. \subsubsection{Why do we need this tutorial?} This part will convince the audience of the necessity of this tutorial on SRS/SBRS by analyzing commonly existing issues in the SRS/SBRS research community, such as misleading concepts and inconsistent problem statements in the area. \subsection{Overview in Sequential/Session-based Recommender Systems}\label{SRSvsSBRS} This section will provide an overview of the key research directions as well as the progress in the SRS/SBRS area. \subsubsection{Sequential Recommender Systems vs. Session-based Recommender Systems} This part will compare SRS and SBRS from multiple perspectives, including the input, output and work mechanism. \subsubsection{Research Landscape of Sequential/Session-based Recommender Systems} This part will provide a high-level bird's-eye view of the research progress in SRS and SBRS respectively. \subsection{Sequential/Session-based Recommender Systems Problem Statement} This section will first demonstrate the main entities involved in SRS/SBRS, including User, Item, Action, Interaction and Session/Sequence, and then provide a unified problem statement for SRS/SBRS on the basis of these entities. \subsubsection{User and User Properties} This part will introduce users and their properties in an SRS/SBRS. \subsubsection{Item and Item Properties} This part will introduce items and their properties in an SRS/SBRS. 
\subsubsection{Action and Action Properties} This part will introduce actions (e.g., clicks, views, purchases) and their properties in an SRS/SBRS. \subsubsection{Interaction and Interaction Properties} This part will introduce interactions and their properties in an SRS/SBRS. An interaction is a triplet $\langle user, action, item \rangle$. \subsubsection{Session and Session Properties} This part will introduce sessions/sequences and their properties in an SRS/SBRS. A session/sequence is a set of interactions. \subsubsection{The SRS/SBRS Problem} This part will provide the formal problem statement of the SRS/SBRS problem, including the input, output and main work mechanism of an SRS/SBRS. \subsection{Data Characteristics and Challenges} This section will introduce the unique characteristics of the data used for SRS/SBRS and the special challenges they trigger for building SRS/SBRS from five different dimensions: (1) sequence/session length, (2) the internal order within sequences/sessions, (3) the type of actions within sequences/sessions, (4) user information, and (5) sequence/session-data structure. The outline of this section is listed below. \subsubsection{Characteristics and Challenges Related to Sequence/Session Length} \subsubsection{Characteristics and Challenges Related to Internal Order} \subsubsection{Characteristics and Challenges Related to Action Type} \subsubsection{Characteristics and Challenges Related to User Information} \subsubsection{Characteristics and Challenges Related to Session-data Structure} \subsubsection{A Summary of Characteristics and Challenges} \subsection{Classification and Comparison of SRS/SBRS Approaches} This section will first describe a classification taxonomy of the progress achieved in addressing the challenges described in the last section from a technical perspective, i.e., the employed approaches or models, and then comprehensively compare the different classes of approaches. 
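As a concrete illustration of the problem statement introduced earlier (sessions of interactions as input, a ranked list of next items as output), the following minimal sketch implements a first-order Markov-chain recommender, one of the conventional approaches covered in this tutorial. All function names and the toy data are illustrative only, not from any particular library:

```python
from collections import defaultdict, Counter
from typing import Dict, List

# For brevity, a session is reduced here to its ordered item sequence;
# in the full formulation each element is a triplet <user, action, item>.
Session = List[str]

def fit_transitions(sessions: List[Session]) -> Dict[str, Counter]:
    """Count first-order item-to-item transitions over all training sessions."""
    trans: Dict[str, Counter] = defaultdict(Counter)
    for s in sessions:
        for prev, nxt in zip(s, s[1:]):
            trans[prev][nxt] += 1
    return trans

def recommend_next(trans: Dict[str, Counter], session: Session, k: int = 2) -> List[str]:
    """Rank candidate next items by transition frequency from the last item."""
    last = session[-1]
    return [item for item, _ in trans[last].most_common(k)]

# Toy training sessions and a query session.
sessions = [["milk", "bread", "butter"],
            ["milk", "bread", "jam"],
            ["bread", "butter"]]
model = fit_transitions(sessions)
print(recommend_next(model, ["milk", "bread"], k=2))  # -> ['butter', 'jam']
```

The sketch makes the input/output contract of an SRS/SBRS explicit; the model classes surveyed below (latent representation and deep neural approaches) replace the simple transition counts with learned representations while keeping the same contract.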
\subsubsection{A Classification of SRS/SBRS Approaches} As shown in Figure~\ref{fig_class}, the approaches are classified into three classes: (1) conventional approaches, (2) latent representation based approaches, and (3) deep neural network based approaches. The first class includes four sub-classes: pattern/rule mining based approaches, K nearest neighbour approaches, Markov chain approaches, and generative probabilistic approaches. The second class includes two sub-classes: latent factor model based approaches and distributed representation based approaches. The third class includes two sub-classes: basic deep neural network (i.e., RNN, MLP, CNN, GNN) based approaches and advanced model (i.e., attention mechanism, memory network, mixture model, generative model, reinforcement learning) based approaches. In total, there are eight sub-classes and 15 atomic classes of approaches included in the aforementioned three classes. \subsubsection{A Comparison of Different Classes of Approaches} This part will provide comparisons between different classes of approaches from multiple perspectives, including the main work mechanism of each class, the dependency types learned by each class, and the research trend of each class in terms of the number of publications. \begin{figure*}[!t] \centering \includegraphics[width=.85\textwidth]{picture/SBRS_taxnomy_v3.pdf} \vspace{-1em} \caption{The categorization of SBRS approaches, from \cite{wang2021survey}} \label{fig_class} \vspace{-1em} \end{figure*} \subsection{Conventional SRS/SBRS Approaches} Conventional approaches utilize conventional data mining or machine learning approaches to build SRS/SBRS \cite{wang2021survey}. This section will introduce each of the four classes of conventional approaches for SRS/SBRS respectively, and then conduct a comparison between these four classes. The outline of this part is listed below. 
\subsubsection{Pattern/Rule Mining based SRS/SBRSs} \subsubsection{K Nearest Neighbour based Approaches} \subsubsection{Markov Chain based Approaches} \subsubsection{Generative Probabilistic Model based Approaches} \subsubsection{Comparison of Conventional SBRS Approaches} \subsection{Latent Representation Approaches for SRSs/SBRSs} Latent representation approaches first learn a low-dimensional latent representation for each interaction from sequences or sessions (usually with shallow models) and then employ the learned representation as the input of the subsequent recommendation task. This section will first introduce the two classes of latent representation approaches for SRSs/SBRSs respectively, followed by a comparison between them. The outline is listed below. \subsubsection{Latent Factor Model based Approaches} \subsubsection{Distributed Representation based Approaches} \subsubsection{Comparison of Latent Representation based Approaches} \subsection{Deep Neural Network Approaches for SRSs/SBRSs} Deep neural network approaches mainly take advantage of the powerful capabilities of deep neural models in learning the complex dependencies within or between sequences/sessions for recommendations \cite{wang2021survey}, and they can be roughly classified into basic approaches and advanced approaches. Each basic deep neural approach is built on a basic deep neural network (e.g., RNN), while each advanced approach is built on one or more advanced neural models (e.g., attention models). This section will first introduce the four classes of basic approaches and the five classes of advanced approaches, and then compare these approaches. The outline of this section is listed below. 
\subsubsection{Basic Deep Neural Network based Approaches} \begin{itemize} \item{Recurrent Neural Networks (RNN) based Approaches} \item{Multilayer Perceptron (MLP) based Approaches} \item{Convolutional Neural Networks (CNN) based Approaches} \item{Graph Neural Networks (GNN) based Approaches} \end{itemize} \subsubsection{Advanced Neural Model based Approaches} \begin{itemize} \item{Attention Model based Approaches} \item{Memory Networks based Approaches} \item{Mixture Model based Approaches} \item{Generative Model based Approaches} \item{Reinforcement Learning (RL) based Approaches} \end{itemize} \subsubsection{Comparison of Deep Neural Network based SRS/SBRS Approaches} \subsection{SRS/SBRS Applications, Algorithms and Datasets} This section will first demonstrate the real-world applications of SRSs and SBRSs, including both the applications in conventional domains, such as the e-commerce, streaming media and tourism domains, and the applications in emerging domains, such as the finance and healthcare domains. Then, we will summarize a collection of representative and state-of-the-art algorithms for building SRSs and SBRSs and the commonly used real-world datasets for testing the performance of these algorithms. \subsubsection{SRS/SBRS Applications} \subsubsection{Algorithms and Datasets for SRS/SBRSs} \subsection{Prospects and Future Directions} This section will outline the following seven promising prospective research directions in the areas of SRS and SBRS. For each direction, we will demonstrate its significance and the open research issues. 
\begin{itemize} \item{SRS/SBRSs with General User Preference} \item{SRS/SBRSs Considering More Contextual Factors} \item{SRS/SBRSs with Cross-domain Information} \item{SRS/SBRSs by Considering More User Behaviour Patterns} \item{SRS/SBRSs with Constraints} \item{Interactive SRS/SBRSs} \item{Online or Streaming SRS/SBRSs} \end{itemize} \subsection{Conclusions} Finally, we will conclude this tutorial by briefly highlighting the main contents covered in this tutorial. Some time will be left for questions from the audience. \iffalse analyze data, challenges, and business needs in advanced recommendation problems, and take non-IID perspective to introduce recent advances in machine learning to model the 3C-based next-generation RSs. This includes an overall of RS evolution and non-IIDness in recommendation, advanced machine learning for cross-domain RS, social RS, multimodal RS, multi-criteria RS, context-aware RS, and group-based RS, and their integration in building real-life RS. \subsection*{Background of Recommender Systems} At the beginning, we will introduce the background of RSs. \begin{itemize} \item We are leaving the ``Information Age" and entering the ``Recommendation Age"'. \item E-commerce companies like Amazon and Alibaba; online social media like Facebook and Weibo; Internet service companies like Google and Baidu have paid much more attention to AI research. \item With the rapid development of AI technology, RSs are the most straightforward AI application to improve user experience and make more business profit. \end{itemize} \subsubsection*{Brief Review on Classic Recommender Systems} We introduce some classic recommendation techniques and their typical problems. \begin{itemize} \item Collaborative filtering (CF) \begin{itemize} \item Memory-based CF and model-based CF. \item Application cases: Item-based CF in Amazon.com is a classic example. Facebook, MySpace, LinkedIn use CF to recommend new friends, groups, and other social connections. 
\item The problems in CF: (1) cold start (2) scalability and (3) sparsity. \end{itemize} \item Content-based filtering (CBF) \begin{itemize} \item CBF are based on a description of the item and a profile of the user’s preferences. \item Application cases: Rotten Tomatoes, Internet Movie Database use CBF to recommend movies. \item The problems in CBF: (1) limited content analysis (2) over-specialization and (3) new user. \end{itemize} \item The challenges cannot be easily handled by classic RSs: (1) heterogeneity (2) vulnerability (3) social influence (4) multiple type data integration (5) context awareness. \end{itemize} \subsubsection*{Brief Review on Machine Learning Methods for Recommender Systems} Before presenting advanced machine learning technique to build RSs, we will give some preliminaries in this section to make audience easier to follow. \begin{itemize} \item Latent factor models are the most prevalent approach in RS. In particular, matrix factorization and tensor factorization are presented as the typical examples. \item Deep learning are the most effective approach to handle most types of data, e.g. images, text and video. \begin{itemize} \item Autoencoder to generate the high-level representation from raw attributes \item Convolutional neural networks to model the spatial coupling \item Recurrent neural networks to model temporal coupling \end{itemize} \item Language models, Word2vec, Transformers, BERT and GPT are the most popular methods to learn the representation from text, which are helpful to deal with textual data and sequential data. \item Transfer learning aims to share and transfer knowledge learned from different data types and different data sources. Therefore, we will present some transfer learning methods, and some specific areas including domain adaption and multi-task learning. 
\end{itemize} \subsubsection*{Brief Review on Classic Applications of Recommender Systems} Before presenting the advanced applications of RS, we briefly review and summarize the classic application cases and domains of RS in this section. \begin{itemize} \item Item recommendation in e-commerce. The items (e.g., a book, a cup) are recommended to the end users based on their explicit feedback (e.g., users' ratings on items) or implicit feedback (e.g., click, view) on items shown in the online shopping platforms (e.g., eBay, Amazon). This is the most popular application case of recommender systems. \begin{itemize} \item Explicit user-item interaction (e.g., ratings) prediction in online e-commerce is one of the most representative applications of recommender systems. It is usually conducted in the form of user-item rating matrix completion. \item Implicit user-item interaction (e.g., click, purchase) prediction in online e-commerce is another representative application of recommender systems. It is often to predict the probability of a user to click or purchase an item during one of his/her shopping events. \end{itemize} \end{itemize} \subsubsection*{Preliminary -- Multi-Domain Recommendation} In this section, we will analyze the theoretical root cause, necessity and challenges in multi-domain recommendation and present the principles to deal with them. \begin{itemize} \item In the big data era, it is natural to integrate multi-domain information to build user's performance and model item's features. Principally, there are two critical points in designing multi-domain RSs, i.e. heterogeneity and coupling. \begin{itemize} \item Heterogeneity in various aspects: (1) the heterogeneity over users (2) the heterogeneity over items (3) the heterogeneity over user/item attributes (4) the heterogeneity of data types and (5) the heterogeneity over user-item interaction. 
\item Coupling in various aspects: (1) the coupling between users (2) the coupling between items (3) the coupling between different attributes and (4) the coupling between different data types. \end{itemize} \item To apply the guidelines presented in this section, we give more specific instances to show how to couple heterogeneous data and learning enhanced information. \end{itemize} \subsection*{Learning Complementary Knowledge} In this section, we demonstrate how to (1) couple data from multiple domains, and (2) leverage information through social relations to obtain complementary information to deal with sparsity and cold start. In particular, we focus on introducing two typical types of RSs: (1) cross-domain RSs and (2) social RSs. \begin{itemize} \item Cross-domain recommender systems mainly aim to learn the complementary information for multiple domains, In this tutorial, we will present the following key points: \begin{itemize} \item The main issues and challenges in cross-domain RSs \item Heterogeneity and coupling in cross-domain problems \item What, when and how to transfer complementary information between domains \item State-of-the-art transfer learning methods for modeling cross-domain RSs \begin{itemize} \item Latent factor model based methods \item Deep learning based methods \item Other methods \end{itemize} \item Real-world cases demonstration \end{itemize} \item Social recommender systems mainly aim to learn the complementary information for related people based on some social relations. 
In this tutorial, we try to present the following key points: \begin{itemize} \item Social influence in recommendation \item Reputation and trust in social recommendation \item Learning to incorporate social information \begin{itemize} \item Social regularization based models \item Coupling users using deep learning approach \end{itemize} \item Real-world cases demonstration \end{itemize} \end{itemize} \subsection*{Learning Compatible Knowledge} Traditional single data and single objective RSs have a number of limitations. In this section, we present multimodal and multi-criteria RSs to learn more comprehensive information. \begin{itemize} \item Multimodal recommender systems \begin{itemize} \item The limitations of the classic RSs built on simple data, e.g. ratings. \item The needs of building multimodal RSs to learn comprehensive information from multiple types of data, including ratings, comments and images. \item Multimodal learning methods \begin{itemize} \item Modeling multimodal data with deep models \item Link two types of data with encoder-decoder. \end{itemize} \item Real-world cases demonstration \end{itemize} \item Multi-criteria recommender systems \begin{itemize} \item Most current RSs only optimize accuracy but other factors are also important in the real-world cases. \item Utility function over multi-criteria ratings \item Machine learning methods for multi-criteria RSs \begin{itemize} \item Integrating multi-criteria objectives by regularization \item Probabilistic modeling and tensor factorization \item Multi-task learning for multi-criteria objectives \item Related work: multi-criteria optimization \end{itemize} \item Real-world cases demonstration \end{itemize} \end{itemize} \subsection*{Learning Contextual Knowledge} AI systems are expected to be conscious enough to the contextual information all around as human. Obviously, recommendation is very sensitive to the context, such as time, place, and the company of other people. 
In this section, we will present two types of RSs built on contextual information: (1) context-aware RSs, (2) group-based RSs and (3) spacial-temporal RSs. \begin{itemize} \item Context-aware recommender systems \begin{itemize} \item What is the context and how to obtain contextual information \item Paradigms for incorporating context \item Machine learning methods to model and represent contextual information \begin{itemize} \item Multidimensional/Tensor models for represent high-order interaction \item Deep and wide networks for learning high-level representation of context \end{itemize} \item Case study for modeling typical context-aware RSs \begin{itemize} \item Session-based recommender systems \end{itemize} \end{itemize} \item Group-based recommender systems \begin{itemize} \item The challenges in group-based recommendation \item How to represent group preference over all members with heterogeneous tastes \item Machine learning methods for modeling group-based recommendation \begin{itemize} \item Mixture model for group context representation \item Learning group preference in terms of deep models \item Weighting members in terms of word embedding model with attention mechanism \end{itemize} \item Real-world case demonstration \end{itemize} \item Spacial-temporal recommender systems \begin{itemize} \item The challenges in spacial-temporal recommendation \item How to represent and integrate spacial and temporal information \item Machine learning methods for modeling spacial-temporal recommendation \begin{itemize} \item Mixture model for spacial-temporal context representation \item Learning spacial-temporal information in terms of deep models \item Weighting members in terms of word embedding model with attention mechanism \end{itemize} \item Real-world case demonstration \begin{itemize} \item Point of interest recommendation \end{itemize} \end{itemize} \end{itemize} \subsection*{Advanced Multi-Domain Recommender Systems in Practise} In this section, we 
present a series of advanced and novel applications of RS, which go far beyond the aforementioned classic applications of RS. These advanced applications are developing rapidly in recent years and have played an important role in making a more intelligent, convenient and efficient daily life for us. \subsubsection*{RS in FashionAI} \begin{itemize} \item The background, formalization and characteristics of fashion recommendations. \item The challenges in fashion recommendations. \item The representative solutions and models for fashion recommendations. \item The typical real-world application cases of fashion recommendations. \end{itemize} \subsubsection*{RS in FinTech} \begin{itemize} \item The background, formalization and characteristics of recommendations in FinTech. \item The challenges of recommendations in FinTech. \item The representative solutions and models for FinTech recommendations. \item The typical real-world application cases of FinTech recommendations. \end{itemize} \subsubsection*{RS in Healthcare} \begin{itemize} \item The background, formalization and characteristics of recommendations in healthcare. \item The challenges of recommendations in healthcare. \item The representative solutions and models for healthcare recommendations. \item The typical real-world application cases of healthcare recommendations. \end{itemize} \subsubsection*{RS in Point-Of-Interest} \begin{itemize} \item The background, formalization and characteristics of recommendations in POI. \item The challenges of recommendations in POI. \item The representative solutions and models for POI recommendations. \item The typical real-world application cases of POI recommendations. \end{itemize} Point of Interest (POI) recommendation in location-based service (LBS). POI recommender systems are desired to make use of the rich information (social relationships, check-in history and so on) to mine users’ preferences on locations and recommend new places where users may be interested in. 
\subsubsection*{RS in Multimedia} \begin{itemize} \item The background, formalization and characteristics of recommendations in multimedia. \item The challenges of recommendations in multimedia. \item The representative solutions and models for multimedia recommendations. \item The typical real-world application cases of multimedia recommendations. \end{itemize} Multimedia recommendations mainly refer to movie, music, news or video recommendation in the entertainment domain. The corresponding recommender systems are designed to predict users' preferences over certain movies or songs according to the users' explicit feedback, or to predict the next movie/song/news/video that would be likely to be consumed by the user given his/her historical interactions. \fi \section{Supporting Materials} To provide the audience with a better understanding of the presented tutorial, we will provide the following supporting materials: \begin{itemize} \item Slides (PPT or PDF) of the tutorial \item A video recording of the tutorial presentation \item Detailed reference papers included in the slides \item Links to publicly available datasets mentioned in the tutorial \end{itemize} \normalem \bibliographystyle{ACM-Reference-Format} \section*{Title and length} \subsection*{Title} Sequential/Session-based Recommendations: Challenges, Approaches, Applications and Opportunities \subsection*{Proposed Length of the Tutorial} Half day (3 hours plus breaks) \section*{Tutorial format} Online \section*{Intended Audience and Prerequisite Knowledge} \subsection*{Intended Audience} This tutorial should be appealing to research students, researchers and practitioners who are working on recommender systems or who plan to step into this vibrant area. The goal of the tutorial is to equip both academic and practitioner audiences with a comprehensive understanding of sequential/session-based recommender systems (SRSs/SBRSs) and to provide them with the relevant techniques and skills to build the most advanced SRSs and SBRSs. 
After this tutorial, the audience can walk away with: \begin{itemize} \item Introductory: The background and motivation of SRSs and SBRSs. A basic understanding of the research problem, data characteristics and challenges of SRSs and SBRSs, and the basic approaches to build SRSs and SBRSs; \item Intermediate: The most recent and various developments of advanced SRSs and SBRSs built on state-of-the-art machine learning methods, and deep insight into the corresponding sequence/session modeling behind SRSs/SBRSs; \item Advanced: The practical approaches to customize and build advanced SRSs and SBRSs over the audience's own ideas and data with the knowledge learned from this tutorial. \end{itemize} \subsection*{Prerequisite Knowledge} Nothing specific, but a rudimentary knowledge of RSs and some machine learning methods will be helpful, including: \begin{itemize} \item Recommender systems \item Latent representation models \item Deep learning \end{itemize} \section*{Presenters} \subsection*{Dr. Shoujin Wang (Main Contact Person)} \textbf{Position}: Research Fellow\\ \textbf{Affiliation}: School of Computing Technologies, RMIT University and School of Computing, Macquarie University, Australia\\ \textbf{Postal Address}: Level 3, 4 Research Park Drive (BD Building), Macquarie University, Sydney, NSW 2109, Australia\\ \textbf{Email}: shoujin.wang@mq.edu.au\\ \textbf{Tel}: +61-452139866\\ \textbf{Homepage}: \url{https://sites.google.com/view/shoujinwanghome}\\ \noindent \textbf{Shoujin Wang} is a Research Fellow in the School of Computing Technologies, RMIT University, Australia, and he is also a Casual Academic at Macquarie University, Australia. His research interests include data mining, machine learning, and their general applications in recommender systems and fake news mitigation. 
He has published more than 30 high-quality papers in top-rank international conferences including WWW, IJCAI, AAAI, ECML-PKDD, and premier journals like ACM Computing Surveys (ACM CSUR), IEEE Transactions on Systems, Man, and Cybernetics: Systems (IEEE TSMC: Systems), and IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS) in areas such as data mining, machine learning, and recommender systems. He has served the community as a program committee member of international conferences like AAAI, IJCAI, KDD, WSDM, CIKM, PAKDD, DSAA, and as a reviewer of prestigious journals including Machine Learning, IEEE Intelligent Systems, IEEE Transactions on Cybernetics and IEEE Transactions on Knowledge and Data Engineering (TKDE). As the lead organizer, Shoujin initiated and organised the first and second international workshops on Neural Recommender Systems at ICDM-20 and ICDM-21. Shoujin has successfully delivered four tutorials on recommender systems at top conferences including IJCAI-19, IJCAI-20, AAAI-21 and ICDM-21. He has also been widely invited to give talks introducing recommender systems to industry and government partners. Since 2012, Shoujin has gained extensive teaching experience, serving as a lecturer, a guest lecturer, a casual lecturer and a tutor to deliver courses on data science and recommender systems to both undergraduate and postgraduate students at Macquarie University, Monash University and Shanghai University. Shoujin also delivered an industry training series on data science and machine learning at Vantage FX Pty Ltd in 2018. \subsection*{Dr. 
Qi Zhang} \textbf{Position}: Research Scientist\\ \textbf{Affiliation}: DeepBlue Academy of Sciences, China\\ \textbf{Postal Address}: 369 Weining Rd, Changning District, Shanghai 200240, China\\ \textbf{Email}: zhangqi@deepblueai.com\\ \textbf{Tel}: +86-18301367053\\ \textbf{Homepage}: \url{https://sites.google.com/view/qizhang-bit-uts/home}\\ \noindent \textbf{Qi Zhang} received his first Ph.D. degree in computer science from the Department of Computer Science and Engineering, Beijing Institute of Technology, China in 2020. Currently, he is working as an AI Scientist at DeepBlue Academy of Sciences. His research interests include recommender systems, learning to hash, machine learning and general artificial intelligence. He has published several papers in top-rank international conferences and journals in the area of recommender systems, including TKDE, TNNLS, AAAI, ECAI, ESWA, KBS and DSAA. He has served the community as a program committee member or reviewer of IJCAI, ICDM, CIKM, PAKDD, DSAA, IEEE Intelligent Systems, IEEE TSMC, IEEE TKDE, and JDSA. He has successfully delivered a tutorial on recommender systems at ICDM-21 and given talks to introduce recommender systems to industry, e.g., IP Australia and Commonwealth Bank. He also served as a teaching assistant for the Machine Learning and the Frontier of Computer Science courses at Beijing Institute of Technology. \subsection*{Professor Liang Hu} \textbf{Affiliation}: Tongji University, China\\ \textbf{Postal Address}: 4800 Cao'an Highway, Shanghai 201804, P. R. China \\ \iffalse \textbf{Position}: Chief Scientist at AI Research Institute\\ \textbf{Affiliation}: DeepBlue Academy of Sciences, Shanghai, China\\ \textbf{Postal Address}: No.
369, Weining Road, Xinjing Town, Changning District, Shanghai, 200240, China\\ \fi \textbf{Email}: lianghu@tongji.edu.cn\\ \textbf{Tel}: +86-13918633966\\ \textbf{Homepage}: \url{https://sites.google.com/view/lianghu/home}\\ \noindent \textbf{Liang Hu} is a full professor at Tongji University, China, and also the chief AI scientist of DeepBlue Academy of Sciences, China. He received his first Ph.D. degree in computer application technology from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China in 2015, and his second Ph.D. degree in Analytics from the Advanced Analytics Institute, University of Technology Sydney, Australia, in 2019. He has published more than 40 papers in top-rank international conferences and journals in the area of recommender systems, including WWW, IJCAI, AAAI, ICDM, ICWS, TOIS, JWSR. Dr Liang Hu has successfully delivered eight tutorials (at AAAI-18, AAAI-20, IJCAI-19, IJCAI-20, IJCAI-21, KDD-18, PAKDD-18 and ICDM-21) and several invited talks to main conference/workshop audiences and public seminars to industry and government. He has been invited as a program committee member for more than 30 top-rank AI international conferences, including AAAI, IJCAI, ICDM, CIKM, and KDD. He also serves as a reviewer of more than ten AI and data science-related international journals, including ACM Computing Surveys, IEEE TKDE, ACM TOIS, IEEE TPAMI, etc. As a co-chair, he has organized workshops on AI, neural networks and recommender systems at ICDM-20 and ICDM-21.
\subsection*{Professor Xiuzhen Zhang} \textbf{Affiliation}: School of Computing Technologies, RMIT University, Australia\\ \textbf{Postal Address}: 14.09.05 - Bld 14 Level 9 Room 5, Bundoora Campus, RMIT University, Melbourne, VIC 3000, Australia\\ \textbf{Tel}: +61-399252774 \\ \textbf{Email}: xiuzhen.zhang@rmit.edu.au\\ \textbf{Homepage}: \url{https://www.rmit.edu.au/contact/staff-contacts/academic-staff/z/zhang-professor-jenny}\\ \noindent \textbf{Xiuzhen Zhang} is currently a Professor with the School of Computing Technologies, RMIT University, Australia. Her research interests are in data mining and data analytics. She currently supervises several PhD student research projects in these areas. She teaches courses in the areas of databases, data analytics and data mining. She is the Associate Dean for Higher Degrees by Research and Tech. \subsection*{Professor Yan Wang} \textbf{Affiliation}: School of Computing, Macquarie University, Australia\\ \textbf{Postal Address}: BD Building, Room 354, Macquarie University, Sydney, NSW 2109 Australia\\ \textbf{Tel}: +61-298509539 \\ \textbf{Email}: yan.wang@mq.edu.au\\ \textbf{Homepage}: \url{http://web.science.mq.edu.au/~yanwang/}\\ \noindent \textbf{Yan Wang} is currently a Professor with the Department of Computing, Macquarie University, Australia. His research interests include recommender systems, trust management/computing, social computing and services computing. He has authored or co-authored over 100 journal and conference papers in the above areas, all at top venues. He has served as general chair and PC chair for a number of international conferences and workshops, such as IEEE Cloud2017, IEEE SCC2018 and SOSE2018. \subsection*{Dr. Charu Aggarwal} \textbf{Position}: Distinguished Research Staff Member\\ \textbf{Affiliation}: IBM T. J.
Watson Research Center, United States\\ \textbf{Postal Address}: 1101 Kitchawan Rd, Yorktown, NY 10598, United States\\ \textbf{Email}: charu@us.ibm.com\\ \textbf{Homepage}: \url{http://www.charuaggarwal.net/}\\ \noindent \textbf{Charu Aggarwal} is a Distinguished Research Staff Member (DRSM) at the IBM T. J. Watson Research Center in Yorktown Heights, New York. He completed his Bachelor of Technology in Computer Science from the Indian Institute of Technology at Kanpur in 1993 and his PhD in Operations Research from the Massachusetts Institute of Technology in 1996. He has worked extensively in the field of data mining, with particular interests in data streams, privacy, uncertain data, social network analysis and recommender systems. He has authored 9 books, over 400 papers in refereed venues, and has applied for or been granted over 80 patents. His h-index is 134. Charu Aggarwal has received two best paper awards and an EDBT Test-of-Time Award (2014). He is a recipient of the IEEE ICDM Research Contributions Award (2015) and the ACM SIGKDD Innovation Award (2019), which are the two most prestigious awards for influential research in data mining. He is also a recipient of the W. Wallace McDowell Award, the highest award given by the IEEE Computer Society across the field of computer science. He has served as the general or program co-chair of the IEEE Big Data Conference (2014), the ICDM Conference (2015), the ACM CIKM Conference (2015), and the KDD Conference (2016). He has served as the editor-in-chief of the ACM SIGKDD Explorations and is currently editor-in-chief of the ACM Transactions on Knowledge Discovery and Data Mining as well as that of ACM Books. He is serving or has served as associate editor/action editor of several premier journals including the IEEE TKDE, the IEEE TBD, DMKD, and KIS.
He is a fellow of the IEEE (2010), ACM (2013), and the SIAM (2015). \fi \section{Introduction} Recommender systems (RSs) have been playing an increasingly important role in informed consumption and decision-making in the current era of information explosion and digitized economy~\cite{wang2021survey}. In recent years, sequential recommender systems (SRSs) and session-based recommender systems (SBRSs) have emerged as a new paradigm of RSs to capture users' short-term but dynamic preferences for enabling more timely and accurate recommendations~\cite{wang2019sequential}. SRSs and SBRSs have become important and popular research areas in the recommendation community, attracting much attention from both academia and industry. SRSs and SBRSs are highly correlated and similar in terms of the input, output and recommendation mechanism, and most of the representative approaches for building SRSs and SBRSs are very similar. Therefore, we present this work to cover both SRSs and SBRSs. The key challenge of building SRSs/SBRSs lies in how to comprehensively learn the complex dependencies embedded within and between sequences/sessions to accurately infer users' timely and dynamic preferences~\cite{wang2021survey}. In recent years, there has been some promising progress in tackling this challenge, including, e.g., Markov chain based approaches~\cite{rendle2010factorizing}, distributed representation based approaches~\cite{wang2018attention}, recurrent neural network (RNN) based approaches~\cite{hidasi2015session}, graph neural network (GNN) based approaches~\cite{wu2019session,wang2021graph}, reinforcement learning based approaches~\cite{zhao2017deep} and contrastive learning based approaches~\cite{qiu2022contrastive}.
Although SRSs and SBRSs have been extensively studied in recent years, there are many inconsistencies within each area and/or between both areas, caused by the diverse description terms, scenario settings, employed assumptions and application domains. There is no unified framework to categorize the existing studies, and no unified statement of the research problem(s)~\cite{wang2021survey}. A few tutorials have focused on sequence-aware recommender systems~\cite{Massimowww19}, deep learning-based sequential recommendations~\cite{FangICWE19}, and session-based recommendation on GPU~\cite{Moreirarecsys21}. However, no existing work provides a unified framework and problem statement to remove these widespread inconsistencies in the areas of SRSs and SBRSs, nor a comprehensive and systematic demonstration of the data characteristics, key challenges, most representative and state-of-the-art approaches, typical real-world applications and important future research directions in this area. This work aims to fill these gaps so as to facilitate further research in this exciting and vibrant area. \iffalse \section{Objectives} This work will systematically review the progress of research in sequential and session-based recommender systems with an emphasis on the frameworks, problem statement, data characteristics and challenges, approaches and algorithms, applications and prospects.
The tutorial will provide (1) a unified framework to categorize the studies on SRS/SBRS, which provides an overview of the research in this area, (2) a unified problem statement of the research problem in the area, (3) a comprehensive overview of the unique characteristics of the data used for SRS and SBRS as well as the challenges faced by this new recommender system paradigm, (4) a systematic classification and comparison of approaches for building SRSs and SBRSs, (5) a brief summary and introduction of the representative approaches from each class of SRS/SBRS approaches, (6) a summary of representative classical and emerging real-world application scenarios of SRS/SBRS, (7) a summary of open research issues and prospects in the area of SRS/SBRS. This work should be appealing to research students, researchers and practitioners who are working on recommender systems or who plan to step into this vibrant area. \fi \iffalse Classic RSs, e.g., collaborative filtering and content-based filtering, are mainly built on users' feedback or items' contents to promote items by predicting users' explicit preference (e.g., ratings) over items in the e-commerce area. This not only downgrades the performance of RSs by ignoring other relevant information but also greatly limits the application scenarios and domains of RSs. This actually motivates the necessity of new theories and approaches for building next-generation RSs, as well as developing advanced applications of RSs. In practice, in addition to explicit preference prediction in the e-commerce area, RSs have been more and more widely used in many emerging scenarios (e.g., implicit ranking prediction over items) and domains (e.g., FashionAI) for smarter decisions in recent years. This not only broadens the application scope of RSs but also benefits us in nearly every aspect including eating, dressing, living and traveling.
This tutorial aims to equip both academic and practical audiences with a comprehensive understanding of, and relevant techniques for, how to apply state-of-the-art machine learning approaches to build more sensible next-generation RSs in contexts with various heterogeneous data and complex relations. In this tutorial, we will present a systematic review and applications of recent advanced machine learning techniques to build real-life intelligent RSs. After this tutorial, the audience can walk away with: \begin{itemize} \item The insight into the recent development and evolution of recommendation techniques; \item The machine learning methods to model complex couplings over heterogeneous recommendation data in a comprehensive way; \item The various developments of advanced RSs built on state-of-the-art machine learning methods; \item The practical approaches to customize and build advanced RSs over the audience's own complex data with the ideas, models and techniques learned from this tutorial. \end{itemize} \fi \iffalse \section{Relevance to SIGIR} Recommender systems (RSs) are one of the well-recognized important topics in the information retrieval (IR) research community due to the high relevance and similarity between RSs and IR. This can be demonstrated by the fact that recommendation has been one of the major relevant topics mentioned in the calls for papers (CFPs) of ACM SIGIR conferences\footnote{https://sigir.org/sigir2022/call-for-papers/}. Sequential or session-based recommender systems, as one of the most representative recommender system paradigms emerging in recent years, are highly relevant to the IR community. A majority of topics of research papers published in ACM SIGIR conferences are related to sequential or session-based recommender systems, including, e.g., sequential recommendations, session-based recommendations, next-item recommendations, next-basket recommendation/prediction, sequential user behavior modeling.
As such, the tutorial is expected to be broadly appealing to many attendants of ACM SIGIR. \fi \begin{figure}[!t] \centering \includegraphics[width=.47\textwidth]{picture/session_vs_sequence_v3.pdf} \vspace{-1em} \caption{Session data vs. sequence data, from~\cite{wang2021survey}} \label{fig_sequence} \vspace{-2em} \end{figure} \section{Related Work} There are some surveys and tutorials focusing on the topic of SRSs or SBRSs. For SRSs, Quadrana et al. performed a comprehensive survey~\cite{quadrana2018sequence} together with two tutorials~\cite{Massimowww19,Massimorecsys18} at WWW 2019 and RecSys 2018 on sequence-aware recommender systems, which discuss the recommendation task, algorithms and evaluations of SRSs. Fang et al. provided a survey~\cite{fang2019deep} and presented a tutorial~\cite{FangICWE19} at ICWE 2019, both on deep learning based sequential recommendations, discussing various aspects of SRSs including the concepts, algorithms, influential factors, and evaluations. Wang et al.~\cite{wang2019sequential} conducted a brief review on the challenges, progress and prospects of SRSs. Regarding SBRSs, to the best of our knowledge, there is only one comprehensive survey~\cite{wang2021survey} that systematically discusses the session-based recommendation problem, data characteristics, recent progress, approach taxonomy, applications and future directions. Ludewig et al.~\cite{ludewig2021empirical} conducted an empirical study on some representative SBRS algorithms, while Gabriel et al.~\cite{Moreirarecsys21} provided a tutorial on session-based recommendation on GPU. These existing works are valuable in treating specific research topics in more detail. However, each of them focuses on either SRSs or SBRSs; none covers both areas systematically to discuss their differences and similarities, or to address the commonly existing inconsistencies w.r.t.
concepts, settings, etc. between them. This work complements those related works by providing a more complete summarization of sequential and session-based recommendations, with an emphasis on the problem statement, data characteristics and challenges, applications and prospects, and a comprehensive analysis of all kinds of state-of-the-art approaches, models and algorithms. Specifically, it performs a comprehensive review of the latest survey papers on sequential and session-based recommendations. \begin{figure*}[!t] \centering \includegraphics[width=.86\textwidth]{picture/SBRS_taxnomy_v4.pdf} \vspace{-1em} \caption{The categorization of SBRS approaches, adapted from~\cite{wang2019sequential,wang2021survey}} \label{fig_class} \vspace{-0.5em} \end{figure*} \section{An Overview of This Work} This work will perform a systematic and high-level review of the most notable works to date on SRSs/SBRSs. It will contain five parts: \begin{itemize}[leftmargin=*] \item{Part 1 Introduction and Problem Statement. This part will first introduce the background of SRSs and SBRSs with an emphasis on the comparison between them, followed by a unified problem statement of SRSs/SBRSs.} \item{Part 2 Data Characteristics and Challenges. This part will thoroughly analyze the characteristics of data used for SRSs and SBRSs and the main challenges triggered by them.} \item{Part 3 Sequential/Session-Based Recommendation Approaches. This part will first provide a classification scheme to organize all the existing approaches to SRSs and SBRSs and then highlight the most recent advances in each class of approaches.} \item{Part 4 Applications and Algorithms. This part will introduce both the traditional and emerging real-world applications of SRSs and SBRSs and a collection of representative and state-of-the-art SRS/SBRS algorithms together with public datasets.} \item{Part 5 Future Opportunities.
This part will discuss some of the most promising directions in the area and conclude this tutorial.} \end{itemize} \iffalse (1) Part 1 Introduction and Problem Statement. This part will first introduce the background of SRS and SBRS (cf. Sec. \ref{sec:background}), followed by an overview of the SRS/SBRS research (cf. Sec. \ref{SRSvsSBRS}), and finally a unified problem statement of the research problem in this area (cf. Sec. \ref{sec:problem}). (2) Part 2 Data Characteristics and Challenges. This part will thoroughly analyze the characteristics of data used for SR and SBR and the main challenges faced by SR and SBR (cf. Sec. \ref{sec:datachar}). (3) Part 3 SRS/SBRS Approaches. This part will first provide a classification scheme to classify and organize all the existing approaches for building SRS and SBRS and then highlight the most recent advance in each class of approaches (cf. Sec. \ref{sec:taxonomy} to \ref{sec:dnn_approaches}). (4) Part 4 Applications and Algorithms. This part will introduce both the traditional and emerging real-world applications of SRS and SBRS. In addition, a collection of representative and state-of-the-art SRS/SBRS algorithms together with those commonly used real-world datasets for testing their performance will be introduced (cf. Sec. \ref{sec:app_alg_datasets}). (5) Part 5 Future Opportunities. This part will discuss some of the most pressing open issues and promising directions in the area and conclude this tutorial (cf. Sec. \ref{sec:prospects_future} to \ref{sec:conclusions}). The outline of this tutorial is listed below. It will first introduce a unified framework to organize the existing works in this area, followed by a unified problem statement of the research problem in this area. Then, we will thoroughly analyze the characteristics of data used for sequential and session-based recommender systems and the main challenges faced by sequential and session-based recommendations.
Then a classification scheme will be introduced to classify and organize all the existing approaches for building sequential and session-based recommender systems, where the most recent advance in each class of approaches will be highlighted. Afterward, we will introduce the traditional and emerging real-world applications of sequential and session-based recommender systems and the commonly used real-world datasets for the experiments in this area. Finally, we will discuss some of the most pressing open issues and promising directions. The outline of this tutorial is listed below. \fi \section{Background and Problem Statement} \iffalse \subsection{Background} \label{sec:background} We introduce the background and motivation of this work by focusing on two critical questions: Why do we need SRSs and SBRSs? And why do we need this work? \subsubsection{Why do we need sequential/session-based recommender systems?} \subsubsection{Why do we need this work?} \fi \subsection{SRSs vs SBRSs}\label{SRSvsSBRS} Generally, SRSs and SBRSs take sequence data and session data as their input, respectively. A session is a set of interactions with a clear boundary, and the interactions may be ordered or unordered. A sequence is a list of elements (such as item IDs) in a clear chronological order. SBRSs either predict the next interaction(s) based on the given historical interactions within a session, or predict the future session (e.g., the next basket) based on the historical sessions, which mainly depends on the intra- or inter-session dependencies. In comparison, SRSs predict the following elements of a sequence given the historical elements in the sequence, which mainly relies on the sequential or temporal dependencies over the elements inside each sequence~\cite{wang2021survey,wang2019learning}. \iffalse \subsubsection{Sequential Recommender Systems vs.
Session-based Recommender Systems} This part will compare SRS and SBRS from multiple perspectives including the input, output and work mechanism. \fi \iffalse \subsubsection{Research Landscape of Sequential/Session-based Recommender Systems} This part will provide a high-level bird's-eye view of the research progress in SRS and SBRS respectively. \fi \subsection{Sequential/Session-based Recommendation Problem Statement} \label{sec:problem} There are five entities (namely User, Item, Action, Interaction and Sequence/Session) involved in sequential/session-based recommendation scenarios, and they constitute the foundations for defining the sequential/session-based recommendation research problems. We first provide a brief introduction to each of them and then define the sequential/session-based recommendation research problem. \begin{itemize}[leftmargin=*] \item{User and User Properties.} \item{Item and Item Properties.} \item{Action and Action Properties.} Actions refer to users' actions on items, such as clicks, views, and purchases. \item{Interaction and Interaction Properties.} An interaction is a triplet of $\left \langle user, action, item \right \rangle$. \item{Sequence/Session and Their Properties.} A sequence or a session is a set of interactions and can be characterised by a set of properties including sequence/session length, internal order, action type (purchase or click, etc.), user information availability and data structure. A detailed illustration was provided by Wang et al.~\cite{wang2021survey}. \end{itemize} \subsubsection*{Problem Statement} Sequential/session-based recommendation is often formalized as a next-item or next-basket prediction problem. Specifically, given a user's historical interaction information, an RS is built to predict the user's future interactions, such as her/his next item, next basket to be purchased, or next POI to be visited.
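The entities and the prediction task above can be sketched in a few lines of code. The following Python snippet is an illustrative sketch only; the class and field names are our own and do not come from any specific SRS/SBRS library.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Interaction:
    """An interaction is a triplet <user, action, item>."""
    user: Optional[str]  # None for anonymous sessions
    action: str          # e.g., "click", "view", "purchase"
    item: str            # item ID

@dataclass
class Session:
    """A set of interactions with a clear boundary; may be ordered or unordered."""
    interactions: List[Interaction] = field(default_factory=list)
    ordered: bool = False

# A sequence is a list of elements (e.g., item IDs) in chronological order.
sequence = ["i1", "i7", "i3", "i7"]

# Next-item prediction: given the historical elements, predict the next one.
history, target = sequence[:-1], sequence[-1]
print(history, target)  # ['i1', 'i7', 'i3'] i7
```

A next-basket variant would simply replace the single `target` item with a set of items, while the anonymous-session case corresponds to `user=None` in the triplet.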
The main work mechanism is to first accurately learn the complex dependencies embedded in users' interaction behaviours and then employ the learned dependencies as a signal to guide the subsequent prediction. So the main challenge/task lies in dependency learning. \section{Data Characteristics and Challenges} \label{sec:datachar} This section will introduce the unique characteristics of data used for SRSs/SBRSs and the special challenges they bring to SRSs/SBRSs from five dimensions: (1) sequence/session length, (2) the internal order within sequences/sessions, (3) the type of actions within sequences/sessions, (4) user information, and (5) sequence/session data structure. The outline of this section is listed below. \begin{itemize}[leftmargin=*] \item{Characteristics and Challenges Related to Sequence/Session Length.} According to its length, a sequence/session can be classified as long or short, each posing different challenges. For instance, the challenges for long sequences/sessions lie in how to learn long-range and/or high-order dependencies, while those for short ones lie in how to learn enough dependency information from limited interactions. \item{Characteristics and Challenges Related to Internal Order.} For sequences, there is a clear order within each sequence, while sessions may be ordered, unordered or flexibly ordered. Sessions with different types of orders often pose different challenges when making recommendations based on them. \item{Characteristics and Challenges Related to Action Type.} The action can be a purchase, click, view, or add-to-cart. A sequence/session can be based on any one of them or a combination of them, leading to single-action-type and multi-action-type sequences/sessions. The main challenges for multi-action-type sequences/sessions lie in how to effectively learn the intra- and inter-action-type dependencies.
\item{Characteristics and Challenges Related to User Information.} Sequences/sessions can be divided into anonymous ones and non-anonymous ones. Usually, anonymous ones are more challenging for dependency learning. \item{Characteristics and Challenges Related to Data Structure.} Session data can be divided into single-level sessions and multi-level sessions according to the number of structure levels involved~\cite{wang2021survey}. \end{itemize} \section{SRS/SBRS Approaches} \label{sec:taxonomy} This section first describes the classification taxonomy of SRS/SBRS approaches, and then comprehensively compares them. \subsection{A Classification of Approaches} As shown in Figure~\ref{fig_class}, the approaches are classified into three classes: (1) conventional approaches, (2) latent representation-based approaches, and (3) deep neural network-based approaches. The first class includes four sub-classes: pattern/rule mining-based approaches, K nearest neighbor approaches, Markov chain approaches, and generative probabilistic approaches. The second class includes two sub-classes: latent factor model-based approaches and distributed representation-based approaches. The third class includes two sub-classes: basic deep neural network (i.e., RNN, MLP, CNN, GNN) based approaches and advanced model (i.e., attention mechanism, memory network, mixture model, generative model, reinforcement learning, contrastive learning, meta learning) based approaches~\cite{wang2021survey,wang2019sequential}. \subsection{Conventional Approaches} \label{sec:conv_approaches} Conventional approaches utilize conventional data mining or machine learning methods to build SRSs/SBRSs~\cite{wang2021survey}. This section will introduce four classes of conventional approaches for SRSs/SBRSs, and then compare them. The outline of this part is listed below.
\begin{itemize}[leftmargin=*] \item{Pattern/Rule Mining based Approaches.} \item{K Nearest Neighbour based Approaches.} \item{Markov Chain based Approaches.} \item{Generative Probabilistic Model based Approaches.} \end{itemize} \subsection{Latent Representation Approaches for SRSs/SBRSs} \label{sec:latent_rep} Latent representation approaches first learn a low-dimensional latent representation for each interaction from sequences or sessions (usually with shallow models) and then employ the learned representation as the input to the subsequent recommendation task. This section will first introduce two classes of latent representation approaches, followed by a comparison between them. \begin{itemize}[leftmargin=*] \item{Latent Factor Model based Approaches.} \item{Distributed Representation based Approaches.} \end{itemize} \subsection{Deep Neural Network Approaches for SRSs/SBRSs} \label{sec:dnn_approaches} Deep neural network approaches mainly take advantage of the powerful capabilities of deep neural models in learning the complex dependencies within or between sequences/sessions for recommendations~\cite{wang2021survey}, and they can be roughly classified into basic approaches and advanced approaches. Each basic deep neural approach is built on a basic deep neural network (e.g., RNN), while each advanced approach is based on one or more advanced neural models (e.g., the attention model). This section will first introduce the four classes of basic approaches and seven classes of advanced approaches and then compare them. The outline of this section is listed below.
\subsubsection{Basic Deep Neural Network based Approaches} \begin{itemize}[leftmargin=*] \item{Recurrent Neural Networks (RNN) based Approaches} \item{Multilayer Perceptron (MLP) based Approaches} \item{Convolutional Neural Networks (CNN) based Approaches} \item{Graph Neural Networks (GNN) based Approaches} \end{itemize} \subsubsection{Advanced Model based Approaches} \begin{itemize}[leftmargin=*] \item{Attention Model based Approaches} \item{Memory Networks based Approaches} \item{Mixture Model based Approaches} \item{Generative Model based Approaches} \item{Reinforcement Learning (RL) based Approaches} \item{Contrastive Learning based Approaches} \item{Meta Learning based Approaches} \end{itemize} \subsection{A Comparison of Different Classes of Approaches} This part will provide comparisons between different classes of approaches from multiple perspectives, including the work mechanism of each class, the learned dependency types, and the research trend. Due to the limited space, please refer to~\cite{wang2019sequential,wang2021survey} for detailed comparisons. \section{SRS/SBRS Applications and Algorithms} \label{sec:app_alg_datasets} This section will first demonstrate the real-world applications of SRSs and SBRSs, including applications in both conventional and emerging domains~\cite{wang2021survey}. Then, we will summarize 24 representative and state-of-the-art open-source algorithms for building SRSs and SBRSs and 13 commonly used public real-world datasets for testing the performance of these algorithms; the details can be found in~\cite{wang2021survey}.
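To make the conventional class of approaches concrete, the snippet below sketches a first-order Markov chain next-item recommender. It is our own toy illustration under the first-order Markov assumption, not one of the surveyed open-source algorithms; the class and method names are hypothetical.

```python
from collections import defaultdict, Counter

class FirstOrderMarkovRecommender:
    """Estimates P(next_item | current_item) from training sequences."""

    def __init__(self):
        # transitions[current_item][next_item] = observed co-occurrence count
        self.transitions = defaultdict(Counter)

    def fit(self, sequences):
        for seq in sequences:
            for cur, nxt in zip(seq, seq[1:]):
                self.transitions[cur][nxt] += 1

    def recommend(self, history, k=3):
        # Under a first-order Markov assumption, only the last element matters.
        last = history[-1]
        return [item for item, _ in self.transitions[last].most_common(k)]

train = [["a", "b", "c"], ["a", "b", "d"], ["x", "b", "c"]]
model = FirstOrderMarkovRecommender()
model.fit(train)
print(model.recommend(["z", "b"], k=2))  # ['c', 'd']
```

Because only the most recent interaction determines the prediction, such models struggle with exactly the long-range and high-order dependencies discussed in the data-characteristics section, which is what motivates the latent representation and deep neural network classes.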
\subsection{Applications} \textbf{Conventional applications:} \begin{itemize}[leftmargin=*] \item{E-commerce domain: Next-item/basket recommendation.} \item{Media and entertainment domain: Next news/web-page/song/movie/video recommendation.} \item{Tourism domain: Next-POI recommendation.} \end{itemize} \noindent\textbf{Emerging applications:} \begin{itemize}[leftmargin=*] \item{Finance domain: Next-trading/investment recommendation.} \item{Healthcare domain: Next-treatment/medicine recommendation.} \end{itemize} \section{Prospects and Future Directions} \label{sec:prospects_future} This section will outline the following eight promising research directions in the areas of SRSs and SBRSs. \begin{itemize}[leftmargin=*] \item{SRSs/SBRSs with General User Preference} \item{SRSs/SBRSs Considering More Contextual Factors} \item{SRSs/SBRSs with Cross-domain Information} \item{SRSs/SBRSs by Considering More User Behaviour Patterns} \item{SRSs/SBRSs with Constraints} \item{Interactive SRSs/SBRSs} \item{Online or Streaming SRSs/SBRSs} \item{Trustworthy and Responsible SRSs/SBRSs} \end{itemize} \begin{acks} This work was supported by ARC Discovery Project DP200101441. \end{acks} \iffalse analyze data, challenges, and business needs in advanced recommendation problems, and take a non-IID perspective to introduce recent advances in machine learning to model the 3C-based next-generation RSs. This includes an overall RS evolution and non-IIDness in recommendation, advanced machine learning for cross-domain RS, social RS, multimodal RS, multi-criteria RS, context-aware RS, and group-based RS, and their integration in building real-life RS. \subsection*{Background of Recommender Systems} In the beginning, we will introduce the background of RSs. \begin{itemize} \item We are leaving the ``Information Age'' and entering the ``Recommendation Age''.
\item E-commerce companies like Amazon and Alibaba, online social media like Facebook and Weibo, and Internet service companies like Google and Baidu have paid much more attention to AI research. \item With the rapid development of AI technology, RSs are the most straightforward AI application to improve user experience and make more business profit. \end{itemize} \subsubsection*{Brief Review on Classic Recommender Systems} We introduce some classic recommendation techniques and their typical problems. \begin{itemize} \item Collaborative filtering (CF) \begin{itemize} \item Memory-based CF and model-based CF. \item Application cases: Item-based CF in Amazon.com is a classic example. Facebook, MySpace, and LinkedIn use CF to recommend new friends, groups, and other social connections. \item The problems in CF: (1) cold start, (2) scalability, and (3) sparsity. \end{itemize} \item Content-based filtering (CBF) \begin{itemize} \item CBF is based on a description of the item and a profile of the user’s preferences. \item Application cases: Rotten Tomatoes and the Internet Movie Database use CBF to recommend movies. \item The problems in CBF: (1) limited content analysis, (2) over-specialization, and (3) new user. \end{itemize} \item The following challenges cannot be easily handled by classic RSs: (1) heterogeneity, (2) vulnerability, (3) social influence, (4) multi-type data integration, and (5) context awareness. \end{itemize} \subsubsection*{Brief Review on Machine Learning Methods for Recommender Systems} Before presenting advanced machine learning techniques to build RSs, we give some preliminaries in this section to make it easier for the audience to follow. \begin{itemize} \item Latent factor models are the most prevalent approach in RSs. In particular, matrix factorization and tensor factorization are presented as typical examples. \item Deep learning is the most effective approach to handle most types of data, e.g., images, text, and video.
\begin{itemize} \item Autoencoders to generate high-level representations from raw attributes \item Convolutional neural networks to model spatial coupling \item Recurrent neural networks to model temporal coupling \end{itemize} \item Language models such as Word2vec, Transformers, BERT, and GPT are the most popular methods to learn representations from text, which are helpful for dealing with textual and sequential data. \item Transfer learning aims to share and transfer knowledge learned from different data types and different data sources. Therefore, we will present some transfer learning methods and some specific areas including domain adaptation and multi-task learning. \end{itemize} \subsubsection*{Brief Review on Classic Applications of Recommender Systems} Before presenting the advanced applications of RSs, we briefly review and summarize the classic application cases and domains of RSs in this section. \begin{itemize} \item Item recommendation in e-commerce. The items (e.g., a book, a cup) are recommended to the end-users based on their explicit feedback (e.g., users' ratings on items) or implicit feedback (e.g., click, view) on items shown in the online shopping platforms (e.g., eBay, Amazon). This is the most popular application case of recommender systems. \begin{itemize} \item Explicit user-item interaction (e.g., ratings) prediction in online e-commerce is one of the most representative applications of recommender systems. It is usually conducted in the form of user-item rating matrix completion. \item Implicit user-item interaction (e.g., click, purchase) prediction in online e-commerce is another representative application of recommender systems. It often aims to predict the probability that a user will click or purchase an item during one of his/her shopping events.
\end{itemize} \end{itemize} \subsubsection*{Preliminary -- Multi-Domain Recommendation} In this section, we will analyze the theoretical root cause, necessity, and challenges in multi-domain recommendation and present the principles to deal with them. \begin{itemize} \item In the big data era, it is natural to integrate multi-domain information to profile users' preferences and model items' features. Principally, there are two critical points in designing multi-domain RSs, i.e., heterogeneity and coupling. \begin{itemize} \item Heterogeneity in various aspects: (1) the heterogeneity over users, (2) the heterogeneity over items, (3) the heterogeneity over user/item attributes, (4) the heterogeneity of data types, and (5) the heterogeneity over user-item interactions. \item Coupling in various aspects: (1) the coupling between users, (2) the coupling between items, (3) the coupling between different attributes, and (4) the coupling between different data types. \end{itemize} \item To apply the guidelines presented in this section, we give more specific instances to show how to couple heterogeneous data and learn enhanced information. \end{itemize} \subsection*{Learning Complementary Knowledge} In this section, we demonstrate how to (1) couple data from multiple domains, and (2) leverage social relations to obtain complementary information to deal with sparsity and cold start. In particular, we focus on introducing two typical types of RSs: (1) cross-domain RSs and (2) social RSs.
\begin{itemize} \item Cross-domain recommender systems mainly aim to learn the complementary information across multiple domains. In this tutorial, we will present the following key points: \begin{itemize} \item The main issues and challenges in cross-domain RSs \item Heterogeneity and coupling in cross-domain problems \item What, when, and how to transfer complementary information between domains \item State-of-the-art transfer learning methods for modeling cross-domain RSs \begin{itemize} \item Latent factor model based methods \item Deep learning based methods \item Other methods \end{itemize} \item Real-world cases demonstration \end{itemize} \item Social recommender systems mainly aim to learn the complementary information for related people based on social relations. In this tutorial, we will present the following key points: \begin{itemize} \item Social influence in recommendation \item Reputation and trust in social recommendation \item Learning to incorporate social information \begin{itemize} \item Social regularization based models \item Coupling users using a deep learning approach \end{itemize} \item Real-world cases demonstration \end{itemize} \end{itemize} \subsection*{Learning Compatible Knowledge} Traditional single-data and single-objective RSs have a number of limitations. In this section, we present multimodal and multi-criteria RSs to learn more comprehensive information. \begin{itemize} \item Multimodal recommender systems \begin{itemize} \item The limitations of classic RSs built on simple data, e.g., ratings. \item The need for building multimodal RSs to learn comprehensive information from multiple types of data, including ratings, comments, and images. \item Multimodal learning methods \begin{itemize} \item Modeling multimodal data with deep models \item Linking two types of data with an encoder-decoder.
\end{itemize} \item Real-world cases demonstration \end{itemize} \item Multi-criteria recommender systems \begin{itemize} \item Most current RSs only optimize accuracy, but other factors are also important in real-world cases. \item Utility function over multi-criteria ratings \item Machine learning methods for multi-criteria RSs \begin{itemize} \item Integrating multi-criteria objectives by regularization \item Probabilistic modeling and tensor factorization \item Multi-task learning for multi-criteria objectives \item Related work: multi-criteria optimization \end{itemize} \item Real-world cases demonstration \end{itemize} \end{itemize} \subsection*{Learning Contextual Knowledge} AI systems are expected to be as aware of the surrounding contextual information as humans are. Obviously, a recommendation is very sensitive to the context, such as time, place, and the company of other people. In this section, we will present three types of RSs built on contextual information: (1) context-aware RSs, (2) group-based RSs, and (3) spatial-temporal RSs.
\begin{itemize} \item Context-aware recommender systems \begin{itemize} \item What context is and how to obtain contextual information \item Paradigms for incorporating context \item Machine learning methods to model and represent contextual information \begin{itemize} \item Multidimensional/tensor models for representing high-order interactions \item Deep and wide networks for learning high-level representations of context \end{itemize} \item Case study for modeling typical context-aware RSs \begin{itemize} \item Session-based recommender systems \end{itemize} \end{itemize} \item Group-based recommender systems \begin{itemize} \item The challenges in group-based recommendation \item How to represent group preference over all members with heterogeneous tastes \item Machine learning methods for modeling group-based recommendation \begin{itemize} \item Mixture model for group context representation \item Learning group preference in terms of deep models \item Weighting members in terms of a word embedding model with an attention mechanism \end{itemize} \item Real-world case demonstration \end{itemize} \item Spatial-temporal recommender systems \begin{itemize} \item The challenges in spatial-temporal recommendation \item How to represent and integrate spatial and temporal information \item Machine learning methods for modeling spatial-temporal recommendation \begin{itemize} \item Mixture model for spatial-temporal context representation \item Learning spatial-temporal information in terms of deep models \item Weighting members in terms of a word embedding model with an attention mechanism \end{itemize} \item Real-world case demonstration \begin{itemize} \item Point-of-interest recommendation \end{itemize} \end{itemize} \end{itemize} \subsection*{Advanced Multi-Domain Recommender Systems in Practice} In this section, we present a series of advanced and novel applications of RSs, which go far beyond the aforementioned classic applications.
These advanced applications have been developing rapidly in recent years and have played an important role in making our daily life more intelligent, convenient, and efficient. \subsubsection*{RS in FashionAI} \begin{itemize} \item The background, formalization, and characteristics of fashion recommendations. \item The challenges in fashion recommendations. \item The representative solutions and models for fashion recommendations. \item The typical real-world application cases of fashion recommendations. \end{itemize} \subsubsection*{RS in FinTech} \begin{itemize} \item The background, formalization, and characteristics of recommendations in FinTech. \item The challenges of recommendations in FinTech. \item The representative solutions and models for FinTech recommendations. \item The typical real-world application cases of FinTech recommendations. \end{itemize} \subsubsection*{RS in Healthcare} \begin{itemize} \item The background, formalization, and characteristics of recommendations in healthcare. \item The challenges of recommendations in healthcare. \item The representative solutions and models for healthcare recommendations. \item The typical real-world application cases of healthcare recommendations. \end{itemize} \subsubsection*{RS in Point-Of-Interest} \begin{itemize} \item The background, formalization, and characteristics of recommendations in POI. \item The challenges of recommendations in POI. \item The representative solutions and models for POI recommendations. \item The typical real-world application cases of POI recommendations. \end{itemize} Point-of-interest (POI) recommendation arises in location-based services (LBS). POI recommender systems are desired to make use of the rich information (social relationships, check-in history, and so on) to mine users’ preferences on locations and recommend new places that users may be interested in.
\subsubsection*{RS in Multimedia} \begin{itemize} \item The background, formalization, and characteristics of recommendations in multimedia. \item The challenges of recommendations in multimedia. \item The representative solutions and models for multimedia recommendations. \item The typical real-world application cases of multimedia recommendations. \end{itemize} Multimedia recommendations mainly refer to movie, music, news, or video recommendation in the entertainment domain. The corresponding recommender systems are designed to predict users' preference over certain movies or songs according to the users' explicit feedback, or to predict the next movie/song/news/video that would likely be consumed by the user given his/her historical interactions. \section{Supporting Materials} To provide the audience with a better understanding of the presented tutorial, we will provide the following supporting materials: \begin{itemize} \item Slides (PPT or PDF) of the tutorial \item Video recording of the tutorial presentation \item Detailed reference papers included in the slides \item Links to publicly available datasets mentioned in the tutorial \end{itemize} \fi \normalem \bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:introduction}} \IEEEPARstart{A} network is a set of inter-connected items with a powerful mathematical basis for modeling many real-world systems, such as the Internet, the World Wide Web, and transportation networks \cite{newman2003structure}. Detection of communities as the main hidden structures of networks has attracted the interest of researchers since the early stages of the appearance of network science \cite{fortunato2016community}. A community in a network is defined as a set of nodes with intense intra-community connections and sparse inter-community links. Nodes within the same community are likely to share common properties and perform similar actions \cite{coscia_classification_2011}. The role of community structures as the functional modules of networks has been exploited in a wide range of fields, including spammer identification in online social networks \cite{bhat2013}, image clustering \cite{okuda2017}, and detection of dense modules of neural units \cite{garcia_applications_2018}.\par Much effort has been devoted to various aspects of and assumptions about communities, focusing primarily on the connectivity information. Algorithms to discover non-overlapping communities generally aim at partitioning the network into sub-networks that are densely connected internally while weakly connected externally. Examples of such algorithms are graph partitioning \cite{girvan2002community}, the hierarchical agglomeration algorithm \cite{clauset2004finding}, optimization based methods \cite{blondel2008fast}, and many variants of the spectral clustering method \cite{qin2013regularized}.
On the other hand, several methods have been proposed to discover overlapping communities, such as mixed membership stochastic block-models \cite{airoldi2008mixed}, the map equation framework based on probabilistic flows (\emph{InfoMap}) \cite{rosvall_maps_2008}, label propagation (\emph{Fast-Greedy}) \cite{gregory_finding_2010}, nonnegative matrix factorization (\emph{BigClam}) \cite{yang2013overlapping}, modularity based optimization (\emph{COMBO}) \cite{sobolevsky_general_2014}, tracking the evolution of online social networks \cite{bhat2015}, neighborhood seed expansion \cite{whang2016}, a unified approach to the detection of general community structures \cite{hajiabadi_iedc2017}, and asymmetric triangle cuts \cite{rezvani2018}. \par Indeed, community detection can be considered an ill-posed, hard unsupervised learning problem. There are node (or edge) features that can be effectively used to provide better community structures \cite{peelGroundTruthMetadata2017a, chakraborty2018}. On the one hand, it is known that a significant correlation exists between community structure and node features, hereafter simply called \emph{``features''}, in a variety of real networks \cite{fortunato2016community}. On the other hand, most of the well-known approaches are based on only one of these two information sources. Exploiting features in the community detection process could yield better results. Moreover, it has been shown that there is a strong dependency between communities and features in some real networks \cite{newman2016structure}. Recently, some researchers have addressed community detection using the network structure coupled with features, such as a single-assignment clustering heuristic \cite{ruan2013efficient}, topic derived models \cite{sun2012relation}, a generative model (\emph{CESNA}) \cite{yang2013community}, Bayesian Graph Clustering (\emph{BAGC}) \cite{xu2012model}, and an Expectation-Maximization (EM) approach \cite{newman2016structure}.
However, most of these approaches are somewhat sensitive to the correctness of the model specification. \par In this paper, we propose a novel graphical model to find communities through a probabilistic approach. The proposed model provides the level of correlation between communities and features, which can be used to select suitable divisions of the network as well as the appropriate features. The summary of our contributions in this work is as follows: \begin{itemize} \item We propose a graphical model to capture the relation between communities and features \item We investigate the correlation between community structures and features based on the learned model \item We extract communities for a general class of networks by exploiting features \item We introduce a novel approach to the influence of features on community structures \end{itemize} \par The paper is organized as follows. Section \ref{sec2} presents the related works and the motivation of the proposed approach. Section \ref{sec3} introduces the elements of the proposed model. Section \ref{sec4} describes the statistical learning of the model parameters. Section \ref{sec5} presents the experimental results on benchmark real-world network datasets. Section \ref{sec6} presents a case study to illustrate the proposed approach. Section \ref{sec7} concludes the paper with suggestions for further work in this field. \section{Related works and motivation} \label{sec2} \subsection{Related works} The role of nodal features in different aspects of network modeling has been considered in a variety of works, such as missing node prediction via non-parametric Bayesian inference \cite{hric2016}, finding k-truss subgraphs with the aid of features \cite{huang2017}, and a network approach to topic modeling \cite{gerlach2018}. \par For community detection with nodal features, there are generally two types of techniques: model-free methods and generative models.
Analogous to structure-based algorithms that detect communities via optimality criteria, such as modularity based methods \cite{newman2006modularity, blondel2008fast, chen2014} and label propagation \cite{lu2019}, several model-free methods have been proposed to exploit features, including structure mining \cite{silva2012mining}, simulated annealing \cite{cheng2011clustering}, the Joint Community Detection Criterion (\emph{JCDC}) \cite{zhang_community_2016}, Semidefinite Programming (\emph{SDP}) \cite{yan2016convex}, and Covariance Assisted Spectral Clustering (\emph{CASC}) \cite{binkiewicz_covariate-assisted_2017}. Most methods in this category exploit all features in the same way, without considering the relationship between them and the communities. \par Generative models were initially introduced for connectivity based community detection, including the affiliation graph model \cite{yang2012a}, matrix factorization (\emph{BigClam}) \cite{yang2013overlapping}, Bayesian community detection \cite{pas2018}, and a nonparametric probabilistic model based on random walks \cite{zhu2019}. Feature based generative models for community extraction have been proposed in works such as topic modeling \cite{sun2012relation}, \emph{CESNA} \cite{yang2013community}, and the stochastic block model \cite{newman2016structure}. In \cite{yang2013community}, a generative model was introduced that considers only the influence of community structures on features. The stochastic block model is modified in \cite{newman2016structure} to align with the features and reveal the efficacy of each feature on community structures by employing an Expectation-Maximization inference stage. On the one hand, most of the model-free feature based methods suffer from dependency on multiple tuning parameters, such as \emph{JCDC} \cite{zhang_community_2016} and \emph{CASC} \cite{binkiewicz_covariate-assisted_2017}.
On the other hand, the generative feature based models for extraction of communities have some problems, including sensitivity to the presumed graphical representation of the features and communities (\emph{CESNA} \cite{yang2013community}) and modeling the correlation of only a single feature with the community structure at a time \cite{newman2016structure}. \subsection{Motivation} In general, there are two paradigms for how features affect the community structure: \textbf{(i)} the assortative features, like age, sexuality, race, and overall personal user attributes, having a significant influence on the formation of communities, and \textbf{(ii)} the community generative features, like education, living place, office location, and users' interests in social networks, which are induced by the community structure. \begin{figure}[t!] \centering \begin{minipage}{0.25\textwidth} \centering \includegraphics[width=0.5\linewidth]{Weddela.png} \subcaption{} \label{SubFig:Weddell} \end{minipage}% \begin{minipage}{0.25\textwidth} \centering \includegraphics[width=0.5\linewidth]{Trade.png} \subcaption{} \label{SubFig:WorldTrade} \end{minipage} \caption{The influence of features on communities in two real networks. \textbf{(a)}: \emph{Weddell Sea} with the feature ``Environment'', which has two categories, ``Pelagic'' (blue) and ``Benthic'' (red). \textbf{(b)}: \emph{World Trade} with the feature ``continent'', where each node (country) belongs to ``Asia'' (blue) or ``Europe'' (red).} \label{fig:FeatImpaceComm} \end{figure} To clarify the effect of features on the formation of community structure, we consider two real-world networks. Figure \ref{fig:FeatImpaceComm} (a) shows a snapshot of the \emph{Predator-Prey} network, where each node represents a unique marine creature of the Weddell Sea and the color of each node shows its living environment, ``Pelagic'' or ``Benthic'' \cite{jacob2011role}.
Figure \ref{fig:FeatImpaceComm} (b) shows a snapshot of the \emph{World Trade} network, where each node (country) belongs to ``Asia'' or ``Europe'' \cite{de2011exploratory}. As can be seen, the features provide useful insight for discovering the more likely community structures in these case studies. Our primary aim in this work is to study how to extract communities based on two different types of features, assortative and generative, which has not been considered in earlier works. We propose a principled graphical model via the ``division of features into assortative and generative features'' to construct a general approach to the community detection problem. It is assumed that each community can be formed through the causal effect of assortative features, while the \emph{community generative} features are influenced by the community structure. The community generative features are called \emph{generative features} for simplicity. \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{SmallPic.pdf} \caption{The connection between features and community structure, where \emph{gender} and \emph{ethnicity} are \emph{assortative features} with a causal impact on the community structure, and \emph{football} and \emph{education} are generative features which are generated by the community structure.} \label{fig:SmallScheme} \end{figure} The causal relationship is represented in a graphical model that encodes the main elements of community formation through two main sources: the connectivity structure and the features. The schematic representation of the features and their relations to the community structures is depicted in Figure \ref{fig:SmallScheme}. The proposed approach is designed based on the different types of features in the probabilistic generative model. Moreover, it can also model the joint features altogether, unlike most earlier ad-hoc methods \cite{newman2016structure, zhang_community_2016}.
The parameters of the proposed model are learned through a likelihood based approach. Furthermore, the dependency of features on the community structure is computed during the learning phase, which can be applied to infer the main ingredients of the construction of network communities. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{figure_general.pdf} \caption{\label{fig:SchemePaper} {The flow-graph of the proposed method}} \end{figure} \section{Elements of the proposed approach} \label{sec3} Let us consider a network represented as $G = (V, E)$, where $V$ and $E$ denote the nodes and edges of graph $G$. Furthermore, we have a set of features on nodes, $Features = S\cup F$, which comprises two disjoint subsets, $S$ for ``assortative features'' and $F$ for ``generative features'', based on the contextual information of the dataset. The assortative features represent personal attributes like race and age, and the generative features represent features like education, sports, and users' interests; see Figure \ref{fig:SmallScheme} for further details. \par The proposed approach consists of three main steps, which are depicted in Figure \ref{fig:SchemePaper}. Initially, the features are divided into assortative and generative. The first main step is to measure the dependency of the features towards each community. In this step, the influence of the assortative and generative features on the community formation is computed. The relationship between communities is the second component of the proposed algorithm. The next important step is computing the probability membership function for each node, which is performed by statistical learning of the main parameters and iteratively updating the initial membership function to obtain the final one.
Output of the algorithm is twofold: the dependency weights of the features on the community structures, which are informative for interpreting the attained results, and the community structure of the network. \subsection{Description of graphical model} Here, we describe the details of the proposed community detection model. The proposed model provides a generative framework among the main factors of a graph structure to detect the community structures in a network. The graphical model is constructed based on the main factors, namely the community membership $M$, the community interaction matrix $\beta$, the assortative features $S$, the generative features $F$, the influence of assortative features on communities denoted by $I$, and the interaction of communities with features denoted by $W$. \begin{figure}[t!] \centering \begin{minipage}{0.50\textwidth} \centering \includegraphics[width=.4\linewidth]{ProposedPGM.pdf} \end{minipage}% \\ \begin{minipage}{0.50\textwidth} \centering \small \resizebox{1\textwidth}{!}{ \begin{tabular}{lcc} \toprule[1.5pt] Node & Type & Description\\ \midrule $M$ & Hidden Variable & Membership function of each node\\ $I$ & Weight Factor & Correlation level between communities and assortative features\\ $W$ & Weight Factor & Correlation level between communities and generative features \\ $\beta$ & Weight Factor & Level of interactions between communities \\ $G$ & Observation & Input network \\ $F$ & Observation & Generative features\\ $S$ & Observation & Assortative features\\ \bottomrule[1.5pt] \end{tabular} } \end{minipage} \caption{The description of the main elements of the proposed methodology in a graphical model} \label{fig:ProposedMehodologyGraphicalModel} \end{figure} Figure \ref{fig:ProposedMehodologyGraphicalModel} represents the graph structure of the proposed model.
\par The probability of creating an edge between a pair of nodes is directly related to their communities and the interaction levels between those communities, based on a probabilistic generative approach. In particular, it is assumed that two nodes $u$ and $v$ are connected with the following probability, \begin{equation} P ((u,v)\in E) = 1 - \exp(-M_u^T \beta M_v) \label{eq:EdgeProbability} \end{equation} where $M_u$ and $M_v$ are the non-negative membership functions of nodes $u$ and $v$ toward each community and $\beta$ represents the probability matrix of interactions between different communities. If nodes $u$ and $v$ share more communities, or belong to communities $c_i$ and $c_j$ with a high level of interaction ($\beta_{c_i,c_j}$) between them, their tendency to establish an edge increases. Hence, nodes $u$ and $v$ do not share a connection with the following probability, \begin{equation} P ((u,v)\notin E) = 1 - P((u,v) \in E) = \exp(-M_u^T \beta M_v) \label{eq:EdgeNotProbability} \end{equation} Based on this generative probabilistic process, the connection between any pair of nodes $(u,v)$ is independently distributed as a Bernoulli random variable.
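As a numerical illustration of Eq. \eqref{eq:EdgeProbability}, the following sketch uses hypothetical membership vectors and a hypothetical interaction matrix (chosen only for demonstration) to show that two nodes concentrated in the same community obtain a larger connection probability than nodes in weakly interacting communities.

```python
import numpy as np

def edge_probability(M_u, M_v, beta):
    """P((u,v) in E) = 1 - exp(-M_u^T beta M_v)."""
    return 1.0 - np.exp(-M_u @ beta @ M_v)

# Hypothetical setup: two communities with strong within-community interaction.
beta = np.array([[1.5, 0.1],
                 [0.1, 1.5]])
M_u = np.array([1.0, 0.0])   # node u mostly in community 1
M_v = np.array([0.9, 0.1])   # node v mostly in community 1
M_w = np.array([0.0, 1.0])   # node w in community 2

p_uv = edge_probability(M_u, M_v, beta)  # same-community pair
p_uw = edge_probability(M_u, M_w, beta)  # cross-community pair
```

Here the same-community pair $(u,v)$ receives a markedly higher edge probability than the cross-community pair $(u,w)$, consistent with the role of $\beta$ described above.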
Therefore, each element $A_{uv}\in \{0,1\}$ of the adjacency matrix is generated according to the following generative approach, \begin{IEEEeqnarray}{rcl} P_{uv} &=& 1 - \exp(-M_u^T \beta M_v) \nonumber\\ A_{uv} & \sim & Bernoulli(P_{uv}) \end{IEEEeqnarray} The graphical model in Figure \ref{fig:ProposedMehodologyGraphicalModel} indicates that the generative features $F$ are conditionally dependent on the variables $M$ and $W$, which is assumed to be parametrized through the following sigmoid probabilistic function, \begin{equation} P(F_{uk} = 1) = \frac{1}{1 + \exp(-\sum_{i=1}^{K}M_{uc_i}W_{kc_i})} \label{eq:FeatProbability} \end{equation} where $F_{uk} = 1$ denotes that node $u$ has the $k$-th feature, and $M_{uc_i}$ and $W_{kc_i}$ denote the membership of node $u$ in community $c_i$ and the interaction between the $k$-th feature and community $c_i$, respectively. In summary, we assume that $F_{uk}$ follows the Bernoulli distribution in the following way: \begin{equation} P(F_{uk} = 0) = \frac{\exp(-\sum_{i=1}^{K}M_{uc_i}W_{kc_i})}{{1 + \exp(-\sum_{i=1}^{K}M_{uc_i}W_{kc_i})}} \label{eq:FeatNotProbability} \end{equation} Furthermore, the communities are influenced by the assortative features $S$ and their weight parameters $I$ in the graph structure. In a similar way, the community membership of each node is estimated by, \begin{equation} M_{uc_i} = \frac{1}{1 + \exp(-\sum_{j\in I}I_{jc_i}S_{uj})} \label{eq:ProposedMembership} \end{equation} \section{Parameters learning} \label{sec4} Here, we consider the learning and inference stage of the proposed probabilistic model.
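The two sigmoid links above, Eq. \eqref{eq:FeatProbability} and Eq. \eqref{eq:ProposedMembership}, can be sketched numerically as follows. The feature values and weights are hypothetical, chosen only to illustrate the direction of the dependencies, not taken from the learned model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def membership_from_assortative(S_u, I_c):
    """Eq. (6): M_{u c} = sigmoid(sum_j I_{j c} S_{u j})."""
    return sigmoid(S_u @ I_c)

def feature_probability(M_u, W_k):
    """Eq. (4): P(F_{uk} = 1) = sigmoid(sum_i M_{u c_i} W_{k c_i})."""
    return sigmoid(M_u @ W_k)

# Hypothetical node with two binary assortative features.
S_u = np.array([1.0, 0.0])
I_c = np.array([2.0, -1.0])   # assortative-feature weights for community c
m_uc = membership_from_assortative(S_u, I_c)

# One generative feature positively correlated with community c.
W_k = np.array([3.0])
p_feat = feature_probability(np.array([m_uc]), W_k)
```

A positive assortative weight pushes the membership above 0.5, and in turn a strong membership combined with a positive $W$ makes the generative feature likely, matching the causal direction in the graphical model.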
The probability distribution of the observed variables $G$ and $F$ is written as, \begin{equation} \label{eq:jointdis} P(G, F| M,\beta, I, W, S) = P(G| M, \beta, I, S)P(F|M, W) \end{equation} The likelihood function is calculated based on the model configuration as follows, \begin{IEEEeqnarray}{rCl} L(\theta) &=& \prod_{(u,v)}\big(1 - \exp(-M_u^T \beta M_v)\big)^{a_{uv}}\big(\exp(-M_u^T \beta M_v)\big)^{1-a_{uv}}\IEEEnonumber\\ &\times& \prod_{u}\prod_{k\in F} \Big(\frac{1}{1 + \exp(-\sum_{c_i}M_{uc_i}W_{kc_i})}\Big)^{F_{uk}}\IEEEnonumber\\ &\times& \Big(\frac{\exp(-\sum_{c_i}M_{uc_i}W_{kc_i})}{1 + \exp(-\sum_{c_i}M_{uc_i}W_{kc_i})}\Big)^{1-F_{uk}} \label{eq:Likelihood} \end{IEEEeqnarray} \par Well-known optimization approaches cannot be directly applied to maximize the non-linear likelihood function \eqref{eq:Likelihood}, which contains the latent variables $M$ and $W$. Several approximate algorithms have been proposed to handle the difficulty of optimization problems with latent variables in machine learning, such as the expectation-maximization (EM) algorithm \cite{murphy_machine_2012}, variational inference \cite{airoldi2008mixed}, and the block coordinate approach \cite{xu_block_2013}. We employ the \emph{Block Coordinate Ascent} algorithm to find the solution of the objective function in Eq.~\eqref{eq:Likelihood}. According to the \emph{Block Coordinate Ascent} approach, updating the parameters takes place in two main steps: \textbf{(i)} updating the first block, the membership function $M$, by fixing the second block, the parameters $\beta, I, W$, and \textbf{(ii)} updating the second block of parameters by fixing the first one.
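For concreteness, the logarithm of the likelihood in Eq. \eqref{eq:Likelihood} can be evaluated numerically as below. This is a minimal sketch on a toy graph with hypothetical parameter values, not the authors' implementation, and it omits numerical-stability safeguards beyond a small epsilon.

```python
import numpy as np

def log_likelihood(A, M, beta, F, W, eps=1e-12):
    """Structure term plus generative-feature term of the log-likelihood."""
    MBM = M @ beta @ M.T                       # M_u^T beta M_v for all pairs
    p_edge = 1.0 - np.exp(-MBM)
    iu, ju = np.triu_indices(A.shape[0], k=1)  # count each unordered pair once
    struct = np.sum(A[iu, ju] * np.log(p_edge[iu, ju] + eps)
                    - (1 - A[iu, ju]) * MBM[iu, ju])
    logits = M @ W.T                           # sum_i M_{u c_i} W_{k c_i}
    # log sigma(x) = -log(1 + e^{-x}); log(1 - sigma(x)) = -x - log(1 + e^{-x})
    feat = np.sum(F * (-np.log1p(np.exp(-logits)))
                  + (1 - F) * (-logits - np.log1p(np.exp(-logits))))
    return struct + feat

A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])   # toy undirected graph
M = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
beta = np.eye(2)
F = np.array([[1.0], [1.0], [0.0]])               # one generative feature
W = np.array([[2.0, -2.0]])
ll = log_likelihood(A, M, beta, F, W)
```

The structure term and the feature term are the two factors of the factorization in Eq. \eqref{eq:jointdis}, evaluated in log space.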
The log-likelihood function $\ell(\theta)$, which is more tractable than the likelihood in \eqref{eq:Likelihood}, is employed in our calculations, \begin{IEEEeqnarray}{rCl} \ell(\theta)&=& \sum_{(u,v)}a_{uv}\log\big(1-\exp(-M_u^T\beta M_v)\big) \IEEEnonumber \\ &+& (1 - a_{uv})\mathcolor{black}{\big(-M_u^T \beta M_v\big)}\IEEEnonumber \\ & +& \sum_{u}\sum_{k\in F}F_{uk}\log \frac{1}{1 + \exp(-\sum_{c_i}M_{uc_i}W_{kc_i})} \IEEEnonumber \\ &+& (1 - F_{uk})\log \frac{\exp(-\sum_{c_i}M_{uc_i}W_{kc_i})}{1 + \exp(-\sum_{c_i}M_{uc_i}W_{kc_i})} \label{eq:Loglik} \end{IEEEeqnarray} Details of the learning stage are as follows. \subsection{Updating the parameters} The first step is to update the values of the membership function $M_u$. To do so, we derive the partial derivative of the log-likelihood function \eqref{eq:Loglik} with respect to $M_u$ as, \begin{IEEEeqnarray}{rCl} \frac{\partial\ell(M_u)}{\partial M_u} &=& \sum_{v\in N(u)} \beta M_v \frac{\exp(-M_u^T\beta M_v)}{1 - \exp(-M_u^T\beta M_v)} +\sum_{v \notin N(u)} -\beta M_v \label{eq:StdDeviationLL}\nonumber\\ &+& \sum_{k\in F}\Big(F_{uk} - \frac{1}{1 + \exp(-\sum_{c_i}M_{uc_i}W_{kc_i})}\Big)W_{kc_i} \end{IEEEeqnarray} {where the set of neighbors of $u$ is represented by $N(u)$.} {Each $M_u$ is first updated by gradient ascent using Eq.~\eqref{eq:StdDeviationLL}, and then projected onto $[0, \infty)$ to preserve the non-negativity property, \begin{equation} M_{u}(t+1) = \max\big(0, M_{u}(t) + \alpha \frac{\partial\ell(M_u)}{\partial M_u}\big) \label{eq:MetaData:FinalUpdataLatent} \end{equation} } {where $\alpha$ is the learning rate.} After updating $M_u$, the second block of parameters $(I,W,\beta)$ is updated one at a time. \par {First, the probabilistic dependencies among the parameters are exploited to simplify the calculations.} The conditional independence between the parameters $(I,W,\beta)$, given $M_u$, is derived from the proposed probabilistic model.
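The projected gradient ascent step of Eqs.~\eqref{eq:StdDeviationLL} and \eqref{eq:MetaData:FinalUpdataLatent} can be sketched as follows (an illustrative Python snippet with dense-list representations, not the paper's JAVA implementation; `neighbors` and `non_neighbors` hold the membership vectors $M_v$, `F_u` maps feature indices to $F_{uk}$, and the snippet assumes $M_u^T\beta M_v > 0$ for every neighbor so the ratio is defined):

```python
import math

def grad_M_u(M_u, neighbors, non_neighbors, F_u, beta, W):
    """Gradient of the log-likelihood with respect to M_u."""
    k_dim = len(M_u)
    g = [0.0] * k_dim
    for M_v in neighbors:  # attractive term for observed edges
        bMv = [sum(beta[c][d] * M_v[d] for d in range(k_dim)) for c in range(k_dim)]
        x = sum(M_u[c] * bMv[c] for c in range(k_dim))  # M_u^T beta M_v, assumed > 0
        w = math.exp(-x) / (1.0 - math.exp(-x))
        for c in range(k_dim):
            g[c] += bMv[c] * w
    for M_v in non_neighbors:  # repulsive term for absent edges
        for c in range(k_dim):
            g[c] -= sum(beta[c][d] * M_v[d] for d in range(k_dim))
    for k, F_uk in F_u.items():  # generative-feature term
        s = sum(M_u[c] * W[k][c] for c in range(k_dim))
        sig = 1.0 / (1.0 + math.exp(-s))
        for c in range(k_dim):
            g[c] += (F_uk - sig) * W[k][c]
    return g

def update_M_u(M_u, grad, alpha=0.001):
    """Gradient ascent step projected onto [0, inf)."""
    return [max(0.0, m + alpha * gc) for m, gc in zip(M_u, grad)]
```

The projection `max(0, ...)` is exactly the non-negativity step of Eq.~\eqref{eq:MetaData:FinalUpdataLatent}.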
Also, the structure of the graphical model implies the probabilistic dependencies of $I$ on $\beta$ and $W$. Taking into account the relation between $M$ and $I$, the chain rule gives $\frac{\partial\ell}{\partial I}$ for a node $u$ as, \begin{equation} \frac{\partial\ell}{\partial I} = \frac{\partial\ell(M_u)}{\partial M_u} \times \frac{\partial M_u}{\partial I} \label{eq:GeneralFormOfI} \end{equation} The term $\frac{\partial M_u}{\partial I}$ is obtained from Eq. \eqref{eq:ProposedMembership} by, \begin{equation} \frac{\partial M_u}{\partial I} = S_u \times \frac{\exp (-\sum_{j\in I} I_{jc_i}S_{uj} )}{[1 + \exp (-\sum_{j\in I}I_{jc_i}S_{uj} )]^2} \label{eq:DerivativeMResI} \end{equation} Eq. \eqref{eq:DerivativeMResI} and Eq. \eqref{eq:StdDeviationLL} imply the updating procedure of $I$ as, \begin{equation} I(t + 1) = I(t) + \alpha (\sum_{u \in V}\frac{\partial\ell(M_u)}{\partial M_u} \times \frac{\partial M_u}{\partial I}) \label{eq:FinalFormI} \end{equation} In the next step, the parameter $W$, which captures the correlation levels between the generative features and the communities, is updated according to the following, \begin{equation} \frac{\partial\ell}{\partial W_{kc_i}} = \sum_{u}(F_{uk} - \frac{1}{1 + \exp(-\sum_{c_i}M_{uc_i}W_{kc_i})})M_{uc_i} \label{eq:FindDerivationW} \end{equation} In a similar way, the final updating form of the parameter $W$ is, \begin{equation} W_{kc_i}(t+1) = W_{kc_i}(t) + \alpha(\frac{\partial\ell}{\partial W_{kc_i}}) \label{eq:MetaData:Updata:W} \end{equation} To update $\beta$, the derivative of the log-likelihood function with respect to $\beta$ is calculated as, \begin{IEEEeqnarray}{rCl} \frac{\partial\ell}{\partial \beta} &=& \sum_{v\in N(u)} M_u^TM_v\times \frac{\exp(-M_u^T\beta M_v)}{1 - \exp(-M_u^T\beta M_v)} \IEEEnonumber \\ & +& \sum_{v \notin N(u)} -M_u^TM_v \label{eq:FindDerivationBeta} \end{IEEEeqnarray} Eq.
\eqref{eq:FindDerivationBeta} provides the updating procedure for $\beta$ as, \begin{equation} \beta(t + 1) = \beta(t) + \alpha (\frac{\partial\ell}{\partial \beta}) \label{eq:MetaData:Updata:Beta} \end{equation} \begin{algorithm}[h] \algsetup{linenosize=\tiny} \scriptsize \caption{\small{Probabilistic Feature based Community Detection (PFCD)}} \label{alg:MetaData} \begin{algorithmic}[1] \STATE {\bfseries Input:} $G = (V,E), Features, Number\ of\ communities\ (k)$. \STATE {\bfseries Output:} $M$, the community membership of each node. \STATE {\bfseries Initialize:} Initialize the parameters. \STATE {$t \gets 0$} \WHILE {$|\ell{(t+1)}-\ell{(t)}| > threshold$} \STATE {$t \gets t + 1$} \FOR {$i = 1$ \TO $|V|$} \STATE {$DevM= findDerivationM()$ } \STATE {\bfseries Update:} $M_{v_i}(t+1)= UpdateM(DevM,M_{v_i}(t))$. \ENDFOR \FOR {$i = 1$ \TO $|S|$} \STATE {$DevI= findDerivationI()$ } \STATE {$I(t+1) = UpdateI(I(t),DevI)$ } \ENDFOR \FOR {$i = 1$ \TO $|F|$} \STATE {$DevW= findDerivationW()$ } \STATE {$W(t+1) = UpdateW(W(t),DevW)$ } \ENDFOR \FOR {$i = 1$ \TO $k$} \STATE {$Dev\beta= findDerivation\beta()$ } \STATE {$\beta(t+1) = UpdateBeta(\beta(t),Dev\beta)$ } \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{PFCD algorithm} The proposed approach, \emph{PFCD} (\textbf{P}robabilistic \textbf{F}eature based \textbf{C}ommunity \textbf{D}etection), is presented in Algorithm \ref{alg:MetaData}. The inputs of \emph{PFCD} are the network structure ($G$), the features ($S \cup F$), the number of assortative features ($|S|$), the number of generative features ($|F|$) and the number of communities ($k$). The final output is the membership function of each node. The relationships between the two types of features and the communities are two further outputs of the proposed algorithm, useful for interpretation. After initialization of the parameters, the main part of the algorithm is performed in an iterative manner.
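The parameter gradients above can be sketched in a few lines of Python (an illustrative, self-contained sketch, not the paper's JAVA implementation). `grad_W` implements Eq.~\eqref{eq:FindDerivationW} for a single entry $W_{kc_i}$, and `dMu_dI` evaluates the sigmoid derivative of Eq.~\eqref{eq:DerivativeMResI} that enters the chain-rule update of Eq.~\eqref{eq:FinalFormI}:

```python
import math

def grad_W(M, F, W, k, c):
    """dl/dW_kc = sum_u (F_uk - sigmoid(sum_ci M_u,ci * W_k,ci)) * M_u,c."""
    g = 0.0
    for u in range(len(M)):
        s = sum(M[u][ci] * W[k][ci] for ci in range(len(W[k])))
        sig = 1.0 / (1.0 + math.exp(-s))
        g += (F[u][k] - sig) * M[u][c]
    return g

def dMu_dI(S_u, I_c):
    """dM_u,c/dI_j,c = S_uj * exp(-S_u . I_c) / (1 + exp(-S_u . I_c))^2."""
    s = sum(su * ic for su, ic in zip(S_u, I_c))
    e = math.exp(-s)
    sig_prime = e / (1.0 + e) ** 2  # derivative of the sigmoid at S_u . I_c
    return [su * sig_prime for su in S_u]
```

Multiplying `dMu_dI` by $\partial\ell(M_u)/\partial M_u$ and summing over nodes yields the update of $I$ in Eq.~\eqref{eq:FinalFormI}; `grad_W` plugs directly into Eq.~\eqref{eq:MetaData:Updata:W}.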
The algorithm stops when the absolute difference between subsequent log-likelihoods of the model (Eq.~\eqref{eq:Loglik}) is less than a threshold parameter, which is set to $0.001$ in our setting. Function \emph{findDerivationM} computes the derivative of the \emph{log-likelihood} function with respect to $M$ based on Eq. \eqref{eq:StdDeviationLL}. The update of $M$ is performed by \emph{UpdateM} based on Eq. \eqref{eq:MetaData:FinalUpdataLatent}. When the membership functions of all nodes have been updated, the next step is to update the parameters $I$, $W$ and $\beta$. The function \emph{findDerivationI} calculates the derivative of the \emph{log-likelihood} function with respect to $I$ via Eqs. \eqref{eq:GeneralFormOfI} and \eqref{eq:DerivativeMResI}, and the function \emph{UpdateI} updates $I$ by Eq. \eqref{eq:FinalFormI}. $W$, which captures the correlation level between the communities and the generative features, is updated by \emph{FindDerivationW} and \emph{UpdateW} according to equations \eqref{eq:FindDerivationW} and \eqref{eq:MetaData:Updata:W}. $\beta$ is updated based on equations \eqref{eq:FindDerivationBeta} and \eqref{eq:MetaData:Updata:Beta}. The update procedure is repeated until the convergence criterion is met. \subsection{Computational Complexity} The complexity of PFCD in each iteration is linear in the number of communities, assortative features and generative features in the network. The update procedure of \emph{PFCD} consists of two parts: updating the membership value of each node toward each community, and updating the weight of each feature toward each community. The membership value is updated based on Eqs. \eqref{eq:StdDeviationLL} and \eqref{eq:MetaData:FinalUpdataLatent}. For each given node $u$, the process considers the memberships of its neighbors, $M_v , v\in N(u)$, and of its non-neighbors, $M_v, v \notin N(u)$, toward the communities. Therefore, for each given node Eq.
\ref{eq:StdDeviationLL} takes $O(|N(u)|(k^2 +k))\sim O(|N(u)|k^2)$ time (because of the multiplication $M_u^T\beta M_v$) for the neighbors and $O(|N(u)|k^2)$ for the non-neighbors. After iterating over all nodes, the time complexity is $O(|E|\times k^2)$. For each given node, the second component of Eq. \ref{eq:StdDeviationLL} takes $O(k\times |F|)$ time. As a result, iterating the second component over all nodes ends up with $O(|V|\times k \times |F|)$. The next step is the weight computation for each feature, \emph{\{I,W\}}, and the community interaction matrix \emph{$\beta$}. According to Eq. \ref{eq:DerivativeMResI}, the time complexity is dominated by the multiplication $S_u\times I_{c_i}$, which can be done in $O(|V|\times k \times |S|)$. Accordingly, updating the parameter $W$ has complexity $O(|V|\times k \times |F|)$, and the matrix \emph{$\beta$} takes $O(|E|\times k^2)$ for updating its values by Eq. \eqref{eq:MetaData:Updata:Beta}. Therefore, the proposed method has a per-iteration complexity of $\max\big(O(|E|\times k^2), O(|V|\times k \times |F|)\big)$. } \section{Experiments} \label{sec5} Synthetic and real-world networks are used to evaluate the performance of \emph{PFCD}. The efficiency of the proposed model is demonstrated against state-of-the-art community detection methods, considering both feature based and structure based approaches. Among the feature based methods, \emph{Cesna} \cite{yang2013community}, \emph{JCDC} \cite{zhang_community_2016}, \emph{NC} \cite{newman2016structure}, \emph{BAGC} \cite{xu2012model}, \emph{SDP} \cite{yan2016convex}, and \emph{CASC} \cite{binkiewicz_covariate-assisted_2017} are employed in the experiments.
Among the structure based methods, \emph{BigClam} \cite{yang2013overlapping}, \emph{Fast-Greedy} \cite{gregory_finding_2010}, \emph{Infomap} \cite{rosvall_maps_2008}, \emph{Louvain} \cite{blondel2008fast}, and \emph{COMBO} \cite{sobolevsky_general_2014} are applied in the experiments. Table~\ref{tbl:GeneralMethod} summarizes these algorithms.\par \begin{table}[t!] \centering \small \caption{Overview of the state-of-the-art algorithms} \resizebox{.5\textwidth}{!}{ \begin{tabular}{llc} \toprule[1.5pt] Methods & Description & Reference\\ \midrule \multicolumn{3}{c}{Feature based Community Detection Methods}\\ \midrule Cesna & Feature-enabled generative model & \cite{yang2013community}\\ JCDC& Joint feature based community detection criterion& \cite{zhang_community_2016}\\ NC& Modified feature based stochastic block model& \cite{newman2016structure}\\ BAGC & Bayesian graph clustering &\cite{xu2012model}\\ SDP &Semi-definite programming&\cite{yan2016convex}\\ CASC& Covariate-assisted spectral clustering & \cite{binkiewicz_covariate-assisted_2017}\\ \midrule \multicolumn{3}{c}{Structure based Community Detection Methods}\\ \midrule BigClam & Finds overlapping communities & \cite{yang2013overlapping}\\ Fast-Greedy & Fast overlapping community detection & \cite{gregory_finding_2010} \\ Infomap & Finds overlapping communities by probabilistic flows &\cite{rosvall_maps_2008}\\ Louvain & Heuristic method for detecting non-overlapping communities & \cite{blondel2008fast}\\ COMBO & Finds overlapping and non-overlapping communities & \cite{sobolevsky_general_2014}\\ \bottomrule[1.5pt] \end{tabular} } \label{tbl:GeneralMethod} \end{table} Initially, synthetic networks are used to examine the proposed approach. Then, we compare the performance of \emph{PFCD} on benchmark real networks with ground-truth communities.
{In our experiments, the ground-truth communities and the presumed number of communities are used for all of the methods to yield a fair comparison, as recommended in \cite{fortunato2016community}}. {Experiments are run on a desktop PC with 4GB memory and a Core i5 CPU, under JAVA using the JGraphT library. The source codes are provided in the supplementary information. The learning rate $\alpha$ is set to $0.001$ in the experiments. $\beta$ is initialized based on the conductance measure approach \cite{Gleich2012}. The competing algorithms are run with their default parameters.} \subsection{Evaluation criteria} \label{EvalCriteria} Two well-known evaluation criteria, the \emph{F1 score} and the \emph{NMI}, are applied to measure the accuracy of the community detection algorithms with respect to the ground-truth communities \cite{fortunato2016community}. The \emph{F1 score} is a standard evaluation measure in machine learning and community detection tasks; it is the harmonic mean of the precision and recall of the detected community members with respect to the gold-standard information. The second performance measure is \emph{NMI}, the normalized mutual information between the discovered communities and the ground-truth ones. \subsection{Synthetic networks} Synthetic networks are generated based on the degree-corrected stochastic block model \cite{zhang_community_2016}. The generation process is performed in two phases: in the first phase, the structure of the network is shaped, and in the second phase features are assigned to each node. In the first phase, a pair of nodes $(i,j)$ share an edge independently of the other pairs. The probability of an edge between any pair of nodes depends on whether they are in the same community or not. If they share a community, the probability is $\beta\theta_i \theta_j$; otherwise it is $r\beta\theta_i\theta_j$.
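For concreteness, the NMI criterion used throughout the experiments can be computed from two label vectors as below (a small self-contained Python sketch using the geometric-mean normalization; library implementations may normalize by the arithmetic mean or maximum of the entropies instead):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the same nodes."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), n_ab in joint.items():
        p_ab = n_ab / n
        # p_ab * log( p_ab / (p_a * p_b) )
        mi += p_ab * math.log(p_ab * n * n / (ca[a] * cb[b]))
    h_a = -sum((c / n) * math.log(c / n) for c in ca.values())
    h_b = -sum((c / n) * math.log(c / n) for c in cb.values())
    if h_a == 0.0 or h_b == 0.0:  # degenerate single-cluster partitions
        return 1.0 if labels_a == labels_b else 0.0
    return mi / math.sqrt(h_a * h_b)
```

NMI is 1 for identical partitions up to a relabeling of communities and 0 for statistically independent ones, which is why it is also a natural score for the feature-selection step described later.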
Parameter $\beta$ indicates the level of interaction between any pair of communities, such that a higher value of $\beta$ for a pair of communities results in more interaction between them. Parameter $r$ handles the density inside the communities, and the parameters $\theta_i, \theta_j$ control the degrees of the nodes. To avoid homogeneity in the generated networks, we set 10\% of the nodes inside each community as hubs by setting $\theta_i = 10$, with $\theta_i = 1$ for non-hub nodes. We set $\beta=0.1$ and $r = 0.25$ in generating the networks. The average degree of the resulting networks is around $31$. After shaping the structure of the network, in the second phase we generate features from the Gaussian distribution $\mathcal{N}(\mu,1)$ for nodes of the first community and $\mathcal{N}(-\mu , 1)$ for nodes of the second community. As $\mu$ increases, the features of each community become stronger. To reveal the impact of nodal features on communities, three different networks are generated with $N=\{1000, 2000, 5000\}$ by considering three different scenarios $\mu = \{2, 3.5, 5\}$ for each network. \textcolor{black}{A summary of the properties of the synthetic networks is given in Table \ref{tbl:SyntheticNetworks}.} \begin{table}[t!]
\centering \caption{Main properties of the synthetic networks.} \label{tbl:SyntheticNetworks} \begin{tabular}{lccc} \toprule[1.5pt] Network & Nodes& Edges & Communities \\ \midrule \textit{1000-2} & 1000 & 51578 & 2 \\ \textit{1000-3.5} & 1000& 56401 & 2 \\ \textit{1000-5} & 1000 & 52488 & 2 \\ \textit{2000-2} & 2000 & 177719& 2 \\ \textit{2000-3.5} & 2000 & 181288 & 2\\ \textit{2000-5} & 2000 & 177592 & 2 \\ \textit{5000-2} & 5000 & 819220 & 2 \\ \textit{5000-3.5} & 5000 & 783694 & 2 \\ \textit{5000-5} & 5000 & 845994 & 2 \\ \bottomrule[1.5pt] \end{tabular} \end{table} The performance of \emph{PFCD} on these generated networks is compared with the state-of-the-art feature based methods of Table \ref{tbl:GeneralMethod}. The results are shown in Figures \ref{fig:SimResFScore} and \ref{fig:SimResNMI}. As $\mu$ increases, the influence of features on community structures becomes stronger. \textcolor{black}{Due to the small number of features used in generating the networks, there is only a slight difference between the proposed approach and the other feature-enabled community detection methods. In addition, the main aim of the experiments on synthetic networks is to demonstrate the impact of features on the performance of community detection (Figures \ref{fig:SimResFMeasStruct} and \ref{fig:SimResNMIStruct}).} We compare the proposed approach with the well-known structure-based methods of Table \ref{tbl:GeneralMethod} to reveal the importance of features in the community detection process, using the F1-Score and NMI criteria. Figures \ref{fig:SimResFMeasStruct} and \ref{fig:SimResNMIStruct} indicate that the proposed method outperforms the well-known structure based methods.
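The two-phase generation process described above can be sketched as follows (an illustrative Python snippet with a reduced network size; parameter names mirror the text, hub selection is randomized with probability 10\%, and edge probabilities are capped at one since $\beta\theta_i\theta_j$ can exceed one for hub pairs):

```python
import random

def generate_network(n=200, beta=0.1, r=0.25, mu=3.5, hub_frac=0.1, seed=0):
    """Degree-corrected blockmodel with two equal communities plus one
    Gaussian feature per node, following the described two-phase process."""
    rng = random.Random(seed)
    comm = [0] * (n // 2) + [1] * (n - n // 2)
    theta = [10.0 if rng.random() < hub_frac else 1.0 for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = beta * theta[i] * theta[j]
            if comm[i] != comm[j]:
                p *= r  # cross-community edges are r times less likely
            if rng.random() < min(1.0, p):
                edges.append((i, j))
    # Phase two: N(mu, 1) for community 0, N(-mu, 1) for community 1.
    feats = [rng.gauss(mu if comm[i] == 0 else -mu, 1.0) for i in range(n)]
    return comm, edges, feats
```

Larger $\mu$ widens the gap between the two feature distributions, which is precisely the "stronger features" regime examined in the three scenarios.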
\begin{figure}[H] \centering \includegraphics[scale=0.5]{FMeasureSim.pdf} \caption{\label{fig:SimResFScore} \textcolor{black}{Results of \emph{PFCD} compared with the feature based community detection methods (Table \ref{tbl:GeneralMethod}) in terms of F1-Score, where the horizontal axis presents three different scenarios for each generated network with sample sizes from $1000$ to $5000$.}} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.5]{nmiSim.pdf} \caption{\label{fig:SimResNMI} \textcolor{black}{Results of \emph{PFCD} compared with the feature based community detection methods (Table \ref{tbl:GeneralMethod}) in terms of NMI, where the horizontal axis presents three different scenarios for each generated network with sample sizes from $1000$ to $5000$.}} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.8]{FMeausre_StructSim.pdf} \caption{\label{fig:SimResFMeasStruct} \textcolor{black}{Results of \emph{PFCD} compared with the structure based community detection methods (Table \ref{tbl:GeneralMethod}) in terms of F1-Score, where the horizontal axis presents three generated networks with sample sizes from $1000$ to $5000$.}} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.82]{nmi_StructSim.pdf} \caption{\label{fig:SimResNMIStruct} \textcolor{black}{Results of \emph{PFCD} compared with the structure based community detection methods (Table \ref{tbl:GeneralMethod}) in terms of NMI, where the horizontal axis presents three generated networks with sample sizes from $1000$ to $5000$.}} \end{figure} \subsection{Real networks} We examine our approach on a number of benchmark real-world network datasets. The networks are from different domains including economics, biology, ecology, and the social sciences. \subsubsection{Dataset description} Three social friendship networks, namely \emph{Lawyer}, \emph{CalTech}, and \emph{Rice}, are used in this study.
The \emph{Lawyer} dataset is derived from a study of corporate law partnership conducted in a Northeastern US corporate law firm. It contains the friendship network among 71 attorneys (partners and associates) of this company. Several features are available for the members as part of the dataset, such as seniority, formal status, working office location, gender, law school attended, working hours, years of activity, and attitudes on various management options \cite{lazega2001collegial}. We consider two Facebook subnetworks, namely \emph{CalTech} and \emph{Rice}, from the Facebook-100 dataset, which consists of the Facebook networks of 100 colleges and universities in the US. There are links between the members (students or faculty) inside each school, as well as nodal features including student/faculty status, major, senior or junior standing, dormitory, year and high school information \cite{traud2012social}. \par Five information networks are used in our experiments, including \emph{DBLP}, \emph{ArXiv}, \emph{WTrade}, \emph{Internet}, and \emph{PolBlogs}. In the \emph{DBLP} repository, the nodes and edges represent authors and co-authorship relationships. Twenty keywords are extracted from the titles of papers to represent four different fields: Data Mining, Computer Graphics, Artificial Intelligence and Databases. The keywords include ``classification'', ``cluster'', ``graphic'' and ``human''. In \emph{ArXiv}, the nodes represent papers and the edges show citations between them. The features denote how often a specific keyword appears in the abstract of a paper. The \emph{ArXiv} network contains 30 distinct keywords. \emph{WTrade} is a dataset of trade in metal commodities among 80 countries in 1994. The edges show the exports of metal commodities from one country to another. The nodes are countries with features such as the continent, position in the world system and GDP \cite{de2011exploratory}.
The \emph{Internet} network is a topological network where each node represents an Autonomous System (AS) and the edges represent the paths between ASes. Communities are the countries in which the ASes are registered. \emph{PolBlogs} is a network of hyperlinks between weblogs on US politics, recorded in 2005 by Adamic and Glance \cite{adamic2005political}. Each node is labeled by its political affiliation, conservative or liberal. {\emph{Patent} \cite{gunnemann2013spectral} is a large citation network of the utility patents granted between January 1, 1963 and December 30, 1999, maintained by the National Bureau of Economic Research.} \par Two biological networks, \emph{Predator-Prey} and \emph{Malaria}, are employed in the experimental settings. \emph{Predator-Prey} is an ecological network of 488 marine creatures living in the Weddell Sea. Each creature has different features such as feeding type, feeding mode, environment and body mass \cite{jacob2011role}. The \emph{Malaria} dataset is a biological network of genetic sequences from the malaria parasite \cite{larremore2013network,rask2010plasmodium}. In this network, the nodes represent 297 genes and their various shared amino acid substrings. The common process of recombination among genes to produce proteins generates a natural two-mode network consisting of two types of nodes: genes and HVRs (highly variable regions), where every HVR induces a different set of edges among the same nodes \cite{rask2010plasmodium}. A summary of the datasets is given in Table \ref{tbl:Networks}. \begin{table}[t!]
\centering \caption{Summary of the real network datasets.} \label{tbl:Networks} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{lccccc} \toprule[1.5pt] Network & Nodes& Edges& Domain & Features &Communities \\ \midrule \textit{Lawyer} \cite{lazega2001collegial} & 71 & 379 & Social & Status, Office, Years & 2 \\ \textit{CalTech} \cite{traud2012social} & 769 & 16656 & Social & Gender, Class year, Major, Residence & 9 \\ \textit{Rice} \cite{traud2012social} & 4087 & 184828 & Social & Gender, Class year, Major, Residence & 9 \\ \textit{DBLP} \cite{gunnemann2013spectral} & 774 & 1757 & Social & Extracted Keywords from papers & 20 \\ \textit{PolBlogs} \cite{adamic2005political} & 1490 & 19090 & Political & Political Affiliation & 2\\ \textit{World Trade} \cite{de2011exploratory} & 80 & 876 & Economical & Continent, Positions & 6 \\ \textit{Malaria} \cite{larremore2013network,rask2010plasmodium} & 307 & 7759 & Biological & Cys-PoLV labels & 4 \\ \textit{WeddellSea} \cite{jacob2011role} & 488 & 15435 & Ecological & Feeding type and mode, Body mass, Environment & 4 \\ \textit{ArXiv} \cite{gunnemann2013spectral} & 856 & 2660 & Scientific & Abstract Keywords of papers & 9 \\ \textit{Internet} \cite{hric2014community} & 46676 & 262953 & Technological & Country & 131\\ \textit{Patent} \cite{gunnemann2013spectral} & 100000 & 188631 & Citation & Year, Country, PatentClass, Assigned Code & 15 \\ \bottomrule[1.5pt] \end{tabular} } \end{table} \subsubsection{Experimental results} At first, we consider the performance of \emph{PFCD} without taking features into account, denoted \emph{Plain}, along with some of the state-of-the-art structure based methods, such as \emph{BigClam} \cite{yang2013overlapping}, \emph{Fast-Greedy} \cite{gregory_finding_2010}, \emph{Infomap} \cite{rosvall_maps_2008}, \emph{Louvain} \cite{blondel2008fast}, and \emph{COMBO} \cite{sobolevsky_general_2014} (see Table~\ref{tbl:GeneralMethod}).
\textcolor{black}{An automatic strategy is used for threshold specification in the \emph{Plain} approach, where each node is assigned to the communities for which its membership value is greater than the average membership of all nodes \cite{fortunato2016community}. } Figures \ref{fig:Fscore} and \ref{fig:nmi} show the results. They reveal that the proposed method is able to perform as well as the other structure-based methods. Moreover, its results on some networks, like Lawyer, CalTech and Rice, are competitive with those of the full feature-enabled version. Indeed, the PFCD algorithm without nodal features treats communities as dense sub-graphs, like most other structure-based methods. \textcolor{black}{The results obtained by considering just the structural properties are not good enough (less than 0.2) on certain types of networks, such as DBLP, ArXiv, Internet, and Patent, because their community structures consist of dense sub-graphs and assortative modules \cite{gunnemann2013spectral}. In addition, structure based methods perform better than \emph{Plain} on some networks, such as \emph{Predator-Prey} and \emph{PolBlogs} (Figure \ref{fig:Fscore}), due to the small number of features. These results reveal the impact of features, in terms of both type and number, on the detection of community structures in real networks, and Figure \ref{fig:withoutMetaDataEvaluation_nmi} demonstrates the usefulness of features in the community detection process. } \begin{figure}[h] \centering \includegraphics[scale = 0.57]{FMeasure_Plain.pdf} \caption{ \textcolor{black}{Results of the proposed method compared with others by considering only the connectivity structure, \emph{Plain}, in terms of F1-score.
The baseline methods include \emph{BigClam} \cite{yang2013overlapping}, \emph{Fast-Greedy} \cite{gregory_finding_2010}, \emph{Infomap} \cite{rosvall_maps_2008}, \emph{Louvain} \cite{blondel2008fast}, and \emph{COMBO} \cite{sobolevsky_general_2014} } } \label{fig:Fscore} \end{figure} \begin{figure}[h] \centering \includegraphics[scale = 0.57]{nmi_Plain.pdf} \caption{\textcolor{black}{Results of the proposed method compared with others by considering only the connectivity structure, \emph{Plain}, in terms of NMI. The baseline methods include \emph{BigClam} \cite{yang2013overlapping}, \emph{Fast-Greedy} \cite{gregory_finding_2010}, \emph{Infomap} \cite{rosvall_maps_2008}, \emph{Louvain} \cite{blondel2008fast}, and \emph{COMBO} \cite{sobolevsky_general_2014}.} } \label{fig:nmi} \end{figure} \begin{figure*}[!t] \centering \begin{minipage}{0.9\textwidth} \centering \includegraphics[width=0.9\linewidth]{FMeasureStruct.pdf} \subcaption{} \label{SubFig:Weddell} \end{minipage}% \begin{minipage}{0.9\textwidth} \centering \includegraphics[width=0.9\linewidth]{nmiStruct.pdf} \subcaption{} \label{SubFig:WorldTrade} \end{minipage} \caption{\label{fig:withoutMetaDataEvaluation_nmi} Results of \emph{PFCD} compared with the well-known structure based methods. The baseline methods include \emph{BigClam} \cite{yang2013overlapping}, \emph{Fast-Greedy} \cite{gregory_finding_2010}, \emph{Infomap} \cite{rosvall_maps_2008}, \emph{Louvain} \cite{blondel2008fast}, and \emph{COMBO} \cite{sobolevsky_general_2014} (Table \ref{tbl:GeneralMethod})} \end{figure*} Then, \emph{PFCD} is compared with the well-known structure based methods of Table \ref{tbl:GeneralMethod}. Figure \ref{fig:withoutMetaDataEvaluation_nmi} presents the results in terms of \emph{F1-score} and \emph{NMI}. We observe that considering nodal features in the community detection process leads to the superiority of \emph{PFCD} compared to the algorithms that are only based on structural information.
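The automatic threshold strategy used for the \emph{Plain} variant can be sketched as below (an illustrative Python snippet; it assumes the average membership is computed per community over all nodes, which is one reasonable reading of the strategy):

```python
def assign_communities(M):
    """Assign node u to every community c whose membership M[u][c] exceeds
    the average membership value of community c over all nodes."""
    n, k = len(M), len(M[0])
    avg = [sum(M[u][c] for u in range(n)) / n for c in range(k)]
    return [[c for c in range(k) if M[u][c] > avg[c]] for u in range(n)]

# Four nodes, two communities: the first two nodes lean toward community 0,
# the last two toward community 1.
M = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
assignments = assign_communities(M)
```

Because the threshold is data-driven, no per-network tuning is needed, and a node may in principle belong to several communities (overlapping assignment).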
The difference is especially well pronounced in some networks, such as \emph{Lawyer}, \emph{Malaria}, \emph{Predator-Prey}, \emph{DBLP}, \emph{ArXiv} and \emph{Internet}, for which \emph{PFCD} has higher \emph{F1-score} and \emph{NMI} than the other algorithms. \begin{figure*}[!t] \centering \begin{minipage}{0.9\textwidth} \centering \includegraphics[width=0.9\linewidth]{FMeasure.pdf} \subcaption{} \label{SubFig:Weddell} \end{minipage}% \begin{minipage}{0.9\textwidth} \centering \includegraphics[width=0.9\linewidth]{nmi.pdf} \subcaption{} \label{SubFig:WorldTrade} \end{minipage} \caption{\label{fig:MetaDataEvaluation} Results of \emph{PFCD} compared with the benchmark feature based methods in terms of F1-score and NMI. The baseline methods include \emph{Cesna} \cite{yang2013community}, \emph{JCDC} \cite{zhang_community_2016}, \emph{NC} \cite{newman2016structure}, \emph{BAGC} \cite{xu2012model}, \emph{SDP} \cite{yan2016convex} and \emph{CASC} \cite{binkiewicz_covariate-assisted_2017} (Table \ref{tbl:GeneralMethod})} \end{figure*} Moreover, the performance of our approach is compared with the state-of-the-art feature based algorithms of Table~\ref{tbl:GeneralMethod}. The obtained results are shown in Figure \ref{fig:MetaDataEvaluation}. We observe that \emph{PFCD} outperforms the others in almost all experiments. Specifically, in the networks whose features are strongly dependent on the community structures, such as the \emph{WTrade}, \emph{PolBlogs}, \emph{DBLP} and \emph{ArXiv} networks, the proposed approach performs better than the others. The results show that a higher dependency between features and community structure can lead to higher accuracy in the community detection process. While \emph{JCDC} performs well on small networks, it fails to accurately detect communities in large networks due to its dependency on multiple tuning parameters. \emph{NC} considers only one type of feature as metadata and fails to precisely detect communities.
\emph{CESNA} shows similarly weak performance on networks such as \emph{WTrade}, \emph{Predator-Prey}, \emph{Malaria}, \emph{PolBlogs} and \emph{ArXiv}. {To divide the features into the assortative and generative categories, we used NMI to select the assortative features, since they have the strongest impact on the community structures. After selecting the assortative features, the remaining features are placed in the generative category. The details are reported in Table \ref{tbl:assortativeNetworks}.} \begin{table}[h] \centering \caption{The properties of the features in the real network datasets.} \label{tbl:assortativeNetworks} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{lcc} \toprule[1.5pt] Network & Features& Assortative Feature\\ \midrule \textit{Lawyer} \cite{lazega2001collegial} & Status, Office, Years & Status \\ \textit{CalTech} \cite{traud2012social} &Gender, Class year, Major, Residence & Class Year \\ \textit{Rice} \cite{traud2012social} & Gender, Class year, Major, Residence & Class Year\\ \textit{DBLP} \cite{gunnemann2013spectral} & Extracted keywords from papers & Keywords \\ \textit{PolBlogs} \cite{adamic2005political} & Political Affiliation (Liberal, Conservative) & Political Affiliation \\ \textit{World Trade} \cite{de2011exploratory}& Continent, Positions & Continent \\ \textit{Malaria} \cite{larremore2013network,rask2010plasmodium} & Cys-PoLV labels & Cys-PoLV \\ \textit{WeddellSea} \cite{jacob2011role} & Feeding type, Feeding mode, Body mass, Environment & Feeding Type \\ \textit{ArXiv} \cite{gunnemann2013spectral} & Keywords from abstracts of papers & Keywords \\ \textit{Internet} \cite{hric2014community} & Country & Country \\ \textit{Patent} \cite{gunnemann2013spectral} & Year, Country, PatentClass, Assigned Code & PatentClass \\ \bottomrule[1.5pt] \end{tabular} } \end{table} \begin{table}[h] \caption{\label{tbl:execution} {The running time of \emph{PFCD} along with \emph{NC}, \emph{CESNA}, and
\emph{Louvain}.}} \begin{center} \begin{tabular}{ |c| c| c| c| c|} \hline Network & PFCD & NC & CESNA & Louvain \\ \hline WorldTrade & 0.52 & 3.097 & 0.38 & 0.04\\ \hline PolBlogs & 0.702 & 41 & 2.16 & 0.01 \\ \hline Lawyer & 0.765 & 21 & 0.12 & 0.01 \\ \hline Malaria & 0.81 & 34 & 1.918 & 0.02 \\ \hline WeddellSea & 1.41 & 1461 & 3.18 & 0.03 \\ \hline CalTech & 1.497 & 1518 & 4.71 & 0.02 \\ \hline DBLP & 2 & 1114 & 0.7 & 0.005 \\ \hline Arxiv & 2.31 & 91 & 0.26 & 0.01 \\ \hline Rice & 31 & 3476 & 5.4 & 0.33 \\ \hline Patent & 446 & -- & 37 & 3.5\\ \hline Internet & 2818 & -- & 5400 & 0.63\\ \hline \end{tabular} \end{center} \end{table} {The running time of \emph{PFCD} is compared with the state-of-the-art competitors in Table \ref{tbl:execution}: \emph{CESNA}, with complexity $O(|E|)$ and developed in C++; \emph{NC}, with $O(|V|^2\times k^2)$ and developed in C; and the fast non-overlapping community detection method \emph{Louvain}, developed in C and taking $O(|V|\log|V|)$. All experiments are performed on a single Lenovo machine with 4GB RAM and a Core i5 CPU, and the proposed method is implemented in JAVA using the JGraphT library. It is worth mentioning that most of the feature-oriented methods are unable to produce results in a reasonable time, because some of them were developed in MATLAB or require the full $|V| \times |V|$ adjacency matrix, which is infeasible on big networks. Regarding execution times, the Louvain algorithm is considerably faster than the proposed algorithm: its time complexity, $O(|V|\log|V|)$, is generally lower than $O(|E|)$, and it is based only on the network structure and does not take node features into account. However, the Louvain algorithm is unable to detect communities accurately on several networks, such as CalTech (0.34 F1-score), WorldTrade (0.24 F1-score), and ArXiv or DBLP (around 0.1 F1-score).
} \section{Case study} \label{sec6} Here, we demonstrate the effectiveness of features for the detection of communities. The \emph{Lawyer} network is considered for further analysis of its communities and of the role of features in each community structure. To illustrate the situation, Figure \ref{fig:LawyerCaseStudy} depicts the adjacency matrix sorted by different features, \emph{status (Partner, Associate)} and \emph{office location (Boston, Hartford)}. In Figure \ref{fig:LawyerCaseStudy} part (a), the yellow block denotes a group of lawyers with \emph{Partner} status and the blue block consists of lawyers with \emph{Associate} status. {In Figure \ref{fig:LawyerCaseStudy} part (b), lawyers working in \emph{Boston} are shown in the yellow block, and those working in \emph{Hartford} are depicted in the blue block.} According to Figure \ref{fig:LawyerCaseStudy}, the features are strong enough to shape communities: nodes with similar features are more likely to be connected than those with different features. The strength of the association between features and communities is shown in Table \ref{fig:correlationLawyer}. At first glance, Table \ref{fig:correlationLawyer} shows that the proposed method is able to extract a unique set of features for each community, from among all possible features, based on their strength levels. For example, the feature \emph{Partner} has a positive impact on the second community while it does not have the same impact on the first one. Moreover, the features ``lawyers with \emph{Associate} status'' and ``lawyers whose working offices are located in \emph{Boston}'' play an important role in shaping the first community. In the same way, ``lawyers with \emph{Partner} status {whose} working offices are located in \emph{Hartford}'' are more influential in shaping the second community. The proposed method is also able to prioritize the importance levels of specific features of each community. 
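Impact values of the kind reported in Table \ref{fig:correlationLawyer} can be illustrated with a minimal sketch. Note that this simple Pearson-correlation proxy between binary feature indicators and community memberships is only an illustration; PFCD infers these weights probabilistically, jointly with the communities, rather than post hoc:

```python
import numpy as np

def feature_community_correlation(features, communities):
    """Pearson correlation between each binary feature column and each
    binary community-membership column. This is only a rough proxy for
    the learned feature weights, not the PFCD inference itself."""
    f = features - features.mean(axis=0)
    c = communities - communities.mean(axis=0)
    cov = f.T @ c / len(f)                          # cross-covariance matrix
    denom = np.outer(f.std(axis=0), c.std(axis=0))  # product of std devs
    return cov / denom

# toy example: 6 lawyers, one binary feature "Partner", two communities
partner = np.array([[1], [1], [1], [0], [0], [0]], dtype=float)
comms = np.array([[0, 1], [0, 1], [0, 1],
                  [1, 0], [1, 0], [1, 0]], dtype=float)
corr = feature_community_correlation(partner, comms)
print(corr)  # Partner aligns perfectly with community 2, anti-aligns with 1
```

In the real data the correlations are of course weaker (e.g. $0.9$ for \emph{Partner} in the second community).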
As shown in Table \ref{fig:correlationLawyer}, \emph{status} is more important than \emph{office location}. \begin{figure}[h!] \centering \begin{minipage}{0.25\textwidth} \includegraphics[width=0.7\linewidth]{Part-Assoc.pdf} \label{fig:LawyerStatus} \subcaption{} \end{minipage} \begin{minipage}{0.25\textwidth} \includegraphics[width = 0.7\linewidth]{Practice.pdf} \label{fig:LawyerPractice} \subcaption{} \end{minipage} \caption{\label{fig:LawyerCaseStudy}Adjacency matrix sorted according to the value of each feature. (a): status, (b): office. Points represent edges among nodes and each yellow or blue block shows a group of nodes with a similar feature. } \end{figure} \begin{table}[t!] \caption{The impact of each feature on each community.} \label{fig:correlationLawyer} \centering \small \resizebox{0.5\textwidth}{!}{ \begin{tabular}{lcccc} \toprule[1pt] Community & Partner & Associate & Boston & Hartford\\ \midrule 1 & -0.66 & 0.8 & 0.04 & -0.001\\ 2 & 0.9 & -0.7 & 0 & 0.081\\ \bottomrule[1pt] \end{tabular} } \end{table} \section{Conclusion} In this work, we introduced a novel graphical-model-based approach for community detection. The proposed approach, \emph{PFCD}, considers both the network structure and nodal features. The differing influence of nodal features on community structures is investigated in our framework. The model is inferred through an efficient probabilistic algorithm: a block-coordinate descent algorithm is employed to learn the parameters of the model and handle the latent variables in a computationally efficient manner. Since features influence community formation to different degrees, the priority of each feature for the structure of each community can be inferred from our model. The experimental results on synthetic networks justified the strength of the \emph{PFCD} approach on {detection of} communities compared with the well-known methods. 
Furthermore, a variety of small to large real network datasets {were} used to evaluate the proposed model based on standard evaluation measures. The results on real networks showed the high performance of the proposed model and very promising results on the detection of community structures based on the network aligned with the nodal features.\par Future work includes using representation learning to derive features automatically from the network structure, and extending the proposed method to temporal networks. \label{sec7} \ifCLASSOPTIONcompsoc \else \fi \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} Many physical, chemical and biological systems can be modelled with (partial) differential equations. They capture a system's interactions, scales and conserved quantities in an interpretable manner. Unfortunately, manually deriving these equations from first principles is a time-consuming process and requires expert knowledge of the underlying dynamics. The last few years have seen a rising interest in automating this process, also known as \textit{model discovery}. As the model space is exponentially large, a popular approach is to select a large set of candidate terms (features) and perform sparse regression on these features, effectively turning model discovery into variable selection \citep{brunton_discovering_2016, rudy_data-driven_2017}. A uniquely challenging aspect of discovering PDEs is that many of the candidate features contain higher-order derivatives, which are difficult to calculate accurately using numerical differentiation. This has essentially limited model discovery to densely sampled datasets with low noise levels. Various works have explored the use of neural networks to generate a surrogate of the data \citep{rasheed_digital_2020}, and perform model discovery on this surrogate instead \citep{berg_data-driven_2019, both_deepmod:_2019}. By using a neural network $g$ to learn the data $u$ such that $u \approx g(x, t)$, the network denoises the data, while automatic differentiation can be used to accurately calculate the (higher-) order derivatives of $u$ used in the candidate features. These approaches show significant improvements when the neural network is constrained to solutions allowed by the candidate features. This essentially yields a Physics Informed Neural Network (PINN) \citep{raissi_physics_2017-1}, with the important distinction that the form of the constraint, i.e. which terms of the candidate features are active and make up the underlying equation, is also learned. 
As it is composed of all candidate features, the constraint itself is prone to overfitting, and the discovered equation will contain more terms than required. Applying $\ell_1$ regularisation can alleviate this problem, but raises the question of how strongly to apply it. Alternatively, inactive terms can be pruned from the constraint by applying a mask, but this is a non-differentiable operation, and training the network does not take sparsity into account. The open question then is how to optimally apply a constraint, which itself is learned and sensitive to overfitting, all the while remaining fully differentiable. In this work we introduce a fully differentiable model discovery algorithm consisting of a neural-network based surrogate with a constraint based on Sparse Bayesian Learning (SBL). We summarise our contributions as follows: \begin{itemize} \item{We show how Bayesian parameter inference methods can be used as a constraint in a PINN. Specifically, we use Sparse Bayesian Learning (SBL) to create a fully-differentiable, robust model discovery algorithm and showcase this on various datasets.} \item{We identify a connection with multitask learning using uncertainty and exploit this to generalise PINNs to probabilistic surrogates. We introduce a conditional normalizing flow constrained by SBL, a so called Physics Informed Normalizing Flow (PINF).} \item{We present a proof-of-concept where PINF learns a time-dependent density from unlabelled single-particle data, allowing the constraint to discover a density model directly from single particle data.} \end{itemize} \section{Background} \paragraph{Model discovery with sparse regression} Model discovery aims to discover the PDE from a large set of $M$ candidate features $\{u, u_{xx}, uu_x, \ldots\}$. 
Assuming the underlying equation can be written as a linear combination of the candidate features, model discovery can be approached as a regression problem \citep{brunton_discovering_2016} by solving \begin{equation} \hat{\bm{\xi}} = \min_{{\bm{\xi}}} \left \lVert \partial_t {\bm{u}} - {\bm{\Theta}} {\bm{\xi}} \right\rVert^2 + \lambda R({\bm{\xi}}), \label{eq:sparse} \end{equation} where ${\bm{\Theta}} \in \mathcal{R}^{N \times M}$ contains all candidate features, ${\bm{\xi}} \in \mathcal{R}^{M}$ the unknown coefficient vector and $R({\bm{\xi}})$ some sparsity-promoting penalty; the number of candidate features is typically much larger than the number of terms in the underlying equation. The main challenge of discovering the underlying equation using this approach is dealing with large, possibly correlated errors in the features containing derivatives; using numerical differentiation to calculate these higher-order derivatives accurately from noisy and sparse data is extremely challenging, even after denoising. One line of work has focused on constructing more robust and sparser approaches to solving eq. \ref{eq:sparse}, for example SR3 \citep{zheng_unified_2019} or using stability criteria \citep{maddu_learning_2020}. Alternatively, several works \citep{both_deepmod:_2019, berg_data-driven_2019} have explored the use of neural networks to create a surrogate of the data and perform model discovery on this surrogate instead. Automatic differentiation can then be used to calculate the derivatives, yielding much more accurate features. \paragraph{PINNs} Physics Informed Neural Networks (PINNs) \citep{raissi_physics_2017-1} have become a very popular method to either solve a differential equation or perform parameter inference with neural networks. 
Here we focus on parameter inference and consider a (noisy) dataset $\{{\bm{u}}_i, {\bm{x}}_i, t_i\}_{i=1}^{N}$, governed by a differential equation of form $\partial_t {\bm{u}} = {\bm{X}} {\bm{w}} $ with ${\bm{X}}$ the terms of the equation and ${\bm{w}}$ the unknown parameters. PINNs infer ${\bm{w}}$ by using a neural network to approximate ${\bm{u}}$, and constrain the network to the given differential equation by minimising \begin{equation} \mathcal{L}_{\text{PINN}}({\bm{\theta}}, {\bm{w}}) = \frac{1}{N}\sum_{i=1}^{N} \left\lVert \hat{{\bm{u}}}_i - {\bm{u}}_i \right\rVert^2 + \frac{\lambda}{N} \sum_{i=1}^{N} \left\lVert \partial_t \hat{{\bm{u}}}_i - {\bm{X}}_i {\bm{w}}\right \rVert^2. \label{eq:pinn_loss} \end{equation} Here $\hat{{\bm{u}}}_i = g_{{\bm{\theta}}}({\bm{x}}_i, t_i)$ is the prediction of the neural network and $\lambda$ sets the strength of the regularisation. The constraint ensures the network approximates the data ${\bm{u}}$ consistently with the given differential equation, and terms containing derivatives can be calculated using automatic differentiation. These two features make PINNs especially useful in noisy and sparse datasets. \paragraph{Model discovery with PINNs} PINNs can easily be adapted to perform model discovery by replacing the given differential equation ${\bm{X}}$ with a larger set of candidate features ${\bm{\Theta}}$. Additionally, a mask ${\bm{m}}$ is applied to the coefficients, yielding as loss function, \begin{equation} \begin{split} \mathcal{L}_{\text{MD}}({\bm{\theta}}, {\bm{\xi}}) & = \frac{1}{N}\sum_{i=1}^{N} \left\lVert \hat{{\bm{u}}}_i - {\bm{u}}_i \right\rVert^2 + \frac{\lambda}{N} \sum_{i=1}^{N} \left\lVert \partial_t \hat{{\bm{u}}}_i - {\bm{\Theta}}_i ({\bm{m}} \odot {\bm{\xi}})\right \rVert^2\\ & = \mathcal{L}_{\text{fit}}({\bm{\theta}}) + \lambda \mathcal{L}_{\text{reg}}({\bm{\theta}}, {\bm{\xi}}). 
\label{eq:deepmod_loss} \end{split} \end{equation} The mask ${\bm{m}}$ describes which terms feature in the equation, and hence the form of the constraint; this approach can be interpreted as a PINN in which the constraint is also learned. The mask is updated periodically by some sparse regression technique, and as terms are pruned, the constraint becomes stricter, preventing overfitting of the constraint itself and improving the approximation of the network, boosting performance significantly \citep{both_sparsely_2021}. However, the non-differentiability of the mask can lead to issues during training, for example when it is updated at the wrong time, or when the wrong terms are accidentally pruned. Our goal here is thus to construct an approach where the mask ${\bm{m}}$ is learned together with the network's parameters, while still maintaining the benefits of iteratively refining the approximation: a fully-differentiable model discovery algorithm. \paragraph{Removing free variables} Training with eq. \ref{eq:deepmod_loss} optimises two sets of parameters: the network parameters ${\bm{\theta}}$ and the coefficients ${\bm{\xi}}$. Typically both are minimised together using gradient descent, but the optimisation of ${\bm{\xi}}$ can be performed analytically \citep{both_sparsely_2021}. Given a configuration of the network parameters ${\bm{\theta}}^*$, the minimisation over ${\bm{\xi}}$ is a regression problem as given by eq. \ref{eq:sparse} and can be solved exactly. Referring to this solution as the maximum likelihood estimate ${\bm{\xi}}_{\text{MLE}}$, we define a loss function $\tilde{\mathcal{L}}_{\text{MD}}({\bm{\theta}}) \equiv \mathcal{L}_{\text{MD}}({\bm{\theta}}, {\bm{\xi}}_{\text{MLE}})$, which optimises only the network parameters ${\bm{\theta}}$ using gradient descent. This significantly speeds up convergence and reduces the variance of the discovered coefficients across initialisations. 
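As a concrete illustration, the analytic elimination of ${\bm{\xi}}$ can be sketched in a few lines of NumPy (a schematic, not the authors' implementation; in the differentiable training loop ${\bm{\xi}}_{\text{MLE}}$ is treated as a constant during backpropagation, precisely because the gradient of the loss with respect to ${\bm{\xi}}$ vanishes at the least-squares minimum):

```python
import numpy as np

def discovery_loss(u, u_hat, u_t, theta_lib, lam=1.0):
    """Model-discovery loss with the coefficient vector eliminated
    analytically: xi_mle solves min ||u_t - Theta xi||^2 for the
    current network output and is plugged back into the loss."""
    xi_mle, *_ = np.linalg.lstsq(theta_lib, u_t, rcond=None)
    fit = np.mean((u_hat - u) ** 2)                  # L_fit
    reg = np.mean((u_t - theta_lib @ xi_mle) ** 2)   # L_reg at xi_mle
    return fit + lam * reg, xi_mle

# sanity check: if u_t lies exactly in the span of the library,
# the analytic coefficients are recovered and L_reg vanishes
rng = np.random.default_rng(0)
Theta = rng.normal(size=(50, 3))    # N=50 samples, M=3 candidate terms
xi_true = np.array([1.0, 0.0, -2.0])
u_t = Theta @ xi_true
loss, xi = discovery_loss(np.zeros(50), np.zeros(50), u_t, Theta)
print(np.allclose(xi, xi_true))  # prints True
```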
We shall adopt this approach in the rest of the paper, and define the convention of $\tilde{\mathcal{L}}$ denoting the loss function $\mathcal{L}$ with the independent variables calculated analytically. \section{Fully differentiable model discovery} Our goal is to create a fully-differentiable model discovery algorithm, which, considering eq. \ref{eq:deepmod_loss}, requires making the mask ${\bm{m}}$ differentiable. Differentiable masking is challenging due to the binary nature of the problem, and instead \textit{we relax the application of the mask to a regularisation problem}. Specifically, we propose to use Sparse Bayesian Learning \citep{michael_tipping_tipping01apdf_2001} to select the active features and act as constraint. We start this section by reviewing SBL and how it can be used for differentiable variable selection, next show how to integrate it into PINNs, and finally introduce Physics Informed Normalizing Flows. \subsection{Differentiable masking with SBL} \paragraph{Sparse Bayesian Learning} Sparse Bayesian Learning (SBL) \citep{michael_tipping_tipping01apdf_2001} is a Bayesian approach to regression yielding sparse results. SBL defines a hierarchical model, starting with a Gaussian likelihood with noise precision $\beta \equiv \sigma^{-2}$, and a zero-mean Gaussian with precision $\alpha_j$ on each component $\xi_j$ as prior, \begin{align} p(\partial_t \hat{{\bm{u}}}; \ {\bm{\Theta}}, {\bm{\xi}}, \beta) = \prod_{i=1}^{N} \mathcal{N}(\partial_t \hat{{\bm{u}}}_{i}; \ {\bm{\Theta}}_i {\bm{\xi}}, \beta^{-1}), \\ p({\bm{\xi}} ; \ {\bm{A}}) = \prod_{j=1}^M \mathcal{N}(\xi_j ; \ 0, \alpha_j^{-1}), \end{align} with $\partial_t \hat{{\bm{u}}} \in \mathcal{R}^{N}$, ${\bm{\Theta}} \in \mathcal{R}^{N \times M}$, ${\bm{\xi}} \in \mathcal{R}^{M}$, and we have defined ${\bm{A}} \equiv \text{diag}({\bm{\alpha}})$. 
The posterior distribution of ${\bm{\xi}}$ is a Gaussian with mean ${\bm{\mu}}$ and covariance ${\bm{\Sigma}}$, \begin{equation} \begin{split} {\bm{\Sigma}} & = (\beta {\bm{\Theta}}^T {\bm{\Theta}} + {\bm{A}})^{-1}\\ {\bm{\mu}} & = \beta {\bm{\Sigma}} {\bm{\Theta}}^T \partial_t\hat{{\bm{u}}}. \end{split} \end{equation} Many of the terms in ${\bm{A}}$ will go to infinity when optimised, and correspondingly the prior for term $j$ becomes a delta peak. We are thus certain that that specific term is inactive and can be pruned from the model. This makes SBL a very suitable choice for model discovery, as it gives a rigorous criterion for deciding whether a term is active or not. Additionally, SBL defines hyper-priors over ${\bm{\alpha}}$ and $\beta$, \begin{equation} \begin{split} p({\bm{\alpha}}) &= \prod_{j=1}^{M} \Gamma(\alpha_j ; \ a, b) \\ p(\beta) &= \Gamma(\beta ; \ c, d). \end{split} \end{equation} The inference of ${\bm{A}}$ and $\beta$ cannot be performed exactly, and SBL uses type-II maximum likelihood to find the most likely values of $\hat{{\bm{A}}}$ and $\hat{\beta}$ by minimising the negative log marginal likelihood \footnote{Neglecting the hyper-prior, this loss function can also be written more compactly as \begin{equation} \mathcal{L}_{\text{SBL}}({\bm{A}}, \beta) = \log \left \lvert {\bm{C}} \right \rvert + \partial_t {\bm{u}}^T {\bm{C}}^{-1}\partial_t {\bm{u}}, \quad {\bm{C}} = \beta^{-1} {\bm{I}} + {\bm{\Theta}} {\bm{A}}^{-1} {\bm{\Theta}}^T, \end{equation} but the format we use provides more insight into how SBL provides differentiable masking.}, \begin{multline} \mathcal{L}_{\text{SBL}}({\bm{A}}, \beta) = \frac{1}{2}\left[ \beta \left\lVert {\bm{u}}_t - {\bm{\Theta}} {\bm{\mu}} \right\rVert^2 + {\bm{\mu}}^T {\bm{A}} {\bm{\mu}} - \log\lvert {\bm{\Sigma}} \rvert - \log \lvert {\bm{A}} \rvert - N \log \beta \right] - \\ \sum_{j=1}^{M}(a \log\alpha_j - b \alpha_j) - c \log \beta + d \beta, \label{eq:sbl_evidence} \end{multline} using an iterative method (see 
\citet{michael_tipping_tipping01apdf_2001}). \paragraph{Continuous relaxation} The marginal likelihood also offers insight into how the SBL provides differentiable masking. Considering only the first two terms of eq. \ref{eq:sbl_evidence}, \begin{equation} \beta \left\lVert {\bm{u}}_t - {\bm{\Theta}} {\bm{\mu}} \right\rVert^2 + {\bm{\mu}}^T {\bm{A}} {\bm{\mu}} \end{equation} we note that the SBL essentially applies a coefficient-specific $\ell_2$ penalty to the posterior mean ${\bm{\mu}}$. If $A_j \to \infty$, the corresponding coefficient $\mu_j \to 0$, pruning the variable from the model. Effectively, the SBL replaces the discrete mask $m_j \in \{0, 1\}$ by a continuous regularisation $A_j \in (0, \infty]$, and we thus refer to our approach as \textit{continuous relaxation}. \subsection{SBL-constrained PINNs} \paragraph{Model} To integrate SBL as a constraint in PINNs (similar to eq. \ref{eq:pinn_loss}), we place a Gaussian likelihood on the output of the neural network, \begin{equation} p({\bm{u}}; \ \hat{{\bm{u}}}, \tau) = \prod_{i=1}^{N}\mathcal{N}({\bm{u}}_i; \ \hat{{\bm{u}}}_i,\ \tau^{-1}), \end{equation} and define a Gamma hyper-prior on $\tau$, $p(\tau) = \Gamma(\tau ; \ e, f)$, yielding the loss function, \begin{equation} \mathcal{L}_{\text{data}}({\bm{\theta}}, \tau) = \frac{1}{2}\left[ \tau \left\lVert {\bm{u}} - \hat{{\bm{u}}} \right\rVert^2 - N \log \tau\right] - e \log \tau + f \tau. \label{eq:data} \end{equation} Assuming the likelihoods factorise, i.e. $p({\bm{u}}, {\bm{u}}_t ; \ \hat{{\bm{u}}}, \ {\bm{\Theta}},\ {\bm{\xi}}) = p({\bm{u}} ; \ \hat{{\bm{u}}}) \cdot p({\bm{u}}_t ; {\bm{\Theta}}, \ {\bm{\xi}})$, SBL can be integrated as a constraint in a PINN by simply adding the two losses given by eq. \ref{eq:sbl_evidence} and eq. 
\ref{eq:data}, \begin{equation} \mathcal{L}_{\text{SBL-PINN}}({\bm{\theta}}, {\bm{A}}, \tau, \beta) = \mathcal{L}_{\text{data}}({\bm{\theta}}, \tau) + \mathcal{L}_{\text{SBL}}({\bm{\theta}}, {\bm{A}}, \beta). \end{equation} Our approach does not rely on any specific property of the SBL, and thus generalises to other Bayesian regression approaches. \paragraph{Training} The loss function for the SBL-constrained PINN contains three variables that can be minimised exactly; we denote the minimisers as $\hat{{\bm{A}}}$, $\hat{\tau}$ and $\hat{\beta}$. With these values, we introduce $\tilde{\mathcal{L}}_{\text{SBL-PINN}}({\bm{\theta}}) \equiv \mathcal{L}_{\text{SBL-PINN}}({\bm{\theta}}, \hat{{\bm{A}}}, \hat{\tau}, \hat{\beta})$ and note that we can further simplify this expression, as the gradient of the loss with respect to these variables is zero. For example, $\nabla_{\bm{\theta}} \mathcal{L}(\hat{{\bm{A}}})= \nabla_{{\bm{A}}}\mathcal{L} \cdot \nabla_{\bm{\theta}} {\bm{A}} |_{{\bm{A}}=\hat{{\bm{A}}}} = 0$, as $\nabla_{\bm{A}} \mathcal{L} |_{{\bm{A}} = \hat{{\bm{A}}}} =0$. 
Thus, keeping only terms directly depending on the neural network parameters ${\bm{\theta}}$ yields, \begin{equation} \begin{split} \tilde{\mathcal{L}}_{\text{SBL-PINN}}({\bm{\theta}}) & = \frac{\hat{\tau}}{2} \left\lVert {\bm{u}} - \hat{{\bm{u}}} \right\rVert^2 + \frac{\hat{\beta}}{2} \left\lVert {\bm{u}}_t -{\bm{\Theta}} {\bm{\mu}} \right\rVert^2 + {\bm{\mu}}^T \hat{{\bm{A}}} {\bm{\mu}} - \log\lvert {\bm{\Sigma}} \rvert \\ & = \frac{N\hat{\tau}}{2}\underbrace{\left[\mathcal{L}_{\text{fit}}({\bm{\theta}}) + \frac{\hat{\beta}}{\hat{\tau}} \mathcal{L}_{\text{reg}}({\bm{\theta}}, {\bm{\mu}}) \right]}_{=\mathcal{L}_{\text{PINN}}({\bm{\theta}}, \mu)} + {\bm{\mu}}^T \hat{{\bm{A}}} {\bm{\mu}} -\log\lvert {\bm{\Sigma}} \rvert \end{split} \label{eq:sblpinn} \end{equation} where in the second line we have rewritten the loss function in terms of a classical PINN with relative regularisation strength $\lambda = \hat{\beta} / \hat{\tau}$ and coefficients ${\bm{\xi}} = {\bm{\mu}}$. Contrary to a PINN however, the regularisation strength is inferred from the data, and the coefficients ${\bm{\mu}}$ are inherently sparse. An additional consequence of $\nabla_{\bm{\theta}} \mathcal{L}(\hat{{\bm{A}}},\hat{\beta}, \hat{\tau})= 0$ is that our method does not require backpropagating through the solver. While such an operation could be efficiently performed using implicit differentiation \citep{bai_deep_2019}, our method requires solving an iterative problem only in the forward pass. During the backwards pass the values obtained during the forward pass can be considered constant. \paragraph{Connection with multitask learning} Considering eq. \ref{eq:sblpinn}, we note the resemblance to multitask learning using uncertainty, introduced by \citet{cipolla_multi-task_2018}. Given a set of objectives, the authors propose placing a Gaussian likelihood on each objective so that each task gets weighed by its uncertainty. 
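This connection can be made concrete with a small numerical sketch of such uncertainty weighting (toy values; the learned precisions play the roles of $\hat{\tau}$ and $\hat{\beta}$ in eq. \ref{eq:sblpinn}, and optimising the log-precisions recovers inverse-MSE weights, analogous to the exact updates used above):

```python
import numpy as np

def multitask_loss(fit_mse, reg_mse, log_tau, log_beta, n):
    """Uncertainty-weighted sum of the fitting and regression tasks:
    each Gaussian-likelihood task is weighted by its learned precision,
    and the -log(precision) terms stop the precisions from diverging."""
    tau, beta = np.exp(log_tau), np.exp(log_beta)
    return 0.5 * n * (tau * fit_mse - np.log(tau)
                      + beta * reg_mse - np.log(beta))

# the loss is minimised when each precision equals the inverse of its
# task's MSE, so the better-fitted task is weighted more strongly
fit_mse, reg_mse = 0.04, 0.25
opt = multitask_loss(fit_mse, reg_mse, np.log(1 / fit_mse), np.log(1 / reg_mse), n=100)
naive = multitask_loss(fit_mse, reg_mse, 0.0, 0.0, n=100)  # tau = beta = 1
print(opt < naive)  # prints True
```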
The similarity implies that we are essentially reinterpreting PINNs as Bayesian or hierarchical multi-task models. \subsection{Physics Informed Normalizing Flows} Having redefined the PINN loss function (eq. \ref{eq:pinn_loss}) in terms of likelihoods (i.e. eq. \ref{eq:sblpinn}) allows us to introduce a PINN-type constraint to any architecture with a probabilistic loss function. In this section we introduce an approach with normalizing flows, called Physics Informed Normalizing Flows (PINFs). As most physical equations involve time, we first briefly discuss how to construct a time-dependent normalizing flow. We show in the experiments section how PINFs can be used to directly infer a density model from single-particle observations. \paragraph{Conditional Normalizing Flows} Normalizing flows construct arbitrary probability distributions by applying a series of $K$ invertible transformations $f$ to a known probability distribution $\pi({\bm{z}})$, \begin{equation} \begin{split} {\bm{z}} & = f_K \circ \ldots \circ f_0({\bm{x}}) \equiv g_{\bm{\theta}}({\bm{x}})\\ \log p({\bm{x}}) & = \log \pi({\bm{z}}) + \sum_{k=1}^{K} \log \left \lvert \det \frac{\partial f_k({\bm{z}})}{\partial {\bm{z}}} \right \rvert, \end{split} \end{equation} and are trained by minimising the negative log likelihood, $\mathcal{L}_{\text{NF}} = -\sum_{i=1}^{N} \log p({\bm{x}}_i)$. Most physical processes yield time-dependent densities, meaning that the spatial axis is a proper probability distribution with $\int p({\bm{x}}, t)d{\bm{x}}=1$. Contrarily, this is not valid along the temporal axis, as $\int p({\bm{x}}, t)dt = f({\bm{x}})$. To construct PINFs, we first require a Conditional Normalizing Flow capable of modelling such time-dependent densities. Instead of following the method of \citet{both_temporal_2019}, which modifies the Jacobian, we employ a time-dependent hyper-network. This hyper-network $h$ outputs the flow parameters ${\bm{\theta}}$, and is only dependent on time, i.e. 
${\bm{\theta}} = h(t)$, thus defining a time-dependent normalizing flow as ${\bm{z}} = g_{h(t)}({\bm{x}})$. \paragraph{PINFs} Conditional normalizing flows yield a continuous spatio-temporal density, and the loss function of a PINF is defined as simply adding the SBL-loss to that of the normalizing flow, yielding \begin{equation} \tilde{\mathcal{L}}_{\text{PINF}}({\bm{\theta}}) = \mathcal{L}_{\text{NF}}({\bm{\theta}}) + \frac{N\hat{\beta}}{2} \mathcal{L}_{\text{reg}}({\bm{\theta}}, {\bm{\mu}}) + {\bm{\mu}}^T \hat{{\bm{A}}} {\bm{\mu}} -\log\lvert {\bm{\Sigma}} \rvert. \end{equation} \section{Experiments} We now show several experiments illustrating our approach. We start this section by discussing the choice of hyper-prior, followed by a benchmark on several datasets and finally a proof-of-concept with physics-informed normalizing flows. \subsection{Choosing the prior} The loss function for the SBL-constrained approach contains several hyper-parameters, all defining the (hyper-) priors on respectively ${\bm{A}}$, $\beta$ and $\tau$. We set uninformed priors on ${\bm{A}}$ and $\tau$, $a=b=e=f=10^{-6}$, but the prior on $\beta$, the precision of the constraint, must be chosen more carefully. Figure \ref{fig:prior} illustrates the learning dynamics on a dataset of the Korteweg-de Vries equation \footnote{We choose to plot the original PINN losses $\mathcal{L}_{\text{data}}$ and $\mathcal{L}_{\text{reg}}$ because these are more easily interpreted than the likelihood-based losses we have introduced.} when the $\beta$ hyper-prior is uninformed, i.e. $c=d=10^{-6}$. Observe that the model fails to learn the data, while almost immediately optimising the constraint. We explain this behaviour as a consequence of our assumption that the likelihoods factorise, which implies the two tasks of learning the data and applying the constraint are independent. 
Since the constraint contains many more terms than required, it can fit a model with high precision to any output the neural network produces. The two tasks then are not independent but conditional: a high precision on the constraint is warranted only if the data is reasonably approximated by the neural network. To escape the local minimum observed in figure \ref{fig:prior}, we couple the two tasks by making the hyper-prior on $\beta$ dependent on the performance of the fitting task. \begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics{figures/prior.pdf} \caption{Regression loss as a function of fitting loss during training, comparing an uninformed prior with a dynamic prior.} \label{fig:prior} \end{wrapfigure} \paragraph{Dynamic prior} Our starting point is the update equation for $\beta$ (see \citet{michael_tipping_tipping01apdf_2001} for details), \begin{equation} \hat{\beta} = \frac{N - M + \sum_i {\bm{\alpha}}_i {\bm{\Sigma}}_{ii} +2c}{N \mathcal{L}_{\text{reg}} + 2d}. \end{equation} We typically observe good convergence of normal PINNs with $\lambda=1$, which implies $\hat{\beta} \approx \hat{\tau}$; similarly, $\mathcal{L}_{\text{reg}} \to 0$ as the model converges. Assuming $N \gg M + \sum_i \alpha_i \Sigma_{ii}$, we have \begin{equation} \hat{\tau} \approx \frac{N + 2c}{2d}, \end{equation} which can be satisfied with $c = N/2, \ d=N/\hat{\tau}$. Figure \ref{fig:prior} shows that with this dynamic prior the SBL-constrained PINN does not get trapped in a local minimum and learns the underlying data. We hope to exploit multitask learning techniques to optimise this choice in future work. \subsection{Experiments} We present three experiments to benchmark our approach. We first study the learning dynamics in-depth on a solution of the Korteweg-de Vries equation, followed by a robustness study of the Burgers equation, and finally show the ability to discover the chaotic Kuramoto-Sivashinsky equation from highly noisy data. 
Reproducibility details can be found in the appendix. \paragraph{Korteweg-de Vries} The Korteweg-de Vries equation describes waves in shallow water and is given by $u_t = u_{xxx} - u u_x$. Figure \ref{fig:kdv}a shows the dataset: 2000 samples with 20\% noise from a two-soliton solution. We compare our approach with I) Sparse Bayesian Learning with features calculated with numerical differentiation, II) a PINN-based model discovery algorithm with non-differentiable variable selection, called DeepMoD \citep{both_fully_2021}, and III) PDE-find \citep{rudy_data-driven_2017}, a popular model discovery method for PDEs based on SINDy \citep{brunton_discovering_2016}. The first two benchmarks also act as an ablation study: method I uses the same regression algorithm but does not use a neural network to interpolate, while method II uses a neural network to interpolate but does not implement differentiable variable selection. \begin{figure} \centering \includegraphics{figures/kdv.pdf} \caption{Comparison of a differentiable SBL-constrained model and a non-differentiable OLS-constrained model on a Korteweg-de Vries dataset (panel \textbf{a}) with a library consisting of fourth-order derivatives and third-order polynomials, for a total of 20 candidate features. In panels \textbf{b} and \textbf{c} we respectively plot the inferred prior $\hat{A}$ and the posterior coefficients $\mu$. In panel \textbf{d} we show the non-differentiable DeepMoD approach. In panels \textbf{b} and \textbf{c} we see that the correct equation (bold blue line: $u_{xxx}$, bold orange line: $uu_x$) is discovered early on, while the non-differentiable model (panel \textbf{d}) selects the wrong terms.} \label{fig:kdv} \end{figure} In figure \ref{fig:kdv}b and c we show that the differentiable approach recovers the correct equation after approximately 3000 epochs. Contrarily, DeepMoD recovers the wrong equation. 
Performing the inference 10 times with different seeds shows that the fully-differentiable approach manages to recover the Korteweg-de Vries equation nine times, while DeepMoD recovers the correct equation only twice; worse, it recovers the same wrong equation the other eight times. Neither PDE-find nor SBL with numerical differentiation is able to discover the Korteweg-de Vries equation from this dataset, even at 0\% noise, due to the data sparsity. \paragraph{Burgers} We now explore how robust the SBL-constrained PINN is with respect to noise on a dataset of the Burgers equation, $u_t = \nu u_{xx} - u u_x$ (figure \ref{fig:sbl_experiments}a). We add noise varying from 1\% to 100\% and compare the equation discovered by benchmark method II (DeepMoD, panel b) and our approach (panel c); the bold orange and blue lines denote $u_{xx}$ and $uu_x$ respectively, and the black dashed line their true value. Observe that DeepMoD discovers small additional terms for $>50\%$ noise, which become significant when noise $>80\%$. Contrarily, our fully differentiable approach discovers the same equation with nearly the same coefficients across the entire range of noise, with only very small additional terms ($\mathcal{O}(10^{-4})$). Neither PDE-find nor SBL with numerical differentiation is able to find the correct equation on this dataset at 10\% noise or higher. \begin{figure} \centering \includegraphics{figures/sbl_experiments.pdf} \caption{Exploration of the robustness of the SBL-constrained model for model discovery of the Burgers equation (panel \textbf{a}). We show the discovered equation over a range of noise for DeepMoD (panel \textbf{b}) and the approach presented in this paper (panel \textbf{c}). The bold orange and blue lines denote $u_{xx}$ and $u u_x$, and the black dashed line their true value.} \label{fig:sbl_experiments} \end{figure} \paragraph{Kuramoto-Sivashinsky} The Kuramoto-Sivashinsky equation describes flame propagation and is given by $u_t = -uu_x - u_{xx} - u_{xxxx}$. 
The fourth order derivative makes it challenging to learn with numerical differentiation-based methods, while its periodic and chaotic nature makes it challenging to learn with neural network-based methods \citep{both_sparsely_2021}. We show here that using the SBL-constrained approach we discover the KS equation from only a small slice of the chaotic data (256 points in space, 25 time steps), with 20\% additive noise. We use a tanh-activated network with 5 layers of 60 neurons each, and the library consists of derivatives up to 5th order and polynomials up to fourth order, for a total of thirty terms. Additionally, we precondition the network by training without the constraint for 10k epochs. Training this dataset to convergence takes significantly longer than the previous examples, as the network struggles with the data's periodicity (panel b). After roughly 70k epochs, a clear separation between active and inactive terms is visible in panel c, but it takes another 30k epochs before all inactive terms are completely pruned from the model. Panels d and e show the corresponding posterior and the maximum likelihood estimate of the coefficients using the whole library. Remarkably, the MLE recovers the correct coefficients for the active terms, while the inactive terms are all nearly zero. In other words, the accuracy of the approximation is so high that least squares identifies the correct equation. \begin{figure} \centering \includegraphics{figures/KS.pdf} \caption{Recovering the Kuramoto-Sivashinsky equation. We show the chaotic data and a cross section in panels \textbf{a} and \textbf{b}. The periodicity makes this a challenging dataset to learn, requiring 200k iterations to fully converge before it can be recovered (panel \textbf{c}).
Panels \textbf{d} and \textbf{e} show that the posterior and the MLE yield nearly the same coefficients, indicating that the network was able to construct an extremely accurate approximation of the data.} \label{fig:KS} \end{figure} \subsection{Model discovery with normalizing flows} Consider a set of particles whose movement is described by a micro-scale stochastic process. In the limit of many such particles, such processes can often be described with a deterministic macro-scale density model, determining the evolution of the density of the particles over time. For example, a biased random walk can be mapped to an advection-diffusion equation. The macro-scale density models are typically more insightful than the corresponding microscopic model, but many (biological) experiments yield single-particle data rather than densities. Discovering the underlying equation thus requires first reconstructing the density profile from the particles' locations. Classical approaches such as binning or kernel density estimation are either non-differentiable, non-continuous or computationally expensive. Normalizing Flows (NFs) have emerged in recent years as a flexible and powerful method of constructing probability distributions, which is similar to density estimation up to a multiplicative factor. In this section we use physics-informed normalizing flows to learn a PDE describing the evolution of the density directly from unlabelled single-particle data. \begin{figure}[h] \centering \includegraphics{figures/NF.pdf} \caption{Using a spatio-temporal Normalizing Flow constrained by Sparse Bayesian Learning to discover the advection-diffusion equation directly from single-particle data. Panel \textbf{a} shows the true density profile, and in panels \textbf{b}, \textbf{c} and \textbf{d} we show the density inferred by binning (blue bars), inferred by the NF (red) and the ground truth (black, dashed) at $t=0.1, 2.5, 4.5$.
Note that although the estimate of the density is very good, we see in panel \textbf{e} that we recover two additional terms (bold blue line: $u_x$, bold orange line: $u_{xx}$).} \label{fig:nf} \end{figure} Since the conditional normalizing flow is used to construct the density, a precision denoting the noise level does not exist, and instead we set the prior for $\beta$ to $(a=N, b=N \cdot 10^{-5})$. We consider a flow consisting of ten planar transforms \citep{rezende_variational_2015} and a hyper-network of two layers with thirty neurons each. The dataset consists of 200 walkers on a biased random walk for 50 steps, corresponding to an advection-diffusion model, with an initial condition consisting of two Gaussians, leading to the density profile shown in figure \ref{fig:nf}a. The two smallest terms in panel e correspond to the advection (bold green line) and diffusion (bold red line) terms, but not all terms are pruned. Panels b, c, and d compare the inferred density (red line) to the true density (dashed black line) and the result obtained by binning. In all three panels the constrained NF is able to infer a fairly accurate density from only 200 walkers. We hypothesise that the extra terms are mainly due to the small deviations in the inferred density, and that properly tuning the prior parameters and using a more expressive transformation would prune the remaining terms completely. Nonetheless, this shows that NFs can be integrated in this fully differentiable model discovery framework. \section{Outlook} Our experiments show a strong improvement over comparable non-differentiable methods, and open up several new avenues to explore. One direction is the choice of the prior parameters for the precision. We presented a reasonable choice of prior parameters, but future work could find better estimates, for example a 'prior-scheduler', similar to a learning rate scheduler, or explore approaches to multitask learning. A different direction is exploring different Bayesian regression methods.
For example, using a Laplacian \citep{helgoy_noise-robust_2020} or spike-and-slab prior can improve sparsity \citep{nayek_spike-and-slab_2020}. Alternatively, the prior can be used to introduce more structure into the problem. For example, the group-SBL could be used to combine data from several experiments \citep{babacan_bayesian_2014}.
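To make the sparse Bayesian learning step underlying the constraint concrete, the following minimal sketch runs a standard RVM-style evidence-maximisation loop (in the spirit of Tipping's SBL, not the exact implementation used in our experiments) on hypothetical toy data, showing how the hierarchical prior drives inactive library terms towards negligible coefficients:

```python
import numpy as np

# Toy library regression: y = 2*x - 1.5*x^3 + noise, five candidate terms.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
Phi = np.stack([np.ones_like(x), x, x**2, x**3, x**4], axis=1)
y = 2.0 * x - 1.5 * x**3 + 0.1 * rng.standard_normal(x.size)

# Sparse Bayesian learning via evidence-maximisation fixed-point updates.
alpha = np.ones(Phi.shape[1])   # per-term precision of the Gaussian prior
beta = 1.0 / y.var()            # noise precision
for _ in range(500):
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ y
    gamma = 1.0 - alpha * np.diag(Sigma)          # effective dof per term
    alpha = np.clip(gamma / (mu**2 + 1e-12), 1e-12, 1e8)
    beta = (y.size - gamma.sum()) / np.sum((y - Phi @ mu) ** 2)

active = alpha < 1e4   # terms whose prior precision stayed moderate
print("coefficients:", mu, "precisions:", alpha)
```

On this toy example the linear and cubic coefficients stay close to their true values, while the remaining terms end up with large precisions and near-zero coefficients.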
\section{\@startsection{section}{1}% \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}% {\bfseries \centering }} \def\@secnumfont{\bfseries} \makeatother \usepackage{graphicx} \newcommand{\textcolor}{\textcolor} \newcommand{\mathbb C}{\mathbb C} \newcommand{\mathbb D}{\mathbb D} \newcommand{\mathbb R}{\mathbb R} \newcommand{\mathbb N}{\mathbb N} \newcommand{\mathbb Q}{\mathbb Q} \newcommand{\mathbb Z}{\mathbb Z} \newcommand{\mathbb E}{\mathbb E} \newcommand{\mathbf{B}}{\mathbf{B}} \newcommand{\mathbf{E}}{\mathbf{E}} \newcommand{\mathbf{G}}{\mathbf{G}} \newcommand{\mathbf{P}}{\mathbf{P}} \newcommand{\mathbf{\Gamma}}{\mathbf{\Gamma}} \newcommand{{\bf 1}}{{\bf 1}} \newcommand{{\bf 2}}{{\bf 2}} \newcommand{\mathcal B}{\mathcal B} \newcommand{\mathcal C}{\mathcal C} \newcommand{\hat {\mathcal C}}{\hat {\mathcal C}} \newcommand{\mathcal E}{\mathcal E} \newcommand{\mathcal F}{\mathcal F} \newcommand{\mathcal H}{\mathcal H} \newcommand{\mathcal I}{\mathcal I} \newcommand{\mathcal J}{\mathcal J} \newcommand{\mathcal L}{\mathcal L} \newcommand{\mathcal N}{\mathcal N} \newcommand{\mathcal Q}{\mathcal Q} \newcommand{\mathcal S}{\mathcal S} \newcommand{\mathcal T}{\mathcal T} \newcommand{\mathcal W}{\mathcal W} \newcommand{\mathcal Z}{\mathcal Z} \newcommand{\mathcal R}{\mathcal R} \newcommand{\alpha}{\alpha} \newcommand{\gamma}{\gamma} \newcommand{\Gamma}{\Gamma} \newcommand{\delta}{\delta} \newcommand{\varepsilon}{\varepsilon} \newcommand{\iota}{\iota} \newcommand{\kappa}{\kappa} \newcommand{\lambda}{\lambda} \newcommand{\Lambda}{\Lambda} \newcommand{\omega}{\omega} \newcommand{\Omega}{\Omega} \newcommand{\Sigma}{\Sigma} \newcommand{\sigma}{\sigma} \newcommand{\theta}{\theta} \newcommand{\varphi}{\varphi} \newcommand{\zeta}{\zeta} \newcommand{\left(}{\left(} \newcommand{\right)}{\right)} \newcommand{\left[}{\left[} \newcommand{\right]}{\right]} \newcommand{\left\{}{\left\{} \newcommand{\right\}}{\right\}} \newcommand{\left|}{\left|} \newcommand{\right|}{\right|} 
\newcommand{\left\langle}{\left\langle} \newcommand{\right\rangle}{\right\rangle} \newcommand{\left\langle}{\left\langle} \setlength{\textheight}{19.5 cm} \setlength{\textwidth}{14 cm} \newtheorem{theorem}{Theorem}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{remark} \newtheorem{remark}{Remark} \numberwithin{equation}{section} \setcounter{page}{1} \begin{document} \title[Oscillating Processes]{Oscillating Gaussian Processes} \author[P.Ilmonen]{Pauliina Ilmonen$^1$} \address{1: Aalto University School of Science, Department of Mathematics and Systems Analysis, Finland. } \email{pauliina.ilmonen@aalto.fi} \author[S.Torres]{Soledad Torres$^2$} \address{2: Facultad de Ingenier\'ia, CIMFAV Universidad de Valpara\'iso, Casilla 123-V, 4059 Valparaiso, Chile. } \email{soledad.torres@uv.cl} \author[L.Viitasaari]{Lauri Viitasaari$^3$} \address{3: Aalto University School of Business, Department of Information and Service Management, Finland (\textbf{Corresponding author})} \email{lauri.viitasaari@iki.fi} \begin{abstract} In this article we introduce and study oscillating Gaussian processes defined by $X_t = \alpha_+ Y_t {\bf 1}_{Y_t >0} + \alpha_- Y_t{\bf 1}_{Y_t<0}$, where $\alpha_+,\alpha_->0$ are free parameters and $Y$ is either stationary or self-similar Gaussian process. We study the basic properties of $X$ and we consider estimation of the model parameters. In particular, we show that the moment estimators converge in $L^p$ and are, when suitably normalised, asymptotically normal. 
\end{abstract} \maketitle \medskip\noindent {\bf Mathematics Subject Classifications (2010)}: 60G15 (primary), 60F05, 60F25, 62F10, 62F12 \medskip\noindent {\bf Keywords:} Gaussian processes, oscillating processes, stationarity, self-similarity, parameter estimation, central limit theorem \allowdisplaybreaks \section{Introduction} During the past two decades interest in the existence and uniqueness of solutions of stochastic differential equations driven by a fractional Brownian motion has been very intense, and there have been many advances in their theory and applications. In particular, strong solutions of the following stochastic differential equation (SDE in short) \begin{equation}\label{sde} X_t = X_0 + \int_0^t b(s,X_s) ds + \int_0^t \sigma(s,X_s) dB^H_s, \end{equation} under usual conditions on the coefficients, such as Lipschitz continuity and linear growth, were developed by Nualart and R$\check{a}\text{\c{s}}$canu \cite{NualartRascanu2002}, and have been considered by many authors, see \cite{Mishura} and the references therein. Nevertheless, the case of SDEs with discontinuous coefficients has been less explored. Most of the SDEs driven by a fractional Brownian motion with discontinuous coefficients that have been studied are those with a discontinuous drift coefficient (for $H>1/2$). Regarding that, in \cite{MN}, the authors studied a drift that is H\"older continuous except on a finite number of points. Another class of discontinuity in SDEs driven by a fractional Brownian motion is related to adding a Poisson process to the equation. In \cite{BM}, extending the results given in \cite{MN}, the authors proved the existence of a strong solution of this kind of SDE driven by a fractional Brownian motion and a Poisson point process.
To the best of our knowledge, in the fractional Brownian motion framework, there is only one preliminary work that studies equations with a discontinuous diffusion coefficient, written by Garz\'on et al. \cite{GLT}. There the authors proved the existence and uniqueness of solutions to the SDE driven by the fractional Brownian motion $B^H$ with $H>\frac12$ given by \begin{equation}\label{DISC} X_t = X_0 + \int_0^t \sigma (X_s) d B^H_s, \quad t \geq 0, \end{equation} where the function $\sigma$ is given by \begin{equation}\label{sigma1} \sigma (x) = \frac{1}{\alpha} {\bf 1}_{x \geq 0} + \frac{1}{1-\alpha} {\bf 1}_{x < 0}, \ \alpha \in \left(0,\frac12\right). \end{equation} The authors showed that the explicit solution to equation (\ref{DISC}) is \begin{equation}\label{SDISC} X_t = \alpha B_t^H {\bf 1}_{B^H_t > 0} + (1- \alpha ) B^H_t{\bf 1}_{B^H_t < 0}, \quad t \geq 0. \end{equation} It is straightforward to see that the explicit existence and uniqueness of the solution to equation (\ref{DISC}) also holds if $\alpha$ and $1-\alpha$ are replaced with $\alpha_+$ and $\alpha_-$ satisfying $ 0 < \alpha_- < \alpha_+$ (or $0 < \alpha_+ < \alpha_-$, respectively). One of the reasons why SDEs with a discontinuous diffusion coefficient are interesting is their relation to the Skew Brownian motion. In the Brownian motion framework, the Skew Brownian motion appeared as a natural generalization of the Brownian motion. The Skew Brownian motion is a process that behaves like a Brownian motion except that the sign of each excursion is chosen using an independent Bernoulli random variable with parameter $\alpha \in (0 , 1)$. For $\alpha = 1/2$, the process corresponds to a Brownian motion. This process is a Markov process and a semi-martingale. Moreover, it is a strong solution to a certain SDE involving local time (see \cite{Lejay} for a survey).
Let \begin{equation}\label{SB} X_t = x + B_t + (2\alpha - 1) L^0_t(X), \end{equation} where $L_t^0(X)$ is the symmetric local time of $X$ at $0$. In the case of the Brownian motion, it follows from the It\^o-Tanaka formula that the equations (\ref{SB}) and (\ref{DISC}) with $\sigma (x) = \frac{1}{\alpha} {\bf 1}_{\{x \geq 0\}} + \frac{1}{1-\alpha} {\bf 1}_{ \{x < 0\}}$ are equivalent. For a comprehensive survey on the Skew Brownian motion, see the work by Lejay \cite{Lejay}. In the case of the fractional Brownian motion, the Tanaka-type formulas are more complicated and no relations between the two types of equations are known to exist. The motivation for the authors in \cite{GLT} to study equation \eqref{DISC} stemmed from this fact. To the best of our knowledge, \cite{LP} is the only study that considers the inference of parameters related to an SDE with a discontinuous diffusion coefficient. The study considers the case of a discontinuous diffusion coefficient that can only attain two different values. More precisely, the authors of \cite{LP} studied the so-called \emph{oscillating Brownian motion} that is a solution to the SDE \begin{equation}\label{le-pi} X_t = x + \int_0^t \sigma(X_s) dW_s, \end{equation} where $W$ is a standard Brownian motion and $\sigma(x) = \alpha_+{\bf 1}_{x \ge 0} + \alpha_- {\bf 1}_{x < 0}, \quad x \in \mathbb{R}$. The authors proposed two natural consistent estimators, which are variants of the integrated volatility estimator restricted to the corresponding regions. Moreover, the stable convergence of the renormalised estimators towards a certain Gaussian mixture was proven. The estimators are given by \begin{eqnarray}\label{LP} \hat{\alpha}_+ = \sqrt{\frac{\sum_{k=1}^n \left(X_k - X_{k-1} \right)^2 {\bf 1}_{X_{k-1} \ge 0} }{\sum_{k=1}^{n} {\bf 1}_{X_{k-1} \ge 0} }}, \quad \hat{\alpha}_- = \sqrt{\frac{\sum_{k=1}^n \left(X_k - X_{k-1} \right)^2 {\bf 1}_{X_{k-1} < 0} }{\sum_{k=1}^{n} {\bf 1}_{X_{k-1} < 0} }}. \end{eqnarray} Note that when the paths are strictly positive or strictly negative, only one of the estimators can be computed.
Motivated by Equation (\ref{SDISC}), we define the Oscillating Gaussian process by \begin{equation} \label{OfBm-b} X_t = \alpha_+ Y_t{\bf 1}_{Y_t > 0} + \alpha_- Y_t{\bf 1}_{Y_t < 0}, \quad t \in T, \end{equation} where $\alpha_+$ and $\alpha_-$ are both strictly positive (or both strictly negative) constants. In addition to the above mentioned links to SDEs and the Skew Brownian motion, we note that \eqref{OfBm-b} could be applied in various other modelling scenarios as well, making the oscillating Gaussian process an interesting object of study. For example, \eqref{OfBm-b} can be viewed as a model for situations where the variance changes between regions. One of our main interests in this paper is the estimation of the model parameters $\alpha_+$ and $\alpha_-$. In order to be able to compute estimators for both parameters in all possible cases, we define estimators based on moments and study their asymptotic properties. Moreover, we show that our moment based approach can be applied for a large class of driving Gaussian processes $Y$ in \eqref{OfBm-b}. The rest of the paper is organised as follows. In Section \ref{sec:oscillating}, we introduce the oscillating Gaussian processes and study their basic properties such as moments, covariance structures, and continuity properties. Section \ref{sec:calibration} is devoted to model calibration. We begin by showing that the moment estimators are consistent and satisfy central limit theorems under suitable assumptions on the driving Gaussian process. On top of that, we also consider corresponding estimators based on discrete observations. In Subsection \ref{subsec:ss-oscillating}, we briefly discuss how the Lamperti transform can be used to study oscillating Gaussian processes driven by self-similar Gaussian noise, and as a particular example, we apply the method to the case of the bifractional Brownian motion. We end the paper with a short summary and a discussion about future prospects.
\section{Oscillating Gaussian processes} \label{sec:oscillating} Throughout this section we consider oscillating Gaussian processes $X=(X_t)_{t\geq 0}$ defined by \begin{equation} \label{OfBm-general} X_t = \alpha_+ Y_t{\bf 1}_{Y_t > 0} + \alpha_- Y_t{\bf 1}_{Y_t < 0}, \end{equation} where $Y = (Y_t)_{t\geq 0}$ is a stationary Gaussian process and $\alpha_+$ and $\alpha_-$ are positive parameters such that $\alpha_+\neq \alpha_-$. Note that $\alpha_+$ and $\alpha_-$ describe the magnitude of the variations of $X$ on the different regions. Our goal is to estimate the unknown parameters $\alpha_+$ and $\alpha_-$. In order to do this, we assume that $\mathbb E (Y_t^2)=1$. Note that the general case $\mathbb E (Y_t^2) = \sigma^2$ can be written as $$ X_t = \alpha_+ \sigma \tilde{Y}_t{\bf 1}_{\tilde{Y}_t > 0} + \alpha_-\sigma \tilde{Y}_t{\bf 1}_{\tilde{Y}_t < 0}, $$ where now $\mathbb E (\tilde{Y}_t^2) = 1$. We also assume that the parameters $\alpha_+$ and $\alpha_-$ are both strictly positive (or both strictly negative). \begin{remark} Note that we can extend our analysis in a straightforward manner to the case $\alpha_-<0<\alpha_+$ (or $\alpha_->0>\alpha_+$) as well. The reason is that we defined $X$ directly through \eqref{OfBm-general}, instead of restricting ourselves to the situation where $X$ is a solution to the SDE (\ref{DISC}), in which case the solution is known to exist and to be of the form \eqref{OfBm-general} only for $\alpha_-,\alpha_+>0$. See also Remark \ref{rem:extension}. \end{remark} \begin{definition}[Oscillating Gaussian process (OGP)] Let $Y$ be a centered stationary Gaussian process with variance $\sigma^2=1$ and covariance function $r(t)$, and let $\alpha_+,\alpha_->0, \alpha_+\neq \alpha_-$ be constants. We define the oscillating version $X$ of $Y$ by \begin{equation} \label{OfBm} X_t = \alpha_+ Y_t{\bf 1}_{Y_t > 0} + \alpha_- Y_t{\bf 1}_{Y_t < 0}.
\end{equation} \end{definition} In the following lemmas we compute the moments and covariances of the OGP $X$ defined in (\ref{OfBm}). \begin{lemma} \label{lemma:moments} Let $n\geq 1$ be an integer and $t\geq 0$ arbitrary. Then $$ \mu_n := \mathbb E (X_0^n) = \mathbb E (X_t^n) = \frac{2^{\frac{n}{2}}\Gamma\left(\frac{n+1}{2}\right)}{2\sqrt{\pi}}(\alpha_+^n+(-1)^n\alpha_-^n). $$ \end{lemma} \begin{proof} By the definition of OGP, we have $$ X_t^n = \alpha_+^nY_t^n{\bf 1}_{Y_t > 0} + \alpha_-^n Y_t^n{\bf 1}_{Y_t < 0}. $$ Since $Y$ is a centered stationary Gaussian process we have \begin{equation} \label{eq:positive-moment} \mathbb E (Y_t^n{\bf 1}_{Y_t >0}) = \int_0^\infty \frac{x^n}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx = \frac{1}{2}\int_{-\infty}^\infty \frac{|x|^n}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx = \frac{1}{2}\mathbb E|N|^n, \end{equation} where $N\sim \mathcal{N}(0,1)$. Similarly, \begin{equation} \label{eq:negative-moment} \mathbb E (Y_t^n{\bf 1}_{Y_t <0}) = (-1)^n\frac{1}{2}\mathbb E|N|^n. \end{equation} Now, the well known formula for a standard normal variable $\mathbb E|N|^n = \frac{2^{\frac{n}{2}}\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}}$, implies the claim. \end{proof} The following lemma allows us to compute the parameters $\alpha_+$ and $\alpha_-$ in terms of the moments. \begin{lemma} \label{lemma:parameters} Let $t>0$ be arbitrary. Then $$ \alpha_+ = \sqrt{\frac{\pi}{2}}\mu_1 + \frac{1}{2}\sqrt{4\mu_2-2\pi(\mu_1)^2} $$ and $$ \alpha_- = -\sqrt{\frac{\pi}{2}}\mu_1 + \frac{1}{2}\sqrt{4\mu_2-2\pi(\mu_1)^2}. $$ \end{lemma} \begin{proof} Since $\Gamma(1) =1$ and $\Gamma\left(\frac{3}{2}\right) = \frac{\sqrt{\pi}}{2}$, Lemma \ref{lemma:moments} yields $$ \mu_1 = \frac{1}{\sqrt{2\pi}}\left(\alpha_+-\alpha_-\right) $$ and $$ \mu_2 = \frac{1}{2}\left(\alpha_+^2+\alpha_-^2\right). $$ From the first equality we get $$ \alpha_+ = \alpha_- + \sqrt{2\pi}\mu_1. 
$$ Plugging this into the second equality and simplifying gives \begin{equation} \label{eq:quadratic} 2\alpha_-^2 + 2\sqrt{2\pi}\mu_1 \alpha_- + 2\pi \mu_1^2 -2\mu_2 = 0. \end{equation} Now $$ 4\mu_2 -2\pi\mu_1^2 = 2\alpha_+^2+2\alpha_-^2 - (\alpha_+ - \alpha_-)^2 = (\alpha_+ + \alpha_-)^2 > 0, $$ and since $\alpha_->0$, we obtain the result. \end{proof} \begin{remark} \label{rem:extension} Note that in the proof of Lemma \ref{lemma:parameters} we applied the assumption $\alpha_->0$. In the case $\alpha_-<0<\alpha_+$, one has to choose the other solution to Equation \eqref{eq:quadratic}, yielding $$ \alpha_- = -\sqrt{\frac{\pi}{2}}\mu_1 - \frac{1}{2}\sqrt{4\mu_2-2\pi(\mu_1)^2}. $$ \end{remark} In the next lemma, we derive the covariance function of the process $X$. This allows us to obtain consistency of our estimators. \begin{lemma} \label{lemma:pos-covariance} Let $N_1\sim\mathcal{N}(0,1)$ and $N_2\sim\mathcal{N}(0,1)$ be such that $Cov(N_1,N_2) = a$. Then $$ \mathbb E(N_1^mN_2^n{\bf 1}_{N_1,N_2>0})=2^{\frac{n+m-4}{2}}\pi^{-1}(1-a^2)^{\frac{n+m+1}{2}}\sum_{r=0}^\infty \frac{(2a)^r}{r!}\Gamma\left(\frac{n+r+1}{2}\right)\Gamma\left(\frac{m+r+1}{2}\right). $$ \end{lemma} \begin{proof} We have $$ \mathbb E(N_1^mN_2^n{\bf 1}_{N_1,N_2>0}) = \frac{1}{2\pi\sqrt{1-a^2}}\int_0^\infty\int_0^\infty x^my^n e^{-\frac{x^2+y^2-2axy}{2(1-a^2)}}dx dy. $$ The change of variables $u = \frac{x}{\sqrt{2(1-a^2)}}$ and $v=\frac{y}{\sqrt{2(1-a^2)}}$ gives $$ \mathbb E(N_1^mN_2^n{\bf 1}_{N_1,N_2>0}) = 2^{\frac{n+m}{2}}\pi^{-1}(1-a^2)^{\frac{n+m+1}{2}}\int_0^\infty\int_0^\infty u^mv^n e^{-u^2-v^2+2auv}du dv, $$ and expanding $e^{2auv}$ into its power series and integrating term by term (cf. formula 3.5-5 in \cite{rice}) we obtain $$ \int_0^\infty\int_0^\infty u^mv^n e^{-u^2-v^2+2auv}du dv = \frac{1}{4}\sum_{r=0}^\infty \frac{(2a)^r}{r!}\Gamma\left(\frac{n+r+1}{2}\right)\Gamma\left(\frac{m+r+1}{2}\right). $$ This proves the claim. \end{proof} In the sequel we apply the standard Landau notation $O(\cdot)$.
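The moment inversion of Lemma \ref{lemma:parameters} can be verified numerically. A minimal sketch (in Python with NumPy; the values of $\alpha_\pm$ are arbitrary illustrative choices) maps a pair $(\alpha_+,\alpha_-)$ to $(\mu_1,\mu_2)$ and back:

```python
import numpy as np

# Forward map from Lemma "moments" for n = 1, 2 (standard normal driver):
# mu_1 = (alpha_+ - alpha_-) / sqrt(2*pi),  mu_2 = (alpha_+^2 + alpha_-^2) / 2.
a_plus, a_minus = 1.7, 0.4
mu1 = (a_plus - a_minus) / np.sqrt(2 * np.pi)
mu2 = 0.5 * (a_plus**2 + a_minus**2)

# Inversion from Lemma "parameters": recover alpha_+ and alpha_- from mu_1, mu_2.
root = 0.5 * np.sqrt(4 * mu2 - 2 * np.pi * mu1**2)
rec_plus = np.sqrt(np.pi / 2) * mu1 + root
rec_minus = -np.sqrt(np.pi / 2) * mu1 + root
print(rec_plus, rec_minus)
```

The recovery is exact up to floating-point precision, since the discriminant equals $(\alpha_++\alpha_-)^2$.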
\begin{corollary} \label{corollary:pos-covariance-asymp} Let $N_1\sim\mathcal{N}(0,1)$ and $N_2\sim\mathcal{N}(0,1)$ be such that $Cov(N_1,N_2) = a$, and let $n\geq 1$ be an integer. Then $$ \mathbb E(N_1^nN_2^n{\bf 1}_{N_1,N_2>0})=2^{n-2}\pi^{-1}\Gamma\left(\frac{n+1}{2}\right)^2 + O(|a|) $$ and $$ \mathbb E(N_1^nN_2^n{\bf 1}_{N_1>0,N_2<0})=(-1)^n 2^{n-2}\pi^{-1}\Gamma\left(\frac{n+1}{2}\right)^2 + O(|a|). $$ \end{corollary} \begin{proof} It follows from Lemma \ref{lemma:pos-covariance} that $$ \mathbb E(N_1^nN_2^n{\bf 1}_{N_1,N_2>0}) = 2^{n-2}\pi^{-1}\Gamma\left(\frac{n+1}{2}\right)^2 (1-a^2)^{\frac{2n+1}{2}}+ O(|a|). $$ Now, the first claim follows from the fact that $$ (1-a^2)^{\frac{2n+1}{2}} = 1 + O(|a|). $$ The second claim follows similarly, since $$ \mathbb E(N_1^nN_2^n{\bf 1}_{N_1>0,N_2<0}) = (-1)^n\mathbb E(N_1^n(-N_2)^n{\bf 1}_{N_1>0,-N_2>0}). $$ \end{proof} \begin{corollary} \label{corollary:covariance-X} Let $X$ be the oscillating Gaussian process defined in \eqref{OfBm}. Then $$ Cov(X_t^n,X_s^n) =O(|r(t-s)|), $$ where $r$ is the covariance function of $Y$. \end{corollary} \begin{proof} We have \begin{equation*} \begin{split} X_t^n X_s^n &= \alpha_+^{2n}Y_t^nY_s^n{\bf 1}_{Y_t,Y_s > 0} + \alpha_-^{2n} Y_t^nY_s^n{\bf 1}_{Y_t,Y_s < 0} \\ &+ \alpha^n_+\alpha^n_-(Y_t^nY_s^n{\bf 1}_{Y_t>0,Y_s<0}+Y_t^nY_s^n{\bf 1}_{Y_t<0,{Y}_s>0}). \end{split} \end{equation*} Taking expectations and using Corollary \ref{corollary:pos-covariance-asymp} we get $$ \mathbb E(X_t^n X_s^n) = 2^{n-2}\pi^{-1}\Gamma\left(\frac{n+1}{2}\right)^2\left(\alpha_+^{2n}+\alpha_-^{2n} + 2(-1)^n\alpha^n_+\alpha^n_-\right) + O(|r(t-s)|). $$ Lemma \ref{lemma:moments} now implies the claim. \end{proof} We end this section with the following result that ensures the path continuity of the OGP $X$. \begin{proposition} \label{prop:holder} Let $X$ be the oscillating Gaussian process defined by \eqref{OfBm}. If $Y$ has H\"older continuous paths of order $\gamma\in(0,1]$ almost surely, then so does $X$.
\end{proposition} \begin{proof} The result follows from the simple observations that \begin{equation*} \begin{split} &|Y_t{\bf 1}_{Y_t>0} - Y_s{\bf 1}_{Y_s>0}| \\ &=Y_t{\bf 1}_{Y_t>0\geq Y_s} + Y_s{\bf 1}_{Y_s>0\geq Y_t} + |Y_t-Y_s|{\bf 1}_{Y_t,Y_s>0}\\ &\leq (Y_t-Y_s){\bf 1}_{Y_t>0\geq Y_s} + (Y_s-Y_t){\bf 1}_{Y_s>0\geq Y_t} + |Y_t-Y_s|{\bf 1}_{Y_t,Y_s>0}\\ &\leq |Y_t-Y_s|\left({\bf 1}_{Y_t>0\geq Y_s} + {\bf 1}_{Y_s>0\geq Y_t} +{\bf 1}_{Y_t,Y_s>0}\right)\\ &\leq |Y_t-Y_s|. \end{split} \end{equation*} Similarly, $$ |Y_t{\bf 1}_{Y_t<0} - Y_s{\bf 1}_{Y_s<0}| \leq |Y_t-Y_s|, $$ from which the claim follows. \end{proof} \section{Model calibration} \label{sec:calibration} This section is devoted to the estimation of the unknown parameters $\alpha_+,\alpha_-$ by the method of moments. Following the ideas of Lemma \ref{lemma:parameters}, we define \begin{equation} \label{eq:alpha-plus_estimator} \hat\alpha_+(T) = \sqrt{\frac{\pi}{2}}\hat\mu_1(T) + \frac{1}{2}\sqrt{\left|4\hat\mu_2(T)-2\pi\hat\mu^2_1(T)\right|} \end{equation} and \begin{equation} \label{eq:alpha-minus_estimator} \hat\alpha_-(T) = \hat\alpha_+(T) - \sqrt{\frac{\pi}{2}}\hat\mu_1(T), \end{equation} where $\hat\mu_i(T), i=1,2$ are the classical moment estimators defined by \begin{equation} \label{eq:moment-estimator} \hat\mu_{i}(T) = \frac{1}{T} \int_0^{T} X_u^i du. \end{equation} \begin{remark} Note that here we have taken absolute values inside the square roots in order to obtain real-valued estimates of real-valued quantities. Since $$ 4\mu_2 - 2\pi\mu_1^2 > 0, $$ this does not affect the asymptotic properties of the estimators. \end{remark} The following result gives consistency and can be viewed as one of our main theorems. The proof is postponed to Subsection \ref{subsec:proofs}. \begin{theorem} \label{thm:estimation-consistency} Assume that $|r(T)|\to 0$ as $T\to \infty$.
Then, for any $p\geq 1$, we have $$ \hat\alpha_+(T) \to \alpha_+ $$ and $$ \hat\alpha_-(T) \to \alpha_- $$ in $L^p$, as $T \to \infty$. \end{theorem} In order to study the limiting distribution, we need some additional assumptions on the covariance function $r$. \begin{assumption} \label{assumption:covariance} Let $r$ be the covariance function of $Y$. We assume that one of the following conditions holds: \begin{enumerate} \item The covariance function $r$ satisfies $r\in L^1(\mathbb R)$. \item We have that $$ \lim_{t\to\infty}\frac{r(t)}{t^{-1}} = C < \infty. $$ \item There exists $H\in\left(\frac12,1\right)$ such that $$ \lim_{t\to\infty}\frac{r(t)}{t^{2H-2}} = C < \infty. $$ \end{enumerate} \end{assumption} \begin{remark} The first condition in Assumption \ref{assumption:covariance} corresponds to short-range dependence and the last condition corresponds to long-range dependence. The second condition corresponds to the border case, resulting in a logarithmic factor in our normalising sequence (see Theorem \ref{thm:estimation-clt}). \end{remark} The following theorem gives the central limit theorem for the moment estimators. \begin{theorem} \label{thm:estimation-moment-clt} Let $\hat\mu_1(T)$ and $\hat\mu_2(T)$ be defined by \eqref{eq:moment-estimator}, and let $\hat\mu(T) = (\hat\mu_1(T),\hat\mu_2(T))$ and $\mu = (\mu_1,\mu_2)$.
Then, \begin{enumerate} \item \label{c1} if $r$ satisfies the condition (1) of Assumption \ref{assumption:covariance}, $$ \sqrt{T}\left(\hat\mu(T) - \mu\right) \to \mathcal{N}(0,\Sigma_1^2) $$ in law as $T \to \infty$, \item \label{c2} if $r$ satisfies the condition (2) of Assumption \ref{assumption:covariance}, $$ \sqrt{\frac{T}{\log T}}\left(\hat\mu(T)-\mu\right) \to \mathcal{N}(0,\Sigma_2^2) $$ in law as $T \to \infty$, and \item \label{c3} if $r$ satisfies the condition (3) of Assumption \ref{assumption:covariance}, $$ T^{1-H}\left(\hat\mu(T)-\mu\right) \to \mathcal{N}(0,\Sigma_3^2) $$ in law as $T \to \infty$, \end{enumerate} where $\Sigma_1^2$, $\Sigma_2^2$, and $\Sigma_3^2$ are constant covariance matrices depending on $\alpha_+,\alpha_-$, and the covariance $r$. \begin{remark} Note that the covariance matrices $\Sigma^2_i,i=1,2,3$ in Theorem \ref{thm:estimation-moment-clt} can be calculated explicitly in terms of the covariance $r$, $\alpha_+$, and $\alpha_-$ by computing the chaos decompositions of the functions $ f_1(x) = \alpha_+ x{\bf 1}_{x>0} + \alpha_- x{\bf 1}_{x<0} $ and $f_2(x) =\alpha_+^2 x^2{\bf 1}_{x>0} + \alpha_-^2 x^2{\bf 1}_{x<0}$. \end{remark} \begin{remark} By replacing $\hat\mu_n(T)$ with $$ \hat\mu_n(t,T) = \frac{1}{T}\int_0^{tT} X_u^n du $$ and normalising accordingly, one can obtain functional versions of the above limit theorems. That is, in cases (\ref{c1}) and (\ref{c2}) of Theorem \ref{thm:estimation-moment-clt}, we obtain convergence in law in the space of continuous functions towards $\sigma W_t$, where $W_t$ is a Brownian motion. In the case (\ref{c3}), the limiting process is $\sigma B^H_t$, where $B^H$ is the fractional Brownian motion. Indeed, the last case follows from a classical result by Taqqu \cite{taqqu} and the first case from \cite{nourdin-nualart} and from the fact that all moments of $X$ are finite.
However, from a practical point of view, translating these results to functional versions of the estimators $\hat\alpha_+(T)$ and $\hat\alpha_-(T)$ is not feasible. Indeed, this follows from the fact that in the functional central limit theorem for $\hat\mu(t,T)$ the normalisation (subtracting the true value) is done inside the integral, while for $\hat\alpha_+(T)$ and $\hat\alpha_-(T)$ this is done after integration. \end{remark} Theorems \ref{thm:estimation-consistency} and \ref{thm:estimation-moment-clt} now give us the following limiting distributions for the estimators $\hat\alpha_+(T)$ and $\hat\alpha_-(T)$. \begin{theorem} \label{thm:estimation-clt} Let $\hat\alpha_+(T)$ and $\hat\alpha_-(T)$ be defined by \eqref{eq:alpha-plus_estimator} and \eqref{eq:alpha-minus_estimator}, respectively, and let $\hat\alpha(T) = (\hat\alpha_+(T),\hat\alpha_-(T))$ and $\alpha = (\alpha_+,\alpha_-)$. Then, \begin{enumerate} \item if $r$ satisfies the condition (1) of Assumption \ref{assumption:covariance}, $$ \sqrt{T}\left(\hat\alpha(T)-\alpha\right) \to \mathcal{N}(0,\Sigma_A^2) $$ in law, \item if $r$ satisfies the condition (2) of Assumption \ref{assumption:covariance}, then $$ \sqrt{\frac{T}{\log T}}\left(\hat\alpha(T)-\alpha\right) \to \mathcal{N}(0,\Sigma_B^2) $$ in law, and \item if $r$ satisfies the condition (3) of Assumption \ref{assumption:covariance}, then $$ T^{1-H}\left(\hat\alpha(T)-\alpha\right) \to \mathcal{N}(0,\Sigma_C^2) $$ in law, \end{enumerate} where $\Sigma_A^2$, $\Sigma_B^2$, and $\Sigma_C^2$ are constant covariance matrices depending on $\alpha_+,\alpha_-$, and the covariance $r$. \end{theorem} \begin{proof} The result follows from Theorems \ref{thm:estimation-consistency} and \ref{thm:estimation-moment-clt} together with a simple application of the multidimensional delta method. We leave the details to the reader.
\end{proof} \begin{remark} As in the case of Theorem \ref{thm:estimation-moment-clt}, the covariance matrices $\Sigma^2_j$, $j=A,B,C$, in Theorem \ref{thm:estimation-clt} can be calculated explicitly. Indeed, by utilising the two-dimensional delta method, $\Sigma^2_j$, $j=A,B,C$, are linear transformations of $\Sigma^2_i$, $i=1,2,3$, defined in Theorem \ref{thm:estimation-moment-clt}. \end{remark} \subsection{Proofs of Theorems \ref{thm:estimation-consistency} and \ref{thm:estimation-moment-clt}} \label{subsec:proofs} We begin with the following versions of the weak law of large numbers. \begin{proposition}[Laws of large numbers] \label{prop:wlln} Let $n\geq 1$ and suppose that $|r(T)| \to 0$ as $|T| \to \infty$. Then, for any $p\geq 1$, $$ \frac{1}{T}\int_0^{T} X_u^n du \to \frac{2^{\frac{n}{2}}\Gamma\left(\frac{n+1}{2}\right)}{2\sqrt{\pi}}(\alpha_+^n+(-1)^n\alpha_-^n) $$ in $L^p$ as $T \to \infty$. \end{proposition} \begin{proof} In order to prove the claim, we have to show that $$ \left\Vert\frac{1}{T}\int_0^T \left(X_u^n - \mu_n\right) du\right\Vert_p \to 0, $$ where $\Vert \cdot\Vert_p$ is the $L^p$-norm. We first observe that it suffices to prove convergence in probability. Indeed, for every $p\geq 1$ and $\epsilon>0$, we have $$ \sup_{T\geq 1}\left\Vert\frac{1}{T}\int_0^T \left(X_u^n - \mu_n\right) du\right\Vert_{p+\epsilon} \leq \sup_{T\geq 1}\frac{1}{T}\int_0^T \left\Vert X_u^n - \mu_n\right\Vert_{p+\epsilon} du \leq C. $$ Thus, for every $p$, the quantity $$ \left|\frac{1}{T}\int_0^T \left(X_u^n - \mu_n\right) du\right|^{p} $$ is uniformly integrable. Now the result follows from the fact that uniform integrability and convergence in probability together imply convergence in $L^1$, i.e. $$ \mathbb E\left|\frac{1}{T}\int_0^T \left(X_u^n - \mu_n\right) du\right|^{p} \to 0 \quad \mbox{as}\quad T \to \infty. $$ Let us now prove the convergence in $L^2$, which then implies the convergence in probability.
By Corollary \ref{corollary:covariance-X}, we have that $$ \mathbb E\left|\frac{1}{T}\int_0^T X_u^n du - \frac{2^{\frac{n}{2}}\Gamma\left(\frac{n+1}{2}\right)}{2\sqrt{\pi}}(\alpha_+^n+(-1)^n\alpha_-^n)\right|^2 = T^{-2}\int_0^T\int_0^T a(u,s)du ds, $$ where $a(u,s) = O(|r(s-u)|)$. Writing \begin{equation*} \begin{split} &\int_{(u,s)\in[0,T]^2}r(u-s)du ds \\ &= \int_{(u,s)\in[0,T]^2,|u-s|\geq T_0}r(u-s)du ds + \int_{(u,s)\in[0,T]^2,|u-s|<T_0}r(u-s)du ds \end{split} \end{equation*} and choosing $T_0$ such that $|r(u-s)|<\epsilon$ on $\{(u,s)\in[0,T]^2,|u-s|\geq T_0\}$ yields the result. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:estimation-consistency}] By Proposition \ref{prop:wlln}, we have that $(\hat\mu_1(T),\hat\mu_2(T)) \to (\mu_1,\mu_2)$ in $L^p$ as $T\to \infty$. As $ \displaystyle \sup_{T\geq 1}\Vert \hat\mu_1(T)\Vert_p < \infty$ for all $p\geq 1$, it follows from H\"older's inequality that, for any $\delta>0$, we have $$ \Vert \hat\mu^2_1(T) - \mu_1^2\Vert_p = \Vert (\hat\mu_1(T) + \mu_1)(\hat\mu_1(T)-\mu_1)\Vert_p \leq C \Vert \hat\mu_1(T)-\mu_1\Vert_{p+\delta}, $$ where $C$ is a constant. Thus $$ \Vert \hat\mu^2_1(T) - \mu_1^2\Vert_p \to 0 \quad \mbox{as} \quad T \to \infty. $$ Now, using $|\sqrt{a}-\sqrt{b}| \leq \sqrt{|a-b|}$ and the triangle inequality, we get \begin{equation*} \begin{split} &\sqrt{\left|4\hat\mu_2(T)-2\pi\hat\mu^2_1(T)\right|} - \sqrt{\left|4\mu_2-2\pi\mu^2_1\right|} \\ &\leq C\sqrt{|\hat\mu_2(T)-\mu_2|} + C\sqrt{|\hat\mu_1^2(T) - \mu_1^2|}. \end{split} \end{equation*} The claim now follows from the fact that, for any random variable $Z$ and for any $p\geq 2$, $$ \Vert \sqrt{|Z|}\Vert_p = \sqrt{\Vert Z\Vert_{p/2}}. $$ \end{proof} We proceed now to the proof of Theorem \ref{thm:estimation-moment-clt}. Before that we recall some preliminaries. Let $N\sim \mathcal{N}(0,1)$ and let $f$ be a function such that $\mathbb E \left( f(N)^2\right) < \infty$.
Then $f$ admits the Hermite decomposition $$ f(x) = \sum_{k=0}^\infty \beta_k H_k(x), $$ where $H_k$, $k=0,1,\ldots$, are the Hermite polynomials. The index $d=\min\{k\geq 1: \beta_k \neq 0\}$ is called the Hermite rank of $f$. For our purposes we need to consider the functions $$ f_i(x) = \alpha^i_+ x^i {\bf 1}_{x>0} + \alpha^i_- x^i {\bf 1}_{x<0}, \quad i=1,2. $$ The Hermite decompositions of $f_1$ and $f_2$ are denoted by \begin{equation} \label{eq:f1} f_1(x) = \sum_{k=0}^\infty \beta_{1,k}H_k(x) \end{equation} and \begin{equation} \label{eq:f2} f_2(x) = \sum_{k=0}^\infty \beta_{2,k}H_k(x), \end{equation} respectively. \begin{proof}[Proof of Theorem \ref{thm:estimation-moment-clt}] By the Cram\'er--Wold device, it suffices to prove that each linear combination $$ Z(y_1,y_2,T):= y_1(\hat\mu_1(T) - \mu_1) + y_2(\hat\mu_2(T)-\mu_2), $$ when properly normalised, converges towards a normal distribution. By using the representations \eqref{eq:f1} and \eqref{eq:f2}, it follows that $Z(y_1,y_2,T)$ has the representation \begin{equation} \label{eq:Z-rep} Z(y_1,y_2,T) = \frac{1}{T}\int_0^T \sum_{k=0}^\infty \gamma_k H_k(Y_t)dt, \end{equation} where $ \gamma_k = y_1\beta_{1,k} + y_2\beta_{2,k}. $ Note also that we have $ \mathbb E\hat\mu_i(T) = \mu_i$, $i=1,2$, and thus $\gamma_0 = 0$, i.e. $Z(y_1,y_2,T)$ is centred. We begin with the first case, which is relatively easy. Indeed, suppose that the condition (1) of Assumption \ref{assumption:covariance} holds. Then, as $r$ is integrable, a continuous version of the Breuer--Major theorem (see e.g. \cite{nourdin-nualart}) implies the claim directly. Under the other two conditions, we first note that the only term contributing to the limiting distribution in \eqref{eq:Z-rep} is $$ \frac{1}{T}\int_0^T \gamma_1H_1(Y_t)dt.
$$ This follows from the fact that \begin{equation*} \begin{split} \mathbb E \left[\sum_{k=2}^\infty \gamma_k \int_0^T H_k(Y_t)dt\right]^2 \leq CT\int_0^T r^2(u)du \end{split} \end{equation*} and clearly $$ \frac{1}{\log T}\int_0^T r^2(u)du \to 0 $$ under the condition (2) and $$ T^{1-2H}\int_0^T r^2(u)du \to 0 $$ under the condition (3). Thus it suffices to prove that $$ [y_1\beta_{1,1}+y_2\beta_{2,1}]\frac{l(T)}{T}\int_0^T Y_t dt $$ converges towards a normal distribution, where $l(T) = \sqrt{\frac{T}{\log T}}$ under the condition (2) and $l(T) = T^{1-H}$ under the condition (3). Convergence of $\frac{l(T)}{T}\int_0^T Y_t dt$ follows from the fact that $Y$ is Gaussian and the variance converges. Indeed, we have that \begin{equation*} \begin{split} \mathbb E \left(\int_0^T Y_t dt\right)^2 &= \int_0^T \int_0^T \mathbb E (Y_uY_s)du ds \\ &= \int_0^T \int_0^T r(u-s)du ds \\ &= 2\int_0^T r(u)(T-u)du. \end{split} \end{equation*} Under conditions (2) and (3) of Assumption \ref{assumption:covariance} we obtain that, in both cases, $$ \frac{2l^2(T)}{T^2}\int_0^T r(u)(T-u)du \to C>0. $$ Thus, it suffices to prove that $\beta_{1,1}\neq 0$ or $\beta_{2,1}\neq 0$. Recall that $$ f_1(x) = \alpha_+ x{\bf 1}_{x>0}+\alpha_- x{\bf 1}_{x<0}. $$ Thus we have $$ \beta_{1,1} = \mathbb E [f_1(N)N], $$ where $N\sim\mathcal{N}(0,1)$. Using \eqref{eq:positive-moment} and \eqref{eq:negative-moment} we get $$ \beta_{1,1} = \frac{3}{2}\left(\alpha_+-\alpha_-\right). $$ Recalling that $\alpha_+\neq \alpha_-$ concludes the proof. \end{proof} \begin{remark} Note that the proof of Theorem \ref{thm:estimation-moment-clt} relied on the fact that $\alpha_+ \neq \alpha_-$. If $\alpha_+ = \alpha_- = \alpha$, then $ X_t = \alpha|Y_t| $ and it follows that $\gamma_1=0$ and $\gamma_2\neq 0$. Then, under conditions (1) and (2), the limiting distribution is normal and the rate is $\sqrt{T}$. Under the condition (3), the limiting distribution and the rate depend on the value of $H$.
If $H<\frac34$, the limiting distribution is normal and the rate is $\sqrt{T}$. If $H=\frac34$, then the limiting distribution is still normal, but the rate is $\sqrt{\frac{T}{\log T}}$. For $H>\frac34$, the limiting distribution is the Rosenblatt distribution (multiplied by a constant) and the rate is $T^{2-2H}$. \end{remark} \subsection{Estimation based on discrete observations} In practice, one does not observe the continuous path of $X$. Instead, one observes $X$ at some discrete time points $0 = t_0 < t_1 < \ldots < t_N = T_N < \infty$. That is why, in practical applications, the integrals in \eqref{eq:moment-estimator} are approximated by discrete sums. Thus the natural moment estimators $\tilde{\mu}_n(N)$ are defined by \begin{equation} \label{eq:moment-estimator-discrete} \tilde{\mu}_n(N) = \frac{1}{T_N}\sum_{k=1}^N X^n_{t_{k-1}}\Delta t_k, \end{equation} where $\Delta t_k = t_k - t_{k-1}$. The corresponding estimators $\tilde{\alpha}_+(N)$ and $\tilde{\alpha}_-(N)$ for the parameters $\alpha_+$ and $\alpha_-$ are \begin{equation} \label{eq:alpha-plus_estimator-discrete} \tilde\alpha_+(N) = \sqrt{\frac{\pi}{2}}\tilde\mu_1(N) + \frac{1}{2}\sqrt{\left|4\tilde\mu_2(N)-2\pi\tilde\mu^2_1(N)\right|} \end{equation} and \begin{equation} \label{eq:alpha-minus_estimator-discrete} \tilde\alpha_-(N) = \tilde\alpha_+(N) - \sqrt{\frac{\pi}{2}}\tilde\mu_1(N). \end{equation} Let $\Delta_N = \max_k \Delta t_k$. In order to obtain consistency and asymptotic normality for the discretised versions, we have to assume that $T_N \to \infty$ and, at the same time, that $\Delta_N \to 0$ in a suitable way. The following proposition studies the difference between $\hat\mu_n(T_N)$ and $\tilde\mu_n(N)$. \begin{proposition} \label{prop:discrete-approx} Denote the variogram of the stationary process $Y$ by $c(t)$, i.e. $$ c(t) = 2\left[r(0)-r(t)\right], $$ where $r$ is the covariance function.
Then, for any $n\geq 1$ and for any $p\geq 1$, there exists a constant $C=C(n,p,\alpha_+,\alpha_-)$ such that $$ \left\Vert\hat\mu_n(T_N)-\tilde\mu_n(N)\right\Vert_p \leq C\sup_{0\leq t \leq \Delta_N}\sqrt{c(t)}. $$ \end{proposition} \begin{proof} We have, by the Minkowski inequality, that \begin{equation*} \begin{split} &\left\Vert \hat\mu_n(T_N)-\tilde\mu_n(N)\right\Vert_p\\ &= \left\Vert \frac{1}{T_N}\int_0^{T_N}X_u^n du - \frac{1}{T_N}\sum_{k=1}^N X^n_{t_{k-1}}\Delta t_k\right\Vert_p \\ &\leq \frac{1}{T_N} \sum_{k=1}^N \int_{t_{k-1}}^{t_k}\left\Vert X_u^n - X_{t_{k-1}}^n\right\Vert_p du. \end{split} \end{equation*} Using $$ x^n-y^n = (x-y)\sum_{j=0}^{n-1}x^jy^{n-1-j}, $$ we get, for any $s,u\geq 0$, that $$ |X_s^n-X_u^n| \leq |X_s-X_u|\sum_{j=0}^{n-1}|X_s|^j|X_u|^{n-1-j}. $$ Thus, a repeated application of H\"older's inequality together with the fact that $\sup_{s\geq 0}\Vert X_s\Vert_p < \infty$ implies that, for every $q>p$, we have $$ \left\Vert X_u^n - X_{s}^n\right\Vert_p \leq C\Vert X_u - X_s\Vert_{q}, $$ where $C$ is a constant. Moreover, by the proof of Proposition \ref{prop:holder}, we have $$ |X_u-X_s| \leq C|Y_u-Y_s|. $$ Since $Y$ is Gaussian, hypercontractivity implies that $$ \Vert X_u - X_s\Vert_q \leq C\Vert Y_u - Y_s\Vert_2. $$ Now the stationarity of $Y$ gives $$ \Vert Y_u -Y_s\Vert_2 = \sqrt{c(u-s)}. $$ Thus we observe \begin{equation*} \begin{split} &\frac{1}{T_N} \sum_{k=1}^N \int_{t_{k-1}}^{t_k}\left\Vert X_u^n - X_{t_{k-1}}^n\right\Vert_p du \\ & \leq \frac{C}{T_N} \sum_{k=1}^N \int_{t_{k-1}}^{t_k}\sqrt{c(u-t_{k-1})} du\\ &\leq C \sup_{0\leq t \leq \Delta_N}\sqrt{c(t)}, \end{split} \end{equation*} proving the claim. \end{proof} We can now easily deduce the following results for the asymptotic properties of the estimators $\tilde\alpha_+$ and $\tilde\alpha_-$.
\begin{theorem} \label{thm:estimation-consistency-discrete} Let $\tilde\alpha_+(N)$ and $\tilde\alpha_-(N)$ be defined by \eqref{eq:alpha-plus_estimator-discrete} and \eqref{eq:alpha-minus_estimator-discrete}, respectively. Suppose that $r(T) \to 0$ as $T\to \infty$ and that $\sup_{0\leq s\leq t}c(s)\to 0$ as $t\to 0$. If $T_N \to \infty$ and $\Delta_N \to 0$ as $N \to \infty$, then for any $p\geq 1$, $$ \tilde\alpha_+(N) \to \alpha_+ $$ and $$ \tilde\alpha_-(N) \to \alpha_- $$ in $L^p$. \end{theorem} \begin{proof} Using the arguments of the proof of Theorem \ref{thm:estimation-consistency} together with Proposition \ref{prop:discrete-approx} we deduce that $$ \Vert \tilde\alpha_+(N) - \hat\alpha_+(T_N)\Vert_p \to 0 $$ and $$ \Vert \tilde\alpha_-(N) - \hat\alpha_-(T_N)\Vert_p \to 0. $$ Thus the claim follows from Theorem \ref{thm:estimation-consistency}. \end{proof} \begin{theorem} \label{thm:estimation-clt-discrete} Let $\tilde\alpha_+(N)$ and $\tilde\alpha_-(N)$ be defined by \eqref{eq:alpha-plus_estimator-discrete} and \eqref{eq:alpha-minus_estimator-discrete}, respectively, and let $\tilde\alpha(N) = (\tilde\alpha_+(N),\tilde\alpha_-(N))$ and $\alpha = (\alpha_+,\alpha_-)$. Let $\Sigma_A^2$, $\Sigma_B^2$, and $\Sigma_C^2$ be the same covariance matrices as in Theorem \ref{thm:estimation-clt}. Suppose further that $\displaystyle \sup_{0\leq s\leq t}c(s)\to 0$ as $t\to 0$, $T_N \to \infty$, and $\Delta_N \to 0$ as $N \to \infty$. Denote $$ h(N) = \sup_{0\leq s \leq \Delta_N}\sqrt{c(s)}.
$$ Then, \begin{enumerate} \item \label{a1} if $r$ satisfies the condition (1) of Assumption \ref{assumption:covariance}, $$ \sqrt{T_N}\left(\tilde\alpha(N)-\alpha\right) \to \mathcal{N}(0,\Sigma_A^2) $$ in law for all partitions $0 = t_0 < \ldots < t_N = T_N$ satisfying $\sqrt{T_N}h(N)\to 0$, \item \label{a2} if $r$ satisfies the condition (2) of Assumption \ref{assumption:covariance}, $$ \sqrt{\frac{T_N}{\log T_N}}\left(\tilde\alpha(N)-\alpha\right) \to \mathcal{N}(0,\Sigma_B^2) $$ in law for all partitions $0 = t_0 < \ldots < t_N = T_N$ satisfying $\sqrt{\frac{T_N}{\log T_N}}h(N)\to 0$, and \item \label{a3} if $r$ satisfies the condition (3) of Assumption \ref{assumption:covariance}, $$ T_N^{1-H}\left(\tilde\alpha(N)-\alpha\right) \to \mathcal{N}(0,\Sigma_C^2) $$ in law for all partitions $0 = t_0 < \ldots < t_N = T_N$ satisfying $T_N^{1-H}h(N)\to 0$. \end{enumerate} \end{theorem} \begin{proof} The additional conditions on the mesh together with Proposition \ref{prop:discrete-approx} guarantee that $$ l(T_N)\Vert \tilde\alpha(N) - \hat\alpha(T_N)\Vert_p \to 0, $$ where $l(T_N)$ is the corresponding normalisation in each case. Thus the result follows directly from Theorem \ref{thm:estimation-clt}. \end{proof} One natural way to choose the observation points such that the above-mentioned conditions are fulfilled is to take $N$ equidistant points with $\Delta_N = \frac{\log N}{N}$. Then $\Delta_N \to 0$ and $T_N = N\Delta_N = \log N \to \infty$. If, in addition, $Y$ is H\"older continuous of some order $\theta>0$, then the rest of the requirements are satisfied as well. Indeed, it follows from \cite[Theorem 1]{gauss-holder} that if $Y$ is H\"older continuous of order $\theta>0$, then for any $\epsilon>0$, we have $$ c(t) \leq Ct^{\theta-\epsilon} $$ for some constant $C$.
Thus $h(N)\leq \sqrt{C}\Delta_N^{\frac12(\theta-\epsilon)}$, from which it is easy to see that, for $\epsilon<\theta$, $$ T_N^{1-H}h(N) \leq \sqrt{\frac{T_N}{\log T_N}}h(N) \leq \sqrt{T_N}h(N) \leq \sqrt{C}\frac{(\log N)^{\frac12(1+\theta-\epsilon)}}{N^{\frac12(\theta-\epsilon)}} \to 0. $$ \subsection{Oscillating self-similar Gaussian processes} \label{subsec:ss-oscillating} Self-similar processes form an interesting and widely applicable class of stochastic processes. In this subsection, we consider oscillating Gaussian processes driven by self-similar Gaussian processes $Y$. In other words, we consider processes of the type $$ X_t = \alpha_+ Y_t{\bf 1}_{Y_t>0} + \alpha_-Y_t{\bf 1}_{Y_t<0}, $$ where $Y$ is $H$-self-similar for some $H>0$. That is, for every $a>0$, the finite dimensional distributions of the processes $(Y_{at})_{t\geq 0}$ and $(a^HY_t)_{t\geq 0}$ are the same. Throughout this section we assume that we have observed $X_t$ on an interval $[0,1]$, and our aim is to estimate $\alpha_+$ and $\alpha_-$. The key ingredient is the Lamperti transform \begin{equation} \label{eq:lamperti} U_t = e^{-Ht}Y_{e^t}. \end{equation} It is well-known that $U$ is stationary on $(-\infty,0]$. Moreover, for $t\geq 0$, we define a process $$ \tilde{X}_t := e^{Ht}X_{e^{-t}} = \alpha_+ U_{-t} {\bf 1}_{U_{-t} >0} + \alpha_- U_{-t}{\bf 1}_{U_{-t}<0}. $$ Clearly, observing $X$ on $[0,1]$ is equivalent to observing $\tilde{X}_t$ for $t\geq 0$. This leads to the ``moment estimators'' $\hat\mu_i(T)$ defined by \begin{equation} \label{eq:moment-estimator-ss} \hat\mu_i(T) = \frac{1}{T}\int_{e^{-T}}^1 u^{-iH-1}X_u^idu. \end{equation} The corresponding parameter estimators $\hat\alpha_+(T)$ and $\hat\alpha_-(T)$ are defined by plugging $\hat\mu_1(T)$ and $\hat\mu_2(T)$ into \eqref{eq:alpha-plus_estimator} and \eqref{eq:alpha-minus_estimator}, respectively. Indeed, the change of variable $u=e^{-t}$ gives $$ \hat\mu_i(T)=\frac{1}{T}\int_0^T \tilde{X}^i_t dt.
$$ Thus studying the covariance function $r$ of the stationary Gaussian process $U$ given by \eqref{eq:lamperti} enables us to apply Theorems \ref{thm:estimation-consistency} and \ref{thm:estimation-clt}. \subsection{The case of bifractional Brownian motion} We end this section with an interesting example. We consider bifractional Brownian motions, which, among others, cover fractional Brownian motions and standard Brownian motions. Recall that a bifractional Brownian motion $B^{H,K}$ with $H\in(0,1)$ and $K\in(0,2)$ such that $HK\in(0,1)$ is a centered Gaussian process with the covariance function $$ R(s,t)=\frac{1}{2^K}\left[(t^{2H}+s^{2H})^K-|t-s|^{2HK}\right]. $$ It is known that $B^{H,K}$ is $HK$-self-similar. Furthermore, one recovers fractional Brownian motion by plugging in $K=1$, from which standard Brownian motion is recovered by further setting $H=\frac12$. Now the covariance function $r$ of the Lamperti transform $U_t=e^{-HKt}B^{H,K}_{e^t}$ has exponential decay (see \cite{langevin}). Thus, we may apply item (1) of Theorem \ref{thm:estimation-clt} to obtain that $ \sqrt{T}(\hat\alpha_+(T)-\alpha_+) $ and $\sqrt{T}(\hat\alpha_-(T)-\alpha_-)$ are asymptotically normal. Similarly, discretising the integral in \eqref{eq:moment-estimator-ss} and applying Theorems \ref{thm:estimation-consistency-discrete} and \ref{thm:estimation-clt-discrete} allows us to consider parameter estimators based on discrete observations. We leave the details to the reader. \section{Discussion} In this paper we considered oscillating Gaussian processes and introduced moment-based estimators for the model parameters. Moreover, we proved consistency and asymptotic normality of the estimators under natural assumptions on the driving Gaussian process. An interesting and natural extension of our approach would be to consider oscillating processes with several (more than two) parameters and corresponding regions. This would make the model class more flexible and adaptive.
Another topic for future research would be to develop testing procedures for the model parameters. \subsection*{Acknowledgements} S. Torres is partially supported by Project Fondecyt N. 1171335. P. Ilmonen and L. Viitasaari wish to thank the Vilho, Yrj\"o, and Kalle V\"ais\"al\"a Foundation for financial support.
\section*{$\mathbb{Z}_Q$ Berry phases} \subsection{Hyper-pyrochlore lattice models} Recently, it was found that the breathing kagome model and the breathing pyrochlore model host higher-order topological insulator (HOTI) phases [27, 29]. In the HOTI phases, the models have mid-gap corner states, which are exactly solvable under certain boundary conditions [28, 31]. In this section, we show the correspondence between the $\mathbb{Z}_Q$ Berry phases and the HOTI phases in the breathing kagome model and the breathing pyrochlore model. The Su-Schrieffer-Heeger (SSH) model, the breathing kagome model and the breathing pyrochlore model are regarded as the $d=1, 2, 3$-dimensional breathing hyper-tetrahedron (BHT) models, respectively, which have $d+1$ sites in a unit cell. For the $d$-dimensional BHT model, the $\mathbb{Z}_Q$ Berry phase is defined by introducing local-twist parameters and is quantized in $\mathbb{Z}_Q$ ($Q=d+1$) [55]. The definition of the Berry phase for the $d$-dimensional BHT model is the same as that for a square lattice model shown in the main manuscript, but the number of twist parameters is $Q$ (for the details of the definition, see Ref. [55]). Note that the models in [55] differ from the models in [27, 29] by the on-site potential $t_1+t_2$, but this does not change the Berry phase. The quantization of the Berry phase is protected by the $\mathbb{Z}_Q$ symmetry, which changes the annihilation operators as $c_j\rightarrow c_{j+1}, ~c_{Q+1}=c_1$. This symmetry is the same as the mirror symmetry for $d=1$ and the three-fold rotational ($C_3$) symmetry for $d=2$. The Berry phases of the type-II plaquette for the BHT models are shown in Fig. \ref{fig:s-fig1}(d). The Berry phases are quantized in $\mathbb{Z}_Q$ for $d = 1, 2, 3$. The non-trivial Berry phase corresponds to the HOTI phase, in which corner states appear [Fig. \ref{fig:s-fig1}(a)-(c)].
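As a minimal numerical illustration of the $d=1$ case, one can diagonalize a finite SSH chain with an odd number of sites and alternating hoppings and verify the exact zero-energy edge mode; the chain length and hopping values below are illustrative choices, not taken from Refs. [27, 29, 55].

```python
import numpy as np

def ssh_chain(t1, t2, n_cells):
    """Open SSH chain with 2*n_cells + 1 sites; bonds alternate t1, t2."""
    n = 2 * n_cells + 1
    h = np.zeros((n, n))
    for j in range(n - 1):
        t = t1 if j % 2 == 0 else t2   # bond between sites j and j+1
        h[j, j + 1] = h[j + 1, j] = t
    return h

t1, t2 = 0.3, 1.0                      # |t1| < |t2|: zero mode localized at the left edge
h = ssh_chain(t1, t2, n_cells=30)
evals, evecs = np.linalg.eigh(h)
i0 = np.argmin(np.abs(evals))

print(evals[i0])                       # exact zero mode (up to round-off)
psi = evecs[:, i0]
# the mode lives on every second site and decays as (-t1/t2)^m from the edge
print(psi[2] / psi[0])                 # ratio = -t1/t2
```

The decay ratio $-t_1/t_2$ of the edge mode is the one-dimensional analogue of the exact corner-state construction discussed below for the BBH model.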
\begin{figure}[thb] \centering \includegraphics[width=0.7\linewidth]{supp_fig1.pdf} \caption{The zero energy corner state of (a) the SSH model, (b) the breathing kagome (BK) model and (c) the breathing pyrochlore (BP) model. (d) The $\mathbb{Z}_Q$ Berry phases for the BHT models in $d=1, 2, 3$ dimensions. } \label{fig:s-fig1} \end{figure} \subsection{The $\mathbb{Z}_Q$ Berry phase for a decoupled cluster} \begin{figure}[thb] \centering \includegraphics[width=0.65\linewidth]{supp_fig2.pdf} \caption{ Decoupled clusters are colored in red for (a) a kagome lattice and (b) a square lattice. } \label{fig:s-fig2} \end{figure} \begin{figure}[thb] \centering \includegraphics[width=0.8\linewidth]{supp_fig3.pdf} \caption{The trajectories of the $\mathbb{Z}_Q$ Berry phase for (a) $Q=3$ and (b) $Q=4$ in the parameter space. } \label{fig:s-fig3} \end{figure} The BHT models and the BBH model in the HOTI/HOSPT phases are adiabatically connected to the decoupled clusters [Fig. \ref{fig:s-fig2}]. The Berry phase for the decoupled clusters is the same as that for a single cluster, because the bond twists affect only that cluster and leave the other clusters unchanged. In this section, we consider a single cluster that consists of $Q$ sites, whose Hamiltonian has the $\mathbb{Z}_Q$ symmetry, i.e., the Hamiltonian is invariant under the cyclic permutation $c_j \rightarrow c_{j+1},~ c_{Q+1} = c_{1}$, where $c_j ~(j=1,\cdots, Q)$ are the annihilation operators. Let $\ket{\Psi}$ be the many-body ground state. We define a unitary operator \begin{equation} U = e ^ { -i \varphi _ { 1 } n _ { 1 } } e ^ { -i \varphi_2 n _ { 2 } } \cdots e ^ { -i \varphi_{Q-1} n _ { Q - 1 } }, \end{equation} where $\varphi_j = \sum_{i=1}^{j}\theta_i, ~ \varphi_Q = 0$.
The annihilation operators are transformed by the unitary operator $U$ as \begin{equation} U c _ { j } U ^ { - 1 } = e ^ { i \varphi_j} c _ { j }, \end{equation} so the ground state of $\mathcal{H}(\Theta)$ is written as \begin{equation} \label{eq:Psi_theta_is_U_Psi} \ket{\Psi(\Theta)} = U\ket{\Psi}. \end{equation} Note that the expectation values of the number operators are not changed by $\Theta$: \begin{eqnarray} \braket{n_j}_\Theta &=& \braket{\Psi(\Theta)| n_j |\Psi(\Theta)} \\ &=& \braket{\Psi|U^\dagger n_j U|\Psi} \\ &=& \braket{\Psi|n_j|\Psi} = \braket{n_j}. \end{eqnarray} Next, to define the Berry phase, let us introduce the trajectories $L_j ~(j = 1, \cdots, Q)$ (see Fig. \ref{fig:s-fig3}) \begin{eqnarray} L_1 &:& \bm{O} \rightarrow \bm{G} \rightarrow \bm{E}_1 \nonumber \\ L_2 &:& \bm{E}_1 \rightarrow \bm{G} \rightarrow \bm{E}_2 \nonumber \\ &\cdots& \\ L_Q &:& \bm{E}_{Q-1} \rightarrow \bm{G} \rightarrow \bm{O} \nonumber \end{eqnarray} where $\bm{E}_j=2\pi\bm{e}_j$ and \{$\bm{e}_j$\} are the unit vectors of the parameter space, and the vector $\bm{G}$ is given by $\bm{G} = \frac{1}{Q}\sum_{j=1}^{Q-1}\bm{E}_j$. We then have \begin{equation} \label{eq:total_is_zero_Q} \sum_{j=1}^Q\gamma_j = 0 ~({\rm mod} ~2\pi) \end{equation} due to the cancellation of the trajectories. The calculation of the Berry phase can be performed explicitly by parametrizing the trajectory. By using Eq. (\ref{eq:Psi_theta_is_U_Psi}), we have \begin{eqnarray} d\bm{\Theta}\cdot\braket{\Psi(\Theta)|\frac{\partial}{\partial \bm{\Theta}}|\Psi(\Theta)} &=& dt \bra{\Psi(\Theta)}\partial_t\ket{\Psi(\Theta)} \nonumber \\ &=& -i dt\frac{\partial \theta_1}{\partial t}(\braket{n_1} + \braket{n_2} + \cdots + \braket{n_{Q-1}}) \nonumber \\ &~& - i dt\frac{\partial \theta_2}{\partial t}(\braket{n_2} + \cdots + \braket{n_{Q-1}}) \nonumber \\ &~& - \cdots - i dt\frac{\partial \theta_{Q-1}}{\partial t}\braket{n_{Q-1}}, \end{eqnarray} where the real parameter $t$ parametrizes the trajectory.
Along the trajectory $L_j ~(j = 1, \cdots, Q)$, only $\theta_j$ winds by $2\pi$, while the other $\theta_{k\neq j}$ do not wind. So the Berry phases, \begin{equation} \gamma_j = -i\int_{L_j}\bra{\Psi(\Theta)}\partial_t\ket{\Psi(\Theta)}dt, \end{equation} are calculated as \begin{eqnarray} \gamma_1 &=& 2\pi (\braket{n_Q} - N) \nonumber \\ \gamma_2 &=& 2\pi \braket{n_1} \nonumber \\ &\cdots& \nonumber \\ \gamma_Q &=& 2\pi \braket{n_{Q-1}}. \label{eq:berry} \end{eqnarray} Here, the total number of particles in the cluster, $N = \sum_{j=1}^Q \braket{n_j}$, is an integer, so $\gamma_1 = 2\pi\braket{n_Q} ~({\rm mod} ~2\pi)$. Because of the $C_Q$ symmetry, the particle density is uniform, \begin{equation} \braket{n_1} = \cdots = \braket{n_Q} = N/Q \equiv \nu. \label{eq:berry2} \end{equation} Here $\nu$ is the filling of electrons in a unit cell. Combining Eqs. (\ref{eq:total_is_zero_Q}) and (\ref{eq:berry2}), we finally find that the Berry phase in the cluster limit is $\gamma_1=\cdots=\gamma_Q\equiv \gamma = 2\pi \nu ~({\rm mod} ~2\pi)$. \section*{BBH model with NNN hopping} In this section we give the exact corner states of the BBH model with next-nearest-neighbor (NNN) hopping. The tight-binding model is shown in Fig. 2 in the main text. First we consider a lattice of size $(2L_x+1) \times (2L_y+1)$. In this case, an exact zero energy corner state is obtained: \begin{equation} \label{eq:exact_zero} \ket{\psi} = \frac{1}{N}\sum_{m,n} r^{m+n}\ket{2m+1,2n+1}. \end{equation} Here, $N$ is a normalization factor, $r=-t_1/t_2$ and $\ket{i,j}$ is a basis state localized at position $(i, j)$. One can show that this state is a zero energy eigenstate, satisfying $\mathcal{H}\ket{\psi} = 0$. When $|t_1| < |t_2|$, the corner state is localized at $(1, 1)$ with the localization length $-1/\log|r|$. Thus, whenever the lattice sizes along both the $x$ and $y$ axes are odd, an exact zero energy state (Eq. (\ref{eq:exact_zero})) is obtained.
For even lattice sizes, there are four corner states and their energies are not exactly zero in a finite system [Fig. \ref{fig:s-fig4}]. However, when the system size is large enough, the exact zero energy state in Eq. (\ref{eq:exact_zero}) gives a good approximation to the corner states, because the exact zero energy state decays exponentially with distance from the corner and the four corner states are orthogonal to each other, except at the phase transition points. \begin{figure}[thb] \centering \includegraphics[width=0.4\linewidth]{supp_fig4.pdf} \caption{The single-particle energies against $t_1/t_2$ with fixed $\lambda=0.2$. The red line denotes the corner-localized states. The phase transition points are $t_1/t_2 = \pm 1$. } \label{fig:s-fig4} \end{figure} \end{document}
\section{Introduction and summary.} We study solutions of the Einstein equations of $d + 2$ dimensional gravity with a cosmological constant. The solutions represent $(d-1)$-branes embedded in a space with two compact transverse dimensions with a positive curvature. They generalize the static solutions of Deser and Jackiw \cite{DJ}, describing point particles in $2+1$ dimensional gravity with a cosmological constant, to the case of extended objects. The solutions we find obey Einstein's equations, as well as the equation of motion of the branes (the geodesic equation). It is remarkable that the metric, in appropriate coordinates, can be found without explicitly solving the Einstein equations. Analyzing the properties of the solutions is reduced to finding the roots of a simple algebraic equation. {}From the point of view of applications to the recently proposed \cite{ADD}, \cite{extra1} large extra dimensions scenario, the most important property of our solutions is that they are characterized by nontrivial ``warp factors'' that affect the values of the parameters of the world-volume theories \cite{RS}. In particular, the usual relation between the four dimensional Planck scale, the 6 dimensional ``fundamental'' scale of gravity, and the volume of the compactified space is affected by the presence of the warp factors. This difference can be important if the warp factors significantly deviate from unity, which, in the present framework, can only be achieved by fine-tuning various parameters (for related papers with one transverse dimension, see \cite{NK}).
While our recipe for finding solutions is rather general, motivated by the large extra dimensions proposal, we discuss in some detail the solution that describes two 3-branes embedded in six dimensional space time with negative\footnote{We note that our conventions for the signs of the curvature are those of \cite{weinberg}; in addition $\Lambda < 0$ in (\ref{gravityaction}) corresponds to de Sitter space.} (de Sitter) cosmological constant. The two branes are located on the north and south poles of a ``sphere'', with a ``wedge'' cut out, due to the deficit angle characteristic of pointlike sources in 2 dimensions. An important property of the solution is that the metric on the branes is not flat, but rather de Sitter. While the cosmological constant and the strength of gravity on the ``visible'' brane can be tuned to satisfy the experimental constraints, within our ansatz there is no solution with a flat world volume metric. This paper is structured as follows. In Section 2, we present the ansatz and derive the equations of motion. We show that they reduce to a simple differential equation in one real variable, similar to the case considered by Deser and Jackiw \cite{DJ}. In Section 3, we give a general analysis of the solutions. We show that the form of the metric (in appropriate coordinates) as well as the interpretation of its singularities can be obtained without explicitly solving the differential equation. All information on the solution can be deduced by finding the appropriate roots of an algebraic equation. In particular, these roots determine the values of the warp factors on the branes. We derive expressions for the strength of the gravitational coupling, the cosmological constant on the various branes, and the size of the transverse space. Finally, in Section 4, we discuss an explicit example, with $d=4$, which is of interest to the large extra dimension scenario.
We show that the parameters allow fine-tuning of the Planck scale and cosmological constant to values that do not contradict the observed ones. \section{The ansatz and equations of motion.} The action we consider is that of gravity in $D = d + 2$ dimensions, with the usual Einstein action with cosmological term included: \beq \label{gravityaction} S_{gravity} = - M^{D-2} \int d^D y \sqrt{g} \left( R - 2 \Lambda \right)~. \eeq The indices $M, N$ run over $1,...,D$, while $\mu,\nu = 0,...,d-1$ (with $d = D - 2$) label the brane world-volume directions and $i,j = 1,2$ span the rest of the space; the metric has signature $(-,+,\ldots, +)$. The vacuum Einstein equations follow from the variation of the gravity action: \beq \label{vacuumeinstein} \delta S_{gravity}= - M^{D-2} \int d^D y \sqrt{g} \left( - R^{MN} + {1\over 2} g^{MN} R - g^{MN} \Lambda \right) \delta g_{MN} = 0~. \eeq The ``matter'' is in the form of $(d-1)$-dimensional branes, whose action is proportional to the area of the world surface they sweep in the $d + 2$ dimensional spacetime: \beq \label{matteraction} S_{matter} = - \sum_a f_a^d \int d^D y \int d^d \sigma ~\delta^D(y - X_a(\sigma))~ \sqrt{\tilde{g}}, \eeq where $X_a^M (\sigma)$ is the embedding of the world surface in space time ($a$ runs over the various branes), $\tilde{g}_{\mu \nu} = X^M_{,\mu}~ X^N_{,\nu} ~g_{MN}(y)$ is the induced metric, and $\tilde{g}\equiv - \det \tilde{g}_{\mu \nu}$ (hereafter we omit the sum over the various sources; it is implicit in all our formulae). The matter energy momentum tensor is defined by: \beqa \label{matteractionvariation} \delta S_{matter} &=& {1 \over 2} \int d^D y ~\sqrt{g} ~ T^{MN}(y) \delta g_{MN}(y)\nonumber \\ &=& - {1 \over 2}~f^d~ \int d^D y \int d^d \sigma ~\delta^D (y - X(\sigma))~ \tilde{g}^{1/2}~ X^M_{,\mu} ~X^N_{,\nu} ~\tilde{g}^{\mu \nu}~ \delta g_{MN}(y)~, \eeqa hence \beq \label{matterenergymomentum} \sqrt{g} T^{MN}(y) = - f^d \int d^d \sigma ~\delta^D(y - X(\sigma))~ \tilde{g}^{1/2} ~X^M_{,\mu} ~X^N_{,\nu}~ \tilde{g}^{\mu \nu}~.
\eeq The equation of motion of the branes (obtained by varying the action with respect to the brane coordinates $X(\sigma)$) is the ``geodesic'' equation: \beq \label{geodesic} \nabla^2 X^M ~+ ~ \Gamma^M_{KL} ~\tilde{g}^{\mu \nu}~ X^K_{,\mu} ~X^L_{,\nu} = 0~, \eeq where $\nabla^2 = \tilde{g}^{-1/2} \partial_\mu \tilde{g}^{1/2} \tilde{g}^{\mu \nu} \partial_\nu$ is the covariant D'Alembertian in the induced metric. Our ansatz for the metric includes a nontrivial warp factor $e^{2 \lambda}$, depending only on the coordinates transverse to the branes (denoted by $x^i$, $i = 1,2$): \beq \label{metricansatz} d s^2 = e^{2 \lambda (x^i)} \left[ g_{\mu \nu} (x^{\lambda}) d x^\mu d x^\nu + e^{2 K (x^i)} d x^i d x^i \right]~. \eeq We assume that the d-dimensional metric $g_{\mu \nu}(x^\lambda)$ obeys the equation \beq \label{4dmetric} R_{\mu \nu} - {1\over 2} g_{\mu \nu} R + \alpha g_{\mu \nu} = 0~, \eeq so that $\alpha$ becomes a parameter of our solution. The idea for finding a solution with delta-function sources is to first solve the source-free equations of motion. Then, admitting delta-function-like curvature singularities of the metric will allow us to include the effect of the sources. Following this strategy, in the remainder of this section we consider the source-free equations. The Einstein equation following from the variation of the action with respect to the $\mu, \nu$ components of the metric is: \beq \label{munuequation} d \lambda_{, ii} + K_{, ii} + {d (d-1) \over 2} \lambda_{,i} \lambda_{,i} = e^{2 \lambda + 2 K} \Lambda - e^{2 K} \alpha ~. \eeq This equation, away from the singularities, is a consequence of the other equations of motion. Static sources will lead to delta-function singularities on the r.h.s. of equation (\ref{munuequation}). These will appear, as we will show in Section 3, because of singularities of $K_{,ii}$. The trace of the variation of the action w.r.t. 
the $i,j$ components of the metric gives the following equation: \beq \label{singleeqn} \partial \bar{\partial} \lambda + d ~\partial \lambda ~\bar{\partial} \lambda = {2 \over d} e^{2 \lambda + 2 K} \Lambda - {2 \over d -2} e^{2K} \alpha ~. \eeq We have introduced complex coordinates, $z = (x^1 + i x^2)/2, \bar{z} = (x^1 - i x^2)/2$, and defined $\partial= \partial_{x_1} - i \partial_{x_2}$, $ \bar{\partial} = \partial_{x_1} + i \partial_{x_2}$ such that $\partial z = 1, \partial \bar{z} = 0$. On the other hand, the traceless part of the equations of motion that follow from varying the action with respect to the $i,j$ components of the metric is nothing but the Cauchy-Riemann conditions for the function \beq \label{V} V = e^{- 2K - \lambda} \bar{\partial} \lambda~. \eeq The function $V$, therefore, is holomorphic, $\bar{\partial}V=0$. This fact will play a central role in our ability to find explicit solutions.\footnote{Before continuing with the analysis of solutions with nontrivial warp factors, we note that the source-free equations of motion (\ref{singleeqn},\ref{V}) admit a simple solution with $K = - \log ( 1 + 4 |z|^2/\rho^2)$, and $\lambda = const.$, provided the values of $\Lambda$ and $\alpha$ are related, so that the r.h.s. of (\ref{singleeqn}) vanishes. These solutions are of the form $dS_{d} \times S^2$, with the radius of $S^2$, $\rho$, and the cosmological constant of the $dS_{d}$, $\alpha$, related by (\ref{singleeqn}): $\alpha = - 2 \rho^{-2} (d-2) $, $\Lambda = - 2 \rho^{-2} d $ (for $\lambda = 0$). These vacuum solutions of the $d+2$ dimensional Einstein equations with cosmological constant generalize the Nariai solutions \cite{Nariai} to $d+2$ dimensions. We thank N. Kaloper for pointing out to us the existence of ref.~\cite{Nariai}.} We can now further simplify our equations, following \cite{DJ}. 
Multiplying (\ref{singleeqn}) by $V$ given by (\ref{V}) and using $\bar{\partial} V = 0$, we obtain after integrating over $\bar{z}$, \beq \label{finaleqn1} V \partial e^{d \lambda} = {2 \Lambda \over d + 1} e^{ (d + 1) \lambda} - {2 d \over (d-2) (d -1)} \alpha e^{(d-1) \lambda} + \epsilon(z)~, \eeq where $\epsilon(z)$ is an integration constant. Upon introducing the new complex variable \beq \label{xi} \xi = \int^z { d w \over V(w)} \eeq as well as denoting $e^\lambda = N$, we can rewrite the equation as: \beq \label{finaleqn2} {\partial \over \partial \xi} N^d = {2 \Lambda \over d + 1} N^{d + 1} - {2 d \over (d-2) (d -1)} \alpha N^{d-1} + \epsilon~. \eeq Now upon taking the complex conjugate equation, we see that the reality condition for $N$ implies that $N$ is a function of the real part of $\xi$ only and that $\epsilon$ must be a real constant. Finally, introduce the variable \beq \label{t} t ~=~{2 \Lambda \over (d + 1) d}~ \left( \xi + \xi^* \right) ~ =~{2 \Lambda \over (d + 1) d}~ \left( \int^z { d w \over V(w)} ~+~{\rm h.c.} \right) \eeq and write the equation in terms of the single real variable $t$ (\ref{t}): \beq \label{Finaleqn} f(N) ~d N ~\equiv ~ {N^{d -1} \over P(N) } ~d N ~= d t ~, \eeq where $f(N)$ is defined by the first equality, and the polynomial $P(N)$ is: \beq \label{polynomial} P(N) ~=~ N^{d+1} - a N^{d-1} + b~, \eeq with the coefficients $a,b$ defined as follows: \beq \label{ab} a ~=~ {\alpha \over \Lambda} ~{ d (d+1) \over (d-2) (d-1)}~,~~~ b ~=~ {\epsilon \over \Lambda} ~ {d + 1 \over 2}~. \eeq This completes our discussion of the equations of motion. We saw that the equation of motion for the ansatz (\ref{metricansatz}) reduced to a simple differential equation in one real variable (\ref{Finaleqn}); a general solution can be expressed in terms of the general solution of this equation and a holomorphic function $V$ (\ref{V}). 
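Before turning to the general analysis, the reduction can be verified numerically: eq.~(\ref{Finaleqn}) is the flow $dN/dt = P(N)/N^{d-1}$, and its solution should interpolate between two neighboring roots of $P(N)$ as $t \rightarrow \pm \infty$. The following Python sketch checks this for $d=4$; the values $a=1$, $b=0.1$ are illustrative choices of ours, not parameters fixed by the text.

```python
# Numerical check of dN/dt = P(N)/N^(d-1) with P(N) = N^(d+1) - a N^(d-1) + b,
# for d = 4.  The parameter values a, b are illustrative choices only.
d, a, b = 4, 1.0, 0.1

def P(N):
    return N**(d + 1) - a * N**(d - 1) + b

def bisect(f, lo, hi, tol=1e-12):
    # Simple bisection; assumes f changes sign on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

# The two nearest-neighbor positive roots bounding the solution:
N1 = bisect(P, 0.4, 0.7)   # smaller root, P'(N1) < 0
N2 = bisect(P, 0.8, 1.0)   # larger root,  P'(N2) > 0

def flow(N, dt, steps):
    # Forward-Euler integration of dN/dt = P(N)/N^(d-1).
    for _ in range(steps):
        N += dt * P(N) / N**(d - 1)
    return N

N_plus = flow(0.7, 1e-3, 200_000)    # t -> +infinity
N_minus = flow(0.7, -1e-3, 200_000)  # t -> -infinity
```

The forward flow relaxes to the root with $P^\prime<0$ and the backward flow to the root with $P^\prime>0$, so $N(t)$ indeed approaches finite values, the roots of $P(N)$, as $|t|\rightarrow\infty$.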
We also note that this provides a solution to eq.~(\ref{munuequation}) as well---it is easy to see that (\ref{singleeqn}) together with the holomorphicity of $V$ implies (\ref{munuequation}), away from possible singularities (which, as we will see in the next section, appear in $K_{,ii}$). In the following section we present a general analysis of the solutions of (\ref{Finaleqn}). \section{General analysis of the solutions.} In this section, we discuss the properties of the general solution of the equation (\ref{Finaleqn}). To begin, we need to specify the range of the variable $t$ in (\ref{Finaleqn}). In the cases of interest to us $t$ will span the entire real axis---consider e.g. the case $V(z) = z/c$, so that $t \sim \log |z|$. In order to solve (\ref{Finaleqn}) we need to find a function $F(N)$ such that $F^\prime (N) = f(N)$. Moreover, we should be able to invert $F(N)$ for all values of $t$---then $N(t) = F^{-1}(t)$ will satisfy (\ref{Finaleqn}). Even though the equation is rather complicated and the explicit form of $F(N)$ is generally unknown, we will see that the quantities of interest to us do not require explicit knowledge of the solution. In fact, the metric can, in appropriate coordinates, be expressed in terms of the polynomial $P(N)$, while the ranges of the variables are determined by appropriately chosen roots of the polynomial $P(N)$. The basic requirement is that we find a function $F(N)$ which monotonically changes from $-\infty$ to $\infty$ over some finite range of values of $N$. For that to be the case, the function $f(N)$ has to have the properties (recall that $F^\prime = f$) that between two values of $N$ {\it a}) it does not change sign and {\it b}) its modulus approaches infinity at the two boundary values of $N$. Consider two ``nearest neighbor'' roots of the polynomial $P(N)$, $N_1$ and $N_2$. 
Assume that both $N_1$ and $N_2$ have the same sign.\footnote{This requirement can be dropped for $d =1$.} Now, at the roots of $P(N)$, $f(N)$ blows up; moreover, if $N_1$ and $N_2$ have the same sign, $f(N)$ does not change sign as $N$ varies between $N_1$ and $N_2$ (we are mostly interested in the case of even $d$). Then, between these two roots $F(N)$ is monotonic and approaches $\infty$ at one of the roots, say $N_1$, and $-\infty$ at the other root, $N_2$. Hence, we can invert the equation (\ref{Finaleqn}) and find the function $N(t) = F^{-1}(t)$ for all values of $t$ on the real axis; moreover, the function $N(t)$ approaches finite values---the roots of $P(N)$, $N_1$ and $N_2$---as $|t| \rightarrow \infty$. Thus the problem of finding a solution of (\ref{Finaleqn}) is reduced to finding the condition that there are two real ``nearest-neighbor'' roots of $P(N)$ of the same sign. It is easily seen, e.g. for $d=4$, that this requires $a > 0$, i.e. that $\alpha$ and $\Lambda$ have the same sign. We assume now that the parameters $a, b$ (\ref{ab}) are such that two such roots, $N_1$ and $N_2$, are found (assume for simplicity that they are both positive). Let us now analyze the properties of the solution. The metric has the form (\ref{metricansatz}); having found the function $N(t)$, we can now determine the $N^2 e^{2K}$ factor in the metric by using (\ref{V}) and the equation (\ref{Finaleqn}): \beq \label{exp2K} N^2 e^{2K} ~=~ {1 \over V} ~\bar{\partial} N ~=~{2 \Lambda \over d (d + 1) } ~{1 \over |V|^2}~ {P(N) \over N^{d -1}}~. 
\eeq Now note that using \beqa \label{ttheta} d t ~&=&~ { 2 \Lambda \over d (d +1) } \left[ {d z \over V(z)} + { d {\bar z} \over {\bar V}({\bar z})} \right] \\ d \theta ~&\equiv&~ i \beta \left[ {d z\over V(z)} - {d {\bar z} \over {\bar V}({ \bar z } ) }\right] ~\nonumber \eeqa we can rewrite the metric (\ref{metricansatz}) as: \beqa \label{metric1} d s^2 ~&=&~ N^2 ds_{(d)}^2 ~+~ 4 N^2 e^{2K} d z d \bar{z} \\ \nonumber \\ \label{metric11} &=& N^2 ds_{(d)}^2~ +~{d (d+1) \over 2 \Lambda}~{P(N) \over N^{d -1}}~d t^2 ~+~{ 2 \Lambda \over d (d + 1)} ~{P(N) \over N^{d -1}}~{d \theta^2 \over \beta^2}~. \eeqa Finally, we can change variables from $t$ to $N$ using (\ref{Finaleqn}) and write the metric as: \beq \label{metric2} d s^2 ~=~ N^2 ds_{(d)}^2 ~+~ {d (d+1) \over 2 \Lambda}~ {N^{d - 1} \over P(N)} d N^2 ~+~{ 2 \Lambda \over d (d + 1)} ~{P(N) \over N^{d -1}}~{d \theta^2 \over \beta^2}~, \eeq where $N$ now varies between the two roots, $N_1$ and $N_2$ of $P(N)$. The region of variation of the angle $\theta$ depends on the precise form of $V(z)$ and the choice of coefficient $\beta$; $\beta$ in (\ref{ttheta}) is an arbitrary constant that can be reabsorbed in redefinition of $\theta$.\footnote{In $d=1$, $\alpha = 0$, $P(N) = N^2 + b$, so upon changing variables $N = |b|^{1/2} \cos \Omega$ in (\ref{metric2}) we recover the solution obtained in \cite{DJ} by explicitly solving the equations of motion.} Clearly the metric (\ref{metric2}) has singularities at $N = N_1,N_2$. To find what these singularities imply for the physical interpretation of the solution, it is convenient to consider again the metric in the form (\ref{metric1}), (\ref{exp2K}). Clearly the zeros of $V(z)$ are singular points of the metric. Under the map $z \rightarrow t$ (\ref{ttheta}) these zeros are mapped to $t = \pm \infty$, which, finally, are mapped to the roots of the polynomial $P(N)$, $N_{1,2}$. 
Now recall that the geodesic equation---the equation of motion of the branes (\ref{geodesic})---also has to be obeyed (for example, if this equation is not obeyed, an initially static brane will accelerate). In the static gauge, $X^\mu = \sigma^\mu, X^i_{,\mu} = 0$, (\ref{geodesic}) implies that $\sqrt{\tilde{g}} \Gamma^i_{\mu \nu} \tilde{g}^{\mu \nu} = 0$ at the positions of the branes (note that in the static gauge the induced metric is equal to the metric of the embedding space restricted to the brane). For our ansatz (\ref{metricansatz}) the geodesic equation in the static gauge amounts to the condition \beq \label{condition} e^{(d-2)\lambda - 2K} \lambda_{,i} = 0 ~\rightarrow ~ N^{d-1} V = 0~, \eeq which should hold at the positions of the branes. The positions of the branes, therefore, correspond to zeros of the holomorphic function $V$ (\ref{V}), and hence to $N_{1,2}$ in the coordinates (\ref{metric2}). In the following, consider the simple ansatz for $V(z)$, $V(z) = z/c$; note that since $t = 2 \Lambda (c \log z + c^* \log \bar{z})/(d+1)d$, single valuedness of the map $t(z, \bar{z})$ requires $c$ to be real. To see what kind of singularity this is, consider the $\mu, \nu$ components of the Einstein equations in the static gauge: \beq \label{einsteinmunusource} \sqrt{\hat{g}}~ \left( \hat{R}_{\mu \nu} - {1\over 2} \hat{g}_{\mu \nu} \hat{R} + \hat{g}_{\mu \nu} \Lambda \right) ~=~ {1\over 2} {f_a^d\over M^d} ~ \delta^2(y - X_a)~ \tilde{g}^{1/2} ~\tilde{g}_{\mu\nu} ~, \eeq where the hats indicate that the curvatures and metric are those of eq.~(\ref{metricansatz}). 
The metric in the $z$-coordinates (\ref{metric1}) can be written in the form: \beq \label{metricnearzero1} d s^2 ~=~ N^2~ d s_{(d)}^2 + e^{2 \kappa} ~d z ~d {\bar z}~, \eeq where \beq \label{kappa} \kappa ~=~ {1 \over 2} \log \left[ {c^2 \over |z|^2} {P(N) \over N^{d-1}} {2 \Lambda \over d (d + 1) }\right] \eeq Near the singularity, equation (\ref{einsteinmunusource}) becomes: \beq \label{singul} - {1\over 2}~ e^{2 \kappa}~ R_{2} ~= - \partial {\bar\partial} \kappa ~= ~ {1\over 2} ~{f_a^d \over M^d}~ \delta^2 (z - X_a) ~, \eeq where the equality holds up to terms that do not contain delta functions. To calculate the function $\kappa$ (\ref{kappa}) near $z = 0, \infty$ note that these two points are mapped by (\ref{ttheta}) to $t = \pm \infty$ (whether $z=0$ is mapped to $t = +\infty$ or $-\infty$ depends on the signs of $\Lambda$ and $c$). These points, on the other hand, correspond to the roots of the polynomial $P$, $N_1$ and $N_2$. Near $z = 0$ (assume, for concreteness, that $z = 0$ is mapped to $N = N_1$), the dependence of $N$ on $t$ is easy to find upon solving eqn.~(\ref{Finaleqn}) by keeping the most singular term: \beq \label{NnearN1} | N(z) - N_1 | \simeq |z|^{2 \delta_1}~, \eeq where \beq \label{delta1} \delta_1 = {2 \Lambda c \over d (d+1)} {P^\prime(N_1) \over N_1^{d-1}}~. \eeq Therefore, near $z=0$, the function $\kappa$ from the metric (\ref{metricnearzero1}) behaves as: \beq \label{kappanearzero} \kappa ~\simeq ~ - \log |z|^{1 - \delta_1} ~. \eeq Upon comparison with eq.~(\ref{singul}), noting that $\partial \bar\partial \kappa = - 2 \pi (1 - \delta_1) \delta^2 (z)$ we can now find the tension of the brane at $z = 0$, \beq \label{tensionnearN1} f_1^d = M^d ~4 \pi ~(1 - \delta_1) ~. 
\eeq We can repeat the same analysis near $z = \infty$ mapped to the other root, $N=N_2$.\footnote{The leftmost equation in (\ref{condition}) implies that the roots of $P(N)$ are stationary points of $\lambda$ and the equation of motion of the branes is also satisfied at $N_2$.} It is convenient to change variables to $u = 1/z$. Note that the metric (\ref{metricnearzero1}) (and the function $\kappa$) has the same form in the $u$ variables. The behavior of $N$ near $u = 0$ from eq.~(\ref{Finaleqn}) is: \beq \label{NnearN2} |N(u) - N_2 | \simeq |u|^{- 2 \delta_2}~, \eeq where \beq \label{delta2} \delta_2 = {2 \Lambda c \over d (d+1)} {P^\prime(N_2) \over N_2^{d-1}}~. \eeq Consequently, $\kappa \simeq - \log |u|^{1 + \delta_2}$, and the tension of the brane near $z = \infty$ is then \beq \label{tensionnearN2} f_2^d = M^d 4 \pi ~(1 + \delta_2) ~. \eeq We note the conditions that $0 < \delta_1 < 1$ and $-1 < \delta_2 < 0$. They follow from requiring positivity of the brane tensions ($\delta_2>-1$ and $\delta_1 <1 $). On the other hand, the distance between the two branes should be finite along any path in the $z$-plane, which requires $\delta_1 > 0$ and $\delta_2 < 0$; this follows from considering the interval (\ref{metric1}) in the $z$-coordinates and requiring integrability of $ds$ near $z = 0$ and $z = \infty$.\footnote{Equivalently, $\delta_1 > 0$, $\delta_2 < 0$ ensure that the map $|z| \rightarrow N$ is nondegenerate near the ends of the interval, as follows from the expressions (\ref{NnearN1}, \ref{NnearN2}).} Finally, positivity of the metric (\ref{metric1}) of the transverse space also requires that $\Lambda P(N) N^{1-d}$ be positive as $N$ varies over the interval $N_1, N_2$. These restrictions impose constraints on the various parameters in the action and the solution; these will be considered for our $d = 4$ example in Section 4. We can now deduce a general formula for the Planck scale and cosmological constant on the $a$-th brane. 
The Einstein term in the d-dimensional effective action will be: \beqa \label{Mplanck} &&- M^{d} ~\int d^d x ~d N ~{d \theta \over \beta} ~ \sqrt{- \det ( N^2 g_{(d)}) } ~ R_{(d)} ( N^2 g_{d}) \\ ~&=& - \int d^d x \sqrt{g_{(d)}}~ R_{(d)} (g_{d}) ~ M^{d} ~ \int_{N_1}^{N_2} N^{d-2} d N ~\beta^{-1} \int d \theta ~\nonumber ~. \eeqa Let the warp factor on the $a$-th brane be $N_{(a)}$. The physical metric there is the induced metric $N_{(a)}^2 g_{(d)}$. Hence the d-dimensional Planck constant---the coefficient in front of the curvature term in the action---will be: \beq \label{dplanck} M_{Pl(a)}^{d-2} ~=~{M^{d} \over (d-1)} ~N_{(a)}^{2-d}~\left( N_2^{d-1} - N_1^{d-1} \right) ~ \beta^{-1} \int d \theta ~, \eeq where the $\theta$ integral is over the appropriate range that follows from (\ref{ttheta}) and depends on the choice of $V$ and $\beta$. Since the induced metric on the $a$-th brane is $N_{(a)}^2 g_{(d)}$, and since $g_{(d)}$ obeys the equation (\ref{4dmetric}), the cosmological constant on the $a$-th brane, denoted by $\Lambda_{(d),(a)}$, is \beq \label{lambdad} \Lambda_{(d),(a)} ~=~ N_{(a)}^{-2}~ \alpha~. \eeq Similarly, one can derive an expression for the proper distance between the branes at $N_1$ and $N_2$, which we will present for the case $d=4$ in the following section. \section{An explicit example and some speculations.} As an application of the general analysis of the previous section, let us now consider the physically interesting case $d = 4$. The polynomial $P(N)$ is then of degree 5. It is easy to see that if the coefficients $a, b$ in $P(N)$ obey\footnote{More precisely, the numerical constant on the r.h.s. is $(3/5)^{3/2} - (3/5)^{5/2}$.} \beq \label{abconditions} a > 0 ~~ {\rm and} ~~ 0 < b < 0.18 ~ a^{5/2} ~ \eeq then there are two positive roots $N_1$ and $N_2$ of $P(N)$; in addition, $P(N) < 0$ as $N$ varies between the two roots, while $P^\prime (N_1) < 0 $ and $P^\prime (N_2) > 0$. 
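The origin of the bound in the footnote above is easy to see: for $N>0$, $P(N) = N^5 - a N^3 + b$ has a single local minimum at $N_* = \sqrt{3a/5}$, where $P(N_*) = b - \left[ (3/5)^{3/2} - (3/5)^{5/2} \right] a^{5/2}$; two positive roots exist precisely when $b>0$ and this minimum value is negative. A numerical confirmation in Python (the value $a=2$ is an arbitrary illustrative choice):

```python
# Count the positive roots of P(N) = N^5 - a N^3 + b on either side of the
# critical value b_max = [(3/5)^(3/2) - (3/5)^(5/2)] a^(5/2) ~ 0.186 a^(5/2).
# The value of a is an illustrative choice only.
import math

def count_positive_roots(a, b, grid=200_000):
    # Count sign changes of P on a fine grid of N > 0 (all roots lie below Nmax).
    Nmax = 2.0 * math.sqrt(a) + 1.0
    P = lambda N: N**5 - a * N**3 + b
    changes, prev = 0, P(1e-9)
    for k in range(1, grid + 1):
        cur = P(k * Nmax / grid)
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes

a = 2.0
b_max = ((3 / 5)**1.5 - (3 / 5)**2.5) * a**2.5
n_below = count_positive_roots(a, 0.9 * b_max)  # b below the bound
n_above = count_positive_roots(a, 1.1 * b_max)  # b above the bound
```

For $b$ just below the bound the scan finds two positive roots, and none just above it, in agreement with (\ref{abconditions}).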
Since $P(N)$ is negative in the range of variation of $N$, the metric (\ref{metric1}) is positive definite only for negative $\Lambda$, corresponding to de Sitter space in our convention.\footnote{This conclusion is always true: in the case of two negative roots of $P(N)$, which can be achieved by adjusting the value of $b$, $P>0$ between the roots, but $N < 0$ and positivity of the metric also requires negative $\Lambda$.} Note that since $a \sim \alpha/\Lambda > 0$, this implies that $\alpha < 0$ as well and the four dimensional metric on the branes is also de Sitter (also note that a solution with $\alpha = 0$ does not exist---then $P(N)$ has a single real root and the equation (\ref{Finaleqn}) can not be solved for the desired interval of values of $t$). We need to satisfy the conditions $\delta_i > 0$ for the root $N_i$ which is the image of $z = 0$ and $\delta_j < 0$ for the other root $N_j$ (the image of $z =\infty$), so that the proper distance between the two roots is finite. Since $\delta_i \sim \Lambda c P^\prime (N_i)/N_i^3$, (\ref{delta1}, \ref{delta2}), for our choice of two positive roots, $0< N_1 < N_2$, we find that ${\rm sign}~ \delta_1 = {\rm sign}~ c$ and ${\rm sign} ~\delta_2 = - {\rm sign}~ c$ (since $\Lambda < 0$). On the other hand, since $t \sim \Lambda c \log|z|$, if $c > 0$, then $z = 0$ is mapped to $t = + \infty$ (and, since $f(N) < 0$, the smaller root $N_1$), while $z = \infty$ is mapped to $t = - \infty$ and, hence, the larger root $N_2$. Then the finite proper distance requirement is satisfied by the choice $c>0$, yielding $\delta_1 > 0$, $\delta_2 < 0$. The coordinate $\theta$ (\ref{ttheta}) then ranges over $(0, 2 \pi c)$ (we chose $\beta = - 1/2$). 
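These sign assignments can be confirmed numerically. The sketch below evaluates $\delta_{1,2}$ from eqs.~(\ref{delta1}), (\ref{delta2}) for sample values of $a$, $b$, $\Lambda$ and $c$ (all four are illustrative choices of ours) and checks that they fall in the ranges $0<\delta_1<1$ and $-1<\delta_2<0$ required above:

```python
# Evaluate delta_i = [2 Lambda c / (d(d+1))] P'(N_i) / N_i^(d-1) for d = 4,
# with Lambda < 0 and c > 0 as in the text.  The numerical values of
# a, b, Lambda, c below are illustrative choices only.
d = 4
a, b = 2.0, 0.5            # satisfies 0 < b < 0.186 a^(5/2)
Lam, c = -0.05, 1.0

P = lambda N: N**5 - a * N**3 + b
Pp = lambda N: 5 * N**4 - 3 * a * N**2   # P'(N)

def bisect(f, lo, hi, tol=1e-12):
    # Simple bisection; assumes f changes sign on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

N1 = bisect(P, 0.1, 1.0)   # smaller positive root
N2 = bisect(P, 1.0, 2.0)   # larger positive root

delta1 = 2 * Lam * c / (d * (d + 1)) * Pp(N1) / N1**(d - 1)
delta2 = 2 * Lam * c / (d * (d + 1)) * Pp(N2) / N2**(d - 1)
```

With $c>0$ one indeed finds $\delta_1>0$ and $\delta_2<0$, so the finite-proper-distance and positive-tension requirements can be met simultaneously.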
Expressing the parameters $\delta_{1,2}$ through the tensions of the branes we obtain: \beqa \label{tensions4d} \delta_1 &=& {|\Lambda| c \over 10} ~ { |5 N_1^4 - 3 a N_1^2| \over N_1^3} = 1 - {f_1^4 \over 4 \pi M^4} \nonumber \\ |\delta_2| &=& {|\Lambda| c \over 10} ~ { 5 N_2^4 - 3 a N_2^2 \over N_2^3}= 1 - {f_2^4 \over 4 \pi M^4}~. \eeqa The induced metric on the $i$-th brane has \beq \Lambda_{4,i} = N_{i}^{-2} \alpha ~~{\rm and}~~ M_{Pl, i}^2 = M^4~ {4 \pi c \over 3} ~ N_i^{-2} ~ (N_2^3 - N_1^3)~, \eeq while the proper distance between the branes is: \beq \label{properdistance} R ~=~ \sqrt{10 \over \Lambda} ~ \int_{N_1}^{N_2} { N^{3/2} d N \over | N^5 - a N^3 + b |^{1/2}}~. \eeq There are four parameters in the action with which we started: the 6 dimensional gravitational constant, $M$, the cosmological constant, $\Lambda$, and the tensions of the two branes, $f_1, f_2$. The solution involves three additional dimensionful parameters: the parameter $\alpha$ of the ansatz for the d-dimensional metric (\ref{4dmetric}), the integration constant $\epsilon$ (\ref{finaleqn2}), and $c$---the ``strength'' of the zero of $V(z)$. The equations (\ref{tensions4d}), expressing the tensions of the branes in terms of the other parameters of the solution, represent two constraints on the seven parameters $M, \Lambda, f_1, f_2, \epsilon, \alpha, c$. We can use them to eliminate the tensions $f_{1,2}$ from the list of our parameters. The only importance of eqns.~(\ref{tensions4d}) then is that their left-hand sides have to be smaller than 1, to ensure positivity of the brane tensions. {}From the four dimensional effective theory point of view the quantities of interest are the four dimensional Planck constant, $M_{Pl}$, and the cosmological constant, $\Lambda_{4}$. Of interest are also the mass of Kaluza-Klein excitations, as well as the distance scale at which a gravitational experiment on the brane will reveal deviations from Newton's law. 
The last two quantities depend, of course, on the size of the extra dimensions (and hence on the distance $R$ (\ref{properdistance})). However, due to the warped geometry, there can also be a nontrivial dependence on the warp factors; we leave a detailed investigation of this issue for future work. To get an idea as to what the effect of the nontrivial warp factors might be, consider first the case where the parameters $a,b$ in $P(N)$ (\ref{polynomial}), (\ref{ab}) satisfy $0 < b \ll a^{5/2}$. The advantage of this limit is that the roots can be easily approximated by: \beq \label{roots4d} N_1 \simeq \left( b\over a \right)^{1/3} \sim \left( \epsilon \over \alpha \right)^{1/3} ~~~{\rm and} ~~ N_2 \simeq a^{1/2} \sim \left( \alpha \over \Lambda \right)^{1/2}~. \eeq Note that the case we are considering corresponds to $N_2 \gg N_1$---the warp factors on the two branes can be (upon adjusting the parameters, of course) very different. The conditions of positive tensions (\ref{tensions4d}) then imply \beq \label{positivetensions4d} {3 |\Lambda| c \over 10 }~ a ~\left( a \over b \right)^{1/3} < 1 ~~{\rm and} ~~ |\Lambda| ~c ~a^{1/2} < 1~. \eeq In the case we are discussing, the proper distance (\ref{properdistance}) between the branes is easily seen to be approximately \beq \label{properdistance4d} R \sim {1 \over \sqrt{\Lambda}}~. \eeq The induced metrics on the two branes at $N_1$ and $N_2$ are de Sitter with cosmological constants: \beq \label{cc4d} ~ |\Lambda_{(4),1}| = N_1^{-2} |\alpha| \sim |\alpha | ~ \left( \alpha \over \epsilon \right)^{2/3}~ ~{\rm and}~~ |\Lambda_{(4), 2}| = N_2^{-2} |\alpha| \sim |\Lambda|~ , \eeq while the corresponding Planck constants are: \beq \label{mpl4d} M_{Pl,1}^2 \sim M^4 c ~ a^{3/2} N_1^{-2} \sim M^4 c~ {|\alpha|^{13/6} \over |\Lambda|^{3/2} |\epsilon|^{2/3} }~ ~{\rm and} ~~ M_{Pl, 2}^2 \sim M^4 c ~ a^{3/2} N_2^{-2} \sim M^4 c~ {|\alpha|^{1/2} \over |\Lambda|^{1/2}} ~. 
\eeq The vacuum energy density on the brane is of order $M_{Pl}^2 \Lambda_{(4)}$. If one of the branes described the physical universe, the parameters should be tuned so that $\Lambda_{(4),i}/M_{Pl,i}^2 \sim 10^{-120}$ (or to a value smaller than that; note also that this ratio is the same on both branes, since the warp factors cancel between numerator and denominator). It is not our objective to solve the cosmological constant problem; we will only ask whether our parameter space allows for at least a fine-tuned situation, subject to the constraints imposed by the existence of a solution. Tuning the cosmological constant to the observed small value, therefore, requires that \beq \label{cctuningcondition} {M^4 \over \Lambda^2}~ (|\Lambda| c) ~a^{1/2} \sim 10^{120}~. \eeq It is convenient to work in terms of the dimensionless ratios $a$, $b$, $(M^2 |\Lambda|^{-1})$, $(|\Lambda| c)$, and keep $\alpha$ as the only dimensionful parameter. Consider first the case where the observed universe is at the first brane. Then, $$ M_{Pl, 1}^2 \sim \left[ {M^4\over|\Lambda|^2}~ (|\Lambda| c)~ a^{1/2} \right] ~|\alpha| ~ \left(a\over b\right)^{2/3} \sim 10^{120} ~|\alpha| ~\left(a\over b\right)^{2/3} ~\sim 10^{54} ~~eV^2~. $$ Hence, \beq \label{alpha3} \alpha \sim \left( b\over a \right)^{2/3} 10^{- 66} eV^2~. \eeq Then $$ {1\over R} \sim |\Lambda|^{1/2} \sim |\alpha|^{1/2} a^{-1/2} \sim { b^{1/3}\over a^{5/6}} ~10^{-33} ~ eV~. $$ This is a very large distance, unless $b\gg a^{5/2}$, which violates (\ref{abconditions}). If the observed universe is on the other brane, we obtain instead $|\alpha| \sim a 10^{-66} eV^2$ and $1/R \sim 10^{-33} eV$---also a large value for $R$. Therefore, if the deviations from Newton's law reveal themselves to an experimentalist on the brane at a distance scale of order $R$, then we can not accommodate the observed universe on one of our branes. 
If, on the other hand, the relevant scale of deviations is $N_i R$, then it is possible to fine-tune the parameters to obtain a value for $N_i R$ of order a millimeter, and hence accommodate the observed world in our setup (admittedly with enormous fine-tuning of parameters!). The above considerations apply also in a more general situation, where eq.~(\ref{cctuningcondition}) is replaced by: \beq \label{cctuninggeneral} {M^4 \over \Lambda^2}~ (|\Lambda| c) ~{N_2^3 - N_1^3 \over a} \sim 10^{120}~, \eeq while (\ref{alpha3}) becomes: \beq \label{alpha4} |\alpha| ~\sim~ N_i^2~ 10^{- 66} eV^2~, \eeq provided that the observed universe is identified with the brane at $N_i$. But then the inverse radius becomes: \beq \label{radius} {1 \over R} ~\sim~ {N_i \over a^{1/2}} ~I~ 10^{-33} eV~, \eeq where $I$ is the integral \beq \label{integral} I ~=~ \int_{N_1}^{N_2} { N^{3/2} d N \over | N^5 - a N^3 + b |^{1/2}}~, \eeq which is trivial in the limit $b \rightarrow 0$ considered before. Assuming that in our geometrical setup the deviations of gravity from Newton's law will show up, to an observer on ``our" brane, at a distance scale of order $R$, an acceptable value of $1/R > 10^{-3} eV$ requires $ N_i I a^{-1/2} > 10^{30}$ (which appears hard to achieve, similar to the example given above). If the deviations show up at a scale $N_i R$, then the requirement is $I a^{-1/2} > 10^{30}$ (which is achievable via fine tuning). We leave the investigation of this issue for future work. \section{Acknowledgments.} We would like to thank N. Kaloper, R. Leigh, M. Luty, and V. Moncrief for helpful discussions and comments.
\section{Introduction} Black hole entropy is discussed in a number of contexts: thermodynamics and statistical mechanics \cite{BCH,Bekenstein,Gibbons hawking,'t Hooft,Wald review}, quantum entanglement \cite{Bombelli,Srednicki,Plenio}, spacetime symmetries \cite{Wald 1,wald iyer,Carlip,Silva} and more. It is not clear whether entropy in all these contexts refers to the same entity, although the different notions are frequently taken to be equivalent. The fact that black holes obey thermodynamic-type laws is still not understood; for example, we do not know what degrees of freedom the entropy represents. If the different versions of entropy do not coincide, the confusion increases. In order to shed some light on the matter, we examine statistical entropy in curved space, and we ask whether statistical entropy can be related to the curvature of spacetime. Our main motivation is an attempt to introduce a little more clarity into the discussion of entropy in the black hole context, by providing an unequivocal description of the curvature independence of statistical entropy as it is conventionally defined in the literature. The Unruh effect shows that an accelerated observer sees a thermal bath of particles, while an inertial observer in Minkowski space sees the vacuum \cite{Unruh}, so clearly the statistical entropy in the two cases will be different. Both the Minkowski and Rindler metrics have the same (vanishing) curvature, so it appears that the entropy must be observer dependent and not a function of curvature. On the other hand, Wald's Noether charge entropy \cite{Wald 1,wald iyer} is defined in terms of the curvature tensor, \begin{equation} S_{Wald}=2\pi\intop_{\Sigma}\frac{\delta L}{\delta R_{abcd}}\epsilon_{ab}\epsilon_{cd}\end{equation} where the functional derivative is taken viewing the Riemann tensor as a field independent of $g_{ab}$, and $\epsilon_{ab}$ is the binormal to the bifurcation surface. Thus it appears to be different from the entropy defined in statistical mechanics. 
The question is whether for any curvature (not just in the absence of curvature) the number of states is observer dependent, or whether there is a possible dependence on curvature. The textbook definition of statistical entropy originally assumed flat space, where the choice of vacuum is unambiguous. In curved space this is not the case. Thus it would appear that this definition is not suitable for the treatment of black hole entropy. We will show technically where and how curvature independence arises in the conventional definition. This will give a precise understanding of the direction in which it needs to be redefined. In this paper we examine the statistical number of states of matter in a general curved space metric. Our motivation is the understanding of black hole entropy; however, specialisation to the black hole metric involves further issues to be dealt with in future work. The relationship of the entropy of matter outside the horizon to the entropy of the black hole is not clear. 't Hooft \cite{'t Hooft} calculated the statistical entropy of a scalar field in the black hole metric, where his motivation was to reconcile black hole physics with quantum mechanics. At the time of writing, black holes were understood to be in a quantum mechanically mixed state, and 't Hooft attempted to describe them as pure states resembling ordinary particles. Thus black holes inhabit an extension of Hilbert space with a corresponding Hamiltonian. This system is sensitive to observer dependence: the free falling observer perceives matter, and 't Hooft writes that it is this matter which he considers in his paper. The distinction between presence and absence of matter is assumed to be observer dependent when considering coordinate transformations with a horizon. 
Another view on the relationship between the statistical entropy of a field outside the horizon and the black hole is the idea that the horizon entropy arises from the microscopic structure of spacetime and that the matter fields inherit the entropy as material kept in a hot oven inherits the temperature \cite{paddy kolekar ref 13,Kolekar}. Yet another view takes into account the fact that the entanglement entropy of a bipartite system, which expresses the quantum correlations between its subsystems, is equal to the statistical entropy of a subsystem if it is in a thermal state \cite{Kabat,kabat 3 israel,JudyRamy}; thus the statistical entropy of fields outside the horizon may equal the entanglement entropy of the black hole system.\footnote{The statistical entropy, as discussed in this paper, is fundamentally different from the usual notion of quantum entanglement entropy. This distinction is further clarified in the concluding section.} In this paper we do not discuss these issues. We focus only on the curvature independence of statistical entropy in a general curved space, as a first small step in clarifying the relation of the different concepts of black hole entropy. We will show that the number of states from which statistical entropy is derived is an explicit function of the metric. Since the curvature derives from the metric, it would seem that the number of states is related to curvature. However, we find that for certain transformations of the metric, the number of states is preserved. These transformations do not preserve curvature. Therefore the number of states does not depend on curvature. This is shown only for a diagonal metric, but it serves as a counterexample showing that in the most general case the number of states is not a function of the spacetime curvature scalar. This paper is organized as follows. First we establish the definition of the number of states, and methods of calculating the volume of phase space. 
We then ask under what conditions a transformation of the metric will leave this volume invariant. We obtain a general transformation of any diagonal metric which displays a clear constraint on the preservation of the number of states. We examine characteristics of this transformation and look for a possible relationship to curvature. We find that in general it need not preserve curvature. That is, the number of states, and thus the entropy, will remain the same for systems with different curvature. This is shown for a diagonal metric in a static spacetime, but serves as a counterexample showing that in general entropy does not depend on curvature. \section{Definition of number of states} In classical thermodynamics the number of states of a nonrelativistic system is defined as follows: take an integral over the volume of phase space ($d^{3}xd^{3}p$), restrict it to values of momenta which fit the energy eigenvalues of the system, and the number of states is\begin{eqnarray} N & = & g\int d^{3}x\int\frac{d^{3}p}{\left(2\pi\hbar\right)^{3}}\nonumber \\ & = & gV\int\frac{d^{3}p}{\left(2\pi\hbar\right)^{3}}\end{eqnarray} where $g$ is a numerical factor related to the degeneracy. Dividing by the unit cell of phase space, $2\pi\hbar$ per dimension, gives the number of states with the given energy. Since we do not limit ourselves to nonrelativistic systems, nor to three space dimensions, a more general definition is necessary \cite{Kolekar,Paddy phase space}. The number of states is then defined as\begin{equation} N=\int d^{d}x\frac{d^{d}p}{(2\pi\hbar)^{d}}dE\delta(E-E(p)).\label{eq:N general first formula}\end{equation} where $d$ denotes the number of space dimensions \footnote{This integral is actually $\int d^{d}x\frac{d^{d}p}{(2\pi\hbar)^{d+1}}dE\left[2\pi\hbar\delta(E-E(p))\right]$.}. Without loss of generality we are taking a constant time hypersurface. The number of states is Lorentz invariant; for a proof see \cite{Paddy book}. 
Remarks on notation: for simplicity of notation in this paper, $g_{00}$ refers to the positive value of the time component of the metric, except where explicitly stated otherwise; the minus sign appears explicitly in the form of the equations. An explicitly covariant derivation which parallels ours can be found in \cite{Paddy phase space}. We here keep the vector notation because it clarifies our proof in what follows. In order to apply this definition to curved space, we need to clarify what momentum and energy refer to for a matter field or gas of particles in curved space. There are (at least) two possible ways to approach the issue. One is that of \cite{'t Hooft}, who took $\psi(x)$, a scalar wave function for a light spinless particle of mass $m$ in the Schwarzschild metric, $m\ll1\ll M$ where $M$ is the black hole mass, used a WKB approximation, wrote the wave equation, and defined the spatial momentum $k(r)$ in terms of the eigenvalues of the Laplacian operator, while taking the energy as the eigenvalue of the time component of the Laplacian. He obtained the number of states by calculating $\int k(r)dr$ and then summing over angular degrees of freedom. Another possibility is that of \cite{Kolekar,Paddy phase space}, who treated a relativistic gas of particles and, rather than the wave equation, used the scalar invariance of the squared four-momentum of the particle, while the covariant energy of a particle is the projection of the timelike Killing vector on the four-momentum. Both approaches give the same relationship between energy and momentum, which for a general static metric is \begin{equation} g^{00}E^{2}=\sum_{i}g^{ii}\left(p_{i}\right)^{2}\label{eq:energy momentum eq}\end{equation} taking a massless particle for simplicity. The number of states given by the volume of phase space is the product of the volumes of position and momentum space. 
The momentum component of the number of states belongs to a constrained region in the cotangent space of the region of configuration space in question. For example, in Cartesian coordinates in flat space eq.(\ref{eq:energy momentum eq}) gives\begin{equation} E^{2}-p_{x}^{2}-p_{y}^{2}-p_{z}^{2}=0\end{equation} and this defines a sphere of radius $E$:\begin{equation} 1=\frac{p_{x}^{2}}{E^{2}}+\frac{p_{y}^{2}}{E^{2}}+\frac{p_{z}^{2}}{E^{2}}.\end{equation} In statistical physics we take all energies up to a given energy, and so we look for the volume enclosed by this sphere, $\frac{4}{3}\pi E^{3}$. If the metric is not flat, the region will be an ellipsoid. Since our proof of curvature independence rests on a counterexample, we are free to take a static metric with a timelike Killing vector and can define the energy accordingly. For a general static diagonal metric eq. (\ref{eq:energy momentum eq}) gives \begin{eqnarray} g^{00}E^{2} & = & \sum_{i}g^{ii}\left(p_{i}\right)^{2}\nonumber \\ 1 & = & \sum_{i}\frac{g^{ii}\left(p_{i}\right)^{2}}{g^{00}E^{2}}=\sum_{i}\frac{p_{i}^{2}}{g_{ii}g^{00}E^{2}}\end{eqnarray} where the spatial momenta $p_{i}$ are summed over all space directions. This is the formula for an ellipsoid with axes $\sqrt{g_{ii}g^{00}}E,$ which encloses a region whose volume in three space dimensions would be $\frac{4}{3}\pi\sqrt{g_{xx}g_{yy}g_{zz}}\left(\sqrt{g^{00}}E\right)^{3}$. In $d+1$ spacetime dimensions this becomes \begin{equation} C_{d}\sqrt{g_{d}}\left(\sqrt{g^{00}}E\right)^{d}\label{eq:ellipsoid}\end{equation} where $g_{d}$ denotes the determinant of the spatial part of the metric and $C_{d}$ is the volume enclosed by the $d$-dimensional unit ball. One then integrates over all momentum space. 
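As a quick numerical illustration (a sketch in Python; the function name is ours), $C_{d}$ can be evaluated from the Gamma-function formula $C_{d}=\pi^{d/2}/\Gamma(d/2+1)$, recovering the familiar values $C_{2}=\pi$ and $C_{3}=\frac{4}{3}\pi$:

```python
from math import pi, gamma

def unit_ball_volume(d):
    """C_d = pi^(d/2) / Gamma(d/2 + 1), the volume enclosed by the unit d-ball."""
    return pi ** (d / 2) / gamma(d / 2 + 1)

# Familiar checks: the unit disk (pi), the 3-ball (4*pi/3),
# and the 4-ball (pi^2/2) relevant to the 4+1-dimensional case.
C2, C3, C4 = (unit_ball_volume(d) for d in (2, 3, 4))
```

The momentum-space volume of eq.(\ref{eq:ellipsoid}) then carries the prefactor $\frac{4}{3}\pi$ for $d=3$ and $\frac{\pi^{2}}{2}$ for $d=4$.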
Since the measure in the momentum integral includes the square root of the determinant of the inverse spatial metric, that is, the integral is given by\begin{equation} \int\frac{d^{d}p}{\sqrt{g_{d}}}\end{equation} the space determinant in eq.(\ref{eq:ellipsoid}) cancels out, and the integral over momentum space gives the volume of a ball with radius $\sqrt{g^{00}}E.$ Therefore the number of states in $d+1$ dimensions ($d$ space dimensions) for a diagonal metric is\begin{eqnarray} N & = & C_{d}E^{d}\intop_{V}d^{d}x\sqrt{g_{d}}\left(g^{00}\right)^{\frac{d}{2}}\nonumber \\ C_{d} & = & \frac{\pi^{\frac{d}{2}}}{\Gamma\left(\frac{d}{2}+1\right)}.\end{eqnarray} An explicit proof for $3+1$ and $4+1$ dimensions appears in the Appendix. \section{Invariance of number of states under transformation of metrics} We wish to examine a general transformation which changes the metric while leaving the number of states invariant. We find that such a transformation exists, but does not preserve curvature. We give details of the transformation, followed by examples of the relation to curvature. We begin with conformal rescaling. If the metric changes by $\tilde{g}_{\mu\nu}=a(x)g_{\mu\nu},$ then in $3+1$ dimensions the number of states is\begin{eqnarray} N_{0} & = & \int\sqrt{g_{3}}d^{3}xd^{3}p=\int\sqrt{g_{3}}\frac{4\pi E^{3}}{3}\left(g^{00}\right)^{3/2}d^{3}x,\nonumber \\ \tilde{N} & = & \int\sqrt{\tilde{g}_{3}}d^{3}xd^{3}p\nonumber \\ & = & \int a^{3/2}\sqrt{g_{3}}\frac{4\pi E^{3}}{3}\left(\frac{1}{a}g^{00}\right)^{3/2}d^{3}x\nonumber \\ & = & \int\sqrt{g_{3}}\frac{4\pi E^{3}}{3}\left(g^{00}\right)^{3/2}d^{3}x=N_{0},\end{eqnarray} since $\tilde{g}_{00}=a(x)g_{00}$ and so $\tilde{g}^{00}=\frac{1}{a(x)}g^{00}$. This only works if the metric is uniformly rescaled, so that $a_{0}=a_{i}.$ Thus conformal rescaling preserves the number of states. We conclude that preservation of the number of states requires a constraint on the relationship between the time and space components of the metric. 
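This invariance can be verified symbolically. The following sketch (assuming sympy is available; the variable names are ours) checks that the integrand $\sqrt{g_{3}}\left(g^{00}\right)^{3/2}$ is unchanged under a uniform rescaling $g_{\mu\nu}\rightarrow a\,g_{\mu\nu}$:

```python
import sympy as sp

a = sp.symbols('a', positive=True)
g00, gxx, gyy, gzz = sp.symbols('g00 gxx gyy gzz', positive=True)

def integrand(g00, gxx, gyy, gzz):
    """Integrand of N in 3 space dimensions: sqrt(det g_space) * (g^{00})^{3/2},
    with g^{00} = 1/g00 for a static diagonal metric."""
    return sp.sqrt(gxx * gyy * gzz) * (1 / g00) ** sp.Rational(3, 2)

original = integrand(g00, gxx, gyy, gzz)
# Uniform rescaling g_{mu nu} -> a * g_{mu nu}: the a^{3/2} from the space
# determinant cancels against the a^{-3/2} from (g^{00})^{3/2}
rescaled = integrand(a * g00, a * gxx, a * gyy, a * gzz)
assert sp.simplify(rescaled - original) == 0
```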
In search of a general transformation we take a general diagonal metric in $1+3$ dimensions; generalization to more space dimensions will be simple. \begin{equation} \left(\begin{array}{cccc} g_{00}\\ & g_{xx}\\ & & g_{yy}\\ & & & g_{zz}\end{array}\right)\end{equation} The volume of space in this metric is\begin{equation} \intop_{V}\sqrt{g_{xx}g_{yy}g_{zz}}d^{3}x\end{equation} where the integral is over a given volume $V$. The volume of momentum space is \begin{equation} \intop_{V_{p}}\frac{d^{3}p}{\sqrt{g_{xx}g_{yy}g_{zz}}}\end{equation} where $V_{p}$ is the volume in momentum space. As explained above, from eq.(\ref{eq:energy momentum eq}) \begin{equation} 1=\frac{1}{g_{xx}g^{00}E^{2}}p_{x}^{2}+\frac{1}{g_{yy}g^{00}E^{2}}p_{y}^{2}+\frac{1}{g_{zz}g^{00}E^{2}}p_{z}^{2}\label{eq:wave eq}\end{equation} which is the equation of an ellipsoid with axes $\sqrt{g_{xx}g^{00}}E,\sqrt{g_{yy}g^{00}}E,\sqrt{g_{zz}g^{00}}E$. The momentum volume is obtained by integration, or more simply by just plugging into the formula for the volume of an ellipsoid in 3 dimensions: $\frac{4}{3}\pi abc=\frac{4}{3}\pi\sqrt{g_{xx}g_{yy}g_{zz}}\left(g^{00}E^{2}\right)^{3/2}.$ Phase space is given by \footnote{This can have a prefactor of $\left(2\pi\right)^{-3}$ when calculating the density of modes per unit of phase space.}:\begin{equation} N=\intop_{V}d^{3}x\intop_{V_{p}}d^{3}p.\end{equation} We now transform the metric in an arbitrary way while keeping it diagonal:\begin{equation} \left(\begin{array}{cccc} a(\vec{x})g_{00}\\ & b(\vec{x})g_{xx}\\ & & c(\vec{x})g_{yy}\\ & & & d(\vec{x})g_{zz}\end{array}\right)\end{equation} We plug this into the term for phase space. First we calculate the volume of momentum space for the transformed metric. 
We obtain\begin{eqnarray} \frac{1}{a(\vec{x})}g^{00}E^{2} & = & \frac{1}{b(\vec{x})}g^{xx}p_{x}^{2}+\frac{1}{c(\vec{x})}g^{yy}p_{y}^{2}+\frac{1}{d(\vec{x})}g^{zz}p_{z}^{2}\end{eqnarray} and using eq.(\ref{eq:wave eq})\begin{equation} 1=\frac{1}{a(\vec{x})b(\vec{x})g_{xx}g^{00}E^{2}}p_{x}^{2}+\frac{1}{a(\vec{x})c(\vec{x})g_{yy}g^{00}E^{2}}p_{y}^{2}+\frac{1}{a(\vec{x})d(\vec{x})g_{zz}g^{00}E^{2}}p_{z}^{2}\label{transf wave eq}\end{equation} so that the volume becomes\begin{equation} V_{p}=\frac{4}{3}\pi\sqrt{b(\vec{x})c(\vec{x})d(\vec{x})g_{xx}g_{yy}g_{zz}}\left(\frac{g^{00}}{a(\vec{x})}E^{2}\right)^{3/2}.\end{equation} This will equal the volume before the transformation if\begin{equation} b(\vec{x})c(\vec{x})d(\vec{x})=a(\vec{x})^{3}.\label{eq:THE CONSTRAINT}\end{equation} Thus we have identified the constraint for an arbitrary transformation to preserve the volume of phase space. We looked for some kind of general algebraic characterization of this kind of matrix, but found none. It belongs to $GL(n,R)$ but does not represent a particular symmetry. Conformal transformations form a subgroup of these transformations. Certain non-conformal transformations also preserve the number of states. This holds if the determinants cancel out: that is, for $d$ space dimensions, the time factor $a(x)$, raised to the $d$th power, has to equal the determinant of the space part. Take\begin{equation} A=\left(\begin{array}{cccc} a(x) & 0 & 0 & 0\\ 0 & a(x)^{2} & 0 & 0\\ 0 & 0 & a(x) & 0\\ 0 & 0 & 0 & 1\end{array}\right)\end{equation} As with the conformal transformation, we still have $\sqrt{\tilde{g}_{3}}=a^{3/2}\sqrt{g_{3}}$ and so $\tilde{N}=N_{0}$. So in general our constraint is:\begin{equation} \left(g_{00}\right)^{d}=\det g_{space}\label{eq:constraint in GENERAL}\end{equation} where $d$ is the number of space dimensions and $g_{space}$ is the spatial part of the metric. 
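The constraint of eq.(\ref{eq:THE CONSTRAINT}) can be checked symbolically. Below is a sketch (assuming sympy; the function and variable names are ours) verifying that the momentum ellipsoid volume is unchanged whenever $b\,c\,d=a^{3}$, including for the example matrix $A=\mathrm{diag}(a,a^{2},a,1)$:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', positive=True)
g00, gxx, gyy, gzz, E = sp.symbols('g00 gxx gyy gzz E', positive=True)

def momentum_volume(g00, gxx, gyy, gzz):
    """Euclidean volume of the momentum ellipsoid
    1 = p_x^2/(gxx g^{00} E^2) + ..., with g^{00} = 1/g00."""
    return (sp.Rational(4, 3) * sp.pi * sp.sqrt(gxx * gyy * gzz)
            * (E**2 / g00) ** sp.Rational(3, 2))

V0 = momentum_volume(g00, gxx, gyy, gzz)

# General diagonal transformation (a, b, c, d); impose b*c*d = a**3
Vt = momentum_volume(a * g00, b * gxx, c * gyy, d * gzz)
constrained = Vt.subs(d, a**3 / (b * c))
assert sp.simplify(constrained - V0) == 0

# The example A = diag(a, a^2, a, 1): the space factors multiply to a^3
V_example = momentum_volume(a * g00, a**2 * gxx, a * gyy, gzz)
assert sp.simplify(V_example - V0) == 0
```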
We can regard the transformation matrix, labeled $A$, as two blocks, separating the time and space components:\begin{equation} A=\left(\begin{array}{cc} T\\ & S\end{array}\right)\end{equation} where $T$ is a $1\times1$ matrix and $S$ is a diagonal matrix of size $d$, where $d$ is the dimension of space (the size of $A$ is the dimension of spacetime, $d+1$). Then the constraint requires \begin{eqnarray} det(S) & = & det(T)^{d}\nonumber \\ det(A) & = & det(T)^{d+1}.\end{eqnarray} \section{Relation to curvature} In $d+1$ dimensions \begin{eqnarray} N & \sim & \int d^{d}x\sqrt{\frac{g_{d}}{\left(g_{00}\right)^{d}}}\end{eqnarray} where $g_{d}$ denotes the determinant of the space part of the metric. To preserve the number of states we have to preserve the ratio $g_{d}/g_{00}^{d}$, which entails the constraint on the determinant as detailed above. The question becomes: given a change of metric for which this constraint holds, will such a constraint ensure preservation of scalar curvature? If so, preservation of the number of states would entail preservation of curvature, which is an observer independent characteristic. We take two matrices representing two possible transformations of a given $3$-dimensional metric:\begin{eqnarray} A & = & \frac{1}{L}\left(\begin{array}{ccc} x\\ & x\\ & & x\end{array}\right)\nonumber \\ B & = & \left(\begin{array}{ccc} \frac{\sqrt{2}x}{L}\\ & 2\\ & & \frac{x^{2}}{L^{2}}\end{array}\right)\end{eqnarray} where $L$ is a constant with dimension of length. Both transformation matrices preserve the constraint given above, while the resulting curvatures differ. That is, taking a flat metric for example, after undergoing each of these transformations it would have the same number of states as previously, but different curvature. The first transformation would give $R=\frac{3L}{2x^{3}}$, the second gives $R=\frac{L}{\sqrt{2}x^{3}}.$ This is because the second one has fewer Christoffel symbols, since the only nontrivial derivative is $\partial_{x}$, and $\partial_{x}g_{xx}=0$. 
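The curvature quoted for the first (conformal) transformation can be verified by brute force. The sketch below (assuming sympy; the routine is ours) builds the Christoffel symbols and Ricci scalar for the metric $\mathrm{diag}(x/L,x/L,x/L)$ obtained by applying $A$ to a flat metric, in the all-positive sign convention adopted above, and recovers $R=\frac{3L}{2x^{3}}$:

```python
import sympy as sp

t, x, y, L = sp.symbols('t x y L', positive=True)

def ricci_scalar(g, coords):
    """Ricci scalar of a metric (sympy Matrix) via the Christoffel symbols."""
    n = len(coords)
    ginv = g.inv()
    # Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
    Gamma = [[[sum(ginv[a_, d_] * (sp.diff(g[d_, b_], coords[c_])
                                   + sp.diff(g[d_, c_], coords[b_])
                                   - sp.diff(g[b_, c_], coords[d_]))
                   for d_ in range(n)) / 2
               for c_ in range(n)] for b_ in range(n)] for a_ in range(n)]
    # Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
    #                      + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
    def ricci(b_, c_):
        return sum(sp.diff(Gamma[a_][b_][c_], coords[a_])
                   - sp.diff(Gamma[a_][b_][a_], coords[c_])
                   + sum(Gamma[a_][a_][d_] * Gamma[d_][b_][c_]
                         - Gamma[a_][c_][d_] * Gamma[d_][b_][a_]
                         for d_ in range(n))
                   for a_ in range(n))
    return sp.simplify(sum(ginv[b_, c_] * ricci(b_, c_)
                           for b_ in range(n) for c_ in range(n)))

# First transformation applied to a flat metric: diag(x/L, x/L, x/L) in (t, x, y)
R = ricci_scalar(sp.diag(x/L, x/L, x/L), (t, x, y))
assert sp.simplify(R - 3*L/(2*x**3)) == 0
```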
Therefore clearly imposing the constraint on a metric transformation will not necessarily preserve the curvature of the original metric. Curvature in these examples is affected by the number of terms with an $x$ derivative, while the determinant is not. Thus the constraint on the determinant does NOT preserve curvature. This is intuitively understandable: the determinant indicates volume but gives no information as to the spatial distribution of the volume. \subsection{Examples in various dimensions} In $1+1$ dimensions a transformation that preserves $N$ \emph{must} be conformal: $g_{00}=g_{xx}$, since $\det g_{space}=g_{xx}$. In $2+1$ dimensions we give two examples of transformations that preserve $N$. One is conformal, the other non-conformal but symmetric. Conformal: \begin{equation} A=\frac{1}{L}\left(\begin{array}{ccc} x\\ & x\\ & & x\end{array}\right),R=\frac{3L}{2x^{3}}\end{equation} Symmetric:\begin{equation} B=\frac{1}{L}\left(\begin{array}{ccc} \sqrt{xy}\\ & x\\ & & y\end{array}\right),R=L\left(\frac{5x^{2}-6y\sqrt{xy}}{8x^{3}y^{2}}\right).\end{equation} An asymmetric example is like the one given in the previous section for a Euclidean metric. Note that plugging in the value $x=y$ \emph{after} deriving $R$ for matrix $B$ does not give the curvature of matrix $A$. This is because the computation of $R$ takes into account the direction of each component as well as its numerical value. If one plugs in $y=x$ before deriving $R$, all the derivatives $\partial_{y}$ vanish, giving a different result. We next look at $3+1$ dimensions. The constraint requires $\left|\left(g_{00}\right)^{3}\right|=\det g_{3}$. 
We compare several matrices that obey this constraint and inspect their curvatures:\begin{eqnarray} A=\left(\begin{array}{cccc} \frac{x}{L}\\ & \frac{x^{3}}{L^{3}}\\ & & 1\\ & & & 1\end{array}\right),\: R & = & \frac{2L^{3}}{x^{5}}\nonumber \\ B=\frac{1}{L}\left(\begin{array}{cccc} \left(xyz\right)^{\frac{1}{3}}\\ & x\\ & & y\\ & & & z\end{array}\right),\: R & = & \frac{4L}{9}\left(\frac{1}{x^{3}}+\frac{1}{y^{3}}+\frac{1}{z^{3}}\right)\nonumber \\ C=\frac{1}{L}\left(\begin{array}{cccc} x\\ & x\\ & & x\\ & & & x\end{array}\right),\: R & = & \frac{3L}{2x^{3}}\end{eqnarray} A few comments: 1) The curvature for the third transformation is the same as for the conformal matrix in $2+1$ dimensions. 2) As before, setting $x=y=z$ after calculating the curvature for matrix $B$ does not give the same result as the curvature for matrix $C$. Again, this is because the direction of the variable contributes in calculating $R,$ and not just its numeric value. This sheds light on the fact that the number of states, which is proportional to the volume of phase space, is different from curvature, which incorporates information on the distribution of that volume. A constraint on the determinant, representing Euclidean volume, is not the same as one on Ricci curvature, which in fact represents the amount by which the volume of a geodesic ball in a curved Riemannian manifold deviates from that of the standard ball in Euclidean space. \subsubsection{Rindler vs.\ Schwarzschild} The transformation from Minkowski to Rindler space is not diagonal. It mixes time and space coordinates, and that is why $N$ differs from its flat space value. We cannot conclude from this that curvature is irrelevant to statistical entropy. That conclusion can only be drawn from the general proof given above. The Schwarzschild metric diverges at the boundary, and it was found that the number of states (and thus the entropy) is different from that of Minkowski space \cite{'t Hooft,JudyRamy}. 
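Each of the three matrices can be checked against the constraint $\left(g_{00}\right)^{3}=\det g_{3}$. A minimal symbolic sketch (assuming sympy; the helper name is ours):

```python
import sympy as sp

x, y, z, L = sp.symbols('x y z L', positive=True)

def satisfies_constraint(entries):
    """Check (g_00)^d = det(g_space) for diagonal entries (g_00, g_11, ..., g_dd)."""
    g00, *space = entries
    return sp.simplify(g00 ** len(space) - sp.Mul(*space)) == 0

A = [x/L, x**3/L**3, 1, 1]
B = [(x*y*z)**sp.Rational(1, 3)/L, x/L, y/L, z/L]
C = [x/L, x/L, x/L, x/L]
assert all(satisfies_constraint(m) for m in (A, B, C))
```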
This is not the same as the difference between the number of states in Rindler and Minkowski spaces. In the Schwarzschild case the argument above does apply, since the transformation from the Minkowski to the Schwarzschild metric is diagonal. The Schwarzschild number of states differs from that of Minkowski because of the redshift of the energy: $g_{00}(r)$. \subsection{Discussion} Our transformation leaves $N$ invariant because it preserves the relationship between the volumes of momentum space and of position space. $\left(g^{00}\right)^{3/2}$ is the variable part of momentum space, and $\sqrt{g_{d}}$ is the variable part of position space. $N$ is invariant so long as the relation between the two is preserved, so that if position space shrinks, momentum space grows and vice versa: the determinant of the space part is multiplied by $a(x)^{d}$ while the momentum-space factor $\left(g^{00}\right)^{d}$ is multiplied by $1/a(x)^{d}$. We examined the question of whether in curved space the number of states, and the statistical entropy derived from it, is observer dependent or is related to a physical quantity such as curvature. We found that it is observer dependent and not related to the intrinsic geometry. One might argue that a proof of curvature independence must show that there are no cases at all where curvature is preserved under a transformation that preserves the number of states. In fact it is quite possible that in some case curvature might be preserved. We claim that this must be seen as a coincidence, because the constraint on preservation of the number of states relates to the determinant. By definition, there is a difference between the determinant, which represents Euclidean volume and does not depend on directions in space, and curvature, which does depend on directions in space. The number of states does not depend on directions in space, and so it can be preserved even if the directional characteristics, and thus the curvature, are changed. 
There will be a subclass of transformations that preserve the number of states and do preserve curvature. But one cannot assume that any given number of states, and the entropy derived from it, relate to a spacetime with a given curvature. The results in this paper apply to a diagonal metric only, but this is sufficient, as it serves as a counterexample. The question arises: what of Wald's entropy? Since statistical entropy is derived from the number of states, whereas Wald's entropy is an explicit function of curvature, this indicates a difference between these two concepts of entropy. Calculation for Einstein gravity gives the same result in both cases, but for generalized theories of gravity Wald entropy could contain terms derived from the curvature, so the concepts themselves do not coincide. However, the issue may not be so simple. Phase space is defined as the product of the volumes of position and momentum spaces. The definition arose in the context where momentum refers to kinetic momentum, which is also the canonical conjugate to position. However, even in spherical coordinates, kinetic and conjugate momentum do not coincide \cite{Chinese}, and a treatment in curved space should generalize the definition to conjugate momentum. If this is done, one then notes that the gravitational Lagrangian includes the Ricci scalar, and Ricci tensors as well in the generalized theories of gravity with which Wald dealt. The Lagrangian of a particle in a gravitational background will include at least two terms, the matter Lagrangian and the gravitational term. Each will have a generalized momentum conjugate to the dynamical variable in the Lagrangian. Therefore it may be necessary to redefine statistical entropy to take into account a more general formulation of phase space. 
This cannot be done simply by adding gravitational degrees of freedom; for example, adding gravitational degrees of freedom to the statistical calculation would not give one fourth the area but one half, and would thus differ from the other derivations of the entropy. A clue to a suitable redefinition of the term may be found in the example discussed at the start of the paper: Minkowski and Rindler space. Statistical entropy was originally defined for flat space, where the choice of vacuum is unambiguous. In the treatment of curved space there should be a way to incorporate the choice of vacuum into the concept of phase space. Another issue is that of divergence on the horizon. While this may be an artifact of quantum uncertainty \cite{JudyRamy}, a more thorough investigation is necessary before drawing conclusions on the relationship of the two entropies. Note: there is a claim that entanglement entropy and statistical entropy are one and the same. In \cite{Katja} it was shown for explicit examples that entanglement entropy does not depend on curvature. For a discretized region in curved space it was found that even when the space is large enough for the effects of curvature to be noticeable, the entropy remains proportional to the area and is not affected by the curvature of the background. This qualitative similarity to our result reinforces the idea that entanglement and statistical entropy may be the same, and that they differ from Noether charge entropy. In conclusion, we have shown that the number of states is a function of the metric and is preserved under specific transformations of the metric, which do not necessarily preserve curvature. Therefore the number of states calculated with the accepted definition of phase space does not depend on curvature, and neither does the statistical entropy derived from it. 
For general theories of gravity it may be necessary to redefine statistical entropy, taking into account a more general concept of phase space in some subtle manner; but as the definitions stand, it appears that statistical entropy and Wald entropy differ.

This research was supported by the Israel Science Foundation Grant No. 239/10. We thank Ramy Brustein, Merav Hadad and Frol Zapolsky for helpful discussions, and Joey Medved for comments on the manuscript.
\section{Introduction} \subsection{Motivation and related literature} \quad In reliability engineering and system security, one of the most useful methods for enhancing the reliability characteristics of a system is to allocate redundant components to the system. The redundancy can be performed at the component level or the system level. In the former case, some redundant components are connected to each component, while in the latter one, the original coherent system is connected to some copies of itself. In a commonly used type of redundancy, called active redundancy, the original component and the redundant ones work simultaneously in parallel. In this case, the lifetime of the resulting parallel subsystem equals the maximum lifetime among the connected components. This strategy is mostly applied when the replacement of components during the operation time of the system is impossible. As redundancy allocation is a widely used method for improving the performance of products, numerous researchers have paid attention to developing the theory and applications of this subject. For example, \cite{li-ding2010} investigated the allocation of active redundancies to a $k$-out-of-$n$ system in which the lifetimes of independent components are stochastically ordered. \cite{you2016} studied $k$-out-of-$n$ redundant systems with dependent components. {\cite{Eryilmaz-Ucum} determined the optimal number of spare components for a weighted $k$-out-of-$n$ system.} \cite{kavlak} investigated the reliability and the mean residual life functions of coherent systems with active redundancies at the component and system levels. \cite{zhang} investigated the optimal allocation of active redundancies for weighted $k$-out-of-$n$ systems. \cite{amini} compared component redundancy versus system redundancy for coherent systems with dependent and identically distributed components. 
\cite{fangli2016} studied the allocation of one active redundancy to coherent systems consisting of heterogeneous and statistically dependent components. Utilizing the minimal path decomposition, they proposed a necessary and sufficient condition identifying the better allocation strategy from two candidates. \cite{fangli2017} investigated allocating multiple matched active redundant components to coherent systems. \cite{fangli2018} studied coherent systems with one active redundancy, using the minimal cut decomposition of the system. { \cite{Torradoetal2021} studied redundancy allocation for a coherent system formed by modules, under different settings related to the dependency and distribution of components. They stochastically compared redundancies at the component level versus redundancies at the module level.} { \cite{Torrado} considered a coherent system having possibly dependent subsystems in which the components are connected in parallel or in series. It is assumed that a number of possibly dependent components in each subsystem are randomly selected from a heterogeneous population. The cited author stochastically compared such systems with different numbers of components, based on majorization orders, and determined the optimal numbers of components in each subsystem such that the system reliability is maximized. In particular, she examined the results for series-parallel systems. } The redundancy allocation in a series-parallel system has also been considered by some authors, among which we refer to \cite{soltani-O-2014}, \cite{karimi}, and \cite{fang2020}. \quad It is worth noting that the redundant components can also be added to the system as inactive (cold and warm standby) components. Systems with cold and warm standby redundancy have also been investigated in the reliability literature; see, for example, \cite{eryi-cold}, \cite{fink2018}, \cite{shen}, and \cite{asadi}. 
\subsection{Survival signatures of coherent systems}\label{surv} \quad The first main step in assessing the reliability and stochastic characteristics of an $n$-component system is to get knowledge about the structure function of the system as well as the probability distribution of the component lifetimes. In this regard, a useful concept for assessing the reliability of the system through the reliability of its components is the notion of {\it survival signature}. This concept is particularly significant for describing the structures of coherent systems with multiple types of components. Consider an $n$-component coherent system consisting of $L$ different types, such that there are $n_i$ components of the $i$th type, $i=1,\ldots, L$, and $\sum_{i=1}^L n_i=n$. The reliability function of the system, at any time $t$, can be represented as follows \begin{align}\label{Fbaroriginal} \bar{F}_T(t)=\sum_{l_1=0}^{n_1}\ldots\sum_{l_L=0}^{n_L}\Phi(l_1,\ldots, l_L)P(C_1(t)=l_1, \ldots, C_L(t)=l_L), \end{align} where $C_i(t)$ denotes the number of components of type $i$ working at time $t$, and $\Phi$ is called the survival signature and represents \lq\lq the probability that the system is working when exactly $l_i$ components of type $i$ are working\rq\rq, see \cite{coolen}. Suppose that the lifetimes of the components of the same type are exchangeable dependent, and the lifetimes of the components of different types are dependent. Commonly, the dependency structure is modeled using a survival copula. 
In other words, if $T_j^{(i)}$ denotes the lifetime of the $j$th component of type $i$, $j=1, \ldots, n_i$, $i=1, \ldots, L$, and $\bar{F}_i,~i=1, \ldots, L$, denotes the common reliability function of the components of the $i$th type, then there is a survival copula $\hat{C}$ such that the joint reliability of the $T_j^{(i)}$'s can be written as \begin{align}\label{copula} &P( T_1^{(1)}>t_1^{(1)}, \ldots, T_{n_1}^{(1)}>t_{n_1}^{(1)}, \ldots, T_1^{(L)}>t_1^{(L)}, \ldots, T_{n_L}^{(L)}>t_{n_L}^{(L)})\nonumber\\ &\hspace{2cm}= \hat{C}\big(\bar{F}_1(t_1^{(1)}), \ldots, \bar{F}_1(t_{n_1}^{(1)}), \ldots, \bar{F}_L(t_1^{(L)}), \ldots, \bar{F}_L(t_{n_L}^{(L)})\big), \end{align} see, for example, \cite{navarro2}, \cite{navarro1}, and \cite{fangli2018}. In this case, it can be shown that \begin{align}\label{Fbar-dep} \bar{F}_T(t)&={\sum_{l_1=0}^{n_1}\ldots\sum_{l_L=0}^{n_L}}{\sum_{i_1=0}^{n_1-l_1}\ldots\sum_{i_L=0}^{n_L-l_L}}(-1)^{i_1+\ldots+ i_L}\binom{n_1}{l_1}\ldots \binom{n_L}{l_L}\binom{n_1-l_1}{i_1}\ldots \binom{n_L-l_L}{i_L}\nonumber\\ &\times\Phi(l_1,\ldots, l_L) \hat{C}(\underbrace{\bar{F}_1(t)}_{i_1+l_1},\underbrace{1}_{n_1-(i_1+l_1)},\ldots,\underbrace{\bar{F}_L(t)}_{i_L+l_L},\underbrace{1}_{n_L-(i_L+l_L) }), \end{align} where $\underbrace{u}_{m}$ means $m$ repetitions of $u$, see \cite{eryilmaz-importance-2018, eriylmaz-dependent-2018}. If the components of the system are independent, then the representation \eqref{Fbaroriginal} reduces to the following expression \begin{align}\label{ind} \bar{F}_T(t)=\sum_{l_1=0}^{n_1}\ldots\sum_{l_L=0}^{n_L}\Phi(l_1,\ldots, l_L)\prod_{i=1}^L \binom{n_i}{l_i}[\bar{F}_i(t)]^{l_i} [{F}_i(t)]^{n_i- l_i}. \end{align} Many authors have considered the reliability properties of coherent systems with multi-type components based on the survival signature and for various applications. Among them, we mention the recent papers \cite{feng2016}, \cite{saman}, \cite{eryilmaz-importance-2018, eriylmaz-dependent-2018}. 
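The independent-components formula \eqref{ind} is straightforward to evaluate numerically at a fixed time $t$. The sketch below (Python; the function and the toy two-type systems are ours, for illustration only) computes the system reliability from a survival signature $\Phi$ and the component reliabilities $p_i=\bar{F}_i(t)$:

```python
from itertools import product
from math import comb

def system_reliability(phi, n, p):
    """Independent-components survival-signature formula: sum over (l_1,...,l_L)
    of Phi(l_1,...,l_L) * prod_i C(n_i, l_i) * p_i^l_i * (1 - p_i)^(n_i - l_i),
    where p[i] is the reliability of a type-(i+1) component at the fixed time t."""
    total = 0.0
    for ls in product(*(range(ni + 1) for ni in n)):
        term = phi(*ls)
        for ni, li, pi in zip(n, ls, p):
            term *= comb(ni, li) * pi**li * (1 - pi)**(ni - li)
        total += term
    return total

# Toy checks: one component of each of two types
phi_parallel = lambda l1, l2: 1.0 if l1 + l2 >= 1 else 0.0    # works if any works
phi_series = lambda l1, l2: 1.0 if (l1, l2) == (1, 1) else 0.0  # needs both
r_parallel = system_reliability(phi_parallel, (1, 1), (0.9, 0.8))  # 1 - 0.1*0.2 = 0.98
r_series = system_reliability(phi_series, (1, 1), (0.9, 0.8))      # 0.9*0.8 = 0.72
```

Both toy cases agree with the closed forms $1-F_1(t)F_2(t)$ and $\bar{F}_1(t)\bar{F}_2(t)$, respectively.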
\cite{heu} used the notion of survival signature in the formulation of the reliability-redundancy allocation problem. They considered an objective function that maximizes the system reliability under some constraints. \quad \cite{zarezadeh2019} studied the reliability and preventive maintenance of coherent systems with multi-type components, where the components are subject to failure according to multiple external shocks. \cite{hashemi} proposed two maintenance strategies for the optimal preservation of coherent systems consisting of independent multi-type components. \subsection{Contributions of this paper} \quad This paper aims to study the optimal number of redundant components allocated to $n$-component coherent systems consisting of different types of components. It is assumed that the components of the system are dependent, where a given copula function models the dependency structure. We are interested in allocating $v_i$ active redundant components to each component of type $i$, under a constraint on the number of existing spare components. To obtain the optimal values of the $v_i$'s, we propose two cost-based functions. More precisely, the contributions of the paper are as follows. \begin{itemize} \item We propose a mean cost rate function in terms of the costs of renewing the failed components and the costs of refreshing the alive components at the time of the system failure. Then, we find the optimal number of redundant components, $v_i$'s, to be added to each component of type $i$, such that the proposed cost function is minimized. \item We introduce a mean cost rate function, relevant to an age replacement policy, in terms of the costs of renewing (refreshing) the failed (alive) components at the failure time of the system or at a predetermined time $\tau$, whichever occurs first. Then, the optimal values of the $v_i$'s are obtained, such that the suggested cost-based function achieves its minimum value. 
\item In the important special case that the system is a series-parallel system, we provide the formulas for the proposed mean cost rate functions. Then we investigate the optimal number of components for each parallel subsystem such that the proposed functions are minimized. \end{itemize} The derivations of the paper are extensions of the results of \cite{Eryilmaz-main-2018}, who investigated the optimal number of components in the case that the structure function is $k$-out-of-$n$ with independent components. \subsection{Organization of the paper} \quad The remainder of the paper is organized as follows. In Section 2, under the settings of subsection \ref{surv}, we present the formulation of the system reliability function (\ref{Fbar-dep}) in the case that $v_i$ components are added as active redundancies to each component of type $i$, $i=1,\dots, L$. Then, utilizing this formulation, a mean cost rate function is introduced at the time of the system failure. Next, a mean cost rate function is established based on the costs of replacing the system at its failure time or at a predetermined time $\tau$, whichever occurs first. The expressions for the proposed mean cost rate functions are derived in terms of the reliability function (\ref{Fbar-dep}). {Some examples of coherent systems are presented to illustrate the applications of the proposed approaches: a 6-component system consisting of two types of dependent components, and an 8-component system composed of three types of independent components.} The optimal numbers of redundant components, based on the proposed cost-based functions, are discussed for each system numerically.
The results of this section are numerically illustrated for a series-parallel system consisting of three parallel subsystems connected in series. Some concluding remarks in Section 4 finalize the paper. The details of the proofs are given in the Appendix. \section{Optimal number of redundant components} \quad We consider an $n$-component coherent system consisting of multiple types of components with the following description. { The system is built up of $L$ types of components, $L\geq 1$, such that there are $n_i$ components of type $i$ and $\sum_{i=1}^{L}n_i=n$. We assume that the common reliability function of the components of type $i$ is $\bar{F}_i(.)$, $i=1,2,\dots,L$. The lifetimes of components of the same type are exchangeable and dependent, and the lifetimes of components of different types are also dependent. The assumed dependence structure is modeled by a survival copula given in \eqref{copula}. To increase the reliability of the system, we desire to add $v_i$ active redundancies to each component of type $i$, $i=1,\ldots, L$. Each original component in the system and its redundant components are assumed to be independent and identically distributed (i.i.d.). Let $T_R$ denote the lifetime of the system after incorporating the redundant components. Because each original component and its redundant ones form a parallel subsystem, one can easily show that the reliability function of $T_R$ at time $t$ can be represented as follows. } \begin{align*} \bar{F}_{T_R}(t) &={\sum_{l_1=0}^{n_1}\ldots\sum_{l_L=0}^{n_L}}{\sum_{i_1=0}^{n_1-l_1}\ldots\sum_{i_L=0}^{n_L-l_L}}(-1)^{i_1+\ldots+ i_L}\binom{n_1}{l_1}\ldots \binom{n_L}{l_L}\binom{n_1-l_1}{i_1}\ldots \binom{n_L-l_L}{i_L} \nonumber\\ &\times\Phi(l_1,\ldots, l_L)\hat{C}(\underbrace{1-{F}_1^{v_1+1}(t)}_{i_1+l_1}, \underbrace{1}_{n_1-(i_1+l_1)},\ldots,\underbrace{1-{F}_L^{v_L+1}(t)}_{i_L+l_L},\underbrace{1}_{n_L-(i_L+l_L) }).
\end{align*} In the case of independence of all components, this representation reduces to \begin{align*} \bar{F}_{T_R}(t)=\sum_{l_1=0}^{n_1}\ldots\sum_{l_L=0}^{n_L}\Phi(l_1,\ldots, l_L)\prod_{i=1}^L \binom{n_i}{l_i}[1-{F}_i^{v_i+1}(t)]^{l_i} [{F}_i^{v_i+1}(t)]^{n_i-l_i}. \end{align*} \quad The problem of interest in this redundancy strategy is to determine the optimal number of spares allocated to each component. In this paper, our approach is to find the $v_i$'s by minimizing a cost criterion. In this regard, we set up two mean cost rate functions to obtain the optimal number of redundant components. One of them is defined based on the cost of the system failure, which depends on the number of failed components when a system failure occurs. The other one is defined based on an age replacement policy. In the next subsections, we describe these two functions in detail. \begin{Remark} \em Although the system considered above is described in the general case that the component lifetimes of the same type are exchangeable and dependent, and the lifetimes of components of different types are dependent, in allocating the redundant components we assume that the components of each constructed parallel subsystem are independent and identically distributed. This assumption may be restrictive in some practical cases, but it should be noted that if we drop the i.i.d. assumption for the redundant components, the computation of the system reliability becomes a challenging problem that potentially involves complex calculations. We believe that considering the problem of optimal redundancy under i.i.d. components in each subsystem, as done in this paper, is a first step toward solving the more general cases (see also \cite{sam}, pp. 76-77). \end{Remark} \subsection{Cost function at the system failure}\label{cost} \quad Suppose that the system starts working at $t=0$ and fails at a random time after $t=0$.
Assume that when the system fails, we have a cost $c_i$ for each failed component of type $i$ to replace it by a new one, and a cost $c_i^*$ for refreshing each unfailed component so that it becomes as good as new, where we assume that $c_i\geq c^*_i$, $i=1, \ldots, L$. Furthermore, we assume that $c^{**}$ denotes the fixed overall cost of the system failure. With $T_R$ as the lifetime of the system after redundancy, let the random variable $X_i(T_R)$ denote the number of failed components of type $i$ at the time of system failure, $i=1,\dots,L$. Then, the mean cost rate function for a failed system is defined as \begin{align}\label{cost1} Cost_1(\mathbf{v})&=\frac{\sum_{i=1}^Lc_iE(X_i(T_R))+\sum_{i=1}^L c_i^*E(n_i(v_i+1)-X_i(T_R))+c^{**}}{E(T_R)} \end{align} where $\mathbf{v}=(v_1,\ldots, v_L)$. The numerator is the expected cost of the system failure, and the denominator is the mean time to failure ($MTTF$) of the system; hence $Cost_1$ becomes the mean cost per unit of time. Note that in the system after redundancy, { there are altogether $n_i(v_i+1)$ components} of type $i$, $i=1,2, \ldots, L$. The relation \eqref{cost1} can be rewritten in terms of the lifetime of the original system without any redundancy, $T$, as \begin{align}\label{poi} Cost_1(\mathbf{v})&=\frac{\sum_{i=1}^Lc_i(v_i+1)E(X_i(T))+\sum_{i=1}^L c_i^*(v_i+1)E(n_i-X_i(T))+c^{**}}{E(T_R)}\nonumber\\ &=\frac{\sum_{i=1}^L(c_i-c_i^*)(v_i+1)E(X_i(T))+\sum_{i=1}^L c_i^*(v_i+1)n_i+c^{**}}{E(T_R)}. \end{align} {\begin{Lemma}\label{ui} The quantity $E(X_i(T))$ in (\ref{poi}) can be expressed as follows. \begin{small} \begin{align*} &E(X_i(T))\\ &=n_i\int_{0}^{\infty} \lim_{\delta\rightarrow 0}\frac{1}{\delta}\sum_{m_1=0}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=0}^{n_L}\Phi(m_1,\ldots,m_{i-1},m_i+1,m_{i+1},\ldots, m_L) \binom{n_1}{m_1}\ldots
\binom{n_i-1}{m_i}\ldots\binom{n_L}{m_L}A_{\mathbf{m}}^{(i)}(t,\delta)dt \end{align*} \end{small} where \begin{small} \begin{align}\label{Adelta} A_{\mathbf{m}}^{(i)}(t,\delta) &=P\Big({T_1^{(1)}>t},\ldots,{T_{m_1}^{(1)}>t},{T_{m_1+1}^{(1)}\leq t},\ldots, {T_{n_1}^{(1)}\leq t}, \nonumber\\ &\hspace{1cm}\ldots, t<T_1^{(i)}\leq t+\delta, {T_2^{(i)}>t},\ldots,{T_{m_i+1}^{(i)}>t},{T_{m_i+2}^{(i)}\leq t},\ldots, {T_{n_i}^{(i)}\leq t}, \nonumber\\ &\hspace{1cm}\ldots, {T_1^{(L)}>t},\ldots,{T_{m_L}^{(L)}>t},{T_{m_L+1}^{(L)}\leq t},\ldots, {T_{n_L}^{(L)}\leq t}\Big). \end{align} \end{small} \end{Lemma} \begin{proof} \begin{align*} E(X_i(T))&=E(\sum_{j=1}^{n_i}I(T_j^{(i)}\leq T))=\sum_{j=1}^{n_i}P(T_j^{(i)}\leq T)=n_iP(T_1^{(i)}\leq T)\\ &=n_i\int_{0}^{\infty} \lim_{\delta\rightarrow 0}\frac{P(T>t,t<T_1^{(i)}\leq t+\delta)}{\delta}dt, \end{align*} where the third equality follows from the exchangeability of the components of type $i$, $i=1, \dots, L$. By conditioning on the number of live components of each type, we obtain \begin{small} \begin{align}\label{Atdelat} &P(T>t,t<T_1^{(i)}\leq t+\delta)\nonumber\\ &=\sum_{m_1=0}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=0}^{n_L}P(T>t,t<T_1^{(i)}\leq t+\delta, C_1(t)=m_1,\ldots, C_i(t)=m_i,\ldots, C_L(t)=m_L)\nonumber\\ &=\sum_{m_1=0}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=0}^{n_L}\Phi(m_1,\ldots,m_{i-1},m_i+1,m_{i+1},\ldots, m_L) \binom{n_1}{m_1}\ldots \binom{n_i-1}{m_i}\ldots\binom{n_L}{m_L}A_{\mathbf{m}}^{(i)}(t,\delta). \end{align} \end{small} The last equality in \eqref{Atdelat} holds because the components of the same type have a common failure time distribution. \end{proof}} In the following theorem, \eqref{Adelta} is represented based on the survival copula of the components' lifetimes.
\begin{Theorem}\label{Am} Using the inclusion-exclusion rule, $A_{\mathbf{m}}^{(i)}(t,\delta)$ can be represented as follows. \begin{small} \begin{align*} A_{\mathbf{m}}^{(i)}(t, \delta)&={\sum_{j_1=0}^{n_1-m_1}\ldots\sum_{j_i=0}^{n_i-m_i-1}\ldots\sum_{j_L=0}^{n_L-m_L}}(-1)^{j_1+\ldots + j_L}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\times \Big[\hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-(m_1+j_1)},\ldots,\underbrace{\bar{F}_i(t)}_{m_i+j_i+1},\underbrace{1}_{n_i-(m_i+j_i+1)},\ldots,\underbrace{\bar{F}_L(t)}_{m_L+j_L}, \underbrace{1}_{n_L-(m_L+j_L)})\\ &-\hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-(m_1+j_1)},\ldots,\underbrace{\bar{F}_i(t)}_{m_i+j_i},\bar{F}_i(t+\delta),\underbrace{1}_{n_i-(m_i+j_i+1)}, \ldots,\underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-(m_L+j_L)})\Big]. \end{align*} \end{small} \end{Theorem} \begin{proof} See the Appendix. \end{proof} Note that, in the particular case of independence of all components, we get \begin{small} \begin{align}\label{1} &E(X_i(T))\nonumber\\ &=n_i\sum_{m_1=0}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=0}^{n_L}\Phi(m_1,\ldots,m_{i-1},m_i+1,m_{i+1},\ldots, m_L)\binom{n_1}{m_1}\ldots \binom{n_i-1}{m_i}\ldots\binom{n_L}{m_L}\nonumber\\ &\times\int_{0}^{\infty}\bar{F}_1^{m_1}(t)F_1^{n_1-m_1}(t)\ldots\bar{F}_i^{m_i}(t)F_i^{n_i-m_i-1}(t)\ldots \bar{F}_L^{m_L}(t)F_L^{n_L-m_L}(t)dF_i(t). \end{align} \end{small} \quad In order to minimize the mean cost rate function $Cost_1(\mathbf{v})$, we impose the constraint that there are at most $M_i$ components of type $i$ as spares, $i=1,\dots,L$. This means that the number of components that can be connected in parallel in the $i$th group satisfies the inequality $n_iv_i\leq M_i$, $i=1,\ldots, L$.
To determine the optimal values of the $v_i$'s, we do the following: for given values of $n_i, c_i, c_i^*$, and $M_i$, $i=1, \ldots, L$, and $c^{**}$, we evaluate $Cost_1(\mathbf{v})$ for all possible choices of $v_1, \ldots, v_L$ such that $n_iv_i\leq M_i$ for all $i$. The optimal values of $v_1, \ldots, v_L$ are then those for which the corresponding mean cost rate function $Cost_1(\mathbf{v})$ is minimum. \begin{Remark} {\rm If the system has a $k$-out-of-$n$ structure with independent components of multiple types, then \eqref{1} reduces to the result of \cite{Eryilmaz-main-2018}. This is so because for such a system the survival signature is obviously given as \begin{align*} \Phi(l_1,\ldots, l_L)=\left\{ \begin{tabular}{ll} 1 & $\sum_{j=1}^L l_j\geq k$\\ 0 & otherwise,\\ \end{tabular} \right. \end{align*} i.e., the system works if at least $k$ components are alive. } \end{Remark} \subsection{Cost function based on preventive replacement} \quad In this section, we propose a kind of age replacement preventive maintenance policy for the system with multiple types of components described in subsection 1.2. The policy of renewing the system performed by the operator is such that the system is replaced at its failure time or at a predetermined time $\tau$, whichever occurs first. There are many papers on the age replacement strategy; the interested reader can see, for example, \cite{zhao2016}, \cite{ashrafi} and \cite{nakagawa2019}. \cite{Mannai2018} found the optimal configuration of a $k$-out-of-$n$ system so that the expected total costs of the system under some generalized age replacement policies are minimized. \quad Here, suppose that the operator has $M_i$ components of type $i$ available as spares, and he/she decides to add $v_i$ components to each of the components of type $i$, where $n_iv_i\leq M_i$. Under the implemented policy, the aim is to find the optimal values of the $v_i$'s such that the mean cost rate we impose below is minimized.
\quad If the replacement occurs after the system failure, i.e., $T_R\leq \tau$, then, considering the costs $c_i$, $c_i^*$ and $c^{**}$ as defined in the previous subsection, the average cost of renewing the system is obtained as \begin{align*} M_1(\mathbf{v})&=\sum_{i=1}^L c_i E(X_i(T_R)|T_R\leq \tau)+\sum_{i=1}^L c^*_i E(n_i(v_i+1)-X_i(T_R)|T_R\leq \tau)+c^{**}\nonumber\\ &=\sum_{i=1}^L (v_i+1)c_i E(X_i(T)|T\leq\tau)+\sum_{i=1}^L (v_i+1)c^*_i E(n_i-X_i(T)|T\leq\tau)+c^{**}\nonumber\\ &=\sum_{i=1}^L (c_i-c^*_i)(v_i+1) E(X_i(T)|T\leq\tau)+\sum_{i=1}^L (v_i+1)c^*_i n_i+c^{**}, \end{align*} where $T$ is the lifetime of the system before redundancy allocation. \quad If the system is replaced before failure, i.e., $T_R>\tau$, then, by paying the costs $c_i$ and $c_i^*$, $i=1, \ldots, L$, for renewing the failed components and refreshing the alive components of type $i$, respectively, the system is restored to an as-good-as-new condition. Let $N_i(\tau)$ be the number of failed components of type $i$ on $[0, \tau]$. Then the average cost of renewing the system is defined as \begin{align*} M_2(\mathbf{v})&=\sum_{i=1}^L c_i E(N_i(\tau)|T_R>\tau)+\sum_{i=1}^L c^*_i E(n_i(v_i+1)-N_i(\tau)|T_R>\tau)\nonumber\\ &=\sum_{i=1}^L (c_i-c^*_i)(v_i+1) E(N_i(\tau)|T>\tau)+\sum_{i=1}^L (v_i+1)c^*_i n_i. \end{align*} Consequently, the mean cost rate function of the system renewal at time $\min(\tau, T_R)$ is obtained as \begin{align}\label{cost2} Cost_2(\mathbf{v})=\frac{M_1(\mathbf{v})P(T_R\leq \tau)+M_2(\mathbf{v})P(T_R>\tau)}{E(\min(\tau,T_R))}, \end{align} where \begin{align*} E(\min(\tau,T_R))=\int_{0}^{\tau}\bar{F}_{T_R}(y)dy. \end{align*} {For computing \eqref{cost2}}, we need to calculate $E(N_i(\tau)|T>\tau)$ and $E(X_i(T)|T\leq \tau)$.
For the first one, we have \begin{align} \label{EN12} &E(N_i(\tau)|T>\tau)=\frac{1}{\bar{F}_T(\tau)}\sum_{j_i=0}^{n_i} j_i P(N_i(\tau)=j_i, T>\tau)\nonumber\\ &=\frac{1}{\bar{F}_T(\tau)}\sum_{j_1=0}^{n_1}\ldots\sum_{j_L=0}^{n_L} j_i P(T>\tau | N_1(\tau)=j_1,\ldots, N_L(\tau)=j_L)P(N_1(\tau)=j_1,\ldots, N_L(\tau)=j_L)\nonumber\\ &=\frac{1}{\bar{F}_T(\tau)}\sum_{j_1=0}^{n_1}\ldots\sum_{j_L=0}^{n_L} j_i \Phi(n_1-j_1,\ldots, n_L-j_L)\binom{n_1}{j_1}\ldots \binom{n_L}{j_L}B(\tau,j_1,\dots,j_L) \end{align} where \begin{align}\label{Btau} B(\tau,j_1,\dots,j_L)&=P(T_1^{(1)}\leq\tau, \ldots, T_{j_1}^{(1)}\leq\tau, T_{j_1+1}^{(1)}>\tau, \ldots, T_{n_1}^{(1)}>\tau, \nonumber\\ &\hspace{4cm}\ldots, T_1^{(L)}\leq\tau, \ldots, T_{j_L}^{(L)}\leq\tau, T_{j_L+1}^{(L)}>\tau, \ldots, T_{n_L}^{(L)}>\tau ). \end{align} Using a method similar to that used in Lemma \ref{ui}, we can calculate $E(X_i(T)|T\leq \tau)$, $i=1,\dots,L$, as follows. \begin{align*} E(X_i(T)|T\leq \tau) &=n_iP(T_1^{(i)}\leq T|T\leq \tau)=n_i\frac{P(T_1^{(i)}\leq T,T\leq \tau)}{1-P(T>\tau)}, \quad i=1,\dots,L.
\end{align*} Now, we can write \begin{align*} P(T_1^{(i)}\leq T,T\leq \tau)=\int_{0}^{\tau}\lim_{\delta\rightarrow 0}\frac{P(s<T\leq \tau, s<T_1^{(i)}\leq s+\delta)}{\delta}ds,\quad i=1,\dots,L, \end{align*} for which we have \begin{small} \begin{align*} &P(s<T\leq \tau, s<T_1^{(i)}\leq s+\delta)\nonumber\\ &=\sum_{m_1=0}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=0}^{n_L}~\sum_{l_1=0}^{m_1}\ldots\sum_{l_L=0}^{m_L}P\big(s<T\leq \tau|s<T_1^{(i)}\leq s+\delta, C_1(\tau)=l_1,\ldots, C_i(\tau)=l_i,\nonumber\\ &\hspace{1cm}\ldots, C_L(\tau)=l_L, C_1(s)=m_1,\ldots, C_i(s)=m_i,\ldots, C_L(s)=m_L\big)P\big(s<T_1^{(i)}\leq s+\delta, C_1(\tau)=l_1,\nonumber\\ &\hspace{1cm}\ldots, C_i(\tau)=l_i,\ldots, C_L(\tau)=l_L, C_1(s)=m_1,\ldots, C_i(s)=m_i,\ldots, C_L(s)=m_L\big)\nonumber\\ &=\sum_{m_1=0}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=0}^{n_L}~\sum_{l_1=0}^{m_1}\ldots\sum_{l_L=0}^{m_L}[\Phi(m_1,\ldots, m_L)-\Phi(l_1,\ldots, l_L)]\left[\prod_{j=1,j\neq i}^L\binom{n_j}{m_j}\binom{m_j}{l_j}\right]\nonumber\\ &\times \binom{n_i-1}{m_i}\binom{m_i}{l_i}A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau), \end{align*} \end{small} where \begin{small} \begin{align}\label{Asdelta} &A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)\nonumber\\ &=P(T_1^{(1)}>\tau, \ldots, T_{l_1}^{(1)}>\tau, s<T_{l_1+1}^{(1)}\leq \tau, \ldots, s<T_{m_1}^{(1)}\leq \tau, T_{m_1+1}^{(1)}\leq s,\ldots, T_{n_1}^{(1)}\leq s, \nonumber \\ &\hspace{0.2cm}\ldots, T_1^{(i)}>\tau, \ldots, T_{l_i}^{(i)}>\tau, s<T_{l_i+1}^{(i)}\leq\tau, \ldots, s<T_{m_i}^{(i)}\leq\tau, s<T_{m_i+1}^{(i)}\leq s+\delta, T_{m_i+2}^{(i)}\leq s,\ldots, T_{n_i}^{(i)}\leq s, \nonumber \\ &\ldots, T_1^{(L)}>\tau, \ldots, T_{l_L}^{(L)}>\tau, s<T_{l_L+1}^{(L)}\leq \tau, \ldots, s<T_{m_L}^{(L)}\leq \tau, T_{m_L+1}^{(L)}\leq s,\ldots, T_{n_L}^{(L)}\leq s ). \end{align} \end{small} \vspace{0.1cm} In the following theorem, the probabilities in \eqref{Btau} and \eqref{Asdelta} are represented based on the survival copula of the components' lifetimes.
\begin{Theorem}\label{Aml} Using the inclusion-exclusion rule, we get the following expressions for $B(\tau,j_1,\dots,j_L)$ and $A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)$, respectively. \begin{align*} B(\tau,j_1,\dots,j_L)& ={\sum_{b_1=0}^{j_1}\ldots\sum_{b_L=0}^{j_L}}(-1)^{b_1+\ldots+ b_L}\binom{j_1}{b_1}\ldots \binom{j_L}{b_L} \hat{C}(\underbrace{\bar{F}_1(\tau)}_{n_1-j_1+b_1},\underbrace{1}_{j_1-b_1},\ldots,\underbrace{\bar{F}_L(\tau)}_{n_L-j_L+b_L},\underbrace{1}_{j_L-b_L}) \end{align*} and \begin{small} \begin{align*} &A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)={\sum_{j_1=0}^{n_1-m_1}\ldots\sum_{j_i=0}^{n_i-m_i-1}\ldots\sum_{j_L=0}^{n_L-m_L}}(-1)^{j_1+\ldots + j_L}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\times {\sum_{d_1=0}^{m_1-l_1}\ldots\sum_{d_i=0}^{m_i-l_i}\ldots\sum_{d_L=0}^{m_L-l_L}}(-1)^{d_1+\ldots + d_L}\binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \big[\hat{C}(\underbrace{\bar{F}_k(\tau)}_{l_k+d_k}, \underbrace{\bar{F}_k(s)}_{m_k-l_k+j_k-d_k},\underbrace{1}_{n_k-m_k-j_k}, \text{for}~k=1,\ldots, L,~k\neq i, \underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+j_i-d_i+1}, \underbrace{1}_{n_i-m_i-j_i-1})\\ &-\hat{C}(\underbrace{\bar{F}_k(\tau)}_{l_k+d_k}, \underbrace{\bar{F}_k(s)}_{m_k-l_k+j_k-d_k},\underbrace{1}_{n_k-m_k-j_k}, \text{for}~k=1,\ldots, L,~k\neq i,\underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+j_i-d_i}, \bar{F}_i(s+\delta), \underbrace{1}_{n_i-m_i-j_i-1} )\big]. \end{align*} \end{small} \end{Theorem} \begin{proof} See the Appendix. \end{proof} \begin{Corollary} For the particular case of independent components, it can be deduced that \begin{align}\label{2} E(N_i(\tau)|T>\tau)=\frac{1}{\bar{F}_T(\tau)}\sum_{j_1=0}^{n_1}\ldots\sum_{j_L=0}^{n_L} j_i\Phi(n_1-j_1,\ldots, n_L-j_L)\prod_{l=1}^L \binom{n_l}{j_l}F_l^{j_l}(\tau)\bar{F}_l^{n_l-j_l}(\tau).
\end{align} Also, in this case, we have \begin{align*} A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)&=\big\{\prod_{j=1,j\neq i}^L F_j^{n_j-m_j}(s)[\bar{F}_j(\tau)-\bar{F}_j(s)]^{m_j-l_j}\bar{F}_j^{l_j}(\tau)\big\}\\ &\times[\bar{F}_i(s)-\bar{F}_i(s+\delta)]F_i^{n_i-m_i-1}(s)[\bar{F}_i(\tau)-\bar{F}_i(s)]^{m_i-l_i}\bar{F}_i^{l_i}(\tau), \end{align*} which, in turn, implies that \begin{small} \begin{align}\label{22} &E(X_i(T)|T\leq \tau)=\frac{n_i}{1-\bar{F}_{T}(\tau)}\sum_{m_1=0}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=0}^{n_L} \sum_{l_1=0}^{m_1}\ldots\sum_{l_L=0}^{m_L} [\Phi(m_1,\ldots, m_L)-\Phi(l_1,\ldots, l_L)]\nonumber\\ &\times\left[\prod_{j=1,j\neq i}^L\binom{n_j}{m_j}\binom{m_j}{l_j}\right] \binom{n_i-1}{m_i}\binom{m_i}{l_i}\int_{0}^{\tau}\big\{\prod_{j=1,j\neq i}^L F_j^{n_j-m_j}(s)[\bar{F}_j(\tau)-\bar{F}_j(s)]^{m_j-l_j}\bar{F}_j^{l_j}(\tau)\big\}\nonumber\\ &\hspace{7.5cm}\times F_i^{n_i-m_i-1}(s)[\bar{F}_i(\tau)-\bar{F}_i(s)]^{m_i-l_i}\bar{F}_i^{l_i}(\tau)dF_i(s). \end{align} \end{small} \end{Corollary} \quad It is worth noting that if the structure of the system is $k$-out-of-$n$ with independent components, then \eqref{2} and \eqref{22} reduce to the results of \cite{Eryilmaz-main-2018}. \quad For given values of $n_i, c_i, c_i^*, M_i$, $i=1,\ldots, L$, $c^{**}$ and $\tau$, we aim to determine the optimal values of the $v_i$'s under the constraints $n_iv_i\leq M_i,$ $ i=1, \ldots, L$, such that the mean cost rate function $Cost_2$ is minimized. In the following, we present two examples to illustrate the aforementioned theoretical results. \begin{Example}\label{eg11} {\rm Consider the system depicted in Figure \ref{fig1}, given in \cite{feng2016} and \cite{eriylmaz-dependent-2018}. The system consists of six components, in which components 1, 2, and 5 are of type one, and components 3, 4, and 6 are of type two. The survival signature of the system is presented in Table \ref{table1}. \begin{figure}[h!]
\centerline{\begin{tikzpicture}[node distance = 5 cm] \tikzset{LabelStyle/.style = {scale=2, fill= white, text=black}} \node[shape = rectangle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](A) at (-2,0) {}; \node[shape = rectangle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](B) at (-1,0) {\scriptsize 1}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](D) at (0,0) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](E) at (0,1) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](F) at (0,-1) {}; \node[shape = rectangle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](G) at (0.9,1) {\scriptsize 2}; \node[shape = rectangle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](GH) at (1.5,1) {}; \node[shape = rectangle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](H) at (2.1,1) {\scriptsize 5}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](I) at (3,1) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](DD) at (1.5,0) {\scriptsize 4}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](J) at (0.9,-1) {\scriptsize 3}; \node[shape = rectangle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](JK) at (1.5,-1) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](K) at (2.1,-1) {\scriptsize 6}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](II) at (3,-1) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, 
scale=0.01](L) at (3,0) {}; \node[shape = rectangle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M) at (4,0) {}; \draw[semithick](A) -- (B); \draw[semithick](B) -- (D); \draw[semithick](D) -- (E); \draw[semithick](D) -- (F); \draw[semithick](E) -- (G); \draw[semithick](GH) -- (DD); \draw[semithick](JK) -- (DD); \draw[semithick](G) -- (H); \draw[semithick](H) -- (I); \draw[semithick](I) -- (L); \draw[semithick](F) -- (J); \draw[semithick](J) -- (K); \draw[semithick](II) -- (L); \draw[semithick](K) -- (II); \draw[semithick](L) -- (M); \end{tikzpicture}} \caption{\small The system in Example \ref{eg11} with two types of components} \hfill \label{fig1} \end{figure} \begin{table}[h] \small \caption{Survival signature of the system in Figure \ref{fig1}}\label{table1} \begin{center} \begin{tabular}{cccccc} \hline $l_1$ & $l_2$ & $\Phi(l_1, l_2)$ & $l_1$ & $l_2$ & $\Phi(l_1, l_2)$\\ \hline 0 & 0 & 0 & 2 & 0 & 0\\ 0 & 1 & 0 & 2 & 1 & 0\\ 0 & 2 & 0 & 2 & 2 & $\frac{4}{9}$\\ 0 & 3 & 0 & 2 & 3 & $\frac{2}{3}$\\ 1 & 0 & 0 & 3 & 0 & 1\\ 1 & 1 & 0 & 3 & 1 & 1\\ 1 & 2 & $\frac{1}{9}$ & 3 & 2 & 1\\ 1 & 3 & $\frac{1}{3}$ & 3 & 3 & 1\\ \hline \end{tabular} \end{center} \end{table} Assume that the dependency structure of the component lifetimes is modeled by a parametric family of copulas known as the Gumbel-Hougaard family, defined as \begin{align*} &\hat{C}(u_1,\ldots, u_n)=\exp\left(-\left[(-\ln u_1)^{\alpha}+\ldots+(-\ln u_n)^{\alpha} \right]^{1/\alpha} \right), \end{align*} where $\alpha\geq 1$ is the dependency parameter of the family. The value $\alpha=1$ corresponds to independence. Let the component lifetimes of the two types follow exponential distributions with reliability functions $\bar{F}_i(t)=e^{-t\theta_i}$, where we assume that $\theta_1=0.2$ and $\theta_2=0.3$. If there are $M_1=9$ and $M_2=6$ components of type 1 and type 2 available as spares, respectively, then $v_1\in\{0,1,2,3\}$ and $v_2\in\{0,1,2\}$.
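A useful property of this family with exponential marginals is that evaluating $\hat{C}$ at $k_1$ copies of $\bar F_1(t)$ and $k_2$ copies of $\bar F_2(t)$ gives $\exp\{-t(k_1\theta_1^{\alpha}+k_2\theta_2^{\alpha})^{1/\alpha}\}$, which is what makes the closed-form expressions below tractable. A minimal Python sketch verifying this numerically (the function names are our own choices):

```python
import math

def gumbel_survival_copula(us, alpha):
    # Gumbel-Hougaard survival copula; alpha = 1 reduces to independence
    # (the plain product of the arguments).
    if any(u <= 0.0 for u in us):
        return 0.0
    s = sum((-math.log(u)) ** alpha for u in us)
    return math.exp(-s ** (1.0 / alpha))

def exp_args(t, thetas, copies):
    # copies[i] repetitions of the exponential reliability exp(-t * thetas[i])
    return [math.exp(-t * th) for th, k in zip(thetas, copies) for _ in range(k)]
```

For instance, with $\alpha=2$, $t=1$, three copies of $\bar F_1$ and two copies of $\bar F_2$, the copula value equals $\exp\{-(3\theta_1^{2}+2\theta_2^{2})^{1/2}\}$.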
To find the optimal number of redundant components for each type, we use the following values for the replacement costs: $c_1=3, c_2=2, c_1^*=1.5, c_2^*=1$, and $ c^{**}=10$. { For computing the numerator of \eqref{poi}, we need to compute $E(X_i(T)), i=1, 2$. From Lemma \ref{ui} and Theorem \ref{Am}, we have \begin{small} \begin{align*} &E(X_1(T))=n_1\int_{0}^{\infty} \lim_{\delta\rightarrow 0}\frac{1}{\delta}\sum_{m_1=0}^{n_1-1}\sum_{m_2=0}^{n_2}\Phi(m_1+1, m_2) \binom{n_1-1}{m_1}\binom{n_2}{m_2}A_{\mathbf{m}}^{(1)}(t,\delta)dt, \end{align*} \end{small} where \begin{align*} &A_{\mathbf{m}}^{(1)}(t, \delta)={\sum_{j_1=0}^{n_1-m_1-1} \sum_{j_2=0}^{n_2-m_2}(-1)^{j_1+ j_2}\binom{n_1-m_1-1}{j_1}\binom{n_2-m_2}{j_2}} \\ &\times \Big[ e^{-[(m_1+j_1+1)(t \theta_1)^{\alpha}+(m_2+j_2)(t \theta_2)^{\alpha}]^{1/\alpha}}-e^{-[(m_1+j_1)(t \theta_1)^{\alpha}+((t+\delta)\theta_1)^{\alpha}+(m_2+j_2)(t \theta_2)^{\alpha}]^{1/\alpha}}\Big]. \end{align*} Thus, we get \begin{align*} &E(X_1(T))\\ &=n_1 \sum_{m_1=0}^{n_1-1}\sum_{m_2=0}^{n_2}\binom{n_1-1}{m_1}\binom{n_2}{m_2}\Phi(m_1+1, m_2)\sum_{j_1=0}^{n_1-m_1-1} \sum_{j_2=0}^{n_2-m_2} (-1)^{j_1+ j_2} \binom{n_1-m_1-1}{j_1}\binom{n_2-m_2}{j_2}\\ &\times \theta_1^{\alpha}\big[(m_1+j_1+1)\theta_1^{\alpha}+(m_2+j_2)\theta_2^{\alpha}\big]^{-1}. \end{align*} Similarly, we have \begin{align*} &E(X_2(T))\\ &=n_2 \sum_{m_1=0}^{n_1}\sum_{m_2=0}^{n_2-1}\binom{n_1}{m_1}\binom{n_2-1}{m_2}\Phi(m_1, m_2+1)\sum_{j_1=0}^{n_1-m_1} \sum_{j_2=0}^{n_2-m_2-1} (-1)^{j_1+ j_2} \binom{n_1-m_1}{j_1}\binom{n_2-m_2-1}{j_2}\\ &\times\theta_2^{\alpha}\big[(m_1+j_1)\theta_1^{\alpha}+(m_2+j_2+1)\theta_2^{\alpha}\big]^{-1}. 
\end{align*} Also, the denominator of \eqref{poi} can be written as follows \begin{align*} E(T_R)&=\int_0^{\infty} \bar{F}_{T_R}(t) dt={\sum_{l_1=0}^{n_1}\sum_{l_2=0}^{n_2}}\binom{n_1}{l_1} \binom{n_2}{l_2}\Phi(l_1, l_2){\sum_{i_1=0}^{n_1-l_1}\sum_{i_2=0}^{n_2-l_2}}(-1)^{i_1+ i_2}\binom{n_1-l_1}{i_1} \binom{n_2-l_2}{i_2} \nonumber\\ &\times \int_0^{\infty} e^{-\big[(i_1+l_1)\big(-\ln(1-(1-\exp[-t \theta_1])^{v_1+1})\big)^{\alpha}+(i_2+l_2)\big(-\ln(1-(1-\exp[-t \theta_2])^{v_2+1})\big)^{\alpha} \big]^{1/\alpha}} dt, \end{align*} which should be evaluated numerically by suitable software such as \emph{Mathematica}. } The values of $Cost_1(\mathbf{v})$ for different combinations of $v_1$ and $v_2$ are presented in Table \ref{table11} for the two values $\alpha=2$ (dependent components) and $\alpha=1$ (independent components). { It is seen that $v_1=2$ and $v_2=0$ are the optimal choices for the number of redundant components of the first and the second types, respectively, under the criterion $Cost_1$ in the case $\alpha=2$, and $v_1=3$ and $v_2=0$ are the optimal numbers in the case $\alpha=1$. } \begin{table}[h!]
\small \begin{center} \caption{{The values of $Cost_1(\mathbf{v})$ and $Cost_2(\mathbf{v})$ in Example \ref{eg11} in the case of dependent components $(\alpha=2)$ and independent components $(\alpha=1)$.}}\label{table11} \end{center} \begin{center} \begin{tabular}{|cc|c|c|c|c|} \hline $v_1$ & $v_2$ & {$Cost_1(v_1, v_2)$} & {$Cost_2(v_1, v_2)$} & {$Cost_1(v_1, v_2)$} & { $Cost_2(v_1, v_2)$}\\ &&$\alpha=2$& $\alpha=2$&$\alpha=1$& $\alpha=1$ \\ \hline 0 & 0 & 6.33927 & 8.2455 & 9.36071 & 9.70214\\ 0 & 1 & 6.81922 & 9.44774 & 9.38725 & 10.3790\\ 0 & 2 & 7.5289 & 11.3022 & 9.87544 & 11.9363\\ 1 & 0 & 5.20719 & {\bf 7.77258} & 7.09069 & {\bf 8.48716}\\ 1 & 1 & 5.82298 & 9.2981 & 7.51217 & 9.7885\\ 1 & 2 & 6.35331 & 10.7302 & 7.98448 & 11.3693\\ 2 & 0 & {\bf 4.99041} & 9.13115 & 6.56325 & 9.88167\\ 2 & 1 & 5.58924 & 10.7299 & 7.06002 & 11.4307\\ 2 & 2 & 6.13251 & 12.28921 & 7.53562 & 13.0756\\ 3 & 0 & 5.04518 & 11.137 & {\bf 6.48197} & 12.035\\ 3 & 1 & 5.59315 & 12.7178 & 6.98476 & 13.6766\\ 3 & 2 & 6.11375 & 14.2941 & 7.45447 & 15.3508\\ \hline \end{tabular} \end{center} \end{table} Suppose that the described system is maintained under the aforementioned age replacement policy, where we assume that $\tau=2$, i.e., the replacement time of the system is $\min(T_R, 2)$. { From equations \eqref{EN12} and \eqref{Btau}, the mean number of failed components of the $i$th type at time $\tau$, before system failure, is evaluated by the following expression.
\begin{align*} &E(N_i(\tau)|T>\tau)=\frac{1}{\bar{F}_T(\tau)}\sum_{j_1=0}^{n_1}\sum_{j_2=0}^{n_2} j_i \Phi(n_1-j_1, n_2-j_2)\binom{n_1}{j_1} \binom{n_2}{j_2} {\sum_{b_1=0}^{j_1}\sum_{b_2=0}^{j_2}}(-1)^{b_1+ b_2}\binom{j_1}{b_1} \binom{j_2}{b_2} \\ &\times \exp[-\tau\big( \theta_1^{\alpha}(n_1-j_1+b_1)+\theta_2^{\alpha}(n_2-j_2+b_2) \big)^{1/\alpha}], ~i=1, 2, \end{align*} where from \eqref{Fbar-dep}, we get \begin{align*} \bar{F}_T(\tau)&={\sum_{l_1=0}^{n_1}\sum_{l_2=0}^{n_2}}{\sum_{i_1=0}^{n_1-l_1}\sum_{i_2=0}^{n_2-l_2}}(-1)^{i_1+ i_2}\binom{n_1}{l_1} \binom{n_2}{l_2}\binom{n_1-l_1}{i_1} \binom{n_2-l_2}{i_2}\Phi(l_1, l_2)\nonumber\\ &\times \exp[-\tau\big((i_1+l_1)\theta_1^{\alpha}+(i_2+l_2)\theta_2^{\alpha} \big)^{1/\alpha}]. \end{align*} Next, for the mean number of failed components at the time of the system failure, given that the system has failed before $\tau$, we have \begin{small} \begin{align*} &E(X_1(T)|T\leq \tau)\\ &=\frac{n_1}{1-\bar{F}_T(\tau)}\sum_{m_1=0}^{n_1-1}\sum_{m_2=0}^{n_2}\sum_{l_1=0}^{m_1}\sum_{l_2=0}^{m_2}[\Phi(m_1, m_2)-\Phi(l_1, l_2)]\binom{n_1-1}{m_1}\binom{m_1}{l_1}\binom{n_2}{m_2}\binom{m_2}{l_2}\\ &\times {\sum_{j_1=0}^{n_1-m_1-1}\sum_{j_2=0}^{n_2-m_2}}(-1)^{j_1+ j_2}\binom{n_1-m_1-1}{j_1} \binom{n_2-m_2}{j_2} {\sum_{d_1=0}^{m_1-l_1}\sum_{d_2=0}^{m_2-l_2}}(-1)^{d_1+d_2}\binom{m_1-l_1}{d_1} \binom{m_2-l_2}{d_2}\\ &\times \frac{\theta_1^{\alpha}\big[ e^{-\tau\big((l_1+d_1)\theta_1^{\alpha}+(l_2+d_2)\theta_2^{\alpha} \big)^{1/\alpha}}- e^{-\tau\big((m_1+j_1+1)\theta_1^{\alpha}+(m_2+j_2)\theta_2^{\alpha} \big)^{1/\alpha}} \big]}{(m_1-l_1+j_1-d_1+1)\theta_1^{\alpha}+(m_2-l_2+j_2-d_2)\theta_2^{\alpha}}, \end{align*} \end{small} and \begin{small} \begin{align*} &E(X_2(T)|T\leq \tau)\\ &=\frac{n_2}{1-\bar{F}_T(\tau)}\sum_{m_1=0}^{n_1}\sum_{m_2=0}^{n_2-1}\sum_{l_1=0}^{m_1}\sum_{l_2=0}^{m_2}[\Phi(m_1, m_2)-\Phi(l_1, l_2)]\binom{n_1}{m_1}\binom{m_1}{l_1}\binom{n_2-1}{m_2}\binom{m_2}{l_2}\\ &\times
{\sum_{j_1=0}^{n_1-m_1}\sum_{j_2=0}^{n_2-m_2-1}}(-1)^{j_1+ j_2}\binom{n_1-m_1}{j_1} \binom{n_2-m_2-1}{j_2} {\sum_{d_1=0}^{m_1-l_1}\sum_{d_2=0}^{m_2-l_2}}(-1)^{d_1+d_2}\binom{m_1-l_1}{d_1} \binom{m_2-l_2}{d_2}\\ &\times \frac{\theta_2^{\alpha}\big[ e^{-\tau\big((l_1+d_1)\theta_1^{\alpha}+(l_2+d_2)\theta_2^{\alpha} \big)^{1/\alpha}}- e^{-\tau\big((m_1+j_1)\theta_1^{\alpha}+(m_2+j_2+1)\theta_2^{\alpha} \big)^{1/\alpha}} \big]}{(m_1-l_1+j_1-d_1)\theta_1^{\alpha}+(m_2-l_2+j_2-d_2+1)\theta_2^{\alpha}}. \end{align*} \end{small} By substituting these results in \eqref{cost2}, the mean cost rate of the replacement strategy can be evaluated. } \quad The values of $Cost_2(\mathbf{v})$ are calculated for different combinations of $v_1$ and $v_2$ in Table \ref{table11}, for both the dependent and the independent situations. It follows from the results of the table that for $v_1=1$ and $v_2=0$, the mean cost rate $Cost_2(\mathbf{v})$ is minimized in both cases $\alpha=1,2$. {\quad In order to investigate the robustness of our strategies with respect to the model parameters, we present some numerical results.} Table \ref{table66} shows the effect of the dependency parameter $\alpha$ on the optimal values of $v_1$ and $v_2$. As seen, when $\alpha$ increases (i.e., we move away from independence) the number of redundant components decreases { based on the objective function $Cost_1(v_1,v_2)$, but remains unchanged under $Cost_2(v_1,v_2)$. }{This makes sense, since with stronger dependency the {$MTTF$} increases and the need for spare components is reduced. It should also be noted that the higher the $\alpha$, the lower the mean cost rates. } \begin{table}[h!]
\small \caption{\small The optimum values of $\mathbf{v}$ by minimizing $Cost_i(v_1,v_2)$, $i=1,2$, for different $\alpha$ in Example \ref{eg11}}\label{table66} \begin{center} { \begin{tabular}{|c|ccccccccccc|} \hline $\alpha$& 1 & 1.2 & 1.4 & 1.6 & 1.8 & 2 & 2.2 & 2.4 & 2.6 & 2.8 & 3 \\ \hline $v_1$ & 3 & 3 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\ $v_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ $Cost_1(v_1,v_2)$ & 6.48 & 5.98 & 5.63 & 5.36 & 5.15 & 4.99 & 4.86 & 4.75 & 4.66 & 4.58 & 4.52\\ \hline $v_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ $v_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ ${Cost_2}(v_1,v_2)$ & 8.48 & 8.28 & 8.11 & 7.98 & 7.87 & 7.77 & 7.69 & 7.62 & 7.57 & 7.50 & 7.46\\ \hline \end{tabular} } \end{center} \end{table} \quad To explore the sensitivity of the proposed models with respect to the component costs, $\mathbf{c}=(c_1,c_2)$ and $\mathbf{c}^*=(c_1^*,c_2^*)$, we provide some numerical results in Table \ref{table412}, for $\alpha=2$. In the top part of the left panel of the table, we observe that for fixed values of $c_1^*=1.5$ and $c_2^*=1$, the increase in the costs $c_1$ and $c_2$ results in a reduction of the optimal values of $v_1$ and $v_2$. In the bottom part of the left panel of the table, the costs $c_i$ and $c^*_i$, $i=1,2$, of the two types are swapped. In this case, when the costs $c_1^*$ and $c_2^*$ are fixed as $c_1^*=1$ and $c_2^*=1.5$, then we again observe that the increase in the costs $c_1$ and $c_2$ results in a decline of the optimal values of $v_1$ and $v_2$. In the top part of the right panel of Table \ref{table412}, it can be seen that for fixed values of renewing failed components as $c_1=6, c_2=5.5$, the decrease in the costs $c_1^*$ and $c_2^*$ results in an increase of the optimal values of $v_1$ and $v_2$. As shown in the bottom part of the right panel, the same result holds by swapping the costs of the components of type 1 and type 2.
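The optima reported in these tables come from a small exhaustive grid search over the feasible numbers of spares. A minimal sketch of this search pattern follows; the cost function below is a toy stand-in with an artificial failure penalty, not the paper's $Cost_1$ or $Cost_2$ expression, and all names are illustrative:

```python
from itertools import product

def optimal_allocation(cost_fn, v1_max, v2_max):
    """Exhaustively evaluate cost_fn on the (v1, v2) grid and return the minimizer."""
    grid = product(range(v1_max + 1), range(v2_max + 1))
    return min(grid, key=lambda v: cost_fn(*v))

def toy_cost(v1, v2, c=(1.5, 1.0)):
    # Toy stand-in: linear spare-purchase cost plus a penalty term that
    # shrinks as spares are added; NOT the paper's mean cost rate.
    return c[0] * v1 + c[1] * v2 + 5.0 / (1 + 0.8 * v1 + 0.3 * v2)
```

For the paper's models, `cost_fn` would evaluate the mean cost rate \eqref{cost1} or \eqref{cost2} at the given $(v_1, v_2)$; the search itself is unchanged.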
{ It is seen that in all four parts of Table \ref{table412} the component costs and the numbers of spare components are inversely related.} \begin{table}[h!] \small \caption{The optimum values of $\mathbf{v}$ by minimizing $Cost_1$ for different costs in Example \ref{eg11}}\label{table412} \begin{center} \begin{tabular}{|ccccc||ccccc|} \hline $\mathbf{c}$ & $\mathbf{c^*}$ & { $v_1$} & {$v_2$} & { $Cost_1$} & $\mathbf{c}$ & $\mathbf{c^*}$ & { $v_1$} & {$v_2$} & {$ Cost_1$}\\ \hline (1.6, 1.1) & (1.5,1) & 3 & 0 & 4.0492 & (6, 5.5) & (5.9, 5.4) & 1 & 0 & 11.9478\\ (1.7, 1.2) & (1.5,1) & 3 & 0 & 4.1278 & (6, 5.5) & (5.7, 5.2) & 1 & 0 & 11.7509\\ (1.8, 1.3) & (1.5,1) & 3 & 0 & 4.2065 & (6, 5.5) & (5.5, 5.0) & 1 & 0 & 11.5540\\ (1.9, 1.4) & (1.5,1) & 2 & 0 & 4.2845 & (6, 5.5) & (5.3, 4.8) & 1 & 0 & 11.3571\\ (2, 1.5) & (1.5,1) & 2 & 0 & 4.3599 & (6, 5.5) & (5.1, 4.6) & 2 & 0 & 11.1543\\ (2.5, 2) & (1.5,1) & 2 & 0 & 4.7369 & (6, 5.5) & (5, 4.5) & 2 & 0 & 11.0493 \\ (3, 2.5) & (1.5,1) & 2 & 0 & 5.1139 & (6, 5.5) & (4.5, 4) & 2 & 0 & 10.5245\\ (3.5, 3) & (1.5,1) & 2 & 0 & 5.4908 & (6, 5.5) & (4, 3.5) & 2 & 0 & 9.9997\\ (4, 3.5) & (1.5,1) & 2 & 0 & 5.8678 & (6, 5.5) & (3.5, 3) & 2 & 0 & 9.4750 \\ (4.5, 4) & (1.5,1) & 2 & 0 & 6.2448 & (6, 5.5) & (3, 2.5) & 2 & 0 & 8.9502\\ \hline (1.5, 2) & (1, 1.5) & 3 & 0 & 3.7874 & (5.5, 6) & (5, 5.5) & 2 & 0 & 11.1232\\ (2, 2.5) & (1, 1.5) & 3 & 0 & 4.1807 & (5.5, 6) & (4.5, 5) & 2 & 0 & 10.5984\\ (2.5, 3) & (1, 1.5) & 3 & 0 & 4.5740 & (5.5, 6) & (4, 4.5) & 2 & 0 & 10.0737\\ (3, 3.5) & (1, 1.5) & 3 & 0 & 4.9673 & (5.5, 6) & (3.5, 4) & 2 & 0 & 9.5489\\ (3.5, 4) & (1, 1.5) & 3 & 0 & 5.3606 & (5.5, 6) & (3, 3.5) & 2 & 0 & 9.0241\\ (4, 4.5) & (1, 1.5) & 3 & 0 & 5.7539 & (5.5, 6) & (2.5, 3) & 2 & 0 & 8.4992\\ (4.5, 5) & (1, 1.5) & 3 & 0 & 6.1472 & (5.5, 6) & (2, 2.5) & 2 & 0 & 7.9745\\ (5, 5.5) & (1, 1.5) & 3 & 0 & 6.5405 & (5.5, 6) & (1.5, 2) & 2 & 0 & 7.4497\\ (5.5, 6) & (1, 1.5) & 2 & 0 & 6.9249 & (5.5, 6) & (1, 1.5) & 2 & 0 &
6.9249\\ (6, 6.5) & (1, 1.5) & 2 & 0 & 7.0550 & (5.5, 6) & (0.5, 1) & 3 & 0 & 6.3664\\ (6.5, 7) & (1, 1.5) & 2 & 0 & 7.4320 & (5.5, 6) & (0, 0.5) & 3 & 0 & 5.7991\\ \hline \end{tabular} \end{center} \end{table} Table \ref{table44} shows the behavior of the number of redundant components from another point of view. In the left panel of the table, we have kept $c_2$ ($c^*_2$) constant and have increased the values of $c_1$ ($c^*_1$). In fact, we have assumed that $c_1=\omega c_2$ and $c_1^*=\omega c_2^*$ for $\omega=1.5, 2, 3, \ldots, 10$. As seen, when $\omega$ increases, the optimal value of the redundant component $v_1$ decreases { and the optimal value of $v_2$ increases}. In the right panel of Table \ref{table44}, we exchange the costs of types 1 and 2, i.e., we assume $c_2=\omega c_1$ and $c_2^*=\omega c_1^*$. {In this case, we observe no changes in the numbers of redundant components $v_1, v_2$ when $\omega$ increases.}\\ \begin{table}[h!] \small \caption{The optimum values of $\mathbf{v}$ by minimizing $Cost_1$ for different costs in Example \ref{eg11}}\label{table44} \begin{center} \begin{tabular}{|ccccc||ccccc|} \hline $\mathbf{c}$ & $\mathbf{c^*}$ &{ $v_1$ }& {$v_2$ }& {$Cost_1$} &$\mathbf{c}$ & $\mathbf{c^*}$ & {$v_1$} & {$v_2$} &{ $Cost_1$} \\ \hline (3,2) & (1.5,1) & 2 & 0 & 4.9904 & (2,3) & (1,1.5) & 3 & 0 & 4.2859\\ (4,2) & (2,1) & 2 & 0 & 5.9203 & (2,4) & (1,2) & 3 & 0 & 4.5832\\ (6,2) & (3,1) & 1 & 0 & 7.5921 & (2,6) & (1,3) & 3 & 0 & 5.1778\\ (8,2) & (4,1) & 1 & 0 & 9.1824 & (2,8) & (1,4) & 3 & 0 & 5.7725\\ (10,2) & (5,1) & 0 & 1 & 10.6840 & (2,10) & (1,5) & 3 & 0 & 6.3671\\ (12,2) & (6,1) & 0 & 1 & 11.7883 & (2,12) & (1,6) & 3 & 0 & 6.9617\\ (14,2) & (7,1) & 0 & 1 & 12.8925 & (2,14) & (1,7) & 3 & 0 & 7.5563\\ (16,2) & (8,1) & 0 & 1 & 13.9967 & (2,16) & (1,8) & 3 & 0 & 8.1509\\ (18,2) & (9,1) & 0 & 1 & 15.1009 & (2,18) & (1,9) & 3 & 0 & 8.7456\\ (20,2) & (10,1) & 0 & 1 & 16.2052 & (2,20) & (1,10) & 3 & 0 & 9.3402\\ \hline \end{tabular} \end{center} \end{table}
\quad {In this example, the distributions of the components' lifetimes are ordered such that $\bar{F}_1(t)\geq \bar{F}_2(t)$ for all $t>0$, i.e., the reliability (and consequently the $MTTF$) of the components of type one is greater than that of type two. Note that, according to the system structure, the components of type one are generally in more critical positions than those of type two. Hence, one should intuitively expect that the optimal solution, according to the cost criterion, would be the case in which one allocates more components of type one than of type two.} \quad { To see whether this fact affects the numbers $v_i$, we let $\bar{F}_2^*(t)=e^{-0.07 t^{2}}$, $t>0$, be the reliability function of the components of type two. In this case, the two reliability functions cross each other such that $\bar{F}_1(t)<\bar{F}_2^*(t)$ for $t<2.86$ and $\bar{F}_1(t)>\bar{F}_2^*(t)$ for $t>2.86$. Note that the $MTTF$ for components of type two in this new case is the same as in the previous one. By fixing the other parameters as before, we get the results given in Table \ref{table660}. We see that although the distributions cross each other, the optimal numbers of components in Table \ref{table660} are mostly the same as those in Table \ref{table66}, perhaps because the $MTTF$s are the same in both cases. \begin{table}[h!]
\small \caption{\small {The optimum values of $\mathbf{v}$ by minimizing $Cost_1(v_1,v_2)$, for different values of $\alpha$ in Example \ref{eg11}}}\label{table660} \begin{center} { \begin{tabular}{|c|ccccccccccc|} \hline $\alpha$& 1 & 1.2 & 1.4 & 1.6 & 1.8 & 2 & 2.2 & 2.4 & 2.6 & 2.8 & 3 \\ \hline $v_1$ & 3 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\ $v_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ $Cost_1(v_1,v_2)$ & 6.53 & 6.06 & 5.71 & 5.45 & 5.25 & 5.08 & 4.95 & 4.85 & 4.76 & 4.68 & 4.62\\ \hline \end{tabular} } \end{center} \end{table} } { As a final point, to see the effect of the survival copula on the optimal numbers $v_1$ and $v_2$, we suppose that the dependence structure is governed by the Clayton copula of the form \begin{align*} \hat{C}(u_1,\ldots, u_n)=\left(u_1^{-1/\alpha}+\ldots + u_n^{-1/\alpha}-n+1 \right)^{-\alpha}, \alpha>0. \end{align*} The parameter $\alpha$ controls the degree of dependency of the copula, and the limiting case $\alpha=0$ gives independence. We obtain the optimal values of the redundant components for some values of $\alpha$ in Table \ref{table6687}. As can be seen, by increasing $\alpha$, the value of $v_1$ and the mean cost rate increase (while $v_2$ remains zero), which is in contradiction with the results in Table \ref{table66}. Hence, the outcome of the optimization problem depends strongly on the functional form of the dependence structure, not only on the dependence parameter.} \begin{table}[h!]
\small \caption{\small {The optimum values of $\mathbf{v}$ by minimizing $Cost_1(v_1,v_2)$ for different $\alpha$ under the Clayton copula in Example \ref{eg11}}}\label{table6687} \begin{center} { \begin{tabular}{|c|cccccc|} \hline $\alpha$ & 0.001 & 0.1 & 1 & 2 & 3 & 4 \\ \hline $v_1$ & 2 & 2 & 2 & 2 & 3 & 3 \\ $v_2$ & 0 & 0 & 0 & 0 & 0 & 0 \\ $Cost_1(v_1,v_2)$ & 3.71 & 3.97 & 5.26 & 5.75 & 5.98 & 6.09 \\ \hline \end{tabular} } \end{center} \end{table} } \end{Example} In the following example we consider an 8-component system consisting of three types of components. For fixed values of the $v_i$'s, we minimize the function $Cost_1$, and also the function $Cost_2$ in the case where the replacement time of the unfailed system, $\tau$, is the variable of interest. \begin{Example}\label{eg2} {\rm Consider the system depicted in Figure \ref{fig2}, given in \cite{heu}. The system has eight components, of which three components (1, 2, and 3) are of type one, three components (4, 5, and 7) are of type two, and two components (6 and 8) are of type three. The values of the system survival signature are computed in \cite{heu}, and hence we refer the reader to the cited paper for the details. \begin{figure}[h!]
\centerline{\begin{tikzpicture}[node distance = 5 cm] \tikzset{LabelStyle/.style = {scale=2, fill= white, text=black}} \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](D) at (0,0) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](E) at (0,1) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](F) at (0,-1) {}; \node[shape = rectangle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](G) at (0.9,1) {\scriptsize 1}; \node[shape = rectangle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](GH) at (1.5,1) {}; \node[shape = rectangle,draw, fill=lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](H) at (2.1,1) {\scriptsize 4}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](I) at (3,1) {}; \node[shape = rectangle,draw, fill=white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](DD) at (1.5,0) {\scriptsize 3}; \node[shape = rectangle,draw, fill=white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](J) at (0.9,-1) {\scriptsize 2}; \node[shape = rectangle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](JK) at (1.5,-1) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](K) at (2.1,-1) {\scriptsize 5}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](II) at (3,-1) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](L) at (3,0) {}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M) at (3.5,0) {}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M1) at (3.5,1) 
{}; \node[shape = rectangle,draw, fill= black, text=white, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M2) at (3.5,-1) {}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=1.7](M22) at (4,-1) {\scriptsize 6}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=1.7](N) at (5,-1) {\scriptsize 8}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.2](N1) at (5.5,-1) {}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.2](N2) at (5.5,1) {}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.2](N3) at (5.5,0) {}; \node[shape = rectangle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.2](N4) at (6,0) {}; \node[shape = rectangle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](O1) at (4.5,1) {\scriptsize 7}; \draw[semithick](D) -- (E); \draw[semithick](D) -- (F); \draw[semithick](E) -- (G); \draw[semithick](GH) -- (DD); \draw[semithick](JK) -- (DD); \draw[semithick](G) -- (H); \draw[semithick](H) -- (I); \draw[semithick](I) -- (L); \draw[semithick](F) -- (J); \draw[semithick](J) -- (K); \draw[semithick](II) -- (L); \draw[semithick](K) -- (II); \draw[semithick](L) -- (M); \draw[semithick](M) -- (M1); \draw[semithick](M) -- (M2); \draw[semithick](M2) -- (M22); \draw[semithick](M1) -- (O1); \draw[semithick](O1) -- (N2); \draw[semithick](N2) -- (N3); \draw[semithick](M22) -- (N); \draw[semithick](N) -- (N1); \draw[semithick](N1) -- (N3); \draw[semithick](N3) -- (N4); \end{tikzpicture}} \caption{\small System in Example \ref{eg2}} \hfill \label{fig2} \end{figure} Suppose here that all the components are independent, where the components of type $i$ have common Weibull reliability functions $\bar{F}_i(t)=e^{-\beta_i t^{\alpha_i}}, \alpha_i, \beta_i>0$ for
$i=1,2,3$. In Table \ref{table22} we present the mean cost rate $Cost_1(v_1, v_2,v_3)$ for given values of $\beta_1=3$, $\beta_2=4,$ $ \beta_3=2$, $\alpha_1=2,$ $\alpha_2=3,$ $ \alpha_3=1$ when the cost parameters are $c_1=1.5, $ $ c_2=1,$ $c_3=2, c_1^*=0.75$, $c_2^*=0.4,$ $c_3^*=1$ and $c^{**}=10$. Assume that we have $M_1=7,$ $M_2=4,$ and $M_3=5$ components from types 1, 2, and 3, respectively, as spares. Hence, we can choose $v_1\in \{0, 1, 2 \}$, $v_2\in \{0, 1\}$, and $v_3\in \{0, 1, 2\}$ as the redundancy for each type, respectively. In the left panel of Table \ref{table22}, the values of $Cost_1$ are computed for different combinations of the $v_i$'s. As can be seen, by adding $v_1=0$, $v_2=1,$ and $ v_3=0$ redundant components to groups 1, 2, and 3, respectively, we get the minimum value of the mean cost rate $Cost_1(v_1, v_2,v_3)$. Under the assumption that $\tau$ is the variable of interest, in the right panel of the table, we have minimized $Cost_2(v_1, v_2,v_3)$ in terms of $\tau$ and have reported the optimum value of $\tau$, in the case where the values of the $v_i$'s are kept fixed and known. It is observed that among all minimized values of $Cost_2$, the least value is obtained in the case where the numbers of redundant components are $v_1=0$, $v_2=1,$ and $ v_3=0$, for which we have $\tau=0.375$. \begin{table}[h!]
\small \caption{The values of $Cost_1(\mathbf{v})$ and $Cost_2(\mathbf{v})$ in Example \ref{eg2}}\label{table22} \begin{center} \begin{tabular}{|ccc|c||c|c|} \hline $v_1$ & $v_2$ & $v_3$ & $Cost_1(v_1, v_2,v_3)$ & $\tau_{opt}$ & $Cost_2(v_1, v_2,v_3)$ \\ \hline 0 & 0 & 0 & 39.0424 & { 0.300} & { 29.5929} \\ 0 & 1 & 0 & {\bf 37.4142} & {\bf 0.375} & {\bf 28.9959} \\ 1 & 0 & 0 & 42.1779 & 0.365 & 34.7858\\ 1 & 1 & 0 & 39.0422 & 0.450 & 31.1106 \\ 2 & 0 & 0 & 47.4998 & 0.405 & 42.0378 \\ 2 & 1 & 0 & 43.0531 & 0.490 & 36.3495 \\ 0 & 0 & 1 & 42.2553 & 0.326 & 38.2377 \\ 0 & 1 & 1 & 41.2034 & 0.367 & 37.6403 \\ 1 & 0 & 1 & 44.1149 & 0.391 & 41.3141 \\ 1 & 1 & 1 & 41.9973 & 0.463 & 37.8883 \\ 2 & 0 & 1 & 48.5362 & 0.445 & 47.7900 \\ 2 & 1 & 1 & 45.5179 & 0.503 & 42.4445 \\ 0 & 0 & 2 & 45.5246 & 0.350 & 46.7613 \\ 0 & 1 & 2 & 44.8865 & 0.410 & 46.1258 \\ 1 & 0 & 2 & 46.1353 & 0.412 & 47.6364 \\ 1 & 1 & 2 & 44.8022 & 0.480 & 44.4306 \\ 2 & 0 & 2 & 49.7090 & 0.455 & 53.1287 \\ 2 & 1 & 2 & 47.7961 & 0.520 & 48.2148 \\ \hline \end{tabular} \end{center} \end{table} } \end{Example} \section{Optimal number of components in series-parallel system}\label{series-parallel} \quad An important subclass of coherent systems is the class of series-parallel systems. A series-parallel system is a series structure of $L$ parallel subsystems, $L\geq 1$; see, e.g., Figure \ref{figsp}. The purpose here is to find the optimal number of components in the $l$th parallel subsystem, under the condition that $M_l$ components of type $l$ are available, where the components in the $l$th subsystem are exchangeable and dependent, with common reliability function $\bar{F}_l$, $l=1,\ldots, L$. Furthermore, suppose that the random failure times of the components of different types are dependent. The dependency structure in the system is built with a copula function $\hat{C}$, as described in Section 2.
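For intuition, when all the components are independent the survival function of such a series-parallel system factorizes into a simple product over the subsystems, $\prod_{l=1}^L \big(1-[1-\bar{F}_l(t)]^{n_l}\big)$. A small numerical sketch of this special case, with assumed exponential marginals (names and rates are illustrative):

```python
import math

def series_parallel_surv(t, surv_fns, n):
    """P(T > t) for a series arrangement of L parallel subsystems with
    INDEPENDENT components: prod_l (1 - (1 - Fbar_l(t)) ** n_l)."""
    p = 1.0
    for Fbar, n_l in zip(surv_fns, n):
        p *= 1.0 - (1.0 - Fbar(t)) ** n_l
    return p

# Assumed exponential marginals, for illustration only.
fns = [lambda t: math.exp(-0.5 * t), lambda t: math.exp(-0.2 * t)]
```

With dependent components this factorization no longer holds, and one must fall back on the copula-based expression for $\bar{F}_T$ given in this section.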
Under the mean cost rate criteria defined in Section 2, the problem of optimal allocation is to find the optimal values of $n_l$ for each subsystem so that (\ref{cost1}) or (\ref{cost2}) is minimized. In the following, we provide the corresponding expressions for the cost-based functions in a series-parallel system. First note that for this system, we have \begin{align*} \Phi(l_1,\ldots, l_L)=\left\{ \begin{tabular}{ll} 1 & $\forall j\in\{1,\ldots,L\}: l_j\geq 1$\\ 0 & o.w.\\ \end{tabular} \right. \end{align*} Hence from \eqref{Fbar-dep}, we get \begin{align*} \bar{F}_T(t)&={\sum_{l_1=1}^{n_1}\ldots\sum_{l_L=1}^{n_L}}{\sum_{i_1=0}^{n_1-l_1}\ldots\sum_{i_L=0}^{n_L-l_L}}(-1)^{i_1+\ldots+ i_L}\binom{n_1}{l_1}\ldots \binom{n_L}{l_L}\binom{n_1-l_1}{i_1}\ldots \binom{n_L-l_L}{i_L}\nonumber\\ &\times \hat{C}(\underbrace{\bar{F}_1(t)}_{i_1+l_1},\underbrace{1}_{n_1-(i_1+l_1)},\ldots,\underbrace{\bar{F}_L(t)}_{i_L+l_L},\underbrace{1}_{n_L-(i_L+l_L)}), \end{align*} and in the special case of independent components, we derive from \eqref{ind} \begin{align*} \bar{F}_T(t)&=\prod_{l=1}^L (1-[1-\bar{F}_l(t)]^{n_l}). \end{align*} \begin{figure}[h!]
\small \centerline{\begin{tikzpicture}[node distance = 3 cm] \tikzset{LabelStyle/.style = {scale=1, fill= white, text=black}} \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](A) at (-3.5,0){} ; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](D) at (-3,0) {}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](E) at (-3,1) {}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](E2) at (-3,2) {}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](F2) at (-3,-2) {}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](G) at (-2,1) {\tiny 2}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](G2) at (-2,2) {\tiny 1}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](I) at (-1,1) {}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](I2) at (-1,2) {}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](L) at (-1,0) {}; \node[shape = circle,draw, fill=white , text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](J) at (-2,-2) {\tiny $n_1$}; \node[shape = circle,draw, fill= white, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](J2) at (-1,-2) {}; \node[shape = circle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M1) at (-2,0) {}; \node[shape = circle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M11) at (-2,-0.5) {}; \node[shape = circle,draw, fill= black, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M12) at (-2,-1) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, 
scale=0.01](DD) at (0,0) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](EE) at (0,1) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](EE2) at (0,2) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](FF2) at (0,-2) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](GG) at (1,1) {\tiny 2}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](GG2) at (1,2) {\tiny 1}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=1.7](JJ) at (1,-2) {\tiny $n_2$}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](JJ2) at (2,-2) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](DDD) at (2,0) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](II) at (2,1) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.01](II2) at (2,2) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M) at (1,0) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M1) at (1,-0.5) {}; \node[shape = circle,draw, fill= lightgray, text= black, inner sep =2 pt, outer sep= 0 pt, scale=0.2](M2) at (1,-1) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](D3) at (3,0) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](K) at (3,1) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](B) at (3,2) {}; \node[shape = circle,draw, fill= 
black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](P) at (3,-2) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=1.70](O) at (4,1) {\tiny 2}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=1.70](Q) at (4,2) {\tiny 1}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=1.70](R) at (4,-2) {\tiny $n_3$}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](S) at (5,-2) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](D4) at (5,0) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](T) at (5,1) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.01](U) at (5,2) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.20](MM) at (4,0) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.20](V) at (4,-0.5) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.20](W) at (4,-1) {}; \node[shape = circle,draw, fill= black, text= white, inner sep =2 pt, outer sep= 0 pt, scale=0.20](M3) at (5.5,0) {}; \draw[semithick](A) -- (D); \draw[semithick](D) -- (E); \draw[semithick](E) -- (E2); \draw[semithick](E) -- (G); \draw[semithick](G) -- (I); \draw[semithick](G2) -- (I2); \draw[semithick](E2) -- (G2); \draw[semithick](D) -- (F2); \draw[semithick](F2) -- (J); \draw[semithick](J) -- (J2); \draw[semithick](J2) -- (L); \draw[semithick](I2) -- (L); \draw[semithick](I) -- (I2); \draw[semithick](I) -- (L); \draw[semithick](L) -- (DD); \draw[semithick](DD) -- (EE); \draw[semithick](EE) -- (EE2); \draw[semithick](DD) -- (FF2); \draw[semithick](EE) -- (GG); \draw[semithick](EE2) -- (GG2); \draw[semithick](FF2) -- 
(JJ); \draw[semithick](JJ) -- (JJ2); \draw[semithick](GG) -- (II); \draw[semithick](GG2) -- (II2); \draw[semithick](II) -- (II2); \draw[semithick](II2) -- (DDD); \draw[semithick](JJ2) -- (DDD); \draw[semithick](D3) -- (K); \draw[semithick](K) -- (B); \draw[semithick](D3) -- (P); \draw[semithick](K) -- (O); \draw[semithick](B) -- (Q); \draw[semithick](P) -- (R); \draw[semithick](R) -- (S); \draw[semithick](O) -- (T); \draw[semithick](Q) -- (U); \draw[semithick](T) -- (U); \draw[semithick](T) -- (D4); \draw[semithick](S) -- (D4); \draw[semithick](DDD) -- (D3); \draw[semithick](D4) -- (M3); \end{tikzpicture}} \caption{\small A series-parallel system with 3 subsystems } \hfill \label{figsp} \end{figure} \subsection*{Cost function at the system failure} \quad In a similar manner to Subsection \ref{cost}, the mean cost rate function for system failure is defined as: \begin{align}\label{cost78} Cost_3(\mathbf{n})=\frac{\sum_{i=1}^Lc_iE(X_i(T))+\sum_{i=1}^L c_i^*E(n_i-X_i(T))+c^{**}}{E(T)} \end{align} where $\mathbf{n}=(n_1,\ldots, n_L)$, and \begin{align}\label{3} E(X_i(T))&=n_i \sum_{m_1=1}^{n_1}\ldots\sum_{m_i=0}^{n_i-1}\ldots\sum_{m_L=1}^{n_L}\binom{n_1}{m_1}\ldots \binom{n_i-1}{m_i}\ldots\binom{n_L}{m_L}\int_{0}^{\infty}\lim_{\delta\rightarrow 0}\frac{A_{\mathbf{m}}^{(i)}(t,\delta)}{\delta}dt, \end{align} in which $A_{\mathbf{m}}^{(i)}(t,\delta)$ is introduced in \eqref{Adelta}. For independent components, \eqref{3} reduces to the following expression: \begin{small} \begin{align}\label{78} E(X_i(T)) &=n_i \int_0^{\infty} \prod_{l=1,l\neq i}^L (1-(1-\bar{F}_l(t))^{n_l}) dF_{i}(t). \end{align} \end{small} If $L=1$, then the system becomes a parallel system with $n_1$ components, and hence in this case $E(X_1(T))=n_1$. { \quad For the considered series-parallel system in which the components of the subsystems are independent, \cite{Eryilmazetal2020} obtained a similar result for $E(X_i(T))$ in \eqref{78}.
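Expression \eqref{78} is straightforward to evaluate numerically. A sketch for assumed independent exponential components, using a simple midpoint rule (the helper name, rates, and discretization parameters are illustrative):

```python
import math

def mean_failed_at_T(i, rates, n, t_max=50.0, steps=20000):
    """E(X_i(T)) = n_i * integral of prod_{l != i} (1 - (1 - Fbar_l(t))**n_l) dF_i(t),
    as in eq. (78), for independent exponential components with the given rates."""
    h = t_max / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h  # midpoint of the k-th subinterval
        prod = 1.0
        for l, (r, n_l) in enumerate(zip(rates, n)):
            if l != i:
                prod *= 1.0 - (1.0 - math.exp(-r * t)) ** n_l
        # dF_i(t) = rate_i * exp(-rate_i * t) dt for the exponential marginal
        total += prod * rates[i] * math.exp(-rates[i] * t) * h
    return n[i] * total
```

For $L=1$ the integrand is just the density $f_1$, so the routine recovers $E(X_1(T))=n_1$ as noted above.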
Subsequently, the authors of \cite{Eryilmazetal2020} found the optimal numbers of components in each subsystem based on the minimization of the cost function \eqref{cost78}, under constraints on the total allotted cost for replacing failed components and the total allotted cost for rejuvenation of unfailed ones. Hence, our results in this subsection may be considered as an extension of their work to the case of dependent components.} { Also, \cite{Dembinska2021} discussed a similar problem for the case where the lifetime distributions of the components are discrete; in particular, they obtained some results for the discrete phase-type distribution.} \subsection*{Cost function based on preventive replacement} The mean cost rate function of the system for age replacement at time $\min(\tau, T)$ is defined as \begin{align*} Cost_4(\mathbf{n})=\frac{M_1(\mathbf{n})P(T\leq \tau)+M_2(\mathbf{n})P(T> \tau)}{E(\min(\tau,T))} \end{align*} where \begin{align*} M_1(\mathbf{n})=\sum_{i=1}^L c_i E(X_i(T)|T\leq \tau)+\sum_{i=1}^L c^*_i E(n_i-X_i{(T)}|T\leq \tau)+c^{**}, \end{align*} and \begin{align*} M_2(\mathbf{n})=\sum_{i=1}^L c_i E(N_i{(\tau)}|T>\tau)+\sum_{i=1}^L c^*_i E(n_i-N_i{(\tau)}|T>\tau). \end{align*} Using the formula for the survival signature of the series-parallel system, from the results given in Subsection 2.2, we get the following expressions.
\begin{align*} E(N_i(\tau)|T>\tau)=\frac{1}{\bar{F}_T(\tau)}\sum_{j_1=0}^{n_1-1}\ldots\sum_{j_L=0}^{n_L-1} j_i \binom{n_1}{j_1}\ldots \binom{n_L}{j_L}B(\tau, j_1,\dots,j_L) \end{align*} and { for $n_i\geq 2$} \begin{align}\label{iop} &E(X_i(T)|T\leq \tau)=\frac{n_i}{1-\bar{F}_T(\tau)} \Bigg[\sum_{m_1=1}^{n_1}\ldots\sum_{m_i=1}^{n_i-1}\ldots\sum_{m_L=1}^{n_L} {\sum_{l_1=0}^{m_1}\ldots\sum_{l_L=0}^{m_L}}\bigg[\prod_{j=1,j\neq i}^L\binom{n_j}{m_j}\binom{m_j}{l_j}\bigg]\nonumber\\ &\times\binom{n_i-1}{m_i}\binom{m_i}{l_i}\int_{0}^{\tau}\lim_{\delta\rightarrow 0}\frac{1}{\delta} A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)ds - \sum_{m_1=1}^{n_1}\ldots\sum_{m_i=1}^{n_i-1}\ldots\sum_{m_L=1}^{n_L} {\sum_{l_1=1}^{m_1}\ldots\sum_{l_L=1}^{m_L}}\nonumber\\ &\bigg[\prod_{j=1,j\neq i}^L\binom{n_j}{m_j}\binom{m_j}{l_j}\bigg]\binom{n_i-1}{m_i}\binom{m_i}{l_i}\int_{0}^{\tau}\lim_{\delta\rightarrow 0}\frac{1}{\delta} A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)ds \Bigg]. \end{align} If $n_i=1$, then it is easily deduced that $E(X_i(T)|T\leq \tau)=\frac{{F}_i(\tau)}{1-\bar{F}_T(\tau)}$. \vspace{0.5cm} \begin{Example}\label{ex-series} {\rm Consider a series-parallel system with $L=3$ subsystems and assume that there are $M_1=2, M_2=3$ and $M_3=3$ components from types $1, 2$, and $3$, respectively, to construct the system. Suppose that the joint reliability function of the components' lifetimes follows the multivariate Pareto model given by \begin{align*} &P\big( T_1^{(1)}>t_1^{(1)}, \ldots, T_{n_1}^{(1)}>t_{n_1}^{(1)}, \ldots, T_1^{(L)}>t_1^{(L)}, \ldots, T_{n_L}^{(L)}>t_{n_L}^{(L)}\big)\\ &\hspace{2cm}=\left[ 1+\theta_1\sum_{i=1}^{n_1}t_i^{(1)}+\ldots+\theta_L\sum_{i=1}^{n_L}t_i^{(L)} \right]^{-\alpha} , \end{align*} for $\theta_i>0, i=1, \ldots, L$, and $\alpha>0$.
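This multivariate Pareto model admits a convenient gamma-frailty representation (a standard fact, stated here as an aside): if $Z\sim\mathrm{Gamma}(\alpha,1)$ and, given $Z$, the lifetimes are $T_i=E_i/Z$ with independent $E_i\sim\mathrm{Exp}(\theta_i)$, then integrating out $Z$ recovers exactly the joint survival function above. A Monte Carlo sketch (helper names and parameters are illustrative):

```python
import random

def sample_mv_pareto(thetas, alpha, rng):
    """One draw from the multivariate Pareto model via a gamma frailty:
    T_i = E_i / Z with E_i ~ Exp(theta_i) and Z ~ Gamma(alpha, scale 1)."""
    z = rng.gammavariate(alpha, 1.0)
    return [rng.expovariate(th) / z for th in thetas]

def mc_joint_surv(ts, thetas, alpha, n_sim=100_000, seed=1):
    # Estimate P(T_1 > t_1, ..., T_L > t_L) by simulation.
    rng = random.Random(seed)
    hits = sum(
        all(x > t for x, t in zip(sample_mv_pareto(thetas, alpha, rng), ts))
        for _ in range(n_sim)
    )
    return hits / n_sim
```

The estimate should agree with $[1+\theta_1 t_1+\ldots+\theta_L t_L]^{-\alpha}$ up to Monte Carlo error, which provides a quick check of the model before evaluating the cost expressions.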
The survival copula corresponding to this model is \begin{align*} \hat{C}(u_1,\ldots, u_n)=\left(u_1^{-1/\alpha}+\ldots + u_n^{-1/\alpha}-n+1 \right)^{-\alpha}, \end{align*} and the marginal reliability functions of the components in the subsystems are $\bar{F}_i(t)=(1+\theta_i t)^{-\alpha}$, $i=1,2, \ldots, L$. { First note that for the described system, we have \begin{small} \begin{align*} \int_{0}^{\infty}\lim_{\delta\rightarrow 0}\frac{A_{\mathbf{m}}^{(i)}(t,\delta)}{\delta}dt&={\sum_{j_1=0}^{n_1-m_1}\ldots\sum_{j_i=0}^{n_i-m_i-1}\ldots\sum_{j_L=0}^{n_L-m_L}}(-1)^{j_1+\ldots + j_L}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\times \frac{\theta_i}{\theta_i(m_i+j_i+1)+\sum_{l=1, l\neq i}^{L}\theta_l(m_l+j_l)}. \end{align*} \end{small} By substituting these expressions into \eqref{3}, $E(X_i(T))$, $i=1, 2, 3$, are obtained. } Let ${\mathbf{\theta}}=(0.4, 0.2, 0.3)$, $\mathbf{c}=(1.5, 2, 3)$, $\mathbf{c}^*=(0.3, 0.75, 1)$, $c^{**}=8$, and $\alpha=2$. The values of the mean cost rate function $Cost_3$ are obtained for all combinations of $n_1, n_2$, and $n_3$, such that $n_1\in\{1, 2\}, n_2\in\{1, 2, 3\}$, and $n_3\in\{1,2,3\}$. Also, under the age replacement strategy with $\tau=1$, the values of the mean cost rate function $Cost_4$ are calculated for different values of $n_1, n_2$, and $n_3$.
{ For computing $Cost_4(\tau)$, we use the following simplified expressions: \begin{align*} &E(N_i(\tau)|T>\tau)=\frac{1}{\bar{F}_T(\tau)}\sum_{j_1=0}^{n_1-1}\sum_{j_2=0}^{n_2-1}\sum_{j_3=0}^{n_3-1} j_i \binom{n_1}{j_1} \binom{n_2}{j_2} \binom{n_3}{j_3}{\sum_{b_1=0}^{j_1}\sum_{b_2=0}^{j_2}\sum_{b_3=0}^{j_3}}(-1)^{b_1+ b_2+ b_3}\\ &\binom{j_1}{b_1}\binom{j_2}{b_2}\binom{j_3}{b_3}[1+\theta_1\tau(n_1-j_1+b_1)+\theta_2\tau(n_2-j_2+b_2)+\theta_3\tau(n_3-j_3+b_3)]^{-\alpha},~i=1, 2, 3, \end{align*} where \begin{align*} \bar{F}_T(\tau)&={\sum_{l_1=1}^{n_1}\sum_{l_2=1}^{n_2}\sum_{l_3=1}^{n_3}}{\sum_{i_1=0}^{n_1-l_1}\sum_{i_2=0}^{n_2-l_2}\sum_{i_3=0}^{n_3-l_3}}(-1)^{i_1+ i_2+ i_3}\binom{n_1}{l_1}\binom{n_2}{l_2}\binom{n_3}{l_3}\binom{n_1-l_1}{i_1}\binom{n_2-l_2}{i_2} \binom{n_3-l_3}{i_3}\nonumber\\ &\big(1+\theta_1 \tau(i_1+l_1)+\theta_2 \tau(i_2+l_2)+\theta_3 \tau(i_3+l_3)\big)^{-\alpha}. \end{align*} Also, we obtain $E(X_i(T)|T\leq \tau)$, $i=1, 2, 3$, by placing the following quantity in \eqref{iop}, \begin{small} \begin{align*} &\int_{0}^{\tau}\lim_{\delta\rightarrow 0}\frac{1}{\delta} A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)ds={\sum_{j_1=0}^{n_1-m_1}\ldots\sum_{j_i=0}^{n_i-m_i-1}\ldots\sum_{j_L=0}^{n_L-m_L}}(-1)^{j_1+\ldots + j_L}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\times {\sum_{d_1=0}^{m_1-l_1}\ldots\sum_{d_i=0}^{m_i-l_i}\ldots\sum_{d_L=0}^{m_L-l_L}}(-1)^{d_1+\ldots + d_L}\binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \frac{\theta_i[\big(1+\tau(\sum_{k=1}^L \theta_k(l_k+d_k)) \big)^{-\alpha}-\big(1+\tau(\theta_i(m_i+j_i+1)+\sum_{k=1, k\neq i}^L \theta_k(m_k+j_k))\big)^{-\alpha}]}{\theta_i(m_i-l_i+j_i-d_i+1)+\sum_{k=1, k\neq i}^L \theta_k(m_k-l_k+j_k-d_k)}. \end{align*} \end{small} } The results are given in Table \ref{table222}. Based on the objective function $Cost_3(\cdot)$, the optimal series-parallel system has $n_1=2, n_2=3, n_3=2$ components. 
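Since the expression for $\bar{F}_T(\tau)$ above is a finite inclusion--exclusion sum, it can be implemented directly. The following Python sketch (the function name \texttt{FbarT} is ours) evaluates it and checks two hand-computable special cases with the example's parameter values:

```python
from math import comb
from itertools import product

def FbarT(tau, n, thetas, alpha):
    """Reliability at time tau of a series system of three parallel
    subsystems with multivariate Pareto component lifetimes, computed
    via the inclusion-exclusion formula for Fbar_T(tau) above."""
    n1, n2, n3 = n
    th1, th2, th3 = thetas
    total = 0.0
    # outer sums: exact number of survivors l_k >= 1 in each subsystem
    for l1, l2, l3 in product(range(1, n1 + 1), range(1, n2 + 1), range(1, n3 + 1)):
        # inner sums: inclusion-exclusion indices i_k
        for i1, i2, i3 in product(range(n1 - l1 + 1), range(n2 - l2 + 1), range(n3 - l3 + 1)):
            sign = (-1) ** (i1 + i2 + i3)
            coef = (comb(n1, l1) * comb(n2, l2) * comb(n3, l3)
                    * comb(n1 - l1, i1) * comb(n2 - l2, i2) * comb(n3 - l3, i3))
            base = 1.0 + tau * (th1 * (i1 + l1) + th2 * (i2 + l2) + th3 * (i3 + l3))
            total += sign * coef * base ** (-alpha)
    return total

alpha, tau = 2.0, 1.0
thetas = (0.4, 0.2, 0.3)

# n = (1,1,1): a pure series system, so Fbar_T(tau) = (1 + tau*sum(theta_k))^(-alpha)
direct = (1.0 + tau * sum(thetas)) ** (-alpha)
assert abs(FbarT(tau, (1, 1, 1), thetas, alpha) - direct) < 1e-12

# n = (2,1,1): inclusion-exclusion over the duplicated first component
s = sum(thetas)
direct2 = 2 * (1 + tau * s) ** (-alpha) - (1 + tau * (s + thetas[0])) ** (-alpha)
assert abs(FbarT(tau, (2, 1, 1), thetas, alpha) - direct2) < 1e-12
```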
Since the components of the second type are more reliable than those of the other two types, it is expected that allocating more components to the second subsystem reduces the mean cost rate of system failure. Also, $ n_1 = 2, n_2 = 2, n_3 = 2 $ is the optimal number of components in the subsystems for minimizing the mean cost rate of the age replacement policy. \begin{table}[t] \small \caption{The values of $Cost_3(\mathbf{n})$ and $Cost_4(\mathbf{n})$ in Example \ref{ex-series}}\label{table222} \begin{center} \begin{tabular}{|ccc|c|c|} \hline $n_1$ & $n_2$ & $n_3$ & {$Cost_3(n_1,n_2,n_3)$} & $Cost_4(n_1,n_2,n_3)$ \\ \hline 1 & 1 & 1 & 10.3750 & 18.2784 \\ 1 & 1 & 2 & 9.6460 & 18.6515 \\ 1 & 1 & 3 & 10.0967 & 21.2331 \\ 1 & 2 & 1 & 9.7277 & 18.1732 \\ 1 & 2 & 2 & 8.8857 & 18.1463 \\ 1 & 2 & 3 & 9.1886 & 20.4388 \\ 1 & 3 & 1 & 10.0842 & 19.8532 \\ 1 & 3 & 2 & 9.0989 & 19.5615 \\ 1 & 3 & 3 & 9.3157 & 21.7702\\ 2 & 1 & 1 & 8.7073 & 16.2100\\ 2 & 1 & 2 & 7.9885 & 16.1742\\ 2 & 1 & 3 & 8.2879 & 18.3280 \\ 2 & 2 & 1 & 8.0593 & 15.8127\\ 2 & 2 & 2 & 7.4747 & {\bf 12.2278}\\ 2 & 2 & 3 & 7.4835 & {13.5080} \\ 2 & 3 & 1 & 8.2888 & 17.1959 \\ 2 & 3 & 2 & {\bf 7.4068} & 13.0874\\ 2 & 3 & 3 & 7.5279 & {14.2977 }\\ \hline \end{tabular} \end{center} \end{table} } \end{Example} \section{Conclusions} \quad In this paper, we studied the optimal redundancy allocation in an $n$-component coherent system consisting of heterogeneous components. We assumed that the system is built up of $L$ different types of components, $L\geq 1$, where there are $n_i$ components of type $i$ and $\sum_{i=1}^{L}n_i=n$. We assumed that the components of the different types in the system are statistically dependent. The system reliability function was modeled by the notion of survival signature in terms of a given survival copula function. We further assumed that $M_i$ components are available as spares for the components of type $i$. 
We investigated the number of active redundant components $v_i$, $n_iv_i\leq M_i$, that can be added to each component of type $i$ such that the imposed cost functions are minimized, $i=1,\dots, L$. { We first proposed a cost function in terms of the costs of renewing the failed components and the costs of refreshing the alive components at the time of the system failure. We then proposed a cost-based function in terms of the costs of renewing (refreshing) the failed (alive) components at the system failure time or at a predetermined time $\tau$, whichever occurs first.} In the last part of the paper, we studied, under the settings of the first part, the particular case in which the system is a series-parallel system. We derived formulas for the proposed cost functions and used them to investigate the optimal number of components in each parallel subsystem. The expressions for the proposed cost functions were derived using the mixture representation of the system reliability function based on the notion of survival signature. The results were examined numerically for some particular coherent systems. The proposed mean cost rate functions simultaneously consider the cost of the system and its $MTTF$ (which is directly related to its reliability). Hence this optimization problem can be viewed as a bi-objective reliability-redundancy allocation problem but with a more convenient setup. In this study, we considered the general case in which the components of the same group are exchangeable and the components of different groups are dependent. Although these assumptions are more realistic and hence widen the range of applications of our results, they obviously lead to more complex formulas. An even more realistic case is the situation in which the components in each group are dependent in a more general sense than that of exchangeability. Developing results in this direction may be considered as a future study. 
Here, we assumed active redundancy for the components. Allocating other variants of spares, i.e., cold and warm standby, in coherent systems may be investigated as interesting problems for future studies. \section*{Acknowledgments:} { We would like to thank the Associate Editor and two anonymous reviewers for their constructive comments and suggestions, which led to improvements in the presentation of the paper.} Asadi's research work was performed in IPM Isfahan branch and was in part supported by a grant from IPM (No. 1400620212). \section*{Appendix} {\bf Proof of Theorem \ref{Am}} Define the events $M$, $N^c$ and $L^c$ as follows \begin{align*} M&\equiv\{{T_1^{(1)}>t},\ldots,{T_{m_1}^{(1)}>t},{T_2^{(i)}>t},\ldots,{T_{m_i+1}^{(i)}>t},{T_1^{(L)}>t},\ldots,{T_{m_L}^{(L)}>t} \}\\ N^c &\equiv\{{T_{m_1+1}^{(1)}<t},\ldots, {T_{n_1}^{(1)}<t}, {T_{m_i+2}^{(i)}<t},\ldots, {T_{n_i}^{(i)}<t}, {T_{m_L+1}^{(L)}<t},\ldots, {T_{n_L}^{(L)}<t} \}\\ L^c &\equiv\{t<T_1^{(i)}<t+\delta\}. \end{align*} Then $A_{\mathbf{m}}^{(i)}(t, \delta)$ equals \begin{align}\label{av3} A_{\mathbf{m}}^{(i)}(t, \delta)&=P(M\cap N^c \cap L^c)=P(M)-P(M\cap N)-P(M \cap L)+P(M \cap N \cap L). \end{align} Evidently, we have \begin{align*} &P(M)=\hat{C}(\underbrace{\bar{F}_1(t)}_{m_1},\underbrace{1}_{n_1-m_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+1},\underbrace{1}_{n_i-m_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L},\underbrace{1}_{n_L-m_L}) \end{align*} and \begin{align*} &P(M \cap L)=\hat{C}(\underbrace{\bar{F}_1(t)}_{m_1},\underbrace{1}_{n_1-m_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i}, \bar{F}_i(t+\delta),\underbrace{1}_{n_i-m_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L},\underbrace{1}_{n_L-m_L}). \end{align*} Note that we can write \begin{align*} N=\cup_{j=m_1+1}^{n_1} \{T_{j}^{(1)}>t\}\cup \ldots \cup_{j=m_i+2}^{n_i} \{T_{j}^{(i)}>t\}\cup \ldots \cup_{j=m_L+1}^{n_L} \{T_{j}^{(L)}>t\}. 
\end{align*} Therefore, we can easily see that \begin{small} \begin{align*} &P(M\cap N)\\ &=\sum_{l=1}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{j_1+\ldots +j_L=l}{\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}} \binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\ \ \ \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i+1},\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L}). \end{align*} \end{small} If we subtract $P(M)$ from both sides of this equation, then we have \begin{small} \begin{align}\label{av2} &P(M\cap N)-P(M)\nonumber\\ &=\sum_{l=1}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{j_1+\ldots +j_L=l}{\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\nonumber\\ &\ \ \ \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i+1},\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L})\nonumber\\ &\hspace{7.5cm}-[\hat{C}(\underbrace{\bar{F}_1(t)}_{m_1},\underbrace{1}_{n_1-m_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+1},\underbrace{1}_{n_i-m_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L},\underbrace{1}_{n_L-m_L})]\nonumber\\ &=\sum_{l=0}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{j_1+\ldots +j_L=l}{\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\nonumber\\ &\ \ \ \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i+1},\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L})\nonumber\\ &={\sum_{j_1=0}^{n_1-m_1}\ldots 
\sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}}(-1)^{j_1+\ldots +j_L+1}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\nonumber\\ &\ \ \ \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i+1},\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L}). \end{align} \end{small} Similarly, we have \begin{small} \begin{align*} &P(M\cap N \cap L)\nonumber\\ &=\sum_{l=1}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{j_1+\ldots +j_L=l}{\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\nonumber\\ &\ \ \ \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i}, \bar{F}_i(t+\delta),\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L}).\nonumber \\ \end{align*} \begin{align}\label{av1} &P(M\cap N \cap L)-P(M\cap L)\nonumber\\ &=\sum_{l=0}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{j_1+\ldots +j_L=l}{\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\nonumber\\ &\ \ \ \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i}, \bar{F}_i(t+\delta),\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L})\nonumber\\ &={\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}}(-1)^{j_1+\ldots +j_L+1}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\nonumber\\ &\ \ \ \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i}, \bar{F}_i(t+\delta),\underbrace{1}_{n_i-m_i-j_i-1}, 
\ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L}). \end{align} \end{small} Then replacing \eqref{av2} and \eqref{av1} in \eqref{av3} we have \begin{small} \begin{align*} &A_{\mathbf{m}}^{(i)}(t, \delta)= -{\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}} (-1)^{j_1+\ldots +j_L+1}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\hspace{5cm} \times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i+1},\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L})\\ &+{\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}} (-1)^{j_1+\ldots +j_L+1}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\hspace{4cm}\times \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i}, \bar{F}_i(t+\delta),\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L})\\ &={\sum_{j_1=0}^{n_1-m_1}\ldots \sum_{j_i=0}^{n_i-m_i-1}\ldots \sum_{j_L=0}^{n_L-m_L}} (-1)^{j_1+\ldots +j_L}\binom{n_1-m_1}{j_1}\ldots \binom{n_i-m_i-1}{j_i}\ldots\binom{n_L-m_L}{j_L}\\ &\times \bigg[ \hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i+1},\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L})\\ &\hspace{4cm} -\hat{C}(\underbrace{\bar{F}_1(t)}_{m_1+j_1},\underbrace{1}_{n_1-m_1-j_1},\ldots, \underbrace{\bar{F}_i(t)}_{m_i+j_i}, \bar{F}_i(t+\delta),\underbrace{1}_{n_i-m_i-j_i-1}, \ldots, \underbrace{\bar{F}_L(t)}_{m_L+j_L},\underbrace{1}_{n_L-m_L-j_L})\bigg] \end{align*} \end{small} \qed {\bf Proof of Theorem \ref{Aml}}:\\ Let us define the following events \begin{small} \begin{align*} M&\equiv\{T_1^{(j)}>\tau, \ldots, T_{l_j}^{(j)}>\tau, 
T_{l_j+1}^{(j)}>s, \ldots, T_{m_j}^{(j)}>s, \text{for $j=1,...,L, j\neq i$}\\ &\hspace{4.3cm},T_1^{(i)}>\tau, \ldots, T_{l_i}^{(i)}>\tau, T_{l_i+1}^{(i)}>s, \ldots, T_{m_i+1}^{(i)}>s\}\\ N^c&\equiv\{ T_{m_j+1}^{(j)}<s,\ldots, T_{n_j}^{(j)}<s, \text{for $j=1,...,L, j\neq i$}, T_{m_i+2}^{(i)}<s,\ldots, T_{n_i}^{(i)}<s \}\\ L^c&\equiv\{T_{l_j+1}^{(j)}<\tau, \ldots, T_{m_j}^{(j)}<\tau,\text{for $j=1,...,L$} \}\\ K^c&\equiv\{T_{m_i+1}^{(i)}<s+\delta \}. \end{align*} \end{small} Hence, \begin{small} \begin{align}\label{av5} A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)&=P(M\cap N^c \cap L^c \cap K^c)\nonumber\\ &=P(M)-P(M\cap N)-P(M\cap L)-P(M \cap K)+P(M \cap N \cap L)+P(M \cap N \cap K)\nonumber\\ &+P(M \cap K \cap L)-P(M \cap N \cap L \cap K). \end{align} \end{small} It can be easily shown that \begin{small} \begin{align*} P(M)&= \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j},\underbrace{1}_{n_j-m_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+1}, \underbrace{1}_{n_i-m_i-1}),\\ P(M \cap K)&=P(M\cap \left\{ T_{m_i+1}^{(i)}>s+\delta \right\})\\ &=\hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j},\underbrace{1}_{n_j-m_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i}, \bar{F}_i(s+\delta),\underbrace{1}_{n_i-m_i-1}). 
\end{align*} \end{small} Note also that, the event $N$ (the complement of $N^c$) can be represented as \begin{align*} N=\cup_{j=1,j\neq i}^L \cup_{k_j=m_j+1}^{n_j} \{T_{k_j}^{(j)}>s\}\cup \cup_{k_i=m_i+2}^{n_i} \{T_{k_i}^{(i)}>s\}, \end{align*} Thus, we get \begin{small} \begin{align*} &P(M \cap N)\\ &=\sum_{l=1}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}}\binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\hspace{2cm}\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1}, \underbrace{1}_{n_i-m_i-r_i-1})\\ \end{align*} \begin{align*} &P(M \cap N)-P(M)\\ &=\sum_{l=0}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}}\binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1},\underbrace{1}_{n_i-m_i-r_i-1})\\ &={\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}}(-1)^{r_1+\ldots +r_L+1}\binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\hspace{2cm}\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1},\underbrace{1}_{n_i-m_i-r_i-1}) \end{align*} \end{small} and \begin{small} \begin{align*} &P(M \cap N \cap K)\\ &=\sum_{l=1}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{r_1+\ldots 
+r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\hspace{2cm}\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j},\underbrace{1}_{n_j-m_j-r_j},\text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i},\bar{F}_i{(s+\delta)}, \underbrace{1}_{n_i-m_i-r_i-1})\\ &=\sum_{l=0}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j},\underbrace{1}_{n_j-m_j-r_j},\text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i},\bar{F}_i{(s+\delta)}, \underbrace{1}_{n_i-m_i-r_i-1})+P(M \cap K). \end{align*} \end{small} Similarly, we have $L=\left\{\cup_{j=1}^L \cup_{k_j=l_j+1}^{m_j} \{T_{k_j}^{(j)}>\tau\} \right\}.$ Therefore, we get \begin{small} \begin{align*} P(M \cap L) &=\sum_{y=1}^{\sum_{j=1}^L (m_j -l_j)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j-d_j},\underbrace{1}_{n_j-m_j}\text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i-d_i+1}, \underbrace{1}_{n_i-m_i-1}), \end{align*} \end{small} and \begin{small} \begin{align*} &P(M \cap L \cap K) \\ &=\sum_{l=1}^{\sum_{j=1}^L (m_j-l_j)}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\hspace{2cm}\times 
\hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+r_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j-r_j},\underbrace{1}_{n_j-m_j}, \text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i+r_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i-r_i},\bar{F}_i{(s+\delta)}, \underbrace{1}_{n_i-m_i-1}) \end{align*} \end{small} Also, after some manipulations, one can verify that $P(M \cap N \cap L)$ and $P(M \cap N \cap L \cap K)$ can be written, respectively, as \begin{small} \begin{align*} &P(M \cap N \cap L)\\ &=\sum_{l=1}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times\sum_{y=1}^{\sum_{j=1}^L (m_j-l_j)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j},\text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1-d_i}, \underbrace{1}_{n_i- m_i-r_i-1})\\ &=\sum_{l=0}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times\sum_{y=1}^{\sum_{j=1}^L (m_j-l_j)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j},\text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1-d_i}, \underbrace{1}_{n_i- m_i-r_i-1})+P(M \cap L) \end{align*} \begin{align*} &=\sum_{l=0}^{n-\sum_{j=1}^L m_j 
-1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times\sum_{y=0}^{\sum_{j=1}^L (m_j-l_j)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j},\text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1-d_i}, \underbrace{1}_{n_i- m_i-r_i-1})\\ &+\sum_{l=0}^{n-\sum_{i=1}^L m_i -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1}, \underbrace{1}_{n_i-m_i-r_i-1})+P(M \cap L). 
\end{align*} \end{small} and \begin{small} \begin{align*} &P(M \cap N \cap L \cap K)\\ &=\sum_{l=1}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times\sum_{y=1}^{\sum_{i=1}^L (m_i-l_i)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i-d_i}, \bar{F}_i{(s+\delta)},\underbrace{1}_{n_i-m_i-r_i-1})\\ &=\sum_{l=0}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times\sum_{y=1}^{\sum_{i=1}^L (m_i-l_i)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i-d_i}, \bar{F}_i{(s+\delta)},\underbrace{1}_{n_i-m_i-r_i-1})\\ &+P(M \cap L \cap K) \end{align*} \begin{align*} &=\sum_{l=0}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times\sum_{y=0}^{\sum_{i=1}^L (m_i-l_i)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots 
\binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i-d_i}, \bar{F}_i{(s+\delta)},\underbrace{1}_{n_i-m_i-r_i-1})\\ &+\sum_{l=0}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i}, \bar{F}_i{(s+\delta)},\underbrace{1}_{n_i-m_i-r_i-1})\\ &+P(M \cap L \cap K). \end{align*} \end{small} Finally, by replacing the obtained expressions in \eqref{av5} we have \begin{small} \begin{align*} &A_{\mathbf{m}, \mathbf{l}}^{(i)}(s,s+\delta, \tau)=\sum_{l=0}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\\ &\ldots\binom{n_L-m_L}{r_L}\times\sum_{y=0}^{\sum_{j=1}^L (m_j-l_j)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j},\text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1-d_i}, \underbrace{1}_{n_i- m_i-r_i-1})\\ &-\sum_{l=0}^{n-\sum_{j=1}^L m_j -1}(-1)^{l+1}\underset{r_1+\ldots +r_L=l}{\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ 
&\times\sum_{y=0}^{\sum_{i=1}^L (m_i-l_i)}(-1)^{y+1}\underset{d_1+\ldots +d_L=y}{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i-d_i}, \bar{F}_i{(s+\delta)},\underbrace{1}_{n_i-m_i-r_i-1}) \end{align*} \begin{align*} &={\sum_{r_1=0}^{n_1-m_1}\ldots \sum_{r_i=0}^{n_i-m_i-1}\ldots \sum_{r_L=0}^{n_L-m_L}}(-1)^{r_1+\ldots +r_L} \binom{n_1-m_1}{r_1}\ldots \binom{n_i-m_i-1}{r_i}\ldots\binom{n_L-m_L}{r_L}\\ &\times{\sum_{d_1=0}^{m_1-l_1}\ldots \sum_{d_L=0}^{m_L-l_L}}(-1)^{d_1+\ldots +d_L} \binom{m_1-l_1}{d_1}\ldots \binom{m_L-l_L}{d_L}\\ &\times \left[ \hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j},\text{for $j=1,...,L, j\neq i$},\underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i+1-d_i}, \underbrace{1}_{n_i- m_i-r_i-1})\right.\\ &\left.-\hat{C}(\underbrace{\bar{F}_j(\tau)}_{l_j+d_j},\underbrace{\bar{F}_j(s)}_{m_j-l_j+r_j-d_j},\underbrace{1}_{n_j-m_j-r_j}, \text{for $j=1,...,L, j\neq i$}, \underbrace{\bar{F}_i(\tau)}_{l_i+d_i}, \underbrace{\bar{F}_i(s)}_{m_i-l_i+r_i-d_i}, \bar{F}_i{(s+\delta)},\underbrace{1}_{n_i-m_i-r_i-1})\right]\qed \end{align*} \end{small}
\section{Introduction} Adaptive control is an approach used to deal with systems with uncertain and/or time-varying parameters. In the classical approach to adaptive control, one combines a linear time-invariant (LTI) compensator together with a tuning mechanism to adjust the compensator parameters to match the plant. The first general proofs came around 1980, e.g. see \cite{morse1978}, \cite{Goodwin1980}, \cite{Morse1980}, \cite{Narendra1980} and \cite{Narendra1980_pt2}. However, the original controllers are typically not robust to unmodelled dynamics, do not tolerate time-variations well, have poor transient behavior and do not handle noise/disturbances well, e.g. see \cite{rohrs}. During the following two decades, a good deal of research was carried out to alleviate these shortcomings; a number of small controller design changes were proposed, such as the use of signal normalization, deadzones and $\sigma$-modification, e.g. see \cite{Ioa86}, \cite{kreiss}, \cite{rick2}, \cite{rick}, and \cite{Tsakalis4}; also, simply using projection onto a convex set of admissible parameters turned out to be powerful, e.g. see \cite{hanfu}, \cite{Naik}, \cite{Wen}, \cite{Wenhill} and \cite{ydstie}. However, in general these redesigned controllers provide asymptotic stability and not exponential stability, with no bounded gain on the noise\footnote{An exception is the work of Ydstie \cite{ydstie} where a bounded gain is proven.}; that being said, some of them, especially those using projection, provide a bounded-noise bounded-state property, as well as tolerance of some degree of unmodelled dynamics and/or time-variations. 
Recently, for discrete-time LTI plants, in both the $d$-step ahead control setting \cite{scl17}, \cite{acc19}, \cite{mcss20}, and the pole-placement control setting \cite{ccta17}, \cite{mcss18}, a new approach has been proposed which not only provides exponential stability and a bounded gain on the noise, but also a convolution bound on the exogenous inputs; the resulting convolution bound is leveraged to prove tolerance to a degree of time-variations and to a degree of unmodelled dynamics \cite{ccta20}. As far as the authors are aware, such {\bf linear-like convolution bounds have never before been proven in the adaptive setting}. The key idea is to use the original projection algorithm in conjunction with a restriction of the parameter estimates to a convex set, although this convexity requirement was relaxed in \cite{cdc18}, \cite{mcss18} and \cite{tac20}. The goal of the present paper is to extend these linear-like results in the $d$-step ahead control setting to the more general {\em Model Reference Adaptive Control} (MRAC) problem. Model reference adaptive control is an important approach to adaptive control where a pre-designed stable reference model is used to model the desired closed-loop behavior. Here we build on the results proven for the $d $-step-ahead adaptive control problem in \cite{acc19} and \cite{mcss20}, which is a special case of the more general MRAC problem considered here; because we are seeking stronger closed-loop properties than what is normally proven in the literature, more detailed analysis is needed in dealing with the MRAC setup, since the introduction of the reference model into the analysis brings extra complexity. 
We prove that the desirable linear-like closed-loop properties of {\bf exponential stability, a bounded gain on the noise in every $p$-norm and a convolution bound on the exogenous inputs}, are achieved using a model reference adaptive controller; we also prove a stronger tracking result than what is usually found in the literature. {\bf Notation.} We use standard notation throughout the paper. We denote $\R$, $\Z$, and $\C$ as the set of real numbers, integers and complex numbers, respectively. We will denote the Euclidean-norm of a vector and the induced norm of a matrix by the subscript-less default notation $\|\cdot\|$. Let ${\mathbb S}(\R^{p\times q} ) $ denote the set of $\R^{p\times q} $-valued sequences. Also, $\ellb_{\infty} $ denotes the set of bounded sequences. For a signal $f\in\ellb_\infty $, define the $\infty $-norm by $\|f \|_\infty:= \sup_{t\in \Z} | f(t) | $. For a closed and convex set $\Omega\subset\R^p$, let the function $\mathrm{Proj}_{\Omega} \left\{ \cdot \right\}:\R^{p} \mapsto \Omega $ denote the projection onto the set $\Omega $; because the set $\Omega $ is closed and convex, the function $\mathrm{Proj}_{\Omega} $ is well-defined. If $\Omega\subset\R^p$ is a compact (closed and bounded) set, we define $\|\Omega\|:=\max_{x\in\Omega}\|x\|$. Let $I_{p} $ denote the identity matrix of size $p$. Define the normal vector $\e_j \in\R^{p}$ of appropriate length $p$ as $\e_j:= \bigl[ \underbrace{\begin{matrix}0 & \cdots & 0 \end{matrix}}_{j-1\text{ elements} } \;\; \begin{matrix}1 & 0 & \cdots & 0 \end{matrix} \bigr]^\top.$ Last of all, for a signal $f\in {\mathbb S}(\R ) $ which is sufficiently well-behaved to have a $z$-transform, we let $F(z) $ denote this quantity. 
\section{The Setup} \label{sec2} In this paper we consider the following linear time-invariant (LTI) discrete-time plant: \begin{equation} \label{plant1} \sum_{i=0}^{n} a_i y(t-i) = \sum_{i=0}^{m} b_i u(t-d-i) +w(t), \quad t\in\Z, \end{equation} with $y(t)\in\R $ as the measured output, $u(t)\in\R $ as the control input, and $w(t)\in\R $ as the noise/disturbance. The plant parameters are normalized such that $a_0=1 $, and the system delay is exactly $d$, i.e. $b_0\neq0 $. Associated with this plant are the polynomials ${\mathbf A}(z^{-1}):= \sum_{i=0}^{n} a_i z^{-i} $ and ${\mathbf B}(z^{-1}):= \sum_{i=0}^{m} b_i z^{-i}$, the transfer function $z^{-d} \frac{{\mathbf B}(z^{-1})}{{\mathbf A}(z^{-1})} $, and the plant parameter vector: $$ \theta:= \begin{bmatrix} a_1 & a_2 & \cdots & a_n & b_0 & b_1& \cdots & b_m \end{bmatrix}; $$ we assume that $\theta $ belongs to a known set $ {\cal S}_{ab} \subset\R^{n+m+1}$. Observe that such a plant can be expressed in the $z$-transform domain as \begin{equation} \label{plant2} {\mathbf A}(z^{-1}) Y(z) = z^{-d} {\mathbf B}(z^{-1}) U(z) + W(z). \end{equation} The control objective is closed-loop stability and asymptotic tracking of a given reference signal $y^*(t)\in\R $ generated as the output of a stable reference model; more specifically, given pre-designed polynomials ${\mathbf H}(z^{-1}):= \sum_{i=0}^{n'-d} h_i z^{-i}$ and $ {\mathbf L}(z^{-1}):= 1+\sum_{i=1}^{n'} l_i z^{-i}$ (with $ n'\leq n $), and given a bounded exogenous signal $r(t)\in\R $, we utilize the following reference model expressed in the $z $-transform form: \begin{flalign} {\mathbf L}(z^{-1})Y^*(z)=z^{-d}{\mathbf H}(z^{-1}) R(z). \label{plantRef1} \end{flalign} We assume that the roots of ${\mathbf L}(z^{-1})$ lie in the open unit disk, i.e. the reference model is stable. If we define the tracking error $\varepsilon$ by \begin{flalign} \varepsilon(t):=y(t)-y^*(t), \end{flalign} then the goal is to drive $\varepsilon $ to zero asymptotically. 
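To make the setup concrete, both the plant \eqref{plant1} and the reference model \eqref{plantRef1} can be stepped directly as difference equations (using $a_0=1$ and $l_0=1$); the following Python sketch is ours, with illustrative first-order coefficients that are not taken from the paper.

```python
def plant_step(a, b, d, y_hist, u_hist, w):
    """One step of the plant (with a[0] = 1):
       y(t) = -sum_{i=1..n} a[i]*y(t-i) + sum_{i=0..m} b[i]*u(t-d-i) + w(t).
       y_hist[k] stores y(t-1-k); u_hist[k] stores u(t-1-k)."""
    y = w - sum(a[i] * y_hist[i - 1] for i in range(1, len(a)))
    y += sum(b[i] * u_hist[d - 1 + i] for i in range(len(b)))
    return y

def ref_step(l, h, d, ystar_hist, r_hist):
    """One step of the reference model (with l[0] = 1):
       y*(t) = -sum_{i=1..n'} l[i]*y*(t-i) + sum_{i=0..n'-d} h[i]*r(t-d-i)."""
    ys = -sum(l[i] * ystar_hist[i - 1] for i in range(1, len(l)))
    ys += sum(h[i] * r_hist[d - 1 + i] for i in range(len(h)))
    return ys

# illustrative example: A = 1 - 0.5 z^{-1}, B = 1, d = 1, no noise, u = 0
a, b, d = [1.0, -0.5], [1.0], 1
print(plant_step(a, b, d, y_hist=[2.0], u_hist=[0.0], w=0.0))  # -> 1.0
```

Note that the delay $d$ enters only as an extra shift on the input history, which is exactly why $u(t)$ computed at time $t$ cannot influence $y(\cdot)$ before time $t+d$.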
\begin{remark} Notice that for the $d$-step-ahead control problem the reference model is simply $Y^*(z)=z^{-d} R(z) $. \end{remark} We impose the following assumptions on the set of admissible parameters. \begin{assm} ${\cal S}_{ab}$ is closed and bounded (compact), and for each $\theta\in {\cal S}_{ab}$, the corresponding ${\mathbf B}(z^{-1})$ has roots in the open unit disk and the sign of $b_{0}$ is always the same. \end{assm} \noindent The boundedness requirement on ${\cal S}_{ab}$ is reasonable in practical situations; it is used here to prove uniform bounds and decay rates on the closed-loop behavior. The constraint on the roots of ${\mathbf B}(z^{-1})$ is a requirement that the plant be minimum phase; this is necessary to ensure tracking of bounded reference signals \cite{ld_paper}. Knowledge of the sign of the high-frequency gain $b_0 $ is common in adaptive control \cite{goodwinsin}. \begin{remark} It is implicit in the assumptions that we know the system delay $d$ as well as the upper bounds on the orders of ${\mathbf A}(z^{-1})$ and ${\mathbf B}(z^{-1})$. \end{remark} To proceed, we use a parameter estimator together with an adaptive control law based on the Certainty Equivalence Principle. It is convenient to put the plant into the so-called {\em predictor form}. 
To this end, by long division we can find ${\mathbf F}(z^{-1})=\sum_{i=0}^{d-1} f_i z^{-i} $ and ${\boldsymbol\alpha}(z^{-1})=\sum_{i=0}^{n-1} \alpha_i z^{-i}$ that satisfy the following: \begin{equation*} \frac{{\mathbf L}(z^{-1})}{{\mathbf A}(z^{-1})} = {\mathbf F}(z^{-1}) + z^{-d} \frac{{\boldsymbol\alpha}(z^{-1})}{ {\mathbf A}(z^{-1})}; \end{equation*} if we now define $ {\boldsymbol\beta}(z^{-1})= \sum_{i=0}^{m+d-1} \beta_i z^{-i}:={\mathbf F}(z^{-1}){\mathbf B}(z^{-1}), $ then it is easy to verify that the following is true: \begin{equation} \label{alpha1} z^{-d} \frac{{\mathbf B}(z^{-1})}{{\mathbf A}(z^{-1})} = \frac{ {\boldsymbol\beta}(z^{-1})}{z^{d} {\mathbf L}(z^{-1})-{\boldsymbol\alpha}(z^{-1})} . \end{equation} So comparing \eqref{alpha1} with the plant equation in \eqref{plant2}, we are able to re-write the plant equation as \begin{equation} \label{plantM1} {\mathbf L}(z^{-1})[z^d Y(z)] = {\boldsymbol\alpha}(z^{-1})Y(z) + {\boldsymbol\beta}(z^{-1}) U(z) + \overline W(z), \end{equation} with $ \overline W(z):= z^d {\mathbf F}(z^{-1}) W(z) $. Now define a weighted sum of the system output $\overline y $ by \begin{flalign} \overline y(t):= y(t) + \sum_{j=1}^{n'} l_j y(t-j); \label{ybar} \end{flalign} clearly the $z$-transform of $\overline y(t) $ is $ {\mathbf L}(z^{-1})Y(z) $, so the time-domain counterpart of \eqref{plantM1} in predictor form is \begin{equation} \label{plantM2} \overline y(t+d) = \phi(t)^\top \theta^* + \overline w(t), \end{equation} with \[ \phi(t):= \begin{bmatrix} y(t) \\ y(t-1) \\ \vdots \\ y(t-n+1) \\ u(t) \\ u(t-1) \\ \vdots \\ u(t-m-d+1) \end{bmatrix}, \qquad \theta^*:= \begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_{n-1} \\ \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{m+d-1} \end{bmatrix}. 
\] Let ${\cal S}_{\alpha\beta} \subset \R^{n+m+d} $ denote the set of admissible $\theta^* $ that arise from the original plant parameters which lie in ${\cal S}_{ab} $; it is clear that the associated mapping from ${\cal S}_{ab} $ to ${\cal S}_{\alpha\beta}$ is analytic, so the compactness of ${\cal S}_{ab} $ means that ${\cal S}_{\alpha\beta}$ is compact as well. Furthermore, it is easy to see that $\beta_0=b_0 $. It is convenient that the set of admissible parameters in the new parameter space be convex and closed; so at this point let ${\cal S} \subset \R^{n+m+d} $ be any compact and convex set containing ${\cal S}_{\alpha\beta}$ for which the $(n+1)$th element is never zero; the convex hull of ${\cal S}_{\alpha\beta}$ will do, although it may be more convenient to use a hyperrectangle (for projection purposes). We will show an example of how to obtain such a set in the simulation section. Now define $\overline Y^*(z):= {\mathbf L}(z^{-1})Y^*(z); $ then the model reference control law is given by \begin{equation*} \overline y^*(t+d) = \phi(t)^\top \theta^*. \end{equation*} In the absence of noise, and assuming the controller is applied for all $t\in\Z $, we can show that we have $y(t)=y^*(t) $ for all $t\in\Z $. In our case of unknown parameters, we seek an adaptive version of the control law which is applied after some initial time, i.e. $t\geq t_0 $. \subsection{Initialization} In most adaptive control results, the goal is to prove asymptotic behavior, so the details of the initial condition are unimportant. On the other hand, here we wish to obtain a bound on the transient behavior, so we must proceed carefully. Here we adopt the approach used in the $d$-step ahead control setting of \cite{acc19} and \cite{mcss20}. 
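As a sanity check of the predictor-form construction above, the long division of ${\mathbf L}$ by ${\mathbf A}$ and the product ${\boldsymbol\beta}={\mathbf F}{\mathbf B}$ are easily computed offline; the following Python helper is our own sketch (under the paper's conventions $a_0=1$, $l_0=1$), with illustrative coefficients not taken from the paper.

```python
import numpy as np

def predictor_params(a, l, b, d):
    """Long division L(q)/A(q) in powers of q = z^{-1} (with a[0] = 1):
       L/A = F + q^d * alpha/A,  deg F = d-1,  deg alpha = n-1;
       then beta = F*B.  Returns (f, alpha, beta) as coefficient arrays."""
    n = len(a) - 1
    rem = np.zeros(d + n)
    rem[:len(l)] = l                      # remainder starts as L
    f = np.zeros(d)
    for k in range(d):                    # peel off one power of q per step
        f[k] = rem[k] / a[0]
        rem[k:k + n + 1] -= f[k] * np.asarray(a)
    alpha = rem[d:d + n]                  # leftover, carrying the q^d shift
    beta = np.convolve(f, b)              # F(z^{-1}) B(z^{-1})
    return f, alpha, beta

# illustrative example: A = 1 - 0.5 z^{-1}, L = 1, B = 2, d = 2
f, alpha, beta = predictor_params([1.0, -0.5], [1.0], [2.0], 2)
# f = [1, 0.5], alpha = [0.25], beta = [2, 1]; verify the division identity
# L = F*A + q^d * alpha (as coefficient sequences):
assert np.allclose(np.convolve(f, [1.0, -0.5]) + np.concatenate(([0.0, 0.0], alpha)),
                   [1.0, 0.0, 0.0])
```

The final assertion checks exactly the defining identity of the long division, which is what guarantees that \eqref{plantM2} reproduces the plant \eqref{plant1}.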
With the definition \eqref{ybar} in mind (and $n'\leq n $), observe that if we wish to solve \eqref{plantM2} for $y(\cdot)$ starting at time $t_0$, then it is clear that we need an initial condition of \begin{multline} x_0:= \bigl[ y(t_0) \;\; y(t_0-1) \;\; \cdots \;\; y(t_0-n-d+2) \nonumber \\ \qquad u(t_0) \;\; u(t_0-1) \;\; \cdots \;\; u(t_0-m-2d+2) \bigr]^\top ; \end{multline} observe that this is sufficient information to obtain $\phi(t_0),\phi(t_0-1),\ldots,\phi(t_0-d+1) $. \subsection{Parameter Estimation} \unskip We can re-write the plant equation \eqref{plantM2} as \begin{equation} \label{plantM3} \overline y(t+1) = \phi(t-d+1)^\top \theta^* + \overline w(t-d+1), \;\; t\geq t_0. \end{equation} Given an estimate $\hat\theta(t) $ of $\theta^* $ at time $t$, we define the prediction error by \begin{equation} \label{predict1} e(t+1):=\overline y(t+1) - \phi(t-d+1)^\top \hat\theta(t); \end{equation} this is a measure of the error in $\hat\theta(t) $. A common way to obtain a new estimate is from the solution of the optimization problem \[ \underset{\theta}{\mathrm{argmin}} \left\{ \|\theta-\hat\theta(t) \| : \overline y(t+1)=\phi(t-d+1)^\top \theta \right\}, \] yielding the (ideal) {\bf Projection Algorithm}: \begin{equation} \hat\theta(t+1) = \left\{ \begin{matrix*}[l] \hat\theta(t) & \phi(t-d+1)=0 \\ \hat\theta(t) + \frac{\phi(t-d+1)}{\|\phi(t-d+1)\|^2} e(t+1) & \text{otherwise}; \end{matrix*} \right. \label{orig1} \end{equation} at this point, we can also constrain it to ${\cal S}$ by projection. Of course, if $\|\phi(t-d+1)\|$ is close to zero, numerical problems may occur, so it is the norm in the literature (e.g. \cite{goodwinsin} and \cite{Goodwin1980}) to add a constant to the denominator\footnote{ An exception is \cite{akhtar} where the ideal algorithm \eqref{orig1} is used and Lyapunov stability is proven, but a convolution bound on the exogenous inputs is not proven, and the high-frequency gain is assumed to be known. 
}; however as pointed out in our earlier work \cite{ccta17}, \cite{mcss18} and \cite{mcss20}, this can lead to the loss of exponential stability and a loss of a bounded gain on the noise. As proposed in \cite{ccta17}, \cite{mcss18} and \cite{mcss20}, we turn off the estimation if it is clear that the noise is swamping the estimation error. To this end, with $\delta\in(0,\infty] $, we turn off the estimator if the update is larger than $2\|{\cal S}\| + \delta$ in magnitude; so define \begin{equation} \rho(t) :=\left\{\begin{matrix} 1 & & \text{if } | e(t+1)|< (2\Vert {\cal S}\Vert +\delta)\|\phi(t-d+1)\|\\ 0 & & \text{otherwise}; \end{matrix}\right. \nonumber \end{equation} given the initial condition of $\hat\theta(t_0 ) =\theta_0 \in \R^{m+n+d} $, for $t\geq t_0 $ we define\footnote{If $\delta=\infty $, then we adopt the understanding that $\infty \times 0 = 0$, in which case this formula in \eqref{est1a} collapses into the original version \eqref{orig1}.} \begin{subequations} \label{est1} \begin{flalign} \label{est1a} &\check\theta(t+1) = \hat\theta(t) + \rho(t) {\frac{\phi(t-d+1)}{\|\phi(t-d+1)\|^2}} e(t+1) \\ \label{est1b} &\hat\theta(t+1) = \mathrm{Proj}_{\cal S} \left\{ \check\theta(t+1) \right\}. \end{flalign} \end{subequations} Analyzing the closed-loop system requires a careful examination of the estimation algorithm. First define the parameter error by $\tilde\theta(t) := \hat\theta(t)-\theta^* $. The following result lists properties which are equivalent to those of Proposition 1 in \cite{mcss20} for the $d $-step ahead adaptive control setup. 
\begin{prop} \label{est_prop} For every $t_0\in\Z$, $ x_0\in\R^{n+m+3d-2}$, $ \theta_0 \in{\cal S}$, $ \theta\in{\cal S}_{ab} , w\in\ellb_\infty $, and $\delta\in(0,\infty] $, when the estimator \eqref{est1} is applied to the plant \eqref{plant1}, the following holds: \begin{flalign} &\|\hat\theta(t+1)-\hat\theta(t) \| \leq \rho(t) \frac{|e(t+1) |}{\|\phi(t-d+1) \|}, \quad t\geq t_0, \nonumber \\ &\|\tilde\theta(t) \|^2 \leq \|\tilde\theta(\tau) \|^2 + \sum_{j=\tau}^{t-1} \rho(j) \biggl[ - \frac{1}{2}\frac{e(j+1) ^2 }{ \|\phi(j-d+1) \|^2 } + \nonumber \\ &\qquad\qquad\qquad \frac{2\overline w(j-d+1)^2 }{ \|\phi(j-d+1) \|^2 } \biggr], \qquad t>\tau\geq t_0. \nonumber \end{flalign} \end{prop} \subsection{The Control Law} With the natural partitioning \begin{equation} \hat\theta(t)=: \begin{bmatrix} \hat\alpha_0(t) & \cdots & \hat\alpha_{n-1}(t) & \hat\beta_0(t) & \cdots & \hat\beta_{m+d-1}(t) \end{bmatrix}^\top, \nonumber \end{equation} the {\bf model reference adaptive control law} (based on the Certainty Equivalence principle) is \begin{equation} \overline y^*(t+d) = \phi(t)^\top \hat\theta(t); \nonumber \end{equation} solving this for $u(t)$ and using the reference model \eqref{plantRef1}, we have \begin{flalign} u(t) &= \frac{1}{\hat\beta_0(t)}\biggl[ -\sum_{i=0}^{n-1} \hat\alpha_{i}(t) y(t-i) -\sum_{i=1}^{m+d-1} \hat\beta_{i}(t) u(t-i) + \nonumber \\ &\qquad \sum_{i=0}^{n'-d} h_i r(t-i) \biggr], \quad t\geq t_0. 
\label{control1} \end{flalign} It is convenient for analysis to define an auxiliary tracking error: \begin{flalign} \overline \varepsilon(t):=\overline y(t)-\overline y^*(t); \label{errL} \end{flalign} it is easy to show that \begin{flalign} \overline\varepsilon(t) &= -\phi(t-d)^\top \tilde\theta(t-d) + \overline w(t-d), \;\; t\geq t_0+d, \label{error2} \\ e(t)&= -\phi(t-d)^\top \tilde\theta(t-1) + \overline w(t-d), \;\; t\geq t_0+1, \label{pred_error1} \end{flalign} as well as \begin{equation} \label{error1} \overline\varepsilon(t)=e(t)+ \phi(t-d)^\top \left[\hat\theta(t-1)-\hat\theta(t-d)\right], \quad t\geq t_0+d. \end{equation} Observe that we can compute $\overline \varepsilon (t),\, t\in\{t_0,t_0+1,\ldots,t_0+d-1 \} $, from $x_0,w$ and $y^* $. In the next section we develop several models used in the analysis, after which we state and prove our result. The approach borrows ideas from our previous work on the $d$-step ahead setup \cite{mcss20} and extends them to the {\em Model Reference Adaptive Control} (MRAC) case. \section{The Analysis} \unskip In the pole-placement adaptive control setup of our earlier work \cite{mcss18}, a key closed-loop model consists of an update equation for $\phi(t) $, with the state matrix consisting of controller and plant estimates; this was effective because the characteristic polynomial of this matrix is time-invariant and has all roots in the open unit disk. If we were to apply the same idea in our case here, then the characteristic polynomial would have roots which are time-varying, with some at zero and the rest at the roots of the corresponding naturally defined polynomial $\hat{\boldsymbol \beta}(t,z^{-1} ) $, which is time varying, and it may not have roots in the open unit disk. 
On the other hand, in the $d$-step ahead adaptive control setup of our earlier work \cite{acc19} and \cite{mcss20}, these difficulties were dealt with by constructing three different models for use in the analysis: a model that does not use parameter estimates but is driven by the tracking error, a crude model to bound the size of growth of $\phi(t) $, and a crucial model which is driven by perturbed versions of the present and past values of $\phi(\cdot) $. Here in this paper, which deals with the more general MRAC problem, we construct similar, though not identical, models, but they need more careful analysis than the ones in the $d $-step ahead control case. \subsection{A Good Model} \unskip Here we first obtain an equation which avoids using parameter estimates, though it is driven by the weighted sum of the tracking error $\overline\varepsilon(\cdot) $. By extending the idea from \cite{acc19} and \cite{mcss20}, using the definition of $\varepsilon $ we obtain a formula for $y(t+1) $, and using the plant equation \eqref{plant1} we obtain a formula for $u(t+1) $; then, it is easy to see that there exists a matrix $A_g\in\R^{(n+m+d)\times(n+m+d)} $ (which depends implicitly on $\theta \in {\cal S}_{ab} $) so that the following holds: \begin{flalign} &\phi(t+1) = A_g \phi(t) + \e_1 \varepsilon(t+1) + \nonumber \\ &\quad \frac{1}{b_0} \e_{n+1} \sum_{i=0}^{d} a_{d-i} \varepsilon(t+1+i) + \e_1 y^*(t+1) + \nonumber \\ &\quad \frac{1}{b_0} \e_{n+1} \biggl[ \sum_{i=0}^{d} a_{d-i} y^*(t+1+i) - w(t+d+1) \biggr]. \label{plant_good1} \end{flalign} The characteristic polynomial of $A_g $ is $\tfrac{1}{b_0}z^{n+m+d}{\mathbf B}(z^{-1}) $, so all of its roots are in the open unit disk. The model in \eqref{plant_good1} is similar to the {\em good} model obtained in the analysis in the $d $-step ahead control case in \cite{acc19} and \cite{mcss20} where it is driven by the tracking error $\varepsilon $. 
However in the case considered here we would like to obtain a model which is, instead, driven by $\overline\varepsilon $; this will turn out to be crucial in analyzing the closed-loop behavior. To this end, from \eqref{errL} and the definitions of $\overline y $ and $\overline y^* $, it is easy to see that \begin{equation} {\cal E}(z)=\frac{1}{{\mathbf L}(z^{-1})} \overline{\cal E}(z) ; \label{tfL} \end{equation} so we can represent $\varepsilon(t) $ as the output of an $n' $th-order system driven by $\overline \varepsilon $ as follows: with $ \zeta(t) := \begin{bmatrix} \varepsilon(t) & \varepsilon(t-1) & \cdots & \varepsilon(t-n'+1) \end{bmatrix}^\top, $ and $A_l\in\R^{n'\times n' } $ defined by \[ A_l :=\left[ \begin{smallmatrix} -l_1 & -l_2 & \cdots & -l_{n'-1} & -l_{n'} \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{smallmatrix} \right], \] we have \begin{subequations} \label{xi_sys} \begin{flalign} \zeta(t+1) &= A_l \zeta(t) + \e_1 \overline\varepsilon(t+1) \\ \varepsilon(t) &= \e_1^\top \zeta(t). \label{vare_eqn} \end{flalign} \end{subequations} Note that \eqref{plant_good1} is driven on the RHS by $d+1 $ terms of $\varepsilon(\cdot) $; but from \eqref{vare_eqn} we have \begin{equation} \varepsilon(t+1+j)=\e_1^\top \zeta(t+1+j ),\quad j=0,1,\ldots,d. \label{barerr2} \end{equation} With this in mind, we construct the following $(n'(d+1)) $th-order system driven by $\overline \varepsilon(\cdot) $: \begin{equation} \left[ \begin{smallmatrix} \zeta(t+d+2) \\ \zeta(t+d+1) \\ \vdots \\ \vdots \\ \zeta(t+2) \end{smallmatrix} \right] = \underbrace{ \left[ \begin{smallmatrix} A_l & & & & \\ I_{n'} & & & & \\ & I_{n'} & & & \\ & & \ddots & & \\ & & & I_{n'} & 0 \end{smallmatrix} \right] }_ {=:\widetilde A_l } \underbrace{ \left[ \begin{smallmatrix} \zeta(t+d+1) \\ \zeta(t+d) \\ \vdots \\ \vdots \\ \zeta(t+1) \end{smallmatrix} \right] }_{=:\overline\zeta(t)} + \e_1 \overline \varepsilon(t+d+2) . 
\label{xi_sys_no2} \end{equation} At this point we can combine the models \eqref{plant_good1} and \eqref{xi_sys_no2} together with the linking equation \eqref{barerr2} to obtain a model driven by the exogenous inputs and $\overline \varepsilon $ (rather than $\varepsilon $): with \begin{equation} \eta(t) := \frac{1}{b_0} \e_{n+1} \biggl[ \sum_{i=0}^{d} a_{d-i} y^*(t+1+i) - w(t+d+1) \biggr] + \e_1 y^*(t+1), \label{eta_2} \end{equation} it follows that there exists a matrix $\tilde B\in\R^{(n+m+d)\times(n'(d+1))} $, which depends continuously on $\theta \in {\cal S}_{ab} $, so that we obtain the following $(n+m+d+n'(d+1)) $th-order system: \begin{equation} \begin{bmatrix} \phi(t+1) \\ \overline\zeta(t+1) \end{bmatrix} = \underbrace{ \begin{bmatrix} A_g & \tilde B \\ & \tilde A_l \end{bmatrix} }_{=: \widetilde A_g } \underbrace{ \begin{bmatrix} \phi(t) \\ \overline\zeta(t) \end{bmatrix} }_{=: \overline\phi(t) } + \eta(t) + \e_{n+m+d+1} \overline\varepsilon(t+d+2). \label{goodmodel3} \end{equation} Before presenting an even better model suitable for analysis, we need to analyze a couple of crude models of the closed-loop behavior. \subsection{Crude Models} \unskip At times, we will need to use crude models to bound the size of the growth of $\phi(t) $ and the size of the growth of $\overline\phi(t) $ in terms of the exogenous inputs. Following a similar analysis of the crude model in \cite{acc19} and \cite{mcss20}, we now use \eqref{plant1} to describe $y(t+1)$, and use the control law \eqref{control1} together with the obtained equation for $y(t+1) $ to describe $u(t+1)$; so, we can appropriately define matrices $A_{1}(t)$, $B_{1}(t)$ and $B_{2}(t)$ in terms of $\theta\in{\cal S}_{ab} $ and $\hat\theta(t+1)\in{\cal S} $ so that we have the following {\bf crude model of the behavior of $\phi(\cdot) $}: \begin{flalign} &\phi(t+1)=A_1(t)\phi(t)+ \nonumber \\ & \quad B_1(t) \overline y^*(t+d+1) +B_2(t) w(t+1), \quad t\geq t_0. 
\label{crude1} \end{flalign} Furthermore, we combine \eqref{crude1} and \eqref{xi_sys_no2} to obtain an equation for $\overline\phi(t) $: we can appropriately define matrices $B_3(t),B_4(t) $ so that \begin{flalign} &\overline\phi(t+1) = \begin{bmatrix} A_1(t) & \\ & \widetilde A_l \end{bmatrix} \overline \phi(t) +B_3(t) \overline y^*(t+d+1) + \nonumber \\ &\quad B_4(t) w(t+1) + \e_{n+m+d+1} \overline\varepsilon(t+d+2), \quad t\geq t_0. \label{crude_no2} \end{flalign} Now we want to find a representation for $\overline\varepsilon(t+d+2)$ on the RHS above in terms of $\phi(t) $: from \eqref{error2} we have $\overline \varepsilon(t+d+2)=-\tilde\theta(t+2)^\top\phi(t+2)+\overline w(t+2) $, so we use \eqref{crude1} to find a representation of $\phi(t+2) $ in terms of $\phi(t) $ and substitute into \eqref{crude_no2}; then we can appropriately define matrices $A_2(t), B_5(t), B_6(t), B_7(t), B_8(t) $ so that the following {\bf crude model of the behavior of $\overline\phi(\cdot) $} is obtained: \begin{flalign} &\overline\phi(t+1) = A_2(t) \overline \phi(t) +B_5(t) \overline y^*(t+d+1) + \nonumber \\ &\qquad B_6(t) w(t+1) + B_7(t) \overline y^*(t+d+2) + \nonumber \\ &\qquad B_8(t) w(t+2) + \e_{n+m+d+1} \overline w(t+2), \quad t\geq t_0. \label{crude_no3} \end{flalign} Due to the compactness of ${\cal S}_{ab} $, ${\cal S}_{\alpha\beta} $ and ${\cal S} $, we can obtain the following immediately. 
\begin{prop} \label{prop_crude} There exists a constant $c_1\geq 1 $ such that for every $t_0\in\Z $, $x_0\in\R^{n+m+3d-2} $, $\theta_0 \in{\cal S} $, $\theta\in{\cal S}_{ab} $, $r,w\in\ellb_\infty $, and $\delta\in(0,\infty] $, when the adaptive controller \eqref{est1} and \eqref{control1} is applied to the plant \eqref{plant1}, the following holds: \begin{flalign} &\|A_1(t) \|\leq c_1, \;\; \|A_2(t) \|\leq c_1, \;\; \|B_1(t) \|\leq c_1, \nonumber \\ & \|B_2(t) \|\leq c_1, \;\; \|B_5(t) \|\leq c_1, \;\; \|B_6(t) \|\leq c_1, \nonumber \\ & \|B_7(t) \|\leq c_1, \;\; \|B_8(t) \|\leq c_1, \;\; t\geq t_0. \nonumber \end{flalign} \end{prop} \subsection{A Better Model} \unskip The {\em good} closed-loop model \eqref{goodmodel3} is driven by a future value of $\overline \varepsilon(\cdot) $. We now combine it with the crude model \eqref{crude_no3} to obtain a new model which is driven by a perturbed version of $\overline\phi(t) $, with weights associated with the parameter estimation updates. Before proceeding, motivated by the form of the term in the parameter estimator, we define \begin{flalign} \nu(t):= \rho(t) \frac{\phi(t-d+1)}{\Vert\phi(t-d+1)\Vert^2} e(t+1). 
\nonumber \end{flalign} \begin{prop} \label{good_prop} There exists a constant $c_2 $ so that for every $t_0\in\Z$, $ x_0\in\R^{n+m+3d-2}$, $ \theta_0\in{\cal S}$, $ \theta\in{\cal S}_{ab}$, $ r,w\in\ellb_\infty $, and $\delta\in(0,\infty] $, when the adaptive controller \eqref{est1} and \eqref{control1} is applied to the plant \eqref{plant1}, the following holds: \begin{flalign} \overline\phi(t+1) &= [\widetilde A_g + \Delta(t) ] \overline\phi (t) + \bar\eta(t), \quad t\geq t_0 , \label{good_key1} \end{flalign} with \begin{flalign} \|\Delta(t) \| &\leq c_2 \sum_{j=2}^{d+1} \|\nu(t+j) \|, \label{xx} \end{flalign} and \begin{flalign} &\|\bar\eta(t) \| \leq c_2 \biggl( 1+ \sum_{j=2}^{d+1} \|\nu(t+j) \| \biggr) \biggl[ \sum_{j=1}^{\max\{3,d+1\}} (|y^*(t+j)| + \nonumber \\ &\quad |\overline y^*(t+d+j)| + |w(t+j)|+|\overline w(t+j)|) \biggr]. \label{xx2} \end{flalign} \unskip \end{prop} \begin{proof} See the Appendix. \unskip \end{proof} The result in Proposition \ref{good_prop} looks very similar, though not identical, to the analysis leading up to the main result of \cite{acc19} and \cite{mcss20} on the $d$-step ahead adaptive control problem. Notice that the matrix $\widetilde A_g $ is a function of $\theta\in{\cal S}_{ab} $ and the coefficients of ${\mathbf L}(z^{-1}) $; it lies in a corresponding compact set ${\cal A} \subset \R^{(n+m+d+n'(d+1))\times(n+m+d+n'(d+1)) } $. Furthermore, the eigenvalues of $\widetilde A_g $ are at the origin, the roots of ${\mathbf L}(z^{-1}) $, and the roots of ${\mathbf B}(z^{-1}) $, so they are all in the open unit disk; so we can use classical arguments to prove that for the desired reference model there exist constants $\gamma $ and $\sigma\in(0,1) $ so that for all $\theta\in{\cal S}_{ab} $, we have \begin{equation} \bigl\| \widetilde A_g^k \bigr\| \leq \gamma \sigma^k,\quad k\geq0. 
\label{ATV1} \end{equation} Indeed, we can choose any $\sigma $ larger than \[ \underline\lambda:= \max_{\theta\in{\cal S}_{ab} } \bigl\{|\lambda|: \lambda\in\C, {\mathbf B}(\lambda^{-1})=0 \text{ and } {\mathbf L}(\lambda^{-1})=0 \bigr\}. \] Equations of the form given in \eqref{good_key1} appear in classical adaptive control approaches. While we can view \eqref{good_key1} as a linear time-varying system, we have to keep in mind that $\Delta(t) $ and $\bar\eta(t) $ are implicit nonlinear functions of $\theta $, $\theta_0 $, $x_0$, $r $ and $w $. However, this linear time-varying interpretation is very convenient for analysis; to this end, let $\boldsymbol\Phi_A $ denote the state transition matrix of a general time-varying square matrix $A$. The following result is useful in analyzing our closed-loop system. \begin{prop}[\hspace{-.2pt}\textbf{\cite{kreiss2}}] \label{prop_4} With $\sigma\in(\underline\lambda,1 ) $, suppose that $\gamma\geq 1 $ is such that \eqref{ATV1} is satisfied for every $\widetilde A_g \in {\cal A} $. For every $\mu\in(\sigma,1) $, $g_0\geq0 $, $g_1\geq0 $, and $g_2 \in \bigl[0, \frac{\mu-\sigma}{\gamma} \bigr),$ there exists a constant $\bar\gamma\geq1 $ so that for every $\widetilde A_g\in{\cal A} $ and $\Delta \in {\mathbb S}\bigl(\R^{(n+m+d+n'(d+1))\times(n+m+d+n'(d+1)) } \bigr) $ satisfying \[ \sum_{j=\tau}^{t-1} \|\Delta(j) \| \leq g_0 + g_1 (t-\tau)^{\frac{1}{2}}+g_2(t-\tau), \;\; \bar t \geq t>\tau \geq \underline t, \] we have $\|{\boldsymbol\Phi}_{\widetilde A_g + \Delta }(t,\tau) \| \leq \bar\gamma \mu^{t-\tau}, \quad \bar t \geq t>\tau \geq \underline t.$ \end{prop} Next, we present the main result proving that the closed-loop system enjoys very desirable linear-like behavior. 
\section{The Main Result} \begin{theorem} \label{thm1} For every $\delta\in(0,\infty] $ and $\lambda\in(\underline\lambda,1 ) $, there exists a constant $c >0 $ so that for every $t_0\in \Z $, $\theta \in {\cal S}_{ab} $, $r,w \in \ellb_\infty $, $ \theta_0 \in {\cal S} $, and plant initial condition $x_0 $, when the adaptive controller \eqref{est1} and \eqref{control1} is applied to the plant \eqref{plant1}, the following bound holds: \begin{equation} \|\phi(t)\| \leq c \lambda^{t-t_0} \|x_0 \| + \sum_{j=t_0}^t c \lambda^{t-j } (|r(j) | + |w(j)| ), \quad t\geq t_0. \label{th_bd1} \end{equation} Furthermore, if $w=0 $, then \begin{flalign} \sum_{k=t_0+d}^\infty \varepsilon(k)^2 &\leq c (\|x_0 \|^2 + \|r \|_\infty^2 ). \label{errBD2} \end{flalign} \end{theorem} \begin{remark} The above result shows that the closed-loop system experiences linear-like behavior. There is a uniform exponential decay bound on the effect of the initial condition, and a convolution bound on the effect of the exogenous inputs. This implies that the system has a bounded gain (from $w$ and $r$ to $y$) in every $p$-norm. For example, for $p = \infty $, we see from the above bound that \begin{equation} \|\phi(t) \| \leq \tfrac{c}{1-\lambda} \left( \lambda^{t-t_0} \|x_0 \| + \|w\|_\infty+ \|r\|_\infty \right), \quad t\geq t_0. \nonumber \end{equation} \end{remark} \begin{remark} In the absence of noise, most adaptive controllers guarantee that the tracking error is square summable, e.g. see \cite{goodwinsin}. Here we prove a stronger result, namely, an upper bound on the $2$-norm in terms of the size of $x_0 $ and $r$. \end{remark} \begin{proof}[{\bf Proof of Theorem \ref{thm1}}] Fix $\delta\in(0,\infty] $ and $\lambda\in(\underline\lambda,1) $. Let $t_0\in \Z $, $\theta \in {\cal S}_{ab} $, $r,w \in \ellb_\infty $, $\theta_0 \in {\cal S} $, and $x_0\in\R^{n+m+3d-2} $ be arbitrary. Now choose $\sigma\in(\underline\lambda,\lambda ) $. 
We will analyze \eqref{good_key1} to obtain a bound on $\overline\phi(t) $. Before proceeding, we see that there exists $\gamma_1 $ so that for every $\widetilde A_g \in {\cal A} $, $\|\widetilde A_g^k \| \leq \gamma_1 \sigma^k, k\geq0.$ Also, we need to compute a bound on the sum of $\|\Delta(\cdot ) \| $; since there are $d$ terms on the RHS of \eqref{xx}, by the Cauchy-Schwarz inequality we obtain \begin{flalign} \sum_{j=\tau}^{t-1} \|\Delta(j) \| &\leq d c_2 \sum_{j=\tau+1 }^{t+d-1} \|\nu(j+1) \| \nonumber \\ &\leq d^2 c_2 \left[ \sum_{j=\tau+1 }^{t+d-1} \|\nu(j+1) \|^2 \right]^{\frac{1}{2}} (t-\tau +d-1 )^{\frac{1}{2}}, \nonumber \\ &\qquad\quad t>\tau \geq t_0; \end{flalign} but $(t_2-t_1+d-1)^\frac{1}{2} \leq d(t_2-t_1)^\frac{1}{2}, t_2>t_1 $, so incorporating this and the definition of $\nu(\cdot) $ we have \begin{flalign} \sum_{j=\tau}^{t-1} \|\Delta(j) \| &\leq d^3 c_2 \left[ \sum_{j=\tau+1 }^{t+d-1} \|\nu(j+1) \|^2 \right]^{\frac{1}{2}} (t-\tau)^{\frac{1}{2}}, \nonumber \\ &= d^3 c_2 \left[ \sum_{j=\tau+2 }^{t+d} \rho(j) \frac{|e(j+1)|^2}{\|\phi(j-d+1) \|^2} \right]^{\frac{1}{2}} (t-\tau)^{\frac{1}{2}}, \nonumber \\ &\qquad\quad t>\tau \geq t_0. \label{delta_bd1} \end{flalign} Also for ease of notation, let us define \begin{equation} \tilde w(t) := \sum_{j=1}^{\max\{3,d+1\}} ( |y^*(t+j)| + |\overline y^*(t+d+j)| + |w(t+j)|+|\overline w(t+j)|). \nonumber \end{equation} Now we consider the closed-loop system behavior. To proceed, we partition the timeline into two parts: one in which the noise $\overline w(\cdot) $ is small versus $\phi(\cdot) $ and one where it is not. 
To this end, with $\mathfrak{v}>0 $ to be chosen shortly, define \[ S_{\text{\sf good}}:= \left\{ j\geq t_0 : \phi(j) \neq0 \text{ and } \tfrac{|\overline w(j) |^2 }{\|\phi(j) \|^2 } < \mathfrak{v} \right\}, \] \[ S_{\text{\sf bad}}:= \left\{ j\geq t_0 : \phi(j) =0 \text{ or } \tfrac{|\overline w(j) |^2 }{\|\phi(j) \|^2 } \geq \mathfrak{v} \right\}; \] clearly $\{j\in\Z : j\geq t_0 \} =S_{\text{\sf good}} \cup S_{\text{\sf bad}}$.\footnote{If the noise is zero, then $S_{\text{\sf good}}$ may be the whole timeline $[t_0,\infty) $.} Observe that this partition implicitly depends on $\theta\in{\cal S}_{ab} $, as well as the initial conditions. We will easily obtain bounds on the closed-loop system behavior on $S_{\text{\sf bad}}$; we will apply Proposition \ref{prop_4} to analyze the behavior on $S_{\text{\sf good}}$. Before proceeding, we partition the timeline into intervals which oscillate between $S_{\text{\sf good}}$ and $S_{\text{\sf bad}}$. To this end, it is easy to see that we can define a (possibly infinite) sequence of intervals of the form $[k_i,k_{i+1}) $ satisfying: (i) $k_0=t_0 $; (ii) $[k_i,k_{i+1}) $ either belongs to $S_{\text{\sf good}}$ or $S_{\text{\sf bad}}$; and (iii) if $k_{i+1}\neq \infty $ and $[k_i,k_{i+1}) $ belongs to $S_{\text{\sf good}}$ (respectively, $S_{\text{\sf bad}}$), then the interval $[k_{i+1},k_{i+2}) $ must belong to $S_{\text{\sf bad}}$ (respectively, $S_{\text{\sf good}}$). Now we analyze the closed-loop behavior on each interval. \noindent{\bf Case 1}: The behavior on $S_{\text{\sf bad}}$. Let $j\in [k_i,k_{i+1})\subset S_{\text{\sf bad}} $ be arbitrary. In this case, we have either $\phi(j)=0 $ or $\tfrac{|\overline w(j) |^2}{\|\phi(j) \|^2 } \geq \mathfrak{v} $. 
In either case, we have \begin{flalign} \|\phi(j) \|\leq \tfrac{1}{\sqrt{\mathfrak{v}} } |\overline w(j) |, \quad j\in[k_i,k_{i+1} ); \label{bad_bd1} \end{flalign} then from the crude model \eqref{crude1} and Proposition \ref{prop_crude}, we have \begin{equation} \|\phi(j+1) \| \leq \tfrac{c_1}{\sqrt{\mathfrak{v}} } |\overline w(j) | + c_1 |\overline y^*(j+d+1) | + c_1 |w(j+1) |, \;\; j\in[k_i,k_{i+1}); \nonumber \end{equation} combining this with \eqref{bad_bd1} yields: \begin{equation} \|\phi(j) \| \leq \left\lbrace \begin{matrix*}[l] \tfrac{1}{\sqrt{\mathfrak{v}} } |\overline w(j) |, & j=k_i \\ c_1 \left(\tfrac{1}{\sqrt{\mathfrak{v}} }+1\right) [|\overline w(j-1) | + \\ \quad |\overline y^*(j+d) | + |w(j) |], &j=k_i+1,\ldots,k_{i+1}. \end{matrix*} \right. \nonumber \end{equation} \noindent{\bf Case 2}: The behavior on $S_{\text{\sf good}}$. Suppose that $[k_i,k_{i+1} ) $ lies in $S_{\text{\sf good}}$; notice that the bound on $\|\Delta(t) \| $ in \eqref{delta_bd1} occasionally extends outside $S_{\text{\sf good}}$; so we handle the first $d+1$ and last $d+1$ time steps separately. To this end, first suppose that $k_{i+1}-k_i\leq 2(d+1) $; then using the crude model on $\phi $ in \eqref{crude1} and Proposition \ref{prop_crude}, it is easy to show that if we define $\gamma_2:= \left(\frac{c_1}{\lambda}\right)^{2d+2} $, then we have \begin{flalign} &\|\phi(t) \| \leq \gamma_2 \lambda^{t-k_i} \|\phi(k_i) \| + \nonumber \\ & \sum_{j=k_i}^{t-1} \gamma_2 \lambda^{t-j-1} (|\overline y^*(j+d+1)|+|w(j+1)| ), \;\; t\in[k_i,k_{i+1}]. \label{good_bd4} \end{flalign} Now suppose that $k_{i+1}-k_i> 2(d+1) $. Define $\overline k_i:= k_i+d+1 $ and $\underline k_{i+1}:= k_{i+1}-d-1 $. 
By the second part of Proposition \ref{est_prop} and using the facts that $\|\tilde \theta(t) \|\leq 2\|{\cal S} \| $ and that $\frac{|\overline w(j)|^2 }{\|\phi(j) \|^2}< \mathfrak{v} $ for $j\in[k_i,k_{i+1} ) $, we obtain from \eqref{delta_bd1}: \begin{flalign} \sum_{j=\tau}^{t-1} \|\Delta(j)\| &\leq d^3c_2 \left[ 8\|{\cal S} \|^2+ 4\mathfrak{v}(t-\tau+d-1) \right]^\frac{1}{2} (t-\tau)^\frac{1}{2} , \nonumber \\ & \qquad \underline k_{i+1} \geq t>\tau \geq \overline k_i. \nonumber \end{flalign} If we restrict $\mathfrak{v}\leq1 $, and define $\gamma_3:= d^3c_2\bigl( \bigl[ 8\|{\cal S} \|^2+ 4(d-1) \bigr]^\frac{1}{2} + 2 \bigr) $, then we obtain \begin{equation} \sum_{j=\tau}^{t-1} \|\Delta(j)\| \leq \gamma_3 (t-\tau)^\frac{1}{2} + \gamma_3 \mathfrak{v}^\frac{1}{2} (t-\tau), \quad \underline k_{i+1} \geq t>\tau \geq \overline k_i. \nonumber \end{equation} We now apply Proposition \ref{prop_4}: set $g_0=0, g_1=\gamma_3, g_2=\gamma_3\mathfrak{v}^\frac{1}{2}, \mu=\lambda, \gamma=\gamma_1 $; we need $\gamma_3\mathfrak{v}^\frac{1}{2}< \tfrac{\lambda-\sigma}{\gamma_1} $, so if we set $\mathfrak{v}:= \min \left\{1, \tfrac{1}{2} \bigl(\tfrac{\lambda-\sigma}{\gamma_3\gamma_1}\bigr)^2 \right\} $, then from Proposition \ref{prop_4} we see that there exists a constant $\gamma_4 $ so that the state transition matrix $\boldsymbol\Phi_{\widetilde A_g+\Delta }(t,\tau) $ satisfies \begin{equation} \|\boldsymbol\Phi_{\widetilde A_g+\Delta }(t,\tau)\| \leq \gamma_4 \lambda^{t-\tau}, \quad \underline k_{i+1} \geq t>\tau \geq \overline k_i. 
\label{state_bd1} \end{equation} Before solving \eqref{good_key1}, we obtain a bound on $\bar\eta(t) $; from Proposition \ref{est_prop}, we see that $\|\nu(t)\| \leq \sqrt{8\|{\cal S} \|^2+4\mathfrak{v} } ,\underline k_{i+1} \geq t \geq \overline k_i $, so there exists a constant $\gamma_5 $ so that $\|\bar\eta(t) \| \leq \gamma_5 \tilde w(t), \; \underline k_{i+1} \geq t \geq \overline k_i.$ Then, using the bound in \eqref{state_bd1} to solve \eqref{good_key1} we see that there exists a constant $\gamma_6 $ so that \begin{equation} \|\overline \phi(t) \| \leq \gamma_6 \lambda^{t-k_i} \|\overline \phi(\overline k_i) \| + \sum_{j=k_i}^{t-1} \gamma_6 \lambda^{t-j-1} \tilde w(j), \quad t\in[\overline k_i,\underline k_{i+1}]. \label{good_bd5} \end{equation} We want to have a bound on the whole interval $[k_i,k_{i+1} ) $, and we would like it to be in terms of $\phi $ instead of $\overline\phi $. First, we use the crude model on $\overline\phi(\cdot) $ in \eqref{crude_no3} and Proposition \ref{prop_crude} to find bounds on $\|\overline\phi(t) \| $ for $t\in[k_i, \overline k_i ] $ and for $t\in[\underline k_{i+1}, k_{i+1} ] $, and on $\|\overline\phi(\overline k_i) \|$ in terms of $\|\overline\phi(k_i) \|$ and combine them with \eqref{good_bd5} to see that there exists a constant $\gamma_7 $ so that \begin{equation} \|\overline \phi(t) \| \leq \gamma_7 \lambda^{t-k_i} \|\overline \phi(k_i) \| + \sum_{j=k_i}^{t-1} \gamma_7 \lambda^{t-j-1} \tilde w(j), \quad t\in[ k_i, k_{i+1}]. \label{good_bd6} \end{equation} Next we obtain a bound in terms of $\phi$. It is obvious that $ \|\phi(t) \|\leq\|\overline\phi(t) \|,t\in[ k_i, k_{i+1}] $. 
Then from the definitions of $\overline\phi(\cdot),\overline\zeta(\cdot) $ and $\zeta(\cdot) $, it is easy to see that there exists a constant $c_3 $ such that $\|\overline \phi(k_i) \| \leq c_3 \sum_{j=1}^{d+1} \|\phi(k_i+j) \| + c_3 \sum_{j=-n'+2}^{d+1} |y^*(k_i+j)|;$ we use the crude model on $\phi(\cdot) $ in \eqref{crude1} and Proposition \ref{prop_crude} to obtain bounds on $\|\phi(k_i+j) \|,j=1,2,\ldots,d+1 $, in terms of $\|\phi(k_i) \|$. Incorporating all of the above into \eqref{good_bd6} and after simplification, we see that there exists a constant $\gamma_8 $ so that \begin{flalign} &\| \phi(t) \| \leq \gamma_8 \lambda^{t-k_i} \| \phi(k_i) \| + \nonumber \\ & \sum_{j=k_i}^{t-1} \gamma_8 \lambda^{t-j-1} \tilde w(j)+ \gamma_8 \sum_{q=1}^{n'-2} |y^*(k_i-q)|, \quad t\in[ k_i, k_{i+1}], \nonumber \end{flalign} which we combine with \eqref{good_bd4} to conclude Case 2. We now glue together the bounds obtained on $S_{\text{\sf good}}$ and $S_{\text{\sf bad}}$ to obtain a bound which holds on all of $[t_0,\infty )$ using the identical argument used in gluing together similar bounds in the proof of Theorem 1 of \cite{mcss20}. Last of all, we simplify the resulting quantity by using causality arguments to remove extraneous terms, and end up with the bound in \eqref{th_bd1}. Finally we prove asymptotic tracking. Suppose that $w=0$; then we see from the estimation algorithm that in this case, $\rho(t)=1 \Leftrightarrow \|\phi(t-d) \|\neq0 $. So from \eqref{error1}, the first part of Proposition \ref{est_prop}, and the Cauchy-Schwarz inequality, we have \[ \rho(t-1) \frac{\overline \varepsilon(t)^2}{\|\phi(t-d) \|^2} \leq d \sum_{j=0,\|\phi(t-j-d) \|\neq0}^{d-1} \frac{e(t-j)^2}{\|\phi(t-j-d) \|^2}; \] so by the second part of Proposition \ref{est_prop} we can see that \begin{flalign} &\sum_{t=t_0+d,\|\phi(t-d) \|\neq0 }^\infty \frac{\overline \varepsilon(t)^2}{\|\phi(t-d) \|^2} \leq 8d^2\|{\cal S}\|^2. 
\nonumber \end{flalign} Then it is easy to see by the boundedness of $\phi $ proven in \eqref{th_bd1} and by the fact that $\overline \varepsilon(t)=0 $ when $\phi(t-d)=0 $, that \begin{equation} \sum_{t=t_0+d}^\infty \overline \varepsilon(t)^2 \leq 8d^2\|{\cal S}\|^2\max_{j\geq t_0-d} \|\phi(j) \|^2 \leq \left(\tfrac{4d\|{\cal S} \|c}{1-\lambda}\right)^2 [ \|x_0 \|^2+ \|r\|_\infty^2]. \nonumber \end{equation} But $\varepsilon $ and $\overline\varepsilon $ are related by a stable transfer function (see \eqref{tfL}), so if we apply Parseval's Theorem then we obtain a bound on $\varepsilon $ of the form given in \eqref{errBD2}. \end{proof} \section{Robustness} It turns out that the convolution bounds proven in Theorem \ref{thm1} will guarantee robustness to a degree of time-variations and unmodelled dynamics. We have shown that the corresponding model reference adaptive controller provides a convolution bound with gain $c$ and decay rate $\lambda$ when applied to the time-invariant nominal plant; we can apply Theorems 1 and 2 of \cite{ccta20} to show that, in the presence of a degree of time-variation (slow enough parameter time-variations and/or occasional jumps) and small enough unmodelled dynamics, the controller still provides linear-like properties. Furthermore, we can also obtain bounds on the average tracking error both in the case of no noise under slow time-variations, as well as in the noisy case, by adapting arguments used in the proofs of Theorems 4 and 5 of \cite{mcss20}. \section{A Simulation Example} \unskip We now provide a simulation example to illustrate the results of this paper. Consider the time-varying plant \begin{multline*} y(t + 1) = -a_1(t)y(t) - a_2(t)y(t - 1) + \\ b_0(t)u(t) + b_1(t)u(t - 1) + w(t), \end{multline*} with $a_1(t) \in [-2, 2], a_2(t) \in [-2, 2], b_0(t) \in [\tfrac{3}{2}, 5]$ and $b_1(t) \in [-1, 1]$. Note here that the delay $d=1$. 
We want to apply an adaptive controller such that the closed-loop system follows the behavior of a reference model \eqref{plantRef1} with $n'=2 $; following the discussion at the beginning of Section \ref{sec2}, we transform the plant into the predictor form by way of long division: we see that $\alpha_0(t)=a_1(t)-l_1$, $ \alpha_1(t)=a_2(t)-l_2$, $\beta_0(t)=b_0(t)$, and $\beta_1(t)=b_1(t)$. We choose a reference model represented by ${\mathbf L}(z^{-1}):=1-\frac{1}{2}z^{-2}, $ and $ {\mathbf H}(z^{-1}):=\frac{1}{2},$ which has poles in the open unit disk as required; then we can set \begin{multline*} {\cal S}:= {\cal S}_{\alpha\beta}= \biggl\{ \left[ \begin{smallmatrix} \alpha_0\\\alpha_1\\\beta_0\\\beta_1 \end{smallmatrix} \right] \in\R^4 : \alpha_0\in[-2,2], \alpha_1\in[-\tfrac{5}{2},\tfrac{3}{2}], \\ \beta_0\in [\tfrac{3}{2}, 5], \beta_1\in [-1, 1] \biggr\}. \end{multline*} We apply the adaptive controller \eqref{est1} and \eqref{control1} (with $\delta =\infty $) to this plant with the plant parameters given by: $a_1(t) = 2 \cos(\tfrac{1}{100}t)$, $a_2(t) = -2 \sin(\tfrac{1}{300}t)$, $b_0(t) = \tfrac{13}{4} - \tfrac{7}{4} \cos(\tfrac{1}{125}t)$, and $b_1(t) = -\cos(\tfrac{1}{50}t)$, and the disturbance given by: $$ w(t) = \left\{ \begin{smallmatrix*}[l] \tfrac{1}{10} \cos(10t), & 200 < t \leq 500 \\ 0, & \text{otherwise}. \end{smallmatrix*} \right. .$$ We set $r(t) $ to be a unit square wave with a period of $200$ steps. We set $y(-1) = y(0) = -1, u(-1) = 0$, and the initial parameter estimates to the midpoint of the respective intervals. Figures \ref{fig1} and \ref{fig2} show the results. The controller does a good job of tracking when there is no disturbance; the tracking degrades when the disturbance enters the system, but tracking performance improves when the disturbance returns to zero. One can also see that the estimator tracks the time-varying parameters fairly well.
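For readers who wish to reproduce the signals of this example, the plant is straightforward to simulate. The sketch below is purely illustrative: it uses the parameter trajectories, disturbance, and square-wave reference stated above, but it replaces the adaptive law \eqref{est1}, \eqref{control1} with an idealized one-step-ahead controller that knows the true parameters and targets the reference directly, so it illustrates the plant and exogenous signals only (not the estimator); with this idealization the tracking error reduces exactly to the disturbance.

```python
import numpy as np

# Plant: y(t+1) = -a1(t)y(t) - a2(t)y(t-1) + b0(t)u(t) + b1(t)u(t-1) + w(t)
a1 = lambda t: 2 * np.cos(t / 100)
a2 = lambda t: -2 * np.sin(t / 300)
b0 = lambda t: 13 / 4 - 7 / 4 * np.cos(t / 125)      # stays in [3/2, 5], never zero
b1 = lambda t: -np.cos(t / 50)
w  = lambda t: 0.1 * np.cos(10 * t) if 200 < t <= 500 else 0.0
r  = lambda t: 1.0 if (t // 100) % 2 == 0 else -1.0  # unit square wave, period 200

T = 1000
y = np.zeros(T + 2); u = np.zeros(T + 1); err = np.zeros(T)
y[0] = y[1] = -1.0                                   # y(-1) = y(0) = -1
for t in range(T):
    k = t + 1                                        # y[k] stores y(t), u[k] stores u(t)
    # idealized one-step-ahead control using the TRUE parameters (not the adaptive law):
    # choose u(t) so that y(t+1) = r(t+1) in the absence of the disturbance
    u[k] = (r(t + 1) + a1(t) * y[k] + a2(t) * y[k - 1] - b1(t) * u[k - 1]) / b0(t)
    y[k + 1] = (-a1(t) * y[k] - a2(t) * y[k - 1]
                + b0(t) * u[k] + b1(t) * u[k - 1] + w(t))
    err[t] = abs(y[k + 1] - r(t + 1))                # equals |w(t)| by construction
```

With exact parameter knowledge the error is zero outside the disturbance window and equals $|w(t)|\leq\tfrac{1}{10}$ inside it; the adaptive controller of the paper, of course, must learn the parameters, which is why its transient behavior in Figure \ref{fig1} is richer.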
\begin{figure} \center\includegraphics[trim=30 10 30 20,clip,width=.73\columnwidth]{mrac_ex1a} \vspace{-1em} \caption{ The first plot shows both $y(t)$ (solid) and $y^*(t) $ (dashed); the second plot shows the control input $u(t)$. } \vspace{-1.2em} \label{fig1} \end{figure} \begin{figure} \center\includegraphics[trim=30 20 30 20,clip,width=.73\columnwidth]{mrac_ex1b_v2} \vspace{-1.5em} \caption{ The plots show the parameter estimates $\hat\theta(t) $ (solid) as well as the actual parameters $\theta^* $ (dashed). } \vspace{-1.5em} \label{fig2} \end{figure} \vspace{-.7em} \section{Conclusion} \unskip In this paper, we show that a model reference adaptive controller using a parameter estimator based on the original projection algorithm provides desirable {\bf linear-like closed-loop properties}: exponential stability, a bounded gain on the noise in every $p$-norm, and a convolution bound on the exogenous inputs; such properties have not appeared in the literature, except in our earlier work. This can be leveraged to directly prove tolerance to a degree of parameter time-variations and a degree of unmodelled dynamics. Also, in the noise-free case, asymptotic tracking is achieved with an explicit upper bound on the sum of squares of the tracking error. We would like to extend these linear-like results to the case when the sign of the high-frequency gain is unknown by using multiple estimators along the lines of \cite{cdc18} and \cite{tac20}. At present, our proof does not extend to that case since the key transition matrix is not deadbeat, but we are working on an alternative proof approach. \vspace*{-.3cm}
\section{ Introduction } \label{sec:1} \indent\indent The description of hadron physics starting from quantum chromodynamics (QCD) -- the theory of strong interactions -- is one of the most challenging problems in medium energy physics of today. The difficulty in the description of the low energy sector of QCD lies in the large effective coupling constant, which precludes a perturbative treatment. Various non-perturbative methods shed some light on its ground state properties: the operator product expansion~\cite{wi69} improves the perturbative expansion by the inclusion of effects from vacuum condensates in Green's functions, QCD sum rules~\cite{shi79} relate these condensates to hadron masses, variational methods~\cite{ole81} attempt to evolve a qualitative picture of the gluonic vacuum, the semi-classical evaluation of the QCD functional integral leads to instanton physics~\cite{ca78}, and the reformulation of QCD in terms of the field strength~\cite{ha77,sch90,re91,la91} provides some insight into the vacuum properties of the quark sector. These analytical approaches can be contrasted with numerical investigations of lattice QCD~\cite{wi74,eng87}. The latter approach is not constrained by any approximation, but is restricted by the capacities of computers. One of the most striking features of low energy QCD is the absence of quarks in the asymptotic states. This quark confinement is explained in lattice QCD by a linearly rising potential between two static quarks. This behavior is also confirmed by considerations within the $1/N_c$-expansion, $N_c$ being the number of colors~\cite{mi83}. A natural explanation of the linear confining potential is provided by the dual superconductor picture~\cite{ba91}. As pointed out by 't Hooft, in a certain gauge, the non-abelian Yang-Mills theories possess monopole configurations~\cite{tHooft}. If these monopoles condense, a dual Meissner effect occurs, expelling electric field strength out of the vacuum.
This implies that the electric field between two static quarks is squeezed into a flux tube, which subsequently gives rise to the linear confining potential. A recent SUSY model of Seiberg and Witten suggests that monopole condensation could be the mechanism of confinement in certain Yang-Mills theories~\cite{sei94,sei94b}. There have been suggestions that the same phenomenon occurs in the Yang-Mills sector of QCD proper \cite{qcd}. In this development, it is found that the confinement of the quarks is intimately related to the spontaneous breakdown of chiral symmetry~\cite{sei94b}. Despite the recent progress in understanding the ground state properties of QCD, the description of hadron properties is still feasible only with effective models. These models include aspects of QCD by incorporating its symmetries. The most important symmetry constraining the variety of hadron models is chiral symmetry. For QCD ($N_c=3$) and for zero current masses, the quark sector is invariant under global $SU(N_f)_L \times SU(N_f)_R$ transformations of the left- and right-handed quarks, where $N_f$ is the number of quark flavors. The vacuum breaks this symmetry down to the diagonal $SU(N_f)$.\footnote{For $SU(2)$ Yang-Mills theory, the chiral symmetry group is much larger, i.e.\ $SU(2N_f)$, since the $SU(2)$ gauge group is pseudoreal~\cite{sm95}. More on this later.} Most effective theories of mesons and nucleons contain many parameters, restricting the predictive power of these models. It is therefore desirable to ``derive'' the effective hadron theory from an underlying effective quark model in order to obtain constraints on the parameter range. One of these quark models is the model of Nambu and Jona-Lasinio (NJL)~\cite{nambu}. The NJL model is one of the most economical models that possess the essence of dynamical symmetry breaking that is the hallmark of modern field theories and has enjoyed an impressive phenomenological success in hadron physics~\cite{appli}.
Now the NJL-model reflects the principal low-energy QCD symmetry properties of the quark sector, but does not include the confinement of quarks. This implies, in particular, that the mesons of this model -- quark-anti-quark bound states -- can decay into free quark-anti-quark pairs, if such a process is allowed by kinematics. This manifestly unphysical threshold puts severe constraints on the applicability of the NJL-model~\cite{ja92}. In this paper, we propose an NJL-type model that possesses the confinement property in the sense that quark-anti-quark thresholds are absent in (mesonic) Green's functions. Our reasoning, albeit exploratory, will be guided by a close analogy to a phenomenon in condensed-matter physics known as Anderson localization of electrons~\cite{anderson}. In Anderson localization, it is observed that freely moving electrons become localized when a randomly distributed potential, stemming from the impurities of the conducting solid, exceeds a certain strength. We shall argue in analogy that the quarks feel randomly distributed background fields generated by the gluon sector. The paper is organized as follows. In the next section, a motivation and a detailed description of the model are presented. The gap equation describing quark ground-state properties is derived. In section 3, we first describe the particular ansatz which produces a remarkable confinement property. The absence of quark-anti-quark thresholds is explicitly demonstrated for the scalar correlation function. We then show that this particular ansatz is indeed a solution of the gap equation. The phase structure of the model is discussed in some detail. In section 4, we describe the implications of the model for chiral properties. There is no surprise here as the model is designed to reproduce the chiral structure of QCD (section 2).
We reproduce the corresponding low-energy theorems by first solving the Bethe-Salpeter equation for the pion field, then normalizing the amplitude by the electromagnetic form factor, and finally extracting the pion decay constant from its definition. We then establish that the Gell-Mann-Oakes-Renner (GMOR) relation is valid in our model. Finally, we compute the pseudoscalar correlation function. Sections 5 and 6 are devoted to the temperature and density dependence of the vacuum properties of the model. We will show that a deconfinement phase transition occurs at some large temperature and/or density. We will find that the deconfinement phase transition is accompanied by the restoration of the spontaneously broken chiral symmetry. The model predicts the two transitions at the same critical point. The final section contains some concluding remarks. \section{The Model } \label{sec:2} \subsection{ Motivation } \indent\indent It has been known for a long time that the lattice formulation of Yang-Mills theory provides information on the confinement of quarks~\cite{eng87}. Subsequently, quark liberation due to temperature was studied with the lattice version of QCD. It turned out that quantities which are dominated by the properties of the gluonic sector, e.g.\ the gluon condensate, vary smoothly throughout the deconfinement phase transition \cite{lattice,kochbrown}. This suggests that quark liberation may be described solely by a dynamical effect of the quark sector. If so, the description of this phase transition should be feasible within an effective quark model. We hope to gain qualitative insight into the nature of quark liberation by including temperature effects in the quark loops, while the quark interaction, mediated by the gluonic sector, is kept temperature-independent.
The properties of the light mesons, in particular those of the scalar meson and the pion, are {\it protected } by the chiral symmetry of QCD and might therefore serve as a convenient testing ground to explore other features of low energy QCD. We have learned in the past years that at low energies, the pion physics is phenomenologically well described in terms of an effective quark model, in which the quarks interact via a local current-current interaction of the NJL type~\cite{nambu,appli}. However, the open question is: What is the signature of confinement at the level of an effective quark theory? In order to answer this question, we can be guided by a phenomenon in solid state physics: the localization of electrons in random potentials~\cite{anderson}, known as Anderson localization. In his pioneering work, Anderson showed that the key idea of localization can be traced back to the Hamiltonian \begin{equation} H \; = \; \sum _{ \langle ik \rangle } (-V) \, a_{i}^{\dagger } a_{k} \; + \; \sum _i \epsilon _i \, a_{i}^{\dagger } a_i \; , \label{eq:2.1.1} \end{equation} where $a_{i}^\dagger $ is the creation operator of a spin at site $i$ of a lattice. The sum in the first term of (\ref{eq:2.1.1}) extends only over the nearest neighbors, and the constant $V$ measures the strength of the nearest-neighbor hopping. This term is responsible for the spin diffusion on the lattice and is the analogue of a kinetic term in continuum quantum field theory. The energy $\epsilon _i $ corresponds to a potential at site $i$. In the Anderson model, the $\epsilon _i$'s are random variables distributed over a range $-W/2 < \epsilon < W/2 $. A continuum version of this model is described by the Hamiltonian \begin{equation} H \; = \; p^2 \; + \; w \, \sum _{\alpha } \delta ( r - R_\alpha ) \, , \label{eq:2.1.2} \end{equation} which describes electrons elastically scattered off impurities which are randomly distributed at points $R_\alpha $.
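As a side illustration (not part of the model developed in this paper), the discrete Hamiltonian (\ref{eq:2.1.1}) is easy to diagonalize numerically in one dimension. A standard quantitative handle on localization is the inverse participation ratio $\sum_i |\psi_i|^4$ of a normalized eigenstate, which scales like the inverse localization length and grows as the disorder strength $W$ increases; the sketch below (with illustrative system size and disorder values) makes this visible:

```python
import numpy as np

def anderson_ipr(N, W, V=1.0, seed=0):
    """Mean inverse participation ratio of the 1-D Anderson Hamiltonian
    H = -V * (nearest-neighbor hopping) + diag(eps),
    with site energies eps_i drawn uniformly from (-W/2, W/2)."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, size=N)
    # tridiagonal tight-binding matrix with random diagonal
    H = np.diag(eps) - V * (np.eye(N, k=1) + np.eye(N, k=-1))
    _, vecs = np.linalg.eigh(H)               # columns are normalized eigenstates
    return float(np.mean(np.sum(vecs**4, axis=0)))

ipr_weak = anderson_ipr(400, W=0.5)    # weak disorder: nearly extended states
ipr_strong = anderson_ipr(400, W=8.0)  # strong disorder: tightly localized states
```

For extended states the inverse participation ratio is of order $1/N$, while for strongly localized states it is of order one, so the two regimes are easy to tell apart even at modest system sizes.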
It turns out that the precise form of the distribution of the potential $\epsilon _i$ is not crucial. It has been found that if the random potential exceeds a certain strength, i.e. \begin{equation} W \; > \; W_c \, = \, 4 K V \; , \label{eq:2.1.3} \end{equation} where the connective constant $K$ is characteristic of the lattice type, then the spins are localized at their sites, whereas for $W<W_c$ the spins are {\it liberated }, and the spin diffusion takes place. The localization of spins is intimately related to the presence of random potentials. Is there a random interaction of quarks which stems from the gluonic sector and which survives the low-energy limit? The answer to this question is not known. There is, however, a first hint from the field-strength formulation of QCD~\cite{ha77,sch90}. When QCD ($N_c=3$) is formulated in terms of the field strength, the resulting quark interaction takes the form \begin{equation} Z[j] \; = \; \int {\cal D } T^{a}_{\mu \nu } \; e^{-S[T]} \; \exp \left\{ \int d^4x \; \left[ j^{a}_{\mu } V^{a}_{\mu } \, -i \frac{g}{2} j^{a}_{\mu } (\hat{T}^{-1} )^{ab}_{\mu \nu } j^{b}_{\nu } \right] \right\} \; , \label{eq:2.1.4} \end{equation} where $j^{a}_{\mu } = \bar{q} t^a \gamma _\mu q$ is the color octet current of the quarks, $\hat{T} ^{ab}_{\mu \nu } = f^{abc} T^{c}_{\mu \nu }$ is defined with the help of the $SU(3)$ structure constants $f^{abc}$, and $V^{a}_\mu = (\hat{T}^{-1} )^{ab}_{\mu \nu } \partial _\rho T^{b}_{\rho \nu } $ is the gauge potential induced by the conjugate field strength $T^{a}_{\mu \nu }$. The action of the field strength $S[T]$ need not be specified here and can be found in~\cite{ha77,sch90}. Within the strong coupling limit, one observes that the gluonic vacuum decays into domains of constant field strength $i T^{a}_{\mu \nu }$~\cite{sch90} giving rise to a Nambu-Jona-Lasinio type quark interaction in~(\ref{eq:2.1.4}).
Due to gauge covariance, all orientations of the constant background field contribute to the quark interaction. Subsequently, the spontaneous breakdown of chiral symmetry was observed in the strong coupling limit~\cite{re91,la91}. The specific form of the quark interaction due to the gluon background fields gives rise to a splitting of the strange- and up-quark condensates as predicted by QCD sum rules~\cite{la91}. We conclude from these observations that the low-energy effective quark interaction of the NJL type is in fact mediated by the gluon background field. Our key assumption is that in strongly fluctuating gluonic background fields (with an average scale given by the gluon condensate), we will be endowed with a random quark interaction which induces quark confinement by a mechanism similar to Anderson localization in solid state physics. In this paper, we shall construct a simple toy model which has the expected qualitative features of low-energy QCD mentioned above. \subsection{ Description of the model } \label{sec:2.2} \indent\indent In order to investigate the implications of strong random colored interactions of quarks, we study a model in which the quark fields are doublets of a global $SU(2)$ {\it color } symmetry.
We write the generating functional for mesonic Green's functions in Euclidean space as \begin{eqnarray} Z[\phi ] &=& \left\langle \int {\cal D} q \; {\cal D } \bar{q} \; \exp \left\{ - \int d^{4}x \; [ {\cal L } \, - \, \bar{q}(x) \phi (x) q(x)] \; \right\} \right\rangle _{ O } \; , \label{eq:1} \\ {\cal L } &=& \bar{q}(x) ( i \partial\kern-.5em\slash + im ) q(x) \; + \; \frac{G_0}{2} [ \bar{q} q(x) \, \bar{q} q(x) \, - \, \bar{q} \gamma _5 q(x) \, \bar{q} \gamma _5 q(x) ] \label{eq:2} \\ &+& \frac{1}{2} [ \bar{q} \tau ^{\alpha } q(x) \, G^{\alpha \beta } \bar{q} \tau ^{\beta } q(x) \, - \, \bar{q} \gamma _5 \tau ^{\alpha } q(x) \, G^{\alpha \beta } \bar{q} \gamma _5 \tau ^{\beta } q(x) ] \; , \nonumber \end{eqnarray} where $m$ is the current quark mass. We assume that the quark interaction is given by a color-singlet four-fermion interaction of strength $G_0$ and a color-triplet interaction mediated by a positive definite matrix $G^{\alpha \beta }$, which represents gluonic background fields. An average over all orientations $O$ of the background field $G^{\alpha \beta }$, transforming as $G'=O^{T} G O$ with $O$ being a $3 \times 3$ orthogonal matrix, is understood in (\ref{eq:1}) to restore global $SU(2)$ color symmetry. Our basic assumption, as motivated above, is that this averaging amounts to ensuring confinement. Our model is defined in Euclidean space for two reasons. First, the motivation of the model is provided by the Euclidean formulation of QCD. It seems reasonable to assume that classical Euclidean configurations such as instantons or monopoles might provide the random background. Second, the averaging procedure in (\ref{eq:1}) is better defined in Euclidean space, since a superposition of weight functions is again a weight function (with a correct normalization), whereas a superposition of phases (which is what the integrand of the Minkowskian functional integral is) is in general not a phase.
The theory in Minkowski space {\it is defined } by the standard Wick rotation. We will return to this point later. Instead of the current-current interaction (\ref{eq:2.1.4}), we shall use its pseudoscalar-scalar part that results from a Fierz transformation. The reason is as follows. As mentioned, it is known that QCD in $SU(2)$ (being pseudoreal) undergoes a symmetry breakdown which is quite different from what one expects in three-color QCD. To simulate what happens in QCD with three colors, we choose interactions so that we would have the correct symmetry-breaking pattern. Specifically, whereas the interaction of two color triplet currents $j_\mu ^a$, which presumably follows from QCD, exhibits the full $SU(2N_f)$ chiral symmetry group, the reduced interaction in (\ref{eq:1}) is only invariant under $SU(N_f) \times SU(N_f)$ transformations, as QCD is. Although we are dealing with an $SU(2)$ color group, we assume that the basic idea of the confinement mechanism, developed below, does not depend qualitatively on the color group under investigation\footnote{ We do not expect the answer to the question as to whether the quarks are confined or not to depend on the color group used, whereas the actual value of e.g.\ the pion decay constant might depend on whether we have the $SU(3)$ or the $SU(2)$ gauge group.}. In order to stay as close to QCD as possible, we therefore choose our low-energy effective quark theory to exhibit the chiral patterns of QCD in four dimensions. In order to make contact with more familiar formulations of effective quark models, we first study the limit where the color-triplet interaction $G^{\alpha \beta }$ is weak. In this limit we can perform the average over the background orientation $O$ using a cumulant expansion.
The colored part of the quark interaction becomes \begin{equation} \int d \lambda \; f(\lambda ) \exp \left\{ - \frac{\lambda }{3} \int d^4x \; [ \bar{q} \tau ^{\alpha } q(x) \, \bar{q} \tau ^{\alpha } q(x) \, - \, \bar{q} \gamma _5 \tau ^{\alpha } q(x) \, \bar{q} \gamma _5 \tau ^{\alpha } q(x) ] \; + \; O(\lambda ^2) \right\} \; , \label{eq:3} \end{equation} where $\lambda $ is an eigenvalue of the matrix $G^{\alpha \beta }$, and $f(\lambda )$ is the corresponding eigenvalue density. At lowest order of the cumulant expansion, we obtain the familiar Nambu-Jona-Lasinio model~\cite{nambu} with a global color symmetry. Terms multiplied by $\lambda ^2$ are eight-quark interactions. The average over the background field $G^{\alpha \beta }$ in (\ref{eq:1}) obviously incorporates the interaction of more than four quarks. If these interactions are not small, or equivalently if the background fields in (\ref{eq:1}) are not weak, one then has to abandon the cumulant expansion, and instead study the quark theory of (\ref{eq:1}) with fixed background field in a certain approximation, which we will specify below, and average over the background fields afterwards. Provided the approximation used is not restricted to weak couplings, this approach can be applied even if the cumulant expansion fails. Specifically, the approximation under investigation is to introduce meson fields on top of the scalar condensate of the vacuum and to treat their interactions perturbatively. This approximation does not rely on small couplings and is expected to give good results if the number of mesonic degrees of freedom is large. This approximation is the usual one applied to study the physics of light hadrons in the context of the Nambu-Jona-Lasinio model. We will not further question the validity of the approximation, but investigate the ground state and the properties of the light mesons within this scheme.
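To indicate how gap equations of this mean-field type are solved in practice, the sketch below treats a schematic one-channel NJL gap equation with a hard $O(4)$ Euclidean cutoff $\Lambda$, for which the one-loop tadpole integral is known in closed form. The degeneracy factor $N$ and the coupling values are purely illustrative and are not the couplings $G_0$, $G^{\alpha\beta}$ of the present model (whose gap equations also involve the color-triplet masses); the point is only the self-consistency structure and the appearance of a dynamical mass above a critical coupling in the chiral limit.

```python
import numpy as np

def tadpole(M, Lam):
    """Euclidean one-loop integral  int d^4k/(2pi)^4  M/(k^2+M^2)  with |k| < Lam:
    equals M * [Lam^2 - M^2 * ln(1 + Lam^2/M^2)] / (16 pi^2)."""
    return M * (Lam**2 - M**2 * np.log(1.0 + Lam**2 / M**2)) / (16 * np.pi**2)

def solve_gap(G, Lam, m=0.0, N=8):
    """Solve the schematic gap equation  M = m + N*G*tadpole(M, Lam)  by bisection.
    Returns m (the trivial solution) when the coupling is subcritical in the
    chiral limit, i.e. when no sign change is bracketed on (0, Lam]."""
    g = lambda M: M - m - N * G * tadpole(M, Lam)
    a, b = 1e-9, Lam
    if g(a) > 0 and g(b) > 0:          # subcritical: only M ~ m survives
        return m
    for _ in range(200):               # bisection on the bracketed root
        mid = 0.5 * (a + b)
        if g(a) * g(mid) <= 0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

# chiral limit (m = 0), in units of the cutoff: the critical coupling is
# G_c = 16 pi^2 / (N Lam^2); G = 40 is supercritical, G = 10 is subcritical
M_strong = solve_gap(G=40.0, Lam=1.0)   # nontrivial dynamical mass
M_weak   = solve_gap(G=10.0, Lam=1.0)   # only the trivial solution
```

The same fixed-point structure reappears, with more elaborate loop integrals and coupled channels, in the gap equations derived in the following subsection.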
The quark interaction in (\ref{eq:1}) is linearized by means of color-triplet ($\sigma ^{\alpha }$, $\pi ^{\alpha }$) and color-singlet mesons ($\sigma $, $\pi $). Integrating out the quark fields in the Hubbard-Stratonovich formalism, the resulting effective meson theory is \begin{eqnarray} Z[\phi ] &=& \left\langle \int {\cal D} \sigma ^{\alpha } \; {\cal D} \pi ^{\alpha } \; {\cal D} \sigma \; {\cal D} \pi \; \exp \left\{ - \int d^{4}x \; {\cal L }_M \; \right\} \right\rangle _{ O } \; , \label{eq:4} \\ {\cal L }_M &=& - \ln \left( i \partial\kern-.5em\slash + iM_0 + i\sigma + i \tilde{M}^{\alpha } \tau ^{\alpha } + i \sigma ^{\alpha } \tau ^{\alpha } + \pi \gamma _5 + \pi ^{\alpha } \tau ^{\alpha } \gamma _5 \right) \label{eq:5} \\ &+& \frac{1}{2} \left( \sigma ^{\alpha } + \tilde{M}^{\alpha } \right) \, \left( G^{-1} \right) ^{\alpha \beta } \left( \sigma ^{\beta } + \tilde{M}^{\beta } \right) \, + \, \frac{1}{2} \pi ^{\alpha } \left( G^{-1} \right) ^{\alpha \beta } \pi ^{\beta } \nonumber \\ &+& \frac{1}{2 G_0 } \left( (\sigma + M_0 - m - i \phi _0)^2 + ( \pi + \phi _5) ^2 \right) \; , \nonumber \end{eqnarray} where $\tilde{M}^\alpha $ and $M_0$ are, respectively, the color-triplet and color-singlet constituent quark masses, and we have decomposed the external source $\phi $ into scalar and pseudoscalar parts, i.e. $\phi = \phi _0 + \phi _5 \gamma _5 $. The approximation mentioned above consists of truncating the expansion of (\ref{eq:4}) in terms of the meson fields. The zeroth-order approximation provides access to ground state properties, in particular, the quark condensate.
From \begin{equation} \frac{ \delta \, \ln Z[0] }{ \delta \sigma ^{\alpha } } \; = \; 0, \hbox to 1 true cm {\hfill } \frac{ \delta \, \ln Z[0] }{ \delta \sigma } \; = \; 0 \label{eq:6} \end{equation} one obtains \begin{eqnarray} - \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \right\} &+& \frac{1}{G_0} \left( M_0 -m \right) \; = \; 0 \; , \label{eq:7} \\ - \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \tau ^{\beta } \right\} &+& \left( G^{-1}\right )^{\beta \gamma } \tilde{M} ^{\gamma } \; = \; 0 \; , \label{eq:8} \end{eqnarray} where $V_4$ is the Euclidean space-time volume, and the trace extends over internal degrees of freedom as well as over space-time. In the latter case, a regularization is required, which we will specify when needed. Different solutions of the equations (\ref{eq:7}-\ref{eq:8}) correspond to different phases of the system. \section{ The Confining Phase } \label{sec:3} \indent\indent Here we will show that a particular solution of the gap equations (\ref{eq:7})--(\ref{eq:8}) with an imaginary color-triplet constituent mass exists, i.e. $\tilde{M} ^\alpha = -i M^{\alpha }$ with $M^{\alpha } $ real. Before discussing in detail the existence of such a solution, we first illustrate its remarkable physical consequence. \newpage \subsection{ The scalar correlation function } \label{sec:3.1} \indent\indent Consider the color-singlet scalar correlation function \begin{equation} \Delta _s (p) \; = \; \int d^{4}x \; e^{-ipx} \; \left\langle \bar{q}q(x) \; \bar{q}q (0) \right\rangle \; , \label{eq:9} \end{equation} within the effective meson theory (\ref{eq:5}). Its connected part is given by \begin{equation} \Delta _s^c(p) \; = \; \int d^{4}x \; e^{-ipx} \; \frac{ \delta ^2 \, \ln Z[\phi ] }{ \delta \phi _0 (x) \delta \phi _0(0) } \vert _{\phi =0 } \; . 
\label{eq:10} \end{equation} To compute this, one has to consider the fluctuations of the scalar color-singlet field $\sigma $ as well as those of the fields $\sigma ^{\alpha }$, since, for a fixed interaction $G^{\alpha \beta }$, the colored mesons couple to $\sigma $ by a quark loop (see the first term on the right-hand side of (\ref{eq:5})). Fluctuations of the pion fields $\pi ^{\alpha }$, $\pi $ do not contribute to the scalar correlation function, since they carry the wrong quantum numbers. Expanding (\ref{eq:5}) up to second order in the meson fields, the action, in momentum space representation, is \begin{eqnarray} S^{(2)} &=& \int \frac{ d^4p }{ (2\pi )^4 } \; \left\{ \frac{1}{2} \sigma (p) \Pi _s^0 (p^2) \sigma (-p) + \frac{1}{2} \sigma ^\alpha (p) \Pi _s^{\alpha \beta } (p^2) \sigma ^\beta (-p) \right. \label{eq:e1} \\ &+& \left. i \sigma ^\alpha (p) K^\alpha (p^2) \sigma (-p) - \frac{ 1 }{ 2 G_0 } \phi _0 (p) \phi _0(-p) - \frac{i}{ G_0 } \phi _0 (p) \sigma (-p) \, \right\} \; , \nonumber \end{eqnarray} where \begin{eqnarray} \Pi _s^0 (p^2) &=& \frac{1}{G_0} \; - \; \int \frac{ d^4k }{ (2\pi )^4 } \; \hbox{tr} \left\{ S(k+p) S(k) \right\} \; , \label{eq:e2} \\ \Pi _s^{\alpha \beta } (p^2) &=& (G^{-1})^{\alpha \beta } \; - \; \int \frac{ d^4k }{ (2\pi )^4 } \; \hbox{tr} \left\{ \tau ^{\alpha } S(k+p) \tau ^{\beta } S(k) \right\} \; , \label{eq:e3} \\ K^{\alpha } (p^2) &=& -i \; \int \frac{ d^4k }{ (2\pi )^4 } \; \hbox{tr} \left\{ \tau ^{\alpha } S(k+p) S(k) \right\} \; . \label{eq:e4} \end{eqnarray} The quark propagator $S(k)$, in momentum space, is \begin{equation} S(k) \; := \; \frac{ 1 }{ k\kern-.5em\slash \, + \, i(M_0 + i M^{\alpha } \tau ^{\alpha } ) } \; . \label{eq:e5} \end{equation} Our model (\ref{eq:1}) is defined with an average over all global $SU(2)$ orientations of the interaction matrix $G^{\alpha \beta }$. In order to render the averaging procedure easy, we integrate out the colored mesons in (\ref{eq:e1}), i.e.
\begin{eqnarray} S_{eff}^{(2)} &=& \int \frac{ d^4p }{ (2\pi )^4 } \; \left\{ \frac{1}{2} \sigma (p) \, \left[ \Pi _s^0 \, + \, K^{\alpha } ( \Pi _s ^{-1} ) ^{\alpha \beta } K^{\beta } \right] \, \sigma (-p) \right. \label{eq:e6} \\ &-& \left. \frac{1}{ 2 G_0 } \phi _0 (p) \phi _0(-p) - \frac{i}{ G_0 } \phi _0 (p) \sigma (-p) \, \right\} \; . \nonumber \end{eqnarray} In fact, we will observe that $S_{eff}^{(2)}$ no longer depends on the orientation of the interaction matrix, so the averaging procedure is trivial. In order to study the effective theory of the singlet meson $\sigma $, we explicitly calculate the polarization functions $\Pi _s^0 $, $\Pi _s^{\alpha \beta }$ in (\ref{eq:e2}) and (\ref{eq:e3}), respectively, as well as the mixing $K^{\alpha }$ (\ref{eq:e4}). For this purpose, it is convenient to introduce the eigenvectors of the matrix $M^{\alpha } \tau ^{\alpha } $, i.e. \begin{equation} M^{\alpha } \tau ^{\alpha } \, \vert \pm \rangle \; = \; \pm M \, \vert \pm \rangle \; , \hbox to 1 true cm {\hfill } M=\sqrt{ M^{\alpha } M^{\alpha } } \; . \label{eq:12} \end{equation} They possess the property \begin{equation} \langle \pm \vert \tau ^\alpha \vert \pm \rangle \; = \; \pm \frac{ M^\alpha }{M} \; , \label{eq:12a} \end{equation} which will be extensively used later. The detail of the calculation is left to Appendix \ref{app:a}. It turns out that the quantities (\ref{eq:e2}) and (\ref{eq:e4}) can be expressed in terms of two functions $H_0(p^2)$, $H_v(p^2)$, i.e. \begin{equation} \Pi_s^0 (p^2) = \frac{1}{G_0} - H_0(p^2) \; , \hbox to 1 true cm {\hfill } K^{\alpha }(p^2) = \frac{ M^{\alpha } }{ M } \, H_{v}(p^2) \; . 
\label{eq:e7} \end{equation} For later convenience, we explicitly write them down here: \begin{eqnarray} H_0 (p^2) &=& 4 \int _0^1 d\alpha \; \int \frac{ d^4q }{ (2\pi )^4 } \; \left\{ \frac{ q^2 -Q }{ \left[ q^2 + Q \right] ^2 } \; + \; ( M \rightarrow -M ) \right\} \; , \label{eq:e8} \\ H_v (p^2) &=& -4i \int _0^1 d\alpha \; \int \frac{ d^4q }{ (2\pi )^4 } \; \left\{ \frac{ q^2 -Q }{ \left[ q^2 + Q \right] ^2 } \; - \; ( M \rightarrow -M ) \right\} \; \label{eq:e9} \end{eqnarray} where \begin{equation} Q \; = \; \alpha (1 - \alpha ) p^2 \; + \; (M_0 + i M)^2 \; . \label{eq:e10} \end{equation} We also find (see Appendix \ref{app:a}) that $M^\alpha $ is an eigenvector of the polarization matrix $\Pi _s^{\alpha \beta }$, i.e. \begin{equation} \Pi _s^{\alpha \beta } \frac{ M^{\beta } }{M} \; = \; \left( \frac{1}{G_c} - H_0(p^2) \right) \; \frac{ M^{\alpha } }{M} \; , \label{eq:e11} \end{equation} where we have used the property (of which we will have more to say in the next subsection) of the solution of the Dyson-Schwinger equations (\ref{eq:7},\ref{eq:8}), namely, that $M^{\alpha }$ is an eigenvector of the symmetric matrix $G^{\alpha \beta }$ with eigenvalue $G_c$. It is now straightforward to calculate $S_{eff}^{(2)}$, (\ref{eq:e6}). Since $K^{\alpha }$ is proportional to $M^{\alpha }$ and using that the eigenvectors of the symmetric matrix $G^{\alpha \beta }$ are orthogonal, one obtains \begin{eqnarray} S_{eff}^{(2)} &=& \int \frac{ d^4p }{ (2\pi )^4 } \; \left\{ \frac{1}{2} \sigma (p) \, \left[ \frac{1}{G_0} - H_0 (p^2) \, + \, \frac{ H_v^2(p^2) }{ \frac{1}{G_c} - H_0(p^2) } \right] \sigma (-p) \right. \label{eq:e12} \\ &-& \left. \frac{1}{ 2 G_0 } \phi _0 (p) \phi _0(-p) - \frac{i}{ G_0 } \phi _0 (p) \sigma (-p) \, \right\} \; . \nonumber \end{eqnarray} Note that $S_{eff}^{(2)}$ depends only on $M_0$ and $M$, which are invariant under $SU(2)$ rotations. This is the desired result, since the average over the $SU(2)$ orientations can now be trivially performed. 
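The property (\ref{eq:12a}) is easily verified with explicit Pauli matrices; the following sketch (an illustration with an arbitrary sample vector $M^\alpha$, not part of the original derivation) constructs the eigenvector $\vert + \rangle$ in closed form and checks both the eigenvalue equation (\ref{eq:12}) and the expectation values (\ref{eq:12a}):

```python
import math

# Consistency check of (12a): for an arbitrary sample vector M^a
# (illustrative values), the eigenvector |+> of M^a tau^a with
# eigenvalue +M obeys <+| tau^a |+> = +M^a/M.

M1, M2, M3 = 0.3, -0.5, 0.4
Mm = math.sqrt(M1 ** 2 + M2 ** 2 + M3 ** 2)

# normalized eigenvector |+> = (M + M3, M1 + i M2) / sqrt(2M(M + M3))
n = math.sqrt(2.0 * Mm * (Mm + M3))
v1 = (Mm + M3) / n                 # real by construction
v2 = complex(M1, M2) / n

# eigenvalue equation (M^a tau^a)|+> = +M |+>, written out in components
w1 = M3 * v1 + complex(M1, -M2) * v2
w2 = complex(M1, M2) * v1 - M3 * v2
assert abs(w1 - Mm * v1) < 1e-12 and abs(w2 - Mm * v2) < 1e-12

exp_t = (2.0 * (v1 * v2).real,     # <+|tau^1|+>
         2.0 * (v1 * v2).imag,     # <+|tau^2|+>  (uses that v1 is real)
         v1 * v1 - abs(v2) ** 2)   # <+|tau^3|+>
for e, Ma in zip(exp_t, (M1, M2, M3)):
    assert abs(e - Ma / Mm) < 1e-12
```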
We are now going to study the occurrence of an imaginary part of the scalar correlation function signaling a quark-anti-quark threshold. The crucial observation will be that whenever the trace over the color indices is required, the contributions with $\tilde{M}^{\alpha }$ occur in complex-conjugate pairs. In particular, this will erase imaginary parts of the correlation function, and no quark-anti-quark threshold will occur. In order to work out this phenomenon in some detail for the scalar correlation function (\ref{eq:9}), it is sufficient to study the functions $H_0(p^2)$, (\ref{eq:e8}), and $H_v(p^2)$, (\ref{eq:e9}), since they provide the complete correlator with the help of (\ref{eq:e12}). We rewrite, for instance, $H_0(p^2)$ as \begin{equation} H_0(p^2) \; = \; \frac{1}{4 \pi ^2} \int _0^1 d \alpha \int_0^{\Lambda ^2} du \; \left\{ 1 \, - \, \frac{3Q}{ u + Q } \, + \, \frac{2 Q^2 }{ [u + Q ]^2 } \right\} \; + \; (M \rightarrow -M) \; . \label{eq:14} \end{equation} To illustrate the disappearance of the quark-anti-quark threshold for $M \not= 0$, we first study its occurrence for $M=0$. In this case our model describes the scalar correlation function in the usual constituent quark model with a constituent quark mass $M_0$. The term of interest is the second one in the curly bracket in (\ref{eq:14}). After integration over $u$, this term becomes essentially $\ln Q$. This implies that whenever $Q$ becomes negative, the function $H_0$, (\ref{eq:14}), acquires an imaginary part. In order for $Q$ to become negative, the Euclidean momentum $p^2$ must satisfy \begin{equation} - p^2 \; > \; 4 M_0^2 \; , \label{eq:16} \end{equation} implying that the quark-anti-quark threshold occurs at a (Minkowskian) momentum $p_M = 2 M_0$, which is the familiar result. For $M=0$ the function $H_v(p^2)$ does no harm, since it is identically zero by definition, (\ref{eq:e9}). We are now going to show that for $M \not= 0$ no threshold will occur at all.
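Both statements can be checked directly; the following sketch (arbitrary sample masses in GeV, not a parameter fit) verifies that for $M=0$ the minimum of $Q$ over the Feynman parameter turns negative precisely once $-p^2 > 4 M_0^2$, while for $M \neq 0$ the screened denominator $[u+W]^2 + 4 M_0^2 M^2$ appearing below in (\ref{eq:17}) is bounded away from zero for arbitrarily negative Euclidean $p^2$:

```python
import math

# Sample masses in GeV and cutoff Lambda^2 in GeV^2 (illustrative only)
M0, M, LAM2 = 0.30, 0.10, 1.0

def Q_massless(alpha, p2):
    # Q = alpha(1-alpha) p^2 + M0^2 for M = 0, cf. (e10)
    return alpha * (1.0 - alpha) * p2 + M0 ** 2

def denom_screened(u, alpha, p2):
    # [u + W]^2 + 4 M0^2 M^2 with W = alpha(1-alpha) p^2 + M0^2 - M^2
    W = alpha * (1.0 - alpha) * p2 + M0 ** 2 - M ** 2
    return (u + W) ** 2 + 4.0 * M0 ** 2 * M ** 2

alphas = [i / 200.0 for i in range(1, 200)]
us = [LAM2 * i / 100.0 for i in range(101)]

# M = 0: Q stays positive just below the threshold and turns negative above it
assert min(Q_massless(a, -3.9 * M0 ** 2) for a in alphas) > 0.0
assert min(Q_massless(a, -4.1 * M0 ** 2) for a in alphas) < 0.0

# M != 0: the denominator never vanishes, however negative p^2 becomes
for p2 in (-0.1, -1.0, -10.0):
    dmin = min(denom_screened(u, a, p2) for u in us for a in alphas)
    assert dmin >= 4.0 * M0 ** 2 * M ** 2
```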
Adding up the contributions from $M$ and $-M$ in (\ref{eq:14}), the crucial term becomes \begin{equation} H_0^{crit} \; = \; - \frac{3}{2 \pi ^2} \int _0^1 d \alpha \int_0^{\Lambda ^2} du \; \frac{ (u + W)W + 4 M_0^2 M^2 }{ [ u + W ]^2 + 4 M_0^2 M^2 } \; , \label{eq:17} \end{equation} where $W = \alpha (1-\alpha) p^2 + M_0^2 - M^2$. An analogous result holds for the function $H_v(p^2)$. One finds \begin{equation} H_v^{crit} \; = \; - \frac{3}{2 \pi ^2} \int _0^1 d \alpha \int_0^{\Lambda ^2} du \; \frac{ 2 M_0^2 M^2 }{ [ u + W ]^2 + 4 M_0^2 M^2 } \; . \label{eq:17a} \end{equation} We find that the logarithmic singularity is screened if $M M_0 \not=0$: no imaginary part occurs for $M_0 \not= 0$ {\it and} $M \not= 0$. This is our main observation. For a non-vanishing current mass $m$, one always expects at least a small constituent quark mass $M_0$. Therefore, the main ingredient in avoiding the decay of the scalar meson into a quark-anti-quark pair is the non-vanishing value of $M$. In the following, we refer to the phase of the constituent quark model (\ref{eq:1}) with $M \not=0 $ as the {\it confining phase}. In the chiral limit, the chirally symmetric phase $(M_0=0)$ needs further discussion. In section \ref{sec:4} we will find that temperature induces a deconfinement phase transition, and that chiral symmetry is restored at the same time. Here we find, by an inspection of (\ref{eq:17},\ref{eq:17a}), that the restoration of chiral symmetry $(M_0=0)$ is accompanied by the occurrence of quark-anti-quark thresholds. \subsection{ Phase structure } \label{sec:3.2} \indent\indent Here we will search for solutions of the gap equations (\ref{eq:7}-\ref{eq:8}) with a non-vanishing, imaginary constituent quark mass in the color-triplet channel, which implies the remarkable consequences discussed above.
We will discuss the dependence of the phase structure, and, in particular, the phase transition from the confining phase $(M\not=0)$ to a non-confining phase $(M=0)$, on the parameters of the model, i.e. $G_0$, $G^{\alpha \beta } $ and $m$. For this purpose, we have to analyze the solution of the gap equations (\ref{eq:7}-\ref{eq:8}). In order to solve (\ref{eq:8}), we assume $M^{\alpha }$ to be an eigenvector of the matrix $G^{\alpha \beta }$, i.e. \begin{equation} G^{\alpha \beta } M^{\beta } \; = \; G_c \, M^{\alpha } \; . \label{eq:18} \end{equation} This reduces eqs. (\ref{eq:7}) and (\ref{eq:8}) to (see Appendix \ref{app:b}) \begin{eqnarray} \frac{1}{G_0} (M_0 - m) &=& \; - M_0 \, I_0(M_0,M) \; + \; M \, I_v(M_0,M) \; , \label{eq:19} \\ \frac{1}{G_c} M &=& - M \, I_0(M_0,M) \; - \; M_0 \, I_v(M_0,M) \; , \label{eq:19a} \end{eqnarray} where \begin{eqnarray} I_{0}(M_0,M) &=& - \frac{1}{2 \pi ^2 } \int _0 ^{\Lambda ^2 } du \; u \; \frac{ u + M_0^2 - M^2 }{ u^2 + 2 u (M_0^2-M^2) + (M_0^2 + M^2 )^2} \; , \label{eq:20} \\ I_{v}(M_0,M) &=& \frac{1}{ \pi ^2 } \int _0 ^{\Lambda ^2 } du \; u \; \frac{ M_0 M }{ u^2 + 2 u (M_0^2-M^2) + (M_0^2 + M^2 )^2 } \; , \label{eq:20a} \end{eqnarray} where a sharp O(4) cutoff $\Lambda $ was introduced. Since $I_v$ is proportional to $M_0 M$, $M=0$ is always a solution of eq. (\ref{eq:19a}). In this case, eq. (\ref{eq:19}) is reduced to the gap equation of the standard Nambu-Jona-Lasinio model with an additional $SU(2)$ degree of freedom. This implies that the standard Nambu-Jona-Lasinio model is contained in the extended model (\ref{eq:1}) as a special case, and we expect the theory (\ref{eq:1}) to be phenomenologically as successful as the Nambu-Jona-Lasinio model. The solution of the system of eqs. (\ref{eq:19},\ref{eq:19a}) was investigated numerically. Figure \ref{fig:1} shows the color-singlet and color-triplet constituent quark masses, $M_0$ and $M$, as function of the color-singlet coupling $G_0$. 
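The system (\ref{eq:19},\ref{eq:19a}) is elementary to treat numerically, since the integrals (\ref{eq:20},\ref{eq:20a}) can be done in closed form: a short calculation under the same O(4) cutoff gives $I_0 + i I_v = J$ with $J = -\left[\Lambda^2 - a \ln((\Lambda^2+a)/a)\right]/2\pi^2$ and $a = (M_0+iM)^2$, so that the gap equations read $(M_0-m)/G_0 = -\mbox{Re}[AJ]$ and $M/G_c = -\mbox{Im}[AJ]$, $A = M_0 + iM$. The sketch below (illustrative cutoff and couplings, not our parameter fit) implements this compact form and recovers the standard NJL solution of the $M=0$ branch by bisection:

```python
import cmath, math

# Closed-form sketch of the gap equations (19)-(20a); all parameter
# values are illustrative only (cutoff Lambda^2 = 1 GeV^2 assumed).

LAM2 = 1.0

def J(M0, M):
    # J = I_0 + i I_v = -(1/2 pi^2) [L - a ln((L+a)/a)],  a = (M0+iM)^2
    a = (M0 + 1j * M) ** 2
    return -(LAM2 - a * cmath.log((LAM2 + a) / a)) / (2.0 * math.pi ** 2)

def residuals(M0, M, m, G0, Gc):
    # eqs. (19) and (19a) in the form  residual = 0
    AJ = (M0 + 1j * M) * J(M0, M)
    return ((M0 - m) / G0 + AJ.real, M / Gc + AJ.imag)

# M = 0 solves (19a) identically (I_v ~ M0*M); (19) then reduces to the
# standard NJL gap equation.  Fix G0 such that M0 = 0.3 GeV solves it in
# the chiral limit m = 0, and recover that solution by bisection.
G0 = -1.0 / J(0.3, 0.0).real
r = lambda M0: residuals(M0, 0.0, 0.0, G0, 1.0)[0]
lo, hi = 0.05, 0.8
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if r(lo) * r(mid) <= 0.0 else (mid, hi)
M0_njl = 0.5 * (lo + hi)
assert abs(M0_njl - 0.3) < 1e-6
# the color-triplet equation is satisfied trivially on the M = 0 branch
assert abs(residuals(0.3, 0.0, 0.0, G0, 1.0)[1]) < 1e-15
```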
For sufficiently small color-singlet coupling strength a confining phase with $M \not= 0$ exists. Also shown is the non-confining phase with $M=0$, which exists for all values of $G_0$. In order to decide which solution forms the vacuum, one has to compare the classical action, i.e. \begin{equation} {\cal A}_c = \int \frac{ d^{4}k }{ (2 \pi )^{4} } \; \left\{ -2 \ln [ ( k^2 - M_0^2-M^2)^2+4 M_0^2 k^2 ] \, - \, \frac{1}{2 G_c} M^2 \, + \, \frac{1}{2 G_0} (M_0 -m )^2 \right\} , \label{eq:21} \end{equation} of both solutions. One finds that the confining solution has the lower action and therefore forms the vacuum. It is observed that the color-triplet coupling strength $G_c$ must exceed a critical value in order to allow for the desired imaginary constituent quark mass in the color-triplet channel. Figure \ref{fig:2} shows the critical coupling as a function of the color-singlet coupling $G_0$ for different values of the current mass $m$. A large color-singlet constituent mass $M_0$ seems to suppress the occurrence of the confining phase. The situation can be compared with that of free electrons in a solid: it was discovered by Anderson that the electrons become localized if the density of impurities exceeds a critical value~\cite{anderson}. Finally, we present the result for the scalar correlation function $\Delta _s^c (p^2)$, (\ref{eq:10}), which we have calculated in the last subsection. The final result (two integrations are left to a numerical calculation) is given by \begin{equation} \Delta _s^c (p^2) \; = \; \frac{1}{G_0} \, - \, \frac{1}{G_0^2} \frac{1}{ \frac{1}{G_0} - H_0 (p^2) \, + \, \frac{ H_v^2(p^2) }{ \frac{1}{G_c} - H_0(p^2) } } \; . \label{eq:21a} \end{equation} Figure \ref{fig:3} shows the correlation function $\Delta _s^c(p^2)$ as a function of the Euclidean momentum transfer $p^2$ for different values of the current quark mass $m$.
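The two remaining integrations in (\ref{eq:21a}) are straightforward. The sketch below (illustrative parameter values, not those used for the figures) performs the $u$-integration of $H_0$, $H_v$ in closed form and the Feynman-parameter integral with a simple trapezoid rule:

```python
import cmath, math

# Illustrative parameters in GeV units; not the values used in the figures
LAM2, M0, M, G0, GC = 1.0, 0.30, 0.10, 25.0, 30.0

def inner(Q):
    # int_0^L u (u-Q)/(u+Q)^2 du = L + 2Q - 3Q ln((L+Q)/Q) - 2Q^2/(L+Q)
    return (LAM2 + 2.0 * Q - 3.0 * Q * cmath.log((LAM2 + Q) / Q)
            - 2.0 * Q ** 2 / (LAM2 + Q))

def H0_Hv(p2, M0=M0, M=M, n=400):
    # H_0 = Re, H_v = Im of (1/2 pi^2) int_0^1 dalpha inner(Q),
    # with Q = alpha(1-alpha) p^2 + (M0 + iM)^2, cf. (e8)-(e10)
    acc = 0j
    for i in range(n + 1):
        al = i / n
        w = 0.5 if i in (0, n) else 1.0
        acc += w * inner(al * (1.0 - al) * p2 + (M0 + 1j * M) ** 2) / n
    acc /= 2.0 * math.pi ** 2
    return acc.real, acc.imag

def corr(p2):
    # eq. (21a)
    H0, Hv = H0_Hv(p2)
    return 1.0 / G0 - (1.0 / G0 ** 2) / (
        1.0 / G0 - H0 + Hv ** 2 / (1.0 / GC - H0))

# the mixing function H_v vanishes identically in the non-confining case
assert abs(H0_Hv(0.7, M=0.0)[1]) < 1e-14
# the correlator stays real and finite far beyond the would-be threshold
for p2 in (1.0, 0.0, -0.5, -2.0):
    assert math.isfinite(corr(p2))
```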
From Figure \ref{fig:2}, we conclude that increasing the current mass $m$ at fixed values of $G_0$ and $G_c$ drives the system towards the deconfinement phase transition. For large $m$ (close to the critical value of $m$ at which the transition to the deconfined phase occurs) one observes a resonance-like peak at negative momentum squared, which is reminiscent of the quark-anti-quark threshold of the deconfined phase ($M=0$). In the latter case, our model is identical to the Nambu-Jona-Lasinio model. It is well known that in the standard Nambu-Jona-Lasinio model a weakly bound scalar particle occurs in the spectrum. It has a mass of twice the constituent quark mass, implying that the particle pole occurs at the threshold position. The physics of the scalar meson is therefore beyond the scope of the standard Nambu-Jona-Lasinio model. This defect of the NJL-model is cured in our model (\ref{eq:1}): the threshold is avoided in the confining phase $(M\not=0)$. At the same time, no scalar particle is present in the spectrum. There appears a ``resonance'' near the threshold position with a width proportional to $M$. This result is in agreement with nature, since no scalar bound state is observed in the meson spectrum at the relevant energy scale. The scalar Green's function develops a structure around the threshold position without allowing the meson to decay into two quarks. This might be a precursor to a parton structure of hadrons at high energy. Unfortunately we cannot push this conjecture further, since our model is limited to low energies. One might ask whether this result is in conflict with dispersion relations, since the correlation function at hand is real in the entire momentum space. It is interesting to note that our model does {\it not} allow a scalar particle to appear, although a scalar field is needed in the bosonization approach to represent the quark interaction.
The scalar correlation function at hand is therefore not appropriate for discussing dispersion relations. On the other hand, as we will soon see, the pion-pion correlation function exhibits the pion pole and is otherwise real, so one might worry about the dispersion relation in this case as well. Note, however, that in order to arrive at this result, we have expanded the effective meson theory (\ref{eq:5}) up to second order in the meson fields, ignoring mesonic interactions. If these interactions are included, we expect to recover the physical thresholds of an interacting meson theory satisfying dispersion relations. Although the correlation function is real, a non-trivial structure appears at the would-be threshold position. A close inspection of the correlator (\ref{eq:21a}) (see Appendix \ref{app:b2}) shows that the function possesses cuts in the complex $p$-plane. One might then question whether these complex structures violate fundamental principles of quantum field theory (e.g.\ stability). We note that our approach -- the bosonization procedure of section \ref{sec:2.2} and the related approximations -- is supposed not to violate any of the analyticity requirements. In comparison with other NJL-type models, our ground state (vacuum) is simply more complicated, the standard NJL model corresponding to the particular case $M=0$. It may be that some of the fundamental requirements of QCD are not correctly implemented; if so, our detailed calculation may be providing mechanisms for the cancellation of unwanted features. In this case, we could expect our model to give some insight into the analyticity structure of Green's functions of a confining theory. This question is of particular interest, but is beyond the scope of the present paper. See \cite{oehme} for a discussion of this matter.
To check that nothing is amiss with our model, we have studied the analytic structure of the scalar correlation function in Appendix \ref{app:b2} and explicitly verified that the stability criterion is indeed satisfied thanks to a cancellation mechanism mentioned above. \newpage \section{ Chiral Properties } \label{sec:ch} \indent\indent In this section we study the properties of the pion that should emerge as a Goldstone boson of the spontaneously broken chiral symmetry. We will show that our model is in agreement with the low-energy theorems, and will verify by an explicit calculation that the Gell-Mann-Oakes-Renner relation holds. \subsection{The pion Bethe-Salpeter equation } \label{sec:ch1} \indent\indent One way to extract the properties of the pion is to study the pseudoscalar correlation function in the color-singlet channel which can be obtained in the same way as for the scalar correlator discussed in section \ref{sec:3.1}. Here we prefer to use a different approach which is to calculate the pion Bethe-Salpeter amplitude. This method will illustrate the role of the hidden color components of the pion, and will readily provide an access to such observables as the pion decay constant and the pion electromagnetic form factor. The Bethe-Salpeter equation for the pion amplitude $(P_0, P^\alpha )$ is directly obtained from the effective meson Lagrangian ${\cal L}_M$ (\ref{eq:5}) by \begin{equation} \sum _{b=0,1\ldots 3} \; \frac{ \delta ^2 {\cal L}_M }{ \delta \pi ^{a}(-p) \delta \pi ^{b} (p) } \, P^b(p) \vert_ {p^2= - m_\pi ^2} \; = \; 0 \; . 
\label{eq:h1} \end{equation} For a fixed orientation of the interaction matrix $G^{\alpha \beta }$, the left-hand side of this equation becomes \begin{equation} \left( \begin{array}{cc} \frac{1}{G_0} + \hbox{Tr} \{ \gamma _5 S(k+p) \gamma _5 S(k) \} & \hbox{Tr} \{ \tau ^\alpha \gamma _5 S(k+p) \gamma _5 S(k) \} \cr \hbox{Tr} \{ \tau^ \alpha \gamma _5 S(k+p) \gamma _5 S(k) \} & (G^{-1})^{\alpha \beta } + \hbox{Tr} \{ \tau ^\alpha \gamma _5 S(k+p) \tau ^\beta \gamma _5 S(k) \} \cr \end{array} \right) \; \left( \begin{array}{c} P_0 \cr P^{\alpha } \cr \end{array} \right) , \label{eq:h2} \end{equation} where the trace $\hbox{Tr} $ extends over the momentum space ($k$) as well as Lorentz- and color-space. One observes that the ansatz \begin{equation} P^\alpha \; = \; i \frac{ M^{\alpha } }{M} P_1 \label{eq:h3} \end{equation} for the color-components of the pion\footnote{ Once the average over all orientations of the interaction matrix is performed, these parts become the hidden-color components (compare section \ref{sec:ch3}).} reduces (\ref{eq:h2}) to (see Appendix \ref{app:c}) \begin{equation} \left( \begin{array}{cc} \frac{1}{G_0} + {\cal I }_0(p^2) & - {\cal I}_v(p^2) \cr {\cal I}_v(p^2) & \frac{1}{G_c} + {\cal I}_0(p^2) \cr \end{array} \right) \; \left( \begin{array}{c} P_0(p^2) \cr P_1(p^2) \cr \end{array} \right) \; = \; 0 \; , \label{eq:h4} \end{equation} where the functions ${\cal I}_0$ and ${\cal I}_v$ are defined by \begin{eqnarray} {\cal I}_0 (p^2) &=& - \frac{1}{4 \pi ^2} \int _0^1 d\alpha \; \int _0 ^{\Lambda ^2 } du \; u \; \frac{ u - \alpha (1- \alpha ) p^2 + A^2 }{ \left[ u + \alpha (1- \alpha ) p^2 + A^2 \right] ^2 } \; + \; (M \rightarrow -M) \; , \label{eq:h5} \\ {\cal I}_v (p^2) &=& \frac{i}{4 \pi ^2} \int _0^1 d\alpha \; \int _0 ^{\Lambda ^2 } du \; u \; \frac{ u - \alpha (1- \alpha ) p^2 + A^2 }{ \left[ u + \alpha (1- \alpha ) p^2 + A^2 \right] ^2 } \; - \; (M \rightarrow -M) \; , \label{eq:h6} \end{eqnarray} with $A=M_0+iM$. 
An inspection of (\ref{eq:h5}) and (\ref{eq:h6}) shows that both ${\cal I}_0$ and ${\cal I}_v$ are real. Demanding (\ref{eq:h4}) to have a non-trivial solution for $P^2=-m_\pi ^2$ leads to a nonlinear equation to determine $m_\pi $. Instead of solving this equation numerically, it is more instructive to solve it analytically for pion masses small compared with the constituent quark mass $M_0$. For this purpose we expand the functions ${\cal I}_0$ and ${\cal I}_v$ up to linear order in $p^2$, i.e. \begin{equation} {\cal I}_0(p^2) \; = \; I_0 \, + \, F_0 \, p^2 \, + O(p^4) \; , \hbox to 1 true cm {\hfill } {\cal I}_v(p^2) \; = \; I_v \, + \, F_v \, p^2 \, + O(p^4) \; , \label{eq:h7} \end{equation} where $I_0$ and $I_v$ depend on $M_0$ and $M$ (see (\ref{eq:20}) and (\ref{eq:20a})), and the quantities $F_0$, $F_v$ are defined by \begin{eqnarray} F_0 &=& \frac{1}{8 \pi ^2} \int _0^{\Lambda ^2} d(k^2) \; k^2 \; \frac{1}{ \left[ k^2 + (M_0+iM)^2 \right] ^2 } \; + \; (M \rightarrow -M) \; , \label{eq:h8} \\ F_v &=& - \frac{i}{8 \pi ^2} \int _0^{\Lambda ^2} d(k^2) \; k^2 \; \frac{1}{ \left[ k^2 + (M_0+iM)^2 \right] ^2} \; - \; (M \rightarrow -M) \; . \label{eq:h9} \end{eqnarray} A non-trivial solution for the Bethe-Salpeter amplitudes $P_0$, $P_1$ exists provided \begin{equation} \det \left( \begin{array}{cc} \frac{1}{G_0} + I_0 + F_0 p^2 & - I_v - F_v p^2 \cr I_v + F_v p^2 & \frac{1}{G_c} + I_0 + F_0 p^2 \cr \end{array} \right) \; = \; 0 \; . \label{eq:h10} \end{equation} With the help of the equations of motion (\ref{eq:19}) and (\ref{eq:19a}), i.e. \begin{equation} \frac{1}{G_0} + I_0 \; = \; \frac{M}{M_0} I_v \, + \, \frac{m}{M_0 G_0} \; , \hbox to 1 true cm {\hfill } \frac{1}{G_c} + I_0 \; = \; - \frac{M_0}{M} I_v \; , \label{eq:h11} \end{equation} eq.
(\ref{eq:h10}) can be rewritten as (setting $p^2=-m_\pi ^2$) \begin{equation} m_\pi ^2 \, f_{\pi c} ^2 \; = \; 4m \; \langle \bar{q} q \rangle \; + \; O(m_\pi^4) \, + \, O(m^2) \; , \label{eq:h12} \end{equation} where we have used that the quark condensate $\langle \bar{q}q \rangle $ is given by $M_0 / G_0$, and where $f_{\pi c}^2$ is defined by \begin{equation} f_{\pi c}^2 \; = \; 4 \left\{ M_0^2 F_0 \; - \; M^2 F_0 \; - \; 2 M M_0 F_v \right\} \; . \label{eq:h13} \end{equation} Equation (\ref{eq:h12}) is the Gell-Mann-Oakes-Renner relation. It tells us that in the chiral limit $(m=0)$ the pion is massless if chiral symmetry is spontaneously broken $(\langle \bar{q}q \rangle \not=0)$. Below, we will show that $f_{\pi c}^2$, defined in (\ref{eq:h13}), coincides with the pion decay constant in the chiral limit $(m=0)$. \subsection{ The electromagnetic form factor } \label{sec:ch2} \indent\indent Since the Bethe-Salpeter equation provides only the relative weights of the amplitudes $P_0$, $P^\alpha $, an additional physical input is needed for the normalization of the amplitudes. The electromagnetic form factor of the pion provides the desired additional constraint. The form factor $F(q^2)$ is defined via the electromagnetic vertex~\cite{goe83}, i.e. \begin{equation} \langle \pi (p') \vert J_\mu (0) \vert \pi (p) \rangle \; = \; ie \, (p'_\mu + p_\mu) \, F(q^2) \; , \label{eq:h14} \end{equation} where $J_\mu $ is the electromagnetic current and $q=p'-p$. In order to normalize the pion charge to unity, we demand that \begin{equation} F(0) \; = \; 1 \; . \label{eq:h15} \end{equation} Once the form factor is calculated, the mean-square charge radius of the pion is simply given by \begin{equation} r_{ms}^2 \; = \; \langle r^2 \rangle _{\pi } \; = \; - 6 \, \frac{ \partial F(q^2) }{ \partial q^2 } \vert _{q^2=0} \; , \label{eq:h16} \end{equation} where $q$ is the momentum in Minkowski space.
Writing the electromagnetic current in terms of the quark fields, \begin{equation} J_\mu (x) \; = \; \bar{q}(x) \, \gamma _\mu \, q(x) \; , \label{eq:h17} \end{equation} we can study the matrix element (\ref{eq:h14}) most efficiently by using the Bethe-Salpeter amplitude of the pion. Its graphical representation is given in Fig.\ref{fig:ch1}\footnote{ Since the diagrams (in particular, (a)) are divergent, the result depends on the actual choice of the loop momentum. A more sophisticated cutoff procedure (e.g.\ Schwinger's proper-time regularization) would remove this ambiguity. }. For $p'=0$, the matrix element is \begin{eqnarray} - \int \frac{ d^4k }{ (2\pi )^4 } \; \hbox{tr} \bigg\{ &\gamma _5& \left( P_0 + i \frac{ M^\alpha }{M} \tau ^\alpha P_1 \right) \, S(k- \frac{p}{2} ) \, \gamma _\mu \, S(k+ \frac{p}{2} ) \, \label{eq:h18} \\ &\gamma _5& \left( P_0 + i \frac{ M^\beta }{M} \tau ^\beta P_1 \right) \, S(k- \frac{p}{2} ) \, \bigg\} \; . \nonumber \end{eqnarray} The evaluation of this matrix element is left to Appendix \ref{app:d}. With (\ref{eq:d7}), the form factor, for small momentum, reads \begin{equation} F(p^2) \; = \; \left( P_0^2 - P_1^2 \right) \, \left( F_0 - p^2 R_0 \right) \; - \; 2 P_0 P_1 \, \left( F_v - p^2 R_v \right) \; . \label{eq:h19} \end{equation} {}From this expression follow the desired normalization (\ref{eq:h15}) of the pion Bethe-Salpeter amplitude and the pion charge radius (\ref{eq:h16}): \begin{eqnarray} 1 &=& \left( P_0^2 - P_1^2 \right) \, F_0 \; - \; 2 P_0 P_1 \, F_v \; , \label{eq:h20} \\ r_{ms}^2 &=& 6 \left[ \left( P_0^2 - P_1^2 \right) \, R_0 \; - \; 2 P_0 P_1 \, R_v \, \right] \; . \label{eq:h21} \end{eqnarray} We can now discuss the full momentum dependence of the form factor $F(p^2)$. The expression for $F(p^2)$ is derived in Appendix \ref{app:d}. The integration over the angle variable and the radial component of the momentum space was performed numerically. The numerical result is presented in Figure \ref{fig:ch3}. 
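The constants $F_0$, $F_v$ of (\ref{eq:h8},\ref{eq:h9}), which control the normalization (\ref{eq:h20}) and the charge radius (\ref{eq:h21}), are elementary integrals: with $a = (M_0+iM)^2$ and $L = \Lambda^2$ one has $\int_0^{L} dk^2\, k^2/(k^2+a)^2 = \ln((L+a)/a) + a/(L+a) - 1 =: g(a)$, so that $F_0 = \mbox{Re}\,g(a)/4\pi^2$ and $F_v = \mbox{Im}\,g(a)/4\pi^2$. A small cross-check of this closed form against brute-force integration (sample masses and an assumed cutoff $\Lambda^2 = 1\,$GeV$^2$, not the paper's fit):

```python
import cmath, math

LAM2, M0, M = 1.0, 0.30, 0.10   # illustrative sample values

def g(a):
    # closed form of int_0^L dk^2 k^2/(k^2 + a)^2
    return cmath.log((LAM2 + a) / a) + a / (LAM2 + a) - 1.0

a = (M0 + 1j * M) ** 2
F0 = g(a).real / (4.0 * math.pi ** 2)   # eq. (h8)
Fv = g(a).imag / (4.0 * math.pi ** 2)   # eq. (h9)

# brute-force trapezoid integration of the same integral
n = 20000
S = 0.5 * LAM2 / (LAM2 + a) ** 2        # endpoint u = LAM2 (u = 0 gives 0)
for i in range(1, n):
    u = LAM2 * i / n
    S += u / (u + a) ** 2
S *= LAM2 / n
assert abs(F0 - S.real / (4.0 * math.pi ** 2)) < 1e-6
assert abs(Fv - S.imag / (4.0 * math.pi ** 2)) < 1e-6
```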
There exist no vector-meson poles at negative momentum transfer; instead there is a peak whose width depends strongly on the parameters of the model. Presumably the resonance-like structure is an artifact of the model, unrelated to any physical process. It probably reflects the suppression of the quark-anti-quark pole present in the standard NJL model without confinement. As mentioned, the physics of the vector mesons is beyond the scope of a local four-quark interaction\footnote{ In particular, we do not have any vector and axial-vector channels in the quark interaction (\ref{eq:2}), which are known to be important for the physics of vector mesons. }. We expect that a more realistic model with quantum loops will produce a vector-meson (i.e., $\rho$) pole instead of the resonance. In fact, we observe a pion pole and no further structure in the pseudoscalar correlation function of our model, which is therefore expected to give good results for pion physics (see below). It is quite remarkable that we obtain a real form factor for arbitrarily large (Minkowskian) momentum, manifesting the absence of quark-anti-quark thresholds. This cures an outstanding problem of the standard non-confining NJL-model. This may have important consequences in heavy-meson physics. For instance, the non-perturbative description of the electroweak currents of a heavy and a light quark is beyond the reach of the usual NJL-model due to the occurrence of quark-anti-quark threshold effects. Electroweak currents such as the one considered here enter into the decay rate of a B-meson into a pion, an electron, and a neutrino. In the limit of the electron being very fast, one should be able to extract the element $V_{ub}$ of the Kobayashi-Maskawa mixing matrix from the differential cross section. However, it has been shown that in the limit of a fast electron, non-perturbative contributions to the electroweak currents become important.
We believe that a more realistic formulation ($SU(3)$ color) of our model might be able to provide the desired results. The normalization (\ref{eq:h20}) completes the calculation of the pion Bethe-Salpeter amplitude. This is the desired result, since we are now able to calculate matrix elements involving pions. In particular, we will obtain the pion decay constant in the next subsection by a direct calculation. \subsection{ The pion decay constant } \label{sec:ch3} \indent\indent An expression for the pion decay constant was already deduced in section \ref{sec:ch1} by demanding that the Gell-Mann-Oakes-Renner relation (\ref{eq:h12}) be quantitatively satisfied. The aim of this subsection is to show that the decay constant (\ref{eq:h13}), obtained there, is identical to that derived from its definition. The pion decay constant $f_\pi $ is defined by the matrix element \begin{equation} \langle 0 \vert A_{\mu }(x) \vert \pi (p) \rangle \vert _{p \rightarrow 0} \; = \; i \, e^{ipx} \, p_\mu \, f_\pi \; , \label{eq:h22} \end{equation} which describes the coupling of the pion to the vacuum via the axial vector current $A_{\mu }(x) = \bar{q}(x) \gamma _5 \gamma _\mu q(x) $. The matrix element (\ref{eq:h22}), also shown in Figure \ref{fig:ch1}, is \begin{equation} - \int \frac{ d^4k }{ (2\pi )^4 } \; \hbox{tr} \left\{ \gamma _5 \gamma _\mu \, S(k+p) \, \left( P_0 + i \frac{ M^{\alpha } }{M} \tau ^{\alpha } P_1 \right) \gamma _5 \, S(k) \, \right\} \; . \label{eq:h23} \end{equation} An average over all color orientations of the interaction matrix $G^{\alpha \beta }$ -- which is equivalent to an average over all color directions of $M^\alpha $ -- is understood in (\ref{eq:h23}).
A straightforward calculation of (\ref{eq:h23}) for a given vector $M^\alpha $ yields \begin{eqnarray} 4i \, p_\mu &\bigg\{ & \int \frac{ d^4k }{ (2\pi )^4 } \; \left[ \frac{ A }{ \left[ k^2 + A^2 \right] ^2 } + (M \rightarrow -M) \right] \; P_0 \label{eq:h24} \\ &-& (-i) \int \frac{ d^4k }{ (2\pi )^4 } \; \left[ \frac{ A }{ \left[ k^2 + A^2 \right] ^2 } - (M \rightarrow -M) \right] \; P_1 \; \bigg\} \; . \nonumber \end{eqnarray} This result can be further simplified by introducing the functions $F_0$ and $F_v$ from (\ref{eq:h8},\ref{eq:h9}), i.e. \begin{equation} 2i \, p_\mu \, \left\{ (M_0 F_0 - M F_v) \, P_0 \; - \; (M_0 F_v + M F_0) \, P_1 \right\} \; . \label{eq:h26} \end{equation} The ratio $\epsilon $ of $P_1$ over $P_0$ is provided by the Bethe-Salpeter equation (\ref{eq:h4}), i.e. \begin{equation} \epsilon \; := \; \frac{ P_1 }{ P_0 } \; = \; - \frac{ {\cal I}_v(p^2=-m_\pi ^2) }{ \frac{1}{G_c} + {\cal I}_0(p^2=-m_\pi ^2) } \; , \label{eq:h25a} \end{equation} whereas the overall normalization is constrained by the electromagnetic form factor (\ref{eq:h20}). Expressing $P_1$ in terms of $\epsilon $ and $P_0$ and eliminating $P_0$ with the constraint (\ref{eq:h20}), we obtain the final result \begin{equation} f_\pi ^2 \; = \; 4 \, \frac{ \left[ M_0 F_0 - M F_v \; - \; (M_0 F_v + M F_0) \, \epsilon \right] ^2 }{ (1 - \epsilon ^2) \, F_0 \, - \, 2 \epsilon F_v } \; , \label{eq:h25} \end{equation} where the functions $F_0$, $F_v$ are given by (\ref{eq:h8}) and (\ref{eq:h9}). This is our result for the pion decay constant, valid for all values of the current mass. This decay constant needs to agree with the one extracted from the Gell-Mann-Oakes-Renner relation, (\ref{eq:h13}), only in the chiral limit $(m=0)$. Before we show that this is indeed the case, we would like to comment on the result (\ref{eq:h25}). First note that the average over all orientations still has to be performed.
Since our result (\ref{eq:h25}), however, depends only on the invariant quantity $M$ rather than on $M^\alpha $, this average is trivial. Note further that a term proportional to $P_1$ enters into the physical decay constant. This is the contribution from the hidden-color components of the pion. In order to calculate $f_\pi $ in the chiral limit, we proceed with (\ref{eq:h24}). {}From the Bethe-Salpeter equation (\ref{eq:h4}) in the chiral limit $p^2 = - m_\pi ^2 = 0$, one finds \begin{equation} P_0 \; = \; \frac{ M_0 }{ M } \, P_1 \; . \label{eq:h27} \end{equation} With this result, the decay constant in the chiral limit becomes \begin{equation} f_{\pi c} \; = \; 2 \left\{ M_0^2 F_0 \, - \, 2 M M_0 F_v \, - \, M^2 F_0 \right\} \; \frac{ P_1 }{M} \; . \label{eq:h28} \end{equation} Using (\ref{eq:h27}) in the normalization of the Bethe-Salpeter amplitude (\ref{eq:h20}), one may eliminate $P_1$ in (\ref{eq:h28}) to obtain \begin{equation} f_{\pi c}^2 \; = \; 4 \left\{ (M_0^2 -M^2) \, F_0 \; - \; 2 M M_0 \, F_v \right\} \; . \label{eq:h29} \end{equation} This expression is identical to (\ref{eq:h13}). This establishes that the Gell-Mann-Oakes-Renner relation is indeed valid in our model. We finally present the numerical data for the pion mass and the pion decay constant. We have numerically solved the Bethe-Salpeter equation (\ref{eq:h4}) to obtain the mass of the pion and the ratio $\epsilon = P_1/P_0$. The decay constant $f_\pi $ was then calculated from (\ref{eq:h25}). The result for $m_\pi $ and $f_\pi $ as a function of the current mass is shown in Figure \ref{fig:ch2}. One observes that $m_\pi ^2$ depends almost linearly on the current mass with the slope given by the Gell-Mann-Oakes-Renner relation. The decay constant $f_\pi $ decreases with increasing current mass. 
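As a cross-check of this algebra, the reduction of (\ref{eq:h25}) to (\ref{eq:h29}) for the chiral-limit ratio $\epsilon = P_1/P_0 = M/M_0$ of (\ref{eq:h27}) can be verified numerically; the values of $M_0$, $M$, $F_0$, $F_v$ below are arbitrary sample inputs, since the identity is purely algebraic:

```python
# Check that the general decay-constant formula (h25) reduces to the
# chiral-limit expression (h29) once eps = P_1/P_0 = M/M_0, cf. (h27).

def fpi2_general(M0, M, F0, Fv, eps):
    # eq. (h25)
    num = (M0 * F0 - M * Fv - (M0 * Fv + M * F0) * eps) ** 2
    den = (1.0 - eps ** 2) * F0 - 2.0 * eps * Fv
    return 4.0 * num / den

def fpi2_chiral(M0, M, F0, Fv):
    # eq. (h29)
    return 4.0 * ((M0 ** 2 - M ** 2) * F0 - 2.0 * M * M0 * Fv)

M0, M, F0, Fv = 0.35, 0.12, 0.013, 0.004   # illustrative numbers
eps = M / M0
assert abs(fpi2_general(M0, M, F0, Fv, eps)
           - fpi2_chiral(M0, M, F0, Fv)) < 1e-12
```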
\subsection{ The pseudoscalar correlation function } \label{sec:ch4} \indent\indent It is instructive to compare the result of the scalar correlation function with that for the pseudoscalar correlator where one expects a pion pole. The pseudoscalar correlation function is defined by \begin{equation} \Delta _\pi ^c(p) \; = \; \int d^{4}x \; e^{-ipx} \; \frac{ \delta ^2 \, \ln Z[\phi ] }{ \delta \phi _5 (x) \delta \phi _5(0) } \vert _{\phi =0 } \; . \label{eq:p1} \end{equation} In order to derive $\Delta _\pi ^c$, we expand the effective meson theory (\ref{eq:4}) up to second order in pion fields. Since an average over all orientations of the interaction matrix $G^{\alpha \beta }$ is required, it is convenient to integrate out the colored pion fields. One observes that the resulting theory for the color-singlet pion does not depend on this orientation, so the averaging is trivial. Since the explicit calculation parallels closely that of the scalar correlator in section \ref{sec:3.1}, we shall simply present the final result: \begin{equation} S_{eff}^{(2)} \; = \; \int \frac{ d^4p }{ (2\pi )^4 } \; \left\{ \frac{1}{2} \pi (p) \, \Pi _5 (p^2) \, \pi (-p) \; + \; \frac{1}{ 2 G_0 } \phi _5 (p) \phi _5(-p) + \frac{1}{ G_0 } \phi _5 (p) \pi (-p) \, \right\} \; , \label{eq:p2} \end{equation} where the pion dispersion formula $\Pi _5 (p^2)$ is given by \begin{equation} \Pi _5 (p^2) \; = \; \frac{1}{G_0} + {\cal I}_0 (p^2) \, + \, \frac{ {\cal I}_v^2(p^2) }{ \frac{1}{G_c} + {\cal I}_0(p^2) } \; . \label{eq:p4} \end{equation} The functions ${\cal I}_0$ and ${\cal I}_v$ were defined in (\ref{eq:h5}) and (\ref{eq:h6}), respectively. The dispersion formula $\Pi _5$ is shown in Figure \ref{fig:3b} as a function of the Euclidean momentum transfer in the chiral limit $(m=0)$. Note that the function $\Pi _5 (p^2)$ is zero at $p^2=0$. This zero gives rise to a pole of the correlation function at zero momentum transfer, confirming the pion as a Goldstone boson.
The most striking feature of Figure \ref{fig:3b} is a singularity at positive Euclidean momentum. It arises from the second term in (\ref{eq:p4}), which stems from the integration over the colored pions. The singularity occurs at the momentum where the colored pions go on-shell. The occurrence of the singularity is therefore quite natural and does not depend on the details of the model. In our model, the colored pions go on-shell at positive Euclidean momentum $p^2$, which might indicate that the colored pions condense. In this case our approach, expanding the Lagrangian up to second order in the colored pion fields and integrating them out, is no longer appropriate. We will leave this issue to future investigations and here study the pion pole in some detail. One can deduce the pion mass $m_\pi $ from the position of the pion pole in the correlation function $\Delta _\pi ^c (p^2)$. It is worthwhile to check whether this result for the pion mass agrees with that obtained from solving the Bethe-Salpeter equation (see section \ref{sec:ch2}). One observes that both masses are indeed identical, because the condition $\Pi _5 (p^2 = -m_\pi ^2) = 0$ for a pole of the correlation function coincides with the condition (\ref{eq:h10}) which guarantees a non-zero Bethe-Salpeter amplitude. Finally, the numerical result for the correlator (\ref{eq:p1}) is presented in Figure \ref{fig:3a}. The correlation function is entirely dominated by the pion pole. This is compared with the result for the scalar correlator. In the latter case, no pole occurs at all, but the scalar correlation function is significantly influenced by a peak structure. A second pole in the correlation function occurs at a positive value of the Euclidean momentum $p^2$. This pole is due to the fact that the occurrence of the colored pion pole in the dispersion relation $\Pi _5 $ gives rise to a further zero of $\Pi _5 (p^2)$.
The pole of $\Delta _\pi ^c$ at a positive momentum squared is intimately related to the properties of the hidden-color pions and will be the subject of a future work. \section{ Temperature Effects } \label{sec:4} \indent\indent The model (\ref{eq:1}) offers a confining phase and a deconfining phase, both emerging from the classical equations of motion (\ref{eq:7}-\ref{eq:8}). In the preceding sections, we investigated the phase structure in the model's parameter space. We found that the ground state is in the confining phase if the color-triplet coupling strength is strong enough. In this section, we will investigate the influence of temperature on the phase structure. In particular, we will start from the system with the vacuum in the confining phase and will study the type of phase transitions, if any, to the deconfining phase. For this purpose, we must generalize the gap equations (\ref{eq:7}-\ref{eq:8}) to finite temperature. The trace in the first terms on the left-hand side stems from the integration over the quark fields. In order to introduce temperature, we adopt the usual imaginary-time formalism~\cite{ka89} and confine the configuration space of the fermionic fields to the configurations which are anti-periodic in the Euclidean time direction with period $1/T$, where $T$ is the temperature. At finite temperature, the integration over the zeroth component of the Euclidean momentum in the trace terms of (\ref{eq:7}-\ref{eq:8}) is replaced by a discrete sum over Matsubara frequencies. In order to illustrate the evaluation of such trace terms at finite temperature, we explicitly work out a term which arises in (\ref{eq:7}-\ref{eq:8}) once the color trace is performed, i.e. \begin{equation} F \; := \; \hbox{Tr} \left\{ \frac{i}{ k\kern-.5em\slash + i(M_0 + iM) } \right\} \; .
\label{eq:22} \end{equation} Performing the Dirac trace as well as the trace over space-time, one obtains \begin{equation} F \; = \; 4V \sum _{n = -\infty }^{\infty } \int \frac{ d^{3}k }{(2\pi )^3 } \; \frac{ M_0 + i M }{ \pi ^2 T^2 (2n+1)^2 + (a + ib)^2 } \; , \label{eq:23} \end{equation} where $V$ is the space volume, and the quantities \begin{eqnarray} a &=& \sqrt{ \frac{1}{2} \left[ \vec{k}^2 + M_0^2 - M^2 + \sqrt{ (\vec{k}^2 + M_0^2 - M^2 )^2 + 4 M_0^2 M^2 } \right] } \label{eq:24} \\ b &=& \frac{ M_0 M }{a} \label{eq:25} \end{eqnarray} were introduced as an abbreviation. In order to split the zero temperature part, which is cutoff-dependent, from the temperature-dependent contributions, which are finite, we perform a Poisson resummation of the Matsubara sum in (\ref{eq:23}), i.e. \begin{equation} F \; = \; 4 V \int dn \; \frac{ d^{3}k }{(2\pi )^3 } \; \frac{ M_0 + i M }{ \pi ^2 T^2 (2n+1)^2 + \vec{k}^2 + (M_0 + iM)^2 } \; + \; F(\nu \not=0) \; . \label{eq:26} \end{equation} The first term is the term with zero conjugate Matsubara frequency ($\nu =0$), which yields the zero temperature result if we substitute $k_0 = \pi T (2n+1)$ as integration variable in (\ref{eq:26}). It is this term which needs regularization; this is done by introducing a sharp O(4) cutoff, as in the zero-temperature case. The part with non-zero conjugate Matsubara indices is finite and needs no regularization. It is \begin{equation} F(\nu \not=0) \; = \; - \frac{4 V}{T} \frac{ M_0 + i M }{ a + ib } \left[ e ^{ \frac{a + ib }{T} } +1 \right] ^{-1} \; . \label{eq:27} \end{equation} Evaluating the trace terms in (\ref{eq:7}-\ref{eq:8}) as sketched above, one finds that the gap equations acquire an additional term that contains a temperature dependence.
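For a real mass, i.e. $b=0$ in (\ref{eq:24},\ref{eq:25}), the Matsubara sum in (\ref{eq:23}) can be carried out in closed form, $\sum_n \left[ \pi^2 T^2 (2n+1)^2 + E^2 \right]^{-1} = \tanh (E/2T)/(2ET)$, and the split $\tanh (E/2T) = 1 - 2\,[e^{E/T}+1]^{-1}$ mirrors the decomposition into the zero-temperature piece and the finite Fermi-factor correction of (\ref{eq:26},\ref{eq:27}). A short numerical sketch of this standard identity (the values of $E$ and $T$ are arbitrary):

```python
import math

def matsubara_sum(E, T, N=500000):
    """Truncated fermionic sum  sum_n [pi^2 T^2 (2n+1)^2 + E^2]^(-1)."""
    total = 0.0
    for n in range(N):
        w2 = (math.pi * T * (2 * n + 1)) ** 2
        total += 2.0 / (w2 + E * E)      # frequencies come in +/- pairs
    return total

def closed_form(E, T):
    """tanh(E/2T)/(2ET); the '1' in tanh = 1 - 2 n_F is the T = 0 piece."""
    return math.tanh(E / (2.0 * T)) / (2.0 * E * T)

E, T = 1.3, 0.5
assert abs(matsubara_sum(E, T) - closed_form(E, T)) < 1e-4
```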
Introducing \begin{eqnarray} \Sigma _R &=& \frac{ a e^{a/T} \cos (b/T) + a - b e^{a/T} \sin (b/T) }{ (a^2+b^2) ( e^{2a/T} + 2 \cos (b/T) e^{a/T} +1 ) } \; , \label{eq:28} \\ \Sigma _I &=& \frac{ a e^{a/T} \sin (b/T) + b + b e^{a/T} \cos (b/T) }{ (a^2+b^2) ( e^{2a/T} + 2 \cos (b/T) e^{a/T} +1 ) } \; , \label{eq:29} \end{eqnarray} the modified gap equations are \begin{eqnarray} G_0^{-1} (M_0 - m) &=& M_0 \, I_+(M_0,M) \; - \; \frac{4}{\pi ^2 } \int dk \; k^2 \; \left( M_0 \Sigma _R + M \Sigma _I \right) \; , \label{eq:30} \\ G_c^{-1} M &=& M \, I_-(M_0,M) \; - \; \frac{4}{\pi ^2} \int dk \; k^2 \; \left( M \Sigma _R - M_0 \Sigma _I \right) \; , \label{eq:31} \end{eqnarray} where the functions $I_{\pm }$ are defined with the help of (\ref{eq:20},\ref{eq:20a}) by \begin{equation} I_+ \; = \; - I_0 \, + \, \frac{M}{M_0} I_v \; , \hbox to 1 true cm {\hfill } I_- \; = \; - I_0 \, - \, \frac{M_0}{M} I_v \; . \label{eq:31a} \end{equation} This is the main result of this section; the temperature-dependent color-singlet and color-triplet constituent quark masses will emerge from these equations. We have studied numerically the solutions $M$ and $M_0$ of (\ref{eq:30}-\ref{eq:31}) as a function of temperature. The coupling strengths $G_0$ and $G_c$ were chosen in order for the system to be in the confining phase at zero temperature. The result is shown in Figure \ref{fig:4}. If the temperature exceeds a critical value, a first order phase transition from the confining phase $(M \not=0)$ to the deconfined phase $(M=0)$ takes place. The deconfining phase transition is accompanied by a sudden drop of the color-singlet constituent mass $M_0$, indicating at the same time the restoration of chiral symmetry. The small residual constituent quark mass is due to the current mass $m=0.02 \Lambda $ which explicitly breaks chiral symmetry. For phenomenological applications, the dependence of the deconfining phase transition on the current quark mass is of particular interest.
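The qualitative picture — a constituent mass generated self-consistently at low $T$ and melting away at high $T$ — can be illustrated with a drastically simplified, single-channel NJL-type gap equation, $M = m + (2 G N_c N_f/\pi^2) \int_0^\Lambda dk\, k^2 (M/E) \tanh (E/2T)$ with $E = \sqrt{k^2+M^2}$. This toy equation is a stand-in for the coupled system (\ref{eq:30},\ref{eq:31}), not the model itself, and all parameter values below are purely illustrative (in units of the cutoff):

```python
import math

def gap_rhs(M, T, m=0.02, G=2.0, Lam=1.0, Nc=3, Nf=2, steps=1000):
    """Right-hand side of the toy gap equation with a sharp 3-momentum cutoff."""
    dk = Lam / steps
    integral = 0.0
    for i in range(steps):
        k = (i + 0.5) * dk                      # midpoint rule
        E = math.sqrt(k * k + M * M)
        integral += k * k * (M / E) * math.tanh(E / (2.0 * T)) * dk
    return m + (2.0 * G * Nc * Nf / math.pi ** 2) * integral

def solve_gap(T, M_start=0.3, iters=200):
    """Damped fixed-point iteration for the constituent mass M(T)."""
    M = M_start
    for _ in range(iters):
        M = 0.5 * M + 0.5 * gap_rhs(M, T)
    return M

# illustrative toy parameters in cutoff units, not those of the text's model
M_cold, M_hot = solve_gap(0.05), solve_gap(0.45)
assert M_cold > 2.0 * M_hot > 0.0     # the condensate melts at high T
```

The small residual mass at high $T$ stems from the explicit breaking term $m$, just as in the discussion of Figure \ref{fig:4}.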
Figure \ref{fig:5} shows the critical color-triplet coupling strength $G_c$ as function of the temperature for some values of the current mass. The result suggests that quark liberation of {\it all} quark flavors occurs approximately at the same temperature (assuming that the system was in the confining phase at zero temperature). \section{Finite-Density Effects} \label{sec:5} \indent\indent In the last section, we observed a phase transition from the confined phase to the deconfined phase at high temperature. For phenomenological reasons~\cite{newbr}, one also expects a phase transition to occur at high baryonic density. Here we investigate the existence of this phase transition in the random background quark model~(\ref{eq:1}). In order to study the system at a non-zero baryonic density, we introduce a chemical potential $\mu $ into the Lagrangian (\ref{eq:2}): \begin{eqnarray} {\cal L }_D &=& \bar{q}(x) ( i \partial\kern-.5em\slash + im + i \mu \gamma _0 ) q(x) \; + \; G_0 [ \bar{q} q(x) \, \bar{q} q(x) \, - \, \bar{q} \gamma _5 q(x) \, \bar{q} \gamma _5 q(x) ] \label{eq:40} \\ &+& [ \bar{q} \tau ^{\alpha } q(x) \, G^{\alpha \beta } \bar{q} \tau ^{\beta } q(x) \, - \, \bar{q} \gamma _5 \tau ^{\alpha } q(x) \, G^{\alpha \beta } \bar{q} \gamma _5 \tau ^{\beta } q(x) ] \; , \nonumber \end{eqnarray} The bosonization procedure -- introducing scalar and pion fields -- is unchanged by the presence of the chemical potential. The gap equations (\ref{eq:7},\ref{eq:8}) acquire an additional part due to finite density, i.e. 
\begin{eqnarray} - \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \right\} &-& I_F(M_0,M) \; + \; \frac{1}{G_0} \left( M_0 -m \right) \; = \; 0 \; , \label{eq:41} \\ - \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \tau ^{\beta } \right\} &-& I_F^\beta (M_0,M) \; + \; \left( G^{-1}\right )^{\beta \gamma } \tilde{M} ^{\gamma } \; = \; 0 \; , \label{eq:42} \end{eqnarray} where \begin{eqnarray} I_F &=& \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i \mu \gamma_0 + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \right\} - \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \right\} \label{eq:43} \\ I_F^\beta &=& \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i \mu \gamma_0 + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \tau ^{\beta } \right\} - \frac{1}{V_4} \hbox{Tr} \left\{ \frac{ i }{ i \partial\kern-.5em\slash + i M_0 + i \tilde{M}^{\alpha } \tau ^{\alpha } } \tau ^{\beta } \right\} \; . \label{eq:44} \end{eqnarray} One easily verifies that both functions $I_F$ and $I_F^\beta $ are finite and need no regularization. In order to calculate these functions, we first perform the color trace by introducing the eigenvectors of the matrix $M^\alpha \tau ^\alpha $ defined in (\ref{eq:12}). After taking the trace over Dirac indices, the function $I_F$ becomes \begin{equation} I_F(M_0,M) \; = \; \int \frac{ d^{4}k }{ (2\pi )^4 } \; \left[ \frac{ 4 ( M_0 + iM ) }{ (k_0 + i \mu )^2 + (a+ib)^2 } \, - \, \frac{ 4 ( M_0 + iM ) }{ k_0 ^2 + (a+ib)^2 } \right] \; + \; \left( M \rightarrow -M \right) \; , \label{eq:45} \end{equation} where the functions $a$ and $b$ depend on the momentum $\vec{k}$ as defined in (\ref{eq:24},\ref{eq:25}). 
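The effect of the $k_0$ integration in (\ref{eq:45}) can be made explicit for a real mass ($b=0$): the $\mu$-dependent part of $\int \frac{dk_0}{2\pi} \left[ ((k_0+i\mu)^2+a^2)^{-1} - (k_0^2+a^2)^{-1} \right]$ vanishes as long as $\mu < a$, because the shift does not move any pole across the real axis, and equals $-1/2a$ once $\mu > a$, i.e. once the momentum lies inside the Fermi sphere. A numerical sketch of this contour statement (parameter values arbitrary):

```python
import math

def mu_shift_integral(mu, a, L=200.0, N=200000):
    """int dk0/(2 pi) of 1/((k0+i*mu)^2+a^2) - 1/(k0^2+a^2), midpoint rule."""
    dk = 2.0 * L / N
    total = 0.0
    for i in range(N):
        k0 = -L + (i + 0.5) * dk
        shifted = 1.0 / ((k0 + 1j * mu) ** 2 + a * a)
        total += (shifted.real - 1.0 / (k0 * k0 + a * a)) * dk
    return total / (2.0 * math.pi)        # imaginary part is odd and drops out

assert abs(mu_shift_integral(0.5, 1.0)) < 1e-3          # mu < a: no contribution
assert abs(mu_shift_integral(2.0, 1.0) + 0.5) < 1e-3    # mu > a: equals -1/(2a)
```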
It is now straightforward to evaluate the $k_0$ integration in (\ref{eq:45}): \begin{equation} I_F(M_0,M) \; = \; - i \int _{ a( \vec{k}^2 ) < \mu } \frac{ d^{3}k }{ (2\pi )^3 } \; \frac{ 2 (M_0 + iM) }{ ia - b } \; + \; \left( M \rightarrow -M \right) \; . \label{eq:46} \end{equation} One observes that the color trace again enforces that the function $I_F$ be real. Analogous considerations hold for the function $I_F^\beta $. The final result for the gap equations (\ref{eq:41},\ref{eq:42}) is \begin{eqnarray} G_0^{-1} (M_0 - m) &=& M_0 \, I_+(M_0,M) \; - \; \frac{2}{\pi ^2 } \int_0^{k_f} dk \; k^2 \; \frac{ M_0 a + M b }{ a^2+b^2 } \, , \label{eq:47} \\ G_c^{-1} M &=& M \, I_-(M_0,M) \; - \; \frac{2}{\pi ^2} \int_0^{k_f} dk \; k^2 \; \frac{ Ma - M_0 b }{ a^2+b^2 } \, , \label{eq:48} \end{eqnarray} where the functions $I_{\pm }$ are defined in (\ref{eq:31a}) and $a(\vec{k}^2)$ and $b(\vec{k}^2)$ are defined in (\ref{eq:24}) and (\ref{eq:25}) respectively. The upper bound of the integration over the three-momentum $\vec{k}$ is provided by the Fermi sphere of radius $k_f$ where $k_f$ is defined by $a(k_f^2)= \mu $. The solutions of the coupled system (\ref{eq:47},\ref{eq:48}) provide the constituent quark mass $M_0$ and the mass $M$ in the colored channel as a function of the chemical potential $\mu $. For physical applications, it is convenient to express the chemical potential in terms of the baryonic density defined by \begin{equation} \rho _B \; := \; -i \left( \frac{ \partial \ln Z }{ \partial \mu } (\mu ) \, - \, \frac{ \partial \ln Z }{ \partial \mu } (\mu =0) \right) \; . \label{eq:49} \end{equation} {}From (\ref{eq:1}) and the Lagrangian ${\cal L}_D$ at finite chemical potential $\mu $ in (\ref{eq:40}), one obtains \begin{equation} \frac{ \partial \ln Z }{ \partial \mu } (\mu ) \; = \; \hbox{Tr} \left\{ \frac{ -1 }{ (k_0+i\mu) \gamma_0 + \vec{k} \vec{\gamma } + i (M_0 + iM) } \gamma _0 \right\} \; . 
\label{eq:50} \end{equation} A calculation along the line sketched above yields the simple result \begin{equation} \rho _B \; = \; 2 \int _{ a( \vec{k}^2 ) < \mu } \frac{ d^{3}k }{ (2\pi )^3 } \; . \label{eq:51} \end{equation} This provides the familiar relation $k_f^3 = 3\pi ^2 \rho _B$, verifying that this relation also holds in the context of the augmented model (\ref{eq:1}). We have studied the solutions $M_0$ and $M$ of the gap equations (\ref{eq:47},\ref{eq:48}) as a function of the Fermi momentum $k_f$. The result is shown in Figure \ref{fig:6}. The coupling strengths $G_0$ and $G_c$ were chosen in order for the system to be in the confined phase $(M \not=0 )$ at zero density. For increasing Fermi momentum $k_f$ one observes an increase in $M$ up to a critical momentum $k_f^c$, where $M$ rapidly drops to zero, implying that the deconfined phase is realized at high density. The behavior of the color-singlet constituent quark mass $M_0$ is different. It smoothly decreases and vanishes at $k_f^c$. This behavior is different from that found at finite temperature (i.e. section \ref{sec:4}) where the phase transition causes a discontinuity of $M_0$ at the critical temperature. The deconfining phase transition at finite density is accompanied by the restoration of chiral symmetry as in the temperature case. Figure \ref{fig:7} shows the critical color-triplet coupling $G_c^{(crit)}$ as a function of the Fermi momentum $k_f$. The dependence of $G_c^{(crit)}$ on the Fermi momentum $k_f$ is qualitatively the same as the dependence on the temperature (compare Figure \ref{fig:5}). The critical density is nearly independent of the current mass of the quarks. \section{Conclusions} \indent\indent This paper describes how the NJL model can be modified minimally so as to take into account quark confinement.
Confinement is understood in the sense that quark-anti-quark thresholds which plague the standard NJL model are screened by a random color background field: physical quantities are free of unwanted colored excitations. The mechanism that confines the quarks is analogous to Anderson localization in electronic systems, and deconfinement can be induced by temperature and/or density in a way that seems to be consistent with QCD. In this model, deconfinement and chiral symmetry restoration occur at the same critical point. We have used, for simplicity, the color $SU(2)$ group, but we have no reason to expect that the qualitative features would be modified if we were to consider the realistic $SU(3)$ gauge group. For applications to phenomenology, the $SU(3)$ gauge group will have to be treated; the extension to color $SU(3)$ remains to be worked out. The problem that we hope to be able to resolve with the confining NJL model is to describe how hadrons -- both mesons and baryons -- behave in medium at finite temperature and density. Since confinement is suitably implemented in the model, we can treat the excitations that are not amenable to the conventional NJL model, such as scalar and vector mesons. For instance, we should be able to ``derive" the BR scaling predicted in mean field of effective chiral Lagrangians \cite{br91} and address the properties of hadrons in hot and dense medium created in heavy-ion collisions or compact star matter \cite{newbr}. A microscopic model of the kind presented here will have a certain advantage over the ``macroscopic" treatment of \cite{br91}. \newpage \subsubsection*{Acknowledgments} \indent\indent One of us (K.L.) is indebted to J.\ Gasser for helpful discussions on the analytic structure of Green's functions. Part of this work was done while the authors were participating in the 1995 Spring program on ``Chiral dynamics in hadrons and nuclei" at the Institute for Nuclear Theory, University of Washington, Seattle.
We would like to thank the participants for helpful discussions. We would also like to thank the INT for the hospitality and the Department of Energy for partial support. \bigskip \section*{Appendices} \renewcommand{\thesubsection}{\Alph{subsection}} \renewcommand{\theequation}{\Alph{subsection}.\arabic{equation}} \setcounter{subsection}{0} \subsection{Some ingredients for the scalar correlation function } \label{app:a} \setcounter{equation}{0} \indent\indent Here we calculate the functions $\Pi_s^0 (p^2)$, $K^{\alpha }(p^2)$ and $\Pi _s^{\alpha \beta }(p^2)$, defined in (\ref{eq:e2},\ref{eq:e3},\ref{eq:e4}) that enter in the scalar correlation function (\ref{eq:9}). For this purpose, we need to evaluate the trace, which extends over Lorentz- and color-space, of the functions \begin{equation} \hbox{tr} \left\{ S(k+p) S(k) \right\} \; , \hbox to 1 true cm {\hfill } \hbox{tr} \left\{ \tau ^{\alpha } S(k+p) \tau ^{\beta } S(k) \right\} \; , \hbox to 1 true cm {\hfill } \hbox{tr} \left\{ \tau ^{\alpha } S(k+p) S(k) \right\} \; , \label{eq:a1} \end{equation} where the fermion propagator $S(k)$, (\ref{eq:e5}), \begin{equation} S(k) \; := \; \frac{ 1 }{ k\kern-.5em\slash \, + \, i \left( M_0 + i M^{\alpha } \tau ^{\alpha } \right) } \; \label{eq:a2} \end{equation} possesses a non-trivial color structure. The Lorentz trace is straightforward. The color trace is most conveniently evaluated by introducing the eigenvectors $\vert \pm \rangle $ of the matrix $M^\alpha \tau ^\alpha $ which form a complete set in color space giving rise to a specific representation of the unit element, i.e. \begin{equation} 1 \; = \; \sum _{i=\pm } \vert i \rangle \langle i \vert \; . 
\label{eq:a3} \end{equation} For instance, the color trace of the first expression in (\ref{eq:a1}) is \begin{equation} \sum _{i,l=\pm } \hbox{tr}_L \left\{ \langle i \vert S(k+p) \vert l \rangle \langle l \vert S(k) \vert i \rangle \right\} \; , \label{eq:a4} \end{equation} where $\hbox{tr} _L $ indicates the trace over Lorentz indices only. Using \begin{equation} S(k) \vert \pm \rangle \; = \; \frac{ 1 }{ k\kern-.5em\slash \, + \, i \left( M_0 \pm i M \right) } \, \vert \pm \rangle \; , \label{eq:a5} \end{equation} one obtains for (\ref{eq:a4}) \begin{equation} \hbox{tr} _L \left\{ s(k+p) s(k) \right\} \; + \; (M \rightarrow -M) \; , \label{eq:a6} \end{equation} where \begin{equation} s(k) \; = \; \frac{1}{ k\kern-.5em\slash + i A } \; \hbox to 2cm{\hfill with \hfill } A = M_0 + i M \; . \label{eq:a7} \end{equation} It is now straightforward to calculate $\Pi _s^0(p^2)$ in (\ref{eq:e2}): \begin{equation} \Pi _s^0 (p^2) \; = \; \frac{1}{G_0} \; - \; \int \frac{ d^4k }{ (2\pi )^4 } \left\{ \frac{ \hbox{tr} _L \, (k\kern-.5em\slash + p\kern-.5em\slash - iA)(k\kern-.5em\slash -iA) }{ \left[ (k+p)^2 + A^2 \right] \, (k^2 + A^2) } \; + \; (M \rightarrow -M) \right\} \; . \label{eq:a8} \end{equation} The Lorentz trace can be easily performed. Introducing Feynman's parametrization, one has \begin{equation} \Pi _s^0 (p^2) \; = \; \frac{1}{G_0} \; - \; 4 \int _0^1 d\alpha \; \int \frac{ d^4k }{ (2\pi )^4 } \left\{ \frac{ k^2 + kp - A^2 }{ \left[ (k+ \alpha p)^2 + Q \right]^2 } \; + \; (M \rightarrow -M) \right\} \; , \label{eq:a9} \end{equation} where $Q=\alpha (1- \alpha ) p^2 +A^2 $.
After the substitution $q= k + \alpha p$, we obtain the final result \begin{eqnarray} \Pi _s^0 (p^2) &=& \frac{1}{G_0} \; - \; 4 \int _0^1 d\alpha \; \int \frac{ d^4q }{ (2\pi )^4 } \left\{ \frac{ q^2 - Q }{ \left[ q^2 + Q \right]^2 } \; + \; (M \rightarrow -M) \right\} \; , \label{eq:a10} \\ &=& \frac{1}{ G_0 } \; - \; H_0(p^2) \; , \end{eqnarray} with $H_0(p^2)$ defined in (\ref{eq:e8}). Since the integral in (\ref{eq:a10}) is divergent, we cut off the $q$-integral by a sharp O(4) cutoff $\Lambda $. This regularization procedure is included in the definition of our low-energy effective quark theory. We have ensured (see section \ref{sec:ch}) that chiral symmetry is not violated by the cutoff procedure. For the evaluation of $K^\alpha (p^2)$, we proceed along the same line as sketched above. We first perform the color trace in the last expression of (\ref{eq:a1}): \begin{equation} \sum _{i,l=\pm } \hbox{tr}_L \left\{ \langle i \vert \tau ^\alpha S(k+p) \vert l \rangle \langle l \vert S(k) \vert i \rangle \right\} \; = \; \sum _{i,l=\pm } \hbox{tr}_L \left\{ \langle i \vert \tau ^\alpha \vert l \rangle s(k+p) s(k) \langle l \vert i \rangle \right\} \; . \label{eq:a11} \end{equation} {}From (\ref{eq:12a}) and the orthogonality of the eigenvectors $\vert \pm \rangle $, we find \begin{equation} K^{\alpha }(p^2) \; = \; -i \frac{ M^\alpha }{M} \; \int \frac{ d^4k }{ (2\pi )^4 } \hbox{tr} _L \left\{ s(k+p) s(k) \right\} \; - \; (M \rightarrow -M) \; . \label{eq:a12} \end{equation} For the further calculation of $K^\alpha $, one may directly employ the above results to finally obtain \begin{equation} K^\alpha (p^2) \; = \; - 4i \frac{ M^\alpha }{M} \int _0^1 d\alpha \; \int \frac{ d^4q }{ (2\pi )^4 } \left\{ \frac{ q^2 - Q }{ \left[ q^2 + Q \right]^2 } \; - \; (M \rightarrow -M) \right\} \; , \label{eq:a13} \end{equation} and therefore $K^{\alpha } (p^2) = \frac{ M^\alpha }{M} H_v(p^2)$ with $H_v(p^2)$ defined in (\ref{eq:e9}).
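The color-trace reduction used throughout this appendix — any color trace of a function of $M^\alpha \tau^\alpha$ collapses into the two eigenvalue contributions $M \rightarrow \pm M$ — can be checked directly with explicit $2\times 2$ Pauli matrices: for a real vector $M^\alpha$ one has $\hbox{tr}_c \, [\, z + i M^\alpha \tau^\alpha \,]^{-1} = (z+iM)^{-1} + (z-iM)^{-1}$ with $M = |\vec{M}|$. A small sketch (the complex number $z$ stands in for the remaining Lorentz structure and is arbitrary here):

```python
import math, random

# Pauli matrices tau^alpha of the SU(2) color algebra
SIGMA = (((0, 1), (1, 0)),
         ((0, -1j), (1j, 0)),
         ((1, 0), (0, -1)))

def trace_inverse(z, Mvec):
    """tr_c of (z*1 + i M^alpha tau^alpha)^(-1) by explicit 2x2 inversion."""
    A = [[z if r == c else 0 for c in range(2)] for r in range(2)]
    for Ma, s in zip(Mvec, SIGMA):
        for r in range(2):
            for c in range(2):
                A[r][c] += 1j * Ma * s[r][c]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return (A[0][0] + A[1][1]) / det     # for 2x2: tr(A^-1) = tr(A)/det(A)

random.seed(7)
Mvec = [random.uniform(-1.0, 1.0) for _ in range(3)]
M = math.sqrt(sum(x * x for x in Mvec))
z = 0.9 + 0.4j                           # arbitrary complex test value
expected = 1.0 / (z + 1j * M) + 1.0 / (z - 1j * M)
assert abs(trace_inverse(z, Mvec) - expected) < 1e-12
```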
It remains to show that $M^\alpha $ is an eigenvector of the polarization matrix $\Pi _s^{\alpha \beta }$. To do this, we first evaluate the color trace of \begin{equation} \hbox{tr} \left\{ \tau ^{\alpha } S(k+p) \tau ^{\beta } \frac{ M^\beta }{M} S(k) \right\} \; = \; \sum _{i,l=\pm } \hbox{tr}_L \left\{ \langle i \vert \tau ^\alpha \vert l \rangle s(k+p) s(k) \langle l \vert \tau ^{\beta } \frac{ M^\beta }{M} \vert i \rangle \right\} \; . \label{eq:a14} \end{equation} Since $\vert \pm \rangle $ are eigenvectors of $\tau ^\beta M^\beta $, the expression (\ref{eq:a14}) is \begin{equation} \frac{ M^\alpha }{M} \hbox{tr} _L \left\{ s(k+p) s(k) \right\} \; + \; (M \rightarrow -M) \; . \label{eq:a15} \end{equation} This expression was already calculated to obtain $\Pi _s^0(p^2)$. Since $M^\beta $ is also an eigenvector of the interaction matrix $G^{\alpha \beta }$ (\ref{eq:18}), the final result is \begin{equation} \Pi _s^{\alpha \beta } (p^2) \frac{ M^\beta }{M} \; = \; \left( \frac{ 1}{ G_c} - H_0(p^2) \right) \; \frac{M^\alpha }{M} \; . \label{eq:a16} \end{equation} \subsection{ The equations of motion } \label{app:b} \setcounter{equation}{0} \indent\indent In this section we derive an explicit expression for the equations of motion (\ref{eq:7}) and (\ref{eq:8}). To perform the color trace of the quark propagator $S(k)$, we introduce the eigenstates $\vert \pm \rangle $ of the color matrix $M^\alpha \tau ^\alpha $ (compare Appendix \ref{app:a}): \begin{equation} \hbox{tr} \, S(k) \; = \; \hbox{tr} _L \, \sum _{i=\pm} \langle i \vert \, S(k) \, \vert i \rangle \; , \label{eq:b1} \end{equation} where $\hbox{tr} _L$ is the trace over Lorentz indices only. The equation of motion (\ref{eq:7}) becomes \begin{equation} \frac{1}{G_0} (M_0-m) \; - \; i \int \frac{ d^4k }{ (2\pi )^4 } \; \left\{ \frac{ k\kern-.5em\slash - i A }{ k^2 + A^2 } \; + \; (M \rightarrow -M) \right\} \; = \; 0 \; , \label{eq:b2} \end{equation} where $A$ was defined in (\ref{eq:a7}).
It is straightforward to evaluate (\ref{eq:b2}). One obtains \begin{equation} \frac{1}{G_0} (M_0-m) \; - \; 8 M_0 \; \int \frac{ d^4k }{ (2\pi )^4 } \; \frac{ k^2 + M_0^2 + M^2 }{ k^4 +2 (M_0^2 - M^2) k^2 + (M_0^2+M^2)^2 }=0 \; . \label{eq:b3} \end{equation} Setting $\tilde{M}^\alpha = i M^\alpha $, we derive the explicit form of the remaining equation of motion (\ref{eq:8}) in a similar fashion. The color-triplet part of the quark propagator, \ $\hbox{tr} \, S(k) \tau ^\alpha $, is \begin{equation} \hbox{tr} _L \sum _{i=\pm} \langle i \vert \, S(k) \tau ^\alpha \, \vert i \rangle \; = \; \hbox{tr} _L \sum _{i=\pm} s(k) \langle i \vert \, \tau ^\alpha \, \vert i \rangle \; = \; \hbox{tr} _L s(k) \frac{ M^{\alpha } }{M} \, - \, (M \rightarrow -M) \; , \label{eq:b4} \end{equation} with $s(k)$ from (\ref{eq:a7}). If $M^\alpha $ is an eigenstate of the interaction matrix, i.e.\ $G^{\alpha \beta } M^{\beta } = G_c \, M^\alpha $, the equation of motion (\ref{eq:8}) reduces to \begin{equation} \frac{i}{G_c} M \; - \; i \, (-i) \int \frac{ d^4k }{ (2\pi )^4 } \; \left\{ \frac{ k\kern-.5em\slash - i A }{ k^2 + A^2 } \; - \; (M \rightarrow -M) \right\} \; = \; 0 \; . \label{eq:b5} \end{equation} A direct calculation yields \begin{equation} \frac{1}{G_c} M \; - \; 8 M \; \int \frac{ d^4k }{ (2\pi )^4 } \; \frac{ k^2 - M_0^2 - M^2 }{ k^4 +2 (M_0^2 - M^2) k^2 + (M_0^2+M^2)^2 } \; = \; 0 \; . \label{eq:b6} \end{equation} Both integrals in (\ref{eq:b3}) and (\ref{eq:b6}) can be expressed with the help of the functions $I_0$ and $I_v$ defined, respectively, in (\ref{eq:20}) and (\ref{eq:20a}), to obtain the desired result presented in (\ref{eq:19}) and (\ref{eq:19a}). \subsection{ Analytic structure of the scalar correlation function } \label{app:b2} \setcounter{equation}{0} \indent\indent As argued in section \ref{sec:3.2}, we believe that the approximations made to derive the scalar correlation function do not violate any fundamental requirements of quantum field theory.
Nevertheless we observe a non-trivial analytic structure of the scalar correlation function which might make one suspect that some constraints may be incorrectly implemented. In this subsection, we show by an explicit calculation that one particular constraint, namely, the stability criterion, is indeed satisfied. Of course there are many more constraints. Here we shall offer one piece of evidence that our model is compatible with a general axiom of quantum field theory. The investigation of further constraints which might provide insight into the possible analytic structure of Green's functions of a confining theory seems very interesting to us and will be relegated to a future work. For completeness, we rederive the stability criterion. Following the standard procedure~\cite{ch84}, the correlation function of a local operator $j(x)$ is written in momentum space as \begin{equation} G(q) \; = \; \int d^4 x \; e^{iqx} \; \langle 0 \vert \, T \, j(x) j(0) \vert 0 \rangle \; , \label{eq:b2.1} \end{equation} where $T$ is the time-ordering operator. Rewriting the time ordering, one has \begin{equation} G(q) \; = \; \int d^4 x \; e^{iqx} \; \theta (x_0) \langle 0 \vert \, [j(x), j(0) ] \, \vert 0 \rangle \; + \; \int d^4 x \; e^{iqx} \; \langle 0 \vert \, j(0) j(x) \vert 0 \rangle \; . \label{eq:b2.2} \end{equation} Inserting a complete set of eigenstates, the last expression becomes \begin{eqnarray} \sum _n \int d^4 x && e^{i(q+p_n)x} \; \langle 0 \vert j(0) \vert n \rangle \langle n \vert j(0) \vert 0 \rangle \label{eq:b2.3} \\ &=& \sum _n (2 \pi )^4 \, \delta ^4 (q+p_n) \; \langle 0 \vert j(0) \vert n \rangle \langle n \vert j(0) \vert 0 \rangle \; . \nonumber \end{eqnarray} In the lab frame $(q^0 > 0)$, there is no contribution from the term (\ref{eq:b2.3}) to the correlator (\ref{eq:b2.1}), if we are dealing with a stable theory, since there are no states with negative energy $(E_n=p_n^0 > 0)$.
We therefore observe that the correlation function vanishes for $x_0$ less than zero, i.e. \begin{equation} \langle 0 \vert \, T \, j(x) j(0) \vert 0 \rangle \; = \; f(x) \; \theta (x_0) \; . \label{eq:b2.4} \end{equation} In the standard case of a constituent quark model, the stability criterion is satisfied for the following reason: the correlation function in momentum space is analytic in the upper $q_0$ complex-plane and possesses cuts at the real axes due to quark anti-quark thresholds (see Figure \ref{fig:b2.1} (a)). For $x_0<0$ one might calculate the Fourier transform by closing the path by a semi-circle in the upper half plane. Since there are no poles or cuts, we conclude that the Fourier transformation yields zero. \indent\indent In order to check the stability criterion in our model, we first study the analytic structure of the scalar correlation function (\ref{eq:21a}). To this aim, we have explicitly calculated the functions $H_{0/v}(p^2)$ in (\ref{eq:e8}) and (\ref{eq:e9}). The result is \begin{eqnarray} H_0 (p^2) &=& h(p^2;M_0,M) + h(p^2;M_0,-M) \; , \nonumber \\ H_v (p^2) &=& -i [ h(p^2;M_0,M) - h(p^2;M_0,-M) ] \label{eq:b2.5} \\ \frac{ 4 \pi ^2 }{ p^2 } \, h(p^2; M_0,M) &=& \frac{\Lambda ^2}{p^2} + \frac{1}{2} \left( \frac{ 4A^2 }{ p^2 } +1 \right) ^{3/2} \ln \frac{ \sqrt{ \frac{ 4A^2 }{ p^2 } +1 } + 1 }{ \sqrt{ \frac{ 4A^2 }{ p^2 } +1 } - 1 } \nonumber \\ &-& \frac{1}{2} \left( \frac{ 6 A^2 }{ p^2 } +1 \right) \ln \left( 1 + \frac{ \Lambda ^2 }{ A^2 } \right) \label{eq:b2.6} \\ &-& \frac{1}{2} \frac{ \left( \frac{ 4A^2 }{ p^2 } +1 \right) \left( \frac{2 \Lambda ^2 + 4A^2 }{ p^2 } +1 \right) }{ \sqrt{ \left( \frac{ 4 \Lambda ^2 + 4A^2 }{ p^2 } +1 \right) } } \ln \frac{ \sqrt{ \left( \frac{ 4 \Lambda ^2 + 4A^2 }{ p^2 } +1 \right) } +1 }{ \sqrt{ \left( \frac{ 4 \Lambda ^2 + 4A^2 }{ p^2 } +1 \right) } -1 } \; , \nonumber \end{eqnarray} where $A= M_0 + i M $. We confine the momentum to $\vert p \vert ^2 \le \Lambda ^2 $. 
In this case the only cuts which occur in the upper half $q_0$ complex-plane are shown in Figure \ref{fig:b2.1}(b). Their orientation is given by the phase $\phi $ of the complex number $M_0 + i M$. In order to perform the Fourier transform to the coordinate space, we close the path in the upper half plane by the contour depicted in Figure \ref{fig:b2.1}(b). It seems that the Fourier transform does not yield zero as would be required by the stability criterion, since there are now contributions from the cuts. Our main observation is, however, that these contributions exactly cancel. In order to see this, we first note that the correlation function $\Delta (q_0)$ possesses the Schwarz reflection property, i.e. \begin{equation} \Delta (z^{\ast }) \; = \; \Delta ^{\ast } (z) \; , \label{eq:b2.7} \end{equation} as the functions $H_{0/v}$ do. This can be shown either by a direct inspection of (\ref{eq:b2.5}) or, which is more instructive, by tracing it back to the fact that we consider color singlet correlation functions. In the latter case, the appropriate superposition of terms with $M$ and $-M$ provides (\ref{eq:b2.7}). Equation (\ref{eq:b2.7}) immediately implies that the imaginary part of the integration over the contour in the half-plane vanishes, since this contour is chosen to be symmetric with respect to the imaginary axis. In order to prove that the contributions from the cuts cancel, it is sufficient to show that the contribution from either cut is purely imaginary. That this is indeed the case holds on very general grounds. Although this may be well-known to the readers, we nevertheless feel that we should give some arguments on this matter. We have to prove that integrals of the type \begin{equation} I := \int _{\cal C} dx \; K( \ln (x) ) f(x) \label{eq:b2.8} \end{equation} produce purely imaginary results, where ${\cal C}$ is the contour surrounding the cut (like the contour $ACB$ in Figure \ref{fig:b2.1}(b)).
In fact, one has to study the more general integrand $ K[ \ln (g(x)) ] f(x)$, the integral of which, however, can be traced back to the one in (\ref{eq:b2.8}) by a change of variable $x \rightarrow y= g(x)$ and a redefinition of the function $f(x)$. This change of variables provides us with a linear cut in the complex plane. If we define the cut of the logarithm to coincide with the negative real-axis, a straightforward calculation yields \begin{equation} I = \int _{-R}^{0} dx \; f(x) \; \left\{ K( \ln (-x) + i \pi ) \, - \, K( \ln (-x) - i \pi ) \right\} \; , \label{eq:b2.9} \end{equation} where $R$ is the length of the cut under consideration. To continue, we have to assume that the function $K(x)$ possesses a Laurent expansion around $x=0$. It is straightforward to check that this is the case in the context of the scalar correlation function of our model. We then have \begin{equation} I = \sum _{n=-m}^{\infty } c_n \int _{-R}^{0} dx \; f(x) \; \left\{ ( \ln (-x) + i \pi )^n \, - \, ( \ln (-x) - i \pi )^n \right\} \; . \label{eq:b2.10} \end{equation} The even powers of $i \pi $ drop out and we end up with a purely imaginary result. This completes the proof. To summarize our results, the scalar correlation function possesses a nontrivial analytic structure in momentum space, namely cuts in the upper half $q_0$ complex-plane. We have proven that this particular analytic structure is compatible with the stability criterion (\ref{eq:b2.4}) due to an intrinsic cancellation mechanism of the contributions from the cuts. \indent\indent We finally present the scalar correlation function in Minkowski space. {}From the very beginning, our model is given in Euclidean space, and the Green's functions in Minkowski space are defined by the Wick rotation (see Figure \ref{fig:b2.2}). In our case, this Wick rotation is non-trivial, since we have to take into account the contributions from the cuts in Figure \ref{fig:b2.2}. 
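The purely imaginary nature of each Laurent term can also be seen directly: for real $\ln(-x)$, the bracket in (\ref{eq:b2.10}) is of the form $z^n - (z^{\ast})^n = 2i\,\mathrm{Im}\, z^n$, for positive and negative integer powers alike. A minimal plain-Python illustration (the sample values of $\ln(-x)$ are arbitrary):

```python
import math

def cut_term(L, n):
    # (ln(-x) + i pi)^n - (ln(-x) - i pi)^n  with  L = ln(-x)  real;
    # the two terms are complex conjugates, so the difference is imaginary
    z = complex(L, math.pi)
    return z ** n - z.conjugate() ** n

# a few Laurent powers, including the pole part n < 0
terms = [cut_term(L, n) for L in (-2.0, 0.3, 1.7) for n in range(-3, 6)]
```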
Suppose the scalar correlation function in Euclidean space is part of some scattering amplitude, e.g. \begin{equation} S(p^2) \; = \; \int _{-\infty }^{\infty } dq_0^E \; \Delta (q^2) \, {\cal G }(q^2,p^2) \label{eq:b2.11} \end{equation} where ${\cal G}(q^2,p^2)$ is assumed to be analytic in the first and third quadrants of the $q_0^E$-plane. From Figure \ref{fig:b2.2} we have \begin{equation} S(p^2) \; = \; \int _{-i\infty }^{i\infty } dq_0^E \; \Delta (q^2) \, {\cal G }(q^2,p^2) \; + \; 2i \int _{-\infty }^{\infty } dx \; I_{cut} \; , \label{eq:b2.12} \end{equation} where we have used that the contribution $I_{cut}$ from one cut is of the form (\ref{eq:b2.10}) and purely imaginary ($I_{cut}$ is real). Performing the substitution $q_0^M = -i q_0^E$ and redefining $x=q_0^M$, we obtain \begin{equation} S(p^2) \; = \; i \int _{-\infty }^{\infty } dq_0^M \; \Delta (-q^2) \, {\cal G }(-q^2,p^2) \; + \; 2i \int _{-\infty }^{\infty } dq_0^M \; I_{cut} \; , \label{eq:b2.13} \end{equation} and arrive at the scalar correlation function in Minkowski space \begin{equation} \Delta ^M (q^2) \; = \; \Delta (-q^2) \; + \; 2 I_{cut} \label{eq:b2.14} \end{equation} Note that the contribution from the cuts is real. We expect $I_{cut}$ to contribute only a background to $\Delta ^M (q^2) $, whereas the significant structure, such as imaginary parts from thresholds and poles from particle states, is produced by the Euclidean scalar correlation function at negative momentum squared. \subsection{ Reduction of the Bethe-Salpeter amplitude } \label{app:c} \setcounter{equation}{0} \indent\indent In order to solve the Bethe-Salpeter equation (\ref{eq:h2}) for the hidden color structure of the pion, one has to perform the color trace of polarizations containing two quark propagators $S(k)$. 
Since the calculational technique closely parallels that used to obtain the scalar correlation function, we only briefly sketch the derivation and refer the reader to Appendix \ref{app:a} for further details. The traces of interest are \begin{eqnarray} \hbox{tr} \{ \gamma _5 S(k+p) \gamma _5 S(k) \} &=& \hbox{tr} _L \{ \gamma _5 s(k+p) \gamma _5 s(k) \} \, + \, (M \rightarrow -M) \; , \label{eq:c1} \\ \hbox{tr} \{ \tau ^\alpha \gamma _5 S(k+p) \gamma _5 S(k) \} &=& \frac{M^{\alpha }}{M} \, \hbox{tr} _L \{ \gamma _5 s(k+p) \gamma _5 s(k) \} \, - \, (M \rightarrow -M) \; , \label{eq:c2} \\ \hbox{tr} \{ \tau ^\alpha \gamma _5 S(k+p) \tau ^\beta \gamma _5 S(k) \} \frac{M^\beta }{M} &=& \frac{M^{\alpha }}{M} \, \hbox{tr} _L \{ \gamma _5 s(k+p) \gamma _5 s(k) \} \, + \, (M \rightarrow -M) \; , \label{eq:c3} \end{eqnarray} with $s(k)$ defined in (\ref{eq:a7}). The next step is to calculate the trace over Lorentz indices $\hbox{tr} _L$: \begin{equation} \hbox{tr} _L \left\{ \gamma _5 \frac{ k\kern-.5em\slash + p\kern-.5em\slash - iA }{ (k+p)^2 + A^2 } \gamma _5 \frac{ k\kern-.5em\slash - iA }{ k^2 + A^2 } \right\} \; = \; -4 \frac{ k^2 + A^2 + kp }{ \left[ (k+p)^2 +A^2 \right] \, (k^2 + A^2 ) } \; . \label{eq:c4} \end{equation} To obtain the desired matrix elements entering into (\ref{eq:h2}), an integration over the loop momentum $k$ is required. Introducing Feynman's parametrization and shifting the momentum integration $q= k + \alpha p$ yields \begin{equation} \hbox{Tr} \left\{ \gamma _5 s(k+p) \gamma _5 s(k) \right\} \; = \; -4 \int _0^1 d\alpha \; \int (q) \; \frac{ q^2 - \alpha (1 - \alpha) p^2 +A^2 }{ \left[ q^2 + \alpha (1-\alpha ) p^2 + A^2 \right] ^2 } \; . \label{eq:c5} \end{equation} Inserting (\ref{eq:c5}) into (\ref{eq:c1}-\ref{eq:c3}), we have all the ingredients to derive the result (\ref{eq:h4}). 
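The Lorentz trace (\ref{eq:c4}) can be cross-checked numerically in an explicit Euclidean representation with $\{\gamma_\mu,\gamma_\nu\} = 2\delta_{\mu\nu}$ and $\gamma_5 = \gamma_1\gamma_2\gamma_3\gamma_4$. The sketch below (assuming NumPy; the momenta and the mass parameters are arbitrary test values) evaluates both sides of (\ref{eq:c4}) for random real four-vectors $k$, $p$ and complex $A = M_0 + iM$:

```python
import numpy as np

# Pauli matrices and 2x2 building blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

# hermitian Euclidean gamma matrices: {gamma_mu, gamma_nu} = 2 delta_{mu nu}
g = [np.block([[Z2, -1j * si], [1j * si, Z2]]) for si in (s1, s2, s3)]
g.append(np.block([[Z2, I2], [I2, Z2]]))
g5 = g[0] @ g[1] @ g[2] @ g[3]
I4 = np.eye(4, dtype=complex)

M0, M = 0.3, 0.25
A = M0 + 1j * M
rng = np.random.default_rng(0)
k, p = rng.normal(size=4), rng.normal(size=4)

def slash(q):
    return sum(qi * gi for qi, gi in zip(q, g))

def prop(q):
    # s(q) = (q-slash - iA) / (q^2 + A^2)
    return (slash(q) - 1j * A * I4) / (q @ q + A * A)

lhs = np.trace(g5 @ prop(k + p) @ g5 @ prop(k))
rhs = -4 * (k @ k + A * A + k @ p) / (((k + p) @ (k + p) + A * A) * (k @ k + A * A))
```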
\subsection{ Calculation of the electromagnetic form factor } \label{app:d} \setcounter{equation}{0} \indent\indent Here we evaluate the matrix element (\ref{eq:h18}), which is directly related to the electromagnetic form factor. For this, we first perform the color trace by the method employed several times before (for details see Appendix \ref{app:a}): \begin{equation} (P_0^2 - P_1^2) \, W_\mu ^0 \; - \; 2 P_0 P_1 \, W_\mu ^v \; . \label{eq:d1} \end{equation} The functions \begin{eqnarray} W_\mu ^0 &=& - \int \frac{ d^4k }{ (2\pi )^4 } \; \hbox{tr}_L \left\{ \gamma _5 \, s(k - \frac{p}{2}) \, \gamma _\mu \, s(k + \frac{p}{2}) \, \gamma _5 \, s(k - \frac{p}{2}) \right\} \; + \; (M \rightarrow -M) \; ,\nonumber\\ \label{eq:d2} \\ W_\mu ^v &=& + i \int \frac{ d^4k }{ (2\pi )^4 } \; \hbox{tr}_L \left\{ \gamma _5 \, s(k - \frac{p}{2}) \, \gamma _\mu \, s(k + \frac{p}{2}) \, \gamma _5 \, s(k - \frac{p}{2}) \right\} \; - \; (M \rightarrow -M)\nonumber\\ \label{eq:d3} \end{eqnarray} are introduced for abbreviation. The propagator $s(k)$ is given by (\ref{eq:a7}). Performing the Lorentz trace of the terms under investigation in (\ref{eq:d2},\ref{eq:d3}), i.e. \begin{equation} \frac{ 4 ( k_\mu + \frac{ p_\mu }{2} ) }{ \left[ (k- \frac{p}{2})^2 +A^2 \right] \, \left[ (k+ \frac{p}{2})^2 +A^2 \right] } \; , \end{equation} one obtains \begin{eqnarray} W_\mu ^0 &=& \frac{1}{2} p_\mu \; \int \frac{ d^4k }{ (2\pi )^4 } \; \frac{ 1 }{ \left[ (k- \frac{p}{2})^2 +A^2 \right] \, \left[ (k+ \frac{p}{2})^2 +A^2 \right] } \; + \; (M \rightarrow -M) \label{eq:d5} \\ W_\mu ^v &=& - \frac{i}{2} p_\mu \; \int \frac{ d^4k }{ (2\pi )^4 } \; \frac{ 1 }{ \left[ (k- \frac{p}{2})^2 +A^2 \right] \, \left[ (k+ \frac{p}{2})^2 +A^2 \right] } \; - \; (M \rightarrow -M) \;. \label{eq:d6} \end{eqnarray} It is of particular interest to expand these functions for small $p^2$ because they are required to normalize the pion Bethe-Salpeter amplitude and to calculate the pion charge radius. 
A direct calculation yields \begin{equation} W_\mu ^{0/v} \; = \; p_\mu \, \left( F_{0/v} \; - \; p^2 \, R_{0/v} \; + \; O(p^4) \right) \; , \label{eq:d7} \end{equation} where $F_{0/v}$ are defined in (\ref{eq:h8},\ref{eq:h9}) and \begin{eqnarray} R_0 &=& \frac{1}{2} \int \frac{ d^4k }{ (2\pi )^4 } \; \frac{ A^2 - k^2 }{ \left( k^2 + A^2 \right)^4 } \; + \; (M \rightarrow -M) \; , \label{eq:d8} \\ R_v &=& -\frac{i}{2}\int \frac{ d^4k }{ (2\pi )^4 } \; \frac{ A^2 - k^2 }{ \left( k^2 + A^2 \right)^4 } \; - \; (M \rightarrow -M) \; . \label{eq:d9} \end{eqnarray} Inserting (\ref{eq:d7}) in (\ref{eq:d1}) leads to the desired matrix element. In order to obtain the electromagnetic form factor for non-vanishing momentum in Minkowski space, an analytic continuation of the Euclidean momentum squared to negative values is needed. This analytic continuation is difficult from a technical point of view, since the momentum integration in (\ref{eq:d5}, \ref{eq:d6}) can be performed only numerically. Introducing \begin{equation} K_\pm = (k \pm \frac{p}{2})^2 = k^2 - \frac{ p^2}{4} \pm k \cdot p \; , \hbox to 1 true cm {\hfill } A_\pm = M_0 \pm iM \; , \label{eq:d11} \end{equation} a term of interest is \begin{equation} \frac{ 1 }{ \left[ K_- + A_+^2 \right] \, \left[ K_+ + A_+^2 \right] } \; + \; (A_+ \rightarrow A_-) \; . \label{eq:d12} \end{equation} We first remove the complex parts that enter via $A_\pm $, namely, \begin{equation} \frac{ K_+ K_- + B_- (K_- + K_+) + B_-^2- 4 M_0^2 M^2 }{ \left[ K_- ^2 + 2 K_- B_- + B_+^2 \right] \, \left[ K_+ ^2 + 2 K_+ B_- + B_+^2 \right] } \; , \label{eq:13} \end{equation} where $B_{\pm } = M_0^2 \pm M^2$. 
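The effect of the $(A_+ \rightarrow A_-)$ pairing in (\ref{eq:d12}) is simply to add a term to its complex conjugate: $K_\pm$ are real and $A_-^2 = (A_+^2)^{\ast}$, so the sum is twice the real part of either term and manifestly real. A minimal numerical illustration (assuming NumPy; the momenta and the values of $M_0$, $M$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
k, p = rng.normal(size=4), rng.normal(size=4)
M0, M = 0.3, 0.25

Km = (k - p / 2) @ (k - p / 2)      # K_- = (k - p/2)^2, real
Kp = (k + p / 2) @ (k + p / 2)      # K_+ = (k + p/2)^2, real
Ap2 = (M0 + 1j * M) ** 2            # A_+^2;  A_-^2 is its complex conjugate

term = 1 / ((Km + Ap2) * (Kp + Ap2))
paired = term + 1 / ((Km + Ap2.conjugate()) * (Kp + Ap2.conjugate()))
```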
Introducing the angle $\alpha $ between the Euclidean four-vectors $k$ and $p$, the crucial observation is that in (\ref{eq:13}), the external momentum $p$ enters only through even powers: \begin{eqnarray} K_+K_- &=& (k^2 - \frac{p^2}{4})^2 - k^2 p^2 \cos ^2 \alpha \; , \\ K_+^2 + K_-^2 &=& 2 \left[ (k^2 - \frac{p^2}{4})^2 + k^2 p^2 \cos ^2 \alpha \right] \label{eq:d14} \\ K_+ + K_- &=& 2\left( k^2 - \frac{ p^2}{4} \right) \; . \label{eq:d14a} \end{eqnarray} It is now a simple matter to perform the analytic continuation $p^2 \rightarrow - p^2$. We finally present the result for the functions $W_\mu ^0$, $W_\mu ^v$ in (\ref{eq:d5},\ref{eq:d6}): \begin{eqnarray} W_\mu ^0 &=& \frac{1}{8 \pi ^3} p_\mu \; \int dV \; \frac{ \alpha _- + 2 T_- \beta + T_-^2 - 4 M_0^2 M^2 }{ \alpha _-^2 + 4 T_- \alpha _- \beta + 2 T_+^2 \alpha _+ + 4 T_-^2 \alpha _- + 4 T_- T_+^2 \beta + T_+^4 } \; , \label{eq:d15} \\ W_\mu ^v &=& - \frac{1}{8 \pi ^3} p_\mu \; \int dV \; \frac{ 4 M_0 M (\beta + T_-) }{ \alpha _-^2 + 4 T_- \alpha _- \beta + 2 T_+^2 \alpha _+ + 4 T_-^2 \alpha _- + 4 T_- T_+^2 \beta + T_+^4 } \; , \label{eq:d16} \end{eqnarray} where $dV = d \alpha \; \sin ^2 \alpha \, d(k^2) \, k^2 , \, \alpha \in [0,\pi]$ and \begin{equation} \alpha _{\pm } = (k^2 \pm \frac{p^2}{4})^2 \pm k^2 p^2 \cos ^2 \alpha \; , \hbox to 1 true cm {\hfill } \beta = k^2 - \frac{p^2}{4} \; , \hbox to 1 true cm {\hfill } T_\pm = M_0^2 \pm M^2 \; . \label{eq:d17} \end{equation} The integrations over $k^2$ and over the angle $\alpha$ in (\ref{eq:d15},\ref{eq:d16}) are left to a numerical calculation.
\section{Introduction} The family of multilayer graphite is one of the most intriguing paradigms in the realm of modern condensed matter. The simplest model in the group, graphene, has drawn enormous attention from both theoretical and experimental communities\cite{RevModPhys.81.109}. Its linear dispersion at low energy makes it a low-dimensional example of a particle-hole symmetric ultrarelativistic Dirac fermion, whose unconventional electronic properties distinguish it from other semiconductors made up of ordinary nonrelativistic quasiparticles. In particular, when placed in a magnetic field, each Dirac point possesses the anomalous Hall conductivity $\frac{1}{2}\frac{e^2}{h}$ at the filling factor $\nu = 0$ \cite{PhysRevLett.95.146801}, which is uniquely connected with the filled Fermi sea of Dirac particles. Even more fruitful physics emerges as one stacks two layers of graphene on top of each other. In the $AB$-stacked configuration shown in Fig. \ref{fig2}, the low-energy projection of the model forms another family without any relativistic analog: a particle-hole symmetric two-band semiconductor with parabolic dispersion\cite{2013RPPh76e6503M}. It also serves as a model possessing a Fermi surface with a Berry phase of $2\pi$ and is a candidate for the dual description of the $\nu = 1$ fractional quantum Hall state in the context of fermion-vortex duality \cite{PhysRevLett.117.136802}. \begin{figure} \includegraphics[width=0.9\linewidth]{AB.pdf} \caption{The schematic plot for the $AB$-stacked bilayer graphene. The thick bonds represent the upper lattice, while the thin ones represent the lower layer. The overlapping orange sites have the labels $\tilde A = B$. The plot was generated using the PYTHON package PYBINDING \onlinecite{dean_moldovan_2020_4010216}.} \label{fig2} \end{figure} As the background magnetic field is turned on, Landau levels form in bilayer graphene as well. 
The spectrum, nevertheless, possesses two zero-energy bands\cite{PhysRevLett.96.086805}. This feature reshapes our understanding of low-energy physics in quantum Hall systems \cite{Novoselov2006}. In particular, in a large magnetic field, the conventional wisdom and machinery of the lowest Landau level projection have to be modified, since the low-energy sector contains more than holomorphic wave functions in the symmetric gauge, and the system can host exotic quantum phases in the lowest Landau level \cite{PhysRevLett.112.046602}. The dielectric properties and the low-energy excitations in the zero-energy bands under this circumstance were investigated in Refs. \onlinecite{PhysRevB.77.195423} and \onlinecite{PhysRevB.79.165402}. This paper concentrates on the time-reversal odd responses of bilayer graphene to dynamical and inhomogeneous external perturbations in a strong out-of-plane magnetic field in (2+1) dimensions, which has been partially addressed recently in the independent work of Ref. \onlinecite{2019arXiv190909608I}. Here, we adopt a different approach that systematically generates the effective action as a functional of the external gauge field $A_{\mu}$ to all orders in momentum and frequency. Using this, we compute the effective action in a gradient expansion to explore the large-scale dynamics. As a result, we show that for the low-energy model of bilayer graphene in a background magnetic field, the Hall conductivity in an inhomogeneous and time-dependent electric field $E_i(\omega, q)$ at filling factor $N$ is \begin{align} \sigma_H(\omega, q)\approx \frac{N}{2\pi} \bigg(1 + (q\ell)^2 \frac{1-3N^2}{4N}+ \frac{\omega^2}{\omega_c^2}\frac{2N^2-1}{2N^2}\bigg), \end{align} where $\ell$ and $\omega_c$ are the magnetic length and the cyclotron frequency. We also establish a relationship between the Hall viscosity and the coefficient of the $(q\ell)^2$ term. 
In particular, with a high-energy Landau level cutoff $N_c$, the static ($\omega =0$) result reduces to \begin{align} \label{main2}\sigma_H(q)= \sigma_H(0) + (q\ell)^2[\ell^2\eta_H -\epsilon''(B) - \mathcal O(\ln N_c)]. \end{align} It is reminiscent of the Hoyos--Son relation\cite{PhysRevLett.108.066805} for Galilean invariant systems. An explicit formula is derived to compute higher-order corrections in $N^{-1}$ and $N_c^{-1}$. For Galilean invariant systems, the Hoyos--Son relation has been systematically and thoroughly discussed. For instance, Ref. \onlinecite{PhysRevB.86.245309} derives a concrete relation between the nonlocal Hall conductivity and the Hall viscosity. This framework has recently been generalized to anisotropic and lattice-regularized systems in Ref. \onlinecite{PhysRevX.10.021005}. As for graphene-like systems, Ref. \onlinecite{PhysRevB.94.125427} tackles a similar problem for a single layer in a strong magnetic field at low temperature, whereas Ref. \onlinecite{PhysRevB.92.115426} reports a Hoyos--Son-like relation in the interaction-dominated regime using the hydrodynamic approach in the absence of a magnetic field. This paper aims to provide a different generalization by considering the zero-temperature dynamics of a rotationally invariant and particle-hole symmetric system breaking both Galilean and Lorentz symmetries. This paper is organized as follows: In Sec. \ref{EFT} we first review the methodology of the one-loop effective action from a microscopic model and introduce the low-energy two-band model for bilayer graphene. We then begin the presentation of this work by deriving the Feynman rules and the generating functional for the polarization tensor. What follows is the time-reversal odd effective action computed to cubic order in space-time gradients, with emphasis on the coefficient of the Hall conductivity at order $(q\ell)^2$ and an investigation of its relationship with the Hall viscosity. In Sec. 
\ref{VisSus}, we construct the stress tensor and compute the Hall viscosity and orbital magnetic susceptibility for our model. This provides numerical support for the observation established in Sec. \ref{EFT}. Finally, we revisit the conductivity tensor using the Kubo formula in Sec. \ref{KuboCurrent} and derive an exact algebraic relation that connects the Hall conductivity and the Hall viscosity in the absence of space-time symmetry. We then conclude the paper. Details of the computations and an alternative derivation of the stress tensor are given in the appendixes for completeness. \section{the effective action}\label{EFT} \subsection{Methodology} The electromagnetic response can be compactly summarized in the effective action as a functional of the $U(1)$ gauge potential. Starting with a microscopic model for the matter field $\psi$ defined by the action $S[\psi]$, one gauges the charge symmetry by coupling the charge and current densities $j^{\mu} = (\rho,j^{i} )$ to the gauge field $A_{\mu}$. The effective action is defined as \begin{align} \label{Seff}\mathcal S_{\rm eff}[A_{\mu}] = -i\ln \int \mathscr D\psi^{\dagger}\mathscr D\psi\, e^{iS[\psi]+i\int j_{\mu}A^{\mu}}. \end{align} If the external electric and magnetic fields $\mathbf E$ and $B$ take constant values, Eq.~\eqref{Seff} can be computed and serves as an example of the Euler--Heisenberg effective action \cite{KATSNELSON2013160}. As far as the linear response is concerned, $\mathcal S_{\rm eff}$ is usually expanded as a multinomial in $A_{\mu}$. In the $d$-dimensional Fourier space, the effective action under a Gaussian approximation assumes the form \begin{align} \mathcal S_{\rm eff} [A_{\mu}]= &\int \frac{d^dq}{(2\pi)^d}\bigg[\bar{j}^{\mu}A_{\mu} + A_{\mu}(-q)\Pi^{\mu\nu}(q)A_{\nu}(q)\bigg], \end{align} where $\bar{j}^{\mu}$ denotes the ground-state average of the current density. The polarization tensor $\Pi^{\mu\nu}(q)$ encodes the response functions. 
In (2+1) dimensions, the gauge symmetry alone fixes the form of the effective Lagrangian $\mathscr L_{\rm eff}$. Organized by the number of derivatives, the Lagrangian reads \begin{align} \mathscr L_{\rm eff} = &\frac{k}{4\pi} A\, dA\, +\frac{\epsilon}{2}\mathbf E^2 -\frac{1}{2\mu}B^2 \notag\\ & + \alpha \mathbf E\cdot(\nabla B)+\beta\epsilon^{ij}E_i\partial_tE_j+\mathcal O(\partial^4). \end{align} Computing the functional determinant for a specific model yields the parameters $k, \epsilon,\mu$, $\alpha$, and $\beta$. For a fermionic system, computing Eq.~\eqref{Seff} amounts to evaluating the functional determinant of the fermion action. Formally, one can decompose the microscopic Lagrangian for the fermion $\psi$ into the free part $i\psi^{\dagger}D^{-1}\psi$ and the potential part $-v(x)\psi^{\dagger}\psi$. Such a decomposition is straightforward for a free fermion system in an external potential. If a two-particle interaction is present, one can first introduce an auxiliary field by the Hubbard--Stratonovich transformation to decompose the two-particle term. In this way the fermion part of the theory can be integrated out in the path integral, resulting in a functional of $v$. Expanding the functional to second order in $v$ yields \begin{align} &\mathcal S_{\rm eff} = -i\mathrm{tr}\ln[D^{-1}+iv]\notag\\ \label{LoopExpansion}= & -i\mathrm{tr}\ln [D^{-1}] + \mathrm{tr}[Dv] - \frac{i}{2}\mathrm{tr}[DvDv] + O(v^3). \end{align} This formula has an intuitive diagrammatic representation, shown in Fig. \ref{fig1}. The operator $D$, the inverse of the free kernel $D^{-1}$, corresponds to the Feynman propagator of the free theory, and the potential $v$ is the vertex. One can systematically compute Eq.~\eqref{LoopExpansion} using the standard perturbation methods in quantum field theory. \begin{figure} \includegraphics[width=1.0\linewidth]{Seff.pdf} \caption{Diagrammatic expansion of the effective action $\mathcal S_{\rm eff}$. 
Each solid line represents a Feynman propagator $D$ and each wavy line corresponds to the insertion of the external potential $v$. The first and the third diagrams correspond to the trace $\mathrm{tr}[Dv]$, and the diagram in the middle corresponds to the two-point trace $-\frac{i}{2}\mathrm{tr}[DvDv]$.} \label{fig1} \end{figure} \subsection{The model for bilayer graphene} Let us now apply the above machinery to the low-energy model of $AB$-stacked bilayer graphene \cite{PhysRevLett.96.086805}. We depict the lattice structure in Fig. \ref{fig2}. The minimal low-energy model for each valley in the Brillouin zone contains two copies of the Dirac fermions and a hopping amplitude $2m_{\star}$ bridging sites $B$ and $\tilde A$. In the low-energy regime $\omega\ll m_{\star}$, the four-band model can be projected to the two dominant bands. The model for valley $K$ is \begin{align} \label{model}H = -\frac{1}{2m_{\star}}\begin{pmatrix} 0 & \pi^2 \\ (\pi^{\dagger})^2 & 0 \end{pmatrix}, \psi_K = \begin{pmatrix}\phi_A\\ \phi_{\tilde B}\end{pmatrix} \end{align} where $\pi = \pi_x -i\pi_y$ and $\pi_i = p_i + A_i$ is the kinematic momentum. $\psi_K$ is the spinor storing the dominant two bands. For this model, the level $k$ and the dielectric constant $\epsilon$ were derived in Ref. \onlinecite{PhysRevB.77.195423}. The focus of this paper is the computation of $\alpha$ and $\beta$. We first derive the propagator $D$ for model~\eqref{model} in a magnetic field. Turning on a finite background magnetic field in the symmetric gauge $(\bar A_x, \bar A_y) = \frac{B}{2}(-y,x)$, the spectrum of the system consists of non-trivial Landau levels. The field also introduces the magnetic length $\ell = B^{-1/2}$ as the characteristic length scale. To solve the spectrum, it is convenient to introduce the ladder operators \begin{subequations} \begin{align} & a= \frac{i\ell}{\sqrt 2}\pi\\ & a^{\dagger} = -\frac{i\ell}{\sqrt 2}\pi^{\dagger}, \end{align} \end{subequations} which satisfy $[a,a^{\dagger}] = 1$. 
The Hilbert space of the $(\widehat{\mathbf r}, \widehat{\mathbf p})$ operators is then organized partly using the eigenstates of the operator $\widehat n = a^{\dagger}a$, $\{|n\rangle| n\in \mathbb Z^+\}$. The Hamiltonian~\eqref{model} can then be shown to have the eigenstates \footnote{We use the notation $|\cdot)$ to denote the doublet wave functions.} \begin{subequations} \begin{align} \label{zerobands}& |0) = \begin{pmatrix} 0 \\ |0\rangle \end{pmatrix}, |1) = \begin{pmatrix} 0 \\ |1\rangle \end{pmatrix}\\ \label{otherbands}&|n)=\frac{1}{\sqrt{2}}\begin{pmatrix} \mathrm{sgn}(n) ||n|-2\rangle \\ ||n|\rangle \end{pmatrix}, |n|\geq2, \end{align} \end{subequations} associated with the spectrum (Fig. \ref{fig3}) \begin{align} \label{spectrum}\varepsilon_n =\mathrm{sgn}(n) \omega_c\sqrt{|n|(|n|-1)} \end{align} with the cyclotron frequency $\omega_c = B/m_{\star}$. \begin{figure} \includegraphics[width=0.9\linewidth]{LLspectrum.pdf} \caption{The Landau level spectrum of Eq.~\eqref{spectrum} and the Landau level indices associated with each band. The parabolas depict the dispersion in the absence of the background magnetic field. The red dashed line represents the Fermi energy $\epsilon_F$, separating occupied black bands and the empty gray one.} \label{fig3} \end{figure} The degeneracy of each Landau level, as in a two-dimensional electron gas (2DEG) in the symmetric gauge, is encoded by the other set of ladder operators $[b,b^{\dagger}] = 1$ which generates the angular momentum $\widehat{\ell}_z = \frac{1}{2}(\widehat{\mathbf r}\times\widehat{\mathbf p}-\widehat{\mathbf p}\times\hat{\mathbf r})_z$. (See Ref. \onlinecite{jain_2007}.) Note that in this system we no longer treat $\widehat{\ell}_z$ as the canonical angular momentum. The algebraic structure of the second pair of commuting ladder operators still holds. 
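In terms of the ladder operators, $\pi = -i\sqrt{2}\, a/\ell$, so the projected Hamiltonian (\ref{model}) becomes $\omega_c \begin{pmatrix} 0 & a^2 \\ (a^{\dagger})^2 & 0 \end{pmatrix}$, and the spectrum (\ref{spectrum}) together with the zero-energy bands (\ref{zerobands}) can be verified by diagonalizing a truncated matrix representation. A minimal sketch (assuming NumPy; the truncation dimension is arbitrary):

```python
import numpy as np

dim, wc = 12, 1.0                                # truncation size, cyclotron frequency
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # a|n> = sqrt(n)|n-1>
a2 = a @ a
Z = np.zeros((dim, dim))
H = wc * np.block([[Z, a2], [a2.T, Z]])          # omega_c [[0, a^2], [(a^dag)^2, 0]]
levels = np.sort(np.linalg.eigvalsh(H))

# expected: sgn(n) * wc * sqrt(|n|(|n|-1)) for |n| = 2..dim-1, plus four zeros
# (the two physical zero-energy levels |0), |1) and two truncation artifacts
# from the top of the truncated basis)
expected = np.sort([s * wc * np.sqrt(n * (n - 1))
                    for n in range(2, dim) for s in (1, -1)] + [0.0] * 4)
```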
Denoting the Hilbert space with $\frac{1}{\sqrt{m!}}(b^{\dagger})^m|0\rangle = |m\rangle$, the complete basis is $\{|n)|m\rangle\equiv|nm)\}.$ Writing $\xi_n = \varepsilon_n-\mu$, the inverse of the kernel $iD^{-1} = i\partial_t - \sum_{n,m}\xi_{n}|nm)(nm|$ is \begin{align} D(t,t') = \int \frac{d\Omega}{2\pi}e^{-i\Omega(t-t')}\sum_{n,m}\frac{i|nm)(nm|}{\Omega-\xi_n + i\epsilon\, \mathrm{sgn}(\xi_n)}. \end{align} Next, we turn on a perturbation on top of the background, $\bar A_{\mu} \to \bar{A}_{\mu}+A_{\mu}$, leading to the variation of the Hamiltonian $H\to H + v(t,\mathbf x)$, where the vertex $v$ is \begin{align} v(t,\mathbf x) =& A_0 -\frac{1}{2m_{\star}}\{\widehat{\Pi}_i,A_i\} & \notag\\ &- \frac{1}{2m_{\star}}\begin{bmatrix} 0 & (A_x-iA_y)^2 \\ (A_x+iA_y)^2 & 0 \end{bmatrix}, \end{align} with the momentum operators \begin{subequations} \begin{align} & \widehat{\Pi}^x = \begin{pmatrix} 0 & \pi \\ \pi^{\dagger} & 0 \end{pmatrix},\\ & \widehat{\Pi}^y = \begin{pmatrix} 0 & -i\pi \\ i\pi^{\dagger} & 0 \end{pmatrix}. \end{align} \end{subequations} In order to perform a gradient expansion, the Fourier transform needs to be introduced properly, since the coordinates $\mathbf x = (x,y)$ are treated as operators on the Hilbert space. Given a $c$-number vector $\mathbf k$, we recognize that \begin{align} e^{i\mathbf k\cdot\mathbf x} = e^{-|\mathsf k|^2}e^{ia^{\dagger}\mathsf k}e^{ia\bar{\mathsf k}}e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k}, \end{align} where the dimensionless complex momenta are $\mathsf k = \frac{\ell}{\sqrt 2}(k_x-ik_y)$ and $\bar{\mathsf k} = \mathsf k^*$. Each operator in $v(t,\mathbf x)$ is Fourier expanded as an integral of these exponential operators weighted by the Fourier coefficients. As concrete instances, we Fourier transform the terms that are linear in the external field. 
\begin{widetext} \begin{subequations} \begin{align} \label{v0} A_0 = & \int \frac{d^3k}{(2\pi)^3}e^{-i\omega t-|\mathsf k|^2}\widetilde{A}_0(k) e^{ia^{\dagger}\mathsf k}e^{ia\bar{\mathsf k}}e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k},\\ \label{v1} \frac{-1}{2m_{\star}}\{\widehat{\Pi}_x, A_x\} = &\int \frac{d^3k}{(2\pi)^3}\frac{ie^{-|\mathsf k|^2-i\omega t}}{\sqrt 2m_{\star}\ell}\widetilde{A}_x(k)e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k} \{ \begin{pmatrix} 0 & a\\ -a^{\dagger} & 0\end{pmatrix}, e^{ia^{\dagger}\mathsf k}e^{ia\bar{\mathsf k}}\},\\ \label{v2} \frac{-1}{2m_{\star}}\{\widehat{\Pi}_y, A_y\} = &\int \frac{d^3k}{(2\pi)^3}\frac{e^{-|\mathsf k|^2-i\omega t}}{\sqrt 2m_{\star}\ell}\widetilde{A}_y(k)e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k} \{ \begin{pmatrix} 0 & a\\ a^{\dagger} & 0\end{pmatrix}, e^{ia^{\dagger}\mathsf k}e^{ia\bar{\mathsf k}}\}. \end{align} \end{subequations} Because of the $\pm i$ involved in the definitions of $(a, a^{\dagger})$, the perturbation in the $x$ direction now resembles $\sigma_y$, and vice versa. This twist will lead to an extra minus sign in the effective action. We have now built the machinery for computing the traces in Eq.~\eqref{LoopExpansion}. As far as the response functions are concerned, the second diagram in Fig. \ref{fig1} is the only nontrivial one. The first diagram gives rise to the ground-state charge density, while in the third diagram, the contact term or diamagnetic current vanishes exactly for this model. For the vertices $v$ in Eqs.~\eqref{v0}--\eqref{v2}, the trace assumes the general form \begin{align} -\frac{i}{2}\mathrm{tr}[DvDv] =-\frac{i}{2} \int dt\, dt'\frac{d\Omega\, d\Omega'}{(2\pi)^2}e^{-i\Omega(t'-t)}e^{-i\Omega'(t-t')}\sum_{n,m,n',m'}\frac{i(nm|v(t)|n'm')i(n'm'|v(t')|nm)}{(\Omega-\xi_n +i\epsilon\, \mathrm{sgn}(\xi_n))(\Omega' - \xi_{n'}+i\epsilon\, \mathrm{sgn}(\xi_{n'}))} \end{align} The denominator does not depend on the angular momenta $m$ and $m'$. 
Thus, the trace over $\{|m\rangle, |m'\rangle\}$ space can be computed separately. To this end, we decompose the vertices~\eqref{v0}--\eqref{v2} into \begin{subequations} \begin{align} A_0 & = \int \frac{d^3k}{(2\pi)^3}\widetilde{A}_0(k) e^{-i\omega t}e^{-|\mathsf k|^2}\gamma^0(\mathsf k)e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k},\\ \frac{-1}{2m_{\star}}\{\widehat{\Pi}_x, A_x\} & =\int \frac{d^3k}{(2\pi)^3} \widetilde A_x(k)e^{-i\omega t}e^{-|\mathsf k|^2} \gamma^x(\mathsf k) e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k},\\ \frac{-1}{2m_{\star}}\{\widehat{\Pi}_y, A_y\} & =\int \frac{d^3k}{(2\pi)^3} \widetilde A_y(k)e^{-i\omega t}e^{-|\mathsf k|^2} \gamma^y(\mathsf k) e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k}. \end{align} \end{subequations} In this way, we trace out $\langle m |e^{ib^{\dagger}\bar{\mathsf k}}e^{ib\mathsf k}|m'\rangle\langle m'|e^{ib^{\dagger}\bar{\mathsf k}'}e^{ib\mathsf k'}|m\rangle$, obtaining a delta function $\frac{2\pi}{\ell^2}e^{|\mathsf k|^2}\delta^{(2)}(\mathbf k+\mathbf k')$. The time and frequency integrals for $(t, t')$ and $(\Omega, \Omega')$ can be performed with $\delta$ functions and residue calculus. The non-trivial summands remaining incorporate only the virtual processes between filled and empty Landau levels [$\mathrm{sgn}(\xi_n)\mathrm{sgn}(\xi_{n'})<0$]. Suppose the Fermi energy is pinned between the $N$th and the $(N+1)$th Landau levels as shown in Fig. \ref{fig3}. We then have \begin{align} \label{GF}-\frac{i}{2}\mathrm{tr}[DvDv] =-\frac{1}{2\pi\ell^2}\int \frac{d^3k}{(2\pi)^3}e^{-|\mathsf k|^2}\widetilde{A}_{\mu}(-k) \widetilde A_{\nu}(k)\sum_{n>N,n'\leq N}\bigg[\frac{\gamma^{\mu}_{nn'}(-\mathsf k)\gamma^{\nu}_{n'n}(\mathsf k)}{ \xi_{n'}-\xi_n+\omega+i\epsilon}+\frac{\gamma^{\mu}_{n'n}(-\mathsf k)\gamma^{\nu}_{nn'}(\mathsf k)}{\xi_{n'}-\xi_n-\omega+i\epsilon}\bigg], \end{align} with $\gamma^{\mu}_{nn'} $ denoting the matrix element $(n|\gamma^{\mu}|n')$. 
Equation~\eqref{GF} is an exact result of the trace to all orders in frequency and wavenumbers. The matrix elements $\gamma^{\mu}_{nn'}(\mathsf k)$ can be written as linear combinations of the associated Laguerre polynomials $L^{\alpha}_n(|\mathsf k|^2)$. Some useful identities are documented in Appendix \ref{algebras}. The long-wavelength and low-frequency effective theory is derived upon expanding the $\Pi^{\mu\nu}(k)$ in small $\omega$ and $|\mathsf k|$. By comparing Eqs.~\eqref{GF} and~\eqref{Seff}, the time-ordered polarization tensor is identified as \begin{align}\label{GFpolar} \Pi^{\mu\nu}(\omega, \mathbf k) = -\frac{e^{-|\mathsf k|^2}}{\pi\ell^2}\sum_{n>N,n'\leq N}\bigg[\frac{\gamma^{\mu}_{nn'}(-\mathsf k)\gamma^{\nu}_{n'n}(\mathsf k)}{ \xi_{n'}-\xi_n+\omega+i\epsilon}+\frac{\gamma^{\mu}_{n'n}(-\mathsf k)\gamma^{\nu}_{nn'}(\mathsf k)}{\xi_{n'}-\xi_n-\omega+i\epsilon}\bigg]. \end{align} \end{widetext} \subsection{Polarization tensors and transport coefficients} Equation~\eqref{GFpolar} contains complete information about transport in our model of bilayer graphene. To avoid the ambiguity of doubly degenerate bands, we fill them simultaneously by taking $N>2$ and compute the polarization tensor to the specified limit or desired order in momentum. The transport observables can be extracted after properly modifying the epsilon prescription. For instance, with a homogeneous external electric field of form $\mathbf E = \mathbf E_{\omega}e^{-i\omega t}$, $\Pi^{00} = \Pi^{0i} = 0$ and the retarded $\Pi^{xy}_R$ can be computed exactly to all orders in frequency, \begin{align} \Pi^{xy}_R= \frac{i\omega}{2\pi} \frac{4N^3\omega_c^4 - 2N\omega^2\omega_c^2}{\omega_+^4 - 4N^2\omega_c^2(\omega_+^2-\omega_c^2)}, \end{align} with $\omega_+ = \omega + i\epsilon.$ In the presence of a spatial fluctuation, at large scale, the effective action is organized by powers of $(\ell\partial_i)$ and $\partial_t/\omega_c$. 
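Before expanding in spatial gradients, the frequency dependence alone can be checked: dividing the exact $\Pi^{xy}_R$ above by its prefactor $i\omega/(2\pi)$ and expanding around $\omega = 0$ should reproduce the uniform value $N$ and an $\omega^2$ coefficient $N(2N^2-1)/(2N^2\omega_c^2)$, i.e., the $\omega^2/\omega_c^2$ term of Eq.~\eqref{HallCon}. A finite-difference sketch in plain Python (the values of $N$ and $\omega_c$ are arbitrary):

```python
def ratio(w, N, wc):
    # exact Pi^xy_R at q = 0, divided by the prefactor i*omega/(2*pi)
    return (4 * N**3 * wc**4 - 2 * N * w**2 * wc**2) / \
           (w**4 - 4 * N**2 * wc**2 * (w**2 - wc**2))

N, wc = 5, 1.0
sigma0 = ratio(0.0, N, wc)                  # leading term: N
w = 1e-3
c2 = (ratio(w, N, wc) - sigma0) / w**2      # omega^2 coefficient, up to O(w^2)
predicted = N * (2 * N**2 - 1) / (2 * N**2) / wc**2
```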
By expanding and evaluating the sum to order $O(\partial^3)$, the targeted parity-odd part of the Lagrangian is \begin{align} \mathscr L_{\rm eff}^{(\rm odd)} = -\frac{N}{4\pi} A\, dA &- \frac{3N^2-1}{4\pi}\frac{\ell^2}{2}\mathbf E\cdot\nabla B\notag\\ &-\frac{2N^2-1}{4\pi N}\frac{1}{2\omega_c^2}\epsilon^{ij}E_i\partial_tE_j. \end{align} For nonrelativistic fermions, the Chern--Simons level corresponds precisely to the number of filled Landau levels. Here, the level $N$ is the index of the largest filled Landau level. In the bilayer graphene model, by contrast, the number of filled Landau levels, and thus the interpretation of $N$, could be ambiguous due to the filled negative-energy bands. This ambiguity is resolved by understanding the net contribution from the Fermi sea. By redoing the computation for $N= -1$, it is straightforward to confirm that the negative-energy bands contribute $\frac{1}{4\pi}A\, dA$. Therefore, $N+1$ is understood as the number of filled bands with non-negative energies. We refer to the coefficient of the term $\frac{1}{2}(A_x\partial_tA_y-A_y\partial_tA_x)$ as the Hall conductivity $\sigma_H$ \footnote{This is not the most conventional definition, yet it is consistent when the extra minus sign from the electron charge is accounted for.}. To order $(q\ell)^2$ and $\omega^2$, the Hall conductivity is \begin{align} &\label{HallCon}\frac{\sigma_H }{\sigma_H(0)}= 1 + (q\ell)^2 \bigg(\frac{N^2+1}{4N}-N\bigg)+ \frac{\omega^2}{\omega_c^2}\frac{2N^2-1}{2N^2} \end{align} with $\sigma_H(0) = N/(2\pi).$ To interpret this result, we can look at the static large-$N$ limit, in which the hole band becomes insignificant and we should recover the physics of the non-relativistic 2DEG. Taking $N\to\infty$, Eq.~\eqref{HallCon} can be organized into the form \begin{align*} \frac{\sigma_H}{N/(2\pi)} = 1 + (q\ell)^2 \bigg( \frac{N^2/(8\pi\ell^2)}{N/(2\pi\ell^2)} - N\bigg).
\end{align*} Formally, it reproduces the Hoyos--Son relation \cite{PhysRevLett.108.066805, PhysRevB.94.125427} for the 2DEG, \begin{align} \frac{\sigma_H}{\sigma_H(0)} = 1 + (q\ell)^2 \bigg[\frac{\eta_H}{n}-\frac{2\pi}{\nu}\frac{\ell^2}{\omega_c}B^2\epsilon''(B)\bigg], \end{align} if we identify the number density as $N/(2\pi\ell^2)$. The Hoyos--Son result establishes a relation between the Hall viscosity $\eta_H(\omega)$ at long wavelengths, the orbital magnetic susceptibility $-\frac{\partial^2}{\partial B^2}\epsilon(B)$, and the coefficient of the Hall conductivity at order $(q\ell)^2$, based on the Galilean symmetry of the microscopic physics. In addition, the first term of the 2DEG result is physically understood as the ratio of $\eta_H$ to the charge density $n$. Starting from the model Eq.~\eqref{model}, we would not expect this relation to persist at finite $N$ because the model lacks Galilean symmetry and the charge density depends on the regularization of the bottom of the Fermi sea. Nonetheless, motivated by the observations at large $N$, we explore whether a similar or approximate relation exists without any particular space-time symmetry. In particular, we wish to clarify the roles of the Hall viscosity and the orbital magnetic susceptibility. \section{Hall viscosity and orbital magnetic susceptibility of bilayer graphene}\label{VisSus} To proceed with the last observation, our goal is to compute the Hall viscosity $\eta_H$ and the orbital magnetic susceptibility of the model Eq.~\eqref{model}. Within the framework of linear response, $\eta_H$ is given by the correlation function of the stress tensors $\langle \tau_{xx}\tau_{xy}\rangle$, which can be expressed using the Kubo formula as \cite{PhysRevB.76.161305, PhysRevB.94.125427} \begin{align} \label{etaKubo}\eta_H(\omega) = \frac{1}{\pi\ell^2}\mathrm{Im}\sum_{a,b}\frac{(\tau_{xx})_{ba}(\tau_{xy})_{ab}}{\omega^2_+-(\xi_a-\xi_{b})^2}, \end{align} where $a,b$ label bands above and below the Fermi energy.
Defining the stress tensor nevertheless requires some caution for the following reason: the stress tensor can be defined as the response of the Hamiltonian to a variation of the metric $g_{ij}$, but there is no obvious covariant way of coupling \eqref{model} to a general curved manifold. We circumvent this conceptual obstacle as follows: instead of the projected low-energy Hamiltonian~\eqref{model}, we revisit the original model consisting of two Dirac spinors, for which one can consistently define the action over a curved manifold \cite{PhysRevB.95.085151}, and hence the stress tensor $\tau_{ij}$. We derive the stress tensor in this manner and again project the components to the low-energy bands \cite{PhysRevB.101.035117}. The components of $\tau_{ij}$ are found to be \begin{subequations} \begin{align} \label{txx}& \tau_{xx} = -\frac{1}{m_{\star}}\pi_x^2 \sigma_x - \frac{1}{2m_{\star}}\sigma_y\{\pi_x, \pi_y\},\\ \label{tyy}& \tau_{yy} = \frac{1}{m_{\star}}\pi_y^2 \sigma_x - \frac{1}{2m_{\star}}\sigma_y \{\pi_x,\pi_y\},\\ \label{txy}& \tau_{xy} = -\frac{1}{4m_{\star}}\sigma_y \{\pi^{\dagger},\pi\}. \end{align} \end{subequations} In Appendix \ref{deriveT}, we present a distinct way to derive the above results, in which a natural conjecture for the model on a general curved manifold is postulated and $\tau_{ij}$ is obtained by varying the Hamiltonian with respect to the vielbein field. A recent work \onlinecite{2019arXiv190909608I} derives the stress tensor by constructing strain generators and computing their commutators with the model Hamiltonian \cite{PhysRevB.86.245309}. This approach does not evade the difficulty discussed above: taking the pseudospin degree of freedom as an example, it is not obvious that the pseudospin matrices generate physical rotations and should be included in the strain generators. Plugging $\tau_{xx}$ and $\tau_{xy}$ into Eq.~\eqref{etaKubo} yields the Hall viscosity to all orders in frequency.
Particularly in the static limit $\omega \to 0$, we have \begin{align} \label{etaH}\eta_H(\omega = 0) = \frac{1}{8\pi \ell^2}(N^2+1). \end{align} Caution is required to compute the orbital magnetic susceptibility $-\epsilon''(B)$, the negative of the second derivative of the energy density with respect to the background field. Suppose we naively evaluate the energy density $\epsilon(B)$ by summing over the filled Landau levels: \begin{subequations} \begin{align} \epsilon(B) = \frac{\omega_c}{2\pi\ell^2}\sum_{n = -\infty}^N \mathrm{sgn}(n)\sqrt{|n|(|n|-1)}. \end{align} After canceling the summands from $n = -N$ to $n =N$, the remaining sum is proportional to $\sum_{n=N+1}^{\infty}\sqrt{n(n-1)}$, which is severely divergent. To extract a finite result, we regularize the sum by introducing a natural cutoff $N_c$ obeying $\omega_c\sqrt{N_c(N_c-1)} = m_{\star}$, beyond which the low-energy model ceases to be valid. The regularized sum now reads \begin{align} \epsilon(B) = -\frac{\omega_c}{2\pi\ell^2}\sum^{N_c}_{n=N+1}\sqrt{n(n-1)}. \end{align} \end{subequations} The sum can be evaluated approximately with the Euler--Maclaurin formula. In a double expansion of large $N$ and small $\omega_c/m_{\star}$, the leading contribution is \begin{align} \label{chiB}-\epsilon''(B) \approx -\frac{N^2}{2\pi} + \frac{1}{16\pi}\ln \frac{N\omega_c}{m_{\star}}. \end{align} Since $N$ is bounded from above by $N_c\sim m_{\star}/\omega_c$, at large $N$ the logarithmic term is sub-leading. Consequently, both Eq.~\eqref{etaH} and Eq.~\eqref{chiB} coincide with the 2DEG results in the limit $N\to\infty$ \footnote{We note that the cutoff-dependent logarithm is not an artifact of the sharp cutoff; it also appears in other regularization schemes.}. In the rest of this paper, we exploit these observations and establish a concrete algebraic identity.
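The structure of Eq.~\eqref{chiB} can be traced to the asymptotics of the regularized sum: since $\sqrt{n(n-1)} = n - \tfrac{1}{2} - \tfrac{1}{8n} + O(n^{-2})$, the sum contributes a quadratic piece, which is behind the $-N^2/(2\pi)$ term, plus a logarithm. A numerical sketch of this expansion (Python; $N$ and $N_c$ are illustrative values):

```python
import numpy as np

N, Nc = 50, 20000              # largest filled level and cutoff (illustrative values)
ns = np.arange(N + 1, Nc + 1, dtype=float)
exact = np.sum(np.sqrt(ns * (ns - 1)))   # brute-force regularized sum

# sqrt(n(n-1)) = n - 1/2 - 1/(8n) + O(1/n^2): a quadratic piece plus a logarithm
approx = (Nc * (Nc + 1) - N * (N + 1)) / 2 - (Nc - N) / 2 - np.log(Nc / N) / 8

assert abs(exact - approx) < 1e-2
```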
\section{An algebraic relation from the Kubo formula}\label{KuboCurrent} To better understand the relationship, we reexamine the Hall conductivity from the more conventional perspective of the Kubo formula. It is algebraically equivalent to the computation of the two-point functions $\Pi^{\mu\nu}$. Nonetheless, as we will show in a moment, it makes our speculation more transparent. The current operator in the first quantized form is given by the symmetrized velocity operator \begin{align} \label{firstQuanCurrent}j^i(\mathbf r) =\frac{1}{2m_{\star}}\sum_k \{\widehat{\Pi}^i_k, \delta(\mathbf r-\mathbf r_k)\}. \end{align} Applying the Kubo formula for Hall conductivity to the current~\eqref{firstQuanCurrent}, we obtain \begin{align} \label{KuboCon}\sigma_{H} = \mathrm{Im}\sum_{a,b}\frac{\{\widehat{\Pi}^x,e^{\frac{-\mathsf q\ell}{\sqrt 2}\pi^{\dagger}}e^{\frac{\bar{\mathsf q}\ell}{\sqrt 2}\pi}\}_{ba}\{\widehat{\Pi}^y,e^{\frac{\mathsf q\ell}{\sqrt 2}\pi^{\dagger}}e^{\frac{-\bar{\mathsf q}\ell}{\sqrt 2}\pi}\}_{ab}}{4m^2_{\star}\pi\ell^2e^{|\mathsf q|^2}[\omega^2_+ - (\xi_{a}-\xi_b)^2]}. \end{align} This formula can be quickly justified by considering $\frac{1}{i\omega}\Pi^{ij}$, which reduces to the conductivity tensor in the temporal gauge $A_0 = 0$. The above formula can be straightforwardly expanded in small $\mathsf q = (q\ell)$. To proceed, let us exploit rotational invariance and take $\mathbf q = (0,q)$. \begin{align*} & \{\widehat{\Pi}^x, e^{\frac{-\mathsf q\ell}{\sqrt 2}\pi^{\dagger}}e^{\frac{\bar{\mathsf q}\ell}{\sqrt 2}\pi}\}\notag\\ \approx &\widehat{\Pi}^x (1+(q\ell)^2/4) - \frac{iq\tau_{xx}}{\omega_c} -\frac{q^2\ell^2}{4}\{\pi_x^2, \widehat{\Pi}^x\}.\\ & \{\widehat{\Pi}^y, e^{\frac{\mathsf q\ell}{\sqrt 2}\pi^{\dagger}}e^{\frac{-\bar{\mathsf q}\ell}{\sqrt 2}\pi}\}\notag\\ \approx & \widehat{\Pi}^y(1+(q\ell)^2/4) + \frac{2iq\tau_{xy}}{\omega_c} + \frac{q\sigma_z\tau_{yy}}{\omega_c} -\frac{q^2\ell^2}{4}\{\pi_x^2, \widehat{\Pi}_y\}.
\end{align*} To organize the products, we first observe that the current operators map the $n$th Landau level to the $(n\pm 1)$th, whereas the stress tensor operators only generate transitions $n\to n,n\pm 2$. Their products thus have no nonvanishing summand and, as a result, there is no term linear in $q$. Another non-trivial observation holds at the level of the matrix elements: $(\tau_{xx})_{ba}(\tau_{xy})_{ab} = (\tau_{xx})_{ba}(i\sigma_z\tau_{yy})_{ab}$, even though $\tau_{xy}\neq i\sigma_z\tau_{yy}$ in general. Consequently, expanding the numerator of~\eqref{KuboCon} to second order in $(q\ell)$ yields the following identity, \begin{align}\label{ConVisRelation} &\sigma_H(\omega, q)\approx \sigma_H(\omega) + (q\ell)^2\bigg[\ell^2 \eta_H(\omega)\notag\\ & -\mathrm{Im}\sum_{a,b}\frac{(\widehat{\Pi}^x)_{ba}\{\pi_x^2,\widehat{\Pi}^y\}_{ab}+\{\pi^2_x,\widehat{\Pi}^x\}_{ba}(\widehat{\Pi}^y)_{ab}}{4\pi m^2_{\star}[\omega^2_+-(\xi_{a}-\xi_{b})^2]}\bigg], \end{align} which unambiguously identifies the role of the Hall viscosity as part of the coefficient of $(q\ell)^2$. The sum in the second line of Eq.~\eqref{ConVisRelation} is not directly expressed in terms of a physical observable at finite $\omega$. In the static limit $\omega \to 0$, it sums to $-\frac{N^2}{2\pi} = -\epsilon''(B)-\frac{1}{16\pi}\ln \frac{N}{N_c}$, and Eq.~\eqref{ConVisRelation} reduces to Eq.~\eqref{main2}. At finite frequency, despite its lack of a lucid physical interpretation, Eq.~\eqref{ConVisRelation} provides a decomposition that generates corrections to the Hoyos--Son relation in powers of $\omega^2$ and $N^{-1}$. \section{Discussion and conclusion} The model Eq.~\eqref{model} is often considered a hybridization of Dirac and non-relativistic fermions, capturing the particle-hole structure of the former and the massive parabolic dispersion of the latter.
The results derived in the main text show in many ways that the features of non-relativistic fermions, or the restoration of Galilean symmetry, manifest asymptotically in the limit of a large filling factor. Explicit examples are the forms of the Hall conductivity, Hall viscosity, orbital magnetic susceptibility, the poles of the transport coefficients, and the Hoyos--Son relation, although, in terms of hydrodynamic relations \cite{doi:10.1142/S0217984989001400}, it is not yet clear in what sense the charge current and momentum density approach each other in the same limit. The exact formulae established here can be utilized to interpolate between the asymmetric model and the Galilean-symmetric paradigm. Moving forward, this paper opens various directions. Immediate ones include the real-time effective theory at finite temperature using the Schwinger--Keldysh formalism \cite{PhysRevB.97.115123}, and interaction-generated transport properties owing to either an instantaneous Coulomb interaction or a mixed-dimensional Maxwell term \cite{PhysRevX.5.011040,PhysRevB.96.075127}. These developments will be critical for connecting the single-particle toy model in the quantum Hall regime with experimental investigations of graphene materials \cite{Berdyugin162}, which are usually conducted in a hydrodynamic regime with strong disorder or interactions \cite{PhysRevB.100.035125, PhysRevB.100.115434}. Equally interesting are the inclusion of lattice effects in the generalized Hoyos--Son relation \cite{PhysRevB.98.245303}, the generalization of the linear response theory to graphite multilayers as in Ref.~\onlinecite{PhysRevB.101.155310}, and clarifying the distinction between dual descriptions of the $\nu = 1$ fractional quantum Hall state \cite{PhysRevLett.117.136802, PhysRevB.100.235150}.
To conclude, we have determined the time-reversal-odd electromagnetic response of the low-energy model of bilayer graphene to quadratic order in momentum and frequency, provided a precise definition of the stress tensor for the low-energy projected model, and established a conductivity-viscosity relationship in the absence of an obvious space-time symmetry. We investigated the limit in which the symmetry is restored and provided support from concrete computation at the operator level. Beyond these conclusions, the effective action derived and the vielbein formulation introduced in this work can be further applied to the exploration of unknown facets of this model. \acknowledgements The author thanks Yu-Ping Lin, M. Lapa and F. Setiawan for comments on the manuscript. This work is supported by a Simons Investigator Grant from the Simons Foundation.
\section{Introduction}\label{sec:intro} `Multiplayer Online Battle Arena (\bb{MOBA})' is a genre of team-competition strategy games in which two five-player teams face off to destroy each other's base. MOBA is one of the most successful game genres today. In particular, more than 70 million people watched the 2021 world championship of \textit{League of Legends}, one of the leading MOBA games \cite{championship_viewer}, and about 180 million players were playing the game in 2022 \cite{active_players}. Most MOBA games have a `Rank' system that evaluates players and groups them with others of a similar level. Players can achieve a higher Rank by increasing their `Match Making Rating (\bb{MMR}),' which is designed to match players of similar skill. While most game companies do not disclose the exact formulas of their MMR systems, game communities surmise that games such as \textit{League of Legends}, \textit{Dota 2}, and \ii{Honor of Kings} use revised versions of the Elo rating system \cite{elo_lolboosts, elo_dota2freaks} used in chess leagues \cite{wiki_elo}. In the Elo rating mechanism, a player's MMR increases or decreases according to a match's result (win/lose). The system creates a competitive environment and gives players a sense of achievement when they reach a higher rank. It is reasonable that a team's achievement (win/lose) directly affects a player's MMR, since MOBA games are team games. In many cases, players on the losing team performed less valuable actions than those on the winning team. However, some players perform remarkably even when their team is losing, while others benefit from their teammates' performance without contributing skill or effort of their own. Therefore, estimating individual performance in team-play matches is a challenging and attractive topic in the sports domain.
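The Elo update mentioned above can be sketched as follows (Python; the 400-point scale and the $K$-factor of 32 are the standard chess-league values, while the MMR formulas actually used by MOBA games are proprietary and may differ in detail):

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Return A's new rating; score_a is 1 for a win, 0 for a loss, 0.5 for a draw."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# equally rated players: the winner gains k/2 points
assert elo_update(1500, 1500, 1) == 1516.0
```

In team matches, community descriptions suggest that a team-average rating commonly plays the role of the opponent rating, though, again, the exact implementations are not public.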
In addition, the competitive environment and the fall of a player's rank occasionally trigger undesired side effects such as loss of motivation and toxic behavior. For example, players can encounter an incompetent teammate (whether intentionally unskilled or not). In that case, players usually exert `peer pressure' \cite{heneman1995balancing} on the unskilled player because they worry that their rank will fall if they lose. The peer-pressure strategy sometimes works; however, it generally provokes rage and offensive reactions from the pressured player \cite{kou2014playing}. Hence, reflecting individual contributions to the team's achievement in the MMR can relieve the side effects of the current team-based reward system. Many game communities use metrics to evaluate a player's individual performance (see \cref{sec:metrics}). However, these metrics have limitations: they evaluate a player's contribution over a whole match, without evaluating the player's respective actions. Moreover, the metrics do not consider the context in which an indicator changes. For example, a player can die for a strategic purpose, such as delaying the enemy's march until teammates are ready to fight, or sacrificing themselves to save teammates from a massacre. Therefore, some outlier players are underestimated or overestimated by these metrics. We propose \bb{an embedding approach that embeds a player's respective actions into quantified scores.} Our approach uses a combination of recurrent neural networks and multi-layer perceptrons and is inspired by the concept of the neural network language model (NNLM), a word-embedding approach. The main contributions of our approach are as follows: \begin{enumerate} \item The proposed model allows the game system to quantify an individual player's contribution to the team's victory, alleviating the disadvantage of the current MMR system.
\item The model can discover and re-evaluate players who are over- or underestimated by common performance metrics. \item The model can guide a player to improve their skills by debriefing a match analysis with the quantified scores of their respective actions. \end{enumerate} This paper is organized as follows. First, we describe the basic rules of \textit{League of Legends}~and introduce common metrics for evaluating a player's individual performance in \cref{sec:lol}. Next, \cref{sec:related_works} introduces previous studies on evaluating a player's contribution and provides background on the NNLM and RNN that underlie our model. Then, \cref{sec:model} and \cref{sec:dataset} present our proposed model and the dataset used to train and validate it, respectively. Next, in \cref{sec:experiment}, we evaluate our model's performance by comparing it with the common metrics as baselines and through more detailed analysis. In \cref{sec:discussion}, we review the experimental results and discuss the limitations and strengths of this study. Finally, we conclude the paper in \cref{sec:conclusion} by noting its contributions and the prospective usage of the proposed model. \section{League of Legends}\label{sec:lol} \textit{League of Legends}~ is a MOBA game that many players around the world enjoy every day. Its primary system is similar to that of other MOBA games: ten players meet in a single match and form two teams of five players each. The two teams' bases, called \bb{Nexus}, sit at diagonally opposite corners of the map (see \cref{fig:map} \cite{lol_official}). The team based in the lower-left corner is \bb{blue}, and the other is \bb{red}. The ultimate goal of both teams is to destroy the opponent's Nexus while protecting their own. Three main roads lead to the opposing bases, and \bb{defense towers} belonging to each team prevent opposing players from reaching its Nexus.
\subsection{Champions} In a match, each player chooses a playable character called a \bb{champion}. The game has over 140 champions, and each champion belongs to one or more of the following \bb{roles}: \ii{Assassins}, \ii{Fighters}, \ii{Mages}, \ii{Marksmen}, \ii{Supports}, and \ii{Tanks}. Assassins are killers with excellent damaging ability but are vulnerable to being killed because of their low durability. Fighters, on the other hand, have both good damaging ability and survivability, and they engage in short-ranged combat. Tanks, with extraordinary durability, absorb and endure the enemy's attacks in place of their teammates. Marksmen make long-ranged attacks from behind their teammates. Mages are champions who damage, debuff (weaken), or disturb the enemy with their magic. Last, Supports help teammates with various skills, such as healing and buffs. These are the typical character classes of most role-based games such as RPGs and MOBAs. \subsection{Lanes} \bb{Lane} refers to the three main routes of the map: \ii{Top}, \ii{Mid}, and \ii{Bottom}, indicating the upper, middle, and lower roads, respectively. Meanwhile, there is another term, \bb{Position}, which denotes a player's role in their team. The game officially defines five positions: \ii{Top}, \ii{Mid}, \ii{Bottom}, \ii{Jungle}, and \ii{Support}. The players who take the Top, Mid, and Bottom positions are responsible for defending against the enemy's march in their corresponding lane and pushing the line of combat toward the opponent's base. The Jungle player, on the other hand, roams the jungle area hunting monsters and joins engagements in the lanes, occasionally assassinating enemy champions by surprise. The Support player aids teammates with assisting skills such as buffs or healing, and usually accompanies the Bottom player. Three terms are shared between Lane and Position--Top, Mid, and Bottom.
Therefore, the game community often uses the terms Lane and Position interchangeably. In this paper, we use the term \bb{Lane} rather than Position because we use the term Position with another meaning. \subsection{Minions and Monsters} \bb{Minions} are NPC soldiers that belong to one of the two teams. Both teams' minions are continuously generated from their bases and charge toward the opponent's base, following the three main roads and attacking enemy characters and structures. \bb{Monsters}, on the other hand, are NPCs that do not belong to either team. Players can collect gold and experience by slaying monsters in the jungle areas. In particular, \ii{Baron Nashor} and \ii{Drake} are called \bb{Elite monsters}; they give significant bonuses (such as buffs) to the team that hunts them. \subsection{Kill, Support, and Death} While struggling to achieve the team goal and to intercept the enemy, players can kill opposing players and aid teammates. The game records the player who dealt the last damage to the victim as the killer, while a player who damaged the victim within the 10 seconds before the victim died receives an assist. Meanwhile, players killed by the enemy are not removed from the game. Instead, they respawn after a certain time has elapsed since their death. Victims can therefore continue the match; however, their team suffers from a troop shortage while the victims wait to respawn. \subsection{Gold, Item, and Experience} \bb{Gold} is the currency of \textit{League of Legends}~that players can use to buy items. Players can collect gold through several actions, such as killing enemy champions and minions, hunting monsters, and destroying enemy structures. Items provide tactical functionality such as healing the owner or stunning the enemy.
Furthermore, players also gain \bb{experience} from the actions mentioned above and can level up with the accumulated experience, which gives them enhanced abilities and unlocks additional skills. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figures/lol_map.png} \caption{Quarter view of the map of \textit{League of Legends}.} \label{fig:map} \vspace{10pt} \end{figure} \subsection{Common metrics to evaluate individual performance}\label{sec:metrics} As mentioned in \cref{sec:intro}, many game communities commonly reference certain metrics to estimate a player's performance (hereinafter \bb{common metrics}). The most common metrics are the Kill-Death-Assist ratio (KDA), Gold, and Creep score. \bb{KDA} is the ratio of a player's recorded kills and assists to their deaths. \bb{Gold} represents the amount of gold that a player earned in a match. \bb{Creep score} denotes the number of monsters and minions that a player hunted: a player records 1 creep score for hunting a minion or a regular monster and 4 creep scores for hunting a giant monster. \section{Related Works}\label{sec:related_works} \subsection{Win prediction} Predicting the outcome of a match is one of the most actively researched areas. The topic is interesting in itself; moreover, it is momentous because many studies that estimate player contribution build on the prediction results. Most studies use one or both of two types of information to predict outcomes: pre-match information and mid-match information. Pre-match information denotes information collected before the match starts, such as the champions that players selected and the players' gameplay history. Mid/post-match information, on the other hand, represents information generated during a match, including KDA, gold, and creep score. Eight of the win-prediction studies we surveyed use only pre-match information.
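The common metrics of \cref{sec:metrics} can be computed directly from end-of-match counts; a minimal sketch (Python; clamping the death count to 1, to avoid division by zero, is a common community convention rather than an official formula):

```python
def kda(kills, deaths, assists):
    """Kill-Death-Assist ratio: (kills + assists) / deaths, with deaths clamped to 1."""
    return (kills + assists) / max(deaths, 1)

def creep_score(minions, regular_monsters, giant_monsters):
    """1 point per minion or regular monster, 4 points per giant monster."""
    return minions + regular_monsters + 4 * giant_monsters

assert kda(7, 2, 9) == 8.0          # (7 + 9) / 2
assert creep_score(180, 12, 3) == 204
```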
Conley \textit{et al.}~applied players' hero choices to logistic regression and K-nearest-neighbor models to predict match outcomes and to recommend heroes that give the player a better chance to win. The logistic regression model shows up to 71\% prediction accuracy \cite{conley2013does}. Kalyanaraman \textit{et al.}~introduced a regression model that uses combinations of heroes in \textit{Dota 2}; their model shows 69.42\% prediction accuracy \cite{kalyanaraman2014win}. Song \textit{et al.}~also use hero draft data of \textit{Dota 2}~in a logistic regression model to predict match outcomes. They achieved a maximum of 58\% accuracy with their model. To improve its performance, the authors added hero-combo features, with which the model recorded 61\% accuracy; however, the authors do not explain the details of these features \cite{song2015predicting}. Semenov \textit{et al.}~also proposed a prediction model for \textit{Dota 2}~that takes the hero draft as input, based on Na\"ive Bayes, logistic regression, and a gradient-boosted decision tree. Their models predicted game results with 70.6\%, 67\%, and 66\% accuracy for games of average, highly skilled, and very highly skilled players, respectively \cite{semenov2016performance}. Hanke \textit{et al.}~suggested an MLP model that uses hero draft data as input for prediction in \textit{Dota 2}; it displayed 88.63\% prediction accuracy. They also proposed a hero recommendation system for players based on frequent itemset mining with the apriori algorithm; the win percentage of the recommended hero combinations was 74.9\% \cite{hanke2017recommender}. Andono \textit{et al.}~use a Na\"ive Bayes model with the AdaBoost algorithm to predict match outcomes of \textit{Dota 2}. The model records 80\% prediction accuracy with hero metadata, such as hero types and attributes.
However, the dataset used for the experiment only includes matches with one human player and four AI players on each team; therefore, it is difficult to generalize the results of this study to common matches consisting of 10 human players \cite{andono2017dota}. Do \textit{et al.}~apply players' records on champions, such as the win rate with a champion and the total number of games played with the champion, as features of their deep neural network model. They trained the model with 5,000 \textit{League of Legends}~match records and achieved 75.1\% prediction accuracy \cite{do2021using}. Lee \textit{et al.}~proposed a personalized champion recommendation system for \textit{League of Legends}~and \textit{Dota 2}~based on the predicted win rate of a player's preferred champions. The authors divided the win prediction model into player-level and match-level embedding networks. The player-level embedding network transforms a player's stats into a vector; the vector is input to the match-level embedding network, which uses champion and role data along with the player's stats to predict the match outcome. The model recorded its highest prediction accuracy of 55.35\% on \textit{League of Legends}~and 57.55\% on \textit{Dota 2}~\cite{lee2022draftrec}. Six studies apply mid- or post-match information to predict the outcome of a match. Rioult \textit{et al.}~proposed a binary classification model that uses topological clues; their model achieved 85\% precision and 90\% recall. However, as the authors acknowledge, the amount of replay data they used is insufficient \cite{rioult2014mining}. Kinkade \textit{et al.}~introduced two approaches that use pre-match and post-match information, respectively. For the pre-match information, they used hero matchup and hero synergy/countering data; their pre-match model achieved up to 73\% accuracy. Their post-match model, on the other hand, uses gold, experience, and kill counts accumulated per minute.
The post-match model shows a maximum accuracy of 99.81\% \cite{kinkade2015dota}. Hodge \textit{et al.}~proposed a real-time prediction model for professional matches of \textit{Dota 2}, using logistic regression, random forest, and LightGBM \cite{hodge2019win}. They achieved 74.59\% accuracy with recorded game data; moreover, their model reached 85\% accuracy when predicting the outcome at five minutes of elapsed game time with live data from \textit{the ESL One Hamburg 2017 Dota 2 league}. Yang \textit{et al.}~attempted not just to predict the outcome of matches but also to interpret which features were essential in deriving the outcome, using 184 thousand match records of \textit{Honor of the Kings}~\cite{yang2022interpretable}. Their model, TSSTN, consists of two stages, spatial and temporal. First, six features, including pre-match and mid-match data, are processed in the spatial stage with logistic regression and feed-forward networks; the temporal stage then draws the final prediction from the results of the spatial stage. TSSTN shows 54.6\% prediction accuracy at the start of a match and a maximum of 78.5\% accuracy after 10 minutes have elapsed. Another study by Yang \textit{et al.}~uses mid-match statistical information and the occurrence of champion and boss-monster kill events with a bidirectional LSTM/transformer model and fully connected neural networks to predict the match outcome and the killer and victim of the next champion and boss-monster kill events \cite{yang2022predicting}. Their model attains 70.8\% win prediction accuracy and predicts the killer of the next champion and boss-monster kill with a maximum of 94.4\% and 28.1\% accuracy, respectively. Zhao \textit{et al.}~proposed a real-time prediction model, Winning Tracker \cite{zhao2022winning}. They reconstructed player and tower data of \textit{League of Legends}~matches into confrontation and individual-movement information to predict the match outcome and the next tower destruction event.
The model achieved an F1-score of up to 0.901 for tower destruction prediction and 0.889 for match outcome prediction. \subsection{Player contribution} Estimating the performance of individual players is an attractive topic in the sports analysis domain, including e-sports. Studies that attempted to evaluate the contributions of MOBA players share a similar goal with this study. Suznjevic \textit{et al.}~proposed the Application Context-Aware Rating algorIthm (\bb{ACARI}), an algorithm to adjust the MMR system of \ii{Heroes of Newerth} (HoN) \cite{suznjevic2015application}. They created hero vectors representing how well a \bb{hero} (playable character of HoN) fits the various \bb{roles} that a player can take in a match. They then chose parameters related to a player's performance; a player's share of a parameter among the teammates is taken as the player's contribution to the team's achievement. The authors also adjusted the contributions with two weights: one is defined by the correlations between a hero and each parameter, derived from domain knowledge, and the other comes from the parameter's percentile in statistics over 10,000 matches. Finally, they adjusted the MMR gain/loss after a match with the estimated individual performances. Cavandenti \textit{et al.}~proposed a pattern analysis model to help novice players improve their skills by analyzing their behaviors in \textit{Dota 2}~\cite{cavadenti2016did}. They mined behavioral patterns of skillful players and compared them to novice players' actions. Their anomaly-detection-like model is helpful for analyzing whether a novice player's action is close to the standard play style of skillful players. However, their model cannot explain whether the skillful players' play patterns are actually beneficial to the team's victory. This limitation makes it hard for their model to estimate players' individual performances.
Sapienza \textit{et al.}~considered only KDA as a performance indicator for an individual player \cite{sapienza2018individual}. The authors revealed that a player's win rate and performance tend to decrease when the player plays the game continuously. Jiang \textit{et al.}~classified \textit{League of Legends}~players into four categories (maverick, generalist, specialist, and niche) from 148,000 match records using the concepts of diversity and conformity \cite{jiang2021wide}. They used players' champion selection and movement to classify them. The authors also analyzed players' performance in each category by KDA, an approximate value of MMR, and the win rate. Ramler \textit{et al.}~conducted an analysis of whether performance differences exist between genders of \textit{League of Legends}~champions \cite{ramler2021investigating}. The indicators that the study used to quantify a champion's performance are end-game statistical data such as total minion kills, gold, and KDA. Maymin distinguished Smart kills from Useless deaths using the win-probability transition before and after a kill event \cite{maymin2021smart}. The former is a kill that increases the chance of the team's victory, and the latter is a death that decreases the probability of winning. They estimated the middle-of-game win probability using a logistic regression model; the input features were champion kill count, the number of remaining towers, elite monster kill count, and elapsed game time in minutes. They also analyzed how additional features, such as gold and survivability, affect the team's winning probability. Finally, Demediuk attempted to set a performance index that quantifies individual players' performances with \textit{Dota 2} match data \cite{demediuk2021performance}.
They used end-game data such as XP, level, and KDA, and calculated a weighted sum of the features according to the player's playstyle, which the authors classified into one of 10 types. As reviewed above, many studies attempt to assess players' contributions. However, most use result-oriented measurements and do not assess the respective actions of players, with a few exceptions such as Maymin's study. We therefore intended to study an area less explored by former studies: in this study, we attempt to assess the contributions of individual actions to a team's victory. \subsection{Word embedding models} Word embedding is the conversion of text words into $N$-dimensional dense vectors. Embedded vectors enable a model to process words and make it possible to represent relations between words mathematically. The Neural Network Language Model (\textbf{NNLM}) is a word embedding model that uses a feed-forward neural network \cite{nnlm}. The \textbf{NNLM} model consists of an input layer, a projection layer, a hidden layer, and an output layer. The model is trained to predict the next word when it takes a sequence of $1 \times V$-dimensional word vectors of context length $N$ as input. The projection layer is a $V \times D$-dimensional lookup table in which each row vector is matched 1:1 with the $V$ words in the vocabulary. The vectors in the projection layer are randomly initialized and are transformed into embedded word vectors representing the $V$ words as training proceeds. \cref{fig:nnlm} shows the basic process of \textbf{NNLM}. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figures/NNLM.png} \caption{Basic process of NNLM} \label{fig:nnlm} \vspace{10pt} \end{figure} \subsection{Recurrent neural network} A Recurrent Neural Network (RNN) is a type of neural network model for dealing with sequence data, in which the outcome of one time point influences the outcome of the next.
In the RNN model, a neural network at time $t$ takes the sequence data at $t$ and the network's output at $t-1$ as inputs. There are four representative types of RNN models: \ii{one-to-one}, \ii{one-to-many}, \ii{many-to-one}, and \ii{many-to-many}. \cref{fig:rnn} shows the standard form and the four types of RNN. \begin{figure}[ht] \vspace{7pt} \centering \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN.png} \caption{standard figure} \label{fig:rnn_std} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-1to1.png} \caption{one-to-one} \label{fig:rnn_1to1} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-1toN.png} \caption{one-to-many} \label{fig:rnn_1ton} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-Nto1.png} \caption{many-to-one} \label{fig:rnn_nto1} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-NtoN.png} \caption{many-to-many} \label{fig:rnn_nton} \end{subfigure} \caption{Representative Types of RNN} \label{fig:rnn} \end{figure} Long Short-Term Memory (\bb{LSTM}) is an improved model that addresses the weaknesses of the standard RNN model \cite{lstm}. In the standard RNN model, information from previous steps gradually vanishes if its weight is too small; conversely, if the weight is too large, the influence of previous information becomes excessively strong. LSTM overcomes this using cell states, in which three gates determine whether data from the previous time step is dropped or remembered. Gated Recurrent Unit (\bb{GRU}) is a modified type of LSTM that uses only two gates \cite{gru}. By reducing the number of gates in a cell, GRU aims to reduce the computational workload of training.
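To make the gate mechanism concrete, the following is a minimal, self-contained sketch of a single GRU step with toy scalar inputs and hypothetical weight values (in practice the weights are learned, and inputs and states are vectors; this is not the network used later in this paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One GRU step with scalar input x and hidden state h.
    z is the update gate, r the reset gate, n the candidate state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)
    r = sigmoid(w["wr"] * x + w["ur"] * h)
    n = math.tanh(w["wn"] * x + w["un"] * (r * h))
    return (1.0 - z) * n + z * h  # blend candidate and previous state

# Hypothetical weight values for illustration only.
weights = {"wz": 0.5, "uz": 0.3, "wr": 0.4, "ur": 0.2, "wn": 0.6, "un": 0.1}
h = 0.0
for x in [1.0, -0.5, 0.2]:  # a short input sequence
    h = gru_cell(x, h, weights)
```

The two gates ($z$ and $r$) are what distinguish the GRU cell from LSTM's three-gate cell.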
\section{Proposed model}\label{sec:model} In this paper, we propose an action-embedding model that converts a player's actions into quantitative scores. We borrow NNLM's word embedding idea. The main difference between our model and NNLM is the presence of a projection layer. The word embedding approach has a 1:1 matching relation between words and embedded vectors; therefore, NNLM has a projection layer and trains the layer's values. However, the relationship between a player's action types and their scores is not one-to-one. For example, when a player kills an enemy champion, the action is evaluated differently depending on contexts such as the timing and the victim's remaining health. For this reason, our model does not use a projection layer. Instead, our model aims to obtain neural network parameters that output adequate scores when the action features are given; \textit{i.e.}, the neural network parameters of our model play a role similar to that of NNLM's projection layer. Meanwhile, a player's action has causal relationships with the previous and following actions, so we can evaluate a cause action through its resulting action. For example, if a player killed an enemy champion after purchasing an item, the act of purchasing the item was likely beneficial. We combined the score-embedding model with the RNN model to reflect the causal relationships between actions. Moreover, unlike regular RNN models, our model takes player actions in time-reversed order so that a previous action can be analyzed through the following action. Our model consists of two main parts. One converts player action sequences into scores, and the other discerns the winner by comparing the two teams' scores and calculates the loss to execute backpropagation. \cref{sec:rnnslp} and \cref{sec:dep} describe the details of each part of the model.
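The per-player pipeline described above (time-reversed scoring followed by summation) can be illustrated as follows; `step` stands in for a trained scoring component and is hypothetical:

```python
def score_player(action_sequence, step):
    """Sum per-action scores, consuming actions in time-reversed order
    so that each cause action is judged through its consequences."""
    total, state = 0.0, None
    for action in reversed(action_sequence):
        score, state = step(action, state)
        total += score
    return total

# Toy stand-in for a trained scoring step: the score equals the action value.
toy_step = lambda action, state: (action, state)
total = score_player([0.5, -0.2, 0.9], toy_step)  # approximately 1.2
```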
\subsection{RNN and single layer perceptron part}\label{sec:rnnslp} The first part of our model consists of an RNN and Single Layer Perceptrons (\bb{SLP}s). We used GRU as the RNN. When a player's action sequence is fed into the GRU, each output (hidden state) is passed to the next GRU step and the corresponding SLP. Each SLP takes the $N$-dimensional hidden state of a GRU step and outputs a 1-dimensional vector. Finally, each SLP output becomes a real-valued score between $-1$ and $1$ through the hyperbolic tangent (\textit{tanh}) function. An action with a positive score is beneficial toward winning; one with a negative score is harmful and drives defeat. \cref{fig:gru+slp} depicts the operation of the combined GRU-SLP model. \begin{figure}[ht] \centering \vspace{7pt} \includegraphics[width=0.8\linewidth]{figures/GRU.png} \caption{basic process of GRU-SLP part} \label{fig:gru+slp} \end{figure} \subsection{Discernment and evaluation part}\label{sec:dep} As described above, the Discernment and Evaluation Part (hereinafter DEP) discerns the winner by comparing the two teams' scores and computes the loss to conduct backpropagation. After obtaining the ten players' scores, DEP sums the blue and red team players' scores, respectively, and discerns which team is the winner from both teams' summed scores. It also calculates the loss using the scores and the discernment. We used two discernment-method--loss-function pairs, detailed in \cref{sec:loss_function}. The parameter values of the ten GRU-SLPs are updated after backpropagation. In \cref{sec:overall} we describe the overall training process. \subsection{Discernment methods and loss function pairs}\label{sec:loss_function} As mentioned above, the goal is to train our model to output scores that properly reflect match outcomes. Our study is based on the assumption that the team that performs more profitable actions wins.
Therefore, if a model gives a higher total score to the winning team than to the losing team, the scores reflect the match result well. On the other hand, if the losing team's total score is higher than the winning team's, the model misestimated the values of the actions in the match. We defined two discernment method--loss function pairs to train our model: \bb{Confidence--Cross Entropy} and \bb{Deterministic--ReLU}. \subsubsection{\bb{Confidence and cross-entropy loss}} In the Confidence--Cross Entropy pair, DEP calculates how confident it is that the team given the higher score is the actual winner. For example, consider two matches: in the first, the blue team scored 20 points and the red team 5; in the second, the blue team scored 10 and the red team 7. In this case, DEP is more confident in the analysis result of the first match than of the second. Through a softmax function, the confidence is expressed as a probability, a real number between 0 and 1. \cref{eq:confidence} shows how DEP calculates the confidence $c(T)$. $T$ is one of the two teams, and $S_T$ is the team's total score; $S_B$ and $S_R$ represent the total scores of the blue and red teams, respectively. Cross entropy measures how different two given probability distributions $P$ and $Q$ are, based on information theory: its output is the expected information content, under distribution $P$, of events whose information content is measured according to distribution $Q$. Many classification models use it as the loss function to compare the output probability distribution with the ground truth. We use cross entropy to calculate the error between our model's analysis and the actual result. \cref{eq:crossentropy} displays how our model adopts the cross-entropy loss function. The values of $q(T_n)$ are determined by which team is the actual winner.
If the blue team is the winner, the values are $q(T_{blue})=1.0$ and $q(T_{red})=0.0$; if the red team is the winner, $q(T_{blue})=0.0$ and $q(T_{red})=1.0$. At the implementation level, we used binary cross-entropy (BCE) loss instead of cross-entropy loss to simplify the code and reduce computation. For BCE loss, the confidence is the degree of certainty that the blue team is the actual winner. The calculation of the modified confidence $c'(T)$ and the BCE loss is shown in \cref{eq:s_confidence} and \cref{eq:bce}. \begin{equation}\label{eq:confidence} \vspace{3pt} c(T) = \frac{e^{S_T}}{e^{S_B} + e^{S_R}} \vspace{3pt} \end{equation} \begin{equation}\label{eq:crossentropy} \vspace{3pt} CE\ loss = -\sum_{n}{q(T_n)\log c(T_n)} \vspace{3pt} \end{equation} \begin{equation}\label{eq:s_confidence} \vspace{3pt} c'(T) = \frac{e^{S_B-S_R}}{e^{S_B-S_R} + 1} \vspace{3pt} \end{equation} \begin{equation}\label{eq:bce} \vspace{3pt} BCE\ loss = -[q(T_B)\log c'(T_B)+(1-q(T_B))\log (1-c'(T_B))] \end{equation} \subsubsection{\bb{Deterministic and ReLU loss}} Sometimes a team can overwhelm its opponent and win a match with a massive performance gap. However, in many sports where two or more teams compete, it is not uncommon for a team to win by a very narrow performance margin. In such cases, the losing team can receive a score from the model almost as high as the winning team's, and that is reasonable. However, with the Confidence--Cross Entropy pair, the loss is high whenever the model scores both teams closely, even if the winner is discerned correctly. To alleviate this problem, we designed the Deterministic--ReLU pair. With this pair, DEP does not calculate the confidence; instead, it discerns the winner only by comparing the total scores of both teams. Therefore, the discernment of the winner is deterministic. The ReLU function outputs its input if the input is positive and zero otherwise.
ReLU is usually used as an activation function in deep neural networks, placed between layers to break the linearity of successive matrix multiplications. However, we used the ReLU function as the loss function, not as an activation function. For the ReLU loss, the input is the score of the losing team ($S_L$) minus the score of the winning team ($S_W$); \textit{i.e.}, the loss is zero as long as DEP discerns correctly, no matter how different the two teams' scores are. On the other hand, the loss grows with the score difference if DEP discerns incorrectly. The definition of the ReLU loss is shown in \cref{eq:reluloss}. \begin{equation}\label{eq:reluloss} ReLU\ Loss = \begin{cases} S_L - S_W & if\ S_W \leq S_L \\ 0 & if\ S_W > S_L \end{cases} \end{equation} \subsection{Overall training process}\label{sec:overall} Our model has ten independent GRU-SLP models, since the number of players in a match is 10. At the beginning of training, the model randomly initializes its GRU-SLPs' parameter values. Then, training proceeds match by match. Each GRU-SLP takes as input the action sequence of the player it is in charge of and outputs scores for those actions. DEP adds up the output scores by team and discerns the winner by the methods described in \cref{sec:loss_function}. Then DEP calculates the loss using a loss function. After computing the loss, each GRU-SLP is trained by backpropagation. Finally, the model averages the parameter values of the trained GRU-SLPs and assigns the averaged parameters to each GRU-SLP. \cref{fig:overall} depicts the overall training process of our model. Furthermore, \cref{algo:parameters} is pseudo-code that shows how the model trains the ten independent GRU-SLPs and assigns the average values of their parameters.
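As a numerical sketch (assuming scalar team scores), the confidence, BCE loss, and ReLU loss defined above can be written as:

```python
import math

def confidence_blue(s_blue, s_red):
    """Modified confidence c'(T) that the blue team is the winner."""
    return 1.0 / (1.0 + math.exp(-(s_blue - s_red)))

def bce_loss(s_blue, s_red, blue_won):
    """Binary cross-entropy between the confidence and the actual outcome."""
    c = confidence_blue(s_blue, s_red)
    q = 1.0 if blue_won else 0.0
    return -(q * math.log(c) + (1.0 - q) * math.log(1.0 - c))

def relu_loss(s_winner, s_loser):
    """Zero when the winner's score is higher; otherwise grows with the margin."""
    return max(0.0, s_loser - s_winner)
```

Note that $e^{S_B-S_R}/(e^{S_B-S_R}+1)$ is algebraically the logistic function of $S_B-S_R$, which is what `confidence_blue` computes.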
\begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figures/Overall.png} \caption{Overall process for a match of the proposed model} \label{fig:overall} \end{figure} \begin{algorithm} \caption{Pseudo-code for the training process of the proposed model} \label{algo:parameters} \begin{algorithmic}[1] \State GRU-SLP Model \bb{subModel[10]} \For {$match=1,2,\ldots,N$} \For {$player=1,2,\ldots,10$} \State score[$player$] $\leftarrow$ \bb{subModel[$player$]}(actionSequence[$player$]) \EndFor \State score[Team1] $\leftarrow$ sum(score[1:5]), score[Team2] $\leftarrow$ sum(score[6:10]) \State prediction $\leftarrow$ PredictWinner(score[Team1], score[Team2]) \State loss $\leftarrow$ LossFunction(prediction) \State Backpropagate and update parameters of \bb{subModels} \State \bb{subModels}.parameters $\leftarrow$ average(\bb{subModels}.parameters) \EndFor \end{algorithmic} \end{algorithm} \section{Dataset}\label{sec:dataset} We collected 245,575 \textit{League of Legends}~match records using the Riot API, which the game's developer (Riot Games) provides \cite{riot_api}. The dataset contains matches from North America, Europe, South Korea, and Japan played between May 10 and 17, 2021. We obtained two JSON files for each match through the API: the match's metadata and its timeline. The metadata has basic information such as the game's version, each player's chosen champion, and the total game duration. The timeline data contains reports of significant events and snapshots at 1-minute intervals; an event report has details including the timestamp and the actor player. From the match data, we set up seven features: timestamp, champion, lane, position, distance, event, and event weight. The definition of each feature is elaborated below. \begin{description} \item[\ii{Timestamp}] is the time point at which the event occurred, expressed in milliseconds. \item[\ii{Champion}] is the champion chosen by the event's actor player.
\item[\ii{Lane}] is the lane chosen by the event's actor player. \item[\ii{Position}] represents the location on the map of the actor player when the event occurs. \item[\ii{Distance}] depicts how much the actor player is isolated from their teammates. \item[\ii{Event}] is the type of the event (such as champion kill or item purchase) and its weight. \end{description} \subsection{Champion} As mentioned in \cref{sec:lol}, a champion belongs to one or more of six roles. We used \bb{role} as an input feature instead of \bb{champion}, since \textit{League of Legends}~has over 140 champions, which would make our model consume more computing power during training. Also, the game company can add or remove champions when releasing patches. Therefore, \bb{role} is a more stable feature than \bb{champion}, applicable regardless of the game's version. We created a Champion-Role vector lookup table from the \textit{League of Legends}~game data that \ii{Riot Games} officially provides \cite{datadragon} (see \cref{tab:cham-role}). Using the lookup table, the \bb{Champion} info of events is converted to one-hot encoded \bb{Role} vectors. \begin{table}[ht] \centering \begin{tabular}{|c|cccccc|} \hline Champion & Assassins & Fighters & Mages & Marksmen & Supports & Tanks \\ \hline\hline Annie & 0 & 0 & 1 & 0 & 0 & 0 \\ \hline Kayle & 0 & 1 & 0 & 0 & 1 & 0 \\ \hline Shyvana & 0 & 1 & 0 & 0 & 0 & 1 \\ \hline \multicolumn{7}{|c|}{...} \\ \hline Vayne & 1 & 0 & 0 & 1 & 0 & 0 \\ \hline \end{tabular} \vspace{5pt} \caption{Champion-Role vector lookup table} \label{tab:cham-role} \end{table} \subsection{Lane} A player chooses a \bb{Lane} that represents the player's role in a team. There are five official lanes: \ii{Top}, \ii{Mid}, \ii{Bottom}, \ii{Support}, and \ii{Jungle}. The metadata file of a match contains the \bb{Lane} info of each player. Like the \bb{Champion} feature, we converted \bb{Lane}s to one-hot vectors as well.
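A minimal sketch of this conversion, using an illustrative subset of the Champion-Role lookup table (rows as in \cref{tab:cham-role}):

```python
# Illustrative subset of the Champion-Role lookup table and lane list.
ROLES = ["Assassins", "Fighters", "Mages", "Marksmen", "Supports", "Tanks"]
LANES = ["Top", "Mid", "Bottom", "Support", "Jungle"]

CHAMPION_ROLES = {
    "Annie":   ["Mages"],
    "Kayle":   ["Fighters", "Supports"],
    "Shyvana": ["Fighters", "Tanks"],
    "Vayne":   ["Assassins", "Marksmen"],
}

def role_vector(champion):
    """Multi-hot Role vector: 1 for each role the champion belongs to."""
    roles = CHAMPION_ROLES[champion]
    return [1 if r in roles else 0 for r in ROLES]

def lane_vector(lane):
    """One-hot Lane vector."""
    return [1 if l == lane else 0 for l in LANES]
```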
\subsection{Position} \bb{Position} is the specific location on the map of the actor of an event. In a match's timeline data, the location is expressed in x and y coordinates with values between 0 and 15,000. We normalized the values to between 0 and 1 by min-max scaling. Some event types do not have \bb{Position} data; therefore, we inferred the missing \bb{Position}s using domain knowledge and the 1-minute-interval snapshots. For example, when an item purchasing event occurs, we can presume the actor's \bb{Position} is (0, 0) or (1, 1) depending on their team, since a player must return to their base to purchase an item. \subsection{Distance} \bb{Distance} represents how much the actor player is isolated from their teammates. Since \textit{League of Legends}~is a teamplay game, cooperative and strategic play are crucial to triumph, and \bb{Distance} reflects cooperativity and tactics indirectly. For example, when teammates engage in a team fight, a player has to join them in most cases; in this situation, the player's \bb{Distance} would be small because the player acts with teammates. On the other hand, a player can also sneak through an enemy's flank to destroy a defense tower while teammates draw the opponents' attention; in contrast to the former situation, the \bb{Distance} would grow. \cref{eq:dist} shows the formula to calculate the Distance $D_i$ of player $i$. Indices 1 to 4 represent the teammates of player $i$, and $D_{nm}$ is the Euclidean distance between players $n$ and $m$. \begin{equation}\label{eq:dist} D_i = \frac{D_{i1}+D_{i2}+D_{i3}+D_{i4}}{D_{i1}+D_{i2}+D_{i3}+D_{i4}+D_{12}+D_{13}+...+D_{34}} \end{equation} \subsection{Event} A match record contains ten different event types representing a player's actions, such as purchasing an item and placing a ward. To achieve our goal, we added four new event types derived from the original events. The following list shows the original event types in black and the additional types in red.
We also converted the types of events to one-hot encoded vectors. \begin{description} \item[\ii{ITEM\_PURCHASED}] represents that a player purchased an item. \item[\ii{ITEM\_SOLD}] means that a player sold an item they have. \item[\ii{ITEM\_DESTROYED}] implies a player consumed a consumable item such as a potion or elixir. \item[\ii{SKILL\_LEVEL\_UP}] represents that a player increased a skill level. \item[\ii{LEVEL\_UP}] signifies that a player's level increased. \item[\ii{WARD\_PLACED}] means a player placed a ward on the map for a purpose such as gaining sight. \item[\ii{WARD\_KILL}] implies an action in which a player destroys an enemy's ward. \item[\ii{CHAMPION\_KILL}] represents that a player killed an enemy's champion. \item[\ii{BUILDING\_KILL}] means a player eliminated an opponent's defense tower. \item[\ii{ELITE\_MONSTER\_KILL}] represents that a player hunted an elite monster in the jungle area. \item[\red{\ii{CHAMPION\_KILL\_ASSIST}}] signifies that a player assisted a teammate in killing an enemy champion. \item[\red{\ii{CHAMPION\_KILL\_VICTIM}}] implies that a player was killed by an opponent champion. \item[\red{\ii{BUILDING\_KILL\_ASSIST}}] means that a player assisted a teammate in destroying an enemy defense tower. \item[\red{\ii{ELITE\_MONSTER\_KILL\_ASSIST}}] represents that a player assisted a teammate in hunting an elite monster. \end{description} \subsection{Event weight} Each \bb{Event} has a different weight depending on the context, even if the event types are identical. When a player kills an enemy champion, for example, there is a distinction between killing the opponent alone and enlisting the help of teammates. Therefore, we devised a formula describing the weight of each event type. We also normalized the weight values to the range from 0 to 1.
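For illustration, two of the weight formulas (given in full in \cref{tab:event_weight}) can be sketched as follows; argument names are hypothetical:

```python
def champion_kill_weight(damage_by_actor, total_damage_victim_received):
    """Share of all damage the victim received that the actor dealt;
    used for CHAMPION_KILL and its ASSIST and VICTIM variants."""
    return damage_by_actor / total_damage_victim_received

def elite_monster_kill_weight(number_of_involved_players):
    """Without per-player damage data, credit is split evenly
    among the players involved in the event."""
    return 1.0 / number_of_involved_players
```

Both quantities naturally fall in the range 0 to 1, consistent with the normalization above.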
Formulas for the \bb{Event}s are described in \cref{tab:event_weight}: \begin{table}[ht] \centering \begin{tabular}{c|c|c} Event & Weight formula & Description \\ \hline\hline \makecell{ITEM\_PURCHASED \\ ITEM\_DESTROYED} & $\frac{item\_purchase\_cost}{highest\_item\_purchase\_cost}$ & \small{\makecell{The numerator is the value of the purchased\\or destroyed item at the event,\\while the denominator is the gold value\\of the most expensive item.}} \\ \hline ITEM\_SOLD & $\frac{item\_sell\_cost}{highest\_item\_sell\_cost}$ & \small{\makecell{\ii{highest\_item\_sell\_cost} is the gold value\\that a player receives when they sell\\the most expensive item\\ of all of the game items.}} \\ \hline SKILL\_LEVEL\_UP & $\frac{current\_skill\_level}{maximum\_skill\_level}$ & \small{\makecell{\ii{maximum\_skill\_level} is the maximum level\\of the skill that a player just leveled up\\ at the event, and \ii{current\_skill\_level} is\\the skill level right after the event occurs.}} \\ \hline LEVEL\_UP & $\frac{level\_place\_rank}{number\_of\_players}$ & \small{\makecell{ \\The player's level ranking\\among all of the players decides the event weight.\\{ }}} \\ \hline \makecell{WARD\_PLACED \\ WARD\_KILL} & $\frac{ward\_bounty}{highest\_ward\_bounty}$ & \footnotesize{\makecell{\bb{Ward} is a unit that provides vision to players.\\Each ward has a bounty expressed as a gold value,\\and a player can receive it as a reward\\when they destroy the ward.\\ \ii{highest\_ward\_bounty} is the highest bounty\\value of all of the wards.}}\\ \hline \makecell{CHAMPION\_KILL \\ CHAMPION\_KILL\_ASSIST \\ CHAMPION\_KILL\_VICTIM} & $\frac{damage\_dealt}{total\_damage\_victim\_received}$ & \footnotesize{\makecell{It represents how much the killer and assists\\each contributed to killing the victim.\\Damages that the killer and assists caused\\to the victim decide the weight.\\It also represents how much the victim\\resisted until they were killed.\\Thus, the weight of the victim event is related\\to damages that the victim caused\\to the killers while resisting.}} \\ \hline \makecell{BUILDING\_KILL \\ BUILDING\_KILL\_ASSIST \\ ELITE\_MONSTER\_KILL \\ ELITE\_MONSTER\_KILL\_ASSIST} & $\frac{1}{number\_of\_involved\_players}$ & \footnotesize{\makecell{The dataset does not have information\\about specific damage dealt or received.\\Therefore, we supposed that every player\\who was involved in the event\\contributed the same as the others.}} \\ \hline \end{tabular} \vspace{5pt} \caption{Weight formula by event} \label{tab:event_weight} \end{table} \subsection{Processed feature vector} Eventually, a player action is converted to a 30-dimensional vector; a player's action sequence thus becomes a sequence of 30-dimensional vectors. Each vector has 5 ranged values (0 to 1) and 25 one-hot values. \cref{tab:features} shows the converted features. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|} \hline \bb{Feature} & \bb{Description} & \bb{Range} \\ \hline Timestamp & Time elapsed & from 0 to 1 \\ Mage & Champion's role & 0 or 1 \\ Fighter & Champion's role & 0 or 1 \\ Support & Champion's role & 0 or 1 \\ Tank & Champion's role & 0 or 1 \\ Assassin & Champion's role & 0 or 1 \\ Marksman & Champion's role & 0 or 1 \\ Top & Lane chosen by the player & 0 or 1 \\ Mid & Lane chosen by the player & 0 or 1 \\ Bottom & Lane chosen by the player & 0 or 1 \\ Utility & Lane chosen by the player & 0 or 1 \\ Jungle & Lane chosen by the player & 0 or 1 \\ x\_position & x coordinate of the player's location & from 0 to 1 \\ y\_position & y coordinate of the player's location & from 0 to 1 \\ distance & Isolation from teammates & from 0 to 1 \\ ITEM\_PURCHASED & Purchase an item & 0 or 1 \\ ITEM\_SOLD & Sell an item & 0 or 1 \\ ITEM\_DESTROYED & Consume an item & 0 or 1 \\ SKILL\_LEVEL\_UP & Level up a skill & 0 or 1 \\ LEVEL\_UP & Player level up & 0 or 1 \\ WARD\_PLACED & Place a ward & 0 or 1 \\ WARD\_KILL & Kill an enemy ward & 0 or 1 \\
CHAMPION\_KILL & Kill an enemy champion & 0 or 1 \\ CHAMPION\_KILL\_ASSIST & Assist in killing an enemy & 0 or 1 \\ CHAMPION\_KILL\_VICTIM & Get killed & 0 or 1 \\ BUILDING\_KILL & Destroy an enemy building & 0 or 1 \\ BUILDING\_KILL\_ASSIST & Assist in destroying a building & 0 or 1 \\ ELITE\_MONSTER\_KILL & Kill an elite monster & 0 or 1 \\ ELITE\_MONSTER\_KILL\_ASSIST & Assist in killing a monster & 0 or 1 \\ Event\_Weight & Weight of the event & from 0 to 1 \\ \hline \end{tabular} \vspace{5pt} \caption{Converted features} \label{tab:features} \end{table} \section{Experiment}\label{sec:experiment} To the best of our knowledge, this study is the first attempt to evaluate the contributions of individual actions; as a result, we used common metrics (KDA, Gold, and Minion kills) as baselines rather than models from previous studies. In addition, we created seven models based on the score-embedding methodology to compare their performance and determine which method is superior. The tested models are as follows: \begin{description} \item[Model 1: \ii{ReLU \#1}] Model 1 is the basic version of the model described in \cref{sec:model}. It uses ten \bb{GRU-SLP}s and the \bb{ReLU loss} function. The $h_0$ of this model (see \cref{fig:gru+slp}) is an $(L, 1, N)$-dimensional vector initialized to 0, where $L$ is the depth of the GRU layers and $N$ is the length of the GRU's hidden dimension. \item[Model 2: \ii{ReLU \#2}] Model 2 is the same as Model 1, but it uses different $h_0$ values for the winning and losing teams; \textit{i.e.}, the winner's $h_0$ is initialized to 1 while the loser's is 0. The purpose of this model is to reflect the fact that the consequence of a match (win or loss) occurs just after a player's last action. \item[Model 3: \ii{ReLU \#3}] We designed Model 3 to validate our hypothesis that the time-reversed input sequence is adequate for analyzing players' actions. Therefore, the action sequences input to Model 3 are not time-reversed but chronological.
\item[Model 4: \ii{BCE \#1}] Model 4 is a comparison model for Model 1 that uses Confidence as the discernment method and BCE loss as the loss function. \item[Model 5: \ii{BCE \#2}] Model 5 is the same as Model 2 but uses Confidence and BCE loss. \item[Model 6: \ii{MLP \#1}] We created Model 6 to validate the GRU-SLP model's effectiveness. Model 6 uses Multi-Layer Perceptrons (MLPs) to process each action independently, without taking the sequential effect into account. It uses the deterministic discernment method and ReLU loss. \item[Model 7: \ii{MLP \#2}] Model 7 is the same as Model 6 but uses Confidence and BCE loss for DEP. \end{description} \subsection{Model implementation and settings} We implemented the seven models using \ii{PyTorch} 1.10.0 with CUDA 10.1 \cite{pytorch}, and used the \ii{Adam} optimizer \cite{adam} to adjust the models' parameters as training progressed. For the GRU-SLPs, the depth of the hidden layers is 2 and the dimension of the hidden states is 15. Every model was trained for 10 epochs with a learning rate of 0.0001. To address overfitting concerns, we set aside a test set of 44,575 matches, nearly 20\% of the dataset; the test set represents data unseen by the trained model. The rest of the dataset, consisting of 200,000 matches, is the training set. Given the test set's sufficient size, an overfitted model would perform poorly in the test stage; this gives confidence that a model achieving high test performance is not overfitted. Also, to choose the best-trained model, we ran a validation stage for each epoch. We extracted the validation data from the training set; its size is 5\% (10,000 matches) of the training set. \subsection{Discernment accuracy} First, we checked the discernment accuracy of the models.
The accuracy represents how well the models' scores reflect the contribution of each action, since we trained the models to give high scores to actions highly related to winning. \cref{tab:discern_accuracy} shows the discernment accuracy of the designed models. \begin{table}[ht] \centering \begin{tabular}{c|c|c|c|c} & \bb{Accuracy} & \bb{Precision} & \bb{Recall} & \bb{F1 score} \\ \hline Model 1 & 99.5634\% & 99.5587\% & 99.5886\% & 0.995736 \\ Model 2 & \bb{100\%} & \bb{100\%} & \bb{100\%} & \bb{1.0} \\ Model 3 & 99.4800\% & 99.3884\% & 99.5971\% & 0.994927 \\ Model 4 & 99.4712\% & 99.5154\% & 99.4514\% & 0.994834 \\ Model 5 & \bb{100\%} & \bb{100\%} & \bb{100\%} & \bb{1.0} \\ Model 6 & 99.4668\% & 99.4941\% & 99.4643\% & 0.994792 \\ Model 7 & 99.5063\% & 99.5285\% & 99.5071\% & 0.995178 \\ \hline Baseline (KDA) & 93.2090\% & 93.2399\% & 93.5156\% & 0.933775 \\ Baseline (Gold) & 94.5979\% & 95.1655\% & 94.2356\% & 0.946983 \\ Baseline (Minion kills) & 65.5381\% & 67.1973\% & 63.8623\% & 0.654874 \\ \hline \end{tabular} \vspace{5pt} \caption{Discernment accuracy, precision, recall, and F1 score for each model} \label{tab:discern_accuracy} \end{table} Models 2 and 5 show 100\% discernment accuracy when the $h_0$ of the winning and losing teams are initialized differently, regardless of the loss function. It is not surprising that these two models discern flawlessly, since they already knew the ground truth before discernment was executed: they receive the match outcome as input through $h_0$. However, accurately discerning outcomes is only a necessary condition for this study's purpose, not a sufficient one. We discuss this topic in \cref{sec:feature-score}. Apart from those two models, Model 1 displays the best discernment performance. In addition, all designed models achieve accuracy exceeding 99\%, while discernment with common metrics shows under 95\% accuracy.
\subsection{Player ranking by models}\label{sec:ranking} Many game communities use common metrics such as KDA, Gold (earned/spent), and Minion kills to assess the individual performance of players. Therefore, we compared post-match player rankings by the common metrics and by the model scores. To avoid cluttering the paper with too many graphs, we analyze the four models with the highest discernment accuracy (\textit{i.e.}, Models 1, 2, 5, and 7). \cref{fig:rankings} displays the relation between players' model-score rankings and the rankings by typical indicators (KDA, Gold, Minion kills, and their average). The X-axis denotes players' rankings by the typical indicators, and the Y-axis represents players' rankings by the model scores. The color and size of each dot show how many players received the corresponding pair of ranks. Therefore, if many players get the same rank from both the typical indicators and the model scores, the circles on the diagonal become bigger and redder. All four models show a positive correlation between the model score and the KDA ranking. Models 1 and 7 also correlate positively with the Gold ranking, whereas Models 2 and 5 show a relatively low correlation with it. In addition, the graphs show that Minion kills are less related to the models' scores than the other metrics. Furthermore, \cref{fig:rankings} also demonstrates that the ranking by the average of the common metrics has a nearly linear correlation with the models' scores. This indicates that our models reflect individual performance evaluation by common metrics well. There are some differences, however; since the common metrics are not applicable in every case, we validate our models through a more detailed analysis.
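One way to summarize the rank agreement visualized in \cref{fig:rankings} with a single number is a Spearman correlation between the two rankings; this is our illustrative summary, not a statistic reported in the figure. The sketch assumes no tied values, for which the classic $d^2$ formula is exact.

```python
def to_ranks(values):
    """Rank 1 = largest value; assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def spearman(model_scores, metric_values):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)) (no ties)."""
    ra, rb = to_ranks(model_scores), to_ranks(metric_values)
    n = len(ra)
    d2 = sum((a - b) ** 2 for a, b in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A value of 1.0 corresponds to all dots sitting on the diagonal of \cref{fig:rankings}; values near 0 correspond to the weak agreement seen for Minion kills.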
\begin{figure}[hp] \centering \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model1.pdf} \vspace{-10pt} \caption{Model 1} \label{fig:ranking_model1} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model2.pdf} \vspace{-10pt} \caption{Model 2} \label{fig:ranking_model2} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model5.pdf} \vspace{-10pt} \caption{Model 5} \label{fig:ranking_model5} \end{subfigure} \hfill \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model7.pdf} \vspace{-10pt} \caption{Model 7} \label{fig:ranking_model7} \end{subfigure} \caption{Player ranking counts by model and common metrics} \label{fig:rankings} \end{figure} \subsection{Under/overestimated players}\label{sec:under_over} As mentioned in \cref{sec:lol}, players take charge of one of the five \bb{lanes}, each with a specific function in a team. However, the common metrics are not suitable for estimating the contribution of players in every lane. For example, the general mission of the support lane is to assist the other lanes, not to kill enemy champions. Also, a support lane champion is often targeted in team fights because of their practical supporting skills, and is quickly killed because of their fragility. Therefore, a player in charge of the support lane usually gets a low KDA, which does not mean that the player contributed less than their teammates. Support lane players are thus frequently underestimated by the KDA metric. We investigated the differences between the player rankings by our models and by the common metrics, and analyzed whether the differences were reasonable.
For the differences, we defined an \ii{underestimated} player (underestimated by common metrics) as one who ranks more than \bb{five places higher by model score than by common metrics}, and an \ii{overestimated} player (overestimated by common metrics) as one who ranks more than \bb{five places lower by model score than by common metrics}. \begin{figure}[p] \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_avg.pdf} \caption{Avg. of common metrics} \label{fig:under_over_avg} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_kda.pdf} \caption{KDA} \label{fig:under_over_kda} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_gold.pdf} \caption{Gold} \label{fig:under_over_gold} \end{subfigure} \hfill \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_creep.pdf} \caption{Minion kills} \label{fig:under_over_creep} \end{subfigure} \caption{Number of under/overestimated players by common metrics} \label{fig:under_over} \end{figure} \cref{fig:under_over} displays the number of under/overestimated players by common metrics. \cref{fig:under_over_avg} to \cref{fig:under_over_creep} are statistics separated by metric, and each figure consists of graphs separated by model. The red bars represent overestimated players, and the blue bars represent underestimated players. \cref{fig:under_over_kda} shows that KDA underestimates many support lane players compared with Models 1 and 7. Given the support lane players' mission mentioned above, it is reasonable that the models evaluate support lane players higher than the KDA metric does. On the other hand, every model gives low scores to players who took the support and jungle lanes. For support lane players, it is a general strategy that the support player concedes the advantage of killing minions to teammates.
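The five-place rule defined above can be expressed directly; ranks follow the convention that 1 is best, and the threshold of 5 matches the definition in the text.

```python
def estimate_label(model_rank, metric_rank, threshold=5):
    """Classify a player by the gap between their model-score rank and
    their common-metric rank (rank 1 = best, per the definition above)."""
    gap = metric_rank - model_rank  # positive: the model ranks the player higher
    if gap > threshold:
        return "underestimated"   # the common metric undervalues the player
    if gap < -threshold:
        return "overestimated"    # the common metric overvalues the player
    return "consistent"
```

For example, a support player ranked 2nd by model score but 9th by KDA would be labeled underestimated.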
If a support player monopolizes the gains from killing minions, the growth of the teammates whose role is to battle enemies slows down, and the possibility of losing becomes higher. Meanwhile, the mission of a jungle lane player is to clear field monsters and gather the benefits from them. If a jungle player spends too much time killing minions, the enemy jungle player can monopolize the field monsters' benefits and slow the teammates' growth. Therefore, the differences between our models' rankings and the minion kill count ranking are rational. \subsection{Feature-score correlation analysis}\label{sec:feature-score} To achieve our goal of measuring a player's contribution regardless of whether they won or lost, the embedding model has to give similar scores to the same actions in similar contexts. Therefore, we examined the score distribution by the action features. To visualize the relationship between the model scores and all features at once, we reduced the feature dimension to 1 using Principal Component Analysis (PCA), then drew graphs displaying the relationship between model scores and the dimension-reduced features (see \cref{fig:pca_relation}). We can consider that actions with the same dimension-reduced feature value have similar worthiness in a match context. Therefore, to fulfill the goal of this study, actions with the same dimension-reduced feature value have to receive similar scores from the trained model, without bias from the matches' outcomes. In \cref{fig:pca_relation}, the green dots represent the winners' scores plotted against the dimension-reduced feature values, and the yellow dots represent the losers' scores. If the model fulfills our purpose, the clusters of green and yellow dots will form similar shapes, indicating that the model scores actions by their value, unaffected by the matches' outcomes.
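The dimension-reduction step can be sketched with plain NumPy (PCA to one component via SVD); the feature matrix below is a random placeholder, not our dataset.

```python
import numpy as np

def pca_1d(features):
    """Project feature vectors onto their first principal component."""
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)                      # center each feature column
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]                            # one 1-D coordinate per action

X = np.random.default_rng(0).normal(size=(100, 8))  # placeholder action features
coords = pca_1d(X)                                  # x-values for the scatter plot
```

Plotting model scores against these 1-D coordinates, separated by match outcome, yields the green/yellow scatter described above.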
On the other hand, if the green and yellow clusters form significantly different shapes, we can consider that the model's scoring function is heavily affected by the outcomes of the matches. As we can see from the graphs, Model 2 and Model 5 give significantly different scores to winners and losers. Therefore, those two models do not suit our purpose despite their 100\% discernment accuracy, since they evaluate a player's actions by considering the outcome of a match, not the actions and contexts themselves. On the other hand, Models 1 and 7 give similar scores for similar dimension-reduced feature values. Therefore, although their discernment accuracy is slightly lower than that of Models 2 and 5, \cref{fig:pca_relation} shows that these two models are better suited to the purpose of this study. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model1.pdf} \caption{Model 1} \label{fig:pca_model1} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model2.pdf} \caption{Model 2} \label{fig:pca_model2} \end{subfigure} \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model5.pdf} \caption{Model 5} \label{fig:pca_model5} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model7.pdf} \caption{Model 7} \label{fig:pca_model7} \end{subfigure} \caption{Relationship between models' score and features} \label{fig:pca_relation} \end{figure} \section{Discussion}\label{sec:discussion} In the experiment section (\cref{sec:experiment}), we examined the winner-discernment accuracy of the models to see whether they accurately reflect a player's entire contribution to the team's win. The results (\cref{tab:discern_accuracy}) show that our models accurately discern the outcome of a game. Then we compared our models to common metrics to see whether they estimate a player's performance properly.
Despite minor differences, the models' outputs are comparable to the common metrics. Furthermore, as we saw in \cref{sec:under_over}, the discrepancies between our models (particularly Models 1 and 7) and the common metrics are rational and explainable. Finally, we examined the feature-score correlation to see whether the models give similar scores to actions with similar feature values (worthiness in a match context). \cref{fig:pca_relation} depicts that Model 2 and Model 5 give significantly different scores to the actions of winning and losing players despite similar feature values; in contrast, Models 1 and 7 give similar scores to similar actions regardless of the match's outcome. This last analysis shows that Models 1 and 7 suit our research purpose better than Models 2 and 5, despite their slightly lower discernment accuracy. There are limitations to this study. First, the RiotAPI that we used to construct the dataset provides only limited types of actions. Our dataset does not contain a player's decisions and specific acts in detail, such as skill usage and damage dealt to defense towers. Therefore, our approach has not been tested on a dataset that contains very detailed action information. Validating our model on a more detailed dataset will be future work; such a dataset could be collected with advanced methods such as replay video analysis. Also, our model cannot be used for real-time services such as e-sports commentary, because we designed it for post-match evaluation. Despite these limitations, our models' strength is their generality and maintainability. Because the actions used for scoring are generic enough, the trained models do not need to be retrained every time the game is updated by adding a new champion or item or by rebalancing the game system.
Furthermore, although the actions and attributes defined for \textit{League of Legends}~do not precisely fit other games, the fundamental structure of our model (which is the pivotal contribution of this paper) can be used for most games, including different genres such as First-Person Shooters (FPS). The only task developers need to finish before using our model is to define actions and action attributes that represent their game well. \section{Conclusion}\label{sec:conclusion} Estimating individual performance for fair rewards and analyzing past behaviors to improve a player's skill and team-level tactics are fascinating subjects in the study of team-based competitive sports. This study introduced an embedding approach that scores a player's in-game actions by how much they contributed to winning. The idea of our approach came from the word-embedding technique NNLM. This approach allows quantitative evaluation of a player's respective actions, which was not available from common metrics and previous studies. With quantitative values of players' contributions to victory, MOBA games can use our model in many ways. First, our model can adjust the amount of MMR that increases or decreases according to the match outcome, alleviating the side effects of team/result-oriented MMR systems (the exact amount of MMR that our model adjusts could vary from game to game, according to the MMR formula and the developers' intention). The second application is to display contribution scores on the games' season leaderboards or to inform players at the end of matches. As a result, games can honor players who have made significant contributions to their teams' accomplishments but have been overlooked by traditional performance criteria. This can avoid situations where players unreasonably blame their teammates for low KDA or gold despite their sufficient contributions. Finally, the contribution scores can be utilized to maintain professional players' records in the professional e-sports business.
Furthermore, our model's applicability is not limited to MOBAs; the GRU-SLP and DEP structures can be used in various genres. \section{Introduction}\label{sec:intro} `Multiplayer Online Battle Arena (\bb{MOBA})' is a genre of team competition strategy games in which two five-player teams face off to destroy each other's base. MOBA is one of the most successful game genres today. In particular, more than 70 million people watched the 2021 world championship of \textit{League of Legends}, one of the MOBA games \cite{championship_viewer}, and about 180 million players were enjoying the game in 2022 \cite{active_players}. Most MOBA games have a `Rank' system that evaluates their players and groups them at similar levels.
Players can achieve a higher Rank by increasing their `Match Making Rating (\bb{MMR}).' The MMR is designed to match players who have similar skill levels. While most game companies do not publicly disclose the exact formulas of their MMR systems, game communities surmise that games such as \textit{League of Legends}, \textit{Dota 2}, and \ii{Honor of Kings} use revised versions of the Elo rating system \cite{elo_lolboosts, elo_dota2freaks} used in chess leagues \cite{wiki_elo}. In the Elo rating mechanism, a player's MMR increases or decreases according to a match's result (win/lose). The system creates a competitive environment and gives players a sense of achievement when they reach a higher rank. It is reasonable that a team's achievement (win/lose) directly affects a player's MMR, since MOBA games are team-playing games. In many cases, players on the losing team performed less valuable acts than those on the winning team. However, some players perform remarkably even when their team is losing. On the other hand, some players benefit from their teammates' performance without their own skill and effort. Therefore, estimating individual performance in team-play matches is a challenging and attractive topic in the sports domain. In addition, the competitive environment and a fall in rank occasionally trigger undesired side effects such as loss of motivation and toxic behavior. For example, players can encounter an (intentionally or unintentionally) incompetent teammate. In that case, the players usually exercise `peer pressure' \cite{heneman1995balancing} on the unskilled player because they worry about their rank falling if they lose. The peer pressure strategy sometimes works; however, it generally provokes rage and offensive reactions from the pressured player \cite{kou2014playing}. Hence, reflecting individual contributions to the team's achievement in the MMR can relieve the side effects of the current team-based reward system.
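For reference, the classic Elo update that these MMR systems are believed to derive from works as follows; the K-factor of 32 is a conventional chess value, not a value used by any of these games.

```python
def elo_update(rating, opponent_rating, won, k=32):
    """One Elo step: compute the expected score from the rating gap,
    then move the rating toward the actual result (1 = win, 0 = loss)."""
    expected = 1 / (1 + 10 ** ((opponent_rating - rating) / 400))
    return rating + k * ((1 if won else 0) - expected)
```

Note that the update depends only on the match result and the rating gap, never on what the individual player did in the match; this is exactly the limitation discussed above.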
Many game communities use metrics to evaluate a player's individual performance (see \cref{sec:metrics}). However, these metrics have limitations: they evaluate a player's contribution over a whole match, but do not evaluate the player's respective actions. Also, the metrics do not consider the context in which an indicator changes. For example, a player can die for a strategic purpose, such as delaying the enemy's march until teammates are ready to fight, or sacrifice themselves to save teammates from a massacre. Therefore, some outlier players are underestimated or overestimated by these metrics. We propose \bb{an embedding approach that embeds a player's respective actions into quantified scores.} Our approach uses combinations of recurrent neural networks and multi-layered perceptrons and is inspired by the concept of the neural network language model (NNLM), a word-embedding approach. The main contributions derived from our approach are as follows: \begin{enumerate} \item The proposed model allows the game system to quantify an individual player's contribution to the team's victory, alleviating the disadvantages of the current MMR systems. \item The model can discover and re-evaluate players who are over/underestimated by common performance metrics. \item The model can guide a player to improve their skills by debriefing a match analysis with the quantified scores of their respective actions. \end{enumerate} This paper is organized as follows. First, we describe the basic rules of \textit{League of Legends}~and introduce common metrics for evaluating a player's individual performance in \cref{sec:lol}. Next, \cref{sec:related_works} introduces previous studies on evaluating a player's contribution and provides background on the NNLM and RNN that underlie our model. Then, \cref{sec:model} and \cref{sec:dataset} present our proposed model and the dataset we used to train and validate it, respectively.
In \cref{sec:experiment}, we evaluate our model's performance by comparing it with common metrics as baselines and through more detailed analyses. In \cref{sec:discussion}, we review the experiment results and discuss the limitations and strengths of this study. Finally, in \cref{sec:conclusion}, we conclude the paper by noting its contributions and the prospective uses of the proposed model. \section{League of Legends}\label{sec:lol} \textit{League of Legends}~ is a MOBA game that many players around the world enjoy every day. The primary system of \textit{League of Legends}~ is similar to that of other MOBA games: ten players meet in a single match and form two teams of five players each. Each team has a base called a \bb{Nexus}, and the two bases sit at diagonally opposite corners of the map (see \cref{fig:map} \cite{lol_official}). The team whose base is in the lower-left corner is \bb{blue}, and the opposite team is \bb{red}. The ultimate goal of both teams is to destroy the opponent's Nexus while protecting their own. There are three main roads to reach the opposing base, and \bb{defense towers} belonging to both teams prevent opposing players from accessing their Nexus. \subsection{Champions} In a match, a player chooses a playable character called a \bb{champion}. The game has over 140 champions, and a champion belongs to one or more of the following \bb{roles}: \ii{Assassins}, \ii{Fighters}, \ii{Mages}, \ii{Marksmen}, \ii{Supports}, \ii{Tanks}. Assassins are killers with excellent damaging ability but are vulnerable to being killed because of their low durability. Fighters, on the other hand, have both good damaging ability and survivability; they engage in short-ranged combat. Tanks absorb and endure enemy attacks in place of their teammates thanks to their extraordinary durability. Marksmen make long-ranged attacks from behind their teammates. Mages are champions who damage, debuff (weaken a target), or disturb the enemy with their magic.
Last, Supports help teammates with various skills, such as healing and buffs. These are the typical character classes of most role-based games like RPGs and MOBAs. \subsection{Lanes} \bb{Lane} refers to the three main routes of the map: \ii{Top}, \ii{Mid}, and \ii{Bottom}, which indicate the upper, middle, and lower roads, respectively. Meanwhile, there is another term, \bb{Position}, which denotes a player's role in their team. The game officially defines five positions: \ii{Top}, \ii{Mid}, \ii{Bottom}, \ii{Jungle}, and \ii{Support}. The players who take the Top, Mid, and Bottom positions are responsible for defending against the enemy's march in their corresponding lane and pushing the line of combat toward the opponent's base. The Jungle player, on the other hand, roams the jungle area hunting monsters and joins engagements in the lanes; occasionally they assassinate enemy champions by surprise. The Support player aids the other teammates with assisting skills such as buffs or healing; they usually accompany the Bottom player. Three terms are shared between Lane and Position--Top, Mid, and Bottom--so the game community often uses the terms Lane and Position interchangeably. In this paper, we use the term \bb{Lane} rather than Position because we use the term Position for another meaning. \subsection{Minions and Monsters} \bb{Minions} are NPC soldiers that belong to one of the two teams. Both teams' minions are continuously generated from their base and charge toward the opponent's base, following the three main roads and attacking enemy characters and structures. \bb{Monsters}, on the other hand, are NPCs that do not belong to any team. Players can collect gold and experience by slaying monsters in the jungle areas. In particular, \ii{Baron Nashor} and \ii{Drake} are called \bb{Elite monsters}; they give significant bonuses (such as buffs) to the team that hunts them.
\subsection{Kill, Support, and Death} While struggling to achieve the team goal and interfering with the enemy, players can kill opponent players and aid teammates. The game records the player who dealt the final damage to the victim as the killer. A player who damaged the victim within the last 10 seconds before the victim died receives an assist. Meanwhile, players killed by the enemy are not eliminated from the game; instead, they respawn after a certain time has elapsed since their death. Victims can therefore continue the match; however, their team is at a disadvantage from the shortage of forces while the victims wait to respawn. \subsection{Gold, Item, and Experience} \bb{Gold} is the currency of \textit{League of Legends}~ that players can use to buy items. Players can collect gold through several actions, such as killing enemy champions and minions, hunting monsters, and destroying enemy structures. Items provide tactical functionality, such as healing the owner or stunning the enemy. Players also gain \bb{experience} from the actions mentioned above and level up with the accumulated experience, which enhances their abilities and unlocks additional skills. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figures/lol_map.png} \caption{Quarter view of map of \textit{League of Legends}.} \label{fig:map} \vspace{10pt} \end{figure} \subsection{Common metrics to evaluate individual performance}\label{sec:metrics} As mentioned in \cref{sec:intro}, many game communities commonly reference certain metrics to estimate a player's performance (hereinafter \bb{common metrics}). The most common metrics are the Kill-Death-Assist ratio (KDA), Gold, and Creep score. \bb{KDA} represents the ratio of a player's kills and assists to their deaths. \bb{Gold} represents the amount of gold that a player earned in a match. \bb{Creep score} denotes the number of monsters and minions that a player hunted.
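The KDA ratio above is commonly computed as (kills + assists) / deaths; the handling of a zero-death game varies between communities, and treating it as one death below is our assumption, not an official rule.

```python
def kda(kills, deaths, assists):
    """Kill-Death-Assist ratio; a zero-death game is scored as if the
    player had died once (one common community convention, assumed here)."""
    return (kills + assists) / max(deaths, 1)
```

As discussed later, a single end-of-match number like this cannot tell a strategically useful death from a wasted one.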
A player records 1 creep score for hunting a minion or a regular monster and 4 creep scores for hunting a giant monster. \section{Related Works}\label{sec:related_works} \subsection{Win prediction} Predicting the outcome of a match is one of the most actively researched areas. The topic is interesting in itself; moreover, it is important because many studies that estimate player contribution build on prediction results. Most studies use one or both of two types of information to predict outcomes: pre-match information and mid-match information. Pre-match information denotes information collected before the match starts, such as the champions that players selected and the players' gameplay history. Mid/post-match information, on the other hand, denotes information generated during a match, including KDA, gold, and creep score. Eight of the win prediction studies we surveyed use only pre-match information. Conley \textit{et al.}~applied players' hero choices to logistic regression and K-nearest neighbors models to predict match outcomes and to recommend better hero choices; the logistic regression model shows up to 71\% prediction accuracy \cite{conley2013does}. Kalyanaraman \textit{et al.}~introduced a regression model that uses combinations of heroes in \textit{Dota 2}; their model shows 69.42\% prediction accuracy \cite{kalyanaraman2014win}. Song \textit{et al.}~also used hero draft data of \textit{Dota 2}~in a logistic regression model to predict match outcomes and achieved a maximum of 58\% accuracy. To improve the model's performance, the authors added hero combo features, with which the model recorded 61\% accuracy; however, they do not explain the details of the hero combo features \cite{song2015predicting}.
Semenov \textit{et al.}~also proposed a prediction model for \textit{Dota 2}~that receives the hero draft as input, based on Na\"ive Bayes, logistic regression, and gradient boosted decision trees. Their models predicted game results with 70.6\%, 67\%, and 66\% accuracy for average, high-skilled, and very-high-skilled player games, respectively \cite{semenov2016performance}. Hanke \textit{et al.}~suggested an MLP model that uses hero draft data as input for prediction in \textit{Dota 2}; their model displayed 88.63\% prediction accuracy. They also proposed a hero recommendation system based on frequent itemset mining with the apriori algorithm; the win percentile of the recommended hero combinations was 74.9\% \cite{hanke2017recommender}. Andono \textit{et al.}~used a Na\"ive Bayes model with the AdaBoost algorithm to predict match outcomes of \textit{Dota 2}. The model records 80\% prediction accuracy with hero metadata, such as heroes' types and attributes. However, the dataset used for the experiment only includes matches with one human player and four AI players per team; it is therefore difficult to generalize the results to common matches consisting of 10 human players \cite{andono2017dota}. Do \textit{et al.}~applied players' records on champions, such as the win rate with a champion and the total number of games played with the champion, as features of their deep neural network model. They trained the model with 5,000 \textit{League of Legends}~matches and achieved 75.1\% prediction accuracy \cite{do2021using}. Lee \textit{et al.}~proposed a personalized champion recommendation system for \textit{League of Legends}~and \textit{Dota 2}~based on the predicted win rate of a player's preferred champions. The authors divided the win prediction model into player-level and match-level embedding networks.
The player-level embedding network transforms a player's stats into a vector; the vector is input to the match-level embedding network, which uses champion and role data together with the player's stats to predict the match outcome. The model recorded its highest prediction accuracy of 55.35\% on \textit{League of Legends}~and 57.55\% on \textit{Dota 2}~\cite{lee2022draftrec}. Six studies apply mid- or post-match information to predict the outcome of a match. Rioult \textit{et al.}~proposed a binary classification model that uses topological clues and achieved 85\% precision and 90\% recall; however, as the authors acknowledge, the amount of replay data they used is insufficient \cite{rioult2014mining}. Kinkade \textit{et al.}~introduced two approaches that use pre-match and post-match information, respectively. First, they used hero matchup and hero synergy/countering data as pre-match information; their pre-match model indicated up to 73\% accuracy. Their post-match model uses gold, experience, and kill numbers accumulated per minute, and shows a maximum of 99.81\% accuracy \cite{kinkade2015dota}. Hodge \textit{et al.}~proposed a real-time prediction model for professional matches of \textit{Dota 2}, using logistic regression, random forests, and LightGBM \cite{hodge2019win}. They achieved 74.59\% accuracy with recorded game data; moreover, their model reached 85\% accuracy when predicting the outcome at five minutes of elapsed game time with live data from \textit{the ESL One Hamburg 2017 Dota 2 league}. Yang \textit{et al.}~attempted not just to predict the outcome of matches but also to interpret which features were essential in deriving the outcome, using 184 thousand matches of \textit{Honor of Kings}~\cite{yang2022interpretable}. Their model consists of two stages, spatial and temporal.
First, six features, including pre-match and mid-match data, are processed in the spatial stage with logistic regression and feed-forward networks. The temporal stage then draws the final prediction from the results of the spatial stage. Their TSSTN model shows 54.6\% prediction accuracy at the start of a match and a maximum of 78.5\% accuracy after 10 minutes have elapsed. Another study by Yang \textit{et al.}~uses mid-match statistical information and the occurrence of champion and boss monster kill events with a bidirectional LSTM/transformer model and fully connected neural networks to predict the match outcome and the killer and victim of the next champion and boss monster kill events \cite{yang2022predicting}. Their model achieves 70.8\% win prediction accuracy and predicts the killer of the next champion and boss monster kill with a maximum of 94.4\% and 28.1\% accuracy, respectively. Zhao \textit{et al.}~proposed a real-time prediction model, Winning Tracker \cite{zhao2022winning}. They reconstructed player and tower data of \textit{League of Legends}~matches into confrontation and individual movement information to predict the match outcome and the next tower destruction event. The model achieved an F1-score of up to 0.901 for tower destruction prediction and 0.889 for match outcome prediction. \subsection{Player contribution} Estimating the performance of individual players is an attractive topic in the sports analysis domain, including e-sports. The studies that attempt to evaluate the contributions of MOBA players share a similar goal with this study. Suznjevic \textit{et al.}~proposed the Application Context-Aware Rating algorIthm (\bb{ACARI}), an algorithm that adjusts the MMR system of \ii{Heroes of Newerth} (HoN) \cite{suznjevic2015application}. They created hero vectors representing how well a \bb{hero} (a playable character of HoN) fits the various \bb{roles} that a player can take in a match.
Then they chose parameters related to a player's performance; a player's share of each parameter among the teammates represents the player's contribution to the team's achievement. The authors also modified the contributions with two weights. One weight is defined by correlations between a hero and each parameter, derived from domain knowledge. The other comes from the parameter percentiles over statistics of 10,000 matches. Finally, they adjusted the MMR increase/decrease after a match with the estimated individual performances. Cavandenti \textit{et al.}~proposed a pattern analysis model that helps novice players improve their skills by analyzing their behaviors in \textit{Dota 2}~\cite{cavadenti2016did}. They mined behavioral patterns of skillful players and compared them to novice players' actions. Their anomaly-detection-like model is helpful for analyzing whether a novice player's action is close to the standard play style of skillful players. However, their model cannot explain whether the skillful players' play patterns actually help lead the team to victory, which limits its ability to estimate players' individual performances. Sapienza \textit{et al.}~considered only KDA as a performance indicator for an individual player \cite{sapienza2018individual}. The authors reveal that a player's win rate and performance tend to decrease when the player plays the game continuously. Jiang \textit{et al.}~classified \textit{League of Legends}~players into four categories (maverick, generalist, specialist, and niche) from 148,000 matches using the concepts of diversity and conformity \cite{jiang2021wide}. They used players' champion selection and movement to classify them. The authors also analyzed the players' performance in each category by KDA, an approximate value of MMR, and the win rate.
Ramler \textit{et al.}~analyzed whether performance differences exist between the genders of \textit{League of Legends}~champions \cite{ramler2021investigating}. The study quantifies a champion's performance with end-game statistics such as total minion kills, gold, and KDA. Maymin distinguished smart kills from useless deaths using the win probability transition before and after a kill event \cite{maymin2021smart}. The former is a kill that increases the chance of the team's victory, and the latter is a death that decreases the probability of winning. They estimated the middle-of-game win probability using a logistic regression model; the input features were the champion kill count, the number of remaining towers, the elite monster kill count, and the elapsed game time in minutes. They also analyzed how additional features, such as gold and survivability, affect the team's winning probability. Finally, Demediuk attempted to define a performance index that quantifies individual players' performances with \textit{Dota 2} match data \cite{demediuk2021performance}. They used end-game data such as XP, level, and KDA, and calculated a weighted sum of the features according to the player's playstyle, classifying each player into one of 10 types. As reviewed above, many studies attempt to assess players' contributions. However, most use result-oriented measurements and do not attempt to assess the respective actions of players, with a few exceptions such as Maymin's study. We instead intended to explore an area less covered by former studies: in this study, we attempt to assess the contributions of individual actions to a team's victory. \subsection{Word embedding models} Word embedding is the conversion of text words into $N$-dimensional dense vectors.
Embedded vectors allow a model to process words and make it possible to represent relations between words mathematically. The Neural Network Language Model (\textbf{NNLM}) is a word embedding model that uses a feed-forward neural network \cite{nnlm}. The \textbf{NNLM}~model consists of an input, a projection layer, a hidden layer, and an output. The model is trained to predict the next word when it takes a sequence of $1 \times V$ dimensional word vectors within the context length $N$ as input. The projection layer is a $V \times D$ dimensional lookup table in which each row vector is matched 1:1 with the $V$ words in the vocabulary. The vectors in the projection layer are randomly initialized and are transformed into embedded word vectors representing the $V$ words as training proceeds. \cref{fig:nnlm} shows the basic process of \textbf{NNLM}. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figures/NNLM.png} \caption{Basic process of NNLM} \label{fig:nnlm} \vspace{10pt} \end{figure} \subsection{Recurrent neural network} A Recurrent Neural Network (RNN) is a type of neural network model for dealing with sequence data, in which the outcome of one time point influences the outcome of the next. In an RNN model, the neural network at time $t$ takes the sequence data at $t$ and the network's output at $t-1$ as inputs. There are four representative types of RNN models: \ii{one-to-one}, \ii{one-to-many}, \ii{many-to-one}, and \ii{many-to-many}. \cref{fig:rnn} shows the standard figure and the four types of RNN.
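The NNLM projection-layer lookup described above can be sketched in PyTorch; the sizes $V$, $D$, and the context length below are illustrative values chosen for the sketch, not taken from \cite{nnlm}:

```python
import torch
import torch.nn as nn

# Illustrative sizes (not from the original NNLM paper):
# V words in the vocabulary, D-dimensional embeddings,
# N context words used to predict the next word.
V, D, N = 1000, 32, 4

projection = nn.Embedding(V, D)         # the V x D lookup table
hidden = nn.Linear(N * D, 128)
output = nn.Linear(128, V)              # unnormalized scores over the vocabulary

context = torch.randint(0, V, (1, N))   # indices of N context words
e = projection(context).view(1, N * D)  # look up and concatenate N embeddings
logits = output(torch.tanh(hidden(e)))
print(logits.shape)                     # torch.Size([1, 1000])
```

Training such a network to predict the next word updates `projection.weight`, whose rows become the embedded word vectors.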
\begin{figure}[ht] \vspace{7pt} \centering \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN.png} \caption{standard figure} \label{fig:rnn_std} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-1to1.png} \caption{one-to-one} \label{fig:rnn_1to1} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-1toN.png} \caption{one-to-many} \label{fig:rnn_1ton} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-Nto1.png} \caption{many-to-one} \label{fig:rnn_nto1} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth, height=1.8\linewidth]{figures/RNN-NtoN.png} \caption{many-to-many} \label{fig:rnn_nton} \end{subfigure} \caption{Representative Types of RNN} \label{fig:rnn} \end{figure} Long Short-Term Memory (\bb{LSTM}) is an improved model that compensates for the weaknesses of the standard RNN model \cite{lstm}. In the standard RNN model, information from previous steps gradually disappears if the information's weight is too small. In the opposite case, if the weight is too large, the previous information's influence becomes excessively strong. LSTM overcomes this using cell states, in which three gates determine whether the previous time point's data will be dropped or remembered. The Gated Recurrent Unit (\bb{GRU}) is a modified type of LSTM that uses only two gates \cite{gru}. By reducing the number of gates in a cell, GRU aims to decrease the computational workload of training. \section{Proposed model}\label{sec:model} In this paper, we propose an action-embedding model that converts a player's actions into quantitative scores. We borrow NNLM's word embedding idea. The difference between our model and NNLM is the presence or absence of a projection layer.
The word embedding approach has a 1:1 matching relation between words and embedded vectors; therefore, NNLM has a projection layer and trains the layer's values. However, the relationship between a player's action types and scores is not one-to-one. For example, when a player kills an enemy's champion, the action is evaluated differently depending on context, such as the timing and the victim's remaining health. For this reason, our model does not use a projection layer. Instead, our model aims to obtain neural network parameters that output adequate scores when the action features are given; \textit{i.e.}, our model's network parameters play a role similar to NNLM's projection layer. Meanwhile, a player's action has causal relationships with the previous and following actions, so we can evaluate a cause action through its resulting action. For example, if a player killed an enemy champion after purchasing an item, the act of purchasing the item was likely beneficial. We combined the score-embedding model with an RNN model to reflect the causal relationships between actions. Moreover, unlike regular RNN models, our model takes player actions in a time-reversed sequence so that a previous action can be analyzed through the following one. Our model consists of two main parts. One converts player action sequences into scores, and the other discerns the winner by comparing the two teams' scores and calculates the loss to execute backpropagation. \cref{sec:rnnslp} and \cref{sec:dep} describe the details of each part of the model. \subsection{RNN and single layer perceptron part}\label{sec:rnnslp} The first part of our model consists of an RNN and a Single Layer Perceptron (\bb{SLP}). We used GRU as the RNN. When a player's action sequence is input to the GRU, each output (hidden state) is passed to the next GRU step and to the corresponding SLP. Each SLP outputs a 1-dimensional vector from the $N$-dimensional hidden state of its GRU step.
Finally, each SLP output becomes a real-valued score between $-1$ and $1$ through the hyperbolic tangent (\textit{tanh}) function. An action converted into a positive score is beneficial to winning; a negative score indicates a harmful action that contributed to defeat. \cref{fig:gru+slp} depicts the operation of the combined GRU-SLP model. \begin{figure}[ht] \centering \vspace{7pt} \includegraphics[width=0.8\linewidth]{figures/GRU.png} \caption{Basic process of the GRU-SLP part} \label{fig:gru+slp} \end{figure} \subsection{Discernment and evaluation part}\label{sec:dep} As described above, the Discernment and Evaluation Part (hereinafter DEP) discerns the winner by comparing the two teams' scores and computes the loss to conduct backpropagation. After obtaining the ten players' scores, DEP adds up the blue and red team players' scores, respectively. DEP discerns which team is the winner from both teams' summed scores. It also calculates the loss using the scores and the discernment. We used two discernment method--loss function pairs; the details are explained in \cref{sec:loss_function}. The parameter values of the ten GRU-SLPs are updated after backpropagation. In \cref{sec:overall} we describe the overall training process. \subsection{Discernment methods and loss function pairs}\label{sec:loss_function} As mentioned above, the goal is to train our model to output scores that properly reflect match outcomes. Our study rests on the assumption that the team that performs more profitable actions wins. Therefore, if the model gives a higher total score to the winning team than to the losing team, the scores reflect the match result well. On the other hand, if the losing team's total score is higher than the winning team's, the model has misestimated the values of the actions in the match. We defined two discernment method--loss function pairs to train our model: \bb{Confidence--Cross Entropy} and \bb{Deterministic--ReLU}.
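A minimal PyTorch sketch of a single GRU-SLP scorer follows; the layer sizes mirror the settings reported in the experiment section (hidden depth 2, hidden dimension 15, 30-dimensional action vectors), but the code is our illustration of the architecture, not the exact implementation:

```python
import torch
import torch.nn as nn

class GRUSLPScorer(nn.Module):
    """One per-player scorer: a GRU over the action sequence, then a
    single-layer perceptron + tanh that maps each hidden state to a
    score in [-1, 1].  Sizes are illustrative."""
    def __init__(self, feature_dim=30, hidden_dim=15, num_layers=2):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden_dim, num_layers, batch_first=True)
        self.slp = nn.Linear(hidden_dim, 1)   # SLP: one score per action

    def forward(self, actions, h0=None):
        # actions: (batch, seq_len, feature_dim), fed in time-reversed order
        hidden_states, _ = self.gru(actions, h0)
        return torch.tanh(self.slp(hidden_states)).squeeze(-1)  # (batch, seq_len)

scorer = GRUSLPScorer()
seq = torch.randn(1, 50, 30)   # one player's 50 (reversed) action vectors
scores = scorer(seq)
print(scores.shape)            # torch.Size([1, 50])
```

An untrained scorer already produces one bounded score per action; training (described below) shapes these scores to reflect match outcomes.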
\subsubsection{\bb{Confidence and cross-entropy loss}} In the Confidence--Cross Entropy pair, DEP calculates how confident it is that the team given the higher score is the actual winner. For example, consider two matches: in the first, the blue team scored 20 points and the red team 5; in the second, the blue team scored 10 and the red team 7. In this case, DEP is more confident in the analysis result of the first match than of the second. Through a softmax function, the confidence is expressed as a probability, a real number between 0 and 1. \cref{eq:confidence} shows how DEP calculates the confidence $c(T)$, where $T$ is one of the two teams and $S_T$ is that team's total score. $S_B$ and $S_R$ represent the total scores of the blue and red teams, respectively. Cross entropy measures, based on information theory, how different two given probability distributions $P$ and $Q$ are: its output is the expected information content, under distribution $P$, of events whose information content is measured according to distribution $Q$. Many classification models use it as the loss function to compare the output probability distribution with the ground truth. We use cross entropy to calculate the error between our model's analysis and the actual result. \cref{eq:crossentropy} displays how our model adopts the cross-entropy loss function. The values of $q(T_n)$ are determined by which team is the actual winner: if the blue team is the winner, $q(T_{blue})=1.0$ and $q(T_{red})=0.0$, while $q(T_{blue})=0.0$ and $q(T_{red})=1.0$ if the red team is the winner. At the implementation level, we used binary cross-entropy (BCE) loss instead of cross-entropy loss to simplify the code and reduce the computing workload. For BCE loss, the confidence is the degree of certainty that the blue team is the actual winner.
The calculation of the modified confidence $c'(T)$ and the BCE loss is shown in \cref{eq:s_confidence} and \cref{eq:bce}. \begin{equation}\label{eq:confidence} \vspace{3pt} c(T) = \frac{e^{S_T}}{e^{S_B} + e^{S_R}} \vspace{3pt} \end{equation} \begin{equation}\label{eq:crossentropy} \vspace{3pt} CE\ loss = -\sum_{n}{q(T_n)\ \log c(T_n)} \vspace{3pt} \end{equation} \begin{equation}\label{eq:s_confidence} \vspace{3pt} c'(T) = \frac{e^{S_B-S_R}}{e^{S_B-S_R} + 1} \vspace{3pt} \end{equation} \begin{equation}\label{eq:bce} \vspace{3pt} BCE\ loss = -[q(T_B)\ \log c'(T_B)+(1-q(T_B))\ \log (1-c'(T_B))] \end{equation} \subsubsection{\bb{Deterministic and ReLU loss}} Sometimes a team can overwhelm its opponent and win a match with a massive performance gap. However, in many sports where two teams compete, it is not uncommon for one team to win by a very narrow performance margin. In this case, the losing team can reasonably receive a score almost as high as the winning team's. However, with the Confidence--Cross Entropy pair, the loss is very high when the model scores both teams with a narrow difference, even if the discernment of the winner is correct. To alleviate this problem, we designed the Deterministic--ReLU pair. With this pair, DEP does not calculate the confidence. Instead, DEP discerns the winner only by comparing the total scores of both teams; the discernment of the winner is therefore deterministic. The ReLU function outputs its input if the input is positive and zero otherwise. In most cases, it is used as the activation function of deep neural networks; the activation function is located between layers and removes the linearity of successive inter-layer matrix multiplications. However, we used the ReLU function as the loss function, not as an activation function. For the ReLU loss, the input is the score of the losing team ($S_L$) minus the score of the winning team ($S_W$).
\textit{i.e.}, the loss is zero as long as DEP discerns correctly, no matter how different both teams' scores are. On the other hand, the loss grows with the score difference if DEP discerns incorrectly. The definition of the ReLU loss is shown in \cref{eq:reluloss}. \begin{equation}\label{eq:reluloss} ReLU\ Loss = \begin{cases} S_L - S_W & if\ S_W \leq S_L \\ 0 & if\ S_W > S_L \end{cases} \end{equation} \subsection{Overall train process}\label{sec:overall} Our model has ten independent GRU-SLP models, since a match has ten players. At the beginning of training, the model randomly initializes its GRU-SLPs' parameter values. Then, training proceeds match by match. Each GRU-SLP takes the action sequence of its assigned player as input and outputs scores for the actions. DEP adds up the output scores by team and discerns the winner by the methods described in \cref{sec:loss_function}. Then DEP calculates the loss using a loss function. After computing the loss, each GRU-SLP is trained by backpropagation. Finally, the model averages the parameter values of the trained GRU-SLPs and assigns the averaged parameters back to each GRU-SLP. \cref{fig:overall} depicts the overall training process of our model. Furthermore, \cref{algo:parameters} is pseudo-code showing how the model trains the ten independent GRU-SLPs and assigns the average values of their parameters.
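The two discernment--loss pairs of \cref{sec:loss_function} can be sketched as follows; note that the modified confidence $c'$ of \cref{eq:s_confidence} is exactly the sigmoid of the score difference. Variable and function names are ours, not from the actual implementation:

```python
import torch
import torch.nn.functional as F

def bce_confidence_loss(score_blue, score_red, blue_won):
    """Confidence--Cross Entropy pair: the modified confidence c'
    that the blue team won equals sigmoid(S_B - S_R); compare it to
    the actual outcome with binary cross-entropy."""
    confidence = torch.sigmoid(score_blue - score_red)
    target = torch.tensor(1.0 if blue_won else 0.0)
    return F.binary_cross_entropy(confidence, target)

def relu_loss(score_winner, score_loser):
    """Deterministic--ReLU pair: zero loss whenever the winner's
    total score is higher, otherwise the score gap."""
    return torch.relu(score_loser - score_winner)

# Equal team scores give confidence 0.5, i.e. a BCE loss of -log(0.5):
print(bce_confidence_loss(torch.tensor(3.0), torch.tensor(3.0), True))
print(relu_loss(torch.tensor(10.0), torch.tensor(7.0)))  # tensor(0.)
```

The ReLU loss vanishes for any correctly discerned match, which is precisely the property motivating the Deterministic--ReLU pair above.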
\begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figures/Overall.png} \caption{Overall process for a match of the proposed model} \label{fig:overall} \end{figure} \begin{algorithm} \caption{Pseudo-code for the training process of the proposed model} \label{algo:parameters} \begin{algorithmic}[1] \State GRU-SLP Model \bb{subModel[10]} \For {$match=1,2,\ldots,N$} \For {$player=1,2,\ldots,10$} \State score[$player$] $\leftarrow$ \bb{subModel[$player$]}(actionSequence[$player$]) \EndFor \State score[Team1] $\leftarrow$ sum(score[1:5]), score[Team2] $\leftarrow$ sum(score[6:10]) \State prediction $\leftarrow$ PredictWinner(score[Team1], score[Team2]) \State loss $\leftarrow$ LossFunction(prediction) \State Backpropagate and update parameters of \bb{subModels} \State \bb{subModels}.parameters $\leftarrow$ average(\bb{subModels}.parameters) \EndFor \end{algorithmic} \end{algorithm} \section{Dataset}\label{sec:dataset} We collected data on 245,575 \textit{League of Legends}~matches using the Riot API, which the game's developer (Riot Games) provides \cite{riot_api}. The dataset contains matches from North America, Europe, South Korea, and Japan played between May 10 and 17, 2021. For each match, we obtained two JSON files through the API: the match's metadata and the timeline. The metadata holds basic information such as the game's version, the champions chosen by each player, and the total game duration. The timeline data, on the other hand, contains reports of significant events and snapshots at 1-minute intervals; an event report includes details such as the timestamp and the actor player. From the match data, we set up seven features: timestamp, champion, lane, position, distance, event, and event weight. The definition of each feature is elaborated below. \begin{description} \item[\ii{Timestamp}] is the time point at which the event occurred, expressed in milliseconds. \item[\ii{Champion}] is the champion chosen by the event's actor player.
\item[\ii{Lane}] is the lane chosen by the event's actor player. \item[\ii{Position}] represents the location on the map of the actor player when the event occurs. \item[\ii{Distance}] depicts how much the actor player is isolated from their teammates. \item[\ii{Event}] is the type of the event (such as champion kill and item purchase). \item[\ii{Event weight}] is the weight of the event, determined by its context. \end{description} \subsection{Champion} As mentioned in \cref{sec:lol}, a champion belongs to one or more of six roles. We used the \bb{role} as an input feature instead of the \bb{champion}, since \textit{League of Legends}~has over 140 champions, which would make our model consume more computing power during training. Also, the game company can add or remove champions when it releases patches. Therefore, \bb{role} is a more stable feature than \bb{champion}, applicable regardless of the game's version. We created a Champion-Role vector lookup table from the \textit{League of Legends}~game data that \ii{Riot Games} officially provides \cite{datadragon} (see \cref{tab:cham-role}). Using the lookup table, the \bb{Champion} info of each event is converted to a one-hot encoded \bb{Role} vector. \begin{table}[ht] \centering \begin{tabular}{|c|cccccc|} \hline Champion & Assassins & Fighters & Mages & Marksmen & Supports & Tanks \\ \hline\hline Annie & 0 & 0 & 1 & 0 & 0 & 0 \\ \hline Kayle & 0 & 1 & 0 & 0 & 1 & 0 \\ \hline Shyvana & 0 & 1 & 0 & 0 & 0 & 1 \\ \hline \multicolumn{7}{|c|}{...} \\ \hline Vayne & 1 & 0 & 0 & 1 & 0 & 0 \\ \hline \end{tabular} \vspace{5pt} \caption{Champion-Role vector lookup table} \label{tab:cham-role} \end{table} \subsection{Lane} A player chooses a \bb{Lane} that represents the player's role in a team. There are five official lanes: \ii{Top}, \ii{Mid}, \ii{Bottom}, \ii{Support}, and \ii{Jungle}. The metadata file of a match contains the \bb{Lane} info of each player. Like the \bb{Champion} feature, we converted \bb{Lane}s to one-hot vectors as well.
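A minimal sketch of the \bb{Champion}-role and \bb{Lane} conversions described above (the example rows follow \cref{tab:cham-role}; the helper functions are ours):

```python
ROLES = ["Assassins", "Fighters", "Mages", "Marksmen", "Supports", "Tanks"]
LANES = ["Top", "Mid", "Bottom", "Support", "Jungle"]

# A few rows of the Champion-Role lookup table; a champion may fit
# several roles, so its vector can contain more than one 1.
CHAMPION_ROLES = {
    "Annie":   ["Mages"],
    "Kayle":   ["Fighters", "Supports"],
    "Shyvana": ["Fighters", "Tanks"],
}

def role_vector(champion):
    return [1 if r in CHAMPION_ROLES[champion] else 0 for r in ROLES]

def lane_vector(lane):
    return [1 if l == lane else 0 for l in LANES]

print(role_vector("Kayle"))   # [0, 1, 0, 0, 1, 0]
print(lane_vector("Jungle"))  # [0, 0, 0, 0, 1]
```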
\subsection{Position} \bb{Position} is the specific location on the map of the actor of an event. In a match's timeline data, the location is expressed in x and y coordinates whose values are between 0 and 15000. We normalized the values to between 0 and 1 by min-max scaling. Some event types do not have \bb{Position} data. Therefore, we presumed the missing \bb{Position}s using domain knowledge and the 1-minute interval snapshots. For example, when an item purchasing event occurs, we can presume the actor's \bb{Position} is (0, 0) or (1, 1), depending on their team, since a player must return to their base to purchase an item. \subsection{Distance} \bb{Distance} represents how much the actor player is isolated from their teammates. Since \textit{League of Legends}~is a teamplay game, cooperative and strategic play are crucial to victory; \bb{Distance} reflects cooperativity and tactics indirectly. For example, when teammates engage in a team fight, a player has to join them in most cases. In this situation, the player's \bb{Distance} would be small because the player acts with teammates. On the other hand, a player can also sneak through an enemy's flank to destroy a defense tower while teammates draw the opponents' attention. In contrast to the former situation, the \bb{Distance} would grow. \cref{eq:dist} shows the formula to calculate the Distance $D_i$ of player $i$. Digits 1 to 4 represent the teammates of player $i$, and $D_{nm}$ is the Euclidean distance between players $n$ and $m$. \begin{equation}\label{eq:dist} D_i = \frac{D_{i1}+D_{i2}+D_{i3}+D_{i4}}{D_{i1}+D_{i2}+D_{i3}+D_{i4}+D_{12}+D_{13}+...+D_{34}} \end{equation} \subsection{Event} A match's data contains ten different event types representing a player's action, such as purchasing an item and placing a ward. To achieve our goal, we added four new event types derived from the original events. The following list shows the original event types in black and the additional types in red.
We also converted the types of events to one-hot encoded vectors. \begin{description} \item[\ii{ITEM\_PURCHASED}] represents that a player purchased an item. \item[\ii{ITEM\_SOLD}] means that a player sold an item they have. \item[\ii{ITEM\_DESTROYED}] implies that a player consumed a consumable item such as a potion or elixir. \item[\ii{SKILL\_LEVEL\_UP}] represents that a player increased their skill level. \item[\ii{LEVEL\_UP}] signifies that a player's level increased. \item[\ii{WARD\_PLACED}] means that a player placed a ward on the map for some purpose such as sight. \item[\ii{WARD\_KILL}] implies an action in which a player destroys an enemy's ward. \item[\ii{CHAMPION\_KILL}] represents that a player killed an enemy's champion. \item[\ii{BUILDING\_KILL}] means that a player eliminated an opponent's defense tower. \item[\ii{ELITE\_MONSTER\_KILL}] represents that a player hunted an elite monster in the jungle area. \item[\red{\ii{CHAMPION\_KILL\_ASSIST}}] signifies that a player assisted a teammate in killing an enemy champion. \item[\red{\ii{CHAMPION\_KILL\_VICTIM}}] implies that an opponent champion killed the player. \item[\red{\ii{BUILDING\_KILL\_ASSIST}}] means that a player assisted a teammate in destroying an enemy defense tower. \item[\red{\ii{ELITE\_MONSTER\_KILL\_ASSIST}}] represents that a player assisted a teammate in hunting an elite monster. \end{description} \subsection{Event weight} Each \bb{Event} has a different weight depending on the context, even if the event types are identical. When a player kills an enemy champion, for example, there is a distinction between killing the opponent alone and enlisting the help of teammates. Therefore, we devised a formula to describe the weight of each event type and normalized the weights to a range from 0 to 1.
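As an illustration, two of the weight formulas (given in full in \cref{tab:event_weight}) can be computed as follows; the helper names are ours:

```python
def champion_kill_weight(damage_dealt, total_damage_victim_received):
    """Weight of a CHAMPION_KILL / _ASSIST event: the share of the
    victim's total received damage that this player dealt."""
    return damage_dealt / total_damage_victim_received

def item_purchased_weight(item_cost, highest_item_cost):
    """Weight of an ITEM_PURCHASED event: the gold value of the item
    relative to the most expensive item in the game."""
    return item_cost / highest_item_cost

# A killer who dealt 600 of the victim's 1000 received damage:
print(champion_kill_weight(600, 1000))   # 0.6
# An item costing half as much as the most expensive one:
print(item_purchased_weight(1300, 2600)) # 0.5
```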
Formulas for the \bb{Event}s are described in \cref{tab:event_weight}: \begin{table}[ht] \centering \begin{tabular}{c|c|c} Event & Weight formula & Description \\ \hline\hline \makecell{ITEM\_PURCHASED \\ ITEM\_DESTROYED} & $\frac{item\_purchase\_cost}{highest\_item\_purchase\_cost}$ & \small{\makecell{The numerator is the gold value of the purchased\\or destroyed item at the event,\\while the denominator is the gold value\\of the most expensive item.}} \\ \hline ITEM\_SOLD & $\frac{item\_sell\_cost}{highest\_item\_sell\_cost}$ & \small{\makecell{\ii{highest\_item\_sell\_cost} is the gold value\\that a player receives when they sell\\the most expensive item\\of all of the game items.}} \\ \hline SKILL\_LEVEL\_UP & $\frac{current\_skill\_level}{maximum\_skill\_level}$ & \small{\makecell{\ii{maximum\_skill\_level} is the maximum level\\of the skill that a player just leveled up\\at the event, and \ii{current\_skill\_level} is\\the skill level right after the event occurs.}} \\ \hline LEVEL\_UP & $\frac{level\_place\_rank}{number\_of\_players}$ & \small{\makecell{ \\The player's level ranking among\\all of the players decides the event weight.\\{ }}} \\ \hline \makecell{WARD\_PLACED \\ WARD\_KILL} & $\frac{ward\_bounty}{highest\_ward\_bounty}$ & \footnotesize{\makecell{\bb{Ward} is a unit that provides vision to players.\\Each ward has a bounty expressed as a gold value,\\and a player can receive it as a reward\\when they destroy the ward.\\\ii{highest\_ward\_bounty} is the highest bounty\\value of all of the wards.}}\\ \hline \makecell{CHAMPION\_KILL \\ CHAMPION\_KILL\_ASSIST \\ CHAMPION\_KILL\_VICTIM} & $\frac{damage\_dealt}{total\_damage\_victim\_received}$ & \footnotesize{\makecell{It represents how much the killer and assists\\each contributed to killing the victim.\\The damage that the killer and assists caused\\to the victim decides the weight.\\It also represents how much the victim\\resisted until they got killed.\\Thus, the weight of the victim event is
related\\to the damage that the victim caused\\to the killers while resisting.}} \\ \hline \makecell{BUILDING\_KILL \\ BUILDING\_KILL\_ASSIST \\ ELITE\_MONSTER\_KILL \\ ELITE\_MONSTER\_KILL\_ASSIST} & $\frac{1}{number\_of\_involved\_players}$ & \footnotesize{\makecell{The dataset does not have information\\about the specific damage dealt or received.\\Therefore, we supposed that every player\\who was involved in the event\\contributed the same as the others.}} \\ \hline \end{tabular} \vspace{5pt} \caption{Weight formula by event} \label{tab:event_weight} \end{table} \subsection{Processed feature vector} Eventually, each player action is converted to a 30-dimensional vector, so a player's action sequence becomes a sequence of 30-dimensional vectors. Each vector has 5 ranged values (0 to 1) and 25 one-hot values. \cref{tab:features} shows the features that are converted into the vector. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|} \hline \bb{Feature} & \bb{Description} & \bb{Range} \\ \hline Timestamp & Time elapsed & from 0 to 1 \\ Mage & Champion's role & 0 or 1 \\ Fighter & Champion's role & 0 or 1 \\ Support & Champion's role & 0 or 1 \\ Tank & Champion's role & 0 or 1 \\ Assassin & Champion's role & 0 or 1 \\ Marksman & Champion's role & 0 or 1 \\ Top & Lane chosen by a player & 0 or 1 \\ Mid & Lane chosen by a player & 0 or 1 \\ Bottom & Lane chosen by a player & 0 or 1 \\ Utility & Lane chosen by a player & 0 or 1 \\ Jungle & Lane chosen by a player & 0 or 1 \\ x\_position & x coordinate of a player's location & from 0 to 1 \\ y\_position & y coordinate of a player's location & from 0 to 1 \\ distance & Isolation from teammates & from 0 to 1 \\ ITEM\_PURCHASED & Purchase an item & 0 or 1 \\ ITEM\_SOLD & Sell an item & 0 or 1 \\ ITEM\_DESTROYED & Destroy an item & 0 or 1 \\ SKILL\_LEVEL\_UP & Level up a skill & 0 or 1 \\ LEVEL\_UP & Player level up & 0 or 1 \\ WARD\_PLACED & Place a ward & 0 or 1 \\ WARD\_KILL & Kill an enemy ward & 0 or 1 \\
CHAMPION\_KILL & Kill an enemy champion & 0 or 1 \\ CHAMPION\_KILL\_ASSIST & Assist in killing an enemy & 0 or 1 \\ CHAMPION\_KILL\_VICTIM & Get killed & 0 or 1 \\ BUILDING\_KILL & Destroy an enemy building & 0 or 1 \\ BUILDING\_KILL\_ASSIST & Assist in destroying a building & 0 or 1 \\ ELITE\_MONSTER\_KILL & Kill an elite monster & 0 or 1 \\ ELITE\_MONSTER\_KILL\_ASSIST & Assist in killing a monster & 0 or 1 \\ Event\_Weight & Weight of the event & from 0 to 1 \\ \hline \end{tabular} \vspace{5pt} \caption{Converted features} \label{tab:features} \end{table} \section{Experiment}\label{sec:experiment} To the best of our knowledge, this study is the first attempt to evaluate the contributions of individual actions; as a result, we used common metrics (KDA, Gold, and Minion kills) as baselines rather than models from previous studies. In addition, we created seven models based on the score-embedding methodology to compare performances and determine which method is superior. The tested models are as follows: \begin{description} \item[Model 1: \ii{ReLU \#1}] Model 1 is the basic version of the model described in \cref{sec:model}. It uses \bb{10 GRU-SLP}s and the \bb{ReLU loss} function. Also, the $h_0$ of this model (see \cref{fig:gru+slp}) is an $(L, 1, N)$-dimensional vector initialized to 0, where $L$ is the depth of the GRU layers and $N$ is the length of the hidden dimension of the GRU. \item[Model 2: \ii{ReLU \#2}] Model 2 is the same as Model 1, but it uses a different $h_0$ for the winning and losing teams; \textit{i.e.}, the winner's $h_0$ is initialized to 1 while the loser's is 0. The purpose of this model is to reflect the fact that the consequence of a match (win or lose) occurs just after a player's last action. \item[Model 3: \ii{ReLU \#3}] We designed Model 3 to validate our hypothesis that the time-reversed input sequence is adequate for analyzing players' actions. Therefore, the action sequences input to Model 3 are not time-reversed, but chronological.
\item[Model 4: \ii{BCE \#1}] Model 4 is a comparison model for Model 1 that uses Confidence as the discernment method and BCE loss as the loss function. \item[Model 5: \ii{BCE \#2}] Model 5 is the same as Model 2 but uses Confidence and BCE loss. \item[Model 6: \ii{MLP \#1}] We created Model 6 to validate the GRU-SLP model's effectiveness. Model 6 uses Multi-Layer Perceptrons (MLP) to process each action independently without taking the sequential effect into account. It uses the deterministic discernment method and ReLU loss. \item[Model 7: \ii{MLP \#2}] Model 7 is the same as Model 6 but uses Confidence and BCE loss for DEP. \end{description} \subsection{Model implementation and settings} We implemented the seven models using \ii{PyTorch} 1.10.0 with CUDA 10.1 \cite{pytorch}. We used the \ii{Adam} optimizer \cite{adam} to adjust the models' parameters as training progressed. For the GRU-SLPs, the depth of the hidden layers is 2, and the dimension of the hidden states is 15. Every model was trained for 10 epochs with a learning rate of 0.0001. Due to concerns about overfitting, we set the test data to 44,575 matches, nearly 20\% of the dataset; the test set represents data unseen by the trained model. The rest of the dataset, consisting of 200,000 matches, is the training set. Given the sufficient size of the test set, an overfitted model would show low performance in the test stage; this gives us confidence that a model achieving high test performance is not overfitted. Also, to choose the best-trained model, we ran a validation stage after each epoch. We extracted the validation data from the training set; its size is 5\% (10,000 matches) of the training set. \subsection{Discernment accuracy} First, we checked the discernment accuracy of the models.
The accuracy represents how well the models' scores reflect the contribution of each action, since we trained the models to give high scores to actions that are highly related to winning. \cref{tab:discern_accuracy} shows the discernment accuracy of the designed models. \begin{table}[ht] \centering \begin{tabular}{c|c|c|c|c} & \bb{Accuracy} & \bb{Precision} & \bb{Recall} & \bb{F1 score} \\ \hline Model 1 & 99.5634\% & 99.5587\% & 99.5886\% & 0.995736 \\ Model 2 & \bb{100\%} & \bb{100\%} & \bb{100\%} & \bb{1.0} \\ Model 3 & 99.4800\% & 99.3884\% & 99.5971\% & 0.994927 \\ Model 4 & 99.4712\% & 99.5154\% & 99.4514\% & 0.994834 \\ Model 5 & \bb{100\%} & \bb{100\%} & \bb{100\%} & \bb{1.0} \\ Model 6 & 99.4668\% & 99.4941\% & 99.4643\% & 0.994792 \\ Model 7 & 99.5063\% & 99.5285\% & 99.5071\% & 0.995178 \\ \hline Baseline (KDA) & 93.2090\% & 93.2399\% & 93.5156\% & 0.933775 \\ Baseline (Gold) & 94.5979\% & 95.1655\% & 94.2356\% & 0.946983 \\ Baseline (Minion kills) & 65.5381\% & 67.1973\% & 63.8623\% & 0.654874 \\ \hline \end{tabular} \vspace{5pt} \caption{Discernment accuracy, precision, recall and F1 score for each model} \label{tab:discern_accuracy} \end{table} Models 2 and 5 show 100\% discernment accuracy when the $h_0$ of the winning and losing teams are initialized differently, regardless of the loss function. It is not surprising that these two models discern flawlessly: they already know the ground truth before discernment is executed, since they receive the match outcome as input ($h_0$). However, accurately discerning outcomes is only a necessary condition for this study's purpose, not a sufficient one. We discuss this topic in \cref{sec:feature-score}. Excluding those two models, Model 1 displays the best performance in discerning the winner. In addition, all designed models achieve accuracy exceeding 99\%, while discernment with the common metrics shows under 95\% accuracy.
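For reference, the reported F1 scores are the usual harmonic mean of precision and recall; a quick sanity check with Model 1's values reproduces its reported F1:

```python
def f1(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Model 1's precision and recall from the table above:
print(f1(0.995587, 0.995886))  # close to the reported 0.995736
```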
\subsection{Player ranking by models}\label{sec:ranking} Many game communities use common metrics such as KDA, Gold (earned/spent), and Minion kills to assess the individual performance of players. Therefore, we compared the post-match player rankings given by common metrics and by model scores. To avoid cluttering the paper with too many graphs, we analyzed the top four models by discernment accuracy (\textit{i.e.}, Models 1, 2, 5, 7). \cref{fig:rankings} displays the relation between players' model-score rankings and their rankings by typical indicators (KDA, Gold, Minion kills, and their average). The X-axis denotes players' rankings by the typical indicators, and the Y-axis represents players' rankings by the model scores. The color and size of each dot show the number of players at the corresponding pair of ranks. Therefore, if many players have the same rank by both the typical indicators and the model scores, the circles on the diagonal become bigger and redder. All four models show a positive correlation between the model score and the KDA ranking. Models 1 and 7 also have a positive correlation with the Gold ranking, but Models 2 and 5 show a relatively low correlation. In addition, the graphs show that Minion kills are not highly related to the models' scores compared to the other metrics. Furthermore, \cref{fig:rankings} also demonstrates that the ranking average of the common metrics has a nearly linear correlation with the models' scores. This indicates that our models reflect the individual performance evaluation given by common metrics well. There are some differences; however, since the common metrics are not applicable in every case, we validate our models with a more detailed analysis.
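The diagonal-agreement reading of the ranking plots can be reproduced with a simple counting scheme. The sketch below is our own illustration with hypothetical score lists, not the paper's data:

```python
def ranks(values):
    """Rank positions (1 = best) by descending value; ties broken by index order."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for pos, idx in enumerate(order, start=1):
        r[idx] = pos
    return r

model_scores = [3.2, 1.1, 2.7, 0.4, 2.9]   # hypothetical per-player model scores
kda          = [4.0, 1.5, 3.0, 0.5, 3.5]   # hypothetical KDA values

# count players that land on the same rank under both criteria
same_rank = sum(a == b for a, b in zip(ranks(model_scores), ranks(kda)))
# here the two criteria order the five players identically, so same_rank == 5
```

In the figures, such agreements are exactly the dots on the diagonal; disagreements spread off-diagonal.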
\begin{figure}[hp] \centering \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model1.pdf} \vspace{-10pt} \caption{Model 1} \label{fig:ranking_model1} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model2.pdf} \vspace{-10pt} \caption{Model 2} \label{fig:ranking_model2} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model5.pdf} \vspace{-10pt} \caption{Model 5} \label{fig:ranking_model5} \end{subfigure} \hfill \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{figures/ranking_model7.pdf} \vspace{-10pt} \caption{Model 7} \label{fig:ranking_model7} \end{subfigure} \caption{Player ranking counts by model and common metrics} \label{fig:rankings} \end{figure} \subsection{Under/overestimated players}\label{sec:under_over} As mentioned in \cref{sec:lol}, players take charge of one of the five \bb{lanes}, each with a specific function in a team. However, common metrics do not suit every lane when estimating players' contributions. For example, the general mission of the support lane is to assist the other lanes, not to kill enemy champions. Also, a support lane champion is often targeted in team fights because of their practical supporting skills and is quickly killed because of their fragility. Therefore, a player who takes charge of the support lane usually gets a low KDA, and this does not mean that the player contributed less than their teammates. Support lane players are thus frequently underestimated by the KDA metric. We investigated the differences between player rankings by our models and by common metrics, and analyzed whether the differences were reasonable.
For the differences, we defined an \ii{underestimated} player (underestimated by common metrics) as one who ranks more than \bb{five places higher by model score than by common metrics}, and an \ii{overestimated} player (overestimated by common metrics) as one who ranks more than \bb{five places lower by model score than by common metrics}. \begin{figure}[p] \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_avg.pdf} \caption{Avg. of common metrics} \label{fig:under_over_avg} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_kda.pdf} \caption{KDA} \label{fig:under_over_kda} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_gold.pdf} \caption{Gold} \label{fig:under_over_gold} \end{subfigure} \hfill \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{figures/under_over_creep.pdf} \caption{Minion kills} \label{fig:under_over_creep} \end{subfigure} \caption{Number of under/overestimated players by common metrics} \label{fig:under_over} \end{figure} \cref{fig:under_over} displays the number of under/overestimated players by common metrics. \cref{fig:under_over_avg} to \cref{fig:under_over_creep} are statistics separated by metric, and each figure consists of graphs separated by model. The red bars represent overestimated players, and the blue bars represent underestimated players. \cref{fig:under_over_kda} shows that KDA underestimates many support lane players compared with Model 1 and Model 7. It is reasonable that the models evaluate support lane players higher than the KDA metric does, given the support lane players' missions mentioned above. On the other hand, every model gives low scores to players who took the support or jungle lane with respect to minion kills. For support lane players, it is a general strategy to concede the advantage of killing minions to teammates.
If a support player monopolizes the advantage of killing minions instead of their teammates, the growth of the teammates whose role is to battle enemies slows down, and the possibility of losing increases. Meanwhile, the mission of a jungle lane player is to clear field monsters and gather the benefits from them. If a jungle player spends too much time killing minions, the enemy jungle player can monopolize the field monsters' benefits and slow the teammates' growth. Therefore, the differences between our models' rankings and the minion kill count rankings are rational. \subsection{Feature-score correlation analysis}\label{sec:feature-score} To achieve our goal of measuring a player's contribution regardless of whether they won or lost, the embedding model has to give similar scores to the same actions in a similar context. Therefore, we examined the score distribution by action features. To visualize the relationship between the model scores and all features at once, we reduced the feature dimension to 1 using Principal Component Analysis (PCA), then drew graphs displaying the relationship between model scores and the dimension-reduced features (see \cref{fig:pca_relation}). We can consider that actions with the same dimension-reduced feature value have similar worthiness in a match context. Therefore, to fulfill the goal of this study, actions with the same dimension-reduced feature value have to get similar scores from the trained model, without bias from the matches' outcomes. In \cref{fig:pca_relation}, the green dots represent the winners' scores according to the dimension-reduced feature values, and the yellow dots represent the losers' scores. If the model fulfills our purpose, the clusters of green and yellow dots will form similar shapes, representing that the model scores actions by their value, unaffected by the matches' outcomes.
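The dimension reduction used for the feature-score plots can be sketched as a minimal PCA-to-one-dimension with NumPy (our own sketch; the paper's exact preprocessing may differ): center the feature matrix and project it onto the eigenvector of its covariance with the largest eigenvalue.

```python
import numpy as np

def pca_reduce_1d(features):
    """Project rows of `features` onto the first principal component."""
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)                    # center each feature column
    cov = np.cov(Xc, rowvar=False)             # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigh: cov is symmetric
    pc = vecs[:, np.argmax(vals)]              # direction of maximal variance
    return Xc @ pc                             # one coordinate per action

# hypothetical correlated 2-D action features
rng = np.random.default_rng(1)
t = rng.normal(size=200)
feats = np.stack([t, 0.5 * t + 0.05 * rng.normal(size=200)], axis=1)
coords = pca_reduce_1d(feats)  # x-axis values of the scatter plots
```

The sign of the principal component is arbitrary, so only the shape of the score-versus-coordinate clouds matters, not their left-right orientation.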
On the other hand, if the green and yellow clusters form significantly different shapes, we can conclude that the model's scoring function is heavily affected by the outcomes of the matches. As we can see from the graphs, Model 2 and Model 5 give significantly different scores to winners and losers. Therefore, those two models do not suit our purpose despite their 100\% discernment accuracy, since they evaluate a player's actions by considering the outcome of a match, not the actions and contexts themselves. On the other hand, Models 1 and 7 give similar scores for similar dimension-reduced feature values. Therefore, although their discernment accuracy is relatively lower than that of Models 2 and 5, \cref{fig:pca_relation} shows that those two models are better suited to the purpose of this study. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model1.pdf} \caption{Model 1} \label{fig:pca_model1} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model2.pdf} \caption{Model 2} \label{fig:pca_model2} \end{subfigure} \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model5.pdf} \caption{Model 5} \label{fig:pca_model5} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\linewidth} \includegraphics[width=\linewidth]{figures/pca_model7.pdf} \caption{Model 7} \label{fig:pca_model7} \end{subfigure} \caption{Relationship between models' score and features} \label{fig:pca_relation} \end{figure} \section{Discussion}\label{sec:discussion} In the experiment section (\cref{sec:experiment}), we looked at the winner-discernment accuracy of the models to see if they accurately reflect a player's entire contribution to the team's win. The results (\cref{tab:discern_accuracy}) show that our models accurately discern the outcome of a game. Then we compared our models to common metrics to see if they estimate a player's performance properly.
Despite minor differences, the models' outputs are comparable to common metrics. Furthermore, as we saw in \cref{sec:under_over}, the discrepancies between our models (particularly Model 1 and Model 7) and the common metrics are rational and explainable. Finally, we looked at the feature-score correlation to see whether the models gave similar scores to actions with similar feature values (worthiness in a match context). \cref{fig:pca_relation} depicts that Model 2 and Model 5 give significantly different scores to a winning player's action and a losing player's action despite similar feature values; in contrast, Models 1 and 7 give similar scores to similar actions regardless of the match's outcome. The last analysis shows that Model 1 and Model 7 suit our research purpose better than Model 2 and Model 5, despite their relatively low discernment accuracy. There are limitations to this study. First, the RiotAPI that we used to construct the dataset provides limited types of actions. Our dataset does not contain a player's decisions and specific acts in detail, such as skill usage or damage dealt to defense towers. Therefore, our approach has not been tested on a dataset that contains very detailed action information. It will be our future work to validate our model on a more detailed dataset; this can be done with advanced data collection methods such as replay video analysis. Also, our model cannot be used for real-time services such as e-sports commentary because we designed it for post-match evaluation. Despite these limitations, our models' strength is their generality and maintainability. Because the actions used for scoring are generic enough, the trained models do not need to be retrained every time the game is updated with a new champion or item or with a rebalanced game system.
Furthermore, although the actions and attributes for \textit{League of Legends}~do not precisely fit other games, the fundamental structure of our model (which is the pivotal contribution of this paper) can be used for most games, including those of different genres such as First-Person Shooters (FPS). The only task developers need to complete before using our model is defining the actions and the actions' attributes that represent their game well. \section{Conclusion}\label{sec:conclusion} Estimating individual performance for fair rewards and analyzing past behaviors to improve a player's skill and team-level tactics are fascinating subjects in the study of team-based competitive sports. This study introduced an embedding approach that scores a player's in-game actions by how much they contributed to winning. The idea of our approach came from the word-embedding technique NNLM. This approach allows quantitative evaluations of a player's individual actions, which were not available from common metrics or previous studies. With quantitative values of players' contributions to victory, MOBA games can use our model in many ways. First, our model can adjust the amount of MMR that increases or decreases according to the match outcome, alleviating the side effects of team/result-oriented MMR systems (the exact amount of MMR that our model adjusts could vary from game to game, according to the MMR formula and the developers' intentions). The second application is to display contribution scores on the games' season leaderboards or to inform players at the end of matches. As a result, games can honor players who have made significant contributions to their teams' accomplishments but have been overlooked by traditional performance criteria. This can avoid situations where some players unreasonably blame their teammates for low KDA or gold despite sufficient contributions. Finally, the contribution scores can be utilized to maintain professional players' records in the professional e-sports business.
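As a toy illustration of the first application (MMR adjustment), one could scale the base win/loss delta by a player's contribution relative to the team average. The formula below is entirely hypothetical: as noted above, the actual adjustment would depend on each game's MMR formula and the developers' intentions.

```python
def adjusted_mmr_delta(base_delta, contribution, team_avg_contribution, weight=0.5):
    """Scale a win/loss MMR delta by how a player's contribution score
    compares with the team average (hypothetical formula, not the paper's)."""
    ratio = contribution / team_avg_contribution
    # weight = 0: plain team-result MMR; weight = 1: fully contribution-based
    return base_delta * ((1 - weight) + weight * ratio)

# a player contributing 50% above team average on a +20 base gain
delta = adjusted_mmr_delta(20.0, 1.5, 1.0)  # -> 25.0
```

The `weight` parameter lets designers interpolate between the existing result-oriented system and a fully contribution-based one.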
Furthermore, our model's applicability is not limited to MOBAs; the GRU-SLP and DEP structures can be used in various genres.
\section{Introduction and statement of the result} If $Z$ is a real L\'evy process without Gaussian part, finding a necessary and sufficient condition on its jumping measure $\nu$ for the absolute continuity of $Z_t$ at some given $t>0$ is a hard problem for which no sensible conjecture has been formulated as yet. One of the main difficulties for this formulation stems from the time-dependency of the absolute continuity property: if $\nu$ is infinite and has discrete support, then there are some situations where e.g. $Z_1$ is singular and $Z_2$ is absolutely continuous. We refer to \cite{Wa} and Chapter 27 in \cite{S} for more on this topic as well as further references. When $Z$ is multidimensional, the problem becomes increasingly complicated and some partial results have been given in \cite{Ya}, involving conditions of a geometrical nature on $\nu$. On the other hand, the problem of absolute continuity may well become simpler, and yield weaker conditions, when considering certain functionals of $Z$. In the real case for example, one can show that \begin{equation} \label{lifsh} \int_0^1\!Z_t \, \mathrm{d} t\; {\rm is \; a.c.}\;\; \Longleftrightarrow\;\; \nu \;{\rm is\; infinite.} \end{equation} Notice that the condition on the right-hand side is only equivalent to the non-atomicity of $Z_1$ - see Theorem 27.4 in \cite{S}. With a view towards the methods developed later in the present paper, let us give a short proof of the reverse inclusion in (\ref{lifsh}), the direct one being straightforward. If $\nu$ is infinite and $T^\eta_1, T^\eta_2$ denote the first two jumping times of $Z$ into $[-\eta, \eta]^c,$ then ${\mathbb P}[T^\eta_2 \ge 1]\to 0$ as $\eta\to 0.$ Besides, on $\{T^\eta_2 < 1\}$ one can write $$\int_0^1\! Z_t\, \, \mathrm{d} t\; = \; (T^\eta_2 - T^\eta_1)\Delta Z_{T^\eta_1} \; +\; (1 - T^\eta_2)\Delta Z_{T^\eta_1} \; +\;\int_0^1 \left( Z_t - {\bf 1}_{\{T^\eta_1 \le t\}}\Delta Z_{T^\eta_1}\right) \, \mathrm{d} t$$ and the right-hand side is a.c.
on $\{T^\eta_2 < 1\}$ for every $\eta >0,$ since conditionally on $\mathcal{F}^\eta$ which is the $\sigma$-field generated by all the information given by $Z$ on $[0,1]$ except $T^\eta_1,$ the variable $(T^\eta_2 - T^\eta_1)$ has uniform hence absolutely continuous law on $[0,T^\eta_2],$ the variable $\Delta Z_{T^\eta_1}$ is non-zero a.s. and $\mathcal{F}^\eta$-measurable, and the remaining terms are $\mathcal{F}^\eta$-measurable. This conditioning method together with, roughly speaking, a derivation procedure along certain jumping times, was systematically developed in the monograph \cite{DLS}, where various absolute continuity results and smoothness properties were established for several functionals of Wiener and other L\'evy processes, such as $L_p$-norms, proper integrals along a given function, and one-sided and two-sided suprema. In \cite{NS, K}, it was also applied to a class of real stochastic equations with non-linear drift driven by $Z$. In this paper, we will deal with Non-Gaussian multidimensional Ornstein-Uhlenbeck processes, which are solutions to the S.D.E. \begin{equation} \label{OU2} \, \mathrm{d} X_t\; =\; AX_t \, \mathrm{d} t \; +\; B \, \mathrm{d} Z_t \end{equation} where $A$ is a real $n\times n$ matrix, $B$ a real $n\times d$ matrix and $Z$ a $d$-dimensional L\'evy process without Gaussian part. Adapting without difficulty the discussion made in \cite{S} pp. 104-105 to the case where $A$ is not necessarily a multiple of the identity matrix, we see that the solution to (\ref{OU2}) is given in terms of a L\'evy integral: \begin{equation} \label{levy} X_t \; = \; \mathrm{e}^{tA}x\; +\; \int_0^t \mathrm{e}^{(t-s)A}B\, \mathrm{d} Z_s, \quad t\ge 0, \end{equation} where $x\in{\mathbb R}^n$ is the initial condition.
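As a quick sanity check, one can verify that (\ref{levy}) indeed solves (\ref{OU2}) by the product rule, which applies here since $t\mapsto \mathrm{e}^{tA}$ is smooth with finite variation (so no quadratic covariation term arises):

```latex
X_t \;=\; \mathrm{e}^{tA}\Big(x \;+\; \int_0^t \mathrm{e}^{-sA}B\, \mathrm{d} Z_s\Big)
\quad\Longrightarrow\quad
\, \mathrm{d} X_t \;=\; A\,\mathrm{e}^{tA}\Big(x + \int_0^t \mathrm{e}^{-sA}B\, \mathrm{d} Z_s\Big) \, \mathrm{d} t
\;+\; \mathrm{e}^{tA}\,\mathrm{e}^{-tA}B\, \mathrm{d} Z_t
\;=\; AX_t\, \mathrm{d} t \;+\; B\, \mathrm{d} Z_t.
```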
Apart from being a natural extension of the Langevin equation, these processes have their roots in distribution theory, because in the ergodic case their limit laws are known \cite{SY} to be operator self-decomposable - see \cite{SWYY} for further results. Nowadays, Non-Gaussian OU processes are also quite popular in modelling \cite{BS}. To state our result, we need some notation. We will assume that the reader is familiar with basic properties of L\'evy processes and jumping measures, which can be found at the beginning of the two monographs \cite{B, S}. Setting $\{B_t, \; t\ge 0\}$ for the ${\mathbb R}^n$-valued L\'evy process $\{BZ_t, \; t\ge 0\}$ and $\nu^B$ for its L\'evy measure, we introduce the vector spaces $${\mathcal B}_t\; =\; {\rm Vect} [\Delta B_s, \; s\le t]\quad {\rm and} \quad {\mathcal A}_t\; =\;<A,{\mathcal B}_t>, \quad t > 0,$$ where here and throughout, for every vector subspace ${\mathcal E}\subset{\mathbb R}^n$ with basis $\{e_1,\ldots, e_p\},$ we use the notation $$<A,{\mathcal E}>\; =\; {\rm Vect} [A^{i-1} e_j, \; i = 1\ldots n, j = 1\ldots p].$$ Notice that actually ${\mathcal A}_t = {\rm Vect} [A^{i-1} \Delta B_s, \; i = 1\ldots q, s\le t],$ where $q$ stands for the degree of the minimal polynomial of $A.$ Setting ${\mathcal B} = {\rm Im} \,B\subset{\mathbb R}^n,$ the condition $<A,{\mathcal B}>\, ={\mathbb R}^n$ or, in an equivalent matrix formulation, \begin{equation} \label{Rank} {\rm Rank} \left[ B, AB,\ldots, A^{q-1}B\right]\; =\; n, \end{equation} is well-known as a controllability condition on the deterministic linear system $$x'_t\; =\; Ax_t\; +\; Bu_t$$ where $\{u_t, \; t\ge 0\}$ is some input function - see e.g. Chapter 1 in \cite{W} and the references therein. When (\ref{Rank}) holds, we will say that $(A,B)$ is controllable. Let $\kappa$ denote the cyclic index of $A$, which is the maximal dimension of its eigenspaces. The condition $\kappa =1$ means that $A$ is a cyclic matrix, i.e.
there exists a generating vector $b\in{\mathbb R}^n$ such that $(b, Ab, \ldots, A^{n-1}b)$ forms a basis of ${\mathbb R}^n.$ This is also equivalent to $q=n,$ that is, the minimal polynomial of $A$ is in fact its characteristic polynomial. When $\kappa >1,$ it is the number of invariant factors of $A$, viz. the unique number of subspaces ${\mathcal A}_i\subset {\mathbb R}^n$ such that ${\mathcal A}_1\oplus\ldots\oplus {\mathcal A}_\kappa ={\mathbb R}^n,$ with each ${\mathcal A}_i$ stable by $A$ and each $A/{\mathcal A}_i$ a cyclic operator whose minimal polynomial $\alpha_i$ divides $\alpha_{i-1},$ $\alpha_1$ being the minimal polynomial of $A$ itself - see Chapter 4 in \cite{G} or Section 0.10 in \cite{W} for more details. Let $m = {\rm Dim}\, {\mathcal B} = {\rm Rank}\, B.$ When (\ref{Rank}) holds, a result of M.~Heymann - see Theorem 1.2 in \cite{W} - entails that necessarily $m \ge\kappa.$ More precisely, there exist $\kappa$ linearly independent vectors $b_1, \ldots, b_\kappa\in{\mathcal B}$ such that \begin{equation} \label{won} {\mathcal B}^1\; +\; \ldots\; +\; {\mathcal B}^\kappa\; =\; {\mathbb R}^n \end{equation} with the notation ${\mathcal B}^i = {\rm Vect} \left[ b_i, Ab_i,\ldots, A^{q-1}b_i\right]$ for $i = 1\ldots \kappa.$ Actually, Heymann's result was originally more precisely stated, connecting the subspaces ${\mathcal B}^i$ to the above $\kappa$ invariant factors of $A.$ Nevertheless we shall not need this in the sequel. Assuming (\ref{Rank}), a sequence $(b_1, \ldots, b_r)\in{\mathcal B}$ of linearly independent vectors such that ${\mathcal B}^1\, +\, \ldots\, +\, {\mathcal B}^r\, =\, {\mathbb R}^n$ with the above notations will be called a {\em generating sequence} of ${\mathbb R}^n$ with respect to $(A,B),$ or simply a generating sequence when no confusion is possible. Notice that $r\le m.$ Besides, the very definition of $\kappa$ entails that necessarily $r \ge \kappa$ as well - see the proof of Theorem 1.2 in \cite{W} for details.
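Numerically, the controllability condition (\ref{Rank}) is straightforward to test. The sketch below is our own illustration with a hypothetical pair $(A,B)$; it uses powers up to $n-1$ instead of $q-1$, which gives the same rank since $q\le n$ and, by Cayley-Hamilton, higher powers of $A$ add nothing to the column span.

```python
import numpy as np

def is_controllable(A, B):
    """Check Rank[B, AB, ..., A^{n-1}B] == n for A (n x n) and B (n x d)."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    n = A.shape[0]
    blocks, AkB = [], B
    for _ in range(n):          # accumulate B, AB, ..., A^{n-1}B
        blocks.append(AkB)
        AkB = A @ AkB
    return bool(np.linalg.matrix_rank(np.hstack(blocks)) == n)

# cyclic example: b and Ab span R^2, so a single input column suffices
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([[1.0], [0.0]])
```

Here `is_controllable(A, b)` is `True`, whereas with $A$ the identity (cyclic index $\kappa = 2$) no single column $b$ can be controllable, matching the bound $m \ge \kappa$.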
We now come to the central definition of this work: \begin{DEFI} With the above notations, the L\'evy measure $\nu^B$ is said to {\em exhaust} ${\mathbb R}^n$ {\em with respect to} $A$ if $<A,{\mathcal B}>\; ={\mathbb R}^n$ and there exists $r\in [\kappa, m]$ and a subspace $\mathcal{H}_r\subset{\mathcal B}$ of dimension $r$ such that $<A,\mathcal{H}_r>\; ={\mathbb R}^n$ and $\nu^B(\mathcal{H}_r\cap\mathcal{H}^c) =+\infty$ for every hyperplane $\mathcal{H}\subset\mathcal{H}_r.$ \end{DEFI} This definition is related to the conditions given in \cite{Ya} for the absolute continuity of multivariate infinitely divisible distributions, but it is less stringent since no arcwise absolute continuity is required, and since $\nu^B$ may be carried by any subspace with dimension $r\in [\kappa, m]$. Here, however, the important fact is that this subspace must be chosen with respect to $A$. Introducing finally the stopping time $$\tau\; =\; \inf\{ t>0, \; {\mathcal A}_t \, ={\mathbb R}^n\},$$ our result reads as follows: \begin{THEO} If $A$ is non-singular, then one has $$X_1\; {\rm is \; a.c.}\;\; \Longleftrightarrow\;\; \tau\, =\, 0\;\, {\rm a.s.}\;\; \Longleftrightarrow\;\; \nu^B \;{\rm exhausts}\; {\mathbb R}^n\; {\rm w.r.t.}\; A.$$ \end{THEO} Notice that the above equivalences are time-independent, so that when $X_1$ is a.c. then $X_t$ is a.c. as well for every $t >0.$ In other words absolute continuity is not a temporal property for Non-Gaussian OU processes, unlike L\'evy processes without Gaussian part. Besides, when $X_1$ is not a.c. then the first equivalence entails that $X_1$ is valued with positive probability in some fixed affine hyperplane of ${\mathbb R}^n,$ or equivalently that a certain one-dimensional projection of $X_1$ must have an atom. This, again, contrasts with L\'evy processes since $Z_1$ may be non-atomic and not absolutely continuous - see Theorem 27.19 in \cite{S}. We stress that the variable $X_1$ itself is infinitely divisible, i.e. 
it is the distribution at time 1 of some L\'evy process $\{Y_t, \; t\ge 0\}$ valued in ${\mathbb R}^n$. Indeed, a straightforward extension of Lemma 17.1 in \cite{S} shows that $X_1$ is ID without Gaussian part and with L\'evy measure $$\nu^X(\Lambda)\; =\; \int_{{\mathbb R}^n}\nu^B(dx)\int_0^1{\bf 1}_\Lambda (\mathrm{e}^{sA} x) \, \mathrm{d} s, \quad \Lambda\in{\mathcal B}({\mathbb R}^n).$$ Hence, our result yields an optimal criterion of absolute continuity for a certain subclass of multivariate Non-Gaussian ID distributions, which we may call the {\em OU class}. To this end, one can check that if $\nu^X$ satisfies any condition given in \cite{Ya}, then $\nu^B$ exhausts ${\mathbb R}^n$ w.r.t. $A$, but that the converse inclusion is not necessarily true, since $\nu^X$ may not be arcwise absolutely continuous when $\nu^B$ exhausts ${\mathbb R}^n$ w.r.t. $A.$ As we mentioned before, the variables $X_t$ converge in law when $t\to\infty$ to some operator self-decomposable or OL distribution in ${\mathbb R}^n$ under an ergodicity assumption, that is when the eigenvalues of $A$ all have negative real parts and $\nu^B$ is log-integrable at infinity \cite{SY}. If in addition $\nu^B$ exhausts ${\mathbb R}^n$ w.r.t. $A,$ then it is easy to see that the limit distribution is also genuinely $n$-dimensional. Let us notice that the absolute continuity of non-degenerate OL distributions was established in \cite{Y}. This is probably related to our result, even though absolute continuity and non absolute continuity properties are in general not preserved under weak convergence. To make a true connection, one would need a stronger type of convergence such as convergence in total variation \cite{DLS}, but no such result seems available in the literature. It follows from our definition that when $\nu^B$ exhausts ${\mathbb R}^n,$ then $(A,B)$ is necessarily controllable. On the other hand, when $(A,B)$ is not controllable then $\tau = +\infty$ a.s. so that from our result $X_1$ is not a.c.
In a recent paper \cite{PZ}, which was the starting point of this work, it was proved that $X_1$ is absolutely continuous as soon as $(A,B)$ is controllable and the jumping measure $\nu$ of $Z$ is absolutely continuous in an open neighbourhood of 0. These conditions entail of course that $\nu^B$ exhausts ${\mathbb R}^n,$ but they are highly non-equivalent: for example when $A$ is cyclic and non-singular, our result entails that $Z$ might be genuinely one-dimensional with an infinite, and possibly discrete, jumping measure carried by some line in $B^{-1}(b)$ where $b$ is a generating vector of $A$, and nevertheless $X_1$ will be a.c. in ${\mathbb R}^n.$ Our method is also very different from that of \cite{PZ}, which hinges upon a certain derivation procedure, made possible by the absolute continuity condition on $\nu$, along the jumping {\em sizes} of $Z$. Here, as suggested above, we will differentiate along suitably chosen jumping times, the price to pay being the non-singularity assumption on $A$. Our time-derivation procedure is close to the one developed in \cite{K}, whose Theorem 1.1 actually entails that $X_1$ is a.c. when $A$ is non-singular and $\nu^B(\mathcal{H}^c) = \infty$ for every hyperplane $\mathcal{H}\subset{\mathbb R}^n$. But again this latter assumption is more stringent than our exhaustion property, as well as, to the best of our knowledge, all conditions given in the literature on Malliavin calculus for jump processes - see the references given in \cite{NS, K, PZ}. In the third section of this paper we will briefly describe what happens when the driving process $Z$ has some Gaussian component. By independence and by the linearity of the equation (\ref{OU2}), we get an analogous result which only requires a small modification of the proof in the Non-Gaussian case. Then we will discuss a few examples in order to provide more geometrical insight into the exhaustion property.
In particular we will give a complete description in the case $n=2.$ Finally, we will mention some counterexamples and open questions. Before this we now proceed to the \section{Proof of the theorem} \subsection{Proof that $X_1$ is a.c. $\Rightarrow\;\tau = 0$ a.s.} Setting $${\mathcal B}_0\; =\; \bigcap_{t>0}{\mathcal B}_t$$ which is a deterministic subspace of ${\mathcal B}$ by the 0-1 law, it follows from the definition of ${\mathcal B}_t$ and that of a jumping measure that necessarily $\nu^B({\mathcal B}_0^c) < +\infty.$ In particular, ${\mathbb P}[T > 1] > 0$ where $T = \inf\{ t>0, \; \Delta B_t \in{\mathcal B}_0^c\}.$ Endow ${\mathcal B}$ with a canonical Euclidean structure and set ${\mathcal B}_0^\perp$ for the orthogonal complement of ${\mathcal B}_0$ in ${\mathcal B}$, which may be reduced to $\{0\}$ if ${\mathcal B}_0={\mathcal B}.$ Decomposing the L\'evy process $\{B_t, \; t >0\}$ along the orthogonal sum ${\mathcal B} = {\mathcal B}_0\oplus{\mathcal B}_0^\perp:$ $$B_t\; =\; B^0_t\; +\; B^\perp_t, \quad t >0,$$ notice that the L\'evy process $\{B^\perp_t, \; t >0\}$ is either the zero process or a compound Poisson process with drift coefficient, say, $b^\perp$. Hence, on $\{T > 1\},$ we deduce from (\ref{levy}) that $$X_1 \; = \; \mathrm{e}^Ax\; +\; \int_0^1 \mathrm{e}^{(1-s)A}\, \mathrm{d} B^0_s\; -\; \int_0^1 \mathrm{e}^{(1-s)A}b^\perp \, \mathrm{d} s$$ where we use the notation $b^\perp = 0$ if $B^\perp\equiv 0.$ Last, writing for every $t\in{\mathbb R}$ \begin{equation} \label{minime} \mathrm{e}^{tA}\; =\; \sum_{r=1}^q \psi_r(t) A^{r-1}, \end{equation} where the $\psi_r$'s are certain real functions whose exact expression is given e.g. in \cite{G} Chapter 5, we see that $$\int_0^1 \mathrm{e}^{(1-s)A}\, \mathrm{d} B^0_s\; \in\; <A, {\mathcal B}_0>\quad {\rm a.s.}$$ Putting everything together entails that if $X_1$ is a.c.
then necessarily $<A, {\mathcal B}_0>\, ={\mathbb R}^n.$ But from the definitions of ${\mathcal B}_0$ and $\tau,$ this yields $\tau = 0$ a.s. \hfill$\square$ \begin{remark} {\em The above proof also shows that if ${\mathbb P}[\tau > 0] > 0,$ then $X_1$ is valued in some fixed affine hyperplane with probability ${\mathbb P}[T>1].$ Notice that if ${\mathcal B}_0 ={\mathcal B},$ then this probability is 1 and $(A,B)$ is not controllable, which entails that actually $\tau =\infty$ a.s.} \end{remark} \subsection{Proof that $\tau = 0$ a.s. $\Rightarrow\, \nu^B$ exhausts ${\mathbb R}^n$ w.r.t. $A$} We may assume $<A,{\mathcal B}>\, ={\mathbb R}^n$ since otherwise $\tau =\infty$ a.s. Suppose now that $\nu^B$ does not exhaust ${\mathbb R}^n.$ By the above assumption there exists a hyperplane $\mathcal{H}_1\subset{\mathcal B}$ such that $\nu^B(\mathcal{H}_1^c) < \infty.$ If $<A,\mathcal{H}_1>\, \neq{\mathbb R}^n,$ then $$\tau\; \ge \; \inf\{t > 0, \; \Delta B_t \in \mathcal{H}_1^c\}\; >\; 0 \quad{\rm a.s.}$$ If $<A,\mathcal{H}_1>\, ={\mathbb R}^n,$ then there exists a hyperplane $\mathcal{H}_2\subset\mathcal{H}_1$ such that $\nu^B(\mathcal{H}_1\cap\mathcal{H}_2^c) < \infty$ because $\nu^B$ does not exhaust ${\mathbb R}^n,$ and in particular $\nu^B(\mathcal{H}_2^c) < \infty$ since we already have $\nu^B(\mathcal{H}_1^c) < \infty.$ Similarly, when $<A,\mathcal{H}_2>\, \neq{\mathbb R}^n,$ then $$\tau\; \ge \; \inf\{t > 0, \; \Delta B_t \in \mathcal{H}_2^c\}\; >\; 0 \quad{\rm a.s.}$$ When $<A,\mathcal{H}_2>\, ={\mathbb R}^n,$ one can then repeat the same discussion a finite number of times: altogether this entails that $\tau > 0$ a.s. if $\nu^B$ does not exhaust ${\mathbb R}^n,$ which completes the proof by contraposition. \hfill$\square$ \subsection{Proof that $\nu^B$ exhausts ${\mathbb R}^n$ w.r.t. $A\,\Rightarrow\, X_1$ is a.c.} This is the difficult inclusion and we will first establish three lemmas.
The first one is an easy application of the implicit function theorem. The second one is an a.s. linear independence result on a certain class of linear systems, for which we could not find any reference in the literature on control theory. The third one allows us to choose suitably the jumping times of $B$ which will later be targeted into $X_1$ via a certain a.s. submersion. Throughout, $\lambda$ will stand for the Lebesgue measure independently of the underlying Euclidean space. \begin{lemma} \label{Inv1} Let $X$ be an absolutely continuous random variable in ${\mathbb R}^p$ with $p\ge n$ and $\varphi : {\mathbb R}^p \to {\mathbb R}^n$ be a ${\mathcal C}^1$ function such that its Jacobian matrix $\, \mathrm{d} \varphi$ verifies ${\rm Rank}\,\, \mathrm{d} \varphi (X) = n$ a.s. Then $\varphi(X)$ is absolutely continuous in ${\mathbb R}^n.$ \end{lemma} \noindent {\em Proof:} Choose any $x^{}_\mathcal{N}\in{\mathbb R}^p\cap\mathcal{N}^c,$ where $\mathcal{N} = \{x\in{\mathbb R}^p\; /\; {\rm Rank}\;\, \mathrm{d} \varphi (x) < n\}.$ Setting $${\tilde X}\; =\; X{\bf 1}_{\{X\in\mathcal{N}^c\}}\; +\; x^{}_\mathcal{N}{\bf 1}_{\{X\in\mathcal{N}\}},$$ we see by assumption that $X ={\tilde X}$ a.s. so that $\varphi(X) = \varphi({\tilde X})$ a.s. as well. Hence, it suffices to show that $\varphi({\tilde X})$ is absolutely continuous in ${\mathbb R}^n$. In the case $p = n$ this is a well-known fact, for which we found a proof in \cite{St}, Lemma IV.3.1.
For the sake of completeness, we will give an argument in the general case $p\ge n.$ Fix an underlying probability space $(\Omega, \mathcal{F},{\mathbb P}).$ By approximation, it is enough to show that for every relatively compact set $\Gamma\subset \Phi = \{\varphi({\tilde X}(\omega)),\; \omega\in\Omega\}$ such that $\lambda(\Gamma) =0,$ $${\mathbb P}[ \varphi({\tilde X})\in\Gamma] \; =\; 0.$$ For every $y\in\Gamma,$ fix $x\in\varphi^{-1}(y)\cap\mathcal{N}^c,$ which is possible since ${\tilde X}$ is valued in $\mathcal{N}^c.$ Since Rank$\, \mathrm{d}\varphi(x)\, =\, n,$ by the implicit function theorem there exist $\mathcal{V}_x$ and $\mathcal{W}_y$ open neighbourhoods of $x$ and $y$ respectively, ${\mathcal O}_p$ and ${\mathcal O}_n$ open neighbourhoods of $0$ respectively in ${\mathbb R}^p$ and ${\mathbb R}^n$ endowed with a canonical basis, and $\psi_x : \mathcal{V}_x \to {\mathcal O}_p$ and $\psi_y : \mathcal{W}_y \to {\mathcal O}_n$ diffeomorphisms such that $$\psi_y\,\circ\,\varphi\,\circ\,\psi_x^{-1} \; : \; {\mathcal O}_p \; \to \; {\mathcal O}_n$$ is the canonical projection from ${\mathcal O}_p$ to ${\mathcal O}_n.$ Taking a finite covering $\left\{ \mathcal{W}_{y_1}, \ldots, \mathcal{W}_{y_k}\right\}$ of $\Gamma$ yields \begin{eqnarray*} {\mathbb P}[ \varphi({\tilde X})\in\Gamma] & \le & \sum_{i=1}^k {\mathbb P} [ \varphi({\tilde X}) \in \mathcal{W}_{y_i}\cap\Gamma]\\ & = & \sum_{i=1}^k {\mathbb P} [ \psi_{y_i}\circ \varphi({\tilde X}) \in \Gamma_i] \end{eqnarray*} with the notation $\Gamma_i = \psi_{y_i}(\mathcal{W}_{y_i}\cap\Gamma),$ which has Lebesgue measure zero in ${\mathbb R}^n$ since $\psi_{y_i}$ is a diffeomorphism. Setting ${\mathcal A}_i\; =\; \{\psi_{y_i}\circ \varphi({\tilde X}) \in \Gamma_i\}$ for every $i\in\{1,\ldots,k\},$ the random variables $Y_i = {\bf 1}_{{\mathcal A}_i}\psi_{x_i}({\tilde X})$ are absolutely continuous in ${\mathbb R}^p$ since $\psi_{x_i}$ are diffeomorphisms. 
Besides, since the projections $\psi_{y_i}\circ \varphi \circ \psi_{x_i}^{-1}$ have full rank, from Lemma 2.3 in \cite{PZ} the random variables $\psi_{y_i}\circ \varphi \circ \psi_{x_i}^{-1}(Y_i)$ are absolutely continuous in ${\mathbb R}^n.$ Hence, for every $i\in\{1,\ldots, k\},$ \begin{eqnarray*} {\mathbb P} [\psi_{y_i}\circ \varphi({\tilde X}) \in \Gamma_i] & = & {\mathbb P} \left[ \psi_{y_i}\circ \varphi \circ \psi_{x_i}^{-1}(Y_i) \in \Gamma_i\right]\; = \; 0, \end{eqnarray*} which entails that ${\mathbb P}[ \varphi({\tilde X})\in\Gamma] \, =\, 0$ and completes the proof. \hfill$\square$ \begin{lemma} \label{Van1} Assuming (\ref{Rank}), let $(b_1, \ldots, b_r)$ be a generating sequence with respect to $(A,B)$ for some $r\in[\kappa, m].$ Then the set $$\left\{ (t^1_1, \ldots, t_q^1, \ldots, t^r_1, \ldots, t_q^r)\in{\mathbb R}^{q\times r} \; /\;{\rm Rank} \left[ \mathrm{e}^{t^1_1 A}b_1, \ldots, \mathrm{e}^{t_q^1 A}b_1, \ldots, \mathrm{e}^{t^r_1 A}b_r, \ldots, \mathrm{e}^{t_q^r A}b_r\right]\; <\; n\right\}$$ has zero Lebesgue measure. \end{lemma} \noindent {\em Proof:} We first consider the case $\kappa = r = 1.$ Fix $b\in{\mathcal B}$ a generating vector such that ${\mathbb R}^n = {\rm Vect} \left[ b, Ab,\ldots, A^{n-1}b\right].$ The function $$(t_1, \ldots, t_n)\;\mapsto\; {\rm Det} \left[ \mathrm{e}^{t_1 A}b, \ldots, \mathrm{e}^{t_n A}b\right]$$ is analytic in ${\mathbb R}^n$ and it is not identically zero. Actually, if it were, then the analytic function $\rho : t\mapsto\mathrm{e}^{tA}b$ would be valued in some fixed hyperplane of ${\mathbb R}^n,$ as well as all its successive derivatives, which is impossible since $${\rm Vect} \left[ \rho(0), \rho'(0),\ldots, \rho^{(n-1)}(0)\right]\; =\; {\rm Vect} \left[ b, Ab,\ldots, A^{n-1}b\right]\; =\;{\mathbb R}^n.$$ Hence, since the zero set of a real analytic function over ${\mathbb R}^n$ either is ${\mathbb R}^n$ itself or has zero Lebesgue measure - see \cite{F} p. 
240 or Lemma 2 in \cite{Y}, we obtain that $${\rm Rank }\left[ \mathrm{e}^{t_1 A}b, \ldots, \mathrm{e}^{t_n A}b\right]\; =\; n$$ almost everywhere in ${\mathbb R}^n$, which completes the proof when $\kappa=r=1.$ We now proceed to the remaining cases. Recalling (\ref{minime}), we first claim that the $q\times q$ matrix $$\Psi_q (t_1, \ldots, t_q)\; =\; \left\{ \psi_i (t_j)\right\}_{1\le i, j\le q}$$ has rank $q$ a.e. in ${\mathbb R}^q.$ Indeed, by the definition of $q$ there exists $b\in{\mathbb R}^n$ such that Rank $[b, Ab,\ldots, A^{q-1}b]\, =\, q.$ Setting ${\mathcal A}_b\, =\, {\rm Vect} [b, Ab,\ldots, A^{q-1}b]$ and viewing $A$ as a cyclic endomorphism of the $q-$dimensional vector space ${\mathcal A}_b,$ we see from the case $\kappa=r=1$ that Rank $[\mathrm{e}^{t_1 A}b, \ldots, \mathrm{e}^{t_q A}b]\; =\; q$ a.e. in ${\mathbb R}^q.$ However, it follows from (\ref{minime}) that $$[\mathrm{e}^{t_1 A}b, \ldots, \mathrm{e}^{t_q A}b]\; =\; [b, \ldots, A^{q-1}b]\times\Psi_q (t_1, \ldots, t_q)$$ so that $\Psi_q (t_1, \ldots, t_q)$ must have rank $q$ a.e. in ${\mathbb R}^q$ as well. Let now $(b_1, \ldots, b_r)$ be a generating sequence with respect to $(A,B).$ Setting ${\mathcal B}_i \, =\, {\rm Vect} [b_i, Ab_i,\ldots, A^{q-1}b_i],$ we have Dim ${\mathcal B}_i\, \le \, q$ for every $i = 1\ldots r.$ Similarly as above, $$[\mathrm{e}^{t_1 A}b_i, \ldots, \mathrm{e}^{t_q A}b_i]\; =\; [b_i, \ldots, A^{q-1}b_i]\times\Psi_q(t_1, \ldots, t_q)$$ and since $\Psi_q$ is a.e. invertible, this entails that Rank $[\mathrm{e}^{t_1 A}b_i, \ldots, \mathrm{e}^{t_q A}b_i]\, =\,$ Dim ${\mathcal B}_i$ a.e. in ${\mathbb R}^q.$ Notice that by (\ref{minime}), one has $\mathrm{e}^{tA}b_i\in{\mathcal B}_i$ for every $t\in{\mathbb R},$ so that $[\mathrm{e}^{t_1 A}b_i, \ldots, \mathrm{e}^{t_q A}b_i]$ actually forms a basis of ${\mathcal B}_i$ a.e. 
in ${\mathbb R}^q,$ for every $i= 1\ldots r.$ Besides, it follows from the definition of the Lebesgue measure that if ${\mathcal A}_1, \ldots, {\mathcal A}_r$ are negligible sets in ${\mathbb R}^q,$ then $({\mathcal A}_1^c\times\ldots\times{\mathcal A}_r^c)^c$ is negligible in ${\mathbb R}^{q\times r}.$ Putting everything together entails that a.e. in ${\mathbb R}^{q\times r},$ $${\rm Rank} \left[ \mathrm{e}^{t^1_1 A}b_1, \ldots, \mathrm{e}^{t_q^1 A}b_1, \ldots, \mathrm{e}^{t^r_1 A}b_r, \ldots, \mathrm{e}^{t_q^r A}b_r\right]\; =\; {\rm Dim}({\mathcal B}_1 + \ldots + {\mathcal B}_r)\; =\; n,$$ where the last equality comes from the definition of the generating sequence $(b_1, \ldots, b_r).$ The proof is complete. \hfill$\square$ \begin{remarks} {\em (a) By the same argument, one can prove that $$\lambda \left\{ (t_1, \ldots, t_n)\in{\mathbb R}^{n} \; /\;{\rm Rank} \left[ \mathrm{e}^{t_1 A}B, \ldots, \mathrm{e}^{t_n A}B\right]\; <\; n\right\}\; =\; 0$$ as soon as (\ref{Rank}) holds. An interesting point is that when $A$ has real spectrum, then (\ref{Rank}) actually entails that \begin{equation} \label{exact} {\rm Rank} \left[ \mathrm{e}^{t_1 A}B, \ldots, \mathrm{e}^{t_n A}B\right]\; =\; n \end{equation} for {\em every} $\,t_1 < \ldots < t_n.$ The latter is nevertheless false when $A$ has non-real eigenvalues. The proofs of the above two facts involve rather technical considerations on generalized Vandermonde matrices, which we shall not discuss here. \vspace{1mm} \noindent (b) I could not find in the literature any material on the following question, whose positive answer would quickly entail (\ref{exact}). 
Assuming that $A$ has real spectrum and that (\ref{Rank}) holds, let $u^1, \ldots, u^n$ be $n$ piecewise constant control functions which are linearly independent over $[0,1]$ and consider the linear systems $$x^i_t\; = \; x\; +\; \int_0^t (Ax^i_s + Bu^i_s)\, \mathrm{d} s, \quad t\ge 0, \;\; i =1\ldots n.$$ Does $(x^1_1, \ldots, x^n_1)$ then necessarily form a basis of ${\mathbb R}^n$? } \end{remarks} A family $({\mathcal C}_1, \ldots, {\mathcal C}_r)$ of disjoint pointed cones with common vertex at zero such that every $(c_1, \ldots, c_r)\in{\mathcal C}_1\times \ldots \times{\mathcal C}_r$ is a generating sequence with respect to $(A,B)$ will be called a {\em generating garland} with respect to $(A,B),$ or simply a generating garland when there is no ambiguity. When $(b_1, \ldots, b_r)$ is a generating sequence, notice that $({\mathcal C}_1, \ldots, {\mathcal C}_r)$ defined by ${\mathcal C}_i =\{\mu b_i, \; \mu >0\}$ for $i=1\ldots r,$ is a generating garland. Our last lemma makes a connection between this notion and the exhaustion property: \begin{lemma} \label{Geom} If $\nu^B$ exhausts ${\mathbb R}^n$ w.r.t. $A$, then for every $M>0$ there exists $r\in[\kappa,m]$ and a generating garland $({\mathcal C}_1, \ldots, {\mathcal C}_r)$ such that $\nu^B({\mathcal C}_i) \ge M$ for every $i = 1\ldots r.$ \end{lemma} \noindent {\em Proof.} Fix $M > 0.$ If $\nu^B$ exhausts ${\mathbb R}^n$ w.r.t. $A$, then we know that $<A,{\mathcal B}> \, ={\mathbb R}^n.$ Suppose first that $\nu^B(\mathcal{H}^c) =\infty$ for every hyperplane $\mathcal{H}\subset {\mathcal B}$ and let us show that there exists a generating garland $({\mathcal C}_1, \ldots, {\mathcal C}_m)$ such that $\nu^B({\mathcal C}_i) \ge M$ for every $i = 1\ldots m,$ which is intuitively obvious. 
If $\mathcal{S}^{m-1}$ denotes the unit Euclidean sphere of ${\mathcal B},$ consider a family $\{\Pi_\delta, \; \delta > 0\}$ of finite measurable partitions of $\mathcal{S}^{m-1}$ such that ${\rm Diam} \,(\Pi_\delta)\, < \, \delta$ and $\Pi_{\delta'}$ is a subpartition of $\Pi_\delta$ for every $\delta' < \delta.$ Let ${\mathcal C}^\delta_M$ denote the disjoint finite family of pointed cones with vertex at zero and apex in $\Pi_\delta$ such that $\nu^B({\mathcal C}) \ge M$ for every ${\mathcal C}\in{\mathcal C}^\delta_M.$ If for every $\delta > 0$ no generating garland of size $m$ is contained in ${\mathcal C}^\delta_M,$ then for every $\delta > 0$ there exists at least one hyperplane $\mathcal{H}_\delta$ intersecting every ${\mathcal C}\in{\mathcal C}^\delta_M,$ and the assumption ${\rm Diam} \,(\Pi_\delta)\, < \, \delta$ readily entails that all these $\mathcal{H}_\delta$'s converge - in the sense that their normal unit vectors converge in the metric space $\mathcal{S}^{m-1}$ - to some fixed hyperplane $\mathcal{H}_0.$ Lastly, it is a bit tedious but not difficult to see that by construction and by the finiteness of $\Pi_{\delta},$ one must have $\nu^B(\mathcal{H}_0^c) <\infty,$ which yields a contradiction. Hence, there exists $\delta > 0$ such that ${\mathcal C}^\delta_M$ contains a generating garland of size $m,$ and we are done. 
If $\nu^B(\mathcal{H}^c) <\infty$ for some hyperplane $\mathcal{H}\subset {\mathcal B}$, then by definition of the exhaustion property there must exist a hyperplane $\mathcal{H}_{m-1}$ such that $<A,\mathcal{H}_{m-1}> \, ={\mathbb R}^n.$ If $\nu^B(\mathcal{H}_{m-1}\cap\mathcal{H}^c) =\infty$ for every subspace $\mathcal{H}\subset \mathcal{H}_{m-1},$ then reasoning exactly as above we can show that there exists a generating garland $({\mathcal C}_1, \ldots, {\mathcal C}_{m-1})$ such that $\nu^B({\mathcal C}_i) \ge M$ for every $i = 1\ldots m-1.$ If $\nu^B(\mathcal{H}_{m-1}\cap\mathcal{H}^c) <\infty$ for some subspace $\mathcal{H}\subset \mathcal{H}_{m-1},$ then we can repeat the same procedure as above. But again by the definition of the exhaustion property, the latter procedure cannot be repeated more than $(m-\kappa)$ times: all in all, this shows that there exists $r\in[\kappa,m]$ and a generating garland $({\mathcal C}_1, \ldots, {\mathcal C}_r)$ such that $\nu^B({\mathcal C}_i) \ge M$ for every $i = 1\ldots r.$ \hfill$\square$ \begin{remark}{\em When $\nu^B$ exhausts ${\mathbb R}^n,$ there may exist no generating garland $({\mathcal C}_1, \ldots, {\mathcal C}_r)$ such that $\nu^B({\mathcal C}_i) = \infty $ for every $i = 1\ldots r.$ Think of the situation where $\kappa = m = 2$ and $\nu^B$ is infinite with support in the arc $y = x^2$, ${\mathcal B}$ being endowed with an orthonormal frame $Oxy$.} \end{remark} \noindent {\bf End of the proof.} From (\ref{levy}) and after time-reversal, it is enough to prove that $$Y\; =\; \int_0^1\mathrm{e}^{sA}B \, \mathrm{d} Z_s\; =\; \int_0^1\mathrm{e}^{sA}\, \mathrm{d} B_s$$ is absolutely continuous. For this purpose, we will use the same method as depicted in the introduction, in a somewhat more elaborate manner. 
If $\Gamma\subset {\mathbb R}^n$ is such that $\lambda(\Gamma) = 0,$ we need to show that for every $\varepsilon > 0$ $${\mathbb P}[Y\in\Gamma]\; < \; \varepsilon.$$ Fix $\varepsilon >0$ and let $M >0$ be such that $${\mathbb P}[T^M_{q+1}\ge 1]\; <\; \varepsilon/m$$ where $T^M_{q+1}$ is the sum of $(q+1)$ independent exponential variables with parameter $M$. By Lemma \ref{Geom}, there exists $r\in[\kappa,m]$ and a generating garland $({\mathcal C}_1, \ldots, {\mathcal C}_r)$ such that $\nu^B({\mathcal C}_i) \ge 2M$ for every $i = 1\ldots r.$ Besides, if ${\mathcal B}_\eta$ stands for the Euclidean ball of ${\mathcal B}$ centered at 0 with radius $\eta$ and if ${\mathcal C}_i^\eta = {\mathcal C}_i\cap {\mathcal B}_\eta^c,$ then we can actually choose $\eta > 0$ such that $\nu^B({\mathcal C}_i^\eta) \ge M$ for every $i = 1 \ldots r.$ Let $\{ T_{i,p}^\eta, \; p\ge 1\}$ be the ordered sequence of jumping times of the L\'evy process $\{B_t, \; t\ge 0\}$ into ${\mathcal C}^\eta_i$ and set $T_p^\eta = \sup\{T_{i,p}^\eta, \; i =1\ldots r\}$ for every $p\ge 1.$ We have $${\mathbb P}[T^\eta_{q+1}\ge 1]\; \le\; \sum_{i=1}^r {\mathbb P}[T^\eta_{i, q+1}\ge 1]\;\le\;r {\mathbb P}[T^M_{q+1}\ge 1]\; <\; r\varepsilon/m\; \le \; \varepsilon$$ and it is hence sufficient to prove that \begin{equation} \label{tite} {\mathbb P}\left[ T_{q+1}^\eta < 1, \; Y\in\Gamma\right]\; =\; 0 \end{equation} for every $\eta > 0.$ Let $\mathcal{F}^\eta$ be the $\sigma-$algebra generated by $\left\{ T_{i,p}^\eta, \; p\geq q+1, \; i = 1\ldots r\right\},$ $\{ \Delta B_{T_{i,p}^\eta}, \; p\geq 1, \; i = 1\ldots r\}$ and the L\'evy process ${\tilde B}^\eta$ defined by $${\tilde B}^\eta_t\; = \; B_t - \sum_{T_{i,p}^\eta\leq t} \Delta B_{T_{i,p}^\eta}, \quad t\geq 0.$$ On $\{ T_{q+1}^\eta < 1\},$ one can write $$Y\; =\; Y^\eta\; +\; \sum_{i=1}^{r}\sum_{j=1}^q\mathrm{e}^{T_{i,j}^\eta A}\Delta B_{T_{i,j}^\eta}$$ with $Y^\eta$ a $\mathcal{F}^\eta-$measurable random variable. 
Since $T_{q+1}^\eta$ is $\mathcal{F}^\eta-$measurable as well, we have \begin{eqnarray*} {\mathbb P}\left[ T_{q+1}^\eta < 1, \; Y\in\Gamma\right] & = & {\mathbb P}\left[ T_{q+1}^\eta < 1, \; {\mathbb P}\left[ Y\in\Gamma\;\vert\;\mathcal{F}^\eta\right]\right]\\ & = & {\mathbb P}[T_{q+1}^\eta < 1, \; {\mathbb P}[ {\tilde Y}^\eta\in\Gamma^\eta\;\vert\;\mathcal{F}^\eta]] \end{eqnarray*} where $\Gamma^\eta = \Gamma - Y^\eta$ is a $\mathcal{F}^\eta-$measurable set such that $\lambda(\Gamma^\eta) = 0,$ and $${\tilde Y}^\eta\; =\;\sum_{i=1}^{r}\sum_{j=1}^q\mathrm{e}^{T_{i,j}^\eta A}\Delta B_{T_{i,j}^\eta}.$$ The key point is that by standard properties of jumping measures, since the ${\mathcal C}_i^\eta$'s are disjoint, conditionally on $\mathcal{F}^\eta$ the law of the $(r\times q)$-tuple $(T_{1,1}^\eta,\ldots ,T_{1,q}^\eta, \ldots, T_{r,1}^\eta,\ldots ,T_{r,q}^\eta)$ is that of the tensor product of $r$ independent $q$-th order statistics respectively on $[0, T_{i, q+1}^\eta],$ viz. the tensor product of $r$ independent uniform laws on the respective sets $$\left\{ 0 < t^i_1 < \ldots < t^i_q < T_{i, q+1}^\eta\right\}.$$ In particular, the law of $(T_{1,1}^\eta,\ldots ,T_{1,q}^\eta, \ldots, T_{r,1}^\eta,\ldots ,T_{r,q}^\eta)$ is absolutely continuous in ${\mathbb R}^{q\times r}$ and by Lemma \ref{Inv1}, (\ref{tite}) will hold as soon as the Jacobian matrix of the mapping $$(t^1_1,\ldots ,t^1_q, \ldots, t^r_1,\ldots ,t^r_q)\; \mapsto\; \sum_{i=1}^{r}\sum_{j=1}^q\mathrm{e}^{t^i_j A}\Delta B_{T_{i,j}^\eta}$$ from ${\mathbb R}^{q\times r}$ to ${\mathbb R}^n$ has rank $n$ a.e. conditionally on $\mathcal{F}^\eta$ (recall that the sequence $\{ \Delta B_{T_{i,p}^\eta}, \; p\geq 1, \; i = 1\ldots r\}$ is $\mathcal{F}^\eta$-measurable and independent of $\{ T_{i,p}^\eta, \; p\leq q, \; i = 1\ldots r\}$). 
This Jacobian matrix is equal to \begin{equation} \label{yakov} A\times\left[ \mathrm{e}^{t^1_1A}\Delta B_{T_{1,1}^\eta},\ldots, \mathrm{e}^{t^1_q A}\Delta B_{T_{1,q}^\eta}, \ldots, \mathrm{e}^{t^r_1A}\Delta B_{T_{r,1}^\eta},\ldots, \mathrm{e}^{t^r_q A}\Delta B_{T_{r,q}^\eta}\right] \end{equation} and, since $A$ is invertible and $\Delta B_{T_{i,j}^\eta}\in {\mathcal C}_i^\eta$ for every $i=1\ldots r$ and $j=1\ldots q,$ conditionally on $\mathcal{F}^\eta$ it has a.s. the same rank as $$\left[ \mathrm{e}^{t^1_1 A}b_1, \ldots, \mathrm{e}^{t_q^1 A}b_1, \ldots, \mathrm{e}^{t^r_1 A}b_r, \ldots, \mathrm{e}^{t_q^r A}b_r\right]$$ where $(b_1, \ldots, b_r)$ is some generating sequence of ${\mathbb R}^n$ with respect to $(A,B).$ Now by Lemma \ref{Van1}, the latter has full rank a.e. and the proof is finished. \hfill$\square$ \begin{remark}{\em The invertibility assumption on $A$ is only useful for getting the a.s. full rank of the Jacobian matrix given in (\ref{yakov}). Nevertheless this is a crucial assumption, and in the next section we will give a counterexample when $A$ is singular.} \end{remark} \section{Final remarks} \subsection{The case with a Brownian component} If the driving L\'evy process $Z$ has a non-trivial Gaussian component, then by the linearity of (\ref{OU2}) this amounts to considering the problem of absolute continuity for $$X_t \; = \; \mathrm{e}^{tA}x\; +\; \int_0^t \mathrm{e}^{(t-s)A}\, \mathrm{d} W_s\; +\; \int_0^t \mathrm{e}^{(t-s)A}\, \mathrm{d} B_s, \quad t\ge 0,$$ where $W$ is some ${\mathcal B}$-valued Brownian motion independent of the non-Gaussian L\'evy process $B$. Set $H = \,<A, {\rm Im}\, W>$ and, given any Euclidean structure on ${\mathbb R}^n,$ denote by $H^\perp$ the orthogonal complement of $H$ in ${\mathbb R}^n.$ The $H-$valued random variable $$\int_0^1 \mathrm{e}^{(1-s)A}\, \mathrm{d} W_s$$ is Gaussian and by a classical result in control theory - see e.g. 
Theorem 1.1 in \cite{W} - it is non-degenerate, hence absolutely continuous in $H$. Since $W$ and $B$ are independent, Lemma 3 in \cite{Y} and Lemma 2.3 in \cite{PZ} yield $$X_1\; {\rm is \; a.c.}\;\; \Longleftrightarrow\;\; \Pi_{H^\perp}\left(\int_0^1 \mathrm{e}^{(1-s)A}\, \mathrm{d} B_s\right)\; {\rm is \; a.c.}$$ where $\Pi_{H^\perp}$ stands for the orthogonal projection operator onto $H^\perp.$ A straightforward modification of our proof then entails $$X_1\; {\rm is \; a.c.}\;\; \Longleftrightarrow\;\; \tau^{}_H\, =\, 0\;\, {\rm a.s.}\;\; \Longleftrightarrow\;\; \nu^B \;{\rm exhausts}\; {\mathbb R}^n\; {\rm w.r.t.}\; (A,H),$$ where, with the notations of the introduction, we set $\tau^{}_H\; =\; \inf\{ t>0, \; {\mathcal A}_t + H \, ={\mathbb R}^n\},$ write $\kappa^{}_H$ for the minimal number of linearly independent vectors $b_1, \ldots,b_p\in{\mathcal B}$ such that $$<A,b_1>\, +\ldots + <A,b_p> \,+ \; H ={\mathbb R}^n,$$ and say that $\nu^B$ exhausts ${\mathbb R}^n$ with respect to $(A,H)$ if $<A,{\mathcal B}> \,+ \; H ={\mathbb R}^n$ and there exists $r\in [\kappa^{}_H, m]$ and a subspace $\mathcal{H}_r\subset{\mathcal B}$ of dimension $r$ such that $<A,\mathcal{H}_r>\, + \; H ={\mathbb R}^n$ and $\nu^B(\mathcal{H}_r\cap\mathcal{H}^c) =+\infty$ for every hyperplane $\mathcal{H}\subset\mathcal{H}_r.$ \subsection{Some explicit descriptions of the exhaustion property.} From now on we will assume that $Z$ has no Gaussian part, that is $B$ has no Gaussian part either. Let us first consider the case $n=1,$ i.e. $X$ is the solution to \begin{equation} \label{OU1} \, \mathrm{d} X_t\; =\; aX_t \, \mathrm{d} t \; +\; \, \mathrm{d} B_t \end{equation} where $a\in{\mathbb R}$ and $B$ is one-dimensional. 
The exhaustion property just means that $\nu^B$ is infinite and our result reads \begin{equation} \label{n1} X_1\; {\rm is \; a.c.}\;\; \Longleftrightarrow\;\; \nu^B \;{\rm is\; infinite} \end{equation} as soon as $a\neq 0.$ Notice that this is actually an immediate consequence of Theorem A in \cite{NS} - see also Theorem 1.1 in \cite{K}. Let us also give a short proof of the non-trivial reverse implication in (\ref{n1}), similar to that given in the introduction: the solution to (\ref{OU1}) is given by $$X_t \; = \; \mathrm{e}^{ta}x\; +\; \int_0^t \mathrm{e}^{a(t-s)}\, \mathrm{d} B_s\; = \; \mathrm{e}^{ta}x\; +\; B_t \; + \; a\int_0^t \mathrm{e}^{a(t-s)} B_s \, \mathrm{d} s, \quad t\ge 0,$$ where $x$ is the initial condition and where the second equality follows from an integration by parts, assuming $B_0 =0$ without loss of generality. Hence, leaving the details to the reader, one may follow roughly the same method as for the integral of $B$, noticing with the same notations that on $\{T^\eta_2 < 1\}$ the value of $B_1$ does not depend on $T_1^\eta$, hence $B_1$ is $\mathcal{F}^\eta$-measurable as well. \\ Let us now discuss the case $n=2.$ To simplify the notation, we will denote by $\mathcal{I}$ the set, or family of sets, such that the exhaustion property holds if and only if $\nu^B$ is infinite there. We will also suppose implicitly that $A$ is non-singular. Up to some equivalent transformations on $A$ which are not relevant to the absolute continuity problem, there are four situations: \vspace{1mm} \noindent (a) $A$ has no real eigenvalue, in other words $A$ is a multiple of $$\left(\begin{array}{rl} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{array}\right)$$ for some $\theta\in]0,\pi[.$ Then $\kappa = 1$ i.e. $A$ is cyclic, and it is easy to see that every non-zero vector of ${\mathbb R}^2$ is generating. Hence we simply have $\mathcal{I} = {\mathbb R}^2$ viz. as in the real case, $X_1$ is a.c. if and only if $\nu^B$ is infinite. 
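The generating property claimed in case (a) follows from the closed form $\mathrm{Det}[b, Ab] = -\mu\sin\theta\,(x^2+y^2)$ for $A = \mu R(\theta)$ and $b = (x,y)$, which is non-zero whenever $\theta\in]0,\pi[,$ $\mu\neq 0$ and $b\neq 0.$ The following sketch (pure Python, not part of the original text; the values of $\theta$ and $\mu$ are illustrative) checks this identity numerically:

```python
import math

def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0] * v[1] - u[1] * v[0]

def apply_rot(theta, mu, b):
    """Apply A = mu * R(theta), the rotation-type matrix of case (a), to b."""
    c, s = math.cos(theta), math.sin(theta)
    return (mu * (c * b[0] + s * b[1]), mu * (-s * b[0] + c * b[1]))

# Det[b, Ab] = -mu * sin(theta) * |b|^2, nonzero whenever theta lies in
# (0, pi), mu != 0 and b != 0: every nonzero b is generating.
theta, mu = 0.7, 2.0   # illustrative values
for b in [(1.0, 0.0), (0.0, 1.0), (3.0, -4.0)]:
    d = det2(b, apply_rot(theta, mu, b))
    expected = -mu * math.sin(theta) * (b[0] ** 2 + b[1] ** 2)
    assert abs(d - expected) < 1e-9 and abs(d) > 0.1
```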
\vspace{1mm} \noindent (b) $A$ is a multiple of the identity matrix. Then $\kappa = 2$ and $<A,b> \, = {\rm Vect}\{b\}$ for every $b\in{\mathbb R}^2.$ This means that $\mathcal{I} =\{({\rm Vect}\{b\})^c, \; b\in{\mathbb R}^2\}$ viz. the infinite part of $\nu^B$ must not be carried by any line in ${\mathbb R}^2$. \vspace{1mm} \noindent (c) $A$ is a Jordan cell matrix, i.e. $A$ is of the type $$\left(\begin{array}{cc} \alpha & 1 \\ 0 & \alpha \end{array}\right)$$ with $\alpha\neq 0.$ Then $\kappa = 1$ and every non-zero vector of ${\mathbb R}^2$ is generating except the multiples of $(1,0):$ we have $\mathcal{I} = ({\rm Vect}\{(1,0)\})^c.$ \vspace{1mm} \noindent (d) $A = \mbox{Diag}(\alpha, \beta)$ with $\alpha\neq\beta$ and $\alpha, \beta$ nonzero. Then $\kappa = 1$ and every non-zero vector of ${\mathbb R}^2$ is generating except those in Vect $\{(1,0)\}\cup\mbox{Vect}\{(0,1)\}.$ On the other hand, (Vect $\{(1,0)\} - \{0\}$, Vect $\{(0,1)\}-\{0\})$ is a generating garland: we have $\mathcal{I} = \{({\rm Vect}\{(1,0)\})^c\cap({\rm Vect}\{(0,1)\})^c\} \cup \{{\rm Vect}\{(1,0)\} \;\mbox{and Vect}\{(0,1)\}\}.$ \\ When $n > 2,$ it becomes quite lengthy to depict the exhaustion property. Let us give however four typical examples when $n=3,$ keeping for $\mathcal{I}$ the same meaning as above and using the notations $\mathcal{H}_x = \{x=0\}, \mathcal{H}_y = \{ y=0\}$ and $\mathcal{H}_z = \{z=0\}$ where $Oxyz$ is a given orthogonal frame of ${\mathbb R}^3.$ \vspace{1mm} \noindent (f) $A$ is a Jordan cell matrix, i.e. 
$A$ is of the type $$\left(\begin{array}{ccc} \alpha & 1 & 0\\ 0 & \alpha & 1\\ 0 & 0 & \alpha \end{array}\right)$$ with $\alpha\neq 0.$ Then $\kappa = 1$ and every non-zero vector of ${\mathbb R}^3$ is generating except those in $\mathcal{H}_z:$ we have $\mathcal{I} = \mathcal{H}_z^c.$ \vspace{1mm} \noindent (g) $A$ is a block matrix of the following type $$\left(\begin{array}{ccc} \alpha & 0 & 0\\ 0 & \beta & 1\\ 0 & 0 & \beta \end{array}\right)$$ with $\alpha\neq\beta$ and $\alpha, \beta$ nonzero. Then $\kappa = 1$ and every non-zero vector of ${\mathbb R}^3$ is generating except those in $\mathcal{H}_x\cup\mathcal{H}_z:$ we have $\mathcal{I} = \mathcal{H}_x^c\cap\mathcal{H}_z^c.$ \vspace{1mm} \noindent (h) $A$ is a block matrix of the following type $$\left(\begin{array}{ccc} \alpha & 0 & 0\\ 0 & \alpha & 1\\ 0 & 0 & \alpha \end{array}\right)$$ with $\alpha\neq 0.$ Then $\kappa = 2$ and every generating sequence must not be valued in any hyperplane $\mathcal{H}_u = u^\perp$ with $u$ a unit vector of $Oxz:$ we have $\mathcal{I} = \{\mathcal{H}_u^c, \, u\in Oxz\}.$ \vspace{1mm} \noindent (i) $A = \mbox{Diag}(\alpha, \beta, \gamma)$ with distinct nonzero $\alpha, \beta$ and $\gamma.$ Then $\kappa = 1$ and every vector in $\mathcal{H}^c_x\cap\mathcal{H}_y^c\cap\mathcal{H}_z^c$ is generating. 
But as in dimension 2, one can also build generating garlands with one component in $\mathcal{H}_x, \mathcal{H}_y$ or $\mathcal{H}_z.$ The infinity set $\mathcal{I}$ is then the union of the following eight sets or families of sets: $\mathcal{H}_x^c\cap\mathcal{H}_y^c\cap\mathcal{H}_z^c, \{\mathcal{H}_x\cap\mathcal{H}_y^c\cap\mathcal{H}_z^c\,\mbox{and}\,\mathcal{H}_y\cap\mathcal{H}_x^c\}, \{\mathcal{H}_y\cap\mathcal{H}_x^c\cap\mathcal{H}_z^c\,\mbox{and}\,\mathcal{H}_x\cap\mathcal{H}_y^c\}, \{\mathcal{H}_y\cap\mathcal{H}_z^c\cap\mathcal{H}_x^c\,\mbox{and}\,\mathcal{H}_z\cap\mathcal{H}_y^c\}, \{\mathcal{H}_z\cap\mathcal{H}_y^c\cap\mathcal{H}_x^c\,\mbox{and}\,\mathcal{H}_y\cap\mathcal{H}_z^c\}, \{\mathcal{H}_z\cap\mathcal{H}_x^c\cap\mathcal{H}_y^c\,\mbox{and}\,\mathcal{H}_x\cap\mathcal{H}_z^c\}, \{\mathcal{H}_x\cap\mathcal{H}_z^c\cap\mathcal{H}_y^c\,\mbox{and}\,\mathcal{H}_z\cap\mathcal{H}_x^c\}, \{\mathcal{H}_x\cap\mathcal{H}_y\cap\mathcal{H}_z^c\,\mbox{and}\,\mathcal{H}_y\cap\mathcal{H}_z\cap\mathcal{H}_x^c\,\mbox{and}\,\mathcal{H}_z\cap\mathcal{H}_x\cap\mathcal{H}_y^c\}.$ \subsection{Some open questions} As we mentioned before, our theorem no longer holds when $A$ is singular, as the following counterexample with $n=2,d=m=1,$ $$A\; =\, \left(\begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right)\quad\mbox{and}\quad B\; =\, \left(\begin{array}{l} 1 \\ 0 \end{array}\right)$$ shows. From the control theory viewpoint, this example yields the so-called rocket car equations, which serve as toy-models \cite{JM} for studying the Pontryagin maximum principle. From the stochastic viewpoint, assuming $x =(0,0)$ one gets the so-called Kolmogorov process $X = (X^1, X^2),$ with $$X^1_t\; =\; Z_t\quad\mbox{and}\quad X^2_t\; =\; \int_0^t Z_s \, \mathrm{d} s,\quad t\ge 0,$$ where $Z$ is a one-dimensional L\'evy process. Notice that $(A,B)$ is controllable, so that $\nu^B$ exhausts ${\mathbb R}^2$ w.r.t. $A$ if and only if $\nu^Z$ is infinite. 
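The controllability of this pair can be read off from the Kalman rank criterion ${\rm Rank}\,[B, AB, \ldots, A^{n-1}B] = n,$ here ${\rm Rank}\,[B, AB] = 2.$ A minimal numerical sketch (pure Python, ours and not from the text):

```python
# Kalman controllability check for the rocket-car pair of the counterexample:
# A = [[0, 0], [1, 0]], B = (1, 0)^T.
A = [[0.0, 0.0],
     [1.0, 0.0]]
B = [1.0, 0.0]

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

AB = matvec(A, B)                       # = [0.0, 1.0]
det = B[0] * AB[1] - B[1] * AB[0]       # determinant of the 2x2 matrix [B, AB]
assert AB == [0.0, 1.0] and det == 1.0  # full rank: (A, B) is controllable
```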
But then $X^1_1 = Z_1$ may be singular - see again Theorem 27.19 in \cite{S} - so that by Lemma 2.3 in \cite{PZ}, $X_1$ is not absolutely continuous either. When $A$ is singular and $m = n$, one may wonder if the following holds true for every $t > 0:$ $$B_t\; {\rm is \; a.c. \; in}\;{\mathbb R}^n\; \Longrightarrow\;\; X_t\; {\rm is \; a.c.\; in}\; {\mathbb R}^n.$$ From our theorem, this property is trivial when $n = 1$ and we also refer to Theorem B in \cite{NS} for a non-linear extension. When $n\ge 2$ some geometrical difficulties arise however, which will be the matter of further research. When $m <n$ the problem seems much more complicated without further conditions on the jumping measure. In particular I do not have the answer to the following basic question, which would solve the problem at least for the Kolmogorov process: \begin{center} For a real L\'evy process $\{Z_t, \, t\ge 0\},$ if $Z_1$ is a.c. in ${\mathbb R},$ is ${\displaystyle \left( Z_1, \int_0^1Z_t \, \mathrm{d} t\right)}$ a.c. in ${\mathbb R}^2$? \end{center} To conclude this paper, let us go back to the case $n=1$ and consider the following class of infinitely divisible distributions $${\mathcal O}\; =\; \left\{\mathcal{L}\left( \int_0^1 \mathrm{e}^s \, \mathrm{d} B_s\right), \; B\;\mbox{real L\'evy process}\right\},$$ for which we proposed the name OU class. 
If $\mu^B\in{\mathcal O}$ corresponds to some non-Gaussian L\'evy process $B$, after time-reversal our previous discussion entails \begin{equation} \label{class} \mu^B\; {\rm is \; a.c.}\;\; \Longleftrightarrow\;\; \nu^B \;{\rm is\; infinite.} \end{equation} Besides, with a little reflection, one can show that (\ref{class}) also holds when replacing in ${\mathcal O}$ the kernel $\mathrm{e}^s$ by any ${\mathcal C}^1$ function $f(s)$ whose derivative does not vanish in $]0,1[.$ In particular, the equivalence (\ref{class}) will hold for the well-known U-class where $f(s) =s,$ and the B-class where $f(s) = \log s.$ Of course, for these two special classes one could get (\ref{class}) directly by considering the jumping measure of $\mu^B,$ which happens to be absolutely continuous - see \cite{BMS} for details - and applying Tucker's result - see Theorem 27.7 in \cite{S}. It would be interesting to investigate which exact class of integration kernels entails (\ref{class}) for L\'evy integrals, and also what occurs in the multivariate case.
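The easy direction of (\ref{class}) can be illustrated numerically: when $\nu^B$ is finite with total mass $\lambda$ and $B$ is taken driftless for simplicity, $B$ is a compound Poisson process and $\int_0^1 \mathrm{e}^s\,\mathrm{d}B_s$ vanishes on the event that $B$ has no jump before time 1, so $\mu^B$ carries an atom at $0$ of mass at least $\mathrm{e}^{-\lambda}.$ A small Monte Carlo sketch (pure Python; the rate and the Gaussian jump law are illustrative choices, not taken from the text):

```python
import math
import random

def ou_integral(lam, rng):
    """One sample of int_0^1 e^s dB_s for a driftless compound Poisson B
    with jump rate lam and standard Gaussian jumps: sum of e^{T_i} * J_i
    over the jump times T_i < 1."""
    y, t = 0.0, rng.expovariate(lam)
    while t < 1.0:
        y += math.exp(t) * rng.gauss(0.0, 1.0)
        t += rng.expovariate(lam)
    return y

rng = random.Random(1)
lam, n = 1.0, 20000
frac_atom = sum(ou_integral(lam, rng) == 0.0 for _ in range(n)) / n
# The empirical mass of the atom at 0 should be close to e^{-1} ~ 0.368,
# the probability of seeing no jump before time 1.
assert abs(frac_atom - math.exp(-1.0)) < 0.03
```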
\section{Introduction} Over the years, a challenging task has been to explore how entanglement~\cite{horo} is distributed among the constituents of a many-body system and to understand its effects on cooperative phenomena~\cite{amico,adv,mac}. For instance, it was observed that the constituents of the non-critical phases of many-body systems are, in general, less entangled with particles beyond their nearest neighbors (NN), and obey the area law of scaling of entanglement entropy~\cite{hastings,plenio}, which provides useful information about their ground state properties~\cite{amico,adv,mac,plenio} and is closely related to their numerical simulability~\cite{dmrg,mps}. Hence, the study of quantum correlations may actually provide deeper insight into the underlying cooperative and critical phenomena in these systems~\cite{sachdev1,sondhi,vojta}. In return, quantum many-body systems are also important substrates for quantum communication \cite{comm, q_communication1} and computation protocols \cite{comp,kit}, and are thus key enablers for quantum technology. In this work, we report on the entanglement behavior in the ground state phases of a doped one-dimensional (1D) Hubbard model with large onsite interactions. The quantum spin-1/2 particles on the lattice doped with holes interact via the $t$-$J$ Hamiltonian \cite{t_j}, with \(t\) representing a typical tunneling strength between two neighboring sites and \(J\) serving as the spin-spin interaction strength between particles in filled neighboring sites. The $t$-$J$ Hamiltonian is widely used to study the physical properties of doped quantum spin systems, in particular for high-$T_c$ superconducting phases of strongly-correlated matter \cite{t-J_sc,wen}. 
The minimum energy configuration of the $t$-$J$ Hamiltonian exhibits a rich phase diagram in the $J/t$-$n_{el}$ plane, with $n_{el}$ being the electron concentration or density, and has already been extensively studied using physical quantities such as ground state energy, spin correlation functions, and spin gap \cite{phase_separation,t-j_phase1,t_j1_j2,nakamura,t-j_phase2}. In this regard, one of our primary motivations is to investigate how quantum correlations, especially bipartite entanglement (BE) and multipartite entanglement (ME), behave in these different phases, and whether the insertion of defects plays a significant role in altering the entanglement properties. The key finding of this work is the existence of entanglement in the ground state of the doped 1D $t$-$J$ Hamiltonian, in particular at low electron densities, which remains invariant under sudden or perturbative changes to the $J/t$ ratio, implying potential application in robust quantum technologies~\cite{kara-kara-ekhane}. In other words, the entanglement remains constant under perturbations of the system parameter, a phenomenon reminiscent of the \emph{adiabatic freezing} of quantum correlations~\cite{hsd-freeze} (cf.~\cite{manis,expt,freezing}), where the aforementioned quantities are completely insensitive or \emph{frozen} with respect to changes in system parameters~\cite{hsd-freeze} or decoherence~\cite{manis}. We observe that this adiabatic freezing behavior of entanglement is different for the bipartite and multipartite cases, and is closely related to the relevant ground state phases of this model~\cite{phase_separation,t-j_phase1,t_j1_j2,nakamura,t-j_phase2}. 
To elaborate, we observe that at low $J/t$ ratios ($J/t < 2$), for low $n_{el}$, when the system is known to lie in the metallic Luttinger liquid phase \cite{t-j_phase1}, two-site BE, as quantified by the logarithmic negativity \cite{adotey-negativity,negativity1}, decays polynomially with increasing lattice distance, $r$ = $|i-j|$, between the lattice sites \(i\) and \(j\), which essentially signals the dominant long-range order in the phase. Interestingly, within the metallic phase, the BE is invariant to changes in the $J/t$ ratio and is therefore adiabatically frozen. In contrast, at higher $J/t$ ratios, a superconducting spin-gap phase~\cite{t_j1_j2,nakamura} and electron-hole phase separation (PS) occur~\cite{phase_separation}, accompanied by an exponential decay of BE. Consequently, the adiabatic freezing of BE is lost across the quantum phase transition. Of greater significance is the behavior of multipartite entanglement, which, for low fixed values of $n_{el}$, remains adiabatically frozen for all values of the $J/t$ parameter space. Using the generalized geometric measure (GGM) \cite{ggm} (cf.~\cite{ggm_extra}) as the measure of genuine multipartite entanglement, we show that the variation of GGM across the $J/t$-$n_{el}$ phase space, for low $n_{el}$, remains invariant under adiabatic changes of the $J/t$ ratio. It is important to note that no such adiabatic freezing of ME is observed in the undoped anisotropic 1D model \cite{diverging-roy}. Rather counterintuitively, it appears that the presence of \emph{impurities} or \emph{defects} (as modeled by the holes) in the spin chain acts as a vehicle for phases with \emph{frozen} ME.
The importance of the results lies in the fact that many-body systems with robust ME, which is not sensitive to perturbations in system parameters or environmental processes, are necessary for realizing quantum information-theoretic protocols such as measurement-based quantum computation \cite{comp} and quantum communication protocols \cite{comm, q_communication1}. The paper is arranged as follows. In Sec.~\ref{model} we introduce the 1D $t$-$J$ Hamiltonian. We study the decay and adiabatic freezing of bipartite entanglement in Sec.~\ref{be}. We discuss the low electron density ground states of the model in Sec.~\ref{gs} and demonstrate the freezing of genuine multipartite entanglement in Sec.~\ref{me}. We conclude in Sec.~\ref{conc}. \section{\label{model}Model} In our study, we consider the $t$-$J$ Hamiltonian as the model that governs the interactions between the quantum particles in the doped 1D spin lattice, with $N$ sites populated with $N_{el} (< N)$ quantum spin-1/2 particles. The rest of the sites are vacant or contain \emph{holes}. The ``electron density'' of the lattice is given by $n_{el}$ ($= N_{el}/N$). The $t$-$J$ Hamiltonian can be obtained perturbatively from the prominent Hubbard model in the limit of large on-site interaction \cite{t_j}, and has been expressed in the literature in the form, \begin{equation} H= -t\sum_{\langle i,j\rangle,\sigma} {\mathcal{P}_G}~ (c_{i\sigma}^{\dagger}c_{j\sigma}+\text{h.c.}) ~{\mathcal{P}_G}+J\sum_{\langle i,j\rangle}\vec{S_i}\cdot\vec{S_j}, \label{tJ} \end{equation} where $c_{i\sigma}$ ($c_{i\sigma}^\dag$) is the fermionic annihilation (creation) operator of spin $\sigma$ ($= \{\uparrow, \downarrow\}$), acting on site $i$. $\mathcal{P}_G$ is the Gutzwiller projector $\Pi_i (1-n_{i\uparrow}n_{i\downarrow})$ which enforces at most single occupancy at each lattice site.
$\vec{S}_i = \frac{1}{2}\vec{\sigma}_i$ denotes the triad of spin operators $\{S^x,S^y,S^z\}$, while $t$ and $J$ correspond to the transfer energy and the spin-exchange interaction energy terms, respectively, and each is limited to nearest-neighbor sites, with periodic boundary conditions. The ground state phase diagram for the above 1D model has received widespread attention in recent years \cite{phase_separation,t-j_phase1,t_j1_j2,nakamura,t-j_phase2}. In particular, the presence of three primary phases, namely the repulsive Luttinger liquid or metallic, attractive Luttinger liquid or superconducting, and the phase separation, has been predicted using exact diagonalization \cite{t-j_phase1}. However, recent results, using density matrix renormalization group techniques, have also reported the presence of a superconducting spin-gap phase at low $n_{el}$ \cite{t-j_phase2, nakamura}. These phases play a significant role in the entanglement properties of the doped quantum spin model. \section{Decay of bipartite entanglement and adiabatic freezing in metallic phase} \label{be} We now focus on the behavior of bipartite entanglement in the ground state of the $t$-$J$ Hamiltonian. In particular, we look at the logarithmic negativity ($\mathcal{E}$) in the state, $\rho_{ij}$, shared between two sites \(i\) and \(j\), and its decay with increasing lattice distance, $r$ = $|i-j|$, for different phases of the model in the $J/t$-$n_{el}$ plane. \begin{figure}[t] \includegraphics[width=0.42\textwidth]{Fig1.pdf} \caption{(Color online.) Decay and adiabatic freezing of bipartite entanglement in phases of the $t$-$J$ Hamiltonian. The plot shows the variation of two-site entanglement ($\mathcal{E}$) with increasing lattice distance $r$ = $|i-j|$, for the 1D $t$-$J$ Hamiltonian, with $N = 30$ and $n_{el}=\frac{2}{N}$.
For $J/t \leq 2$, the ground state remains in the metallic phase and $\mathcal{E}$ decays polynomially as $1/(A + Br)$ with $r$, exhibiting the presence of a dominating long-range order in the ground state. The values of $A=162.6$ and $B=18.9$, obtained from the average best-fitted curve, remain almost unchanged for all the curves in this phase, and BE is adiabatically frozen. This freezing behavior of bipartite entanglement is shown more clearly in Fig. \ref{BE_freezing}. In contrast, for $J/t \geq 3$, the superconducting and PS phases lead to an exponential decay of BE, given by $\mathcal{E} \sim C ~ \exp{(-\frac{r}{\xi})}$, where $\xi$ is the characteristic length and the constant $C$ can be obtained from the best-fitted curve. $\xi$ and $C$ are dependent on $J/t$ and the adiabatic freezing of BE is lost in these phases. The vertical axis is in ebits and the horizontal axis is dimensionless. \(J/t\) is also dimensionless. In the inset, we set the vertical axis on a logarithmic scale and plot $\mathcal{E}$ for $J/t\geq 3$. } \label{fig1} \end{figure} For a bipartite state $\rho_{ij}$, shared between two sites \(i\) and \(j\), its logarithmic negativity is defined as \begin{eqnarray} \mathcal{E}(\rho_{ij})=\text{log}_2(2 \mathcal{N}(\rho_{ij})+1), \end{eqnarray} where $\mathcal{N}$ is the negativity \cite{adotey-negativity,negativity1}, defined as the absolute value of the sum of the negative eigenvalues of $\rho^{T_i}_{ij}$, so that $\mathcal{N}(\rho_{ij})=\frac{||\rho^{T_i}_{ij}||_1-1}{2}$, where $\rho^{T_i}_{ij}$ denotes the partial transpose of $\rho_{ij}$ with respect to the subsystem $i$. \begin{figure}[t] \includegraphics[width=0.42\textwidth]{Fig2.pdf} \caption{(Color online.) Adiabatic freezing of bipartite entanglement.
Variation of two-site entanglement ($\mathcal{E}$) with $J/t$ for different lattice distances, $r = |i-j| = 1$ (black circles), $r = 3$ (blue diamonds), $r = 5$ (red triangles), for the 1D $t$-$J$ Hamiltonian, with $N = 30$ and $n_{el}=\frac{2}{N}$. From the figure, one can see that for $J/t \leq 2$, the ground state BE remains adiabatically frozen. The vertical axis is in ebits and the horizontal axis is dimensionless. Although the region considered in the figure is $ 0.5 \leq J/t \leq 2 $, the freezing behavior extends all the way to $ J/t=0 $. } \label{BE_freezing} \end{figure} The decay of spin correlation functions with inter-site distance $r$ often signals the nature of the correlations present in the system \cite{sachdev1,sondhi,class_corr_phase}. In general, for non-critical states of strongly-correlated 1D spin systems, quantum correlations are short-ranged and decay exponentially with the increase of lattice distance \cite{lieb_robinson}, giving rise to features such as the area law \cite{hastings,plenio}. As discussed earlier, for all $n_{el}$ in the $J/t$-$n_{el}$ phase space, at low values of $J/t$ ($\lesssim 2$), the ground state remains in a metallic phase or a repulsive Luttinger liquid-like phase \cite{t-j_phase1}. In Fig.~\ref{fig1}, we plot the decay of bipartite entanglement, $\mathcal{E}(\rho_{ij})$, with the lattice distance $r$, for different values of the $J/t$ ratio, using exact diagonalization to obtain the ground state for $N = 30$ and $n_{el}$ = ${2}/{N}$ \cite{lanczos}. In the metallic phase (\(J/t \leq 2.0\)), the decay with respect to \(r\) can be encapsulated as \(\mathcal{E} \sim 1/(A + Br)\), where the numerically obtained values of $A$ and $B$, from the best-fit curve, are given by $A = 162.6$ and $B = 18.9$, respectively.
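The logarithmic negativity defined above is straightforward to evaluate numerically from the partial transpose. The following minimal sketch (not the code used in this work; the two-qubit singlet is chosen purely for illustration) computes $\mathcal{E}$ for a given two-site density matrix:

```python
import numpy as np

def log_negativity(rho, dA=2, dB=2):
    # Partial transpose on subsystem A: view rho as a 4-index tensor
    # rho[iA, iB, jA, jB] and swap the two A indices.
    rT = rho.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(dA*dB, dA*dB)
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(rT)))   # ||rho^{T_A}||_1
    negativity = (trace_norm - 1.0) / 2.0                 # N(rho)
    return np.log2(2.0*negativity + 1.0)                  # E(rho)

# Two-qubit singlet (|01> - |10>)/sqrt(2): one ebit of entanglement
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(log_negativity(np.outer(psi, psi)))  # → 1.0
```

For the $\rho_{ij}$ of the text, one would first trace out all sites except $i$ and $j$ from the ground state obtained by exact diagonalization.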
Significantly, the curves of \(\mathcal{E}(\rho_{ij})\) with respect to \(r\) for different values of \(J/t\) are almost invariant in the metallic phase, i.e., the decay is not only polynomial, but it is the same polynomial for all \(J/t\) (see Fig.~\ref{BE_freezing} for a clearer illustration). The entanglement therefore remains adiabatically frozen under perturbations of $J/t$. It is known that in the Luttinger-liquid phase, the NN spin correlation functions are independent of $J/t$ and the electron density \cite{t-j_phase2}. Therefore, one can infer that the freezing of bipartite entanglement is characteristic of the ground state phase diagram of the 1D $t$-$J$ model. However, for non-NN spin correlation functions there is a very slow variation with the system parameters. Therefore, the behavior of $\mathcal{E}$ in Fig.~\ref{fig1} not only follows the properties of the spin correlation functions, as expected, but also provides further insight into the ground state in the metallic phase. The freezing of bipartite entanglement with respect to system parameters can be advantageous for implementing quantum technologies that are robust to fluctuations in the system parameters, potentially due to errors in the preparation procedure~\cite{kara-kara-ekhane}. In Fig.~\ref{fig1}, for higher values of $J/t$ ($\geq 3$), when the system subsequently enters the superconducting and phase-separation regions \cite{t-j_phase1,t-j_phase2}, the ground state of the system is likely to be a spin liquid or a superposition of configurations in which all the spin-1/2 particles form clusters, leading to a distinctive electron-rich and hole-rich phase separation, respectively. Consequently, in these regions, spin correlation functions are likely to be short-ranged, similar to the undoped ground state of the Heisenberg model. In other words, for high $J/t$, an exponential decay of spin correlation functions is expected.
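The two decay laws can be distinguished by standard least-squares fitting. The sketch below is only illustrative: the functional forms follow the text, but the data are synthetic, generated from the exponential form with the constants reported for $J/t = 3.6$ used as a target; it is not the fitting procedure behind the figures.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate decay laws for the two-site entanglement E(r):
poly = lambda r, A, B: 1.0/(A + B*r)      # metallic (Luttinger-liquid) phase
expo = lambda r, C, xi: C*np.exp(-r/xi)   # superconducting / PS phases

r = np.arange(1, 9, dtype=float)
data = 0.0236*np.exp(-r/0.5225)           # synthetic exponential "data"

(C, xi), _ = curve_fit(expo, r, data, p0=(0.01, 1.0))
print(round(C, 4), round(xi, 4))          # recovers 0.0236 and 0.5225
```

In practice one would fit both forms to the measured $\mathcal{E}(r)$ and compare residuals to decide which phase the ground state belongs to.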
From Fig.~\ref{fig1}, it is evident that as the $J/t$ ratio increases, the BE measure $\mathcal{E}(\rho_{ij})$ exhibits an exponential decay with increasing $r$, given by $\mathcal{E}\sim C~ \exp({-{r}/{\xi}})$, where $\xi$ is the characteristic length of the decay. Again from the best-fit data, one can estimate the value of the constant $C$. As an example, for $J/t$ = 3.6, the best-fitted plot yields $C= 0.0236$ and $\xi=0.5225$. Interestingly, in contrast to the polynomial decay of BE in the metallic phase, the exponential decay rate is not constant for different values of $J/t$ in the superconducting and PS phases. It is observed that the decay becomes steeper with increasing $J/t$, such that entanglement vanishes more quickly with \(r\), and the freezing behavior is completely lost in these regions. Moreover, if we introduce additional next-nearest neighbor interactions in the $t$-$J$ Hamiltonian, the resulting spin model is known to have a rich phase diagram in the $J/t$-$n_{el}$ plane \cite{t_j1_j2}, which is qualitatively similar to that of the Hamiltonian in Eq.~(\ref{tJ}), apart from the fact that, in this case, the intermediate spin-gap phase is spread over a larger area in the phase plane. The boundaries between the metallic, superconducting, and PS phases are altered. Interestingly, the freezing of BE, or lack thereof, in the different phases remains unaltered. \begin{figure}[h] \includegraphics[width=0.36\textwidth]{Fig3.pdf} \caption{(Color online.) RVB gas as the spin-gap phase of the \(t\)-\(J\) Hamiltonian at low electron densities. We plot the fidelity ($\mathcal{F}$) between the ground state of the 1D $t$-$J$ Hamiltonian, obtained via exact diagonalization, and the variational long-range RVB state, at electron density $n_{el}=2/N$. The curves shown in the figure pertain to 1D lattices with $N = 12, 16, 20, 24$ sites. Note that the curves corresponding to \(N \geq 24\) coincide with reasonable numerical accuracy.
We note that the RVB gas states considered for different values of \(J/t\) and \(N\) are not the same, as the sets \(\{r_{\mathcal{C}}\}\) that maximize the fidelity differ. All quantities used are dimensionless. } \label{fidelity} \label{LN} \end{figure} \section{\label{gs}Ground state phase at low electron densities} To understand the behavior of bipartite entanglement in the different phases of the 1D $t$-$J$ Hamiltonian, we now discuss the ground state properties of the model at low electron densities. In the superconducting phase of the model, at low $n_{el}$, a finite spin gap opens up, which is in contrast to the behavior in the high-density region, where the system remains gapless~\cite{t-j_phase2}. Interestingly, we find that in this spin-gap phase, the ground state of the system is essentially a long-range resonating valence bond (RVB) state or the RVB gas \cite{RVB_gas}. Thus, the ground state can be expressed as \begin{equation} |\psi\rangle_{\texttt{RVB}} = \sum_{\mathcal{C}} r_{\mathcal{C}}~ \prod_{i\neq j} |A_i B_{j}\rangle \otimes \prod_{k} |0_k\rangle, \end{equation} where $|A_i B_j\rangle$ = $\frac{1}{\sqrt{2}}(|1\rangle_i |2\rangle_j - |2\rangle_i |1\rangle_j)$ is the spin singlet formed between two spin-1/2 particles at spin-occupied sites `$i$' and `$j$', corresponding to the sublattices $A$ and $B$, respectively. The product is over all such non-overlapping dimers between $N_{el}/2$ pairs of spin-occupied sites $\{i,j\}$. The state $\prod_{k} |0_k\rangle$ represents the holes at the $N - N_{el}$ vacant sites $k$. The summation corresponds to the superposition of all possible dimer coverings ($\mathcal{C}$) on the lattice, each with relative weight $r_{\mathcal{C}}$.
The RVB gas description of the superconducting spin-gap phase of the 1D $t$-$J$ Hamiltonian, at low electron density, is of remarkable significance, since it allows the phase properties of this model and beyond to be studied using the RVB ansatz \cite{t_j1_j2,t_J_RVB2,t_J_RVB3} under suitable doping. Hence, even for moderate-sized systems, where exact diagonalization is not possible, the doped RVB ansatz opens up the possibility of investigating different properties of the $t$-$J$ Hamiltonian~\cite{t-J_RVB_ours} using tensor network~\cite{cirac} or other approximate approaches~\cite{isotropic}. Fig.~\ref{fidelity} depicts the behavior of the fidelity, $\mathcal{F}$ = $\max_{\{r_{\mathcal{C}}\}}|\langle \phi_g|\psi\rangle_{RVB}|$, between the ground state \(|\phi_g\rangle\) as estimated by exact diagonalization and the RVB state \(|\psi\rangle_{RVB}\), for low electron density, $n_{el} = 2/N$. One observes that after a certain $J/t$ ($\approx 2.3$), pertaining to the transition between the metallic and superconducting phases, the minimum energy configuration of the system is actually a long-range RVB gas. Further observation shows that even if we increase the $J/t$ ratio to a large value, the ground state at low $n_{el}$ still exhibits RVB behavior, but the probability of forming nearest-neighbor singlet pairs increases compared to that of distant pairs, due to the formation of electron-hole phase separation. In principle, this may lead to the formation of an RVB liquid state or NN dimer phase for high \(J/t\), which has a decisive bearing on the exponential decay of the two-site entanglement of the system, as the quantum correlations of NN RVB states are known to be short-ranged. \section{\label{me}Freezing of multipartite entanglement} A significant outcome of our analysis of the entanglement properties of the ground state phases of the 1D $t$-$J$ Hamiltonian is the characteristic behavior of genuine multipartite entanglement.
To measure the genuine ME in the different regions of the $J/t$-$n_{el}$ plane, we use the generalized geometric measure (GGM) \cite{ggm} (cf.~\cite{ggm_extra}). For an $N$-party pure quantum state $|\phi\rangle$, the GGM is a computable measure of genuine multisite entanglement, which is formally defined as the optimized fidelity-based distance of the state from the set of all states that are not genuinely multiparty entangled. Mathematically, the GGM can be evaluated as \begin{eqnarray} \mathcal{G}(|\phi\rangle) = 1 - \lambda_{\text{max}}^2(|\phi\rangle), \nonumber \end{eqnarray} where $\lambda_{\text{max}}(|\phi\rangle) = \max|\langle\xi_N|\phi\rangle|$, with $|\xi_N\rangle$ an $N$-party quantum state that is not genuinely multisite entangled, and the maximization is performed over the set of all such states. The GGM can be effectively computed using the relation \begin{eqnarray} \mathcal{G}(|\phi\rangle) = 1 - \max\{\lambda^2_{A:B}\,|\,A\cup B = \{A_1 , \dots, A_N\},\, A\cap B=\emptyset\}, \nonumber \end{eqnarray} where $\lambda_{A:B}$ is the maximum Schmidt coefficient over all possible bipartite splits $A:B$ of the given state $|\phi\rangle$. A difficulty in computing the multiparty entanglement measure $\mathcal{G}$ lies in the fact that the number of possible bipartitions increases exponentially with the lattice size. Therefore, we restrict ourselves to moderate-sized systems, which in our case means $N = 16$. We observe that at low electron concentrations the GGM is adiabatically frozen over significant regions of the phase space. \begin{figure}[t] \includegraphics[width=0.35\textwidth]{Fig4.pdf} \caption{(Color online) Adiabatic freezing of genuine multipartite entanglement. The plot shows the variation of the generalized geometric measure, $\mathcal{G}$, with $n_{el}$ for different values of $J/t$. The number of lattice sites in the 1D model is fixed at $N = 16$. At low electron density, viz.
$n_{el} \lesssim 0.5$, $\mathcal{G}$ increases linearly, \emph{along the same line}, with $n_{el}$, and reaches its maximum value at $n_{el}\approx 0.6$. This feature remains invariant for any value of the $J/t$ ratio. However, at large $n_{el}$, $\mathcal{G}$ becomes a function of the system parameters and the feature of increasing along the same line, obtained earlier, disappears. The inset shows that $\mathcal{G}$ is frozen with respect to change in $J/t$, for low $n_{el}$. The units of the axes are the same as in Fig.~\ref{fig1}. } \label{ggm} \end{figure} We study the variation of GGM in the ground state of the 1D $t$-$J$ Hamiltonian, with respect to the system parameters $J/t$ and $n_{el}$, as depicted in Fig.~\ref{ggm}. For convenience of presentation, we look at higher values of $J/t$ ($\geq 2.5$), corresponding to the superconducting and PS phases of the model. We observe that $\mathcal{G}$ increases linearly with $n_{el}$, at low values of $n_{el}$, for fixed $J/t$. It reaches a maximum at $n_{el} \approx$ 0.6, thereafter decreasing with further increase in $n_{el}$. This behavior is similar to the ground state properties of spin liquid phases in doped Heisenberg ladders \cite{t-J_RVB_ours}. Significantly, in the low electron density regime, i.e., $n_{el} \lesssim$ 0.5, the genuine multisite entanglement ($\mathcal{G}$) is insensitive to the parameter $J/t$, and is thus adiabatically frozen. We have numerically observed that at low $n_{el}$ this phenomenon extends to lower values of $J/t$. However, this freezing of GGM completely vanishes as the electron density is increased. We note that such adiabatic freezing of ME is not observed in other models, for instance in the undoped anisotropic 1D model \cite{diverging-roy}. This highlights a set of unique features of the ground state phases of the 1D $t$-$J$ Hamiltonian.
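Operationally, the maximization over bipartitions described above can be carried out by brute force for small systems. A minimal sketch (illustrated on a four-qubit GHZ state rather than the $t$-$J$ ground state, and assuming qubit local dimensions) reads:

```python
import numpy as np
from itertools import combinations

def ggm(psi, n):
    """GGM of an n-qubit pure state: 1 minus the largest squared Schmidt
    coefficient over all bipartitions A:B of the n sites."""
    psi = psi.reshape([2]*n)
    lam2_max = 0.0
    for k in range(1, n//2 + 1):              # size of subsystem A
        for A in combinations(range(n), k):
            B = [i for i in range(n) if i not in A]
            # Group the A indices as rows and the B indices as columns;
            # the singular values are the Schmidt coefficients of A:B.
            M = psi.transpose(list(A) + B).reshape(2**k, -1)
            s = np.linalg.svd(M, compute_uv=False)
            lam2_max = max(lam2_max, s[0]**2)
    return 1.0 - lam2_max

# 4-qubit GHZ state: every bipartition has Schmidt coefficients 1/sqrt(2)
ghz = np.zeros(16); ghz[0] = ghz[15] = 1.0/np.sqrt(2.0)
print(ggm(ghz, 4))  # → 0.5
```

(For even $n$, the $k = n/2$ loop visits each balanced bipartition twice, which is redundant but harmless.)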
In particular, in the metallic Luttinger liquid phase, at low $J/t$ and $n_{el}$, bipartite entanglement is long-ranged and adiabatically frozen, in stark contrast to the exponentially decaying BE in the superconducting and PS phases. However, at low $n_{el}$ but all $J/t$, including the latter phases, multipartite entanglement is frozen and completely invariant to the system parameters. This provides an interesting interplay between the behavior of BE and ME in different phases of the doped Hubbard model. \section{\label{conc}Conclusion} Entanglement is an important resource in quantum information protocols \cite{amico,adv,horo}. However, in general, both bipartite and multipartite entanglement are fragile under decoherence \cite{sudden_death}, and this is one of the main obstacles in the realization of these protocols. Moreover, entanglement may also be highly sensitive to perturbative or sudden changes in system parameters and may fluctuate close to critical points, as observed during collapse and revival \cite{hsd-epl-pla} and dynamical transitions of entanglement \cite{aditi-pra}. It was observed that certain information-theoretic quantum correlations, such as quantum discord, can exhibit freezing in the face of decoherence \cite{freezing}, fostering the belief that this could lead to robust information protocols. However, entanglement, the workhorse of key quantum information protocols, rarely freezes under system-parameter or temporal changes, including under decoherence (cf.~\cite{ekhane-pinjar}). Our results show that doped quantum spin chains described by the 1D $t$-$J$ Hamiltonian contain ground state phases that exhibit adiabatic freezing of both bipartite and genuine multisite entanglement. Interestingly, the same model without the insertion of defects -- in the form of doping -- does not exhibit a similar freezing phenomenon \cite{diverging-roy}.
It is the presence of defects in the quantum spin system that gives rise to the phenomenon of adiabatic freezing of entanglement. An important observation in this regard is that no freezing phenomenon of multiparty entanglement (or other multiparty quantum correlations) has hitherto been observed in any quantum system. For applications in quantum information protocols, such as fault-tolerant \cite{kit} or one-way computation \cite{comp}, robustness of multisite entanglement against fluctuating system parameters can be a significant resource in achieving desired levels of stability. \acknowledgments The research of SSR was supported in part by the INFOSYS scholarship for senior students. HSD acknowledges funding by the Austrian Science Fund (FWF), project no. M 2022-N27, under the Lise Meitner programme. DR acknowledges support from the EU Horizon 2020-FET QUIC 641122.
\section{Introduction} At the heart of superfluidity -- which includes a whole collection of phenomena -- are non-classical rotational properties and the support of persistent currents \cite{SF}. One of the simplest and most idealized models for the study of these properties is the one where the particles move in a ring potential. This model focuses on the longitudinal degrees of freedom and assumes periodic boundary conditions. Remarkably, several recent experiments on Bose-Einstein condensed gases of atoms have managed to realize such a system, at least when the transverse degrees of freedom do not play a crucial role \cite{Patrik}, and to investigate its rotational properties. More specifically, experimentalists have managed to trap atoms in toroidal/annular traps and have even created persistent currents in them \cite{Kurn, Olson, Phillips1, Foot, GK, Moulder, Ryu, WVK}. Going one step further, the addition of an extra, distinguishable, component may also be considered. This problem is even more interesting: the extra degrees of freedom associated with this extra component introduce novel and highly non-trivial effects. Interestingly enough, this has also become possible experimentally \cite{Zoran}. On the theoretical side \cite{2comp, 2compp0, 2compp1, 2compp2, 2compp3, 2compp4, 2compp5, 2compp6, 2compp7}, the general problem of a Bose-Einstein condensate with two distinguishable components -- which we label as ``$A$'' and ``$B$'' -- that is confined in a ring potential may be attacked at various levels of complication/difficulty. Assuming equal masses $M$ for the two components, there are two cases that one may distinguish. The first is the ``symmetric'' one, where the scattering lengths $a_{AA}$, $a_{BB}$, and $a_{AB}$ for elastic atom-atom collisions between $AA$, $BB$, and $AB$ atoms respectively are all equal to each other. The second (and more realistic) is the ``asymmetric'' one, where at least two of the scattering lengths are not equal to each other.
In the symmetric case the dispersion relation is exactly linear within the mean-field approximation \cite{2comp} for $0 \le L \le N_B$ and $N_A \le L \le N = N_A + N_B$, where $L \hbar$ is the total angular momentum of the system, and $N_A$, $N_B$ are the numbers of particles in each component (here we assume without loss of generality that $N_B < N_A$). In the asymmetric case, the linearity of the spectrum disappears \cite{2compp6, 2compp7}, while for $N_B \le L \le N_A$ in both the symmetric and the asymmetric cases the dispersion relation is more complex. In the present study we focus on the asymmetric case and use both the mean-field approximation and the method of diagonalization of the many-body Hamiltonian to study the rotational properties of this system. Two crucial assumptions are made throughout the paper. The first is that the inter- and intra-component effective interactions are repulsive. The second is that the two components coexist spatially. The condition for phase coexistence in a finite ring has been derived in Ref.\,\cite{2comp}, and we make sure that none of the parameter sets we use violates it. Roughly speaking, this condition demands that the repulsion between atoms of the same species is stronger than that between atoms of different species. In a real experiment there is, of course, also the question of dynamic instability. As shown in Ref.\,\cite{2comp}, the condition of energetic stability coincides with that of dynamic stability of the system. We should also mention that in spinor condensates realistic dynamic simulations show that the spatial separation of the two components is possible, and this may affect significantly the rotational behaviour of the system, see, e.g., Ref.\,\cite{2compp2}. According to the results which are described below, under rather typical conditions, the minority component carries the majority of the angular momentum in the whole interval $0 < L \le N_B$.
One of the novel results of our study is that under certain conditions the whole excitation spectrum is quasi-periodic (in addition to the periodicity dictated by the Bloch theorem \cite{FB}, which holds also in a two-component system \cite{2comp}) and may be derived from the one for $0 < L \le N_B$ by exciting the center-of-mass motion, either of the $A$ component, of the $B$ component, or both. Furthermore, in the limit of ``strong'' interactions there is a very simple candidate state that minimizes the interaction energy of the system (under the assumption that there is no phase separation). This is the one where the density is homogeneous in each component separately, i.e., the one where the two order parameters $(\Psi_A, \Psi_B)$ of the two components are in the plane-wave states $(\phi_m, \phi_n)$. Here $\phi_m(\theta) = e^{i m \theta}/\sqrt{2 \pi R}$ are the eigenstates of the non-interacting problem, where $R$ is the radius of the ring, which have an angular momentum $m \hbar$. The corresponding total angular momentum in the pair of states $(\phi_m, \phi_n)$ is $L \hbar$, with $L = m N_A + n N_B$. A suitable choice of the integers $m$ and $n$ allows us to give any value to $L$, provided that $N_A$ and $N_B$ are relatively prime. Clearly, among all the possible values of $m$ and $n$ that satisfy the constraint of the angular momentum, one has to choose the pair $(\phi_m, \phi_n)$ that minimizes the kinetic energy. The number-theoretic arguments presented above hold for any atom number. For large atom numbers the mean-field approximation provides an excellent description of the state of the system. Still, within the mean-field approximation one fixes the population imbalance $x_i = N_i/N$, treating $x_i$ as a continuous variable. Even though the number-theoretic behaviour that results from the analysis presented above still applies, it has more dramatic effects in the limit of small atom numbers.
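The number-theoretic argument above can be made concrete with a short search. The sketch below (our own illustration; the function name, example values, and search cutoff are arbitrary choices) finds, for coprime $N_A$ and $N_B$, the pair $(m, n)$ with $m N_A + n N_B = L$ that minimizes the kinetic energy $m^2 N_A + n^2 N_B$ in units of $\epsilon$:

```python
from math import gcd

def yrast_plane_waves(NA, NB, L, mmax=10):
    """Minimize m^2*NA + n^2*NB over integer pairs (m, n) with
    m*NA + n*NB = L. Since gcd(NA, NB) = 1, a solution exists for
    every integer L; mmax bounds the search window for m."""
    assert gcd(NA, NB) == 1
    best = None
    for m in range(-mmax, mmax + 1):
        if (L - m*NA) % NB == 0:          # n must be an integer
            n = (L - m*NA)//NB
            E = m*m*NA + n*n*NB           # kinetic energy in units of epsilon
            if best is None or E < best[0]:
                best = (E, m, n)
    return best  # (kinetic energy, m, n)

# Example: NA = 7, NB = 4; L = 1 is reached most cheaply by (m, n) = (-1, 2)
print(yrast_plane_waves(7, 4, 1))  # → (23, -1, 2)
```

The same search, scanned over $L$, traces out the candidate plane-wave yrast energies in the strong-interaction limit.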
To explore these finite-$N$ effects, we use the method of numerical diagonalization of the many-body Hamiltonian. In what follows below we describe in Sec.\,II the model that we use and the two approaches, namely the mean-field approximation and the diagonalization of the many-body Hamiltonian. In Sec.\,III we study the excitation spectrum, starting with the limit of long-wavelength excitations. In the same section we then focus on the mean-field approximation and show how one can derive the excitation spectrum starting from the one for $0 \le L \le N_B$. Then, in Sec.\,IV we investigate the excitation spectrum beyond the mean-field approximation, diagonalizing the many-body Hamiltonian. We first present an alternative way of exciting the system collectively and give an approximate generalization of Bloch's theorem. Finally, we compare the results that we get from the diagonalization with the ones of the mean-field approximation. In Sec.\,V we present a conjecture about the form of the many-body state that is expected to be the state of lowest energy under some conditions that are analysed. Finally, in Sec.\,VI we give a summary of our study and an overview of our results. \section{Model and approach} The Hamiltonian that we consider is \begin{eqnarray} {\hat H} = \frac {\hbar^2} {2 M R^2} \sum_m m^2 ({\hat c}_m^{\dagger} {\hat c}_m + {\hat d}_m^{\dagger} {\hat d}_m) \nonumber \\ + \frac 1 2 g_{AA} \sum_{i,j,k,l} {\hat c}_i^{\dagger} {\hat c}_j^{\dagger} {\hat c}_k {\hat c}_l \, \delta_{i+j, k+l} \nonumber \\ + \frac 1 2 g_{BB} \sum_{i,j,k,l} {\hat d}_i^{\dagger} {\hat d}_j^{\dagger} {\hat d}_k {\hat d}_l \, \delta_{i+j, k+l} \nonumber \\ + g_{AB} \sum_{i,j,k,l} {\hat c}_i^{\dagger} {\hat d}_j^{\dagger} {\hat c}_k {\hat d}_l \, \delta_{i+j, k+l} . \end{eqnarray} Here $\hbar^2 m^2/(2 M R^2)$ is the eigenenergy of the single-particle eigenstates $\phi_m(\theta)$.
The mass $M$ is assumed to be the same for the two species, while $g_{ij} = U_{ij}/(2 \pi)$, with $U_{ij}$ being the matrix elements for zero-energy elastic collisions between the $AA$, $BB$, and $AB$ components. Also, ${\hat c}_m$ and ${\hat d}_m$ are the operators which destroy an $A$, or a $B$ atom with angular momentum $m \hbar$, respectively. In what follows below we set $E_m = m^2 \epsilon$, where $\epsilon = \hbar^2/(2 M R^2)$, and also $\hbar = 2M = R = 1$. In the case of a single component this problem has been attacked by Lieb and Liniger \cite{LLM, LM} with the use of the Bethe ansatz. The case of two species with equal scattering lengths has also been considered, see, e.g., Refs.\,\cite{BA}. In the present study we attack this problem in two ways. The first is within the mean-field approximation, introducing the two order parameters $\Psi_A$ and $\Psi_B$ of the two components, thus solving the corresponding coupled Gross-Pitaevskii-like equations (with $\Psi_A$ and $\Psi_B$ normalized to unity), \begin{eqnarray} - \frac {\partial^2 \Psi_A} {\partial \theta^2} + (U_{AA} N_A |\Psi_A|^2 + U_{AB} N_B |\Psi_B|^2) \Psi_A &=& \mu_A \Psi_A \nonumber \\ - \frac {\partial^2 \Psi_B} {\partial \theta^2} + (U_{BB} N_B |\Psi_B|^2 + U_{AB} N_A |\Psi_A|^2) \Psi_B &=& \mu_B \Psi_B, \nonumber \\ \end{eqnarray} where $\mu_A$ and $\mu_B$ are the chemical potentials, and $N_A$ and $N_B$ are the numbers of atoms in the two components. We find the solutions of lowest energy of the above equations imposing the constraint of some fixed angular momentum, as described in detail in Ref.\,\cite{2compp7}. Alternatively, we solve this problem by diagonalizing the many-body Hamiltonian. Within this scheme we choose a set of single-particle states $\phi_m(\theta)$, with $m_{\rm min} \le m \le m_{\rm max}$, making sure that a decent convergence has been achieved with respect to $m_{\rm min}$ and $m_{\rm max}$.
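As an aside, the coupled mean-field equations above can be solved, for instance, by split-step imaginary-time relaxation on the ring. The following is a minimal sketch at zero angular momentum (the grid size, time step, and interaction parameters are illustrative choices, not the values used in this work); in the miscible regime both densities relax to the uniform value $1/(2\pi)$:

```python
import numpy as np

# Units hbar = 2M = R = 1, so the kinetic operator is -d^2/dtheta^2.
Ntheta = 128
dtheta = 2*np.pi/Ntheta
theta = np.arange(Ntheta)*dtheta
k = np.fft.fftfreq(Ntheta, d=dtheta)*2*np.pi   # integer wavenumbers on the ring
UAA = UBB = 1.0; UAB = 0.5; NA, NB = 18, 12    # illustrative parameters
dt = 1e-3

def normalize(psi):
    # enforce the normalization  integral |psi|^2 dtheta = 1
    return psi/np.sqrt(np.sum(np.abs(psi)**2)*dtheta)

# slightly perturbed uniform initial guesses
psiA = normalize(1 + 0.01*np.cos(theta))
psiB = normalize(1 + 0.01*np.sin(theta))
for _ in range(5000):
    # kinetic step in Fourier space, interaction step in real space
    psiA = np.fft.ifft(np.exp(-dt*k**2)*np.fft.fft(psiA))
    psiB = np.fft.ifft(np.exp(-dt*k**2)*np.fft.fft(psiB))
    VA = UAA*NA*np.abs(psiA)**2 + UAB*NB*np.abs(psiB)**2
    VB = UBB*NB*np.abs(psiB)**2 + UAB*NA*np.abs(psiA)**2
    psiA = normalize(psiA*np.exp(-dt*VA))
    psiB = normalize(psiB*np.exp(-dt*VB))

print(np.std(np.abs(psiA)**2))  # close to zero: uniform (miscible) density
```

Fixed angular momentum, as used in this work, requires the additional constraint described in Ref.\,[2compp7]; the sketch above handles only the unconstrained $L = 0$ case.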
In this subspace of basis states we impose the constraints of a fixed number of atoms $A$ and $B$, $N_A$ and $N_B$, respectively. We also impose the constraint of some fixed angular momentum $L$ (which can be shared between the two components), see, e.g., \cite{2comp}. Finally we diagonalize the resulting Hamiltonian matrix in this subspace, thus deriving the eigenstates and the corresponding eigenenergies. The terminology of the ``yrast" state that we use below refers to the eigenstate with the lowest eigenenergy, i.e., the state which minimizes the energy for some given eigenvalue of the angular momentum. The same term is used within the mean-field approximation, where one fixes the expectation value of the angular momentum, instead. We stress that these yrast states play a fundamental role in the rotational response of these systems, very much like the phonon-roton spectrum in the problem of liquid helium. Finally, we also stress that the problem of fixing the angular momentum is intimately connected with the one where the angular velocity of the trap is fixed, instead. \section{Excitation spectrum -- Mean-field approximation} \subsection{Elementary excitations} Let us start with the mean-field approximation. When the system has zero angular momentum, $L=0$, it is in the state \begin{eqnarray} |L = 0 \rangle = |0^{N_A} \rangle_A \bigotimes |0^{N_B} \rangle_B, \end{eqnarray} where in this notation we have $N_A$ and $N_B$ atoms in the single-particle state with $m = 0$. The total energy of the system is \begin{eqnarray} E_0 = \frac 1 2 g_{AA} N_A (N_A - 1) + g_{AB} N_A N_B \nonumber \\ + \frac 1 2 g_{BB} N_B (N_B - 1). \end{eqnarray} Giving one unit of angular momentum via a single-particle excitation to, e.g., the $B$ component, we obtain \begin{eqnarray} |L = 1 \rangle = |0^{N_A} \rangle_A \bigotimes |0^{N_B-1}, 1^1 \rangle_B, \label{siexc} \end{eqnarray} and correspondingly for the species $A$.
The total energy of this state is \begin{eqnarray} E' = 1 + \frac 1 2 g_{AA} N_A (N_A - 1) + g_{AB} N_A N_B \nonumber \\ + \frac 1 2 g_{BB} [(N_B-1) (N_B - 2) + 4 (N_B-1)]. \end{eqnarray} Therefore, \begin{eqnarray} E' - E_0 = 1 + g_{BB} (N_B - 1), \label{diffe} \end{eqnarray} where the last term comes from the exchange interaction. From the above equation it follows that it is the ratio \begin{eqnarray} r = \frac {g_{BB} (N_B-1)} {g_{AA} (N_A-1)} \label{ratio} \end{eqnarray} which determines whether the angular momentum goes to the one, or the other component. In what follows below we set $g_{AA} = g_{BB} = g$, and thus as Eq.\,(\ref{ratio}) implies, with the assumption $N_A > N_B$ that we have made, we conclude that it is the $B$ (minority) component that carries the angular momentum, for $L=1$. By the way, Eq.\,(\ref{diffe}) may be identified as the speed of sound $c$ of the $B$ component, or equivalently as the slope of the dispersion relation for $L \to 0^+$ for exciting it. More specifically, \begin{eqnarray} c = 1 + g (N_B - 1). \label{diffee} \end{eqnarray} \subsection{Distribution of the angular momentum between the two components} While the above result holds for $L=1$, it turns out that more generally, under ``typical" conditions (which will be analysed below) the minority component carries the largest part of the angular momentum, all the way up to $L=N_B$. The two order parameters may be expanded in the basis of plane-wave states, \begin{eqnarray} \Psi_A = \sum_{m = m_{\rm min}}^{m = m_{\rm max}} c_m \phi_m, \,\,\, \Psi_B = \sum_{m = m_{\rm min}}^{m = m_{\rm max}} d_m \phi_m. 
\end{eqnarray} The corresponding energy per atom is \begin{eqnarray} \frac E N = \sum_{m_{\rm min}}^{m_{\rm max}} m^2 (x_A c_m^2 + x_B d_m^2) \nonumber \\ + \frac 1 2 x_A^2 N U \int \left| \sum_{m_{\rm min}}^{m_{\rm max}} c_m \phi_m \right|^4 \, d \theta \nonumber \\ + \frac 1 2 x_B^2 N U \int \left| \sum_{m_{\rm min}}^{m_{\rm max}} d_m \phi_m \right|^4 \, d \theta \nonumber \\ + x_A x_B N U_{AB} \int \left|\sum_{m_{\rm min}}^{m_{\rm max}} c_m \phi_m \right|^2 \, \left| \sum_{m_{\rm min}}^{m_{\rm max}} d_m \phi_m \right|^2 \, d \theta. \label{enen} \end{eqnarray} Considering the limit of weak interactions, in the interval $0 \le \ell \le 1$ one may work with the states with $m=0$ and $m=1$, only, \begin{eqnarray} \Psi_A = c_0 \phi_0 + c_1 \phi_1, \,\,\, \Psi_B = d_0 \phi_0 + d_1 \phi_1, \end{eqnarray} where $\ell = L/N = x_A c_1^2 + x_B d_1^2$ is the angular momentum per particle. In the ``symmetric" case $(g = g_{AB})$ it turns out that for $0 \le \ell \le x_B$ \cite{2compp0}, \begin{eqnarray} c_0^2 = \frac {(x_A - \ell) (1-\ell)} {x_A (1 - 2 \ell)}, \,\,\, c_1^2 = \frac {(x_B - \ell) \ell} {x_A (1 - 2 \ell)} \label{occup1} \end{eqnarray} and \begin{eqnarray} d_0^2 = \frac {(x_B - \ell) (1-\ell)} {x_B (1 - 2 \ell)}, \,\,\, d_1^2 = \frac {(x_A - \ell) \ell} {x_B (1 - 2 \ell)}, \label{occup2} \end{eqnarray} with $c_0 c_1 d_0 d_1$ negative (as minimization of the energy implies). In this symmetric case the maximum value of the angular momentum carried by the majority component in the interval $0 \le \ell \le x_B$ is of order $x_B^2/4$, for $\ell \approx x_B/2$ (for relatively small $x_B$). Figure 1 shows $c_0^2, c_1^2, d_0^2$, and $d_1^2$ for $x_A = 0.8$ and $x_B = 0.2$. We have seen numerically that in the asymmetric model ($g > g_{AB}$) the angular momentum of the majority component decreases as $g/g_{AB}$ increases. This is expected, since in the limit of $g_{AB} \to 0$, the two components decouple. 
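As a quick numerical check of Eqs.\,(\ref{occup1}) and (\ref{occup2}), the sketch below (our own, not part of the original calculation) verifies the normalization of the two order parameters, the angular momentum constraint, and the smallness of the angular momentum carried by the majority component:

```python
# Sketch: check the two-state occupancies of Eqs. (occup1)-(occup2)
# in the "symmetric" case g = g_AB, for 0 <= l <= x_B.
x_A, x_B = 0.8, 0.2

def occupancies(l):
    c0sq = (x_A - l) * (1 - l) / (x_A * (1 - 2 * l))
    c1sq = (x_B - l) * l / (x_A * (1 - 2 * l))
    d0sq = (x_B - l) * (1 - l) / (x_B * (1 - 2 * l))
    d1sq = (x_A - l) * l / (x_B * (1 - 2 * l))
    return c0sq, c1sq, d0sq, d1sq

for i in range(200):
    l = x_B * i / 200.0
    c0sq, c1sq, d0sq, d1sq = occupancies(l)
    assert abs(c0sq + c1sq - 1) < 1e-12              # normalization of Psi_A
    assert abs(d0sq + d1sq - 1) < 1e-12              # normalization of Psi_B
    assert abs(x_A * c1sq + x_B * d1sq - l) < 1e-12  # fixed angular momentum

# maximum angular momentum carried by the majority component,
# attained near l ~ x_B/2; of order x_B**2/4 = 0.01 here
lmax = max(x_A * occupancies(x_B * i / 2000.0)[1] for i in range(2000))
print(round(lmax, 4))   # prints 0.0127
```

The maximum is $\approx 0.013$ for $x_B = 0.2$, i.e., of order $x_B^2/4$, in agreement with the estimate given above.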
Thus, from the above expressions we can get an upper bound on the angular momentum carried by the majority component, which is $\approx x_B^2/4$, at least for reasonably small values of $x_B \le 0.3$. For stronger interactions (and still in the asymmetric case), as we have seen in our numerical results, the angular momentum carried by the majority component for $0 \le \ell \le x_B$ is still very small, on the order of 1\%, at least up to $x_B \le 0.3$ and $g/g_{AB} = 5/3$. We argue that this is in fact a very general result, for energetic reasons. There are four energy scales in the problem [see, e.g., Eq.\,(\ref{enen})], namely the kinetic energy (which is set equal to unity), the interaction energy among the $A$ particles, $\sim x_A^2 g N$, among the $B$ particles, $\sim x_B^2 g N$, and the interaction energy between the $A$ and the $B$ particles, $x_A x_B g_{AB} N$. There are thus three dimensionless parameters, namely the coupling $g$, the interaction asymmetry $g/g_{AB}$, and the population imbalance $x_A/x_B$. Clearly, for large values of $g/g_{AB}$ and/or large values of $x_A/x_B$, there is a clear hierarchy in the three energy scales of the interaction energy, which makes it energetically favourable for the angular momentum to be carried by one component only (i.e., the $B$ component in this case). \begin{figure} \includegraphics[width=7cm,height=5cm,angle=0]{fig1.ps} \vskip3pc \caption{The occupancy of the four states, $c_0^2, c_1^2, d_0^2$, and $d_1^2$, for $x_A = 0.8$ and $x_B = 0.2$ from Eqs.\,(\ref{occup1}) and (\ref{occup2}).
One can hardly distinguish the coefficients $c_0^2$ and $c_1^2$ from 1 and 0, respectively.} \end{figure} \subsection{Quasi-periodic structure of the dispersion relation and an explicit example with $x_A = 0.8, x_B = 0.2$} As shown in Ref.\,\cite{2compp7}, for $x_A = 0.8$, $x_B = 0.2$, $N g/\epsilon = 1250/\pi^2$, and $N g_{AB}/\epsilon = 750/\pi^2$, to high accuracy the energy spectrum is given by the formula \begin{eqnarray} E(\ell) = E_{\rm int} + P_0(\ell) + e_0(\ell). \label{fit1} \end{eqnarray} Here $E_{\rm int}$ is the interaction energy of the homogeneous system, $e_0(\ell)$ is a periodic function of $\ell$, and \begin{eqnarray} P_0 (\ell) = [\ell]^2 x_A + (\ell - x_A [\ell])^2/x_B, \label{fit2} \end{eqnarray} where $[\ell]$ denotes the nearest-integer function. \begin{figure} \includegraphics[width=7cm,height=5cm,angle=0]{fig2a.ps} \includegraphics[width=7cm,height=5cm,angle=0]{fig2b.ps} \includegraphics[width=7cm,height=5cm,angle=0]{fig2c.ps} \includegraphics[width=7cm,height=5cm,angle=0]{fig2d.ps} \vskip3pc \caption{Density and phase of the order parameters $\Psi_k = \sqrt{n_k} \, e^{i \phi_k}$ of the two components $A$ and $B$, for $x_A = 0.8$, $x_B = 0.2$ and for $\ell = 0, 0.03, 0.1, 0.19$, and 0.2. Here $N g/\epsilon = 1250/\pi^2$ and $N g_{AB}/\epsilon = 750/\pi^2$.} \end{figure} \begin{figure} \includegraphics[width=7cm,height=5cm,angle=0]{fig3a.ps} \includegraphics[width=7cm,height=5cm,angle=0]{fig3b.ps} \includegraphics[width=7cm,height=5cm,angle=0]{fig3c.ps} \includegraphics[width=7cm,height=5cm,angle=0]{fig3d.ps} \vskip3pc \caption{Density and phase of the order parameters $\Psi_k = \sqrt{n_k} \, e^{i \phi_k}$ of the two components $A$ and $B$, for $x_A = 0.8$, $x_B = 0.2$ and for $\ell = 0.2, 0.23, 0.3, 0.39$, and 0.4. 
Here $N g/\epsilon = 1250/\pi^2$ and $N g_{AB}/\epsilon = 750/\pi^2$.} \end{figure} In Figs.\,2 and 3 we show the density and the phase of the two order parameters $\Psi_A$ and $\Psi_B$, in the two intervals $0 \le \ell \le 0.2$ and $0.2 \le \ell \le 0.4$. Comparing the density of the same species for values of $\ell$ which differ by $x_B = 0.2$ we observe that the difference is hardly visible. On the other hand, the phases of the two order parameters do change. These observations are explained in the analysis that follows below. Finally, the angular momentum carried by the majority component in the interval $0 \le \ell \le 0.2$ is very small, smaller than 1\%, as we argued also above. The above results follow from the facts that (i) in the interval $0 \le \ell \le x_B$ the minority component carries essentially all the angular momentum, and (ii) if one starts from the order parameters in the interval $0 \le \ell \le x_B$, the rest of the spectrum results from exciting the center of mass motion of each component separately. This operation changes the kinetic energy only, leaving the interaction energy unaffected. We thus essentially show below that Eqs.\,(\ref{fit1}) and (\ref{fit2}) follow from these two facts. In order for the above procedure to give the yrast states, for a fixed population imbalance and a fixed interaction asymmetry, $g$ has to be sufficiently large. Considering, for example, $\ell = 0.4$, the yrast state -- which has to be $(\Psi_A, \Psi_B) = (\phi_0, \phi_2)$, as the quasi-periodic behaviour implies -- is indeed the expected one for a sufficiently strong interaction, as analysed in Ref.\,\cite{2compp6}. For a fixed interaction asymmetry and a fixed $g$, the population imbalance has to be sufficiently large. Finally, for a fixed $g$ and a fixed population imbalance the interaction asymmetry has to be sufficiently large. To illustrate the above arguments it is instructive to consider the specific example $x_A = 0.8$, $x_B = 0.2$.
First of all, the possible values of the angular momentum carried by (purely) plane-wave states are multiples of 0.2 in this case, since $\ell = m x_A + n x_B = 0.2 (4 m + n)$. It is also important to notice that the condition for a state $(\Psi_A, \Psi_B) = (\phi_m, \phi_n)$ to become the yrast state depends only on $|m-n|$ \cite{2compp6}. Thus, when, e.g., the state $(\Psi_A, \Psi_B) = (\phi_0, \phi_2)$ with $\ell = 2 x_B = 0.4$ becomes the yrast state, the state $(\Psi_A, \Psi_B) = (\phi_1, \phi_{-1})$ with $\ell = 3 x_B = 0.6$ becomes the yrast state as well. (This also follows from Bloch's theorem; it is, however, a more general result.) Having solved the yrast problem in the interval $0 \le \ell \le x_B = 0.2$, one may construct solutions in the interval $0.2 = x_B \le \ell \le 2 x_B = 0.4$, etc., all the way up to $4 x_B \le \ell \le 5 x_B = 1$, keeping the correlations unaffected and putting all the additional energy into kinetic energy, by exciting the center of mass motion. In other words, the spectrum will ``repeat" itself in a quasi-periodic way (explained below) all the way up to $\ell = 1$. Beyond this point Bloch's theorem determines the rest of the excitation spectrum \cite{2comp}. Let us thus assume that in the interval $0 \le \ell \le x_B = 0.2$ the two order parameters are \begin{eqnarray} (\Psi_A, \Psi_B) = (\Psi_A^0, \Psi_B^0). \end{eqnarray} We should keep in mind that $\Psi_A^0$ carries a very small amount of angular momentum, and we will assume that it is zero. The angular momentum per particle of the above pair of states is $\ell = x_A \sum m c_m^2 + x_B \sum m d_m^2 = x_B \sum m d_m^2$, the kinetic energy per particle is $K^0(\ell) = x_A \sum m^2 c_m^2 + x_B \sum m^2 d_m^2$, and the total energy per particle is $E(\ell)/N = K^0(\ell) + V(\ell)/N$, where $V(\ell)$ is the total interaction energy. Finally, for the kinetic energy $K^0(\ell = 0) = 0$ and $K^0(\ell = x_B) = x_B = 0.2$.
For $0.2 = x_B \le \ell \le 2 x_B = 0.4$ the order parameters are \begin{eqnarray} (\Psi_A, \Psi_B) = (\Psi_A^0, e^{i \theta} \Psi_B^0). \end{eqnarray} The factor that multiplies $\Psi_B^0$ does not affect the interaction energy and thus the interaction energy is identical to the one in the interval $0 \le \ell \le x_B$, $V(\ell) = V(\ell - x_B)$. The interesting part is the kinetic energy, which is \begin{eqnarray} K(\ell) = K^0(\ell-x_B) + 2 \ell - x_B, \label{K1} \end{eqnarray} with $K(\ell = x_B) = x_B = 0.2$ and $K(\ell = 2 x_B) = 4 x_B = 0.8$. For $0.4 = 2 x_B \le \ell \le 3 x_B = 0.6$ we have two competing solutions around $\ell = 1/2$. For values of $\ell$ smaller than $1/2$, \begin{eqnarray} (\Psi_A, \Psi_B) = (\Psi_A^0, e^{2 i \theta} \Psi_B^0). \end{eqnarray} The kinetic energy is \begin{eqnarray} K(\ell) = K^0(\ell - 2 x_B) + 4 \ell - 4 x_B, \label{K2} \end{eqnarray} with $K(\ell = 2 x_B) = 4 x_B = 0.8$ and $K(\ell = 3 x_B) = 9 x_B = 1.8$. For values of $\ell$ larger than $1/2$, \begin{eqnarray} (\Psi_A, \Psi_B) = (e^{i \theta} \Psi_A^0, e^{- 2 i \theta} \Psi_B^0). \end{eqnarray} The kinetic energy is \begin{eqnarray} K(\ell) = K^0(\ell - x_A + 2 x_B) + 5 - 4 \ell - 9 x_B, \label{K3} \end{eqnarray} with $K(\ell = 2 x_B) = 5 - 17 x_B = 1.6$ and $K(\ell = 3 x_B) = 5 - 20 x_B = 1$. Comparing the energies one sees that they cross at $\ell = 5 x_A/8 = 1/2$. This gives rise to a discontinuity in the derivative of the dispersion relation at $\ell = 1/2$. We have evaluated the slope to be $1/x_B$ as $\ell \to (1/2)^-$ and $(1 - 2 x_A)/x_B$ for $\ell \to (1/2)^+$, and therefore the difference between the right and the left slopes is $-2 x_A/x_B$. We stress that this discontinuous transition at $\ell = 1/2$ is also experimentally relevant, since the slope of the dispersion relation gives the velocity of propagation of the corresponding solitary waves. Interestingly, at this point the sign of the slope changes and thus the velocity of propagation also changes sign.
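The crossing point and the jump in the slope can be verified directly from Eqs.\,(\ref{K2}) and (\ref{K3}): since $x_A = 4 x_B$ here, both branches evaluate $K^0$ at the same argument, which therefore cancels from their difference. The few lines below are a sketch of this bookkeeping (our own check, not code from the original calculation):

```python
# Sketch: crossing of the two competing branches for 2 x_B <= l <= 3 x_B.
# K0 is evaluated at the same argument in both branches (x_A = 4 x_B here),
# so it drops out of the difference and of the slope jump.
x_A, x_B = 0.8, 0.2

def K2_minus_K0(l):   # K(l) - K0(l - 2 x_B), Eq. (K2)
    return 4 * l - 4 * x_B

def K3_minus_K0(l):   # K(l) - K0(l - x_A + 2 x_B), Eq. (K3)
    return 5 - 4 * l - 9 * x_B

# crossing: 4 l - 4 x_B = 5 - 4 l - 9 x_B  =>  l = 5 x_A / 8
l_cross = 5 * x_A / 8
assert abs(l_cross - 0.5) < 1e-12
assert abs(K2_minus_K0(l_cross) - K3_minus_K0(l_cross)) < 1e-12

# boundary values quoted in the text, using K0(0) = 0 and K0(x_B) = x_B
assert abs(K2_minus_K0(2 * x_B) - 4 * x_B) < 1e-12
assert abs(K2_minus_K0(3 * x_B) + x_B - 9 * x_B) < 1e-12
assert abs(K3_minus_K0(2 * x_B) - (5 - 17 * x_B)) < 1e-12
assert abs(K3_minus_K0(3 * x_B) + x_B - (5 - 20 * x_B)) < 1e-12

# jump of the slope at l = 1/2: the K0' contributions are equal and cancel
slope_jump = -4 - 4
assert abs(slope_jump - (-2 * x_A / x_B)) < 1e-12
print("crossing at l =", l_cross)
```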
For $0.6 = 3 x_B \le \ell \le 4 x_B = 0.8$, \begin{eqnarray} (\Psi_A, \Psi_B) = (e^{i \theta} \Psi_A^0, e^{-i \theta} \Psi_B^0). \end{eqnarray} The kinetic energy is \begin{eqnarray} K(\ell) = K^0(\ell - x_A + x_B) + 1 - 2 \ell + 2 x_A - 2 x_B, \label{K4} \end{eqnarray} with $K(\ell = 3 x_B) = 1$ and $K(\ell = 4 x_B) = 0.8$. Finally, for $0.8 = 4 x_B \le \ell \le 1$, \begin{eqnarray} (\Psi_A, \Psi_B) = (e^{i \theta} \Psi_A^0, \Psi_B^0). \end{eqnarray} The kinetic energy is \begin{eqnarray} K(\ell) = K^0(\ell - x_A) + x_A, \label{K5} \end{eqnarray} with $K(\ell = 4 x_B) = x_A = 0.8$ and $K(\ell = 1) = 1$. Figure 4 shows the result of this calculation for $x_A = 0.8$ and $x_B = 0.2$. The results presented above imply Eqs.\,(\ref{fit1}) and (\ref{fit2}), which were motivated numerically \cite{2compp7}, as mentioned also earlier. They are also consistent with the numerical results of Figs.\,2 and 3. We also stress that, although the arguments were presented within the mean-field approximation, they do not rely in any way on the validity of the mean-field approximation, but rather they are much more general, as we also demonstrate in Sec.\,IV. As a final remark we mention that when $N_A$ and $N_B$ are relatively prime, e.g., $x_A = 0.7$ and $x_B = 0.3$, a similar picture develops. \begin{figure} \includegraphics[width=7cm,height=5cm,angle=0]{fig4.ps} \vskip3pc \caption{The kinetic energy $K(\ell)$, evaluated at $\ell = 0, 0.2, 0.4, 0.6, 0.8$, and 1.0, for $x_A = 0.8$, and $x_B = 0.2$, from Eqs.\,(\ref{K1}), (\ref{K2}), (\ref{K3}), (\ref{K4}), and (\ref{K5}). Knowing the energy in the interval $0 \le \ell \le x_B$, one may derive the rest of the spectrum using the transformations described in the text.} \end{figure} \section{Excitation spectrum -- many-body problem} \subsection{``Collective" excitation of the system} Up to now we have seen how the yrast states progress with increasing angular momentum via essentially single-particle excitation of the system.
In other words, as $L$ increases, the additional angular momentum is carried by moving single particles to different single-particle states. Still, there is another way to excite the system ``collectively". By this term we mean that even an increase of the angular momentum by one unit requires a major rearrangement of the atoms in the single-particle states. Before we go to the many-body problem, we should recall the results of Ref.\,\cite{2compp6}, where it was argued that for sufficiently strong interactions, the mean-field state $(\Psi_A, \Psi_B) = (\phi_m, \phi_n)$ becomes the yrast state, where obviously the angular momentum is $\ell = x_A m + x_B n$. One way to argue that the state $(\phi_m, \phi_n)$ becomes the yrast state for the specific value of $\ell$ and for sufficiently strong interactions is that any density variation costs interaction energy. If this is the dominant term in the Hamiltonian, it is minimized by these plane-wave states, which have a constant density distribution. The expense that one pays is the corresponding kinetic energy, which is $x_A m^2 + x_B n^2$, and has to be sufficiently small in order for the argument to be self-consistent; this argument is analysed further in Sec.\,V. The details of this calculation (performed within the mean-field approximation), as well as the corresponding phase diagram are given in Ref.\,\cite{2compp6}. Let us thus consider a toy model which demonstrates the above arguments about the collective excitation. Assuming for convenience that $N_A - N_B = 1$, a state that competes with the one of Eq.\,(\ref{siexc}) is \begin{eqnarray} |L = 1 \rangle = |1^{N_A} \rangle_A \bigotimes |(-1)^{N_B} \rangle_B. \label{coll} \end{eqnarray} The energy of this state $E^{''}$ is \begin{eqnarray} E^{''} = N + \frac 1 2 g N_A (N_A - 1) + g_{AB} N_A N_B \nonumber \\ + \frac 1 2 g N_B (N_B - 1) = N + E_0, \end{eqnarray} or \begin{eqnarray} E^{''} - E_0 = N .
\label{dief} \end{eqnarray} Therefore \begin{eqnarray} E^{''} - E' = (N - 1) - g (N_B - 1). \end{eqnarray} For values of $g$ larger than the critical value which satisfies the equation \begin{eqnarray} g = \frac {N-1} {N_B - 1}, \end{eqnarray} it is energetically favourable to excite the system collectively. In the limit of large $N$ and $N_B$, $g$ is of order unity, which is necessary in order for the system not to enter the highly-correlated Tonks-Girardeau regime. (One should not forget that for the low atom numbers that we have used, the system easily makes the transition to the Tonks-Girardeau limit, when $g$ becomes of order $N$ \cite{TG}). We stress that the above calculation is just a toy model and should not in any way be trusted quantitatively. Besides, for $g$ of order unity, the typical interaction energy per atom is of order $N$ and thus (much) larger than the kinetic energy. Thus, the interaction energy will deplete the condensate significantly, while the depletion will also make the result dependent on $g_{AB}$; all these effects have been ignored here. \subsection{A ``generalization" of Bloch's theorem} The arguments presented above ignore the depletion of the condensate. However, the depletion lowers the energy to subleading order in the number of atoms $N$ and, in particular for small systems, it may have a rather important effect. Below, we suggest a different way of constructing a many-body state, taking into account also the depletion. Essentially this ansatz state generalizes (in an approximate way) Bloch's theorem, which also holds in a two-component system \cite{2comp}. The ansatz many-body state that we introduce is based on the ``exact" many-body state for $L = 0$.
The many-body state of each component will be a linear superposition of the ``Fock" states of the form \begin{eqnarray} |m_{\rm min}^{N_{m_{\rm min}}^A}, \dots, m_{\rm max}^{N_{m_{\rm max}}^A} \rangle_A \bigotimes |m_{\rm min}^{N_{m_{\rm min}}^B}, \dots, m_{\rm max}^{N_{m_{\rm max}}^B} \rangle_B \nonumber \\ \end{eqnarray} for some given truncation to the single-particle states with $m_{\rm min} \le m \le m_{\rm max}$, with the obvious constraints in each state $\sum_m N_{m}^i = N_i$, with $i=A,B$ and also with $\sum_{m,i} m N_{m}^i = 0$. Then, one may excite the center of mass coordinate using the same amplitudes, thus creating the state \begin{eqnarray} |(m_{\rm min}+m_A)^{N_{m_{\rm min}}^A}, \dots, (m_{\rm max}+m_A)^{N_{m_{\rm max}}^A} \rangle_A \bigotimes \nonumber \\ |(m_{\rm min}+m_B)^{N_{m_{\rm min}}^B}, \dots, (m_{\rm max}+m_B)^{N_{m_{\rm max}}^B} \rangle_B. \label{ansatz} \end{eqnarray} The resulting state has an angular momentum \begin{eqnarray} L = N_A m_A + N_B m_B. \label{bl2} \end{eqnarray} Also, this state has the same interaction energy as the one with $L=0$, since the matrix elements do not depend on the angular momentum of the colliding particles. Its total energy is higher than the total energy of the many-body state with $L=0$, $E(L=0)$, due to its higher kinetic energy, \begin{eqnarray} E^{'''} &=& V(L=0) + \sum_m (m+m_A)^2 N_m^A + \sum_m (m+m_B)^2 N_m^B \nonumber \\ &=& E(L=0) + N_A m_A^2 + N_B m_B^2 + 2 (m_A L_A + m_B L_B). \label{bl1} \end{eqnarray} Here $V(L=0)$ is the exact, total, interaction energy of the full many-body state with $L=0$, and $L_A$ and $L_B$ are the angular momenta of the $A$ and $B$ components of the state with $L = 0$. In general, only their sum has to vanish, $L_A + L_B = 0$; each of them need not vanish separately.
Still, the states with the dominant amplitudes are the ones for which $L_A = 0$ and $L_B = 0$, separately, because of the condition $g > g_{AB}$, which is roughly the condition for phase co-existence. As a result, \begin{eqnarray} E^{'''} - E(L=0) \approx N_A m_A^2 + N_B m_B^2, \label{nst} \end{eqnarray} which becomes exact for $g_{AB} = 0$. Equation (\ref{nst}) is also exact within the mean-field approximation, since the terms with $L_A \neq 0$ and $L_B \neq 0$ appear due to the depletion. On the other hand, whether the resulting (mean-field, or many-body) state is the yrast state depends on the parameters. Finally, we also mention that Eq.\,(\ref{nst}) reduces to Eq.\,(\ref{dief}) when $m_A = 1$ and $m_B = -1$, as expected. From Eqs.\,(\ref{bl2}) and (\ref{bl1}) it follows trivially that when $L$ is an integer multiple of $N$, $L = q N$, then $m_A = m_B = q$, in which case Bloch's theorem \cite{FB} holds exactly, even in a two-component system \cite{2comp}, $E^{'''}-E(L=0) = N q^2$. In the case of the ``traditional" Bloch theorem (i.e., in the case of one component) starting from the $L=0$ state, by exciting the center of mass motion one gets (exactly) only the states with an additional angular momentum which is an integer multiple of the total number of particles $N$. On the other hand, in the present case of a two-component system, this procedure allows us to give $L$ any desired value, at least when the populations $N_A$ and $N_B$ are relatively prime, otherwise the argument will hold for values of $L$ which are integer multiples of their greatest common divisor. Still, the generated states are not necessarily the yrast states, but rather they are candidate yrast states. \subsection{Results of numerical diagonalization} We turn now to the results that we get from the diagonalization of the many-body Hamiltonian.
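Before quoting them, it is useful to tabulate the minimal kinetic cost $N_A m_A^2 + N_B m_B^2$ of the candidate states of Eq.\,(\ref{ansatz}), against which the eigenenergies below may be compared. The short function that follows is our own bookkeeping sketch (the function name and the search cutoff are ours), assuming $L_A = L_B = 0$:

```python
# Sketch: minimal kinetic cost N_A*m_A^2 + N_B*m_B^2 of the ansatz state,
# minimized over integer center-of-mass excitations (m_A, m_B) subject to
# N_A*m_A + N_B*m_B = L, assuming L_A = L_B = 0.
def min_ansatz_cost(N_A, N_B, L, mmax=10):
    best = None
    for m_A in range(-mmax, mmax + 1):
        rem = L - N_A * m_A
        if rem % N_B:              # m_B must be an integer
            continue
        m_B = rem // N_B
        cost = N_A * m_A**2 + N_B * m_B**2
        if best is None or cost < best:
            best = cost
    return best

print(min_ansatz_cost(8, 2, 2))    # prints 2
print(min_ansatz_cost(8, 2, 4))    # prints 8
print(min_ansatz_cost(8, 2, 10))   # prints 10, i.e., N q^2 with q = 1
```

For $N_A = 8$ and $N_B = 2$ the costs for $L = 2$ and $L = 4$ are 2 and 8, which may be compared with the energy differences $\approx 1.9814$ and $\approx 8.0074$ found in the diagonalization, while for $L = N = 10$ one recovers Bloch's theorem.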
We consider as a first example the case $N_A=16, N_B=4, g_{AA} = g_{BB} = g = 0.1, g_{AB}=0.05$, with $m_{\rm min}=-1$ and $m_{\rm max} = 2$; the results are shown in the Appendix. For $0 < L \le 4 (= N_B)$ we see that indeed the angular momentum of the majority component, $A$, is less than 10\% of the total, which is consistent with the results of Sec.\,III B. Partly this relatively large value is due to finite-$N$ corrections; increasing $N$ will make this number even smaller. For $5 \le L \le 9$ the dominant state of the $B$ (minority) component is $\phi_1$, carrying 4 units of angular momentum, while the additional angular momentum is carried by the $A$ component. This is because exciting the $B$ component costs kinetic energy. The analogous state with $L = N/2$ is analysed in detail below for the smaller system with $N=10$, i.e., for $L = 5$. The rest of the spectrum follows from Bloch's theorem. In order to see the effects that we investigate in the present study, we turn now to higher couplings, using the above as a ``reference" example. To achieve a decent convergence we expand the space of single-particle states to $m_{\rm min}=-2$ and $m_{\rm max} = 3$, which forces us to reduce the atom number, as otherwise the dimensionality of the Hamiltonian matrix explodes. We thus consider $N_A=8, N_B=2, g_{AA} = g_{BB} = g = 1.5, g_{AB}=0.15$. Another example, where $g$ and $g_{AB}$ are closer to each other, follows below. For $L=0$ and in the space with $m_{\rm min}= -1$ and $m_{\rm max} = 1$ the dimensionality of the Hamiltonian matrix is 26, while the lowest eigenenergy is $\approx 38.5864$. For $m_{\rm min}=-2$ and $m_{\rm max} = 2$ the dimensionality becomes 457 and the lowest eigenenergy reduces to $\approx 33.8139$, i.e., there is a reduction of roughly 14\%.
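The quoted dimensionalities are easy to reproduce by direct enumeration of the Fock states with fixed $N_A$, $N_B$, and $L$ (a sketch with our own bookkeeping, not the code used for the actual diagonalization):

```python
# Sketch: count the two-component Fock states with fixed N_A, N_B and
# fixed total angular momentum L, for a given single-particle truncation.
from itertools import combinations_with_replacement as cwr

def configs(N, modes):
    """Occupation patterns of N bosons, grouped by their angular momentum."""
    out = {}
    for occ in cwr(modes, N):      # multisets of single-particle momenta
        out.setdefault(sum(occ), []).append(occ)
    return out

def dimension(N_A, N_B, L, m_min, m_max):
    modes = range(m_min, m_max + 1)
    A, B = configs(N_A, modes), configs(N_B, modes)
    return sum(len(a) * len(B.get(L - l_A, [])) for l_A, a in A.items())

print(dimension(8, 2, 0, -1, 1))   # prints 26
print(dimension(8, 2, 0, -2, 2))   # prints 457
```

The same function, applied to $m_{\rm min} = -2$ and $m_{\rm max} = 3$, may be compared with the third dimensionality quoted in the text.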
For $m_{\rm min}=-2$ and $m_{\rm max} = 3$ the dimensionality becomes 1163 and the lowest eigenenergy reduces further to $\approx 32.8452$, i.e., there is a further reduction of roughly 3\%, indicating that although convergence has not been achieved, the results are relatively accurate. A generic feature of the above problem is that there is a very rapid increase of the dimensionality of the Hilbert space as more single-particle states are included, as seen also in the numbers mentioned above. We should also mention that in order to satisfy Bloch's theorem for, e.g., $0 \le L \le N$, the single-particle states have to be ``symmetric" around 1/2. This is the reason why we choose to work, for example, with $m_{\rm min}=-2$ and $m_{\rm max} = 3$. The fact that we have to add single-particle states in pairs makes it even more difficult to investigate the convergence of our results and to increase the Hilbert space. Going, for example, from $m_{\rm min}=-2$ and $m_{\rm max} = 3$ to $m_{\rm min}=-3$ and $m_{\rm max} = 4$ may result in a very large increase of the dimensionality of the Hamiltonian matrix (for some fixed $N_A$ and $N_B$). The lowest-energy eigenstate with $L=0$ (and with an eigenenergy equal to $\approx 32.8452$) consists of the following four Fock states (those with the largest absolute amplitudes) \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl.
& $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2559 & 1 & 0 & 6 & 0 & 1 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ \hline -0.2650 & 0 &0 &8& 0& 0& 0 & 0 & 1& 0& 1& 0& 0 \\ \hline -0.4326 & 0& 1& 6& 1& 0& 0 &0& 0& 2& 0& 0& 0 \\ \hline 0.6465 & 0& 0& 8& 0& 0& 0 &0& 0& 2& 0& 0& 0 \\ \hline \end{tabular} \vskip1pc In the above notation, the Fock state with e.g., 8 ``$A$" atoms in the single-particle state $\phi_0$ and 2 ``$B$" atoms in the single-particle state $\phi_0$ has an amplitude 0.6465, etc. To understand the arguments which follow, it is instructive to get some insight into the structure of the above many-body state. The Fock state with the largest amplitude has zero kinetic energy and it puts all 8 ``$A$" atoms at the $m=0$ state, as well as all 2 ``$B$" atoms at the state with $m=0$, also. The following three have a kinetic energy which is equal to $2$, $2$, and $8$, respectively. The degeneracy between the first two is lifted by the interactions. More specifically, in the two specific states there are processes where atoms are transferred from the $m=0$ state to the states with $m=\pm 1$, $m=\pm 2$, etc., which lower the energy (they are off-diagonal matrix elements which come from, e.g., ${\hat c}_0^2 {\hat c}_{-1}^{\dagger} {\hat c}_1^{\dagger}$ \cite{KMP}). For $L = 1$, the lowest eigenenergy is $\approx 34.6431$, while the states with the four largest amplitudes are \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. 
& $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2506 &1& 0& 6& 0& 1& 0 &0 & 0& 1& 1& 0& 0 \\ \hline -0.2904 &0 &0 & 8& 0& 0& 0 &0 & 1& 0& 0& 1& 0 \\ \hline -0.4223 & 0& 1& 6& 1& 0& 0& 0& 0& 1& 1& 0& 0 \\ \hline 0.6323 & 0& 0& 8& 0& 0& 0& 0& 0& 1& 1& 0& 0 \\ \hline \end{tabular} \vskip1pc Here we see that indeed it is the minority component that carries the angular momentum (in all four Fock states). For $L = 2$, the lowest eigenenergy is $\approx 34.8276$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2557 &1& 0& 6& 0& 1& 0 &0& 0& 0& 2& 0& 0 \\ \hline -0.2644 & 1& 0& 6& 0& 1& 0 & 0& 0& 1& 0& 1& 0 \\ \hline -0.4322 & 0& 1& 6& 1& 0& 0 & 0& 0& 0& 2& 0& 0 \\ \hline 0.6461 & 0& 0& 8& 0& 0& 0& 0& 0& 0& 2& 0& 0\\ \hline \end{tabular} \vskip1pc The minority, $B$, component still carries the angular momentum (in all four Fock states). The state with the largest amplitude is the one expected also from the mean-field approximation. Furthermore, this state does indeed result (to high accuracy) from the one with $L=0$ by exciting the center of mass coordinate of the minority component, while the difference between the eigenenergy of this state and the one with $L=0$ is $\approx 1.9814$, i.e., very close to the value $2 (= N_B)$. These are in agreement with the results presented in Sec.\,III. For $L = 3$ the lowest eigenenergy is $\approx 38.7296$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. 
& $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2401 & 1& 0& 6& 0& 1& 0& 0& 0& 0& 1& 1& 0 \\ \hline -0.3224 & 0& 0& 8& 0& 0& 0 &0& 0& 1& 0& 0& 1 \\ \hline -0.4035 & 0& 1& 6& 1& 0& 0 &0& 0& 0& 1& 1& 0 \\ \hline 0.6057 & 0& 0& 8& 0& 0& 0 &0& 0& 0& 1& 1& 0\\ \hline \end{tabular} \vskip1pc In agreement with the results of Sec.\,III, and contrary to the corresponding state with $L = 5$ given in the Appendix, the above state results to rather high accuracy from the one with $L=1$ by exciting the center of mass of the minority component. The energy difference is $\approx 4.0865$, while the one predicted by the results of Sec.\,III~C is $2 L - N_B = 4$. For $L = 4$ the lowest eigenenergy is $\approx 40.8526$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2494 & 1& 0& 6& 0& 1& 0& 0& 0& 0& 0& 2& 0\\ \hline -0.2970 & 0& 0& 8& 0& 0& 0 & 0& 0& 0& 1& 0& 1 \\ \hline -0.4218 & 0& 1& 6& 1& 0& 0& 0& 0& 0& 0& 2& 0 \\ \hline 0.6305 & 0& 0& 8& 0& 0& 0& 0& 0& 0& 0& 2& 0\\ \hline \end{tabular} \vskip1pc Here we observe that the Fock state with the dominant amplitude is the one where all 8 ``$A$" atoms occupy the $m=0$ state, as well as all 2 ``$B$" atoms occupy the state with $m=2$. Again, this state results approximately from the states with $L=0$ and $L=2$, by exciting the center of mass motion of the minority component. The energy difference between this state and the one with $L=0$ is $\approx 8.0074$, while the one that one gets from Sec.\,III C is 8. We stress that for weaker interactions the many-body state does not have the structure seen above. 
For example, the state with $L = 8 (= 2 N_B)$ in the Appendix is not of this form, where the state with the largest amplitude is $0.6158 \, |0,12,4,0 \rangle_A \, |0,0,4,0 \rangle_B$. For $L = 5$ the lowest eigenenergy is $\approx 45.7010$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline 0.3194 &0& 1& 5& 2& 0& 0& 0& 0& 0& 0& 2& 0\\ \hline 0.3194 & 0& 0& 2& 5& 1& 0& 0& 2& 0& 0& 0& 0 \\ \hline -0.3516 & 0& 0& 7& 1& 0& 0& 0& 0& 0& 0& 2& 0 \\ \hline -0.3516 & 0& 0& 1& 7& 0& 0& 0& 2& 0& 0& 0& 0\\ \hline \end{tabular} \vskip1pc Interestingly, this state with $L = N/2 = 5$ cannot in any way be linked to any other state and it does not result from exciting the center of mass motion \cite{comment}. This is seen by comparing this eigenstate with the ones with $L=1$ and $L=3$. The state that one would construct following this rule has an energy equal to $\approx 46.8276$, which is higher than the actual eigenenergy. Therefore, the system manages to construct a state that lies lower in energy. We should recall here that within the mean-field approximation for $L = N/2$ one gets a ``dark" solitary wave in the minority component, and the winding number changes. Furthermore, this eigenstate has the peculiar feature that the Fock states go in pairs, having the same amplitudes (modulo signs). This can be seen by the fact that for every Fock state, there has to be another one, which is its mirror image that results from the transformation $m \rightarrow 1 - m$. The first state will have an angular momentum $\sum m N_m = N/2$, while the other one $\sum (1-m) N_m = N - L = N/2$. 
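This mirror pairing is straightforward to check directly on the listed occupation numbers. The following Python sketch (the orbital range $m = -2, \ldots, 3$ and the occupations are taken from the $L = 5$ tables above) verifies that the map $m \rightarrow 1 - m$ reverses the occupation list and sends $L$ to $N - L$ while leaving the kinetic energy unchanged:

```python
# Occupations n_m over the truncated single-particle basis phi_m, m = -2..3.
ms = [-2, -1, 0, 1, 2, 3]

def ang_mom(occ):
    return sum(m * n for m, n in zip(ms, occ))

def kinetic(occ):
    return sum(m * m * n for m, n in zip(ms, occ))

def mirror(occ):
    # Under m -> 1 - m the orbitals -2..3 map to 3..-2, i.e., the list reverses.
    return occ[::-1]

# One of the paired Fock states of the L = 5 eigenstate (components A and B).
A, B = [0, 1, 5, 2, 0, 0], [0, 0, 0, 0, 2, 0]
A2, B2 = mirror(A), mirror(B)
N = sum(A) + sum(B)
print(ang_mom(A) + ang_mom(B), ang_mom(A2) + ang_mom(B2))   # prints: 5 5
print(kinetic(A) + kinetic(B), kinetic(A2) + kinetic(B2))   # prints: 11 11
```

Indeed, the mirror image produced here is precisely the second Fock state of the table, with $L = N - L = N/2$.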
Furthermore, the kinetic energy of the first will be $K = \sum m^2 N_m$, while that of the other will be $\sum (1-m)^2 N_m = K + N - 2 L = K$. Since the interaction energy will also be the same, that is the reason that these states go in pairs. It is interesting that within the mean-field approximation and for $\ell=1/2$ there are two degenerate solutions, with very different structures in $\phi_A$, i.e., the phase of the order parameter $\Psi_A$ of the majority component. For $\ell \to (1/2)^{\pm}$ we get either the one or the other solution (in practice depending, e.g., on the initial condition that we use in the algorithm). This is an example of spontaneous symmetry breaking. This symmetry is restored within the method of diagonalization, where, for $\ell = 1/2$, we get a superposition of these two states. Returning to the results from numerical diagonalization, the rest of the spectrum, for $L = 6, \dots, 10$, as well as for $L > 10$, follows (exactly) from the above states, according to Bloch's theorem, as we have also checked numerically. \begin{figure} \includegraphics[width=7.5cm,height=7.cm,angle=0]{fig5a.ps} \includegraphics[width=7.5cm,height=7.cm,angle=0]{fig5b.ps} \vskip3pc \caption{Top figure: The solid curve connects the lowest eigenenergies, for $N_A=8$, $N_B=2$, $g_{AA}=g_{BB} =1.5$, and $g_{AB}=0.15$, for $L = 0$ up to 10, in the truncated space $m_{\rm min}=-2$, $m_{\rm max}=3$. The dashed curve connects the energies evaluated by the phase transformations described in Sec.\,III C, which result from the eigenenergies for $L=0$ and $L=1$. Bottom figure: Same as the top one, with $g_{AB}=0.9$.} \end{figure} Another example that we show below has a larger value for $g_{AB}$, $g_{AB}=9/10$, with $N_A=8$, $N_B=2$, and $g_{AA} = g_{BB} = g = 3/2$ being the same as before. The ratio $g/g_{AB}$ is the same as the one in the mean-field calculation of Ref.\,\cite{2compp7}.
In that study the chosen couplings were rather strong; considering the same parameters here would require the inclusion of a large space of single-particle states and a correspondingly huge dimensionality of the resulting Hamiltonian matrix. The lowest-energy eigenstate with $L=0$ has an eigenenergy equal to $\approx 43.7724$. The Fock states with the four largest amplitudes are \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2420 &1&0&6&0&1&0&0&0&2&0&0&0 \\ \hline 0.2422 &0&0&8&0&0&0&0&1&0&1&0&0 \\ \hline -0.4146 &0&1&6&1&0&0&0&0&2&0&0&0\\ \hline 0.6468 & 0&0&8&0&0&0&0&0&2&0&0&0\\ \hline \end{tabular} \vskip1pc For $L = 1$, the lowest eigenenergy is $\approx 45.0110$, while \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2312 &0&0&8&0&0&0&0&1&0&0&1&0 \\ \hline -0.2370 &1&0&6&0&1&0&0&0&1&1&0&0 \\ \hline -0.3676 &0&1&6&1&0&0&0&0&1&1&0&0\\ \hline 0.6135 &0&0&8&0&0&0&0&0&1&1&0&0\\ \hline \end{tabular} \vskip1pc Again, we observe that the angular momentum is carried by the minority component. For $L = 2$, the lowest eigenenergy is $\approx 45.2024$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. 
& $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2205 &0&0&8&0&0&0&0&0&1&0&1&0 \\ \hline -0.2369 &1&0&6&0&1&0&0&0&0&2&0&0 \\ \hline -0.4035 &0&1&6&1&0&0&0&0&0&2&0&0\\ \hline 0.6351 &0&0&8&0&0&0&0&0&0&2&0&0\\ \hline \end{tabular} \vskip1pc This state is linked with $|L=0 \rangle$ in the way we discussed above. The only difference is that the Fock states with the two smallest amplitudes are reversed. For $L = 3$ the lowest eigenenergy is $\approx 48.0354$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2857 &0&1&6&1&0&0&0&0&0&1&1&0 \\ \hline 0.3043 &0&1&5&2&0&0&0&0&0&2&0&0 \\ \hline -0.3555 &0&0&7&1&0&0&0&0&0&2&0&0\\ \hline 0.4900 &0&0&8&0&0&0&0&0&0&1&1&0\\ \hline \end{tabular} \vskip1pc The difference between this state and $|L=1 \rangle$ is more pronounced (in the second and the third lines). In these two Fock states we observe that there are 2 units of angular momentum, as compared to the first and the fourth lines, where there are 3 units of angular momentum, as a result of the increase of $g_{AB}$. Still, the Fock state with the largest amplitude is the one expected from the earlier discussion. For the state with $L = 4$ the lowest eigenenergy is $\approx 50.3904$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. 
& $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.2058 &1&0&6&0&1&0&0&0&0&0&2&0 \\ \hline -0.2074 &0&0&7&1&0&0&0&0&0&1&1&0 \\ \hline -0.3488 &0&1&6&1&0&0&0&0&0&0&2&0\\ \hline 0.5529 &0&0&8&0&0&0&0&0&0&0&2&0\\ \hline \end{tabular} \vskip1pc Again, this state is linked with the states $|L=0 \rangle$ and $|L=2 \rangle$, with the main difference in the third Fock state, which has 3 units of angular momentum, while the other ones have 4 units. Finally, for $L = 5$ the lowest eigenenergy is $\approx 52.7947$, with \vskip1pc \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multicolumn{1}{ |c| }{} & \multicolumn{6}{ |c| }{Comp. $A$} & \multicolumn{6}{ |c| }{Comp. $B$} \\ \hline Ampl. & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_{-2}$ & $\phi_{-1}$ & $\phi_0$ & $\phi_1$ & $\phi_2$ & $\phi_3 $ \\ \hline -0.1817 &0&1&4&3&0&0&0&0&0&1&1&0 \\ \hline -0.1817 &0&0&3&4&1&0&0&1&1&0&0&0\\ \hline 0.1920 &0&0&4&4&0&0&0&0&1&1&0&0\\ \hline 0.2038 &0&0&6&2&0&0&0&0&0&1&1&0\\ \hline 0.2038 &0&0&2&6&0&0&0&1&1&0&0&0\\ \hline \end{tabular} \vskip1pc which still is not linked with the other states. Figure 5 shows the eigenenergies for $0 \le L \le 10$ for the two values of $g_{AB}$. In the same figure we have also used the eigenenergies for $L=0$ and $L=1$ and evaluated the other ones using the arguments presented in Sec.\,III C. The agreement for the lower value of $g_{AB}$ is better. With increasing $g_{AB}$ the two systems become more coupled and as a result there are processes like, e.g., ${\hat c}_0 {\hat c}_1^{\dagger} {\hat d}_0^{\dagger} {\hat d}_1$, which lower the energy and become more important. These processes make the amplitudes of the Fock states which constitute the $L=0$ yrast state and have $L_A \neq 0$ and $L_B \neq 0$ (with $L_A + L_B =0$) larger. These states are responsible for the observed deviations [see Eq.\,(\ref{bl1})]. 
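The kinetic-energy bookkeeping behind these phase-transformation estimates can be illustrated with a short Python sketch. It assumes, as in the dominant Fock states of the tables above, that the minority component ($N_B = 2$) carries all of the angular momentum:

```python
ms = [-2, -1, 0, 1, 2, 3]   # truncated single-particle basis phi_m

def L_K(occ):
    """Angular momentum and kinetic energy of a set of occupations n_m."""
    return (sum(m * n for m, n in zip(ms, occ)),
            sum(m * m * n for m, n in zip(ms, occ)))

def boost(occ):
    """Excite the center of mass: move every particle from phi_m to phi_{m+1}.
    The highest orbital must be empty, so that the truncation stays adequate."""
    assert occ[-1] == 0
    return [0] + occ[:-1]

# Minority occupations of the dominant L = 0 and L = 1 Fock states.
for occ in ([0, 0, 2, 0, 0, 0], [0, 0, 1, 1, 0, 0]):
    (L0, K0), (L1, K1) = L_K(occ), L_K(boost(occ))
    # Each boost adds N_B units of L at a kinetic cost 2*L0 + N_B = 2*L1 - N_B.
    print(L0, '->', L1, 'costs', K1 - K0)
```

This reproduces the spacings $\approx 2$ (from $L=0$ to $L=2$) and $\approx 4$ (from $L=1$ to $L=3$) quoted above, to the extent that the interaction energy is unchanged by the boost.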
We also observe the relatively large deviation that appears for $L = 5 = N/2$. This deviation is due to the fact that this eigenstate does not result from the other ones via excitation of the center of mass motion. To conclude, interestingly enough, essentially the whole excitation spectrum (with the exception of the distinct values of $L = N/2 + N q$, with $q$ being an integer) can thus be derived from the states with $L = 0$ and $L = 1$ only -- at least approximately -- in very much the same way that we saw in Sec.\,III. \section{A conjecture: Dispersion relation based on the minimization of the kinetic energy} As we argued in Sec.\,IV B, starting from the many-body state of a system with $L=0$ it is possible to create a many-body state with some nonzero value of $L$ at the expense of kinetic energy only, which is of order $N$ (in the total energy of the system). Alternatively, the many-body state may result from a single-particle excitation, with an energy expense in the interaction energy which is of order $N g$ (still in the total energy of the system), for $g_{AA} \approx g_{BB} \approx g_{AB}$, all equal to $g$. Furthermore, for sufficiently strong interactions, i.e., when $g$ becomes of order $N$, the system enters the Tonks-Girardeau regime, where the energy does not depend on $g$, which is not desirable. Therefore, provided that \begin{eqnarray} N \ll N g \ll N^2, \label{eqq3} \end{eqnarray} it may be energetically favorable for the system to carry its angular momentum via the collective excitation described above. In this case, provided that $N_A$ and $N_B$ are relatively prime, one may achieve any value of $L = m N_A + n N_B$. The integers $(m, n)$ are the ones which minimize the kinetic energy \begin{eqnarray} K = m^2 N_A + n^2 N_B, \label{ke} \end{eqnarray} under the obvious constraint \begin{eqnarray} L = m N_A + n N_B. \label{angm} \end{eqnarray} Self-consistency requires that the resulting integers $m$ and $n$ have to be of order unity. 
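This constrained minimization over the integers can be carried out by brute force. The following Python sketch is a minimal illustration (the search window \texttt{span} is an assumption, chosen wide enough for the examples shown):

```python
def yrast_kinetic(L, NA, NB, span=50):
    """Minimize K = m**2 * NA + n**2 * NB over integers (m, n)
    subject to m * NA + n * NB = L.  Returns (K, m, n), or None when
    no solution exists, i.e., when gcd(NA, NB) does not divide L."""
    best = None
    for m in range(-span, span + 1):
        rem = L - m * NA
        if rem % NB == 0:
            n = rem // NB
            K = m * m * NA + n * n * NB
            if best is None or K < best[0]:
                best = (K, m, n)
    return best

print(yrast_kinetic(7, 49, 28))   # prints: (161, -1, 2)
print(yrast_kinetic(5, 49, 28))   # prints: None, since gcd(49, 28) = 7 does not divide 5
```

In the solvable example the returned integers $(m, n)$ are indeed of order unity, consistent with the self-consistency requirement.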
\begin{figure} \includegraphics[width=7cm,height=5.cm]{fig6a.ps} \includegraphics[width=7cm,height=5.cm]{fig6b.ps} \includegraphics[width=7cm,height=5.cm]{fig6c.ps} \includegraphics[width=7cm,height=5.cm]{fig6d.ps} \vskip2pc \caption{The dispersion relation (i.e., the kinetic energy) evaluated from the minimization of Eq.\,(\ref{ke}) under the constraint of Eq.\,(\ref{angm}), for the values of $N_A$ and $N_B$ shown in each plot.} \end{figure} \begin{figure} \includegraphics[width=7cm,height=5.cm]{fig7a.ps} \includegraphics[width=7cm,height=5.cm]{fig7b.ps} \includegraphics[width=7cm,height=5.cm]{fig7c.ps} \includegraphics[width=7cm,height=5.cm]{fig7d.ps} \vskip2pc \caption{Same as in Fig.\,6.} \end{figure} It is important to point out that Eqs.\,(\ref{ke}) and (\ref{angm}) are linear in $N_A$ and $N_B$. Thus, scaling $N_A$ and $N_B$ the same way will leave the resulting integers $m$ and $n$ unaffected. On the other hand, Eq.\,(\ref{eqq3}) will always be satisfied for a sufficiently large value of $N = N_A + N_B$, for some fixed~$g$. The inequality of Eq.\,(\ref{eqq3}) implies that in order for each term to differ by, e.g., one order of magnitude, $g$ has to be at least 10, while $N$ has to be at least 100. This introduces a very serious problem in the method of numerical diagonalization that we use. Convergence of the results requires that the space that one should work with is $|m_{\rm min}| \approx m_{\rm max} \approx \sqrt {N g} \approx 30$. This implies that the dimensionality of the resulting matrices is too large and certainly beyond the capability of current technology. Still, if one could reach these parameters -- which is certainly possible experimentally -- there is an interesting behaviour, which we investigate below. The most interesting aspect is that under the conditions presented above, the yrast spectrum is determined from the minimization of the kinetic energy and thus becomes trivial. 
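The dimensionality problem mentioned above can be made concrete by simple counting. The sketch below counts the Fock states of the two components while ignoring the angular-momentum constraint (so it is only an upper bound); the particle numbers in the second example are illustrative assumptions:

```python
from math import comb

def fock_dim(N, M):
    """Number of ways to distribute N bosons over M single-particle orbitals."""
    return comb(N + M - 1, N)

# Truncated space used above: m = -2..3, i.e., M = 6 orbitals, N_A = 8, N_B = 2.
print(fock_dim(8, 6) * fock_dim(2, 6))   # prints: 27027

# Space required by the strong-coupling regime: |m_min| ~ m_max ~ 30, i.e.,
# M = 61 orbitals, with, say, N_A = 90 and N_B = 10 atoms.
print(float(fock_dim(90, 61) * fock_dim(10, 61)))
```

Even before imposing the constraint on $L$, the second number is astronomically large, which is why the conjecture cannot be tested by direct diagonalization.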
In addition to the simplicity of the spectrum, even more interesting is that the dispersion relation may become very sensitive to $N_A$ and $N_B$, due to number-theoretic reasons. In Figs.\,6 and 7, instead of diagonalizing the many-body Hamiltonian, we minimize Eq.\,(\ref{ke}) under the constraint of Eq.\,(\ref{angm}) and plot the dispersion relation [measuring the energy from $E(L=0)$]. As an example, we have chosen $N_A = 49$ and $N_B$ from 28 up to 35. For $N_B = 28$, the greatest common divisor of $N_A$ and $N_B$ is 7 and for this reason we find a solution for the values of $L$ which are integer multiples of 7, i.e., 0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, and 77. For all other values of $L$ the energy will be much higher, so the predicted dispersion relation will have minima at these values of $L$. Increasing the population of $N_B$ by one unit, i.e., for $N_B = 29$, the greatest common divisor of $N_A$ and $N_B$ is 1. This has dramatic consequences for the dispersion relation, since it is now possible to find a solution for all values of $L$ between 0 and $N_A + N_B = 78$. Various interesting patterns show up as $N_B$ continues to increase by one unit, until $N_B = 35$, where the greatest common divisor of $N_A$ and $N_B$ is again equal to 7 and the dispersion shows a structure similar to that of the case $N_B = 28$. A remarkable observation that follows from these results is that even if the population changes by one particle, this may change the dispersion dramatically. This is a direct consequence of the number-theoretic nature of the problem, much like shell effects for fermions, due to the Pauli exclusion principle. \section{Summary and overview} In the present work we have studied the dispersion relation of a two-component Bose-Einstein condensed gas that is confined in a ring potential. 
The structure of the derived excitation spectrum and the corresponding states have immediate consequences for the rotational properties of the system that we have examined, and thus a very interesting physical interpretation. To name just the most important ones, we need to recall that the local minima of the dispersion relation correspond to non-decaying states, i.e., persistent currents. Furthermore, the states that we have evaluated correspond to ``vector'' solitary-wave solutions, i.e., density disturbances in both components (see Figs.\,2 and 3) which propagate together around the ring without change of shape. In addition, the slope of the dispersion relation gives the velocity of propagation of these waves. Finally, the dispersion relation may be used to predict the response of the system as it is driven by some external rotation of the trap, and it also allows us to extract the hysteretic behaviour. Turning to the more specific properties we have derived, we have shown that, quite generally (and not only within the mean-field approximation), under certain and rather typical conditions, the whole energy spectrum repeats itself in a quasi-periodic way. More specifically, if one knows the spectrum in the range of the angular momentum between $L=0$ and $L = N_B$, i.e., the population of the minority component, the rest may be derived by exciting the center of mass motion of the two components. An interesting result that is directly related to the above is the fact that in this range of angular momentum most of the angular momentum is carried by the minority component, which is a definite experimental prediction. Another interesting physical consequence of these results is that, within the mean-field approximation, when the ``dark'' soliton appears in the minority component, the velocity of propagation of the solitary waves changes discontinuously. 
Furthermore, within the many-body scheme the state with this value of the angular momentum has some peculiar properties. One important observation in the problem we have studied is the fact that the matrix elements that determine the interaction do not depend on the angular momentum of the colliding particles. As a result, one may start from the non-rotating many-body state and use these correlations to build many-body states with some nonzero angular momentum. In the limit of relatively strong interactions these are possible yrast states. The reason is that the energy expense that one pays to give the system its angular momentum is purely kinetic, and for sufficiently strong interatomic interactions this kind of excitation provides an energetically inexpensive way for the system to carry its angular momentum (since the correlations are unaffected). As a result, in this limit it is the kinetic energy that has to be minimized, with the interesting consequence that the energy spectrum is trivial to calculate. Furthermore, much like for non-interacting fermions, due to number-theoretic reasons the energy spectrum also becomes very sensitive to the population of the two components, as well as to the angular momentum carried by the system. In a sense, this is an indication of ``quantum chaos'', where even very small changes in the number of atoms (i.e., of order unity) produce very significant changes in the dispersion relation and, as a result, in the rotational properties of the system. While we cannot demonstrate this conjecture numerically because of the huge dimensionality of the resulting matrices, there are definite predictions, which may be tested experimentally. \acknowledgements We thank Andy Jackson and Stephanie Reimann for useful discussions.
\section{Introduction} The main goal of this study is a deep analysis of the canonical transformations of the Hamiltonian dynamical variables which are applied in metric gravity. We investigate general properties of such transformations and formulate some criteria of canonicity. Our second aim is to discuss the modifications made by Dirac \cite{Dir58}, \cite{Dir50} and \cite{Dir64} in the classical Hamilton method. We want to show that Dirac's approach has many crucial advantages for the development and subsequent improvement of various Hamiltonian formulations of the free gravitational field in metric gravity, where $2 d$ additional gauge conditions exist. We also introduce the integral invariants of metric gravity and discuss applications of these invariants to the current and new Hamiltonian formulations of metric gravity. In metric gravity the classical gravitational field is described as a symmetric tensor field defined in the $d$-dimensional Riemann space-time. As is well known, the general theory of the metric gravitational field was created more than 100 years ago by A. Einstein, and it is based on his fundamental idea that the actual gravitational field has a tensor nature and is defined in the four-dimensional (Riemann) space-time. Since then the gravitational field has been designated by the covariant components of the fundamental metric tensor $g_{\alpha\beta}$. In general, metric gravity can be developed in the $d$-dimensional space-time (or $d$-space), where $d \ge 3$, while the time is always one-dimensional. In other words, there is no need to restrict ourselves to the four-dimensional case only, where $d = 4$. In this study we also deal with the $d$-dimensional Riemann space-time. Everywhere below the notation $\overline{x}$ designates the $d$-dimensional vector which has the contravariant components $x^{\alpha}$, while its covariant components are designated as $x_{\alpha}$, where $\alpha = 0, 1, \ldots, d - 1$. 
The temporal component (or time component) of the coordinate vector $\overline{x}$ is $x^{0}$, while its spatial components are $x^{k}$ ($k = 1, 2, \ldots, d - 1$). The same rule is applied everywhere in this study: components of $d$-vectors are labelled by Greek indices, while spatial components of these vectors are denoted by Latin indices. In accord with this convention, all components of the covariant fundamental tensor (or metric tensor) are designated as $g_{\alpha\beta}$ (see above), while the notation $g^{\alpha\beta}$ stands for the components of the contravariant fundamental (or metric) tensor \cite{Kochin}, \cite{Dash}. The determinant of $g_{\alpha\beta}$ is called the fundamental determinant and is denoted by the letter $g$. In metric gravity the numerical value of $g$ is always real and negative, but the value of $- g$ is positive, which allows one to operate with expressions such as $\sqrt{- g}$ and to consider functions of $\sqrt{- g}$. Any suffix with a comma before it denotes differentiation according to the general scheme $F_{,\mu} = \frac{d F}{d x^{\mu}}$. In particular, temporal derivatives are always designated as $F_{,0} \Bigl( = \frac{d F}{d x^{0}} \Bigr)$. For an arbitrary metric-dependent functional $F(g_{\alpha\beta})$ the notation $F_{,\gamma}$ means $F_{,\gamma} = \Bigl(\frac{\partial F}{\partial g_{\mu\nu}}\Bigr) g_{\mu\nu,\gamma}$, etc. This paper has the following structure. The next two Sections play the role of an introductory part for our current analysis. In particular, in the next Section we introduce the regular $\Gamma - \Gamma$ Lagrangian of the metric gravity. Then, by using this $\Gamma - \Gamma$ Lagrangian, we define the momenta $\pi^{\mu\nu}$, which are also dynamical (Hamiltonian) variables. These momenta are considered as variables which are dynamically conjugate to the corresponding generalized coordinates $g_{\alpha\beta}$. 
In general, the momenta $\pi^{\mu\nu}$ are defined as the partial derivatives of the $\Gamma - \Gamma$ Lagrangian with respect to the `velocities' $\frac{\partial g_{\mu\nu}}{\partial t} = g_{\mu\nu,0}$. The arising set of $2 d$ dynamical variables $\{ g_{\alpha\beta}, \pi^{\mu\nu} \}$ includes all variables which are needed to develop the Hamiltonian formulation of the metric gravity. In particular, both the canonical $H_C$ and total $H_{t}$ Hamiltonians of the metric gravity are written as quadratic functions of all space-like momenta $\pi^{pq}$ and as linear combinations of the temporal momenta $\pi^{0 \mu} (= \pi^{\mu 0})$. This essentially means that the free gravitational field is a constrained dynamical (Hamiltonian) system \cite{K&K}, \cite{Tyut}, and all $d$ temporal momenta $\pi^{0\mu} (= \pi^{\mu 0})$ of this field can rigorously be defined only as primary constraints. The commutators (or Poisson brackets) of these primary constraints with the canonical and total Hamiltonians produce $d$ secondary constraints. In general, properly defined Poisson brackets always play a central role in Hamiltonian formulations of many physical theories. Briefly, we can say that the Poisson bracket is the most important working tool of any consistent Hamiltonian theory. In Section V, by using our Poisson brackets, we derive the Hamiltonian equations of motion for all dynamical variables of the metric gravity. Analogous Hamilton equations for the primary and secondary first-class constraints are also derived and discussed. The Dirac closure of this Hamiltonian formulation is explicitly demonstrated. Then we consider another Hamiltonian formulation of the metric gravity developed by Dirac \cite{Dir58}. For this Hamiltonian formulation we also derive the corresponding Hamilton equations of motion and determine the Poisson brackets between all essential first-class constraints (see also our Appendix A). 
Then we define the canonical transformations of the dynamical Hamiltonian variables which relate these two Hamiltonian formulations, i.e., \cite{Dir58} and \cite{K&K}. The criteria of canonicity for arbitrary transformations of the dynamical variables are formulated in Section VI. Then we discuss the modifications made by Dirac in the classical Hamiltonian method \cite{Dir58} - \cite{Dir64}. We show explicitly that these modifications allowed Dirac to create a logically closed and transparent Hamiltonian approach, which has many advantages for studying the actual motion of various physical fields when numerous gauge conditions must also be taken into account. Here we also formulate the new principle of the ``complete reverse recovery'', which must be applied to any Hamiltonian formulation of the metric gravity in order to check its validity and its correct relation to the original Einstein field equations. This simple and physically clear principle can be used to ``separate the wheat from the chaff'' (Matthew 3:12) in Hamiltonian gravity. It allows us to operate only with the true Hamiltonian formulations of metric gravity and discard the fake ones. In Section VIII we introduce the integral invariants of metric gravity. In some sense this is the central part of this study, since the method of integral invariants allows one to create a real foundation for the Hamiltonian formulations of the metric gravity. In particular, by applying the integral invariants of metric gravity one can easily perform all steps of the rigorous Hamiltonian procedure and formulate various criteria of canonicity which can be applied to actual transformations of dynamical variables. Concluding remarks can be found in the last Section. This paper also includes three Appendixes. Appendix A is a purely technical part, which contains derivations of some formulas. 
These formulas and expressions are of paramount importance for this study, but they could not be included in the main text, since this would damage the logic and harmony of our presentation. In Appendix B we discuss a number of subtle points which traditionally complicate the correct definition of canonicity in classical and quantum mechanics. In Appendix C we explicitly derive some important formulas for the integral invariants. Along with the discussion of the latest achievements in Hamiltonian metric gravity, we also wanted to write a simple and transparent article which can be understood by any theoretical physicist who is familiar with the modern Hamiltonian methods developed for constrained dynamical systems. \section{Regular $\Gamma - \Gamma$ Lagrangian density of the metric gravity} In this Section we introduce the regularized (or regular) Lagrangian density of metric gravity. As is well known (see, e.g., \cite{Carm} and \cite{LLTF}), the original Lagrangian density of metric gravity coincides with the integrand in the Einstein-Hilbert integral-action $L_{EH}$, which equals the product of the scalar (or Gauss) curvature of the $d$-dimensional space $R = g^{\alpha\beta} R_{\alpha\beta}$ and the factor $\sqrt{- g}$, which is the Jacobian of the transformation from the flat space to the curved Riemann space (see, e.g., \cite{Carm} and \cite{LLTF}). The invariant integral $\int R \sqrt{- g} d\Omega$ is called the gravitational action. 
The explicit form of the Lagrangian density is $L_{EH} = \sqrt{- g} R = \sqrt{- g} g^{\alpha\beta} R_{\alpha\beta} = \sqrt{- g} g^{\alpha\beta} g^{\gamma\sigma} R_{\gamma\alpha\sigma\beta}$, where $R = g^{\alpha\beta} R_{\alpha\beta}$ is the scalar (or Gauss) curvature of the $d$-dimensional space-time, while $R_{\alpha\beta}$ is the Ricci tensor \begin{eqnarray} R_{\alpha\beta} = \frac{\partial \Gamma^{\gamma}_{\alpha\beta}}{\partial x^{\gamma}} - \frac{\partial \Gamma^{\gamma}_{\alpha\gamma}}{\partial x^{\beta}} + \Gamma^{\gamma}_{\alpha\beta} \Gamma^{\lambda}_{\gamma\lambda} - \Gamma^{\lambda}_{\alpha\gamma} \Gamma^{\gamma}_{\beta\lambda} \; , \; {\rm or} \; \; R_{\alpha\beta} = g^{\mu\nu} R_{\mu\alpha\nu\beta} = g^{\nu\mu} R_{\nu\beta\mu\alpha} = R_{\beta\alpha} \; \label{equH1} \end{eqnarray} In this equation and everywhere below in this study the notation $\Gamma^{\gamma}_{\alpha\beta} = \frac12 g^{\gamma\nu} \Bigl( \frac{\partial g_{\nu\alpha}}{\partial x^{\beta}} + \frac{\partial g_{\nu\beta}}{\partial x^{\alpha}} - \frac{\partial g_{\alpha\beta}}{\partial x^{\nu}} \Bigr)$ stands for the Christoffel symbols of the second kind (see, e.g., \cite{Kochin}). The Ricci tensor $R_{\alpha\beta} = g^{\gamma\sigma} R_{\gamma\alpha\sigma\beta}$ is simply related to the Einstein tensor $G_{\alpha\beta} = g^{\gamma\sigma} R_{\alpha\gamma\sigma\beta}$, since $R_{\alpha\beta} = - G_{\alpha\beta}$ \cite{Kochin}. In this notation the governing equations of the free gravitational field (the famous Einstein equations) are written in one of the following forms: $R_{\alpha\beta} = 0 = G_{\alpha\beta}$. Any Hamiltonian formulation of the metric gravity must reproduce these original field equations exactly and unambiguously. This is the new fundamental principle of the ``complete reverse recovery'', and its application to various new and old Hamiltonian formulations of the metric gravity allows one to quickly ``separate the wheat from the chaff'' (Matthew 13:24-30) (see below). 
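As a concrete illustration of the definition in Eq.\,(\ref{equH1}), the Christoffel symbols, the Ricci tensor and the scalar curvature can be computed directly from a given metric. The sympy sketch below uses, for simplicity, the (Riemannian) metric of a two-sphere of radius $r$, for which the scalar curvature is known to be $2/r^{2}$; the same code applies verbatim to any $d$-dimensional metric:

```python
import sympy as sp

th, ph, r = sp.symbols('theta phi r', positive=True)
x = [th, ph]                                             # coordinates
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(th)**2]])    # metric of a 2-sphere
ginv, d = g.inv(), 2

# Christoffel symbols of the second kind, Gamma[l][a][b] = Gamma^l_{ab}.
Gamma = [[[sum(ginv[l, n] * (sp.diff(g[n, a], x[b]) + sp.diff(g[n, b], x[a])
               - sp.diff(g[a, b], x[n])) / 2 for n in range(d))
           for b in range(d)] for a in range(d)] for l in range(d)]

def ricci(a, b):
    """Ricci tensor R_{ab}, in the convention of the formula above."""
    return sp.simplify(
        sum(sp.diff(Gamma[c][a][b], x[c]) for c in range(d))
        - sum(sp.diff(Gamma[c][a][c], x[b]) for c in range(d))
        + sum(Gamma[c][a][b] * Gamma[l][c][l] for c in range(d) for l in range(d))
        - sum(Gamma[l][a][c] * Gamma[c][b][l] for c in range(d) for l in range(d)))

# Scalar curvature R = g^{ab} R_{ab}.
R = sp.simplify(sum(ginv[a, b] * ricci(a, b) for a in range(d) for b in range(d)))
print(R)   # prints: 2/r**2
```

The sign convention here follows the Ricci tensor of Eq.\,(\ref{equH1}), which gives a positive scalar curvature for the sphere.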
An alternative (but equivalent!) form of the same Lagrangian density is written as $L_{EH} = \sqrt{- g} g^{\alpha\beta} g^{\gamma\sigma} R_{\gamma\alpha \sigma\beta}$, where the notation $R_{\alpha\beta \gamma\sigma}$ designates the Riemann curvature tensor (or Riemann-Christoffel tensor), which is \begin{eqnarray} R_{\alpha\beta \gamma\sigma} = \frac12 \Bigl[ \frac{\partial^{2} g_{\alpha\sigma}}{\partial x^{\beta} \partial x^{\gamma}} + \frac{\partial^{2} g_{\beta\gamma}}{\partial x^{\alpha} \partial x^{\sigma}} - \frac{\partial^{2} g_{\alpha\gamma}}{\partial x^{\beta} \partial x^{\sigma}} - \frac{\partial^{2} g_{\beta\sigma}}{\partial x^{\alpha} \partial x^{\gamma}} \Bigr] + \Gamma_{\rho, \alpha\sigma} \Gamma^{\rho}_{\beta\gamma} - \Gamma_{\rho, \beta\sigma} \Gamma^{\rho}_{\alpha\gamma} \; \; , \; \label{two} \end{eqnarray} where $\Gamma_{\lambda, \mu\nu} = \frac12 \Bigl( \frac{\partial g_{\lambda\mu}}{\partial x^{\nu}} + \frac{\partial g_{\lambda\nu}}{\partial x^{\mu}} - \frac{\partial g_{\mu\nu}}{\partial x^{\lambda}} \Bigr)$ are the Christoffel symbols of the first kind. The Riemann-Christoffel tensor defined in Eq.(\ref{two}) is a covariant tensor of the fourth rank. As follows from the last equation, the Einstein-Hilbert Lagrangian density $L_{EH} = \sqrt{- g} g^{\alpha\gamma} g^{\beta\sigma} R_{\alpha\beta \gamma\sigma}$ contains a number of second-order derivatives $\frac{\partial^{2} g_{\alpha\beta}}{\partial x^{\gamma} \partial x^{\lambda}}$ and cannot be used directly in the principle of least action. However, as follows from Eqs.(\ref{equH1}) and (\ref{two}), all these second-order derivatives enter the Lagrangian density $L_{EH}$ only as a linear combination whose coefficients do not contain any derivatives of the metric tensor. 
Such a linearity of the invariant integral $\int R \sqrt{- g} d\Omega$ in the second-order derivatives of the metric tensor can be used to transform this integral (by means of the Gauss theorem) into an integral which does not include any second-order derivatives. After a few simple transformations the invariant integral $S_g$ is reduced to the form \begin{eqnarray} \int R \sqrt{- g} d\Omega = \int g^{\alpha\beta} \Bigl( \Gamma^{\lambda}_{\alpha\gamma} \Gamma^{\gamma}_{\beta\lambda} - \Gamma^{\gamma}_{\alpha\beta} \Gamma^{\lambda}_{\gamma\lambda}\Bigr) \sqrt{- g} d\Omega + \int \frac{\partial \Bigl[\sqrt{- g} \Bigl( g^{\alpha\beta} \Gamma^{\gamma}_{\alpha\beta} - g^{\alpha\gamma} \Gamma^{\beta}_{\alpha\beta} \Bigr)\Bigr]}{\partial x^{\gamma}} d\Omega \; , \; \label{equH2} \end{eqnarray} where the integrand of the first integral on the right-hand side of this equation contains only products of different powers of the components of the metric tensor and their first-order derivatives, while the second integral has the form of a divergence of the vector-like quantity $\sqrt{- g} \Bigl( g^{\alpha\beta} \Gamma^{\gamma}_{\alpha\beta} - g^{\alpha\gamma} \Gamma^{\beta}_{\alpha\beta} \Bigr)$. It is clear that the second integral can be transformed (with the help of the Gauss theorem) into an integral over a hyper-surface surrounding the $d$-dimensional volume over which the integration is carried out in the other two integrals. When we vary the gravitational action $S_g$, the variation of this (second) term on the right-hand side of Eq.(\ref{equH2}) vanishes, since, according to the principle of least action, the variation of the gravitational field at the limits of the region of integration must be equal to zero. 
Now, from Eq.(\ref{equH2}) one finds \begin{eqnarray} \delta S_g = \delta \int L_{EH} d\Omega = \delta \int R \sqrt{- g} d\Omega = \delta \int L_{\Gamma-\Gamma} d\Omega \; , \; {\rm or} \; \; \frac{\delta L_{EH}}{\delta g_{\mu\nu}} = \frac{\delta \Bigl(R \sqrt{- g} \Bigr)}{\delta g_{\mu\nu}} = \frac{\delta L_{\Gamma - \Gamma}}{\delta g_{\mu\nu}} \; \; \label{LGG0} \end{eqnarray} where the notation $\delta$ means variation, while the notation $\frac{\delta F}{\delta g_{\mu\nu}}$ means the variational (or Lagrange) derivative of the functional $F$. Also, in this equation the symbol $L_{\Gamma-\Gamma} = \sqrt{- g} g^{\alpha\beta} \Bigl( \Gamma^{\lambda}_{\alpha\gamma} \Gamma^{\gamma}_{\beta\lambda} - \Gamma^{\gamma}_{\alpha\beta} \Gamma^{\lambda}_{\gamma\lambda} \Bigr)$ stands for the regularized (or regular) $\Gamma - \Gamma$ Lagrangian density of the metric gravity, which plays a central role in numerous Hamiltonian approaches developed for the metric gravity. As follows from Eq.(\ref{LGG0}), the variational derivative of the $L_{\Gamma-\Gamma}$ Lagrangian density is a true tensor, even though the $L_{\Gamma-\Gamma}$ Lagrangian density itself is not a true scalar. The equality, Eq.(\ref{LGG0}), expresses the fact that we can replace the `singular' Einstein-Hilbert Lagrangian density $L_{EH} = \sqrt{- g} R$ by the regular $\Gamma - \Gamma$ Lagrangian density $L_{\Gamma-\Gamma} = \sqrt{- g} g^{\alpha\beta} \Bigl( \Gamma^{\lambda}_{\alpha\gamma} \Gamma^{\gamma}_{\beta\lambda} - \Gamma^{\gamma}_{\alpha\beta} \Gamma^{\lambda}_{\gamma\lambda} \Bigr)$, which is variationally equivalent to the original Einstein-Hilbert Lagrangian density and contains no second-order derivatives.
The $\Gamma - \Gamma$ Lagrangian density can also be written in the following form \begin{eqnarray} L_{\Gamma - \Gamma} &=& \frac14 \sqrt{-g} B^{\alpha\beta\gamma\mu\nu\rho} \Bigl(\frac{\partial g_{\alpha\beta}}{\partial x^{\gamma}}\Bigr) \Bigl(\frac{\partial g_{\mu\nu}}{\partial x^{\rho}}\Bigr) = \frac14 \sqrt{-g} B^{\alpha\beta\gamma\mu\nu\rho} g_{\alpha\beta,\gamma} g_{\mu\nu,\rho} \; \; \nonumber \\ &=& \frac14 \sqrt{-g} \Bigl( g^{\alpha\beta} g^{\gamma\rho} g^{\mu\nu} - g^{\alpha\mu} g^{\beta\nu} g^{\gamma\rho} + 2 g^{\alpha\rho} g^{\beta\nu} g^{\gamma\mu} - 2 g^{\alpha\beta} g^{\gamma\mu} g^{\nu\rho} \Bigr) g_{\alpha\beta,\gamma} g_{\mu\nu,\rho} \; \; , \; \label{LGG} \end{eqnarray} where $B^{\alpha\beta\gamma\mu\nu\rho}$ is a homogeneous cubic polynomial of the contravariant components of the metric tensor $g^{\alpha\beta}$. The explicit definition of the $B^{\alpha\beta\gamma\mu\nu\rho}$ quantities follows directly from Eq.(\ref{LGG}). Below, we shall deal with the $\Gamma - \Gamma$ Lagrangian density only. In order to simplify the formulas which follow, we designate this Lagrangian density by the letter $L$, i.e., $L = L_{\Gamma-\Gamma}$. Now, by using the $\Gamma - \Gamma$ Lagrangian density we can derive the explicit expressions for all contravariant components of momenta $\pi^{\mu\nu}$ and obtain the closed expression for the Hamiltonian(s) of the metric gravity. These important steps are made in the next Section. \section{Momenta. Canonical and total Hamiltonians of metric gravity} In the previous Section we have introduced the $\Gamma - \Gamma$ Lagrangian density, Eq.(\ref{LGG}), of the metric gravity. The second step of any standard Hamiltonian procedure is to define the momenta corresponding to this known Lagrangian density.
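The equality of the two forms in Eq.(\ref{LGG}) can be verified numerically. The sketch below is our own consistency check (the random metric and all variable names are assumptions for illustration): for a random symmetric metric and random first derivatives $g_{\alpha\beta,\gamma}$, the $\Gamma - \Gamma$ combination must coincide with the quadratic form built from the cubic coefficients $B^{\alpha\beta\gamma\mu\nu\rho}$; the common factor $\sqrt{-g}$ is dropped from both sides.

```python
# Check that g^{ab}(Gamma^l_{ac} Gamma^c_{bl} - Gamma^c_{ab} Gamma^l_{cl})
# equals (1/4) B^{abc mnr} g_{ab,c} g_{mn,r} for a random metric.
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
g = A + A.T + 4.0 * np.eye(d)          # random symmetric, invertible g_{ab}
gi = np.linalg.inv(g)                  # g^{ab}

h = rng.normal(size=(d, d, d))
h = 0.5 * (h + h.transpose(1, 0, 2))   # h[a,b,c] = g_{ab,c}, symmetric in (a,b)

# Christoffel symbols of the second kind built from the derivatives h
Gamma = 0.5 * (np.einsum('ls,sac->lac', gi, h)     # g_{sa,c}
               + np.einsum('ls,sca->lac', gi, h)   # g_{sc,a}
               - np.einsum('ls,acs->lac', gi, h))  # g_{ac,s}

lhs = (np.einsum('ab,lac,cbl->', gi, Gamma, Gamma)
       - np.einsum('ab,cab,lcl->', gi, Gamma, Gamma))

# cubic coefficients B^{abc mnr} read off from Eq.(LGG)
B = (np.einsum('ab,cr,mn->abcmnr', gi, gi, gi)
     - np.einsum('am,bn,cr->abcmnr', gi, gi, gi)
     + 2.0 * np.einsum('ar,bn,cm->abcmnr', gi, gi, gi)
     - 2.0 * np.einsum('ab,cm,nr->abcmnr', gi, gi, gi))

rhs = 0.25 * np.einsum('abcmnr,abc,mnr->', B, h, h)
```

Agreement of `lhs` and `rhs` confirms the quoted four-term structure of the $B^{\alpha\beta\gamma\mu\nu\rho}$ coefficients.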
Our derivation of momenta in this study is based on the approaches developed in the two earlier papers \cite{Dir58} and \cite{K&K}, which still play a central role in all modern Hamiltonian formulations of the metric gravity. First, we need to re-write the formula, Eq.(\ref{LGG}), for the $\Gamma - \Gamma$ Lagrangian density in a slightly different form, where all temporal derivatives of the covariant components of the metric tensor, i.e., $g_{\alpha\beta,0}$, are explicitly separated from other similar derivatives (see, e.g., \cite{K&K}, \cite{Fro2021}) \begin{eqnarray} L = \frac14 \sqrt{-g} B^{\alpha\beta 0\mu\nu 0} g_{\alpha\beta,0} g_{\mu\nu,0} + \frac12 \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)} g_{\alpha\beta,0} g_{\mu\nu,k} + \frac14 \sqrt{-g} B^{\alpha\beta k \mu\nu l} g_{\alpha\beta,k} g_{\mu\nu,l} \; , \; \label{LGGvel} \end{eqnarray} where the notation $B^{(\alpha\beta\gamma|\mu\nu\rho)}$ means a `symmetrical' $B^{\alpha\beta\gamma\mu\nu\rho}$ quantity which is symmetrized with respect to the permutation of the two groups of indexes, i.e., \begin{eqnarray} B^{(\alpha\beta\gamma|\mu\nu\rho)} &=& \frac12 \Bigl( B^{\alpha\beta\gamma\mu\nu\rho} + B^{\mu\nu\rho\alpha\beta\gamma} \Bigr) = g^{\alpha\beta} g^{\gamma\rho} g^{\mu\nu} - g^{\alpha\mu} g^{\beta\nu} g^{\gamma\rho} \nonumber \\ &+& 2 g^{\alpha\rho} g^{\beta\nu} g^{\gamma\mu} - g^{\alpha\beta} g^{\nu\rho} g^{\gamma\mu} - g^{\alpha\rho} g^{\beta\gamma} g^{\mu\nu} \; . \; \label{eq52} \end{eqnarray} The contravariant components of momentum $\pi^{\gamma\sigma}$ are defined as partial derivatives of the Lagrangian density, Eq.(\ref{LGGvel}), with respect to the corresponding velocities $g_{\gamma\sigma,0}$ (see, e.g., \cite{Dir64}).
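The five-term form of the group-symmetrized coefficients in Eq.(\ref{eq52}) can also be checked directly. The following short sketch (our own check; the random metric is an assumption) builds $B^{\alpha\beta\gamma\mu\nu\rho}$ from Eq.(\ref{LGG}), symmetrizes it over the two groups of indexes, and compares with the five-term expression:

```python
# Check Eq.(eq52): (1/2)(B^{abc mnr} + B^{mnr abc}) equals the quoted
# five-term combination of contravariant metric components.
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = rng.normal(size=(d, d))
g = A + A.T + 4.0 * np.eye(d)
gi = np.linalg.inv(g)

B = (np.einsum('ab,cr,mn->abcmnr', gi, gi, gi)
     - np.einsum('am,bn,cr->abcmnr', gi, gi, gi)
     + 2.0 * np.einsum('ar,bn,cm->abcmnr', gi, gi, gi)
     - 2.0 * np.einsum('ab,cm,nr->abcmnr', gi, gi, gi))

Bsym = 0.5 * (B + B.transpose(3, 4, 5, 0, 1, 2))   # (abc|mnr) <-> (mnr|abc)

five = (np.einsum('ab,cr,mn->abcmnr', gi, gi, gi)
        - np.einsum('am,bn,cr->abcmnr', gi, gi, gi)
        + 2.0 * np.einsum('ar,bn,cm->abcmnr', gi, gi, gi)
        - np.einsum('ab,nr,cm->abcmnr', gi, gi, gi)
        - np.einsum('ar,bc,mn->abcmnr', gi, gi, gi))
```

The third term of Eq.(\ref{LGG}) is invariant under the group exchange, so only the last term of $B^{\alpha\beta\gamma\mu\nu\rho}$ splits into the two final terms of Eq.(\ref{eq52}).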
The expressions for the contravariant components of gravitational momenta (or momenta, for short) are \begin{eqnarray} \pi^{\gamma\sigma} = \frac{\partial L}{\partial g_{\gamma\sigma,0}} = \frac{1}{2} \sqrt{-g} B^{((\gamma\sigma) 0|\mu\nu 0)} g_{\mu\nu, 0} + \frac{1}{2} \sqrt{-g} B^{((\gamma\sigma) 0|\mu\nu k)} g_{\mu\nu, k} \; \; , \; \label{mom} \end{eqnarray} where $B^{((\gamma\sigma) 0|\mu\nu 0)} = \frac12 \Bigl(B^{(\gamma\sigma 0|\mu\nu 0)} + B^{(\sigma\gamma 0|\mu\nu 0)} \Bigr)$ is the symmetrized linear combination of the two $B^{(\gamma\sigma 0|\mu\nu 0)}$ quantities. The first term in the right-hand side of this equation is written in the form \begin{eqnarray} \frac{1}{2} \sqrt{-g} B^{((\gamma\sigma)0|\mu\nu 0)} g_{\mu\nu, 0} = \frac{1}{2} \sqrt{-g} g^{00} \Bigl( e^{\mu\nu} e^{\gamma\sigma} - e^{\mu\gamma} e^{\nu\sigma} \Bigr) g_{\mu\nu, 0} = \frac{1}{2} \sqrt{-g} g^{00} E^{\mu\nu\gamma\sigma} g_{\mu\nu, 0} \; , \; \; \label{B1} \end{eqnarray} where the notations $e^{\mu \nu}$ and $E^{\mu\nu\gamma\sigma}$ stand for the Dirac contravariant tensors of the second and fourth ranks, respectively. The explicit expressions for these tensors are \begin{eqnarray} e^{\mu \nu} = g^{\mu \nu} - \frac{g^{0 \mu} g^{0 \nu}}{g^{00}} \; \; , \; \; {\rm and} \; \; \; E^{\mu \nu \gamma \rho} = e^{\mu \nu} e^{\gamma \rho} - e^{\mu \gamma} e^{\nu \rho} \; \; , \; \label{E} \end{eqnarray} i.e., each component of these two tensors is a function of the contravariant components of the metric tensor only. For these tensors one finds the following symmetries with respect to permutations of their indexes: $e^{\mu \nu} = e^{\nu \mu}$ and $E^{\mu\nu\gamma\sigma} = E^{\gamma\sigma\mu\nu}$. Also, as follows directly from the formulas, Eq.(\ref{E}), the tensor $e^{\mu \nu}$ equals zero if either index $\mu$, or index $\nu$ (or both) equals zero.
Analogously, for the Dirac $E^{\mu\nu\gamma\sigma}$ tensor one finds that $E^{0\nu\gamma\sigma} = 0, E^{\mu 0\gamma\sigma} = 0, E^{\mu\nu 0\sigma} = 0$ and $E^{\mu\nu\gamma 0} = 0$. Therefore, it is more productive to discuss the space-like quantities $e^{mn}$ and $E^{mnpq}$ only. The space-like $E^{p q k l}$ quantity is, in fact, the space-like Dirac tensor of the fourth rank. This space-like tensor $E^{p q k l}$ has no components which equal zero identically. Furthermore, this space-like tensor $E^{p q k l}$ is a positive definite, invertible tensor. Its inverse $I_{m n p q}$ is also a positive definite, invertible space-like tensor of the fourth rank, which is written in the form \cite{K&K} \begin{equation} I_{m n p q} = \frac{1}{d - 2} g_{m n} g_{p q} - g_{m p} g_{n q} \; \; . \; \label{I} \end{equation} The relation between the space-like tensors $I_{m n p q}$ and $E^{p q k l}$ is written in the form $I_{m n p q} E^{p q k l} = g^{k}_{m} g^{l}_{n} = \delta^{k}_{m} \delta^{l}_{n}$ (see Appendix A), where $g^{\alpha}_{\beta} = \delta^{\alpha}_{\beta}$ is the substitution tensor \cite{Kochin} and the symbol $\delta^{\alpha}_{\beta}$ denotes the Kronecker delta ($\delta^{\alpha}_{\beta} = 1$ for $\alpha = \beta$ and $\delta^{\alpha}_{\beta} = 0$, if $\alpha \ne \beta$). From this definition one easily finds that $\delta^{\alpha}_{\beta} = \delta_{\alpha}^{\beta}$. For the $B^{((\gamma\sigma) 0|\mu\nu 0)}$ coefficients in the formula, Eq.(\ref{mom}), Eqs.(\ref{B1}), (\ref{E}) and (\ref{I}) lead to two possible situations. First, for $\gamma = p$ and $\sigma = q$ these coefficients are always different from zero.
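The vanishing of the temporal components of $e^{\mu\nu}$ and the inversion relation $I_{mnpq} E^{pqkl} = \delta^{k}_{m} \delta^{l}_{n}$ are easy to confirm numerically. The sketch below is our own check (the Lorentzian-like random metric is an assumption chosen only to make $g^{00} \ne 0$):

```python
# Check the Dirac tensors of Eq.(E) and the inverse tensor of Eq.(I).
import numpy as np

rng = np.random.default_rng(3)
d = 4
A = rng.normal(size=(d, d))
g = A + A.T + np.diag([-6.0, 6.0, 6.0, 6.0])  # symmetric, Lorentzian-like
gi = np.linalg.inv(g)

# e^{mu nu} = g^{mu nu} - g^{0 mu} g^{0 nu} / g^{00}
e = gi - np.outer(gi[0], gi[0]) / gi[0, 0]

# E^{mnpq} = e^{mn} e^{pq} - e^{mp} e^{nq}, restricted to spatial indices
E = np.einsum('mn,pq->mnpq', e, e) - np.einsum('mp,nq->mnpq', e, e)
Es = E[1:, 1:, 1:, 1:]

# inverse tensor I_{mnpq} of Eq.(I), built from the covariant spatial block
gs = g[1:, 1:]
I = np.einsum('mn,pq->mnpq', gs, gs) / (d - 2) - np.einsum('mp,nq->mnpq', gs, gs)

prod = np.einsum('mnpq,pqkl->mnkl', I, Es)
delta = np.einsum('mk,nl->mnkl', np.eye(d - 1), np.eye(d - 1))
```

The check uses the fact that the covariant spatial block $g_{mn}$ is the matrix inverse of the space-like $e^{mn}$, which is what makes the compact form of Eq.(\ref{I}) possible.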
In this `regular' case we obtain the following formula for the space-like contravariant components of the momentum tensor \begin{eqnarray} \pi^{pq} &=& \frac{\partial L}{\partial g_{p q,0}} = \frac{1}{2} \sqrt{-g} B^{((p q) 0|\mu\nu 0)} g_{\mu\nu,0} + \frac{1}{2} \sqrt{-g} B^{((p q) 0|\mu\nu k)} g_{\mu\nu, k} \nonumber \\ &=& \frac{1}{2} \sqrt{-g} B^{((p q) 0|m n 0)} g_{m n,0} + \frac{1}{2} \sqrt{-g} B^{((p q) 0|m n k)} g_{m n, k} \; \; \label{momenta} \end{eqnarray} for each pair of the spatial $(pq)-$indexes. In this case the $(pq;mn)-$matrix of the $B^{((p q) 0|m n 0)} = g^{00} E^{pqmn}$ coefficients, which are located in front of the space-like $g_{m n, 0}$ velocities in the right-hand side of this equation, is invertible (see above). Therefore, in this case the field-velocity $g_{m n, 0}$ can be expressed as a linear combination of the space-like components $\pi^{pq}$ of the momentum tensor: \begin{eqnarray} g_{mn, 0} &=& \frac{1}{g^{00}} \Bigl( \frac{2}{\sqrt{-g}} I_{m n p q} \pi^{pq} - I_{m n p q} B^{((pq) 0|\mu\nu k)} g_{\mu\nu, k} \Bigr) = \frac{1}{g^{00}} I_{m n p q} \Bigl( \frac{2}{\sqrt{-g}} \pi^{pq} \nonumber \\ &-& B^{((pq) 0|\mu\nu k)} g_{\mu\nu, k} \Bigr) \; \; , \; \label{veloc} \end{eqnarray} where the Dirac tensor $I_{m n p q}$ is defined by Eq.(\ref{I}). As follows from Eqs.(\ref{momenta}) and (\ref{veloc}), for all space-like components of the metric tensor $g_{pq}$ and corresponding momenta $\pi^{mn}$ there is essentially no principal difference from those systems in classical mechanics whose Lagrangians are quadratic functions of the velocities. Indeed, in metric gravity all space-like components of momenta and velocities are always related to each other by a few simple, linear equations, which, however, take a multi-dimensional (or matrix) form. The method described above is a direct and transparent generalization of the Legendre dual transformation to tensor fields.
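The pair Eqs.(\ref{momenta}) and (\ref{veloc}) can be tested by a round trip. The following sketch is our own numerical illustration (the random metric, velocities and spatial gradients are assumptions): it builds the momenta $\pi^{pq}$ from random velocities and gradients and then recovers the space-like velocities with the inverse tensor $I_{mnpq}$.

```python
# Round-trip check of Eqs.(momenta) and (veloc).
import numpy as np

rng = np.random.default_rng(4)
d = 4
A = rng.normal(size=(d, d))
g = A + A.T + np.diag([-6.0, 6.0, 6.0, 6.0])   # symmetric metric, det g < 0
gi = np.linalg.inv(g)
sg = np.sqrt(-np.linalg.det(g))                # sqrt(-g)

# cubic coefficients B^{abc|mnr} of Eq.(LGG), fully symmetrized
B = (np.einsum('ab,cr,mn->abcmnr', gi, gi, gi)
     - np.einsum('am,bn,cr->abcmnr', gi, gi, gi)
     + 2.0 * np.einsum('ar,bn,cm->abcmnr', gi, gi, gi)
     - 2.0 * np.einsum('ab,cm,nr->abcmnr', gi, gi, gi))
Bs = 0.5 * (B + B.transpose(3, 4, 5, 0, 1, 2))      # group symmetrization
Bs = 0.5 * (Bs + Bs.transpose(1, 0, 2, 3, 4, 5))    # (a,b) symmetrization
Bs = 0.5 * (Bs + Bs.transpose(0, 1, 2, 4, 3, 5))    # (m,n) symmetrization

# random symmetric velocities v[m,n] = g_{mn,0} and gradients u[m,n,k] = g_{mn,k}
v = rng.normal(size=(d, d)); v = 0.5 * (v + v.T)
u = rng.normal(size=(d, d, d - 1)); u = 0.5 * (u + u.transpose(1, 0, 2))

# momenta of Eq.(momenta); the last axis of Bs[..., 1:] runs over spatial k
pi = 0.5 * sg * (np.einsum('pqmn,mn->pq', Bs[:, :, 0, :, :, 0], v)
                 + np.einsum('pqmnk,mnk->pq', Bs[:, :, 0, :, :, 1:], u))

# recover the space-like velocities from Eq.(veloc)
gs = g[1:, 1:]
I = np.einsum('mn,pq->mnpq', gs, gs) / (d - 2) - np.einsum('mp,nq->mnpq', gs, gs)
Bu = np.einsum('pqmnk,mnk->pq', Bs[1:, 1:, 0, :, :, 1:], u)
v_rec = np.einsum('mnpq,pq->mn', I, 2.0 * pi[1:, 1:] / sg - Bu) / gi[0, 0]
```

Only the space-like block of the velocities is recovered; the temporal velocities drop out of $\pi^{pq}$ entirely, which is the `singular' situation discussed next.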
In the second `singular' case, when $\gamma = 0$ (or $\sigma = 0$) in Eq.(\ref{mom}), the first term in the right-hand side of each of these equations equals zero. Therefore, these equations take the form of purely algebraic equations \begin{eqnarray} \pi^{0 \sigma} = \frac{\partial L}{\partial g_{0\sigma,0}} = \frac{1}{2} \sqrt{-g} B^{((0\sigma) 0|\mu\nu k)} g_{\mu\nu, k} \; \; , \; \; {\rm and} \; \; \; \pi^{\sigma 0} = \frac{\partial L}{\partial g_{\sigma 0,0}} = \frac{1}{2} \sqrt{-g} B^{((\sigma 0) 0|\mu\nu k)} g_{\mu\nu, k} \; \label{constr} \end{eqnarray} for $\sigma = 0, 1, \ldots, d - 1$. From these equations one finds $\pi^{0 \sigma} = \pi^{\sigma 0}$. Note also that these equations contain no velocities at all, i.e., we cannot express the $g_{0\sigma,0}$ velocities in terms of the momenta $\pi^{0\sigma}$, and vice versa. Each of the equations, Eq.(\ref{constr}), directly determines the momentum $\pi^{0\sigma}$ as a cubic polynomial of the contravariant components of the metric tensor $g^{\alpha\beta}$ multiplied by an additional factor $\sqrt{- g} g_{\mu\nu, k}$. In other words, the following $d$ functions \begin{eqnarray} \phi^{0 \sigma} = \pi^{0 \sigma} - \frac{1}{2} \sqrt{-g} B^{((0 \sigma) 0|\mu\nu k)} g_{\mu\nu, k} \; = \phi^{\sigma 0} \; \; , \; \label{primary} \end{eqnarray} where $\sigma = 0, 1, \ldots, d - 1$, must equal zero during actual physical motions (or time-evolution) of the free gravitational field. In other words, during any actual motion (or time-evolution) of the free gravitational field the $d$ additional conditions (or constraints) $\phi^{0\sigma} = 0$ must be obeyed by the Hamiltonian dynamical variables, since otherwise such a motion is not possible. We have to emphasize that the equations $\phi^{0\sigma} = 0$ are correct only on the true Hamiltonian trajectories (or curves) $(x^{0}, g_{\alpha\beta}(x^{0}), \pi^{\mu\nu}(x^{0}))$ of the free gravitational field.
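The purely algebraic nature of Eq.(\ref{constr}) rests on the vanishing of the velocity coefficients $B^{((0\sigma)0|(\mu\nu)0)}$. This is confirmed by the following short numerical sketch (ours; the random metric is an assumption):

```python
# Check that the velocity coefficients of pi^{0 sigma} vanish identically,
# i.e., that Eq.(constr) contains no velocities.
import numpy as np

rng = np.random.default_rng(5)
d = 4
A = rng.normal(size=(d, d))
g = A + A.T + np.diag([-6.0, 6.0, 6.0, 6.0])
gi = np.linalg.inv(g)

B = (np.einsum('ab,cr,mn->abcmnr', gi, gi, gi)
     - np.einsum('am,bn,cr->abcmnr', gi, gi, gi)
     + 2.0 * np.einsum('ar,bn,cm->abcmnr', gi, gi, gi)
     - 2.0 * np.einsum('ab,cm,nr->abcmnr', gi, gi, gi))
Bs = 0.5 * (B + B.transpose(3, 4, 5, 0, 1, 2))      # group symmetrization
Bs = 0.5 * (Bs + Bs.transpose(1, 0, 2, 3, 4, 5))    # (a,b) symmetrization
Bs = 0.5 * (Bs + Bs.transpose(0, 1, 2, 4, 3, 5))    # (m,n) symmetrization

# coefficients multiplying the velocities g_{mu nu,0} in pi^{0 sigma}
vel_coeff = Bs[0, :, 0, :, :, 0]
```

Before the $(\mu,\nu)$ symmetrization these coefficients are antisymmetric in $(\mu,\nu)$, so their contraction with the symmetric velocities $g_{\mu\nu,0}$ vanishes; after symmetrization they vanish identically.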
Outside these trajectories, only some, or even none, of these equations are satisfied. In \cite{Dir58} Dirac proposed to write these additional conditions in the symbolic form $\phi^{0\sigma} \approx 0$ (for $\sigma = 0, 1, \ldots, d - 1$), where the sign $\approx$ differs from the usual equality sign (=). These `weak' equations are the primary constraints of the given Hamiltonian formulation (see, e.g., \cite{Dir58} and \cite{Dir64}). In other words, during the time-evolution of the free gravitational field we always have $d$ primary constraints for the $d (d + 1)$ Hamiltonian variables $\{ g_{\alpha\beta}, \pi^{\mu\nu} \}$, and this number $d$ never changes if one applies canonical transformations of the Hamiltonian dynamical variables (see below). This is a particular case of the law of inertia for the first-class constraints, which is discussed below. Now, by applying the Legendre transformation to the known $\Gamma - \Gamma$ Lagrangian density $L$ of the metric gravity, Eq.(\ref{LGGvel}), and excluding all space-like velocities $g_{mn,0}$, we can derive the explicit formulas for the total $H_t$ and canonical $H_C$ Hamiltonians of the metric GR. Formally, these quantities are Hamiltonian densities, but in this study we try to avoid any mention of Hamiltonian densities, since constantly switching between the words `Hamiltonians' and `Hamiltonian densities' substantially complicates explanations and often leads to various confusions.
In particular, the total Hamiltonian $H_t$ of the gravitational field in metric gravity derived from the $\Gamma - \Gamma$ Lagrangian density $L$, Eq.(\ref{LGG}), is written in the form \begin{eqnarray} H_t &=& g_{\alpha\beta,0} \pi^{\alpha\beta} - L = g_{pq,0} \pi^{pq} + g_{0\sigma,0} \phi^{0\sigma} - L = g_{pq,0} \pi^{pq} - L + g_{0\sigma,0} \phi^{0\sigma} \nonumber \\ &=& H_C + g_{0 0,0} \phi^{0 0} + g_{0 k,0} \phi^{0 k} + g_{k 0,0} \phi^{k 0} = H_C + g_{0 0,0} \phi^{0 0} + 2 g_{0 k,0} \phi^{0 k} , \; \label{eq1} \end{eqnarray} where $\phi^{0\sigma} = \pi^{0\sigma} - \frac{1}{2}\sqrt{-g} B^{\left( \left(0\sigma\right) 0\mid\mu\nu k\right)} g_{\mu\nu,k}$ are the primary constraints, $g_{0\sigma,0}$ are the temporal velocities, and $H_C$ is the canonical Hamiltonian of the metric gravity \begin{eqnarray} & &H_C = \frac{1}{\sqrt{-g} g^{00}} I_{mnpq} \pi^{mn} \pi^{pq} - \frac{1}{g^{00}} I_{mnpq} \pi^{mn} B^{(p q 0|\mu \nu k)} g_{\mu\nu,k} \label{eq5} \\ &+& \frac14 \sqrt{-g} \Bigl[ \frac{1}{g^{00}} I_{mnpq} B^{((mn)0|\mu\nu k)} B^{(pq0|\alpha\beta l)} - B^{\mu\nu k \alpha\beta l}\Bigr] g_{\mu\nu,k} g_{\alpha\beta,l} \; \; , \; \nonumber \end{eqnarray} which does not contain any primary constraint $\phi^{0\sigma}$. The total Hamiltonian $H_t = H_C + g_{0\sigma,0} \phi^{0\sigma}$ is a scalar function defined in the $d (d + 1)-$dimensional (even-dimensional) phase space $\Bigl\{ g_{\alpha\beta}, \pi^{\mu\nu} \Bigr\}$, where all components of the metric $g_{\alpha\beta}$ and momentum $\pi^{\mu\nu}$ tensors have been chosen as the basic Hamiltonian variables. The corresponding $d (d + 1)-$dimensional space of Hamiltonian variables must be endowed with a symplectic (or anti-symmetric) bilinear form (the Poisson brackets), which turns this space into a symplectic, even-dimensional phase space.
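Eqs.(\ref{eq1}) and (\ref{eq5}) can be checked together numerically: on shell, i.e., when the momenta are computed from the velocities, the primary constraints $\phi^{0\sigma}$ vanish identically, so the Legendre transform $g_{\alpha\beta,0}\pi^{\alpha\beta} - L$ must reproduce the canonical Hamiltonian of Eq.(\ref{eq5}). The sketch below is our own consistency check (all random data and variable names are assumptions):

```python
# On-shell check: the Legendre transform of Eq.(LGGvel) equals H_C of Eq.(eq5).
import numpy as np

rng = np.random.default_rng(6)
d = 4
A = rng.normal(size=(d, d))
g = A + A.T + np.diag([-6.0, 6.0, 6.0, 6.0])
gi = np.linalg.inv(g)
sg = np.sqrt(-np.linalg.det(g))

B = (np.einsum('ab,cr,mn->abcmnr', gi, gi, gi)
     - np.einsum('am,bn,cr->abcmnr', gi, gi, gi)
     + 2.0 * np.einsum('ar,bn,cm->abcmnr', gi, gi, gi)
     - 2.0 * np.einsum('ab,cm,nr->abcmnr', gi, gi, gi))
Bs = 0.5 * (B + B.transpose(3, 4, 5, 0, 1, 2))
Bs = 0.5 * (Bs + Bs.transpose(1, 0, 2, 3, 4, 5))
Bs = 0.5 * (Bs + Bs.transpose(0, 1, 2, 4, 3, 5))

v = rng.normal(size=(d, d)); v = 0.5 * (v + v.T)                 # g_{ab,0}
u = rng.normal(size=(d, d, d - 1)); u = 0.5 * (u + u.transpose(1, 0, 2))  # g_{ab,k}

# Gamma-Gamma Lagrangian density of Eq.(LGGvel)
L = (0.25 * sg * np.einsum('abmn,ab,mn->', Bs[:, :, 0, :, :, 0], v, v)
     + 0.5 * sg * np.einsum('abmnk,ab,mnk->', Bs[:, :, 0, :, :, 1:], v, u)
     + 0.25 * sg * np.einsum('abkmnl,abk,mnl->', Bs[:, :, 1:, :, :, 1:], u, u))

# on-shell momenta of Eq.(mom); the constraints phi^{0 sigma} then vanish
pi = 0.5 * sg * (np.einsum('abmn,mn->ab', Bs[:, :, 0, :, :, 0], v)
                 + np.einsum('abmnk,mnk->ab', Bs[:, :, 0, :, :, 1:], u))
H_leg = np.einsum('ab,ab->', v, pi) - L

# canonical Hamiltonian of Eq.(eq5)
g00 = gi[0, 0]
gs = g[1:, 1:]
I = np.einsum('mn,pq->mnpq', gs, gs) / (d - 2) - np.einsum('mp,nq->mnpq', gs, gs)
pis = pi[1:, 1:]
Bu = np.einsum('pqmnk,mnk->pq', Bs[1:, 1:, 0, :, :, 1:], u)
H_C = (np.einsum('mnpq,mn,pq->', I, pis, pis) / (sg * g00)
       - np.einsum('mnpq,mn,pq->', I, pis, Bu) / g00
       + 0.25 * sg * (np.einsum('mnpq,mn,pq->', I, Bu, Bu) / g00
                      - np.einsum('abkmnl,abk,mnl->', Bs[:, :, 1:, :, :, 1:], u, u)))
```

Agreement of `H_leg` and `H_C` confirms both the separation $H_t = H_C + g_{0\sigma,0}\phi^{0\sigma}$ and the explicit form of Eq.(\ref{eq5}).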
The definition of the Poisson brackets between all basic dynamical variables, i.e., between the coordinates $g_{\alpha\beta}$ and momenta $\pi^{\mu\nu}$, is discussed in the next Section. At the same time, the spatial (covariant) components of the metric tensor $g_{mn}$ and the spatial (contravariant) components of momenta $\pi^{pq}$ form another $d (d - 1)-$dimensional space, which is also transformed (by the same Poisson brackets) into a symplectic, even-dimensional phase space of the $d (d - 1)$ space-like Hamiltonian variables $\Bigl\{ g_{mn}, \pi^{pq} \Bigr\}$. \section{Poisson brackets} In general, the Poisson brackets (or PB, for short) are fundamental and crucially important tools of any correct Hamiltonian theory. The correct definition of these Poisson brackets is the central part of numerous Hamiltonian formulations developed for different physical systems of particles, fields and their combinations. As is well known (see, e.g., \cite{Gant} - \cite{Arnold}), the Poisson bracket is an antisymmetric, bi-linear form defined in the $2 M$-dimensional phase space, which is, in fact, a cotangent space to an $M-$dimensional manifold located in the position space. A more accurate definition of the Poisson brackets can be found, e.g., in \cite{Arnold}.
For arbitrary vectors $X, Y, Z$ from this phase space we can define the bi-linear form, designated below as $[ X, Y ]$, which obeys the four following rules (or axioms) \begin{eqnarray} &&[ X, Y ] = - [ Y , X ] \; \; \; \; \; ({\rm antisymmetry}) \; , \; \nonumber \\ &&[ a_1 X_1 + a_2 X_2 , Y ] = a_1 [ X_1, Y ] + a_2 [ X_2, Y ] \; \; \; \; \; ({\rm linearity}\; {\rm in} \; {\rm either} \; {\rm member}) \; , \; \nonumber \\ &&[ X Y, Z ] = [ X, Z ] Y + X [ Y, Z ] \; \; \; \; \; ({\rm the}\; {\rm product} \; {\rm law}) \; , \; \nonumber \\ &&[ X, [ Y, Z ]] + [ Y , [ Z, X ]] + [ Z , [ X, Y ]] = 0 \; \; \; \; \; ({\rm Jacobi} \; \; {\rm identity}) \; , \; \nonumber \end{eqnarray} where each of the $X, Y$ and $Z$ vectors belongs to the $2 M$ dimensional phase space. In metric gravity the $d ( d + 1)-$dimensional phase space includes the $\frac{d(d + 1)}{2}$ generalized coordinates $g_{\alpha\beta}$ and the $\frac{d(d + 1)}{2}$ momenta $\pi^{\mu\nu}$, i.e., the contravariant components of the momentum tensor $\pi$. An additional (or fifth) fundamental rule for the Poisson brackets, which is often called the `time-evolution' of the Poisson bracket, is written in the form \begin{eqnarray} \frac{\partial}{\partial t} [ X, Y ] = [ \frac{\partial X}{\partial t}, Y ] + [ X, \frac{\partial Y}{\partial t} ] \; \; , \; \; {\rm and} \; \; \;\frac{\partial}{\partial b} [ X, Y ] = [ \frac{\partial X}{\partial b}, Y ] + [ X, \frac{\partial Y}{\partial b} ] \; \; \nonumber \end{eqnarray} where $t$ is the temporal variable (or time, for short), while $b$ is an arbitrary numerical parameter. In the Hamiltonian version of metric gravity the basic dynamical variables are the generalized `coordinates' $g_{\alpha\beta}$ and momenta $\pi^{\mu\nu}$ defined above.
With respect to these variables the Poisson brackets between two functions of the dynamical variables are defined as follows: \begin{eqnarray} [ f_1, f_2 ] = \frac{\partial f_1}{\partial g_{\alpha\beta}} \frac{\partial f_2}{\partial \pi^{\alpha\beta}} - \frac{\partial f_2}{\partial g_{\alpha\beta}} \frac{\partial f_1}{\partial \pi^{\alpha\beta}} = \frac{\partial f_1}{\partial g_{\alpha\beta}} \frac{\partial f_2}{\partial \pi^{\alpha\beta}} - \frac{\partial f_1}{\partial \pi^{\alpha\beta}} \frac{\partial f_2}{\partial g_{\alpha\beta}} \; \; \; . \; \; \label{PoisBrack} \end{eqnarray} The Poisson brackets between the generalized coordinates and momenta are of fundamental value for the purposes of this study. They are: \begin{eqnarray} [ g_{\alpha\beta}, \pi^{\mu\nu}] = \frac{\partial g_{\alpha\beta}}{\partial g_{\gamma\sigma}} \frac{\partial \pi^{\mu\nu}}{\partial \pi^{\gamma\sigma}} - \frac{\partial g_{\alpha\beta}}{\partial \pi^{\gamma\sigma}} \frac{\partial \pi^{\mu\nu}}{\partial g_{\gamma\sigma}} = \Delta^{\mu\nu}_{\alpha\beta} = \frac12 \Bigl(g^{\mu}_{\alpha} g^{\nu}_{\beta} + g^{\nu}_{\alpha} g^{\mu}_{\beta}\Bigr) = \frac12 \Bigl(\delta^{\mu}_{\alpha} \delta^{\nu}_{\beta} + \delta^{\nu}_{\alpha} \delta^{\mu}_{\beta}\Bigr) \; \; , \; \label{eq15} \end{eqnarray} where $g^{\mu}_{\alpha} = \delta^{\mu}_{\alpha} (= \delta^{\alpha}_{\mu}$) is the substitution tensor \cite{Kochin} and the symbol $\delta^{\mu}_{\beta}$ is the Kronecker delta, while the notation $\Delta^{\mu\nu}_{\alpha\beta}$ stands for the gravitational (or tensor) delta-symbol.
The three following properties of this delta-symbol are obvious and very useful in calculations of many Poisson brackets: (1) the `horizontal' index symmetry $\Delta^{\mu\nu}_{\alpha\beta} = \Delta^{\nu\mu}_{\alpha\beta} = \Delta^{\mu\nu}_{\beta\alpha} = \Delta^{\nu\mu}_{\beta\alpha}$, (2) the `vertical' index symmetry $\Delta^{\mu\nu}_{\alpha\beta} = \Delta^{\alpha\nu}_{\mu\beta} = \Delta^{\mu\alpha}_{\nu\beta} = \ldots = \Delta_{\mu\nu}^{\alpha\beta}$, and (3) the product property: $\Delta^{\nu\mu}_{\rho\sigma} \Delta^{\rho\sigma}_{\alpha\beta} = \Delta^{\mu\nu}_{\alpha\beta}$. By using these properties we can write that $[ g_{\alpha\beta}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta} = \Delta_{\mu\nu}^{\alpha\beta} = [ g_{\mu\nu}, \pi^{\alpha\beta}]$. Note again that the total number of dynamical Hamiltonian variables in metric gravity is always even and equals $d (d + 1)$. The Poisson bracket, Eq.(\ref{eq15}), explains why in some papers the gravitational momenta $\pi^{\mu\nu}$ are considered as the conjugate dynamical variables for the corresponding covariant components of the metric tensor $g_{\mu\nu}$ (our coordinates). Other fundamental Poisson brackets between the basic dynamical variables of the metric gravity equal zero identically, i.e., $[ g_{\alpha\beta}, g_{\mu\nu}] = 0$ and $[ \pi^{\alpha\beta}, \pi^{\mu\nu}] = 0$. In general, our dynamical variables depend upon one temporal and $(d - 1)$ spatial coordinates $x^{0}, x^{1}, \ldots, x^{d-1} = (x^{0}, \bar{x})$. In this case we have to apply the following definition of the Poisson brackets, e.g., \begin{eqnarray} [ g_{\alpha\beta}(\bar{x}, t), \pi^{\mu\nu}(\bar{x}^{\prime}, t)] = \Delta^{\mu\nu}_{\alpha\beta} \delta^{d-1}(\bar{x} - \bar{x}^{\prime}) \; \; , \; \label{non-local} \end{eqnarray} where $\delta^{d-1}(\bar{y})$ is the usual delta-function in the $(d - 1)-$dimensional position space.
Such a generalization of the Poisson brackets is straightforward and simple, but in this study we do not want to complicate our system of notations. For this reason, below we shall always deal with the Poisson brackets of two quantities taken at the same spatial point. The explicit form of the fundamental Poisson brackets, Eq.(\ref{eq15}), allows one to derive the following formulas for slightly different Poisson brackets \begin{eqnarray} [ g^{\alpha\beta}, \pi^{\mu\nu}] = - \frac12 \Bigl( g^{\alpha\mu} g^{\beta\nu} + g^{\alpha\nu} g^{\beta\mu} \Bigr) = - g^{\alpha\gamma} \Delta^{\mu\nu}_{\gamma\sigma} g^{\beta\sigma} = - [ \pi^{\mu\nu}, g^{\alpha\beta}] \; \; {\rm and} \; \; [ g^{\alpha\beta}, g_{\mu\nu}] = 0 , \; \label{eq151} \end{eqnarray} which contain the contravariant components of the metric tensor $g^{\alpha\beta}$. Now, by using the Poisson brackets, Eqs.(\ref{eq15}) and (\ref{eq151}), defined above, we can determine the Poisson brackets of more complicated quantities and functions. As the first example we calculate the following Poisson bracket \begin{eqnarray} [ g_{\alpha\beta} g^{\lambda\sigma}, \pi^{\mu\nu}] = [ g_{\alpha\beta}, \pi^{\mu\nu}] g^{\lambda\sigma} + g_{\alpha\beta} [ g^{\lambda\sigma}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta} g^{\lambda\sigma} - \frac12 g_{\alpha\beta} \Bigl( g^{\lambda\mu} g^{\sigma\nu} + g^{\lambda\nu} g^{\sigma\mu} \Bigr) \; . \label{eq151a} \end{eqnarray} Let us assume that in this formula $\lambda = \beta$. In this case $g_{\alpha\beta} g^{\beta\sigma} = g^{\sigma}_{\alpha} = \delta^{\sigma}_{\alpha}$ and it is clear that $[ g^{\sigma}_{\alpha}, \pi^{\mu\nu}] = 0$.
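The fundamental brackets, Eqs.(\ref{eq15}) and (\ref{eq151}), can be reproduced explicitly in low dimension. The following sketch is our own illustration in $d = 2$ (the variable names are assumptions): the independent canonical pairs are the components $g_{\alpha\beta}$ with $\alpha \le \beta$ together with their conjugate momenta, and the tensor momenta are $\pi^{\alpha\alpha} = w_{\alpha\alpha}$, $\pi^{\alpha\beta} = w_{\alpha\beta}/2$ for $\alpha \ne \beta$, which is exactly what produces the gravitational delta-symbol $\Delta^{\mu\nu}_{\alpha\beta}$ instead of a plain Kronecker product.

```python
# Symbolic check of Eq.(eq15) and Eq.(eq151) in d = 2.
import sympy as sp

g11, g12, g22, w11, w12, w22 = sp.symbols('g11 g12 g22 w11 w12 w22')
q = {(0, 0): g11, (0, 1): g12, (1, 0): g12, (1, 1): g22}     # g_{ab}
p = {(0, 0): w11, (0, 1): w12 / 2, (1, 0): w12 / 2, (1, 1): w22}  # pi^{ab}
canon = [(g11, w11), (g12, w12), (g22, w22)]                 # canonical pairs

def pb(f1, f2):
    """Poisson bracket computed over the independent canonical pairs."""
    return sp.expand(sum(sp.diff(f1, qs) * sp.diff(f2, ws)
                         - sp.diff(f2, qs) * sp.diff(f1, ws)
                         for qs, ws in canon))

G = sp.Matrix([[g11, g12], [g12, g22]])
Gi = G.inv()                                  # contravariant components g^{ab}

def kron(a, b):
    return 1 if a == b else 0

def Delta(m, n, a, b):                        # gravitational delta-symbol
    return sp.Rational(1, 2) * (kron(m, a) * kron(n, b)
                                + kron(n, a) * kron(m, b))
```

With these definitions $[g_{\alpha\beta}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta}$ and $[g^{\alpha\beta}, \pi^{\mu\nu}] = -\frac12 (g^{\alpha\mu} g^{\beta\nu} + g^{\alpha\nu} g^{\beta\mu})$ hold component by component.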
On the other hand, if $\lambda = \beta$, then for the right-hand side of Eq.(\ref{eq151a}) one finds: \begin{eqnarray} \Delta^{\mu\nu}_{\alpha\beta} g^{\beta\sigma} - \frac12 g_{\alpha\beta} \Bigl( g^{\beta\mu} g^{\sigma\nu} + g^{\beta\nu} g^{\sigma\mu} \Bigr) = \frac12 \Bigl( \delta^{\mu}_{\alpha} g^{\nu\sigma} + \delta^{\nu}_{\alpha} g^{\mu\sigma} \Bigr) - \frac12 \Bigl( \delta^{\mu}_{\alpha} g^{\nu\sigma} + \delta^{\nu}_{\alpha} g^{\mu\sigma} \Bigr) = 0 \; , \; \end{eqnarray} which means that for $\lambda = \beta$ the equation, Eq.(\ref{eq151a}), is written in the form $0 = 0$ and we have no contradiction here. Now, consider the following Poisson bracket $[ g_{\alpha\beta} g^{\alpha\beta}, \pi^{\mu\nu} ]$. As is well known from tensor calculus (see, e.g., \cite{Kochin}), $g_{\alpha\beta} g^{\alpha\beta} = d$, where $d$ is the dimension of the tensor space. Therefore, this Poisson bracket is reduced to the equation \begin{eqnarray} 0 = [ g_{\alpha\beta}, \pi^{\mu\nu}] g^{\alpha\beta} + g_{\alpha\beta} [ g^{\alpha\beta}, \pi^{\mu\nu}] \; \; {\rm or} \; \; \Delta^{\mu\nu}_{\alpha\beta} g^{\alpha\beta} - \frac12 g_{\alpha\beta} \Bigl( g^{\alpha\mu} g^{\beta\nu} + g^{\alpha\nu} g^{\beta\mu} \Bigr) = 0 \; \; , \; \nonumber \end{eqnarray} which is easily transformed to an obvious identity $g^{\mu\nu} - g^{\nu\mu} = 0$.
Analogously, it is easy to find a number of remarkable relations between the Poisson brackets $[ g_{\alpha\beta}, \pi^{\mu\nu} ]$ and $[ g^{\alpha\beta}, \pi^{\mu\nu} ]$, the temporal derivatives of the covariant and contravariant components of the metric tensor, and the Poisson brackets of these components with the canonical Hamiltonian $H_C$, which directly follow from Eq.(\ref{eq151a}) (see also our `technical' Appendix A): \begin{eqnarray} &&[ g^{\sigma\gamma}, \pi^{\mu\nu} ] = - g^{\alpha\sigma} [ g_{\alpha\beta}, \pi^{\mu\nu} ] g^{\beta\gamma} = - g^{\alpha\sigma} \Delta^{\mu\nu}_{\alpha\beta} g^{\beta\gamma} = - \frac12 \Bigl( g^{\sigma\mu} g^{\gamma\nu} + g^{\sigma\nu} g^{\gamma\mu} \Bigr) \; , \; \label{PBOa} \\ % &&\frac{d g^{\sigma\gamma}}{d t} = - g^{\alpha\sigma} \Bigl(\frac{d g_{\alpha\beta}}{d t}\Bigr) g^{\beta\gamma} \; \; \; , \; \; \; [ g^{\sigma\gamma}, H_C ] = - g^{\alpha\sigma} [ g_{\alpha\beta}, H_C ] g^{\beta\gamma} \; . \; \label{PBO} \end{eqnarray} The second example is of great interest for analytical calculations of the Poisson brackets between components of momenta and some functions of the coordinates only. Let $F(g_{\mu\nu}, g^{\lambda\sigma})$ be an arbitrary analytical function (or functional) of the co- and contravariant components of the metric tensor. The Poisson bracket of this $F(g_{\mu\nu}, g^{\lambda\sigma})$ function and the components of the momentum tensor $\pi^{\alpha\beta}$ takes the form \begin{eqnarray} [ \pi^{\alpha\beta}, F(g_{\mu\nu}, g^{\lambda\sigma})] &=& - \frac{\partial F}{\partial g_{\mu\nu}} [ g_{\mu\nu}, \pi^{\alpha\beta}] - \frac{\partial F}{\partial g^{\sigma\lambda}} [ g^{\sigma\lambda}, \pi^{\alpha\beta}] \nonumber \\ &=& \frac12 \Bigl(\frac{\partial F}{\partial g^{\sigma\lambda}}\Bigr) \Bigl( g^{\sigma\alpha} g^{\lambda\beta} + g^{\sigma\beta} g^{\lambda\alpha} \Bigr) - \Bigl(\frac{\partial F}{\partial g_{\mu\nu}}\Bigr) \Delta^{\alpha\beta}_{\mu\nu} \; .
\label{eq152} \end{eqnarray} In particular, for a function (or functional) $F(g_{\mu\nu})$ of the covariant components of the metric tensor only, this Poisson bracket can be written in the form \begin{eqnarray} [ \pi^{\alpha\beta}, F(g_{\mu\nu})] = - \frac{\partial F}{\partial g_{\mu\nu}} [ g_{\mu\nu}, \pi^{\alpha\beta}] = - \Bigl(\frac{\partial F}{\partial g_{\mu\nu}}\Bigr) \Delta^{\alpha\beta}_{\mu\nu} \; . \label{eq152a} \end{eqnarray} In the case when $F = \Phi(g_{\mu\nu}) g_{\rho\sigma, k}$ we obtain \begin{eqnarray} [ \pi^{\alpha\beta}, \Phi(g_{\mu\nu}) g_{\rho\sigma, k} ] = - \Bigl(\frac{\partial \Phi}{\partial g_{\mu\nu}}\Bigr) \Delta^{\alpha\beta}_{\mu\nu} g_{\rho\sigma, k} - \Delta^{\alpha\beta}_{\rho\sigma} \Bigl( \Phi \Bigr)_{, k} = - \Bigl(\frac{\partial \Phi}{\partial g_{\alpha\beta}}\Bigr) g_{\rho\sigma, k} - \Bigl( \Phi \Bigr)_{, k} \Delta^{\alpha\beta}_{\rho\sigma} \; \; , \; \label{eq153a} \end{eqnarray} where we have applied integration by parts (see also the discussion at the end of this Section). The third example includes the Poisson brackets between the space-like components of the momenta $\pi^{mn}$ and the components of the Dirac space-like tensors $e_{pq}$ and/or $e^{pq}$. They are \begin{eqnarray} [ \pi^{mn}, e_{pq} ] = - \Delta^{mn}_{pq} \; \; , \; \; {\rm and} \; \; [ \pi^{mn}, e^{pq} ] = \frac12 \Bigl( g^{pm} g^{qn} + g^{pn} g^{qm} \Bigr) \; \; . \label{eq151A} \end{eqnarray} As follows from the first PB in Eq.(\ref{eq151A}) and from two other groups of Poisson brackets, $[ \pi^{mn}, \pi^{pq} ] = 0$ and $[ e_{mn}, e_{pq} ] = 0$, these $d (d - 1)$ variables $e_{mn}$ and $\pi^{pq}$ are canonical Hamiltonian variables in the $d (d - 1)$-dimensional subspace of the space-like dynamical variables of metric gravity. These PB play an important role in this study (see below).
From the formula, Eq.(\ref{eq152a}), one also finds \begin{eqnarray} [ \pi^{\alpha\beta}, F(g_{\mu\nu})] = - \Bigl(\frac{\partial F}{\partial g_{\mu\nu}}\Bigr) \Delta^{\alpha\beta}_{\mu\nu} = - \Bigl(\frac{\partial F}{\partial g_{\alpha\beta}}\Bigr) \Delta_{\alpha\beta}^{\mu\nu} = [ \pi^{\mu\nu}, F(g_{\alpha\beta})] \; \; , \; \; \end{eqnarray} or simply $[ \pi^{\alpha\beta}, F(g_{\mu\nu})] = [ \pi^{\mu\nu}, F(g_{\alpha\beta})]$. The principal point here is the presence of the tensor $\Delta$-symbol in these Poisson brackets. This equality simplifies calculations of a large number of Poisson brackets which are needed to establish the canonicity of different sets of Hamiltonian dynamical variables. Also, by using the formula, Eq.(\ref{eq152a}), we can determine another group of important Poisson brackets between the momenta and analytical functions of the fundamental determinant $g$ and its square root ($\sqrt{- g}$, as it is designated in the metric gravity). The general expression for the Poisson bracket between such a $F(g)$ function and $\pi^{\alpha\beta}$ is derived as follows \begin{eqnarray} [ F(g), \pi^{\alpha\beta}] = \Bigl( \frac{\partial F}{\partial g} \Bigr) \Bigl( \frac{\partial g}{\partial g_{\mu\nu}} \Bigr) [ g_{\mu\nu}, \pi^{\alpha\beta}] = \Bigl( \frac{\partial F}{\partial g} \Bigr) g g^{\mu\nu} \Delta^{\alpha\beta}_{\mu\nu} = \Bigl( \frac{\partial F}{\partial g} \Bigr) g g^{\alpha\beta} \; \; \; , \; \label{eq154} \end{eqnarray} or $[ \pi^{\alpha\beta}, F(g)] = - \Bigl( \frac{\partial F}{\partial g} \Bigr) g g^{\alpha\beta}$. Now, if $F(g) = g^{x}$, then one finds $[ \pi^{\alpha\beta}, F(g)] = -x F(g) g^{\alpha\beta}$.
In particular, for $F(g) = \sqrt{- g}$ and $F(g) = \frac{1}{\sqrt{- g}}$ we obtain from the last equation \begin{eqnarray} [ \sqrt{- g}, \pi^{\alpha\beta}] = - \frac{1}{2 \sqrt{- g}} g g^{\alpha\beta} = \frac12 \sqrt{- g} g^{\alpha\beta} \; \; {\rm and} \; \; [ \frac{1}{\sqrt{- g}}, \pi^{\alpha\beta}] = - \frac{1}{2 \sqrt{- g}} g^{\alpha\beta} \; , \; \label{eq155} \end{eqnarray} respectively. Another Poisson bracket is often needed in operations with both the primary and secondary constraints: \begin{eqnarray} [ \pi^{0 \gamma}, \frac{g^{\sigma \lambda}}{g^{0 0}} ] = \frac{1}{2 g^{0 0}} \Bigl( g^{0 \sigma} g^{\gamma \lambda} + g^{\gamma \sigma} g^{0 \lambda} - 2 g^{\sigma \lambda} g^{0 \gamma} \Bigr) \; \; . \; \end{eqnarray} If $\lambda = 0$ here, then one finds \begin{eqnarray} [ \pi^{0 \gamma}, \frac{g^{\sigma 0}}{g^{0 0}} ] = \frac{1}{2 g^{0 0}} \Bigl( g^{\gamma \sigma} g^{0 0} - g^{0 \sigma} g^{0 \gamma} \Bigr) \; \; . \; \end{eqnarray} Now, if we additionally assume that $\sigma = 0$, then this Poisson bracket equals zero identically (as expected). To conclude our discussion of the Poisson brackets, let us make the two following remarks. First, as was shown in \cite{K&K} and \cite{PirSS}, in metric gravity the Poisson brackets between any two primary constraints $\phi^{0 \sigma}$ and $\phi^{0 \gamma}$, Eq.(\ref{primary}), always equal zero, i.e., $[ \phi^{0 \sigma}, \phi^{0 \gamma} ] = 0$. In fact, the explicit derivation of this formula is a very good and relatively simple exercise in the calculation of Poisson brackets (see also the discussion below). Thus, in the metric gravity all primary constraints commute with each other. This drastically simplifies many important steps of our procedure described below.
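The determinant brackets, Eq.(\ref{eq155}), and the constraint-related bracket above can be verified in the same low-dimensional setting. The sketch below is our own check in $d = 2$ (the variable names and the numerical evaluation point with $\det g < 0$ are assumptions):

```python
# Check Eq.(eq155) and the bracket [pi^{0 gamma}, g^{sigma 0}/g^{00}] in d = 2.
import sympy as sp

g11, g12, g22, w11, w12, w22 = sp.symbols('g11 g12 g22 w11 w12 w22')
p = {(0, 0): w11, (0, 1): w12 / 2, (1, 0): w12 / 2, (1, 1): w22}
canon = [(g11, w11), (g12, w12), (g22, w22)]

def pb(f1, f2):
    return sum(sp.diff(f1, qs) * sp.diff(f2, ws)
               - sp.diff(f2, qs) * sp.diff(f1, ws) for qs, ws in canon)

G = sp.Matrix([[g11, g12], [g12, g22]])
Gi = G.inv()
det = G.det()
sg = sp.sqrt(-det)                 # sqrt(-g); real for a Lorentzian metric

# [sqrt(-g), pi^{00}] - (1/2) sqrt(-g) g^{00}  (should vanish)
lhs1 = pb(sg, p[(0, 0)]) - sp.Rational(1, 2) * sg * Gi[0, 0]
# [1/sqrt(-g), pi^{00}] + g^{00}/(2 sqrt(-g))  (should vanish)
lhs2 = pb(1 / sg, p[(0, 0)]) + Gi[0, 0] / (2 * sg)
# [pi^{0 1}, g^{1 0}/g^{00}] - (g^{11} g^{00} - g^{01} g^{01})/(2 g^{00})
lhs3 = (pb(p[(0, 1)], Gi[1, 0] / Gi[0, 0])
        - (Gi[1, 1] * Gi[0, 0] - Gi[0, 1] * Gi[0, 1]) / (2 * Gi[0, 0]))
```

All three differences vanish, which can be checked either by symbolic simplification or, as below, by evaluating them at a sample metric with a negative determinant.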
Second, we have to explain calculations of the Poisson brackets between momenta and expressions which include some spatial derivatives of the metric tensor, such as $\Phi(g_{\mu\nu}) g_{\gamma\lambda,k}, \Phi(g_{\mu\nu}) g_{\gamma\lambda,k} g_{\rho\sigma,m}, g_{\gamma\lambda,k} g_{\rho\sigma,m}$, etc. These and other similar expressions arise very often in actual Hamiltonian formulations, and they can be found, e.g., in operations with both the canonical and total Hamiltonians, primary and secondary constraints and other expressions. Analytical calculations of such Poisson brackets are performed by using `integration by parts' (see the text around Eq.(\ref{eq153a})). To explain a few hidden details of such calculations let us consider the following Poisson brackets \begin{eqnarray} &&[\pi^{\alpha\beta}, \Phi(g_{\mu\nu}) g_{\gamma\lambda,p} g_{\rho\sigma,q}] = -\frac{\partial \Phi(g_{\mu\nu})}{\partial g_{\mu\nu}} \Delta^{\alpha\beta}_{\mu\nu} g_{\gamma\lambda,p} g_{\rho\sigma,q} + \Bigl[\Phi(g_{\mu\nu}) g_{\rho\sigma,q}\Bigr]_{,p} \Delta^{\alpha\beta}_{\gamma\lambda} + \Bigl[\Phi(g_{\mu\nu}) g_{\gamma\lambda,p}\Bigr]_{,q} \Delta^{\alpha\beta}_{\rho\sigma} \; \; \nonumber \\ &&= -\frac{\partial \Phi}{\partial g_{\mu\nu}} g_{\gamma\lambda,p} g_{\rho\sigma,q} \Delta^{\alpha\beta}_{\mu\nu} + \Phi_{,p} g_{\rho\sigma,q} \Delta^{\alpha\beta}_{\gamma\lambda} + \Phi g_{\rho\sigma,qp} \Delta^{\alpha\beta}_{\gamma\lambda} + \Phi_{,q} g_{\gamma\lambda,p} \Delta^{\alpha\beta}_{\rho\sigma} + \Phi g_{\gamma\lambda,pq} \Delta^{\alpha\beta}_{\rho\sigma} \; \; , \; \label{AZet} \end{eqnarray} where $\Phi(x)$ is a scalar function of tensor argument(s). This formula can be simplified even further, but our goal here is to illustrate analytical computations of the Poisson brackets of momenta and some special functions and expressions which contain spatial derivatives of the metric tensor.
In particular, the formula, Eq.(\ref{AZet}), explains the presence of second-order spatial derivatives of the metric tensor in some formulas below. All Poisson brackets mentioned above are crucially important for the goals of this study, since they define the unique symplectic structure of our $d (d + 1)$-dimensional (tensor) phase space $\{ g_{\alpha\beta}, \pi^{\mu\nu} \}$, which is closely related to the original $d-$dimensional Riemann space in the metric gravity. In other words, such a symplectic structure is uniquely determined by the Poisson brackets between the covariant components of the fundamental metric tensor $g_{\alpha\beta}$ and the contravariant components $\pi^{\mu\nu}$ of the momentum tensor. Finally, we have to note that there is an alternative approach to develop Hamiltonian formulations of the metric gravity which is based on the use of covariant components of momenta $\pi_{\mu\nu}$. In some sense this new approach is simpler than the method discussed above, but its applications lead to a re-consideration of the fundamental principles of the classical Hamiltonian procedure, of operations in the dual phase space, and of combinations of both the straight and dual phase spaces for the tensor fields. This alternative approach and the arising dual phase space are briefly considered below. \subsection{Covariant components of momenta. On the dual phase space} In actual physical theories of tensor fields an arbitrary tensor can be represented either by its covariant, or contravariant components. For an arbitrary Riemann space the relations between co- and contravariant components of the same vector, or tensor-like quantity are determined by the co- and contravariant components of the fundamental metric tensor $g_{\alpha\beta}$ and $g^{\alpha\beta}$. Therefore, one can always represent the same tensor equations in both the covariant and contravariant forms.
In general, many problems from tensor calculus can be simplified (even substantially), if they are re-written in the contravariant components of the same tensors, and vice versa (some examples are considered in \cite{Kochin} and \cite{Dash}). The metric gravity can be one of such problems, since both the canonical and total Hamiltonians, Eqs.(\ref{eq5}) and (\ref{eq1}), contain multiple products of many contravariant components of the fundamental tensor $g^{\alpha\beta}$. Therefore, if we can properly define the covariant components of momenta $\pi_{\lambda\sigma}$, then our original problem can be drastically simplified. Let us define the covariant components of momenta by the relation $\pi_{\lambda\sigma} = g_{\lambda\mu} \pi^{\mu\nu} g_{\nu\sigma}$. Note here that in any Hamiltonian formulation of the metric gravity, the role of the fundamental tensor $g_{\alpha\beta}$ is always twofold. First, it is traditionally used to raise and lower indices in vector and tensor expressions. On the other hand, in all Hamiltonian formulations of the metric gravity the components of the fundamental tensor $g_{\alpha\beta}$ are traditionally chosen as the generalized coordinates, i.e., dynamical variables which are canonically conjugate to the corresponding momenta $\pi^{\mu\nu}$. Such a twofold role of the fundamental tensor $g_{\alpha\beta}$ (and $g^{\alpha\beta}$) in the Hamiltonian metric gravity leads to an additional problem, since the momenta $\pi^{\mu\nu}$ do not commute with the coordinates $g_{\alpha\beta}$. In turn, this means that the following definitions of the covariant components of momenta: $\pi_{\lambda\sigma} = g_{\lambda\mu} g_{\nu\sigma} \pi^{\mu\nu}, \pi_{\lambda\sigma} = g_{\lambda\mu} \pi^{\mu\nu} g_{\nu\sigma}$ and $\pi_{\lambda\sigma} = \pi^{\mu\nu} g_{\lambda\mu} g_{\nu\sigma}$ are not equivalent to each other.
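At the level of the components themselves, the adopted definition is just a double lowering of indices. A minimal numerical illustration (our own \texttt{numpy} sketch, with random symmetric matrices standing in for the metric and momentum tensors) shows that the round trip with the inverse metric restores $\pi^{\mu\nu}$:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4

# random symmetric, well-conditioned stand-ins for g_{mu nu} and pi^{mu nu}
g_cov = rng.normal(size=(d, d))
g_cov = (g_cov + g_cov.T) / 2 + d * np.eye(d)
pi_con = rng.normal(size=(d, d))
pi_con = (pi_con + pi_con.T) / 2

g_con = np.linalg.inv(g_cov)          # contravariant metric g^{mu nu}
pi_cov = g_cov @ pi_con @ g_cov       # pi_{lam sig} = g_{lam mu} pi^{mu nu} g_{nu sig}

# raising both indices back restores the contravariant momenta
assert np.allclose(g_con @ pi_cov @ g_con, pi_con)
# the lowered tensor is symmetric again
assert np.allclose(pi_cov, pi_cov.T)
# the two mixed forms g^{rho lam} pi_{lam kap} and g_{kap lam} pi^{lam rho} agree
assert np.allclose(g_con @ pi_cov, (g_cov @ pi_con).T)
```

As plain commuting functions the three orderings quoted above coincide, which is why the distinction only becomes essential at the level of the Poisson brackets discussed next.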
Indeed, it is easy to show that $\pi_{\lambda\sigma} = g_{\lambda\mu} g_{\nu\sigma} \pi^{\mu\nu} \ne g_{\lambda\mu} \pi^{\mu\nu} g_{\nu\sigma}$, since $[ g_{\nu\sigma}, \pi^{\mu\nu} ] = \Delta^{\mu\nu}_{\nu\sigma} = \frac12 \Bigl( \delta^{\mu}_{\nu} \delta^{\nu}_{\sigma} + \delta^{\mu}_{\sigma} \Bigr) \ne 0$ in the general case. To avoid repetitive discussions of similar problems in this study we shall always define the covariant components of momenta by the relation $\pi_{\lambda\sigma} = g_{\lambda\mu} \pi^{\mu\nu} g_{\nu\sigma}$ mentioned above. By using this definition of the covariant momenta we can determine the following Poisson brackets \begin{eqnarray} [ g_{\alpha\beta}, \pi_{\mu\nu}] = \frac12 \Bigl( g_{\alpha\mu} g_{\beta\nu} + g_{\alpha\nu} g_{\beta\mu} \Bigr) \; \; {\rm and} \; \; [ g^{\alpha\beta}, \pi_{\mu\nu}] = - \frac12 \Bigl( g^{\alpha}_{\mu} g^{\beta}_{\nu} + g^{\alpha}_{\nu} g^{\beta}_{\mu} \Bigr) = - \Delta^{\alpha\beta}_{\mu\nu} \; \; \label{eq153} \end{eqnarray} and also $[ g^{\alpha\beta}, g^{\mu\nu}] = 0$ and $[ g_{\alpha\beta}, g^{\mu\nu}] = 0$. The formulas Eqs.(\ref{eq154}), (\ref{eq155}) and others can be re-derived for the covariant components of momentum $\pi_{\alpha\beta}$: \begin{eqnarray} [ F(g), \pi_{\alpha\beta}] = \Bigl( \frac{\partial F}{\partial g} \Bigr) g g_{\alpha\beta} \; , \; [ \sqrt{- g}, \pi_{\alpha\beta}] = \frac12 \sqrt{- g} g_{\alpha\beta} \; , \; [ \frac{1}{\sqrt{- g}}, \pi_{\alpha\beta}] = - \frac{1}{2 \sqrt{- g}} g_{\alpha\beta} \; . \; \label{eq155a} \end{eqnarray} These Poisson brackets are also important for performing analytical calculations in the Hamiltonian formulation of the metric gravity. As follows from these formulas the dynamical variables $\{ g^{\alpha\beta}, \pi_{\mu\nu}\}$ form another set of Hamiltonian dynamical variables which is often called the dual set of Hamiltonian (dynamical) variables.
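These brackets can be verified on an explicit finite-dimensional realization. In the sketch below (our own construction in \texttt{sympy}, for $d = 2$) the independent components $g_{\mu\nu}$ with $\mu \le \nu$ and suitably rescaled conjugate momenta realize the fundamental bracket $[ g_{\alpha\beta}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta}$ exactly, and the covariant-momentum brackets, including $[ g, \pi_{\alpha\beta}] = g\, g_{\alpha\beta}$ from the first formula of Eq.(\ref{eq155a}) with $F(g) = g$, then follow by direct computation:

```python
import sympy as sp

d = 2
idx = [(0, 0), (0, 1), (1, 1)]
q = {ij: sp.Symbol(f'q_{ij[0]}{ij[1]}') for ij in idx}
p = {ij: sp.Symbol(f'p_{ij[0]}{ij[1]}') for ij in idx}

def pb(F, H):
    # canonical bracket over the independent pairs (q_ij, p_ij)
    return sum(sp.diff(F, q[ij]) * sp.diff(H, p[ij]) -
               sp.diff(F, p[ij]) * sp.diff(H, q[ij]) for ij in idx)

gcov = sp.Matrix([[q[(0, 0)], q[(0, 1)]], [q[(0, 1)], q[(1, 1)]]])
# the off-diagonal momentum carries a factor 1/2, which is exactly what
# reproduces [g_{ab}, pi^{mn}] = Delta^{mn}_{ab}
picon = sp.Matrix([[p[(0, 0)], p[(0, 1)] / 2], [p[(0, 1)] / 2, p[(1, 1)]]])
gcon = gcov.inv()
picov = gcov * picon * gcov        # pi_{mu nu} = g_{mu a} pi^{ab} g_{b nu}
detg = gcov.det()

def Delta(a, b, m, n):
    return sp.Rational((a == m) * (b == n) + (a == n) * (b == m), 2)

for a in range(d):
    for b in range(d):
        for m in range(d):
            for n in range(d):
                # fundamental bracket [g_{ab}, pi^{mn}] = Delta^{mn}_{ab}
                assert sp.simplify(pb(gcov[a, b], picon[m, n]) - Delta(m, n, a, b)) == 0
                # [g_{ab}, pi_{mn}] = (g_{am} g_{bn} + g_{an} g_{bm}) / 2
                assert sp.simplify(pb(gcov[a, b], picov[m, n]) -
                                   (gcov[a, m] * gcov[b, n] + gcov[a, n] * gcov[b, m]) / 2) == 0
                # [g^{ab}, pi_{mn}] = -Delta^{ab}_{mn}
                assert sp.simplify(pb(gcon[a, b], picov[m, n]) + Delta(a, b, m, n)) == 0
        # F(g) = g: [g, pi_{ab}] = g g_{ab}
        assert sp.simplify(pb(detg, picov[a, b]) - detg * gcov[a, b]) == 0
```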
In general, this dual set of Hamiltonian variables $\{ g^{\alpha\beta}, \pi_{\mu\nu}\}$ can also be used to develop another Hamiltonian formulation of the metric gravity which is simpler than the approach described above. Thus, for the tensor field in the metric gravity we always have two sets of canonical Hamiltonian variables: (a) the straight (or natural) set $\{ g_{\alpha\beta}, \pi^{\mu\nu}\}$, and (b) the dual set of dynamical variables $\{ g^{\alpha\beta}, \pi_{\mu\nu}\}$ \cite{Fro2021}. Further analysis \cite{Fro2021} shows that two similar sets of dynamical Hamiltonian variables (straight and dual sets) always arise and exist in any Hamiltonian formulation of the tensor field theory, and they are related to each other by a special canonical transformation. Also, there is a beautiful formula \cite{Fro2021} for the Poisson brackets which unites both the straight and dual sets of dynamical variables defined for the same Hamiltonian system \begin{eqnarray} [ g_{\alpha\beta}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta} = [ \pi_{\alpha\beta}, g^{\mu\nu}] \; \; \; . \label{eq1551} \end{eqnarray} Another interesting Poisson bracket in the metric gravity is \begin{equation} [ \pi_{\alpha\beta}, \pi^{\mu\nu}] = \frac12 \Bigl( \delta_{\alpha}^{\mu} \pi_{\beta}^{\nu} + \delta_{\alpha}^{\nu} \pi_{\beta}^{\mu} + \delta_{\beta}^{\mu} \pi_{\alpha}^{\nu} + \delta_{\beta}^{\nu} \pi_{\alpha}^{\mu} \Bigr) = - [ \pi^{\mu\nu}, \pi_{\alpha\beta} ] \; \; \; , \; \; \label{pipi} \end{equation} where $\pi_{\kappa}^{\rho} = g^{\rho \lambda} \pi_{\lambda \kappa} = g_{\kappa \lambda} \pi^{\lambda \rho}$. This formula shows that the co- and contravariant components of the momentum tensor do not commute with each other. On the other hand, if they commuted, then the straight and dual sets of symplectic dynamical variables in the metric gravity would be equivalent to each other, and there would be no real need to keep these two sets of dynamical variables (straight and dual).
Indeed, in this case one can always express one set of dynamical variables in terms of another set, and vice versa. Such cases include all Hamiltonian theories developed for the truly scalar fields and those fields which are represented by affine vectors and tensors. However, this is not true for the metric gravity and for other theories developed for actual tensor fields in multi-dimensional Riemann spaces of non-zero curvature. In general, to develop a truly correct and covariant Hamiltonian formulation for many dynamical systems of tensor fields it is much better to deal with the mixed set of $2 d (d + 1)$ Hamiltonian dynamical variables. This big set is a unification of the two different $d (d + 1)-$dimensional sets of Hamiltonian dynamical variables: (a) the straight set $\{ g_{\alpha\beta}, \pi^{\mu\nu}\}$, and (b) the corresponding dual set $\{ g^{\alpha\beta}, \pi_{\mu\nu}\}$. Applications of the two sets of dynamical variables make our Hamiltonian formulation complete, truly covariant and physically transparent. In addition to this, the simultaneous use of the straight and dual sets of Hamiltonian dynamical variables allows one to write canonical transformations of the Hamiltonian dynamical variables in the most general and powerful form. \section{Hamilton equations of motion} The main goal of any Hamiltonian formulation of some physical theory is to derive the correct Hamilton equations of motion by following the well established and physically transparent Hamilton procedure, which has its internal logic based on Stokes' theorem in multi-dimensions. In general, the Hamilton method always provides a remarkable simplicity and universality in applications to actual dynamical systems and fields. Each of the Hamilton equations describes the complete time-evolution of one of the Hamiltonian dynamical variables.
These correct Hamilton equations (or canonical equations) for the metric gravity are written in the following form \begin{eqnarray} \frac{d g_{\alpha\beta}}{d x^{0}} = [ g_{\alpha\beta}, H_t ] \; \; \; {\rm and} \; \; \; \frac{d \pi^{\mu\nu}}{d x^{0}} = [ \pi^{\mu\nu}, H_t ] \; \; , \; \; \label{Hamtequat} \end{eqnarray} where $x^{0}$ is the temporal variable, and the expressions in the right-hand sides of both equations are the Poisson brackets. In other words, the first-order time derivative of each of the Hamiltonian variables equals the corresponding Poisson bracket of this variable with the total Hamiltonian $H_t$, Eq.(\ref{eq1}). The explicit form of these Hamiltonian equations and their solutions are discussed in \cite{Fro2021}. In particular, for the space-like components of the metric tensor $g_{ij}$ one finds the following system of Hamilton equations \cite{Fro2021}: \begin{eqnarray} \frac{d g_{ij}}{d x^{0}} &=& [ g_{ij}, H_{t} ] = [ g_{ij}, H_{C} ] = \frac{2}{\sqrt{-g} g^{00}} I_{(ij)pq} \pi^{pq} - \frac{1}{g^{00}} I_{(ij)pq} B^{(p q 0|\mu \nu k)} g_{\mu\nu,k} \; \label{eq25} \\ &=& \frac{2}{\sqrt{-g} g^{00}} I_{(ij)pq} \Bigl[ \pi^{pq} - \frac12 \sqrt{-g} B^{(p q 0|\mu \nu k)} g_{\mu\nu,k} \Bigr] \; , \nonumber \end{eqnarray} where the notation $I_{(ij)pq}$ designates the $(ij)-$symmetrized value of the $I_{ijpq}$ tensor defined in Eq.(\ref{I}), i.e., \begin{equation} I_{(ij)pq} = \frac12 \Bigl( I_{ijpq} + I_{jipq} \Bigr) = \frac{1}{d - 2} g_{ij} g_{pq} - \frac12 ( g_{ip} g_{jq} + g_{iq} g_{jp} ) \; \; \; . \label{eq26} \end{equation} Now, let us consider the Poisson brackets for the covariant components $g_{0\sigma}$ of the fundamental tensor. It is clear that the Poisson bracket of any $g_{0\sigma}$ component with the canonical Hamiltonian $H_C$, Eq.(\ref{eq5}), equals zero identically.
Therefore, the Hamilton equations of motion for the covariant $g_{0 \sigma} (= g_{\sigma 0}$) components of the metric tensor take the form \begin{eqnarray} \frac{d g_{0 \sigma}}{d x_0} = [ g_{0 \sigma}, H_{t}] = [ g_{0 \sigma}, H_{t} - H_C] = g_{0 \sigma,0} \; \; \label{eq253} \end{eqnarray} and analogous equations for the $g_{\gamma 0}$ components. These formulas are, in fact, the definitions of the $\sigma-$velocities, where $\sigma = 0, 1, \ldots, d - 1$, which essentially coincide with the coefficients in front of the primary constraints in the total Hamiltonian, Eq.(\ref{eq1}). As follows from Eq.(\ref{PBO}) there is no need to derive and solve the equations of motion for the contravariant components of the metric tensor $g^{\alpha\beta}$. Indeed, if we know the time evolution of all covariant components $g_{\mu\nu}$, then from Eq.(\ref{PBO}) one easily finds the exact description of the time evolution for each $g^{\alpha\beta}$ component. In general, the Hamilton equations of motion for the contravariant components of momenta $\pi^{\alpha\beta}$ are significantly more complicated (see, e.g., \cite{Fro2021}) than the analogous equations for the $g_{\alpha\beta}$ components (our coordinates). However, all these complications are purely technical, and they are mainly related to a large number of Poisson brackets which must be determined in order to describe the complete time-evolution of all momentum variables $\pi^{\mu\nu}$.
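The canonical structure of Eq.(\ref{Hamtequat}) -- every time derivative given by a Poisson bracket with the Hamiltonian -- is, of course, generic. A minimal finite-dimensional sketch (a harmonic oscillator standing in for the field system; our own \texttt{sympy} illustration, not the gravitational Hamiltonian) makes this structure explicit:

```python
import sympy as sp

q, p = sp.symbols('q p')
m, w = sp.symbols('m omega', positive=True)

def pb(F, H):
    # one-degree-of-freedom Poisson bracket
    return sp.diff(F, q) * sp.diff(H, p) - sp.diff(F, p) * sp.diff(H, q)

# toy Hamiltonian standing in for H_t (NOT the gravitational one)
H = p**2 / (2 * m) + m * w**2 * q**2 / 2

# Hamilton equations dq/dx0 = [q, H] and dp/dx0 = [p, H]
assert sp.simplify(pb(q, H) - p / m) == 0          # dq/dx0 = p/m
assert sp.simplify(pb(p, H) + m * w**2 * q) == 0   # dp/dx0 = -m w^2 q
```

In the metric gravity the same two lines are replaced by Eq.(\ref{eq25}) for the coordinates and by the much longer momentum equations discussed next.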
To understand the scale of this problem let us present here the Hamiltonian equations of motion for the contravariant space-like components of the momentum tensor $\pi^{ab}$: \begin{eqnarray} & &\frac{d \pi^{ab}}{d x_0} = [ \pi^{ab}, H_{t} ] = [ \pi^{ab}, H_{C} ] = - \frac{1}{g^{00}} \Bigl[ \frac{I_{mnpq}}{\sqrt{-g}}, \pi^{ab} \Bigr] \pi^{mn} \pi^{pq} \nonumber \\ &+& \frac{1}{g^{00}} \Bigl[ I_{mnpq}, \pi^{ab} \Bigr] \pi^{mn} B^{(p q 0|\mu \nu k)} g_{\mu\nu,k} + \frac{1}{g^{00}} I_{mnpq} \pi^{mn}\Bigl[ B^{(p q 0|\mu \nu k)}, \pi^{ab} \Bigr] g_{\mu\nu,k} + \ldots \; . \label{eq255} \end{eqnarray} This formula clearly indicates that the Hamilton equations in the metric gravity which describe the time-evolution of momenta are significantly more complicated than the analogous equations for the time-evolution of the generalized coordinates $g_{\alpha\beta}$. In general, the Poisson bracket $[ \pi^{ab}, H_{t} ]$ is determined term-by-term. As follows from the Hamilton equations presented above, the Hamilton method itself has a number of problems when it is applied to the metric gravity, or other dynamical systems with first-class constraints. First, we note that to write the Hamilton equations of actual motion we need only the canonical Hamiltonian $H_C$ (not the total Hamiltonian $H_t$). Indeed, these equations are: \begin{eqnarray} \frac{d g_{m n}}{d x_0} = [ g_{m n}, H_{C} ] \; \; \; {\rm and} \; \; \; \frac{d \pi^{p q}}{d x_0} = [ \pi^{p q}, H_{C} ] \; \; \label{eq255A} \end{eqnarray} and there are $d (d - 1)$ of these Hamilton equations of actual motion. Second, the Hamilton equations mentioned above, Eq.(\ref{eq255A}), do not contain the temporal momenta $\pi^{0 \mu}$ and/or $\pi^{\nu 0}$ at all. This means that in this framework we cannot describe the time-evolution of the temporal components of the metric tensor $g_{0 \mu}$ and $g_{\nu 0}$ (our coordinates).
Moreover, it is not entirely clear where we can get these equations from, since all these temporal momenta are included in our Hamiltonian formulation of the metric gravity only as primary constraints. Formally, to solve this problem one introduces into the new Hamilton equations either the total Hamiltonian $H_t$, or the difference $H_t - H_C$, which is a linear combination of the primary constraints $\phi^{0 \gamma}$, Eq.(\ref{primary}). However, the coefficients in this linear combination are the $\sigma-$velocities, which are, in fact, arbitrary parameters of the method, rather than its dynamical variables. These arguments lead to an unambiguous conclusion that the Hamiltonian method itself must be substantially modified, if we want to apply it successfully to constrained dynamical systems, including the metric gravity. Such a modification was carried out by Dirac in his papers \cite{Dir58} - \cite{Dir64} and will be analyzed in detail in Section VII, but here we just want to mention its main steps. First of all, Dirac accepted all Hamilton equations from Eqs.(\ref{eq255A}) as the equations which correctly describe the actual motions in our dynamical system. Thus, these $d (d - 1)$ Hamilton equations have been incorporated in the new Dirac's modification of the classical Hamilton method. At the second step, Dirac rejected the $d$ equations, Eq.(\ref{eq253}), which are formally correct but practically useless. Instead, these equations have been replaced by an equal number of equations which describe the time-evolution of the primary constraints $\phi^{0 \sigma}$ and define the new secondary constraints $\chi^{0 \sigma}$, i.e., $\frac{d \phi^{0 \sigma}}{d x_0} = [ \phi^{0 \sigma}, H_t] = [ \phi^{0 \sigma}, H_C] = \chi^{0 \sigma}$. Here we apply the fact that all primary constraints commute with each other, i.e., $[ \phi^{0\sigma}, \phi^{0\gamma}] = 0$ (see above).
At the next (third) step Dirac explicitly derived the Hamilton equations which describe the time-evolution of all $d$ secondary constraints $\chi^{0 \sigma}$: $\frac{d \chi^{0 \sigma}}{d x_0} = [ \chi^{0 \sigma}, H_t] = [ \chi^{0 \sigma}, H_C] = D^{\sigma}_{c} = a^{\sigma}_{\mu}(g) \chi^{0 \mu} + b^{\sigma}_{\mu} (g) \Bigl( f^{k}(g) \chi^{0 \mu} \Bigr)_{,k}$, where the function (or functional) $D^{\sigma}_{c}$ is the $\sigma-$component of the Dirac closure, which is a quasi-linear combination of the same secondary constraints and total spatial derivatives of expressions which contain the same secondary constraints. All further temporal derivatives of the Dirac closure will produce only similar quasi-linear combinations of secondary constraints and a few total spatial derivatives of them. The process of time-evolution is formally closed, since we will never see anything new in this chain. Briefly, this means that in Dirac's modification of the classical Hamiltonian method all $d$ primary and $d$ secondary constraints are considered as the new Hamiltonian dynamical variables. Note that the equations which describe the time-evolution of these new dynamical variables are written in a manifestly Hamilton form. Let us show how this procedure works in the case of metric gravity.
The Poisson brackets between the primary constraints $\phi^{0\sigma}$ and canonical Hamiltonian are \cite{K&K}: \begin{eqnarray} & & [ \phi^{0\sigma}, H_C ] = \frac{d \phi^{0\sigma}}{d x_0} = \chi^{0\sigma} = -\frac{g^{0\sigma}}{2 \sqrt{-g} g^{00}} I_{mnpq} \pi^{mn} \pi^{pq} \nonumber \\ &+& \frac{g^{0\sigma}}{2 g^{00}} I_{mnpq} \pi^{mn} U^{( pq0 \mid \mu\nu k )} g_{\mu\nu,k} + \Bigl[ \pi_{,k}^{\sigma k} + \Bigl(\pi^{pk} e^{q \sigma} - \frac12 \pi^{pq} e^{k \sigma}) g_{pq,k} \Bigr] \nonumber \\ &-& \frac{\sqrt{-g}}{8} \Bigl(\frac{g^{0\sigma}}{g^{00}} I_{mnpq} B^{((mn) 0 \mid \mu \nu k)} B^{(pq0 \mid \alpha \beta t)} - g^{0\sigma} B^{\mu \nu k \alpha \beta t} \Bigr) g_{\mu\nu,k} g_{\alpha\beta,t} \nonumber \\ &+& \frac{\sqrt{-g}}{4 g^{00}} I_{mnpq} B^{((mn) 0 \mid\mu\nu k )} g_{\mu\nu,k} g_{\alpha\beta,t} \Bigl[ g^{\sigma t} \Bigl( g^{00} g^{p \alpha} g^{q \beta} + g^{pq} g^{0 \alpha} g^{0 \beta} - 2 g^{\alpha q} g^{0 p} g^{0 \beta} \Bigr) \nonumber \\ &-& g^{\sigma p} \Bigl( 2 g^{00} g^{q \alpha} g^{t \beta} - g^{00} g^{\alpha \beta} g^{q t} + g^{\alpha\beta} g^{0q} g^{0t} - 2 g^{q \alpha} g^{0 \beta} g^{0t} - 2 g^{t \alpha} g^{0 \beta} g^{0q} + 2 g^{qt} g^{0\alpha} g^{0\beta} \Bigr) \nonumber \\ &+& g^{0\sigma} ( 2 g^{\beta t} g^{\alpha p} g^{0q} - 2 g^{p\alpha} g^{q\beta} g^{0t} - 2 g^{pq} g^{t\beta} g^{0\alpha} + 2 g^{pt} g^{q\beta} g^{0\alpha} + g^{pq} g^{\alpha\beta} g^{0t} - g^{tp} g^{\alpha\beta} g^{0q}) \Bigr] \nonumber \\ &-& \frac{\sqrt{-g}}{4} g_{\mu\nu,k} g_{\alpha\beta,t} \Bigl[ g^{\sigma t} ( g^{\alpha\mu} g^{\beta\nu} g^{0k} + g^{\mu\nu} g^{\alpha t} g^{0\beta} - 2 g^{\mu\alpha} g^{k\nu} g^{0\beta} ) \nonumber \\ &+& g^{0\sigma} ( 2 g^{\alpha t} g^{\beta\mu} g^{\nu k} - 3 g^{t\mu} g^{\nu k} g^{\alpha\beta} - 2 g^{\mu\alpha} g^{\nu\beta} g^{kt} + g^{\mu\nu} g^{kt} g^{\alpha\beta} + 2 g^{\mu t} g^{\nu\beta} g^{k\alpha}) \nonumber \\ &+& g^{\sigma\mu} \Bigl( (g^{\alpha\beta} g^{\nu t} - 2 g^{\nu\alpha} g^{t\beta}) g^{0k} + 2 ( g^{\beta\nu} g^{kt} - 
g^{\beta k} g^{t\nu}) g^{0\alpha} + ( 2 g^{k\beta} g^{\alpha t} - g^{\alpha\beta} g^{kt}) g^{0\nu}\Bigr) \Bigr] \nonumber \\ &-& \frac{\sqrt{-g} g^{00}}{2} E^{pqt\sigma} \Bigl( \frac{1}{g^{00}} I_{mnpq} B^{((mn)0 \mid \mu\nu k)} g_{\mu\nu,k} \Bigr)_{,t} - \frac{\sqrt{-g}}{2} B^{((\sigma 0) k \mid \alpha \beta t)} g_{\alpha\beta,kt} \; , \; \label{eqn8} \end{eqnarray} where $U^{( pq 0 \mid \mu\nu k )}$ is the symmetrized form of the following expression \begin{eqnarray} U^{\alpha\beta 0 \mu\nu k} = B^{(\alpha\beta 0 \mid \mu \nu k)} - g^{0k} E^{\alpha \beta \mu \nu} + 2 g^{0\mu} E^{\alpha \beta k \nu} \; \; \label{Sab} \end{eqnarray} and $\sigma = 0, 1, \ldots, d - 1$. Thus, the corresponding Poisson brackets of the primary constraints with the canonical Hamiltonian $H_C$ are new functions of the generalized coordinates $g_{\alpha\beta}$ (or $g^{\sigma\gamma}$) and momenta $\pi^{\mu\nu}$. In the terminology introduced by Dirac (see, e.g., \cite{Dir64}), these $\chi^{0\sigma}$ functions are the secondary constraints of this Hamiltonian formulation. Briefly, the definition of the secondary constraints is written in the form: $\chi^{0\sigma} = [ \phi^{0\sigma}, H_C ]$, where $\sigma = 0, 1, \ldots, d - 1$. This means that in the metric gravity we always have $d$ secondary constraints $\chi^{0\sigma}$ (= $\chi^{\sigma 0}$). On actual Hamiltonian trajectories (and only on these trajectories) of the free gravitational field these secondary constraints must be equal to zero, i.e., we can write the following weak equations $\chi^{0\sigma} \approx 0$ for $\sigma = 0, 1, \ldots, d - 1$. Note also that the Poisson brackets between the primary and secondary constraints are $[ \phi^{0\gamma}, \chi^{0\sigma} ] = \frac12 g^{\gamma\sigma} \chi^{0 0}$ \cite{Fro2021}.
It can also be shown that all primary $\phi^{0\lambda}$ and secondary constraints $\chi^{0\sigma}$ which arise during this Hamiltonian formulation of the metric gravity are the first-class constraints \cite{Dir64}. At the next step of the original Dirac procedure \cite{Dir58}, \cite{Dir50} and \cite{Dir64} we have to determine the temporal derivatives of all secondary constraints, i.e., $\frac{d \chi^{0\sigma}}{d x_{0}} = [ \chi^{0\sigma}, H_t] = [ \chi^{0\sigma}, H_C] + [ \chi^{0\sigma}, g_{0 0,0} \phi^{0 0} + 2 g_{0 k,0} \phi^{0 k}]$. The first Poisson bracket is \begin{eqnarray} \frac{d \chi^{0\sigma}}{d x_{0}} &=& [ \chi^{0\sigma}, H_{C} ] = -\frac{2}{\sqrt{-g}} I_{mnpq} \pi^{mn} \Bigl(\frac{g^{\sigma q}}{g^{00}}\Bigr) \chi^{0p} + \frac12 g^{\sigma k} g_{00,k} \chi^{00} + \delta_{0}^{\sigma} \chi_{,k}^{0k} \nonumber \\ &+& \Bigl[ -2 \frac{1}{\sqrt{-g}} I_{mnpk} \pi^{mn} \frac{g^{\sigma p}}{g^{00}} + I_{mkpq} \Bigl(\frac{g^{\sigma m}}{g^{00}}\Bigr) U^{(pq) 0 \mu\nu l} g_{\mu\nu,l} \Bigr] \chi^{0k} \nonumber \\ &-& \Bigl[ g^{0\sigma} g_{00,k} + 2 g^{n\sigma} g_{0n,k} + \frac{g^{n\sigma} g^{0m}}{g^{00}} (g_{mn,k} + g_{km,n} - g_{kn,m}) \Bigr] \chi^{0k} = D^{\sigma}_{c} \; , \; \label{close} \end{eqnarray} where $D^{\sigma}_{c}$ is the $\sigma$-component of the Dirac closure, and $U^{(p q) 0 \mu\nu k}$ is the quantity $U^{p q 0 \mu \nu k}$ from Eq.(\ref{Sab}) which is symmetrized upon all $p \leftrightarrow q$ permutations. The second Poisson bracket in the original expression for $\frac{d \chi^{0\sigma}}{d x_{0}}$ takes the form \begin{eqnarray} [ \chi^{0\sigma}, g_{00,0} \phi^{00} + 2 g_{0 k,0} \phi^{0 k}] = - \frac12 g^{0 \sigma} g_{00,0} \chi^{0 0} - g^{\sigma k} g_{0 k,0} \chi^{0 0} = - \frac12 \Bigl( g^{0 \sigma} g_{00,0} + 2 g^{\sigma k} g_{0 k,0} \Bigr) \chi^{0 0} \label{eq47} \end{eqnarray} and it is proportional to the secondary constraint $\chi^{0 0}$.
Thus, the Poisson brackets $[ \chi^{0\sigma}, H_t]$ and $[ \chi^{0\sigma}, H_C]$ are written as linear combinations with field-dependent coefficients (we call them quasi-linear combinations) of the secondary constraints $\chi^{0 \gamma}$ only. The $[ \chi^{0\sigma}, H_C]$ Poisson bracket is called the $\sigma$-component of the Dirac closure $D^{\sigma}_{c}$, or the Dirac $\sigma-$closure for the Hamiltonian formulation of the metric gravity. In some old papers the Dirac closure has been defined as the $[ \chi^{0\sigma}, H_t]$ Poisson bracket. The difference between these two definitions is proportional to the secondary constraint $\chi^{0 0}$ (see Eq.(\ref{eq47})), and there is no principal contradiction between these two definitions of the Dirac closure. Also, note that this expression for the Dirac closure, Eq.(\ref{close}), written in terms of secondary constraints only, is one of the three possible outcomes of the original Dirac procedure \cite{Dir50}, \cite{Dir64}. Briefly, this means that our Hamiltonian formulation of the metric gravity leads neither to any constraints of higher order, e.g., tertiary constraints, nor to any inconsistency which could be fatal for the whole theory based on the $\Gamma - \Gamma$ Lagrangian, Eq.(\ref{LGG}) \cite{Dir64}. Finally, we need to say that in the metric gravity the Dirac closure is a $d-$vector-like quantity, in contrast with the Maxwell $d-$dimensional electrodynamics of the free EM-field, where the Dirac closure is a scalar which equals zero for the free EM-field \cite{Dir64}, \cite{FroUni}. Thus, in the metric gravity each primary constraint generates one secondary constraint, and the Dirac chain of first-class constraints ends at the secondary constraints. Finally, we have $d$ primary and $d$ secondary first-class constraints, i.e., the total number of the first-class constraints in the metric gravity equals $2 d$.
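The chain `primary constraint $\to$ one secondary constraint $\to$ closure', with all constraints first class, can be played out on a toy model with a single gauge degree of freedom. The sketch below (our own finite-dimensional \texttt{sympy} illustration, not the gravitational system itself) uses the Lagrangian $L = \frac12(\dot q_1 - q_2)^2$:

```python
import sympy as sp

q1, q2, p1, p2, v = sp.symbols('q1 q2 p1 p2 v')
pairs = [(q1, p1), (q2, p2)]

def pb(F, H):
    return sp.expand(sum(sp.diff(F, qq) * sp.diff(H, pp) -
                         sp.diff(F, pp) * sp.diff(H, qq) for qq, pp in pairs))

# toy first-class model with Lagrangian L = (qdot1 - q2)**2 / 2:
# p2 = dL/d(qdot2) = 0 is the (single) primary constraint phi
phi = p2
H_C = p1**2 / 2 + p1 * q2          # canonical Hamiltonian
H_t = H_C + v * phi                # total Hamiltonian, v = velocity/multiplier

# consistency of the primary constraint generates the secondary constraint chi
chi = pb(phi, H_t)                 # = [phi, H_C], since [phi, phi] = 0
assert chi == -p1                  # secondary constraint chi = -p1 ~ 0

# the chain closes: no tertiary constraints appear
assert pb(chi, H_t) == 0
# both constraints are first class
assert pb(phi, chi) == 0
```

Here, as in the metric gravity (and in the free Maxwell field), the Dirac chain terminates at the secondary constraint, and the number of first-class constraints is twice the number of primary ones.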
In this sense there is an obvious similarity between the Hamiltonian approach for the Maxwell theory of the multi-dimensional electromagnetic field (see, e.g., \cite{Dir64}, \cite{FroUni}) and the Hamiltonian formulation of the metric gravity. Furthermore, all Hamiltonian formulations of different physical fields which contain equal numbers of the primary and secondary first-class constraints are quite similar to each other. The source of such a similarity can be traced back to the fact that the original Lagrangian density ($L_{\Gamma - \Gamma}$, Eq.(\ref{LGG}), in our case) is written as a quadratic-linear combination of velocities (or field-velocities). In conclusion, we want to note that there is a direct relation which allows one to express the canonical Hamiltonian $H_C$ in terms of the secondary constraints $\chi^{0 \sigma}$ and total spatial derivatives \begin{eqnarray} H_C = - 2 g^{0 \sigma} \chi^{0 \sigma} + \Bigl( 2 g_{0 m} \pi^{m k} \Bigr)_{,k} + \Bigl[ \sqrt{- g} \Bigl( g_{0\gamma} B^{((0 \gamma) k \mid \alpha \beta m)} - g_{0 n} B^{((n k) 0\mid \alpha \beta m)} \Bigr) g_{\alpha\beta, m} \Bigr]_{,k} \; \; . \; \; \label{HCconst} \end{eqnarray} This formula relates the canonical Hamiltonian $H_C$, which depends upon the space-like momenta $\pi^{m n}$ (they belong to the purely dynamical $d (d - 1)-$dimensional subspace), to the secondary first-class constraints $\chi^{0 \sigma}$, which belong to the purely constraint, or non-dynamical, $2 d-$dimensional subspace (see below). From this point of view the equation, Eq.(\ref{HCconst}), is the `additionality' relation between the dynamical and constraint parts of the total Hamiltonian of the metric gravity. \section{Canonical transformations} One of the main advantages of the Hamiltonian formulation(s) of any physical theory is the possibility to apply various canonical transformations of the Hamiltonian dynamical variables.
In general, such canonical transformations can be used either to simplify the canonical Hamiltonian $H_C$, or to reduce this Hamiltonian to some special forms, e.g., to its natural form \cite{Fro2021}. In the Hamiltonian formulations of the metric gravity the canonical transformations of the Hamiltonian dynamical variables are often used to simplify the explicit form of the secondary constraints. Indeed, the secondary constraints derived above in the form of Eq.(\ref{eqn8}) are very complex. Applications and even simple operations with the secondary constraints written in such a form are very difficult. For instance, calculations of the Poisson brackets between primary and secondary constraints, or between any pair of secondary constraints, produce formulas which are extremely cumbersome. For the first time this was noticed by Dirac in his fundamental paper \cite{Dir58}. To resolve these issues he used a canonical transformation of the original (Hamiltonian) dynamical variables which were originally introduced in \cite{PirSS}. At that time nobody had performed similar transformations in the metric gravity. This explains why Dirac in \cite{Dir58} started his transformations from the original $\Gamma - \Gamma$ Lagrangian density, Eq.(\ref{LGG}), which is also an ultimate source of the Hamiltonian theory. As is well known, in classical mechanics we can always add a total temporal derivative to our original Lagrangian density without any change in the Lagrange equations of motion. The same rule is true in the general relativity and metric gravity, where we also have to take care of the general covariance of all our formulas and expressions.
Briefly, the relation between the Dirac Lagrangian density introduced in \cite{Dir58} and our $L_{\Gamma - \Gamma}$ Lagrangian density, Eq.(\ref{LGG}), is written in the form $L_D = L_{\Gamma - \Gamma} - L^{\star}$ \cite{FK&K}, where the additional Lagrangian density $L^{\star}$ takes the manifestly covariant form \begin{eqnarray} L^{\star} = \Bigl[ \Bigl(\sqrt{- g} g^{0 0}\Bigr)_{,\alpha} \frac{g^{0 \alpha}}{g^{0 0}} \Bigr]_{,0} - \Bigl[ \Bigl(\sqrt{- g} g^{0 0}\Bigr)_{,0} \frac{g^{0 \alpha}}{g^{0 0}} \Bigr]_{,\alpha} = \Bigl[ \Bigl(\sqrt{- g} g^{0 0}\Bigr)_{,k} \frac{g^{0 k}}{g^{0 0}} \Bigr]_{,0} - \Bigl[ \Bigl(\sqrt{- g} g^{0 0}\Bigr)_{,0} \frac{g^{0 k}}{g^{0 0}} \Bigr]_{,k} \; . \label{LDLGG} \end{eqnarray} This equation is reduced to the form \begin{eqnarray} L^{\star} = \frac12 \sqrt{- g} A^{\alpha \beta 0 \mu \nu k} g_{\alpha\beta,0} g_{\mu\nu,k} &=& \frac12 \sqrt{- g} \Bigl( e^{\alpha\beta} e^{k \mu} g^{0 \nu} - e^{\mu\nu} e^{k \alpha} g^{0 \beta} + e^{k \alpha} \frac{g^{0 \mu} g^{0 \nu} g^{0 \beta}}{g^{0 0}} \nonumber \\ &-& e^{k \mu} \frac{g^{0 \alpha} g^{0 \nu} g^{0 \beta}}{g^{0 0}} \Bigr) g_{\alpha\beta,0} g_{\mu\nu,k} \; \; . \; \; \label{AAB} \end{eqnarray} The $A^{\alpha \beta 0 \mu \nu k}$ coefficients defined in this equation have the following symmetries. First, these coefficients are symmetric upon the $\alpha\beta \leftrightarrow \beta\alpha$ and $\mu\nu \leftrightarrow \nu\mu$ permutations. Second, an important property of the $A^{\alpha \beta 0 \mu \nu k}$ coefficients is their anti-symmetry with respect to the interchange of the two pairs of Greek indices, i.e., $A^{\alpha \beta 0 \mu \nu k} = - A^{\mu \nu 0 \alpha \beta k}$. Third, these coefficients are linearly related to the coefficients $B^{(\alpha \beta 0 \mid \mu \nu k)}$ and the Dirac tensor $E^{\alpha\beta\gamma\sigma}$ (both defined in Section III).
The explicit form of this relation is \begin{eqnarray} A^{\alpha \beta 0 \mu \nu k} = B^{(\alpha \beta 0 \mid \mu \nu k)} - g^{0 k} E^{\alpha \beta \mu \nu} + 2 g^{0 \mu} E^{\alpha \beta k \nu} \; \; . \; \; \label{ABE} \end{eqnarray} Finally, the relation between the Dirac Lagrangian $L_D$ and our original $L_{\Gamma - \Gamma}$ Lagrangian of the metric gravity (see above) is written in the form \cite{FK&K} \begin{eqnarray} L_D = L_{\Gamma - \Gamma} - L^{\star} = L_{\Gamma - \Gamma} - \frac12 \sqrt{- g} A^{\alpha \beta 0 \mu \nu k} g_{\alpha\beta,0} g_{\mu\nu,k} \; , \; {\rm where} \; L^{\star} = \frac12 \sqrt{- g} A^{\alpha \beta 0 \mu \nu k} g_{\alpha\beta,0} g_{\mu\nu,k} \; . \label{LD} \end{eqnarray} From this equation one easily finds the following expression for the momenta $p^{\gamma\sigma}$ in the Dirac Hamiltonian formulation of the metric gravity \begin{eqnarray} \frac{\partial L_D}{\partial g_{\gamma\sigma,0}} = \frac{\partial L_{\Gamma - \Gamma} }{\partial g_{\gamma\sigma,0}} - \frac{\partial L^{\star}}{\partial g_{\gamma\sigma,0}} \; \; , \; {\rm or} \; \; p^{\gamma\sigma} = \pi^{\gamma\sigma} - \frac12 \sqrt{- g} A^{(\gamma \sigma) 0 \mu \nu k} g_{\mu\nu,k} \; \; , \; \; \label{mDmGG} \end{eqnarray} where $p^{\gamma\sigma}$ are the new momenta (or Dirac momenta), while $\pi^{\gamma\sigma}$ are the old momenta defined above in Section III. The last equation in Eq.(\ref{mDmGG}) is, in fact, the explicit definition of the Dirac momenta, which is convenient to write in the two following forms: \begin{eqnarray} p^{p q} = \pi^{p q} - \frac12 \sqrt{- g} A^{(p q) 0 \mu \nu k} g_{\mu\nu,k} \; \; \; {\rm and} \; \; \; p^{0 \sigma} = \pi^{0 \sigma} - \frac12 \sqrt{- g} A^{(0 \sigma) 0 \mu \nu k} g_{\mu\nu,k} \; \; , \; \; \label{Dirmom} \end{eqnarray} where $A^{(\gamma \sigma) 0 \mu \nu k} = \frac12 \Bigl( A^{\gamma \sigma 0 \mu \nu k} + A^{\sigma \gamma 0 \mu \nu k} \Bigr)$ and $p^{0 \sigma} = p^{\sigma 0}$.
Thus, we have two sets of Hamiltonian dynamical variables for the two different Hamiltonian formulations of the metric gravity: $\{ g_{\alpha\beta}, \pi^{\mu\nu} \}$ (the old set) and $\{ g_{\alpha\beta}, p^{\mu\nu} \}$ (the new set). Since these two sets of dynamical variables are related to each other by a canonical transformation, the three following conditions for the Poisson brackets must be obeyed: $[ g_{\alpha\beta}, g_{\mu\nu} ] = 0, [ g_{\alpha\beta}, p^{\mu\nu} ] = \Delta^{\mu\nu}_{\alpha\beta}$ and $[ p^{\alpha\beta}, p^{\mu\nu} ] = 0$, where all new variables are written in terms of the old variables. For the old variables we already know that the following equations are true: $[ g_{\alpha\beta}, g_{\mu\nu} ] = 0, [ g_{\alpha\beta}, \pi^{\mu\nu} ] = \Delta^{\mu\nu}_{\alpha\beta}$ and $[ \pi^{\alpha\beta}, \pi^{\mu\nu} ] = 0$. In reality, the application of these canonicity conditions needs some additional explanation, since for all Hamiltonian systems such conditions are usually derived and formulated in a different form, which is based on the `alternative' Laplace (not Poisson!) brackets. Here we have to step aside and discuss the general canonicity conditions for an arbitrary transformation of the Hamiltonian dynamical variables. \subsection{General conditions of canonicity for transformations of the dynamical variables} Let us assume that some Hamiltonian system is described by the $2 n$ independent dynamical variables $\{ q_k, p_k \}$, where $k = 1, \ldots, n$. In general, it is possible to replace these `old' dynamical variables by new dynamical variables $\{ \tilde{q}_i, \tilde{p}_i \}$, where $i = 1, \ldots, n$: \begin{eqnarray} \tilde{q}_{i} = \phi_{i}(t, q_{k}, p_{k}) \; \; \; \tilde{p}_{i} = \psi_{i}(t, q_{k}, p_{k}) \; \; , \; \; \label{q-q} \end{eqnarray} but after such a transformation of variables we want to be sure that the new Hamiltonian system is `dynamically equivalent' to our original Hamiltonian system.
Transformations of the dynamical variables, each of which transforms one Hamiltonian system into another Hamiltonian system that is completely and unambiguously `dynamically equivalent' to the original one, are defined as canonical transformations. In general, all canonical transformations of any Hamiltonian system form a closed algebraic structure, or group, for short (see, e.g., \cite{Fro2021}, \cite{Gant}). It was shown (by Jacobi) that for any time-dependent canonical transformation of the dynamical variables, Eq.(\ref{q-q}), the following canonicity condition (below, the main canonicity condition) must be obeyed \begin{eqnarray} \sum^{n}_{k=1} \tilde{p}_k \delta\tilde{q}_k - \tilde{H} \delta t = c \Bigl( \sum^{n}_{k=1} p_k \delta q_k - H \delta t \Bigr) - \delta F(t, q_k, p_k) \; \; , \; \; \label{Jacob0} \end{eqnarray} where $c (\ne 0)$ is some real number which does not depend upon the time $t$. The function $F(t, q_k, p_k)$ is the Jacobi generating function, i.e., the function which generates this canonical transformation. Vice versa, one can easily show that if Eq.(\ref{Jacob0}) holds for some transformation of the dynamical variables, then this transformation is canonical. For better understanding of the equations in this subsection we use the explicit sign of summation. Moreover, since the valence $c (\ne 0)$ does not depend upon the time, in establishing the criteria of canonicity we can always restrict ourselves (for more details, see, e.g., \cite{Gant}) to the time-independent canonical transformations only, i.e., \begin{eqnarray} \tilde{q}_{i} = \phi_{i}(q_{k}, p_{k}) \; \; \; \tilde{p}_{i} = \psi_{i}(q_{k}, p_{k}) \; \; .
\; \; \label{q-q-t} \end{eqnarray} For a canonical time-independent transformation the main condition, Eq.(\ref{Jacob0}), is written in the form \begin{eqnarray} \sum^{n}_{k=1} \tilde{p}_k \delta\tilde{q}_k = c \sum^{n}_{k=1} p_k \delta q_k - \delta K(q_k, p_k) \; \; , \; \; \label{Jacob1} \end{eqnarray} where $\tilde{q}_{k}, \tilde{p}_{k}$ ($k = 1, \ldots, n$) are the new generalized coordinates and momenta, while $q_{i}, p_{i}$ ($i = 1, \ldots, n$) are the old coordinates and momenta (old dynamical variables). Also, in this equation $K(q_{k}, p_{k}) = F(\overline{t}, q_{k}, p_{k})$, i.e., it is a short Jacobi generating function of the coordinates and momenta only, which coincides with the Jacobi generating function $F(\overline{t}, q_{k}, p_{k})$ taken at some fixed time $t = \overline{t}$. The variation of $K(q_{k}, p_{k})$ is written in the form \begin{eqnarray} \delta K = - \sum^{n}_{i=1} \Bigl( \Phi_{i} \delta q_{i} + \Psi_{i} \delta p_{i} \Bigr) \; \; . \; \; \label{Jacob2} \end{eqnarray} On the other hand, by using the formula $\delta \tilde{q}_{k} = \frac{\partial \tilde{q}_{k}}{\partial q_{i}} \delta q_{i} + \frac{\partial \tilde{q}_{k}}{\partial p_{i}} \delta p_{i}$ in Eq.(\ref{Jacob1}) one finds the following expression for the $\delta K(q_{k}, p_{k})$ variation: \begin{eqnarray} \delta K = - \sum^{n}_{i=1} \Bigl[ \sum^{n}_{k=1} \Bigl( \tilde{p}_{k} \frac{\partial \tilde{q}_{k}}{\partial q_{i}} \Bigr) - c p_{i} \Bigr] \delta q_{i} - \sum^{n}_{i=1} \Bigl[ \sum^{n}_{k=1} \Bigl( \tilde{p}_{k} \frac{\partial \tilde{q}_{k}}{\partial p_{i}} \Bigr) \Bigr] \delta p_{i} \; \; . \; \; \label{Jacob3} \end{eqnarray} By comparing Eqs.(\ref{Jacob2}) and (\ref{Jacob3}) one finds \begin{eqnarray} \Phi_{i} = \sum^{n}_{k=1} \tilde{p}_{k} \frac{\partial \tilde{q}_{k}}{\partial q_{i}} - c p_{i} \; \; \; \; {\rm and} \; \; \; \; \Psi_{i} = \sum^{n}_{k=1} \tilde{p}_{k} \frac{\partial \tilde{q}_{k}}{\partial p_{i}} \; \; .
\; \; \label{Jacob4} \end{eqnarray} For a canonical transformation the expression on the right-hand side of Eq.(\ref{Jacob2}) must be a total differential. From here one finds the three following conditions: \begin{eqnarray} \frac{\partial \Phi_{i}}{\partial q_j} = \frac{\partial \Phi_{j}}{\partial q_i} \; \; \; , \; \; \; \frac{\partial \Psi_{i}}{\partial p_j} = \frac{\partial \Psi_{j}}{\partial p_i} \; \; \; , \; \; \; \frac{\partial \Phi_{i}}{\partial p_j} = \frac{\partial \Psi_{j}}{\partial q_i} \; \; . \; \; \label{Jacob50} \end{eqnarray} By replacing the functions $\Phi_{i}$ and $\Psi_{i}$ in these equations with their expressions from Eq.(\ref{Jacob4}) one finds, after a few additional and simple transformations: \begin{eqnarray} && \sum^{n}_{k=1} \Bigl( \frac{\partial \tilde{q}_{k}}{\partial q_{i}} \frac{\partial \tilde{p}_{k}}{\partial q_{j}} - \frac{\partial \tilde{p}_{k}}{\partial q_{i}} \frac{\partial \tilde{q}_{k}}{\partial q_{j}} \Bigr) = 0 \; \; , \; {\rm or} \; \; \; \{ q_{i}, q_{j} \} = 0 \; \; , \; \; \label{Jacob51} \\ && \sum^{n}_{k=1} \Bigl( \frac{\partial \tilde{q}_{k}}{\partial p_{i}} \frac{\partial \tilde{p}_{k}}{\partial p_{j}} - \frac{\partial \tilde{p}_{k}}{\partial p_{i}} \frac{\partial \tilde{q}_{k}}{\partial p_{j}} \Bigr) = 0 \; \; , \; {\rm or} \; \; \; \{ p_{i}, p_{j} \} = 0 \; \; , \; \; \label{Jacob53} \\ && \sum^{n}_{k=1} \Bigl( \frac{\partial \tilde{q}_{k}}{\partial q_{i}} \frac{\partial \tilde{p}_{k}}{\partial p_{j}} - \frac{\partial \tilde{p}_{k}}{\partial q_{i}} \frac{\partial \tilde{q}_{k}}{\partial p_{j}} \Bigr) = c \delta_{ij} \; \; , \; {\rm or} \; \; \; \{ q_{i}, p_{j} \} = c \delta_{ij} \; , \; \; \label{Jacob55} \end{eqnarray} where $\delta_{ij}$ is the Kronecker symbol and $c$ is some numerical constant. The constructions (or sums) which appear in these three equations are the Laplace brackets, which are well known in classical mechanics (see, e.g., \cite{Gant}, \cite{Gold}).
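These bracket conditions are easy to verify symbolically for a concrete transformation. A minimal SymPy sketch for one degree of freedom (the point transformation $\tilde{q} = q^3$, $\tilde{p} = p/(3 q^2)$ is an illustrative choice, not taken from the text):

```python
import sympy as sp

q, p = sp.symbols('q p')

# New variables of a one-degree-of-freedom point transformation
Qt = q**3            # \tilde{q}
Pt = p / (3*q**2)    # \tilde{p}

def laplace(u, v):
    """Laplace bracket {u, v} built from the new variables (Qt, Pt)."""
    return sp.simplify(sp.diff(Qt, u)*sp.diff(Pt, v) - sp.diff(Pt, u)*sp.diff(Qt, v))

# The three groups of canonicity conditions, with valence c = 1:
assert laplace(q, q) == 0
assert laplace(p, p) == 0
assert laplace(q, p) == 1

# A non-canonical pair fails the last condition:
Qt, Pt = q**2, p**2
assert laplace(q, p) != 1    # bracket equals 4*q*p, not a constant
```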
The standard notation for the Laplace brackets (see, e.g., \cite{Gant}, \cite{Gold}) is $\{ , \}$. Each of these sums includes $2 n$ functions ($\tilde{q}_{k}$ and $\tilde{p}_{k}$) and only two variables, e.g., either $q_{i}, q_{j}$, or $p_{i}, p_{j}$, or $q_{i}, p_{j}$. As follows from Eqs.(\ref{Jacob51}) - (\ref{Jacob55}), a transformation of the dynamical variables is canonical if (and only if) the following three groups of conditions are obeyed: $\{ q_{i}, q_{j} \} = 0 , \{ p_{i}, p_{j} \} = 0$ and $\{ q_{i}, p_{j} \} = c \delta_{ij}$, where $c \ne 0$ and $(i,j) = 1, \ldots, n$, for all $2 n$ new dynamical variables $\tilde{q}_{k}, \tilde{p}_{k}$ ($k = 1, \ldots, n$). In practice, the original Laplace brackets are not convenient in applications. However, as shown in Appendix B, these brackets can be replaced by the Poisson brackets, each of which is adjoint to the corresponding Laplace bracket. In terms of the Poisson brackets the same criteria of canonicity are written in a different form (for more details, see Appendix B): $[\tilde{q}_{i}, \tilde{q}_{j} ] = 0 \; \; , \; \; [ \tilde{p}_{i}, \tilde{p}_{j} ] = 0 \; \; , \; \; [ \tilde{q}_{i}, \tilde{p}_{j} ] = c \delta_{ij}$, where $(i,j) = 1, \ldots, n$ and $c$ is the valence of this canonical transformation. These numerical values of the Poisson brackets, taken for $c = 1$, are used below as the criteria of canonicity for transformations of dynamical variables. To simplify the text below we shall call this set of brackets the canonical, univalent set of the Poisson brackets, or CUSPB, for short. \subsection{Applications to the metric gravity} Let us apply the formulas derived above to the metric gravity by considering a transformation from the old set of dynamical variables $\{ g_{\alpha\beta}, \pi^{\mu\nu} \}$ to the new set of such variables $\{ g_{\alpha\beta}, p^{\mu\nu} \}$. First, we note that the generalized coordinates $g_{\alpha\beta}$ are identical in both sets.
For the brackets defined in the previous subsection this means that $\{ g_{\alpha\beta}, g_{\gamma\rho} \} = 0$ and $[ g_{\alpha\beta}, g_{\gamma\rho} ] = 0$. Furthermore, for the univalent ($c = 1$) transformations of dynamical variables the new momenta take the form $p^{\mu\nu} = \pi^{\mu\nu} + f^{\mu\nu}(g_{\gamma\rho})$, where $f^{\mu\nu}(g_{\gamma\rho})$ is a tensor function of the generalized coordinates only. From here one finds that $[ g_{\alpha\beta}, p^{\mu\nu} ] = [ g_{\alpha\beta}, \pi^{\mu\nu} ] = \Delta^{\mu\nu}_{\alpha\beta}$. In other words, the first and last Poisson brackets from CUSPB are obeyed automatically for our transformation of the Hamiltonian dynamical variables. The only non-trivial bracket in CUSPB (see Eq.(\ref{adjnt32}) in Appendix B) is the second Poisson bracket, between two new momenta, which takes the following form in our tensor notations: \begin{eqnarray} [ p^{\alpha\beta}, p^{\mu\nu} ] = 0 \; \; , \; {\rm or} \; \; [ \pi^{\alpha\beta} - \frac12 \sqrt{- g} A^{(\alpha \beta) 0 \sigma \rho k} g_{\sigma\rho,k}, \pi^{\mu\nu} - \frac12 \sqrt{- g} A^{(\mu \nu) 0 \lambda \kappa k} g_{\lambda\kappa,k} ] = 0 \; \; , \; \label{p-p} \end{eqnarray} which is instantly reduced to the equation \begin{eqnarray} [ \pi^{\alpha\beta}, \sqrt{-g} A^{(\mu \nu) 0 \sigma \rho m} g_{\sigma\rho,m} ] = [ \pi^{\mu\nu}, \sqrt{- g} A^{(\alpha \beta) 0 \sigma \rho m} g_{\sigma\rho,m} ] \; \; . \; \label{pi-pi} \end{eqnarray} The transformation of dynamical variables is canonical if (and only if) this equation is obeyed. To prove the validity of this equation one has to perform direct calculations of the Poisson brackets on both sides of Eq.(\ref{pi-pi}). In practice, the two sides of Eq.(\ref{pi-pi}) are compared with each other and identical terms on both sides are cancelled. Finally, this equation is reduced to an identity of the form $0 = 0$.
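The key step above, namely that a shift of the momenta by a function of the coordinates alone preserves the brackets only when a condition like Eq.(\ref{pi-pi}) holds, can be illustrated in a finite-dimensional toy setting. A SymPy sketch with two degrees of freedom (the gradient shift below is an illustrative choice, not the metric-gravity expression itself):

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
pairs = [(q1, p1), (q2, p2)]

def poisson(f, g):
    """Poisson bracket [f, g] over the two canonical pairs."""
    return sp.simplify(sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
                           for q, p in pairs))

# Shift each momentum by a coordinate-dependent function: P_i = p_i + f_i(q).
# If f_i is a gradient, f_i = dW/dq_i, the shift is canonical.
W = q1**2 * q2
P1 = p1 + sp.diff(W, q1)    # p1 + 2*q1*q2
P2 = p2 + sp.diff(W, q2)    # p2 + q1**2

assert poisson(P1, P2) == 0                      # new momenta still commute
assert poisson(q1, P1) == 1 and poisson(q2, P2) == 1
assert poisson(q1, P2) == 0                      # cross brackets vanish

# A non-gradient shift violates the canonicity condition:
assert poisson(p1 + q2, p2) != 0                 # this bracket equals 1
```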
In those cases when either $\alpha = 0$, or $\beta = 0$ (or both) one obtains from Eqs.(\ref{ABE}) and (\ref{LD}) the following equation: \begin{eqnarray} p^{0 \gamma} = \pi^{0 \gamma} - \frac12 \sqrt{- g} B^{((0 \gamma) 0 \mid \mu \nu k)} g_{\mu\nu,k} , \; \; \label{PrimetD} \end{eqnarray} which defines the momenta with one (or two) temporal component(s), i.e., the primary constraints $p^{0 \gamma} \approx 0$ in Dirac's Hamiltonian formulation of the metric gravity. For these momenta the canonicity conditions, Eq.(\ref{p-p}), must also be obeyed. After a few simple transformations the essential canonicity conditions for the $p^{0 \gamma}$ and $p^{0 \sigma}$ momenta take the following form \begin{eqnarray} [ p^{0 \gamma}, p^{0 \sigma} ] = 0 \; \; , \; \; {\rm or} \; \; [ \pi^{0 \gamma}, \sqrt{- g} B^{((0 \sigma) 0 \mid \mu \nu k)} g_{\mu\nu,k} ] = [ \pi^{0 \sigma}, \sqrt{- g} B^{((0 \gamma) 0 \mid \mu \nu k)} g_{\mu\nu,k} ] \; \; , \; \; \label{PrimeD} \end{eqnarray} which simply means that all primary constraints in Dirac's Hamiltonian formulation commute with each other. The same statement is true for the original Hamiltonian formulation of metric GR \cite{K&K} discussed above. This fact has been checked in \cite{PirSS}. On the other hand, the fact that all primary constraints in the metric gravity commute with each other follows directly from the canonicity of the complete Dirac set of Hamiltonian dynamical variables. At this point it is very convenient to introduce the universal notation $\phi^{\mu\nu}$ for the momenta, or for the contravariant components of the momenta. In Dirac's Hamiltonian formulation these momenta are $\phi^{\mu\nu} = p^{\mu\nu}$, while in terms of the variables of the Hamiltonian formulation from \cite{K&K} the same momenta are written as $\phi^{\mu\nu} = \pi^{\mu \nu} - \frac12 \sqrt{- g} A^{(\mu \nu) 0 \alpha \beta k} g_{\alpha\beta,k}$.
In this notation the canonical Hamiltonians $H_C$ in both formulations of metric gravity are represented in the same `universal' form \cite{FK&K} \begin{eqnarray} & &H_C = \frac{1}{\sqrt{-g} g^{00}} I_{mnpq} \phi^{mn} \phi^{pq} - \frac{1}{g^{00}} \phi^{mn} \Bigl( g^{0 l} g_{m n,l} - 2 g^{0\alpha} g_{\alpha n,m} \Bigr) \; \; \; \label{DirH-C} \\ &+& \frac14 \sqrt{-g} \Bigl[ \frac{1}{g^{00}} \Bigl( g^{0 k} E^{(m n) \mu \nu} - 2 g^{0\mu} E^{(m n) k \nu} \Bigr) \Bigl(g^{0 l} g^{\alpha}_{m} g^{\beta}_{n} - 2 g^{0 \alpha} g^{l}_{m} g^{\beta}_{n} \Bigr) - B^{\mu\nu k \alpha\beta l}\Bigr] g_{\mu\nu,k} g_{\alpha\beta,l} \; \; , \; \nonumber \end{eqnarray} where $g^{\alpha}_{\beta} = \delta^{\alpha}_{\beta}$ is the substitution tensor defined above. In both formulations the primary constraints commute with each other, i.e., $[ \phi^{0\gamma}, \phi^{0\sigma} ] = 0$. The knowledge of the canonical Hamiltonian $H_C$ and of all primary constraints allows one to restore the total Hamiltonian $H_{t}$: \begin{eqnarray} H_t = H_C + g_{0 0,0} \phi^{0 0} + 2 g_{0 k,0} \phi^{0 k} = H_C + g_{0 0,0} p^{0 0} + 2 g_{0 k,0} p^{0 k} \; \; . \; \; \label{DirH-t} \end{eqnarray} It appears that the total Hamiltonian $H_t$ does not change during canonical transformations of the dynamical variables, i.e., $H^{K\&K}_t(g_{\alpha\beta}, \pi^{\mu\nu}) = H^{Dir}_t(g_{\alpha\beta}, p^{\mu\nu})$ \cite{FK&K}. In other words, the total Hamiltonian is an obvious and unique invariant of this theory.
Accordingly, the Hamilton equations do not change their form under canonical transformations, and we can write, e.g., \begin{eqnarray} g_{\alpha\beta,0} = [ g_{\alpha\beta}, H^{K\&K}_t] \; , \; \pi^{\mu\nu}_{,0} = [ \pi^{\mu\nu}, H^{K\&K}_t] \; \Leftrightarrow \; g_{\alpha\beta,0} = [ g_{\alpha\beta}, H^{Dir}_t] \; , \; p^{\mu\nu}_{,0} = [ p^{\mu\nu}, H^{Dir}_t] \label{HK&KtoHD} \end{eqnarray} i.e., these two sets of Hamilton equations are equivalent to each other. In other words, the Hamilton equations conserve their form under canonical transformations of the dynamical variables. In fact, this was the first definition (or criterion) of canonicity for transformations of dynamical variables, which was formulated by Sir William R. Hamilton himself in 1834 and 1835. We have shown that his criterion works for the Hamiltonian approach to the metric gravity. However, the metric gravity is a dynamical system with constraints. It is clear that the Hamilton criterion, as well as other criteria of canonicity known for transformations of dynamical variables in classical mechanics, must be supplemented by some statement(s) about the algebra of constraints (see below). Therefore, we need to derive the explicit expressions for the secondary constraints, their Poisson brackets with the canonical and/or total Hamiltonians, with the primary constraints, etc. All secondary constraints in Dirac's Hamiltonian formulation are derived from the equations $\chi^{0\sigma} = [ \phi^{0\sigma}, H_t] = [ \phi^{0\sigma}, H_C]$. The explicit expressions are \begin{eqnarray} \chi^{0\sigma} = - \frac{g^{0\sigma}}{2 \sqrt{-g} g^{00}} I_{mnpq} \phi^{mn} \phi^{pq} + g^{\sigma}_{m} \Bigl[ (\phi^{mk})_{,k} + \Bigl( \phi^{pk} e^{qm} - \frac12 \phi^{pq} e^{km}\Bigr) g_{pq,k} \Bigr] \nonumber \\ + \frac12 \sqrt{- g} g^{0\sigma} \Bigl[ - g_{mn,kl} E^{mnkl} + \frac14 g_{mn,k} g_{pq,l} \Bigl( - E^{mnpq} e^{kl} + 2 E^{klpn} e^{mq} + E^{pqnl} e^{mk} \Bigr)\Bigr] \; \; .
\; \label{Dirchi} \end{eqnarray} This formula is very compact and contains only two lines (compare with Eq.(\ref{eqn8})). It clearly indicates that Dirac's idea to apply canonical transformations of the Hamiltonian dynamical variables in order to simplify the secondary first-class constraints works perfectly. Now, by using the explicit forms of the primary $\phi^{0 \gamma} = p^{0\gamma}$ and secondary $\chi^{0 \sigma}$ constraints in Dirac's formulation, one finds \begin{eqnarray} [ \phi^{0 \gamma}, \chi^{0 \sigma}]_{Dirac} = [ p^{0 \gamma}, \chi^{0 \sigma}]_{Dirac} = \frac12 g^{\gamma\sigma} \chi^{0 0}_{Dirac} \; \; \; , \; \; \label{phichi} \end{eqnarray} i.e., a formula which exactly coincides (in its form) with the formula $[ \phi^{0 \gamma}, \chi^{0 \sigma}]_{K\&K} = \frac12 g^{\gamma\sigma} \Bigl(\chi^{0 0}\Bigr)_{K\&K}$ mentioned above. This is remarkable, since the explicit forms of all primary and secondary constraints are substantially different in these two formulations. This and other similar facts follow directly from the canonicity of our transformation of the Hamiltonian dynamical variables. The time-evolution of the secondary constraints leads to the following formula \begin{eqnarray} & & \frac{d \chi^{0\sigma}}{d x_0} = \chi^{0\sigma}_{,0} = [ \chi^{0\sigma}, H_C] = - \Bigl[ \frac{2}{\sqrt{-g} g^{00}} I_{pqmk} g^{\sigma m} \phi^{pq} + g^{0 \sigma} g_{0 0,k} + 2 g^{\sigma p} g_{0 p,k} \nonumber \\ &+& \Bigl(\frac{g^{\sigma p} g^{0 q}}{g^{0 0}}\Bigr) \; \Bigl( g_{p q,k} + g_{q k,p} - g_{p k,q} \Bigr)\Bigr] \chi^{0 k} - g^{\sigma}_{0} \Bigl(\chi^{0 k}\Bigr)_{,k} + \frac12 g^{\sigma k} g_{0 0,k} \chi^{0 0} = D^{\sigma}_{c} \; , \; \label{DirclD} \end{eqnarray} where $D^{\sigma}_{c}$ is the $\sigma-$component of the Dirac closure derived in Dirac's Hamiltonian formulation of the metric gravity.
All components of the Dirac closure are quasi-linear combinations of the secondary constraints and of some total spatial derivatives of these secondary constraints. Again, this formula is very compact and transparent. Finally, we note that the formula which expresses the canonical Hamiltonian $H_C$ in terms of secondary constraints and some (total) spatial derivatives, Eq.(\ref{HCconst}), can also be derived in the Dirac Hamiltonian formulation. Here it takes a form which is slightly different from Eq.(\ref{HCconst}) above: \begin{eqnarray} H_C = -2 g_{0\lambda} \chi^{0\lambda} &+& \Bigl(2 g_{0 m} \phi^{mk} \Bigr)_{,k} - \Bigl[\sqrt{- g} E^{mnpq} g_{mn,q} \nonumber \\ &-& \sqrt{- g} g_{\mu\nu,k} \; \Bigl(\frac{g^{0\mu}}{g^{00}}\Bigr) \; \Bigl( g^{\nu p} g^{0 k} - g^{\nu k} g^{0 p} \Bigr)\Bigr]_{,p} \; . \; \label{HDirconst} \end{eqnarray} This formula represents the canonical Hamiltonian $H_C$ (in the Dirac formulation) as a quasi-linear combination of the secondary constraints $\chi^{0 \sigma}$ and of a few total spatial derivatives of expressions which include the same secondary constraints. Now we can complete our discussion of canonical transformations in the metric gravity. There are three general rules which regulate changes of the primary and secondary constraints under such transformations. The first rule is simple, and it can be called the law of inertia for the first-class constraints. Indeed, by performing a number of canonical transformations between different sets of dynamical variables we have found that the total number $N_p$ of primary constraints $\phi^{0 \sigma}$ never changes under such transformations. The same statement is true for the total number $N_s$ of secondary constraints $\chi^{0 \sigma}$ and for the sum $N_p + N_s$. We emphasize here that all primary and secondary constraints which arise in the Hamiltonian formulations of metric gravity are first-class.
The second rule, of `form-invariance', is even simpler: the internal structure of all first-class constraints must be conserved under canonical transformations of the Hamiltonian dynamical variables. The preservation of form-invariance for all first-class constraints is crucial for proving that any two Hamiltonian formulations developed for the same constrained dynamical system are equivalent to each other. The third rule essentially follows from the second: all Poisson brackets between the first-class constraints and the canonical/total Hamiltonians, other constraints, etc., must also be form-invariant under canonical transformations. For simple Poisson brackets, e.g., for the $[\phi^{0\sigma}, \chi^{0\gamma}]$ brackets, this rule leads to the exact coincidence of the corresponding expressions. The three rules mentioned here essentially mean the preservation of the algebra of first-class constraints. Thus, the canonical transformations in the metric gravity must guarantee a complete preservation of the form-invariance of the total Hamiltonian $H_t$ and of the algebra of first-class constraints. The formulas derived in this Section allow one to apply Dirac's Hamiltonian formulation of metric gravity to analyze and solve various gravitational problems. In some cases, however, one needs to know analytical expressions for other Poisson brackets; e.g., the Poisson bracket between two secondary constraints, $[ \chi^{0\gamma}, \chi^{0\sigma} ]$, is of great interest, but it has never been obtained in previous papers. This Poisson bracket is determined in our `technical' Appendix A. In general, calculations of this and other Poisson brackets can be performed with the use of our formulas and the method described in Section IV. \section{Dirac's modifications of the classical Hamilton method} In this Section we reconsider the modifications made by Dirac to the classical Hamilton method \cite{Dir58}, \cite{Dir50}, \cite{Dir64}.
This will eventually lead us to a new universal criterion of canonicity for Hamiltonian formulations of metric gravity. First, we note again that in any Hamiltonian formulation of metric gravity we always have $\frac{d (d + 1)}{2}$ generalized coordinates $g_{\alpha\beta}$ and $\frac{d (d + 1)}{2}$ momenta $p^{\mu\nu}$. These coordinates and momenta are the Hamiltonian dynamical variables of our problem (metric gravity). The total number of these variables equals $d (d + 1)$, which is an even number for any $d-$dimensional Riemann space-time. Note also that our original Lagrangian $L_{\Gamma - \Gamma}$, Eq.(\ref{LGGvel}), is a quadratic function of the velocities of the space-like components of the metric tensor $g_{m n}$, i.e., of $g_{m n,0}$. On the other hand, the same $L_{\Gamma-\Gamma}$ Lagrangian is a linear function of the $d$ remaining $g_{0 \gamma,0}$ (= $g_{\gamma 0,0}$) velocities, which are also called the temporal velocities. By using the standard Legendre transformation (see Section III) one can pass from the Lagrangian $L_{\Gamma - \Gamma}$ to the Hamiltonian $H_C$, which is a quadratic function of the space-like momenta $\pi^{m n}$. This Hamiltonian is called the canonical Hamiltonian, and it is an explicit function of the space-like dynamical variables $\{ g_{m n}, p^{p q} \}$ (there are $d(d-1)$ of such variables) and of the $d$ `temporal' coordinates $g_{0 \sigma}$ only. The Hamiltonian $H_C$ is of great interest for the whole metric gravity, since it describes the actual motions of a free gravitational field. However, we have to note that this Hamiltonian $H_C$ does not depend upon any of the temporal momenta, i.e., it does not include any of the $p^{0 \sigma}$ (or $p^{\sigma 0}$) momenta.
This means that all Poisson brackets such as $[ g_{0 \sigma}, H_C]$ and $[ g_{\sigma 0}, H_C]$ vanish identically, and the canonical Hamiltonian $H_C$ does not, in principle, describe the time-evolution of the temporal coordinates $g_{0 \sigma}$ and/or $g_{\sigma 0}$. Briefly, we can say that in the canonical Hamiltonian $H_C$ these temporal coordinates $g_{0 \mu}$ (and $g_{\mu 0}$) are parameters rather than actual dynamical variables. For normal applications of the Hamilton method we must have a Hamiltonian that contains all momenta, including the temporal ones. Such a complete, or total, Hamiltonian $H_t$ will describe the time-evolution of all $d (d + 1)$ dynamical variables of the problem $\{ g_{\alpha\beta}, p^{\mu\nu} \}$ and of all functions of these variables, including the canonical Hamiltonian $H_C$, new coordinates and momenta introduced by some canonical transformation, etc. Formally, this total Hamiltonian can be derived from our quadratic-linear Lagrangian $L_{\Gamma - \Gamma}$ by using the Legendre transform described in detail in Section III. However, for our quadratic-linear Lagrangian $L_{\Gamma - \Gamma}$, Eq.(\ref{LGGvel}), the Legendre transform works with some singularities. The two main singularities must be mentioned here, since they play crucial roles in Dirac's modification of the classical Hamilton method. First, as follows from the definition of momenta and from the general technique of Legendre transformations, we cannot obtain, in principle, explicit expressions for the velocities $v_{\gamma} (= g_{0\gamma,0})$ written in terms of the momenta $p^{0\gamma}$, and vice versa. Instead, we obtain the following algebraic equations: $p^{0 \gamma} \approx f(g_{\alpha\beta}, g^{0 \gamma})$, or $\phi^{0 \gamma} = p^{0 \gamma} - f(g_{\alpha\beta}, g^{0 \gamma}) \approx 0$, which are called the primary constraints (see, e.g., \cite{Dir64} and references therein).
Second, in the procedure of the Legendre transformation each such momentum must be multiplied by the corresponding velocity $g_{0\gamma,0} (= v_{\gamma})$, which is not a dynamical variable of our Hamiltonian method. This velocity is rather an (arbitrary) parameter of the updated Legendre procedure. Thus, we have derived the total Hamiltonian $H_t$, Eq.(\ref{eq1}), which is written as the sum of the canonical Hamiltonian $H_C$ and the primary constraints $\phi^{0 \alpha}$. The coefficients in front of the primary constraints equal the corresponding velocities $v_{\alpha}$, i.e., $H_t = H_C + g_{0 0,0} \phi^{0 0} + 2 g_{0 k,0} \phi^{0 k} = H_C + v_{\alpha} \phi^{0 \alpha}$. Now, the time-evolution of any dynamical variable, or of any function/functional which depends upon the complete set of dynamical variables $\{ g_{\alpha\beta}, p^{\mu\nu} \}$, is determined by the Poisson bracket of this variable (or function) with the total Hamiltonian $H_t$, e.g., $g_{\alpha\beta,0} = [ g_{\alpha\beta}, H_t ]$. Note that this new (total) Hamiltonian of the metric gravity acts in the $d (d + 1)$-dimensional space of the dynamical variables, in contrast with the canonical Hamiltonian $H_C$, which formally operates in the $d (d - 1)$-dimensional space of the space-like dynamical variables. In general, the introduction of the new Hamiltonian $H_t$ always brings some new motions that did not exist in the original Hamiltonian system with the canonical Hamiltonian $H_C$. Immediately, the two following questions arise: (1) what is the sense of these `additional' motions, and (2) how can they affect the actual motions of our field, which are determined by the canonical Hamiltonian $H_C$? To understand this and answer the questions raised, let us consider the time-evolution of the canonical Hamiltonian $H_C$.
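The way primary constraints emerge from a Lagrangian that is linear in some of the velocities, and the cancellation of the undetermined velocity in $H_C$, can be reproduced in a two-dimensional toy model. A SymPy sketch (the Lagrangian below is an illustrative stand-in for $L_{\Gamma - \Gamma}$, not derived from it):

```python
import sympy as sp

q1, q2, qd1, qd2, p1, p2 = sp.symbols('q1 q2 qdot1 qdot2 p1 p2')

# Toy Lagrangian: quadratic in qdot1 but LINEAR in qdot2,
# mimicking L_{Gamma-Gamma} (quadratic in g_{mn,0}, linear in g_{0 gamma,0})
L = sp.Rational(1, 2)*qd1**2 + q1*qd2

# Momenta from the Legendre transformation
mom1 = sp.diff(L, qd1)                  # = qdot1 -> solvable for qdot1
mom2 = sp.diff(L, qd2)                  # = q1    -> qdot2 has dropped out
assert sp.solve(sp.Eq(p1, mom1), qd1) == [p1]
assert qd2 not in mom2.free_symbols     # the velocity qdot2 cannot be recovered

# The second momentum relation is an algebraic (primary) constraint
phi = p2 - mom2                         # phi = p2 - q1 ~ 0

# Canonical Hamiltonian: the undetermined velocity qdot2 cancels out
H_C = sp.simplify((mom1*qd1 + mom2*qd2 - L).subs(qd1, p1))
assert qd2 not in H_C.free_symbols
assert H_C == p1**2/2
```

The total Hamiltonian of this toy model is then $H_t = H_C + v\,\phi$, with the velocity $v$ entering as an arbitrary parameter, exactly as in the metric-gravity construction above.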
First of all, we can write the following general formula which describes the time-evolution of the canonical Hamiltonian \begin{eqnarray} H_{C}(t + \Delta) = H_{C}(t) + \frac{\Delta}{1!} \Bigl(\frac{d H_{C}}{d t}\Bigr) + \frac{\Delta^{2}}{2!} \Bigl(\frac{d^{2} H_{C}}{d t^{2}}\Bigr) + \frac{\Delta^{3}}{3!} \Bigl(\frac{d^{3} H_{C}}{d t^{3}}\Bigr) + \ldots \; \; , \; \label{H-CDelta} \end{eqnarray} where $\Delta$ is a small time interval. In this equation the first-order time derivative of $H_C$ is written in the form: \begin{eqnarray} \frac{d H_{C}}{d t} = [ H_C, H_t ] = [ H_C, H_C + v_{\alpha} \phi^{0\alpha} ] = v_{\alpha} [ H_C, \phi^{0\alpha} ] = - v_{\alpha} \; \chi^{0\alpha} \; \; , \; \label{1stDer} \end{eqnarray} where $\phi^{0\alpha}$ and $\chi^{0\alpha}$ are the primary and secondary first-class constraints, respectively. The explicit formulas for the $\phi^{0\alpha}$ and $\chi^{0\alpha}$ constraints are presented above (see Eqs.(\ref{PrimetD}) and (\ref{Dirchi})). Also, in this equation and everywhere below, $v_{\alpha} (= g_{0\alpha,0})$ denotes the velocity of the temporal $(0\alpha)$-component of the metric tensor, which is, in principle, arbitrary. In Dirac's theory this and other similar velocities, e.g., $v_{\beta} (= g_{0\beta,0}), v_{\gamma} (= g_{0\gamma,0})$, etc., are considered as arbitrary parameters of the method. The second time-derivative of the canonical Hamiltonian $H_C$ is \begin{eqnarray} \frac{d^{2} H_{C}}{d t^{2}} = \Bigl[ \frac{d H_{C}}{d t}, H_t \Bigr] = [ - v_{\alpha} \chi^{0\alpha}, H_C + v_{\beta} \phi^{0\beta} ] = - v_{\alpha} D^{\alpha}_{c} + \frac12 \; v_{\alpha} v_{\beta} \; g^{\alpha\beta} \chi^{0 0} \; \; \; \label{2ndDer} \end{eqnarray} where $D^{\alpha}_{c}$ is the $\alpha-$component of the Dirac closure (see Eq.(\ref{DirclD})), while $\chi^{0 0}$ is the secondary constraint defined above (see Eq.(\ref{Dirchi})). Note that the sign of the last term follows from Eq.(\ref{phichi}).
The third time-derivative of the canonical Hamiltonian takes the form \begin{eqnarray} \frac{d^{3} H_{C}}{d t^{3}} = \Bigl[ \frac{d^{2} H_{C}}{d t^{2}}, H_t \Bigr] = - v_{\alpha} [ D^{\alpha}_{c}, H_C ] &+& \frac12 v_{\alpha} v_{\beta} [ g^{\alpha\beta} \chi^{0 0}, H_C ] + \frac12 v_{\alpha} v_{\beta} v_{\gamma} [ g^{\alpha\beta} \chi^{0 0}, \phi^{0 \gamma} ] \nonumber \\ &-& v_{\alpha} v_{\gamma} [ D^{\alpha}_{c}, \phi^{0\gamma} ] \; \; . \; \label{3rdDer} \end{eqnarray} In principle, this chain of time derivatives $\frac{d^{n} H_{C}}{d t^{n}}$ is infinite (in contrast with the $n-$dimensional Maxwell electrodynamics \cite{FroUni}), but we have to note that all values in the right-hand sides of these equations are always represented as finite, linear (or quasi-linear) combinations of the secondary, first-class constraints only. Furthermore, the coefficients in front of each term in these expressions depend upon the $v_{\alpha}, v_{\beta}, v_{\gamma}$ and other similar velocities, which ``are completely arbitrary and at our disposal'' \cite{Dir64}. In other words, these velocities are arbitrary parameters in Dirac's modification of the classical Hamilton method. It is clear that transformations which depend upon arbitrary parameters cannot affect the actual (Hamiltonian) motion of the original dynamical system, e.g., of a free gravitational field in our case. Instead, they produce some changes in the Hamiltonian dynamical variables which do not correspond to a change of physical state. The generators of such `fictional' transformations are the secondary first-class constraints (as Dirac predicted in \cite{Dir64}). This follows directly from Eqs.(\ref{1stDer}) - (\ref{3rdDer}) and other similar equations for the higher-order derivatives in that chain. In field theory similar transformations of the dynamical variables are well known, and in earlier papers they were called gauge transformations, or simply gauges.
Dirac was able to obtain and write (see, e.g., \cite{Dir64}) all essential equations, both for the actual motion and for the corresponding gauge generators, in the unified form of Hamilton equations. This explains the overall significance of Dirac's modification of the classical Hamilton method. In Dirac's method the complete system of Hamiltonian equations for the metric gravity is written in the form \begin{eqnarray} \frac{d g_{p q}}{d t} = g_{p q, 0} = [ g_{p q}, H_C ] \; \; \; {\rm and} \; \; \; \frac{d p^{m n}}{d t} = \Bigl(p^{m n}\Bigr)_{,0} = [ p^{m n}, H_C ] \; \; \; . \label{metrgr1} \end{eqnarray} These $d (d - 1)$ Hamilton equations describe the actual motion of a free gravitational field. Solutions of these equations cannot become $v-$dependent at any moment of the time-evolution (see discussion above). In addition to these equations we also have $d$ Hamilton equations which describe the time-evolution of the primary constraints: \begin{eqnarray} \frac{d \phi^{0 \alpha}}{d t} = [ \phi^{0 \alpha}, H_C ] = \chi^{0 \alpha} \; \; , \; \; \label{metrgr2} \end{eqnarray} where $\chi^{0 \alpha}$ are the secondary constraints. The following group of $d$ Hamilton equations describes the time-evolution of the secondary constraints: \begin{eqnarray} \frac{d \chi^{0 \alpha}}{d t} = [ \chi^{0 \alpha}, H_C ] = D^{\alpha}_{c} \; \; , \; \; \; \label{metrgr3} \end{eqnarray} where $D^{\alpha}_{c} = a^{\alpha}_{\mu} (g) \chi^{0 \mu} + b^{\alpha}_{\mu} (g) \Bigl( f^{k}(g) \chi^{0 \mu} \Bigr)_{,k}$ is the $\alpha-$component of the Dirac closure, which is a quasi-linear combination of the secondary constraints $\chi^{0 \sigma}$ and some total spatial derivatives of expressions which also include the same secondary constraints. The entire classical theory of the free gravitational field (in metric gravity) is summed up in these Dirac equations, Eqs.(\ref{metrgr1}) - (\ref{metrgr3}), written here in a manifestly Hamiltonian form.
Note also that there is a simple procedure which allows one to drastically simplify the explicit form of the Dirac closure in the metric gravity. Indeed, in metric gravity on the shell of primary constraints we can always determine $d$ field-dependent coefficients ${\cal C}^{\alpha}_{\beta}(g)$ for which the following equations are satisfied: \begin{eqnarray} [ \chi^{0 \alpha} + {\cal C}^{\alpha}_{\beta} \phi^{0 \beta} , H_C ] = \Lambda_{\alpha} \chi^{0 \alpha} \; \; , \; \; \; \label{Frolov1} \end{eqnarray} where $\alpha = 0, 1, \ldots, d - 1, \beta = 0, 1, \ldots, d - 1$ (no summation over $\alpha$) and $\Lambda_{\alpha}(g)$ are some algebraic, field-dependent expressions, which are often called `eigenvalues' (or factor-eigenvalues) of the Dirac closure. Derivation of the equations for the unknown ${\cal C}^{\alpha}_{\beta}(g)$ coefficients in Eq.(\ref{Frolov1}) is straightforward. In this procedure the Dirac closure becomes `diagonal' and each component of the Dirac closure, e.g., $D^{\alpha}_{c}$, always contains only one secondary constraint, e.g., $\chi^{0 \alpha}$ in Eq.(\ref{Frolov1}). All secondary constraints in this procedure are uniquely determined as factor-eigenvectors which are defined on the shell of primary constraints. In this version of Dirac's approach little needs to be said to describe the internal structure of the Dirac closure. Also, it is important to remember that in metric gravity we always have $[ \phi^{0 \alpha}, \phi^{0 \beta} ] = 0$, which means the pair-wise commutativity of the primary constraints. These generalized Hamilton equations, Eqs.(\ref{metrgr1}) - (\ref{metrgr3}), form a complete and unambiguous set of equations which govern the behaviour of a free gravitational field in the $d (d + 1)-$dimensional space of dynamical variables, or in the original $d-$dimensional Riemann space. The Hamiltonian equations from the first group, Eq.(\ref{metrgr1}), are the canonical Hamilton equations of actual motion for the true dynamical variables.
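At its algebraic core, the diagonalization step in Eq.(\ref{Frolov1}) is a small linear-algebra problem. The following sympy sketch assumes a toy two-constraint model in which, on the primary shell, $[\chi^{a}, H_C] = M^{a}_{\; b} \chi^{b}$ and $[\phi^{a}, H_C] = \chi^{a}$ with a generic closure matrix $M$ (these relations and the matrix $M$ are assumptions made purely for illustration):

```python
import sympy as sp

# toy two-constraint model: on the primary shell we ASSUME
#   [chi^a, H_C] = M^a_b chi^b   and   [phi^a, H_C] = chi^a
M = sp.Matrix(2, 2, sp.symbols('m00 m01 m10 m11'))

# then [chi^a + C^a_b phi^b, H_C] = (M + C)^a_b chi^b, and the requirement
# analogous to Eq.(Frolov1) reads  M + C = diag(Lambda_0, Lambda_1)
Lam = [M[0, 0], M[1, 1]]          # the factor-eigenvalues of the closure
C = sp.diag(*Lam) - M             # coefficients cancelling the off-diagonal part

assert (M + C) == sp.diag(*Lam)   # the redefined closure is diagonal
```

In this linear toy model the coefficients are simply the off-diagonal entries of $M$ with reversed sign; in metric gravity the ${\cal C}^{\alpha}_{\beta}(g)$ are field-dependent, but the counting of conditions is the same.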
The analogous equations from the second group, Eqs.(\ref{metrgr2}) - (\ref{metrgr3}), are the Hamilton equations for the gauge generators. These equations determine the actual gauge generators for the given dynamical system, i.e., for the free gravitational field in our case. All equations from the second group describe certain changes in the dynamical variables, i.e., coordinates and momenta, which do not affect the real physical state. Thus, our original $d (d + 1)$-dimensional space of dynamical variables in the metric gravity splits into the $d (d - 1)$-dimensional space of dynamical variables which describe actual motions, and the $2 d$-dimensional space of variables which are transformed in some way with time, although this produces no changes in the real physical state. Formally, we can write this in the form $S[d (d + 1)] = S[d (d - 1)] \oplus S[2 d]$, where all three spaces are even-dimensional. If an additional temporal variable $t$ is introduced in our analysis, then all three spaces become odd-dimensional and the Hamilton method works perfectly in each of them. Note that the Hamilton equations in the form of Eqs.(\ref{metrgr1}) - (\ref{metrgr3}) are more useful and informative for practitioners of field theory than the equivalent original system of the $d (d + 1)$ Hamilton equations: \begin{eqnarray} \frac{d g_{\alpha\beta}}{d t} = g_{\alpha\beta, 0} = [ g_{\alpha\beta}, H_t ] \; \; \; {\rm and} \; \; \; \frac{d p^{\mu\nu}}{d t} = \Bigl(p^{\mu\nu}\Bigr)_{,0} = [ p^{\mu\nu}, H_t ] \; \; \; . \label{metrgrD} \end{eqnarray} The replacement of this system of Hamilton equations by the much more useful system of slightly different Hamilton equations, Eqs.(\ref{metrgr1}) - (\ref{metrgr3}), is the main advantage of Dirac's modifications of the classical Hamilton method. Another advantage follows from the fact that the governing equations for all gauge generators are also written in the form of Hamilton equations.
The third advantage is obvious: from now on, all calculations in the metric gravity are reduced to analytical calculations of Poisson brackets only. Finally, we can formulate a complete and purely formal criterion of canonicity for a transformation between any two equivalent Hamiltonian formulations of the metric gravity. Based on the arguments and equations presented in this and previous Sections, the universal criterion of canonicity for the metric gravity can be formulated in the following form. A transformation of the dynamical Hamilton variables in metric gravity is canonical if (and only if) it transforms our original system of Hamilton equations, Eqs.(\ref{metrgr1}) - (\ref{metrgr3}), into a new system of similar Hamilton equations: \begin{eqnarray} & & \frac{d \tilde{g}_{p q}}{d t} = \tilde{g}_{p q, 0} = [ \tilde{g}_{p q}, \tilde{H}_C ] \; \; \; {\rm and} \; \; \; \frac{d \tilde{p}^{m n}}{d t} = \Bigl(\tilde{p}^{m n}\Bigr)_{,0} = [ \tilde{p}^{m n}, \tilde{H}_C ] \; \; , \; \; \label{ametrgr} \\ & & \frac{d \tilde{\phi}^{0 \alpha}}{d t} = [ \tilde{\phi}^{0 \alpha}, \tilde{H}_C ] = \tilde{\chi}^{0 \alpha} \; \; \; {\rm and} \; \; \; \frac{d \tilde{\chi}^{0 \alpha}}{d t} = [ \tilde{\chi}^{0 \alpha}, \tilde{H}_C ] = \tilde{D}^{\alpha}_{c} \; \; , \; \; \label{bmetrgr} \end{eqnarray} where the sign $\; \tilde{} \;$ designates a new variable and/or function, while all new functions $\tilde{H}_C, \tilde{\chi}^{0 \alpha}$ and $\tilde{D}^{\alpha}_{c} = \tilde{a}^{\alpha}_{\mu} (\tilde{g}) \tilde{\chi}^{0 \mu} + \tilde{b}^{\alpha}_{\mu}(\tilde{g}) \Bigl( \tilde{f}^{k}(\tilde{g}) \tilde{\chi}^{0 \mu} \Bigr)_{,k}$, which appear in these equations, must have the same structure as the old functions $H_C, \chi^{0 \alpha}$ and $D^{\alpha}_{c}$ in Eqs.(\ref{metrgr1}) - (\ref{metrgr3}).
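The first part of any such criterion, the preservation of the fundamental Poisson brackets, is easy to check mechanically. As a toy finite-dimensional sketch (a momentum shift by the gradient of an arbitrary function $F(q)$, a transformation chosen here purely for illustration and not taken from the gravitational case), sympy verifies that all brackets keep their canonical values:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
qs, ps = [q1, q2], [p1, p2]

def pb(f, g):
    """Poisson bracket [f, g] on a toy 4-dimensional phase space."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(qs, ps))

F = q1**2*q2                                      # arbitrary generating function F(q)
Q = [q1, q2]                                      # unchanged coordinates ...
P = [p1 + sp.diff(F, q1), p2 + sp.diff(F, q2)]    # ... with gradient-shifted momenta

# first part of the canonicity criterion: brackets keep their canonical values
for i in range(2):
    for j in range(2):
        assert sp.simplify(pb(Q[i], Q[j])) == 0
        assert sp.simplify(pb(P[i], P[j])) == 0
        assert sp.simplify(pb(Q[i], P[j]) - (1 if i == j else 0)) == 0
```

For a constrained system such as metric gravity this bracket check is necessary but not sufficient: the second part of the criterion, the form-invariance of the constraint equations, must be verified separately.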
This new system of equations represents the form-invariance of the Hamilton equations derived by Dirac for the metric gravity, which is a constrained dynamical system with first-class constraints only. Also, for a true canonical transformation in the metric gravity the following equations for the Poisson brackets must be obeyed: \begin{eqnarray} & & [ \tilde{g}_{\alpha \beta}, \tilde{g}_{\mu \nu} ] = 0 \; \; , \; \; [ \tilde{g}_{\alpha \beta}, \tilde{p}^{m n} ] = \Delta^{m n}_{\alpha \beta} \; \; , \; \; [ \tilde{p}^{m n}, \tilde{p}^{p q} ] = 0 \; \; , \; \; [ \tilde{\phi}^{0 \gamma}, \tilde{\phi}^{0 \sigma} ] = 0 \nonumber \\ & & [ \tilde{g}_{\mu \nu}, \tilde{\phi}^{0 \sigma} ] = \Delta^{0 \sigma}_{\mu \nu} \; \; \; , \; \; \; [ \tilde{p}^{m n}, \tilde{\phi}^{0 \sigma} ] = 0 \; \; . \; \; \label{Ametrgr} \end{eqnarray} This criterion of canonicity can be generalized to other Hamiltonian dynamical systems with first-class constraints. Note also that in this Section we discuss only one version of the complete Dirac approach \cite{Dir58}, which has been developed to deal with a free gravitational field in the metric gravity. Generalization of this procedure to other fields with non-trivial gauge invariance is also possible. For instance, the same approach works perfectly for a free electromagnetic field, even in higher dimensions (see, e.g., \cite{Dir64}, \cite{Tyut} and \cite{FroUni}). Our preliminary results indicate clearly that the quantum version of this approach is applicable (with some changes) to the modern unified electroweak theory. \subsection{On complete reverse recovery of the original field equations} In the previous Sections we have carefully derived the Hamiltonian equations for a free gravitational field, together with all primary and secondary first-class constraints. The main purpose of our analysis was to obtain the correct equations of motion (or time-evolution) of a free gravitational field and to determine all important gauge conditions.
Here the following question immediately arises: what are these correct Hamiltonian equations of motion, and where and how was the criterion of correctness established? The answer is clear: we have to recognize as correct only those Hamiltonian equations and first-class constraints which unambiguously lead us back to the original (or maternal) equations of motion already known for our field. For a free gravitational field the maternal field equations are the Einstein equations $G_{\alpha\beta} = 0$ (or $R_{\alpha\beta} = 0$) mentioned in Section II. For a free electromagnetic field (or EM-field, for short) the maternal equations are the Maxwell equations in vacuum. Therefore, any correct Hamiltonian approach for the EM-field must be able to produce the governing Maxwell equations at any spatial point ${\bf x}$ and at any moment of time $t$. To explain how this works, let us consider the Hamiltonian formulation for the Maxwell electromagnetic field in the $(n + 1)-$dimensional (flat) space-time. We can start directly from the explicit form of the corresponding $EM-$Hamiltonian (all missing details, definitions and notations can be found in \cite{Dir64} and \cite{FroUni}). Also, from the definition of momenta $B^{\mu} = F^{\mu 0}$, at this point we already have one primary constraint $\phi = B^{0} \approx 0$, since both the $F^{\mu \nu}$ and $F_{\mu \nu}$ tensors are always antisymmetric. The fundamental Poisson brackets are: $[ A_{\mu}({\bf x}), B^{\nu}({\bf x}^{\prime}) ] = g^{\nu}_{\mu} \delta^{n}({\bf x} - {\bf x}^{\prime}), [ A_{\mu}({\bf x}), A_{\nu}({\bf x}^{\prime}) ] = 0$ and $[ B^{\mu}({\bf x}), B^{\nu}({\bf x}^{\prime}) ] = 0$.
The Hamiltonian of a free electromagnetic field in the $(n + 1)$-dimensional space-time is written in the form \cite{Dir64}, \cite{FroUni}: \begin{eqnarray} H = \int \Bigl( \frac14 F^{p q} F_{p q} - \frac12 F^{p 0} F_{p 0} + F^{q 0} A_{0,q} \Bigr) d^{n}{\bf x} = \int \Bigl( \frac14 F^{p q} F_{p q} + \frac12 B^{p} B^{p} - A_{0} B^{p}_{,p} \Bigr) d^{n}{\bf x} \; \; , \; \label{HamltA3} \end{eqnarray} where all notations are exactly the same as in \cite{Dir64} and \cite{Fro2021}. The corresponding Hamiltonian density takes the form \begin{eqnarray} {\cal H} = \frac14 F^{p q} F_{p q} + \frac12 B^{p} B^{p} - A_{0} B^{p}_{,p} \; \; . \; \label{Hamltden1} \end{eqnarray} Integration by parts in the first term of the Hamiltonian, Eq.(\ref{HamltA3}), leads to the following expression for the Hamiltonian density, Eq.(\ref{Hamltden1}): \begin{eqnarray} {\cal H} = - \frac12 \Bigl(F^{p q}\Bigr)_{,q} A_{p} + \frac12 B^{p} B^{p} - A_{0} B^{p}_{,p} = \frac12 \Bigl( \frac{\partial^{2} A_{q}}{\partial x_{p} \partial x_{q}} - \frac{\partial^{2} A_{p}}{\partial x_{q} \partial x_{q}} \Bigr) A_{p} + \frac12 B^{p} B^{p} - A_{0} B^{p}_{,p} \; , \; \label{Hamltden3} \end{eqnarray} where $p = 1, 2, \ldots, n$ and $q = 1, 2, \ldots, n$, i.e., all these indexes are space-like. First of all, by determining the Poisson bracket $[ B^{0}, {\cal H}] = B^{p}_{,p}$ one finds the secondary constraint $\chi = B^{p}_{,p} \approx 0$. In standard notation this means that $\frac{\partial}{\partial t}\Bigl(\frac{\partial A_{p}}{\partial x_{p}}\Bigr) \approx 0$, or $\frac{\partial A_{p}}{\partial x_{p}} = C$, where $C$ is a numerical constant which does not depend upon ${\bf x}$ and/or $t$. This secondary constraint $\chi$ commutes with the Hamiltonian density ${\cal H}$, and the Dirac closure equals zero identically.
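The integration-by-parts step leading from Eq.(\ref{Hamltden1}) to Eq.(\ref{Hamltden3}) holds up to a discarded total spatial divergence, and can be verified symbolically. A quick sympy check in $n = 2$ spatial dimensions (with the sign convention $F_{pq} = A_{p,q} - A_{q,p}$ read off from the second form in Eq.(\ref{Hamltden3})):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
A = [sp.Function('A1')(x1, x2), sp.Function('A2')(x1, x2)]
X = [x1, x2]

# F_{pq} = A_{p,q} - A_{q,p}, the sign convention matching Eq.(Hamltden3)
F = [[sp.diff(A[p], X[q]) - sp.diff(A[q], X[p]) for q in range(2)]
     for p in range(2)]

quarter_FF = sp.Rational(1, 4)*sum(F[p][q]**2 for p in range(2) for q in range(2))
by_parts   = -sp.Rational(1, 2)*sum(sp.diff(F[p][q], X[q])*A[p]
                                    for p in range(2) for q in range(2))
divergence = sum(sp.diff(sp.Rational(1, 2)*sum(F[p][q]*A[p] for p in range(2)), X[q])
                 for q in range(2))

# (1/4) F^{pq} F_{pq}  =  -(1/2) (F^{pq})_{,q} A_p  +  total spatial divergence
assert sp.simplify(quarter_FF - by_parts - divergence) == 0
```

Under the loop integral over $d^{n}{\bf x}$ the divergence term drops out (for fields vanishing at spatial infinity), which is exactly why the two Hamiltonian densities generate the same equations of motion.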
By using the Hamiltonian density ${\cal H}$, Eq.(\ref{Hamltden3}), we obtain the following system of canonical Hamilton equations \begin{eqnarray} \frac{d A_{p}}{d t} = [ A_{p}, {\cal H} ] = \frac{\partial {\cal H}}{\partial B^{p}} = \frac12 \; (2 B^{p}) = B^{p} \; \; \; \label{caneq1} \end{eqnarray} and \begin{eqnarray} \frac{d B^{p}}{d t} = [ B^{p}, {\cal H} ] = - \frac{\partial {\cal H}}{\partial A_{p}} = - \frac12 \Bigl[ 2 \Bigl( \frac{\partial^{2} A_{q}}{\partial x_{q} \partial x_{p}} - \frac{\partial^{2} A_{p}}{\partial x_{q} \partial x_{q}} \Bigr)\Bigr] = \frac{\partial^{2} A_{p}}{\partial x_{q} \partial x_{q}} - \frac{\partial^{2} A_{q}}{\partial x_{q} \partial x_{p}} \; \; . \; \label{caneq2} \end{eqnarray} Combining these two equations, one finds \begin{eqnarray} \frac{d^{2} A_{p}}{d t^{2}} = \frac{\partial^{2} A_{p}}{\partial x_{q} \partial x_{q}} - \frac{\partial^{2} A_{q}}{\partial x_{q} \partial x_{p}} \; \; . \; \label{eqmot2} \end{eqnarray} Now, taking into account the condition which follows from the secondary constraint, $\frac{\partial A_{q}}{\partial x_{q}} = C$, we can reduce this equation to the form of the $n$-dimensional wave equation: \begin{eqnarray} \frac{\partial^{2} A_{p}}{\partial t^{2}} - \frac{\partial^{2} A_{p}}{\partial x_{q} \partial x_{q}} = 0 \; \; , \; {\rm or} \; \; \frac{\partial^{2} {\bf A}}{\partial t^{2}} - \Delta {\bf A} = 0 \; , \; \label{caneq3} \end{eqnarray} where ${\bf A} = (A_1, A_{2}, \ldots, A_n)$ is the $n-$dimensional vector potential of the EM-field. This is the wave-equation form of the Maxwell equations for a free electromagnetic field in the $(n + 1)-$dimensional space-time. The $n-$dimensional Laplace operator $\Delta$ in this equation is $\Delta = \frac{\partial^{2} }{\partial x_{q} \partial x_{q}}$. Thus, we have recovered the Maxwell equations of the free radiation field.
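As a consistency check of Eq.(\ref{eqmot2}) and of the wave equation that follows from it, one can substitute a transverse plane wave and verify both the secondary-constraint condition and the equation of motion symbolically. A sketch in $n = 2$, with an assumed wave vector $(3, 4)$, frequency $5 = |{\bf k}|$ and polarization orthogonal to ${\bf k}$ (all choices are illustrative):

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
f = sp.Function('f')                      # arbitrary wave profile
k1, k2, w = 3, 4, 5                       # w = |k| = sqrt(k1**2 + k2**2)
eps = (-k2, k1)                           # transverse polarization: k . eps = 0
phase = k1*x1 + k2*x2 - w*t
A = [e*f(phase) for e in eps]             # A_p = eps_p f(k.x - |k| t)

divA = sp.diff(A[0], x1) + sp.diff(A[1], x2)
assert sp.simplify(divA) == 0             # secondary constraint: dA_q/dx_q = C (= 0)

for p, Ap in enumerate(A):
    lap = sp.diff(Ap, x1, 2) + sp.diff(Ap, x2, 2)
    grad_div = sp.diff(divA, [x1, x2][p])
    # Eq.(eqmot2): d^2 A_p/dt^2 = Laplacian(A_p) - d(div A)/dx_p
    assert sp.simplify(sp.diff(Ap, t, 2) - lap + grad_div) == 0
```

For this transverse wave the gradient-of-divergence term vanishes identically, so Eq.(\ref{eqmot2}) and the reduced wave equation coincide, as the text states.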
Note here that if someone does not recognize any constraints at all, or if these constraints are determined with mistakes, then such a Hamiltonian formulation of the electromagnetic theory does not allow one to recover the corresponding Maxwell equations. Formally, in this case any relation with the original Maxwell theory of radiation is lost. In reality, this means that our Hamiltonian, Eq.(\ref{HamltA3}), does not describe the Maxwell EM-field in vacuum, or that this Hamiltonian formulation is not valid for the Maxwell electromagnetic field and cannot be used in applications to this field. This is the principle of complete reverse recovery of the original field equations in application to the Maxwell theory of the EM-field. Now, consider the Hamiltonian formulation of the metric gravity which was developed by Dirac in \cite{Dir58}. In contrast with a free Maxwell EM-field, for a free gravitational field everything becomes significantly more complicated, but our principle of complete reverse recovery works in this case too. Recently, we have shown that the equations for the second-order temporal derivatives of the space-like covariant components $g_{m n}$ of the metric tensor, which follow from the Hamilton equations obtained in the Dirac Hamiltonian formulation of the metric gravity, essentially coincide with the corresponding Einstein equations for the same components. However, at that time this article was already completed, and it was not possible to add a few new chapters to it. In addition, it takes a long time to transform a set of difficult formulas into a logically perfect text. Therefore, our results in this direction will be published later and elsewhere. Here we just want to present a few important details of the procedure used.
Note that there are three peculiarities of the Einstein equations for a free gravitational field ($R_{\alpha \beta} = 0$ plus $d$ additional conditions $R^{\gamma}_{\alpha ,\gamma} = \frac12 \frac{\partial R}{\partial x^{\alpha}}$) which are crucially important for our present purposes. First, these Einstein equations are written as a system of differential equations which contains the first- and second-order temporal derivatives of the metric tensor $g_{\alpha\beta}$. Second, all second-order temporal derivatives of the $g_{0 \beta}$ and $g_{\beta 0}$ components of the metric tensor cancel out from these Einstein equations. Third, the second-order temporal derivatives of the spatial components of the metric tensor are explicitly included in the Einstein equations. In fact, each second-order derivative $\frac{d^{2} g_{m n}}{d t^{2}} = \frac{d^{2} g_{m n}}{d x^{2}_{0}}$ arises in the Einstein equations only from the $R_{0 m 0 n}$ components of the Riemann curvature tensor. From here it is easy to find that all these second-order temporal derivatives $\frac{d^{2} g_{m n}}{d t^{2}}$ enter the Einstein equations only as separate terms, and each of these terms has the same numerical coefficient $-\frac12$ in front. It follows that one can reduce the Einstein equations for the $g_{m n}$ components to the form $\frac{d^{2} g_{m n}}{d t^{2}} = Q(g_{p q}, \frac{d g_{p q}}{d t}, g_{0 \alpha}, \frac{d g_{0 \alpha}}{d t})$. These equations must coincide with (or be equivalent to) the analogous equations for the $\frac{d^{2} g_{m n}}{d t^{2}}$ derivatives which follow from the Hamilton equations in the Dirac formulation of the Hamiltonian metric gravity. In fact, we have to calculate the Poisson bracket $\frac{d^{2} g_{m n}}{d t^{2}} = [[ g_{m n}, H_t ], H_t ] = [[ g_{m n}, H_C ], H_t ]$ and exclude all momenta by using the Hamilton equations, as well as the primary and secondary first-class constraints derived in the Dirac Hamiltonian formulation (see above).
Such calculations are quite complex, extremely time-consuming and very sensitive, since any mistake made either in the Hamiltonian or in one of the first-class constraints substantially complicates further calculations. Moreover, after such a mistake one can lose any relation with the original (or maternal) theory and cannot move forward until the mistake is found and corrected. Nevertheless, after many weeks of calculations, we are happy to report that the Hamiltonian formulation of the metric gravity developed by Dirac in \cite{Dir58} successfully passed our recovery test (at least partially). Namely, by using this Hamiltonian formulation we are able to recover the original field equations for all covariant space-like components $g_{m n}$ of the metric tensor. An alternative Hamiltonian formulation of the metric gravity which obviously fails this test is mentioned in Section IX. To conclude this discussion we have to note that any correct Hamiltonian formulation of an arbitrary, in principle, field theory must reproduce (exactly and unambiguously) the original governing equations of this field. The correct Hamilton equations of motion and the explicit form of all first-class constraints are crucially important to reach this goal. If this is not the case, then such a Hamiltonian theory is wrong and has nothing to do with the maternal field theory. \section{Invariant integrals of the metric gravity} This Section is a central part of our study, since here we define a number of integral invariants of the metric gravity, i.e., we reach the very summit of classical Hamiltonian mechanics. Obviously, in one Section we cannot even outline the main problems which exist in the theory of integral invariants and its applications to the Hamiltonian metric gravity. Therefore, the following presentation of this theory will be very brief.
Moreover, here we restrict ourselves only to a description of the extension of integral invariants to dynamical (Hamiltonian) systems with constraints. More details of the theory of integral invariants and its applications to the Hamiltonian formulations of metric gravity will be presented in our next article \cite{Fro2023}. First of all, we need to define the one-dimensional, or line, integrals in the metric gravity. In general, the one-dimensional integral in multi-dimensional Riemann spaces is defined as follows \begin{eqnarray} \int \pi^{\alpha\beta} \; d g_{\alpha\beta} &=& \int \pi^{\alpha\beta} \; \Bigl(\frac{\partial g_{\alpha\beta}}{\partial x^{\gamma}}\Bigr) dx^{\gamma} = \int \pi^{\alpha\beta} \Bigl( \Gamma_{\alpha,\beta \gamma} + \Gamma_{\beta,\alpha \gamma} \Bigr) dx^{\gamma} \nonumber \\ &=& \int \pi^{\alpha\beta} \Bigl( g_{\alpha\lambda} \Gamma^{\lambda}_{\beta \gamma} + g_{\beta\lambda} \Gamma^{\lambda}_{\alpha \gamma} \Bigr) dx^{\gamma} \; \; , \; \; \label{7eq1} \end{eqnarray} where $\Gamma_{\alpha,\beta \gamma}$ are the Christoffel symbols of the first kind, while $\Gamma^{\alpha}_{\beta \gamma}$ are the Christoffel symbols of the second kind (see, e.g., \cite{Kochin}). The integrand in this integral is not a tensor. This means that the line (or one-dimensional) integral substantially depends on the curve along which it is calculated, and also on the initial and final points chosen on this curve. If the start and end points coincide with each other, then such an integral is called a closed loop integral, or an integral taken along a closed loop. Below, similar closed loop integrals are designated by the sign $\oint$. In general, the complete theory of line (or one-dimensional) integrals in multi-dimensional Riemann spaces is very complex. However, for our current analysis of the integral invariants in the Hamiltonian formulations of metric gravity we do not need to use the formula, Eq.(\ref{7eq1}).
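The identity behind the second equality in Eq.(\ref{7eq1}), $g_{\alpha\beta,\gamma} = \Gamma_{\alpha,\beta \gamma} + \Gamma_{\beta,\alpha \gamma}$, follows directly from the definition of the Christoffel symbols of the first kind. A short symbolic check for a generic symmetric $2\times 2$ metric (a sketch, not tied to any particular solution):

```python
import sympy as sp

x = sp.symbols('x0 x1')
# generic symmetric metric: g[a, b] = g[b, a] are arbitrary functions of x
g = sp.Matrix(2, 2, lambda a, b: sp.Function(f'g{min(a, b)}{max(a, b)}')(*x))

def Gamma1(a, b, c):
    """Christoffel symbol of the first kind, Gamma_{a,bc}."""
    return (sp.diff(g[a, b], x[c]) + sp.diff(g[a, c], x[b])
            - sp.diff(g[b, c], x[a]))/2

# identity used in Eq.(7eq1): g_{ab,c} = Gamma_{a,bc} + Gamma_{b,ac}
for a in range(2):
    for b in range(2):
        for c in range(2):
            assert sp.simplify(sp.diff(g[a, b], x[c])
                               - Gamma1(a, b, c) - Gamma1(b, a, c)) == 0
```

The check is purely algebraic: the two symmetrized Christoffel symbols reproduce the partial derivative of the metric term by term, for any dimension, not just $d = 2$.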
In fact, all line integrals can be considered in the $\frac{d (d + 1)}{2}$-dimensional pseudo-Euclidean (orthogonal) space, which is formally identical (or isomorphic) to our original $d-$dimensional Riemann space-time (for more details see, e.g., \cite{Kochin}, \cite{Dash}). This is very good news, since all integrals and integral forms defined in multi-dimensional pseudo-Euclidean spaces can be handled in a familiar way (see, e.g., \cite{HCartan}, \cite{Fland}). In particular, by applying the usual definition of closed loop integrals in pseudo-Euclidean spaces, we can consider the two following integrals in the $\frac{d (d - 1)}{2}$-dimensional space: \begin{eqnarray} I = \oint \Bigl[ \pi^{m n} \; d g_{m n} - H_C dt \Bigr] \; \; \; {\rm and} \; \; \; I_{P} = \oint \pi^{m n} \; d g_{m n} \; \; , \; \; \label{7eq2} \end{eqnarray} where $\pi^{m n}$ are the space-like components of momenta, while $H_C$ is the canonical Hamiltonian which has been defined in Section III. For now we restrict ourselves to the consideration of the space-like components of the momenta $\pi^{m n}$ and coordinates $g_{m n}$. In other words, below we shall deal with the $\frac{d (d - 1)}{2}$-dimensional Euclidean position space (and the $d (d - 1)$-dimensional phase space) instead of the original $(d - 1)$-dimensional sub-space of our original $d$-dimensional Riemann space-time. The coordinates in this position space coincide with the covariant components of the fundamental space-like tensor $g_{mn}$. The first integral in Eq.(\ref{7eq2}) is called the Poincare-Cartan integral invariant, while the second integral is the Poincare integral invariant \cite{Cartan}, \cite{Poin}, which is often called the main integral invariant of mechanics (or Hamiltonian mechanics).
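The defining property of the Poincare invariant $I_P$ — its conservation when every point of a closed loop of initial states is dragged along the Hamiltonian flow — is easy to see numerically on a toy one-degree-of-freedom system. The following sketch uses a harmonic oscillator with its exact flow (a rotation in phase space), not the gravitational phase space:

```python
import numpy as np

def loop_integral(q, p):
    """Discrete closed-loop integral  oint p dq  (trapezoidal rule)."""
    dq = np.roll(q, -1) - q
    return np.sum(0.5*(p + np.roll(p, -1))*dq)

# closed loop of initial states for H = (p^2 + q^2)/2: a circle of radius 0.3
s = np.linspace(0.0, 2.0*np.pi, 2001)[:-1]
q0, p0 = 1.0 + 0.3*np.cos(s), 0.5 + 0.3*np.sin(s)

I0 = loop_integral(q0, p0)                # ~ -pi*0.3^2 for this orientation
for t in (0.7, 2.4, 9.1):
    # exact Hamiltonian flow of the oscillator: q' = p, p' = -q
    qt =  q0*np.cos(t) + p0*np.sin(t)
    pt = -q0*np.sin(t) + p0*np.cos(t)
    assert abs(loop_integral(qt, pt) - I0) < 1e-8
```

Because the oscillator flow is an area-preserving linear map, the discrete loop integral is conserved to round-off error; for a non-Hamiltonian velocity field (e.g., one with friction) the same test fails, which is the content of the statement proved below.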
This Poincare integral invariant is of fundamental value for the Hamiltonian formulation of metric gravity, as well as for the general theory of canonical transformations in metric gravity and for the analysis and solution of other gravitational problems. Indeed, it is relatively easy to prove the following statement. If, for some system of first-order differential equations written for the space-like components of the metric tensor $g_{m n}$ and momenta $\pi^{m n}$: \begin{eqnarray} \frac{d g_{m n}}{d t} = Q_{mn}(t, g_{a b}, \pi^{p q}) \; \; \; , \; \; \; \frac{d \pi^{m n}}{d t} = P^{m n}(t, g_{a b}, \pi^{p q}) \; \; \; \label{7eq3} \end{eqnarray} the Poincare integral $I_P$, Eq.(\ref{7eq2}), is invariant, then this system of equations, Eq.(\ref{7eq3}), is Hamiltonian at every moment of time between $t - \delta$ and $t + \delta$, where $\delta$ is a very small positive number. Here the term `Hamiltonian' means that the functions $Q_{mn}(t, g_{a b}, \pi^{p q})$ and $P^{m n}(t, g_{a b}, \pi^{p q})$ on the right-hand side of Eqs.(\ref{7eq3}) are represented as the partial derivatives (or Poisson brackets) of some scalar function $H$, i.e., \begin{eqnarray} Q_{mn}(t, g_{a b}, \pi^{p q}) = \frac{\partial H}{\partial \pi^{mn}} = [ g_{mn}, H ] \; \; \; , \; \; \; P^{m n}(t, g_{a b}, \pi^{p q}) = - \frac{\partial H}{\partial g_{mn}} = [ \pi^{mn}, H ] \; \; , \; \label{7eq4} \end{eqnarray} where the notation $[ a, b]$ stands for the Poisson bracket defined by Eq.(\ref{PoisBrack}). A unique reconstruction of the explicit form of this function $H$ (or Hamiltonian) is not an easy task, but if we know that the Poincare-Cartan integral is also an integral invariant, then the unknown Hamiltonian exactly coincides \cite{Cartan} with the canonical Hamiltonian $H_C$ mentioned in the first integral of Eq.(\ref{7eq2}). Let us now discuss the following fundamental question.
We shall assume that the integral $I$, Eq.(\ref{7eq2}), is an integral invariant for our dynamical system and that $H_C$ is the Hamiltonian which describes the actual motion of this system, i.e., the equations of time-evolution take the form of the Hamilton equations, Eq.(\ref{metrgr1}), for our $d (d-1)$ dynamical variables $\{ g_{m n}, \pi^{p q} \}$, where $[ g_{m n}, g_{p q} ] = 0, [ \pi^{m n}, \pi^{p q} ] = 0$ and $[ g_{m n}, \pi^{p q} ] = \Delta^{p q}_{m n}$. Now we want to extend our phase space by adding a set of $2 d$ new dynamical variables $\{ g_{0 \gamma}, \pi^{0 \gamma} \}$. Here we assume the usual permutation symmetry for all `additional' coordinates and momenta: $g_{0 \gamma} = g_{\gamma 0}$ and $\pi^{0 \gamma} = \pi^{\gamma 0}$. The total dimension of this new phase space will be $d (d + 1)$, which corresponds to the $d-$dimensional Riemann space-time, and it is the main working space for general relativity and the metric gravity. We require that in this new extended phase space the integral defined by the last expression in Eq.(\ref{7eq2A}) must also be an integral invariant with the new Hamiltonian $H_t$. Furthermore, the new `extended' Hamiltonian $H_t$ must be closely related to the canonical Hamiltonian $H_C$ from Eq.(\ref{7eq2}).
Based on the formulas derived in Section III, we can transform the Poincare-Cartan integral invariant from Eq.(\ref{7eq2}) into the following form \begin{eqnarray} I = \oint \Bigl[ \pi^{m n} \; d g_{m n} &-& H_C dt \Bigr] = \oint \Bigl\{\Bigl[ \pi^{m n} \; d g_{m n} + \pi^{0 \gamma} \Bigl(\frac{d g_{0 \gamma}}{d t}\Bigr) d t \Bigr] - \Bigl[ H_C + \pi^{0 \gamma} \Bigl(\frac{d g_{0 \gamma}}{d t}\Bigr) \Bigr] d t \Bigr\} \nonumber \\ &=& \oint \Bigl[ \pi^{\alpha \beta} \; d g_{\alpha \beta} - \Bigl( H_C + v_{\gamma} \pi^{0 \gamma} \Bigr) dt \Bigr] = \oint \Bigl[ \pi^{\alpha \beta} \; d g_{\alpha \beta} - H_t dt \Bigr] \; \; , \; \; \label{7eq2A} \end{eqnarray} where $H_t = H_C + v_{\gamma} \pi^{0 \gamma} = H_C + g_{0 \gamma,0} \pi^{0 \gamma}$ and $g_{0 \gamma,0} = v_{\gamma}$ ($\gamma = 0, 1, \ldots, d - 1$) are the corresponding velocities, while $\pi^{0 \gamma}$ are the temporal momenta which must equal zero along each Hamilton trajectory of the actual motion. In other words, we have $d$ primary constraints $\pi^{0 \gamma} \approx 0$ in the metric gravity. Otherwise, i.e., if $\pi^{0 \gamma} \ne 0$ for some $\gamma$, then Eq.(\ref{7eq2A}) does not hold. Another crucial fact used in this transformation, Eq.(\ref{7eq2A}), follows from the formulas for the canonical Hamiltonian $H_C$, Eqs.(\ref{eq5}) and (\ref{DirH-C}), which do not contain any of the temporal momenta $\pi^{0 \gamma}$, although $H_C$ may include some of the $g_{0 \mu}$ and/or $g_{\mu 0}$ coordinates. If these conditions are obeyed, then from Eq.(\ref{7eq2A}) we can derive the following equality: \begin{eqnarray} \oint \Bigl[ \pi^{m n} \; d g_{m n} - H_C dt \Bigr] = I = \oint \Bigl[ \pi^{\alpha \beta} \; d g_{\alpha \beta} - H_t dt \Bigr] \; \; , \; \; \label{7eq3A} \end{eqnarray} which means that both of these integrals, i.e., the integrals on the left and right sides of this equation, are true integral invariants and their numerical values are equal to each other.
In other words, these two integral invariants coincide with each other, i.e., they are not independent, and for constrained dynamical systems we always have to deal with this complication. The formula, Eq.(\ref{7eq3A}), has a number of consequences for Hamiltonian formulations of the metric gravity, but here we consider just one of them. First, as follows from the left-hand side of Eq.(\ref{7eq3A}), the set of dynamical variables $\{ g_{mn}, \pi^{pq} \}$ will be canonical if the Poisson brackets between these dynamical variables coincide with the CUSPB, i.e., $[ g_{mn}, g_{pq}] = 0, [ g_{mn}, \pi^{pq}] = \Delta^{pq}_{mn}, [ \pi^{mn}, \pi^{pq}] = 0$ for all possible $(mn)-$ and $(pq)$-pairs. In other words, this is a necessary and sufficient condition of canonicity in this case. However, if we apply the same arguments to the integral on the right-hand side of Eq.(\ref{7eq3A}), we can only say that the exact coincidence of the Poisson brackets between dynamical variables $[ g_{\alpha\beta}, g_{\mu\nu}] = 0, [ g_{\alpha\beta}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta}, [ \pi^{\alpha\beta}, \pi^{\mu\nu}] = 0$ with the standard CUSPB values is only a necessary (but not sufficient!) condition of canonicity. In order to obtain the sufficient conditions of canonicity, we must also guarantee that all temporal momenta $\pi^{0 \gamma}$ and/or $\pi^{\gamma 0}$ do not change with time $t$ along the true Hamilton trajectories. This means that all time-derivatives of the temporal momenta must equal zero at all times, i.e., we have a number of additional equations such as $\pi^{0 \gamma} \approx 0, [ \pi^{0 \gamma}, H_t] \approx 0, [ [ \pi^{0 \gamma}, H_t], H_t] \approx 0$, etc. To satisfy all these equations for the primary, secondary and other constraints, we have to follow Dirac's modifications of the classical Hamilton method for constrained dynamical systems (see above).
Otherwise, if some of these conditions do not hold, then the numerical value of the integral on the right-hand side of Eq.(\ref{7eq3A}) differs from $I$, i.e., this integral is not invariant and we arrive at an obvious contradiction. This explains why the criteria of canonicity derived for constrained dynamical systems always include two parts: (a) coincidence of the Poisson brackets between dynamical variables with the standard CUSPB values, and (b) conservation of the algebra of first-class constraints. The latter presumes the form-invariance of all first-class constraints and of the Poisson brackets of these constraints with each other, with the canonical/total Hamiltonians and with other essential functions of dynamical variables. \section{Conclusions} Thus, we have investigated two different Hamiltonian formulations \cite{Dir58} and \cite{K&K} of the metric gravity in $d-$dimensional Riemann space, where $d \ge 3$. These two Hamiltonian formulations are related to each other by a canonical transformation of dynamical variables in the $d(d + 1)$-dimensional phase space, and each of them allows one to restore the complete $d-$dimensional diffeomorphism as the correct (and well known) gauge invariance of the free gravitational field in the metric gravity. By using the known canonical transformation between these two Hamiltonian formulations of the metric gravity we have investigated the basic properties of other similar canonical transformations and derived some useful criteria of canonicity for an arbitrary transformation of the Hamiltonian dynamical variables in the $d(d + 1)-$dimensional phase space.
The results of our study are important in numerous applications, since in the metric gravity canonical transformations of Hamiltonian dynamical variables are often used to simplify either the canonical Hamiltonian $H_C$ and/or the total Hamiltonian(s) $H_t$, or the secondary constraints, or to reduce the canonical Hamiltonian $H_C$ to some special form, e.g., to its normal form, which is well known in classical mechanics. In general, all criteria of canonicity for transformations of dynamical variables in the metric gravity require the exact coincidence of the Poisson (or Lagrange) brackets for the new and old dynamical (Hamiltonian) variables. Briefly, if the Poisson brackets of the new dynamical variables (expressed in the old dynamical variables) do not coincide with their canonical values, then such a transformation is not canonical. This is the universal criterion of canonicity known from the classical mechanics of Hamiltonian dynamical systems. However, in all Hamiltonian formulations of the metric gravity we always deal with constrained dynamical systems. Therefore, all criteria of canonicity which are valid for such systems must contain a second part, which deals with the algebra of first-class constraints, the form-invariance of the canonical/total Hamiltonian(s) and/or the form-invariance of the Hamilton equations. For instance, a true canonical transformation in the metric gravity must preserve the form-invariance of the Hamilton equations derived in Dirac's modification of the classical Hamilton method. This can be illustrated by the transformation of Eqs.(\ref{metrgr1}) - (\ref{metrgr3}) into Eqs.(\ref{ametrgr}) - (\ref{bmetrgr}) during our canonical transformation of the Hamiltonian dynamical variables. This is the first criterion of canonicity in the metric gravity, which is relatively simple and ready to be applied to actual problems.
The second, similar criterion \cite{FK&K} requires the exact coincidence of the total Hamiltonian $H_t$ and the preservation of the algebra of constraints in both (old and new) Hamiltonian formulations of the metric gravity. We have also reconsidered the modifications made by Dirac \cite{Dir58}, \cite{Dir50}, \cite{Dir64} in the classical Hamilton approach. It is shown that these modifications are crucial for improving the overall efficiency of the new Hamiltonian method for dynamical systems with constraints, including various physical fields with additional gauge conditions, or gauges, for short. The main advantage of the new Dirac approach is the possibility to write all governing equations in a unified, manifestly Hamiltonian form (see Eqs.(\ref{metrgr1}) - (\ref{metrgr3}) and Eqs.(\ref{ametrgr}) - (\ref{bmetrgr}) above). The original Dirac idea that all motions in Hamiltonian dynamical systems with first-class constraints can always be separated into actual motions and special motions along the constraints (or gauge-consistent motions) was extremely productive. Now, by using this Dirac modification of the classical Hamilton method, we can describe the time-evolution of a large number of actual and model fields. Furthermore, we can conjecture that the free fields which represent all currently known fundamental interactions can unambiguously be described by this version of the Hamiltonian method, which was originally developed and later modified by Dirac. In this study we have also considered the method of integral invariants and applied it to investigate canonical transformations between different Hamiltonian formulations of the metric gravity. This method was originally proposed and developed by Poincare and Cartan \cite{Cartan}. Since then it has been transformed into a very powerful approach, which is currently an indispensable tool in classical Hamiltonian mechanics.
In reality, the invariance of the Poincare-Cartan integral can be chosen as a foundation of the whole of Hamiltonian mechanics. Indeed, if this integral is invariant for some dynamical system, then such a system is Hamiltonian and its time-evolution is described by a system of Hamilton equations. For Hamiltonian dynamical systems with constraints the general theory of integral invariants must be modified, but its overall power still remains outstanding. Unfortunately, the limited space of this article did not allow us to discuss other important directions in the Hamiltonian formulations of the metric gravity. In particular, we could not consider the explicit derivation of the gauge generators which are defined by chains of the first-class constraints \cite{K&K}, \cite{Cast} (see also \cite{Fro2021}). Also, in this study we did not even mention the various non-canonical quasi-Hamiltonian formulations of the metric gravity. However, we can refer to an excellent review article \cite{K&K2011} which contains a detailed analysis of this problem and a large number of references to papers published up to the beginning of 2011. Here we want to note that each of these non-canonical Hamiltonian formulations uses the set of dynamical ADM-variables, which were introduced in \cite{ADM}. This fact has been noticed and criticized by Bergmann, Dirac and many others. Our calculations of the corresponding Poisson brackets can be found in \cite{FK&K} and \cite{Fro2021}. However, since the early 1960's there has been no explanation of this remarkable fact and its consequences, neither from the ADM authors nor from their followers (see, e.g., \cite{Regg} - \cite{Wald2}). Then, in 1985, it was suddenly discovered that the ADM formulation of the Hamiltonian metric gravity cannot, in principle, restore the total four-dimensional diffeomorphism \cite{IshKuch}, which is the correct and well known gauge symmetry of a free gravitational field in four-dimensional space-time.
Recently, we have found another crucial problem for the ADM gravity and similar non-canonical `Hamiltonian' formulations. Indeed, in Dirac's Hamiltonian formulation we could restore the original Einstein equations for a free gravitational field. The analogous ADM Hamiltonian formulation uses, in part, the same dynamical variables (12 of the 20 variables), but there are fundamental mistakes in all of its secondary constraints. Therefore, the extra terms which are present in the restored field equations of the ADM formulation do not cancel each other (as they do in Dirac's formulation), but remain and even multiply. Finally, in Dirac's Hamiltonian formulation we obtain the maternal Einstein equations with no additional terms, while for the ADM Hamiltonian formulation we obtain similar equations with many extra terms in them. It follows from this fact that the ADM Hamiltonian formulation either describes some different (i.e., non-Einstein) field, or it is an absolutely wrong theoretical construction which does not represent any real and/or model field (if the arising system of extended Einstein-like equations is not closed). In the future, under better circumstances, we plan to discuss these (and other) issues which currently exist in the Hamiltonian formulations of the metric gravity. I am grateful to my friends N. Kiriushcheva, S.V. Kuzmin and D.G.C. (Gerry) McKeon (all from the University of Western Ontario, London, Ontario, Canada) for helpful discussions and inspiration.
\section{Introduction} As conversational agents (CAs) become increasingly ubiquitous, more and more people are adopting these systems into their daily routines, with many leading smartphone models now containing some version of a CA built in. CAs allow users to interact with and navigate the digital world easily in hands-free scenarios, often also being a source of entertainment for many. This is true of the home as well, with CA devices marketed for home use increasing in popularity, including Amazon’s Echo speaker range housing its conversational agent ‘Alexa’. At home, people use CAs for a range of tasks. \citet{ammari_music_2019} found that users largely utilize these devices to streamline their existing day-to-day routines, with popular uses including: playing and controlling music, hands-free internet search, and the control of Internet of Things (IoT) devices embedded in the home, such as smart lights, thermostats, and smart security camera systems. In relation to IoT devices, \citet{mennicken_hacking_2012} note that ‘a convenient system is one that “fits, speeds up, or improves” family routines’. Previous research by \citet{porcheron_voice_2018} echoes this finding, suggesting that CAs become enmeshed within users’ home life. The systems allow users to complete tasks while at the same time engaging in regular home activities, such as eating dinner together as a family. They note that this apparent integration into the household is due in part to the continuous availability of VUIs through a simple wake word \cite{porcheron_voice_2018}. Research also suggests that conversational agents are more likely to be personified by users when the device is situated in a multi-member household or family unit \cite{purington_alexa_2017}. These studies, along with wider research, suggest that users are adopting conversational agents into the midst of their households.
With CA devices seemingly reaching every corner of home life, it is perhaps unsurprising that a market has emerged for Alexa ‘Skills’ targeted towards pet owners. When we consider that pets are often seen as family members themselves and are integral parts of many people’s home life, this appears to be a natural next step. Care of a pet is a major aspect of many people’s daily routine, whether it be feeding, walking, or grooming the pet among numerous other daily responsibilities, and these commitments have a major impact on the happiness of both pet and owner \cite{holland_acquiring_2019, bouma_expectations_2020}. Some work has called for a greater understanding of human and animal interactivity with agents \cite{moore_vocal_2016}, but this topic is underexplored, and we don’t currently know much about the interactions already taking place in people’s homes. With Alexa Skill functions ranging from the trivial, such as the Skill that will ‘marry’ your cats for you\footnote{amazon.com/Sarah-Dunlap-Cat-Wedding/dp/B07NMP17T8}, to the more practical, such as Skills promising trustworthy pet health advice\footnote{ amazon.com/Vet24seven-Inc-MyPetDoc/dp/B07FP2N457 }, key questions remain around the scope of these Skills and how beneficial they could really be. This paper aims to explore these questions by offering an initial overview of the types of Alexa Skills people use with their pets and their supposed purposes. We also introduce a veterinary perspective to discuss the potential benefits, or risks, of using a CA in these ways with animals, highlighting some of the Alexa Skills currently available for use with pets. \section{Methods} We searched the Alexa Skills category on Amazon.com for relevant English-language Skills in January 2021 using the following search terms: “dog(s)”, “cat(s)”, and “pet(s)”. Each term was searched individually in its singular and plural form. Our initial search yielded 1,851 Skills.
Screening for duplicates yielded 589 unique Skills. The lead author then screened the Skills based on the following criteria: 1) only Skills with at least 5 user reviews were included, to ensure all Skills were actually used by Alexa users; 2) only Skills which are explicitly aimed at pets or pet owners were included (examples of excluded Skills are those that give a user facts about animals, Skills that play animal noises, and Skills that coincidentally have “cat” or “dog” in a product name). After this screening, we arrived at a final set of 88 Skills for analysis. The lead author and a co-author conducted inductive Thematic Analysis \cite{braun2006using} to categorize the purpose of each of the 88 Skills. Initial themes were generated by the lead author and then independently used for categorization by the co-author. After initial coding, there was 87\% agreement between the two authors, and inconsistencies were resolved through discussion. The themes of purposes for these Skills are summarized below, and a complete table of Skills, descriptions, and themes is included in the supplementary materials. \section{Themes} \subsection{Calming} This theme included Skills which are intended to be heard by a pet for the purpose of calming or relaxing the pet. This included Skills like Dog Sleep Music\footnote{amazon.com/Simmba-Dog-Sleep-Music/dp/B0859K698K}, Calm My Dog\footnote{amazon.com/Stephen-Brown-Calm-My-Dog/dp/B07G4B6WL4}, and Comfort My Dog\footnote{amazon.com/Voice-Games-Relax-My-Dog/dp/B07JYYHV5L}. These Skills use music or ambient noise to calm the listening pet. Some veterinary research has shown that playing classical music in shelter environments can help to calm dogs \cite{kogan_behavioral_2012}, so these Skills aimed at using music or ambient sound to soothe pets may likewise have the intended beneficial effect.
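The inter-coder agreement figure reported above is straightforward to compute; the sketch below (our own illustrative code with hypothetical labels, not the study's actual data or analysis script) shows raw percent agreement between two coders, plus the chance-corrected Cohen's kappa that is often reported alongside it.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Fraction of items on which the two coders assigned the same label."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)  # observed agreement
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Expected agreement if each coder labelled at random with their own marginals
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical theme codes for 8 Skills (not the study's real data)
a = ["Calming", "Tracking", "Translator", "Calming", "Smart", "Trivia", "Calming", "Health"]
b = ["Calming", "Tracking", "Trivia",     "Calming", "Smart", "Trivia", "Calming", "Health"]
print(percent_agreement(a, b))  # 0.875
print(round(cohens_kappa(a, b), 3))
```
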
\subsection{Animal Audience} This theme included Skills which are intended to be heard by a pet so that the pet reacts (commands, calls, toys, other noises), but which do not fall into the Calming category above, including Skills like Make My Dog Howl\footnote{amazon.com/Iguana-ASD-Make-Dog-Howl/dp/B07XC4JMMS} and Calling Your Dog\footnote{amazon.com/Jobless-Calling-Your-Dog/dp/B08BZ8626H}. These Skills intend to entertain or stimulate pets, often by simulating the sounds of other animals or of people. This may be another area in which the Skills have their intended effect, as some research has indicated that dogs are stimulated by watching and hearing other dogs on television \cite{hirskyj-douglas_dog_2017}. \subsection{Smart Device} Some Skills are intended as a mode of interaction with smart devices, like the Dog Whisperer\footnote{amazon.com/Mario-Harper-Dog-Whisperer/dp/B01MXX19JL} Skill which connects with the FitBark device and the PetSafe® Smart Feed Skill\footnote{amazon.com/PetSafe-PetSafe\%C2\%AE-Smart-Feed/dp/B07RRSW2SV} which controls the PetSafe automatic feeder. The devices these Skills are linked with are primarily aimed at automatically feeding pets and monitoring their activity, which has been shown to produce positive health outcomes in a recent study involving multi-cat households \cite{lambrecht_can_2019}. That said, the efficacy of a smart device is dependent on the capabilities of the device, the behaviour of the pet, and the integration of the device into the specific context of the household \cite{lambrecht_can_2019}, so the use of such a Skill and a smart device is no guarantee of its utility to pet or owner. \subsection{Tracking} A related theme, tracking Skills, likewise help pet owners monitor their pets' feeding and exercise, but via manual tracking rather than using a smart device.
These Skills, like Cat Feed Tracker\footnote{amazon.com/Kuske-Cat-Feed-Tracker/dp/B01MT3WAUS} and Pet Tracker\footnote{amazon.com/Doug-Johnson-Pet-Tracker/dp/B077M2X3VS}, allow users to track their pet care and check whether other users have logged any pet activities. The multi-user aim fits with prior work on smart speakers like the Amazon Echo, which has indicated that a single device is frequently used by multiple members of a home \cite{porcheron_voice_2018}. Likewise, veterinary research indicates that consistent routines for feeding and exercise are beneficial to pet health \cite{vitger_integration_2016}. To the extent these routines can be maintained through Alexa Skills, this usage pattern may mirror the integration of Alexa into family routines that has been observed in other contexts \cite{porcheron_voice_2018}. \subsection{Training and Health} Other Skills, like Doctor Pupper\footnote{amazon.com/melochi-Doctor-Pupper/dp/B07K7CSRTP}, My Pet Doc\footnote{amazon.com/Vet24seven-Inc-MyPetDoc/dp/B07FP2N457}, and Al's Dog Training Tips\footnote{amazon.com/Longoriahaus-Dog-Training-Als-Tips/dp/B07H2WMDTV}, aim to help pet owners with tips on training and with simple medical advice, such as advice on whether certain foods are toxic to pets. These Skills have good intentions, particularly as most pet poisonings result from owners sharing foods they don't know are harmful \cite{gugler_hidden_2013}, but research on conversational technology like Alexa giving medical advice has illustrated the severe harm that can be caused by technical errors and limited safeguards against bad advice \cite{bickmore_patient_2018}. \subsection{Translator} Many Skills, like Dog Translator\footnote{amazon.com/Steven-Foyston-Dog-Translator/dp/B088PK52LL} and Cat Translator\footnote{amazon.com/GeeNelly-Cat-Translator/dp/B079FPVGLC}, purport to listen to a pet and explain to owners what their pets are saying.
While there is some evidence that sounds like dog barks can convey different emotional states \cite{pongracz_acoustic_2006}, it is not clear that the Skills available presently can actually detect these acoustic differences. These Skills should not be considered serious utilities to pet owners. \subsection{Entertainment/Trivia} Several Skills like Name My Dog\footnote{amazon.com/BethSherm-Name-my-dog/dp/B0741F6WXR} or Cat Wedding\footnote{amazon.com/Sarah-Dunlap-Cat-Wedding/dp/B07NMP17T8} are intended purely as entertainment for pet owners, with activities that involve their pet or directly relate to owners’ relationships with their pets. These Skills don't purport to have any benefit to pets or their owners beyond entertainment of the owner. \subsection{Other - Human Audience} Finally, some Skills did not fit into the above categories but were aimed at human users and their relationships with animals and had practical purposes. Skills in this theme included PawBoost Lost and Found Pets\footnote{amazon.com/PawBoost-Lost-and-Found-Pets/dp/B01MQW0BC0} which lets users communicate about lost pets and Pet Finder\footnote{amazon.com/Monika-Wiest-Pet-Finder/dp/B01N7M3DKY} which guides a user through questions about their lifestyle to suggest suitable dog breeds for adoption. \section{Discussion} People use voice assistants like Alexa for a diverse array of purposes, often pertaining to tasks like playing music, searching the web, and interacting with smart devices \cite{ammari_music_2019}. In our review of Alexa Skills for pets and pet owners, we found a very similar pattern of purposes for using Alexa. Broadly, the themes we present, Calming, Animal Audience, Smart Device, Tracking, Training and Health, Translator, Entertainment/Trivia, and Other - Human Audience represent a variety of strategies for caring for pets, strengthening our bonds with them, and keeping them entertained. 
People use Alexa for their pets largely in the same ways that they use Alexa in the home setting in general: as an extension of existing routines (feeding, health tracking, training), with a number of these Skills offering straightforward support for pet owners' creation and maintenance of routines. Some Skills likely do not or cannot technically achieve their desired outcome (i.e. Skills aimed at translating pet sounds) or offer minimal utility beyond entertaining pet owners. However, for many of the themes of Skill purposes we reviewed, there is veterinary evidence supporting the benefits of the Skills' intention, and reason to believe those purposes can be achieved through an Alexa Skill. Pet owners should take caution that following medical advice that comes from voice assistants like Alexa is risky \cite{bickmore_patient_2018} and that individual pets require different approaches to care and training \cite{turcsan_trainability_2011}. The integration of technology into pet-care routines is not one-size-fits-all. Smart devices and technological aids to pet care must fit well into the structure of a pet's home, creating an ongoing challenge for both animal-computer interface designers and pet owners \cite{lambrecht_can_2019}. Still, most pet-focused Alexa Skills treat Alexa as an enhancement to a human-animal relationship, rather than a replacement for it. Insofar as pet owners see these Skills this way, as toys and tools at their hand as pet owners, but not as a replacement for training or veterinary expertise, the purposes for these Skills seem justifiable. Many questions remain in the wider field of CAs for animal-computer interaction, with further research into their efficacy posing an interesting point of collaboration between human-computer interaction and animal science researchers. We aim to offer a first step in investigating Alexa Skills for pets, surveying the Skills that are currently in use.
Future work should deepen our understanding of this topic by seeking to understand the outcomes for the pets and the experiences of pet owners who use such Skills. The video poster accompanying this extended abstract can be found at https://youtu.be/H30qRRbrY68. \begin{acks} This research was conducted with the financial support of the ADAPT Science Foundation Ireland (SFI) Research Centre at University College Dublin and the SFI Centre for Research Training in Digitally Enhanced Reality (D-REAL). The ADAPT SFI Centre for Digital Content Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant No. 13/RC/2106\textunderscore{}P2 and D-REAL funding is provided under SFI under Grant No. 18/CRT/6224. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \paragraph{Mathematical morphology} is a structure-based analysis of images built on set-theoretic concepts. The two main transformations in mathematical morphology are dilation and erosion, established initially through translations, unions and intersections on subsets of Euclidean spaces. These transformations were later extended to complete lattices. In their most general form, dilation and erosion are Galois connections over mappings between complete lattices. According to ~\cite{heijmans1990algebraic}, every operator on a complete lattice which preserves suprema ($\vee$) is regarded as a dilation, and each infimum-preserving operator can be regarded as an erosion. This is the most universal definition of dilation and erosion to appear in the literature, up to our knowledge. From the category theory viewpoint, the definitions of dilation and erosion can be generalized to left and right adjoint functors, which preserve co-limits and limits respectively. The first section reviews mathematical morphology and category theory briefly. Most of the material on category theory can be found in \cite{borceux1994handbook,opac-b1078351,borceux1994handbook2,leinster2014basic}. \subsection{Matrix representation of images} An image on a computer arises from the quantization of both the image space and the intensities into discrete values. It is merely a 2D rectangular array of integer values. It is widely accepted to record intensity as an integer number over the interval $[0,255]$ \cite{memar2017}. Conversely, each pixel of a colored image contains three values over the interval $[0,255]$, corresponding to its RGB values. Roughly speaking, the matrix representation of an image corresponds closely to a function called the picture function. It is a function $f$ defined on the spatial variables $x,y$. Intuitively, $f(x,y)$ gives the intensity value of the pixel at the point $(x,y)$.
The following definitions correspond to binary, gray-scale and color images. \begin{definition} A binary image is a rectangular matrix in which all elements have value 0 or 1. \end{definition} \begin{definition} A gray-scale image is a rectangular matrix with values ranging within $[0,255]$. \end{definition} \begin{definition} A color image is a 2D image which has a vector of 3 values at each spatial point, or pixel. \end{definition} \subsection{Mathematical morphology} Dilation and erosion constitute the basic operations which form the backbone for defining other widely used operations such as opening, closing, hit-or-miss and a few others. They have their roots in set theory and arise from a set-theoretical viewpoint. These transformations are defined on the elements of two sets called the source and the structuring element, respectively. The structuring element is generally much smaller than the image it acts on. It functions as a pattern probing the source image, aimed at finding its structure. First we define dilation for binary images as follows \cite{heijmans1990algebraic}. \begin{definition} If $A,B$ are two sets determining the source image and the structuring element respectively, the dilation of $A$ and $B$ is defined by \cite{haralick1987image} and \cite{serra1983image} as \begin{equation} A \oplus B=\{a+b| a \in A , b \in B\}. \end{equation} \end{definition} This operation is also called the Minkowski sum. Applications of dilation in image analysis range from expanding images to filling holes.
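The Minkowski-sum definition of binary dilation translates directly into code. The following is a minimal sketch (the function name and representation are our own, not from a standard library), representing a binary image as the set of coordinates of its foreground pixels:

```python
def dilate(A, B):
    """Binary dilation: the Minkowski sum A ⊕ B of two pixel sets."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

# A two-pixel image dilated by a vertical two-pixel structuring element
A = {(0, 0), (1, 0)}
B = {(0, 0), (0, 1)}
print(sorted(dilate(A, B)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Note that dilation with this representation grows the foreground: every foreground pixel of $A$ is replaced by a translated copy of $B$, which is exactly how small holes get filled.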
Erosion is the dual of dilation, defined by: \begin{definition} The erosion of a set $A$ with a structuring element $B$ is \cite{serra1983image,heijmans1990algebraic} \begin{equation} A \ominus B= \{ x \in \mathbb{Z}^2| \text{ for every } b \in B , \text{ there exists an } a \in A \text{ such that } x=a-b\}. \end{equation} The erosion of two sets $A,B$ can also be defined as \begin{equation} A \ominus B=\{h \in \mathbb{Z}^2| B_h \subseteq A \}, \end{equation} where $B_h=\{ b+h| b \in B\}$ is the translate of $B$ along the vector $h$, and the reflection of the set $B$ with respect to the origin is defined as \begin{equation} \breve{B}=\{x \in \mathbb{Z}^2| \text{ for some } b \in B, x =-b\}. \end{equation} \end{definition} Duality of dilation and erosion means that erosion can be written in terms of dilation: \begin{equation} (A \ominus B) ^c=A^c \oplus \breve{B}, \end{equation} where $\breve{B}$ has been defined above. In other words, dilating the foreground is the same as eroding the background, but the structuring element reflects between the two. Likewise, eroding the foreground is the same as dilating the background. Dilation and erosion are defined for gray-scale images in a different way. \\ The dilation and erosion of $f$, where $f: F \rightarrow \mathbb{Z}, F \subseteq \mathbb{Z}^2$ is a function that maps $(x,y) \in \mathbb{Z}^2$ to the gray-scale value of the pixel at $(x,y)$, with a structuring element $B$ are given by \begin{equation} (f \oplus B)(x,y)=\text{max}_{(s,t) \in B} \{f(x-s,y-t)\} \end{equation} \begin{equation} (f \ominus B)(x,y)=\text{min}_{(s,t) \in B} \{f(x+s,y+t)\} \end{equation} The following more general definition of dilation and erosion can be found in \cite{heijmans1990algebraic}. \begin{definition} Let $\mathscr{L}$ be a complete lattice and $\mathscr{E}_1,\mathscr{E}_2$ be arbitrary sets.
The operator $\delta: \mathscr{L}^{\mathscr{E}_1} \longrightarrow \mathscr{L}^{\mathscr{E}_2}$ is a dilation if and only if for every $x \in \mathscr{E}_1$ and $y \in \mathscr{E}_2$ there exists a $\delta_{y,x}: \mathscr{L} \longrightarrow \mathscr{L}$ such that for $F_1 \in \mathscr{L}^{\mathscr{E}_1}$ and $ y \in \mathscr{E}_2$, \[ \delta(F_1)(y) = \bigvee_{x \in \mathscr{E}_1} \delta_{y,x}(F_1(x)) \] The erosion $\epsilon: \mathscr{L}^{\mathscr{E}_2} \longrightarrow \mathscr{L}^{\mathscr{E}_1}$ is given by: \[ \epsilon(F_2)(x) = \bigwedge_{y \in \mathscr{E}_2} \epsilon_{y,x}(F_2(y)) \] \end{definition} \begin{definition} Let $(A,\leq),(B,\leq)$ be two partially ordered sets with two mappings $F:A \longrightarrow B$ and $U:B \longrightarrow A$. The pair $(F,U)$ is a monotone Galois connection if, for all $x \in A$ and $y \in B$: \[ F(x) \leq y \Leftrightarrow x \leq U(y) \] \end{definition} $F$ is referred to as the left adjoint and $U$ as the right adjoint. Monotone Galois connections are also called adjunctions in the literature \cite{erne2004adjunctions,shmuely1974structure}. The other variety of Galois connection, the antitone Galois connection, appears in the literature as follows. \begin{definition} Let $(A,\leq),(B,\leq)$ be two partially ordered sets and let $F:A \longrightarrow B$ and $U:B \longrightarrow A$. The pair $(F,U)$ is an antitone Galois connection if for all $x \in A$ and $y \in B$: \[ y \leq F(x) \Leftrightarrow x \leq U(y) \] \end{definition} \begin{theorem} \[ \delta(F_1) \leq F_2 \Leftrightarrow F_1 \leq \epsilon(F_2) \] \end{theorem} This theorem confirms that dilation and erosion form a monotone Galois connection. \subsection{Category theory} Category theory is an effort to generalize and simplify many properties of mathematical systems by denoting them with objects and arrows. Each arrow $f:A \to B$ represents a morphism from an object $A$ to another object $B$. A category is small if its collections of objects and arrows are sets.
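The Galois connection between dilation and erosion stated above can be checked concretely for binary images, where both operators act on subsets of $\mathbb{Z}^2$: $A \oplus B \subseteq C$ holds exactly when $A \subseteq C \ominus B$. The following sketch (our own helper functions, not a library API) verifies this exhaustively on a tiny grid:

```python
from itertools import product

def dilate(A, B):
    """Minkowski sum A ⊕ B of two pixel sets."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

def erode(C, B):
    """Erosion C ⊖ B = {h : B translated by h fits inside C}."""
    candidates = {(cx - bx, cy - by) for (cx, cy) in C for (bx, by) in B}
    return {h for h in candidates
            if all((h[0] + bx, h[1] + by) in C for (bx, by) in B)}

# Check the adjunction  A ⊕ B ⊆ C  ⟺  A ⊆ C ⊖ B
# for every subset A of a 2x2 grid, against a fixed C.
grid = list(product(range(2), range(2)))
B = {(0, 0), (1, 0)}
C = {(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)}
for mask in range(2 ** len(grid)):
    A = {p for i, p in enumerate(grid) if mask & (1 << i)}
    assert (dilate(A, B) <= C) == (A <= erode(C, B))
print("adjunction verified on all subsets")
```

Here `<=` is Python's subset test on sets, so the assertion is a literal transcription of the equivalence $\delta(A) \leq C \Leftrightarrow A \leq \epsilon(C)$ in the lattice of subsets ordered by inclusion.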
A contravariant functor $F:\mathcal{A}^{OP} \to \mathcal{B}$ maps every object $A \in \mathcal{A}$ to $F(A) \in \mathcal{B}$, and there exists a mapping $\mathcal{A}(A,A^\prime) \to \mathcal{B}(FA^\prime,FA)$ for each pair of objects $A, A^\prime \in \mathcal{A}$. Recalling that $\mathcal{A}(A,A^\prime)$ denotes the set of arrows between the objects $A$ and $A^\prime$, these data are subject to the following two conditions: \begin{itemize} \item any two morphisms $f \in \mathcal{A}(A,A^\prime)$ and $g \in \mathcal{A}(A^\prime, A'')$ compose by $F(g \circ f)=F(f) \circ F(g)$. \item For any object $A \in \mathcal{A}$, $F(1_A)=1_{FA}$. \end{itemize} A contravariant functor reverses the direction of arrows. For example, $f:A \to B$ is sent to $F(f):F(B) \to F(A)$. Conversely, a covariant functor $F: \mathcal{C} \to \mathcal{D}$ preserves the direction of arrows. Everything is the same as for the contravariant functor except that $F(f \circ g)=F(f) \circ F(g)$ for arrows $f,g$ in $\mathcal{C}$. \subsubsection{Natural transformations} \begin{definition} Let $\mathcal{A}$ and $\mathcal{B}$ be categories and $F,G:\mathcal{A} \to \mathcal{B}$ two functors. A natural transformation from $F$ to $G$ assigns an arrow $\alpha_X:F(X) \to G(X)$ to every object $X \in \mathcal{A}$ such that for any arrow $f: X \to Y$ in $\mathcal{A}$, the diagram depicted in Figure \ref{fig1} commutes. \begin{figure}[!h] \begin{center} \label{def:wedge} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { F(X) & F(Y) \\ G(X) & G(Y) \\}; \path[-stealth] (m-1-1) edge [sloped, above] node {$F(f)$} (m-1-2) (m-1-1) edge [sloped, above] node {$\alpha_X$} (m-2-1) (m-2-1) edge [ sloped, above] node {$ G(f) $} (m-2-2) (m-1-2) edge [ sloped, above] node {$\alpha_Y $} (m-2-2); \end{tikzpicture} \end{center} \caption {Natural transformations } \label{fig1} \end{figure} \end{definition} Natural transformations play the role of morphisms between functors.
\subsubsection{(Co)-limits} \begin{definition} Given a functor $F: \mathcal{A} \to \mathcal{B}$, a cone of $F$ is an object $O \in \mathcal{B}$ together with a family of arrows $\phi_x:O \to F(x)$ for each $x \in \mathcal{A}$ such that for each arrow $f: x \to y $ in $\mathcal{A}$ we have $Ff\circ \phi_x=\phi_y$, as shown in figure \ref{fig5}. \begin{figure} \begin{center} \begin{tikzpicture}[descr/.style={fill=white}] \matrix(m)[matrix of math nodes, row sep=3em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] {&O\\F(X)&F(Y)\\}; \path[->,font=\scriptsize] (m-1-2) edge node[above left] {$\phi_x$} (m-2-1) edge node[descr] {$\phi_y$} (m-2-2); \path[->] (m-2-1) edge node [below]{$F(f)$} (m-2-2); \end{tikzpicture} \end{center} \caption {Diagram of a cone} \label{fig5} \end{figure} \end{definition} \begin{definition} A limit of a functor $F: \mathcal{A} \to \mathcal{B}$ is a universal cone $(L,\phi_x)$ such that for every other cone $(N,\psi_x)$ of $F$ there is a unique arrow $u: N \to L$ such that $\psi_x= \phi_x \circ u$ for every $x$ in $\mathcal{A}$. 
\\ \begin{figure}[h] \centering \begin{tikzpicture}[descr/.style={fill=white}] \matrix(m)[matrix of math nodes, row sep=2.2em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] {&N\\&L\\ F(X)&& F(Y)\\}; \path[->,font=\scriptsize] (m-1-2) edge node[ left] {u} (m-2-2); \path[->] (m-3-1) edge node [below]{$F(f)$} (m-3-3); \path[->] (m-1-2) edge node[descr] {$\psi_x$} (m-3-1) edge node[descr] {$\psi_y$} (m-3-3); \path[->] (m-2-2) edge node[descr] {$\phi_x$} (m-3-1) edge node[descr] {$\phi_y$} (m-3-3); \end{tikzpicture} \begin{tikzpicture}[descr/.style={fill=white}] \matrix(m)[matrix of math nodes, row sep=2.2em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] {&N\\&L\\ F(X)&& F(Y)\\}; \path[->,font=\scriptsize] (m-2-2) edge node[left] {u} (m-1-2); \path[->] (m-3-1) edge node [below]{$F(f)$} (m-3-3); \path[->] (m-3-1) edge node[descr] {$\psi_x$} (m-1-2); \path[->] (m-3-3) edge node[descr] {$\psi_y$} (m-1-2); \path[->] (m-3-1) edge node[descr] {$\phi_x$} (m-2-2); \path[->] (m-3-3) edge node[descr] {$\phi_y$} (m-2-2); \end{tikzpicture} \caption{ a) Diagram of a limit b) Diagram of a co-limit} \label{fig4} \end{figure} \end{definition} \begin{definition} Dual to the limit, a co-limit of a functor $F: \mathcal{A} \to \mathcal{B}$ is a universal co-cone $(L,\phi_x)$ such that for any other co-cone $(N,\psi_x)$ of $F$, there exists a unique arrow $u: L \to N$ such that $\psi_x=u \circ \phi_x$ for every $x$ in $\mathcal{A}$. Figure \ref{fig4} (b) illustrates the diagram of a co-limit. A functor $F: \mathcal{A} \to \mathcal{B}$ is called small if $\mathcal{A}$ is a small category, and (co)-limits over small functors are called small. \end{definition} The symbols $\mathop{\lim_{\longrightarrow}}$ and $\mathop{\lim_{\longleftarrow}}$ are often used in the literature to denote co-limits and limits, respectively. A category is called (co)-complete if it contains all small (co)-limits. 
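Under the standard identification of a partially ordered set with a category (a unique arrow $x \to y$ whenever $x \leq y$), limits are infima and co-limits are suprema. A brute-force check in the powerset lattice; the particular diagram below is an arbitrary toy choice:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# A diagram in the powerset of {0,...,4}, ordered by inclusion.
diagram = [frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({2, 3, 4})]
limit = frozenset.intersection(*diagram)   # meet = categorical limit
colimit = frozenset.union(*diagram)        # join = categorical co-limit

# Universality, checked over the whole powerset: an object is a cone
# (lower bound) iff it factors through the limit, and a co-cone (upper
# bound) iff the co-limit factors through it.
for N in subsets(range(5)):
    assert all(N <= d for d in diagram) == (N <= limit)
    assert all(d <= N for d in diagram) == (colimit <= N)
```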
\subsubsection{(Co)-ends} (Co)-ends are useful notions inspired by calculus. In particular, an end resembles an infinite product, whereas a co-end imitates the idea of an infinite sum or integral. (Co)-ends are special (co)-limits defined on functors of the form $F: \mathcal{C}^{OP} \times \mathcal{C} \to \mathcal{D}$. Defining (co)-wedges is essential, since a (co)-end is a universal (co)-wedge. A wedge of a functor $F: \mathcal{C}^{OP} \times \mathcal{C} \to \mathcal{D}$ is an object $O \in \mathcal{D}$ with an arrow $\omega_c: O \to F(C,C)$ for every object $C \in \mathcal{C}$. The wedge condition enforces that for any arrow $t: C^\prime \to C$ with $C,C^\prime \in \mathcal{C}$, the diagram illustrated in figure \ref{def:wedge} (a) commutes. Conversely, co-ends are defined by universal co-wedges. A co-wedge for a functor $F:\mathcal{C}^{OP} \times \mathcal{C} \to \mathcal{D}$ is an object $O$ in $\mathcal{D}$ along with an arrow $\omega_c: F(C,C) \to O$ such that for any arrow $t: C^\prime \to C$ in $\mathcal{C}$, the diagram illustrated in figure \ref{def:wedge} (b) commutes. 
\begin{figure}[!h] \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { O & F(C,C) \\ F(C^\prime,C^\prime) & F(C,C^\prime) \\}; \path[-stealth] (m-1-1) edge [sloped, above] node {$\omega_c$} (m-1-2) (m-1-1) edge [sloped, above] node {$\omega_c^\prime$} (m-2-1) (m-2-1) edge [ sloped, above] node {$ F(t,1) $} (m-2-2) (m-1-2) edge [ sloped, above] node {$F(1,t) $} (m-2-2); \end{tikzpicture} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { F(C,C^\prime) & F(C,C) \\ F(C^\prime,C^\prime) & O \\}; \path[-stealth] (m-1-1) edge [sloped, above] node {$F(1,t)$} (m-1-2) (m-1-1) edge [sloped, above] node {$F(t,1)$} (m-2-1) (m-2-1) edge [ sloped, above] node {$ \omega_c ^\prime $} (m-2-2) (m-1-2) edge [ sloped, above] node {$\omega_c $} (m-2-2); \end{tikzpicture} \end{center} \caption{a) Diagram of a wedge \quad b) Diagram of a co-wedge} \label{def:wedge} \end{figure} The integral notation for denoting (co)-ends stems from the work of N. Yoneda \cite{loregian2015co}, who considered functors $\mathcal{C}^{OP} \times \mathcal{C} \to \mathsf{Ab}$. The subscripted integral notation $\int_C F(C,C)$ denotes an end of a functor $F:\mathcal{C}^{OP} \times \mathcal{C} \to \mathcal{D}$, whereas the superscripted integral notation $\int^C F(C,C)$ denotes a co-end of $F$. A key property that makes ends so useful is their capability of representing natural transformations. This can be expressed by the following theorem. \begin{theorem} Given two functors $F,G: \mathcal{C} \to \mathcal{D}$, the set of natural transformations between $F$ and $G$, denoted by $[\mathcal{C},\mathcal{D}](F,G)$, is given by $[\mathcal{C},\mathcal{D}](F,G)=\bigintssss \limits_{c \in \mathcal{C}} \mathcal{D}(F(c),G(c))$. \end{theorem} \begin{proof} Suppose $\bigintssss \limits_{c \in \mathcal{C}} \mathcal{D}(F(c),G(c))$ consists of morphisms $h(c):F(c) \to G(c)$ in $\mathcal{D}$. 
For any morphism $f: c \to d$ in $\mathcal{C}$, the following diagram commutes as a consequence of the wedge condition defining the end. \begin{figure}[!h] \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=5em] { F(c) & F(d) & \\ G(c) & G(d) \\}; \path[-stealth] (m-1-1) edge [sloped, above] node {$h(c)$} (m-2-1) (m-1-2) edge [ sloped, above] node {$ h(d) $} (m-2-2) (m-1-1) edge [ sloped, above] node {$ F(f) $} (m-1-2) (m-2-1) edge [ sloped, above] node {$ G(f) $} (m-2-2); \end{tikzpicture} \end{center} \end{figure} This is by definition a natural transformation $F \Rightarrow G$. \end{proof} Some practical properties of ends that will be used later are: \begin{equation} \int \limits_{ a \in A} \int \limits_{b \in B} F(a,a,b,b) = \int \limits_{b \in B} \int \limits_{a \in A} F(a,a,b,b) = \int \limits_{(a,b) \in A\times B} F(a,a,b,b) \end{equation} \begin{equation} \int \limits_{A} [C,F(A,A)] \simeq [C, \int \limits_{A} F(A,A)] \hspace*{52mm} \label{eq1} \end{equation} \begin{equation} \int \limits_{A} [F(A,A), C] \simeq [\int \limits^{A} F(A,A),C] \hspace*{52mm} \label{eq2} \end{equation} \begin{definition} Given categories $\mathcal{C}, \mathcal{D}$ along with functors $F: \mathcal{D} \to \mathcal{C}$ and $G: \mathcal{C} \to \mathcal{D}$, the functor $F$ is said to preserve co-limits if, whenever the co-limit $\mathop{\lim_{\longrightarrow}}_{i}X_i$ of a functor $X: \mathcal{I} \to \mathcal{D}$ exists in $\mathcal{D}$, then $F(\mathop{\lim_{\longrightarrow}}_{i}X_i) \simeq \mathop{\lim_{\longrightarrow}}_{i}F(X_i)$. Similarly, $G$ is said to preserve all small limits if, whenever the limit $\mathop{\lim_{\longleftarrow}}_{i}X_i$ of a functor $X: \mathcal{I} \to \mathcal{C}$ exists in $\mathcal{C}$, then $G(\mathop{\lim_{\longleftarrow}}_{i}X_i) \simeq \mathop{\lim_{\longleftarrow}}_{i}G(X_i)$. \label{preservingLimits} \end{definition} \subsection{Adjunctions} One of the main aims of mathematics is to compare two models. 
One way to express that two models or objects are similar is equality. However, equality is too strong in many cases. A weaker notion of similarity is isomorphism. Two categories $\mathcal{C},\mathcal{D}$ are isomorphic if there exist two functors $R: \mathcal{C} \to \mathcal{D}$ and $L:\mathcal{D} \to \mathcal{C}$ such that $L \circ R=\mathsf{id}_{\mathcal{C}}$ and $R \circ L=\mathsf{id}_{\mathcal{D}}$. However, the notion of isomorphism is still too ambitious to expect in many cases. Adjunction weakens the requirements of isomorphism for two categories by asking only for a natural transformation $\eta:\mathsf{id}_{\mathcal{D}} \Rightarrow R \circ L$ and another natural transformation $\epsilon:L \circ R \Rightarrow \mathsf{id}_{\mathcal{C}}$. In the language of category theory, $\eta$ is called the unit and $\epsilon$ the co-unit of the adjunction. The functor $L$ is called the left adjoint to the functor $R$, and $R$ is called the right adjoint to $L$. 
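A minimal numeric instance of a unit and co-unit: in the poset of integers viewed as a category, doubling is left adjoint to halving with floor. The particular pair of maps is an arbitrary toy choice for illustration:

```python
# Poset adjunction on (Z, <=): L(x) = 2x is left adjoint to
# R(y) = floor(y / 2), i.e.  L(x) <= y  iff  x <= R(y).
def L(x):
    return 2 * x

def R(y):
    return y // 2  # floor division, also for negative y

for x in range(-20, 21):
    for y in range(-20, 21):
        assert (L(x) <= y) == (x <= R(y))
    # unit:  x <= R(L(x));  co-unit:  L(R(x)) <= x
    assert x <= R(L(x))
    assert L(R(x)) <= x
```

In a poset, a natural transformation is just a family of inequalities, so the unit and co-unit reduce to the two assertions in the loop.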
One can also express the adjunctions in terms of triangular identities \cite{borceux1994handbook,borceux1994handbook3}, depicted by the diagrams in figure \ref{adjoint}: \begin{figure}[h] \centering \begin{tikzpicture}[node distance=3.1cm, auto] \pgfmathsetmacro{\shift}{0.3ex} \node (R) {$L$}; \node (P) [right of=R]{$L \circ R \circ L$}; \node (B) [below of=P] {$L$}; \draw[transform canvas={yshift=0.5ex},->] (R) --(B) node[above,midway] { }; \draw[transform canvas={yshift=-0.5ex},->](B) -- (R) node[below,midway] { }; \draw[transform canvas={yshift=-0.5ex},->](R) -- (P) node[above,midway] { $L \circ \eta$}; \draw[transform canvas={xshift=-0.5ex},->](P) -- (B) node[right,midway] { $\epsilon \circ L$}; \end{tikzpicture} \\ \begin{tikzpicture}[node distance=3.1cm, auto] \pgfmathsetmacro{\shift}{0.3ex} \node (R) {$R$}; \node (P) [right of=R]{$R \circ L \circ R$}; \node (B) [below of=P] {$R$}; \draw[transform canvas={yshift=0.5ex},->] (R) --(B) node[above,midway] { }; \draw[transform canvas={yshift=-0.5ex},->](B) -- (R) node[below,midway] { }; \draw[transform canvas={yshift=-0.5ex},->](R) -- (P) node[above,midway] { $ \eta \circ R$}; \draw[transform canvas={xshift=-0.5ex},->](P) -- (B) node[right,midway] { $R \circ \epsilon$}; \end{tikzpicture} \caption{Triangle diagrams(Adjunctions)} \label{adjoint} \end{figure} \textbf{Example.}\\ Adjunctions in the category of preorders correspond to functors $F: L_1 \to L_2 ^{OP} $, $G: L_2 ^{OP} \to L_1$ between two preorders $L_1, L_2$. $F$ is the left adjoint of $G$ iff for any $p \in L_1, q \in L_2$,\begin{center} $q \leq F(p) \Leftrightarrow p \leq G(q) $ \end{center} The adjunctions in the category of complete lattices, called Galois connections, play a crucial role in mathematical morphology. More precisely, any left adjoint in the category of complete lattices is a dilation, and its right adjoint is consequently an erosion. A major property of adjoints, widely used in category theory, is that they preserve (co)-limits. 
A left adjoint preserves co-limits whereas a right adjoint preserves limits. \subsection{The Yoneda lemma} The Yoneda lemma is a central and widely applicable result of category theory \cite{riehl2017category}. It allows us to embed any category into the category of contravariant functors from that category to the category of sets. The Yoneda lemma makes life easier by suggesting that one can investigate functors from a small category to the category of sets instead of investigating the category directly. In many cases the former inspection is much easier. \begin{definition} Consider a functor $F:\mathcal{A} \to \mathsf{Set}$ from an arbitrary category $\mathcal{A}$ to the category of sets, an object $A \in \mathcal{A}$ and the corresponding functor $\mathcal{A}(A,-): \mathcal{A} \to \mathsf{Set}$. There exists a bijective correspondence, \[ \mathsf{nat}(\mathcal{A}(A,-),F) \simeq FA\] \end{definition} The main idea behind the Yoneda lemma is that all the information we need about an object $A \in \mathcal{C}$ is encoded in $\mathcal{C}[-, A]$. The Yoneda lemma can also be expressed by co-ends. Let $F:\mathcal{C}^{OP} \to \mathsf{Set}$ and $G:\mathcal{C} \to \mathsf{Set}$ be functors. The following formulas express the Yoneda lemma. \begin{equation*} F \simeq \int \limits^{A \in \mathcal{C}} FA \times \mathcal{C}[-,A] \end{equation*} \begin{equation*} G \simeq \int \limits^{A \in \mathcal{C}} GA \times \mathcal{C}[A,-] \end{equation*} \subsubsection{Monoidal categories} \begin{definition} A category $\mathcal{C}$ is monoidal if it is equipped with a tensor product $\otimes$ that satisfies the following conditions. 
Roughly speaking, $ \otimes: \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ is a functor satisfying: \begin{itemize} \item $(A \otimes B) \otimes C \rightarrow A \otimes (B \otimes C)$ (associativity isomorphism) \item There exists an identity object $I$ satisfying: $\lambda_A:A \otimes I \rightarrow A$ and $\rho_A:I \otimes A \rightarrow A$ \item The following two diagrams commute: \end{itemize} \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=3em,minimum width=2em] { & (A \otimes (B \otimes C)) \otimes D \\ ((A \otimes B) \otimes C) \otimes D & & A \otimes ( (B \otimes C) \otimes D) \\ (A \otimes B) \otimes (C \otimes D) & & A \otimes (B \otimes (C \otimes D)) \\ }; \path[-stealth] (m-2-1) edge [sloped, above] node {$\alpha_{A,B,C} \otimes 1_D$} (m-1-2) (m-1-2) edge [sloped, above] node {$\alpha_{A,B \otimes C,D } $} (m-2-3) (m-2-1) edge [ right] node {$\alpha_{A \otimes B, C,D } $} (m-3-1) (m-2-3) edge [ right] node {$ 1_A \otimes \alpha_{B, C,D } $} (m-3-3) (m-3-1) edge [above] node {$\alpha_{A ,B, C \otimes D } $} (m-3-3); \end{tikzpicture} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=3em,minimum width=2em] { (A \otimes I) \otimes B & & A \otimes (I \otimes B) \\ & A \otimes B \\}; \path[-stealth] (m-1-1) edge [sloped, above] node {$\alpha_{A,I,B}$} (m-1-3) (m-1-1) edge [sloped, above] node {$\lambda_A \otimes 1_B $} (m-2-2) (m-1-3) edge [ sloped, above] node {$1_A \otimes \rho_B $} (m-2-2); \end{tikzpicture} \end{center} \end{definition} A monoidal category is left closed if for each object $A$ the functor $B \longmapsto A \otimes B$ has a right adjoint $ B \longmapsto ( A \Rightarrow B) $. This means that there exists a bijection $\mathsf{Hom}_\mathcal{C}(A\otimes B, C) \cong \mathsf{Hom}_\mathcal{C}(A, B \Rightarrow C)$ between the Hom-sets, called currying. 
Dually, the monoidal category $\mathcal{C}$ is right closed if the functor $ B \longmapsto B \otimes A$ admits a right adjoint. In a symmetric monoidal category the notions of left closed and right closed coincide. \subsection{Enriched categories} Generally, the arrows between two objects $A,B$ in a category $\mathcal{C}$, denoted $\mathcal{C}[A,B]$, form a set. The notion of enrichment extends that structure from merely a set to richer structures \cite{kelly1982basic,day1969enriched}. For instance, the collection of arrows between two objects may carry an abelian group structure, meaning that it is possible to add arrows. A restriction imposed on the category $\mathcal{V}$ over which the arrows are enriched is that it should carry a monoidal structure. More formally, \begin{definition} Let $\mathcal{V}$ be a symmetric monoidal category. A category enriched over $\mathcal{V}$, called a $\mathcal{V}$-category, consists of: \begin{itemize} \item For all objects $A,B \in \mathcal{C}$, an object $\mathcal{C}[A,B] \in \mathcal{V}$ of arrows from $A$ to $B$. \item For all objects $A,B,C \in \mathcal{C}$, a composition arrow in $\mathcal{V}$, $c_{A,B,C}: \mathcal{C}[B,C] \otimes \mathcal{C}[A,B] \to \mathcal{C}[A,C]$. \item For each object $A \in \mathcal{C}$, an identity arrow $I \to \mathcal{C}[A,A]$ in $\mathcal{V}$. \end{itemize} \end{definition} \section{Categorical dilation and erosion} Considering the well-known notion of dilation in the category of complete lattices ($\mathsf{CompLat}$), we may generalize it to other categories as follows: \begin{definition} A dilation is a co-limit preserving functor, whereas an erosion is a limit preserving functor. \label{dilation} \end{definition} We noted previously that left/right adjoint functors preserve co-limits/limits, respectively. Thus, left/right adjoints are a major source of dilation and erosion functors. 
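Definition \ref{dilation} can be checked concretely in the lattice case, where co-limits are joins (unions) and limits are meets (intersections): classical set dilation commutes with unions, and its adjoint erosion with intersections. A brute-force sketch on a toy grid; the grid size and structuring element are arbitrary choices:

```python
import random

GRID = {(x, y) for x in range(6) for y in range(6)}
B = {(0, 0), (1, 0), (0, 1)}  # toy structuring element

def dilate(X):
    return {(x + b, y + c) for (x, y) in X for (b, c) in B} & GRID

def erode(Y):
    return {(x, y) for (x, y) in GRID
            if all((x + b, y + c) in Y or (x + b, y + c) not in GRID
                   for (b, c) in B)}

random.seed(1)
pts = sorted(GRID)
for _ in range(100):
    family = [set(random.sample(pts, random.randint(0, 12))) for _ in range(3)]
    # dilation preserves co-limits (joins = unions) ...
    assert dilate(set().union(*family)) == set().union(*map(dilate, family))
    # ... and erosion preserves limits (meets = intersections)
    assert erode(set.intersection(*family)) == set.intersection(*map(erode, family))
```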
Another concept that we will utilize for defining morphological operations on the matrix representation of images is the Day convolution \cite{day1970closed,im1986universal}. Let $\mathcal{V}$ be a complete and co-complete symmetric monoidal category and $\mathcal{C}$ a small symmetric monoidal category, with the enriched Yoneda embedding $ \mathcal{C} \to [\mathcal{C}^{OP}, \mathcal{V}]$. The intuition of Day convolution is that a monoidal structure on $\mathcal{C}$ induces a monoidal structure on $[\mathcal{C}^{OP}, \mathcal{V}]$. The enriched co-Yoneda lemma states that any $\mathcal{V}$-enriched functor in $[\mathcal{C}^{OP},\mathcal{V}]$ is canonically isomorphic to a co-end of representables. This can be expressed as $F \simeq \mathlarger{\int^{c}F(c) \otimes \mathcal{C}(-,c)}$. Taking two functors $F,G: \mathcal{C}^{OP} \to \mathcal{V}$, define their multiplication as: \[ F \ast G=\mathlarger{\int^{c}F(c) \otimes \mathcal{C}(-,c)} \ast \mathlarger{ \int^{b}G(b) \otimes \mathcal{C}(-,b)} \] Assuming the multiplication operation interchanges properly with the co-end, we get: \[ F \ast G=\mathlarger{\int^{c,b}F(c) \otimes G(b) \otimes \bigl(\mathcal{C}(-,c) \ast \mathcal{C}(-,b)\bigr)} \] Requiring the Yoneda embedding $ \mathcal{C} \to [\mathcal{C}^{OP}, \mathcal{V}]$ to be strongly monoidal yields: \begin{equation} F \ast G \simeq \mathlarger{\int^{c,b}}F(c) \otimes G(b) \otimes \mathcal{C}(-,c \otimes b). \end{equation} \begin{definition} A \textit{semiring} category $\mathcal{S}$ is a category with two operations of addition and multiplication, written $(\oplus,\cdot)$, such that $(\mathcal{S},\oplus,0)$ is symmetric monoidal and $(\mathcal{S},\cdot,1)$ is monoidal. 
Left and right distributivity of multiplication over addition are expressed by natural isomorphisms: \begin{flalign*} A \cdot (B \oplus C) \to (A \cdot B) \oplus (A \cdot C) \\ (B \oplus C) \cdot A \to (B \cdot A) \oplus (C \cdot A) \end{flalign*} Evidently, a semiring is a ring in which the elements need not have additive inverses. The max-plus category may be defined by $a \oplus b = \max \{a,b\}$ and $a \cdot b = a +b$, with $-\infty$ and $0$ acting as unit objects for addition and multiplication, respectively. \end{definition} Notions such as groups, rings, semirings and many other varieties of algebraic structures are captured by Lawvere theories \cite{lawvere1963functorial}. If $\mathsf{T}$ is a Lawvere theory and $\mathcal{C}$ is a category with finite products such as $\mathsf{Set}$, a finite-product-preserving functor $F: \mathsf{T}^{OP} \to \mathcal{C}$ is called a model of $\mathsf{T}$; the models form a full subcategory of the functor category. The models of a Lawvere theory when $\mathcal{C}$ is the category of sets are referred to as $\mathsf{T}$-algebras. $\mathsf{T}$-algebras built on a complete and co-complete category $\mathcal{C}$, which is in most cases the category of sets, are complete and co-complete; indeed, the forgetful functor from $\mathsf{T}$-algebras to the category of sets creates and preserves limits. \begin{definition} Given two matrices $R_{(m,n)}$ and $S_{(p,q)}$, their tensor product, also known as the Kronecker product, is defined by: \[ R \otimes S = \begin{bmatrix} r_{1,1}S & \cdots & r_{1,n}S \\[0.3em] \vdots & & \vdots \\[0.3em] r_{m,1}S & \cdots & r_{m,n}S \end{bmatrix}\] \end{definition} \begin{definition} The category $\mathsf{Mat}$ has the natural numbers as objects, with the arrows between two objects $m,n$ being the $m \times n$ matrices and matrix multiplication serving as composition of arrows. $\mathsf{S-mat}$ is obtained by enriching $\mathsf{Mat}$ over a semiring $\mathsf{S}$. In other words, $\mathsf{S-mat}$ has natural numbers as objects and $\mathsf{S}$-valued matrices $m \times n \to \mathsf{S}$ as morphisms. 
For instance, if $S=(\{0,1\}, \vee, \wedge ) $ is the Boolean semiring, then $\mathsf{S-mat}$ is exactly the well-known category $\mathsf{Rel}$ of sets and relations. \end{definition} The category $\mathsf{S-mat}$, with matrix multiplication as composition, the Kronecker product as tensor product and the identity matrix as unit, constitutes a monoidal category. Let $\mathcal{X}$ be the discrete category whose objects are the pairs of integers $(m,n) \in \mathbb{Z}^2$, with identity arrows only. Let us define $F,G$ as two-dimensional matrices over a semiring $S$; $\mathcal{X}$ acts as the index category of the two matrices. The Day convolution of $F,G$ can be defined by: \begin{equation} F \ast G \simeq \mathlarger{\int^{(m,n),(p,q)}}F_{(m,n)} \otimes G_{(p,q)} \otimes \mathcal{X}(-,(m,n) \otimes (p,q)). \label{Dayconvolution} \end{equation} Defining the tensor product on the discrete category $\mathcal{X}$ as $(m,n) \otimes (p,q)=(m+p,n+q)$ equips it with a monoidal structure. Thus, \ref{Dayconvolution} can be written as: \begin{equation} (F \ast G)_{(r,s)} = \bigoplus F_{(m,n)} \cdot G_{(p,q)}, \label{fig10} \end{equation} where the sum ranges over all indices with $ m+p =r$ and $n+q =s$. Assuming $F,G$ are the source image and the structuring element, $(F \ast G)$ can be defined as their dilation. The following example illustrates the dilation of two binary images. Example: \\[0.2cm] Given $F=\begin{pmatrix} 0& 1 & 0 \\ 0 & 0 &1 \\ 1 &1 &0 \end{pmatrix}$ and $ G=\begin{pmatrix} 0& 1 & 0 \\ 1 & 0 &0 \\ 0 &0 &0 \end{pmatrix} $ over the Boolean semiring, one can instantiate \ref{fig10} using $\vee$ and $\wedge$ as addition and multiplication, respectively. So $F \ast G$ is calculated as \begin{equation} (F \ast G)[r,s] = \bigvee (F[m,n] \wedge G[p,q]), \label{fig12} \end{equation} where $ m+p =r$ and $n+q =s$. 
\\[0.3cm] Hence, \ref{fig12} induces the following matrix\\ $ F \ast G= \begin{pmatrix} 0 &0 &1&0 &0 \\ 0&1&0&1&0 \\ 0&1&1&0&0 \\ 1&1&0&0&0\\ 0&0&0&0&0 \\ \end{pmatrix} $. It should be noted that this result corresponds exactly to dilating the binary image $F$ with the structuring element $G$ using the well-known classical formulation. However, by migrating from the Boolean to the max-plus semiring, the dilation of gray-scale images is calculated by the following Day convolution: \begin{equation} (F \ast G)_{(r,s)} = \max (F_{(m,n)} + G_{(p,q)}) . \end{equation} Another known morphological operation that can be derived from \ref{fig10} is the fuzzy dilation, which first appeared in \cite{de1998fuzzy}. For that purpose we need the semiring with $ a \oplus b = \max(a,b)$ and $a \cdot b = \min (a,b)$, denoted $\mathsf{min-max}$, to get the formula: \begin{equation} (F \ast G)_{(r,s)} = \max ( \min(F_{(m,n)}, G_{(p,q)})) , \end{equation} in which $r=m+p$ and $s=n+q$. We now turn to the celebrated tensor-hom adjunction to extract the expression of erosion from dilation. Given a closed monoidal category $\mathcal{C}$, the tensor-hom adjunction states that for an object $A \in \mathcal{C}$, the tensor product functor $- \otimes A$ is left adjoint to the internal hom functor $[A,-]$. This can be expressed by: \begin{equation} \mathcal{C}[A \otimes B,C] \simeq \mathcal{C}[A,[ B,C]] \label{hom-tensor} \end{equation} The morphism $\mathcal{C}[A \otimes B,C] \to \mathcal{C}[A,[ B,C]]$ is known as currying in the literature. Intuitively, currying turns a morphism defined on $m \otimes n$ into $\tau(m)(n)$. The tensor-hom adjunction \ref{hom-tensor} is used to calculate the right adjoint, which is the erosion. 
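The Boolean, max-plus and min-max dilations above are all instances of one computation, parameterized by the semiring operations $(\oplus,\cdot)$ and the $\oplus$-unit. A minimal sketch; the `Semiring` container and the function names are our own choices, not part of the formal development:

```python
from collections import namedtuple

# Day convolution: (F * G)[r][s] = (+)-sum over m+p=r, n+q=s
# of F[m][n] (.) G[p][q], for semiring operations (+) and (.).
Semiring = namedtuple("Semiring", "add mul zero")

BOOLEAN = Semiring(lambda a, b: a or b, lambda a, b: a and b, 0)
MAX_PLUS = Semiring(max, lambda a, b: a + b, float("-inf"))
MIN_MAX = Semiring(max, min, float("-inf"))

def day_dilate(F, G, S):
    rows, cols = len(F) + len(G) - 1, len(F[0]) + len(G[0]) - 1
    out = [[S.zero] * cols for _ in range(rows)]
    for m, frow in enumerate(F):
        for n, f in enumerate(frow):
            for p, grow in enumerate(G):
                for q, g in enumerate(grow):
                    out[m + p][n + q] = S.add(out[m + p][n + q], S.mul(f, g))
    return out

F = [[0, 1, 0], [0, 0, 1], [1, 1, 0]]
G = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
# over the Boolean semiring this reproduces the 5x5 matrix of the
# worked example above
```

Swapping in `MAX_PLUS` or `MIN_MAX` yields the gray-scale and fuzzy dilations with no change to the convolution loop.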
\begin{deriv} \mathcal{V} [F \ast G, E] \<= \commentaire{Natural transformations representation by ends} \bigintsss \limits_C [(F \ast G)C, EC] \<= \commentaire{Definition of Day convolution} \bigintsss \limits_C [\bigintsss \limits^{A,B} FA \otimes GB \otimes \theta(A \otimes B,C), EC ] \<\simeq \commentaire{\ref{eq2}} \bigintsss \limits_C \bigintsss \limits_A [FA, [ \bigintsss \limits^B GB \otimes \theta(A \otimes B,C), EC ]] \<\simeq \commentaire{Commutativity of ends} \bigintsss \limits_A \bigintsss \limits_C [FA,[ \bigintsss \limits^B GB \otimes \theta(A \otimes B,C), EC ]] \<\simeq \commentaire{\ref{eq2}} \bigintsss \limits_A [FA, \bigintsss \limits_{B,C} [ \theta(A \otimes B,C),[GB, EC ]]] \<\simeq \commentaire{Natural transformations representation by ends} \mathcal{V}[F, [G,E]] \end{deriv} Here $[G,E]$ denotes the functor $[G,E](A)=\bigintsss \limits_{B,C} [ \theta(A \otimes B,C),[GB, EC ]]$, which exhibits the right adjoint to $- \ast G$ and hence the erosion. Thus the erosion of two binary matrices $F,G$, denoted by $F \ast^\prime G$, can be written as: \begin{equation} (F \ast^\prime G)[r,s]=\bigwedge (F[m,n] \wedge G[p,q]) \end{equation} where $r= m-p $ and $s=n-q$. The erosion of two gray-scale images can be extracted by using the max-plus semiring, \begin{equation} (F \ast^\prime G)[r,s]=\min (F[m,n]-G[p,q]) \end{equation} where $r= m-p $ and $s=n-q$. \section{Conclusion} Category theory provides an abstract unified framework for almost all areas of mathematics. This is the first study on creating a unified definition for the two fundamental morphological operations of dilation and erosion. We have unified morphological operations that appear in contrasting settings, such as binary, gray-scale and fuzzy operators, into a uniform definition that can be extended to new variants by using different semirings. An interesting horizon for future research on category theory and mathematical morphology is the relation between *-autonomous categories and mathematical morphology. *-autonomous categories are the categorical representation of linear logic, which has been a major research area. 
Many models for linear logic, such as Petri nets and game semantics, have been suggested, but none of them is fully satisfying. Mathematical morphology is claimed to be a model of linear logic \cite{van2007modal}. The authors have shown that every derivable formula in linear logic holds in the mathematical morphology model, but the converse is an open problem. Research on *-autonomous categories, which are symmetric monoidal categories with a dualizing object, and their connection to morphological operations will help shed light on the relation between linear logic and mathematical morphology. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} \PARstart{D}etecting correlated network flows, also known as flow linking, is a technique for traffic analysis with wide applications in network security and privacy. For instance, it may be utilized to expose a stepping stone attacker who hides behind proxy hosts. Alternatively, flow linking has been successfully used to attack low-latency anonymity networks such as Tor~\cite{Wang07}, where anonymity is compromised once end flows are correctly matched. As network connections are often encrypted, it is infeasible to link flows directly relying on packet contents. However, matching flows using side information such as packet timings is possible, as their values remain to some extent unchanged even after encryption~\cite{Staniford-Chen95, Zhang00, Yoda00}. Earlier work in flow linking was based on long observation of flow characteristics, such as the number of ON/OFF periods~\cite{Zhang00}. Such {\em passive} techniques are fragile vis-a-vis network artifacts and require long observation periods to avoid large false alarm rates. {\em Flow watermarking}, an active approach, was suggested as an improvement. In this approach, a pattern, the watermark, is injected into the flow with the hope that the flow stays traceable after traversing the network as long as the same pattern can be later extracted~\cite{Wang02, Wang05, Wang07, Pyun07,Houmansadr09,Yu07}. Unlike passive schemes, flow watermarking is highly reliable and works effectively on short flows. 
The challenge of designing good flow watermarks is to keep the injected pattern robust to network artifacts yet invisible to watermark attackers.\footnote{The goal of watermark attackers is to prevent the success of flow linking by disrupting the detection or altogether removing the watermarks from the flow.} The robustness requirement guarantees that the injected pattern survives network artifacts, while the invisibility property prevents watermark removal attempts by active attackers. Most state-of-the-art schemes sacrifice one of the two properties for the other. In the so-called \emph{interval-based} schemes~\cite{Wang07, Pyun07}, a flow is divided into intervals, and all packets within selected intervals are shifted to form a watermark pattern. Given that a few packets would not greatly affect the pattern created in the entire interval, these schemes are robust against network artifacts such as packet drops and splits. However, shifting a large number of packets produces noticeable `traces' of the embedded watermarks and compromises the invisibility requirement~\cite{Kiyavash08}. In \emph{inter-packet-delay (IPD)-based} schemes~\cite{Wang02,Houmansadr09}, the delays between consecutive packets are modulated to embed watermarks. Since only small perturbations are introduced in each inter-arrival time, watermarks are not visible. The drawback of this approach is that any packet loss or insertion during transmission can cause watermark desynchronization and severe decoding errors. In this paper, we present a new IPD-based flow watermarking scheme where invisible watermark patterns are injected in the inter-arrival times of successive packets. 
We treat the network as a channel with substitution, deletion, and bursty insertion errors caused by jitter, packet drops, and packet splits or retransmissions, respectively, and introduce an insertion, deletion and substitution (IDS) error-correction coding scheme to communicate the watermark reliably over the channel. At the same time, we preserve watermark invisibility by making unnoticeable modifications to packet timings using the QIM framework~\cite{Chen01}. Through experiments on both synthetic and real network traces, we show that our scheme performs reliably in the presence of network jitter, packet losses and insertions. Furthermore, we verify the watermark invisibility using {\em Kolmogorov-Smirnov}~\cite{Massey51} and {\em multi-flow-attack} tests~\cite{Kiyavash08}. Deletion correction codes were first applied to flow watermarking in~\cite{xunICASSP12}, where watermarks can be decoded correctly after packet losses as long as the first packet in the flow was not dropped. In this work, we extend our decoder to handle more realistic network environments where not only packet losses but also packet insertions occur. Furthermore, the synchronization requirement on the first packet is relaxed. To verify the performance of our scheme, traffic traces collected from real SSH connections are tested. This improves the simulation setup in~\cite{xunICASSP12}, where only synthetic traffic was used. The rest of the paper is organized as follows. Background on flow watermarking appears in \S\ref{sec:background}. We describe notations and definitions in \S\ref{sec:def}. Our proposed scheme is presented in \S\ref{sec:scheme}. We evaluate the performance of our scheme using synthetic and real traffic traces in \S\ref{sec:eval}. \section{Background} \label{sec:background} This section covers some background material on flow watermarking. First, we describe three application scenarios of flow watermarking. 
Second, we discuss some principles for designing good watermarking schemes. We conclude by surveying the literature. \subsection{Applications} We begin with a stepping-stone detection scenario where flow watermarks are used to find hidden network attackers. Figure~\ref{fig:step_stone} depicts an attacker {\em Bob} who wants to attack a victim {\em Alice} without exposing his identity. {\em Bob} first remotely logs in to a compromised intermediate host {\em Charlie} via SSH~\cite{ssh}. Then he proceeds by sending attack flows to {\em Alice} from {\em Charlie}'s machine. Tracing packet flows sent to Alice's machine would implicate {\em Charlie} instead of {\em Bob} as the attacker. Hosts like {\em Charlie}, exploited to hide the real attack source, are called {\em stepping stones}~\cite{Staniford-Chen95}. In real life, attackers may hide behind a chain of stepping stones, making it hard for the victim, who only sees the last hop, to determine the origin of the attack. Fortunately, flow watermarking is a solution for tracing the attack source. Notice that an interactive connection is maintained along {\em Bob-Charlie-Alice} during the above stepping stone attack. Hence {\em Alice} can secretly embed a watermark in the packet flow heading back to {\em Charlie}. As this flow travels back to {\em Bob}, the watermark can subsequently be detected by the intermediate routers (or firewalls), revealing the attack path and its true origin~\cite{Yung02, Ding11}. \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{Graphics/stepping_stone} \caption{Detecting attackers behind stepping stones. {\em Bob} uses {\em Charlie} as a stepping stone to attack {\em Alice} so that his identity remains hidden from {\em Alice}. To trace back the origin of this attack, {\em Alice} injects a watermark on the flow sent back to the stepping stone. 
The path leading to {\em Bob} is exposed as every router along this path detects {\em Alice}'s watermark on flows passing through.} \label{fig:step_stone} \end{figure} \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[width=0.45\columnwidth]{Graphics/stepping_stone_2} \label{fig:step_stone_enterprise} }\subfigure[]{ \includegraphics[width=0.45\columnwidth]{Graphics/anonymitynetwork} \label{fig:anonymity} } \caption{(a) Stepping stones in enterprise networks. An intruder compromises a host in the enterprise network as a `stepping stone'. The enterprise embeds watermarks on all incoming flows and monitors all the outgoing flows. Any pair of incoming/outgoing flows with the same watermark indicates the existence of inside stepping stones. (b) An anonymity network. Incoming flows are shuffled before leaving the system to hide the pairing among communicating parties.} \label{fig:application} \end{figure*} Another stepping-stone attack scenario occurs in enterprise networks, as shown in Figure~\ref{fig:step_stone_enterprise}. Here, intruders are trying to compromise hosts in an enterprise network to relay their malicious traffic~\cite{Lippmann05, Kiyavash08}. To discover such `stepping stones' within the network, an enterprise can add watermarks on all incoming flows, and then terminate outgoing flows that contain the watermark since they most probably come from stepping stones. In a similar fashion, flow watermarking may be applied to attack anonymity networks~\cite{ssh,i2p,freenet,gnunet}. In order to hide the identities of communicating parties, an anonymity network shuffles all the flows passing through it, as shown in Figure~\ref{fig:anonymity}. If an attacker somehow discovers the hidden mappings between incoming and outgoing flows, the anonymity is compromised.
Akin to the previous enterprise network scenario, this can be achieved by marking all incoming flows with watermarks and subsequently detecting the watermarks on the exiting flows. \subsection{Design Principles} \label{sec:design} From the above application examples, we summarize a list of principles for designing flow watermarks. The challenge of building an efficient scheme lies in the difficulty of achieving all desired properties simultaneously. \begin{itemize} \item {\em Robustness}. One major advantage of flow watermarking over passive traffic analysis is its robustness against network noise. Take the stepping-stone attack of Figure~\ref{fig:step_stone} for example. The flow {\em Alice} sends back to {\em Bob} is subjected to jitter, packet drops, and packet splits during transmission. All these artifacts can alter the watermark, resulting in decoding errors. Without the ability to withstand these artifacts, flow watermarking is no different from passive analysis, which is fragile by nature. \item {\em Invisibility}. A successful watermark pattern should stay `invisible' to avoid possible attacks. For instance, in Figure~\ref{fig:step_stone_enterprise}, if the intruder notices that incoming flows contain watermarks, it can command the stepping stone to take precautionary actions (for instance, remove the watermarks altogether). \item {\em Blindness}. In a blind watermarking scheme, the watermark pattern can be extracted without the help of the original flow~\cite{Cox}. On the contrary, the original flow must be present in order to detect non-blind watermarks. Again, consider the example of Figure~\ref{fig:step_stone_enterprise}. In order to detect the stepping stone, the enterprise needs to perform watermark decoding on all outgoing flows. If a non-blind detection scheme is used, all exit routers are required to obtain a copy of each incoming flow. The resulting bandwidth and storage overheads make such schemes impractical in large enterprise networks.
\item{\em Presence watermarking}. In conventional digital watermarking (e.g., multimedia watermarking), a large hiding capacity is often desired, as the injected watermarks are frequently used for copyright protection among many users~\cite{Sergio}. This, fortunately, is not required for most flow watermarking applications, since the main purpose of injecting watermarks here is to link flows initiated from the same sources. In other words, in digital watermarking terminology, {\em zero-bit} or {\em presence} watermarks suffice~\cite{Cox}. Therefore, when designing a flow watermarking scheme, one may trade the capacity for other properties such as robustness (see the discussion in~\S\ref{sec:ids_enc}). \end{itemize} \subsection{Watermark Attack Models} The difficulty of maintaining watermark invisibility depends on the specific attack model. Based on the strength of the watermark attacker, attack models may be classified as follows: \begin{itemize} \item {\em Level-I:} the attacker observes the watermarked flow, and has knowledge of certain features (e.g., empirical distributions of IPDs) of the original flow; \item {\em Level-II:} the attacker observes the watermarked flow, and has a distorted version of the original flow; \item {\em Level-III:} the attacker observes both the watermarked flow and the original flow. \end{itemize} In Level-I, the weakest attack model, the attacker can only discover the presence of a watermark through statistical approaches that reveal a deviation of known features from their norm in the original flow. For interval-based schemes, the multi-flow attack (MFA) exposes empty intervals in the combination of several watermarked flows~\cite{Kiyavash08}. For IPD-based schemes, the empirical distribution of IPDs, which should remain unchanged with high probability, distinguishes watermarked flows from unwatermarked ones via Kolmogorov-Smirnov (K-S) tests~\cite{Massey51,Houmansadr09}.
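As a concrete illustration of the K-S test just mentioned, the following sketch computes the two-sample K-S statistic (the largest gap between two empirical CDFs). The flows here are synthetic and purely hypothetical, not the traces used in this paper; the exponential IPDs and the size of the added delays are illustrative assumptions only.

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample K-S statistic: the maximum gap between the two
    empirical CDFs, evaluated at every pooled sample point."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_s, x):
        # fraction of points <= x
        return bisect.bisect_right(sorted_s, x) / len(sorted_s)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

random.seed(1)
# hypothetical exponential IPDs (mean rate of a few packets per second)
orig = [random.expovariate(3.3) for _ in range(2000)]
# small extra delays, on the order of a few milliseconds
marked = [ipd + random.uniform(0.0, 0.005) for ipd in orig]
print(ks_statistic(orig, marked))
```

A large statistic (relative to the usual K-S critical values) would flag the marked flow as anomalous; perturbations that are small compared to the IPDs themselves keep the statistic small.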
We show in~\S\ref{sec:visibility} that our watermark does not introduce noticeable patterns for the MFA or the K-S test to detect. In Level-II, given a distorted version of the original flow, the attacker has in effect an imperfect realization of the original flow signal, which is more informative than the statistical information a Level-I attacker has. A Level-II attack, BACKLIT, was recently proposed, where the attacker serves as a traffic relay between the client and server of a TCP connection and thus sees both REQUEST and RESPONSE flows~\cite{Luo}. When watermarks are added, packets along one direction (i.e., RESPONSE) must be delayed. The attacker can detect this `delayed' timing pattern as he observes the `clean' flow in the REQUEST direction. BACKLIT works well when a strong correlation between the REQUEST and RESPONSE flows exists, in which case the attacker has a high-fidelity version of the original flow. In~\S\ref{sec:visibility}, we evaluate our scheme against BACKLIT. We show that in practice the correlation between the RESPONSE and REQUEST flows is destroyed for the most part by network jitter, so the watermarks in our scheme, which add only very small perturbations to the IPDs, can remain hidden. A Level-III attacker, who observes the exact original flow, has a significantly easier task detecting the presence of a watermark~\cite{Lin}. This attack model, however, requires the attacker to be able to observe arbitrary flows everywhere in the network, which is infeasible for most real applications. In this work, we focus on the first two attack models when evaluating watermark invisibility. \subsection{Related Work} \label{sec:related} We briefly review the previous flow watermarking literature. To the best of our knowledge, all the previous schemes fail to meet at least one of the above design principles, necessitating the development of a comprehensive approach that meets all the aforementioned criteria.
Earlier flow watermarks are of the {\em inter-packet delay (IPD)-based} type. In~\cite{Wang02}, the authors propose an IPD-based scheme that modulates the mean of selected IPDs using the QIM framework. Watermark synchronization is lost if enough packets are dropped or split. Therefore, the scheme is unreliable. Another IPD-based scheme is presented in~\cite{Houmansadr09}, where watermarks are added by enlarging or shrinking the IPDs. This non-blind scheme achieves some watermark resynchronization when packets are dropped or split, but is not scalable as the original packet flow is required during decoding. In {\em interval-based} schemes, instead of using the IPDs between individual packets, the watermark pattern is encoded into batch packet characteristics within fixed time intervals. In~\cite{Wang07}, an interval-centroid scheme is proposed. After dividing the flow into time intervals of the same length, the authors create two patterns by manipulating the centroid of packets within each interval. The modified centroids are not easily changed even after packets are delayed, lost, or split. A similar design is presented in~\cite{Pyun07}, where the watermark pattern is embedded in the packet densities of predefined time intervals. One problem with interval-based schemes is the lack of invisibility. Moving packets in batches generates visible artifacts, which can expose the watermark positions. Based on this observation, a multi-flow attack (MFA) was proposed in~\cite{Kiyavash08}. The authors showed that by lining up as few as 10 watermarked flows, an attacker can observe a number of large gaps between packets (see Figure~10 in~\cite{Kiyavash08}) in the aggregate flow, revealing the watermark positions. Recently, a new interval-based scheme was proposed in~\cite{Swirl}. The main idea is that the exact locations of the modified intervals depend on the flow pattern.
This flow-dependent design reduces the success rate of the MFA, but makes it more difficult to retrieve the correct intervals for decoding in the face of strong network noise. Moreover, the perturbation introduced in the IPDs is large enough to make the scheme susceptible to Level-II attacks such as BACKLIT. \begin{table} \centering \caption{Summary of current watermarking schemes} \begin{tabular}{|c|c|cccc|} \hline & & \shortstack{Invisibility\\Level-I} & \shortstack{Invisibility\\Level-II} & Robustness & Blindness\\ \hline \multirow{3}{*}{Interval-based} & \cite{Wang07} & no & no & yes & yes\\ & \cite{Pyun07} & no & no & yes & yes\\ & \cite{Swirl} &yes&no&yes&yes\\\hline \multirow{3}{*}{IPD-based} & \cite{Wang02} & yes & yes$^{*}$ & no & yes\\ & \cite{Houmansadr09} & yes & no & yes & no\\ & The proposed scheme & {\bf yes} & {\bf yes}$^{*}$ & {\bf yes} & {\bf yes}\\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item *The Level-II attack model is effective only when network jitter is small. Schemes like ours that add very small perturbations to IPDs remain hidden under normal network operating conditions (see~\S\ref{sec:visibility}). \end{tablenotes} \label{tab:summary} \end{table} Table~\ref{tab:summary} compares existing flow watermarking schemes with our proposed scheme. Unlike previous work, the new scheme satisfies all the desired properties. \section{Notations and Definitions} \label{sec:def} In the rest of the paper, we use the following notation. $\mathbf{a}^b=\{a_1,a_2,\cdots,a_b\}$ is a sequence of length $b$; $a^{r}_{t}=\{a_t,\cdots,a_r\}$ is a subsequence of $\mathbf{a}^b$ starting at index $t$ and ending at $r$. In particular, if $r< t$, $a^{r}_{t}$ is an empty sequence, denoted by $\emptyset$; $\oplus$ denotes the `xor' operation. We also define the following variables used in our scheme.
\begin{itemize} \item $\mathbf{I}^M$ is the IPD sequence of an original packet flow, where each delay, $I_i$, is positive real valued; \item $\mathbf{I'}^M$ is the IPD sequence of the same flow after injection of the watermark pattern; \item $\hat{\mathbf{I}}^{M'}$ is the IPD sequence received by the watermark decoder; \item $\mathbf{w}^n$ is the binary watermark sequence; \item $\mathbf{\tilde{w}}^N$ is a sparse version of $\mathbf{w}^n$, where $N=sn$; \item $s$ is the {\em sparsification factor} and is integer valued; \item $f$ is the density of $\mathbf{\tilde{w}}^N$ (see~\eqref{eq:density}); \item $\mathbf{k}^N$ is a pseudo-random binary key sequence; \item $\mathbf{x}^N$ is a binary sequence, generated from the watermark $\mathbf{w}^n$ and the key $\mathbf{k}^N$, and embedded into flow IPDs; \item $\mathbf{y}^{N'}$ is the decoder's estimate of $\mathbf{x}^N$; \item $\mathbf{\hat{w}}^n$ is the estimate of the watermark sequence $\mathbf{w}^n$ at the decoder; \item $\Delta$ is a real-valued step size used for IPD quantization. It represents the strength of the watermark signal; \item $\sigma$ is the standard deviation of jitter; \item $P_s, P_I,$ and $P_d$ represent the probabilities of a substitution, an insertion, and a deletion event in the communication channel model of the network, respectively. \end{itemize} \section{The Proposed Scheme} \label{sec:scheme} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{Graphics/system_model} \caption{An overview of the proposed flow watermarking scheme. The watermark sequence $\mathbf{w}^n$ is first transformed into a codeword $\mathbf{x}^N$ with the help of the key $\mathbf{k}^{N}$. $\mathbf{x}^N$ is then embedded into flow IPDs using QIM.
At the decoder, the IPDs are processed by a QIM decoder to extract the codeword $\mathbf{y}^{N'}$, from which the IDS decoder subsequently recovers the watermark $\hat{\mathbf{w}}^n$.} \label{fig:sys_mod} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{Graphics/ids_channel} \caption{Abstraction of the communication channel. The IDS encoder/decoder pair helps correct the dependent substitution, deletion, and bursty insertion errors on the channel.} \label{fig:ids_channel} \end{figure} \subsection{Overview of the System} Figure~\ref{fig:sys_mod} depicts the schematic of our proposed scheme, which can be divided into two layers: the insertion deletion substitution (IDS) encoder/decoder and the quantization index modulation (QIM) encoder/decoder. In the upper layer, the watermark sequence $\mathbf{w}^n$ is processed to generate an IDS error-correction codeword $\mathbf{x}^N$. In the lower layer, a QIM framework is used to inject $\mathbf{x}^N$ into the IPDs of the flow. QIM embedding is blind and causes little change to packet timings~\cite{Chen01}. Upon receiving the flow, the QIM decoder extracts the pattern $\mathbf{y}^{N'}.$ Subsequently an IDS decoder recovers the watermark, $\hat{\mathbf{w}}^n$, from this pattern. If we abstract the QIM encoder, the network, and the QIM decoder together as a channel, which takes $\mathbf{x}^N$ as the input and outputs $\mathbf{y}^{N'}$, flow watermarking is equivalent to solving the problem of sending one bit of information (the presence of the watermark) over this compound communication channel (see Figure~\ref{fig:ids_channel}). Codes for this compound channel must withstand {\em dependent substitution, deletion, and bursty insertion} errors. We next introduce each component of our scheme in detail.
\subsection{Insertion Deletion Substitution (IDS) Encoder} \label{sec:ids_enc} Our IDS error correction scheme is inspired by~\cite{Davey00, Davey01}, where a `marker' code is employed to provide reliable communications over a channel with deletion and insertion errors. However, the approach in~\cite{Davey00, Davey01} is not directly applicable to our channel, as we need to deal with somewhat more complicated errors, namely the dependent substitution, deletion, and bursty insertion errors which we discuss in~\S\ref{sec:error_analysis}. The IDS encoder works as follows. The watermark sequence $\mathbf{w}^n$ is first sparsified into a longer sequence $\mathbf{\tilde{w}}^N$ of length $N=sn$, as given by \begin{equation} \tilde{w}_{(j-1)s+1}^{js} = S\left(w_j\right), \quad j=1,2,\cdots, n, \label{eq:sparse} \end{equation} where $S(\cdot)$ is a deterministic sparsification function that pads $w_j$ with zeros, and is known at the decoder. We denote by $f$ the density of `1's in $\mathbf{\tilde{w}}^N$, i.e., \begin{equation} f = \frac{\sum_{i=1}^{N}\tilde{w}_{i}}{N}. \label{eq:density} \end{equation} $f$ is a decoding parameter shared with the IDS decoder. The sparsified watermark $\mathbf{\tilde{w}}^N$ is then added to a key $\mathbf{k}^N$ to form the codeword $\mathbf{x}^N$: \begin{equation} x_{i}=\tilde{w}_{i} \oplus k_{i}, \quad i=1,2,\cdots,N, \label{eq:add} \end{equation} where $\mathbf{k}^N$ is a pseudo-random {\em key} sequence which is also known at the decoder. Let us work through a small example of embedding one bit of watermark $w_1=1$ in a length-8 sequence. First $w_1$ is sparsified into an 8-bit sequence $\mathbf{\tilde{w}}^8=10000000$ (the sparsification factor $s = 8$). Then we add this sparse sequence to the first 8 bits of our key, $\mathbf{k}^8=11111011$. The resulting codeword is $\mathbf{x}^8=01111011$.
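The sparsification and key addition above can be sketched in a few lines. The zero-padding order (watermark bit first, then $s-1$ zeros) is one choice of $S(\cdot)$, matching the worked example in the text; the function names are ours, not part of the scheme's specification.

```python
def sparsify(watermark, s):
    """S(w_j): place each watermark bit first, then pad with s-1 zeros."""
    out = []
    for w in watermark:
        out.append(w)
        out.extend([0] * (s - 1))
    return out

def ids_encode(watermark, key, s):
    """Codeword x^N = sparsified watermark XOR'd with the key."""
    w_sparse = sparsify(watermark, s)
    assert len(w_sparse) == len(key)
    return [wi ^ ki for wi, ki in zip(w_sparse, key)]

# the worked example from the text: w_1 = 1, s = 8, key = 11111011
codeword = ids_encode([1], [1, 1, 1, 1, 1, 0, 1, 1], 8)
print(''.join(map(str, codeword)))  # -> 01111011
density = sum(sparsify([1], 8)) / 8  # f = 1/8
```

Because the sparse watermark flips the key in only one out of every $s$ positions, the codeword stays close to the known key, which is what allows the decoder to localize insertions and deletions.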
Because $\mathbf{x}^8$ differs from the key at only one position, the decoder can infer the positions of deleted or inserted bits by comparing the received codeword with the key. For instance, if the decoder receives a codeword $\mathbf{y}^7=0111011$, one bit shorter than the key, then it knows that most likely a bit `1' from the second run was lost during transmission. Based on this observation, a probabilistic decoder can be developed to fully recover the embedded bits, as will be discussed in~\S\ref{sec:dec}. Since $\mathbf{\tilde{w}}^N$ is sparse, the codeword $\mathbf{x}^N$ is close to the key, which is known at the IDS decoder. Therefore, the IDS encoding helps synchronize the lost/inserted bits at the cost of information capacity over the channel, which is not a concern for flow watermarking (see~\S\ref{sec:design}). \subsection{Insertion Deletion Substitution (IDS) Channel} \label{sec:qim_enc_dec} \subsubsection{QIM Embedding} The codeword $\mathbf{x}^N$ is injected into the IPDs of the original flow using QIM embedding. Given a quantization step size $\Delta$, the QIM encoder changes the IPD, $I_i$, into an even (or odd) multiple of $\frac{\Delta}{2}$ when the embedded bit $x_i$ is a bit 0 (or 1). The IPDs after modification are given by \begin{equation} I'_i = \left\{ \begin{array}{l l} \left \lceil \frac{\max\left(\sum_{j=1}^{i}I_j-\sum_{j=1}^{i-1}I'_j,0\right)}{\Delta} \right \rceil\Delta & \text{if } x_{i}=0, \\ \left(\left \lceil \frac{\max\left(\sum_{j=1}^{i}I_j-\sum_{j=1}^{i-1}I'_j,0\right)}{\Delta} \right \rceil+0.5\right)\Delta &\text{if } x_{i}=1,\\ \end{array}\right. \label{eq:qim_enc} \end{equation} for $i=1,2,\cdots N$, where the ceiling function describes the operation that adds the minimum delay to $Packet$~$i$ to form the desired multiple of $\frac{\Delta}{2}$.
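A minimal sketch of the QIM embedding rule in~\eqref{eq:qim_enc}, paired with the parity-based extraction rule of~\eqref{eq:qim_dec}, follows; the IPD values and step size here are illustrative, and only delay is ever added (packets cannot be sped up).

```python
import math

def qim_embed(ipds, bits, delta):
    """Delay each packet minimally so that its IPD becomes an even
    (bit 0) or odd (bit 1) multiple of delta/2, per the ceiling rule."""
    out, sent, marked = [], 0.0, 0.0
    for ipd, bit in zip(ipds, bits):
        sent += ipd
        residual = max(sent - marked, 0.0)   # time budget accumulated so far
        q = math.ceil(residual / delta)
        new_ipd = (q + 0.5) * delta if bit else q * delta
        out.append(new_ipd)
        marked += new_ipd
    return out

def qim_extract(ipds, delta):
    """Recover each bit from the parity of the nearest multiple of
    delta/2 (ties resolved toward the lower multiple)."""
    bits = []
    for ipd in ipds:
        x = 2.0 * ipd / delta
        n = math.floor(x) if x - math.floor(x) <= 0.5 else math.ceil(x)
        bits.append(n % 2)
    return bits

# illustrative IPDs in seconds, delta = 100 ms
marked = qim_embed([0.31, 0.52, 0.45], [1, 0, 1], 0.1)
print(qim_extract(marked, 0.1))  # -> [1, 0, 1] in the noiseless case
```

The running sums `sent` and `marked` implement the $\sum_j I_j - \sum_j I'_j$ terms of~\eqref{eq:qim_enc}, so the accumulated extra delay never exceeds one quantization step.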
At the QIM decoder, each embedded bit is extracted based on whether a received IPD is closer to an even or odd quantizer, i.e., \begin{equation} y_i = \left\{ \begin{array}{l l} \lfloor{\frac{2\hat{I}_{i}}{\Delta}}\rfloor \mod 2 \quad \text{if } \frac{2\hat{I}_{i}}{\Delta}-\lfloor\frac{2\hat{I}_{i}}{\Delta}\rfloor \le 0.5, \\ \lceil{\frac{2\hat{I}_{i}}{\Delta}}\rceil \mod 2 \quad \text{if } \frac{2\hat{I}_{i}}{\Delta}-\lfloor\frac{2\hat{I}_{i}}{\Delta}\rfloor > 0.5.\\ \end{array}\right . \label{eq:qim_dec} \end{equation} \subsubsection{Channel Model} \label{sec:error_analysis} \begin{figure}[t] \centering \input{psFig/fig_jitters.tex} \caption{An example of substitution errors caused by network jitter. `$\mathbf{x}$'s denote even quantizers and `$\mathbf{o}$'s odd quantizers. The bit embedded in $I_1$ is `1', but the decoded bit from $\hat{I}_1$ (delay between received packets~$0$ and $1$) is `0'.} \label{fig:jitter} \end{figure} In the presence of network artifacts, the received IPDs, $\hat{\mathbf{I}}^{M'}$, differ from the original IPDs ${\mathbf{I}}^M$, leading to errors in decoding $\mathbf{x}^N$. {\em Substitution} errors occur when network jitter significantly alters the IPDs. Figure~\ref{fig:jitter} depicts one example where an embedded bit is flipped by jitter. In Figure~\ref{fig:jitter}, the bit `$x_1=1$' was originally encoded in the IPD $I_1$, resulting in $I'_1=\frac{\Delta}{2}$. But at the QIM decoder, the received IPD $\hat{I}_1$ is pushed by jitter into the interval $(\frac{3\Delta}{4},\Delta)$, and thus decoded as `$y_1=0$'. In the absence of packet drops or splits, a watermark bit flips if the IPD jitter is larger than $\frac{\Delta}{4}$. Following the observation of previous work that shows IPD jitter (within a certain period of time) is approximately i.i.d.
zero-mean Laplace distributed~\cite{Houmansadr09}, the probability of a substitution error caused by jitter can be estimated as \begin{equation} P_s= 1-F\left(\frac{\Delta}{4}\right) = \frac{1}{2}e^{-\frac{\Delta}{2\sqrt{2}\sigma}}, \label{eq:substitution} \end{equation} where $F(\cdot)$ is the Laplacian cdf and $\sigma^2$ is its variance. \begin{figure}[t] \centering \input{psFig/fig_diagrammerge.tex} \caption{ Merging of IPDs as the result of packet drops. The deletion of Packet~1 merges the first two IPDs $I_1$ and $I_2$, and the deletions of Packets~3 and 4 merge $I_3$, $I_4$ and $I_5$.} \label{fig:deletion} \end{figure} Decoding errors also occur when packets are dropped. As packet drops lead to the merger of successive IPDs, the resulting error contains both a deletion and a substitution, which we refer to as a {\em dependent deletion and substitution error}. For instance in Figure~\ref{fig:deletion}, the deletion of Packet~1 merges the IPDs $I_1$ and $I_2$ into a large received IPD $\hat{I}_1$. As a result, instead of $x_1$ and $x_2$, only one bit $x_1\oplus x_2$ is received at the decoder. We consider this case as a deletion of $x_1$, and possibly a substitution of $x_2$. In this paper, we assume that each packet is dropped independently with probability $P_d$. For the convenience of analysis, we also assume that the head of the watermarked packet sequence, $Packet$~$0$, is not dropped. The last type of error comes from packet insertions. This happens when packets are split to meet a smaller packet size limit, or when TCP retransmissions are triggered by network congestion. Both cases cause bursty insertions of packets. An example of such a scenario is depicted in Figure~\ref{fig:insertion}. Packet 2 is split into three smaller ones, creating two new IPDs (2-2' and 2'-2'', both with zero length). Therefore, two extra `0' bits would be decoded in $\mathbf{y}^{N'}$.
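The tail probability in~\eqref{eq:substitution} is easy to sanity-check numerically. The sketch below (with purely illustrative parameter values) draws Laplacian jitter via inverse-CDF sampling and compares the empirical rate of exceeding $\Delta/4$ with the closed form:

```python
import math
import random

def substitution_prob(delta, sigma):
    """Closed form P_s = (1/2) exp(-delta / (2*sqrt(2)*sigma))."""
    return 0.5 * math.exp(-delta / (2.0 * math.sqrt(2.0) * sigma))

def empirical_ps(delta, sigma, trials=200_000, seed=7):
    """Fraction of Laplace(0, sigma/sqrt(2)) draws exceeding delta/4."""
    rng = random.Random(seed)
    b = sigma / math.sqrt(2.0)   # Laplace scale giving std. dev. sigma
    hits = 0
    for _ in range(trials):
        u = rng.random() - 0.5
        mag = -b * math.log(1.0 - 2.0 * abs(u))   # |X| ~ Exponential(1/b)
        x = mag if u >= 0.0 else -mag
        if x > delta / 4.0:
            hits += 1
    return hits / trials

# delta = 60 ms, jitter std. dev. = 10 ms (illustrative values only)
print(substitution_prob(0.06, 0.01), empirical_ps(0.06, 0.01))
```

With 200{,}000 trials the Monte Carlo estimate lands within a fraction of a percent of the closed form, and $P_s$ decays exponentially as the step size $\Delta$ grows relative to the jitter.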
In general, newly generated packets are mostly right next to the original one; hence we consider all inserted bits to be `0's.\footnote{Our methodology can be extended to cover the case that both `0' and `1' bits may be inserted.} Furthermore, we assume the number of inserted packets follows a geometric distribution with parameter $P_I$. \begin{figure}[t] \centering \input{psFig/fig_diagramsplit.tex} \caption{A scenario with packet insertions. Packet 1 is split into two packets, and Packet~2 is split into three pieces.} \label{fig:insertion} \end{figure} \subsection{Insertion Deletion Substitution (IDS) Decoder} \label{sec:dec} We estimate each watermark bit from $\mathbf{y}^{N'}$ using the maximum likelihood decoding rule given by \begin{equation} \hat{w}_j = \arg \underset{w_j\in\{0,1\}}{\max} P\left(\mathbf{y}^{N'}|w_j\right), \quad j=1,2,\cdots, n. \label{eq:ml} \end{equation} Since $\mathbf{x}^{N}$ is a deterministic function of $\mathbf{w}^n$, we derive the likelihood in~\eqref{eq:ml} based on the dependency between $\mathbf{y}^{N'}$ and $\mathbf{x}^{N}$ over the IDS channel. Suppose the QIM decoder received $i'-1$ packets after the first $i-1$ packets were sent out by the QIM encoder, and assume the $(i'-1)^{th}$ packet in the received flow corresponds to the $q^{th}$ packet in the sent flow or a packet inserted immediately after it ($q\leq i-1$). The possible outcomes after $Packet$~$i$ is sent are: \begin{itemize} \item if $Packet$~$i$ in the sent flow is lost and no packets are inserted, the QIM decoder cannot decode new bits; \item if $Packet$~$i$ is lost but $l>0$ new packets are inserted right after it, the decoder can decode $l$ bits, $y_{i'}^{i'+l-1}$, from the newly received IPDs; \item if $Packet$~$i$ is received and additionally $l\geq 0$ packets are inserted, the decoder can decode $l+1$ new bits, $y_{i'}^{i'+l}$.
\end{itemize} In the last two cases, the new IPD between the $(i'-1)^{th}$ and $i'^{th}$ packets in the received flow corresponds to the merger of all IPDs between $Packet$~$q$ and $Packet$~$i$ in the sent flow. Hence, the first new bit, $y_{i'}$, is given by \begin{equation} \begin{aligned} y_{i'} = &\begin{cases} x_{q+1} \oplus x_{q+2} \cdots \oplus x_{i} & \text{w. p.}\quad1- P_s,\\ x_{q+1} \oplus x_{q+2} \cdots \oplus x_{i} \oplus 1 & \text{w. p.}\quad P_s, \end{cases} \end{aligned} \label{eq:temp1} \end{equation} where $P_s$ is the probability of a substitution error given in~\eqref{eq:substitution}. The remaining new bits $y^{i'+l-1}_{i'+1}$ (or $y^{i'+l}_{i'+1}$) are just `0' bits resulting from bursty packet insertions. \subsubsection{Hidden Markov Model} To capture the evolution of newly decoded bits from the received flow, we define the state after sending each packet as the pair $(x'_i,d_i)$, for $ i=1,2,\cdots N$, where \begin{itemize} \item The {\em accumulated bit} $x'_i$ is the sum of all bits resulting from the merger of the IPDs between $Packet$~$i$ and the previous packet that was received at the decoder. If $Packet$~$i-1$ was received, then $x'_i$ is just the bit embedded on the IPD between $Packet$~$i$ and~$i-1$, i.e., $x_i$. On the other hand, if $Packet$~$i-1$ was completely lost (i.e., after its deletion, there were no insertions), $x'_i$ would be the sum of the current bit $x_i$ and the bits embedded on previously merged IPDs, i.e., $x_i\oplus x'_{i-1}$. To sum up, \begin{equation} \begin{aligned} x'_{i}= &\begin{cases} x_{i} & \text{w. p.} \quad 1-P_d\left(1-P_I\right),\\ x_{i} \oplus x'_{i-1} & \text{w. p.} \quad P_d\left(1-P_I\right).\\ \end{cases} \end{aligned} \label{eq:accumulated_bit} \end{equation} Recall from~\eqref{eq:add} that $\mathbf{x}^N$ is generated using the key $\mathbf{k}^N$ and the sparse watermark sequence $\mathbf{\tilde{w}}^N$.
We will model the sparse watermark bits $\tilde{w}_i$ as independent $Bernoulli(f)$ random variables. Therefore \eqref{eq:accumulated_bit} can be rewritten as \begin{equation} \begin{aligned} x'_{i}= &\begin{cases} k_{i} & \text{w. p.} \quad (1-f)\left(1-P_d(1-P_I)\right),\\ k_{i} \oplus 1 & \text{w. p.} \quad f\left(1-P_d(1-P_I)\right),\\ k_{i} \oplus x'_{i-1} & \text{w. p.} \quad (1-f)P_d\left(1-P_I\right),\\ k_{i} \oplus x'_{i-1} \oplus 1 & \text{w. p.} \quad fP_d\left(1-P_I\right). \end{cases} \end{aligned} \label{eq:temp3} \end{equation} Note that from~\eqref{eq:accumulated_bit}, we can rewrite~\eqref{eq:temp1} as \begin{equation} \begin{aligned} y_{i'} = &\begin{cases} x'_{i}& \text{w. p.}\quad1- P_s,\\ x'_{i}\oplus1& \text{w. p.}\quad P_s. \end{cases}\end{aligned} \label{eq:temp1_1} \end{equation} \item The {\em drift} $d_i$ is the shift in position of the sent $Packet$~$i$ in the received flow, i.e., if $Packet$~$i$ was not lost, it would appear at position $i'=i+d_i$ in the received flow. Given $d_{i-1}$, the drift of $Packet$~$i$ is updated as \begin{equation} \begin{aligned} d_{i}= &\begin{cases} d_{i-1}-1 & \text{w. p.}\quad P_{d}\left(1-P_I\right),\\ d_{i-1}+l, l\geq 0 & \text{w. p.}\quad \left(P_dP_I^{l+1}(1-P_I)+(1- P_d)P_I^{l}(1-P_I)\right), \end{cases} \end{aligned} \label{eq:drift} \end{equation} where the first case occurs when $Packet$~$i-1$ was dropped with no new packets inserted, and the second case occurs when a total of $l$ packets are received, either because $Packet$~$i-1$ was dropped and there were $l+1$ insertions or $Packet$~$i-1$ was received and there were $l$ insertions. For the first packet, Packet~0, we initialize $d_0=0$. This minor change lets us relax the synchronization requirement on the first packet. \end{itemize} Combining~\eqref{eq:temp1_1} and~\eqref{eq:temp3}, and given $i'=i+d_i$, we have \begin{equation} \begin{aligned} y_{i+d_{i}} = &\begin{cases} k_{i} & \text{w.
p.} \quad \left((1-f)(1-P_s)+fP_s\right)\left(1-P_d(1-P_I)\right),\\ k_{i} \oplus 1 & \text{w. p.} \quad \left(f(1-P_s)+(1-f)P_s\right)\left(1-P_d(1-P_I)\right),\\ x'_{i-1}\oplus k_{i} & \text{w. p.} \quad \left((1-f)(1- P_s)+fP_s\right) P_d\left(1-P_I\right),\\ x'_{i-1} \oplus k_{i} \oplus 1 & \text{w. p.} \quad \left(f(1- P_s)+ (1-f)P_s\right) P_d\left(1-P_I\right). \end{cases} \end{aligned} \label{eq:temp4} \end{equation} \begin{figure}[t] \centering \input{psFig/fig_hmm.tex} \caption{The hidden-Markov model for the IDS channel. The observations are the codewords ($y_i$'s) received by the IDS decoder, and the hidden states keep track of the drift and accumulated bit when sending every packet.} \label{fig:hmm} \end{figure} Equation~\eqref{eq:temp4} captures the HMM with hidden states of $(x'_i,d_i), i=1,2,\cdots N$ and observation states of $\mathbf{y}^{N'}$, as depicted in Figure~\ref{fig:hmm}. The state transition probabilities $P\left(y_{i-1+d_{i-1}}^{i-1+d_{i}}, x'_{i},d_i | x'_{i-1},d_{i-1}\right)$ can be derived using~\eqref{eq:temp3},~\eqref{eq:drift} and~\eqref{eq:temp4}, summarized as \begin{equation} \begin{aligned} &P\left(y_{i-1+d_{i-1}}^{i-1+d_{i}}, x'_{i},d_i | x'_{i-1},d_{i-1}\right)=\\ &\begin{cases} (1-f)P_d(1-P_I)& \text{if } x'_i=x'_{i-1}\oplus k_i, d_{i}=d_{i-1}-1 \text{ and } y_{i-1+d_{i-1}}^{i-1+d_i}=\emptyset,\\ fP_d(1-P_I) & \text{if } x'_i=x'_{i-1}\oplus k_i\oplus1, d_{i}=d_{i-1}-1 \text{ and } y_{i-1+d_{i-1}}^{i-1+d_i}=\emptyset,\\ (1-f)(1-P_s)(1-P_I)( P_dP_I^{l+1}+(1-P_d)P_I^{l})& \text{if } x'_i=k_i, d_{i}=d_{i-1}+l \text{ and } y_{i-1+d_{i-1}}=x'_{i-1},\\ f(1-P_s)(1-P_I)( P_dP_I^{l+1} +(1-P_d)P_I^{l})& \text{if } x'_i=k_i\oplus1, d_{i}=d_{i-1}+l \text{ and } y_{i-1+d_{i-1}}=x'_{i-1},\\ (1-f)P_s(1-P_I)( P_dP_I^{l+1}+(1-P_d)P_I^{l})& \text{if } x'_i=k_i, d_{i}=d_{i-1}+l \text{ and } y_{i-1+d_{i-1}}=x'_{i-1}\oplus1,\\ fP_s(1-P_I)( P_dP_I^{l+1}+(1-P_d)P_I^{l})& \text{if } x'_i=k_i\oplus1, d_{i}=d_{i-1}+l \text{ and } 
y_{i-1+d_{i-1}}=x'_{i-1}\oplus1. \end{cases} \end{aligned} \label{eq:transition_prob} \end{equation} For example, suppose that after sending $Packet$~$i-1$, the system state is $(x'_{i-1},d_{i-1})$, and that $Packet$~$i-1$ is lost and no packets are inserted. Then from~\eqref{eq:drift}, the drift of $Packet$~$i$ becomes $d_{i}=d_{i-1}-1$, and no new bit is decoded, i.e., $y_{i-1+d_{i-1}}^{i-1+d_{i}}$ is an empty sequence. Additionally, the IPD between $Packet$~$i$ and $i-1$ is added to the previously merged IPDs, so that $x'_{i}$ is decided based on the last two cases in~\eqref{eq:temp3}. Overall, the transition probability in this scenario is given by \begin{equation} \begin{aligned} P\left(\emptyset, x'_{i}, d_{i-1}-1| x'_{i-1}, d_{i-1} \right)= &\begin{cases}(1-f)P_d(1-P_I) & \text{if} \quad x'_{i}=x'_{i-1}\oplus k_{i},\\ fP_d(1-P_I) & \text{if} \quad x'_{i}=x'_{i-1}\oplus k_{i}\oplus 1. \end{cases} \end{aligned} \end{equation} \subsubsection{Forward-Backward Algorithm} For the HMM in Figure~\ref{fig:hmm}, we apply the forward-backward algorithm to derive the likelihoods $P(\mathbf{y}^{N'}|w_j)$, $j=1,2,\cdots, n$. Let us define the \emph{forward} quantity as the joint probability of the bits $y_1^{i-1+d_i}$ decoded before sending $Packet$~$i$ and of the hidden state $(x'_{i},d_i)$, which is given by \begin{equation} \begin{aligned} F_{i}(x'_{i},d_i) = P(y_1^{i-1+d_i}, x'_{i},d_i), \quad i=1,2,\cdots, N. \end{aligned} \label{eq:forward_def} \end{equation} The forward quantities can be computed recursively using the transition probabilities in~\eqref{eq:transition_prob} as \begin{equation} \begin{aligned} F_{i}(x'_{i},d_i)=\sum_{\substack{x'_{i-1},\\d_{i-1}}}F_{i-1}(x'_{i-1},d_{i-1}) P(y_{i-1+d_{i-1}}^{i-1+d_{i}}, x'_{i},d_i | x'_{i-1},d_{i-1}).
\end{aligned} \label{eq:forward} \end{equation} Similarly, we define the \emph{backward} quantity as the conditional probability of decoding the rest of the bits in the received flow, $y_{i+d_i}^{N'}$, given the current state $(x'_{i},d_i)$, \begin{equation} B_{i}(x'_{i},d_i)=P(y_{i+d_i}^{N'}|x'_{i},d_i), \quad i=1,2,\cdots, N. \label{eq:backward_def} \end{equation} The backward quantities can also be computed recursively as \begin{equation} \begin{aligned} B_{i}(x'_{i},d_i)=\sum_{\substack{x'_{i+1},\\d_{i+1}}}P(y_{i+d_{i}}^{i+d_{i+1}}, x'_{i+1},d_{i+1} | x'_{i},d_{i})B_{i+1}(x'_{i+1},d_{i+1}). \end{aligned} \label{eq:backward} \end{equation} Given the forward/backward quantities, the likelihood of the watermark bit $w_j$ is given by \begin{equation} \begin{aligned} P\left(\mathbf{y}^{N'}|w_j\right) &= P\left(\mathbf{y}^{N'}|\tilde{w}^{js}_{(j-1)s+1}\right)\\ &= \sum_{\substack{x'_{(j-1)s},x'_{js},\\d_{(j-1)s},d_{js}}} F_{(j-1)s}\left(x'_{(j-1)s},d_{(j-1)s}\right)\hat{F}_{js}\left(x'_{js},d_{js}\right)B_{js}\left(x'_{js},d_{js}\right), \end{aligned} \label{eq:post} \end{equation} where the first equality follows from our watermark sparsification function in~\eqref{eq:sparse}, and the quantity $\hat{F}_{js}(x'_{i},d_{i})$ is defined as \begin{equation} \hat{F}_{js}(x'_{i},d_{i})=P\left(y_{(j-1)s+d_{(j-1)s}}^{i-1+d_{i}},x'_{i},d_{i}|x'_{(j-1)s},d_{(j-1)s},\tilde{w}^{js}_{(j-1)s+1}\right), \quad (j-1)s+1 \leq i\leq js.
\end{equation} The quantity $\hat{F}_{js}(x'_{i},d_{i})$ can be calculated recursively as \begin{equation} \begin{aligned} \hat{F}_{js}(x'_{i},d_{i})= \sum_{\substack{x'_{i-1},\\d_{i-1}}}\hat{F}_{js}(x'_{i-1},d_{i-1})P\left(y_{i-1+d_{i-1}}^{i-1+d_{i}},x'_{i},d_{i}|x'_{i-1},d_{i-1},\tilde{w}^{js}_{(j-1)s+1}\right), \end{aligned} \label{eq:forward2} \end{equation} where $P\left(y_{i-1+d_{i-1}}^{i-1+d_{i}},x'_{i},d_{i}|x'_{i-1},d_{i-1},\tilde{w}^{js}_{(j-1)s+1}\right)$ is given by \begin{equation} \begin{aligned} &P\left(y_{i-1+d_{i-1}}^{i-1+d_{i}},x'_{i},d_{i}|x'_{i-1},d_{i-1},\tilde{w}^{js}_{(j-1)s+1}\right)=\\ &\begin{cases} P_d(1-P_I) \quad \text{if } d_{i}=d_{i-1}-1, x'_{i}=\tilde{w}_{i}\oplus k_{i}\oplus x'_{i-1} \text{ and } y_{i-1+d_{i-1}}^{i-1+d_i}= \emptyset,\\ P_s (1-P_I)\left(P_dP_I^{d_i-d_{i-1}+1}+(1-P_d)P_I^{d_i-d_{i-1}}\right) \quad \text{if } d_{i}\geq d_{i-1}, x'_{i}=\tilde{w}_{i}\oplus k_{i} \text{ and } y_{i-1+d_{i-1}}= x'_{i-1}\oplus 1,\\ (1-P_s)(1-P_I) \left(P_dP_I^{d_i-d_{i-1}+1}+(1-P_d)P_I^{d_i-d_{i-1}}\right) \quad \text{if } d_{i}\geq d_{i-1}, x'_{i}=\tilde{w}_{i}\oplus k_{i} \text{ and } y_{i-1+d_{i-1}}= x'_{i-1}. \end{cases} \end{aligned} \label{eq:sum_2} \end{equation} Once the posterior probabilities of all watermark bits are calculated, the watermark sequence $\hat{\mathbf{w}}^n$ can be estimated using the maximum likelihood rule of~\eqref{eq:ml}. Finally, the presence of the watermark in a flow is decided based on the correlation value between the estimated watermark, $\hat{\mathbf{w}}^n$, and the original watermark sequence, $\mathbf{w}^n$.
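For a finite state space, the forward and backward recursions above reduce to the standard matrix form below. This is a minimal sketch in which the transition matrix, emission table, and observations are toy placeholders rather than the exact channel probabilities of~\eqref{eq:transition_prob}:

```python
import numpy as np

def forward_backward(T, E, obs, pi):
    """Generic forward-backward pass over a finite-state HMM.

    T[s, t] -- transition probability P(state t | state s)
    E[s, o] -- emission probability P(observation o | state s)
    obs     -- observed symbol indices
    pi[s]   -- initial state distribution
    Returns the posterior P(state_i | all observations) for each step i.
    """
    S, N = T.shape[0], len(obs)
    F = np.zeros((N, S))                # forward quantities, cf. Eq. (forward)
    B = np.zeros((N, S))                # backward quantities, cf. Eq. (backward)
    F[0] = pi * E[:, obs[0]]
    for i in range(1, N):               # forward recursion
        F[i] = (F[i - 1] @ T) * E[:, obs[i]]
    B[-1] = 1.0
    for i in range(N - 2, -1, -1):      # backward recursion
        B[i] = T @ (E[:, obs[i + 1]] * B[i + 1])
    post = F * B                        # joint probability, then normalize
    return post / post.sum(axis=1, keepdims=True)
```

In our decoder the states are the pairs $(x'_i, d_i)$ and the emissions are the decoded bit segments, but the recursion structure is identical.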
\begin{table}[t] \centering \caption{True positive rates with varying watermark parameters when the false positive rate is fixed below 1\%.} \begin{tabular}{|l|c|c|c|} \hline \backslashbox{\quad n}{$\Delta$ (ms)}&20&60&100\\ \hline 10 & 0.0310& 0.6050 &0.6224 \\ 30 & 0.0310& 0.9790 &0.9970 \\ 50 & 0.0272& 0.9990 & 1 \\ \hline \end{tabular} \label{tab:partesttp} \end{table} \begin{table}[t] \centering \caption{True positive rates under varying IPD jitter with false positive rates below 1\%.} \begin{tabular}{|l|c|c|c|c|} \hline {Jitter Std. Dev. (ms)}&10&20&30&40\\ \hline Synthetic traffic &1.000&0.989&0.770&0.232\\ Real traffic &1.000&0.989&0.652&0.193\\ \hline \end{tabular} \label{tab:jitter} \end{table} \begin{table}[t] \centering \caption{True positive rates for varying $P_{d}$ with false positive rates below 1\%.} \begin{tabular}{|l|c|c|c|c|} \hline {$P_d$}&1\%&2\%&3\%&10\%\\ \hline Synthetic traffic &1.000&1.000&1.000&0.995\\ Real traffic & 1.000&1.000&1.000&0.996\\\hline \end{tabular} \label{tab:deletion} \end{table} \begin{table}[t] \centering \caption{True positive rates for varying $P_{I}$ with false positive rates below 1\%.} \begin{tabular}{|l|c|c|c|c|} \hline {$P_{I}$}&1\%&5\%&10\%&20\%\\ \hline Synthetic traffic &1.000&1.000&1.000&0.500\\ Real traffic &1.000&1.000&1.000&0.568\\ \hline \end{tabular} \label{tab:insertion} \end{table} \begin{table}[!htbp] \caption{True positive rates for varying $P_{I}$ and $P_{d}$ with false positive rates below 1\%.} \centering \begin{tabular}{|l|c|c|c|} \hline {$P_{I}$, $P_{d}$}&1\%,1\%&5\%,5\%&10\%,10\%\\ \hline Synthetic traffic&1.000&1.000&0.764\\ Real traffic&1.000&1.000&0.662\\ \hline \end{tabular} \label{tab:insertiondeletion} \end{table} \section{Evaluation} \label{sec:eval} We tested our scheme on two groups of traces: {\em synthetic} packet flows of length 2000 generated from a Poisson process with an average rate of $3.3$~packets per second (pps), and real {\em SSH} traces of length 2000 collected from the CAIDA database with
an average rate of $0.865$~pps~\cite{CAIDA}, which represent typical traffic in human-involved network connections, where flow watermarks are most applicable. \subsection{Parameter Selection} The first test examined the effects of the watermark length $n$ and the IPD quantization step size $\Delta$. We varied $n$ over $\{10,30,50\}$, $\Delta$ over $\{20, 60,100\}$\,ms, and fixed the sparsification factor $s=10$. The deletion and insertion probabilities and the network jitter were set to $P_d=0.1$, $P_I=0$, and $\sigma=10$\,ms, respectively. We embedded watermarks into 5000 synthetic flows, and another 5000 unwatermarked flows served as the control group. Table~\ref{tab:partesttp} shows the true positive rates of our test, with false positive rates kept under 1\%. As we increase the watermark length or the quantization step size (i.e., embed a `stronger' pattern), the detection error decreases. {For the remaining tests in this section, we fix the watermark parameters to $\{\Delta=100\,\text{ms}, n=50, s=10\}$, which had the best performance in Table~\ref{tab:partesttp}}. \begin{table}[t] \centering \caption{Average KS distances between watermarked and unwatermarked synthetic traces.} \begin{tabular}{|c|c|c|c|} \hline \backslashbox{\quad n}{$\Delta$ (ms)} &100&80&60\\ \hline 30 &0.0177 &0.0138 &0.0101 \\ \hline 40 &0.0233 &0.0181 &0.0133 \\ \hline 50 &0.0284 &0.0223 &0.0160 \\ \hline \end{tabular} \label{tab:KStest} \end{table} \begin{table}[!htbp] \centering \caption{Average KS distances between watermarked and unwatermarked SSH traces.} \begin{tabular}{|c|c|c|c|} \hline \backslashbox{\quad n}{$\Delta$ (ms)} &100&80&60\\ \hline 30 &0.0091 &0.0081 &0.0071 \\ \hline 40 &0.0120 &0.0111 &0.0091 \\ \hline 50 &0.0158 &0.0139 &0.0123 \\ \hline \end{tabular} \label{tab:KStestssh} \end{table} \subsection{Robustness Tests} We evaluated watermark robustness against network jitter, packet loss, and packet insertion.
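For reference, quantization-index-modulation (QIM) embedding of a bit into an IPD with step size $\Delta$ can be sketched as below. This is a generic dithered-quantizer illustration, not necessarily the exact rounding convention of our embedder:

```python
def qim_embed(ipd, bit, delta):
    """Move the IPD to the nearest point of the lattice assigned to `bit`:
    multiples of delta for bit 0, multiples shifted by delta/2 for bit 1."""
    offset = bit * delta / 2.0
    return delta * round((ipd - offset) / delta) + offset

def qim_decode(ipd, delta):
    """Decode by choosing the bit whose lattice lies closest to the
    (possibly jittered) received IPD."""
    d0 = abs(ipd - qim_embed(ipd, 0, delta))
    d1 = abs(ipd - qim_embed(ipd, 1, delta))
    return 0 if d0 <= d1 else 1
```

Jitter smaller than $\Delta/4$ leaves the embedded bit recoverable, which is consistent with larger $\Delta$ improving detection in Table~\ref{tab:partesttp} at the cost of larger timing perturbations.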
\subsubsection{IPD jitter} \label{sec:jitter_tests} Based on the experimental results in~\cite{Swirl}, the standard deviation of the Laplacian jitter is estimated as $\sigma=10$\,ms. We performed tests with $\sigma$ varied over $\{10, 20, 30, 40\}$\,ms. The packet deletion and insertion probabilities were $P_d=0.1$ and $P_I=0$, respectively. This time, we watermarked 1000 flows from both the synthetic and SSH traces. The true positive rates are given in Table~\ref{tab:jitter}. Notice that the watermarks were detected with accuracies over $98\%$, even when the jitter was as high as $20$\,ms. The detection performance falls sharply as the jitter standard deviation approaches $40$\,ms. However, such excessively large jitter rarely occurs under normal network conditions. Hence, our scheme withstands network jitter in normal operating conditions. \subsubsection{Packet deletion and insertion} One major improvement of our design over previous work is robustness against packet deletion and insertion. To verify this, we tested our scheme in a network with: only packet deletion with probabilities $P_d=\{0.01,0.02,0.03,0.1\}$, only packet insertion with probabilities $P_I=\{0.01,0.05,0.1,0.2\}$, and both deletion and insertion with probabilities $(P_d,P_I)=\{(0.01,0.01),(0.05,0.05),(0.1,0.1),(0.15,0.15)\}$. In all the tests, the standard deviation of the jitter was fixed at $\sigma=10$\,ms, and 1000 flows from both the synthetic and SSH traces were used. The results in Tables~\ref{tab:deletion}--\ref{tab:insertiondeletion} demonstrate that the watermarks were detected with high accuracy even when 5\% of packets were dropped and another 5\% were inserted.
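The deletion/insertion/jitter channel used in these tests can be simulated along the following lines. The uniform split point for an inserted packet and the Gaussian approximation of the Laplacian jitter are simplifying assumptions of this sketch:

```python
import random

def ids_channel(ipds, p_del, p_ins, sigma, seed=0):
    """Pass IPDs through an insertion/deletion/jitter channel: a packet is
    dropped with probability p_del (its IPD merges into the next one), an
    IPD is split in two by an inserted packet with probability p_ins, and
    every emitted delay is perturbed by zero-mean noise of std dev sigma."""
    rng = random.Random(seed)
    out, carry = [], 0.0
    for ipd in ipds:
        if rng.random() < p_del:          # deletion: merge into the next IPD
            carry += ipd
            continue
        d = carry + ipd + rng.gauss(0.0, sigma)
        carry = 0.0
        if rng.random() < p_ins:          # insertion: split this IPD in two
            cut = rng.random() * d
            out.extend([cut, d - cut])
        else:
            out.append(d)
    return out
```

Deletions lengthen the merged IPDs and insertions split them, which is exactly the drift behavior the HMM states $(x'_i, d_i)$ track.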
\begin{table}[t] \centering \caption{Statistics of blank intervals in the aggregated flow from synthetic traces.} \begin{tabular}{|c|c|c|} \hline &Watermarked&Unwatermarked\\ \hline Mean&24.07&25.96\\ \hline Standard Deviation&5.246&5.187\\ \hline \end{tabular} \label{tab:MultiFlow} \end{table} \begin{table}[t] \centering \caption{Statistics of blank intervals in the aggregated flow from SSH traces.} \begin{tabular}{|c|c|c|} \hline &Watermarked&Unwatermarked\\ \hline Mean&403&395.67\\ \hline Standard Deviation&20.54&16.84\\ \hline \end{tabular} \label{tab:MFAssh} \end{table} \begin{figure}[t] \centering \subfigure[Unwatermarked flows]{ \includegraphics[width=0.48\columnwidth]{Graphics/Unmarked} \label{fig:unmarked} }\subfigure[Watermarked flows]{ \includegraphics[width=0.48\columnwidth]{Graphics/marked} \label{fig:marked} } \caption{MFA tests on synthetic traces: packet counts in intervals of 70\,ms in the flow aggregated from (a) 10 unwatermarked flows and (b) 10 watermarked flows.} \label{fig:MultiFlow} \end{figure} \begin{figure}[t] \centering \subfigure[Unwatermarked flows]{ \includegraphics[width=0.48\columnwidth]{Graphics/fmfa_4} \label{fig:unmarkedssh} }\subfigure[Watermarked flows]{ \includegraphics[width=0.48\columnwidth]{Graphics/fmfa_3} \label{fig:markedssh} } \caption{MFA tests on real SSH traces: packet counts in intervals of 70\,ms in the flow aggregated from (a) 10 unwatermarked flows and (b) 10 watermarked flows.} \label{fig:MFAssh} \end{figure} \subsection{Visibility Tests} \label{sec:visibility} We first evaluated watermark invisibility with two Level-I attack tests: the {Kolmogorov-Smirnov (KS) test} and the {multiflow attack (MFA) test}. The KS test is commonly used to compare the distributions of two datasets. Given two datasets, the KS distance is computed as the maximum difference between their empirical distribution functions~\cite{Massey51}.
For two flows $A$ and $B$, the KS distance is given by $\underset{x}{\sup}(|F_{A}(x)-F_{B}(x)|)$, where $F_A(x)$ and $F_B(x)$ are the empirical CDFs of the IPDs in $A$ and $B$. We claim two flows are indistinguishable if their KS distance is below 0.036, a threshold suggested in~\cite{Massey51}. We calculated the average KS distance between watermarked and unwatermarked flows using both the synthetic and SSH traces. The results are tabulated in Tables~\ref{tab:KStest} and~\ref{tab:KStestssh}. None of the KS distances exceed the visibility detection threshold, which implies that the embedded watermarks did not cause noticeable artifacts in the original packet flows. MFA is a watermark attack that detects the positions of embedded watermarks in interval-based schemes. When flows watermarked with the same watermark are aggregated, the aggregated flow shows a number of intervals containing no packets (see Figure~10 in~\cite{Kiyavash08}). To test whether such a `visible' pattern exists in flows watermarked by our scheme, we combined 10 watermarked and 10 unwatermarked flows for both the synthetic and SSH traces, and divided the aggregated flows into intervals of length 70\,ms. We then counted the number of blank intervals with no packets in each aggregated flow. This procedure was repeated 1000 times, and the resulting blank interval statistics are shown in Tables~\ref{tab:MultiFlow} and~\ref{tab:MFAssh}. For both the synthetic and SSH traces, we see that the number of blank intervals does not change much after the watermarks were embedded. Figure~\ref{fig:MultiFlow} depicts the packet counts in each interval of length 70~ms in the aggregated synthetic traces. Comparing Figure~\ref{fig:unmarked} with Figure~\ref{fig:marked}, no clear watermark pattern is observed. The same observation holds for Figure~\ref{fig:MFAssh}, which depicts the packet counts of the SSH traces. Therefore, our scheme is resistant to MFA.
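The two-sample KS statistic used here is straightforward to compute from the IPD samples (scipy's `ks_2samp` returns the same statistic); a pure-Python sketch:

```python
import bisect

def ks_distance(a, b):
    """Two-sample KS distance: the largest gap between the empirical CDFs
    F_A and F_B of the two IPD samples."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for x in sa + sb:                 # the supremum is attained at a sample point
        fa = bisect.bisect_right(sa, x) / len(sa)
        fb = bisect.bisect_right(sb, x) / len(sb)
        d = max(d, abs(fa - fb))
    return d
```

Two flows are then declared indistinguishable whenever this distance stays below the 0.036 threshold.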
We next tested the performance of our watermarks under a Level-II attack, BACKLIT, where the attacker sees both directions of a TCP connection~\cite{Luo}. BACKLIT detects watermarks in SSH flows based on the differences in round trip times (RTTs) of consecutive TCP requests, $\Delta RTT$. We considered a stepping-stone detection scenario in our campus network. Network jitter in such an environment, like most enterprise networks, is very small. According to our measurements from our lab machine to the campus exit node, the jitter standard deviation was as low as $\sigma=1.6$\,ms. For this level of noise, a small IPD quantization step size of 10\,ms was sufficient to achieve accurate decoding performance (a true positive rate of $100$\% and a false positive rate of less than 1\%). We then examined the effect of our watermarks on the $\Delta RTT$ distribution of an SSH connection by monitoring the RTT jitter from our lab to 5 PlanetLab nodes~\cite{Planetlab}. For each node, we issued 4000 ping requests with a ping interval of 100\,ms, and divided the ping packets into two windows, each consisting of 2000 packets. We transplanted the delays added to SSH packets during watermarking onto the ping replies in Window-1, to mimic the effects of watermarking live TCP requests, and left Window-2 untouched as the control group.
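The $\Delta RTT$ statistic BACKLIT examines is simply the sequence of successive RTT differences, whose empirical CDF we compare across windows. A minimal sketch of that computation:

```python
def delta_rtt_samples(rtts):
    """Successive RTT differences, the statistic a BACKLIT-style detector examines."""
    return [b - a for a, b in zip(rtts, rtts[1:])]

def empirical_cdf(samples):
    """Sorted (value, cumulative fraction) pairs for plotting the empirical CDF."""
    s = sorted(samples)
    n = len(s)
    return [(v, (i + 1) / n) for i, v in enumerate(s)]
```

A watermark is visible to this attack only if it shifts the $\Delta RTT$ CDF of the watermarked window noticeably away from the control window.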
\begin{figure}[t] \centering \subfigure[PlanetLab node: 13.7.64.22]{ \includegraphics[width=0.32\columnwidth]{Graphics/rtt1.eps} \label{fig:host1} }\subfigure[PlanetLab node: 130.65.6.225]{ \includegraphics[width=0.32\columnwidth]{Graphics/rtt2.eps} \label{fig:host2} }\subfigure[PlanetLab node: 128.138.207.54]{ \includegraphics[width=0.32\columnwidth]{Graphics/rtt3.eps} \label{fig:host3} } \subfigure[PlanetLab node: 192.12.33.100]{ \includegraphics[width=0.32\columnwidth]{Graphics/rtt4.eps} \label{fig:host4} }\subfigure[PlanetLab node: 206.117.37.6]{ \includegraphics[width=0.32\columnwidth]{Graphics/rtt5.eps} \label{fig:host5} } \caption{Comparisons of the $\Delta RTT$ distributions before and after watermarking.} \label{fig:delta_rtt} \end{figure} Figure~\ref{fig:delta_rtt} depicts the CDFs of the $\Delta RTT$ of Window-1, Window-1 with watermarks, and Window-2. We notice that, except in Figure~\ref{fig:host1}, where the RTT jitter to the destination node was extremely low, the watermarked flow is not distinguishable from the unwatermarked flow. Our results indicate that BACKLIT only works in ``clean'' environments with negligible jitter, and the subtle watermark we inject remains invisible when moderate jitter exists. To achieve simultaneous watermark robustness and invisibility, we embed a sparse watermark into the flow IPDs using QIM embedding. By modeling the network jitter, packet deletions, and insertions as a communication channel described by an HMM, and employing an IDS decoder, we can reliably decode the watermark. Meanwhile, the QIM embedding guarantees that the watermark remains invisible to attackers. \bibliographystyle{IEEEtran}
\section{Introduction} The prosperity of Esports has catalyzed significant research interest among academic~\cite{yu2018fine, xugccegame, tanaka2021lol} and industrial researchers~\cite{johnson2019impacts, whyesports} in Esports game commentary, since the game commentator can entertain the audience by interactively informing them of game-related information during live-streaming. Moreover, the task of collaborative commentary generation aims to generate an expected follow-up commentary, based on the commentary given by the human commentator, in order to collaborate with him or her interactively. Although the video captions of Esports games collected from public video websites are already processed and punctuated and could be used as game commentary, we notice that the punctuated text sequences are often incomplete commentaries (see Case \textit{Solo} in Table \ref{tab:example}; the game scene screenshots were captured from Esports game videos on \textit{Youtube}\footnote{https://www.youtube.com/watch?v=pJRjqqajKwU}\footnote{https://www.youtube.com/watch?v=buNTNOgE2Bs}\footnote{https://www.youtube.com/watch?v=q7NvQUzE0FA}). Thus, we assume that using the originally punctuated sentences will cause the problem of generating incomplete commentary, resulting in the AI game commentator outputting incomplete commentary and failing to collaborate with the human commentator. To this end, further sentence punctuation is needed to leverage such data effectively. In this paper, we present two strategies of sentence punctuation for game commentary.
Objective evaluations from automatic metrics and subjective analyses on generated commentaries among three strategies shown our strategy of punctuating sentences by two text sequences outperformed the baseline. \begin{table*}[t] \tiny \caption{Examples of punctuating commentary text using 3 sentence punctuation strategies respectively, temporary commentaries derived from \textit{Youtube} given from top to bottom in row Text. \faThumbsOUp\ indicates such sentence is a complete commentary and \faThumbsODown\ vice versa.} \label{tab:example} \hbox to\hsize{\hfil \begin{tabular}{|p{0.5cm} |p{4.3cm} p{0.2cm} |p{4.3cm} p{0.2cm}| p{4.3cm} p{0.2cm}|} \hline Scene & \multicolumn{1}{c}{ \begin{minipage}{4.5cm} \includegraphics[width=\linewidth]{scene_1.png} \end{minipage} } & & \multicolumn{1}{c}{ \begin{minipage}{4.5cm} \includegraphics[width=\linewidth]{scene_2.png} \end{minipage} } & & \multicolumn{1}{c}{ \begin{minipage}{4.5cm} \includegraphics[width=\linewidth]{scene_3.png} \end{minipage} } & \\ \hline Case & \multicolumn{2}{c|}{\textbf{Solo}, originally punctuated by \textit{Youtube}} & \multicolumn{2}{c|}{\textbf{Duo}, punctuated by every 2 sentences in \textbf{Solo}} &\multicolumn{2}{c|}{\textbf{Tri}, punctuated by every 3 sentences in \textbf{Solo}} \\ \hline Text & \multicolumn{1}{r}{\textless start\textgreater really frightens me but at the same time \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{\textless start\textgreater their strengths of being able to burst} & & \multicolumn{1}{r}{\textless start\textgreater many differences down and they're just}& \\ & \multicolumn{1}{r}{\textless start\textgreater I respect that you have just faker \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} & \multicolumn{1}{r}{down the bear \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} & \multicolumn{1}{r}{gonna file straight up towards those} & \\ & \multicolumn{1}{r}{\textless start\textgreater nightmares you 
wake up in a cold sweat \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{\textless start\textgreater and they're hoping to get rocks to pull} & & \multicolumn{1}{r}{super minions in the base you can see is \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp}\\ & \multicolumn{1}{r}{\textless start\textgreater you're like fakers behind me I know \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} & \multicolumn{1}{r}{up gets rooted teleports coming in \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} & \multicolumn{1}{r}{\textless start\textgreater going in that pack it's definitely}& \\ & \multicolumn{1}{r}{\textless start\textgreater right like even though I only have a mat \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{\textless start\textgreater curtain calling wish have already been} & & \multicolumn{1}{r}{awkward the Nexus Kevin is gonna be out} & \\ & \multicolumn{1}{r}{\textless start\textgreater on the floor I think is in the bed \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{used a trick is running away \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} &\multicolumn{1}{r}{in the bank \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown}\\ \hline \end{tabular}\hfil} \end{table*} \section{Method} \noindent \textbf{Sentence Punctuation for Collaborative Commentary}: According to the finding mentioned in the introduction, the generated commentary should be a complete commentary sentence rather than a sliced snippet. As shown in Table \ref{tab:example}, in this paper we compared three strategies: the baseline Case \textit{Solo}, Case \textit{Duo}, and Case \textit{Tri}, which punctuate sentences by 1, 2, and 3 text sequence(s) originally punctuated by \textit{Youtube}, respectively.
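The three strategies amount to re-grouping the raw caption segments before adding the sentence delimiters. A minimal sketch (the \textless start\textgreater/\textless end\textgreater\ markers follow Table~\ref{tab:example}):

```python
def punctuate(segments, k):
    """Case Solo (k=1), Duo (k=2), Tri (k=3): merge every k consecutive
    YouTube caption segments into one delimited commentary sentence."""
    merged = [" ".join(segments[i:i + k]) for i in range(0, len(segments), k)]
    return ["<start> " + m + " <end>" for m in merged]
```

For example, `punctuate(captions, 2)` produces the Case \textit{Duo} training sequences from the originally punctuated captions.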
\noindent \textbf{Pre-trained Generative Language Models:} We used and fine-tuned a state-of-the-art generative pre-trained model, the Text-to-Text Transfer Transformer (\textit{T5}), for collaborative game commentary generation. \section{Experiments} \subsection{Experimental Settings}\label{AA} We collected text sequences from 100 videos of one of the most watched Esports games~\cite{newzoo}, \textit{League of Legends}, from \textit{Youtube}, and further processed them into Case \textit{Solo}, \textit{Duo}, and \textit{Tri}, respectively. Text sequences from 90 videos were used for fine-tuning, and the rest were used for testing. In the experiments, we used the small (T5-small) and base (T5-base) versions of the \textit{T5} model and compared the three strategies of sentence punctuation for Esports collaborative commentary generation. \subsection{Evaluation Results} \noindent \textbf{Objective Results Given by Automatic Metrics:} We evaluated the generated commentary using BLEU~\cite{bleu}, ROUGE-1, 2, L~\cite{rouge}, and METEOR~\cite{meteor}.
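In practice we use the official implementations of these metrics; as an illustration of what the recall-oriented ROUGE-1 score measures, a simplified pure-Python sketch:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Fraction of reference unigrams recovered by the generated commentary,
    with clipped counts (a simplified sketch of ROUGE-1 recall)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / max(sum(ref.values()), 1)
```

A higher value means the generated commentary recovers more of the wording of the reference commentary, which is the behavior fine-tuning is expected to improve.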
As expected, the ROUGE-1, 2, and L scores increased after fine-tuning for all three sentence punctuation strategies, which implies that fine-tuning increases the recall of reference-commentary words in the generated commentary; in Case \textit{Solo}, the METEOR score, which measures the correlation of wording and phrasing between the generated and reference text, also increased. The highest score of each metric within each strategy is marked in bold. \noindent \textbf{Subjective Analyses on Generated Commentaries:} We compared the commentaries generated by the fine-tuned T5-base models. We observed that (1) in all cases, especially Case \textit{Solo}, the models generated sliced game-related key phrases and incomplete, semantically ambiguous sentences. (2) The Case \textit{Duo} and Case \textit{Tri} models could generate entire game-related key phrases, such as scene narratives with the correct names of characters and moves. (3) However, the Case \textit{Tri} model generated more than one commentary (we expect only one generated commentary per commentary given by the human commentator). \begin{table}[t] \tiny \centering \caption{Results of conducted comparative experiments.
The evaluation was given by automatic metrics on the generated collaborative commentary used in this work.} \label{tab:result} \begin{tabular}{|l|r|r|r|r|r|} \hline Model & BLEU & ROUGE-1 & ROUGE-2 & ROUGE-L & METEOR \\ \hline Case \textit{Solo}& \multicolumn{5}{|r|}{}\\ \hline T5-small-solo & 2.88 & 0.17 & 0.01 & 0.17 & 6.44 \\ T5-base-solo & 2.88 & 0.52 & 0.07 & 0.52 & 19.17 \\ Fine-tuned-T5-small-solo & 2.92 & 3.15 & \textbf{0.71} & 3.15 & 7.65 \\ Fine-tuned-T5-base-solo & \textbf{2.97} & \textbf{3.91} & 0.48 & \textbf{3.91} & \textbf{22.76} \\ \hline Case \textit{Duo} & \multicolumn{5}{|r|}{} \\ \hline T5-small-duo & \textbf{1.13} & 0.82 & 0.01 & 0.82 & 12.24 \\ T5-base-duo & 1.01 & 1.11 & 0.01 & 1.11 & 12.02 \\ Fine-tuned-T5-small-duo & 1.04 & 11.26 & 6.08 & 11.24 & \textbf{13.18} \\ Fine-tuned-T5-base-duo & 1.05 & \textbf{16.60} & \textbf{8.99} & \textbf{16.52} & 11.76 \\ \hline Case \textit{Tri}& \multicolumn{5}{|r|}{}\\ \hline T5-small-tri & \textbf{1.06} & 1.12 & 0.03 & 1.11 & 12.26 \\ T5-base-tri & 1.01 & 0.94 & 0.02 & 0.93 & \textbf{12.87} \\ Fine-tuned-T5-small-tri & 1.03 & 9.32 & 4.61 & 9.30 & 12.84 \\ Fine-tuned-T5-base-tri & 1.04 & \textbf{11.90} & \textbf{5.91} & \textbf{11.89} & 12.24 \\ \hline \end{tabular} \end{table} \section{Conclusion} To address the sentence punctuation problem for collaborative commentary generation in Esports live-streaming, this paper presented two strategies for punctuating the text sequences of game commentary. We conducted comparative experiments between the baseline and the two strategies on state-of-the-art pre-trained language models for Esports collaborative commentary generation. From the experimental results, we found that (1) fine-tuning pre-trained language models improves the recall of words from the reference commentary. (2) Objective evaluations from automatic metrics and subjective analyses of the generated commentaries among the three strategies showed that Case \textit{Duo}, the strategy of punctuating sentences by two text sequences, outperformed the baseline.
(3) From the subjective analyses on Case \textit{Tri} we found that inputting lengthy text sequences results in the AI commentator output more than one commentary that confuses the human commentator. Our future work will develop a more advanced sentence punctuation strategy study using human evaluation for collaborative commentary generation in Esports live-streaming. \bibliographystyle{IEEEtran} \section{Introduction} The prosperity of Esport catalyzes a significant research interest for academics~\cite{yu2018fine, xugccegame, tanaka2021lol} and industrial researchers~\cite{johnson2019impacts, whyesports} to pay attention to research topics of Esport’s game commentary since the game commentator could entertain the audiences by interactively informing them of game-related information during live-streaming. Moreover, the task of collaborative commentary generation aims to generate an expected follow-up commentary based on commentary given by the human commentator to collaborate with him or her interactively. Although the video caption of Esport games which were collected from the public video website that could be used for game commentary, is processed and punctuated, we notice the punctuated text sequences are usually incomplete game commentary (see Case \textit{Solo} in Table \ref{tab:example}, game scene screenshots were captured from Esports game videos derived from \textit{Youtube}\footnote{https://www.youtube.com/watch?v=pJRjqqajKwU}\footnote{https://www.youtube.com/watch?v=buNTNOgE2Bs}\footnote{https://www.youtube.com/watch?v=q7NvQUzE0FA}). Thus, we assume the use of original punctuated sentences will cause the problem of generating incomplete commentary, resulting in the game AI commentator output incomplete game commentary, fails to collaborate with the human commentator. To this end, further punctuating sentences is needed to leverage such data effectively. In this paper, we present two strategies for sentence punctuation for game commentary. 
We compared our two strategies with the baseline by employing and fine-tuning a state-of-the-art generative language model, the Text-to-Text Transfer Transformer (\textit{T5})~\cite{t5}, to generate collaborative commentary. Objective evaluations from automatic metrics and subjective analyses of the generated commentaries among the three strategies showed that our strategy of punctuating sentences by two text sequences outperformed the baseline. \begin{table*}[t] \tiny \caption{Examples of punctuating commentary text using the 3 sentence punctuation strategies; temporary commentaries derived from \textit{Youtube} are given from top to bottom in the row Text. \faThumbsOUp\ indicates that the sentence is a complete commentary, \faThumbsODown\ that it is not.} \label{tab:example} \hbox to\hsize{\hfil \begin{tabular}{|p{0.5cm} |p{4.3cm} p{0.2cm} |p{4.3cm} p{0.2cm}| p{4.3cm} p{0.2cm}|} \hline Scene & \multicolumn{1}{c}{ \begin{minipage}{4.5cm} \includegraphics[width=\linewidth]{scene_1.png} \end{minipage} } & & \multicolumn{1}{c}{ \begin{minipage}{4.5cm} \includegraphics[width=\linewidth]{scene_2.png} \end{minipage} } & & \multicolumn{1}{c}{ \begin{minipage}{4.5cm} \includegraphics[width=\linewidth]{scene_3.png} \end{minipage} } & \\ \hline Case & \multicolumn{2}{c|}{\textbf{Solo}, originally punctuated by \textit{Youtube}} & \multicolumn{2}{c|}{\textbf{Duo}, punctuated by every 2 sentences in \textbf{Solo}} &\multicolumn{2}{c|}{\textbf{Tri}, punctuated by every 3 sentences in \textbf{Solo}} \\ \hline Text & \multicolumn{1}{r}{\textless start\textgreater really frightens me but at the same time \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{\textless start\textgreater their strengths of being able to burst} & & \multicolumn{1}{r}{\textless start\textgreater many differences down and they're just}& \\ & \multicolumn{1}{r}{\textless start\textgreater I respect that you have just faker \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp}
& \multicolumn{1}{r}{down the bear \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} & \multicolumn{1}{r}{gonna file straight up towards those} & \\ & \multicolumn{1}{r}{\textless start\textgreater nightmares you wake up in a cold sweat \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{\textless start\textgreater and they're hoping to get rocks to pull} & & \multicolumn{1}{r}{super minions in the base you can see is \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp}\\ & \multicolumn{1}{r}{\textless start\textgreater you're like fakers behind me I know \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} & \multicolumn{1}{r}{up gets rooted teleports coming in \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} & \multicolumn{1}{r}{\textless start\textgreater going in that pack it's definitely}& \\ & \multicolumn{1}{r}{\textless start\textgreater right like even though I only have a mat \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{\textless start\textgreater curtain calling wish have already been} & & \multicolumn{1}{r}{awkward the Nexus Kevin is gonna be out} & \\ & \multicolumn{1}{r}{\textless start\textgreater on the floor I think is in the bed \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown} & \multicolumn{1}{r}{used a trick is running away \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsOUp} &\multicolumn{1}{r}{in the bank \textless end\textgreater} & \multicolumn{1}{r|}{\faThumbsODown}\\ \hline \end{tabular}\hfil} \end{table*} \section{Method} \noindent \textbf{Sentence Punctuation for Collaborative Commentary}: According to the finding mentioned in the introduction, the generated commentary should be a complete sentence of commentary rather than a sliced snippet.
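Concretely, a strategy that merges every $k$ consecutive \textit{Youtube}-punctuated segments can be sketched as follows (Case \textit{Solo}: $k=1$, \textit{Duo}: $k=2$, \textit{Tri}: $k=3$); the function name and example segments are illustrative, not taken from the paper's codebase:

```python
# Illustrative sketch of the sentence punctuation strategies: merge every
# k consecutive Youtube-punctuated caption segments into one sentence.
# (Case Solo: k=1 keeps the original segmentation; Duo: k=2; Tri: k=3.)

def punctuate(segments, k):
    """Merge every k consecutive caption segments into one text sequence."""
    merged = []
    for i in range(0, len(segments), k):
        merged.append(" ".join(segments[i:i + k]))
    return merged

# Example segments in the style of Table "tab:example" (Case Duo column).
segments = ["their strengths of being able to burst",
            "down the bear",
            "and they're hoping to get rocks to pull",
            "up gets rooted teleports coming in"]

solo = punctuate(segments, 1)  # baseline: original Youtube segmentation
duo = punctuate(segments, 2)   # Case Duo: 2 segments per sentence
tri = punctuate(segments, 3)   # Case Tri: 3 segments per sentence
```

With $k=2$ the two fragments "their strengths of being able to burst" and "down the bear" are rejoined into one complete commentary sentence, which is the intended effect of Case \textit{Duo}.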
As shown in Table \ref{tab:result}, in this paper we compared three strategies: the baseline Case \textit{Solo}, Case \textit{Duo} and Case \textit{Tri}, which punctuate sentences by 1, 2 and 3 text sequence(s) originally punctuated by \textit{Youtube}, respectively. \noindent \textbf{Pre-trained Generative Language Models:} We used and fine-tuned a state-of-the-art generative pre-trained model, the Text-to-Text Transfer Transformer (\textit{T5}), for collaborative commentary generation. \section{Experiments} \subsection{Experimental Settings}\label{AA} We collected text sequences from 100 videos of one of the most watched Esports games~\cite{newzoo}, \textit{League of Legends}, from \textit{Youtube}, and further processed them into Cases \textit{Solo}, \textit{Duo} and \textit{Tri}, respectively. Text sequences from 90 videos were used for fine-tuning and the others were used for testing. In the experiments, we used the small (T5-small) and base (T5-base) versions of the \textit{T5} model and compared the three strategies of sentence punctuation for Esports collaborative commentary generation. \subsection{Evaluation Results} \iffalse
\fi \noindent \textbf{Objective Results Given by Automatic Metrics:} We evaluated the generated commentary using BLEU~\cite{bleu}, ROUGE-1, 2, L~\cite{rouge}, and METEOR~\cite{meteor}. As expected, the ROUGE-1, 2, L scores increased after fine-tuning for all three sentence punctuation strategies, which implies that fine-tuning increases the recall of words from the reference commentary in the generated commentary; in Case \textit{Solo} the METEOR score, which reflects the correlation in wording and phrasing between the generated and reference texts, also increased. We marked the highest score of each metric in bold for each of the three strategies. \noindent \textbf{Subjective Analyses on Generated Commentaries:} We compared the commentary generated by the fine-tuned T5-base models. We observed that (1) in all cases, especially Case \textit{Solo}, the models generated sliced key game-related phrases and incomplete, semantically ambiguous sentences; (2) the Case \textit{Duo} and Case \textit{Tri} models could generate complete key game-related phrases, such as narratives of game scenes with the correct names of characters and moves; (3) however, the Case \textit{Tri} model generates more than one commentary, whereas we expect only one generated commentary per commentary given by the human commentator. \begin{table}[t] \tiny \centering \caption{Results of the comparative experiments.
The evaluation was given by the automatic metrics used in this work on the generated collaborative commentary.} \label{tab:result} \begin{tabular}{|l|r|r|r|r|r|} \hline Model & BLEU & ROUGE-1 & ROUGE-2 & ROUGE-L & METEOR \\ \hline Case \textit{Solo}& \multicolumn{5}{|r|}{}\\ \hline T5-small-solo & 2.88 & 0.17 & 0.01 & 0.17 & 6.44 \\ T5-base-solo & 2.88 & 0.52 & 0.07 & 0.52 & 19.17 \\ Fine-tuned-T5-small-solo & 2.92 & 3.15 & \textbf{0.71} & 3.15 & 7.65 \\ Fine-tuned-T5-base-solo & \textbf{2.97} & \textbf{3.91} & 0.48 & \textbf{3.91} & \textbf{22.76} \\ \hline Case \textit{Duo} & \multicolumn{5}{|r|}{} \\ \hline T5-small-duo & \textbf{1.13} & 0.82 & 0.01 & 0.82 & 12.24 \\ T5-base-duo & 1.01 & 1.11 & 0.01 & 1.11 & 12.02 \\ Fine-tuned-T5-small-duo & 1.04 & 11.26 & 6.08 & 11.24 & \textbf{13.18} \\ Fine-tuned-T5-base-duo & 1.05 & \textbf{16.60} & \textbf{8.99} & \textbf{16.52} & 11.76 \\ \hline Case \textit{Tri}& \multicolumn{5}{|r|}{}\\ \hline T5-small-tri & \textbf{1.06} & 1.12 & 0.03 & 1.11 & 12.26 \\ T5-base-tri & 1.01 & 0.94 & 0.02 & 0.93 & \textbf{12.87} \\ Fine-tuned-T5-small-tri & 1.03 & 9.32 & 4.61 & 9.30 & 12.84 \\ Fine-tuned-T5-base-tri & 1.04 & \textbf{11.90} & \textbf{5.91} & \textbf{11.89} & 12.24 \\ \hline \end{tabular} \end{table} \section{Conclusion} To solve the sentence punctuation problem for collaborative commentary generation in Esports live-streaming, this paper presents two strategies for punctuating the text sequences of game commentary. We conducted comparative experiments with the baseline and the two strategies on state-of-the-art pre-trained language models for Esports collaborative commentary generation. From the experimental results, we found that (1) fine-tuning the pre-trained language models improves the recall of words from the reference commentary. (2) Objective evaluations from automatic metrics and subjective analyses of the generated commentaries among the three strategies showed that Case \textit{Duo}, the strategy of punctuating sentences by two text sequences, outperformed the baseline.
(3) From the subjective analyses on Case \textit{Tri}, we found that inputting lengthy text sequences results in the AI commentator outputting more than one commentary, which confuses the human commentator. Our future work will develop a more advanced sentence punctuation strategy and conduct a human evaluation study for collaborative commentary generation in Esports live-streaming. \bibliographystyle{IEEEtran}
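For reference, the flavor of the automatic evaluation can be illustrated with a self-contained unigram-overlap score in the spirit of ROUGE-1; the paper used standard implementations of BLEU, ROUGE and METEOR~\cite{bleu,rouge,meteor}, so this simplified recall/precision/F1 computation is only an illustrative sketch:

```python
# Simplified ROUGE-1-style unigram overlap between a generated commentary
# and a reference commentary. This is an illustrative re-implementation,
# not the exact scorer used in the experiments (no stemming, no stopwords).
from collections import Counter

def rouge1(generated, reference):
    gen, ref = Counter(generated.split()), Counter(reference.split())
    overlap = sum((gen & ref).values())           # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(gen.values()), 1)
    f1 = 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# Toy example (strings are illustrative, not from the dataset).
r, p, f = rouge1("faker is running away", "a trick is running away")
```

Here the three shared unigrams ("is", "running", "away") give a recall of 3/5 and a precision of 3/4, so a higher ROUGE-1 value indicates that more words of the reference commentary are recalled by the generated one.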
\section{Introduction} The properties of neutrinos that propagate through a medium have been the subject of great interest in the recent literature. This has been motivated by the attractive suggestion that the solar neutrino problem \cite{palrev} can be solved by the resonant oscillation mechanism \cite{MiSm86}, which hinges on the characteristics of the neutrino propagation in a background medium \cite{Wol78}. Inspired by its potentially important effects, the neutrino interactions in a material environment have also been studied in some detail \cite{NiPa89,SeSm89,OrSe87,OPSS87,DNP89,DNP90,GKL91,Saw92,GKLL92}. Of primary interest along these lines is the study of the electromagnetic interactions of neutrinos in a medium. A classic problem in this field is the decay of a plasmon into $\nu\bar\nu$ pairs \cite{ARW63}, which has received considerable attention recently \cite{DNP89,Saw92,BrSe93}. It has also been pointed out that the rate of decay of a massive neutrino into a lighter neutrino and a photon increases tremendously \cite{DNP90,GKL91} in matter, as a consequence of the fact that the GIM mechanism is not operative in a medium with electrons but no muons or taus. The study of the electromagnetic interactions of neutrinos in a medium, as well as the results and conclusions mentioned above, is based on the 1-loop calculation of the effective electromagnetic vertex of the neutrino, which has been performed to the leading order in the Fermi constant \cite{DNP89} using the methods of ``Quantum Statistical Field Theory''\footnote{More often, this is called the ``Finite Temperature Field Theory'', but the name is misleading because the methods are also applicable at zero temperature with a finite density of particles in the background.}. The implications of this calculation have been only partially explored. In this article, we show that the results of Ref. \cite{DNP89} imply that the neutrino acquires a small effective charge in a medium.
This point was realized by Oraevsky and Semikoz \cite{OrSe87} using methods of plasma physics before the calculation of the $\nu\nu\gamma$ vertex was performed. On the other hand, our method is entirely different, hopefully easier to follow for a particle physicist, and it brings out some interesting points which are not easy to see in the method used by Oraevsky and Semikoz \cite{OrSe87}. In addition, we also calculate the induced charge for neutrinos which are massive, distinguishing between the cases of Dirac and Majorana masses. The plan of the paper is as follows. In Section 2 we establish notation and the definition of the induced charge that is used in the rest of the paper, and show why the neutrino can acquire an induced charge in the medium but not in the vacuum. Then, using field-theoretic methods, we will show that the neutrino electromagnetic vertex is related to the photon self-energy in the medium and, in particular, the neutrino induced charge is related to the Debye screening length in a plasma. This relation is derived to 1-loop order in Section 3, and then in Section 4 we show that it is valid to all orders in $e$ (and to first order in the Fermi coupling). In Section 5 we use this relation to find an expression for the induced charge of the neutrino, and in Section 6 we show that the magnitude of the induced charge depends on whether the neutrinos are massless or massive and, in the latter case, on whether the mass is Dirac or Majorana type. Finally, using the results of Ref. \cite{DNP89}, we will estimate the induced charge in some particular backgrounds. \section{Electromagnetic vertex of the neutrino and the definition of the induced charge} We begin by understanding the reason why the neutrino can acquire an induced charge in the medium but not in the vacuum. 
The off-shell electromagnetic vertex function $\Gamma_\lambda$ is defined in such a way that, for on-shell neutrinos, the $\nu\nu\gamma$ amplitude is given by \begin{eqnarray} M = -i\overline u(k') \Gamma_\lambda u(k) A^\lambda(q)\,, \label{defGamma} \end{eqnarray} where \begin{eqnarray} q \equiv k - k' \end{eqnarray} is the momentum carried by the photon. In general, $\Gamma_\lambda$ depends on $k$ and $k'$ or, equivalently, on $k$ and $q$. For neutrinos in a medium, $\Gamma_\lambda$ depends also on the parameters characterizing the medium. For homogeneous and isotropic media, to which we will restrict ourselves, there is only one such parameter, viz. the velocity 4-vector of the background medium $v^\mu$. There are two important consequences of the fact that the external (neutrino) lines in the Feynman diagram for the $\nu\nu\gamma$ amplitude are neutral. Firstly, $\Gamma_\lambda$ satisfies \begin{eqnarray} \label{currconservation} q_\lambda \Gamma^\lambda = 0 \end{eqnarray} for all values of $q$. It is important to realize that for neutrinos Eq. (\ref{currconservation}) holds for arbitrary values of $q$, and not just when $k$ and $k'$ are on shell. If the fermion lines in the diagram corresponded to a charged particle (e.g., the electron), the analogous relation would contain, on the right-hand side, terms with the inverse propagators of the external fermions. In that case Eq. (\ref{currconservation}) does not hold for arbitrary values of $q$, but only when both $k$ and $k'$ are on shell. The other important consequence of the fact that the external lines are neutral is that $\Gamma_\lambda$ is well defined in the limit $q^\mu \to 0$. The reason is that the photon vertex must be connected to a pair of internal lines of the diagram. If one of these lines is assigned the loop momentum $p$ which is integrated over, the other line will carry a momentum $p \pm q$.
The propagator of this second line will involve the factor \begin{eqnarray} {1 \over (p \pm q)^2 - m^2} = {1 \over q^2 \pm 2 p\cdot q + (p^2 - m^2)} \,, \end{eqnarray} where $m$ is the mass of the internal line. However, since $p^2 \neq m^2$ for any internal line, no singularity is produced for $q^\mu \to 0$. {}From the two properties of $\Gamma_\lambda$ just discussed, we obtain \begin{eqnarray} \Gamma_\lambda(q^\mu = 0) = 0 \,, \end{eqnarray} which implies that the particle associated with the external line does not acquire a charge in any order of perturbation theory. To see this explicitly, we will consider the matrix element of the charge operator between two neutrino states with momenta: \begin{eqnarray} k^\lambda = (E,\vec k), \qquad k^{\prime\lambda} = (E,\vec k') \,. \end{eqnarray} We use states with the same energy, because then \begin{eqnarray} q \equiv k - k' = (0,\vec q) \end{eqnarray} with $\vec q = \vec k - \vec k'$, which corresponds to the static limit. Denoting by $\rho(x)$ the charge density operator, the effective charge is defined by the equation \begin{eqnarray} e_{{\rm eff}} \langle k'|k\rangle & = & \int d^3x \langle k' \left| \rho(x) \right| k \rangle\nonumber\\ & = & (2\pi)^3 \delta^{(3)}(\vec q) \langle k' \left| \rho(0) \right| k \rangle \label{effch1} \end{eqnarray} On the other hand, \begin{eqnarray} \langle k' \left| \rho(0) \right| k \rangle = \overline u (k') \Gamma_0 (0,\vec q) u(k)\,, \label{rhoGamma} \end{eqnarray} where the notation $\Gamma_\lambda(q^0,\vec q)$ has been used to indicate explicitly that we are considering the dependence of $\Gamma$ separately on the frequency and wavelength of the photon. Thus we obtain \begin{eqnarray} \label{basic} e_{{\rm eff}} = \frac{1}{2E} \overline u(k) \Gamma_0 (0,\vec q\to 0) u(k) \,, \end{eqnarray} which is the basic equation used to interpret our results; it can, however, be cast in a more elegant form.
Introducing the spinor projection matrix \begin{eqnarray} S(k) \equiv u(k) \otimes \overline u(k) = {1 \over 2} (1 + \lambda \gamma_5) \rlap/ k \,, \label{S(k)} \end{eqnarray} where the last step is valid for massless Weyl spinors with $\lambda=\pm 1$ being the helicity, we can rewrite Eq.\ (\ref{basic}) as \begin{eqnarray} e_{{\rm eff}} &=& {1 \over 2E} \; {\rm tr} \, [\Gamma_0 (0,\vec q\to 0) S(k)] \,, \label{matrixeffch}\\ & = & {1 \over 4E} \; {\rm tr} \, \left[ \Gamma_0 (0,\vec q\to 0) (1 + \lambda \gamma_5) \rlap/ k \right] \,, \label{def:effch} \end{eqnarray} where again the second step is valid for massless Weyl spinors. Since $\Gamma_\lambda$ has a well defined limit as $q^0 \to 0$, we can make a Taylor expansion around $q^0 = 0$: \begin{eqnarray} \Gamma_0 & = & G_0 + q^0 G_1 + O(q^{0^2})\,\nonumber\\ \vec\Gamma & = & \vec H_0 + q^0\vec H_1 + O(q^{0^2})\,, \end{eqnarray} where all the coefficients are independent of $q^0$. Then from Eq. (\ref{currconservation}) it is easy to deduce that, in the limit $q^0\to 0$, \begin{eqnarray} \label{q0limit} \Gamma_0 & = & \vec q\cdot\vec H_1 + O(q^0)\,\nonumber\\ \vec\Gamma & = & q^0\vec H_1 + O(q^{0^2})\,. \end{eqnarray} Since $\vec H_1$ has a well defined limit as $\vec q\to 0$, it follows that $\Gamma_0 = 0$ in this limit, and from Eq. (\ref{def:effch}) we see that $e^{(\nu)}_{{\rm eff}}$ is zero in the vacuum. In the medium, Eq. (\ref{currconservation}) continues to hold for any value of $q$, and the relations in Eq. (\ref{q0limit}) are also valid, but $\vec H_1$ no longer has a well defined limit as $\vec q\to 0$. There are several ways to understand why this is so. One of them is to notice that in the case of the medium, some of the internal lines to which the photon is attached are on shell because they correspond to particles that are in the background. Thus, the singularities that are avoided in the case of the vacuum because the photon is attached only to internal off-shell lines, reappear here.
Therefore, nothing prevents $\vec H_1$ from developing a singularity as $\vec q\to 0$ of the form \begin{eqnarray} \vec H_1 = \mbox{(constant)} \cdot \frac{\vec q}{{\vec q}\,^2} \end{eqnarray} In such a case, the constant appearing in this equation is the value of $\Gamma_0$ in the limit $q^0=0,\,\vec q \to0$. Thus, from the definition of the effective charge in Eq. (\ref{def:effch}), it follows that $e^{(\nu)}_{{\rm eff}}$ is non-vanishing. In what follows, we will show that this is exactly what happens, first explicitly by using the 1-loop calculation of the neutrino electromagnetic vertex and then by a general field-theoretic argument, which extends the 1-loop result to all orders in $e$. This will be done by showing that $\Gamma_\lambda$ is related to the photon self-energy $\pi_{\lambda\rho}(q)$. Further, in the limit that we are considering, $\pi_{\lambda\rho}(q^0 = 0,\vec q\to 0)$ is related to the Debye screening length, which allows us to establish the relation between the latter quantity and the neutrino induced charge. \section{One-loop result} We are interested in the regime where the neutrino momenta are small compared to the masses of the $W$ and $Z$ bosons. Therefore, we can neglect the momentum dependence in the $W$ and $Z$ propagators, which is justified if we are performing a calculation to the leading order in the Fermi constant $G_F$. In this approximation, the diagrams contributing to the electromagnetic vertex then appear at the 1-loop level, and are shown in Fig.~\ref{f:1loop}. Since the momentum dependence of the weak gauge boson propagators is neglected, these two diagrams can be represented by the single diagram of Fig.~\ref{f:4fermi} with a four-fermion vertex.
Let us denote the 4-fermion interaction by \begin{eqnarray} \label{Lweak} {\cal L} {\rm _{int}^{(weak)}} = -\sqrt 2 G_F \; [\bar \nu \gamma^\rho L\nu ] \; [\bar f \gamma_\rho ({\cal A + B} \gamma_5) f ] \end{eqnarray} where $L={1\over2}(1-\gamma_5)$ is the projection operator for left chirality, and $f$ stands for the electron field. We can then write the amplitude of Fig.~\ref{f:4fermi} as \begin{eqnarray} -i \Gamma_\lambda &=& (-ie)(-iG_F\surd 2)(-1) \gamma^\rho L \; \times \nonumber\\ &&\quad \int {d^4p \over (2\pi)^4} \;\mbox{tr}\, \left[\gamma_\lambda \, iS_F(p) \gamma_\rho ({\cal A + B} \gamma_5) \, iS_F(p - q) \, \right] \,, \label{Gamma} \end{eqnarray} where $iS_F(p)$ denotes the propagator of the background particles with momentum $p$, and $e$ is the charge of the electron. In complicated systems, this propagator may be complicated, and the integration over the momentum $p$ may have unusual measure as well, but we do not need these explicitly for what follows. Now the interesting observation is that the contribution from ${\cal A}$ in Eq. (\ref{Gamma}) is intimately related to the vacuum polarization of the photon which, at 1-loop, arises from the diagram in Fig.~\ref{f:vacpol} and is given by \begin{eqnarray} i \pi_{\lambda\rho} (q) = (-ie)^2(-1) \int {d^4p \over (2\pi)^4} \;\mbox{tr}\, \left[ \gamma_\lambda \, iS_F(p) \, \gamma_\rho \, iS_F(p - q) \right] \,. \label{pi} \end{eqnarray} Therefore, Eq. (\ref{Gamma}) can be written in the form \begin{eqnarray} \Gamma_\lambda = -\frac{G_F}{\sqrt 2 e}\gamma^\rho(1 - \gamma^5) ({\cal A} \pi_{\lambda\rho} + {\cal B} \pi^5_{\lambda\rho})\,. \label{Gampi} \end{eqnarray} where we have defined \begin{eqnarray} i\pi^5_{\lambda\rho} = (-ie)^2(-1)\int {d^4p \over (2\pi)^4} \;\mbox{tr}\, \left[\gamma_\lambda \, iS_F(p) \gamma_\rho \gamma_5 \, iS_F(p-q) \, \right] \,. \end{eqnarray} The term proportional to $\pi^5_{\lambda\rho}$ does not contribute to $\Gamma_0$. 
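In the rest frame of the medium this can be seen compactly: at this order the $p$ integration can only produce the antisymmetric structure derived in the trace argument below, so that

```latex
% Rest-frame check: the axial polarization drops out of the time component.
\begin{eqnarray}
\pi^5_{\lambda\rho} \;\propto\; \epsilon_{\lambda\rho\alpha\beta}\, q^\alpha v^\beta
\qquad\Longrightarrow\qquad
\pi^5_{0\rho}\Big|_{v^\mu = (1,\vec 0\,)}
\;\propto\; \epsilon_{0\rho\alpha 0}\, q^\alpha \;=\; 0 \,,
\end{eqnarray}
```

since the Levi-Civita tensor vanishes whenever two of its indices coincide.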
The reason is that the trace contains a factor of $\gamma_5$, and therefore can be non-zero only if there are at least four other $\gamma$-matrices present. Since the terms in a fermion propagator have at most one $\gamma$-matrix, the trace involves $\mbox{tr}\,(\gamma_5\gamma_\lambda \gamma_\rho \gamma_\alpha \gamma_\beta) = 4i\epsilon_{\lambda\rho\alpha\beta}$. After the $p$ integration this can yield only a term proportional to $\epsilon_{\lambda\rho\alpha\beta}q^\alpha v^\beta$, which does not contribute to the zeroth component of $\Gamma_\lambda$. \section{Generalization to higher orders} The above result, based on the 1-loop calculation of the photon self-energy and the neutrino vertex function, can be generalized as follows. The photon self-energy is defined in general by \begin{eqnarray} i\pi_{\lambda\rho} &= & (-ie)^2\int{d^4x e^{iq\cdot x}\left< T\, U^{{\rm (em)}}j_\lambda(x)j_\rho(0)\right> }\,,\nonumber\\ & = & (-ie)^2\int{d^4x e^{-iq\cdot x}\left< T\, U^{{\rm (em)}}j_\lambda(0)j_\rho(x)\right> }\,, \end{eqnarray} where $j_\lambda$ is the electron current density \begin{eqnarray} j_\lambda = \overline f\gamma_\lambda f\,, \end{eqnarray} and \begin{eqnarray} U^{{\rm (em)}} = \exp \left( i\int d^4z \, {\cal L}{\rm _{int}^{(em)}} \right) \end{eqnarray} with \begin{eqnarray} {\cal L} {\rm _{int}^{(em)}} = -e \, j_\lambda A^\lambda\,. \end{eqnarray} On the other hand, the neutrino vertex function of Eq. (\ref{defGamma}) is defined by \begin{eqnarray} \label{genvertex} \Gamma_\lambda(k,k') = e \int{d^4x\,d^4y\, e^{-ik\cdot x} e^{ik'\cdot y}\left< T\, \exp \left( i\int d^4z \, {\cal L} {\rm _{int}^{(total)}} \right) \nu(y)j_\lambda(0)\overline\nu(x)\right> _a}\,, \end{eqnarray} where \begin{eqnarray} {\cal L} {\rm _{int}^{(total)}} = {\cal L} {\rm _{int}^{(weak)}} + {\cal L} {\rm _{int}^{(em)}} \,. \end{eqnarray} The subscript $a$ in Eq. 
(\ref{genvertex}) is used to indicate that $\Gamma_\lambda$ is obtained from the above formula by amputating the propagators corresponding to the external neutrino lines. It is convenient to rewrite ${\cal L}_{{\rm int}}^{{\rm (weak)}}$, given in Eq. (\ref{Lweak}), in the form \begin{eqnarray} \label{Lweak2} {\cal L}_{{\rm int}}^{{\rm (weak)}} = -\sqrt2 G_F \overline\nu_L \gamma^\lambda \nu_L \left[{\cal A}j_\lambda + {\cal B}j^5_\lambda\right]\,, \end{eqnarray} where \begin{eqnarray} j^5_\lambda \equiv \overline f\gamma_\lambda\gamma^5 f\,. \end{eqnarray} To first order in $G_F$, \begin{eqnarray} \Gamma_\lambda = e \int d^4x\, d^4y\, d^4z\,e^{-ik\cdot x}e^{ik'\cdot y} \left< T\,U^{{\rm (em)}} \nu(y) j_\lambda(0) \overline\nu(x) i {\cal L}_{{\rm int}}^{{\rm (weak)}}(z)\right> _a \label{1storder} \end{eqnarray} which, using Eq. (\ref{Lweak2}) and amputating the external neutrino lines, reduces to \begin{eqnarray} \Gamma_\lambda &=& - i eG_F\sqrt 2 \, \gamma^\rho L \times \nonumber\\ && \left( {\cal A} \int{d^4z \, e^{-iq\cdot z} \left< T \, U^{{\rm (em)}} j_\lambda(0) j_\rho(z) \right> } + {\cal B} \int{d^4z\,e^{-iq\cdot z}\left< T\,U^{{\rm (em)}} j_\lambda(0)j_\rho^5(z)\right> } \right)\,. \end{eqnarray} Defining \begin{eqnarray} i\pi^5_{\lambda\rho} & = & (-ie)^2 \int{d^4x e^{iq\cdot x}\left< T\, U^{{\rm (em)}}j_\lambda(x)j^5_\rho (0)\right> }\,,\nonumber\\ & = & (-ie)^2 \int{d^4x e^{-iq\cdot x}\left< T\, U^{{\rm (em)}}j_\lambda(0)j^5_\rho(x)\right> }\,, \end{eqnarray} we finally obtain the relation \begin{eqnarray} \Gamma_\lambda = - \frac{G_F}{\sqrt 2 e}\gamma^\rho(1 - \gamma^5) ({\cal A}\pi_{\lambda\rho} + {\cal B}\pi^5_{\lambda\rho})\,. \label{Gampi2} \end{eqnarray} This expression is the same as Eq. (\ref{Gampi}), except that now it is clear that it is valid in all orders of the electromagnetic interactions.
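The $\gamma$-matrix trace identities invoked in these arguments can be checked numerically. The sketch below is ours, not part of the original derivation; it uses the Dirac representation, and the overall sign of the four-$\gamma$ trace is convention dependent. It verifies that the trace of $\gamma_5$ with fewer than four $\gamma$-matrices vanishes, and that the four-$\gamma$ trace is nonzero only for four distinct indices:

```python
import numpy as np

# Dirac representation of the gamma matrices, built from Pauli matrices.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

def tr5(*idx):
    """Trace of gamma5 times a product of gamma matrices gamma[i]."""
    m = g5.copy()
    for i in idx:
        m = m @ gamma[i]
    return np.trace(m)

# tr(g5 g_l g_r) = 0: fewer than four gammas cannot saturate the epsilon tensor.
assert all(abs(tr5(l, r)) < 1e-12 for l in range(4) for r in range(4))

print(tr5(0, 1, 2, 3))   # nonzero (+-4i) for four distinct indices
print(tr5(0, 1, 2, 2))   # zero when an index repeats
```

In particular $\mbox{tr}\,(\gamma_5\gamma_\lambda\gamma_\rho)=0$, which is why $\pi^5_{\lambda\rho}$ must carry the antisymmetric structure $\epsilon_{\lambda\rho\alpha\beta}q^\alpha v^\beta$.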
Since $\pi^5_{\lambda\rho}$ is a pseudotensor that depends only on $q$ and $v$, it must be proportional to $\epsilon_{\lambda\rho\alpha\beta}q^\alpha v^\beta$, and therefore it does not contribute to $\Gamma_0$ in the rest frame of the medium. If one includes the effects of the strong interactions as well, the various occurrences of ${\cal L}{\rm ^{(em)}_{int}}$ should be replaced by ${\cal L}{\rm ^{(em)}_{int}} + {\cal L}{\rm ^{(strong)}_{int}}$, but this would not change Eq. (\ref{Gampi2}). \section{Expression for the induced charge} The relation between the induced charge and the Debye screening length is obtained as follows. As already argued, the zeroth component of $\Gamma_\lambda$ is given by \begin{eqnarray} \label{gammapirel} \Gamma_0 (0,\vec q \to 0) = -\frac{G_F{\cal A}}{\sqrt 2 e}\gamma^\rho [1 - \gamma^5] \pi_{0\rho} (0,\vec q \to 0) \,. \end{eqnarray} The most general form of $\pi_{\lambda\rho}$ is \cite{NiPa89b} \begin{eqnarray} \label{pimunugeneral} \pi_{\lambda\rho} = \pi_T R_{\lambda\rho} + \pi_L Q_{\lambda\rho} + \pi_P P_{\lambda\rho} \,, \end{eqnarray} where \begin{eqnarray} R_{\lambda\rho} &\equiv & \tilde g_{\lambda\rho} - Q_{\lambda\rho}\,,\\ Q_{\lambda\rho} &\equiv & \frac{\tilde v_\lambda\tilde v_\rho}{\tilde v^2}\,, \\ P_{\lambda\rho} &\equiv& {i \over {\cal Q}} \epsilon_{\lambda\rho\alpha\beta} q^\alpha v^\beta \,, \end{eqnarray} with \begin{eqnarray} \tilde g_{\lambda\rho} &\equiv& g_{\lambda\rho} - \frac{q_\lambda q_\rho}{q^2} \\ \tilde v_\lambda &\equiv& \tilde g_{\lambda\rho}v^\rho\, \\ {\cal Q} &\equiv & \sqrt{(q\cdot v)^2 - q^2} \,.
\end{eqnarray} In the rest frame of the medium, with $v^\mu=(1,0,0,0)$, the above definitions give \begin{eqnarray} Q_{\lambda\rho}(0,\vec q) & = & v_\lambda v_\rho\,,\\ R_{00}(0,\vec q) = R_{i0}(0,\vec q) = R_{0i}(0,\vec q) & = & 0\,,\\ P_{00}(0,\vec q) = P_{i0}(0,\vec q) = P_{0i}(0,\vec q) & = & 0\,, \end{eqnarray} where we have indicated explicitly the fact that we are evaluating $R$, $Q$ and $P$ in the static limit, $q^0 = 0$. From Eq. (\ref{pimunugeneral}) we then obtain \begin{eqnarray} \pi_{00}(0,\vec q) & = & \pi_L(0,\vec q)\,,\\ \pi_{0i}(0,\vec q) & = & \pi_{i0}(0,\vec q) = 0\,. \end{eqnarray} Eq. (\ref{gammapirel}) then yields \begin{eqnarray} \Gamma_0 (0, \vec q \to 0)= - \left(\frac{G_F{\cal A}}{\sqrt 2 e}\right)\gamma_0 (1 - \gamma_5) \pi_L(0,\vec q\to 0)\,. \end{eqnarray} Using the definition of the Debye screening length, \begin{eqnarray} r_D^{-2} = \pi_L(0,\vec q\to 0)\,, \label{Debye} \end{eqnarray} and Eq. (\ref{def:effch}), we finally obtain for the induced neutrino charge \begin{eqnarray} e^{(\nu)}_{{\rm ind}} = - \frac{G_F{\cal A}}{\sqrt 2 er_D^2} (1 - \lambda) \,, \label{result} \end{eqnarray} where, as stated before, $\lambda$ is the helicity of the neutrino. Thus, it is clear that only the left-handed neutrinos have an induced charge. The induced charge of the right-handed neutrinos vanishes since they have no weak interactions. If they interact via some other, weaker interaction, then of course they also acquire an induced charge, but its magnitude will be further suppressed. Also, note that $e^{(\nu)}_{{\rm ind}} \propto e$, since $r_D^2 \propto e^{-2}$ which follows from Eq. (\ref{Debye}). \section{Generalization to massive neutrinos} The generalization of our previous results to massive neutrinos is straightforward, although there are several important differences. The effective charge is defined by Eq.
(\ref{matrixeffch}), but the expression for the spinor projection operator $S(k)$, as well as the expression for $\Gamma_0$, differ from the previous case. The spinor projection operator $S(k)$ is given, for massive particles, by \begin{eqnarray} \label{massiveproj} S(k) = u(k) \otimes \overline u(k) = \frac{1}{2} (\rlap/ k + m) (1 + \lambda \gamma^5 \rlap/ s)\,, \end{eqnarray} where $s^\mu$ is the spin polarization vector which, for helicity states, is given by \begin{eqnarray} s^\mu = {1 \over m} \left( |\vec k| , E \vec k/|\vec k| \right) \,. \end{eqnarray} Therefore, although Eq. (\ref{matrixeffch}) remains valid for massive neutrinos, Eq. (\ref{def:effch}) does not. To proceed, we consider the cases of Dirac and Majorana neutrinos separately. \subsection{Dirac case} For Dirac neutrinos, $\Gamma_\lambda$ is still given by Eq. (\ref{Gampi2}) and the effective charge by Eq. (\ref{matrixeffch}). The formula for the effective charge of Dirac neutrinos is now obtained by substituting Eq. (\ref{massiveproj}) in Eq. (\ref{matrixeffch}), yielding \begin{eqnarray} e^{(\nu^D)}_{{\rm ind}} = -\frac{G_F{\cal A}}{\sqrt{2}e r_D^2} \left(1 - \frac{\lambda |\vec k|}{E} \right) \,. \end{eqnarray} Specializing this formula to the case of massless neutrinos we recover Eq. (\ref{result}), as it should be. In the opposite limit of non-relativistic neutrinos, we see that both helicity states have the same effective charge, which is equal to half the value of the charge of the left-handed neutrino in the massless case. \subsection{Majorana case} For (massive) Majorana neutrinos we again have to use the spinor projection operator appropriate for massive particles, given in Eq. (\ref{massiveproj}). However, the formula for $\Gamma_\lambda$ is modified as follows. Going back to Eq.
(\ref{1storder}), it is important to recognize that each one of the neutrino field operators $\nu(y)$ and $\overline\nu(x)$ can be contracted with either one of the neutrino field operators that come from the factor ${\cal L}{\rm _{int}^{(weak)}}$. The reason is that for a Majorana neutrino the field operator is self-conjugate, and therefore $\nu(y)$ can be contracted not only with $\overline\nu$ but also with $\nu$ in ${\cal L}{\rm _{int}^{(weak)}}$. The net result of adding these two possible contractions is that the expression for $\Gamma_\lambda$ given in Eq. (\ref{Gampi2}) is replaced by \cite{MoPaBook} \begin{eqnarray} \label{gammamaj} \Gamma_\lambda^{(M)} = -\frac{G_F}{\sqrt{2}e}(-2\gamma^\rho\gamma^5) ({\cal A}\pi_{\lambda\rho} + {\cal B}\pi^5_{\lambda\rho}) \,. \end{eqnarray} Substituting this formula and the projection operator given above for the massive case, into Eq. (\ref{matrixeffch}) we then obtain the effective charge for Majorana neutrinos \begin{eqnarray} e^{(\nu^M)}_{{\rm ind}} = \frac{\sqrt{2} G_F {\cal A}}{er_D^2} \, \frac{\lambda |\vec k|}{E} \,. \end{eqnarray} We notice the following features: ($i$) the positive and negative helicity states have opposite effective charge; ($ii$) in the massless limit the result for the negative helicity state is the same as the one obtained in the massless Dirac case and in the Weyl case, while the positive helicity state has the opposite value of the charge. This is not surprising since the right-handed component of the Majorana neutrino is just the CPT conjugate of the left-handed one. \section{Numerical estimates} In order to obtain numerical estimates, we note that since $|\vec k|/E \leq 1$, the magnitude of the induced charge is maximal when the neutrino is massless. Thus, in this section, we use Eq. (\ref{result}), which is valid for massless neutrinos. We see that we need to know two things in order to obtain a numerical estimate for the induced charge of the neutrino, viz., $\cal A$ and $r_D$.
The first is easy, and is given by the standard model of electroweak interactions. In fact, one obtains \begin{eqnarray} {\cal A} = \left\{ \begin{array}{ll} 2 \sin^2 \theta_W + {1 \over 2} \qquad & \mbox{for $\nu_e$} \\ 2 \sin^2 \theta_W - {1 \over 2} & \mbox{for $\nu_\mu, \nu_\tau$} \,. \end{array} \right. \label{A} \end{eqnarray} For massless $\nu_e$, our result exactly reproduces the results of Oraevsky and Semikoz \cite{OrSe87}. Our formulas can be used to obtain the induced charges for the $\nu_\mu$ and $\nu_\tau$ as well, even if they are massive particles. One curious point is that the induced charge of the $\nu_e$ differs from that of the $\nu_\mu$ or $\nu_\tau$, since the $\nu_e$'s interact with the electrons of the medium via both charged and neutral currents. Thus, if neutrinos mix and oscillate as they propagate through a medium, the induced charges oscillate as well. Using $\sin^2 \theta_W = 0.23$, we see that \begin{eqnarray} {e^{(\nu_e)}_{{\rm ind}} \over e^{(\nu_\mu)}_{{\rm ind}}} = -24 \,. \end{eqnarray} This oscillation of charges might therefore seem to be a spectacular phenomenon, judging by the fact that the ratio of the induced charges is large and negative. However, that is not the case, because the magnitude of the induced charge is extremely small. In fact, putting numbers in Eq. (\ref{result}), one obtains \begin{eqnarray} e^{(\nu_e)}_{{\rm ind}} = -2 \cdot 10^{-32} \times \left( {1\,{\rm cm} \over r_D} \right)^2 \,. \end{eqnarray} To proceed, we need the value of $r_D$ for the background. This can be obtained either from the results of Ref. \cite{DNP89} or from standard texts on plasma physics.
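As a rough numerical cross-check (a sketch of ours in natural units, with the standard input values assumed rather than taken from the text), the ratio of the couplings in Eq. (\ref{A}) and the order of magnitude of the induced charge for $r_D = 1$ cm can be reproduced as follows:

```python
import math

# Natural units (hbar = c = 1), with alpha = e^2 / (4*pi).
sin2w = 0.23
A_e  = 2 * sin2w + 0.5        # nu_e: charged + neutral current
A_mu = 2 * sin2w - 0.5        # nu_mu, nu_tau: neutral current only
ratio = A_e / A_mu
print(ratio)                  # close to -24

G_F = 1.166e-5                        # Fermi constant, GeV^-2
e   = math.sqrt(4 * math.pi / 137.036)
cm  = 1.0 / 1.9733e-14                # 1 cm expressed in GeV^-1
r_D = 1.0 * cm                        # Debye radius of 1 cm

# Massless left-handed neutrino: (1 - lambda) = 2 in Eq. (result).
e_ind = 2 * G_F * A_e / (math.sqrt(2) * e * r_D**2)
print(e_ind)                  # of order 2e-32
```

The printed magnitude is consistent with the $2\cdot 10^{-32}$ estimate quoted above for $r_D = 1$ cm.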
For a background consisting of non-relativistic electrons at temperature $T$, the Debye radius is given by \begin{eqnarray} r_D^2 = {T \over n_e e^2} \,, \end{eqnarray} where $n_e$ is the electron number density\footnote{Formulas appearing in books on plasma physics usually have a factor $4\pi$ in the denominator on the right hand side since their definition of electric charge is different.}. In order for the induced neutrino charge to be detectable in experiments, the values of $T$ and $n_e$ must be such that the resulting Debye radius is small enough. While this is not the case for any known plasma, the methods presented here may prove useful in similar situations where more interesting results may be obtained. \paragraph*{Note added in proof~:} After this work was submitted for publication, we were made aware of a paper by Altherr and Kainulainen \cite{AlKa91} where the one-loop electromagnetic vertex of neutrinos has been calculated in a medium. The calculation agrees with that of Ref.\ \cite{DNP89}. These authors specifically noted that an induced charge appears in the medium; however, no contact was made with the Debye radius. \paragraph*{Acknowledgements~:} The work of PBP was supported by a grant from the Department of Energy.
\section{Introduction} \par Let $\Omega$ be a domain in the complex plane and consider the space $\holo$ of all the functions that are holomorphic on $\Omega$ with the topology of uniform convergence on compacta. In the first section of this article we show that, for a function $f \in \holo$, the boundedness of its $k$-th derivative or $k$-th anti-derivative on $\Omega$ is a rare phenomenon in the topological sense, provided that $\Omega$ is simply connected. We do this by using Baire's Theorem and we prove that the set $\mathcal{D}$ of all the functions $f \in \holo$ with the property that the derivatives and the anti-derivatives of $f$ of all orders are unbounded on $\Omega$ is a dense $G_\d$ set in $\holo$. \par If a function $f$ is holomorphic in an open set containing $\zeta$, then $S_N(f,\zeta)(z)$ denotes the $N$-th partial sum of the Taylor expansion of $f$ with center $\zeta$ at $z$. If $\Omega$ is a simply connected domain and $\zeta \in \Omega$, we define the class $\uoz$ as follows: \begin{definition} The set $\uoz$ is the set of all functions $f \in \holo$ with the property that, for every compact set $K \subset \mathbb{C}$, $K \cap \Omega = \emptyset$, with $K^{\mathsf{c}}$ connected, and for every function $h$ which is continuous on $K$ and holomorphic in the interior of $K$, there exists a sequence $\{\lambda_n\} \subset \{0,1,2,...\}$ such that \begin{equation*} \sup\limits_{z \in K}|S_{\lambda_n}(f,\zeta)(z)-h(z)| \longrightarrow 0, \hspace{10pt} n \rightarrow \infty \end{equation*} \end{definition} Denote $\mathbb{D}= \{z \in \mathbb{C}: |z|<1 \}$. It is shown in \cite{nestoridis1996universal} that $U(\mathbb{D},0)$ is a dense $G_\d$ set in $\hol (\mathbb{D})$. More generally, in \cite{nestoridis1999extension} it is shown that $\uoz$ is a dense $G_\d$ set in $\holo$, where $\Omega$ is any simply connected domain and $\zeta \in \Omega$.
Next, for $\Omega$ as above, we define the set $\uo$: \begin{definition} The set $\uo$ is the set of all functions $f \in \holo$ with the property that, for every compact set $K \subset \mathbb{C}$, $K \cap \Omega = \emptyset$, with $K^{\mathsf{c}}$ connected, and for every function $h$ which is continuous on $K$ and holomorphic in the interior of $K$, there exists a sequence $\{\lambda_n\} \subset \{0,1,2,...\}$ such that, for every compact set $L \subset \Omega$, \begin{equation*} \sup\limits_{\zeta \in L}\sup\limits_{z \in K}|S_{\lambda_n}(f,\zeta)(z)-h(z)| \longrightarrow 0, \hspace{10pt} n \rightarrow \infty \end{equation*} \end{definition} Again in \cite{nestoridis1999extension} it is shown that $\uo$ is a dense $G_\d$ set in $\holo$. Furthermore, in \cite{melas2001universality} it is shown that $\uoz = \uo$, provided that $\Omega$ is contained in a half-plane. This result is generalized in \cite{muller2006universal}, where it is shown that $\uoz = \uo$ for any simply connected domain $\Omega$ and $\zeta \in \Omega$. \par In the second section of this article, we fix a $\zeta_0 \in \Omega$ and, for $N \geq 0$, we consider the function \begin{align*} S_N(f,\zeta_0): \mathbb{C} & \rightarrow \mathbb{C} \\ z& \mapsto \sum_{n=0}^{N}\frac{f^{(n)}(\zeta_0)}{n!}(z-\zeta_0)^n = S_N(f,\zeta_0)(z) \end{align*} \par V. Nestoridis suggested that, in contrast to the setting of $\uoz$, where the Taylor partial sums are considered as functions of $z$ with the center $\zeta$ fixed, one may fix $z=0$ and let the center $\zeta$ vary in $\Omega$. Thus, for $N \geq 0$, we obtain an operator \begin{align*} \ttn : \holo & \rightarrow \holo \\ f & \mapsto \ttn (f) \end{align*} where \begin{align*} \ttn(f): \Omega & \rightarrow \mathbb{C}\\ \zeta & \mapsto \sum_{n=0}^{N}\frac{f^{(n)}(\zeta)}{n!}(- \zeta)^n = \ttn (f)(\zeta) \end{align*} for any $f \in \holo$ and $N \geq 0$. The set of functions $f \in \holo$ such that $\ttn(f)$ is unbounded on $\Omega$ for all $N \geq 0$ is residual in $\holo$.
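As a simple illustration of this unboundedness (an example of ours, not taken from the literature), take $f(z)=1/z$ on a domain $\Omega$ with $0 \notin \Omega$: since $f^{(n)}(\zeta)=(-1)^n n!\,\zeta^{-n-1}$, every term of the defining sum equals $1/\zeta$, so $\ttn(f)(\zeta)=(N+1)/\zeta$, which is unbounded on $\Omega$ whenever $0 \in \partial\Omega$. A quick symbolic check:

```python
import sympy as sp

z, zeta = sp.symbols('z zeta')

def T_N(f, N):
    """Partial Taylor sum of f with center zeta, evaluated at z = 0."""
    terms = (sp.diff(f, z, n).subs(z, zeta) * (-zeta)**n / sp.factorial(n)
             for n in range(N + 1))
    return sp.simplify(sum(terms))

f = 1 / z   # holomorphic on any Omega not containing 0
for N in range(4):
    print(T_N(f, N))   # 1/zeta, 2/zeta, 3/zeta, 4/zeta
```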
This led V. Nestoridis to conjecture that, if $0 \notin \Omega$, then the class $\so$ of all functions $f \in \holo$ with the property that the set $\{\ttn (f): N = 0,1,2,...\}$ is dense in $\holo$ is a dense $G_\d$ set in $\holo$. In this article we show that either $\so = \emptyset$ or $\so$ is a dense $G_\d$ set in $\holo$. The question of whether $\so \neq \emptyset$ will be examined in a future article. However, we do show that, if $0 \notin \Omega$, then the set $\sto$ of the functions $f \in \holo$ with the property that the closure of the set $\{\ttn (f): N = 0,1,2,...\}$ contains the constant functions on $\Omega$ is residual in $\holo$. We do this by proving that $\sto$ contains the set $\uo$, which is already proven to be a dense $G_\d$ set in $\holo$ (\cite{nestoridis1999extension}). \par In the last part of the article, answering a question by T. Hatziafratis, we prove that, for a countable set $E \subset \mathbb{T}=\{z \in \mathbb{C}: |z|=1\}$, the generic holomorphic function on $\mathbb{D}$ has unbounded derivatives and anti-derivatives on each ray $[0,z)$, $z \in E$. We also obtain a more general result, where in fact we do not use Baire's Theorem and, therefore, the topological vector space used need not be a Fr\'echet space. \section{Preliminaries} \par Regarding the terminology used, a set $\Omega \subset \mathbb{C}$ is called a \textit{domain} if it is open and connected in $\mathbb{C}$. A $G_\d$ set in $\holo$ is a countable intersection of open sets in $\holo$ and an $F_\sigma$ set is a countable union of closed sets in $\holo$. Furthermore, a subset $E$ of $\holo$ is called \textit{dense} if there exists no non-empty open subset $U$ of $\holo$ such that $U$ and $E$ are disjoint. The set $E$ is \textit{nowhere dense} in $\holo$ if every non-empty open set $U$ has an open non-empty subset $V$ such that $E$ and $V$ are disjoint. This is equivalent to the closure of $E$ having an empty interior in $\holo$.
A \textit{set of the first category} in $\holo$ is a set that can be expressed as a countable union of nowhere dense sets in $\holo$. A $G_\d$ dense subset of $\holo$ is a $G_\d$ subset which is also dense. Because the space $\holo$ is completely metrizable, Baire's theorem implies that a subset of $\holo$ is $G_\d$ dense if and only if it is the countable intersection of open and dense subsets of $\holo$. A subset of $\holo$ is called residual if it contains a $G_\d$ dense set; equivalently, if its complement is contained in an $F_\sigma$ set of the first category. \\ \par Let $\Omega_1, \Omega_2$ be two domains in $\mathbb{C}$ and $T: \hol(\Omega_1) \rightarrow \hol(\Omega_2)$ be a linear operator with the property that for every $z \in \Omega_2$, the function $ f \mapsto T(f)(z)$ is continuous in $\hol(\Omega_1)$. Observe that this latter property is weaker than $T$ being continuous. Define \begin{equation*} \mathcal{U}_T= \big\{f \in \hol(\Omega_1): T(f) \hspace{3pt} \text{ is unbounded on } \Omega_2 \big\} \end{equation*} \begin{proposition} \label{eitheror} If $\Omega_1, \Omega_2 $ are two domains in $\mathbb{C}$ and $T$ is as above, then either $\mathcal{U}_T = \emptyset$ or $\mathcal{U}_T$ is a dense $G_\delta$ set in $\hol(\Omega_1)$. \end{proposition} \begin{proof} If $\mathcal{U}_T \neq \emptyset$, for $m \geq 1$ define \begin{equation*} U_m= \big\{f \in \hol(\Omega_1): |T(f)(z)| \leq m \hspace{5pt} \text{ for all } z \in \Omega_2 \big\} \end{equation*} Then \begin{equation*} \mathcal{U}_T= \Big(\bigcup\limits_{m=1}^{\infty}U_m \Big)^{\mathsf{c}}= \bigcap\limits_{m=1}^{\infty}U_m^{\mathsf{c}} \end{equation*} We will show that $U_m$ is closed and nowhere dense in $\hol(\Omega_1)$ for each $m \geq 1$. \par To see that it is closed, take a sequence $\{f_n\}$ in $U_m$ such that $f_n \longrightarrow f$ uniformly on compact subsets of $\Omega_1$ for some function $f$.
Then $f \in \hol(\Omega_1)$ and, for $z \in \Omega_2$ we have \begin{align*} |T(f)(z)| & \leq |T(f)(z)-T(f_n)(z)| + |T(f_n)(z)|\\ & \leq |T(f-f_n)(z)| +m \end{align*} Taking $n \rightarrow \infty$ we get that $|T(f)(z)| \leq m$ because of the continuity of $ f \mapsto T(f)(z)$, i.e. $f \in U_m$. Thus, $U_m$ is closed.\par To see that $U_m$ is nowhere dense, it suffices to show that $U_m^{\circ}= \emptyset$. Suppose $f \in U_m^{\circ} $. Since $\mathcal{U}_T \neq \emptyset$, there exists a function $g \in \hol(\Omega_1)$ such that $T(g)$ is unbounded on $\Omega_2$. Then $\{f+\tfrac{1}{n}g\}_n$ is a sequence in $\hol(\Omega_1)$ and, if $K$ is a compact subset of $\Omega_1$, we have \begin{align*} \| (f+\tfrac{1}{n}g)-f\|_K &= \sup\limits_{z \in K} |f(z)+\frac{1}{n} g(z)-f(z)| \\ &= \sup\limits_{z \in K}|\frac{1}{n}g(z)|=\frac{1}{n} \|g\|_K \end{align*} By taking $n \rightarrow \infty$ and observing that $\|g\|_K < \infty$, $g$ being holomorphic on $\Omega_1 \supset K$, we obtain that $f+\frac{1}{n}g \longrightarrow f$ uniformly on $K$. But $K$ was an arbitrary compact subset of $\Omega_1$, so $f+\tfrac{1}{n} \hspace{2pt}g \longrightarrow f$ uniformly on compact subsets of $\Omega_1$. \\ Since $f \in U_m^{\circ}$, there exists an $n_0$ such that $f+\tfrac{1}{n_0} \hspace{2pt}g \in U_m$. By the linearity of $f \mapsto T(f)$ this means that \begin{align*} \tfrac{1}{n_0} \hspace{2pt}|T(g)(z)| & \leq |T(f)(z)+\tfrac{1}{n_0} \hspace{2pt} T(g)(z)|+|T(f)(z)| \\ & \leq m+m \end{align*} or $|T(g)(z)| \leq 2mn_0$, for all $z \in \Omega_2$, which is contradictory to the fact that $T(g)$ is unbounded on $\Omega_2$. Thus, $U_m^{\circ}= \emptyset$ and the proof is complete. \end{proof} \begin{proposition} \label{capuT_n} For $n \in \mathbb{Z}$, let $T_n: \hol(\Omega_1) \rightarrow \hol(\Omega_2)$ be linear and such that for every $z \in \Omega_2$, the function $ f \mapsto T_n(f)(z)$ is continuous in $\hol(\Omega_1)$.
If $\mathcal{U}_{T_n} \neq \emptyset$ for all $n \in \mathbb{Z}$ then the set $\bigcap\limits_{n} \mathcal{U}_{T_n}$ is dense $G_\delta $ in $\hol(\Omega_1)$. \end{proposition} \begin{proof} The space $\hol(\Omega_1)$ with the metric of uniform convergence on compacta is a complete metric space, so by Baire's Theorem any countable intersection of dense $G_\d$ sets in $\hol(\Omega_1)$ is again a dense $G_\d$ set in $\hol(\Omega_1)$. Since $\mathcal{U}_{T_n} \neq \emptyset$ for each $n \in \mathbb{Z}$, Proposition (\ref{eitheror}) implies that each $\mathcal{U}_{T_n}$ is a dense $G_\d$ set in $\hol(\Omega_1)$, and the desired result follows immediately. \end{proof} Observe that Propositions (\ref{eitheror}) and (\ref{capuT_n}) still hold if we replace $\hol(\Omega_2)$ by $\mathbb{C}^{X}$, where $X$ is any non-empty set and $\mathbb{C}^{X}$ is the set of all functions from $ X $ to $\mathbb{C}$. \section{Boundedness of derivatives and anti-derivatives as a rare phenomenon} \begin{proposition} \label{a_0} Let $\Omega \subset \mathbb{C}$ be open and non-empty. The set $\mathcal{A}_0$ of all functions $f \in \hol(\Omega)$ that are bounded on $\Omega$ is a set of the first category in $\hol(\Omega)$. \end{proposition} \begin{proof} For $m \in \mathbb{N}$ define \begin{equation*} A_m = \Big\{ f \in \hol(\Omega): |f(z)| \leq m, \text{ for all } z \in \Omega \Big\} \end{equation*} It is obvious that \begin{equation*} \mathcal{A}_0 = \bigcup\limits_{m=1}^{+\infty} A_m \end{equation*} We will show that every $A_m$ is closed and has an empty interior in $\holo$. \par For $m \in \mathbb{N}$, the set $A_m$ is closed in $\hol(\Omega)$: Let $\{f_n\}$ be a sequence in $A_m$ and $f$ a function on $\Omega$ such that $f_n \longrightarrow f$ uniformly on compact subsets of $\Omega$. By the Weierstrass theorem, $f\in \hol(\Omega)$ and, for $z \in \Omega$ \begin{equation*} |f(z)|= \lim\limits_{n \longrightarrow \infty} |f_n(z)| \leq m \end{equation*} Therefore, $f \in A_m$ and $A_m$ is closed in $\hol(\Omega)$ for each $m=1,2,...$.
\par Next we show that $A_m ^{\circ} = \emptyset $ for all $m=1,2,...$: First observe that there exists a function $g \in \hol(\Omega)$ that is unbounded on $\Omega$. Indeed, if $\Omega$ is unbounded take $g(z)=z,$ $z \in \Omega$, and if $\Omega$ is bounded, take $\zeta_0 \in \partial \Omega$ and $g(z)= \frac{1}{z-\zeta_0}$.\\ Now assume that there exists $f \in A_m^{\circ}$ for some fixed $m=1,2,...$. Then $\{f+\tfrac{1}{n}g\}_n$ is a sequence in $\hol(\Omega)$ and $f+\frac{1}{n}g \longrightarrow f$ uniformly on compact subsets of $\Omega$, $n\rightarrow \infty$. But $f \in A_m^{\circ}$, hence there exists an $n_0 \in \mathbb{N}$ such that $f+ \tfrac{1}{n_0}g \in A_m^{\circ}$. This means that \begin{equation*} |f(z)+\frac{1}{n_0}g(z)| \leq m, \text{ for all } z \in \Omega \end{equation*} But then, for any $z \in \Omega$ we would have \begin{align*} |\frac{1}{n_0}g(z)| &= |f(z)+\frac{1}{n_0}g(z)-f(z)| \\ & \leq |f(z)+\frac{1}{n_0}g(z)| +|f(z)| \\ & \leq m+m, \end{align*} Therefore, $|g(z)| \leq 2mn_0$ for all $z \in \Omega$, which is contradictory to the fact that $g$ is unbounded on $\Omega$. Thus, $A_m^{\circ}= \emptyset $ and the proof is complete. \end{proof} For $f \in \holo$, we denote by $f^{(k)}$ the $k$-derivative of $f$, $k \geq 1$. By $f^{(0)}$ we denote $f$ itself. \begin{proposition} \label{a_k} Let $\Omega \subset \mathbb{C}$ be open and non-empty and $k \in \mathbb{N}$. The set $\mathcal{A}_k$ of all functions $f \in \hol(\Omega)$ such that $f^{(k)}$ is bounded on $\Omega$ is a set of the first category in $\hol(\Omega)$. \end{proposition} \begin{proof} For $m \in \mathbb{N}$, define \begin{equation*} A_m = \Big\{ f \in \hol(\Omega): |f^{(k)}(z)| \leq m, \text{ for all } z \in \Omega \Big\} \end{equation*} It is obvious that \begin{equation*} \mathcal{A}_k = \bigcup\limits_{m=1}^{+\infty} A_m \end{equation*} We will show that each $A_m$ is closed and has empty interior in $\hol(\Omega)$. 
\par To see that it is closed, take a sequence $\{f_n\}$ in $A_m$ and a function $f$ on $\Omega$ such that $f_n \longrightarrow f$ uniformly on compact subsets of $\Omega$. By the Weierstrass theorem we have that $f \in \hol(\Omega)$ and $f^{(k)}_n \longrightarrow f^{(k)}$ uniformly on compact subsets of $\Omega$. Therefore, for any $z \in \Omega$ we have that \begin{equation*} |f^{(k)}(z)|= \lim\limits_{n \rightarrow \infty} |f^{(k)}_n(z)| \leq m \end{equation*} i.e. $f \in A_m$. Thus, $A_m$ is closed. \par To see that $A^{\circ}_m= \emptyset$, first observe that there exists a function $g \in \hol(\Omega)$ such that $g^{(k)}$ is unbounded on $\Omega$. Indeed, if $\Omega$ is unbounded take $g(z)=z^{k+1}$ and if $\Omega$ is bounded take $\zeta_0 \in \partial \Omega$ and $g(z)= \frac{1}{z-\zeta_0}$.\\ Now assume that there exists $f \in A_m^{\circ}$. Then $\{f+\tfrac{1}{n}g\}_n$ is a sequence in $\hol(\Omega)$ and $f+\frac{1}{n}g \longrightarrow f$ uniformly on compact subsets of $\Omega$, $n\rightarrow \infty$. But $f \in A_m^{\circ}$, hence there exists an $n_0 \in \mathbb{N}$ such that $f+ \tfrac{1}{n_0}g \in A_m^{\circ}$. This means that \begin{equation*} |f^{(k)}(z)+\frac{1}{n_0}g^{(k)}(z)| \leq m, \text{ for all } z \in \Omega \end{equation*} where the linearity of the derivative operator is used. But then, for any $z \in \Omega$ we would have \begin{align*} |\frac{1}{n_0}g^{(k)}(z)| &= |f^{(k)}(z)+\frac{1}{n_0}g^{(k)}(z)-f^{(k)}(z)| \\ & \leq |f^{(k)}(z)+\frac{1}{n_0}g^{(k)}(z)| +|f^{(k)}(z)| \\ & \leq m+m, \end{align*} Thus $|g^{(k)}(z)| \leq 2mn_0$ for all $z \in \Omega$, which is contradictory to the fact that $g^{(k)}$ is unbounded on $\Omega$. Thus, $A_m^{\circ} = \emptyset$ and the proof is complete. \end{proof} \begin{proposition} \label{e} Let $\Omega \subset \mathbb{C}$ be open and non-empty. 
The set $\mathcal{E}$ of all functions $f \in \hol(\Omega)$ with the property that $f^{(k)}$ is unbounded on $\Omega$, for all $k \geq 0$, is a dense $G_{\delta}$ set in $\hol(\Omega)$. \end{proposition} \begin{proof} Using the notation previously established it is obvious that \begin{equation*} \mathcal{E}= \bigcap\limits_{k=0}^{\infty} \mathcal{A}_k^{\mathsf{c}} \end{equation*} By Propositions (\ref{a_0}) and (\ref{a_k}) we have that for each $k \geq 0$, the set $\mathcal{A}_k$ is the countable union of closed, nowhere dense sets in $\hol(\Omega)$, so its complement $\mathcal{A}_k^{\mathsf{c}}$ must be a dense $G_{\delta}$ set in $\hol(\Omega)$. By Baire's Theorem, the set $\mathcal{E}$ is a dense $G_{\delta}$ set in $\hol(\Omega)$ as a countable intersection of dense $G_\d$ sets in a complete metric space. \end{proof} From now on, and throughout the remainder of this section, consider an $\Omega \subset \mathbb{C}$ which is non-empty, open and simply connected. Fix $\zeta_0 \in \Omega$ and, for $f \in \hol(\Omega)$ define \begin{align*} T(f)(z)&= \int_{\gamma_z}f(\xi) d\xi, \hspace{90pt} \text{ for all } z \in \Omega \\ T^{(k)}(f)(z)&= \int_{\gamma_z}T^{(k-1)}(f)(\xi) d\xi, \hspace{50pt} \text{ for all } z \in \Omega, k \geq 2 \end{align*} where $\gamma_{z}$ is any polygonal line in $\Omega $ that starts at $\zeta_0$ and ends at $z$. Since $\Omega $ is assumed to be simply connected, each $T^{(k)}$ (with $T^{(1)}=T$) is well-defined and holomorphic in $\Omega$ and its $k$-th derivative is $f$. \begin{proposition} \label{integraliscontinuous} The operator \begin{align*} T: \hol(\Omega)&\longrightarrow \hol(\Omega)\\ f& \mapsto T(f) \end{align*} is linear and continuous on $\hol(\Omega)$. \end{proposition} \begin{proof} The linearity of $T$ is obvious from the linearity of the integral. For the continuity, take a sequence $\{f_n\}$ in $\hol(\Omega)$ and a function $f$ on $\Omega$ such that $f_n \longrightarrow f$ uniformly on compact subsets of $\Omega$.
By the Weierstrass theorem we have that $f \in \hol(\Omega)$. We must show that $T(f_n)\longrightarrow T(f)$ uniformly on compact subsets of $\Omega$. \par Let $K$ be a compact subset of $\Omega$. Either $\Omega =\mathbb{C}$ or $\Omega \neq \mathbb{C}$.\par In the first case, i.e. $\Omega = \mathbb{C}$, for $z \in K$ we take $\gamma_z$ to be the line segment $[\zeta_0,z]$. Set $M=\max\{|\zeta_0|, \max\limits_{z \in K}|z|\}$ and observe that $M$ is well defined and finite because $K$ is compact in $\mathbb{C}$. Define $L=\overline{D(0,M)}= \{z \in \mathbb{C}: |z| \leq M\} $. Then $L$ is compact in $\mathbb{C}$, $K \subset L$ and $\gamma_z \subset L$, for all $z \in K$. Therefore, for $z\in K$ we have \begin{align*} |T(f_n)(z)-T(f)(z)| &= \big| \int_{\gamma_z}f_n(\xi)d\xi-\int_{\gamma_z}f(\xi)d\xi \hspace{3pt}\big| \\ & = \big|\int_{\gamma_z}(f_n(\xi)-f(\xi))d\xi \hspace{3pt}\big|\\ & \leq \|f_n-f\|_L \hspace{5pt} |z-\zeta_0|\\ & \leq 2 M \|f_n-f\|_L \end{align*} Thus $\|T(f_n)-T(f)\|_K \leq 2M \|f_n-f\|_L \longrightarrow 0 $, $n\rightarrow \infty$. \par In the second case, i.e. $\Omega \neq \mathbb{C}$, since $\Omega $ is a simply connected domain, by the Riemann Mapping Theorem there exists an analytic function $\phi :\mathbb{D} =\{z \in \mathbb{C}: |z|<1\} \longrightarrow \mathbb{C}$ such that $\phi$ is univalent and $\phi(\mathbb{D})= \Omega$. Obviously $\phi$ is a homeomorphism between $\mathbb{D}$ and $\Omega$. Since the set $\{\zeta_0\}\cup K \subset \Omega$ is compact, the set $\phi ^{-1}(\{\zeta_0\}\cup K)\subset \mathbb{D}$ is also compact. Therefore, there exists an $r$, with $0<r<1$, such that $\phi ^{-1}(\{\zeta_0\}\cup K)\subset \overline{D(0,r)}=\{z \in \mathbb{C}: |z| \leq r\}$. Define $L=\phi(\overline{D(0,r)})\subset \phi(\mathbb{D})=\Omega$. Then $L$ is compact and $K\subset L$. For $z \in K$ we have that $\phi^{-1}(\zeta_0)$, $\phi^{-1}(z) \in \overline{D(0,r)}$, hence the line segment $[\phi^{-1}(\zeta_0),\phi^{-1}(z)] \subset \overline{D(0,r)}$.
Therefore, if $\sigma: [0,1]\longrightarrow \mathbb{C}$ is a parametrization of $[\phi^{-1}(\zeta_0),\phi^{-1}(z)]$, then $Length(\sigma) \leq 2r$. Take $\gamma_z= \phi([\phi^{-1}(\zeta_0),\phi^{-1}(z)])\subset \phi(\overline{D(0,r)})=L$ and observe that $\gamma_z$ is rectifiable: $\phi \circ \sigma :[0,1]\longrightarrow \Omega $ is a parametrization of $\gamma_z$ and \begin{align*} Length(\gamma_z)&= \int_{0}^{1}|(\phi \circ \sigma)^{'}(t)|dt \\ &= \int_{0}^{1}|\phi ^{'}(\sigma(t))|\hspace{2pt}|\sigma ^{'}(t)|dt \\ & \leq \max \big\{|\phi ^{'}(z)|: z \in \overline{D(0,r)} \big\} \hspace{5pt} Length(\sigma) \\ & \leq \max \big\{|\phi ^{'}(z)|: z \in \overline{D(0,r)} \big\} \hspace{5pt} 2r \end{align*} which is of course finite because $\phi ^{'}$ is continuous on the compact set $\overline{D(0,r)}$. \par We then have \begin{align*} |T(f_n)(z)-T(f)(z)| &= \big| \int_{\gamma_z}f_n(\xi)d\xi-\int_{\gamma_z}f(\xi)d\xi \hspace{3pt}\big| \\ & = \big|\int_{\gamma_z}(f_n(\xi)-f(\xi))d\xi \hspace{3pt}\big|\\ & \leq \|f_n-f\|_L \hspace{5pt} Length(\gamma_z)\\ & \leq \|f_n-f\|_L \hspace{5pt} \max \big\{|\phi ^{'}(z)|: z \in \overline{D(0,r)} \big\} \hspace{5pt} 2r \end{align*} Thus $\|T(f_n)-T(f)\|_K \leq \|f_n-f\|_L \hspace{5pt} \max \big\{|\phi ^{'}(z)|: z \in \overline{D(0,r)} \big\} \hspace{5pt} 2r \longrightarrow 0 $, $n\rightarrow \infty$.\\ In any case we have shown that $T(f_n)\longrightarrow T(f)$ uniformly on $K$. Since $K$ was an arbitrary compact subset of $\Omega$, the continuity of $T$ follows. \end{proof} \begin{corollary} \label{kprimitiveiscont} Let $k \geq 1$. The operator \begin{align*} T^{(k)}: \hol(\Omega)&\longrightarrow \hol(\Omega)\\ f& \mapsto T^{(k)}(f) \end{align*} is linear and continuous on $\hol(\Omega)$. \end{corollary} \begin{proof} We have that $T^{(k)}= T \circ T \circ \cdots \circ T$, the composition of $T$ with itself $k$ times.
Therefore linearity and continuity both follow from Proposition (\ref{integraliscontinuous}). \end{proof} \begin{corollary} \label{pointwisekprimitive} If $f_n \longrightarrow f$ uniformly on compact subsets of $\Omega$ and $k \geq 1$, then $T^{(k)}(f_n) \longrightarrow T^{(k)}(f)$ pointwise in $\Omega$. \end{corollary} \begin{proof} By the Weierstrass Theorem, $f \in \hol(\Omega)$. By Corollary (\ref{kprimitiveiscont}) we have that $T^{(k)}(f_n) \longrightarrow T^{(k)}(f)$ uniformly on compact subsets of $\Omega$ and therefore $T^{(k)}(f_n) \longrightarrow T^{(k)}(f)$ pointwise in $\Omega$. \end{proof} \begin{proposition} \label{b_k} Let $\Omega \subset \mathbb{C}$ be a simply connected domain and $k \geq 1$. The set $\mathcal{B}_k$ of all $f \in \hol(\Omega)$ such that $T^{(k)}(f)$ is bounded on $\Omega$ is a set of the first category in $\hol(\Omega)$. \end{proposition} \begin{proof} For $m \in \mathbb{N}$, define \begin{equation*} B_m= \big\{f \in \hol(\Omega): |T^{(k)}(f)(z)| \leq m \hspace{5pt} \text{ for all } z \in \Omega \big\} \end{equation*} Then $\mathcal{B}_k= \bigcup\limits_{m=1}^{\infty}B_m$. We will show that each $B_m$ is closed and nowhere dense in $\hol(\Omega)$. \par To see that it is closed, take a sequence $\{f_n\}$ in $B_m$ such that $f_n \longrightarrow f$ uniformly on compact subsets of $\Omega$. By Corollary (\ref{pointwisekprimitive}), $T^{(k)}(f_n) \longrightarrow T^{(k)}(f)$ pointwise in $\Omega$. Therefore, for $z \in \Omega$ we have that \begin{align*} |T^{(k)}(f)(z)| & \leq |T^{(k)}(f)(z)-T^{(k)}(f_n)(z)| + |T^{(k)}(f_n)(z)|\\ & \leq |T^{(k)}(f)(z)-T^{(k)}(f_n)(z)| +m \end{align*} Letting $n \rightarrow \infty$ we obtain $|T^{(k)}(f)(z)| \leq m$ and therefore $f \in B_m$. Thus, $B_m$ is closed.
\par To see that $B_m ^{\circ}= \emptyset$, first observe that there exists a function $g \in \hol(\Omega)$ such that $T^{(k)}(g)$ is unbounded on $\Omega$: indeed, if $\Omega $ is unbounded take $g(z)=1$, $z\in \Omega$, and if $\Omega $ is bounded take $\zeta_1 \in \partial \Omega$ and $g(z)=\frac{1}{(z-\zeta_1)^{k+1}}$. Now assume that $f \in B_m^{\circ}$. Then $f+\tfrac{1}{n} \hspace{2pt}g \longrightarrow f$ uniformly on compact subsets of $\Omega$, $n \rightarrow \infty$. Therefore, there exists an $n_0$ such that $f+\tfrac{1}{n_0} \hspace{2pt}g \in B_m$. By the linearity of $f \mapsto T^{(k)}(f)$ this means that \begin{equation*} |T^{(k)}(f)(z)+\tfrac{1}{n_0} \hspace{2pt} T^{(k)}(g)(z)|=|T^{(k)}(f+\tfrac{1}{n_0} \hspace{2pt}g)(z)|\leq m \end{equation*} for all $z \in \Omega$. But then \begin{align*} \tfrac{1}{n_0} \hspace{2pt}|T^{(k)}(g)(z)| & \leq |T^{(k)}(f)(z)+\tfrac{1}{n_0} \hspace{2pt} T^{(k)}(g)(z)|+|T^{(k)}(f)(z)| \\ & \leq m+m \end{align*} or $|T^{(k)}(g)(z)| \leq 2mn_0$, for all $z \in \Omega$, which contradicts the fact that $T^{(k)}(g)$ is unbounded on $\Omega$. Thus, $B_m^{\circ} = \emptyset $ and the proof is complete. \end{proof} For $f \in \holo$, where $\Omega \subset \mathbb{C}$ is a simply connected domain, we denote \[ f^{(k)} = \begin{cases} \text{the } k^{th} \text{ derivative of }f, & \text{if } k>0\\ f, & \text{if } k=0\\ T^{(-k)}(f),& \text{if } k<0 \end{cases} \] where $T^{(k)}(f)$ is as defined above. Collecting all the above results together we get \begin{theorem} \label{d} Let $\Omega \subset \mathbb{C}$ be a simply connected domain. Then the set $\mathcal{D}$ of all functions $f \in \hol(\Omega)$ with the property that $f^{(k)}$ is unbounded on $\Omega$ for all $k \in \mathbb{Z}$ is a dense $G_\delta$ subset of $\hol(\Omega)$.
\end{theorem} \begin{proof} For $k \in \mathbb{Z}$ define \begin{equation*} D_k = \big\{ f \in \hol(\Omega): f^{(k)} \hspace{3pt} \text{ unbounded on } \Omega \big\} \end{equation*} Then $\mathcal{D} = \bigcap \limits_{k \in \mathbb{Z}}D_k$. By Propositions (\ref{a_0}), (\ref{a_k}) and (\ref{b_k}) we have that each $D_k$ is a dense $G_\delta$ set in $\hol(\Omega)$, because its complement is a countable union of closed, nowhere dense sets in $\hol(\Omega)$. Since $\hol(\Omega)$ is a complete metric space, Baire's Theorem gives that any countable intersection of dense $G_\delta $ sets is again a dense $G_\delta $ set. \end{proof} At this point observe that Proposition (\ref{e}) and Theorem (\ref{d}) are immediate corollaries of Proposition (\ref{capuT_n}):\\ The operator \begin{align*} \Lambda: \holo & \rightarrow \holo \\ f& \mapsto f^{'} \end{align*} is linear and continuous by the Weierstrass Theorem.\\ If additionally $\Omega$ is simply connected, the same holds for the operator \begin{align*} \widetilde{ \Lambda}: \holo & \rightarrow \holo \\ f& \mapsto T(f) \end{align*} by Proposition (\ref{integraliscontinuous}), the primitive $T(f)$ of $f$ being defined as in the discussion preceding that same Proposition. \par Now define $\Lambda _k$ to be $k$ compositions of $\Lambda $ with itself, $k \geq 1$, $\Lambda_0$ to be the identity operator on $\holo$ and $\Lambda_k$ to be $(-k)$ compositions of $\widetilde{\Lambda}$ with itself, $k \leq -1$. Then each $\Lambda_k$ is linear and continuous in $\holo$ and, furthermore, $\mathcal{U}_{\Lambda_k} \neq \emptyset$, for all $k \in \mathbb{Z}$. Therefore, the set $\bigcap\limits_{k \in \mathbb{Z}}\mathcal{U}_{\Lambda_k}$ is a dense $G_\delta $ subset of $\holo$. But this is exactly the set $\mathcal{D}$ of Theorem (\ref{d}).\\ \section{Universality of operators related to the partial sums} Now assume that $\Omega$ is a domain in $\mathbb{C}$.
For $N \geq 0$ we define: \begin{align*} S_N: \holo & \rightarrow \hol(\Omega \times \mathbb{C}) \\ f & \mapsto S_N(f, \cdot)(\cdot)= S_N(f) \end{align*} where \begin{equation*} S_N(f,\zeta)(z)= \sum_{n=0}^{N}\frac{f^{(n)}(\zeta)}{n!}(z- \zeta)^n, \hspace{5pt} \zeta \in \Omega, z\in \mathbb{C} \end{equation*} Then $S_N$ is obviously linear. By the Weierstrass Theorem it is also continuous; indeed suppose $K=K_1 \times K_2$ is a compact subset of $\Omega \times \mathbb{C}$, where $K_1, K_2$ are compact subsets of $\Omega$ and $\mathbb{C}$ respectively (it suffices to consider such product sets, since every compact subset of $\Omega \times \mathbb{C}$ is contained in one), and $f_k \longrightarrow f$ uniformly on compact subsets of $\Omega$. Set $M = \max\limits_{(\zeta,z)\in K}|z- \zeta|$. Then, for $(\zeta,z) \in K$ we have that \begin{align*} | S_N(f_k,\zeta)(z)-S_N(f,\zeta)(z) |&= \Big|\sum_{n=0}^{N}\frac{f_k^{(n)}(\zeta)-f^{(n)}(\zeta)}{n!}(z-\zeta)^n \Big|\\ & \leq \sum_{n=0}^{N}\frac{|f_k^{(n)}(\zeta)-f^{(n)}(\zeta)|}{n!}|z-\zeta|^n \\ & \leq \sum_{n=0}^{N} \frac{\|f_k^{(n)}-f^{(n)}\|_{K_1}}{n!}M^n \end{align*} which means that \begin{equation*} \|S_N(f_k)-S_N(f)\|_{K} \leq \sum_{n=0}^{N} \frac{\|f_k^{(n)}-f^{(n)}\|_{K_1}}{n!}M^n \end{equation*} and therefore $S_N(f_k) \longrightarrow S_N(f)$ uniformly on $K$, for each $N=0,1,2,\ldots$ Now fix $\zeta_0 \in \Omega$ and, for $N \geq 0$, define \begin{align*} T_N: \holo & \rightarrow \hol(\mathbb{C}) \\ f & \mapsto S_N(f,\zeta_0)(\cdot) \end{align*} Then each $T_N$ is linear and continuous in $\holo$ and \begin{equation*} \mathcal{U}_{T_N} = \big\{ f \in \holo: S_N(f, \zeta_0) \text{ is unbounded in } \mathbb{C}\big\} \end{equation*} But $S_N(f, \zeta_0)$ is a polynomial, so it is bounded in $\mathbb{C}$ if and only if it is constant in $\mathbb{C}$. Therefore \begin{equation*} \mathcal{U}_{T_N} = \big\{ f \in \holo: S_N(f, \zeta_0) \text{ is non-constant in } \mathbb{C}\big\} \end{equation*} For $N=0$ we have that $S_N(f,\zeta_0)(z)= f(\zeta_0)$, $z \in \mathbb{C}$, so $\mathcal{U}_{T_N}= \emptyset$.
\par For $N \geq 1$, we have that \begin{equation*} S_N(f,\zeta_0)(z)= \sum_{n=0}^{N}\frac{f^{(n)}(\zeta_0)}{n!}(z- \zeta_0)^n \end{equation*} is constant if and only if $f^{'}(\zeta_0)= f^{''}(\zeta_0)=\cdots=f^{(N)}(\zeta_0)=0$. But there always exists a function $f \in \holo$ such that $f^{(k)}(\zeta_0) \neq 0$, for all $k \in \mathbb{N}$, for example $f(z)= e^z$. Therefore, $\mathcal{U}_{T_N} \neq \emptyset$, for all $N \geq 1$. By Proposition (\ref{capuT_n}) we have that the set $\bigcap\limits_{N=1}^{\infty}\mathcal{U}_{T_N}$ of all the functions $f \in \holo$ with the property that the function $S_N(f, \zeta_0)$ is unbounded in $\mathbb{C}$ for all $N \geq 1$, is a dense $G_\d$ set in $\holo$. \\ \par We mention that $\mathcal{U}_{T_1}$ is an open dense set in $\hol(\Omega)$ because $\mathcal{U}_{T_1}= \{f \in \holo : f^{'}(\zeta_0) \neq 0\}$. Similarly, $\mathcal{U}_{T_N} $ is also an open dense set in $\holo$, so $\bigcap\limits_{N=1}^{\infty} \mathcal{U}_{T_N}$ is a dense $G_\d$ set in $\holo$. So this corollary of Proposition (\ref{capuT_n}) is well known and obvious. A similar result holds if we replace $\mathbb{C}$ by any unbounded domain $\Omega_2$; in particular this holds for $\Omega_2 = \Omega$ if $\Omega$ is unbounded. \par Now fix $z=0$ and, for $N \geq 0$, define \begin{align*} \ttn :\holo & \rightarrow \holo \\ f & \mapsto S_N(f, \cdot)(0) \end{align*} Each $\ttn$ is linear and continuous in $\holo$. \par For $N=0$, we have that $S_0(f,\zeta)(0)=f(\zeta)$, $\zeta \in \Omega$, and therefore \begin{equation*} \mathcal{U}_{\ttn}= \big\{ f\in \holo: f \text{ is unbounded in } \Omega \big\} \end{equation*} which is a dense $G_\d$ set in $\holo $ by Proposition (\ref{a_0}). \par For $N \geq 1$, if $\Omega = \mathbb{C}$, take $f(z)= e^z$, $z \in \mathbb{C}$. Since $z \mapsto e^z$ dominates the polynomials in $\mathbb{C}$, we have that $S_N(f,\zeta)(0)$ is unbounded in $\mathbb{C}$.
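The unboundedness in the case $\Omega = \mathbb{C}$ can also be checked numerically. The following sketch is our own illustration (the helper name is not from the text): since every derivative of $\exp$ equals $\exp$, we have $S_N(\exp,\zeta)(0)=\sum_{n=0}^{N} e^{\zeta}(-\zeta)^n/n!$, and along the positive real axis the factor $e^{\zeta}$ dominates the polynomial factor.

```python
import math

def sn_exp_at_zero(zeta, N):
    # S_N(exp, zeta)(0) = sum_{n=0}^{N} exp(zeta) * (0 - zeta)^n / n!,
    # using that every derivative of exp equals exp.
    return sum(math.exp(zeta) * (-zeta) ** n / math.factorial(n)
               for n in range(N + 1))

# Along the positive real axis the magnitudes grow without bound:
# e^zeta dominates the polynomial partial sum of e^{-zeta}.
values = [abs(sn_exp_at_zero(zeta, 3)) for zeta in (5.0, 10.0, 20.0, 40.0)]
print(values)
```

The printed magnitudes increase rapidly with $\zeta$, consistent with $S_N(f,\cdot)(0)$ being unbounded in $\mathbb{C}$.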
If $\Omega \neq \mathbb{C}$, take $\zeta_1 \in \partial \Omega$ and $f(z)= \tfrac{1}{z-\zeta_1}$, $z \in \Omega$. Then $f \in \holo$ and \begin{equation*} S_N(f,\zeta)(0)= \sum_{n=0}^{N} \frac{\zeta ^n}{(\zeta-\zeta_1)^{n+1}}, \hspace{5pt} \zeta \in \Omega \end{equation*} which is a rational function with a pole only at $\zeta= \zeta_1$. Hence $\lim \limits_{\zeta\rightarrow\zeta_1}|S_N(f,\zeta)(0)| = \infty$ and $S_N(f,\cdot)(0)$ is unbounded in $\Omega$.\\ Therefore, $\mathcal{U}_{\ttn} \neq \emptyset$ for all $N \geq 0$, so by Proposition (\ref{capuT_n}) we have that the set $\bigcap\limits_{N=0}^{\infty}\mathcal{U}_{\ttn}$ of all functions $f \in \holo$ with the property that $S_N(f,\cdot)(0)$ is unbounded in $\Omega$ for all $N \geq 0$, is a dense $G_\d$ set in $\holo$. \par Next we consider the following class $\so$ of functions on $\Omega$: \begin{definition} \label{so def} Let $\Omega$ be an open, non-empty subset of $\mathbb{C}$. We define $\so$ to be the set of all functions $f \in \holo$ such that $\big\{ \ttn(f)\big\}_{N \geq 0}$ is dense in $\holo$. \end{definition} From now on and unless otherwise stated we assume that $\Omega$ is a simply connected domain in $\mathbb{C}$. Our goal is to show that either $\so = \emptyset$ or $\so$ is a dense $G_\d$ set in $\holo$. To this end, first observe that $\holo$ is separable: the set $\{p_j\}_j$ of all polynomials with coefficients having rational coordinates is dense in $\holo$ by the Runge Theorem. Now consider an exhaustive sequence $\{K_m\}_m$ of compact subsets of $\Omega$, i.e.
a sequence $\{K_m\}_m$ of compact subsets of $\Omega$ such that \begin{enumerate} \item $\Omega = \bigcup\limits_{m=1}^{\infty}K_m $ \item $K_m$ lies in the interior of $K_{m+1}$, for $m=1,2,...$ \item Every compact subset of $\Omega$ lies in some $K_m$ \item Every component of $K_m^{\mathsf{c}}$ contains a component of $\Omega^{\mathsf{c}}$, $m=1,2,...$ \end{enumerate} (See \cite{Rudin:1987:RCA:26851}) \par Now we can express $\so$ as a set which will be shown to be a $G_\d$ one in $\holo$: \begin{proposition} \label{so=} $ \so = \bigcap\limits_{s,j,m=1}^{\infty}\bigcup\limits_{N=0}^{\infty}\big\{ f \in \holo: \sup\limits_{\zeta \in K_m}|\ttn (f)(\zeta)-p_j(\zeta)| < \frac{1}{s}\big\}$ \end{proposition} \begin{proof} That $\so$ is a subset of the set on the right is an immediate consequence of the definition of $\so$. \par Consider now a function $f$ in the set on the right, a function $g \in \holo$, a compact subset $K$ of $\Omega$ and an $\epsilon >0$. There exists an $m \geq 1$ such that $K \subset K_m$ and an $s \geq 1$ such that $\tfrac{1}{s}< \epsilon$. For these $g$, $K_m$ and $s$, there exists a $j \geq 1$ such that \begin{equation*} \sup\limits_{\zeta \in K} |p_j(\zeta)-g(\zeta)| \leq \sup\limits_{\zeta \in K_m} |p_j(\zeta)-g(\zeta)|< \tfrac{1}{2s} \end{equation*} For these $K_m$, $s$ and $j$, there exists an $N \geq 0$ such that \begin{equation*} \sup\limits_{\zeta \in K} |\ttn(f)(\zeta)- p_j(\zeta)| \leq \sup\limits_{\zeta \in K_m} |\ttn(f)(\zeta)- p_j(\zeta)|< \tfrac{1}{2s} \end{equation*} By the triangle inequality, for $z \in K$, we have \begin{align*} |\ttn(f)(z)- g(z)|& \leq |\ttn(f)(z)- p_j(z)|+|p_j(z)-g(z)| \\ & \leq \sup\limits_{\zeta \in K} |\ttn(f)(\zeta)- p_j(\zeta)| +\sup\limits_{\zeta \in K} |p_j(\zeta)-g(\zeta)|\\ &< \frac{1}{2s} + \frac{1}{2s} \end{align*} Therefore, $\sup\limits_{\zeta \in K} |\ttn(f)(\zeta)- g(\zeta)| \leq \tfrac{1}{s}< \epsilon$, so $\{\ttn(f)\}_{N \geq 0}$ is dense in $\holo$.
\end{proof} \begin{proposition} \label{so g_d} $\so$ is a $G_\d$ set in $\holo$. \end{proposition} \begin{proof} By Proposition (\ref{so=}), it suffices to show that, for $j,s,m \geq 1$ and $N \geq 0$, the set \begin{equation*} E_{j,s,m,N}:= \big\{f \in \holo: \sup\limits_{\zeta \in K_m}|\ttn (f)(\zeta)-p_j(\zeta)| < \frac{1}{s} \big\} \end{equation*} is open in $\holo$. \par To this end, consider functions $g_k \in \holo$, $k \geq 1$, and $g \in E_{j,s,m,N}$ such that $g_k \longrightarrow g$ uniformly on compact subsets of $\Omega$. It suffices to find a $k_0$ such that $g_k \in E_{j,s,m,N}$, for all $k \geq k_0$. Since $g \in E_{j,s,m,N}$, there exists a $\d >0$ such that \begin{equation*} \sup\limits_{\zeta \in K_m} |\ttn (g)(\zeta)-p_j(\zeta)|< \tfrac{1}{s}- 2\d \end{equation*} Set $M = \max \big\{e^{|\zeta|}:\zeta \in K_m \big\}$. By the Weierstrass Theorem we have that $g_k^{(i)} \longrightarrow g^{(i)}$ uniformly on compact subsets of $\Omega$, $i=0,1,\ldots,N$, so there exists a $k_0 \in \mathbb{N}$ such that \begin{equation*} \|g_k^{(i)}- g^{(i)}\|_{K_m}< \frac{\d}{M} \end{equation*} for all $i=0,1,\ldots,N$.
Therefore, for $z \in K_m$ and $k \geq k_0$ we have \begin{align*} |\ttn (g_k)(z)-p_j(z)|& \leq |\ttn (g_k)(z)-\ttn(g)(z)|+|\ttn (g)(z)-p_j(z)| \\ &=\Big| \sum\limits_{n=0}^{N}\frac{g_k^{(n)}(z)-g^{(n)}(z)}{n!}(-z)^n \Big|+|\ttn (g)(z)-p_j(z)|\\ &\leq \sum\limits_{n=0}^{N}\frac{|g_k^{(n)}(z)-g^{(n)}(z)|}{n!}|z|^n +\sup\limits_{\zeta \in K_m} |\ttn (g)(\zeta)-p_j(\zeta)| \\ &< \sum_{n=0}^{N}\frac{\|g_k^{(n)}-g^{(n)}\|_{K_m}}{n!}|z|^n +\frac{1}{s} -2\d \\ &<\frac{\d}{M} \sum_{n=0}^{N}\frac{|z|^n}{n!} +\frac{1}{s} -2\d \\ & \leq \frac{\d}{M} \sum_{n=0}^{\infty}\frac{|z|^n}{n!} +\frac{1}{s} -2\d \\ &=\frac{\d}{M} e^{|z|} +\frac{1}{s} -2\d \\ & \leq \frac{\d}{M} \hspace{3pt} M +\frac{1}{s} -2\d \\ &=\frac{1}{s}-\d \end{align*} Since $z \in K_m$ was arbitrary, we have that \begin{equation*} \sup\limits_{\zeta \in K_m} |\ttn (g_k)(\zeta)-p_j(\zeta)|\leq \frac{1}{s}- \d <\frac{1}{s} \end{equation*} for all $k \geq k_0$. Hence $g_k \in E_{j,s,m,N}$, $k \geq k_0$. This completes the proof. \end{proof} \begin{proposition} \label{so either or} Let $\Omega$ be a simply connected domain in $\mathbb{C}$. Either $\so = \emptyset$ or $\so$ is a dense $G_\d$ set in $\holo$. \end{proposition} \begin{proof} If $\so \neq \emptyset$, by Proposition (\ref{so g_d}) it suffices to show that $\so$ is dense in $\holo$. \par Let $f \in \so$. Observe that, if $p$ is a polynomial, then $f+p \in \so$. Indeed, $f+p \in \holo$ and, for all $N> \deg p$, we have that $\ttn(f+p)= \ttn(f)+q_p$, where \begin{equation*} q_p(\zeta)= \sum_{n=0}^{N}\frac{(-1)^n p^{(n)}(\zeta)}{n!}\zeta ^n, \hspace{10pt} \zeta \in \Omega \end{equation*} is again a polynomial (in fact $q_p(\zeta)=p(0)$ for all $\zeta$, since for $N \geq \deg p$ the Taylor polynomial of $p$ of degree $N$ centered at $\zeta$ is $p$ itself). For a function $g \in \holo$, we have that $g-q_p \in \holo$, and therefore there exists a sequence $\{\lambda_n\}$ in $\mathbb{N}$ such that $\ttln (f) \longrightarrow g- q_p$ uniformly on compact subsets of $\Omega$. But then $\ttln(f+p)=\ttln(f)+q_p \longrightarrow g$ uniformly on compact subsets of $\Omega$, i.e.
$\{\ttn(f+p)\}$ is dense in $\holo$ and $f+p \in \so$. \\ Now the density of $\so$ in $\holo$ follows easily: the set $\{f+p : p \text{ a polynomial}\}$ is contained in $\so$, and it is dense in $\holo$ because, by Runge's Theorem, the polynomials are dense in $\holo$. \end{proof} At this point observe that, if $0 \in \Omega$, then $\so = \emptyset$. Indeed, for $f,g \in \holo$ such that $f(0) \neq g(0)$, we have that, for any $N \in \mathbb{N}$ and any compact subset $L$ of $\Omega$ such that $0 \in L$, \begin{equation*} \sup\limits_{\zeta \in L}|\ttn(f)(\zeta)-g(\zeta)| \geq |\ttn(f)(0)-g(0)|=|f(0)-g(0)|>0 \end{equation*} so there is no subsequence of $\{\ttn(f)\}$ that converges to $g$ uniformly on compact subsets of $\Omega$. \\ \begin{definition} Let $\Omega$ be open in $\mathbb{C}$. The set $\sto$ is the set of all $f \in \holo$ with the property that, for every $c \in \mathbb{C}$ there exists a sequence $\{\lambda_n\}$ in $\mathbb{N}$ such that, for every $L \subset \Omega$ compact, \begin{equation*} \sup\limits_{\zeta \in L }|\widetilde{T}_{\lambda_n}(f)(\zeta)-c| \longrightarrow 0, \hspace{7pt} n \rightarrow \infty \end{equation*} \end{definition} \begin{proposition} The set $\sto$ is a $G_\d$ set in $\holo$. \end{proposition} \begin{proof} Let $\{z_j\}_{j \in \mathbb{N}}$ be an enumeration of the points in the complex plane with rational coordinates. Following the proof of Propositions (\ref{so=}) and (\ref{so g_d}), we get that \begin{equation*} \sto = \bigcap\limits_{s,j,m=1}^{\infty}\bigcup\limits_{N=0}^{\infty}\big\{ f \in \holo: \sup\limits_{\zeta \in K_m}|\widetilde{T}_{N} (f)(\zeta)-z_j| < \frac{1}{s}\big\} \end{equation*} and that the set \begin{equation*} \big\{ f \in \holo: \sup\limits_{\zeta \in K_m}|\widetilde{T}_{N} (f)(\zeta)-z_j| < \frac{1}{s}\big\} \end{equation*} is open in $\holo$, $m,j,s \geq 1$, $N \geq 0$. \end{proof} Observe again that, if $0 \in \Omega$, then $\sto = \emptyset$.
Indeed, for $f \in \holo$, $c \in \mathbb{C}$ with $f(0) \neq c$ and $L \subset \Omega$ compact with $0 \in L$, we have that \begin{equation*} \sup\limits_{\zeta \in L}|\widetilde{T}_N(f)(\zeta)-c| \geq |\widetilde{T}_N(f)(0)-c|= |f(0)-c| >0 \end{equation*} for all $N \in \mathbb{N}$. However, when $\Omega $ is a simply connected domain and $0 \notin \Omega$, we can show that $\sto$ contains a dense $G_\d$ set in $\holo$: \begin{theorem} \label{sto dense gd} Let $\Omega $ be a simply connected domain with $0 \notin \Omega$. Then $\sto$ contains a dense $G_\d$ set in $\holo$. \end{theorem} \begin{proof} Since $\Omega$ is a simply connected domain, the class $\uo$ is a dense $G_\d$ set in $\holo$. We will show that $\uo \subset \sto$. \par Let $f \in \uo$ and $c \in \mathbb{C}$. Take $K=\{0\}$, which is disjoint from $\Omega$ because $0 \notin \Omega$. Then $K$ is a compact set in $\mathbb{C}$, $K\cap \Omega = \emptyset$, $K^{\mathsf{c}}$ is connected, and the function $h(z)=c$, $z \in K$, is continuous on $K$ and (trivially) analytic in the interior of $K$. By definition of the class $\uo$, there exists a sequence $\{\lambda_n\}$ in $\mathbb{N}$ such that, for every compact set $L \subset \Omega$, \begin{equation*} \sup\limits_{\zeta \in L}\sup\limits_{z \in K}|S_{\lambda_n}(f,\zeta)(z)-h(z)| \longrightarrow 0, \hspace{10pt} n \rightarrow \infty \end{equation*} or \begin{equation*} \sup\limits_{\zeta \in L}|S_{\lambda_n}(f,\zeta)(0)-c| \longrightarrow 0, \hspace{10pt} n \rightarrow \infty \end{equation*} But this is exactly \begin{equation*} \sup\limits_{\zeta \in L}|\widetilde{T}_{\lambda_n}(f)(\zeta)-c| \longrightarrow 0, \hspace{10pt} n \rightarrow \infty \end{equation*} Therefore, $f \in \sto$. This completes the proof. \end{proof} \section{A more general statement} \par During a seminar on these topics, T. Hatziafratis posed the following question: Let $E$ be a countable dense subset of $\mathbb{T}=\{z \in \mathbb{C}: |z|=1 \}$.
Is it true that, for the generic function $f \in \hol(\mathbb{D})$, all the derivatives and anti-derivatives of $f$ are unbounded on every radius joining $0$ to a point of $E$? \par The answer to this question is affirmative. To see this, we examine a more general case: \begin{proposition} \label{general either or} Let $\Omega \subset \mathbb{C}$ be an open set, $X$ a non-empty subset of $\Omega$. \newline If $T: \holo \rightarrow \holo$ is a linear operator with the property that, for every $z \in \Omega$, the mapping $ \holo \ni f \mapsto T(f)(z) \in \mathbb{C}$ is continuous, and \begin{equation*} S=S(T,\Omega,X)=\{f \in \holo: T(f) \text{ is unbounded on } X\}, \end{equation*} then either $S = \emptyset$ or $S$ is a dense $G_\d$ set in $\holo$. \end{proposition} \begin{proof} \par To show that $S$ is a $G_\d$ set, for $m \geq 1$, define \begin{equation*} S_m=\{f \in \holo: \exists z \in X \text{ such that } |T(f)(z)|>m \} \end{equation*} Then $S= \bigcap\limits_{m=1}^{\infty} S_m$. Since the mapping $f \mapsto T(f)(z)$ is continuous, the set $S_m$ is open in $\holo$, for each $m \geq 1$. Hence, $S$ is a $G_\d$ set in $\holo$. \par To show that $S$ is dense in $\holo$ if it is not empty, let $g \in S$, i.e. $g \in \holo$ and $T(g)$ is unbounded on $X$, and let $f \in \holo$. If $T(f)$ is unbounded on $X$, then $f \in S$ and $f$ is (trivially) the limit in $\holo$ of a sequence of functions in $S$. If $T(f)$ is bounded on $X$ by, say, $M_1$, then, for a fixed $n \geq 1$, the function $T(f+\frac{1}{n} \hspace{2pt}g)$ is unbounded on $X$. Indeed, suppose it is bounded on $X$ by a positive number $M_2$. 
Then, if $z \in X$, by the linearity of $T$ we would have \begin{align*} |T(g)(z)| & = n \hspace{2pt} |T(\frac{1}{n} \hspace{2pt} g)(z)| \\ & = n \hspace{2pt} |T(f+\frac{1}{n} \hspace{2pt} g)(z)- T(f)(z)|\\ &\leq n \hspace{2pt} |T(f+\frac{1}{n} \hspace{2pt} g)(z)|+n \hspace{2pt}|T(f)(z)|\\ &\leq n \hspace{2pt} M_2 +n \hspace{2pt} M_1 \end{align*} But this means that $T(g)$ is bounded on $X$ by $n \hspace{2pt}(M_1+M_2)$, which contradicts the fact that $T(g)$ is unbounded on $X$. Therefore, $T(f+\frac{1}{n} \hspace{2pt}g)$ is unbounded on $X$ for every $n \geq 1$; in other words $f+\frac{1}{n} \hspace{2pt}g \in S$, for every $n \geq 1$. But $f+\frac{1}{n} \hspace{2pt}g \longrightarrow f$, $n \rightarrow \infty$, uniformly on compact subsets of $\Omega$, so $f$ is again the limit in $\holo$ of a sequence of functions in $S$. Since $f$ was an arbitrary function in $\holo$, $S$ is dense in $\holo$ and the proof is complete. \end{proof} \par Consider now countably many operators $T^{(k)}$ and sets $X_m$ such that $S(T^{(k)}, \Omega, X_m) \neq \emptyset$, for all $k,m$. Then Baire's Theorem gives that $\bigcap\limits_{k,m}S(T^{(k)}, \Omega, X_m)$ is a dense $G_\d$ set in $\holo$. This answers the aforementioned question in the affirmative, because if $\zeta_m \in E$ and $X_m$ is the radius joining $0$ to $\zeta_m$, then the function $g(z)= \frac{1}{z-\zeta_m}$, $z \in \mathbb{D}$, belongs to $S(T^{(k)},\mathbb{D},X_m)$ for all $k \geq 0$, where $T$ is the differentiation operator. \par More generally, we can replace $\mathbb{D}$ with any open non-empty set $\Omega$ in $\mathbb{C}$, $T$ being the differentiation operator and $X_m \subset \Omega$ having at least one accumulation point in $\partial \Omega$. If $\Omega $ is simply connected, then we obtain the analogous result for both the integration operator and the operator related to Taylor partial sums $\ttn$ that was defined before.
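To make the role of the witness function concrete, here is a small numerical sketch (our own illustration, with a hypothetical choice of boundary point $\zeta_m$) of why every derivative of $g(z)=\frac{1}{z-\zeta_m}$ is unbounded along the radius to $\zeta_m$: the closed form $g^{(k)}(z)=(-1)^k k!/(z-\zeta_m)^{k+1}$ gives $|g^{(k)}(r\zeta_m)| = k!/(1-r)^{k+1} \to \infty$ as $r \to 1^-$.

```python
import cmath
import math

# Hypothetical boundary point on the unit circle.
zeta_m = cmath.exp(1j * math.pi / 4)

def g_derivative(z, k):
    # Closed form of the k-th derivative of g(z) = 1/(z - zeta_m):
    # g^{(k)}(z) = (-1)^k * k! / (z - zeta_m)^(k+1).
    return (-1) ** k * math.factorial(k) / (z - zeta_m) ** (k + 1)

# Moving along the radius r * zeta_m towards the boundary, the modulus
# of every derivative grows like k! / (1 - r)^(k+1).
for k in range(3):
    mags = [abs(g_derivative(r * zeta_m, k)) for r in (0.9, 0.99, 0.999)]
    print(k, mags)
```

For each fixed $k$ the printed magnitudes blow up as $r$ approaches $1$, which is exactly the unboundedness on $X_m$ used above.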
\par Observing that in the proof of Proposition (\ref{general either or}) no properties of $\holo$ were used other than those of a topological vector space, we can obtain a more general statement, in which completeness is not assumed and the proof does not use Baire's Theorem: \begin{proposition} \label{either or topological vs} Let $\mathcal{V}$ be a topological vector space over the field $\mathbb{R}$ or $\mathbb{C}$ and $X$ a non-empty set. Denote by $F(X)$ the set of all complex-valued functions on $X$ and consider a linear operator $T: \mathcal{V} \rightarrow F(X)$ with the property that, for all $x \in X$, the mapping $ \mathcal{V} \ni \alpha \mapsto T(\alpha)(x) \in \mathbb{C}$ is continuous. Let $S=\{\alpha \in \mathcal{V}: T(\alpha) \text{ is unbounded on } X\}$. Then either $S= \emptyset$ or $S$ is a dense $G_\d$ set in $\mathcal{V}$. \end{proposition} \begin{proof} That $S$ is a $G_\d$ set follows from the fact that $S = \bigcap\limits_{m=1}^{\infty} \bigcup\limits_{x \in X}\{\alpha \in \mathcal{V}: |T(\alpha)(x)|> m\}$ and the continuity of $\alpha \mapsto T(\alpha)(x)$. The proof that $S$ is dense if it is non-empty is identical to the proof of Proposition (\ref{general either or}). \end{proof} \vspace{3pt} \textit{Acknowledgement}--- The topics discussed in this article were suggested by V. Nestoridis. I would like to thank him for the guidance and the insightful suggestions offered. I would also like to thank T. Hatziafratis for taking an interest in the topics discussed.
\section{ATTACK TECHNIQUES} In this section, we revise the definition of Byzantine tolerance in distributed synchronous SGD. Then, we theoretically analyze the Byzantine tolerance of coordinate-wise median and Krum, and show that under certain conditions, these two robust aggregation rules are no longer Byzantine-tolerant. \subsection{INNER PRODUCT MANIPULATION} In the previous work on Byzantine-tolerant SGD algorithms, most of the robust aggregation rules only guarantee that the robust estimator is not arbitrarily far away from the mean of the correct gradients. In other words, the distance between the robust estimator and the correct mean is upper-bounded. However, for gradient descent algorithms, to guarantee the descent of the loss, the inner product between the true gradient and the robust estimator must be non-negative: \begin{align*} \ip{\nabla F(x)}{{\tt Aggr}(\{\tilde{v}_i: i \in [m]\})} \geq 0, \end{align*} so that at least the loss will not increase in expectation. In particular, a bounded distance is not enough to guarantee robustness if the attackers manipulate the Byzantine gradients and make the inner product negative. The intuition underlying the inner product manipulation attack is that, when the gradient descent algorithm converges, the gradient $\nabla F(x^t)$ approaches $0$. Thus, even if the distance between the robust estimator and the correct mean is bounded, it is still possible to manipulate their inner product to be negative, especially when the upper bound of such distance is large. We formally define a revised version of Byzantine tolerance for distributed synchronous SGD~(DSSGD-Byzantine tolerance): \begin{definition} (DSSGD-Byzantine Tolerance) Without loss of generality, suppose that in a specific iteration, the server receives $(m-q)$ correct gradients $\mathcal{V} = \{v_1, \ldots, v_{m-q}\}$ and $q$ Byzantine gradients $\mathcal{U} = \{u_1, \ldots, u_q\}$.
We assume that the correct gradients have the same expectation ${\mathbb{E}}[v_i] = g, \forall i \in [m-q]$. An aggregation rule ${\tt Aggr}(\cdot)$ is said to be DSSGD-Byzantine-tolerant if \begin{align*} \ip{g}{\quad {\mathbb{E}}\left[ {\tt Aggr}(\mathcal{V} \cup \mathcal{U}) \right]} \geq 0. \end{align*} \end{definition} With this revised definition, we now theoretically analyze the DSSGD-Byzantine tolerance of coordinate-wise median and Krum. \begin{remark} Note that we do not argue that the theoretical guarantees in the previous work are wrong. Instead, our claim is that the theoretical guarantees on the bounded distances are not enough to secure distributed synchronous SGD. In particular, DSSGD-Byzantine tolerance is different from the Byzantine tolerance proposed in previous work. \end{remark} \subsection{COORDINATE-WISE MEDIAN} The following theorem shows that under certain conditions, \texttt{Median} is not DSSGD-Byzantine-tolerant. \begin{theorem} \label{thm:median} We consider the worst case where $m-2q = 1$. The server receives $(m-q)$ correct gradients $\mathcal{V} = \{v_1, \ldots, v_{m-q}\}$ and $q$ Byzantine gradients $\mathcal{U} = \{u_1, \ldots, u_q\}$. We assume that the stochastic gradients have identical expectation ${\mathbb{E}}[v_i] = g, \forall i \in [m-q]$, and non-zero coordinate-wise variance ${\mathbb{E}}[ \left( (v_i)_j - g_j \right)^2 ] \geq \sigma^2, \forall i \in [m-q], j \in [d]$, where $(v_i)_j$ is the $j$th coordinate of $v_i$, and $g_j$ is the $j$th coordinate of $g$. When $\max_{j \in [d]} | g_j | < \frac{\sigma}{\sqrt{m-q-1}}$, there exist Byzantine gradients $\mathcal{U} = \{u_1, \ldots, u_q\}$ such that \begin{align*} \ip{g}{\quad {\mathbb{E}}\left[ {\tt Median}(\mathcal{V} \cup \mathcal{U}) \right]} < 0. \end{align*} \end{theorem} \begin{proof} (sketch) Since the median is taken independently in each coordinate, it is sufficient to prove Byzantine vulnerability for a single coordinate, i.e., for scalars.
Thus, for convenience, with a slight abuse of notation, we suppose that the correct gradients $\mathcal{V} = \{v_1, \ldots, v_{m-q}\}$ and the $q$ Byzantine gradients $\mathcal{U} = \{u_1, \ldots, u_q\}$ are all scalars. We only need to show that under certain attacks, the aggregated value ${\tt Median}(\mathcal{V} \cup \mathcal{U})$ has a different sign than $\sum_{i \in [m-q]} v_i$. Without loss of generality, we assume that $g = \frac{1}{m-q} \sum_{i \in [m-q]} {\mathbb{E}}[v_i] > 0$ (the mirror case can be easily proved with a similar procedure). The Byzantine gradients are all assigned negative values: $u_i < 0, \forall i \in [q]$. Furthermore, we make the Byzantine gradients small enough such that $u_i < \min(\mathcal{V}), \forall i \in [q]$. By sorting the correct gradients, we can define the sequence $\{v_{1:m-q}, \ldots, v_{m-q:m-q}\}$, where $v_{i:m-q}$ is the $i$th smallest element in $\{v_1, \ldots, v_{m-q}\}$: \begin{align*} v_{1:m-q} \leq v_{2:m-q} \leq \cdots \leq v_{m-q:m-q}. \end{align*} We also define the expectation of the $i$th smallest element: $\mu_{i:m-q} = {\mathbb{E}}[ v_{i:m-q} ]$. Then, it is easy to check that ${\tt Median}(\mathcal{V} \cup \mathcal{U}) = v_{1:m-q}$, and ${\mathbb{E}} \left[ {\tt Median}(\mathcal{V} \cup \mathcal{U}) \right] = \mu_{1:m-q}$. Using Theorem 1(b) from \cite{hawkins1971bounds} (equiv. 9(a) from \cite{arnold1979bounds}), we have \begin{align*} \mu_{1:m-q} \leq g - \frac{\sigma}{\sqrt{m-q-1}}. \end{align*} Thus, when $g < \frac{\sigma}{\sqrt{m-q-1}}$, ${\mathbb{E}} \left[ {\tt Median}(\mathcal{V} \cup \mathcal{U}) \right]$ is negative. \end{proof} \begin{remark} When gradient descent converges, the expectation of the gradient $g$ approaches $0$. Furthermore, since the gradients produced by the correct workers are stochastic, the variance always exists. Thus, eventually, the condition $\max_{j \in [d]} | g_j | < \frac{\sigma}{\sqrt{m-q-1}}$ will be satisfied.
To make things worse, the closer SGD approaches a critical point, the less likely the coordinate-wise median is to be DSSGD-Byzantine-tolerant. \end{remark} \begin{remark} \label{rmk:median} The proof of Theorem~\ref{thm:median} provides the intuition for constructing adversarial gradients for the attackers. In practice, in each coordinate, the attackers only need to guarantee that all the Byzantine values are much smaller than the smallest correct value if the correct expectation is positive, or much larger than the largest correct value if the correct expectation is negative. If the variance is large enough, the smallest/largest value then has the opposite sign to the correct expectation, and the attackers can successfully manipulate the aggregated value into the opposite direction to the correct expectation. \end{remark} \subsubsection{Toy Example} We provide a 1-dimensional toy example to illustrate how easily \texttt{Median} can fail. Suppose there are $3$ correct gradients $\mathcal{V} = \{-0.1, 0.1, 0.3\}$ with the mean $0.1$, and $2$ Byzantine gradients $\mathcal{U} = \{-4, -2\}$ with the negative mean $-3$. According to Definition~\ref{def:marmed}, it is easy to check that ${\tt Median}(\mathcal{U} \cup \mathcal{V}) = -0.1$, which means that \texttt{Median} produces a value with the opposite sign of the mean of the correct gradients. \subsection{KRUM} The following theorem proves that under certain conditions, Krum is not DSSGD-Byzantine-tolerant. Note that Krum requires that $m-2q > 2$. \begin{theorem} \label{thm:krum} We consider the worst case where $m-2q = 3$. The server receives $(m-q)$ correct gradients $\mathcal{V} = \{v_1, \ldots, v_{m-q}\}$ and $q$ Byzantine gradients $\mathcal{U} = \{u_1, \ldots, u_q\}$. We assume that the stochastic gradients have identical expectation ${\mathbb{E}}[v_i] = g, \forall i \in [m-q]$. We define the mean of the correct gradients $\bar{v} = \frac{1}{m-q} \sum_{i \in [m-q]} v_i$.
We assume that the correct gradients are bounded by $\|v_i - \bar{v}\|^2 \leq \|\bar{v}\|^2, \forall i \in [m-q]$. Furthermore, we assume that $v_i \neq v_j, \forall i \neq j, i,j \in [m-q]$, and $\exists \beta$ such that $\|v_i - v_j\|^2 \geq \beta^2, \forall i \neq j, i,j \in [m-q]$. We take $u_1 = u_2 = \cdots = u_q = - \epsilon \bar{v}$, where $\epsilon$ is a small positive constant value such that $\epsilon^2 \|\bar{v}\|^2 \leq \beta^2$. When $(m-q)$ is large enough: $m-q > \frac{2(\epsilon+2)^2}{\epsilon^2} + 2$, we have \begin{align*} \ip{g}{\quad {\mathbb{E}}\left[ {\tt Krum}(\mathcal{V} \cup \mathcal{U}) \right]} < 0. \end{align*} \end{theorem} \begin{proof} (sketch) For any $u \in \mathcal{U}$, $u = -\epsilon \bar{v}$, where $\bar{v} = \frac{1}{m-q} \sum_{i \in [m-q]} v_i$. Since all the Byzantine gradients in $\mathcal{U}$ are identical, the nearest $(m-q-4)$ neighbours of $u$ must belong to $\mathcal{U}$. The remaining $(m-q-2) - (m-q-4) = 2$ nearest neighbours must belong to the set of correct gradients $\mathcal{V}$. Thus, we have \begin{align*} KR(u) \leq 2 \| \bar{v} + \bar{v} + \epsilon \bar{v} \|^2 = 2(\epsilon+2)^2 \|\bar{v}\|^2. \end{align*} For any correct gradient $v \in \mathcal{V}$, there are two cases: \begin{itemize} \item \textbf{Case 1:} There are some $u \in \mathcal{U}$ which belong to the $(m-q-2)$ nearest neighbours of $v$. Suppose there are $a_1$ nearest neighbours in $\mathcal{V}$ and $a_2$ nearest neighbours in $\mathcal{U}$, where $a_1 + a_2 = m-q-2$. Since the correct gradients are bounded by $\|v_i - \bar{v}\|^2 \leq \|\bar{v}\|^2, \forall i \in [m-q]$, it is easy to check that $\|v-u\|^2 \geq \epsilon^2 \|\bar{v}\|^2$. Thus, we have \begin{align*} KR(v) \geq a_1 \beta^2 + a_2 \|v-u\|^2 \geq (m-q-2) \epsilon^2 \|\bar{v}\|^2. \end{align*} \item \textbf{Case 2:} There are no $u \in \mathcal{U}$ which belong to the $(m-q-2)$ nearest neighbours of $v$.
Thus, we have \begin{align*} KR(v) \geq (m-q-2) \beta^2 \geq (m-q-2) \epsilon^2 \|\bar{v}\|^2. \end{align*} \end{itemize} In both cases, we have $ KR(v) \geq (m-q-2) \epsilon^2 \|\bar{v}\|^2. $ Thus, when $(m-q)$ is large enough: $m-q > \frac{2(\epsilon+2)^2}{\epsilon^2} + 2$, we have \begin{align*} KR(u) &\leq 2(\epsilon+2)^2 \|\bar{v}\|^2 < (m-q-2) \epsilon^2 \|\bar{v}\|^2 \\ &\leq KR(v). \end{align*} As a result, ${\tt Krum}(\mathcal{V} \cup \mathcal{U}) = u = -\epsilon \bar{v}$. Thus, ${\mathbb{E}}\left[ {\tt Krum}(\mathcal{V} \cup \mathcal{U}) \right] = -\epsilon g$. \end{proof} \begin{remark} In the theorem above, we assume that all the correct gradients are inside a Euclidean ball centered at their mean: $\|v_i - \bar{v}\|^2 \leq \|\bar{v}\|^2, \forall i \in [m-q]$. Such an assumption cannot always be satisfied, but it is reasonable that the random samples are sometimes inside such a Euclidean ball, if the variance is not too large. On the other hand, we assume that the pair-wise distances between the correct gradients are lower-bounded by $\beta > 0$. Almost surely, such a $\beta$ exists, no matter how small it is. Note that the Byzantine attackers are supposed to be omniscient. Thus, the attackers can spy on the honest workers, and obtain $\mathcal{V}$ and $\beta$. Then, the attackers can choose an $\epsilon$ such that $\epsilon^2 \|\bar{v}\|^2 \leq \beta^2$. Finally, we only need the number of workers to be large enough, so that $m-q > \frac{2(\epsilon+2)^2}{\epsilon^2} + 2$. \end{remark} \begin{remark} \label{rmk:krum} The proof of Theorem~\ref{thm:krum} provides the intuition for constructing adversarial gradients for the attackers. In practice, the attackers only need to assign $-\frac{\epsilon}{m-q} \sum_{i \in [m-q]} v_i$ to all the Byzantine gradients, with an $\epsilon > 0$ small enough.
\end{remark} \begin{remark} Note that in \citet{blanchard2017machine}, \texttt{Krum} requires the assumption that $c \sigma < \|g\|$ for convergence, where $c$ is a general constant, $\sigma$ is the maximal variance of the gradients, and $g$ is the gradient in expectation. Note that $\|g\| \rightarrow 0$ when SGD converges to a critical point. Thus, such an assumption is never guaranteed to be satisfied if the variance is non-zero. Furthermore, the better SGD converges, the less likely such an assumption can be satisfied. \end{remark} \subsubsection{Toy Example} Note that the assumptions made in Theorem~\ref{thm:krum} are sufficient but not necessary conditions of the DSSGD-Byzantine vulnerability of \texttt{Krum}. In practice, it can be easier to find an $\epsilon$ that crashes \texttt{Krum}, especially for 1-dimensional cases. We provide a 1-dimensional toy example to show how easily \texttt{Krum} can fail. Suppose there are $6$ correct gradients $\mathcal{V} = \{0,0.02,0.14,0.26,0.38,0.5\}$ with the mean $0.2167$, and $3$ Byzantine gradients $\mathcal{U} = \{-0.1, -0.1, -0.1\}$ with the negative mean $-0.1$. According to Definition~\ref{def:krum}, the corresponding function values $KR(\cdot)$ of $\mathcal{U} \cup \mathcal{V} = \{-0.1, -0.1, -0.1, 0,0.02,0.14,0.26,0.38,0.5\}$ are $\{0.0244,0.0244,0.0244,0.0304,0.0436,0.1060, 0.1440,\allowbreak 0.2160,0.4320\}$. Thus, ${\tt Krum}(\mathcal{U} \cup \mathcal{V}) = -0.1$, which means that \texttt{Krum} chooses the Byzantine gradient with the opposite sign of the mean of the correct gradients. \section{CASE STUDY} In this section, we implement special attack strategies for \texttt{Median} and \texttt{Krum}, and evaluate our attack strategies on a real-world application. The attack strategies are designed by using the intuitions underlying Theorem~\ref{thm:median} and Theorem~\ref{thm:krum}, which are mentioned in Remark~\ref{rmk:median} and Remark~\ref{rmk:krum}.
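Before turning to the experiments, the two 1-dimensional toy examples above can be checked mechanically. The following Python sketch (using NumPy) implements the two aggregation rules of Definition~\ref{def:marmed} and Definition~\ref{def:krum} and reproduces both toy examples; the function names \texttt{median\_aggr} and \texttt{krum\_aggr} are our own and not taken from the cited papers.

```python
import numpy as np

def median_aggr(grads):
    """Coordinate-wise median aggregation (Definition of Median)."""
    return np.median(np.stack(grads).astype(float), axis=0)

def krum_aggr(grads, q):
    """Krum: return the gradient with the smallest sum of squared
    Euclidean distances to its (m - q - 2) nearest neighbours."""
    V = np.stack(grads).astype(float)
    m = len(V)
    # pairwise squared Euclidean distances, shape (m, m)
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(axis=-1)
    scores = [np.sort(np.delete(d2[i], i))[: m - q - 2].sum()
              for i in range(m)]
    return V[int(np.argmin(scores))]

# Median toy example: the correct mean is 0.1, yet the aggregate is -0.1.
med = median_aggr([[-0.1], [0.1], [0.3], [-4.0], [-2.0]])

# Krum toy example (m = 9, q = 3): Krum selects a Byzantine gradient.
kr = krum_aggr([[-0.1], [-0.1], [-0.1], [0.0], [0.02],
                [0.14], [0.26], [0.38], [0.5]], q=3)

print(med, kr)  # both aggregates equal -0.1
```

In both cases the aggregated value has the opposite sign of the mean of the correct gradients, matching the toy examples in the text.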
\subsection{DATASETS AND EVALUATION METRICS} We conduct experiments on the benchmark CIFAR-10 image classification dataset~\citep{krizhevsky2009learning}, which is composed of 50k images for training and 10k images for testing. We use a convolutional neural network~(CNN) with 4 convolutional layers followed by 1 fully connected layer. The detailed network architecture can be found in the appendix. For any worker, the minibatch size for SGD is $50$. In each experiment, we launch 25 worker processes. We repeat each experiment 10 times and take the average. We use top-1 accuracy on the testing set and the cross-entropy loss function on the training set as the evaluation metrics. We use averaging, \texttt{Median}, and \texttt{Krum} without attacks as the gold standards, which are referred to as \texttt{Mean without attack}, \texttt{Median without attack}, and \texttt{Krum without attack}. We start the attack at different epochs, so that SGD can warm up and make some progress first. We include some additional experiments in the appendix. \subsection{MEDIAN} In each iteration, the server receives $m=25$ gradients. A randomly selected subset of $q=12$ correct gradients is replaced by Byzantine gradients. We define the set of Byzantine gradients as $\mathcal{U} = \{u_1, \ldots, u_{12}\}$, and the set of the remaining correct gradients as $\mathcal{V} = \{v_1, \ldots, v_{13}\}$. Our attack strategy is as follows: \begin{align*} u_1 = u_2 = \cdots = u_{12} = - \frac{\epsilon}{13} \sum_{i=1}^{13} v_i. \end{align*} According to Theorem~\ref{thm:median} and Remark~\ref{rmk:median}, \texttt{Median} is vulnerable to positive $\epsilon$ with large magnitude $|\epsilon|$. We test the above attack strategy with different $\epsilon$. The results are shown in Figure~\ref{fig:median}. \texttt{Median} fails when $\epsilon > 0$. When $\epsilon = 0$, \texttt{Median} gets stuck and stops making progress. When $\epsilon < 0$, \texttt{Median} successfully defends against the attack.
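The effect of this attack strategy on a single coordinate can be sketched with deterministic values (a minimal illustration; the specific numbers are our own, not taken from the experiments):

```python
import numpy as np

# One coordinate of the Median attack (m = 25, q = 12): the 13 correct
# gradients have a positive mean, but their minimum is negative.
v = np.array([-0.05] + [0.1] * 12)           # correct gradients, mean > 0
eps = 100.0                                   # large positive epsilon
u = -(eps / 13.0) * v.sum()                   # Byzantine value, far below min(v)
received = np.concatenate([v, np.full(12, u)])

# All 12 Byzantine values lie below the smallest correct value, so the
# median of the 25 received values is the smallest correct gradient,
# whose sign is opposite to the correct mean.
print(v.mean() > 0, np.median(received) < 0)  # True True
```

The aggregated coordinate is $-0.05$, opposite in sign to the correct mean $\approx 0.088$, which is exactly the inner product manipulation described by Theorem~\ref{thm:median}.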
\subsection{KRUM} In each iteration, the server receives $m=25$ gradients. A randomly selected subset of $q=11$ correct gradients is replaced by Byzantine gradients. We define the set of Byzantine gradients as $\mathcal{U} = \{u_1, \ldots, u_{11}\}$, and the set of the remaining correct gradients as $\mathcal{V} = \{v_1, \ldots, v_{14}\}$. Our attack strategy is as follows: \begin{align*} u_1 = u_2 = \cdots = u_{11} = - \frac{\epsilon}{14} \sum_{i=1}^{14} v_i. \end{align*} According to Theorem~\ref{thm:krum} and Remark~\ref{rmk:krum}, \texttt{Krum} is vulnerable to positive $\epsilon$ with small magnitude $|\epsilon|$. We test the above attack strategy with different $\epsilon$. The results are shown in Figure~\ref{fig:krum}. \texttt{Krum} fails when $\epsilon > 0$ is small. When $\epsilon$ is large enough, \texttt{Krum} successfully defends against the attack. \subsection{DISCUSSION} Surprisingly, both \texttt{Median} and \texttt{Krum} are more vulnerable than we expected. Note that our theorems only analyze the worst cases. There are other cases where \texttt{Median} and \texttt{Krum} can fail. For \texttt{Median}, even if we take $\epsilon = 0$, SGD still performs badly. Theoretically, even if we do not use positive $\epsilon$, small $\epsilon$ can still enlarge the variance of SGD, which can be potentially harmful to the convergence. We can see that with large negative $\epsilon$, the defense of \texttt{Median} is successful. In our experiment, we reveal certain new vulnerabilities of \texttt{Median} in distributed synchronous SGD. The experiments conducted by \citet{yin2018byzantine} do not fail because the attacker only changes the labels of the poisoned training data by flipping a $label \in \{0, \ldots, 9\}$ to $9-label$. It is very likely that such an attack produces Byzantine gradients surrounding the correct gradients coordinate-wise on both sides.
However, according to Theorem~\ref{thm:median} and Remark~\ref{rmk:median}, an effective attack should place the Byzantine gradient on one and only one side of the correct gradients, which is the side opposite to the mean of the correct gradients, coordinate-wise. For \texttt{Krum}, small positive $\epsilon$ makes SGD vulnerable. Furthermore, even if we take $\epsilon = 1$, \texttt{Krum} still fails. In our experiment, we reveal certain new vulnerabilities of \texttt{Krum} in distributed synchronous SGD. The experiments conducted by \citet{blanchard2017machine} do not fail even though a similar attack strategy called ``omniscient'' is conducted. The reason is that, in the paper of \citet{blanchard2017machine}, the attacker ``proposes the opposite vector, scaled to a large length'', which is similar to our attack strategy with a large $\epsilon$. Guided by our theoretical analysis, we designed effective attack strategies for both \texttt{Median} and \texttt{Krum}. Our results show that the definition of Byzantine tolerance for distributed synchronous SGD should be revised. Using our definition of DSSGD-Byzantine tolerance, research can be conducted to design better defense techniques. \section{CONCLUSION} We propose a revised definition of Byzantine tolerance for distributed synchronous SGD. With the new definition, we theoretically and empirically examine the Byzantine tolerance of two prevailing robust aggregation rules. Guided by our theoretical analysis, attack techniques can be designed to defeat the aggregation rules. In the future, we hope new defense techniques can be designed using our revised definition of Byzantine tolerance. \section{DEFENSE TECHNIQUES} In this section, we introduce two prevailing robust aggregation rules against Byzantine failures in distributed synchronous SGD: coordinate-wise median and Krum. For the remainder of this paper, we ignore the iteration superscript $t$ in $\tilde{v}_i^t$ and $v_i^t$ for convenience.
\subsection{COORDINATE-WISE MEDIAN} \begin{definition}(Coordinate-wise Median~\citep{yin2018byzantine}) \label{def:marmed} We define the coordinate-wise median aggregation rule ${\tt Median}(\cdot)$ as \begin{align*} med = {\tt Median}(\{\tilde{v}_i: i \in [m]\}), \end{align*} where for any $j\in[d]$, the $j$th dimension of $med$ is $med_j = median\left(\{(\tilde{v}_1)_j, \ldots, (\tilde{v}_m)_j\}\right)$, $(\tilde{v}_i)_j$ is the $j$th dimension of the vector $\tilde{v}_i$, $median(\cdot)$ is the one-dimensional median. \end{definition} \subsection{KRUM} \begin{definition}(Krum~\citep{blanchard2017machine}) \label{def:krum} \begin{align*} & {\tt Krum}(\{\tilde{v}_i: i \in [m]\}) = \tilde{v}_k, \quad k = \mathop{\rm argmin}_{i \in [m]} KR(\tilde{v}_i), \\ & KR(\tilde{v}_i) = \sum_{i \rightarrow j} \| \tilde{v}_i - \tilde{v}_j \|^2, \end{align*} where $i \rightarrow j$ are the indices of the $m-q-2$ nearest neighbours of $\tilde{v}_i$ in $\{\tilde{v}_j: j \in [m], i \neq j\}$ as measured by squared Euclidean distance. \end{definition} For convenience, we refer to the coordinate-wise median and Krum as \texttt{Median} and \texttt{Krum}. \section{INTRODUCTION} The security of distributed machine learning has drawn increasing attention in recent years. Among the threat models, Byzantine failures~\citep{Lamport1982TheBG} are perhaps the most well-studied. In the Byzantine model, workers can behave arbitrarily and maliciously. In addition, Byzantine workers are omniscient and can conspire. Most of the existing Byzantine-tolerant machine-learning algorithms~\citep{blanchard2017machine,Chen2017DistributedSM,yin2018byzantine,feng2014distributed,Su2016FaultTolerantMO,su2016defending,alistarh2018byzantine} focus on the protection of distributed Stochastic Gradient Descent~(SGD). 
In this paper, we consider Byzantine-tolerant SGD in a server-worker architecture~(also known as the parameter server architecture~\citep{li2014scaling,li2014communication}), depicted in Figure~\ref{fig:ps}. The system is composed of server nodes and worker nodes. In each epoch, the workers pull the latest model from the servers, estimate the gradients using the locally sampled training data, and then push the gradient estimators to the servers. The servers aggregate the gradient estimators, and update the model by using the aggregated gradients. \begin{figure}[tb!] \centering \includegraphics[width=0.4\textwidth,height=3.4cm]{byz_attack_median_loss_100_observation} \vskip 0.2cm \includegraphics[width=0.4\textwidth,height=3.4cm]{byz_attack_krum_loss_1_observation} \caption{Illustration of failed Byzantine-tolerant SGD. We execute distributed synchronous SGD on CIFAR-10 image classification, with 25 workers. Beginning from the 100th epoch, we attack the system by replacing some workers with Byzantine workers. During the attack, 12 workers are Byzantine in the case of coordinate-wise median, and 11 workers are Byzantine in the case of Krum. The Byzantine workers push $- \epsilon g$ to the server, where $g$ is the true gradient.} \label{fig:observation} \end{figure} We consider Byzantine failures at a subset of the worker nodes. Byzantine workers send arbitrary values instead of the gradient estimators to the server. Such Byzantine gradients are potentially adversarial, and this can result in convergence to sub-optimal models, or even lead to divergence. To make things worse, the Byzantine workers can spy on the information at any server or at any honest worker (omniscience). Byzantine gradients can thus be tailored to have similar variance and magnitude as the correct gradients, which makes them hard to distinguish. Additionally, in different iterations, different subsets of workers can behave in a Byzantine manner, evading detection.
Existing literature assumes that less than half of the workers are Byzantine in any iteration. Compared to traditional Byzantine tolerance in distributed systems~\citep{lynch1996distributed,avizienis2004basic,tanenbaum2007distributed,fischer1982impossibility}, Byzantine tolerance in distributed machine learning has unique properties and challenges. Traditional Byzantine tolerance attempts to reach consensus on correct values. However, machine learning algorithms do not need to reach consensus. Further, even non-Byzantine-tolerant machine learning algorithms can naturally tolerate some noise in the input and during execution~\citep{xing2016strategies}. Thus, for distributed SGD, existing techniques for Byzantine-tolerant execution guarantee an upper bound on the distance between the aggregated approximate gradient (under Byzantine workers) and the true gradient~\citep{blanchard2017machine,yin2018byzantine}. A deeper introspection reveals, however, that what really matters for gradient descent algorithms is the direction of the descent. As shown in Figure~\ref{fig:descent}, to let the gradient descent algorithm make progress, we need to guarantee that the direction of the aggregated vector agrees with the true gradient, i.e., the inner product between the aggregated vector and the true gradient must be non-negative. This can be violated by an attack that makes the inner product negative. We call this class of new attacks ``inner product manipulation attacks''. \begin{figure}[htb!] \centering \includegraphics[width=0.28\textwidth]{byz_curve_cropped_3} \caption{Descent Direction. The blue arrows represent the directions which agree with the steepest descent direction (negative gradient).
The red arrows represent directions which agree with the steepest ascent direction (gradient).} \label{fig:descent} \end{figure} We observe that the bounded distance between the aggregated value and the true gradient guaranteed by existing techniques is not enough to defend distributed synchronous SGD against inner product manipulation attacks. For example, for the coordinate-wise median, if we put all the Byzantine values on the opposite side of the true gradient, the inner product between the aggregated vector and the true gradient can be manipulated to be negative. In this paper, we study how inner product manipulation makes Byzantine-tolerant SGD vulnerable. We conduct case studies on coordinate-wise median~\citep{yin2018byzantine} and Krum~\citep{blanchard2017machine}. Figure~\ref{fig:observation} gives a glimpse of how bad the effect of the attack can be. In a nutshell, creating gradients in the opposite direction with large magnitude crashes coordinate-wise median, while creating gradients in the opposite direction with small magnitude crashes Krum. We provide theoretical analysis as well as empirical results to validate these findings. Based on these results, we argue that there is a need to revise the definition of Byzantine tolerance in distributed SGD. We provide a new definition, and study whether it is satisfied by two prevailing robust distributed SGD algorithms, both theoretically and empirically. In summary, our contributions are: \setitemize[0]{leftmargin=*} \begin{itemize} \item We break two prevailing Byzantine-tolerant SGD algorithms -- coordinate-wise median~\citep{yin2018byzantine} and Krum~\citep{blanchard2017machine} -- using a new class of attacks called inner product manipulation attacks. We theoretically prove that under certain conditions, we can break these two algorithms, even when the assumptions and theorems presented in these papers are valid. \item We show how to design Byzantine gradients to compromise the robust aggregation rules.
We conduct experiments to validate these attacks further. \item Following our theoretical and empirical analysis, we propose a revised definition of Byzantine tolerance for distributed SGD. \end{itemize} \section{PRELIMINARIES} In this paper, we focus on distributed synchronous Stochastic Gradient Descent~(SGD) with Parameter Server~(PS). In this section, we formally introduce distributed synchronous SGD and the threat model of Byzantine failures. \begin{figure}[htb!] \centering \includegraphics[width=0.49\textwidth,height=3.8cm]{PS_byz_cropped.jpg} \caption{Worker-Server Architecture} \label{fig:ps} \end{figure} \subsection{STOCHASTIC GRADIENT DESCENT} We consider the following optimization problem: \begin{align*} \min_{x \in {\mathbb{R}}^d} F(x), \end{align*} where $F(x) = {\mathbb{E}}_{z \sim \mathcal{D}}[f(x; z)]$ is a differentiable function, $z$ is sampled from some unknown distribution $\mathcal{D}$, and $d$ is the number of dimensions. We assume that there exists at least one minimizer of $F(x)$, which is denoted by $x^*$, where $\nabla F(x^*) = 0$. This problem is solved in a distributed manner with $m$ workers. In each iteration, each worker samples $n$ independent and identically distributed~(i.i.d.) data points from the distribution $\mathcal{D}$, and computes the gradient of the local empirical loss $F_i(x) = \frac{1}{n} \sum_{j=1}^n f(x; z^{i,j}), \forall i \in [m]$, where $z^{i,j}$ is the $j$th sampled data on the $i$th worker. The servers collect and aggregate the gradients sent by the workers, and update the model as follows: \begin{align*} x^{t+1} = x^t - \gamma^t {\tt Aggr}(\{\tilde{v}_i^t: i \in [m]\}), \end{align*} where ${\tt Aggr}(\cdot)$ is an aggregation rule (e.g., averaging), $\tilde{v}_i^t$ is the gradient sent by the $i$th worker, and $\gamma^t$ is the learning rate in the $t^{\mbox{th}}$ iteration. For an honest worker, $ \tilde{v}_i^t = \nabla F_i(x^t) $ is an unbiased estimator such that ${\mathbb{E}}[\nabla F_i(x^t)] = \nabla F(x^t)$.
When all the workers are honest, the most common choice of the aggregation rule ${\tt Aggr}(\cdot)$ is averaging: \begin{align*} {\tt Aggr}(\{\tilde{v}_i^t: i \in [m]\}) = \frac{1}{m} \sum_{i \in [m]} \tilde{v}_i^t. \end{align*} The detailed algorithm of distributed synchronous SGD with aggregation rule ${\tt Aggr}(\cdot)$ is shown in Algorithm~\ref{alg:sgd}. \begin{algorithm}[htb!] \caption{Distributed Synchronous SGD with Robust Aggregation} \begin{algorithmic} \vspace{0.2cm} \STATE{\large\underline{\textbf{Server}}} \STATE $x^0 \leftarrow rand()$ \COMMENT{Initialization} \FOR{$t = 0, \ldots, T$} \STATE Broadcast $x^{t}$ to all the workers \STATE Wait until all the gradients $\{\tilde{v}_i^t: i \in [m]\}$ arrive \STATE Compute $\bar{\tilde{v}}^t = {\tt Aggr}(\{\tilde{v}_i^t: i \in [m]\})$ \STATE Update the parameter $x^{t+1} \leftarrow x^{t} - \gamma^t \bar{\tilde{v}}^t$ \ENDFOR \end{algorithmic} \begin{algorithmic} \vspace{0.2cm} \STATE{\large\underline{\textbf{Worker}} $i = 1, \ldots, m$} \FOR{$t = 0, \ldots, T$} \STATE Receive $x^{t}$ from the server \STATE Draw the samples, compute, and send the gradient $v_i^t = \nabla F_i(x^{t})$ to the server \ENDFOR \end{algorithmic} \label{alg:sgd} \end{algorithm} \subsection{THREAT MODEL} In the Byzantine failure model, the gradients sent by malicious workers can take an arbitrary value: \begin{align} \label{equ:byz_grad} \tilde{v}_i^t = \begin{cases} *, & \mbox{if $i$th worker is Byzantine}, \\ \nabla F_i(x^t), & \mbox{otherwise,} \end{cases} \end{align} where ``$*$" represents arbitrary values. Formally, we define the threat model of Byzantine failure as follows. \begin{definition} \label{def:byz} (Threat Model~\citep{blanchard2017machine,Chen2017DistributedSM,yin2018byzantine}). In the $t^{\mbox{th}}$ iteration, let $\{v_i^t: i \in [m]\}$ be i.i.d. random vectors in ${\mathbb{R}}^d$, where $v_i^t = \nabla F_i(x^t)$.
The set of correct vectors $\{v_i^t: i \in [m]\}$ is partially replaced by arbitrary vectors, which results in $\{\tilde{v}_i^t: i \in [m]\}$, as defined in Equation~(\ref{equ:byz_grad}). In other words, a correct gradient is $\nabla F_i(x^t)$, while a Byzantine gradient, marked as ``$*$", is assigned arbitrary value. We assume that $q$ out of $m$ vectors are Byzantine, where $2q < m$. Furthermore, the indices of faulty workers can change across different iterations. If the failures are caused by attackers, the threat model includes the case where the attackers can collude. \end{definition} The notation used in this paper is summarized in Table~\ref{tbl:notations}. \begin{table}[htb] \vspace{-0.7cm} \caption{Notations} \label{tbl:notations} \begin{center} \begin{small} \begin{tabular}{|l|l|} \hline {\bf Notation} & {\bf Description} \\ \hline $m$ & Number of workers \\ \hline $n$ & Minibatch size on each worker \\ \hline $T$ & Number of iterations \\ \hline $[m]$ & Set of integers $\{1, \ldots, m \}$ \\ \hline $q$ & Number of Byzantine workers \\ \hline $\gamma$ & Learning rate \\ \hline $x$ & Model parameters \\ \hline $\tilde{v}_i^t$ & Gradient sent by the $i$th worker \\ & in the $t^{\mbox{th}}$ iteration, potentially Byzantine \\ \hline $v_i^t$ & Correct gradient produced by the $i$th worker \\ & in the $t^{\mbox{th}}$ iteration \\ \hline $\| \cdot \|$ & All the norms in this paper are $l_2$-norms \\ \hline $\ip{a}{b}$ & Inner product between $a$ and $b$ \\ \hline \end{tabular} \end{small} \end{center} \vspace{-0.7cm} \end{table} \section{RELATED WORK} Robust estimators such as the median are well studied, and can naturally be applied to Byzantine tolerance. Coordinate-wise median is one approach that generalizes the median to high-dimensional vectors. In \citet{yin2018byzantine}, statistical error rates are studied for coordinate-wise median in distributed SGD. \citet{blanchard2017machine} propose Krum, which is not based on robust statistics.
For each candidate gradient, Krum computes the sum of squared Euclidean distances to its nearest candidates, and outputs the one with the minimal sum. In this paper, we focus on coordinate-wise median and Krum. There are other Byzantine-tolerant SGD algorithms. For example, Bulyan~\citep{guerraoui2018hidden} is built on top of Krum, and thus potentially shares the same flaws. DRACO~\citep{chen2018draco} uses coding theory to ensure robustness, and is different from the other Byzantine-tolerant SGD algorithms. Recently, an increasing number of papers propose attack mechanisms to break the defense of machine learning in various scenarios. For example, \citet{athalye2018obfuscated} propose attack techniques using adversarial training data. \citet{bhagoji2018analyzing,bagdasaryan2018backdoor} break the defense of federated learning~\citep{mcmahan2016communication}. In this paper, we focus on attacking distributed synchronous SGD using adversarial gradients sent by Byzantine workers.
\section{Conclusion} \label{sec:conc} We have presented an algorithm for identity testing of Markov chains which avoids any dependence on brittle connectivity properties like the hitting time, resolving an open question from \cite{daskalakis2017testing}. However, there are several open questions potentially relating to identity testing and graph partitioning arising from this work: \begin{enumerate} \item The sample complexity of our approach, $\widetilde{O}(n / \epsilon^4)$, is sub-optimal in its dependence on the error parameter $\epsilon$. Can our approach be improved to match the $\Omega (n / \epsilon)$ lower bound for the problem established in \cite{daskalakis2017testing}? \item One reason for this dependence on $\epsilon$ is due to the graph partitioning algorithm which guarantees sets of low expansion. Is it possible to improve upon such graph partitioning algorithms or devise new graph partitioning algorithms to achieve improved error dependence? \item Markov chains are arguably the simplest possible model for sequential data analysis. How can we quantify distances between models for more complicated methods? What assumptions does one need to place on the model to ensure that statistical and computational efficiency is possible for such hypothesis testing tasks? \end{enumerate} \section{Introduction} Statistical hypothesis testing is the principal method for lending statistical validity to claims made about the real world and is a vital step in any scientific enterprise. In the framework of statistical hypothesis testing, an investigator subjects the hypotheses made as part of their inquiry to tests against data collected from the real world. While the abstract framework of hypothesis testing is very powerful, its usefulness is limited by the range of hypotheses for which statistically efficient procedures have been developed. Furthermore, these tests also need to be computationally viable with large datasets.
Unfortunately, most cases for which efficient procedures are known are concerned with the setting where we have access to independent and identically distributed observations from some underlying distribution. This severely restricts the use of these procedures. Motivated by these considerations, recent work by \cite{daskalakis2017testing} studied the problem of identity testing of Markov chains given a single trajectory where strong correlations may exist between successive samples. They propose an algorithm to test whether the transition matrix, $\bm{Q}$, underlying the observed trajectory is equal to a known transition matrix, $\bm{P}$, or sufficiently far from it. They propose a notion of difference between Markov chains which takes into account the connectivity properties of the chain to ensure that the problem remains well posed. However, a major drawback of their approach is that their runtime depends on the hitting time of $\bm{P}$; an open question from their work is whether this dependence is truly necessary, and they conjectured that it is not. The approach of \cite{daskalakis2017testing} is to convert the identity testing problem on Markov chains to the simpler problem of identity testing of distributions given iid samples. The main idea is to use the observed trajectory to simulate samples from the distribution characterized by $\frac{1}{n} \bm{P}$. To simulate one sample from this distribution, one first picks a row of $\bm{P}$ uniformly at random and samples from the row using the trajectory. However, to generate the number of samples required to distinguish the two chains via this method, one needs to sample every row of $\bm{P}$ at least once with high probability. This leads to the dependence on the hitting time in the length of the observed trajectory. In this work, we propose an algorithm for identity testing of Markov chains that avoids the dependence on the hitting time of $\bm{P}$.
That is, we would like to solve the identity testing problem even in settings where one may not even be able to observe all the states in the chain. Similar to \cite{daskalakis2017testing}, we reduce the identity testing problem on Markov chains to simpler identity testing problems on distributions given iid samples. However, instead of a reduction to a single identity testing problem, we formulate several identity testing problems. Our main insight is that to distinguish two sufficiently different Markov chains, it is sufficient to analyze the trajectory in subsets of states which are close to being disconnected from the rest of the state space but well connected within themselves. That is, we formulate for each such subset $S$, an identity testing problem whose solution also resolves the testing problem on Markov chains. However, this approach is throttled by two main difficulties: \begin{enumerate} \item Computing these ``high-information'' subsets and \item Ensuring we have sufficiently many samples from these subsets \end{enumerate} Our first main requirement of these subsets is that they have enough information to distinguish two different Markov chains. We use as a sufficient criterion the property that these sets are poorly connected to the rest of the state space. The next crucial property that we will require is that the identity testing problem defined by the set can be simulated given a small number of samples from the set. We show that this property too can be related to the expansion properties of the set. This is guaranteed for a candidate set $S$ if, for all subsets $R \subset S$, $R$ is well connected to the rest of the set. Given these two requirements, our goal is to compute sets well connected within themselves and poorly connected to the rest of the state space. To do so, we generalize classical approximation algorithms for the Sparsest-Cut problem. However, this only ensures the first required property.
To ensure the second property of being well connected within the set, we combine this approach with a divide and conquer framework. We then recursively extract such ``high-information'' subsets to obtain a partitioning of the state space into several ``high-information'' sets and a single ``low-information'' set. To tackle the second problem of ensuring we have enough points from these ``high-information'' subsets in the observed trajectory, we use techniques from the spectral analysis of Markov chains to show that the chain does not spend too much time in the ``low-information'' component of the chain. The failure of our graph partitioning algorithm to partition the ``low-information'' component means that all subsets of the ``low-information'' component are well connected to the rest of the state space. This ensures that the chain escapes from this component fairly quickly if it enters it. \textbf{Related Work: } In the statistics community, a variety of tests have been developed for distribution testing in the i.i.d. scenario: Cramer-von Mises (\cite{cramer1928composition}), $\chi^2$ (\cite{pearson1900x}), Kolmogorov-Smirnov (\cite{smirnov1939estimation}) and, for more recent results, \cite{agresti2013categorical, d2017goodness}. However, the analysis of these methods pertains to the asymptotic distribution of the test statistic without finite sample guarantees.
In the computer science community, there has been a flurry of recent work in this setting, with a focus on finite sample lower bounds and statistical and computational tractability: \cite{batu2004sublinear, acharya2015optimal, canonne2016testing, daskalakis2018distribution, daskalakis2017testing, valiant2011testing, chan2014optimal, diakonikolas2015testing, rubinfeld2009testing, valiant2013instance, valiant2017automatic, rubinfeld2012taming, blais2017distribution, batu2000testing, batu2001testing, paninski2008coincidence, diakonikolas2016new, diakonikolas2016collision}. The problem of identity testing and estimation in Markov chains was, to the best of our knowledge, first studied in the seminal works of \cite{bartlett1951frequency, anderson1957statistical, billingsley1961statistical}. However, the results obtained are in the asymptotic regime with the number of samples tending to infinity. Recent work by \cite{daskalakis2017testing} provides a finite sample analysis for the identity testing of Markov chains, but the length of the trajectory required depends on delicate connectivity properties of the chain, like hitting times, which may be arbitrarily large. The sparsest cut problem has been intensely studied, with the breakthrough result of \cite{leighton1999multicommodity} devising the first $O(\log n)$ approximation algorithm, followed by a subsequent result by \cite{linial1995geometry} which interprets the algorithm from a metric embedding perspective (\cite{bourgain1985lipschitz}). The $O(\log n)$ barrier was subsequently improved to $O(\sqrt{\log n})$ in another beautiful result by \cite{arora2009expander}. These algorithms have been used in divide and conquer based approaches to several combinatorial problems (\cite{shmoys1997cut}) and in constructing approximation algorithms for unique games (\cite{trevisan2005approximation}).
While graph decomposition techniques have been studied previously (see, for example, \cite{spielman2004nearly, trevisan2005approximation, goldreich1999sublinear}), approaches based on spectral techniques yield weaker guarantees than those based on sparsest cut approximations. Graph decompositions based on \cite{leighton1999multicommodity} have been studied in \cite{trevisan2005approximation}; however, these results are not strong enough for our setting as they only imply the existence of internally well-connected partitions with potentially several ``low-information'' sets, whereas we crucially require that there exists at most one such set. The relationship between the sparsest cut value of a Markov chain and its spectral properties is well known (\cite{cheeger1969lower, sinclair1989approximate}) and has numerous applications (\cite{chung1997spectral,kannan1997random,lee2018kannan}). In the analysis of our algorithm, we use these techniques to bound hitting times of Markov processes restricted to subsets of the state space and escape times from subsets of the state space, where we bound the top eigenvalue of sub-matrices of the transition matrix as opposed to the second eigenvalue of the full transition matrix. \section{Proof} \label{sec:mpr} In this section, we will present the proof of Theorem~\ref{thm:mainth}. As mentioned before, the guarantees provided by conventional graph partitioning algorithms are not strong enough to ensure the small trajectory lengths required for the success of Theorem~\ref{thm:mainth}. In the first subsection, we will describe some key lemmas relating to the graph decomposition technique detailed in Algorithm~\ref{alg:pg}. \subsection{Markov Chain Decomposition} This first lemma, proved in Appendix~\ref{prf:pg}, describes the expansion properties of the partition of the Markov chain state space obtained from Algorithm~\ref{alg:pg}.
Intuitively, it decomposes the graph into a collection of subsets $\mathcal{S}$, each of which is well connected within itself but poorly connected to the rest of the state space, and a single set $T$ in which every subset is well connected to the rest of the state space. The sets in $\mathcal{S}$ are the ``high-information'' sets alluded to previously, while the set $T$ is the single ``low-information'' subset of the state space. \begin{lemma} \label{lem:pg} Algorithm~\ref{alg:pg} returns a tuple $(\mathcal{S}, T)$ such that we have for all $S \in \mathcal{S}$: \begin{equation*} \text{Claim 1: } \frac{\sum_{i, j \in S} \bm{P}_{ij}}{\abs{S}} \geq 1 - \beta \qquad \text{Claim 2: } \forall R \subset S\ \frac{\sum_{i \in R, j \in (S \setminus R)} \bm{P}_{ij}}{\min(\abs{R}, \abs{S \setminus R})} \geq \Omega \lprp{\frac{\beta}{\log^2 n}} \end{equation*} and $T$ satisfies: \begin{equation*} \text{Claim 3: } \forall R \subseteq T\ \frac{\sum_{i \in R, j \in \bar{R}} \bm{P}_{ij}}{\abs{R}} \geq \Omega \lprp{\frac{\beta}{\log n}} \end{equation*} Furthermore, the subsets in $\mathcal{S}$ along with $T$ form a partition of $[n]$. \end{lemma} Our next lemma, proved in Appendix~\ref{prf:helbnd}, shows that the distributions from Definition~\ref{def:ddef} are far in Hellinger distance if the original Markov chains are far. \begin{lemma} \label{lem:helbnd} Let $\bm{P}$ and $\bm{Q}$ be transition matrices of symmetric Markov chains such that $\text{Dist}(\bm{P}, \bm{Q}) \geq \epsilon$. Suppose now that $T \subseteq [n]$ satisfies: \begin{equation*} \frac{\sum_{i, j \in T} \bm{P}_{ij}}{\abs{T}} \geq 1 - \frac{\epsilon}{16} \end{equation*} Then, we have: \begin{equation*} d_{Hel}^2 \lprp{\text{Dist}(T, \bm{P}), \text{Dist}(T, \bm{Q})} \geq \frac{\epsilon^2}{32} \end{equation*} \end{lemma} In the next lemma, whose proof may be found in Appendix~\ref{prf:submc}, we analyze the spectral properties of the Markov process observed on a subset of states.
This lemma will be crucial in bounding the number of samples we need to see from this subset in order to generate a large number of samples from the distribution corresponding to this subset. \begin{lemma} \label{lem:submc} Let $\bm{P}$ be a symmetric irreducible Markov chain and $T \subset [n]$ be a subset of states. Let $\bm{Y} = Y_1, Y_2, \dots$ be a Markov process with transition matrix $\bm{P}$ and let $\bm{X} = X_1, X_2, \dots$ be the Markov process observed on the subset $T$. Then, $\bm{X}$ is also a symmetric Markov process with transition matrix: \begin{equation*} \bm{Q} = \bm{P}_T + \sum_{i = 0}^\infty \bm{P}_{T, \bar{T}} \bm{P}_{\bar{T}}^i \bm{P}_{\bar{T}, T} \end{equation*} \end{lemma} The next corollary is an application of Lemma~\ref{lem:submc} to Markov processes defined on the ``high-information'' sets, exploiting their good expansion properties within the set itself. Its proof may be found in Appendix~\ref{prf:expsmc}. \begin{corollary} \label{cor:expsmc} In the setting of Lemma~\ref{lem:submc}, suppose in addition that $T$ satisfies: \begin{equation*} \forall R \subset T\ \frac{\sum_{i \in R, j \in (T \setminus R)} \bm{P}_{ij}}{\min(\abs{R}, \abs{T \setminus R})} \geq \alpha \end{equation*} Then, the transition matrix $\bm{Q}$ of the chain $\bm{X}$ satisfies: \begin{equation*} \chi(\bm{Q}) \geq \alpha \end{equation*} \end{corollary} We now bound the hitting time of Markov processes defined on ``high-information'' subsets. See Appendix~\ref{prf:htb} for the proof. \begin{lemma} \label{lem:htb} Let $\bm{P}$ be the transition matrix of a symmetric Markov chain, over state space $[n]$, satisfying $\chi(\bm{P}) \geq \alpha > 0$.
Then, the hitting time of $\bm{P}$ is bounded as follows: \begin{equation*} \text{HitT}(\bm{P}) \leq \widetilde{O} \lprp{\frac{n}{\alpha^2}} \end{equation*} \end{lemma} The next lemma, which is a consequence of Theorem~1 from \cite{daskalakis2018distribution} (also stated in \cite{daskalakis2017testing}), bounds the number of samples required to distinguish two distributions over the same support given a lower bound on their Hellinger distance. \begin{lemma} \label{lem:idtst} Given a discrete distribution $p$ on $[n]$ and given access to i.i.d. samples from a distribution $q$ with the same support, there is a tester which can distinguish whether $p = q$ or $d_{Hel}(p, q) \geq \epsilon$ with $O(\frac{\sqrt{n}}{\epsilon^2} \log 1 / \delta)$ samples and failure probability at most $\delta$. \end{lemma} In the next lemma, we show how the expansion properties of the ``low-information'' set obtained before can be used to obtain a guarantee on the number of samples observed from the ``high-information'' sets. To prove the bound below, we bound the spectral norm of $\bm{P}_T$, which controls the amount of time needed to escape from the set $T$. Our proof mirrors that of Lemma~3.3 in \cite{sinclair1989approximate}, but we bound the first eigenvalue of a sub-matrix as opposed to the second eigenvalue of the whole transition matrix. The full details of the proof are deferred to Appendix~\ref{prf:tli}. \begin{lemma} \label{lem:tli} Let $\bm{P}$ be the transition matrix of a symmetric Markov chain. Furthermore, let $T\subset [n]$ be such that: \begin{equation*} \forall R \subseteq T,\ \frac{\sum_{i \in R, j \in \bar{R}} \bm{P}_{ij}}{\abs{R}} \geq \alpha \end{equation*} Then, in a word of length $l \geq \frac{16}{\alpha^2} \log (n) \log (1 / \delta)$, we have: \begin{equation*} \sum_{i = 1}^l \bm{1}\{X_i \notin T\} \geq \frac{l \alpha^2}{8 \log n} \end{equation*} with probability at least $1 - \delta$.
\end{lemma} The next lemma, from \cite{daskalakis2017testing}, lower bounds the number of times we observe a given state in a suitably long trajectory of a Markov chain. We will apply it to the sub-chains corresponding to the ``high-information'' sets. \begin{lemma} \label{lem:mxtb} Let $X_1, \dots, X_m$ be a word of length $m$ from an irreducible Markov chain, over state space $[n]$ and transition matrix $\bm{P}$. Then for $m \geq \widetilde{\Omega}(\text{HitT} (\bm{P})\log \text{HitT} (\bm{P}))$, we have: \begin{equation*} \mb{P} \lbrb{\exists i: \abs*{\{t: X_t = i\}} \leq \frac{m}{8en}} \leq \frac{\epsilon^2}{n} \end{equation*} where the probability is over the sampling of $X_1, \dots, X_m$. \end{lemma} \subsection{Sample Generation Phase} In this subsection, we will state and prove key lemmas relating to the sample generation phase of the algorithm. Here, we will assume that the observed word $\bm{w}$ is a prefix of an infinite word $\bm{w}_\infty$ from a Markov process with the same starting distribution and transition matrix. We will first analyze the sample generation process on the infinite word $\bm{w}_\infty$. Assuming that we have access to the infinite word $\bm{w}_\infty$, we see that the sample generation process will never fail, as we see each state infinitely many times with probability $1$.
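Before analyzing the generation process, the induced chain of Lemma~\ref{lem:submc}, which governs the samples observed inside a set, can be checked numerically. Below is a small sketch (helper names are ours) that approximates the induced transition matrix $\bm{Q}$ by truncating the excursion series; we sum with the exponent starting at zero, so that each excursion through the complement is counted and $\bm{Q}$ comes out stochastic:

```python
def matmul(A, B):
    # Naive matrix product over nested Python lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def induced_chain(P, T, terms=500):
    # Transition matrix of the process observed on the subset T:
    #   Q = P_T + sum_{i >= 0} P_{T,Tbar} P_Tbar^i P_{Tbar,T},
    # summing over excursions that spend i + 1 steps outside T.
    n = len(P)
    Tb = [i for i in range(n) if i not in T]
    sub = lambda rows, cols: [[P[i][j] for j in cols] for i in rows]
    Q = sub(T, T)                 # transitions that stay inside T
    if not Tb:                    # nothing outside T: Q is just P_T
        return Q
    hop, back = sub(T, Tb), sub(Tb, T)
    inside = [[1.0 if i == j else 0.0 for j in range(len(Tb))]
              for i in range(len(Tb))]   # P_Tbar^0
    for _ in range(terms):        # truncate the geometric series
        term = matmul(matmul(hop, inside), back)
        Q = [[q + t for q, t in zip(qr, tr)] for qr, tr in zip(Q, term)]
        inside = matmul(inside, sub(Tb, Tb))
    return Q
```

For a symmetric $\bm{P}$ whose complement block has spectral norm bounded away from $1$, the truncation error decays geometrically, and one can verify directly that the rows of the returned matrix sum to $1$ and that symmetry is preserved, as the lemma asserts.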
In the first lemma, we show that given access to $\bm{w}_\infty$, we will be able to use any of the ``high-information'' sets to test between the two chains: \begin{lemma} \label{lem:infFail} Suppose $(\mathcal{S}, T)$ is a decomposition of a Markov chain $\bm{P}$ obtained from Algorithm~\ref{alg:idtst} and that we are given an infinite word $\bm{w}_\infty$ from a Markov process with transition matrix $\bm{Q}$, and we are guaranteed one of the following two cases: \begin{equation*} \text{Case 1: } \text{Dist} (\bm{P}, \bm{Q}) \geq \epsilon \qquad \text{Case 2: } \bm{P} = \bm{Q} \end{equation*} Now, for each set $S \in \mathcal{S}$, let $l_S = \widetilde{\Omega} (\abs{S} / \epsilon^2)$ and let $\mathcal{R}_S = \text{Generate IID Samples} (\bm{w}_\infty, S, l_S)$. Then, we have: \begin{equation*} \mb{P} \lbrb{\exists S \in \mathcal{S}: \text{Identity Test} (\mathcal{R}_S, \text{Dist} (S, \bm{P}), \epsilon^2 / 32) \neq \bm{1} \lbrb{\bm{P} = \bm{Q}}} \leq \frac{1}{10} \end{equation*} \end{lemma} \begin{proof} We will first consider a single set $S \in \mathcal{S}$. In the case that $\bm{P} = \bm{Q}$, we have that $\mathcal{R}_S$ consists of $l_S$ samples from $\text{Dist}(S, \bm{P})$. Therefore, we have from the guarantees of $\text{Identity Test}$ from Lemma~\ref{lem:idtst} that \begin{equation*} \mb{P} \lbrb{\text{Identity Test}(\mathcal{R}_S, \text{Dist}(S, \bm{P})) = 1} \geq 1 - \frac{1}{10n} \end{equation*} In the alternate case where $\text{Dist}(\bm{P}, \bm{Q}) \geq \epsilon$, we have from Lemma~\ref{lem:helbnd} that $d_{Hel}^2 (\text{Dist}(S, \bm{P}), \text{Dist}(S, \bm{Q})) \geq \epsilon^2 / 32$.
Therefore, we have again from Lemma~\ref{lem:idtst}: \begin{equation*} \mb{P} \lbrb{\text{Identity Test}(\mathcal{R}_S, \text{Dist}(S, \bm{P})) = 0} \geq 1 - \frac{1}{10n} \end{equation*} The above two inequalities imply that for a fixed $S \in \mathcal{S}$, we have: \begin{equation*} \mb{P} \lbrb{\text{Identity Test}(\mathcal{R}_S, \text{Dist}(S, \bm{P})) = \bm{1} \lbrb{\bm{P} = \bm{Q}}} \geq 1 - \frac{1}{10n} \end{equation*} We note that since each $S \in \mathcal{S}$ is non-empty and, along with $T$, they form a partition of $[n]$, there are at most $n$ sets in $\mathcal{S}$. Taking a union bound over the at most $n$ sets in $\mathcal{S}$, we get: \begin{equation*} \mb{P} \lbrb{\exists S \in \mathcal{S}: \text{Identity Test} (\mathcal{R}_S, \text{Dist} (S, \bm{P}), \epsilon^2 / 32) \neq \bm{1} \lbrb{\bm{P} = \bm{Q}}} \leq \frac{1}{10} \end{equation*} \end{proof} The above lemma shows that if we are able to generate samples from even one of the subsets $S \in \mathcal{S}$, we will be able to correctly answer the identity testing problem with high confidence. Therefore, to ensure the correctness of Algorithm~\ref{alg:idtst}, we simply need to show that the probability of being able to generate enough samples from the distribution corresponding to at least one of the sets $S \in \mathcal{S}$ is large. The next lemma, proved in Appendix~\ref{prf:mxk}, is used to bound the number of times we will sample a particular state in the running of Algorithm~\ref{alg:geniid}. \begin{lemma} \label{lem:mxk} Let $X_1, \dots, X_m$ be $m$ i.i.d. samples from $\text{Uniform}([k])$. Let $\bm{v} = \text{Histogram} (X_1, \dots, X_m)$. Suppose further that $m \geq 10 k \log (n / \epsilon)$ for some $n > k$. Then, we have: \begin{equation*} \max_{i \in [k]} v_i \leq 2 \frac{m}{k} \end{equation*} with probability at least $1 - \frac{\epsilon}{n^2}$.
\end{lemma} In the following lemma, we show that the number of samples in a trajectory from $S \in \mathcal{S}$ we will need to observe to generate $l_S$ samples from $\text{Dist}(S, \bm{P})$ is small. \begin{lemma} \label{lem:subMcSamps} Suppose $(\mathcal{S}, T)$ is a decomposition of a Markov chain $\bm{P}$ obtained in Algorithm~\ref{alg:idtst} and that $\bm{w}_\infty$ is an infinite length trajectory from a Markov process with transition matrix $\bm{P}$. Now for each $S \in \mathcal{S}$, let $l_S = \widetilde{O}(\abs{S} / \epsilon^2)$ and let $w_{\tau^S_1}, w_{\tau^S_2}, \dots , w_{\tau^S_{N_S}}$ be the indices corresponding to the entries in $S$ encountered in the running of $\text{Generate IID Samples}(\bm{w}_\infty, S, l_S)$. Then we have: \begin{equation*} \mb{P} \lbrb{\forall S \in \mathcal{S}: N_S \leq \widetilde{O} (\abs{S} / \epsilon^2)} \geq \frac{9}{10} \end{equation*} \end{lemma} \begin{proof} As in the proof of Lemma~\ref{lem:infFail}, we first consider a single component $S \in \mathcal{S}$. Note that the trajectory $\bm{w}_\infty$ observed on the set of states in $S$, denoted $\bm{w}^S_\infty$, is also a Markov process. Furthermore, we know from Lemma~\ref{lem:pg}, Lemma~\ref{lem:htb} and Corollary~\ref{cor:expsmc} that the hitting time of $\bm{w}^S_\infty$ is $\widetilde{O} (\abs{S} / \epsilon^2)$.
Therefore, we have from Lemma~\ref{lem:mxtb} that, in a trajectory of length $N_S$ from $\bm{w}^S_\infty$: \begin{equation*} \mb{P} \lbrb{\exists i: \abs*{\{t: X_t = i\}} \leq \frac{N_S}{8e\abs{S}}} \leq \frac{1}{20 n} \end{equation*} Similarly, to generate $l_S$ samples from $\text{Dist}(S, \bm{P})$, the maximum number of times any particular state in $S$ will be sampled in a run of Algorithm~\ref{alg:geniid}, denoted by $m_S$, is bounded via Lemma~\ref{lem:mxk}: \begin{equation*} \mb{P} \lbrb{m_S \geq 2 \frac{l_S}{\abs{S}}} \leq \frac{1}{20n} \end{equation*} Therefore, the probability that we succeed in generating $l_S$ samples from $\text{Dist}(S, \bm{P})$ is lower bounded by the probability that both the above events fail to occur, as this implies the event ${\{\forall i: \abs{\{t: X_t = i\}} \geq m_S\}}$, ensuring the sample generation process succeeds. Therefore, we have: \begin{equation*} \mb{P} \lbrb{N_S \geq \widetilde{O} (\abs{S} / \epsilon^2)} \leq \frac{1}{10n} \end{equation*} By taking a union bound over the at most $n$ subsets $S \in \mathcal{S}$, we get: \begin{equation*} \mb{P} \lbrb{\forall S \in \mathcal{S}: N_S \leq \widetilde{O} (\abs{S} / \epsilon^2)} \geq \frac{9}{10} \end{equation*} \end{proof} \subsection{Proof of Theorem~\ref{thm:mainth}} We are now ready to prove Theorem~\ref{thm:mainth}. We will prove the theorem in two cases: \textbf{Case 1: } $\bm{P} = \bm{Q}$. In this case, we see that Algorithm~\ref{alg:idtst} only outputs the wrong answer if the sample generation process in Algorithm~\ref{alg:geniid} fails for all subsets $S \in \mathcal{S}$, or if the sample generation process succeeds but \text{Identity Test}{} returns the wrong answer. We will first upper bound the probability that the sample generation process fails.
To do this, we see from Lemma~\ref{lem:tli} that if we have a trajectory of length $m \geq \widetilde{\Omega}(n / \epsilon^4)$, then we have: \begin{equation*} \sum_{i = 1}^m \bm{1} \lbrb{X_i \notin T} \geq \widetilde{\Omega} \lprp{\frac{n}{\epsilon^2}} \end{equation*} with probability at least $0.9$. Therefore, with probability at least $0.9$, there exists at least one set $S \in \mathcal{S}$ such that: \begin{equation*} \sum_{i = 1}^m \bm{1} \lbrb{X_i \in S} \geq \widetilde{\Omega} \lprp{\frac{\abs{S}}{\epsilon^2}} \end{equation*} Therefore, the probability that the sample generation process fails is at most: \begin{equation*} \mb{P} \lbrb{\exists S \in \mathcal{S}: N_S \geq \widetilde{\Omega}\lprp{\frac{\abs{S}}{\epsilon^2}}} + \mb{P} \lbrb{\forall S \in \mathcal{S}: \sum_{i = 1}^m \bm{1} \lbrb{X_i \in S} \leq \widetilde{\Omega} \lprp{\frac{\abs{S}}{\epsilon^2}}} \leq \frac{2}{10} \end{equation*} where $N_S$ and the bound on the first term are from Lemma~\ref{lem:subMcSamps}. We finally bound the failure probability of the algorithm by the sum of the probability of the sample generation process failing and the probability of \text{Identity Test}{} failing on samples generated from the infinite word $\bm{w}_\infty$ (Lemma~\ref{lem:infFail}). Putting the two bounds together, we get that Algorithm~\ref{alg:idtst} fails with probability at most $0.3$, which is less than $2 / 3$. \textbf{Case 2: } $\text{Dist}(\bm{P}, \bm{Q}) \geq \epsilon$. In this case, we see that Algorithm~\ref{alg:idtst} always returns the correct answer if the sample generation process fails. Therefore, the probability of failure is at most the probability of \text{Identity Test}{} failing on samples generated from the infinite word from $\bm{Q}$. From Lemma~\ref{lem:infFail}, we know that this is at most $0.1$, which is less than $2/3$. The above two cases conclude the proof of the theorem.
\qed \section{Decomposing a Markov Chain into Well Connected Components} \label{sec:mcd} \subsection{Sparsest Cut with Component Constraints} In this section, we will design and analyze an algorithm for decomposing the state space of a Markov chain into components that are internally well connected but poorly connected to the rest of the state space. Our algorithm is based on generalizations of the classical linear programming relaxations of the Sparsest Cut problem, which is known to be NP-hard in general. We will start by stating some classical results used to analyze such relaxations and adapt them to our setting. Our first result is Bourgain's famous metric-embedding theorem: \begin{theorem}[\cite{bourgain1985lipschitz,linial1995geometry}] \label{thm:bme} Let $\mathcal{X}$ be a finite metric space of size $n$ endowed with a metric $d$. Then, there exists a function $f: \mathcal{X} \rightarrow \mb{R}^m$ and a constant $C > 0$ such that: \begin{equation*} \forall x,y \in \mathcal{X},\ d(x,y) \leq \norm{f(x) - f(y)}_1 \leq C \log n\, d(x,y) \end{equation*} Furthermore, $m$ is at most $O(\log^2 n)$ and $f$ can be found in randomized polynomial time. \end{theorem} We will now describe the linear programming relaxation to the Sparsest Cut problem.
Before we describe the formulation, we first introduce the notion of a Cut Metric: \begin{definition}[Cut Metric] \label{def:cutmet} For a state space $[n]$, the Cut Metric associated with a subset $S \subset [n]$ is defined as follows: \begin{equation*} \delta_S (i, j) = \begin{cases} 0, &\text{if $i,j \in S$ or $i,j \in \bar{S}$} \\ 1, &\text{otherwise} \end{cases} \end{equation*} \end{definition} It follows that the Cut Value of a subset can be restated in terms of the cut metric corresponding to the subset as follows: \begin{equation*} g_{\bm{P}} (S) = \frac{\sum_{i,j \in [n]} \bm{P}_{ij} \delta_S (i,j)}{\sum_{i,j \in [n]} \delta_S (i,j)} \end{equation*} The Linear Programming relaxation of the Sparsest Cut problem can now be seen naturally as broadening the class of metrics in the Sparsest Cut formulation from the set of cut metrics to the set of all metrics and is described below: \begin{gather*} \min \sum_{i, j \in [n]} \delta_{ij} \bm{P}_{ij} \\ \text{such that } \delta_{ii} = 0\, \forall i\\ \delta_{ij} \leq \delta_{ik} + \delta_{kj} \, \forall i,j,k \\ \sum_{i,j \in [n]} \delta_{ij} = 1 \\ \delta_{ij} \geq 0 \label{eq:lpcut} \tag{\textbf{LP-CUT}} \end{gather*} where the third constraint normalizes the metric. We will work with a natural variant of the sparsest cut problem where we are given a priori a subset $T$ of states, all of which we require to be in the same component: \begin{question}{Sparsest Cut with Component Constraints (\textbf{SPCCC}):} \label{que:spccc} Given a non-negative matrix, $\bm{P}$, and a set of states $T$ that are all required to be in the same component, we define the Sparsest Cut Problem with Component Constraints as follows: \begin{equation*} S^* = \argmin_{T \subseteq S \subset [n]} \frac{\sum_{i, j \in [n]} \delta_S (i,j) \bm{P}_{ij}}{\sum_{i, j \in [n]} \delta_S (i,j)} \end{equation*} \end{question} Now, we give our Linear Programming relaxation of the \textbf{SPCCC}{} problem.
As for the Sparsest Cut problem, we relax the class of metrics beyond Cut Metrics, but we include the constraints that the distance between vertices in $T$ is $0$ and that all the vertices in $T$ have the same distance to every other vertex: \begin{gather*} \min \sum_{i, j \in [n]} \delta_{ij} \bm{P}_{ij} \\ \text{such that } \delta_{ii} = 0\, \forall i\\ \delta_{ij} \leq \delta_{ik} + \delta_{kj} \, \forall i,j,k \\ \sum_{i,j \in [n]} \delta_{ij} = 1 \\ \delta_{ij} \geq 0 \\ \delta_{ij} = 0 \, \text{if $i,j \in T$} \\ \delta_{ik} = \delta_{jk} \, \forall i,j \in T, k \in [n]\label{eq:lpccc} \tag{\textbf{LP-CCC}} \end{gather*} The last two constraints in the relaxation defined above ensure that there is no distance between any two states in $T$ and that the distance from the states in $T$ to every other state is the same. We will denote by $(\delta, v) = \textbf{LP-CCC}(\bm{P}, T)$ a pair consisting of a metric $\delta$ and a value $v$ returned by \textbf{LP-CCC}. We will now prove a lemma showing that the function $f$ guaranteed by Theorem~\ref{thm:bme} has special structure. \begin{lemma} \label{lem:me} Given an instance of the \textbf{SPCCC}{} problem, $(\bm{P}, T)$, and a solution $(\delta, v) = \textbf{LP-CCC}(\bm{P}, T)$, there exists a function $f: [n] \rightarrow \mb{R}^m$ and a constant $C > 0$ such that: \begin{equation*} \text{Claim 1}: \delta_{ij} \leq \norm{f(i) - f(j)}_1 \leq C \log n\, \delta_{ij}, \qquad \text{Claim 2}: f(i) = f(j)\ \forall i,j \in T \end{equation*} Furthermore, $m$ is at most $O(\log^2 n)$ and $f$ can be found in randomized polynomial time. \end{lemma} \begin{proof} Let $f$ be the function whose existence is guaranteed by Theorem~\ref{thm:bme}. Note that $f$ satisfies Claim 1 of the lemma. For Claim 2, let $i,j \in T$. We know from the constraints of \textbf{LP-CCC}{} that $\delta_{ij} = 0$.
Therefore, from Theorem~\ref{thm:bme}, we may again conclude that: \begin{equation*} \norm{f(i) - f(j)}_1 = 0 \implies f(i) = f(j) \end{equation*} thus proving Claim 2. \end{proof} The next lemma, from \cite{linial1995geometry}, shows that it is possible to express the $\ell_1$ metric defined by $f$ on the state space as a sum of cut metrics. \begin{lemma}[\cite{linial1995geometry}] \label{lem:l1cut} Given $f: [n] \rightarrow \mb{R}^m$, it is possible to find in time $\mathrm{poly}(n,m)$ a polynomial number of subsets $S_1, \dots, S_r$ and associated constants $\alpha_{S_i} > 0$ such that: \begin{equation*} \norm{f(j) - f(k)}_1 = \sum_{i = 1}^r \alpha_{S_i} \delta_{S_i} (j, k) \ \forall j,k \in [n] \end{equation*} \end{lemma} Finally, we conclude that the integrality gap of the Linear Programming relaxation \textbf{LP-CCC}{} is small and, furthermore, that a cut obtaining such a value can be found efficiently. \begin{theorem} \label{thm:spccc} Given an instance of the \textbf{SPCCC}{} problem $(\bm{P}, T)$, there exists a polynomial time algorithm, FindComp{}, which returns a cut $S^*$ satisfying: \begin{equation*} \frac{\sum_{i, j \in [n]} \delta_{S^*} (i,j) \bm{P}_{ij}}{\sum_{i, j \in [n]} \delta_{S^*} (i,j)} \leq O(\log n) \min_{T \subseteq S \subset [n]} \frac{\sum_{i, j \in [n]} \delta_{S} (i,j) \bm{P}_{ij}}{\sum_{i, j \in [n]} \delta_{S} (i,j)} \end{equation*} Furthermore, we have that $T \cap S^* = \emptyset$. \end{theorem} \begin{proof} First, let $(\delta, v) = \textbf{LP-CCC}(\bm{P}, T)$ and let $f$ be the function whose existence is guaranteed by Lemma~\ref{lem:me}. Furthermore, let $S_1, \dots, S_r$ denote the cuts with the associated constants $\alpha_{S_i} > 0$ obtained from Lemma~\ref{lem:l1cut}.
Now, we have: \begin{align*} \min_{i \in [r]} \frac{\sum_{j,k \in [n]} \delta_{S_i} (j, k) \bm{P}_{jk}}{\sum_{j,k \in [n]}\delta_{S_i} (j,k)} &\leq \frac{\sum_{i = 1}^r \alpha_{S_i}\sum_{j,k \in [n]} \delta_{S_i} (j, k) \bm{P}_{jk}}{\sum_{i = 1}^r \alpha_{S_i}\sum_{j,k \in [n]}\delta_{S_i} (j,k)} \\ &= \frac{\sum_{j,k \in [n]} \norm{f(j) - f(k)}_1 \bm{P}_{jk}}{\sum_{j, k \in [n]} \norm{f(j) - f(k)}_1} \leq O(\log n) v \end{align*} where the first inequality follows from the fact that $\min_i \{\frac{a_i}{b_i}\} \leq \frac{\sum_i c_i a_i}{\sum_i c_i b_i}$ for non-negative weights $c_i$, and the final inequality follows by applying the lower bound from Theorem~\ref{thm:bme} to the denominator and the upper bound to the numerator. Since $v$ is at most the optimal value of the sparsest cut, as \textbf{LP-CCC}{} is a relaxation of the problem, we have proved the first claim of the theorem: we simply return the cut which minimizes the above ratio. The final result of the theorem will follow from the claim that for all $i \in [r]$, we have either $T \subseteq S_i$ or $T \subseteq \bar{S}_i$; we return whichever of $S_i$ and $\bar{S}_i$ does not contain $T$. To prove the claim, assume for the sake of contradiction that there exist $i \in [r]$ and $j,k \in T$ such that $j \in S_i$ and $k \in \bar{S}_i$. Then, we have: \begin{equation*} 0 = \norm{f(k) - f(j)}_1 = \sum_{h = 1}^r \alpha_{S_h} \delta_{S_h} (j, k) \geq \alpha_{S_i} \delta_{S_i} (j, k) = \alpha_{S_i} > 0 \end{equation*} which is a contradiction. This proves the claim and the second result of the theorem. \end{proof} \subsection{Extracting a Single Component} For the purposes of our algorithm, we will consider a slightly different version of the sparsest cut problem.
We begin by restating the definition of the expansion of a subset of the state space $S$: \begin{definition}[Expansion] Given a matrix, $\bm{P}$, with non-negative entries, the expansion of a set $S$, denoted by $h_{\bm{P}}(S)$, is defined as: \begin{equation*} h_{\bm{P}} (S) = \frac{\sum_{i \in S, j \notin S} \bm{P}_{ij}}{\min (\abs{S}, \abs{\bar{S}})} \end{equation*} \end{definition} We will now restate the definition of the Cheeger constant of a graph: \begin{definition}[Cheeger Constant] The Cheeger Constant of a Markov chain with transition matrix, $\bm{P}$, is the minimum expansion of any subset of the state space: \begin{equation*} \chi(\bm{P}) = \min_{S \subset [n]} h_{\bm{P}} (S) \end{equation*} \end{definition} \begin{algorithm}[H] \caption{Extract Component} \label{alg:ec} \begin{algorithmic}[1] \STATE \textbf{Input}: Transition Matrix $\bm{P}$, Extracted States $T$, Tolerance $\beta$ \STATE $S_0 \leftarrow FindComp ([n], \bm{P}, T),\ t \leftarrow 0$ \label{ec:s0} \STATE $v_0 \leftarrow h_{\bm{P}} (S_0)$ \IF {$v_0 \geq \beta / 8$} \STATE $S_0 \leftarrow [n] \setminus T$ \STATE $v_0 \leftarrow \abs{S_0}^{-1} \sum_{i,j \in S_0} \bm{P}_{ij}$ \IF {$v_0 \leq 1 - \beta / 8$} \label{ec:afail} \STATE \textbf{Return: }False \label{ec:fret} \ENDIF \ENDIF \label{ec:intowhile} \WHILE {$\abs{S_t} > 1$} \STATE $S^\prime_t \leftarrow FindComp (S_t, \bm{P}_{S_t}, \emptyset)$ \STATE $v_t \leftarrow h_{\bm{P}_{S_t}} (S^\prime_t)$ \IF {$v_t \geq \beta / (8 \log n)$} \label{ec:stif} \STATE \textbf{break} \label{ec:stbreak} \ENDIF \STATE $u_{S^\prime_t} \leftarrow \frac{\sum_{i, j \in S^\prime_t} \bm{P}_{ij}}{\abs{S^\prime_t}},\ u_{\bar{S^\prime_t}} \leftarrow \frac{\sum_{i,j \in (S_t \setminus S^\prime_t)} \bm{P}_{ij}}{\abs{S_t \setminus S^\prime_t}}$ \IF {$u_{S^\prime_t} \leq u_{\bar{S^\prime_t}}$} \STATE $S_{t + 1} \leftarrow S^\prime_t$ \ELSE \STATE $S_{t + 1} \leftarrow S_t \setminus S^\prime_t$ \ENDIF \STATE $t \leftarrow t + 1$ \ENDWHILE \STATE \textbf{Return: } $S_t$
\end{algorithmic} \end{algorithm} Here, we state a short lemma relating the expansion of a subset to its cut value. \begin{lemma} \label{lem:expCut} For a matrix $\bm{P}$ with non-negative entries and a subset $S$, we have: \begin{equation*} \frac{n}{2} g_{\bm{P}} (S) \leq h_{\bm{P}} (S) \leq n g_{\bm{P}} (S) \end{equation*} Consequently, for the cut, $S^*$, returned by FindComp{} when run with input $(\bm{P}, T)$, we have: \begin{equation*} h_{\bm{P}} (S^*) \leq O(\log n) \min_{T \subseteq S \subset [n]} h_{\bm{P}} (S) \end{equation*} \end{lemma} \begin{proof} We first consider the case that $\abs{S} \leq n / 2$. In this case, we have that $n / 2 \leq \abs{\bar{S}} \leq n$ and consequently: \begin{equation*} \frac{n}{2} g_{\bm{P}} (S) \leq h_{\bm{P}} (S) \leq n g_{\bm{P}} (S) \end{equation*} The alternate case is proved similarly. For the second claim of the lemma, we will again assume that $\abs{S^*} \leq n / 2$. Now, we have from Theorem~\ref{thm:spccc} and the equation above: \begin{equation*} h_{\bm{P}} (S^*) \leq ng_{\bm{P}} (S^*) \leq n \cdot O(\log n) \min_{T \subseteq S \subset [n]} g_{\bm{P}} (S) \leq O(\log n) \min_{T \subseteq S \subset [n]} n \cdot \frac{2}{n} \cdot h_{\bm{P}} (S) \end{equation*} This proves the second claim of the lemma. \end{proof} The next lemma is the main result of the subsection, concerning the performance of Algorithm~\ref{alg:ec}.
\begin{lemma} \label{lem:ep} Algorithm~\ref{alg:ec} runs in randomized polynomial time and either returns a set $S$ disjoint from $T$ satisfying: \begin{equation*} \text{Claim 1: } \frac{\sum_{i, j \in S} \bm{P}_{ij}}{\abs{S}} \geq 1 - \beta \qquad \text{Claim 2: } \forall R \subset S\ \frac{\sum_{i \in R, j \in (S \setminus R)} \bm{P}_{ij}}{\min(\abs{R}, \abs{S \setminus R})} \geq \Omega \lprp{\frac{\beta}{\log^2 n}} \end{equation*} or returns $False$, certifying that all subsets $S \subset ([n] \setminus T)$ satisfy: \begin{equation*} \text{Claim 3: } h_{\bm{P}} (S) \geq \Omega \lprp{\frac{\beta}{\log n}}\qquad \text{Claim 4: } \frac{\sum_{i,j \in [n] \setminus T} \bm{P}_{ij}}{n - \abs{T}} \leq 1 - \frac{\beta}{8} \end{equation*} \end{lemma} \begin{proof} We will first prove the third claim of the lemma. Let $\tilde{S}$ be the set returned in Line~\ref{ec:s0} of the algorithm. The only way the algorithm returns $False$ is if Line~\ref{ec:fret} is executed, which requires $h_{\bm{P}}(\tilde{S}) \geq \beta / 8$. Therefore, we have from the second claim of Lemma~\ref{lem:expCut}: \begin{equation*} \frac{\beta}{8} \leq h_{\bm{P}} (\tilde{S}) \leq O(\log n) \min_{T \subseteq S \subset [n]} h_{\bm{P}} (S) \end{equation*} Since $\bm{P}$ is symmetric, $h_{\bm{P}}(S) = h_{\bm{P}}(\bar{S})$, so the bound over sets containing $T$ yields the same bound over all sets disjoint from $T$. This proves the third claim of the lemma. The fourth claim of the lemma follows trivially from the fact that the if condition in Line~\ref{ec:afail} evaluates to true. Now, we will assume that the algorithm is in the case where a set $S$ is returned. For the second claim of the lemma, the algorithm either returns a set containing a single element, in which case the claim is trivially true, or the break statement in Line~\ref{ec:stbreak} was executed, in which case we have again from Lemma~\ref{lem:expCut}: \begin{equation*} \frac{\beta}{8 \log n} \leq h_{\bm{P}_S} (S^\prime) \leq O(\log n) \min_{R \subset S} h_{\bm{P}_{S}} (R) \end{equation*} which implies the second claim of the lemma.
For the first claim of the lemma, assume that the inner loop runs for $K$ time steps. Now, consider the times $t_{0}, \dots, t_{k}$ defined as follows: \begin{equation*} t_{0} = 0, \qquad t_{i} = \min \{t \in [K]: \abs{S_{t}} \leq \abs{S_{t_{i - 1}}} / 2\}\ \forall i \in \{1, \dots, k - 1\},\qquad t_k = K \end{equation*} It is clear that $k$ is at most $\log n$. Now, we will prove the following claim: \begin{claim} \label{clm:ine} $\forall i \in \{0, \dots, k\}$, we have that $S_{t_i}$ satisfies: \begin{equation*} \frac{\sum_{i, j \in S_{t_i}} \bm{P}_{ij}}{\abs{S_{t_i}}} \geq 1 - \frac{\beta}{8} - \frac{i \beta}{4 \log n} \end{equation*} \end{claim} Instantiating Claim~\ref{clm:ine} with $i = k$ proves the first claim of the lemma by noting that $k$ is at most $\log n$. Now, we will prove the claim via induction.\\ \textbf{Base Case: } $i = 0$: The base case is true as the algorithm only proceeds beyond Line~\ref{ec:intowhile} if: \begin{equation*} \frac{\sum_{i \in S_{0},j \in \bar{S_{0}}} \bm{P}_{ij}}{\abs{S_{0}}} \leq \frac{\beta}{8} \implies \frac{\sum_{i,j \in S_{0}} \bm{P}_{ij}}{\abs{S_{0}}} \geq 1 - \frac{\beta}{8} \end{equation*} where the implication uses the fact that the rows of $\bm{P}$ sum to one. \textbf{Inductive Step: } Suppose that the claim is true for $l$; we will verify the claim for $l + 1$. Let $R_{m}$ denote the sets $(S_{m} \setminus S_{m + 1})$ for $m \in \{t_l, \dots, t_{l + 1} - 1\}$.
Now, for $m \in \{t_l, \dots, t_{l + 1} - 1\}$: \begin{equation*} \sum_{i, j \in S_m} \bm{P}_{ij} = \sum_{i, j \in S_{m + 1}} \bm{P}_{ij} + \sum_{i, j \in R_{m}} \bm{P}_{ij} + \sum_{i \in S_{m + 1}, j \in R_{m}} \bm{P}_{ij} \end{equation*} Therefore, we have: \begin{equation*} \sum_{i, j \in S_{m + 1}} \bm{P}_{ij} + \sum_{i, j \in R_{m}} \bm{P}_{ij} = \sum_{i, j \in S_m} \bm{P}_{ij} - \sum_{i \in S_{m + 1}, j \in R_{m}} \bm{P}_{ij} \geq \sum_{i, j \in S_m} \bm{P}_{ij} - \frac{\beta}{8\log n} \abs{R_{m}} \end{equation*} where the last inequality follows because the algorithm only proceeds to step $m + 1$ if the condition in Line~\ref{ec:stif} of Algorithm~\ref{alg:ec} fails, together with $\min(\abs{S^\prime_m}, \abs{S_m \setminus S^\prime_m}) \leq \abs{R_m}$. Rewriting the above inequality in terms of the quantities $u_{S^\prime_m}, u_{\bar{S}^\prime_m}$, we get: \begin{equation*} \abs{S^\prime_{m}}u_{S^\prime_m} + \abs{\bar{S}^\prime_m}u_{\bar{S}^\prime_m} \geq \sum_{i, j \in S_m} \bm{P}_{ij} - \frac{\beta}{8\log n} \abs{R_{m}} \end{equation*} Dividing both sides by $\abs{S_m}$ and noting that a weighted average of two numbers is at most the larger of the two, which is the density of the retained set $S_{m + 1}$, we may conclude that: \begin{multline*} \frac{\sum_{i, j \in S_{m + 1}} \bm{P}_{ij}}{\abs{S_{m + 1}}} \geq \frac{\sum_{i, j \in S_{m}} \bm{P}_{ij}}{\abs{S_{m}}} - \frac{\beta\cdot \abs{R_{m}}}{8\log n\cdot \abs{S_{m}}} \\ \geq \frac{\sum_{i, j \in S_{m}} \bm{P}_{ij}}{\abs{S_{m}}} - \frac{\beta\cdot \abs{R_{m}}}{8\log n\cdot \abs{S_{t_{l}}} / 2} = \frac{\sum_{i, j \in S_{m}} \bm{P}_{ij}}{\abs{S_{m}}} - \frac{\beta\cdot \abs{R_{m}}}{4\log n\cdot \abs{S_{t_{l}}}} \end{multline*} where the second inequality follows from the fact that in the range of $m$, $\abs{S_m} \geq \abs{S_{t_l}} / 2$.
By chaining the above inequality for $m$ ranging from $t_l$ to $t_{l + 1} - 1$, we get: \begin{equation*} \frac{\sum_{i, j \in S_{t_{l + 1}}} \bm{P}_{ij}}{\abs{S_{t_{l + 1}}}} \geq \frac{\sum_{i, j \in S_{t_l}} \bm{P}_{ij}}{\abs{S_{t_l}}} - \frac{\beta\cdot \sum_{m = t_l}^{t_{l + 1} - 1}\abs{R_{m}}}{4\log n\cdot \abs{S_{t_{l}}}} \geq \frac{\sum_{i, j \in S_{t_l}} \bm{P}_{ij}}{\abs{S_{t_l}}} - \frac{\beta}{4\log n} \geq 1 - \frac{\beta}{8} - \frac{(l + 1) \beta}{4\log n} \end{equation*} where the second inequality follows from the fact that the $R_m$ are disjoint subsets of $S_{t_l}$ and the third inequality follows from the inductive hypothesis. This proves Claim~\ref{clm:ine} and, as explained earlier, the claim implies the first claim of the lemma. \end{proof} \subsection{Partitioning the Markov Chain} \label{prf:pg} In this subsection, we will design an algorithm to partition the entire state space of the Markov chain. Our graph partitioning algorithm is illustrated in Algorithm~\ref{alg:pg}. We recursively call Algorithm~\ref{alg:ec} and stop when no more components can be extracted from the state space. We then use the guarantees provided by Lemma~\ref{lem:ep} to prove Lemma~\ref{lem:pg}. \begin{algorithm}[H] \caption{Partition Graph} \label{alg:pg} \begin{algorithmic}[1] \STATE \textbf{Input}: Transition Matrix $\bm{P}$, Tolerance $\beta$ \STATE $\mathcal{S} \leftarrow \{\}$ \STATE $t \leftarrow 0$ \STATE $T_t \leftarrow \phi$ \STATE $S_t \leftarrow \text{Extract Component}(\bm{P}, T_t, \beta)$ \WHILE {$S_t \neq False$} \STATE $\mathcal{S} \leftarrow \mathcal{S} \cup \{S_t\}$ \STATE $T_{t + 1} \leftarrow T_{t} \cup S_{t}$ \STATE $t \leftarrow t + 1$ \STATE $S_t \leftarrow \text{Extract Component}(\bm{P}, T_t, \beta)$ \ENDWHILE \STATE \textbf{Return: } $(\mathcal{S}, [n] \setminus T_t)$ \end{algorithmic} \end{algorithm} We will now proceed with the proof of Lemma~\ref{lem:pg}.
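For reference, the outer loop of Algorithm~\ref{alg:pg} has a simple structure; the sketch below assumes a callable \texttt{extract\_component} standing in for Algorithm~\ref{alg:ec}, and the toy extractor is purely illustrative:

```python
def partition_graph(P, beta, extract_component):
    # Mirrors Algorithm "Partition Graph": repeatedly call the component
    # extraction routine, accumulating extracted states in T, until it
    # returns False; the residual states form the final set T.
    n = len(P)
    components, T = [], set()
    while True:
        S = extract_component(P, T, beta)
        if S is False:
            break
        components.append(S)
        T |= S
    return components, set(range(n)) - T

# A toy stand-in for Algorithm "Extract Component" on a two-block chain:
# it extracts {0, 1}, then {2, 3}, then certifies that nothing remains.
def toy_extract(P, T, beta):
    for block in ({0, 1}, {2, 3}):
        if not (block & T):
            return block
    return False

P4 = [[0.0] * 4 for _ in range(4)]
components, rest = partition_graph(P4, 0.1, toy_extract)
```

With the toy extractor, the returned components are disjoint and, together with the residual set, cover the state space, as Lemma~\ref{lem:pg} requires.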
We first note that the algorithm cannot terminate with $T_t = \phi$: in that case, Claim 4 of Lemma~\ref{lem:ep} would give $\frac{1}{n}\sum_{i,j \in [n]} \bm{P}_{ij} \leq 1 - \frac{\beta}{8}$, contradicting the fact that the rows of $\bm{P}$ sum to one. Now, we have by induction that $T_0 = \phi$ and $T_t = \bigcup_{i = 0}^{t - 1} S_i$. We also have by Lemma~\ref{lem:ep} that $S_t$ is disjoint from $T_{t}$ and therefore disjoint from $S_{0}, \dots, S_{t - 1}$. This shows that the subsets in $\mathcal{S}$ are disjoint. Suppose the algorithm terminates with $t = l$; note that $T_l = \bigcup_{S \in \mathcal{S}} S$ and consequently $T = [n] \setminus T_l$, which proves the final claim of the lemma that the subsets in $\mathcal{S}$ along with $T$ form a partition of $[n]$. For the first two claims of the lemma, every $S \in \mathcal{S}$ is returned by Algorithm~\ref{alg:ec}, so these claims follow from the first two claims of Lemma~\ref{lem:ep}. We now prove the third claim of the lemma. We first note that if $T \neq \phi$, then from Claim 4 of Lemma~\ref{lem:ep} applied to $T$ and Claim 1 of Lemma~\ref{lem:ep} applied to each $S \in \mathcal{S}$: \begin{equation*} \frac{\beta}{8} \cdot \abs{T} \leq \sum_{i \in T, j \in \bar{T}} \bm{P}_{ij} = \sum_{S \in \mathcal{S}} \sum_{j \in S, i \in T} \bm{P}_{ij} \leq \sum_{S \in \mathcal{S}} \beta \abs{S} \implies \abs{T} \leq \frac{8n}{9} \end{equation*} Now, let $R \subset T$ be arbitrary. In the case that $\abs{R} \leq n / 2$, Claim 3 follows from Claim 3 of Lemma~\ref{lem:ep}. For $\abs{R} \geq n / 2$, note that $\abs{R} \leq 8n / 9$. Therefore, we have from Claim 3 of Lemma~\ref{lem:ep}: \begin{equation*} \Omega \lprp{\frac{\beta}{\log n}} \leq h_{\bm{P}} (R) = \frac{\sum_{i \in R, j \in \bar{R}} \bm{P}_{ij}}{\abs{\bar{R}}} \leq 9 \frac{\sum_{i \in R, j \in \bar{R}} \bm{P}_{ij}}{n} \leq 9 \frac{\sum_{i \in R, j \in \bar{R}} \bm{P}_{ij}}{\abs{R}} \end{equation*} and Claim 3 follows. \qed \section{Markov Chain Properties} \label{sec:mcprop} Our first lemma concerns bounding the amount of time the trajectory of the Markov chain spends in the component $T$.
The proof of our lemma follows along the lines of Lemma~3.3 in \cite{sinclair1989approximate}. In our lemma, we bound the first eigenvalue of a sub-matrix of the transition matrix, whereas in \cite{sinclair1989approximate} the same techniques are used to bound the second eigenvalue of the whole transition matrix. \begin{lemma} \label{lem:tnb} Let $\bm{P}$ be the transition matrix of a symmetric Markov chain. Let $T \subset [n]$ satisfy: \begin{equation*} \forall R \subseteq T\ \frac{\sum_{i \in R, j \in \bar{R}} \bm{P}_{ij}}{\abs{R}} \geq \alpha \end{equation*} Then, $\bm{P}_T$ has the following bound on its spectral norm: \begin{equation*} \norm{\bm{P}_T} \leq 1 - \frac{\alpha^2}{2} \end{equation*} \end{lemma} \begin{proof} Since $\bm{P}_T$ is symmetric with non-negative entries, its top eigenvalue, denoted by $\lambda$, is the same as its top singular value. Now, let $\abs{T} = m$ and let $\bm{u} \in \mb{R}^m$ be the unit eigenvector associated with the top eigenvalue. By the Perron-Frobenius Theorem, the entries of $\bm{u}$ may be taken to be non-negative, and by permuting coordinates we may suppose without loss of generality that $\bm{u}_1 \geq \bm{u}_2 \geq \dots \geq \bm{u}_{m-1} \geq \bm{u}_m \geq 0$. Now, we have: \begin{equation*} \bm{P}_T \bm{u} = \lambda \bm{u} \implies (\bm{I} - \bm{P}_T)\bm{u} = (1 - \lambda)\bm{u} \implies (1 - \lambda) = \bm{u}^\top (\bm{I} - \bm{P}_T) \bm{u} \end{equation*} We will now extend the vector $\bm{u}$ to a vector $\bm{v} \in \mb{R}^{m + 1}$ such that $\bm{v}_i = \bm{u}_i$ for all $i \in \{1, \dots, m\}$ and $\bm{v}_{m + 1} = 0$. Similarly, we extend $\bm{P}_T$ to a matrix $\bm{R} \in \mb{R}^{(m + 1) \times (m + 1)}$ such that: \begin{equation*} \bm{R}_{ij} = \begin{cases} (\bm{P}_T)_{ij}, &\text{for $i,j \in [m]$} \\ 1 - \sum_{k \in [m]} (\bm{P}_T)_{ik}, &\text{for $i \in [m],\ j = m+1$} \\ 1 - \sum_{k \in [m]} (\bm{P}_T)_{kj}, &\text{for $i = m+1,\ j \in [m]$} \\ 0, &\text{otherwise} \end{cases} \end{equation*} Notice that $\bm{u}^\top (\bm{I} - \bm{P}_T) \bm{u} = \bm{v}^\top (\bm{I} - \bm{R}) \bm{v}$.
Now, we expand the right hand side as follows: \begin{equation} \label{eqn:qfbnd} \bm{v}^\top (\bm{I} - \bm{R}) \bm{v} = \sum_{i = 1}^{m + 1} v_i^2 - \sum_{i, j} (\bm{R})_{ij} v_i v_j = \sum_{i = 1}^{m + 1} (1 - \bm{R}_{ii})v_i^2 - 2 \sum_{i < j} \bm{R}_{ij} v_i v_j = \sum_{i < j} \bm{R}_{ij} (v_i - v_j)^2 \end{equation} Next, we have the following bound: \begin{equation} \label{eqn:csbnd} \sum_{i < j} \bm{R}_{ij} (v_i + v_j)^2 \leq 2\sum_{i < j} \bm{R}_{ij} (v_i^2 + v_j^2) \leq 2\sum_{i, j \in [m + 1]} \bm{R}_{ij} v_i^2 = 2 \end{equation} Now, we get from Equations~\ref{eqn:qfbnd} and \ref{eqn:csbnd}: \begin{equation} \label{eqn:pcsb} \bm{v}^\top (\bm{I} - \bm{R}) \bm{v} \geq \sum_{i < j} \bm{R}_{ij} (v_i - v_j)^2 \cdot \frac{\sum_{i < j} \bm{R}_{ij} (v_i + v_j)^2}{2} \geq \frac{1}{2} \cdot \lprp{\sum_{i < j} \bm{R}_{ij} (v_i^2 - v_j^2)}^2 \end{equation} where the last inequality follows from Cauchy-Schwarz. Now, we will bound the term in the parentheses in the final expression on the right hand side: \begin{align*} \sum_{i < j} \bm{R}_{ij} (v_i^2 - v_j^2) &= \sum_{i < j} \bm{R}_{ij} \sum_{k = i}^{j - 1} (v_{k}^2 - v_{k + 1}^2) = \sum_{k = 1}^{m} (v_k^2 - v_{k + 1}^2) \sum_{j > k, i \leq k} \bm{R}_{ij} \\ & \geq \sum_{k = 1}^{m} (v_k^2 - v_{k + 1}^2) \alpha k = \alpha \sum_{j = 1}^m \sum_{k = j}^m (v_k^2 - v_{k + 1}^2) = \alpha \sum_{j = 1}^m v_j^2 = \alpha \end{align*} where the first inequality follows from the assumption on $\bm{P}$ and $T$ and the subsequent equality from the fact that $v_{m + 1} = 0$. Substituting this bound into Equation~\ref{eqn:pcsb}, we get $1 - \lambda \geq \alpha^2 / 2$, which is the desired result. \end{proof} \section{Preliminaries} We denote scalar values by small letters such as $a$, vectors with bolded small letters such as $\bm{v}$, and matrices with bolded capital letters like $\bm{P}$. We use capital letters like $P,Q,R$ chiefly to denote subsets of $[n]$ and calligraphic capital letters like $\mathcal{S}$ to denote sets of such subsets.
For a vector $\bm{v}$, $v_i$ denotes the $i^{th}$ entry of the vector. For a matrix $\bm{P}$ and two subsets $R$ and $S$, $\bm{P}_i$ denotes the $i^{th}$ column of the matrix, $\bm{P}_{ij}$ denotes the $j^{th}$ entry of the $i^{th}$ row of the matrix, $\bm{P}_{R,S}$ denotes the $\abs{R} \times \abs{S}$ sized sub-matrix corresponding to the rows in $R$ and columns in $S$, and $\bm{P}_R$ is used as shorthand for $\bm{P}_{R,R}$. We use $\widetilde{O}$ and $\widetilde{\Omega}$ to hide logarithmic factors in $n$ and $\epsilon$. We use $\rho(\bm{M})$ to denote the largest eigenvalue of the matrix $\bm{M}$. We restate the definitions of the Total Variation and Hellinger distances (as stated in \cite{daskalakis2017testing}): \begin{definition} \label{def:dheltv} For two distributions $\bm{p}$ and $\bm{q}$ over a support $[n]$, the Hellinger and Total Variation distances, denoted by $d_{Hel}$ and $d_{TV}$ respectively, are defined by: \begin{equation*} d_{Hel}^2 (\bm{p}, \bm{q}) = \frac{1}{2} \sum_{i \in [n]} (\sqrt{\bm{p}_i} - \sqrt{\bm{q}_i})^2 = 1 - \sum_{i \in [n]} \sqrt{\bm{p}_i\bm{q}_i}, \qquad d_{TV} (\bm{p}, \bm{q}) = \frac{1}{2} \sum_{i \in [n]} \abs{\bm{p}_i - \bm{q}_i} \end{equation*} Furthermore, the two distances enjoy the following relationship: \begin{equation*} \sqrt{2} d_{Hel}(\bm{p}, \bm{q}) \geq d_{TV} (\bm{p}, \bm{q}) \geq d_{Hel}^2 (\bm{p}, \bm{q}) \end{equation*} \end{definition} Now, we will introduce some notation for Markov chains: \subsection{Markov Chains} In this paper, we are only concerned with finite-dimensional Markov chains: \begin{definition} \label{def:fmc} A finite-dimensional homogeneous Markov chain is a stochastic process $\{X_t\}_{t \in \mb{N}}$ over a state space $[n]$ which satisfies the following property: \begin{equation*} \mb{P} \lbrb{X_{t + 1} = j | X_0 = i_0, \dots, X_{t - 1} = i_{t - 1}, X_t = i} = p_{i,j} \end{equation*} That is, the probability of the state at time step $t + 1$ given the states $X_{0}, \dots, X_t$ only depends on
the state at the previous time step, and this transition probability does not depend on the specific time step $t$. \end{definition} We will use $\bm{w}$ to denote a finite sample from a Markov chain and $\bm{w}_\infty$ to denote an infinite sample from the Markov chain. We will usually denote the transition matrices of Markov chains by $\bm{P}$ and $\bm{Q}$, and we will be concerned with the case where both matrices $\bm{P}$ and $\bm{Q}$ are symmetric. We will also assume that $\bm{P}$ and $\bm{Q}$ are irreducible. We now restate the distance measure between two transition matrices $\bm{P}$ and $\bm{Q}$ as stated in \cite{daskalakis2017testing}: \begin{definition}[Distance between Markov Chains] For two symmetric transition matrices $\bm{P}$ and $\bm{Q}$, the distance between them is defined by: \begin{equation*} \text{Dist} (\bm{P}, \bm{Q}) = 1 - \rho (\text{Sq} (\bm{P}, \bm{Q})) \end{equation*} where the function $\text{Sq}: \mb{R}_+^{n \times n} \times \mb{R}_+^{n \times n} \rightarrow \mb{R}_+^{n \times n}$ is defined by: \begin{equation*} (\text{Sq} (\bm{P}, \bm{Q}))_{ij} = \sqrt{\bm{P}_{ij} \bm{Q}_{ij}} \end{equation*} \end{definition} \begin{definition} \label{def:mcsub} Let $\bm{P}$ be a symmetric irreducible Markov chain and $T \subset [n]$ be a subset of states. Now, let $\bm{Y} = Y_1, Y_2, \dots$ be a Markov process with transition matrix $\bm{P}$ and let $\tau_1, \tau_2, \dots$ be defined such that: \begin{equation*} \tau_1 = \min \{j: Y_j \in T\},\qquad \tau_i = \min \{j: j > \tau_{i - 1} \wedge Y_j \in T\} \end{equation*} Then the sequence $\bm{X} = Y_{\tau_1}, Y_{\tau_2}, \dots $ is defined to be the Markov process, $\bm{Y}$, observed on $T$. \end{definition} We will now state some definitions which we will relate to the spectral properties of the Markov chain.
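The process observed on $T$ is again a Markov chain, and its transition matrix has the closed form $\bm{P}_T + \bm{P}_{T,\bar{T}}(\bm{I} - \bm{P}_{\bar{T}})^{-1}\bm{P}_{\bar{T},T}$, obtained by summing the geometric series over excursions outside $T$ (this is derived in the proof of Lemma~\ref{lem:submc}); a minimal numerical sketch:

```python
import numpy as np

def observed_chain(P, T):
    # Transition matrix of the process observed on T:
    #   Q = P_T + P_{T,Tc} (I - P_{Tc})^{-1} P_{Tc,T}
    # where Tc is the complement of T and the inverse sums the geometric
    # series over excursions outside T (P restricted to Tc is strictly
    # substochastic for an irreducible chain, so the series converges).
    T = sorted(T)
    Tc = [i for i in range(len(P)) if i not in T]
    P_T = P[np.ix_(T, T)]
    A = P[np.ix_(T, Tc)]
    B = P[np.ix_(Tc, Tc)]
    C = P[np.ix_(Tc, T)]
    return P_T + A @ np.linalg.inv(np.eye(len(Tc)) - B) @ C

P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
Q = observed_chain(P, [0, 1])
# Q is again symmetric and stochastic, with Q_ij >= P_ij on T.
```

For this example, $Q$ works out to $\begin{psmallmatrix} 0.58 & 0.42 \\ 0.42 & 0.58 \end{psmallmatrix}$, illustrating that the observed chain inherits symmetry and only gains transition mass.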
The first definition is the notion of expansion of a set of states, which intuitively measures how well the set of states is connected to the rest of the state space: \begin{definition}[Expansion] Given a matrix $\bm{P}$ with non-negative entries, the expansion of a set $S$, denoted by $h_{\bm{P}}(S)$, is defined as: \begin{equation*} h_{\bm{P}} (S) = \frac{\sum_{i \in S, j \notin S} \bm{P}_{ij}}{\min (\abs{S}, \abs{\bar{S}})} \end{equation*} \end{definition} The Cheeger constant of a Markov chain is defined as the minimum of the expansion over all subsets of the state space. \begin{definition}[Cheeger Constant] The Cheeger Constant of a Markov Chain with transition matrix, $\bm{P}$, is the minimum expansion of any subset of the state space. \begin{equation*} \chi(\bm{P}) = \min_{S \subset [n]} h_{\bm{P}} (S) \end{equation*} \end{definition} The following relationship between the Cheeger constant of a Markov chain and the spectrum of its transition matrix is well known from the work of \cite{sinclair1989approximate}. \begin{lemma}[\cite{sinclair1989approximate}] \label{lem:sinc} Let $\bm{P}$ be the transition matrix of a symmetric Markov chain with eigenvalues $1 = \lambda_1 > \lambda_2 \geq \dots \geq \lambda_n \geq -1$. Furthermore, assume that $\bm{P}$ satisfies $\chi (\bm{P}) \geq \alpha > 0$. Then, we have: \begin{equation*} \lambda_2 \leq 1 - \frac{\alpha^2}{2} \end{equation*} \end{lemma} Now, we define the hitting time of a Markov chain. \begin{definition} \label{def:htt} Let $\bm{P}$ be the transition matrix of a Markov chain, $\bm{X}$, over state space $[n]$. Let $\tau_j = \min\{t: X_t = j\}$.
Then, the hitting time of $\bm{P}$, denoted by $\text{HitT} (\bm{P})$, is defined as follows: \begin{equation*} \text{HitT} (\bm{P}) = \max_{i, j \in [n]} \mb{E} [\tau_j | X_0 = i] \end{equation*} \end{definition} \subsection{Sparsest Cut} Here, we will state some definitions relating to the graph decomposition algorithm we use for partitioning the state space of our Markov chain. Our first definition is closely related to the notion of expansion defined previously: \begin{definition}[Cut Value] \label{def:cutVal} Given a non-negative matrix $\bm{P}$, the cut value of a set $S$ is defined as: \begin{equation*} g_{\bm{P}} (S) = \frac{\sum_{i \in S, j \in \bar{S}} \bm{P}_{ij}}{\abs{S} \abs{\bar{S}}} \end{equation*} \end{definition} The Sparsest Cut problem is then defined as the problem of finding the set attaining the minimum cut value over all subsets. \begin{question}[Sparsest Cut] \label{que:spcut} Given a non-negative matrix $\bm{P}$, the goal is to find a subset $S^*$: \begin{equation*} S^* = \argmin_{S \subset [n]} g_{\bm{P}} (S) \end{equation*} \end{question} The Sparsest Cut problem is well known to be NP-Hard in general. However, polynomial-time approximation algorithms are known which return a subset whose cut value is within a logarithmic factor of the sparsest cut value. \section{Deferred Proofs from Section~\ref{sec:mpr}} \label{app:dptst} \subsection{Proof of Lemma~\ref{lem:helbnd}} \label{prf:helbnd} Let $l = \abs{T}$ and $v = \frac{1}{\sqrt{l}} \bm{1}_{T}$. Now, we consider two cases: \textbf{Case 1: } First, we consider the case where $\sum_{i, j \in T} \bm{Q}_{ij} \geq (1 - 5\epsilon / 16)l$.
In this case, we have by the definition of $\text{Dist}(T, \cdot)$: \begin{align*} d_{Hel}^2 \lprp{\text{Dist}(T, \bm{P}), \text{Dist}(T, \bm{Q})} &= \frac{1}{2} \lprp{\sum_{i \in T, j \in T} \frac{1}{l} \lprp{\sqrt{\bm{P}_{ij}} - \sqrt{\bm{Q}_{ij}}}^2 + \lprp{\sqrt{\text{Dist}(T, \bm{P}) (\eta)} - \sqrt{\text{Dist}(T, \bm{Q}) (\eta)}}^2} \\ &\geq \frac{1}{2l} \sum_{i,j \in T} \lprp{\sqrt{\bm{P}_{ij}} - \sqrt{\bm{Q}_{ij}}}^2 \geq 1 - \frac{3 \epsilon}{16} - \sum_{i,j \in T} \frac{\sqrt{\bm{P}_{ij}\bm{Q}_{ij}}}{l} \\ &= 1 - \frac{3\epsilon}{16} - v^\top \text{Sq} (\bm{Q}, \bm{P}) v \geq \frac{\epsilon}{2} \end{align*} where the second inequality uses the case assumption on $\bm{Q}$ along with our assumption on $T$ and $\bm{P}$, and the final inequality follows since $v^\top \text{Sq} (\bm{Q}, \bm{P}) v \leq \rho(\text{Sq} (\bm{Q}, \bm{P})) = 1 - \text{Dist}(\bm{P}, \bm{Q}) \leq 1 - \epsilon$. \textbf{Case 2: } For the alternative case, we have $s = \sum_{i, j \in T} \bm{Q}_{ij} \leq (1 - 5\epsilon / 16)l$. In this case, we have from the definition of $d_{TV}$: \begin{equation*} d_{TV} \lprp{\text{Dist}(T, \bm{P}), \text{Dist}(T, \bm{Q})} \geq \frac{1}{l} \sum_{i, j \in T} \lprp{\bm{P}_{ij} - \bm{Q}_{ij}} \geq \frac{\epsilon}{4} \end{equation*} Therefore, we have from the relationship between the Hellinger distance and Total Variation distance in Definition~\ref{def:dheltv}: \begin{equation*} d_{Hel}^2 \lprp{\text{Dist}(T, \bm{P}), \text{Dist}(T, \bm{Q})} \geq \frac{\epsilon^2}{32} \end{equation*} \qed \subsection{Proof of Lemma~\ref{lem:tli}} \label{prf:tli} We begin by partitioning the word into $l / k$ blocks of length $k = \frac{2\log n}{\alpha^2}$ and let $Y_j$ denote the indicator random variable for the event that some element of the $j^{th}$ block lies outside $T$. That is: \begin{equation*} Y_j = \bm{1} \{\exists i \in [(j - 1)k + 1, jk]: X_i \notin T\} \end{equation*} We will now bound $\mb{P} \{Y_j = 1 | X_1, \dots, X_{(j - 1)k}\}$, considering two cases: \textbf{Case 1: } $X_{(j - 1)k + 1} \notin T$. In this case, we have $\mb{P} \{Y_j = 1 | X_1, \dots, X_{(j - 1)k}\} = 1$.
\textbf{Case 2: } In this case, assume $X_{(j - 1)k + 1} = x \in T$. Here, we have from the property of the Markov chain that: \begin{equation*} \mb{P} \{Y_j = 0 | X_1, \dots, X_{(j - 1)k}, X_{(j - 1)k + 1} = x\} = e_{x}^\top \bm{P}_T^{k - 1} \bm{1} \leq \sqrt{n} \norm{\bm{P}_T}^{k - 1} \leq \frac{1}{2} \end{equation*} where the first inequality follows from Cauchy-Schwarz and the second follows from Lemma~\ref{lem:tnb}. Therefore, by combining the two cases above, we have $\mb{P} \{Y_j = 1 | X_1, \dots, X_{(j - 1)k}\} \geq 0.5$ and we get: \begin{equation*} \mb{P} \lbrb{\sum_{i = 1}^l \bm{1}\{X_i \notin T\} \geq \frac{l \alpha^2}{8 \log n}} \geq \mb{P} \lbrb{\sum_{i = 1}^{l / k} Y_{i} \geq \frac{l}{4k}} \geq 1 - \delta \end{equation*} via an application of Hoeffding's inequality (see, for example, \cite{boucheron2013concentration}) and using our bound on $l / k$. \qed \subsection{Proof of Lemma~\ref{lem:mxk}} \label{prf:mxk} We start by fixing a particular element $i \in [k]$. Now, we have: \begin{equation*} \mb{E} [v_i] = \frac{m}{k} \end{equation*} Therefore, we have by an application of Theorem~1.1 in \cite{dubhashi_panconesi_2009} that: \begin{equation*} \mb{P} \lbrb{v_i \geq 2 \frac{m}{k}} \leq \exp \lprp{- \frac{m}{3k}} \leq \exp \lprp{-3\log \frac{n}{\epsilon}} \leq \lprp{\frac{\epsilon}{n}}^3 \end{equation*} Finally, we get via an application of the union bound: \begin{equation*} \mb{P} \lbrb{\max_{i \in [k]} v_i \geq 2 \frac{m}{k}} \leq k \lprp{\frac{\epsilon}{n}}^3 \leq \frac{\epsilon}{n^2} \end{equation*} \qed \subsection{Proof of Lemma~\ref{lem:submc}} \label{prf:submc} Since the chain $\bm{Y}$ is irreducible, we have that $\bm{X}$ is defined almost surely. Now, we will prove that $q_{ij} = \mb{P} \{X_{k + 1} = j | X_{k} = i\}$ is independent of $k$.
We will do this by showing that $\mb{P} [X_{k + 1} = j| X_{k} = i, \tau_k = l]$ is independent of $l$ and $k$, as: \begin{align*} \mb{P} \{X_{k + 1} = j | X_{k} = i\} &= \sum_{l = 1}^\infty \mb{P} \{X_{k + 1} = j, \tau_k = l| X_{k} = i\} \\ &= \sum_{l = 1}^\infty \mb{P} \{\tau_k = l| X_{k} = i\} \mb{P} \{X_{k + 1} = j | \tau_k = l, X_{k} = i\} \end{align*} Now, we define $\mathcal{P}_r$ to be the set of sequences of states of length $r$ that begin with $i$ and end with $j$ such that the elements in between are not in $T$. That is, if $(i_1, i_2, \dots, i_r) \in \mathcal{P}_r$, then we have $i_1 = i$, $i_r = j$ and $i_l \notin T,\ \forall l \in \{2, \dots, r-1\}$. Therefore, we get by the Markov property of $\bm{Y}$ and the definition of $\bm{X}$: \begin{align*} \mb{P} \{X_{k + 1} = j | \tau_k = l, X_{k} = i\} &= \mb{P} \{Y_{\tau_{k + 1}} = j | Y_l = i\} = \sum_{m = l+1}^\infty \mb{P} \{\tau_{k + 1} = m, Y_{m} = j | Y_l = i\} \\ &= \sum_{m = 2}^\infty \mb{P} \{\tau_{2} = m, Y_{m} = j | Y_1 = i\} = \sum_{r = 2}^\infty \sum_{\bm{i} \in \mathcal{P}_r} \prod_{s = 1}^{r - 1} \bm{P}_{i_si_{s+1}} \\ &= \bm{P}_{ij} + \bm{e}_{i}^\top \lprp{\sum_{t = 0}^\infty \bm{P}_{T, \bar{T}} \bm{P}_{\bar{T}}^{t} \bm{P}_{\bar{T}, T}} \bm{e}_j \end{align*} This is independent of $k$; therefore, the process $\bm{X}$ is a Markov process, and the claim about the transition matrix follows from the above expression since, for all $i,j \in T$ and $k \in \mb{N}$, $\mb{P}[X_{k + 1} = j | X_k = i] = \bm{Q}_{ij}$. \qed \subsection{Proof of Corollary~\ref{cor:expsmc}} \label{prf:expsmc} The corollary is immediate as $\forall S \subset T, \abs{S} \leq \abs{T} / 2$: \begin{equation*} h_{\bm{Q}} (S) = \frac{\sum_{i \in S, j \in T \setminus S} \bm{Q}_{ij}}{\abs{S}} \geq \frac{\sum_{i \in S, j \in T \setminus S} \bm{P}_{ij}}{\abs{S}} \geq \alpha \end{equation*} where the last bound follows from the fact that $\bm{Q}_{ij} \geq \bm{P}_{ij}$ from Lemma~\ref{lem:submc}.
\qed \subsection{Proof of Lemma~\ref{lem:htb}} \label{prf:htb} To start, consider the Markov chain with transition matrix $\bm{Q} = 0.5 (\bm{P} + \bm{P}^2)$. Given a trajectory of length $2l$ from the transition matrix $\bm{P}$, it is easy to simulate a trajectory of length $l$ from $\bm{Q}$ by simply taking the next element in the trajectory with probability $0.5$ and skipping an element with probability $0.5$. It follows that $\text{HitT}(\bm{P})$ is upper bounded by $2\text{HitT}(\bm{Q})$. Now, let $1 = \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n \geq -1$ be the eigenvalues of $\bm{P}$ and let $v_1, \dots, v_n$ be the corresponding eigenvectors. Note that we can take $v_1$ to be the vector $(1 / \sqrt{n}, 1 / \sqrt{n}, \dots, 1 / \sqrt{n})$ (the unit vector in the direction of the stationary distribution). Now, let $\pi$ be any distribution over the states $[n]$. Then $\inp{\pi}{v_1} = 1 / \sqrt{n}$ and, furthermore, $\abs{\inp{v_i}{\pi}} \leq 1$ for all $i \in [n]$. Note that $\bm{P}$ and $\bm{Q}$ have the same set of eigenvectors $v_1, \dots, v_n$, and the corresponding eigenvalues of $\bm{Q}$ are $0.5 (\lambda_1 + \lambda_1^2), \dots, 0.5(\lambda_n + \lambda_n^2)$. Now, let $1 = \sigma_1 > \sigma_2 \geq \dots \geq \sigma_n$ be the eigenvalues of $\bm{Q}$ with corresponding eigenvectors $v_1, u_2, \dots, u_n$. We have from Lemma~\ref{lem:sinc} that, for all $i \geq 2$: \begin{equation*} \abs{\sigma_i} \leq 1 - \frac{\alpha^2}{2} \end{equation*} since, when $\lambda \leq 0$, the maximum absolute value of $0.5 (\lambda + \lambda^2)$ is $1 / 8$. Now, let $\pi_0$ be any starting distribution over states; then the distribution over the states at time $t$ is $\pi_t = \pi_0 \bm{Q}^t$, and let $\pi^*$ be the stationary distribution.
Therefore, we get: \begin{equation*} \norm{\pi_t - \pi^*} = \norm*{\frac{1}{\sqrt{n}} v_1 - \pi^* + \sum_{i = 2}^n \sigma_i^t \inp{u_i}{\pi_0}u_i} \leq \sum_{i = 2}^n \abs{\sigma_i}^t \abs{\inp{u_i}{\pi_0}} \leq n \lprp{1 - \frac{\alpha^2}{2}}^t \leq n\exp\lprp{- \frac{\alpha^2}{2}\cdot t} \end{equation*} where we used the fact that $\pi^* = \frac{1}{\sqrt{n}} v_1$ along with the triangle inequality. Therefore, at $t^* = 4\log(10n) / \alpha^2$, we have: \begin{equation} \label{eqn:pbnd} \norm{\pi_{t^*} - \pi^*} \leq \frac{1}{4n} \end{equation} Since every entry of $\pi^*$ is $1 / n$, Equation~\ref{eqn:pbnd} implies that, from any starting state, the chain is at any fixed state at time $t^*$ with probability at least $\frac{3}{4n}$. Therefore: \begin{equation*} \text{HitT}(\bm{Q}) \leq 4 \frac{\log (10n)}{\alpha^2} \cdot \frac{3}{4n} + \lprp{1 - \frac{3}{4n}} \lprp{4 \frac{\log (10n)}{\alpha^2} + \text{HitT}(\bm{Q})} \end{equation*} By rearranging the above inequality, we get: \begin{equation*} \text{HitT}(\bm{Q}) \leq \frac{16 n \log (10n)}{3 \alpha^2} = \widetilde{O} \lprp{\frac{n}{\alpha^2}} \implies \text{HitT}(\bm{P}) \leq \widetilde{O} \lprp{\frac{n}{\alpha^2}} \end{equation*} \qed \section{Testing Markov Chains} \label{sec:test} In this section, we state and prove the main result of the paper. We introduce our algorithm for identity testing of Markov chains and prove statistical and computational guarantees on its performance. As stated before, our algorithm follows the reduction framework of \cite{daskalakis2017testing}, but instead of a reduction to a single distribution testing problem, we reduce the problem to multiple distinct distribution testing problems, where each problem corresponds to a disjoint subset of the state space. The main insight of our algorithm is that, to distinguish between two Markov chains that are sufficiently far from each other, it is sufficient to perform a test on such ``high-information'' sets.
Our algorithm proceeds along three main steps: \begin{enumerate} \item \textbf{State Partitioning: } Partition the states into $S_1, \dots, S_k, T$ where the subsets $S_1, \dots, S_k$ are the ``high-information'' sets and $T$ is a single ``low-information'' set. \item \textbf{Generate IID Samples: } Check whether we have enough samples from one of the $S_i$ to generate samples for the iid distribution testing problem corresponding to it. \item \textbf{Run Identity Tester: } If so, return the result of the test; otherwise, declare $\text{Dist}(\bm{P}, \bm{Q}) \geq \epsilon$. \end{enumerate} The full algorithm is described in Algorithm~\ref{alg:idtst}, with supplementary algorithms for graph partitioning in Algorithms~\ref{alg:ec} and \ref{alg:pg} and for simulating iid samples in Algorithm~\ref{alg:geniid}. The main result of our paper is the following performance guarantee on Algorithm~\ref{alg:idtst}: \begin{theorem} \label{thm:mainth} There is a polynomial time algorithm (Algorithm~\ref{alg:idtst}) which, given access to $\widetilde{O}\lprp{n / \epsilon^4}$ samples from a Markov process with transition matrix $\bm{Q}$ and a symmetric transition matrix $\bm{P}$, correctly distinguishes between the two cases: \begin{equation*} \text{Case 1: } \bm{Q} = \bm{P}, \qquad \text{Case 2: } \text{Dist}(\bm{Q}, \bm{P}) \geq \epsilon \end{equation*} with probability at least $2/3$.
\end{theorem} We start by describing the type of distributions for which we will employ our iid distribution tester: \begin{definition} \label{def:ddef} Let $\bm{P}$ be the transition matrix of a symmetric Markov chain and let $R$ be a subset of its states. Then $\text{Dist}(R, \bm{P})$ is the distribution over a support of size $\abs{R}^2 + 1$ composed of $\{(i,j): i,j \in R\} \cup \{\eta\}$, where $\eta$ denotes an element distinct from all the pairs $(i,j),\ i,j \in [n]$: \begin{equation*} \forall i,j \in R, (\text{Dist} (R, \bm{P})) ((i,j)) = \frac{\bm{P}_{ij}}{\abs{R}}, \qquad (\text{Dist} (R, \bm{P})) (\eta) = 1 - \frac{1}{\abs{R}} \sum_{i,j \in R} \bm{P}_{ij} \end{equation*} \end{definition} Therefore, given a partitioning of the state space such that distributions of the above type are sufficiently different, it suffices to have enough samples from any one of the partitions. However, two questions need to be answered before we can proceed: \begin{enumerate} \item Which partitions of the state space should we use to define such distributions? \item How many samples does one need from each of the partitions? \end{enumerate} It turns out that the answers to both questions depend on the expansion properties of the sets. For the first property, we would like sets of low expansion, that is, subsets of the state space that are poorly connected to the rest of the state space, while for the second property, we would like sets which are well connected internally. We will see that the second property relates to the hitting time of the Markov process defined on the specific subset of states, which is small if the subset is well connected within itself. Therefore, one would like decompositions of the state space into sets which are poorly connected to the rest of the state space but well connected within themselves.
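To make Definition~\ref{def:ddef} concrete, $\text{Dist}(R, \bm{P})$ can be materialized explicitly for small instances; a minimal sketch (the dictionary representation is ours, purely for illustration):

```python
import numpy as np

def dist_R_P(P, R):
    # Materializes Dist(R, P): mass P_ij / |R| on each ordered pair
    # (i, j) with i, j in R, and the leftover mass on the symbol eta.
    R = sorted(R)
    d = {(i, j): float(P[i][j]) / len(R) for i in R for j in R}
    d["eta"] = 1.0 - sum(d.values())
    return d

P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
d = dist_R_P(P, [0, 1])
```

For this $\bm{P}$ and $R = \{0, 1\}$, the mass on $\eta$ is $1 - \frac{1.5}{2} = 0.25$; the closer $R$ is to being a closed, dense component, the smaller the mass on $\eta$.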
We would like to point out that conventional graph decomposition algorithms decompose the graph into subsets which are well connected internally while removing a very small number of edges, which guarantees the first property for a large fraction of the subsets in terms of the total number of states. However, for the remaining subsets, we have no such guarantees, and it is therefore unclear whether samples from such subsets can be used to distinguish the two Markov chains. Even though one can guarantee that upon entering such a subset, the trajectory is likely to quickly leave it, one cannot guarantee that the next partition the chain visits is a ``high-information'' subset. An alternate approach is to group all such ``low-information'' subsets into a single set, but in this case one loses the expansion guarantees of the individual sets, which again makes it hard to bound the amount of time needed to escape from this set. In light of the above-mentioned difficulties, we devise a new graph partitioning algorithm which decomposes the graph into potentially several well connected ``high-information'' sets and a single well connected ``low-information'' set, from which the chain is guaranteed to escape quickly and therefore reach a ``high-information'' set. We generalize conventional linear programming relaxations for the sparsest cut problem to respect component constraints and then use this generalization to recursively partition the graph into subsets, while measuring sparsest cut values with respect to the original graph instead of the sub-graphs formed after removing partitions. The full details of our algorithm are deferred to the Appendix (see Algorithms~\ref{alg:ec} and \ref{alg:pg}). Note that following the approach of \cite{daskalakis2017testing}, we can sample from $\text{Dist}(T, \bm{P})$ given access to an infinite word. 
Firstly, note that it is possible to sample from $\text{Dist}(T, \bm{P})$ by first sampling an element from $T$ and then sampling from the distribution corresponding to the sampled element in $\text{Dist}(T, \bm{P})$. Therefore, to obtain $l$ samples from $\text{Dist}(T, \bm{P})$, we start by generating $l$ samples from $\text{Uniform}(T)$. Let the number of times we generated state $i\in T$ be denoted by $r_i$. Now, we simply scan the infinite word sequentially, and each time we encounter an element $j \in T$ at position $t$, we discard the occurrence if $j$ has already been encountered $r_j$ times; otherwise, we add the transition $j \rightarrow w_{t + 1}$ to our samples if $w_{t + 1} \in T$, or add $\eta$ to our samples if $w_{t + 1} \notin T$. The correctness of the described procedure follows from the Markov property, which ensures that all the transitions generated previously are independent of the ones generated after. The procedure is formally described in Algorithm~\ref{alg:geniid}. \begin{algorithm}[H] \caption{Generate IID Samples} \label{alg:geniid} \begin{algorithmic}[1] \STATE \textbf{Input}: Finite word $\bm{w} \in [n]^m$, Subset $T$, Number of samples $l$ \STATE $\bm{v} \leftarrow l\text{ samples from Uniform}(T)$ \STATE $\bm{r} \leftarrow \text{Histogram}(\bm{v})$ \STATE $\mathcal{S} \leftarrow \{\}$ \FOR {$i = 1:m-1$} \STATE $j \leftarrow w_i$ \IF {$j \in T$ and $\bm{r}_j > 0$} \IF {$w_{i + 1} \in T$} \STATE $\mathcal{S} \leftarrow \mathcal{S} \cup (j, w_{i + 1})$ \ELSE \STATE $\mathcal{S} \leftarrow \mathcal{S} \cup \eta$ \ENDIF \STATE $\bm{r}_j \leftarrow \bm{r}_j - 1$ \ENDIF \ENDFOR \IF {$\exists i \in T: \bm{r}_i > 0$} \STATE \textbf{Return: } False \ENDIF \STATE \textbf{Return: } $\mathcal{S}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Identity Test of Markov Chains} \label{alg:idtst} \begin{algorithmic}[1] \STATE \textbf{Input}: Finite word $\bm{w} \in [n]^m$, Target Transition Matrix $\bm{P}$, Target Accuracy 
$\epsilon$ \STATE $(\mathcal{S}, T) \leftarrow \text{Partition Graph} (\bm{P}, \epsilon / 16)$ \FOR {$S \in \mathcal{S}$} \STATE $l^\prime \leftarrow O\lprp{\frac{\abs{S} \log (n)}{\epsilon^2}}$ \STATE $\mathcal{R}_S \leftarrow \text{Generate IID Samples} (\bm{w}, S, l^\prime)$ \IF {$\mathcal{R}_S \neq \text{False}$} \STATE \textbf{Return: } $\text{Identity Test} (\mathcal{R}_S, \text{Dist}(S, \bm{P}), \epsilon^2 / 32)$ \ENDIF \ENDFOR \STATE \textbf{Return: } False \end{algorithmic} \end{algorithm}
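The sample-simulation step can be illustrated with a short Python sketch of Algorithm~\ref{alg:geniid}; the trajectory, the subset $T$, and the number of samples are toy assumptions, and the random quota draw stands in for $\text{Uniform}(T)$:

```python
import random

def generate_iid_samples(w, T, l, rng=random.Random(0)):
    """Scan a trajectory w once, converting transitions out of states in T
    into iid draws from Dist(T, P): a pair (j, w[t+1]) if the next state
    stays in T, or the symbol "eta" if it leaves T."""
    T = set(T)
    # Quotas r_j: how many transitions out of each state j we may keep,
    # obtained from l uniform draws over T.
    r = {j: 0 for j in T}
    for _ in range(l):
        r[rng.choice(sorted(T))] += 1
    samples = []
    for t in range(len(w) - 1):
        j = w[t]
        if j in T and r[j] > 0:
            samples.append((j, w[t + 1]) if w[t + 1] in T else "eta")
            r[j] -= 1
    # Fail if the trajectory was too short to meet every quota.
    if any(v > 0 for v in r.values()):
        return False
    return samples

trajectory = [0, 1, 0, 2, 1, 1, 0, 1, 2, 0, 1, 0, 1, 1, 0]
out = generate_iid_samples(trajectory, T={0, 1}, l=4)
```

On this toy trajectory the quotas are always met, so `out` is a list of exactly four draws, each either a pair over $T$ or `"eta"`.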
\section{Introduction} \IEEEPARstart{N}{eural} networks often consist of a large number of nodes arranged in deep layers. As researchers have addressed an increasingly wide variety of problems using neural networks, these network architectures have become increasingly complicated. High-performance networks often contain thousands of nodes \cite{krizhevsky2012imagenet, szegedy2013intriguing, simonyan2014very, he2016deep, redmon2016you}. One property of neural networks is that nodes share the workload \cite{baldi1989neural}. A network is trained to perform a given task utilizing all the available nodes. Thus, all the nodes provided in a network architecture contribute towards solving a given problem. As neural networks are implemented on platforms with less computational power, and as neural networks are asked to perform more tasks on platforms with massive but still finite computational power, designing a network with less computational complexity has become an important issue \cite{reed1993pruning, cheng2017survey}. Numerous studies have been conducted to determine how to prune network nodes and obtain a computationally efficient network \cite{mozer1989skeletonization, lecun1990optimal, karmin1990simple, hassibi1993second, han2015learning, ishikawa1996structural, collins2014memory, li2016pruning, zhou2016less, jang2018deep}. In these approaches, the networks are first trained with a large number of nodes; then, the importance of each node is evaluated by analyzing the node weight using various measures. Finally, the nodes with less importance are removed from the trained network. However, because all the nodes in a trained network share the workload, removing a node from a trained network, even one whose weight indicates lesser importance, will degrade the network's performance. Consequently, pruning approaches are usually followed by retraining the pruned network to recover the performance losses from pruning. 
In this paper, we propose a network that consists of nodes with heterogeneous sensitivity. Each node in a network is assigned a variable that determines its sensitivity to learn a given task. Then, the network learns to perform the task by relying more on the sensitive nodes and less on the insensitive nodes. In extreme cases where the sensitivity of a node is zero, the network does not utilize the node at all; it is essentially disconnected. Node sensitivity is learned during training through a constrained optimization. The sparsity of the sensitivity is maximized while the network performance is constrained within a certain range. As a result, the network learns to perform a given task using only a small number of sensitive nodes. By simply removing the nodes with zero sensitivity, the computationally efficient architecture of a deep network for a given task is obtained. The regularization parameter used for the constrained optimization is determined simultaneously during the training based on the L-curve, which has previously been used to determine the regularization parameters for inverse problems \cite{hansen1993use, hansen1999curve, hansen2005rank, hansen2006deblurring}. We assign sensitivity to nodes in a network by introducing a layer we call a sensitivity layer. The sensitivity layer can be implemented as a special type of dense or convolutional layer. Then, a network that includes these new sensitivity layers can be implemented and trained using functions available in standard deep learning packages \cite{jia2014caffe, chollet2015keras, abadi2016tensorflow}. Our approach does not require any special optimization routine to solve the constrained optimization problem that we designed to enforce the sparsity of the sensitivity variables while training. 
We designed computationally efficient architectures with heterogeneous sensitivity for various tasks such as autoregression, object recognition, facial expression recognition, and object detection using various datasets. We first applied networks with heterogeneous sensitivity to design a simple autoencoder and a deep convolutional neural network (CNN) for analysis. The effects of the regularization parameters on the network's performance and architecture are analyzed through the L-curve. Simultaneous selection of the regularization parameter during the training using the proposed algorithm is validated. Then, we applied the sensitivity layers to deep networks to solve various problems. Experiments are performed using various networks (autoencoder, CNN, LeNet \cite{lecun1998gradient}, VGG \cite{simonyan2014very}, ResNet \cite{he2016deep}, and YOLO \cite{redmon2016you}) and various datasets (Gaussian, MNIST \cite{lecun1998gradient}, CIFAR-10 \cite{krizhevsky2009learning}, CK+ \cite{lucey2010extended}, VOC \cite{Everingham15}, and ImageNet \cite{deng2009imagenet}) for various classification tasks. By introducing nodes with heterogeneous sensitivity to the networks and enforcing the sparsity of the sensitivity, we were able to design networks that consist of notably fewer nodes but that exhibit the same or even better performance. The proposed method can be used to design an efficient network containing the optimal number of nodes. The rest of this paper is organized as follows. We introduce networks with heterogeneous sensitivity and the constrained optimization to determine efficient network architecture in Sections \ref{sec:network} and \ref{sec:optimal}, respectively. An algorithm for simultaneous selection of the regularization parameter during the network training is presented in Section \ref{sec:lcurve}. The implementation of the nodes with heterogeneous sensitivity as the sensitivity layer is discussed in Section \ref{sec:implementation}. 
Comparisons to pruning approaches for finding efficient network architectures are given in Section \ref{sec:comparison}. We present the experimental results and discussions for the autoencoder in Sections \ref{ex:gaussian} and \ref{ex:ae}. We find efficient architectures for deep networks with heterogeneous sensitivity in Sections \ref{ex:cnncifar} to \ref{ex:yolovoc}. We compare the performance and complexity of the optimal deep networks to those of pruned networks reported in the literature in Sections \ref{ex:lenetmnist} to \ref{ex:resnetimagenet}. Section \ref{sec:conclusion} concludes the paper. \section{Efficient Architecture for Deep Neural Networks} \label{sec:proposed} \subsection{Neural Networks with Heterogeneous Sensitivity} \label{sec:network} Consider a network whose $l$th layer consists of the following operations. The intermediate output $u_i^{l}$ is computed by either a dense layer, \begin{equation} u_i^{l} = \sum_{j=1}^{n_{l-1}} W_{ij}^l x_j^{l-1}, \label{eq:dense} \end{equation} or by a convolutional layer, \begin{equation} u_i^{l} = \hbox{conv}(W_{i}^l, x^{l-1}), \label{eq:conv} \end{equation} where $u_i^l$ is the $i$th intermediate output node, and $x_j^{l-1}$ and $x^{l-1}$ are the $j$th node and the node volume in the $(l-1)$th layer, respectively. The network parameters $W_{ij}^l$ and $W_{i}^l$ denote a weight in the dense layer and a filter in the convolutional layer, respectively. The number of nodes in the $l$th layer is denoted by $n_l$. The intermediate output is activated by an activation function: \begin{equation} v_i^{l} = f^l(u_i^l). \label{eq:activation} \end{equation} The activated output $v_i^{l}$ is weighted by a newly introduced layer: \begin{equation} x_i^{l} = s^l_i v_i^{l}, \label{eq:sensitivity} \end{equation} where $s^l_i$ is a variable that determines the sensitivity of the $i$th node in the $l$th layer $x_i^l$. We denote this new layer as a sensitivity layer. 
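The layer operations above can be illustrated with a minimal Python sketch for the dense case; the weights, the sensitivities, and the ReLU activation are toy assumptions of ours, not trained values:

```python
import numpy as np

def dense_sensitivity_forward(W, s, x, f=lambda u: np.maximum(u, 0.0)):
    """u = W x (dense layer), v = f(u) (activation),
    then x_out_i = s_i * v_i (the sensitivity layer)."""
    u = W @ x          # intermediate output of the dense layer
    v = f(u)           # activated output
    return s * v       # elementwise heterogeneous sensitivity

W = np.array([[1.0, -1.0],
              [2.0,  0.5]])
s = np.array([0.0, 1.0])   # first node fully insensitive (disconnected)
x = np.array([1.0, 1.0])
y = dense_sensitivity_forward(W, s, x)
# The zero-sensitivity node contributes nothing to the output: y = [0.0, 2.5].
```

Setting a sensitivity to zero reproduces the "disconnected node" behavior described in the text: the node's activation is computed but never propagated.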
Schematics of the proposed network architecture are given in Fig. \ref{fig:schematics}. \begin{figure}[!t] \centering \begin{minipage}{0.75\linewidth} \centering {\includegraphics[trim = 0 0 0 0, clip, width=\linewidth]{./Figures/network_dense_schematic.pdf}}% {\footnotesize (a)} \end{minipage}% \begin{minipage}{0.75\linewidth} \centering {\includegraphics[trim = 0 0 0 0, clip, width=\linewidth]{./Figures/network_conv_schematic.pdf}}% {\footnotesize (b)} \end{minipage}% \caption{A layer schematic in the proposed network with heterogeneous sensitivity: (a) dense layer; (b) convolutional layer.} \label{fig:schematics} \end{figure} The sensitivity variable $s^l_i$ in the sensitivity layer allows us to apply heterogeneous sensitivity to the nodes in a network. Consider a network trained to minimize a cost function $E$ using backpropagation \cite{rumelhart1986learning, haykin1994neural}. The weight matrix $\mathbf{W}_l$, whose elements are the weights $W_{ij}^l$ of the $l$th layer, is updated by \begin{equation} \mathbf{W}_l \gets \mathbf{W}_l - \eta \bm{\delta}_l \mathbf{x}_{l-1}^\mathsf{T}, \label{eq:update} \end{equation} where $\eta$ is the step size and $\mathbf{x}_{l-1}$ is a vector whose elements are $x_i^{l-1}$. In the $l$th layer, each element of the sensitivity vector $\bm{\delta}_l$ is \begin{equation} \delta_i^{l} = s^l_i (\sum_{k=1}^{n_{l+1}} \delta_k^{l+1} W_{ki}^{l+1}) \frac{\partial f^l (u_i^{l})}{\partial u_i^l}. \label{eq:sensitivity2} \end{equation} The variable $s^l_i$ is a weight that reflects the sensitivity of a node. When the weights are updated, nodes with larger values of $|s^l_i|$ will respond more sensitively than those with smaller $|s^l_i|$ values. In the extreme case, when $s^l_i=0$, the node is completely insensitive. These zero-sensitivity nodes can be regarded as disconnected nodes. 
\subsection{Optimization for Efficient Network Architecture} \label{sec:optimal} Node sensitivity (the sensitivity variable $s^l_i$) can be determined during the training so that a network learns to perform a given task using only a small number of sensitive nodes. To accomplish this, we designed an optimization problem to train the network: \begin{equation} \begin{split} \textnormal{minimize} \quad & \sum_{l=1}^L \|\mathbf{s}_l\|_1\\ \textnormal{subject to} \quad & E < \epsilon, \end{split} \label{eq:optimization1} \end{equation} where $\mathbf{s}_l$ is a vector whose elements are $s^l_i$ and $E$ is a deviation penalty that measures the deviations of network outputs from the ground truth values. The cost function, which is the sum of the $\ell_1$ norms of the vectors $\mathbf{s}_l$, makes each vector $\mathbf{s}_l$ sparse, so that the network uses only a few sensitive nodes and includes as many disconnected nodes as possible. The constraint on $E$ guarantees the performance of the network within the threshold $\epsilon$. The deviation penalty $E$ is a typical cost function used to train a network. Because the last layer of a network is usually determined by the specific task, we set the sensitivity variables of the last layer, $s^L_i$, to one. The optimization problem can be rewritten as follows: \begin{equation} \underset{\mathbf{W}_1,\mathbf{W}_2,\cdots, \mathbf{W}_L}{\underset{\mathbf{s}_1,\mathbf{s}_2,\cdots, \mathbf{s}_L}{\textnormal{minimize}}} \quad E + \lambda \sum_{l=1}^L \|\mathbf{s}_l\|_1, \label{eq:optimization2} \end{equation} where the regularization parameter $\lambda$ weighs the deviation penalty $E$ and the sparsity penalty $\sum \|\mathbf{s}_l\|_1$. When $\lambda$ is large, the sparsity penalty dominates the cost function in \eqref{eq:optimization2}. The trained network will consist of a small number of sensitive nodes with as many insensitive (i.e., disconnected) nodes as possible. 
However, because the deviation penalty $E$ is neglected during the training, the network will fail to provide accurate outputs. In contrast, when $\lambda$ is small, the deviation penalty dominates the cost function, and the network will be trained to provide accurate outputs but will utilize most of the available nodes. Thus, few insensitive (i.e., disconnected) nodes will exist. Ideally, the goal is to find a $\lambda$ that balances the deviation and sparsity penalties. \subsection{Regularization Parameter Selection Via L-Curve} \label{sec:lcurve} An L-curve is a plot of the two penalties for various values of $\lambda$. The L-curve has previously been used in inverse problems \cite{hansen1993use, hansen1999curve, hansen2005rank, hansen2006deblurring}. In our problem, the L-curve shows the deviation penalty vs. the sparsity penalty for various values of $\lambda$. Let the deviation and sparsity penalties when the network is trained with the regularization parameter $\lambda$ be $E(\lambda)$ and $S(\lambda)$, respectively. The L-curve is given by \begin{equation} \{(\phi(E(\lambda)),\phi(S(\lambda))),\lambda > 0\}, \label{eq:lcurve} \end{equation} where $\phi$ is an increasing function such as the log function. One region of the L-curve corresponds to solutions that are dominated by the deviation penalty, and another region of the L-curve corresponds to solutions that are dominated by the sparsity penalty. The curve generally has an L-shape. The corner of the L-curve provides a solution for which the two penalties are balanced. We use the L-curve to determine the best value of the regularization parameter $\lambda$ for the optimization problem in \eqref{eq:optimization2}. The operation of a deep network is usually composed of multiple layers with non-linear activation functions. 
Parameter selection methods based on diagonalization of a matrix that represents a linear operation of a system \cite{calvetti2000tikhonov, xie2009lanczos} are not applicable to determine the regularization parameter for deep networks. Also, the training of a deep network usually requires a large amount of computation. Parameter selection methods that find the maximum curvature point of the L-curve \cite{hansen1993use} are not practical, because the L-curve has to be constructed for many values of $\lambda$. We determine the regularization parameter $\lambda$ simultaneously during the training of a deep network. The network training is initialized with random weights $\mathbf{W}_l$ and a small initial regularization parameter $\lambda_0$. The network is trained for a given number of epochs and the deviation penalty $E$ is measured. While the measured deviation penalty is smaller than $\varepsilon$, the regularization parameter $\lambda$ is increased by $\Delta \lambda$, the sensitivity variables are initialized randomly, and the network is trained for a given number of epochs. When the regularization parameter $\lambda$ becomes too large, the network cannot provide outputs close to the ground truth. The deviation penalty $E$ increases substantially and becomes larger than $\varepsilon$. We find the largest regularization parameter $\lambda$ value that provides the deviation penalty $E\leq \varepsilon$. Then, the training of the network continues with the found regularization parameter until the termination condition for the network training is met. The algorithm for the simultaneous network training and regularization parameter selection is given in Algorithm \ref{alg:training}. \begin{algorithm}[t!] \caption{Simultaneous Training and Parameter Selection} \label{alg:training} \begin{algorithmic}[1] \STATE $\lambda \gets \lambda_0$, $E\gets \varepsilon$. \STATE Initialize $\mathbf{W}_l$'s. 
\WHILE{Termination condition is not met} \WHILE{$E \leq \varepsilon$} \STATE $\lambda \gets \lambda + \Delta \lambda$ \STATE Initialize $\mathbf{s}_l$'s. \STATE Train the network for a number of epochs. \STATE Measure $E$. \ENDWHILE \STATE Train the network. \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Sensitivity Layer Implementation} \label{sec:implementation} The sensitivity layer can be regarded as a special type of dense or convolutional layer. A sensitivity layer added after a dense layer with $n_l$ nodes can be regarded as a set of $n_l$ (dense) layers with one input node, one output node, and one weight. A sensitivity layer added after a convolutional layer with $n_l$ nodes can be regarded as a set of $n_l$ convolutional layers with one input node, one output node, and a single one-by-one filter. Hence, the sensitivity layer can be implemented using layer definitions and functions already available in deep learning packages \cite{jia2014caffe, chollet2015keras, abadi2016tensorflow}. Moreover, training a network that includes sensitivity layers can be accomplished with various training methods already available in deep learning packages. The weights in the dense or convolutional layers are updated by \begin{equation} W^l_{ij} \gets W^l_{ij} - \eta \frac{\partial E}{\partial W^l_{ij}}, \end{equation} and the parameters in the sensitivity layers are updated by \begin{equation} s^l_i \gets s^l_i - \eta \frac{\partial E}{\partial s^l_i} - \eta \lambda \frac{s^l_i}{|s^l_i|}. \end{equation} These updates can be implemented as the standard training of dense or convolutional layers under $\ell_1$ regularization. Hence, by regarding the sensitivity layers as simply special cases of dense or convolutional layers, we can implement and train a network with the sensitivity layers using standard deep learning packages. Our approach does not require any special optimization routines to solve the constrained optimization problem. 
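The ramp-up loop of Algorithm \ref{alg:training} together with the $\ell_1$-subgradient update can be sketched in Python; the one-variable "network" $\hat{y} = s x$ and the constants ($\eta$, $\varepsilon$, $\Delta\lambda$) are purely illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Toy model y_hat = s * x fitted to y = 2x; E is the mean squared
# deviation and |s| plays the role of the sparsity penalty.
xs = np.linspace(-1.0, 1.0, 9)
ys = 2.0 * xs

def train(s, lam, eta=0.1, steps=500):
    """Gradient descent on E plus the l1 subgradient term (sign of s)."""
    for _ in range(steps):
        grad_E = np.mean(2.0 * xs * (s * xs - ys))    # dE/ds
        s -= eta * grad_E + eta * lam * np.sign(s)    # l1 subgradient step
    return s

def deviation(s):
    return np.mean((s * xs - ys) ** 2)

eps, dlam = 5e-3, 0.05
lam, best = 0.0, None
while True:                      # ramp lambda while E stays below eps
    lam += dlam
    s = train(1.0, lam)          # re-initialize sensitivity, retrain
    if deviation(s) > eps:
        break                    # lambda became too large
    best = lam                   # largest lambda so far with E <= eps
s = train(1.0, best)             # continue training at the selected lambda
```

With these toy constants the loop stops after the second increment, keeping the largest $\lambda$ whose deviation penalty still satisfies the constraint, which mirrors the selection rule stated in the text.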
\subsection{Comparison with Pruning Methods} \label{sec:comparison} Previous studies have investigated how to design compact and efficient networks to allow deep networks to be deployed on devices with restricted computational capabilities. A survey of efficient deep network design methods can be found in \cite{reed1993pruning, cheng2017survey}. Many approaches prune nodes to obtain a more efficient network with lower computational complexity and memory requirements. In these pruning approaches, a network is first trained, and then the importance of each node is evaluated with various measures. Finally, the less important nodes are pruned from the trained network. The issue of how the cost function used for the training changes with small weight perturbations was analyzed in \cite{mozer1989skeletonization, lecun1990optimal, karmin1990simple, hassibi1993second}. For example, the Hessian of a cost function provides information on how small weight changes affect the cost. Connections in a trained network that induce insignificant changes in the cost function were removed from the network. In \cite{han2015learning, ishikawa1996structural, collins2014memory, li2016pruning, zhou2016less}, the importance of weights in a trained network was evaluated using the $\ell_2$, $\ell_1$, and $\ell_0$ norms of the node weights or distances between the weights \cite{ayinde2018building}. Then, the less important connections between nodes were removed from the trained network based on the evaluated measures. To encourage a network to have node weights that result in smaller measures, regularization by the $\ell_2$, $\ell_1$, and $\ell_0$ norms of the node weights is used during training. Many pruning approaches use regularization as a function of node weights. As a result, some connections in dense layers and some filter coefficients in convolutional layers have small values. Removing a node from a network entirely is not straightforward for dense layers and is difficult for convolutional layers. 
Examples of pruning with removed connections and filter coefficients are shown in Fig. \ref{fig:schematicspruning} (a), where pruned connections and pruned filter coefficients are denoted by red lines and blocks, respectively. The network is trained to perform a task utilizing all the available nodes. Because nodes in a network share the workload \cite{baldi1989neural}, removing a node---even one with a smaller measure of importance---from a trained network will degrade the network's performance. Consequently, pruning approaches typically retrain the pruned network to recover the performance loss from pruning. The proposed method uses regularization as a function of the sensitivity. As a result, removing zero-sensitivity nodes is straightforward because they are already disconnected by the end of the training. Examples of disconnected nodes for dense and convolutional layers are shown in Fig. \ref{fig:schematicspruning} (b), where the nodes with zero sensitivity, denoted with red lines, can be removed from a trained network. The network is trained to perform a task utilizing only the sensitive nodes; therefore, removing zero-sensitivity nodes has no effect on the network's performance because the network has already been trained to perform the task without them. 
\begin{figure}[!t] \centering \begin{minipage}{0.41\linewidth} \centering {\includegraphics[trim = 0 0 300 0, clip, width=\linewidth]{./Figures/network_dense_before.pdf}}% {\includegraphics[trim = 0 0 300 0, clip, width=\linewidth]{./Figures/network_conv_before.pdf}}% {\footnotesize (a)} \end{minipage}% \begin{minipage}{0.59\linewidth} \centering {\includegraphics[trim = 0 0 0 0, clip, width=\linewidth]{./Figures/network_dense_after.pdf}}% {\includegraphics[trim = 0 0 0 0, clip, width=\linewidth]{./Figures/network_conv_after.pdf}}% {\footnotesize (b)} \end{minipage}% \caption{Examples of removed connections in a network: (a) pruning approaches; (b) proposed method: top: dense layer; bottom: convolutional layer. Removed connections are indicated with red dotted lines. } \label{fig:schematicspruning} \end{figure} In \cite{jang2018deep}, networks with activation functions with nodewise variant slopes were introduced. Using this approach, the nodes with steeper slopes learn more important features, and vice versa. After training, the nodes with lower slopes are pruned from the trained network. The assignment of nodewise variant slopes to activation functions plays a role similar to that of the sensitivity layers presented in this paper; however, that study used a predefined set of values for the slopes regardless of the data. Because a predefined set of slopes does not reflect the actual data statistics, workload sharing still exists; hence, a performance loss occurs after pruning. Our proposed method can be viewed as an improvement of the work in \cite{jang2018deep} in which the slopes of the activation functions are learned from the data statistics by solving an optimization problem during the training. In \cite{he2017channel}, a layer similar to the sensitivity layer in our proposed network was introduced to a trained network. The variables in the added layer were then used for pruning. 
An optimization problem was constructed to determine which nodes could be removed from the trained network while still ensuring the network's performance. Again, however, because nodes in a trained network share the workload, node pruning---even when done through an optimization approach---degrades the network's performance. In contrast, the variables in the sensitivity layers in our approach are found during network training, thus avoiding the need for further pruning. In \cite{wen2016learning}, a group lasso of node weights is used as a regularization factor. The group lasso enforces groupwise sparsity. By defining all the coefficients in a filter as a group, a node can be effectively removed from a convolutional layer. By defining only a part of the filter coefficients as a group, filters with different support levels can be used in a convolutional layer. Consequently, different network architectures can be designed by defining different groups. Such a design requires multiple regularization parameters; however, the study did not address how to choose the regularization parameters to obtain an efficient architecture. In contrast, our approach uses regularization as a function of node sensitivity, which allows us to simply disconnect a node in both dense and convolutional layers. The regularization parameter is chosen using an L-curve to help find an efficient architecture. \section{Experiments and Discussions} \label{sec:ex} We first analyze a network with heterogeneous sensitivity using a simple autoencoder with Gaussian data and the MNIST dataset \cite{lecun1998gradient} in Sections \ref{ex:gaussian} and \ref{ex:ae}, respectively. The regularization parameter selection via the L-curve is discussed in Section \ref{sec:regularization} using the autoencoder with the MNIST dataset. 
Then, we find efficient architectures for deep networks with heterogeneous sensitivity using a CNN with the CIFAR-10 dataset \cite{krizhevsky2009learning}, using VGG \cite{simonyan2014very} and ResNet \cite{he2016deep} with the CK+ dataset \cite{lucey2010extended}, and using YOLO \cite{redmon2016you} with the VOC dataset \cite{Everingham15} in Sections \ref{ex:cnncifar} to \ref{ex:yolovoc}. We compare the performance and complexity of the proposed deep networks to those of pruned networks reported in the literature for LeNet \cite{lecun1998gradient} with the MNIST dataset, VGG and ResNet with the CIFAR-10 dataset, and ResNet with the ImageNet dataset \cite{deng2009imagenet} in Sections \ref{ex:lenetmnist} to \ref{ex:resnetimagenet}. \subsection{Autoencoder with Gaussian Data} \label{ex:gaussian} To understand how a network with sparse sensitivity variables is trained to perform a given task, consider a simple network in an autoregression setting. As an example, we use a network with two dense layers and linear activation. The sensitivity layer is implemented in the first layer. Then, the network operation can be written as follows: \begin{eqnarray} \mathbf{y} & = & \mathbf{A}\mathbf{x} \\ & = & \mathbf{W}_2\mathbf{S}_1\mathbf{W}_1\mathbf{x} \\ & = & \sum_{k=1}^{n_1} s^1_k \mathbf{w}^2_k (\mathbf{w}^1_k)^\mathsf{T} \mathbf{x}, \label{eq:linearcomb} \end{eqnarray} where $\mathbf{W}_1$ and $\mathbf{W}_2$ are the weight matrices. The matrix $\mathbf{S}_1$ is a diagonal matrix whose diagonal elements are the sensitivity variables $s^1_k$. The vectors $\mathbf{w}^2_k$ and $(\mathbf{w}^1_k)^\mathsf{T}$ are the $k$th column and row of $\mathbf{W}_2$ and $\mathbf{W}_1$, respectively. The network operation is written as a linear combination of $\mathbf{w}^2_k (\mathbf{w}^1_k)^\mathsf{T}$. The weights for the linear combination are given by the sensitivity variable $s^1_k$. 
The contribution of $\mathbf{w}^2_k (\mathbf{w}^1_k)^\mathsf{T}$ with small $s^1_k$ values to the operation of the network is small, and vice versa. We find the columns of the matrices $\mathbf{W}_1$, $\mathbf{W}_2$ and the diagonal matrix $\mathbf{S}_1$ through the optimization problem in \eqref{eq:optimization2}. The sensitivity vector $\mathbf{s}_1$ obtained by the optimization will have many zero elements because sparsity is enforced. Without loss of generality, let the elements of $\mathbf{s}_1$ be sorted such that the $(K+1)$th to $n_1$th elements are zero. The contribution of $\mathbf{w}^2_k (\mathbf{w}^1_k)^\mathsf{T}$ with $s^1_k=0$ is zero. Hence, we can remove those terms from the linear combination in \eqref{eq:linearcomb}. Then, we approximate the network's operation by \begin{equation} \mathbf{A} \approx \sum_{k=1}^K s^1_k \mathbf{w}^2_k (\mathbf{w}^1_k)^\mathsf{T}. \end{equation} Experiments are performed with 16, 12, and 8 inputs following a zero-mean Gaussian distribution. We consider cases where some inputs are correlated with large variances and the rest of the inputs are independent with small variances. The large and small variances are 1.0 and 0.0001, and the correlation coefficient is 0.9. The number of hidden nodes, $n_1$, is set to 16, 12, and 8. The autoencoders are trained via \eqref{eq:optimization2} 50 times. Table \ref{tab:gaussian} shows the average numbers of nodes with nonzero and zero sensitivity. The network is trained using the regularization parameter $\lambda$ corresponding to the corner point of the L-curves. Examples of the L-curves are shown in Fig. \ref{fig:lcurvegaussian}. The average number of sensitive nodes, with nonzero sensitivity, is close to the number of inputs with large variances. 
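The decomposition in \eqref{eq:linearcomb} and the rank-$K$ approximation above can be checked numerically; the following Python sketch (with random toy matrices, not a trained autoencoder) verifies that $\mathbf{A} = \mathbf{W}_2\mathbf{S}_1\mathbf{W}_1$ equals the sensitivity-weighted sum of outer products, and that dropping the zero-sensitivity terms leaves $\mathbf{A}$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 4, 6                      # input size and hidden width (toy)
W1 = rng.standard_normal((n1, n0))
W2 = rng.standard_normal((n0, n1))
s = np.array([1.3, 0.0, 0.7, 0.0, 0.0, 0.4])   # sparse sensitivities

A = W2 @ np.diag(s) @ W1           # A = W2 S1 W1
# Same operator as the linear combination sum_k s_k w2_k w1_k^T ...
A_sum = sum(s[k] * np.outer(W2[:, k], W1[k, :]) for k in range(n1))
# ... and the zero-sensitivity terms can simply be dropped.
A_kept = sum(s[k] * np.outer(W2[:, k], W1[k, :])
             for k in range(n1) if s[k] != 0)
assert np.allclose(A, A_sum) and np.allclose(A, A_kept)
```

Only the three nonzero-sensitivity terms survive, which is exactly why zero-sensitivity nodes can be removed from the trained network without changing its output.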
The proposed network with heterogeneous sensitivity trained via the optimization problem in \eqref{eq:optimization1} operates similarly to principal component analysis \cite{jolliffe2011principal}---it represents the inputs through a small number of sensitive, or principal, nodes. \begin{table}[t!] \centering \caption{Average Numbers of Sensitive Nodes Determined by Constrained Optimization using Gaussian Dataset} \label{tab:gaussian} \begin{tabular}{cc|c|cc} \hline \# of inputs & \# of inputs & \# of hidden & \# of nodes & \# of nodes \\ with & with & nodes & with & with \\ $\sigma=1.0$ & $\sigma=0.01$ & & $s \neq 0$ & $s = 0$ \\ \hline 8 & 8 & 16 & 8.3 & 7.7 \\ 8 & 4 & 12 & 8.0 & 4.0 \\ 8 & 0 & 8 & 7.5 & 0.5 \\ \hline 4 & 12 & 16 & 4.0 & 12.0 \\ 4 & 8 & 12 & 4.1 & 7.9 \\ 4 & 4 & 8 & 4.1 & 3.9 \\ \hline \end{tabular} \end{table} \begin{figure}[t!] \centering \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/gaussian_lcurve8-eps-converted-to.pdf}}% \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/gaussian_lcurve4-eps-converted-to.pdf}}% \end{minipage}% \caption{Examples of the L-curve for the autoencoder using the Gaussian dataset, showing $E$ vs. $\sum_{l=1}^L \|\mathbf{s}^l\|_1$ for different values of $\lambda$; (a) a case where 8 of 12 inputs are correlated with high variances and the number of hidden nodes is 12; (b) a case where 4 of 12 inputs are correlated with high variances and the number of hidden nodes is 12. } \label{fig:lcurvegaussian} \end{figure} \subsection{Autoencoder with MNIST Dataset} \label{ex:ae} We analyze a simple network with heterogeneous sensitivity trained with a dataset. A large number of hidden nodes is used in the network, and the number of required hidden nodes is determined through training. We prepared a network with 784 hidden nodes in an auto-associative setting to reconstruct the inputs.
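The element-wise scaling performed by a sensitivity layer, and its contribution to the $\ell_1$ sparsity penalty, can be sketched in a few lines of pure Python (an illustration only; the experiments implement the layer with standard dense layers in Keras):

```python
# Illustrative sketch (not the paper's implementation): a sensitivity layer
# applies one trainable weight s_k per node after the activations, and
# contributes lambda * ||s||_1 to the training cost.

class SensitivityLayer:
    def __init__(self, n, lam):
        self.s = [1.0] * n        # sensitivity variables initialized to one
        self.lam = lam            # regularization parameter lambda

    def forward(self, h):
        # one input, one output, one weight per node
        return [sk * hk for sk, hk in zip(self.s, h)]

    def l1_penalty(self):
        # this layer's contribution to lambda * sum_l ||s^l||_1
        return self.lam * sum(abs(sk) for sk in self.s)

layer = SensitivityLayer(4, lam=1e-3)
layer.s = [0.9, 0.0, 0.2, 0.0]    # after training, many s_k end up zero
out = layer.forward([1.0, 5.0, -2.0, 3.0])
assert out == [0.9, 0.0, -0.4, 0.0]
```

Nodes whose sensitivity is driven to zero pass nothing forward and are, in effect, disconnected.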
The rectified linear unit (ReLU) function is used as the activation function for all the nodes. The sensitivity layer is added after the activation functions and implemented as a collection of 784 individual dense layers, each of which has one input, one output, and one weight. As explained in Section \ref{sec:implementation}, by implementing the sensitivity layer as a special type of dense layer, we can implement and train the network using standard functions in deep learning packages. Here, we used the Keras Python deep learning library for implementation and training. The sensitivity variables are initialized to one. We adopted the $\ell_1$ regularization in the sensitivity layer as a training option and trained the network using the MNIST dataset \cite{lecun1998gradient}. Optimizing the proposed networks requires the regularization parameter $\lambda$ that weights the deviation and sparsity penalties. We found the appropriate $\lambda$ value using the L-curve. Fig. \ref{fig:mnistlcurve} shows the L-curve for the network using the MNIST dataset, plotted using the deviation penalty $E$ vs. the sparsity $\sum_{l=1}^L \|\mathbf{s}^l\|_1$ at different $\lambda$ values. The linear function is used as the function $\phi$ in \eqref{eq:lcurve}, as the L shape is clearly observed with this choice. For small values of $\lambda$, for example $\lambda = 1.0\times 10^{-5}$, the deviation penalty dominates the cost function of the unconstrained optimization problem in \eqref{eq:optimization2}. Then, the solution to the optimization problem provides only a small deviation penalty but a large sparsity penalty. In contrast, for large values of $\lambda$, for example $\lambda = 1.0\times 10^{-1}$, the sparsity penalty dominates the cost function, providing only a small sparsity penalty but a large deviation penalty. A balance between the two penalties can be achieved using the value at the corner of the L-curve.
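One simple way to locate the corner of an L-curve programmatically (our own heuristic, not necessarily the procedure used in the paper) is to normalize both axes and take the point farthest from the chord joining the two endpoints:

```python
# Heuristic corner locator for an L-curve given as (sparsity, E) pairs:
# normalize both coordinates to [0, 1], then return the index of the point
# with the largest perpendicular distance to the endpoint-to-endpoint chord.
import math

def lcurve_corner(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    def norm(v, lo, hi):
        return (v - lo) / (hi - lo)
    pts = [(norm(x, min(xs), max(xs)), norm(y, min(ys), max(ys)))
           for x, y in points]
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    chord = math.hypot(x1 - x0, y1 - y0)
    def dist(p):
        # perpendicular distance from p to the line through the endpoints
        return abs((x1 - x0) * (y0 - p[1]) - (x0 - p[0]) * (y1 - y0)) / chord
    return max(range(len(pts)), key=lambda i: dist(pts[i]))

# Synthetic (sparsity, deviation) pairs with a sharp bend at the third point.
curve = [(1.0, 0.9), (1.1, 0.5), (1.2, 0.10), (4.0, 0.09), (10.0, 0.085)]
assert lcurve_corner(curve) == 2
```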
We used a $\lambda$ value of $1.0\times 10^{-3}$ for the optimization, which corresponds to the corner of the L-curve. \begin{figure}[t!] \centering \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/l_curve_mnist_autoencoder-eps-converted-to.pdf}}% \end{minipage}% \caption{L-curve for the autoencoder using the MNIST dataset, showing $E$ vs. $\sum_{l=1}^L \|\mathbf{s}^l\|_1$ for different values of $\lambda$; red: $\lambda = 1.0\times 10^{-5}$; blue: $\lambda = 1.0\times 10^{-3}$; green: $\lambda = 1.0\times 10^{-1}$. } \label{fig:mnistlcurve} \end{figure} Fig. \ref{fig:mnistsensitivity} shows examples of node sensitivity in networks trained with different values of the regularization parameter $\lambda$. The sensitivity variables, $s_i$, are shown in decreasing order for $\lambda = 1.0\times 10^{-5}, 1.0\times 10^{-4}, 1.0\times 10^{-3}$, and $ 1.0\times 10^{-2}$ in Fig. \ref{fig:mnistsensitivity} (a), (b), (c), and (d), respectively. As the $\lambda$ value increases, more nodes have small sensitivity. When $\lambda=1.0\times 10^{-2}$, many nodes can be removed from the network; however, the trained network fails to provide acceptable performance at this $\lambda$ value. In contrast, the network trained with $\lambda=1.0\times 10^{-3}$, shown in (c), corresponds to the $\lambda$ value at the corner of the L-curve. At this setting, the network provides an acceptable deviation penalty yet has as many zero-sensitivity nodes as possible. In effect, the nodes with zero sensitivity are disconnected. The efficient architecture of the one-hidden-layer autoencoder for the MNIST dataset thus has 75 hidden nodes. \begin{figure}[t!]
\centering \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/node_mnist_1e-05-eps-converted-to.pdf}}% {\footnotesize (a)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/node_mnist_1e-04-eps-converted-to.pdf}}% {\footnotesize (b)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/node_mnist_1e-03-eps-converted-to.pdf}}% {\footnotesize (c)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/node_mnist_1e-02-eps-converted-to.pdf}}% {\footnotesize (d)} \end{minipage}% \caption{Node sensitivity for an autoencoder trained with different $\lambda$ values using the MNIST dataset: (a) $\lambda = 1.0\times 10^{-5}$; (b) $\lambda = 1.0\times 10^{-4}$; (c) $\lambda = 1.0\times 10^{-3}$; and (d) $\lambda = 1.0\times 10^{-2}$. } \label{fig:mnistsensitivity} \end{figure} Fig. \ref{fig:mnistpca} shows the result of PCA on the images in the MNIST dataset. The average mean square error (MSE) values between the inputs and their reconstructions using the $k$ principal components are shown as a red line. The average MSE values between the inputs and their reconstructions using the $k$ most sensitive nodes are shown as a blue line for the network trained with $\lambda=1.0\times 10^{-3}$. This network has 75 nodes. When all 75 nodes are used to reconstruct the images, the MSE is lower than one would achieve by reconstructing the images using the 75 principal components. However, when fewer than 75 nodes are used, the MSE values degrade faster than the PCA results. The average MSE values between the inputs and their reconstructions for the network trained with $\lambda=6.0\times 10^{-3}$ are shown as a green line. This network has 46 nodes.
When all 46 nodes are used to reconstruct the images, the MSE is lower than one would achieve by reconstructing the images using the 46 principal components. This observation suggests that a network with fewer nodes should be designed by training a network with a higher regularization parameter value rather than by removing nodes from a trained network. \begin{figure}[t!] \centering \begin{minipage}{0.5\linewidth} \centering {\includegraphics[clip, width=\linewidth]{./Figures/mnist_pca_vs_network_R3_R6e3-eps-converted-to.pdf}}% \end{minipage}% \caption{ PCA of images in the MNIST dataset, average MSE of reconstructed images: red: when the first $k$ principal components are used; blue: when the first $k$ nodes in the network with heterogeneous sensitivity are used ($\lambda=1.0\times 10^{-3}$); green: when the first $k$ nodes in the network with heterogeneous sensitivity are used ($\lambda=6.0\times 10^{-3}$). } \label{fig:mnistpca} \end{figure} \subsection{Regularization Parameter Selection} \label{sec:regularization} In Section \ref{sec:lcurve}, an algorithm to determine the regularization parameter simultaneously with the training of a network is proposed. In this section, we train a network using Algorithm \ref{alg:training} and compare the network's performance and complexity to those of a network trained using the regularization parameter at the corner of the L-curve. The same network used in the previous section, the autoencoder with one hidden layer, is trained using Algorithm \ref{alg:training}. Fig. \ref{fig:simultaneous} shows the deviation and sparsity penalties at different values of the regularization parameter used during training by Algorithm \ref{alg:training}. The algorithm increases the regularization parameter until the deviation penalty during the training increases considerably. Then, the algorithm trains the network using the selected regularization parameter until the termination condition is met.
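The selection logic described above can be sketched as follows (assumed pseudologic with a mock training routine, not Algorithm \ref{alg:training} verbatim):

```python
# Sketch of simultaneous lambda selection (an assumption, not the paper's
# Algorithm 1 verbatim): keep increasing lambda, training briefly each time,
# until the deviation penalty E jumps by more than a tolerated factor; then
# keep the last acceptable lambda for the full training run.

def select_lambda(train_epochs, lambdas, jump_factor=2.0):
    """train_epochs(lam) trains for a few epochs and returns the deviation E."""
    chosen = lambdas[0]
    prev_E = train_epochs(chosen)
    for lam in lambdas[1:]:
        E = train_epochs(lam)
        if E > jump_factor * prev_E:     # deviation increased considerably
            break
        chosen, prev_E = lam, E
    return chosen

# Mock "short training": E stays small until lambda gets too large.
def mock_train(lam):
    return 0.001 + (100.0 * lam if lam > 5e-4 else 0.0)

lams = [1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3]
assert select_lambda(mock_train, lams) == 5e-4
```

With the mock cost, the sweep stops at the first $\lambda$ whose deviation penalty jumps and returns the preceding value, mirroring the behavior described in the text.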
The L-curve constructed by training the same network multiple times at different regularization parameters is also shown. It can be seen that the deviation and sparsity penalties of the trained network are close to the corner point of the L-curve. \begin{figure}[t!] \centering \begin{minipage}{0.5\linewidth} \centering {\includegraphics[clip, width=\linewidth]{./Figures/parameter_selection_autoencoder-eps-converted-to.pdf}}% \end{minipage}% \caption{ Simultaneous regularization parameter selection and network training: red: L-curve constructed by Algorithm 1; blue: L-curve constructed by training the same network multiple times at different regularization parameters. } \label{fig:simultaneous} \end{figure} The regularization parameter selected during training by Algorithm \ref{alg:training} is $5.0\times 10^{-4}$, while the regularization parameter that corresponds to the corner of the L-curve is $1.0\times 10^{-3}$. The numbers of hidden nodes are 89 and 75, and the training losses are 0.0018 and 0.0009, for the network trained by Algorithm \ref{alg:training} and the one trained with the L-curve corner value, respectively. The results obtained by determining the regularization parameter via Algorithm \ref{alg:training} were thus close to those one would get by hand-selecting the parameter corresponding to the corner of the L-curve. We ran Algorithm \ref{alg:training} with four epochs for each updated value of $\lambda$, and the algorithm updated seven $\lambda$ values. After the $\lambda$ value is found, the network is trained for 268 epochs before the termination of the training. For comparison, the network with a fixed value of $\lambda$ is trained for 279 epochs. The simultaneous training and selection of $\lambda$ by Algorithm \ref{alg:training} requires 6.1\% more epochs than the training for a single value of $\lambda$.
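The reported 6.1\% overhead follows directly from the epoch counts:

```python
# Overhead check: 7 lambda updates x 4 epochs each, plus 268 epochs of final
# training, versus 279 epochs of training at a single fixed lambda.
epochs_alg1 = 7 * 4 + 268        # 296 epochs in total
epochs_fixed = 279
overhead = epochs_alg1 / epochs_fixed - 1.0
assert epochs_alg1 == 296
assert round(100 * overhead, 1) == 6.1
```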
\subsection{Deep CNN with CIFAR-10 Dataset} \label{ex:cnncifar} In this experiment, we consider a deep CNN with heterogeneous sensitivity for object classification using the CIFAR-10 dataset \cite{krizhevsky2009learning}. We prepared a network with four convolutional layers and two dense layers, adding sensitivity layers to all the convolutional layers and the first dense layer. The second dense layer is designed to perform object classification. The sensitivity layers for the convolutional layers are implemented as a collection of convolutional layers, each of which has one input node and one output node with a one-by-one filter. The sensitivity layers for the dense layers are implemented as a collection of dense layers, each of which has one input node, one output node, and one weight. We applied $\ell_1$ regularization in the sensitivity layers for training. For comparison, we trained a baseline CNN with the same number of nodes and layers and with the ReLU activation function using the same training set. Batch normalization is applied after the convolutional layers and dropout is applied after the first dense layer in both the proposed and the baseline networks. Fig. \ref{fig:cnnlcurve} shows the L-curve for the CNN. The regularization parameter selected during training by Algorithm \ref{alg:training} is $0.7\times 10^{-3}$, while the regularization parameter that corresponds to the corner of the L-curve is $0.8\times 10^{-3}$. Fig. \ref{fig:cnnsensitivity} shows the sensitivity of the nodes, in decreasing order, in each layer of the network trained with the selected regularization parameter. The sensitivity variables $s^l_i$ are sparse in all the layers and have many zero elements. Only the nodes with non-zero sensitivity need to be included in the efficient architecture. We used a thresholding approach that removes the nodes with sensitivity values numerically close to zero to find the efficient architecture.
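For the convolutional layers, the one-by-one sensitivity filter amounts to multiplying each feature map by its channel's sensitivity; a minimal pure-Python sketch (illustrative only, not the Keras implementation used in the experiments):

```python
# Illustrative sketch: a 1x1 per-channel sensitivity filter multiplies each
# feature map by its channel's sensitivity s_c.

def apply_channel_sensitivity(fmaps, s):
    # fmaps: list of channels, each an H x W list of lists
    return [[[s[c] * v for v in row] for row in fmaps[c]]
            for c in range(len(fmaps))]

fmaps = [[[1.0, 2.0], [3.0, 4.0]],    # channel 0
         [[5.0, 6.0], [7.0, 8.0]]]    # channel 1
out = apply_channel_sensitivity(fmaps, [0.5, 0.0])
assert out[0] == [[0.5, 1.0], [1.5, 2.0]]
assert out[1] == [[0.0, 0.0], [0.0, 0.0]]   # zero-sensitivity channel drops out
```

A channel whose sensitivity reaches zero contributes nothing downstream, which is why it can be removed from the efficient architecture.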
We ran Algorithm \ref{alg:training} for simultaneous regularization parameter selection with four epochs for each updated value of $\lambda$, and the algorithm updated five $\lambda$ values. After the $\lambda$ value is found, the network is trained for 215 epochs before the termination of the training. For comparison, the network with a fixed value of $\lambda$ is trained for 221 epochs. The simultaneous training and selection of $\lambda$ by Algorithm \ref{alg:training} requires 6.3\% more epochs than the training for a single value of $\lambda$. \begin{figure}[t!] \centering \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/l_curve_cifar10_cnn-eps-converted-to.pdf}}% \end{minipage}% \caption{L-curve for the CNN with CIFAR-10 data, showing $E$ vs. $\sum_{l=1}^L \|\mathbf{s}^l\|_1$ at different values of $\lambda$; red: L-curve constructed by Algorithm 1; blue: L-curve constructed by training the same network multiple times at different regularization parameters. } \label{fig:cnnlcurve} \end{figure} \begin{figure}[t!]
\centering \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/CIFAR10_CNN_conv1-eps-converted-to.pdf}}% {\footnotesize (a)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/CIFAR10_CNN_conv2-eps-converted-to.pdf}}% {\footnotesize (b)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/CIFAR10_CNN_conv3-eps-converted-to.pdf}}% {\footnotesize (c)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/CIFAR10_CNN_conv4-eps-converted-to.pdf}}% {\footnotesize (d)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/CIFAR10_CNN_dense1-eps-converted-to.pdf}}% {\footnotesize (e)} \end{minipage}% \begin{minipage}{0.5\linewidth} \centering {\includegraphics[width=\linewidth]{./Figures/blank32.pdf}}% \end{minipage}% \caption{Sensitivity variables for the proposed CNN with CIFAR-10 data: (a) 1st conv layer; (b) 2nd conv layer; (c) 3rd conv layer; (d) 4th conv layer; and (e) 1st dense layer. } \label{fig:cnnsensitivity} \end{figure} Table \ref{tab:cnncifar} summarizes the complexity and performance of the proposed efficient CNN compared to the baseline CNN. The proposed network includes only $39.30\%$ of the nodes and $37.71\%$ of the weights, and requires $33.21\%$ of the FLOPs compared to the baseline CNN. The accuracies of the efficient and baseline CNN are $83.12\%$ and $83.20\%$, respectively. By using nodes with heterogeneous sensitivity, the CNN learns to classify objects with the same accuracy as the baseline but uses only an optimal number of nodes. We included the layer-wise number of nodes, weights, and FLOPs data in the supporting materials. \begin{table}[t!]
\centering \caption{Performance of Proposed Efficient CNN with CIFAR-10 data} \label{tab:cnncifar} \begin{tabular}{l|rrr} \hline & baseline & proposed & ratio \\ \hline \# of nodes & 1290 & 507 & 39.30\% \\ \# of weights & 3407498 & 1284942 & 37.71\% \\ \# of FLOPs & 1.93E+08 & 6.43E+07 & 33.21\% \\ \hline accuracy & 83.20\% & 83.12\% & 99.90\% \\ \hline \end{tabular} \end{table} \subsection{VGG and ResNet with CK+ Dataset} \label{ex:vggresnetck} A trained network can be transferred to form a basis for the design of a network intended for another task. In this section, we added the sensitivity layers to transferred networks to find efficient architectures for a different task. We tested transferred VGG \cite{simonyan2014very} and ResNet \cite{he2016deep} for facial expression recognition using the CK+ dataset \cite{lucey2010extended} to classify input facial images into seven emotions: \begin{equation} \{\hbox{anger, contempt, disgust, fear, surprise, happiness, sadness}\}. \nonumber \end{equation} We selected 325 sequences of 118 subjects that are classified as displaying one of the seven emotions. The so-called ``apex frames'' that occur at the peak of the expression were collected as labeled facial images. The network was trained and tested using the ten-fold cross-validation protocol. The VGG-16 and ResNet-56 networks with heterogeneous sensitivity were prepared by adding the sensitivity layers after all the convolutional and dense layers. Batch normalization and dropout are used in the networks. The networks were trained using the CK+ dataset. We determined the regularization parameter $\lambda$ using the L-curve as described earlier. After the training, we included only the nodes with non-zero sensitivity in the efficient architecture. For comparison, the baseline VGG-16 and ResNet-56 are trained using the same training set.
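The ratio columns in these complexity tables are simple quotients of the proposed and baseline counts; as a quick check with the CNN numbers from Table \ref{tab:cnncifar}:

```python
# Sanity check of the ratio column in the CNN comparison table: each ratio
# is proposed / baseline. (The FLOPs ratio, 33.21%, is computed from the
# unrounded FLOP counts, so it is not recomputed here.)
nodes = 507 / 1290
weights = 1284942 / 3407498
assert round(100 * nodes, 2) == 39.30
assert round(100 * weights, 2) == 37.71
```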
Table \ref{tab:vggck} summarizes the complexity and performance of the proposed efficient VGG-16 compared to the baseline VGG-16. The efficient network includes only $12.41\%$ of the nodes and $1.11\%$ of the weights, and requires $3.07\%$ of the FLOPs compared to the baseline VGG-16. The accuracies of the efficient and baseline VGG-16 are $97.85\%$ and $97.89\%$, respectively. Table \ref{tab:resnetck} summarizes the complexity and performance of the efficient ResNet-56 compared to the baseline ResNet-56. The proposed efficient network includes only $12.64\%$ of the nodes and $1.94\%$ of the weights, and requires $17.53\%$ of the FLOPs compared to the baseline ResNet-56. The accuracies of the efficient and baseline ResNet are $96.03\%$ and $96.77\%$, respectively. By using nodes with heterogeneous sensitivity, the transferred VGG and ResNet learn to classify facial expressions with the same accuracy as the baseline but use only a small number of nodes. We included the node-wise sensitivity variables and the number of nodes, weights, and FLOPs data in the supporting materials. \begin{table}[t!] \centering \caption{Complexity and Performance of Proposed Efficient VGG-16 with the CK+ Dataset} \label{tab:vggck} \begin{tabular}{l|rrr} \hline & baseline & proposed & ratio \\ \hline \# of nodes & 4743 & 589 & 12.41\% \\ \# of weights & 15767367 & 174327 & 1.11\% \\ \# of FLOPs & 1.25E+09 & 3.85E+08 & 3.07\% \\ \hline accuracy & 97.89\% & 97.85\% & 100.00\% \\ \hline \end{tabular} \end{table} \begin{table}[t!]
\centering \caption{Performance of Proposed Efficient ResNet with the CK+ Dataset} \label{tab:resnetck} \begin{tabular}{l|rrr} \hline & baseline & proposed & ratio \\ \hline \# of nodes & 27207 & 3441 & 12.64\% \\ \# of weights & 31872135 & 619369 & 1.94\% \\ \# of FLOPs & 1.33E+09 & 2.86E+08 & 17.53\% \\ \hline accuracy & 96.77\% & 96.03\% & 99.2\% \\ \hline \end{tabular} \end{table} \subsection{YOLO with VOC Dataset} \label{ex:yolovoc} We prepared a YOLO network with heterogeneous sensitivity by adding sensitivity layers after all the convolutional layers. We simplified the object detection task by considering only four object classes: car, motorbike, pedestrian, and people. The network was trained using the VOC dataset. The regularization parameter $\lambda$ was found using the L-curve as described previously. Table \ref{tab:yolovoc} summarizes the complexity and performance of the proposed efficient YOLO compared to the baseline YOLO. The efficient network includes only $56.69\%$ of the nodes and $26.18\%$ of the weights, and requires $43.12\%$ of the FLOPs compared to the baseline YOLO. The accuracies of the efficient and baseline YOLO in terms of the mean average precision (mAP) are 69.2 and 70.9, respectively. By using nodes with heterogeneous sensitivity, the YOLO network learns to detect the specified objects with almost the same accuracy as the baseline but uses only a small number of nodes. We included the node-wise sensitivity variables and the number of nodes, weights, and FLOPs data in the supporting materials. \begin{table}[t!]
\centering \caption{Performance of Proposed Efficient YOLO with the VOC Dataset} \label{tab:yolovoc} \begin{tabular}{l|rrr} \hline & baseline & proposed & ratio \\ \hline \# of nodes & 10381 & 5885 & 56.69\% \\ \# of weights & 50594061 & 13244927 & 26.18\% \\ \# of FLOPs & 1.54E+10 & 6.66E+09 & 43.12\% \\ \hline mAP & 70.9 & 69.2 & 97.60\% \\ \hline \end{tabular} \end{table} \subsection{LeNet with MNIST Dataset} \label{ex:lenetmnist} We compared the complexity and performance of an efficient network found by the proposed method to the pruning results reported in \cite{han2015learning, zhou2016less, yang2015deep, lebedev2016fast, srinivas2015data, jang2018deep}. The LeNet-5 \cite{lecun1998gradient} network with heterogeneous sensitivity was prepared and trained using the MNIST dataset. The ratios of the weights remaining after the optimization and the classification errors are reported in Table \ref{tab:compMNIST}. The efficient network designed by the proposed method provides the highest performance with the least computational complexity. \begin{table}[t!]
\centering \caption{Comparison to Pruning Methods Reported with LeNet on MNIST Dataset} \label{tab:compMNIST} \begin{tabular}{clrrr} \hline & & \# of FLOPs & \# of weights & error \\ \hline \cite{han2015learning} & baseline & & & 0.80\% \\ & pruned & 16.0\% & 8.24\% & 0.77\% \\ \hline \cite{zhou2016less} & baseline & & & 0.73\% \\ & pruned & N/A & 10.25\% & 0.76\% \\ \hline \cite{yang2015deep} & baseline & & & 0.87\% \\ & pruned & 12.1\% & 9.01\% & 0.71\% \\ \hline \cite{lebedev2016fast} & baseline & & & N/A \\ & pruned & N/A & 8.33\% & 1.70\% \\ \hline \cite{srinivas2015data} & baseline & & & 0.94\% \\ & pruned & 16.5\% & 16.00\% & 1.65\% \\ \hline \cite{jang2018deep} & baseline & & & 0.81\% \\ & pruned & 78.6\% & 6.73\% & 0.71\% \\ \hline proposed & baseline & & & 0.80\% \\ & proposed & 16.0\% & 6.40\% & 0.69\% \\ \hline \end{tabular} \end{table} \subsection{VGG and ResNet with CIFAR-10 Dataset} \label{ex:vggresnetcifar} We compared the complexity and performance of the proposed efficient networks with the pruning results reported in \cite{li2016pruning, wen2016learning, jang2018deep, ayinde2018building}. The VGG-16, ResNet-56, and ResNet-20 networks with heterogeneous sensitivity were prepared and trained using the CIFAR-10 dataset. Batch normalization and dropout are used in the networks. The ratios of the weights remaining after the optimization and the classification errors are reported in Table \ref{tab:cifarcomp}. The efficient networks designed by the proposed method provide the same or higher performance but with less computational complexity. \begin{table}[t!]
\centering \caption{Comparison to Pruning Methods Reported with VGG and ResNet on the CIFAR-10 Dataset} \label{tab:cifarcomp} \begin{tabular}{cllrrr} \hline & & & \# of FLOPs & \# of weights & error \\ \hline \cite{li2016pruning} & VGG-16 & baseline & & & 6.75\% \\ & & pruned & 65.6\% & 36.0\% & 6.60\% \\ \hline \cite{jang2018deep} & VGG-16 & baseline & & & 6.67\% \\ & & pruned & 67.8\% & 29.6\% & 6.17\% \\ \hline \cite{ayinde2018building} & VGG-16 & baseline & & & \\ & & pruned & 59.4\% & & 6.33\% \\ \hline proposed & VGG-16 & baseline & & & 6.67\% \\ & & proposed & 52.9\% & 16.1\% & 6.52\% \\ \hline \cite{li2016pruning} & ResNet-56 & baseline & & & 6.96\% \\ & & pruned & 72.6\% & 86.3\% & 6.94\% \\ \hline \cite{he2017channel} & ResNet-56 & baseline & & & 7.20\% \\ & & pruned & 50.0\% & N/A & 8.20\% \\ \hline \cite{jang2018deep} & ResNet-56 & baseline & & & 6.82\% \\ & & pruned & 18.4\% & 26.2\% & 6.75\% \\ \hline \cite{ayinde2018building} & ResNet-56 & baseline & & & \\ & & pruned & 72.6\% & 76.4\% & 6.88\% \\ \hline proposed & ResNet-56 & baseline & & & 6.82\% \\ & & proposed & 45.4\% & 40.2\% & 6.70\% \\ \hline \cite{wen2016learning} & ResNet-20 & baseline & & & 8.82\% \\ & & pruned & N/A & N/A & 7.51\% \\ \hline proposed & ResNet-20 & baseline & & & 8.21\% \\ & & proposed & 66.2\% & 72.3\% & 6.76\% \\ \hline \end{tabular} \end{table} \subsection{ResNet with the ImageNet Dataset} \label{ex:resnetimagenet} The ResNet-50 network with heterogeneous sensitivity was prepared and trained using the ImageNet dataset. Batch normalization and dropout are used in the network. With the ImageNet dataset, the network converges very slowly, with inconsistent improvement of the penalty function during training. The regularization parameter is therefore selected empirically, and the network is trained with the selected parameter.
The ratios of the weights remaining after the optimization and the classification errors are reported in Table \ref{tab:compIMAGENET} where comparisons to the pruning results reported in \cite{luo2017thinet, luo2017entropy, zhuang2018discrimination, xu2018hybrid, he2017channel} are also given. The efficient networks designed by the proposed method provide the same or higher performance but with less computational complexity. \begin{table}[t!] \centering \caption{Comparison to Pruning Methods Reported with ResNet on The ImageNet Dataset} \label{tab:compIMAGENET} \begin{tabular}{clrrr} \hline & & \# of FLOPs & \# of weights & accuracy \\ \hline \cite{luo2017thinet} & baseline & & & 72.88\% \\ & pruned (ThiNet-70) & 63.2\% & 63.3\% & 72.04\% \\ & pruned (ThiNet-50) & 44.2\% & 48.4\% & 71.01\% \\ & pruned (ThiNet-30) & 28.4\% & 33.9\% & 68.42\% \\ \hline \cite{luo2017entropy} & baseline & & & 72.88\% \\ & pruned (Pruned-90) & 92.7\% & 93.5\% & 73.56\% \\ & pruned (Pruned-75) & 82.6\% & 84.9\% & 72.89\% \\ & pruned (Pruned-50) & 65.2\% & 68.0\% & 70.84\% \\ \hline \cite{zhuang2018discrimination} & baseline & & & 76.01\% \\ & pruned (DCP) & 44.4\% & 48.5\% & 73.20\% \\ & pruned (WM+) & 44.4\% & 48.5\% & 72.89\% \\ & pruned (WM) & 44.4\% & 48.5\% & 70.84\% \\ \hline \cite{xu2018hybrid} & baseline & & & 76.01\% \\ & pruned & N/A & 67.3\% & 74.87\% \\ \hline \cite{he2017channel} & baseline & & & 75.30\% \\ & pruned & N/A & 64.0\% & 72.30\% \\ \hline Proposed & baseline & & & 75.06\% \\ & proposed & 68.6\% & 56.3\% & 75.03\% \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} In this study, we trained networks consisting of nodes with heterogeneous sensitivity to perform a given task using only a small number of sensitive nodes. The training is formulated as a constrained optimization problem whose parameter is found simultaneously during the training based on the L-curve. 
By introducing sensitivity layers that assign sensitivity variables to nodes, we were able to implement and train a network without using a complicated optimization tool. The networks trained in this manner possess a small and computationally efficient network architecture and simultaneously meet the performance criteria. In our experiments, the efficient networks designed by the proposed method provide the same or higher performance but with far less computational complexity. The proposed method can be used to determine the efficient network architectures of deep networks. \bibliographystyle{IEEEtran}
\section{Introduction} Despite decades of research, planet formation is still not fully understood. At the point of formation, the protoplanetary disk is thought to contain submicron dust grains. The formation of planetesimals out of these grains is one of the more uncertain aspects of the theory of planet formation, since the growth of large dust particles by successive sticking collisions is very difficult to obtain in realistic models. Such simple particle aggregation has been shown to encounter numerous obstacles, such as the electrostatic barrier \citep{2011ApJ...731...96O}, the bouncing barrier \citep{2010A&A...513A..57Z}, the fragmentation barrier \citep{1993Icar..106..151B}, and the radial drift barrier \citep{1977MNRAS.180...57W}. The relative velocities of dust particles, which are regulated by their interaction with the gas, have been found to be too high to allow sticking of aggregates as small as millimeters. On the other hand, even if there were a way to grow meter-sized bodies, they would be lost inside the evaporation line within a few hundred years due to radial drift. Some solutions to these problems have been suggested in recent years. The sticking properties of ices are claimed to be much better than those of silicates \citep{2009ApJ...702.1490W}, leading to ice grains capable of forming highly porous aggregates. Including this property in models has been shown to let the particles avoid the radial drift barrier \citep{2012ApJ...752..106O}. However, the collisional properties of ice particles are still rather uncertain because of the difficulties in conducting laboratory experiments. There is much more laboratory data concerning the collisional physics of silicates \citep{2010A&A...513A..56G}, although even the silicate collisional properties remain an extensively discussed topic.
Recent experiments have revealed a smooth transition between sticking and bouncing behavior \citep{2008ApJ...675..764L, 2012Icar..218..688W, 2013arXiv1302.5532K}, but numerical molecular dynamics simulations predict no or significantly less bouncing \citep{2011ApJ...737...36W, 2013A&A...551A..65S}. At still higher collision velocities, particles are expected to fragment, but if the mass ratio is high enough, growth via mass transfer is also a possibility \citep{2005Icar..178..253W, 2009MNRAS.393.1584T, 2010ApJ...725.1242K, 2011ApJ...736...34B}. The relative velocity of collisions between aggregates is usually calculated on the basis of a mean turbulence model, in which two particles with given masses $m_1$ and $m_2$ always collide at the same relative velocity $\Delta v(m_1,m_2)$. Considering a probability distribution function $P(\Delta v|m_1,m_2)$ for the collision velocity, combined with sweep-up by mass transfer, is another possibility for overcoming the growth barriers \citep{2012A&A...544L..16W, 2013ApJ...764..146G}. However, as the exact nature of the probability distribution is unknown, it is not certain whether this effect can indeed allow planetesimal growth. The combined action of hydrodynamic and gravitational instabilities \citep{2000Icar..148..537G,2007Natur.448.1022J} is an alternative scenario for the formation of planetesimals. The radial drift barrier can be overcome by taking into account local disk inhomogeneities that change the pressure gradient (pressure bumps) and suppress the inward drift of bodies \citep{1972fpp..conf..211W, 1995A&A...295L...1B, 1997Icar..128..213K, 2007MNRAS.375..500A, 2007ApJ...671.2091G, 2007ApJ...664L..55K}. Modeling planet formation is difficult not only because of the growth barriers at the first stage of the process. The formation of a single planet covers about 40 orders of magnitude in mass, which no single traditional method can handle, because of fundamental differences in the physics involved at its different stages.
In the small particle regime, there are too many independent particles for an individual treatment. The coagulation is driven by random collisions. Therefore, statistical methods are used to model the evolution of the fine dust medium \citep{1980Icar...44..172W,1981Icar...45..517N,2008A&A...480..859B,2010A&A...513A..79B}. In this approach, the dust medium is followed using the grain distribution function $f_{\rm{d}}(m,r,t)$, giving the number of dust particles of particular properties at a given time. In the big body regime, the evolution is driven by gravitational dynamics. That forces us to treat the objects individually using N-body methods \citep{2000Icar..143...15K}. A connection between the two methods requires an ad hoc switch. Such a solution has been implemented by \citet{1991Icar...92..147S}, \citet{2006AJ....131.2737B} and recently \citet{2011arXiv1105.6094G}. In addition to the statistical methods mentioned above, there are also Monte Carlo methods used in the small particle regime \citep{1975MNRAS.170..541G, 2007A&A...461..215O}. In recent years, a new kind of algorithm has been developed: a Monte Carlo algorithm with the representative particle approach \citep{2008A&A...489..931Z}. In this method, the huge number of small particles is handled by grouping the (nearly) identical bodies into swarms and representing each swarm by a single representative particle. Instead of evolving the distribution function $f_{\rm{d}}(m,r,t)$, the method samples and reproduces it with the use of the representative bodies. This approach should allow a much smoother and more natural transition to the N-body regime. Indeed, this kind of approach is already used in N-body codes. \citet{2010AJ....139.1297L} applied a superparticle approach to treat planetesimals. They showed that taking the gravitational interactions into account is very important in the case of kilometer-sized bodies.
The gravitational interplay can lead to a redistribution of the material and change accretion rates in the protoplanetary disk. With the work presented in this paper, we make the very first step toward a new computational model that will connect the small-scale dust growth to the large-scale planet formation. We develop a 2D Monte Carlo dust evolution code accounting for both drift and coagulation of the dust particles. We expect to extend this method in the future by adding the gravitational interactions between the bodies. This paper is organized as follows. We introduce our numerical model in Sect.\ \ref{sub:model}. We demonstrate some tests of the code in Sect.\ \ref{sub:tests}. In Sect.\ \ref{sub:1D}, we show results obtained with the 1D version of our code and compare them with results presented by \citet{2011A&A...534A..73Z}. In Sect.\ \ref{sub:2D}, we present results obtained with the 2D version of the code, showing that applying a disk model including a steep variation of turbulent strength near the snow line \citep{2007ApJ...664L..55K} and a collisional model that takes the mass transfer effect into account \citep{2012A&A...540A..73W} allows us to overcome the bouncing barrier and turn a limited number of particles into planetesimals on a timescale of approximately 10$^5$ yrs. We provide a discussion of the presented results as well as conclusions in Sect.~\ref{sub:last}. \section{The numerical model}\label{sub:model} We develop a 2D Monte Carlo dust evolution code, able to resolve the protoplanetary disk structure in the radial and vertical dimensions. We assume that the disk is cylindrically symmetric, thereby ignoring the azimuthal dependence. We use an analytical description for the gas disk. The dust is treated using the representative particle approach. The code is a further development of the work presented by \citet{2011A&A...534A..73Z}. The code is written in Fortran 90 and is parallelized using OpenMP directives.
In each time step the code performs the following steps: \begin{enumerate} \item{Advection velocities of the dust particles are determined taking their current properties and positions into account.} \item{The code time step is calculated on the basis of the advection velocities and the existing grid, following the Courant condition.} \item{Advection of the particles is performed in both the radial and vertical direction. The solids undergo radial drift, vertical settling and turbulent diffusion.} \item{The new grid is established according to the updated positions of the particles, using the adaptive grid routine (see Fig.~\ref{fig:ag}).} \item{Collisions between the particles are performed in each cell by the Monte Carlo algorithm. The particle properties are updated.} \item{The output is saved when required.} \end{enumerate} A more detailed description of the approach used in the code can be found in the following sections. \subsection{Gas description} The gas structure is implemented in the form of analytical expressions for the gas surface and volume density $\Sigma_{\rm{g}}(r)$ and $\rho_{\rm{g}}(r,z)$, pressure $P_{\rm{g}}(r)$, temperature $T_{\rm{g}}(r)$ and turbulent viscosity $D_{\rm{g}}(r)$. For now we assume that the gas in the disk does not evolve, although this is not a fundamental restriction, and the gas evolution could be implemented without severe changes in the code structure. In a first-order approximation, the time evolution can be implemented analytically by expanding the description of the gas properties from a function of space $f_{\rm{g}}(r)$ to a function of space and time $f_{\rm{g}}(r,t)$. \subsection{Dust description}\label{sub:dustd} To describe the dust, we use the approach based on \citet{2008A&A...489..931Z}. We follow the “lives” of $n$ representative particles, which are supposed to be a statistical representation of the $N$ physical particles present in the examined domain. Commonly $n \ll N$.
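To make the swarm bookkeeping of the representative particle approach concrete, here is a minimal sketch (in Python for readability; the production code is written in Fortran 90, and all numerical values below are arbitrary illustrative choices):

```python
# Minimal sketch of the representative-particle bookkeeping.
# All values are arbitrary and purely illustrative.

M_dust = 1.0e30              # total dust mass in the domain [g]
n = 4                        # number of representative particles (tiny, for clarity)
M_swarm = M_dust / n         # swarm mass: equal for all swarms, constant in time

m = [1.0e-12] * n            # masses of the representative particles [g]
N = [M_swarm / m_i for m_i in m]   # physical particles per swarm, N_i = M_swarm / m_i

# A sticking collision grows the representative particle; N_i must then
# drop so that M_swarm = N_i * m_i stays constant:
m[0] += m[1]                 # particle 0 sweeps up one particle from swarm 1
N[0] = M_swarm / m[0]
```

Note that only the representative particle changes in a collision, so the total represented mass $n\,M_{\rm swarm}$ is conserved by construction.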
For a typical protoplanetary disk, with a mass of 0.01~$M_{\odot}$ and a dust-to-gas ratio of 0.01, consisting of 1 $\mu$m size dust grains, we would have $N\approx10^{42}$. For computational feasibility we would have e.g. $n=10^5$, meaning that each representative particle $i$ represents $N_i\approx10^{37}$ physical particles. All of the $N_i$ physical particles represented by a single representative particle $i$ share identical properties: for now these are the mass $m_i$ and the location in the disk $(r_i,z_i)$. As we impose axial symmetry, we do not include the azimuthal position. We assume that the physical particles belonging to one swarm are homogeneously distributed along an annulus of given location $r_i$ and height above the midplane $z_i$. The total mass of physical particles contained in one swarm, $M_{\rm{swarm}}=N_im_i$, is identical for every representative particle and does not change with time. This means that $N_i$ has to drop when the particle mass $m_i$ grows. This is not a physical effect, just a statistical one. See \citet{2008A&A...489..931Z} for details. With the representative particle approach, it is relatively easy to add further dust properties, in particular the internal structure of the aggregates, which was shown to be important by \citet{2007A&A...461..215O}. We leave the implementation of the porosity for further work. When performing the advection, we assume that all of the physical particles in a swarm undergo the same change of position $(r_i,z_i)$ and that after the shift they are still uniformly distributed along the designated annulus. However, when we consider the collisions, we set up a numerical grid in order to account for the fact that only particles that are physically close can collide. In this case, we assume that the particles are homogeneously distributed inside a grid cell (see Sect.\ \ref{sub:coll} for a description of the grid).
This assumption is required by the method used to investigate the collisional evolution of the aggregates \citep{2008A&A...489..931Z}. The difference between the locations assumed in the two cases is most often not important and can be treated as a kind of systematic uncertainty. \subsection{Advection of dust particles} The location of a representative particle changes because of radial drift and vertical settling as well as turbulent diffusion. The main particle characteristic determining its behavior with respect to the gas is the so-called Stokes number $\rm{St}$. It is defined as \begin{equation}\label{stokes} {\rm{St}} = t_{\rm{s}} \Omega_{\rm{K}}, \end{equation} where $\Omega_{\rm{K}}$ denotes the Kepler frequency and $t_{\rm{s}}$ is the so-called stopping time of the particle. The Stokes number can be treated as an indicator of the particle-gas coupling strength. If $\rm{St} \ll 1$, the particle is well coupled to the ambient gas and its motion is fully dependent on the motion of the gas. On the other hand, particles with $\rm{St} \gg 1$ are practically independent of the gas. The stopping time of the particle $t_{\rm{s}}$ sets the timescale on which the particle adjusts its velocity to the velocity of the surrounding gas. The exact expression that we use to compute $t_{\rm{s}}$ depends on the particle radius $a$. The ratio of the mean free path of the gas $\lambda_{\rm{mfp}}$ and the particle size $a$ is called the Knudsen number~$\rm{Kn}$: \begin{equation}\label{Kn} {\rm{Kn}} = \frac{\lambda_{\rm mfp}} {a}. \end{equation} If a particle's Knudsen number is $\rm{Kn} > 4\slash9$, the particle is in the Epstein drag regime and its stopping time is given by \citep{1977MNRAS.180...57W} \begin{equation}\label{stEp} t_{\rm{s}}^{\rm{Ep}} = \frac{a \rho_{\rm{p}}} { v_{\rm{th}} \rho_{\rm{g}}}, \end{equation} where $\rho_{\rm{p}}$ is the internal density of the particle and $v_{\rm{th}}$ is the thermal velocity of the gas.
The latter is expressed as $v_{\rm{th}}=\sqrt{8 k_{\rm{B}} T_{\rm{g}} \slash \pi m_{\rm{g}}}$, where $k_{\rm{B}}$ is the Boltzmann constant, $T_{\rm{g}}$ is the gas temperature and $m_{\rm{g}}$ is the mass of a gas molecule. On the other hand, when $\rm{Kn} < 4\slash9$, the particle is in the Stokes regime. The Stokes regime is in general not homogeneous and is often divided into subregimes. The Reynolds number of the particle $\rm{Re}_{\rm{p}}$ defines which of the subregimes applies \citep{1977MNRAS.180...57W}. The $\rm{Re}_{\rm{p}}$ is specified as \begin{equation}\label{Re_p} {\rm{Re}}_{\rm{p}} = \frac{2a\Delta v_{\rm{pg}}}{\nu_{\rm{g}}}, \end{equation} with $a$ denoting the particle radius, $\Delta v_{\rm{pg}}$ the relative velocity between the particle and the gas, and $\nu_{\rm{g}}$ the molecular viscosity of the gas, which is expressed as $\nu_{\rm{g}}=v_{\rm{th}}\lambda_{\rm{mfp}}\slash2$. As long as $\rm{Re}_{\rm{p}} < 1$, the first Stokes regime applies. In our models $\rm{Re}_{\rm{p}} > 1$ translates into $a \gtrsim 10^4$ cm, which is larger than any particle size we obtain in the models presented in this paper. Thus, for now we do not include the other Stokes regimes. For particles with $\rm{Kn} < 4\slash9$ we assume \citep{1977MNRAS.180...57W} \begin{equation}\label{stSt} t_{\rm{s}}^{\rm{St}} = t_{\rm{s}}^{\rm{Ep}} \times \frac{4}{9}{\rm Kn}^{-1}. \end{equation} \paragraph{Radial drift} The radial drift of dust particles has two sources. One of them is the gas accretion onto the central star. The gas moves inwards and drags the dust particles with it. This phenomenon is stronger for small particles ($\rm{St} \ll 1$) and not important for big ones ($\rm{St} \gg 1$). The drift velocity caused by this effect can be expressed as \citep{2008A&A...480..859B} \begin{equation}\label{vdrift1} v_{\rm{d}}^{\rm{acc}} = \frac{v_{\rm{g}}^{\rm{r}}}{1+{\rm{St}}^2}, \end{equation} where $v_{\rm{g}}^{\rm{r}}$ denotes the accretion velocity of the gas.
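The drag-regime switch of Eqs.\ (\ref{stEp}) and (\ref{stSt}) can be sketched as follows (a Python illustration with our own function names and CGS inputs; it is not part of the production Fortran code):

```python
def stopping_time(a, rho_p, rho_g, v_th, lam_mfp):
    """Stopping time t_s of a spherical particle of radius a (CGS units).

    Epstein regime for Kn > 4/9 (Eq. stEp), first Stokes regime
    otherwise (Eq. stSt); the two expressions match at Kn = 4/9.
    """
    t_ep = a * rho_p / (v_th * rho_g)   # Epstein stopping time
    Kn = lam_mfp / a                    # Knudsen number (Eq. Kn)
    if Kn > 4.0 / 9.0:
        return t_ep
    return t_ep * (4.0 / 9.0) / Kn      # Stokes correction factor

def stokes_number(t_s, Omega_K):
    # Eq. (stokes): St = t_s * Omega_K
    return t_s * Omega_K
```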
We use a convention in which $v_{\rm{g}}^{\rm{r}} < 0$ indicates inward drift. The other effect is also related to the coupling of the solids to the gas, but here the radial drift is a result of the orbital motion. In a gas-free environment, solid particles would orbit the star with the Keplerian velocity $v_{\rm{K}}$, resulting from a balance between gravity and the centrifugal force. For the gas, however, the pressure gradient also needs to be considered. Therefore, the gas moves with a sub-Keplerian velocity. Because of the difference between the azimuthal velocities of gas and dust, the dust particles feel a constant head-wind. Interacting with the gas via the drag force, they lose angular momentum and thus drift inwards with velocity \citep{1977MNRAS.180...57W,2008A&A...480..859B}: \begin{equation}\label{vdrift2} v_{\rm{d}}^{\rm{drift}} = \frac{2v_{\eta}}{{\rm St}+{\rm St}^{-1}}. \end{equation} Hence, this effect is not significant for very small or very big dust particles, but for particles with $\rm{St}\approx1$ the drift velocity $v_{\rm{d}}^{\rm{drift}}$ can reach up to $30$ m s$^{-1}$ \citep{2008A&A...480..859B}. The maximum drift velocity $v_{\eta}$ can be expressed as \citep{2008A&A...487L...1B} \begin{equation}\label{veta} v_{\eta} = \frac{\partial_{\rm{r}} P_{\rm{g}}}{2 \rho_{\rm{g}} \Omega_{\rm{K}}}. \end{equation} The $v_{\eta}$ depends on the gas pressure gradient $\partial_{\rm{r}} P_{\rm{g}}$, which can be negative (in most of the standard disk models it is negative over the whole disk) or positive. If we find a disk model in which locally $\partial_{\rm{r}} P_{\rm{g}} > 0$ (a so-called {\it pressure bump}), we get outward radial drift of solids that leads to a significant local enhancement of the dust density. The total radial drift velocity is given by \begin{equation}\label{vdrifttot} v_{\rm{d}}^{\rm{r}} = v_{\rm{d}}^{\rm{acc}} + v_{\rm{d}}^{\rm{drift}}.
\end{equation} \paragraph{Vertical settling} The dust particles present in the protoplanetary disk settle towards the midplane due to the gravity of the central star. The settling velocity is regulated by the gas drag. It can be obtained from basic equations as \citep{2004A&A...421.1075D} \begin{equation}\label{vsvel} v_{\rm{d}}^{\rm{z}} = -z \Omega^2_{\rm{K}} t_{\rm{s}}, \end{equation} where $z$ is the height above the midplane. It can be rewritten using Eq.\ (\ref{stokes}) as \begin{equation}\label{vsvel2} v_{\rm{d}}^{\rm{z}} = -z \Omega_{\rm{K}} \rm{St}. \end{equation} For big particles, the velocity calculated from Eq.\ (\ref{vsvel2}) would be higher than the orbital velocity projected on the $z$ axis, so we restrict it to \begin{equation}\label{vsvelmin} v_{\rm{d}}^{\rm{z}} = -z \Omega_{\rm{K}} \min(0.5,\rm{St}), \end{equation} following e.g. \citet{2010A&A...513A..79B}. This description is not valid for big particles that are completely decoupled from the gas. These particles undergo orbital oscillations around the midplane. A direct integration of the equations of motion would need to be included in order to account for this effect. We leave it for further work. \paragraph{Turbulent diffusion} If there were no other effects in the disk, all the dust would eventually form an infinitely thin layer in the midplane. However, we assume that there is turbulence present in the disk. We implement the effect of the turbulence on the spatial distribution of the particles in the same way as \citet{2010ApJ...723..514C} and \citet{2011A&A...534A..73Z}. We take the diffusion in both the vertical and radial direction into account. The turbulence generally smears out the density distribution (turbulent diffusion).
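The systematic (non-diffusive) advection velocities derived above, Eqs.\ (\ref{vdrifttot}) and (\ref{vsvelmin}), can be collected in a short sketch (again an illustrative Python fragment with hypothetical function names, not the production code):

```python
def radial_drift_velocity(St, v_gas_r, v_eta):
    """Total radial drift velocity, Eq. (vdrifttot).

    v_gas_r -- radial accretion velocity of the gas (< 0 means inward)
    v_eta   -- maximum drift velocity, Eq. (veta)
    """
    v_acc = v_gas_r / (1.0 + St**2)            # dragged along with accreting gas
    v_drift = 2.0 * v_eta / (St + 1.0 / St)    # head-wind induced drift
    return v_acc + v_drift

def settling_velocity(z, Omega_K, St):
    # Eq. (vsvelmin): settling toward the midplane, capped for big particles
    return -z * Omega_K * min(0.5, St)
```

At ${\rm St}=1$ the head-wind term reaches its maximum $|v_{\eta}|$, while both limits ${\rm St}\ll1$ and ${\rm St}\gg1$ suppress it, in agreement with Eq.\ (\ref{vdrift2}).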
If we take a point dust distribution, after a time $t$ it will become a Gaussian distribution with the half width $L$ (in 1D): \begin{equation}\label{sigma} L = L(t) = \sqrt{2D_{\rm{d}}t}, \end{equation} where $D_{\rm{d}}$ is the dust turbulent diffusion coefficient, which we can express as \begin{equation}\label{ddust} D_{\rm{d}} = \frac{D_{\rm{g}}}{\rm{Sc}}, \end{equation} where $\rm{Sc}$ is called the Schmidt number, and the gas diffusion coefficient $D_{\rm{g}}$ (turbulent viscosity) is assumed to have the form of the so-called $\alpha$ viscosity \citep{1973A&A....24..337S}: \begin{equation}\label{dgas} D_{\rm{g}} = \alpha c_{\rm{s}} H_{\rm{g}}. \end{equation} Here $\alpha$ is a parameter describing the efficiency of the angular momentum transport, with values typically much lower than $1$, $c_{\rm{s}}$ is the sound speed in the gas, and $H_{\rm{g}}$ is the gas pressure scale height, which is expressed as $H_{\rm{g}} = {c_{\rm{s}}}\slash{\Omega_{\rm{K}}}$. The Schmidt number is currently estimated as \citep{2007Icar..192..588Y,2011MNRAS.415...93C} \begin{equation}\label{schmidt} {\rm{Sc}} = 1 + {\rm{St}}^2. \end{equation} We implement the turbulent diffusion of the solid particles as random jumps. We add a term corresponding to our turbulence prescription to the velocity resulting from the radial drift and vertical settling. The turbulent velocity resulting from the prescription given above is \begin{equation}\label{vD1} v_{\rm{d}}^{\rm{D1}} = \frac{\Delta x}{\Delta t}, \end{equation} where $\Delta x$ is the displacement of the particle during the time step $\Delta t$. The displacement is taken as a random number drawn from a Gaussian distribution with the half width $L$ from Eq.\ (\ref{sigma}). This description of the diffusion is, however, simplified. In fact, there is an additional term in the diffusion equation for a non-homogeneous gas distribution. The velocity component resulting from this effect always points towards the gas density maximum.
Therefore, the dust scale height never exceeds the gas scale height. For more details see \citet{2011A&A...534A..73Z} (their Eqs 7-8). The velocity corresponding to this term can be written as \begin{equation}\label{vD2} v_{\rm{d}}^{\rm{D2}} = D_{\rm{d}}\frac{1}{\rho_{\rm{g}}}\frac{\partial\rho_{\rm{g}}}{\partial x}, \end{equation} where $x$ in Eqs (\ref{vD1}) and (\ref{vD2}) can be either $r$ or $z$, depending on the direction along which we consider the diffusion. \subsection{Collisions}\label{sub:coll} \paragraph{Monte Carlo method} We model the dust coagulation using a Monte Carlo algorithm. This approach was already used in the protoplanetary disk context by \citet{2007A&A...461..215O}. It is based on a method first presented by \citet{1975MNRAS.170..541G}. \citet{2008A&A...489..931Z} described in detail how to use this algorithm with the representative particle approach. Only the main facts are stated here for the reader's convenience. As mentioned in Sect.\ \ref{sub:dustd}, we assume that a limited number $n$ of representative particles represents all $N$ physical particles present in the computational domain. Each representative particle $i$ describes a swarm of $N_i$ identical physical particles. The total mass $M_{\rm{swarm}}$ is the same for every swarm and constant in time. As we typically have $n \ll N$, we only need to consider the collisions between representative and non-representative particles: collisions between the representative particles themselves are too rare to be significant, and, by the basic assumption of the method, collisions among the non-representative physical particles do not need to be tracked. The particles taking part in the subsequent collisions, as well as the time step between the events, are determined on the basis of random numbers. For each collision we pick one representative particle $i$ and one non-representative particle from the swarm represented by the representative particle $k$. It is possible that $i=k$.
The probability of a collision between particles $i$ and $k$ is determined as \begin{equation}\label{rik} r_{ik} = \frac{N_k K_{ik}}{V}, \end{equation} where $V$ is the cell volume and $K_{ik}$ is a coagulation kernel. Apart from some test cases, we use \begin{equation}\label{Kik} K_{ik} = \Delta v_{ik} \sigma_{ik}, \end{equation} where $\Delta v_{ik}$ is the relative velocity between particles $i$ and $k$ and $\sigma_{ik}$ is the geometrical cross-section for their collision. The total collision rate among any of the pairs is \begin{equation}\label{totr} r = \sum\limits_i\sum\limits_k r_{ik}. \end{equation} We first choose the representative particle, and the probability that it is particle $i$ is \begin{equation}\label{Pi} P_{i} = \frac{\sum\limits_k r_{ik}}{r}. \end{equation} Then we choose the non-representative particle with the probability: \begin{equation}\label{Pik} P_{k|i} = \frac{r_{ik}}{\sum\limits_k r_{ik}}. \end{equation} The time step between the subsequent collisions is determined as \begin{equation}\label{timestepMC} \tau = -\frac{1}{r}\ln({\mbox{rand}}), \end{equation} where $\mbox{rand}$ is a random number drawn from a uniform distribution between 0 and 1. As a result of the collision, only the representative particle $i$ changes its properties. For example, in the case of sticking, $m_i~\leftarrow~m_i~+~m_k$. Every time the mass of the particle changes, the number of particles represented by the swarm has to be updated as $N_i = M_{\rm{swarm}} \slash m_i$. \begin{figure} \centering \includegraphics[width=\hsize]{grid.pdf} \caption{Illustration of the adaptive grid algorithm. The dots correspond to the representative particles. First the vertical (blue in the color version) walls are established so that the number of representative particles in each radial zone is equal.
Then the horizontal (green) walls are set up for each radial zone individually in order to preserve an equal number of swarms in each cell.} \label{fig:ag} \end{figure} \paragraph{Adaptive grid} The coagulation of dust aggregates depends on the local properties of the ambient gas. This is the reason why, to perform the collisions, we first set up a 2D ($r+z$) grid and place our representative particles in the grid cells. The grid cells are assumed to be annuli at a given distance from the star $\left\{r,r+\Delta r\right\}$ and height above the midplane $\left\{z,z+\Delta z\right\}$. Only particles present inside the same annulus are allowed to collide with each other. To construct the annuli we developed an adaptive grid routine. The volume of the grid cells varies in order to keep the number of swarms per cell constant. This procedure is illustrated in Fig.\ \ref{fig:ag}. In order to set up the grid walls, we first sort the particles by their radial positions. We choose the positions of the vertical walls such that the number of swarms in each radial zone is the same. Then we sort the particles by their vertical positions within every radial zone individually and set up the horizontal walls so as to preserve an equal number of swarms in each cell. Thanks to this approach, we automatically gain higher spatial resolution in the important high dust density regions. Furthermore, keeping the number of representative particles per cell constant ensures that we always have a sufficient number of bodies to resolve the physics of the coagulation kernel properly (see Sect.~\ref{sub:test1}). As the Monte Carlo algorithm is generally an $O(n^2)$ method, the adaptive grid routine reduces the computational cost of performing the collisions by a significant factor. \paragraph{Relative velocities} As in e.g.
\citet{2010A&A...513A..79B}, we consider five sources of relative velocities between the dust particles: namely the Brownian motion, turbulence, radial and azimuthal drift, as well as differential settling. For the turbulent relative velocities we follow the prescription given by \citet{2007A&A...466..413O}. For the calculation of the relative velocities, all particles are assumed to be at the center of the cell. In this way, we avoid unphysically high collision velocities that could occur, e.g., in the case of a big cell with one particle placed at a significantly greater height above the midplane $z$ than the other. In such a situation, the relative velocity calculated on the basis of Eq.\ (\ref{vsvelmin}) is dominated by the difference in the height $z$. In reality, at the moment of the collision, $z$ is identical for both particles and the relative velocity is set by the difference in the Stokes numbers. \subsection{Time step} In order to resolve both the advection and the coagulation of the dust particles properly, a limit on the time step of the code is required. A drifting particle should be allowed to interact with every other particle along its way; thus it cannot “jump over” any cell. We implement an adaptive time-stepping method. We limit the time step according to the Courant condition: \begin{equation}\label{courant} \Delta t^x < \frac{\Delta x_{\rm{min}}}{v_{\rm{max}}^{x}}, \end{equation} where $x$ can be either $r$ or $z$, as we apply this condition to both directions, and we finally choose $\Delta t =\min(\Delta t^{\rm{r}},\Delta t^{\rm{z}})$. $\Delta x_{\rm{min}}$ is the length of the shortest cell in the given direction and $v_{\rm{max}}^x$ is the drift velocity of the fastest particle in this direction. The final time step we obtain is typically of the order of a fraction of the local orbital period. Generally, the time step should also be limited by the dust growth timescale.
However, in typical cases, the advection timescale is shorter than the growth timescale. This means that within one advection time step, the coagulation does not change the drift properties significantly. \begin{figure*} \centering \includegraphics[width=\hsize]{mctests.pdf} \caption{The grain mass distributions for the tests against the analytical solutions of the Smoluchowski equation (dashed lines) at different time instants: a) Test against the constant kernel $K_{ik}=1$, where $50$ representative particles are simulated five times. The particle masses are binned and the distribution functions are averaged at dimensionless times $t = 1,10,10^2,10^3,10^4,10^5$. b) Test against the linear kernel $K_{ik}=\frac{1}{2}\left(m_i+m_k\right)$. There are $150$ particles used and the simulation is repeated five times. The distribution function is produced at times $t = 4,8,12,16,20$. c) Test against the product kernel $K_{ik}=m_i \times m_k$. We use $400$ representative particles and repeat the simulation ten times. The outputs are produced at times $t = 0.4,0.7,0.9$.} \label{fig:mctests}% \end{figure*} \section{Test cases}\label{sub:tests} In order to validate our code, we perform a set of different tests. We test the advection and collision parts of the code separately as well as together. In this section we present some of the more instructive test results. \subsection{Tests of the coagulation model}\label{sub:test1} We test our implementation of the Monte Carlo coagulation method with the representative particle approach. In a 0-dimensional case, the only property of the particles is their mass. In such a case, the coagulation can be described by the Smoluchowski equation \citep{1916ZPhy...17..557S}. For some coagulation kernels $K_{ik}$, an analytical solution of the Smoluchowski equation can be found.
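As an illustration, a minimal 0-D version of the Monte Carlo scheme of Sect.\ \ref{sub:coll}, Eqs.\ (\ref{rik})-(\ref{timestepMC}), can be written down for one of these analytically solvable cases, the constant kernel $K_{ik}=1$ (a hypothetical Python sketch, not the production Fortran code):

```python
import math
import random

def mc_coagulation_step(m, N_swarm, M_swarm, volume):
    """One collision event for representative particles, constant kernel K = 1.

    m[i]       -- mass of representative particle i
    N_swarm[i] -- physical particles in swarm i (kept equal to M_swarm / m[i])
    Returns the time to this collision, Eq. (timestepMC).
    """
    n = len(m)
    # collision rates r_ik = N_k * K_ik / V (Eq. rik), here with K_ik = 1
    rates = [[N_swarm[k] / volume for k in range(n)] for i in range(n)]
    row_sums = [sum(row) for row in rates]
    r_tot = sum(row_sums)
    # time step between subsequent collisions (Eq. timestepMC)
    tau = -math.log(1.0 - random.random()) / r_tot
    # pick representative particle i (Eq. Pi), then partner swarm k (Eq. Pik)
    i = random.choices(range(n), weights=row_sums)[0]
    k = random.choices(range(n), weights=rates[i])[0]
    # sticking: only the representative particle i changes its properties
    m[i] += m[k]
    N_swarm[i] = M_swarm / m[i]
    return tau
```

Repeating such events and binning the particle masses should reproduce the self-similar constant-kernel solution shown in panel a) of Fig.\ \ref{fig:mctests}.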
We test our approach against three such kernels, namely the constant kernel $K_{ik} = 1$, the linear kernel $K_{ik} = \frac{1}{2}(m_i + m_k)$ and the product kernel $K_{ik} = m_i \times m_k$. The test results are presented in Fig.\ \ref{fig:mctests}. We start all the simulations with a homogeneous mass distribution of particles with $m_0=1$. The volume density of particles is also equal to unity. The analytical solutions are taken from \citet{1979ApJ...229..242S} and \citet{1990Icar...88..336W}. As expected, we obtain results similar to those of \citet{2008A&A...489..931Z}. We find that the constant kernel can be properly resolved using a very limited number of representative particles. The linear kernel can be resolved using at least 100 representative particles. To obtain the proper evolution in the case of the product kernel, we need about 300 particles. As the mass dependence of the coagulation kernel in physical cases usually lies between the linear and product kernels, we conclude that we should use at least 200 representative particles per cell in our simulations. Thanks to the adaptive grid routine, it is possible to fulfill this requirement at any time during the simulation. \subsection{Vertical settling and turbulent diffusion of the particles}\label{sub:vert} In the absence of radial drift and coagulation, the vertical structure of the dust is set by the vertical settling and turbulent diffusion. From the test simulations, we obtain a Gaussian distribution defined by the local properties of the gas and solids. Its width can be derived by comparing the timescales of the vertical settling and turbulent diffusion.
The timescale of the vertical settling can be obtained from Eq.\ (\ref{vsvelmin}) as \begin{equation}\label{vstime} \tau_{\rm{sett}} \approx \frac{1}{\Omega_{\rm{K}} \min(0.5,\rm{St})}. \end{equation} The timescale of the turbulent diffusion can be estimated as \begin{equation}\label{difftime} \tau_{\rm{diff}} \approx \frac{L^2}{D_{\rm{d}}}, \end{equation} where $L$ is the length scale over which the diffusion takes place and $D_{\rm{d}}$ is defined by Eq.\ (\ref{ddust}). Equating Eqs (\ref{difftime}) and (\ref{vstime}), transforming the resulting formula using Eqs (\ref{ddust}) - (\ref{schmidt}), and taking $L = h_{\rm{d,1}}$, we can estimate the thickness of the dust layer as \begin{equation}\label{hdust1} h_{\rm{d,1}} = H_{\rm{g}} \left( \frac{\alpha}{\min(0.5,{\rm{St}}) (1 + {\rm{St}}^2)} \right)^{1\slash2}. \end{equation} The above estimate does not take the part of the diffusion introduced with Eq.\ (\ref{vD2}) into account. This effect prevents the dust layer scale height from exceeding the gas scale height. An analytical solution of the advection-diffusion equation of the gas disk gives a more accurate expression for the height of the dust layer \citep{1995Icar..114..237D}: \begin{equation}\label{hdust} h_{\rm{d}} = h_{\rm{d,1}} \left[ 1+ \left(\frac{h_{\rm{d,1}}}{H_{\rm{g}}}\right)^2 \right]^{-1\slash2}. \end{equation} We perform a set of test runs to check whether the dependence given by Eq.\ (\ref{hdust}) is reproduced by our code. We place the representative particles in a local column of a disk around a star with mass $M_\star = M_\odot$. The column is located at $r=1$ AU and we assume a gas surface density of $\Sigma_{\rm{g}}=100$ g cm$^{-2}$, a temperature of $T_{\rm{g}}=200$ K, and $\alpha=10^{-3}$ at this location. The initial dust-to-gas ratio is taken to be 0.01 and the dust material density is 1.6 g cm$^{-3}$. The gas vertical distribution is assumed to be Gaussian with a standard deviation of $H_{\rm{g}}$.
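The analytical prediction of Eqs.\ (\ref{hdust1})-(\ref{hdust}) used in this test can be evaluated directly (a Python sketch with a hypothetical function name):

```python
import math

def dust_scale_height(H_g, alpha, St):
    """Dust layer scale height from settling-diffusion equilibrium."""
    # Eq. (hdust1): equating the settling and diffusion timescales
    h_1 = H_g * math.sqrt(alpha / (min(0.5, St) * (1.0 + St**2)))
    # Eq. (hdust): correction that keeps h_d below the gas scale height
    return h_1 / math.sqrt(1.0 + (h_1 / H_g)**2)
```

For ${\rm St}\to0$ the dust follows the gas and $h_{\rm d}\to H_{\rm g}$, while well-decoupled particles settle into a thin midplane layer.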
Initially we place the representative particles such that we obtain a constant dust-to-gas ratio at every height above the midplane, so $h_{\rm{d},0}=H_{\rm{g}}$. We use particles with sizes ranging from 10$^{-5}$ to 10$^4$ cm, corresponding to a Stokes number range of 10$^{-6}$ to 10$^5$. The radial drift and collisions are switched off for this test. After the particle distribution reaches a steady state, we measure $h_{\rm{d}}$ by fitting a Gaussian. The results of the test are presented in Fig.\ \ref{fig:verticaltest}. For each of the runs we use 10$^4$ representative particles distributed over 100 cells. We find a good agreement between the analytical prediction (Eq.\ \ref{hdust}) and the test results. \begin{figure} \centering \includegraphics[width=\hsize]{z.pdf} \caption{The results of the vertical settling and turbulent diffusion test. The theoretical dependence given by Eq.\ (\ref{hdust}) is plotted with the solid line. The change of the slope around $\rm{St}=0.5$ comes from the Stokes number restriction applied in Eq.\ (\ref{vsvelmin}). The test simulation results are marked with points. We find a good agreement between the analytical prediction and the test results. } \label{fig:verticaltest} \end{figure} \subsection{Trapping of the dust particles in a pressure bump}\label{sub:trap} The trapping of solids in a region with a positive pressure gradient is a promising mechanism for overcoming the radial drift barrier and enhancing the growth of dust aggregates \citep{2007ApJ...664L..55K,2008A&A...487L...1B}. It was already studied theoretically by e.g. \citet{2007ApJ...671.2091G}. \citet{2012A&A...538A.114P} investigated the trapping of solids in the outer regions of a protoplanetary disk. They showed that disk models with pressure bumps give a predicted spectral slope in the mm-wavelength range consistent with that observed for typical T-Tauri disks, contrary to disk models without the bumps.
\begin{figure} \centering \includegraphics[width=0.9\hsize]{disk.pdf} \caption{The disk model with the pressure bump near the snow line according to \citet{2007ApJ...664L..55K}. The panels show: a) the $\alpha$ parameter, b) the gas surface density, c) the gas pressure in the midplane and its Taylor expansion around the pressure bump location (Eq.\ (\ref{pg}), dashed line), d) the gas pressure gradient, as functions of the radial distance from the central star, for our fiducial disk model. The region highlighted with a different background color refers to the models described in Sect.\ \ref{sub:2D}. } \label{fig:disk} \end{figure} In this section we present a simple analytical prediction of the width of the annulus formed by trapped particles of a given Stokes number and compare it to the results of test runs. We use a disk model based on the work of \citet{2007ApJ...664L..55K}, where the $\alpha$ parameter varies with $r$ due to changes in the gas ionization. As the MRI turbulence strength depends on the degree of coupling to the magnetic field, a change in the gas ionization will affect $\alpha$. The gas ionization fraction depends on the total surface area of dust particles \citep{2009ApJ...698.1122O}, and is therefore most affected if there is a significant population of small particles. \citet{2007ApJ...664L..55K} assumed all particles to be $\mu$m-sized, meaning that the gas ionization rate is simply proportional to the dust to gas ratio. Beyond the snow line, the dust density steeply increases as the water vapor condenses into solid grains, causing a decrease in $\alpha$ that builds up a pressure bump on the disk accretion timescale. \citet{2007ApJ...664L..55K} present a disk model parametrized in the framework of the $\alpha$-prescription for a steady state obtained via the described mechanism. Our implementation of the model is presented in Fig.\ \ref{fig:disk}. The $\alpha$ parameter drops from $10^{-3}$ inside the snow line to $10^{-6}$ in the dead zone.
This causes a bump in the surface density and a change of the sign of the pressure gradient. In the region where the pressure gradient is positive, the particles drift outwards and can thus be trapped in a so-called pressure trap. \citet{2010MNRAS.402.2436Y} remarked that in such a model the local density maximum is Rayleigh unstable if the bump width is less than the disk scale height. Therefore, we choose the parameters of the model such that the width of the gas density bump, measured by fitting a Gaussian, is equal to 4 gas pressure scale heights. We estimate the width $L({\rm{St}})$ of the trapped dust region in a similar way as the derivation of $h_{\rm{d,1}}({\rm{St}})$ in the previous section. We compare the timescales of the radial drift $\tau_{\rm{drift}}$ and turbulent diffusion $\tau_{\rm{diff}}$. As before, we estimate $\tau_{\rm{diff}}$ with Eq.\ (\ref{difftime}). We assume that to be trapped, a particle has to drift from its current location $r$ to the position of the pressure trap $r_0$. Thus, the radial drift timescale can be written as \begin{equation}\label{tdrift} \tau_{\rm{drift}} = \left|\frac{r-r_0}{v_{\rm{drift}}}\right|, \end{equation} where the drift velocity $v_{\rm{drift}}$ can be obtained from Eqs (\ref{vdrift2})-(\ref{veta}), and is proportional to the pressure gradient $\partial_{\rm{r}} P_{\rm{g}}$. We assume that the disk is vertically isothermal, thus the gas pressure in the midplane is given by \citep{2007ApJ...664L..55K} \begin{equation}\label{pressure} P_{\rm{g}} = \frac{\Sigma_{\rm{g}}c_{\rm{s}}\Omega_{\rm{K}}}{2\pi}. \end{equation} In order to obtain $L({\rm{St}})$, we need to remove the radial dependence of $\tau_{\rm{drift}}$.
Thus, we approximate the pressure profile $P_{\rm{g}}(r)$ with the second order Taylor expansion around the location of the pressure bump $r_0$: \begin{equation}\label{pg} P_{\rm{g}}(r) \approx P_{\rm{g}}(r_0) + \frac{1}{2} \frac{d^2{P_{\rm{g}}}}{dr^2}(r_0)\cdot(r-r_0)^2 = {\rm{C}}-{\rm{A}}\left(r-r_0\right)^2, \end{equation} and we find ${\rm{A}}\approx2\times10^{-28}$ g~cm$^{-3}$~s$^{-2}$ and $\rm{C} \approx 2.6\times10^{-2}$~g~cm$^{-1}$~s$^{-2}$ for $r_0 \approx 3.16$~AU. The Taylor expansion is plotted with the dashed line in panel c) of Fig.\ \ref{fig:disk}. The derivative $\partial_{\rm{r}} P_{\rm{g}}$, needed to calculate $v_{\rm{drift}}$, becomes \begin{equation}\label{dpg} \partial_{\rm{r}} P_{\rm{g}} = -2{\rm{A}}(r-r_0). \end{equation} Thus, we can estimate the radial drift timescale $\tau_{\rm{drift}}$ as \begin{equation}\label{tdrift2} \tau_{\rm{drift}} \approx \frac{\rho_{\rm{g}} \Omega_{\rm{K}}}{2{\rm{A}}} \left({\rm{St}} + \frac{1}{\rm{St}}\right). \end{equation} Comparing Eqs (\ref{difftime}) and (\ref{tdrift2}), using Eqs (\ref{ddust}) - (\ref{schmidt}), and replacing $\rho_{\rm{g}}c_s\slash\Omega_{\rm{K}} = \rho_{\rm{g}}H_{\rm{g}} = \Sigma_{\rm{g}}$, we find the expression for the width of the trapped dust annulus: \begin{equation}\label{l} L \approx \left(\frac{\alpha}{\rm{A}}\Sigma_{\rm{g}}c_{\rm{s}}\Omega_{\rm{K}}\frac{1}{\rm{St}}\right)^{1\slash2}. \end{equation} The width of the annulus increases with growing surface density, turbulent viscosity, or gas temperature, consistent with intuition. \begin{figure} \centering \includegraphics[width=\hsize]{trapping.pdf} \caption{The top panel shows the analytically derived dependence for the trapped dust annulus width (Eq.\ (\ref{l}), line) and the results of test runs (points). The timescale of particle trapping is associated with the timescale of radial drift, which is shown in the lower panel (Eq.\ \ref{tdrift2}).
The points in the top panel were measured after 10$^5$ years of evolution; this timescale determines the range of Stokes numbers of particles that can be trapped (marked with a different background color). } \label{fig:trappingtest} \end{figure} The solids are trapped on a timescale of radial drift that is specified by Eq.\ (\ref{tdrift2}). The timescale is shortest for particles of ${\rm{St}} = 1$ and grows for both smaller and bigger sizes. We perform a set of simulations using different sizes of particles ranging from 10$^{-5}$ to 10$^{4}$ cm. For each simulation we use 10$^5$ representative particles distributed over 100 radial and 20 vertical zones. Initially the particles are placed between 3 and 4 AU. The collisions are switched off. After 10$^5$ yrs of evolution, the width of the bump in the dust surface density is measured by fitting a Gaussian distribution. In the top panel of Fig.\ \ref{fig:trappingtest}, the fitted standard deviation of the distribution is plotted as a function of the Stokes number, together with the fit errors. In the bottom panel, the timescale of radial drift is shown. The range of Stokes numbers for which the timescale is shorter than $10^5$ yrs is indicated with a different background color. The width of the trapped dust annulus in the top panel is consistent with the dependence given by Eq.\ (\ref{l}), but only for the particles in the range specified by the short enough timescale condition. This result is in perfect agreement with our predictions. In this test we neglect the radial drift velocity caused by gas accretion, specified by Eq.\ (\ref{vdrift1}). \citet{2012A&A...545A..81P} showed that if we do not neglect this effect, we get an additional restriction on the size of particles that can be trapped (their Eq.\ 11).
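As a sanity check of the scalings above, Eqs.\ (\ref{l}) and (\ref{tdrift2}) can be evaluated directly. The following Python sketch uses assumed fiducial values (a solar-mass star, the $\Sigma_{\rm{g}}=65$ g cm$^{-2}$ and $T_{\rm{g}}=140$ K quoted for this disk model at 3 AU, and $\alpha\approx10^{-4}$ at the trap), so the numbers are indicative only:

```python
import math

# cgs constants and assumed fiducial values near the trap (r0 ~ 3.16 AU)
AU, G, M_SUN = 1.496e13, 6.674e-8, 1.989e33
K_B, M_H = 1.381e-16, 1.673e-24
SIGMA_G, T_G, ALPHA, A_TAYLOR, MU = 65.0, 140.0, 1.0e-4, 2.0e-28, 2.3

R0 = 3.16 * AU
OMEGA_K = math.sqrt(G * M_SUN / R0 ** 3)      # Keplerian frequency [1/s]
C_S = math.sqrt(K_B * T_G / (MU * M_H))       # isothermal sound speed [cm/s]
# Gaussian midplane density, rho_g = Sigma_g / (sqrt(2 pi) H_g)
RHO_G = SIGMA_G * OMEGA_K / (math.sqrt(2.0 * math.pi) * C_S)

def annulus_width(st):
    """Width of the trapped dust annulus, Eq. (l) [cm]."""
    return math.sqrt(ALPHA / A_TAYLOR * SIGMA_G * C_S * OMEGA_K / st)

def drift_timescale(st):
    """Radial drift timescale toward the trap, Eq. (tdrift2) [s]."""
    return RHO_G * OMEGA_K / (2.0 * A_TAYLOR) * (st + 1.0 / st)
```

The width scales as ${\rm{St}}^{-1/2}$ and the drift timescale has its minimum at ${\rm{St}}=1$, as in Fig.\ \ref{fig:trappingtest}.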
Particles with a Stokes number smaller than $\rm{St}_{crit}$ are not trapped because their coupling to the gas is so strong that they move with the gas through the pressure maximum, where \begin{equation}\label{stcrit} {\rm{St}_{crit}} = -\frac{v_{\rm{g}}^{\rm{r}}}{\partial_{\rm{r}} P_{\rm{g}}}\rho_{\rm{g}}\Omega_{\rm{K}}, \end{equation} with $v_{\rm{g}}^{\rm{r}}$ being the radial velocity of the gas. This condition holds only when the other component of the dust radial velocity is positive, i.e.\ $\partial_{\rm{r}} P_{\rm{g}} > 0$. In our model $\rm{St}_{crit} \approx 10^{-4}$, so this effect would not change the test result. \section{Sedimentation driven coagulation}\label{sub:1D} The Gaussian vertical structure of the dust as described in Sect.\ \ref{sub:vert} is usually a good approximation in the case of a protoplanetary disk. However, it can be strongly affected by the collisional evolution of the aggregates. We investigate the growth of the dust aggregates in a 1D vertical column. Our setup is based on a model presented by \citet{2005A&A...434..971D} and reproduced by \citet{2011A&A...534A..73Z} (henceforth ZsD11). The column is placed at the distance $r=1$ AU from a star of mass $M_\star = 0.5M_\odot$, with a gas surface density $\Sigma_{\rm{g}}=100$ g cm$^{-2}$ and a gas temperature $T_{\rm{g}}=200$~K. The radial drift is switched off. The dust particles are initially equal-sized monomers with radii $a_0=0.55$~$\mu$m and material density $\rho_{\rm{p}} = 1.6$ g cm$^{-3}$. They are vertically distributed such that the dust to gas ratio $\rho_{\rm{d}}\slash\rho_{\rm{g}}=0.01$ is constant along the column. Fragmentation is not included in this model, i.e.\ particle collisions result in sticking at every collision energy. The growth is driven only by Brownian motion and differential settling. We ignore other sources of relative velocity: radial and azimuthal drift as well as turbulence.
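For reference, the two relative velocity sources retained in this test have simple closed forms: the rms Brownian relative speed of two grains, and the differential settling speed implied by the settling timescale of Eq.\ (\ref{vstime}), $v^{z}\approx z\,\Omega_{\rm{K}}\min(0.5,{\rm{St}})$. A minimal sketch (illustrative, not the production code):

```python
import math

K_B = 1.381e-16  # Boltzmann constant [erg/K]

def v_brownian(m1, m2, temp):
    """RMS Brownian relative speed of two grains with masses m1, m2 [g]."""
    return math.sqrt(8.0 * K_B * temp * (m1 + m2) / (math.pi * m1 * m2))

def dv_settling(st1, st2, z, omega_k):
    """Differential settling speed at height z, from tau_sett of Eq. (vstime)."""
    return abs(z * omega_k * (min(0.5, st1) - min(0.5, st2)))

# Two 0.55-micron monomers of material density 1.6 g/cm^3 at T = 200 K
m0 = 4.0 / 3.0 * math.pi * (0.55e-4) ** 3 * 1.6
```

For the initial monomers the Brownian term amounts to a fraction of a cm s$^{-1}$ and dominates, while differential settling takes over once a size distribution develops.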
For the test we used $5\times10^4$ representative particles and 100 cells (500 particles per cell). The test run took about 48 hours on an 8 core 3.1 GHz AMD processor. Similarly to \citet{2005A&A...434..971D} and ZsD11, we notice that initially the growth is slow, driven by Brownian motion, and proceeds faster closer to the midplane, where the matter density is highest. At $t\approx100$ yrs, the particle growth in the upper layers speeds up as the differential settling comes into play. The vertical settling velocity increases with height, as can be seen from Eq.\ (\ref{vsvelmin}). The aggregates grow and settle down simultaneously. The first rain-out particles that reach the midplane have masses of around 10$^{-2}$ g. \begin{figure} \centering \includegraphics[width=\hsize]{aghist.pdf} \caption{Vertically integrated dust mass distribution at different stages of evolution for the sedimentation driven coagulation test. After approximately 400 yrs, the dust distribution splits into two parts. The big aggregates continue to grow at the expense of the small particles. } \label{fig:aghist} \end{figure} \begin{figure} \centering \includegraphics[width=0.93\hsize]{dd05.png} \caption{The vertical distribution of the dust grains of different sizes for the 1D sedimentation driven coagulation test. The three upper panels show the results of simulations with a constant grid with an increasing number of cells. The bottom panel uses the adaptive grid routine described in this work with 100 cells. All the distributions are plotted after 1000 yrs of evolution. The numerical convergence of the results is noticeable. } \label{fig:dd05} \end{figure} Fig.\ \ref{fig:aghist} presents the mass distribution evolution obtained in this test. It can be noticed that within the first 400 years the dust distribution becomes bimodal.
One population consists of the rain-out particles, which have reached the midplane, and the other of the smaller aggregates, which remain vertically dispersed. The bigger particles grow at the expense of the small ones, thus the surface density of the latter decreases. The final mass of the biggest aggregates is $\sim$10$^6$ g. Such a bimodal dust distribution for sedimentation driven coagulation was also reported by \citet{2005A&A...434..971D} and \citet{2005ApJ...625..414T}. The numerical model used by ZsD11 is practically identical to ours, but their spatial grid is fixed and consists of equally spaced cells, while we use the adaptive grid method. They use 40 cells to resolve 4 gas pressure scale heights. We notice that in comparison to their results, we get a faster evolution of the dust. The first rain-out particles arrive at the midplane after approximately 400~yrs of evolution, instead of the 500~yrs reported by ZsD11. We also observe that the growth proceeds to bigger sizes than in ZsD11, where the growth stalls at approximately 10$^{-1}$~g. In order to explain the discrepancy between the results obtained by ZsD11 and us, we perform a resolution test. As the adaptive grid effectively provides a very high number of cells in high density regions, we investigate whether the result obtained by ZsD11 depends on the number of cells used. Therefore, we perform a set of simulations with a constant, equally spaced grid but an increasing number of cells. Fig.\ \ref{fig:dd05} presents the mass-height distribution of the dust after 1000 yrs of evolution for the constant grid with 80, 240 and 640 cells as well as for the adaptive gridding with 100 cells. Note that ZsD11 modeled only the upper half of the column, so their 40 cells are equivalent to our 80-cell resolution. We find that the timescale of the evolution is indeed dependent on the vertical resolution.
With the adaptive grid method, we are also able to see the effect of the sweeping up of the small particles by the big ones on the dust distribution around the midplane (see the bottom panel of Fig.~\ref{fig:dd05}). If we consider one grid cell with a bottom wall at $z=0$, using the model described in this section, the collision rate defined by Eqs (\ref{rik})-(\ref{Kik}) does not depend directly on the height $z_{\rm{c}}$ of the center of the cell above the midplane. The relative velocity $\Delta v$ is dominated by the differential settling velocity, which is directly proportional to $z_{\rm{c}}$. The cell volume $V$ is also directly proportional to $z_{\rm{c}}$. Therefore, one could expect that the collisional evolution does not depend on the vertical resolution we choose. However, we find that the higher the resolution we use, the faster the growth and settling proceed. This effect can be explained in the following way: we calculate the relative velocities of particles based on the physical values evaluated at the centers of the cells. Thus, the exact values of the gas density, Stokes numbers and vertical settling velocities depend on the exact location of the cell. All these values influence the collision rate of particles. The more cells we use, the closer to the midplane (where the growth proceeds fastest at the very beginning as well as at the end of the evolution) we are able to resolve. On the other hand, the faster the growth we obtain, the more quickly the particles settle down. It is worth noting that one of the basic assumptions of the method we use \citep{2008A&A...489..931Z} is that the particles are homogeneously distributed over the volume of the cell within which they can collide. If we do not ensure a sufficiently high spatial resolution, this assumption is broken, and the model leads to unphysical results. The vertical resolution defines the maximum dust to gas ratio we are able to obtain.
In the case of a constant grid with $N_{\rm{c}}$ cells, the maximum dust to gas ratio $\rho_{\rm{d}}\slash\rho_{\rm{g}}=N_{\rm{c}}\times0.01$ would occur if we placed all of the dust particles in one cell, where 0.01 is the global dust to gas mass ratio. In the case of the adaptive grid the dependence on the number of cells is much weaker, and we are able to resolve higher dust to gas ratios with a much lower number of cells. The impact of the dust layer width on the growth was investigated by \citet{1986Icar...67..375N}. They concluded that the growth terminates for an infinitely thin layer, as when all of the bodies are located at $z=0$, the vertical velocity of dust resulting from Eq.\ (\ref{vsvelmin}) is $v_{\rm{d}}^{z}=0$, and the main source of the relative velocities driving the collisions vanishes. However, even with the adaptive grid, we can never obtain an infinitely small cell, so the growth termination does not occur in our simulations. The existence of such an infinitely thin dust layer is unrealistic anyway. As soon as the dust to gas ratio exceeds unity, shear instabilities \citep{1980Icar...44..172W,1993Icar..106..102C}, in particular the Kelvin-Helmholtz instability \citep{2006ApJ...643.1219J}, are known to occur. \citet{2010ApJ...722.1437B} showed that in the case of no turbulence, another kind of hydrodynamic instability, namely the streaming instability \citep{2005ApJ...620..459Y}, will generate turbulence and regulate the dust to gas ratio before the Kelvin-Helmholtz instability can be triggered. It then prevents the dust from further settling and the growth from terminating. With the adaptive grid routine, we are able to resolve dust to gas ratios much higher than unity. To avoid such an unphysical situation, we implement an artificial $\alpha$ viscosity, $\alpha_{\rm{SI}}$, that is designed to mimic the impact of the streaming instability on the vertical distribution of dust.
We calculate $\alpha_{\rm{SI}}$ as \begin{equation}\label{alphaSI} \alpha_{\rm{SI}} = \alpha_{\rm{SI}, max} \left[ 1 + \erf{ \left(\frac{\rho_{\rm{d}}\slash\rho_{\rm{g}} - {\rm{c_1}}}{\rm{c_2}}\right) } \right], \end{equation} where $\alpha_{\rm{SI}, max}$ defines the minimal turbulent viscosity needed to maintain a dust to gas ratio lower than unity, and $\rm{c_1}$, $\rm{c_2}$ are parameters of the error function $\erf$. The value of $\alpha_{\rm{SI}, max}$ can be calculated from Eq.\ (\ref{hdust}) as \begin{equation} \alpha_{\rm{SI}, max} = Z_0^2 \min(0.5,{\bar{\rm{St}}})\left(1+\bar{\rm{St}}^2\right), \end{equation} with $Z_0$ representing the initial dust to gas ratio, and $\rm{\bar{St}}$ being the Stokes number averaged over all particles present in a given cell, as the strength of the streaming instability driven turbulence is determined by the collective properties of the particles. The form of Eq.\ (\ref{alphaSI}) was chosen such that the additional term of viscosity is nonzero only when the dust to gas ratio exceeds unity, and it adds only the amount of turbulence needed to keep the dust to gas ratio below the unphysical value. \begin{figure} \centering \includegraphics[width=1.1\hsize]{dusttogas.pdf} \caption{The dust to gas ratio around the midplane after 1000 yrs of evolution as resolved by different algorithms: the constant grid with 80 and 640 cells and the adaptive grid with 100 cells with and without the streaming instability (SI). The obtained dust to gas ratio depends strongly on the vertical resolution. In the case of the adaptive grid it exceeds unity. The implementation of the streaming instability lowers the dust to gas ratio only in the region where such unphysical values occur.
} \label{fig:dusttogas} \end{figure} Fig.\ \ref{fig:dusttogas} shows the dust to gas ratio at different heights above the midplane after 1000 yrs of evolution for the different resolutions and for the test with the artificial viscosity $\alpha_{\rm{SI}}$. The resolution dependence can be noticed. The dust to gas ratio in the case of the adaptive grid exceeds unity. The implementation of $\alpha_{\rm{SI}}$ changes the dust to gas ratio only very close to the midplane. We find that implementing such an additional turbulence source speeds up the coagulation of the big particles. This is because it increases the relative velocities of the bodies and thus the collision rates. As we ignore the possibility of aggregate fragmentation, the higher relative velocities result in faster growth. \begin{figure} \centering \includegraphics[width=\hsize]{histcomp.pdf} \caption{The dust mass distribution after 4000 yrs of evolution for the tests with the constant grid with 80 and 640 cells and the adaptive gridding with 100 cells with and without the streaming instability (SI) implemented. The figure reveals the huge impact of the vertical resolution on the dust growth timescale. With the adaptive gridding we obtain much bigger bodies after the same time of evolution. Taking the SI into account additionally speeds up the growth. } \label{fig:histcomp} \end{figure} Fig.\ \ref{fig:histcomp} shows the mass distributions after 4000 yrs of evolution obtained for the different griddings as well as with and without the streaming instability (SI). This figure reveals how strongly the growth depends on the resolution. The mass of the biggest agglomerates obtained after 4000 yrs in the tests with 640 constant grid cells and with 100 adaptive grid cells differs by five orders of magnitude. This is however a timescale effect. If we wait long enough, which is of the order of Myrs for the constant grid, we will obtain the same resulting size of agglomerates.
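A sketch of the artificial viscosity prescription of Eq.\ (\ref{alphaSI}) used in these runs is given below; since the values of $\rm{c_1}$ and $\rm{c_2}$ are not quoted above, the choices here ($\rm{c_1}=1$, $\rm{c_2}=0.1$, switching the viscosity on as the dust to gas ratio approaches unity) are illustrative only:

```python
import math

def alpha_si(dust_to_gas, st_mean, z0=0.01, c1=1.0, c2=0.1):
    """Artificial streaming-instability viscosity, Eq. (alphaSI).
    c1, c2 are illustrative switching parameters (hypothetical values)."""
    # alpha_SI,max from the inverted Eq. (hdust): Z0^2 min(0.5, St)(1 + St^2)
    alpha_max = z0 ** 2 * min(0.5, st_mean) * (1.0 + st_mean ** 2)
    return alpha_max * (1.0 + math.erf((dust_to_gas - c1) / c2))
```

The error function keeps $\alpha_{\rm{SI}}$ negligible at low dust to gas ratios and ramps it up smoothly once the ratio approaches unity.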
The growth can proceed only until all the small particles are swept up by the big ones. Taking the streaming instability into account allows us to obtain particles another four orders of magnitude larger in mass. The additional viscosity increases the vertical extent of the big bodies as well as their relative velocities. Thus, they are able to collide with the small particles that reside higher above the midplane. This speed-up of the growth may, however, be a result of the simplified instability implementation we use. We do not account for the strong particle clumping reported for the streaming instability \citep{2009ApJ...704L..75J}. We also ignore the possibility of the gravitational instability of the clumps \citep{2007Natur.448.1022J}. We plan to include these effects in our future work. The growth timescale dependence on the vertical resolution revealed in this section can have a huge impact on dust evolution models. In 2D cases the impact of the vertical structure resolution is even stronger, as the relative velocity is dominated by the radial and azimuthal drift. Its value does not depend on the vertical position, so the collision rate becomes explicitly dependent on $z$. In the model presented here, turbulence is not included, besides the one generated by the streaming instability. We also ignore the possibility of aggregate fragmentation. We expect that including these effects would lower the discrepancy between the results obtained using the constant and adaptive gridding. Turbulent mixing prevents the high dust to gas ratios that are problematic for the constant grid scheme. Taking fragmentation into account sets a maximum mass beyond which the particles cannot grow. Thus, the difference in the growth timescale would not change the maximum particle size obtained in the different models after the same time of evolution. The tests presented in this section have shown that our adaptive grid deals very well with high dust density regions.
However, as can be observed in the bottom panel of Fig.\ \ref{fig:dd05}, the low density regions are resolved much more poorly. As most of the coagulation happens in the high density regions around the midplane, this flaw should not affect the evolution of the mass distribution function. However, it may limit the possibilities of using our code in the context of protoplanetary disk observations. \section{Sweep-up growth at the inner edge of dead zone} \label{sub:2D} \begin{figure} \centering \includegraphics[width=\hsize]{Sketch.pdf} \caption{Sketch of the planetesimal formation mechanism we suggest. Thanks to the radial variation in turbulence efficiency, the positions of the collision regimes are shifted in terms of particle size. The dust aggregates can grow to larger sizes in the dead zone than in the MRI active zone. The “big” particles grown in the dead zone drift inwards through the bouncing regime, to the location of the pressure trap, and some of them can continue to grow via sweeping up the small particles halted by the bouncing barrier. } \label{fig:sketch} \end{figure} In the test models presented so far, we have always assumed perfect sticking between particles, and ignored all other possible collision outcomes. However, the collisional physics of dust aggregates is highly complex, and laboratory experiments have shown that effects such as bouncing, fragmentation and erosion can also occur \citep{2008ARA&A..46...21B}. By implementing the collision scheme proposed by \citet{2010A&A...513A..56G} in 0D simulations, \citet{2010A&A...513A..57Z} showed the importance of using a realistic collision model, and discovered the existence of the bouncing barrier, where growth-neutral bouncing collisions can completely prevent particle growth beyond millimeter sizes, even before the fragmentation barrier is reached.
\citet{2012A&A...540A..73W} showed that the existence of a collisional growth barrier (such as the bouncing barrier) can actually be beneficial for the growth of planetesimals. If some larger particles, or {\it seeds}, are artificially introduced into a 0D model, they can grow by sweeping up the population of particles kept small by the bouncing barrier, thanks to the mass transfer effect observed by \citet{2005Icar..178..253W}. When two large particles collide at a high velocity, they tend to fragment, but if the mass ratio between the colliding particles is high enough, only the smaller of the two will be disrupted, depositing a fraction of its mass onto the larger particle in the process. In this way, two populations of particles are formed, where the few seeds grow by sweeping up the small particles while colliding only rarely among themselves. \citet{2012A&A...544L..16W} and \citet{2013ApJ...764..146G} showed that the first seeds might be formed by including velocity distributions produced by stochastic turbulence, but the exact nature of these distributions, and whether the effect is capable of producing high enough mass ratios, is still unclear. In this study, we investigate whether the seeds can be produced at one location in the disk and then transported by radial drift to another region, where they are significantly larger than the grains produced locally. This could allow them to grow further by sweeping up the smaller grains. In particular, we postulate that such a situation can occur for a sharp $\alpha$ change, e.g. at the inner edge of a dead zone. Fig.~\ref{fig:sketch} shows the basic idea behind our model. At the inner edge of the dead zone, the strength of the MRI turbulence drops, affecting the relative velocities between dust particles. In the MRI active region, the turbulence is stronger than in the dead zone, causing bouncing to occur for significantly smaller particles. Thus, aggregates growing in the dead zone can reach larger sizes.
The radial drift (whose speed increases toward a Stokes number of unity) can transport the largest particles to the MRI active region, and at the same time into another collisional regime. The drifting particles have now become seeds, and can continue to grow by sweeping up the small grains stuck below the bouncing barrier. Furthermore, the rapid decline of the turbulence strength can result in the formation of a pressure trap that allows the seeds to avoid further inward drift and being lost inside the evaporation radius of the star. A difficulty of the sweep-up growth scenario for planetesimal formation is that the first seed particles have to be orders of magnitude more massive than the main population, and that if too many such seeds are formed, they will fragment among themselves too often to be able to grow. As a first application of our 2D code, we investigate whether planetesimal formation via the mechanism described above can be initiated in a realistic protoplanetary disk. We focus on a protoplanetary disk with a pressure bump around the snow line \citep{2007ApJ...664L..55K}, using the disk model presented in Sect.~\ref{sub:trap}. We assume a stationary disk, which is a simplification, as the dust grain size distribution, whose evolution we model, should affect the disk structure. We discuss this issue further in Sect.\ \ref{sub:last}. The total disk mass integrated between 0.1 and 100 AU is $0.01M_{\odot}$, and we set the mass accretion rate to 10$^{-9}$ $M_{\odot}$~yr$^{-1}$. Thus, our model corresponds to a low-mass, passive protoplanetary disk. We focus this study on the region around the pressure bump, between $r = 3-5.5$ AU, as highlighted in Fig.~\ref{fig:disk}. At 3~AU, the disk has a gas surface density $\Sigma_{\rm g} = 65$ g cm$^{-2}$ and a temperature $T_{\rm{g}} = 140$~K. This disk model is highly simplified, especially in the outer regions, but as we focus only on the inner region, we consider it a good approximation.
We also assume a stationary gas disk, since, because of the computational expense of the simulations, we only run the models for $\sim$$3 \times 10^4$ yrs, which is much shorter than the typical disk evolution timescale. We assume an initial dust to gas ratio of $0.01$, and distribute the dust mass into monomers of size $a_0=1$~$\mu$m. The internal density of the particles is set to $\rho_{\rm{p}}=$ 1.6~g~cm$^{-3}$. For the models presented in this study, we use over half a million (exactly 2$^{19}$) representative particles and an adaptive grid resolution of 64 radial and 32 vertical zones. This gives 256 representative particles per cell, which allows us to resolve the coagulation physics properly (see Sect.\ \ref{sub:test1}). Each swarm represents $\sim$$10^{22}$~g, corresponding to a maximum representative particle size of roughly 100 km, which is obtainable without breaking the requirement that the number of representative particles must be lower than the number of physical particles in the swarms they represent. At the current stage of our project we do not reach km-sizes. \begin{figure} \centering \includegraphics[width=\hsize]{mt_v1.png} \caption{The collision outcome for particle pairs of given sizes located in the midplane at 3.23 AU (pressure trap) and at 3.6 AU (dead zone) in collision model B. “S” marks sticking, “B” bouncing, “F” fragmentation, and “MT” mass transfer. Thanks to the change in the disk properties between the two regions, the bouncing barrier occurs at different particle sizes, as predicted when constructing our planetesimal formation scenario (Fig. \ref{fig:sketch}).} \label{fig:colscheme} \end{figure} \begin{figure*} \centering \includegraphics[width=\hsize]{densar_mt_v1.png} \caption{The vertically integrated dust density at different stages of the evolution, using collision model B. The solid line shows the particle size corresponding to a Stokes number of unity, where the drift is fastest.
This line is also proportional to the gas surface density. The dashed line shows the approximate position of the bouncing barrier. The dotted line indicates the location of the pressure trap. The symbols mark the positions of three selected representative particles. Two of them become the seeds that continue growing, trapped in the pressure trap at 3.23~AU, while the growth of the other swarms is stopped by the bouncing barrier. The feature at $r>3.5$~AU and $a<$10$^{-2}$~cm comes from the bimodal distribution revealed in the 1D tests (see Sect.\ \ref{sub:1D}), where the small particles are vertically dispersed, while the bigger particles reside in the midplane of the disk.} \label{fig:densar_mt}% \end{figure*} For our collision model, we use two simplified prescriptions (models A and B) for bouncing, fragmentation and mass transfer based on the work of \citet{2012A&A...540A..73W}. In both models, we determine the sticking probability as a function of the relative velocity $\Delta v$: \begin{equation} p_{\rm s}(\Delta v) = \left\{ \begin{array}{ccl} 1 & & \Delta v < v_{\rm s} \\ 0 & {\rm if} & \Delta v > v_{\rm b} \\ 1-k & & {\rm otherwise}, \\ \end{array} \right. \end{equation} where $k = \log(1 + \Delta v - v_{\rm s}) /\log(1 + v_{\rm b} - v_{\rm s})$, consistent with the findings by \citet{2012Icar..218..688W}. The smooth transition between sticking and bouncing collisions turns out to be a natural way to limit the number of potential seeds, i.e. particles that are large enough to initiate sweep-up in the dead zone. The fragmentation probability is determined by a step function: \begin{equation} p_{\rm f}(\Delta v) = \left\{ \begin{array}{ccl} 0 & {\rm if} & \Delta v < v_{\rm f} \\ 1 & & \Delta v \geq v_{\rm f}, \\ \end{array} \right. \end{equation} and we let $v_{\rm s} = 3$~cm~s$^{-1}$, $v_{\rm b} = 60$~cm~s$^{-1}$, and $v_{\rm f} = 80$~cm~s$^{-1}$ be the sticking, bouncing and fragmentation threshold velocities.
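The two probabilities above translate directly into code; a minimal sketch with the threshold velocities of the text:

```python
import math

V_S, V_B, V_F = 3.0, 60.0, 80.0  # sticking/bouncing/fragmentation thresholds [cm/s]

def p_stick(dv):
    """Sticking probability with a smooth sticking-to-bouncing transition."""
    if dv < V_S:
        return 1.0
    if dv > V_B:
        return 0.0
    k = math.log(1.0 + dv - V_S) / math.log(1.0 + V_B - V_S)
    return 1.0 - k

def p_frag(dv):
    """Step-function fragmentation probability."""
    return 1.0 if dv >= V_F else 0.0
```

Note that $p_{\rm s}$ is continuous at both thresholds ($k=0$ at $\Delta v=v_{\rm s}$ and $k=1$ at $\Delta v=v_{\rm b}$), and that collisions with $v_{\rm b} < \Delta v < v_{\rm f}$ result in pure bouncing.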
These values correspond to silicate grains, which are believed to be less sticky and less resistant to fragmentation than the icy grains that would also exist in the simulation domain. However, because of the lack of knowledge about the collision properties of ice, and the uncertainty in the efficiency of sublimation and sintering at the snow line, we decide to take the pessimistic approach of using only the silicates. The thresholds $v_{\rm s}$, $v_{\rm b}$ and $v_{\rm f}$ are here independent of the particle masses and of $\Delta v$, which is different from the \citet{2012A&A...540A..73W} model. This is a significant simplification made for reasons of code optimization. However, the order of magnitude of these values is consistent with the original model, thus the overall scheme of collisional evolution is preserved. In both models, during a fragmenting event, the mass of both particles is distributed according to the power-law $n(m) \propto m^{-9/8}$, consistent with the findings by \citet{1993Icar..106..151B} as well as \citet{2010A&A...513A..56G}, and the representative particle is selected randomly from the fragments (see \citet{2010A&A...513A..57Z} for details on how this is done in the representative particles and Monte Carlo fashion). In model B, we also include the mass transfer effect, which occurs during a fragmenting event when the particle mass ratio is high enough, namely $m_1/m_2 > m_{\rm crit}$ ($m_1>m_2$), where we put $m_{\rm crit} = 10^3$. We assume a constant mass transfer efficiency of $0.8$, i.e.\ the more massive particle gains 80\% of the mass of the smaller particle. The collisional model developed by \citet{2012A&A...540A..73W} is much more complex than ours. In their work the mass transfer efficiency depends on the impact velocity. We decided to assume a constant mass transfer efficiency, as we are here mostly concerned with the point where sweep-up is initiated, and we do not want to model the process in detail.
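As an illustration, the collision prescription of model B can be sketched in a few lines. The threshold velocities, $m_{\rm crit}$ and the 80\% mass transfer efficiency are the values quoted above, while the function names, the random number handling and the treatment of the resulting masses are our own simplifying assumptions.

```python
import math
import random

# Threshold velocities (cm/s) and mass transfer parameters quoted in the text.
V_S, V_B, V_F = 3.0, 60.0, 80.0   # sticking, bouncing, fragmentation
M_CRIT = 1e3                      # mass ratio above which mass transfer occurs
MT_EFFICIENCY = 0.8               # fraction of the projectile mass gained

def sticking_probability(dv):
    """Smooth sticking-to-bouncing transition: p = 1 - k between v_s and v_b."""
    if dv < V_S:
        return 1.0
    if dv > V_B:
        return 0.0
    return 1.0 - math.log(1.0 + dv - V_S) / math.log(1.0 + V_B - V_S)

def collision_outcome(m1, m2, dv, rng=random):
    """Classify a collision of masses m1, m2 at relative velocity dv (model B)."""
    m1, m2 = max(m1, m2), min(m1, m2)
    if dv >= V_F:                        # step-function fragmentation threshold
        if m1 / m2 > M_CRIT:             # high mass ratio: sweep-up growth
            return "mass transfer", m1 + MT_EFFICIENCY * m2
        return "fragmentation", None     # power-law fragment distribution
    if rng.random() < sticking_probability(dv):
        return "sticking", m1 + m2
    return "bouncing", None
```

In model A the mass transfer branch is simply absent, so every collision above $v_{\rm f}$ results in fragmentation.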
For the same reason we ignored the threshold between erosion and mass transfer that \citet{2012A&A...540A..73W} found to be important for the growth to planetesimal sizes. In Fig.~\ref{fig:colscheme}, we present the collision outcome for all particle pairs with collision model B, in the midplane, at both the location of the pressure trap (3.23~AU) and in the dead zone (3.6~AU). In the case of collision model A, the plot is similar, but the mass transfer regime is replaced by fragmentation. From the plot, we can see that due to differences in turbulent viscosity, bouncing and fragmentation occur at different sizes depending on the location in the disk. In the dead zone, the turbulence is extremely low, $\alpha=10^{-6}$, compared to $\alpha \approx 10^{-4}$ at the pressure trap, and the particle growth can therefore continue to more than one order of magnitude larger sizes before the bouncing barrier halts it. This is exactly what is needed for our planetesimal formation via the sweep-up mechanism to work. In the case of collision model A, the growth is halted by the bouncing barrier at $\sim$0.1 cm in the pressure trap region and $\sim$0.7 cm in the dead zone, and there is no possibility for the growth to proceed towards bigger sizes. In model B, if radial drift were ignored and the growth were only allowed to proceed locally, the particle growth would stop at the same sizes as in model A. However, we find in our simulations that when both drift and mass transfer are included, the situation changes significantly, in a way that enables sweep-up, as discussed earlier. The result of the simulation using collision model B is illustrated in Fig.~\ref{fig:densar_mt}, where we plot the vertically integrated dust density evolution at six different times between $t = 500$~yrs and $t = 30,000$~yrs. The dust growth proceeds fastest in the inner part of the domain, where the relative velocities are the highest because of the stronger turbulence.
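The way the bouncing barrier shifts with the turbulence strength can be illustrated with the commonly used closure for turbulence-induced relative velocities of similar-sized particles, $\Delta v \approx \sqrt{\alpha\,\mathrm{St}}\,c_{\rm s}$. The sound speed below is an illustrative assumption, and the resulting Stokes numbers should not be compared literally with the barrier sizes quoted above, which also reflect differences in gas density and the smooth sticking-bouncing transition.

```python
# Turbulence-induced relative velocity of similar-sized particles is roughly
# dv ~ sqrt(alpha * St) * c_s; the bouncing barrier sits where dv reaches v_b.
C_S = 6.0e4       # cm/s, assumed sound speed near the snow line (illustrative)
V_BOUNCE = 60.0   # cm/s, bouncing threshold of the collision model

def bouncing_barrier_stokes(alpha):
    """Stokes number at which the turbulent dv reaches the bouncing threshold."""
    return (V_BOUNCE / C_S) ** 2 / alpha

st_trap = bouncing_barrier_stokes(1e-4)  # pressure trap
st_dead = bouncing_barrier_stokes(1e-6)  # dead zone: barrier higher in St
```

Lowering $\alpha$ by two orders of magnitude raises the barrier Stokes number by the same factor, which is the qualitative behavior exploited in the sweep-up scenario.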
After $1,000$~yrs, the particles in the inner part of the disk have reached the bouncing barrier, which efficiently halts any further growth. The position of the bouncing barrier, indicated with the dashed line in Fig.~\ref{fig:densar_mt}, is estimated analogously to the location of the fragmentation barrier in \citet{2011A&A...525A..11B}. As time progresses, particles further out also halt their growth due to bouncing. The bouncing barrier occurs at larger sizes in the dead zone than in the pressure trap. After $20,000$ years, most of the particles are kept small by the bouncing, and only evolve by slowly drifting inwards. The size of the particles stopped by the bouncing in the dead zone corresponds to ${\rm{St}} < 5\times10^{-2}$, for which the drift timescale exceeds $10^4$ yrs (see Fig.\ \ref{fig:trappingtest}). Thus, the small dust is still present beyond the pressure trap at the end of the model. During the inward drift, the particles halted by the bouncing barrier in the dead zone are automatically shifted to the fragmentation$\slash$mass transfer regime in the region of higher turbulence. Most of these particles fragment due to equal-size collisions, which can be seen in Fig.~\ref{fig:densar_mt}, as the majority of the bigger particles from the dead zone are fragmented down to the position of the bouncing barrier. With these contour plots, however, it is not easy to display the minute, but very important, number of bodies that are able to cross the barrier unscathed: these are the seeds. The Monte Carlo method finds 2 such seed representative particles in the model B run, which is hard to show in the contour plots, so we mark them separately in Fig.~\ref{fig:densar_mt}. In the figure, we plot the exact positions of three selected swarms. All of them have similar initial locations and identical masses.
Two of them are the only swarms that become the seeds for sweep-up, while the third is plotted for reference to show the evolution of an “average” particle in this model. This particle, after 25,000 yrs of evolution, clearly undergoes fragmentation. Because of the smooth transition between sticking and bouncing, a limited number of particles manage to grow to the maximum size before they have drifted inwards. These particles have a chance to avoid fragmenting collisions, for two reasons. First, the largest particles drift the fastest. Second, the more massive the drifting particle is, the lower the probability of a fragmenting collision, because the transition from the fragmentation to the mass transfer regime occurs at a projectile mass equal to 0.001 times the target mass. Thus some “lucky” particles from the high end of the mass distribution can reach a position where equal-size collisions are very unlikely, as most of the surrounding particles are more than three orders of magnitude lower in mass, and so collisions primarily lead to sweep-up growth. In our model we observe two such seeds. After $t = 27,500$~yrs, they have reached the pressure trap, so their drift is halted, and by the end of the simulation, they have reached m-sizes. \begin{figure} \centering \includegraphics[width=\hsize]{surfde_mt_v1.pdf} \caption{The evolution of the spatially integrated surface density of different sized particles for model B. The particles grow until they reach the bouncing barrier. After that, only a limited number of bodies continue to grow thanks to the mass transfer effect.} \label{fig:surfde_mt} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{rmax.pdf} \caption{The time evolution of the maximum (solid lines) and the median (dashed lines) radius of the dust particles in the models with and without the mass transfer effect.
Both models generally evolve in a similar way, as can be observed by following the change in the median radius. However, whereas the growth of the biggest particles in model A is halted by bouncing at sizes of approximately 0.7 cm, in model B some particles manage to grow to a radius of 100 cm after 30,000 yrs.} \label{fig:rmax} \end{figure} In Fig.\ \ref{fig:surfde_mt}, we present the evolution of the spatially integrated size-distribution for model B, and Fig.~\ref{fig:rmax} shows the evolution of the median and maximum radius of the dust particles for both models A and B. The median size illustrates the evolution of an “average” particle. Initially, both models evolve in the same way, and the median and maximum particle size grows gradually from $\mu$m- up to mm-sizes. After about $t = 23,000$~yrs, the largest particles in model A halt in their growth at a size of $0.7$ cm due to the bouncing barrier. The median size continues to grow, as not all particles have yet reached the bouncing regime. In model B, however, the first seeds have formed and drifted into the pressure trap at $t = 23,000$~yrs, initiating sweep-up and causing a sudden increase in the size of the largest particles from cm- up to m-sizes. Note that the sweep-up is only local, and the median particle size remains unchanged compared to model A. After the seeds are formed, the size-distribution becomes bimodal. As seen in Fig.\ \ref{fig:surfde_mt}, the seeds constitute a separate population, the surface density of which grows in time as they sweep up the small particles. However, as the seed population is represented by only two representative particles, its mass distribution cannot be resolved well in this model. Fig.\ \ref{fig:rmax} also shows another interesting feature.
The line corresponding to the maximum particle size in model B exhibits two bumps before it finally jumps to depict the evolution of the two growing seeds: these are some “unlucky” particles that fragmented before being able to reach the region where the mass ratio between them and the small particles is high enough to lead to mass transfer collisions. Later in our model we also observe some particles that grow somewhat bigger than the others but then fragment. Of course, there can and should be some new “lucky” seeds formed later, in particular from the particles that started their evolution further away in the disk. However, as the growth timescale increases with distance from the star, we would have to run our models for a longer time, and possibly start with a larger domain, to see more of the seeds, which is not possible with the current state of our code, as explained below. The model B run took about 50 days on an 8 core 2.83 GHz Intel processor. The model A run took about half as long, as all the particles stayed small so that a longer advection time step could be used. Because of the computational expense of the simulations, we finish them at $t = 31,000$~yrs, when the seeds have reached sizes of $100$~cm. Extrapolating the growth rate from the data, we find that the seeds would grow to km-sizes within the next $10^5$ yrs. \citet{2010ApJ...724.1153X} investigated planetesimal growth by dust sweep-up and found growth from 0.1 to 10 km within $10^6$ yrs in their models. However, in our model the surface density of the dust in the pressure trap is enhanced, as solids are constantly delivered to the trap by the radial drift, so the timescale of the growth is reduced. The dust density distribution in the dead zone exhibits a bump for particles smaller than 10$^{-2}$~cm (see Fig.\ \ref{fig:densar_mt}). This feature comes from the bimodal distribution revealed in the 1D tests presented in Sect.\ \ref{sub:1D}.
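The reduction of the sweep-up timescale by the dust enhancement in the pressure trap, mentioned above, can be illustrated with the simple geometric accretion rate $\dot m = \pi a^2 \rho_{\rm d}\,\Delta v\,\epsilon$. The midplane dust densities and the impact velocity below are illustrative assumptions, not values taken from the simulation.

```python
RHO_P = 1.6    # internal particle density, g cm^-3 (from the text)
YR = 3.156e7   # seconds per year

def sweepup_timescale(a_cm, rho_dust, dv, efficiency=0.8):
    """Growth timescale a / (da/dt) for geometric sweep-up of small grains.

    From dm/dt = pi a^2 rho_dust dv * efficiency and m = (4/3) pi RHO_P a^3,
    da/dt = efficiency * rho_dust * dv / (4 * RHO_P).
    """
    dadt = efficiency * rho_dust * dv / (4.0 * RHO_P)  # cm/s
    return a_cm / dadt / YR                            # years

# Illustrative numbers: a 100 cm seed sweeping up grains at dv = 50 cm/s.
t_background = sweepup_timescale(100.0, 1e-11, 50.0)  # unenhanced midplane
t_trap = sweepup_timescale(100.0, 1e-9, 50.0)         # 100x enhanced in trap
```

The timescale is inversely proportional to the local dust density, so the continuous delivery of solids into the trap directly shortens the growth towards km-sizes.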
It is not observed in the active zone because it is smeared out by the relatively strong turbulence ($\alpha>10^{-4}$). Such a structure cannot be modeled by the commonly used dust evolution codes that treat the vertical structure in an averaged way \citep{2010A&A...513A..79B}. However, the impact of resolving this structure is hard to assess without a detailed comparison between the results of such a 1D code and of our code, which is well beyond the scope of this work. \section{Discussion and Conclusions}\label{sub:last} We developed a new computational model for dust evolution in the protoplanetary disk. The representative particles method \citep{2008A&A...489..931Z} has been used to describe the dust. The gas disk has been included in an analytical way. The model tracks the dust drift as well as the coagulation. The Monte Carlo method has been used to investigate the collisions between the dust aggregates. The code is a further development of the work presented by \citet{2011A&A...534A..73Z}. We extended the model of \citet{2011A&A...534A..73Z} by adding the radial dimension as well as an adaptive gridding method, which assures high spatial resolution in high dust density regions. With the new numerical code, we found that high spatial resolution is necessary to model the dust evolution properly. In particular, in the absence of turbulent mixing, when a dense midplane layer is formed, the dust growth timescale depends very strongly on the resolution. We noticed that a sharp change in the protoplanetary disk structure can be favorable for planetesimal formation by sweep-up, as suggested by \citet{2012A&A...540A..73W}. We applied our method to a snow line region in a low mass protoplanetary disk, and modeled the disk following the prescription of \citet{2007ApJ...664L..55K}, where the turbulent viscosity changes around the snow line, leading to the occurrence of a low turbulence region as well as a pressure bump.
We found that in such a disk it is indeed possible to grow planetesimals by sweep-up. Due to the local dust density enhancement in the pressure bump, the sweep-up growth rate is increased, and the estimated planetesimal formation timescale is relevant for planet formation. The main conclusions of the models presented in this paper can be summarized in the following points. \begin{itemize} \item{Adaptive gridding allows us to investigate the dust collisional evolution with the Monte Carlo method in 2D. It assures sufficient spatial as well as mass resolution to account for the dust structure. It automatically moves the computational effort towards the high dust density regions.} \item{Proper resolution of the vertical structure of the protoplanetary disk is important for obtaining a correct dust growth timescale. This is true especially in the case of low turbulence.} \item{In some protoplanetary disks, it is possible to overcome the growth barriers and obtain planetesimals via sweep-up growth, as suggested by \citet{2012A&A...540A..73W}. Since the disk properties change at different disk locations, the bouncing barrier is shifted in terms of maximum particle size. In our model the particles can grow larger in the dead zone region. A limited number of these particles can become planetesimal seeds and continue to grow in the region of stronger turbulence, while their radial drift is at the same time halted by the pressure trap.} \item{The snow line, as modeled by \citet{2007ApJ...664L..55K}, is a favorable region for planetesimal formation by incremental growth.} \end{itemize} Our model includes inevitable simplifications. We assumed that the disk is isothermal, i.e.\ the gas temperature $T_{\rm{g}}$ is constant along the vertical dimension. The turbulent viscosity $D_{\rm{g}}$ has also been assumed to stay constant along a column at a given distance from the star.
We do not expect that including the dependence of $T_{\rm{g}}$ and $D_{\rm{g}}$ on the height above the midplane would influence the possibility of forming planetesimals via the mechanism described in this work. We did not include the gas disk evolution. This is not consistent with the snow line model we implement, as the gas ionization rate, and thus the turbulence strength, depends on the dust properties. As we include the dust growth, the total surface area of the grains changes and thus the turbulence is modified. The disk structure builds up on the disk accretion timescale. As the dust coagulation proceeds much faster than the gas disk evolution, we start our models from a steady state disk, which is a common, but not fully self-consistent, practice. We focused on the disk region around the snow line, where the temperature allows for the existence of solid water ice. The collisional properties of ice aggregates are generally thought to be much better than those of silicates. However, due to the lack of laboratory data, we did not include the ice in our models. We used a collision model that reflects the evolution of silicate grains. Including a collision model for ices would help the growth of particles. On the other hand, it would also require considering other complex effects, such as evaporation and condensation \citep{2011ApJ...739...18K,2013A&A...552A.137R} or sintering \citep{2011ApJ...735..131S}. The planetesimal formation model we propose relies on the existence of a growth barrier, such as the bouncing barrier introduced by \citet{2010A&A...513A..57Z}. The robustness of the barrier has recently been put into question, as the sticking and bouncing efficiencies have been shown to exhibit a strong dependence on the internal structure of the colliding aggregates as well as the impact parameter \citep{2011ApJ...737...36W, 2013arXiv1302.5532K}.
Even though the results on the bouncing behavior are inconclusive, we argue that fragmentation could also work in a similar fashion for the sweep-up scenario. In the case where fragmentation acts as the main growth barrier, the smaller dust population would be able to grow a bit further, but might still be swept up by the drifting seeds. As the mass transfer experiments have so far been performed over a limited parameter space only, this possibility would need to be verified experimentally. As mentioned in Sect.\ \ref{sub:1D}, we did not include the effects of particle clumping via the streaming instability \citep{2005ApJ...620..459Y} and the possibility of gravitational instability \citep{2007Natur.448.1022J} in the dense midplane layer. These effects could change our results and lead to efficient planetesimal formation in the dead zone. We plan to implement these phenomena in future work. A great difficulty in planet formation modeling is that the dynamic range involved is too wide to be covered by a single numerical method. When km-sized bodies are formed, regardless of their formation process, the gravitational interactions become important and the N-body dynamics needs to be considered. The statistical approach, which is commonly used to study dust coagulation, is very hard to connect with N-body methods, as it only handles the dust distribution function and does not consider individual particles. In our code, we used the representative particles as a description of the dust, thus the connection to the N-body regime is more natural. The code presented in this paper is a first step towards a complete model of planet formation. We focused the study presented in this paper on the snow line region, which is shown to be a favorable region for the formation of big bodies in the protoplanetary disk. This is consistent with other studies. The impact of the snow line has been reported in the context of the exoplanet distribution.
\citet{2009ApJ...691.1322S} argue that the statistical properties of observed exoplanets cannot be explained without taking the snow line into account. Also, the very recent work of \citet{2013MNRAS.428L..11M} suggests that the snow line region is where asteroid belts are preferentially formed. They come to this conclusion based on a correlation between the location of the snow line and observed warm dust belts in exosolar systems. They argue that the existence of such asteroid belts may be crucial for the existence of life on rocky planets. It is worth noting that the planetesimal formation mechanism introduced in this work can also take place at locations other than the snow line. \citet{2013ApJ...765..114D} have recently suggested that a steep variation of the turbulence efficiency and the resulting pressure bump, which we need for our mechanism, can occur beyond what they call the “metal freezeout line”, i.e. at the border beyond which metal atoms in the gas phase thermally adsorb on dust particles. In any case, in our model the initial planetesimal population is formed at a single location in the disk, corresponding to the pressure maximum. Such a narrow annulus of planetesimals was suggested by \citet{2009ApJ...703.1131H} as an initial condition for the formation of the terrestrial planets in the Solar System. The thresholds that change the structure of the protoplanetary disk, such as the snow line, clearly have a great significance for the emergence and evolution of planetary systems. Our models show that even with a “pessimistic” setup, in a low-mass protoplanetary disk consisting of not very sticky silicate grains, it is possible to form planetesimals at such a specific location. Further work is required to investigate the subsequent evolution of the planetesimal ring. For bodies larger than a kilometer, consideration of the gravitational interactions, which we plan to include in our future work, is necessary.
The question whether there will be an asteroid belt or a planet formed at the pressure trap cannot be answered at the current stage of our project. \begin{acknowledgements} We thank Carsten Dominik, Chris Ormel, Andras Zsom, Til Birnstiel and Paola Pinilla for useful discussions. We thank our referee, Satoshi Okuzumi, for a very quick and thorough report that helped us to substantially improve this paper. J.D. was supported by the Innovation Fund FRONTIER of the Heidelberg University. F.W. was funded by the Deutsche Forschungsgemeinschaft within the Forschergruppe 759 “The Formation of Planets: The Critical First Growth Phase”. J.D. would also like to acknowledge the use of the computing resources provided by bwGRiD (http://www.bw-grid.de), member of the German D-Grid initiative, funded by the Ministry for Education and Research (Bundesministerium für Bildung und Forschung) and the Ministry for Science, Research and Arts Baden-Wuerttemberg (Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg). \end{acknowledgements}
\section{Introduction} Mobile networks are vulnerable to \emph{signalling attacks} which overload the control plane through traffic patterns that target the signalling procedures involved \cite{Serror2006,Enck2005,Lee2009,Ricciato2010}, either by compromising a large number of mobile devices as in network Denial of Service (DoS) attacks \cite{Gelenbe2007,CACM-SAN} or from outside the mobile networks (e.g. the Internet). Similarly, software and apps on mobile devices \cite{Viruses,Attack} can cause such disturbances through frequent traffic bursts. Such attackers can actively probe the network to infer the network's radio resource allocation policies \cite{Barbuzzi2008,Qian2010} and identify IP addresses in specific locations \cite{Qian2012}. Indeed, a review of 180 cellular carriers around the world revealed that 51\% of them allow mobile devices to be probed from the Internet by either assigning public IP addresses to mobile devices or allowing IP spoofing or device-to-device probing within the network \cite{Wang2011,Qian2012}. Signalling attacks may also be launched in conjunction with the presence of crowds in well-identified locations such as sports arenas or concert venues \cite{CAMWA}. Signalling attacks are similar to \emph{signalling storms} caused by poorly designed or misbehaving mobile apps that repeatedly establish and tear down data connections \cite{NSN2011}, generating large amounts of signalling that may crash the network. Such signalling storms are a serious threat to the availability and security of cellular networks. While flash crowds last for a short time during special occasions such as New Year's Eve, signalling storms are unpredictable and tend to persist until the underlying problem is identified and corrected. This has prompted the industry to promote best practices for developing ``network-friendly'' mobile apps \cite{GSMA2012,Jiantao2012}.
\subsection{Signalling Storms} Perhaps one of the most important features of smart phones and tablets is the ``always-on'' connectivity, which enables users to receive push messages, e.g. to notify of an incoming message or VoIP call. This is maintained by having the mobile device send periodic keep-alive messages to a cloud server. However, if for any reason the cloud service becomes unavailable, then the mobile device will attempt to reconnect more frequently, generating signalling loads up to 20 times higher than normal, as reported in recent incidents \cite{Nokia2013}. In 2012 a Japanese mobile operator suffered a major outage \cite{Storm2012} due to a VoIP app that constantly polls the network even when users are inactive. In another incident \cite{Corner2011}, the launch of a free version of a popular game on Android caused signalling overload in a large network due to frequent advertisements shown within the app. Also, many mobile carriers have reported \cite{Arbor2012} outages or performance issues caused by non-malicious but misbehaving apps, yet the majority of those affected followed a reactive approach to identify and mitigate the problem. Signalling storms could also occur as a byproduct of large scale malware infections \cite{Ricciato06}, such as botnets, which target mobile users rather than networks. A recent report by Kaspersky \cite{Kaspersky2013} revealed that the most frequently detected malware threats affecting Android OS are (i) SMS trojans which send costly messages without users' consent, (ii) adware which displays unwanted advertisements, and (iii) root exploits which allow the installation of other malware or enable the device to become part of a botnet. A sufficiently large number of users within a single network falling victim to such attacks, which involve frequent communications, could have a devastating impact on the control plane of the network.
The purpose of this paper is to analyse the effect of signalling storms, as well as of signalling attacks, and in particular the manner in which such attacks can cause maximum damage to the radio and core networks. The approach we take is based on the development of a mathematical model of user signalling behaviour, from which we derive some useful analytical results. While the literature \cite{Haverinen2007,Yeh2009,Schwartz2013} has focused on analysing signalling behaviour from an energy consumption perspective, we hope that this work can offer mobile operators a greater understanding of bottlenecks and vulnerabilities in the radio signalling system, so that network parameters may be modified so as to mitigate those effects that lead to network outages \cite{nemesys1,nemesys2}. \section{Modelling Signalling of a Single User} In the context of UMTS networks, bandwidth is managed by the {\em radio resource control} (RRC) protocol, which associates a state machine with each {\em user equipment} (UE). There are typically four RRC states, in order of increasing energy consumption: IDLE, Paging Channel (cell\_PCH), low bandwidth Forward Access Channel (cell\_FACH), and high bandwidth Dedicated Channel (cell\_DCH). We will refer hereafter to state cell\_X as X. State promotions are triggered by uplink (UL) and downlink (DL) transmissions, and the move to FACH or DCH is determined by the size of the {\em radio link control} (RLC) buffer of the UE: if at any time the buffer exceeds a certain threshold in either direction, the state will be promoted to DCH. State demotions are triggered by inactivity timers. Consider a UE that transitions from IDLE or dormant $D$ to FACH, perhaps later to DCH, and then sometimes directly from $D$ to DCH. We will let $\lambda_L$ and $\lambda_H$ be the rates at which low and high bandwidth calls\footnote{A call refers to any UL/DL activity, e.g.
data session, location update, etc.} are normally made, and $L$ and $H$ be the corresponding states when the call is actually taking place in the sense that it is using the bandwidth of FACH and DCH. Furthermore, we will denote by $\eta$ the state when a low bandwidth request is handled while the mobile is in DCH. At the end of normal usage the call will transition from $L$ to $\ell$ or from $H,\eta$ to $h$, where $\ell$ and $h$ are the states when the UE is not using the bandwidth of FACH and DCH respectively; thus, $\{L,\ell\}\in$ FACH and $\{H,\eta,h\}\in$ DCH. We denote the rates at which low and high bandwidth calls terminate by $\mu_L$ and $\mu_H$. Since the amount of traffic exchanged in states $L$ and $\eta$ is usually very small (otherwise it will trigger a transition to $H$), we assume that their durations are independent but stochastically identical. If the UE does not start a new session for some time, it will be demoted from $h$ to $\ell$ or from $\ell$ to PCH which we denote by $P$. The UE will then return from $P$ to $D$ after another inactivity timer; however, because the mobile is not allowed to communicate in the $P$ state, it will first move to FACH, release all signalling connections, and finally move to $D$. Let $\tau_H$, $\tau_L$ and $\tau_P$ be the time-out rates in states $h, \ell$ and $P$, respectively. We are considering signalling attacks (or misbehaving apps) which falsely induce the mobile to go from $D, P$ to FACH or DCH, or from FACH to DCH, without the user actually having any usage for this request. The rates related to these malicious transitions will be denoted $\alpha_L$ and $\alpha_H$. Since in these cases a transition to an actual bandwidth usage state does not take place, unless the user starts a new session, the timers will demote the state of the UE. 
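Before introducing the attack states formally, the promotion/demotion logic described above can be summarised in a minimal sketch that omits the signalling (transition-delay) states of the full Markov model; the state names, the buffer threshold and the single-step demotion rule are our simplifications, not part of the model itself.

```python
# States of the simplified RRC machine, in order of increasing energy use.
IDLE, PCH, FACH, DCH = "IDLE", "PCH", "FACH", "DCH"

def promote(state, buffer_bytes, threshold=512):
    """UL/DL activity promotes the UE; a large RLC buffer forces DCH."""
    if buffer_bytes > threshold:
        return DCH
    if state in (IDLE, PCH):
        return FACH
    return state  # small transfers are served in the current FACH/DCH state

def demote(state):
    """An expired inactivity timer demotes the UE one level (PCH returns to
    IDLE via a brief visit to FACH in the real protocol, to release the
    signalling connections)."""
    return {DCH: FACH, FACH: PCH, PCH: IDLE, IDLE: IDLE}[state]
```

In this picture, a signalling attack or a misbehaving app corresponds to driving `promote` with tiny payloads at a high rate: the UE repeatedly climbs to FACH or DCH and is then demoted by the inactivity timers, generating control plane signalling on every cycle without any useful data transfer.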
Consequently, the attack results in the usage of network resources both through the computation and state transitions that occur for call handling, and through bandwidth reservation that remains unutilised. In summary, the state of the UE at time $t$ is described by the variable $s(t)\in \{\mathcal{N},\mathcal{A},\mathcal{S(N)},\mathcal{S(A)}\}$ where: \begin{itemize} \item $\mathcal{N}=\{D,P,\ell,L,h,\eta,H\}$ represent the states occupied by the UE during or after a ``normal'' call. \item $\mathcal{A}=\{\ell_A,h_A,\eta_A\}$ are similar to $\{\ell,h,\eta\}$ but forced by malicious traffic. Note that a transition to state $\eta_A$ happens either from $L$ because of an attack that forces an ongoing low bandwidth call to communicate over DCH, or from $h_A$ because of a new normal low bandwidth call that could have been handled through FACH. \item $\mathcal{S(N)}$ and $\mathcal{S(A)}$ are the signalling states for normal and attack conditions, respectively, which capture the non-negligible overhead needed in order to establish and release network resources during state promotions and demotions. We denote by $\sigma_{XY}^{-1}$ the average transition delay from state $X$ to $Y$, where $X,Y\in \{D,P,L,H\}$ and the subscripts $L$ and $H$ are used here to represent both normal and attack states in FACH and DCH. \end{itemize} \begin{figure}[]\centering% \includegraphics[width=0.49\textwidth]{CTMC} \caption{The Markov model of a single user.} \label{fig-model} \end{figure} Fig.~\ref{fig-model} shows the different states, signalling phases and transitions of the Markov model.
The stationary equations for the states in $\mathcal{N}$ are given by: \begin{align*} &\pi(D)[\lambda_H + \alpha_H + \lambda_L + \alpha_L] = \pi(P) \tau_P,\\ &\pi(P)[\lambda_H + \alpha_H + \lambda_L + \alpha_L + \tau_P] = [\pi(\ell) + \pi(\ell_A)] \tau_L,\\ &\pi(\ell) [\lambda_H + \alpha_H + \lambda_L + \tau_L] = \pi(L) \mu_L + \pi(h) \tau_H, \\ &\pi(L) [\lambda_H + \alpha_H + \mu_L] = [\pi(D) + \pi(P) + \pi(\ell) + \pi(\ell_A)] \lambda_L, \\ &\pi(h)[\lambda_H + \lambda_L + \tau_H] = \pi(H) \mu_H + \pi(\eta) \mu_L, \\ &\pi(\eta)[\lambda_H + \mu_L] = \pi(h)\lambda_L, \\ &\pi(H)\mu_H = \sum_{s\in \{ \mathcal{N},\mathcal{A} \}, s \neq H}\pi(s) \lambda_H, \end{align*} while the equations for the attack states $\mathcal{A}$ are: \begin{align*}\nonumber &\pi(\ell_A) [\lambda_H + \alpha_H + \lambda_L + \tau_L] = [\pi(D)+\pi(P)] \alpha_L + \pi(h_A) \tau_H,\\\nonumber &\pi(h_A)[\lambda_H + \tau_H + \lambda_L ] = \sum_{s\in \{D,P,\ell,\ell_A\}}\pi(s) \alpha_H + \pi(\eta_A) \mu_L, \\ &\pi(\eta_A)[\lambda_H + \mu_L] = \pi(h_A) \lambda_L + \pi(L) \alpha_H. \end{align*} We can express the normalisation condition as a weighted sum of the probabilities of the states $\{\mathcal{N},\mathcal{A}\}$, i.e. $1=\sum_{s \in \{\mathcal{N},\mathcal{A}\}} \pi(s)w_s$ or: \begin{align}\nonumber &1 = \underbrace{\pi(D)[1 + \frac{\Lambda_H}{\sigma_{DH}} + \frac{\Lambda_L}{\sigma_{DL}}]}_{\Pr[\text{user in IDLE}]}\\\nonumber & + \underbrace{\pi(P)[1 + \frac{\Lambda_H}{\sigma_{PH}} + \frac{\Lambda_L}{\sigma_{PL}} + \tau_P (\frac{1}{\sigma_{PL}}}_{\Pr[\text{user in PCH}]} + \frac{1}{\sigma_{LD}})]\\\nonumber & + (\pi(\ell)+\pi(\ell_A))[1+ \frac{\Lambda_H}{\sigma_{LH}} + \frac{\tau_L}{\sigma_{LP}}] + \pi(L)[1+ \frac{\Lambda_H}{\sigma_{LH}}]\\\label{norm} & + \underbrace{(\pi(h)+\pi(h_A))[1+\frac{\tau_H}{\sigma_{HL}}] + \pi(\eta) + \pi(\eta_A)+\pi(H)}_{\Pr[\text{user in DCH}]} \end{align} with $\Lambda_H = \lambda_H + \alpha_H$ and $\Lambda_L = \lambda_L + \alpha_L$. 
Writing $\Lambda = \Lambda_L + \Lambda_H$, $q_L =\frac{\lambda_L}{\lambda_H + \mu_L}$, $\rho_L = \frac{\lambda_L}{\Lambda_H + \mu_L}$, and $q_H = \frac{\lambda_H}{\mu_H}$, the solution to the above set of equations becomes: \begin{align*} \pi(D)=~& \frac{\tau_P \tau_L}{(\Lambda + \tau_P)(\Lambda + \tau_L)} ~ G,\\ \pi(P)=~& \frac{\Lambda \tau_L }{(\Lambda + \tau_P)(\Lambda + \tau_L)}~ G,\\ \pi(L)=~& \rho_L ~G,\\ \pi(H)=~& q_H [\frac{q_L\rho_L\alpha_H }{\lambda_L} + (1+\rho_L)(\frac{\Lambda_H}{\tau_H}[1+ q_L] + 1 ) ] ~G,\\ \pi(h)=~& \frac{\mu_H}{\lambda_H [1+q_L] + \tau_H } ~ \pi(H), \\ \pi(\eta)=~& q_L ~ \pi(h),\\ \pi(\ell)=~& \frac{1}{\Lambda_H + \lambda_L + \tau_L} [\mu_L \rho_L G + \frac{\mu_H \tau_H\pi(H)}{\lambda_H [1+q_L] + \tau_H} ],\\ \pi(h_A)=~& \frac{\alpha_H}{\lambda_H [1+q_L] + \tau_H}[1+ \frac{q_L \rho_L\mu_L}{\lambda_L}]~ G,\\ \pi(\eta_A)=~& \frac{\alpha_Hq_L}{\lambda_H [1+q_L] + \tau_H} [ 1 + \frac{\lambda_H + \tau_H + \lambda_L}{\Lambda_H + \mu_L}]G,\\ \pi(\ell_A)=~& \frac{1}{\Lambda_H + \lambda_L + \tau_L}[\frac{\alpha_L \tau_L}{\Lambda + \tau_L} + \frac{\alpha_H \tau_H (1+\frac{q_L \rho_L\mu_L}{\lambda_L})}{\lambda_H [1+q_L] + \tau_H}]G, \end{align*} where $G$ can be obtained from \eqref{norm} yielding: \begin{align*} &G^{-1}= [1+\rho_L][q_H + \frac{\Lambda_H}{\tau_H} \{(1+ q_L) (1 + q_H)+ w_h - 1\} ]+\\ & \frac{\frac{\tau_L}{\Lambda + \tau_P} [\tau_Pw_D +\Lambda w_P] + \Lambda w_\ell}{\Lambda + \tau_L} + \rho_L [ w_L + \frac{q_L}{\lambda_L}(1+ q_H)\alpha_H]. 
\end{align*} \subsection{Signalling Load on the RNC and SGSN} Let $n_{XY}$ denote the number of signalling messages sent or received by the {\em radio network controller} (RNC) when a transition occurs from state $X$ to state $Y$; then the signalling rate generated by a single user due to both normal and malicious traffic can be computed as: \begin{align}\nonumber \gamma_{r} ~=~ & \pi(D)[\Lambda_H n_{DH} + \Lambda_L n_{DL}] + \pi(P)[\Lambda_H n_{PH} + \Lambda_L n_{PL}] \\\nonumber & + [\pi(\ell) + \pi(\ell_A) + \pi(L)] \Lambda_H n_{LH} \\\nonumber & + [\pi(h) + \pi(h_A)] \tau_H n_{HL} \\\nonumber & + [\pi(\ell) + \pi(\ell_A)] \tau_L \{n_{LP} \mathbf{1_{L\to P}} + n_{LD} \mathbf{1_{L\to D}}\} \\\label{radio-rate} & + \pi(P)\tau_P n_{PD}\mathbf{1_{L\to P}}, \end{align} where the characteristic function $\mathbf{1_{X\to Y}}$ takes the value 1 if the transition $X\to Y$ is implemented and 0 otherwise. Note that the mobile network operator may not use the PCH state, e.g. when the vendor does not support it or it is disabled in order to extend the battery life of mobile devices. In this case, $\sigma_{PL},~\sigma_{LP}$ and $\tau_P$ are set to $\infty$ so that the user is moved directly from FACH to IDLE after an inactivity timer. On the other hand, the core network is more protected from signalling attacks since only transitions to/from state $D$ trigger signalling with the core. Let $m_{XY}\leq n_{XY}$ be the number of control plane messages exchanged between the RNC and the {\em serving GPRS support node} (SGSN) during such transitions; then the signalling load on the core network from a single user becomes: \begin{align}\nonumber \gamma_{c} ~=~& \pi(D)[\Lambda_H m_{DH} + \Lambda_L m_{DL}] + \pi(P) \tau_P m_{PD}\mathbf{1_{L\to P}}\\\label{core-rate} & + [\pi(\ell) + \pi(\ell_A)] \tau_L m_{LD} \mathbf{1_{L\to D}}.
\end{align} Table~\ref{table-param} summarises the state transition model along with parameter values used in the numerical results: (i) the number of signalling messages exchanged during state transitions is obtained from the UMTS standards documentation, and can also be found in the literature (e.g. \cite{GSMA2011}); (ii) typical values for the inactivity timers $\tau_H^{-1}$ and $\tau_L^{-1}$ are in the range $2-10$~seconds, while $\tau_P^{-1}$ should be significantly longer (on the order of minutes); and (iii) the average transition times are assumed to be proportional to the number of signalling messages involved, and normalised with respect to the transition IDLE $\to$ DCH, which is assumed to take 1~second. \begin{table}[t!]\caption{Network parameters}\label{table-param} \begin{tabular}{p{2cm}|p{3.85cm}|p{0.3cm}|p{0.35cm}|p{0.3cm}}\hline \textbf{Transition} & \textbf{Triggering Event} & $\hspace{-0.17cm}\mathbf{n_{XY}}$ & $\hspace{-0.17cm}\mathbf{m_{XY}}$ & $\hspace{-0.1cm}\mathbf{\sigma_{XY}^{-1}}$ \\\hline\hline IDLE $\to$ FACH & \multirow{3}{1.5in}{Low bandwidth UL/DL traffic (e.g. location update, keep-alive messages)} & 15 & 5 & 0.75\\ PCH~ $\to$ FACH & & ~3 & -- & 0.15 \\ &&&&\\\hline IDLE $\to$ DCH & \multirow{3}{1.5in}{High bandwidth UL/DL traffic (e.g. VoIP calls, video streaming, web browsing)}& 20 & 5 & 1.0 \\ PCH~ $\to$ DCH & & 10 & -- & 0.5 \\ FACH $\to$ DCH & & ~7 & -- & 0.35 \\\hline DCH $\to$ FACH & inactivity timer $\tau_H^{-1} = 2-10$s & ~5 & -- & 0.25 \\ FACH $\to$ PCH & inactivity timer $\tau_L^{-1} = 2-10$s & ~2 & -- & 0.1 \\ PCH ${\tiny \xrightarrow{\text{FACH}}}$ IDLE & inactivity timer $\tau_P^{-1} = 5-20$min & ~6 & 2 & 0.3 \\\hline \end{tabular} \end{table} \section{Maximising the Impact of an Attack} If an attacker succeeds in inferring the radio network configuration parameters (e.g.
through active probing \cite{Barbuzzi2008,Qian2010,Qian2012}), then it is easy to monitor the user's behaviour in order to estimate $\lambda_L, \lambda_H, \mu_L$ and $\mu_H$. The attacker can then maximise the impact on the radio or core network by choosing the rate of malicious traffic bursts $\alpha_L$ and $\alpha_H$ so as to maximise \eqref{radio-rate} or \eqref{core-rate}. This is illustrated in Fig.~\ref{fig-gamma} where we plot the average rate of signalling messages that a misbehaving user generates on the RNC and SGSN assuming $\alpha_L = 0$ and different values of $\alpha_H$. The results indicate that there is indeed an optimum value of $\alpha_H$ which maximises the load on the {\em core network}, while the load on the radio network increases monotonically with the attack rate up to a maximum level. The effect of PCH state is also examined in Fig.~\ref{fig-gamma} showing a significant reduction (about 95\%) in the amount of control plane traffic reaching the core network as compared to the case where the user is moved directly from FACH to IDLE. In fact, as the value of the timer $\tau_P^{-1}$ gets larger, an attacker would find it extremely difficult to overwhelm the SGSN with signalling load unless a very large number of UEs are compromised. This feature also results in up to 30\% drop in the amount of signalling load traversing the radio network. \begin{figure}[t!]\centering% \includegraphics[width=0.49\textwidth]{gamma} \caption{The average signalling load ($msg/s$) on RNC and SGSN with and without PCH state versus attack rate $\alpha_H$ when $\alpha_L = 0$, normal traffic is characterised by $\lambda_L^{-1} = 600, \mu_L^{-1} = 5, \lambda_H^{-1} = 1800, \mu_H^{-1} = 120$, and using the parameters of Table~\ref{table-param} with $\tau_H^{-1} = \tau_L^{-1} = 5$s and $\tau_P^{-1} = 5$min. 
$\hat{\alpha}_H$ in \eqref{approx} provides a good estimate of the optimum attack rate.} \label{fig-gamma} \end{figure} \subsection{Radio Network}\label{sec:storm_radio} Numerical investigations suggest that the load on the radio network increases with the frequency of the malicious bursts up to a maximum level reached when {\em either} $\alpha_H$ or $\alpha_L$ tends to infinity, depending on the parameters of the network as well as the user's traffic characteristics. If PCH is enabled then the attacker could either induce the transition FACH $\to$ DCH as soon as the channel is released, or take a two-step approach to first move from PCH to FACH immediately after the timer $\tau_L^{-1}$ expires then trigger another transition to DCH some time later. Note that any other attack policy would be slowed down by the long timer $\tau_P^{-1}$ and thus would not succeed in creating a more severe impact. To investigate both policies, let us set $\alpha_L \to \infty$ so that the transition PCH $\to$ FACH is triggered repeatedly, creating a load on the radio network given by: \begin{align*} \gamma_r ~=~& \frac{n_{LH} + n_{HL}}{\theta_{LH} + \frac{\tau_L(\Lambda_H + \mu_L)}{\Lambda_H (\Lambda_H + \lambda_L + \mu_L)}~\theta_{PL}} \\ & +\frac{n_{PL} + n_{LP}}{\theta_{PL} + \frac{\Lambda_H (\Lambda_H + \lambda_L + \mu_L)}{\tau_L(\Lambda_H + \mu_L)}~\theta_{LH}},\qquad \alpha_L \to \infty, \end{align*} where $\theta_{XY} = \sigma_{XY}^{-1} + (1 + q_L)(1 + q_H)\tau_Y^{-1}+\sigma_{YX}^{-1}$. Now if we maximise the above expression with respect to $\alpha_H$, we obtain the following interesting result: \begin{equation} (\alpha_L^*,\alpha_H^*) = \left\{ \begin{array}{ll} (\infty, 0), & \text{if}~~\frac{n_{LH} + n_{HL}}{\theta_{LH}}\leq \frac{n_{PL} + n_{LP}}{\theta_{PL}},\\ (0,\infty), & \text{otherwise.} \end{array}\right. \end{equation} Therefore, the load on the radio network can be maximised through low (resp. 
high) bandwidth bursts that repeatedly induce the transition PCH $\to$ FACH (resp. FACH $\to$ DCH) if the condition $[n_{LH} + n_{HL}]\theta_{LH}^{-1}\leq [n_{PL} + n_{LP}]\theta_{PL}^{-1}$ is (resp. is not) satisfied. When PCH state is not used, we obtain similar results, but the attack is maximised by continuously triggering IDLE $\to$ FACH or FACH $\to$ DCH depending on whether the condition $[n_{LH} + n_{HL}]\theta_{LH}^{-1}\leq [n_{DL} + n_{LD}]\theta_{DL}^{-1}$ is satisfied or not, respectively. The worst case load on the RNC is then: \begin{equation*} \gamma_r^* = \max\left[\frac{n_{XL} + n_{LX}}{\theta_{XL}},\frac{n_{LH} + n_{HL}}{\theta_{LH}} \right], \end{equation*} $X=D$ or $P$ depending on which transition $L\to X$ is used. \subsection{Core Network}\label{sec:storm_core} Signalling between the UE and core network happens for a number of different reasons, but with respect to the RRC state machine, it usually occurs when the UE moves from/to the IDLE state. The attack against the core network can then be launched more effectively by causing a transition to FACH, rather than DCH, immediately after the user becomes IDLE so as to avoid the timer $\tau_H^{-1}$ and the associated demotion delay. 
Thus, optimally $\alpha_H^* =0$, and the attack rate that maximises the load on the core network can be shown to be: \begin{equation}\label{alpha-Lc} \alpha_L^* = \sqrt{c^2 + \frac{b-ca}{\theta_{PLH}} } -c -\lambda_L , \end{equation} where: \begin{align*} &\theta_{PLH} = \theta_{PL} + (1+q_L)\tau_L^{-1}\lambda_H \theta_{LH},\\ &a = \lambda_H [2\theta_{PLH} + \sigma_{DH}^{-1} - \sigma_{PL}^{-1} - \sigma_{LH}^{-1}] \\ &~~ + \tau_P [\theta_{PLH} + \sigma_{DL}^{-1} + \sigma_{LD}^{-1}]+(1 + q_L)(1 + q_H + \lambda_H \theta_{LH}),\\ &b = \lambda_H^2 [\theta_{PLH} + \sigma_{PH}^{-1} - \sigma_{PL}^{-1} - \sigma_{LH}^{-1}]\\ &~~ + \lambda_H \tau_P [\theta_{PLH} + \sigma_{DH}^{-1} + \sigma_{LD}^{-1} - \sigma_{LH}^{-1}]\\ &~~ +(\lambda_H + \tau_P)(1 + q_L)(1 + q_H + \lambda_H \theta_{LH}),\\ &c = \lambda_H ~ \frac{m_{DH} + m_{PD}}{m_{DL} + m_{PD}}. \end{align*} Obviously, the attack is worst when there is no background high bandwidth user traffic, in which case we end up with: \begin{equation*} \alpha_L^* = \sqrt{\frac{\tau_P [1+\frac{\lambda_L}{\mu_L}]}{\theta_{PL}}} - \lambda_L, \qquad \lambda_H = 0 \end{equation*} and consequently the maximum possible load that an attacker can impose on the SGSN is: \begin{align*} \gamma_c^* = \frac{m_{DL} + m_{PD}}{\sigma_{DL}^{-1}+ \sigma_{LP}^{-1}+ \sigma_{PL}^{-1}+ \sigma_{LD}^{-1} + (1+\frac{\lambda_L}{\mu_L})(\frac{1}{\tau_L} +\frac{1}{\tau_P} + \frac{2}{\Lambda_L^*})} \end{align*} with $\Lambda_L^* = \alpha_L^* + \lambda_L$. When $\tau_P \to \infty$, we get the intuitive result $\alpha_L^* = \infty$, i.e. the attacker should send a low-bandwidth traffic burst as soon as the timer $\tau_L^{-1}$ expires, leading to $\gamma_c^* =[m_{DL} + m_{LD}]\theta_{DLH}^{-1}$ where $\theta_{DLH} = \theta_{DL} + (1+q_L)\tau_L^{-1}\lambda_H \theta_{LH}$. In Fig.~\ref{fig-gc} we plot $\gamma_c$ versus the attack rates, and the numerical results indicate that $(\alpha_L^*,\alpha_H^*) = (0.02, 0)$ which coincide with the prediction of \eqref{alpha-Lc}. 
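As a numerical sanity check, the simplified expression for $\alpha_L^*$ in the case $\lambda_H = 0$ can be evaluated with the low bandwidth traffic profile of Fig.~\ref{fig-gc} and the transition delays of Table~\ref{table-param}. The short sketch below is ours (variable names are illustrative); it reproduces a value close to the reported optimum $(\alpha_L^*,\alpha_H^*) = (0.02, 0)$.

```python
from math import sqrt

# Traffic profile of Fig. fig-gc and delays from Table 1; variable names are ours.
lam_L, mu_L = 1/300, 1/5   # low bandwidth call arrival/completion rates
tau_L = 1/5                # FACH inactivity timer rate (tau_L^-1 = 5 s)
tau_P = 1/300              # PCH inactivity timer rate (tau_P^-1 = 5 min)
s_PL, s_LP = 0.15, 0.1     # delays sigma_{PL}^-1 and sigma_{LP}^-1 in seconds

# theta_PL with lambda_H = 0, so that q_L = lam_L/mu_L and q_H = 0.
q_L = lam_L / mu_L
theta_PL = s_PL + (1 + q_L) * (1 / tau_L) + s_LP

alpha_L_star = sqrt(tau_P * (1 + lam_L / mu_L) / theta_PL) - lam_L
print(round(alpha_L_star, 3))  # 0.022, close to the reported optimum of 0.02
```

The small discrepancy (0.022 versus 0.02) comes from neglecting the background high bandwidth traffic ($\lambda_H^{-1} = 600$~s in the figure) in the $\lambda_H = 0$ approximation.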
\begin{figure}[t!]\centering% \includegraphics[width=0.38\textwidth]{gc} \caption{The average signalling load ($msg/s$) on SGSN versus the attack rates $\alpha_H, \alpha_L$, when the normal traffic profile is $\lambda_L^{-1} = 300, \mu_L^{-1} = 5, \lambda_H^{-1} = 600, \mu_H^{-1} = 180$, and the timers are $\tau_H^{-1} = \tau_L^{-1} =5$s and $\tau_P^{-1} = 5$~min.} \label{fig-gc} \end{figure} In practice, however, mounting an attack based solely on low bandwidth bursts may not be feasible. To begin with, it may be difficult to accurately estimate the RLC buffer's thresholds which determine whether a session will be handled through the low or high speed channel, and also the thresholds could differ from one RNC to another. Furthermore, many operators choose to move users directly into DCH or use very small RLC thresholds such that even keep-alive messages are sent over the high speed channel \cite{Qian2010}. Thus, a more practical approach for an attacker is to assume that the majority of data transmissions are handled through DCH, and in turn compute an attack rate $\hat{\alpha}_H$ that maximises the load on the SGSN under such circumstances, i.e.: \begin{equation*} \hat{\alpha}_H = \argmax_{\alpha_H} \quad \gamma_c, \qquad \text{when}\quad \Lambda_H \gg \Lambda_L, \end{equation*} yielding: \begin{align}\nonumber \hat{\alpha}_H ~=~& \sqrt[3]{-\frac{B}{2} + \sqrt{\frac{B^2}{4} + \frac{A^3}{27}}}+ \sqrt[3]{-\frac{B}{2} - \sqrt{\frac{B^2}{4} + \frac{A^3}{27}}}\\\label{approx} & ~~ - \frac{b}{6a} - \lambda_H, \end{align} where: \begin{align*} A &= -\frac{b^2}{12a^2}, \quad B = \frac{b^3}{108a^3} - \frac{c}{2a}, \\ a &= \sigma_{LH}^{-1} + [1 + \frac{\lambda_H}{\mu_H}] \tau_H^{-1}+ \sigma_{HL}^{-1}, \\ b &= \tau_L (\sigma_{PH}^{-1} + [1 + \frac{\lambda_H}{\mu_H}][\tau_H^{-1}+\tau_L^{-1}]+ \sigma_{HL}^{-1}+ \sigma_{LP}^{-1}) + \tau_P a , \\ c &= \tau_L \tau_P [1 + \frac{\lambda_H}{\mu_H}].
\end{align*} When PCH is disabled, we have: \begin{equation*} \hat{\alpha}_H=\sqrt{\frac{\tau_L [1+\frac{\lambda_H}{\mu_H}]}{\sigma_{LH}^{-1} + [1 + \frac{\lambda_H}{\mu_H}]\tau_H^{-1} + \sigma_{HL}^{-1} } }- \lambda_H \end{equation*} and the resulting load on the SGSN becomes: \begin{equation*} \hat{\gamma}_c = \frac{m_{DH} + m_{LD}}{\sigma_{DH}^{-1}+ \sigma_{HL}^{-1}+ \sigma_{LD}^{-1}+ (1+\frac{\lambda_H}{\mu_H})(\frac{1}{\tau_H} +\frac{1}{\tau_L} + \frac{2}{\hat{\alpha}_H+\lambda_H})}. \end{equation*} Fig.~\ref{fig-gamma} shows that $\hat{\alpha}_H$ provides a good estimate of the optimum value $\alpha_H^*$ even when $\lambda_L > \lambda_H$. Fig.~\ref{fig-util} illustrates the manner in which the frequency of malicious traffic bursts affects signalling overhead as well as the {\em tail} which is the time the UE spends in FACH or DCH waiting for a time-out to expire. During these inactive periods, the mobile wastes considerable radio resources in the network as well as its own limited battery energy. As the attack rate increases, the proportion of time the UE remains inactive in either FACH or DCH also increases, while its average data volume is almost constant. This observation could be used by anomaly detection techniques to distinguish between normal ``heavy'' users and attackers: the former can be recognised by their low inactive times, while the latter can be detected by frequent connection attempts and low data volume. \begin{figure}[t!]\centering% \includegraphics[width=0.49\textwidth]{util2} \caption{The fraction of time the UE spends in DCH and FACH waiting for a timer or state transition (solid line) and while using the bandwidth (dotted line) as a function of $\alpha_H$, when $\alpha_L = 0, \lambda_L^{-1} = 600, \mu_L^{-1} = 5, \lambda_H^{-1} = 1800, \mu_H^{-1} = 120, \tau_H^{-1} = 2$s, $\tau_L^{-1} = 5$s and $\tau_P^{-1} = 10$~min. 
Large inactive times indicate anomalous signalling behaviour.} \label{fig-util} \end{figure} Finally, we examine in Fig.~\ref{fig-storm} the effect of a signalling storm on the RNC and SGSN when the total number of UEs is 10,000 and the percentage of misbehaving ones is increased from 0 to 20\%. Comparing the maximum load on the targeted network component and the corresponding load on the other, we see that PCH state prevents a situation where both the RNC and SGSN are simultaneously exposed to worst case loads, which happens when IDLE$\to$ FACH is the bottleneck transition in the radio network (cf. Section~\ref{sec:storm_radio}). In general, the radio network is less sensitive to the choice of the malicious bursts, as long as they are frequent, and thus it is more vulnerable to signalling storms. On the other hand, the load on the core network changes dramatically when the storm is optimised, which may not happen often, making signalling overloads in the SGSN a less likely event. This does not, however, include the effect of complex pricing and business models used by the operator which may exacerbate signalling load in the core network. \begin{figure}[t!]\centering% \includegraphics[width=0.49\textwidth]{optimumStorm} \caption{Load on RNC and SGSN versus percentage of mobile devices participating in a storm out of 10,000 users, when $\lambda_L^{-1} = 600, \mu_L^{-1} = 5, \lambda_H^{-1} = 600, \mu_H^{-1} = 180, \tau_H^{-1} = \tau_L^{-1}=5$s and $\tau_P^{-1}=10$~min. When PCH is enabled a storm can cause maximum load on {\em either} the radio or core network, but without PCH both of them could be targeted simultaneously.} \label{fig-storm} \end{figure} \section{Conclusions} This paper has focused on the behaviour of a mobile network user with a view to determining network overload in signalling servers and base stations that can result from signalling misbehaviours such as signalling storms. 
Such misbehaviours can be caused by poorly designed mobile apps, outages in cloud services, large scale malware infections, or malicious network attacks. In the course of this work we have derived a Markov model of user behaviour that can also be exploited in other studies concerning mobile networks as a whole. The Markov model has been solved analytically, and used to derive conditions and parameters for which the signalling misbehaviours can cause the largest damage and which therefore need to be avoided. The analytical results have been illustrated with several numerical examples, and we expect that this work will lead to ideas relating to control algorithms that can adaptively react to network measurements so as to eliminate or mitigate the effect of signalling storms and DoS attacks. \section*{Acknowledgment} The authors acknowledge the support of the EU FP7 project NEMESYS, Grant Agreement no. 317888.
\section{Introduction} Throughout this paper we let $\HC$ denote a separable Hilbert space with inner product $\left\langle\cdot,\cdot\right\rangle_\HC$. Unless we explicitly state otherwise, we assume that $\HC$ is infinite-dimensional. We denote by $\LC=\LC\left(\HC\right)$ the space of bounded linear transformations on $\HC$, by $\SC^1$ the corresponding trace class, and by $\SC^2$ the Hilbert--Schmidt class. $\XC$ will be used as a generic notation for an element of the set $\left\{\HC,\LC,\SC^1,\SC^2\right\}$. We will use $\YC$ to denote a general Banach space. By $\Hol\left(\YC\right)$ we denote the space of $\YC$-valued analytic functions on the open unit disc $\D$. For $f\in\Hol(\YC)$, we denote the $n$th Taylor coefficients at the origin by $\hat f(n)$. We denote by $\OC\left(\YC\right)$ the space of functions in $\Hol\left(\YC\right)$ that admit an analytic extension to a larger disc (centered at the origin). If $\YC=\C$, then we suppress this in our notation, i.e. $\Hol=\Hol\left(\C\right)$, and $\OC=\OC\left(\C\right)$. The same principle will apply to all function spaces discussed below. For $p\in\left[1,\infty\right]$ and $\XC\in\left\{\HC,\SC^1\right\}$, we let $L^p\left(\T,\XC\right)$ denote the standard space of $p$-Bochner--Lebesgue integrable functions from $\T$ to $\XC$. Here $\T$ denotes the unit circle in $\C$. Similarly, we define $L^p\left(\T,\LC\right)$ as the natural WOT-analogue of $L^p(\T)$: A function $f:\T\to\LC$ belongs to $L^p\left(\T,\LC\right)$ if and only if for all $x,y\in\HC$ the function $\left\langle f\left(\cdot\right) x,y\right\rangle_\HC$ is measurable and, moreover, $\left\|f\right\|_{L^p\left(\T,\LC\right)}^p = \int_{\T}\left\|f\right\|_\LC^p\, dm<\infty$. Here $m$ denotes normalized Lebesgue measure on $\T$. 
The Hardy space $H^p\left(\XC\right)$ is the space of $f\in\Hol\left(\XC\right)$ such that \begin{equation}\label{Eq:Hp-norm} \left\|f\right\|_{H^p\left(\XC\right)}^p=\sup_{0<r<1}\left\|f_r\right\|_{L^p\left(\T,\XC\right)}^p<\infty, \end{equation} where we have defined the function $f_r:z\mapsto f\left(rz\right)$. An important property of Hardy space functions is that they have boundary values in a natural sense, cf. Proposition \ref{Proposition:BoundaryIdentification}. We denote the boundary values of $f\in H^p\left(\XC\right)$ by $bf\in L^p(\T,\XC)$. The space $H^2\left(\HC\right)$ is a Hilbert space, with inner product $\left\langle f,g\right\rangle = \sum_{n=0}^\infty \langle \hat f(n),\hat g(n)\rangle_\HC$. Of particular importance will be the set of $H^2\left(\HC\right)$-normalized functions in $\OC\left(\HC\right)$, which we denote by $\OC_1\left(\HC\right)$. We now introduce the main topics of this paper. Initially, we consider the scalar setting, rather than the proper vectorial one. \subsection{Hankel operators} Given $\phi\in\Hol$ and $f\in\OC$, we define the action of the Hankel operator $\Gamma_\phi$ on $f$ by \begin{equation}\label{Eq:HardyHankelFormula} \Gamma_\phi f\left(z\right)=\sum_{n=0}^\infty \left(\sum_{m=0}^\infty\hat \phi\left(m+n\right)\hat f\left(m\right) \right)z^n,\quad z\in\D. \end{equation} A standard reference on Hankel operators is \cite{Peller2003:HankOpsBook}. We refer to $\phi$ as the symbol of $\Gamma_\phi$. We say that $\Gamma_\phi$ is bounded if it extends to a bounded operator on $H^2$. For $\Gamma_\phi$ to be bounded it is necessary for $\phi$ to be in $H^2$. For $\phi\in H^2$, one shows by computation that $\Gamma_\phi f=P_+ \left(\phi \conjvar{f} \right)$, where $P_+$ denotes the orthogonal projection from $L^2\left(\T\right)$ onto $H^2$, and $\conjvar{ f}:z\mapsto f\left(\conj{ z}\right)$. It is convenient to define the operation of coefficient conjugation, $f\mapsto f^\#$, $f^\# (z)=\conj{f(\conj z)}$.
Note that this is an isomorphism on $H^2$. A classical result is that $H^1=H^2\cdot H^2$: If $f,g\in H^2$, then $f\cdot g\in H^1$, and $\left\|f\cdot g\right\|_{H^1}\le \left\|f\right\|_{H^2}\left\|g\right\|_{H^2}$. Conversely, if $h\in H^1$, then there exist $f,g\in H^2$ such that $h=f\cdot g$ and $\left\|f\right\|_{H^2}\left\|g\right\|_{H^2}\le C\left\|h\right\|_{H^1}$, where $C>0$ is a constant independent of $f$ and $g$. Now choose $f$ so that $f^\# g = h$. By the calculation \[ \left\langle\Gamma_\phi f,g\right\rangle = \left\langle P_+ \left(\phi\conjvar{f} \right),g\right\rangle= \left\langle \phi\conjvar{f},g\right\rangle= \left\langle\phi,f^\#g\right\rangle = \left\langle\phi,h\right\rangle, \] one obtains that $\Gamma_\phi$ is bounded if and only if $\phi\in\left(H^1\right)^*$. Since $H^1$ may be identified with a subspace of $L^1(\T)$, and $\left(L^1\left(\T\right)\right)^*=L^\infty\left(\T\right)$, a straightforward application of the Hahn--Banach theorem shows that $\left(H^1\right)^*=P_+L^\infty\left(\T\right)$. The fact that $\Gamma_\phi$ is bounded if and only if $\phi\in P_+L^\infty\left(\T\right)$ is known as Nehari's theorem \cite{Nehari1957:BddBilinFrms}. \subsection{Carleson embeddings} Every Borel measure $\mu\ge 0$ on $\D$ corresponds to a so-called Carleson embedding $H^2\hookrightarrow L^2(\D,d\mu)$. It is a classical result \citelist{\cite{Carleson1958:InterpolProblBddAnalFcns}\cite{Carleson1962:InterpolBddAnalFcnsCoronaProbl}} in complex and harmonic analysis that boundedness of such embeddings can be characterized by a simple geometric property of $\mu$.
Specifically, the Carleson embedding condition \begin{equation} \sup_{f\in\OC_1} \int_\D \left|f\left(z\right) \right|^2 \, d\mu\left(z\right)<\infty \end{equation} holds if and only if $\mu$ satisfies the so-called Carleson intensity condition \begin{equation}\label{Eq:CarlesonInt} \sup_{\substack{I\subset\T\\ I\textnormal{ arc}}}\frac{\mu\left(\left\{w\in\D;\, 1-m(I)<\left|w\right|<1,\, \frac{w}{\left|w\right|}\in I\right\}\right)}{m\left(I\right)}<\infty. \end{equation} \subsection{Bounded mean oscillation} A bridge connecting Hankel operators and Carleson embeddings is given by $\textrm{BMOA}$: bounded mean oscillation of analytic functions. Suppose that $\phi\in H^1$. We then say that $\phi$ belongs to the class $\textrm{BMOA}$ if and only if \[ \left\|\phi\right\|_*=\sup_{\substack{I\subset\T\\ I\textnormal{ arc}}}\frac{1}{m\left(I\right)}\int_I \left|b\phi-\left(b\phi\right)_I \right|\, dm<\infty. \] Here $\left(b\phi\right)_I$ denotes the Lebesgue integral average $\frac{1}{m\left(I\right)}\int_I b\phi\, dm$. The quantity $\left\|\cdot\right\|_*$ is a semi-norm. The class $\textrm{BMOA}$ becomes a Banach space when equipped with the norm $\left\|\phi\right\|_{\textrm{BMOA}}=\left|\phi\left(0\right)\right|+\left\|\phi\right\|_*$. A celebrated result by Fefferman \citelist{\cite{Fefferman1971:CharBMO}\cite{Fefferman-Stein1972:HpSpaces}} is that $\textrm{BMOA}$ is in fact the dual of $H^1$. Moreover, $\phi\in \textrm{BMOA}$ if and only if the measure $\mu$ given by $d\mu = \left|\phi'(z)\right|^2\left(1-\left|z\right|^2\right)\, dA(z)$ satisfies \eqref{Eq:CarlesonInt}. As a summary of this discussion we have: \begin{proposition} Let $\phi\in H^1$. Then the following are equivalent: \begin{itemize} \item[$(i)$] $\Gamma_\phi$ is $H^2$-bounded. \item[$(ii)$] $\phi\in\left(H^1\right)^*$. \item[$(iii)$] $\phi\in P_+L^\infty\left(\T\right)$. \item[$(iv)$] $\phi\in \textrm{BMOA}$.
\item[$(v)$] The measure given by $d\mu = \left|\phi'(z)\right|^2\left(1-\left|z\right|^2\right)\, dA(z)$ has finite Carleson intensity. \item[$(vi)$] The Carleson embedding $H^2\hookrightarrow L^2\left(\D,\left|\phi'(z)\right|^2\left(1-\left|z\right|^2\right)\, dA\left(z\right)\right)$ is bounded. \end{itemize} \end{proposition} \subsection{The vectorial setting} Note that \eqref{Eq:HardyHankelFormula} makes perfect sense if $\phi\in\Hol(\LC)$ and $f\in\OC(\HC)$. We take this as the definition of a vectorial Hankel operator $\Gamma_\phi$. The factorization result $H^1\left(\SC^1\right)=H^2\left(\SC^2\right)\cdot H^2\left(\SC^2\right)$, due to Sarason \cite{Sarason1967:GenInterpol}, implies that $\Gamma_\phi$ is $H^2(\HC)$-bounded if and only if $\phi\in \left(H^1\left(\SC^1\right)\right)^*$, very much like in the scalar setting. Since $\left(L^1\left(\T,\SC^1\right)\right)^*$ is not equal to $L^\infty(\T,\LC)$ ($\LC$ does not have the so-called Radon--Nikodym property, e.g. \cite{Diestel-Uhl1977:VecMeasures}), it is not obvious that $\left(H^1\left(\SC^1\right)\right)^*=P_+L^\infty\left(\T,\LC\right)$. However, this follows from a vectorial extension of Nehari's theorem, due to Page \cite{Page1970:BddCompctVecHankOps}: $\Gamma_\phi$ is $H^2\left(\HC\right)$-bounded if and only if $\phi\in P_+L^\infty\left(\T,\LC\right)$. The space of $\LC$-valued analytic functions for which the corresponding Hankel operators are $H^2\left(\HC\right)$-bounded is commonly referred to as \emph{Nehari--Page} $\textrm{BMOA}$: \begin{definition}\label{def:NP} Let $\phi\in\Hol\left(\LC\right)$. We then say that $\phi\in \textrm{BMOA}_{\NC\PC}\left(\LC\right)$ if and only if \[ \left\|\phi\right\|_{\textrm{BMOA}_{\NC\PC}}=\sup_{f\in\OC_1\left(\HC\right)}\left\|\Gamma_\phi f\right\|_{H^2\left(\HC\right)}<\infty.
\] \end{definition} While $\textrm{BMOA}_{\NC\PC}\left(\LC\right)$ can be identified either with $P_+L^\infty\left(\T,\LC\right)$ or with $\left(H^1\left(\SC^1\right)\right)^*$, these characterizations are of an abstract nature. Finding concrete conditions that characterize $\textrm{BMOA}_{\NC\PC}\left(\LC\right)$ has proven to be notoriously difficult. For example, define the class $\textrm{BMOA}_\OC\left(\LC\right)$ as the class of $\phi\in H^1\left(\LC\right)$ such that the oscillation condition \[ \left\|\phi\right\|_*=\sup_{\substack{I\subset\T\\ I\textnormal{ arc}}}\frac{1}{m\left(I\right)}\int_I\left\|b\phi-\left(b\phi\right)_I\right\|_\LC\, dm<\infty \] holds. Then \[ \textrm{BMOA}_\OC\left(\LC\right)\subsetneq \textrm{BMOA}_{\NC\PC}\left(\LC\right). \] This strict inclusion has motivated an area of research in which authors consider some aspect of the theory for scalar-valued $\textrm{BMOA}$ (or its harmonic or dyadic analogues), and then discuss to what extent this aspect carries over to the vector-valued case, e.g. \citelist{\cite{Blasco1988:HardySpacesVecValDuality}\cite{Blasco-Pott2008:EmbOpValDyadicBMO}\cite{Bourgain1986:VecValSingIntsHardy-BMODualityChapter}\cite{Gillespie-Pott-Treil-Volberg2004:LogGrowthHilbTransfVecHank}\cite{Mei2006:MatValParaprods}\cite{Nazarov-Pisier-Treil-Volberg2002:EstsVecCarlesonEmbThmVecParaprods}\cite{Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm}}. Before we get to the meat of this paper, we define the differentiation operator $D:\Hol\left(\YC\right)\to\Hol\left(\YC\right)$ by $Df\left(z\right)=zf'\left(z\right)+f\left(z\right)$. With respect to the monomial basis, $D$ acts like a diagonal matrix. This presents an elementary way of taking arbitrary powers of $D$: For $\alpha\in\R$, we set \[ D^\alpha f\left(z\right)=\sum_{n=0}^\infty \left(1+n\right)^\alpha \hat f\left(n\right) z^n,\quad z\in\D. \] Another convenience of working with $D$ in place of ordinary differentiation is that it does not annihilate constants.
In fact we can say more: For each $\alpha\in\R$, $D^\alpha:\Hol\left(\YC\right)\to\Hol\left(\YC\right)$ is a bijection that leaves $\OC\left(\YC\right)$ invariant. From a technical point of view, the present paper is mainly concerned with $H^2(\HC)$-boundedness of operators of the type $D^\alpha\Gamma_\phi$, with $\alpha>0$ and $\phi\in\Hol\left(\LC\right)$. The present paper was originally motivated by the natural appearance of such operators in control theory, e.g. \cite{Jacob-Rydhe-Wynn2014:WeightWeissConjRKTGenHankOps}. However, they also have implications for our understanding of $\textrm{BMOA}_{\NC\PC}\left(\LC\right)$. Our investigation motivates the definition of a class which we refer to as \emph{Carleson} $\textrm{BMOA}$: \begin{definition}\label{def:C} Let $\phi\in\Hol\left(\LC\right)$. We then say that $\phi\in \textrm{BMOA}_{\CC}\left(\LC\right)$ if and only if \begin{equation}\label{Eq:AAC} \left\|\phi\right\|_{\textrm{BMOA}_\CC}^2=\sup_{f\in\OC_1\left(\HC\right)} \int_\D \left\| \left(D\phi\right)\left(z\right) f\left(\conj{ z}\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)\, dA\left(z\right)<\infty. \end{equation} \end{definition} Since $D$ does not annihilate constants, $\left\|\cdot\right\|_{\textrm{BMOA}_\CC}$ is a proper norm and not a semi-norm. \subsection{Main result and corollaries} \begin{theorem}\label{Theorem:HankelCarleson} Let $\HC$ be a separable Hilbert space, $\LC$ its space of bounded linear transformations. Let $\alpha>0$ and suppose that $\phi:\D\to\LC$ is analytic. Then $D^\alpha \Gamma_\phi$ is $H^2\left(\HC\right)$-bounded if and only if $D^\alpha \phi\in \textrm{BMOA}_{\CC}\left(\LC\right)$, i.e.
\[ \left\|D^\alpha\Gamma_\phi\right\|_{H^2\left(\HC\right)\to H^2\left(\HC\right)}=\sup_{f\in\OC_1\left(\HC\right)}\left\|D^\alpha\Gamma_\phi f\right\|_{H^2\left(\HC\right)}<\infty \] if and only if \[ \left\|D^\alpha\phi\right\|_{\textrm{BMOA}_\CC}=\sup_{f\in\OC_1\left(\HC\right)}\int_\D \left\| \left(D^{1+\alpha}\phi\right)\left(z\right) f\left(\conj{ z}\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)<\infty. \] Moreover, \[ \left\|D^\alpha\Gamma_\phi\right\|_{H^2\left(\HC\right)\to H^2\left(\HC\right)}\approx \left\|D^\alpha\phi\right\|_{\textrm{BMOA}_\CC}. \] \end{theorem} Theorem \ref{Theorem:HankelCarleson} generalizes a result by Janson and Peetre \cite{Janson-Peetre1988:Paracomms} who obtained essentially the above characterization in the case where $\HC=\C$. We point out that, in the case where $\phi$ is $\LC$-valued, we are forced to avoid the Schur multiplier techniques used in \cite{Janson-Peetre1988:Paracomms}. This is made evident by the discussion in \cite{Davidson-Paulsen1997:PolBddOps}*{Section 4}. Operators of the type $D\Gamma_\phi$ received a lot of attention in connection with the so-called Halmos problem \cite{Halmos1970:TenProbls}*{Problem 6}: \begin{quote} If a Hilbert space operator is similar to a Hilbert space contraction, then it is also polynomially bounded (by von Neumann's inequality). Is the converse true? \end{quote} Following the works of many authors \citelist{\cite{Aleksandrov-Peller1996:HankOpsSimToContr}\cite{Bourgain1986:SimProblPolBddOpsHSpace}\cite{Foguel1964:CounterExSz.-NagyProbl}\cite{Paulsen1984:ComplPolBddSimContr}\cite{Peller1982:EstsFcnsPwrBddOpsHSpace}\cite{Sz.-Nagy1959:ComplContOpsUniformlyBddIterates}}, Pisier \cite{Pisier1997:PolBddNotSim} answered this question in the negative. Subsequently, different proofs of the same result have been given in several papers \citelist{\cite{Davidson-Paulsen1997:PolBddOps}\cite{Kislyakov2000:OpsDisSimContr}}.
All of these proofs exploit boundedness properties of operators of the type $D\Gamma_\phi$. The following two propositions are essentially from Davidson and Paulsen \cite{Davidson-Paulsen1997:PolBddOps}: \begin{proposition}\label{Proposition:Davidson-Paulsen1} Let $\alpha>0$, and $\HC$ be a separable, infinite-dimensional Hilbert space, $\LC$ its space of bounded linear transformations. Then there exists an analytic function $\phi:\D\to \LC$ such that $D^\alpha\Gamma_\phi$ is bounded on $H^2\left(\HC\right)$, while $\Gamma_\phi D^\alpha$ is not. Moreover, $\phi$ may be chosen to be rank-one-valued. \end{proposition} \begin{proposition}\label{Proposition:Davidson-Paulsen2} Let $\alpha>0$, and $\HC$ be a separable, infinite-dimensional Hilbert space, $\LC$ its space of bounded linear transformations. Then there exists a bounded analytic function $\phi:\D\to \LC$ such that $D^\alpha\Gamma_{D^{-\alpha}\phi}$ is not bounded on $H^2\left(\HC\right)$. \end{proposition} \begin{remark}\label{Remark:Davidson-Paulsen} Proposition \ref{Proposition:Davidson-Paulsen1} is stated for $\alpha=1$ in \cite{Davidson-Paulsen1997:PolBddOps}*{Example 4.6}. Proposition \ref{Proposition:Davidson-Paulsen2} is essentially stated for $\alpha=1$ in \cite{Davidson-Paulsen1997:PolBddOps}*{Corollary 4.2}, but the statement there does not explicitly mention the boundedness of $\phi$, even though it follows from the original proof. A dyadic analogue of this result has been proved by Mei \cite{Mei2006:MatValParaprods}. For the convenience of the reader, we present proofs of the above propositions in Section \ref{Sec:Davidson-Paulsen}. \end{remark} Combining the results by Davidson and Paulsen with Theorem \ref{Theorem:HankelCarleson}, we are able to derive several interesting results. Given $\phi\in\Hol\left(\LC\right)$, we define the function $\phi^\#:z\mapsto \phi\left(\conj{ z}\right)^*$. This is the function obtained by taking the Hilbert space conjugate of each Taylor coefficient of $\phi$.
Note that $\Gamma_\phi D=\left(D\Gamma_{\phi^\#}\right)^*$. By Proposition \ref{Proposition:Davidson-Paulsen1} and Theorem \ref{Theorem:HankelCarleson}, it follows that $\textrm{BMOA}_{\CC}\left(\LC\right)$ is not closed under coefficient conjugation (cf. \cite{Aleman-Perfekt2012:HankFrmsEmbThmsDirichletSpaces}*{Proposition 3.3}). On the other hand, $\textrm{BMOA}_{\NC\PC}\left(\LC\right)$ is obviously closed under coefficient conjugation. We obtain the following corollary: \begin{corollary}\label{Corollary:CNP} Let $\HC$ be a separable infinite-dimensional Hilbert space, $\LC$ its space of bounded linear transformations. Then $\textrm{BMOA}_\CC\left(\LC\right)$ is not closed under the map $\phi\mapsto\phi^\#$, where $\phi^\#\left(z\right)=\phi\left(\conj{ z}\right)^*$. In particular \[ \textrm{BMOA}_{\CC}\left(\LC\right)\ne \textrm{BMOA}_{\NC\PC}\left(\LC\right), \] i.e. $H^2\left(\HC\right)$-boundedness of $\Gamma_\phi$ is not characterized by the anti-analytic Carleson embedding condition indicated by Theorem \ref{Theorem:HankelCarleson}. \end{corollary} Corollary \ref{Corollary:CNP} motivates the following definition: \begin{definition}\label{def:CS} Let $\phi\in\Hol\left(\LC\right)$. We then say that $\phi\in \textrm{BMOA}_{\CC^\#}\left(\LC\right)$ if and only if $\phi^\#\in \textrm{BMOA}_{\CC}\left(\LC\right)$. \end{definition} Consider now the relation \begin{align}\label{Eq:LeibnizDecomposition} \Gamma_{D\phi} &= D\Gamma_{\phi}+\left(D\Gamma_{\phi^\#}\right)^*-\Gamma_\phi, \end{align} which is obtained by duality, and the Leibniz rule for $D$. The operator $\Gamma_\phi$ is bounded on $H^2\left(\HC\right)$, whenever any of the other terms in \eqref{Eq:LeibnizDecomposition} is bounded, since then $D\phi$ is a Bloch function (cf. Lemma \ref{Lemma:HankelCarlesonBloch} below). 
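For the reader's convenience, we sketch how \eqref{Eq:LeibnizDecomposition} is verified on the level of Taylor coefficients. Using \eqref{Eq:HardyHankelFormula} and the elementary identity $1+n+k=\left(1+n\right)+\left(1+k\right)-1$, we obtain
\begin{align*}
\left(\Gamma_{D\phi}f\right)^{\hat{}}\left(n\right) &= \sum_{k=0}^\infty \left(1+n+k\right)\hat \phi \left(n+k\right)\hat f\left(k\right) \\
&= \left(D\Gamma_{\phi}f\right)^{\hat{}}\left(n\right)+\left(\Gamma_{\phi}Df\right)^{\hat{}}\left(n\right)-\left(\Gamma_{\phi}f\right)^{\hat{}}\left(n\right),\quad f\in\OC\left(\HC\right),\ n\in\N_0,
\end{align*}
so that $\Gamma_{D\phi}=D\Gamma_\phi+\Gamma_\phi D-\Gamma_\phi$; substituting $\Gamma_\phi D=\left(D\Gamma_{\phi^\#}\right)^*$ yields \eqref{Eq:LeibnizDecomposition}.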
In light of Theorem \ref{Theorem:HankelCarleson}, it is then clear from \eqref{Eq:LeibnizDecomposition} that \begin{equation}\label{Eq:CC*NP} \textrm{BMOA}_{\CC}\left(\LC\right)\cap \textrm{BMOA}_{\CC^\#}\left(\LC\right)\subsetneq \textrm{BMOA}_{\NC\PC}\left(\LC\right). \end{equation} We point out that the above inclusion also follows implicitly from the proof of \cite{Nazarov-Pisier-Treil-Volberg2002:EstsVecCarlesonEmbThmVecParaprods}*{Theorem 0.8}. However, we also obtain that the inclusion is strict. To see that this is so, suppose that it is not. This would only be possible if $\textrm{BMOA}_{\NC\PC}\left(\LC\right)$ were contained in $\textrm{BMOA}_{\CC}\left(\LC\right)$. By another application of Theorem \ref{Theorem:HankelCarleson}, this would contradict Proposition \ref{Proposition:Davidson-Paulsen2}. We summarize the above discussion: \begin{corollary}\label{Corollary:CC*NP} Let $\HC$ be a separable Hilbert space, $\LC$ its space of bounded linear transformations. If $\phi:\D\to\LC$ is an analytic function such that \[ \left\|\phi\right\|_{\textrm{BMOA}_\CC}=\sup_{f\in\OC_1\left(\HC\right)}\int_\D \left\| \left(D\phi\right)\left(z\right) f\left(\conj{ z}\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)<\infty, \] and \[ \left\|\phi^\#\right\|_{\textrm{BMOA}_\CC}=\sup_{f\in\OC_1\left(\HC\right)}\int_\D \left\| \left(D\phi\right)\left(\conj{ z}\right)^* f\left(\conj{ z}\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)<\infty, \] then \[ \left\|\Gamma_\phi\right\|_{H^2\left(\HC\right)\to H^2\left(\HC\right)}=\sup_{f\in\OC_1\left(\HC\right)}\left\|\Gamma_\phi f\right\|_{H^2\left(\HC\right)}<\infty. \] Moreover, \[ \left\|\Gamma_\phi\right\|_{H^2\left(\HC\right)\to H^2\left(\HC\right)}\lesssim \left\|\phi\right\|_{\textrm{BMOA}_\CC}+\left\|\phi^\#\right\|_{\textrm{BMOA}_\CC}. \] If $\HC$ is infinite-dimensional, then the converse statement does not hold.
\end{corollary} Condition \eqref{Eq:AAC} states that $H^2\left(\HC\right)$ is continuously embedded into $L^2\left(\D,\HC,d\mu\right)$, where $\mu$ is a certain operator-valued measure. It is natural to think of this as an embedding of anti-analytic functions, rather than analytic ones. For this reason, we call \eqref{Eq:AAC} the anti-analytic Carleson embedding, to be distinguished from the analytic one, which is given by the straightforward modification \eqref{Eq:AC} below. In the scalar case it is obvious that these two conditions are equivalent. In the general case, this is no longer obvious. In fact, whether or not the two conditions define the same class of functions was posed as an open question by Nazarov, Treil, and Volberg in \cite{Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm}. They later restated the question in a joint paper with Pisier \cite{Nazarov-Pisier-Treil-Volberg2002:EstsVecCarlesonEmbThmVecParaprods}. We answer this question in the negative: \begin{corollary} Let $\HC$ be a separable infinite-dimensional Hilbert space, $\LC$ its space of bounded linear transformations. Then there exists a bounded analytic function $\phi:\D\to\LC$ such that \begin{equation}\label{Eq:AC} \sup_{f\in\OC_1\left(\HC\right)}\int_\D \left\| \left(D\phi\right)\left(z\right) f\left(z\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)<\infty, \end{equation} while \[\tag{\ref{Eq:AAC}$'$} \sup_{f\in\OC_1\left(\HC\right)}\int_\D \left\| \left(D\phi\right)\left(z\right) f\left(\conj{ z}\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)=\infty. \] \end{corollary} The proof is as follows: Since $D$ is an isomorphism from $H^2\left(\HC\right)$ to the standard weighted Bergman space $A_1^2\left(\HC\right)$, it follows from the Leibniz rule that \eqref{Eq:AC} is satisfied whenever $\phi$ is bounded.
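In more detail, we sketch this step of the argument. Up to a constant, \eqref{Eq:AC} states that $\left\|\left(D\phi\right)f\right\|_{L_1^2\left(\HC\right)}$ is bounded over $f\in\OC_1\left(\HC\right)$. By the Leibniz rule for $D$,
\[
\left(D\phi\right)f=D\left(\phi f\right)-\phi\, Df+\phi f,
\]
and each term on the right-hand side is bounded in $L_1^2\left(\HC\right)$: we have $\left\|D\left(\phi f\right)\right\|_{A_1^2\left(\HC\right)}\approx\left\|\phi f\right\|_{H^2\left(\HC\right)}\le\left\|\phi\right\|_{H^\infty\left(\LC\right)}$, while $\left\|\phi\, Df\right\|_{L_1^2\left(\HC\right)}\le\left\|\phi\right\|_{H^\infty\left(\LC\right)}\left\|Df\right\|_{A_1^2\left(\HC\right)}\approx\left\|\phi\right\|_{H^\infty\left(\LC\right)}$, and similarly for the term $\phi f$.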
On the other hand, by Proposition \ref{Proposition:Davidson-Paulsen2} and Theorem \ref{Theorem:HankelCarleson}, there exists $\phi\in H^\infty\left(\LC\right)$ that satisfies $\left(\ref{Eq:AAC}'\right)$. \qed By standard duality arguments, $D^\alpha\Gamma_\phi$ is $H^2\left(\HC\right)$-bounded if and only if $\phi$ is in the dual of the space \[ D^{-\alpha} \left( \left(D^\alpha H^2\left(\HC\right) \right)\hat \otimes \conj{H^2\left(\HC\right)} \right)=D^{-\alpha} \left( \left(D^\alpha H^2\left(\SC^2\right) \right) \cdot H^2\left(\SC^2\right) \right). \] A similar statement holds for boundedness of $D^\alpha\Gamma_{\phi^\#}$. This yields an alternative formulation of Theorem \ref{Theorem:HankelCarleson}: \begin{corollary}\label{Corollary:Duality} Let $\HC$ be a separable Hilbert space, $\LC$ its space of bounded linear transformations. If $\alpha>0$, then $\textrm{BMOA}_{\CC}\left(\LC\right)$ is the dual of \[ D^{-\alpha} \left( \left(D^\alpha H^2\left(\HC\right) \right)\hat \otimes \conj{H^2\left(\HC\right)} \right)=D^{-\alpha} \left( \left(D^\alpha H^2\left(\SC^2\right) \right) \cdot H^2\left(\SC^2\right) \right), \] while $\textrm{BMOA}_{\CC^\#}\left(\LC\right)$ is the dual of \[ D^{-\alpha} \left(H^2\left(\HC\right)\hat \otimes \conj{D^\alpha H^2\left(\HC\right)} \right)=D^{-\alpha} \left(H^2\left(\SC^2\right) \cdot \left(D^\alpha H^2\left(\SC^2\right) \right) \right). \] \end{corollary} We return for a moment to the scalar case. By the square function characterization of $H^1$, due to Fefferman and Stein \cite{Fefferman-Stein1972:HpSpaces}, it follows that \begin{equation}\label{Eq:Inclusion} D^{-1} \left( \left(D^1 H^2 \right)\cdot H^2 \right)\subseteq H^1. \end{equation} A generalization to arbitrary $\alpha>0$, which also yields equality of function spaces in \eqref{Eq:Inclusion}, has been obtained by Cohn and Verbitsky \cite{Cohn-Verbitsky2000:FactTentSpacesHankOps}.
By Corollary \ref{Corollary:Duality}, the dual inclusion becomes \[ \textrm{BMOA}_{\NC\PC}\subseteq \textrm{BMOA}_{\CC}. \] Combined with Corollary \ref{Corollary:CC*NP}, this implies the well-known result that $\textrm{BMOA}_{\CC}=\left(H^1\right)^*$. For this argument to work, it suffices to use Theorem \ref{Theorem:HankelCarleson} with (say) $\alpha=1$, a special case which is substantially simpler to prove. The paper is structured as follows: In Section \ref{Sec:Preliminaries} we fix notation and review some preliminary material. Of particular importance are some Bergman-type spaces of analytic functions. In Section \ref{Sec:MainProof} we prove Theorem \ref{Theorem:HankelCarleson}. In Section \ref{Sec:SpecialCases} we discuss and compare the special cases of $\HC$-valued and $\HC^*$-valued symbols, and point out some significant differences between these. In Section \ref{Sec:Davidson-Paulsen} we provide proofs of Propositions \ref{Proposition:Davidson-Paulsen1} and \ref{Proposition:Davidson-Paulsen2}. \section{Preliminaries and further notation}\label{Sec:Preliminaries} We use the standard notation $\Z$, $\R$, and $\C$ for the respective rings of integers, real numbers, and complex numbers. By $\N$ we denote the set of strictly positive elements of $\Z$, while $\N\cup\left\{0\right\}$ is denoted by $\N_0$. Given two parametrized sets of nonnegative numbers $\left(A_i\right)_{i\in I}$ and $\left(B_i\right)_{i\in I}$, we use the notation $A_i\lesssim B_i$, $i\in I$, to indicate the existence of a positive constant $C$ such that $\forall i\in I$, $A_i\le CB_i$. We then say that $A_i$ is bounded by $B_i$, and refer to $C$ as a bound. Sometimes we allow ourselves to not mention the index set $I$ and instead let it be implicit from the context. If $A_i\lesssim B_i$ and $B_i\lesssim A_i$, then we write $A_i\approx B_i$. We then say that $A_i$ and $B_i$ are comparable. The Hilbert space adjoint of $A\in\LC$ is denoted by $A^*$.
We sometimes identify $x\in\HC$ with the rank-one operator $\C\ni c\mapsto cx\in\HC$. Note that $x^*$ is then the linear functional $\HC\ni y\mapsto \left\langle y,x\right\rangle_\HC\in\C$. The dual of a Banach space $\YC$ will be denoted by $\YC^*$. With Hilbert spaces in mind, we equip $\YC^*$ with an anti-linear structure, rather than the standard linear one. Thus, the duality pairing $\left\langle y,y^*\right\rangle_\YC$, of $y\in\YC$ and $y^*\in\YC^*$, becomes anti-linear in $y^*$. We define the tensor product of two elements $x,y\in\HC$ as the rank-one operator $x\otimes y:z\mapsto\left\langle z,y\right\rangle_\HC x$. The tensor product is anti-linear in its second argument. The projective tensor product $\HC\hat \otimes\HC$ is the closed linear span of $\left\{x\otimes y\right\}_{x,y\in\HC}$, with respect to the norm \[ \left\|T\right\|_{\wedge}=\inf \left\{\sum_k \left\|x_k\right\|_\HC \left\|y_k\right\|_\HC;T=\sum_k x_k\otimes y_k \right\}. \] The space $\HC\hat \otimes\HC$ can be isometrically identified with $\SC^1$. The dual of $\SC^1$ is isometrically identified with $\LC$ via the pairing \[ \left\langle T,B\right\rangle_{\SC^1}=\tr \left(B^*T\right) =\sum_{n}\left\langle Te_n,Be_n\right\rangle_\HC=\sum_{k}\left\langle x_k,By_k\right\rangle_\HC, \] where $B\in\LC$, $\left(e_n\right)_{n=0}^\infty$ is any orthonormal basis of $\HC$, and $\sum_k x_k\otimes y_k$ is any representation of $T$, cf. Wojtaszczyk \cite{Wojtaszczyk1991:BSpacesForAnalysts}*{III.B.26}. An important property of Hardy spaces $H^p\left(\XC\right)$ is that, given certain properties of $\XC$, $H^p\left(\XC\right)$ may be isometrically identified as a subspace of $L^p\left(\T,\XC\right)$. The precise statement is as follows: \begin{proposition}\label{Proposition:BoundaryIdentification} Let $p\in[1,\infty]$, and $f\in H^p\left(\XC\right)$.
\begin{itemize} \item[$\left(i\right)$] If $\XC\in\left\{\C,\HC,\SC^1\right\}$, then there exists a function $bf\in L^p\left(\T,\XC\right)$ such that for $m$-a.e. $\zeta\in\T$, $\lim_{r\uparrow 1}f_r\left(\zeta\right)=bf\left(\zeta\right)$ in the norm topology on $\XC$. Moreover, if $p<\infty$, then $f_r\to bf$ in $L^p\left(\T,\XC\right)$, and \[ \int_{\T}\left(bf\right)\left(\zeta\right)\conj{ \zeta}^ndm\left(\zeta\right)= \left\{ \begin{array}{cl} \hat f\left(n\right) & \textnormal{for }n\in\N_0, \\ 0 & \textnormal{for }n\notin\N_0, \end{array} \right. \] \item[$\left(ii\right)$] If $\XC=\LC$, then there exists a function $bf\in L^p\left(\T,\XC\right)$ such that for $m$-a.e. $\zeta\in\T$, $\lim_{r\uparrow 1}f_r\left(\zeta\right)=bf\left(\zeta\right)$ in the strong operator topology. Moreover, $\left\|bf\right\|_{L^p\left(\T,\XC\right)}=\left\|f\right\|_{H^p\left(\XC\right)}$, and for all $x,y\in\HC$ \[ \int_{\T}\left\langle \left(bf\right)\left(\zeta\right)x,y\right\rangle_\HC\conj{ \zeta}^ndm\left(\zeta\right)= \left\{ \begin{array}{cl} \langle \hat f\left(n\right)x,y\rangle_\HC & \textnormal{for }n\in\N_0, \\ 0 & \textnormal{for }n\notin\N_0, \end{array} \right. \] \end{itemize} In particular, we may identify the Taylor coefficients of $f$ with the Fourier coefficients of $bf$. \end{proposition} In the scalar case, the above result is proved in any serious introduction to Hardy spaces. We mention \cite{Garnett2007:BddAnalFcnsBook}. We refer to \cite{Nikolski2002:EasyReading} for the case $\XC=\HC$, and \cite{Rosenblum-Rovnyak1985:HardyClassesOpTheory} for the case $\XC=\LC$. The statement for $\XC=\SC^1$ holds because $\SC^1$ has the so-called analytic Radon--Nikodym property, see \citelist{\cite{Bukhvalov-Danilevich1982:BdryPropsAnalHarmFcnsValBSpace}\cite{Haagerup-Pisier1989:FactAnalFcnsNon-CommL1Spaces}}.
We define the formal duality pairing between $f\in\Hol\left(\YC\right)$ and $g\in\Hol\left(\YC^*\right)$ as \[ \left\langle f,g\right\rangle= \sum_{n=0}^\infty \left\langle \hat f\left(n\right), \hat g\left(n\right)\right\rangle_{\YC}. \] The pairing is well defined if $f\in\OC\left(\YC\right)$ or $g\in\OC\left(\YC^*\right)$, and generalizes the inner product on $H^2\left(\HC\right)$. Note that $\left\langle D^\alpha f,g\right\rangle = \left\langle f, D^\alpha g\right\rangle$, and, in the case where $\YC=\HC$, $\left\langle f,\Gamma_\phi g\right\rangle=\left\langle f\otimes \conjvar{g},\phi \right\rangle$. We will make use of two related notions of weighted Bergman spaces. For $\beta>-1$, we define two finite measures on $\D$: \[ dA_{\beta}\left(z\right)=\frac{1+\beta}{\pi} \left(1-\left|z\right|^2 \right)^{\beta}dA\left(z\right)\quad\textnormal{and}\quad dA_{\beta,\log}\left(z\right)=\frac{1+\beta}{\pi} \left(\log \left(\frac{1}{|z|^2} \right) \right)^{\beta}dA\left(z\right). \] Here $dA$ denotes area measure on $\C$. For $p\in\left[1,\infty\right)$, we denote by $L_{\beta}^p\left(\YC\right)$ the space of strongly measurable functions $f:\D\to\YC$ such that \[ \left\|f\right\|_{L_{\beta}^p\left(\YC\right)}^p=\int_\D\left\|f\left(z\right)\right\|_\YC^p\, dA_{\beta}\left(z\right)<\infty. \] We then define the standard weighted Bergman space $A_{\beta}^p\left(\YC\right)=L_{\beta}^p\left(\YC\right)\cap \Hol\left(\YC\right)$. We similarly define the logarithmically weighted spaces $L_{\beta,\log}^p\left(\YC\right)$ and $A_{\beta,\log}^p\left(\YC\right)$, with $dA_{\beta,\log}$ in place of $dA_{\beta}$. An enlightening reference for standard weighted Bergman spaces with $\YC=\C$ is \cite{Hedenmalm-Korenblum-Zhu2000:BergmanSpacesBook}. We remark that many of the results presented below for $\YC$-valued functions follow by the same proofs as in the scalar case. 
The above two notions of Bergman spaces are to a large extent interchangeable: \begin{proposition}\label{Proposition:InterchangeableWeights} Let $p\in\left[1,\infty\right)$, $\beta>-1$, and $\YC$ be an arbitrary Banach space. We then have that \[ \left\|f\right\|_{A_{\beta,\log}^p\left(\YC\right)} \approx \left\|f\right\|_{A_{\beta}^p\left(\YC\right)},\quad f\in\Hol\left(\YC\right). \] The corresponding bounds depend on $p$ and $\beta$. \end{proposition} One of the above bounds is obtained using the pointwise estimate \[ 1-\left|z\right|^2\le \log \left(\frac{1}{\left|z\right|^2} \right),\quad z\in \D, \] and the other by using subharmonicity of the function $z\mapsto \left\|f\left(z\right)\right\|_\YC^p$. We refer the interested reader to the easily modified proof of \cite{Garnett2007:BddAnalFcnsBook}*{Lemma VI.3.2} for details. A multiplier is an operator $\lambda:\Hol\left(\YC\right)\ni f\mapsto \lambda f\in\Hol\left(\YC\right)$ given by \[ \lambda f\left(z\right)=\sum_{n=0}^\infty \lambda_n\hat f\left(n\right) z^n,\quad z\in\D, \] for some scalar sequence $\left(\lambda_n\right)_{n=0}^\infty$. With some abuse of the terminology in \cite{Buckley-Koskela-Vukotic1999:FracIntDiffBergmanSpaces}, we say that a multiplier is small if $\left|\lambda_n\right|\lesssim\frac{1}{1+n}$. Using ideas from the proof of \cite{Arregui-Blasco2002:MultplrsVecValBergmanSpaces}*{Theorem 3.2}, one can prove the following result, which we refer to as the small multiplier property for Bergman spaces. \begin{proposition}\label{Proposition:SmallMultipliers} Let $p\in\left[1,\infty\right)$, $\beta>-1$, and $\YC$ be an arbitrary Banach space. Then small multipliers act boundedly on $A_{\beta}^p\left(\YC\right)$. \end{proposition} The spaces $A_{\beta,\log}^2\left(\HC\right)$ and $A_{\beta}^2\left(\HC\right)$ are closed subspaces of $L_{\beta,\log}^2\left(\HC\right)$ and $L_{\beta}^2\left(\HC\right)$ respectively.
The corresponding orthogonal projections are denoted by $P_{\beta,\log}$ and $P_{\beta}$. A calculation shows that if $\phi \in\Hol\left(\LC\right)$ and $f\in\Hol\left(\HC\right)$ are sufficiently regular, then \begin{equation}\label{Eq:BergmanHankelFormulalog} P_{\beta,\log}\left(\phi \conjvar{f} \right)\left(z\right) = \sum_{n=0}^\infty \left(\sum_{m=0}^\infty \left(\frac{1+n}{1+m+n} \right)^{1+\beta} \hat \phi \left(m+n\right)\hat f\left(m\right) \right)z^n,\quad z\in\D, \end{equation} and \begin{equation}\label{Eq:BergmanHankelFormula} P_{\beta}\left(\phi \conjvar{f} \right)\left(z\right) = \sum_{n=0}^\infty \left(\sum_{m=0}^\infty \frac{\Gamma\left(1+m+n\right)\Gamma\left(2+\beta+n\right)}{\Gamma\left(2+\beta+m+n\right)\Gamma\left(1+n\right)} \hat \phi \left(m+n\right)\hat f\left(m\right) \right)z^n,\quad z\in\D. \end{equation} Here $\Gamma:\C\setminus \left\{-1,-2,\ldots\right\}\to\C$ is the standard $\Gamma$-function. By \eqref{Eq:BergmanHankelFormulalog} and \eqref{Eq:BergmanHankelFormula} we are allowed to define $P_{\beta,\log}\left(\phi \conjvar{f} \right)$ and $P_{\beta}\left(\phi \conjvar{f} \right)$ as elements of $\Hol\left(\HC\right)$, whenever $\phi\in\Hol\left(\LC\right)$ and $f\in\OC\left(\HC\right)$. In this sense, they are analogues of \eqref{Eq:HardyHankelFormula}. Using Parseval's identity we obtain \[ \left\|f\right\|_{A_{\beta,\log}^2\left(\HC\right)}^2=\Gamma\left(2+\beta\right)\sum_{k=0}^\infty \frac{\|\hat f\left(k\right)\|_\HC^2}{\left(1+k\right)^{1+\beta}}, \] and \[ \left\|f\right\|_{A_{\beta}^2\left(\HC\right)}^2=\sum_{k=0}^\infty \binom{k+1+\beta}{k}^{-1}\|\hat f\left(k\right)\|_\HC^2, \] where $\binom{n+1+\beta}{n}=\frac{\Gamma\left(2+\beta+n\right)}{\Gamma\left(2+\beta\right)\Gamma\left(1+n\right)}$ are generalized binomial coefficients.
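For the reader's convenience, we sketch the moment computations behind these identities. Passing to polar coordinates and substituting $t=\left|z\right|^2$,
\[
\int_\D\left|z\right|^{2k}\,dA_{\beta,\log}\left(z\right)=\left(1+\beta\right)\int_0^1 t^k\left(\log\frac{1}{t}\right)^\beta dt=\frac{\Gamma\left(2+\beta\right)}{\left(1+k\right)^{1+\beta}},
\]
where the last equality follows from the substitution $s=\left(1+k\right)\log\frac{1}{t}$, while
\[
\int_\D\left|z\right|^{2k}\,dA_{\beta}\left(z\right)=\left(1+\beta\right)\int_0^1 t^k\left(1-t\right)^\beta dt=\frac{\Gamma\left(2+\beta\right)\Gamma\left(1+k\right)}{\Gamma\left(2+\beta+k\right)}
\]
by the standard Beta integral.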
The Bloch space $\BC\left(\YC\right)$ is the space of functions $f\in\Hol\left(\YC\right)$ such that \[ \left\|f\right\|_{\BC\left(\YC\right)}=\sup_{z\in\D}\, \left(1-\left|z\right|^2\right)\left\|Df\left(z\right)\right\|_\YC<\infty. \] In the literature the Bloch space is typically defined by finiteness of the quantity \[ \left\|f\left(0\right)\right\|_\YC+\sup_{z\in\D}\, \left(1-\left|z\right|^2\right)\left\|f'\left(z\right)\right\|_\YC. \] We leave it as an exercise to show that these definitions are equivalent. The Bloch space has the simple property that \begin{equation}\label{Eq:BlochWBloch} \left\|f\right\|_{\BC\left(\YC^*\right)}=\sup_{\substack{y\in\YC\\\left\|y\right\|_\YC=1}}\left\|\left\langle y,f\right\rangle_\YC\right\|_\BC, \end{equation} as can be seen by interchanging the order of suprema. The importance of the Bloch space is that $\BC\left(\YC^*\right)$ is isometric to $A_{\beta}^1\left(\YC\right)^*$ via the pairing \[ \left\langle f,g\right\rangle_{A_{\beta}^1\left(\YC\right)} =\lim_{r\uparrow 1}\int_{\D}\left\langle f\left(rz\right),g\left(z\right)\right\rangle_{\YC}\, dA_{\beta}\left(z\right),\quad f\in A_{\beta}^1\left(\YC\right), g\in\BC\left(\YC^*\right). \] This follows mostly as in \cite{Hedenmalm-Korenblum-Zhu2000:BergmanSpacesBook}. The major difference is that $\BC\left(\YC^*\right)$ is the Bergman projection of a certain class of measures, rather than $L^\infty\left(\D,\YC^*\right)$, see \cite{Arregui-Blasco2003:BergmanBlochSpacesVecVal}. \section{Proof of Theorem \ref{Theorem:HankelCarleson}}\label{Sec:MainProof} Given $\alpha>0$, let $\beta>\max\left\{2,1+\alpha\right\}$ be an auxiliary parameter. 
To prove Theorem \ref{Theorem:HankelCarleson}, let $\phi\in\Hol\left(\LC\right)$ and define \begin{align*} \left\|\phi\right\|_{1,\alpha}&=\sup_{f\in \OC_1\left(\HC\right)} \left\|D^\alpha\Gamma_\phi f\right\|_{H^2\left(\HC\right)}, \\ \left\|\phi\right\|_{2,\alpha}&=\sup_{f\in \OC_1\left(\HC\right)} \left\|P_{2\beta-1,\log} \left(\left(D^{\beta+\alpha}\phi\right)\conjvar{ f} \right)\right\|_{A_{2\beta-1,\log}^2\left(\HC\right)}, \\ \left\|\phi\right\|_{3,\alpha}&=\sup_{f\in \OC_1\left(\HC\right)} \left\|P_{1,\log} \left(\left(D^{1+\alpha}\phi \right)\conjvar{ f} \right)\right\|_{A_{1,\log}^2\left(\HC\right)}, \\ \left\|\phi\right\|_{4,\alpha}&=\sup_{f\in \OC_1\left(\HC\right)} \left\|P_{1} \left(\left(D^{1+\alpha}\phi \right)\conjvar{ f} \right)\right\|_{A_{1,\log}^2\left(\HC\right)}, \\ \left\|\phi\right\|_{5,\alpha}&=\sup_{f\in \OC_1\left(\HC\right)} \left\|P_{1} \left(\left(D^{1+\alpha}\phi \right)\conjvar{ f} \right)\right\|_{A_{1}^2\left(\HC\right)}, \\ \left\|\phi\right\|_{6,\alpha}&=\sup_{f\in \OC_1\left(\HC\right)} \left\|\left(D^{1+\alpha} \phi\right)\conjvar{ f}\right\|_{L_{1}^2\left(\HC\right)}. \end{align*} Theorem \ref{Theorem:HankelCarleson} is the statement that $\left\|\phi \right\|_{1,\alpha}\approx \left\|\phi \right\|_{6,\alpha}$. We will prove that the quantities $\left\|\phi \right\|_{k,\alpha}$, $1\le k\le 6$, are pairwise comparable. The outline of the proof is as follows. We show in detail that $\left\|\phi \right\|_{1,\alpha}\lesssim \left\|\phi \right\|_{2,\alpha}$. The reverse estimate, as well as the estimates $\left\|\phi \right\|_{2,\alpha}\approx \left\|\phi \right\|_{3,\alpha}$ and $\left\|\phi \right\|_{3,\alpha}\approx \left\|\phi \right\|_{4,\alpha}$, follows by similar arguments, although the last estimate is substantially simpler than the preceding ones. The statement that $\left\|\phi \right\|_{4,\alpha}\approx \left\|\phi \right\|_{5,\alpha}$ is just a special case of Proposition \ref{Proposition:InterchangeableWeights}.
Furthermore, it is trivial that $\left\|\phi \right\|_{5,\alpha}\le \left\|\phi \right\|_{6,\alpha}$. The reverse of this last estimate follows in a routine manner from the following remarkable result by Aleman and Perfekt \cite{Aleman-Perfekt2012:HankFrmsEmbThmsDirichletSpaces}: \begin{lemma}\label{Lemma:AlemanPerfekt} There exists a constant $C>0$ such that whenever $\psi\in\Hol\left(\LC\right)$ it holds that \begin{align*} \sup_{f\in\OC_1\left(\SC^2\right)}\int \left\| \left(D\psi\right) \conjvar{ f}\, \right\|_{\SC^2}^2\, dA_1 \le C \sup_{f,g\in\OC_1\left(\SC^2\right)} \left|\int \tr \left( \left(D\psi \right) \conjvar{ f} \left( Dg \right)^* \right)\, dA_1 \right|. \end{align*} \end{lemma} To prove that $\left\|\phi \right\|_{1,\alpha}\lesssim \left\|\phi \right\|_{2,\alpha}$ we will need some lemmata. The first result gives us some preliminary control of $\phi $. \begin{lemma}\label{Lemma:HankelCarlesonBloch} For each $\alpha>0$ it holds that \[ \left\|D^\alpha \phi \right\|_{\BC\left(\LC\right)}\lesssim \left\|\phi \right\|_{k,\alpha},\quad \phi \in\Hol\left(\LC\right),\ 1\le k\le 6. \] \end{lemma} \begin{proof} We consider only the case $k=1$. The other cases are similar. By \eqref{Eq:BlochWBloch} it suffices to prove that \begin{equation}\label{Eq:EstBloch} \left|\left\langle x, D^{1+\alpha} \phi \left(w\right)y\right\rangle_\HC\right|\lesssim \frac{\left\|\phi \right\|_{1,\alpha}\left\|x\right\|_\HC\left\|y\right\|_\HC}{1-\left|w\right|^2},\quad w\in\D,\ x,y\in\HC. \end{equation} Given $w\in\D$, $x,y\in\HC$, let \[ f\left(z\right)=\sum_{n=0}^\infty \conj{w}^n \left(1+n-n \left(\frac{n}{1+n} \right)^\alpha \right)z^ny,\quad g\left(z\right)=\sum_{n=0}^\infty w^nz^nx,\quad z\in\D. \] A calculation shows that $1+n-n \left(\frac{n}{1+n} \right)^\alpha$ is bounded in $n$, and so $\left\|f\right\|_{H^2}\left\|g\right\|_{H^2}\lesssim \frac{\left\|x\right\|_\HC\left\|y\right\|_\HC}{1-\left|w\right|^2}$. The definition of $\left\|\phi \right\|_{1,\alpha}$ now yields \eqref{Eq:EstBloch}.
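For completeness, we indicate why the coefficients of $f$ are bounded: by the expansion $\left(1-x\right)^\alpha=1-\alpha x+O\left(x^2\right)$ with $x=\frac{1}{1+n}$,
\[
1+n-n\left(\frac{n}{1+n}\right)^\alpha=1+\frac{\alpha n}{1+n}+O\left(\frac{1}{n}\right)\to 1+\alpha,\quad n\to\infty,
\]
and consequently $\left\|f\right\|_{H^2}^2\lesssim\left\|y\right\|_\HC^2\sum_{n=0}^\infty\left|w\right|^{2n}=\frac{\left\|y\right\|_\HC^2}{1-\left|w\right|^2}$.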
\end{proof} \begin{remark} Another proof of Lemma \ref{Lemma:HankelCarlesonBloch} is to use \eqref{Eq:BlochWBloch} together with the (already known) scalar version of Theorem \ref{Theorem:HankelCarleson}. Our approach is chosen so that our results do not depend on the scalar case. \end{remark} The qualitative content of the next lemma is known, and due to Peller \cite{Peller1982:VecHankOps}. See also \cite{Peller2003:HankOpsBook}*{Chapter 6.9}. However, the original proof gives a much worse quantitative dependence on $l$. The proof we present is a bit lengthy, and is postponed to the next subsection. \begin{lemma}\label{Lemma:OrderControl} For each $\alpha>0$ it holds that \[ \left\|D^\alpha \Gamma_{D^{-\alpha-l}\psi }D^l\right\|_{H^2\left(\HC\right)\to H^2\left(\HC\right)}\le C l \left\|\psi \right\|_{\BC\left(\LC\right)},\quad l\in\N,\ \psi \in\Hol\left(\LC\right). \] \end{lemma} We are now ready for the main part of the argument. Given $f\in\OC\left(\HC\right)$ and $\phi \in\Hol\left(\LC\right)$, we use the formulas \eqref{Eq:HardyHankelFormula} and \eqref{Eq:BergmanHankelFormulalog} to write \begin{align*} \left\|D^\alpha \Gamma_\phi f\right\|_{H^2\left(\HC\right)}^2 &= \sum_{n=0}^\infty\left(1+n\right)^{2\alpha} \left\|\sum_{k=0}^\infty \hat \phi \left(n+k\right)\hat f\left(k\right)\right\|_\HC^2 \\ &= \sum_{n=0}^\infty\frac{1}{\left(1+n\right)^{2\beta}} \left\|\sum_{k=0}^\infty \left(\frac{1+n}{1+n+k} \right)^{\alpha+\beta} \left(D^{\alpha+\beta} \phi \right)^{\hat{}}\left(n+k\right)\hat f\left(k\right) \right\|_\HC^2 \\ &= \left\|P_{\alpha+\beta-1,\log} \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}\left(\HC\right)}^2, \end{align*} where $\psi =D^\alpha \phi $. A well-known fact about standard weighted Bergman spaces is that there exist many bounded projections from $L_{\gamma}^p$ onto $A_{\gamma}^p$, e.g. \cite{Hedenmalm-Korenblum-Zhu2000:BergmanSpacesBook}*{Theorem 1.10}.
This inspires us to replace $P_{\alpha+\beta-1,\log}$ with $P_{2\beta-1,\log}$. By the triangle inequality \begin{align*} &\left\|P_{\alpha+\beta-1,\log} \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}} \\ &\le \left\| \left(P_{\alpha+\beta-1,\log}-P_{2\beta-1,\log} \right) \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}}+ \left\|P_{2\beta-1,\log} \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}}. \end{align*} We carry out a few manipulations with the Taylor coefficients of $\phi $ and $f$, use the power series expansion at the origin of the function $z\mapsto \left(1-z\right)^{\beta-\alpha}$, and apply Minkowski's inequality to obtain \begin{align*} &\left\| \left(P_{\alpha+\beta-1,\log}-P_{2\beta-1,\log} \right) \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}}^2 \\ &= \sum_{n=0}^\infty\frac{1}{\left(1+n\right)^{2\beta}} \left\|\sum_{k=0}^\infty \left[ \left(\frac{1+n}{1+n+k} \right)^{\alpha+\beta} - \left(\frac{1+n}{1+n+k} \right)^{2\beta} \right]\left(D^{\beta} \psi \right)^{\hat{}}\left(n+k\right)\hat f\left(k\right) \right\|_\HC^2 \\ &= \sum_{n=0}^\infty \left\|\sum_{k=0}^\infty \left[ \left(\frac{1+n}{1+n+k} \right)^{\alpha} - \left(\frac{1+n}{1+n+k} \right)^{\beta} \right]\hat \psi \left(n+k\right)\hat f\left(k\right) \right\|_\HC^2 \\ &= \sum_{n=0}^\infty \left\|\sum_{k=0}^\infty \left(\frac{1+n}{1+n+k} \right)^{\alpha} \left[1 - \left(1-\frac{k}{1+n+k} \right)^{\beta-\alpha} \right]\hat \psi \left(n+k\right)\hat f\left(k\right) \right\|_\HC^2 \\ &= \sum_{n=0}^\infty \left\|\sum_{l=1}^\infty\binom{\beta-\alpha}{l}\left(-1\right)^l\sum_{k=0}^\infty \frac{\left(1+n\right)^\alpha \left(1+k\right)^l}{\left(1+n+k\right)^{\alpha+l}}\hat \psi \left(n+k\right)\left(\frac{k}{1+k}\right)^l\hat f\left(k\right) \right\|_\HC^2 \\ &\le \left(\sum_{l=1}^\infty \left|\binom{\beta-\alpha}{l} \right| \left(\sum_{n=0}^\infty 
\left\|\sum_{k=0}^\infty \frac{\left(1+n\right)^\alpha \left(1+k\right)^l}{\left(1+n+k\right)^{\alpha+l}}\hat \psi \left(n+k\right) \left(\frac{k}{1+k} \right)^l\hat f\left(k\right) \right\|_\HC^2 \right)^{1/2} \right)^2 \\ &= \left(\sum_{l=1}^\infty \left|\binom{\beta-\alpha}{l}\right|\left\|D^\alpha\Gamma_{D^{-\alpha-l}\psi }D^l f_l\right\|_{H^2\left(\HC\right)} \right)^2, \end{align*} where $f_l$ is defined by $\hat f_l\left(k\right)= \left(\frac{k}{1+k} \right)^l\hat f\left(k\right)$. By Stirling's formula, the binomial coefficients $\binom{\beta-\alpha}{l}$ decay like $\frac{1}{l^{1+\beta-\alpha}}$, and since the map $f\mapsto f_l$ is obviously $H^2\left(\HC\right)$-contractive for each $l$, we use Lemma \ref{Lemma:OrderControl} to conclude that \[ \left\| \left(P_{\alpha+\beta-1,\log}-P_{2\beta-1,\log} \right) \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}}\lesssim \left\|\psi \right\|_{\BC\left(\LC\right)}\left\|f\right\|_{H^2\left(\HC\right)}, \] since $\beta>1+\alpha$. Lemma \ref{Lemma:HankelCarlesonBloch} then implies that \[ \left\|P_{\alpha+\beta-1,\log} \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}} \lesssim \left\|\phi \right\|_{2,\alpha}\left\|f\right\|_{H^2\left(\HC\right)}. \] This proves that $\left\|\phi \right\|_{1,\alpha}\lesssim \left\|\phi \right\|_{2,\alpha}$. It is straightforward to use the same type of argument to show that $\left\|\phi \right\|_{2,\alpha}\lesssim \left\|\phi \right\|_{1,\alpha}$. In order to prove that $\left\|\phi \right\|_{2,\alpha}\approx \left\|\phi \right\|_{3,\alpha}$, we note that \[ \left\|P_{2\beta-1,\log} \left(\left(D^\beta \psi \right)\conjvar{f} \right)\right\|_{A^2_{2\beta-1,\log}} = \left\|P_{\beta,\log} \left(\left(D^1 \psi \right)\conjvar{f} \right)\right\|_{A^2_{1,\log}}. \] We repeat the above argument in order to replace $P_{\beta,\log}$ with $P_{1,\log}$. This time, instead of $\beta>1+\alpha$, we use that $\beta>2$.
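We briefly indicate the asymptotics invoked here. Writing $\gamma=\beta-\alpha$ and assuming $\gamma\notin\N_0$ (otherwise the sum over $l$ is finite), the identity $\binom{\gamma}{l}=\left(-1\right)^l\binom{l-\gamma-1}{l}$ together with Stirling's formula yields
\[
\left|\binom{\gamma}{l}\right|=\frac{\Gamma\left(l-\gamma\right)}{\Gamma\left(1+l\right)\left|\Gamma\left(-\gamma\right)\right|}\approx\frac{1}{l^{1+\gamma}},\quad l>1+\gamma,
\]
so that $\sum_{l=1}^\infty l\left|\binom{\beta-\alpha}{l}\right|\lesssim\sum_{l=1}^\infty l^{-\left(\beta-\alpha\right)}<\infty$ precisely because $\beta>1+\alpha$; in the second application of the argument, $\beta-\alpha$ is replaced by $\beta-1$, and the corresponding sum converges since $\beta>2$.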
A third application of the argument allows us to replace $P_{1,\log}$ with $P_1$, yielding $\left\|\phi \right\|_{3,\alpha}\approx \left\|\phi \right\|_{4,\alpha}$. As was pointed out earlier, $\left\|\phi \right\|_{4,\alpha}\approx \left\|\phi \right\|_{5,\alpha}$ is just a special case of Proposition \ref{Proposition:InterchangeableWeights}, while the estimate $\left\|\phi \right\|_{5,\alpha}\le \left\|\phi \right\|_{6,\alpha}$ is trivial. For the reverse inequality, if we identify $\HC$ as a subspace of rank one operators in $\SC^2$, it is obvious that \begin{align*} \sup_{f\in\OC_1\left(\HC\right)}\int \left\| \left(D^{1+\alpha}\phi \right) \conjvar{f} \right\|_{\HC}^2\, dA_1 \le \sup_{f\in\OC_1\left(\SC^2\right)}\int \left\| \left(D^{1+\alpha}\phi\right) \conjvar{f} \right\|_{\SC^2}^2\, dA_1. \end{align*} By a simple argument \begin{align*} \left|\int \tr \left( \left(D^{1+\alpha}\phi \right)\conjvar{f} \left(Dg \right)^* \right)dA_1 \right| \le \left\|\phi \right\|_{5,\alpha}\left\|f\right\|_{H^2\left(\SC^2\right)}\left\|g\right\|_{H^2\left(\SC^2\right)} \end{align*} holds whenever $f,g\in\OC\left(\SC^2\right)$. By Lemma \ref{Lemma:AlemanPerfekt}, $\left\|\phi \right\|_{6,\alpha}\lesssim\left\|\phi \right\|_{5,\alpha}$. This completes the proof of Theorem \ref{Theorem:HankelCarleson}. \subsection{Proof of Lemma \ref{Lemma:OrderControl}} For $\alpha>0$ we define the operator $\tilde D^\alpha:\Hol\left(\YC\right)\to\Hol\left(\YC\right)$ by \[ \tilde D^\alpha f\left(z\right)=\sum_{n=0}^\infty \frac{\Gamma\left(1+n+\alpha\right)}{\Gamma\left(1+n\right)}\hat f\left(n\right) z^n,\quad z\in\D. \] A calculation shows that \begin{align*} \left\langle \tilde D^\alpha f,\psi \right\rangle_{A_{\alpha-1}^1\left(\YC\right)}= \Gamma\left(1+\alpha\right)\left\langle f,\psi \right\rangle, \end{align*} whenever $f\in\OC\left(\YC\right)$ and $\psi\in\BC\left(\YC^*\right)$.
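For the reader's convenience, we indicate the calculation behind this identity. Assuming the normalization $dA_{\alpha-1}=\alpha\left(1-\left|z\right|^2\right)^{\alpha-1}dA$, with $dA$ normalized area measure, the Beta integral gives \[ \int_\D \left|z\right|^{2n}dA_{\alpha-1}\left(z\right)=2\alpha\int_{r=0}^1 r^{2n+1}\left(1-r^2\right)^{\alpha-1}dr=\frac{\Gamma\left(1+\alpha\right)\Gamma\left(1+n\right)}{\Gamma\left(1+n+\alpha\right)}, \] which exactly cancels the coefficients of $\tilde D^\alpha$, so that termwise integration of the pairing yields \[ \left\langle \tilde D^\alpha f,\psi \right\rangle_{A_{\alpha-1}^1\left(\YC\right)}=\Gamma\left(1+\alpha\right)\sum_{n=0}^\infty\left\langle \hat f\left(n\right),\hat \psi \left(n\right)\right\rangle=\Gamma\left(1+\alpha\right)\left\langle f,\psi \right\rangle. \]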
Going to the case where $\psi\in\BC\left(\LC\right)$, $f,g\in\OC\left(\HC\right)$, we obtain that \begin{align*} \left\langle f,D^\alpha\Gamma_{D^{-\alpha-l}\psi }D^lg\right\rangle &= \left\langle D^{-\alpha-l} \left( \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)} \right),\psi \right\rangle \\ &= \frac{1}{\Gamma\left(1+\alpha\right)} \left\langle \tilde D^\alpha D^{-\alpha}D^{-l} \left( \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)} \right),\psi \right\rangle_{A_{\alpha-1}^1\left(\YC\right)}. \end{align*} Since $\psi \in\BC\left(\LC\right)$, we have that \[ \left|\left\langle f,D^\alpha\Gamma_{D^{-\alpha-l}\psi }D^lg\right\rangle\right| \lesssim\left\|\psi \right\|_\BC\left\|\tilde D^\alpha D^{-\alpha}D^{-l} \left( \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)} \right)\right\|_{A_{\alpha-1}^1\left(\SC^1\right)}. \] Following the ideas in \cite{Buckley-Koskela-Vukotic1999:FracIntDiffBergmanSpaces}, we use Stirling's formula to see that $\tilde D^\alpha D^{-\alpha}$ acts like the identity plus a small multiplier. By Propositions \ref{Proposition:InterchangeableWeights} and \ref{Proposition:SmallMultipliers}, we can now complete the proof of Lemma \ref{Lemma:OrderControl} by showing that \[ \left\| D^{-l} \left( \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)} \right)\right\|_{A_{\alpha-1,\log}^1\left(\SC^1\right)}\lesssim l \left\|f\right\|_{H^2\left(\HC\right)}\left\|g\right\|_{H^2\left(\HC\right)}. \] First we perform a simple decomposition of $f$ and $g$ into low and high frequencies. Assume that $f$ and $g$ are of degree at most $l$. 
By the triangle inequality we have \begin{align*} &\left\|D^{-l} \left( \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)} \right)\right\|_{A_{\alpha-1,\log}^1\left(\SC^1\right)} \\ &\le \sum_{m,n=0}^l\left\|\hat f\left(m\right)\right\|_{\HC}\left\|\hat g\left(n\right)\right\|_{\HC} \left\| D^{-l}\left(\left(D^\alpha z^m\right)\left(D^l z^n\right)\right)\right\|_{A_{\alpha-1,\log}^1} \\ &= \sum_{m,n=0}^l\frac{\left(1+m\right)^\alpha\left\|\hat f\left(m\right)\right\|_{\HC}\left(1+n\right)^l\left\|\hat g\left(n\right)\right\|_{\HC}}{\left(1+m+n\right)^l} \left\| z^{m+n}\right\|_{A_{\alpha-1,\log}^1}. \end{align*} Using polar coordinates we compute that \[ \left\| z^{m+n}\right\|_{A_{\alpha-1,\log}^1}=\frac{2^\alpha\Gamma\left(1+\alpha\right)}{\left(2+m+n\right)^\alpha}, \] and so \begin{align*} \left\|D^{-l} \left( \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)} \right)\right\|_{A_{\alpha-1,\log}^1\left(\SC^1\right)} &\lesssim \sum_{m,n=0}^l\left\|\hat f\left(m\right)\right\|_{\HC}\left\|\hat g\left(n\right)\right\|_{\HC} \\ &\le l\left\|f\right\|_{H^2\left(\HC\right)}\left\|g\right\|_{H^2\left(\HC\right)}, \end{align*} by Cauchy--Schwarz's inequality. Thus the low frequencies exhibit the desired behaviour. We now consider the high frequencies. Assume that $ \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)}$ has a zero of order $l$ at the origin. 
We can then use Lemma \ref{Lemma:PrimitiveNorm}, followed by Cauchy-Schwarz's inequality, and Parseval's identity to obtain that \begin{align*} &\left\|D^{-l} \left( \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)} \right)\right\|_{A_{\alpha-1,\log}^1\left(\SC^1\right)} \\ &\le \frac{\Gamma\left(1+\alpha\right)}{2^l\Gamma\left(1+\alpha+l\right)} \left(\frac{2+l}{1+l} \right)^l\left\| \left(D^\alpha f \right)\otimes \conjvar{ \left(D^l g \right)}\right\|_{A_{\alpha+l-1,\log}^1\left(\SC^1\right)} \\ &\le \frac{\Gamma\left(\alpha\right)}{2^l\Gamma\left(\alpha+l\right)2l} \left(\frac{2+l}{1+l} \right)^l\left\|D^\alpha f\right\|_{A_{2\alpha-1,\log}^2\left(\HC\right)}\left\|D^lg\right\|_{A_{2l-1,\log}^2\left(\HC\right)} \\ &= \frac{\Gamma\left(\alpha\right)\Gamma\left(1+\alpha\right)^{1/2}\Gamma\left(2l\right)^{1/2}}{2^l\Gamma\left(\alpha+l\right)} \left(\frac{2+l}{1+l} \right)^l\left\|f\right\|_{H^2\left(\HC\right)}\left\| g\right\|_{H^2\left(\HC\right)} \\ &\lesssim l^{1/4-\alpha}\left\|f\right\|_{H^2\left(\HC\right)}\left\| g\right\|_{H^2\left(\HC\right)}, \end{align*} where in the last step, we have used Stirling's formula. Assuming Lemma \ref{Lemma:PrimitiveNorm}, this completes the proof of Lemma \ref{Lemma:OrderControl}. \begin{lemma}\label{Lemma:PrimitiveNorm} Let $\alpha>0$, $N\in\N_0$, and assume that $h\in\Hol\left(\YC\right)$ has a zero of order $N$ at the origin. Then \[ \left\|D^{-l}h\right\|_{A_{\alpha-1,\log}^1\left(\YC\right)}\le \frac{\Gamma\left(1+\alpha\right)}{2^l\Gamma\left(1+\alpha+l\right)} \left(\frac{2+N}{1+N} \right)^l\left\|h\right\|_{A_{\alpha+l-1,\log}^1\left(\YC\right)}, \] whenever $l\in\N$. \end{lemma} \begin{proof} We will use an idea of Flett \cite{Flett1972:DualIneqHardyLittlewood}. 
Term by term integration of the power series of $h$ shows that \[ D^{-l}h\left(r\zeta\right)=\frac{1}{\Gamma\left(l\right)r}\int_{s=0}^{r}h_s\left(\zeta\right) \left(\log \left(\frac{r}{s} \right) \right)^{l-1}ds,\quad r\in\left[0,1\right),\zeta\in\T. \] By the triangle inequality \begin{align*} &\left\|D^{-l}h\right\|_{A_{\alpha-1,\log}^1\left(\YC\right)} \\ &\le \frac{2\alpha}{\Gamma\left(l\right)}\int_{r=0}^1\int_{\T}\int_{s=0}^r\left\|h_s\left(\zeta\right)\right\|_{\YC} \left(\log \left(\frac{r}{s} \right) \right)^{l-1} \left(\log \left(\frac{1}{r^2} \right) \right)^{\alpha-1}ds\, dm\left(\zeta\right) dr \\ &= \frac{\alpha 2^{\alpha}}{\Gamma\left(l\right)}\int_{s=0}^1\int_{\T}\left\|h_s\right\|_{\YC}\, dm \int_{r=s}^1 \left(\log \left(\frac{1}{s} \right)-\log \left(\frac{1}{r} \right) \right)^{l-1} \left(\log \left(\frac{1}{r} \right) \right)^{\alpha-1}dr\, ds. \end{align*} By the change of variables $\log\left(\frac{1}{r}\right)/\log\left(\frac{1}{s}\right)=x$ we obtain \begin{multline*} \int_{r=s}^1 \left(\log \left(\frac{1}{s} \right)-\log \left(\frac{1}{r} \right) \right)^{l-1} \left(\log \left(\frac{1}{r} \right) \right)^{\alpha-1}dr \\ =\left(\log \left(\frac{1}{s} \right) \right)^{\alpha+l-1}\int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1}s^xdx. \end{multline*} Therefore \begin{align*} &\left\|D^{-l}h\right\|_{A_{\alpha-1,\log}^1} \\ &\le \frac{\alpha 2^{\alpha}}{\Gamma\left(l\right)}\int_{s=0}^1\int_{\T}\left\|h_s\right\|_{\YC}\, dm \left(\log \left(\frac{1}{s} \right) \right)^{\alpha+l-1} \int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1}s^xdx\, ds \\ &= \frac{\alpha 2^{\alpha}}{\Gamma\left(l\right)}\int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1}\int_{s=0}^1\int_{\T}\left\|h_s\right\|_{\YC}\, dm \left(\log \left(\frac{1}{s} \right) \right)^{\alpha+l-1} s^xds\, dx. \end{align*} We now replace the variable $s$ with $s^\delta$, where $\delta=\delta\left(x\right)$ will soon be chosen.
\begin{align*} &\frac{\Gamma\left(l\right)}{\alpha 2^{\alpha}}\left\|D^{-l}h\right\|_{A_{\alpha-1,\log}^1} \\ &\le \int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1}\delta^{\alpha+l}\int_{s=0}^1\int_{\T}\left\|h_{s^\delta}\right\|_{\YC}\, dm \left(\log \left(\frac{1}{s} \right) \right)^{\alpha+l-1} s^{\left(1+x\right)\delta-1}ds\, dx \\ &= \int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1}\delta^{\alpha+l}\int_{s=0}^1\int_{\T}\frac{\left\|h_{s^\delta}\right\|_{\YC}}{s^{\delta N}}\, dm \left(\log \left(\frac{1}{s} \right) \right)^{\alpha+l-1} s^{\left(1+x+N\right)\delta-1}ds\, dx. \end{align*} Choose $\delta=\frac{2+N}{1+N+x}$. Note that $\delta\ge 1$ whenever $x\in[0,1]$. By assumption, the function $z\mapsto \frac{h\left(z\right)}{z^N}$ is analytic. It follows by subharmonicity that \[ \int_{\T}\frac{\left\|h_{s^\delta}\right\|_{\YC}}{s^{\delta N}}\, dm\le \int_{\T}\frac{\left\|h_{s}\right\|_{\YC}}{s^{N}}\, dm, \] and so \begin{align*} & \frac{\Gamma\left(l\right)}{\alpha 2^{\alpha}}\left\|D^{-l}h\right\|_{A_{\alpha-1,\log}^1} \\ &\le \int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1}\delta^{\alpha+l}\int_{s=0}^1\int_{\T}\left\|h_s\right\|_{\YC}\, dm \left(\log \left(\frac{1}{s} \right) \right)^{\alpha+l-1} s^{\left(1+x+N\right)\delta-1-N}ds\, dx \\ &= \int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1} \left(\frac{2+N}{1+N+x} \right)^{\alpha+l}\int_{s=0}^1\int_{\T}\left\|h_s\right\|_{\YC}\, dm \left(\log \left(\frac{1}{s} \right) \right)^{\alpha+l-1} s\, ds\, dx \\ &= \frac{1 }{2^{l+\alpha}\left(l+\alpha\right)}\left\|h\right\|_{A_{\alpha+l-1,\log}^1}\int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1} \left(\frac{2+N}{1+N+x} \right)^{\alpha+l}dx.
\end{align*} Replacing the variable $x$ with $\frac{\left(N+1\right)x}{N+2-x}$ we obtain \begin{align*} \int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1} \left(\frac{2+N}{1+N+x} \right)^{\alpha+l}dx &= \left(\frac{2+N}{1+N} \right)^l\int_{x=0}^1\left(1-x\right)^{l-1}x^{\alpha-1}dx \\ &= \left(\frac{2+N}{1+N} \right)^l\frac{\Gamma\left(l\right)\Gamma\left(\alpha\right)}{\Gamma\left(l+\alpha\right)}, \end{align*} and the proof of Lemma \ref{Lemma:PrimitiveNorm} is complete. \end{proof} \begin{remark} The bound in Lemma \ref{Lemma:PrimitiveNorm} is sharp, as is seen by testing on the function $h\left(z\right)=z^N$. In particular we have that \[ \left\|D^{-l}\right\|_{A_{\alpha+l-1,\log}^1\left(\YC\right)\to A_{\alpha-1,\log}^1\left(\YC\right)}= \frac{\Gamma\left(1+\alpha\right)}{\Gamma\left(1+\alpha+l\right)}. \] This shows that without the separation of $f$ and $g$ into low and high frequencies, the estimate obtained in Lemma \ref{Lemma:OrderControl} would instead be \[ \left\|D^\alpha \Gamma_{D^{-\alpha-l}\psi }D^l\right\|_{H^2\left(\HC\right)\to H^2\left(\HC\right)}\lesssim 2^l \left\|\psi \right\|_{\BC\left(\LC\right)},\quad \psi\in\Hol\left(\LC\right), \] which is of course far from sufficient for proving Theorem \ref{Theorem:HankelCarleson}. Still, some of the estimates in the proof of Lemma \ref{Lemma:OrderControl} are very crude, indicating room for improvement. If Lemma \ref{Lemma:OrderControl} could be improved so that for each $l\in\N$ \[ \left\|D^\alpha \Gamma_{D^{-\alpha-l}\psi }D^l\right\|_{H^2\left(\HC\right)\to H^2\left(\HC\right)}\le C_l \left\|\psi \right\|_{\BC\left(\LC\right)}, \] where $\sum_{l=1}^\infty \frac{C_l}{l^\gamma}<\infty$ whenever $\gamma>1$, then in the proof of Theorem \ref{Theorem:HankelCarleson} one could immediately prove that $\left\|\phi \right\|_{1,\alpha}\approx\left\|\phi \right\|_{3,\alpha}$, instead of using two iterations of the same argument.
\end{remark} \section{$\HC$- and $\HC^*$-valued symbols}\label{Sec:SpecialCases} For $w\in\D$, the function $k_w$ defined by \[ k_w\left(z\right)=\frac{1}{1-\conj{w} z},\quad z\in\D, \] is called the reproducing kernel function for $H^2$. By Parseval's formula, $\left\langle f,k_w\right\rangle=f\left(w\right)$ whenever $f\in H^2$, and $\left\|k_w\right\|_{H^2}^2=\frac{1}{1-\left|w\right|^2}$. From \citelist{\cite{Blasco1997:VecValBMOAGeomBSpaces}\cite{Bonsall1984:BddnessHankMat}} we gather the following result: \begin{proposition}\label{Proposition:H-Valued} If $\phi:\D\to\HC$ is analytic, then $\phi \in H^1\left(\HC\right)^*$ if and only if any of the following conditions holds: \begin{itemize} \item[$\left(i\right)$] \[ \sup_{I\subset\T}\frac{1}{m\left(I\right)}\int_I\left\|b\phi-\left(b\phi\right)_I\right\|_\HC\, dm<\infty. \] \item[$\left(ii\right)$] \[ \sup_{f\in\OC_1} \int_\D \left|f\left(z\right)\right|^2\left\| \left(D\phi\right)\left(z\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)<\infty. \] \item[$\left(iii\right)$] \[ \sup_{w\in\D} \left(1-\left|w\right|^2\right)\int_\D \left|k_w\left(z\right)\right|^2\left\| \left(D\phi\right)\left(z\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)<\infty. \] \item[$\left(iv\right)$] \[ \sup_{f\in\OC_1} \left\|\Gamma_\phi f\right\|_{H^2\left(\HC\right)}<\infty. \] \item[$\left(v\right)$] \[ \sup_{w\in\D} \left(1-\left|w\right|^2\right)\left\|\Gamma_\phi k_w\right\|_{H^2\left(\HC\right)}<\infty. \] \end{itemize} Moreover, the corresponding norms are comparable. \end{proposition} We point out that even though the implications $\left(iii\right)\Rightarrow\left(ii\right)$ and $\left(v\right)\Rightarrow\left(iv\right)$ look similar, the relation between them is not trivial. The fact that boundedness of a Hankel operator may be determined by its action on reproducing kernels is often referred to as Bonsall's theorem, and is an example of a so-called reproducing kernel thesis.
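The kernel identities stated before Proposition \ref{Proposition:H-Valued} follow by expanding the geometric series: since $k_w\left(z\right)=\sum_{n=0}^\infty\conj{w}^nz^n$, we have \[ \left\langle f,k_w\right\rangle=\sum_{n=0}^\infty \hat f\left(n\right)w^n=f\left(w\right),\quad\textnormal{and}\quad \left\|k_w\right\|_{H^2}^2=\sum_{n=0}^\infty\left|w\right|^{2n}=\frac{1}{1-\left|w\right|^2}. \]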
It was shown in \cite{Jacob-Rydhe-Wynn2014:WeightWeissConjRKTGenHankOps} that for scalar-valued symbols, the operators $D^\alpha\Gamma_\phi :H^2\to H^2$ ($\alpha\ge 0$) have a reproducing kernel thesis, while $\left(D^\alpha \Gamma_{\phi^\#}\right)^*:H^2\to H^2$ do not. For $\HC$-valued symbols, $D^\alpha\Gamma_\phi :H^2\to H^2\left(\HC\right)$ ($\alpha\ge 0$) satisfies a reproducing kernel thesis. The proof is the same as in the scalar case. In this section, we investigate the corresponding results for Carleson embeddings. By specializing Theorem \ref{Theorem:HankelCarleson} to the case of rank one-valued symbols, we obtain the following corollary: \begin{corollary}\label{Corollary:HankelCarlesonrankone} Let $\alpha>0$ and $\phi\in\Hol\left(\HC\right)$. Then the operator $D^\alpha\Gamma_\phi:H^2\to H^2\left(\HC\right)$ is bounded if and only if \begin{equation}\label{Eq:RankOneCarlesonCondition} \sup_{f\in\OC_1}\int_ \D \left|f\left(z\right)\right|^2\left\|D^{1+\alpha}\phi\left(z\right)\right\|_\HC^2\left(1-\left|z\right|^2\right)dA\left(z\right)<\infty. \end{equation} Moreover, the above supremum is comparable to $\left\|D^\alpha\Gamma_\phi\right\|_{H^2\to H^2\left(\HC\right)}^2$. \end{corollary} Combined with Proposition \ref{Proposition:H-Valued}, Corollary \ref{Corollary:HankelCarlesonrankone} says that $D^\alpha\Gamma_\phi:H^2\to H^2\left(\HC\right)$ is bounded if and only if $D^\alpha \phi\in \textrm{BMOA}\left(\HC\right)$, i.e. $\phi\in D^{-\alpha} \textrm{BMOA}\left(\HC\right)=\left(D^\alpha H^1\left(\HC\right)\right)^*$. This shows that Corollary \ref{Corollary:HankelCarlesonrankone} could also have been obtained from the factorization $D^\alpha H^1\left(\HC\right)=H^2\cdot D^\alpha H^2\left(\HC\right)$, see \citelist{\cite{Cohn-Verbitsky2000:FactTentSpacesHankOps}\cite{Rydhe2016:CharTriebel-LizorkinSpaces}}. We now state the corresponding result for functional-valued symbols: \begin{corollary} Let $\alpha>0$ and $\phi\in\Hol\left(\HC\right)$. 
Then the operator $D^\alpha\Gamma_{\phi^\#}:H^2\left(\HC\right)\to H^2$ is bounded if and only if \begin{equation}\label{Eq:CoRankOneCarlesonCondition} \sup_{f\in\OC_1\left(\HC\right)}\int_ \D \left|\left\langle f\left(z\right),\left(D^{1+\alpha}\phi\right)\left(z\right)\right\rangle_\HC\right|^2\left(1-\left|z\right|^2\right)dA\left(z\right)< \infty. \end{equation} Moreover, the above supremum is comparable to $\left\|D^\alpha\Gamma_{\phi^\#}\right\|_{H^2\left(\HC\right)\to H^2}^2$. \end{corollary} Even though $\HC$ and $\HC^*$ are isomorphic, condition \eqref{Eq:CoRankOneCarlesonCondition} is far more subtle than \eqref{Eq:RankOneCarlesonCondition}. It is easy to show that \eqref{Eq:RankOneCarlesonCondition} implies \eqref{Eq:CoRankOneCarlesonCondition}. The reverse implication does not hold, as is seen by Theorem \ref{Theorem:HankelCarleson} together with Proposition \ref{Proposition:Davidson-Paulsen1}. This also shows that $D^\alpha H^1\left(\HC\right)\ne H^2\left(\HC\right)\cdot D^\alpha H^2$. Motivated by Proposition \ref{Proposition:H-Valued}, it is natural to consider the condition \begin{equation}\label{Eq:WeakBMOA} \sup_{\substack{w\in\D,\\ x\in\HC, \left\|x\right\|_\HC=1}}\left(1-\left|w\right|^2\right)\int_ \D \left|\left\langle k_w\left(z\right)x,\left(D^{1+\alpha}\phi\right)\left(z\right)\right\rangle_\HC\right|^2\left(1-\left|z\right|^2\right)dA\left(z\right)< \infty. \end{equation} This weak type condition means that the functions $z\mapsto \left\langle \phi\left(z\right),x\right\rangle_\HC$ are in scalar-valued $\textrm{BMOA}$, uniformly for all $x$ in the unit ball of $\HC$. We use the conditions \eqref{Eq:RankOneCarlesonCondition}, \eqref{Eq:CoRankOneCarlesonCondition}, and \eqref{Eq:WeakBMOA} to define the respective spaces $\textrm{BMOA}_{\CC}\left(\HC\right)$, $\textrm{BMOA}_{\CC^\#}\left(\HC\right)$, and $\textrm{BMOA}_{\WC}\left(\HC\right)$. 
We then have the strict inclusions \[ \textrm{BMOA}_{\CC}\left(\HC\right)\subsetneq \textrm{BMOA}_{\CC^\#}\left(\HC\right) \subsetneq \textrm{BMOA}_{\WC}\left(\HC\right). \] We refer to \cite{Rydhe2016:CounterExsCarlesonEmbThm} for an example showing that the last inclusion is strict. \section{The Davidson--Paulsen results}\label{Sec:Davidson-Paulsen} We will now present the proofs of Propositions \ref{Proposition:Davidson-Paulsen1} and \ref{Proposition:Davidson-Paulsen2}. We once again point out that these are (at most) straightforward adaptations of the arguments used in \cite{Davidson-Paulsen1997:PolBddOps}. It will be convenient to identify $H^2\left(\HC\right)$ with $l^2\left(\N_0,\HC\right)$, and let $\HC= l^2\left(\N_0\right)$. We let $\left(e_n\right)_{n= 0}^\infty$ denote the canonical basis for $l^2\left(\N_0\right)$. \subsection{Proof of Proposition \ref{Proposition:Davidson-Paulsen1}} Let $x\in\HC$ be a fixed vector of unit length, and consider the function $\phi:z\mapsto \sum_{n=0}^\infty \beta_n x\otimes e_n z^n$, where $\left(\beta_n\right)_{n=0}^\infty$ is some scalar sequence of moderate growth. The function $\phi$ is obviously rank one-valued, and with the right choice of $\left(\beta_n\right)_{n=0}^\infty$ it has the property that $D^\alpha\Gamma_\phi$ is bounded on $H^2\left(\HC\right)$, while $\Gamma_\phi D^\alpha$ is not. Since the contraction $H^2\left(\HC\right)\ni f\mapsto \left\langle f,x\right\rangle_\HC\in H^2$ maps a subset of the unit sphere in $H^2\left(\HC\right)$ onto the unit sphere in $H^2$, we may instead consider boundedness of $D^\alpha\Gamma_\psi:H^2\left(\HC\right)\to H^2$, where $\psi$ is the $\HC^*$-valued function $z\mapsto \sum_{n=0}^\infty \beta_n e_n^* z^n$. It will be simpler to consider boundedness of the operators $\left(D^\alpha\Gamma_\psi\right)^*=\Gamma_{\psi^\#}D^\alpha$ and $\left(\Gamma_\psi D^\alpha\right)^*=D^\alpha \Gamma_{\psi^\#}$. 
Let $X=[\beta_{m+n}e_{m+n}]_{m,n\ge 0}$ be the matrix representation of $\Gamma_{\psi^\#}$. The goal is now to show that $X\ \diag\left(\left(1+n\right)^{\alpha}\right)_{n\ge 0}$ is bounded from $l^2\left(\N_0\right)$ to $l^2\left(\N_0,\HC\right)$, while $\diag\left(\left(1+n\right)^{\alpha}\right)_{n\ge 0} X$ is not. Obviously, the operator norm of $D^\alpha X$ is at least as big as the $l^2\left(\N_0,\HC\right)$-norm of each column of the matrix, i.e. \[ \left\|D^\alpha X\right\|_{l^2\left(\N_0\right)\to l^2\left(\N_0,\HC\right)}^2\ge \sup_{k\in\N_0}\sum_{n=0}^\infty \left(1+n\right)^{2\alpha}\left|\beta_{n+k}\right|^2, \] so if for example $\sum_{n=0}^\infty \left(1+n\right)^{2\alpha}\left|\beta_{n}\right|^2=\infty$, then $D^\alpha X$ is unbounded. On the other hand, \[ \left\langle Xe_n,X e_m\right\rangle_{l^2\left(\N_0,\HC\right)}= \left\{ \begin{array}{cl} \gamma_n^2:=\sum_{k\ge n}\left|\beta_k\right|^2&\textnormal{for $m=n$},\\ 0&\textnormal{otherwise}. \end{array} \right. \] It follows that $ \left(XD^\alpha \right)^*XD^\alpha=\diag \left(\left(1+n\right)^{2\alpha}\gamma_n^2 \right)_{n\ge 0}$, and so \[ \left\|X D^\alpha\right\|_{l^2\left(\N_0\right)\to l^2\left(\N_0,\HC\right)}^2 = \sup_{n\in\N_0}\left(1+n\right)^{2\alpha}\sum_{k\ge n}\left|\beta_k\right|^2. \] Now choose $\beta_n=\frac{1}{\left(1+n\right)^{\alpha+1/2}}$. Then $\sum_{n=0}^\infty \left(1+n\right)^{2\alpha}\left|\beta_{n}\right|^2=\sum_{n=0}^\infty\frac{1}{1+n}=\infty$, while $\gamma_n^2\approx \left(1+n\right)^{-2\alpha}$, so that $\sup_{n\in\N_0}\left(1+n\right)^{2\alpha}\gamma_n^2<\infty$. This completes the proof. \subsection{Proof of Proposition \ref{Proposition:Davidson-Paulsen2}} Given matrices $A=[a_{mn}]_{m,n\ge 0}$ and $B=[b_{mn}]_{m,n\ge 0}$, we define the Schur product $A\star B= \left[a_{mn}b_{mn}\right]_{m,n\ge 0}$. For a fixed matrix $B$, the operator $S_B:A\mapsto A\star B$ is called a Schur multiplier. The Grothendieck--Haagerup criterion, e.g.
\cite{Paulsen2002:ComplBddMapsOpAlgs}*{Corollary 8.8}, states that $S_B:\LC\left(\HC\right)\to\LC\left(\HC\right)$ is bounded if and only if there exist sequences $\left(x_n\right)_{n\ge 0}$, $\left(y_n\right)_{n\ge 0}$ in the unit ball of $\HC$ such that $b_{mn}=\left\langle x_n,y_m\right\rangle_\HC$. From this follows the so-called Bennett criterion, stating that if $S_B$ is a bounded Schur multiplier, and the iterated limits $\lim_{m\to\infty}\lim_{n\to\infty}b_{mn}$ and $\lim_{n\to\infty}\lim_{m\to\infty}b_{mn}$ both exist, then the limits are equal. Define an isometry $V:l^2\left(\N_0\right)\to H^2\left(\HC\right)$ by $Ve_n=e_n z^n$, and let $\left(E_{mn}\right)_{m,n\ge 0}$ be the scalar matrices defined by $\left\langle E_{mn}e_l,e_k\right\rangle_\HC=\delta_{mk}\delta_{nl}$. Given a scalar matrix $A=[a_{mn}]_{m,n\ge 0}$, we define the matrices $A_n=\sum_{k+l=n}a_{kl}E_{kl}$, and the function \[ \phi\left(z\right)=\sum_{n=0}^\infty A_nz^n=\diag \left(z^k \right)_{k\ge 0}A\ \diag \left(z^l \right)_{l\ge 0}. \] From the above relations, $\left\|\phi\right\|_{H^\infty\left(\LC\right)}=\left\|A\right\|_\LC$. Now $\Gamma_\phi$ corresponds to the (operator-valued) Hankel matrix $X=[A_{k+l}]_{k,l\ge 0}$. A calculation shows that \[ V^*D^\alpha\Gamma_{D^{-\alpha}\phi}V=S_{B}\left(A\right), \] where $B= \left[ \left(\frac{1+m}{1+m+n} \right)^\alpha\right]_{m,n\ge 0}$. It follows that \[ \left\|D^\alpha\Gamma_{D^{-\alpha}\phi}\right\|_{\LC\left(H^2\left(\HC\right)\right)}\ge \left\|S_{B}\left(A\right)\right\|_{\LC\left(l^2\left(\N_0\right)\right)}. \] From Bennett's criterion, $S_B$ is not a bounded Schur multiplier, and so the right-hand side in the above inequality will be infinite for some choice of $A$. It follows that, for the same choice of $A$, $D^\alpha\Gamma_{D^{-\alpha}\phi}$ is not bounded on $H^2\left(\HC\right)$.
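For the particular matrix $B$ above, the iterated limits are easily computed: since $\alpha>0$, \[ \lim_{m\to\infty}\lim_{n\to\infty}\left(\frac{1+m}{1+m+n}\right)^\alpha=0,\quad\textnormal{while}\quad \lim_{n\to\infty}\lim_{m\to\infty}\left(\frac{1+m}{1+m+n}\right)^\alpha=1, \] so the two iterated limits exist but differ, as required for the application of Bennett's criterion.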
\section*{Acknowledgments} The author expresses his gratitude to Sandra Pott and Alexandru Aleman for interesting discussions on the topics above, and also to Erik Wahlén and the anonymous referee for their useful comments on the presentation of this manuscript. \bibliographystyle{amsplain} \begin{bibdiv} \begin{biblist} \bib{Aleksandrov-Peller1996:HankOpsSimToContr}{article}{ author={Aleksandrov, A.~B.}, author={Peller, V.~V.}, title={Hankel operators and similarity to a contraction}, date={1996}, ISSN={1073-7928}, journal={Internat. Math. Res. Notices}, number={6}, pages={263\ndash 275}, url={http://dx.doi.org/10.1155/S1073792896000190}, review={\MR{1386078}}, } \bib{Aleman-Perfekt2012:HankFrmsEmbThmsDirichletSpaces}{article}{ author={Aleman, A.}, author={Perfekt, K.-M.}, title={Hankel forms and embedding theorems in weighted {D}irichlet spaces}, date={2012}, ISSN={1073-7928}, journal={Int. Math. Res. Not. IMRN}, number={19}, pages={4435\ndash 4448}, url={http://dx.doi.org/10.1093/imrn/rnr195}, review={\MR{2981715}}, } \bib{Blasco1988:HardySpacesVecValDuality}{article}{ author={Blasco, O.}, title={Hardy spaces of vector-valued functions: duality}, date={1988}, ISSN={0002-9947}, journal={Trans. Amer. Math. Soc.}, volume={308}, number={2}, pages={495\ndash 507}, url={http://dx.doi.org/10.2307/2001088}, review={\MR{951618}}, } \bib{Blasco1997:VecValBMOAGeomBSpaces}{article}{ author={Blasco, O.}, title={Vector-valued analytic functions of bounded mean oscillation and geometry of {B}anach spaces}, date={1997}, ISSN={0019-2082}, journal={Illinois J. Math.}, volume={41}, number={4}, pages={532\ndash 558}, url={http://projecteuclid.org/euclid.ijm/1256068979}, review={\MR{1468865}}, } \bib{Arregui-Blasco2002:MultplrsVecValBergmanSpaces}{article}{ author={Blasco, O.}, author={Arregui, J.-L.}, title={Multipliers on vector valued {B}ergman spaces}, date={2002}, ISSN={0008-414X}, journal={Canad. J.
Math.}, volume={54}, number={6}, pages={1165\ndash 1186}, url={http://dx.doi.org/10.4153/CJM-2002-044-3}, review={\MR{1940234}}, } \bib{Arregui-Blasco2003:BergmanBlochSpacesVecVal}{article}{ author={Blasco, O.}, author={Arregui, J.-L.}, title={Bergman and {B}loch spaces of vector-valued functions}, date={2003}, ISSN={0025-584X}, journal={Math. Nachr.}, volume={261/262}, pages={3\ndash 22}, url={http://dx.doi.org/10.1002/mana.200310109}, review={\MR{2020384}}, } \bib{Blasco-Pott2008:EmbOpValDyadicBMO}{article}{ author={Blasco, O.}, author={Pott, S.}, title={Embeddings between operator-valued dyadic {BMO} spaces}, date={2008}, ISSN={0019-2082}, journal={Illinois J. Math.}, volume={52}, number={3}, pages={799\ndash 814}, url={http://projecteuclid.org/euclid.ijm/1254403715}, review={\MR{2546008}}, } \bib{Bonsall1984:BddnessHankMat}{article}{ author={Bonsall, F.~F.}, title={Boundedness of {H}ankel matrices}, date={1984}, ISSN={0024-6107}, journal={J. London Math. Soc. (2)}, volume={29}, number={2}, pages={289\ndash 300}, url={http://dx.doi.org/10.1112/jlms/s2-29.2.289}, review={\MR{744100}}, } \bib{Bourgain1986:SimProblPolBddOpsHSpace}{article}{ author={Bourgain, J.}, title={On the similarity problem for polynomially bounded operators on {H}ilbert space}, date={1986}, ISSN={0021-2172}, journal={Israel J. Math.}, volume={54}, number={2}, pages={227\ndash 241}, url={http://dx.doi.org/10.1007/BF02764943}, review={\MR{852479}}, } \bib{Bourgain1986:VecValSingIntsHardy-BMODualityChapter}{incollection}{ author={Bourgain, J.}, title={Vector-valued singular integrals and the {$H^1$}-{BMO} duality}, date={1986}, booktitle={Probability theory and harmonic analysis ({C}leveland, {O}hio, 1983)}, series={Monogr. Textbooks Pure Appl. 
Math.}, volume={98}, publisher={Dekker, New York}, pages={1\ndash 19}, review={\MR{830227}}, } \bib{Buckley-Koskela-Vukotic1999:FracIntDiffBergmanSpaces}{article}{ author={Buckley, S.~M.}, author={Koskela, P.}, author={Vukoti{\'c}, D.}, title={Fractional integration, differentiation, and weighted {B}ergman spaces}, date={1999}, ISSN={0305-0041}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={126}, number={2}, pages={369\ndash 385}, url={http://dx.doi.org/10.1017/S030500419800334X}, review={\MR{1670257}}, } \bib{Bukhvalov-Danilevich1982:BdryPropsAnalHarmFcnsValBSpace}{article}{ author={Bukhvalov, A.~V.}, author={Danilevich, A.~A.}, title={Boundary properties of analytic and harmonic functions with values in a {B}anach space}, language={Russian}, date={1982}, ISSN={0025-567X}, journal={Mat. Zametki}, volume={31}, number={2}, pages={203\ndash 214, 317}, review={\MR{649004}}, } \bib{Carleson1958:InterpolProblBddAnalFcns}{article}{ author={Carleson, L.}, title={An interpolation problem for bounded analytic functions}, date={1958}, ISSN={0002-9327}, journal={Amer. J. Math.}, volume={80}, pages={921\ndash 930}, review={\MR{0117349}}, } \bib{Carleson1962:InterpolBddAnalFcnsCoronaProbl}{article}{ author={Carleson, L.}, title={Interpolations by bounded analytic functions and the corona problem}, date={1962}, ISSN={0003-486X}, journal={Ann. of Math. (2)}, volume={76}, pages={547\ndash 559}, review={\MR{0141789}}, } \bib{Cohn-Verbitsky2000:FactTentSpacesHankOps}{article}{ author={Cohn, W.~S.}, author={Verbitsky, I.~E.}, title={Factorization of tent spaces and {H}ankel operators}, date={2000}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={175}, number={2}, pages={308\ndash 329}, url={http://dx.doi.org/10.1006/jfan.2000.3589}, review={\MR{1780479}}, } \bib{Davidson-Paulsen1997:PolBddOps}{article}{ author={Davidson, K.~R.}, author={Paulsen, V.~I.}, title={Polynomially bounded operators}, date={1997}, ISSN={0075-4102}, journal={J. Reine Angew.
\section{Introduction} The generalized surface quasi-geostrophic (gSQG) equation \begin{subequations}\label{sqg}\begin{eqnarray} & \theta_t + \vec{u} \cdot \nabla \theta = 0, \label{sqg1}\\ & \vec{u} = \nabla^\perp (-\Delta)^{-\alpha/2} \theta, \label{sqg2} \end{eqnarray}\end{subequations} is a transport equation in two space dimensions for an active scalar field $\theta(\vec{x},t)$, where $\vec{x} =(x,y)$. The divergence-free transport velocity $\vec{u}$ is determined nonlocally from $\theta$ by \eqref{sqg2}, where $\nabla^\perp = (-\partial_y, \partial_x)$ is the perpendicular gradient, and $0<\alpha\le2$ is a parameter. If $\alpha =2$, then \eqref{sqg1}--\eqref{sqg2} is the streamfunction-vorticity formulation of the two-dimensional incompressible Euler equations \cite{Ma}, while if $\alpha = 1$, then \eqref{sqg1}--\eqref{sqg2} is the SQG equation. The gSQG equation is a natural generalization of these cases. The SQG equation is an approximate equation for quasi-geostrophic flows confined near a surface \cite{La,Ped}. It also provides a useful two-dimensional model for singularity formation in the three-dimensional incompressible Euler equations \cite{CoMaTa94a, CoMaTa94b, MaTa}. For further analysis of the SQG equation, see \cite{BuShVi, Mar, Res} and the references cited there. Since $\theta$ is advected by a velocity field $\vec{u}$, the gSQG equation is compatible with piecewise constant solutions in which $\theta$ takes only two distinct values $\theta_{+}$, $\theta_-$, so that \[ \theta(\vec{x},t) = \begin{cases}\theta_+ & \vec{x} \in \Omega(t),\\ \theta_- & \vec{x} \in \Omega^c(t),\end{cases} \] for some domain $\Omega(t) \subset \mathbb{R}^2$. Under suitable assumptions, one may determine $\vec{u}(\cdot,t)$ from the domain $\Omega(t)$ and obtain closed contour-dynamics equations for the boundary $\partial\Omega(t)$, which moves with velocity $\vec{u}$. We distinguish two particular types of domains: \begin{itemize} \item[1.] 
Patches, whose boundary is a smooth, simple, closed curve diffeomorphic to the circle $\mathbb{T}$. \item[2.] Half-spaces, whose boundary is a smooth, simple curve diffeomorphic to $\mathbb{R}$ that divides $\mathbb{R}^2$ into two half-spaces. \end{itemize} In the first case of a patch, one can take $\theta(\cdot,t) = \chi_{\Omega(t)}$ where $\Omega(t)$ is bounded, and then $\vec{u} = \nabla^\perp G\ast \theta$, where $G$ is the Green's function for $(-\Delta)^{\alpha/2}$, which is the two-dimensional Riesz potential of order $\alpha$ if $0<\alpha<2$ \cite{Zi,Stein}, or the Green's function for the (negative) Laplacian if $\alpha=2$. The convolution converges since $\theta$ has compact support, and one obtains well-defined contour dynamics equations for the motion of the patch. The vortex patch problem for the two-dimensional Euler equations has been studied extensively, and the boundary remains smooth globally in time \cite{BeCo, Che,Che1,Ma}. SQG and gSQG patches with $\alpha \in [1, 2)$ are analyzed in \cite{CoCoGa, Gan}, where local existence and uniqueness in Sobolev spaces of solutions of a suitable parametric equation for the patch boundary are proved. The formation of finite-time singularities in the boundary of an initially smooth SQG patch is an open question, but numerical solutions suggest that complicated, self-similar singularities can arise \cite{Dri}. Singularities have been proved to occur for two gSQG patches with $\alpha$ sufficiently close to $2$ in the presence of a rigid boundary \cite{KiRyYaZl}. In the second case of a half-space, we refer to the boundary $\partial\Omega(t)$ as a front, by which we will always mean a sharp front across which $\theta$ is discontinuous. We will consider only fronts that are a graph, located at \begin{equation}\label{thetas} y = \varphi(x,t), \end{equation} where $\varphi(\cdot,t) \colon \mathbb{R} \to \mathbb{R}$ is a smooth, bounded function.
This assumption simplifies the evolution equations but becomes invalid if the front breaks. The front problem for vorticity discontinuities in the Euler equations is studied in \cite{BiHu,Ra}. Local existence and uniqueness for spatially periodic SQG fronts are proved in \cite{Rod05} for $C^\infty$-solutions by a Nash-Moser method, and in \cite{FeRo11} for analytic solutions by a Cauchy-Kowalewski method. Almost sharp fronts are analyzed in \cite{CoFeRo, FeLuRo, FeRo12, FeRo15}, and the global existence of Sobolev solutions for gSQG fronts with $0<\alpha<1$ that decay sufficiently rapidly as $|x|\to \infty$ is proved in \cite{CoGoIo}. If singularities do form in an SQG front, they would presumably differ from the ones observed numerically in \cite{Dri} for an elliptical SQG patch; in those simulations, the patch forms a very thin ``neck'' which is not approximated by a half-space. There is a difficulty in the formulation of contour dynamics for the half-space problem when $1\le \alpha \le 2$, which is the main case of interest here. The scalar field $\theta$ is not compactly supported, and the formal contour dynamics equations diverge because the corresponding Green's function decays too slowly at infinity. In this paper, we propose regularized contour dynamics equations for the motion of a front that are obtained by introducing a large-distance cutoff parameter $\lambda$ in the contour dynamics equations together with a suitable cutoff-dependent Galilean transformation with velocity $v(\lambda)$, where $v(\lambda) \to \infty$ as $\lambda\to \infty$. We show that the cutoff, Galilean-transformed contour dynamics equations have a well-defined limit as $\lambda\to \infty$, in which the Galilean transformation removes a divergent constant from the velocity field $\vec{u}$.
The derivation is formal, in the sense that we do not attempt to show that solutions of the truncated equations approach solutions of the regularized equations as $\lambda\to \infty$; rather, the goal is to formulate meaningful contour dynamics equations for fronts that provide a starting point for further analysis. After a normalization of the jump in $\theta$ across the front, the resulting equation for the displacement \eqref{thetas} of the front is \begin{equation}\label{nonconseqn} \varphi_t(x, t) + \int_\mathbb{R} \left[\varphi_x(x, t) - \varphi_x(x + \eta, t)\right] \biggl\{G(\eta) - G\left(\sqrt{\eta^2 + \left[\varphi(x, t) - \varphi(x + \eta, t)\right]^2}\right)\biggr\} \,\mathrm{d}{\eta} + \L \varphi_x(x, t) = 0, \end{equation} where \begin{equation} G(x) = \begin{cases} -\frac{1}{2\pi} \log |x| & \text{if $\alpha = 2$}, \\ 1/|x|^{2-\alpha} & \text{if $0<\alpha <2$}. \end{cases} \label{defG} \end{equation} The linear operator $\L$ in \eqref{nonconseqn} is given by \begin{align*} \L &= \begin{cases} \frac{1}{2}|\partial_x|^{-1} & \text{if $\alpha = 2$ (Euler)}, \\ b_\alpha |\partial_x|^{1-\alpha} & \text{if $0<\alpha < 1$ or $1 < \alpha < 2$}, \\ -2 \log |\partial_x| & \text{if $\alpha = 1$ (SQG)}, \end{cases} \end{align*} where $|\partial_x| = (-\partial_x^2)^{1/2}$ has symbol $|k|$, $\log |\partial_x|$ has symbol $\log|k|$, and \begin{equation} b_\alpha = 2\sin \left(\frac{\pi\alpha}{2}\right) \Gamma(\alpha - 1). \label{a-const} \end{equation} The integral in \eqref{nonconseqn} converges since \[ G\left(\sqrt{\eta^2 + \abs{\varphi(x, t) - \varphi(x + \eta, t)}^2}\right) - G(\eta) = \O\left(\frac{1}{|\eta|^{4-\alpha}}\right) \qquad \text{as $|\eta|\to \infty$}. \] Equation \eqref{nonconseqn} has the conservative form \eqref{conseqn} and the Hamiltonian form \eqref{hameqn}. The equation also applies to spatially periodic solutions, when it can be written as \eqref{pernonconseqn}. 
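The decay estimate for the integrand can be checked numerically. The following Python sketch (an illustration added here, not part of the analysis; the separations and sample values of $\alpha$ are arbitrary choices) fits the decay exponent of $G\bigl(\sqrt{\eta^2+\Delta^2}\bigr)-G(\eta)$ for $G(x)=|x|^{\alpha-2}$ and recovers the predicted exponent $\alpha-4$:

```python
import numpy as np

def kernel_difference(eta, delta, alpha):
    """G(sqrt(eta^2 + delta^2)) - G(eta) for G(x) = |x|^(alpha - 2), 0 < alpha < 2."""
    G = lambda r: np.abs(r) ** (alpha - 2.0)
    return G(np.sqrt(eta ** 2 + delta ** 2)) - G(eta)

def decay_exponent(alpha, delta=1.0):
    # Fit the decay exponent from two large separations; it should be alpha - 4.
    e1, e2 = 1.0e2, 1.0e3
    d1 = abs(kernel_difference(e1, delta, alpha))
    d2 = abs(kernel_difference(e2, delta, alpha))
    return np.log(d2 / d1) / np.log(e2 / e1)

for a in (0.5, 1.0, 1.5):
    print(a, decay_exponent(a))   # exponents near -3.5, -3.0, -2.5
```

Expanding $G$ gives $G(\sqrt{\eta^2+\Delta^2})-G(\eta) \approx \tfrac{\alpha-2}{2}\,\Delta^2\,\eta^{\alpha-4}$, so the fitted exponent matches $\alpha-4$ to a few parts in $10^4$ at these separations.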
The explicit equations for Euler, SQG, gSQG are written out in Sections~\ref{front:euler}, \ref{front:sqg}, \ref{front:gsqg}, respectively. To study the small-amplitude dynamics of fronts, we approximate the nonlinear term in \eqref{nonconseqn} by cubic terms, which gives the equation \begin{align} &\varphi_t + \frac{1}{2}\partial_x\left\{\varphi^2 \mathbf{A} \varphi - \varphi\mathbf{A}\varphi^2 +\frac{1}{3}\mathbf{A}\varphi^3\right\} + \L \varphi_x = 0, \label{sqg_eq} \end{align} where the linear operator $\mathbf{A}$ is proportional to $\partial_x^2 \L$, \begin{align*} \mathbf{A} &= \begin{cases} \frac{1}{2}|\partial_x| & \text{if $\alpha = 2$ (Euler)}, \\ c_\alpha |\partial_x|^{3-\alpha} & \text{if $0<\alpha < 1$ or $1 < \alpha < 2$}, \\ \partial_x^2 \log|\partial_x| & \text{if $\alpha = 1$ (SQG)}. \end{cases} \end{align*} Here, the constant $c_\alpha$ is given by \eqref{defC12} for $1<\alpha<2$ or \eqref{defC01} for $0<\alpha<1$. The approximate equations for Euler, SQG, gSQG are written out explicitly in Sections~\ref{approx:euler}, \ref{subsec:sqg}, \ref{approx:gsqg}, respectively. The approximate equation with $\alpha=2$ for vorticity fronts is the same as the one derived by a systematic, but formal, multiple-scale expansion directly from the incompressible Euler equations in \cite{BiHu}. In Theorem~\ref{th:alpha}, we show that the spatially-periodic initial value problem for the approximate equation \eqref{sqg_eq} with $1<\alpha \le 2$ is well-posed for short times in the Sobolev space $\dot{H}^s(\mathbb{T})$ for $s> 7/2-\alpha$. The proof is hyperbolic in nature and makes no use of the lower-order dispersive term. A similar method of proof applied to the approximate SQG front equation, or its dispersionless version, gives the short-time weak well-posedness result stated in Theorem~\ref{sqglwpthm}, in which solutions may lose Sobolev derivatives over time. 
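As a concrete illustration of the structure of the approximate equation \eqref{sqg_eq}, the following minimal Fourier pseudo-spectral sketch evolves it on the torus for $\alpha = 3/2$. This is a toy discretization added for illustration, not the scheme of Section~\ref{sec:num}; in particular, the constant $c_\alpha$ from \eqref{defC12} is set to $1$ purely as a placeholder, while $b_\alpha$ is taken from \eqref{a-const}, and the grid size and RK4 time step are arbitrary choices.

```python
import numpy as np
from math import gamma, pi, sin

N = 256
x = 2 * pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers on the torus
absk = np.abs(k)

alpha = 1.5
b_alpha = 2 * sin(pi * alpha / 2) * gamma(alpha - 1)   # constant from (a-const)
c_alpha = 1.0                                # placeholder for the constant in (defC12)

with np.errstate(divide="ignore"):
    L_sym = np.where(k == 0, 0.0, b_alpha * absk ** (1.0 - alpha))  # symbol of L
A_sym = c_alpha * absk ** (3.0 - alpha)                             # symbol of A

def apply_sym(sym, f):
    # apply a Fourier multiplier to a real grid function
    return np.real(np.fft.ifft(sym * np.fft.fft(f)))

def rhs(phi):
    A = lambda f: apply_sym(A_sym, f)
    cubic = phi ** 2 * A(phi) - phi * A(phi ** 2) + A(phi ** 3) / 3.0
    return -0.5 * apply_sym(1j * k, cubic) - apply_sym(1j * k * L_sym, phi)

def rk4_step(phi, dt):
    # classical fourth-order Runge-Kutta step
    k1 = rhs(phi); k2 = rhs(phi + 0.5 * dt * k1)
    k3 = rhs(phi + 0.5 * dt * k2); k4 = rhs(phi + dt * k3)
    return phi + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
```

A small-amplitude single mode then propagates according to the linearized dispersion relation $\omega(k) = b_\alpha k|k|^{1-\alpha}$, which gives a quick consistency check of the implementation.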
A stronger well-posedness result for the approximate SQG equation that uses the dispersion and has no loss of derivatives will be proved in \cite{HSZ}, following the method of C\'{o}rdoba et al. \cite{CoGoIo} for the case $\alpha < 1$. These results can be compared with those of Rodrigo and Fefferman \cite{FeRo11, Rod05} for much more regular $C^\infty$ or analytic fronts, and the results of C\'{o}rdoba, C\'{o}rdoba, and Gancedo \cite{CoCoGa, Gan} for Sobolev solutions of suitably parametrized equations for bounded SQG patches, which do not lose derivatives. An outline of this paper is as follows. In Section~\ref{sec:dim}, we discuss the scaling properties of the gSQG front problem, including the anomalous scaling of the SQG problem. In Section~\ref{sec:reg}, we derive the regularized contour dynamics equation \eqref{nonconseqn}, and in Section~\ref{sec:approx}, we show that its cubic approximation can be written as \eqref{sqg_eq}. In Section~\ref{Sec-LWP}, we prove a short-time well-posedness result for \eqref{sqg_eq} with $1<\alpha \le 2$ and a short-time weak well-posedness result for the approximate SQG equation \eqref{sqg_eq} with $\alpha=1$. In Section~\ref{sec:nls}, we consider traveling waves and the NLS-approximation for \eqref{sqg_eq}, and in Section~\ref{sec:num}, we present some numerical solutions of the approximate SQG equation that appear to show the formation of oscillatory singularities. Finally, in the Appendix, we prove some algebraic inequalities used in the well-posedness proofs. \section{Dimensional analysis} \label{sec:dim} One reason for interest in the front problem is that, unlike the patch problem, a planar front does not define any length scales, so it preserves the scaling properties of the gSQG equation.
Suppose that $\theta$ is a piecewise-constant, odd function of $y$ that jumps across a planar front $y=0$, \[ \theta = \begin{cases} \theta_0 &\text{if $y > 0$}, \\ -\theta_0 & \text{if $y < 0$}.\end{cases} \] Up to a constant dimensionless factor, the corresponding gSQG shear flow $\vec{u} = \left(u(y),0\right)$ with $u = -\partial_y |\partial_y|^{-\alpha} \theta$ is given by \begin{equation*} u(y) = \begin{cases} \theta_0 |y| & \text{if $\alpha=2$}, \\ \theta_0 |y|^{\alpha-1} &\text{if $0<\alpha < 1$ or $1<\alpha < 2$}, \\ \theta_0 \log |y| & \text{if $\alpha=1$}. \end{cases} \end{equation*} As illustrated in Figure~\ref{fig:shear}, this shear flow is piecewise linear for the Euler equation, and has a logarithmic divergence in the tangential velocity on the front for the SQG equation. The tangential velocity on the front is zero if $1<\alpha \le 2$, and diverges algebraically if $0<\alpha<1$. \begin{figure} \includegraphics[width=0.7\textwidth]{interface} \caption{The gSQG shear flow for a planar front with $\alpha = 1$ (red), $\alpha=3/2$ (green), and $\alpha = 2$ (blue).} \label{fig:shear} \end{figure} We denote the dimensions of a variable $f$ by $[f]$ and the dimensions of length and time by $L$ and $T$, respectively. Since $\vec{u} = \nabla^\perp(-\Delta)^{-\alpha/2} \theta$ is a velocity, we have that \begin{equation*} [\vec{u}] = \frac{L}{T},\qquad [\theta] = \frac{L^{2-\alpha}}{T}. \end{equation*} Thus, the vorticity $\theta$ has dimensions of frequency for the Euler equations ($\alpha=2$), while $\theta$ has dimensions of velocity for the SQG equation ($\alpha=1$). The front is linearly stable and waves propagate along it. For small-amplitude, harmonic perturbations in the displacement of the form $y = A e^{ik x-i\omega t} + \text{c.c.}$, a naive dimensional argument gives the linearized dispersion relation \begin{align*} \omega &= C \theta_0 (\sgn k)|k|^{2-\alpha} \end{align*} where $C$ is a dimensionless constant. 
A more detailed analysis verifies this dispersion relation for $0<\alpha <1$ and $1<\alpha \le 2$. For example, in the case of vorticity discontinuities with $\alpha=2$, the waves are nondispersive with constant frequency \cite{BiHu}, while the waves are dispersive for $0<\alpha <1$ or $1<\alpha < 2$. For the SQG equation with $\alpha=1$, the only parameter $\theta_0$ is a velocity, so one might expect that the waves on an SQG front are nondispersive with constant linearized phase speed. However, as observed by Rodrigo \cite{Rod05}, one finds that the linearized dispersion relation has the form \[ \omega = c_0 k \log |k|,\qquad c_0 = C \theta_0 \] with an additional factor that is logarithmic in the wavenumber $k$. Thus, the linearized SQG problem has an anomalous scaling invariance $(x,t)\mapsto (\tilde{x},\tilde{t})$ which is given for $\lambda > 0$ by \[ \tilde{x} = \lambda\left[x-c_0 (\log \lambda)t\right],\qquad \tilde{t} = \lambda t. \] This invariance combines a hyperbolic-type scale-invariance $(x,t)\mapsto (\lambda x,\lambda t)$ with a Galilean transformation $(x,t)\mapsto (x-c_0(\log \lambda) t, t)$. For the nonlinear SQG front equation, \eqref{nonconseqn} with $\alpha=1$, one has $c_0=-2$. Moreover, the displacement $\varphi$ has dimension $[\varphi] = L$, and the equation is invariant under the transformation $(x,t,\varphi)\mapsto (\tilde{x},\tilde{t}, \tilde{\varphi})$ with \begin{equation} \tilde{x} = \lambda\left[x + 2 (\log \lambda)t\right],\qquad \tilde{t} = \lambda t,\qquad\tilde{\varphi} = \lambda \varphi. \label{sqg_scaling} \end{equation} We remark that the corresponding similarity solutions have the form \[ \varphi(x,t) = t f\left(\frac{x}{t} - 2 \log t\right) \] rather than the usual power-law form for scale-invariant equations. \section{Regularized contour dynamics equations for fronts} \label{sec:reg} In this section, we derive a regularized contour dynamics equation for infinite gSQG fronts. 
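As a numerical aside on the anomalous scaling of the previous section: the invariance \eqref{sqg_scaling} can be checked on plane-wave solutions of the linearized SQG front equation, for which $\omega(k) = -2k\log|k|$. Under the transformation, a plane wave with wavenumber $k$ becomes one with wavenumber $k/\lambda$ and frequency $[\omega(k)+2k\log\lambda]/\lambda$, which must equal $\omega(k/\lambda)$. A short Python check with arbitrary sample values (an added illustration, not part of the analysis):

```python
import numpy as np

def omega(k):
    # linearized SQG front dispersion relation: omega(k) = -2 k log|k|
    return -2.0 * k * np.log(np.abs(k))

# Under x~ = lam*(x + 2 t log lam), t~ = lam*t, the plane wave exp(i(k x - omega(k) t))
# becomes a plane wave with wavenumber k/lam and frequency (omega(k) + 2 k log lam)/lam,
# which must equal omega(k/lam) if the scaling is an invariance.
for k in (0.3, 1.7, -4.2):
    for lam in (0.5, 2.0, 10.0):
        assert np.isclose((omega(k) + 2.0 * k * np.log(lam)) / lam, omega(k / lam))
print("scaling invariance verified on sample wavenumbers")
```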
We begin by recalling the derivation of the contour dynamics equations for bounded patches (see e.g., \cite{CoCoGa, Gan}). \subsection{Contour dynamics for patches} Suppose that $\partial\Omega(t)$ is a smooth, simple, closed curve with bounded interior $\Omega(t)\subset \mathbb{R}^2$ and \begin{equation}\label{thetasp} \theta(\vec{x},t) = \begin{cases}\theta_0 & \vec{x} \in \Omega(t),\\ 0 & \vec{x} \in \Omega^c(t).\end{cases} \end{equation} The Green's function for the operator $(-\Delta)^{\alpha/2}$ on $\mathbb{R}^2$ is given by $g_\alpha G(|\vec{x}|)$ where \cite{Stein} \begin{align*} G(x) &= \begin{cases} -\frac{1}{2 \pi}\log|x| & \text{if $\alpha = 2$}, \\ |x|^{-(2 - \alpha)} & \text{if $0 < \alpha < 2$}, \end{cases} \qquad g_\alpha = \begin{cases} 1 & \text{if $\alpha = 2$}, \\ \frac{\Gamma(1 - \frac{\alpha}{2})}{2^\alpha \pi \Gamma(\frac{\alpha}{2})} & \text{if $0 < \alpha < 2$}. \end{cases} \end{align*} We normalize constants by choosing $\theta_0 = 1/g_\alpha$. Then, using \eqref{sqg2} and Green's theorem, one finds that the velocity field corresponding to \eqref{thetasp} is \begin{eqnarray} \vec{u}(\vec{x}, t) & = & \int_{\partial\Omega(t)} G(|\vec{x}-\vec{x}'|) \vec{n}^\perp(\vec{x}', t) \,\mathrm{d}{s(\vec{x}')}, \label{fl-u} \end{eqnarray} where $\vec{n} = (m,n)$ is the inward unit normal to $\Omega(t)$, $\vec{n}^\perp = (-n,m)$, and $s(\vec{x}')$ is arc-length on $\partial\Omega(t)$. We suppose that $\partial\Omega(t)$ is given by the parametric equation $\vec{x} = \vec{X}(\gamma,t)$, where $\vec{X}(\cdot,t) \colon \mathbb{T}\to \mathbb{R}^2$. Since $\theta$ satisfies the transport equation \eqref{sqg1}, the curve $\partial\Omega(t)$ moves with normal velocity $\vec{X}_t\cdot\vec{n} = \vec{u}\cdot\vec{n}$. If $0<\alpha \le 1$, then the tangential component of \eqref{fl-u} is unbounded on $\partial\Omega(t)$, but the normal component is well-defined, and the motion of the curve is determined solely by its normal velocity. 
The equation for $\vec{X}$ is therefore \begin{equation}\label{cde-cpt} \vec{X}_t(\gamma, t) = c(\gamma, t) \vec{X}_\gamma(\gamma, t) + \int_\mathbb{T} G(|\vec{X}(\gamma,t) - \vec{X}(\gamma', t)|) \left[\vec{X}_{\gamma}(\gamma, t) - \vec{X}_{\gamma'}(\gamma',t)\right] \,\mathrm{d}{\gamma'}, \end{equation} where $c(\cdot,t) \colon \mathbb{T} \to \mathbb{R}$ is an arbitrary smooth function that corresponds to a time-dependent reparametrization of the curve. The inclusion of the term proportional to the tangent vector $\vec{X}_\gamma$ in the integral ensures that the integral converges for $0<\alpha\le 1$; this term is not required for $1<\alpha \le 2$, since $G(|\vec{X} - \vec{X}'|)$ is locally integrable, and it could be absorbed into $c$ in that case. If $1\le \alpha \le 2$, there is a difficulty in extending the contour dynamics equation to an infinite front $y=\varphi(x,t)$ where $\varphi(\cdot,t) \colon \mathbb{R} \to \mathbb{R}$. In that case, $\vec{X}(x,t) = \left(x,\varphi(x,t)\right)$, and we get formally from \eqref{cde-cpt} that $c=0$ and \begin{align} \varphi_t(x, t) & = \int_\mathbb{R} G\left(\sqrt{(x - x')^2 + (\varphi(x, t) - \varphi(x', t))^2}\right) [\varphi_x(x, t) - \varphi_{x'}(x', t)] \,\mathrm{d}{x'}. \label{Req} \end{align} This equation makes sense for $0<\alpha < 1$ if $\varphi$ is a smooth, rapidly decaying (or bounded) function of $x$, since the integral converges at $x=x'$ and at infinity \cite{CoGoIo}. However, it does not make sense for $1 \leq \alpha \leq 2$, since $G$ is not integrable at infinity and $\varphi_x(x, t)$ does not decay as $x'\to \infty$. Roughly speaking, we have to regularize a short-distance ``ultraviolet'' singularity when $0<\alpha\le 1$, caused by the infinite tangential velocity on the front, and a long-distance ``infrared'' singularity when $1\le \alpha\le 2$, caused by the slow decay of the Green's function. 
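A simple consistency check of \eqref{cde-cpt}: a circular patch is a steady solution, so its boundary velocity should have no normal component. The Python sketch below (an added illustration with arbitrary resolution; $\alpha = 3/2$ and $c = 0$, so $G$ is locally integrable and the self-interaction term in the integrand vanishes at $\gamma'=\gamma$) discretizes the integral on the unit circle and confirms that the computed velocity is purely tangential:

```python
import numpy as np

alpha = 1.5                      # any value in (1, 2) keeps G locally integrable
N = 512
g = 2 * np.pi * np.arange(N) / N
X = np.stack([np.cos(g), np.sin(g)], axis=1)      # unit-circle patch boundary
Xg = np.stack([-np.sin(g), np.cos(g)], axis=1)    # tangent X_gamma

def velocity(i):
    """Discretized right-hand side of the contour dynamics equation at node i (c = 0)."""
    d = X[i] - X
    r = np.hypot(d[:, 0], d[:, 1])
    r_safe = np.where(r > 0, r, 1.0)
    # the full integrand G(|X - X'|)(X_gamma - X_gamma') vanishes as gamma' -> gamma,
    # so the self-term is set to zero
    G = np.where(r > 0, r_safe ** (alpha - 2.0), 0.0)
    return (G[:, None] * (Xg[i] - Xg)).sum(axis=0) * (2 * np.pi / N)

U = np.array([velocity(i) for i in range(N)])
normal_speed = np.abs(np.sum(U * X, axis=1))       # radial component: should vanish
tangential_speed = np.sum(U * Xg, axis=1)          # uniform and nonzero along the circle
```

On this grid the radial component vanishes to rounding error by symmetry, while the tangential component is a nonzero constant, consistent with a steadily rotating reparametrization of a fixed circle.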
The SQG equation --- which is the primary case of interest here --- is peculiar in that it exhibits both infrared and ultraviolet singularities. To regularize the long-distance singularity, we introduce a long-range cutoff parameter $\lambda$, make a Galilean transformation into a reference frame moving with a suitable velocity $v(\lambda)$, where $v(\lambda)\to\infty$ as $\lambda\to\infty$, and take the limit $\lambda\to\infty$. The need for a Galilean transformation to get a well-defined limit can be seen directly in the case of the Euler equations. For example, suppose one regards a planar vorticity discontinuity as the limit of a flow in a wide channel $-h <y<h$ as $h\to \infty$. If one requires that the tangential flow on the channel boundaries $y=\pm h$ is equal to zero, then the corresponding shear flow is $\vec{u} = (u(y),0)$ with $u(y) = \theta_0 (|y| - h)$. Thus, one needs to make a Galilean transformation $x\mapsto x+\theta_0 h t$ in order to get a well-defined limit as $h\to \infty$. This regularization would lead to the same equations as the ones derived here for a nonplanar front, but it appears to be more complicated to implement. For the SQG equations, we have $\vec{u} = R^\perp \theta$, where $R^\perp$ is the perpendicular Riesz transform. The Riesz transform of an $L^{\infty}$-function belongs, in general, to $\text{BMO}$, and $\text{BMO}$-functions are only defined modulo an additive constant. Thus, an alternative regularization procedure for SQG fronts would be to determine $\vec{u}\in \text{BMO}(\mathbb{R}^2)$ modulo a constant and derive the contour dynamics equations from that velocity. This procedure would presumably lead to equivalent equations to the ones derived here, but it also appears to be more complicated to implement. \subsection{Cutoff Regularization} We consider a front $y=\varphi(x,t)$ across which $\theta$ jumps from $\theta_0/2$ to $-\theta_0/2$. 
After a change of variables $x'=x+\eta$ in \eqref{Req}, we introduce a large cutoff parameter $\lambda > 0$ to get the truncated equation \begin{equation}\label{lambda-eveqn0} \varphi_t(x, t) = \int_{-\lambda}^{\lambda}G\left(\sqrt{\eta^2 + (\varphi(x, t) - \varphi(x + \eta, t))^2}\right) \left[\varphi_x(x, t) - \varphi_x(x + \eta, t)\right] \,\mathrm{d}{\eta}. \end{equation} We assume that $\varphi(\cdot,t) \colon \mathbb{R} \to \mathbb{R}$ is a smooth bounded function with bounded first derivative. The integral in \eqref{lambda-eveqn0} converges since $\eta G(\eta)$ is locally integrable for $\alpha > 0$ when $G(\eta)$ is given by \eqref{defG}. It is convenient to write \eqref{lambda-eveqn0} in the conservative form \begin{equation}\label{lambda-eveqn1} \varphi_t(x, t) = \partial_x \int_{-\lambda}^{\lambda} F\left(\eta, \varphi(x, t) - \varphi(x + \eta, t)\right) \,\mathrm{d}{\eta}, \end{equation} where $F$ is defined by \begin{equation} F(x, y) = \int_0^y G\left(\sqrt{x^2 + s^2}\right) \,\mathrm{d}{s}. \label{defF} \end{equation} To take the limit $\lambda\to \infty$, we write \eqref{lambda-eveqn1} as \begin{align} \begin{split} &\varphi_t(x, t) + \partial_x \int_{-\lambda}^{\lambda} K(\eta, \varphi(x, t) - \varphi(x + \eta, t)) \,\mathrm{d}{\eta} - \partial_x \int_{-\lambda}^{\lambda} G(\eta)\left[ \varphi(x, t) - \varphi(x + \eta, t) \right] \,\mathrm{d}{\eta} = 0, \end{split} \label{lambda-eveqn2} \end{align} where \begin{align} \begin{split} K(x, y) & = G(x)y - F(x, y). \end{split} \label{defK} \end{align} First, we consider the nonlinear term in \eqref{lambda-eveqn2}. We find from \eqref{defF} that \begin{equation*} F(x, y) = G(x)y + \O\left(\frac{G'(x)}{x}\right) \qquad \text{as $\abs{x} \to \infty$ with $y$ fixed}, \end{equation*} so when $G$ is given by \eqref{defG} and $\varphi(\cdot,t)$ is bounded, we have \begin{equation} K\left(\eta,\varphi(x, t) - \varphi(x + \eta, t)\right) = \O\left(\frac{1}{|\eta|^{4-\alpha}}\right) \qquad \text{as $|\eta| \to \infty$}.
\label{decayK} \end{equation} It follows that \[ \lim_{\lambda\to\infty} \int_{-\lambda}^{\lambda} K\left(\eta, \varphi(x, t) - \varphi(x + \eta, t)\right) \,\mathrm{d}{\eta} = \int_{\mathbb{R}} K(\eta, \varphi(x, t) - \varphi(x + \eta, t)) \,\mathrm{d}{\eta}, \] since the integral converges on $\mathbb{R}$. Next, we consider the linear term \[ \L_\lambda \varphi(x,t) = -\int_{-\lambda}^{\lambda} G(\eta)\left[\varphi(x, t) - \varphi(x + \eta, t)\right] \,\mathrm{d}{\eta}, \] where $\eta G(\eta)$ is locally integrable. There are three cases, depending on whether the Green's function $G(\eta)$ is: (a) nonintegrable at $0$ and integrable at infinity ($0<\alpha<1$); (b) integrable at $0$ and nonintegrable at infinity ($1< \alpha \le 2$); (c) nonintegrable at both $0$ and $\infty$ ($\alpha =1$). We consider each of them in turn. In case (a), we have $\L_\lambda \varphi \to \L\varphi$ as $\lambda \to \infty$, where \begin{equation*} \L \varphi(x,t) = -\int_{\mathbb{R}} G(\eta)\left[\varphi(x, t) - \varphi(x + \eta, t)\right] \,\mathrm{d}{\eta}. \end{equation*} This operator is translation invariant; its symbol $b(k)$, such that $\L e^{ik x} = b(k) e^{ik x}$, is the function \begin{equation} b(k) = -\int_{\mathbb{R}} G(\eta)\left(1 - e^{ik\eta}\right) \,\mathrm{d}{\eta}. \label{defLa} \end{equation} Thus, the limit of \eqref{lambda-eveqn2} as $\lambda\to \infty$ is \begin{equation}\label{conseqn} \varphi_t(x, t) + \partial_x \int_{\mathbb{R}} K(\eta, \varphi(x, t) - \varphi(x + \eta, t)) \,\mathrm{d}{\eta} + \L \varphi_x(x, t) = 0. \end{equation} Taking the $x$-derivative under the integral sign in \eqref{conseqn} and using \eqref{defK}, we get the non-conservative form of the regularized equation in \eqref{nonconseqn}.
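For $0<\alpha<1$, the integral in \eqref{defLa} converges absolutely, and one can compare it numerically with the closed form $b(k) = b_\alpha|k|^{1-\alpha}$, where $b_\alpha$ is given by \eqref{a-const}. The quadrature grid and cutoff in the following Python sketch are arbitrary illustrative choices; the tail beyond the cutoff is estimated by $\int_A^\infty \eta^{\alpha-2}\,\mathrm{d}\eta$, dropping the small oscillatory part:

```python
import numpy as np
from math import gamma, pi, sin

alpha = 0.5
b_alpha = 2 * sin(pi * alpha / 2) * gamma(alpha - 1)    # negative, since Gamma(-1/2) < 0

def symbol(k, A=1.0e4, n=2_000_000):
    # b(k) = -2 * int_0^infty eta^(alpha - 2) * (1 - cos(k*eta)) d eta, for 0 < alpha < 1
    eta = np.geomspace(1e-8, A, n)
    f = eta ** (alpha - 2.0) * (1.0 - np.cos(k * eta))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(eta))    # trapezoidal rule
    tail = A ** (alpha - 1.0) / (1.0 - alpha)                   # int_A^infty eta^(alpha-2)
    return -2.0 * (integral + tail)
```

For $\alpha = 1/2$ one finds $b(1) \approx b_{1/2} = -2\sqrt{2\pi}$, and the computed $b(k)$ scales like $|k|^{1/2}$, as the closed form predicts.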
In case (b), we write \begin{align*} \L_\lambda \varphi(x,t) &= v(\lambda)\varphi(x,t) + \int_{-\lambda}^{\lambda} G(\eta) \varphi(x + \eta, t) \,\mathrm{d}{\eta}, \end{align*} where \begin{align} v(\lambda) = - 2\int_{0}^\lambda G(\eta) \,\mathrm{d}{\eta}, \label{defvb} \end{align} which diverges as $\lambda\to\infty$. Then \eqref{lambda-eveqn2} becomes \begin{align*} \begin{split} &\varphi_t(x, t) + v(\lambda)\varphi_x(x,t) + \partial_x \int_{-\lambda}^{\lambda} K(\eta, \varphi(x, t) - \varphi(x + \eta, t)) \,\mathrm{d}{\eta} + \partial_x \int_{-\lambda}^{\lambda} G(\eta) \varphi(x + \eta, t) \,\mathrm{d}{\eta} = 0. \end{split} \end{align*} We make a Galilean transformation $x \mapsto x - v(\lambda)t$ into a reference frame moving with velocity $v(\lambda)$, which removes the term $v\varphi_x$, and then let $\lambda\to \infty$, which gives the regularized equation \eqref{conseqn} with \begin{equation*} \L\varphi(x,t) = \int_{\mathbb{R}} G(\eta) \varphi(x + \eta, t) \,\mathrm{d}{\eta}. \end{equation*} This integral converges if $\varphi(\cdot,t)$ decays sufficiently rapidly at infinity, and can be interpreted in a distributional sense in other cases. The symbol of $\L$, \begin{equation} b(k) = \int_\mathbb{R} G(\eta) e^{ik \eta} \,\mathrm{d}{\eta}, \label{defLb} \end{equation} is well-defined as a tempered distribution since $G(\eta)$ is locally integrable and has, at most, slow growth as $|\eta|\to\infty$. In case (c), we write \begin{align*} \L_\lambda \varphi(x,t) &= v(\lambda) \varphi(x,t) + \int_{1<|\eta| <\lambda} G(\eta) \varphi(x + \eta, t) \,\mathrm{d}{\eta} - \int_{|\eta| <1} G(\eta)\left[\varphi(x, t) - \varphi(x + \eta, t)\right] \,\mathrm{d}{\eta}, \end{align*} where \begin{equation} v(\lambda) = -2\int_{1}^{\lambda} G(\eta) \,\mathrm{d}{\eta}.
\label{defvc} \end{equation} Making a Galilean transformation $x\mapsto x-v(\lambda) t$, and taking the limit of the resulting equation as $\lambda \to \infty$, we get the regularized equation \eqref{conseqn} with \begin{equation*} \L \varphi(x,t) = \int_{|\eta| >1} G(\eta) \varphi(x + \eta, t) \,\mathrm{d}{\eta} - \int_{|\eta| <1} G(\eta)\left[\varphi(x, t) - \varphi(x + \eta, t)\right] \,\mathrm{d}{\eta}. \end{equation*} In this case, the symbol of $\L$ is the sum of a tempered distribution and a function, \begin{equation} b(k) = \int_{|\eta| >1} G(\eta) e^{ik\eta}\,\mathrm{d}{\eta} -\int_{|\eta| <1} G(\eta)\left[1 - e^{ik\eta}\right] \,\mathrm{d}{\eta}. \label{defLc} \end{equation} \subsection{Regularized front equations} In this section, we write out the specific form of the regularized front equations for the Euler, SQG, and gSQG equations. \subsubsection{Euler equation $(\alpha = 2)$} \label{front:euler} The Green's function for the Euler equation is \[G(x) = -\frac{1}{2\pi} \log|x|.\] It follows from \eqref{defF} and \eqref{defK} that \[\begin{aligned} F(x, y) & = -\frac{1}{2\pi} \left\{y \log\sqrt{x^2 + y^2} + x \tan^{-1}\left(\frac{y}{x}\right) - y\right\}, \\ K(x,y) &= \frac{1}{2\pi} \left\{y \log\left[\frac{\sqrt{x^2 + y^2}}{|x|}\right] + x \tan^{-1}\left(\frac{y}{x}\right) - y\right\}. \end{aligned}\] In addition, the velocity \eqref{defvb} used in the Galilean transformation is \[ v(\lambda) = \frac 1\pi \left(\lambda \log\lambda - \lambda\right). \] Using the distributional Fourier transform of the logarithm \cite{Vlad}, we get from \eqref{defLb} that the symbol of $\L$ is given by \begin{align*} b(k) &= -\frac{1}{2\pi}\mathcal{F}\left[\log|\cdot|\right](-k) = \gamma \delta(-k) + \frac{1}{2} \text{p.f.} \frac{1}{|k|}, \end{align*} where $\gamma$ is the Euler-Mascheroni constant. It follows that $\L\partial_x = -\frac{1}{2}\mathbf{H}$ where $\mathbf{H}$ is the Hilbert transform with symbol $-i\sgn k$. 
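As an aside, the closed form of $F$ above can be checked against the defining integral \eqref{defF}, $F(x,y)=\int_0^y G(\sqrt{x^2+s^2})\,\mathrm{d}{s}$, by quadrature. A minimal sketch, assuming NumPy and SciPy (the sample point $(x,y)=(0.7,1.3)$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def G_euler(r):
    # Euler Green's function, G(r) = -(1/2 pi) log r
    return -np.log(r) / (2.0 * np.pi)

def F_closed(x, y):
    # Closed form of F stated above
    return -(y * np.log(np.sqrt(x ** 2 + y ** 2))
             + x * np.arctan(y / x) - y) / (2.0 * np.pi)

def F_quad(x, y):
    # Defining integral F(x, y) = int_0^y G(sqrt(x^2 + s^2)) ds
    return quad(lambda s: G_euler(np.sqrt(x ** 2 + s ** 2)), 0.0, y)[0]

x, y = 0.7, 1.3  # arbitrary sample point
print(F_closed(x, y), F_quad(x, y))
```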
Thus, the regularized equation for vorticity fronts is \[\begin{aligned} \varphi_t(x, t) & + \frac{1}{2 \pi} \partial_x \int_\mathbb{R} \bigg\{[\varphi(x, t) - \varphi(x + \eta, t)] \log\left[\frac{\sqrt{\eta^2 + [\varphi(x, t) - \varphi(x + \eta, t)]^2}}{|\eta|}\right]\\ & \quad + \eta \tan^{-1} \left(\frac{\varphi(x, t) - \varphi(x + \eta, t)}{\eta}\right) - [\varphi(x, t) - \varphi(x + \eta, t)]\bigg\} \,\mathrm{d}{\eta} = \frac 12 \hilbert \varphi(x, t), \end{aligned}\] and the non-conservative form of the equation is \[ \varphi_t(x, t) + \frac{1}{2\pi} \int_\mathbb{R} [\varphi_x(x, t) - \varphi_x(x + \eta, t)] \log\left[\frac{\sqrt{\eta^2 + [\varphi(x, t) - \varphi(x + \eta, t)]^2}}{|\eta|}\right]\,\mathrm{d}{\eta} = \frac 12 \hilbert \varphi(x, t). \] \subsubsection{SQG equation $(\alpha = 1)$} \label{front:sqg} The Green's function for the SQG equation is \[ G(x) = \frac{1}{|x|}. \] It follows from \eqref{defF} and \eqref{defK} that \begin{align*} F(x, y) & = \sinh^{-1}\left(\frac{y}{\abs{x}}\right), \qquad K(x,y) =\frac{y}{\abs{x}} - \sinh^{-1}\left(\frac{y}{\abs{x}}\right). \end{align*} In addition, the velocity \eqref{defvc} used in the Galilean transformation is \[ v(\lambda) = -2\log \lambda. \] We find from \eqref{defLc} that \begin{align*} b(k) &= 2\left(\int_1^\infty \frac{\cos (k\eta)}{\eta}\,\mathrm{d}{\eta} - \int_0^1 \frac{1-\cos (k\eta)}{\eta} \,\mathrm{d}{\eta} \right) \\ &= v_1 - 2\log|k|, \\ v_1 &= 2\left(\int_1^\infty \frac{\cos\eta}{\eta}\,\mathrm{d}{\eta} - \int_0^1 \frac{1-\cos\eta}{\eta}\,\mathrm{d}{\eta}\right). \end{align*} We can absorb $v_1$ into $v(\lambda)$ by the use of a Galilean transformation $x\mapsto x - [v_1 + v(\lambda)]t$, and then the remaining part of the linear operator is $\L = -2\log |\partial_x|$. 
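Since $b(k) = v_1 - 2\log|k|$, differences of the symbol are pure logarithms, which gives a convenient numerical cross-check of the computation above. An illustrative sketch assuming SciPy (the `weight='cos'` option handles the oscillatory tail on the infinite interval):

```python
import numpy as np
from scipy.integrate import quad

def b_sqg(k):
    # b(k) = 2( int_1^inf cos(k eta)/eta d eta
    #           - int_0^1 (1 - cos(k eta))/eta d eta ),
    # with 1 - cos u written as 2 sin^2(u/2) for numerical stability
    outer = quad(lambda e: 1.0 / e, 1.0, np.inf, weight='cos', wvar=k)[0]
    inner = quad(lambda e: 2.0 * np.sin(0.5 * k * e) ** 2 / e, 0.0, 1.0)[0]
    return 2.0 * (outer - inner)

# b(k) = v1 - 2 log|k|, so b(2) - b(1) should equal -2 log 2
d = b_sqg(2.0) - b_sqg(1.0)
print(d, -2.0 * np.log(2.0))
```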
Thus, the regularized equation for SQG fronts is \[\varphi_t(x, t) + \partial_x \int_\mathbb{R} \bigg\{\frac{\varphi(x, t) - \varphi(x + \eta, t)}{\abs{\eta}} - \sinh^{-1}\bigg[\frac{\varphi(x, t) - \varphi(x + \eta, t)}{\abs{\eta}}\bigg]\bigg\} \,\mathrm{d}{\eta} = 2 \log\abs{\partial_x} \varphi_x(x, t),\] and the non-conservative form of the equation is \[ \varphi_t(x, t) + \int_\mathbb{R} [\varphi_x(x, t) - \varphi_x(x + \eta, t)] \bigg\{\frac{1}{\abs{\eta}} - \frac{1}{\sqrt{\eta^2 + [\varphi(x, t) - \varphi(x + \eta, t)]^2}}\bigg\} \,\mathrm{d}{\eta} = 2 \log\abs{\partial_x} \varphi_x(x, t). \] \subsubsection{gSQG equation} \label{front:gsqg} The Green's function for the gSQG equation is \[G(x) = \frac{1}{|x|^{2 - \alpha}}.\] For $1 < \alpha < 2$, it follows from \eqref{defF} and \eqref{defK} that \begin{align*} F(x, y) &= \int_0^y \frac{1}{(x^2 + s^2)^{(2 - \alpha)/2}} \,\mathrm{d}{s}, \qquad K(x, y) = \frac{y}{|x|^{2 - \alpha}} - \int_0^y \frac{1}{(x^2 + s^2)^{(2 - \alpha)/2}} \,\mathrm{d}{s}. \end{align*} In addition, from \eqref{defvb}, the velocity used in the regularization is \[ v(\lambda) = -\frac{2 \lambda^{\alpha-1}}{\alpha-1}. \] We find from \eqref{defLb}, that \begin{align*} b(k) & = b_\alpha\abs{k}^{1-\alpha}, \qquad b_\alpha = 2\int_0^\infty \frac{\cos \eta}{\eta^{2-\alpha}}\,\mathrm{d}\eta = 2 \sin\left(\frac{ \pi\alpha}{2}\right) \Gamma(\alpha-1), \end{align*} where $b_\alpha > 0$ is given by \eqref{a-const}. The corresponding operator is $\L = b_\alpha |\partial_x|^{1-\alpha}$. 
Thus, the regularized equation for gSQG fronts with $1<\alpha<2$ is \[\varphi_t(x, t) + \partial_x \int_\mathbb{R} \Biggl\{\bigg[\frac{\varphi(x, t) - \varphi(x + \eta, t)}{\abs{\eta}^{2 - \alpha}}\bigg] - F\left(\eta, \varphi(x, t) - \varphi(x + \eta, t)\right)\Biggr\} \,\mathrm{d}{\eta} + b_\alpha |\partial_x|^{1-\alpha} \varphi_x(x, t)= 0,\] and the non-conservative form of the equation is \begin{equation}\label{reg-1a2}\begin{aligned} &\varphi_t(x, t) + \int_\mathbb{R} [\varphi_x(x, t) - \varphi_x(x + \eta, t)] \Biggl\{ \frac{1}{\abs{\eta}^{2 - \alpha}} - \frac{1}{\left(\eta^2 + [\varphi(x, t) - \varphi(x + \eta, t)]^2\right)^{(2- \alpha) / 2}} \Biggr\} \,\mathrm{d}{\eta} \\ &\qquad\qquad\qquad + b_\alpha \abs{\partial_x}^{1 - \alpha} \varphi_x(x, t) = 0. \end{aligned}\end{equation} The derivation of the regularized gSQG equation for $0 < \alpha < 1$ is similar to the case of $1 < \alpha < 2$, except that we do not need to make a Galilean transformation to obtain a finite limit, and we find $b(k)$ from \eqref{defLa}. One obtains the same equation \eqref{reg-1a2} as in the case $1<\alpha<2$, where $b_\alpha<0$ is given by \eqref{a-const}. This equation agrees with the gSQG equation for $0 < \alpha < 1$ that is analyzed in \cite{CoGoIo}. \subsection{Spatially periodic solutions} The previous equations do not require that $\varphi(\cdot,t)$ is rapidly decreasing; in particular, they apply to smooth periodic solutions $\varphi(\cdot,t) \colon \mathbb{T} \to \mathbb{R}$ where $\mathbb{T} = \mathbb{R}/2\pi\mathbb{Z}$. The symbol of the linear operator $\L$ remains the same. Moreover, we can write the nonlinear term in \eqref{conseqn} as \begin{align*} \int_{\mathbb{R}} K(\eta, \varphi(x, t) - \varphi(x + \eta, t)) \,\mathrm{d}{\eta} &=\int_\mathbb{T} K_p(\eta, \varphi(x, t) - \varphi(x + \eta, t)) \,\mathrm{d}{\eta}, \\ K_p(x,y) &= \sum_{n\in \mathbb{Z}} K(x + 2n\pi, y). \end{align*} The sum defining $K_p$ converges because of \eqref{decayK}.
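This convergence is easy to see numerically: for the SQG kernel $K(x,y) = y/|x| - \sinh^{-1}(y/|x|)$, the terms of the periodization decay like $|n|^{-3}$, so the partial sums settle quickly. An illustrative sketch assuming NumPy (the sample point is arbitrary):

```python
import numpy as np

def K_sqg(x, y):
    # SQG kernel K(x, y) = y/|x| - asinh(y/|x|), of size O((y/|x|)^3)
    u = y / np.abs(x)
    return u - np.arcsinh(u)

def K_p_partial(x, y, N):
    # Partial sum of the periodization K_p(x, y) = sum_n K(x + 2 pi n, y)
    n = np.arange(-N, N + 1)
    return np.sum(K_sqg(x + 2.0 * np.pi * n, y))

x, y = 0.3, 0.8  # arbitrary sample point
s100, s200, s400 = (K_p_partial(x, y, N) for N in (100, 200, 400))
# The terms decay like |n|^(-3), so the tail of the sum is O(N^(-2))
print(s100, s200, s400)
```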
The conservative form of the periodic front equation is then \begin{equation*} \varphi_t(x, t) + \partial_x \int_{\mathbb{T}} K_p(\eta, \varphi(x, t) - \varphi(x + \eta, t)) \,\mathrm{d}{\eta} + \L \varphi_x(x, t) =0. \end{equation*} The non-conservative form can be written as \begin{align} \begin{split} &\varphi_t(x, t) + \int_{\mathbb{T}}\left[\varphi_x(x, t) - \varphi_x(x + \eta, t)\right] \biggl\{G_p(\eta,0) - G_p\left(\eta, \varphi(x, t) - \varphi(x + \eta, t)\right)\biggr\}\,\mathrm{d}{\eta} \\ &\hskip2in+ \L \varphi_x(x, t) =0, \end{split} \label{pernonconseqn} \end{align} where \begin{equation*} G_p(x,y) = G \left(\sqrt{x^2 + y^2}\right) + \sum_{n\in\mathbb{Z}_*}\left[G \left(\sqrt{(x+2\pi n)^2 + y^2}\right) - G(2\pi n)\right] \end{equation*} is the Green's function of $(-\Delta)^{\alpha/2}$ on the cylinder $\mathbb{T}\times \mathbb{R}$, and $\mathbb{Z}_* = \mathbb{Z}\setminus\{0\}$. One can verify that \eqref{pernonconseqn} is equivalent, up to a Galilean transformation, to the straightforward contour dynamics equation on a cylinder, \begin{align*} \begin{split} &\varphi_t(x, t) - \int_{\mathbb{T}}\left[\varphi_x(x, t) - \varphi_x(x + \eta, t)\right] G_p\left(\eta,\varphi(x, t) - \varphi(x + \eta, t)\right)\,\mathrm{d}{\eta} = 0. \end{split} \end{align*} However, \eqref{pernonconseqn} explicitly separates the linear dispersive term from the cubic-order nonlinearity. For the Euler equation with \[G(\eta) = -\frac{1}{2\pi} \log|\eta|,\] we get from the Euler product formula for $\sin z$ that \begin{align*} G_p(x,y) &= -\frac{1}{2\pi} \log \left|\sin\left(\frac{z}{2}\right)\right|, \end{align*} where $z=x+iy$. For the periodic SQG front equation with $G(\eta)=1/|\eta|$, we have \begin{align*} G_p(x,y) &= \frac{1}{\sqrt{x^2 +y^2}} + \sum_{n \in \mathbb{Z}_*}\left\{ \frac{1}{\sqrt{(x+2\pi n)^2 +y^2}} - \frac{1}{2\pi |n|}\right\}. 
\end{align*} \subsection{Hamiltonian structure} Let $\H$ be a functional of $\varphi \colon \mathbb{R} \to \mathbb{R}$ of the form \[ \H(\varphi) = \frac 12 \iint_{\mathbb{R} \times \mathbb{R}} H(x - x', \varphi - \varphi') \,\mathrm{d}{x}\,\mathrm{d}{x'}, \] where $H(x,y)$ is an even function of $x, y\in \mathbb{R}$ and $\varphi = \varphi(x)$, $\varphi' = \varphi(x')$. The variational derivative of $\mathcal{H}$ is given by \[ \frac{\delta \H}{\delta \varphi(x)} = \int_\mathbb{R} K(x - x', \varphi - \varphi') \,\mathrm{d}{x'}, \] where $K(x, y) = H_y(x, y)$. Thus, the conservative front equation \eqref{conseqn} has the Hamiltonian form \begin{equation} \varphi_t + \partial_x \bigg[\frac{\delta \H}{\delta \varphi}\bigg]= 0, \label{hameqn} \end{equation} where $\partial_x$ is the Hamiltonian operator and the Hamiltonian is \begin{align*} \H(\varphi) &= \frac 12 \iint_{\mathbb{R} \times \mathbb{R}} H\left(x - x', \varphi - \varphi'\right) \,\mathrm{d}{x}\,\mathrm{d}{x'} + \frac 12 \int_{\mathbb{R}} \varphi \L \varphi \,\mathrm{d}{x}, \\ H(x,y) &= \int_0^y K(x,s)\,\mathrm{d}{s}. \end{align*} The corresponding conserved momentum, which generates spatial translations, is \[\P(\varphi) = \frac 12 \int_\mathbb{R} \varphi^2 \,\mathrm{d}{x}.\] \section{Approximate equation} \label{sec:approx} In this section, we derive the approximate equation \eqref{sqg_eq} for fronts with small slopes by truncating the nonlinearity in the full equation \eqref{nonconseqn} at cubic terms. It follows from \eqref{defF} that \[\begin{aligned} F(x, y) & = |x|\left\{G(x) \frac{y}{|x|} + \frac 16 x G'(x) \frac{y^3}{|x|^3} + \O\left(\frac{y^5}{\abs{x}^5}\right)\right\} \qquad \text{as}\quad \frac{y}{|x|} \to 0. 
\end{aligned}\] Retaining the lowest order terms in $y$, we find that the kernel $K$ in \eqref{defK} has the approximation \[K(x, y) \sim - \frac{1}{6}\frac{G'(x)}{x}y^3.\] Thus, the cubic approximation of the conservative equation \eqref{conseqn} is \begin{equation}\label{consapprox} \varphi_t(x, t) -\frac 16 \partial_x \int_\mathbb{R} \frac{G'(\eta)}{\eta} \left [\varphi(x, t) - \varphi(x + \eta, t)\right]^3 \,\mathrm{d}{\eta} + \L \varphi_x(x, t) = 0. \end{equation} Equation \eqref{consapprox} is equivalent to \eqref{sqg_eq}, as we show by writing it in spectral form. \subsection{Spectral equation} For definiteness, we suppose that $\varphi \colon \mathbb{R} \to \mathbb{R}$ is a smooth function that decreases sufficiently rapidly at infinity, with Fourier transform \[ \hat{\varphi}(k) = \frac{1}{2\pi} \int_\mathbb{R} \varphi(x) e^{-ik x}\,\mathrm{d}{x}. \] The same results apply to periodic functions $\varphi \colon \mathbb{T} \to \mathbb{R}$, with Fourier transforms replaced by Fourier series. Then \begin{equation} -\int_\mathbb{R} \frac{G'(\eta)}{\eta}\left [\varphi(x) - \varphi(x + \eta)\right]^3 \,\mathrm{d}{\eta} = \int_{\mathbb{R}^3} T(k_2, k_3, k_4) \hat{\varphi}(k_2) \hat{\varphi}(k_3) \hat{\varphi}(k_4) e^{i(k_2 + k_3 + k_4) x} \,\mathrm{d}{k_2} \,\mathrm{d}{k_3} \,\mathrm{d}{k_4}, \label{defnonlin} \end{equation} where \begin{align} \begin{split} T(k_2, k_3, k_4) & = -\Re \int_\mathbb{R} \frac{G'(\eta)}{\eta} [(1 - e^{i k_2 \eta})(1 - e^{i k_3 \eta})(1 - e^{i k_4 \eta})] \,\mathrm{d}{\eta} \\ & = -\Re \int_\mathbb{R} \frac{G'(\eta)}{\eta} \bigg\{\left(1 - e^{i k_2 \eta}\right) + \left(1 - e^{i k_3 \eta}\right) + \left(1 - e^{i k_4 \eta}\right) + \left(1 - e^{i (k_2 + k_3 + k_4) \eta}\right)\\ & \qquad\qquad\qquad\qquad- \left(1 - e^{i (k_2 + k_3) \eta}\right) - \left(1 - e^{i (k_2 + k_4) \eta}\right) - \left(1 - e^{i (k_3 + k_4) \eta}\right)\bigg\} \,\mathrm{d}{\eta}. 
\end{split} \label{defTint} \end{align} We assume that $\eta^2 G'(\eta)$ is integrable at $0$ and $G'(\eta)/\eta$ is integrable at infinity, as is the case for the Green's function \eqref{defG}, and consider three cases, depending on whether: (a) $\eta G'(\eta)$ is integrable at $0$ ($1<\alpha\le 2$); (b) $\eta G'(\eta)$ is nonintegrable at $0$ and integrable at infinity ($0<\alpha<1$); (c) $\eta G'(\eta)$ is nonintegrable at $0$ and nonintegrable at infinity ($\alpha =1$). In case (a), we write $T$ in \eqref{defTint} as \begin{equation} T(k_2, k_3, k_4) = a(k_2) + a(k_3) + a(k_4) + a(k_2 + k_3 + k_4) - a(k_2 + k_3) - a(k_2 + k_4) - a(k_3 + k_4), \label{defT} \end{equation} where $a \colon \mathbb{R} \to \mathbb{R}$ is defined by \begin{equation} a(k) = -\int_\mathbb{R} \frac{G'(\eta)}{\eta}\left[1 - \cos(k \eta)\right] \,\mathrm{d}{\eta}. \label{defaa} \end{equation} In case (b), we use the cancelation \begin{equation} k_2^2 + k_3^2 + k_4^2 + (k_2 + k_3 + k_4)^2 - (k_2 + k_3)^2 - (k_2 + k_4)^2 - (k_3 + k_4)^2 = 0 \label{kcancel} \end{equation} in \eqref{defTint} and write $T$ as \eqref{defT} where \begin{equation} a(k) = -\int_\mathbb{R} \frac{G'(\eta)}{\eta}\left[1 - \frac{1}{2} (k\eta)^2 - \cos(k \eta)\right] \,\mathrm{d}{\eta}. \label{defab} \end{equation} In case (c), we use this cancelation only for $|\eta| < 1$ and write $T$ as \eqref{defT} where \begin{equation} a(k) = -\int_{|\eta|< 1} \frac{G'(\eta)}{\eta}\left[1 - \frac{1}{2} (k\eta)^2 - \cos(k \eta)\right] \,\mathrm{d}{\eta} - \int_{|\eta|> 1} \frac{G'(\eta)}{\eta}\left[1 - \cos(k \eta)\right] \,\mathrm{d}{\eta}. \label{defac} \end{equation} In order to give a symmetric expression for $T$, it is convenient to introduce another variable $k_1$ and define $S \colon \mathbb{R}^4 \to \mathbb{R}$ by \begin{align} \begin{split} S(k_1, k_2, k_3, k_4) & = a(k_1) + a(k_2) + a(k_3) + a(k_4) \\ &- \frac 12 \bigg\{a(k_1 + k_2) + a(k_1 + k_3) + a(k_1 + k_4) + a(k_2 + k_3) + a(k_2 + k_4) + a(k_3 + k_4)\bigg\}. 
\end{split} \label{defS} \end{align} Then \begin{equation} T(k_2, k_3, k_4) = S(k_1, k_2, k_3, k_4) \qquad \text{on $k_1 + k_2 + k_3 + k_4 = 0$}. \label{defST} \end{equation} Using \eqref{defnonlin} and \eqref{defST} in \eqref{consapprox}, we see that the spectral form of \eqref{consapprox} is \begin{align}\label{specapprox} \begin{split} &\hat{\varphi}_t(k_1, t) + \frac 16 ik_1 \int_{\mathbb{R}^3} \delta(k_1 + k_2 + k_3 + k_4) S(k_1, k_2, k_3, k_4) \hat{\varphi}^*(k_2, t) \hat{\varphi}^*(k_3, t) \hat{\varphi}^*(k_4, t) \,\mathrm{d}{k_2}\,\mathrm{d}{k_3}\,\mathrm{d}{k_4} \\ &\qquad\qquad\qquad\qquad\qquad + i k_1 b(k_1) \hat{\varphi}(k_1, t)= 0, \end{split} \end{align} where $\delta$ denotes the delta-distribution, $\hat{\varphi}^*(k) = \hat{\varphi}(-k)$ denotes the complex conjugate of $\hat{\varphi}(k)$, and $b(k)$ is the symbol of $\L$. Using the convolution theorem and \eqref{defS} to take the inverse Fourier transform of \eqref{specapprox}, we find that the approximate equation \eqref{consapprox} can be written as \begin{equation}\label{realapprox} \varphi_t + \frac 12 \partial_x \bigg\{\varphi^2 \mathbf{A} \varphi - \varphi \mathbf{A} \varphi^2 + \frac 13 \mathbf{A} \varphi^3\bigg\} + \L \varphi_x = 0, \end{equation} where $\mathbf{A}$ is the self-adjoint operator with symbol $a(k)$. \subsection{Approximate equations} In this section, we write out the explicit form of the approximate equation derived above for Euler, SQG, and gSQG fronts. \subsubsection{Euler equation $(\alpha = 2)$} \label{approx:euler} For the Euler equation, we have \[ G'(\eta) = -\frac{1}{2 \pi \eta},\qquad \L = \frac{1}{2}|\partial_x|^{-1}, \] and the approximate equation \eqref{consapprox} for vorticity fronts is \[\varphi_t(x, t) + \frac{1}{12 \pi} \partial_x \int_\mathbb{R} \frac{[\varphi(x, t) - \varphi(x + \eta, t)]^3}{\abs{\eta}^2} \,\mathrm{d}{\eta} = \frac 12 \hilbert \varphi(x, t),\] where $\hilbert$ is the Hilbert transform.
The symbol $a$ is given by \eqref{defaa}, so \[a(k) = \frac{1}{\pi}\int_0^\infty \frac{1 - \cos(k \eta)}{\eta^2} \,\mathrm{d}{\eta} = \frac 12 \abs{k},\] with the corresponding operator \[\mathbf{A} = \frac 12 \abs{\partial_x}.\] Thus, from \eqref{realapprox}, the approximate Euler equation is \begin{equation} \varphi_t + \frac 14 \partial_x \bigg\{\varphi^2 \abs{\partial_x} \varphi - \varphi \abs{\partial_x} \varphi^2 + \frac 13 \abs{\partial_x} \varphi^3\bigg\} = \frac 12 \hilbert \varphi.\ \label{approx2} \end{equation} This equation agrees with the asymptotic equation derived in \cite{BiHu} directly from the incompressible Euler equations when the vorticity jumps from $-1/2$ to $1/2$ across the front. \subsubsection{SQG equation $(\alpha = 1)$} \label{subsec:sqg} For the SQG equation, we have \[ G'(\eta) = -\frac{1}{\eta |\eta|},\qquad \L = -2\log|\partial_x|, \] and the approximate equation \eqref{consapprox} for SQG fronts is \[\varphi_t(x, t) + \frac 16 \partial_x \int_\mathbb{R} \bigg[\frac{\varphi(x, t) - \varphi(x + \eta, t)}{\abs{\eta}}\bigg]^3 \,\mathrm{d}{\eta} = 2 \log\abs{\partial_x} \varphi_x(x, t).\] The symbol $a$ is given by \eqref{defac}, so \begin{align*} a(k) & = 2\int_0^1 \frac{1 - \frac 12 k^2 \eta^2 - \cos(k \eta)}{\eta^3} \,\mathrm{d}{\eta} + 2\int_1^\infty \frac{1 - \cos(k \eta)}{\eta^3} \,\mathrm{d}{\eta} \\ &= 2k^2 \left(\int_0^{|k|} \frac{1 - \frac 12 \eta^2 - \cos \eta}{\eta^3} \,\mathrm{d}{\eta} + \int_{|k|}^\infty \frac{1 - \cos \eta}{\eta^3} \,\mathrm{d}{\eta}\right). 
\end{align*} Writing \begin{align*} \int_0^{|k|} \frac{1 - \frac 12 \eta^2 - \cos \eta}{\eta^3} \,\mathrm{d}{\eta} + \int_{|k|}^\infty \frac{1 - \cos \eta}{\eta^3} \,\mathrm{d}{\eta} &= \frac{1}{2}C -\frac{1}{2} \int_1^{|k|} \frac{\,\mathrm{d}{\eta}}{\eta} \end{align*} where \[ C = 2\int_0^1 \frac{1 - \frac 12 \eta^2 - \cos\eta}{\eta^3} \,\mathrm{d}{\eta} + 2\int_{1}^{\infty} \frac{1 - \cos\eta}{\eta^3} \,\mathrm{d}{\eta} \] is a constant, we get that $a(k) = Ck^2-k^2 \log|k|$. The term $Ck^2$ cancels out of the expression in \eqref{defS} for $S(k_1, k_2, k_3, k_4)$ on $k_1 + k_2 + k_3 + k_4 = 0$, so we can take \begin{equation} a(k) = -k^2 \log|k|, \label{defsqga} \end{equation} which gives the kernel \begin{equation}\label{Skernelsqg}\begin{aligned} S(k_1, k_2, k_3, k_4) & = -k_1^2 \log\abs{k_1} - k_2^2 \log\abs{k_2} - k_3^2 \log\abs{k_3} - k_4^2 \log\abs{k_4}\\ & \quad + \frac 12 \bigg\{(k_1 + k_2)^2 \log\abs{k_1 + k_2} + (k_1 + k_3)^2 \log\abs{k_1 + k_3} + (k_1 + k_4)^2 \log\abs{k_1 + k_4}\\ & \qquad + (k_2 + k_3)^2 \log\abs{k_2 + k_3} + (k_2 + k_4)^2 \log\abs{k_2 + k_4} + (k_3 + k_4)^2 \log\abs{k_3 + k_4}\bigg\}. \end{aligned}\end{equation} As a result of the cancelation \eqref{kcancel}, this function is homogeneous of degree $2$ on $k_1+k_2+k_3+k_4 = 0$. The operator corresponding to \eqref{defsqga} is \[ \mathbf{A} = \partial_x^2 \log\abs{\partial_x}. \] Thus, from \eqref{realapprox}, the approximate SQG equation is \begin{equation}\label{realapproxsqg} \varphi_t + \frac 12 \partial_x \bigg\{\varphi^2 \log\abs{\partial_x} \varphi_{xx} - \varphi \log\abs{\partial_x} (\varphi^2)_{xx} + \frac 13 \log\abs{\partial_x} (\varphi^3)_{xx}\bigg\} = 2 \log\abs{\partial_x} \varphi_x. \end{equation} We remark that since the nonlinear term in \eqref{realapproxsqg} is scale-invariant, this approximate equation has the same anomalous scale-invariance \eqref{sqg_scaling} as the full SQG front equation.
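The logarithmic form of $a$ can be verified numerically: since $a(k) = Ck^2 - k^2\log|k|$, the combination $a(2)/4 - a(1)$ must equal $-\log 2$ independently of $C$. An illustrative sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

def a_sqg(k):
    # a(k) = 2 int_0^1 (1 - (k eta)^2/2 - cos(k eta)) / eta^3 d eta
    #      + 2 int_1^inf (1 - cos(k eta)) / eta^3 d eta, as in (defac);
    # 1 - cos u is written as 2 sin^2(u/2) for numerical stability
    inner = quad(lambda e: (2.0 * np.sin(0.5 * k * e) ** 2
                            - 0.5 * (k * e) ** 2) / e ** 3, 0.0, 1.0)[0]
    # int_1^inf eta^-3 d eta = 1/2, minus an oscillatory (QAWF) tail
    outer = 0.5 - quad(lambda e: e ** -3.0, 1.0, np.inf,
                       weight='cos', wvar=k)[0]
    return 2.0 * (inner + outer)

# a(k) = C k^2 - k^2 log|k|, so a(2)/4 - a(1) = -log 2 for any C
d = a_sqg(2.0) / 4.0 - a_sqg(1.0)
print(d, -np.log(2.0))
```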
\subsubsection{gSQG equation} \label{approx:gsqg} For the gSQG equation with $0<\alpha<1$ or $1 < \alpha < 2$, we have \[G'(\eta) = -\frac{2 - \alpha}{\eta |\eta|^{2 - \alpha}},\qquad \L = b_\alpha |\partial_x|^{1-\alpha}, \] and the approximate equation \eqref{consapprox} for gSQG fronts is \[\varphi_t(x, t) + \frac{1}{6}(2-\alpha)\partial_x \int_\mathbb{R} \frac{[\varphi(x, t) - \varphi(x + \eta, t)]^3}{\abs{\eta}^{4 - \alpha}} \,\mathrm{d}{\eta} + b_\alpha \abs{\partial_x}^{1 - \alpha} \varphi_x(x, t) = 0.\] If $1<\alpha < 2$, then the symbol $a$ is given by \eqref{defaa}, so \begin{align} a(k) & = 2(2 - \alpha) \int_0^\infty \frac{1 - \cos(k \eta)}{\abs{\eta}^{4 - \alpha}} \,\mathrm{d}{\eta} = c_\alpha \abs{k}^{3 - \alpha}, \label{sqgalphaa} \end{align} where \begin{align} \begin{split} c_\alpha &=2(2 - \alpha) \int_0^\infty \frac{1 - \cos\eta}{\eta^{4 - \alpha}} \,\mathrm{d}{\eta} \\ &= 4(2-\alpha)\int_0^\infty \frac{\sin^2(\eta/2)}{\eta^{4 - \alpha}} \,\mathrm{d}{\eta} \\ &= 2(2-\alpha) \sin\left(\frac{\pi\alpha}{2}\right) \Gamma(\alpha-3) \\ &= \frac{b_\alpha}{3-\alpha}, \end{split} \label{defC12} \end{align} with $b_\alpha$ defined in \eqref{a-const}. The kernel $S$ in \eqref{defS} is given by \begin{equation}\label{Skernel1a2}\begin{aligned} S(k_1, k_2, k_3, k_4) & = c_\alpha\left\{\abs{k_1}^{3 - \alpha} + \abs{k_2}^{3 - \alpha} + \abs{k_3}^{3 - \alpha} + \abs{k_4}^{3 - \alpha}\right\}\\ & \quad - \frac 12 c_\alpha\left\{\abs{k_1 + k_2}^{3 - \alpha} + \abs{k_1 + k_3}^{3 - \alpha} + \abs{k_1 + k_4}^{3 - \alpha}\right.\\ & \qquad \left.+ \abs{k_2 + k_3}^{3 - \alpha} + \abs{k_2 + k_4}^{3 - \alpha} + \abs{k_3 + k_4}^{3 - \alpha}\right\}. \end{aligned}\end{equation} This function is homogeneous of degree $3-\alpha$. 
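As a numerical aside, the value of $c_\alpha$ in \eqref{defC12} can be cross-checked by quadrature against the Gamma-function expression and against $b_\alpha/(3-\alpha)$. The sketch below assumes SciPy and uses the arbitrary sample $\alpha = 3/2$:

```python
import math
import numpy as np
from scipy.integrate import quad

alpha = 1.5  # arbitrary sample exponent, 1 < alpha < 2

# Quadrature for c_alpha = 2 (2 - alpha) int_0^inf (1 - cos eta)/eta^(4 - alpha),
# with 1 - cos eta = 2 sin^2(eta/2) for stability near eta = 0
inner = quad(lambda e: 2.0 * np.sin(0.5 * e) ** 2 / e ** (4.0 - alpha),
             0.0, 1.0)[0]
tail = (1.0 / (3.0 - alpha)
        - quad(lambda e: e ** (alpha - 4.0), 1.0, np.inf,
               weight='cos', wvar=1.0)[0])
c_num = 2.0 * (2.0 - alpha) * (inner + tail)

# Closed forms from (defC12)
c_gamma = (2.0 * (2.0 - alpha) * math.sin(math.pi * alpha / 2.0)
           * math.gamma(alpha - 3.0))
b_alpha = 2.0 * math.sin(math.pi * alpha / 2.0) * math.gamma(alpha - 1.0)
print(c_num, c_gamma, b_alpha / (3.0 - alpha))
```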
The corresponding operator is \[\mathbf{A} =c_\alpha \abs{\partial_x}^{3 - \alpha}.\] Thus, from \eqref{realapprox}, the approximate gSQG equation is \begin{equation}\label{realapprox1a2} \varphi_t + \frac {1}{2}c_\alpha \partial_x \bigg\{\varphi^2 \abs{\partial_x}^{3 - \alpha} \varphi - \varphi \abs{\partial_x}^{3 - \alpha} (\varphi^2) + \frac 13 \abs{\partial_x}^{3 - \alpha} (\varphi^3)\bigg\} + b_\alpha \abs{\partial_x}^{1 - \alpha} \varphi_x= 0. \end{equation} If $0 < \alpha < 1$, then $a$ is given by \eqref{defab} instead of \eqref{defaa}, and we get \eqref{realapprox1a2} with \begin{equation} c_\alpha = 2(2 - \alpha) \int_0^\infty \frac{1 - \frac{1}{2}\eta^2-\cos\eta}{\eta^{4 - \alpha}} \,\mathrm{d}{\eta}. \label{defC01} \end{equation} \subsection{Hamiltonian structure} The approximate equation has the Hamiltonian form \[\varphi_t + \partial_x \left[\frac{\delta \H}{\delta \varphi}\right] = 0,\] where, suppressing the time variable, we can write the Hamiltonian in equivalent forms as \begin{align*} \H(\varphi) &= -\frac {1}{6 \cdot 8} \int_{\mathbb{R}^2}\left[\frac{G'(x - x')}{x - x'}\right] [\varphi(x) - \varphi(x')]^4 \,\mathrm{d}{x}\,\mathrm{d}{x'} + \frac 12 \int_\mathbb{R} \varphi(x) \L \varphi(x) \,\mathrm{d}{x} \\ &= \int_\mathbb{R} \bigg[\frac 16 \varphi \mathbf{A} \varphi^3 - \frac 18 \varphi^2 \mathbf{A} \varphi^2\bigg]\,\mathrm{d}{x} + \frac 12 \int_\mathbb{R} \varphi \L \varphi \,\mathrm{d}{x}. \end{align*} The spectral form of the Hamiltonian is \begin{align*} \H(\hat{\varphi}) &=-\frac {1}{6 \cdot 8} \int_{\mathbb{R}^4} \delta(k_1+k_2+k_3+k_4) \\ &\qquad\qquad\qquad S(k_1,k_2,k_3,k_4) \hat{\varphi}(k_1)\hat{\varphi}(k_2) \hat{\varphi}(k_3)\hat{\varphi}(k_4) \,\mathrm{d}{k}_1 \,\mathrm{d}{k}_2 \,\mathrm{d}{k}_3 \,\mathrm{d}{k}_4 \\ &\quad + \frac{1}{2} \int_{\mathbb{R}} b(k) \hat{\varphi}^*(k)\hat{\varphi}(k) \,\mathrm{d}{k}. \end{align*} This Hamiltonian structure explains the symmetry of $S$ in \eqref{defS}.
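The variational structure can also be tested directly on a periodic grid: a centered difference quotient of $\H$ in a direction $\psi$ must match $\int \psi\, (\delta\H/\delta\varphi)\,\mathrm{d}{x}$ with the flux from \eqref{realapprox}. The sketch below does this for the quartic part of the Hamiltonian with the Euler symbol $a(k)=|k|/2$ as a concrete instance, assuming NumPy; the test functions are arbitrary:

```python
import numpy as np

N = 128
dx = 2.0 * np.pi / N
x = dx * np.arange(N)
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers

def A(f):
    # Self-adjoint operator with symbol a(k); here the Euler case a(k) = |k|/2
    return np.real(np.fft.ifft(0.5 * np.abs(k) * np.fft.fft(f)))

def H4(f):
    # Quartic part of the Hamiltonian: int ( (1/6) f A f^3 - (1/8) f^2 A f^2 ) dx
    return np.sum(f * A(f ** 3) / 6.0 - f ** 2 * A(f ** 2) / 8.0) * dx

phi = np.cos(x) + 0.3 * np.sin(2.0 * x) - 0.2 * np.cos(3.0 * x)
psi = np.sin(x) + 0.5 * np.cos(2.0 * x)

# Variational derivative predicted by (realapprox):
# delta H4 / delta phi = (1/2)( phi^2 A phi - phi A phi^2 + (1/3) A phi^3 )
flux = 0.5 * (phi ** 2 * A(phi) - phi * A(phi ** 2) + A(phi ** 3) / 3.0)
ana = np.sum(psi * flux) * dx

# Centered difference quotient of H4 in the direction psi
eps = 1e-4
num = (H4(phi + eps * psi) - H4(phi - eps * psi)) / (2.0 * eps)
print(num, ana)
```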
For $\alpha\ne 1$, the quadratic term in the Hamiltonian is proportional to \[ \int_\mathbb{R} \varphi(x) |\partial_x|^{1-\alpha} \varphi(x) \,\mathrm{d}{x}, \] which controls the homogeneous $\dot{H}^s(\mathbb{R})$-norm of $\varphi$ with $s= {(1-\alpha)}/{2}$. The quartic term is proportional to \[ \int_{\mathbb{R}^2} \frac{\left[\varphi(x)-\varphi(x')\right]^4}{|x-x'|^{4-\alpha}} \,\mathrm{d}{x} \,\mathrm{d}{x'}, \] which controls the homogeneous $\dot{W}^{r,4}(\mathbb{R})$-Slobodeckij norm \cite{taheri} of $\varphi$ with $r= (3-\alpha)/{4}$. For $0<\alpha \le 2$, we have $-1/2\le s < 1/2$, $1/4\le r <3/4$, so these norms appear too weak to be useful for well-posedness results. \section{Local well-posedness for the approximate equation} \label{Sec-LWP} In this section, we study the local well-posedness of the initial value problem for the approximate gSQG front equation with $1<\alpha \le 2$ in \eqref{realapprox1a2} and the approximate SQG front equation in \eqref{realapproxsqg}. For simplicity, we consider spatially periodic functions with zero mean. The analysis for the SQG equation is more delicate than for the gSQG equation, and we obtain a weaker result in that case, in which solutions may lose Sobolev derivatives over time. The nonlinear fluxes in these equations appear to involve derivatives, but this is misleading because of a cancelation, as the estimates below will show. For smooth solutions, a cartoon of the gSQG equation \eqref{realapprox1a2} with $1<\alpha \le 2$ is a cubically nonlinear conservation law with a lower-order dispersive term of order less than one, \[ \varphi_t + \left(\varphi^3\right)_x + |\partial_x|^{1-\alpha} \varphi_x=0. \] Additional logarithmic derivatives arise for the SQG equation \eqref{realapproxsqg}, and for smooth, spatially periodic solutions a rough cartoon of the equation is \[ \varphi_t + \left(3\varphi^2\L \varphi - 3\varphi\L \varphi^2 + \L\varphi^3\right)_x = \L\varphi_x, \qquad \L = \log|\partial_x|.
\] \subsection{Notation} We denote the Fourier coefficients of a $2\pi$-periodic function (or distribution) $f \colon \mathbb{T}\to \mathbb{R}$ by \[ \hat{f}(k) = \frac{1}{2\pi} \int_{\mathbb{T}} f(x) e^{-ikx} \,\mathrm{d}{x} \] and the $\ell^p$-norm of $\hat{f} \colon \mathbb{Z}_* \to \mathbb{C}$ by \[ \|\hat{f}\|_{\ell^p} = \left(\sum_{k\in\mathbb{Z}_*} |\hat{f}(k)|^p\right)^{1/p}, \] where $\mathbb{Z}_* = \mathbb{Z} \setminus \{0\}$ is the set of nonzero integers. For $s\in \mathbb{R}$, we let \[ \dot{H}^s(\mathbb{T}) = \left\{f \colon \mathbb{T} \to \mathbb{R} \mid \text{$\hat{f}(0) = 0$, $\norm{f}_{\dot{H}^s} < \infty$}\right\} \] denote the Hilbert space of zero-mean, periodic functions with square-integrable derivatives of order $s$, and norm \begin{align} \norm{f}_{\dot{H}^s} &=\left(\sum_{k \in \mathbb{Z}_*} \abs{k}^{2s} |\hat{f}(k)|^2\right)^{1/2}. \label{Hsnorm} \end{align} We will use the following consequence of Young's inequality \begin{equation} \sum_{\substack{k_1, k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1 + k_2 + k_3 + k_4 = 0}} \left| \hat{f}_1(k_1) \hat{f}_2(k_2) \hat{f}_3(k_3) \hat{f}_4(k_4)\right| \le \|\hat{f}_1\|_{\ell^2} \|\hat{f}_2\|_{\ell^1} \|\hat{f}_3\|_{\ell^1} \|\hat{f}_4\|_{\ell^2} \label{convest} \end{equation} and the Sobolev inequality \begin{equation} \|\hat{f}\|_{\ell^1} \le Z(s) \|f\|_{\dot{H}^s} \qquad \text{for $s > 1/2$}, \label{sobest} \end{equation} where $Z$ is given in terms of the Riemann zeta function by \[ Z(s) = \left(\sum_{k\in \mathbb{Z}_*} \frac{1}{|k|^{2s}}\right)^{1/2} = \sqrt{2\zeta(2s)}. \] Let $\rho \colon \mathbb{Z}_*^4 \to \mathbb{Z}_*^4$ be a map that permutes its entries and orders their absolute values. We denote the values of $\rho$ by $(m_1,m_2,m_3,m_4) = \rho(k_1,k_2,k_3,k_4)$, where \begin{align} &(m_1,m_2,m_3,m_4) = (k_{\sigma1},k_{\sigma2},k_{\sigma3},k_{\sigma4}) \quad \text{for some $\sigma\in S_4$}, \label{defkm2} \\ &\abs{m_1} \geq \abs{m_2} \geq \abs{m_3} \geq \abs{m_4}.
\label{defkm1} \end{align} Here, $S_4$ denotes the symmetric group on $\{1, 2, 3, 4\}$. \subsection{Local well-posedness for the approximate gSQG equation $(1 < \alpha \le 2)$} In this section, we prove short-time existence and uniqueness for spatially periodic solutions of the initial value problem for the gSQG equation, \begin{align}\label{gSQGivp} \begin{split} &\varphi_t + \frac {1}{2}c_\alpha \partial_x \bigg\{\varphi^2 \abs{\partial_x}^{3 - \alpha} \varphi - \varphi \abs{\partial_x}^{3 - \alpha} (\varphi^2) + \frac 13 \abs{\partial_x}^{3 - \alpha} (\varphi^3)\bigg\} + b_\alpha \abs{\partial_x}^{1 - \alpha} \varphi_x= 0. \\ &\varphi(x,0) = \varphi_0(x). \end{split} \end{align} We begin with a general result that is the analog for cubically nonlinear equations of the well-posedness result in \cite{Hun} for quadratically nonlinear equations. The proof depends crucially on the symmetry of the interaction coefficients that follows from the Hamiltonian structure of the equation. Consider the spectral form of an initial value problem for a spatially-periodic function ${\varphi}(x,t)$, with Fourier coefficients $\hat{\varphi}(k,t)$, given by \begin{align}\label{speceqn} \begin{split} & \hat{\varphi}_t(k_1, t) + \frac 16 ik_1\sum_{\substack{k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_2 + k_3 + k_4 = -k_1}}S(k_1, k_2, k_3, k_4) \hat{\varphi}^*(k_2, t) \hat{\varphi}^*(k_3, t) \hat{\varphi}^*(k_4, t) + ik_1b(k_1) \hat{\varphi}(k_1,t) = 0, \\ &\hat{\varphi}(k_1,0) = \hat{\varphi}_0(k_1). \end{split} \end{align} When convenient, we omit the time variable and write $\hat{\varphi}(k) = \hat{\varphi}(k, t) = \hat{\varphi}^*(-k)$, $\hat{\varphi}_j = \hat{\varphi}(k_j)$.
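Before turning to the assumptions on $S$, it may help to see \eqref{speceqn} in action: the momentum $\P$ of Section \ref{sec:approx}, i.e. $\sum_k|\hat{\varphi}(k)|^2$ up to a constant, is conserved, and the permutation symmetry of $S$ makes this exact even for a Galerkin truncation. The sketch below integrates a small truncation of \eqref{speceqn} with the Euler symbols $a(k)=|k|/2$, $b(k)=1/(2|k|)$ as a concrete instance, assuming NumPy; it is an illustration, not a numerical method used here:

```python
import numpy as np

N = 6  # spectral truncation: modes k in {-N, ..., N} \ {0}
ks = [k for k in range(-N, N + 1) if k != 0]
idx = {k: i for i, k in enumerate(ks)}

def a(k):
    return 0.5 * abs(k)       # Euler case, a(k) = |k|/2; note a(0) = 0

def b(k):
    return 0.5 / abs(k)       # Euler case, b(k) = 1/(2|k|): bounded and even

def S(k1, k2, k3, k4):
    # Symmetric kernel (defS)
    return (a(k1) + a(k2) + a(k3) + a(k4)
            - 0.5 * (a(k1 + k2) + a(k1 + k3) + a(k1 + k4)
                     + a(k2 + k3) + a(k2 + k4) + a(k3 + k4)))

def rhs(ph):
    # Right-hand side of the truncated spectral equation (speceqn)
    out = np.zeros_like(ph)
    for k1 in ks:
        acc = 0.0 + 0.0j
        for k2 in ks:
            for k3 in ks:
                k4 = -k1 - k2 - k3
                if k4 == 0 or abs(k4) > N:
                    continue
                acc += S(k1, k2, k3, k4) * np.conj(
                    ph[idx[k2]] * ph[idx[k3]] * ph[idx[k4]])
        out[idx[k1]] = -1j * k1 * (acc / 6.0 + b(k1) * ph[idx[k1]])
    return out

def rk4(ph, dt):
    f1 = rhs(ph)
    f2 = rhs(ph + 0.5 * dt * f1)
    f3 = rhs(ph + 0.5 * dt * f2)
    f4 = rhs(ph + dt * f3)
    return ph + (dt / 6.0) * (f1 + 2.0 * f2 + 2.0 * f3 + f4)

# Random real-valued initial data: hat(phi)(-k) = conj(hat(phi)(k))
rng = np.random.default_rng(1)
ph = np.zeros(len(ks), dtype=complex)
for k in range(1, N + 1):
    c = 0.1 * (rng.standard_normal() + 1j * rng.standard_normal())
    ph[idx[k]], ph[idx[-k]] = c, np.conj(c)

P0 = np.sum(np.abs(ph) ** 2)
for _ in range(10):
    ph = rk4(ph, 1e-2)
P1 = np.sum(np.abs(ph) ** 2)
print(P0, P1)
```

The symmetrization argument used in the proof below is exactly what makes $P_1 = P_0$ up to time-stepping error.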
In \eqref{speceqn}, we assume that $S \colon \mathbb{Z}_*^4 \to \mathbb{R}$ satisfies \begin{align} S(k_1, k_2, k_3, k_4) &= S(-k_1, -k_2, -k_3, -k_4),\label{Scond1} \\ S(k_1, k_2, k_3, k_4) &= S(k_{\sigma 1}, k_{\sigma 2}, k_{\sigma 3}, k_{\sigma 4}) \quad \text{for every $\sigma\in S_4$}\label{Scond2} \end{align} and that there exist $\mu,\nu \ge 0$ such that \begin{align} & \abs{S(k_1, k_2, k_3, k_4)} \leq C_S\abs{m_3}^{\mu} \abs{m_4}^\nu \qquad \text{for all $k_1,k_2,k_3,k_4\in \mathbb{Z}_*$}, \label{Scond3} \end{align} where $m_1,m_2,m_3,m_4$ are defined as in \eqref{defkm2}--\eqref{defkm1}, and $C_S$ is a constant. That is, the growth of $S$ is bounded by the smaller wavenumbers on which it depends. \begin{theorem}\label{lwp-gsqg1a2} Suppose that $S \colon \mathbb{Z}_*^4 \to \mathbb{R}$ satisfies \eqref{Scond1}--\eqref{Scond3}, and $b \colon \mathbb{Z}_* \to \mathbb{R}$ is a bounded, even function. If \[ s > \max\left\{\mu + \frac{3}{2}, \nu + \frac{1}{2}\right\}, \] then for every $\varphi_0 \in \dot{H}^s(\mathbb{T})$, there exists $T > 0$, depending on $\|\varphi_0\|_{\dot{H}^s}$, such that the initial value problem \eqref{speceqn} has a solution \[ \varphi \in C\left([0, T]; \dot{H}^s(\mathbb{T})\right) \cap C^1\left([0, T]; \dot{H}^{s - 1}(\mathbb{T})\right). \] Furthermore, the solution is unique if \begin{equation} s > \max\left\{\mu + \frac{3}{2}, \nu + \frac{3}{2}\right\}. \label{smnineq1} \end{equation} \end{theorem} \begin{proof} We prove the main \emph{a priori} estimates and only sketch the remaining details of the proof, which follow by standard arguments for quasilinear hyperbolic PDEs \cite{Ta}.
Multiplying \eqref{speceqn} by $\hat{\varphi}^*(k_1)$, taking the real part, and using \eqref{Scond2} to symmetrize the result, we get that \begin{align} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \norm{\varphi}_{\dot{H}^s}^2 &\leq \frac{1}{12}\sum_{\substack{k_1, k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1 + k_2 + k_3 + k_4 = 0}} \biggl|\left(k_1 \abs{k_1}^{2s} + k_2 \abs{k_2}^{2s} + k_3 \abs{k_3}^{2s} + k_4 \abs{k_4}^{2s}\right)\biggr. \\ &\qquad\qquad\qquad\qquad\qquad\quad \biggl.S(k_1, k_2, k_3, k_4) \hat{\varphi}(k_1) \hat{\varphi}(k_2) \hat{\varphi}(k_3) \hat{\varphi}(k_4)\biggr|. \end{split} \label{energyeqn} \end{align} Using Lemma~\ref{2s+1ineq}, the permutation property \eqref{defkm2} of the $m_j$, the symmetry of $S$ in \eqref{Scond2}, and the estimate \eqref{Scond3}, we get that \begin{align} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \norm{\varphi}_{\dot{H}^s}^2 &\leq \frac{1}{12} C_0(s)\sum_{\substack{k_1, k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1 + k_2 + k_3 + k_4 = 0}} \abs{m_1}^s \abs{m_2}^s \abs{m_3} \cdot \abs{S(k_1, k_2, k_3, k_4) \hat{\varphi}(k_1) \hat{\varphi}(k_2) \hat{\varphi}(k_3) \hat{\varphi}(k_4)} \\ &\leq \frac{1}{12} C_S C_0(s)\sum_{\substack{k_1, k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1 + k_2 + k_3 + k_4 = 0}}\abs{m_1}^s \abs{m_2}^s \abs{m_3}^{\mu + 1} \abs{m_4}^\nu \abs{ \hat{\varphi}(m_1) \hat{\varphi}(m_2) \hat{\varphi}(m_3) \hat{\varphi}(m_4)}. 
\end{split} \label{tempest} \end{align} For fixed $(k_1,k_2,k_3,k_4)\in \mathbb{Z}_*^4$ with corresponding $(m_1,m_2,m_3,m_4)\in \mathbb{Z}_*^4$, as in \eqref{defkm2}--\eqref{defkm1}, we have \begin{align*} &\abs{m_1}^s \abs{m_2}^s \abs{m_3}^{\mu + 1} \abs{m_4}^\nu \abs{ \hat{\varphi}(m_1) \hat{\varphi}(m_2) \hat{\varphi}(m_3) \hat{\varphi}(m_4)} \\ &\qquad\qquad \le \sum_{\substack{(k_1', k_2', k_3', k_4') =\\ (k_{\sigma1}, k_{\sigma2}, k_{\sigma3}, k_{\sigma4}),\ \sigma\in S_4 }} \abs{k_1'}^s \abs{k_2'}^s \abs{k_3'}^{\mu + 1} \abs{k_4'}^\nu \abs{ \hat{\varphi}(k_1') \hat{\varphi}(k_2') \hat{\varphi}(k_3') \hat{\varphi}(k_4')} \end{align*} Using this inequality to estimate the sum of terms in \eqref{tempest} depending on $m_j$ by a sum depending on $k_j$, followed by the inequalities \eqref{convest}--\eqref{sobest}, we get that \begin{align*} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \norm{\varphi}_{\dot{H}^s}^2 &\leq \frac{4!}{12} C_S C_0(s) \sum_{\substack{k_1, k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1 + k_2 + k_3 + k_4 = 0}} \abs{k_1}^s \abs{k_2}^s \abs{k_3}^{\mu + 1} \abs{k_4}^\nu \abs{ \hat{\varphi}(k_1) \hat{\varphi}(k_2) \hat{\varphi}(k_3) \hat{\varphi}(k_4)} \\ &\leq 2C_S C_0(s) \norm{\varphi}_{\dot{H}^s} \|\,|k|^{\mu+1}\hat{\varphi}\|_{{\ell}^1(\mathbb{Z}_*)} \|\,|k|^{\nu}\hat{\varphi}\|_{{\ell}^1(\mathbb{Z}_*)} \norm{\varphi}_{\dot{H}^s} \\ &\le C_4(s) \norm{\varphi}_{\dot{H}^s}^4. \end{align*} Thus, Gr\"{o}nwall's inequality gives the \emph{a priori} estimate \begin{equation*} \norm{\varphi(t)}_{\dot{H}^s}^2 \leq \frac{1}{C_4} \left(\frac{1} {T_* - t}\right), \qquad T_* = \frac{1}{C_4 \norm{\varphi_0}_{\dot{H}^s}^2}, \end{equation*} and it follows that \begin{equation} \sup_{0\le t \le T} \|\varphi(t)\|_{\dot{H}^s}^2 \le \frac{1}{C_4} \left(\frac{1} {T_* - T}\right), \label{phibd} \end{equation} for any $0<T<T_*$. 
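The blow-up bound \eqref{phibd} comes from comparing $y = \norm{\varphi}_{\dot{H}^s}^2$ with the Riccati equation $\dot{y} = C_4 y^2$, and this comparison can be checked numerically. The following sketch uses illustrative constants, not the constants $C_4(s)$ from the estimate above:

```python
# Check of the blow-up bound: if y' = C4*y^2 with y(0) = y0 (here y stands
# for the squared H^s norm), the comparison solution is
#   y(t) = y0/(1 - C4*y0*t) = 1/(C4*(T_star - t)),  T_star = 1/(C4*y0).
# Explicit Euler applied to this extremal ODE stays below the exact profile,
# since the exact solution is increasing and convex.  Constants illustrative.
C4, y0 = 2.0, 0.5
T_star = 1.0 / (C4 * y0)

def bound(t):
    return 1.0 / (C4 * (T_star - t))

y, t, dt = y0, 0.0, 1e-5
while t < 0.8 * T_star:          # integrate up to 80% of the blow-up time
    y += dt * C4 * y * y
    t += dt
```

The Euler iterates remain below, but close to, the closed-form profile, consistent with \eqref{phibd}.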
To estimate $\varphi_t$, we write \eqref{speceqn} in spatial form as \begin{equation} \varphi_t + \partial_x \mathbf{Q}(\varphi,\varphi,\varphi) + \L\varphi_x= 0, \label{phiteq} \end{equation} where $\L$ is a bounded operator on $\dot{H}^s(\mathbb{T})$, and the trilinear operator $\mathbf{Q}$ is defined in terms of Fourier coefficients by \begin{equation} \hat{\mathbf{Q}}(\hat{\varphi},\hat{\psi},\hat{\chi})(k_1) = \frac 16 \sum_{\substack{k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_2 + k_3 + k_4 = -k_1}}S(k_1, k_2, k_3, k_4) \hat{\varphi}^*(k_2) \hat{\psi}^*(k_3) \hat{\chi}^*(k_4). \label{defFform} \end{equation} The symmetry of $S$ implies that \begin{equation*} q(\eta, \varphi,\psi,\chi) = \int_\mathbb{T} \eta \mathbf{Q}(\varphi,\psi,\chi)\,\mathrm{d}{x} \end{equation*} is a symmetric form. Moreover, using \eqref{Scond3}, we get that \begin{align} \begin{split} \left|q(\eta, \varphi,\psi,\chi) \right| &\le \frac{\pi}{3} \sum_{\substack{k_1,k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1+ k_2 + k_3 + k_4 = 0}} |S(k_1, k_2, k_3, k_4) \hat{\eta}(k_1) \hat{\varphi}(k_2) \hat{\psi}(k_3) \hat{\chi}(k_4)| \\ &\le \frac{\pi}{3}C_S\sum_{\substack{k_1,k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1+ k_2 + k_3 + k_4 = 0}} |k_1|^{-s}|\hat{\eta}(k_1)|\cdot |k_1|^s |m_3|^\mu |m_4|^\nu |\hat{\varphi}(k_2) \hat{\psi}(k_3) \hat{\chi}(k_4)|. \end{split} \label{Fintest} \end{align} On $k_1+ k_2 + k_3 + k_4 = 0$, we have \[ |k_1|^s \le Y(s) \left(|k_2|^{s} + |k_3|^{s} + |k_4|^{s}\right). \] From \eqref{defkm1}, we have for any $1\le p\ne q\le 4$ that \[ |m_3|^\mu |m_4|^\nu \le |k_p|^\mu |k_q|^\nu + |k_p|^\nu |k_q|^\mu. \] Choosing $\{p,q\}$ disjoint from $\{1,2\}$, $\{1,3\}$, or $\{1,4\}$ as appropriate, we get that \begin{align*} |k_1|^s |m_3|^\mu |m_4|^\nu &\le Y\bigl[ |k_2|^{s}\left(|k_3|^\mu |k_4|^\nu +|k_3|^\nu |k_4|^\mu\right)\bigr. \\ &\qquad+ |k_3|^{s}\left(|k_2|^\mu |k_4|^\nu +|k_2|^\nu |k_4|^\mu\right) \\ &\qquad+ \bigl.|k_4|^{s}\left(|k_2|^\mu |k_3|^\nu +|k_2|^\nu |k_3|^\mu\right)\bigr]. 
\end{align*} Using this inequality in \eqref{Fintest}, followed by \eqref{convest}--\eqref{sobest} with the assumption that \begin{equation} s > \max\left\{\mu+\frac{1}{2}, \nu+\frac{1}{2}\right\}, \label{sineq1} \end{equation} we get \begin{align*} \left|q(\eta, \varphi,\psi,\chi) \right| &\le C \|\eta\|_{\dot{H}^{-s}} \|\varphi\|_{\dot{H}^s} \|\psi\|_{\dot{H}^s} \|\chi\|_{\dot{H}^s}, \end{align*} where $C$ denotes a constant. It follows by duality that \eqref{defFform} defines a bounded trilinear map \begin{equation} \mathbf{Q} \colon \dot{H}^s(\mathbb{T})\times\dot{H}^s(\mathbb{T})\times\dot{H}^s(\mathbb{T})\to \dot{H}^s(\mathbb{T}) \label{conF} \end{equation} when $s$ satisfies \eqref{sineq1}. Hence, \eqref{phibd}--\eqref{phiteq} imply that \begin{equation} \sup_{0\le t \le T} \|\varphi_t\|_{\dot{H}^{s - 1}} \le C. \label{est2} \end{equation} Moreover, if $\varphi$, $\psi$ are solutions of \eqref{phiteq} with initial data $\varphi(0) = \varphi_0$, $\psi(0) = \psi_0$, then writing $u=\varphi-\psi$, using the symmetry of $q$, and the identity \[ q(\eta_x, \varphi,\psi,\chi)+ q(\eta, \varphi_x,\psi,\chi)+ q(\eta, \varphi,\psi_x,\chi) + q(\eta, \varphi,\psi,\chi_x) =0, \] we get that \begin{equation} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \|u\|_{L^2}^2 + 2q(u,u,\varphi,\varphi_x) + q(u,u,\varphi_x,\psi) + q(u,u,\varphi,\psi_x)+ 2q(u,u,\psi,\psi_x)=0. \label{stabest} \end{equation} For $s> \max\{\mu+1/2, \nu+1/2\}$, it follows as in \eqref{Fintest} that \[ |q(\eta,\varphi,\psi,\chi)| \le C \|\eta\|_{L^2} \|\varphi\|_{L^2} \|\psi\|_{\dot{H}^s} \|\chi\|_{\dot{H}^s}. 
\] Hence, when $s$ satisfies \eqref{smnineq1}, we have \[ \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \|u\|_{L^2}^2 \le C\left(\|\varphi\|^2_{\dot{H}^s} + \|\psi\|^2_{\dot{H}^s}\right) \|u\|_{L^2}^2, \] so Gr\"onwall's inequality gives the \emph{a priori} $L^2$-stability estimate \begin{equation} \sup_{0\le t\le T}\left\|\varphi(t)-\psi(t)\right\|^2_{L^2} \le \exp \left[C\int_0^T\left(\|\varphi(t)\|^2_{\dot{H}^s} + \|\psi(t)\|^2_{\dot{H}^s}\right)\,\mathrm{d}{t}\right] \left\|\varphi_0-\psi_0\right\|^2_{L^2}. \label{L2stab} \end{equation} The result then follows by standard methods. We construct Galerkin approximations $\{\varphi^N: N\in \mathbb{N}\}$ by projecting the equations onto Fourier modes with $|k|\le N$. These approximations satisfy the same estimates as the \emph{a priori} estimates derived above, so from \eqref{phibd} and \eqref{est2} we can extract a subsequence that converges weakly to a limit $\varphi$ in $L^\infty(0,T; \dot{H}^s(\mathbb{T})) \cap W^{1,\infty}(0,T; \dot{H}^{s-1}(\mathbb{T}))$. By the Aubin-Lions lemma, a further subsequence converges strongly in $C([0,T]; \dot{H}^{s-\epsilon}(\mathbb{T}))$ for sufficiently small $\epsilon>0$, and by the continuity of the nonlinear term in \eqref{conF}, the limit is a solution of the equation. The fact that $\varphi \in C([0,T]; \dot{H}^s(\mathbb{T}))$ follows from weak continuity $\varphi \in C_w([0,T]; \dot{H}^s(\mathbb{T}))$ and continuity of the norm $\|\varphi\|_{\dot{H}^s}$, and uniqueness follows from \eqref{L2stab} when $s$ satisfies \eqref{smnineq1}. \end{proof} Since \eqref{speceqn} is reversible, Theorem~\ref{lwp-gsqg1a2} also holds backward in time, and a similar result would apply to the spatial case $\varphi(\cdot,t) \colon \mathbb{R} \to \mathbb{R}$. One could also prove continuous dependence of the solution on the initial data by a Bona-Smith type argument, but we will not carry out the details here. \begin{theorem} \label{th:alpha} Suppose that $1 < \alpha \le 2$ and $s > {7}/{2} - \alpha$. 
Then for every $\varphi_0 \in \dot{H}^s(\mathbb{T})$, there exists $T > 0$, depending on $\|\varphi_0\|_{\dot{H}^s}$, such that the initial value problem \eqref{gSQGivp} has a solution with \[ \varphi \in C\left([0, T]; \dot{H}^s(\mathbb{T})\right) \cap C^1\left([0, T]; \dot{H}^{s - 1}(\mathbb{T})\right). \] The solution is unique if $s > {5}/{2}$. \end{theorem} \begin{proof} From Lemma~\ref{kernelest1a2}, the kernel $S$ for \eqref{gSQGivp} satisfies \eqref{Scond1}--\eqref{Scond3} with $\mu = 2 - \alpha$ and $\nu = 1$, and the symbol $b$ for \eqref{gSQGivp} is bounded, so the result follows from Theorem~\ref{lwp-gsqg1a2}. \end{proof} The Euler case $\alpha=2$ of this Theorem was proved previously in \cite{Ifr}. \subsection{Weak local well-posedness for the approximate SQG equation} Theorem~\ref{lwp-gsqg1a2} does not apply to the approximate SQG equation \eqref{realapproxsqg}, because its kernel \eqref{Skernelsqg} does not satisfy the estimate in \eqref{Scond3}. Instead, there is an additional logarithmic factor and, in the absence of dispersion, the nonlinear term appears to lead to a loss of derivatives at some finite rate. In this section, we prove a weak local well-posedness theorem for the initial value problem \begin{align} \label{sqgivp} \begin{split} &\varphi_t + \frac 12 \partial_x \bigg\{\varphi^2 \log\abs{\partial_x} \varphi_{xx} - \varphi \log\abs{\partial_x} (\varphi^2)_{xx} + \frac 13 \log\abs{\partial_x} (\varphi^3)_{xx}\bigg\} +\L\varphi_x = 0, \\ &\varphi(x,0) = \varphi_0(x), \end{split} \end{align} where $\L$ is an arbitrary self-adjoint operator. The case $\L = -2 \log\abs{\partial_x}$ corresponds to the approximate SQG front equation. Our proof is adapted from proofs for Gevrey-class solutions of nonlinear evolution equations (see e.g., \cite{FrVi, KuTeViZi}), in which one uses time-dependent norms to compensate for the loss of regularity. 
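The reason a slowly decreasing Sobolev index can compensate for a logarithmic loss is the elementary bound $\log(1+k) \le C(\epsilon)\, k^{\epsilon}$ for $k \ge 1$ and any $\epsilon > 0$, with $C(\epsilon) \approx 1/(\epsilon e)$. A quick numerical check (the exponent and sampling grid are arbitrary):

```python
import math

# Verify log(1 + k) <= C(eps) * k**eps for k >= 1: the ratio log(1+k)/k**eps
# is maximized near k = exp(1/eps), where it is approximately 1/(eps*e).
eps = 0.1
ratio = max(math.log(1 + k) / k ** eps for k in range(1, 10_000_000, 997))
```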
The difference here is that, since there is only a logarithmic derivative loss, we obtain solutions for initial data with finitely many derivatives, rather than $C^\infty$ Gevrey-class initial data. The existence time in the theorem depends on the number of Sobolev derivatives possessed by the initial data as well as its Sobolev norm. In addition to $\dot{H}^s(\mathbb{T})$, we use a logarithmically-modified Hilbert space \begin{align} \begin{split} \dot{H}^s_{\log}(\mathbb{T}) &= \left\{f \colon \mathbb{T} \to \mathbb{R} \mid \text{$\hat{f}(0) = 0$, $\norm{f}_{\dot{H}_{\log}^s} < \infty$}\right\}, \\ \norm{f}_{\dot{H}_{\log}^s} &= \left[\sum_{k \in \mathbb{Z}_*} \log(1 + |k|) \cdot \abs{k}^{2s} \abs{\hat{f}(k)}^2\right]^{1/2}. \end{split} \label{Hslog} \end{align} If $\tau \colon [0,T_*)\to [0,\infty)$ is a decreasing function, then we denote by $L^\infty(0,T_*; \dot{H}^\tau(\mathbb{T}))$ the space of functions \[ \varphi \colon [0,T_*) \to \bigcup_{t\in [0,T_*)} \dot{H}^{\tau(t)}(\mathbb{T}) \] such that $\varphi(t) \in \dot{H}^{\tau(t)}(\mathbb{T})$, and for every $0<T<T_*$ \[ \varphi\in L^\infty(0,T; \dot{H}^{\tau_1}(\mathbb{T}))\qquad \text{$\tau_1 = \tau(T)$}, \] with analogous notation for other time-dependent Sobolev spaces. \begin{theorem} \label{sqglwpthm} Let the operator $\L$ have real-valued symbol $b : \mathbb{Z} \to \mathbb{R}$ and suppose that $\tau_0 > 5/2$. For every $\varphi_0 \in \dot{H}^{\tau_0}(\mathbb{T})$, there exists $T_* > 0$ and a differentiable, decreasing function $\tau \colon [0,T_*) \to (5/2,\tau_0]$ with $\tau(0) = \tau_0$, depending on $\tau_0$ and $\|\varphi_0\|_{\dot{H}^{\tau_0}(\mathbb{T})}$, such that the initial value problem \eqref{sqgivp} has a solution with \[ \varphi \in L^\infty(0, T_*; \dot{H}^{\tau}(\mathbb{T})) \cap L^2(0, T_*; \dot{H}^{\tau}_{\log}(\mathbb{T})). 
\] Moreover, there exists a numerical constant $C>0$ such that \begin{equation}\label{sqgest} \sup_{t \in [0, T]} \norm{\varphi(t)}_{\dot{H}^{\tau(t)}}^2 + C \norm{\varphi_0}_{\dot{H}^{\tau_0}}^2\int_0^{T} \norm{\varphi(t)}_{\dot{H}^{\tau(t)}_{\log}}^2 \,\mathrm{d}{t} \leq \norm{\varphi_0}_{\dot{H}^{\tau_0}}^2 \end{equation} for every $0<T<T_*$, where the norms are defined in \eqref{Hsnorm}, \eqref{Hslog}. The solution is unique while $\tau(t) > 9/2$. \end{theorem} \begin{proof} First, we derive the \emph{a priori} estimate \eqref{sqgest}. Let $\tau \colon [0,T]\to (5/2,\infty)$ be a differentiable function, and let $\varphi$ be a smooth solution of \eqref{sqgivp}. We define energies $E, F \colon [0,T] \to [0,\infty)$ by \begin{align*} E(t) &= \norm{\varphi(t)}_{\dot{H}^{\tau(t)}}^2 = \sum_{k \in \mathbb{Z}_*} \abs{k}^{2 \tau(t)} \abs{\hat{\varphi}(k,t)}^2, \\ F(t) &= \norm{\varphi(t)}_{\dot{H}^{\tau(t)}_{\log}}^2 = \sum_{k \in \mathbb{Z}_*} \log(1+\abs{k}) \cdot \abs{k}^{2\tau(t)} \abs{\hat{\varphi}(k,t)}^2. \end{align*} We write the equation in the spectral form \eqref{specapprox} with kernel \eqref{Skernelsqg}. Using the energy equation \eqref{energyeqn}, Lemma~\ref{2s+1ineq}, and Corollary~\ref{kernelestsqg-int} to estimate the time-derivative of $E$, we get that \begin{align*} \frac{\,\mathrm{d} E}{\,\mathrm{d}{t}} & = 2 \dot{\tau} \sum_{k \in \mathbb{Z}_*} \log\abs{k} \cdot \abs{k}^{2\tau} \abs{\hat{\varphi}(k)}^2 + \sum_{k \in \mathbb{Z}_*} \abs{k}^{2\tau} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \abs{\hat{\varphi}(k)}^2 \\ & \leq 2 \dot{\tau} F + \frac{1}{12} \sum_{\substack{k_1, k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1 + k_2 + k_3 + k_4 = 0}} \left|\left(k_1 \abs{k_1}^{2 \tau} + k_2 \abs{k_2}^{2 \tau} + k_3 \abs{k_3}^{2 \tau} + k_4 \abs{k_4}^{2 \tau}\right)S(k_1, k_2, k_3, k_4) \hat{\varphi}_1 \hat{\varphi}_2 \hat{\varphi}_3 \hat{\varphi}_4\right| \\ & \leq 2 \dot{\tau} F + \frac{4! 
}{12} C_0(\tau) C_2 \sum_{\substack{k_1, k_2, k_3, k_4 \in \mathbb{Z}_*\\ k_1 + k_2 + k_3 + k_4 = 0}} \left[\log(1 + |k_1|) \log(1 + |k_2|)\right]^{1/2} \abs{k_1}^\tau \abs{k_2}^\tau \abs{k_3}^2 \abs{k_4} \cdot \abs{\hat{\varphi}_1 \hat{\varphi}_2 \hat{\varphi}_3 \hat{\varphi}_4} \\ & \leq 2 \dot{\tau} F + 2C_0(\tau) C_2F \cdot \left(\sum_{k_3 \in \mathbb{Z}_*} \abs{k_3}^2 \abs{\hat{\varphi}(k_3)}\right) \cdot \sum_{k_4 \in \mathbb{Z}_*} \abs{k_4} \abs{\hat{\varphi}(k_4)}, \end{align*} where a dot denotes a time derivative. Then, as long as $\tau > 5 / 2$, the Sobolev inequality \eqref{sobest} implies that \begin{equation} \label{sqgenergyest1} \frac{\,\mathrm{d} E}{\,\mathrm{d}{t}} \leq 2 \left[\dot{\tau} + C_3(\tau) E\right] F,\qquad C_3(s) = C_0(s) C_2 Z(s-1) Z(s-2). \end{equation} The function $C_3 \colon (5/2,\infty) \to (0,\infty)$ is a smooth function such that $C_3(s) \to \infty$ as $s\to 5/2$ and $s\to \infty$. Since $C_3$ is continuous, positive, and diverges at both endpoints, it attains a positive minimum, so there is a numerical constant $C_4>0$ such that \[ C_3(s) \ge C_4\qquad \text{for $5/2<s<\infty$}. \] For example, if $C_0$, $C_2$ are given by \eqref{numC0}, \eqref{numC2}, then we find numerically that one can take $C_4=1000$. Fix a constant $M > 1$ and let $\tau$ be the solution of the initial value problem \begin{equation} \dot{\tau} + M E_0 C_3 (\tau) = 0,\qquad \tau(0) = \tau_0 \label{taueq} \end{equation} on a maximal time-interval $[0,T_*)$ such that $\tau(t)>5/2$, where $E_0 = E(0)$. Then it follows from \eqref{sqgenergyest1}--\eqref{taueq} that $E$ is decreasing on $[0,T_*)$ and \begin{equation*} \frac{\,\mathrm{d} E}{\,\mathrm{d}{t}} + (M-1) C_3 E_0 F \le 0. \end{equation*} Integrating in time gives \[ E(t) + E_0 (M-1) \int_0^t C_3(\tau(s)) F(s) \,\mathrm{d}{s} \leq E_0, \] so \eqref{sqgest} follows for $0\le T<T_*$ with $C = (M-1) C_4$. We define a trilinear form $\mathbf{Q}$ by \eqref{defFform} where $S$ is given by \eqref{Skernelsqg}.
By a similar argument to the one in the proof of Theorem~\ref{lwp-gsqg1a2}, using Corollary~\ref{kernelestsqg-int}, we see that $\mathbf{Q} \colon \dot{H}^s(\mathbb{T}) \times \dot{H}^s(\mathbb{T}) \times \dot{H}^s(\mathbb{T})\to \dot{H}^s(\mathbb{T})$ is bounded for $s>1/2$. It follows from the equation for $\varphi$ and \eqref{sqgest} that if $0<T<T_*$, then \[ \sup_{0\le t \le T} \|\varphi_t(t)\|_{\dot{H}^{\tau_1-1}} \le C \] where $\tau_1 = \tau(T)$, for some constant $C$ depending on $\tau_0$, $T$, and $E_0$. The construction of the solution by Galerkin approximations follows by standard arguments, as in the proof of Theorem~\ref{lwp-gsqg1a2}, and we omit the details. Finally, if $\varphi$, $\psi$ are solutions of \eqref{sqgivp} with initial data $\varphi(0) = \varphi_0$, $\psi(0) = \psi_0$, then we let $\tau$ be the solution of \eqref{taueq} with $E_0=\max\{\|\varphi_0\|^2_{H^{\tau_0}}, \|\psi_0\|^2_{H^{\tau_0}}\}$, and we define \[ U(t) = \|\varphi(t)-\psi(t)\|_{H^{\tau(t)-2}},\qquad V(t) = \|\varphi(t)-\psi(t)\|_{H_{\log}^{\tau(t)-2}}, \] where we assume that ${\tau(t)-2} > 5/2$. Then a similar argument to the derivation of the energy estimate \eqref{sqgenergyest1} and the stability estimate \eqref{stabest}, whose details we omit, gives that \begin{align*} \frac{dU}{dt} &\le \left[2 \dot{\tau} + E_0 C(\tau)\right] V + C(\tau)\left(\|\varphi\|_{H^{\tau}_{\log}}+\|\psi\|_{H^{\tau}_{\log}}\right) U, \end{align*} where $C(\tau)>0$ is a continuous function of $\tau$. If $0<T<T_*$, then \eqref{taueq} implies that $\tau(t)$ is bounded independently of $M$ on a time-interval $0\le t \le T/M$. We choose $M$ large enough that $M C_3(\tau) \ge C(\tau)$ on this interval. Then \[ \frac{dU}{dt} \le C(\tau)\left(\|\varphi\|_{H^{\tau}_{\log}}+\|\psi\|_{H^{\tau}_{\log}}\right) U \] for $0\le t \le T/M$, and Gr\"onwall's inequality implies that the solution is unique.
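The behavior of the regularity index $\tau(t)$ defined by \eqref{taueq} can be illustrated numerically. The following sketch uses a model choice of $C_3$ that only mimics the divergence as $s \to 5/2$; the model function and all constants are illustrative, not the ones from the proof:

```python
# Integrate tau' = -M*E0*C3(tau), tau(0) = tau0, with a MODEL C3 that, like
# the actual constant in the energy estimate, is positive and blows up as
# s -> 5/2.  The index tau decreases monotonically toward the critical value
# 5/2, so the solution retains more than 5/2 derivatives on a finite interval.
def C3(s):
    return 1.0 / (s - 2.5) + s          # illustrative model only

M, E0, tau0 = 2.0, 1.0, 4.0
tau, t, dt = tau0, 0.0, 1e-4
while tau > 2.6:                        # stop just above tau = 5/2
    tau -= dt * M * E0 * C3(tau)
    t += dt
```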
\end{proof} \section{Traveling waves and the NLS equation} \label{sec:nls} We look for periodic, zero-mean traveling wave solutions of \eqref{sqg_eq} of the form \[ \varphi = \varphi(kx-\omega t),\qquad \varphi(\theta + 2\pi) = \varphi(\theta). \] These traveling waves satisfy \begin{align*} &k \L \varphi - \omega \varphi + \frac{1}{2} k \left\{\varphi^2 \mathbf{A}\varphi - \varphi\mathbf{A}\varphi^2 +\frac{1}{3}\mathbf{A}\varphi^3\right\} = 0, \end{align*} where \begin{align*} \mathbf{A} e^{in\theta} &= a(nk) e^{in\theta},\qquad \L e^{in\theta} = b(nk) e^{in\theta}, \\ a(k) &= \begin{cases} \frac{1}{2}|k| & \text{if $\alpha = 2$ (Euler)}, \\ c_\alpha |k|^{3-\alpha} & \text{if $0<\alpha < 1$ or $1 < \alpha < 2$}, \\ -k^2\log|k| & \text{if $\alpha = 1$ (SQG)}, \end{cases} \\ b(k) &= \begin{cases} \frac{1}{2}|k|^{-1} & \text{if $\alpha = 2$ (Euler)}, \\ b_\alpha |k|^{1-\alpha} & \text{if $0<\alpha < 1$ or $1 < \alpha < 2$}, \\ -2 \log |k| & \text{if $\alpha = 1$ (SQG)}. \end{cases} \end{align*} The existence of an analytic branch of small-amplitude traveling waves follows from the Crandall-Rabinowitz theorem for bifurcation from a simple eigenvalue \cite{Ze}. A Fourier expansion for small-amplitude solutions of the form \begin{align*} \varphi(\theta;\epsilon) = \sum_{n=0}^\infty \epsilon^{2n+1} \psi_{2n+1} e^{i(2n+1)\theta} + \text{c.c.}, \qquad \omega(\epsilon) &= \sum_{n=0}^\infty \epsilon^{2n} \omega_{2n} \end{align*} gives \begin{align} \begin{split} \omega_0 &= k b(k) = \begin{cases} \frac{1}{2}\sgn k & \text{if $\alpha = 2$ (Euler)}, \\ b_\alpha k|k|^{1-\alpha} & \text{if $0<\alpha < 1$ or $1 < \alpha < 2$}, \\ -2 k\log |k| & \text{if $\alpha = 1$ (SQG)}, \end{cases} \\ \omega_2 &= \sigma_2|\psi_1|^2, \qquad \sigma_2 = \frac{1}{2}k \left[4a(k) - a(2k)\right]. \end{split} \label{defw0} \end{align} In addition, one finds that \[ \psi_3 = \frac{1}{2}\left[ \frac{a(k) - a(2k) + \frac{1}{3}a(3k)}{b(k) - b(3k)}\right] \psi_1^3. 
\] We remark that in the case of the approximate equation \eqref{approx2} for Euler, with $\alpha = 2$ and $a(k) = |k|/2$, we get that \[ a(k) - a(2k) + \frac{1}{3}a(3k) = 0, \] so $\psi_3 = 0$. In fact, \eqref{approx2} has an exact harmonic traveling wave solution \[ \varphi = \psi e^{ikx-i\omega t} + \text{c.c.},\qquad \omega = \frac{1}{2}\left(1 + k^2 |\psi|^2\right)\sgn k. \] The coefficient $\psi_3$ is nonzero for $0<\alpha<2$, and presumably there is no simple explicit solution for the traveling waves in that case. If $0<\alpha<2$, then the linearized wave motion is dispersive, and the NLS approximation for \eqref{sqg_eq} is \begin{align*} &\varphi(x,t) = \epsilon \psi\left(\epsilon(x - \omega_0' t),\epsilon^2t\right) e^{ikx-i\omega_0 t} +\text{c.c.} + \O(\epsilon^3) \qquad \text{as $\epsilon \to 0$}, \end{align*} where a prime denotes the derivative with respect to $k$ and $\psi(X,T)$ satisfies \[ i \psi_T = -\frac{1}{2} \omega_0'' \psi_{XX} + \sigma_2 |\psi|^2 \psi. \] The same NLS equation follows from the full equation \eqref{nonconseqn}, since it depends only on the cubic part of the nonlinearity. From \eqref{defw0}, we have for $k>0$ that $\sigma_2 > 0$, and \begin{equation*} \omega_0'' = \begin{cases} b_\alpha (2-\alpha)(1-\alpha) k^{-\alpha} & \text{if $0<\alpha < 1$ or $1 < \alpha < 2$}, \\ -2/k& \text{if $\alpha = 1$ (SQG)}. \end{cases} \end{equation*} Equation \eqref{a-const} implies that $b_\alpha > 0$ for $1<\alpha<2$ and $b_\alpha <0$ for $0<\alpha<1$, so $\omega_0'' < 0$. Hence, $\omega_0'' \sigma_2 < 0$, and the NLS equation is focusing for all $0< \alpha < 2$. It follows that small-amplitude wavetrains on the front are modulationally unstable, and the front supports envelope solitons. \section{Numerical solutions} \label{sec:num} In this section, we show two numerical solutions of the initial value problem for the approximate SQG front equation in \eqref{sqgivp} that indicate the formation of singularities in finite time.
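In the SQG case $\alpha = 1$, where $a(k) = -k^2\log|k|$ and $\omega_0 = -2k\log|k|$ are explicit, the signs $\sigma_2 > 0$ and $\omega_0'' < 0$ obtained in the previous section can be checked directly; for $k > 0$ one has $\sigma_2 = \frac{1}{2}k\left[4a(k) - a(2k)\right] = 2k^3\log 2$. A minimal numerical sketch (the test wavenumber is arbitrary):

```python
import math

# Focusing check for alpha = 1 (SQG): sigma2 = (k/2)*(4*a(k) - a(2k)) with
# a(k) = -k^2*log(k) for k > 0 simplifies to 2*k^3*log(2) > 0, while
# omega0'' = -2/k < 0, so omega0''*sigma2 < 0 and the NLS equation is focusing.
def a(k):
    return -k * k * math.log(k)

def sigma2(k):
    return 0.5 * k * (4.0 * a(k) - a(2.0 * k))

def omega0_pp(k):
    return -2.0 / k

k = 3.0
focusing = omega0_pp(k) * sigma2(k) < 0.0
```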
The first solution is for the initial data \begin{equation} \varphi_0(x) = \cos(x+\pi) + \frac{1}{2}\cos[2(x+\pi +2\pi^2)]. \label{two_cos_ic} \end{equation} A surface plot of the solution, computed using a pseudo-spectral method with spectral viscosity, is shown in Figure~\ref{fig:two_cos_surf}. Numerical results suggest that an oscillatory singularity forms at $t\approx 0.06$ near $x\approx 2.15$, before there is an appreciable change in the global shape of the solution. The solution appears to be smooth before the singularity forms, and the numerical singularity formation time does not appear to change under further refinement. Moreover, the structure of the solution remains similar as one increases the number of Fourier modes, although the number of oscillations and the $x$-location of their left endpoint increases. One might conjecture that the formation of singularities in the approximate SQG front equation is associated with the breaking and filamentation of the front, rather than a loss of smoothness, but since we are using a graphical description of the front, we are unable to distinguish between the two. The numerical solutions suggest that it may be possible to continue smooth solutions of \eqref{sqgivp} by some type of weak solution after singularities form. These weak solutions appear to remain continuous, which could be associated with the extreme thinness of any filaments that form, as seems to occur in the case of the filamentation of vorticity fronts \cite{BiHu, BiHu1}. In Figures~\ref{fig:sech}--\ref{fig:sech_detail}, we show a solution of \eqref{sqgivp} with the initial data \begin{equation} \varphi_0(x) = \sech^2\left[\frac{5(x-\pi)}{2}\right] \label{sech_ic} \end{equation} for $0\le t \le 0.05$. The singularity formation time is $t\approx 0.02$.
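The spectral evaluation underlying such a pseudo-spectral computation can be sketched as follows. This is a minimal illustration only (no time stepping and no spectral viscosity): $\log|\partial_x|$ acts as the Fourier multiplier $\log|k|$, $\partial_x^2$ as $-k^2$, and the test profile is arbitrary:

```python
import numpy as np

# Minimal sketch of the spectral evaluation of the nonlinear flux in the
# approximate SQG front equation.  log|partial_x| d^2/dx^2 is applied as the
# Fourier multiplier -k^2 log|k|, which vanishes at k = 0 and k = +-1.
N = 256
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers

def L_dxx(f):
    """Apply log|partial_x| d^2/dx^2, i.e. the multiplier -k^2 log|k|."""
    fh = np.fft.fft(f)
    mult = -k ** 2 * np.log(np.maximum(np.abs(k), 1.0))
    return np.real(np.fft.ifft(mult * fh))

def flux(phi):
    """phi^2 L phi_xx - phi L (phi^2)_xx + (1/3) L (phi^3)_xx."""
    return (phi ** 2 * L_dxx(phi) - phi * L_dxx(phi * phi)
            + L_dxx(phi ** 3) / 3.0)

# For phi = cos(x) the flux can be computed by hand:
# log(2)*(cos x + cos 3x) - (3/4)*log(3)*cos 3x.
F = flux(np.cos(x))
```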
As in the previous case, a singularity forms before there is an appreciable change in the global shape of the solution, but in this case singularities form at two different locations: the first near the peak of the pulse and, a little later, a second near the front of the pulse. \begin{figure} \includegraphics[width=0.7\textwidth]{two_cos_14_surf} \caption{A surface plot of the solution of \eqref{sqgivp} with initial data \eqref{two_cos_ic} for $0\le t \le 0.12$. The solution is computed by a pseudo-spectral method with $2^{14}$ Fourier modes. A small oscillatory singularity forms at $t\approx 0.06$ near $x\approx 2.15$.} \label{fig:two_cos_surf} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{two_cos_15_graph} \caption{Graphs of the solution of \eqref{sqgivp} with initial data \eqref{two_cos_ic}. The solution is shown at $t=0$ (blue), $t= 0.1875$ (cyan), $t= 0.375$ (magenta), $t= 0.5625$ (green), $t = 0.75$ (red). The solution is computed by a pseudo-spectral method with $2^{15}$ Fourier modes.} \label{fig:two_cos_graph} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{two_cos_15_detail} \caption{Detail of singularity formation in the solution of \eqref{sqgivp} with initial data \eqref{two_cos_ic} shown in Figure~\ref{fig:two_cos_graph}. The solution is shown at $t=0$ (blue), $t= 0.1875$ (cyan), $t= 0.375$ (magenta), $t= 0.5625$ (green), $t = 0.75$ (red).} \label{fig: two_cos_detail} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{sech_15_surf} \caption{A surface plot of the solution of \eqref{sqgivp} with initial data \eqref{sech_ic} for $0\le t \le 0.05$. The solution is computed by a pseudo-spectral method with $2^{15}$ Fourier modes.} \label{fig:sech} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{sech_15_graph} \caption{Graphs of the solution of \eqref{sqgivp} with initial data \eqref{sech_ic} for $t=0$ (blue) and $t=0.5$ (red).
The solution is computed by a pseudo-spectral method with $2^{15}$ Fourier modes.} \label{fig:sech_graph} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{sech_15_detail} \caption{Detail of the singularity formation near the front of the pulse in the solution of \eqref{sqgivp} with initial data \eqref{sech_ic} shown in Figure~\ref{fig:sech_graph}. The solution is shown at $t=0$ (blue), $t= 0.125$ (cyan), $t= 0.25$ (magenta), $t= 0.375$ (green), $t = 0.5$ (red).} \label{fig:sech_detail} \end{figure}
\section{Introduction} \label{Introduction} Particles produced in non-central relativistic nucleus-nucleus collisions are predicted to be globally polarized along the direction of the system's orbital angular momentum, perpendicular to the reaction plane~\cite{LiangPRL94, Voloshin0410089, Liang0411101}. The origin of this global polarization lies in the transformation of the orbital angular momentum into the particle's spin due to spin-orbit coupling. Among the observable consequences of this effect are hyperon global polarization and the global spin alignment of vector mesons. This global spin-orbit transformation can happen at various stages of the system's evolution, and its observation can provide important information on the hadronization mechanism and the origin of the particle's spin. One specific scenario for such spin-orbit transformation via the polarized quark phase is discussed in~\cite{LiangPRL94}. Assuming that the strange and non-strange quark polarizations, $P_s$ and $P_q$, are equal, in the particular case of the `exclusive' parton recombination scenario~\cite{LiangPRL94}, the values of the global polarization $P_H$ for $\Lambda$, $\Sigma$, and $\Xi$ hyperons appear to be similar to those for quarks: $P_H = P_q \simeq 0.3$. At the same time, the predicted global spin alignment of vector mesons is defined by terms proportional to higher quark polarization powers, $P_{q}^2$~\cite{Liang0411101}. Recently, more realistic calculations~\cite{Liang:Xian2006} of the global quark polarization were performed within a model based on the HTL (Hard Thermal Loop) gluon propagator. The resulting hyperon polarization was predicted to be in the range from $-0.03$ to $0.15$ depending on the temperature of the QGP formed. Preliminary results on $\Lambda$--hyperon global polarization and on the spin alignment of the $\phi(1020)$ and ${K^*}^0(892)$ vector mesons with respect to the reaction plane were recently presented~\cite{Selyuzhenkov:2005xa, Selyuzhenkov:2006fc, Chen:2007}.
In this paper we present the results for $\bar\Lambda$--hyperon global polarization in Au+Au collisions at $\sqrt{s_{NN}}$=62 and 200~GeV as a function of $\bar\Lambda$ transverse momentum and pseudorapidity measured with the STAR (Solenoidal Tracker At RHIC) detector. \section{$\bar\Lambda$ global polarization} \label{Polarization} $\bar\Lambda$ global polarization can be determined from the angular distribution of its decay products with respect to the system orbital momentum {\boldmath $L$}: \begin{eqnarray} \label{GlobalPolarizationDefinition} \frac{dN}{d \cos\theta^*} \sim 1~+~\alpha_{\bar\Lambda}~P_{\bar\Lambda}~\cos \theta^*, \end{eqnarray} where $P_{\bar\Lambda}$ is the $\bar\Lambda$ global polarization, $\alpha_{\bar\Lambda} = - 0.642\pm0.013$~\cite{Eidelman:2004wy} is the $\bar\Lambda$ decay parameter, $\theta^*$ is the angle between the system's orbital momentum {\boldmath $L$} and the 3-momentum of $\bar\Lambda$'s decay products in the $\bar\Lambda$'s rest frame. The observable used in the $\bar\Lambda$ global polarization measurement is derived in~\cite{Selyuzhenkov:2006fc}: \begin{eqnarray} \label{GlobalPolarizationObservable} P_{\bar\Lambda}~=~\frac{8}{\pi\alpha_{\bar\Lambda}}\langle \sin \left( \phi^*_p - \Psi_{RP}\right)\rangle. \end{eqnarray} Here $\phi^*_p$ is the azimuthal angle of the anti-proton's 3-momentum, measured in $\bar\Lambda$'s rest frame. Angle brackets in this equation denote averaging over the solid angle of anti-proton's 3-momentum in $\bar\Lambda$'s rest frame and over all directions of the system orbital momentum {\boldmath $L$}, or, in other words, over all possible orientations of the reaction plane. In this paper, $\bar\Lambda$ particles were reconstructed from their weak decay topology, $\bar\Lambda \to \bar p \pi^+ $, using charged tracks measured in the STAR main TPC (Time Projection Chamber)~\cite{Anderson:2003ur}. 
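The observable in Eq.~\ref{GlobalPolarizationObservable} can be checked with a toy Monte Carlo: generate decay anti-protons in the $\bar\Lambda$ rest frame according to Eq.~\ref{GlobalPolarizationDefinition}, with {\boldmath $L$} along the $y$-axis and $\Psi_{RP} = 0$, and recover the input polarization from the mean of $\sin(\phi^*_p - \Psi_{RP})$. The input polarization value and sample size below are illustrative:

```python
import math, random

# Toy Monte Carlo for P = 8/(pi*alpha) * <sin(phi*_p - Psi_RP)>: sample
# cos(theta*) from dN/dcos(theta*) ~ 1 + alpha*P*cos(theta*) (theta* measured
# from L, taken along y), with a uniform azimuth around L, then average the
# transverse-plane sine.  P_true and N are illustrative.
random.seed(7)
alpha, P_true, N = -0.642, 0.3, 200_000

acc = 0.0
for _ in range(N):
    # rejection sampling of cos(theta*) on [-1, 1]
    while True:
        c = random.uniform(-1.0, 1.0)
        if random.uniform(0.0, 1.0 + abs(alpha * P_true)) <= 1.0 + alpha * P_true * c:
            break
    psi = random.uniform(0.0, 2.0 * math.pi)     # azimuth around L
    s = math.sqrt(1.0 - c * c)
    ny, nx = c, s * math.cos(psi)                # L along y, beam along z
    acc += ny / math.sqrt(nx * nx + ny * ny)     # sin(phi*_p - Psi_RP)

P_est = 8.0 / (math.pi * alpha) * (acc / N)
```

Up to statistical fluctuations, the estimator returns the input polarization.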
The reaction plane angle in Eq.~\ref{GlobalPolarizationObservable} is estimated by calculating the so-called event plane flow vector $Q_{EP}$~\cite{Voloshin:1994mz,Poskanzer:1998yz}. This first-order event plane vector was determined from charged tracks measured in two STAR Forward TPCs~\cite{Ackermann:2002yx}. \section{Results} \label{Results} Figures \ref{antiLambdaGlobalPolarization_eta} and \ref{antiLambdaGlobalPolarization_pt} present $\bar\Lambda$--hyperon's global polarization as a function of $\bar\Lambda$ pseudorapidity and transverse momentum. Black circles (red squares) show the result of the measurement for Au+Au collisions at $\sqrt{s_{NN}}$=200~GeV (62~GeV) with the STAR detector. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{antiLambdaGlobalPolarization_eta.eps} \put(-230,70){\rotatebox{90}{$P_{\bar\Lambda}$}} \put(-120,-5){$\eta$} \put(-128,125){{\bf STAR Preliminary}} \parbox{0.4\textwidth}{\caption{\label{antiLambdaGlobalPolarization_eta} {\small (Color online) Global polarization of $\bar\Lambda$--hyperons as a function of $\bar\Lambda$ pseudorapidity. Black circles show the results for Au+Au collisions at $\sqrt{s_{NN}}$=200~GeV (centrality region \mbox{20-70\%}) and red squares indicate the results for Au+Au collisions at $\sqrt{s_{NN}}$=62~GeV (centrality region \mbox{0-80\%}). Only statistical errors are shown. }}} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{antiLambdaGlobalPolarization_pt.eps} \put(-230,70){\rotatebox{90}{$P_{\bar\Lambda}$}} \put(-140,-5){$p_t$ (GeV/c)} \put(-195,125){{\bf STAR Preliminary}} \parbox{0.4\textwidth}{\caption{\label{antiLambdaGlobalPolarization_pt} {\small (Color online) Global polarization of $\bar\Lambda$ hyperons as a function of $\bar\Lambda$ transverse momentum.
Black circles show the results for Au+Au collisions at $\sqrt{s_{NN}}$=200~GeV (centrality region \mbox{20-70\%}) and red squares indicate the results for Au+Au collisions at $\sqrt{s_{NN}}$=62~GeV (centrality region \mbox{0-80\%}). Only statistical errors are shown. }}} \end{center} \end{figure} Within statistical errors, no deviation from zero has been observed. The $p_t$-integrated global polarization result is dominated by the region $p^{\bar\Lambda}_t<3$~GeV, where measurements are consistent with zero. The constant line fits for the $\bar\Lambda$--hyperon global polarization as a function of pseudorapidity give: $P_{\bar\Lambda} = (1.7 \pm 10.7) \times 10^{-3}$ for Au+Au collisions at $\sqrt{s_{NN}}$=200~GeV (centrality region \mbox{20-70\%}) and $P_{\bar\Lambda} = (-17.3 \pm 11.0) \times 10^{-3}$ for Au+Au collisions at $\sqrt{s_{NN}}$=62~GeV (centrality region \mbox{0-80\%}). These results are consistent with those from $\Lambda$--hyperon global polarization measurements~\cite{Selyuzhenkov:2006fc}. \section{Acceptance corrections and systematic uncertainties} \label{Systematic} The derivation of Eq.~\ref{GlobalPolarizationObservable} assumes a perfect reconstruction acceptance for the $\bar\Lambda$--hyperon. For the case of a non-perfect detector, one has to correct the results by the detector acceptance function~\cite{Selyuzhenkov:2006tj}: \begin{eqnarray} \label{AccCoefficient} A_{0}(p_t^H,\eta^H) = \frac{4}{\pi} \int {\frac{d\Omega^*_p}{4\pi} \frac{d\phi_H}{2\pi}A\left({\bf p}_H, {\bf p}^*_p\right) \sin\theta^*_p}. \end{eqnarray} Here $d\Omega^*_p = d\phi^*_p \sin \theta^*_p d \theta^*_p$ is the solid angle of the hyperon decay products' 3-momentum ${\bf p}^*_p$ in the hyperon rest frame; ${\bf p}_H$ ($\phi_H$) is the hyperon 3-momentum (azimuthal angle), and $A\left({\bf p}_H, {\bf p}^*_p\right)$ is a function to account for detector acceptance.
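As a consistency check, for a perfect detector ($A \equiv 1$) the $\phi_H$ and $\phi^*_p$ integrals in Eq.~\ref{AccCoefficient} are trivial and $A_0 = (4/\pi)\langle \sin\theta^*_p\rangle = 1$, since the average of $\sin\theta^*_p$ over the sphere is $\pi/4$. A short quadrature sketch:

```python
import math

# For perfect acceptance A = 1:
#   A0 = (4/pi) * <sin(theta*)>,
#   <sin(theta*)> = int_0^pi sin(theta)*sin(theta) d(theta) / int_0^pi sin d(theta)
#                 = (pi/2) / 2 = pi/4,
# so A0 = 1 and no acceptance correction is needed.  Midpoint quadrature:
n = 100_000
acc = 0.0
for i in range(n):
    theta = math.pi * (i + 0.5) / n
    acc += math.sin(theta) ** 2        # sin(theta*) weighted by sin(theta) d(theta)
integral = acc * math.pi / n           # approximates pi/2
mean_sin = integral / 2.0              # normalize by int_0^pi sin d(theta) = 2
A0 = 4.0 / math.pi * mean_sin
```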
For the $\bar\Lambda$--hyperons reconstructed with the STAR detector, this function is found to follow the same pseudorapidity and transverse momentum dependence as for the $\Lambda$--hyperon~\cite{Selyuzhenkov:2006tj}, and the corresponding corrections are estimated to be less than 20\%. Similar to the $\Lambda$--hyperon results~\cite{Selyuzhenkov:2006tj}, the admixture from $\bar\Lambda$ directed flow to the global polarization measurement is found to be negligible. Another type of correction arises from a possible dependence of the hyperon global polarization on the relative azimuthal angle between the direction of the hyperon's 3-momentum and the system orbital momentum {\boldmath $L$}. For a perfect detector the observable in Eq.~\ref{GlobalPolarizationObservable} gives an average of the global polarization over this relative azimuthal angle. This can be shown by expanding the global polarization as a function of ($\phi_H-\Psi_{RP}$) in a sum (due to the symmetry of the system only even harmonics contribute): \begin{eqnarray} \label{sumForGlobalPolarization} &&P_H\left(\phi_H-\Psi_{RP},p_t^H,\eta^H\right)~=\\ \nonumber &&=~\sum_{n=0}^\infty P_H^{(n)}\left(p_t^H,\eta^H\right)\cos\{2n[\phi_H-\Psi_{RP}]\}. \end{eqnarray} The global polarization averaged over all possible values of ($\phi_H-\Psi_{RP}$) is then given by: \begin{eqnarray} \label{GPaverage} P_H\left(p_t^H,\eta^H\right) & \equiv & \overline{P_H\left(\phi_H-\Psi_{RP},p_t^H,\eta^H\right)} \\ \nonumber & = & P_H^{(0)}\left(p_t^H,\eta^H\right). 
\end{eqnarray} For the case of an imperfect detector, the observable in Eq.~\ref{GlobalPolarizationObservable} is proportional to $P_H^{(0)}$ and contains an additive admixture of higher harmonic terms, namely $P_H^{(2)}$ (compare with Eq.~4 in~\cite{Selyuzhenkov:2006tj}): \begin{eqnarray} \label{GlobalPolarizationObservableAcc} && \langle \sin \left( \phi^*_p - \Psi_{RP}\right)\rangle = \\ &&= \nonumber \frac{\alpha_H}{2} \int {\frac{d\Omega^*_p}{4\pi} \frac{d\phi_H}{2\pi}A\left({\bf p}_H, {\bf p}^*_p\right) \sin\theta^*_p}\times \\ && \nonumber \times \left[P_H^{(0)} -\frac{P_H^{(2)}}{2} \cos\left[2(\phi_H-\phi_p^*)\right]\right]. \end{eqnarray} For perfect acceptance this reduces to the observable in Eq.~\ref{GlobalPolarizationObservable}, where $P_H$ ($P_{\bar\Lambda}$) is to be understood as $P_H^{(0)}$. According to Eq.~\ref{GPaverage}, $P_H^{(0)}$ is the average of the global polarization over the relative azimuthal angle between the hyperon's direction and the system's orbital momentum {\boldmath $L$}. Due to the non-uniform detector acceptance, Eq.~\ref{GlobalPolarizationObservableAcc} contains two different contributions. The first is defined by the acceptance correction function $A_0(p_t^H,\eta^H)$ in Eq.~\ref{AccCoefficient}. Deviation of this function from unity (perfect detector) affects the overall scale of the measured global polarization. The contribution from the second term is proportional to $P_H^{(2)}$ and is defined by the function: \begin{eqnarray} \label{AccCoefficientAdditive} A_{2}(p_t^H,\eta^H) &=& \frac{2}{\pi} \int {\frac{d\Omega^*_p}{4\pi} \frac{d\phi_H}{2\pi}A\left({\bf p}_H, {\bf p}^*_p\right)}\times \\&&\nonumber \times \sin\theta^*_p \cos\left[2(\phi_H-\phi_p^*)\right]. \end{eqnarray} For perfect acceptance this function is zero. 
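In practice the admixture coefficient can be estimated as a sample average over reconstructed candidates. The following is a minimal illustrative Python sketch (the candidate tuples are hypothetical stand-ins for the measured kinematics, not the STAR data format):

```python
import math

def a2_estimate(candidates):
    """Sample average of sin(theta*_p) * cos(2*(phi_H - phi*_p)) over
    hyperon candidates; for an azimuthally uniform acceptance the
    average vanishes, signalling no A_2 admixture."""
    vals = [math.sin(theta_star) * math.cos(2.0 * (phi_h - phi_star))
            for phi_h, theta_star, phi_star in candidates]
    return sum(vals) / len(vals) if vals else 0.0
```

A sample uniform in $\phi_H$ averages to zero, while a sample with $\phi_H$ and $\phi_p^*$ fully correlated gives a value close to $\langle\sin\theta^*_p\rangle$.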
Taking into account that the background contribution to the $\Lambda$ and $\bar\Lambda$ invariant mass distributions is less than 8\%, the value of the function $A_2(p_t^H,\eta^H)$ can be extracted directly from the experimental data by calculating $\left<\sin\theta^*_p\cos\left[2(\phi_H-\phi_p^*)\right]\right>$ for $\Lambda$ and $\bar\Lambda$ candidates. The result of such calculations is presented in Figure~\ref{corr2Figure}. Assuming that the different terms in the expansion (\ref{sumForGlobalPolarization}) are of the same order of magnitude, the corresponding corrections from the admixture of $P_H^{(2)}\left(p_t^H,\eta^H\right)$ to the $\Lambda$ and $\bar\Lambda$ hyperon global polarization measurements are found to be less than 20\%. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{acc2_SinThetaStar_pt.eps} \put(-233,60){\rotatebox{90}{$A_{2}^{\Lambda,\bar\Lambda}$}} \put(-125,-5){$p_t^{\Lambda,\bar\Lambda}$}\\ \includegraphics[width=0.5\textwidth]{acc2_SinThetaStar_eta.eps} \put(-233,60){\rotatebox{90}{$A_{2}^{\Lambda,\bar\Lambda}$}} \put(-125,-5){$\eta^{\Lambda,\bar\Lambda}$} \parbox{0.4\textwidth}{\caption{ \label{corr2Figure} {\small (Color online) Integral (\ref{AccCoefficientAdditive}) as a function of $\Lambda$ (black circles) and $\bar\Lambda$ (red squares) transverse momentum (top) and pseudorapidity (bottom). }}} \end{center} \end{figure} Feed-down effects, spin precession, and the uncertainty of the reaction plane angle reconstruction procedure for the $\Lambda$--hyperon global polarization measurement have been discussed in~\cite{Selyuzhenkov:2006tj}. Based on a similar study, the overall relative uncertainty in the $\bar\Lambda$ global polarization measurement due to detector effects is found to be less than a factor of 2. \section{Conclusion} \label{Conclusion} The $\bar\Lambda$--hyperon global polarization has been measured in Au+Au collisions at center-of-mass energies $\sqrt{s_{NN}}$=62 and 200~GeV with the STAR detector at RHIC. 
Within uncertainties we observe no significant deviation of the $\bar\Lambda$ global polarization from zero. The possible dependence of the global polarization on the relative azimuthal angle between the system's orbital momentum and the hyperon's 3-momentum is discussed. The corresponding systematic uncertainty due to detector acceptance in the $\Lambda$ and $\bar\Lambda$ global polarization measurements is found to be less than 20\%. Combining the results of this measurement with those from~\cite{Selyuzhenkov:2006fc}, an upper limit of $|P_{\Lambda,\bar\Lambda}| \leq 0.02$ for the global polarization of $\Lambda$ and $\bar\Lambda$ hyperons within STAR's acceptance is obtained. This upper limit is far below the few tens of percent discussed in~\cite{LiangPRL94}, but it falls within the region predicted by the more realistic calculations~\cite{Liang:Xian2006} based on the HTL (Hard Thermal Loop) model. {\small
\subsection*{#1}} \definecollection{appendix-suspect} \definecollection{appendix-algorithm} \definecollection{appendix-mean-payoff} \bibliographystyle{abbrv} \title{Robust Equilibria in Mean-Payoff Games} \author{Romain Brenguier\thanks{Work supported by ERC Starting Grant inVEST (279499).} \\ \texttt{romain.brenguier@ulb.ac.be}} \newcommand\mylabel[2]{\label{#2}} \newcommand\mysetcounter[2]{} \def\texttt{t}\xspace{\texttt{t}\xspace} \def\texttt{w}\xspace{\texttt{w}\xspace} \def\newpage{\newpage} \begin{document} \maketitle \begin{abstract} We study the problem of finding robust equilibria in multiplayer concurrent games with mean-payoff objectives. A $(k,t)$-robust equilibrium is a strategy profile such that no coalition of size $k$ can improve the payoff of one of its members by deviating, and no coalition of size $t$ can decrease the payoff of the other players. We are interested in pure equilibria, that is, solutions that can be implemented using non-randomized strategies. We suggest a general transformation from multiplayer games to two-player games such that pure equilibria in the first game correspond to winning strategies in the second one. From this transformation, we then derive an algorithm which computes equilibria in mean-payoff games. Robust equilibria in mean-payoff games reduce to winning strategies in multidimensional mean-payoff games for thresholds satisfying certain constraints. We then show that the existence of such equilibria can be decided in polynomial space, and that the decision problem is \PSPACE-complete. \end{abstract} \section{Introduction} Games are intensively used in computer science to model interactions in computerized systems. Two-player antagonistic games have been successfully used for the synthesis of reactive systems. In this context, the opponent acts as a hostile environment, and winning strategies provide controllers that ensure correctness of the system under any scenario. 
In order to model complex systems in which several rational entities interact, multiplayer concurrent games come into the picture. Correctness of the strategies can be specified with different solution concepts, which describe formally what a ``good'' strategy is. In game theory, the fundamental solution concept is Nash equilibrium~\cite{nash50}, and others have been proposed to refine or relax it, such as subgame perfect equilibrium~\cite{Selten65}, iterative admissibility~\cite{Rub94}, and robust equilibria~\cite{abraham2006distributed}. The notion of robust equilibria refines Nash equilibria in two ways: \begin{myitemize} \item a robust equilibrium is \emph{resilient}, \ie when a ``small'' coalition of players changes its strategy, it cannot improve the payoff of any of its participants; \item it is \emph{immune}, \ie when a ``small'' coalition changes its strategy, it will not lower the payoff of the non-deviating players. \end{myitemize} The maximal size of small coalitions is determined by a bound $k$ for resilience and another bound $t$ for immunity. When a strategy profile is both $k$-resilient and $t$-immune, it is called a $(k,t)$-robust equilibrium. We also generalize this concept to the notion of $(k,t,r)$-robust equilibrium, where if $t$ players deviate, the others should not have their payoffs decreased by more than $r$. \paragraph*{Example} In the design of network protocols, when many users are interacting, coalitions can easily be formed and resilient strategies are necessary to avoid deviations. It is also likely that some clients are faulty and begin to behave unexpectedly, hence the need for immune strategies. As an example, consider a program for a server that distributes files, part of which is represented in \figurename~\ref{fig:program}. The functions \texttt{listen} and \texttt{send\_files} will be run in parallel by the server. 
Some choices in the design of these functions have not been fixed yet, and we wish to analyze the robustness of the different alternatives. This program uses a table~\texttt{clients} to keep track of the clients which are connected. Notice that the table has fixed size 2, which means that if 3 clients try to connect at the same time, one of them may have its socket overwritten in the table and will have to reconnect later to get the file. We want to know what strategy the clients should use and how robust the protocol will be: can clients exploit it to get their files faster than with normal usage, and how will the performance of the other clients be affected? We consider different strategies to choose between the possible alternatives in the program of \figurename~\ref{fig:program}. The strategy that chooses alternatives $1$ and $3$ does not give $1$-resilient equilibria even for just two clients: player~$1$ can always reconnect just after its socket was closed, so that \texttt{clients[0]} belongs to player~$1$ once again. In this way, he can deviate from any profile so as to never have to wait for the file. Since the second player could do the same thing, no profile is $1$-resilient (nor $1$-immune). For the same reasons, the strategy $2, 3$ does not give $1$-resilient equilibria. The strategy $1, 4$ does not give $1$-resilient equilibria either, since player~$1$ can launch a new connection after player~$2$ to overwrite \texttt{clients[1]}. The strategy $2, 4$ is the one that may give the best solution. We modeled the interaction of this program as a concurrent game for a situation with 2 potential clients in \figurename~\ref{fig:game-from-program}. Clients have a positive reward when we send them the file they requested. If both clients try to connect in the same slot, we use a matching-pennies game to decide which request was the first to arrive. For a general method to transform a game with continuous time into a concurrent game, see~\cite[Chapter~6]{brenguier12}. 
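The opening moves of the game of \figurename~\ref{fig:game-from-program} can be encoded directly as a transition table. The sketch below is an illustrative Python encoding (not from the paper); state names follow the figure labels and the matching-pennies convention follows the edges shown there.

```python
# Moves from the initial state [0,0]: each player either waits ('w')
# or connects with one of the two matching-penny actions ('ch'/'ct').
TAB = {}
for a1 in ('w', 'ch', 'ct'):
    for a2 in ('w', 'ch', 'ct'):
        if a1 == 'w' and a2 == 'w':
            nxt = '[0,0]'    # nobody connects
        elif a2 == 'w':
            nxt = '[1,0]'    # only player 1 connects
        elif a1 == 'w':
            nxt = '[2,0]'    # only player 2 connects
        elif a1 == a2:
            nxt = '[2,1]'    # pennies match: player 2 is served first
        else:
            nxt = '[1,2]'    # pennies differ: player 1 is served first
        TAB[('[0,0]', (a1, a2))] = nxt
```

This table reproduces exactly the edges leaving the initial state in the figure; the rest of the game graph can be encoded the same way.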
\paragraph*{Comparison with Nash equilibria and secure equilibria} In a Nash equilibrium, we ask for a strategy profile such that no player can benefit by changing only its own strategy. In a secure equilibrium, we ask that no player can, by changing its own strategy, benefit or keep the same reward while reducing the payoff of other players. Nash equilibria and secure equilibria have been proposed as concepts for the synthesis of distributed systems~\cite{CHJ05,brenguier12}. However, these concepts present two weaknesses: \begin{myitemize} \item There is no guarantee when two (or more) users deviate together. It can happen on a network that the same person controls several devices (a laptop and a phone for instance) and can then coordinate their behavior. In that case, the devices would be considered different agents and Nash equilibria offer no guarantee. \item When a deviation occurs, the strategies of the equilibrium can punish the deviating user without any regard for the payoffs of the others. This can result in a situation where, because of a faulty device, nobody can transmit. \end{myitemize} By comparison, finding resilient equilibria with $k$ greater than $1$ ensures that clients have no interest in forming coalitions (up to size $k$), and finding immune equilibria with $t$ greater than $0$ ensures that the other clients will not suffer from some agents (up to $t$) behaving differently from what was expected. 
\begin{figure}[htb] {\small \begin{verbatim} clients = new socket[2]; void listen() { while(true) { Socket socket = serverSocket.accept(); if(clients[0].isConnected()) //// Two possible alternatives: | 1) clients[1] = socket; | 2) if(socket.remoteSocketAddress() != clients[0].remoteSocketAddress()) | clients[1] = socket; else clients[0] = socket; } } void send_files() { while(true) { if(clients[0].isConnected()) { send(clients[0]); clients[0].close(); } //// Two possible alternatives : | 3) else if(clients[1].isConnected()) { | 4) if(clients[1].isConnected()) { send(clients[1]); clients[1].close(); } } } \end{verbatim} } \caption{Example of a server program.}\label{fig:program} \end{figure} \begin{figure}[htb] \centering{\scriptsize \begin{tikzpicture}[yscale=0.7] \draw (0,0) node[draw,rounded corners=3mm,minimum size=6mm] (S00) { $[0,0]$ }; \draw[-latex'] (-1,0) -- (S00); \draw (5,3) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (T10) { $[1,0]$ \texttt{send(1)} \texttt{close(1)}}; \draw (5,1) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (T12) { $[1,2]$ \texttt{send(1)} \texttt{close(1)}}; \draw (5,-3) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (T20) { $[2,0]$ \texttt{send(2)}}; \draw (0,-3) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (T21) { $[2,1]$ \texttt{send(2)}}; \draw[-latex'] (S00) -- node[above,sloped] {\texttt{ch,w}} node[below,sloped]{\texttt{ct,w}} (T10); \draw[-latex'] (S00) -- node[above,sloped] {\texttt{ch,ct}} node[below,sloped]{\texttt{ct,ch}} (T12); \draw[-latex'] (S00) -- node[above,sloped] {\texttt{w,ct}} node[below,sloped]{\texttt{w,ch}} (T20); \draw[-latex'] (S00) -- node[above,sloped] {\texttt{ch,ch}} node[below,sloped]{\texttt{ct,ct}} (T21); \draw[-latex'] (S00) .. controls +(0,-1) and +(-1,-1) .. 
node[below left,sloped] {\texttt{w,w}} (S00); \draw(T21.-90) node[below] {\dots}; \draw(T20.-90) node[below] {\dots}; \draw (10,0) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (V12) {$[0,2]$}; \draw[-latex'] (T12) -- node[above,sloped] {$\ast$, $\ast$} (V12); \draw[-latex'] (T10) -- node[above,sloped] {$\ast$, \texttt{ch}} node[below,sloped]{\texttt{$\ast$, ct}} (V12); \draw (5,-1) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (W02) { $[0,2]$ \texttt{send(2)} \texttt{close(2)}}; \draw (10,3) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (W12) { $[1,2]$ \texttt{send(2)} \texttt{close(2)}}; \draw (10,-2) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (W22) { $[2,2]$ \texttt{send(2)} \texttt{close(2)}}; \draw (12,-3) node[draw,rounded corners=3mm,minimum size=6mm,text width=1.5cm,text centered] (W21) { $[2,1]$ \texttt{send(1)} \texttt{close(1)}}; \draw[-latex'] (V12) -- node[above,sloped] {\texttt{w,w}} (W02); \draw[-latex'] (V12) -- node[right,pos=0.2] {\texttt{ch,w}} node[right,pos=0.4]{\texttt{ct,w}} node[right,pos=0.6]{\texttt{ch,ct}} node[right,pos=0.8]{\texttt{ct,ch}} (W12); \draw[-latex'] (V12) -- node[right,pos=0.3] {\texttt{w,ct}} node[right,pos=0.7]{\texttt{w,ch}} (W22); \draw[-latex',rounded corners=3mm] (V12) -| node[pos=0.6,right] {\texttt{ch,ch}} node[right,pos=0.7]{\texttt{ct,ct}} (W21); \draw[-latex',rounded corners=3mm] (T10) -| node[above,pos=0.4] {$\ast$, \texttt{w}} (S00); \draw[-latex'] (W02) -- (S00); \draw[-latex'] (W12) -- (T10); \draw[-latex'] (W22) -- (T20); \draw[-latex'] (W21) -- (T20); \end{tikzpicture} } \caption{Concurrent game generated from the program of \figurename~\ref{fig:program} for the choice $2, 4$. 
The labels represent the content of the table \texttt{clients}: $0$ means no connection, $1$ means connected with player~$1$ and $2$ connected with player~$2$; together with the instruction that the function \texttt{send\_files} is executing. Because the game is quite big, we represent only the part where player~$1$ connects before player~$2$; the rest of the graph can be deduced by symmetry. The actions of the players are either to wait (action {\texttt w}) or to connect (action {\texttt ch} or {\texttt ct}). In the case where both players try to connect at the same time, we simulate a matching-pennies game in order to determine which one will be treated first; this is the reason why there are two different possible actions to connect. In the graph, $\ast$ means any possible action. Player~$i$ gets a positive reward when we are in a state with \texttt{send(i)}. }\label{fig:game-from-program} \end{figure} \paragraph*{Contribution} In this paper, we study the problem of finding robust equilibria in multiplayer concurrent games. In Section~\ref{sec:suspect}, we describe a generic transformation from multiplayer games to two-player games. The resulting two-player game is called the \newdef{deviator game}. We show that equilibria in the original game correspond to winning strategies in the second one. In Section~\ref{sec:mean-payoff}, we study quantitative games with mean-payoff objectives. We show that the game obtained by our transformation is equivalent to a multidimensional mean-payoff game, and that the robustness problem then reduces to a value problem with linear constraints in multidimensional mean-payoff games. This can be solved in polynomial space by making use of the structure of the deviator game. In Section~\ref{sec:hardness}, we prove the matching lower bound, which shows that the robustness problem is \PSPACE-complete. 
\paragraph*{Related works} Other solution concepts have been studied for games on graphs, in particular Nash equilibria~\cite{ummels2009complexity,ummels2011complexity,BBMU12}, subgame perfect equilibria~\cite{Ummels08,brihaye2010}, regret minimization~\cite{filiot10}, and secure equilibria~\cite{CHJ05}. Note that in this paper we only consider \emph{pure} strategies: in the general case of randomized strategies, the existence of a Nash equilibrium with a particular payoff is undecidable~\cite{ummels2011complexity}. To solve quantitative games, we rely on the analysis of multidimensional mean-payoff games~\cite{velner12,BR15}. Note that the concept of robust equilibria for games with LTL objectives is expressible in logics such as strategy logic~\cite{CHP10} or $\text{ATL}^*$~\cite{AHK02}. However, satisfiability in these logics is difficult: it is 2\ComplexityFont{EXPTIME}-complete for ATL$^*$ and undecidable for strategy logic in general (2\ComplexityFont{EXPTIME}-complete fragments exist~\cite{MMPV12}). Moreover, these logics cannot express equilibria in quantitative games such as mean-payoff games. Nash equilibria correspond to the special case of $(1,0)$-robust equilibria. Secure equilibria~\cite{CHJ05} do not exactly correspond to a class of robust equilibria; however, we can note that $(1,1)\text{-robust} \implies \text{secure} \implies (1,0)\text{-robust}$. Although we do not directly solve the problem of finding secure equilibria in this paper, the techniques presented here could be adapted. \section{Definitions} \subsection{Weighted concurrent games} We study concurrent games as defined in~\cite{AHK02} with the addition of weights on the edges. 
\begin{definition} A \newdef{weighted concurrent game} (or simply a \newdef{game})~$\calG$ is a tuple $\langle \Stat, s_0,$ $\Pl, \Act, \Tab, (w_A)_{A\in\Agt} \rangle$, where: \begin{itemize} \item $\Stat$ is a finite set of \newdef{states} and $s_0 \in \Stat$ is the \newdef{initial state}; \item $\Pl$ is a finite set of \newdef{players}; \item $\Act$ is a finite set of \newdef{actions}; a tuple $(\shortAct_A)_{A\in\Pl}$ containing one action~$\shortAct_A$ for each player~$A$ is called a \newdef{move}; \item $\Tab : \Stat \times \Act^\Pl \rightarrow \Stat$ is the \newdef{transition function}; it associates with a given state and a given move the resulting state; \item for each player $A \in \Agt$, $w_A \colon \Stat \rightarrow \Z$ is a \newdef{weight function}, which assigns an integer weight to each state. \end{itemize} \end{definition} In a game~$\calG$, whenever we arrive at a state~$\stat$, the players simultaneously select an action. This results in a move~$\shortAct_\Pl$; the next state of the game is then $\Tab(\stat,\shortAct_\Pl)$. This process starts from $s_0$ and is repeated to form an infinite sequence of states. An example of a game is given in \figurename~\ref{fig:game-from-program}. It models the interaction of two clients $A_1$ and $A_2$ with the program presented in the introduction. The weight functions for this game are given by $w_{A_1} = 1$ in states labeled by \texttt{send(1)} and $w_{A_1} = 0$ elsewhere, and similarly $w_{A_2} = 1$ in states labeled by \texttt{send(2)}. 
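The process just described, repeatedly applying $\Tab$ to the current state and the chosen move, can be sketched in a few lines. This is an illustrative Python encoding (names and the toy game are not from the paper):

```python
# Toy two-player game: from state 'p' the play moves to 'q' only when
# both players pick 'b'; from 'q' it returns to 'p' only on move (a, a).
TAB = {
    ('p', ('a', 'a')): 'p', ('p', ('a', 'b')): 'p',
    ('p', ('b', 'a')): 'p', ('p', ('b', 'b')): 'q',
    ('q', ('a', 'a')): 'p', ('q', ('a', 'b')): 'q',
    ('q', ('b', 'a')): 'q', ('q', ('b', 'b')): 'q',
}

def play(tab, s0, moves):
    """Unfold the transition function along a finite sequence of moves,
    returning the visited states h_0 ... h_n (a history of the game)."""
    history = [s0]
    for move in moves:
        history.append(tab[(history[-1], move)])
    return history
```

Iterating `play` over an infinite move sequence would produce the infinite sequence of states that constitutes a play.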
A \newdef{play}~$\rho$ is an infinite sequence of states and moves, \ie an element of $(\Stat \cdot \Act^\Agt)^\omega$. We write $\rho_{\le n}$ for the prefix of $\rho$ of length $n+1$, \ie the history $\rho_0 \cdot \act_0(\rho) \cdots \act_{n-1}(\rho) \cdot \rho_n$. The \newdef{mean-payoff} for agent $A\in\Agt$ of a play~$\rho$ is the average of the weights along the play: \(\payoff_A(\rho) = \liminf_{n \rightarrow \infty} \frac{1}{n} \sum_{0 \le k\le n} w_A(\rho_{k}). \) Note that it only depends on the sequence of states, and not on the sequence of moves. The \newdef{payoff vector} of the run $\rho$ is the vector $p\in \mathbb{R}^\Agt$ such that for all player~$A \in \Agt$, $p_A = \payoff_A(\rho)$; we simply write $\payoff(\rho)$ for this vector. \end{definition} In fact, Section~\ref{sec:suspect}, which gives a general characterization of robust equilibria, does not depend much on the kind of objective we consider, and could be generalized to other payoff functions than mean-payoff. \def(\Stat \cdot \Act^\Agt)^* \cdot \Stat{(\Stat \cdot \Act^\Agt)^* \cdot \Stat} \begin{definition}[Strategies] Let~$\calG$ be a game, and~$A\in\Pl$. A \newdef{strategy} for player~$A$ maps histories to actions. Formally, a strategy is a function $\sigma_A\colon (\Stat \cdot \Act^\Agt)^* \cdot \Stat \to \Act$. A \newdef{coalition}~$C\subseteq \Pl$ is a set of players, its size is the number of players it contains and we write it $|C|$. A~strategy~$\sigma_C = (\sigma_A)_{A\in C}$ for a coalition~$C\subseteq \Agt$ is a tuple of strategies, one for each player in~$C$. We write $\sigma_{-C}$ for a strategy of coalition $\Agt\setminus C$. A~\newdef{strategy profile} is a strategy for~$\Pl$. We will write $(\sigma_{- C},\sigma'_C)$ for the strategy profile~$\sigma''_\Agt$ such that if $A \in C$ then $\sigma''_A = \sigma'_A$ and otherwise $\sigma''_A = \sigma_A$. We~write $\Strat_\calG(C)$ for the set of strategies of coalition~$C$. 
\end{definition} \begin{definition}[Outcomes] Let $C$ be a~coalition, and $\sigma_C$ a~strategy for~$C$. A~history~$h$ is \newdef{compatible} with the strategy~$\sigma_C$ if, for all~$k<\length h - 1$, $(\act_k(h))_A = \sigma_A(h_{\le k})$ for all~$A\in C$, and $\Tab(h_{k}, \act_k(h)) = h_{k+1}$. A~play~$\rho$ is \newdef{compatible} with the strategy~$\sigma_C$ if all its prefixes are. We~write~$\Out_{\calG}(\stat,\sigma_C)$ for the set of plays in~$\calG$ that are compatible with strategy~$\sigma_C$ of~$C$ and have initial state~$\stat$; these plays are called~\emph{outcomes} of $\sigma_C$ from~$\stat$. We simply write~$\Out_{\calG}(\sigma_C)$ when $\stat = \stat_0$, and $\Out_{\calG}$ is the set of plays that are compatible with some strategy. Note that when the coalition~$C$ is composed of all the players, the outcome is unique. We write $\payoff(\sigma_\Agt)$ for the payoff of the unique outcome of $\sigma_\Agt$. An \newdef{objective}~$\Omega$ is a set of plays, and a strategy $\sigma_C$ is said to be \newdef{winning} for objective $\Omega$ if all its outcomes belong to $\Omega$. \end{definition} \subsection{Equilibria notions} We now present the different solution concepts we will study. Solution concepts are formal descriptions of ``good'' strategy profiles. The most famous of them is Nash equilibrium~\cite{nash50}, in which no single player can improve the outcome for their own preference relation by changing only their own strategy. This notion can be generalized to consider coalitions of players; it is then called a resilient strategy profile. Nash equilibria correspond to the special case of $1$-resilient strategy profiles. 
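For ultimately periodic plays, the $\liminf$ in the mean-payoff definition above is simply the average weight over one repetition of the cycle, since the finite prefix does not contribute in the limit. A minimal sketch (illustrative Python, not from the paper, using exact rational arithmetic):

```python
from fractions import Fraction

def mean_payoff(weight, cycle):
    """Mean payoff for one player of an ultimately periodic play
    prefix . cycle^omega: the liminf of the running averages equals
    the average weight over the states of one cycle repetition."""
    return Fraction(sum(weight[s] for s in cycle), len(cycle))
```

For instance, with $w_A(p)=0$ and $w_A(q)=3$, any play ending in the cycle $(pq)^\omega$ has mean payoff $3/2$ for $A$, whatever its prefix.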
\begin{definition}[Resilience~\cite{aumann1959acceptable}] Given a coalition~$C \subseteq \Agt$, a strategy profile~$\sigma_{\Pl}$ is \newdef{$C$-resilient} if no agent $A$ in $C$ can improve her payoff, even if all agents in $C$ change their strategies, \ie $\sigma_{\Pl}$ is said to be $C$-resilient when: \[ \forall \sigma'_C \in \Strat_\calG(C).\ \forall A\in C.\ \payoff_{A} (\sigma_{-C}, \sigma'_C) \le \payoff_A(\sigma_\Pl) \] Given an integer $k$, we say that a strategy profile is $k$-resilient if it is $C$-resilient for every coalition~$C$ of size $k$. \end{definition} To ensure that players \emph{not} deviating are not affected too much by deviations, we consider immune strategies. \begin{definition}[Immunity~\cite{abraham2006distributed}] A strategy profile~$\sigma_{\Agt}$ is \newdef{($C,r$)-immune} if the players not in $C$ are not worse off by more than $r$ when the players in $C$ deviate, \ie when: \[ \forall \sigma'_C \in \Strat_\calG(C).\ \forall A\in \Agt \setminus C.\ \payoff_A(\sigma_\Pl) - r \le \payoff_A(\sigma_{-C}, \sigma'_C) \] Given an integer $t$, a strategy profile is said to be \newdef{($t,r$)-immune} if it is ($C,r$)-immune for every coalition~$C$ of size $t$. Note that $t$-immunity as defined in~\cite{abraham2006distributed} corresponds to $(t,0)$-immunity. \end{definition} Combining resilience and immunity gives the notion of robust equilibrium. \begin{definition}[Robust Equilibrium~\cite{abraham2006distributed}] A strategy profile is a \newdef{$(k,t,r)$-robust equilibrium} if it is both $k$-resilient and $(t,r)$-immune. \end{definition} The aim of this article is to characterize robust equilibria in order to construct the corresponding strategies, and to precisely describe the complexity of the following decision problem for mean-payoff games. 
\begin{definition}[Robustness Decision Problem] Given a game~$\calG$, integers~$k$ and $t$, and a rational~$r$, does there exist a strategy profile~$\sigma_\Agt$ that is a $(k,t,r)$-robust equilibrium in $\calG$? \end{definition} \draft{ \subsection{Undecidability for randomized strategies} \fbox{ In the final version:} we should have an undecidability proof for existence in the general case and a part on finding memoryless strategies. \fbox{In the case of payoff constraints} This problem has been shown to be undecidable for Nash equilibria, that is, the case $k=1$, $t=0$~\cite{UmmelsW} \fbox{if we allow randomization} and the proof was improved with only 3? players~\cite{Stan}. In this paper, we study the restriction to pure strategies: it is important in practice and we recover decidability. \fbox{MAYBE} We will also show decidability for the memoryless case with randomized strategies. } \section{Deviator Game}\label{sec:suspect} In order to obtain simple algorithms for the robustness problem, we use a correspondence with two-player zero-sum games. The concept of winning strategy has been well studied in computer science and we can make use of existing algorithms. We present the deviator game, which is a transformation of a multiplayer game into a turn-based zero-sum game, such that there are strong links between robust equilibria in the first one and winning strategies in the second one. This is formalized in Thm.~\ref{thm:dev-correct}. We begin by defining deviators. \subsection{Deviators} The basic notion we use to solve the robustness problem is that of deviators. It identifies the players that cause the current deviation from the expected outcome. \begin{definition}[Deviator] A \emph{deviator} from move $\shortAct_\Agt$ to $\shortAct'_\Agt$ is a player $D \in \Agt$ such that $\shortAct_D \ne \shortAct'_D$. We write this set of deviators: \( \dev(\shortAct_\Agt,\shortAct'_\Agt) = \{ A\in \Agt \mid \shortAct_A \ne \shortAct'_A \}. 
\) We extend the definition to histories and strategies by taking the union of deviator sets, formally $\dev(h,\sigma_\Agt) = \bigcup_{0 \le i < |h|} \dev(\act_i(h), \sigma_\Agt(h_{\le i}))$. It naturally extends to plays: if $\rho$ is a play, then $\dev(\rho,\sigma_\Agt) = \bigcup_{i\in\N} \dev(\act_i(\rho), \sigma_\Agt(\rho_{\le i}))$. \end{definition} Intuitively, given a play~$\rho$ and a strategy profile~$\sigma_\Agt$, the deviators represent the agents that need to change their strategies from $\sigma_\Agt$ in order to obtain the play $\rho$. This intuition is formalized in the following lemma. \begin{lemma}\mylabel{Lem}{lem:deviator-path} Let $\rho$ be a play, $\sigma_\Agt$ a strategy profile and $C \subseteq \Agt$ a coalition. Coalition $C$ contains $\dev(\rho,\sigma_\Agt)$ if, and only if, there exists $\sigma'_C$ such that $\rho \in \Out_\calG(\rho_0, \sigma'_C, \sigma_{-C})$. \end{lemma} \begin{proof} \fbox{$\Rightarrow$} Let $\rho$ be a play and $C$ a coalition which contains $\dev(\rho,\sigma_\Agt)$. We define $\sigma'_C$ to be such that for all $i$, $\sigma'_C(\rho_{\le i}) = (\act_i(\rho))_C$. We have that for every index~$i$, $\dev(\act_i(\rho), \sigma_\Agt(\rho_{\le i})) \subseteq C$. Therefore, for every agent $A \not\in C$, $\sigma_A(\rho_{\le i}) = (\act_i(\rho))_A$. Then $\Tab(\rho_i, \sigma'_C(\rho_{\le i}), \sigma_{-C}(\rho_{\le i})) = \rho_{i+1}$. Hence $\rho$ is the outcome of the profile $(\sigma_{-C}, \sigma'_C)$. \medskip \fbox{$\Leftarrow$} Let $\sigma_\Agt$ be a strategy profile, $\sigma'_C$ a strategy for coalition $C$, and $\rho \in \Out_\calG(\rho_0,\sigma_{-C},\sigma'_C)$. We have for every index~$i$ that $\act_i(\rho) = (\sigma_{-C}(\rho_{\le i}),\sigma'_C(\rho_{\le i}))$. Therefore, for every agent~$A\not\in C$, $(\act_i(\rho))_A = \sigma_A(\rho_{\le i})$. Then $\dev(\act_i(\rho), \sigma_\Agt(\rho_{\le i})) \subseteq C$. Hence $\dev(\rho,\sigma_\Agt)\subseteq C$. 
\end{proof} \subsection{Deviator Game} We now use the notion of deviators to draw a link between multiplayer games and a two-player game that we will use to solve the robustness problem. Given a concurrent game structure $\calG$, we define the deviator game~$\devgame$ between two players called \Eve and \Adam. Intuitively, \Eve needs to play according to an equilibrium, while \Adam tries to find a deviation of a coalition which will profit one of its players or harm one of the others. The states are in $\Stat' = \Stat \times 2^\Agt$; the second component records the deviators of the current history. The game starts in $(s_0,\varnothing)$ and then proceeds as follows: from a state~$(s,D)$, \Eve chooses an action profile $\shortAct_\Agt$ and \Adam chooses another one $\shortAct'_\Agt$; the next state is then $(\Tab(s,\shortAct'_\Agt),D \cup \dev(\shortAct_\Agt,\shortAct'_\Agt))$. In other words, \Adam chooses the move that will apply, but this can be at the price of adding players to the $D$ component when he does not follow the choice of \Eve. The weights of a state $(s,D)$ in this game are the same as those of $s$ in $\calG$. The construction of the deviator arena is illustrated in \figurename~\ref{fig:deviator-game}. 
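The update of the $D$ component described above is straightforward to implement. The following sketch is illustrative Python (not from the paper); moves are dicts from player names to actions, and `tab` keys moves by the tuple of actions in player order.

```python
def deviators(suggested, played):
    """dev(a, a'): the players whose action in the played move differs
    from the move suggested by Eve."""
    return frozenset(p for p in suggested if suggested[p] != played[p])

def deviator_step(tab, state, devs, suggested, played):
    """One round of the deviator game from (state, D): Adam's move
    `played` determines the successor state, and every player on whom
    it differs from Eve's `suggested` move is added to D."""
    move = tuple(played[p] for p in sorted(played))
    return tab[(state, move)], devs | deviators(suggested, played)

# Opening of the deviator arena for the example of the figure:
tab = {('[0,0]', ('ch', 'w')): '[1,0]',
       ('[0,0]', ('ch', 'ch')): '[2,1]'}
```

For instance, from $([0,0],\varnothing)$, if \Eve suggests \texttt{(ch,w)} and \Adam plays \texttt{(ch,ch)}, the game moves to $([2,1],\{A_2\})$, matching the corresponding edge of the figure.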
\begin{figure}[hbt] \centering{\scriptsize \begin{tikzpicture}[xscale=1] \everymath{\scriptsize} \draw (0,0) node[draw,rounded corners=3mm,minimum size=6mm] (B) {$[0,0],\varnothing$}; \draw (4,2) node[draw,text width=1.5cm,text centered,rounded corners=3mm] (BA1) {$[1,0]$ \texttt{send(1)} \texttt{close(1)} $\varnothing$}; \draw (4,-0.7) node[draw,text width=1.5cm,text centered,rounded corners=3mm] (BA2) {$[2,1]$ \texttt{send(2)} $\{ A_2 \}$}; \draw (4,0.7) node[draw,text width=1.5cm,text centered,rounded corners=3mm] (BA3) {$[1,2]$ \texttt{send(1)} \texttt{close(1)} $\{A_1\}$}; \draw (4,-2) node[minimum size=1.5cm](BA4) {\vdots}; \draw (8,2) node[draw,right,text width=1.5cm,text centered,rounded corners=3mm] (F) {[0,2] \\$\varnothing$}; \draw (8,1) node[draw,right,text width=1.5cm,text centered,rounded corners=3mm,minimum size=6mm] (G) {[0,2] \\ $\{A_2\}$}; \draw (8,0) node[draw,right,text width=1.5cm,text centered,rounded corners=3mm,minimum size=6mm] (H) {[0,2] \\ $\{A_1\}$}; \draw (8,-1) node[draw,right,text width=1.5cm,text centered,rounded corners=3mm,minimum size=6mm] (I) {[0,2] $\{A_1,A_2\}$}; \draw[-latex'] (B) -- node[above,sloped] {\texttt{(ch,w),(ch,w)}} (BA1); \draw[-latex'] (B) -- node[below,sloped] {\texttt{(ch,w),(ch,ch)}} (BA2); \draw[-latex'] (B) -- node[below,sloped] {\texttt{(w,ct),(ch,ct)}} (BA3); \draw[-latex',dotted] (B) -- (BA4.170); \draw[-latex'] (BA1) -- node[above,sloped] {\texttt{(w,ch),(w,ch)}} (F.180); \draw[-latex'] (BA1) -- node[above,sloped] {\texttt{(w,w),(w,ch)}} (G.180); \draw[-latex'] (BA1) -- node[above,sloped] {\texttt{(w,ch),(ct,ch)}} (H.170); \draw[-latex'] (BA3) -- node[below,sloped] {\texttt{(w,w),(w,w)}} (H.180); \draw[-latex'] (BA3) -- node[below,sloped] {\texttt{(w,w),(w,ch)}} (I.180); \draw (11,2) node {\dots}; \draw (11,1.4) node {\dots}; \draw (11,0) node {\dots}; \draw (11,-1.4) node {\dots}; \end{tikzpicture} } \caption{Part of the deviator game construction for the game of \figurename~\ref{fig:game-from-program}. 
Some transitions have been omitted because the full game is too large to be represented here. Labels on the edges correspond to the action of \eve and the action of \adam. Labels inside the states are the state of the original game and the deviator component. } \label{fig:deviator-game} \end{figure} We now define some transformations between the different objects used in games $\calG$ and $\calD(\calG)$. The notations are summarized in \figurename~\ref{fig:notations}. We define projections $\projun$,~$\projdeux$ and~$\projact$ from $\Stat'$ to $\Stat$, from $\Stat'$ to $2^\Agt$ and from $\Act^\Agt \times \Act^\Agt$ to $\Act^\Agt$ respectively. They are given by $\projun(s,D) = s$, $\projdeux(s,D)=D$ and $\projact(\shortAct_\Agt,\shortAct'_\Agt) = \shortAct'_\Agt$. We~extend these projections to plays in a natural~way, letting $\projpath(\rho) = \projun(\rho_0) \cdot \projact(\act_0(\rho)) \cdot \projun(\rho_1) \cdot \projact(\act_1(\rho)) \cdots$ and $\projdeux(\rho) = \projdeux(\rho_0) \cdot \projdeux(\rho_1) \cdots$. For any play~$\rho$, and any index~$i$, $\projdeux(\rho_i) \subseteq \projdeux(\rho_{i+1})$. Therefore $\projdeux(\rho)$, seen as a sequence of coalitions, is increasing and bounded by $\Agt$, so its limit~$\devlimit(\rho)=\bigcup_{i\in\mathbb{N}} \projdeux(\rho_i)$ is well defined. Moreover, to a strategy profile $\sigma_\Pl$ in $\calG$, we can naturally associate a strategy~$\projstrat(\sigma_\Pl)$ for \Eve in $\devgame$, defined for every history $h$ by $\projstrat(\sigma_\Pl)(h) = \sigma_\Pl(\projpath(h))$.
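The projections above can be made concrete on finite play prefixes. This is a hedged sketch with names of our own (\texttt{proj\_path}, \texttt{dev\_limit}, the list encoding of plays), not the paper's formal definitions:

```python
# A prefix of a play of D(G) is encoded here as a list of pairs
# ((state, deviator_set), (suggested_profile, actual_profile)).

def proj_path(play):
    """Keep the G-state and the actually played profile (proj of states/actions)."""
    return [(s, actual) for ((s, _dev), (_suggested, actual)) in play]

def dev_limit(play):
    """Limit of the increasing deviator components: their union."""
    limit = set()
    for ((_s, dev), _actions) in play:
        limit |= dev
    return limit

play = [
    (("s0", set()), (("w", "w"), ("w", "w"))),
    (("s0", set()), (("w", "w"), ("ch", "w"))),
    (("s1", {"A1"}), (("w", "w"), ("w", "w"))),
]
```

Since the deviator components only grow along a play, the union computed by `dev_limit` is exactly the eventual value of the $D$ component, mirroring $\devlimit(\rho)$.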
\begin{figure}[htb] \centering{ \begin{tikzpicture}[yscale=1.2] \draw (-1,1) node (G1) {$\calD(\calG):$}; \draw (-1,0) node (G0) {$\calG:$}; \draw (1,1) node (S1) {$\Stat'$}; \draw (1,0) node (S0) {$\Stat$}; \draw (2,0) node (M0) {$2^\Agt$}; \draw (4,1) node (A1) {$\Act^\Agt\times \Act^\Agt$}; \draw (4,0) node (A0) {$\Act^\Agt$}; \draw (8,1) node (P1) {$(\Stat \cdot \Act^\Agt\times \Act^\Agt)^\omega$}; \draw (8,0) node (P0) {$(\Stat \cdot \Act^\Agt)^\omega$}; \draw (11,1) node (ST1) {$\sigma_\exists$}; \draw (11,0) node (ST0) {$\sigma_\Agt$}; \draw[-latex'] (S1) -- node[left] {$\projun$} (S0); \draw[-latex'] (S1) -- node[right] {$\projdeux$} (M0); \draw[-latex'] (A1) -- node[right] {$\projact$} (A0); \draw[-latex'] (P1) -- node[right] {$\projpath$} (P0); \draw[-latex'] (ST0) -- node[right] {$\projstrat$} (ST1); \end{tikzpicture} } \caption{Summary of the transformations between the objects used in games $\calG$ and $\calD(\calG)$.} \label{fig:notations} \end{figure} The following lemma states the correctness of the construction of the deviator game, in the sense that it records the set of deviators in the strategy profile suggested by \adam with respect to the strategy profile suggested by \eve. The proof is a simple induction and can be found in the appendix. \begin{collect*}{appendix-suspect}{ \begin{lemma}\mylabel{Lem}{prop:correctness-deviator-game} Let $\calG$ be a game and $\sigma_\Pl$ be a strategy profile and $\sigma_\shortEve = \projstrat(\sigma_\Pl)$ the associated strategy in the deviator game. \begin{enumerate} \item If $\rho \in \Out_{\devgame} (\sigma_\shortEve)$, then $\dev(\projpath(\rho),\sigma_\Pl) = \devlimit(\rho)$. 
\item If $\rho \in \Out_\calG$ and $\rho'= ((\rho_i,\dev(\rho_{\le i},\sigma_\Pl)) \cdot (\sigma_\Pl(\rho_{\le i}), \act_i(\rho)))_{i\in \N}$ then $\rho' \in \Out_\devgame(\sigma_\shortEve)$. \end{enumerate} \end{lemma} }{}{}{ \begin{proof}[Proof of 1] We prove that for all $i$, $\dev(\projpath(\rho)_{\le i}, \sigma_\Pl) = \projdeux(\rho_{\le i})$, which implies the property. The property holds for $i=0$, since initially both sets are empty. Assume now that it holds for $i\ge 0$. \begin{align*} &\dev(\projpath(\rho)_{\le i+1}, \sigma_\Pl)\\ &= \dev(\projpath(\rho)_{\le i},\sigma_\Pl) \cup \dev(\sigma_\Pl(\projpath(\rho)_{\le i}), \projact(\act_{i+1}(\rho))) & \text{(by definition of deviators)}\\ &= \projdeux(\rho_{\le i}) \cup \dev(\sigma_\Agt(\projpath(\rho)_{\le i}), \projact(\act_{i+1}(\rho))) & \text{(by induction hypothesis)}\\ &= \projdeux(\rho_{\le i}) \cup \dev(\sigma_\shortEve(\rho_{\le i}), \projact(\act_{i+1}(\rho))) & \text{(by definition of $\sigma_\shortEve$)}\\ &= \projdeux(\rho_{\le i}) \cup \dev(\act_{i+1}(\rho)) & \text{(by assumption $\rho \in \Out_{\devgame}(\sigma_\shortEve)$)}\\ &= \projdeux(\rho_{\le i+1}) & \text{(by construction of $\devgame$)} \end{align*} This concludes the induction. \end{proof} \begin{proof}[Proof of 2] The property is shown by induction. It holds for the initial state.
Assume it is true until index $i$, then \begin{align*} \Tab'&(\rho'_i,\sigma_\shortEve(\rho'_{\le i}),\act_{i}(\rho)) \\ &= \Tab'((\rho_i,\dev(\rho_{\le i},\sigma_\Agt)),\sigma_\shortEve(\rho'_{\le i}),\act_{i}(\rho)) & \text{(by definition of $\rho'$)}\\ &=(\Tab(\rho_i,\act_{i}(\rho)), \dev(\rho_{\le i},\sigma_\Agt)\cup \dev(\sigma_\shortEve(\rho'_{\le i}), \act_{i}(\rho))) & \text{(by construction of $\Tab'$)}\\ &=(\rho_{i+1}, \dev(\rho_{\le i},\sigma_\Agt)\cup \dev(\sigma_\shortEve(\rho'_{\le i}), \act_{i}(\rho))) & \text{(since $\rho$ is an outcome of the game)}\\ &=(\rho_{i+1}, \dev(\rho_{\le i},\sigma_\Agt)\cup \dev(\sigma_\Agt(\rho_{\le i}), \act_{i}(\rho))) & \text{(by construction of $\sigma_\shortEve$)}\\ &=(\rho_{i+1}, \dev(\rho_{\le i+1},\sigma_\Agt)) & \text{(by definition of deviators)}\\ & = \rho'_{i+1} \end{align*} This shows that $\rho'$ is an outcome of $\sigma_\shortEve$. \end{proof} }\end{collect*} \subsection{Objectives of the deviator game} We now show how to transform equilibria notions into objectives of the deviator game. These objectives are defined so that winning strategies correspond to equilibria of the original game. First, we define an objective~$\Omega(C,A,\Goal)$ in the following lemma, such that a profile which ensures some quantitative goal $\Goal\subseteq \mathbb{R}$ in $\calG$ against coalition $C$ corresponds to a winning strategy in the deviator game. \begin{lemma}\label{lem:objective-omega} Let $C\subseteq \Agt$ be a coalition, $\sigma_\Agt$ be a strategy profile, $\Goal \subseteq \mathbb{R}$ and $A$ a player. We have that for every strategy $\sigma'_C$ for coalition $C$, $\payoff_A(\sigma_{-C},\sigma'_C) \in \Goal$ if, and only if, $\projstrat(\sigma_\Agt)$ is winning in $\devgame$ for objective $\Omega(C,A,\Goal) = \{ \rho \mid \devlimit(\rho) \subseteq C \Rightarrow \payoff_A(\projpath(\rho)) \in \Goal \}$. \end{lemma} \begin{proof} \fbox{$\Rightarrow$} Let $\rho$ be an outcome of $\sigma_\shortEve=\projstrat(\sigma_\Agt)$.
By Lem.~\ref{prop:correctness-deviator-game}, we have that $\devlimit(\rho) = \dev(\projpath(\rho),\sigma_\Agt)$. By Lem.~\ref{lem:deviator-path}, $\projpath(\rho)$ is the outcome of $(\sigma_{-\devlimit(\rho)},\sigma'_{\devlimit(\rho)})$ for some $\sigma'_{\devlimit(\rho)}$. If $\devlimit(\rho) \subseteq C$, then $\payoff_A(\projpath(\rho)) = \payoff_A(\sigma_{-C},\sigma_{C\setminus\devlimit(\rho)}, \sigma'_{\devlimit(\rho)}) = \payoff_A(\sigma_{-C},\sigma''_{C})$ where $\sigma''_A = \sigma'_A$ if $A \in \devlimit(\rho)$ and $\sigma_A$ otherwise. By hypothesis, this payoff belongs to $\Goal$. This holds for all outcomes~$\rho$ of $\sigma_\shortEve$, thus $\sigma_\shortEve$ is a winning strategy for $\Omega(C,A,\Goal)$. \medskip \fbox{$\Leftarrow$} Assume $\sigma_\shortEve = \projstrat(\sigma_\Agt)$ is a winning strategy in \devgame for $\Omega(C,A,\Goal)$. Let $\sigma'_C$ be a strategy for $C$ and $\rho$ the outcome of $(\sigma'_{C},\sigma_{-{C}})$. By Lem.~\ref{lem:deviator-path}, $\dev(\rho,\sigma_\Agt) \subseteq C$. By Lem.~\ref{prop:correctness-deviator-game}, $\rho'= (\rho_j,\dev(\rho_{\le j},\sigma_\Agt))_{j\in \N}$ is an outcome of $\sigma_\shortEve$. We have that $\devlimit(\rho') = \dev(\rho,\sigma_\Agt) \subseteq C$. Since $\sigma_\shortEve$ is winning and $\devlimit(\rho') \subseteq C$, we get $\payoff_A(\projun(\rho')) \in \Goal$. Since $\payoff_{A}(\projun(\rho')) = \payoff_{A}(\rho)$, this shows that for every strategy $\sigma'_C$, $\payoff_A(\sigma_{-C},\sigma'_C) \in \Goal$. \end{proof} This lemma makes it easy to characterize the different kinds of equilibria, using objectives in $\devgame$. For instance, we define a \newdef{resilience objective} where if there are more than $k$ deviators then \eve has nothing to do; if there are exactly $k$ deviators then she has to show that none of them gains anything; and if there are fewer than $k$ then no player at all should gain anything. This is because if a new player joins the coalition, its size remains at most $k$.
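The case analysis behind the resilience objective can be summarized in a few lines. The following is a small sketch of our own (the function name and the dictionary encoding of payoffs are not from the paper): given the limit set of deviators and the payoffs along a play, it decides membership in $\calRe(k,p)$.

```python
# Hedged sketch of the resilience objective Re(k, p).
# p and payoffs map agent names to real payoffs; dev_limit is the set of
# agents that deviated along the play.

def in_resilience_objective(k, p, dev_limit, payoffs):
    if len(dev_limit) > k:
        # More than k deviators: the objective is trivially satisfied.
        return True
    if len(dev_limit) == k:
        # Exactly k deviators: none of them may improve on p.
        return all(payoffs[a] <= p[a] for a in dev_limit)
    # Fewer than k deviators: no agent at all may improve on p.
    return all(payoffs[a] <= p[a] for a in payoffs)
```

The third branch reflects the remark above: with fewer than $k$ deviators, any single extra player joining the coalition still keeps its size at most $k$, so every player's payoff must be bounded by $p$.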
We have a similar characterization for immune and robust equilibria, and the proof of the following theorem is the object of Section~\ref{sec:proof}. \begin{theorem}\label{thm:dev-correct} Let \calG be a concurrent game, $\sigma_\Agt$ a strategy profile in \calG, $k$, $t$ integers, and $r$ a rational. \begin{itemize} \item The strategy profile $\sigma_\Agt$ is $k$-resilient if, and only if, strategy~$\projstrat(\sigma_\Agt)$ is winning in \devgame for the \emph{resilience objective} $\calRe(k,p)$ where $p = \payoff(\Out(\sigma_\Agt))$ is the payoff profile of $\sigma_\Agt$ and $\calRe(k,p)$ is defined by: \begin{align*} \calRe(k,p) = & \{ \rho \mid ~ |\devlimit(\rho)| > k \} \\ & \cup \{ \rho \mid ~ |\devlimit(\rho)| = k \land \forall A \in \devlimit(\rho).\ \payoff_{A}(\projpath(\rho)) \le p(A)\}\\ &\cup \{ \rho \mid ~ |\devlimit(\rho)| < k \land \forall A \in \Agt.\ \payoff_{A}(\projpath(\rho)) \le p(A) \} \end{align*} \item The strategy profile $\sigma_\Agt$ is $(t,r)$-immune if, and only if, strategy~$\projstrat(\sigma_\Agt)$ is winning for the \newdef{immunity objective} $\calI(t,r,p)$ where $p = \payoff(\Out(\sigma_\Agt))$ is the payoff profile of $\sigma_\Agt$ and $\calI(t,r,p)$ is defined by: \begin{align*} \calI(t,r,p) = & \{ \rho \mid |\devlimit(\rho)| > t \} \cup \{ \rho \mid ~ \forall A \in \Agt \setminus \devlimit(\rho).\ p(A) - r \le \payoff_{A}(\projun(\rho)) \} \end{align*} \item The strategy profile~$\sigma_\Agt$ is a $(k,t,r)$-robust profile in $\calG$ if, and only if, $\projstrat(\sigma_\Agt)$ is winning for the \emph{robustness objective} $\calR(k,t,r,p)= \calRe(k,p) \cap \calI(t,r,p)$ where $p = \payoff(\Out(\sigma_\Agt))$ is the payoff profile of $\sigma_\Agt$. \end{itemize} \end{theorem} \subsection{Proof of Thm.~\ref{thm:dev-correct}}\label{sec:proof} The proof of the theorem relies on the two following lemmas. The first one shows the correctness of the resilience objective.
The second lemma shows the correctness of the immunity objective; its proof follows the same ideas as the first and can be found in the appendix. \begin{lemma} \label{lem:obj-resilience} Let \calG be a concurrent game and $\sigma_\Agt$ a strategy profile in \calG. The strategy profile $\sigma_\Agt$ is $k$-resilient if, and only if, strategy~$\projstrat(\sigma_\Agt)$ is winning in \devgame for objective $\calRe(k,p)$ where $p = \payoff(\sigma_\Agt)$. \end{lemma} \begin{proof} By Lem.~\ref{lem:objective-omega}, $\sigma_\Agt$ is $k$-resilient if, and only if, for each coalition $C$ of size at most $k$, and each player $A$ in $C$, $\projstrat(\sigma_\Agt)$ is winning for $\Omega(C,A,\rbrack-\infty,\payoff_A(\sigma_\Agt)\rbrack)$. We will thus in fact show that for each coalition $C$ of size at most $k$, and each player $A$ in $C$, $\projstrat(\sigma_\Agt)$ is winning for $\Omega(C,A,\rbrack-\infty,\payoff_A(\sigma_\Agt)\rbrack)$ if, and only if, $\projstrat(\sigma_\Agt)$ is winning for $\calRe(k,p)$. \fbox{$\Rightarrow$} Let $\rho$ be an outcome of $\projstrat(\sigma_\Agt)$. \begin{itemize} \item If $|\devlimit(\rho)| > k$, then $\rho$ is in $\calRe(k,p)$ by definition. \item If $|\devlimit(\rho)| = k$, then for all $A\in \devlimit(\rho)$, $\payoff_A(\projpath(\rho)) \in \rbrack-\infty,p(A)\rbrack$ because $\projstrat(\sigma_\Agt)$ is winning for $\Omega(\devlimit(\rho),A,\rbrack-\infty,p(A)\rbrack)$. Therefore $\rho$ is in $\calRe(k,p)$. \item If $|\devlimit(\rho)| < k$, then for all $A\in \Agt$, $C= \devlimit(\rho)\cup \{A\}$ is a coalition of size at most $k$, and $\payoff_A(\projpath(\rho)) \in \rbrack-\infty,p(A)\rbrack$ because $\projstrat(\sigma_\Agt)$ is winning for $\Omega(C,A,\rbrack-\infty,p(A)\rbrack)$. Therefore $\rho$ is in $\calRe(k,p)$. \end{itemize} This holds for every outcome $\rho$ of $\projstrat(\sigma_\Agt)$ and shows that $\projstrat(\sigma_\Agt)$ is winning for $\calRe(k,p)$.
\medskip \fbox{$\Leftarrow$} We now show that $\projstrat(\sigma_\Agt)$ is winning for $\Omega(C,A,\rbrack-\infty,p(A)\rbrack)$ for each coalition $C$ of size at most $k$ and player $A$ in $C$. Let $\rho$ be an outcome of $\projstrat(\sigma_\Agt)$. By assumption, strategy $\projstrat(\sigma_\Agt)$ is winning for $\calRe(k,p)$, hence $\rho \in \calRe(k,p)$. We show that $\rho$ belongs to $\Omega(C,A,\rbrack -\infty, p(A)\rbrack)$: \begin{itemize} \item If $\devlimit(\rho) \not\subseteq C$, then $\rho \in \Omega(C,A,\rbrack -\infty, p(A)\rbrack)$ by definition. \item If $\devlimit(\rho) \subseteq C$ and $|\devlimit(\rho)| = k$, then $\devlimit(\rho) = C$. Since $\rho \in \calRe(k,p)$, for all $A\in C$, $\payoff_A(\rho) \le p(A)$ and therefore $\payoff_A(\rho) \in \rbrack -\infty, p(A) \rbrack$. Hence $\rho \in \Omega(C,A,\rbrack -\infty, p(A)\rbrack)$. \item If $\devlimit(\rho) \subseteq C$ and $|\devlimit(\rho)| < k$, then since $\rho \in \calRe(k,p)$, for all $A\in \Agt$, $\payoff_A(\rho) \le p(A)$. Therefore $\rho \in \Omega(C,A,\rbrack -\infty, p(A)\rbrack)$. \end{itemize} This holds for every outcome $\rho$ of $\projstrat(\sigma_\Agt)$ and shows it is winning for $\Omega(C,A,\rbrack-\infty,p(A)\rbrack)$ for each coalition $C$ of size at most $k$ and player $A$ in $C$, which shows that $\sigma_\Agt$ is $k$-resilient. \end{proof} \begin{lemma}\label{lem:obj-immunity} Let \calG be a concurrent game and $\sigma_\Agt$ a strategy profile in \calG. The strategy profile $\sigma_\Agt$ is $(t,r)$-immune if, and only if, strategy~$\projstrat(\sigma_\Agt)$ is winning for objective $\calI(t,r,p)$ where $p = \payoff(\sigma_\Agt)$. \end{lemma} \begin{proof} By Lem.~\ref{lem:objective-omega}, $\sigma_\Agt$ is $(t,r)$-immune if, and only if, for each coalition $C$ of size at most $t$, and each player $A$ not in $C$, $\projstrat(\sigma_\Agt)$ is winning for $\Omega(C,A,\lbrack \payoff_A(\sigma_\Agt) - r,+\infty\lbrack)$.
We will thus in fact show that for each coalition $C$ of size at most $t$, and each player $A$ not in $C$, $\projstrat(\sigma_\Agt)$ is winning for $\Omega(C,A,\lbrack \payoff_A(\sigma_\Agt) -r ,+\infty\lbrack)$ if, and only if, $\projstrat(\sigma_\Agt)$ is winning for $\calI(t,r,p)$. \fbox{$\Rightarrow$} Let $\rho$ be an outcome of $\projstrat(\sigma_\Agt)$. \begin{itemize} \item If $|\devlimit(\rho)| > t$, then $\rho$ is in $\calI(t,r,p)$ by definition. \item If $|\devlimit(\rho)| \le t$, then $C = \devlimit(\rho)$ is a coalition of size at most $t$. As a consequence, for all $A \not\in \devlimit(\rho)$, $\rho$ belongs to $\Omega(C,A,\lbrack p(A) -r , +\infty \lbrack)$. By definition of $\Omega$, we have $\payoff_A(\rho) \ge p(A) - r$. Thus $\rho$ is in $\calI(t,r,p)$. \end{itemize} \fbox{$\Leftarrow$} We now show that $\projstrat(\sigma_\Agt)$ is winning for $\Omega(C,A,\lbrack \payoff_A(\sigma_\Agt) -r ,+\infty\lbrack)$ for each coalition $C$ of size at most $t$ and player $A$ not in $C$. Let $\rho$ be an outcome of $\projstrat(\sigma_\Agt)$. By assumption, strategy $\projstrat(\sigma_\Agt)$ is winning for $\calI(t,r,p)$, hence $\rho \in \calI(t,r,p)$. We show that $\rho$ belongs to $\Omega(C,A,\lbrack p(A)-r, +\infty \lbrack)$: \begin{itemize} \item If $\devlimit(\rho) \not\subseteq C$, then $\rho \in \Omega(C,A,\lbrack p(A)-r, +\infty \lbrack)$ by definition. \item If $\devlimit(\rho) \subseteq C$, then since $\rho \in \calI(t,r,p)$, for all $A \not\in C$, $p(A) -r \le \payoff_A(\projpath(\rho))$. Therefore $\payoff_A(\rho) \in \lbrack p(A)-r, +\infty \lbrack$ and $\rho \in \Omega(C,A,\lbrack p(A)-r, +\infty \lbrack)$. \end{itemize} This holds for every outcome $\rho$ of $\projstrat(\sigma_\Agt)$ and shows it is winning for $\Omega(C,A,\lbrack p(A)-r, +\infty\lbrack)$ for each coalition $C$ of size at most $t$ and player $A$ not in $C$, which shows that $\sigma_\Agt$ is $(t,r)$-immune.
\end{proof} \begin{lemma} Let \calG be a concurrent game and $\sigma_\Agt$ a strategy profile in \calG. The strategy profile~$\sigma_\Agt$ is a $(k,t,r)$-robust profile in $\calG$ if, and only if, the associated strategy of \eve is winning for the objective $\calR(k,t,r,p) = \calRe(k,p) \cap \calI(t,r,p)$ where $p = \payoff(\sigma_\Agt)$. \end{lemma} \begin{proof} This is a simple consequence of Lem.~\ref{lem:obj-resilience} and Lem.~\ref{lem:obj-immunity}. Let $\sigma_\Agt$ be a $(k,t,r)$-robust strategy profile. It is $k$-resilient, so $\projstrat(\sigma_\Agt)$ is winning for the resilience objective. It is also $(t,r)$-immune, so $\projstrat(\sigma_\Agt)$ is winning for the immunity objective. Therefore any outcome of $\projstrat(\sigma_\Agt)$ is in the intersection, and $\projstrat(\sigma_\Agt)$ ensures the robustness objective. In the other direction, assume $\projstrat(\sigma_\Agt)$ wins the robustness objective. Then $\projstrat(\sigma_\Agt)$ wins both the resilience objective and the immunity objective. Using lemmas~\ref{lem:obj-resilience} and \ref{lem:obj-immunity}, $\sigma_\Agt$ is $k$-resilient and $(t,r)$-immune; it is therefore $(k,t,r)$-robust. \end{proof} \section{Reduction to multidimensional mean-payoff objectives}\label{sec:mean-payoff} We first show that the deviator game reduces the robustness problem to a winning strategy problem in multidimensional mean-payoff games. This can then be solved by requests to the polyhedron value problem of~\cite{BR15}, as we will show in this section. \subsection{Definition of the multidimensional objectives}\label{sec:def-multidim} \begin{definition} Let $\mathcal{G}$ be a two-player game, $v \colon \Stat \mapsto \mathbb{Z}^{d}$ a multidimensional weight function and $I,J \subseteq \lsem 1, d \rsem$\footnote{We write $\lsem i, j\rsem$ for the set of integers $\{ k \in \mathbb{Z} \mid i \le k \le j\}$.} a partition of $\lsem 1, d\rsem$ (i.e. $I \uplus J = \lsem 1, d \rsem$).
We say that \eve \emph{ensures threshold} $u\in \mathbb{R}^{d}$ if she has a strategy $\sigma_\exists$ such that every outcome $\rho$ of $\sigma_\exists$ satisfies: for all $i \in I$, $\MP_{v_i}(\rho) \ge u_i$ and for all $j \in J$, $\overline\MP_{v_{j}}(\rho) \ge u_{j}$, where \(\MPSup_{v_j}(\rho) = \limsup_{n \rightarrow \infty} \frac{1}{n} \sum_{0 \le k\le n} v_j(\rho_{k}). \) That is, for all dimensions~$i\in I$, the limit inferior of the average of $v_i$ is at least $u_i$ and for all dimensions $j\in J$ the limit superior of the average of $v_j$ is at least $u_j$. \end{definition} We consider two decision problems on these games: \begin{itemize} \item The \emph{value problem} asks, given $\langle \mathcal{G},v,I,J\rangle$ a game with multidimensional mean-payoff objectives and $u\in \mathbb{R}^d$, whether \eve can ensure $u$. \item The \emph{polyhedron value problem} asks, given $\langle \mathcal{G},v,I,J\rangle$ a game with multidimensional mean-payoff objectives and $(\lambda_1,\dots,\lambda_n)$ a tuple of linear inequations, whether there exists a threshold $u$ which \eve can ensure and which satisfies the inequation $\lambda_i$ for all $i$ in $\lsem 1, n \rsem$. We assume that all linear inequations are given by a tuple $(a_1,\dots,a_d,b)\in\mathbb{Q}^{d+1}$ and that a point $u\in \mathbb{R}^d$ satisfies it when $\sum_{i\in \lsem 1 , d \rsem} a_i \cdot u_i \ge b$. \end{itemize} These problems have been studied and the value problem was shown to be \co\NP-complete~\cite{velner12} while the polyhedron value problem is $\Sigma_2$\P-complete~\cite{BR15}. Our goal is now to reduce our robustness problem to a polyhedron value problem for some well chosen weights. \medskip In our case, the number $d$ of dimensions will be equal to $4 \cdot |\Agt|$. We then number players so that $\Agt = \{ A_1, \dots , A_{|\Agt|}\}$.
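The two mean-payoff variants just defined can be illustrated on finite data. This is a hedged sketch with names of our own (`running_averages`, `mean_payoff_periodic`): $\MP$ is the limit inferior and $\MPSup$ the limit superior of the running averages; on a finite prefix we can only inspect those averages, while on an ultimately periodic play $u \cdot v^\omega$ both values coincide with the average weight of the cycle $v$.

```python
from fractions import Fraction

def running_averages(weights):
    """The sequence (1/n) * (sum of the first n weights), for n = 1..len(weights)."""
    averages, total = [], 0
    for n, w in enumerate(weights, start=1):
        total += w
        averages.append(Fraction(total, n))
    return averages

def mean_payoff_periodic(prefix, cycle):
    """On the play prefix.cycle^omega, liminf and limsup of the running
    averages both equal the average weight of the cycle, so MP = MPSup."""
    return Fraction(sum(cycle), len(cycle))
```

Exact rational arithmetic is used here because mean-payoff thresholds in the reduction are compared against rational bounds such as $p(A_i) - r$.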
Let $W = \max\{ |w_{i}(s)| \mid A_i \in \Agt, s \in \Stat\}$ be the maximum constant occurring in the weights of the game; notice that for every player $A_i$ and play $\rho$, $ - W - 1 < w_{i}(\rho) \le W$. We fix parameters $k$, $t$ and define our weight function $v\colon \Stat \times 2^\Agt \mapsto \mathbb{Z}^{d}$. Let $i\in \lsem 1 ,|\Agt| \rsem$; the weights are given for $(s,D) \in \Stat \times 2^\Agt$ by: \begin{enumerate} \item if $|D|\le t$ and $A_i\not\in D$, then $v_i(s,D) = w_{A_i}(s)$; \item if $|D|> t$ or $A_i\in D$, then $v_i(s,D) = W$; \item \label{it:small-k} if $|D|<k$, then for all $A_i \in \Agt$, $v_{|\Agt|+i}(s,D) = - w_{A_i}(s)$; \item \label{it:equal-k} if $|D|=k$ and $A_i\in D$, then $v_{|\Agt|+i}(s,D) = - w_{A_i}(s)$; \item \label{it:greater-k} if $|D| > k$, or $|D| = k$ and $A_i \not\in D$, then $v_{|\Agt|+i}(s,D) = W$. \item \label{it:Deqvarnothing} if $D = \varnothing$ then $v_{2\cdot|\Agt|+i}(s,D) = w_{A_i}(s)= -v_{3\cdot|\Agt|+i}(s,D)$; \item \label{it:Dnevarnothing} if $D \ne \varnothing$ then $v_{2\cdot|\Agt|+i}(s,D) = W = v_{3\cdot|\Agt|+i}(s,D)$; \end{enumerate} We take $I = \lsem 1 , |\Agt|\rsem \cup \lsem 2\cdot |\Agt|+1, 3 \cdot |\Agt| \rsem$ and $J = \lsem |\Agt|+1, 2\cdot |\Agt|\rsem \cup \lsem 3 \cdot |\Agt|+1, 4\cdot |\Agt|\rsem$. Intuitively, the components $\lsem 1,|\Agt|\rsem$ are used for immunity, the components $\lsem |\Agt|+1, 2\cdot|\Agt|\rsem$ are used for resilience and components $\lsem 2\cdot |\Agt|+1, 4\cdot |\Agt|\rsem$ are used to constrain the payoff in case of no deviation. \subsection{Correctness of the objectives for robustness} Let $\calG$ be a concurrent game, $\rho$ a play of $\devgame$ and $p\in \mathbb{R}^\Agt$ a payoff vector. The following lemmas show the correctness of the weights we chose. We will then show that with these weights, we can obtain a multidimensional mean-payoff objective which is equivalent to the robustness objective~$\calR(k,t,r,p)$. \begin{lemma}\label{lem:mp-payoff} Let $\rho$ be a play.
It satisfies objective $\devlimit(\rho) = \varnothing \Rightarrow \MP_{A_i}(\rho) = p_i$ if, and only if, $\MP_{v_{2\cdot |\Agt|+i}}(\rho) \ge p(A_i)$ and $\MPSup_{v_{3\cdot |\Agt|+i}}(\rho) \ge -p(A_i)$. \end{lemma} \begin{proof} We distinguish two cases according to whether $\devlimit(\rho)$ is empty. \begin{itemize} \item If $\devlimit(\rho) \ne \varnothing$, the implication trivially holds, and after some point in the execution $\projdeux(\rho_i) \ne \varnothing$. By item~\ref{it:Dnevarnothing} of the definition of $v$, the average weight on dimensions $2\cdot|\Agt|+i$ and $3\cdot |\Agt|+i$ will tend to $W$, which is at least $p(A_i)$ and $-p(A_i)$. Therefore the equivalence holds. \item If $\devlimit(\rho) = \varnothing$, then along the whole run the $D$ component is empty. By item~\ref{it:Deqvarnothing} of the definition of $v$, $\MP_{v_{2\cdot |\Agt|+i}}(\rho) = \MP_{w_{A_i}} (\rho)$ and $\MPSup_{v_{3\cdot |\Agt|+i}}(\rho) = - \MP_{w_{A_i}} (\rho)$. Therefore $\MP_{A_i}(\rho) = p_i$ is equivalent to $\MP_{v_{2\cdot |\Agt|+i}}(\rho) \ge p(A_i)$ and $\MPSup_{v_{3\cdot |\Agt|+i}}(\rho) \ge -p(A_i)$. \end{itemize} \end{proof} \begin{lemma}\label{lem:mp-resilient} If $\rho$ is an outcome of $\projstrat(\sigma_\Agt)$ with $\payoff(\sigma_\Agt) = p$, then play~$\rho$ satisfies objective $\calRe(k,p)$ if, and only if, for all agents $A_i$, $\MPSup_{v_{|\Agt|+i}}(\rho) \ge -p(A_i)$. \end{lemma} \begin{proof} First notice the following equivalence: \begin{align*} \payoff_{A_i}(\rho) \le p(A_i) & \Leftrightarrow \liminf \frac{w_i(\rho_{\le n}) }{n} \le p(A_i) \Leftrightarrow \limsup -\frac{w_i(\rho_{\le n}) }{n} \ge -p(A_i) \\ \end{align*} \fbox{$\Rightarrow$} Let $A_i$ be a player, and assume $\rho \in \calRe(k,p)$. We distinguish three cases based on the size of $\devlimit(\rho)$: \begin{itemize} \item If $|\devlimit(\rho)| < k$ then for every index $j$, $|\dev(\rho_{\le j})|<k$.
Therefore $v_{|\Agt|+i}(\rho_{j}) = - w_{A_i}(\rho_{j})$ (item \ref{it:small-k} of the definition). Then as $\rho$ is in $\calRe(k,p)$, $\payoff_{A_i}(\rho) \le p(A_i)$ and therefore $\overline\MP_{v_{|\Agt|+i}}(\rho) \ge -p(A_i)$. \item If $|\devlimit(\rho)|=k$, then we distinguish two cases: \begin{itemize} \item If $A_i\not\in \devlimit(\rho)$, then there is a $j$ such that for all $j'\ge j$, $|\dev(\rho_{\le j'})| = k$ and $A_i \not\in \dev(\rho_{\le j'})$. Therefore for all $j' \ge j$ we have that $v_{|\Agt|+i}(\rho_{j'}) = W$ (item \ref{it:greater-k} of the definition). Since $p(A_i) \ge -W$, $\overline\MP_{v_{|\Agt|+i}}(\rho) \ge -p(A_i)$. \item Otherwise $A_i\in \devlimit(\rho)$; then there is a $j$ such that for all $j'\ge j$, $A_i \in \dev(\rho_{\le j'})$. Therefore for all $j' \ge j$ we have that $v_{|\Agt|+i}(\rho_{j'}) = - w_{A_i}(\rho_{j'})$ (item \ref{it:equal-k} of the definition). Then $\overline\MP_{v_{|\Agt|+i}}(\rho) = \limsup - \frac{w_i(\rho_{\le n})}{n}$. Then as $\rho$ satisfies $\calRe(k,p)$, $\payoff_{A_i}(\rho) \le p(A_i)$ and therefore, using the equivalence at the beginning of this proof, $\overline\MP_{v_{|\Agt|+i}}(\rho) \ge -p(A_i)$. \end{itemize} \item Otherwise $|\devlimit(\rho)| > k$. Then, there is some index~$j$ such that either $|\dev(\rho_{\le j})| > k$ or $|\dev(\rho_{\le j})| = k \land A_i\not\in \dev(\rho_{\le j})$. Then, by monotonicity of $\dev$ along $\rho$, for all $j'\ge j$, $v_{|\Agt|+i}(\rho_{j'}) = W$ (item \ref{it:greater-k} of the definition). Since $p(A_i) \ge -W$, $\overline\MP_{v_{|\Agt|+i}}(\rho) \ge -p(A_i)$. \end{itemize} \fbox{$\Leftarrow$} Now assume that for every player $A_i$, $\MPSup_{v_{|\Agt|+i}}(\rho) \ge - p(A_i)$. \begin{itemize} \item If $|\devlimit(\rho)|<k$, then for all $i$ and $j$ we have that $v_{|\Agt|+i}(\rho_j) = - w_{A_i}(\rho_j)$, and thus $\overline\MP_{v_{|\Agt|+i}}(\rho) = \limsup - \frac{w_i(\rho_{\le n})}{n}$.
Thus, using the equivalence at the beginning of this proof, $\payoff_{A_i}(\rho) \le p(A_i)$ for all $A_i$. \item If $|\devlimit(\rho)|=k$, let $A_i$ be a player in $\devlimit(\rho)$. Then for all $j$, either $|\dev(\rho_{\le j})|<k$ or $A_i\in \dev(\rho_{\le j})$. Therefore for all $j$ we have that $v_{|\Agt|+i}(\rho_j) = - w_{A_i}(\rho_j)$, and thus $\overline\MP_{v_{|\Agt|+i}}(\rho) = \limsup - \frac{w_i(\rho_{\le n})}{n}$. Thus, using the equivalence at the beginning of this proof, $\payoff_{A_i}(\rho) \le p(A_i)$. This being true for every player in $\devlimit(\rho)$ shows that $\rho$ belongs to $\calRe(k,p)$. \item Otherwise $|\devlimit(\rho)| > k$ and then $\rho \in \calRe(k,p)$ by definition of $\calRe(k,p)$. \end{itemize} \end{proof} We now show the immunity part. \begin{lemma}\label{lem:mp-immune} If $\rho$ is an outcome of $\projstrat(\sigma_\Agt)$ with $\payoff(\sigma_\Agt) = p$, then play~$\rho$ satisfies objective $\calI(t,r,p)$ if, and only if, for all agents $A_i$, $\MP_{v_i}(\rho) \ge p(A_i)-r$. \end{lemma} \begin{proof} \fbox{$\Rightarrow$} Let $A_i$ be a player and assume $\rho\in \calI(t,r,p)$. We distinguish two cases: \begin{itemize} \item If $|\devlimit(\rho)|\le t \land A_i \not\in \devlimit(\rho)$, then for every index $j$, $v_i(\rho_{j}) = w_{A_i}(\rho_j)$. Therefore $\MP_{v_i}(\rho) = \payoff_{A_i}(\projun(\rho))$. Then as $\rho$ satisfies $\calI(t,r,p)$, $p(A_i) -r \le \payoff_{A_i}(\projun(\rho)) = \MP_{v_i}(\rho)$. \item Otherwise there is some index~$j$ such that either $|\dev(\rho_{\le j})| > t$ or $A_i \in \dev(\rho_{\le j})$. Then, by monotonicity of $\dev$ along $\rho$, for all $j'\ge j$, $v_i(\rho_{j'}) = W \ge p(A_i)$. Hence $\MP_{v_i}(\rho) \ge p(A_i)$. \end{itemize} \fbox{$\Leftarrow$} Assume that for every player~$A_i$, $\MP_{v_i}(\rho) \ge p(A_i) - r$. \begin{itemize} \item If $|\devlimit(\rho)|\le t$, let $A_i\not\in \devlimit(\rho)$; then for all $j$, $|\dev(\rho_{\le j})|\le t$ and $A_i\not\in \dev(\rho_{\le j})$.
Therefore for all $j$ we have that $v_i(\rho_j) = w_{A_i}(\rho_j)$, and thus $\MP_{v_i}(\rho) = \payoff_{A_i}(\projun(\rho))$; hence $\MP_{v_i}(\rho) \ge p(A_i) - r$ gives $p(A_i) - r \le \payoff_{A_i}(\projun(\rho))$. This shows that $\rho$ belongs to $\calI(t,r,p)$. \item Otherwise $|\devlimit(\rho)| > t$ and $\rho$ belongs to $\calI(t,r,p)$ by definition of $\calI(t,r,p)$. \end{itemize} \end{proof} We now combine the two preceding results to deal with the robustness objective. \begin{lemma}\mylabel{Lem}{lem:multidim} If~$\rho$ is an outcome of $\projstrat(\sigma_\Agt)$ with $\payoff(\sigma_\Agt) = p$, then play $\rho$ satisfies objective $\calR(k,t,r,p)$ if, and only if, for all agents $A_i$, $\MP_{v_i}(\rho) \ge p(A_i)-r$ and $\MPSup_{v_{|\Agt| + i}}(\rho) \ge -p(A_i)$. \end{lemma} \begin{proof} \begin{align*} \rho \in \calR(k,t,r,p) \Leftrightarrow & \rho \text{ satisfies }\calRe(k,p)\text{ and }\calI(t,r,p) \text{ ~~~(By definition of $\calR(k,t,r,p)$)} \\ \Leftrightarrow & \rho \in \calRe(k,p)\text{ and } \forall A_i\in \Agt.\ \MP_{v_i}(\rho) \ge p(A_i)-r \text{ ~~(By Lem.~\ref{lem:mp-immune}) } \\ \Leftrightarrow & \forall A_i\in \Agt.\ \MPSup_{v_{|\Agt| + i}}(\rho) \ge -p(A_i) \\ & \text{ and } \forall A_i\in \Agt.\ \MP_{v_i}(\rho) \ge p(A_i)-r \text{ ~~~(By Lem.~\ref{lem:mp-resilient}) } \end{align*} \end{proof} Putting together these lemmas and the correspondence between the deviator game and robust equilibria of Thm.~\ref{thm:dev-correct}, we obtain the following lemma. \begin{lemma}\mylabel{Lem}{lem:correct-mean-robust} Let \calG be a concurrent game with mean-payoff objectives.
There is a $(k,t,r)$-robust equilibrium in $\calG$ if, and only if, for the multidimensional mean-payoff objective given by $v$, $I = \lsem 1 , |\Agt|\rsem \cup \lsem 2\cdot |\Agt|+1, 3 \cdot |\Agt| \rsem$ and $J = \lsem |\Agt|+1, 2\cdot |\Agt|\rsem \cup \lsem 3 \cdot |\Agt|+1, 4\cdot |\Agt|\rsem$, there is a payoff vector $p$ such that \eve can ensure threshold $u$ in $\devgame$, where for all $i \in \lsem 1,|\Agt|\rsem$, $u_{i} = p(A_i) -r$, $u_{|\Agt|+i} = -p(A_i)$, $u_{2\cdot |\Agt|+i} = p(A_i)$, and $u_{3\cdot |\Agt|+i} = -p(A_i)$. \end{lemma} \begin{proof} \fbox{$\Rightarrow$} Let $\sigma_\Agt$ be a robust equilibrium; using Thm.~\ref{thm:dev-correct}, $\projstrat(\sigma_\Agt)$ is a strategy of \eve in $\devgame$ which ensures $\calR(k,t,r,p)$ where $p = \payoff(\Out(\sigma_\Agt))$. Let $\rho$ be an outcome of $\projstrat(\sigma_\Agt)$ in $\devgame$. We will show that it is above the threshold~$u$ in all dimensions. We first show that $\devlimit(\rho) = \varnothing \implies \MP_{A_i}(\rho) = p_i$. If $\devlimit(\rho) \ne \varnothing$, this is trivial. Otherwise $\devlimit(\rho) = \varnothing$, and by Lem.~\ref{prop:correctness-deviator-game}, $\dev(\projpath(\rho),\sigma_\Agt) = \devlimit(\rho) = \varnothing$. Then by Lem.~\ref{lem:deviator-path}, $\projpath(\rho)$ is the outcome of $\sigma_\Agt$, thus $\MP_{A_i}(\rho) = \payoff_{i}(\sigma_\Agt) = p_i$ and the implication holds. Then by Lem.~\ref{lem:mp-payoff}, we have that $\MP_{v_{2\cdot |\Agt|+i}}(\rho) \ge p(A_i)$ and $\MPSup_{v_{3\cdot |\Agt|+i}}(\rho) \ge -p(A_i)$. This shows we ensure the correct thresholds on dimensions in $\lsem 2 \cdot |\Agt|+1, 4\cdot |\Agt|\rsem$. Now, by Lem.~\ref{lem:multidim}, for all agents $A_i$, $\MP_{v_i}(\rho) \ge p(A_i)-r$ and $\MPSup_{v_{|\Agt| + i}}(\rho) \ge -p(A_i)$. This shows we ensure the correct thresholds on dimensions in $\lsem 1, 2\cdot |\Agt|\rsem$.
\fbox{$\Leftarrow$} In the other direction, let $p$ be a payoff vector such that there exists a strategy $\sigma_\exists$ in $\devgame$ that ensures the threshold $u$. We define a strategy profile $\sigma_\Agt$ by induction, given a history $h$ in $\calG$: \begin{itemize} \item if $|h| = 1$, then $h' = (h_0,\varnothing)$; \item otherwise, assuming $\sigma_\Agt$ has already been defined for histories shorter than $h$, we let $h'= (h_0,\varnothing) \cdot (\sigma_\Pl(h_{\le 0}), \act_0(h)) \cdot \left((h_i,\dev(h_{\le i},\sigma_\Agt)) \cdot (\sigma_\Pl(h_{\le i}), \act_i(h))\right)_{0 < i < |h|-1} \cdot (h,\dev(h,\sigma_\Agt))$. Note the similarity with the second point of Lem.~\ref{prop:correctness-deviator-game} and the fact that $\projpath(h') = h$. \end{itemize} We then set $\sigma_\Agt(h) = \sigma_\exists(h')$. We show that $\payoff(\Out(\sigma_\Agt)) = p$. Consider the strategy~$\sigma_\forall$ of \adam in $\devgame$ that always plays the same move as \eve; the outcome $\rho = \Out_\devgame(\sigma_\exists,\sigma_\forall)$ is such that $\devlimit(\rho)= \varnothing$. Since $\sigma_\exists$ ensures the threshold $u$, using Lem.~\ref{lem:mp-payoff}, for all agents $A_i$, $\MP_{A_i}(\rho) = p(A_i)$. We now show by induction that $\projpath(\rho)$ is compatible with $\sigma_\Agt$. Let $i\in \mathbb{N}$ and assume that the property holds for prefixes of $\projpath(\rho)$ of length less than~$i$. We have that $\projact(\rho_i) = \sigma_\Agt(\projpath(\rho)_{\le i})$ because \adam plays the same move as \eve on this path. We also have that $\projun(\rho_{i+1}) = \Tab(\projun(\rho_i),\projact(\rho_i)) = \Tab(\projun(\rho_i),\sigma_\Agt(\projpath(\rho)_{\le i}))$ by construction of $\devgame$, and therefore $\projpath(\rho)_{\le i+1}$ is compatible with $\sigma_\Agt$. Then, as $\projpath(\rho)$ is the outcome of $\sigma_\Agt$, we have that $\payoff(\Out(\sigma_\Agt)) = p$. Let $\rho$ be an outcome of $\sigma_\exists$.
By Lem.~\ref{lem:mp-immune}, we have that $\rho$ satisfies the objective $\calI(t,r,p)$, and by Lem.~\ref{lem:mp-resilient} it satisfies $\calRe(k,p)$. Using Thm.~\ref{thm:dev-correct}, the strategy profile $\sigma_\Agt$ is a $(k,t,r)$-robust equilibrium in $\calG$ with payoff $p$. \end{proof} \subsection{Formulation of the robustness problem as a polyhedron value problem} From the previous lemma, we can deduce an algorithm which works by querying the polyhedron value problem. Given a game $\calG$ and parameters $k,t,r$, we ask whether there exists a payoff $u$ that \eve can ensure in the game $\devgame$ with the multidimensional mean-payoff objective given by $v$, $I$, $J$, and such that for all $i \in \lsem 1 , |\Agt|\rsem$, $u_i + r = - u_{|\Agt|+i} = u_{2 \cdot |\Agt| + i} = - u_{3\cdot |\Agt|+i}$. As we will show in Thm.~\ref{thm:pspace-membership} thanks to Lem.~\ref{lem:correct-mean-robust}, the answer to this question is yes if, and only if, there is a $(k,t,r)$-robust equilibrium. From the point of view of complexity, however, the deviator game on which we perform the query can be of exponential size compared to the original game. To describe more precisely the complexity of the problem, we will use the following property, which states that given a query, we can find solutions that have a small representation. \begin{lemma}\label{lem:small-witness} If there is a solution to the polyhedron value problem in $\devgame$ then there is one whose encoding is of polynomial size with respect to $\calG$ and the polyhedron given as input.
\end{lemma} \begin{proof} Applying the bound of \cite[Thm.~22]{cav15-long} to the deviator game shows that if there is a solution, then there is one of size bounded by $d \cdot P_1(\max\{\size{a_i,b_i} \mid 1 \le i \le k\}, P_6(\size{W}, \size{\Stat \times 2^\Agt}, d), d)$ where $\size{x}$ represents the size of the encoding of the object $x$, $P_1$ and $P_6$ are two polynomial functions, $(a_i,b_i)_{1\le i \le k}$ are the inequations defining the polyhedron, $W$ is the maximal constant occurring in the weights, and $d$ is the number of dimensions, which in our case is equal to $4 \cdot |\Agt|$. The size $\size{\Stat \times 2^\Agt}$ is bounded by $\size{\Stat} + |\Agt|$ (using one bit for each agent in our encoding). Thus the global bound is polynomial with respect to the game $\calG$. \end{proof} What remains to be proved to adapt our algorithm is that queries for the value problem in the deviator game can be done in space polynomial with respect to the original game. This is the goal of the next section, which considers small parts of the deviator game called fixed coalition games. \section{Fixed coalition game}\label{sec:fixed-coalition} Although the deviator game may be of exponential size, it presents a particular structure. As the set of deviators only increases during any run, the game can be seen as the product of the original game with a directed acyclic graph (DAG). The nodes of this DAG correspond to the possible sets of deviators; the DAG is of exponential size but has polynomial degree and depth. We exploit this structure to obtain a polynomial space algorithm for the value problem and thus also for the polyhedron value problem and the robustness problem. The idea is to compute winning states in one component at a time, and to recursively call the procedure for states that are successors of the current component. We will therefore consider a different game for each component. We now present the details of the construction.
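The traversal just sketched can be pictured as a recursion over deviator sets: each call solves one component after recursively deciding the components reachable from it, so the call stack never holds more than $|\Agt|$ frames. The Python sketch below illustrates this control flow only; the function \texttt{solve\_component} is a made-up stand-in for the actual per-component mean-payoff value query, and all names are invented for the illustration.

```python
def solve_component(deviators, winning_successors):
    # Stand-in for the per-component value query (in the paper this is a
    # multidimensional mean-payoff query on the fixed coalition game).
    # Toy rule: winning iff fewer than 2 deviators or some winning successor.
    return len(deviators) < 2 or any(winning_successors.values())

def can_ensure(deviators, agents):
    """Recurse over strictly larger deviator sets (a DAG of depth <= |agents|),
    solving one component at a time."""
    succ = {}
    for a in agents - deviators:  # each successor component adds one deviator
        d2 = frozenset(deviators | {a})
        succ[d2] = can_ensure(d2, agents)
    return solve_component(deviators, succ)

agents = frozenset({"A1", "A2", "A3"})
print(can_ensure(frozenset(), agents))  # prints True
```

Note that the recursion recomputes overlapping components instead of memoizing all of them; this is what keeps the space usage (though not the running time) polynomial.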
For a fixed set of deviators $D$, the possible successors of states of the component $\Stat \times D$ are the states in: \[ \Succ(D) = \{ \Tab_\calD((s,D), (m_\Agt,m_\Agt')) \mid s \in \Stat, m_\Agt,m'_\Agt\in \Mov(s) \} ~\setminus~ \Stat \times D.\] Notice that $\Succ(D)$ is of size bounded by $|\Stat| \times |\Tab|$, hence of polynomial size. Let $u$ be a payoff threshold; we want to know whether \eve can ensure $u$ in $\devgame$, for the multi-dimensional objective defined in Section~\ref{sec:def-multidim}. A winning path~$\rho$ from a state in $\Stat \times D$ is either: \begin{inparaenum}[1)] \item such that $\devlimit(\rho)= D$; \item or it reaches a state in $\Succ(D)$ and follows a winning path from there. \end{inparaenum} Assume we have computed all the states in $\Succ(D)$ which are winning. We can stop the game as soon as $\Succ(D)$ is reached, and declare \eve the winner if the state reached is a winning state of $\devgame$. This process can be seen as a game~$\mathcal{F}(D,u)$, called the \newdef{fixed coalition game}. In this game the states are those of $(\Stat\times D) \cup \Succ(D)$; transitions are the same as in $\devgame$ on the states of $\Stat \times D$, and the states of $\Succ(D)$ have only self loops. The winning condition is identical to $\calR(k,t,r,p)$ for the plays that never leave $(\Stat\times D)$; a play that reaches some $(s',D') \in\Succ(D)$ is considered winning if \eve has a winning strategy from $(s',D')$ in $\devgame$, and losing otherwise. In the fixed coalition game we keep the weights previously defined for the states of $\Stat\times D$, and fix the weights for the states that are not in the $D$ component by giving a high payoff to the winning states and a low one to the losing ones. We define a multidimensional weight function $v^f$ on $\mathcal{F}(D,u)$ by: \begin{itemize} \item for all $s \in \Stat$, and all $i \in \lsem 1 , 4\cdot |\Agt|\rsem$, $v^f_i(s,D) = v_i(s,D)$.
\item if $(s,D') \in \Succ(D)$ and \eve can ensure $u$ from $(s,D')$, then for all $i \in \lsem 1 , 4 \cdot |\Agt|\rsem$, $v^f_i(s,D') = W$. \item if $(s,D') \in \Succ(D)$ and \eve cannot ensure $u$ from $(s,D')$, then for all $i \in \lsem 1 , 4 \cdot |\Agt|\rsem$, $v^f_i(s,D') = -W-1$. \end{itemize} \begin{lemma}\label{lem:fixed-coalition} \Eve can ensure payoff $u \in \lsem -W, W\rsem^d$ in $\devgame$ from $(s,D)$ if, and only if, she can ensure $u$ in the fixed coalition game $\mathcal{F}(D,u)$ from $(s,D)$. \end{lemma} \begin{proof} \fbox{$\Rightarrow$} Let $\sigma_\exists$ be a strategy which ensures $u$ in $\devgame$ from $(s,D)$. If we apply this strategy in $\mathcal{F}$, any of its outcomes $\rho$ from $(s,D)$ will either stay in the $D$ component and correspond to an outcome of $\sigma_\exists$ in $\devgame$, or reach a state $(s',D')$ in $\Succ(D)$. Since \eve can ensure the payoff $u$ in $\devgame$, and $(s',D')$ is reached by one of its outcomes, she can do so from $(s',D')$. Therefore $(s',D')$ is a state where the weights are maximal (equal to $W$ on all dimensions) and the outcome is winning in $\mathcal{F}(D,u)$. \fbox{$\Leftarrow$} Let $\sigma_\exists$ be a strategy that ensures $u$ in $\mathcal{F}(D,u)$. Every outcome of this strategy that gets out of the $D$ component reaches a state $(s',D')\in \Succ(D)$ where the weights are at least $u$. These states cannot be losing, since otherwise the weights on every dimension $i$ would be equal to $-W-1$, which is smaller than $u_i$. This means that these states are winning, and to each state $(s',D')$ with $D' \ne D$ that is reached by an outcome of $\sigma_\exists$ we can associate a strategy $\sigma^{s',D'}_\Eve$ that is winning from $(s',D')$. We consider the strategy $\sigma'_\exists$ that plays according to $\sigma_\exists$ as long as we stay in the $D$ component and according to $\sigma^{s',D'}_\Eve$ once we reach another component, where $(s',D')$ is the first state outside the $D$ component that was reached.
The construction is such that the strategy $\sigma'_\exists$ ensures $u$ in $\devgame$: let $\rho$ be one of its outcomes; if $\rho$ stays in the $D$ component, it is also an outcome of $\sigma_\exists$ with the same payoff, which is above $u$; otherwise $\rho$ reaches a state $(s',D')$ in $\Succ(D)$ and from this point \eve follows a strategy that ensures threshold $u$. \end{proof} Using this correspondence, we deduce a polynomial space algorithm to check that \eve can ensure a given value in the deviator game, and thus a polynomial space algorithm for the robustness problem. \begin{theorem}\mylabel{Thm}{thm:pspace-membership} There is a polynomial space algorithm that, given a concurrent game $\calG$ and parameters $(k,t,r)$, decides whether there is a $(k,t,r)$-robust equilibrium. \end{theorem} \begin{proof} The first part of the proof is to show that there is a polynomial space algorithm to solve the value problem in $\devgame$. We consider a threshold $u$ and a state $(s,D)$. In the fixed coalition game~$\mathcal{F}(D,u)$, for each $(s',D') \in \Succ(D)$, we can compute whether it is winning by recursive calls. Once the weights for all $(s',D') \in \Succ(D)$ have been computed for $\mathcal{F}(D,u)$, we can solve the value problem in $\mathcal{F}(D,u)$. Thanks to Lem.~\ref{lem:fixed-coalition}, the answer to the value problem in this game is yes exactly when \eve can ensure $u$ from $(s,D)$ in $\devgame$. There is a \co\NP~algorithm~\cite{velner12} to check the value problem in a given game, and therefore there is also an algorithm that uses polynomial space. The size of the stack of recursive calls is bounded by $|\Agt|$, so the global algorithm uses polynomial space. We now use this to show that there is a polynomial space algorithm for the polyhedron value problem in $\devgame$. We showed in Lem.~\ref{lem:small-witness} that if the polyhedron value problem has a solution, then there is a threshold $u$ of polynomial size that is a witness of this property.
We can enumerate all the thresholds that satisfy the size bound in polynomial space. We can then test that each such threshold satisfies the given linear inequations, and that the algorithm for the value problem answers yes on this input, in polynomial space thanks to the previous algorithm. If this is the case for one of the thresholds, then we answer yes for the polyhedron value problem. The correctness of this procedure holds thanks to Lem.~\ref{lem:small-witness}. We now use this to show that there is a polynomial space algorithm for the robustness problem. Given a game $\calG$ and parameters $(k,t,r)$, we define a tuple of linear equations, for all $i \in \lsem 1 , |\Agt|\rsem$: \(x_{2\cdot |\Agt|+i} = x_i + r \land x_{2\cdot |\Agt|+i} = - x_{|\Agt|+i} \land x_{2\cdot |\Agt|+i} = - x_{3 \cdot |\Agt|+i} \) (each equation can be expressed by two inequations). Thanks to Lem.~\ref{lem:correct-mean-robust}, there is a payoff that satisfies these constraints and that \eve can ensure in $\devgame$ if, and only if, there is a $(k,t,r)$-robust equilibrium. Querying the algorithm we described for the polyhedron value problem in $\devgame$ with this system of inequations then gives us the answer to the robustness problem. \end{proof} \section{Hardness}\label{sec:hardness} In this section, we show a matching lower bound for the robustness problem. The lower bound holds for payoff functions given by simple reachability objectives. These are payoff functions where in some terminal states the payoff is $1$, and if the run does not end in one of these states, the payoff is $0$. This can be seen as a restriction of mean-payoff, where the weights are $0$ everywhere except in some terminal states where they can be $1$. \begin{theorem} The robustness problem is \PSPACE-complete.
\end{theorem} Note that we already proved \PSPACE-membership in the preceding section (Thm.~\ref{thm:pspace-membership}) and that hardness holds even when every player has a simple reachability objective (weights are either $0$ or $1$, and can be $1$ only on terminal states). \begin{proof} We encode \QSAT formulas with $n$ variables into a game with $2\cdot n+2$ players, such that the formula is valid if, and only if, there is an $(n+1)$-resilient equilibrium. We can assume that we are given a formula of the form $\phi = \forall x_1. \exists x_2.\ \forall x_3 \cdots \exists x_n.\ C_1 \land \cdots \land C_k$, where each $C_j$ is of the form $\ell_{1,j} \lor \ell_{2,j} \lor \ell_{3,j}$ and each $\ell_{i,j}$ is a literal (i.e. $x_m$ or $\lnot x_m$ for some $m$). We define the game $\calG_\phi$ as illustrated by an example in \figurename~\ref{fig:hardness}. It has a player $A_m$ for each positive literal $x_m$, and a player $B_m$ for each negative literal $\lnot x_m$. We add two extra players \eve and \adam. \eve makes the choices for the existential quantifications and \adam for the universal ones. When they choose a literal, the corresponding player can either go to a sink state $\bot$ or continue the game to the next quantification. Once a literal has been chosen for all the variables, \eve needs to choose a literal for each clause. The objective for \eve and the literal players is to reach $\bot$. The objective for \adam is to reach $\top$. We ask whether there is an $(n+1)$-resilient equilibrium. If the outcome is going to the state winning for \adam, it is possible for a player $A_i$ to change its strategy and go to $\bot$, thus improving its payoff. Therefore an $(n+1)$-resilient equilibrium is necessarily losing for \adam and winning for all the others.
To a history $h=\adam_1 \cdot X_1\cdot \eve_2 \cdot X_2 \cdot \adam_3 \cdots \eve_m \cdot X_m$ with $X_i \in \{ A_i,B_i\}$, we associate a valuation $v_h$, such that $v_h(x_i) = \true$ if $X_i = B_i$ and $v_h(x_i) = \false$ if $X_i = A_i$. \medskip \fbox{Validity $\Rightarrow$ equilibrium.} Assume that $\phi$ is valid; we will show that there is an $(n+1)$-resilient equilibrium. We define a strategy of \eve such that if $v_h$ makes $\exists x_m.\ \forall x_{m+1} \cdots \exists x_{n}.\ C_1 \land \cdots \land C_k$ valid, then $\sigma_\exists(h) = X_{m}$ such that $v_{h\cdot X_m}$ makes $\forall x_{m+1} \cdots \exists x_{n}.\ C_1 \land \cdots \land C_k$ valid. As $\phi$ is valid, we know that for every outcome~$h$ of $\sigma_\exists$ of the form $\adam_1 \cdot X_1\cdots \eve_n \cdot X_n$, the valuation $v_h$ makes $C_1 \land \cdots \land C_k$ valid. Then from $X_n$, \eve can choose for each clause a state $Y$ that is different from all $X_1 \dots X_n$. We also fix the strategy of all players $A_i$ and $B_i$ and \adam to go to the state $\bot$. This defines a strategy profile that we will write $\sigma_\Agt$. Consider a strategy profile $\sigma'_\Agt$ where at most $(n+1)$ strategies are different from the ones in $\sigma_\Agt$. Assume $\sigma'_\Agt$ reaches $\eve_m$. We know that in $\sigma'_\Agt$ at least $n+1$ strategies are different from the ones in $\sigma_\Agt$, and $\{ A \in \Agt \mid \sigma'_A \ne \sigma_A\} = \{ \adam , X_1 , \dots , X_m\}$. Then, by the choice of the strategy for \eve, the states that are visited afterwards are controlled by players that are different from $X_1,\dots,X_m$. Thus the run ends in $\bot$. \medskip \fbox{Equilibrium $\implies$ validity.} Assume that $\sigma_\shortEve$ is part of an $(n+1)$-resilient equilibrium; we will show that $\phi$ is valid.
Given a partial valuation $v_m \colon \{ x_1 , \dots , x_m\} \mapsto \{ \true, \false\}$, we define the function $f(v_m)$ such that: \[f(v_m) \Leftrightarrow \sigma_\exists(\adam_1 \cdot X_1 \cdots \adam_m \cdot X_m) = B_{m+1}.\] We will show that every valuation $v$ such that $v(x_{2k}) = f(v_{|2k-1})$ makes the formula $C_1 \land \cdots \land C_k$ valid, which shows that the formula~$\phi$ is valid. For every such valuation $v$, we can define strategies for \adam and the players $X_i$, where $X_i = A_i$ if $v(x_i) = \false$ and $X_i = B_i$ otherwise, such that, keeping all other strategies as in $\sigma_\Agt$, the state $\eve_m$ is reached. Then, if we see a state belonging to one of the $X_i$, we can make the strategy go to the $\top$ state. Since the profile is $(n+1)$-resilient, this is impossible, which shows that $\sigma_\eve$ chooses for each clause a literal $\ell$ such that $v(\ell) = \true$. Therefore $v$ makes the formula $C_1 \land \cdots \land C_k$ valid. \begin{figure}[thb] \centering{ \begin{tikzpicture}[scale=1] \everymath{\scriptstyle} \draw (0,0) node[draw,rounded corners=2mm] (I1) {$\Adam$}; \draw (1,1) node[draw,rounded corners=2mm] (A1) {$A_1$}; \draw (1,-1) node[draw,rounded corners=2mm] (B1) {$B_1$}; \draw (1,0) node[draw,rounded corners=2mm,dashed] (C1) {$\bot$}; \draw (2,0) node[draw,rounded corners=2mm] (I2) {$\Eve$}; \draw (3,1) node[draw,rounded corners=2mm] (A2) {$A_2$}; \draw (3,-1) node[draw,rounded corners=2mm] (B2) {$B_2$}; \draw (3,0) node[draw,rounded corners=2mm,dashed] (C2) {$\bot$}; \draw (4,0) node[draw,rounded corners=2mm] (I3) {$\Adam$}; \draw (5,1) node[draw,rounded corners=2mm] (A3) {$A_3$}; \draw (5,-1) node[draw,rounded corners=2mm] (B3) {$B_3$}; \draw (5,0) node[draw,rounded corners=2mm,dashed] (C3) {$\bot$}; \draw (6,0) node[draw,rounded corners=2mm] (I4) {$\Eve$}; \draw (7,1) node[draw,rounded corners=2mm] (A4) {$A_4$}; \draw (7,-1) node[draw,rounded corners=2mm] (B4) {$B_4$}; \draw (7,0) node[draw,rounded
corners=2mm,dashed] (C4) {$\bot$}; \draw (8,0) node[draw,rounded corners=2mm] (I5) {$\Eve$}; \draw (9,1) node[draw,rounded corners=2mm] (CA1) {$A_1$}; \draw (9,0) node[draw,rounded corners=2mm] (CA2) {$A_2$}; \draw (9,-1) node[draw,rounded corners=2mm] (CB3) {$B_3$}; \draw (10,2) node[draw,rounded corners=2mm,dashed] (C5) {$\top$}; \draw (10,-2) node[draw,rounded corners=2mm,dashed] (C6) {$\top$}; \draw (10,0) node[draw,rounded corners=2mm] (I6) {$\Eve$}; \draw (11,1) node[draw,rounded corners=2mm] (DB2) {$B_2$}; \draw (11,0) node[draw,rounded corners=2mm] (DA3) {$A_3$}; \draw (11,-1) node[draw,rounded corners=2mm] (DA4) {$A_4$}; \draw (12,0) node[draw,rounded corners=2mm,dashed] (END) {$\bot$}; \draw[-latex'] (I1) -- (A1); \draw[-latex'] (I1) -- (B1); \draw[-latex',dashed] (I1) -- (C1); \draw[-latex'] (A1) -- (I2); \draw[-latex'] (B1) -- (I2); \draw[-latex',dashed] (A1) -- (C1); \draw[-latex',dashed] (B1) -- (C1); \draw[-latex'] (I2) -- (A2); \draw[-latex'] (I2) -- (B2); \draw[-latex'] (A2) -- (I3); \draw[-latex'] (B2) -- (I3); \draw[-latex',dashed] (A2) -- (C2); \draw[-latex',dashed] (B2) -- (C2); \draw[-latex'] (I3) -- (A3); \draw[-latex'] (I3) -- (B3); \draw[-latex'] (A3) -- (I4); \draw[-latex'] (B3) -- (I4); \draw[-latex',dashed] (A3) -- (C3); \draw[-latex',dashed] (B3) -- (C3); \draw[-latex'] (I4) -- (A4); \draw[-latex'] (I4) -- (B4); \draw[-latex'] (A4) -- (I5); \draw[-latex'] (B4) -- (I5); \draw[-latex',dashed] (A4) -- (C4); \draw[-latex',dashed] (B4) -- (C4); \draw[-latex'] (I5) -- (CA1); \draw[-latex'] (I5) -- (CA2); \draw[-latex'] (I5) -- (CB3); \draw[-latex',dashed] (CA1) -- (I6); \draw[-latex',dashed] (CA2) -- (I6); \draw[-latex',dashed] (CB3) -- (I6); \draw[-latex'] (CA1) -- (C5); \draw (CA2) edge[-latex',bend right,bend angle=5] (C5.-100); \draw[-latex'] (CB3) -- (C6); \draw[-latex'] (I6) -- (DB2); \draw[-latex'] (I6) -- (DA3); \draw[-latex'] (I6) -- (DA4); \draw[-latex'] (DB2) -- (C5); \draw (DA3) edge[-latex',bend left,bend angle=5] (C5.-80); 
\draw[-latex'] (DA4) -- (C6); \draw[-latex',dashed] (DB2) -- (END); \draw[-latex',dashed] (DA3) -- (END); \draw[-latex',dashed] (DA4) -- (END); \end{tikzpicture} \caption{Encoding of a formula $\phi = \forall x_1. \exists x_2.\ \forall x_3.\ \exists x_4.\ (x_1 \lor x_2 \lor \lnot x_3) \land (\lnot x_2 \lor x_3 \lor x_4)$. The dashed edges represent the strategies in the equilibrium of the players other than \eve. }\label{fig:hardness} } \end{figure} \end{proof}
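As a quick sanity check on the instance encoded in \figurename~\ref{fig:hardness} (illustration only; the reduction itself never needs to evaluate the formula), a brute-force QBF evaluator confirms that the example formula is valid. The representation of literals as signed integers is an arbitrary choice made for this example.

```python
def qbf_valid(prefix, clauses, val=()):
    """Brute-force evaluation of a quantified boolean formula.
    prefix: tuple of 'forall'/'exists' quantifiers for variables 1..n;
    clauses: tuples of signed integers (+m for x_m, -m for not x_m)."""
    if not prefix:
        # All variables assigned: check every clause has a true literal.
        return all(any(val[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
    combine = all if prefix[0] == 'forall' else any
    return combine(qbf_valid(prefix[1:], clauses, val + (b,))
                   for b in (False, True))

# phi = forall x1. exists x2. forall x3. exists x4.
#       (x1 or x2 or not x3) and (not x2 or x3 or x4)
phi = (('forall', 'exists', 'forall', 'exists'), [(1, 2, -3), (-2, 3, 4)])
print(qbf_valid(*phi))  # prints True
```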
\section{Introduction} \label{sec:intro} Kohn-Sham density functional theory (DFT)~\cite{HohenbergKohn1964,KohnSham1965} is the most widely used electronic structure theory for molecules and systems in condensed phase. The Kohn-Sham orbitals (a.k.a. Kohn-Sham wavefunctions) are eigenfunctions of the Kohn-Sham Hamiltonian and are generally delocalized, \textit{i.e.}{}~each orbital has significant magnitude across the entire computational domain. Consequently, the information about atomic structure and chemical bonding, which is often localized in real space, may be difficult to interpret directly from Kohn-Sham orbitals. The connection between the delocalized orbitals and localized ones can be established through a \textit{localization} procedure, which has been realized by various numerical methods in the literature~\cite{FosterBoys1960,MarzariVanderbilt1997,WannierReview,Gygi2009,SCDM,ELiLu2010,OzolinsLaiCaflischEtAl2013,AquilantePedersenMerasEtAl2006}. The common goal of these methods is to find a set of orbitals that are localized in real space and span the Kohn-Sham invariant subspace, defined as the subspace spanned by the Kohn-Sham orbitals. \rr{For simplicity, we restrict our discussion to isolated systems and omit the discussion of Brillouin zone sampling in this manuscript.} Mathematically, the problem of finding a localized representation of the Kohn-Sham invariant subspace can be formulated as follows. Assume the collection of Kohn-Sham orbitals is discretized in the real space representation as a tall and skinny matrix $\Psi\in \mathbb{C}^{N\times n_{e}}$ with orthonormal columns and where $N\gg n_{e}$. We seek to compute a unitary transformation $Q\in\mathbb{C}^{n_{e}\times n_{e}}$ such that the columns of $\Phi=\Psi Q$ are \textit{localized}, \textit{i.e.}{}~each column of $\Phi$ becomes a sparse vector with spatially localized support after truncating entries with relative magnitude smaller than a prescribed threshold.
Here, $N$ is the number of grid points in the discrete real space representation of each Kohn-Sham orbital, and $n_{e}$ is the number of orbitals. In the absence of spin degeneracy, $n_{e}$ is also the number of electrons in the system. For a general matrix $\Psi$ it may not be possible to construct such a $Q$ and obtain $\Phi$ with the desired structure. However, when $\Psi$ represents a collection of Kohn-Sham orbitals of an insulating system, such localized orbitals \rr{generally} exist. \rr{An important exception is the class of topological insulators with non-vanishing Chern numbers \cite{hasan2010colloquium,Brouder2007} \textemdash here we restrict our discussion to systems where localized functions are known to exist.} Their construction can be justified physically by the ``nearsightedness'' principle for electronic matter with a finite HOMO-LUMO gap~\cite{Kohn1996,ProdanKohn2005}. The nearsightedness principle can be more rigorously stated as the single particle density matrix being exponentially localized along the off-diagonal direction in its real space representation~\cite{BenziBoitoRazouk2013,Kohn1996,Blount,Cloizeaux1964a,Cloizeaux1964b, Nenciu,LinLu2015}. The recently developed selected columns of the density matrix (SCDM) procedure \cite{SCDM} provides a simple, accurate, and robust way of constructing localized orbitals. Unlike many existing methods~\cite{FosterBoys1960,MarzariVanderbilt1997,ELiLu2010,OzolinsLaiCaflischEtAl2013,MustafaCohCohenEtAl2015}, the SCDM method requires no initial guess and does not involve a non-convex optimization procedure. \rr{The core ideas behind it also readily extend to systems with Brillouin zone sampling \cite{SCDMk}.} The SCDM procedure constructs localized orbitals directly from a column selection procedure implicitly applied to the density matrix. Hence, the locality of the basis is a direct consequence of the locality of the density matrix.
The SCDM method can be efficiently performed via a single column pivoted QR (QRCP) factorization. Since efficient implementations of QRCP are available in both serial and parallel computational environments through the LAPACK \cite{lapack} and ScaLAPACK~\cite{Scalapack} libraries, respectively, the SCDM method can be readily adopted by electronic structure software packages. From a numerical perspective, the computational cost of a QRCP factorization scales as $\mathcal{O}(N n_{e}^{2})$. The basic form of QRCP \cite{GVL} is not able to take full advantage of level 3 BLAS operations. Hence for matrices of the same size, the QRCP factorization can still be relatively expensive compared to level 3 BLAS operations such as general matrix-matrix multiplication (GEMM). The computational cost of the single QRCP is not necessarily an issue when the SCDM procedure is used as a post-processing tool, but is a potential concern when the SCDM procedure needs to be performed repeatedly. This, for instance, could occur in geometry optimization and molecular dynamics calculations with hybrid exchange-correlation functionals~\cite{WuSelloniCar2009,GygiDuchemin2012}, where a localized representation of the Kohn-Sham invariant subspace needs to be constructed in each step to reduce the large computational cost associated with the Fock exchange operator. In fact, \cite{SCDM} demonstrates how our existing SCDM algorithm may be used to accelerate Hartree-Fock exchange computations. Therefore, here we focus on accelerating the SCDM computation itself. Practically, any QRCP algorithm may be used within the SCDM procedure. In the serial setting, this includes recently developed methods that use random projections to accelerate the column selection procedure and allow it to be blocked \cite{Martinsson,Gu}. In the massively parallel setting one may alternatively use the recently developed communication avoiding rank-revealing QR algorithm \cite{DemmelRRQR}.
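To make the QRCP-based column selection concrete, the following NumPy/SciPy sketch applies QRCP to $\Psi^{*}$ on synthetic data and then orthonormalizes the selected columns of the implied density matrix through an SVD. The random $\Psi$ and the SVD-based orthogonalization are illustrative choices; they show the mechanics of the selection, not necessarily the exact implementation used for the numerical experiments in this paper.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
N, ne = 400, 10
# Synthetic stand-in for Kohn-Sham orbitals: orthonormal columns.
Psi, _ = np.linalg.qr(rng.standard_normal((N, ne)))

# Column selection: QRCP on Psi^*; the first ne pivots select grid points,
# i.e. columns of the density matrix P = Psi Psi^*.
_, _, piv = qr(Psi.conj().T, pivoting=True, mode='economic')
C = piv[:ne]

# Orthonormalize P(:, C) = Psi @ Psi[C, :]^*: with Psi[C, :]^* = U S V^*,
# the unitary rotation Q = U V^* yields orthonormal orbitals Phi = Psi Q.
U, _, Vh = np.linalg.svd(Psi[C, :].conj().T, full_matrices=False)
Phi = Psi @ (U @ Vh)

print(np.allclose(Phi.conj().T @ Phi, np.eye(ne)))          # orthonormal
print(np.allclose(Phi @ Phi.conj().T, Psi @ Psi.conj().T))  # same subspace
```

For a random $\Psi$ the resulting columns are of course not spatially localized; the locality of $\Phi$ in actual calculations comes from the decay of the density matrix discussed above.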
In this paper, we demonstrate that the computational cost of the SCDM procedure can be greatly reduced, to the extent that the column selection procedure is no longer the dominating factor in the SCDM calculation. This is based on the observation that SCDM does not really require the $Q$ and $R$ factors from the QRCP. In fact, only the pivots from the QRCP are needed. More specifically, we develop a two-stage column selection procedure that approximates the behavior of the existing SCDM procedure at a much lower computational cost. Asymptotically, the computational cost is dominated by two matrix-matrix multiplications of the form $\Psi Q$ to construct the localized orbitals, at least one of which is needed in any localization procedure starting from the input $\Psi$ matrix. Notably, the only adjustable parameters we introduce are an oversampling factor and a truncation threshold, both of which may be chosen without knowledge of the problem's physical structure. The approximate column selection procedure consists of two stages. First, we use a randomized procedure to select a set of candidate columns that may be used in an SCDM style localization procedure. The number of candidate columns is only $\mathcal{O}(n_{e} \log n_{e})$ and is much smaller than $N$. We may use these candidate columns to quickly construct a basis for the subspace that is reasonably localized. In some cases, this fast randomized procedure may provide sufficiently localized columns. Otherwise, we propose a subsequent refinement procedure to improve the quality of the localized orbitals. This is achieved by using a series of QRCP factorizations for matrices of smaller sizes that may be performed concurrently. Numerical results for physical systems obtained from the Quantum ESPRESSO~\cite{QE} package indicate that the two-stage procedure yields results that are nearly as good as using the columns selected by a full QRCP-based SCDM procedure.
For large systems, the computational time is reduced by more than one order of magnitude. The remainder of this paper is organized as follows. In Section \ref{sec:prelim} we present both a brief introduction to Kohn-Sham DFT and a summary of the existing SCDM algorithm. Section \ref{sec:algo} discusses the new two stage algorithm we propose, and details both the randomized approximate localization stage and the refinement of the column selection. Finally, Section \ref{sec:numer} demonstrates the effectiveness of the algorithm for various molecules. \section{Preliminaries} \label{sec:prelim} For completeness we first provide a brief introduction to Kohn-Sham density functional theory (KSDFT), and the SCDM procedure for finding a localized basis for the Kohn-Sham subspace. \subsection{Kohn-Sham density functional theory} For a given atomic configuration with $M$ atoms at locations $\{R_{I}\}_{I=1}^{M}$, KSDFT solves the nonlinear eigenvalue problem \begin{equation} \begin{split} &\hat{H}[\hat{\rho};\{R_{I}\}]\hat{\psi}_{i} = \varepsilon_{i} \hat{\psi}_{i},\\ &\hat{\rho}(\bvec{r}) = \sum_{i=1}^{n_e} \abs{\hat{\psi}_{i}(\bvec{r})}^2, \quad \int \hat{\psi}^{*}_{i}(\bvec{r}) \hat{\psi}_{j}(\bvec{r}) \,\mathrm{d} \bvec{r} = \delta_{ij}. \end{split} \label{eqn:KS} \end{equation} For simplicity we omit the spin degeneracy. The number of electrons is $n_{e}$ and the eigenvalues $\{\varepsilon_{i}\}$ are ordered non-decreasingly. The lowest $n_e$ eigenvalues $\{\varepsilon_{i}\}_{i=1}^{n_e}$ are called the occupied state energies, and $\{\varepsilon_{i}\}_{i>n_e}$ are called the unoccupied state energies. We assume $\varepsilon_{g}:=\varepsilon_{n_e+1}-\varepsilon_{n_e}>0$. Here $\varepsilon_{n_e}$ is often called the highest occupied molecular orbital (HOMO), $\varepsilon_{n_e+1}$ the lowest unoccupied molecular orbital (LUMO), and hence $\varepsilon_{g}$ the HOMO-LUMO gap.
\rr{For extended systems, if $\varepsilon_{g}$ is uniformly bounded away from zero as the system size increases,} the quantum system is an insulating system~\cite{Martin2004}. The eigenfunctions $\{\hat{\psi}_{i}\}_{i=1}^{n_e}$ define the electron density $\hat{\rho}(\bvec{r})$, which in turn defines the Kohn-Sham Hamiltonian \begin{equation} \hat{H}[\hat{\rho};\{R_{I}\}] = -\frac12 \Delta + \hat{V}_{c}[\hat{\rho}] + \hat{V}_{\mathrm{xc}}[\hat{\rho}] + V_{\mathrm{ion}}[\{R_{I}\}]. \label{eqn:ksdft} \end{equation} Here $\Delta$ is the Laplacian operator for the kinetic energy of electrons, \begin{equation*} \hat{V}_{c}[\hat{\rho}](\bvec{r}) \equiv \int \frac{\hat\rho(\bvec{r}')}{\abs{\bvec{r}-\bvec{r}'}} \,\mathrm{d} \bvec{r}' \end{equation*} is the Coulomb potential, and $\hat{V}_{c}$ depends linearly on the electron density $\hat{\rho}$. $\hat{V}_{\mathrm{xc}}[\hat{\rho}]$ depends nonlinearly on $\hat{\rho}$, and characterizes the many body exchange and correlation effect. $V_{\mathrm{ion}}[\{R_{I}\}]$ is an external potential that depends explicitly on the ionic positions; it describes the electron-ion interaction and is independent of $\hat{\rho}$. Because the eigenvalue problem (\ref{eqn:KS}) is nonlinear, it is often solved iteratively by a class of algorithms called self-consistent field (SCF) iterations~\cite{Martin2004}, until~\eqref{eqn:KS} reaches self-consistency. In a finite dimensional discretization of Eq.~\eqref{eqn:ksdft}, let $N$ be the number of degrees of freedom. Using a large basis set such as the plane-wave basis set, we have $N=c n_{e}$, where $c$ is a large constant that is often $10^{2}\sim 10^{4}$. Due to this large constant, we explicitly distinguish $N$ and $n_{e}$ in the complexity analysis below.
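The SCF idea can be illustrated with a deliberately simplified one-dimensional toy model (finite differences, a made-up density-dependent potential, and plain linear mixing); it is a sketch of the fixed-point structure of Eq.~\eqref{eqn:KS} only, not a realistic Kohn-Sham solver.

```python
import numpy as np

def toy_scf(n_grid=60, n_e=3, coupling=0.4, tol=1e-10, max_iter=500):
    """Toy SCF loop for H[rho] = -(1/2) Lap + V_ext + coupling * rho."""
    lap = (np.diag(-2.0 * np.ones(n_grid))
           + np.diag(np.ones(n_grid - 1), 1)
           + np.diag(np.ones(n_grid - 1), -1))
    x = np.linspace(-1.0, 1.0, n_grid)
    v_ext = 5.0 * x**2                      # confining external potential
    rho = np.full(n_grid, n_e / n_grid)     # initial density guess
    for _ in range(max_iter):
        H = -0.5 * lap + np.diag(v_ext + coupling * rho)
        _, psi = np.linalg.eigh(H)          # occupied orbitals: lowest n_e
        rho_new = np.sum(psi[:, :n_e]**2, axis=1)
        if np.linalg.norm(rho_new - rho) < tol:
            break
        rho = 0.5 * (rho + rho_new)         # simple linear mixing
    return rho

rho = toy_scf()
print(abs(rho.sum() - 3.0) < 1e-8)  # prints True: density carries n_e electrons
```

Each iteration diagonalizes the current Hamiltonian, rebuilds the density from the lowest $n_e$ orbitals, and mixes old and new densities, which is the basic shape of an SCF cycle.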
For large scale systems, the cost for storing the Kohn-Sham orbitals is $\mathcal{O}(N n_e)$, and the cost for computing them is generally $\mathcal{O}(N n_e^2)$, which scales cubically with respect to $n_e$. In modern KSDFT calculations the Hartree-Fock exact exchange term is also often taken into account in the form of hybrid functionals~\cite{PerdewErnzerhofBurke1996,Becke1993}. The computational cost for this step not only scales as $\mathcal{O}(Nn_e^{2})$ but also has a large prefactor. When the self-consistent solution of the Kohn-Sham equation is obtained, the existence of a finite HOMO-LUMO gap has important implications for the collective behavior of the occupied Kohn-Sham orbitals $\{\hat{\psi}_{i}\}_{i=1}^{n_e}$: for insulating systems the density matrix decays rapidly away from its diagonal, a property often referred to as the nearsightedness principle. Since any non-degenerate linear transformation of the set of Kohn-Sham orbitals yields exactly the same physical properties of a system, the physically relevant quantity is the subspace spanned by the Kohn-Sham orbitals $\{\hat{\psi}_{i}\}_{i=1}^{n_e}$. Various efforts~\cite{FosterBoys1960,MarzariVanderbilt1997,WannierReview,Gygi2009,ELiLu2010,OzolinsLaiCaflischEtAl2013} have been made to utilize this fact and to find a set of localized orbitals that form a compressed representation of a Kohn-Sham subspace. In other words, we find a set of functions $\{\hat{\phi}_{i}\}_{i=1}^{n_e}$ whose span is the same as the span of $\{\hat{\psi}_{i}\}_{i=1}^{n_e}$. Compared to each Kohn-Sham orbital $\hat{\psi}_{i}$, which is delocalized in real space, each compressed orbital $\hat{\phi}_{i}$ is often localized around an atom or a chemical bond. Hence working with the $\hat{\phi}_{i}$'s can reduce both the storage and the computational cost. Assume we have access to the $\hat{\psi}_{j}(\bvec{r})$'s evaluated at a set of discrete grid points $\{\bvec{r}_{i}\}_{i=1}^{N}$.
Let $\{\omega_{i}\}_{i=1}^{N}$ be a set of positive integration weights associated with the grid points $\{\bvec{r}_{i}\}_{i=1}^{N}$; then the discrete orthonormality condition is given by \begin{equation} \sum_{i=1}^{N} \hat{\psi}^{*}_{j}(\bvec{r}_{i}) \hat{\psi}_{j'}(\bvec{r}_{i}) \omega_{i} = \delta_{jj'}. \label{eqn:orthonormal_discrete} \end{equation} Let $\hat{\psi}_{j}=[\hat{\psi}_{j}(\bvec{r}_{1}), \hat{\psi}_{j}(\bvec{r}_{2}), \ldots, \hat{\psi}_{j}(\bvec{r}_{N})]^{T}$ be a column vector, and $\hat{\Psi}=[\hat{\psi}_{1}, \ldots, \hat{\psi}_{n_e}]$ be a matrix of size $N\times n_e$. We call $\hat{\Psi}$ the \textit{real space representation} of the Kohn-Sham orbitals and define the diagonal matrix $W=\mathrm{diag}[\omega_{1},\ldots,\omega_{N}]$. \rr{Our method requires the Kohn-Sham orbitals to be represented on a set of real space grid points. This is the case for a plane-wave basis set, as well as for other representations such as finite differences, finite elements and wavelets. For instance, if the Kohn-Sham orbitals are represented using plane-wave basis functions, their real space representation can be obtained on a uniform grid efficiently with the fast Fourier transform (FFT), and in this case $\omega_{i}$ takes the same constant value for all $i$. It is in this setting that our method is of particular interest. However, the procedure is also applicable to other basis sets, such as Gaussian type orbitals or numerical atomic orbitals, when a real space representation of the basis functions is readily available. Therefore, our method is amenable to most electronic structure software packages.} We define $\Psi=W^{\frac12} \hat{\Psi}$ such that the discrete orthonormality condition in Eq.~\eqref{eqn:orthonormal_discrete} becomes $\Psi^{*}\Psi=I$, where $I$ is the identity matrix of size $n_e$.
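The change of variables $\Psi = W^{\frac12}\hat{\Psi}$ is easy to sanity-check numerically. The sketch below uses synthetic random data in place of actual Kohn-Sham orbitals (an illustrative assumption): it constructs a $\hat{\Psi}$ satisfying the weighted orthonormality condition of Eq.~\eqref{eqn:orthonormal_discrete} and maps it to $\Psi$ with $\Psi^{*}\Psi=I$.

```python
import numpy as np

def weighted_to_standard(Psi_hat, w):
    """Map real-space orbitals Psi_hat (N x n_e) with quadrature weights w
    to Psi = W^{1/2} Psi_hat, so that Psi^* Psi = I."""
    return np.sqrt(w)[:, None] * Psi_hat

def make_orbitals(N=200, n_e=5, seed=0):
    """Generate synthetic orbitals orthonormal w.r.t. the weights w
    (random data standing in for actual Kohn-Sham orbitals)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.5, 1.5, size=N)        # positive integration weights
    A = rng.standard_normal((N, n_e))
    # Orthonormalize in the weighted inner product: QR of W^{1/2} A.
    Q, _ = np.linalg.qr(np.sqrt(w)[:, None] * A)
    # Undo the weighting: sum_i psi_j(r_i) psi_j'(r_i) w_i = delta_{jj'}.
    Psi_hat = Q / np.sqrt(w)[:, None]
    return Psi_hat, w
```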
We now seek a compressed basis for the span of $\Psi$, denoted by the set of vectors $\Phi=[\phi_{1}, \ldots, \phi_{n_e}]$, where each $\phi_{i}$ is a sparse vector with spatially localized support after truncating entries with small magnitudes. In such a case, $\phi_{i}$ is called a localized vector. \subsection{Selected columns of the density matrix} As opposed to widely-used procedures such as MLWFs \cite{WannierReview}, the key difference in the SCDM procedure is that the localized orbitals $\phi_{i}$ are obtained directly from columns of the density matrix $P = \Psi\Psi^*$. The aforementioned nearsightedness principle states that, for insulating systems, each column of the matrix $P$ is localized. As a result, selecting any linearly independent subset of $n_e$ of them will yield a localized basis for the span of $\Psi.$ However, picking $n_e$ random columns of $P$ may result in a poorly conditioned basis if, for example, there is too much overlap between the selected columns. Therefore, we would like a means for choosing a well conditioned set of columns, denoted $\mathcal{C} = \left\{c_1,c_2,\ldots,c_{n_e} \right\},$ to use as the localized basis. Intuitively, we expect such a procedure to select columns that overlap with each other as little as possible. \rr{This is algorithmically accomplished with a QRCP factorization (see, \textit{e.g.,}{}~\cite{GVL}). More specifically, given a matrix $A$, a QRCP seeks to compute a permutation matrix $\Pi$ such that the leading sub-matrices $\left(A\Pi\right)_{:,1,\ldots,k}$ for any applicable $k$ are as well conditioned as possible. In particular, if we let $\mathcal{C}$ denote the columns selected by the first $n_e$ columns of $\Pi$, then $A_{:,\mathcal{C}}$ should be a well conditioned set of $n_e$ columns of $A.$} \rr{In our setting, this means we would ideally compute a QRCP factorization of the matrix $P$ to identify $n_e$ well conditioned columns from which we may construct a localized basis.
However, this would be highly costly since $P$ is a large matrix of size $N\times N$. Fortunately, we may equivalently compute the set $\mathcal{C}$ by computing a QRCP factorization of $\Psi^*$ \rr{or, in fact, of any matrix $U$ with orthonormal columns such that $P=UU^*.$} More specifically, we compute \begin{equation} \label{eqn:qrcp} \Psi^*\Pi = Q\begin{bmatrix} R_1 & R_2 \end{bmatrix}, \end{equation} and the first $n_e$ columns of $\Pi$ encode $\mathcal{C}.$} The SCDM procedure to construct an orthonormal set of localized basis elements, denoted $\phi_i$ for $i=1,\ldots,n_e$, and collected as columns of the matrix $\Phi$, is presented in its simplest form in Algorithm \ref{alg:scdm}. \begin{algorithm} \caption{The SCDM algorithm} \label{alg:scdm} \begin{algorithmic}[1] \Statex Given: the Kohn-Sham orbitals $\Psi$ \State Compute a column pivoted QR of $\Psi^*$, $\Psi^*\Pi = Q\begin{bmatrix} R_1 & R_2 \end{bmatrix}$ \State Compute $\Phi = \Psi Q$ or, alternatively, $\Phi^* = \begin{bmatrix} R_1 & R_2 \end{bmatrix}\Pi^*$ \Statex Output: a localized basis for the Kohn-Sham subspace $\Phi$ \end{algorithmic} \end{algorithm} In this form the algorithm requires knowledge of the orthogonal factor from the QRCP. However, an alternative description simply requires the column selection $\mathcal{C}.$ We may equivalently write the SCDM algorithm as in Algorithm \ref{alg:scdm_no_q}. Note that in Algorithm \ref{alg:scdm_no_q}, the cost of the QR factorization of the matrix $\left(\Psi_{\mathcal{C},:}\right)^*$ is only $\mathcal{O}(n_{e}^{3})$.
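A minimal Python sketch of the SCDM procedure, assuming $\Psi$ is available as a dense $N\times n_e$ array with orthonormal columns; \texttt{scipy.linalg.qr} with \texttt{pivoting=True} plays the role of the QRCP, and the orthogonal factor is recovered from the selected rows as in the column-selection variant.

```python
import numpy as np
import scipy.linalg as sla

def scdm(Psi):
    """Orthonormal localized basis from selected columns of P = Psi Psi^*.

    Psi: (N, n_e) array with orthonormal columns. Returns (Phi, C), where
    Phi spans the same subspace as Psi and C holds the selected indices.
    """
    n_e = Psi.shape[1]
    # QRCP of Psi^*: the first n_e pivots give well conditioned columns of P.
    _, _, piv = sla.qr(Psi.conj().T, pivoting=True, mode='economic')
    C = piv[:n_e]
    # Column-selection variant: orthogonal factor from the QR of (Psi_{C,:})^*,
    # an n_e x n_e problem costing only O(n_e^3).
    Q, _ = np.linalg.qr(Psi[C, :].conj().T)
    Phi = Psi @ Q
    return Phi, C
```

Note that the pivot choices depend only on inner products of the rows of $\Psi$, i.e., on entries of $P$, so the selected columns do not depend on the particular orthonormal basis chosen for the subspace.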
\begin{algorithm} \caption{An alternative version of the SCDM algorithm} \label{alg:scdm_no_q} \begin{algorithmic}[1] \Statex Given: the Kohn-Sham orbitals $\Psi$ \State Compute $\mathcal{C}$ associated with a column pivoted QR of $\Psi^*$ \State Compute the QR factorization $\left(\Psi_{\mathcal{C},:}\right)^* = QR$ \State Compute $\Phi = \Psi Q$ \Statex Output: a localized basis for the Kohn-Sham subspace $\Phi$ \end{algorithmic} \end{algorithm} \begin{remark} There are various equivalent ways to construct the SCDM algorithm. While the simple presentation here differs slightly from the original presentation \cite{SCDM}, \rr{the two are mathematically equivalent up to a choice of sign for the columns of $\Phi$. The original presentation corresponds to the computation of a QR factorization of $\left(\Psi_{\mathcal{C},:}\right)^*$ via a Cholesky factorization of $\left(\Psi_{\mathcal{C},:}\right)\left(\Psi_{\mathcal{C},:}\right)^*.$ QR factorizations are not unique; there is always an ambiguity up to a diagonal matrix with entries on the unit circle. However, this ambiguity has no effect on the localization.} \end{remark} \rr{This second interpretation allows us to briefly explain why the algorithm constructs localized orbitals. Let $D$ be a diagonal matrix with $\pm 1$ on the diagonal such that $DR$ has positive diagonal entries. This means that \[ Q = \left(\Psi_{\mathcal{C},:}\right)^*R^{-1}D \] where $R^{-1}$ is a Cholesky factor of $\left[\left(\Psi_{\mathcal{C},:}\right)\left(\Psi_{\mathcal{C},:}\right)^*\right]^{-1}.$ Importantly, $\left(\Psi_{\mathcal{C},:}\right)\left(\Psi_{\mathcal{C},:}\right)^* = P_{\mathcal{C},\mathcal{C}}$ and therefore exhibits off-diagonal decay so long as $P_{\mathcal{C},\mathcal{C}}$ is well conditioned. This property is then inherited by $R^{-1}$ \cite{BenziBoitoRazouk2013}.
Finally, since $P_{\mathcal{C},:} = \Psi\left(\Psi_{\mathcal{C},:}\right)^*$ \[ \Phi = P_{\mathcal{C},:}R^{-1}D \] and we may conclude that $\Phi$ is well localized: $P_{\mathcal{C},:}$ is well localized and $R^{-1}D$ does not destroy that locality. Importantly, here we see that all the factorizations we are performing can be thought of as involving sub-matrices of $P$.} The overall computational cost of the algorithm is $\mathcal{O}(N n_e^2),$ and in practice the cost is dominated by the single QRCP factorization regardless of the version used. Another key feature of the algorithm, especially for our modifications later, is that because we are effectively working with the spectral projector $P,$ the method performs equivalently if a different orthonormal basis for the range of $\Psi$ is used as input. In physics terminology, the SCDM procedure is gauge-invariant. Lastly, the key factor in forming a localized basis is the selection of a well conditioned subset of columns. Small changes to the selected columns, provided they remain nearly as well conditioned, may not significantly impact the overall quality of the basis. \section{The approximate column selection algorithm} \label{sec:algo} When the SCDM procedure is used as a post-processing tool for a single atomic configuration, the computational cost is usually affordable. In fact, in such a situation, the most time consuming part of the computation is often the I/O related to the $\Psi$ matrices, especially for systems of large sizes. However, when localized orbitals need to be calculated repeatedly inside an electronic structure software package, such as in the context of hybrid functional calculations with geometry optimization or \textit{ab initio} molecular dynamics simulations, the computational cost of SCDM can become relatively large. Here we present an algorithm that significantly accelerates the SCDM procedure. The core aspect of the SCDM procedure is the column selection procedure.
Given a set of appropriate columns the requisite orthogonal transform to construct the SCDM may be computed from the corresponding rows of $\Psi$, as seen in Algorithm~\ref{alg:scdm_no_q}. Here we present a two stage procedure for accelerating this selection of columns and hence the computation of $\Phi$. First, we construct a set of approximately localized orbitals that span the range of $\Psi$ via a randomized method that requires only $\Psi$ and the electron density $\rho,$ though if $\rho$ is not given it may be computed directly from $\Psi$ without increasing the asymptotic computational complexity. We then use this approximately localized basis as the input for a procedure that refines the selection of columns from which the localized basis is ultimately constructed. This is done by using the approximate locality to carefully partition the column selection process into a number of small, local, QRCP factorizations. Each small QRCP may be done in parallel, and operates on matrices of much smaller dimension than $\Psi.$ \subsection{Approximate localization} The original SCDM procedure, through the QRCP, examines all $N$ columns of $\Psi^*$ to decide which columns to use to construct $Q$. However, physical intuition suggests that it is often not necessary to visit all columns to find good pivots. For instance, for a molecular system in vacuum, it is highly unlikely that a pivot comes from a column of the density matrix corresponding to the vacuum space away from the molecule. This inspires us to accelerate the column selection procedure by restricting the candidate columns. This is accomplished by generating $\mathcal{O}\left(n_e\log n_e\right)$ independent and identically distributed (i.i.d.) random sample columns, using the normalized electron density as the probability distribution function (pdf). 
Indeed, if a column of the density matrix corresponds to the vacuum, then the electron density is very small and hence the probability of picking the column is very low. In statistics this corresponds to leverage score sampling, see, \textit{e.g.,}{}~\cite{MahoneyDrineas}. This randomized version of the SCDM algorithm is outlined in Algorithm~\ref{alg:rand}. \begin{algorithm} \caption{Computing an approximately localized collection of basis vectors} \label{alg:rand} \begin{algorithmic}[1] \Statex Given: Kohn-Sham orbitals $\Psi,$ electron density $\rho,$ concentration $\gamma,$ and failure probability $\delta$ \State Sample $(n_e / \gamma) \log (n_e / \delta)$ elements from $\left\{1,\ldots,N\right\}$ based on the discrete distribution \[ \mathbf{Pr}\left(\left\{j\right\}\right) = \rho(j)/n_e \] and denote this set $\wt{\CS}$ \State Compute the column pivoted QR factorization $$\left(\Psi_{\wt{\CS},:}\right)^*\Pi = QR$$ \State Form the approximately localized basis $\wt{\Phi} = \Psi Q$ \end{algorithmic} \end{algorithm} To complete our discussion of Algorithm \ref{alg:rand} we must justify the sub-sampling procedure used to select $\wt{\CS}$. In order to do so we introduce a simple model for the column selection procedure based on the idea that columns ``similar'' to the ones selected by Algorithm~\ref{alg:scdm} will work well to compute an approximately localized basis. Our underlying assumption is that a set of columns will serve to approximately localize the basis if it contains at least one column in each region where one of the $\phi_i$ is large. Because the $\phi_i$ constructed via the SCDM procedure decay exponentially, this is analogous to saying that any column associated with a grid point close enough to the ``true'' grid point used will suffice. However, by avoiding explicit use of spatial relations between grid points, our algorithm and its parameters do not depend on the physical geometry.
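The sampling stage of Algorithm~\ref{alg:rand} can be sketched in Python as follows. This is an illustrative sketch, not production code: the density is computed here from $\Psi$ rather than passed in (which does not change the asymptotic cost), and the $\log(n_e/\delta)$ oversampling matches the sample-size bound justified by Theorem~\ref{thm:sample}.

```python
import numpy as np
import scipy.linalg as sla

def scdm_randomized(Psi, gamma=1/3, delta=0.1, rng=None):
    """Approximate localization by sampling candidate columns from rho.

    Psi: (N, n_e) array with orthonormal columns. Draws
    m = (n_e/gamma) * log(n_e/delta) grid points i.i.d. from rho/n_e,
    then runs a QRCP restricted to the sampled rows of Psi.
    """
    rng = np.random.default_rng(rng)
    N, n_e = Psi.shape
    rho = np.sum(np.abs(Psi) ** 2, axis=1)   # diagonal of P = Psi Psi^*
    m = int(np.ceil((n_e / gamma) * np.log(n_e / delta)))
    S = rng.choice(N, size=m, replace=True, p=rho / n_e)
    S = np.unique(S)                         # duplicate samples add nothing
    # QRCP on the sampled rows only: an n_e x |S| problem instead of n_e x N.
    _, _, piv = sla.qr(Psi[S, :].conj().T, pivoting=True, mode='economic')
    C = S[piv[:n_e]]                         # map pivots back to global indices
    Q, _ = np.linalg.qr(Psi[C, :].conj().T)
    return Psi @ Q, C
```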
To codify this postulate, we let $\mathcal{I}_i \subset \left\{1,\ldots,N\right\}$ be the smallest non-empty set such that \begin{equation} \sum_{j\in\mathcal{I}_i} \lvert\phi_i(j)\rvert^2 \geq \gamma, \label{eqn:gamma} \end{equation} where $\gamma \in (0,1)$. If multiple such sets exist, we select the one that maximizes ${\sum_{j\in\mathcal{I}_i} \lvert\phi_i(j)\rvert^2}$. Now, we may write our assumption more concretely: a column $c_i \in \left\{1,\ldots,N\right\}$ suffices to approximately construct $\phi_i$ if it is contained in $\mathcal{I}_i$. \rr{Taking $\gamma$ to be sufficiently small would enforce adequate sampling to ensure the columns selected by Algorithm~\ref{alg:scdm_no_q} are selected to be part of $\wt{\CS}.$ However, in practice this is not necessary for the construction of a localized basis and by choosing a larger $\gamma$ we allow for other columns near the peak of $\phi_i$ to act as good surrogates.} Under this assumption, to approximately localize the basis, we must simply ensure that $\wt{\CS}$ contains at least one distinct column in each of the sets $\mathcal{I}_1,\mathcal{I}_2,\ldots,\mathcal{I}_{n_e},$ \textit{i.e.}{}~we need a one-to-one matching between sets and columns. Theorem \ref{thm:sample} provides an upper bound on the required cardinality of $\wt{\CS}$ to ensure it may be used to approximately localize the basis with high probability. We do require an additional mild assumption that ensures the sets $\mathcal{I}_i$ do not simultaneously overlap significantly and have small support. \begin{theorem} \label{thm:sample} Let $\eta$ be the largest constant such that there exist disjoint subsets $\mathcal{I}^s_i \subseteq \mathcal{I}_i, \; i=1,\ldots,n_e$ each satisfying \[ \sum_{j\in\mathcal{I}^s_i} \lvert\phi_i(j)\rvert^2 \geq \eta\sum_{j\in\mathcal{I}_i} \lvert\phi_i(j)\rvert^2.
\] The set $\wt{\CS}$ constructed by sampling $$m \geq \left(n_e / (\eta\gamma) \right) \log (n_e / \delta)$$ elements with replacement from $\left\{1,\ldots,N\right\}$ based on the discrete distribution \[ \mathbf{Pr}\left(\left\{j\right\}\right) = \rho(j)/n_e \] contains a distinct element in each $\mathcal{I}_i$ for $i=1,\ldots,n_e$ with probability at least $1-\delta.$ \end{theorem} \begin{proof} Let $\mathcal{F}$ be the event that $\wt{\CS}$ does not contain a distinct element in one of the sets $\mathcal{I}_i.$ We may write \[ \mathbf{Pr}\left(\mathcal{F}\right) \leq \mathbf{Pr}\left(\left\{\wt{\CS} \cap \mathcal{I}^s_i = \emptyset \text{ for some } i\right\}\right) \] because requiring $\wt{\CS}$ to contain an element in each $\mathcal{I}^s_i$ implies that $\wt{\CS}$ contains a distinct element in each $\mathcal{I}_i$. Subsequently, by a union bound \[ \mathbf{Pr}\left(\mathcal{F}\right)\leq \sum_{i=1}^{n_e} \mathbf{Pr}\left(\left\{\wt{\CS} \cap \mathcal{I}^s_i = \emptyset\right\}\right). \] The event $\left\{\wt{\CS} \cap \mathcal{I}^s_i = \emptyset\right\}$ is simply the event that none of the $m$ samples fall in $\mathcal{I}^s_i.$ Because \[ \rho(j) = \sum_{i=1}^{n_e} \lvert \phi_i(j)\rvert^2, \] we may bound the probability of selecting an element $j$ in $\mathcal{I}^s_i$ from below as \[ \mathbf{Pr}\left(\left\{j \in \mathcal{I}^s_i\right\}\right) \geq \eta\gamma/n_e. \] Consequently, \[ \sum_{i=1}^{n_e} \mathbf{Pr}\left(\left\{\wt{\CS} \cap \mathcal{I}^s_i = \emptyset\right\}\right) \leq n_e (1-\eta\gamma/n_e)^m, \] and, using $1-x \leq e^{-x}$, to ensure the probability of missing any of the sets $\mathcal{I}^s_i$ is less than $\delta$ we may simply enforce \[ m \geq \frac{n_e}{\eta\gamma} \log (n_e / \delta). \] \end{proof} \begin{remark} The parameter $\eta$ is necessary in the proof, but algorithmically we simply assume it to be one; alternatively, one could consider choosing $1 / (\gamma \eta)$ rather than $1 / \gamma$ as the oversampling factor.
\end{remark} If, for example, we take $\gamma = 1/2$ and $\eta = 1/2$, this bound says $4n_e\log (n_e/\delta)$ samples suffice for the approximate localization procedure. As expected, the required number of samples grows as the failure probability $\delta$ goes to zero, or as the cardinality of the sets $\mathcal{I}_i$ shrinks to one. Furthermore, since $\gamma = \min_{i} \max_{j} \lvert \phi_i(j) \rvert^2$ corresponds to each of the sets $\mathcal{I}_i$ containing a single point, that is a lower bound on how small $\gamma$ can be theoretically. We remark that this theoretical bound may be pessimistic for two reasons. One is its use of the union bound. The other is the introduction of $\eta$. The disjointness requirement simplifies the assignment of selected columns to sets for the proof, but is stronger than what is actually needed. \subsection{Accelerating the SCDM procedure using an approximately localized basis} Once the approximately localized orbitals $\wt{\Phi}$ are obtained, we would like to perform a refinement procedure to further localize the basis. We do this by taking advantage of the locality of the input to the SCDM procedure. \rr{In Algorithm~\ref{alg:rand} we approximate the behavior of the SCDM algorithm by restricting the number of columns of $\Psi^*$ that the QRCP factorization can select from. However, once we have a somewhat localized basis we can efficiently take more columns of $\Psi^*$ into consideration. This allows us to better approximate the original SCDM algorithm and, therefore, construct a more localized basis. We accomplish this with a procedure that resembles the tournament pivoting strategy for computing a QRCP. We first compute a collection of small QRCP factorizations, each involving columns associated with the support of a subset of the rows of $\wt{\Phi}^*$. This is computationally feasible because the rows are already somewhat localized.
Lastly, because this procedure generates more candidate columns than needed, we perform one final QRCP to obtain as well conditioned a set of columns of $\Psi^*$ as possible.} Algorithmically, we first need to build a superset of the ultimately desired columns, from which the final columns used in the localization are selected. To construct such a superset we consider each approximately localized orbital and how we may refine it. Each orbital, once approximately localized, only exerts influence on, \textit{i.e.}{}~significantly overlaps with, nearby localized orbitals. Hence, we may refine the selected columns locally. This means that for each orbital we may simply determine which orbitals it substantially overlaps with and compute a QRCP on just those orbitals (rows) of $\wt{\Phi}^*$, while simultaneously omitting columns with small norm over those rows. This process yields a small number of selected columns that we add to a list of candidate columns for the final localization. However, because we repeat this process for each localized orbital, we may end up with more than $n_e$ total candidate columns, though only by a small multiplicative factor. Therefore, we perform one final column pivoted QR factorization on these candidate columns to select the final set $\mathcal{C}$. In principle, while the localized orbitals become small on large portions of the domain, they are not actually zero. Hence, for a given $\epsilon$ we say two orbitals substantially overlap if there is any spatial point where they both have relative magnitude greater than $\epsilon$. We find in practice that choosing $\epsilon$ close to the relative value at which one considers an orbital to have vanished suffices, though taking $\epsilon \rightarrow 0$ one recovers the original SCDM algorithm. Importantly, this parameter is completely independent of the geometry of the problem under consideration and thus does not require detailed knowledge of the system to set.
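The support and overlap computations just described (the truncated supports and the sets of substantially overlapping orbitals) can be sketched in Python as follows; the orbital data in the accompanying test and the default threshold of $5\times 10^{-2}$ are illustrative.

```python
import numpy as np

def supports_and_neighbors(Phi_tilde, eps=5e-2):
    """Truncated supports J[i] and overlap sets R[i] for approximately
    localized orbitals Phi_tilde (N x n_e).

    J[i]: grid points where |phi_i| exceeds eps times its peak magnitude.
    R[i]: indices of orbitals whose truncated support intersects J[i].
    """
    N, n_e = Phi_tilde.shape
    mags = np.abs(Phi_tilde)
    # Boolean support masks, one column per orbital (relative threshold).
    masks = mags > eps * mags.max(axis=0, keepdims=True)
    J = [np.flatnonzero(masks[:, i]) for i in range(n_e)]
    # Two orbitals overlap iff their truncated supports share a grid point.
    overlap = masks.T.astype(int) @ masks.astype(int) > 0  # (n_e, n_e)
    R = [np.flatnonzero(overlap[i]) for i in range(n_e)]
    return J, R
```

Each local QRCP then operates on the rows in `R[i]` and the union of the corresponding supports, which is a much smaller matrix than the full $\wt{\Phi}^*$ whenever the orbitals are reasonably localized.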
We detail our complete algorithm for computing the orthogonalized SCDM from approximately localized input in Algorithm~\ref{alg:refine}. The only parameter is the threshold below which a column is considered small enough to be ignored. At a high level, the goal is simply to generate additional candidate columns by allowing each orbital to interact with its immediate neighbors. Provided that each orbital is only close to a small number of others, and that the approximate localization is sufficient, this procedure may be performed quickly. \begin{algorithm} \caption{Refining an approximately localized collection of basis vectors} \label{alg:refine} \begin{algorithmic}[1] \Statex Given: approximately localized Kohn-Sham orbitals $\wt{\Phi}$ and column tolerance $\epsilon$ \State $\JS_i = \left\{j\in \left\{1,\ldots,N\right\} \mid \lvert \wt{\phi}_i(\bvec{r}_j)\rvert > \epsilon \max_{k} \lvert \wt{\phi}_i (\bvec{r}_k) \rvert \right\}$ for $i = 1,\ldots,n_e$ \For{$i = 1,\dots, n_e$} \State Set $\mathcal{R}_i = \left\{j \in \left\{1,\ldots,n_e\right\} \mid \JS_i \cap \JS_j \neq \emptyset \right\}$ \State Set $\displaystyle \LS_i = \bigcup_{j\in \mathcal{R}_i} \JS_j$ \State Compute a column pivoted QR factorization of $\left(\left.\rr{\wt{\Phi}}\right.^*\right)_{\mathcal{R}_i,\LS_i}$ and denote the pivot columns $\mathcal{C}_i$ \EndFor \State Set $\wt{\CS} = \cup_i \mathcal{C}_i$ \State Compute the column pivoted QR factorization $$\left(\rr{\wt{\Phi}}_{\wt{\CS},:}\right)^*\Pi = QR$$ \State Form the localized basis $\Phi = \wt{\Phi} Q$ \end{algorithmic} \end{algorithm} To illustrate the behavior of this algorithm, we sketch the behavior of Algorithm~\ref{alg:refine} in two cases. In the first case, after the approximate localization, we have two sets of orbitals whose support sets after truncation are disjoint. This is shown in Figure~\ref{fig:disjoint}. Here, simply computing two independent QRCP factorizations is actually equivalent to computing the QRCP of the entire matrix.
As we see Algorithm~\ref{alg:refine} partitions the orbitals into two sets and then only considers the columns with significant norm over the orbital set. In the second case, \rr{illustrated in Figure~\ref{fig:chain}}, we have a chain of orbitals whose support set after truncation forms a connected region in the spatial domain. In this situation we do not actually replicate the computation of a QRCP of the whole matrix, but rather for each orbital we compute a local QRCP ignoring interactions with distant orbitals. More specifically, any column mostly supported on a given orbital will be minimally affected by orthogonalization against columns associated with distant orbitals. Therefore, we may simply ignore those orthogonalization steps and still closely match the column selection procedure. \begin{figure}[ht!] \centering \includegraphics[width = 1\textwidth]{mat_disjoint.pdf} \includegraphics[width = .6\textwidth]{domain_disjoint.pdf} \caption{Matrix (top) and physical domain (bottom) associated with two collections of approximately localized orbitals whose support (lightly shaded region) is disjoint after truncation.} \label{fig:disjoint} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width = 1\textwidth]{mat_chain.pdf} \includegraphics[width = .6\textwidth]{domain_chain.pdf} \caption{Matrix (top) and physical domain (bottom) associated with a chain of approximately localized orbitals whose support (lightly shaded region) is connected after truncation.} \label{fig:chain} \end{figure} \begin{remark} In Algorithm~\ref{alg:refine} it may sometimes be the case that several of the $\mathcal{R}_i$ are identical (\textit{e.g.}{}~as in Figure~\ref{fig:disjoint}). In this case, as a matter of practical performance optimization, one may simply skip instances of the loop over $i$ that would be computing a QRCP of the exact same matrix as a prior instance of the loop. 
Similarly, if there is a set $\mathcal{S}\subset \left\{1,\ldots,n_e\right\}$ such that $\left(\cup_{i\in \mathcal{S}} \mathcal{R}_i \right) \cap \left(\cup_{i\in\mathcal{S}^c} \mathcal{R}_i\right) = \emptyset$ and $\lvert \cup_{i\in \mathcal{S}} \mathcal{R}_i \rvert$ is small, we may combine the instances of the loop corresponding to $i\in\mathcal{S}$ into a single small QRCP by simply using $\cup_{i\in \mathcal{S}} \mathcal{R}_i$ as the set of rows. This avoids redundant work if a small collection of orbitals just interact amongst themselves and are disjoint from all others. \end{remark} \subsection{Computational complexity} The computational cost of Algorithms \ref{alg:rand} and \ref{alg:refine} is formally limited by the cost of computing a single general matrix-matrix multiplication (GEMM) $\Psi Q$ at cost $\mathcal{O}(n_e^2 N).$ However, this single BLAS 3 operation must appear in any localization procedure starting with the $\Psi$ matrix as an input. This GEMM operation is also highly optimized both in sequential and parallel computational environments. Furthermore, at large scale one can approximate the product with $\mathcal{O}(n_e N)$ complexity using sparse linear algebra, since the support of the result is sparse and one may infer the support based on the column indices from which $Q$ is built. For these reasons, we let $T_{mult}$ represent the cost of computing $\Psi Q$. 
Using this notation, the computational cost of Algorithm~\ref{alg:rand} is $$\mathcal{O}(n_e N \log n_e + n_e^3 \log n_e) + T_{mult}.$$ More specifically, the random selection of columns costs $\mathcal{O}(N n_e \log n_e)$ and the subsequent QR factorization costs $\mathcal{O}(n_e^3 \log n_e).$ If we assume that the support of the approximately localized basis used as input for Algorithm~\ref{alg:refine} and the number of nearby orbital centers are bounded by some constant independent of $N$ and $n_e,$ which is reasonable for models where the molecule is growing and the discretization quality remains constant, the computational cost is $$\mathcal{O}(n_e N + n_e^3) +T_{mult}.$$ Under these assumptions each of the, at most, $n_e$ small QR factorizations has constant cost. However, the procedure for finding $\JS_i$ introduces a dependency on $N$. While the computational costs of the refinement procedure and the randomized algorithm are broadly similar, the randomized algorithm is significantly faster in practice because drawing the random samples is cheaper than the support computations needed in the refinement algorithm. \rr{Lastly, our algorithms are memory efficient. In fact, for practical purposes their memory cost is bounded by the cost of storing $\Psi$ plus a few work arrays of size $N.$ Besides the storage for $\Psi,$ all of the matrices we operate on cost at most $\mathcal{O}(n_e^3 \log n_e)$ to store and may be discarded after use. Furthermore, the QR factorizations and matrix multiplication may be done in place, excepting a few work arrays of length $N.$} \section{Numerical examples} \label{sec:numer} To demonstrate the performance of our method we use three examples that capture the different facets of our algorithm. The first example is the dissociation of a BH$_{3}$NH$_{3}$ molecule, and the second example is the alkane chain.
We select these two examples not because they are computationally expensive, but because they clearly demonstrate that our approximate localization algorithm is effective in two very different regimes. In particular, the effectiveness of the algorithm is independent of whether the localized orbitals form one single group or multiple disconnected groups. Our third example is a large supercell with $256$ water molecules. We demonstrate the performance gains over the existing SCDM method and provide a comparison with Wannier90 \cite{wannier90}. In all of the examples here we assume we have access to the electron density $\rho$ from the electronic structure calculation, and therefore exclude its computation from the timings of our randomized method. \rr{We use $3 n_e \log n_e$ samples in the randomized algorithm, corresponding to $\gamma = 1/3,$ for all of the experiments.} Furthermore, to more clearly illustrate the advantages of our method, we separately report timings for computing the orthogonal transform $Q$ that localizes the orbitals and for the subsequent computation of the localized orbitals by a single matrix product. Here we only consider the orthogonalized SCDM, as discussed in this paper. For the refinement algorithm we set the relative truncation threshold at $\epsilon = 5\times 10^{-2}$, and we observe that this is sufficient for our new algorithm to closely match the results of the existing algorithm. Finally, in all of the experiments here we use $2.5 \times 10^{-2}$ as the relative truncation threshold of the localized orbitals when counting the fraction of entries that are non-zero. This measure of ``locality'' has the advantage of not depending on the geometry of the physical system. However, for completeness we also provide spread computations in the final example.
\rr{Prior work validates the expected exponential decay of the orbitals by varying the truncation threshold \cite{SCDM}.} All numerical results shown were run on a quad-socket Intel Xeon E5-4640 processor clocked at 2.4 GHz with 1.5 TB of RAM, and our algorithms were implemented in MATLAB R2015a. Our code is sequential and the only multi-threading is in the \mbox{LAPACK} and BLAS routines called by MATLAB. \rr{The storage cost of all the algorithms presented here is $\mathcal{O}(n_e N),$ which is also the cost to store $\Psi$ in memory. In our largest example a copy of $\Psi$ costs roughly 60 GB to store.} The Kohn-Sham orbitals were computed using Quantum ESPRESSO \cite{QE} and VMD \cite{VMD} was used for plotting orbitals and molecules in the alkane and water examples. \subsection{BH$_{3}$NH$_{3}$} First, we demonstrate the performance of the approximate column selection method for the dissociation process of a BH$_{3}$NH$_{3}$ molecule. The main purpose of this example is to demonstrate that our approximate localization algorithm is equally effective when the localized orbitals form disconnected groups or a single group. Figure~\ref{fig:bh3nh3} shows the localized orbitals for three atomic configurations, where the distance between the B and N atoms is 1.18, 3.09, and 4.96 Bohr, respectively; it also shows the locality of the orbitals computed by Algorithms~\ref{alg:scdm}, \ref{alg:rand}, and \ref{alg:refine} for each of the three atomic configurations. Here we plot an isosurface of the orbitals at a value of $2.5 \times 10^{-2}.$ We find that Algorithm~\ref{alg:refine} automatically identifies that the localized orbitals should be treated as two disconnected groups for the dissociated configuration, and as one single group for the bonded configuration. We see that in all cases, the randomized method works quite well on its own.
Furthermore, after applying Algorithm~\ref{alg:refine} to the output of the randomized method, the sparsity of the orbitals is nearly indistinguishable from that of the original SCDM algorithm. \rr{In these three scenarios the condition number of $P_{:,\mathcal{C}},$ equivalently $\left(\Psi_{\mathcal{C},:}\right)^*,$ is never larger than three. Here we let $\mathcal{C} \subset \wt{\CS}$ denote the final set of columns we have selected via the last QRCP in Algorithm~\ref{alg:refine} as it corresponds to the columns of the density matrix from which we ultimately build the localized basis.} \begin{figure}[ht!] \centering \includegraphics[width = .6\textwidth]{bh3nh3_90loc.pdf} \includegraphics[width = .3\textwidth]{bh3nh3_90iso.png} \includegraphics[width = .6\textwidth]{bh3nh3_45loc.pdf} \includegraphics[width = .3\textwidth]{bh3nh3_45iso.png} \includegraphics[width = .6\textwidth]{bh3nh3_1loc.pdf} \includegraphics[width = .3\textwidth]{bh3nh3_1iso.png} \caption{Sparsity (left) of localized orbitals computed by Algorithms~\ref{alg:scdm}, \ref{alg:rand}, and \ref{alg:refine} based on fraction of non-zero entries after truncation and orbital isosurfaces (right) at $2.5 \times 10^{-2}$ generated by Algorithm~\ref{alg:refine} when using the output of Algorithm~\ref{alg:rand} as input. Three different configurations moving from the bonded configuration (top) to the dissociated configuration (bottom).} \label{fig:bh3nh3} \end{figure} \subsection{Alkane chain} Our second example is the alkane chain (atomic configuration shown in Figure~\ref{fig:alkane}). Similar to the BH$_{3}$NH$_{3}$, this example is not computationally expensive, but confirms that the approximate localization algorithm is effective even when all localized orbitals form one large connected group. We demonstrate that the refinement process still achieves the desired goal. In this example $N=820,125$ and $n_e = 100$. \begin{figure}[ht!] 
\centering \includegraphics[width = .75\textwidth]{alkane.png} \caption{Atomic configuration of the alkane chain} \label{fig:alkane} \end{figure} Figure~\ref{fig:alkane_locality} shows histograms of the fraction of non-zero entries after truncation for the randomized method, the refinement procedure applied to the output of the randomized method, and our original algorithm. We observe that the randomized method actually serves to localize the orbitals rather well, though the output is clearly not as good as that produced by the original SCDM algorithm. However, once the refinement algorithm is applied we see that, while not identical, the locality of the localized orbitals closely matches that of the localized orbitals generated by Algorithm~\ref{alg:scdm}. This is further illustrated in Figure~\ref{fig:alkane_ex}, which shows isosurfaces for a localized orbital generated by each of the three methods. \rr{Once again the columns of the density matrix we ultimately select are very well conditioned; in fact the condition number is less than two.} \begin{figure}[ht!] \centering \includegraphics[width = 1\textwidth]{alkane_rand.pdf} \includegraphics[width = 1\textwidth]{alkane_refine.pdf} \includegraphics[width = 1\textwidth]{alkane_full.pdf} \caption{Histogram of localized orbitals for the alkane chain example computed by three different algorithms based on fraction of non-zero entries after truncation. (top) output of the randomized algorithm, (middle) output of the refinement algorithm applied to the output of the randomized algorithm, and (bottom) output of the original SCDM algorithm.} \label{fig:alkane_locality} \end{figure} \begin{figure}[ht!]
\centering \includegraphics[width = .3\textwidth]{alkane_phi_rand.png} \includegraphics[width = .3\textwidth]{alkane_phi_refine.png} \includegraphics[width = .3\textwidth]{alkane_phi_full.png} \caption{The same (as determined by picking those maximally correlated) orbital as generated by the randomized algorithm (left), the refinement procedure (middle), and the original SCDM algorithm (right). Here an isosurface value of $5\times 10^{-3}$ was used and the colors delineate positive and negative regions.} \label{fig:alkane_ex} \end{figure} Table~\ref{tab:alkane} illustrates the computational cost of our new algorithms as compared to the original version. The refinement algorithm is over nine times faster than the original algorithm, though it cannot be performed without first approximately localizing the orbitals. Hence, we also report the total time to obtain the orthogonal transform via Algorithm~\ref{alg:refine}, which is the sum of the preceding three lines in the table. Even taking the whole pipeline into account we see a speedup of close to a factor of seven. \begin{table} \caption{Runtime for localization algorithms as applied to an alkane chain.\label{tab:alkane}} \centering \begin{tabular}{|l|c|} \hline Operation & time (s) \\ \hhline{|=|=|} Matrix-matrix multiplication $\Psi Q$ & 0.2713 \\ \hline Randomized version, Algorithm \ref{alg:rand} & 0.0261 \\ \hline Refinement step, Algorithm \ref{alg:refine} & 0.7176 \\ \hline Total cost of our two stage algorithm & 1.0149 \\ \hline Original algorithm, Algorithm \ref{alg:scdm} & 6.9412 \\ \hline \end{tabular} \end{table} \subsection{Water molecules} We now consider a three dimensional system consisting of $256$ water molecules (part of the atomic configuration shown in Figure~\ref{fig:water}). In this example, $N=7,381,125$ and $n_e = 1024$. \begin{figure}[ht!]
\centering \includegraphics[width = .4\textwidth]{water64.png} \caption{Part of the atomic configuration of 256 water molecules} \label{fig:water} \end{figure} Figure~\ref{fig:water_locality256} shows histograms of the fraction of non-zero entries after truncation for the randomized method, the refinement procedure applied to the output of the randomized method, and the original SCDM algorithm when applied to the 256 water molecule system. As before, the randomized method actually serves to localize the orbitals rather well. However, there is still a visible difference between the result of the randomized algorithm and the original SCDM algorithm. Application of the refinement algorithm achieves a set of localized orbitals that broadly match the quality of the ones computed by the original SCDM algorithm. \rr{Similar to the previous two examples $P_{:,\mathcal{C}}$ is very well conditioned; its condition number is once again less than two.} \begin{figure}[ht!] \centering \includegraphics[width = 1\textwidth]{water256_rand.pdf} \includegraphics[width = 1\textwidth]{water256_refine.pdf} \includegraphics[width = 1\textwidth]{water256_full.pdf} \caption{Histogram of localized orbitals for 256 water molecules computed by three different algorithms based on fraction of non-zero entries after truncation. (top) output of the randomized algorithm, (middle) output of the refinement algorithm applied to the output of the randomized algorithm, and (bottom) output of the original SCDM algorithm.} \label{fig:water_locality256} \end{figure} Table~\ref{tab:water256} illustrates the computational cost of our new algorithms as compared to Algorithm~\ref{alg:scdm}. As before, the randomized algorithm for computing the orthogonal transform is very fast and in this case much faster than the matrix-matrix multiplication required to construct the localized orbitals themselves.
This makes the algorithm particularly attractive in practice when $\rho$ is given and, as in many electronic structure codes, the application of $Q$ to $\Psi$ may be effectively parallelized. Furthermore, the complete procedure for obtaining the orthogonal transform via Algorithm~\ref{alg:refine}, the sum of the preceding three lines in the table, is more than 30 times faster than Algorithm~\ref{alg:scdm}. \begin{table} \caption{Runtime for localization algorithms as applied to 256 water molecules.\label{tab:water256}} \centering \begin{tabular}{|l|c|} \hline Operation & time (s) \\ \hhline{|=|=|} Matrix-matrix multiplication $\Psi Q$ & 47.831 \\ \hline Randomized version, Algorithm \ref{alg:rand} & 14.024 \\ \hline Refinement step, Algorithm \ref{alg:refine} & 78.361 \\ \hline Total cost of our two stage algorithm & 140.22 \\ \hline Original algorithm, Algorithm \ref{alg:scdm} & 4496.1 \\ \hline \end{tabular} \end{table} Finally, we use this example to provide a comparison with the popular Wannier90 \cite{wannier90} software package and to further demonstrate the quality of orbitals computed by our algorithms. While we have previously been looking at sparsity after truncation to evaluate the quality of the orbitals, we now also evaluate them based on the spread criterion Wannier90 tries to minimize. Loosely speaking this corresponds to the sum of the variances of the orbitals~\cite{MarzariVanderbilt1997}. All of the spreads here were computed by Wannier90 to ensure the same quantity was being measured in each case. Importantly, this is the quantity Wannier90 seeks to minimize and we therefore do not expect to do better under this metric. For example, given our localized orbitals as input, Wannier90 should always be able to at least slightly decrease the objective function value.
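As a rough illustration of this spread criterion for an isolated (non-periodic) system, one can estimate the variance of a single orbital on a real-space grid as sketched below. All names are illustrative, and the periodic setting actually handled by Wannier90 requires the overlap-matrix formalism of \cite{MarzariVanderbilt1997} rather than this direct real-space sum.

```python
import numpy as np

def orbital_spread(phi, grid, weights):
    """Variance-type spread <r^2> - |<r>|^2 of a single orbital sampled
    at real-space points `grid` (n_pts x 3) with quadrature `weights`.
    A sketch for an isolated system only; the periodic case handled by
    Wannier90 uses the overlap-matrix formalism instead."""
    density = weights * np.abs(phi) ** 2
    density = density / density.sum()              # normalized |phi|^2
    r_mean = grid.T @ density                      # <r>
    r2_mean = density @ np.sum(grid ** 2, axis=1)  # <r^2>
    return r2_mean - np.dot(r_mean, r_mean)

# Sanity check: an isotropic Gaussian orbital with standard deviation
# sigma = 1 per coordinate has spread 3*sigma^2 = 3.
xs = np.linspace(-5.0, 5.0, 41)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
grid = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
weights = np.full(grid.shape[0], (xs[1] - xs[0]) ** 3)
phi = np.exp(-np.sum(grid ** 2, axis=1) / 4.0)
assert abs(orbital_spread(phi, grid, weights) - 3.0) < 0.05
```

Summing this quantity over all orbitals gives the total spread reported in the tables below (in $\text{\AA}^2$ when the grid is in \AA).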
For this comparison, we used the random initial guess option in Wannier90 and the default convergence tolerance of $10^{-10}.$ \rr{Notably, the random choice in Wannier90 does not correspond to a random gauge; rather, an initial guess is constructed by randomly placing Gaussian functions in space.} \begin{table} \caption{Final spread of orbitals for 256 water molecules and time to solution.} \label{tab:wannier} \centering \begin{tabular}{|l|c|c|} \hline Algorithm & spread $\left(\text{\AA}^2\right)$ & time to solution (s) \\ \hhline{|=|=|=|} Algorithm~\ref{alg:scdm} & 589.91 & 4496.1 \\ \hline Randomized version, Algorithm \ref{alg:rand} & 636.60 & 14.024 \\ \hline Two stage procedure using Algorithms~\ref{alg:rand} and~\ref{alg:refine} & 589.97 & 140.22 \\ \hline Wannier90 & 550.20 & 1715.2 \\ \hline \end{tabular} \end{table} Table~\ref{tab:wannier} shows the spreads and time to solution for all of the algorithms. First, we observe that the spread using our two stage procedure almost exactly matches that of the output of Algorithm~\ref{alg:scdm} while being over an order of magnitude faster. Secondly, the outputs from our algorithms, even the randomized one, are close to the local minimum found by Wannier90. \rr{Figure~\ref{fig:wannier_spread} directly compares the spreads of our localized orbitals from the two stage algorithm and those at a local minimum found by Wannier90. As expected, there are discrepancies, but we are generally within approximately $0.05 \text{\AA}^2$ per orbital. This aligns with our expectations based on the objective function difference. For comparison, we computed a localized basis for an isolated $\text{H}_2\text{O}$ molecule. Using Algorithm~\ref{alg:scdm} yielded spreads of 0.46, 0.48, 0.56, and 0.57 $\text{\AA}^2$. Subsequently running Wannier90 to convergence yielded spreads of 0.47, 0.48, 0.55, and 0.55 $\text{\AA}^2$, which matched the results when starting with a Wannier90 generated random initial guess.} \begin{figure}[ht!]
\centering \includegraphics[width = 1\textwidth]{wannier_fast.pdf} \includegraphics[width = 1\textwidth]{wannier_conv.pdf} \caption{\rr{Histogram of localized orbitals for 256 water molecules computed by two different algorithms based on the Wannier90 spread functional. (top) output of the refinement algorithm applied to the output of the randomized algorithm, and (bottom) converged local minimum for Wannier90.}} \label{fig:wannier_spread} \end{figure} Admittedly, the time to solution for Wannier90 may depend strongly on the initial guess, \textit{e.g.,}{}~in this experiment the initial spread for Wannier90 was $5339 \text{ \AA}^2$ and convergence took 20 iterations. However, a poor initial guess could result in Wannier90 failing to converge, converging to a worse local minimum, \rr{or taking longer to converge}. Our algorithms are direct and have no such dependence on an initial guess. In this experiment, each iteration of Wannier90 took roughly 85 seconds. So even two iterations cost more than our two stage algorithm, and we have omitted the cost of computing the overlap matrices based on $\Psi$, which Wannier90 requires as input and which we computed using Quantum ESPRESSO on 256 processors. Collectively, these results make our algorithm an attractive alternative to Wannier90. If a local minimum of the objective is desired, the output from \rr{Algorithms~\ref{alg:rand} or~\ref{alg:refine}} may be used as an algorithmically constructed initial guess for Wannier90. \begin{remark} \rr{Seeding Wannier90 with the output of our two stage algorithm, convergence took 16 iterations and we appear to arrive at an equivalent local minimum. For this system, it would appear the area around the local minimum is quite flat and the default absolute convergence criterion is rather stringent. In fact, after one iteration starting from our algorithm's output Wannier90 yields localized functions whose spread is within $0.4\%$ of the converged value.
To get to within $1\%$ starting from the random initial guess takes five iterations.} \end{remark} \section{Conclusion} \label{sec:conclusion} We have presented a two stage algorithm to accelerate the computation of the SCDM for finding a localized representation of the Kohn-Sham invariant subspace. We first utilize an algorithm based on random sampling to approximately localize the basis and then perform a subsequent refinement step. This method can achieve computational gains of over an order of magnitude for systems of relatively large sizes. Furthermore, the orbitals computed are qualitatively and quantitatively similar to those generated by the original SCDM algorithm. Lastly, for large systems we observe that our algorithm may provide an attractive alternative to Wannier90 and, at the very least, may provide a simple method for the construction of a good initial guess. Rapid computation of a localized basis allows for its use within various electronic structure calculations where a localized basis may need to be computed repeatedly. This includes computations such as molecular dynamics and time-dependent density functional theory with hybrid exchange-correlation functionals. Finally, the ideas inherent to the SCDM procedure have potential applicability in problems outside of Kohn-Sham DFT because the structural and behavioral properties we exploit are not necessarily unique to this problem. Our new algorithms would admit faster computation in such contexts as well. \section*{Acknowledgments} A.D. is partially supported by a National Science Foundation Mathematical Sciences Postdoctoral Research Fellowship under grant number DMS-1606277. L.L. is supported by the DOE Scientific Discovery through Advanced Computing (SciDAC) program, the DOE Center for Applied Mathematics for Energy Research Applications (CAMERA) program, and the Alfred P. Sloan fellowship. L.Y.
is supported by the National Science Foundation under award DMS-1328230 and DMS-1521830 and the U.S. Department of Energy’s Advanced Scientific Computing Research program under award DE-FC02-13ER26134/DE-SC0009409. The authors thank Stanford University and the Stanford Research Computing Center for providing computational resources and support that have contributed to these research results. The authors also thank the anonymous referees for their many helpful suggestions. \bibliographystyle{siam}
\section{Discontinuous-Galerkin scheme} We consider a standard first-order system of hyperbolic balance laws \begin{align} \label{eq:GeneralHyperbolicSystem1D} \partial_\timevar\moments+\partial_{\z}\ensuremath{\bF}\left(\moments\right) = \ensuremath{\bs}\left(\moments\right), \end{align} where $\moments(\timevar,\ensuremath{x})\in\mathbb{R}^{\ensuremath{n}}$ for all $\timevar\in\timeint$ and $\ensuremath{x}\in\Domain$. In the following, the spatial domain $\Domain = (\ensuremath{\z_{L}}, \ensuremath{\z_{R}})$ is divided, for notational simplicity, into $\ensuremath{n_{\z}}$ equidistant cells $\cell{\ensuremath{j}} = (\x_{\ensuremath{j}-\frac12}, \x_{\ensuremath{j}+\frac12})$, where the cell interfaces are given by $\x_{\ensuremath{j}\pm\frac12} = \x_\ensuremath{j} \pm \frac{\ensuremath{\Delta\z}}{2}$ for cell centres $\x_\ensuremath{j} = \ensuremath{\z_{L}} + (\ensuremath{j} - \frac12)\ensuremath{\Delta\z}$, and $\ensuremath{\Delta\z} = \frac{\ensuremath{\z_{R}} - \ensuremath{\z_{L}}}{\ensuremath{n_{\z}}}$. Furthermore, $\SpaceOfPolynomials{\ensuremath{k}}(\cell{\ensuremath{j}})$ is the set of polynomials of degree at most $\ensuremath{k}$ on the interval $\cell{\ensuremath{j}}$, and \begin{equation} \FiniteElementSpace{\ensuremath{k}} = \{\Testfunction \in \Lp{1}(\Domain): \Testfunction|_{\cell{\ensuremath{j}}} \in \SpaceOfPolynomials{\ensuremath{\spatialorder-1}}(\cell{\ensuremath{j}}) \text{ for } \ensuremath{j} \in \{ 1, \ldots , \ensuremath{n_{\z}} \} \} \label{eq:dg-space} \end{equation} is the finite-element space of piecewise polynomials of degree $\ensuremath{\spatialorder-1}$.\\ The discontinuous-Galerkin method for the general hyperbolic system \eqref{eq:GeneralHyperbolicSystem1D}, as outlined in \cite{Cockburn1989,Cockburn1989a,Cockburn1991}, can be briefly described as follows.
For each $\timevar\in\timeint$, seek an approximate solution $\ensuremath{\moments[h]}(\timevar, \x)$ whose components live in the finite-element space $\FiniteElementSpace{\ensuremath{k}}$ as defined in \eqref{eq:dg-space}. Then follow the Galerkin approach: replace $\moments$ in \eqref{eq:GeneralHyperbolicSystem1D} by a solution of the form $\ensuremath{\moments[h]} \in \FiniteElementSpace{\ensuremath{k}}$, multiply the resulting equation by basis functions $\Testfunction[h]$ of $\FiniteElementSpace{\ensuremath{k}}$ and integrate over cell $\cell{\ensuremath{j}}$ to obtain \begin{subequations} \label{eq:dweakform1} \begin{align} \partial_\timevar \int_{\cell{\ensuremath{j}}} \ensuremath{\moments[h]}(\timevar, \x)\Testfunction[h](\x)~d\x &+ \ensuremath{\bF}(\ensuremath{\moments[h]}(\timevar, \x_{\ensuremath{j}+\frac12}^-)) \Testfunction[h](\x_{\ensuremath{j}+\frac12}^-) - \ensuremath{\bF}(\ensuremath{\moments[h]}(\timevar, \x_{\ensuremath{j}-\frac12}^+)) \Testfunction[h](\x_{\ensuremath{j}-\frac12}^+) \nonumber \\ &-\int_{\cell{\ensuremath{j}}} \ensuremath{\bF}(\ensuremath{\moments[h]}(\timevar, \x)) \partial_{\z} \Testfunction[h](\x)~d\x = \int_{\cell{\ensuremath{j}}} \ensuremath{\bs}(\ensuremath{\moments[h]}(\timevar, \x))\Testfunction[h](\x)~d\x \label{eq:dweakform1a},\\ \int_{\cell{\ensuremath{j}}} \ensuremath{\moments[h]}(0, \x)\Testfunction[h](\x)~d\x &= \int_{\cell{\ensuremath{j}}} \ensuremath{\moments[\timevar=0]}(\x) \Testfunction[h](\x)~d\x, \label{eq:dweakform1b} \end{align} \end{subequations} where $\x_{\ensuremath{j} \pm \frac12}^-$ and $\x_{\ensuremath{j} \pm \frac12}^+$ again denote the limits from left and right, respectively, and $\ensuremath{\moments[\timevar=0]}$ is the initial condition. 
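As a small illustration of the initial projection \eqref{eq:dweakform1b}, the following sketch performs an $\Lp{2}$ projection of an initial condition onto Legendre polynomials scaled to the reference cell; the function name and quadrature order are illustrative choices and not part of the scheme above.

```python
import numpy as np

# Sketch of the initial projection: L2-project u0 onto Legendre
# polynomials scaled to the reference cell (-1/2, 1/2).
def project_initial_condition(u0, x_center, dx, k):
    # Gauss-Legendre quadrature mapped from (-1, 1) to (-1/2, 1/2)
    nodes, weights = np.polynomial.legendre.leggauss(k + 1)
    nodes, weights = 0.5 * nodes, 0.5 * weights
    coeffs = np.empty(k)
    for i in range(k):
        # phi_i(xhat) = P_i(2*xhat): phi_0 = 1, phi_1 = 2*xhat, ...
        phi = np.polynomial.legendre.Legendre.basis(i)(2.0 * nodes)
        # <u0, phi_i> / <phi_i, phi_i> on the reference cell
        coeffs[i] = (np.sum(weights * u0(x_center + dx * nodes) * phi)
                     / np.sum(weights * phi * phi))
    return coeffs

# u0(x) = x on the cell (-1/2, 1/2): only the linear mode phi_1 = 2*xhat
# contributes, with coefficient 1/2.
assert np.allclose(project_initial_condition(lambda x: x, 0.0, 1.0, 3),
                   [0.0, 0.5, 0.0])
```

The quadrature with $\ensuremath{k}+1$ nodes is exact for all products $\Testfunction[\ensuremath{i}]\Testfunction[\ensuremath{j}]$ appearing here, so for polynomial data the projection is exact up to rounding.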
In order to approximately solve the Riemann problem at the cell-interfaces, the fluxes $\ensuremath{\bF}(\ensuremath{\moments[h]}(\timevar, \x_{\ensuremath{j} + \frac12}^\pm))$ at the points of discontinuity are both replaced by a numerical flux $\ensuremath{\widehat{\bF}}(\ensuremath{\moments[h]}(\timevar, \x_{\ensuremath{j}+\frac12}^-), \ensuremath{\moments[h]}(\timevar, \x_{\ensuremath{j}+\frac12}^+))$, thus coupling the elements with their neighbours \cite{Toro2009}. Several well-known examples of such a numerical flux $\ensuremath{\widehat{\bF}}$ exist in the literature. The simplest example is the global Lax-Friedrichs flux \begin{align} \label{eq:globalLF} \ensuremath{\widehat{\bF}}(\moments[1], \moments[2]) = \dfrac{1}{2} \left( \ensuremath{\bF}(\moments[1]) + \ensuremath{\bF}(\moments[2]) - C ( \moments[2] - \moments[1]) \right). \end{align} The numerical viscosity constant $\ensuremath{C}$ is taken as a global estimate of the absolute value of the largest eigenvalue of the Jacobian $\ensuremath{\bF}'$. The local Lax-Friedrichs flux could be used instead. This requires computing the eigenvalues of the Jacobian in every space-time cell to adjust the value of the numerical viscosity constant $\ensuremath{C}$ but possibly decreases the overall diffusivity of the scheme.
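For illustration, the global Lax-Friedrichs flux \eqref{eq:globalLF} can be sketched in a few lines (names are illustrative; the scalar advection check is only a sanity test, not part of the scheme above):

```python
def lax_friedrichs(F, u_left, u_right, C):
    """Global Lax-Friedrichs flux: F is the physical flux function,
    u_left/u_right are the traces from the two neighbouring cells, and
    C is a global bound on the spectral radius of the Jacobian F'."""
    return 0.5 * (F(u_left) + F(u_right) - C * (u_right - u_left))

# Scalar advection F(u) = a*u with a > 0: choosing C = |a| recovers the
# exact upwind flux a*u_left.
a = 1.0
flux = lax_friedrichs(lambda u: a * u, 2.0, 5.0, abs(a))
assert flux == a * 2.0
```

The same function applies componentwise to systems when the states are vectors, with $C$ the global spectral-radius estimate described above.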
However, since high-order methods are considered, the decrease in diffusivity achieved by switching to the local Lax-Friedrichs flux should be negligible.\\ The usual approach is to expand the approximate solution $\ensuremath{\moments[h]}$ on each interval as \begin{align} \label{eqn:solution_form} \left.\ensuremath{\moments[h]}\right|_{\cell{\ensuremath{j}}}(\timevar, \x) := \momentslocal{\ensuremath{j}}(\timevar, \x) := \sum_{\ensuremath{i}=0}^{\ensuremath{\spatialorder-1}}\momentspolynomialcoefficients{\ensuremath{j}}{\ensuremath{i}}(\timevar) \Testfunction[\ensuremath{i}]\left( \frac{\x - \x_\ensuremath{j}}{\ensuremath{\Delta\z}} \right), \end{align} where $\Testfunction[0], \Testfunction[1], \ldots ,\Testfunction[\ensuremath{\spatialorder-1}]$ denote a basis for $\SpaceOfPolynomials{\ensuremath{k}}(\ensuremath{\hat{I}})$ with respect to the standard $\Lp{2}$-scalar product on the reference cell $\ensuremath{\hat{I}} = \left(-\frac12,\frac12\right)$. It is convenient to choose an orthogonal basis like the Legendre polynomials scaled to the interval $\ensuremath{\hat{I}}$, denoted by \begin{align} \label{eq:LegendrePolynomialBasis} \Testfunction[0](\ensuremath{\hat{\z}}) = 1, \quad \Testfunction[1](\ensuremath{\hat{\z}}) = 2\ensuremath{\hat{\z}}, \quad \Testfunction[2](\ensuremath{\hat{\z}}) = \frac12 (12\ensuremath{\hat{\z}}^2-1), \: \ldots \end{align} With an orthogonal basis the cell means $\momentscellmean{\ensuremath{j}}$ are easily available from the expansion coefficients $\momentspolynomialcoefficients{\ensuremath{j}}{\ensuremath{i}}$, since $$ \momentscellmean{\ensuremath{j}}(\timevar) := \frac{1}{\ensuremath{\Delta\z}} \int_{\cell{\ensuremath{j}}} \momentslocal{\ensuremath{j}}(\timevar, \x)~d\x = \frac{1}{\ensuremath{\Delta\z}} \sum_{\ensuremath{i}=0}^{\ensuremath{\spatialorder-1}} \momentspolynomialcoefficients{\ensuremath{j}}{\ensuremath{i}}(\timevar) \int_{\cell{\ensuremath{j}}} \Testfunction[\ensuremath{i}]\left( 
\frac{\x-\x_\ensuremath{j}}{\ensuremath{\Delta\z}} \right)~d\x = \momentspolynomialcoefficients{\ensuremath{j}}{0}(\timevar). $$ Collecting the coefficients $\momentspolynomialcoefficients{\ensuremath{j}}{\ensuremath{i}}(\timevar)$ into the $\ensuremath{k} \times \ensuremath{n}$ matrix $$ \momentspolynomialmatrix{\ensuremath{j}}(\timevar) = \left( \momentspolynomialcoefficients{\ensuremath{j}}{0}(\timevar),\ldots , \momentspolynomialcoefficients{\ensuremath{j}}{\ensuremath{\spatialorder-1}}(\timevar)\right)^T, $$ equation \eqref{eq:dweakform1} can be written in compact form as the coupled system of ordinary differential equations \begin{align} \label{eq:odeDG} \partial_\timevar \momentspolynomialmatrix{\ensuremath{j}} &= \ensuremath{\tilde{L}_h}(\momentspolynomialmatrix{\ensuremath{j} - 1}, \momentspolynomialmatrix{\ensuremath{j}}, \momentspolynomialmatrix{\ensuremath{j} + 1}), \quad \text{for } \ensuremath{j} \in \{1, \ldots , \ensuremath{n_{\z}}\} \text{ and } \timevar \in \timeint, \end{align} with initial condition \eqref{eq:dweakform1b} and an appropriate choice of the local differential operator $\ensuremath{\tilde{L}_h}$. We incorporate boundary conditions via `ghost cells' at $\x_0$ and $\x_{\ensuremath{n_{\z}}+1}$ and set $\momentslocal{0}(\timevar, \x)$ and $\momentslocal{\ensuremath{n_{\z}} + 1}(\timevar, \x)$ accordingly. For Dirichlet-boundary conditions, the simplest approach is taken. 
The ghost-cell values are chosen to be the constant functions \begin{align*} \momentslocal{0}(\timevar, \x) &\equiv \momentslocal{0}(\timevar, \x_{\frac12}),\\ \momentslocal{\ensuremath{n_{\z}} + 1}(\timevar, \x) &\equiv \momentslocal{\ensuremath{n_{\z}} + 1}(\timevar, \x_{\ensuremath{n_{\z}} + \frac12}), \end{align*} with $\momentslocal{0}(\timevar, \x_{\frac12})$ and $\momentslocal{\ensuremath{n_{\z}} + 1}(\timevar, \x_{\ensuremath{n_{\z}} + \frac12})$ defined by the boundary conditions.\\ For periodic boundary conditions, the obvious choice is \begin{align*} \momentslocal{0}(\timevar, \x) &= \momentslocal{\ensuremath{n_{\z}}}(\timevar, \x+\ensuremath{\z_{R}}-\ensuremath{\z_{L}}),\quad \x\in\cell{0},\\ \momentslocal{\ensuremath{n_{\z}} + 1}(\timevar, \x) &=\momentslocal{1}(\timevar, \x-\ensuremath{\z_{R}}+\ensuremath{\z_{L}}),\quad \x\in\cell{\ensuremath{n_{\z}}+1}. \end{align*} To obtain a high-order scheme for \eqref{eq:odeDG} a suitable time integrator has to be used. Such a class of integrators is given by the \emph{strong-stability-preserving} (\emph{SSP}) methods, as used for example in \cite{Zhang2010,AllHau12}. The stages and steps of this type of method are convex combinations of forward-Euler steps. Since the realizable set is convex, the analysis of a forward-Euler step then suffices to prove realizability preservation of the high-order method.\\ When possible, \emph{SSP-Runge-Kutta} (\emph{SSP-RK}) methods are used, but unfortunately they only exist up to order four \cite{Ruuth2004,Gottlieb2005}. For higher orders the so-called \emph{two-step Runge-Kutta} (\emph{TSRK}) \emph{SSP} methods \cite{Ketcheson2011} as well as their generalizations, the \emph{multi-step Runge-Kutta} (\emph{MSRK}) \emph{SSP} methods \cite{Bresten2013}, can be applied.
They combine Runge-Kutta schemes with positive weights and high-order multistep methods to achieve a total order higher than four while maintaining the important SSP property. See \cite{Schneider2015b} for more information about the SSP-schemes used in the actual implementation. In this work we want to investigate different combinations of troubled-cell indicators $\ensuremath{\Pi}$ and slope limiters/reconstructors $\ensuremath{\Lambda^{\text{scalar}}}$. In general, the process of limiting will be denoted by \begin{align} \label{eq:slopelimiterscalar} \momentspolynomialmatrix{\ensuremath{j}} = \begin{cases} \ensuremath{\Lambda^{\text{scalar}}}(\momentspolynomialmatrix{\ensuremath{j}-\ensuremath{k}},\ldots,\momentspolynomialmatrix{\ensuremath{j}},\momentspolynomialmatrix{\ensuremath{j}+\ensuremath{k}}) & \text{ if } \ensuremath{\Pi}(\momentspolynomialmatrix{\ensuremath{j}-\ensuremath{k}},\ldots,\momentspolynomialmatrix{\ensuremath{j}},\momentspolynomialmatrix{\ensuremath{j}+\ensuremath{k}}) = 1\\ \momentspolynomialmatrix{\ensuremath{j}} & \text{otherwise}. \end{cases} \end{align} It has been found that applying the limiter to the components themselves may introduce non-physical oscillations around an otherwise monotonic solution \cite{Cockburn1989}. Instead, the limiter is applied to the local characteristic fields of the solution. They are found by transforming the vector $\moments$ using the matrix $\ensuremath{\bV}_{\ensuremath{j}}$, whose columns hold the eigenvectors of the Jacobian $\ensuremath{\bF}'(\momentscellmean{\ensuremath{j}})$ evaluated at the cell mean $\momentscellmean{\ensuremath{j}}$. After applying the limiter, the characteristic fields are transformed back to the conserved quantities. 
In the end, since $\momentspolynomialmatrix{\ensuremath{j}}$ is a matrix of size $\ensuremath{k} \times \ensuremath{n}$, this transformation is accomplished by post-multiplying with $\ensuremath{\bV}_{\ensuremath{j}}^{-T}$ so that \begin{align} \label{eq:DGLimiter} \ensuremath{\Lambda}(\momentspolynomialmatrix{\ensuremath{j}-1},\momentspolynomialmatrix{\ensuremath{j}},\momentspolynomialmatrix{\ensuremath{j}+1}) = \ensuremath{\Lambda^{\text{scalar}}}(\momentspolynomialmatrix{\ensuremath{j}-1}\ensuremath{\bV}_{\ensuremath{j}}^{-T},\momentspolynomialmatrix{\ensuremath{j}}\ensuremath{\bV}_{\ensuremath{j}}^{-T},\momentspolynomialmatrix{\ensuremath{j}+1}\ensuremath{\bV}_{\ensuremath{j}}^{-T})\ensuremath{\bV}_{\ensuremath{j}}^T, \end{align} and similarly for $\ensuremath{\Pi}$. In the following we will neglect the transformation to characteristic fields for notational simplicity. \subsection{WENO reconstruction and smoothness indicators} \subsection{Troubled-cell indicators} \subsubsection{TVBM-corrected minmod indicator} \label{sec:TVBMminmod} An often-used indicator is the \emph{TVBM-corrected minmod limiter} proposed in \cite{Cockburn1989a}. It assumes that the major part of the spurious oscillations is generated in the linear part of the underlying polynomial, whose slope in the reference cell is simply $\momentspolynomialcoefficients{\ensuremath{j}}{1}$.
The indicator in the $\ensuremath{j}^{\text{th}}$ cell is then given by \begin{align} \ensuremath{\Pi}(\momentspolynomialmatrix{\ensuremath{j}-\ensuremath{k}},\ldots,\momentspolynomialmatrix{\ensuremath{j}},\momentspolynomialmatrix{\ensuremath{j}+\ensuremath{k}}) = \begin{cases} 1 & \text{ if } \abs{\momentspolynomialcoefficients{\ensuremath{j}}{1}} \geq \ensuremath{M}(\ensuremath{\Delta\z})^2 \text{ and } \minmod{\momentspolynomialcoefficients{\ensuremath{j}}{1}, \momentspolynomialcoefficients{\ensuremath{j}+1}{0} - \momentspolynomialcoefficients{\ensuremath{j}}{0}, \momentspolynomialcoefficients{\ensuremath{j}}{0} - \momentspolynomialcoefficients{\ensuremath{j}-1}{0}} \neq \momentspolynomialcoefficients{\ensuremath{j}}{1},\\ 0 &\text{otherwise}. \end{cases} \end{align} The absolute value and the inequality are applied componentwise. The function $\minmod{\cdot}$ is the standard minmod function applied componentwise, defined by \begin{align} \label{eq:minmod} \minmod{a_1,a_2,a_3} &= \begin{cases} \operatorname{sign}(a_1) \min\{|a_1|,|a_2|,|a_3|\} & \text{if } \operatorname{sign}(a_1) = \operatorname{sign}(a_2) = \operatorname{sign}(a_3), \\ 0 & \text{else}. \end{cases} \end{align} The constant $\ensuremath{M}$ is a problem-dependent estimate of the second derivative, though it has to be noted that in \cite{Cockburn1989a} the authors did not find the solutions very sensitive to the value chosen for this parameter. 
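The componentwise minmod function \eqref{eq:minmod} and the resulting TVBM-corrected indicator can be sketched as follows (an illustrative sketch with hypothetical names, not the implementation used in this work):

```python
import numpy as np

def minmod(a1, a2, a3):
    """Componentwise minmod function: the argument of smallest magnitude
    if all three share a sign, and zero otherwise."""
    s = np.sign(a1)
    agree = (s == np.sign(a2)) & (s == np.sign(a3))
    mag = np.minimum(np.abs(a1), np.minimum(np.abs(a2), np.abs(a3)))
    return np.where(agree, s * mag, 0.0)

def tvbm_troubled(u1_j, u0_jm1, u0_j, u0_jp1, M, dx):
    """TVBM-corrected minmod troubled-cell indicator: flag the cell if
    the linear slope u1_j exceeds M*dx^2 in magnitude and is altered by
    the minmod of the forward and backward cell-mean differences."""
    mm = minmod(u1_j, u0_jp1 - u0_j, u0_j - u0_jm1)
    return (np.abs(u1_j) >= M * dx ** 2) & (mm != u1_j)

# Smooth, monotone data (cell means 0, 1, 2 with slope 1) is not flagged,
# while a steep slope next to a jump is.
assert not bool(tvbm_troubled(1.0, 0.0, 1.0, 2.0, M=1.0, dx=0.1))
assert bool(tvbm_troubled(3.0, 0.0, 0.0, 5.0, M=1.0, dx=0.1))
```

Larger values of the constant `M` make the indicator more permissive, so smooth extrema are not flagged as troubled.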
\subsection{Slope limiters and reconstructors} \subsubsection{The minmod limiter} Due to its simplicity and small stencil, an often-used limiter is \begin{align*} \ensuremath{\Lambda}(\momentspolynomialmatrix{\ensuremath{j}-\ensuremath{k}},\ldots,\momentspolynomialmatrix{\ensuremath{j}},\momentspolynomialmatrix{\ensuremath{j}+\ensuremath{k}}) = \left( \begin{array}{c} \left( \momentspolynomialcoefficients{\ensuremath{j}}{0} \right)^T \\ \minmod{\momentspolynomialcoefficients{\ensuremath{j}}{1}, \momentspolynomialcoefficients{\ensuremath{j}+1}{0} - \momentspolynomialcoefficients{\ensuremath{j}}{0}, \momentspolynomialcoefficients{\ensuremath{j}}{0} - \momentspolynomialcoefficients{\ensuremath{j}-1}{0}}^T \\ (0, 0, \ldots , 0)\\ \vdots\\ (0, 0, \ldots , 0) \end{array} \right) \end{align*} for the case $\ensuremath{k} \geq 3$, that is, piece-wise quadratic or higher-degree polynomials; the final rows of zeros indicate that the coefficients of the higher-order basis functions are set to zero for each component. As before, the minmod function is defined as in \eqref{eq:minmod}. This limiter has the known disadvantage that it reduces the accuracy at smooth extrema; for that reason it is mostly combined with the TVBM-corrected minmod indicator of \secref{sec:TVBMminmod}, which does not identify smooth extrema as troubled if the constant $\ensuremath{M}$ is chosen correctly. Note that there are generalizations of this indicator and limiter combination, as in \cite{Krivodonova2007}, which are not investigated here. \subsection{WENO} Here, the standard WENO reconstruction method given for example in \cite{Shu1998,Toro2009} is used.
For the unfamiliar reader, the method is introduced briefly in this section, while a more detailed documentation\footnote{Most of the documentation and the code has been written by Jochen Kall.} and demo implementations of the used reconstruction procedures can be found on the web page \cite{AGTMWENOpage}. Note that the described techniques should in principle work with every accurate high-order polynomial reconstruction method.\\ Since it is not a priori clear what it means to do a polynomial reconstruction in space of a function in angle, the angular domain has to be discretized using the quadrature rule $\ensuremath{\mathcal{Q}}$ (see \secref{sec:QRealizability}). Then, the following techniques can be applied at every quadrature point $\angularQuadratureNodes{\ensuremath{i}}$. For clarity of exposition, the time dependence is suppressed here.\\ Given polynomials of degree $\ensuremath{\spatialorder-1}$, $\WENOpoly{\ensuremath{j}}{\ensuremath{m}}(\cdot, \SCheight) \in \SpaceOfPolynomials{\ensuremath{\spatialorder-1}}\left(\cell{\ensuremath{j}}\right)$, $\ensuremath{m} = 0, 1, \ldots , \ensuremath{k}$, each solving the interpolation problem \begin{equation} \cellmean[\ensuremath{l}]{\WENOpoly{\ensuremath{j}}{\ensuremath{m}}(\x, \SCheight)} = \distributioncellmean{\ensuremath{l}}(\SCheight), \qquad \ensuremath{l} \in \WENOstencil{\ensuremath{j}}{\ensuremath{m}}{\ensuremath{k}} := \{\ensuremath{j} - \ensuremath{k} + \ensuremath{m}, \ldots , \ensuremath{j} + \ensuremath{m} - 1\}, \label{eq:interpolation} \end{equation} the WENO method then calculates coefficients $\ensuremath{\omega}^{\pm}_{\ensuremath{j}-\frac12, \ensuremath{m}}$ to form the weighted averages $$ \WENOpoly{\ensuremath{j}}{-\frac12}^\pm(\x, \SCheight) := \sum_{\ensuremath{m} = 0}^\ensuremath{k} \ensuremath{\omega}^{\pm}_{\ensuremath{j}-\frac12, \ensuremath{m}} \WENOpoly{\ensuremath{j}}{\ensuremath{m}}(\x, \SCheight).
$$ The set $\WENOstencil{\ensuremath{j}}{\ensuremath{m}}{\ensuremath{k}}$ is called the \emph{WENO stencil} of order $\ensuremath{k}$ in cell $\ensuremath{j}$ with shift $\ensuremath{m}$. The weights $\ensuremath{\omega}^{\pm}_{\ensuremath{j}-\frac12, \ensuremath{m}}$ are non-linear functions of the cell-averages and reflect the smoothness of each polynomial $\WENOpoly{\ensuremath{j}}{\ensuremath{m}}$. They are computed such that for smooth data the approximation order at the cell edge is maximized. This gives an order $2 \ensuremath{k} - 1$ approximation at the cell interface, while the overall order in the interior of the cell is $\ensuremath{k}$. When at least one of the interaction coefficients $\ensuremath{\sigma_a}$ or $\ensuremath{\sigma_s}$ is spatially dependent, the reconstruction inside each cell has to be specified as well. This requires an explicit choice, because both $\WENOpoly{\ensuremath{j}}{\mp \frac12}^\pm$ are order-$\ensuremath{k}$ reconstructions of the density $\ansatz$ in the $\ensuremath{j}^{\text{th}}$ cell% \footnote{ Some reconstruction methods, such as sub-cell WENO \cite{Cheng2013} or minmod \cite{Toro2009}, give only one polynomial reconstruction inside each cell, and so for these methods such a choice would be unnecessary. }. \section{Introduction} In recent years, many approaches have been considered for the solution of time-dependent linear kinetic transport equations, which arise for example in electron radiation therapy or radiative heat transfer problems. Many of the most popular methods are moment methods, also known as moment closures, since they are distinguished by how they close the truncated system of exact moment equations. Moments are defined through angular averages against basis functions to produce spectral approximations in the angle variable. A typical family of moment models are the so-called $\PN$-methods \cite{Lewis-Miller-1984,Gel61}, which are pure spectral methods.
However, many high-order moment methods, including $\PN$, do not take into account that the original kinetic density to be approximated must be non-negative. The moment vectors produced by such models are therefore often not realizable, that is, there is no associated non-negative kinetic distribution consistent with the moment vector, and thus the solutions can contain non-physical artefacts such as negative local particle densities \cite{Bru02}. The family of minimum-entropy models, colloquially known as $\MN$ models or entropy-based moment closures, solve this problem (for certain physically relevant entropies) by specifying the closure using a non-negative density reconstructed from the moments. The $\MN$ models are the only models which additionally are hyperbolic and dissipate entropy \cite{Lev96}. The cost of all these properties is that the reconstruction of this density involves solving an optimization problem at every point on the space-time mesh \cite{AllHau12,Alldredge2014}. These reconstructions, however, can be parallelized, and so the recent emphasis on algorithms that can take advantage of massively parallel computing environments has led to renewed interest in the computation of $\MN$ solutions both for linear and nonlinear kinetic equations \cite{DubFeu99,Hauck2010,Lam2014,AllHau12,Garrett2014, McDonald2012}. The key challenge for a numerical scheme is that, if not treated correctly, the numerical solution can leave the set of realizable moments \cite{Olbrant2012}, outside of which the defining optimization problem has no solution. Discontinuous-Galerkin methods can handle this problem using a realizability limiter directly on the moment vectors themselves \cite{Zhang2010,Olbrant2012,Schneider2015a}. At this level realizability conditions are in general quite complicated and also not well-understood for two- or three-dimensional problems for moment models of order higher than two. 
Realizability limiting for kinetic schemes \cite{Hauck2010,Schneider2015b}, however, is much easier because at the level of the kinetic density, realizability corresponds simply to non-negativity. One big drawback of fully explicit schemes is that the admissible time step depends on the physical parameters (the absorption and scattering properties of the material), which can render the system stiff; this restriction can be avoided by an implicit discretization. On the other hand, the hyperbolic flux, which is non-linear and usually expensive to calculate, is typically non-stiff, so an implicit discretization of the flux is undesirable. To overcome this, we derive a realizability-preserving, first-order kinetic scheme with implicit-explicit (IMEX) time stepping, treating the stiff and non-stiff terms separately. The paper is organized as follows. A brief overview of the method of moments, the minimum-entropy approach and realizability is given in \secref{sec:Models}. Then, the reduced (space-homogeneous) moment system (which will be treated implicitly in the scheme) is investigated and the realizability-preserving property of this implicit discretization is shown in \secref{sec:RealizabilityReduced}. This is concluded by the description of the full scheme and the proof that it is realizability-preserving in \secref{sec:RPFO}. The scheme is then tested on a manufactured solution and a benchmark problem in \secref{sec:NumExp}. Finally, conclusions and an outlook on future work are given in \secref{sec:Conclusions}. \section{Models} \label{sec:Models} In slab geometry, the transport equation under consideration has the form \begin{align} \label{eq:TransportEquation1D} \partial_\timevar\distribution+\SCheight\partial_{\z}\distribution + \ensuremath{\sigma_a}\distribution = \ensuremath{\sigma_s}\collision{\distribution}+\ensuremath{Q}, \qquad \timevar\in\timeint,\x\in\Domain,\SCheight\in[-1,1].
\end{align} The physical parameters are the absorption and scattering coefficient $\ensuremath{\sigma_a},\ensuremath{\sigma_s}:\timeint\times\Domain\to\R_{\geq 0}$, respectively, and the emitting source $\ensuremath{Q}:\timeint\times\Domain\times[-1,1]\to\R_{\geq 0}$. Furthermore, $\SCheight\in[-1,1]$, and $\distribution = \distribution(\timevar,\x,\SCheight)$. \begin{assumption} \label{ass:CollisionOperator} The operator $\ensuremath{\cC}$ is assumed to have the following properties. \begin{enumerate} \begin{subequations} \label{eq:CollisionProperty} \item Mass conservation \begin{align} \label{eq:CollisionPropertyMass} \int\limits_{-1}^1\collision{\distribution}~d\SCheight=0. \end{align} \item Local entropy dissipation \begin{align} \label{eq:CollisionPropertyLocalDissipation} \int\limits_{-1}^1\entropy'(\distribution)\collision{\distribution}~d\SCheight\leq 0, \end{align} where $\entropy$ denotes a strictly convex entropy (compare \secref{sec:MinimumEntropy}). \item The reduced (space-homogeneous) system \begin{align} \label{eq:CollisionDistributionEquation} \partial_\timevar \distribution = \collision{\distribution}, \end{align} admits a non-negative solution $\distribution\geq 0$ for all $\timevar\geq 0$ and initial conditions $\distribution(0,\SCheight)\geq 0$. \item For every $\ensuremath{\Delta \timevar}\geq0$, the following implication holds \begin{align} \label{eq:CollisionDistributionPositivity} \distribution(\timevar,\SCheight)-\ensuremath{\Delta \timevar}\collision{\distribution(\timevar,\SCheight)}\geq 0~\Rightarrow~ \distribution(\timevar,\SCheight)\geq 0 \quad\text{for all } \timevar\geq 0,~\SCheight\in[-1,1]. \end{align} \end{subequations} \end{enumerate} \end{assumption} The first two assumptions are from \cite{Levermore1996}, requiring that the operator is physically meaningful. The other assumptions are necessary for some of our proofs in the following\footnote{To be completely correct, the assumptions have to be formulated in a weak sense. 
However, all steps below can be performed similarly but with a greater notational effort.}. One example for such a collision operator is given by the Laplace-Beltrami operator \begin{align} \label{eq:LaplaceBeltrami} \collision{\distribution} = \frac12 \LaplaceBeltramiProjection \distribution = \frac12 \cfrac{d}{d\SCheight}\left(\left(1-\SCheight^2\right)\cfrac{d\distribution}{d\SCheight}\right). \end{align} This operator appears, for example, as the result of an asymptotic analysis of the Boltzmann equation under the assumption of small energy loss and deflection, and forward-peaked scattering in the context of electron transport \cite{Frank07,Pom92,HenIzaSie06}. Another typical choice is the linear integral collision operator \begin{equation} \collision{\distribution} = \int\limits_{-1}^1 \ensuremath{K}(\SCheight, \SCheight^\prime) \distribution(\timevar, \spatialVariable, \SCheight^\prime)~d\SCheight^\prime - \int\limits_{-1}^1 \ensuremath{K}(\SCheight^\prime, \SCheight) \distribution(\timevar, \spatialVariable, \SCheight)~d\SCheight^\prime. \label{eq:collisionOperatorR} \end{equation} The collision kernel $\ensuremath{K}$ is assumed to be strictly positive, symmetric (i.e. $\ensuremath{K}(\SCheight,\SCheight')=\ensuremath{K}(\SCheight',\SCheight)$) and normalized to $\int\limits_{-1}^1 \ensuremath{K}(\SCheight^\prime, \SCheight)~d\SCheight^\prime=1$. A typical example is the BGK-type \emph{isotropic-scattering} operator, where $\ensuremath{K}(\SCheight, \SCheight^\prime) \equiv \frac{1}{2}$. 
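For the isotropic-scattering kernel, properties \eqref{eq:CollisionPropertyMass} and \eqref{eq:CollisionPropertyLocalDissipation} are easy to check numerically. The following sketch (hypothetical names; Gauss-Legendre quadrature in the angle, and the Maxwell-Boltzmann entropy of \secref{sec:MinimumEntropy}, for which $\entropy'(\distribution)=\log\distribution$) is an illustration, not part of the scheme:

```python
import numpy as np

def isotropic_collision(psi, w):
    """BGK-type isotropic scattering, C(psi) = <psi>/2 - psi, evaluated
    at quadrature nodes with weights w on [-1, 1]."""
    return 0.5 * np.dot(w, psi) - psi

# Gauss-Legendre quadrature and a smooth, strictly positive density
mu, w = np.polynomial.legendre.leggauss(20)
psi = np.exp(0.3 * mu) + mu**2
c = isotropic_collision(psi, w)

mass = np.dot(w, c)                        # <C(psi)>, should vanish
dissipation = np.dot(w, np.log(psi) * c)   # <eta'(psi) C(psi)>, should be <= 0
```

Mass conservation holds exactly (up to round-off) because the gain and loss terms carry the same total mass; the entropy-dissipation integral is strictly negative for any non-isotropic density.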
The transport equation \eqref{eq:TransportEquation1D} is supplemented by initial and boundary conditions: \begin{subequations} \begin{align} \distribution(0,\x,\SCheight) &= \ensuremath{\distribution[\timevar=0]}(\x,\SCheight) &&\text{for } \x\in\Domain = (\ensuremath{\z_{L}},\ensuremath{\z_{R}}), \SCheight\in[-1,1], \label{eq:TransportEquation1DIC}\\ \distribution(\timevar,\ensuremath{\z_{L}},\SCheight) &= \ensuremath{\distribution[b]}(\timevar,\ensuremath{\z_{L}},\SCheight) &&\text{for } \timevar\in\timeint, \SCheight>0, \label{eq:TransportEquation1DBCa}\\ \distribution(\timevar,\ensuremath{\z_{R}},\SCheight) &= \ensuremath{\distribution[b]}(\timevar,\ensuremath{\z_{R}},\SCheight) &&\text{for } \timevar\in\timeint, \SCheight<0. \label{eq:TransportEquation1DBCb} \end{align} \end{subequations} \subsection{The method of moments} In general, solving equation \eqref{eq:TransportEquation1D} is very expensive in two and three dimensions due to the high dimensionality of the state space. For this reason it is convenient to use some type of spectral or Galerkin method to transform the high-dimensional equation into a system of lower-dimensional equations. Typically, one chooses to reduce the dimensionality by representing the angular dependence of $\distribution$ in terms of some basis $\basis$. \begin{definition} The vector of functions $\basis:[-1,1]\to\mathbb{R}^{\ensuremath{n}}$ consisting of $\ensuremath{n}$ basis functions $\basiscomp[\basisind]$, $\basisind=0,\ldots\ensuremath{n}-1$ of maximal \emph{order} $\ensuremath{N}$ is called an \emph{angular basis}. Analogously, the symbol $\basis[\ensuremath{N}]$ can be used if the knowledge of $\ensuremath{N}$ is explicitly necessary. 
The so-called \emph{moments} of a given distribution function $\distribution$ with respect to $\basis$ are then defined by \begin{align} \label{eq:moments} \moments =\ints{{\basis}\distribution} = \left(\momentcomp{0},\ldots,\momentcomp{\ensuremath{n}-1}\right)^T, \end{align} where the integration $\ints{\cdot} = \int\limits_{-1}^1\cdot~d\SCheight$ is performed componentwise.\\ Assuming for simplicity $\basiscomp[0]\equiv 1$, the quantity $\momentcomp{0} = \ints{\basiscomp[0]\distribution}=\ints{\distribution}$ is called \emph{local particle density}. Furthermore, \emph{normalized moments} $\normalizedmoments = \left(\normalizedmomentcomp{1},\ldots,\normalizedmomentcomp{\ensuremath{n}-1}\right)\in\mathbb{R}^{\ensuremath{n}-1}$ are defined as \begin{align} \label{eq:NormalizedMoments} \normalizedmomentcomp{\basisind} = \cfrac{\momentcomp{\basisind}}{\momentcomp{0}}~, \qquad \basisind=1,\ldots,\ensuremath{n}-1. \end{align} \end{definition} To obtain a set of equations for $\moments$, \eqref{eq:TransportEquation1D} has to be multiplied through by $\basis$ and integrated over $[-1,1]$, giving \begin{align*} \ints{\basis\partial_\timevar\distribution}+\ints{\basis\partial_{\z}\SCheight\distribution} + \ints{\basis\ensuremath{\sigma_a}\distribution} = \ensuremath{\sigma_s}\ints{\basis\collision{\distribution}}+\ints{\basis\ensuremath{Q}}. \end{align*} Collecting known terms, and interchanging integrals and differentiation where possible, the moment system has the form \begin{align} \label{eq:MomentSystemUnclosed1D} \partial_\timevar\moments+\partial_{\z}\ints{\SCheight \basis\ansatz[\moments]} + \ensuremath{\sigma_a}\moments = \ensuremath{\sigma_s}\ints{\basis\collision{\ansatz[\moments]}}+\ints{\basis\ensuremath{Q}}. \end{align} The solution of \eqref{eq:MomentSystemUnclosed1D} is equivalent to that of \eqref{eq:TransportEquation1D} if $\basis$ is a basis of $\Lp{2}([-1,1],\mathbb{R})$.
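For the monomial basis $\basis = \left(1,\SCheight,\ldots,\SCheight^{\ensuremath{N}}\right)^T$ used later, the moments \eqref{eq:moments} and normalized moments \eqref{eq:NormalizedMoments} can be approximated by quadrature. A minimal sketch (hypothetical names, Gauss-Legendre quadrature):

```python
import numpy as np

def moments(psi, N, n_quad=40):
    """Moments u_i = <mu^i psi> of a density psi(mu) on [-1, 1] with
    respect to the monomial basis of order N."""
    mu, w = np.polynomial.legendre.leggauss(n_quad)
    basis = np.vander(mu, N + 1, increasing=True).T   # rows: 1, mu, ..., mu^N
    return basis @ (w * psi(mu))

def normalized_moments(u):
    """phi_i = u_i / u_0 for i = 1, ..., n - 1."""
    return u[1:] / u[0]
```

For the isotropic density $\distribution\equiv 1$ this yields $\moments = (2, 0, 2/3)^T$ for $\ensuremath{N}=2$, since Gauss-Legendre quadrature integrates polynomials of this degree exactly.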
Since it is impractical to work with an infinite-dimensional system, only a finite number of $\ensuremath{n}<\infty$ basis functions $\basis$ of order $\ensuremath{N}$ can be considered. Unfortunately, there always exists an index $\basisind\in\{0,\dots,\ensuremath{n}-1\}$ such that $\basiscomp\cdot\SCheight$ is not in the linear span of $\basis$. Therefore, the flux term cannot be expressed in terms of $\moments$ without additional information. Furthermore, the same might be true for the projection of the scattering operator onto the moment space, given by $\ints{\basis\collision{\distribution}}$. This is the so-called \emph{closure problem}. One usually prescribes some \emph{ansatz} distribution $\ansatz[\moments](\timevar,\spatialVariable,\SCheight):=\ansatz(\moments(\timevar,\spatialVariable),\basis(\SCheight))$ to calculate the unknown quantities in \eqref{eq:MomentSystemUnclosed1D}. Note that the dependence on the angular basis in the short-hand notation $\ansatz[\moments]$ is suppressed for notational simplicity. Finally, we write \eqref{eq:MomentSystemUnclosed1D} in the form of a standard first-order hyperbolic system of equations: \begin{align} \label{eq:GeneralHyperbolicSystem} \partial_\timevar\moments + \partial_{\x}\ensuremath{\bF}(\moments) = \ensuremath{\bs}\left(\moments\right), \end{align} where $\ensuremath{\bF}(\moments) = \ints{\SCheight \basis\ansatz[\moments]}$ and $\ensuremath{\bs}\left(\moments\right) = \ensuremath{\sigma_s}\ints{\basis\collision{\ansatz[\moments]}}+\ints{\basis\ensuremath{Q}}-\ensuremath{\sigma_a}\moments$.
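The closure problem can be made concrete for the monomial basis: two different non-negative densities may share all moments up to order $\ensuremath{N}$ while producing different fluxes for the highest moment, so the flux cannot be a function of $\moments$ alone. A small numerical illustration (the perturbation by a Legendre polynomial is a hypothetical choice, used because it is orthogonal to $1$, $\SCheight$ and $\SCheight^2$):

```python
import numpy as np

# Two non-negative densities with identical moments up to order N = 2 but
# different third moments: the flux of the highest moment, <mu * mu^2 psi>,
# is not determined by (u_0, u_1, u_2).
mu, w = np.polynomial.legendre.leggauss(40)
P3 = 0.5 * (5 * mu**3 - 3 * mu)      # third Legendre polynomial
psi1 = np.ones_like(mu)
psi2 = 1.0 + 0.5 * P3                 # still non-negative on [-1, 1]

u1 = np.array([np.dot(w, mu**i * psi1) for i in range(3)])
u2 = np.array([np.dot(w, mu**i * psi2) for i in range(3)])
flux1 = np.dot(w, mu**3 * psi1)       # third moment of psi1
flux2 = np.dot(w, mu**3 * psi2)       # third moment of psi2
```

The moment vectors `u1` and `u2` coincide, while the fluxes differ by $\tfrac12\int_{-1}^1 \mu^3 P_3\,d\mu = 2/35$; an ansatz is needed to single out one value.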
\subsection{Minimum-entropy approach} \label{sec:MinimumEntropy} In this paper, the ansatz density $\ansatz$ is reconstructed from the moments $\moments$ by minimizing the entropy functional \begin{align} \label{eq:entropyFunctional} \entropyFunctional(\distribution) = \ints{\entropy(\distribution)} \end{align} under the moment constraints \begin{align} \label{eq:MomentConstraints} \ints{\basis\distribution} = \moments. \end{align} The kinetic entropy density $\entropy:\mathbb{R}\to\mathbb{R}$ is strictly convex and twice continuously differentiable, and the minimum is taken over all functions $\distribution = \distribution(\SCheight)$ such that $\entropyFunctional(\distribution)$ is well defined. The obtained ansatz $\ansatz = \ansatz[\moments]$, solving this constrained optimization problem, is given by \begin{equation} \ansatz[\moments] = \argmin\limits_{\distribution:\entropy(\distribution)\in\Lp{1}}\left\{\ints{\entropy(\distribution)} : \ints{\basis \distribution} = \moments \right\}. \label{eq:primal} \end{equation} This problem, which must be solved at every point of the space-time mesh, is typically attacked via its strictly convex finite-dimensional dual, \begin{equation} \multipliers(\moments) := \argmin_{\tilde{\multipliers} \in \mathbb{R}^{\ensuremath{n}}} \ints{\ld{\entropy}(\basis^T \tilde{\multipliers})} - \moments^T \tilde{\multipliers}, \label{eq:dual} \end{equation} where $\ld{\entropy}$ is the Legendre dual of $\entropy$. The first-order necessary conditions for the multipliers $\multipliers(\moments)$ show that the solution to \eqref{eq:primal} has the form \begin{equation} \ansatz[\moments] = \ld{\entropy}' \left(\basis^T \multipliers(\moments) \right) \label{eq:psiME} \end{equation} where $\ld{\entropy}'$ is the derivative of $\ld{\entropy}$.\\ This approach is called the \emph{minimum-entropy closure} \cite{Levermore1996}.
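For the Maxwell-Boltzmann entropy used below, the dual problem \eqref{eq:dual} can be solved by a damped Newton iteration, since gradient and Hessian are available in closed form: $\ints{\basis\exp(\basis^T\multipliers)} - \moments$ and $\ints{\basis\basis^T\exp(\basis^T\multipliers)}$, respectively. The sketch below (monomial basis, fixed Gauss-Legendre quadrature, hypothetical names) omits the adaptive-basis stabilization used in the actual computations and is only reliable away from the realizable boundary:

```python
import numpy as np

def solve_dual(u, N, n_quad=40, tol=1e-10, max_iter=200):
    """Damped Newton iteration for the multipliers alpha of the
    Maxwell-Boltzmann dual problem  min_a <exp(b^T a)> - u^T a."""
    mu, w = np.polynomial.legendre.leggauss(n_quad)
    b = np.vander(mu, N + 1, increasing=True).T     # monomial basis at the nodes

    def dual(a):                                    # dual objective function
        return np.dot(w, np.exp(b.T @ a)) - np.dot(u, a)

    alpha = np.zeros(N + 1)
    alpha[0] = np.log(u[0] / 2.0)                   # isotropic initial guess
    for _ in range(max_iter):
        ansatz = np.exp(b.T @ alpha)                # minimum-entropy ansatz at the nodes
        grad = b @ (w * ansatz) - u                 # gradient of the dual
        if np.linalg.norm(grad) < tol:
            return alpha
        hess = (b * (w * ansatz)) @ b.T             # Hessian of the dual
        step = np.linalg.solve(hess, grad)
        t = 1.0                                     # Armijo backtracking
        while dual(alpha - t * step) > dual(alpha) - 1e-4 * t * np.dot(grad, step):
            t *= 0.5
        alpha -= t * step
    raise RuntimeError("dual iteration did not converge")
```

Recovering prescribed multipliers serves as a consistency check; for moments close to the realizable boundary the Hessian becomes ill-conditioned and the adaptive-basis techniques of the cited references become necessary.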
The resulting model has many desirable properties: symmetric hyperbolicity, bounded eigenvalues of the directional flux Jacobian and the direct existence of an entropy-entropy flux pair (compare \cite{Levermore1996,Schneider2016}).\\ The kinetic entropy density $\entropy$ can be chosen according to the physics being modelled. As in \cite{Levermore1996,Hauck2010}, Maxwell-Boltzmann entropy% \begin{align} \label{eq:EntropyM} \entropy(\distribution) = \distribution \log(\distribution) - \distribution \end{align} is used, thus $\ld{\entropy}(p) = \ld{\entropy}'(p) = \exp(p)$. This entropy is used for non-interacting particles as in an ideal gas. We use the modification of the adaptive-basis optimization routine \cite{Alldredge2014} as proposed in \cite{Schneider2015b} to solve \eqref{eq:dual}.\\ Substituting $\distribution$ in \eqref{eq:MomentSystemUnclosed1D} with $\ansatz[\moments]$ yields a closed system of equations for $\moments$: \begin{align} \label{eq:MomentSystemClosed} \partial_\timevar\moments+\partial_{\z}\ints{\SCheight \basis\ansatz[\moments]} + \ensuremath{\sigma_a}\moments = \ensuremath{\sigma_s}\ints{\basis\collision{\ansatz[\moments]}}+\ints{\basis\ensuremath{Q}}. \end{align} In this paper, the full-moment basis $\basis = \left(1,\SCheight,\ldots,\SCheight^\ensuremath{N}\right)$ will be used. Nevertheless, the scheme can be transferred directly to other bases like the \emph{half-moment monomial basis} ($\basiscomp = \indicator{[-1,0]}\SCheight^\basisind$ or $\basiscomp = \indicator{[0,1]}\SCheight^\basisind$) \cite{DubKla02,DubFraKlaTho03,Ritter2016} or the mixed-moment basis ($\basis = \left(1,\SCheight \indicator{[0,1]},\ldots,\SCheight^\ensuremath{N} \indicator{[0,1]},\SCheight \indicator{[-1,0]},\ldots,\SCheight^\ensuremath{N} \indicator{[-1,0]}\right)$) \cite{Schneider2014,Frank07,Schneider2015c}. 
Similarly, the results are not restricted to the minimum-entropy approach but can be transferred to other realizable closures like Kershaw \cite{Ker76,Schneider2016a,Schneider2015} or the quadrature method of moments \cite{Yuan2012,Fox2008,Fox2009,Vikas2013}. \subsection{Realizability} Since the underlying kinetic density to be approximated is non-negative, a moment vector only makes sense physically if it can be associated with a non-negative distribution function. In this case the moment vector is called \emph{realizable}. \begin{definition} \label{def:RealizableSet} The \emph{realizable set} $\RD{\basis}{}$ is $$ \RD{\basis}{} = \left\{\moments~:~\exists \distribution(\SCheight)\ge 0,\, \momentcomp{0} = \ints{\distribution} > 0, \text{ such that } \moments =\ints{\basis\distribution} \right\}. $$ If $\moments\in\RD{\basis}{}$, then $\moments$ is called \emph{realizable}. Any $\distribution$ such that $\moments =\ints{\basis \distribution}$ is called a \emph{representing density}. \end{definition} \begin{remark} \mbox{ } \begin{enumerate}[(a)] \item The realizable set is a convex cone, and \item Representing densities are not necessarily unique. \end{enumerate} \end{remark} Additionally, since the entropy ansatz has the form \eqref{eq:psiME}, in the Maxwell-Boltzmann case, the optimization problem \eqref{eq:primal} only has a solution if the moment vector lies in the ansatz space $$ \ensuremath{\cA} := \left\{\ints{\basis \ansatz[\moments]}\stackrel{\eqref{eq:psiME}}{=} \ints{\basis \ld{\entropy}'\left(\basis^T\multipliers\right) } : \multipliers \in \mathbb{R}^{\ensuremath{n}} \right\}. $$ In the case of a bounded angular domain, the ansatz space $\ensuremath{\cA}$ is equal to the set of realizable moment vectors \cite{Jun00}. Therefore, it is sufficient to focus on realizable moments only. The definition of the realizable set is not constructive, making it hard to check if a moment vector is realizable or not. 
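To make this concrete for the full-moment basis of order two on $[-1,1]$: moments of non-negative densities always satisfy $\momentcomp{0}\geq\momentcomp{2}\geq 0$ (from $\SCheight^2\leq 1$) and $\momentcomp{0}\momentcomp{2}\geq\momentcomp{1}^2$ (Cauchy-Schwarz). The following sketch (hypothetical names) samples random non-negative densities and checks these conditions:

```python
import numpy as np

def is_realizable_order2(u, tol=1e-12):
    """Realizability conditions for full moments (u_0, u_1, u_2) of a
    non-negative density on [-1, 1]."""
    u0, u1, u2 = u
    return (u0 >= u2 - tol) and (u2 >= -tol) and (u0 * u2 >= u1**2 - tol)

rng = np.random.default_rng(0)
mu, w = np.polynomial.legendre.leggauss(20)
basis = np.vander(mu, 3, increasing=True).T

# quadrature moments of random non-negative densities are always realizable
all_realizable = all(
    is_realizable_order2(basis @ (w * rng.random(mu.size)))
    for _ in range(1000)
)
```

Conversely, vectors violating either inequality, such as $(1,0,2)$ with $\momentcomp{2}>\momentcomp{0}$, cannot be represented by any non-negative density on $[-1,1]$.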
There are several works on concrete representations of the realizable set for different bases, e.g. \cite{Curto1991,Ker76,Schneider2014,Schneider2015a,Curto1996,Schneider2015c}. As an example, the full-moment realizable set of order $\ensuremath{N}=2$ is given by \cite{Curto1991} \begin{align} \label{eq:M2Realizability2} \RD{\basis}{} = \left\{\moments\in\mathbb{R}^3~:~\momentcomp{0}\geq\abs{\momentcomp{1}},~\momentcomp{0}\geq\momentcomp{2}\geq 0,~\momentcomp{0}\momentcomp{2}\geq\momentcomp{1}^2\right\}. \end{align} Fortunately, since we only use a first-order scheme, no information about the realizable set (except its convexity) is needed in the following. Note that this might no longer be true when higher-order schemes (in space and time) are used, see e.g. \cite{Schneider2015b,Zhang2010,Zhang2011a}. \section{Conclusions and outlook} \label{sec:Conclusions} We derived an implicit-explicit scheme for moment systems that are generated by a non-negative ansatz. This scheme preserves realizability under a standard CFL condition, even in the case of stiff source terms (e.g. strong scattering or absorption), while the implicit systems have to be solved only locally in every space-time cell. In many cases, these implicit systems are linear, resulting in a very efficient algorithm. Convergence of the algorithm was tested against a manufactured solution, confirming the designed first order of convergence. Furthermore, the plane-source problem served as a benchmark test, showing how close to the realizability boundary the scheme can get. While this first-order scheme is easy to implement, the efficiency of high-order schemes is needed to obtain reasonable approximations in higher dimensions within an acceptable time. Future work will investigate how to couple higher-order IMEX schemes with the fully-explicit, high-order kinetic \cite{Schneider2015b} and discontinuous-Galerkin scheme \cite{Schneider2015a}, removing the stiffness from these two methods.
Furthermore, it is unclear if the mixed-moment model \cite{Frank07,Schneider2014} in combination with the Laplace-Beltrami operator $\LaplaceBeltramiProjection$ fulfils the assumptions of \thmref{thm:RP} (it contains the microscopic quantity $\ansatz[\moments]\left(0\right)$, i.e. the solution of \eqref{eq:ReducedMomentsEquation} depends on the chosen ansatz). Nevertheless, \eqref{eq:discretizedform} performs well in practice even in this situation, which could mean that either the above assumptions are fulfilled or \thmref{thm:RP} can be extended to a weaker set of assumptions. \section{Realizability} Since the underlying kinetic density to be approximated is non-negative, a moment vector only makes sense physically if it can be associated with a non-negative distribution function. In this case the moment vector is called \emph{realizable}. \begin{definition} \label{def:RealizableSet} The \emph{realizable set} $\RD{\basis}{}$\index{Realizability@\textbf{Realizability}!Realizable set $\RD{\basis}{}$} is $$ \RD{\basis}{} = \left\{\moments~:~\exists \distribution(\SCheight)\ge 0,\, \momentcomp{0} = \ints{\distribution} > 0, \text{ such that } \moments =\ints{\basis\distribution} \right\}. $$ If $\moments\in\RD{\basis}{}$, then $\moments$ is called \emph{realizable}. Any $\distribution$ such that $\moments =\ints{\basis \distribution}$ is called a \emph{representing density}. If $\distribution$ is additionally a linear combination of Dirac deltas \cite{Hassani2009,Mathematics2011,Kuo2006}, it is called \emph{atomic} \cite{Curto1991}. \end{definition} \begin{remark} \mbox{ } \begin{enumerate}[(a)] \item The realizable set is a convex cone, and \item Representing densities are not necessarily unique. 
\end{enumerate} \end{remark} Additionally, since the entropy ansatz has the form \eqref{eq:psiME}, in the Maxwell-Boltzmann case, the optimization problem \eqref{eq:primal} only has a solution if the moment vector lies in the ansatz space $$ \ensuremath{\cA} := \left\{\ints{\basis \ansatz[\moments]}\stackrel{\eqref{eq:psiME}}{=} \ints{\basis \ld{\entropy}'\left(\basis^T\multipliers\right) } : \multipliers \in \mathbb{R}^{\ensuremath{n}} \right\}. $$ In the case of a bounded angular domain, the ansatz space $\ensuremath{\cA}$ is equal to the set of realizable moment vectors \cite{Jun00}. Therefore, it is sufficient to focus on realizable moments only. Unfortunately, the definition of the realizable set is not constructive, making it hard to check if a moment vector is realizable or not. Therefore, other characterizations of $\RD{\basis}{}$ are necessary. For example, in the classical mixed-moment problem of first order, the realizable set is characterized by the inequalities \cite{Frank07,Schneider2014} \begin{align*} \momentcomp{1+}-\momentcomp{1-}\leq \momentcomp{0} \quad \mbox{and} \quad \pm \momentcomp{1\pm}\geq 0. \end{align*} In this paper, we want to focus on the lowest-order non-trivial model of the differentiable mixed-moment hierarchy, i.e. $\ensuremath{N} = 2$. \begin{theorem} The moment vector $\moments = \left(\momentcomp{0},\momentcomp{1},\momentcomp{2+},\momentcomp{2-}\right)\in\mathbb{R}^4$ is realizable, i.e. $\moments\in\RD{\basis}{}$, if and only if \begin{align} \label{eq:Realizability1D} \momentcomp{2+} - \sqrt{\momentcomp{2-}\, \left(\momentcomp{0}-\momentcomp{2+}\right)} &\leq \momentcomp{1} \leq \sqrt{\momentcomp{2+}\, \left(\momentcomp{0}-\momentcomp{2-}\right)} - \momentcomp{2-}\\ \momentcomp{0},\momentcomp{2\pm}\geq 0. \label{eq:Realizability1Db} \end{align} \end{theorem} \begin{proof} At first, we want to show that \eqref{eq:Realizability1D} and \eqref{eq:Realizability1Db} are necessary. 
Assume that $\distribution\geq 0$ is arbitrary but fixed. Note that $\momentcomp{0\pm}\geq \pm\momentcomp{1\pm}\geq \momentcomp{2\pm}\geq 0$ and $\momentcomp{0\pm}\momentcomp{2\pm}\geq \momentcomp{1\pm}^2$ due to the half-moment realizability conditions \cite{Schneider2014,Curto1991}. Then we have (using $\momentcomp{0-}+\momentcomp{0+} = \momentcomp{0}$) that \begin{align*} \momentcomp{2-}\, \left(\momentcomp{0}-\momentcomp{2+}\right) = \momentcomp{2-}\momentcomp{0-}+ \momentcomp{2-}\left(\momentcomp{0+}-\momentcomp{2+}\right) \geq \momentcomp{1-}^2 + \momentcomp{2-}\left(\momentcomp{0+}-\momentcomp{2+}\right) \geq \momentcomp{1-}^2. \end{align*} Since $\momentcomp{1-}\leq 0$ it follows that \begin{gather*} \sqrt{\momentcomp{2-}\, \left(\momentcomp{0}-\momentcomp{2+}\right) } \geq \abs{\momentcomp{1-}} = -\momentcomp{1-}\\ \Leftrightarrow\\ -\sqrt{\momentcomp{2-}\, \left(\momentcomp{0}-\momentcomp{2+}\right) } \leq \momentcomp{1-}. \end{gather*} Therefore \begin{align*} \momentcomp{2+} - \sqrt{\momentcomp{2-}\, \left(\momentcomp{0}-\momentcomp{2+}\right)} \leq \momentcomp{2+}+\momentcomp{1-} \leq \momentcomp{1+}+\momentcomp{1-} = \momentcomp{1}. \end{align*} The upper bound can be shown to be necessary in a similar way.\\ Condition \eqref{eq:Realizability1Db} follows from the non-negativity of $1$ and $\SCheight^2$. We want to remark that the standard second-order full-moment realizability condition for $\momentcomp{2} = \momentcomp{2+}+\momentcomp{2-}$, namely $\momentcomp{0}\left(\momentcomp{2+}+\momentcomp{2-}\right)\geq \momentcomp{1}^2$, is implied by \eqref{eq:Realizability1D} and \eqref{eq:Realizability1Db}.
To show that the above inequalities are also sufficient, we provide a non-negative realizing distribution with support in $[-1,1]$: \begin{align*} \distribution = \momentcomp{0}\left(\cfrac{\normalizedmomentcomp{1+}^2}{\normalizedmomentcomp{2+}}\cdot\ensuremath{\delta}\left(\SCheight-\cfrac{\normalizedmomentcomp{2+}}{\normalizedmomentcomp{1+}}\right)+\cfrac{\normalizedmomentcomp{1-}^2}{\normalizedmomentcomp{2-}}\cdot\ensuremath{\delta}\left(\SCheight-\cfrac{\normalizedmomentcomp{2-}}{\normalizedmomentcomp{1-}}\right)\right), \end{align*} with \begin{align*} \normalizedmomentcomp{1+} = \frac{\normalizedmomentcomp{2+}\, \left(\normalizedmomentcomp{1} + \normalizedmomentcomp{2-}\, \sqrt{\frac{ - {\normalizedmomentcomp{1}}^2 + \normalizedmomentcomp{2-} + \normalizedmomentcomp{2+}}{\normalizedmomentcomp{2-}\, \normalizedmomentcomp{2+}}}\right)}{\normalizedmomentcomp{2-} + \normalizedmomentcomp{2+}}\\ \normalizedmomentcomp{1-} = \frac{\normalizedmomentcomp{2-}\, \left(\normalizedmomentcomp{1} - \normalizedmomentcomp{2+}\, \sqrt{\frac{ - {\normalizedmomentcomp{1}}^2 + \normalizedmomentcomp{2-} + \normalizedmomentcomp{2+}}{\normalizedmomentcomp{2-}\, \normalizedmomentcomp{2+}}}\right)}{\normalizedmomentcomp{2-} + \normalizedmomentcomp{2+}} \end{align*} and $\normalizedmomentcomp{1+}=\normalizedmomentcomp{1-} = 0$ if $\normalizedmomentcomp{2+}=\normalizedmomentcomp{2-} = 0$. In this case, $\distribution = \momentcomp{0}\ensuremath{\delta}(\SCheight)$ (due to the quadratic term the second moment vanishes faster than the first moment so we have $\frac{\normalizedmomentcomp{2\pm}}{\normalizedmomentcomp{1\pm}}\to 0$ and $\frac{\normalizedmomentcomp{1\pm}^2}{\normalizedmomentcomp{2\pm}}\to \frac{\normalizedmomentcomp{2\mp}-\normalizedmomentcomp{1}^2}{\normalizedmomentcomp{2\mp}}$). 
It is simple to check that $\normalizedmomentcomp{1+}+\normalizedmomentcomp{1-} = \normalizedmomentcomp{1}$ and $\cfrac{\normalizedmomentcomp{1+}^2}{\normalizedmomentcomp{2+}}+\cfrac{\normalizedmomentcomp{1-}^2}{\normalizedmomentcomp{2-}} = 1$, i.e. all moments are correctly represented. It remains to show that under \eqref{eq:Realizability1D} we have that $\cfrac{\normalizedmomentcomp{2+}}{\normalizedmomentcomp{1+}}\in[0,1]$ and $\cfrac{\normalizedmomentcomp{2-}}{\normalizedmomentcomp{1-}}\in[-1,0]$, i.e. \begin{align*} 0 \leq \normalizedmomentcomp{2\pm} \leq \pm\normalizedmomentcomp{1\pm}. \end{align*} This corresponds to the standard half-moment realizability conditions of second order. We first note that \eqref{eq:Realizability1D}+\eqref{eq:Realizability1Db} imply that $\normalizedmomentcomp{2\pm}\in [0,1]$ since otherwise the bounds become complex. Second, we have that $\normalizedmomentcomp{2+} - \sqrt{\normalizedmomentcomp{2-}\, \left(1-\normalizedmomentcomp{2+}\right)} = \sqrt{\normalizedmomentcomp{2+}\, \left(1-\normalizedmomentcomp{2-}\right)} - \normalizedmomentcomp{2-}$ if and only if $\normalizedmomentcomp{2+} = 1-\normalizedmomentcomp{2-}$ or $\normalizedmomentcomp{2+} = \normalizedmomentcomp{2-} = 0$, implying the classical full-moment realizability conditions $\normalizedmomentcomp{1}\in[-1,1]$ and $\normalizedmomentcomp{2}\leq 1$. We start the investigation at the different parts of the realizability boundary. Let $\normalizedmomentcomp{1} = \sqrt{\normalizedmomentcomp{2+}\, \left(1-\normalizedmomentcomp{2-}\right)} - \normalizedmomentcomp{2-}$. 
Plugging this into the definition of $\normalizedmomentcomp{1+}$ we get that, after some elementary transformations, \begin{align*} \normalizedmomentcomp{1+} &= \cfrac{\normalizedmomentcomp{2+}}{\normalizedmomentcomp{2+}+\normalizedmomentcomp{2-}}\left(\sqrt{1-\normalizedmomentcomp{2-}}\left(\sqrt{\normalizedmomentcomp{2+}}+\cfrac{\normalizedmomentcomp{2-}}{\sqrt{\normalizedmomentcomp{2+}}}\right)\right)\\ &\stackrel{1\geq\normalizedmomentcomp{2+}+\normalizedmomentcomp{2-}}{\geq} \cfrac{\normalizedmomentcomp{2+}}{\normalizedmomentcomp{2+}+\normalizedmomentcomp{2-}}\left(\sqrt{\normalizedmomentcomp{2+}}\left(\sqrt{\normalizedmomentcomp{2+}}+\cfrac{\normalizedmomentcomp{2-}}{\sqrt{\normalizedmomentcomp{2+}}}\right)\right) = \normalizedmomentcomp{2+}. \end{align*} Similarly, we obtain $-\normalizedmomentcomp{1-}\geq \normalizedmomentcomp{2-}$ and the same in the case $\normalizedmomentcomp{1} = \normalizedmomentcomp{2+} - \sqrt{\normalizedmomentcomp{2-}\, \left(1-\normalizedmomentcomp{2+}\right)}$. Since the realizable set is convex, the same argument extends to the interior of the above set. \end{proof} \begin{figure}[h!] \begin{center} \includemovie[poster,toolbar,label=mesh.u3d,text={\externaltikz{Realizability}{\relinput{Images/Realizability}}}, 3Dviews2=Images/my_views.vws]{2\figurewidth} {2.3\figureheight}{Images/mesh.u3d} \end{center} \caption{The normalized realizable set for the differentiable mixed-moment basis of order $\ensuremath{N}=2$.\\ \textbf{Online version}: Press to activate 3D view ($\ensuremath{x}$-axis (red): $\normalizedmomentcomp{2+}$, $\ensuremath{y}$-axis (green): $\normalizedmomentcomp{2-}$, $z$-axis (blue): $\normalizedmomentcomp{1}$)} \label{fig:Realizability} \end{figure} \figref{fig:Realizability} shows the normalized realizable set (i.e. $\momentcomp{0}=1$), defined by \eqref{eq:Realizability1D} and \eqref{eq:Realizability1Db}.
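The construction in the proof can also be verified numerically. The sketch below (hypothetical names) evaluates the formulas for $\normalizedmomentcomp{1\pm}$ at a moment vector strictly inside the realizable set and checks that the two atoms lie in the correct half-intervals and reproduce the normalized moments:

```python
import numpy as np

def atoms(phi1, phi2p, phi2m):
    """Weights and positions of the two-atom representing density for
    normalized moments (phi_1, phi_{2+}, phi_{2-}) with phi_{2+-} > 0."""
    root = np.sqrt((-phi1**2 + phi2m + phi2p) / (phi2m * phi2p))
    phi1p = phi2p * (phi1 + phi2m * root) / (phi2m + phi2p)
    phi1m = phi2m * (phi1 - phi2p * root) / (phi2m + phi2p)
    weights = (phi1p**2 / phi2p, phi1m**2 / phi2m)
    positions = (phi2p / phi1p, phi2m / phi1m)
    return weights, positions

# a point strictly inside the realizable set of the theorem above
phi1, phi2p, phi2m = 0.1, 0.3, 0.2
(wp, wm), (mp, mm) = atoms(phi1, phi2p, phi2m)
```

The half moments $\normalizedmomentcomp{2\pm}$ are reproduced separately because each atom lies in its own half-interval.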
\begin{remark} \eqref{eq:Realizability1D} gives a surprising insight into the realizability of mixed-moment models. While the realizable set for full-moment and classical mixed-moment models can be characterized by inequalities with rational functions of the moments, the differentiable mixed-moment model requires inequalities involving square roots. This implies that it might be impossible to transfer the general mixed-moment structure (which uses a linearity argument) shown in \cite{Schneider2014} to the differentiable case. \end{remark} \section{Realizability of the reduced equation} \label{sec:RealizabilityReduced} Before treating the space-dependent transport equation \eqref{eq:TransportEquation1D}, we want to investigate \eqref{eq:CollisionDistributionEquation} in more detail. The following example shows why explicit schemes for the Laplace-Beltrami operator fail. \begin{example} Consider the moment system \begin{align} \label{eq:LaplaceBeltramiMomentsEquation} \partial_\timevar \moments = \ints{\basis\LaplaceBeltramiProjection\ansatz[\moments]}, \end{align} where the ansatz $\ansatz[\moments]$ can be chosen accordingly as \eqref{eq:psiME}, if necessary.\\ It is possible to show that \eqref{eq:CollisionDistributionEquation} has a solution in $\Lp{2}([-1,1],\R_{\geq 0})$ for every $\timevar\geq 0$ \cite{Kuo2006,risken1996fokker,hsu2002stochastic,hackenbroch1994stochastische}. Therefore, it is possible to expand $\distribution$ in $\SCheight$ in terms of the Legendre polynomials $P_\basisind$, which form an orthogonal basis of $\Lp{2}$ and are eigenfunctions of $\LaplaceBeltramiProjection$. Then, \eqref{eq:CollisionDistributionEquation} transforms to \begin{align*} \sum\limits_{\basisind=0}^{\infty} \left(\partial_\timevar\multiplierscomp{\basisind}+\basisind\left(\basisind+1\right)\multiplierscomp{\basisind}\right)c_\basisind P_\basisind = 0, \end{align*} where the coefficients $c_\basisind$ are normalization constants.
This equation can be stated equivalently as an infinite, decoupled system of ordinary differential equations \begin{align*} \partial_\timevar\multiplierscomp{\basisind} = -\basisind\left(\basisind+1\right)\multiplierscomp{\basisind},\qquad\quad \basisind \in \N_{\geq 0} \end{align*} with solution \begin{align*} \multiplierscomp{\basisind}(\timevar) = e^{-\basisind\left(\basisind+1\right)\timevar}\multiplierscomp{\basisind}(0), \end{align*} where $\multiplierscomp{\basisind}(0)$ are the Fourier coefficients of $\distribution(0,\SCheight)$. For $\timevar\to\infty$ it obviously holds that \begin{align*} \lim\limits_{\timevar\to\infty}\multiplierscomp{\basisind}(\timevar) = 0,&& \basisind = 1,\ldots,\infty \end{align*} which means that $\distribution(\timevar,\SCheight)\stackrel{\timevar\to\infty}{\longrightarrow} \multiplierscomp{0}(0)$. This implies that for every initial condition for \eqref{eq:CollisionDistributionEquation} a stationary solution is attained and that it is isotropic. This is not very surprising since the constants are in the kernel of $\LaplaceBeltramiProjection$.\\ The corresponding second-order, full-moment vector field \begin{align} \label{eq:LaplaceBeltramiMomentsEquationFM} \ints{\basis[2]\LaplaceBeltrami\ansatz} = (0,-2\momentcomp{1},-6\momentcomp{2}+2\momentcomp{0})^T \end{align} is plotted in normalized moments in \figref{fig:M2LaplaceBeltramiVectorField}. \begin{figure}[h] \centering \externaltikz{M2LaplaceBeltramiVectorField}{\relinput{Images/M2LaplaceBeltramiVectorField}} \caption{Vector field of the right-hand side and some solution trajectories of \eqref{eq:LaplaceBeltramiMomentsEquationFM} for $\ensuremath{N} = 2$ and $\momentcomp{0} = 1$. The length of the arrows is scaled by $0.03$.} \label{fig:M2LaplaceBeltramiVectorField} \end{figure} Some solution curves (red dotted), starting at the realizability boundary (red triangles), are shown as well. 
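The relaxation to the isotropic point can be reproduced by direct time integration of the full-moment vector field \eqref{eq:LaplaceBeltramiMomentsEquationFM}. A minimal sketch (forward Euler with a very small step size, used here purely for illustration; the starting points lie on the realizability boundary away from the corner points $(\pm 1, 1)$):

```python
import numpy as np

def rhs(u):
    # right-hand side of the full-moment system: (0, -2*u1, -6*u2 + 2*u0)
    return np.array([0.0, -2.0 * u[1], -6.0 * u[2] + 2.0 * u[0]])

def integrate(u_start, t_end=5.0, dt=1e-4):
    u = np.array(u_start, dtype=float)
    for _ in range(int(t_end / dt)):
        u = u + dt * rhs(u)        # forward Euler, small illustrative step
    return u

# trajectories started on the realizability boundary u2 = u1**2 (with u0 = 1)
for u1_start in (-0.9, 0.0, 0.9):
    u = integrate([1.0, u1_start, u1_start**2])
    # the stationary state is the isotropic point (1, 0, 1/3)
    assert np.allclose(u, [1.0, 0.0, 1.0 / 3.0], atol=1e-3)
```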
All those curves end in the isotropic point (red dot), implying that the stationary solution of \eqref{eq:CollisionDistributionEquation} is recovered. This is not by accident. The solution of \eqref{eq:LaplaceBeltramiMomentsEquation} with a full-moment basis turns out to be just the projection of \eqref{eq:CollisionDistributionEquation} onto the corresponding moment space. This is proven below in \lemref{lem:LaplaceBeltramiFMStayRealizable}.\\ As visible in \figref{fig:M2LaplaceBeltramiVectorField}, the vector field at $\pm\normalizedmomentcomp{1} = \normalizedmomentcomp{2} = 1$ is tangential to the realizability boundary $\dRDone{\fmbasis[2]}$. Therefore, no explicit time discretization of \eqref{eq:LaplaceBeltramiMomentsEquation} generally preserves realizability with a fixed positive time step. This can be shown by a simple calculation. Due to \eqref{eq:CollisionPropertyMass}, it suffices to choose $\momentcomp{0} = 1$ and therefore the explicit discretization with step size $\ensuremath{\Delta \timevar}$ in normalized moments reads \begin{align*} \normalizedmomentcomp{1}(\timevar+\ensuremath{\Delta \timevar}) &= \normalizedmomentcomp{1}(\timevar)-2\ensuremath{\Delta \timevar}\normalizedmomentcomp{1}(\timevar),\\ \normalizedmomentcomp{2}(\timevar+\ensuremath{\Delta \timevar}) &= \normalizedmomentcomp{2}(\timevar)-6\ensuremath{\Delta \timevar}\normalizedmomentcomp{2}(\timevar)+2\ensuremath{\Delta \timevar}. \end{align*} Plugging in $\normalizedmoments(\timevar) = \left(1,1\right)$, the updated normalized moment is given by \begin{align*} \normalizedmomentcomp{1}(\timevar+\ensuremath{\Delta \timevar}) &= 1-2\ensuremath{\Delta \timevar},\\ \normalizedmomentcomp{2}(\timevar+\ensuremath{\Delta \timevar}) &= 1-4\ensuremath{\Delta \timevar}.
\end{align*} The update $\normalizedmoments(\timevar+\ensuremath{\Delta \timevar})$ is realizable (see \eqref{eq:M2Realizability2}) if \begin{align*} 1 \geq \normalizedmomentcomp{2}(\timevar+\ensuremath{\Delta \timevar}) = 1-4\ensuremath{\Delta \timevar}\geq \normalizedmomentcomp{1}(\timevar+\ensuremath{\Delta \timevar})^2 = \left(1-2\ensuremath{\Delta \timevar}\right)^2 = 1-4\ensuremath{\Delta \timevar}+4\ensuremath{\Delta \timevar}^2. \end{align*} The last inequality implies $4\ensuremath{\Delta \timevar}^2\leq 0$, which, for $\ensuremath{\Delta \timevar}\in\mathbb{R}$, is only possible if $\ensuremath{\Delta \timevar}=0$. \end{example} \begin{remark} This is in contrast to the linear collision operator \eqref{eq:collisionOperatorR}, which is in principle easy to control since its moments are always of the form \begin{align*} \ints{\basis \collision{\distribution}} = \collisionrealizablepart-\moments, \end{align*} where $\collisionrealizablepart\in\RD{\basis}{}$ \cite{Schneider2015a}. Note that this is true for any angular basis, not only for full moments. Since the realizable set is a convex cone, this additional realizable term does not affect the realizability of the moment system's solution in a negative way, even if everything is discretized explicitly. The explicit update of the moment system with this collision operator reads \begin{align*} \moments(\timevar+\ensuremath{\Delta \timevar}) = \left(1-\ensuremath{\Delta \timevar}\right)\moments(\timevar)+\ensuremath{\Delta \timevar}\collisionrealizablepart, \end{align*} which is realizable as long as $0\leq \ensuremath{\Delta \timevar}\leq 1$. This corresponds to the standard stability condition for the explicit Euler scheme and depends on the stiffness of the system under consideration. \end{remark} As a consequence of \assref{ass:CollisionOperator}(3), the solution $\distribution$ of \eqref{eq:CollisionDistributionEquation} is non-negative.
Using this information one can conclude realizability of the exact solution of \begin{align} \label{eq:ReducedMomentsEquation} \partial_\timevar \moments = \ints{\basis\collision{\ansatz[\moments]}} =: \collisionu{\moments}, \end{align} under the following assumptions. \begin{assumption} \label{ass:MomentSystem} \begin{enumerate}[(a)] \item The map $\moments\mapsto\ints{\basis\collision{\ansatz[\moments]}}$ is Lipschitz-continuous in $\moments$ (with respect to any norm in $\mathbb{R}^\ensuremath{n}$). \item \eqref{eq:ReducedMomentsEquation} admits a unique solution $\moments(\timevar)$ for all $\timevar\geq 0$. \end{enumerate} \end{assumption} \begin{lemma} \label{lem:LaplaceBeltramiFMStayRealizable} \mbox{ }\\ Let $\moments(0)\in\RD{\basis}{}$ and \assref{ass:MomentSystem} be valid. Then, the solution $\moments(\timevar)$ of \eqref{eq:ReducedMomentsEquation} satisfies $\moments(\timevar)\in\RD{\basis}{}$ for all $\timevar\geq 0$. \end{lemma} \begin{proof} Let $\distribution(\timevar,\SCheight)$ denote the solution of \eqref{eq:CollisionDistributionEquation}. As mentioned before, $\distribution(\timevar,\SCheight)\geq 0$ for all $\timevar\geq 0$ and $\SCheight\in[-1,1]$. Defining the moments of $\distribution$ as $\moments[\distribution] = \ints{\basis\distribution}$, it is immediately obvious that $\moments[\distribution]$ also solves \eqref{eq:ReducedMomentsEquation} and $\moments[\distribution](\timevar)\in\RD{\basis}{}$ for all $\timevar\geq 0$. Due to the uniqueness of the solution of \eqref{eq:ReducedMomentsEquation} (\assref{ass:MomentSystem}(b)) it follows that $\moments = \moments[\distribution]$, which completes the proof. \end{proof} Consequently, an implicit discretization of the moment system preserves realizability. \begin{corollary} \label{cor:ImplicitDiscretization} Let $\moments(0)\in\RD{\basis}{}$.
Then the implicit time-discretization \begin{align} \label{eq:ImplicitDiscretization} \moments(\timevar+\ensuremath{\Delta \timevar}) = \moments(\timevar)+\ensuremath{\Delta \timevar} \collisionu{\moments(\timevar+\ensuremath{\Delta \timevar})} \end{align} of \eqref{eq:ReducedMomentsEquation} satisfies $\moments(\timevar)\in\RD{\basis}{}$ for all $\timevar = j\ensuremath{\Delta \timevar}$, $j\in\N$. \end{corollary} \begin{proof} Similar to the proof of \lemref{lem:LaplaceBeltramiFMStayRealizable}, one can make use of the discretization of the kinetic equation \eqref{eq:CollisionDistributionEquation}, which reads \begin{align*} \distribution(\timevar+\ensuremath{\Delta \timevar},\SCheight) = \distribution(\timevar,\SCheight)+\ensuremath{\Delta \timevar} \collision{\distribution(\timevar+\ensuremath{\Delta \timevar},\SCheight)}. \end{align*} Using \eqref{eq:CollisionDistributionPositivity} it follows that $\distribution(\timevar+\ensuremath{\Delta \timevar},\SCheight)\geq 0$, since by assumption $\distribution(\timevar,\SCheight)\geq 0$. The solution of the system \eqref{eq:ImplicitDiscretization} is unique by Banach's fixed point theorem (using a norm that is suitably scaled by the Lipschitz constant of $\ensuremath{\bC}$). As above, this solution has to satisfy $ \moments(\timevar+\ensuremath{\Delta \timevar}) = \ints{\basis \distribution(\timevar+\ensuremath{\Delta \timevar})}$ and is therefore realizable. \end{proof} \begin{example} We want to show that the Laplace-Beltrami operator satisfies \eqref{eq:CollisionDistributionPositivity}. Assuming that at time $\timevar$ the solution is non-negative, the implicit discretization of \eqref{eq:CollisionDistributionEquation} can be written as \begin{align} \label{eq:DistributionImplicitDiscretization} (I-\ensuremath{\Delta \timevar}\LaplaceBeltramiProjection) \distribution(\timevar+\ensuremath{\Delta \timevar}) = \distribution(\timevar) \geq 0. 
\end{align} Since the Laplace-Beltrami operator is a negative operator, the operator $(I-\ensuremath{\Delta \timevar}\LaplaceBeltramiProjection)$ is positive and consequently $\distribution(\timevar+\ensuremath{\Delta \timevar})\geq 0$. This can be derived rigorously by defining the Hilbert space \begin{align*} \ensuremath{\mathcal{V}} = \{\ensuremath{v}\in\Lp{2}(-1,1)~|~\sqrt{1-\SCheight^2}\cfrac{d\ensuremath{v}}{d\SCheight}\in\Lp{2}(-1,1) \} \end{align*} with the inner product \begin{align*} \left(\ensuremath{v},\distribution\right)_\ensuremath{\mathcal{V}} = \ints{\ensuremath{v}\distribution + \ensuremath{\Delta \timevar}(1-\SCheight^2)\cfrac{d\ensuremath{v}}{d\SCheight}\cfrac{d\distribution}{d\SCheight}} \end{align*} and the induced norm $\norm{\ensuremath{v}}{\ensuremath{\mathcal{V}}} = \sqrt{\left(\ensuremath{v},\ensuremath{v}\right)_\ensuremath{\mathcal{V}}}$. These definitions roughly follow \cite{Degond1987}. The weak formulation of \eqref{eq:DistributionImplicitDiscretization} reads \begin{align*} \left(\ensuremath{v},\distribution(\timevar+\ensuremath{\Delta \timevar})\right)_\ensuremath{\mathcal{V}} = \ints{\ensuremath{v}\distribution(\timevar)}. \end{align*} Choosing $\ensuremath{v} = \distribution^-(\timevar+\ensuremath{\Delta \timevar}) = \min\left(0,\distribution(\timevar+\ensuremath{\Delta \timevar})\right)$, the weak formulation becomes \begin{align*} \norm{\distribution^-(\timevar+\ensuremath{\Delta \timevar})}{\ensuremath{\mathcal{V}}}^2 = \ints{\underbrace{\distribution^-(\timevar+\ensuremath{\Delta \timevar})}_{\leq 0}\underbrace{\distribution(\timevar)}_{\geq 0}} \leq 0. \end{align*} Therefore, $\distribution^-(\timevar+\ensuremath{\Delta \timevar})\equiv 0$ almost everywhere and consequently $\distribution(\timevar+\ensuremath{\Delta \timevar})\geq 0$ almost everywhere.
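The same positivity mechanism is visible in a simple discretization: a conservative finite-difference approximation of $\LaplaceBeltramiProjection$ yields a matrix $I-\ensuremath{\Delta \timevar}\LaplaceBeltramiProjection$ with positive diagonal and non-positive off-diagonal entries (an M-matrix), so its inverse is entrywise non-negative. A sketch (grid size, step size and initial datum are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

n, dt = 200, 0.05
mu = np.linspace(-1.0, 1.0, n)
h = mu[1] - mu[0]
a = 1.0 - (0.5 * (mu[:-1] + mu[1:]))**2    # (1 - mu^2) at the half points

# conservative finite-difference Laplace-Beltrami operator (no-flux at mu = +-1)
L = np.zeros((n, n))
for i in range(n):
    if i > 0:
        L[i, i - 1] += a[i - 1] / h**2
        L[i, i] -= a[i - 1] / h**2
    if i < n - 1:
        L[i, i + 1] += a[i] / h**2
        L[i, i] -= a[i] / h**2

A = np.eye(n) - dt * L    # M-matrix: positive diagonal, non-positive off-diagonal

psi = np.maximum(0.0, np.sign(np.sin(7.0 * np.pi * mu)))   # non-negative start
for _ in range(200):
    psi = np.linalg.solve(A, psi)
    assert psi.min() >= -1e-12     # each implicit step preserves non-negativity

assert np.ptp(psi) < 1e-6          # the stationary state is isotropic (constant)
```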
\end{example} \begin{remark} We want to remark that both collision operators \eqref{eq:LaplaceBeltrami} and \eqref{eq:collisionOperatorR} with the full-moment basis satisfy all the previous assumptions since in both cases the operator $\collisionu{\moments}$ is linear in $\moments$. \end{remark} \section{Numerical experiments} \label{sec:NumExp} \subsection{Manufactured solution} \label{sec:manu-soln} In general, analytical solutions for minimum-entropy models are not known. Therefore, to test the convergence and efficiency of our scheme, the method of manufactured solutions is used, following the target solution given in \cite{Schneider2015b}. The solution is defined on the spatial domain $\Domain = (-\pi, \pi)$ with periodic boundary conditions. A kinetic density in the form of the entropy ansatz is given by \begin{align} \distribution[a](\timevar,\x,\SCheight) =& \exp\left(\multiplierscomp{0}(\timevar,\x) + \multiplierscomp{1}(\timevar,\x) \SCheight \right), \label{eq:MFSM3}\\ \multiplierscomp{0}(\timevar,\x) =& -\ensuremath{K} - \sin(\x-\timevar) - \ensuremath{b},\nonumber\\ \multiplierscomp{1}(\timevar,\x) =& \ensuremath{K} + \sin(\x-\timevar).\nonumber \end{align} A source term is defined by applying the transport operator to $\distribution[a]$, giving $$ \ensuremath{Q}(\timevar,\x,\SCheight) := \partial_\timevar \distribution[a](\timevar,\x,\SCheight) + \SCheight \partial_{\z} \distribution[a](\timevar,\x,\SCheight) + \ensuremath{\sigma_a}(\timevar, \x) \distribution[a](\timevar,\x,\SCheight), $$ where $$ \ensuremath{\sigma_a}(\timevar, \x) := 4\left(1 - \cos\left(\x - \timevar\right)\right). $$ Thus, by inserting this $\ensuremath{Q}$ into \eqref{eq:TransportEquation1D} and setting $\ensuremath{\sigma_s} = 0$, $\distribution[a]$ is a solution of \eqref{eq:TransportEquation1D}. 
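As a sanity check, the zeroth moment of \eqref{eq:MFSM3} is available in closed form, $\ints{\distribution[a]} = 2 e^{\multiplierscomp{0}}\sinh(\multiplierscomp{1})/\multiplierscomp{1}$, which follows from direct integration and can be compared against quadrature. A sketch (using $\ensuremath{K}=2$ and the choice of $\ensuremath{b}$ specified below; the sample points are arbitrary):

```python
import numpy as np

K = 2.0
b = -K + 1.0 - np.log((K - 1.0) / (2.0 * np.sinh(K - 1.0)))

def lambdas(t, x):
    lam1 = K + np.sin(x - t)
    return -lam1 - b, lam1          # (lambda_0, lambda_1) of the ansatz

mu, w = np.polynomial.legendre.leggauss(40)   # Gauss-Legendre on [-1, 1]
for t, x in [(0.0, 0.3), (0.1, -1.2), (0.5, 2.0)]:
    lam0, lam1 = lambdas(t, x)
    quad = np.sum(w * np.exp(lam0 + lam1 * mu))        # <psi_a> by quadrature
    exact = 2.0 * np.exp(lam0) * np.sinh(lam1) / lam1  # closed-form zeroth moment
    assert np.isclose(quad, exact, rtol=1e-10)
```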
A straightforward computation shows that $\ensuremath{Q} \ge 0$ (for any $\ensuremath{b}$ and $\ensuremath{K}$), which means that \thmref{thm:RP} can be applied to the resulting moment system. Furthermore, $\ensuremath{b}$ is chosen as \begin{align*} \ensuremath{b} &= - \ensuremath{K} + 1 - \log\left( \cfrac{\ensuremath{K} - 1}{2\sinh(\ensuremath{K} - 1)} \right) \end{align*} so that the maximum value of $\ints{\distribution[a]}$ for $(\timevar, \x) \in [0, \tf] \times \Domain$ is one. As $\ensuremath{K}$ is increased, $\distribution[a]$ converges to a Dirac delta at $\SCheight = 1$.\\ Since $\distribution[a]$ has the form of an entropy ansatz, $\moments[a] = \ints{\basis \distribution[a]}$ is also a solution of \eqref{eq:MomentSystemUnclosed1D} whenever $1$ and $\SCheight$ are in the linear span of the basis $\basis$. Notice also that $\moments[a]$ approaches the boundary of realizability as $\ensuremath{K}$ is increased.\\ The final time is chosen to be $\tf = \pi / 5$ while $\ensuremath{K}\in\{2,25\}$ is used, for which the normalized first-order moment satisfies $\frac{\analyticalmomentcomp{1}}{\analyticalmomentcomp{0}} \in \{[0.313,0.672],[0.958,0.962]\}$ (recall that $\abs{\analyticalmomentcomp{1}} \leq \analyticalmomentcomp{0}$ is necessary for realizability). In the following, the $\MN[3]$ model is used so that the results include the effects of the numerical optimization. Errors are computed in the zeroth moment of the solution $\analyticalmomentcomp{0}(\timevar, \x) := \ints{\distribution[a](\timevar, \x, \cdot)}$. 
Then $\Lp{1}$- and $\Lp{\infty}$-errors for the zeroth moment $\momentcompprojected{0}(\timevar, \x)$ (that is, the zeroth component of a numerical solution $\ensuremath{\moments[h]}$) are defined as \begin{equation} \LpError{1} = \ensuremath{\Delta\z}\sum\limits_{\ensuremath{j}=1}^{\ensuremath{n_{\z}}} \left|\cellmean[\ensuremath{j}]{\analyticalmomentcomp{0}}(\tf) - \cellmean[\ensuremath{j}]{\momentcomp{0}}(\tf) \right| \quad \mbox{and} \quad \LpError{\infty} = \max_{\ensuremath{j}=1,\ldots,\ensuremath{n_{\z}}} \left|\cellmean[\ensuremath{j}]{\analyticalmomentcomp{0}}(\tf) - \cellmean[\ensuremath{j}]{\momentcomp{0}}(\tf) \right|, \label{eq:errors} \end{equation} respectively. The observed convergence order $\ensuremath{\nu}$ is defined by \begin{equation} \frac{\LpError[h_1]{\ensuremath{p}}}{\LpError[h_2]{\ensuremath{p}}} = \left( \frac{\ensuremath{\Delta\z}_1}{\ensuremath{\Delta\z}_2} \right)^\ensuremath{\nu}, \label{eq:conv-order} \end{equation} where $\LpError[h_i]{\ensuremath{p}}$, $i \in \{1, 2\}$, $\ensuremath{p} \in \{1, \infty\}$, is the $\Lp{\ensuremath{p}}$-error $\LpError{\ensuremath{p}}$ for the numerical solution using cell size $\ensuremath{\Delta\z}_i$. A convergence table for two different values of $\ensuremath{K}$ is presented in \tabref{tab:ConvergenceDG}. These two cases correspond, in spatial average, to the normalized moments $\normalizedmoments = \left(0.515,0.463,0.333\right)^T$ ($\ensuremath{K} = 2$) and $\normalizedmoments = \left(0.960,0.923,0.889\right)^T$ ($\ensuremath{K} = 25$) with relative distance to the realizability boundary (absolute distance divided by the maximal possible distance) of $5.016\%$ and $0.0006\%$, respectively.
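The observed order is obtained from \eqref{eq:conv-order} by solving for $\ensuremath{\nu}$. A short helper, checked here against the first two $\Lp{1}$-entries for $\ensuremath{K}=2$ in \tabref{tab:ConvergenceDG}:

```python
import numpy as np

def observed_order(err_coarse, err_fine, ratio=2.0):
    # solve E_{h1} / E_{h2} = (dz_1 / dz_2)^nu for nu
    return np.log(err_coarse / err_fine) / np.log(ratio)

# L1-errors for K = 2 at n_z = 40 and n_z = 80 from the convergence table
nu = observed_order(5.332e-2, 2.713e-2)
assert abs(nu - 0.97) < 0.01
```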
\begin{table}[htbp] \centering \begin{tabular}{r r@{.}l c r@{.}l c r@{.}l c r@{.}l c } & \multicolumn{6}{c}{$\ensuremath{K} = 2 $}& \multicolumn{6}{c}{$\ensuremath{K} = 25 $}\\ \cmidrule(r){2-7} \cmidrule(r){8-13} $\ensuremath{n_{\z}} $ & \multicolumn{2}{c}{$E^1_h$} & $\nu$ & \multicolumn{2}{c}{$E^\infty_h$} & $\nu$& \multicolumn{2}{c}{$E^1_h$} & $\nu$ & \multicolumn{2}{c}{$E^\infty_h$} & $\nu$\\ \cmidrule(r){1-1} \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13} 40 & 5 & 332e-02 & ---& 2 & 355e-02 & ---& 7 & 063e-03 & ---& 2 & 633e-03 & ---\\ 80 & 2 & 713e-02 & 0.97& 1 & 208e-02 & 0.96& 3 & 558e-03 & 0.99& 1 & 329e-03 & 0.99\\ 160 & 1 & 368e-02 & 0.99& 6 & 118e-03 & 0.98& 1 & 792e-03 & 0.99& 6 & 671e-04 & 0.99\\ 320 & 6 & 862e-03 & 1.00& 3 & 078e-03 & 0.99& 9 & 035e-04 & 0.99& 3 & 341e-04 & 1.00\\ 640 & 3 & 444e-03 & 0.99& 1 & 554e-03 & 0.99& 4 & 655e-04 & 0.96& 1 & 684e-04 & 0.99\\ \end{tabular} \caption{$\Lp{1}$- and $\Lp{\infty}$-errors and observed convergence order $\ensuremath{\nu}$ for the IMEX kinetic scheme with the $\MN[3]$ manufactured solution \eqref{eq:MFSM3} and optimization gradient tolerance $\ensuremath{\tau} = 10^{-6}$.} \label{tab:ConvergenceDG} \end{table} It can be observed that the expected convergence rates are achieved in both the $\Lp{1}$- and $\Lp{\infty}$-errors. \begin{remark} The scheme is not convergent for arbitrarily large values of $\ensuremath{K}$. For large $\ensuremath{K}$, the numerical solution veers so close to the boundary of the realizable set that the optimization has to use regularization, thus introducing errors into the solution. This has been shown in \cite{Schneider2015a} for a simpler convergence test and was also observed before in \cite{Alldredge2014}. \end{remark} \subsection{Plane source} \label{sec:Planesource} In this test case, an isotropic distribution with all mass concentrated in the middle of an infinite domain $\x \in (-\infty, \infty)$ is defined as initial condition, i.e.
\begin{align*} \ensuremath{\distribution[\timevar=0]}(\x, \SCheight) = \ensuremath{\distribution[\text{vac}]} + \delta(\x), \end{align*} where the small parameter $\ensuremath{\distribution[\text{vac}]} = 0.5 \times 10^{-8}$ is used to approximate a vacuum. In practice, a bounded domain must be used that is large enough for the boundary to have only a negligible effect on the solution. For the final time $\tf = 1$, the domain is set to $\Domain = [-1.2, 1.2]$ (recall that for all presented models the maximal speed of propagation is bounded in absolute value by one \cite{Schneider2015a,Levermore1998}). At the boundary the vacuum approximation \begin{align*} \ensuremath{\distribution[b]}(\timevar,\ensuremath{\z_{L}},\SCheight) \equiv \ensuremath{\distribution[\text{vac}]} \quad \mbox{and} \quad \ensuremath{\distribution[b]}(\timevar,\ensuremath{\z_{R}},\SCheight) \equiv \ensuremath{\distribution[\text{vac}]} \end{align*} is used again. Furthermore, the physical coefficients are set to $\ensuremath{\sigma_s} \equiv 1$, $\ensuremath{\sigma_a} \equiv 0$ and $\ensuremath{Q} \equiv 0$. All solutions are computed with an even number of cells, so the initial Dirac delta lies on a cell boundary. Therefore it is approximated by splitting it into the cells immediately to the left and right. In \figref{fig:Planesource}, only positive $\x$ are shown since the solutions are always symmetric around $\x = 0$.\\ For a detailed discussion of the solutions of the moment models see e.g. \cite{Schneider2014,Schneider2016}. For convenience, the space-time behaviour of the density $\ensuremath{\rho}$ for $\MN[1]$ to $\MN[3]$ is shown in \figref{fig:PlanesourceCuts}.
\begin{figure}[htbp] \externaltikz{PlanesourceCuts}{\relinput{Images/PlanesourceCuts}} \centering \caption{Results for the plane-source test in the space-time domain in a logarithmic scale.} \label{fig:PlanesourceCuts} \end{figure} A known problem of the minimum-entropy approach is the fact that close to the realizability boundary the moment system becomes ill-conditioned \cite{AllHau12}. We investigate the relative distance of the plane-source $\MN$ solutions to the boundary of the \emph{normalized realizable set} \begin{align*} \RDone{\basis} = \left\{\moments~:~\exists \distribution(\SC)\ge 0,\, \momentcomp{0} = \ints{\distribution} =1, \text{ such that } \moments =\ints{\basis\distribution} \right\} \end{align*} as the Euclidean distance to the realizability boundary divided by the maximal possible distance, i.e. \begin{align*} \distRDrel{\moments} = \cfrac{\distRD{\moments}}{\max\limits_{\hat{\moments}\in\RDone{\basis}} \distRD{\hat{\moments}}}, \qquad \distRD{\moments} = \min\limits_{\hat{\moments}\in\dRDone{\basis}} \norm{\moments-\hat{\moments}}{2}. \end{align*} The maximal distances are \begin{align*} \max\limits_{\hat{\moments}\in\RDone{\basis}} \distRD{\hat{\moments}}&=1 && \text{for }\MN[1],\\ \max\limits_{\hat{\moments}\in\RDone{\basis}} \distRD{\hat{\moments}}&=\frac12 && \text{for }\MN[2],\text{ and}\\ \max\limits_{\hat{\moments}\in\RDone{\basis}} \distRD{\hat{\moments}}&=\frac15 && \text{for }\MN[3]. \end{align*} The results are shown in \figref{fig:Planesource}.
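For $\MN[1]$ and $\MN[2]$ the relative distance can be evaluated directly from the inequalities describing the normalized realizable set ($\abs{\normalizedmomentcomp{1}}\leq 1$ for $\MN[1]$; $\normalizedmomentcomp{1}^2 \leq \normalizedmomentcomp{2} \leq 1$ for $\MN[2]$). A sketch (boundary sampling instead of an exact projection; not the paper's implementation):

```python
import numpy as np

def rel_dist_M1(phi1):
    # for M1 the relative distance reduces to 1 - |phi1|
    return 1.0 - abs(phi1)

def rel_dist_M2(phi1, phi2, n=20001):
    # distance to the boundary {phi2 = phi1^2} or {phi2 = 1} of the normalized
    # M2 realizable set, divided by the maximal distance 1/2 at (0, 1/2)
    s = np.linspace(-1.0, 1.0, n)
    d_parabola = np.min(np.hypot(s - phi1, s**2 - phi2))
    d_top = 1.0 - phi2
    return min(d_parabola, d_top) / 0.5

assert np.isclose(rel_dist_M1(0.25), 0.75)
assert np.isclose(rel_dist_M2(0.0, 0.5), 1.0, atol=1e-3)  # point of maximal distance
assert rel_dist_M2(0.0, 1.0) == 0.0                       # moment on the boundary
```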
\begin{figure}[htbp] \centering \settikzlabel{fig:Planesource.M1D} \settikzlabel{fig:Planesource.M1FM} \settikzlabel{fig:Planesource.M2D} \settikzlabel{fig:Planesource.M2FM} \settikzlabel{fig:Planesource.M3D} \settikzlabel{fig:Planesource.M3FM} \externaltikz{Planesource}{\relinput{Images/PlanesourceLB}} \caption{Relative distance to the realizability boundary and related quantities for the plane-source test.} \label{fig:Planesource} \end{figure} While the relative distance in case of the $\MN[1]$ model (Figures~\ref{fig:Planesource.M1D} and \ref{fig:Planesource.M1FM}) is directly related to the normalized first moment $\normalizedmomentcomp{1}$ by $\distRDrel{\moments} = 1-\abs{\normalizedmomentcomp{1}}$ (resulting in small distances only close to the peak at $\timevar = \pm \ensuremath{x}$), the distances in case of the higher-order models become smaller even in the interior of the set $\{\abs{\ensuremath{x}}\leq \timevar\}$. The minimal values that occurred are $0.0039$ ($\MN[1]$), $2.2147\cdot10^{-5}$ ($\MN[2]$) and $9.0981\cdot10^{-7}$ ($\MN[3]$), showing that the underlying moment problem becomes harder to solve with increasing moment order $\ensuremath{N}$. Figures~\ref{fig:Planesource.M2FM} and \ref{fig:Planesource.M3FM} show a histogram ($300\cdot300$ bins) of the $\MN[2]$ and $\MN[3]$ solution in the $\normalizedmomentcomp{1}-\normalizedmomentcomp{\ensuremath{N}}$ phase space (where $\ensuremath{N}$ is either $2$ or $3$, respectively). The histogram is built out of the solution values at the $10000$ cell centres and $100$ time frames. The boundary of the (projected) normalized realizable set is depicted as a black, dashed line. In case of the $\MN[2]$ model, it is visible that mostly the lower part of the realizable set is filled with particles, complemented by a stream of particles connected to the isotropic point $\normalizedmoments[\text{iso}] = \left(0,\frac13\right)$.
No particles occur close to the point of maximal distance $\normalizedmoments=\left(0,\frac12\right)$. Thus, the relative distance for $\MN[2]$ is always strictly smaller than $1$. A similar effect occurs in case of the $\MN[3]$ model, but less pronounced. \section{Realizability-preserving first-order scheme} \label{sec:RPFO} It is easy to show that a standard explicit, first-order finite-volume scheme for \eqref{eq:MomentSystemClosed} with suitably chosen numerical fluxes automatically preserves the realizability of the underlying solution under a CFL-type constraint if the moments of the collision operator can be written as $\ints{\basis \collision{\distribution}} = \collisionrealizablepart-\moments$, where $\collisionrealizablepart\in\RD{\basis}{}$ is realizable (see e.g. \cite{Schneider2015a,Schneider2015b,Hauck2010}).\\ Unfortunately, this is in general not possible for the Laplace-Beltrami operator. As has been shown above, an explicit discretization of the right-hand side of \eqref{eq:MomentSystemClosed} can lead to unrealizable moments even in the rather simple case of the full-moment $\MN[2]$ model. This results from the fact that the vector field defined by $\ints{\basis \collision{\distribution}}$ can point tangentially to the realizability boundary; this can be avoided by an implicit discretization. On the other hand, the hyperbolic flux, which is non-linear and usually expensive to calculate, is typically non-stiff. An implicit discretization of the flux is therefore undesirable.\\ To overcome this, we treat the two parts separately using an implicit-explicit (IMEX) time-stepping scheme.
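The effect of the splitting can be illustrated on a scalar toy problem: the hyperbolic part is advanced with an explicit upwind step under a CFL condition, while a stiff decay term is treated implicitly, so positivity (the scalar analogue of realizability) is preserved independently of the stiffness. A sketch (a toy analogue only, not the moment scheme itself; grid and coefficient are arbitrary choices):

```python
import numpy as np

# toy analogue of the splitting: u_t + u_x = -sigma * u on a periodic grid,
# with a very stiff decay coefficient sigma
n, sigma = 100, 1.0e4
dx = 1.0 / n
dt = dx                   # CFL condition dt <= dx, independent of sigma
u = np.zeros(n)
u[n // 2] = 1.0           # non-negative initial datum

for _ in range(200):
    u = u - dt / dx * (u - np.roll(u, 1))   # explicit upwind transport step
    u = u / (1.0 + dt * sigma)              # implicit (backward Euler) source step
    assert u.min() >= 0.0                   # positivity preserved at every step
```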
In the following, the spatial domain $\Domain = (\ensuremath{\z_{L}}, \ensuremath{\z_{R}})$ is divided, for notational simplicity, into $\ensuremath{n_{\z}}$ equidistant cells $\cell{\ensuremath{j}} = (\x_{\ensuremath{j}-\frac12}, \x_{\ensuremath{j}+\frac12})$, where the cell interfaces are given by $\x_{\ensuremath{j}\pm\frac12} = \x_\ensuremath{j} \pm \frac{\ensuremath{\Delta\z}}{2}$ for cell centres $\x_\ensuremath{j} = \ensuremath{\z_{L}} + (\ensuremath{j} - \frac12)\ensuremath{\Delta\z}$, and $\ensuremath{\Delta\z} = \frac{\ensuremath{\z_{R}} - \ensuremath{\z_{L}}}{\ensuremath{n_{\z}}}$. Defining the averaging operator \begin{align*} \cellmean{\, \cdot \,} := \frac{1}{\ensuremath{\Delta\z}} \int_{\cell{\ensuremath{j}}} \cdot~d\x, \end{align*} the discretized form of \eqref{eq:GeneralHyperbolicSystem} reads \begin{align} \label{eq:discretizedform} \cfrac{\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}-\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}}{\ensuremath{\Delta \timevar}} = - \frac{1}{\ensuremath{\Delta\z}}\left(\ensuremath{\widehat{\bF}}(\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}, \momentscellmeantime{\ensuremath{j}+1}{\ensuremath{\kappa}})-\ensuremath{\widehat{\bF}}(\momentscellmeantime{\ensuremath{j}-1}{\ensuremath{\kappa}}, \momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}})\right) + \ensuremath{\bs}\left(\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}\right) \end{align} where $\ensuremath{\widehat{\bF}}$ is a numerical flux function coupling the solution on cell $\cell{\ensuremath{j}}$ with its neighbours. We use the kinetic flux (see e.g.
\cite{Schneider2015b,Hauck2010,Garrett2014}) \begin{align} \label{eq:GodunovFlux1} \ensuremath{\widehat{\bF}}(\moments[1], \moments[2]) &= \ints{\basis \monotoneFlux{\distributionpv{1}}{\distributionpv{2}}}, \qquad \text{ where}\\ \monotoneFlux{\distributionpv{1}}{\distributionpv{2}} &= \begin{cases} \SCheight\distributionpv{1} & \text{ if } \SCheight\geq 0,\\ \SCheight\distributionpv{2} & \text{ if } \SCheight < 0 \end{cases} \end{align} and $\distributionpv{1,2}$ are the ans\"atze for $\moments[1,2]$, respectively. Evaluating this flux is generally possible for minimum-entropy and Kershaw models by carrying out the integrations over the half-spaces separated by $\SCheight = 0$. This is particularly easy in case of half- and mixed-moment models since then the numerical flux can be explicitly written in terms of the moments instead of some half moments of the ansatz function. The incorporation of boundary conditions is non-trivial. Here, an often-used approach is taken that imposes boundary conditions via `ghost cells'. First assume that it is possible to smoothly extend $\ensuremath{\distribution[b]}(\timevar,\x,\SCheight)$ in $\SCheight$ to $[-1,1]$ for $\x\in\{\ensuremath{\z_{L}},\ensuremath{\z_{R}}\}$ (note that while moments are defined using integrals over all $\SCheight$, the boundary conditions in \eqref{eq:TransportEquation1DBCa}--\eqref{eq:TransportEquation1DBCb} are only defined for $\SCheight$ corresponding to incoming data). Then the moment approximations in the ghost cells at $\x_0$ and $\x_{\ensuremath{n_{\z}}+1}$ simply take the form \begin{subequations} \label{eq:BCMomentSystemSimple1D} \begin{align} \momentslocal{0}(\timevar, \x_{\frac12}) &:= \ints{\basis \ensuremath{\distribution[b]}(\timevar,\ensuremath{\z_{L}},\SCheight)},\\ \momentslocal{\ensuremath{n_{\z}} + 1}(\timevar, \x_{\ensuremath{n_{\z}} + \frac12}) &:= \ints{\basis \ensuremath{\distribution[b]}(\timevar,\ensuremath{\z_{R}},\SCheight)}.
\end{align} \end{subequations} Note, however, that this approach is not without controversy due to its inconsistency with the original boundary conditions \eqref{eq:TransportEquation1DBCa}--\eqref{eq:TransportEquation1DBCb}; the question of appropriate boundary conditions for moment models is an open problem \cite{pomraning1964variational,Larsen1991,Rulko1991,Struchtrup2000,levermore2009boundary} which is not explored here. The IMEX time-stepping in \eqref{eq:discretizedform} uses the forward-backward Euler scheme \cite{Ascher1997}. Since this amounts to a Godunov splitting of the hyperbolic part (treated explicitly) and the (stiff) source term (treated implicitly), the following theorem can be concluded. \begin{theorem} \label{thm:RP} Let $\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}\in\RD{\basis}{}$ for all $\ensuremath{j}=0,\ldots,\ensuremath{n_{\z}}+1$. Furthermore, let \assref{ass:CollisionOperator} and \assref{ass:MomentSystem} hold, and $\ensuremath{\sigma_a}(\timevar,\ensuremath{x}),\ensuremath{\sigma_s}(\timevar,\ensuremath{x}),\ensuremath{Q}(\timevar,\ensuremath{x},\SCheight)\in\R_{\geq 0}$ be bounded and continuous in $\timevar$. Then, the IMEX scheme \eqref{eq:discretizedform} preserves realizability (i.e. $\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}\in\RD{\basis}{}$ for all $\ensuremath{j}=1,\ldots,\ensuremath{n_{\z}}$) under the CFL condition \begin{align} \label{eq:CFL} \ensuremath{\Delta \timevar} \leq \ensuremath{\Delta\z}.
\end{align} \end{theorem} \begin{proof} The scheme \eqref{eq:discretizedform} is equivalent to the following splitting scheme \begin{subequations} \begin{align} \label{eq:discretizedform2a} \momentscellmeantime{\ensuremath{j}}{*} &= \momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}- \frac{\ensuremath{\Delta \timevar}}{\ensuremath{\Delta\z}}\left(\ensuremath{\widehat{\bF}}(\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}, \momentscellmeantime{\ensuremath{j}+1}{\ensuremath{\kappa}})-\ensuremath{\widehat{\bF}}(\momentscellmeantime{\ensuremath{j}-1}{\ensuremath{\kappa}}, \momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}})\right),\\ \label{eq:discretizedform2b} \momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1} &= \momentscellmeantime{\ensuremath{j}}{*}+\ensuremath{\Delta \timevar}\ensuremath{\bs}\left(\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}\right). \end{align} \end{subequations} We recapitulate the arguments from e.g. \cite{Schneider2015a,Schneider2015b} to show that \eqref{eq:discretizedform2a} preserves realizability. 
We have that \begin{align*} \momentscellmeantime{\ensuremath{j}}{*} &= \ints{\distribution[*]}\\ \distribution[*] &= \ansatz[\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}]-\frac{\ensuremath{\Delta \timevar}}{\ensuremath{\Delta\z}}\left(\max\left(\SCheight,0\right)\left(\ansatz[\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}]-\ansatz[\momentscellmeantime{\ensuremath{j}-1}{\ensuremath{\kappa}}]\right)+\min\left(\SCheight,0\right)\left(\ansatz[\momentscellmeantime{\ensuremath{j}+1}{\ensuremath{\kappa}}]-\ansatz[\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}]\right)\right)\\ &\geq \left(1-\frac{\ensuremath{\Delta \timevar}}{\ensuremath{\Delta\z}}\right)\ansatz[\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}]\stackrel{\eqref{eq:CFL}}{\geq} 0, \end{align*} where $\ansatz[\momentscellmeantime{\ensuremath{j}-1}{\ensuremath{\kappa}}],\ansatz[\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}}],\ansatz[\momentscellmeantime{\ensuremath{j}+1}{\ensuremath{\kappa}}]\geq 0$ are the respective ans\"atze \eqref{eq:psiME} for the moment vectors in cells $\cell{\ensuremath{j}-1},\cell{\ensuremath{j}}$ and $\cell{\ensuremath{j}+1}$. Thus, $\momentscellmeantime{\ensuremath{j}}{*}$ is generated by the non-negative distribution function $\distribution[*]$ and is therefore realizable, i.e. $\momentscellmeantime{\ensuremath{j}}{*}\in\RD{\basis}{}$. To show a similar result for \eqref{eq:discretizedform2b}, \corref{cor:ImplicitDiscretization} has to be adapted to the situation.
The update has the form \begin{align*} \momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1} &= \left(\momentscellmeantime{\ensuremath{j}}{*}+\ensuremath{\Delta \timevar}\ints{\basis\cellmean[\ensuremath{j}]{\ensuremath{Q}}}\right)+\ensuremath{\Delta \timevar}\left(\cellmean[\ensuremath{j}]{\ensuremath{\sigma_s}}\collisionu{\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}}-\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}\right)\\ &=\underbrace{\ints{\basis\underbrace{\left(\ansatz[\momentscellmeantime{\ensuremath{j}}{*}]+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{Q}}\right)}_{\geq 0}}}_{\in\RD{\basis}{}}+\ensuremath{\Delta \timevar}\left(\cellmean[\ensuremath{j}]{\ensuremath{\sigma_s}}\collisionu{\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}}-\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}\right). \end{align*} This can be stated equivalently as \begin{align*} \left(1+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\right)\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1} &=\ints{\basis\left(\ansatz[\momentscellmeantime{\ensuremath{j}}{*}]+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{Q}}\right)}+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{\sigma_s}}\collisionu{\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}}. \end{align*} Since the realizable set is a convex cone and $\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\geq 0$, $\left(1+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\right)^{-1}\ints{\basis\left(\ansatz[\momentscellmeantime{\ensuremath{j}}{*}]+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{Q}}\right)}\in\RD{\basis}{}$. 
Thus, \begin{align*} \momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1} &=\left(1+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\right)^{-1}\ints{\basis\left(\ansatz[\momentscellmeantime{\ensuremath{j}}{*}]+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{Q}}\right)}+\left(1+\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\right)^{-1}\ensuremath{\Delta \timevar}\cellmean[\ensuremath{j}]{\ensuremath{\sigma_s}}\collisionu{\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}} \end{align*} is of a form to which \corref{cor:ImplicitDiscretization} can be applied (with suitable redefinitions of $\ensuremath{\Delta \timevar}$). Note that boundedness and continuity of the physical parameters are necessary so that a similar modification of \lemref{lem:LaplaceBeltramiFMStayRealizable} remains valid. Thus $\momentscellmeantime{\ensuremath{j}}{\ensuremath{\kappa}+1}\in\RD{\basis}{}$, which completes the proof. \end{proof} \begin{remark} Using an explicit discretization of the source term, the CFL condition \eqref{eq:CFL} has to be modified to \begin{align*} \ensuremath{\Delta \timevar}\leq \frac{1}{\frac{1}{\ensuremath{\Delta\z}}+\max\limits_{\ensuremath{j},\ensuremath{\kappa}}\left(\cellmean[\ensuremath{j}]{\ensuremath{\sigma_a}}\left(\timevar_\ensuremath{\kappa}\right)+\cellmean[\ensuremath{j}]{\ensuremath{\sigma_s}}\left(\timevar_\ensuremath{\kappa}\right)\right) } \end{align*} to preserve realizability (if possible at all) \cite{Schneider2015a,Schneider2015b}. \end{remark}
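The splitting argument in the proof can be illustrated on a scalar analogue, where non-negativity of the cell means plays the role of realizability. The following sketch is illustrative only: it replaces the moment system by the toy equation $u_t + u_x = -\sigma_a u + Q$ with periodic boundaries, and all function names are hypothetical.

```python
def imex_step(u, dt, dx, sigma_a, Q):
    """One forward-backward Euler (Godunov splitting) step for the toy
    equation u_t + u_x = -sigma_a*u + Q on a periodic grid.

    The explicit upwind step writes each new value as a convex
    combination of old values whenever dt <= dx (the CFL condition),
    and the implicit source step divides by (1 + dt*sigma_a), so
    non-negative data stays non-negative -- mirroring how the moment
    scheme preserves realizability.
    """
    n = len(u)
    ratio = dt / dx
    # Explicit upwind transport: u*_j = (1 - dt/dx) u_j + (dt/dx) u_{j-1}.
    u_star = [(1.0 - ratio) * u[j] + ratio * u[j - 1] for j in range(n)]
    # Implicit (backward Euler) source: (1 + dt*sigma_a) u_new = u* + dt*Q.
    return [(us + dt * Q) / (1.0 + dt * sigma_a) for us in u_star]

# Non-negativity is preserved under the CFL condition dt <= dx.
u = [1.0, 0.0, 2.0, 0.5, 0.0]
for _ in range(100):
    u = imex_step(u, dt=0.1, dx=0.1, sigma_a=2.0, Q=0.5)
assert all(v >= 0.0 for v in u)
```

With an explicit source step the division by $(1+\Delta t\,\sigma_a)$ disappears, which is exactly why the CFL condition in the remark must then absorb $\sigma_a+\sigma_s$.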
\section{Introduction} With the rapid growth of Open Educational Resources (OER) \cite{unesco1} and Massively Open Online Courses (MOOC) \cite{ramesh2014learning,Guo_vid_prod}, large educational resource repositories need scalable tools to understand and estimate the engagement potential of newly added materials \cite{clementsQ}. While \emph{contextualised engagement} can be defined as a learner's engagement driven by variables related to the context of the learner at a given time in their learning path (e.g., learning needs/goals, knowledge state, background on the topic, etc.), \emph{context-agnostic engagement} aims to capture patterns and features associated with engagement that are instead applicable to an entire learner population rather than the individual contexts of specific learners \cite{context_agnostic_engagement}. Put simply, context-agnostic engagement is concerned with the features that generally make a piece of educational material engaging. For works such as this, it is important to clarify that Information Retrieval (IR) in education has characteristics very distinct from those of conventional web search, as users tend to \emph{gather} information. Learners discovering knowledge in an educational IR system usually learn and familiarise themselves with novel concepts during the search session, leading to the task deviating or expanding into unanticipated sub-tasks \cite{cortinovis2019supporting}. This behaviour is drastically different from that of typical search engine users, who are usually aware of the exact information they are after. This distinction has led to many works that are specific to educational IR \cite{DavisSHH18,PenhaH20}. It also makes conventional query log datasets sub-optimal for training e-learning IR models. Prior works have shown this mismatch \cite{Chen18}, leading to the use of education-specific datasets \cite{Syed2017} instead of general-purpose query logs.
This work attempts to enrich the educational IR research domain by making a large educational video dataset available to the community. \textbf{Video Lecture Engagement (VLE)} is a novel dataset that presents around 12,000 peer-reviewed scientific video lectures constructed from a popular OER repository, VideoLectures.NET, and contains a variety of lecture types ranging from scientific talks to expert panels to MOOC-like lectures. The majority of these lectures belong to the Computer Science (CS), Artificial Intelligence (AI) and Data Science subject areas, making this dataset a great source for understanding learner engagement with AI/CS-related educational videos. The dataset provides an extensive set of textual and video-specific features extracted from the lecture transcripts, together with the Wikipedia topics covered in the lecture (via entity linking) and user engagement labels (both explicit and implicit) for each video. This dataset is uniquely suited for solving the cold-start problem in educational recommenders, both i) \emph{user cold-start}, where new users join the system and we may not have enough information about their context, and ii) \emph{item cold-start}, where new educational content is released for which we may not have user engagement data yet, and thus an engagement prediction model would be necessary. The effectiveness of using context-agnostic engagement prediction to address the cold-start problem has been empirically demonstrated before (see Figure {\ref{fig:person}}). To the best of our knowledge, this is the largest publicly available dataset to tackle such problems in educational recommenders. The aim of the dataset is not to replace personalised recommenders, but rather to complement them by providing meaningful population baselines/priors to solve the common cold-start problem. While constructing the VLE dataset is a major contribution of this paper, a series of experimental results is also included as an additional contribution.
These results validate the VLE dataset's usefulness in i) significantly improving engagement prediction models, ii) determining how the training data size impacts model improvements, iii) using the VLE dataset for AI/CS and E-Learning/MOOC educational recommenders and finally, iv) showing the feasibility of fusing context-agnostic prediction with personalised recommenders to improve overall prediction performance. \begin{figure}[!tbp] \centering \includegraphics[width=.7\linewidth]{marginal_vs_personal.pdf} \caption{How the difference between the Mean Absolute Error (MAE) of population-based and personalised models changes with the number of training events per learner. Until about 80 events in this plot, population-based predictions are more accurate. Plot from \cite{context_agnostic_engagement}}\label{fig:person} \end{figure} \section{Related Work} The majority of work in Intelligent Tutoring Systems (ITS) and Educational Recommendation Systems (EduRecSys) revolves around using learner context \cite{kim2021student,truelearn} to predict learner engagement. Although the connection between learner engagement and learning gains has been explored by many \cite{bonafini2017much,Davis18,lan2017behavior}, public datasets in this realm are hard to come by. Many MOOC platforms such as edX \cite{Guo_vid_prod} and Khan Academy \cite{khan_bigdata} harvest valuable data created in an \emph{in-the-wild} setting, yet this data is gated within course owners and consortia \cite{DavisSHH18} (or heavily anonymised) due to its proprietary nature. However, with the boom of online education, understanding the content features that lead to context-agnostic (population-based) engagement, an under-researched knowledge area, has become critical to explore. This work marks a significant step in this direction by publishing a large dataset that the community can use to push the research frontiers.
Other public datasets relating to educational question generation \cite{Chen18,rajpurkar2016squad}, argument strength \cite{persing2015modeling} and essay scoring \cite{AES_dataset} serve different objectives and tasks. The study of context-agnostic engagement with video lectures has so far been mainly qualitative, deriving guidelines such as keeping videos short and in parts \cite{Guo_vid_prod} and using conversational language \cite{brame2016effective}. These guidelines are only useful at the content creation stage and are of no use in moderating the mammoth volume of materials already circulating on the Internet. Our work proposes to model context-agnostic engagement using features associated with the educational resource itself, which is useful for scalable quality assurance and recommendation of existing (and newly created) materials. \subsection{Related Datasets} A few works have studied engagement prediction with YouTube videos (e.g. modelling watch time) \cite{Covington2016,Ribeiro_West_2021}, where YouTube-specific meta-data features (e.g. channel reputation, category, etc.) are used exclusively. Although large-scale public datasets relating to engagement prediction exist, they focus on general-purpose videos (largely entertainment) rather than educational videos \cite{beyondviews}. Some of the features used by these works share similarities with the ones proposed in this paper (such as video duration, language and topic features). However, no textual features (based on the video transcript) relating to understandability and presentation are used in this prior work, making the methods hard to generalise outside of YouTube. Beyond videos, educational IR \cite{SyedC17,Collins-Thompson:2011} and Wikipedia page quality prediction \cite{Dalip_wiki_svr,wiki_wang} have been attempted using features such as text style, readability, structure, network, recency and review data.
A publicly available Wikipedia article quality dataset \cite{Dalip_wiki_svr} with human-annotated (explicit) labels is used to tackle the latter task, although implicit labels are not included in this dataset. Similar datasets are available for automated essay scoring \cite{taghipour2016neural}. However, none of these datasets addresses the lack of resources for predicting engagement with educational videos. In the context of education, a different line of work looks at learner engagement using learner-specific multi-modal data (brain waves \cite{multa_dataset}, facial expressions \cite{kaur2018prediction}, etc.). While tackling a different task, these datasets are mainly collected in a controlled lab setting where the in-the-wild factor is missing \cite{dewan2019engagement}. A large number of public datasets and competitions in education also relate to students interacting with learning problems (e.g. ASSISTments \cite{mendicino2009comparison} or multiple choice questions \cite{choi2020ednet}), but these datasets, contrary to the proposed VLE dataset, do not focus on implicit engagement. More relevant and similar datasets that address context-agnostic engagement prediction in education have been emerging with a focus on MOOCs. Studying approximately 800 videos from the edX platform, Guo et al. \cite{Guo_vid_prod} manually processed and provided a qualitative analysis of engagement, with some features being relatively subjective and difficult to automate. A similar work \cite{edx_quality} takes 22 edX videos, extracts cross-modal features and manually annotates their quality, with no focus on learner engagement with the videos. Neither dataset is publicly available. MOOCCube is a recently released dataset that contains a spectrum of details relating to MOOC interactions \cite{yu2020mooccube}. Although large, the video watch logs in MOOCCube come from 190,000 users, whereas the VLE signals are generated by over 1.1 million users. As central values (e.g. median) are used for context-agnostic engagement prediction, a larger user base adds stability to the engagement centres. MOOCCube uses a closed topic taxonomy disconnected from Wikipedia, which prevents the dataset from using all the powerful signals in Wikipedia (e.g. semantic relatedness and the category tree, to name a few) to improve prediction models. No engagement prediction attempts have been published thus far to demonstrate MOOCCube's promise in the task. In a more relevant contribution, \cite{context_agnostic_engagement} has demonstrated that context-agnostic engagement prediction models can be built with implicit watch-time-based labels and that these models can be used in addressing the cold-start problem (see Figure {\ref{fig:person}}). We identify this work as the closest contribution to the proposed dataset. They gather a collection of 4,000 video lectures with engagement signals generated by 150,000 informal learners. We consider our work, VLE, an expansion of this dataset to around 12,000 video lectures, where the engagement signals are generated by 7 times as many learners. The new dataset also restricts itself to strictly English lectures to preserve the meaningfulness of the majority of features that are specific to the English language. VLE also introduces a new set of Wikipedia-based topical features. Furthermore, this work is accompanied by additional experiments that go beyond data quantity and validate the usefulness of the dataset. \section{VLE Dataset} \begin{figure*}[ht] \begin{center} \centerline{\includegraphics[width=1\linewidth]{vle_diagram.pdf}} \caption{The video data and the learner interaction logs from the VideoLectures.Net repository are processed separately to create the content-based, video-specific features and Wikipedia-based Topics.
Multiple different engagement labels are extracted from the interaction logs and published in the VLE dataset.} \label{fig:vle_pipe} \end{center} \end{figure*} The VLE dataset is constructed using aggregated video lecture consumption data coming from a popular OER repository, VideoLectures.Net\footnote{{\url{www.videolectures.net}}} (VLN). These videos are recorded when researchers are presenting their work at peer-reviewed conferences. Lectures are thus reviewed and the material is controlled for correctness of knowledge. The presenters and authors of published work that is recorded by VLN agree and provide rights to publish their presentation video, slides and supplementary materials under an open licence on the VLN website, where they can be used for educational and research purposes. The majority of research venues covered by VLN are related to Artificial Intelligence and Computer Science, so most videos are associated with these topics. Although the dataset mainly consists of scientific talks that are geared towards postgraduate and PhD level learners, a significant number of tutorials (Table \ref{table:lect_type}) are geared towards university-level students. In that aspect, many videos in the dataset are similar in style to conventional MOOC lectures. The dataset provides a set of statistics aimed at studying population-based engagement in scientific videos, together with other conventional metrics in subjective assessment such as star ratings and number of views. We believe the dataset will serve the community applying AI in Education to further understand which features of educational material make it engaging for learners. The users also agree for the user-generated content on the VLN website to be available for research purposes. We have anonymised all user interaction data and aggregated them to ensure the privacy of the users.
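The anonymisation-by-aggregation step above can be sketched as follows. This is an illustrative reconstruction, not the actual pipeline: the record schema (`(lecture_id, normalised_engagement_time)` pairs) and the function name are hypothetical, while the thresholding (publishing only lectures with more than 5 views) and the saturated median/mean labels are taken from the paper.

```python
from collections import defaultdict

def aggregate_engagement(session_log, min_views=5):
    """Aggregate anonymous watch events into per-lecture engagement
    statistics. `session_log` is an iterable of (lecture_id,
    normalised_engagement_time) pairs with user identifiers already
    dropped. Only lectures with more than `min_views` sessions are
    published, a k-anonymity style threshold.
    """
    per_lecture = defaultdict(list)
    for lecture_id, net in session_log:
        per_lecture[lecture_id].append(net)

    published = {}
    for lecture_id, nets in per_lecture.items():
        if len(nets) <= min_views:
            continue  # too few sessions: withhold to protect anonymity
        nets.sort()
        mid = len(nets) // 2
        median = nets[mid] if len(nets) % 2 else 0.5 * (nets[mid - 1] + nets[mid])
        published[lecture_id] = {
            "sessions": len(nets),
            # saturated labels (SMNET/SANET-style) are capped at 1
            "smnet": min(median, 1.0),
            "sanet": min(sum(nets) / len(nets), 1.0),
        }
    return published
```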
Figure \ref{fig:vle_pipe} gives a high-level representation of how the different data silos within the VLN repository have been used to create the VLE dataset. \subsection{Feature Extraction} \label{sec:features} We process the video meta-data and English transcriptions\footnote{provided by \url{www.translectures.eu}.} to extract i) content-based textual features, ii) Wikipedia topic-based features and iii) video-specific features. The majority of the extracted features (with the exception of a few features in the video-specific category) are cross-modal (i.e. transferable to other media such as books, websites and audio) and are easily automatable. \subsubsection{Content-based Features} Prior work \cite{context_agnostic_engagement} has proposed a set of effective content-based features for similar datasets. We use the video meta-data and transcript to extract \emph{Word Count} \cite{wiki_wang}, \emph{Title Word Count} and \emph{Document Entropy} \cite{Bendersky2011}, the language style features \cite{dalip_quality_features} \emph{Preposition Rate}, \emph{Auxiliary Rate}, \emph{To Be Rate}, \emph{Conjunction Rate}, \emph{Normalisation Rate} and \emph{Pronoun Rate}, the readability-related \emph{Easiness (FK Easiness)} \cite{dalip_quality_features} and the language style related \emph{Stop-word Presence Rate} and \emph{Stop-word Coverage Rate} \cite{Bendersky2011,Ntoulas2006}. To represent the \emph{Freshness} (recency) of lectures, we calculate the number of days between January 01, 1960 and the lecture publication date, which is a proxy for the recency of the lecture \cite{context_agnostic_engagement}. \subsubsection{Wikipedia-based Features} We use Wikifier\footnote{\url{http://wikifier.org}} \cite{wikifier}, a novel entity linking method, on the transcripts to extract topical features. Two main types of novel topical features, covering the topic authority and topic coverage verticals \cite{quality_features}, are proposed in this work.
The \emph{top-5 authoritative topic URLs} and \emph{top-5 PageRank scores} features represent the Topic Authority feature vertical. Wikifier \cite{wikifier} produces a PageRank score \cite{pagerank} that indicates the marginal authoritativeness of a Wikipedia concept among all Wikipedia concepts associated with a lecture. We use this score to rank and identify the most authoritative topics and use the actual PageRank score as a proxy for quantifying authority. It is noteworthy that \emph{authority} of a learning resource entails author, organisation and content authority \cite{quality_features}. These features represent content authority. The \emph{top-5 covered topic URLs} and \emph{top-5 cosine similarity scores} features represent \emph{Topic Coverage} feature vertical. The cosine similarity score $cos(s_{tr}, c)$ between the \emph{Term Frequency-Inverse Document Frequency (TF-IDF)} representations of the lecture transcript $s_{tr}$ and the Wikipedia page of concept $c$ is also an output from the Wikifier. We use these values to i) rank and identify most covered topics and ii) quantify the topic coverage \cite{truelearn}. For authority and coverage, the top 5 topic URLs and their scores are included introducing four additional feature groups providing 20 distinct feature columns in the final dataset. Figure {\ref{fig:wordcloud}} presents two word clouds that show the 25 most authoritative and covered topics in the VLE dataset. With respect to the topics in a lecture, the authoritative topics are the ones that have strong connection (linkage in Wikipedia) with other co-occurring topics in a lecture whereas the covered topics are the ones with high textual overlap with Wikipedia concept pages. This figure confirms that these two feature groups represent different things although there is some overlap between the topics. 
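The topic-coverage score above is a cosine similarity between TF-IDF representations of the transcript and a Wikipedia concept page. A minimal, self-contained sketch of such a score is shown below; note that in the dataset this value is produced by Wikifier, so this code is only a simplified illustration with a naive whitespace tokeniser.

```python
import math
from collections import Counter

def tfidf_cosine(doc_a, doc_b, corpus):
    """Cosine similarity between TF-IDF vectors of two texts, in the
    spirit of the topic-coverage score cos(s_tr, c) between a lecture
    transcript and a Wikipedia concept page. `corpus` is the document
    collection used to estimate inverse document frequencies.
    """
    def tokens(text):
        return text.lower().split()

    n_docs = len(corpus)
    df = Counter()  # document frequency of each term
    for doc in corpus:
        df.update(set(tokens(doc)))

    def tfidf(text):
        tf = Counter(tokens(text))
        # smoothed idf so unseen terms still get a finite weight
        return {t: c * math.log((1 + n_docs) / (1 + df[t]))
                for t, c in tf.items()}

    va, vb = tfidf(doc_a), tfidf(doc_b)
    dot = sum(va[t] * vb.get(t, 0.0) for t in va)
    norm = (math.sqrt(sum(x * x for x in va.values()))
            * math.sqrt(sum(x * x for x in vb.values())))
    return dot / norm if norm else 0.0
```

Ranking the linked Wikipedia concepts by such a score (or by the Wikifier PageRank score for authority) and keeping the top 5 yields the four feature groups described above.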
\begin{figure}[!tbp] \centering \includegraphics[width=\linewidth]{pr_vs_cos.pdf} \caption{WordClouds summarising the distribution of the top 25 (i) most authoritative and (ii) most covered Wikipedia topics in the dataset. Note that Data Science and Computer Science related topics are the most dominant.} \label{fig:wordcloud} \end{figure} \subsubsection{Video-specific Features} We identify a set of easily automatable, previously proposed \cite{context_agnostic_engagement} video-specific features. \emph{Lecture Duration, Is Chunked, Lecture Type} \cite{Guo_vid_prod}, \emph{Silence Period Rate (SPR)} and \emph{Speaker Speed} \cite{context_agnostic_engagement} are calculated based on prior work. \emph{Lecture Duration} is reported in seconds. \emph{Is Chunked} is a binary feature which indicates if a lecture has multiple parts. The \emph{Lecture Type} value is derived from the metadata. The possible values for this feature are described in Table \ref{table:lect_type}. Table \ref{table:lect_type} also gives insight into how diverse the VLE dataset is. There are different types of videos in the dataset, such as research presentations (e.g. $\texttt{vl,vps,vpr,}\dots$), scientific talks (e.g. $\texttt{vbp,vdm,vkn,vit,}\dots$), dialogues (e.g. $\texttt{vpa,vdb,}\dots$) and tutorials (e.g. $\texttt{vtt}$). \begin{table} \small \centering \setlength{\tabcolsep}{4pt} \caption{14 types of lectures in the VLE dataset and their abbreviation (Abbr.)
and frequency (Freq).} \label{table:lect_type} \begin{tabular}{ l l r l l r} \hline Abbr.&Description&Freq.&Abbr.&Description&Freq.\\ \hline \texttt{vbp}&Best Paper&67&\texttt{vdb}&Debate&135 \\ \texttt{vdm}&Demonstration&315&\texttt{viv}&Interview&121 \\ \texttt{vid}&Introduction&75&\texttt{vit}&Invited Talk&609 \\ \texttt{vkn}&Keynote&274&\texttt{vl}&Lecture&7125 \\ \texttt{vop}&Opening&190&\texttt{oth}&Other&58 \\ \texttt{vpa}&Panel&207&\texttt{vps}&Poster&162 \\ \texttt{vpr}&Promotional Video&69&\texttt{vtt}&Tutorial&2142 \\ \hline \end{tabular} \end{table} \subsection{Labels} Multiple labels based on explicit and implicit feedback are provided with this dataset, which will allow researchers to compare different labels and also to integrate multiple label types when modelling engagement. Three main types of engagement labels are presented. \subsubsection{Explicit Ratings} The \emph{Mean Star Rating}, based on a rating scale of 1--5, is provided for each lecture. This value is accompanied by the number of ratings used to calculate the mean. The proposed dataset has 2,127 ratings (almost twice as many as Bulathwela et al. \cite{context_agnostic_engagement}). Missing ratings are labelled with \texttt{-1}. \subsubsection{Popularity} The total number of views, named \emph{View Count}, for each video lecture as of February 1, 2021 is extracted from the metadata and provided with the dataset. \subsubsection{Watch Time/Engagement} The majority of learner engagement labels in the VLE dataset are based on {watch time}. We compute the Normalised Engagement Time (NET) of each view session and use it to compute the \texttt{Median of Normalised Engagement (MNET)}, as it has been proposed as the gold standard for engagement with educational materials in previous work \cite{Guo_vid_prod}. We further compute the \texttt{Average of Normalised Engagement (ANET)}.
To keep the MNET and ANET labels in the range $[0,1]$, we set the upper bound to 1 and derive the Saturated MNET (\emph{SMNET}) and Saturated ANET (\emph{SANET}), which are included in the dataset. The standard deviation of \texttt{NET} for each lecture (\emph{Std of Engagement}) is reported, together with the \emph{Number of User Sessions} used for calculating \texttt{MNET} and \texttt{ANET}. These measures allow understanding the stability of the published engagement centres. The set of individual NET values is also published to allow future researchers to exploit the true distribution of values. \subsection{Preserving Anonymity and Ethics} \label{sec:ethics} We only publish lectures with more than 5 views to preserve k-anonymity and avoid revealing learner identities \cite{orcas_dataset}. A set of additional techniques is used to preserve lecturer anonymity, in order to avoid unanticipated effects on lecturers' reputations from associating implicit learner engagement values with their content. Rarely occurring \emph{Lecture Type} values were grouped together to create the \emph{other} category in Table \ref{table:lect_type}. Similarly, the Life Sciences, Physics, Technology, Mathematics, Computer Science, Data Science and Computers subject categories were grouped into the \texttt{stem} category and the remaining subjects into the \texttt{misc} category. \emph{Freshness} and \emph{Lecture Duration} are rounded to the nearest 10 days and 10 seconds, respectively. Gaussian white noise (10\%) is added to the \emph{Title Word Count} feature, which is then rounded to the nearest integer. The VLE dataset comes from a video lecture collection that mainly belongs to the Computer Science community, where there is a gender imbalance, both in the audience and among presenters. To enhance the neutrality of our findings and contributions, we have avoided using feature classes that could potentially reflect gender characteristics.
For example, we have avoided using visual features (facial features, presenter emotions) and sound-related features (pitch) that may actively or passively embed gender characteristics. Instead, we have focused on features that mainly reflect the informational content of the lectures. Furthermore, where video-specific features are incorporated, we have used very generic features such as ``speaker speed'' that are unlikely to be correlated with characteristics such as gender or age. \subsection{Final Dataset} The final dataset includes lectures published between September 1, 1999 and December 31, 2020. The engagement labels are created from the events of over 1.1 million users logged between December 01, 2016 and February 01, 2020. The final dataset contains 11,548 lectures across 21 subjects (e.g.~Computer Science, Philosophy, etc., with a majority from AI and Computer Science) that are grouped into STEM and Miscellaneous categories. The collection spans various video lengths, with the duration distribution having two modes at approximately 2,000s (33 min) and 4,000s (1 h), which align with the typical lengths of research talks and presentations. The mean word count of the videos is 5,347.9. On average, 93.9 learners per video are used when calculating the engagement centres. The dataset, helper tools, example code snippets and various descriptive statistics related to the VLE dataset are available publicly\footnote{\url{https://github.com/sahanbull/VLE-Dataset}}. \subsection{Supported Tasks} \label{sec:taks} This section introduces the reader to the tasks that the dataset could be used for. The main application areas of these tasks are i) quality assurance in educational video repositories and ii) understanding and predicting context-agnostic engagement in a web-based learning setting.
We establish two main tasks, which we mainly focus on in this paper, that can be objectively addressed with the VLE dataset using a supervised learning approach. These are: \begin{itemize} \item \textbf{Task 1: Predicting context-agnostic (population-based) engagement of video lectures}: The dataset provides a set of relevant features and labels to construct machine learning models to predict context-agnostic engagement in video lectures. The task can be treated as a regression problem. \item \textbf{Task 2: Ranking of video lectures based on engagement}: Building predictive models that can rank videos based on their context-agnostic engagement could be useful in the setting of an educational recommendation system, including for tackling the cold-start problem associated with new video lectures. The task can be treated as a ranking problem to predict the global/relative ranking of video lectures. \end{itemize} \paragraph{Other Tasks} Several auxiliary tasks can also be addressed with this dataset. The dataset is suitable for, but not limited to, tasks such as i) understanding influential features for engagement prediction and ii) understanding the strengths and weaknesses of different implicit/explicit labels, which have been investigated in prior work with similar datasets \cite{context_agnostic_engagement,perez2019pairwise}. With the use of unsupervised approaches, the video representations can be used to uncover meaningful hidden patterns contrasting clusters of videos (e.g. talks vs. lectures vs. tutorials). With the use of Wikipedia-based topics, we also see opportunities in deducing the structure of knowledge based on how topics co-occur within videos. Such investigations can be done on this dataset in isolation or can be fused with similar datasets where this task has been attempted before \cite{yu2020mooccube}. \subsection{Evaluating Performance} We identify \emph{Root Mean Squared Error (RMSE)} as a suitable metric for evaluating Task 1.
Measuring RMSE against the original labels published with the dataset will allow different works to be compared fairly. With reference to Task 2, we identify \emph{Spearman's Rank Order Correlation Coefficient (SROCC)} as a suitable metric. SROCC is suitable for comparing ranking models that create global rankings (e.g. point-wise rankers). We use 5-fold cross validation to evaluate model performance on Tasks 1 and 2. The folds are released together with the dataset to facilitate fair comparison and reproducibility. The five folds can be identified using the \texttt{fold} column in the dataset. 5-fold cross validation allows reporting the \emph{confidence intervals (1.96 $\times$ Standard Error)} of the performance estimate, which we include in Table \ref{tab:accuracy}. \section{Baselines and Experiments} \label{sec:baselines} Through our experiments we seek answers to multiple research questions, which we detail below. Note that these research questions overlap only partially with the supported tasks outlined in Section \ref{sec:taks}, as the potential uses of the dataset go well beyond what we could explore in this paper. The main research questions of interest are: \begin{itemize} \item \textbf{RQ1: } Does the newly constructed VLE dataset lead to training more performant prediction models? \item \textbf{RQ2: } How does the larger quantity of training data affect predictive performance? \item \textbf{RQ3: } Is the model useful for modelling engagement with Computer Science materials? \item \textbf{RQ4: } Is this dataset useful for modelling engagement in E-Learning lectures and MOOC videos? \item \textbf{RQ5: } Does context-agnostic engagement prediction help in the cold-start scenario? \end{itemize} Prior work by Bulathwela et al. \cite{context_agnostic_engagement} demonstrated that the \emph{Random Forest (RF)} model obtains the best performance among linear and non-linear models on similar datasets.
Therefore, we use the RF model to benchmark the new VLE dataset on Tasks 1 and 2 described earlier. In addition, a handful of \emph{Multi Layer Perceptron (MLP)} architectures were also experimented with, due to their success in engagement prediction with YouTube videos \cite{Covington2016}. The code for extracting the proposed features is also published with this work in the dataset repository. \subsection{Labels and Features for Baseline Models} The \emph{SMNET} label is used as the target variable for both tasks. Preliminary investigations indicated that the SMNET label follows a Log-Normal distribution, motivating us to apply a log transformation to the SMNET values before training the models. Empirical results further confirmed that this step improves the final performance of the models. We undo this transformation when computing $RMSE$; the transformation does not affect $SROCC$. All the features outlined in the content-based and video-based sections of section \ref{sec:features} are included in the baseline models. The models are trained with three different feature sets in an incremental fashion: \begin{enumerate} \item \emph{Content-based}: Features extracted from lecture metadata and the transcript-based textual features. \item \emph{+ Wiki-based}: Content-based + 2 Wikipedia-based features (Top 1 Most Authoritative Topic URL and Most Covered Topic URL). \item \emph{+ Video-based}: Content-based + Wikipedia-based + Video-specific features. \end{enumerate} Due to the large number of topics in the Wikipedia-based feature groups, we restrict ourselves to the top-1 authoritative and most-covered topic features, encoded as binary categorical variables. Our initial attempts to encode these features in a reduced-dimension space (using Singular Value Decomposition) led to deteriorated results, contrary to our expectations. Practitioners are encouraged to explore further encodings of the topic variables, as a more effective encoding may have a positive impact on performance.
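As a concrete illustration of the evaluation protocol described above (log-transformed SMNET targets, RMSE for Task 1, SROCC for Task 2, and 1.96 $\times$ Standard Error intervals over cross-validation folds), the following is a minimal plain-Python sketch. Function names are ours, and tie handling in the rank correlation is omitted for brevity:

```python
import math

def rmse_on_original_scale(log_preds, true_vals):
    # Models are trained on log(SMNET), so undo the transform before RMSE.
    preds = [math.exp(p) for p in log_preds]
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, true_vals)) / len(true_vals))

def srocc(a, b):
    # Spearman's rank correlation = Pearson correlation of the ranks
    # (ties ignored for brevity).
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0.0] * len(x)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = math.sqrt(sum((x - ma) ** 2 for x in ra))
    sb = math.sqrt(sum((y - mb) ** 2 for y in rb))
    return cov / (sa * sb)

def ci95(fold_scores):
    # 1.96 x standard error of the mean across the five folds.
    n = len(fold_scores)
    m = sum(fold_scores) / n
    sd = math.sqrt(sum((s - m) ** 2 for s in fold_scores) / (n - 1))
    return 1.96 * sd / math.sqrt(n)
```

In practice a library implementation (e.g. \texttt{scipy.stats.spearmanr}, which handles ties) would be preferable; the sketch only makes the metric definitions explicit.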
\subsection{Experiments} To address RQ1, both the RF and MLP models are experimented with using the proposed feature sets on the smaller \texttt{4k} dataset \cite{context_agnostic_engagement} and the newly proposed VLE dataset. 5-fold cross validation is used in this experiment. This setup allows identifying i) how performance gains are achieved by adding each new group of features and ii) how performance gains are achieved by adding new observations. Follow-on experiments addressing RQ2 and 3 are run using folds 1--4 as training data and fold 5 of the dataset as testing data. We experiment with varying proportions of training data to train the model. When selecting training data, random sampling is used to keep the diversity of the lectures similar to the full dataset. All the models trained with different quantities of training data are then evaluated on the same held-out test set. To validate RQ4, we first partition the entire dataset into two parts, i) tutorial videos (\texttt{vtt} in Table \ref{table:lect_type}) and ii) all other videos, as test and train data respectively. However, tutorials presented at a research conference may differ significantly from e-learning videos geared towards course learning. To address this mismatch, we further identify 1,035 videos (among the tutorials) that exclusively belong to the Open Course Ware Consortium (OCWC)\footnote{\url{http://videolectures.net/ocwc}}. OCWC contains university lectures that are intended for course teaching via e-learning. These lectures are recorded using a variety of MOOC production techniques such as classroom lecture, talking head and PowerPoint presentation \cite{Guo_vid_prod}. We define these videos as \texttt{ocw} lectures.
We use the training data (all except tutorials) to train the engagement model and evaluate prediction performance on i) OpenCourseWare lectures, \texttt{ocw}, ii) all tutorials except OpenCourseWare, \texttt{!ocw}, and iii) all tutorials, \texttt{vtt} (the entire test data). The follow-on experiments (RQ2--4) are only done with the best performing model from Table \ref{tab:accuracy} (RF model with \emph{Content + Wiki + Video} feature group) to reduce computational cost. A different experiment was run to answer RQ5. We utilise TrueLearn Novel \cite{truelearn} (hereby referred to as \emph{TrueLearn}), a recently proposed educational recommender model that learns individualised models to predict engagement with video lectures. A key limitation of personalised models such as TrueLearn is that there is no information about the user in the beginning, leading to a user cold-start problem, which effectively means ill-informed engagement predictions at the beginning of the learner session. To address this, we utilise the proposed context-agnostic engagement prediction model in a hybrid recommendation system (hereby referred to as \emph{TrueLearn$_{++}$}) that combines it with the TrueLearn model. For simplicity's sake, TrueLearn$_{++}$ uses ``switching'' \cite{burke2002hybrid}, where the context-agnostic model is used to make a prediction for the \emph{first} event of the user (where the personalisation model has no information to work with) before switching to the TrueLearn model, which can exploit the user's watch history. The PEEK dataset \cite{peek_orsum}, which includes more than 20,000 user sessions, is used for the experiment, where the context-agnostic model is trained using lectures that are not present in the PEEK test data. The predictive performance on the PEEK test data using TrueLearn (the baseline) and TrueLearn$_{++}$ (where the first event is predicted using the context-agnostic model) is then measured using Accuracy, Precision, Recall and F1-Score.
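The switching rule described above reduces to a single branch per event; the sketch below is illustrative only (function and argument names are ours, not from the TrueLearn codebase):

```python
def switching_predict(event_index, context_agnostic_pred, personalised_pred):
    # "Switching" hybrid: the population-based (context-agnostic) model handles
    # the learner's first event, where the personalised model has no watch
    # history to exploit; every later event uses the personalised model.
    if event_index == 0:
        return context_agnostic_pred
    return personalised_pred
```

More sophisticated hybridisation (e.g. weighting or stacking, as noted later in the paper) would blend the two predictions rather than switch between them.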
To evaluate whether the improvements in metrics are statistically significant, a learner-wise, one-tailed paired t-test is used. \section{Results and Discussion} \begin{table}[!tbp] \centering \scriptsize \setlength{\tabcolsep}{4pt} \caption{RMSE and SROCC with confidence intervals for the engagement prediction (Task 1) and lecture ranking (Task 2) using the Random Forests model with both the \texttt{4k} \cite{context_agnostic_engagement} and \texttt{12k} (Our VLE) datasets. Better performance per task is highlighted in \textbf{bold}.}\label{tab:accuracy} \begin{tabular}{l | l l | l l } \hline & \multicolumn{2}{c}{$RMSE$ with {Task 1}} & \multicolumn{2}{c}{$SROCC$ with {Task 2}} \\ Feature set & \multicolumn{1}{c}{\texttt{4k}} & \multicolumn{1}{c}{\texttt{12k} \emph{(Ours)}} & \multicolumn{1}{c}{\texttt{4k}} & \multicolumn{1}{c}{\texttt{12k} \emph{(Ours)}} \\ \hline Content-based & {.1801$\pm$.006} & \textbf{.1170$\pm$.006} & .6190$\pm$.011 & \textbf{.7504$\pm$.013} \\ + Wiki-based & {.1798$\pm$.007} & \textbf{.1178$\pm$.006} & {.6251$\pm$.014} & \textbf{.7505$\pm$.013} \\ + Video-specific & {.1728$\pm$.007} & \textbf{.1098$\pm$.007} & .6758$\pm$.020 & \textbf{.7832$\pm$.009}\\ \hline \end{tabular} \end{table} \begin{figure}[!tbp] \centering \includegraphics[width=1.025\linewidth]{training_size.pdf} \caption{Predictive performance for (i) engagement prediction and (ii) lecture ranking tasks with varying proportions of randomly sampled training data. The test set performance for the full test dataset (blue) and for subsets of the test dataset consisting of CS lectures only (orange) and non-CS lectures only (green) is also reported.} \label{fig:train_size} \end{figure} \begin{table}[!tbp] \centering \small \setlength{\tabcolsep}{5pt} \caption{Performance for OpenCourseWare (\texttt{ocw}), Non-OpenCourseWare (\texttt{!ocw}) tutorial and All tutorial (\texttt{vtt}) videos for engagement prediction and lecture ranking tasks. Better performance per task is highlighted in \textbf{bold}.} \label{tab:ocw} \begin{tabular}{l c c c c} \hline & \texttt{ocw} & \texttt{!ocw} & \texttt{vtt} & From Table \ref{tab:accuracy}\\ \hline $RMSE$ with Task 1 & .0539 & \textbf{.0404} & .0406 & .1098\\ $SROCC$ with Task 2 & \textbf{.9485} & .9209 & .9223 & .7832\\ \hline \end{tabular} \end{table} \begin{table}[!tbp] \centering \caption{Average test set performance for Accuracy (Acc.), Precision (Prec.), Recall (Rec.) and F1-Score (F1). The more performant value is highlighted in \textbf{bold}. Metrics where the proposed model outperforms the baseline counterpart on the \texttt{PEEK} dataset ($p< 0.01$ in a one-tailed paired t-test) are marked with $\cdot^{(*)}$.} \label{tab:personalise} \begin{tabular}{lllll} \hline \multicolumn{1}{c}{Model} & Acc. & Prec. & Rec. & F1 \\ \hline TrueLearn & 62.69 & 57.54 & \textbf{81.88} & \textbf{64.98} \\ TrueLearn$_{++}$ & \textbf{63.51}$\cdot^{(*)}$ & \textbf{57.91}$\cdot^{(*)}$ & 79.13 & 64.39 \\ \hline \end{tabular} \end{table} \begin{table}[!tbp] \centering \caption{Test set performance for Accuracy (Acc.), Precision (Prec.), Recall (Rec.) and F1-Score (F1) for the \emph{first} event of each learner. The more performant value is highlighted in \textbf{bold}. Metrics where the proposed model outperforms the baseline counterpart on the \texttt{PEEK} dataset ($p< 0.01$ in a one-tailed paired t-test) are marked with $\cdot^{(*)}$.} \label{tab:event1} \begin{tabular}{lllll} \hline Model & Acc. & Prec. & Rec. & F1 \\ \hline TrueLearn & 44.21 & 44.21 & \textbf{100.00} & \textbf{61.32} \\ TrueLearn$_{++}$ & \textbf{56.09}$\cdot^{(*)}$ & \textbf{50.32}$\cdot^{(*)}$ & \ \ 53.58 & 51.90 \\ \hline \end{tabular} \end{table} The performance metrics observed with the RF model on Tasks 1 and 2 (RQ1) are outlined in Table \ref{tab:accuracy}. Although we expected the MLP models to be competitive, we did not observe promising results with them.
We believe this is due to the restrictive architecture space used in our experiments. We encourage future researchers to experiment with a wider range of more complex neural architectures, which may lead to better results. Figure {\ref{fig:train_size}} illustrates how the training data size impacts the i) RMSE and ii) SROCC (RQ2). It also demonstrates the performance difference between predicting engagement of Computer Science (CS) videos vs. non-CS videos (RQ3). Finally, Table \ref{tab:ocw} presents the predictive performance of the model on e-learning type lectures and tutorials (RQ4). The overall performance results relating to the effect of \emph{combining} the context-agnostic model with personalisation models to battle the cold-start problem (RQ5) are reported in Table \ref{tab:personalise}. A magnified view of the performance of predicting the outcome of the first event of each user (where the context-agnostic predictor is supposed to help the personaliser) is reported in Table \ref{tab:event1}. \subsection{Performance Gains and Causes (RQ1-2)} Results in Table \ref{tab:accuracy} clearly show that the three-times-larger VLE dataset leads to significant performance gains in both the engagement prediction and lecture ranking tasks. In the case where the full feature set is used with the RF model, RMSE on Task 1 drops from .17 to .11 (a 36\% improvement) while SROCC on Task 2 jumps from .68 to .78 (15\%). The labels (both explicit and implicit) themselves are more accurate, as they are calculated using a larger user population. This means that many of the lectures that already existed in the smaller dataset \cite{context_agnostic_engagement} are likely to have improved engagement labels, as the new labels are calculated using more user sessions that interacted with the videos over a wider time period (including very recent sessions until February 2020). Within the VLE dataset itself, using additional feature groups tends to lead to better performing models.
Results for the VLE dataset in Table \ref{tab:accuracy} demonstrate this trend: a significant jump in performance is evident when incorporating modality-specific (video-specific) features. However, the results also show that the cross-modal content-based features alone lead to substantial performance. This is a good indication that easy-to-compute, cross-modal features alone are sufficient to build a system that can predict context-agnostic engagement of videos. From a practical viewpoint, the proposed cross-modal features are computationally light (unlike complex deep models, e.g. vision models). Wikification, used in generating the Wiki-based features, also operates at web scale\footnote{\url{http://wikifier.org}}. Although the results show only minute gains from adding the Wiki-based features, we believe this is due to the simplicity of the Wiki features used in constructing the baselines, leaving much room for sophistication (e.g. exploiting the semantic relatedness of topics). The topics, coming from a humanly-intuitive taxonomy, leave room for building interpretable features. Figure {\ref{fig:train_size}} confirms that increasing the training data leads to performance gains in both tasks. Figure {\ref{fig:train_size}(i)} suggests that the Root Mean Square Error (RMSE) can be further improved with more training examples. However, Figure {\ref{fig:train_size}(ii)} tells a rather different story, where the performance gain seems to saturate around the 60\% mark. This is an indication that improving the ranks of the test data gets significantly harder at around 5,500 training examples ($\approx$ 60\% of the 9,239 training examples). \subsection{Relevance to AI/CS Education (RQ3)} The follow-up experimental results in Figure {\ref{fig:train_size}} demonstrate that this dataset allows achieving higher performance on CS-only lectures.
A likely reason for this may be the higher diversity of lectures in the non-CS category, which spans significantly different subjects. Nevertheless, Figure {\ref{fig:train_size}} shows that a test set RMSE of $\approx .1$ and SROCC of $\approx .8$ are achievable with CS lectures. Figure {\ref{fig:wordcloud}} further shows that the majority of the lectures in the dataset contain concepts relating to Artificial Intelligence and Data Science (e.g. Machine Learning, Ontology, Semantic Web, ...). This indicates that the majority of the CS lectures in the dataset contain topics relating to AI, making this dataset highly suitable for training engagement prediction models for AI and CS education. \subsection{Relevance to E-Learning and MOOCs (RQ4)} Table \ref{tab:ocw} shows strong evidence that the models trained with the VLE dataset generalise well to engagement modelling for e-learning type videos created for course teaching, even though the dataset contains many different video types (as per Table \ref{table:lect_type}). The trained models are much better at engagement prediction and ranking of tutorial-like videos than of general scientific talks. Having tested with lectures recorded using different MOOC video production techniques, the high performance obtained on \texttt{ocw} lectures confirms that the VLE dataset can be highly effective in building context-agnostic engagement models for e-learning and MOOC systems. \subsection{Relevance to Addressing Cold-Start (RQ5)} Table \ref{tab:personalise} shows that simply incorporating the context-agnostic engagement prediction into the TrueLearn Novel algorithm (together becoming TrueLearn$_{++}$) can lead to significant improvements in accuracy and precision. The same table also shows that the drop in overall F1 score can be attributed to the steep drop in Recall. Table \ref{tab:event1} sheds more light on where this steep drop of recall occurs.
That is, the baseline TrueLearn model always predicts positive engagement for the first event of the user. As seen in Table \ref{tab:event1}, the baseline TrueLearn model's recall is 100.00 while its accuracy and precision are identical, reflecting this fact. TrueLearn predicts positive engagement for event 1 of each user because the model has no information to base the prediction on \cite{truelearn}. However, Table \ref{tab:event1} shows that the scenario is different for TrueLearn$_{++}$, as the model has additional information during the first event. Both the accuracy and precision of predictions for the first event of the learner population improve significantly. Recall falls because the proposed context-agnostic model only captures a population-based prior, which may deviate from the individuality of the learners. However, it can be argued that making a prediction with additional information is better than predicting with no prior information. In the bigger picture, being able to make slightly more informed and varied predictions for the first event of learners based on lecture content features enables the significantly improved prediction accuracy and precision of TrueLearn$_{++}$ in Table \ref{tab:personalise}. It is also noteworthy that our experiment, for the sake of simplicity, uses a rule that could be improved significantly further, e.g. by weighting the probabilities of both the population-based and personalised models at the beginning of a user session (using weighting or stacking \cite{burke2002hybrid}), where the weight of the population-based engagement prediction decreases as we gather more information about the user. \section{Opportunities and Limitations} \label{limitations} The VLE dataset brings plenty of opportunities to the research community interested in modelling context-agnostic engagement using content-related features.
It is significantly larger (15x larger than \cite{Guo_vid_prod} and 3x larger than \cite{context_agnostic_engagement}) and more education-focused than contenders \cite{beyondviews} that focus on engagement with general videos. The larger number of examples may even enable more complex model families (e.g. deep learning) to be used for this task, although our limited early experiments did not bear fruit. Another noteworthy opportunity is that this dataset can be extended periodically to create even larger and more accurate versions. The Wiki-based features open up limitless possibilities, as many sophisticated feature sets can be built and experimented with. Due to the connectivity to Wikipedia, both its content and link structure can be exploited to invent meaningful yet interpretable features. A further step could enable other data structures, such as knowledge bases (e.g. Wikidata), the category tree etc., to be used for feature creation. Support for tasks beyond the two main tasks will allow this dataset to be useful in creating decision support systems that help future content creators produce engaging educational videos \cite{kurdi2021think}, while confirming prior findings \cite{Guo_vid_prod,brame2016effective,context_agnostic_engagement} regarding the drivers of learner engagement. Furthermore, the dataset also enables comparison with findings from studies outside of education \cite{beyondviews}. The Wikipedia features also provide grounds to explore topic-related tasks such as learning the structure of knowledge \cite{yu2020mooccube}. There are also limitations to this dataset. The VLE dataset is largely comprised of Computer Science and Data Science materials (Figure \ref{fig:wordcloud}), all delivered in English. While this is an opportunity for AI and Computer Science education, the results in Figure {\ref{fig:train_size}} also show that this leads to comparatively \emph{less fruitful} non-CS results.
The dataset and its features are also not suitable for non-English video collections. Despite its size, the dataset still lacks variety of materials in both topical and linguistic terms. At first sight, the majority of the lectures in our dataset come from male presenters, potentially creating a significant gender imbalance in the dataset. As pointed out in section \ref{sec:ethics}, we have taken some measures to restrict the feature set to what we believe to be more neutral features. However, since we do not have access to gender information in the collected data, it is impossible to test and guarantee that there is zero correlation between the proposed features in the VLE dataset and negative gender biases. Care should be taken when enhancing these features, and there is room for more rigorous tests to understand whether any gender biases are present within the dataset. \emph{Learner Engagement} is a loaded concept with many facets. In relation to consuming videos, many behavioural actions such as pausing, rewinding and skipping can contribute to latent engagement with a video lecture \cite{lan2017behavior}. Due to the technical limitations of the platform and privacy concerns, only watch time, views and mean ratings are included in this dataset. Although watch time has been used as a representative proxy for learner engagement with videos \cite{Guo_vid_prod,beyondviews,Covington2016}, we acknowledge that more informative measures may lead to more complete engagement signals. \section{Conclusions} Identifying the need for resources to push the frontiers of context-agnostic engagement prediction, a crucial part of addressing scalable quality assurance and the cold-start problem in educational recommenders, we release the VLE dataset. This dataset consists of around 12,000 videos with three groups of features, namely, i) content-based, ii) Wikipedia-based and iii) video-specific features, accompanied by iv) multiple implicit and explicit engagement labels.
We establish two main tasks and identify multiple other tasks that can be tackled with this dataset. Addressing the two main tasks proposed, we benchmark the new dataset and show significant prediction gains over a similar yet smaller prior dataset. In follow-on experiments, we investigate how the number of training examples relates to performance gains, while also demonstrating the suitability of this dataset for building models for AI/CS-related video collections. We further validate that the dataset can be used to model engagement with e-learning type video lectures, showing its relevance to educational recommendation systems in the context of MOOCs. With a simple experiment, we also demonstrate that such a model can be incorporated into a personalised (contextual) engagement prediction model to significantly improve its predictions. \paragraph{Future Directions} We see several lines of future work addressing the current limitations of the dataset. Adding diverse examples from different subject areas is our top priority. Experimenting further with more sophisticated Wiki-based features (e.g. by exploiting the Wikipedia semantic graph \cite{ponza2020computing}) is a promising way forward. The possibility of including more learner engagement related signals (e.g. pauses, replays, skips, etc.) will be explored in a subsequent version of the dataset, without compromising learner privacy. As more understanding of engagement with other modalities (such as PDFs and e-Books) is gained, it is possible to add learning resources from diverse modalities to widen the horizons of the dataset and improve understanding of engagement with different modalities of educational material. This work only devises a simple mechanism to combine the context-agnostic model with a personalisation model, for the sake of proving that the proposal has genuine use-cases.
The experiment barely scratches the surface of how a context-agnostic engagement prediction model can be incorporated into an educational recommendation system. There is a variety of alternative and more sophisticated approaches that could potentially bring larger performance gains, which we will explore in future work. \section{Acknowledgments} This research was partially conducted as part of the X5GON project funded by the EU's Horizon 2020 research programme, grant No 761758. This work is also supported by the European Commission funded project ``Humane AI: Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us'' (grant 820437) and the EPSRC Fellowship titled ``Task Based Information Retrieval'' (grant EP/P024289/1). \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} It is commonly accepted that most stars in the Milky Way were born in close proximity to other stars, constituting a stellar structure that formed at the same time within the same parental molecular gas structure \citep[e.g.][]{Lada_2003}.\footnote{In this work we will use the terminology stellar ``structure" to refer to any stellar system that formed at the same time within the same parental cloud of gas (encompassing both stellar clusters and associations) regardless of virial state.} These stellar siblings should be similar to one another in terms of their location, age, kinematics, and chemistry. As these stellar structures dissolve into the Galactic field, they offer an opportunity to study the star formation history of the Milky Way and the chemodynamical evolution of its disk. As the largest and most accurate astrometric catalog of stars ever produced, \textit{Gaia} \citep{Gaia_2016} offers an unprecedented opportunity to study these stellar structures from their formation to their dissolution. By constraining the distances and proper motions to over a billion stars, as well as the radial velocities of millions of stars, \textit{Gaia} has not only shed new light on the spatial and dynamical properties of existing stellar structures, but has also enabled the discovery of new ones. These discoveries include hundreds of previously unknown open clusters \citep[e.g.][]{Castro_Ginard_2020}, as well as new classes of stellar structures with much more complex spatial distributions \citep[c.f.][]{Cantat_Gaudin_2022}, including stellar ``streams" \citep{Meingast_2021}, ``pearls" \citep{Coronado_2022}, ``rings" \citep{Cantat_Gaudin_2019}, ``snakes" \citep{Wang_2022}, and ``strings" \citep[][KC19]{Kounkel_2019} \citep[see also][]{Kounkel_2020}. 
There are four attributes that members of a newly discovered stellar structure in the \textit{Gaia} era should share in order to plausibly be considered co-eval, or born at the same time within the same parental molecular gas structure. First, stars in a structure should have largely similar ages. Second, members of a stellar structure should be close enough to one another in 3D space such that they could have been born in the same location. Third, members of a stellar structure should share similar motions, as evidenced by small dispersion in their \textit{Gaia} tangential and radial velocities. Finally, members of a stellar structure should have similar metallicities, as evidenced by small dispersion in elemental abundances \citep[e.g. as measured by spectroscopic surveys like GALAH and APOGEE;][]{GALAH_DR3, APOGEE_DR16}. With these attributes in mind, we take a closer look at the spatial, kinematic, and abundance variations of the stellar strings --- extended filamentary stellar features first proposed by KC19. KC19 present a sample of 328 claimed co-eval stellar strings, some spanning hundreds of parsecs in length. KC19 argue the string-like morphology is primordial, rather than the result of dynamical processes dissolving a central cluster. KC19 identify the strings in a multi-step process. First, they apply the HDBSCAN algorithm \citep{HDBSCAN} in 5D space ($l$, $b$, parallax $\pi$, and proper motions) to a sample of stars out to 1 kpc from the Sun detected in \textit{Gaia} DR2. Specifically, they perform several iterations of the HDBSCAN algorithm over different parallax ranges, primarily with the ``leaf" clustering method, to obtain a set of stellar groups with similar 5D properties. The authors then merge and split the groups detected in the various iterations by hand. Next, KC19 assign an age to each group using a combination of isochrone fitting and a convolutional neural network.
Finally, once a sample of stellar groups is identified, KC19 manually assemble the strings by connecting the individual groups with similar ages and visually checking that the strings are ``fully continuous [and] coherent in all kinematic [i.e. tangential velocities] and spatial [i.e. $l$, $b$, $\pi$] dimensions." After the groups are connected, KC19 compute a ``spine" for the string in 5D space by averaging the star-by-star ($l$, $b$, $\pi$, and kinematics) results in different plane-of-the-sky longitude bins along the projected string, before smoothing with a Savitzky-Golay filter to avoid strong fluctuations in the averages. In this work, we independently test the kinematic and spatial coherence of these strings -- as well as their intrinsic abundance variations -- using data not fully considered by and/or available at the time of KC19. In \S \ref{data} we present the publicly available spatial and kinematic data for stellar strings from \textit{Gaia} DR2 and EDR3 utilized in this work, along with ancillary spectroscopic data used to examine the abundance variations within a subset of the strings. In \S \ref{methods_results} we use these data to derive estimates of the stars' 3D spatial dispersion around their respective string ``spines", their radial velocity dispersions, their predicted virial masses, their predicted dynamical lifetimes, and their elemental abundance variations. We then use these constraints to show that nearly all of these stellar strings are inconsistent with being co-eval, physical entities, and are rather artificial structures affected by limitations in the manual assembly process used in their selection. In \S \ref{discussion} we discuss the implications of the strings' nonphysical nature within the wider context of the \textit{Gaia} literature on extended stellar structures. Finally, we conclude in \S \ref{conclusions}. 
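To make the spine construction described above concrete, the following is a minimal pure-Python sketch of the binning-and-smoothing idea. Names, the bin width, and the smoothing kernel are illustrative: KC19 smooth with a Savitzky-Golay filter (e.g. \texttt{scipy.signal.savgol\_filter}), for which a 3-point moving average stands in here.

```python
def spine_profile(lon_deg, values, bin_width=2.0):
    # Average a per-star quantity (e.g. parallax or a proper-motion component)
    # in plane-of-the-sky longitude bins along the projected string...
    bins = {}
    for l, v in zip(lon_deg, values):
        bins.setdefault(int(l // bin_width), []).append(v)
    centers = sorted(bins)
    means = [sum(bins[c]) / len(bins[c]) for c in centers]
    # ...then smooth the bin averages to avoid strong bin-to-bin fluctuations
    # (a simple moving average here; KC19 use a Savitzky-Golay filter).
    smooth = [sum(means[max(0, i - 1):i + 2]) / len(means[max(0, i - 1):i + 2])
              for i in range(len(means))]
    return [(c + 0.5) * bin_width for c in centers], smooth
```

Repeating this for each of $b$, $\pi$, and the kinematic quantities yields the 5D spine against which star-by-star offsets are measured.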
\section{Data} \label{data} KC19 identify 1312 stellar groups and 328 stellar strings, where a group is a single set of stars identified in HDBSCAN with similar 5D properties. We only consider the stellar strings in this work. We obtain the \textit{Gaia} DR2 data \citep{Gaia_2018} on the string stars directly from KC19 (see their Table 1), and we crossmatch their Table 1 with \textit{Gaia} EDR3 \citep{Gaia_2021} to obtain updated constraints on the parallax and parallax errors of the string stars. The XYZ positions (the Heliocentric Galactic Cartesian Coordinates) of string spines (defined using \textit{Gaia} DR2 data) are obtained from Table 3 in KC19. To analyze the kinematic coherence of the strings, we adopt the original radial velocity measurements from \textit{Gaia} DR2. To explore the metallicity distribution within the strings, we use the catalog from MHM22, which leverages GALAH DR3 data \citep{GALAH_DR3} to analyze the intrinsic abundance variations within nearby stellar structures, including ten strings (see their Supplementary Data). To compare the intrinsic abundance variations of the strings to a benchmark sample of open clusters, we adopt the catalog from \citet{Spina_2021} (see their Table 1), which combines GALAH \citep{GALAH_DR3} and APOGEE \citep{APOGEE_DR16} data to characterize the chemical compositions of hundreds of open clusters across the Galactic disk. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure1.png} \caption{\textit{Center}: A topdown view of stellar strings, colored by age, as reproduced from KC19 (see their Figure 13). In the zoom-in boxes, we show the actual distribution of stellar members around the string ``spine" in \textit{Gaia} DR2 for a sub-sample of four strings spanning a range of ages. The typical distance error to stars is very small, with an average parallax signal-to-noise $>$ 44 in \textit{Gaia} DR2.
We find that these four strings are representative of the morphology seen across the full sample: the dispersion along the line of sight is significantly larger than the typical error on the parallax measurements (e.g. Theia 1532, Theia 1104), and many strings are composed of isolated groups with no evidence of connection between them (e.g. Theia 74, Theia 435). See Figure Set 1 in the Appendix or the \href{https://faun.rc.fas.harvard.edu/czucker/Paper\_Figures/String_Gallery\_Interactive.html}{online interactive figure gallery} for similar panels for the remaining strings in the KC19 sample. \label{fig:topdown}} \end{figure*} \section{The Spatial, Dynamical, and Chemical Composition of Stellar Strings} \label{methods_results} In this section, we re-examine the spatial (\S \ref{subsec:spatial}), dynamical (\S \ref{subsec:dynamics}), and chemical (\S \ref{subsec:chemical}) distribution of the strings in KC19. \footnote{On \href{https://github.com/catherinezucker/stellar\_strings\_reexamined.git}{GitHub}, we provide a Jupyter Notebook that reproduces all the results in this section, including the values in Table 1, and the data behind Figures \ref{fig:topdown}, \ref{fig:dispersion}, \ref{fig:lifetimes} and \ref{fig:metallicity}.} \subsection{3D Spatial Properties of Stellar Strings} \label{subsec:spatial} In Figure \ref{fig:topdown} we show a topdown XY \textit{Gaia} DR2 view of the stellar strings in a Heliocentric Galactic Cartesian reference frame, as reproduced from KC19 (see their Figure 13). We highlight a selection of strings (over a range of ages) to convey the relationship between each string and its underlying stellar members in a set of zoom-in panels. Similar plots for the rest of the string sample are shown in Figure Set 1 in the Appendix, and additionally include the topdown \textit{Gaia} EDR3 stellar distribution alongside DR2 shown here for comparison. 
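The parallax signal-to-noise statistic quoted in the figure caption above is simply the ratio of each star's parallax to its uncertainty, summarised per string; a minimal sketch (the function name is ours):

```python
def median_parallax_snr(parallax_mas, parallax_err_mas):
    # Per-string summary statistic: the median of parallax / parallax_error
    # over the string's member stars.
    snr = sorted(p / e for p, e in zip(parallax_mas, parallax_err_mas))
    n = len(snr)
    mid = n // 2
    return snr[mid] if n % 2 else 0.5 * (snr[mid - 1] + snr[mid])
```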
The distances to the stars are very well constrained: the median signal-to-noise of the parallax measurements per string surpasses 44:1 in DR2 and 61:1 in EDR3. We find that for a majority of the strings, the dispersion around the spine is much larger than the average distance uncertainty over the ensemble of parallax measurements. We also note that many of the strings appear to be composed of discrete stellar groups that lack clear connections in 3D physical space, despite being interpreted as connected structures in KC19. Leveraging the improved astrometric precision of \textit{Gaia} EDR3, we compare the 3D spatial dispersion of stars around the string spine in \textit{Gaia} DR2 versus \textit{Gaia} EDR3 to determine whether the dispersion around the spine decreases as parallax errors improve, as would be expected if stellar strings are true physical structures. For each star in a given string, we compute the distance between the star's XYZ position in DR2 and EDR3 and the closest XYZ point in the string's spine. The results are presented in Figure \ref{fig:dispersion}, which shows the average percentage by which the stars move closer to (or farther away from) the spine as a function of the increase in the signal-to-noise of the parallax measurements. We find that despite an improvement in the parallax signal-to-noise of 20\%--120\% over the ensemble of strings in \textit{Gaia} EDR3, there is \textit{no improvement} in the stars' distances to the spines. We would have anticipated an improvement in the 3D spatial dispersion of stellar string members if these stars were born at the same time within the same parental filamentary molecular gas structure, as argued in KC19. However, we do observe (cf. Figure Set 1 in the Appendix and the \href{https://faun.rc.fas.harvard.edu/czucker/Paper\_Figures/String\_Gallery\_Interactive.html}{online interactive figure gallery}) that in \textit{some} cases the 3D spatial dispersion within \textit{individual} stellar groups inside a string does improve.
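The per-star offset test described above reduces to a nearest-neighbor distance computation. A minimal sketch follows, using hypothetical coordinate arrays; the actual analysis uses the \textit{Gaia} XYZ positions of the string members and the KC19 spine tables:

```python
import numpy as np

def offset_to_spine(star_xyz, spine_xyz):
    """Distance from each star to the nearest sampled point on the
    string's spine. star_xyz: (n_stars, 3); spine_xyz: (n_spine, 3)."""
    # Pairwise separations between every star and every spine point.
    d = np.linalg.norm(star_xyz[:, None, :] - spine_xyz[None, :, :], axis=2)
    return d.min(axis=1)

def mean_percent_change(offset_dr2, offset_edr3):
    """Average percentage by which stars moved closer to (positive) or
    farther from (negative) the spine between DR2 and EDR3."""
    return 100.0 * np.mean((offset_dr2 - offset_edr3) / offset_dr2)
```

For a genuine spatial structure, this statistic should be positive on average as the parallax errors shrink; for the strings it is instead consistent with zero.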
This improvement within individual groups suggests that the strings may be partly composed of real stellar subgroups; however, we see no evidence for the larger string structures. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure2.png} \caption{Change in the 3D spatial dispersion of stars around their string's spine from \textit{Gaia} DR2 to \textit{Gaia} EDR3. Each grey dot represents one of the 328 individual strings. The dots show the average percentage change in the 3D spatial offset from the spine as a function of the average increase in the signal-to-noise of the parallax measurements for the stars. A positive percentage (blue region) indicates that the stars move closer to their spine with improved parallax measurements, while a negative percentage (orange region) indicates that they move farther away. The black line shows the rolling median and indicates that, despite a significant improvement in the signal-to-noise of the parallax measurements, the dispersion around the spine does not, on average, show the improvement expected of genuine spatial structures. The diagonal line in the top left shows the predicted trend for genuine spatial structures. \label{fig:dispersion}} \end{figure*} \subsection{The Dynamical Properties of Stellar Strings} \label{subsec:dynamics} While the lack of any improvement in the 3D spatial dispersion of stars around the string spines raises concerns about their fidelity, one way to validate the authenticity of the strings is to show that their stellar members still share similar motions. In the original study, KC19 analyze the dispersion in the tangential velocities of string members and find it to be small ($\rm < 2.5 \; km \; s^{-1}$). This small dispersion in the tangential velocities is expected, as by design the stellar groups that were manually assembled into strings had to share similar 5D properties ($l$, $b$, parallax $\pi$, and proper motions) to be detected as a group in HDBSCAN.
As such, the fairest way to evaluate the authenticity of the strings is to characterize their velocity dispersion in the sixth dimension --- the radial velocity dimension --- not considered in the original 5D clustering algorithm. KC19 find that the dispersion in the radial velocities, $\rm \sigma_{V_r}$, spans $\rm 5 - 40 \; km \; s^{-1}$, with an average radial velocity dispersion of $\rm 16 \; km \; s^{-1}$, a factor of $5-10\times$ larger than for the tangential velocities (see Figure 12 from KC19). We note that the typical \textit{Gaia}-based radial velocity dispersion for loosely bound open clusters is $\rm \approx 1 \; km \; s^{-1}$ \citep{Soubiran_2018}. These strings are thus, at the very least, dynamically atypical of known co-eval structures. We reproduce the KC19 $\rm \sigma_{V_r}$ results for the subset of stars with a \textit{Gaia} radial velocity measurement (see Column 6 in Table \ref{tab:summary}).\footnote{The radial velocity dispersions for the strings are shown in Figure 12 of KC19, but the data behind the figure are not made publicly available. We tested a few variations for computing the strings' radial velocity dispersions (including weighting the radial velocity measurements by their errors) but find that an unweighted standard deviation is the most consistent with the original results. See Figure \ref{fig:rv_disp} in the Appendix for a comparison between our derived radial velocity dispersions and the radial velocity dispersions shown in Figure 12 of KC19.} We then estimate the predicted virial mass of each string as: \begin{equation} M_{vir} = \frac{\sigma_{V_r}^{2} \times \eta \times r_{hm}}{G} \end{equation} where $r_{hm}$ is the half-mass radius of the string, which we approximate as the median distance offset between the string's 3D spine and the 3D positions of its stars using \textit{Gaia} EDR3 data (see Column 4 of Table \ref{tab:summary}).
The parameter $\eta$ is a dimensionless constant that depends on the shape of the density profile, typically assumed to be $\eta \approx 10$ for a Plummer model \citep{Plummer_1911} characterized by a steep density profile. Since recent studies have shown that $\eta$ can be smaller (consistent with much broader density profiles), particularly for younger systems due to e.g. mass segregation \citep{Zwart_2010}, we very conservatively adopt $\eta = 1$. Larger values of $\eta$ will only raise the threshold necessary for the strings to be in virial equilibrium. We find that the average predicted virial mass of the strings is $\approx 2\times 10^{6} \; \rm M_{\sun}$ (see Column 7 of Table \ref{tab:summary}). After computing the predicted virial mass of the strings, we approximate the observed mass of each string by counting the number of members and assuming an average stellar mass of $\rm 0.61 \; M_\sun$, based on the Initial Mass Function from \citet{Maschberger_2013} \citep[see also e.g.][]{Kuhn_2019} (see Column 8 of Table \ref{tab:summary}). We find an average observed mass of $\rm M_{observed} = 134 \; M_\sun$, meaning that the strings on average require $> 10^{4}\times$ larger masses than observed in order to be in virial equilibrium. Even assuming a very poor completeness fraction of e.g. 10\% (such that a majority of a string's membership goes undetected in \textit{Gaia}), the virial masses are still typically three orders of magnitude larger than the observed masses, and the total deficit between the strings' observed masses and their predicted virial masses is $\rm 10^{9} \; M_\odot$. Thus, the strings are not gravitationally bound. While the unbound state of the strings does not in itself imply that strings are not physical entities, it does provide constraints on their predicted lifetimes.
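As an order-of-magnitude check on Equation 1, the quoted averages can be reproduced with a back-of-the-envelope calculation; the half-mass radius of 30 pc and the membership of 220 stars used here are illustrative assumptions, not entries from Table \ref{tab:summary}:

```python
# Back-of-the-envelope virial check (Eq. 1) in SI units. The half-mass
# radius (30 pc) and member count (220) are illustrative assumptions.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30      # solar mass, kg
PC = 3.086e16        # parsec, m

sigma_vr = 16e3      # m/s, average string radial velocity dispersion
eta = 1.0            # conservative structure constant (Plummer ~ 10)
r_hm = 30 * PC       # m, hypothetical half-mass radius

m_vir_msun = sigma_vr**2 * eta * r_hm / G / MSUN
m_obs_msun = 220 * 0.61   # member count times mean stellar mass (Msun)

print(f"M_vir ~ {m_vir_msun:.1e} Msun")                  # ~2e6 Msun
print(f"M_vir / M_obs ~ {m_vir_msun / m_obs_msun:.1e}")  # > 1e4
```

Even this conservative choice of $\eta$ leaves the predicted virial mass four orders of magnitude above the observed mass.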
If a string is not gravitationally bound, we expect it to disperse on roughly a crossing time, $\rm t_{cross}$, such that its predicted lifetime is given as: \begin{equation} t_{dispersal} \approx t_{cross} \approx \frac{r_{hm}}{\sigma_{V_r}} \end{equation} We find a median predicted dispersal time for the strings of only 2 Myr. Using a combination of a convolutional neural network and isochrone fitting, KC19 find ages between 4 Myr and 9 Gyr for the strings. Dividing the strings' reported ages by their dispersal times, we find that on average the strings' reported ages are $115\times$ larger than their dispersal times, such that they should have dispersed in $<1\%$ of their lifetimes on average (see Column 12 of Table \ref{tab:summary}). Only a single string in the sample has a reported age less than its predicted lifetime (Theia 9), which is also the youngest string in the sample. We argue that the remaining strings in the sample are not physical given their reported spatial distributions, their radial velocity dispersions, and their inferred ages in KC19. \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{Figure3.png} \caption{The virial state of the strings versus the discrepancy between the strings' reported and predicted lifetimes. The vertical axis shows the ratio of the predicted virial masses of the strings (based on their observed radial velocity dispersions) to their observed masses. Every string in the sample is gravitationally unbound, requiring on average $2\times10^{4}$ times more mass than observed to be in virial equilibrium. Given their unbound state, the dispersal time of the strings should be roughly the crossing time. As shown on the horizontal axis, we find that the reported ages of the strings are orders of magnitude larger than their dispersal times, meaning that the strings should have dispersed in a small fraction of their reported lifetimes and are thus inconsistent with being physical structures.
\label{fig:lifetimes}} \end{figure*} \subsection{The Chemical Homogeneity of Stellar Strings} \label{subsec:chemical} Given the lack of spatial and dynamical coherence of the strings, we perform a final test of the physicality of the strings by examining the uniformity of their members' chemical composition with respect to a well-studied sample of open cluster members. Specifically, if a set of stars is born within the same parental molecular gas structure, the intrinsic chemical dispersion of these stars, as traced by stellar spectroscopy, should be small \citep[e.g.][]{Feng_2014}. To examine the chemical homogeneity of the strings, we build off the study of \citet[][MHM22]{Manea_2022}, who leverage GALAH data \citep{GALAH_DR3} to characterize the intrinsic chemical dispersion ($\rm \sigma_{[X/H]}$) of a sample of ten strings drawn from KC19. MHM22 fit the following likelihood function, assuming that the chemical profile of each string is Gaussian with some mean abundance $\rm \mu_{[X/H]}$ and intrinsic dispersion $\rm \sigma_{[X/H]}$: \begin{equation} \mathcal{L} = \prod_i^N \exp \left [ \frac{-(x_i - \mu_{[X/H]})^2} {2(\sigma_{[X/H]}^2 + \delta_i^2)} \right ] \times \frac{1}{\sqrt{2\pi(\sigma_{[X/H]}^2 + \delta_i^2)}} \label{eq:likelihood} \end{equation} where $x_i$ and $\delta_i$ are the GALAH mean abundance and its reported uncertainty for the $i$th star in the string in a given element $\rm X$. MHM22 characterize the intrinsic chemical dispersion across a range of elements with sample sizes between 7 and 19 stars per string. Given the likelihood function in Equation \ref{eq:likelihood} and the subset of stars in each string with GALAH data, we are able to reproduce the results of MHM22. In the left panel of Figure \ref{fig:metallicity}, we present the original results of MHM22 (see their Figure 4), showing the intrinsic dispersion $\rm \sigma_{[X/H]}$ for each of the ten strings.
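The fit of Equation \ref{eq:likelihood} can be sketched with a simple grid search over the intrinsic dispersion, profiling out the error-weighted mean at each step; the demonstration below uses synthetic abundances, not the GALAH catalog:

```python
import numpy as np

def fit_intrinsic_dispersion(x, err, sigma_grid=None):
    """Maximize the Gaussian likelihood of Eq. (3): each measurement
    x_i ~ N(mu, sigma_int^2 + err_i^2). Returns (mu, sigma_int)."""
    if sigma_grid is None:
        sigma_grid = np.linspace(0.0, 0.5, 501)  # dex
    best_nll, best_mu, best_s = np.inf, None, None
    for s in sigma_grid:
        var = s**2 + err**2
        mu = np.sum(x / var) / np.sum(1.0 / var)  # MLE of mu for fixed s
        nll = 0.5 * np.sum((x - mu)**2 / var + np.log(2 * np.pi * var))
        if nll < best_nll:
            best_nll, best_mu, best_s = nll, mu, s
    return best_mu, best_s

# Synthetic "string": true intrinsic scatter 0.10 dex, errors 0.05 dex.
rng = np.random.default_rng(0)
err = np.full(400, 0.05)
x = rng.normal(0.0, 0.10, size=400) + rng.normal(0.0, err)
mu_hat, s_hat = fit_intrinsic_dispersion(x, err)
```

When the reported errors $\delta_i$ are comparable to the true scatter, the recovered $\sigma_{[X/H]}$ becomes very sensitive to any misestimate of those errors, a point we return to below.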
Comparing the intrinsic dispersion of each string to a sample of local field stars, MHM22 find that all but one of the strings are more homogeneous than their local fields, with half of the sample as homogeneous in several elements as the well-studied open cluster M67 \citep{Gao_2018}. In order to test whether it is possible for a string to appear chemically homogeneous in several elements without being co-eval (as suggested by our analysis in \S \ref{subsec:spatial} and \S \ref{subsec:dynamics}), we perform an experiment using the catalog from \citet{Spina_2021}, who curate a sample of high-probability members of nearby open clusters with GALAH and APOGEE abundance information. As we have found that the strings are likely to be agglomerations of unrelated open clusters and other dynamically cold field stars, we believe this experiment provides a more direct comparison point for interpreting the apparent homogeneity found in MHM22. Based on the sample sizes from \citet{Manea_2022}, we draw random sub-samples of between 7 and 19 stars from the open cluster membership reported in \citet{Spina_2021}, restricting to open clusters which span the same broad age range as the strings considered in MHM22 (7.52 $<$ log(Age) $<$ 9.23), have a detection in GALAH, and have a high membership probability ($p > 0.75$). These sub-samples consist of stars that do \textit{belong to well-studied open clusters}, but each sub-sample is drawn from many clusters that are \textit{unrelated}, with different ages and different locations in the Galaxy. We then fit the same likelihood function as MHM22 to the reported GALAH abundance and uncertainty estimates for the random sub-samples drawn from the \citet{Spina_2021} catalog, repeating this procedure over many trials. The results are summarized in the right panel of Figure \ref{fig:metallicity}, which shows the abundance variations for the random draws in comparison to the string sample.
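The random-draw experiment can be sketched as follows; here `pooled` stands for a hypothetical table of (abundance, error) rows concatenated across unrelated \citet{Spina_2021} clusters, and the moment-based estimator is a simplified stand-in for the full likelihood fit:

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_and_measure(pooled, n_min=7, n_max=19):
    """Draw one MHM22-sized sub-sample from a pooled (abundance, error)
    table spanning many unrelated clusters, then return a simple
    error-deconvolved dispersion estimate for that draw."""
    n = int(rng.integers(n_min, n_max + 1))
    idx = rng.choice(len(pooled), size=n, replace=False)
    x, err = pooled[idx, 0], pooled[idx, 1]
    # Subtract the mean measurement variance from the sample variance;
    # clip at zero, since noise can exceed the apparent scatter.
    var_int = np.var(x, ddof=1) - np.mean(err**2)
    return np.sqrt(max(var_int, 0.0))
```

Repeating this over many trials gives a distribution of apparent dispersions analogous to the right panel of Figure \ref{fig:metallicity}.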
We find that we can match the chemical homogeneity of the strings with our random draws, with many draws comparable to the intrinsic dispersion of the benchmark M67 cluster \citep{Gao_2018} in several elements. We attribute the homogeneity of the random cluster draws to two causes. First, the uncertainties on the GALAH abundances for an individual star $i$ (i.e. $\delta_i$ in Equation \ref{eq:likelihood}) are typically of the same order as the expected intrinsic abundance variation of a cluster like M67 ($\delta_i \approx \sigma_{[X/H]} \approx 0.1$ dex). Since the likelihood function in Equation \ref{eq:likelihood} recovers the intrinsic scatter only after modeling out the observational error, any overestimation of the errors in the GALAH catalog can lead to unrealistically small estimates of the intrinsic scatter. Second, these random sub-samples can appear more homogeneous than field stars simply by virtue of the stars being members of open clusters, even if the clusters themselves are physically unrelated. In addition, the local field star sample from \citet{Manea_2022} (which shows poorer chemical homogeneity than the strings) also likely includes some thick disk stars, which will have a wider metallicity dispersion than the thin disk string stars selected by KC19. Our results are consistent with a scenario in which, while these strings may sometimes contain real clusters, their abundance patterns are not discriminatory enough to favor a common origin for all string members over a range of origins in potentially real, yet physically unrelated, spatial sub-structures. The chemical homogeneity of the strings found in MHM22 therefore does not show them to be co-eval.
\begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure4.png} \caption{Intrinsic abundance variations across fifteen different elements for the collection of ten strings from MHM22 (blue points, left) and ten random sub-samples of stars drawn from unrelated open clusters in the Milky Way (black points, right). The gray asterisks in each panel show the intrinsic abundance variations for the benchmark cluster M67 \citep{Gao_2018, Manea_2022}, while the gray shaded region marks the zone of reported open cluster dispersions \citep[e.g.][]{Ness_2018}. By drawing random samples of stars from unrelated clusters, we are able to reproduce the typical chemical homogeneity of a majority of the strings, indicating that the strings' abundance variations are not discriminatory enough to favor a common physical origin. An interactive version of this figure showing hundreds of random draws, rather than just ten, is available online \href{https://faun.rc.fas.harvard.edu/czucker/Paper\_Figures/Figure4\_strings\_vs\_random.html}{here}. \label{fig:metallicity}} \end{figure*} \section{Discussion} \label{discussion} Through a spatial, kinematic, and chemical re-analysis of their stellar membership, we have shown that strings are inconsistent with being co-eval stellar structures with a common physical origin. Recall that KC19 select all 328 strings through a manual assembly process, stitching together stellar groups by hand and confirming kinematic and spatial coherence by eye. Our work underscores the need for a more systematic, \textit{reproducible} selection process when declaring the existence of a new type of stellar structure: not only should these structures remain spatially coherent and continuous when viewed in true 3D physical space (in contrast to the morphologies seen in Figure \ref{fig:topdown}), but their radial velocity dispersions should also be significantly smaller than measured for the Galactic field.
In the solar neighborhood, the age-velocity dispersion relation (describing how the velocity dispersion of stars appears to increase with age due to dynamical heating) shows typical vertical velocity dispersions of $\rm \approx 5 \; km \; s^{-1}$ for stars $<$ 1 Gyr old, about $3\times$ smaller than the typical string radial velocity dispersion of $\rm 16 \; km \; s^{-1}$ \citep[see][]{Bird_2021, Casagrande_2011}. The age-velocity dispersion relation has also been explored in 3D using open clusters in \citet{Tarricq_2021}, derived by computing the 3D velocity dispersion for samples of open clusters falling in different age bins. \citet{Tarricq_2021} find a typical 3D velocity dispersion of $\rm 13 \; km \; s^{-1}$ for open clusters in the range of 150 - 250 Myr (consistent with the typical age of a string reported in KC19). The 3D velocity dispersion \textit{over} a sample of many open clusters in \citet{Tarricq_2021} is thus less than the typical radial velocity dispersion \textit{within} an individual string in KC19 \citep[see Figure 11, Table 3 in][]{Tarricq_2021}. While we argue against the physicality of the strings in KC19, we emphasize that several other studies in the \textit{Gaia} era present compelling evidence for filamentary stellar distributions identified through more automated, reproducible selection algorithms. For example, \citet{Meingast_2019} identify an extended 400+ pc long, 2000 $\rm M_\odot$ stream called Meingast-1 \citep[also known as Pisces Eridanus; e.g.][]{Hawkins_2020} through a wavelet decomposition of the 3D velocity space distribution of nearby stars. Not only do \citet{Meingast_2019} find that the stream is spatially continuous in Heliocentric Galactic Cartesian space (see their Figure 2), but they also find a very low 3D velocity dispersion of $\rm 1.3 \; km \;s^{-1}$ for the system.
Similarly, building off the analysis of \citet{Meingast_2019}, \citet{Meingast_2021} present a new method for identifying highly extended coronae --- reminiscent of tidal structures --- around ten nearby open clusters. The \citet{Meingast_2021} technique accounts for projection effects in proper motion space in an automated way \citep[inspired by the ``converging point technique"; cf.][]{van_Leeuwen_2009} before deconvolving the spatial distribution with a Gaussian mixture model to mitigate \textit{Gaia} measurement errors. Critically, the coronae are likewise validated via their 3D space motions, showing typical 3D velocity dispersions of $\rm 1.4 \; km \; s^{-1}$. Both the extended coronae and the Meingast-1 stream have velocity dispersions on par with those found for nearby open clusters using the same \textit{Gaia} DR2 radial velocity data considered in this work. Using a sample of a few hundred nearby open clusters, \citet{Soubiran_2018} find typical intra-cluster radial velocity dispersions of $\rm 1.0-1.5 \; km \; s^{-1}$. In contrast, as noted in \S \ref{subsec:dynamics}, the typical radial velocity dispersion of the strings is roughly fifteen times larger ($\rm \sigma_{V_r} \approx 15 \; km \; s^{-1}$, implying $\rm \sigma_{V_{3D}} > 15 \; km \; s^{-1}$), with some strings approaching radial velocity dispersions of $\rm 40 \; km \; s^{-1}$. Only a few percent of the strings in KC19 have radial velocity dispersions $\rm < 5 \; km \; s^{-1}$, while $\approx$ 90\% of the open clusters do \citep{Soubiran_2018}, as do 100\% of the newly identified extended structures in \citet{Meingast_2019} and \citet{Meingast_2021}. Thus, the unphysical nature of the strings is not due to their claimed unique filamentary morphologies, but rather to their lack of true 3D kinematic and spatial coherence, stemming from limitations in the manual assembly process.
\section{Conclusions} \label{conclusions} We investigate the spatial, dynamical, and chemical composition of stellar strings, a proposed collection of highly extended filamentary stellar structures identified in KC19 by manually linking stellar groups with similar 5D properties ($l$, $b$, parallax $\pi$, and proper motions). Our conclusions are as follows: \begin{enumerate} \item Using updated constraints on the distances to stellar string members from \textit{Gaia} EDR3, we find that the 3D spatial dispersion of stars around the string spine does not improve over \textit{Gaia} DR2: the average percentage by which stars move closer to their respective string spines is consistent with zero, despite the signal-to-noise of the parallax measurements per string increasing by 20\% - 120\%. Real structures should tighten with higher-fidelity distance measurements. \item The average dispersion in the radial velocities of the strings is $\rm 16 \; km \; s^{-1}$, about fifteen times larger than the typical radial velocity dispersion of both open clusters in \textit{Gaia} \citep{Soubiran_2018} and other catalogs of extended stellar structures \citep[e.g. stellar ``streams" in the disk from][]{Meingast_2019, Meingast_2021}. \item Given these radial velocity dispersions, the virial masses of the strings are on average $> 10^{4}\times$ larger than their observed masses. Even assuming very low completeness fractions, all strings are gravitationally unbound. \item Given their unbound state, the strings should disperse on roughly a crossing time, which we estimate to be typically 2 Myr, while the ages of the strings from KC19 range from 4 Myr to 9 Gyr. Thus the strings should not exist based on their predicted dynamical lifetimes, and should have dispersed in $<$ 1\% of their reported ages, on average.
\item Using complementary constraints on stellar chemical abundances from GALAH DR3 \citep{GALAH_DR3}, we compare the intrinsic abundance dispersion of the strings found in MHM22 to random samples of stars drawn from physically unrelated open clusters. We find that the chemical homogeneity of the strings is similar to the chemical homogeneity seen in random stellar draws across clusters. \item The combined spatial, dynamical, and chemical evidence rules out the scenario that stars within a typical string were born at the same time within the same parental molecular gas structure. However, some subset of the stars within a string may still be co-eval, as many of these strings contain real clusters that have been linked together to form the larger string-like structure. \end{enumerate} Ultimately, by providing radial velocity measurements for five times more stars than previous data releases, \textit{Gaia} DR3 will provide further opportunity to characterize the physicality (or lack thereof) of not just the strings, but also the broader array of structures identified in extant \textit{Gaia} data releases. By invoking simple spatial and dynamical arguments, our work provides a straightforward, yet discerning, lens through which to evaluate the fidelity of newfound classes of objects, which should be considered when declaring the existence of new co-eval stellar structures in the \textit{Gaia} era. \begin{acknowledgments} CZ acknowledges that support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51498.001 awarded by the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. SL acknowledges support from NSF grant AST2109234 and HST-AR-16624 from STScI. The authors would like to thank Keith Hawkins, Catherine Manea, and Kevin Covey for helpful discussions that contributed to this work.
\end{acknowledgments}
to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description} \end{document} \subsubsection*{#1}} \pagestyle{headings} \markright{Reference sheet: \texttt{natbib}} \usepackage{shortvrb} \MakeShortVerb{\|} \begin{document} \thispagestyle{plain} \newcommand{\textsc{Bib}\TeX}{\textsc{Bib}\TeX} \newcommand{\texttt{#1}\def\filedate{#2}\def\fileversion{#3}}}{\texttt{#1}\def\filedate{#2}\def\fileversion{#3}}} \begin{center}{\bfseries\Large Reference sheet for \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ usage}\\ \large(Describing version \fileversion\ from \filedate) \end{center} \begin{quote}\slshape For a more detailed description of the \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package, \LaTeX\ the source file \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\texttt{.dtx}. \end{quote} \head{Overview} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package is a reimplementation of the \LaTeX\ |\cite| command, to work with both author--year and numerical citations. It is compatible with the standard bibliographic style files, such as \texttt{plain.bst}, as well as with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago}, \texttt{astron}, \texttt{authordate}, and of course \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}. \head{Loading} Load with |\usepackage[|\emph{options}|]{|\texttt{#1}\def\filedate{#2}\def\fileversion{#3}}|}|. See list of \emph{options} at the end. 
\head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. 
[21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. 
\begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. \head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. 
Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ packages are loaded is unimportant. 
The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ is also loaded; instead, add the option to \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. \head{Sorting and compressing citations} Do not use the \texttt{cite} package with \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\texttt{.cfg} which is read in after the main package file. 
\head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separaters; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; causes overfull hboxes but helps with some \texttt{hyperref} problems. 
\end{description} \end{document} \section{Introduction} \label{sec:intro} It is commonly accepted that most stars in the Milky Way were born in close proximity to other stars, constituting a stellar structure that formed at the same time within the same parental molecular gas structure \citep[e.g.][]{Lada_2003}.\footnote{In this work we will use the terminology stellar ``structure" to refer to any stellar system that formed at the same time within the same parental cloud of gas (encompassing both stellar clusters and associations) regardless of virial state.} These stellar siblings should be similar to one another in terms of their location, age, kinematics, and chemistry. As these stellar structures dissolve into the Galactic field, they offer an opportunity to study the star formation history of the Milky Way and the chemodynamical evolution of its disk. As the largest and most accurate astrometric catalog of stars ever produced, \textit{Gaia} \citep{Gaia_2016} offers an unprecedented opportunity to study these stellar structures from their formation to their dissolution. By constraining the distances and proper motions to over a billion stars, as well as the radial velocities of millions of stars, \textit{Gaia} has not only shed new light on the spatial and dynamical properties of existing stellar structures, but has also enabled the discovery of new ones. These discoveries include hundreds of previously unknown open clusters \citep[e.g.][]{Castro_Ginard_2020}, as well as new classes of stellar structures with much more complex spatial distributions \citep[c.f.][]{Cantat_Gaudin_2022}, including stellar ``streams" \citep{Meingast_2021}, ``pearls" \citep{Coronado_2022}, ``rings" \citep{Cantat_Gaudin_2019}, ``snakes" \citep{Wang_2022}, and ``strings" \citep[][KC19]{Kounkel_2019} \citep[see also][]{Kounkel_2020}. 
There are four attributes that members of a newly discovered stellar structure in the \textit{Gaia} era should share in order to plausibly be considered co-eval, or born at the same time within the same parental molecular gas structure. First, stars in a structure should have largely similar ages. Second, members of a stellar structure should be close enough to one another in 3D space such that they could have been born in the same location. Third, members of a stellar structure should share similar motions, as evidenced by small dispersion in their \textit{Gaia} tangential and radial velocities. Finally, members of a stellar structure should have similar metallicities, as evidenced by small dispersion in elemental abundances \citep[e.g. as measured by spectroscopic surveys like GALAH and APOGEE;][]{GALAH_DR3, APOGEE_DR16}. With these attributes in mind, we take a closer look at the spatial, kinematic, and abundance variations of the stellar strings --- extended filamentary stellar features first proposed by KC19. KC19 present a sample of 328 claimed co-eval stellar strings, some spanning hundreds of parsecs in length. KC19 argue the string-like morphology is primordial, rather than the result of dynamical processes dissolving a central cluster. KC19 identify the strings in a multi-step process. First, they apply the HDBSCAN algorithm \citep{HDBSCAN} in 5D space ($l$, $b$, parallax $\pi$, and proper motions) to a sample of stars out to 1 kpc from the Sun detected in \textit{Gaia} DR2. Specifically, they perform several iterations of the HDBSCAN algorithm over different parallax ranges, primarily with the ``leaf" clustering method, to obtain a set of stellar groups with similar 5D properties. Then the authors manually merge and split the groups detected in the various iterations. Next, KC19 assign an age to each group using a combination of isochrone fitting and a convolutional neural network.
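The group-finding step above relies on running HDBSCAN over 5D astrometric features. As a self-contained illustration of the underlying idea (grouping stars with similar $l$, $b$, $\pi$, and proper motions), the sketch below substitutes a simple single-linkage friends-of-friends pass for HDBSCAN itself, which lives in the external \texttt{hdbscan} package; the linking length and the feature standardization are illustrative assumptions, not KC19's settings.

```python
import numpy as np

def find_groups(features, linking_length=0.3):
    """Toy stand-in for HDBSCAN: single-linkage friends-of-friends
    grouping over standardized 5D features (l, b, parallax, pm_ra, pm_dec)."""
    X = (features - features.mean(axis=0)) / features.std(axis=0)
    n = len(X)
    labels = np.arange(n)  # every star starts as its own group
    # pairwise Euclidean distances in the standardized 5D space
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    linked = dist < linking_length
    changed = True
    while changed:  # propagate the smallest label through linked pairs
        changed = False
        for i in range(n):
            neighbors = labels[linked[i]]
            m = neighbors.min()
            if (neighbors != m).any():
                labels[np.isin(labels, neighbors)] = m
                changed = True
    return np.unique(labels, return_inverse=True)[1]  # consecutive labels
```

Real HDBSCAN additionally estimates density hierarchies and prunes noise points; the sketch only conveys that stars end up in the same group when they are mutually close in all five dimensions at once.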
Finally, once a sample of stellar groups is identified, KC19 manually assemble the strings by connecting the individual groups with similar ages and visually checking that the strings are ``fully continuous [and] coherent in all kinematic [i.e. tangential velocities] and spatial [i.e. $l$, $b$, $\pi$] dimensions." After the groups are connected, KC19 compute a ``spine" for the string in 5D space by averaging the star-by-star ($l$, $b$, $\pi$, and kinematics) results in different plane-of-the-sky longitude bins along the projected string, before smoothing with a Savitzky-Golay filter to avoid strong fluctuations in the averages. In this work, we independently test the kinematic and spatial coherence of these strings -- as well as their intrinsic abundance variations -- using data not fully considered by and/or available at the time of KC19. In \S \ref{data} we present the publicly available spatial and kinematic data for stellar strings from \textit{Gaia} DR2 and EDR3 utilized in this work, along with ancillary spectroscopic data used to examine the abundance variations within a subset of the strings. In \S \ref{methods_results} we use these data to derive estimates of the stars' 3D spatial dispersion around their respective string ``spines", their radial velocity dispersions, their predicted virial masses, their predicted dynamical lifetimes, and their elemental abundance variations. We then use these constraints to show that nearly all of these stellar strings are inconsistent with being co-eval, physical entities, and are rather artificial structures affected by limitations in the manual assembly process used in their selection. In \S \ref{discussion} we discuss the implications of the strings' nonphysical nature within the wider context of the \textit{Gaia} literature on extended stellar structures. Finally, we conclude in \S \ref{conclusions}. 
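The spine construction summarized above (bin-averaged star properties along the string, smoothed with a Savitzky-Golay filter) can be sketched in a few lines; the bin count and filter window below are illustrative choices, not the values used by KC19.

```python
import numpy as np
from scipy.signal import savgol_filter

def string_spine(l, b, parallax, n_bins=15, window=7, polyorder=2):
    """Average (l, b, parallax) in longitude bins along the string, then
    smooth the binned b and parallax profiles with a Savitzky-Golay filter
    to avoid strong bin-to-bin fluctuations in the averages."""
    edges = np.linspace(l.min(), l.max(), n_bins + 1)
    idx = np.clip(np.digitize(l, edges) - 1, 0, n_bins - 1)
    spine = np.full((n_bins, 3), np.nan)
    for k in range(n_bins):
        sel = idx == k
        if sel.any():
            spine[k] = l[sel].mean(), b[sel].mean(), parallax[sel].mean()
    spine = spine[~np.isnan(spine[:, 0])]  # drop empty bins
    if len(spine) > window:
        for col in (1, 2):  # smooth b and parallax along the spine
            spine[:, col] = savgol_filter(spine[:, col], window, polyorder)
    return spine
```

The same binning-and-smoothing pattern applies to the kinematic dimensions; only the third column would change.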
\section{Data} \label{data} KC19 identify 1312 stellar groups and 328 stellar strings, where a group is a single set of stars identified in HDBSCAN with similar 5D properties. We only consider the stellar strings in this work. We obtain the \textit{Gaia} DR2 data \citep{Gaia_2018} on the string stars directly from KC19 (see their Table 1), and we crossmatch their Table 1 with \textit{Gaia} EDR3 \citep{Gaia_2021} to obtain updated constraints on the parallaxes and parallax errors of the string stars. The XYZ positions (the Heliocentric Galactic Cartesian Coordinates) of the string spines (defined using \textit{Gaia} DR2 data) are obtained from Table 3 in KC19. To analyze the kinematic coherence of the strings, we adopt the original radial velocity measurements from \textit{Gaia} DR2. To explore the metallicity distribution within the strings, we use the catalog from \citet[][hereafter MHM22]{Manea_2022}, which leverages GALAH DR3 data \citep{GALAH_DR3} to analyze the intrinsic abundance variations within nearby stellar structures, including ten strings (see their Supplementary Data). To compare the intrinsic abundance variations of the strings to a benchmark sample of open clusters, we adopt the catalog from \citet{Spina_2021} (see their Table 1), which combines GALAH \citep{GALAH_DR3} and APOGEE \citep{APOGEE_DR16} data to characterize the chemical compositions of hundreds of open clusters across the Galactic disk. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure1.png} \caption{\textit{Center}: A topdown view of stellar strings, colored by age, as reproduced from KC19 (see their Figure 13). In the zoom-in boxes, we show the actual distribution of stellar members around the string ``spine" in \textit{Gaia} DR2 for a sub-sample of four strings spanning a range of ages. The typical distance error to the stars is very small, with an average parallax signal-to-noise $>$ 44 in \textit{Gaia} DR2.
We find that these four strings are representative of the morphology seen across the full sample: the dispersion along the line of sight is significantly larger than the typical error on the parallax measurements (e.g. Theia 1532, Theia 1104), and many strings are composed of isolated groups with no evidence of connection between them (e.g. Theia 74, Theia 435). See Figure Set \ref{fig:figset} in the Appendix or the \href{https://faun.rc.fas.harvard.edu/czucker/Paper\_Figures/String_Gallery\_Interactive.html}{online interactive figure gallery} for similar panels for the remaining strings in the KC19 sample. \label{fig:topdown}} \end{figure*} \section{The Spatial, Dynamical, and Chemical Composition of Stellar Strings} \label{methods_results} In this section, we re-examine the spatial (\S \ref{subsec:spatial}), dynamical (\S \ref{subsec:dynamics}), and chemical (\S \ref{subsec:chemical}) distribution of the strings in KC19. \footnote{On \href{https://github.com/catherinezucker/stellar\_strings\_reexamined.git}{GitHub}, we provide a Jupyter Notebook that reproduces all the results in this section, including the values in Table 1, and the data behind Figures \ref{fig:topdown}, \ref{fig:dispersion}, \ref{fig:lifetimes} and \ref{fig:metallicity}.} \subsection{3D Spatial Properties of Stellar Strings} \label{subsec:spatial} In Figure \ref{fig:topdown} we show a topdown XY \textit{Gaia} DR2 view of the stellar strings in a Heliocentric Galactic Cartesian reference frame, as reproduced from KC19 (see their Figure 13). We highlight a selection of strings (over a range of ages) to convey the relationship between each string and its underlying stellar members in a set of zoom-in panels. Similar plots for the rest of the string sample are shown in Figure Set \ref{fig:figset} in the Appendix, and additionally include the topdown \textit{Gaia} EDR3 stellar distribution alongside DR2 shown here for comparison. 
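The per-star offsets from a spine used in these panels can be computed with a short helper. This is a hedged sketch: it assumes parallaxes in mas with distance $d = 1000/\pi$ pc, and measures each star's distance to the nearest tabulated spine point, a simplification of characterizing the dispersion about a continuous spine.

```python
import numpy as np

def galactic_to_xyz(l_deg, b_deg, parallax_mas):
    """Heliocentric Galactic Cartesian XYZ (pc) from Galactic longitude,
    latitude (deg) and parallax (mas), taking d [pc] = 1000 / parallax."""
    d = 1000.0 / np.asarray(parallax_mas, dtype=float)
    l = np.radians(l_deg)
    b = np.radians(b_deg)
    return np.stack([d * np.cos(b) * np.cos(l),
                     d * np.cos(b) * np.sin(l),
                     d * np.sin(b)], axis=-1)

def offsets_from_spine(star_xyz, spine_xyz):
    """Distance (pc) from each star to the closest point of the spine."""
    diff = star_xyz[:, None, :] - spine_xyz[None, :, :]
    return np.linalg.norm(diff, axis=-1).min(axis=1)
```

Running the same helper on DR2 and EDR3 astrometry for the same stars gives the two offset distributions compared in the next subsection.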
The distances to the stars are very well constrained: the median signal-to-noise of the parallax measurements per string surpasses 44:1 in DR2 and 61:1 in EDR3. We find that for a majority of the strings, the dispersion around the spine is much larger than the average distance uncertainty over the ensemble of parallax measurements. We also note that many of the strings appear to be composed of discrete stellar groups that lack clear connections in 3D physical space, despite that interpretation in KC19. Leveraging the improved astrometric precision of \textit{Gaia} EDR3, we compare the 3D spatial dispersion of stars around the string spine in \textit{Gaia} DR2 versus \textit{Gaia} EDR3 to determine whether the dispersion around the spine decreases as parallax errors improve, as would be expected if stellar strings are true physical structures. For each star in a given string, we compute the distance between the star's XYZ position in DR2 and EDR3 and the closest XYZ point in the string's spine. The results are presented in Figure \ref{fig:dispersion}, which shows the average percentage that the stars move closer to (or further away from) the spine as a function of the increase in the signal-to-noise of the parallax measurements. We find that despite a 20\%--120\% improvement in the parallax signal-to-noise over the ensemble of strings in \textit{Gaia} EDR3, there is \textit{no improvement} in the stars' distances to the spines. We would have anticipated an improvement in the 3D spatial dispersion of stellar string members if these stars were born at the same time within the same parental filamentary molecular gas structure, as argued in KC19. However, we do observe (c.f.
Figure Set \ref{fig:figset} in the Appendix and the \href{https://faun.rc.fas.harvard.edu/czucker/Paper\_Figures/String\_Gallery\_Interactive.html}{online interactive figure gallery}) that in \textit{some} cases the 3D spatial dispersion within \textit{individual} stellar groups inside a string does improve. This suggests these strings may be partly composed of real stellar subgroups; however, we see no evidence for the larger string structure. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure2.png} \caption{Change in the 3D spatial dispersion of stars around their string's spine from \textit{Gaia} DR2 to \textit{Gaia} EDR3. Each grey dot represents one of the 328 individual strings. The dots show the average percentage change in the 3D spatial offset from the spine as a function of the average increase in the signal-to-noise of the parallax measurements for the stars. A positive percentage (blue region) indicates that the stars move closer to their spine with improved parallax measurements, while a negative percentage (orange region) indicates that they move farther away. The black line shows the rolling median and indicates that, despite a significant improvement in the signal-to-noise of the parallax measurements, the dispersion around the spine does not improve on average, as would be expected for genuine spatial structures. The diagonal line in the top left shows the predicted trend for genuine spatial structures. \label{fig:dispersion}} \end{figure*} \subsection{The Dynamical Properties of Stellar Strings} \label{subsec:dynamics} While the lack of any improvement in the 3D spatial dispersion of stars around the string spines raises concerns about their fidelity, one way to validate the authenticity of the strings is to show that their stellar members still share similar motions. In the original study, KC19 analyze the dispersion in the tangential velocities of string members and find them to be small (on the order of $\rm < 2.5 \; km \; s^{-1}$).
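The dynamical quantities used in the remainder of this subsection (an unweighted radial velocity dispersion, the virial-mass estimate, and a crossing-time dispersal estimate) reduce to a few lines. In this sketch, $G$ is expressed in pc\,(km/s)$^2$/M$_\odot$, velocities are in km/s, and radii in pc; the helper names are our own.

```python
import numpy as np

G = 4.301e-3           # gravitational constant in pc (km/s)^2 / Msun
PC_KMS_TO_MYR = 0.978  # 1 pc / (1 km/s), expressed in Myr

def rv_dispersion(v_r):
    """Unweighted standard deviation of member radial velocities (km/s)."""
    return float(np.std(v_r))

def virial_mass(sigma_vr, r_hm, eta=1.0):
    """Virial mass (Msun) from the radial velocity dispersion (km/s) and
    half-mass radius (pc); eta = 1 is a conservative density-profile choice."""
    return sigma_vr**2 * eta * r_hm / G

def dispersal_time(sigma_vr, r_hm):
    """Crossing time (Myr), taken as the dispersal time of an unbound system."""
    return PC_KMS_TO_MYR * r_hm / sigma_vr
```

For example, a dispersion of 16 km/s and a half-mass radius of a few tens of pc yields a virial mass of order $10^6$ M$_\odot$ and a crossing time of order 1 Myr, in line with the ensemble values quoted below.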
This small dispersion in the tangential velocities is expected, as by design the stellar groups that were manually assembled into strings had to share similar 5D properties ($l$, $b$, parallax $\pi$, and proper motions) to be detected as a group in HDBSCAN. As such, the fairest way to evaluate the authenticity of the strings is to characterize their velocity dispersion in the sixth dimension --- the radial velocity dimension --- not considered in the original 5D clustering algorithm. KC19 find that the dispersion in the radial velocities, $\rm \sigma_{V_r}$, spans $\rm 5 - 40 \; km \; s^{-1}$ with an average radial velocity dispersion of $\rm 16 \; km \; s^{-1}$, a factor of $5-10\times$ larger than for the tangential velocities (see Figure 12 from KC19). We note that the typical \textit{Gaia}-based radial velocity dispersion for loosely bound open clusters is $\rm \approx 1 \; km \; s^{-1}$ \citep{Soubiran_2018}. Thus, these strings are at least dynamically very atypical for known co-eval structures. We reproduce the KC19 $\rm \sigma_{V_r}$ results for the subset of stars with a \textit{Gaia} radial velocity measurement (see Column 6 in Table \ref{tab:summary}).\footnote{The radial velocity dispersions for the strings are shown in Figure 12 of KC19, but the data behind the figure are not made publicly available. We tested a few variations for computing the strings' radial velocity dispersions (including weighting the radial velocity measurements by their errors) but find that an unweighted standard deviation is the most consistent with the original results.
See Figure \ref{fig:rv_disp} in the Appendix for a comparison between our derived radial velocity dispersions and the radial velocity dispersions shown in Figure 12 of KC19.} We then estimate the predicted virial mass of each string as: \begin{equation} M_{vir} = \frac{\sigma_{V_r}^{2} \times \eta \times r_{hm}}{G} \end{equation} where $r_{hm}$ is the half-mass radius of the string, which we approximate as the median distance offset between the string's 3D spine and the 3D position of its stars using \textit{Gaia} EDR3 data (see Column 4 of Table \ref{tab:summary}). The parameter $\eta$ is a dimensionless constant that depends on the shape of the density profile, typically assumed to be $\eta \approx 10$ for a Plummer model \citep{Plummer_1911} characterized by steep density profiles. Since recent studies have shown that $\eta$ can be smaller (consistent with much broader density profiles), particularly for younger systems due to e.g. mass segregation \citep{Zwart_2010}, we very conservatively adopt an $\eta = 1$. Larger values of $\eta$ will only raise the threshold necessary for the strings to be in virial equilibrium. We find that the average predicted virial mass of the strings is $\approx 2\times 10^{6} \; \rm M_{\sun}$ (see Column 7 of Table \ref{tab:summary}). After computing the predicted virial mass of the strings, we approximate the observed mass of each string by counting the number of members and assuming the average mass of each star is $\rm 0.61 \; M_\sun$ based on the Initial Mass Function from \citet{Maschberger_2013} \citep[see also e.g.][]{Kuhn_2019} (see Column 8 of Table \ref{tab:summary}). We find an average observed mass of $\rm M_{observed} = 134 \; M_\sun$, meaning that the strings on average require $ > 10^{4} \times$ larger masses than their observed masses in order to be in virial equilibrium. Even assuming a very poor completeness fraction of e.g. 
10\% (such that a majority of the strings' membership goes undetected in \textit{Gaia}) the virial masses are still typically three orders of magnitude larger than their observed masses, and the total deficit between the strings' observed masses and their predicted virial masses is $\rm 10^{9} \; M_\odot$. Thus, the strings are not gravitationally bound. While the unbound state of the strings does not in itself imply that strings are not physical entities, it does provide constraints on their predicted lifetimes. Specifically, if a string is not gravitationally bound, we expect the string to disperse on roughly a crossing time, $\rm t_{cross}$, such that its predicted lifetime is given as: \begin{equation} t_{dispersal} \approx t_{cross} \approx \frac{r_{hm}}{\sigma_{V_r}} \end{equation} We find a median predicted dispersal time for the strings of only 2 Myr. Using a combination of a convolutional neural network and isochrone fitting, KC19 find ages between 4 Myr and 9 Gyr for the strings. Dividing the strings' reported ages by their dispersal times, we find that on average the strings' reported ages are $115\times$ larger than their dispersal times, such that they should have dispersed in $<1\%$ of their lifetimes on average (see Column 12 of Table \ref{tab:summary}). Only a single string in the sample has a reported age less than its predicted lifetime (Theia 9), which is also the youngest string in the sample. We argue that the remaining strings in the sample are not physical given their reported spatial distributions, their radial velocity dispersions, and their inferred ages in KC19. \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{Figure3.png} \caption{The virial state of strings versus the discrepancy between the strings' reported and predicted lifetimes. The vertical axis shows the ratio of the predicted virial masses of the strings (based on their observed radial velocity dispersions) over their observed masses.
Every string in the sample is gravitationally unbound, requiring on average $2\times10^{4}$ times more mass than observed to be in virial equilibrium. Given their unbound state, the dispersal time of the strings should be roughly the crossing time. As shown on the horizontal axis, we find that the reported ages of the strings are orders of magnitude larger than the dispersal time, meaning that strings should have dispersed in a small fraction of their reported lifetimes and are thus inconsistent with being physical structures. \label{fig:lifetimes}} \end{figure*} \subsection{The Chemical Homogeneity of Stellar Strings} \label{subsec:chemical} Given the lack of spatial and dynamical coherence of the strings, we perform a final test to determine the physicality of the stellar string members by examining the uniformity in their chemical composition with respect to a well-studied sample of open cluster members. Specifically, if a set of stars is born within the same parental molecular gas structure, the intrinsic chemical dispersion of these stars as traced by stellar spectroscopy should be small \citep[e.g.][]{Feng_2014}. To examine the chemical homogeneity of the strings, we build off the study of \citet[][MHM22]{Manea_2022}, who leverage GALAH data \citep{GALAH_DR3} to characterize the intrinsic chemical dispersion ($\rm \sigma_{[X/H]}$) of a sample of ten strings drawn from KC19. MHM22 fit the following likelihood function assuming that the chemical profile of each string is Gaussian with some mean abundance $\rm \mu_{[X/H]}$ and intrinsic dispersion $\rm \sigma_{[X/H]}$: \begin{equation} \mathcal{L} = \prod_i^N \exp \left [ \frac{-(x_i - \mu_{[X/H]})^2}{2(\sigma_{[X/H]}^2 + \delta_i^2)} \right ] \times \frac{1}{\sqrt{2\pi(\sigma_{[X/H]}^2 + \delta_i^2)}} \label{eq:likelihood} \end{equation} where $x_i$ and $\delta_i$ are the GALAH mean abundance and its reported uncertainty for the $i$th star in the string in a given element $\rm X$.
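To make the fitting procedure concrete, the following minimal sketch maximizes the log of a likelihood of this Gaussian form by profiling over a grid of trial intrinsic dispersions. The function names and the synthetic abundances are illustrative assumptions, not part of the MHM22 pipeline.

```python
import math
import random

def neg_log_like(mu, sigma_int, x, err):
    # negative log-likelihood of a Gaussian intrinsic profile,
    # broadened star-by-star by the reported measurement errors
    total = 0.0
    for xi, di in zip(x, err):
        var = sigma_int ** 2 + di ** 2
        total += 0.5 * (xi - mu) ** 2 / var + 0.5 * math.log(2.0 * math.pi * var)
    return total

def fit_dispersion(x, err, sig_max=0.5, nsig=501):
    # profile the likelihood over a grid of trial intrinsic dispersions;
    # at fixed dispersion the best-fit mean is the inverse-variance average
    best = None
    for j in range(nsig):
        s = sig_max * j / (nsig - 1)
        weights = [1.0 / (s ** 2 + d ** 2) for d in err]
        mu = sum(wi * xi for wi, xi in zip(weights, x)) / sum(weights)
        nll = neg_log_like(mu, s, x, err)
        if best is None or nll < best[0]:
            best = (nll, mu, s)
    return best[1], best[2]

# synthetic "string": true intrinsic spread 0.05 dex, reported errors 0.02 dex
rng = random.Random(0)
errs = [0.02] * 200
abund = [rng.gauss(0.0, math.hypot(0.05, 0.02)) for _ in errs]
mu_hat, sig_hat = fit_dispersion(abund, errs)
```

When the reported errors are instead comparable to the total scatter (e.g. $\delta_i \approx 0.1$ dex), the recovered intrinsic dispersion collapses toward zero, which is relevant to the discussion of the GALAH errors later in this subsection.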
MHM22 characterize the intrinsic chemical dispersion across a range of elements with a sample size between 7 and 19 stars per string. Given the likelihood function in Equation \ref{eq:likelihood} and the subset of stars in each string with GALAH data, we are able to reproduce the results of MHM22. In the left panel of Figure \ref{fig:metallicity}, we present the original results of MHM22 (see their Figure 4), showing the intrinsic dispersion $\rm \sigma_{[X/H]}$ for each of the ten strings. Comparing the intrinsic dispersion of each string to a sample of local field stars, MHM22 find that all but one of the strings are more homogeneous than their local fields, with half of the sample as homogeneous as the well-studied open cluster M67 in several elements \citep{Gao_2018}. In order to test whether it is possible for a string to appear chemically homogeneous in several elements without being co-eval (as suggested by our analysis in \S \ref{subsec:spatial} and \S \ref{subsec:dynamics}), we perform an experiment using the catalog from \citet{Spina_2021}, who curate a sample of high-probability members of nearby open clusters with GALAH and APOGEE abundance information. As we have found that the strings are likely to be agglomerations of unrelated open clusters and other dynamically cold field stars, we believe this experiment provides a more direct comparison point for interpreting the apparent homogeneity found in MHM22. Based on the sample sizes from \citet{Manea_2022}, we draw random sub-samples of between 7 and 19 stars from the open cluster membership reported in \citet{Spina_2021}, restricting to open clusters that span the same broad age range as the strings considered in MHM22 (7.52 $<$ log(Age) $<$ 9.23), have a detection in GALAH, and have high membership probability $p > 0.75$.
These sub-samples consist of stars that do \textit{belong to well-studied open clusters}, but each sub-sample is drawn from many clusters that are \textit{unrelated}, with different ages and different locations in the Galaxy. We then fit the same likelihood function as MHM22 to the reported GALAH abundance and uncertainty estimates for the random sub-sample drawn from the \citet{Spina_2021} catalog, repeating this procedure over many trials. The results are summarized in the right panel of Figure \ref{fig:metallicity}, which shows the abundance variations for the random draws in comparison to the string sample. We find that we can match the chemical homogeneity of the strings with our random draws, with many draws comparable to the intrinsic dispersion of the benchmark M67 cluster \citep{Gao_2018} in several elements. We attribute the homogeneity of the random cluster draws to two causes. First, the uncertainties on the GALAH abundance measurements for an individual star $i$ (i.e. $\delta_i$ in Equation \ref{eq:likelihood}) are typically of the same order as the expected intrinsic abundance variation of a cluster like M67 ($\delta_i \approx \sigma_{[X/H]} \approx 0.1$ dex). Since the likelihood function in Equation \ref{eq:likelihood} is designed to separate the intrinsic scatter from the observational error, any overestimation of the errors in the GALAH catalog can lead to unrealistically small estimates of the intrinsic scatter. Second, these random sub-samples can appear more homogeneous than field stars simply by virtue of the stars being members of open clusters, even if these clusters themselves are physically unrelated. In addition, the local field star sample from \citet{Manea_2022} (showing poorer chemical homogeneity than the strings) likely includes some thick disk stars, which will have a wider metallicity dispersion than the thin disk string stars selected by KC19.
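The random-draw experiment described above can be sketched as follows. The cluster labels and abundance values are mock numbers for illustration, not measurements from the \citet{Spina_2021} catalog, and the helper name \texttt{draw\_subsample} is an assumption.

```python
import random
import statistics

# mock [X/H] values (dex) for members of three *unrelated* open clusters;
# illustrative numbers only, not GALAH measurements
clusters = {
    "A": [0.02, 0.01, 0.03, -0.01, 0.00, 0.02, 0.01, 0.02],
    "B": [-0.05, -0.04, -0.06, -0.05, -0.03, -0.05, -0.04, -0.05],
    "C": [0.10, 0.11, 0.09, 0.10, 0.12, 0.10, 0.11, 0.10],
}

def draw_subsample(cluster_members, rng, nmin=7, nmax=19):
    # pool members across clusters, then draw a random sub-sample
    # the size of a typical string sample (7-19 stars)
    pool = [x for members in cluster_members.values() for x in members]
    return rng.sample(pool, rng.randint(nmin, nmax))

rng = random.Random(42)
spreads = [statistics.stdev(draw_subsample(clusters, rng))
           for _ in range(200)]
mean_spread = statistics.mean(spreads)  # small despite the mixed origins
```

Even though every draw mixes stars from unrelated clusters, the resulting spreads stay far below the field-star level whenever the clusters themselves have similar mean abundances, mirroring the second effect noted above.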
Our results are consistent with the scenario that, while these strings may sometimes contain real clusters, their abundance patterns are not discriminatory enough to favor a common origin for the string members over a range of origins in potentially real, yet physically unrelated, spatial sub-structures. The chemical homogeneity of the strings found in MHM22 does not show them to be co-eval. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure4.png} \caption{Intrinsic abundance variations across fifteen different elements for the collection of ten strings from MHM22 (blue points, left) and ten random sub-samples of stars drawn from unrelated open clusters in the Milky Way (black points, right). The grey asterisks in each panel show the intrinsic abundance variations for the benchmark cluster M67 \citep{Gao_2018, Manea_2022}, while the gray shaded region marks the zone of reported open cluster dispersions \citep[e.g.][]{Ness_2018}. By drawing random samples of stars from unrelated clusters, we are able to reproduce the typical chemical homogeneity of a majority of the strings, indicating that the strings' abundance variations are not discriminatory enough to favor a common physical origin. An interactive version of this figure showing hundreds of random draws, rather than just ten, is available online \href{https://faun.rc.fas.harvard.edu/czucker/Paper\_Figures/Figure4\_strings\_vs\_random.html}{here}. \label{fig:metallicity}} \end{figure*} \section{Discussion} \label{discussion} Through a spatial, kinematic, and chemical re-analysis of their stellar membership, we have shown that strings are inconsistent with being co-eval stellar structures with a common physical origin. Recall that KC19 select all 328 strings through a manual assembly process, stitching together stellar groups by hand and visually confirming kinematic and spatial coherence by eye.
Our work underscores the need for a more systematic, \textit{reproducible} selection process when declaring the existence of a new type of stellar structure: not only should these structures remain spatially coherent and continuous when viewed in true 3D physical space (in contrast to the morphologies seen in Figure \ref{fig:topdown}), but their radial velocity dispersions should also be significantly smaller than measured for the Galactic field. In the solar neighborhood, the age-velocity dispersion relation (describing how the velocity dispersion of stars increases with age due to dynamical heating) shows typical vertical velocity dispersions of $\rm \approx 5 \; km \; s^{-1}$ for stars $<$ 1 Gyr old, about $3\times$ smaller than the typical string radial velocity dispersion of $\rm 16 \; km \; s^{-1}$ \citep[see][]{Bird_2021, Casagrande_2011}. The age-velocity dispersion relation has also been explored in 3D using open clusters in \citet{Tarricq_2021}, derived by computing the 3D velocity dispersion for samples of open clusters falling in different age bins. \citet{Tarricq_2021} find a typical 3D velocity dispersion of $\rm 13 \; km \; s^{-1}$ for open clusters in the range of 150 - 250 Myr (consistent with the typical age of a string reported in KC19). The 3D velocity dispersion \textit{over} a sample of many open clusters in \citet{Tarricq_2021} is less than the typical radial velocity dispersion \textit{within} an individual string in KC19 \citep[see Figure 11, Table 3 in][]{Tarricq_2021}. While we argue against the physicality of strings in KC19, we emphasize that several other studies in the \textit{Gaia} era present compelling evidence for filamentary stellar distributions identified through more automated, reproducible selection algorithms.
For example, \citet{Meingast_2019} identify an extended 400+ pc long, 2000 $\rm M_\odot$ stream called Meingast-1 \citep[also known as Pisces Eridanus; e.g.][]{Hawkins_2020} through a wavelet decomposition of the 3D velocity space distribution of nearby stars. Not only do \citet{Meingast_2019} find that the stream is spatially continuous in Heliocentric Galactic Cartesian space (see their Figure 2), but they also find a very low 3D velocity dispersion of $\rm 1.3 \; km \;s^{-1}$ for the system. Similarly, building off the analysis of \citet{Meingast_2019}, \citet{Meingast_2021} present a new method for identifying highly extended coronae --- reminiscent of tidal structures --- around ten nearby open clusters. The \citet{Meingast_2021} technique accounts for projection effects in proper motion space in an automated way \citep[inspired by the ``converging point technique"; c.f.][]{van_Leeuwen_2009} before deconvolving the spatial distribution with a Gaussian mixture model to mitigate \textit{Gaia} measurement errors. Critically, the coronae are likewise validated via their 3D space motions, showing typical 3D velocity dispersions of $\rm 1.4 \; km \; s^{-1}$. Both the extended coronae and the Meingast-1 stream have velocity dispersions on par with those found for nearby open clusters using the same \textit{Gaia} DR2 radial velocity data considered in this work. Using a sample of a few hundred nearby open clusters, \citet{Soubiran_2018} find typical intra-cluster radial velocity dispersions of $\rm 1.0-1.5 \; km \; s^{-1}$. In contrast, as noted in \S \ref{subsec:dynamics}, the typical radial velocity dispersion of the strings is a factor of fifteen larger ($\rm \sigma_{V_r} \approx 15 \; km \; s^{-1}$, implying $\rm \sigma_{V_{3D}} > 15 \; km \; s^{-1}$), with some strings approaching radial velocity dispersions of $\rm 40 \; km \; s^{-1}$.
Only a few percent of the strings in KC19 have radial velocity dispersions $\rm < 5 \; km \; s^{-1}$, while $\approx$ 90\% of the open clusters do \citep{Soubiran_2018}, as well as 100\% of the newly identified extended structures in \citet{Meingast_2019} and \citet{Meingast_2021}. Thus, the unphysical nature of the strings is not due to their claimed unique filamentary morphologies, but rather their lack of true 3D kinematic and spatial coherence stemming from limitations in the manual assembly process. \section{Conclusions} \label{conclusions} We investigate the spatial, dynamical, and chemical composition of stellar strings, a proposed collection of highly extended filamentary stellar structures identified in KC19 by manually linking stellar groups with similar 5D properties ($l$, $b$, parallax $\pi$, and proper motions). Our conclusions are as follows: \begin{enumerate} \item Using updated constraints on the distances to stellar string members from \textit{Gaia} EDR3, we find that the 3D spatial dispersion of stars around the string spine does not improve over \textit{Gaia} DR2: the average percentage that stars move closer to their respective string spines is consistent with zero, despite the signal-to-noise on the parallax measurements per string increasing by 20\% - 120\%. Real structures should tighten with higher fidelity distance measurements. \item The average dispersion in the radial velocity of the strings is $\rm 16 \; km \; s^{-1}$, about fifteen times larger than the typical radial velocity dispersions of both open clusters in \textit{Gaia} \citep{Soubiran_2018} and other catalogs of extended stellar structures \citep[e.g. stellar ``streams" in the disk from][]{Meingast_2019, Meingast_2021}. \item Given the radial velocity dispersions, the virial masses of the strings are on average $> 10^{4}\times$ larger than their observed masses. Even assuming very low completeness fractions, all strings are gravitationally unbound.
\item Given their unbound state, the strings should disperse on roughly a crossing time, which we estimate to be typically 2 Myr, while the ages of the strings from KC19 range from 4 Myr to 9 Gyr. Thus the strings should not exist based on their predicted dynamical lifetimes, and should have dispersed in $<$ 1\% of their reported ages, on average. \item Using complementary constraints on stellar chemical abundances from GALAH DR3 \citep{GALAH_DR3}, we compare the intrinsic abundance dispersion of the strings found in MHM22 to a random sample of stars drawn from physically unrelated open clusters. We find that the chemical homogeneity of the strings is similar to the chemical homogeneity seen in random stellar draws across clusters. \item The combined spatial, dynamical, and chemical evidence rules out the scenario that stars within a typical string were born at the same time within the same parental molecular gas structure. However, some subset of the stars within a string may still be co-eval, as many of these strings contain real clusters that have been linked together to form the larger string-like structure. \end{enumerate} Ultimately, by providing radial velocity measurements for five times more stars than previous data releases, \textit{Gaia} DR3 will offer further opportunity to characterize the physicality (or lack thereof) of not just the strings, but also the broader array of structures identified in extant \textit{Gaia} data releases. By invoking simple spatial and dynamical arguments, our work provides a straightforward, yet discerning, lens through which to evaluate the fidelity of newfound classes of objects, which should be considered when declaring the existence of new co-eval stellar structures in the \textit{Gaia} era.
\begin{acknowledgments} CZ acknowledges that support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51498.001 awarded by the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. SL acknowledges support from NSF grant AST2109234 and HST-AR-16624 from STScI. The authors would like to thank Keith Hawkins, Catherine Manea, and Kevin Covey for helpful discussions that contributed to this work. \end{acknowledgments}
\section{Introduction} Near-Earth Apollo asteroid (1566) Icarus = 1949 MA was discovered by Baade (1949) as a 16th magnitude fast-moving object, on a plate taken using the 48-inch Palomar Schmidt on 1949 June 10, when Icarus approached the Earth to within 0.10 AU, near its descending node. Its orbital parameters were highly unusual: it had smaller semimajor axis ($a$ = 1.08 AU), smaller perihelion distance ($q$ = 0.19 AU), and larger eccentricity ($e$ = 0.83) than any other asteroid known at that time and relatively high inclination ($i$ = $23^\circ$). Icarus remained the record holder in having the smallest $q$ among all asteroids until the discovery of (3200) Phaethon in 1983. Indeed, on account of its small $q$, Icarus was historically of particular interest as to whether the relativistic effects on its orbital motion are detectable (e.g., Shapiro et al. 1971). In Icarus' subsequent approaches to the Earth in 1968, 1987, and 1996, the following physical data were derived: absolute magnitude $(H) = 15.95$ and $G$-parameter $= -0.04$ (Tedesco 1989); rather high albedo $\sim 0.33$ and diameter $\sim 1$ km (e.g., Harris 1998); fast rotation period $\sim2.273$ hr (e.g., Gehrels et al. 1970; De Angelis 1995) and others \footnote{http://earn.dlr.de/nea/001566.htm}. Especially notable is that Icarus is spectrally classified as a Q-type in Tholen's taxonomy. Q-type asteroids, which generally are spectroscopic analogues of ordinary chondrites (cf.\ McFadden et al. 1984; Hicks et al. 1998; Fevig \& Fink 2007), are regarded as being less space-weathered S-complex asteroids, with a surface age $\le 10$ Myr owing to resurfacing effects (Marchi et al. 2006). Hence Icarus may represent the rather fresh internal structure of a precursor object broken up in recent history. 
Moreover, with $q \sim 0.19$ AU, the subsolar point on Icarus should reach a temperature of 800 K by solar heating, in which case solar thermal stress may act as a trigger for the disruption of the asteroid's surface and subsurface. Resurfacing may alternatively be due to the tidal effects of the terrestrial planets (Nesvorn\'y et al. 2005). For the above reasons, we have expected some ``Icarus Family Members'' (hereafter ``IFM(s)'') to exist in near-Earth space. We have therefore been searching for IFMs based on time-lag measurements (see below) between the orbital evolution of Icarus and any candidate IFM. This procedure was successful in finding the dynamical relationship between (3200) Phaethon and (155140) 2005 UD (Ohtsuka et al. 2006, hereafter Paper I). No certain IFMs had been found in the Apollo asteroid database \footnote{e.g., http://cfa-www.harvard.edu/iau/lists/Apollosq.html} until very recently. However, we finally identified an extremely likely candidate from the latest MPECs (Minor Planet Electronic Circulars): a recently discovered Apollo asteroid 2007 $\mathrm{MK_6}$. \section{Orbital integration of (1566) Icarus} As preliminary work for the IFM survey and for measuring the time-lags (detailed in the next section) between Icarus and unknown potential IFMs, we calculated the orbital evolution of Icarus. We performed a backward and forward numerical integration of the KS (Kustaanheimo--Stiefel) regularized equation of motion (cf.\ Arakida \& Fukushima 2000, 2001), applying the 12th-order Adams method in double precision with a step size of 0.5 day. The integration covered 10000 BC to 10000 AD (JDT $-1931503.5$ to 5373520.5), and included terms of first order in the post-Newtonian approximation for the Sun's gravitational field, since relativistic effects advance the line of apsides of Icarus at a rate of $10''$/century.
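The quoted $10''$/century apsidal advance can be checked against the standard first-order post-Newtonian formula $\Delta\varpi = 6\pi G M_\odot / [c^2 a (1-e^2)]$ per orbit. The short sketch below is a back-of-the-envelope check (not part of the integration described above) using approximate elements for Icarus:

```python
import math

GM_SUN = 1.32712440018e20   # heliocentric gravitational parameter, m^3 s^-2
C_LIGHT = 299792458.0       # speed of light, m s^-1
AU = 1.495978707e11         # astronomical unit, m

def perihelion_advance_arcsec_per_century(a_au, e):
    # first-order post-Newtonian apsidal advance per orbit, converted to
    # arcsec per century using Kepler's third law for the orbital period
    a = a_au * AU
    dw_per_orbit = 6.0 * math.pi * GM_SUN / (C_LIGHT ** 2 * a * (1.0 - e ** 2))
    orbits_per_century = 100.0 / a_au ** 1.5
    return math.degrees(dw_per_orbit) * 3600.0 * orbits_per_century

# Icarus: a ~ 1.08 AU, e ~ 0.83 -> close to the quoted 10''/century
rate = perihelion_advance_arcsec_per_century(1.08, 0.83)
```

The high eccentricity and short period are what make the relativistic advance of Icarus an order of magnitude larger per century than Mercury's famous 43''.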
We also confirmed that the results of our numerical integration did not significantly change even when we adopted smaller step sizes or when we used other integration methods such as the extrapolation method. The initial orbital data of Icarus at osculation epoch 2007 Apr 10.0 TT = JDT 2454200.5 were taken from NASA JPL's HORIZONS System \footnote{http://ssd.jpl.nasa.gov/horizons.html} and are listed in Table \ref{tbl:orbits}. All the major planets from Mercury through Neptune and the dwarf planet Pluto were included as perturbing bodies (the Earth--Moon barycenter being one body, with the Moon's mass added to the Earth's). The coordinates of the major planets were taken from the JPL Planetary and Lunar Ephemeris DE408. Over 20000 yr we found the orbital motion of Icarus to show a high degree of stability, with long-period secular changes according to the cycle in argument of perihelion $\omega$, also known as the Kozai cycle (Kozai 1962). The corresponding large-amplitude oscillations in $q$ and $i$, in antiphase with $e$, have period $\sim 25000$ yr, half that of the $\omega$ cycle. The $\omega$ period of $\sim 50000$ yr is somewhat larger than the $\sim 40000$ yr for Phaethon and 2005 UD (Paper I). \section{Time lag $\Delta t$ of the orbital evolutions} In the first stage of the formation of an asteroid family, the orbital energies ($\propto 1/a$) of bodies or fragments are slightly different from that of the precursor, since the motions of the released objects are slightly accelerated or decelerated relative to the precursor. This results in differences in their evolutionary rates under gravitational perturbations (there may additionally be differential nongravitational perturbations): then a time-lag (which hereafter we call $\Delta t$) in the orbital evolutions arises. At the starting epoch, $\Delta t \approx 0$ yr, and it tends to increase with time.
We note that $\Delta t$ is not the time since separation, but rather quantifies how separated in phase two orbits have become in their respective (similar) secular perturbation cycles. For measuring a difference in the evolutionary phase of two orbits, $\Delta t$ is much more suitable than for example the difference in $\omega$, since (for highly eccentric orbits particularly) $d\omega/dt$ is strongly dependent on the phase within the Kozai cycle (cf.\ Fig. \ref{fig:orbits} later). Any IFM should show a very close orbital similarity with Icarus when shifted by the appropriate $\Delta t$ that brings both orbits to the same evolutionary phase. The following successful studies applying this time-lag theory have been made so far: i) anticipation of the Marsden and Kracht comet groups' periodicity and their return: Ohtsuka et al. (2003) anticipated these comet groups, which initially had parabolic orbit solutions, as being fragments of Periodic Comet 96P/Machholz. Their prediction was shown to be correct when Sekanina \& Chodas (2005) linked orbits and found these comet groups to have orbital periods of 5--6 yr, corresponding to 96P's $\sim 5.2$ yr. The Marsden and Kracht comet groups thus turned out to be decameter-size members of the 96P--Quadrantid stream complex. ii) genetic relationship of Phaethon and 2005 UD: Paper I revealed 2005 UD as being the most likely large fragment of Phaethon. This dynamical relationship was confirmed by the physical studies of Jewitt \& Hsieh (2006) and Kinoshita et al. (2007), who classified both objects as F- or B-type. These taxonomic types are very rare, comprising only $\sim 5$\% of NEOs that have been classified; combined with the dynamical evidence, the genetic relationship of Phaethon and 2005 UD is beyond doubt. This time-lag theory is straightforward and is now well established as a technique to demonstrate the existence of cometary stream complexes or likely NEO families; so it should be a useful tool to survey for IFMs. 
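In practice, measuring $\Delta t$ amounts to sliding one orbital-element history against another and locating the shift that minimizes a distance criterion. The toy sketch below uses synthetic, Kozai-like sinusoidal histories and a simplified element distance (illustrative assumptions only; the actual survey uses the full $D_{\rm SH}$ criterion defined in the next section):

```python
import math

def element_distance(el_a, el_b):
    # toy Euclidean distance in (e, q, i) space; the actual survey uses
    # the full five-element D_SH criterion of Southworth & Hawkins
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(el_a, el_b)))

def find_time_lag(series_a, series_b, dt):
    # slide history B against history A and return the shift (in years)
    # that minimizes the mean element distance over the overlap
    n = len(series_a)
    best_lag, best_score = 0, float("inf")
    for lag in range(-(n // 2), n // 2):
        pairs = [(series_a[j], series_b[j + lag])
                 for j in range(n) if 0 <= j + lag < n]
        score = sum(element_distance(a, b) for a, b in pairs) / len(pairs)
        if score < best_score:
            best_lag, best_score = lag, score
    return best_lag * dt, best_score

# synthetic, Kozai-like oscillations; history B lags history A by 1000 yr
def orbit(t, period=25000.0):
    phase = 2.0 * math.pi * t / period
    return (0.83 + 0.05 * math.sin(phase),               # e
            0.19 + 0.05 * math.cos(phase),               # q (AU)
            math.radians(23.0) + 0.1 * math.sin(phase))  # i (rad)

dt = 50.0                        # sampling interval in years
times = [j * dt for j in range(400)]
series_a = [orbit(t) for t in times]
series_b = [orbit(t - 1000.0) for t in times]
lag, score = find_time_lag(series_a, series_b, dt)   # recovers lag = 1000 yr
```

For real element histories the minimum is shallow rather than exactly zero, but the recovered shift plays the same role as $\Delta t$ in the analysis below.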
\section{Survey} \subsection{Procedure} The survey for IFMs in the Apollo asteroid database and latest MPECs uses the same procedure as in Paper I. We again applied the following three criteria as the retrieving engine for our IFM survey. The first is the traditional orbital similarity criterion $D_{\rm SH}$ of Southworth \& Hawkins (1963), who defined $D_{\rm SH}$ as a distance between the orbits of two objects $A$ and $B$ in five-dimensional orbital element space $(e, q, \omega, \mathit{\Omega}, i)$, as follows: \begin{eqnarray} D_{\rm SH}^{2} &=& \sum_{j=1}^{5} f_{j}^{2} \left( P_{A,j} - P_{B,j} \right)^2, \end{eqnarray} where $P_{A \; \mathrm{or} \; B,j}$ are orbital elements and $f_j$ are functions of the elements that ensure suitable weights are given to each term in (1). Thus we searched for potential IFMs on the basis of Icarus' orbital evolution from the integration described in Section 2. For each Near-Earth Apollo, we found the minimum $D_{\rm SH}$ between it and Icarus, as Icarus' orbit evolves. A minimum $D_{\rm SH} \le 0.15$ means that Icarus and the given asteroid are within the probable association range. The second and third criteria are the $C_1$ and $C_2$ integrals derived by Moiseev (1945) and Lidov (1961) respectively, which we calculate for candidates selected by $D_{\rm SH}$: \begin{eqnarray} C_1 &=& \left( 1 - e^2 \right) \cos^2 i,\\ C_2 &=& e^2 \left( 0.4 - \sin^2 i \: \sin^2 \omega \right) . \end{eqnarray} These integrals describe the secular orbital variations well. Both $C_1$ and $C_2$ are almost invariant for the orbital motions of Phaethon and 2005 UD (Paper I), and should also be useful criteria to distinguish IFMs. \subsection{Detection of the IFM candidate: Near-Earth Apollo asteroid 2007 $\mathbf{MK_6}$} In this way, we finally detected a very likely IFM candidate from the latest MPECs: Near-Earth Apollo asteroid 2007 $\mathrm{MK_6}$, which was recently discovered in the Catalina sky survey, on 2007 June 21.2 (Hill et al. 2007). 
Soon after, Ohtsuka (2007) identified 2007 $\mathrm{MK_6}$ with another Apollo, 2006 $\mathrm{KT_{67}}$, so 2007 $\mathrm{MK_6}$ = 2006 $\mathrm{KT_{67}}$; the latter had been discovered on 2006 May 26 and observed over only a 1-day arc (12 positions) by the Mt. Lemmon survey. This extended the arc to more than one year. Nakano successfully linked their orbits, based on 54 positions at two oppositions (covering 2006 May 26 to 2007 June 27) with an RMS residual of $0''.74$. The absolute magnitude $H \sim 19.9$ corresponds to an object a few hundred meters in size at most, if we assume 2007 $\mathrm{MK_6}$ is a high-albedo object such as an S-type. Using Nakano's data, listed in Table \ref{tbl:orbits}, we integrated 2007 $\mathrm{MK_6}$ using the same method as for Icarus. The dynamical evolutions of both asteroids are illustrated in Fig. \ref{fig:orbits}. Icarus and 2007 $\mathrm{MK_6}$ sometimes encounter the terrestrial planets. Encounters with Venus or Earth can cause changes in $a$, but these are small enough that the other elements display a stable secular evolution, as with Phaethon and 2005 UD (Paper I). Neither asteroid has a nodal intersection epoch with Venus or Earth in the past 10000 yr, hence the interval of constant $a$ in Fig. \ref{fig:orbits}. Comparing the orbital elements of 2007 $\mathrm{MK_6}$ at the current epoch with the changing orbit of Icarus over time, as described in Section 4.1, we found a strikingly good match with Icarus at around 1034 AD (Table \ref{tbl:orbits}); thus $\Delta t \sim 1000$ yr. The corresponding minimum value of $D_{\rm SH}$ is only 0.0098. Both $\Delta t$ and $D_{\rm SH}$ are fairly small compared to the respective values $\sim 4600$ yr and 0.04 between Phaethon--2005 UD (Paper I). The $C_1$ and $C_2$ parameters are almost constant, within the ranges 0.26--0.28 and 0.24--0.25 respectively. Therefore 2007 $\mathrm{MK_6}$ is a very strong candidate IFM.
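The quoted invariants can be evaluated directly from Equations (1)--(3). The sketch below uses approximate current osculating elements for Icarus ($e \approx 0.827$, $q \approx 0.187$ AU, $i \approx 22.9^\circ$, $\omega \approx 31.3^\circ$, $\mathit{\Omega} \approx 88.1^\circ$; rounded values, not the Table \ref{tbl:orbits} entries), with the angular terms of $D_{\rm SH}$ following the standard Southworth--Hawkins construction:

```python
import math

def d_sh(eA, qA, iA, wA, OA, eB, qB, iB, wB, OB):
    # Southworth & Hawkins (1963) similarity criterion; angles in radians
    dOm = OB - OA
    # (2 sin(I/2))^2, with I the angle between the two orbital planes
    s2I = (2.0 * math.sin((iB - iA) / 2.0)) ** 2 + \
          math.sin(iA) * math.sin(iB) * (2.0 * math.sin(dOm / 2.0)) ** 2
    I = 2.0 * math.asin(min(1.0, math.sqrt(s2I) / 2.0))
    # difference of the longitudes of perihelion from the intersection line
    arg = math.cos((iA + iB) / 2.0) * math.sin(dOm / 2.0) / math.cos(I / 2.0)
    Pi = wB - wA + 2.0 * math.asin(max(-1.0, min(1.0, arg)))
    d2 = (eB - eA) ** 2 + (qB - qA) ** 2 + s2I + \
         (((eA + eB) / 2.0) * 2.0 * math.sin(Pi / 2.0)) ** 2
    return math.sqrt(d2)

def c1(e, i):
    return (1.0 - e ** 2) * math.cos(i) ** 2                     # Moiseev (1945)

def c2(e, i, w):
    return e ** 2 * (0.4 - math.sin(i) ** 2 * math.sin(w) ** 2)  # Lidov (1961)

# approximate current osculating elements of (1566) Icarus
e, q = 0.827, 0.187
i, w, Om = (math.radians(v) for v in (22.9, 31.3, 88.1))
c1v, c2v = c1(e, i), c2(e, i, w)
```

With these elements, $C_1$ and $C_2$ fall inside the quoted 0.26--0.28 and 0.24--0.25 ranges, and $D_{\rm SH}$ of an orbit with itself vanishes, as required.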
The two orbital evolutions show a similar profile, with quasi-sinusoidal changes, simply shifted by $\Delta t \sim 1000$ yr. The smaller $\Delta t$ than for Phaethon--2005 UD suggests a younger separation age, but $D_{\rm SH}$ between Icarus--2007 $\mathrm{MK_6}$ at the same osculation epoch has never been below 0.03 in our integration timespan. Only in quite rare cases (such as the Karin cluster in the main belt; Nesvorn\'y et al. 2002) can an exact separation age be found unambiguously, although we may certainly expect $D_{\rm SH}$ to have been smaller around the time that Icarus and 2007 $\mathrm{MK_6}$ separated. Some test integrations back $10^5$ yr tentatively show $\Delta t$ decreasing back in time, but random small changes in $a$ due to close encounters make it hard to reach a precise quantitative conclusion about the separation age. However, this age is clearly within 10 Myr, the resurfacing age of Q-type NEOs, possibly two orders of magnitude shorter. The Icarus--2007 $\mathrm{MK_6}$ parent may well have been injected into the near-Earth environment of the order of 10 Myr ago (cf.\ Bottke et al. 2002), but with the separation occurring much more recently. We also surveyed meteor data related to the IFMs. Consequently, we noticed a likely meteor swarm found by Sekanina (1973) in the Harvard (Havana) radar meteor orbit survey: the daytime Taurid-Perseid meteor swarm, recorded around June 18 in the radar's 1961--1965 term of operation. The orbital parameters are in good agreement with those of Icarus, as presented in Table \ref{tbl:IandT}. Their $D_{\rm SH} \sim 0.08$ is in the probable association range. Although we cannot accurately measure their $\Delta t$, both current orbits appear to be at almost the same evolutionary phase. No further orbital data were found in the radar meteor orbit database. We may therefore regard the daytime Taurid-Perseids as a transient Earth-crossing IFM dust band rather than a cometary meteor stream.
\section{Discussion and Conclusions} There have been numerous studies on the formation of main belt asteroid (MBA) families, and also some on NEO families. Statistical studies using NEO orbit data, based on orbital similarity, often generate positive results on the existence of NEO families. However, Fu et al. (2005) concluded that it is unlikely that these results are anything more than random fluctuations in the NEO orbit population. In a past IFM study, Steel et al. (1992) noted an orbital similarity between (5786) Talos = 1991 RC and Icarus. Their orbital elements, except for $\omega$ and $\mathit{\Omega}$, indeed coincide well with each other. However, Talos' longitude of perihelion remains widely separated (by $\sim 50^\circ$) from that of Icarus over the past 11000 yr integrated by Steel et al., so that it is difficult to verify a genetic relationship. If there exist NEO families having high-eccentricity and rather highly inclined orbits, then unless their origin is extremely recent, their differential orbital evolutions, shifted by $\Delta t$, will lead to their current orbital elements being drastically different. For this reason it is more complicated to search for NEO families than MBA families. Nevertheless, we found Near-Earth Apollo asteroids Icarus and 2007 $\mathrm{MK_6}$ to be very likely candidates for IFMs, based on our time-lag theory. Their $\Delta t \sim 1000$ yr and minimized $D_{\rm SH} \sim 0.0098$ are even smaller than those, $\sim 4600$ yr and 0.04, of the well-established Phaethon--2005 UD relationship. Since Phaethon and 2005 UD may have a cometary origin (Paper I), the dynamical relationship between Icarus and 2007 $\mathrm{MK_6}$, along with a possible IFM dust band, may constitute the first detection of an asteroidal NEO family, namely the ``Icarus asteroid family''. In this case, Icarus should be the parent body, but as it is only a 1-km size object, the Icarus family is on a smaller scale than MBA families.
The next Earth approaches of Icarus and 2007 $\mathrm{MK_6}$ will occur respectively on 2015 June 17 to 0.05 AU and 2016 June 15 to 0.10 AU, providing good opportunities to determine additional physical parameters and to further study their common origin. It is possible that further accurate astrometry and advances in the numerical analysis will eventually resolve the separation age. \acknowledgments The authors are grateful to the anonymous referee for his careful reading of the manuscript and for his comments.
\section{Introduction} The polarization is of fundamental importance in understanding condensed-matter systems~\cite{AshcroftMermin,Kittel}. Historically, theories of the polarization were first developed in order to understand ferroelectric materials with macroscopic electric polarization~\cite{Rabe}. The total electric dipole moment of a piece of material may be given in terms of the surface charge. However, since the total electric dipole moment is typically proportional to the volume of the material, it can be regarded as a bulk property. Hence the electric polarization may be defined, with some caveats, for systems with the periodic boundary condition, for which the surface charge is absent. It turned out that the concept of the polarization is useful in describing a much wider range of materials and phenomena than ferroelectricity. For example, the spin transport in topological insulators can be understood via `spin polarization'~\cite{FuTRP,XiaoLiangQi}. One of the key observations was the identification of the polarization as a Berry phase~\cite{Zak,KSVPRB1993,VanderbiltKingSmith,RestaRMP,RestaVanderbilt,OrtizMartin,AligiaOrtiz,Souza}, which revealed the topological nature of the polarization. Topological transports such as the quantum Hall effect and the Thouless pump~\cite{Thouless,NiuThouless} are deeply related to the polarization, since the Chern number can be understood in terms of the adiabatic evolution of the polarization~\cite{XiaoLiangQi}. However, there is substantial confusion in the very definition of the polarization as a Berry phase (see, e.g., Ref.~\onlinecite{Martin}). Several inequivalent Berry phases can be defined, and are indeed found in the literature. Given the fundamental importance of the polarization, in this paper we revisit the relation between the polarization and the Berry phase. Our systematic analysis clarifies the physical meanings of the different forms of the Berry phase. 
As we will discuss in detail, all of them can be related to the polarization, while only one particular definition of the Berry phase corresponds to the polarization that is standard in the literature. Nevertheless, other definitions of the Berry phase are also perfectly consistent and have their own physical meanings. Although the ``polarization current'' derived from the Berry phase does depend on the definition, the total charge transported during a Thouless pump is given by the same quantized topological invariant. This paper is organized as follows. In Sec.~\ref{sec:Thouless}, after reviewing the Thouless pump, we introduce two Berry phases, one for the uniform vector potential and the other for the twisted boundary condition, and clarify their meaning and properties. We confirm and demonstrate our understanding in a concrete model in Sec.~\ref{sec:model}. We then clarify the relation between the Berry-phase formulation of the polarization and the compact expression proposed by Resta in Sec.~\ref{sec:Resta}. In Sec.~\ref{sec:band}, we discuss the special case of band insulators. Finally, Sec.~\ref{sec:conclusion} is devoted to conclusions. \section{General Formulation} \label{sec:Thouless} \subsection{Thouless pump} In order to motivate the formulation, let us start by reviewing the Thouless pump~\cite{Thouless,NiuThouless}. For simplicity, we discuss the quantum mechanics of particles on a 1D ring. In the Thouless pump, the Hamiltonian $\hat{H}(t)$ is adiabatically changed over time in such a way that $\hat{H}(0)=\hat{H}(T)$, and we consider the charge transported during the period $0 \leq t \leq T$. Although the pumping itself can be realized just by the adiabatic time-dependence of the Hamiltonian, it is convenient for theoretical analysis to introduce a magnetic flux $\theta$ piercing the ring~\cite{NiuThouless}. Let us represent the flux $\theta$ by the position- and time-independent vector potential $A_x=\tfrac{\theta}{L}$. 
Then the simplest example of the Hamiltonian reads \begin{equation} \hat{H}_\theta(t)=\int_{0}^L dx\,\hat{c}_x^\dagger\left[-\tfrac{1}{2m}(\partial_x+i\tfrac{\theta}{L})^2+V_x(t)\right]\hat{c}_x. \label{H1} \end{equation} Throughout this paper, we set the charge of the particle to unity. Our discussion below does not rely on the specific form of the Hamiltonian~\eqref{H1}. Arbitrary finite-range interactions can be added as long as the particle number conservation is respected. There is a tradeoff between the periodicity in $x$ and that in $\theta$. The current choice of the uniform vector potential implicitly assumes the periodic boundary condition in space. On the other hand, $\hat{H}_{\theta+2\pi}$ is not identical to $\hat{H}_\theta$ and is only unitarily equivalent to $\hat{H}_\theta$ although $\theta$ and $\theta+2\pi$ are physically equivalent. (For the sake of brevity, we do not explicitly write the time dependence below when it is obvious.) These two values of $\theta$ are related by the large gauge transformation $e^{2\pi i\hat{P}}$ as $\hat{H}_{\theta+2\pi}=e^{-2\pi i\hat{P}}\hat{H}_{\theta}e^{2\pi i\hat{P}}$, where \begin{equation} \hat{P}\equiv\tfrac{1}{L}\int_{0}^L dx\,x\hat{n}_x,\quad \hat{n}_x\equiv\hat{c}_{x}^\dagger \hat{c}_{x}\label{Pdef} \end{equation} is the polarization operator. Since the vector potential is uniform, taking a derivative of $\hat{H}_\theta$ with respect to $\theta$ gives the \emph{averaged} current operator, \begin{eqnarray} \hat{\bar{j}}_{\theta}\equiv \partial_\theta\hat{H}_\theta=\tfrac{1}{L}\int_{0}^Ldx\,\hat{j}_\theta(x).\label{Jave} \end{eqnarray} Here, $\hat{j}_\theta(x)$ is the local current. For instance, it reads $\hat{j}_\theta(x)=\tfrac{1}{2mi}\hat{c}_x^\dagger(\partial_x+i\tfrac{\theta}{L})\hat{c}_x+\text{h.c.}$ for the Hamiltonian in Eq.~\eqref{H1}. Let us denote by $|\Phi_\theta\rangle$ the ground state of the snapshot Hamiltonian with the energy eigenvalue $E_\theta$. 
We assume the uniqueness of the ground state $|\Phi_\theta\rangle$ and the finite excitation gap above the ground state for all values of $\theta\in[0,2\pi]$ and $t\in[0,T]$. Let $|\Psi_\theta(t)\rangle$ be the state that is initially the ground state $|\Phi_\theta\rangle$ at $t=0$. By taking into account the leading contribution of the excited states to $|\Psi_\theta(t)\rangle$ for $t>0$, Niu and Thouless showed that the current expectation value $\mathcal{J}_\theta(t)\equiv\langle\Psi_\theta(t)|\hat{\bar{j}}_{\theta}|\Psi_\theta(t)\rangle$ at each time $t$ is given in the form of the Berry curvature~\cite{NiuThouless}: \begin{eqnarray} \mathcal{J}_\theta&=&\partial_{\theta}E_\theta+\mathcal{F}_\theta,\label{NTcurrent}\\ \mathcal{F}_\theta&\equiv&i\big[\partial_t\langle\Phi_\theta|\partial_{\theta}|\Phi_\theta\rangle-\partial_{\theta}\langle\Phi_\theta|\partial_t|\Phi_\theta\rangle\big].\label{Ftheta} \end{eqnarray} We review the derivation in Appendix~\ref{app:thouless}. The term $\partial_{\theta}E_\theta$ is the persistent current of the ground state that can be neglected for a large $L$. Furthermore, Niu and Thouless also showed that $\mathcal{J}_\theta$ can be well-approximated by the average over $\theta$, \begin{eqnarray} \mathcal{J}&\equiv&\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\mathcal{J}_\theta=\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\mathcal{F}_\theta\notag\\ &=&\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}i\big[\partial_t\langle\Phi_\theta|\partial_{\theta}|\Phi_\theta\rangle-\partial_{\theta}\langle\Phi_\theta|\partial_t|\Phi_\theta\rangle\big],\label{Jave2} \end{eqnarray} when $L$ is sufficiently large~\cite{NiuThouless}. 
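The Berry curvature $\mathcal{F}_\theta$ in Eq.~\eqref{Ftheta} can be evaluated numerically on a discrete grid of the two parameters via the gauge-invariant plaquette construction of Fukui, Hatsugai, and Suzuki. A minimal sketch, assuming an illustrative two-level Hamiltonian $\vec{d}\cdot\vec{\sigma}$ on a parameter torus (a hypothetical stand-in for the snapshot ground states, not the Hamiltonian of the text):

```python
import numpy as np

def ground_state(d):
    # lower eigenvector of H = d . sigma
    dx, dy, dz = d
    h = np.array([[dz, dx - 1j * dy], [dx + 1j * dy, -dz]])
    return np.linalg.eigh(h)[1][:, 0]

def chern_number(m, N=40):
    """Fukui-Hatsugai-Suzuki plaquette sum on an N x N parameter torus.

    Toy parametrization d(theta, t) = (sin theta, sin t, m + cos theta + cos t);
    the plaquette phases are gauge invariant and sum to 2*pi times an integer.
    """
    grid = 2 * np.pi * np.arange(N) / N
    u = [[ground_state((np.sin(th), np.sin(t), m + np.cos(th) + np.cos(t)))
          for t in grid] for th in grid]
    total = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            loop = (np.vdot(u[i][j], u[ip][j]) * np.vdot(u[ip][j], u[ip][jp])
                    * np.vdot(u[ip][jp], u[i][jp]) * np.vdot(u[i][jp], u[i][j]))
            total += np.angle(loop)
    return total / (2 * np.pi)
```

Because each small Wilson loop involves every state once as a bra and once as a ket, the arbitrary phases attached by the eigensolver cancel, which is what makes the discretization gauge invariant.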
After all, the transported charge $Q\equiv\int_{0}^Tdt\mathcal{J}$ during this time period is given in the form of the Chern number~\cite{Thouless,NiuThouless}: \begin{eqnarray} C\equiv\int_{0}^Tdt\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\,\mathcal{F}_\theta,\label{tChern} \end{eqnarray} which reveals the topological nature of the pump and suggests the quantization of the transported charge (but see below). \subsection{Berry phase with uniform vector potential} Physically, we demand that the polarization $\mathcal{P}$ satisfies \begin{equation} \mathcal{J} = \tfrac{d}{dt} \mathcal{P} . \label{JvsP} \end{equation} Comparing Eq.~\eqref{JvsP} with the first term in the integrand of Eq.~\eqref{Jave2}, it is tempting to identify the integral \begin{eqnarray} \mathcal{P}\sim\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\,i\langle\Phi_\theta|\partial_{\theta}|\Phi_\theta\rangle\label{manybodyP} \end{eqnarray} as the polarization. Indeed, Eq.~\eqref{manybodyP} is the standard definition of the polarization in the bulk~\cite{OrtizMartin,AligiaOrtiz,Souza,Aligia,Hetenyi2012}, while there is a subtlety as we will discuss below. We note that Eq.~\eqref{JvsP} and our convention of unit charge imply that $\mathcal{P}$ is dimensionless, which is consistent with Eq.~\eqref{manybodyP}. The form~\eqref{manybodyP} looks like a Berry phase. However, as we pointed out above, the Hamiltonian $\hat{H}_\theta$ lacks the periodicity in $\theta$, and thus the state $|\Phi_\theta\rangle$ is not periodic either. In fact, the value of Eq.~\eqref{manybodyP} can be arbitrarily modified by the gauge transformation $|\Phi_\theta\rangle\rightarrow |\Phi_\theta\rangle'=e^{i\chi(\theta)}|\Phi_\theta\rangle$ that would shift $\mathcal{P}$ by $\frac{\chi(2\pi)-\chi(0)}{2\pi}$. 
We thus need to define the polarization as \begin{eqnarray} \mathcal{P}\equiv\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\,i\langle\Phi_\theta|\partial_{\theta}|\Phi_\theta\rangle+\frac{1}{2\pi}\text{Im}\ln\langle\Phi_{0}|e^{2\pi i \hat{P}}|\Phi_{2\pi}\rangle \label{manybodyP12}, \end{eqnarray} whose fractional part can be confirmed to be gauge-invariant. In fact, one can reproduce both the first and the second term in the integrand of Eq.~\eqref{Jave2} by plugging Eq.~\eqref{manybodyP12} into Eq.~\eqref{JvsP} using $|\Phi_{2\pi}\rangle=e^{i\alpha(t)}e^{-2\pi i \hat{P}}|\Phi_{0}\rangle$ for some $\alpha(t)\in[0,2\pi]$. However, the topological nature of the polarization is not obvious in this formulation. In fact, even though Eq.~\eqref{tChern} appears as a Chern number, the lack of the periodicity in $\theta$ would invalidate the usual argument of its quantization. Thus we need to study the issue more carefully. \subsection{Berry phase under twisted boundary condition} \label{sec:twisted} To make the topological quantization evident, let us perform the unitary transformation \begin{equation} \hat{\tilde{H}}_{\theta}=e^{ i\theta\hat{P}}\hat{H}_{\theta}e^{- i\theta \hat{P}}. \end{equation} The new Hamiltonian is periodic in $\theta$, $\hat{\tilde{H}}_{\theta+2\pi}=\hat{\tilde{H}}_{\theta}$, but instead the boundary condition is twisted by the factor $e^{i\theta}$ (see Appendix~\ref{app:twisted}). Let $|\tilde{\Phi}_\theta\rangle$ be the unique ground state of $\hat{\tilde{H}}_{\theta}$. As the Hamiltonian is periodic in $\theta$, one can naturally demand $|\tilde{\Phi}_{\theta+2\pi}\rangle=|\tilde{\Phi}_\theta\rangle$ without loss of generality. Given $|\tilde{\Phi}_\theta\rangle$ with this property, we can fix the phase ambiguity of $|\Phi_{\theta}\rangle$ in the uniform gauge by setting \begin{equation} |\Phi_{\theta}\rangle=e^{-i\theta \hat{P}}|\tilde{\Phi}_\theta\rangle. 
\label{rule} \end{equation} With this condition, $|\Phi_{\theta+2\pi}\rangle$ is related to $|\Phi_\theta\rangle$ as $|\Phi_{\theta+2\pi}\rangle=e^{-2\pi i \hat{P}}|\Phi_\theta\rangle$. (In other words, $\alpha(t)$ above is set to $0$.) Then the second term in Eq.~\eqref{manybodyP12} vanishes and the definition of $\mathcal{P}$ reduces back to Eq.~\eqref{manybodyP}. Furthermore, the gauge transformation consistent with this condition must satisfy $e^{i\chi(0)}=e^{i\chi(2\pi)}$ and the fractional part of $\mathcal{P}$ is gauge invariant. The same condition also demands that $\langle \Phi_{2\pi} | \partial_t | \Phi_{2\pi} \rangle = \langle \Phi_{0} | \partial_t | \Phi_{0} \rangle$ so that $\int_0^{2\pi} d\theta\partial_\theta[\langle \Phi_{\theta} | \partial_t | \Phi_{\theta} \rangle]$ vanishes and $\mathcal{J}=\frac{d}{dt}\mathcal{P}$ precisely holds. On the other hand, using $|\tilde{\Phi}_\theta\rangle$ instead of $|\Phi_\theta\rangle$, one may introduce a different kind of Berry phase~\cite{HiranoKatsuraHatsugai1,HiranoKatsuraHatsugai2} \begin{eqnarray} \tilde{\mathcal{P}}=\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\,i\langle\tilde{\Phi}_\theta|\partial_{\theta}|\tilde{\Phi}_\theta\rangle. \label{tildeP} \end{eqnarray} It is tempting to identify this Berry phase as the polarization. However, we find that even \emph{the fractional parts} of $\mathcal{P}$ and $\tilde{\mathcal{P}}$ do not agree in general. Instead, Eq.~\eqref{rule} suggests that \begin{equation} \mathcal{P}=\tilde{\mathcal{P}}+\bar{\mathcal{P}}_0,\quad\bar{\mathcal{P}}_0\equiv\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\,\langle \Phi_{\theta}|\hat{P}|\Phi_{\theta}\rangle.\label{relP} \end{equation} The definition of $\bar{\mathcal{P}}_0$ here involves averaging over $\theta$, but it is exponentially close to the one without the average $\mathcal{P}_0\equiv\langle \Phi|\hat{P}|\Phi\rangle$ for a sufficiently large $L$~\cite{flux}. 
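Eq.~\eqref{relP} can be checked numerically. A minimal sketch, assuming a hypothetical lattice stand-in (one particle on an $L$-site ring with an attractive on-site well): the two Berry phases become discrete, gauge-invariant Wilson loops over $\theta$, where the uniform-gauge loop is closed with the large gauge transformation $e^{-2\pi i\hat{P}}$ and the twisted-gauge loop is closed directly by the periodicity in $\theta$:

```python
import numpy as np

L, X0, V0, N = 12, 5, 4.0, 120  # ring size, well site, well depth, theta grid

def ground_state(theta, uniform):
    # uniform=True: A_x = theta/L on every link; False: all flux on the seam link
    H = np.zeros((L, L), complex)
    for x in range(L):
        ph = theta / L if uniform else (theta if x == L - 1 else 0.0)
        H[x, (x + 1) % L] = -np.exp(1j * ph)
        H[(x + 1) % L, x] = -np.exp(-1j * ph)
    H[X0, X0] = -V0
    return np.linalg.eigh(H)[1][:, 0]

def wilson_loop(states, closure):
    # discrete Berry phase in units of 2*pi; invariant under per-point phases
    prod = 1.0 + 0j
    for n in range(len(states) - 1):
        prod *= np.vdot(states[n], states[n + 1])
    prod *= np.vdot(states[-1], closure * states[0])
    return -np.angle(prod) / (2 * np.pi)

thetas = 2 * np.pi * np.arange(N + 1) / N
phi = [ground_state(t, True) for t in thetas]    # uniform gauge |Phi_theta>
phit = [ground_state(t, False) for t in thetas]  # twisted gauge |tilde Phi_theta>

P = wilson_loop(phi, np.exp(-2j * np.pi * np.arange(L) / L)) % 1.0  # Eq. (manybodyP12)
Pt = wilson_loop(phit, np.ones(L))                                  # Eq. (tildeP)
P0 = np.mean([np.sum(np.abs(s) ** 2 * np.arange(L)) / L for s in phi[:-1]])
```

For a state localized at site $X_0$ away from the seam, $\mathcal{P}\approx X_0/L$, $\tilde{\mathcal{P}}$ is exponentially small, and $\mathcal{P}-\tilde{\mathcal{P}}$ reproduces $\bar{\mathcal{P}}_0$ modulo 1.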
Unlike $\mathcal{P}$ or $\tilde{\mathcal{P}}$, $\bar{\mathcal{P}}_0$ is completely gauge-independent. Ref.~\onlinecite{HatsugaiFukui} argued that $\mathcal{P}_0$ is related to the center-of-mass position when the open boundary condition is taken. To understand the physical meaning of the polarization-like quantity $\tilde{\mathcal{P}}$, note that \begin{equation} \partial_\theta\hat{\tilde{H}}_{\theta}=\hat{\tilde{j}}_\theta(0)\label{current2} \end{equation} is the \emph{local} current operator at the `seam' $x=0$ ($=L$). This can be best seen from the fact that the unitary transformation $e^{i\theta\hat{P}}$ induces the gauge transformation $\tilde{A}_x=A_x-\partial_x(\frac{\theta}{L}x)=\tfrac{\theta}{L}-\frac{\theta}{L}(1-L\delta(x))=\theta\delta(x)$. The delta function originates from the jump of $x$ by $-L$ at the seam. As a sanity check, we have $\int_0^LdxA_x=\int_0^Ldx\tilde{A}_x=\theta$, which is required since the total flux piercing the ring should not be altered by the unitary transformation. Another way of verifying Eq.~\eqref{current2} is based on the current conservation law: $i[\hat{H}_\theta,\hat{n}_x]+\partial_x\hat{j}_\theta(x)=0$ (Appendix~\ref{app:conservation}). Plugging in the definition of $\hat{P}$ in Eq.~\eqref{Pdef} and integrating by parts, we get \begin{eqnarray} i[\hat{H}_\theta,\hat{P}]=-\tfrac{1}{L}\int_{0}^L dx\,x\partial_x\hat{j}_\theta(x)=\hat{\bar{j}}_{\theta}-\hat{j}_\theta(0),\label{dtP} \end{eqnarray} where $\hat{\bar{j}}_{\theta}$ is defined in Eq.~\eqref{Jave}. Therefore, \begin{eqnarray} \partial_\theta\hat{\tilde{H}}_{\theta}&=&\partial_\theta(e^{ i\theta\hat{P}}\hat{H}_{\theta}e^{- i\theta \hat{P}})\notag\\ &=&e^{ i\theta\hat{P}}\left(\partial_\theta\hat{H}_{\theta}-i[\hat{H}_\theta,\hat{P}]\right)e^{- i\theta \hat{P}}\notag\\ &=&e^{ i\theta\hat{P}}\hat{j}_\theta(0)e^{- i\theta \hat{P}}=\hat{\tilde{j}}_\theta(0). 
\end{eqnarray} The relation in Eq.~\eqref{dtP} is somewhat nontrivial --- in the Heisenberg picture, the left-hand side is $\partial_t\hat{P}$. It means that $\hat{\bar{j}}_{\theta}\neq \partial_t\hat{P}$ at the operator level under the periodic boundary condition, although we still have $\mathcal{J}=\frac{d}{dt}\mathcal{P}$. Given Eq.~\eqref{current2}, following the discussion of Niu and Thouless, we find that the expectation value $\tilde{\mathcal{J}}_\theta\equiv\langle\Psi_\theta(t)|\hat{\tilde{j}}_\theta(x=0)|\Psi_\theta(t)\rangle$ of the local current flowing at the seam, induced by the adiabatic time evolution, is given by \begin{eqnarray} \mathcal{\tilde{J}}_\theta&=&\partial_{\theta}E_\theta+\mathcal{\tilde{F}}_\theta,\label{NTcurrent2}\\ \mathcal{\tilde{F}}_\theta&\equiv&i\big[\partial_t\langle\tilde{\Phi}_\theta|\partial_{\theta}|\tilde{\Phi}_\theta\rangle-\partial_{\theta}\langle\tilde{\Phi}_\theta|\partial_t|\tilde{\Phi}_\theta\rangle\big]. \end{eqnarray} It is now clear that \emph{$\tilde{\mathcal{P}}$ counts the number of particles that go through the seam.} This is in sharp contrast to $\mathcal{P}$, which cares about the motion of particles at every single point of space on an equal footing, as suggested by Eq.~\eqref{Jave}. \subsection{``Gauge'' dependence} We have clarified that the polarization currents $\mathcal{J}$ and $\mathcal{\tilde{J}}$, respectively corresponding to $A_x=\frac{\theta}{L}$ and $\tilde{A}_x=\theta\delta(x)$, represent quite different quantities. In Sec.~\ref{sec:model}, we will demonstrate the clear difference between them in a simple example. However, this might sound puzzling because they were written in the form of the Berry curvatures: \begin{eqnarray} \mathcal{J}=\tfrac{d}{dt}\mathcal{P}=\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\mathcal{F}_\theta,\\ \mathcal{\tilde{J}}=\tfrac{d}{dt}\mathcal{\tilde{P}}=\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\tilde{\mathcal{F}}_\theta, \end{eqnarray} and the Berry curvature must be ``gauge-invariant''. 
\begin{figure} \begin{center} \includegraphics[width=0.99\columnwidth]{1Dlattice.pdf} \caption{\label{fig:1Dlattice} (a) A lattice with two sites ($x_1=0$ and $x_2=0.5$) in a unit cell. The lattice constant $a$ is set to $1$. (b) The origin (shown by the blue dot) is shifted by $-0.25$ and the lattice positions become $x_1=0.25$ and $x_2=0.75$. (c) A different unit cell is chosen, which includes $x_1=1$ and $x_2=0.5$. } \end{center} \end{figure} To resolve this apparent paradox, one should note that there are two completely distinct types of gauge choices here. One is the choice of the vector potential $A_x$ associated with the local U(1) phase of the wavefunction \emph{as a function of $x$}. The gauge transformation in this sense is represented by the unitary operator $\hat{U}_{\epsilon}=e^{i\int dx\,\epsilon(x)\hat{n}_x}$ that induces $A_x\rightarrow A_x-\partial_x\epsilon(x)$ (Appendix~\ref{app:conservation}). The other one is the choice of the overall phase of the state vector. Since $|\Phi_\theta\rangle$ is defined for the snapshot Hamiltonian $\hat{H}_\theta$ independently for each $\theta$, we can always redefine $|\Phi_\theta\rangle'=e^{i\chi(\theta)}|\Phi_\theta\rangle$ \emph{as a function of $\theta$}. The Berry curvatures $\mathcal{F}_\theta$ and $\tilde{\mathcal{F}}_\theta$ are independent of such a gauge choice in the $\theta$-space~\cite{Kohmoto} but may change under an $x$-dependent gauge transformation discussed above. In fact, Eq.~\eqref{rule} implies \begin{eqnarray} \mathcal{F}_\theta-\tilde{\mathcal{F}}_\theta=\partial_t[\langle \Phi_{\theta}|\hat{P}|\Phi_{\theta}\rangle],\label{FF} \end{eqnarray} which is generically non-vanishing. Although we have only compared the two representative choices of the vector potential so far, one can freely move the seam or even split it (i.e., $A_x(x)=\theta\sum_{i}p_i\delta(x-x_i)$ with $\sum_ip_i=1$) by a proper local gauge transformation. 
The corresponding Berry phase simply denotes the weighted average of the number of particles going through each seam. \subsection{Issues in the polarization operator} \label{sec:issues} Although $\hat{P}$ in Eq.~\eqref{Pdef} is perfectly well-defined as it is, it has two unfavorable properties. (i) Origin dependence~\cite{Moore}: when the origin is shifted by $-\xi$ and $x$ is replaced with $x+\xi$, $\hat{P}$ becomes $\hat{P}'=\hat{P}+\xi\frac{\hat{N}}{L}$ [see Fig.~\ref{fig:1Dlattice}(b)]. (ii) Seam dependence: if the position of the seam is moved by $r$ and we use $r\leq x<L+r$ as the range of $x$, instead of $0\leq x<L$, $\hat{P}$ becomes $\hat{P}'=\hat{P}+\int_0^r dx\,\hat{n}_x$ [see Fig.~\ref{fig:1Dlattice}(c)]. As a consequence, $\bar{\mathcal{P}}_0$ depends both on the choice of the origin and the position of the seam. As we will see later, the choice of the position of the seam corresponds to the choice of the unit cell~\cite{VanderbiltKingSmith,RestaVanderbilt} in the case of band insulators. The origin dependence may be resolved by imposing the charge neutrality condition and taking into account contributions from all `charged' particles (e.g., ions for the charge polarization)~\cite{RestaVanderbilt}. However, if we understand the polarization in a generalized sense, including the spin polarization for $S_z$-conserving magnets~\cite{HiranoKatsuraHatsugai1}, the neutrality condition is not necessarily satisfied. Since $\tilde{\mathcal{P}}$ cares only about the position of the seam, it is independent of the choice of the origin. This implies that $\mathcal{P}=\tilde{\mathcal{P}}+\bar{\mathcal{P}}_0$ depends on the origin but is independent of the seam. We summarize these properties in Table~\ref{summary1}. Another related but distinct issue in $\hat{P}$ concerns the boundary condition~\cite{RestaPRL1998}. 
When $|\Phi\rangle$ satisfies the periodic boundary condition, $\hat{P}|\Phi\rangle$ does not because $\hat{P}$ multiplies $x$ to the wavefunction, which becomes discontinuous at the seam. For this reason, strictly speaking, the quantity $\bar{\mathcal{P}}_0$ may not be an expectation value of an operator in the usual sense --- it appeared above as the difference of the two Berry phases with respect to the state under different boundary conditions. Nonetheless, since $\mathcal{P}$ and $\tilde{\mathcal{P}}$ are well-defined, $\bar{\mathcal{P}}_0=\mathcal{P}-\tilde{\mathcal{P}}$ should also be. \begin{table} \caption{Properties of the many-body Berry phases $\mathcal{P}$ and $\tilde{\mathcal{P}}$ defined in Eqs.~\eqref{manybodyP12} and \eqref{tildeP}, $\bar{\mathcal{P}}_{0}$ in Eq.~\eqref{relP}, and the change of Berry phases in Eqs.~\eqref{dp1} and \eqref{dp2}. \label{summary1}} \begin{tabular}{c|ccc}\hline\hline $\mathcal{P}$ & Gauge\footnote{The `gauge' here refers to the gauge choice in the $\theta$-space.} & Origin& Seam\\\hline $\mathcal{P}$ & depends\footnote{The fractional part of $\mathcal{P}$ and $\tilde{\mathcal{P}}$ is gauge-independent.} &depends\footnote{The origin dependence may be resolved by the charge-neutrality condition.}& independent \\ $\tilde{\mathcal{P}}$ & depends$^{\text{\textcolor{blue}{b}}}$ & independent & depends \\ $\bar{\mathcal{P}}_{0}$ & independent & depends$^{\text{\textcolor{blue}{c}}}$ & depends \\\hline $\Delta\mathcal{P}(t)$ & independent & independent & independent \\ $\Delta\tilde{\mathcal{P}}(t)$ & independent & independent & depends \\\hline\hline \end{tabular} \end{table} \subsection{Change of the polarization} In order to cancel out the dependence on the unphysical quantities, it is customary to focus on the difference, \begin{eqnarray} \Delta\mathcal{P}(t)&\equiv&\mathcal{P}(t)-\mathcal{P}(0)=\int_{0}^{t}dt'\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\mathcal{F}_\theta(t'),\label{dp1}\\ 
\Delta\tilde{\mathcal{P}}(t)&\equiv&\tilde{\mathcal{P}}(t)-\tilde{\mathcal{P}}(0)=\int_{0}^{t}dt'\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\tilde{\mathcal{F}}_\theta(t').\label{dp2} \end{eqnarray} Neither $\Delta\mathcal{P}(t)$ nor $\Delta\tilde{\mathcal{P}}(t)$ depends on the choice of the origin. Note, however, that $\Delta\tilde{\mathcal{P}}$ still depends on the position of the seam because, by definition, it measures the current flowing at the seam. Although $\Delta\tilde{\mathcal{P}}(t)\neq \Delta\mathcal{P}(t)$ in general, when $t$ is the period $T$ of the cyclic evolution of the Hamiltonian, we have \begin{equation} \Delta\tilde{\mathcal{P}}(T)=\Delta\mathcal{P}(T)=Q. \end{equation} Namely, the quantized charge transport is independent of the choice of the vector potential $A_x$. This can be readily seen based on Eq.~\eqref{FF}. Although $|\Phi_{\theta}\rangle$ at $t=0$ and $T$ may differ by a phase, the expectation value $\langle \Phi_{\theta}|\hat{P}|\Phi_{\theta}\rangle$ is manifestly periodic in $t$, and the total derivative does not contribute to the integral $\int_{0}^{T}dt$. \subsection{Higher dimensions} Before moving on to the analysis of a concrete model, let us comment on how to generalize our formulae to higher dimensions. As Berry phases are essentially one-dimensional quantities, we have formulated them in 1D systems. To extend them to higher dimensions, we should take periodic boundary conditions in all directions with the period $L_i$ ($i=1,\cdots,d$). Correspondingly, the integral $\int dx$ in Eqs.~\eqref{Pdef} and \eqref{Jave} should be replaced by $\int d^d\vec{x}$: \begin{eqnarray} \hat{P}&\equiv&\tfrac{1}{L_1}\int d^d\vec{x}\,x_1\hat{n}_{\vec{x}},\\ \hat{\bar{j}}_{\theta}&\equiv&\partial_\theta\hat{H}_\theta=\tfrac{1}{L_1}\int d^d\vec{x}\,\hat{j}_\theta(\vec{x}). \end{eqnarray} In order to identify $e^{2\pi i \hat{P}}$ as the large gauge transformation operator, we \emph{do not} replace the $L_1^{-1}$ factor by $(L_1L_2\cdots L_d)^{-1}$. 
\begin{figure} \begin{center} \includegraphics[width=0.80\columnwidth]{Delta.pdf} \caption{\label{fig:Delta} (a) The probability amplitude for $q L=10$, $\xi=\frac{1}{8}L$, and $\theta=0$. The inset illustrates the setup. (b) $\mathcal{P}$ (gray) and $\tilde{\mathcal{P}}$ (black) as a function of $\xi$ for $q L=10$.} \end{center} \end{figure} \section{Model with delta-function potential.} \label{sec:model} Let us confirm this understanding through a simple \emph{one-particle} model in one dimension. We take the delta-function potential $V_x=-\frac{\lambda}{m}\delta(x-\xi)$ centered at $x=\xi$ in Eq.~\eqref{H1}. The Hamiltonian has a unique bound state with the negative energy $E_\theta=-\frac{q^2}{2m}$, where $q\simeq \lambda$ should be found by inverting $\lambda=q\tfrac{\cosh q L-\cos\theta}{\sinh q L}$. Under the periodic boundary condition $\Phi_\theta(L)=\Phi_\theta(0)$, the ground-state wavefunction, satisfying $\Phi_{\theta+2\pi}(x)=e^{-i\frac{2\pi}{L}x}\Phi_\theta(x)$, is given by \begin{eqnarray} \Phi_\theta(x)&=&\mathcal{N}_\theta e^{-i\theta\frac{x}{L}}\Big[e^{-q|x-\xi|}(1-e^{-q L+i\theta\text{sgn}(x-\xi)})\notag\\ &&+e^{+q|x-\xi|}(e^{-q L+i\theta\text{sgn}(x-\xi)}-e^{-2q L})\Big], \end{eqnarray} where $\mathcal{N}_\theta$ is the normalization factor. Other eigenenergies are all positive so that the excitation gap remains finite for any $\theta$. Now, suppose $\xi$ has a weak time-dependence, adiabatically increasing from $\xi=0$ at $t=0$ to $\xi=L$ at $t=T$. We find \begin{equation} \mathcal{P}=i\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\int_0^Ldx\,\Phi_\theta(x)^*\partial_\theta\Phi_\theta(x)=\tfrac{\xi}{L}. \end{equation} The transported charge is thus $Q=\Delta\mathcal{P}(T)=+1$. See the grey straight line in Fig.~\ref{fig:Delta}(b). After the gauge transformation, the wavefunction becomes $\tilde{\Phi}_\theta(x)=e^{i\theta\frac{x}{L}}\Phi_\theta(x)$. 
It satisfies the twisted boundary condition $\tilde{\Phi}_\theta(L)=e^{i\theta}\tilde{\Phi}_\theta(0)$ and is periodic in $\theta$, $\tilde{\Phi}_{\theta+2\pi}(x)=\tilde{\Phi}_\theta(x)$. When $q L\gg1$, the Berry phase $\tilde{\mathcal{P}}$ is well-approximated by \begin{eqnarray} \tilde{\mathcal{P}}&=&i\int_{0}^{2\pi}\tfrac{d\theta}{2\pi}\int_0^Ldx\,\tilde{\Phi}_\theta(x)^*\partial_\theta\tilde{\Phi}_\theta(x)\notag\\ &\simeq&\tfrac{1}{2}(e^{2q (\xi-L)}-e^{-2q \xi}). \end{eqnarray} See the black curve in Fig.~\ref{fig:Delta}(b). The exact expression is included in Appendix~\ref{app:delta}. As $q L$ increases, the slope of $\tilde{\mathcal{P}}$ near $\xi=0$ and $L$ becomes sharper and sharper. In the limit of $q L\rightarrow\infty$ (i.e., the tight-binding limit), $\tilde{\mathcal{P}}$ vanishes for $0<\xi<L$ and the jump at $\xi=0$ and $L$ becomes abrupt just like a step function. Regardless of the value of $q L$, we see that $Q=\Delta\tilde{\mathcal{P}}(T)=+1$. This behavior is perfectly consistent with our interpretation of $\tilde{\mathcal{P}}$ explained above --- only the motion across the seam affects $\tilde{\mathcal{P}}$. We plot $\mathcal{P}$ as a function of $\xi$ for several other choices of the vector potential in Appendix~\ref{app:delta}. An alternative way of viewing this particular model is via the Aharonov-Bohm phase. The wavefunction $\tilde{\tilde{\Phi}}_\theta(x)\equiv e^{i\theta\frac{\xi}{L}}\Phi_\theta(x)$ possesses the periodicity in $\xi$. For fixed $\theta$, the Berry phase $\mathcal{B}(\theta)\equiv-i\int_0^Ld\xi\int_0^Ldx\,\tilde{\tilde{\Phi}}_\theta(x)^*\partial_\xi\tilde{\tilde{\Phi}}_\theta(x)$ with respect to $\xi$ measures the flux $\theta$ piercing the ring~\cite{Berry} in the limit of $q L\gg1$, and the difference $\frac{\mathcal{B}(2\pi)-\mathcal{B}(0)}{2\pi}=+1$ counts the transported charge. In this picture, the gauge-independence of the transported charge is manifest because the Aharonov-Bohm phase is gauge-independent. 
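The $\xi$-dependence shown in Fig.~\ref{fig:Delta}(b) can be reproduced in a lattice analog (an assumption: an attractive on-site well at site $x_0$ playing the role of the delta potential). Sweeping the well position, the uniform-gauge Berry phase grows linearly as $x_0/L$, so one charge is transported per cycle, while the twisted-gauge phase stays near zero away from the seam:

```python
import numpy as np

L, V0, N = 10, 4.0, 120  # ring size, well depth, theta grid

def berry_phase(x0, uniform):
    # discrete Berry phase over theta for a well at site x0 (units of 2*pi);
    # the uniform gauge is closed with the large gauge transformation
    thetas = 2 * np.pi * np.arange(N + 1) / N
    states = []
    for th in thetas:
        H = np.zeros((L, L), complex)
        for x in range(L):
            ph = th / L if uniform else (th if x == L - 1 else 0.0)
            H[x, (x + 1) % L] = -np.exp(1j * ph)
            H[(x + 1) % L, x] = -np.exp(-1j * ph)
        H[x0, x0] = -V0
        states.append(np.linalg.eigh(H)[1][:, 0])
    closure = np.exp(-2j * np.pi * np.arange(L) / L) if uniform else np.ones(L)
    prod = 1.0 + 0j
    for n in range(N):
        prod *= np.vdot(states[n], states[n + 1])
    prod *= np.vdot(states[N], closure * states[0])
    return -np.angle(prod) / (2 * np.pi)

P = [berry_phase(x0, True) for x0 in range(L)]   # ~ x0/L: one charge per cycle
Pt_mid = berry_phase(L // 2, False)              # ~ 0: well far from the seam
```

The linear staircase of $P$ over a full sweep sums to $+1$, while the twisted-gauge phase only responds when the well crosses the seam, in line with the step-function behavior described above.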
\section{Resta's formula} \label{sec:Resta} Let us clarify the relation between the Berry phases introduced above and the compact expression for the polarization proposed by Resta. It is formulated in terms of the expectation value of the large gauge transformation $e^{2\pi i\hat{P}}$ on the ground state $|\Phi\rangle\equiv|\Phi_{\theta=0}\rangle=|\tilde{\Phi}_{\theta=0}\rangle$~\cite{RestaPRL1998}: \begin{equation} \mathcal{P}_{\text{R}}\equiv\tfrac{1}{2\pi}\text{Im}\ln \langle\Phi|e^{2\pi i\hat{P}}|\Phi\rangle.\label{PR} \end{equation} Just like $\mathcal{P}$ and $\tilde{\mathcal{P}}$, only the fractional part of $\mathcal{P}_{\text{R}}$ is well-defined because of the ambiguity in the logarithm. While the formula~\eqref{PR} was originally introduced for one-dimensional systems, it might appear that the same formula can be straightforwardly used in any dimension. However, care must be taken in higher dimensions (as was hinted at in Ref.~\onlinecite{RestaPRL1998}). We will clarify the issue below. Under the condition in Eq.~\eqref{rule}, the expectation value $\langle\Phi|e^{2\pi i\hat{P}}|\Phi\rangle$ can be interpreted as the overlap of $|\Phi_{\theta}\rangle$ at $\theta=0$ and $2\pi$~\cite{NakamuraVoit}. We convert it into the form of the Berry phase by inserting the intermediate states at $\theta_n=\frac{2\pi n}{N}$ ($n=1,\cdots,N-1$): \begin{eqnarray} \langle\Phi_{2\pi}|\Phi_{0}\rangle&\simeq&\langle\Phi_{2\pi}|\Phi_{\pi}\rangle\langle\Phi_{\pi}|\Phi_{0}\rangle\notag\\ &\simeq&\langle\Phi_{2\pi}|\Phi_{\frac{3}{2}\pi}\rangle\langle\Phi_{\frac{3}{2}\pi}|\Phi_{\pi}\rangle\langle\Phi_{\pi}|\Phi_{\frac{1}{2}\pi}\rangle\langle\Phi_{\frac{1}{2}\pi}|\Phi_{0}\rangle\notag\\ &\simeq&\prod_{n=0}^{N-1}\langle \Phi_{\theta_{n+1}}|\Phi_{\theta_n}\rangle,\quad N=2^M. 
\end{eqnarray} In Appendix~\ref{app:Resta}, we prove that \begin{eqnarray} \Big|\langle\Phi_{2\pi}|\Phi_{0}\rangle-\lim_{M\rightarrow\infty}\prod_{n=0}^{2^M-1}\langle \Phi_{\theta_{n+1}}|\Phi_{\theta_n}\rangle\Big|\leq\tfrac{(2\pi)^2\mathcal{C}V}{2\Delta^2L_1^2}, \label{PRbound} \end{eqnarray} where $V=L_1L_2\cdots L_d$ is the volume of the system, $\Delta=\min_{\theta}\Delta_\theta$ and $\Delta_\theta$ is the excitation gap of $\hat{H}_\theta$, and $\mathcal{C}$ is the current fluctuation defined by \begin{equation} \mathcal{C}\equiv \tfrac{L_1^2}{V}\max_\theta\langle\Phi_{\theta}|(\delta\hat{\bar{j}}_\theta)^2|\Phi_{\theta}\rangle,\,\,\,\delta\hat{\bar{j}}_\theta\equiv\hat{\bar{j}}_\theta-\langle\Phi_{\theta}|\hat{\bar{j}}_\theta|\Phi_{\theta}\rangle. \end{equation} In gapped phases where correlation functions decay exponentially, $\mathcal{C}$ converges to a finite $O(1)$ number in the limit of large system size. After the interpolation, the overlap is precisely the Berry phase: \begin{eqnarray} &&\lim_{N\rightarrow\infty}\prod_{n=0}^{N-1}\langle \Phi_{\theta_{n+1}}|\Phi_{\theta_n}\rangle\notag\\ &&=\lim_{N\rightarrow\infty}e^{-\sum_{n=0}^{N-1}\frac{2\pi}{N}\langle \Phi_{\theta}|\partial_\theta|\Phi_{\theta}\rangle|_{\theta=\theta_n}}=e^{2\pi i\mathcal{P}}. \end{eqnarray} Therefore, we have \begin{equation} \mathcal{P}_{\text{R}}=\tfrac{1}{2\pi}\text{Im}\ln\langle\Phi_{2\pi}|\Phi_{0}\rangle=\mathcal{P} \end{equation} if $\frac{V}{L_1^2}=\frac{L_2\cdots L_d}{L_1}\rightarrow0$. This condition is violated in dimensions $d\geq2$ when the thermodynamic limit is taken in the isotropic manner ($L_i = L$), but can be satisfied in some anisotropic cases. For example, Ref.~\cite{Nakagawa} considered $\mathcal{P}_{\text{R}}$ in the thin-torus limit of 2D system and there $\frac{V}{L_1^2}=\frac{L_2}{L_1}\rightarrow0$ holds. Resta's original argument~\cite{RestaPRL1998} relating $\mathcal{P}_{\text{R}}$ to the polarization is via an adiabatic time evolution. 
He introduced a weak time dependence and showed that $\frac{d}{dt}\mathcal{P}_{\text{R}}$ coincides with $\mathcal{J}_{\theta=0}$, defined in Eq.~\eqref{NTcurrent}, to the leading order in $L_1^{-1}$. However, his argument is based on the first-order perturbation theory, expanding $|\Phi_{\theta+d\theta}\rangle$ as $|\Phi_{\theta}\rangle+d\theta |\Phi_{\theta}^{(1)}\rangle+\cdots$ for ``$d\theta=2\pi$''. Such an expansion cannot be verified in general. In Appendix~\ref{app:Resta}, we find that $1-|\langle\Phi_{\theta}|\Phi_{\theta+d\theta}\rangle|^2\leq (d\theta)^2\frac{\mathcal{C}V}{\Delta^2L_1^2}$ to the leading order in $d\theta$; when the right-hand side is small, $|\Phi_{\theta+d\theta}\rangle$ should be close to $|\Phi_{\theta}\rangle$ and the perturbation may be well-controlled. This condition is violated when $d\theta=2\pi$ and $\frac{\mathcal{C}V}{\Delta^2L_1^2}>1$. While Eq.~\eqref{PRbound} is an inequality, we generally expect that \begin{eqnarray} \Big|\langle\Phi_{2\pi}|\Phi_{0}\rangle - e^{i 2 \pi \mathcal{P}} \Big| \propto \tfrac{V}{\Delta^2 L_1^2}. \end{eqnarray} In fact, for one-dimensional insulators, Resta and Sorella showed that~\cite{RestaPRL1999} \begin{equation} |\langle\Phi^{\text{1D}}|e^{2\pi i\hat{P}}|\Phi^{\text{1D}}\rangle|= e^{-\frac{2\pi^2 n^{\text{1D}} \lambda^2}{L_1}+O(\frac{1}{L_1^2})},\label{Resta1D} \end{equation} where $n^{\text{1D}}=N/L_1$ is the particle density and $\lambda>0$ is the localization length. Now, let us form a $d$-dimensional insulator by a $(d-1)$-dimensional array of identical 1D chains with the lattice constant $a_i$ in the $i$-th direction.
Given Eq.~\eqref{Resta1D}, we have \begin{eqnarray} |\langle\Phi|e^{2\pi i\hat{P}}|\Phi\rangle|&=&|\langle\Phi^{\text{1D}}|e^{2\pi i\hat{P}}|\Phi^{\text{1D}}\rangle|^{\frac{L_2}{a_2}\cdots \frac{L_d}{a_d}}\notag\\ &=& e^{-2\pi^2n \lambda^2\frac{V}{L_1^2}+O(\frac{V}{L_1^3})},\,\,\, n\equiv\tfrac{n^{\text{1D}}}{a_2\cdots a_d}.\label{Souza} \end{eqnarray} Therefore, the magnitude $|\langle\Phi|e^{2\pi i\hat{P}}|\Phi\rangle|$ \emph{vanishes} when $\frac{V}{L_1^2}=\frac{L_2\cdots L_d}{L_1}\rightarrow+\infty$. In fact, Eq.~\eqref{Souza} is consistent with the higher-dimensional generalization of the localization length proposed in Ref.~\onlinecite{Souza}. \section{Polarization of band insulators} \label{sec:band} All the discussions so far apply regardless of the presence or absence of interactions or disorder, as long as the stated assumptions hold. Now let us consider the special case of band insulators, namely systems of noninteracting fermions in a periodic potential, with the Fermi level lying in a band gap. As we will see, the polarization in this case can be formulated in terms of Berry phases of the Bloch function in momentum space. Specifically, we will find two inequivalent Berry phases $\mathcal{P}^{\text{Bloch}}$ and $\tilde{\mathcal{P}}^{\text{Bloch}}$ for a band insulator, which correspond to $\mathcal{P}$ and $\tilde{\mathcal{P}}$ for a many-body insulator as introduced above. The difference between $\mathcal{P}^{\text{Bloch}}$ and $\tilde{\mathcal{P}}^{\text{Bloch}}$ in band insulators was examined earlier~\cite{Kudin2007,Rhim}. In particular, the relation between the surface charge and the bulk quantities $\mathcal{P}^{\text{Bloch}}$ and $\tilde{\mathcal{P}}^{\text{Bloch}}$ was discussed in Ref.~\cite{Rhim}. In this paper, we provide a unified picture on polarization and Berry phase for both many-body and band insulators, with a particular emphasis on the polarization current.
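Before turning to the Bloch formulation, it is worth noting that Eq.~\eqref{PR} is directly computable for noninteracting fermions: for a Slater determinant, the expectation value of the one-body unitary $e^{2\pi i\hat{P}}$ reduces to a determinant over occupied orbitals. The following sketch (an illustrative toy model, not part of the formalism above) evaluates $\mathcal{P}_{\text{R}}$ for a half-filled dimerized chain, taking $\hat{P}$ as the total position operator divided by $L$ in site units:

```python
import numpy as np

def resta_overlap(t1, t2, cells):
    """<Phi| exp(2 pi i X / L) |Phi> for a half-filled chain of free
    fermions with alternating hoppings t1, t2 (PBC, site spacing 1).
    For a Slater determinant this reduces to det(U^dag D U), where U
    holds the occupied orbitals and D = diag(exp(2 pi i x / L))."""
    L = 2 * cells
    H = np.zeros((L, L))
    for x in range(L):                     # bond (x, x+1), alternating
        t = t1 if x % 2 == 0 else t2
        H[x, (x + 1) % L] -= t
        H[(x + 1) % L, x] -= t
    U = np.linalg.eigh(H)[1][:, :L // 2]   # fill the lower band
    D = np.diag(np.exp(2j * np.pi * np.arange(L) / L))
    return np.linalg.det(U.conj().T @ D @ U)

z1 = resta_overlap(1.0, 0.2, cells=40)     # strong bonds on (0,1),(2,3),...
z2 = resta_overlap(0.2, 1.0, cells=40)     # dimerization shifted by one site
p1 = np.angle(z1) / (2 * np.pi)            # fractional part of P_R
p2 = np.angle(z2) / (2 * np.pi)
```

The two dimerization patterns give values of $\mathcal{P}_{\text{R}}$ differing by exactly $1/2\pmod 1$, and $|\langle\Phi|e^{2\pi i\hat{P}}|\Phi\rangle|<1$ with the deficit controlled by the localization length, in line with Eq.~\eqref{Resta1D}.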
\subsection{One dimension, single occupied band} As an example, let us take $\hat{H}_\theta$ in Eq.~\eqref{H1} with a periodic potential $V_{x+a}=V_x$ and set $\theta=0$. The Hamiltonian can be block-diagonalized by the Fourier transformation \begin{equation} \hat{c}_{x}\equiv\tfrac{1}{\sqrt{L/a}}\sum_{n=1}^{L/a}\hat{c}_{k_n,r} e^{i k_n x},\quad k_n\equiv\tfrac{2\pi}{L}n.\label{Fourier1} \end{equation} Here, we decomposed $x$ as $x=R+r$ with $R=0,a,2a,\cdots,L-a$ and $0\leq r<a$, i.e., $R$ labels the unit cell and $r$ is the position within a cell. The Hamiltonian then reduces to $\hat{H}_{\theta=0}=\sum_{n=1}^{L/a}\hat{h}_{k_n}$, where \begin{equation} \hat{h}_k\equiv\int_0^adr\,\hat{c}_{k,r}^\dagger h_{k,r}\hat{c}_{k,r},\,\,\, h_{k,r}\equiv\tfrac{-(\partial_r+ik)^2}{2m}+V_r.\label{HTB} \end{equation} Observe the formal similarity between Eqs.~\eqref{H1} and \eqref{HTB} --- they are exactly mapped onto each other by $L\leftrightarrow a$ and $\tfrac{\theta}{L}\leftrightarrow k$. As a result, we can formulate the polarization of the band insulators in parallel with the general theory discussed before. For simplicity, let us consider the case where the lowest band is completely filled and the other bands, separated by a band gap from the lowest band, are empty. Let $u_k(r)$ be the lowest energy eigenstate of $h_{k,r}$ under the `periodic boundary condition' $u_k(r)=u_k(r+a)$. We impose the normalization condition $\int_0^a dr | u_k(r)|^2 = 1$ for each $k$. In analogy to Eq.~\eqref{manybodyP}, the polarization of the band insulator can be defined as~\cite{Zak,KSVPRB1993,VanderbiltKingSmith,RestaRMP,RestaVanderbilt} \begin{equation} \mathcal{P}^{\text{Bloch}}\equiv\int_{0}^{\frac{2\pi}{a}}\tfrac{dk}{2\pi}\int_0^adr\,iu_k(r)^*\partial_ku_k(r).\label{PB1} \end{equation} The single-particle wavefunction in Fourier space, $u_k(r)$, can be interpreted as the periodic part of the Bloch function, as we shall discuss shortly.
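In numerical practice, Eq.~\eqref{PB1} is evaluated through the standard gauge-invariant discretization of the Berry phase (a well-known numerical recipe, stated here for concreteness rather than taken from the text):

```latex
\mathcal{P}^{\text{Bloch}}
  = -\lim_{N\to\infty}\frac{1}{2\pi}\,\mathrm{Im}\ln
    \prod_{n=0}^{N-1}\langle u_{k_n}|u_{k_{n+1}}\rangle,
\qquad
k_n \equiv \frac{2\pi n}{N a},
```

with $\langle u_k|u_{k'}\rangle \equiv \int_0^a dr\, u_k(r)^* u_{k'}(r)$ and the end point identified as $u_{k_N}(r)=e^{-2\pi i r/a}u_{k_0}(r)$. The closed product is invariant under arbitrary $k$-dependent phase redefinitions of $u_k$, so no smooth gauge needs to be constructed.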
The seam-independence of $\mathcal{P}$ implies that $\mathcal{P}^{\text{Bloch}}$ does not depend on the choice of the unit cell~\cite{VanderbiltKingSmith,RestaVanderbilt}. Note that $h_{k,r}$ and $u_{k}(r)$ are not periodic in $k$; they instead satisfy $h_{k+\frac{2\pi}{a},r}=e^{-2\pi i\frac{r}{a}}h_{k,r}e^{2\pi i\frac{r}{a}}$ and $u_{k+\frac{2\pi}{a}}(r)=e^{-2\pi i\frac{r}{a}}u_k(r)$~\cite{Zak,VanderbiltKingSmith,RestaVanderbilt,Rhim}. \begin{table} \caption{Comparison of Berry phases for 1D band insulators. The atomic limit is the limit of vanishing hopping with $\nu_i$ ($=0$ or $1$) localized electrons at the site $x_i$. (a)-(c) correspond to the model in Eq.~\eqref{fig1model} illustrated in Fig.~\ref{fig:1Dlattice} (a)-(c). \label{summary12}} \begin{tabular}{c|c|ccc}\hline\hline & Atomic Limit & (a) & (b) & (c)\\\hline $\mathcal{P}^{\text{Bloch}}$ & $\sum_{i}x_i\nu_i$ & 0.25 & 0.5 & 0.25 \\ $\tilde{\mathcal{P}}^{\text{Bloch}}$ & 0 & 0 & 0 & $-0.5$ \\ $\bar{\mathcal{P}}_{0}^{\text{Bloch}}$& $\sum_{i}x_i\nu_i$ & 0.25 & 0.5 & 0.75 \\\hline\hline \end{tabular} \end{table} Just like there were two ways of describing the flux $\theta$, there are two equivalent conventions in the Fourier transformation. The alternative definition involves $e^{i k R}$ rather than $e^{i k x}$: \begin{equation} \hat{c}_{x}\equiv\tfrac{1}{\sqrt{L/a}}\sum_{n=1}^{L/a}\hat{\tilde{c}}_{k_n,r} e^{i k_n R}. \label{Fourier2} \end{equation} The two ways are simply related by $\hat{\tilde{c}}_{k,r}=e^{i k r}\hat{c}_{k,r}$. In the latter choice, both $\tilde{h}_{k,r}=e^{ikr}h_{k,r}e^{-ikr}$ and $\tilde{u}_{k}(r)=e^{ikr}u_{k}(r)$ are manifestly periodic in $k$ with the period $2\pi/a$. For this reason, $\tilde{u}_{k}(r)$ is actually more standard in the context of topological insulators~\cite{TI2013}. In turn, $\tilde{u}_{k}(r)$ satisfies the `twisted boundary condition' $\tilde{u}_{k}(r+a)=e^{ika}\tilde{u}_{k}(r)$.
The Berry phase with respect to $\tilde{u}_{k}(r)$ \begin{equation} \tilde{\mathcal{P}}^{\text{Bloch}}\equiv\int_{0}^{\frac{2\pi}{a}}\tfrac{dk}{2\pi}\int_0^adr\,i\tilde{u}_k(r)^*\partial_k\tilde{u}_k(r)\label{PB2} \end{equation} measures the number of particles going through the unit-cell boundary. The fractional parts of $\mathcal{P}^{\text{Bloch}}$ and $\tilde{\mathcal{P}}^{\text{Bloch}}$ do not agree in general, and we have $\mathcal{P}^{\text{Bloch}}=\tilde{\mathcal{P}}^{\text{Bloch}}+\bar{\mathcal{P}}_0^{\text{Bloch}}$~\cite{Rhim}, the analog of Eq.~\eqref{relP}, where \begin{equation} \bar{\mathcal{P}}_0^{\text{Bloch}}\equiv\int_{0}^{\frac{2\pi}{a}}\tfrac{dk}{2\pi}\int_0^adr\,|\tilde{u}_k(r)|^2r.\label{PB3} \end{equation} A possibly more familiar way~\cite{Zak,VanderbiltKingSmith,RestaVanderbilt} of introducing the same $u_{k}(r)$ and $\tilde{u}_{k}(r)$, without explicitly performing the Fourier transformation, is via the Bloch theorem. It states that the single-particle wavefunction (the Bloch function) of $\hat{H}_{\theta=0}$ can be chosen in such a way that $\psi_{k}(x+a)=e^{ika}\psi_{k}(x)$ and $\psi_{k+\frac{2\pi}{a}}(x)=\psi_{k}(x)$. Then it is customary to introduce the periodic part of the Bloch function via $u_{k}(r)=e^{-ikx}\psi_{k}(x)$. However, we could have defined $\tilde{u}_{k}(r)=e^{-ikR}\psi_{k}(x)$ as well. It is easy to see that $u_{k}(r)$ and $\tilde{u}_{k}(r)$ formulated this way agree with the ones above. To see the properties of $\mathcal{P}^{\text{Bloch}}$ and $\tilde{\mathcal{P}}^{\text{Bloch}}$ more concretely, let us first discuss the atomic limit of tight-binding models. In the limit of vanishing hopping, $\tilde{u}_k$ can always be chosen \emph{$k$-independent} so that $\tilde{\mathcal{P}}^{\text{Bloch}}=0$. This is expected since $\tilde{\mathcal{P}}^{\text{Bloch}}$ measures the twist of $\tilde{u}_{k}$, and the atomic limit has the most trivial, constant Bloch function.
In contrast, $\mathcal{P}^{\text{Bloch}}$ may be nonzero even in this limit. The $i$-th site at $x=R+r_i$ in each unit cell, if occupied, adds $r_i$ to $\mathcal{P}^{\text{Bloch}}$ via $u_{k,i}=e^{-i k r_i}$. More generally, if the site $x=R+r_i$ hosts $\nu_i$ ($=0$ or $1$) localized electrons, we get $\mathcal{P}^{\text{Bloch}}=\sum_{i}r_i\nu_i$ at the filling $\nu=\sum_{i}\nu_i$ in the atomic limit. As another simple exercise, let us examine a two-band model for the lattice in Fig.~\ref{fig:1Dlattice} (a): \begin{equation} \hat{H}=t_0\sum_{R/a=0}^{L/a-1}\hat{c}_{R}^{2\dagger}\hat{c}_{R}^1+\text{h.c.}\label{fig1model} \end{equation} The Hamiltonian contains only an intra-cell hopping $t_0>0$. In this case, $\tilde{\mathcal{P}}^{\text{Bloch}}=0$ because the Hamiltonian in the momentum space does not have any $k$-dependence when the convention in Eq.~\eqref{Fourier2} is adopted. However, if a different unit cell is chosen as in Fig.~\ref{fig:1Dlattice} (c), the same hopping becomes an inter-cell hopping and results in $\tilde{\mathcal{P}}^{\text{Bloch}}\neq0$. This clarifies the unit-cell dependence of $\tilde{\mathcal{P}}^{\text{Bloch}}$, translated from the seam-dependence of $\tilde{\mathcal{P}}$. In contrast, $\mathcal{P}^{\text{Bloch}}=0.25$ both for Fig.~\ref{fig:1Dlattice} (a) and (c) as summarized in Table~\ref{summary12}.
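As a numerical cross-check of Table~\ref{summary12}, the Wilson-loop (discretized Berry phase) evaluation of $\mathcal{P}^{\text{Bloch}}=\tilde{\mathcal{P}}^{\text{Bloch}}+\bar{\mathcal{P}}_0^{\text{Bloch}}$ for the model in Eq.~\eqref{fig1model} can be sketched as follows. The site positions within the cell ($r=0,\,0.5$ for cell choice (a) and $r=0.5,\,1.0$ for the shifted cell (c), in units of $a$) and the overall sign of $\tilde{\mathcal{P}}^{\text{Bloch}}$ are illustrative conventions chosen here to match the atomic-limit column of the table; only $\mathcal{P}^{\text{Bloch}}$ modulo 1 is convention-independent.

```python
import numpy as np

def band_data(hk, r, nk=400):
    """Lowest-band Wilson loop P~ and intra-cell dipole P0_bar for a
    two-band Bloch Hamiltonian hk(k) with orbital positions r (a = 1)."""
    ks = 2 * np.pi * np.arange(nk) / nk
    vs = [np.linalg.eigh(hk(k))[1][:, 0] for k in ks]   # lowest band
    vs.append(vs[0])                                    # close the k-loop
    overlaps = [np.vdot(vs[n], vs[n + 1]) for n in range(nk)]
    p_tilde = -np.angle(np.prod(overlaps)) / (2 * np.pi)
    p0_bar = np.mean([np.abs(v) ** 2 @ np.asarray(r) for v in vs[:-1]])
    return p_tilde, p0_bar

t0 = 1.0
# (a): the t0 bond is intra-cell; the Bloch Hamiltonian is k-independent
h_a = lambda k: np.array([[0, t0], [t0, 0]], dtype=complex)
# (c): shifted unit cell; the same bond becomes inter-cell
h_c = lambda k: np.array([[0, t0 * np.exp(1j * k)],
                          [t0 * np.exp(-1j * k), 0]])
pt_a, p0_a = band_data(h_a, [0.0, 0.5])   # P~ = 0,     P0_bar = 0.25
pt_c, p0_c = band_data(h_c, [0.5, 1.0])   # P~ = +/-0.5, P0_bar = 0.75
```

Both unit-cell choices give the same $\mathcal{P}^{\text{Bloch}}=\tilde{\mathcal{P}}^{\text{Bloch}}+\bar{\mathcal{P}}_0^{\text{Bloch}}=0.25$ modulo 1, while $\tilde{\mathcal{P}}^{\text{Bloch}}$ jumps by $1/2$, illustrating the unit-cell dependence discussed above.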
\subsection{General dimensions, multiple occupied bands} For more general band insulators in $d$-dimensions with $\nu$ occupied bands, Eqs.~\eqref{PB1}, \eqref{PB2}, and \eqref{PB3} should be replaced by \begin{align} \mathcal{P}^{\text{Bloch}}_{\vec{k}_{\perp}} &= \sum_{\alpha=1}^\nu \int_0^{\frac{2\pi}{a_1}}\tfrac{d k_1}{2\pi}\int_{\text{u.c.}} d^d\vec{r}\,iu^{\alpha}_{\vec{k}}(\vec{r})^*\partial_{k_1}u^{\alpha}_{\vec{k}}(\vec{r}), \\ \tilde{\mathcal{P}}^{\text{Bloch}}_{\vec{k}_{\perp}}&=\sum_{\alpha=1}^\nu\int_0^{\frac{2\pi}{a_1}}\tfrac{d k_1}{2\pi}\int_{\text{u.c.}} d^d\vec{r}\,i\tilde{u}^{\alpha}_{\vec{k}}(\vec{r})^*\partial_{k_1}\tilde{u}^{\alpha}_{\vec{k}}(\vec{r}),\\ \bar{\mathcal{P}}^{\text{Bloch}}_{0,\vec{k}_{\perp}}&=\sum_{\alpha=1}^\nu\int_0^{\frac{2\pi}{a_1}}\tfrac{d k_1}{2\pi}\int_{\text{u.c.}} d^d\vec{r}\,|u^{\alpha}_{\vec{k}}(\vec{r})|^2r_1, \end{align} where $u_{\vec{k}}^{\alpha}(\vec{r})=e^{-i\vec{k}\cdot\vec{r}}\tilde{u}_{\vec{k}}^{\alpha}(\vec{r})$ is the periodic part of the Bloch function of the $\alpha$-th occupied band (in the two conventions introduced above), $\vec{k}_{\perp}\equiv(k_2,\cdots,k_d)$ is the momentum perpendicular to $k_1$, and $\text{u.c.}$ denotes the unit cell.
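In numerical evaluations, the $k_1$-integrals above are computed with the multi-band (non-Abelian) version of the discretized Berry phase, using determinants of overlap matrices between neighboring $k_1$ points --- again a standard recipe stated here for concreteness:

```latex
\mathcal{P}^{\text{Bloch}}_{\vec{k}_\perp}
  = -\lim_{N\to\infty}\frac{1}{2\pi}\,\mathrm{Im}\ln
    \prod_{n=0}^{N-1}\det M^{(n)},
\qquad
M^{(n)}_{\alpha\beta}
  \equiv \int_{\text{u.c.}} d^d\vec{r}\;
    u^{\alpha}_{\vec{k}_n}(\vec{r})^*\, u^{\beta}_{\vec{k}_{n+1}}(\vec{r}),
```

where $\vec{k}_n=(\tfrac{2\pi n}{N a_1},\vec{k}_\perp)$ and the end point is identified analogously to the single-band case. The determinant renders the product invariant under arbitrary $U(\nu)$ mixing of the occupied bands at each $k_1$ point.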
Evaluating the many-body Berry phases $\mathcal{P}$ and $\tilde{\mathcal{P}}$ in Eqs.~\eqref{manybodyP12} and~\eqref{tildeP}, which were defined for general interacting systems, for the special case of band insulators, we find \begin{eqnarray} \mathcal{P} &=&\tfrac{V}{L_1}\int\tfrac{d^{d-1}\vec{k}_{\perp}}{(2\pi)^{d-1}}\left[\tfrac{L_1-a_1}{2a_1}\nu+\mathcal{P}^{\text{Bloch}}_{\vec{k}_{\perp}}\right], \label{PBgeneral} \\ \tilde{\mathcal{P}}&=&\tfrac{V}{L_1}\int\tfrac{d^{d-1}\vec{k}_{\perp}}{(2\pi)^{d-1}}\tilde{\mathcal{P}}^{\text{Bloch}}_{\vec{k}_{\perp}},\\ \bar{\mathcal{P}}_0&=&\tfrac{V}{L_1}\int\tfrac{d^{d-1}\vec{k}_{\perp}}{(2\pi)^{d-1}}\left[\tfrac{L_1-a_1}{2a_1}\nu+\bar{\mathcal{P}}^{\text{Bloch}}_{0,\vec{k}_{\perp}}\right], \end{eqnarray} where the $\vec{k}_{\perp}$-integral is performed over $[0,\frac{2\pi}{a_2}]\times\cdots\times[0,\frac{2\pi}{a_d}]$. See Appendix~\ref{app:BI} for the derivation. These formulae may look ill-defined because of the factor $\tfrac{V}{L_1}$, but as we are only interested in the fractional part, the divergence of the integer part does not really matter. \section{Conclusions} \label{sec:conclusion} We have discussed several inequivalent Berry phases related to the polarization, and clarified their differences and mutual relations. Their values and behaviors can be quite different even for the same physical system, as is evident in Fig.~\ref{fig:Delta}(b). Even the \emph{change} of the polarization over a generic time period [Eqs.~\eqref{dp1} and \eqref{dp2}], and thus the ``polarization current'', may depend on the definition. This difference can be attributed to the fact that they probe spatial regions differently, corresponding to the different gauges (distributions of the vector potential representing an Aharonov-Bohm flux). Nevertheless, the total transported charge in a Thouless pumping process of one cyclic period is given by the same topological invariant (Chern number), independent of the Berry phase chosen for the calculation.
The bulk polarization that is standard in the literature is $\mathcal{P}$ [Eq.~\eqref{manybodyP}] in the many-body context~\cite{OrtizMartin,AligiaOrtiz} or $\mathcal{P}^{\text{Bloch}}$ [Eq.~\eqref{PB1}] for band insulators~\cite{Zak,KSVPRB1993,VanderbiltKingSmith,RestaRMP,RestaVanderbilt}, as $\mathcal{P}$ and $\mathcal{P}^{\text{Bloch}}$ take into account every point in space on the same footing. However, we emphasize that other types of Berry phases are also well-defined. The polarization current derived from these Berry phases corresponds to the current measured locally. The mutual relations between the inequivalent Berry phases are given, e.g., in Eq.~\eqref{relP}. Once a certain definition is adopted, symmetries may quantize the possible values of the polarization. In such a case, the value of the polarization may be used to distinguish different phases, as was done, for example, in Refs.~\onlinecite{Zak, NakamuraVoit, NakamuraTodo, HiranoKatsuraHatsugai1}. \begin{acknowledgments} H. W. thanks S. Murakami, K. Shiozaki, and M. Nakagawa for useful discussions on this topic. M. O. thanks R. Kobayashi, Y.~O. Nakagawa, and Y. Fukusumi for a stimulating collaboration on a related project which motivated the present study, and G.-Y. Cho, M. Nakamura and G. Ortiz for useful discussions on related problems. In particular, we thank H. Katsura for fruitful discussions and comments on the initial draft, and J. W. Rhim for pointing out relevant references. This work is supported in part by JSPS KAKENHI Grant Numbers JP17K17678 (H. W.) and JP16K05469 (M. O.). \end{acknowledgments}
\section{Introduction} \label{sec:Intro} Millimeter wave (mmWave) and massive MIMO are key enabling technologies for current and future wireless systems \cite{Rappaport2013a,HeathJr2016,Larsson2014a,Alkhateeb2014d,Boccardi2014,Jungnickel2014,Bjoernson2016a,Mumtaz2016}. This is mainly thanks to the very high data rates and multiplexing gains promised by these technologies. Employing large numbers of antennas, however, imposes critical challenges on mmWave and massive MIMO systems in supporting highly mobile users, ensuring reliability, and enabling low-complexity coordination among others. One main reason for these challenges is the large training, feedback, and coordination overhead associated with the large channel matrices. These channels, however, are intuitively some functions of the environment geometry, building materials, transmitter/receiver locations, etc. This motivated using machine/deep learning tools that leverage low-overhead features of the environment and user setups and learn how to use them to predict mmWave and massive MIMO channels/beams \cite{Alkhateeb2018,Li2018,Wang2018a,Va2017a}, enhance system reliability and proactive hand-off \cite{Alkhateeb2018a,Mismar2018}, and enable low-complexity base station coordination \cite{Alkhateeb2018}. \textbf{The Need for a Dataset:} To advance the machine learning research in mmWave and massive MIMO, it is crucial to have a sufficiently large dataset that researchers can use for (i) evaluating the performance of their machine learning algorithms, (ii) reproducing the results of the other papers, and (iii) setting \textit{benchmarks} and comparing the different algorithms based on common data. Further, we define the following two important requirements for a useful dataset in mmWave and massive MIMO applications.
\begin{itemize} \item \textbf{The dataset channels represent the environment:} Most of the machine learning applications in mmWave and massive MIMO rely on leveraging the correlation between some features of the environment setup (geometry, materials, transmitter/receiver locations, etc.) and the channels or beamforming vectors \cite{Alkhateeb2018,Li2018,Alkhateeb2018a,Wang2018a,Va2017a}. Therefore, in order to evaluate these algorithms, the dataset channels should capture this dependence on the environment. To achieve that, the channels need to be either collected via real-world measurements or constructed from accurate ray-tracing data. \item \textbf{The dataset is generic (parametrized):} Unlike machine learning research in other fields, such as computer vision and natural language processing, which is mainly focused on developing and analyzing machine learning models, a major part of the machine learning research in mmWave/massive MIMO is the pre- and post-processing of the data. Further, it is normally important in wireless communication research to study the performance of the developed solutions under various system/channel scenarios. Therefore, a fixed dataset with specific system/channel assumptions and a pre-defined set of features will highly limit the machine learning research space in mmWave/massive MIMO. This motivates the development of a \textit{generic} dataset where the researcher can adjust the key system/channel parameters and can do pre- and post-processing on the generated data. \end{itemize} While some methodologies for MIMO data generation have been proposed before for mobile applications \cite{Klautau2016,Wen2018}, there is no available dataset for mmWave/massive MIMO that satisfies the mentioned requirements, to the best of our knowledge. \textbf{The DeepMIMO Dataset:} In this work, we introduce the channels' dataset, DeepMIMO, which is designed for machine/deep learning research in mmWave and massive MIMO applications.
More specifically, using this channels' dataset, researchers can easily construct the inputs and outputs of several machine learning applications. The DeepMIMO dataset generation framework has the following important features. \begin{itemize} \item The DeepMIMO channels are constructed from accurate ray-tracing data. These data are obtained from the ray-tracing simulator, Wireless InSite, developed by Remcom \cite{Remcom}. Remcom Wireless InSite \cite{Remcom} is widely used in mmWave and massive MIMO research at both industry and academia \cite{Va2017a,Alkhateeb2018,Alkhateeb2018a,Khawaja2018}, and has been verified with real-world channel measurements \cite{Li2015a,Wu2016,Khawaja2018}. The DeepMIMO channels constructed using this ray-tracing simulation capture the dependence on the environment geometry/materials as well as the transmitter/receiver locations, which is essential for several machine learning applications in mmWave and massive MIMO systems. \item The DeepMIMO dataset is generic (parametrized). More clearly, the DeepMIMO dataset generation framework is designed to generate channel datasets based on a set of parameters that can be adjusted by the researcher. This allows tailoring the DeepMIMO dataset to fit the specific machine learning application of interest. This set of parameters controls various system and channel aspects such as the number of antennas, the number of OFDM subcarriers, and the number of channel paths. \item The DeepMIMO dataset is simple to define and to generate. More specifically, the DeepMIMO dataset is completely defined by (i) the ray-tracing scenario and (ii) the parameters set. This means that any researcher can easily define the adopted dataset and perfectly generate the same dataset defined in other papers by using the same ray-tracing scenario and parameters set. Further, using the DeepMIMO dataset generation framework to generate the desired dataset is very simple as will be shown in \sref{sec:use}.
\end{itemize} The rest of this paper is organized as follows. In \sref{sec:general}, we present the general framework of the DeepMIMO dataset generation process, highlighting the key elements in this framework. Then, a detailed description of these elements is provided in \sref{sec:Explanation}, along with an example Wireless InSite ray-tracing scenario. This example scenario has 18 base stations and more than one million users, which generates a sufficiently large dataset for several mmWave/massive MIMO machine learning applications. In \sref{sec:use}, we describe in detail how to use the DeepMIMO dataset generation code. Finally, in \sref{sec:Application}, we present an example on using the DeepMIMO dataset to construct the inputs/outputs of the machine learning model and generate the mmWave beam prediction results in \cite{Alkhateeb2018}. \textbf{Notation}: We use the following notation throughout this paper: ${\mathbf{A}}$ is a matrix, ${\mathbf{a}}$ is a vector, $a$ is a scalar, and $\mathcal{A}$ is a set. $|{\mathbf{A}}|$ is the determinant of ${\mathbf{A}}$, $\|{\mathbf{A}} \|_F$ is its Frobenius norm, whereas ${\mathbf{A}}^T$, ${\mathbf{A}}^H$, ${\mathbf{A}}^*$, ${\mathbf{A}}^{-1}$, $\pinv{{\mathbf{A}}}$ are its transpose, Hermitian (conjugate transpose), conjugate, inverse, and pseudo-inverse respectively. $[{\mathbf{A}}]_{r,:}$ and $[{\mathbf{A}}]_{:,c}$ are the $r$th row and $c$th column of the matrix ${\mathbf{A}}$, respectively. $\mathrm{diag}({\mathbf{a}})$ is a diagonal matrix with the entries of ${\mathbf{a}}$ on its diagonal. ${\mathbf{I}}$ is the identity matrix and $\mathbf{1}_{N}$ is the $N$-dimensional all-ones vector. \section{DeepMIMO Dataset: The General Framework} \label{sec:general} \begin{figure*}[t] \centerline{ \includegraphics[width=1.6\columnwidth]{FW.pdf} } \caption{The General framework for generating the DeepMIMO dataset. 
The DeepMIMO dataset is defined based on (i) the given ray tracing scenario and (ii) the set of dataset parameters $\mathcal{S}$.} \label{fig:framework} \end{figure*} In this section, we briefly explain the main motivation for having a \textit{generic} MIMO dataset for deep learning applications, and highlight the general framework of our DeepMIMO dataset. In mmWave and massive MIMO research, the main signal processing tasks, e.g., precoding, channel estimation, beam tracking, and user selection, revolve around the characteristics of the wireless channels. Some of these characteristics, such as the correlation between the user channels at different locations of the environment, depend heavily on the environment geometry and materials. This makes it hard to generate channels that capture these environment-dependent characteristics using statistical channel models. In fact, most of the proposed machine learning applications for mmWave and massive MIMO rely on these environment-dependent channel characteristics to perform their functions. Examples include predicting the beamforming and channel matrices based on the user RF signature \cite{Alkhateeb2018,Li2018}, or based on the user location \cite{Wang2018,Va2017a,Asadi2018}, and predicting the future blockage based on the sequence of previously selected beams \cite{Alkhateeb2018a}. This explains why we need ray-tracing based channel generation in mmWave and massive MIMO based machine learning applications. The channels generated using ray-tracing simulations capture the geometry-based characteristics, such as the correlation between the channels at different locations, and the dependence on the materials of the various elements of the environment, among others. With this motivation, we build a generic (parametrized) dataset for mmWave and massive MIMO channels with the goal of facilitating the deep learning research in this area and enabling results replication and algorithms comparisons.
Using our generic/parametrized dataset, researchers can tune several parameters, such as the number of antennas, the array configuration, and the number of subcarriers, to craft the dataset that fits their application. The general framework for our DeepMIMO dataset generation is illustrated in \figref{fig:framework}. Next, we highlight the main elements in this framework. \begin{itemize} \item \textbf{The ray-tracing scenario `R':} The ray-tracing scenario consists of a number of base stations (or access points) and users geographically distributed in a certain outdoor or indoor environment. Typically, in the ray-tracing scenario, the base stations and users have omni or quasi-omni antennas. The outputs of the ray-tracing simulation include the channel parameters (angles of arrival/departure, path gains, etc.) for the channels between every transmitter and receiver. In our dataset, we use the accurate ray-tracing simulator, Wireless InSite by Remcom \cite{Remcom}, to obtain the ray-tracing outputs. These outputs for some ray-tracing scenarios are available on our dataset website \cite{DeepMIMODataset}. Further, to have a large enough dataset for deep learning applications, our ray-tracing scenarios include a large number of base stations and users. An example of these scenarios is explained in detail in \sref{subsection:scenario}. \item \textbf{The dataset parameters $\mathcal{S}$:} Since different machine learning applications require different datasets, we designed DeepMIMO as a parametrized dataset. This allows the researchers to adjust a set of parameters, $\mathcal{S}$, in the dataset generation code to generate a dataset that is customized for their application.
This achieves two main objectives: (i) it provides the researchers with a wide control over the system setup and the antenna configuration, and (ii) it facilitates results reproducibility, as the researchers just need to state the parameters set, $\mathcal{S}$, and the adopted ray-tracing scenario, `R', to completely define the generated dataset. We describe these parameters in detail in \sref{subsection:parameters}. \item \textbf{The DeepMIMO dataset generation code:} Given the channel parameters generated from the ray-tracing scenario `R', and based on the selected parameters set $\mathcal{S}$, the DeepMIMO dataset generation code will construct the channel matrices for all the selected transmitter-receiver pairs. In addition to the channel matrices, the DeepMIMO dataset includes other important features, such as the user location, which can be leveraged in the machine learning modeling. The DeepMIMO dataset generation code and the structure of the generated dataset are explained in detail in Sections \ref{subsec:datasetGeneration}-\ref{subsec:datasetStructure}. \end{itemize} \noindent \textbf{It is important to note that the generated DeepMIMO dataset is completely defined by (i) the adopted ray-tracing scenario `R' and (ii) the dataset parameters set $\mathcal{S}$.} This allows the researchers to easily define their dataset, reproduce the results in other papers, and compare the performance of different algorithms using a common dataset. Next, we explain the DeepMIMO dataset framework in detail in \sref{sec:Explanation}, before showing how we can use the dataset in some mmWave deep learning applications in \sref{sec:Application}. \section{DeepMIMO Dataset: A Detailed Description} \label{sec:Explanation} In this section, we will describe in detail the different elements of the DeepMIMO dataset generation process, illustrated in \figref{fig:framework}.
As discussed in \sref{sec:general}, the DeepMIMO dataset is completely defined by the ray-tracing scenario and the set of parameters $\mathcal{S}$. Therefore, to generate the dataset, the researcher will first choose one of the ray-tracing scenarios that are available for the DeepMIMO dataset on the dataset website \cite{DeepMIMODataset}. For each scenario, we provide the channel parameters of every transmitter/receiver pair, which is the first input to the dataset generation code, as shown in \figref{fig:framework}. Then, the researcher will adjust the dataset parameters $\mathcal{S}$ to fit the desired application. Finally, the dataset generation code will construct the DeepMIMO dataset based on the channel parameters of the adopted ray-tracing scenario and system parameters. In the next few subsections, we describe these aspects in more detail, which is important for leveraging the DeepMIMO dataset and understanding its full capabilities. \begin{figure*}[p] \centering \includegraphics[width=2\columnwidth]{O1_top.pdf} \caption{A top view of the 'O1' ray-tracing scenario, showing the two streets, the buildings, the 18 base stations, and the user x-y grids. This ray-tracing scenario is generated using Remcom Wireless InSite \cite{Remcom}. } \label{fig:top} \bigskip \includegraphics[width=2\columnwidth]{O1_bird.pdf} \caption{A bird's-eye view of a section of the 'O1' ray-tracing scenario, showing the intersection of the two streets. This ray-tracing scenario is generated using Remcom Wireless InSite \cite{Remcom}. \label{fig:side} } \end{figure*} \subsection{Ray-Tracing Scenarios} \label{subsection:scenario} As described briefly in \sref{sec:general}, the ray-tracing simulations generate channel parameters that capture the dependence on the environment geometry, materials, transmitter/receiver locations, etc., which is crucial for the machine learning applications. 
In our dataset, the channel parameters are generated using the accurate ray-tracing simulator Wireless InSite by Remcom \cite{Remcom}. These channel parameters are the first inputs to the DeepMIMO dataset generation code as shown in \figref{fig:framework}. On the DeepMIMO dataset website \cite{DeepMIMODataset}, the channel parameters for some ray-tracing scenarios will be available. In this section, we explain in detail one of those scenarios. Understanding these ray-tracing scenarios is important for several reasons: (i) the dataset user needs this understanding to be able to adjust the parameters in the set $\mathcal{S}$, such as the antenna configuration and orientation, given the adopted system and machine learning models, and (ii) this understanding of the ray-tracing scenario enables the researcher to develop a good explanation of the machine learning outcomes. Now, we explain one ray-tracing scenario, which we call `O1', in detail. \textbf{The ray-tracing scenario `O1':} This is an outdoor scenario of two streets and one intersection, with the top-view shown in \figref{fig:top}. The main street (the horizontal one) is 600m long and 40m wide, and the second street (the vertical one in \figref{fig:top}) is 440m long and 40m wide. In the following bullets, we describe the key components of this ray-tracing scenario. \begin{itemize} \item \textbf{Base stations:} As shown in \figref{fig:top}, the `O1' ray-tracing scenario includes 18 base stations, BS1-BS18, distributed on both sides of the two streets. The main street has 12 BSs, 6 on each side. The separation between BS1, BS3, and BS5 (or equivalently BS2, BS4, and BS6) is constant and equals 100m (and similarly for BS7, BS9, BS11 and BS8, BS10, BS12). The second street includes 6 BSs. The separation between BS13, BS15, and BS17 (or equivalently BS14, BS16, and BS18) is 150m. The height of all the BSs is 6m.
Further, in the ray-tracing simulation, each BS has only a single half-wave dipole with the axis of the dipole antenna in the $z$-direction. In Sections \ref{subsection:parameters}-\ref{subsec:datasetGeneration}, we will show how the output of this ray-tracing simulation can be used to generate channels for larger antenna arrays at the BSs. \item \textbf{Users:} Since machine learning applications normally require large training datasets, the `O1' ray-tracing scenario is designed to include more than one million users (exactly, 1,184,923 users). The users are placed in 3 uniform x-y grids as shown in \figref{fig:top}. The first user grid is located along the main street, with a length of 550m and a width of 35m. It starts from the right in \figref{fig:top}, 15m after the street starting point, and ends on the left, 35m before the street ending point. This first grid includes 2751 rows, R1 to R2751, with each row having 181 users. The spacing between every two adjacent users in this uniform x-y grid is 20 cm. The second grid is located on the southern side of the second street, as shown in \figref{fig:top}. It includes 1101 rows, R2752 to R3852, with 181 users in every row. Similar to the first grid, the spacing between every two adjacent users in the second x-y grid is 20 cm. The third grid is located on the northern part of the second street, as shown in \figref{fig:top}. It includes 1351 rows, from R3853 to R5203, with 361 users in every row. Different from the other two grids, the spacing between every two adjacent users in the third x-y grid is 10 cm. Note that the main advantage of having some differences between the three grids is to enable testing different scenarios for the ray-tracing simulations and machine learning applications. Finally, all the users are equipped with a single dipole antenna, with the axis aligned with the z-direction. \item \textbf{Buildings:} In this ray-tracing scenario, the two streets have buildings on both sides. 
For simplicity, all the buildings are assumed to be solid, with rectangular shapes. Along the main street, all the buildings have bases of the same dimensions, 30m $\times$ 60m. In the second street, the buildings have bases of 60m $\times$ 60m. The heights of the buildings are different, and the height of every building is indicated on it in \figref{fig:top}. \item \textbf{Materials:} In the `O1' ray-tracing scenario, we emulate a $60$ GHz signal propagation setup. Therefore, this scenario adopts the ITU dry earth 60 GHz material for the two streets and ITU layered drywall 60 GHz for the buildings. These materials are available in the Wireless InSite ray-tracing simulator \cite{Remcom}. \end{itemize} \textbf{Ray-tracing outputs:} In the Wireless InSite ray-tracing simulator \cite{Remcom}, we used the X3D model that is developed by Remcom and is capable of providing a highly-accurate 3D propagation model. Further, for simplicity, we considered only the first 4 reflections. For every transmitter-receiver pair, the ray-tracing simulation shoots hundreds of rays in all directions from the transmitter and records the strongest $25$ paths among those that made their way to the receiver, where the strongest paths are those with the highest receive power. Further, for every base station-user pair, the ray-tracing simulator calculates the parameters of every channel path. More specifically, for every BS $b$ and user $u$, and for every channel path $\ell$, the ray-tracing simulation outputs (1) the azimuth and elevation angles of departure (AoDs) from the base station, $\phi^{b,u}_{\mathrm{az},\ell}, \phi^{b,u}_{\mathrm{el},\ell}$, (2) the azimuth and elevation angles of arrival (AoAs) at the user, $\theta^{b,u}_{\mathrm{az},\ell}, \theta^{b,u}_{\mathrm{el},\ell}$, (3) the path receive power, $P_\ell^{b,u}$, (4) the path phase, $\vartheta_\ell^{b,u}$, and (5) the propagation delay of the path, $\tau_\ell^{b,u}$. 
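For concreteness, the per-path bookkeeping described above can be pictured with a few lines of Python. This is only an illustrative sketch with hypothetical field names, not the actual Wireless InSite output format:

```python
# Illustrative sketch: keep the strongest ray-traced paths for one
# BS-user pair. Field names ("power", "phase", "delay") are
# hypothetical, not the actual simulator output format.

def strongest_paths(paths, max_paths=25):
    """Sort candidate paths by receive power and keep the strongest ones."""
    return sorted(paths, key=lambda p: p["power"], reverse=True)[:max_paths]

# Three candidate paths between one BS-user pair (powers in dBm).
candidates = [
    {"power": -95.0, "phase": 0.3, "delay": 1.2e-7},
    {"power": -80.0, "phase": 1.1, "delay": 0.9e-7},
    {"power": -102.0, "phase": 2.0, "delay": 1.8e-7},
]
top = strongest_paths(candidates, max_paths=2)
```

The selection by receive power mirrors the ordering convention used for the \texttt{num$\_$paths} parameter later on.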
In \sref{subsec:datasetGeneration}, we show how these parameters can be used to construct the channel matrix between base station $b$ and user $u$. In addition to the channel parameters, the ray-tracing outputs include the x-y-z location of the user, which can be leveraged as an input feature for machine/deep learning applications. \subsection{DeepMIMO Dataset Parameters} \label{subsection:parameters} In \sref{subsection:scenario}, we described the ray-tracing outputs, which are the first input to the DeepMIMO dataset generation code in \figref{fig:framework}. In order to construct the channel matrices and build the dataset, though, we still need to define the DeepMIMO dataset parameters, such as the number of antennas and the antenna spacing, which are the second input to the DeepMIMO dataset generation code. The objective of these dataset parameters, $\mathcal{S}$, is to give the researcher some flexibility in adjusting the DeepMIMO dataset to fit the desired application, which makes it a \textit{generic} dataset. In the following bullets, we list the DeepMIMO dataset parameters. \begin{itemize} \item \textbf{Active BSs} (defined by \texttt{active$\_$BS} in the MATLAB code): Here, we specify the BSs that we want to activate in the dataset, i.e., the BSs for which we want the DeepMIMO dataset generation code to generate the channels to the mobile users. Specifying the active BSs helps reduce the size of the dataset by focusing on a certain subset of the available BSs. For example, the `O1' ray-tracing scenario includes 18 BSs. If our application requires only the channels between BSs 3, 4, 5, 6 (in \figref{fig:top}) and the mobile users, we set \texttt{active$\_$BS=[3,4,5,6]}. \item \textbf{Active users} (defined by \texttt{active$\_$user$\_$first} and \texttt{active$\_$user$\_$last} in the MATLAB code): Similar to the BSs, we can activate a certain group of users for the DeepMIMO dataset generation code. 
We do that by specifying the first and last rows of the group of users. For example, we can activate the user group from row R1000 to row R1500 by setting \texttt{active$\_$user$\_$first=1000} and \texttt{active$\_$user$\_$last=1500}. \item \textbf{Number of BS antennas} (defined by \texttt{num$\_$ant$\_$x}, \texttt{num$\_$ant$\_$y}, and \texttt{num$\_$ant$\_$z} in the MATLAB code): These parameters specify the number of BS antennas in the x, y, and z axes, assuming a uniform array. Note that the axes are w.r.t. the ray-tracing axes. For example, consider BS 3 in the `O1' ray-tracing scenario in \figref{fig:top}. If this BS has a $16 \times 16$ uniform planar array (UPA) along the street, i.e., a UPA in the y-z plane, we set \texttt{num$\_$ant$\_$x=1}, \texttt{num$\_$ant$\_$y=16}, and \texttt{num$\_$ant$\_$z=16}. \item \textbf{Antenna spacing} (defined by \texttt{ant$\_$spacing} in the MATLAB code): This parameter specifies the spacing between the elements of the BS antenna array relative to the wavelength. For a half-wavelength antenna spacing, we set \texttt{ant$\_$spacing=.5}. \item \textbf{System bandwidth} (defined by \texttt{bandwidth} in the MATLAB code): This parameter defines the system bandwidth in GHz. For example, for a 500 MHz bandwidth, we set \texttt{bandwidth=.5}. \item \textbf{OFDM parameters} (defined by \texttt{num$\_$OFDM}, \texttt{OFDM$\_$sampling$\_$factor}, and \texttt{OFDM$\_$limit} in the MATLAB code): These parameters specify the number of OFDM subcarriers and at which subcarriers we want the DeepMIMO dataset generation code to calculate the channels. Calculating the channels only at a specific set of subcarriers helps reduce the dataset size. To do that, the two parameters \texttt{OFDM$\_$sampling$\_$factor} and \texttt{OFDM$\_$limit} can be leveraged. 
While the first parameter, \texttt{OFDM$\_$sampling$\_$factor}, provides the option to consider only a sampled version of the OFDM subcarriers, the second one, \texttt{OFDM$\_$limit}, specifies how many \textit{sampled} subcarriers we want to consider. For example, consider an OFDM system with $1024$ subcarriers. If we want to calculate the channels only at the first $64$ subcarriers, we set \texttt{num$\_$OFDM=1024}, \texttt{OFDM$\_$sampling$\_$factor=1}, and \texttt{OFDM$\_$limit=64}. Also, considering the same OFDM system, if we want to calculate the channels only at the first 64 sampled subcarriers with a downsampling factor of 4, i.e., at subcarriers 1, 5, 9, \ldots, 253, then we set \texttt{num$\_$OFDM=1024}, \texttt{OFDM$\_$sampling$\_$factor=4}, and \texttt{OFDM$\_$limit=64}. \item \textbf{Number of channel paths} (defined by \texttt{num$\_$paths} in the MATLAB code): As described in \sref{subsection:scenario}, the ray-tracing simulation outputs the AoA, AoD, etc. of up to $25$ paths for the channel between every BS and user. These paths are ordered according to their received power, i.e., the first path is the one with the highest received power. In some applications, we may be interested in considering only the strongest path or the first few paths. To provide this flexibility, we use the parameter \texttt{num$\_$paths}. For example, if we want to consider only the strongest 3 paths, we set \texttt{num$\_$paths=3}. \end{itemize} \subsection{DeepMIMO Dataset Construction Code} \label{subsec:datasetGeneration} Given the ray-tracing simulation outputs, described in \sref{subsection:scenario}, and the DeepMIMO dataset parameters, explained in \sref{subsection:parameters}, the DeepMIMO dataset generation code constructs the channels between the specified BSs and users. 
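To make the OFDM parameters described above concrete, the set of sampled subcarrier indices they imply can be computed as follows (a Python sketch; the actual generation code is the MATLAB package on the dataset website):

```python
# Sketch of the subcarrier selection implied by num_OFDM,
# OFDM_sampling_factor, and OFDM_limit (1-based indices, as in the text).

def sampled_subcarriers(num_ofdm, sampling_factor, limit):
    """Return the 1-based indices of the subcarriers to be generated."""
    return list(range(1, num_ofdm + 1, sampling_factor))[:limit]

# num_OFDM=1024, OFDM_sampling_factor=4, OFDM_limit=64
# selects the 64 subcarriers 1, 5, 9, ..., 253.
k_set = sampled_subcarriers(1024, 4, 64)
```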
More specifically, consider a DeepMIMO dataset parameter set $\mathcal{S}$ that specifies (i) a number of BS antennas $M=M_xM_yM_z$, with $M_x, M_y, M_z$ the numbers of BS antennas in the $x, y,$ and $z$ directions, (ii) an antenna spacing $d$, (iii) a system bandwidth $B$, (iv) a set of OFDM subcarriers $\mathcal{K}$ at which the channels need to be calculated, with $K$ the total number of subcarriers, and (v) a number of channel paths $L$. Then, the DeepMIMO dataset generation code constructs the $M \times 1$ channel vector ${\mathbf{h}}_{k}^{b,u}$ for every active BS $b$ and active user $u$, and on each subcarrier $k \in \mathcal{K}$, where ${\mathbf{h}}_{k}^{b,u}$ is expressed as \begin{equation} {\mathbf{h}}_k^{b,u}= \sum_{\ell=1}^{L} \sqrt{\frac{P_\ell^{b,u}}{K}} e^{j \left(\vartheta_\ell^{b,u}+ \frac{2 \pi k}{K} \tau_\ell^{b,u} B \right)} {\mathbf{a}}(\phi^{b,u}_{\mathrm{az},\ell}, \phi^{b,u}_{\mathrm{el},\ell}), \end{equation} where ${\mathbf{a}}(\phi^{b,u}_\mathrm{az}, \phi^{b,u}_\mathrm{el})$ is the array response vector of the BS, and is defined as \begin{equation} {\mathbf{a}}\left(\phi^{b,u}_\mathrm{az}, \phi^{b,u}_\mathrm{el}\right)={\mathbf{a}}_z\left(\phi^{b,u}_\mathrm{el}\right) \otimes {\mathbf{a}}_y\left(\phi^{b,u}_\mathrm{az}, \phi^{b,u}_\mathrm{el}\right) \otimes {\mathbf{a}}_x\left(\phi^{b,u}_\mathrm{az}, \phi^{b,u}_\mathrm{el}\right), \end{equation} with ${\mathbf{a}}_x(.), {\mathbf{a}}_y(.), {\mathbf{a}}_z(.)$ the BS array response vectors in the $x, y,$ and $z$ directions, expressed as \begin{align} {\mathbf{a}}_x\left(\phi^{b,u}_\mathrm{az}, \phi^{b,u}_\mathrm{el}\right)&=\left[1, e^{j kd \sin(\phi^{b,u}_\mathrm{el}) \cos(\phi^{b,u}_\mathrm{az})}, ...\right. \nonumber \\ &\hspace{20pt}\left. ...,e^{j kd (M_x-1)\sin(\phi^{b,u}_\mathrm{el}) \cos(\phi^{b,u}_\mathrm{az})} \right]^T,\\ {\mathbf{a}}_y\left(\phi^{b,u}_\mathrm{az}, \phi^{b,u}_\mathrm{el}\right)&=\left[1, e^{j kd \sin(\phi^{b,u}_\mathrm{el}) \sin(\phi^{b,u}_\mathrm{az})}, ...\right. \nonumber \\ &\hspace{20pt}\left. 
...,e^{j kd (M_y-1)\sin(\phi^{b,u}_\mathrm{el}) \sin(\phi^{b,u}_\mathrm{az})} \right]^T, \\ {\mathbf{a}}_z\left(\phi^{b,u}_\mathrm{el}\right)&=\left[1, e^{j kd \cos(\phi^{b,u}_\mathrm{el}) }, ...,e^{j kd (M_z-1)\cos(\phi^{b,u}_\mathrm{el}) } \right]^T. \end{align} In addition to the set of OFDM channel vectors ${\mathbf{h}}_{k}^{b,u}, k \in \mathcal{K}$, for every active BS $b$ and user $u$, the DeepMIMO dataset also outputs the location of the user ${\mathbf{p}}_u=[p_x,p_y,p_z]$ in the x-y-z space. This location information is important as it can be used, for example, as an input feature in some ML applications. \textbf{Remark} \textit{ It is important to emphasize here that, as described in this section, the constructed DeepMIMO dataset is a function of only two inputs: (i) the considered ray-tracing scenario `R', and (ii) the dataset parameter set $\mathcal{S}$. This makes it simple for researchers to describe the adopted dataset, reproduce the datasets in other papers, and compare their proposed designs and algorithms. } \subsection{Structure of the DeepMIMO Dataset}\label{subsec:datasetStructure} For every active BS $b$ and user $u$, the DeepMIMO dataset includes (i) the channel vectors ${\mathbf{h}}_{k}^{b,u}, k \in \mathcal{K}$, stored as an $M \times |\mathcal{K}|$ matrix in which the $k$th column represents the channel at the $k$th specified subcarrier, and (ii) the user location ${\mathbf{p}}_u$. When running the DeepMIMO dataset generation code, it outputs all these data as one MAT file, named DeepMIMO$\_$dataset.mat. This file includes only one cell array, called \texttt{DeepMIMO$\_$dataset}. Accessing the data (channels and location) of every base station-user pair is done as follows. \begin{itemize} \item \texttt{DeepMIMO$\_$dataset$\{\bar{b}\}$.user$\{\bar{u}\}$.channel} accesses the $M \times |\mathcal{K}|$ channel matrix between the active base station $\bar{b}$ and user $\bar{u}$. 
Note that $\bar{b}$ and $\bar{u}$ represent the $\bar{b}$th active BS and the $\bar{u}$th active user. For example, if we specified the active BSs \texttt{active$\_$BS=[3,4,5,6]} and the active user rows \texttt{active$\_$user$\_$first=1000} and \texttt{active$\_$user$\_$last=1500} in the parameter set $\mathcal{S}$, then \texttt{DeepMIMO$\_$dataset$\{1\}$.user$\{1\}$.channel} accesses the channel between the first active BS, which is BS 3, and the first active user, which is the first user in row R$1000$. \item \texttt{DeepMIMO$\_$dataset$\{\bar{b}\}$.user$\{\bar{u}\}$.loc} accesses the position vector ${\mathbf{p}}_{\bar{u}}$ of the $\bar{u}$th active user. \end{itemize} In the following section, we describe in detail how to use the DeepMIMO dataset. \section{How to Use the DeepMIMO Dataset?} \label{sec:use} Given the framework of the DeepMIMO dataset generation in \figref{fig:framework}, the researcher will need to define the ray-tracing scenario and the parameter set in order to generate the DeepMIMO dataset that is tailored for the desired application. More specifically, the steps for generating the DeepMIMO dataset can be summarized as follows. \begin{enumerate} \item From the DeepMIMO dataset website \cite{DeepMIMODataset}, download the `DeepMIMO generation code' file and expand/uncompress it. \item From the DeepMIMO dataset website \cite{DeepMIMODataset}, download the ray-tracing output files for the adopted scenario, for example the `O1' scenario, and expand them. Note that the `O1' ray-tracing scenario is described in detail in \sref{subsection:scenario}. \item Add the folder of the ray-tracing scenario, for example the `O1' folder, to the path `DeepMIMO Dataset Generation/RayTracing Scenarios/'. \item Open the file `DeepMIMO$\_$Dataset$\_$Generation.m' and adjust the DeepMIMO dataset parameters. Note that these parameters are described in detail in \sref{subsection:parameters}. 
\item From the MATLAB command window, call the function \texttt{[DeepMIMO$\_$dataset]=DeepMIMO$\_$Dataset$\_$Generator()}. This function will generate the DeepMIMO dataset given the defined ray-tracing scenario and the adopted parameter set. \item Given the generated DeepMIMO dataset, the channels and user locations can be accessed as described in \sref{subsec:datasetStructure}. \end{enumerate} In the following section, we show an example of how to use the DeepMIMO dataset in one machine learning application. \section{An Example Application: Beam Prediction} \label{sec:Application} The DeepMIMO dataset includes a large number of channel matrices that have geometric meaning, as they are generated based on a ray-tracing model. In this section, we show how this dataset can be utilized in a deep learning mmWave application. More specifically, we use the DeepMIMO dataset to evaluate the performance of the deep learning coordinated beamforming algorithm proposed in \cite{Alkhateeb2018}. In this beamforming algorithm, a machine learning model uses the uplink pilots received with omni-antennas at multiple BSs to predict the best beamforming vector at each one of the coordinating BSs. Next, we define the adopted DeepMIMO dataset before showing how it can be used in our mmWave beam prediction application. (All the codes are available at \cite{DeepMIMODataset}.) 
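Before moving to the application, a hedged sketch of the channel construction of \sref{subsec:datasetGeneration} may be helpful. The Python code below implements the per-path sum with the Kronecker-structured array response; variable names are illustrative, path powers are taken in linear scale rather than dBm, and the official MATLAB/Python generation code at \cite{DeepMIMODataset} should be considered authoritative:

```python
import numpy as np

# Hedged sketch of the channel construction: combine ray-traced path
# parameters into the channel vector on one subcarrier. Names are
# illustrative; path powers here are linear (not dBm).

def array_response_1d(M, kd, c):
    """[1, e^{j kd c}, ..., e^{j kd (M-1) c}] for one array axis."""
    return np.exp(1j * kd * np.arange(M) * c)

def channel_vector(paths, Mx, My, Mz, d, B, k, K):
    """Channel h_k for one BS-user pair on subcarrier k (0-based)."""
    kd = 2.0 * np.pi * d  # antenna spacing d given in wavelengths
    h = np.zeros(Mx * My * Mz, dtype=complex)
    for p in paths:
        az = np.radians(p["aod_az"])
        el = np.radians(p["aod_el"])
        ax = array_response_1d(Mx, kd, np.sin(el) * np.cos(az))
        ay = array_response_1d(My, kd, np.sin(el) * np.sin(az))
        az_vec = array_response_1d(Mz, kd, np.cos(el))
        a = np.kron(az_vec, np.kron(ay, ax))  # z, then y, then x
        phase = p["phase"] + 2.0 * np.pi * k / K * p["delay"] * B
        h += np.sqrt(p["power"] / K) * np.exp(1j * phase) * a
    return h

# Single boresight path and a 1x2x1 array: both entries should equal 1.
paths = [{"power": 1.0, "phase": 0.0, "delay": 0.0,
          "aod_az": 0.0, "aod_el": 90.0}]
h = channel_vector(paths, Mx=1, My=2, Mz=1, d=0.5, B=0.5e9, k=0, K=1)
```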
\begin{table}[h] \caption{The adopted DeepMIMO dataset parameters} \begin{center} \begin{tabular}{ | c | c | } \hline \textbf{DeepMIMO Dataset Parameter} & \textbf{Value} \\ \hline \hline Active BSs & 3, 4, 5, 6 \\ \hline Active users & From row R1000 to row R1300 \\ \hline Number of BS antennas & $M_x=1, M_y=32, M_z=8$ \\ \hline Antenna spacing & $d=0.5$ \\ \hline System bandwidth & $B=0.5$ GHz \\ \hline Number of OFDM subcarriers & 1024 \\ \hline OFDM sampling factor & 1 \\ \hline OFDM limit & 64 \\ \hline Number of paths & 5 \\ \hline \end{tabular} \end{center} \label{tabel:DeepMIMO_parameters} \end{table} \textbf{Dataset definition:} One main advantage of the DeepMIMO dataset is that it is completely defined by the ray-tracing scenario and the parameter set. This makes it easy for researchers to define their dataset, reproduce the dataset in other papers, and compare/benchmark their algorithms. In our simulations, we consider the DeepMIMO dataset with the `O1' ray-tracing scenario and with the parameter set in Table \ref{tabel:DeepMIMO_parameters}. \textbf{Constructing the deep learning dataset for \cite{Alkhateeb2018}:} The deep learning coordinated beamforming algorithm in \cite{Alkhateeb2018} adopts a supervised learning model to learn the mapping between the omni-received OFDM sequence at a number of BSs (in our example, 4 BSs) and the beamforming vector at every one of them. Every data point in the dataset that trains this deep learning model then consists of (i) the input, which is the omni-received OFDM sequence at the 4 BSs, and (ii) the output, which is the achievable rates of the candidate beamforming vectors. \begin{figure}[t] \centerline{ \includegraphics[width=1.1\columnwidth]{Result_Fig.pdf} } \caption{The achievable rate of the deep learning coordinated beamforming algorithm \cite{Alkhateeb2018} versus different sizes of training sets. 
This figure is generated using the DeepMIMO dataset.} \label{fig:ML_result} \end{figure} \begin{figure}[h] \centerline{ \includegraphics[width=.8\columnwidth]{ML_model.pdf} } \caption{The supervised deep learning model in \cite{Alkhateeb2018} that learns the mapping from the omni-received sequence collected from a number of BSs, $r_{k,n}^\mathrm{omni}, \forall k,n$, to the achievable rate with every candidate beamforming vector, $R^{(p)}_n, \forall p$, at one of the coordinating BSs $n$.} \label{fig:ML_Model} \end{figure} Given the DeepMIMO dataset, we can generate these inputs and outputs for every user $u$ as follows. \begin{itemize} \item For every BS $n$ (of the active BSs 3, 4, 5, 6), and for every subcarrier $k$ (of the considered set of subcarriers), we can access the channel vector ${\mathbf{h}}^{n,u}_{k}$ from the DeepMIMO dataset as described in \sref{subsec:datasetStructure}. Then, we obtain the input to the deep learning model as $r^\mathrm{omni}_{k,n}=\left[{\mathbf{h}}^{n,u}_{k}\right]_{1}$, i.e., the first element of the vector. \item For every BS $n$ (of the active BSs 3, 4, 5, 6), the achievable rate, $R_n^{(p)}$, when the candidate beamforming vector ${\mathbf{f}}_p$ is adopted is calculated as \begin{equation} R_n^{(p)}=\frac{1}{\left|\mathcal{K}\right|} \sum_{k \in \mathcal{K}} \log_2 \left(1+\mathsf{SNR} \left| {\mathbf{f}}_p^T {\mathbf{h}}_{k}^{n,u} \right|^2 \right), \end{equation} % where the channel vector $ {\mathbf{h}}_{k}^{n,u}$ is obtained from the DeepMIMO dataset. We refer the readers to \cite{Alkhateeb2018} for more details on the beamforming algorithm and the deep learning model. Further, the MATLAB and Python codes that implement the beamforming algorithm and machine learning model based on the DeepMIMO dataset are available at \cite{DeepMIMODataset}. 
\end{itemize} \textbf{Simulation results:} With the generated deep learning data points that are based on the DeepMIMO dataset, we trained the deep learning model described in \cite{Alkhateeb2018} to obtain the performance results in \figref{fig:ML_result}. We encourage researchers to reproduce these results and use them for comparisons with their proposed algorithms. The codes to generate \figref{fig:ML_result} are available on the DeepMIMO dataset website \cite{DeepMIMODataset}. \section{Acknowledgment} The author thanks Mr. Tarun Chawla and Remcom \cite{Remcom} for supporting and encouraging this work. The author also thanks Xiaofeng Li, Muhammad Alrabeiah, and Abdelrahman Taha for their valuable feedback and suggestions. \section{Conclusion} In this paper, we presented the DeepMIMO dataset, a channel dataset designed to advance machine learning research in mmWave and massive MIMO systems. The DeepMIMO dataset generation framework constructs the MIMO channels based on ray-tracing data obtained from the accurate ray-tracing simulator Remcom Wireless InSite \cite{Remcom}. The DeepMIMO channels, therefore, capture the dependence on the various elements of the environment, such as the scatterer geometry and transmitter/receiver locations, which is important for machine learning research. Further, the DeepMIMO dataset was designed to be generic, which enables the researcher to generate the dataset based on adjustable system/channel parameters. We also described an example DeepMIMO dataset based on a ray-tracing scenario of 18 base stations and more than one million users. Further, we explained how to use the DeepMIMO dataset generation code to adjust the set of parameters and generate the dataset. Finally, as an example application, we showed how the DeepMIMO dataset can be used to construct the inputs/outputs of the deep learning model of the mmWave deep learning coordinated beamforming solution in \cite{Alkhateeb2018}.
\section{Introduction} Sparse matrix-matrix multiplication is a key operation in large-scale electronic structure calculations based on for example Hartree--Fock or Kohn--Sham density functional theory~\cite{DBowler12}. Sparse matrix-matrix multiplication is used in particular in polynomial expansion~\cite{pur-pm} and minimization methods~\cite{dmm-lnv} to compute the density matrix. Such methods are used in a number of electronic structure codes such as {\sc Conquest}~\cite{conquestGillan200714}, {\sc CP2K}~\cite{cp2k-linearscaling}, {\sc Ergo}~\cite{linmemDFT}, {\sc FreeON}~\cite{FreeON}, {\sc Honpas}~\cite{honpas}, {\sc Onetep}~\cite{onetep}, and {\sc LATTE}~\cite{LATTE-jcp-2012} to achieve a computational cost that increases only linearly with system size. The matrix sparsity varies from tens to thousands of nonzeros per row depending on the underlying model and the basis set used. It is often beneficial to use a block-sparse data structure. The optimal block size depends on the model and on the ordering of the matrix rows and columns. The present work is mainly motivated by Hartree--Fock and Kohn--Sham density functional theory calculations using Gaussian basis sets, in which the matrices have up to thousands of nonzero elements per row and a priori unknown sparsity patterns~\cite{sparsity-JCC:JCC21723, linmemDFT}. Algorithms based on dense matrix-matrix multiplication are generally considered attractive because of the existence of efficient linear algebra libraries, e.g.~\cite{Goto-matrix-mul, whaley04}, and parallelization through e.g.~Cannon's algorithm or SUMMA~\cite{summa}. Parallel sparse and block-sparse matrix-matrix multiplication has received less attention and is harder to implement, particularly when the nonzero pattern is not known in advance. Nevertheless, several parallel sparse matrix-matrix multiplication implementations have been presented. 
Several implementations assume some a priori knowledge about the sparsity structure and use that knowledge to improve performance~\cite{conquest-sparsematrix,Challacombe-sparsematrix, onetep-sparsematrix,WeberEtAlMidpointMmul2015}. Here, we will focus on the general case where no a priori knowledge about the structure is assumed. Recent such implementations first employ a random permutation of the rows and columns of the matrix to destroy any structure in the sparsity pattern and \emph{decrease} data locality~\cite{BulucGilbert2012, Borstnik2014}. The goal of this maneuver is to obtain about the same density of nonzero elements everywhere in the matrix. Then, a static distribution of work and data is used in the same way as for dense matrices, but with the local block-block multiplies replaced by sparse products. This random permutation approach prevents load imbalance, but the obvious drawback is that the possibility to exploit the nonzero structure to reduce communication or make efficient use of the memory hierarchy is spoiled, see Figure~\ref{fig:random_destruction} for a trivial yet illustrative example. On the other hand, such exploitation is difficult to achieve since it requires that the mapping of data and work to physical resources is performed dynamically during the calculation, unless the nonzero structure is known beforehand \cite{communication_optimal_2}. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{A_diag_blocks} \includegraphics[width=0.3\textwidth]{A_diag_blocks_perm} \end{center} \caption{A trivial example of possible effects of random permutation on load balance with a static work and data distribution. Left: Example matrix for which we want to compute the square on 4 identical compute nodes. The best distribution of data and work is obvious and leads to perfect load balance and no communication of matrix elements. 
Right: Random permutation of the matrix columns and rows and a two-dimensional data decomposition indicated by solid black lines. Although the workload may still be roughly load balanced, communication of matrix elements is now needed. \label{fig:random_destruction}} \end{figure} We believe that the difficulties are mainly associated with the programming model used to tackle the problem. While conventional programming models like message passing protocols work well for static distribution of work and data, they are inconvenient if you want to distribute data and work dynamically. The programmer has to decide where data should be located and where every piece of work should be executed, and must see to it that data is communicated as needed. Recently, we proposed a new programming model named Chunks and Tasks, developed to work well for algorithms with dynamic work and data~\cite{chunks-and-tasks}. We present in this article a block-sparse matrix library based on a hierarchical quadtree representation, implemented using the Chunks and Tasks programming model. This new library is locality-aware in the sense that it is able to exploit a priori unknown structure in the sparsity pattern to reduce communication and thereby improve performance. The library presented here is a further developed version of the code used for test calculations in \cite{chunks-and-tasks}. This article is organized as follows: in Section~\ref{sec:model} we briefly discuss the Chunks and Tasks programming model. Our new matrix library based on Chunks and Tasks is presented in Section~\ref{sec:chtml}. In the present work the Chunks and Tasks matrix library is used together with a block-sparse leaf matrix type, described in Section~\ref{sec:leaf_matrix_types}. 
An analysis of the computational costs due to the quadtree representation is given in Section~\ref{sec:quadtree-effects}, followed by results of test calculations in Section~\ref{sec:performance} and concluding remarks in Section~\ref{sec:conclusions}. \section{Programming model}\label{sec:model} Our block-sparse matrix library has been implemented using the Chunks and Tasks programming model~\cite{chunks-and-tasks}. In Chunks and Tasks the programmer writes her program in terms of small pieces of data and work, chunks and tasks, respectively. The programmer is responsible for dividing work and data into smaller pieces but not for the mapping of work and data onto physical resources. The programmer need not worry about message passing; all communication is handled by the Chunks and Tasks library. The programmer need worry about neither race conditions nor non-deterministic behavior; determinism is achieved automatically through single assignment. The computation is driven by the registration of tasks, similarly to other task-based models. Recursive nesting of tasks is allowed, i.e.~during task execution new tasks can be registered, as in for example Cilk~\cite{cilk}, Scioto~\cite{scioto}, SuperGlue~\cite{superglue}, and XKaapi~\cite{xkaapi}. This is important for the scalability of dynamic algorithms, since otherwise only a single process can generate new tasks, or multiple processes generate predetermined (static) task graphs. A key feature of the Chunks and Tasks model is that abstractions are provided not only for work but also for data. The Chunks and Tasks library takes care of the distribution of both work and data. The user registers pieces of data called chunks to the runtime library. In return the user gets an identifier that can be used to specify dependencies later on. 
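A toy sketch may make this registration pattern concrete. In the sketch below (Python, purely illustrative; the actual library is a C++ API), the identifier is chosen by the runtime rather than by the user, and chunks are write-once:

```python
import itertools

# Toy model of the chunk registration idea: the "runtime", not the user,
# chooses the identifier, and a chunk is read-only once registered.
class ToyChunkRegistry:
    def __init__(self):
        self._ids = itertools.count()
        self._store = {}

    def register_chunk(self, data):
        cid = next(self._ids)    # identifier chosen by the runtime
        self._store[cid] = data  # single assignment: never overwritten
        return cid

    def get_chunk(self, cid):
        return self._store[cid]

registry = ToyChunkRegistry()
cid_a = registry.register_chunk([1.0, 2.0])
cid_b = registry.register_chunk([3.0])
```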
This is in a way similar to e.g.~Linda~\cite{Carriero1994633} and Concurrent Collections~\cite{CnC}, which also have a ``space'' to which you can add a piece of data and later retrieve it, possibly on another process. A key difference is that in Linda and Concurrent Collections the identifier is chosen by the application programmer, whereas in Chunks and Tasks the identifier is chosen by the runtime library. On the one hand, being able to choose identifiers makes it possible for a process or task to ask for data without any prior communication or interaction whatsoever with the process or task that registered the data. On the other hand, it makes it possible to (presumably unintentionally) introduce inconsistencies where for example several pieces of data with the same identifier exist, e.g.~on distant nodes in a cluster. Perhaps of even greater importance is that such a model makes it difficult for the runtime library to make data available efficiently. Any process may ask for any piece of data at any time, possibly without any information being available locally about the location of the piece being asked for. This stands in contrast to Chunks and Tasks, where the library for example can store location information in the chunk identifier. In this way Chunks and Tasks, by imposing appropriate restrictions, makes life easier both for the application programmer and the runtime library developer. \section{Quadtree representation of matrices in the Chunks and Tasks model}\label{sec:chtml} Hierarchical data structures based on a two-dimensional block decomposition of the matrix at each level in a hierarchy have been used both to block for the memory hierarchy in dense matrix computations and to avoid operations on zero elements (or entire submatrices that are zero) in sparse matrix computations, see for example \cite{recursive_dense2004} and \cite{quadtreeWise1984}, respectively. 
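To fix ideas, the following minimal serial Python sketch shows a quadtree matrix in which zero submatrices are represented by \texttt{None} (playing the role of NIL identifiers) and a recursive multiply that skips them. It illustrates the representation only; the library itself distributes such nodes as chunks and tasks:

```python
import numpy as np

# Minimal serial sketch of a quadtree matrix: a node is either a leaf
# (dense block) or four children; None represents a zero submatrix.
class QuadNode:
    def __init__(self, leaf=None, children=None):
        self.leaf = leaf          # numpy array at the lowest level
        self.children = children  # [[c00, c01], [c10, c11]] otherwise

def add(a, b):
    """C = A + B; zero (None) branches short-circuit."""
    if a is None:
        return b
    if b is None:
        return a
    if a.leaf is not None:
        return QuadNode(leaf=a.leaf + b.leaf)
    return QuadNode(children=[[add(a.children[i][j], b.children[i][j])
                               for j in range(2)] for i in range(2)])

def multiply(a, b):
    """C = A B on the quadtree; products with zero branches are skipped."""
    if a is None or b is None:
        return None
    if a.leaf is not None:
        return QuadNode(leaf=a.leaf @ b.leaf)
    rows = []
    for i in range(2):
        row = []
        for j in range(2):
            s = None
            for k in range(2):
                s = add(s, multiply(a.children[i][k], b.children[k][j]))
            row.append(s)
        rows.append(row)
    return QuadNode(children=rows)

# Block-diagonal example: the off-diagonal products are never formed.
A = QuadNode(children=[[QuadNode(leaf=np.array([[2.0]])), None],
                       [None, QuadNode(leaf=np.array([[3.0]]))]])
C = multiply(A, A)
```

Note how sparsity is exploited dynamically: any product or sum involving a \texttt{None} branch is skipped without ever inspecting matrix elements.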
Quadtree representations have also been advocated for simplicity and expressiveness, in particular leading to ease of programming for multiprocessing (shared memory) environments \cite{Dinh_1999, lugowski2014, m-rrs, quadtreeWise1984}, and for straightforward exploitation of symmetry \cite{m-rrs}. As will be shown in Section~\ref{sec:quadtree-effects}, the quadtree representation is in principle also appropriate for distributed representation of matrices on computer clusters. The caveat is that a distributed quadtree representation is difficult to implement in conventional programming models, especially if a priori unknown sparsity patterns are to be handled efficiently. In this section we describe how such a sparse matrix quadtree representation can be straightforwardly implemented in the Chunks and Tasks programming model~\cite{chunks-and-tasks}. In our Chunks and Tasks matrix library, matrices are represented by sparse quadtrees of chunks. At the lowest level in the hierarchy, different leaf matrix representations, for example dense or sparse, may be used. In this work we will focus on regular matrix-matrix multiplication of the form $C = AB$ and the symmetric matrix square operation $C=A^2$, where $A$, and therefore also $C$, is symmetric and only the upper triangles of $A$ and $C$ are stored. The sparse symmetric matrix square is a key operation and a major computational challenge in linear scaling electronic structure calculations. We list and describe below all chunk and task types that are needed for the above operations. \subsection{Chunk types for quadtree representation}\label{subsec:matrixchunk} \begin{itemize} \item[--]\emph{Matrix}: A basic matrix chunk type is used to represent nonzero submatrices in the quadtree representation. At each but the lowest level in the hierarchy, the matrix is divided into four submatrices represented by their chunk identifiers. At the lowest level a leaf matrix type is used for matrix representation. 
Storage and addressing of zero submatrices is avoided at all levels in the hierarchy. Zero submatrices are represented by NIL chunk identifiers. The matrix dimension is also stored along with the maximum allowed dimension for leaf matrices. This basic chunk type is the natural Chunks and Tasks implementation of a quadtree matrix representation as defined by Wise and Franco~\cite{WiseAndFranco1990}. When setting up the quadtree structure, the matrix is split so that a predetermined uniform blocksize at each level in the hierarchy is achieved as far as possible. A submatrix in the quadtree represented by a matrix chunk does not contain any global information such as the global matrix dimension or its location in the entire matrix, i.e.~row and column offsets. \item[--]\emph{Matrix parameters}: A chunk type for matrix parameters is used to convey information needed in the construction of matrix chunks. The chunk includes information about the matrix dimension and the leaf matrix dimension, and, whenever needed, information about the location of the matrix (its rowwise and columnwise offsets from the upper left corner) in the global matrix. It is also used to store information needed by the leaf matrix type. \end{itemize} \subsection{Task types for regular matrix-matrix multiplication} In all task type implementations, sparsity is dynamically exploited at all levels in the hierarchy by skipping operations on zero submatrices, which are represented by NIL chunk identifiers. \begin{itemize} \item[--]\emph{$C = AB$, $C = A^TB$, $C = AB^T$, and $C = A^TB^T$}: The task types for regular and transposed matrix-matrix multiplication take two matrix chunks, described above, and return the product. If the input matrices are both at the lowest level in the hierarchy, the corresponding leaf matrix multiplication is invoked. Otherwise, matrix multiplication and matrix addition tasks for child submatrix multiplication and addition are registered for execution.
The results are collected into the result matrix with a task for creation of a matrix chunk from four (submatrix) chunk identifiers. \item[--]\emph{$C = A+B$}: The task type for matrix addition takes two matrix chunks and returns their sum. If the input matrices are at the lowest level, the addition in the leaf matrix library is performed. Otherwise, tasks for child submatrix addition are registered for execution, and the results are collected with a task for creation of a matrix chunk from child submatrix identifiers. \item[--]\emph{Assignment from chunk identifiers}: A task type for creation of a matrix chunk from four submatrix chunk identifiers is needed since chunks are read-only after the point of registration. Since the submatrix chunk identifiers are included in the matrix chunk, it is not possible to construct the matrix chunk before the construction of the submatrices. Therefore, this task type is used whenever a matrix that depends on the results of other tasks needs to be constructed. This task type also takes a matrix parameters chunk as input. \end{itemize} \subsection{Additional task types for symmetric matrix square}\label{subsec:sysq-task-types} \begin{itemize} \item[--]\emph{$C = A^2$ where $A$ is symmetric}: A symmetric matrix square task squares a symmetric matrix in upper triangular storage. At the lowest level the corresponding leaf matrix symmetric matrix square operation is executed. At higher levels, the symmetric matrix square task registers symmetric rank-k, symmetric square, symmetric multiply, and matrix addition tasks to compute the submatrices in the product matrix. The results are collected into the result matrix chunk with a task for creation of a matrix chunk from submatrix identifiers. In general, a symmetric matrix square task directly or indirectly makes use of all task types in this section except $C = A^TB^T$. Therefore, a benchmark of the symmetric matrix square operation puts nearly all task types presented here to the test.
\item[--]\emph{$C = AB$, where $A$ or $B$ is symmetric}: Symmetric matrix multiply is the task type for multiplication of two matrices where either the first or the second multiplicand is a symmetric matrix in upper triangular storage. At the lowest level the corresponding multiplication operation for symmetric matrix multiply is executed. At higher levels, tasks for regular matrix multiplication, symmetric matrix multiply, and matrix addition are registered, and the results are collected into the result matrix chunk with a task for matrix creation from submatrix chunk identifiers. \item[--]\emph{$C = AA^T$ and $C = A^TA$}: With the symmetric rank-k task, a symmetric matrix in upper triangular storage is constructed from the product of a general matrix and its transpose. At the lowest level the symmetric rank-k operation of the leaf matrix library is used. At higher levels, symmetric rank-k, matrix multiplication, and matrix addition tasks are registered, and a task for creation from submatrix identifiers is used to collect the results into the result matrix chunk, as for the other task types. \end{itemize} \section{Leaf matrix types}\label{sec:leaf_matrix_types} As discussed above, different leaf matrix representations may be used at the lowest level in the quadtree representation. A leaf matrix type used together with our chunk quadtree representation has to implement some basic functionality such as serialization routines. The leaf matrix type also has to implement the functionality needed by the task types it is used with. When implementing a leaf matrix type, one can assume that the matrix data fits in the memory of a single compute node. All functionality in the class has to be thread-safe, since the Chunks and Tasks library must be able to execute several leaf tasks and call the serialization routines simultaneously. \subsection{Block-sparse leaf matrix type} In the present work we are using a block-sparse leaf matrix type.
The block-sparse matrix class uses a uniform blocksize configurable via the matrix parameters chunk type as discussed in Section~\ref{subsec:matrixchunk}. Submatrices are kept in a simple two-dimensional array where only nonzero submatrices are allocated. The leaf matrix library makes use of the Basic Linear Algebra Subprograms (BLAS) \cite{blas-level3} for submatrix-submatrix multiplications on CPUs and the NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) \cite{cublas} for submatrix-submatrix multiplications on Graphics Processing Units (GPUs). An advantage of using the BLAS and cuBLAS library interfaces is that we can take advantage of optimized BLAS and cuBLAS library implementations. We have for example observed substantial performance improvements of the cuBLAS library when going from Cuda~5.0 to Cuda~6.5 (see the caption of Table~\ref{tab:performance_batched_gemm}). Our block-sparse leaf matrix type targets problems where a block size around 16--64 is appropriate. The regular gemm operation in cuBLAS is inefficient for such small matrix dimensions. Therefore, we are instead making use of the batched gemm API in cuBLAS. The routine executes a batch of small matrix-matrix multiplications. The operations in a batch should be independent in the sense that none of the multiplications are allowed to write to the same product matrix. The block-sparse multiplication can be expressed as a sum of outer products, see Fig.~\ref{fig:block-sparse-mmul}. Each outer product is a batch of small matrix-matrix multiplications and all multiplies within a batch are independent. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{block-sparse-mmul} \end{center} \caption{Illustration of a sparse matrix product as a sum of sparse outer products. All multiplies within a batch are independent, making it possible to use the batched gemm API in cuBLAS. \label{fig:block-sparse-mmul}} \end{figure} We would like to make use of both CPUs and GPUs, if any are available.
If there is a GPU available, the multiply will be processed by the GPU. Otherwise, the multiply will be processed by a CPU core. Since there will be other threads that execute similar tasks, a GPU may become available during the calculation. It is then generally beneficial if the remaining work can be offloaded to the GPU. To achieve this we are using Algorithm~\ref{alg:process_batches}. Using this algorithm, load balancing between the CPUs and GPUs is achieved when several threads executing leaf matrix multiplies are running on the host. An alternative could be to use a Chunks and Tasks library that lets idle host threads re-execute already running tasks whenever there is no more work, as in \cite{beri2014scheduling}. The application programmer would then not have to worry about feeding smaller portions to the CPU, and such an approach could also help in case of various failures. On the other hand, some tasks would be executed more than once. \begin{algorithm} \begin{algorithmic}[1] \State Get list of batches (CPU) \While{not done} \If{free GPU slot} \State Process remaining batches on GPU \Else \State Process 1 batch on CPU \EndIf \EndWhile \end{algorithmic} \caption{Algorithm for load balanced processing of batch lists. \label{alg:process_batches}} \end{algorithm} \subsection{Device manager} The GPUs on each node are managed by a device manager. When a thread wants to offload work to one of the GPUs, it requests a slot from the device manager. In the present implementation there are two slots per device (GPU) and a bounded priority queue with default length equal to the number of devices. Each request comes with a priority, which is a measure of the amount of work, e.g.~number of floating point operations, that the thread wants to offload to the device. If there is a free slot, it will be granted immediately.
Otherwise, the priority of the job will be compared to the priorities in the queue, and the job will either be added to the queue or immediately be rejected. Note that being in the queue does not imply that a slot will eventually be granted; requests with higher priorities may arrive later. If the job is rejected, the thread starts to execute the job on the CPU, see Algorithm~\ref{alg:process_batches}. If a slot is granted, the thread gets access to data buffers on the host (pinned memory) and the device associated with the slot. Only one host thread per GPU is allowed to have computations running on the GPU at a time. Since there are two slots per GPU, one thread can transfer data to/from the GPU while another thread is carrying out computations. For further clarity, when the host thread checks for a free GPU on line 3 in Algorithm~\ref{alg:process_batches}, it may happen that the thread has to wait in the bounded priority queue. When the thread leaves the queue, it will either have been granted a slot or have been kicked out of the queue by higher-priority jobs. The device manager helps to overlap computation on the GPU not only with data transfers to/from the device memory but also with the work on the CPU needed to get the list of batches. Note that the Chunks and Tasks C++ interface presented in \cite{chunks-and-tasks} does not include support for data transfers between the host and GPU memory, nor does it assist in scheduling of tasks on GPUs, as does for example StarPU \cite{starpu}. The Chunks and Tasks library is unaware of any devices that may be installed on the compute nodes. The code for deciding when to offload work to the GPU, as well as data transfers to/from GPU memory, is part of the task execution. This means that data that is used by several tasks running on the same GPU will be transferred to the device memory once for each task.
However, as will be seen in Section~\ref{sec:gpu-calculations}, the device manager design described above achieves both hiding of data transfer costs and sharing of the workload between GPUs and CPUs. \section{Computational costs associated with the quadtree representation}\label{sec:quadtree-effects} In this section we first consider the number of tasks for different sparsity patterns, and then use those results to get theoretical estimates of computation and communication costs. \subsection{Total number of tasks} We will here consider the total number of matrix-matrix multiplication tasks for different sparsity patterns. Tasks executed at higher levels can be seen as administration work required to determine which low-level tasks are needed. A key issue is how much extra administration work is generated due to the quadtree representation. In this section we consider a quadtree representation with blocksize 1 at the lowest level. In practical calculations a larger blocksize will typically be used for performance reasons; we use blocksize 1 here in order to more clearly see the effects of the quadtree structure. The use of a larger blocksize will correspond to merging several of the lowest levels, leaving it to the leaf matrix library to handle any sparsity there. We first consider a case with little data locality, a random sparsity pattern where the nonzero matrix elements are uniformly randomly distributed. The probability $\delta$ of finding a nonzero element at a given position is the same everywhere in the matrix and uncorrelated with the positions of other nonzero matrix elements. Let the levels in the hierarchy be labeled such that level $l = 0$ is the highest level (the root of the tree) and level $l = L$ is the lowest (leaf) level. Let $N$ be the matrix dimension and $N_l = 2^l$ the matrix dimension in terms of submatrices at level $l$.
If we denote the probability of a submatrix at level $l$ being nonzero as $\delta_l$, the expected number of matrix-matrix multiplication tasks at level $l$ is \begin{equation}\label{eq:no_of_mmul_tasks_at_level_random} C_l^{\mathrm{random}} = N_l^3 \delta_l^2 = 2^{3l} ( 1 - (1-\delta)^{n_l} )^2 \end{equation} where $n_l = 2^{2(L-l)}$ is the total number of elements in each submatrix at level $l$. The relationship~(\ref{eq:no_of_mmul_tasks_at_level_random}) is illustrated in Figure~\ref{fig:ntasks_rand_and_banded}. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{ntasks_rand_and_banded_plot} \end{center} \caption{Number of matrix-matrix multiplication tasks at different levels in the hierarchy for random and banded matrices with matrix dimension $2^L$ with $L=20$ and $L=30$. The number of nonzero elements per row is 65, corresponding to $k=5$ in the banded case. The dashed lines indicate the upper bounds given by \eqref{eq:bound_random_A} and \eqref{eq:bound_random_B}. \label{fig:ntasks_rand_and_banded}} \end{figure} Since $\delta_l \leq 1$, it follows that \begin{equation} \label{eq:bound_random_A} C_l^{\mathrm{random}} \leq N_l^3 = 8^l \quad \textrm{for all } l. \end{equation} Furthermore, the probability $\delta_l$ of a submatrix being nonzero satisfies the relation $\delta_l = 1-(1-\delta_{l+1})^4 = 4\delta_{l+1}-6\delta_{l+1}^2+4\delta_{l+1}^3-\delta_{l+1}^4 \leq 4\delta_{l+1}$. Therefore, we also have that \begin{equation} \label{eq:bound_random_B} C_l^{\mathrm{random}} \leq N_l^3(4^{L-l}\delta)^2 = \frac{16^L\delta^2}{2^l} \quad \textrm{for all } l. \end{equation} Although both inequalities \eqref{eq:bound_random_A} and \eqref{eq:bound_random_B} are valid for all $l$, \eqref{eq:bound_random_A} is tight for low levels while \eqref{eq:bound_random_B} is tight for high levels, as seen in Figure~\ref{fig:ntasks_rand_and_banded}. 
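The expected counts in \eqref{eq:no_of_mmul_tasks_at_level_random} and the two bounds \eqref{eq:bound_random_A} and \eqref{eq:bound_random_B} can be checked numerically; a small Python sketch, with $L$ and $\delta$ chosen arbitrarily for illustration:

```python
# Numerical check of the expected task counts (random sparsity) against
# the two upper bounds; L and delta are arbitrary illustrative choices.
L, delta = 10, 1e-3
N = 2 ** L                                   # matrix dimension 1024

counts = []
for l in range(L + 1):
    n_l = 2 ** (2 * (L - l))                 # elements per submatrix
    delta_l = 1.0 - (1.0 - delta) ** n_l     # P(submatrix nonzero)
    C_l = 8 ** l * delta_l ** 2              # expected mmul tasks at level l
    assert C_l <= 8 ** l                     # bound (A): C_l <= N_l^3
    assert C_l <= 16 ** L * delta ** 2 / 2 ** l + 1e-9   # bound (B)
    counts.append(C_l)

# The total stays below (3 + 1/7) (delta N^2)^(3/2):
assert sum(counts) < (3 + 1 / 7) * (delta * N ** 2) ** 1.5
```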
Let $x = \log_2(N) + \frac{\log_2(\delta)}{2}$ be the point where the two bounds above intersect, given by solving $4^{L-x}\delta = 1$. Then, $8^x = (\delta N^2)^\frac{3}{2}$, $8^{\lfloor x \rfloor} \leq 8^x$, and $\frac{16^L\delta^2}{2^{\lfloor x \rfloor+1}} < \frac{16^L\delta^2}{2^{x}} = 8^x$. Therefore, assuming $\delta \geq 1/N^2$, so that $x\geq 0$, the expected total number of tasks satisfies \begin{align} \sum_{l = 0}^L C_l^{\mathrm{random}} & \leq \sum_{l = 0}^{\lfloor x \rfloor} 8^l + \sum_{l = {\lfloor x \rfloor}+1}^L \frac{16^L\delta^2}{2^l} \\ & = 8^{\lfloor x \rfloor} \sum_{l = 0}^{\lfloor x \rfloor} \frac{1}{8^l} + \frac{16^L\delta^2}{2^{\lfloor x \rfloor+1}} \sum_{l = 0}^{L-\lfloor x \rfloor-1} \frac{1}{2^l} \\ & < 8^x \left( \frac{1}{1-1/8} + \frac{1}{1-1/2} \right) \\ & = (3\tfrac{1}{7})(\delta N^2)^{3/2}. \end{align} We note that even though the number of tasks at leaf level is $\mathcal{O}(N^3\delta^2)$, the total number of tasks is $\mathcal{O}(N^3\delta^{3/2})$ due to excessive administration work at higher levels. As a simple case with data locality, we consider banded matrices with bandwidth $b = 2d + 1$, where for simplicity we assume that $d = 2^k$ for some $k \geq 0$. In this case, the number of matrix-matrix multiplication tasks at level $l$ is bounded by \begin{equation}\label{eq:no_of_mmul_tasks_at_level_banded} C_l^{\mathrm{banded}} < N_l b_l^2 = 2^l (2 d_l + 1)^2 \end{equation} where \begin{equation} d_l=\begin{cases} 1 \quad \mathrm{for} \quad l < L-k, \\ 2^{l-(L-k)} \quad \mathrm{for} \quad l \geq L-k. \end{cases} \end{equation} As seen in Figure~\ref{fig:ntasks_rand_and_banded}, the case with data locality gives a very different behavior of the number of tasks on each level; most of the work is concentrated at the lowest levels in the hierarchy.
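The per-level bound \eqref{eq:no_of_mmul_tasks_at_level_banded} can be verified against exact task counts obtained by coarsening a banded sparsity pattern level by level; a Python sketch, with $L$ and $k$ chosen arbitrarily for illustration:

```python
# Exact per-level multiplication task counts for a banded matrix,
# checked against the bound C_l < 2^l (2 d_l + 1)^2.
# L and k are arbitrary illustrative choices (N = 256, d = 4).
L, k = 8, 2
N, d = 2 ** L, 2 ** k

# Leaf-level boolean sparsity pattern: bandwidth b = 2d + 1.
A = [[abs(i - j) <= d for j in range(N)] for i in range(N)]

level = L
while True:
    n = len(A)
    # Tasks for C = A*A at this level: one per triple (i, m, j) with
    # A[i][m] and A[m][j] nonzero, i.e. sum over m of colcount * rowcount.
    tasks = sum(sum(row[m] for row in A) * sum(A[m]) for m in range(n))
    d_l = 1 if level < L - k else 2 ** (level - (L - k))
    assert tasks < 2 ** level * (2 * d_l + 1) ** 2
    if n == 1:
        break
    # Coarsen: a block one level up is nonzero if any of its 4 children is.
    A = [[A[2 * i][2 * j] or A[2 * i][2 * j + 1]
          or A[2 * i + 1][2 * j] or A[2 * i + 1][2 * j + 1]
          for j in range(n // 2)] for i in range(n // 2)]
    level -= 1
```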
The total number of tasks is bounded by \begin{align} \sum_{l=0}^L C_l^{\mathrm{banded}} & < \sum_{l=0}^{L-k-1} 2^l 3^2 + \sum_{l=L-k}^{L} 2^l(2 \cdot 2^{l-(L-k)}+1)^2 \\ & < (4\tfrac{4}{7}d^2+5\tfrac{1}{3}d+2+\frac{9}{d})N. \end{align} We note that the total number of tasks is proportional to the number of tasks at the lowest level, i.e. no excessive administration work is going on at higher levels. As an example of sparsity structures appearing in physical applications, we consider overlap matrices for systems of evenly distributed particles in $D \geq 1$ spatial dimensions with one spherically symmetric basis function per particle, ordered using a recursive divide-space procedure. A matrix element $A_{ij}$ is nonzero if the distance between particles $i$ and $j$ is smaller than some radius $R$. Note that this is a kind of sparsity structure found in many applications in physics and chemistry, where each matrix element is often related to a pair of particles or other objects in a physical system; sparsity arises from the fact that only matrix elements that correspond to objects that are sufficiently close in space are nonzero. In $D$ spatial dimensions with the chosen ordering of basis functions, the $N_l$ blocks at level $l$ can be seen as corresponding to a set of $N_l$ spatial boxes such that all basis functions (or particles) in a given block are contained in the corresponding box. The number of multiplication tasks at a given level can be estimated by \begin{equation}\label{eq:no_of_mmul_tasks_at_level_overlap} C_l^{\mathrm{overlap}} < N_l M_l^2 = 2^l M_l^2 \end{equation} where $M_l$ is the number of spatial boxes that can be reached by a sphere of radius $R$. For high levels where the width of spatial boxes is larger than $R$, $M_l$ is determined by the number of neighboring boxes, $M_l = 3^D$, independently of $l$. 
For lower levels $M_l$ is proportional to the volume of a $D$-dimensional sphere of radius $\frac{R}{h_l}$ where the width of boxes $h_l \propto 2^{\frac{L-l}{D}}$, giving $M_l \propto R^D 2^{l-L}$. Therefore, analogously to the banded matrix case, for high levels $C_l^{\mathrm{overlap}} \propto 3^{2D} 2^l$ and for lower levels $C_l^{\mathrm{overlap}} \propto R^{2D} 2^{3l-2L}$. We note also that $C_l^{\mathrm{overlap}} \geq 2 C_{l-1}^{\mathrm{overlap}}$ for all $l \geq 1$. Therefore, as for the banded matrix case, the total number of tasks is proportional to the number of tasks at the lowest level. Numerical experiments for $D = 1, 2, 3$ are shown in the left panel of Figure~\ref{fig:ntasks_1d_2d_3d_and_rmat}. The test matrices were created using the {\sc Ergo} program \cite{linmemDFT} to compute overlap matrices for artificially generated 1d, 2d, and 3d molecules with one basis function per atom from the standard Gaussian basis set STO-3G, applying the default recursive divide-space procedure to order the atoms. The molecules were generated by placing hydrogen atoms on a $D$-dimensional grid with separation 2~{\AA} and a uniform random displacement of up to $\pm 1$~{\AA} in each coordinate direction. The matrix size was $2^{16} = 65536$ for $D = 1, 2$ and $40^3 = 64000$ for $D = 3$. Blocksize 1 was used for the 1d case while the 2d and 3d cases used blocksize 2 and 4, reducing the number of hierarchy levels for those cases by 1 and 2, respectively. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{ntasks_1d_2d_3d_plot} \includegraphics[width=0.48\textwidth]{ntasks_rmat} \end{center} \caption{Number of matrix-matrix multiplication tasks for matrices with different sparsity structures. Left: overlap matrices for artificially constructed 1d, 2d, and 3d molecules. Right: R-MAT graph matrices with parameters corresponding to different degrees of data locality. See the text for details. 
\label{fig:ntasks_1d_2d_3d_and_rmat}} \end{figure} As an example where different degrees of data locality can be easily investigated, we consider multiplication of graph matrices constructed using the R-MAT model~\cite{R-MAT}. We choose the R-MAT parameters such that $b=c=d=\frac{1-a}{3}$ and perform tests for different values of $a$ in the range $0.25 \leq a < 1$. Setting $a = 0.25$ essentially corresponds to the random case described above, while increasing $a$ corresponds to increasing the data locality. We use matrix dimension $8192$ and a number of graph edges corresponding to 5 nonzero elements per row, although for large $a$ values the matrices become more sparse due to multiple graph edges between the same nodes. The right panel of Figure~\ref{fig:ntasks_1d_2d_3d_and_rmat} shows the resulting number of matrix-matrix multiplication tasks for a set of different $a$ values. We note that most of the work is pushed towards lower levels as the degree of data locality increases. \subsection{Computation and communication costs} \label{subsec:CompAndCommCosts} We will here assume that the tasks are as far as possible evenly distributed over the computational nodes, i.e. that the total execution time is given by $\mathcal{O}(T_1/p + T_\infty)$ where $T_1$ and $T_\infty$ are the serial and critical path execution times, respectively, and $p$ is the number of processes. Such load balancing can for example be achieved by work stealing \cite{BlumofeAndLeiserson1999}. We take $T_1$ to be proportional to the total number of tasks and $T_\infty = \mathcal{O}(\log(N))$, given by the depth of the task tree. Thus, for the random case we have that the execution time is \begin{equation} \mathcal{O}\left(\frac{(\delta N^2)^{3/2}}{p} + \log(N)\right) \end{equation} and for the banded case we have that the execution time is \begin{equation} \mathcal{O}\left(\frac{d^2N}{p} + \log(N)\right).
\end{equation} We are interested in the amount of data that needs to be communicated by each process in the weak and strong scaling limits. A weak scaling test can be constructed by keeping the number of nonzero matrix elements per row fixed and increasing the matrix dimension together with the number of processes. In the random case, this means that $\delta \propto \frac{1}{N}$ and the number of leaf level tasks is $\mathcal{O}(N)$, but the total number of tasks is $\mathcal{O}(N\sqrt{N})$. Assuming that all data for each task needs to be communicated, this means that each process needs to receive data scaling as $\mathcal{O}(\sqrt{p})$ with the number of processes. In the banded and overlap cases, both the leaf level and the total number of tasks are $\mathcal{O}(N)$ and the communication per process is $\mathcal{O}(1)$. The above results for the quadtree representation can be compared to the approach where a random permutation is employed to destroy data locality followed by application of the Sparse SUMMA algorithm~\cite{SparseSUMMA2008}. Assuming that the random permutation succeeds in evenly distributing the nonzero matrix elements, the number of matrix elements that each process needs to fetch from other processes becomes (see e.g.~equation (3.1) in \cite{BulucGilbert2012}) \begin{equation} \label{eq:sqrtp_equation} \frac{2mN}{\sqrt{p}} \end{equation} where $m$ is the number of nonzeros per row. Similarly to the above, a weak scaling test can be constructed by keeping $m$ fixed and letting $N \propto p$, leading to each process receiving data scaling as $\mathcal{O}(\sqrt{p})$ with the number of processes. The weak and strong scaling results are summarized in Table~\ref{tbl:weak_and_strong_scaling}.
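The weak-scaling behavior implied by \eqref{eq:sqrtp_equation} can be illustrated in a few lines of Python (the values of $m$ and the per-process matrix dimension are arbitrary):

```python
# Weak scaling of the Sparse SUMMA communication volume per process,
# 2*m*N/sqrt(p), with N growing proportionally to p (illustrative values).
from math import sqrt

m, N0 = 5, 10000                 # nonzeros per row, base dimension
comm = {p: 2 * m * (N0 * p) / sqrt(p) for p in (1, 4, 16, 64)}

# With N proportional to p, the volume per process grows as sqrt(p):
for p in (4, 16, 64):
    assert abs(comm[p] / comm[1] - sqrt(p)) < 1e-9
```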
\begin{table} \begin{center} \begin{tabular}{lcc} \hline & Weak & Strong \\ \hline \hline Quadtree - random & $\mathcal{O}(\sqrt{p})$ & $\mathcal{O}(1/p)$ \\ Quadtree - banded & $\mathcal{O}(1)$ & $\mathcal{O}(1/p)$ \\ Quadtree - overlap & $\mathcal{O}(1)$ & $\mathcal{O}(1/p)$ \\ \hline SpSUMMA & $\mathcal{O}(\sqrt{p})$ & $\mathcal{O}(1/\sqrt{p})$\\ \hline \end{tabular} \end{center} \caption{Communication costs. Scaling of the amount of data received by each process with the number of processes $p$ in the weak and strong scaling limits for matrices with different sparsity patterns. \label{tbl:weak_and_strong_scaling}} \end{table} \section{Performance evaluation}\label{sec:performance} In this section we will examine the performance of our block-sparse matrix-matrix multiplication when linked to the publicly available Chunks and Tasks library implementation CHT-MPI~\cite{cht-mpi, chunks-and-tasks}, using the block-sparse leaf matrix library described in Section~\ref{sec:leaf_matrix_types}. The Chunks and Tasks library implementation CHT-MPI implements work stealing for task scheduling and caching of recently used chunks, see~\cite{chunks-and-tasks} for more information about CHT-MPI. \subsection{Calculations on cluster of GPU-equipped nodes}\label{sec:gpu-calculations} We will first present calculations performed on the Erik cluster at the Lunarc computer center, Lund University, using CHT-MPI~1.1 compiled with Open~MPI~1.6.5 and gcc~4.8.1, Cuda 6.5 and the Intel Math Kernel Library~11.1. The Erik cluster consists of 24 nodes each with dual 64-bit, 8-core Intel Xeon E5-2650 2.00 GHz processors. The nodes are interconnected with FDR InfiniBand and equipped with Nvidia Tesla K20m GPU cards: 16 nodes with 2 cards, 7 nodes with 4 cards, and 1 node with 8 cards. 
\begin{table} \begin{center} \begin{tabular}{l|cccccc} Matrix size & 16 & 32 & 48 & 64 & 80 & 96 \\ \hline Gflop/s (single) & 243.1 & 392.3 & 276.6 & 558.4 & 401.9 & 628.9 \\ Gflop/s (double) & 147.7 & 210.5 & 231.3 & 244.0 & 263.8 & 270.5 \end{tabular} \end{center} \caption{Practical peak performance figures for cuBLAS batched matrix-matrix multiplication in Cuda 6.5 on Nvidia K20. Computed from batches with 64000 matrix-matrix multiplications, $C_i = \beta C_i + \alpha A B, \ i = 1,2,\dots, 64000$ with $A$, $B$, and $C_i$ being $b \times b$ matrices with $b = 16,32,48,64,80,96$. The number of floating point operations is counted as $64000 \times 2b^3$. We note that the figures are up to 40\% larger than what we obtained with the benchmark program provided by Nvidia. This is mainly due to the reuse of the $A$ and $B$ matrices for all 64000 multiplies in the present benchmark. Also, in the present benchmark timers on the GPU were used instead of timers on the CPU combined with synchronization. This results in larger Gflop/s values. Furthermore, switching from Cuda 5.0 to Cuda 6.5 gave performance improvements ranging from 20~\% for the ``double precision, block size 16'' case to 310~\% for the ``single precision, block size 96'' case, for calculations that were otherwise identical. \label{tab:performance_batched_gemm}} \end{table} Our first benchmark measures only the performance of the block-sparse matrix-matrix multiplication used for the leaf multiplications in the Chunks and Tasks matrix library. Thus, Chunks and Tasks is not involved in this benchmark. We perform multiplications with matrix dimension $4096 \times 4096$ with varying degree of matrix sparsity. The nonzero submatrix blocks are randomly uniformly distributed over the matrix to get a predetermined fill factor which is the fraction of nonzero matrix elements. Results for double and single precision are shown in Figures~\ref{fig:leafmat_bench_double} and~\ref{fig:leafmat_bench_single}, respectively. 
Practical peak performance values (dashed lines) were calculated with a separate benchmark program measuring the performance of the cuBLAS batched matrix-matrix multiplication, see Table~\ref{tab:performance_batched_gemm}. We do not quite reach those peak figures. The main reason is that the benchmark of the leaf matrix library includes data transfers to/from the GPU, which are not included in the peak performance figures. The work needed to prepare the list of batches is also included in the measured wall time. However, when the leaf matrix library is used within the Chunks and Tasks matrix library, there will be several threads that need to execute leaf matrix multiplications. This means that both communication to/from the GPU and preparation of batch lists on the CPU can then be overlapped with computation on the GPU, as described in Section~\ref{sec:leaf_matrix_types}. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{leafmat_bench_flops_4096_0_cpu} \includegraphics[width=0.48\textwidth]{leafmat_bench_flops_4096_0_gpu} \end{center} \caption{Results of leaf block-sparse matrix library matrix-matrix multiplication test runs with double precision and matrix dimension $4096 \times 4096$, varying sparsity (fill factor), and blocksizes 16, 32, and 64 on the Erik cluster. The nonzero submatrices are randomly uniformly distributed over the matrix. Left: running on one of the CPU cores. Right: running on one of the CPU cores but processing the list of batches on one of the GPUs. The dashed lines are practical peak performance figures computed from batches with 64000 $b\times b$ multiplies with $b = 16,32,64$, not including any data transfers, see Table~\ref{tab:performance_batched_gemm}.
\label{fig:leafmat_bench_double}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{leafmat_bench_flops_4096_1_cpu} \includegraphics[width=0.48\textwidth]{leafmat_bench_flops_4096_1_gpu} \end{center} \caption{Test runs for single precision corresponding to the double precision results in Figure~\ref{fig:leafmat_bench_double}. See that figure caption for more information. \label{fig:leafmat_bench_single}} \end{figure} This brings us to the next two benchmark figures, where the Chunks and Tasks library is used but only on a single computational node, see Figures~\ref{fig:single_node_bench_double} and~\ref{fig:single_node_bench_single}. The computational node is equipped with 16 CPU cores and 2 GPUs. In the device manager, we use a bounded priority queue of length 2 and two slots per GPU. This means that with up to 6 threads, no processing of batch lists will ever occur on the CPU cores. When 2 threads are executing tasks, there is one GPU dedicated to each thread, similarly to the previous benchmark figures. Therefore, the performance improvement when going from 2 to 6 threads comes solely from overlapping data transfers to/from the GPUs and preparation of batch lists on the CPU with the processing of batch lists on the GPUs. When increasing the number of threads further, batch lists will also be processed on the CPU cores, according to Algorithm~\ref{alg:process_batches}. The figures show that we are able to take advantage of both the CPU cores and the GPUs in a load balanced manner. We also note that no parametric models for task execution times on different hardware were required; the load balancing was achieved automatically without any information about the computational power of the devices, other than the assumption that a GPU is much more powerful than a CPU core.
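The interplay in Algorithm~\ref{alg:process_batches} between a thread working through its batch list on the CPU and a GPU slot that becomes available along the way can be illustrated with a toy, single-threaded Python simulation (all names here are hypothetical; the real device manager serves many threads concurrently):

```python
# Toy simulation of Algorithm 1: a thread processes batches one at a
# time on the CPU until a GPU slot frees up, then offloads the rest.
# (Illustrative sketch; the real device manager is multi-threaded.)

def process_batches(batches, slot_free_at):
    """slot_free_at: iteration at which a GPU slot becomes available."""
    on_cpu, on_gpu = [], []
    step = 0
    while batches:
        if step >= slot_free_at:       # free GPU slot granted
            on_gpu.extend(batches)     # offload all remaining batches
            batches = []
        else:                          # no slot: process one batch on CPU
            on_cpu.append(batches.pop(0))
        step += 1
    return on_cpu, on_gpu

cpu, gpu = process_batches(list(range(10)), slot_free_at=3)
assert cpu == [0, 1, 2] and gpu == [3, 4, 5, 6, 7, 8, 9]
```

Every batch is processed exactly once, and the later a slot is granted, the larger the share of the list that stays on the CPU.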
\begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{nthreads_vs_Gflops_snic_double_4096_cpu} \includegraphics[width=0.48\textwidth]{nthreads_vs_Gflops_snic_double_4096_gpu} \end{center} \caption{Results of matrix-matrix multiplication test runs in double precision using the Chunks and Tasks matrix library for dense $25\,000 \times 25\,000$ matrices and blocksizes 16, 32, and 64 on a single Erik node equipped with two GPUs. A varying number of threads was used by CHT-MPI to execute tasks. The leaf matrix dimension was fixed to $4096 \times 4096$. Left: running on the CPU cores. Right: running on both the CPU cores and the GPUs. The dashed lines indicate practical peak performance figures for the two GPUs, see Table~\ref{tab:performance_batched_gemm}. Note that up to 6 threads all batch lists are executed on the GPUs, see the discussion in the text. \label{fig:single_node_bench_double}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{nthreads_vs_Gflops_snic_single_4096_cpu} \includegraphics[width=0.48\textwidth]{nthreads_vs_Gflops_snic_single_4096_gpu} \end{center} \caption{Test runs for single precision and matrix dimension $40\,000 \times 40\,000$, otherwise corresponding to the double precision results in Figure~\ref{fig:single_node_bench_double}. See that figure caption for more information. \label{fig:single_node_bench_single}} \end{figure} In Figure~\ref{fig:weak_scaling_erik} we investigate the weak scaling of our block-sparse matrix-matrix multiplication for a set of banded matrices with fixed bandwidth but a matrix dimension that is increasing proportionally to the number of computational nodes. Each node used 17 worker threads and a chunk cache size of 5 GB. In this case, we noticed fluctuations in the execution time and have therefore carried out 6 test runs for each case (no.~of nodes and blocksize).
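For the banded matrices used here, the total number of nonzeros grows linearly with the number of nodes $n$, so the per-node share is essentially constant. A quick check with the benchmark's parameters (a sketch; the actual multiplication work also depends on the algorithm and blocking):

```python
# Nonzero count of an N x N banded matrix with bandwidth 2*b + 1:
# N*(2*b + 1) minus the b*(b + 1) entries cut off at the corners.
# With N = 40000*n and fixed b = 4000 (the weak-scaling setup above),
# the per-node share of the nonzeros is essentially constant in n.

def banded_nnz(N, b):
    return N * (2 * b + 1) - b * (b + 1)

per_node = [banded_nnz(40000 * n, 4000) / n for n in (1, 2, 4, 8)]
# the per-node nonzero counts vary by less than 5 percent over this range
```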
Note that in the case of perfect weak scaling as discussed in Section~\ref{sec:quadtree-effects} we would expect a constant wall time with increasing number of nodes. Figure~\ref{fig:weak_scaling_erik} shows results for both regular matrix-matrix multiplication and the symmetric matrix square operation that assumes upper triangular storage of a symmetric matrix and only computes the upper triangle of the symmetric product. The implementation of the symmetric matrix square operation is straightforward using Chunks and Tasks since all decisions regarding distribution of work and data are handled by the runtime library. The expected speedup of 2 compared to the regular multiplication is achieved. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{erik_weak_wall_regular} \includegraphics[width=0.48\textwidth]{erik_weak_wall_sysq} \end{center} \caption{Weak scaling test for banded matrices with bandwidth $2 \times 4000+1$ and matrix dimension $40000n \times 40000n$, where $n$ is the number of nodes. The calculations were performed in double precision with leaf matrix dimension $4096 \times 4096$ for internal leaf matrix block sizes 16, 32, and 64. Left: Regular matrix-matrix multiplication. Right: Symmetric matrix square taking advantage of the fact that the product matrix is symmetric. For each case (no. of nodes and blocksize), the benchmark calculation was repeated 6 times, and we plot the smallest interval containing all 6 wall times. Lines are drawn through the average wall time of the 6 benchmark calculations. \label{fig:weak_scaling_erik}} \end{figure} \subsection{Application to overlap matrix in electronic structure program}\label{sec:smat-tests} To test the applicability of our block-sparse matrix-matrix multiplication in large-scale electronic structure calculations, we have adapted parts of the linear scaling electronic structure code {\sc Ergo} \cite{linmemDFT} so that the overlap matrix can be constructed in parallel using Chunks and Tasks.
This allows us to test the symmetric matrix square operation for the overlap matrix. See the description of the symmetric matrix square task type in Section~\ref{subsec:sysq-task-types}. The overlap matrix construction was done using a hierarchical representation of the basis set, where each part of the hierarchy contains basis functions located in a particular part of space. At higher levels in the hierarchy, chunk identifiers are stored referring to basis set descriptions at lower levels. Using such a hierarchical basis set description, it is straightforward to implement tasks to compute the overlap matrix. The basis function ordering, which affects the sparsity pattern of the matrix, was determined based on the spatial coordinates of the basis functions using a recursive divide-space procedure. This is the default ordering used in the {\sc Ergo} program. The {\sc Ergo} overlap matrix test calculations were performed on the Tintin cluster at the UPPMAX computer center, Uppsala University, using CHT-MPI~1.1 compiled with Open~MPI~1.8.1 and gcc~4.9.1. The AMD Core Math Library (ACML) version 5.2.0 was used for BLAS operations on submatrices within the block-sparse leaf matrix implementation. The Tintin cluster consists of 160 compute nodes. Each node is a dual AMD Bulldozer compute server with two 8-core Opteron 6220 processors running at 3.0 GHz, with 64 GB of memory. The nodes are interconnected with a 2:1 oversubscribed QDR InfiniBand fabric. CHT-MPI was configured to use 15 threads for executing tasks, leaving one core on each node to handle communication. The chunk cache size was set to 8 GB. The test molecules were water clusters generated from a molecular dynamics simulation of bulk water at standard temperature and pressure by including all water molecules within spheres of varying radii. The Gaussian basis set STO-3G was used, corresponding to 7 basis functions for each water molecule. 
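The recursive divide-space ordering can be sketched as follows. This is a simplified stand-alone version for illustration; the actual procedure in {\sc Ergo} differs in its details.

```python
# Simplified sketch of a recursive divide-space ordering: points (basis
# function centers) are ordered by recursively bisecting space along the
# longest dimension of the bounding box, so that spatially nearby points
# end up with nearby indices.  (Illustrative only; not the Ergo code.)

def divide_space_order(points):
    if len(points) <= 1:
        return list(points)
    lo = [min(p[d] for p in points) for d in range(3)]
    hi = [max(p[d] for p in points) for d in range(3)]
    d = max(range(3), key=lambda k: hi[k] - lo[k])   # longest box dimension
    mid = 0.5 * (lo[d] + hi[d])
    left = [p for p in points if p[d] <= mid]
    right = [p for p in points if p[d] > mid]
    if not left or not right:                        # degenerate split
        return sorted(points)
    return divide_space_order(left) + divide_space_order(right)

pts = [(0.0, 0.0, 0.0), (9.0, 0.0, 0.0), (1.0, 0.0, 0.0), (8.0, 0.0, 0.0)]
order = divide_space_order(pts)
```

Points that are close in space receive nearby indices, which concentrates large overlap matrix elements near the diagonal and so improves the sparsity structure seen by the quadtree.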
The largest test system contained 1745413 water molecules, giving 12217891 basis functions. The overlap matrix $S$ was truncated so that the Frobenius norm of the error matrix was smaller than $10^{-5}$. The calculations were performed in double precision with leaf matrix dimension 4096 and blocksize 16. For the largest test systems this gave a sparsity corresponding to on average 1070 matrix elements per row in $S$ after truncation, and about 7000 matrix elements per row in $S^2$. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{smat_tintin_sysq_timings} \includegraphics[width=0.48\textwidth]{smat_tintin_sysq_scaling_2} \end{center} \caption{Timings and scaling for $S^2$ symmetric matrix square computations on Tintin. Left: Timings for $S^2$ computations on overlap matrices for water clusters of varying size, using 25, 50, and 100 nodes of the Tintin cluster. Nearly linear system-size scaling is observed. Right: Scaling with respect to number of nodes, for three different matrix sizes. The speedups are relative to the 25 nodes case. We get closer to ideal speedup when the matrix size is increased. \label{fig:smat_tintin_sysq_timings_and_scaling_tintin}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{smat_tintin_sysq_memusage} \includegraphics[width=0.48\textwidth]{smat_tintin_sysq_communication} \end{center} \caption{Memory usage and communication statistics for $S^2$ symmetric matrix square computations for water clusters of varying size, using 100 nodes of the Tintin cluster. Left: Chunk storage peak memory usage. Only the memory for owned chunks is shown here; chunk cache is not included. Right: Amount of data received by each node during the symmetric matrix square operation. 
\label{fig:smat_tintin_sysq_memusage_mm}} \end{figure} Timings and scaling for different numbers of compute nodes for the computation of $S^2$ using the symmetric matrix square operation are shown in Figure~\ref{fig:smat_tintin_sysq_timings_and_scaling_tintin}. The time scales nearly linearly with the size of the molecular system, and the parallelization speedup improves for larger problem sizes. Figure~\ref{fig:smat_tintin_sysq_memusage_mm} shows memory usage and communication statistics for the same water cluster $S^2$ calculations. The minimum, maximum, and average values among the 100 compute nodes are shown. Note that since CHT-MPI distributes both work and data dynamically, both the amount of data stored and the amount of communication needed will in general be different among the compute nodes. For the largest water cluster, the $S^2$ matrix contained about $8.6 \times 10^{10}$ matrix elements. Since only the upper triangle was computed and double precision was used, this corresponds to 344 GB of storage for $S^2$, or about 3.4 GB per node for the 100 nodes case. The average chunk memory usage shown in the left panel of Figure~\ref{fig:smat_tintin_sysq_memusage_mm} is larger, about 12.4 GB per node, as it also includes temporary matrix chunks used during the computation. Compared to the $S^2$ test calculations in \cite{chunks-and-tasks}, where plain dense matrix storage with blocksize 500 was used at the lowest level, the results in the present work represent significant improvements. The block-sparse leaf matrix type allows us to exploit sparsity much better, and using the symmetric matrix square operation reduces the computational effort even further. For a given water cluster size and number of compute nodes used, the memory usage for $S^2$ is reduced by about a factor of 16, and the time for the $S^2$ computation is reduced by about a factor of 6. Thanks to the reduced memory usage we are able to test significantly larger systems.
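The storage figures quoted above follow directly from the nonzero count; as a quick arithmetic check:

```python
# Quick check of the storage figures quoted above: about 8.6e10 nonzero
# elements in S^2, stored in double precision (8 bytes each), with only
# the upper triangle kept (half the elements), spread over 100 nodes.

nnz = 8.6e10
gb_total = nnz * 8 / 2 / 1e9     # upper triangle only -> 344 GB
gb_per_node = gb_total / 100     # ~3.4 GB per node
```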
The results in this section demonstrate that our block-sparse matrix-matrix multiplication is indeed well suited for applications in large-scale electronic structure calculations; we get the desired linear scaling with respect to the size of the molecular system and reasonable parallel speedup for large enough problems, with dynamic distribution of both work and data. However, there is room for performance improvements. Statistics from the calculations indicate that the worker threads were typically idle more than half of the time, either waiting for data to be fetched from other nodes or because there was not enough remaining work to occupy all worker threads. This can be addressed by improvements within the CHT-MPI implementation, for example by running tasks closer to their input chunks. As seen in the left panel of Figure~\ref{fig:smat_tintin_sysq_memusage_mm}, a more even distribution of the chunk storage in the CHT-MPI implementation would also be desirable, for example using an upper limit for the chunk storage on each node, and storing chunks elsewhere when that limit is reached. Such improvements could be taken advantage of without changes in the matrix library code, by linking to an improved CHT-MPI or another Chunks and Tasks library. The compute nodes on the Tintin cluster where the {\sc Ergo} tests were run are not equipped with GPUs, so the effect of using GPUs was not studied here. However, as seen in Section~\ref{sec:gpu-calculations}, our GPU implementation can provide additional performance when GPUs are available. Note that the {\sc Ergo} test calculations presented here only involved the overlap matrix. Full Hartree--Fock or Kohn--Sham density functional theory calculations require additional parts of the {\sc Ergo} code, notably the Coulomb and Hartree--Fock exchange matrix construction steps, to be parallelized using the Chunks and Tasks model.
When that is done, the matrix library described here will be used to combine the different parts so that full Hartree--Fock and Kohn--Sham density functional theory calculations can be performed. The most performance-critical matrix operations are expected to be the symmetric matrix square operations during density matrix construction. The density matrix in general contains significantly more nonzero elements than the overlap matrix~\cite{sparsity-JCC:JCC21723}. Therefore, for a given size of the molecular system more work for each matrix-matrix multiplication can be expected compared to the $S^2$ tests here, especially if larger basis sets are also used. \subsection{Investigation of communication costs}\label{subsec:communication} As shown in Section~\ref{sec:quadtree-effects} the quadtree representation in principle allows data locality in sparse matrices to be exploited. Here we test this in practice by considering weak scaling using banded matrices similarly to the calculations presented in Figure~\ref{fig:weak_scaling_erik} with fixed bandwidth but a matrix dimension that is increasing proportionally to the number of worker processes. A constant amount of communication for each worker process can then be expected provided that the CHT-MPI library used succeeds in distributing the work. However, if an approach like~\cite{BulucGilbert2012, Borstnik2014} is used, where a random permutation destroying data locality is employed, the amount of communication needed will grow, as noted in Section~\ref{subsec:CompAndCommCosts}. In this case, since the matrix dimension is increased together with the number of processes, by \eqref{eq:sqrtp_equation}, the number of matrix elements that each process needs to fetch becomes \begin{equation} \label{eq:sqrtp_eq2} 2mk\sqrt{p} \end{equation} where $k$ is the constant relating $N$ and $p$ in our weak scaling tests such that $N = kp$.
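The growth implied by \eqref{eq:sqrtp_eq2} can be made concrete with a small computation. The values of $m$ and $k$ below are illustrative placeholders in the spirit of the test setup; the point is only the $\sqrt{p}$ dependence.

```python
import math

# Illustration of the expression above: with a random reordering and the
# Sparse SUMMA algorithm, the data fetched per process grows like
# 2*m*k*sqrt(p), so quadrupling the number of processes doubles the
# per-process communication, whereas a locality-aware distribution of a
# banded matrix keeps it roughly constant.  (m and k are placeholders.)

def fetched_per_process(m, k, p):
    return 2 * m * k * math.sqrt(p)

m, k = 2000, 5000
growth = fetched_per_process(m, k, 240) / fetched_per_process(m, k, 60)
# quadrupling p from 60 to 240 doubles the per-process communication
```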
Since we are here interested in how the amount of communication scales for large numbers of processes, we use 8 worker processes per node and one worker thread per process. The test runs used up to 30 nodes of the Tintin cluster, corresponding to up to 240 worker processes. We consider here the total amount of data received by each process from other processes, including both communication between processes on the same node and between processes on different nodes. The chunk cache size for each process was set to 2 GB. The calculations were performed in double precision with leaf matrix dimension 4096 and blocksize 16. Figure~\ref{fig:tintin_weakscal_regular} shows results of our weak scaling tests using the Chunks and Tasks matrix library. The minimum, maximum, and average values among the worker processes are shown. For each case, the plotted numbers are averages from 6 repeated benchmark calculations. For comparison, the amount of communication that would have been needed if a random reordering and the Sparse SUMMA algorithm had been used is also shown. We note that the Chunks and Tasks matrix library, without a priori knowledge about the sparsity structure, is able to take advantage of locality and achieve an essentially constant amount of communication per worker process. To get an indication of the load balance, the active percentage for the worker threads, defined as the fraction of the time that worker threads were busy executing tasks, is also shown in the figure. When the communication per process no longer increases, the active percentage also stabilizes. Note the difference between our locality-aware approach and the Sparse SUMMA algorithm, for which communication would have dominated completely for large enough test cases. Figure~\ref{fig:tintin_weakscal_sysq} shows the corresponding results for the symmetric matrix square operation.
For large numbers of processes, using the symmetric matrix square operation instead of regular multiplication reduced the average necessary communication per process from about 1.39 GB to 0.76 GB. Note that although the weak scaling tests described above were performed for the simple case of banded matrices, our Chunks and Tasks matrix library automatically takes advantage of any sparsity and locality that can be exploited by the quadtree structure. As another example, exploitation of data locality in the sparsity pattern improves the scaling of the amount of communication also for the more complex sparsity patterns occurring in electronic structure calculations for three-dimensional molecular systems. This can be seen by comparing some of the overlap matrix tests in Section~\ref{sec:smat-tests}. For example, going from 2463377 basis functions and 25 nodes to 9861383 basis functions and 100 nodes corresponds to a factor of 4 in the number of nodes and a factor of 4.003 in matrix size. The average amount of data received per node increased from 6.0 GB to 7.7 GB, or a factor of 1.28. This is a significant improvement compared to the factor of 2 that would have resulted from a $\sqrt{p}$ behavior. It should be noted that such a comparison of different $S^2$ calculations does not correspond exactly to a weak scaling study, since the spherical shape of the water cluster systems and the use of Frobenius norm truncation lead to an amount of work that increases slightly more than linearly; in this case the increase in matrix size by a factor of 4 led to an increase in the number of nonzeros in $S^2$ by a factor of 4.33.
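The comparison above amounts to the following small check:

```python
import math

# Check of the comparison above: from 25 to 100 Tintin nodes (factor 4 in
# nodes, factor ~4.003 in matrix size), the average data received per node
# grew from 6.0 GB to 7.7 GB -- a factor of ~1.28, well below the factor
# of sqrt(4) = 2 that a sqrt(p)-type communication pattern would give.

observed_growth = 7.7 / 6.0          # ~1.28
sqrt_p_growth = math.sqrt(100 / 25)  # 2.0
```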
\begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{tintin_weakscal_communication_regular} \includegraphics[width=0.48\textwidth]{tintin_weakscal_activepercentage_regular} \end{center} \caption{Weak scaling results for regular matrix-matrix multiplication of banded matrices with bandwidth $2 \times 2000+1$ and matrix dimension $5000p \times 5000p$, where $p$ is the number of worker processes. Left: Amount of data received by each process during the matrix-matrix multiplication operation. The dashed line indicates the amount of communication that would have resulted if a random reordering and the Sparse SUMMA algorithm had been used, see~(\ref{eq:sqrtp_eq2}). Right: active percentage: the fraction of the time that worker threads were busy executing tasks. \label{fig:tintin_weakscal_regular}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{tintin_weakscal_communication_sysq} \includegraphics[width=0.48\textwidth]{tintin_weakscal_activepercentage_sysq} \end{center} \caption{Weak scaling results for symmetric matrix square computations corresponding to the regular matrix-matrix multiplication results in Figure~\ref{fig:tintin_weakscal_regular}. See that figure caption for more information. \label{fig:tintin_weakscal_sysq}} \end{figure} \section{Concluding remarks}\label{sec:conclusions} The matrix library presented in this work is based on a quadtree data structure, with the important property that it allows for automatic exploitation of matrix sparsity structure. The hierarchical quadtree representation and associated recursive algorithms have been implemented using the Chunks and Tasks programming model. In this matrix library code, parallelism is exposed to the Chunks and Tasks library by expressing matrices and operations as hierarchies of chunk and task objects. All details regarding message passing and synchronization are left to the Chunks and Tasks library. 
Thus, the matrix library code can be written without worrying about how many nodes there are, where data is to be sent, etc. Storage and manipulation of matrices at the lowest level of the hierarchy is handled by a separate leaf matrix library. This means that the matrix library code is relieved of the details regarding the best way to store a particular type of submatrix or the best way to perform submatrix-submatrix multiplication on a particular type of hardware. Such modular design is powerful since it allows each part to be developed and optimized separately, and one can easily switch between different implementations of each part. Well-designed interfaces between different modules, in our experience, result in both increased programming productivity and improved performance. For matrices appearing in electronic structure calculations, the basis function ordering plays an important role in determining the sparsity pattern. For the overlap matrix test calculations in the present work, the default ordering in the {\sc Ergo} program was used. A different ordering, using e.g.~space filling curves~\cite{Challacombe-sparsematrix} or network modularity optimization~\cite{Girvan11062002-network,rubensson-inverse-factorization}, could lead to increased data locality and result in improved performance. This work illustrates the usefulness of programming models allowing dynamic distribution of work and data, which we expect will become increasingly important in the future, as larger compute systems are used. Apart from simplifying the implementation of dynamic algorithms, such models also make it easier to achieve fault resilience. They also facilitate the use of heterogeneous computational resources and allow robustness with respect to varying performance among compute nodes. Finally, we note that the flexible design of the presented Chunks and Tasks matrix library makes it easy to modify and extend the library.
For example, inclusion of extra information such as the Frobenius norm of each submatrix in the quadtree as in~\cite{BockChallacombeSISC2013} could be straightforwardly implemented. The library could also be used for elementwise sparse matrices rather than block-sparse matrices by simply switching to a different leaf matrix library, employing for example a compressed sparse row format. \bibliographystyle{siam}
\section{Introduction} In this paper, we study the design of revenue-maximizing bilateral trade mechanisms when the mechanism designer is poorly informed of the values of the buyer and the seller. The mechanism designer’s revenue is the difference between what the buyer pays and what the seller receives. We could think of this mechanism designer as the commercial designer of a trading platform who charges a fee for transactions on the platform. We assume the designer knows only the expectations of the private values of the buyer and the seller, but does not know the joint distribution of the private values\footnote{That is, the designer knows neither the marginal distributions nor the correlation structure except for the expectations of the marginal distributions.}. The designer evaluates any trading mechanism by its expected revenue in the worst case, over all possible joint distributions consistent with the known expectations. The objective of the designer is to find a trading mechanism that maximizes worst-case expected revenue among all dominant strategy incentive compatible (DSIC) and ex-post individually rational (EPIR) mechanisms.\\ \indent This study is motivated by several observations. First, the joint distribution is statistically a high-dimensional object, which may be hard to estimate. In contrast, the expectations are two parameters, about which it may be relatively easier to form an educated guess. Practically, this may fit situations in which the designer knows little about the agents. At a high level, this study follows the ``Wilson doctrine'' (\cite{wilson1987game}) that motivated the search for economic institutions not sensitive to unrealistic assumptions about the information structure. \\ \indent Our main results offer a complete characterization of the maxmin trade mechanisms and the corresponding worst-case joint distributions.
Theorem \ref{t1} considers the symmetric case in which the known expectations of the value distributions of the buyer and the seller sum up to the upper bound of the support, which is normalized to 1.\footnote{Note that the lower the seller's value, the higher his willingness to trade. Thus it is plausible to regard the highest-value seller as the lowest-type seller. When the known expectations sum up to 1, the expectations of the buyer and the seller have the same distance from the lowest-type buyer and the lowest-type seller respectively. Therefore we refer to this case as the symmetric case.} In this case, \textbf{Maxmin Trade Mechanism (I)} can be described as follows. Trade occurs with a positive probability if and only if the difference between the reported values exceeds some threshold; the trading probability is linear and strictly increasing in the difference between the reported values; in addition, trade occurs with probability 1 if and only if the reported values of the buyer and the seller are 1 and 0 respectively; finally, the transfer function is quadratic in the reported values of the agents. The support of \textbf{Worst-Case Joint Distribution (I)} is a triangular subset of the set of joint valuations, which is the same as the trading region\footnote{We refer to the set of value profiles in which trade occurs with a positive probability as the trading region.} of \textbf{Maxmin Trade Mechanism (I)}. Its marginal distribution for the buyer is a combination of a uniform distribution on an interior interval of values and an atom at 1, while that for the seller is a combination of a uniform distribution on an interior interval and an atom at 0; its conditional distribution is some truncated generalized Pareto distribution. Theorem \ref{t2} considers the asymmetric case in which the known expectations of the value distributions of the buyer and the seller sum up to a number other than 1.
For the asymmetric case, \textbf{Maxmin Trade Mechanism (II)} can be described as follows. Trade occurs with a positive probability if and only if the difference between the weighted reported values exceeds some threshold; the trading probability is strictly increasing in the difference between some logarithmic functions of some linear transformation of the reported values; in addition, trade occurs with probability 1 if and only if the reported values of the buyer and the seller are 1 and 0 respectively; finally, the transfer function is the sum of a logarithmic function and a linear function of the reported values of the agents. The support of \textbf{Worst-Case Joint Distribution (II)} is also a triangular subset of the set of joint valuations, which is the same as the trading region of \textbf{Maxmin Trade Mechanism (II)}. Its marginal distribution for the buyer is some truncated generalized Pareto distribution with an atom at 1, while that for the seller is some truncated generalized Pareto distribution with an atom at 0; its conditional distribution is some truncated generalized Pareto distribution. \\ \indent We take a constructive approach based on the saddle point property. Specifically, we reformulate the designer's problem as a zero-sum game between the designer and Nature, who chooses a feasible joint distribution consistent with the known expectations to minimize expected revenue. Finding an optimal mechanism is equivalent to finding a saddle point of the zero-sum game.\\ \indent We first consider the symmetric case in which the buyer's value and the seller's value sum up to 1. To form an educated guess about the saddle point, we begin with the trading region of the maxmin mechanism and the support of the worst-case joint distribution. First, in the maxmin mechanism, trade occurs with a positive probability if and only if the difference between the values of the buyer and the seller exceeds some threshold.
The intuition behind this property can be summarized as follows. In the symmetric case, the difference between the private values of the buyer and the seller can be interpreted as the true value to the designer\footnote{We may view the buyer and the seller as two departments of a company. The overall benefit to the company from trading between these two departments is the difference between their private values.}. The difference between the payment from the buyer and the transfer to the seller can be interpreted as the price charged by the designer. From the mechanism design literature, we learned that the revenue is generally higher if the designer exercises monopoly power, which in our environment corresponds to restricting trade by allowing trade to occur only when the true value exceeds some threshold. Second, the support of the worst-case joint distribution is the same as the trading region. The intuition behind this property can be summarized as follows. To the designer, trades occurring outside of the support generate no revenue. However, to Nature, allocating some probability outside of the trading region has two opposite effects: the upside is that this operation will reduce the overall probability on the trading region, which is potentially beneficial to the adversarial Nature, while the downside is that, in order to respect the known expectation constraints, this operation will increase the probability of certain value profiles in the trading region, which is potentially detrimental to the adversarial Nature, especially when the revenue from those value profiles is high. In the worst-case joint distribution, the tradeoff is resolved in favor of the downside in the saddle point and the trading region exactly coincides with the support of the worst-case joint distribution. Then, by strong duality, the revenue from value profiles in the support is some linear function of the values.
Intuitively, we expect the maxmin mechanism to exhibit a lot of indifference to various plausible joint distributions. When the revenue from value profiles in the support is some linear function, any plausible joint distribution will generate the same revenue as long as its support is contained in the support of the worst-case joint distribution. Now, with the help of the envelope representation of the revenue, we are confronted with (essentially) a system of partial differential equations involving the trading probability. We can prove that the trading probability is separable in the buyer's value and the seller's value. Therefore it can be written as the sum of two functions of a single argument each. Then we take a guess-and-verify approach to solve for the closed forms of these two functions, which turn out to be linear. \\ \indent For the construction of the worst-case joint distribution, we first derive a virtual representation of the expected revenue. That is, the expected revenue is equivalent to the inner product of the trading probability and the weighted virtual value, which will be defined in the main text. In the worst-case joint distribution, the weighted virtual value is positive only for the value profile in which the buyer's value and the seller's value are 1 and 0 respectively. Besides, the weighted virtual value is 0 for the other value profiles in the support. The intuition behind this is that \textbf{Maxmin Trade Mechanism (I)} requires randomization (trade occurs with some probability greater than 0 but less than 1) for all interior value profiles. That is, the designer is indifferent between trade and no trade for these value profiles.
Indeed, given the above properties, any trade mechanism will generate the same revenue and be a best response for the designer provided that it is a feasible and monotone trade mechanism in which trade occurs with a positive probability only for the value profiles in the support and trade occurs with probability 1 for the value profile in which the buyer's value and the seller's value are 1 and 0 respectively. It is easy to verify that \textbf{Maxmin Trade Mechanism (I)} is such a mechanism. The remaining issue is whether these properties can hold. We provide an affirmative answer by constructing such a joint distribution. Briefly, these properties imply a system of ordinary differential equations and partial differential equations, which can be solved by the guess-and-verify approach.\\ \indent For the asymmetric case in which the buyer's value and the seller's value sum up to a number other than 1, trade occurs with a positive probability if and only if the difference between \textit{the weighted values} of the buyer and the seller exceeds some threshold in the maxmin mechanism. The rough intuition behind this property is that it may be beneficial to the designer to attach different weights to the values of the buyer and the seller, since they differ in their eagerness to trade\footnote{The expectation may be viewed as a metric of the average eagerness to trade. The higher the average eagerness to trade, the higher the expectation of the buyer and the lower the expectation of the seller.} in the asymmetric case. The intuition for the threshold is the same as the aforementioned intuition that the revenue is generally higher if the designer exercises monopoly power. Second, the support of the worst-case joint distribution is the same as the trading region, based on the same intuitions.
The remaining procedures for characterizing the maxmin mechanism and the worst-case joint distribution are similar, and we defer the details to the main text.\\ \indent Lastly, we restrict attention to \textit{deterministic} DSIC and EPIR mechanisms and characterize the entire set of maxmin deterministic mechanisms as well as the worst-case joint distribution. Our finding is that any deterministic DSIC and EPIR mechanism whose trade boundary contains two given value profiles and lies on or above the line connecting the two given value profiles will be optimal; the worst-case joint distribution puts probability mass only on the two given value profiles and the value profile in which the buyer's value is 1 and the seller's value is 0. Examples of maxmin deterministic mechanisms include \textit{linear trading}, in which trade occurs if and only if the difference between the weighted values exceeds a threshold, and \textit{threshold trading}, in which trade occurs if and only if the buyer's value exceeds a threshold and the seller's value falls below a threshold. Our construction is based on the strong duality of linear programming. We first rule out mechanisms in which trade occurs for value profiles in which the value of the seller exceeds the value of the buyer. To do so, we note that the revenue from the four vertex value profiles is non-positive, implied by the monotonicity property of the mechanism. Then we show that Nature can always put probability mass only on some of the four vertex value profiles, thus resulting in a non-positive revenue guarantee. For the remaining mechanisms, we invoke strong duality and work on the dual maximization problem. We further propose a relaxation of the dual by omitting many constraints, resulting in a finite-dimensional linear programming problem.
We identify an upper bound on the value of the relaxation and argue that the bound is attainable by constructing both the (class of) mechanisms and the worst-case joint distribution.\\ \indent The remainder of the paper proceeds as follows. Section 2 provides a literature review. Section 3 presents the model. Section 4 characterizes our main results. Section 5 characterizes the class of maxmin deterministic mechanisms. Section 6 concludes. All proofs are in the Appendix. \section{Related Literature} This paper contributes to the literature on robust mechanism design. The most closely related papers are \cite{carrasco2018optimal}, \cite{che2020distributionally}, \cite{koccyiugit2020distributionally}, \cite{suzdaltsev2020optimal}, \cite{zhang2021correlation}, and \cite{brooks2021maxmin}.\\ \indent \cite{carrasco2018optimal} study revenue-maximizing selling mechanisms when a seller facing a single buyer knows only the first $N$ moments of the value distribution, and solve the problem with a known expectation as a special case. They also find the optimal deterministic posted price for this special case. This paper can be viewed as a generalization of the special case of their model to the multidimensional bilateral trade setting, as our model reduces to theirs when the expectation of the seller's value is known to be 0. Their approach is also essentially based on duality. However, our setting requires us to verify a guess about a joint distribution with a rather intricate correlation structure, while they need to verify a guess about a single-dimensional distribution. \\ \indent \cite{che2020distributionally} considers a model of auction design in which the auctioneer only knows the expectation of each bidder’s value, and characterizes the optimal random reserve prices for the second price auction. He further shows that it also achieves the greatest revenue guarantee within a class of competitive mechanisms.
The constructive approach is similar, but one of the key assumptions differs. We do not restrict attention to any particular mechanism, but to the DSIC and EPIR mechanisms, which also do not coincide with the class of competitive mechanisms. \\ \indent \cite{suzdaltsev2020optimal} also considers exactly the same setting, but focuses on auction design and deterministic mechanisms. To wit, he considers an auctioneer who knows only the expectations of bidders' values and shows that a linear version of Myerson's optimal auction is optimal among all deterministic DSIC and EPIR mechanisms. We consider a bilateral trade model and also derive a result on maxmin deterministic DSIC and EPIR mechanisms (Theorem \ref{t3}). In addition, both papers characterize the entire class of maxmin deterministic mechanisms based on strong duality.\\ \indent \cite{koccyiugit2020distributionally} consider an auction design model in which Nature chooses the worst-case joint distribution subject to symmetric expectations of bidders' values. They find, among other results, that a highest-bidder lottery mechanism is optimal within the DSIC and EPIR mechanisms in which only the highest bidder is allocated the good. In contrast, we consider a broader class of mechanisms and do not impose any restriction on the known expectations.\\ \indent \cite{zhang2021correlation} considers a model of auction design in which the auctioneer knows the marginal distribution of each bidder's value but does not know the correlation structure, and characterizes maxmin auctions among some general class of mechanisms under certain regularity conditions. In both papers, the worst-case joint distributions are motivated by properties of some version of ``virtual values''. However, the construction of the worst-case joint distribution requires us to solve a partial differential equation in addition to ordinary differential equations.
In addition, this paper offers a complete characterization for all primitives. \\ \indent \cite{brooks2021maxmin} consider informationally robust auction design in the interdependent value environment. They assume the auctioneer only knows the expectation of each bidder's value, but does not know the distribution of values and higher order beliefs. The solution concept they use is what they refer to as a strong maxmin solution. In contrast, our framework assumes values are known to the agents, and restricts attention to DSIC mechanisms, ruling out issues brought by higher order beliefs. Therefore, our methodology differs. However, at a high level, both papers rely on some version of virtual values to carry out the analysis. And interestingly, they characterize a proportional auction as a maxmin mechanism, which shares similar features with the maxmin trade mechanism in our model for the symmetric case.\\ \indent There are other papers seeking robustness to value distributions, e.g., \cite{carrasco2019robust}, \cite{auster2018robust}, \cite{bergemann2011robust}, \cite{bergemann2008pricing}. \cite{carroll2017robustness}, \cite{giannakopoulos2020robust} and \cite{chen2019distribution} focus on the problem of selling multiple goods to a single buyer when the value distributions are unknown. A separate strand of papers focuses on the case in which the designer does not have reliable information about the agents’ hierarchies of beliefs about each other while assuming knowledge of the payoff environment, e.g., \cite{bergemann2005robust}, \cite{chung2007foundations}, \cite{chen2018revisiting}, \cite{bergemann2016informationally,bergemann2017first,bergemann2019revenue}, \cite{du2018robust}, \cite{brooks2020optimal}, \cite{libgober2018informational}, \cite{yamashita2018foundations}.
\cite{carroll2019robustness} provides an elaborate survey on various notions of robustness studied in the literature, e.g., robustness to preferences, robustness to strategic behavior and robustness to interaction among agents.\\ \indent There are other papers studying robust bilateral trade mechanisms. \cite{wolitzky2016mechanism} studies optimal mechanisms in terms of efficiency for bilateral trade when agents are maxmin expected utility maximizers, with similar ambiguity sets (that is, a buyer knows only the mean of a seller’s valuation, and vice versa). \cite{bodoh2012ambiguous} also assumes the agents are maxmin expected utility maximizers, but focuses on revenue-maximizing bilateral trade mechanisms in their applications. \cite{carroll2016informationally} studies bilateral trade mechanisms within the informationally robust framework with a focus on the expected surplus. In contrast, our paper considers a private-value environment, assumes that the designer has limited knowledge about the economic environment, and focuses on revenue-maximizing mechanism design. \section{Preliminaries} We consider an environment where a single indivisible good is traded between two risk-neutral agents. One is the Seller ($S$), who holds the good initially, while the other is the Buyer ($B$), who does not hold the good initially. We denote by $I = \{S,B\}$ the set of agents. Each agent $i$ has private information about her valuation for the object, which is modeled as a random variable $v_i$ with cumulative distribution function $F_i$.\footnote{We do not make any assumption on the distributions of these random variables. They could be continuous, discrete, or any mixture. Also, we allow asymmetric distributions, that is, $F_S$ can be different from $F_B$.
} We use $f_i(v_i)$ to denote the density of $v_i$ under the distribution $F_i$ when $F_i$ is differentiable at $v_i$; we use $Pr_i(v_i)$ to denote the probability of $v_i$ under the distribution $F_i$ when $F_i$ has a probability mass at $v_i$. We denote by $V_i$ the support of $F_i$. We assume each $V_i$ is bounded. Throughout, we assume common support, i.e., $V_S=V_B$. As a normalization, we assume $V_i = [0,1]$. The set of value profiles is denoted as $V = [0,1]^2$ with a typical value profile $v$. The joint distribution is denoted as $F$.\\ \indent The valuation profile $v$ is drawn from a joint distribution $F$. The designer only knows the expectations $M_B$ and $M_S$ of the private values of $B$ and $S$ respectively, as well as the support, but does not know the joint distribution of the values of these two agents\footnote{That is, except for the expectations, the designer knows neither the marginal distributions nor the correlation structure.}. Formally, we denote by $$\Pi(M_B,M_S) = \{ \pi \in \Delta V : \int v_B \pi(v)dv = M_B,\int v_S \pi(v)dv = M_S\}$$ the collection of such joint distributions.\\ \indent The designer seeks a dominant strategy incentive compatible (DSIC) and ex-post individually rational (EPIR) mechanism. A direct mechanism\footnote{Since we restrict attention to DSIC mechanisms, the revelation principle holds and it is without loss of generality to focus on direct mechanisms.} $(q,t_B,t_S)$ is defined as a trading probability $q : V \to [0, 1]$ and transfer functions $t_i : V\to \mathbb{R}$.\footnote{$q$ is the probability that $B$ obtains the good, which can be interpreted as the trading probability in our environment. We allow randomization, which will play a crucial role in our analysis.} With slight abuse of notation, we assume each agent reports $v_i\in V_i$ to the designer.
Upon receiving the reported profile $v=(v_B, v_S)$, the buyer $B$ gets the good with probability $q(v)$ and pays $t_B(v)$; the seller $S$ holds the good with the remaining probability $1-q(v)$ and receives a payment of $t_S(v)$. We use $t(v)\equiv t_B(v)-t_S(v)$ to denote the difference between what $B$ pays and what $S$ receives. The set of all DSIC and EPIR mechanisms is denoted as $\mathcal{D}$. \\ \indent We are interested in the designer's expected revenue in the dominant strategy equilibrium in which each agent truthfully reports her valuation of the good. The expected revenue of a DSIC and EPIR mechanism $(q,t_B,t_S)$ when the joint distribution is $\pi$ is $U((q,t_B,t_S),\pi)\equiv \int_{v\in V}\pi(v)t(v) dv$. The designer evaluates each such mechanism $(q,t_B,t_S)$ by its worst-case expected revenue over plausible joint distributions. The designer's goal is to find a mechanism with the maximal worst-case revenue for a given pair of expectations $(M_B,M_S)$. Formally, the designer tries to find a mechanism $(q^*,t_B^*,t_S^*)$ that solves the following problem:$$ (q^*,t_B^*,t_S^*)\in \argmax_{(q,t_B,t_S)\in \mathcal{D}}\min_{\pi\in \Pi(M_B,M_S)}\int_{v\in V}\pi(v)t(v)dv$$ s.t. $$v_Bq(v)-t_B(v)\ge 0 \quad \forall v \quad (EPIR_B)$$$$ v_Bq(v)-t_B(v)\ge v_Bq(v_B',v_S)-t_B(v_B',v_S)\quad \forall v,v_B' \quad(DSIC_B)$$ $$v_S(1-q(v))+t_S(v)\ge v_S \quad \forall v \quad (EPIR_S)$$$$ v_S(1-q(v))+t_S(v)\ge v_S(1-q(v_B,v_S'))+t_S(v_B,v_S')\quad \forall v,v_S' \quad(DSIC_S)$$$$ 0\le q(v)\le 1 \quad \forall v \quad(Feasibility)$$ \begin{remark} We may consider a slightly more general mechanism in which we allow the designer to destroy part of the good uniformly, i.e., the sum of the final allocations to $B$ and $S$ could be some $0\le a\le 1$. However, we argue it is without loss of generality to assume $a$ is exactly 1 for all $v$.
To see this, note the constraints in the above program become $$v_Bq(v)-t_B(v)\ge 0 \quad \forall v \quad (EPIR_B)$$$$ v_Bq(v)-t_B(v)\ge v_Bq(v_B',v_S)-t_B(v_B',v_S)\quad \forall v,v_B' \quad(DSIC_B)$$ $$v_S(a-q(v))+t_S(v)\ge v_S \quad \forall v \quad (EPIR'_S)$$$$ v_S(a-q(v))+t_S(v)\ge v_S(a-q(v_B,v_S'))+t_S(v_B,v_S')\quad \forall v,v_S' \quad(DSIC'_S)$$$$ 0\le q(v)\le a, 0\le a\le 1\quad \forall v \quad(Feasibility')$$ Given any $(a,q(v),t_B(v),t_S(v))$ satisfying the new constraints, we can inflate it to $(1,\frac{1}{a}q(v),\frac{1}{a}t_B(v),\frac{1}{a}t_S(v))$. Under the new mechanism, $(EPIR'_S)$ holds since the LHS of $(EPIR'_S)$ is weakly greater and the RHS of $(EPIR'_S)$ remains the same. The other constraints also hold trivially. And the new mechanism achieves weakly better revenue for any joint distribution because it inflates the revenue. Thus, we can assume $a=1$. After we present the main result, we can also consider a model in which we allow the designer to destroy part of the good not necessarily uniformly. \end{remark} \section{Main Results} \indent To facilitate the analysis, it will be useful to further simplify the problem. We will use the following proposition; its proof is standard but included in the Appendix for completeness. All formal proofs are deferred to the Appendix. \begin{proposition}\label{p1} Maxmin Bilateral Trade Mechanisms have the following properties:\\(i). $q(v)$ is nondecreasing in $v_B$ and nonincreasing in $v_S$.\\(ii). $t_B(v)=v_Bq(v)-\int_0^{v_B}q(b,v_S)db$.\\(iii). $t_S(v)=1-(1-q(v))v_S-\int_{v_S}^1(1-q(v_B,s))ds$.\\(iv). $t(v)=(v_B-v_S)q(v)-\int_0^{v_B}q(b,v_S)db-\int_{v_S}^1q(v_B,s)ds$. \end{proposition} \indent For the rest of the paper, we focus on the case in which $M_B>M_S$. Otherwise the problem becomes trivial, as Nature can always choose a distribution under which the seller's value is never below the buyer's value, for instance, the joint distribution that puts all probability mass on $(M_B,M_S)$.
Thus, the revenue guarantee cannot be positive, as implied by EPIR. Then, trivially, the No Trade Mechanism ($q(v)=t_S(v)=t_B(v)=0$ for any $v$) is a maxmin mechanism. We summarize this observation as follows. \begin{observation} The No Trade Mechanism is a maxmin mechanism if $M_B\le M_S$. And the revenue guarantee is 0. \end{observation} In addition, we assume $M_S>0$ for the rest of the paper. If $M_S=0$, the problem becomes a one-agent problem, which has been solved by \cite{carrasco2018optimal}. \subsection{The Symmetric Case: $M_B+M_S=1$} \indent In this subsection, we characterize the maxmin mechanism when the two known expectations sum up to 1. We observe that the maxmin optimization problem can be interpreted as a two-player sequential zero-sum game. The two players are the designer and Nature. The designer first chooses a mechanism $(q,t_B,t_S)\in \mathcal{D}$. After observing the designer’s choice of the mechanism, Nature chooses a joint distribution $\pi\in \Pi(M_B,M_S)$. The designer’s payoff is $U((q,t_B,t_S),\pi)$, and Nature’s payoff is $-U((q,t_B,t_S),\pi)$. Now, instead of solving directly for such a subgame perfect equilibrium, we can solve for a Nash equilibrium $((q^*,t_B^*,t_S^*),\pi^*)$ of the simultaneous move version of this zero-sum game, which corresponds to a saddle point of the payoff functional $U$, i.e.,$$U((q^*,t_B^*,t_S^*), \pi) \ge U((q^*,t_B^*,t_S^*),\pi^*) \ge U((q,t_B,t_S),\pi^*)\quad\quad $$ for any $(q,t_B,t_S)$ and any $\pi$. The properties of a saddle point imply that the designer’s equilibrium strategy in the simultaneous move game, $(q^*,t_B^*,t_S^*)$, is also his maxmin strategy (i.e., his equilibrium strategy in the subgame perfect equilibrium of the sequential game).\\ \indent We propose the following \textbf{Maxmin Trade Mechanism (I)} and \textbf{Worst-Case Joint Distribution (I)}. Formally, they are described as follows. \\ \textbf{Maxmin Trade Mechanism (I)}\\ Let $v=(v_B,v_S)$ be the reported value profile of the two agents.
If $v_B-v_S\ge r$, then $$ q^*(v_B,v_S)= \frac{1}{1-r}(v_B-v_S-r)$$ $$t^*_B(v_B,v_S)=\frac{1}{2(1-r)}(v_B^2-(v_S+r)^2)$$ $$t^*_S(v_B,v_S)=\frac{1}{2(1-r)}((v_B-r)^2-v_S^2)$$ where $r=1-\sqrt{1-(M_B-M_S)}$.\\ Otherwise $$q^*(v_B,v_S)=t^*_B(v_B,v_S)=t^*_S(v_B,v_S)=0$$ \textbf{Worst-Case Joint Distribution (I)}\\ Let $\pi^*(v_B,v_S)$ denote the density of the value profile $(v_B,v_S)$ whenever the density exists. Let $Pr^*(v_B,v_S)$ denote the probability mass of the value profile $(v_B,v_S)$ whenever there is some probability mass on $(v_B,v_S)$. Let $V(I):=\{v|v_B-v_S\ge r\}$. Worst-Case Joint Distribution (I) has the support $V(I)$ and is defined as follows: $$\pi^*(v_B,v_S)= \left\{ \begin{array}{lll} \frac{2r^2}{(v_B-v_S)^3} & & {v_B-v_S\ge r,v_B\neq 1,v_S \neq 0}\\ \frac{r^2}{(1-v_S)^2} & & {v_B=1,0<v_S \le 1-r}\\ \frac{r^2}{v_B^2} & & {r\le v_B < 1,v_S=0} \end{array} \right. $$ $$Pr^*(1,0)=r^2$$ Equivalently, \textbf{Worst-Case Joint Distribution (I)} can be described by worst-case marginal distributions and conditional distributions as follows:\\ For $B$, the worst-case marginal distribution is $$\pi^*_B(v_B)=1$$ for $r\le v_B <1$, $$Pr^*_B(1)=r$$ That is, the worst-case marginal distribution for $B$ is the uniform distribution on $[r,1)$ with a probability mass $r$ on 1.\\ For $S$, the worst-case marginal distribution is $$\pi^*_S(v_S)=1$$ for $0<v_S\le 1-r$, $$Pr^*_S(0)=r$$ That is, the worst-case marginal distribution for $S$ is the uniform distribution on $(0,1-r]$ with a probability mass $r$ on 0.\\ The conditional distribution of $v_S$ given $v_B$ is $$ \pi^*(v_S|v_B)=\frac{2r^2}{(v_B-v_S)^3}$$ for $r\le v_B< 1, 0<v_S\le v_B-r$, $$Pr^*(v_S=0|v_B)=\frac{r^2}{v_B^2}$$ for $r\le v_B< 1$; $$\pi^*(v_S|v_B=1)=\frac{r}{(1-v_S)^2}$$ for $0<v_S \le 1-r$, $$Pr^*(v_S=0|v_B=1)=r$$ That is, the conditional distribution of $v_S$ given $v_B$ is a generalized Pareto distribution on $(0,v_B-r]$ with some probability mass on 0.
Likewise, the conditional distribution of $v_B$ given $v_S$ is a generalized Pareto distribution on $[v_S+r,1)$ with some probability mass on 1. The feature is similar and we omit the derivation. \begin{definition}[Positive correlation for bivariate distribution] Let $Z = (X,Y)$ be a bivariate random vector. $Z$ exhibits positive correlation for $D_X$ and $D_Y$ if $F(X|Y=y)$ \textit{first order stochastically dominates} $F(X|Y=y')$ for any $y>y', y, y'\in D_Y$ and $F(Y|X=x)$ \textit{first order stochastically dominates} $F(Y|X=x')$ for any $x>x', x, x'\in D_X$. \end{definition} \begin{remark} {Worst-Case Joint Distribution (I)} exhibits positive correlation for $r\le v_B<1$ and $0<v_S\le 1-r$.\footnote{To see this, note $F(v_S|v_B)=\frac{r^2}{(v_B-v_S)^2}$ is decreasing w.r.t. $v_B$ for $r\le v_B <1$. The positive correlation breaks when $v_B=1$. Similarly, $F(v_B|v_S)=1-\frac{r^2}{(v_B-v_S)^2}$ is decreasing w.r.t. $v_S$ for $0< v_S \le 1-r$. The positive correlation breaks when $v_S=0$.} \end{remark} \begin{theorem}\label{t1} When $M_B+M_S=1$, \textbf{Maxmin Trade Mechanism (I)} and \textbf{Worst-Case Joint Distribution (I)} form a Nash equilibrium. The revenue guarantee is $r^2$. \end{theorem} \subsection{Illustration of Theorem \ref{t1}} \subsubsection{Weighted Virtual Values} We begin our analysis by defining a generalized version of virtual values in our environment. We consider the problem in which, fixing any joint distribution $\pi$, the designer designs an optimal mechanism $(q,t_B,t_S)$. We denote the density of value profile $v=(v_i, v_j)$ as $\pi(v_i,v_j)$. We define $\pi_i(v_i)\equiv \int_{v_j}\pi(v_i,v_j)dv_j$. We denote the conditional density of $v_i$ given $v_j$ as $\pi_i(v_i| v_j)$, and the cumulative distribution function of $v_i$ conditional on $v_j$ as $\Pi_i(v_i|v_j)= \int_{s_i\le v_i} \pi_i(s_i| v_j)ds_i$. We define $\Pi_i(v_i,v_j)\equiv \pi_j(v_j)\Pi_i(v_i|v_j)=\int_{s_i\le v_i} \pi(s_i,v_j)ds_i$.
A direct implication of Proposition \ref{p1} is that the expected revenue of $(q,t)$ under the joint distribution $\pi$ is $$E[t_B(v)-t_S(v)]=\int_vq(v)\Phi(v)dv$$ where $$\Phi(v)= \pi(v)(v_B-v_S)-(\pi_S(v_S)-\Pi_B(v_B,v_S))- \Pi_S(v_S,v_B) $$ Here $\Phi(v)$ is defined as \textbf{the weighted virtual value}\footnote{Note $\Phi(v)=\pi(v)(v_B-\frac{1-\Pi_B(v_B|v_S)}{\pi_B(v_B|v_S)} -(v_S+\frac{\Pi_S(v_S|v_B)}{\pi_S(v_S|v_B)}))$ when $\pi(v)$ is not 0. Here $\phi(v)\equiv v_B-\frac{1-\Pi_B(v_B|v_S)}{\pi_B(v_B|v_S)} -(v_S+\frac{\Pi_S(v_S|v_B)}{\pi_S(v_S|v_B)})$ is the virtual value in our environment, which is the difference between the conditional virtual values of $B$ and $S$. However, it turns out the weighted virtual value is more convenient for our analysis because it is well defined even when $\pi(v)=0$. Henceforth we directly work with the weighted virtual values.} when the value profile is $v$. Thus the problem of designing an optimal mechanism given a joint distribution can be viewed as maximizing the inner product of the trading probability and the weighted virtual values, given that the trading probability is feasible and satisfies the monotonicity condition in Proposition \ref{p1}. \subsubsection{Characterization of \textbf{Maxmin Trade Mechanism (I)}} \indent We are now ready to illustrate Theorem \ref{t1}. At a high level, we expect that our solution exhibits a lot of ``indifference'', which is a general lesson from the robust mechanism design literature. In our environment, that means the maxmin mechanism should generate the same payoff for the designer across many plausible joint distributions, and the worst-case joint distribution should generate the same payoff for Nature across many feasible mechanisms.\\ \indent We start with the illustration of \textbf{Maxmin Trade Mechanism (I)}.
As mentioned in the introduction, we form a simple and educated guess (A) that in the maxmin solution, trade occurs with positive probability if and only if the difference between the private values of $B$ and $S$ exceeds a certain threshold, i.e., $v_B-v_S>r$. Second, note we can define the value profile $(1,0)$ as the highest type in our environment since it has the maximal virtual value\footnote{In our environment, a high-value buyer and a low-value seller are more willing to trade. Thus, a value profile with a high buyer's value and a low seller's value can be referred to as a ``high type'' in the traditional mechanism design literature.} (suppose it has non-zero density or probability mass). Hence, in the maxmin solution, it is without loss of generality to assume (B) that the trading probability is 1 when the value profile is $(1,0)$.\footnote{This does not affect the monotonicity constraints as the value profile $(1,0)$ is the highest type in our environment.}\\ \indent Now consider Nature's problem of finding a worst-case joint distribution $\pi$ in response to any mechanism $(q,t_B,t_S)$. We observe this is a semi-infinite dimensional linear program. We derive its dual program. By Theorem 3.12 in \cite{anderson1987linear}, we establish strong duality (we leave all the details to the Appendix). Then, by the complementary slackness condition, we obtain the following lemma. \begin{lemma}\label{l1} If $\pi$ is a best response for Nature to a given mechanism $(q,t_B,t_S)$, then there exist real numbers $\lambda_B, \lambda_S,\mu$ such that \begin{equation} \label{eq1} \lambda_Bv_B+\lambda_Sv_S+\mu \le t(v)\quad \forall v \in V \end{equation} \begin{equation} \label{eq2} \lambda_Bv_B+\lambda_Sv_S+\mu = t(v)\quad \forall v \in supp(\pi) \end{equation} \end{lemma} We conjecture that the support of the worst-case joint distribution $\pi^*$ is the area in which $v_B-v_S\ge r$.
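As a numerical aside (not part of the formal argument), the affine structure required by \eqref{eq2} can be checked directly from the closed forms of \textbf{Maxmin Trade Mechanism (I)}; the sketch below uses an illustrative threshold $r=0.2$, which is an assumed value rather than one solved from the model primitives:

```python
# Check that t*(v) = t*_B(v) - t*_S(v) from Maxmin Trade Mechanism (I)
# is affine on the conjectured support {v_B - v_S >= r}, as required by
# the complementary slackness condition (eq. 2).
# The threshold r = 0.2 is illustrative only, not derived from (M_B, M_S).
r = 0.2
lam = r / (1 - r)  # the implied affine coefficient (lambda_B = -lambda_S)

def t_B(vB, vS):
    return (vB**2 - (vS + r)**2) / (2 * (1 - r))

def t_S(vB, vS):
    return ((vB - r)**2 - vS**2) / (2 * (1 - r))

for vB in [0.25, 0.5, 0.75, 1.0]:
    for vS in [0.0, 0.1, 0.3, 0.6]:
        if vB - vS >= r:
            t = t_B(vB, vS) - t_S(vB, vS)
            # t(v) collapses to lam * (v_B - v_S - r), i.e. affine in v
            assert abs(t - lam * (vB - vS - r)) < 1e-12
```

In particular, at the highest type $(1,0)$ the transfer difference equals $\lambda_B(1-r)=r$, consistent with full trade there.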
Then, together with (iv) in Proposition \ref{p1}, (A) and \eqref{eq2}, we obtain that for any $v_B-v_S\ge r$, \begin{equation} \label{eq3} \lambda_Bv_B+\lambda_Sv_S+\mu = (v_B-v_S)q^*(v)-\int_{v_S+r}^{v_B}q^*(b,v_S)db-\int_{v_S}^{v_B-r}q^*(v_B,s)ds \end{equation} To solve for the trading probability, we first take first-order derivatives with respect to $v_B$ and $v_S$ respectively, and obtain \begin{equation} \label{eq4} (v_B-v_S)\frac{\partial q^*(v_B,v_S)}{\partial v_B}-\frac{\partial \int_{v_S}^{v_B-r}q^*(v_B,s)ds }{\partial v_B}=\lambda_B \end{equation} \begin{equation} \label{eq5} (v_B-v_S)\frac{\partial q^*(v_B,v_S)}{\partial v_S}-\frac{\partial \int_{v_S+r}^{v_B}q^*(b,v_S)db }{\partial v_S}=\lambda_S \end{equation} Then we take the cross partial derivative and, with some algebra, obtain \begin{equation} \label{eq6} (v_B-v_S)\frac{\partial^2 q^*(v_B,v_S)}{\partial v_B\partial v_S}=0 \end{equation} Thus, $q^*(v_B,v_S)$ is separable and can be written as \begin{equation}\label{eq7} q^*(v_B,v_S)=f(v_B)+g(v_S) \end{equation} Plugging \eqref{eq7} into \eqref{eq4} and \eqref{eq5}, we obtain \begin{equation}\label{eq8} rf'(v_B)-(f(v_B)+g(v_B-r))=\lambda_B \end{equation} \begin{equation}\label{eq9} rg'(v_S)+f(v_S+r)+g(v_S)=\lambda_S \end{equation} Note both \eqref{eq8} and \eqref{eq9} involve the two functions $f$ and $g$.
We guess (C) that $f(v_B)+g(v_B-r)=0$ and $f(v_S+r)+g(v_S)=0$ for any $v$. Then we can easily solve \eqref{eq8} and \eqref{eq9}, and we obtain \begin{equation}\label{eq10} f(v_B)=\frac{\lambda_B}{r}v_B+c_B \end{equation} \begin{equation}\label{eq11} g(v_S)=\frac{\lambda_S}{r}v_S+c_S \end{equation} In order for (C) to hold, we must have \begin{equation}\label{eq12} \lambda_B=-\lambda_S, c_B+c_S+\lambda_B=0 \end{equation} Now plugging \eqref{eq10}, \eqref{eq11} and \eqref{eq12} into \eqref{eq7}, we obtain for any $v_B-v_S\ge r$, \begin{equation}\label{eq13} q^*(v_B,v_S)=\frac{\lambda_B}{r}(v_B-v_S-r) \end{equation} Finally, using (B), i.e., $q^*(1,0)=1$, we obtain $\lambda_B=\frac{r}{1-r}$, and therefore, \begin{equation}\label{eq14} q^*(v_B,v_S)=\frac{1}{1-r}(v_B-v_S-r) \end{equation} \subsubsection{Characterization of \textbf{Worst-Case Joint Distribution (I)}} Now we illustrate \textbf{Worst-Case Joint Distribution (I)}. As mentioned in the previous subsection, we expect the worst-case joint distribution to exhibit indifference to many mechanisms. We propose a guess that the worst-case joint distribution exhibits the property that the weighted virtual value is positive only for the highest type $(1,0)$, zero for the other value profiles in the support and weakly negative for value profiles outside the support\footnote{By the definition of the weighted virtual values, the weighted virtual values are weakly negative for value profiles outside the support.}.
Formally, we guess (D) that in the worst-case joint distribution, we have \begin{equation}\label{eq15} \Phi(1,0)>0 \end{equation} \begin{equation}\label{eq16} \Phi(v)=0 \quad \forall v_B-v_S\ge r\quad \text{and} \quad v \neq (1,0) \end{equation} \begin{equation}\label{eq17} \Phi(v)\le 0 \quad \forall v_B-v_S< r \end{equation} Now if the joint distribution satisfies \eqref{eq15}, \eqref{eq16} and \eqref{eq17}, then any feasible and monotone mechanism in which trade occurs with some positive probability if and only if $v_B-v_S>r$ and trade occurs with probability 1 when $(v_B,v_S)=(1,0)$ yields the same payoff to Nature, and is optimal for the designer. Then, the only remaining issue is whether we can construct a plausible joint distribution satisfying \eqref{eq15}, \eqref{eq16} and \eqref{eq17}.\\ \indent We give an affirmative answer by taking a constructive approach. We begin by constructing the joint distribution for the boundary value profiles, i.e., either $v_B=1$ or $v_S=0$. Assume $Pr^*(1,0)=m$. Consider value profiles $(v_B,0)$ in which $r\le v_B < 1$. Define $S^*(v_B,0)\equiv \int_{[v_B,1)}\pi^*(b,0)db+Pr^*(1,0)$ for $r\le v_B< 1$; $S^*(1,0)\equiv Pr^*(1,0)=m$. Then we have $\pi^*(v_B,0)=-\frac{\partial S^*(v_B,0)}{\partial v_B}$ for $r\le v_B< 1$. Since the weighted virtual values for value profiles $(v_B,0)$ in which $r\le v_B < 1$ are zero, we obtain for $r\le v_B< 1$, \begin{equation}\label{eq18} \pi^*(v_B,0)(v_B-0)-S^*(v_B,0)=0 \end{equation} Note \eqref{eq18} is a simple ordinary differential equation, whose solution is \begin{equation}\label{eq19} S^*(v_B,0)=\frac{m}{v_B}, \pi^*(v_B,0)=\frac{m}{v^2_B} \quad \forall r\le v_B< 1 \end{equation} Then consider value profiles $(1,v_S)$ in which $0< v_S \le 1-r$. Define $S^*(1,v_S)\equiv \int_{(0,v_S]}\pi^*(1,s)ds+Pr^*(1,0)$ for $0< v_S\le 1-r$. Then we have $\pi^*(1,v_S)=\frac{\partial S^*(1,v_S)}{\partial v_S}$ for $0< v_S \le 1-r$.
Since the weighted virtual values for value profiles $(1,v_S)$ in which $0< v_S \le 1-r$ are zero, we obtain for $0< v_S \le 1-r$, \begin{equation}\label{eq20} \pi^*(1,v_S)(1-v_S)-S^*(1,v_S)=0 \end{equation} Note \eqref{eq20} is also a simple ordinary differential equation, whose solution is \begin{equation}\label{eq21} S^*(1,v_S)=\frac{m}{1-v_S}, \pi^*(1,v_S)=\frac{m}{(1-v_S)^2}\quad \forall 0<v_S\le 1-r \end{equation} Now we construct the joint distribution for the interior value profiles in the support, i.e., $v_B-v_S\ge r$ and $v_B\neq 1, v_S\neq 0$. Define $S^*(v_B,v_S)\equiv \int_{[v_B,1)}\pi^*(b,v_S)db+\pi^*(1,v_S)$ for $v_B-v_S\ge r$ and $v_B\neq 1, v_S\neq 0$. Then we have $\pi^*(v_B,v_S)=-\frac{\partial S^*(v_B,v_S)}{\partial v_B}$ for $v_B-v_S\ge r$ and $v_B\neq 1, v_S\neq 0$. Since the weighted virtual values for value profiles $(v_B,v_S)$ in which $v_B-v_S\ge r$ and $v_B\neq 1, v_S\neq 0$ are zero, we obtain for $v_B-v_S\ge r$ and $v_B\neq 1, v_S\neq 0$, \begin{equation}\label{eq22} \pi^*(v_B,v_S)(v_B-v_S)-S^*(v_B,v_S)-\int_{(0,v_S]}\pi^*(v_B,s)ds-\pi^*(v_B,0)=0 \end{equation} Note \eqref{eq22} is a (second order) partial differential equation. By taking the cross partial derivative, we find $S^*(v_B,v_S)$ is not separable. We take the guess-and-verify approach to solve the PDE. We guess that for $v_B-v_S\ge r$ and $v_B\neq 1, v_S\neq 0$, \begin{equation}\label{eq23} S^*(v_B,v_S)=\frac{m}{(v_B-v_S)^2} \end{equation} Then the LHS of \eqref{eq22} is $\frac{2m}{(v_B-v_S)^3}(v_B-v_S)-\frac{m}{(v_B-v_S)^2}- \int_{(0,v_S]}\frac{2m}{(v_B-s)^3}ds-\frac{m}{v^2_B}$, which can be shown to be 0 with some algebra. Thus, the guess is verified.\\ \indent To solve for $m$, we use the fact that $\pi^*(v)$ is a distribution. We note the marginal density for $S$ is $\pi^*_S(v_S)=S^*(v_S+r,v_S)=\frac{m}{(v_S+r-v_S)^2}=\frac{m}{r^2}$ for $0<v_S\le 1-r$ and the probability mass at 0 is $Pr^*_S(0)=S^*(r,0)=\frac{m}{r}$.
Since the total probability is 1, we obtain \begin{equation}\label{eq24} \frac{m}{r}+\frac{m}{r^2}\cdot (1-r)=1 \end{equation} Thus, we obtain $m=r^2$.\\ \indent So far we have constructed \textbf{Worst-Case Joint Distribution (I)}. The final step is to make sure that \textbf{Worst-Case Joint Distribution (I)} satisfies the mean constraints, which will allow us to solve for the monopoly reserve $r$. Given the marginal distributions for $S$ and $B$, we have the following mean constraints, \begin{equation}\label{eq25} r\cdot 1+\int_r^1tdt=M_B \end{equation} \begin{equation}\label{eq26} r\cdot 0+\int_0^{1-r}tdt=M_S \end{equation} Summing up \eqref{eq25} and \eqref{eq26}, we obtain $M_B+M_S=1$, which is the special case we are considering. Thus, in the special case where $M_B+M_S=1$, we have a (unique) solution $r=1-\sqrt{1-(M_B-M_S)}$. \begin{remark} The symmetric case may be a reasonable assumption for situations in which the designer knows both sides have similar eagerness to trade. \end{remark} \subsection{The Asymmetric Case: $M_B+M_S\neq 1$} We now turn to the asymmetric case in which $M_B+M_S\neq 1$. We follow the same approach as in the symmetric case. Indeed, the characterization for the symmetric case provides us with good intuitions about the solution to the general case, which will be made clear shortly. We propose the following \textbf{Maxmin Trade Mechanism (II)} and \textbf{Worst-Case Joint Distribution (II)}. Formally, they are described as follows.\\ \textbf{Maxmin Trade Mechanism (II)}\\ Let $v=(v_B,v_S)$ be the reported value profile of the two agents.
If $r_2v_B-(1-r_1)v_S\ge r_1r_2$, then $$q^*(v_B,v_S)=\frac{1}{\ln\frac{1-r_2}{r_1}}(\ln((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})-\ln((\frac{1-r_1}{r_2}-1)v_S+r_1))$$ $$t^*_B(v_B,v_S)=-\frac{r_1r_2}{(1-r_1-r_2)\ln\frac{1-r_2}{r_1}}(\ln((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})-\ln((\frac{1-r_1}{r_2}-1)v_S+r_1))$$$$ +\frac{1}{\ln\frac{1-r_2}{r_1}}(v_B-\frac{1-r_1}{r_2}v_S-r_1)$$ $$t^*_S(v_B,v_S)=-\frac{r_1r_2}{(1-r_1-r_2)\ln\frac{1-r_2}{r_1}}(\ln((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})-\ln((\frac{1-r_1}{r_2}-1)v_S+r_1))$$$$ +\frac{1}{\ln\frac{1-r_2}{r_1}}(\frac{r_2}{1-r_1}v_B-v_S-\frac{r_1r_2}{1-r_1})$$ where $r_1,r_2$ is the unique solution to the following equations: \begin{equation}\label{eq27} M_B=\int_{r_1}^1\frac{r_1(1-r_2)}{(\frac{1-r_1-r_2}{1-r_1}v_B+\frac{r_1r_2}{1-r_1})^2}v_Bdv_B+r_1:= H_1(r_1,r_2) \end{equation} \begin{equation}\label{eq28} M_S=\int_{0}^{r_2}\frac{r_1(1-r_2)}{(\frac{1-r_1-r_2}{r_2}v_S+r_1)^2}v_Sdv_S:= H_2(r_1,r_2) \end{equation}\\ Otherwise $$q^*(v_B,v_S)=t^*_B(v_B,v_S)=t^*_S(v_B,v_S)=0$$ \textbf{Worst-Case Joint Distribution (II)}\\ Let $\pi^*(v_B,v_S)$ denote the density of the value profile $(v_B,v_S)$ whenever the density exists. Let $Pr^*(v_B,v_S)$ denote the probability mass of the value profile $(v_B,v_S)$ whenever there is some probability mass on $(v_B,v_S)$. Let $V(II):=\{v|r_2v_B-(1-r_1)v_S\ge r_1r_2\}$. Worst-Case Joint Distribution (II) has the support $V(II)$ and is defined as follows: $$\pi^*(v_B,v_S)= \left\{ \begin{array}{lll} \frac{2r_1(1-r_2)}{(v_B-v_S)^3} & & {r_2v_B-(1-r_1)v_S\ge r_1r_2,v_B\neq 1,v_S \neq 0}\\ \frac{r_1(1-r_2)}{(1-v_S)^2} & & {v_B=1,0<v_S \le r_2}\\ \frac{r_1(1-r_2)}{v_B^2} & & {r_1\le v_B < 1,v_S=0} \end{array} \right. 
$$ $$Pr^*(1,0)=r_1(1-r_2)$$ Equivalently, \textbf{worst-case joint distribution (II)} can be described by worst-case marginal distributions and conditional distributions as follows:\\ For $B$, the worst-case marginal distribution is $$\pi^*_B(v_B)=\frac{r_1(1-r_2)}{(\frac{1-r_1-r_2}{1-r_1}v_B+\frac{r_1r_2}{1-r_1})^2}$$ for $r_1\le v_B <1$, $$Pr^*_B(1)=r_1$$ That is, the worst-case marginal distribution for $B$ is some generalized Pareto distribution on $[r_1,1)$ with a probability mass $r_1$ on 1.\\ For $S$, the worst-case marginal distribution is $$\pi^*_S(v_S)=\frac{r_1(1-r_2)}{(\frac{1-r_1-r_2}{r_2}v_S+r_1)^2}$$ for $0<v_S\le r_2$, $$Pr^*_S(0)=1-r_2$$ That is, the worst-case marginal distribution for $S$ is some generalized Pareto distribution on $(0,r_2]$ with a probability mass $1-r_2$ on 0.\\ The conditional distribution can be easily derived from the joint distribution and the marginal distributions. It can be seen that the conditional distribution is some generalized Pareto distribution with a probability mass on either 0 or 1, which shares the same feature as in the symmetric case; we therefore omit the description of the conditional distribution. \begin{remark} {Worst-Case Joint Distribution (II)} exhibits positive correlation for $r_1\le v_B<1$ and $0<v_S\le r_2$.\footnote{To see this, note $F(v_S|v_B)=\frac{(\frac{1-r_1-r_2}{1-r_1}v_B+\frac{r_1r_2}{1-r_1})^2}{(v_B-v_S)^2}$ is decreasing w.r.t. $v_B$ for $r_1\le v_B <1$. The positive correlation breaks when $v_B=1$. Similarly, $F(v_B|v_S)=1-\frac{(\frac{1-r_1-r_2}{r_2}v_S+r_1)^2}{(v_B-v_S)^2}$ is decreasing w.r.t. $v_S$ for $0< v_S \le r_2$. The positive correlation breaks when $v_S=0$.} \end{remark} \begin{lemma}\label{l2} For any given $M_B+M_S\neq 1$, there is a unique solution $r_1,r_2$ to \eqref{eq27} and \eqref{eq28}. \end{lemma} \begin{theorem}\label{t2} When $M_B+M_S\neq 1$, \textbf{Maxmin Trade Mechanism (II)} and \textbf{Worst-Case Joint Distribution (II)} form a Nash equilibrium.
The revenue guarantee is $r_1(1-r_2)$. \end{theorem} \subsubsection{Characterization of \textbf{Maxmin Trade Mechanism (II)}} As mentioned in the introduction, we guess (A') that in the maxmin solution, trade occurs with positive probability if and only if the difference between the weighted private values of $B$ and $S$ exceeds a certain threshold, i.e., $r_2v_B-(1-r_1)v_S\ge r_1r_2$. We further conjecture that the support of the worst-case joint distribution $\pi^*$ is the area in which $r_2v_B-(1-r_1)v_S\ge r_1r_2$. Then together with (iv) in Proposition 1, (A') and \eqref{eq2}, we obtain that for any $r_2v_B-(1-r_1)v_S\ge r_1r_2$, \begin{equation} \label{eq29} \lambda_Bv_B+\lambda_Sv_S+\mu = (v_B-v_S)q(v_B,v_S)-\int_{\frac{1-r_1}{r_2}v_S+r_1}^{v_B} q(b,v_S)db-\int_{v_S}^{\frac{r_2}{1-r_1}(v_B-r_1)} q(v_B,s)ds \end{equation} To solve for the trading probability, first we take first order derivatives with respect to $v_B$ and $v_S$ respectively, and we obtain \begin{equation} \label{eq30} (v_B-v_S)\frac{\partial q^*(v_B,v_S)}{\partial v_B}-\frac{\partial \int_{v_S}^{\frac{r_2}{1-r_1}(v_B-r_1)} q(v_B,s)ds }{\partial v_B}=\lambda_B \end{equation} \begin{equation} \label{eq31} (v_B-v_S)\frac{\partial q^*(v_B,v_S)}{\partial v_S}-\frac{\partial \int_{\frac{1-r_1}{r_2}v_S+r_1}^{v_B} q(b,v_S)db }{\partial v_S}=\lambda_S \end{equation} Then, taking the cross partial derivative, with some algebra, we obtain \begin{equation} \label{eq32} (v_B-v_S)\frac{\partial^2 q^*(v_B,v_S)}{\partial v_B\partial v_S}=0 \end{equation} Thus, $q^*(v_B,v_S)$ is separable, which can be written as (with abuse of notation) \begin{equation}\label{eq33} q^*(v_B,v_S)=f(v_B)+g(v_S) \end{equation} Plugging \eqref{eq33} into \eqref{eq30} and \eqref{eq31}, we obtain \begin{equation}\label{eq34} ((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})f'(v_B)-\frac{r_2}{1-r_1}(f(v_B)+g(\frac{r_2}{1-r_1}(v_B-r_1)))=\lambda_B \end{equation} \begin{equation}\label{eq35}
((\frac{1-r_1}{r_2}-1)v_S+r_1)g'(v_S)+\frac{1-r_1}{r_2}(f(\frac{1-r_1}{r_2}v_S+r_1)+g(v_S))=\lambda_S \end{equation} Note both \eqref{eq34} and \eqref{eq35} involve the two functions $f$ and $g$. We guess (C') that $f(v_B)+g(\frac{r_2}{1-r_1}(v_B-r_1))=0$ and $f(\frac{1-r_1}{r_2}v_S+r_1)+g(v_S)=0$ for any $v$, then we can easily solve \eqref{eq34} and \eqref{eq35}, and we obtain \begin{equation}\label{eq36} f(v_B)=\frac{(1-r_1)\lambda_B}{1-r_1-r_2}\ln{((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})}+c_B \end{equation} \begin{equation}\label{eq37} g(v_S)=\frac{r_2\lambda_S}{1-r_1-r_2}\ln{((\frac{1-r_1}{r_2}-1)v_S+r_1)}+c_S \end{equation} Observe that $$ g(\frac{r_2}{1-r_1}(v_B-r_1))=\frac{r_2\lambda_S}{1-r_1-r_2}\ln{((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})}+c_S$$ Then, in order for (C') to hold, we must have \begin{equation}\label{eq38} (1-r_1)\lambda_B=-r_2\lambda_S, c_B+c_S=0 \end{equation} Now plugging \eqref{eq36}, \eqref{eq37} and \eqref{eq38} into \eqref{eq33}, we obtain for any $r_2v_B-(1-r_1)v_S\ge r_1r_2$, \begin{equation}\label{eq39} q^*(v_B,v_S)=\frac{(1-r_1)\lambda_B}{1-r_1-r_2}(\ln{((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})}-\ln{((\frac{1-r_1}{r_2}-1)v_S+r_1)}) \end{equation} Finally, using (B), i.e., $q^*(1,0)=1$, we obtain $\lambda_B=\frac{1-r_1-r_2}{(1-r_1)\ln{\frac{1-r_2}{r_1}}}$, and therefore, \begin{equation}\label{eq40} q^*(v_B,v_S)=\frac{1}{\ln\frac{1-r_2}{r_1}}(\ln((1-\frac{r_2}{1-r_1})v_B+\frac{r_1r_2}{1-r_1})-\ln((\frac{1-r_1}{r_2}-1)v_S+r_1)) \end{equation} \subsubsection{Characterization of \textbf{Worst-Case Joint Distribution (II)}} Just as in the characterization for the special case, we guess that for the general case, the worst-case joint distribution also exhibits the property that the weighted virtual value is positive only for the highest type (1,0), zero for the other value profiles in the support and weakly negative for value profiles outside the support.
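As a quick numerical sanity check on \eqref{eq40} (a sketch only; the parameter values $r_1=0.3$, $r_2=0.4$ are arbitrary), one can verify that $q^*$ equals 1 at the highest type $(1,0)$, vanishes on the line boundary $r_2v_B-(1-r_1)v_S=r_1r_2$, and is increasing in $v_B$:

```python
import math

r1, r2 = 0.3, 0.4   # arbitrary test parameters with r1 + r2 != 1

def q_star(vB, vS):
    # trading probability \eqref{eq40}
    c = math.log((1 - r2) / r1)
    return (math.log((1 - r2 / (1 - r1)) * vB + r1 * r2 / (1 - r1))
            - math.log(((1 - r1) / r2 - 1) * vS + r1)) / c

assert abs(q_star(1, 0) - 1) < 1e-12           # q*(1,0) = 1, condition (B)
for vS in (0.05, 0.2, 0.35):
    vB = ((1 - r1) * vS + r1 * r2) / r2        # a point on the line boundary
    assert abs(q_star(vB, vS)) < 1e-9          # q* vanishes on the boundary
assert 0 < q_star(0.8, 0.1) < q_star(0.9, 0.1) < 1   # monotone in v_B
```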
The construction procedure for the joint distribution is exactly the same. Therefore we omit it. However, note here the marginal distributions no longer have a uniform part since $v_B-v_S$ is no longer constant on the line boundary due to different weights for $B$ and $S$. We start with the derivation of the marginal distribution of $S$. $\pi^*_S(v_S)=S^*(\frac{1-r_1}{r_2}v_S+r_1,v_S)=\frac{m}{((\frac{1-r_1}{r_2}-1)v_S+r_1)^2}$ for $0<v_S\le r_2$ and $\pi^*_S(v_S=0)=S^*(r_1,0)=\frac{m}{r_1}$. Since the integration is 1, we obtain \begin{equation}\label{eq41} \frac{m}{r_1}+\int_0^{r_2}\frac{m}{((\frac{1-r_1}{r_2}-1)v_S+r_1)^2}dv_S=1 \end{equation} \indent With some algebra, we obtain $m=r_1(1-r_2)$. The final step is to make sure that \textbf{Worst-Case Joint Distribution (II)} satisfies the mean constraints, which will allow us to solve for $r_1,r_2$. Given the marginal distributions for $S$ and $B$, we have a system of two equations \eqref{eq27} and \eqref{eq28}. Lemma \ref{l2} states the solution exists and is unique for the general case, details of which are left to the Appendix. \begin{remark} We can now consider a general model in which the designer can destroy the good not necessarily uniformly. To wit, the sum of the final allocation $q_B(v_B,v_S)$ to $B$ and $q_S(v_B,v_S)$ to $S$ does not exceed 1. Formally, the constraints now become $$v_Bq_B(v)-t_B(v)\ge 0 \quad \forall v \quad (EPIR_B)$$$$ v_Bq_B(v)-t_B(v)\ge v_Bq_B(v_B',v_S)-t_B(v_B',v_S)\quad \forall v,v_B' \quad(DSIC_B)$$ $$v_S q_S(v)+t_S(v)\ge v_S \quad \forall v \quad (EPIR''_S)$$$$ v_Sq_S(v)+t_S(v)\ge v_S q_S(v_B,v_S')+t_S(v_B,v_S')\quad \forall v,v_S' \quad(DSIC''_S)$$$$ q_B(v)+q_S(v)\le 1\quad \forall v \quad(Feasibility'')$$ We argue the solution to the above problem coincides with our main results.
To see this, first note a simple adaptation of Proposition 1 yields a virtual representation of the revenue for the above problem:$$E_{\pi}t=E_{\pi}[q_B\phi_B+q_S\phi_S]-1$$ where $\phi_B(v)= v_B-\frac{1-\Pi_B(v_B|v_S)}{\pi_B(v_B|v_S)}, \phi_S(v)= v_S+\frac{\Pi_S(v_S|v_B)}{\pi_S(v_S|v_B)}$. Given the constructed joint distribution in the main results, $\phi_B=\phi_S >0$ for any interior value profile except for the highest type (1,0), in which $1=\phi_B(1,0)>\phi_S (1,0) =0$. It is easy to see the constructed trade mechanisms remain optimal. \end{remark} \section{Deterministic Mechanisms} In this section, we restrict attention to deterministic DSIC and EPIR trade mechanisms and characterize the maxmin trade mechanisms in this class of mechanisms. This section is motivated by practical concerns. To wit, deterministic mechanisms are easier to understand and more practical than randomized mechanisms in many situations, e.g., when the agents do not trust the randomization device. Note that Proposition 1 still holds, with the additional property that $q(v)$ is either 0 or 1 for any $v$. \\ \indent We begin with a definition, which is useful for exposition. \begin{definition} \textbf{Trade boundary} of a given deterministic DSIC and EPIR trade mechanism $(q,t_B,t_S)$ is a set of value profiles $\mathcal{B}:=\{\bar{v}=(\bar{v_B},\bar{v_S})| q(v)=1 \quad \text{if}\quad \exists \bar{v} \quad s.t. \quad v_B\ge \bar{v_B}, v_S< \bar{v_S}\quad or \quad v_B> \bar{v_B}, v_S\le \bar{v_S}; q(v)=0\quad \text{if}\quad \exists \bar{v} \quad s.t. \quad v_B\le \bar{v_B}, v_S\ge \bar{v_S}\}$.\footnote{For technical reasons, we assume the trading probability on the trade boundary is 0. This is to have a minimization problem for Nature. Otherwise we have to replace $\min$ with $\inf$. See also \cite{carrasco2018optimal}. } \end{definition} We observe the trade boundary exhibits a monotone property, which is summarized below.
\footnote{To see this, since $\bar{v}' \in \mathcal{B}$ and $\bar{v_B}>\bar{v_B}'$, $q(v_B,\bar{v_S}')=1$. Then by definition, $\bar{v_S}\ge \bar{v_S}'$.} \begin{observation} If $\bar{v},\bar{v}' \in \mathcal{B}$ and $\bar{v_B}>\bar{v_B}'$, then $\bar{v_S}\ge \bar{v_S}'$. \end{observation} \indent The main idea is as follows. We divide all possible deterministic DSIC and EPIR trade mechanisms into four classes according to the trade boundary. By strong duality, we can work on the dual program. We propose a relaxation of the dual program by omitting many constraints. The merit of doing so is to have a finite-dimensional linear programming problem. Then we derive an upper bound of the value of the relaxation for each class. We identify the greatest upper bound and then show that the greatest upper bound can be achieved by constructing the deterministic mechanisms and the worst-case joint distribution. \begin{theorem}\label{t3} When $\sqrt{M_S}+\sqrt{1-M_B}< 1$, any deterministic mechanism satisfying the following properties is a maxmin deterministic mechanism:\\ (i). $(1-\sqrt{1-M_B},0) \in \mathcal{B}, (1,\sqrt{M_S})\in \mathcal{B}$.\\ (ii). $\mathcal{B}$ lies on or above the line boundary $\sqrt{M_S}v_B-\sqrt{1-M_B}v_S=\sqrt{M_S}(1-\sqrt{1-M_B})$.\\ (iii). Transfers are characterized by Proposition 1.\\ The worst-case joint distribution puts point masses $\sqrt{1-M_B}$, $\sqrt{M_S}$ and $1-\sqrt{1-M_B}-\sqrt{M_S}$ on the value profiles $(1-\sqrt{1-M_B},0), (1,\sqrt{M_S})$ and $(1,0)$ respectively. The revenue guarantee is $(1-\sqrt{M_S}-\sqrt{1-M_B})^2$; when $\sqrt{M_S}+\sqrt{1-M_B}\ge 1$, no trade is optimal. \end{theorem} That is, we characterize the whole class of maxmin deterministic mechanisms. The worst-case joint distribution is discrete, and is the same across the mechanisms in this class. Now we provide examples of the trading rules of some maxmin deterministic mechanisms.
\begin{example} \textit{Linear Trading}: trade occurs with probability 1 if and only if $\sqrt{M_S}v_B-\sqrt{1-M_B}v_S>\sqrt{M_S}(1-\sqrt{1-M_B})$. \end{example} \begin{example} \textit{Threshold Trading}: trade occurs with probability 1 if and only if $v_B>1-\sqrt{1-M_B}$ and $v_S< \sqrt{M_S}$. \end{example} \section{Concluding Remarks} In this paper, we provide a complete characterization of the maxmin trade mechanisms and the worst-case joint distributions when the designer knows only the expectations of the values, among all DSIC and EPIR mechanisms. The maxmin trade mechanisms are novel, featuring either linear randomization for the symmetric case or logarithmic-linear randomization for the asymmetric case. In addition, the revenue guarantee is positive as long as the expectation of the buyer's value exceeds the expectation of the seller's value. The key step in the construction of the worst-case joint distributions is to obtain a system of differential equations from properties of the weighted virtual value. The construction method may be of independent interest and useful for other design problems, e.g., multidimensional Bayesian persuasion, and even more general robust optimization problems.
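As a closing numerical illustration, the discrete worst-case distribution of Theorem \ref{t3} can be checked to be a valid distribution with the required means; the following sketch uses the arbitrary means $M_B=0.8$, $M_S=0.1$, chosen so that $\sqrt{M_S}+\sqrt{1-M_B}<1$:

```python
import math

M_B, M_S = 0.8, 0.1            # arbitrary means with sqrt(M_S) + sqrt(1 - M_B) < 1
a, b = math.sqrt(1 - M_B), math.sqrt(M_S)
assert a + b < 1
# point masses of the worst-case distribution in Theorem 3
masses = {(1 - a, 0.0): a, (1.0, b): b, (1.0, 0.0): 1 - a - b}
assert abs(sum(masses.values()) - 1) < 1e-12      # a valid probability distribution
mean_B = sum(p * vB for (vB, vS), p in masses.items())
mean_S = sum(p * vS for (vB, vS), p in masses.items())
assert abs(mean_B - M_B) < 1e-12                  # matches the buyer's mean
assert abs(mean_S - M_S) < 1e-12                  # matches the seller's mean
guarantee = (1 - b - a) ** 2                      # revenue guarantee in Theorem 3
assert guarantee > 0
```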
\newpage \section{Appendix} \subsection*{A Proofs for Section 4} \subsubsection*{A.1 Proof of Proposition \ref{p1}} (i) $q(v_B,v_S)$ is nondecreasing in $v_B$ and nonincreasing in $v_S$:\\ Dominant strategy incentive compatibility for a type $v_B$ of $B$ requires that for any $v_S$ and $v_B'\neq v_B$: $$v_Bq(v_B,v_S) - t_B(v_B,v_S)\ge v_Bq(v_B',v_S) - t_B(v_B',v_S) $$DSIC also requires that:$$ v_B'q(v_B',v_S) - t_B(v_B',v_S)\ge v_B'q(v_B,v_S) - t_B(v_B,v_S)$$Adding the two inequalities, we have that: $$(v_B-v_B')(q(v_B,v_S)-q(v_B',v_S))\ge 0$$It follows that $q(v_B,v_S)\ge q(v_B',v_S)$ whenever $v_B>v_B'$.\\ Similarly, dominant strategy incentive compatibility for a type $v_S$ of $S$ requires that for any $v_B$ and $v_S'\neq v_S$: $$v_S(1-q(v_B,v_S)) + t_S(v_B,v_S)\ge v_S(1-q(v_B,v_S')) + t_S(v_B,v_S') $$DSIC also requires that:$$ v_S'(1-q(v_B,v_S')) +t_S(v_B,v_S')\ge v_S'(1-q(v_B,v_S)) + t_S(v_B,v_S)$$Adding the two inequalities, we have that: $$(v_S-v_S')(q(v_B,v_S)-q(v_B,v_S'))\le 0$$It follows that $q(v_B,v_S)\le q(v_B,v_S')$ whenever $v_S>v_S'$.\\\\ (ii) $t_B(v_B,v_S)=v_Bq(v_B,v_S)-\int_{0}^{v_B}q(b,v_S)db$:\\ Fix $v_S$ and define $$U_B(v_B)=v_Bq(v_B,v_S)-t_B(v_B,v_S)$$ By the first two inequalities in (i), we get $$(v_B'-v_B)q(v_B,v_S)\le U_B(v_B')-U_B(v_B)\le (v_B'-v_B)q(v_B',v_S)$$ Dividing throughout by $v_B'-v_B$ (suppose $v_B'>v_B$): $$q(v_B,v_S)\le \frac{U_B(v_B')-U_B(v_B)}{(v_B'-v_B)}\le q(v_B',v_S)$$As $v_B\uparrow v_B'$, we get: $$ \frac{dU_B(v_B)}{dv_B}=q(v_B,v_S)$$Then we get $$t_B(v_B,v_S)= v_Bq(v_B,v_S)-\int_{0}^{v_B}q(b,v_S)db-U_B(0)$$ Note $U_B(0)\ge 0$ by the ex post IR constraint. If $U_B(0)>0$, then we can reduce it to 0 so that we can increase the payment from $B$ for all value profiles and the value of the problem will be strictly greater.
Thus, for any maxmin solution, $U_B(0)=0$ and $t_B(v_B,v_S)=v_Bq(v_B,v_S)-\int_{0}^{v_B}q(b,v_S)db$\\\\ (iii) $t_S(v)=1-(1-q(v))v_S-\int_{v_S}^1(1-q(v_B,s))ds$:\\ Similarly, fix $v_B$ and define $$U_S(v_S)=v_S(1-q(v_B,v_S))+t_S(v_B,v_S)$$ By the fourth and fifth inequalities in (i), we get $$(v_S'-v_S)(1-q(v_B,v_S))\le U_S(v_S')-U_S(v_S)\le (v_S'-v_S)(1-q(v_B,v_S'))$$ Dividing throughout by $v_S'-v_S$ (suppose $v_S'>v_S$): $$1-q(v_B,v_S)\le \frac{U_S(v_S')-U_S(v_S)}{(v_S'-v_S)}\le 1-q(v_B,v_S')$$As $v_S\uparrow v_S'$, we get: $$ \frac{dU_S(v_S)}{dv_S}=1-q(v_B,v_S)$$Then we get $$t_S(v_B,v_S)= U_S(1)-v_S(1-q(v_B,v_S))-\int_{v_S}^{1}(1-q(v_B,s))ds$$ Note $U_S(1)\ge 1$ by the ex post IR constraint. If $U_S(1)>1$, then we can reduce it to 1 so that we can decrease the payment to $S$ for all value profiles and the value of the problem will be strictly greater. Thus, for any maxmin solution, $U_S(1)=1$ and $t_S(v)=1-(1-q(v))v_S-\int_{v_S}^1(1-q(v_B,s))ds$.\\\\ (iv) $t(v)\equiv t_B(v)-t_S(v)=(v_B-v_S)q(v)-\int_0^{v_B}q(b,v_S)db-\int_{v_S}^1q(v_B,s)ds$:\\ This is implied by (ii) and (iii). \vspace{5mm} \subsubsection*{A.2 Proof of Lemma \ref{l1}} Given a DSIC and EPIR mechanism $(q,t_B,t_S)$, the (P) primal minimization problem of Nature is as follows (with dual variables in brackets): $$(Primal)\min_{F\in \Pi(M_B,M_S)}\int t(v)dF$$ s.t. $$\int v_BdF=M_B\quad (\lambda_B)$$ $$\int v_SdF=M_S\quad (\lambda_S)$$ $$\int dF=1 \quad (\mu)$$ It has the following (D) dual maximization problem: $$(Dual)\max_{\lambda_B,\lambda_S,\mu \in \mathcal{R}}\lambda_B M_B+\lambda_S M_S+\mu$$ s.t. $$\lambda_Bv_B+\lambda_Sv_S+\mu \le t(v) \quad (dF)$$ Note that the value of (P) is bounded by 1 as $t(v)\le 1$. In addition, the trivial joint distribution that puts all probability mass on the point $(M_B,M_S)$ is in the interior of the primal cone. Then by Theorem 3.12 in \cite{anderson1987linear}, strong duality holds. Then, by complementary slackness, \eqref{eq2} holds.
And \eqref{eq1} is implied by the feasibility constraints of (D). \vspace{5mm} \subsubsection*{A.3 Proof of Theorem \ref{t1}} We already illustrated the main idea and main steps in Section 4. Now we summarize them and give a formal argument. We will prove the proposed pair forms a Nash equilibrium.\\ (i): \textbf{Maxmin Trade Mechanism (I)} is a best response to \textbf{Worst-Case Joint Distribution (I)}: Note Worst-Case Joint Distribution (I) satisfies \eqref{eq18}, \eqref{eq20}, \eqref{eq22}. Also note there is a point mass on the value profile (1,0). Thus \eqref{eq15}, \eqref{eq16} and \eqref{eq17} hold. Then any feasible and monotone mechanism in which trade occurs with some positive probability if and only if $v_B - v_S> r$ and trade occurs with probability 1 when $(v_B,v_S) = (1,0)$ is a best response for the designer. It is easy to see that Maxmin Trade Mechanism (I) is such a mechanism.\\ (ii): \textbf{Worst-Case Joint Distribution (I)} is a best response to \textbf{Maxmin Trade Mechanism (I)}: we use duality theory to show (ii). First note that by \eqref{eq24}, \eqref{eq25} and \eqref{eq26}, all three constraints in (P) hold. By the weighted virtual value representation, the value of (P) given Worst-Case Joint Distribution (I) and Maxmin Trade Mechanism (I) is simply $Pr(1,0)\times (1-0)=r^2$. Second, note by \eqref{eq3} and $\lambda_B=\frac{r}{1-r}>0$ and $\lambda_S=-\frac{r}{1-r}<0$, the constraints in (D) hold for all value profiles. To see this, note for any value profile $v=(v_B,v_S)$ outside the support of Worst-Case Joint Distribution (I), $$\lambda_B v_B+\lambda_S v_S + \mu < \lambda_B r+\lambda_S 0 + \mu=0$$ For any value profile $v=(v_B,v_S)$ inside the support of Worst-Case Joint Distribution (I), the constraints trivially hold. Also, \eqref{eq3} is the complementary slackness condition.
Finally, the value of (D) given the constructed $\lambda_B,\lambda_S, \mu$ is $\lambda_B M_B+\lambda_S M_S +\mu$, which, by \eqref{eq25}, \eqref{eq26} and some algebra, is equal to $r^2$. By the linear programming duality theory, (ii) holds and the revenue guarantee is $r^2$. \subsubsection*{A.4 Proof of Lemma \ref{l2}} We start by establishing the following four claims regarding some properties of the functions $H_1(r_1,r_2)$ and $H_2(r_1,r_2)$, which will play a crucial role in establishing Lemma \ref{l2}. \begin{claim}\label{cl1} Fix any $0<r_1\le 1$, $H_2(r_1,r_2)$ is strictly increasing w.r.t. $r_2$ for $r_2 \in [0,1)$. In addition, fix any $0<r_1\le 1$, as $r_2 \uparrow 1$, $H_2(r_1,r_2)\rightarrow 1$. \end{claim} \begin{proof}[Proof of Claim \ref{cl1}] Note when $0<r_1\le 1$, \begin{equation}\label{eq42} H_2(r_1,r_2)=\frac{r_1(1-r_2)r_2^2}{(1-r_1-r_2)^2}\ln{\frac{1-r_2}{r_1}}-\frac{r_1r_2^2}{1-r_1-r_2} \end{equation} Now taking the first order derivative w.r.t. $r_2$ of \eqref{eq42}, with some algebra, we obtain \begin{equation}\label{eq43} \frac{\partial H_2(r_1,r_2)}{\partial r_2}=\frac{r_1r_2}{(1-r_1-r_2)^2}((2-3r_2+\frac{2r_2(1-r_2)}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2(1-r_1)) \end{equation} Then to show the first part of Claim \ref{cl1}, it suffices to show that for any $r_2 \in (0,1)$ \begin{equation}\label{eq44} (2-3r_2+\frac{2r_2(1-r_2)}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2(1-r_1)>0 \end{equation} Let $b\equiv \frac{1-r_2}{r_1}$, then $b\in (0,1)\cup (1,\infty)$.
Plugging $r_2=1-br_1$ into \eqref{eq44}, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$ \begin{equation}\label{eq45} (3br_1-1+\frac{2b(1-br_1)}{b-1})\ln{b}-2(1-r_1)>0 \end{equation} By slightly rewriting \eqref{eq45}, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$ \begin{equation} \label{eq46} \frac{b+1}{b-1}\ln{b}-2 +(\frac{b^2-3b}{b-1}\ln{b}+2)r_1>0 \end{equation} Then, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$, the following two inequalities hold \begin{equation} \label{eq47} \frac{b+1}{b-1}\ln{b}-2>0 \end{equation} \begin{equation}\label{eq48} \frac{b^2-3b}{b-1}\ln{b}+2>0 \end{equation} Now to prove \eqref{eq47}, it suffices to show that $f(b):=\ln{b}-\frac{2(b-1)}{b+1}>0$ for $b\in (1,\infty)$ and $f(b)<0$ for $b\in (0,1)$. Taking the first order derivative of $f(b)$, we obtain \begin{equation}\label{eq49} f'(b)=\frac{(b-1)^2}{b(b+1)^2} \end{equation} Therefore, $f(b)$ is strictly increasing. Note $f(1)=0$. Thus, we proved \eqref{eq47}. To prove \eqref{eq48}, it suffices to show that $g(b):=(b^2-3b)\ln{b}+2(b-1)>0$ for $b\in (1,\infty)$ and $g(b)<0$ for $b\in (0,1)$. Taking the first order derivative of $g(b)$, we obtain \begin{equation}\label{eq50} g'(b)=(2b-3)\ln{b}+b-1 \end{equation} Now taking the derivative again, we obtain \begin{equation}\label{eq51} g''(b)=2\ln{b}-\frac{3}{b}+3 \end{equation} Note $g''(b)$ is strictly increasing and $g''(1)=0$. This implies that $g'(b)$ is minimized at $b=1$. Note $g'(1)=0$. This implies that $g(b)$ is strictly increasing. Finally, note $g(1)=0$. This implies that \eqref{eq48} holds. \\ \indent So far we have shown the first part of Claim \ref{cl1}.
For the second part of Claim \ref{cl1}, we note that by L'H\^{o}pital's rule, we have \begin{equation} \lim_{x\rightarrow 0}x\ln{x}=\lim_{x\rightarrow 0}\frac{\ln{x}}{\frac{1}{x}}=\lim_{x\rightarrow 0}\frac{1/x}{-1/x^2}=\lim_{x\rightarrow 0}-x=0 \end{equation} Then, the first term of \eqref{eq42} goes to 0 as $r_2\uparrow 1$, and we obtain \begin{equation} \lim_{r_2\uparrow 1}H_2(r_1,r_2)=0-\frac{r_1}{1-r_1-1}=1 \end{equation} \end{proof} \begin{claim}\label{cl2} Fix any $0<r_2<1$, $H_2(r_1,r_2)$ is strictly increasing w.r.t. $r_1$ for $r_1 \in [0,1]$. \end{claim} \begin{proof}[Proof of Claim \ref{cl2}] Note when $0<r_2<1$, \eqref{eq42} holds. Now taking the first order derivative w.r.t. $r_1$ of \eqref{eq42}, with some algebra, we obtain \begin{equation}\label{eq54} \frac{\partial H_2(r_1,r_2)}{\partial r_1}=\frac{(1-r_2)r_2^2}{(1-r_1-r_2)^2}((1+\frac{2r_1}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2) \end{equation} Then it suffices to show that for any $r_1 \in (0,1)$ \begin{equation}\label{eq55} (1+\frac{2r_1}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2>0 \end{equation} Let $b\equiv \frac{1-r_2}{r_1}$, then $b\in (0,1)\cup (1,\infty)$. Plugging $r_2=1-br_1$ into \eqref{eq55}, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$ \begin{equation}\label{eq56} \frac{b+1}{b-1}\ln{b}-2>0 \end{equation} which is exactly \eqref{eq47} and has been shown in the proof of Claim \ref{cl1}. \end{proof} \begin{claim}\label{cl3} Fix any $r_2\in [0,1]$, $H_1(r_1,r_2)$ is strictly increasing w.r.t. $r_1$. In addition, for any $r_2\in [0,1]$, as $r_1\uparrow 1$, $H_1(r_1,r_2)\rightarrow 1$. \end{claim} \begin{proof}[Proof of Claim \ref{cl3}] Note when $r_2=1$, $H_1(r_1,r_2)=r_1$. Then both parts of Claim \ref{cl3} trivially hold when $r_2=1$. When $r_2=0$, $H_1(r_1,r_2)=r_1-r_1\ln{r_1}$. Taking the derivative w.r.t. $r_1$, we have $\frac{\partial H_1(r_1,r_2)}{\partial r_1}=-\ln{r_1}$. Since $r_1\ln{r_1}\rightarrow 0$ as $r_1\uparrow 1$, $\lim_{r_1\uparrow 1}H_1(r_1,r_2)=1-0=1$. Thus, Claim \ref{cl3} holds when $r_2=0$.
When $0<r_2<1$, \begin{equation}\label{eq57} H_1(r_1,r_2)=\frac{(1-r_2)r_1(1-r_1)^2}{(1-r_1-r_2)^2}\ln{\frac{1-r_2}{r_1}}-\frac{r_1r_2(1-r_1)}{1-r_1-r_2}+r_1 \end{equation} Now taking the first order derivative w.r.t. $r_1$ of \eqref{eq57}, with some algebra, we obtain \begin{equation}\label{eq58} \frac{\partial H_1(r_1,r_2)}{\partial r_1}=\frac{(1-r_1)(1-r_2)}{(1-r_1-r_2)^2}((1-3r_1+\frac{2r_1(1-r_1)}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2r_2) \end{equation} Then to show the first part of Claim \ref{cl3}, it suffices to show that for any $r_1 \in (0,1)$ \begin{equation}\label{eq59} (1-3r_1+\frac{2r_1(1-r_1)}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2r_2>0 \end{equation} Let $b\equiv \frac{1-r_2}{r_1}$, then $b\in (0,1)\cup (1,\infty)$. Plugging $r_2=1-br_1$ into \eqref{eq59}, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$ \begin{equation}\label{eq60} (1-3r_1+\frac{2(1-r_1)}{b-1})\ln{b}-2(1-br_1)>0 \end{equation} By slightly rewriting \eqref{eq60}, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$ \begin{equation}\label{eq61} \frac{b+1}{b-1}\ln{b}-2 +(-\frac{3b-1}{b-1}\ln{b}+2b)r_1>0 \end{equation} Then, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$, the following two inequalities hold \begin{equation}\label{eq62} \frac{b+1}{b-1}\ln{b}-2>0 \end{equation} \begin{equation}\label{eq63} -\frac{3b-1}{b-1}\ln{b}+2b>0 \end{equation} Note \eqref{eq62} is exactly \eqref{eq47}, which has been shown in the proof of Claim \ref{cl1}. To prove \eqref{eq63}, it suffices to show that (with abuse of notation) $g(b):=(1-3b)\ln{b}+2b(b-1)>0$ for $b\in (1,\infty)$ and $g(b)<0$ for $b\in (0,1)$. Taking the first order derivative of $g(b)$, we obtain \begin{equation}\label{eq64} g'(b)=4b-3\ln{b}+\frac{1}{b}-5 \end{equation} Now taking the derivative again, we obtain \begin{equation}\label{eq65} g''(b)=\frac{(4b+1)(b-1)}{b^2} \end{equation} Note $g''(b)>0$ when $b>1$, $g''(b)<0$ when $b<1$ and $g''(1)=0$. This implies that $g'(b)$ is minimized at $b=1$. Note $g'(1)=0$.
This implies that $g(b)$ is strictly increasing. Finally, note $g(1)=0$. This implies that \eqref{eq63} holds. \\ \indent So far we have shown the first part of Claim \ref{cl3}. For the second part of Claim \ref{cl3}, it trivially holds when $r_2\in (0,1)$ since the first two terms of \eqref{eq57} go to 0 trivially as $r_1$ goes to 1. \end{proof} \begin{claim}\label{cl4} Fix any $0<r_1<1$, $H_1(r_1,r_2)$ is strictly increasing w.r.t. $r_2$ for $r_2\in [0,1)$. In addition, fix any $0<r_1<1$, as $r_2 \uparrow 1$, $H_1(r_1,r_2)\rightarrow 1$. \end{claim} \begin{proof}[Proof of Claim \ref{cl4}] Note when $0<r_1<1$, \eqref{eq57} holds. Now taking the first order derivative w.r.t. $r_2$ of \eqref{eq57}, with some algebra, we obtain \begin{equation}\label{eq66} \frac{\partial H_1(r_1,r_2)}{\partial r_2}=\frac{(1-r_1)^2r_1}{(1-r_1-r_2)^2}((-1+\frac{2(1-r_2)}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2) \end{equation} Then it suffices to show that for any $r_2 \in (0,1)$ \begin{equation}\label{eq67} (-1+\frac{2(1-r_2)}{1-r_1-r_2})\ln{\frac{1-r_2}{r_1}}-2>0 \end{equation} Let $b\equiv \frac{1-r_2}{r_1}$, then $b\in (0,1)\cup (1,\infty)$. Plugging $r_2=1-br_1$ into \eqref{eq67}, it suffices to show that for any $b\in (0,1)\cup (1,\infty)$ \begin{equation}\label{eq68} \frac{b+1}{b-1}\ln{b}-2>0 \end{equation} which is exactly \eqref{eq47} and has been shown in the proof of Claim \ref{cl1}.\\ \indent So far we have shown the first part of Claim \ref{cl4}. For the second part of Claim \ref{cl4}, using L'H\^{o}pital's rule and the same argument as in the proof of Claim \ref{cl1}, the first term of \eqref{eq57} goes to 0 as $r_2\uparrow 1$, and we obtain \begin{equation}\label{eq69} \lim_{r_2\uparrow 1}H_1(r_1,r_2)=0-\frac{r_1(1-r_1)}{1-r_1-1}+r_1=1 \end{equation} \end{proof} We are now ready to prove Lemma \ref{l2}. Fix any $1\ge M_B>M_S>0$.
By Claim \ref{cl3}, Claim \ref{cl4} and the Inverse Function Theorem, for any $0\le r_2<1$, there exists a strictly decreasing function $F$ such that $r_1=F(r_2)$ is a solution to \eqref{eq27}; by Claim \ref{cl1}, Claim \ref{cl2} and the Inverse Function Theorem, for any $0< r_1\le 1$, there exists a strictly decreasing function $G$ such that $r_2=G(r_1)$ is a solution to \eqref{eq28}. Thus it suffices to prove that there exists $0<r_2<1$ such that \begin{equation}\label{eq70} G(F(r_2))=r_2 \end{equation} Note $G(F(\cdot))$ is a strictly increasing function. Also note $G(F(0))\in (0,1)$ since $F(0) \in (0,1]$ and $G(r_1) \in (0,1)$ when $r_1 \in (0,1]$. Now, by the Intermediate Value Theorem, it suffices to show that there exists some $0<r_2<1$ such that \begin{equation}\label{eq71} G(F(r_2))\le r_2 \end{equation} This is equivalent to showing there is some $0<r_2<1$ such that \begin{equation}\label{eq72} F(r_2)\ge G^{-1}(r_2) \end{equation} since $G$ is strictly decreasing. By Claim \ref{cl3}, this is equivalent to showing that there is some $0<r_2<1$ such that \begin{equation}\label{eq73} H_1(G^{-1}(r_2),r_2)\le M_B \end{equation} Let $\epsilon \equiv M_B-M_S >0$. We observe a relationship between the two functions $H_1$ and $H_2$ when $0<r_1\le 1$ and $0<r_2<1$: \begin{equation}\label{eq74} H_1(r_1,r_2)-H_2(r_1,r_2)= (\frac{(1-r_1)^2}{r_2^2}-1)H_2(r_1,r_2)+r_1(2-r_1) \end{equation} Note when $r_2\uparrow 1$, $G^{-1}(r_2) \rightarrow 0$. To see this, suppose not, then by Claim \ref{cl1}, $H_2(G^{-1}(r_2),r_2)\rightarrow 1$ when $r_2\uparrow 1$, a contradiction to $H_2(G^{-1}(r_2),r_2)= M_S<1$.
Then by the equation \eqref{eq74}, as $r_2\uparrow 1$, \begin{equation*} \begin{split} H_1(G^{-1}(r_2),r_2)-M_S & = H_1(G^{-1}(r_2),r_2)-H_2(G^{-1}(r_2),r_2) \\ & = (\frac{(1-G^{-1}(r_2))^2}{r_2^2}-1)H_2(G^{-1}(r_2),r_2)+G^{-1}(r_2)(2-G^{-1}(r_2))\\ & = (\frac{(1-G^{-1}(r_2))^2}{r_2^2}-1)M_S+G^{-1}(r_2)(2-G^{-1}(r_2))\\ & \rightarrow (\frac{(1-0)^2}{1^2}-1)M_S+0(2-0)\\ & = 0 \end{split} \end{equation*} This implies that there exists some $0<r_2<1$ such that \begin{equation}\label{eq75} |H_1(G^{-1}(r_2),r_2)-M_S|\le \frac{\epsilon}{2} \end{equation} Note \eqref{eq75} implies \eqref{eq73} as $H_1(G^{-1}(r_2),r_2)\le M_S+\frac{\epsilon}{2}<M_S+\epsilon=M_B$. Finally, the uniqueness of the solution is implied by the fact that $G(F(r))$ is strictly increasing w.r.t. $r$ and thus can cross the function $y(r):=r$ only once. \vspace{5mm} \subsubsection*{A.5 Proof of Theorem \ref{t2}} We already illustrated the main idea and main steps in Section 4. Now we summarize them and give a formal argument. We will prove the proposed pair forms a Nash equilibrium.\\ (i): \textbf{Maxmin Trade Mechanism (II)} is a best response to \textbf{Worst-Case Joint Distribution (II)}: Note by construction, Worst-Case Joint Distribution (II) exhibits the property that the weighted virtual value is positive only for the highest type (1,0), zero for the other value profiles in the support and negative for value profiles outside the support. Then any feasible and monotone mechanism in which trade occurs with some positive probability if and only if $r_2v_B - (1-r_1)v_S> r_1r_2$ and trade occurs with probability 1 when $(v_B,v_S) = (1,0)$ is a best response for the designer. It is easy to see that Maxmin Trade Mechanism (II) is such a mechanism.\\ (ii): \textbf{Worst-Case Joint Distribution (II)} is a best response to \textbf{Maxmin Trade Mechanism (II)}: we use duality theory to show (ii). First note that by \eqref{eq41}, \eqref{eq27} and \eqref{eq28}, all three constraints in (P) hold.
By the weighted virtual value representation, the value of (P) given Worst-Case Joint Distribution (II) and Maxmin Trade Mechanism (II) is simply $Pr(1,0)\times (1-0)=r_1(1-r_2)$. Second, note that by \eqref{eq29} and the facts that $\lambda_B=\frac{1-r_1-r_2}{(1-r_1)\ln{\frac{1-r_2}{r_1}}}>0$ (this holds no matter what the sign of $1-r_1-r_2$ is) and $\lambda_S=-\frac{1-r_1-r_2}{r_2\ln{\frac{1-r_2}{r_1}}}<0$, the constraints in (D) hold for all value profiles. To see this, note that for any value profile $v=(v_B,v_S)$ outside the support of Worst-Case Joint Distribution (II), $$\lambda_B v_B+\lambda_S v_S + \mu < \lambda_B r_1+\lambda_S \cdot 0 + \mu=0$$ For any value profile $v=(v_B,v_S)$ inside the support of Worst-Case Joint Distribution (II), the constraints hold trivially. Moreover, \eqref{eq29} gives complementary slackness. Third, the value of (D) given the constructed $\lambda_B,\lambda_S, \mu$ is $\lambda_B M_B+\lambda_S M_S +\mu$, which, by \eqref{eq27}, \eqref{eq28}, \eqref{eq42}, \eqref{eq57} and some algebra, is equal to $r_1(1-r_2)$. Finally, by Lemma \ref{l2}, the solution to \eqref{eq27} and \eqref{eq28} exists. By linear programming duality, (ii) holds and the revenue guarantee is $r_1(1-r_2)$.
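The sign claims for the dual multipliers are easy to confirm numerically: $1-r_1-r_2$ and $\ln\frac{1-r_2}{r_1}$ always share a sign (both are positive exactly when $1-r_2>r_1$), so $\lambda_B>0$ and $\lambda_S<0$ away from the knife-edge case $r_1+r_2=1$. A quick check over random draws (a sketch, not part of the proof):

```python
import math, random

# lam_B > 0 and lam_S < 0 for all 0 < r1 < 1, 0 < r2 < 1 with 1 - r1 - r2 != 0,
# because 1 - r1 - r2 and log((1 - r2)/r1) always share the same sign.
random.seed(0)
for _ in range(10000):
    r1, r2 = random.uniform(1e-3, 0.999), random.uniform(1e-3, 0.999)
    if abs(1.0 - r1 - r2) < 1e-9:        # the 0/0 knife-edge case is excluded
        continue
    log_term = math.log((1.0 - r2) / r1)
    lam_B = (1.0 - r1 - r2) / ((1.0 - r1) * log_term)
    lam_S = -(1.0 - r1 - r2) / (r2 * log_term)
    assert lam_B > 0.0 and lam_S < 0.0
print("sign pattern verified on 10k random draws")
```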
\subsection*{B Proofs for Section 5} \subsubsection*{B.1 Proof of Theorem \ref{t3}} \textbf{\textit{Step 1: Narrow down the search to a class of mechanisms}}\\ \indent We divide all deterministic, DSIC and EPIR trade mechanisms into the following four classes:\\ \textit{Class 1}: the trade boundary touches the value profiles $(r_1,1)$ and $(0,r_2)$ for some $0\le r_1\le 1, 0\le r_2\le 1$.\\ \textit{Class 2}: the trade boundary touches the value profiles $(0,r_1)$ and $(1,r_2)$ for some $0\le r_1\le 1, 0\le r_2\le 1$.\\ \textit{Class 3}: the trade boundary touches the value profiles $(r_1,0)$ and $(r_2,1)$ for some $0\le r_1\le 1, 0\le r_2\le 1$.\\ \textit{Class 4}: the trade boundary touches the value profiles $(r_1,0)$ and $(1,r_2)$ for some $0\le r_1\le 1, 0\le r_2\le 1$.\\ Note that by (i) of Proposition \ref{p1}, for each class, all the value profiles to the right of and below the trade boundary have trade probability 1. Then by (iv) of Proposition \ref{p1}, we can show that the revenue from the four vertices $(0,0),(0,1),(1,0), (1,1)$ is never strictly positive for \textit{Class 1}, \textit{Class 2} and \textit{Class 3}. To see this, note that for \textit{Class 1}: $t(0,0)=0-r_2=-r_2\le 0, t(0,1)=0, t(1,0)=(1-0)\cdot 1-1-1=-1<0, t(1,1)=(1-1)\cdot 1-(1-r_1)\cdot 1=-(1-r_1)\le 0$; for \textit{Class 2}: $t(0,0)=0-r_1=-r_1\le 0, t(0,1)=0, t(1,0)=(1-0)\cdot 1-1-r_2=-r_2\le 0, t(1,1)= 0$; for \textit{Class 3}: $t(0,0)=0, t(0,1)=0, t(1,0)=(1-0)\cdot 1-(1-r_1)-1=-(1-r_1)\le 0, t(1,1)= 0-(1-r_2)=-(1-r_2)\le 0$.\\ \indent Now, when $M_B+M_S\le 1$, consider the joint distribution that puts point masses $M_B$, $M_S$ and $1-M_B-M_S$ on the value profiles $(1,0)$, $(0,1)$ and $(0,0)$ respectively.
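Plausibility of such a point-mass construction can be checked mechanically, taking plausibility to mean nonnegative masses that sum to one and reproduce the known means $M_B$ and $M_S$ (as required by the constraints in (P)). A quick sketch for the $M_B+M_S\le 1$ case, with arbitrary illustrative numbers:

```python
MB, MS = 0.4, 0.3                      # arbitrary means with M_B > M_S, M_B + M_S <= 1
support = {(1, 0): MB, (0, 1): MS, (0, 0): 1 - MB - MS}

# Nonnegative masses that sum to one...
assert all(p >= 0 for p in support.values())
assert abs(sum(support.values()) - 1) < 1e-12

# ...and that reproduce the means of v_B and v_S
mean_vB = sum(p * vB for (vB, vS), p in support.items())
mean_vS = sum(p * vS for (vB, vS), p in support.items())
assert abs(mean_vB - MB) < 1e-12 and abs(mean_vS - MS) < 1e-12
```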
It is easy to verify that this is a plausible joint distribution and that the revenue under this joint distribution cannot be strictly positive; when $M_B+M_S\ge 1$, consider the joint distribution that puts point masses $1-M_S$, $1-M_B$ and $M_B+M_S-1$ on the value profiles $(1,0)$, $(0,1)$ and $(1,1)$ respectively. It is easy to verify that this is a plausible joint distribution and that the revenue under this joint distribution cannot be strictly positive. Therefore, we can focus attention on \textit{Class 4} only.\\ \textbf{\textit{Step 2: Identify an upper bound of the revenue guarantee}}\\ \indent We propose a relaxation of (D) by omitting many constraints. Specifically, the only remaining constraints are those for the four vertices $(0,0)$, $(1,0)$, $(0,1)$ and $(1,1)$ and for the value profiles $(r_1,0)$ and $(1,r_2)$. Formally, we have the following relaxed problem (D'): $$ \max_{\lambda_B,\lambda_S,\mu \in \mathcal{R}}\lambda_BM_B +\lambda_S M_S+\mu$$s.t. \begin{equation}\label{eq76} \mu \le 0 \end{equation} \begin{equation}\label{eq77} \lambda_B r_1 +\mu \le 0 \end{equation} \begin{equation}\label{eq78} \lambda_B +\lambda_S r_2 +\mu \le 0 \end{equation} \begin{equation}\label{eq79} \lambda_S +\mu \le 0 \end{equation} \begin{equation}\label{eq80} \lambda_B +\lambda_S +\mu \le 0 \end{equation} \begin{equation}\label{eq81} \lambda_B +\mu \le r_1-r_2 \end{equation} Note that the value of (D') (denoted $val(D')$) is weakly greater than the value of (D). We now look for the greatest upper bound of the value of (D') across $r_1,r_2$ and argue that it is attainable by constructing the mechanism and the joint distribution. We discuss four cases:\\ \textit{Case 1}: $\lambda_B\le 0, \lambda_S\le 0$. Then by \eqref{eq76}, $val(D')\le 0$ for any $r_1,r_2$.\\ \textit{Case 2}: $\lambda_B\ge 0, \lambda_S\ge 0$.
Note that by \eqref{eq76}, \eqref{eq80} and $M_B>M_S$, \begin{equation*} \begin{split} \lambda_BM_B +\lambda_S M_S+\mu & \le (\lambda_B+\lambda_S)M_B+\mu \\ & = (\lambda_B+\lambda_S+\mu)M_B+\mu(1-M_B)\\ & \le 0 \end{split} \end{equation*} Thus, $val(D')\le 0$ for any $r_1,r_2$.\\ \textit{Case 3}: $\lambda_B\le 0, \lambda_S\ge 0$. By the same argument as in \textit{Case 2}, $val(D')\le 0$ for any $r_1,r_2$.\\ \textit{Case 4}: $\lambda_B\ge 0, \lambda_S\le 0$. We restrict attention to $r_1\ge r_2$; otherwise, by the previous argument, the revenue guarantee cannot be strictly positive. We are then left with \eqref{eq77}, \eqref{eq78} and \eqref{eq81}, as they imply the other three constraints. Note that at least one of \eqref{eq77}, \eqref{eq78} and \eqref{eq81} is binding; otherwise we could increase the value of (D') by increasing $\lambda_B$ by a small amount. We thus discuss three situations:\\ $(a): \lambda_B r_1 +\mu = 0$.\\ \indent We plug $\lambda_B=-\frac{\mu}{r_1}$ into \eqref{eq78} and \eqref{eq81}, and we obtain \begin{equation}\label{eq82} \lambda_S \le \frac{1-r_1}{r_1r_2}\mu \end{equation} \begin{equation}\label{eq83} \mu \ge -\frac{r_1(r_1-r_2)}{1-r_1} \end{equation} Then we have \begin{equation*} \begin{split} \lambda_BM_B +\lambda_S M_S+\mu & = -\frac{\mu}{r_1}M_B+\lambda_S M_S+\mu \\ & \le -\frac{\mu}{r_1}M_B+\frac{1-r_1}{r_1r_2}\mu M_S+\mu\\ & = (-\frac{M_B}{r_1}+1+\frac{1-r_1}{r_1r_2} M_S)\mu \\ & \le \max\{0, \frac{r_1-r_2}{1-r_1}M_B-\frac{r_1-r_2}{r_2}M_S-\frac{r_1(r_1-r_2)}{1-r_1}\} \end{split} \end{equation*} $(b): \lambda_B +\mu =r_1-r_2$.\\ \indent We plug $\lambda_B=r_1-r_2-\mu$ into \eqref{eq77} and \eqref{eq78}, and we obtain \begin{equation}\label{eq84} \lambda_S \le -\frac{r_1-r_2}{r_2} \end{equation} \begin{equation}\label{eq85} \mu \le -\frac{r_1(r_1-r_2)}{1-r_1} \end{equation} Then we have \begin{equation*} \begin{split} \lambda_BM_B +\lambda_S M_S+\mu & = (r_1-r_2-\mu)M_B+\lambda_S M_S+\mu \\ & = (r_1-r_2)M_B+\lambda_S M_S+(1-M_B)\mu
\\ & \le \frac{r_1-r_2}{1-r_1}M_B-\frac{r_1-r_2}{r_2}M_S-\frac{r_1(r_1-r_2)}{1-r_1} \end{split} \end{equation*} $(c): \lambda_B + \lambda_S r_2 +\mu =0$.\\ \indent We plug $\lambda_B=-\mu-\lambda_S r_2$ into \eqref{eq77} and \eqref{eq81}, and we obtain \begin{equation} \lambda_S \ge -\frac{r_1-r_2}{r_2} \end{equation} \begin{equation} \mu \le \frac{r_1r_2}{1-r_1}\lambda_S \end{equation} Then we have \begin{equation*} \begin{split} \lambda_BM_B +\lambda_S M_S+\mu & = (-\lambda_S r_2-\mu)M_B+\lambda_S M_S+\mu \\ & = (M_S-r_2M_B)\lambda_S+(1-M_B)\mu \\ & \le \max\{0, \frac{r_1-r_2}{1-r_1}M_B-\frac{r_1-r_2}{r_2}M_S-\frac{r_1(r_1-r_2)}{1-r_1}\} \end{split} \end{equation*} Let $K(r_1,r_2):= \frac{r_1-r_2}{1-r_1}M_B-\frac{r_1-r_2}{r_2}M_S-\frac{r_1(r_1-r_2)}{1-r_1}$. We now solve $\max_{r_1\ge r_2}K(r_1,r_2)$. Taking the first order derivative w.r.t. $r_1$, we obtain \begin{equation}\label{eq88} \frac{\partial K(r_1,r_2)}{\partial r_1}= \frac{1}{(1-r_1)^2}\Big((1-r_2)M_B-\frac{M_S}{r_2}(1-r_1)^2-r_1(2-r_1)+r_2\Big) \end{equation} Let $Q(r_1,r_2):=(1-r_2)M_B-\frac{M_S}{r_2}(1-r_1)^2-r_1(2-r_1)+r_2$. Note that, fixing $r_2$, $Q (r_1,r_2)$ is decreasing w.r.t. $r_1$ when $r_2\le r_1 \le 1$. Note \begin{equation*} \begin{split} Q(r_2,r_2) & = (1-r_2)M_B-\frac{M_S}{r_2}(1-r_2)^2-r_2(1-r_2) \\ & = (1-r_2)(M_B+M_S-(\frac{M_S}{r_2}+r_2))\\ \end{split} \end{equation*} Note that $M_B+M_S-(\frac{M_S}{r_2}+r_2)\le M_B+M_S-2\sqrt{M_S}$. Therefore, if $M_B+M_S-2\sqrt{M_S}\le 0$, then $Q(r_1,r_2)\le 0$ for any $r_2$ and $r_1\in[r_2,1]$. In that case $K(r_1,r_2)$ is maximized at $r_1=r_2$, whose value is 0. If $M_B+M_S-2\sqrt{M_S} > 0$, solving $Q(r^*_1,r_2)=0$ (and ignoring the other solution, which exceeds 1), we obtain \begin{equation}\label{eq89} r^*_1=1-\sqrt{\frac{(1-r_2)(1-M_B)}{1-\frac{M_S}{r_2}}} \end{equation} If $r^*_1\le r_2$, then again $K(r_1,r_2)$ is maximized at $r_1=r_2$, whose value is 0.
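The first-order condition can be checked numerically: for a fixed $r_2$ with $M_S/r_2<1$, the $r^*_1$ of \eqref{eq89} should maximize $K(\cdot,r_2)$ over $[r_2,1)$. A sketch with arbitrary illustrative values:

```python
import math

MB, MS = 0.9, 0.2          # illustrative means with M_B > M_S

def K(r1, r2):
    # K(r1, r2) as defined in the text (requires 0 < r2 <= r1 < 1)
    return ((r1 - r2) / (1 - r1)) * MB - ((r1 - r2) / r2) * MS \
           - r1 * (r1 - r2) / (1 - r1)

r2 = 0.5                   # any fixed r2 with M_S / r2 < 1
r1_star = 1 - math.sqrt((1 - r2) * (1 - MB) / (1 - MS / r2))   # eq. (89)

# A fine grid over r1 in [r2, 1) should not beat r1_star
grid = [r2 + (1 - r2) * i / 10000 for i in range(10000)]
best_r1 = max(grid, key=lambda r1: K(r1, r2))
assert abs(best_r1 - r1_star) < 1e-3
assert K(r1_star, r2) >= K(best_r1, r2) - 1e-9
```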
Now if \begin{equation}\label{eq90} r^*_1> r_2 \end{equation} which, by some algebra, is equivalent to \begin{equation}\label{eq91} \sqrt{(1-r_2)(1-\frac{M_S}{r_2})}> \sqrt{1-M_B} \end{equation} then $K(r_1,r_2)$ is maximized at $r_1=r^*_1$, whose value, by some algebra, is \begin{equation}\label{eq92} \Big(\sqrt{(1-r_2)(1-\frac{M_S}{r_2})}-\sqrt{1-M_B}\Big)^2 \end{equation} Expression \eqref{eq92} is maximized at $r_2=\sqrt{M_S}$, which gives $r_1^*=1-\sqrt{1-M_B}$. At these values, \eqref{eq90} is equivalent to \begin{equation}\label{eq93} 1-\sqrt{M_S} > \sqrt{1-M_B} \end{equation} which, by some algebra, is equivalent to $M_B+M_S-2\sqrt{M_S} > 0$. Thus, when $M_B+M_S-2\sqrt{M_S} > 0$, we have found the solution $r_1=1-\sqrt{1-M_B}, r_2=\sqrt{M_S}$, with $K(r_1,r_2)=(1-\sqrt{M_S}-\sqrt{1-M_B})^2$.\\ \textbf{\textit{Step 3: Show the upper bound is attainable}}\\ \indent The last step is to construct deterministic trade mechanisms whose revenue guarantee is $(1-\sqrt{M_S}-\sqrt{1-M_B})^2$ when $M_B+M_S-2\sqrt{M_S} > 0$. Consider any deterministic mechanism satisfying (i), (ii) and (iii) in Theorem \ref{t3}. Let $\lambda_B=\frac{1-\sqrt{1-M_B}-\sqrt{M_S}}{\sqrt{1-M_B}}, \lambda_S=-\frac{1-\sqrt{1-M_B}-\sqrt{M_S}}{\sqrt{M_S}},\mu=-\frac{(1-\sqrt{1-M_B}-\sqrt{M_S})(1-\sqrt{1-M_B})}{\sqrt{1-M_B}}$. We will argue that they are feasible for the original dual problem (D).\\ \indent First note that the constraint for the value profile $(1,0)$ holds with equality. Then the constraints hold for any interior value profile, because the constraint is most stringent at the value profile $(1,0)$ by the monotonicity of the trade boundary. To see this, note that the constraint for any interior value profile $(v_B,v_S)$ is equivalent to \begin{equation}\label{eq94} \lambda_Bv_B+g(v_B)+\lambda_S v_S - f(v_S)+\mu \le 0 \end{equation} where $(v_B,g(v_B))$ and $(f(v_S),v_S)$ lie on the trade boundary. Since $\lambda_B >0$, $\lambda_S <0$, and $g$ and $f$ are nondecreasing, the LHS of \eqref{eq94} is maximized at $(1,0)$.
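The constructed dual variables can be sanity-checked numerically: writing $a=\sqrt{1-M_B}$, $b=\sqrt{M_S}$ and $c=1-a-b$, the dual constraint should bind at $(1-a,0)$ and $(1,b)$, and the dual objective should equal $(1-\sqrt{M_S}-\sqrt{1-M_B})^2$. A sketch with an arbitrary admissible pair $(M_B,M_S)$:

```python
import math

MB, MS = 0.9, 0.2                       # any pair with M_B + M_S - 2*sqrt(M_S) > 0
assert MB + MS - 2 * math.sqrt(MS) > 0

a, b = math.sqrt(1 - MB), math.sqrt(MS)
c = 1 - a - b                           # positive in this regime

lam_B, lam_S = c / a, -c / b            # the dual variables constructed in the text
mu = -c * (1 - a) / a
assert lam_B > 0 and lam_S < 0 and mu <= 0

# The dual constraint lam_B*vB + lam_S*vS + mu <= 0 binds at the boundary endpoints
assert abs(lam_B * (1 - a) + lam_S * 0 + mu) < 1e-12    # at (1 - sqrt(1-MB), 0)
assert abs(lam_B * 1 + lam_S * b + mu) < 1e-12          # at (1, sqrt(MS))

# The dual objective equals the claimed revenue guarantee
guarantee = (1 - math.sqrt(MS) - math.sqrt(1 - MB)) ** 2
assert abs(lam_B * MB + lam_S * MS + mu - guarantee) < 1e-12
```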
For the value profiles on or outside the boundary, the constraints also hold if (ii) holds. To see this, note that given the constructed $\lambda_B, \lambda_S,\mu$, we have $\lambda_B v_B + \lambda_S v_S+\mu=0$ for the value profiles $(1-\sqrt{1-M_B},0)$ and $(1,\sqrt{M_S})$. Then, by linearity, $\lambda_B v_B + \lambda_S v_S+\mu=0$ for any value profile on the line boundary mentioned in Theorem \ref{t3}. Therefore, if (ii) holds, the constraints also hold for value profiles on or outside the boundary. Finally, we calculate the value of (D) under the constructed dual variables, which, by some algebra, is exactly $(1-\sqrt{M_S}-\sqrt{1-M_B})^2$. \indent For the joint distribution in Theorem \ref{t3}, we first easily verify that all probability masses add up to 1. Second, given the mechanisms satisfying the three properties and this joint distribution, the value of (P) is, by some algebra, $(1-\sqrt{M_S}-\sqrt{1-M_B})^2$. This finishes the proof. \newpage \bibliographystyle{apalike}
\section{Introduction\label{se:intro}} The observables Forward-Backward Asymmetries (FBAs) have played an important role in the history of particle physics. Examples include the discovery of parity violation in the weak interaction \cite{Lee:1956qn,Wu:1957my,Garwin:1957hc}, the precision measurement of the $Z$ boson \cite{ALEPH:2005ab}, and the study of lepton universality \cite{Ali:1991is,Alok:2009tz,Belle:2009zue}. The introduction of the FBA and the FBA induced $C\!P$ Asymmetry (FB-$C\!P$A) to the hadronic multi-body decays of beauty and charmed mesons provides a good approach for isolating the interfering effects between near-by resonances \cite{Zhang:2021zhr}, which is helpful for understanding the behaviour of $C\!P$ violation, resonance spectroscopy, as well as low energy quantum chromodynamics (QCD). The decays of the $B$ meson are excellent probes for New Physics (NP) indirectly via the study of $C\!P$ violation ($C\!P$V) and rare decays, as well as good places for improving our understanding of QCD at low energy via spectroscopy studies of resonances, among which the hadronic multi-body decays of $B$ mesons become increasingly important. For the former case, lepton universality in the decays $B\to K^{(\ast)}l^+l^-$ has gained a lot of attention from both the theoretical and experimental sides \cite{LHCb:2017avl,Belle:2019oag,LHCb:2019hip,Bordone:2016gaq,Bobeth:2007dw,Geng:2017svp,Alonso:2014csa,DAmico:2017mtc}. For the latter, QCD exotic states such as the pentaquark states were also first observed in hadronic multi-body decays of the $B$ meson \cite{Belle:2003nnu,LHCb:2015yax,LHCb:2016axx,LHCb:2019kea}.
The three-body decay channel $B^\pm\to K^\pm K^\mp K^\pm$ has been studied experimentally by BaBar \cite{BaBar:2006hyf,BaBar:2012iuj}, Belle \cite{Belle:2004drb}, and LHCb \cite{LHCb:2014mir}, in which a structure referred to as $f_X(1500)$, appearing when the invariant mass of one $K^+ K^-$ pair is around $1.5$ GeV, was reported by BaBar and Belle. Although it can be explained by $f_0(1500)$, or by a combination of some even-spin resonances such as $f_0(1500)$ and $f_2'(1525)$ in the BaBar and Belle analyses, the nature of $f_X(1500)$ is still unclear. Recent theoretical investigations via the perturbative QCD approach indicate that the $f_X(1500)$ structure is probably the spin-1 resonance $\rho^0(1450)$ \cite{Zou:2020mul,Zou:2020atb,Liu:2021sdw}. The LHCb data in Ref. \cite{LHCb:2014mir} provide us with the opportunity to investigate the nature of $f_X(1500)$ via the FBA and the FB-$C\!P$A in the decay channel $B^\pm\to K^\pm K^\mp K^\pm$. The evidently large FBA when the invariant mass of the $K^+ K^-$ pair is around $1.5$ GeV, which can even be clearly seen from the event distributions in the Dalitz plots of this channel, strongly implies the presence of a spin-odd resonance, which could well be $\rho^0(1450)$, for reasons that will be explained in this paper. This is clearly in contradiction with the earlier analyses performed by BaBar and Belle. The remainder of this paper is structured as follows. In Sec. \ref{sec:IntroFBA}, we present the definition of the FBA and the FB-$C\!P$A for the three-body decays of the $B$ meson, followed by a brief discussion. In Sec. \ref{sec:AnalysisFBA}, we perform the analysis of the FBA and the FB-$C\!P$A of $B^\pm\to K^\pm K^\mp K^\pm$ based on the LHCb data of Ref. \cite{LHCb:2014mir}, from which it is found that the large FBA strongly indicates the presence of the $p$-wave resonance $\rho^0(1450)$. In the last section, we present our conclusions.
\section{\boldmath The FBA and the FB-$C\!P$A in three-body decays of heavy mesons \label{sec:IntroFBA}} \begin{figure}[h] \centering \includegraphics[width=.6\linewidth]{def-theta.pdf} \caption{The definition of $\theta$ in the c.m. frame of the $M_1M_2$ system.}\label{fig:def-theta} \end{figure} For a three-body decay process of a beauty or a charmed meson, $H\to M_1 M_2 M_3$, with $M_j$ ($j=1,2,3$) being three pseudo-scalar mesons which can only decay via the electro-weak interaction, we denote the angle between the momentum of $M_1$ and that of the initial particle $H$ in the c.m. frame of the $M_1M_2$ system as $\theta$ (see FIG. \ref{fig:def-theta} for illustration), and the invariant mass squared of the $M_iM_j$ system as $m_{ij}^2$. One has the relation \begin{equation} \cos\theta\equiv\frac{\vec{p}_1^\ast\cdot \vec{p}_3^\ast}{|\vec{p}_1^\ast||\vec{p}_3^\ast|}=\frac{m^2_{23}-(m^2_{{23},\text{max}}+m^2_{{23},\text{min}})/2}{(m^2_{{23},\text{max}}-m^2_{{23},\text{min}})/2}, \end{equation} where $\vec{p}_j^\ast$ is the momentum of $M_j$ in the c.m. frame of the $M_1M_2$ system, and $m^2_{{23},\text{max (min)}}$ is the maximum (minimum) value of $m^2_{23}$ allowed by the phase space. The Forward-Backward Asymmetry (FBA), which describes the preference of the flying direction of $M_1$ with respect to that of $H$ in the c.m. frame of the $M_1M_2$ system, is defined as \cite{Zhang:2021zhr} \begin{equation}\label{eq:defFBA} A^{FB}=\frac{\int_0^1\langle\left|\mathcal{A}\right|^2\rangle d\cos\theta-\int_{-1}^0\langle\left|\mathcal{A}\right|^2\rangle d\cos\theta} {\int_{-1}^1\langle \left|\mathcal{A}\right|^2\rangle d\cos\theta}, \end{equation} where the notation ``$\langle \cdots\rangle$'' represents integration over the invariant mass squared $m^2_{12}$, $\langle \left|\mathcal{A}\right|^2\rangle\equiv\int_a^b \frac{(m^2_{{23},\text{max}}-m^2_{{23},\text{min}})}{2}\left|\mathcal{A}\right|^2 dm^2_{12}$, with $[a,b]$ the interval over which the integration is performed.
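At a single point of $m^2_{12}$, the definitions above reduce to elementary $\cos\theta$ integrals: the relation between $m^2_{23}$ and $\cos\theta$ is the linear map below, and for a two-term amplitude $\mathcal{A}=a_0+a_1\cos\theta$ the integrals can be carried out explicitly, giving $A^{FB}=\Re(a_0a_1^\ast)/(|a_0|^2+|a_1|^2/3)$. A toy numerical sketch (all values are illustrative, not physical):

```python
def cos_theta(m23_sq, m23_sq_min, m23_sq_max):
    """The linear map from m23^2 to cos(theta) given in the text."""
    return (m23_sq - 0.5 * (m23_sq_max + m23_sq_min)) / (0.5 * (m23_sq_max - m23_sq_min))

# The kinematic endpoints map to the forward/backward limits (toy bounds):
assert cos_theta(2.0, 1.0, 2.0) == 1.0 and cos_theta(1.0, 1.0, 2.0) == -1.0

# Toy two-term amplitude A = a0 + a1*cos(theta), at a fixed m12^2 point:
a0, a1 = 1.0 + 0.5j, 0.8 - 0.3j       # illustrative values, not from any fit

def intensity(c):
    return abs(a0 + a1 * c) ** 2

# Midpoint-rule integrals over cos(theta) in [0,1] and [-1,0]
n = 200000
fwd = sum(intensity((i + 0.5) / n) / n for i in range(n))
bwd = sum(intensity(-(i + 0.5) / n) / n for i in range(n))
afb_numeric = (fwd - bwd) / (fwd + bwd)
afb_analytic = (a0 * a1.conjugate()).real / (abs(a0) ** 2 + abs(a1) ** 2 / 3)
assert abs(afb_numeric - afb_analytic) < 1e-6
```

The closed form makes the even-odd interference structure explicit: if either $a_0$ or $a_1$ vanishes, the FBA vanishes identically.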
By expressing the decay amplitude in terms of partial waves, \begin{equation}\label{eq:PWamp} \mathcal{A}=\sum_l a_{l}P_{l}(\cos\theta), \end{equation} one finds that the FBA depends on the interference of even and odd partial waves: \begin{equation}\label{eq:AFB} A^{FB}= \frac{2}{\sum_{j} \left[{\langle \left|a_j\right|^2\rangle}/{(2j+1)}\right]}\sum_{{\text{even~}} l \atop {\text{odd~}} k} f_{lk}\Re\left(\langle a_{l}a_{k}^{\ast}\rangle\right), \end{equation} where $f_{lk}\equiv\int_0^1 P_{l}P_{k}d\cos\theta=\frac{(-)^{(l+k+1)/2}l!k!}{2^{l+k-1}(l-k)(l+k+1)[(l/2)!]^2\{[(k-1)/2]!\}^2}$ \cite{Byerly}. From this equation, one can see that the numerator contains {\it only} the interference terms between even and odd waves. This implies that a large FBA around an even (odd)-wave resonance usually indicates interference with nearby odd (even)-wave contributions \footnote{ Since $M_1$ and $M_2$ are spin-0 particles, the spin of the resonance decaying into them equals the angular momentum between them.}. It is impossible to generate a large FBA when only even waves or only odd waves are present. The Forward-Backward Asymmetry induced $C\!P$A (FB-$C\!P$A) is defined as half the difference between the FBAs of a pair of $C\!P$-conjugate processes, \begin{equation}\label{eq:ACPFB} A_{CP}^{FB}\equiv\frac{1}{2}\left( A^{FB}- \overline{A}^{FB}\right), \end{equation} where $ \overline{A}^{FB}$ is the FBA of the $C\!P$-conjugate process $\overline{H}\to \overline{M}_1 \overline{M}_2 \overline{M}_3$; the factor $1/2$ ensures that $A_{CP}^{FB}$ lies between $-1$ and 1. One immediately sees from Eqs. (\ref{eq:AFB}) and (\ref{eq:ACPFB}) that the FB-$C\!P$A can isolate $C\!P$Vs originating from the interference of even and odd waves. \section{\boldmath Data-based analysis of FBA and FB-$C\!P$A in $B^\pm \to K^\pm K^\mp K^\pm$ \label{sec:AnalysisFBA}} \begin{figure}[h!]
\centering \includegraphics[width=1\textwidth]{datafigure.eps} \caption{ Various observables extracted from the LHCb data in Ref. \cite{LHCb:2014mir}. The solid and dashed lines are the FBAs for $B^-\to K^-K^+K^-$ and $B^+\to K^+K^-K^+$, respectively; the dash-dotted and dotted lines are the FB-$C\!P$A $A_{C\!P}^{F\!B}$ and the regional $C\!P$A $A_{CP}^{\text{reg}}$, respectively. }\label{fig:datafigure} \end{figure} Thanks to its high statistics, LHCb is able to investigate $B$ meson decays --including the branching fractions and the $C\!P$ Asymmetries ($C\!P$As) of three-body decays of $B$ mesons-- with unprecedented precision \cite{Chen:2021ftn,LHCb:2020xcz}. A very detailed analysis has been carried out by LHCb for the decay process $B^\pm \to K^\pm K^\mp K^\pm$ mentioned in Sec. \ref{se:intro}, in which rich resonance structures and regional $C\!P$As can be clearly seen throughout the Dalitz plots \cite{LHCb:2014mir}. Besides, signal yields projected in bins of the invariant mass of one of the $K^+ K^-$ pairs were also investigated. For each bin, the signal yield was divided into two parts according to whether $\cos\theta>0$ or $\cos\theta<0$, where $\theta$ was defined as the angle between the momenta of the two same-sign kaons in the c.m. frame of the $K^+K^-$ pair with the lower invariant mass, $m_{\text{low}}$ (also see FIG. \ref{fig:def-theta} for illustration). Based on the data of FIG. 6 of the LHCb paper \cite{LHCb:2014mir}, we obtain the measured FBAs, FB-$C\!P$A, and regional $C\!P$A for each bin of the decay process $B^\pm \to K^\pm K^\mp K^\pm$, which are presented in FIG. \ref{fig:datafigure}. One interesting behaviour of the FB-$C\!P$A is that its value hardly changes as $m_{\text{low}}$ ranges from 1 GeV to 1.8 GeV, which deserves investigation from both the experimental and theoretical sides. However, since this is not the focus of this paper, we will simply set this point aside from now on.
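The binwise observables are simple counting ratios. Assuming Poissonian fluctuations of the yields (consistent with the quoted uncertainties of order $\sqrt{N}$), standard error propagation for $A^{FB}=(N_+-N_-)/(N_++N_-)$ gives the statistical uncertainty $2\sqrt{N_+N_-/(N_++N_-)}/(N_++N_-)$. A sketch, using the yields of the 1.45--1.50 GeV bin of TABLE 1 as an example:

```python
import math

def afb_counts(n_fwd, n_bwd):
    """FBA from forward/backward yields, with Poisson statistical uncertainty."""
    tot = n_fwd + n_bwd
    asym = (n_fwd - n_bwd) / tot
    sigma = 2.0 * math.sqrt(n_fwd * n_bwd / tot) / tot
    return asym, sigma

# Example: the 1.45-1.50 GeV bin of TABLE 1
a_m, s_m = afb_counts(1995, 986)    # B-: approx 0.338 +- 0.017 (TABLE 1: 33.9 +- 1.7 %, within rounding)
a_p, s_p = afb_counts(1728, 1360)   # B+: approx 0.119 +- 0.018 (TABLE 1: 11.9 +- 1.8 %)

# FB-CPA and its uncertainty (uncorrelated B- / B+ samples)
acp_fb = 0.5 * (a_m - a_p)
s_acp = 0.5 * math.sqrt(s_m**2 + s_p**2)
print(f"A_CP^FB = {100*acp_fb:.1f} +- {100*s_acp:.1f} %")   # 11.0 +- 1.2 %
```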
\begin{table}[h!] \begin{center} \caption{The event yields, FBAs, and FB-$C\!P$As of each bin for ${m_{\text{low}}}$ ranging between 1.30 and 1.65 GeV, where the uncertainties are statistical only. The FBAs and FB-$C\!P$As are obtained according to the definitions $A^{FB}_{i}\equiv {[N_i(\cos\theta>0)-N_i(\cos\theta<0)]}/{[N_i(\cos\theta>0)+N_i(\cos\theta<0)]}$, $\overline{A}^{FB}_{i}\equiv {[\overline{N}_i(\cos\theta>0)-\overline{N}_i(\cos\theta<0)]}/{[\overline{N}_i(\cos\theta>0)+\overline{N}_i(\cos\theta<0)]}$ and $A_{CP,i}^{FB}\equiv\frac{1}{2}(A^{FB}_{i}-\overline{A}^{FB}_{i})$, respectively. } \label{tab:yields} \begin{tabular}{c||c|c|c||c|c|c||c} \hline\hline \multirow{2}{*}{bin (GeV)} &\multicolumn{3}{c||}{$B^-$} & \multicolumn{3}{c||}{$B^+$} & \multirow{2}{*}{$A_{CP,i}^{FB}$ (\%)} \\ \cline{2-7} & $N_i(\cos\theta\!\!>\!\!0)$ & $N_i(\cos\theta\!\!<\!\!0)$ & $A_i^{FB}$ (\%) & $\overline{N}_i(\cos\theta\!\!>\!\!0)$ & $\overline{N}_i(\cos\theta\!\!<\!\!0)$ & $\overline{A}_i^{FB}$ (\%) & \\ \hline {1.30-1.35} & $683\pm26$ & $649\pm25$ & $2.6\pm2.7$ & $942\pm31$ & $1059\pm33$ & $-5.9\pm2.2$ & $4.2\pm1.9$ \\ \hline {1.35-1.40} & $926\pm30$ & $698\pm26$ & $14.0\pm2.5$ & $1038\pm32$ & $1223\pm35$ & $-8.2\pm2.1$ & $11.1\pm1.7$ \\ \hline {1.40-1.45} & $1399\pm37$ & $1019\pm32$ & $15.7\pm2.0$ & $1286\pm36$ & $1408\pm38$ & $-4.5\pm1.9$ & $10.1\pm1.4$ \\ \hline {1.45-1.50} & $1995\pm45$ & $986\pm31$ & $33.9\pm1.7$ & $1728\pm42$ & $1360\pm37$ & $11.9\pm1.8$ & $11.0\pm1.2$ \\ \hline {1.50-1.55} & $1702\pm41$ & $706\pm27$ & $41.4\pm1.9$ & $1646\pm41$ & $986\pm31$ & $25.1\pm1.9$ & $8.1\pm1.3$ \\ \hline {1.55-1.60} & $1351\pm37$ & $778\pm28$ & $26.9\pm2.1$ & $1212\pm35$ & $1034\pm32$ & $7.9\pm2.1$ & $9.5\pm1.4$ \\ \hline {1.60-1.65} & $1022\pm32$ & $671\pm26$ & $20.7\pm2.4$ & $842\pm29$ & $887\pm30$ & $-2.6\pm2.4$ & $11.7\pm1.7$ \\ \hline\hline \end{tabular} \end{center} \end{table} Another characteristic feature of FIG.
\ref{fig:datafigure} lies in the obviously large FBAs associated with resonance structures when the invariant mass of the $K^+ K^-$ pair is around $1.5$ GeV for both of the $C\!P$-conjugate processes $B^\pm \to K^\pm K^\mp K^\pm$, which, according to the analysis in the last section, strongly indicates the interference of odd and even partial waves. The FBA in this region is so large that it can even be clearly seen from the event distributions in the Dalitz plots. In what follows, we will focus on this phase space region. To be more specific, our analysis in this paper is performed only for ${m_{\text{low}}}$ ranging between 1.30 and 1.65 GeV, in order to exclude the potential pollution from resonances such as $\phi(1020)$ and $f_0(1710)$.\footnote{Both $\phi(1020)$ and $f_0(1710)$ have little influence on the observed large FBA. For $\phi(1020)$, although it is one of the dominant resonances, it is far away from the region of the observed large FBA and its width is narrow enough; hence its effect on the observed large FBA is negligible even though it is a vector resonance. For $f_0(1710)$, on the other hand, although it is not far away from the region of the observed large FBA and has a relatively large decay width, it cannot be responsible for the observed large FBA, as it is a spin-even (scalar) resonance.} The corresponding event yields for $\cos\theta>0$ and $\cos\theta<0$, as well as the FBAs and FB-$C\!P$As, are presented in TABLE 1 for all bins of ${m_{\text{low}}}$ ranging between 1.30 and 1.65 GeV, where the uncertainties are statistical only. There are several resonances that could contribute to $B^\pm \to K^\pm K^\mp K^\pm$ in this region, including $f_0(1500)$, $\rho^0(1450)$, $X(1575)$, $f_2'(1525)$, etc., among which the presence of $f_0(1500)$ and $f_2'(1525)$ has been reported by BaBar \cite{BaBar:2012iuj}. After trying various fitting scenarios, we find that the best fit to the LHCb data of Ref.
\cite{LHCb:2014mir} for ${m_{\text{low}}}$ ranging between 1.30 and 1.65 GeV consists of the resonances $f_0(1500)$ and $\rho^0(1450)$, plus a non-resonance $s$-wave. The decay amplitude of $B^-\to K^- K^+ K^-$ can then be parameterized as \begin{equation} \mathcal{A}_{B^- \to K^- K^+ K^-}=\frac{c_1e^{i\delta_1}\cos\theta}{m^2_{\text{low}}-m_\rho^2+im_{\rho}\Gamma_\rho} +\frac{c_0e^{i\delta_0}}{m^2_{\text{low}}-m_f^2+im_{f}\Gamma_f}+\frac{c_{NS}e^{i\delta_{NS}}}{m_f\Gamma_f}, \end{equation} where $c_l$ and $\delta_l$ ($l=0,1, NS$) are the corresponding amplitudes (excluding the corresponding propagators and the Legendre polynomials) and the relative phases, respectively, and $m_{\rho/f}$ and $\Gamma_{\rho/f}$ are respectively the mass and the decay width of the resonance $\rho^0(1450)/f_0(1500)$. The factor $1/(m_f\Gamma_f)$ in the last term is introduced to make sure that $c_{NS}$ has the same dimension as $c_0$ and $c_1$. The amplitude of $B^+ \to K^+ K^- K^+$ is obtained by replacing $c_l$ and $\delta_l$ with $\overline{c}_l$ and $\overline{\delta}_l$, respectively. The event yield of each bin in FIG. \ref{fityield} is fitted according to \begin{equation} \mathcal{N}_{B^\pm,i}(\cos\theta\gtrless 0)/0.05\text{GeV}= \int_{\cos\theta\gtrless 0}\Big[R\left|\mathcal{A}_{B^\pm \to K^\pm K^\mp K^\pm}\right|^2\Big]_{m_{\text{low}}=\bar{m}_{\text{low},i}} d \cos\theta, \end{equation} where $R=\sqrt{(m^2_{\text{low}}-4m_K^2) \left[m_B^2-(m_{\text{low}}-m_K)^2\right]\left[m_B^2-({m_{\text{low}}}+m_K)^2\right]}$ is the phase-space factor, and $\bar{m}_{\text{low},i}$ is the mean value of $m_{\text{low}}$ in bin $i$. Note that we have absorbed all factors irrelevant to the discussion of FBAs and FB-$C\!P$As into the amplitudes $c_l$; once this is done, the amplitudes $c_l$ become dimensionless. \begin{figure}[h!]
\centering \subfigure{ \label{fityield.sub.1} \includegraphics[width=0.45\linewidth]{fitBmYield.eps}} \subfigure{ \label{fityield.sub.2} \includegraphics[width=0.45\linewidth]{fitBpYield.eps}} \caption{ The event yields from the LHCb data (only the statistical errors depicted) and the corresponding fitted curves for both $C\!P$-conjugate decay channels $B^- \to K^- K^+ K^-$ (left) and $B^+ \to K^+ K^- K^+$ (right), projected in bins of the invariant mass of the $K^+ K^-$ pair ranging between 1.3 and 1.6 GeV, for $\cos\theta>0$ (solid circles for the data, solid lines for the fitted curves) and $\cos\theta<0$ (hollow circles for the data, dashed lines for the fitted curves). The dotted lines represent the range of the $1 \sigma$ confidence-level fits. }\label{fityield} \end{figure} \begin{table}[h!] \begin{center} \caption{The fitted values of the parameters $c_l$, $\delta_l$, $\overline{c}_l$, and $\overline{\delta}_l$. The phases $\delta_1$ and $\overline{\delta}_1$ are fixed to 0. } \label{tab:IntHalfIntSR} \begin{tabular}{c||c|c||c|c} \hline\hline resonance & $c_l$ & $\delta_l$ & $\overline{c}_l$ & $\overline{\delta}_l$ \\ \hline $\rho^0(1450)$ & $30.7\pm0.5$ & 0 & $31.7\pm0.8$ & 0 \\ \hline $f_0(1500)$ & $1.78\pm0.38$ &$-0.03\pm0.15$ & $2.12\pm0.33$ & $-0.27\pm0.13$ \\ \hline non-res $s$-wave & $0.26\pm0.29$ &$2.20\pm0.80$ & $1.06\pm0.28$ & $2.15\pm0.19$ \\ \hline\hline \end{tabular} \end{center} \end{table} The fitted curves are also presented in FIG. \ref{fityield}, with all inputs taken from Ref. \cite{ParticleDataGroup:2020ssz}, while the numerical values of the fitted parameters are presented in TABLE \ref{tab:IntHalfIntSR}. The goodness of fit ($\chi^2/\text{d.o.f.}$) is 0.92 and 0.86 for $B^-\to K^- K^+ K^-$ and $B^+\to K^+ K^- K^+$ respectively, indicating that the data around $1.5$ GeV can be reasonably described by the resonances $\rho^0(1450)$ and $f_0(1500)$, with $\rho^0(1450)$ the dominant one.
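With this parameterization, the FBA at fixed $m_{\text{low}}$ can be obtained in closed form: writing $\mathcal{A}=a_0+a_1\cos\theta$, carrying out the $\cos\theta$ integrals in the FBA definition gives $A^{FB}=\Re(a_0a_1^{\ast})/(|a_0|^2+|a_1|^2/3)$, and the phase-space factor $R$ cancels in the ratio. A Python sketch using the central fitted values for $B^-\to K^- K^+ K^-$ from the table of fitted parameters; the $\rho^0(1450)$ and $f_0(1500)$ masses and widths below are approximate PDG values assumed here, not quoted from this paper:

```python
import cmath

# Resonance parameters: approximate PDG values (assumptions, not from this paper)
M_RHO, G_RHO = 1.465, 0.400   # rho(1450) mass and width [GeV]
M_F, G_F = 1.506, 0.112       # f0(1500)  mass and width [GeV]

# Central fitted values for B- -> K- K+ K- (delta_1 is fixed to 0)
C1, D1 = 30.7, 0.0            # rho(1450), p-wave
C0, D0 = 1.78, -0.03          # f0(1500),  s-wave
CNS, DNS = 0.26, 2.20         # non-resonance s-wave

def prop(m_sq, mass, width):
    """Breit-Wigner denominator m^2 - M^2 + i*M*Gamma, as in the text."""
    return m_sq - mass**2 + 1j * mass * width

def afb_model(m_low):
    """FBA at fixed m_low for A = a0 + a1*cos(theta); R cancels in the ratio."""
    m_sq = m_low**2
    a1 = C1 * cmath.exp(1j * D1) / prop(m_sq, M_RHO, G_RHO)
    a0 = (C0 * cmath.exp(1j * D0) / prop(m_sq, M_F, G_F)
          + CNS * cmath.exp(1j * DNS) / (M_F * G_F))
    return (a0 * a1.conjugate()).real / (abs(a0) ** 2 + abs(a1) ** 2 / 3)

for m in (1.35, 1.45, 1.55, 1.65):
    print(f"m_low = {m:.2f} GeV: A^FB = {afb_model(m):+.3f}")
```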
The presence of a sizeable $\rho^0(1450)$ component is in contradiction with the conclusion of BaBar in Ref. \cite{BaBar:2012iuj}, whose analysis showed no signal of the spin-1 resonance $\rho^0(1450)$. With the fitted parameters, one can also obtain the $C\!P$ asymmetry of $B^\pm\to\rho^0(1450)K^\pm$, which is $A_{CP}(B^\pm\to\rho^0(1450)K^\pm)\equiv\frac{\left|c_1\right|^2-\left|\overline{c}_1\right|^2}{\left|c_1\right|^2+\left|\overline{c}_1\right|^2}=(-3.4\pm3.0)\%$. \begin{figure}[h!] \centering \includegraphics[width=.8\linewidth]{Afb.eps} \caption{The best-fit FBAs of $B^{-} \to K^{-} K^{+} K^{-}$ (solid curve) and $B^{+} \to K^{+} K^{-} K^{+}$ (dashed curve) compared with those extracted from the LHCb data. The corresponding step lines are the FBAs extracted from the data. The dotted step lines represent the statistical uncertainties.}\label{fig:AFB} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=.8\linewidth]{Acp.eps} \caption{The best-fit FB-$C\!P$A (dash-dotted) and regional $C\!P$A (thick dotted) of $B^{\pm} \to K^{\pm} K^{\mp} K^{\pm}$. The corresponding step lines are the observables extracted from the data, while the curves are plotted with the parameters taking their best-fit values. The thin dotted lines represent the statistical uncertainties.}\label{fig:CPV} \end{figure} With the central values of the fitted parameters, the FBAs of $B^\pm \to K^\pm K^\mp K^\pm$ are depicted in FIG. \ref{fig:AFB}, along with comparisons with those obtained from the data. One can see from this figure that the fitted FBAs agree with the data quite well, which is not surprising, since the fit is in essence optimised with respect to the FBAs. On the other hand, the fitted FB-$C\!P$As and regional $C\!P$As, which are presented in FIG. \ref{fig:CPV}, are less accordant with those from the data.
This is also understandable, since both the FB-$C\!P$As and the regional $C\!P$As represent ``fine structures'' of the decay $B^\pm \to K^\pm K^\mp K^\pm$ compared with the FBAs, and our analysis is too simple to describe such ``fine structures'' well. Be that as it may, the fitted curves in FIG. \ref{fig:CPV}, especially that of the FB-$C\!P$A, still show tendencies that are in accordance with the data. We also tried other fitting scenarios, which are presented in TABLE \ref{tab:FitScenario}, along with the goodness of each fit. For example, we tried to fit the data by replacing $\rho^0(1450)$ with $X(1575)$, which was observed by the BES collaboration in the channel $J/\psi \to K^+K^-\pi^0$ a long time ago \cite{BES:2006kmo}, but the goodness of this fit is poor. We also tried to fit the data by replacing $\rho^0(1450)$ with a non-resonance $p$-wave. This fit is poor as well, indicating that the large FBA for $m_{\text{low}}$ around 1.5 GeV cannot be explained by non-resonance $p$-wave contributions either. From TABLE \ref{tab:FitScenario} one can see that the fitting scenario presented in detail above is the best among all those in the table. \begin{table}[h!] \begin{center} \caption{Various fitting scenarios that were performed. The goodness $\chi^2/\text{d.o.f.}$ of each fit is obtained with the inclusion of uncertainties from $m_{\text{low}}$, which were simply estimated as $0.05~\text{GeV}/2=0.025~\text{GeV}$ for each bin.
The table is ordered according to the goodness of each fit.} \label{tab:FitScenario} \begin{tabular}{c c c||c|c} \hline\hline \multicolumn{3}{c||}{fitting scenario} & \multicolumn{2}{c} {$\chi^2/\text{d.o.f.}$} \\ \hline $s$-wave & $p$-wave & $d$-wave & $B^-$ & $B^+$\\ \hline\hline $f_0(1500)$ & \multirow{2}{*}{$\rho^0(1450)$} & & \multirow{2}{*}{0.92} & \multirow{2}{*}{0.86}\\ non-res $s$-wave & & & & \\ \hline $f_0(1500)$ & $\rho^0(1450)$ & non-res $d$-wave & 0.94 & 1.92 \\ \hline $f_0(1500)$ & $\rho^0(1450)$ & & 1.07 & 2.10 \\ \hline $f_0(1500)$ & $\rho^0(1450)$ & $f_2'(1525)$ & 1.10 & 2.57 \\ \hline & $\rho^0(1450)$ & $f_2'(1525)$ & 1.76 & 2.70 \\ \hline non-res $s$-wave & $\rho^0(1450)$ & $f_2'(1525)$ & 2.61 & 2.75 \\ \hline non-res $s$-wave & $\rho^0(1450)$ & & 3.95 & 3.40 \\ \hline $f_0(1500)$ & non-res $p$-wave & $f_2'(1525)$ & 10.1 & 41.1 \\ \hline $f_0(1500)$ & $X(1575)$ & $f_2'(1525)$ & 9.59 & 45.8 \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Summary and Conclusion} A general analysis of the FBA and the FB-$C\!P$A was presented in this paper. According to this analysis, both the FBA and the FB-$C\!P$A are sensitive to the interference of even- and odd-waves in three-body decays of beauty and charmed mesons. This makes them good tools for resonance structure analysis in the aforementioned decay processes. We suggest that our experimental colleagues perform measurements of the FBAs (as well as the FB-$C\!P$As) in three-body decay channels of beauty and charmed mesons. Motivated by the notably large FBAs embedded in the LHCb data of Ref. \cite{LHCb:2014mir}, we performed a data-based analysis of the FBAs and $C\!P$As of the decay $B^\pm \to K^\pm K^\mp K^\pm$. We found that the large FBA observed when the invariant mass of the $K^+ K^-$ pair lies around 1.5 GeV can be interpreted as the interference between the amplitudes of the resonances $f_0(1500)$ and $\rho^0(1450)$.
The analysis shows the existence of the decay channel $B^\pm\to \rho^0(1450) K^\pm$, with a $C\!P$ asymmetry of $A_{CP}(B^\pm\to \rho^0(1450) K^\pm)=(-3.4\pm3.0)\%$. This is in contradiction with the conclusion of BaBar in Ref. \cite{BaBar:2012iuj}, according to which the analysis showed no signal of the spin-1 resonance $\rho^0(1450)$. We suggest that our experimental colleagues perform a closer analysis of the decay channel $B^\pm \to K^\pm K^\mp K^\pm$. \begin{acknowledgments} We thank Wen-Bin Qian for useful discussions. This work was supported by the National Natural Science Foundation of China under Contracts Nos. 11705081 and 12192261. \end{acknowledgments}
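The quasi-two-body $C\!P$ asymmetry quoted above is just the normalized difference of the squared moduli of the fitted $B^-$ and $B^+$ amplitude coefficients $c_1$ and $\overline{c}_1$. A minimal numerical sketch of this definition (the complex values below are illustrative placeholders, not the fit output):

```python
def cp_asymmetry(c, cbar):
    """A_CP = (|c|^2 - |cbar|^2) / (|c|^2 + |cbar|^2) for the B- and B+
    amplitude coefficients of a quasi-two-body channel."""
    n, nbar = abs(c) ** 2, abs(cbar) ** 2
    return (n - nbar) / (n + nbar)

# illustrative complex coefficients (not the fitted values)
print(round(cp_asymmetry(0.98 + 0.10j, 1.00 + 0.12j), 3))  # -> -0.022
```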
\section{Simulational Results} \label{shear} \subsection{Velocity Profiles} As mentioned in Section \ref{model}, we create a shear flow in our system by applying an external field with a constant gradient along the $z$-axis of the box, so that the flow is oriented along the $x$-direction - cf. Eq.(\ref{Met}) - parallel to the hard walls at $z = 0$ and $z = D$. The lower half of the box then flows in the positive, the upper one in the negative $x$-direction. It is expected that in the immediate vicinity of the walls the flow might be somewhat distorted due to the impenetrability of the walls. In Fig.\ref{jumps}a we plot the mean jump distance per MCS, $\delta x$, measured along the $x$-axis for different values of $B$. The $z$-coordinates of these successful jumps are taken from the respective $z$-coordinates of the monomers. \begin{figure} \special{psfile=fig2a.ps hoffset=-30 voffset=8 hscale=32 vscale=28 angle=270} \special{psfile=fig2b.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \caption{\label{jumps}(a) Variation of the average distance of {\em accepted} jumps in $x$-direction vs the $z$ coordinate for different values of the external field amplitude $B$ in a box of size $Z_{max} = 32$. (b) Variation of the average velocity (distance in $x$-direction traveled by a monomer after $1024$ MCS) across the slit at density $c=1.0$, $Z_{max}=16$, and $B=0.1$.} \end{figure} Evidently, in a wide channel with $D = 32$ the average jump distance grows linearly with the distance from the mid-plane of the box only for a sufficiently weak field, $B \le 0.3$ (for $B = 0$ it is zero). For $B > 0.3$ distortions in the $\delta x$ profile set in because the maximal jump distance is limited to $0.5$, as mentioned in Section \ref{model}. For a narrower slit of width $D=16$ the region of linear response extends to higher values of the bias, $B\le 0.7$. Therefore most of the simulational results in what follows are derived for $D=16$. Fig.
\ref{jumps}b then demonstrates that the velocity changes linearly across the slit for sufficiently small values of the field $B$. In the broad channel, $D = 32$, at $z = 0, D$ one gets $F = 8$ from Eq.(\ref{F}) for $B = 0.5$, so that the average jump distance there, according to Eq.(\ref{xmean}), should be $\delta x \approx \pm 0.178$. The value of $\delta x$ at the borders of the box, as seen from Fig.\ref{jumps}a, confirms this estimate, demonstrating that the role of the microscopic interactions $U_M$ and $U_{FENE}$ is small. The presence of the walls is felt in their immediate vicinity: a local distortion of the displacement profile appears, increasingly pronounced with growing $B$, although it remains spatially contained in a layer of thickness roughly equal to the monomer diameter. It is interesting to note that this small increase of $\delta x$ (and, therefore, of the velocity) immediately at the walls resembles the so-called ``slip effect''\cite{Biller,Goh} in simple shear flow of dilute polymer solutions in a narrow channel. This slip effect can be explained intuitively by the fact that the polymer molecules near the wall align themselves more strongly with the flow than those away from the wall, and thus transport less flow-wise momentum across the flow than would otherwise be the case. Indeed, in our Monte Carlo model the attempted jumps which would otherwise bring the monomers through and beyond the walls of the slit are always rejected. Since the molecules cannot penetrate the wall, their concentration is reduced at the wall, so that their contribution to the viscosity is further diminished. Using a model of dumbbells in parallel wall shear flow, one can calculate\cite{Aubert,Biller,Goh} both the nonlinear velocity profile of the suspended solutions as well as the center-of-mass concentration profile between the two walls - we shall see in the next section that the latter is qualitatively reproduced by our simulational results.
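The estimate $\delta x \approx \pm 0.178$ at $F = 8$ can be reproduced with a few lines of code. Eq.(\ref{xmean}) is not spelled out in this excerpt; the sketch below simply assumes uniform trial jumps $\delta x \in [-0.5, 0.5]$ accepted with Metropolis probability $\min(1, e^{F\,\delta x})$, ignoring $U_M$ and $U_{FENE}$. Under this assumption the quoted value is recovered:

```python
import math

def mean_accepted_jump(F, n=200_000):
    """Mean displacement per *accepted* trial jump for the Metropolis rule
    p_acc(dx) = min(1, exp(F*dx)), with trial jumps dx uniform in [-0.5, 0.5].
    Evaluated by midpoint integration over the trial distribution."""
    h = 1.0 / n
    num = acc = 0.0
    for i in range(n):
        dx = -0.5 + (i + 0.5) * h
        p = min(1.0, math.exp(F * dx))
        num += dx * p * h   # displacement weighted by acceptance probability
        acc += p * h        # total acceptance probability mass
    return num / acc

print(round(mean_accepted_jump(8.0), 3))  # -> 0.178
```

The bias saturates for strong fields: as $F$ grows, all backward jumps are suppressed and $\langle\delta x\rangle$ approaches the mean of a one-sided uniform jump, $0.25$, consistent with the limited linear-response regime discussed above.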
\subsection{Effect of Shear Rate on Average Chain Length and Molecular Weight Distribution} We find that the average chain length of the EP solution, $L$, decreases steadily with growing $B$, in agreement with an earlier MD study\cite{K} - Fig. \ref{lvsb}: the mean chain length $L$ at the highest shear rate is about $70 \%$ of its value for a system at rest. Here we should like to point out the existence of considerable fluctuations in the derived values of $L$ for $B \neq 0$ - the statistical error has been reduced at the expense of considerable computational effort. Note that the reduced mean chain length $L/L_0$ ($L_0$ being the mean chain length of the solution at $B = 0$) appears to decrease nearly linearly with the shear rate $B$: $L/L_0 = 1 - 0.35 B$. This simply follows (cf. Eq.(\ref{L}) below) from the exponential dependence of $L$ on the interaction $J$: $L/L_0 = \exp[(J-bB)/2]/\exp(J/2) = \exp(-bB/2) \approx 1 - bB/2$, with the constant $b = 0.7$ measuring the effective decrease of the bond energy $J$ due to the shear $B$. Another interesting observation is that the rate of decline is apparently independent of the density $c$, at least for the small values of $B$ considered in the present work. \begin{figure} \special{psfile=fig3.ps hoffset=30 voffset=20 hscale=40 vscale=30 angle= 270} \vskip 5.5cm \caption{\label{lvsb} Relative decrease of the mean chain length $L$ in a lamellar flow field versus $B$ at $J/kT = 7$, densities $c = 1.0$ and $c = 2.0$, and $D=16$.} \end{figure} The form of the MWD $C(l)$, that is, the concentrations of chains of contour length $l$, appears to change qualitatively for equilibrium polymers in dilute solutions.
This change is in line with the predictions of the recent scaling theory of EP\cite{WMC2}, where we demonstrated that the purely exponential form of the MWD, $P(x) \propto \exp(-x)$, corresponding to concentration/chain-length regimes in which density correlations are suppressed (typically beyond the semi-dilute threshold), is replaced by a `rounded' Schwartz power-exponential distribution: \begin{equation} P(x) dx = \left\{ \begin{array}{ll} \exp(-x) dx & \mbox{($L \gg L^*$)} \\ \frac{\gamma^{\gamma}}{\Gamma(\gamma)} x^{\gamma-1} \exp(-\gamma x) dx &\mbox{($L \ll L^*$)} \end{array} \right. \label{MWD} \end{equation} when correlations are important (typically at dilute concentrations). In Eq.(\ref{MWD}) the reduced chain length, $x = l/L$, is the ratio of the particular chain length $l$ to the mean chain length $L$, $\gamma$ is the critical exponent of the $n\rightarrow 0$ vector model (in 3D $\gamma \approx 1.165$, while its mean-field value is $\gamma_{MFA} = 1$), and $L^*$ marks the average chain length at the crossover from dilute to semi-dilute concentration, $(c \rightarrow c^*)$, of EP solutions. The mean chain length $L$ was predicted and confirmed to vary with the dimensionless bond energy $J/k_BT$ as \begin{equation} L \propto c^{\alpha} \exp(\delta J/k_BT) \label{L} \end{equation} with exponents $\alpha_d=\delta_d=1/(1+\gamma)\approx 0.46$ in the dilute and $\alpha_s=\frac{1}{2}\left[1+(\gamma-1)/(\nu d-1)\right] \approx 0.6$, $\delta_s =1/2$ in the semi-dilute regime. In Fig.
\ref{mwd} we plot $C(l)$ for a system at rest ($c = 0.5,\ B = 0$) and at the maximum shear rate ($B = 0.7$) to demonstrate that the form of the MWD changes qualitatively when shear is imposed. \begin{figure} \special{psfile=fig4a.ps hoffset=-30 voffset=20 hscale=40 vscale=30 angle=270} \special{psfile=fig4b.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \caption{\label{mwd} (a) Molecular weight distribution in a system of EP at rest, $B = 0$, and in a flow, $B = 0.7$, at $J/kT = 7$ and monomer density $c = 0.5$. (b) The same as in (a) in dimensionless units $x=l/L$, fitted with Eq.(\protect\ref{MWD}) with $\gamma = 1.16$. } \end{figure} Thus it is evident from Fig. \ref{mwd} that in the presence of shear the correlations in the polymer concentration in our dilute system of EP are effectively suppressed, and the molecular weight distribution is very well reproduced by the simple exponential function expected if an MFA description of the system holds. This finding can be understood if one recalls that the imposition of an external field with a shear rate $B$ has a twofold effect on the polymers: (i) it effectively reduces the bond strength, $J \rightarrow J - b B$, which makes the polymers shorter, and (ii) the shape of the polymer coils changes toward a more rod-like shape with the longest axis oriented along the field. Meanwhile it is well known that a system of rods exhibits Mean-Field-like behavior\cite{CC}, which is here manifested by the change in the MWD - Fig.\ref{mwd}. \subsection{Effect of Shear Rate on Chain Conformations} As the flow becomes faster, the individual shapes of the chain coils change too. For weak shear rates $B$, the relative distortions (as measured for instance by flow birefringence\cite{J}) are essentially proportional to $\tau B$, where $\tau$ is the largest relaxation time of the unperturbed molecule\cite{Zimm}.
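Returning briefly to the MWD: the Schwartz form in Eq.(\ref{MWD}) is, for any $\gamma > 0$, a properly normalized distribution of the reduced chain length with unit mean $\langle x \rangle = 1$ (it is a Gamma distribution with shape and rate both equal to $\gamma$). A short numerical check, illustrative only and not part of the original analysis:

```python
import math

def schwartz(x, gamma=1.165):
    """Schwartz power-exponential MWD: P(x) = gamma^gamma / Gamma(gamma)
    * x^(gamma-1) * exp(-gamma*x), with x = l/L the reduced chain length."""
    return gamma**gamma / math.gamma(gamma) * x**(gamma - 1) * math.exp(-gamma * x)

# midpoint integration on [0, 60]: both the norm and the mean should be 1
h, n = 1e-3, 60_000
norm = sum(schwartz((i + 0.5) * h) * h for i in range(n))
mean = sum((i + 0.5) * h * schwartz((i + 0.5) * h) * h for i in range(n))
print(round(norm, 3), round(mean, 3))  # -> 1.0 1.0
```

This is why the fit in Fig.\ref{mwd}b can use the single parameter $\gamma$: the normalization and the first moment of $P(x)$ are fixed automatically.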
Brownian dynamics studies\cite{Diaz} of Hookean dumbbells in a steady shear flow confirm this relative increase of the end-to-end distance with the shear rate, both with and without hydrodynamic interactions included. A confirmation of these early predictions follows from our simulational results too - Fig.\ref{frg2}. \newpage In Fig.\ref{frg2} we plot the components of the gyration radius, typical for a large section of the length distribution and averaged over all EP, in two characteristic cases - a system at rest and in a flow, \begin{figure} \special{psfile=fig5.ps hoffset=30 voffset=20 hscale=40 vscale=30 angle= 270} \vskip 5.5cm \caption{\label{frg2} Variation of the coil size along the field, $R_{gx}^2$, and perpendicular to the field, $R_{gz}^2$, with chain length $l$ for two values of $B$: $0.0$ and $0.7$. The inset shows the {\em total} $R_g^2$ in normal coordinates. Here $J/kT = 7$ and the density $c = 0.5$ in a narrow slit, $D = 16$. The average coil size measured at zero shear rate is $R_e^2 = 50.03$ and $R_g^2 = 8.325$.} \end{figure} indicating that chain coils in a flow become more extended along the field and compressed perpendicular to it. The result is a total increase of $R_g^2$ as the whole system starts drifting along the field direction. It is also evident from Fig. \ref{frg2} that this asymmetry in the components of $R_g^2$ becomes progressively more pronounced as $l$ gets larger. The fact that $R_{gz}^2$ is somewhat smaller than $R_{gx}^2$ even at rest is due to the presence of the hard walls at $z = 0, Z_{max}$, which slightly deform the coils in the $z$-direction. We have not given a plot of the $z$-dependence of $R_g^2$ here because of the surface segregation of chain lengths induced by the parallel plates. This segregation populates the vicinity of the walls with single monomers and very short species. In contrast, the longer chains reside at least a distance $R_g$ away from the walls (see next Section).
Such a distribution of centers of mass with respect to chain length takes place in EP even at rest (and is further enhanced by the flow), making the MWD a $z$-dependent quantity and thus interfering with the pure effect of coil stretching under flow. \subsection{Density Profiles} The overall transformations which the system undergoes with increasing shear rate become much more explicit, however, if density and diffusion profiles are sampled as functions of $z$. This is shown in Fig.\ref{dens}, where the density is normalized to unity ($\int_0^D c(z) dz = 1$). It is evident from Fig.\ref{dens}a that in the absence of bias, when the system is at rest, the total monomer density is uniformly distributed across the box with a typical depletion immediately at the walls (at low concentration of the solution). The walls are avoided by the longer chains for entropic reasons. When the system starts to flow, a redistribution of density sets in with increasing bias $B$, whereby for the highest shear rate one observes a density maximum centered at the middle, where the flow velocity is nearly zero. Qualitatively this density profile appears to be similar to analytic and simulational results\cite{Goh} for the center-of-mass concentration profile between upper and lower walls, obtained earlier for a single dumbbell in a slit\footnote{In the much simpler model\cite{Goh}, however, the density profile is independent of the shear rate $B$.}. As the concentration is further increased, one observes the onset of typical oscillations in the density profiles in the vicinity of the walls, Fig.\ref{dens}b, c. Such oscillations are typical for polymer solutions confined between flat plates and have been comprehensively studied for conventional polymers by Monte Carlo simulations before\cite{Yet,PMB}.
The observed transitions in the monomer density immediately at the walls, from a deficit (depletion layer) at low concentration to an excess (for melts), are governed by a competition between entropic and packing effects. Because of the resultant decrease of configurational entropy it becomes unfavorable for polymers to be near the walls. On the other hand, chains near the walls suffer collisions with the chains away from the walls and are thereby pushed closer to the walls. At low density the entropic effect dominates, while at high densities packing effects prevail. This is clearly seen in Fig.\ref{dens}a,b,c for zero shear. In dilute solutions, as seen from Fig.\ref{dens}a, the increase of shear rate leads to an effective broadening of the depletion layers adjacent to the walls, which is in agreement with EWIF (evanescent wave-induced fluorescence) experimental observations\cite{Aussere2}, but at variance with an earlier computer simulation\cite{Duering}. This density variation across the slit, caused by the shear, appears to depend essentially on the overall concentration of the system. At larger shear a density maximum still forms in the slowest layer of the flow in the middle of the box at concentration $c = 1.0$, whereas for very dense systems, Fig.\ref{dens}c, this density redistribution with shear is suppressed. One may thus conclude that the effects of shear on the density profiles in a slit depend essentially on the free volume in the system which is available for rearrangement of the polymer chains. In the broader slit, Fig.\ref{dens}d, where the shear rate gradually diminishes in the vicinity of the walls (adjacent layers flow with nearly equal velocity), the density profile becomes more complex, with two local minima and a sharp increase at the walls. This complex picture indicates that the monomer density is generally increased in regions of zero flow or steady flow with vanishing shear.
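The normalization $\int_0^D c(z)\,dz = 1$ used for the profiles above amounts to a simple reweighting of a $z$-histogram of the monomer positions. A minimal sketch (the function name and binning are illustrative; uniform random positions stand in for the simulation data):

```python
import random

def density_profile(z_coords, D, nbins):
    """Histogram monomer z-coordinates into nbins layers and normalize
    the profile c(z) so that it integrates to unity across the slit."""
    dz = D / nbins
    counts = [0] * nbins
    for z in z_coords:
        counts[min(int(z / dz), nbins - 1)] += 1
    total = len(z_coords)
    return [cnt / (total * dz) for cnt in counts]  # sum(c) * dz == 1

rng = random.Random(0)
D = 16.0
c = density_profile([rng.uniform(0, D) for _ in range(100_000)], D, 32)
print(round(sum(c) * (D / 32), 6))  # -> 1.0
```

With this convention, profiles at different overall concentrations $c$ (as in Fig.\ref{dens}a-c) can be compared on the same scale.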
\begin{figure} \special{psfile=fig6a.ps hoffset=-30 voffset=20 hscale=40 vscale=30 angle=270} \special{psfile=fig6b.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \special{psfile=fig6c.ps hoffset=-30 voffset=20 hscale=40 vscale=30 angle=270} \special{psfile=fig6d.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \caption{\label{dens} Distribution of the total monomer density between the hard walls in a slit of width $Z_{max} = 16$ at varying shear rate (bias) and $J/kT=7$: (a) $c = 0.5$; (b) $c = 1.0$; (c) $c = 2.0$; (d) a broad slit with $Z_{max} = 32$ and $c = 0.5$. } \end{figure} Clearly, the changes in these profiles with $B$ when shear sets in reflect a complex reorganization of the polydisperse system of EP, whereby the ``rapids'' of the flow may act differently on chains of different length. Additionally, even at rest, the system segregates in the vicinity of the walls for entropic reasons\cite{ML} into layers occupied predominantly by chains of decreasing contour length as one gets closer to the wall. These effects are indeed seen in Fig.\ref{zcm}, where the average positions of the centers of mass of single monomers and of chains of length $l = 70$ are shown at various strengths of the bias field. The single monomers evidently tend to occupy the immediate vicinity of the walls, and this tendency is enhanced as the shear increases. The long chains, on the contrary, keep a distance $\approx R_g$ from the walls while the system is at rest. For growing $B$ their residence is further narrowed around the ``slow'' region in the center of the box. In view of Fig.\ref{dens}a one may conclude that the deficit of single monomers in this region is more than compensated by the accumulation of longer chains. In the wide slit $(D = 32)$, only a fraction of the long chains still remains in the middle, whereas two new maxima at the walls appear. Evidently, this happens in those regions where the shear rate for $B > 0.4$ (cf.
Fig.\ref{jumps}) nearly vanishes. \begin{figure} \special{psfile=fig7a.ps hoffset=-30 voffset=20 hscale=40 vscale=30 angle=270} \special{psfile=fig7b.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \caption{\label{zcm} (a) Center-of-mass distribution of single monomers in a slit with $D = 16$, $J/kT=7$ and $c = 0.5$ at various values of the shear rate (bias). (b) Center-of-mass distribution of chains with $l = 70$ under the same conditions as in (a).} \end{figure} One might expect that other properties of the system related to the density will also be affected by the shear, for instance the local diffusion coefficient. In Fig.\ref{dis} we plot a histogram of the mean square displacements (MSQD) performed by all those particles which remain in the same $z$-layer within a MCS. \begin{figure} \special{psfile=fig8a.ps hoffset=-30 voffset=20 hscale=40 vscale=30 angle=270} \special{psfile=fig8b.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \caption{\label{dis} Distribution of MSQD during $1$ MCS in a box with $Z_{max} = 16$ for different shear rates (bias) at $J/kT=7$: (a) $c = 0.5$; (b) $c = 1.0$. } \end{figure} The distribution of MSQD, Fig. \ref{dis}b, develops from being nearly constant (with two small wings at the depletion zones) for zero bias to a well-defined broad minimum in the middle of the box as $B \rightarrow 0.7$, whereby it also decreases as a whole. Evidently, the diffusion profile across the channel simply reflects the variations of the density distribution. \subsection{Nematic Ordering of Short Chains} Most of the simulational results discussed in the preceding subsections have been carried out for a sufficiently strong bond energy, $J/kT = 7$, which is equivalent to a rather low temperature of the system.
The average chain length at $J/kT = 7$ thereby varies with density within the interval $40 \le L \le 70$, where the flexibility of the chains ensures that their conformations correspond to well-shaped polymer coils. It is interesting to check whether a change in the mean size of the chains $L$ in some way affects the reorganization of the polydisperse system under shear flow. If one reduces the ratio of bond to thermal energy, $J/k_BT$, as mentioned in the previous section, Eq.(\ref{L}), the average contour chain length decreases exponentially fast. In the present study we change $J/kT$ from $7$ down to $1$, whereby $L$ drops from $\approx 40$ to $2.5$. As shown below, this leads to dramatic changes in the EP solution. The profiles along the $z$-axis for this case of very short chains are shown in Fig. \ref{1}. \begin{figure} \special{psfile=fig9a.ps hoffset=-30 voffset=20 hscale=40 vscale=30 angle=270} \special{psfile=fig9b.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \special{psfile=fig9c.ps hoffset=-30 voffset=20 hscale=40 vscale=30 angle=270} \special{psfile=fig9d.ps hoffset=200 voffset=20 hscale=40 vscale=30 angle=270} \vskip 5.5cm \caption{\label{1} (a) Distribution of the total monomer density across a slit with $D=16$ at various rates of shear (bias). (b) Average mean square displacement after $1$ MCS. (c) Center-of-mass distribution of chains with $l = 1, 2, 3, 4, 5, 10\; \mbox{and}\; 20$ for $B = 0$. Each curve is shifted from the previous one along the $y$-axis by $0.0005$ for better visibility. (d) The same as in (c) but for $B = 0.7$. The width of the slit is $Z_{max} = 16,\;c = 0.5$ and $J/kT = 1$.} \end{figure} The oscillations in the total density, Fig. \ref{1}, suggest a transition of the system into an ordered state of a nematic liquid crystal with an easy direction parallel to the walls. Evidently the presence of hard walls acts as an external ordering field on the chains.
The system is dominated by monomers, dimers and other very short species which behave largely like stiff rods, aligning themselves parallel to the walls. Indeed, Figs. \ref{1}c, \ref{1}d demonstrate that this ordering is most pronounced for dimers, trimers and tetramers, whereas neither single monomers nor chains with length $l \ge 5$ participate in the ordering. Thus both single monomers, which lack any anisotropy in shape, and longer chains, with conformations of coils rather than stiff rods, are insensitive to the ordering influence of the walls. The influence of growing shear rate on the system with $L = 2.5$ is similar to that in the case of $L \approx 40$. From Figs. \ref{1}c, \ref{1}d one may conclude that the longer chains, which are otherwise uniformly distributed, now pull closer to the middle of the slit where the flow velocity is zero. This tendency starts already with the $4$- and $5$-mers and is very clearly seen for the $10$-mers (there are only a few $20$-mers in the system at $J/k_BT = 1$ and their statistics is therefore rather poor). This effect more than compensates the developing shallow minimum in the single monomer concentration in the middle of the box.
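The degree of wall-induced alignment discussed above can be quantified by the standard nematic order parameter $S = \langle (3\cos^2\theta - 1)/2 \rangle$, where $\theta$ is the angle between a bond and the wall normal (the $z$-axis): $S = 0$ for isotropically oriented bonds and $S = -1/2$ for bonds lying perfectly parallel to the walls. This observable is not computed in the text above; the sketch below merely illustrates the two limits with synthetic bond vectors:

```python
import math
import random

def order_parameter(bonds):
    """Nematic order parameter S = <(3*cos^2(theta) - 1)/2> with respect
    to the z-axis (the wall normal); bonds is a list of (bx, by, bz)."""
    s = 0.0
    for bx, by, bz in bonds:
        c2 = bz * bz / (bx * bx + by * by + bz * bz)  # cos^2(theta)
        s += 0.5 * (3.0 * c2 - 1.0)
    return s / len(bonds)

rng = random.Random(2)
# isotropic bonds: directions uniform on the sphere, S should be close to 0
iso = []
for _ in range(200_000):
    u, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * math.pi)
    r = math.sqrt(1 - u * u)
    iso.append((r * math.cos(phi), r * math.sin(phi), u))
# bonds lying in the xy-plane (parallel to the walls): S = -1/2 exactly
flat = [(math.cos(t), math.sin(t), 0.0)
        for t in (rng.uniform(0, 2 * math.pi) for _ in range(1000))]
print(round(order_parameter(iso), 2), order_parameter(flat))  # -> 0.0 -0.5
```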
\section{Concluding Remarks} The present simulational study of the impact of shear flow on EP in a slab reveals a number of interesting features: \begin{itemize} \item{The average chain length in a system of EP decreases steadily with growing shear rate.} \item{The polymer coils are gradually stretched along the flow direction as the shear is increased.} \item{The MWD in a dilute solution of EP changes qualitatively, from a Schwartz to a mean-field-like exponential distribution function, when a sufficiently strong shear rate is imposed.} \item{The shear rate introduces inhomogeneity in a system of EP confined in a slit: the monomer density, the diffusion coefficient and the concentration of macromolecules of different lengths develop characteristic profiles perpendicular to the walls.} \item{The width of the depletion layer near the wall for long chains grows with increasing shear rate, in agreement with recent EWIF studies.} \end{itemize} Another interesting phenomenon - an ordering transition in a system dominated by the shorter and stiffer chains - is found to take place upon {\em heating} of the system of EP, with the result that the chain length is reduced. In this case shear flow is observed to enhance the degree of ordering in the system. Our observations show that the relaxation of a system of EP from a state of rest to a steady-state flow is a slow process which requires long observation times, probably rendering Monte Carlo simulational methods more appropriate than Molecular Dynamics. We should like to note, however, that the shear rates studied in the present work are limited to low and moderate values, since stochastic jumps along and against the external field may be biased by means of the Boltzmann factor in a MC procedure by at most $100\%$. We therefore expect that at even higher shear rates the influence of flow on EP properties might be more dramatic.
Clearly, additional work and adequate alternative methods are still needed to reach a comprehensive understanding of the problem. \section{Acknowledgments} This research has been supported by the National Science Foundation, Grants No. INT-9304562 and No. DMR-9727714, and by the Bulgarian National Foundation for Science and Research under Grant No. X-644/1996. J.~W. acknowledges support by EPSRC under Grant GR/K56233 and is indebted for the hospitality of the Center for Simulational Physics at the University of Georgia. \section{Introduction} Systems in which polymerization takes place under conditions of chemical equilibrium between polymer chains and their respective monomers are termed ``equilibrium polymers'' (EP)\cite{ref}. The interest in EP from the point of view of both applications and basic research has recently triggered numerous investigations, including computer simulations\cite{WMC2,K}, in an effort to avoid difficulties with laboratory experiments\cite{Greer} and approximations such as the Mean Field Approximation (MFA). Recently the basic scaling concepts of polymer physics were tested by extensive Monte Carlo (MC) simulations of flexible EP on a lattice\cite{WMC2}. The results suggest that despite their polydispersity, EP resemble conventional polymers (where the polymerization reaction has been deliberately terminated) in many aspects. However, dynamic aspects of their behavior may still be very different: for example, the constant process of scission and recombination in EP offers an additional mechanism of stress relaxation\cite{Cates}. Computer experiments on EP dynamics are already under way\cite{MWL1}. Considerably fewer simulation studies of {\em non-equilibrium} properties of EP have been reported\cite{K,AMYRDL}. Recently observed phenomena such as shear-banding structures, shear-induced structures and phase transitions\cite{Berret,Schmitt,Makhloufi,Furo} are not completely understood.
An earlier theoretical work\cite{Gelbart}, for instance, predicted a decrease in the average size of dilute rod-like micelles, whereas a later study\cite{Wang} concluded that rod-like micelles should grow at higher shear rates. Since it is known that viscoelastic surfactant solutions show unusual nonlinear rheology\cite{Spenley}, it is clear that much more research in this field is needed before a complete understanding of the nonlinear properties of EP is achieved. Since EP behave in many respects as conventional ``dead'' polymers\cite{WMC2}, comparisons with the latter, where much more work on shear flow effects has been done so far, could prove very useful. Thus the inhomogeneity of flows, due to the presence of boundaries, and its impact on polymer behavior may be directly observed experimentally by means of the evanescent wave-induced fluorescence method\cite{Rondel}, which can probe the polymer concentration in the depletion layer adjacent to the walls. Coil stretching of dilute flexible polymers in a flow, diffusion and density profiles, as well as ``slip'' effects near walls have been treated theoretically\cite{deG,Aubert,Onuki,Biller} and by computer simulations\cite{Goh,Diaz,Duering}, and as we shall demonstrate below, many of these early results compare favorably with what we observe for EP in the present investigation. In the present study we employ a dynamic Monte Carlo algorithm in order to study EP properties under shear flow. The flow of the system in a semi-infinite slit of thickness $D$ is induced by applying an external field $F$ whose magnitude changes linearly across the slit and which is parallel to the hard walls of the container. The jump rate of the monomers thus becomes biased along the $x$-axis, and a flow of the system through the periodic boundary sets in. One should emphasize that such an investigation should focus on the linear response in a laminar shear flow.
MC methods cannot account for hydrodynamic interactions in principle, and the transition from laminar to turbulent flow can be simulated by means of Molecular Dynamics (MD) only. The linear response breaks down at field intensities at which the maximum flow velocity is attained, i.e., when all $100 \%$ of the random jumps along the $x$-axis are forced to occur, say, in the positive direction. Any further increase of the field $F$ will then fail to accelerate the particles further. Even with these limitations, however, it appears that this kind of MC simulation of EP in a shear flow is warranted, given the considerably longer time periods and larger system sizes MC methods can handle as compared to MD. All Monte Carlo studies of EP so far have been performed on a cubic lattice, either exploiting an analogy of the Potts model of magnetism to random self-avoiding walks on a lattice\cite{MilchevPotts,ML}, or using the Bond Fluctuation (BFL) Model\cite{WMC2,AMYRDL}. These lattice models were developed and extensively used for monodisperse systems of conventional polymers and are known to faithfully reproduce their dynamic (Rouse) behavior. For the purpose of shear flow studies a disadvantage of these models, due to the discrete structure of the lattice, appears obvious: monomers would block each other on the lattice at higher shear rates, random jumps would have to be of the size of single monomers only, and, last but not least, the artificial cubic symmetry would predetermine ordering effects along the three major axes of the lattice\cite{WPB}, thereby casting doubt on possible phase transitions into liquid crystalline order. In the present work we employ an off-lattice model of EP, designed to overcome these and other shortcomings of previous lattice models and to serve in examining the role of polymer (semi-)flexibility.
An off-lattice model should be a better tool in dynamic studies of a broader class of soft condensed matter systems, where the bifunctionality of the chemical bonds might be extended to polyfunctional bonds, as is the case in gels and membranes. A comprehensive comparison of this off-lattice algorithm to earlier lattice models\cite{WMC2} shows that all properties of EP derived in former investigations are faithfully reproduced in the continuum too.
\section{Introduction} \label{introduction} Carbon is one of the most fascinating elements in nature: it can form many different crystal structures with diverse electronic properties, such as C$_{60}$,~\cite{C60} nanotubes,~\cite{nanotube} graphene,~\cite{Graphene-scotch} graphite, and diamond. Among them, graphene is one of the most amazing materials, as it hosts Dirac points in its low-energy electronic structure, described by $H= v \vec{k} \cdot \vec{\sigma}$, where $v$ is the velocity, $\vec{k}=(k_x, k_y)$ is the momentum and $\vec{\sigma}$ are the Pauli matrices. This novel electronic state leads to many interesting phenomena, such as the unconventional quantum Hall effect, large magnetoresistance, and unusual optical properties, which make graphene potentially useful.~\cite{Graphene} The 2D Dirac cone is fragile, and two conditions are required to protect it: (1) the absence of spin-orbit coupling (SOC), and (2) the presence of inversion symmetry. The first condition is naturally satisfied in graphene, because its SOC strength is negligible ($\sim 10^{-3}$ meV).~\cite{Yao} If SOC in graphene is taken into account, it opens up a gap at the Fermi level and leads to a quantum spin Hall insulator (i.e., a 2D topological insulator).~\cite{Kane} The second requirement is, however, very strong: it is satisfied only in the presence of A-B sublattice symmetry, which can easily be broken, leading to a normal insulating state, similar to that in a BN nanosheet. As proposed by A. L. Mackay and H. Terrones,~\cite{mackay} graphene can be extended into three-dimensional (3D) space to form 3D networks by placing graphitic tiles consisting of 4- to 8-membered rings onto a Schwarz minimal surface. Hereafter, we call such a 3D all-carbon allotrope a Mackay-Terrones crystal (MTC). A Schwarz minimal surface is a 3D periodic minimal surface whose mean curvature $H=(k_1+k_2)/2$ is zero and whose Gaussian curvature $K=k_1k_2$ is negative everywhere.
Here $k_1$ and $k_2$ are the principal curvatures. There are various Schwarz minimal surfaces, such as the primitive (P), diamond (D) and gyroid (G) surfaces. One type of MTC based on the P-surface is shown in Fig. 1. Different from C$_{60}$-like fullerenes, which have positive Gaussian curvature, an MTC has negative Gaussian curvature and is periodically connected. Such a 3D network of $sp^2$-bonded carbon has unique properties, such as a high surface-to-volume ratio and remarkable porosity, which have stimulated extensive studies.~\cite{view, bio} Theoretically, MTC has been shown to be dynamically stable and to require a lower formation energy than C$_{60}$.~\cite{dft_mackay, vanderbilt} Experimentally, a saddle-like nano-carbon sheet, the main structural component of MTC, has been successfully synthesized.~\cite{saddle} Similar negatively curved $sp^2$ networks have been observed in spongy carbon~\cite{spongy} and in negative replicas of zeolite.~\cite{zeolite} Recently, high-quality 3D nanoporous graphene fabricated using nanoporous Ni as a template shows a structure very similar to MTC,~\cite{chenlab1, chenlab2} making its synthesis very promising. On the other hand, the topological properties of the band structure of these all-carbon MTCs remain unexplored and are the main subject of this paper. We will show that such all-carbon MTCs can host nontrivial electronic states, including topological node-lines and 3D Dirac points, which are distinct from those of the 2D counterpart graphene. \section{Results} \label{Results} We concentrate on the MTC formed from the Schwarz minimal P-surface. As shown in Fig. 1, a stable structure with a simple cubic lattice in the $Pm\bar{3}m$ space group and 176 atoms per unit cell was obtained by Tagami {\it et al.} in Ref.~\onlinecite{ours} and labeled 6-1-1-p. We have employed the software package OpenMX~\cite{openmx} for the first-principles calculations.
It is based on norm-conserving pseudopotentials and pseudo-atomic localized basis functions, which makes it highly efficient for MTCs with more than a hundred atoms per cell. The choice of pseudopotentials, the pseudo-atomic orbital basis set C6.0-s2p2d1 and the sampling of the Brillouin zone with a $10\times10\times10$ grid have been carefully checked. After full structural relaxation, we obtain the lattice constant $a$=14.48 \AA, and the diameters of the pipes and pores are around 9.876 \AA$ $ and 5.629 \AA, respectively, in good agreement with the earlier results.~\cite{ours} The electronic band structure of this crystal, calculated within the local density approximation (LDA), is shown in Fig. 1(d). We find that this crystal is a semimetal with band crossings around the Fermi level, similar to the massless Dirac cone in graphene, but in fact very different -- the key issue of this paper. \subsection{Band structure} Detailed analysis of the band structure reveals that: (1) The occupied and unoccupied low-energy bands are triply degenerate at $\Gamma$, with T$_{1g}$ and T$_{2u}$ symmetry, respectively. The former are even while the latter are odd under spatial inversion. Moving from $\Gamma$ to the R point, the triple degeneracies are recovered, but the energy ordering of the two sets is exchanged, leading to the so-called band inversion, which is one of the key ingredients for topological insulators.~\cite{TIreview, TIreview-2} Due to the band inversion, band crossings occur along both the $X-R$ and $R-M$ paths, as seen in Fig. 1(d). (2) When SOC is included in the calculation, a gap opens up at the band crossings, leading to a 3D strong topological insulator with $Z_2$ index (1;111)~\cite{Z2} if the lower half of the anti-crossing bands is treated as occupied. However, similar to graphene, the SOC splitting is small (around 0.13 meV, or 1.5 K), and can be neglected at temperatures above 1.5 K.
\begin{figure} \includegraphics[scale=0.6]{Fig1-eps-converted-to.pdf} \caption{(Color online) (a) The Schwarz minimal P-surface in a 2$\times$2$\times$2 supercell. (b) Top view of the 6-1-1-p MTC in a 2$\times$2 supercell. (c) Bulk and (001)-surface Brillouin zones, as well as the high-symmetry k-points. (d) Band structure from the first-principles calculation. The two triply degenerate eigenstates at $\Gamma$ and R with T$_{1g}$ and T$_{2u}$ symmetry representations are marked. The band inversion between them can be easily seen.} \label{MTCstructure} \end{figure} \begin{figure} \includegraphics[scale=0.6]{Fig2-eps-converted-to.pdf} \caption{(Color online) (a) Band structure from the effective tight-binding model calculation, which reproduces all the features of Fig. 1(d). (b) The Fermi surface consists of three lotus-root-like rings. These rings are centered at the R point and parallel to the $k_x$=$\frac{\pi}{a}$, $k_y$=$\frac{\pi}{a}$ and $k_z$=$\frac{\pi}{a}$ planes, respectively. They are formed by electron pockets (blue) and hole pockets (red) connected by nodal points at the Fermi energy. } \label{tbband} \end{figure} The low-energy bands near the Fermi level are formed by the overlap of the molecular orbitals with T$_{1g}$ and T$_{2u}$ symmetry. Since each isolated carbon cluster in the MTC has approximately spherical symmetry, these molecular orbitals can be viewed as ``atomic orbitals" with $g$- and $f$-wave symmetry, which are further split under the cubic crystal field. Therefore, the T$_{1g}$ sector consists of the $g_{xy(x^2-y^2)}$, $g_{yz(y^2-z^2)}$ and $g_{zx(z^2-x^2)}$ orbitals, while the T$_{2u}$ sector contains the $f_{x(y^2-z^2)}$, $f_{y(z^2-x^2)}$, and $f_{z(x^2-y^2)}$ orbitals. Thus, these six hypothetical atomic orbitals are used as the basis set to reproduce the low-energy physics of this system.
A Slater-Koster tight-binding (TB) Hamiltonian has been constructed, and the on-site energy levels, as well as the hopping parameters, are obtained by fitting the band structure from the first-principles calculations (see Appendix for details). The triply degenerate T$_{1g}$ and T$_{2u}$ bands at $\Gamma$ have eigenenergies $E_g+4V_{ggp}+2V_{ggd}$ and $E_f-4V_{ffd}$, respectively. Those at R are $E_g-4V_{ggp}-2V_{ggd}$ and $E_f+4V_{ffd}$ due to the nearest-neighbor hopping. Here, $E_g$ and $E_f$ are the on-site energies for the $g$ and $f$ orbitals, $V_{ggp}$ and $V_{ggd}$ are the hopping parameters among $g$ orbitals, and $V_{ffp}$ and $V_{ffd}$ are those for the $f$ orbitals. From this analysis, we learn that the band inversion, i.e., the switching of the $g$ (T$_{1g}$) and $f$ (T$_{2u}$) orbitals between the $\Gamma$ and R points, is due to the strong energy dispersion (i.e., the large hopping parameters). As shown in Fig. 2, this TB model reproduces all the features of the bands, including the band topology, with the fitted Slater-Koster parameters (in units of eV) $E_g$=-0.12, $E_f$=0.19, $V_{ffp}$=0.019, $V_{ffd}$=-0.075, $V_{fgp}$=0.05, $V_{fgd}$=0.0, $V_{ggp}$=-0.035, $V_{ggd}$=-0.055. The mean square error is minimized to 0.0016 eV$^2$ with sampling k-points along the high-symmetry path shown in Fig. 1(d). Artificially reducing the hopping parameters by 50\% (e.g., by expanding the lattice constant) eliminates the band inversion, with the T$_{1g}$ states lower than the T$_{2u}$ states at the R point. This also indicates that the band inversion in this system is robust. \subsection{Topological Node-Lines and 3D Dirac Points} Interestingly, the band crossings in MTC form node-lines rather than nodal points. In other words, the band crossings occur along certain closed loops in the 3D momentum space, generating three roughly circular node-lines around the R point, as shown in Fig. 2.
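The band inversion can be checked directly from the level formulas above. The following sketch simply plugs the quoted fitted parameters into the quoted $\Gamma$- and R-point eigenenergies (it is not part of the original fitting procedure) and confirms that the T$_{1g}$--T$_{2u}$ ordering swaps between $\Gamma$ and R, and that halving the hoppings removes the inversion:

```python
# Sketch: plug the fitted Slater-Koster parameters (in eV, from the text)
# into the level formulas for the triply degenerate states at Gamma and R.
E_g, E_f = -0.12, 0.19
V_ggp, V_ggd, V_ffd = -0.035, -0.055, -0.075

E_T1g_Gamma = E_g + 4 * V_ggp + 2 * V_ggd   # -0.37 eV
E_T1g_R     = E_g - 4 * V_ggp - 2 * V_ggd   # +0.13 eV
E_T2u_Gamma = E_f - 4 * V_ffd               # +0.49 eV
E_T2u_R     = E_f + 4 * V_ffd               # -0.11 eV

# Band inversion: T1g lies below T2u at Gamma but above it at R.
print(E_T1g_Gamma < E_T2u_Gamma)  # True (normal ordering at Gamma)
print(E_T1g_R > E_T2u_R)          # True (inverted ordering at R)

# Halving all hoppings (cf. the 50% reduction discussed above)
# restores the normal ordering at R, i.e. removes the inversion.
s = 0.5
print(E_g - s * (4 * V_ggp + 2 * V_ggd) < E_f + s * 4 * V_ffd)  # True
```

The inversion thus survives a sizeable reduction of the hoppings, consistent with the statement that it is robust.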
These node-lines are protected by two factors: one is the coexistence of time-reversal (T) and spatial inversion (P) symmetry, and the other is that the SOC is negligible. With the coexistence of P and T symmetries, there exists a gauge choice under which the spinless Hamiltonian is completely real valued (see Appendix for details). Now we will show that for this system, if there is an energy level-crossing of two bands at a momentum $\mathbf{k}_0$, a stable node-line will unavoidably appear. Around the crossing point, the two-level 2$\times$2 Hamiltonian can be written in the following general form: \begin{equation} \mathcal{H}=d_0(\vec{k})+d_{x}(\vec{k}) \cdot \sigma_{x}+d_{y}(\vec{k}) \cdot \sigma_{y}+d_{z}(\vec{k}) \cdot \sigma_{z}, \end{equation} where the Pauli matrices $\sigma_{i}$ ($i$=$x, y, z$) act in the two-band space. Without loss of generality, the coefficients $d_{i}(\vec{k})$ ($i$=$0,x,y,z$) are all real functions of $\vec{k}$. The eigenenergies of $\mathcal{H}$ are \begin{equation} E(\vec{k})=\pm\sqrt{d_{x}^2(\vec{k})+d_{y}^2(\vec{k})+d_{z}^2(\vec{k})}+d_0(\vec{k}), \end{equation} and energy degeneracy occurs when the three conditions $d_{i}(\vec{k})$=0 ($i$=$x,y,z$) are simultaneously satisfied by the three parameters $\vec{k}=(k_x,k_y,k_z)$ in the 3D momentum space. As mentioned above, the Hamiltonian can be chosen to be real valued, leading to $d_y=0$. The remaining $d_{0}(\vec{k})$, $d_{x}(\vec{k})$ and $d_{z}(\vec{k})$ can be expanded around $\vec{k}_0$, and the locations of the crossing points are determined by $d_{x}(\vec{k})\approx \delta_x+{{\vec v_x}}\cdot(\vec{k}-\vec{k}_0)=0$ and $d_{z}(\vec{k})\approx \delta_z+{{\vec v_z}}\cdot(\vec{k}-\vec{k}_0)=0$, where ${\vec v_i}=\vec\nabla_{\vec{k}} d_i(\vec{k})$ and the $\delta_i$ denote small perturbative terms that respect both T and P symmetries. In the generic case, the above two equations define a line in the vicinity of $\vec{k}_0$, with its direction determined by ${\vec v_x}\times{\vec v_z}$.
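The geometric content of this argument can be sketched numerically: the two linearized conditions are planes in k-space, and their intersection is a line along ${\vec v_x}\times{\vec v_z}$. The vectors and offsets below are arbitrary illustrative values, not data from the MTC calculation:

```python
import numpy as np

# The linearized crossing conditions
#   d_x(k) ~ delta_x + v_x . (k - k0) = 0,
#   d_z(k) ~ delta_z + v_z . (k - k0) = 0
# are two planes in 3D k-space; generically they intersect in a line.
rng = np.random.default_rng(0)
v_x, v_z = rng.normal(size=3), rng.normal(size=3)  # illustrative gradients
delta_x, delta_z = 0.1, -0.2                        # illustrative offsets

# One point q0 on the solution set (measured from k0), via least squares:
A = np.vstack([v_x, v_z])
q0, *_ = np.linalg.lstsq(A, -np.array([delta_x, delta_z]), rcond=None)

# Step along the claimed direction v_x x v_z: still a solution,
# so the crossing points form a line along the cross product.
t = np.cross(v_x, v_z)
q1 = q0 + 0.7 * t / np.linalg.norm(t)
print(abs(delta_x + v_x @ q1) < 1e-10, abs(delta_z + v_z @ q1) < 1e-10)  # True True
```

Since the cross product annihilates both gradients, moving along it keeps both conditions satisfied, which is exactly why the degeneracy traces out a line rather than a point.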
Therefore, the generic solution set of band crossing points in the 3D k-space is a closed loop. Any external perturbation that preserves T, P and translational symmetry can only shift or distort, but not eliminate, the nodal loops. The topologically stable node-line in MTC is protected by P and T alone; no other symmetry is required. The additional mirror symmetry of the present system merely forces the node-lines to lie in the $k_{z}$ (or $k_x, k_y$)=$\frac{\pi}{a}$ plane, and the cubic symmetry leads to three such in-plane node-lines, as found in our calculations (Fig. 2). The node-lines need not be flat in energy; they can disperse in k space according to the $d_0({\bf k})$ term (which breaks particle-hole symmetry). Different from other proposals for topological node-lines,~\cite{burkov} the appearance of node-lines in MTC is very stable and does not require fine tuning of any parameters. This mechanism for generating topological node-lines in three-dimensional materials requires only T and P symmetry and sufficiently weak SOC, and can therefore be applied to a large class of materials consisting mainly of light elements. It is now clear that this 3D MTC differs from graphene in that it is a semimetal with node-lines in the 3D momentum space in the presence of both T and P symmetries. The situation becomes even more interesting if P symmetry is further broken. In that case, following the discussion above, band crossings in general require the three conditions $d_i({\bf k})=0$ on the three parameters, leading to isolated points in the 3D k-space. These are nothing but the 3D Dirac semimetals discussed recently.~\cite{Na3Bi, Cd3As2, Na3Biexp, Cd3As2exp,Cd3As2expHasan, Cd3As2expCava} On the other hand, compared with other proposals for Dirac semimetals, the 3D Dirac points here are topologically stable and do not require protection by any crystalline symmetry.
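The collapse of a loop into isolated points can be illustrated with a counting sketch (not a symmetry-faithful model of MTC). Take the toy coefficients $d_x=k_z$, $d_z=M-B(k_x^2+k_y^2+k_z^2)$ used later in the text, which give a node-line at $k_x^2+k_y^2=M/B$ in the $k_z=0$ plane, and switch on a hypothetical term $d_y=\alpha k_x$ that spoils the reality condition, mimicking the effect of P breaking; $M$, $B$ and $\alpha$ are arbitrary illustrative values:

```python
import numpy as np

# Toy coefficients: d_x = k_z, d_z = M - B k^2 (node-line at kx^2+ky^2 = M/B,
# kz = 0). The hypothetical d_y = alpha*kx adds a third condition for
# degeneracy, so the loop must collapse to isolated points.
M, B, alpha = 1.0, 1.0, 0.3   # illustrative values

def gap(kx, ky, kz):
    dx = kz
    dy = alpha * kx
    dz = M - B * (kx**2 + ky**2 + kz**2)
    return 2.0 * np.sqrt(dx**2 + dy**2 + dz**2)

# d_x = d_y = d_z = 0 now forces kz = 0, kx = 0, ky = +-sqrt(M/B):
# two isolated gapless (Dirac) points survive.
print(gap(0.0, 1.0, 0.0), gap(0.0, -1.0, 0.0))          # 0.0 0.0

# Away from those two points, the former node-line is gapped out.
phis = [p for p in np.linspace(0.0, 2 * np.pi, 200)
        if abs(p - np.pi / 2) > 0.2 and abs(p - 3 * np.pi / 2) > 0.2]
print(min(gap(np.cos(p), np.sin(p), 0.0) for p in phis) > 0.1)  # True
```

This is precisely the counting stated in the text: three conditions on three momentum components generically select isolated points.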
Similar to the situation in graphene, finite SOC will open a gap at the Dirac points and make the system a topological insulator. In fact, although our calculated structure has inversion symmetry, most known real samples of MTC~\cite{chenlab1,chenlab2} have abundant defects and orientation disorder, which should break inversion symmetry. The plausible existence of these stable 3D Dirac points has been indicated by density-of-states~\cite{chenlab1} and heat-capacity measurements.~\cite{privatecommuniction} If T symmetry is further broken in the system, we expect Weyl semimetal states, which have been extensively studied but not yet realized experimentally.~\cite{wan, HgCrSe,multilayerTRI, multilayerTRB} \begin{figure} \includegraphics[scale=0.6]{Fig3-eps-converted-to.pdf} \caption{(Color online) (a) Band crossings of the two bands near the Fermi level form a node-line (green) in the $k_z$=$\frac{\pi}{a}$ plane. (b) The crossing occurs at different eigenenergies, as indicated by the color scale; the greener, the lower in energy. } \label{nodeline} \end{figure} \begin{figure} \includegraphics[scale=0.5]{Fig4-eps-converted-to.pdf} \caption{(Color online) The (001)-surface state. (a) The nearly flat surface band is nestled between two solid Dirac cones, which are the projection of one of the node-line circles, as indicated in the inset (red circle). The other two node-line rings are projected onto two orthogonal diameters (green lines). (b) The surface density of states. (c) The wave function of the surface state indicated by the arrow decays rapidly into the bulk. (d) The eigenenergy distribution of the surface flat band nestled inside the projected node-line circle, which resembles a vibrational mode of a ``drumhead". The mixing of surface and bulk states leads to discontinuities in this plot. } \label{surf} \end{figure} \subsection{Fermi surface and surface flat band} The two crossing bands within the $k_z$=$\frac{\pi}{a}$ plane obtained from the TB Hamiltonian are plotted in Fig. 3.
In general, the band crossings do not all occur at the same energy; they disperse over about 25 meV. Alternating electron and hole pockets form where the band crossing lies below or above the Fermi level, resulting in a lotus-root-like Fermi surface instead of a dispersionless line. This topologically stable node-line semimetal state can host nontrivial surface states.~\cite{Ryu_2002PRL, JETP932011, JETP942011, 2011arXiv1111.4627V, burkov} For the (001) surface, the three node-line rings are projected onto a ring and two orthogonal diameter segments inside it, as shown in Fig. 4(a). The (001)-surface state is calculated from the six-band TB model using both the Green's function method and the slab-model method.~\cite{MRS_weng:9383312} There is a nearly flat surface band nestled inside the projected node-line ring, with a band width of about 40 meV due to the particle-hole asymmetry. The peak-like surface density of states contributed by this nearly flat band is clearly seen in Fig. 4(b); such a peak has been proposed as an important route to high-temperature surface superconductivity.~\cite{PhysRevB.83.220503, 2014arXiv1409.3944V} The layer-resolved weight of the wave function for the surface flat band is shown in Fig. 4(c). It penetrates just three layers into the bulk, with most of the weight on the surface layer. The surface localization of these flat bands is well resolved for those separated from the bulk bands. The nestled flat surface states have small dispersion, and their eigenenergy distribution in the surface BZ, shown in Fig. 4(d), resembles a vibrational mode of a ``drumhead". Such ``drumhead"-like states are readily detectable by angle-resolved photoemission spectroscopy or scanning tunneling microscopy. The topological node-line state, as well as its surface flat band, can be understood by studying an effective 2$\times$2 toy model Hamiltonian.
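The drumhead surface band of the toy model written below ($d_x=k_z$, $d_y=0$, $d_z=M-Bk^2$) can be reproduced in a minimal slab calculation, discretizing $k_z\rightarrow -i\partial_z$ by finite differences on a finite chain. This is a sketch, not the six-band calculation used for Fig. 4; $M=B=1$ and the grid spacing and slab thickness are arbitrary illustrative choices:

```python
import numpy as np

# Slab sketch of the toy model H = k_z*sx + (M - B k^2)*sz:
# k_z -> -i d/dz, discretized by central differences on N sites (spacing a).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
M, B, N, a = 1.0, 1.0, 60, 0.5   # illustrative parameters

def min_abs_energy(kpar2):
    """Smallest |E| of the slab Hamiltonian at fixed in-plane kx^2+ky^2."""
    onsite = (M - B * kpar2 - 2.0 * B / a**2) * sz
    hop = (B / a**2) * sz - (1j / (2.0 * a)) * sx     # couples site j to j+1
    H = np.kron(np.eye(N), onsite)
    H += np.kron(np.diag(np.ones(N - 1), 1), hop)
    H += np.kron(np.diag(np.ones(N - 1), -1), hop.conj().T)
    return np.min(np.abs(np.linalg.eigvalsh(H)))

# Inside the projected node-line circle (kpar^2 < M/B): a near-zero-energy
# "drumhead" surface state; outside: a full gap and no mid-gap state.
print(min_abs_energy(0.25))   # close to zero (surface state)
print(min_abs_energy(2.0))    # order-one bulk gap
```

The mid-gap level appears only for in-plane momenta inside the projected circle, which is the slab-model counterpart of the analytic statement below that the zero-energy surface states fill $\bar{k}_x^2+\bar{k}_y^2<\frac{M}{B}$.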
Taking $d_x$=$k_z$, $d_y$=0 and $d_z$=$M-B(k_x^2+k_y^2+k_z^2)$, the Hamiltonian gives a node-line determined by $k_x^2+k_y^2=\frac{M}{B}$ in the plane $k_z$=$0$; obviously $\frac{M}{B} >0$ is required. The topology of this effective continuum bulk Hamiltonian has been analyzed~\cite{mong} (see Appendix for details) and found to yield topologically protected (001) surface states with dispersionless zero eigenenergy inside the projected node-line circle $\bar{k}_x^2+\bar{k}_y^2=\frac{M}{B}$. Here ($\bar{k}_x$, $\bar{k}_y$) denotes a k-point in the (001) surface Brillouin zone. As mentioned above, $d_0(\vec{k})$ determines the energy dispersion of the node-line, as well as of the surface flat band, though in practice the detailed dispersion of the surface states is also influenced by the surface potential.~\cite{MRS_weng:9383312} \section{Discussion} We find that 6-1-1-p is not the only MTC hosting such a node-line semimetal state. The MTC with the structure labeled 6-1-2-p~\cite{ours} also has such a nontrivial topological state (as shown in the Appendix). The differences are: (1) The band inversion happens at the M point, and the $Z_2$ index is (1;000) when the even weaker SOC splitting (about 0.03 meV, compared with 0.136 meV in 6-1-1-p) is considered. (2) The low-energy physics around the Fermi level can also be described by six atomic-like molecular orbitals, but they are T$_{1u}$ ($p_x$, $p_y$ and $p_z$) and T$_{2g}$ ($d_{xy}$, $d_{yz}$ and $d_{xz}$). A similar tight-binding model on the simple cubic lattice can likewise reproduce all of its electronic structure. (3) There are also three mutually perpendicular node-line circles, centered at the M point instead of the R point, and a similar surface state with a nearly flat band is obtained. Therefore, it is most plausible that more 3D MTCs can host such a node-line semimetal state. Similar node-lines have also been found in an optimally tuned photonic crystal composed of the gyroid,~\cite{lufu} the Schwarz minimal G-surface.
Other proposed carbon systems include Bernal graphite~\cite{GP_Mikitik_2006PRB,GP_Mikitik_2008LTP} and the hyper-honeycomb lattice.~\cite{2014arXiv1408.5522M} A carbon gyroid~\cite{PhysRevB_Gsurf} is found to be a metal with a Dirac cone in the conduction bands well above the Fermi level. Node-lines are also proposed in Dirac or Weyl superconductors.~\cite{FanZhang_2014PRL} \section{Conclusion} \label{Conclusion} In summary, based on first-principles calculations, we have predicted that a family of 3D all-carbon allotropes, the Mackay-Terrones crystals, can host a nontrivial topological node-line semimetal state, which is protected by both time-reversal symmetry and inversion symmetry after band inversion. When such a bulk node-line is projected onto the surface to form a circle, a nearly flat band is nestled inside it. Such a ``drumhead"-like state is an ideal playground for many interaction-induced nontrivial states, such as superconductivity and fractional topological insulator states. Further, if the inversion symmetry is broken, the node-lines will evolve into stable 3D Dirac points. Two examples of such MTCs with stable structures have been discussed. These predictions can most probably be tested directly in future experiments. \section{Acknowledgments} H.M.W., Z.F. and X.D. acknowledge support from the National Natural Science Foundation of China, the 973 program of China (No. 2011CBA00108 and 2013CB921700) and the "Strategic Priority Research Program (B)" of the Chinese Academy of Sciences (No. XDB07020100). H.M.W. thanks Tohoku University for its hospitality; part of this work was done there. Y.K. acknowledges the Russian Megagrant project, grant No. 14.B25.31.0030. Both Y.L. and Y.K. are supported by JST, CREST, ``A mathematical challenge to a new phase of material sciences" (2008-2013). {\it Note:} During the review of this work, we noticed a similar work by Y. Chen {\it et al}.,~\cite{2015arXiv150502284C} in which the node-lines, the nestled nearly flat surface bands and the stable 3D Dirac nodes due to inversion symmetry breaking proposed in this manuscript are also obtained for another carbon system. \bibliographystyle{unsrt}