\section{Introduction} The hydrogen economy, which stands as one of the most promising pillars of the future of clean energy, strongly depends on two key reactions for the production of hydrogen and the generation of green energy: the hydrogen evolution reaction (HER) -- whose global process in acidic media is H$^+$ + e$^-$ $\rightarrow$ 0.5H$_2$ --, and the oxygen reduction reaction (ORR) -- whose global process in acidic media is O$_2$ + 4(H$^+$ + e$^-$) $\rightarrow$ 2H$_2$O --~\cite{Vesborg2015,Shao2016}. Without an appropriate catalyst, these processes take place at high temperatures and pressures, which hinders the widespread use of hydrogen in industrial applications. So far, the role model catalyst for the HER and the ORR is platinum, but its high cost and limited availability drive the research to look for more effective and less expensive electrocatalysts~\cite{Shao2016,Hansen2021}. Traditionally, three different strategies have been used to search for new materials that fulfill these criteria: the introduction of surface defects~\cite{Fu2019,ZamoraZeledn2021,Mani2008,STRASSER2016166,YANG201772}, the exploration of different facets~\cite{Li2015,Duan2011}, and alloying~\cite{Li2018, SHI2018442,Liu2018,ELDEEB2015893,LIN2015274,Tian2018,YING2014214,CHEN2014380}. As an alternative, the application of elastic strains to modify the catalytic properties of materials has been comparatively less explored, but there is recent evidence of promising results in metals~\cite{Shuttleworth2017,Shuttleworth2016,Shuttleworth20177,Shuttleworth2018,Mollaamin2008,Raman2018,Shen2017,Verga2018,Pati2011} and other materials~\cite{Grabow2006,Wang2018b}. Indeed, mechanical strains may have large effects on different physical and chemical properties through the modification of the electronic structure. 
It has been shown that elastic strains can minimize defect formation in halide perovskites~\cite{Kim2020,Saidaminov2018}, prevent non-radiative recombination~\cite{Jones2019}, and even increase device efficiency~\cite{Liu2021}. The application of elastic strain can also enhance corrosion~\cite{Shi2020} and oxidation processes~\cite{Pratt2013}. On transition metals, numerous studies show that mechanical strain can produce important changes in their reactivity~\cite{Bhattacharjee2016,Grunze1982,Rao1991,Ruban1997,Cheng1995,Xin2014}. The adsorption energy of the reactants constitutes one of the best descriptors in many catalytic processes and the effect of mechanical strains on the adsorption energy of different metals has been analyzed in various publications~\cite{Shuttleworth2017,Shuttleworth2016, Shuttleworth20177,Shuttleworth2018}, including our recent study of the effects of elastic strain on the adsorption of H, O, and OH onto the surfaces of 11 transition metals~\cite{MartnezAlonso2021}. However, the influence of each reaction intermediate should be taken into account to assess the catalytic activity in processes where many steps are involved, such as the ORR, and such an analysis is still missing for the HER and the ORR on transition metals. For instance, it is known that the application of tensile (compressive) strains can decrease (increase) the adsorption energy of H, O, and OH on transition metal surfaces~\cite{MartnezAlonso2021,2021}, but the influence of these changes on the catalytic activity has not been explored systematically for the HER and the ORR. Within this context, the present article analyzes the effect of mechanical strains on the catalytic properties of 8 fcc (Ni, Cu, Pd, Ag, Pt, Au, Rh, Ir) and 5 hcp (Co, Zn, Cd, Ru, Os) transition metals for the HER and the ORR. 
It was found that elastic strains modify the catalytic activity of all the studied metals: compressive strains were found to favor the catalytic properties of Pt, Pd, Rh, Ni, Ir, Co, Ru, and Os in the HER and of Pt, Pd, Cd, Ir, Zn, Rh, Cu, Ni, Co, Ru, and Os in the ORR, while tensile strains improved the catalytic properties of Cu, Au, and Ag in the HER, and of Au and Ag in the ORR. Nevertheless, the metals presented different sensitivities to the effect of elastic strains on the catalytic activity, which depended on the rate limiting step. The optimum strain to attain the maximum catalytic activity was determined either by the maximum strain that can be applied before reaching the mechanical stability limit or by a change of the rate limiting step from one reaction to another. Thus, the results of this paper rationalize the mechanisms that determine the influence of mechanical strains on the catalytic activity of transition metals and provide a framework to explore their effect on other materials. \section{Methodology} The variation of the adsorption energy with strain can be used to modulate the activity of a catalyst following Sabatier's principle~\cite{Sabatier}, which states that the adsorption energy should be neither too high nor too low for reactions passing through an adsorbed intermediate. If the adsorption energy is too high (endothermic), adsorption is slow and limits the overall rate, whereas if it is too low (strongly exothermic), the catalyst surface becomes poisoned and desorption limits the rate. In terms of hydrogen and oxygen electrocatalysis, this principle leads to the conclusion that the free energy of adsorption should be close to zero at the equilibrium potential~\cite{Viswanathan2012,Xie2017}. If Sabatier's principle is the only factor that governs the catalytic process, the plot of the reaction rate versus the energy of adsorption of the intermediate species leads to a volcano-shaped curve~\cite{Nrskov2004,Nrskov2005}. 
Starting from a low, negative (exergonic) free energy of adsorption, $G_\mathrm{ads}$, the catalytic activity initially rises with $G_\mathrm{ads}$, leading to the ascending branch of the volcano. The rate passes through a maximum around $G_\mathrm{ads}$ = 0, and then starts to decrease as $G_\mathrm{ads}$ becomes more endergonic in the descending branch of the volcano. The magnitude of $E_\mathrm{ads}$ -- obtained from DFT calculations -- can be used to determine the overall rate of the reaction according to \begin{equation} G_\mathrm{ads} = E_\mathrm{ads} + E_\mathrm{ZPE} - T\Delta S \label{ecgibbs} \end{equation} \noindent where $G_\mathrm{ads} $, $E_\mathrm{ads} $, $E_\mathrm{ZPE}$, and $\Delta S$ stand for the variation of the free and adsorption energies, the zero point energy and the entropy, respectively, during the adsorption of the intermediate species, and $T$ is the absolute temperature. This relationship should be evaluated for each intermediate species that appears in the catalytic process (only H in the HER reaction and O and OH in the ORR), taking into account that the process with the highest free energy barrier will limit the rate of the reaction. The metal surfaces of the thirteen metals were subjected to normal and shear stresses in the surface plane and the adsorption energies were calculated in the most favorable adsorption site (FCC for fcc (111) metals and HCP for hcp (0001) metals). Mixed boundary conditions are imposed to solve the elastic problem in the DFT calculations. They include imposed strains in the slab plane and zero stresses on the free surface. 
The deformation gradient ${\mathbf F}$ applied to the supercell was \begin{equation} \begin{array}{ccc} {\mathbf F}= \begin{pmatrix} 1+\epsilon_1 & \gamma & 0\\ 0 & 1+\epsilon_2 & 0\\ 0 & 0 & 1 \end{pmatrix} \end{array} \end{equation} \noindent where $\epsilon_1$ and $\epsilon_2$ stand for the normal strains along the $x$ and $y$ directions, while $\gamma$ stands for the shear distortion in the $xy$ plane. Uniaxial deformation corresponds to $\epsilon = \epsilon_1$ with $\epsilon_2 = \gamma = 0$, while biaxial deformation corresponds to $\epsilon = \epsilon_1 = \epsilon_2$ with $\gamma = 0$. \subsection{Hydrogen Evolution Reaction} The HER is the cathodic reaction in the process of water splitting and plays a key role in the production of hydrogen by dissociation of the water molecule. It is a classic example of a reaction with transfer of two electrons through the Volmer-Heyrovsky or Volmer-Tafel mechanisms~\cite{Jiao2015}. In acidic solutions, the kinetics of the reaction is governed by the adsorption of hydrogen on the surface. The adsorption process (Volmer) limits the kinetics if hydrogen binds weakly on the catalyst surface, while desorption (Heyrovsky/Tafel) is the limiting process otherwise. Thus, the adsorption energy of hydrogen can be used to determine the optimum catalyst for the HER. The global process in acidic media can be represented as \begin{center} \ce{H+ + e- ->1/2H2} \end{center} \noindent but the mechanism involves first the electrochemical hydrogen adsorption (Volmer reaction) followed by the electrochemical (Heyrovsky reaction) and/or chemical (Tafel reaction) hydrogen desorption reactions \begin{center} \ce{H+ + $*$ + e- -> H$^*$} \end{center} \begin{center} \ce{H$^*$ + H+ + e- ->$*$ + H2} \end{center} \begin{center} \ce{2H$^*$ -> 2$*$ + H2} \end{center} \noindent where $*$ represents a site on the surface of the catalyst and H$^*$ the adsorbed hydrogen atom. 
A more detailed diagram of the different steps is shown in Figure 1(a) of the supporting information. The reaction path for the HER is shown in Figure \ref{mecanismos}(a) and equation \eqref{ecgibbs} can be used to compute the free energy associated with the adsorption process, $G_\mathrm{ads}$, including the differences in zero-point energies between products and reactants, $\Delta E_\mathrm{ZPE}$. These zero-point energies were calculated for the adsorption of H on Cu(111) and on Pt(111) and their values were 0.04 eV in both cases, in agreement with the one previously reported in the literature for the adsorption of H on Cu(111)~\cite{Nrskov2005}. Details about these calculations can be found in section two of the supporting information. The same value of $\Delta E_\mathrm{ZPE}$ was used for all the metals studied below. \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{mecanismos.pdf} \caption{(a) Reaction path for the HER. (b) Reaction path for the ORR (dissociative mechanism). The free energies have been calculated for \{111\} Pt adsorbed at FCC positions without applied strain.} \label{mecanismos} \end{figure} \noindent The vibrational entropy is determined by the contribution of 0.5H$_2$ \begin{align} \Delta S = - \frac{1}{2}S^{0}_{H_2} \end{align} \noindent where $S^{0}_{H_2}$ = 130.68 J mol$^{-1}$ K$^{-1}$~\cite{entropy} is the entropy of H$_2$ in the gas phase at standard conditions (300 K and 1 bar). \noindent Thus, $G_\mathrm{ads}$ at $T$ = 300 K can be expressed as \begin{align} G_\mathrm{ads_H} = E_\mathrm{ads_H} + 0.24 \ eV \label{gibbsH} \end{align} \noindent $G_\mathrm{ads_H}$ depends only on the energy of adsorption of hydrogen according to equation (\ref{gibbsH}) and the catalytic activity of the HER can be measured by the exchange current density, $i_0$. 
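The 0.24 eV constant in equation \eqref{gibbsH} follows directly from the zero-point-energy and entropy terms of equation \eqref{ecgibbs}; the arithmetic can be checked in a few lines of Python (the conversion factor from molar entropy to eV per molecule is the Faraday constant, 96485 J mol$^{-1}$ per eV):

```python
# Numeric check of the constant in G_adsH = E_adsH + E_ZPE - T*dS.
# All input values are taken from the text.
S0_H2 = 130.68                 # J mol^-1 K^-1, gas-phase entropy of H2 (300 K, 1 bar)
T = 300.0                      # K
E_ZPE = 0.04                   # eV, zero-point-energy difference for adsorbed H
EV_PER_J_MOL = 1.0 / 96485.0   # Faraday constant converts J/mol to eV/molecule

# dS = -(1/2) S0_H2 for the adsorption step, so -T*dS = +(T/2) S0_H2
minus_T_dS = 0.5 * T * S0_H2 * EV_PER_J_MOL
constant = E_ZPE + minus_T_dS
print(f"G_adsH = E_adsH + {constant:.2f} eV")   # recovers the 0.24 eV of eq. (gibbsH)
```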
The exchange current density describes the electron transfer between the electrode (which in our case would be the catalyst) and the solution and can be calculated following the strategy developed by Norskov {\it et al.}~\cite{Nrskov2005}. If $G_\mathrm{ads_H} <$ 0, the proton transfer is an exothermic process and the exchange current density can be expressed as \begin{align} i_0 = -e \hspace{0.05 cm} k_0 \hspace{0.05 cm} \frac{1}{1 + \exp(- G_\mathrm{ads_H} /kT)} \label{i1} \end{align} \noindent where $e$ is the electron charge and $k_\mathrm{0}$ = 200 s$^{-1}$ site$^{-1}$~\cite{Nrskov2004} is a constant independent of the metal that includes all the effects related to the reorganization of the solvent during the transfer of protons to the surface. $k$ and $T$ stand for the Boltzmann constant and the absolute temperature, respectively. Conversely, if $G_\mathrm{ads_H}$ $>$ 0, the proton transfer is endothermic, and the exchange current density is given by \begin{align} i_0 = -e \hspace{0.05 cm} k_0 \hspace{0.05 cm} \frac{1}{1 + \exp(- G_\mathrm{ads_H} /kT)} \hspace{0.1 cm} \exp(- G_\mathrm{ads_H} /kT) \label{i2} \end{align} \subsection{Oxygen Reduction Reaction} The ORR is the critical reaction for the generation of energy from hydrogen in a fuel cell. Currently, the kinetics of the ORR is limited by the low rate of the reduction reaction of oxygen in acidic media at the cathode \begin{center} \ce{O2 + 4(H+ + e-) -> 2H2O} \end{center} \noindent and it has to be enhanced by means of Pt-based catalysts. The high cost and low availability of Pt hinder the widespread application of proton exchange membrane fuel cells (PEMFC) in the transportation sector. The ORR in a PEMFC is carried out through the transfer of four electrons via either an associative or a dissociative pathway~\cite{PhysRevLett.93.116105}. The catalytic activity is limited by the electron-proton transfer to O$^*$ or to OH$^*$ if oxygen is strongly adsorbed on the surface. 
Otherwise, the catalytic activity is controlled by the electron-proton transfer to O$^*$ or by the fracture of the O-O bond, depending on the applied potential~\cite{PhysRevLett.93.116105}. The energies associated with these processes are controlled by the electronic structure of the catalyst and can be modified by the application of elastic strains~\cite{MartnezAlonso2021}. Two different mechanisms, dissociative or associative, can be used to describe the ORR. The O$_2$ molecule dissociates before it is hydrogenated in the former, whereas hydrogenation precedes dissociation in the latter. A more elaborate diagram of both mechanisms is shown in Figure 1(b) of the supporting information. The steps of both mechanisms are shown below. Dissociative mechanism: \begin{center} \ce{1/2O2 + $*$ -> O$^*$} \end{center} \begin{center} \ce{O$^*$ + H+ + e- ->HO$^*$} \end{center} \begin{center} \ce{HO$^*$ + H+ + e- -> H2O + $*$ } \end{center} Associative mechanism: \begin{center} \ce{O2 + $*$ -> O2$^*$} \end{center} \begin{center} \ce{O2$^*$ + H+ + e- ->HO2$^*$} \end{center} \begin{center} \ce{HO2$^*$ + H+ + e- -> H2O + O$^*$ } \end{center} \begin{center} \ce{O$^*$ + H+ + e- -> HO$^*$} \end{center} \begin{center} \ce{HO$^*$ + H+ + e- -> H2O + $*$ } \end{center} Although each mechanism involves different reactions, the reaction rate is independent of the mechanism and this analysis will be based on the dissociative mechanism, which is simpler. Two different adsorbed species have to be considered in this case, O$^*$ and HO$^*$ (notice the difference with the HER, where the only intermediate adsorbate is H$^*$). The reaction path for the dissociative mechanism is depicted in Figure \ref{mecanismos}(b). Three different free energies, $G_\mathrm{0}$, $G_\mathrm{1}$, and $G_\mathrm{2}$, which correspond to the three steps of the dissociative mechanism shown above, are involved in the process. As detailed in the supporting information, following eq. 
\eqref{ecgibbs}, they can be expressed as \begin{align} G_\mathrm{0}=G_\mathrm{O^*+ H_2} - G_\mathrm{H_2O + *}= E_\mathrm{adsO'} + 0.01\ eV \label{G0sim} \end{align} \begin{align} G_\mathrm{1}= G_\mathrm{HO^* + \frac{1}{2}H_2} - G_\mathrm{O^* + H_2} =E_\mathrm{adsOH} - E_\mathrm{adsO'} -0.26\ eV \label{G1sim} \end{align} \begin{align} G_\mathrm{2}= G_\mathrm{H_2O + *} - G_\mathrm{HO^* + \frac{1}{2}H_2} =- E_\mathrm{adsOH} +0.25\ eV \label{G2sim} \end{align} \noindent where $E_\mathrm{adsO'}$ is defined below and it is assumed that $T$ = 300 K. $G_\mathrm{0}$ and $G_\mathrm{2}$ depend only on the adsorption energies of O and OH, respectively, whereas $G_\mathrm{1}$ depends on the difference between them. Additionally, the effect of the cell potential has to be taken into account because these steps involve electron transfer. The free energy at pH = 0, a pressure of 1 bar, and a temperature of 300 K, including electrode potential corrections, is given by \begin{align} G(U)= G + neU \label{potential} \end{align} \noindent where $U$ is the cell potential, $n$ the number of electrons that flow from the medium to the electrode (if the electrons move in the opposite direction, $n$ changes sign), and $e$ the charge carried by a single electron, $-1$. At equilibrium, $U$ equals the equilibrium potential $U_0$ = 1.23 V~\cite{Nrskov2004}. 
Considering the electrons that flow in each of the three steps, the corresponding free energies at the equilibrium potential, atmospheric pressure, and 300 K are: \begin{align} G_\mathrm{0}(U_0)= E_\mathrm{adsO'} + 0.01eV -2U_0 = E_\mathrm{adsO'} -2.45 \ eV \label{G0pot} \end{align} \begin{align} G_\mathrm{1}(U_0) =E_\mathrm{adsOH} - E_\mathrm{adsO'} -0.26eV + U_0= E_\mathrm{adsOH} - E_\mathrm{adsO'} + 0.97 \ eV \label{G1pot} \end{align} \begin{align} G_\mathrm{2}(U_0)= - E_\mathrm{adsOH} +0.25eV + U_0= - E_\mathrm{adsOH} + 1.48\ eV \label{G2pot} \end{align} The catalytic activity, $A$, can be expressed as~\cite{Nrskov2004} \begin{align} A = kT \hspace{0.1 cm} \log \big[\exp(-G(U_0)/kT)\big] \label{activityorr} \end{align} \noindent where $G(U_0) = \max\{G_0(U_0), G_1(U_0), G_2(U_0)\}$ corresponds to the activation energy of the process that limits the catalytic reaction. \subsection{Adsorption energies} The adsorption energies of H ($E_\mathrm{adsH}$), O ($E_\mathrm{adsO}$), and OH ($E_\mathrm{adsOH}$) on thirteen transition metals subjected to different elastic strains were calculated using density functional theory by Martínez-Alonso {\it et al.}~\cite{MartnezAlonso2021}. The adsorption energy $E_\mathrm{ads{O'}}$ takes into account that, in the catalytic process, the adsorbed oxygen comes from water according to the reaction: \begin{center} \ce{H2O + $*$ -> O$^*$ + H2} \end{center} \setlength{\parskip}{0mm} \noindent and it is expressed as \begin{align} E_\mathrm{adsO'} = E_\mathrm{O^*} + E_\mathrm{H_{2}} - E_\mathrm{H_{2}O} - E_{*} = E_\mathrm{adsO} + E_\mathrm{H_{2}} - E_\mathrm{H_{2}O} + \frac{1}{2} E_\mathrm{O_2} \end{align} \noindent where $E_\mathrm{H_{2}}$, $E_\mathrm{O_{2}}$, and $E_\mathrm{H_{2}O}$ account for the total energy of the hydrogen, oxygen, and water molecules in the gaseous state, respectively. 
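With eqs. \eqref{gibbsH}, \eqref{i1}, \eqref{i2}, and \eqref{G0pot}--\eqref{G2pot} in hand, the activity estimates reduce to a few lines of arithmetic on the tabulated adsorption energies. A minimal Python sketch follows; note two assumptions not given in the text: the surface site density of $1.5\times10^{15}$ sites cm$^{-2}$ used to convert the exchange current to A cm$^{-2}$ is a typical close-packed-surface value, and the sign of the prefactor $-e k_0$ is dropped so that the logarithm is defined.

```python
import math

KT = 8.617e-5 * 300.0        # Boltzmann constant times T, in eV, at 300 K

def log_i0_HER(E_adsH, sites_per_cm2=1.5e15):
    """log10 of the HER exchange current density in A cm^-2, eqs. (i1)/(i2).

    sites_per_cm2 is an assumed, typical close-packed-surface value used
    to convert per-site current to per-area current; the magnitude of i0
    is taken so that the logarithm is defined."""
    e = 1.602e-19            # C, elementary charge
    k0 = 200.0               # s^-1 site^-1, metal-independent prefactor
    G = E_adsH + 0.24        # eq. (gibbsH) at 300 K
    i0 = e * k0 * sites_per_cm2 / (1.0 + math.exp(-G / KT))
    if G > 0:                # endothermic branch, eq. (i2)
        i0 *= math.exp(-G / KT)
    return math.log10(i0)

def orr_steps(E_adsOp, E_adsOH, U0=1.23):
    """Free energies (eV) of the three dissociative ORR steps at the
    equilibrium potential, eqs. (G0pot)-(G2pot); E_adsOp stands for
    E_adsO'. Returns the three energies and the index of the
    rate-limiting (largest) one."""
    G = (E_adsOp + 0.01 - 2 * U0,
         E_adsOH - E_adsOp - 0.26 + U0,
         -E_adsOH + 0.25 + U0)
    return G, max(range(3), key=lambda i: G[i])

# Pt for the HER (E_adsH = -0.49 eV) gives log i0 close to -5.5
print(log_i0_HER(-0.49))
# Cu for the ORR: G = (-1.72, 0.38, 1.34) eV, so G2 limits the rate
print(orr_steps(0.73, 0.14))
```

Under these assumptions, Pt reproduces the $\log i_0 \approx -5.5$ of Table \ref{gher} and Cu recovers the $G_2$-limited behavior listed in Table \ref{gorr}.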
The metal surfaces of the thirteen metals were subjected to three different types of strain: biaxial and uniaxial (tension and compression) as well as shear, and the adsorption energies at the most favorable adsorption site (FCC for fcc (111) surfaces and HCP for hcp (0001) surfaces) were determined. The maximum strains were established taking into account the mechanical stability limits for each metal and surface obtained from phonon calculations and varied from -5\% compression to 8\% tension in Ni, Cu, Pd, Ag, Pt, Au, Rh, Ir, Co, Ru, and Os and from -2\% compression to 8\% tension in Cd and Zn. \parskip 5pt \section{Results} \subsection{Volcano plots at mechanical equilibrium} Gerischer~\cite{Gerischer2010} and Parsons~\cite{Parsons_1958} were the first to observe that certain models for the activity of the HER resulted in a volcano-like curve. Nonetheless, it was Trasatti~\cite{TRASATTI1972163} who collected experimental data and represented the first volcano plot for the HER. Since then, many authors have represented the exchange current density as a function of the free energy of hydrogen adsorption for the HER and the catalytic activity against the adsorption energy of O for the ORR~\cite{Nrskov2005,Quaino2014,Greeley_2006,10.1093/nsr/nwx119}. These plots provide a very intuitive and visual representation of the catalytic performance of transition metals. The peak of the volcano corresponds to the optimum catalytic properties, the ascending branch corresponds to metals in which the reactions are limited by the desorption of the products, and the descending branch represents the processes that are limited by the adsorption of the reactants. According to Sabatier's principle~\cite{Sabatier}, an equilibrium between adsorption of the reactants and desorption of the products should be attained to achieve the maximum activity. 
The adsorption energy of hydrogen at equilibrium in the most favorable adsorption site, $E_\mathrm{adsH}$, is given in Table \ref{gher}~\cite{MartnezAlonso2021}. This information can be used to calculate the free energy and the corresponding exchange current density according to eqs. \eqref{i1} and \eqref{i2}, which are also included in Table \ref{gher}. Volcano plots at mechanical equilibrium of 23 transition metals for the HER obtained from the DFT calculations are shown in Figure \ref{volcanoseq}(a). A detailed comparison between the experimental data from the literature and the calculated exchange current densities as a function of the free energy of adsorption of H is shown in section 3 of the supporting information. These results are in good agreement with the ones found in the literature~\cite{Nrskov2005,Quaino2014,Greeley_2006,10.1093/nsr/nwx119} with the exceptions of Cu and Ir, whose experimental activity is lower than that computed according to Figure \ref{volcanoseq}(a). In particular, the $G_\mathrm{adsH}$ calculated for Cu is around 0.2 eV lower than previously reported, which places Cu on the cusp of the volcano plot. \begin{centering} \begin{table}[!] \centering \caption{Adsorption energy of hydrogen at equilibrium in the most favorable adsorption site, $E_\mathrm{adsH}$, for different transition metals. The free energy of adsorption, $G_\mathrm{adsH}$, at 300 K and the logarithm of the exchange current density, $\log i_0$, for the HER according to eqs. 
\eqref{i1} and \eqref{i2} are also included.} \begin{tabular}{cccc} \toprule Metal & $E_\mathrm{adsH}$ (eV) & $G_\mathrm{adsH}$ (eV) & $\log i_0$ (A cm$^{-2}$) \\ \midrule Pt & -0.49 & -0.25 & -5.52 \\ Au & 0.09 & 0.33 & -7.06 \\ Cu & -0.25 & -0.01 & -1.52 \\ Ag & 0.17 & 0.41 & -8.27 \\ Pd & -0.54 & -0.30 & -6.40 \\ Ni & -0.52 & -0.28 & -6.07 \\ Ir & -0.39 & -0.15 & -3.83 \\ Rh & -0.53 & -0.29 & -6.27 \\ Cd & 0.81 & 1.05 & -19.39 \\ Zn & 0.67 & 0.91 & -16.92 \\ Co & -0.51 & -0.27 & -5.97 \\ Nb & -0.89 & -0.65 & -12.46 \\ Mo & -0.74 & -0.50 & -9.92 \\ W & -0.75 & -0.51 & -10.09 \\ Re & -0.80 & -0.56 & -10.97 \\ Ru & -0.64 & -0.40 & -8.08 \\ Fe & -0.59 & -0.35 & -7.26 \\ Os & -0.59 & -0.35 & -7.29 \\ Hf & -1.08 & -0.84 & -15.77 \\ Ta & -0.99 & -0.75 & -14.11 \\ V & -0.90 & -0.66 & -12.53 \\ Cr & -1.06 & -0.82 & -15.35 \\ Tc & -0.72 & -0.48 & -9.58 \\ \bottomrule \label{gher} \end{tabular} \end{table} \end{centering} The HER is a one-step process and the reaction path is controlled by the adsorption energy of hydrogen, whereas the ORR includes three steps with their associated free energies, $G_0 (U_0)$, $G_1 (U_0)$, and $G_2 (U_0)$. The corresponding values of the adsorption energies of O and OH were calculated previously on the most favorable site of the thirteen transition metals ~\cite{MartnezAlonso2021} and they are given in Table \ref{gorr}. They were used to determine the free energies for each of the steps and to calculate the activity according to eq. \eqref{activityorr}, which are also included in Table \ref{gorr}. The corresponding volcano plot for the catalytic activity as a function of E$_\mathrm{adsO'}$ is plotted in Figure \ref{volcanoseq}(b). The highest free energy establishes the rate limiting step for each metal and determines the catalytic activity. The rate limiting step in most transition metals is the desorption of HO$^*$ (\ce{HO$^*$ + H+ + e- -> H2O + $*$ }) and the highest free energy is $G_2 (U_0)$. 
The exceptions to this general situation are Au, where the rate limiting step is the adsorption of O$^*$ (\ce{1/2O2 + $*$ -> O$^*$}) with the associated free energy $G_0 (U_0)$, and Ir and Os, in which the rate limiting step is the formation of HO$^*$ (\ce{O$^*$ + H+ + e- ->HO$^*$}) and the highest free energy is $G_1 (U_0)$. Pt exhibits the maximum catalytic activity in the volcano plot, followed by Pd, Ag, and Ir, in agreement with experimental data~\cite{Wang2018}. In addition, these results agree with previous simulations by Norskov {\it et al.}~\cite{Nrskov2004}, the only discrepancy being Ir, for which they report that the rate limiting step corresponds to the desorption of HO. Additionally, it is interesting to note that $G_0 (U_0)$ is very low for Nb, Mo, W, Re, Fe, Hf, Ta, V, Cr, and Tc. This corresponds to a very favorable oxygen adsorption, which could lead to the formation of oxides. Finally, it should be noted that the metals on the left side of the periodic table (Hf, Cr, Ta, V, Nb, Mo, and W) with less than half-filled d-bands present the worst catalytic properties for the HER and the ORR. These results also agree with previous studies~\cite{Nrskov2005,Quaino2014,Greeley_2006,10.1093/nsr/nwx119}. \begin{center} \begin{table}[!] \caption{Adsorption energy of O ($E_\mathrm{adsO'}$) and OH ($E_\mathrm{adsOH}$) at equilibrium at the most favorable adsorption site for different transition metals. The free energies of the three steps of the dissociative mechanism ($G_0 (U_0)$, $G_1 (U_0)$, and $G_2 (U_0)$) at 300 K and the catalytic activity for the ORR according to eq. 
\eqref{activityorr} are also included.} \hspace*{-1.3cm} \begin{tabular}{ccccccc} \toprule Metal & $E_\mathrm{adsO'}$ (eV) & $E_\mathrm{adsOH}$ (eV) & $G_0 (U_0)$ (eV) & $G_1 (U_0)$ (eV) & $G_2 (U_0)$ (eV) &Activity \\ \midrule Pt & 1.66 & 1.19 & -0.79 & 0.29 & 0.49 & -0.23 \\ Au & 2.43 & 1.52 & 0.06 & -0.02 & -0.04 & -0.69 \\ Cu & 0.73 & 0.14 & -1.72 & 0.38 & 1.34 & -0.62 \\ Ag & 2.04 & 0.72 & -0.41 & -0.35 & 0.76 & -0.36 \\ Pd & 1.25 & 0.85 & -1.20 & 0.57 & 0.63 & -0.29 \\ Ni & 0.29 & 0.03 & -2.16 & 0.71 & 1.45 & -0.68 \\ Ir & 0.79 & 0.76 & -1.66 & 0.94 & 0.72 & -0.44 \\ Rh & 0.45 & 0.27 & -2.00 & 0.78 & 1.21 & -0.56 \\ Cd & 1.17 & 0.31 & -1.28 & 0.11 & 1.17 & -0.54 \\ Zn & 0.79 & 0.37 & -1.66 & 0.54 & 1.11 & -0.52 \\ Co & -0.01 & -0.20 & -2.46 & 0.78 & 1.68 & -0.78 \\ Nb & -2.17 & -1.80 & -4.62 & 1.34 & 3.28 & -1.53 \\ Mo & -1.67 & -1.18 & -4.12 & 1.46 & 2.66 & -1.23 \\ W & -1.65 & -1.01 & -4.10 & 1.61 & 2.49 & -1.16 \\ Re & -1.19 & -0.64 & -3.64 & 1.52 & 2.12 & -0.98 \\ Ru & -0.41 & -0.21 & -2.86 & 1.17 & 1.69 & -0.78 \\ Fe & -0.98 & -1.03 & -3.43 & 0.93 & 2.51 & -1.16 \\ Os & -0.31 & 0.15 & -2.76 & 1.42 & 1.33 & -0.66 \\ Hf & -3.67 & -2.54 & -6.12 & 2.10 & 4.02 & -1.87 \\ Ta & -2.39 & -1.85 & -4.84 & 1.51 & 3.33 & -1.55 \\ V & -2.99 & -1.54 & -5.44 & 2.41 & 3.02 & -1.41 \\ Cr & -2.52 & -2.15 & -4.97 & 1.34 & 3.63 & -1.69 \\ Tc & -1.19 & -0.78 & -3.64 & 1.38 & 2.26 & -1.05 \\ \bottomrule \label{gorr} \end{tabular} \end{table} \end{center} \begin{figure}[!] \centering \includegraphics[width=0.8\textwidth]{VOLCANOSEQ_FINAL.pdf} \caption{Volcano plots at mechanical equilibrium for (a) the HER as a function of $G\mathrm{_{adsH}}$ and (b) the ORR as a function of E$_{adsO'}$. 
Cu* in the HER volcano plot corresponds to the free energy obtained from the adsorption energy calculated with the DFT functional SCAN~\cite{Sun2015}.} \label{volcanoseq} \end{figure} \subsection{Volcano plots under mechanical strain} The effect of elastic strains on the catalytic activity of transition metals for the HER and the ORR is represented by means of volcano plots in Figures \ref{volcanosstrain}(a) and (b), respectively. These plots were constructed from the adsorption energies of the biaxially strained slabs of Au, Pt, Pd, Ag, Rh, Co, Ni, Cu, Ir, Ru, and Os from -5\% compression to 8\% tension, and of Cd and Zn from -2\% compression to 8\% tension, which can be found in Figures 6 and 7 of~\cite{MartnezAlonso2021}. These transition metals were chosen due to their position close to the top of the volcanoes in Figure \ref{volcanoseq} and to their widespread use as heterogeneous catalysts. The black dot on each line in Figure \ref{volcanosstrain} stands for the catalytic activity when the applied strain is zero. The lines for Cd and Zn are not included for the HER in Figure \ref{volcanosstrain}(a) because they are too far from the top. The catalytic activity of the transition metals on the left side of the volcanoes is limited by the desorption of the products. Compressive strains increase the adsorption energy of the reactants and prevent the poisoning of the catalyst surface, improving the catalytic activity. On the contrary, the catalytic properties of metals on the right side of the volcanoes are limited by the adsorption of the reactants, and can be improved by the application of tensile strains, which enhance the adsorption of the reactants by reducing the adsorption energy. In all cases, the effect of elastic strains on the catalytic activity is limited by the mechanical stability limits of the surface of the catalyst. The effect of elastic strains on the catalytic activity (measured by $\log i_0$) for the HER depends on the metal. 
The largest variations in the catalytic properties with strain (Figure \ref{volcanosstrain}(a)) are observed for Ir and Au, where the activity (measured by $\log i_0$) changes by 4.5 and 2.7 units, respectively, when the surface slab is subjected to -5\% biaxial compression (Ir) or to 8\% biaxial tension (Au) with respect to the unstrained slab. On the contrary, Co is largely insensitive to mechanical strains because $E_\mathrm{adsH}$ does not vary with the elastic strain~\cite{MartnezAlonso2021}. There are no well-established trends for the effect of mechanical strains on the adsorption energies, but our previous investigation showed that the influence of the mechanical strain on the adsorption energy increases with the number of electrons in the valence band for the elements in the 4th period of the periodic table (3d). Nevertheless, the effect of mechanical strains on the adsorption energy is much smaller for the 5th and 6th periods~\cite{MartnezAlonso2021}. It should also be noted that the limiting mechanism (either hydrogen adsorption or desorption) does not change with the applied strain (the evolution of the catalytic activity with the applied strain remains to the left or to the right of the volcano cusp), with the exception of Cu, in which $E_\mathrm{adsH}$ is very close to 0 in the unstrained slab. In this case, tensile or compressive stresses activate one of the two rate limiting steps and reduce the catalytic activity. The elastic strains also lead to important changes in the catalytic activity of transition metals for the ORR, as shown in Figure \ref{volcanosstrain}(b). The largest ones are found in Pt and Au, which can be driven to the top of the corresponding volcano plot by means of compressive and tensile strains, respectively. 
This effect had already been reported for Pt, which is known to improve its catalytic activity through the application of elastic strains~\cite{EscuderoEscribano2016,2021}, in agreement with our volcano plot. More interestingly, the application of 2\% biaxial tension can bring the catalytic activity of Au (which is known to be a poor catalyst for the ORR) close to that of Pt. In contrast, elastic strains have a minor influence on the catalytic activity of the ORR for Co and Cd. The maximum activity of Ir, Rh, Co, Ni, Cu, Zn, Cd, Pd, Ru, and Os for the ORR is attained when the slab is subjected to the highest compressive strain, while the best catalytic activity of Ag is found in the unstrained condition and the application of tensile or compressive strains always leads to a reduction in activity. \begin{figure}[!] \centering \includegraphics[width=0.8\textwidth]{VOLCANOSSTRAIN_FINAL_RuOs.pdf} \caption{(a) Effect of mechanical strains on the volcano plot for the HER as a function of $G_\mathrm{adsH}$ for thirteen transition metals. The range of activities for Pd, Rh, Ni, Co, Ru, and Os within the rectangle has been plotted to the right, shifted by 0.05 eV along the horizontal axis for the sake of clarity. (b) {\it Idem} for the ORR as a function of $E_\mathrm{adsO'}$. The free energy that limits the reaction rate is indicated at each branch of the volcano plot for each metal. The lines stand for the variation in the catalytic activity of each metal as a function of the applied elastic strain in the range from -5\% biaxial compression to 8\% biaxial tension in Ni, Cu, Pd, Ag, Pt, Au, Rh, Ir, Co, Ru, and Os, and from -2\% biaxial compression to 8\% biaxial tension in Cd and Zn. Black circles indicate the activity at mechanical equilibrium. 
Cd and Zn are omitted from the HER volcano plot because $\log i_0$ is very low.} \label{volcanosstrain} \end{figure} \section{Discussion} The results presented above show how the systematic application of elastic strains can be tailored to optimize the catalytic response of transition metals for the HER and the ORR. In the former case, the catalytic activity is governed by $E_\mathrm{adsH}$, which sets the free energy of adsorption, $G_\mathrm{adsH}$, at a given temperature. The optimum scenario is achieved when $G_\mathrm{adsH} \rightarrow 0$ through the application of compressive or tensile strains, and the limitations of this strategy are set by the sensitivity of $E_\mathrm{adsH}$ to the strain and by the maximum strains that can be applied before the mechanical stability limit is reached, leading to fracture. The largest changes in catalytic activity with strain for the HER were found in Pt, Au, and Ir, while $E_\mathrm{adsH}$ in Co and Ni was very insensitive to elastic strains, so their catalytic activity was not substantially modified. It should be noted that the catalytic activity of Cu is overestimated in the volcano plot for the HER in Figure \ref{volcanoseq}. This discrepancy may be due to the fact that -- according to equation \eqref{gibbsH} -- the exchange current density depends strongly on $G_\mathrm{adsH}$, which in turn is a linear function of $E_\mathrm{adsH}$. Thus, small errors in the calculation of $E_\mathrm{adsH}$ may lead to substantial changes in catalytic activity, particularly near the cusp of the volcano plot. In order to test this conjecture, $E_\mathrm{adsH}$ was calculated again in Pt and Cu using the metaGGA SCAN functional~\cite{Sun2015} and the methodology presented in~\cite{MartnezAlonso2021}. This functional is known to provide more accurate values of the adsorption energy at a much higher computational cost~\cite{Sun2015}. 
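The magnitude of this sensitivity can be illustrated with a minimal numerical sketch (not the code used in this work), assuming the common approximation $G_\mathrm{adsH} \approx E_\mathrm{adsH} + 0.24$ eV and a symmetric volcano $\log i_0 = \log i_0^\mathrm{max} - |G_\mathrm{adsH}|/(k_\mathrm{B}T\ln 10)$; the prefactor and the exact form of equation \eqref{gibbsH} are not reproduced here:

```python
# Minimal sketch of a symmetric HER volcano model (assumptions stated above;
# this is an illustration, not the model implemented in this work).
import math

KB_T = 0.02585  # eV, thermal energy at ~300 K


def g_ads_h(e_ads_h, correction=0.24):
    """Free energy of H adsorption from the DFT adsorption energy (eV)."""
    return e_ads_h + correction


def log_i0(e_ads_h, log_i0_max=0.0):
    """Relative exchange current density on a symmetric volcano (log10 units)."""
    return log_i0_max - abs(g_ads_h(e_ads_h)) / (KB_T * math.log(10))


# A 0.24 eV shift in E_adsH near the cusp moves the predicted activity
# by roughly four orders of magnitude:
delta = log_i0(-0.24) - log_i0(0.0)
print(round(delta, 1))  # 4.0
```

This quantifies why small differences in the computed adsorption energy translate into large changes in the predicted exchange current density near the cusp.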
In the case of Pt, the differences in $E_\mathrm{adsH}$ between the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof exchange-correlation functional and the metaGGA SCAN functional were only 0.03 eV, but they increased to 0.24 eV in the case of Cu. Moreover, these differences remained constant as a function of strain in the case of Pt (Supporting Information, Figure 3). Thus, the predictions of the adsorption energy for Cu seem to be particularly sensitive to the choice of functional, and this explains the overestimation of the Cu activity in the volcano plot of Figure \ref{volcanoseq}. Indeed, Cu moves to the appropriate position in the volcano plot for the HER in Figure \ref{volcanoseq}(a) when the more accurate adsorption energy (denoted by Cu*) is used to calculate the free energy. It should be noted that this sensitivity to the functional could be less important in the case of the ORR, because the activity depends on differences in $E_\mathrm{ads}$ and errors in the calculation of adsorption energies due to the choice of functional tend to cancel each other. This is evidenced by the position of Cu in the volcano plot for the ORR in Figure \ref{volcanoseq}, which is in agreement with the experimental data. However, further investigation of the variations in the adsorption energy with SCAN in the case of Cu is needed. The effect of mechanical strains on the catalytic activity of the ORR is somewhat more complex because it depends on the free energy of the rate limiting reaction, which is the maximum of $G_0$, $G_1$, and $G_2$. $G_0$ depends on $E_\mathrm{adsO}$, $G_1$ on $E_\mathrm{adsOH} - E_\mathrm{adsO}$, and $G_2$ on $-E_\mathrm{adsOH}$, and the influence of the mechanical strains on the respective adsorption energies is different. Tensile strains reduce $E_\mathrm{adsO}$ and $E_\mathrm{adsOH}$, while compressive strains have the opposite effect. 
Moreover, $E_\mathrm{adsO}$ at zero strain is negative for all the analyzed transition metals, while $E_\mathrm{adsOH}$ is positive under the same conditions for all of them except Co (with $E_\mathrm{adsOH}$ = -0.2 eV). As a result, the catalytic activity for the ORR of metals in which the rate is controlled by $G_0$ (adsorption of O$^*$) or $G_2$ (desorption of HO$^*$) is very sensitive to elastic strains, and it increases rapidly with the application of tensile strains in the former case (Au) or compressive strains in the latter (Cu, Ni, Pt, Pd, Rh, Zn, Cd, Co, Ru, and Os). In some hcp metals, such as Cd and Zn, the range of elastic strains before mechanical instabilities develop is reduced and, thus, the tunability of their catalytic activity with mechanical strains is very limited. Finally, the rate limiting free energy in Ir is $G_1$ (change from O$^*$ to HO$^*$), which depends on the difference $E_\mathrm{adsOH} - E_\mathrm{adsO}$. As both energies evolve in the same direction with strain, their difference is not very sensitive to the applied strain (Figure \ref{volcanosstrain}) and the Ir activity varies slowly (as compared with other metals) with deformation. Thus, the efficiency of elastic strains in modifying the catalytic activity depends not only on the catalyst itself but also on the intermediate species that are present in each reaction step. Some reactions are more sensitive to the application of elastic strains than others, which opens the possibility of exploring this mechanism to enhance the activity of many other chemical processes with different intermediate species. In most metals, mechanical strains change the catalytic activity but not the rate limiting mechanism. This is not the case, however, for Au, Pt, Ag, and Os. 
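As a minimal illustration (with hypothetical numbers, not the DFT data of this work), the selection of the rate limiting step can be sketched as the maximum of the three free-energy terms defined above, together with a linear strain dependence of the effective adsorption energies; the additive thermodynamic corrections are assumed to be folded into the inputs:

```python
# Sketch of the rate-limiting-step selection for the ORR described above.
# All numerical values are made up for illustration; only the mapping
# G0 ~ E_adsO, G1 ~ E_adsOH - E_adsO, G2 ~ -E_adsOH follows the text.

def orr_limiting_step(e_ads_o, e_ads_oh):
    """Return (name, value) of the largest of the three free-energy terms."""
    steps = {
        "G0: O adsorption": e_ads_o,
        "G1: O* -> HO*": e_ads_oh - e_ads_o,
        "G2: HO* desorption": -e_ads_oh,
    }
    name = max(steps, key=steps.get)
    return name, steps[name]


def strained(e0, sensitivity, strain):
    """Linear model E(strain) = E0 + sensitivity*strain (tension lowers E)."""
    return e0 + sensitivity * strain


# Hypothetical Au-like inputs: at zero strain the limiting term is G0, so
# tension (which lowers the effective energies) reduces the barrier until
# HO* desorption (G2) takes over, as described for Au in the text.
for eps in (-0.02, 0.0, 0.02, 0.04):
    e_o = strained(0.4, -10.0, eps)    # eV, made-up sensitivity
    e_oh = strained(0.2, -8.0, eps)
    print(eps, orr_limiting_step(e_o, e_oh))
```

The sketch reproduces the qualitative switch of the rate limiting step with strain; the actual free energies and sensitivities come from the DFT calculations.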
The influence of biaxial strains (in the range -5\% to 8\%) on the free energies in the ORR reaction path for Au, Pt, and Ag is plotted in Figures \ref{PATHSPASOS}(a), (b), and (c), respectively. At zero strain, the adsorption of O is the limiting step in Au (Figure \ref{PATHSPASOS}(a)), and this energy barrier increases with compressive strains while $G_1$ fluctuates due to the competition between the adsorption energies of O and HO. As a result, compressive strains reduce the catalytic activity of Au for the ORR. However, the application of tensile strains switches the rate limiting step to the desorption of HO$^*$ because $G_2$ becomes positive and increases rapidly with the tensile strains, while $G_0$ becomes negative and $G_1$ fluctuates between negative and positive values. The optimum catalytic activity is obtained at small tensile strains, around 2\% biaxial tension. In the case of Pt (Figure \ref{PATHSPASOS}(b)), the desorption of HO$^*$ is the limiting step at zero strain, while the adsorption of O is strongly exothermic and the reaction from O$^*$ to HO$^*$ is slightly endothermic. The application of compressive strains rapidly reduces the energy barrier for HO$^*$ desorption, and the rate limiting step changes to the reaction from O$^*$ to HO$^*$, which becomes endothermic. The application of tensile strains increases the energy barrier for HO$^*$ desorption, leading to a continuous reduction in the catalytic activity. Thus, the maximum catalytic activity is achieved for small compressive strains (-1.57\% biaxial compression). The rate limiting step also changes from $G_2$ to $G_1$ with the application of compressive strains in the case of Os. Additionally, unstrained Ag is at the cusp of the volcano plot, and the application of either tensile or compressive strains reduces the activity as one limiting reaction becomes dominant. 
In this case, the rate limiting step is found to be the desorption of HO$^*$ under tensile strains and the O adsorption under compressive strains (Figure \ref{PATHSPASOS}(c)). Finally, the application of strain in Ir does not change the rate limiting step of the ORR, which is the change from O$^*$ to HO$^*$ in all cases (Figure \ref{PATHSPASOS}(d)). $G_1$ decreases slightly with compressive strains and increases slightly with tensile strains, and the optimum catalytic activity is found at the maximum compressive strain allowed by the mechanical stability limits. \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{PATHSPASOS_FINAL.pdf} \caption{Effect of biaxial elastic strains (in the range -5\% to 5\%) on the free energies along the reaction path for the ORR. (a) Au. (b) Pt. (c) Ag. (d) Ir. The rate limiting step at each strain is indicated with an asterisk.} \label{PATHSPASOS} \end{figure} \section*{Conclusions} The effect of elastic strains on the catalytic activity for the HER and the ORR was analyzed at (111) surfaces of eight fcc transition metals (Ni, Cu, Pd, Ag, Pt, Au, Rh, Ir) and (0001) surfaces of five hcp transition metals (Co, Zn, Cd, Ru, Os). The catalytic activity for both reactions was determined from the adsorption energies for H, O, and OH calculated by density functional theory under strain. Tensile, compressive, and shear stresses were applied to the slabs until the mechanical stability limit (given by phonon calculations) was reached. The volcano plots of the catalytic activity for the HER and the ORR in the absence of strain were in good agreement with the experimental data in the literature, validating the theoretical model. It was found that elastic strains led to significant changes in the catalytic activity of most transition metals, which could be rationalized as a function of changes in the free energy of the rate limiting step. 
In the case of the HER, it was found that compressive strains increased the catalytic activity of metals on the ascending branch of the volcano (Pt, Pd, Rh, Ni, Ir, Ru, and Os) because they decreased the energy barrier for H$^*$ desorption, which is the limiting step. On the contrary, tensile strains improved the activity of metals on the descending branch of the volcano (Au and Ag), as they decreased the energy barrier for H adsorption. The largest improvements in activity were found in Au, Ir, and Ag, which combine a large sensitivity of the H adsorption energy to strain with the ability to sustain large mechanical strains without failure. The catalytic activity of the ORR was controlled by the maximum free energy among the reactions in the dissociative mechanism: adsorption of O, reaction from O$^*$ to HO$^*$, and desorption of HO$^*$. In particular, the free energies associated with the adsorption of O and the desorption of HO$^*$ were very sensitive to mechanical strains. Thus, the catalytic activity of Au (controlled by the former) could be enhanced by the application of tensile strains, while that of Cu, Ni, Pt, Pd, Rh, Zn, Cd, Co, Ru, and Os (controlled by the latter) was improved by the application of compressive strains. The optimum catalytic activity of Ag was found in the unstrained condition, and mechanical deformation always reduced the catalytic activity of this metal. Moreover, only small elastic strains could be applied to Cd and Zn before the mechanical instability was reached, so elastic strain engineering is not a suitable strategy to modify the catalytic activity of these metals. Finally, it was also found that the application of elastic strains could change the rate limiting step for the ORR in Au, Pt, Ag, and Os because of the different effects of mechanical deformation on the free energy of the different intermediate reactions. 
\section*{Acknowledgments} This investigation was supported by the MAT4.0-CM project funded by the Madrid region under program S2018/NMT-4381 and by the HexaGB project (reference RTI2018-098245) funded by MCIN/AEI/10.13039/501100011033. Computer resources and technical assistance provided by the Centro de Supercomputaci\'on y Visualizaci\'on de Madrid (CeSViMa) are gratefully acknowledged. Additionally, the authors thankfully acknowledge the computer resources at CTE-Power and Minotauro in the Barcelona Supercomputing Center (projects QS-2021-1-0013 and QHS2021-3-0019). Finally, use of the computational resources of the Center for Nanoscale Materials, an Office of Science user facility, supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Project No. 73377, is gratefully acknowledged. CMA also acknowledges the support from the Spanish Ministry of Education through the Fellowship FPU19/02031. \section*{Conflicts of interest} There are no conflicts of interest to declare. \footnotesize
Q: How to reconnect database if the connection closed in spring jpa? I am using spring-boot, spring-jpa, mysql in my web application.When my application is running for some hours, I always got below exceptions: 2016-07-30 21:27:12.434 ERROR 13553 --- [http-nio-8090-exec-8] o.h.engine.jdbc.spi.SqlExceptionHelper : No operations allowed after connection closed. 2016-07-30 21:27:12.434 WARN 13553 --- [http-nio-8090-exec-5] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 08003 2016-07-30 21:27:12.434 ERROR 13553 --- [http-nio-8090-exec-5] o.h.engine.jdbc.spi.SqlExceptionHelper : No operations allowed after connection closed. 2016-07-30 21:27:12.438 ERROR 13553 --- [http-nio-8090-exec-8] [.[.[.[.c.c.Go2NurseJerseyConfiguration] : Servlet.service() for servlet [com.cooltoo.config.Go2NurseJerseyConfiguration] in context with path [] threw exception [org.springframework.dao.DataAccessResourceFailureException: could not prepare statement; nested exception is org.hibernate.exception.JDBCConnectionException: could not prepare statement] with root cause java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. 
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3119) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3570) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3559) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4110) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2815) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155) ~[mysql-connector-java-5.1.25.jar!/:na] at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2322) ~[mysql-connector-java-5.1.25.jar!/:na] at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:82) ~[hibernate-core-4.3.11.Final.jar!/:4.3.11.Final] I have checked that the database is running well. I have to restart my spring-boot application when that happened. How can I check what the problem is? Why the database connection got closed? If that happened, whether I can re-connect the database? Below is my application.properties: spring.datasource.url=jdbc:mysql://192.168.99.100:3306/test?characterEncoding=utf8 spring.datasource.username=admin spring.datasource.password=123456 spring.datasource.driver-class-name=com.mysql.jdbc.Driver spring.datasource.max-active=150 A: This seems like a common error with MySQL. 1) Add this to your application.properties and see how it goes: spring.datasource.testOnBorrow=true spring.datasource.validationQuery=SELECT 1 testOnBorrow is detailed in the spring doc and this other stackoverflow question. I'm however unable to find a reference on validationQuery in Spring's doc, but it seems to do the trick. 
2) Or, you may use testWhileIdle as suggested here http://christoph-burmeister.eu/?p=2849 He suggests adding this to your application.properties: spring.datasource.testWhileIdle = true spring.datasource.validationQuery = SELECT 1 This solution is also mentioned in the other stackoverflow question; it was just not the accepted answer, but it seems to be the solution for some. 3) In this case, they also added timeBetweenEvictionRunsMillis: spring.datasource.testWhileIdle = true spring.datasource.validationQuery = SELECT 1 spring.datasource.timeBetweenEvictionRunsMillis = 3600000 EDIT: Another stackoverflow question that covers this (with a very complete answer)
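For reference, a consolidated snippet combining options 2) and 3). Note that these property names map to the Tomcat JDBC connection pool that Spring Boot 1.x configures by default; other pools (e.g. HikariCP, the default in Spring Boot 2.x) use different keepalive settings and will ignore them:

```properties
spring.datasource.url=jdbc:mysql://192.168.99.100:3306/test?characterEncoding=utf8
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=true
spring.datasource.validationQuery=SELECT 1
spring.datasource.timeBetweenEvictionRunsMillis=3600000
```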
WeBWorK Problems

showing students which coordinate is wrong in a vector answer

by Darwyn Cook -
Number of replies: 5

I have the following code inside a calc III problem:

    Context("Vector");
    Context("Vector")->variables->are(x=>'Real',y=>'Real',theta=>'Real',phi=>'Real');
    $r = Vector("$x","$y","$z");
    $ru = $r->D('theta')->reduce;
    $rv = $r->D('phi')->reduce;
    $prod = Vector("$ru >< $rv");

$x, $y, $z are parametric equations defined previously. When a student puts in an incorrect answer the answer is marked wrong, but it does not indicate which coordinate is incorrect. I tried adding the showCoordinateHints flag:

    Context("Vector");
    Context("Vector")->variables->are(x=>'Real',y=>'Real',theta=>'Real',phi=>'Real');
    Context("Vector")->flags->set(showCoordinateHints=>1);
    $r = Vector("$x","$y","$z");
    $ru = $r->D('theta')->reduce;
    $rv = $r->D('phi')->reduce;
    $prod = Vector("$ru >< $rv");

but then I got the error "theta is not defined in this context". So I tried placing the set flag before the variables declaration:

    Context("Vector");
    Context("Vector")->flags->set(showCoordinateHints=>1);
    Context("Vector")->variables->are(x=>'Real',y=>'Real',theta=>'Real',phi=>'Real');
    $r = Vector("$x","$y","$z");
    $ru = $r->D('theta')->reduce;
    $rv = $r->D('phi')->reduce;
    $prod = Vector("$ru >< $rv");

which compiles, but does not change the behavior of the answer checker. I also tried setting the flag inside the answer checker:

    ANS($prod->cmp(showCoordinateHints=>1));

which did not seem to have any effect either. P.S. I have used vectors successfully before without this issue. I love math objects!

In reply to Darwyn Cook

Re: showing students which coordinate is wrong in a vector answer
by Davide Cervone -

Darwyn: There are several issues that are affecting your results. I will respond to each in a separate message.

The most important one is that when Vector() has an argument that is a Formula, it returns a Vector-valued formula, and those do not support the coordinate hints. The reason is that the student answer may not be a simple list of coordinates, but rather some more complicated vector equation, where the "coordinates" are not so obvious. When the student provides an answer for a constant-valued answer, the reduced numeric result is shown in the answer preview area, and so the coordinates of a constant vector would show up and make sense. But for Formula answers (vector or otherwise) the student's preview is the formula, and coordinates may not be apparent, so it could be confusing to discuss coordinates in that case. On the other hand, you can achieve the result you want by using a custom answer checker. Here is one approach that is very general:

    $ans = Vector("<t,1-t,3t^2>");

    ANS($ans->cmp(checker => sub {
      my ($correct,$student,$ans) = @_;
      return 1 if $correct == $student;
      if (!$ans->{isPreview}) {
        my $m = $correct->length;
        my $I = Value::Matrix->I($m);
        my @errors = ();
        foreach my $i (1..$m) {
          my $e = Vector($I->row($i));
          push(@errors,"Your ".$correct->NameForNumber($i)." coordinate is incorrect")
            unless $correct.$e == $student.$e;
        }
        Value->Error(join("<BR>",@errors)) if @errors;
      }
      return 0;
    }));

Here, you provide a custom answer checker whose job it is to report the incorrect coordinates. The checker first checks if the student's answer is correct, and returns 1 (correct) in that case. Otherwise, if the student didn't press the "Preview" button (when you would not want to give messages about which are correct), you get the length of the vector ($m) and make an identity matrix of that size ($I).

The rows of that matrix will be the unit coordinate vectors of the correct dimension, and dotting a vector with one of those will extract one of the coordinates of the answer. We can use that fact to check the individual coordinates of the student's answer.

To do so, we loop through the indices of the vector (1..$m) and extract a row of the matrix (the vector that has a 1 in position $i and zeros elsewhere), and dot that with the correct and student answers. If they aren't the same, we store an error message indicating that fact. The message uses the NameForNumber() call to turn "1" into "first", "2" into "second" and so on.

Once all the coordinates are checked, we join all the errors together (with BR tags to force them to be on separate lines) and report the errors. If there were no errors (shouldn't happen), or the student is previewing, simply return 0 to indicate the answer is incorrect.

That should have the effect you want. You could even package this checker in a macro file of your own and use loadMacros() to load it. For example, make a file containing

    sub VectorCoordinateHints {
      my ($correct,$student,$ans) = @_;
      ...
      return 0;
    }
    $VectorCoordinateHints = \&VectorCoordinateHints;

and then use

    ANS($ans->cmp(checker=>$VectorCoordinateHints));

That would make it easier to include into different problems.

Davide

Re: showing students which coordinate is wrong in a vector answer
by Davide Cervone -

You point out that when you do

    Context("Vector");
    Context("Vector")->variables->are(x=>'Real',y=>'Real',theta=>'Real',phi=>'Real');
    Context("Vector")->flags->set(showCoordinateHints=>1);
    $r = Vector("$x","$y","$z");
    $ru = $r->D('theta')->reduce;
    $rv = $r->D('phi')->reduce;
    $prod = Vector("$ru >< $rv");

you get an error about theta not being defined. The reason is the multiple Context("Vector") calls. Each time you say Context("Vector"), the context gets reset to a copy of the basic vector context, losing any changes you may have made to the context. So the first Context("Vector") gets you a copy of the vector context. Then the next line throws that away and gets another copy of the vector context and adds some variables to it. The next line throws that away, gets another copy of the vector context and sets one of its flags. That context does not include the variables you set up in the previous line, since you have discarded that older context. The way to do this is to use Context() with no name in order to get the current context without changing it. So

    Context("Vector");
    Context()->variables->are(x=>'Real',y=>'Real',theta=>'Real',phi=>'Real');
    Context()->flags->set(showCoordinateHints=>1);

would be what you want. Of course, this still won't solve the problem, since vector-valued formulas don't show coordinate hints, but it is the correct way to add variables to and set flags in the same context.

Davide

In reply to Darwyn Cook

Re: showing students which coordinate is wrong in a vector answer
by Davide Cervone -

I also wanted to point out that the line

    $r = Vector("$x","$y","$z");

is probably better as

    $r = Vector($x,$y,$z);

provided the values of $x, $y, and $z are defined after the Context("Vector") line. While the first line works, it is less efficient, since WeBWorK first must convert each of $x, $y, and $z to strings, and then reparse the strings into formulas again when building the vector. The second form avoids the extra steps. The only reason to go through the extra parsing is if the variables were defined in a different context originally. In that case, you might use

    $r = Vector("<$x,$y,$z>");

or even

    $r = Compute("<$x,$y,$z>");

I'm trying to promote the use of Compute() when the input is a string, since it works just like parsing the student's answer and returns a value of the correct type, whatever that may be.

Davide

Re: showing students which coordinate is wrong in a vector answer
by Davide Cervone -

I love math objects!
The church of San Giuseppe Sposo di Maria Vergine (Saint Joseph, Spouse of the Virgin Mary) is a Catholic place of worship in Gandino, subsidiary of the parish church of Santa Maria Assunta in the diocese of Bergamo.

History

The building was erected between 1521 and 1523, originally dedicated also to Saint Roch, and later passed to the management of the confraternity of Saint Joseph, established on 22 June 1596. Maintenance and modernization works were then carried out in the seventeenth century, while the pipe organ was installed only in 1836. The church stood near the convent of the Franciscan friars and the Giovanelli institute run by the Tertiary sisters. The building was decorated by Paolo Micheli in 1602. The church was enlarged in 1604 with the creation of the portico with the raised oratory that forms part of the hall. In 1679 works were carried out to build the choir gallery, while the bell tower was raised to a design by Lorenzo Bettera in 1689.

Description

Exterior

The building stands along the central via Papa Giovanni XXIII, preceded by a pronaos composed of four columns in Zandobbio marble, complete with bases and Ionic capitals, which support the architrave with its frieze and what used to be the rooms of the congregation that managed the church. This part houses three rectangular openings with stone surrounds, intended to light the hall, divided by pilasters with capitals that support the frieze and the eaves of the roof.

Interior

The single-nave interior originally had two altars, in addition to the main one, dedicated to Saint Anne and to Saints Rusticus and Firmus. The nave, with a barrel vault, consists of four bays divided by pilasters complete with capitals supporting the non-walkable cornice, and is lit by the windows on the façade and by other windows above the cornice. The first bay houses two stone columns supporting three small arches, above which are the rooms of the congregation of Saint Joseph, rooms that extend toward the hall. 
The altar in the second bay was built in the seventeenth century and dedicated to Saint Charles Borromeo, later retitled to Saint Francis of Paola; it is in white and green marble and houses the stucco altarpiece and the dressed statue of the titular saint, with the 1847 inscription "Fantoni fecit e Forzenigus restaravit", indicating its execution by the Fantoni workshop of Rovetta. This part is completed by a small elliptical dome with a drum and four small windows that light the altar. The choir gallery and the organ are placed in the third bay on the left side, while on the corresponding right side is the wooden pulpit. The presbytery area occupies the fourth bay, raised above the rest of the hall and preceded by the triumphal arch with the lateral entrances. The main altar is in gilded painted wood, the work of the Manni workshop; its altarpiece was executed in 1754 and holds the sculptural group of the Crucifixion with the Madonna and Saint Joseph, although this is a presentation different from the usual one, which would place Saint John beside the Mother since, according to tradition, Saint Joseph had already died at the time of the crucifixion. The presbytery area ends with the wooden choir in burl, complete with dividing pilasters in walnut, made in 1630 by Paolo Micheli. The interior preserves the terracotta statues of the Lamentation, documented in the registers of the confraternity of Saint Joseph in 1707, and seventeenth-century canvases by unknown authors: Episodes from the life of Saint Francis of Paola, the Purification of the Virgin Mary, Madonna and Child with Saint Charles Borromeo, as well as a fresco depicting the meeting of Tobias and Sarah dating from the sixteenth century.

Notes

Bibliography

See also
Natività della Beata Vergine Maria
Fantoni (family)

External links
Chiese di Gandino
'use strict'; var graphs = require('../controllers/graphs'); // The Package is past automatically as first parameter module.exports = function(Graphs, app, auth, database) { /*app.get('/graphs/example/anyone', function(req, res, next) { res.send('Anyone can access this'); }); app.get('/graphs/example/auth', auth.requiresLogin, function(req, res, next) { res.send('Only authenticated users can access this'); }); app.get('/graphs/example/admin', auth.requiresAdmin, function(req, res, next) { res.send('Only users with Admin role can access this'); }); app.get('/graphs/example/render', function(req, res, next) { Graphs.render('index', { package: 'graphs' }, function(err, html) { //Rendering a view from the Package server/views res.send(html); }); });*/ // Set up JSON API app.get('/graphs', graphs.all); app.get('/graphs/user', graphs.allForUser); app.post('/graphs', auth.requiresLogin, graphs.create); app.get('/graphs/:graphId', graphs.show); app.put('/graphs/:graphId', auth.requiresLogin, graphs.update); app.delete('/graphs/:graphId', auth.requiresLogin, graphs.destroy); // Finish with setting up the graphId param app.param('graphId', graphs.graph); };
package com.woodblockwithoutco.beretained.internal; /** * DO NOT USE. MAY BE CHANGED IN THE FUTURE WITHOUT NOTICE. * Interface for classes that can save/restore it's fields. */ public interface FieldsRetainer<T> { void onCreate(T source); void save(T source); boolean restore(T target); }
Q: Difficulty in JavaScript with the back button of a rotating image banner

Hey community, I'm new here. I'm asking more experienced people for help with the following problem: I want to make an image banner on a website in which the images change every 4 seconds, but there are two buttons (one at the left edge and one at the right) which, when pressed, STOP the setInterval I created and make it necessary to press the buttons to navigate between the images. The problem is that I couldn't program the back button correctly. It works on the first click and from then on it returns an error. I'll leave the code below; I should say in advance that it's a somewhat long piece of code.

let time = 4000,
    currentImageIndex = 0,
    images = document.querySelectorAll("#slider img"),
    max = images.length;

function nextImage() {
    images[currentImageIndex].classList.remove("selected")
    currentImageIndex++
    if(currentImageIndex >= max) currentImageIndex = 0
    images[currentImageIndex].classList.add("selected")
}

function returnImage(){
    images[currentImageIndex].classList.remove("selected")
    currentImageIndex--
    if(currentImageIndex <= 0) currentImageIndex = max
    images[currentImageIndex].classList.add("selected")
}

function button_next(){
    console.log("next")
    nextImage();
    stop();
}

function button_return(){
    console.log("return")
    returnImage();
    stop();
}

var rollInterval = setInterval(() => { nextImage()}, time)

function start() {
    rollInterval
}

function stop() {
    clearInterval(rollInterval)
}

window.addEventListener("load", start)

Helping me with this part is already more than enough and I'd be very grateful; however, if anyone can also tell me whether it's possible to somehow reset the setInterval instead of STOPPING it, that would be much more interesting for my project. I know this function doesn't exist natively.
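A likely cause, sketched below with the DOM parts stubbed out so the index arithmetic can be checked on its own (an illustration, not a drop-in replacement for the page): in `returnImage` the index is reset to `max`, but the valid indices run from 0 to `max - 1`, so a later click reads `images[max]`, which is `undefined`; the `<= 0` check also fires when the index is exactly 0, skipping the first image.

```javascript
// Sketch of the corrected wrap-around logic from the question. In the real
// page, `images` would be the NodeList and `max = images.length`.
let currentImageIndex = 0;
const max = 3; // e.g. three <img> elements inside #slider

function nextIndex() {
  currentImageIndex++;
  if (currentImageIndex >= max) currentImageIndex = 0;
  return currentImageIndex;
}

function prevIndex() {
  currentImageIndex--;
  // Bug in the question: resetting to `max` points past the last image
  // (valid indices are 0..max-1). Wrap to max - 1 instead, and only
  // when the index actually goes below zero.
  if (currentImageIndex < 0) currentImageIndex = max - 1;
  return currentImageIndex;
}

console.log(prevIndex()); // 2  (first click back from image 0)
console.log(prevIndex()); // 1  (second click no longer reads images[max])
```

As for "resetting" the 4-second auto-advance: clear the old interval and immediately start a new one inside each button handler, e.g. `clearInterval(rollInterval); rollInterval = setInterval(nextImage, time);` (this requires `rollInterval` to be declared with `var` or `let` so it can be reassigned, as it already is in the question).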
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,347
{"url":"https:\/\/www.khanacademy.org\/math\/ap-calculus-ab\/ab-limits-new\/ab-1-5a\/v\/limits-of-combined-functions-piecewise","text":"If you're seeing this message, it means we're having trouble loading external resources on our website.\n\nIf you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.\n\nLimits of combined functions: piecewise\u00a0functions\n\nAP.CALC:\nLIM\u20111 (EU)\n,\nLIM\u20111.D (LO)\n,\nLIM\u20111.D.1 (EK)\n,\nLIM\u20111.D.2 (EK)\n\nVideo transcript\n\nwe are asked to find these three different limits I encourage you like always pause this video and try to do it yourself before we do it together so when you do this first one you might just try to find the limit as X approaches negative two of f of X and then the limit as X approaches negative two of G of X and then add those two limits together but you will quickly find a problem because when you find the limit as X approaches negative two of f of X it looks as we are approaching negative two from the left it looks like we're approaching one as we approach x equals negative two from the right it looks like we're approaching three so it looks like the limit as X approaches negative 2 of f of X doesn't exist and the same thing is true of G of X if we approach from the left it looks like we're approaching three which approach from the right it looks like we're approaching one but turns out that this limit can still exist as long as the limit as X approaches negative 2 from the left of the somme f of X plus G of X exists and is equal to the limit as X approaches negative 2 from the right of the sum f of X plus G of X so what are these things well as we approach negative 2 from the left f of X is approaching looks like 1 and G of X is approaching 3 so it looks like we're approaching 1 & 3 so it looks like this is approaching the sum is going to approach 4 and if we're coming from the right f of X looks like it's approaching 3 and G of X 
looks like it is approaching 1 and so once again this is equal to 4 and since the left and right handed limits are approaching the same thing we would say that this limit exists and it is equal to 4 now let's do this next example as X approaches 1 well we'll do the exact same exercise and once again if you look at the individual limits for f of X from the left and the right as we approach 1 this limit doesn't exist but the limit as X approaches 1 of the sum might exist so let's try that out so the limit as X approaches 1 from the left hand side of f of X plus G of X what is that going to be equal to as we approach so f of X as we approach 1 from the left it looks like this is going approaching - I'm just doing this for shorthand and G of X as we approach one from the left it looks like it is approaching zero so this will be approaching two plus zero which is two and then the limit as X approaches 1 from the right hand side of f of X plus G of X is going to be equal to well for f of X as we're approaching one from the right hand side looks like it's approaching a negative one and for G of X as we're approaching one from the right hand side looks like we're approaching a zero again and so here it looks like we're approaching negative one so the left and right hand limits aren't approaching the same value so this one does not exist and then last but not least X approaches 1 of f of X times G of X so we'll do the same drill limit as X approaches 1 from the left hand side of f of X times G of X well here and we could even use the values here we see we was approaching 1 from the left we are approaching 2 so this is 2 and when we're approaching 1 from the left here we're approaching 0 and so this is going to be 2 times we're going to be approaching 2 times 0 which is 0 and then we were approached from the right X approaches 1 from the right of f of X times G of X well we already saw when we're approaching 1 from the right of f of X we are approaching negative 1 but G of X 
approaching 1 from the right is still approaching 0 so this is going to be 0 again so this limit exists we get the same limit when we approach from the left and the right it is equal to 0 so these are pretty interesting examples because sometimes when you think that the component limits don't exist that that means that the sum or the product might not exist but this shows at least two examples where that is not the case\nAP\u00ae is a registered trademark of the College Board, which has not reviewed this resource.","date":"2021-05-09 23:46:20","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.849725604057312, \"perplexity\": 168.9475435096162}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243989018.90\/warc\/CC-MAIN-20210509213453-20210510003453-00133.warc.gz\"}"}
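For readers skimming the auto-generated captions, the three computations in the transcript can be summarized compactly (the one-sided limit values are the ones read off the graphs in the video):

```latex
% Sum at x = -2: both one-sided limits of f+g agree, so the limit exists.
\lim_{x\to -2^-}\!\bigl(f(x)+g(x)\bigr) = 1+3 = 4,\qquad
\lim_{x\to -2^+}\!\bigl(f(x)+g(x)\bigr) = 3+1 = 4
\;\Longrightarrow\; \lim_{x\to -2}\bigl(f(x)+g(x)\bigr) = 4.

% Sum at x = 1: the one-sided limits disagree, so the limit does not exist.
\lim_{x\to 1^-}\!\bigl(f(x)+g(x)\bigr) = 2+0 = 2,\qquad
\lim_{x\to 1^+}\!\bigl(f(x)+g(x)\bigr) = -1+0 = -1.

% Product at x = 1: g -> 0 from both sides, so the product converges.
\lim_{x\to 1^-} f(x)\,g(x) = 2\cdot 0 = 0,\qquad
\lim_{x\to 1^+} f(x)\,g(x) = (-1)\cdot 0 = 0
\;\Longrightarrow\; \lim_{x\to 1} f(x)\,g(x) = 0.
```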
null
null
{"url":"https:\/\/www.numerade.com\/questions\/find-the-energy-released-in-the-fusion-reaction-_12-mathrmh_12-mathrmh-rightarrow_13-mathrmh_11-math\/","text":"Nuclear Physics\n\nParticle Physics\n\n### Discussion\n\nYou must be signed in to discuss.\n\n### Video Transcript\n\nno police culture. You want to find energy released by this Parker fusion reaction involving two deuterium forming a treaty and a hydrogen? You cut. And to do that, we cannot poetic you value that is close to the changing mess. What's right by C squared, not changing message. Define s t meself directions subjected by the mass of the products. Tell us how much off the mess or directors is actually commenter into energy. And it will be the energy just released. No, that IHS, we need to find two off the Maso follow deuterium. You subject that we've met softy treat you as well as the mess off T. I should join you claim. Tell me, Multiply that by C squared. See school. We're gonna use 91 point for right for we ve put you for convenience, Senior using ah, atomic mass units. Team s off the duty room. You think you given to be two points you want full once you are to you miss off Treatem is 3.1649 You in mess off t hundreds of new county is one point usual 78 to 5. 
You putting it into a calculator, you should be able to get value 4.0 tree and\n\nNational University of Singapore\n\nNuclear Physics\n\nParticle Physics","date":"2021-04-15 11:19:13","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2296471893787384, \"perplexity\": 2860.545888964233}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038084765.46\/warc\/CC-MAIN-20210415095505-20210415125505-00238.warc.gz\"}"}
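The arithmetic is hard to follow in the auto-captions, so here is the same computation spelled out. The masses below are standard tabulated atomic masses in u (they appear to be the values the transcript's garbled figures refer to), and 931.494 MeV is the energy equivalent of one atomic mass unit; treat the exact digits as assumptions of this sketch.

```javascript
// Q-value of the fusion reaction 2H + 2H -> 3H + 1H.
// Atomic masses in unified atomic mass units (standard tabulated values).
const mDeuterium = 2.014102; // 2H
const mTritium   = 3.016049; // 3H
const mHydrogen  = 1.007825; // 1H
const uToMeV     = 931.494;  // MeV released per u of mass defect

// Mass defect: total reactant mass minus total product mass.
const deltaM = 2 * mDeuterium - (mTritium + mHydrogen);
const energyMeV = deltaM * uToMeV;
console.log(energyMeV.toFixed(2)); // about 4.03 MeV, matching the video's answer
```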
null
null
{"url":"http:\/\/en.dsplib.org\/content\/resampling_lagrange_ex\/resampling_lagrange_ex.html","text":"Using Farrow filter on the basis of piecewise cubic polynomial interpolation for digital signal resampling\n\nContents\nIntroduction\nIn the previous section we considered Farrow filter for digital signal resampling. The structure of Farrow digital resampling filter on the basis of piecewise cubic polynomial interpolation (Figure 1) is got.\n\nFigure 1. Functional chart of optimized Farrow filter\non the basis of piecewise cubic polynomial interpolation.\nCalculating coefficients of an interpolation polynom requires one multiplication by and two trivial multiplication by .\nIn this section we will consider examples of using the resampling filter to compensate fractional delay, digital interpolation and fractional resampling times.\nRecalculating sample indexes of an input signal in case of piecewise cubic interpolation\nHigh performance of a digital resampler is reached due to using piecewise cubic interpolation and recalculating sample indexes of the input signal , in the interval , as it is shown in Figure 2.\nFigure 2. Recalculating sample indexes of an input signal in the interval\nLet the sampling frequency of the input signal , be equal to . Then the signal , at the resampler filter output will have the following sampling frequency . Besides the first sample of the output signal is delayed concerning the input one by the value .\nHere and elsewhere in this section as well as in the previous section the variable indexes samples of the input signal , and the variable indexes samples of the output signal .\nAs sampling frequencies of the input and output signals are various, the same indexes of the input and output samples correspond to different timepoints. For example, the th sample of the output signal after resampling does not correspond to the th sample of the input signal . Therefore we have to specify what scale we use to index samples. 
We will use three scales: the scale is for timepoints concerning samples of the input signal , the scale is for timepoints concerning the output signal and the scale in the interval is for calculating coefficients of a cubic polynom.\nFor the sample of the output signal according to the scale it is necessary to calculate the corresponding indexes of the input signal according to the scale (Figure 2a), and then to pass to the scale in the interval for calculating coefficients of a cubic polynom (Figure 2b). At that the output sample point according to the scale has to be recalculated under according to the scale .\nThe point of the output sample after resampling according to the scale of the input signal is equal to:\n (1)\n(1)\nTo calculate a cubic polynom we need four samples of the input signal , , and , as it is shown in Figure 2a. At that two samples and have to be more to the right of and two other samples and have to be more to the left of it.\nThen the index of the sample which corresponds to the timepoint according to the scale from -2 to 1 is equal to:\n (2)\n(2)\nwhere the functional means rounding to floor.\nThe value according to the scale is equal to:\n(3)\n>\nIn Figure 3 the example of recalculating indexes of the input and output signals for the th output sample under , , is shown.\nFigure 3. Example of recalculating indexes under , ,\nThe point according to the scale in accordance with (1) is equal to:\n (4)\n(4)\nThen the index in accordance with (2) is equal to:\n (5)\n(5)\nand the parametric variable according to the scale is equal to:\n (6)\n(6)\nThus, it is necessary to calculate coefficients of the cubic polynom according to the following equations:\n (7)\n(7)\nthen the value can be got as the interpolation polynom value for , i.e. .\nUsing Farrow filter on the basis of polynomial interpolation to compensate the signal fractional delay\nLet the input signal contain samples as it is shown in Figure 4.\n\nFigure 4. 
Input signal of Farrow filter.\nCompensate the fractional time delay of this signal by the value of the sampling period. Upon changing the fractional delay the sampling frequency is unchanged, the parametric variables and are equal to and the number of samples at the Farrow filter output is equal to the number of samples at the input, i.e. .\nThe calculation process of the signal fractional delay is given in Table 1. This process is shown in Figure 4.\nTable 1. Equalization of the signal fractional delay .\nThe values and are calculated according to (1), (2) and (3) respectively. Then the values for each value are given. Zero values for and which fall outside the limits of the signal indexation are red. In the following columns of Table 1 the coefficient values of the cubic polynom calculated according to (7) for the current point are given. At last, the signal value at the Farrow filter output is given in the last column.\nThe pickup signal (black) and the signal after equalizing the fractional delay (red) are shown in Figure 5.\n\nFigure 5. Input signal of Farrow filter and the signal after equalizing the fractional delay .\nTable 1 shows that upon equalizing the fractional delay the parametric variables for all . In this case Farrow filter can be interpreted as the third order FIR filter with the variable group delay by specifying the required value .\nThe family of Farrow filter impulse response for a various fractional delay value is shown in Figure 6.\n\nFigure 6. Farrow filter impulse responses for a various fractional delay value\nThe magnitude and the group delay of received filters are shown in Figure 7.\n\nFigure 7. Magnitude and group delay of Farrow filters for a various fractional delay value\nThe magnitude and the group delay charts allow to draw a conclusion that delay equalization is possible only within the scope of up to rad\/s. 
For a wider scope Farrow filter equalization on the basis of polynomial piecewise and cubic interpolation does not suit because of unacceptably high distortion of the filter characteristics. If it is required to use fractional delay equalization in a wider scope, it is necessary to use more difficult filters which use in their turn polynomial interpolation of a higher order or alternative FIR filters, or IIR filters of group delay equalization of signals.\nUsing Farrow filter as a digital signal interpolator\nConsider an example of using the digital resampling filter for digital signal interpolation.\nLet the input signal contain signal samples as it is shown in Figure 8a. We have already used this signal as an input signal of the digital fractional delay equalization filter.\nIt is necessary to perform digital signal interpolation and to increase the signal sampling frequency at the Farrow filter output times.\nIn case of digital interpolation the number of signal samples at the output will be equal to (see Figure 8b):\n (8)\n(8)\nRecalculating values and is also performed according to (1), (2) and (3) respectively.\nThe result of digital interpolation when using Farrow filter is shown in Figure 8b. Interpolation nodes corresponding to the input signal are black.\nFigure 8. Input signal and result of digital interpolation when using Farrow filter.\nThe filter of digital interpolation can be interpreted as the low-pass filter. The Farrow filter impulse response and the magnitude of digital signal interpolation are shown in Figure 9 for .\n\nFigure 9. Farrow filter impulse response and magnitude.\nThe impulse response under is a dotted graph.\nIn Figure 9 it is possible to note that the pulse response has no continuous derivative in interpolation nodes (has salient points) in view of the fact that plotting Lagrange polynomial interpolation is carried out only according to signal samples without restrictions to derivative continuity. 
Therefore the filter suppression in stopband is 28 dB only.\nUsing Farrow filter for fractional sample rate conversion\nLet the sine input signal\n (9)\n(9)\nwith the frequency kHz contain samples taken with the sampling frequency kHz.\nThe ratio of the sampling frequency of the sine signal to this signal frequency is not an integer . As a result the integer of samples is not located in one sine signal period, and the pickup signal is shown in the upper chart of Figure 10.\nChange the sampling frequency of the input signal times, where , and we will receive the resampled signal samples of which are received with the sampling frequency kHz. As a result exactly 8 samples will be located in one signal period . Fractional resampling will be performed when using Farrow filter on the basis of polynomial interpolation. Recalculating the values and is performed according to (1), (2) and (3) respectively.\nThe signal after digital resampling when using Farrow filter is shown in the lower chart of Figure 10.\nFigure 10. Fractional change of the signal frequency sampling.\nThe pickup continuous sine signal is shown in Figure 10 with the help of continuous lines. Figure 10 shows that after resampling exactly eight signal samples are located in one pickup signal period.\nConclusions\nIn this section we considered a question of recalculating sample indexes of the input signal in case of piecewise and cubic interpolation. 
The equations for recalculating output signal sampling points according to the scale of the input signal, and also value calculation according to the scale were received.\nThe received equations were used in examples of using Farrow filter to compensate fractional delay, digital interpolation and fractional sample rate conversion.\nFamilies of impulse responses, magnitudes and group delay of Farrow filters for a various fractional delay value were given.\nThe Farrow filter impulse response and magnitude of digital signal interpolation are also given.\nReference\n[1] Kahaner D., Moler C., Nash S. Numerical Methods and Software. Prentice Hall, 1988.\n\n[2] Farrow C.W. A Continuously Variable Digital Delay Element. Circuits and Systems, IEEE International Symposium. 1988, p. 2641\u20132645. vol. 3\n\n[3] Gardner Floyd M. Interpolation in Digital Modems-Part I: Fundamentals: IEEE Transactions on Communications, Vol. 41, No. 3, March 1993, P. 501-507.\n\n[4] Erup L., Gardner Floyd M., Harris Robert A. Interpolation in Digital Modems-Part II: Implementation and Performance: IEEE Transactions on Communications, Vol. 41, No. 6, June 1993, p.998-1008.\n\n[5] Franck A. Efficient Algorithms for Arbitrary Sample Rate Conversion with Application to Wave Field Synthesis. PhD thesis. Universit\u00e4tsverlag Ilmenau, Ilmenau, 2012. [PDF]\n\n[6] McConnell J. Analysis of Algorithms: An Active Learning Approach. 
Jones and Bartlett Publishers, 2001.\n\nAppendix\nPython simulation scripts:\n\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sun Aug 27 23:40:00 2017\n\n@author: sergey\n\"\"\"\n\nimport numpy as np\n\np = 1\nq = 1\nx0 = 0.25\n\ns0 = np.array([1., 2., 2., 1., -0.5, -1., -2., -0.5])\n\ntin = np.linspace(0.0, len(s0), len(s0), endpoint=False)\n\ny = np.zeros(int(np.floor( float(len(s0)*q)\/float(p))))\n\ns = np.concatenate((np.array([0., 0.]), s0, np.array([0., 0.])))\n\nprint('---------------------------------------------------------', end='');\nprint('-------------------------------------------------------\\n', end='');\nprint('k x_k n d s(n-3) s(n-2) s(n-1) s(n) ', end='');\nprint('a_0 a_1 a_2 a_3 y(k)\\n', end='');\nprint('---------------------------------------------------------', end='');\nprint('-------------------------------------------------------\\n', end='');\n\ntout = np.zeros(int(np.floor( float(len(s0)*q)\/float(p))))\nfor k in range(len(y)):\nx = k*q\/p - x0\nn = int(x) + 1 + 3\nd = int(x) + 1 - x\n\ntout[k] = x;\n\na0 = s[n-1]\na3 = 1\/6 * (s[n] - s[n-3]) + 0.5*(s[n-2] - s[n-1]);\na1 = 0.5 * (s[n] - s[n-2]) - a3\na2 = s[n] - s[n-1] - a3 - a1\n\ny[k] = a0 - a1 * d + a2*d**2 - a3*d**3\n\nprint('%(d0)d %(f0)7.2f %(d1)2d %(f1)7.2f' %\n{'d0': k, 'f0': x, 'd1': n, 'f1': d}, end='')\nprint('%(s0)8.1f %(s1)8.1f %(s2)8.1f %(s3)8.1f ' %\n{'s0': s[n-3],'s1': s[n-2], 's2': s[n-1], 's3':s[n]}, end='')\n\nprint('%(a0)9.3f %(a1)9.3f %(a2)9.3f %(a3)9.3f '%\n{'a0': a0, 'a1': a1, 'a2': a2, 'a3': a3}, end='')\nprint('%(y)9.4f\\n' % {'y': y[k]}, end='')\n\nprint('---------------------------------------------------------', end='');\nprint('-------------------------------------------------------\\n', end='');\n\n\"\"\"\nfigure; stem(tin, s0);\nhold on;\nstem(tout, y, 'r');\naxis([-0.5, 7.5, -2.5, 2.5]);\ngrid on;\n\n\"\"\"\n\n\n\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Aug 28 23:34:59 2017\n\n@author: sergey\n\"\"\"\n\nimport numpy as np\nimport scipy.signal as 
signal\n\ndef resampling_lagrange(s, p, q, x0):\n\"\"\"\n% y = resample_lagrange(s, p, q, x0)\n% Digital resampling by polynomial Lagrange interpolation.\n% Function changes input signal s samplerate to p\/q times and adds fractional\n% delay.\n%\n% Input parameters\n% s - input signal vector [N x 1];\n% p - p paramter of samplarate conversion\n% q - q paramter of samplarate conversion\n% x0 - fractional delay\n%\n% Ouptut parameters\n% y - Resampled signal\n%\n% Author: Sergey Bakhurin (dsplib.org)\n\"\"\"\nif(p>1):\nif(q==1):\ny = np.zeros(int(float((len(s)-1) * p) \/ float(q)) + 1)\nelse:\ny = np.zeros(int( float(len(s) * p ) \/ float(q)))\nelse:\ny = np.zeros(int( float(len(s) * p ) \/ float(q)))\n\nt = np.zeros(len(y))\ns = np.concatenate((np.array([0., 0.]), s, np.array([0., 0.])))\n\nfor k in range(len(y)):\nx = k*q\/p - x0\nt[k] = x\nn = int(np.floor(x)) + 4\nd = np.floor(x) + 1 - x\na0 = s[n-1]\na3 = 1\/6 * (s[n] - s[n-3]) + 0.5*(s[n-2] - s[n-1])\na1 = 0.5 * (s[n] - s[n-2]) - a3\na2 = s[n] - s[n-1] - a3 - a1\ny[k] = a0 - a1 * d + a2*d**2 - a3*d**3\nreturn y\n\nx0 = np.linspace(0.0, 0.9, 10)\ns = np.array([0., 1., 0., 0.])\nt = np.array([0., 1., 2., 3.])\n\nfor k in range(len(x0)):\nh = resampling_lagrange(s, 1, 1, x0[k])\nw, H = signal.freqz(h)\nH = 20.0 * np.log10(np.abs(H))\nw, gd = signal.group_delay((h, 1))\n\nfname = 'dat\/resample_lagrange_filter_fd_time_%.1f.csv' % x0[k]\nnp.savetxt(fname, np.transpose([t, h]), fmt=\"%+.9e\")\n\nfname = 'dat\/resample_lagrange_filter_fd_mag_%.1f.csv' % x0[k]\nnp.savetxt(fname, np.transpose([w, H]), fmt=\"%+.9e\")\n\nfname = 'dat\/resample_lagrange_filter_fd_gd_%.1f.csv' % x0[k]\nnp.savetxt(fname, np.transpose([w, gd]), fmt=\"%+.9e\")\n\n\n\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Aug 28 23:34:59 2017\n\n@author: sergey\n\"\"\"\n\nimport numpy as np\nimport scipy.signal as signal\n\ndef resampling_lagrange(s, p, q, x0):\n\"\"\"\n% y = resample_lagrange(s, p, q, x0)\n% Digital resampling by polynomial Lagrange 
interpolation.\n% Function changes input signal s samplerate to p\/q times and adds fractional\n% delay.\n%\n% Input parameters\n% s - input signal vector [N x 1];\n% p - p paramter of samplarate conversion\n% q - q paramter of samplarate conversion\n% x0 - fractional delay\n%\n% Ouptut parameters\n% y - Resampled signal\n%\n% Author: Sergey Bakhurin (dsplib.org)\n\"\"\"\nif(p>1):\nif(q==1):\ny = np.zeros(int(float((len(s)-1) * p) \/ float(q)) + 1)\nelse:\ny = np.zeros(int( float(len(s) * p ) \/ float(q)))\nelse:\ny = np.zeros(int( float(len(s) * p ) \/ float(q)))\n\nt = np.zeros(len(y))\ns = np.concatenate((np.array([0., 0.]), s, np.array([0., 0.])))\n\nfor k in range(len(y)):\nx = k*q\/p - x0\nt[k] = x\nn = int(np.floor(x)) + 4\nd = np.floor(x) + 1 - x\na0 = s[n-1]\na3 = 1\/6 * (s[n] - s[n-3]) + 0.5*(s[n-2] - s[n-1])\na1 = 0.5 * (s[n] - s[n-2]) - a3\na2 = s[n] - s[n-1] - a3 - a1\ny[k] = a0 - a1 * d + a2*d**2 - a3*d**3\nreturn y\n\np = 10\nq = 1\nx0 = 0\n\n\"\"\"\n\u041f\u0440\u0438\u043c\u0435\u0440 \u0438\u043d\u0442\u0435\u0440\u043f\u043e\u043b\u044f\u0446\u0438\u0438 \u0441\u0438\u0433\u043d\u0430\u043b\u0430 \u0441 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u0435\u043c \u0444\u0438\u043b\u044c\u0442\u0440\u0430 \u0424\u0430\u0440\u0440\u043e\u0443\n\"\"\"\ns = np.array([1., 2., 2., 1., -0.5, -1., -2., -0.5])\n\nt = np.linspace(0, len(s)*p, len(s), endpoint = False, dtype = 'float64')\nnp.savetxt(\"dat\/resample_lagrange_interp_s.txt\", np.transpose([t, s]), fmt=\"%+.9e\")\n\ny = resampling_lagrange(s, p, q, x0)\n\nt = np.linspace(0, len(y), len(y), endpoint = False, dtype = 'float64')\nnp.savetxt(\"dat\/resample_lagrange_interp_y.txt\", np.transpose([t, y]), fmt=\"%+.9e\")\n\n\"\"\"\n\u0420\u0430\u0441\u0447\u0435\u0442 \u0438\u043c\u043f\u0443\u043b\u044c\u0441\u043d\u043e\u0439 \u0445\u0430\u0440\u0430\u043a\u0442\u0435\u0440\u0438\u0441\u0442\u0438\u043a\u0438 h[k] \u0438 \u0447\u0430\u0441\u0442\u043e\u0442\u043d\u043e\u0439 
\u0445\u0430\u0440\u0430\u043a\u0442\u0435\u0440\u0438\u0441\u0442\u0438\u043a\u0438 H(w)\n\"\"\"\n\ns = np.array([0., 0., 1., 0., 0.])\n\nh = resampling_lagrange(s, p, q, x0)\n\nt = np.linspace(0, len(h), len(h), endpoint = False, dtype = 'float64')\nnp.savetxt(\"dat\/resample_lagrange_interp_filter_time.txt\", np.transpose([t, h]), fmt=\"%+.9e\")\n\nw, H = signal.freqz(h)\nH = 20.0 * np.log10(np.abs(H))\nnp.savetxt(\"dat\/resample_lagrange_interp_filter_freq.txt\", np.transpose([w, H]), fmt=\"%+.9e\")\n\n\n\n\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sat Sep 2 12:18:52 2017\n\n@author: sergey\n\"\"\"\n\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Aug 28 23:34:59 2017\n\n@author: sergey\n\"\"\"\n\nimport numpy as np\nimport scipy.signal as signal\n\ndef resampling_lagrange(s, p, q, x0):\n\"\"\"\n% y = resample_lagrange(s, p, q, x0)\n% Digital resampling by polynomial Lagrange interpolation.\n% Function changes input signal s samplerate to p\/q times and adds fractional\n% delay.\n%\n% Input parameters\n% s - input signal vector [N x 1];\n% p - p paramter of samplarate conversion\n% q - q paramter of samplarate conversion\n% x0 - fractional delay\n%\n% Ouptut parameters\n% y - Resampled signal\n%\n% Author: Sergey Bakhurin (dsplib.org)\n\"\"\"\nif(p>1):\nif(q==1):\ny = np.zeros(int(float((len(s)-1) * p) \/ float(q)) + 1)\nelse:\ny = np.zeros(int( float(len(s) * p ) \/ float(q)))\nelse:\ny = np.zeros(int( float(len(s) * p ) \/ float(q)))\n\nt = np.zeros(len(y))\ns = np.concatenate((np.array([0., 0.]), s, np.array([0., 0.])))\n\nfor k in range(len(y)):\nx = k*q\/p - x0\nt[k] = x\nn = int(np.floor(x)) + 4\nd = np.floor(x) + 1 - x\na0 = s[n-1]\na3 = 1\/6 * (s[n] - s[n-3]) + 0.5*(s[n-2] - s[n-1])\na1 = 0.5 * (s[n] - s[n-2]) - a3\na2 = s[n] - s[n-1] - a3 - a1\ny[k] = a0 - a1 * d + a2*d**2 - a3*d**3\nreturn y\n\nP = 20\nQ = 11\nN = 54\nL = 10\nFs = 26.4\nf0 = 6.0\n\ntc = np.linspace(0, N*L, N*L, endpoint = False, dtype = 'float64')\/(Fs * float(L))\nc = 
np.sin(np.pi*2.0*f0*tc)\n\nts = np.linspace(0, N, N, endpoint = False, dtype = 'float64')\/Fs\ns = np.sin(np.pi*2.0*f0*ts)\n\ny = resampling_lagrange(s, P, Q, 0.0)\nty = float(Q) * np.linspace(0, len(y), len(y), endpoint = False, dtype = 'float64') \/ (Fs * float(P))\n\nnp.savetxt(\"dat\/resample_lagrange_ex_fs_c.txt\", np.transpose([tc[0:(N-1)*L], c[0:(N-1)*L] ]), fmt=\"%+.9e\")\nnp.savetxt(\"dat\/resample_lagrange_ex_fs_s.txt\", np.transpose([ts, s]), fmt=\"%+.9e\")\nnp.savetxt(\"dat\/resample_lagrange_ex_fs_y.txt\", np.transpose([ty, y]), fmt=\"%+.9e\")","date":"2018-08-17 22:20:46","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5591080188751221, \"perplexity\": 4443.810602651815}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-34\/segments\/1534221213158.51\/warc\/CC-MAIN-20180817221817-20180818001817-00209.warc.gz\"}"}
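The article's numbered equations did not survive this extraction (the (1)-(7) placeholders are empty), but they can be reconstructed from the appendix scripts: x = k*q/p - x0 recalculates the output sample point on the input-sample scale (1), n = floor(x) + 4 selects the right-most of the four samples after zero-padding (2), d = floor(x) + 1 - x is the parametric variable (3), and the a0..a3 lines are the cubic coefficients of (7). As a compact cross-check, here is a JavaScript port of the appendix's `resampling_lagrange`; the port is mine and is meant as an illustration, not a drop-in replacement.

```javascript
// Piecewise-cubic Lagrange resampler: changes the sample rate by p/q and
// adds a fractional delay x0, following the appendix's Python reference.
function resamplingLagrange(s0, p, q, x0) {
  // Output length, mirroring the reference implementation's two cases.
  const outLen = (p > 1 && q === 1)
    ? Math.floor(((s0.length - 1) * p) / q) + 1
    : Math.floor((s0.length * p) / q);
  const s = [0, 0, ...s0, 0, 0];       // zero-pad two samples on each side
  const y = new Array(outLen);
  for (let k = 0; k < outLen; k++) {
    const x = (k * q) / p - x0;        // output time on the input-sample scale
    const n = Math.floor(x) + 4;       // right-most of the 4 samples used
    const d = Math.floor(x) + 1 - x;   // parametric variable in [0, 1]
    const a0 = s[n - 1];
    const a3 = (s[n] - s[n - 3]) / 6 + (s[n - 2] - s[n - 1]) / 2;
    const a1 = (s[n] - s[n - 2]) / 2 - a3;
    const a2 = s[n] - s[n - 1] - a3 - a1;
    y[k] = a0 - a1 * d + a2 * d * d - a3 * d * d * d;
  }
  return y;
}
```

A convenient sanity check: with p = q = 1 and x0 = 0 the resampler is an identity, reproducing each input sample exactly, since a cubic Lagrange polynomial passes through its four interpolation nodes.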
null
null
Q: jQuery remove all characters but numbers and decimals

var price = "$23.03";
var newPrice = price.replace('$', '')

This works, but price can also be such as:

var price = "23.03 euros";

and many other currencies. Is there any way I could leave only numbers and the decimal point (.)?

A: var newPrice = price.replace(/[^0-9\.]/g, '');

No jQuery needed. You will also need to check that there is only one decimal point, like this:

var decimalPoints = newPrice.match(/\./g);
// Annoyingly you have to check for null before trying to
// count the number of matches.
if (decimalPoints && decimalPoints.length > 1) {
  // do whatever you do when input is invalid.
}

A: var newprice = price.replace(/\D+$/, '');
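The accepted answer's approach can be folded into one self-contained helper (the function name is mine, for illustration):

```javascript
// Strip everything except digits and decimal points, then validate.
function numericPart(price) {
  const stripped = price.replace(/[^0-9.]/g, '');
  // Reject strings with more than one decimal point (e.g. "1.2.3").
  const points = stripped.match(/\./g);
  if (points && points.length > 1) return NaN;
  return parseFloat(stripped);
}
```

Note that inside a character class the dot needs no escaping, so `[^0-9.]` and the answer's `[^0-9\.]` behave identically.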
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,794
Arles 2013: Jean-Paul Capitani (Actes Sud)

Jean-Paul Capitani © Olivier Dion

Christian Caujolle

In 2012, Christian Caujolle conducted this interview with Jean-Paul Capitani.

The Méjan Association and Actes Sud have a unique position vis-à-vis the programme of the Rencontres:

Yes, of course. It's something that dates back a few years. I'm from Arles and extremely attached to my town and to what happens here. The Méjan and Actes Sud are now more active than ever throughout the year, what with the cinema, the bookshop, exhibitions and concerts. We have been producing the Rencontres catalogues for a very long time and this, above all, is a means for us to pledge our support for and loyalty to what is an important event, and we provide the know-how we have at our disposal. I don't think of it in terms of business. Moreover, the Méjan Association serves as a patron, even if only at a modest level. It contributes some money and offers free access to the exhibitions it puts on to those with Rencontres accreditations. We have gradually developed the spaces, programming them in accord with the Rencontres programme. This partnership is important within the context of Arles and bears witness to our desire to participate in a global project.

This year is particularly impressive, both in terms of exhibitions and publications. How did you go about putting your programme together?

You're right, we've got a lot on our plate (smiles). We've got Lee Ufan at the Capitole, which has finally become the exhibition space I dreamt it would be, and we're publishing a major volume of his work.
Penone at the Méjan is an ambitious exhibition – which I hope will be followed by a book of the photographic work – and it would not be a bad thing if the tree in bronze that has been placed in front of the entrance were allowed to remain there. At the Parc des Ateliers, there's a work by Moriyama specially created for the site and a series of exhibitions linked to some of our titles, including the first Gordon Parks retrospective in France. François Hébel told me that he liked the fact that I exhibit major contemporary artists who haven't always been included in the programme for the Rencontres. As always, I took up the challenge and redoubled my efforts. Of course, I love Penone. His relationship with nature strikes a chord with me. We have managed to put together a rare ensemble of his photographic work which, in the same way that Lee Ufan gives us that Asian colour, links us back to Arte Povera. Our choices were born out of a certain logic, not in opposition but along the lines of what is happening elsewhere in Arles. Without losing our own identity, of course.

Can you describe some of what's happening from a publishing point of view?

Everyone knows that things are difficult in the sector right now, that the whole industry is going through a crisis, that bookshops are struggling and that cultural consumption is not currently a priority. Photo Poche, which does not have a huge turnover, is a wonderful venture that has also been developed abroad, with translated versions appearing. Robert Delpire is editor of the collection and Benoît Rivero has overall control. Of course it gets a lot of attention, but so does the whole company. When it comes to our other books, we naturally would like to put out a lot more than we're able to. Without the European Publishers Award, which allows a real synergy to exist between publishers, certain books would never see the light of day.
And then – often to the great displeasure of Benoît – certain opportunistic decisions are made, on the back of exhibitions, financial support or options being taken out. Many titles just aren't viable without co-publishing deals and financial support. It's regrettable but that's how things are. We have to keep our feet on the ground and at the same time follow our dreams. And often we have to put projects together on a shoestring …

When Maja Hoffmann decided to go ahead with her plans for the Ateliers, Actes Sud announced its intention to move to the site. Where are you at with that?

It's still happening. It's going ahead. The renovation of the Magasin Electrique, with 1,000 m² of permanent exhibition space, has been ready for a long time and will house our offices. Things should be moving pretty fast and I think that we can realistically expect to be in the Ateliers by 2017. You have to take into account the fact that, given the quality requirements, heavy renovation work, such as large sections of roofing, and building standards take time; the workshops are designed to house archives or international exhibitions. It can't all be done just like that.

François Hébel is concerned about whether there will be sufficient premises for the Rencontres in years to come. There has been a lively debate between him and Maja Hoffmann.

It's a real shame! Nobody can accuse Maja of not being generous or of not having been generous. All the same… there's a lot to be done and ten years is a long time to have been waiting. There are many different reasons for the delay but it looks as if solutions are being found. I'm not convinced that now is the right time to be discussing things, especially as the discussions haven't always been entirely lucid. The Rencontres owe Maja a lot and she of course became a partner of the Rencontres at the invitation of François Barré and François Hébel. The Rencontres also owe a lot to the political will of local government or individuals.
You can't take a whole community hostage on something like this, especially when it's making a real effort to work together, constructively. Why throw the cat among the pigeons in this way – who wins? The plan for the workshops is clear and perfectly justifiable, with decisions made and stages mapped out. Yes, it's going to take time and yes some spaces won't be available during certain periods but that's a necessary by-product of the situation and the alternative is no progress at all. Moreover, there are other potential spaces in Arles, other disused industrial ground. Of course, local government needs to get involved but there are also partnership and patronage options. I'm prepared to do my bit to help the process and I think that Maja, who is investing several tens of millions – and who has shown so much generosity in the past – is very much entitled to negotiate with an event that is by no means poor. Truth be told, I'm astonished by what's going on. The only thing of any importance is to bring everyone's efforts together in the service of the project. 
Archives of the Eye of Photography – Christian Caujolle, 2012
DHL Near Me in New York

About DHL

DHL began in 1969, when Adrian Dalsey, Larry Hillblom, and Robert Lynn founded the company in San Francisco, CA and named it after their last initials. At its founding, DHL became the first international door-to-door express delivery service in the world. Since then, Deutsche Post acquired DHL, making DHL the world's largest logistics company operating worldwide today. The global headquarters for DHL is in Bonn, Germany, as part of the Deutsche Post headquarters.
Stockport town centre – building the future?

There's a seminal moment in the movie The Social Network, about the genesis of Facebook and the power struggles that ensued, when Sean Parker turns to Mark Zuckerberg and says: "A million dollars isn't cool. You know what's cool? A billion dollars." It seems Stockport has taken this to heart. The council, in conjunction with partners, is investing over a billion pounds to reignite the fortunes of this slumbering giant of a neighbourhood to the south of Manchester.

Already the skyline has changed with, among others, the new Light cinema, the Redrock centre, new office blocks at Stockport Exchange and the new Holiday Inn Express. Only a ten-minute train ride from the centre of Manchester and intersected by the M60 ring road, Stockport has always been well placed. And with the new additions all trading ahead of expectations, Stockport is knocking the wind out of the last remaining sceptics.

Plans are also being implemented to build new homes at Covent Garden, sort out the traffic congestion, bring new amenity space with a park on top of the main bus station and develop the Aurora Business Park. In other words, the council is bringing vigour where there used to be stagnation.

The latest phase is focusing on the Market Place and Underbanks area. These characterful streets are already home to an ever-increasing array of funky restaurants and an 'emporium of niche businesses'. With the council having acquired some key gateway locations for redevelopment, it is only a matter of time before the mix of new housing, new people, new bars and restaurants makes this a destination location for the young and young at heart. The success of the monthly event, Foodie Friday, shows there is a latent appetite for a 'foodie'-led regeneration of the area.
Restaurateur Steve Pilling has taken up the challenge of transforming the Produce Hall into an Altrincham Market-style food and drink hall. However, it's the small local business people, such as Simon from Hillgate Cakes, who are now picking up the mantle and running with it, which really reflects the desire to make Stockport boom. The enthusiasm is infectious. It is this enthusiasm, mixed with a desire to create a community and backed with the investment to make it happen, that has inspired Neighbourhood Co-living to establish our first co-living scheme in the Underbanks. We want to make beautiful housing for community-minded people, so Stockport is a no-brainer. Why not join us?
Cynthia Erivo (born January 8, 1987, in London, England, to Nigerian parents) trained at the Royal Academy of Dramatic Art and starred on Broadway in the musical The Color Purple (2015–2017), earning a Tony Award and a Grammy. She played a singer in Bad Times at the El Royale (2018) with Jeff Bridges, and a hairdresser in Widows (2018), directed by Steve McQueen, with Viola Davis. She portrayed abolitionist Harriet Tubman in Harriet (2019), directed by Kasi Lemmons, for which she wrote the original song "Stand Up." She acted in Chaos Walking (2021), directed by Doug Liman, from the science fiction trilogy by Patrick Ness. On television she acted in The Outsider (2020) and played Aretha Franklin in the miniseries Genius: Aretha (2021).
Johannes "Hannes" Trautloft (3 March 1912 – 11 January 1995) was a German Luftwaffe military aviator during the Spanish Civil War and World War II, and general in the postwar German Air Force. As a fighter ace, he is credited with 58 enemy aircraft shot down, including 5 in Spain, 8 on the Western Front and 45 on the Eastern Front of World War II. Born in Großobringen, Trautloft volunteered for military service in the Reichsheer of the Weimar Republic in 1931. In parallel, he was accepted for flight training with the Deutsche Verkehrsfliegerschule, a covert military-training organization, and at the Lipetsk fighter-pilot school. Following flight training, he served with Jagdgeschwader 134 "Horst Wessel" (JG 134—134th Fighter Wing) and was one of the first German volunteers to fight in the Spanish Civil War. From August to December 1936, he claimed five aerial victories. For his service in Spain he was awarded the Spanish Cross in Gold with Swords. Following his service in Spain, Trautloft held various command positions, and at the outbreak of World War II on 1 September 1939, he was the Staffelkapitän (squadron leader) of 2. Staffel (2nd squadron) of Jagdgeschwader 77 (JG 77—77th Fighter Wing). He claimed his first aerial victory during the Invasion of Poland and was appointed Gruppenkommandeur (group commander) of I. Gruppe of Jagdgeschwader 20 which later became III. Gruppe of Jagdgeschwader 51 (JG 51—51st Fighter Wing). In August 1940, during the Battle of Britain, Trautloft was given command of Jagdgeschwader 54 (JG 54—54th Fighter Wing). He led JG 54 in Operation Barbarossa, the German invasion of the Soviet Union in June 1941. There, he was awarded the Knight's Cross of the Iron Cross on 27 July 1941. Trautloft continued to lead JG 54 on the Eastern Front until July 1943 when he was called to the staff of the General der Jagdflieger (General of Fighters), assisting in the readiness, training and tactics of the Luftwaffe fighter force. 
After the war, Trautloft joined the new German Air Force of West Germany in 1957. Serving as deputy Inspector of the Air Force and commander of Luftwaffengruppe Süd (Air Force Group South), Trautloft retired in 1970 holding the rank of Generalleutnant (lieutenant general). He died on 11 January 1995 in Bad Wiessee.

Early life and career

Trautloft was born on 3 March 1912 in Großobringen near Weimar in Thüringen. On 7 April 1931, he began his pilot training at the Deutsche Verkehrsfliegerschule (German Air Transport School) at Schleißheim. The course he and 29 other trainees attended was called Kameradschaft 31, abbreviated "K 31". Among the members of "K 31" were future Luftwaffe staff officers Bernd von Brauchitsch, Wolfgang Falck, Günther Lützow, Günther Radusch and Ralph von Rettberg. Trautloft graduated from the Deutsche Verkehrsfliegerschule on 19 February 1932. From "K 31", Trautloft and nine others were recommended for Sonderausbildung (special training) at the Lipetsk fighter-pilot school. These ten men were the privileged few allowed to attend fighter pilot training. Following four months of training in the Soviet Union, he returned to Germany, joined the military service of the Reichswehr and attended the Kriegsschule (war school) in Dresden, where he was commissioned on 1 May 1934. In October 1934, Trautloft was posted to the Jagdfliegerschule at Schleißheim. On 1 May 1936, Trautloft was posted to Jagdgeschwader 134 "Horst Wessel" (JG 134—134th Fighter Wing), named after the Nazi martyr Horst Wessel. At the time of the outbreak of the Spanish Civil War, Trautloft was serving in the 9. Staffel (9th squadron) of JG 134. This squadron was subordinated to III. Gruppe (3rd group) of JG 134 and was commanded by Major Oskar Dinort. The Gruppe had been moved to an airfield at Cologne-Butzweilerhof on 9 March 1936 following the Remilitarization of the Rhineland.
There, on 28 July, Dinort called Trautloft and informed him of the unfolding events in Spain, and Trautloft volunteered for service there.

Spanish Civil War

Sworn to secrecy, Trautloft was instructed to travel immediately to Dortmund, where he received further instructions from Kurt-Bertram von Döring, and then to the assembly location at Döberitz. There, 25 officers and 66 non-commissioned officers, soldiers and civilian technicians gathered, including six pilots, of whom Trautloft was one. This detachment was then placed under the overall command of Oberst (Colonel) Alexander von Scheele. The volunteers were then discharged from the Wehrmacht and dressed in civilian clothes. Posing as tourists of the Reisegesellschaft Union (Union Travel Association), the volunteers travelled aboard the SS Usaramo, a passenger ship of the Woermann-Linie, from Hamburg to Cádiz on 31 July. The Usaramo also transported the equipment and weapons, including six disassembled and boxed Heinkel He 51 biplane fighter aircraft. The ship arrived in Cádiz on 7 August 1936 and the men then travelled by train to Seville. At Tablada airfield, the pilots assisted in reassembling the He 51 fighters, the first of which became operational on 10 August. On 25 August, during the Nationalist advance on Madrid, Trautloft and two other German pilots flew their first combat mission in Spain. In the vicinity of Madrid, the Germans spotted a flight of three Republican Bréguet 19 light bombers and reconnaissance aircraft. Trautloft attacked one of the Republican aircraft, shooting it down near the village of Colmenar Viejo. This claim may have been the first aerial victory by a German pilot in Spain. Five days later, shortly after claiming a Potez 540 aircraft, Trautloft was himself shot down by a Dewoitine D.372, forcing him to bail out over Nationalist-held territory.
Following German recognition of Francisco Franco's government on 30 September, German efforts in Spain were reorganized and expanded, and the contingent of German forces was named the Condor Legion by Hermann Göring. By October, the Condor Legion was augmented, receiving more equipment and men. This made it possible to split the fighter force in two, with Trautloft leading the detachment sent to Léon airfield. As the war escalated, the Soviet Union sent better planes to aid the Republicans. Among these were the Polikarpov I-15 and Polikarpov I-16 fighter aircraft, which outclassed the German He 51s. By mid-November, the fighter force had increased and Jagdgruppe 88 was created. In December, Versuchsjagdgruppe 88, an experimental fighter group for testing new aircraft under operational conditions, was created at Tablada. Trautloft was chosen as one of the pilots to test the then-new Messerschmitt Bf 109. Trautloft had this aircraft personalized with the "Green Heart" of Thuringia. He wrote several recommendations on how to improve the design and combat operations of the Bf 109. On 2 March 1937, Trautloft, who had claimed five aerial victories, left Spain and returned to Germany. In 1937, Trautloft participated in the 4th international flight meeting held at the Dübendorf military airfield, Switzerland, from 23 July to 1 August. Trautloft, Hauptmann Werner Restemeier and Oberleutnant Fritz Schleif, flying a flight of three Bf 109s (B-1 and B-2 variants), took first place in the Alpine flight category. On 15 March 1937, Trautloft was transferred and appointed Staffelkapitän (squadron leader) of 1. Staffel of Jagdgeschwader 135 (JG 135—135th Fighter Wing). This squadron was subordinated to I. Gruppe of JG 135, which had just been created on 15 March and was commanded by Major Max Ibel. Trautloft served in this capacity until 1 July 1938, when he was transferred to command the newly created 12. Staffel of Jagdgeschwader 132 (JG 132—132nd Fighter Wing), a squadron of IV.
Gruppe, headed by Oberstleutnant Theo Osterkamp. This Staffel was reassigned as 2. Staffel of Jagdgeschwader 331 (JG 331—331st Fighter Wing) on 3 November. With this unit, Trautloft participated in the German occupation of Czechoslovakia in March 1939. On 1 May, the squadron was again renamed, becoming 2. Staffel of Jagdgeschwader 77 (JG 77—77th Fighter Wing). In 1939, Trautloft published his Spanish war diaries, Als Jagdflieger in Spanien (As a Fighter Pilot in Spain), with a foreword by Ernst Udet.

World War II

World War II in Europe began on Friday 1 September 1939 when German forces invaded Poland. In preparation for the invasion, in late August 1939, I. Gruppe of JG 77, to which the 2. Staffel was subordinated, had been moved from Breslau-Schöngarten to an airfield at Juliusburg, present-day Dobroszyce. The Gruppe operated over the left flank of Army Group South, supporting the 8th Army advance into Poland. Its main task was flying combat air patrols, but it had relatively little enemy contact, claiming three aerial victories, including one by Trautloft. On 5 September, Trautloft was credited with the destruction of a PZL.23 Karaś bomber near Warta, northwest of Sieradz. On 20 September, Trautloft was promoted to Hauptmann (captain) and appointed Gruppenkommandeur (group commander) of I. Gruppe of Jagdgeschwader 20 (JG 20—20th Fighter Wing) on 23 September. At the time of his posting to JG 20, the Gruppe had already been withdrawn from Poland and was based at Brandenburg-Briest. Subordinated to the Stab (headquarters unit) of Jagdgeschwader 2 "Richthofen" (JG 2—2nd Fighter Wing), I./JG 20 flew fighter protection over central Germany. On 6 November, the Gruppe was moved to Döberitz, where it remained until 21 February 1940. That day, I./JG 20 was ordered to Bönninghardt and placed under the control of the Stab of Jagdgeschwader 51 (JG 51—51st Fighter Wing). There, the Gruppe patrolled Germany's western border during the "Phoney War" period of World War II.
Battle of France

Trautloft led I. Gruppe of JG 20 during the Battle of France, which began on 10 May 1940. At the beginning of the campaign, I. Gruppe was still based at Bönninghardt and subordinated to JG 51. The Gruppe's area of operations was the Netherlands and northeastern Belgium, flying fighter escort missions for the bombers. On 16 May, the Gruppe was ordered to move to Eindhoven airfield, where it remained until 20 May, when it relocated to an airfield at Hoogerheide. From Hoogerheide, I. Gruppe initially flew missions to Bruges, and on 24 May the area of operations shifted towards Dunkirk and Calais. On the morning of 29 May, I./JG 20 moved further west to an airfield at Sint-Denijs-Westrem. That evening, Trautloft claimed a Royal Air Force (RAF) Supermarine Spitfire shot down southeast of Dunkirk. Two days later, Trautloft claimed another Spitfire during the Battle of Dunkirk. In preparation for Operation Paula on 3 June, I./JG 20 was ordered to Vitry-en-Artois and flew escort missions in the afternoon. It was then ordered back to Sint-Denijs-Westrem before moving to Saint-Omer to support Fall Rot, the second phase of the conquest of France. Supporting Army Group B, the Gruppe advanced to Estrées-lès-Crécy on 8 June and claimed its last aerial victory of the Battle of France on 13 June. The next day, I./JG 20 moved to an airfield southeast of Rouen, and to Vouziers on 20 June. On 22 June, I./JG 20 returned to Saint-Omer, where it patrolled the French coast on the English Channel. In total, I. Gruppe of JG 20 under Trautloft's command claimed 35 aerial victories during the Battle of France, losing five pilots killed in action; two more were taken prisoner of war and three were wounded. In addition, ten Bf 109s were lost in combat. Following the armistice of 22 June 1940, the Battle of France ended on 25 June. By this date, the actual strength of I./JG 20 had been reduced to 60% of its official allotment.
Battle of Britain and Balkans campaign

On 4 July, I./JG 20 was officially integrated into JG 51, becoming its III. Gruppe. The end of the Battle of France marked the beginning of the Battle of Britain. The Gruppe received new aircraft during the second half of July, bringing its strength nearly up to its full allotment. On 19 July, III. Gruppe claimed the destruction of eleven Boulton Paul Defiant interceptor aircraft in aerial combat south of Folkestone, including one claim by Trautloft. According to British records, No. 141 Squadron lost six aircraft in this encounter. Trautloft claimed his last aerial victory with JG 51 on 8 August. That day, the Gruppe claimed five victories over RAF fighters, including a Spitfire near Dungeness by Trautloft. In late August, it was becoming apparent to the Oberkommando der Wehrmacht (German High Command) that the Battle of Britain was not going as planned. A frustrated Göring relieved several Geschwaderkommodore (wing commanders) of their commands and appointed younger, more aggressive men in their place. On 21 August, the Luftwaffe announced and continued the command changes that had begun in June, when Falck had been tasked with the creation of Nachtjagdgeschwader 1 (NJG 1—1st Night Fighter Wing). Lützow took command of Jagdgeschwader 3 (JG 3—3rd Fighter Wing), Adolf Galland was given command of Jagdgeschwader 26 "Schlageter" (JG 26—26th Fighter Wing), Werner Mölders was given command of JG 51, and Trautloft took over Jagdgeschwader 54 (JG 54—54th Fighter Wing) from Martin Mettig. Command was transferred on 25 August and Trautloft was promoted to Major (major). At the time, JG 54 was based at Campagne-lès-Guines and also fighting against the RAF, either escorting bombers to England or flying combat air patrols. Trautloft claimed his first aerial victory with JG 54 that very same day. At 20:20, he claimed a Spitfire over the English Channel.
Trautloft claimed two further aerial victories against the RAF, bringing his total to eight victories claimed during World War II. These included a Hawker Hurricane shot down over Maidstone on 7 September and a Spitfire claimed on 27 October over Ashford. On 15 September, the Luftwaffe embarked on an all-out attack against London, which was later dubbed Battle of Britain Day. The next day, Trautloft met with his three group commanders at Campagne-lès-Guines: Hauptmann Hubertus von Bonin of I. Gruppe, Hauptmann Dietrich Hrabak of II. Gruppe, and the acting Gruppenkommandeur of III. Gruppe, Oberleutnant Günther Scholz. The topics of discussion were poor radio discipline and concern regarding the overclaiming of aerial victories. On 2 November, Trautloft's Bf 109 E-3 (Werknummer 724—factory number) was damaged by a squib load, but he managed to land the aircraft safely. On 20 November, the Geschwaderstab began transferring to Germany for a period of rest and maintenance, arriving at Dortmund Airfield on 3 December. The unit stayed in Dortmund until 15 January 1941, when it was ordered to Le Mans Airfield in France. On 29 March, JG 54 was withdrawn from France and ordered to Graz-Thalerhof in preparation for the Balkans campaign. The Geschwaderstab remained in Graz-Thalerhof until 14 April and then relocated to Deta. The next day, the Geschwaderstab moved again, this time to Pančevo Airfield, where it remained until 19 April. Following the capitulation of Yugoslavia, JG 54 was ordered to Belgrade. Trautloft's Bf 109 E-3 (Werknummer 724) was again damaged on 22 April in a forced landing at Fünfkirchen, present-day Pécs, following engine failure. On 25 April, JG 54 was ordered to return to Germany, arriving at Airfield Stolp-Reitz in Pomerania, present-day Słupsk, on 3 May. The Geschwaderstab did not claim any aerial victories during the Balkans campaign.

Operation Barbarossa

At Stolp-Reitz, JG 54 upgraded their aircraft to the Bf 109 F-2.
For the next four weeks, the pilots familiarized themselves with the new aircraft before, on 15 June, the Geschwaderstab was ordered to Trakehnen in preparation for Operation Barbarossa, the invasion of the Soviet Union. During the upcoming invasion, JG 54 was to be deployed in the area of Army Group North, subordinated to I. Fliegerkorps (1st Air Corps), supporting the 16th and 18th Armies as well as Panzer Group 4 in their strategic objective of reaching Leningrad. On 22 June, the day of the invasion, JG 54 was tasked with escorting German bombers from Kampfgeschwader 1, 76 and 77 (KG 1, KG 76 and KG 77—1st, 76th and 77th Bomber Wings) on their mission to bomb Soviet airfields near the Lithuanian border. On one of these missions, Trautloft claimed an Ilyushin DB-3 bomber shot down northwest of Marijampolė. The next day, he claimed a Tupolev SB bomber in the vicinity of Kussen in the Krasnoznamensky District. On 24 June, elements of JG 54 moved to Kaunas with the objective of achieving air supremacy over the combat area of Army Group North. Flying from Kaunas, Trautloft claimed two DB-3s, one on 24 June and another the next day. On 28 June, the Geschwaderstab was moved to Daugavpils, protecting the bridgehead on the eastern bank of the Daugava. On 30 June, the bridgehead came under heavy attack by Soviet bombers targeting German forces near the captured bridges over the Daugava. In defense of the bridgehead, Trautloft claimed two further DB-3s. That day, 1 Minno-torpednyy Aviatsionnyy Polk (1 MTAP—1st Minelaying and Torpedo-Bomber Regiment) had dispatched 32 DB-3s, losing 15 aircraft in this engagement, with 10 further aircraft sustaining combat damage. On 27 July, Trautloft was awarded the Knight's Cross of the Iron Cross (Ritterkreuz des Eisernen Kreuzes) for 20 aerial victories claimed in World War II. The Geschwaderstab moved to Siversky on 7 September, followed by I. and III. Gruppe a few days later.
The airfield was located southwest of Leningrad and was equipped with hangars and buildings; JG 54 would be based there during the Siege of Leningrad. On 22 September, Trautloft visited the German infantry front lines and came under attack from strafing aircraft.

Eastern Front

On 5 December 1941, the Stavka (high command of the Soviet armed forces) launched a series of counter-offensives known as the winter campaign of 1941–42. Based at Siversky, JG 54 was the only fighter wing in the combat area of Army Group North, responsible for patrolling an area from Leningrad in the north to the Valdai Hills in the south. On 7 January 1942, the Stavka launched the Lyuban Offensive Operation, which was fought on the southern shore of Lake Ladoga, near Lyuban. The attack began north of Novgorod and aimed at encircling elements of the German 18th Army, with the objective of breaking the German siege of Leningrad. This attack forced Trautloft to commit JG 54 largely to its defense. Subsequently, most of the missions flown in January and February were over the Volkhov River, connecting Lake Ilmen and Lake Ladoga, although some missions were still flown over Leningrad. By early March, JG 54 had replaced its Bf 109 F-2 aircraft with the newer Bf 109 F-4 variant. On 6 March, Trautloft claimed a Polikarpov R-5 reconnaissance bomber aircraft near Chudovo. He was credited with an aerial victory over an I-16 on 9 March and a Yakovlev Yak-1 five days later. On 15 March, German forces launched a counterattack, leading to the encirclement of the Soviet 2nd Shock Army on 19 March. During this counter-offensive, Trautloft claimed two further victories. On 9 May, Trautloft claimed a Yak-1 fighter and a Petlyakov Pe-2 bomber in the combat area south-southwest of Valday and east of Demyansk, following the relief of the Kholm Pocket. The Geschwaderstab returned to Siversky on 15 May.
Luftwaffe commander

On 6 July 1943, Trautloft was appointed Jagdflieger Inspizient Ost, serving with the office of the General der Jagdflieger. This position put him in overall charge, as inspector, of all the fighter aircraft units fighting on the Eastern Front. On 20 November, Trautloft succeeded Günther Lützow as Inspekteur der Tagjäger, giving him overall responsibility for all day fighters. On 11 November, Göring, in his role as commander-in-chief of the Luftwaffe, organized a meeting of high-ranking Luftwaffe officers, including Trautloft. The meeting, also referred to as the "Areopag", was held at the Luftkriegsakademie (air war academy) at Berlin-Gatow. This Luftwaffe version of the Greek Areopagus—a court of justice—aimed at finding solutions to the deteriorating air war situation over Germany. In late 1944, a rumor crossed Trautloft's desk that a large number of Allied airmen were being held at Buchenwald concentration camp. Trautloft decided to visit the camp and see for himself, under the pretence of inspecting aerial bomb damage near the camp. Trautloft was about to leave the camp when a captured US airman, Bernard Scharf, called out to him in fluent German from behind a fence. The SS guards tried to intervene, but Trautloft pointed out that he outranked them and made them stand back. Scharf explained that he was one of more than 160 Allied airmen imprisoned at the camp and begged Trautloft to rescue him and the other airmen. Trautloft's adjutant also spoke to the group's commanding officer, a New Zealand airman, Phil Lamason. Disturbed by the event, Trautloft returned to Berlin and began the process of having the airmen transferred out of Buchenwald. Seven days before their scheduled execution, the airmen were taken by train by the Luftwaffe to Stalag Luft III. In early 1945, Trautloft joined other high-ranking pilots in the "Fighter Pilots' Revolt", which escalated into a meeting with Göring on 22 January 1945.
This was an attempt to reinstate Galland, who had been dismissed for outspokenness regarding the Oberkommando der Luftwaffe (Luftwaffe high command) and had been replaced by Oberst Gordon Gollob as General der Jagdflieger. The meeting was held at the Haus der Flieger in Berlin and was attended by a number of high-ranking fighter pilot leaders, who included Trautloft, Lützow, Hermann Graf, Gerhard Michalski, Helmut Bennemann, Kurt Bühligen, Erich Leie and Herbert Ihlefeld, and their antagonist Göring, supported by his staff officers Brauchitsch and Karl Koller. The fighter pilots, with Lützow taking the lead as spokesman, criticized Göring and held him personally responsible for the decisions which contributed to the lost air war over Europe. Following this incident, Trautloft was relieved of his position and sent to command the 4. Flieger-Schule Division (4th Pilot School Division) in Strassburg. He spent the remainder of the war there. Trautloft ended the war as an Oberst (colonel). Later life After the war, Trautloft joined the new German Air Force, at the time referred to as the Bundesluftwaffe, of West Germany on 1 October 1957, now with the rank of Brigadegeneral. In 1961, he served as deputy Inspector of the Air Force. On 1 January 1962, Trautloft succeeded Generalmajor Hermann Plocher as commander of (Air Force Group South) in Karlsruhe. Trautloft was retired on 26 June 1970 with a (Grand Tattoo), holding the rank of Generalleutnant. That day, he was awarded the Grand Cross of Merit with Star of the Order of Merit of the Federal Republic of Germany () for his service in the Bundesluftwaffe. He was an active member of many veteran organizations, including the Gemeinschaft der Jagdflieger, until his death on 11 January 1995 at Bad Wiessee. Summary of career Aerial victory claims According to US historian David T. Zabecki, Trautloft was credited with 58 aerial victories, five of them during the Spanish Civil War. 
Mathews and Foreman, authors of Luftwaffe Aces — Biographies and Victory Claims, researched the German Federal Archives and found records for 58 aerial victory claims, plus three further unconfirmed claims. This number includes five claims during the Spanish Civil War, eight on the Western Front and 45 on the Eastern Front of World War II. Victory claims were logged to a map-reference (PQ = Planquadrat), for example "PQ 36 Ost 10523". The Luftwaffe grid map () covered all of Europe, western Russia and North Africa and was composed of rectangles measuring 15 minutes of latitude by 30 minutes of longitude, an area of about . These sectors were then subdivided into 36 smaller units to give a location area 3 × 4 km in size. Awards Spanish Cross in Gold with Swords (14 April 1939) Knight's Cross of the Iron Cross on 27 July 1941 as Major and Geschwaderkommodore of Jagdgeschwader 54 German Cross in Gold on 27 July 1942 as Major in Jagdgeschwader 54 Grand Cross of Merit with Star (26 June 1970) Works Trautloft, Hannes (1940). Als Jagdflieger in Spanien: Aus dem Tagebuch eines deutschen Legionärs [As a Fighter Pilot in Spain: From the Diary of a German Legionnaire]. Berlin: A. Nauck & Co.
\section{Introduction and main results} The development of various languages and approaches for describing groups of automorphisms of spherically homogeneous rooted trees has brought many significant results. For example, the algebraic language of the so-called 'tableaux' with 'truncated' polynomials over finite fields, introduced by L.~Kaloujnine~\cite{17}, turned out to be effective in studying these groups as iterated wreath products. V.~I.~Sushchanskyy~\cite{16} used this language to construct the pioneering examples of infinite periodic 2-generated $p$-groups ($p>2$) as well as to produce factorable subgroups of wreath products of groups~\cite{18,19}. A.~V.~Rozhkov~\cite{44}, by using the notion of the cortage and studying the so-called Aleshin type groups, first gave an example of a 2-generated periodic group containing elements of all possible finite orders (see also~\cite{66,55}). A.~Ershler~\cite{31,32} used the combinatorial language of time-varying automata to discover new results concerning the growth and the so-called F${\rm \ddot{o}}$lner functions of groups. In the present paper, by introducing the notion of a self-similar automaton over a changing alphabet and the group generated by such an automaton, we make an attempt to show that the concept of a self-similar group with its faithful action by automorphisms on a homogeneous rooted tree (also called a regular rooted tree) can be naturally extended to an arbitrary spherically homogeneous rooted tree. As we show in Theorem~\ref{t1}, our combinatorial approach allows us to characterize the class of finitely generated residually finite groups. We also provide (Theorem~\ref{t2}) naturally defined representations for a large class of lamplighter groups for which no realization by the standard notion of a Mealy-type automaton is known. 
\subsection{Mealy automata and self-similar groups} Self-similar groups appear in many branches of mathematics, including operator algebras, dynamical systems, automata theory, combinatorics, ergodic theory, fractals, and others. The most interesting and beautiful examples of self-similar actions are defined on the tree $X^*$ of finite words over a finite alphabet $X$ by using the notion of a transducer (also called a Mealy automaton), which, by definition, is a finite set $Q$ together with a transition function $\varphi\colon Q\times X\to Q$ and an output function $\psi\colon Q\times X\to X$. A convenient way of defining and presenting a Mealy automaton is to draw its diagram. For example, the diagram in Figure~\ref{f1} determines a Mealy automaton over the alphabet $X=\{0,1\}$, with five states $a, b, c, d, e$, with a transition function given by the oriented edges and with an output function given by the labelling of the vertices by the maps $X\to X$, where $id$ denotes the identity map and $\sigma$ is the transposition $0\mapsto 1, 1\mapsto 0$. \begin{figure}[hbtp] \centering \includegraphics[width=4.5cm]{gr.eps} \caption{The automaton generating the Grigorchuk group} \label{f1} \end{figure} We easily verify from this diagram the following equalities defining the transition and output functions of this automaton: \begin{eqnarray*} \varphi(a,x)&=&\varphi(e,x)=e,\;\;\;x\in X,\\ \varphi(b,0)&=&a,\;\; \varphi(b,1)=c,\;\;\varphi(c,0)=a,\\ \varphi(c,1)&=&d,\;\varphi(d,0)=e,\;\;\varphi(d,1)=b,\\ \psi(a,x)&=&\sigma(x),\;\psi(b,x)=\psi(c,x)=\psi(d,x)=\psi(e,x)=x,\;\;\;x\in X. \end{eqnarray*} Each state $q\in Q$ of a Mealy automaton $A=(X, Q, \varphi, \psi)$ defines a transformation of the tree $X^*$. 
Namely, given a word $x_0x_1\ldots x_k\in X^*$, the automaton $A$ acts on the first letter $x_0$ by the map which labels the vertex corresponding to the state $q$ in the diagram of $A$ and changes its state to the one given by the end of the arrow going out of $q$ and labelled by $x_0$; being in the new state, the automaton reads the next letter $x_1$, transforms it and moves to the next state according to the above rule, and it continues in this way to transform the remaining letters of the word. The induced transformation of the set $X^*$ preserves the empty word and the beginnings of words and, if $A$ is invertible (i.e. all the maps labelling vertices in the diagram of $A$ are permutations of the alphabet $X$), it defines an automorphism of the tree $X^*$. The group generated by the automorphisms corresponding to all states of an invertible automaton $A$, with the composition of mappings as a product, is called the group generated by this automaton. According to~\cite{0} (Definition~1.1, p. 129), an abstract group $G$ is called self-similar over an alphabet $X$ if there is a Mealy automaton $A=(X, Q, \varphi, \psi)$ which generates a group isomorphic with $G$. For instance, the group $G$ generated by the automaton depicted in Figure~\ref{f1} is the famous Grigorchuk group, which constitutes one of the most interesting examples among groups generated by the states of a Mealy automaton and, as it turned out, in the whole of group theory. This is also the first nontrivial example of a self-similar group. Groups generated by the states of a Mealy automaton form a remarkable class of groups containing various important examples connected to many interesting topics. For more about Mealy automata, groups generated by them, self-similar groups and for many open problems around these concepts see the survey paper~\cite{1} or~\cite{4}. 
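For the reader who prefers to experiment, the reading procedure just described can be sketched in a few lines of Python (this sketch is our illustration and is not part of the formal development); the dictionaries below transcribe the diagram in Figure~\ref{f1}:

```python
# A sketch (ours, not from the paper) of the automaton in Figure 1
# over X = {0, 1}.  PHI transcribes the arrows of the diagram and TAU
# the labellings of the vertices: the state a acts by the transposition
# sigma, all the remaining states act identically.

SIGMA = {0: 1, 1: 0}
IDENT = {0: 0, 1: 1}

PHI = {
    ('a', 0): 'e', ('a', 1): 'e',
    ('b', 0): 'a', ('b', 1): 'c',
    ('c', 0): 'a', ('c', 1): 'd',
    ('d', 0): 'e', ('d', 1): 'b',
    ('e', 0): 'e', ('e', 1): 'e',
}
TAU = {'a': SIGMA, 'b': IDENT, 'c': IDENT, 'd': IDENT, 'e': IDENT}

def act(state, word):
    """Image of a word under a state: transform the current letter by the
    labelling of the state, then move along the arrow labelled by the
    letter just read."""
    out = []
    for x in word:
        out.append(TAU[state][x])
        state = PHI[(state, x)]
    return out
```

For instance, `act('a', [0, 1, 1])` returns `[1, 1, 1]` (the state $a$ changes only the first letter), and on words of any fixed length each of the five states induces an involution, in accordance with the relations $a^2=b^2=c^2=d^2=1$ holding in the Grigorchuk group.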
\subsection{Time-varying automata and self-similarity over a changing alphabet} A time-varying automaton, which is a natural generalization of a Mealy type automaton, allows one to work not only with a fixed finite alphabet $X$ but also with an infinite sequence $$ X=(X_i)_{i\geq 0}=(X_0, X_1, \ldots) $$ of such alphabets, which will further be called a changing alphabet. We say that the changing alphabet $X$ is unbounded if the sequence of cardinalities $(|X_i|)_{i\geq 0}$ (the so-called branching index of $X$) is unbounded. By definition, a time-varying automaton over a changing alphabet $X=(X_i)_{i\geq 0}$ is a quadruple $$ A=(X, Q, \varphi, \psi), $$ where $Q$ is a finite set of states, and $\varphi=(\varphi_i)_{i\geq 0}$ and $\psi=(\psi_i)_{i\geq 0}$ are sequences of, respectively, transition functions and output functions of the form $$ \varphi_i\colon Q\times X_i\to Q,\;\;\;\psi_i \colon Q\times X_i\to X_i. $$ The automaton $A$ is called invertible if $x\mapsto \psi_i(q, x)$ is an invertible mapping of the set $X_i$ for all $i\geq 0$ and $q\in Q$. For every $k\geq 0$ we also define the {\it $k$-th shift} of the automaton $A$, that is, a time-varying automaton $$ A_{k}=(X_{(k)}, Q, \varphi_{(k)}, \psi_{(k)}), $$ where $$ X_{(k)}=(X_{k+i})_{i\geq0},\;\;\;\varphi_{(k)}=(\varphi_{k+i})_{i\geq 0},\;\;\;\psi_{(k)}=(\psi_{k+i})_{i\geq 0}. $$ In particular, $A$ is a Mealy automaton if and only if $A_k=A$ for every $k\geq 0$. Initially, the literature on time-varying automata considered only automata over a fixed alphabet, and merely the structural properties of such automata were studied (see~\cite{9}). We first defined in~\cite{11} the class of time-varying automata over a changing alphabet $X=(X_i)_{i\geq0}$ (see also~\cite{6,12,15}). Such an alphabet defines a tree $X^*$ of finite words, which is an example of a spherically homogeneous rooted tree. The tree $X^*$ is homogeneous if and only if the branching index of $X$ is constant. 
Moreover, any locally finite spherically homogeneous rooted tree $T$ is isomorphic with $X^*$ for a suitable changing alphabet $X=(X_i)_{i\geq 0}$. We introduced in~\cite{11} the notion of a group generated by an automaton over a changing alphabet and showed that this combinatorial language is apt to define and study groups of automorphisms of spherically homogeneous rooted trees which may be not homogeneous. Specifically, if $A=(X, Q, \varphi, \psi)$ is an invertible time-varying automaton over a changing alphabet $X=(X_i)_{i\geq 0}$, then for every $k\geq 0$ and each state $q\in Q$ we define, via transition and output functions, an automorphism $q_k$ of the tree $X^*_{(k)}$ and call the group $$ G(A_k)=\langle q_k\colon q\in Q\rangle $$ the {\it group generated by the $k$-th shift of $A$}. \begin{defin} An invertible time-varying automaton $A=(X, Q, \varphi, \psi)$ is called {\it self-similar} if for every $k\geq0$ the mapping $q_k\mapsto q_0$ ($q\in Q$) induces an isomorphism $G(A_{k})\simeq G(A)$. \end{defin} If $A=(X, Q, \varphi, \psi)$ is a Mealy automaton, then we have $q_k=q_0$ for all $k\geq 0$ and $q\in Q$. Thus every Mealy automaton is self-similar by the above definition. \begin{defin} An abstract group $G$ is called a {\it self-similar group over the changing alphabet $X=(X_i)_{i\geq 0}$} if there is a self-similar automaton $A=(X, Q, \varphi, \psi)$ which generates a group isomorphic with $G$, i.e. $G(A)\simeq G$. \end{defin} Every self-similar group $G$ over a changing alphabet $X$ is an example of a finitely generated residually finite group (a group is called residually finite if there is a descending chain $N_0\geq N_1\geq N_2\geq \ldots$ of normal subgroups of finite index with trivial intersection). Indeed, the group $Aut(X^*)$ of all automorphisms of the tree $X^*$ is an example of a residually finite group (the stabilizers of consecutive levels of this tree are normal subgroups of finite index which intersect trivially). 
Thus $G$, as a group isomorphic with a subgroup of $Aut(X^*)$, is residually finite. Given an abstract group $G$ and a changing alphabet $X=(X_i)_{i\geq 0}$, it is natural to ask about the {\it automaton complexity of the group $G$ over the alphabet $X$}, i.e. about the minimal number of states in an automaton over $X$ which generates a group isomorphic with $G$. Obviously, for any changing alphabet $X$ the automaton complexity of any group $G$ over $X$ is not smaller than the rank $r(G)$ of this group, i.e. than the minimal cardinality of a generating set of $G$. \begin{defin} If the number of states in an automaton $A=(X, Q, \varphi, \psi)$ coincides with the rank of the group $G(A)$, i.e. $|Q|=r(G(A))$, then the automaton $A$ is called {\it optimal}. \end{defin} \subsection{The class of finitely generated self-similar groups over an unbounded changing alphabet} In the theory of groups generated by Mealy automata, the problem of classifying all groups generated by automata over a given alphabet $X$ and with a given number of states is essential and intensively studied. For example, by the result from~\cite{1} there are only six groups (up to isomorphism) generated by 2-state Mealy automata over the binary alphabet, including the simplest example of an interesting group generated by a Mealy automaton, that is the lamplighter group $C_2\wr \mathbb{Z}$, where $C_2$ denotes a cyclic group of order 2 and $\mathbb{Z}$ is the infinite cyclic group. \begin{figure}[hbtp] \centering \includegraphics[width=6.3cm]{lg.eps} \caption{A Mealy automaton generating the lamplighter group $C_2\wr \mathbb{Z}$} \label{f3} \end{figure} Much effort has been put towards a classification of groups generated by 3-state Mealy automata over the binary alphabet. In the paper~\cite{7} it was shown, among other things, that there are no more than $124$ pairwise nonisomorphic groups defined by such automata. Only recently it was discovered that a free nonabelian group of finite rank is self-similar. 
Namely, M.~Vorobets and Ya.~Vorobets showed in~\cite{8} that a Mealy automaton (the so-called Aleshin type automaton) depicted in Figure~\ref{f2} generates a free group of rank 3. Moreover, it is still an open question whether or not there exists an optimal Mealy automaton which generates a free group of rank 2. Referring to this problem, we provided in~\cite{15} an explicit and naturally defined realization of this group by an optimal automaton over a changing alphabet; see also Example~\ref{ex2}. \begin{figure}[hbtp] \centering \includegraphics[width=5.5cm]{fg.eps} \caption{A Mealy automaton generating a free group} \label{f2} \end{figure} The first result of the present paper shows that the situation completely changes if we try to characterize all self-similar groups over an arbitrary unbounded changing alphabet. Namely, we show the following \begin{thm}\label{t1} Every finitely generated residually finite group $G$ is self-similar over any unbounded changing alphabet and the corresponding self-similar automaton is optimal. \end{thm} As we will see in the proof, the construction leading to Theorem~\ref{t1} is based on a specific type of automaton which we call an automaton of a diagonal type. \begin{defin} We say that a time-varying automaton $A=(X, Q, \varphi, \psi)$ is of {\it a diagonal type}, or that $A$ has transition functions defined in the {\it diagonal way}, if $\varphi_i(q, x)=q$ for all $i\geq 0$, $q\in Q$ and $x\in X_i$. \end{defin} Note that the proof of Theorem~\ref{t1} is non-constructive and we cannot explicitly define the output functions of the corresponding self-similar automaton for a given group $G$ and a changing alphabet $X=(X_i)_{i\geq 0}$. Indeed, in that proof we choose an arbitrary generating set $S$ of $G$ as the set of states of an automaton $A$ generating the group $G$, and we define the transition functions in the diagonal way. 
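For the simplest infinite group $G=\mathbb{Z}$, however, the diagonal recipe can be made completely explicit. Take the single generator $s$ and, over the unbounded changing alphabet $X_i=\{0,\ldots,i+1\}$, let the labelling of $s$ at level $i$ be the full cycle $x\mapsto x+1 \pmod{i+2}$; the injectivity of $k\mapsto (k \bmod (i+2))_{i\geq 0}$ then replaces the normal-series argument. The following Python sketch (our illustration, with these assumed choices) computes the action of the powers of $s_0$:

```python
# Our illustration of an optimal diagonal-type automaton for G = Z:
# a single state s over the unbounded changing alphabet X_i = {0, ..., i+1},
# labelled at level i by the full cycle x -> x + 1 (mod i+2).  Since the
# transitions are diagonal, s_0^k acts on a word by rotating the letter
# at level j by k modulo j + 2, independently at each level.

def s_power(k, word):
    """Image of a word under the k-th power of the transformation s_0."""
    return [(x + k) % (j + 2) for j, x in enumerate(word)]
```

Since for every $k\neq 0$ some modulus $j+2$ fails to divide $k$, no nontrivial power of $s_0$ acts trivially on the tree, so $\langle s_0\rangle\simeq\mathbb{Z}$ and this one-state diagonal automaton is optimal.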
Next, we show that the output functions of $A$ can be induced by some embedding $g\mapsto (g^{(0)}, g^{(1)}, \ldots)$ of the group $G$ into the infinite cartesian product $S(X_0)\times S(X_1)\times \ldots$ of symmetric groups on the sets of letters; due to this embedding we define the output functions as follows: $\psi_i(s, x)=s^{(i)}(x)$ for all $i\geq 0$, $s\in S$ and $x\in X_i$. However, the above embedding arises simply from the fact that $G$ embeds into the cartesian product $G/N_0\times G/N_1\times\ldots$, where $G=N_0\triangleright N_1\triangleright N_2\triangleright \ldots$ is an infinite normal series with finite quotients $G/N_i$ ($i\geq 0$) and trivial intersection. But the existence of such a normal series is equivalent to the assumption that $G$ is residually finite. Hence, given a residually finite group $G$, this method does not tell us how to compute the permutations $s^{(i)}\in S(X_i)$ effectively. In particular, we know nothing about the cycle structure of these permutations. Moreover, in the paper~\cite{6} we tried to find, for a free group of rank~2, an explicit representation by an optimal automaton of a diagonal type. However, our computations led to a rather complicated description of the mappings $s^{(i)}$ and, consequently, of the output functions in such an automaton. In view of the above disadvantages as well as the previous results, it would be interesting to find and study explicit and naturally defined representations of a given group $G$ by self-similar automata over a changing alphabet, especially when such representations are not known via the notion of a Mealy automaton. \subsection{Lamplighter groups as self-similar groups over a changing alphabet} Let $K$ be an arbitrary nontrivial finitely generated (finite or infinite) abelian group. 
By definition, the lamplighter group $K\wr \mathbb{Z}$ is a semidirect product $$ \bigoplus_\mathbb{Z}K\rtimes \mathbb{Z} $$ with the action of the infinite cyclic group on the direct sum $\bigoplus_\mathbb{Z}K$ by the shift. For the rank $r(K\wr\mathbb{Z})$ of this group we have: $r(K\wr \mathbb{Z})=r(K\times \mathbb{Z})=r(K)+1$. In recent years groups of this form have been intensively studied via their self-similar actions. For example, the automaton realization of the lamplighter group $C_2\wr\mathbb{Z}$ from the paper~\cite{2} was used by R.~Grigorchuk and A.~${\rm \dot{Z}}$uk to compute the spectral measure associated with random walks on this group, which led to a counterexample to the strong Atiyah conjecture~\cite{10}. This construction was generalized~\cite{3,5} by P.~Silva and B.~Steinberg in the concept of a reset Mealy automaton, where for every finite abelian group $K$ the corresponding lamplighter group was realized as a group generated by a Mealy automaton in which the set of states and the alphabet coincide with $K$. Hence, the only known optimal realization of the group $K\wr \mathbb{Z}$ by an automaton concerns the simplest case $K=C_2$. Here, it should also be mentioned that the question whether or not there exists a representation of a lamplighter group $K\wr \mathbb{Z}$ with an infinite $K$ by a Mealy automaton is still open. For the second result of this paper we start from a naturally defined self-similar automaton $\mathcal{D}$ of a diagonal type over a changing alphabet $X$ that generates the direct product $K\times\mathbb{Z}$ (see Example~\ref{ex0}). We discover that by a simple manoeuvre on the transition functions of $\mathcal{D}$ we can pass to a new automaton $\mathcal{A}$ and obtain in this way an optimal self-similar automaton realization of the lamplighter group $K\wr \mathbb{Z}$. 
Namely, we distinguish the state $a$ in the automaton $\mathcal{D}$ and the letters $x_{q, i}\in X_i$ ($i\geq 0$) for each state $q\neq a$, and we define $\mathcal{A}$ as an automaton which operates like $\mathcal{D}$, except that if, being in a state $q\neq a$, it receives the letter $x_{q, i}$ at a moment $i\geq 0$, then it moves to the state~$a$. To define the changing alphabet $X=(X_i)_{i\geq0}$ and the output functions of the automaton $\mathcal{D}$, we decompose the group $K$ into a direct product of cyclic groups $$ K=\mathbb{Z}^{n_1}\times (C_{r_1}\times\ldots\times C_{r_{n_2}}),\;\;\;n_1, n_2\geq 0 $$ such that $n_1+n_2=r(K)$. Let us denote $n:=r(K)=n_1+n_2$. For every $i\geq 0$ we consider $2n$ arbitrary pairwise disjoint cycles $$ \pi_{1,i}, \ldots, \pi_{n, i},\; \sigma_{1, i}, \ldots, \sigma_{n, i} $$ in the symmetric group on a finite set $X_i$ which satisfy the following conditions: for each $1\leq s\leq n$ the lengths of the cycles $\pi_{s,i}$ ($i\geq 0$) are not uniformly bounded, the same holds for the cycles $\sigma_{s, i}$ ($i\geq 0$) for each $1\leq s\leq n_1$, and the length of each cycle $\sigma_{s,i}$ ($n_1<s\leq n$, $i\geq 0$) does not depend on $i$ and is equal to $r_{s-n_1}$. In particular, we see that the arising changing alphabet $X=(X_i)_{i\geq0}$ is unbounded. Now, taking $n$ symbols $b_1, \ldots, b_n$ different from $a$, we define the automaton $\mathcal{D}$ as an automaton of a diagonal type over the changing alphabet $X=(X_i)_{i\geq0}$, with the set of states $Q=\{a, b_1,\ldots, b_{n}\}$ and the output functions defined as follows: $$ \psi_i(q,x)=\left\{\begin{array}{ll}\label{3} \alpha_i(x),&q=a,\\ \beta_{s, i}(x), &q=b_s,\;1\leq s\leq n \end{array}\right. $$ for all $i\geq 0$, $q\in Q$ and $x\in X_i$, where the permutations $\alpha_i$, $\beta_{s,i}$ are defined in the following way $$ \alpha_i=\pi_{1,i}\cdot\ldots\cdot\pi_{n, i},\;\;\;\;\;\beta_{s, i}= \sigma_{s, i}\cdot\alpha_i. 
$$ As we mentioned above, passing to the automaton $\mathcal{A}$ requires only redefining the transition functions of $\mathcal{D}$. Explicitly, we define: $\mathcal{A}=(X, Q, \varphi^{\mathcal{A}}, \psi)$, where $$ \varphi_i^{\mathcal{A}}(q, x)=\left\{\begin{array}{ll}\label{2} a, &q=b_s,\;\;x=x_{s,i},\;\;1\leq s\leq n,\\ q, &\mbox{\rm otherwise}, \end{array}\right. $$ and $x_{s,i}:=x_{b_s,i}$ is a letter from the cycle $\pi_{s,i}$. Our main result is the following \begin{thm}\label{t2} The automaton $\mathcal{A}$ is self-similar and the group $G(\mathcal{A})$ is isomorphic with the lamplighter group $K\wr \mathbb{Z}$. \end{thm} In the proof of Theorem~\ref{t2} we do not use the algebraic language of embeddings into wreath products of groups (the so-called wreath recursion), which is common in studying self-similar groups. Instead, for any $i\geq 0$ we directly provide some recursive formulae describing the action on the tree $X^*_{(i)}$ of the transformations $a_i, c_{s,i}\colon X^*_{(i)}\to X^*_{(i)}$, where $$ c_{s,i}=(b_s)_i\cdot a_i^{-1},\;\;\;1\leq s\leq n. $$ We show (Proposition~\ref{prop1} and Lemma~\ref{lem1}) that for every $i\geq0$ any two transformations $c_{s, i}$ and $c_{s', i}$ ($1\leq s, s'\leq n$) commute and the mapping $c_{s, i}\mapsto \kappa_s$ induces an isomorphism between the group $$ K_i=\langle c_{1, i}, \ldots, c_{n, i}\rangle $$ and the group $K$, where the elements $\kappa_s\in K$ ($1\leq s\leq n$) form the standard generating set of $K$, that is $\kappa_s=(0, \ldots, 0, u_s, 0,\ldots, 0)$, where $u_s$ occurs in the $s$-th position and represents a generator of the corresponding cyclic factor of $K$. 
Next, for each $k\in\mathbb{Z}$ we consider the conjugate $K_i^{(k)}=a_i^{-k}K_ia_i^k$ as well as the group $$ H_i=\langle K_i^{(k)}\colon k\in\mathbb{Z}\rangle $$ generated by all these conjugates, and we show (Proposition~\ref{prop2}) that $H_i$ is the direct sum of its subgroups $K_i^{(k)}$, $k\in\mathbb{Z}$, which gives the isomorphism $H_i\simeq \bigoplus_\mathbb{Z}K$. Further, we show (Lemma~\ref{lem5}) that the group $H_i$ intersects the cyclic group generated by $a_i$ trivially and that this cyclic group acts on $H_i$ by conjugation via the shift (Lemma~\ref{lem4}). This implies that the group $$ G_i=\langle H_i, a_i\rangle=\langle c_{1, i}, \ldots, c_{n,i}, a_i\rangle $$ is a semidirect product of the subgroups $H_i$ and $\langle a_i\rangle$, and, consequently, that $G_i$ is isomorphic with the lamplighter group $K\wr \mathbb{Z}$ via the mapping $a_i\mapsto u$, $c_{s, i}\mapsto \eta_s$, $1\leq s\leq n$, where $\{u, \eta_1,\ldots,\eta_n\}$ is the standard generating set of $K\wr \mathbb{Z}$, that is, $u$ is a generator of $\mathbb{Z}$ in the product $\bigoplus_\mathbb{Z}K\rtimes \mathbb{Z}$ and $$ \eta_s=(\ldots,{\bf 0},{\bf 0},{\bf 0},\kappa_s,{\bf 0},{\bf 0},{\bf 0},\ldots)\in \bigoplus_\mathbb{Z}K, $$ where each $\kappa_s$ on the right hand side occurs in the zero position. Finally, we obtain that the group $$ G(\mathcal{A}_{i})=\langle a_i, (b_1)_i, \ldots, (b_n)_i\rangle $$ generated by the $i$-th shift of the automaton $\mathcal{A}$ is isomorphic with $G_i$ and the mapping $a_i\mapsto u$, $(b_s)_i\mapsto \eta_s\cdot u$ induces the isomorphism $G(\mathcal{A}_{i})\simeq K\wr \mathbb{Z}$. As we will see in the proof, all calculations verifying the above propositions and lemmas are entirely elementary. However, they are burdened with some technical formulae and the presentation of the proof may seem quite formalistic. This contrasts with the transparent and intuitive idea behind the definition of the automaton $\mathcal{A}$. 
On the other hand, our construction confirms one of the most intriguing phenomena in computational group theory, which originates from dealing with the simplest examples of transducers: although they can generate various classic and important groups with rare and exotic properties, the derivation of such a group from the corresponding automaton is far from obvious and, quite often, requires sophisticated and complex approaches. This can be seen, for example, in the original proof from~\cite{2} identifying the lamplighter group $C_2\wr\mathbb{Z}$ as a self-similar group, or in the long history of discovering a free group behind the Aleshin type automaton. We hope that our construction will not only fill the gap in self-similar automaton representations for a large class of lamplighter groups, but will also serve as a useful tool in the further study of these groups, in particular, via their dynamics and geometry on the corresponding tree. We also think that it will help to better understand the connection between the structure of an automaton over a changing alphabet and the group it defines. \section{The tree $X^*$ and its automaton transformations} In the Introduction, we have already presented the notion of a changing alphabet $X=(X_i)_{i\geq 0}$, a time-varying automaton $A=(X, Q, \varphi, \psi)$, their shifts $X_{(k)}$, $A_k$ ($k\geq 0$) and a self-similar group over a changing alphabet. Now, we only recall the necessary notions and facts concerning the tree $X^*$ of finite words over $X$ and the automaton transformations $q_k\colon X^*_{(k)}\to X^*_{(k)}$ ($q\in Q$, $k\geq 0$). We define the tree $X^*$ as a disjoint sum of the cartesian products $$ X_0\times X_{1}\times\ldots\times X_{i},\;\;\;i\geq 0, $$ together with the empty word $\epsilon$. The elements of a word $w\in X^*$ are called letters; we will not separate them by commas and will write $w=x_0x_1\ldots x_k$, where $x_i\in X_{i}$ are the letters. 
Thus for a given $i\geq 0$ the above cartesian product consists of all words $w\in X^*$ of length $|w|=i+1$. If $w\in X^*$ and $v\in X^*_{(|w|)}$, then $wv$ denotes the concatenation of $w$ and $v$. Obviously $wv\in X^*$. The set $X^*$ has the structure of a spherically homogeneous rooted tree. The empty word is the root of this tree and two words are connected if and only if they are of the form $w$ and $wx$ for some $w\in X^*$ and $x\in X_{|w|}$. In particular, the $i$-th level $X^i$ ($i\geq0$) of the tree $X^*$ (that is, the set of vertices at distance $i$ from the root) consists of all words of length $i$. There is a natural interpretation of an automaton $A=(X, Q, \varphi, \psi)$ over a changing alphabet $X=(X_i)_{i\geq 0}$ as a machine which, being at a moment $i\geq 0$ in a state $q\in Q$, reads a letter $x\in X_i$ from the input tape, types the letter $\psi_i(q, x)$ on the output tape, moves to the state $\varphi_i(q, x)$ and proceeds further to the moment $i+1$. This allows us to define for every $k\geq0$ and $q\in Q$ the transformation $q_k\colon X^*_{(k)}\to X^*_{(k)}$ recursively: $q_k(\epsilon)=\epsilon$ and if $xw\in X^*_{(k)}$ with $x\in X_k$, $w\in X^*_{(k+1)}$, then \begin{equation}\label{0} q_k(xw)=x'q'_{k+1}(w), \end{equation} where $x'=\psi_k(q,x)$ and $q'=\varphi_k(q, x)$. The transformation $q_k$ is called an automaton transformation of the tree $X^*_{(k)}$ (corresponding to the state $q$ in the $k$-th transition of $A$). It is convenient to present an automaton $A=(X, Q, \varphi, \psi)$ over a changing alphabet $X=(X_i)_{i\geq 0}$ as a labeled directed locally finite graph with the set $$ \{(i, q)\colon i\geq0,\; q\in Q\} $$ as the set of vertices. Two vertices are connected with an arrow if and only if they are of the form $(i, q)$ and $(i+1, \varphi_i(q, x))$ for some $i\geq0$, $q\in Q$ and $x\in X_i$. This arrow is labeled by $x$, starts from the vertex $(i, q)$ and goes to the vertex $(i+1, \varphi_i(q, x))$. 
Each vertex $(i, q)$ is labeled by the mapping $$ \tau_{i, q}\colon X_i\to X_i,\;\;\;\tau_{i,q}\colon x\mapsto \psi_i(q,x) $$ (the so-called labelling of the state $q$ in the $i$-th transition of $A$). Further, to make the graph of the automaton clear, we will substitute a large number of arrows connecting two given vertices and having the same direction for one multi-arrow labeled by suitable letters, and if the labelling of such a multi-arrow starting from a given vertex follows from the labelling of other arrows starting from this vertex, we will omit this labelling. \begin{figure}[hbtp] \centering \includegraphics[width=9cm]{3.eps} \caption{The graph of the automaton $\mathcal{A}$ generating $K\wr \mathbb{Z}$, case $n=1$} \label{fig14} \end{figure} Directly by the formula~(\ref{0}) we see that in case $A$ is invertible (i.e. each $\tau_{i, q}$ is a permutation of the set $X_i$), every automaton transformation $q_i\colon X^*_{(i)}\to X^*_{(i)}$ is also invertible. By the formula (\ref{0}) we also see that $q_i$ preserves the lengths of words and common beginnings and hence it defines an endomorphism of the tree $X^*_{(i)}$. Moreover, the composition of two automaton transformations of the same tree is also an automaton transformation of this tree and the inverse of an invertible automaton transformation is also of this type (this can be shown by using the operation of composition of two time-varying automata over the same changing alphabet and the notion of an inverse automaton -- see~\cite{11}). In particular, the set of all invertible automaton transformations of a tree $X^*$ together with the composition of mappings as a product forms a proper subgroup of the group $Aut(X^*)$ of automorphisms of this tree, which is denoted by $FA(X^*)$ and called the group of finite-state automorphisms. \begin{rem} Throughout the rest of the text we use the right-action convention in writing the composition $fg$ of two transformations $f, g\colon X^*\to X^*$, i.e. 
the transformation applied first is written on the left and for any word $w\in X^*$ we have the equality $fg(w)=g(f(w))$. \end{rem} \section{The proof of Theorem~\ref{t1}} Since the group $G$ is countable and residually finite, there is an infinite normal series $$ G=N_0\triangleright N_1\triangleright\ldots $$ with finite quotients $G/N_i$ and trivial intersection. Then the function $g\mapsto (gN_i)_{i\geq 0}$ is an embedding of $G$ into the infinite cartesian product \begin{equation}\label{cp} G/N_0\times G/N_1\times\ldots. \end{equation} Let $X=(X_i)_{i\geq 0}$ be an unbounded changing alphabet. Then there is an infinite increasing sequence $0<t_0<t_1<\ldots$ of integers such that for every $i\geq 0$ the quotient $G/N_i$ embeds into the product $$ S(X_{t_{i-1}+1})\times S(X_{t_{i-1}+2})\times\ldots\times S(X_{t_i}) $$ of symmetric groups of consecutive sets of letters, where for $i=0$ we take $t_{(-1)}=-1$. Consequently, there is an embedding $$ g\mapsto (g^{(0)}, g^{(1)}, \ldots),\;\;\;g\in G $$ of $G$ into the cartesian product $S(X_0)\times S(X_1)\times\ldots$. The above embedding allows us to define the transformations $\xi_{g,i}\colon X^*_{(i)}\to X^*_{(i)}$ ($i\geq 0$, $g\in G$) as follows: $\xi_{g,i}(\epsilon)=\epsilon$ and if $w=x_0x_1\ldots x_k\in X^*_{(i)}$, then $$ \xi_{g,i}(w)=g^{(i)}(x_0)g^{(i+1)}(x_1)\ldots g^{(i+k)}(x_k). $$ In particular, for all $g, g'\in G$ and $i\geq 0$ we have \begin{equation}\label{epim} \xi_{gg',i}=\xi_{g,i}\cdot \xi_{g',i},\;\;\;\xi_{g^{-1},i}=(\xi_{g,i})^{-1}. \end{equation} Let $S$ be any generating set of $G$ whose cardinality coincides with the rank of $G$. Let us define a time-varying automaton $A=(X, S, \varphi, \psi)$ as follows: $$ \varphi_i(s, x)=s,\;\;\;\psi_i(s,x)=s^{(i)}(x) $$ for all $i\geq 0$, $s\in S$, $x\in X_i$. Let us fix $i\geq 0$.
Directly by the definition of $A$, the transformation $s_i$ defined by a state $s\in S$ in the $i$-th transition of $A$ acts on any word $w=x_0x_1\ldots x_k\in X^*_{(i)}$ in the following way $$ s_i(w)=s^{(i)}(x_0)s^{(i+1)}(x_1)\ldots s^{(i+k)}(x_k). $$ In particular, we have $s_i=\xi_{s,i}$ for every $s\in S$ and, since the elements $s_i$ for $s\in S$ generate the group $G(A_i)$, we have $$ G(A_i)=\{\xi_{g,i}\colon g\in G\}. $$ By the equalities~(\ref{epim}) we see that the mapping $$ G\to G(A_i),\;\;\;g\mapsto \xi_{g,i} $$ is an epimorphism. Further, since $\bigcap_{k\geq 0}N_k=\{1_G\}$, for every $g\neq 1_G$ there are infinitely many $k\geq 0$ such that $gN_k\neq N_k$. Consequently $g^{(k)}\neq Id_{X_k}$ for infinitely many $k\geq 0$, which implies $\xi_{g,i}\neq Id_{X^*_{(i)}}$ for every $g\neq 1_G$. Hence the above epimorphism is one-to-one. Consequently, the mapping $s\mapsto s_i$ ($s\in S$) induces an isomorphism $G\simeq G(A_i)$, i.e. the automaton $A$ is an optimal self-similar realization of $G$. As we explained in the introduction, despite Theorem~\ref{t1}, it is not easy to find an explicit self-similar automaton realization for a given group $G$. Even if we are able to construct an automaton defining $G$, this automaton need not be self-similar. \begin{ex}\label{ex2} Let $(r_i)_{i\geq 0}$ be a nondecreasing unbounded sequence of integers $r_i>1$ and let $X=(X_i)_{i\geq0}$ be a changing alphabet in which $X_i=\{1,2, \ldots, r_i\}$. For each $i\geq 0$ we define two cyclic permutations $\alpha_i$, $\beta_i$ of the set $X_i$: $\alpha_i=(1,2)$, $\beta_i=(1,2,\ldots,r_i)$. Let $A=(X, \{a, b\}, \varphi, \psi)$ be an automaton in which the transition and output functions are defined as follows $$ \varphi_i(q, x)=\left\{ \begin{array}{ll} q, &x\neq1,\\ a, &x=1,\;q=b,\\ b,& x=1,\; q=a, \end{array} \right.\;\;\;\psi_i(q, x)=\left\{ \begin{array}{ll} \alpha_i(x), &q=a,\\ \beta_i(x), &q=b. \end{array} \right.
$$ \begin{figure}[hbtp] \centering \includegraphics[width=9.5cm]{1.eps} \caption{A self-similar automaton from Example~\ref{ex2}} \label{fig11} \end{figure} According to~\cite{15} the group $G(A)=\langle a_0, b_0\rangle$ is a free nonabelian group of rank two generated freely by the transformations $a_0$, $b_0$. Directly by the construction of the automaton $A$ we see that for every $i\geq 0$ the automaton transformations $a_i, b_i\colon X^*_{(i)}\to X^*_{(i)}$ also generate a free nonabelian group of rank 2. In particular, the mapping $a_i\mapsto a_0$, $b_i\mapsto b_0$ induces an isomorphism $G(A_{i})\simeq G(A)$. Thus the automaton $A$ is a self-similar realization of a free nonabelian group of rank 2 over the changing alphabet $X$. \end{ex} \begin{ex}\label{ex1} Let the changing alphabet $X=(X_i)_{i\geq0}$ and the permutations $\alpha_i, \beta_i\in S(X_i)$ be defined as in the previous example. Let $A=(X, \{a, b\}, \varphi, \psi)$ be an automaton of a diagonal type with the output functions defined as follows: $\psi_i(a, x)=\alpha_i(x)$ and $\psi_i(b, x)=\beta_i(x)$ for all $i\geq 0$ and $x\in X_i$. \begin{figure}[hbtp] \centering \includegraphics[width=9cm]{2.eps} \caption{An automaton from Example~\ref{ex1}} \label{fig12} \end{figure} The automaton $A$ is not a self-similar realization of the group $G=G(A)$ because $A$ is not self-similar. Indeed, let $t\geq 0$ be the smallest number such that $r_{t}\geq 3$ and let $t'> t$ be the smallest number such that $r_{t'}>r_t$. Then one can verify by straightforward calculations that $W(a_{t'}, b_{t'})\neq Id_{X^*_{(t')}}$ and $W(a_0, b_0)=Id_{X^*}$, where the group-word $W=W(a,b)$ is defined as follows: $W(a,b)=(ab^{r_t-1}ab^{-r_t+1})^2$. Thus the mapping $a_{t'}\mapsto a_{0}$, $b_{t'}\mapsto b_{0}$ cannot be extended to an isomorphism of the groups $G(A_{t'})$ and $G(A)$. \end{ex} \begin{ex}\label{ex0} Let $\mathcal{D}$ be an automaton of a diagonal type defined in the introduction.
For every $i\geq 0$ let $a_i$, $(b_s)_i$ ($1\leq s\leq n$) be automaton transformations of the tree $X^*_{(i)}$ corresponding to the states of $\mathcal{D}$ in its $i$-th transition. Obviously, the automaton $\mathcal{D}$ is invertible. By the definition of the transition and output functions of $\mathcal{D}$ we obtain the following recursions: $a_i(xw)=\alpha_i(x)a_{i+1}(w)$, $(b_s)_i(xw)=\beta_{s,i}(x)(b_s)_{i+1}(w)$ for every $x\in X_i$ and $w\in X^*_{(i+1)}$. The group $G(\mathcal{D}_i)$ generated by the $i$-th shift of the automaton $\mathcal{D}$ is generated by the transformations $a_i$ and $c_{s,i}=(b_s)_ia_i^{-1}$ for $1\leq s\leq n$. Since $\sigma_{s,i}=\beta_{s,i}\alpha_i^{-1}=\alpha_i^{-1}\beta_{s,i}$, we have by the above recursions: $c_{s,i}(xw)=\sigma_{s,i}(x)c_{s,i+1}(w)$ for $x\in X_i$ and $w\in X^*_{(i+1)}$. We consider in the infinite cartesian product $S(X_0)\times S(X_1)\times\ldots$ the subgroup $\langle \lambda_i, \gamma_{1,i}, \ldots, \gamma_{n,i}\rangle$, where $$ \lambda_i=(\alpha_i, \alpha_{i+1}, \ldots),\;\;\;\gamma_{s,i}=(\sigma_{s,i}, \sigma_{s, i+1},\ldots). $$ By the above recursions for $a_i$ and $c_{s, i}$ we see that the mapping $a_i\mapsto \lambda_i$, $c_{s, i}\mapsto \gamma_{s,i}$, $1\leq s\leq n$ induces an isomorphism $G(\mathcal{D}_i)\simeq \langle \lambda_i, \gamma_{1,i}, \ldots, \gamma_{n,i}\rangle$. For each $i\geq 0$ the permutations $\alpha_i, \sigma_{1,i}, \ldots, \sigma_{n, i}$ are pairwise disjoint. Consequently, the group $\langle \lambda_i, \gamma_{1,i}, \ldots, \gamma_{n,i}\rangle$ is abelian and the product $\lambda_i^{m_0}\gamma_{1,i}^{m_1}\ldots\gamma_{n,i}^{m_n}$ is trivial if and only if $o(\lambda_i)\mid m_0$ and $o(\gamma_{s,i})\mid m_s$ for $1\leq s\leq n$, where $o(g)$ denotes the order of an element $g$.
Since $\alpha_i=\pi_{1,i}\cdot\ldots\cdot\pi_{n, i}$, we see by the assumptions on the permutations $\pi_{s,i}$ and $\sigma_{s,i}$ that $o(\lambda_i)=o(\gamma_{1,i})=\ldots=o(\gamma_{n_1,i})=\infty$ and $o(\gamma_{s,i})=r_{s-n_1}$ for $n_1<s\leq n$. This implies that the automaton $\mathcal{D}$ is self-similar and it generates a group isomorphic with $K\times \mathbb{Z}$. \end{ex} \section{The proof of Theorem~\ref{t2}} Let $\mathcal{A}$ be the automaton defined in the introduction and for every $i\geq 0$ let $a_i$, $(b_s)_i$ ($1\leq s\leq n$) be automaton transformations of the tree $X^*_{(i)}$ corresponding to the states of $\mathcal{A}$ in its $i$-th transition. Obviously, the automaton $\mathcal{A}$ is invertible. Directly by the formulae defining the transition and output functions of $\mathcal{A}$ we obtain the following recursions: $$ a_i(xw)=\alpha_i(x)a_{i+1}(w),\;\;\;(b_s)_i(xw)=\left\{\begin{array}{ll}\beta_{s,i}(x)a_{i+1}(w),& x=x_{s,i},\\ \beta_{s, i}(x)(b_s)_{i+1}(w),& x\neq x_{s, i}\end{array}\right. $$ for every $x\in X_i$ and $w\in X^*_{(i+1)}$. Hence $a_i^{-1}(xw)=\alpha_i^{-1}(x)a^{-1}_{i+1}(w)$ and the transformations $c_{s,i}=(b_s)_ia_i^{-1}$ satisfy: \begin{eqnarray*} c_{s, i}(xw)&=&(b_s)_ia_i^{-1}(xw)=a_i^{-1}((b_s)_i(xw))=\\ &=&\left\{ \begin{array}{ll} a_i^{-1}(\beta_{s, i}(x)a_{i+1}(w)),&x=x_{s, i},\\ a_i^{-1}(\beta_{s, i}(x)(b_s)_{i+1}(w)), &x\neq x_{s, i}, \end{array} \right.=\\ &=&\left\{ \begin{array}{ll} \alpha_i^{-1}(\beta_{s, i}(x))a^{-1}_{i+1}(a_{i+1}(w)),&x=x_{s, i},\\ \alpha_i^{-1}(\beta_{s, i}(x))a^{-1}_{i+1}((b_s)_{i+1}(w)), &x\neq x_{s, i}, \end{array} \right.=\\ &=&\left\{ \begin{array}{ll} \sigma_{s, i}(x)w,&x=x_{s, i},\\ \sigma_{s, i}(x)c_{s, i+1}(w), &x\neq x_{s, i}. \end{array} \right. \end{eqnarray*} But, if $x=x_{s, i}$, then $\sigma_{s, i}(x)=x$ as the cycles $\pi_{s,i}$ and $\sigma_{s,i}$ are disjoint.
In consequence, we have \begin{equation}\label{5} c_{s,i}(xw)=\left\{\begin{array}{ll}xw,& x=x_{s,i},\\ \sigma_{s, i}(x)c_{s, i+1}(w),& x\neq x_{s, i}.\end{array}\right. \end{equation} \subsection{The groups $K_i$} We consider the groups $$ K_i=\langle c_{1, i}, \ldots, c_{n, i}\rangle,\;\;\;i\geq 0. $$ \begin{prop}\label{prop1} For every $i\geq 0$ the mapping $c_{s, i}\mapsto \kappa_s$ ($1\leq s\leq n$) induces an isomorphism $K_i\simeq K$. \end{prop} \begin{proof} At first we show that for $s\neq s'$ the transformations $c_{s, i}$ and $c_{s', i}$ commute. By (\ref{5}) we obtain: \begin{eqnarray} c_{s, i}c_{s',i}(xw)&=&c_{s',i}(c_{s,i}(xw))=\nonumber\\ &=&\left\{\begin{array}{ll} c_{s',i}(xw),&x=x_{s,i},\\ c_{s',i}(\sigma_{s,i}(x)c_{s,i+1}(w)), &x\neq x_{s, i}.\label{10} \end{array}\right. \end{eqnarray} If $x=x_{s, i}$, then $x\neq x_{s',i}$ and $\sigma_{s',i}(x)=x$. Thus by (\ref{5}) and (\ref{10}) we have in this case $$ c_{s, i} c_{s',i}(xw)=c_{s', i}(xw)=\sigma_{s',i}(x)c_{s', i+1}(w)=xc_{s',i+1}(w). $$ Similarly, if $x=x_{s',i}$, then $\sigma_{s, i}(x)=x$ and by (\ref{5}) and (\ref{10}) we obtain in this case $$ c_{s, i} c_{s',i}(xw)=c_{s',i}(\sigma_{s,i}(x)c_{s,i+1}(w))=c_{s',i}(xc_{s, i+1}(w))=xc_{s, i+1}(w). $$ If $x\neq x_{s, i}$ and $x\neq x_{s',i}$, then $\sigma_{s, i}(x)\neq x_{s',i}$ and consequently $$ c_{s, i} c_{s',i}(xw)=c_{s',i}(\sigma_{s,i}(x)c_{s,i+1}(w))=\sigma_{s, i}\sigma_{s',i}(x)c_{s, i+1}c_{s',i+1}(w). $$ We conclude \begin{equation}\label{14} c_{s, i} c_{s',i}(xw)=\left\{\begin{array}{ll} xc_{s',i+1}(w),&x=x_{s,i},\\ xc_{s,i+1}(w), &x=x_{s',i},\\ \sigma_{s,i}\sigma_{s',i}(x)c_{s,i+1}c_{s',i+1}(w), & \mbox{\rm otherwise}. \end{array}\right. \end{equation} By analogy we have $$ c_{s', i} c_{s,i}(xw)=\left\{\begin{array}{ll} xc_{s',i+1}(w),&x=x_{s,i},\\ xc_{s,i+1}(w), &x=x_{s',i},\\ \sigma_{s',i} \sigma_{s,i}(x)c_{s',i+1} c_{s,i+1}(w), &\mbox{\rm otherwise}. \end{array}\right. 
$$ Thus we may write \begin{equation}\label{eq111} c_{s, i} c_{s',i}(xw)=\left\{ \begin{array}{ll} c_{s',i} c_{s, i}(xw),& x\in\{x_{s, i}, x_{s',i}\},\\ \sigma_{s, i}\sigma_{s',i}(x)c_{s,i+1} c_{s',i+1}(w),&\mbox{\rm otherwise}. \end{array} \right. \end{equation} Since the cycles $\sigma_{s,i}$ and $\sigma_{s',i}$ commute, we see by~(\ref{eq111}) that the transformations $c_{s, i}$ and $c_{s', i}$ commute as well. If $x\neq x_{s,i}$, then $\sigma_{s,i}^{-1}(x)\neq x_{s,i}$ and by the recursion (\ref{5}) we have: $$ c^{-1}_{s,i}(xw)=\left\{\begin{array}{ll}xw,& x=x_{s,i},\\ \sigma^{-1}_{s, i}(x)c^{-1}_{s, i+1}(w),& x\neq x_{s, i}.\end{array}\right. $$ By an easy inductive argument we obtain for any integer $k$: \begin{equation}\label{9} c^k_{s,i}(xw)=\left\{\begin{array}{ll}xw,& x=x_{s,i},\\ \sigma_{s, i}^k(x)c^k_{s, i+1}(w),& x\neq x_{s, i}.\end{array}\right. \end{equation} By our assumption, for each $1\leq s\leq n_1$ the cycles $\sigma_{s, i}$ ($i\geq 0$) do not have a uniformly bounded length. Hence, by the recursion (\ref{9}) we see that the transformation $c_{s, i}$ is of infinite order for every $1\leq s\leq n_1$. Similarly, if $n_1<s\leq n$, then the length of $\sigma_{s, i}$ is equal to $r_{s-n_1}$ and consequently the order of $c_{s, i}$ is equal to $r_{s-n_1}$. We define the subset $\mathcal{V}\subseteq \mathbb{Z}^n$ as follows: $$ \mathcal{V}=\{(m_1, \ldots, m_n)\colon 0\leq m_s<r_{s-n_1}\;\;{\rm for}\;\;n_1<s\leq n\}. $$ By the above, we obtain the equality $$ K_i=\{C_{M, i}\colon M\in \mathcal{V}\}, $$ where for every $M=(m_1,\ldots, m_n)\in\mathcal{V}$ we define $$ C_{M,i}=c_{1, i}^{m_1}\cdot\ldots \cdot c_{n, i}^{m_n}. $$ We show that this presentation is unique. To this end, to each $C_{M,i}$ we associate the permutation $\Sigma_{M, i}\in Sym(X_i)$ defined as follows: $$ \Sigma_{M, i}=\sigma_{1, i}^{m_1}\cdot\ldots\cdot\sigma_{n,i}^{m_n}.
$$ In the following lemma, which is a generalization of the recursion (\ref{9}), for every $M\in\mathcal{V}$ we denote by $M^{(s)}$ ($1\leq s\leq n$) the element arising from $M$ by replacing the component $m_s$ with $0$. \begin{lem}\label{lem1} We have \begin{equation}\label{6} C_{M, i}(xw)=\left\{ \begin{array}{ll} xC_{M^{(s)}, i+1}(w), &x=x_{s,i},\\ \Sigma_{M, i}(x)C_{M, i+1}(w), &\mbox{\rm otherwise} \end{array} \right. \end{equation} for all $x\in X_i$, $w\in X^*_{(i+1)}$ and $M\in\mathcal{V}$. \end{lem} \begin{proof}(of Lemma~\ref{lem1}) Let $M=(m_1, \ldots, m_n)$ and for every $1\leq s\leq n$ let $M_s\in \mathcal{V}$ arise from $M$ by replacing every component $m_{s'}$ with $s'>s$ by $0$. Since $M_n=M$ and for every $1\leq s\leq n$ we have $$ C_{M_s, i}=c_{1, i}^{m_1}\cdot\ldots \cdot c_{s, i}^{m_s}, $$ it is sufficient to show that for every $1\leq s\leq n$ the following recursion holds: \begin{equation}\label{11} C_{M_s, i}(xw)=\left\{ \begin{array}{ll} xC_{M^{(s')}_s, i+1}(w), &x=x_{s',i},\;1\leq s'\leq s,\\ \Sigma_{M_s, i}(x)C_{M_s, i+1}(w), &\mbox{\rm otherwise}. \end{array} \right. \end{equation} For $s=1$ we have $C_{M_s,i}=C_{M_1,i}=c_{1, i}^{m_1}$ and (\ref{11}) coincides with the recursion~(\ref{9}). Let us fix $s>1$ and let us assume that (\ref{11}) is true for $s-1$. Then we have \begin{eqnarray*} C_{M_s, i}(xw)&=&c_{s, i}^{m_s}(C_{M_{s-1}, i}(xw))=\\ &=&\left\{\begin{array}{ll} c_{s, i}^{m_s}(xC_{M_{s-1}^{(s')},i+1}(w)), & x=x_{s', i},\;\;1\leq s'<s,\\ c_{s, i}^{m_s}(\Sigma_{M_{s-1},i}(x)C_{M_{s-1}, i+1}(w)), &\mbox{\rm otherwise}. \end{array}\right. \end{eqnarray*} But, if $x=x_{s',i}$ for some $1\leq s'<s$, then $\sigma_{s, i}^{m_s}(x)=x$. Thus by the recursion~(\ref{9}) we have in this case $$ c_{s, i}^{m_s}(xC_{M_{s-1}^{(s')},i+1}(w))=\sigma_{s, i}^{m_s}(x)c_{s, i+1}^{m_s}(C_{M_{s-1}^{(s')},i+1}(w))= xC_{M_s^{(s')}, i+1}(w). $$ If $x=x_{s, i}$, then $\Sigma_{M_{s-1},i}(x)=\sigma^{m_1}_{1,i}\cdot\ldots\cdot \sigma^{m_{s-1}}_{s-1,i}(x)=x$.
Thus by the recursion~(\ref{9}) we have in this case \begin{eqnarray*} C_{M_s, i}(xw)&=&c_{s, i}^{m_s}(\Sigma_{M_{s-1},i}(x)C_{M_{s-1}, i+1}(w))=\\ &=&c_{s, i}^{m_s}(xC_{M_{s-1},i+1}(w))=\\ &=&xC_{M_{s-1},i+1}(w)=\\ &=&xC_{M_s^{(s)},i+1}(w). \end{eqnarray*} If $x\notin\{x_{1, i},\ldots, x_{s, i}\}$, then $\Sigma_{M_{s-1}, i}(x)\neq \Sigma_{M_{s-1}, i}(x_{s, i})=x_{s,i}$. Thus by the recursion~(\ref{9}) we have in this case \begin{eqnarray*} C_{M_s, i}(xw)&=&c_{s, i}^{m_s}(\Sigma_{M_{s-1},i}(x)C_{M_{s-1}, i+1}(w))=\\ &=&\sigma_{s, i}^{m_s}(\Sigma_{M_{s-1},i}(x))c_{s, i+1}^{m_s}(C_{M_{s-1}, i+1}(w))=\\ &=&\Sigma_{M_{s},i}(x)C_{M_{s}, i+1}(w). \end{eqnarray*} To sum up, we conclude the recursion (\ref{11}), which finishes the proof of the lemma. \end{proof} Now, let us assume that the product $C_{M, i}=c_{1, i}^{m_1}\cdot\ldots \cdot c_{n, i}^{m_n}$ represents the identity mapping for some $M=(m_1,\ldots, m_n)\in~\mathcal{V}$. Let $n_1<s\leq n$. Since there is $x_0\in X_i$ which belongs to the cycle $\sigma_{s, i}$ and $x_0\notin\{x_{1, i},\ldots, x_{n, i}\}$, we have by Lemma~\ref{lem1}: $x_0=C_{M,i}(x_0)=\Sigma_{M, i}(x_0)=\sigma_{s, i}^{m_s}(x_0)$. Thus the divisibility $r_{s-n_1}\mid m_s$ holds and consequently $m_s=0$. If $1\leq s\leq n_1$, then there is $i_0>i$ such that the length of the cycle $\sigma_{s, i_0}$ is greater than $|m_s|$. Let $w=x_0x_1\ldots x_{i_0-i}\in X^*_{(i)}$ be a word such that the last letter $x_{i_0-i}$ belongs to the cycle $\sigma_{s, i_0}$ and $x_j\notin\{x_{1, j+i},\ldots, x_{n, j+i}\}$ for every $0\leq j\leq i_0-i$. By Lemma~\ref{lem1} we obtain $$ C_{M, i}(w)=\Sigma_{M, i}(x_0)\Sigma_{M, i+1}(x_1)\ldots \Sigma_{M, i_0}(x_{i_0-i}). $$ Since $C_{M, i}(w)=w$, we have $x_{i_0-i}=\Sigma_{M, i_0}(x_{i_0-i})=\sigma_{s, i_0}^{m_s}(x_{i_0-i})$. Consequently $m_s=0$, which finishes the proof of Proposition~\ref{prop1}.
\end{proof} \subsection{The groups $H_i$} We consider the transformations $$ d_{k, s, i}=a_i^{-k}\cdot c_{s, i}\cdot a_i^k,\;\;\;i\geq 0,\;\;\;k\in\mathbb{Z},\;\;\;1\leq s\leq n. $$ For every $i\geq 0$ we define the group $$ H_i=\langle d_{k, s, i}\colon k\in\mathbb{Z},\; 1\leq s\leq n\rangle. $$ For every $i\geq 0$ and $k\in\mathbb{Z}$ we also define the subgroup $K^{(k)}_i\leq H_i$, where $$ K^{(k)}_i=a_i^{-k}K_ia_i^k=\{a_i^{-k}C_{M,i}a_i^k\colon M\in\mathcal{V}\}. $$ By Proposition~\ref{prop1} we have the isomorphism $K^{(k)}_i\simeq K_i\simeq K$. \begin{prop}\label{prop2} The group $H_i$ is a direct sum of its subgroups $K_i^{(k)}$ for $k\in\mathbb{Z}$ and the mapping $$ d_{k,s,i}\mapsto (\ldots,{\bf 0},{\bf 0},\kappa_s,{\bf 0},{\bf 0},\ldots),\;\;\;k\in\mathbb{Z},\;\;\;1\leq s\leq n, $$ where the generator $\kappa_s\in K$ on the right hand side is in the $k$-th position, induces the isomorphism $H_i\simeq \bigoplus_\mathbb{Z}K$. \end{prop} \begin{proof} By the recursion for $a_i$ and by~(\ref{5}) we easily obtain: $$ d_{k, s, i}(xw)=\left\{\begin{array}{ll} xw, &x=\alpha_i^k(x_{s, i}),\\ \sigma_{s, i}(x)d_{k, s, i+1}(w), &x\neq \alpha_i^k(x_{s, i}) \end{array} \right. $$ for all $x\in X_i$ and $w\in X^*_{(i+1)}$. Further, by the recursion for $a_i$ and by~(\ref{14}) we obtain for all $k, k'\in\mathbb{Z}$, $1\leq s, s'\leq n$, $x\in X_i$ and $w\in X^*_{(i+1)}$ the following recursion: $$ d_{k, s, i}d_{k', s',i}(xw)=\left\{\begin{array}{ll} xd_{k', s',i+1}(w),&x=\alpha_i^k(x_{s,i}),\\ xd_{k, s , i+1}(w), &x=\alpha_i^{k'}(x_{s',i}),\\ \sigma_{s,i}\sigma_{s',i}(x)d_{k, s,i+1}d_{k', s',i+1}(w), &\mbox{\rm otherwise}, \end{array}\right. $$ and hence \begin{eqnarray*} d_{k,s,i}d_{k', s',i}(xw)=\left\{\begin{array}{ll} d_{k', s', i}d_{k, s,i}(xw),&x\in\{\alpha_i^k(x_{s,i}),\alpha_i^{k'}(x_{s',i})\},\\ \sigma_{s,i}\sigma_{s',i}(x)d_{k, s,i+1}d_{k', s',i+1}(w), &\mbox{\rm otherwise}. \end{array}\right. \end{eqnarray*} Consequently, the group $H_i$ is abelian. 
For $k\in\mathbb{Z}$, $M\in\mathcal{V}$ and $i\geq 0$ we denote $$ d_{k, M, i}=a_i^{-k}C_{M,i}a_i^k. $$ In particular, we have $K_i^{(k)}=\{d_{k, M, i}\colon M\in\mathcal{V}\}$. \begin{lem}\label{lem3} If $M=(m_1, \ldots, m_n)\in\mathcal{V}$, then $$ d_{k, M, i}=d_{k, 1, i}^{m_1}\cdot\ldots\cdot d_{k, n, i}^{m_n}. $$ \end{lem} \begin{proof}[of Lemma~\ref{lem3}] We have \begin{eqnarray*} d_{k, M,i}&=&a_i^{-k}\cdot c_{1, i}^{m_1}\cdot\ldots\cdot c_{n, i}^{m_n}\cdot a_i^k=\\ &=&(a_i^{-k}\cdot c_{1, i}^{m_1}\cdot a_i^k)\cdot (a_i^{-k}\cdot c_{2, i}^{m_2}\cdot a_i^k)\cdot\ldots\cdot (a_i^{-k}\cdot c_{n, i}^{m_n}\cdot a_i^k)=\\ &=&(a_i^{-k}\cdot c_{1, i}\cdot a_i^k)^{m_1}\cdot\ldots\cdot (a_i^{-k}\cdot c_{n, i}\cdot a_i^k)^{m_n}=\\ &=&d_{k, 1, i}^{m_1}\cdot\ldots\cdot d_{k, n, i}^{m_n}, \end{eqnarray*} which finishes the proof of Lemma~\ref{lem3}. \end{proof} Since the group $H_i$ is abelian, we see by Lemma~\ref{lem3} that every element of $H_i$ can be presented as a product \begin{equation}\label{18} d_{k_1, M_1, i}\cdot\ldots\cdot d_{k_t, M_t, i},\;\;\;t\geq 1, \end{equation} where $$ k_1<k_2<\ldots<k_t,\;\;\;k_j\in\mathbb{Z},\;\;\; M_j\in \mathcal{V},\;\;\;1\leq j\leq t. $$ We define the weight of the product (\ref{18}) to be the number $$ ||M_1||+\ldots+||M_t||, $$ where $||M||=|m_1|+\ldots+|m_n|$ for any $M=(m_1, \ldots, m_n)\in\mathcal{V}$. Now, we have to show that every element of $H_i$ is represented uniquely by such a product. Suppose not. Then there are products (\ref{18}) with nonzero weights defining the identity transformation. By Proposition~\ref{prop1} we must have $t\geq 2$ in every such product. Let us choose $t\geq 2$, $i_0\geq 0$ and the triples $$ (k_1, M_1, i_0), \ldots, (k_t, M_t, i_0)\in \mathbb{Z}\times \mathcal{V}\times \{i_0\}, $$ such that the product $$ d_{k_1, M_1, i_0}\cdot\ldots\cdot d_{k_t, M_t, i_0} $$ defines the identity transformation and its nonzero weight is minimal.
We can assume that each $M_j\in\mathcal{V}$ ($1\leq j\leq t$) is a nonzero vector. For every $i\geq 0$ we define the product $$ D_i=d_{k_1, M_1, i}\cdot\ldots\cdot d_{k_t, M_t, i}. $$ There is a letter $z_i\in X_i$ ($i\geq 0$) which does not belong to any of the cycles $\pi_{s, i}$ ($1\leq s\leq n$). In particular $z_i\neq\alpha_i^{k_1}(x_{s, i})$ for every $1\leq s\leq n$. For every $1\leq s\leq n$ we also have \begin{equation}\label{eqqq} \Sigma_{M_1, i}\cdot \Sigma_{M_2, i}\cdot\ldots\cdot\Sigma_{M_j, i}(z_i)\neq \alpha_i^{k_{j+1}}(x_{s, i}),\;\;\;1\leq j\leq t-1. \end{equation} Let us denote: $$ z_i'=\Sigma_{M_1, i}\cdot\ldots\cdot\Sigma_{M_t, i}(z_i),\;\;\;i\geq0. $$ For all $i\geq 0$, $k\in\mathbb{Z}$, $M\in\mathcal{V}$, $x\in X_i$ and $w\in X^*_{(i+1)}$ we obtain by Lemma~\ref{lem1} the following recursion \begin{equation}\label{lem2} d_{k, M, i}(xw)=\left\{ \begin{array}{ll} xd_{k, M^{(s)}, i+1}(w), &x=\alpha^k_i(x_{s,i}),\\ \Sigma_{M, i}(x)d_{k, M, i+1}(w), &\mbox{\rm otherwise}. \end{array} \right. \end{equation} The above recursion and the inequalities~(\ref{eqqq}) imply \begin{equation}\label{16} D_i(z_iz_{i+1}\ldots z_{i+j}w)=z'_iz'_{i+1}\ldots z'_{i+j}D_{i+j+1}(w) \end{equation} for all $i, j\geq 0$ and $w\in X^*_{(i+j+1)}$. Since $D_{i_0}$ defines the identity mapping, we obtain by (\ref{16}) that for every $i\geq i_0$ the transformation $D_{i}$ also defines the identity mapping. Let $1\leq s\leq n$ be such that the $s$-th component of $M_1$ is not equal to 0. In particular $||M_1^{(s)}||<||M_1||$. Let $\nu\geq i_0$ be such that the length $L$ of the cycle $\pi_{s, \nu}$ is greater than the sum $|k_1|+\ldots+|k_t|$. Let us consider the letter $$ z=\alpha_{\nu}^{k_1}(x_{s, \nu})=\pi_{s, \nu}^{k_1}(x_{s, \nu})\in X_{\nu}. $$ By the equality~(\ref{lem2}) we have \begin{equation}\label{22} d_{k_1, M_1, \nu}(zw)=zd_{k_1, M_1^{(s)}, \nu+1}(w) \end{equation} for every $w\in X^*_{(\nu+1)}$.
For every $2\leq j\leq t$ we obviously have: \begin{equation}\label{20} \Sigma_{M_2, \nu}\cdot\ldots\cdot \Sigma_{M_j, \nu}(z)=z. \end{equation} For $2\leq j\leq t$ we also have \begin{equation}\label{21} z\notin\{\alpha_\nu^{k_j}(x_{1, \nu}),\ldots, \alpha_\nu^{k_j}(x_{n, \nu})\}. \end{equation} Indeed, suppose to the contrary that $$ z=\alpha_\nu^{k_j}(x_{s', \nu})=\pi_{s', \nu}^{k_j}(x_{s', \nu}) $$ for some $2\leq j\leq t$ and $1\leq s'\leq n$. Then we have $$ \pi_{s, \nu}^{k_1}(x_{s, \nu})=\pi_{s', \nu}^{k_j}(x_{s', \nu}). $$ Since the cycles $\pi_{1, \nu}, \ldots, \pi_{n, \nu}$ are pairwise disjoint, we obtain $s=s'$. Hence $\pi_{s,\nu}^{k_j-k_1}(x_{s, \nu})=x_{s, \nu}$ and consequently $L\mid k_j-k_1$, which contradicts the inequality $L>|k_1|+\ldots+|k_t|$. By (\ref{lem2}) and by (\ref{22})-(\ref{21}) we obtain $D_\nu(zw)=z D(w)$ for all $w\in X^*_{(\nu+1)}$, where $$ D=d_{k_1, M^{(s)}_1, \nu+1}\cdot d_{k_2, M_2, \nu+1}\cdot\ldots\cdot d_{k_t, M_t, \nu+1}. $$ Since $D_\nu$ defines the identity mapping, the product $D$ also defines the identity mapping. Since $||M_1^{(s)}||<||M_1||$ and each $M_j$ ($2\leq j\leq t$) is a nonzero vector, we see that $D$ has a nonzero weight which is smaller than the weight of $D_{i_0}$, contrary to our assumption. This finishes the proof of Proposition~\ref{prop2}. \end{proof} \begin{lem}\label{lem4} We have $$ a_i^{-k}\cdot d_{k_1, M_1, i}\cdot\ldots\cdot d_{k_t, M_t, i}\cdot a_i^k=d_{k+k_1, M_1, i}\cdot\ldots\cdot d_{k+k_t, M_t, i} $$ for all $i\geq 0$, $t\geq 1$, $k, k_j\in\mathbb{Z}$ and $M_j\in\mathcal{V}$ ($1\leq j\leq t$). \end{lem} \begin{proof} The left side of the above equality is equal to $$ (a_i^{-k}\cdot d_{k_1, M_1, i}\cdot a_i^k)\cdot (a_i^{-k}\cdot d_{k_2, M_2, i}\cdot a_i^k)\cdot\ldots\cdot (a_i^{-k}\cdot d_{k_t, M_t, i}\cdot a_i^k).
$$ Now, it suffices to see that the $j$-th factor ($1\leq j\leq t$) of the above product is equal to \begin{eqnarray*} a_i^{-k}\cdot d_{k_j, M_j, i}\cdot a_i^k&=&a_i^{-k}\cdot (a_i^{-k_j}\cdot C_{M_j, i}\cdot a_i^{k_j})\cdot a_i^k=\\ &=&a_i^{-(k+k_j)}\cdot C_{M_j, i}\cdot a_i^{k+k_j}=d_{k+k_j, M_j, i}. \end{eqnarray*} This finishes the proof of Lemma~\ref{lem4}. \end{proof} Let us fix $1\leq s\leq n$. For all $i, j\geq 0$ let us consider the words $w_{i, j}\in X^*_{(i)}$, where $$ w_{i, j}=x_{s, i}x_{s, i+1}\ldots x_{s, i+j}. $$ \begin{lem}\label{lem5} For every $i\geq 0$ the following hold: \begin{itemize} \item[(i)] the group $H_i$ is contained in the stabilizer of any $w_{i, j}$, \item[(ii)] for every nonzero integer $k$ there is $j\geq0$ such that $a_i^k(w_{i,j})\neq w_{i, j}$. \end{itemize} \end{lem} \begin{proof} For $j\geq 0$ and $M\in\mathcal{V}$ we have $\Sigma_{M, i+j}(x_{s, i+j})=x_{s, i+j}$. Thus by the equality~(\ref{lem2}) we see that for every $j\geq 0$ the transformations $d_{k, M, i+j}$ ($k\in\mathbb{Z}$, $M\in\mathcal{V}$) stabilize the letter $x_{s, i+j}$. Consequently, the elements $d_{k, M, i}$ ($k\in\mathbb{Z}$, $M\in\mathcal{V}$) stabilize the words $w_{i, j}$ ($j\geq 0$). Since these elements generate the group $H_i$, we obtain~(i). To show (ii) we see by the recursion for $a_i$ that for every $j\geq0$ we have: $$ a_i^k(w_{i,j})=\alpha_i^k(x_{s,i})\ldots\alpha_{i+j}^k(x_{s, i+j})=\pi_{s,i}^k(x_{s,i})\ldots\pi_{s, {i+j}}^k(x_{s, i+j}). $$ Now, the claim follows from the assumption that the lengths of the cycles $\pi_{s, i+j}$ ($j\geq 0$) are not uniformly bounded. \end{proof} \subsection{The groups $G_i$} Let us define the groups $$ G_i=\langle a_i, c_{1, i},\ldots, c_{n, i}\rangle=\langle K_i, a_i\rangle,\;\;\;i\geq 0. $$ Directly by the definition of the group $H_i$ we obtain $K_i\leq H_i\triangleleft G_i$. By Lemma~\ref{lem5} the group $H_i$ intersects trivially with the cyclic group $\langle a_i\rangle$.
Thus $G_i$ is a semidirect product of the subgroups $H_i$ and $\langle a_i\rangle$. By Proposition~\ref{prop2} the subgroup $H_i$ is isomorphic with the direct sum $\bigoplus_\mathbb{Z}K$ via $$ d_{k,s,i}\mapsto (\ldots,{\bf 0},{\bf 0},\kappa_s,{\bf 0},{\bf 0},\ldots),\;\;\;k\in\mathbb{Z},\;\;\;1\leq s\leq n, $$ where $\kappa_s\in K$ on the right hand side is in the $k$-th position. By Lemma~\ref{lem5} the cyclic group $\langle a_i\rangle$ is isomorphic, via $a_i\mapsto u$, with $\mathbb{Z}$. By Lemma~\ref{lem4} this cyclic group acts on $H_i$ by conjugation via the shift. Consequently, we obtain the isomorphism $G_i\simeq \bigoplus_\mathbb{Z}K\rtimes \mathbb{Z}= K\wr \mathbb{Z}$, which is induced by the mapping $a_i\mapsto u$, $c_{s,i}\mapsto \eta_s$ ($1\leq s\leq n$). Further, for the group $G(\mathcal{A}_{i})$ generated by the $i$-th shift ($i\geq0$) of the automaton $\mathcal{A}$ we have $$ G(\mathcal{A}_{i})=\langle a_i, (b_1)_i,\ldots, (b_n)_i\rangle. $$ Since $c_{s,i}=(b_s)_ia_i^{-1}$, we obtain $G(\mathcal{A}_{i})=G_i$ and the mapping $$ a_i\mapsto u,\;\;\;(b_s)_i\mapsto \eta_s\cdot u,\;\;\;1\leq s\leq n $$ induces an isomorphism $G(\mathcal{A}_{i})\simeq K\wr\mathbb{Z}$, which finishes the proof of Theorem~\ref{t2}. \section{Examples} \subsection{The lamplighter group $\mathbb{Z}\wr\mathbb{Z}$} The lamplighter group $\mathbb{Z}\wr\mathbb{Z}$ is an example of a 2-generated, torsion-free, metabelian group which is not finitely presented. It is generated by two elements $u$, $\eta$, where $u$ is a generator of $\mathbb{Z}$ in the semidirect product $\bigoplus_\mathbb{Z}\mathbb{Z}\rtimes \mathbb{Z}$ and $$ \eta=(\ldots,0,0,0,u,0,0,0,\ldots), $$ where $u$ on the right hand side is in the zero position. This group has the following presentation: $\langle u, \eta\colon [\eta^{u^k},\eta^{u^{k'}}]=1,\;\;k,k'\in\mathbb{Z}\rangle$, where $[x,y]=x^{-1}y^{-1}xy$.
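As a quick computational sanity check (an illustrative sketch only; the model and the chosen exponents are ours, not from the text), one can realize $\mathbb{Z}\wr\mathbb{Z}$ concretely as pairs $(f, m)$, where $f\colon\mathbb{Z}\to\mathbb{Z}$ has finite support and $m\in\mathbb{Z}$, and verify the defining relations $[\eta^{u^k},\eta^{u^{k'}}]=1$ while $u$ and $\eta$ themselves do not commute:

```python
# Sketch of the lamplighter group Z wr Z as pairs (f, m): f is a finitely
# supported function Z -> Z (stored as a dict), m is the Z-coordinate.
# The product shifts the second factor's lamp configuration by m.

def mul(a, b):
    f, m = a
    g, n = b
    h = dict(f)
    for k, v in g.items():
        h[k + m] = h.get(k + m, 0) + v
    return ({k: v for k, v in h.items() if v != 0}, m + n)

def inv(a):
    f, m = a
    return ({k - m: -v for k, v in f.items()}, -m)

def power(a, n):
    result = ({}, 0)
    x = a if n >= 0 else inv(a)
    for _ in range(abs(n)):
        result = mul(result, x)
    return result

def conj(x, g):                      # x^g = g^{-1} x g
    return mul(mul(inv(g), x), g)

def comm(x, y):                      # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

u = ({}, 1)                          # the shift generator
eta = ({0: 1}, 0)                    # a single lamp at position 0

# Conjugates of eta by powers of u commute (the lamplighter relations) ...
assert comm(conj(eta, power(u, 3)), conj(eta, power(u, -5))) == ({}, 0)
# ... but u and eta do not commute, so the group is nonabelian.
assert comm(u, eta) != ({}, 0)
```

The same dictionary-based model works for $\mathbb{Z}^n\wr\mathbb{Z}$ or $C_r\wr\mathbb{Z}$ after replacing the integer lamp values by tuples or residues.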
Based on the construction of the automaton $\mathcal{A}$ we can present the lamplighter group $\mathbb{Z}\wr \mathbb{Z}$ as a group defined by a 2-state self-similar automaton over a changing alphabet as follows. Let $(r_i)_{i\geq 0}$ be an arbitrary unbounded sequence of integers $r_i>1$ and let $X=(X_i)_{i\geq 0}$ be the changing alphabet with $$ X_i=\{1,2,\ldots, 2r_i\}. $$ Let $A=(X, Q, \varphi, \psi)$ be an automaton with the 2-element set $Q=\{a, b\}$ of internal states and the following sequences $\varphi=(\varphi_i)_{i\geq 0}$, $\psi=(\psi_i)_{i\geq 0}$ of transition and output functions: $$ \varphi_i(q, x)=\left\{\begin{array}{ll} a, &q=b,\;\;x=1,\\ q, &\mbox{\rm otherwise}, \end{array}\right.\;\;\; \psi_i(q,x)=\left\{\begin{array}{ll} \alpha_i(x),&q=a,\\ \beta_{i}(x), &q=b, \end{array}\right. $$ where the permutations $\alpha_i$, $\beta_i$ of the set $X_i$ may be defined, for example, as follows: $$ \alpha_i=(1,3,\ldots, 2r_i-1),\;\;\;\beta_i=(1,3,\ldots, 2r_i-1)(2,4,\ldots,2r_i). $$ By the above, we see that the automaton $A$ is self-similar and the group $G(A)$ defined by $A$ is isomorphic with the lamplighter group $\mathbb{Z}\wr\mathbb{Z}$ via $a_0\mapsto u$, $c_0\mapsto \eta$, where $c_0=b_0a_0^{-1}$. \subsection{The lamplighter groups $\mathbb{Z}^n\wr \mathbb{Z}$, $n\geq 1$} Generalizing the above construction, we can describe for every positive integer $n$ the wreath product $\mathbb{Z}^n\wr \mathbb{Z}$ as a group defined by the following self-similar automaton $A=(X, Q, \varphi, \psi)$: \begin{itemize} \item $X=(X_i)_{i\geq 0}$, where $X_i=\{1,2,\ldots, 2nr_i\}$ and $(r_i)_{i\geq 0}$ is an arbitrary unbounded sequence of integers greater than 1, \item $Q=\{a, b_0, \ldots, b_{n-1}\}$, \item $ \varphi_i(q, x)=\left\{\begin{array}{ll} a, &q=b_s,\;\;x=2sr_i+1,\;\;0\leq s\leq n-1,\\ q, &\mbox{\rm otherwise}, \end{array}\right.
$ \item $ \psi_i(q,x)=\left\{\begin{array}{ll} \alpha_i(x),&q=a,\\ \beta_{s, i}(x), &q=b_s,\;\;0\leq s\leq n-1, \end{array}\right. $ \end{itemize} where the permutations $\alpha_i$, $\beta_{s, i}$ of the set $X_i$ are defined as follows \begin{eqnarray*} \alpha_i&=&\prod\limits_{s=0}^{n-1}(2sr_i+1, 2sr_i+3,\ldots, 2sr_i+2r_i-1),\\ \beta_{s, i}&=&\alpha_i\cdot (2sr_i+2, 2sr_i+4, \ldots, 2sr_i+2r_i). \end{eqnarray*} \subsection{The lamplighter groups $C_r\wr\mathbb{Z}$, $r>1$} For lamplighter groups of the form $C_r\wr\mathbb{Z}$ ($r>1$) the corresponding self-similar automaton representation $A=(X, Q, \varphi, \psi)$ may be described as follows: \begin{itemize} \item $X_i=\{1,2,\ldots, r_i, r_i+1, r_i+2,\ldots, r_i+r\}$, where $(r_i)_{i\geq 0}$ is an arbitrary unbounded sequence of integers greater than 1, \item $Q=\{a, b\}$, \item $ \varphi_i(q, x)=\left\{\begin{array}{ll} a, &q=b,\;\;x=1,\\ q, &\mbox{\rm otherwise}, \end{array}\right. $ \item $\psi_i(q,x)=\left\{\begin{array}{ll} \alpha_i(x),&q=a,\\ \beta_{i}(x), &q=b, \end{array}\right. $ \end{itemize} where the permutations $\alpha_i,\beta_i\in Sym(X_i)$ may be defined, for example, as follows: $$ \alpha_i=(1,2,\ldots, r_i),\;\;\;\beta_i=(1,2,\ldots, r_i)(r_i+1, r_i+2, \ldots, r_i+r). $$
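As a finite sanity check for the first construction above (an illustrative sketch; the concrete sequence $r_i=i+2$ and the truncation depth are hypothetical choices of ours), the 2-state automaton for $\mathbb{Z}\wr\mathbb{Z}$ can be simulated on words of a fixed length, where the lamplighter relation $[c_0,\, a_0^{-1}c_0a_0]=1$ can be observed directly:

```python
# Sketch: simulate the 2-state automaton for Z wr Z from the first
# construction above, over X_i = {1, ..., 2 r_i}. The sequence r_i = i + 2
# and the truncation depth L are our own (hypothetical) choices.
import itertools

def r(i):
    return i + 2                     # any unbounded sequence r_i > 1 works

def alpha(i):                        # the odd cycle (1, 3, ..., 2 r_i - 1)
    odds = list(range(1, 2 * r(i), 2))
    return {x: odds[(j + 1) % len(odds)] for j, x in enumerate(odds)}

def beta(i):                         # alpha_i times the even cycle (2, ..., 2 r_i)
    evens = list(range(2, 2 * r(i) + 1, 2))
    p = dict(alpha(i))
    p.update({x: evens[(j + 1) % len(evens)] for j, x in enumerate(evens)})
    return p

def act(state, i, word):
    # a_i(xw) = alpha_i(x) a_{i+1}(w); the state b falls back to a on x = 1
    out = []
    for x in word:
        p = alpha(i) if state == 'a' else beta(i)
        out.append(p.get(x, x))
        if state == 'b' and x == 1:
            state = 'a'
        i += 1
    return tuple(out)

L = 4                                # truncation depth for the finite check
words = list(itertools.product(*[range(1, 2 * r(i) + 1) for i in range(L)]))
A = {w: act('a', 0, w) for w in words}
B = {w: act('b', 0, w) for w in words}

def compose(f, g):                   # right action convention: (fg)(w) = g(f(w))
    return {w: g[f[w]] for w in f}

def inverse(f):
    return {v: k for k, v in f.items()}

C = compose(B, inverse(A))           # c_0 = b_0 a_0^{-1}
Ck = compose(compose(inverse(A), C), A)   # its conjugate a_0^{-1} c_0 a_0

# Lamplighter relation at depth L: c_0 commutes with its conjugate ...
assert compose(C, Ck) == compose(Ck, C)
# ... while a_0 and c_0 do not commute.
assert compose(A, C) != compose(C, A)
```

Since the action on words of length $L$ factors through the group, every relation of $G(A)$ must hold in this finite permutation model; the check above confirms the lamplighter relation while witnessing that $a_0$ and $c_0$ do not commute.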
\chapter[The orthogonality of Al-Salam-Carlitz polynomials for complex parameters] {The orthogonality of Al-Salam-Carlitz polynomials for complex parameters }\label{ra_ch1} \author[H.~S.~Cohl, R.~S.~Costas-Santos and W.~Xu]{ Howard S. Cohl${}^\ast$, Roberto S. Costas-Santos$^{\dag}$ and Wenqing Xu$^{\ddag}$ } \address{${}^{\ast}$Applied and Computational Mathematics Division, \\ National Institute of Standards and Technology, \\ Gaithersburg, MD 20899-8910, USA\\ {\tt howard.cohl@nist.gov} \\[0.2cm] ${}^\dag$Departamento de F\'isica y Matem\'aticas, Facultad de Ciencias, Universidad de Alcal\'a, 28871 Alcal\'a de Henares, Madrid, Spain\\ {\tt rscosa@gmail.com}\\[0.2cm] ${}^\ddag$Department of Mathematics and Statistics, California Institute of Technology, CA 91125, USA\\ {\tt williamxuxu@yahoo.com} } \begin{abstract} In this contribution, we study the orthogonality conditions satisfied by Al-Salam-Carlitz polynomials $U^{(a)}_n(x;q)$ when the parameters $a$ and $q$ are not necessarily real nor `classical', i.e., when the linear functional $\bf u$ with respect to which such a polynomial sequence is orthogonal is quasi-definite but not positive definite. We establish orthogonality on a simple contour in the complex plane which depends on the parameters. In all cases we show that the orthogonality conditions characterize the Al-Salam-Carlitz polynomials $U_n^{(a)}(x;q)$ of degree $n$ up to a constant factor. We also obtain a generalization of the unique generating function for these polynomials. \end{abstract} \body \vspace{0.2cm} \noindent Keywords:~$q$-orthogonal polynomials; $q$-difference operator; $q$-integral representation; discrete measure.\\ MSC classification:~33C45; 42C05\\[-0.5cm] \section{Introduction} \noindent The Al-Salam-Carlitz polynomials $U_n^{(a)}(x;q)$ were introduced by W.~A.~Al-Salam and L.~Carlitz in \cite{AlSaCa} as follows: \begin{equation}\label{1:1} U_n^{(a)}(x;q):=(-a)^n q^{n\choose 2}\sum_{k=0}^n \frac{(q^{-n};q)_k (x^{-1};q)_k}{(q;q)_k} \frac{q^k x^k}{a^k}.
\end{equation}
In fact, these polynomials have a Rodrigues-type formula \cite[(3.24.10)]{Koekoeketal}
\[ U_n^{(a)}(x;q)=\frac{a^n q^{n \choose 2}(1-q)^n}{q^n w(x;a;q)} {\mathscr D}^n_{q^{-1}}\big( w(x;a;q)\big), \]
where
\[ w(x;a;q):=(qx;q)_\infty(qx/a;q)_\infty, \]
the $q$-Pochhammer symbol ($q$-shifted factorial) is defined as
\[ (z;q)_0:=1,\quad (z;q)_n:=\prod_{k=0}^{n-1} (1-zq^k), \]
\[ (z;q)_\infty:=\prod_{k=0}^{\infty} (1-zq^k), \quad |z|<1, \]
and the $q$-derivative operator is defined by
\[ {\mathscr D}_q f(z):=\left\{\begin{array}{cl} \dfrac{f(qz)-f(z)}{(q-1)z} & \text{if} \ q\neq 1 \ {\rm and} \ z\ne 0, \\[4mm] f'(z) & \text{if} \ q=1 \ {\rm or} \ z=0.\end{array}\right. \]
\begin{remark} Observe that by the definition of the $q$-derivative
\[ {\mathscr D}_{q^{-1}} f(z)=\big({\mathscr D}_{q} f\big)(q^{-1}z),\quad {\rm and} \quad {\mathscr D}^n_{q^{-1}} f(z):={\mathscr D}^{n-1}_{q^{-1}} \big({\mathscr D}_{q^{-1}} f(z)\big), \ n=2, 3, \dots \]
\end{remark}
The expression (\ref{1:1}) shows us that $U_n^{(a)}(x;q)$ is an analytic function for any complex values of the parameters $a$ and $q$, and thus can be considered for general $a, q\in \mathbb C\setminus \{0\}$. The classical Al-Salam-Carlitz polynomials correspond to parameters $a<0$ and $0<q<1$. For these parameters, the Al-Salam-Carlitz polynomials are orthogonal on $[a, 1]$ with respect to the weight function $w$. More specifically, for $a<0$ and $0<q<1$ \cite[(14.24.2)]{Koekoeketal},
\[ \int_a^1 U_n^{(a)}(x;q)U_m^{(a)}(x;q) (qx,qx/a;q)_\infty d_q x = d_n^2 \delta_{n,m}, \]
where
\[ d_n^2:= (-a)^n (1-q)(q;q)_n (q;q)_\infty (a;q)_\infty (q/a;q)_\infty q^{n\choose 2}, \]
and the $q$-Jackson integral \cite[(1.15.7)]{Koekoeketal} is defined as
\[ \int_a^b f(x)d_q x:=\int_0^b f(x)d_q x-\int_0^a f(x)d_q x, \]
where
\[ \int_0^a f(x)d_q x:=a(1-q)\sum_{n=0}^\infty f(aq^n)q^n.
\]
Taking into account the previous orthogonality relation, it is a direct result that if $a$ and $q$ are classical, i.e., $a$, $q\in \mathbb R$, with $a\ne 1$, $0<q<1$, then all the zeros of $U_n^{(a)}(x;q)$ are simple and belong to the interval $[a,1]$; but this is no longer valid for general complex $a$ and $q$. In this paper we show that for general complex numbers $a$, $q$, excluding some special cases, the Al-Salam-Carlitz polynomials $U_n^{(a)}(x;q)$ may still be characterized by orthogonality relations. The cases $a<0$ and $0<q<1$, or $0<aq<1$ and $q>1$, are classical, i.e., the linear functional $\bf u$ with respect to which such a polynomial sequence is orthogonal is positive definite, and in such a case there exists a weight function $\omega(x)$ so that
\[ \langle {\bf u}, p\rangle=\int_{a}^1 p(x)\, \omega(x)\, dx,\quad p\in \mathbb P[x]. \]
Note that this is the key for the study of many properties of Al-Salam-Carlitz polynomials I and II. Thus, our goal is to establish orthogonality conditions for most of the remaining cases for which the linear form $\bf u$ is quasi-definite, i.e., for all $n, m\in \mathbb N_0$
\[ \langle {\bf u}, p_n p_m \rangle=k_n \delta_{n,m},\quad k_n\ne 0. \]
We believe that these new orthogonality conditions can be useful in the study of the zeros of Al-Salam-Carlitz polynomials. For general $a, q\in \mathbb C\setminus\{0\}$, the zeros are not confined to a real interval, but they distribute themselves in the complex plane as we can see in Figure 1. Throughout this paper we denote $p:=q^{-1}$. \vspace{-0.2cm}
\begin{figure}[hbtp!]
\label{fig1} \begin{center} \begin{tikzpicture}[domain=-0.8:1.3,scale=4] \draw[->] (-0.8,0) -- (1.2,0) node[above] {$x(t)$}; \draw[->] (0,-0.4) -- (0,1.2) node[left] {$y(t)$}; \foreach \x/\xtext in {1/1, 0.5/0.5,-0.5/-0.5} \draw[shift={(\x,0)}] (0pt,0.5pt) -- (0pt,-0.5pt) node[below] {\small $\xtext$}; \foreach \y/\ytext in {-0.2/-0.2, 0.2/0.2,0.4/0.4,0.6/0.6,0.8/0.8,1/1} \draw[shift={(0,\y)}] (0.5pt,0pt) -- (-0.5pt,0pt) node[left] {\small $\ytext$}; \fill[gray!70] (1,1) circle (0.027) node[above,black]{$a$}; \draw plot[only marks, mark=*, mark options={fill=gray!70},mark size=0.37pt] coordinates{(1., 1.)(0.293, 1.093)(1., 0) (-0.234, 0.874) (0.693, 0.4) (-0.512, 0.512)(0.32, 0.554)(-0.56, 0.15)(0, 0.512) (-0.448, -0.12)(-0.2048, 0.355)(-0.262, -0.262) (-0.284, 0.164) (-0.077, -0.287)(-0.262, 0) (0.061, -0.23) (-0.1816, -0.105) (0.1342, -0.1342) (-0.084, -0.1453) (0.1467, -0.0393) (0, -0.134) (0.117, 0.03144)(0.0537, -0.093) (0.0687195, 0.0687195) (0.074391, -0.0429497)(0.0201225, 0.075098) (0.0687195, 0)(-0.016098, 0.0600784)(0.0476103, 0.0274878) (-0.0351844, 0.0351844) (0.0219907, 0.038088)(-0.0384484, 0.010304) (0.0000322943, 0.0351876) (-0.0307947, -0.00822027) (-0.0144521, 0.0250943) (-0.0175046, -0.017489)(0.0121248, -0.0212596) (-0.00238931, -0.0230145)(-0.0114884, -0.0157088)(-0.0145, 0.01)}; \end{tikzpicture} \end{center} \caption{Zeros of $U^{(1+i)}_{30}\left(x;\frac45 \exp(\pi i/6)\right)$} \end{figure} \vspace{0.1cm} \section{Orthogonality in the complex plane} \label{sectionorthog} \begin{theorem} \label{thm:3.1} Let $a, q\in \mathbb C$, $a\ne 0, 1$, $0<|q|<1$, the Al-Salam-Carlitz polynomials are the unique polynomials (up to a multiplicative constant) satisfying the property of orthogonality \begin{equation} \label{2:1} \int_a^1 U_n^{(a)}(x;q)U_m^{(a)}(x;q)w(x;a;q)d_q x=d_n^2 \delta_{n,m}. 
\end{equation}
\end{theorem}
\begin{remark}
If $0<|q|<1,$ the lattice $\{q^k:k\in \mathbb N_0\}\cup \{aq^k:k\in \mathbb N_0\}$ is a set of points which are located on a single contour that goes from 1 to 0, and then from 0 to $a,$ through the spirals
\[ S_1: z(t)=|q|^t \exp(it\arg q),\quad S_2: z(t)=|a||q|^t \exp(it\arg q+i\arg a), \]
where $0<|q|<1,$ $t\in [0,\infty)$, which we can see in Figure 2. Taking into account \eqref{2:1}, we need to avoid the $a=1$ case. For the $a=0$ case, we cannot apply Favard's result \cite{chi1}, because in such a case this polynomial sequence fulfills the recurrence relation \cite{Koekoeketal}
\[ U_{n+1}^{(0)}(x;q)=(x-q^n)U_n^{(0)}(x;q),\quad U_0^{(0)}(x;q)=1. \]
\end{remark}
\begin{figure}[!hbtp] \label{fig2} \begin{center} \includegraphics[scale=0.65]{Spirals.eps} \end{center} \caption{ The lattice $\{q^k:k\in \mathbb N_0\}\cup \{(1+i)q^k:k\in \mathbb N_0\}$ with $q=4/5\exp(\pi i/6)$.} \end{figure}
\bdm Let $0<|q|<1$, and $a\in \mathbb C$, $a\ne 0, 1$. We are going to express the $q$-Jackson integral (\ref{2:1}) as the difference of two infinite sums and apply the identity
\begin{eqnarray} \label{2:2} &&\sum_{k=0}^M f(q^k){\mathscr D}_{q^{-1}} g(q^k)q^k=\frac{f(q^M)g(q^M)-f(q^{-1}) g(q^{-1})}{q^{-1}-1}\nonumber\\ &&\hspace{4.5cm}-\sum_{k=0}^M g(q^{k-1}){\mathscr D}_{q^{-1}} f(q^k) q^k. \end{eqnarray}
Let $n\ge m$. Then, on one side, since $w(q^{-1};a;q)=0$, and using the identities \cite[(14.24.7), (14.24.9)]{Koekoeketal}, one has
\[ \begin{array}{l} \displaystyle \sum_{k=0}^\infty U_m^{(a)}(q^k;q)U_n^{(a)}(q^k;q) w(q^k;a;q)q^k\\[3mm] \hspace{0.5cm}\displaystyle = \frac{a(1-q)}{q^{2-n}}\lim_{M\to \infty} \sum_{k=0}^M\!{\mathscr D}_{q^{-1}}[w(q^k;a;q)U_{n-1}^{(a)}(q^k;q)]U_m^{(a)} (q^k;q) q^k\\[5mm] \hspace{0.5cm}\displaystyle = a q^{n-1} \lim_{M\to \infty} U_m^{(a)}(q^M;q) U_{n-1}^{(a)}(q^M;q)w(q^M;a;q) \\[3mm] \hspace{0.5cm}\hspace{0.6cm}\displaystyle +aq^{n-1} (q^m-1)\!
\lim_{M\to\infty} \sum_{k=0}^{M-1} w (q^k;a;q) U_{n-1}^{(a)}(q^k;q) U_{m-1}^{(a)}(q^k;q) q^k. \end{array}\]
Following a process analogous to the previous one, and since $w(aq^{-1};a;q)=0$, we have
\begin{eqnarray*} &&\displaystyle \sum_{k=0}^\infty U_m^{(a)}(aq^k;q) U_n^{(a)}(aq^k;q)w(aq^k;a;q)a q^k \\[3mm] &&\hspace{0.5cm}\displaystyle =a q^{n-1} \lim_{M\to \infty} U_m^{(a)}(aq^M;q)U_{n-1}^{(a)}(aq^M;q) w(aq^M;a;q)\\[3mm] &&\hspace{0.2cm}\hspace{0.6cm}\displaystyle + a q^{n-1} (q^m-1) \lim_{M\to\infty} \sum_{k=0}^{M-1} w(aq^k;a;q) U_{n-1}^{(a)} (aq^k;q) U_{m-1}^{(a)}(aq^k;q) aq^k. \end{eqnarray*}
Therefore, if $m<n$, since $m$ is finite, one can repeat the previous process $m+1$ times, obtaining
\[\begin{array}{l} \displaystyle \sum_{k=0}^\infty U_m^{(a)}(q^k;q)U_n^{(a)}(q^k;q) w(q^k;a;q)q^k\\[4mm] \hspace{0.5cm}\displaystyle =\lim_{M\to \infty} \sum_{\nu=1}^{m+1} (-a q^n)^\nu q^{-\nu(\nu+1)/2} (q^{-m+\nu-1};q)_{\nu}\\[5mm] \hspace{3.0cm}\times U_{m-\nu+1}^{(a)}(q^M;q) U_{n-\nu}^{(a)}(q^M;q)w(q^M;a;q), \end{array}\]
and
\[ \begin{array}{l} \displaystyle \sum_{k=0}^\infty U_m^{(a)}(aq^k;q) U_n^{(a)}(aq^k;q)w(aq^k;a;q)a q^k \\[3mm] \hspace{0.5cm}\displaystyle =\lim_{M\to \infty}\sum_{\nu=1}^{m+1} (-a q^n)^\nu q^{-\nu(\nu+1)/2} (q^{-m+\nu-1};q)_{\nu}\\[4mm] \hspace{3cm}\times U_{m-\nu+1}^{(a)}(aq^M;q) U_{n-\nu}^{(a)}(aq^M;q)w(aq^M;a;q). \end{array}\]
Hence, since the difference of the two limits goes to $0$ term by term because $|q|<1$, we obtain
\[ \displaystyle \int_a^1 U_n^{(a)}(x;q)U_m^{(a)}(x;q) (qx,qx/a;q)_\infty d_q x =0.
\]
For $n=m$, following the same idea, we have
\[\begin{array}{l} \displaystyle \int_a^1 U_n^{(a)}(x;q)U_n^{(a)}(x;q) w(x;a;q)d_q x\\[3mm] \hspace{0.5cm}=\displaystyle\frac{a (q^n-1)}{q^{1-n}}\sum_{k=0}^\infty \Biggl( w(q^k;a;q) \left(U_{n-1}^{(a)}(q^k;q)\right)^2 q^k\\[3mm] \hspace{4cm}-a w(aq^k;a;q) \left(U_{n-1}^{(a)} (aq^k;q)\right)^2 q^k\Biggr)\\[3mm] \hspace{0.5cm}=\displaystyle (-a)^n (q;q)_n q^{n \choose 2} \sum_{k=0}^\infty \left( w(q^k;a;q) q^k-a\ w(aq^k;a;q) q^k\right)\\ \hspace{0.5cm}=\displaystyle (-a)^n (q;q)_n (q;q)_\infty\, q^{n \choose 2} \sum_{k=0}^\infty\left((q^{k+1}/a;q)_\infty-a(aq^{k+1};q)_\infty\right)\frac {q^k}{(q;q)_k}, \end{array}\]
since it is known that in this case \cite[(14.24.2)]{Koekoeketal}
\begin{eqnarray*} &&\int_a^1 U_n^{(a)}(x;q)U_n^{(a)}(x;q) w(x;a;q)d_q x\\ &&\hspace{4cm}=\displaystyle (-a)^n (q;q)_n (q;q)_\infty (a;q)_\infty (q/a;q)_\infty q^{n\choose 2}. \end{eqnarray*}
Due to the normality of this polynomial sequence, i.e., $\deg U_n^{(a)}(x;q)=n$ for all $n\in \mathbb N_0$, the uniqueness is straightforward, hence the result holds. \edm
From this result, and taking into account that the squared norm for the Al-Salam-Carlitz polynomials is known, we obtain the following consequence, for which we could not find any reference.
\begin{corollary} Let $a, q\in \mathbb C\setminus\{0\}$, $|q|<1$. Then
\[ \sum_{k=0}^\infty\left((q^{k+1}/a;q)_\infty-a(aq^{k+1};q)_\infty \right)\frac {q^k}{(q;q)_k}=(a;q)_\infty (q/a;q)_\infty. \]
\end{corollary}
The polynomials in the following case, which are just the Al-Salam-Carlitz polynomials for $|q|>1$, are commonly called the Al-Salam-Carlitz II polynomials.
\begin{theorem} Let $a,q\in\mathbb C$, $a\ne 0,1$, $|q|>1$.
Then the Al-Salam-Carlitz polynomials are the unique polynomials (up to a multiplicative constant) satisfying the orthogonality property given by
{\small \begin{eqnarray} \label{2:22} &&\hspace{-0.65cm}\int_a^1 U_n^{(a)}(x;q^{-1})U_m^{(a)}(x;q^{-1}) (q^{-1}x;q^{-1})_\infty (q^{-1}x/a;q^{-1})_\infty d_{q^{-1}} x\nonumber\\[1mm] &&\hspace{-0.4cm} =(-a)^n (1-q^{-1})(q^{-1};q^{-1})_n (q^{-1};q^{-1})_\infty (a;q^{-1})_\infty (q^{-1}/a;q^{-1})_\infty \, q^{-{n\choose 2}}\delta_{m,n}. \end{eqnarray}}
\end{theorem}
\bdm Let us denote $q^{-1}$ by $p$; then $0<|p|<1$. Let $a\in\mathbb C$, $a\ne 0, 1$. Then, by using the identity (\ref{2:2}) with $q$ replaced by $p$, and taking into account that $w(aq;a;p)=w(q;a;p)=0$ and \cite[(14.24.9)]{Koekoeketal}, for $m<n$ one has
\[\begin{array}{l} \displaystyle \sum_{k=0}^\infty a w(ap^k;a;p) U_m^{(a)}(ap^k;p) U_n^{(a)}(ap^k;p)p^k \\[3mm] \hspace{0.5cm}= \displaystyle a p^{n-1} \lim_{M\to \infty} U_m^{(a)}(ap^M;p)U_{n-1}^{(a)}(ap^M;p) w(ap^M;a;p)\\[3mm] \hspace{0.5cm}\hspace{0.3cm}\displaystyle +a p^{n-1}(1-p^m) \lim_{M\to \infty} \sum_{k=0}^{M-1} a w(ap^k;a;p) U_{n-1}^{(a)}(ap^k;p) U_{m-1}^{(a)}(ap^k;p)p^k. \end{array}\]
Following the same idea as in the previous result, we have
\[\begin{array}{l} \displaystyle \sum_{k=0}^\infty w(p^k;a;p) U_m^{(a)}(p^k;p) U_n^{(a)}(p^k;p)p^k \\[3mm] \hspace{0.5cm}\displaystyle = a p^{n-1} \lim_{M\to \infty} U_m^{(a)}(p^M;p)U_{n-1}^{(a)}(p^M;p) w(p^M;a;p)\\[3mm] \hspace{0.5cm}\hspace{0.3cm}\displaystyle +a p^{n-1}(1-p^m) \lim_{M\to \infty} \sum_{k=0}^{M-1} w(p^k;a;p) U_{n-1}^{(a)}(p^k;p) U_{m-1}^{(a)}(p^k;p)p^k. \end{array}\]
Therefore, the property of orthogonality holds for $m<n$.
Next, if $n=m$, we have
\[\begin{array}{l} \displaystyle \int_a^1 U^{(a)}_n(x;p)U^{(a)}_n(x;p) w(x;a;p)\, d_p x \\ \hspace{0.5cm}=\displaystyle\frac{a(p^n-1)}{p^{1-n}}\!\!\sum_{k=0}^\infty\Biggl( a w (ap^k;a;p) \left(U_{n-1}^{(a)}(ap^k;p)\right)^2 p^{k}\\[4mm] \hspace{4cm}- w(p^{k};a;p)\left( U_{n-1}^{(a)}(p^{k};p)\right)^2 p^{k}\Biggr)\\[3mm] \hspace{0.5cm}=\displaystyle (-a)^n (p;p)_n p^{n\choose 2} \left(\sum_{k=0}^\infty a w(ap^{k};a;p) p^{k}- w(p^{k};a;p) p^{k}\right)\\[3mm] \hspace{0.5cm}=\displaystyle (-a)^n \ (q^{-1};q^{-1})_n (p;p)_\infty p^{n\choose 2} \sum_{k=0}^\infty \frac{p^k\left(a(p^{k+1}a;p)_\infty-(p^{k+1}/a;p)_\infty\right)} {(p;p)_k}\\[3mm] \hspace{0.5cm}=\displaystyle (-a)^n (q^{-1};q^{-1})_n (p;p)_\infty (a;p)_\infty(p/a;p)_\infty p^{n\choose 2}. \end{array}\]
Using the same argument as in Theorem \ref{thm:3.1}, the uniqueness holds, so the claim follows. \edm
\begin{remark} Observe that in the previous theorems if $a=q^m$, with $m\in \mathbb Z$, $a\ne 0$, after some natural cancellations, the set of points where we need to calculate the $q$-integral is easy to compute. For example, if $0<aq<1$ and $0<q<1$, one obtains the sum \cite[p. 537, (14.25.2)]{Koekoeketal}. \end{remark}
\begin{remark} The $a=1$ case is special because it is not considered in the literature. In fact, the linear form $\bf u$ associated with the Al-Salam-Carlitz polynomials is quasi-definite and fulfills the Pearson-type distributional equations
\[ {\mathscr D}_q[(x-1)^2 {\bf u}]=\frac {x-2}{1-q} {\bf u}\quad \text{and} \quad {\mathscr D}_{q^{-1}}[q^{-1}{\bf u}]=\frac {x-2}{1-q} {\bf u}. \]
Moreover, the Al-Salam-Carlitz polynomials fulfill the three-term recurrence relation \cite[(14.24.3)]{Koekoeketal}
\begin{equation}\label{2:5} xU^{(a)}_{n}(x;q)=U^{(a)}_{n+1}(x;q)+(a+1)q^nU^{(a)}_{n}(x;q)- aq^{n-1}(1-q^n)U^{(a)}_{n-1}(x;q), \end{equation}
where $n=0, 1, \dots,$ with initial conditions $U^{(a)}_{0}(x;q)=1$, $U^{(a)}_{1}(x;q)=x-a-1$.
Therefore, we believe it would be interesting to study this peculiar case: since the coefficient $q^{n-1}(1-q^n)\ne 0$ for all $n$, one can apply Favard's result. \end{remark}
\subsection{The $|q|=1$ case.}
\noindent In this section we only consider the case where $q$ is a root of unity. Let $N$ be a positive integer such that $q^N=1$. Then, due to the recurrence relation \eqref{2:5} and following the same idea as the authors in \cite[Section 4.2]{cola2}, we apply the following process:
\begin{enumerate}
\item The sequence $(U_n^{(a)}(x;q))_{n=0}^{N-1}$ is orthogonal with respect to the Gaussian quadrature
\[ \langle {\bf v},p\rangle:=\sum_{s=1}^{N} \gamma_1^{(a)}\dots \gamma_{N-1}^{(a)} \frac {p(x_s)}{\left(U^{(a)}_{N-1}(x_s)\right)^2}, \]
where $\{x_1,x_2,\dots,x_N\}$ are the zeros of $U_N^{(a)}(x;q)$ for such value of $q$.
\item Since $\langle {\bf v}, U_N^{(a)}(x;q)U_N^{(a)}(x;q)\rangle=0$, we need to modify such a linear form. Next, we can prove that the sequence $(U_n^{(a)}(x;q))_{n=0}^{2N-1}$ is orthogonal with respect to the bilinear form
\[ \langle p, r \rangle_{2}=\langle {\bf v},p\,r\rangle+ \langle {\bf v},{\mathscr D}^N_q p\, {\mathscr D}^N_q r\rangle, \]
since ${\mathscr D}_q U_n^{(a)}(x;q)=(q^n-1)/(q-1)U_{n-1}^{(a)}(x;q)$.
\item Since $\langle U_{2N}^{(a)}(x;q), U_{2N}^{(a)}(x;q) \rangle_{2}=0$ and taking into account what we did before, we consider the linear form
\[ \langle p, r \rangle_{3}=\langle {\bf v},p\,r\rangle+ \langle {\bf v},{\mathscr D}^N_q p\, {\mathscr D}^N_q r\rangle+ \langle {\bf v},{\mathscr D}^{2N}_q p\, {\mathscr D}^{2N}_q r\rangle. \]
\item Therefore one can obtain a sequence of bilinear forms such that the Al-Salam-Carlitz polynomials are orthogonal with respect to them.
\end{enumerate}
\section{A generalized generating function for Al-Salam-Carlitz polynomials}
\noindent For this section, we are going to assume $|q|>1$, or equivalently $0<|p|<1$.
Indeed, by starting with the generating functions for Al-Salam-Carlitz polynomials \cite[(14.25.11-12)]{Koekoeketal}, we derive generalizations using the connection relation for these polynomials.
\begin{theorem} Let $a, b, p\in\mathbb C\setminus \{0\}$, $|p|<1$, $a, b\ne 1$. Then
\begin{equation} \label{con1} U_n^{(a)}(x;p)=(-1)^n(p;p)_np^{-{n \choose 2}}\sum_{k=0}^{n} \frac{(-1)^ka^{n-k}(b/a;p)_{n-k}p^{\binom{k}{2}}}{(p;p)_{n-k}(p;p)_k} U_k^{(b)}(x;p). \end{equation}
\end{theorem}
\begin{proof} If we consider the generating function for Al-Salam-Carlitz polynomials \cite[(14.25.11)]{Koekoeketal}
\[ \frac{(xt;p)_{\infty}}{(t,a t;p)_{\infty}}= \sum_{n=0}^{\infty}\frac{(-1)^np^{n \choose 2}}{(p;p)_n}U_n^{(a)}(x;p)t^n, \]
and multiply both sides by ${(b t;p)_{\infty}}/{(b t;p)_{\infty}}$, we obtain
\begin{equation} \label{con2} \sum_{n=0}^{\infty}\frac{(-1)^np^{n \choose 2}}{(p;p)_n}U_n^{(a)} (x;p)t^n=\frac{(bt;p)_\infty}{(at;p)_\infty}\sum_{n=0}^{\infty} \frac{(-1)^np^{n \choose 2}}{(p;p)_n}U_n^{(b)}(x;p)t^n. \end{equation}
If we now apply the $q$-binomial theorem \cite[(1.11.1)]{Koekoeketal}
\[ \frac{(az;p)_\infty}{(z;p)_\infty}=\sum_{n=0}^\infty \frac{(a;p)_n} {(p;p)_n}z^n,\quad 0<|p|<1, \quad |z|<1, \]
to (\ref{con2}), and then collect powers of $t$, we obtain
\begin{eqnarray*} &&\sum_{k=0}^{\infty}t^k\sum_{m=0}^{k}\frac{(-1)^ma^{k-m}(b/a;p)_{k-m} p^{m \choose 2}}{(p;p)_{k-m}(p;p)_m}U_m^{(b)}(x;p)\\[-0mm] &&\hspace{3cm}=\sum_{n=0}^{\infty}\frac{(-1)^n p^{n \choose 2}}{(p;p)_n}U_n^{(a)}(x;p)t^n. \end{eqnarray*}
Taking into account this expression, the result follows. \end{proof}
\begin{theorem} Let $a, b, p\in \mathbb C\setminus\{0\}$, $|p|<1$, $a,b\ne 1$, $t\in\mathbb C$, $|at|<1$.
Then
\begin{equation} \label{ASCGenfun} (at;p)_{\infty}\,{}_1\phi_1\left(\begin{array}{c} x \\ at \end{array}; p,t \right)=\sum_{k=0}^{\infty}\frac{p^{k(k-1)}}{(p;p)_k}\,{}_1\phi_1\left( \begin{array}{c} b/a \\ 0 \end{array};p, atp^k\right)U_k^{(b)}(x;p)t^k, \end{equation}
where
\begin{eqnarray*} &&\hspace{-0.2cm}{}_r\phi_s\left(\begin{array}{c} a_1, a_2, \dots, a_r\\ b_1, b_2, \dots,b_s \end{array}; p,z\right)\\ &&\hspace{1cm}= \sum_{k=0}^\infty \frac{(a_1;p)_k(a_2;p)_k\cdots (a_r;p)_k} {(b_1;p)_k(b_2;p)_k\cdots (b_s;p)_k}\frac{z^k}{(p;p)_k}(-1)^{(1+s-r)k}p^{(1+s-r) {k\choose 2}}, \end{eqnarray*}
is the unilateral basic hypergeometric series. \end{theorem}
\begin{proof} We start with a generating function for Al-Salam-Carlitz polynomials \cite[(14.25.12)]{Koekoeketal}
\[ (a t;q)_{\infty}\,{}_1\phi_1\left(\begin{array}{c} x \\ at \end{array}; q,t\right)=\sum_{n=0}^\infty \frac{q^{n(n-1)}}{(q;q)_n} V^{(a)}_n(x;q)t^n \]
and (\ref{con1}) to obtain
\begin{eqnarray*} &&(at;p)_{\infty}\,{}_1\phi_1\left(\begin{array}{c} x \\ at \end{array}; p,t \right)\\ &&\hspace{1.8cm}=\sum_{n=0}^{\infty}t^n(-1)^n p^{n \choose 2}\sum_{k=0}^n\frac{(-1)^k a^{n-k}(b/a;p)_{n-k}p^{k \choose 2}}{(p;p)_{n-k}(p;p)_k}U_k^{(b)}(x;p). \end{eqnarray*}
If we reverse the order of summation, shift the summation variable $n$ by $k$, and use the basic properties of the $q$-Pochhammer symbol together with \cite[(1.10.1)]{Koekoeketal}, we obtain the desired result. Observe that we can reverse the order of summation since our sum is of the form
\[ \sum_{n=0}^\infty a_n \sum_{k=0}^n c_{n,k} U_k^{(a)}(x;p), \]
where
\[ a_n=t^n,\qquad c_{n,k}=\frac{(-1)^ka^{n-k}(b/a;p)_{n-k}p^{\binom{k}{2}}}{(p;p)_{n-k}(p;p)_k}. \]
In this case, one has
\[ |a_n|\le |t|^n,\quad |c_{n,k}|\le K(1+n)^{\sigma_1} |a|^n, \]
and $|U_n^{(a)}(x;p)|\le (1+n)^{\sigma_2}$, where $K$, $\sigma_1$, and $\sigma_2$ are positive constants independent of $n$.
Therefore, if $|at|<1$, then
\[ \left|\sum_{n=0}^\infty a_n \sum_{k=0}^n c_{n,k} U_k^{(a)}(x;p)\right|<\infty, \]
and this completes the proof. \end{proof}
As we saw in Section \ref{sectionorthog}, the orthogonality relation for Al-Salam-Carlitz polynomials for $|q|>1$, $|p|<1$, and $a\ne 0, 1$ is
\[ \int_{\Gamma} U_n^{(a)}(x;p)U_m^{(a)}(x;p) w(x;a;p) d_{p} x=d_n^2 \delta_{n,m}. \]
With this result in mind, the following theorem holds.
\begin{theorem} Let $a, b, p\in \mathbb C\setminus\{0\}$, $t\in \mathbb C$, $|at|<1$, $|p|<1$, $m\in \mathbb N_0$. Then
\[\begin{array}{rl} \displaystyle \int_a^1 {}_1\phi_1\left(\begin{array}{c} q^{-x} \\ at \end{array}; q,t\right) U_m^{(b)}(q^{-x};p)(q^{-1}x;q^{-1})_\infty(q^{-1}x/a;q^{-1})_\infty d_{q^{-1}}x\\[4mm] = \displaystyle \big(-bt\big)^mq^{3 {m\choose 2}}(b;p)_\infty (p/b;p)_{\infty}\, {}_1\phi_1\left(\begin{array}{c} b/a \\ 0 \end{array};q, atq^m\right). \end{array}\]
\end{theorem}
\begin{proof} In (\ref{ASCGenfun}) we replace $x\mapsto p^{x}$, multiply both sides by $U_m^{(b)}(x;p)w(x;a;p)$, and use the orthogonality relation (\ref{2:22}); the desired result follows. \end{proof}
Note that the application of connection relations to the rest of the known generating functions for Al-Salam-Carlitz polynomials \cite[(14.24.11), (14.25.11)]{Koekoeketal} leaves these generating functions invariant.
\section*{Acknowledgments}
\noindent The author R. S. Costas-Santos acknowledges financial support from the National Institute of Standards and Technology. The authors thank the anonymous referee for her/his valuable comments and suggestions, which contributed to improving the presentation of the manuscript.
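As a quick numerical sanity check (not part of the paper itself), the explicit sum (1.1) and the three-term recurrence (14.24.3) quoted above determine the same polynomials even for complex $a$ and $q$; a minimal Python sketch:

```python
def qpoch(z, q, n):
    """q-Pochhammer symbol (z; q)_n = prod_{k=0}^{n-1} (1 - z q^k)."""
    r = 1 + 0j
    for k in range(n):
        r *= 1 - z * q**k
    return r

def U_sum(n, x, a, q):
    """U_n^{(a)}(x;q) from the explicit sum (1.1)."""
    s = sum(qpoch(q**-n, q, k) * qpoch(1 / x, q, k) / qpoch(q, q, k)
            * (q * x / a)**k for k in range(n + 1))
    return (-a)**n * q**(n * (n - 1) // 2) * s

def U_rec(n, x, a, q):
    """U_n^{(a)}(x;q) from the three-term recurrence (14.24.3),
    with U_0 = 1 and U_1 = x - a - 1."""
    u_prev, u = 1 + 0j, x - a - 1
    if n == 0:
        return u_prev
    for m in range(1, n):
        # U_{m+1} = (x - (a+1) q^m) U_m + a q^{m-1} (1 - q^m) U_{m-1}
        u_prev, u = u, (x - (a + 1) * q**m) * u + a * q**(m - 1) * (1 - q**m) * u_prev
    return u
```

For example, with $a = 1+i$ and $q = 0.5 + 0.2i$ the two evaluations agree to machine precision for small $n$, consistent with the zero plots for complex parameters shown in Figure 1.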
Krasnopolyansky rural council (, ) — according to the legislation of Ukraine, an administrative-territorial unit in the Chernomorsky district of the Autonomous Republic of Crimea. Population according to the 2001 census: 2,006 people. As of 2014 it consisted of 3 villages:

Krasnaya Polyana
Vnukovo
Kuznetskoye

History

In 1973, the Krasnopolyansky rural council was formed in the Crimean Oblast of the Ukrainian SSR (USSR) by separating settlements from the Mezhvodnoye council, and by 1977 it already had its present composition. From 12 February 1991 the rural council was part of the restored Crimean ASSR, which was renamed the Autonomous Republic of Crimea on 26 February 1992. From 21 March 2014 it has been part of the Republic of Crimea within Russia. By the law "On establishing the borders of municipal formations and the status of municipal formations in the Republic of Crimea" of 4 June 2014, the territory of the administrative unit was declared a municipal formation with the status of a rural settlement.

Notes

Literature

External links

Rural councils of the Chernomorsky district
package androidx.media3.datasource;

import static androidx.media3.common.util.Assertions.checkNotNull;
import static androidx.media3.common.util.Util.castNonNull;

import android.net.Uri;
import androidx.annotation.Nullable;
import androidx.media3.common.C;
import androidx.media3.common.util.UnstableApi;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import javax.crypto.Cipher;

/** A {@link DataSource} that decrypts the data read from an upstream source. */
@UnstableApi
public final class AesCipherDataSource implements DataSource {

  private final DataSource upstream;
  private final byte[] secretKey;

  @Nullable private AesFlushingCipher cipher;

  public AesCipherDataSource(byte[] secretKey, DataSource upstream) {
    this.upstream = upstream;
    this.secretKey = secretKey;
  }

  @Override
  public void addTransferListener(TransferListener transferListener) {
    checkNotNull(transferListener);
    upstream.addTransferListener(transferListener);
  }

  @Override
  public long open(DataSpec dataSpec) throws IOException {
    long dataLength = upstream.open(dataSpec);
    cipher =
        new AesFlushingCipher(
            Cipher.DECRYPT_MODE,
            secretKey,
            dataSpec.key,
            dataSpec.uriPositionOffset + dataSpec.position);
    return dataLength;
  }

  @Override
  public int read(byte[] buffer, int offset, int length) throws IOException {
    if (length == 0) {
      return 0;
    }
    int read = upstream.read(buffer, offset, length);
    if (read == C.RESULT_END_OF_INPUT) {
      return C.RESULT_END_OF_INPUT;
    }
    castNonNull(cipher).updateInPlace(buffer, offset, read);
    return read;
  }

  @Override
  @Nullable
  public Uri getUri() {
    return upstream.getUri();
  }

  @Override
  public Map<String, List<String>> getResponseHeaders() {
    return upstream.getResponseHeaders();
  }

  @Override
  public void close() throws IOException {
    cipher = null;
    upstream.close();
  }
}
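The `AesCipherDataSource` above is a thin decorator: `open` seeds a position-dependent cipher from `dataSpec.uriPositionOffset + dataSpec.position`, `read` pulls bytes from the upstream source and decrypts them in place, and everything else delegates. The same shape can be sketched in Python, with a toy XOR keystream standing in for `AesFlushingCipher` (an illustration of the wrapping pattern only, not the real AES scheme):

```python
class BytesSource:
    """Minimal in-memory stand-in for the upstream DataSource."""

    def __init__(self, data: bytes):
        self.data = data
        self.off = 0

    def open(self, position: int) -> None:
        self.off = position

    def read(self, length: int) -> bytes:
        chunk = self.data[self.off:self.off + length]
        self.off += len(chunk)
        return chunk

    def close(self) -> None:
        pass


class XorCipherSource:
    """Wraps an upstream source and decrypts each read, keying the toy
    keystream off the absolute stream position, just as open() seeds the
    cipher with uriPositionOffset + position in the Java class above."""

    def __init__(self, key: bytes, upstream):
        self.key = key
        self.upstream = upstream
        self.pos = 0

    def open(self, position: int) -> None:
        self.upstream.open(position)
        self.pos = position  # seed the keystream at the absolute offset

    def read(self, length: int) -> bytes:
        data = self.upstream.read(length)
        out = bytes(b ^ self.key[(self.pos + i) % len(self.key)]
                    for i, b in enumerate(data))
        self.pos += len(data)
        return out

    def close(self) -> None:
        self.upstream.close()
```

Because the keystream is derived from the absolute position, a reader that opens mid-stream still decrypts correctly — the same property the Java class gets by passing the byte offset to its cipher.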
Faster Algorithms for the Minimum Red-Blue-Purple Spanning Graph Problem

Ahmad Biniaz, Prosenjit Bose, Ingo van Duijn, Anil Maheshwari, and Michiel Smid

Vol. 21, no. 4, pp. 527-546, 2017. Regular paper.

Abstract. Consider a set of $n$ points in the plane, each one of which is colored either red, blue, or purple. A red-blue-purple spanning graph (RBP spanning graph) is a graph whose vertices are the points and whose edges connect the points such that the subgraph induced by the red and purple points is connected, and the subgraph induced by the blue and purple points is connected. The minimum RBP spanning graph problem is to find an RBP spanning graph with minimum total edge length. First we consider this problem for the case when the points are located on a circle. We present an algorithm that solves this problem in $O(n^2)$ time, improving upon the previous algorithm by a factor of $\Theta(n)$. Also, for the general case we present an algorithm that runs in $O(n^5)$ time, improving upon the previous algorithm by a factor of $\Theta(n)$.

Submitted: August 2016. Reviewed: January 2017. Revised: January 2017. Accepted: February 2017. Final: February 2017. Published: April 2017. Communicated by Stephen G. Kobourov.
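A feasible (generally non-optimal) RBP spanning graph is easy to construct from the definition: take a minimum spanning tree on the red and purple points together with a minimum spanning tree on the blue and purple points; the minimum RBP spanning graph studied in the paper can only be shorter. A small Python sketch of this upper-bound construction (not the paper's algorithm):

```python
from math import dist

def mst_cost(points):
    """Total edge length of a Euclidean minimum spanning tree
    (Prim's algorithm on the complete graph)."""
    if len(points) <= 1:
        return 0.0
    in_tree = {0}
    total = 0.0
    while len(in_tree) < len(points):
        # cheapest edge leaving the tree
        d, v = min((dist(points[i], points[j]), j)
                   for i in in_tree
                   for j in range(len(points)) if j not in in_tree)
        in_tree.add(v)
        total += d
    return total

def feasible_rbp_cost(red, blue, purple):
    """Length of one feasible (not necessarily minimum) RBP spanning
    graph: an MST on red+purple plus an MST on blue+purple."""
    return mst_cost(red + purple) + mst_cost(blue + purple)
```

Both induced subgraphs in this construction are trees, hence connected, so the result satisfies the RBP condition; finding the cheapest such graph is what the quoted $O(n^2)$ and $O(n^5)$ algorithms do.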
# DataLogger: Part 2, Hardware design

by JSummers | 09 Apr 16:33

### Disclaimer:

This research note was begun in April, 2014 to allow data logging using commercially available probes sold by Vernier Instruments. The design did not turn out to be compatible with Vernier pH probes or ion selective electrodes. A newer design that does work with these probes is available here. At the time I wrote this note, I did not have a firm working understanding of how Vernier probes interacted with their recording devices and I assumed the probes were passive devices requiring amplification to get good precision. In fact, the probes in question have amplifiers built in and these need to be powered and do not require amplification.

### What I want to do

Our goal is to provide an open source data logger for use in education and environmental monitoring. While there are commercially available data loggers (such as those from Vernier and National Instruments), we believe that having an open source product will keep costs lower and will ultimately allow greater flexibility in what can be measured.

### Background:

In a previous research note (found here), I describe writing software for logging data to a computer using a Texas Instruments MSP430g LaunchPad development board. The LaunchPad code was written using the Arduino-like language, Energia, and the GUI was written using Processing. The early code is available from our GitHub page, here.

### My attempt and results

One of the big impediments to making a data logger is to have access to a large variety of sensors and probes. This report describes our effort to get around this by making something that will interface with the probes that are available from Vernier Instruments (web site here). Vernier sells a wide variety of probes that connect to their data loggers via British Telecommunications plugs. There are two types: the BTA (analog) and BTD (digital) plugs. Both plugs have recently become available from Sparkfun. We have designed a two channel data logger to accept and amplify signals from low voltage sensors, such as pH electrodes, ORP sensors, thermocouples, etc. A simplified schematic for one channel is presented below:

In this schematic, an offset voltage (Voff) is provided by the pin labeled PWM, and the amplified voltage difference between the two inputs is read at the pin labeled READ. The purpose of the offset voltage is to allow measurement of negative voltage differences. The read voltage is determined by Voff and the resistances of R1 and the potentiometer (Rpot):

Vread = Voff + R1/Rpot (input+ - input-)

Since the theoretical voltage difference for a pH electrode will be ~59 mV per pH unit, a 12 pH unit range will cover 700 mV. Since the dynamic range of our microcontroller is 3.3 volts, we will want to be able to amplify by a factor of ~5. That means that a 10 Kohm potentiometer and R1 of ~2 Kohms should work. The main image for this note is the board file created in Eagle for measuring amplified signals from two probes connected using BTA connectors. Since each channel requires three amps, we use one quad and one dual op amp. The design also incorporates a digital potentiometer, which we will control from the GUI. A bill of materials with sources, part numbers and estimated costs (USD) is provided below:

| Part | Supplier | Part number | Cost (USD) |
| --- | --- | --- | --- |
| Tiva LaunchPad | Texas Inst | EK-TM4C123GXL | $12.99 |
| BTA connectors (2) | Sparkfun | PRT-12753 | $3.90 |
| quad op amp | Digikey | MCP6004-I/SL | $0.53 |
| dual op amp | Digikey | LMV358IDR | $0.82 |

### Why I'm interested

The dogs tell me that the barking in my head will not stop until this project is finished, written up, and on line.

### Comments

Ah, this will be a great thing! Leveraging the widely-available Vernier probes with open hardware will be a fantastic advance. Would it be possible / easy / sufficiently inexpensive to add the appropriate connector for BNC probes as well?

(And good with the barking :) I've got so much barking I'm hoping to quell by finally writing up the research note backlog ...)
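The gain arithmetic in the note is easy to check numerically. The sketch below takes the realized gain as the 10 k / 2 k resistor ratio — an assumption on my part, since the formula in the note writes the ratio as R1/Rpot, while the chosen parts suggest Rpot/R1:

```python
mv_per_ph = 59e-3    # ~59 mV per pH unit (Nernstian slope, from the note)
ph_span = 12         # pH units the logger should cover
adc_range = 3.3      # microcontroller dynamic range, volts

signal_span = mv_per_ph * ph_span      # ~0.71 V full-scale electrode signal
gain_needed = adc_range / signal_span  # ~4.7, i.e. "a factor of ~5"

r_pot, r1 = 10e3, 2e3                  # suggested 10 Kohm pot and ~2 Kohm R1
gain_set = r_pot / r1                  # 5.0 with those parts
```

With these numbers the required amplification comes out just under 5, matching the note's choice of a 10 Kohm potentiometer with a ~2 Kohm R1.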
@class NSString;
@class NSLocale;
@class NSDate;
@class NSTimeZone;
@class NSDateComponents;
@class NSArray;

#import <CoreFoundation/CFCalendar.h>

enum {
    NSCalendarWrapComponents = (1UL << 0),
    NSCalendarMatchStrictly = (1ULL << 1),
    NSCalendarSearchBackwards = (1ULL << 2),
    NSCalendarMatchPreviousTimePreservingSmallerUnits = (1ULL << 8),
    NSCalendarMatchNextTimePreservingSmallerUnits = (1ULL << 9),
    NSCalendarMatchNextTime = (1ULL << 10),
    NSCalendarMatchFirst = (1ULL << 12),
    NSCalendarMatchLast = (1ULL << 13)
};

enum {
    NSWrapCalendarComponents = NSCalendarWrapComponents,
};

enum {
    NSCalendarUnitEra = kCFCalendarUnitEra,
    NSCalendarUnitYear = kCFCalendarUnitYear,
    NSCalendarUnitMonth = kCFCalendarUnitMonth,
    NSCalendarUnitDay = kCFCalendarUnitDay,
    NSCalendarUnitHour = kCFCalendarUnitHour,
    NSCalendarUnitMinute = kCFCalendarUnitMinute,
    NSCalendarUnitSecond = kCFCalendarUnitSecond,
    NSCalendarUnitWeekday = kCFCalendarUnitWeekday,
    NSCalendarUnitWeekdayOrdinal = kCFCalendarUnitWeekdayOrdinal,
    NSCalendarUnitQuarter = kCFCalendarUnitQuarter,
    NSCalendarUnitWeekOfMonth = kCFCalendarUnitWeekOfMonth,
    NSCalendarUnitWeekOfYear = kCFCalendarUnitWeekOfYear,
    NSCalendarUnitYearForWeekOfYear = kCFCalendarUnitYearForWeekOfYear,
    NSCalendarUnitNanosecond = (1 << 15),
    NSCalendarUnitCalendar = (1 << 20),
    NSCalendarUnitTimeZone = (1 << 21),
    NSEraCalendarUnit = NSCalendarUnitEra,
    NSYearCalendarUnit = NSCalendarUnitYear,
    NSMonthCalendarUnit = NSCalendarUnitMonth,
    NSDayCalendarUnit = NSCalendarUnitDay,
    NSHourCalendarUnit = NSCalendarUnitHour,
    NSMinuteCalendarUnit = NSCalendarUnitMinute,
    NSSecondCalendarUnit = NSCalendarUnitSecond,
    NSWeekCalendarUnit = kCFCalendarUnitWeek,
    NSWeekdayCalendarUnit = NSCalendarUnitWeekday,
    NSWeekdayOrdinalCalendarUnit = NSCalendarUnitWeekdayOrdinal,
    NSQuarterCalendarUnit = NSCalendarUnitQuarter,
    NSWeekOfMonthCalendarUnit = NSCalendarUnitWeekOfMonth,
    NSWeekOfYearCalendarUnit = NSCalendarUnitWeekOfYear,
    NSYearForWeekOfYearCalendarUnit = NSCalendarUnitYearForWeekOfYear,
    NSCalendarCalendarUnit = NSCalendarUnitCalendar,
    NSTimeZoneCalendarUnit = NSCalendarUnitTimeZone,
};

typedef NSUInteger NSCalendarOptions;
typedef NSUInteger NSCalendarUnit;

FOUNDATION_EXPORT NSString* const NSCalendarIdentifierGregorian;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierBuddhist;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierChinese;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierCoptic;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierEthiopicAmeteMihret;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierEthiopicAmeteAlem;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierHebrew;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierISO8601;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierIndian;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierIslamic;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierIslamicCivil;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierJapanese;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierPersian;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierRepublicOfChina;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierIslamicTabular;
FOUNDATION_EXPORT NSString* const NSCalendarIdentifierIslamicUmmAlQura;
FOUNDATION_EXPORT NSString* const NSCalendarDayChangedNotification;

enum {
    NSDateComponentUndefined = NSIntegerMax,
    NSUndefinedDateComponent = NSDateComponentUndefined
};

FOUNDATION_EXPORT_CLASS
@interface NSCalendar : NSObject <NSCopying, NSSecureCoding>
+ (NSCalendar*)currentCalendar;
+ (NSCalendar*)autoupdatingCurrentCalendar STUB_METHOD;
+ (NSCalendar*)calendarWithIdentifier:(NSString*)calendarIdentifierConstant;
- (id)init NS_UNAVAILABLE;
- (id)initWithCalendarIdentifier:(NSString*)string;
@property (readonly, copy) NSString* calendarIdentifier;
@property NSUInteger firstWeekday;
@property (copy) NSLocale* locale;
- (NSRange)maximumRangeOfUnit:(NSCalendarUnit)unit;
@property NSUInteger minimumDaysInFirstWeek;
- (NSRange)minimumRangeOfUnit:(NSCalendarUnit)unit;
- (NSUInteger)ordinalityOfUnit:(NSCalendarUnit)smaller inUnit:(NSCalendarUnit)larger forDate:(NSDate*)date;
- (NSRange)rangeOfUnit:(NSCalendarUnit)smaller inUnit:(NSCalendarUnit)larger forDate:(NSDate*)date;
- (BOOL)rangeOfUnit:(NSCalendarUnit)unit startDate:(NSDate* _Nullable*)datep interval:(NSTimeInterval*)tip forDate:(NSDate*)date;
- (BOOL)rangeOfWeekendStartDate:(NSDate* _Nullable*)datep interval:(NSTimeInterval*)tip containingDate:(NSDate*)date;
@property (copy) NSTimeZone* timeZone;
- (NSDate*)dateByAddingComponents:(NSDateComponents*)comps toDate:(NSDate*)date options:(NSCalendarOptions)opts;
- (NSDate*)dateByAddingUnit:(NSCalendarUnit)unit value:(NSInteger)value toDate:(NSDate*)date options:(NSCalendarOptions)options;
- (NSDate*)dateFromComponents:(NSDateComponents*)comps;
- (void)enumerateDatesStartingAfterDate:(NSDate*)start matchingComponents:(NSDateComponents*)comps options:(NSCalendarOptions)opts usingBlock:(void (^)(NSDate*, BOOL, BOOL*))block;
- (NSDate*)dateBySettingHour:(NSInteger)h minute:(NSInteger)m second:(NSInteger)s ofDate:(NSDate*)date options:(NSCalendarOptions)opts;
- (NSDate*)dateBySettingUnit:(NSCalendarUnit)unit value:(NSInteger)v ofDate:(NSDate*)date options:(NSCalendarOptions)opts;
- (NSDate*)dateWithEra:(NSInteger)eraValue year:(NSInteger)yearValue month:(NSInteger)monthValue day:(NSInteger)dayValue hour:(NSInteger)hourValue minute:(NSInteger)minuteValue second:(NSInteger)secondValue nanosecond:(NSInteger)nanosecondValue;
- (NSDate*)dateWithEra:(NSInteger)eraValue yearForWeekOfYear:(NSInteger)yearValue weekOfYear:(NSInteger)weekValue weekday:(NSInteger)weekdayValue hour:(NSInteger)hourValue minute:(NSInteger)minuteValue second:(NSInteger)secondValue nanosecond:(NSInteger)nanosecondValue;
- (BOOL)date:(NSDate*)date matchesComponents:(NSDateComponents*)comps;
- (NSDate*)nextDateAfterDate:(NSDate*)date matchingComponents:(NSDateComponents*)comps options:(NSCalendarOptions)options;
- (NSDate*)nextDateAfterDate:(NSDate*)date matchingHour:(NSInteger)hourValue minute:(NSInteger)minuteValue second:(NSInteger)secondValue options:(NSCalendarOptions)options;
- (NSDate*)nextDateAfterDate:(NSDate*)date matchingUnit:(NSCalendarUnit)unit value:(NSInteger)value options:(NSCalendarOptions)options;
- (BOOL)nextWeekendStartDate:(NSDate* _Nullable*)datep interval:(NSTimeInterval*)tip options:(NSCalendarOptions)options afterDate:(NSDate*)date;
- (NSDate*)startOfDayForDate:(NSDate*)date;
- (NSComparisonResult)compareDate:(NSDate*)date1 toDate:(NSDate*)date2 toUnitGranularity:(NSCalendarUnit)unit;
- (BOOL)isDate:(NSDate*)date1 equalToDate:(NSDate*)date2 toUnitGranularity:(NSCalendarUnit)unit;
- (BOOL)isDate:(NSDate*)date1 inSameDayAsDate:(NSDate*)date2;
- (BOOL)isDateInToday:(NSDate*)date;
- (BOOL)isDateInTomorrow:(NSDate*)date;
- (BOOL)isDateInWeekend:(NSDate*)date;
- (BOOL)isDateInYesterday:(NSDate*)date;
- (NSInteger)component:(NSCalendarUnit)unit fromDate:(NSDate*)date;
- (NSDateComponents*)components:(NSCalendarUnit)unitFlags fromDate:(NSDate*)date;
- (NSDateComponents*)components:(NSCalendarUnit)unitFlags fromDate:(NSDate*)startingDate toDate:(NSDate*)resultDate options:(NSCalendarOptions)options;
- (NSDateComponents*)components:(NSCalendarUnit)unitFlags fromDateComponents:(NSDateComponents*)startingDateComp toDateComponents:(NSDateComponents*)resultDateComp options:(NSCalendarOptions)options;
- (NSDateComponents*)componentsInTimeZone:(NSTimeZone*)timezone fromDate:(NSDate*)date;
- (void)getEra:(NSInteger*)eraValuePointer year:(NSInteger*)yearValuePointer month:(NSInteger*)monthValuePointer day:(NSInteger*)dayValuePointer fromDate:(NSDate*)date;
- (void)getEra:(NSInteger*)eraValuePointer yearForWeekOfYear:(NSInteger*)yearValuePointer weekOfYear:(NSInteger*)weekValuePointer weekday:(NSInteger*)weekdayValuePointer fromDate:(NSDate*)date;
- (void)getHour:(NSInteger*)hourValuePointer minute:(NSInteger*)minuteValuePointer second:(NSInteger*)secondValuePointer nanosecond:(NSInteger*)nanosecondValuePointer fromDate:(NSDate*)date;
@property (readonly, copy) NSString* AMSymbol;
@property (readonly, copy) NSString* PMSymbol;
@property (readonly, copy) NSArray* weekdaySymbols;
@property (readonly, copy) NSArray* shortWeekdaySymbols;
@property (readonly, copy) NSArray* veryShortWeekdaySymbols;
@property (readonly, copy) NSArray* standaloneWeekdaySymbols;
@property (readonly, copy) NSArray* shortStandaloneWeekdaySymbols;
@property (readonly, copy) NSArray* veryShortStandaloneWeekdaySymbols;
@property (readonly, copy) NSArray* monthSymbols;
@property (readonly, copy) NSArray* shortMonthSymbols;
@property (readonly, copy) NSArray* veryShortMonthSymbols;
@property (readonly, copy) NSArray* standaloneMonthSymbols;
@property (readonly, copy) NSArray* shortStandaloneMonthSymbols;
@property (readonly, copy) NSArray* veryShortStandaloneMonthSymbols;
@property (readonly, copy) NSArray* quarterSymbols;
@property (readonly, copy) NSArray* shortQuarterSymbols;
@property (readonly, copy) NSArray* standaloneQuarterSymbols;
@property (readonly, copy) NSArray* shortStandaloneQuarterSymbols;
@property (readonly, copy) NSArray* eraSymbols;
@property (readonly, copy) NSArray* longEraSymbols;
@end
# M and N are integers such that 6<M<N. What is the value of N?

**AbdurRakib** wrote:

M and N are integers such that 6<M<N. What is the value of N?

(1) The greatest common divisor of M and N is 6
(2) The least common multiple of M and N is 36

OG Q 2017 New Question (Book Question: 297)

---

**14101992** wrote:

Statement 1: the greatest common divisor of M and N is 6. So M and N are multiples of 6. But an exact value of N cannot be determined. Insufficient!

Statement 2: the LCM of M and N is 36. M can be 9 and N can be 12, or M can be 12 and N can be 18. Multiple possible answers. Insufficient!

Combining 1 & 2: M and N are multiples of 6 and their LCM is 36. So the only possible values of M and N are 12 and 18, respectively. Sufficient!

##### General Discussion

Both together:

LCM * HCF = MN (each of M, N is a multiple of 6, so assume M = 6k, where k is a positive integer).

Then N = 36*6/6k, thus N = 36/k. Since N is a multiple of 6, k can only be 1, 2, 3 or 6.

If k = 1 then N = 36, M = 6; if k = 2 then N = 18, M = 12; if k = 3 then N = 12, M = 18; if k = 6 then N = 6, M = 36. The only option that satisfies the constraint 6<M<N is k = 2, with N = 18 and M = 12.

C

---

M > 6 and N > M.

[1] GCD is 6:
N = 6 * 3 and M = 6 * 2
N = 6 * 5 and M = 6 * 3

Here the numbers can be anything: as long as we multiply 6 by any prime number, statement 1 will be satisfied.

[2] LCM = 36 = 2 * 2 * 3 * 3, so M and N can only be formed from combinations of 2's and 3's. Given M and N > 6, the possible values are:
N = 2 * 3 * 3 and M = 2 * 2 * 3
N = 2 * 2 * 3 * 3 and M = 2 * 2 * 3
N = 2 * 2 * 3 * 3 and M = 2 * 3 * 3

Hence [1] and [2] are individually not sufficient, but together they yield N = 18 and M = 12.

---

HCF * LCM = a * b.
Statement 1: only the HCF is mentioned; multiple values are possible. Not sufficient.
Statement 2: only the LCM is mentioned; multiple values are possible. Not sufficient.
Combining: HCF * LCM = M * N, and we know M < N, hence we can determine the values.

PS: please let me know if my approach is correct.

---

6 * 36 = 216 = m * n.
Both m and n are greater than 6.
M < N can be satisfied by the pairs (12,18), (9,24) and (8,27), but only (12,18) gives a GCF of 6.

---

If M and N are among 6, 12, 18 and 36, and 6 < M < N, then M cannot be 6 or 36 and N cannot be 6. The only test cases to use are:

Case 1: M = 12, N = 18, GCD = 6, LCM = 36
Case 2: M = 12, N = 36, GCD = 12, LCM = 36
Case 3: M = 18, N = 36, GCD = 18, LCM = 36

The only case that satisfies the limitations of both statement 1 (GCD) and statement 2 (LCM) is case 1, and therefore N is 18 (the answer again is C). Hope this helps explain why N cannot be 36.

---

**Inten21** wrote:
Bunuel, can you please provide an elaborated solution for this problem?

While your question is to Bunuel, I will try to share a detailed explanation that may help you as well as others.

What is the GCD? It is the product of the common primes with the least powers (this is in my words and not a textbook-perfect definition).

E.g., for the two integers 12 and 16:
Step 1: Break into prime factors. 12 is $$2^2*3$$ and 16 is $$2^4$$.
Step 2: Identify the common prime(s), i.e. 2.
Step 3: Pick the common prime with the lowest power, i.e. $$2^2$$ in our example, and that is your GCD.
GCD: $$2^2$$.

Conceptually, the GCD is the largest number that can divide the two integers in question. Try to find an integer greater than 4 that divides both 12 and 16. You won't be able to find one!

What is the LCM? It is the product of the distinct primes with the highest powers (again, this is in my words and not a textbook-perfect definition).

In the above example, pick out the highest power of each distinct prime, i.e. $$2^4$$ and $$3^1$$. Hence the LCM is $$2^4*3^1 = 48$$. Conceptually, the LCM is the smallest multiple of the two integers in the question. Try to find an integer less than 48 that is a multiple of both 12 and 16. You won't be able to find one!

On to the question at hand. Info provided in the question:
1. Both M and N are integers.
2. Both are greater than 6.
3. N is greater than M ($$N>M$$; e.g. the least value of N could be 8 and that of M could be 7).

We are asked to determine the value of N.

Statement 1: the GCD of M and N is 6 (or $$2*3$$).

This states that 6 will be common to both M and N. Plus, as M and N are greater than 6, there will be other primes too. But this statement does not provide insight into those other primes and hence is insufficient. Let me demonstrate:

$$M = 2*3*5 = 30$$
$$N = 2*3*7 = 42$$

or

$$M = 2*3*11 = 66$$
$$N = 2*3*13 = 78$$

Here GCD(M, N) is 6, but N can take different values while staying true to the three data points provided by the question stem.

Statement 2: the LCM (least common multiple) of M and N is 36 (or $$2^2*3^2$$).

By the definition of the LCM, in which we consider the highest powers of all distinct primes, this statement tells us that 2 and 3 are the only primes carried by M and N. But it does not provide insight into the powers of 2 and 3 specific to M and N. E.g., in the examples below, $$2^2$$ can be part of M in the first example and also part of N in the last, and hence this statement is insufficient.

$$M = 2*3*2 = 12$$
$$N = 2*3*3 = 18$$

$$M = 3*3 = 9$$
$$N = 2*3*2 = 12$$

Combining both statements, we need to ensure that:

1. $$N>M$$ (this one is the key!)
2. N and M are greater than 6.
3. From statement 1 we know that 6 is common to both M and N.
4. From statement 2 we know that 2 and 3 are the only primes carried by M and N, and their highest powers are 2 ($$2^2$$ and $$3^2$$).

$$M = 2*3*2 = 12$$
$$N = 2*3*3 = 18$$

You cannot do the below, as it would violate the condition that N>M (pt. 1, i.e. the key):
$$M = 2*3*3 = 18$$
$$N = 2*3*2 = 12$$

Ans C (or $$N = 18$$).

While it sounds very simple, the best approach to solving these questions, and math questions in general, is to list the various possibilities in an organized fashion (one below another) in your notebook. The biggest mistakes happen when we ignore the data points provided in the question stem, e.g. N>M or N,M>6 in this case.

Hope it helps!

---

Statement 1 tells us that, between M and N, 2 and 3 are the lowest factors. However, we do not know exactly who has 2 and who has 3; there can also be other factors between them. Insufficient.

Statement 2 tells us that 2^2 and 3^2 are the highest factors between M and N. However, we do not know whether those are the only factors common between them, or whether there are lower powers of 2 and 3 between them than 2^2 and 3^2.

Combining both statements, we understand that 2 and 3 are the lowest factors and 2^2 and 3^2 are the highest factors. So one of them must be 12 and the other must be 18. Since 6<M<N, N must be 18.

C

---

**matt882** wrote:

Hi 14101992,

According to statement 2 alone, N can be 36 as well. The further constraint is given by the formula (concept) LCM*HCF = M*N = 6*36, which tells us that N cannot be 36, and so only M = 12 and N = 18 can be the answer.

C is correct.

Hope it helps.

Matt

---

Statements 1 and 2 are clearly NOT SUFFICIENT on their own. Can anyone explain the easiest way to see how together they are sufficient? I just had a lucky guess, 'C', which was correct. Thanks.

---

Dear Bunuel, what is your take on this question? Is there any way we could solve this question faster, with a formula or something? Because I kept thinking about what numbers could fit the statements, and doing so took some time. Thanks.

---

It's a very good question! Some claps for GMAC.

Let's understand what is being asked. Here we are supposed to find the value of N: a concrete, solid value of N.

Statement 1 says the GCD of M and N is 6, which means they are multiples of 6. So M and N can be 12 & 18, 18 & 24, 24 & 30, ... and so on. (Hence not sufficient.)

Statement 2 says the LCM of M and N is 36, which means the maximum value of N is 36. So M and N can be 9 & 12, 12 & 18, 18 & 36, 9 & 36, 12 & 36. (Hence not sufficient.)

On combining, we get 12 & 18 as our final answer because it is common to both: the GCD of 12 & 18 is 6 and the LCM of 12 & 18 is 36. No other combination satisfies these conditions.

Option C is the correct choice. Hope that helps!

---

**amins309** wrote:

Statement 1: The GCD of M and N is 6. Therefore, M and N must contain 2*3 and may or may not contain any other factor.
M = 2 * 3 * 7 (any other number, or nothing at all)
N = 2 * 3 * 5 (any other number, or nothing at all)
INSUFFICIENT

Statement 2: The LCM of M and N is 36. Therefore, M and N can take the following forms:
M = 2 * 3 * 2 = 12
N = 2 * 3 * 3 = 18

or

M = 2 * 3 * 2 = 12
N = 2 * 3 * 2 * 3 = 36

INSUFFICIENT

Together, (1) & (2) leave only one possibility:
M = 2 * 3 * 2 = 12
N = 2 * 3 * 3 = 18

---

(quoting matt882's post above) Why can't N be 36?

---

I will try to simplify amins309's explanation to some extent.

Statement 1: The GCD of M and N is 6. Therefore, M and N must contain 2*3 and may or may not contain any other factor.
M = 2 * 3 * 7 (any other number, or nothing at all)
N = 2 * 3 * 5 (any other number, or nothing at all)
INSUFFICIENT

Statement 2: The LCM of M and N is 36. Therefore, M and N can take the following forms:
M = 2 * 3 * 2 = 12, N = 2 * 3 * 3 = 18
M = 2 * 3 * 3 = 18, N = 2 * 2 * 3 * 3 = 36
M = 2 * 2 * 3 = 12, N = 2 * 2 * 3 * 3 = 36
INSUFFICIENT

Together, (1) & (2):

In addition to this, we use one more property: LCM * HCF = the product of the two numbers (M * N).
LCM * HCF = 36 * 6 = 6 * 6 * 6 ... (1)
Now, in case M = 12 and N = 36: M * N = 6 * 6 * 6 * 2, which is not equal to (1).
Similarly, for M = 18 and N = 36: M * N = 6 * 6 * 6 * 3.
While for M = 12 and N = 18: M * N = 6 * 6 * 6 = LCM * HCF.

---

**Inten21** wrote:

Bunuel, can you please provide an elaborated solution for this problem? I would really appreciate it if you could explain in detail how to think about and approach such tough LCM and GCF problems. Also, can you link some more similar questions, if possible?

---

S1: if the HCF is 6, let m = 6a and n = 6b, where a and b are relatively prime.
=> 6 < 6a < 6b
=> 1 < a < b
Not sufficient.

S2: if the LCM is 36, then 6ab = 36
=> ab = 6
=> a = 2, b = 3
=> m = 2 * HCF, n = 3 * HCF
Not sufficient.

With S1 + S2: m = 12, n = 18. Sufficient.
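For anyone who wants to verify the thread's algebra by machine, here is a short brute-force check (a plain Python sketch of my own, not part of the original thread; since lcm(M, N) = 36 forces both M and N to divide 36, a search bound of 36 is exact for statement 2, while for statement 1 alone it merely illustrates that many pairs qualify):

```python
from math import gcd

def lcm(a, b):
    """Least common multiple via the gcd identity a*b = gcd*lcm."""
    return a * b // gcd(a, b)

# All candidate pairs with 6 < M < N up to the bound 36.
pairs = [(m, n) for m in range(7, 37) for n in range(m + 1, 37)]

s1 = [(m, n) for (m, n) in pairs if gcd(m, n) == 6]   # statement (1)
s2 = [(m, n) for (m, n) in pairs if lcm(m, n) == 36]  # statement (2)
both = [p for p in s1 if p in s2]

print("statement (1) alone:", s1)
print("statement (2) alone:", s2)
print("together:", both)
```

Each statement alone admits several pairs, and only (12, 18) survives both, matching answer C.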
Horacio Pancheri (born 2 December 1982 in Esquel, Argentina) is an Argentine actor and model.

Life

He was born on 2 December 1982 in Esquel, Argentina. He has two siblings, Emilio and Victoria, and a son, Benicio. He began his career as a model. In 2012 he moved from Argentina to Mexico to study acting at the drama school of Televisa, the largest telenovela production company. He received his first role in 2014, in the telenovela A szenvedély száz színe (El color de la pasión), and then had a role in A múlt árnyéka (La sombra del pasado). In 2016 he received his first leading role, in A sors útjai (Un camino hacia el destino), and in 2017 another leading role, in En tierras salvajes. He had a brief relationship with the actress Aracely Arámbula, and was later a couple with Grettell Valdez. His current partner is Paulina Goto, with whom he fell in love during the filming of A sors útjai.
David Hogan spoke with Glen Ryan about filming the 'miracle' of the Brindabella ranges. Kristy Stark, 2010 recipient of the NFSA-ACS John Leake OAM ACS Award talks about her transition from cinematographer to producer, producing for online and 'Wastelander Panda'. Meg Labrum on the life and work of prolific film producer David Hannay (1939-2014). Can you help us identify these Sydney locations from landmark 1920s films made by the McDonagh sisters? Listen to our Oral History interview with Wendy Hughes as Meg Labrum and Ken Berryman celebrate the talent and beauty of one of Australia's most beloved actors.
\chapter{Logical spectra} In this chapter we will construct a topological groupoid $\MM=\Spec(\bTT)$ called the \emph{spectrum} of a coherent first-order logical theory $\bTT$. We begin by defining the space of objects $\MM_0$ constructed from the models of $\bTT$. Next we consider the topos of sheaves over $\MM_0$ and, in particular, single out a class of $\bTT$-definable sheaves over this space. Then we will extend $\MM$ to a groupoid whose morphisms are $\bTT$-model isomorphisms; using this we characterize the definable sheaves as those which admit an equivariant action by the isomorphisms in $\MM$. As a corollary we obtain a topological/groupoidal characterization of definability in first-order logic. We close by considering the special case of classical first-order logic and prove a few facts specific to this case. \section{The spectrum $\MM_0$}\label{sec_CovSp} In this section we associate a topological spectral space $\MM_0(\bTT)$ with any coherent first-order theory $\bTT$. We build the spectrum from the semantics of $\bTT$, generalizing the Stone space construction from propositional logic. This is based upon an idea of Joyal \& Tierney \cite{JT} which was later developed by Joyal \& Moerdijk \cite{JM1} \cite{JM2}, Butz \& Moerdijk \cite{butz_thesis} \cite{BM_article} and Awodey \& Forssell \cite{forssell_thesis}. 
\begin{defn}
A (multi-sorted) \emph{coherent theory} is a triple $\<\AA,\LL,\bTT\>$ where
\begin{itemize}
\item $\AA=\{A\}$ is a set of basic sorts,
\item $\LL=\{R(\overline{x}), F(\overline{x})=y\}$ is a set of function\footnote{We write $F(\overline{x})=y$ for the function symbol because in multi-sorted logic we must specify the sort $y:B$ of the codomain of $F$ as well as its arity $\overline{x}:A_1\times\ldots\times A_n$.} and relation symbols with signatures in $\AA$ and
\item $\bTT=\left\{\varphi(\overline{x})\overprove{\overline{x}}\psi(\overline{x})\right\}$ is a set of coherent axioms, written in the language $\LL$ and involving variables $\overline{x}=\<x_1,\ldots,x_n\>$ (see below).
\end{itemize}
\end{defn}
We write $x:A$ to indicate that $x$ is a variable ranging over some sort $A\in\AA$. A \emph{context of variables} is a sequence of sorted variables, written ${\<x_1,\ldots,x_n\>:A_1\times\ldots\times A_n}$ or $\overline{x}:\prod_i A_i$. We will assume that $\AA$ is closed under tupling, although this is not necessary for our arguments; in practice, this allows us to avoid subscript overload by focusing most of the time on the single-variable case $x:A$.

As usual in categorical logic, we will assume that every $\LL$-formula (in particular each basic function or relation) is written in a fixed context of variables $x:A$, indicated notationally by writing $\varphi(x)$. We may always weaken a formula $\varphi(x)$ to include dummy variables $y:B$ by conjunction with a tautology; the weakening $\varphi(x,y)$ is shorthand for the formula $\varphi(x)\wedge(y=y)$. It is important to note that we regard $\varphi(x)$ and $\varphi(x,y)$ as different formulas.

A first-order formula is \emph{coherent} if it belongs to the fragment generated by $\{\bot,\top,=,\wedge,\vee,\exists\}$. Axioms in a coherent theory are presented as \emph{sequents}, ordered pairs of coherent formulas written in a common context $x:A$ and usually written $\varphi(x)\vdash_x\psi(x)$.
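As a simple standard example, the theory of partial orders is coherent: take a single sort $A$, a single relation symbol $x\leq y$, and the axioms
$$\top\vdash_{x} x\leq x,\qquad
x\leq y\wedge y\leq x\vdash_{x,y} x=y,\qquad
x\leq y\wedge y\leq z\vdash_{x,y,z} x\leq z.$$
Each sequent involves only $\top$, $\wedge$ and $=$, so all three axioms lie in the coherent fragment.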
Relative to this data, a $\bTT$-model $M$ is defined as usual. Each basic sort $A\in\AA$ defines an underlying set $A^M$; when $A=A_1\times\ldots\times A_n$ is a compound context $A^M=A_1^M\times\ldots\times A_n^M$. If $x:A$ is a context of variables, we may also write $|M|^x$ for the ``underlying set'' $A^M$. When $\LL$ is single-sorted and $x=\<x_1,\ldots,x_n\>$, this agrees with the usual notation $|M|^x=|M|^{n}$. For any coherent formula $\varphi(x)$ we may construct a \emph{definable set} $\varphi^M\subseteq |M|^x$ in the usual inductive fashion. $R^M\subseteq |M|^x$ is already defined and $(F(x)=y)^M$ is the graph of the function $F^M:|M|^x\to|M|^y$. If $\varphi^M\subseteq|M|^x$ is already defined, the interpretation of a weakened formula is given by $\varphi(x,y)^M:=\varphi^M\times|M|^y$. The interpretation of an existential formula is given by the image of the projection: $$\big(\exists y.\varphi(x,y)\big)^M=\pi_x(\varphi^M)=\big\{a\in|M|^x\ |\ \<a,b\>\in\varphi^M\rm{\ for\ some\ }b\in|M|^y\big\}$$ To interpret conjunctions and disjunctions we first weaken the component formulas to a common context $x:A$ and then compute $$\begin{array}{c} \big(\varphi\wedge\psi\big)^M=\varphi^M\cap\psi^M\subseteq|M|^x,\\ \big(\varphi\vee\psi\big)^M=\varphi^M\cup\psi^M\subseteq|M|^x.\\ \end{array}$$ We begin by defining the spectrum $\MM_0=\MM_0(\LL)$ associated with a language $\LL$. Every theory $\bTT$ in the language $\LL$ will define a subspace $\MM_0(\bTT)\subseteq\MM_0(\LL)$. We let $\kappa=\rm{LS}(\bTT)=|\LL|+\aleph_0$ denote the L\"{o}wenheim--Skolem number of $\bTT$ (see e.g. \cite{hodges}, ch. 3.1). Since any model has an elementary substructure of size at most $\kappa$, $\bTT$ is already complete for $\kappa$-small models, a fact that we will use repeatedly. \begin{defn}[Points of $\MM_0$, cf.
Butz \& Moerdijk \cite{butz_thesis} \cite{BM_article}]\label{M0points} A point $\mu\in |\MM_0|$ is a tuple $\<M_{\mu},K^A_{\mu}, v^A_{\mu}\>_{A\in\AA}$, where $M=M_{\mu}$ is an $\LL$-structure and, for each $A\in\AA$, \begin{itemize} \item $K^A_{\mu}$ is a subset of $\kappa$ and \item $v_{\mu}^A:K_{\mu}^A\twoheadrightarrow A^M$ is an infinite-to-one function (called a \emph{labelling}). \end{itemize} The latter condition means that, for each $a\in A^M$, the set $\{k\in\kappa\ |\ v_{\mu}^A(k)=a\}$ is infinite. \end{defn} We call the elements $k\in\kappa$ \emph{parameters} and the points $\mu\in|\MM_0|$ \emph{labelled models}. The motivation behind these parameters is similar to that behind the use of variable assignments in traditional first-order logic. \emph{A priori} we can only say whether a (labelled) model satisfies a sentence; the parameters allow us to evaluate the truth of any (parameterized) formula $\varphi(\overline{k})$ relative to $\mu$. Fix a labelled model $\mu\in|\MM_0|$, a context of variables $x:A=\<x_i:A_i\>$ and a formula $\varphi(x)$. Given this data, we introduce the following notation and terminology: \begin{itemize} \item If $x:A$ is a context of variables we may write $K_{\mu}^x$ (resp. $v_{\mu}^x$) rather than $K_{\mu}^A$ (resp. $v_{\mu}^A$). We call $K_{\mu}^x$ the \emph{domain} of $\mu$ at $x$. \item We write that $k\in \mu^x$ to indicate that $k\in K^x_{\mu}$, and say that \emph{$k$ is defined for $x$ at $\mu$}. \item We write $\varphi^\mu$ in place of $\varphi^{M_{\mu}}$ to denote the definable set associated to $\varphi(x)$ in the model $M_{\mu}$. \item When $k$ is defined for $x$ at $\mu$ we abuse notation by writing $\mu^x(k)$ for the element $v_{\mu}^x(k)\in|M_{\mu}|^x$. 
\item Given a formula $\varphi(x)$, we write $\mu\models\varphi(k)$ when $k\in\mu^x$ and ${M_{\mu}\models \varphi[\mu^x(k)/x]}.$ \end{itemize} \begin{defn}[Pre-basis for $\MM_0$] \mbox{} \begin{itemize} \item For each context $x:A$ and each parameter $k\in\kappa$ there is a basic open set $$V_{k,x}:=\big\{\mu\in|\MM_0|\ \big|\ k\in\mu^x\big\}.$$ \item Given a basic relation $R(x)$ and a parameter $k\in\kappa$ there is a basic open set $$V_{R(k)}=\big\{\mu\in\MM_0\ \big|\ \mu\models R(k)\big\}\subseteq V_{k,x}.$$ \item Given a basic function $F(x)=y$ and parameters $k,l\in\kappa$ there is a basic open set $$V_{F(k)=l}=\big\{\mu\in\MM_0\ \big|\ \mu\models F(k)=l\big\}\subseteq V_{\<k,l\>,\<x,y\>}.$$ \end{itemize} \end{defn} From these we can build up a richer collection of open sets corresponding to other parameterized formulas. \begin{defn}[Basic opens of $\MM_0$]\label{M0opens} \mbox{} \begin{itemize} \item Each prebasic open set above is a basic open set. \item Given an inductively defined basic open set $V_{\varphi(k)}$ and a parameter $l^y$ there is a weakening $$V_{\varphi^y(k,l)}=V_{\varphi(k)}\cap V_{l^y}.$$ \item When $V_{\varphi(k)}$ and $V_{\psi(k)}$ are defined with the same parameters $k^x$, we set $$V_{\varphi\wedge\psi(k)}= V_{\varphi(k)}\cap V_{\psi(k)} \hspace{1cm} V_{\varphi\vee\psi(k)}= V_{\varphi(k)}\cup V_{\psi(k)}.$$ \item If $V_{\varphi(k,l)}$ is defined for all parameters $l^y$, then $$V_{\exists y.\varphi(k)}=\displaystyle\bigcup_{l\in\kappa^y} V_{\varphi(k,l)}$$ \end{itemize} \end{defn} If the defining formula of a basic open set is complicated we may sometimes write it in brackets $V[\ldots]$, rather than as a subscript. We have the following easy lemma. \begin{lemma} Given a coherent formula $\varphi(x)$ and a parameter $k\in\mu^x$, $$\mu\in V_{\varphi(k)}\Iff \mu\models\varphi(k).$$ \end{lemma} \begin{proof} The claim holds by definition for basic functions and relations.
Suppose $\varphi(x,y)\equiv \varphi(x)\wedge(y=y)$ is a weakening of a formula $\varphi(x)$. Then $\mu\models \varphi(k,l)$ just in case $\mu\models\varphi(k)$ and $l$ is defined for $y$ at $\mu$. But this says exactly that $\mu$ belongs to the intersection $V_{\varphi(k)}\cap V_{l,y}$. For joins and meets in a common context, the claim follows immediately from the definition of satisfaction. As for the existential, consider a formula $\varphi(x,y)$. If $\mu\models \exists y.\varphi(k)$ there must be some $b\in|M_{\mu}|^y$ such that $\mu\models \varphi(\mu^x(k),b)$. Because $\mu$ is a surjective labelling there is a parameter $l$ such that $\mu^y(l)=b$, and therefore $\mu\in V_{\varphi(k,l)}\subseteq\displaystyle\bigcup_{l\in\kappa} V_{\varphi(k,l)}$. \end{proof} Properly speaking, $\MM_0$ as described above is not a $T_0$ space and, in fact, there is a proper class of models in the underlying set $|\MM_0|$. However, these models are all $\kappa$-small so they represent only a set of isomorphism-classes. For each isomorphism class there is only a set of labellings and this allows us to modify $\MM_0$ in an essentially trivial way to obtain an ordinary space. \begin{prop} Two labelled models $\mu$ and $\nu$ are topologically indistinguishable if and only if $K_\mu^A=K_\nu^A$ for every basic sort $A$ and these labellings induce a $\bTT$-model isomorphism $M_\mu\cong M_\nu$. \end{prop} \begin{proof} Suppose that $\mu$ and $\nu$ are topologically indistinguishable models (i.e., they belong to exactly the same open sets). Because they belong to the same open sets of the form $V_{k,x}$, we see that $k$ is defined for $x$ at $\mu$ just in case it is defined at $\nu$. Considering the basic open set $V_{k=l}$, we see that $\mu(k)=\mu(l)$ if and only if $\nu(k)=\nu(l)$.
This means that the labellings induce a bijection between the underlying sets of $\mu$ and $\nu$: $$\xymatrix{ K_{\mu} \ar@{=}[r] \ar@{->>}[d] & K_{\nu} \ar@{->>}[d]\\ |M_{\mu}| \ar@{-->}[r]^{\sim} & |M_{\nu}|}.$$ Because $\mu\models R(k)$ just in case $\nu\models R(k)$ for each basic relation (or similarly for functions), this bijection is actually an $\LL$-structure homomorphism. Thus $\mu$ and $\nu$ are topologically indistinguishable just in case there is an isomorphism $M_{\mu}\cong M_{\nu}$ which carries the labelling of $\mu$ to the labelling of $\nu$. \end{proof} Among these indistinguishable points there is a canonical representative. The relation $\mu(k)=\mu(l)$ defines an equivalence relation $k\sim_{\mu} l$ on $K_{\mu}$ (or equivalently a partial equivalence relation on the full set $\kappa$). We may substitute the quotient $K_{\mu}/\sim_{\mu}$ (also called a \emph{subquotient} $\kappa/\sim_{\mu}$) in place of the underlying set $|M_{\mu}|$, replacing each $a\in M_{\mu}$ by the equivalence class $\{k\in\kappa\ |\ \mu(k)=a\}$. The result is a labelled model indistinguishable from $\mu$ and canonically determined by the neighborhoods of $\mu$. Thus every labelled model is indistinguishable from one whose underlying set is a subquotient of $\kappa$, and one can easily show that any two distinct such models are topologically distinguishable. Thus in definition \ref{M0points} we could have specified these as the points of our space, making $|\MM_0|$ a set and $\MM_0$ a $T_0$ space. We prefer to use the unreduced space because it is sometimes convenient to refer to underlying sets (as in lemma \ref{reassignment} below); at the expense of conceptual clarity we can always rephrase these results in terms of isomorphisms between subquotients. Caution: even with this modification, the topological space $\MM_0$ is \emph{not} Hausdorff. In some respects this is unsurprising given that the same phenomena occur for algebraic schemes.
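To see the reduction in a small case, consider the empty theory over a language with a single sort $A$ and no basic relations or functions. Fix a structure $M$ with underlying set $\{a,b\}$, a labelling $v:K\twoheadrightarrow\{a,b\}$, and let $\sigma$ be the transposition exchanging $a$ and $b$. The labelled models $\mu=\<M,K,v\>$ and $\nu=\<M,K,\sigma\circ v\>$ are distinct points of $|\MM_0|$, but $\sigma$ is an isomorphism $M_{\mu}\cong M_{\nu}$ carrying the labelling of $\mu$ to that of $\nu$, so by the proposition above they are topologically indistinguishable. Both reduce to the same canonical representative: the labelled model whose underlying set consists of the two equivalence classes $v^{-1}(a),v^{-1}(b)\subseteq\kappa$, each labelled by its own elements.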
However, the reduced version of $\MM_0$ is at least sober, as in the algebraic case; the reader can see \cite{butz_thesis} or \cite{forssell_thesis} for a proof. To verify the failure of separation in $\MM_0$, we observe that points are not closed. \begin{prop}\label{spectral_closure} Consider two labelled models $\mu,\nu\in|\MM_0|$. Then $\mu$ belongs to the closure of $\nu$ just in case \begin{itemize} \item For each $A\in \AA$, $K_{\mu}^A\subseteq K_{\nu}^A$, \item these inclusions induce functions on the underlying sets of $\mu$ and $\nu$: $$\xymatrix{ K_{\mu}^A \ar@{->>}[d] \ar@{}[r]|{\subseteq} & K_{\nu}^A \ar@{->>}[d]\\ A^\mu \ar@{-->}[r] & A^\nu} $$ \item The induced functions $A^\mu\to A^\nu$ define an $\LL$-structure homomorphism $M_{\mu}\to M_{\nu}$. \end{itemize} \end{prop} \begin{proof} First suppose that $\mu\in\overline{\nu}$. This means that, for any open set $V\subseteq \MM_0$, $\mu\in V$ implies $\nu\in V$. First consider open sets of the form $V_{k,x}$. Applying the above observation, this means that $k$ is defined for $x$ at $\mu$ (so that $\mu\in V_{k,x}$) implies that $k$ is defined for $x$ at $\nu$ as well. Therefore $K_{\mu}^x\subseteq K_{\nu}^x$. Similarly, whenever $\mu\models k=k'$ then we must have $\nu\models k=k'$ as well. It follows that the inclusion $K_{\mu}^x\subseteq K_{\nu}^x$ induces a function on underlying sets: $$\xymatrix{ &\kappa&\\ K^x_{\mu} \ar@{->>}[d] \ar@{>->}[ur] \ar@{>->}[rr] && K^x_{\nu} \ar@{->>}[d] \ar@{>->}[ul]\\ |M_{\mu}|^x \ar@{-->}[rr]^{|\alpha|^x} && |M_{\nu}|^x.\\ }$$ Finally, if $\mu\models \varphi(k)$ then $\nu\models \varphi(k)$ as well, so the induced functions $|M_{\mu}|^x\to|M_{\nu}|^x$ preserve any coherent properties of elements in $M_{\mu}$. In particular, this holds for basic functions and relations, so the functions $|\alpha|^x$ define an $\LL$-structure homomorphism $M_{\mu}\to M_{\nu}$.
Conversely, if there are inclusions $K_{\mu}^A\subset K_{\nu}^A$ which induce a homomorphism $M_{\mu}\to M_{\nu}$, then one easily shows that every open set containing $\mu$ contains $\nu$ as well, so that $\mu\in\overline{\nu}$.\footnote{In fact, one can show that the only closed point of $\MM_0$ is the empty structure. This is because the equivalence classes making up the underlying sets $|M_{\mu}|^x$ are infinite. Therefore we can always throw out a finite set of labels to create a new model $\mu^-$ which belongs to the closure of $\mu$.} \end{proof} \begin{lemma}[Reassignment lemma]\label{reassignment} Suppose that $\mu$ is a labelled model, $k^x$ and $l^y$ are disjoint sequences of parameters and $b\in|M_{\mu}|^y$ is a sequence of elements in $\mu$. Then there is another labelling $\nu$ with the same underlying model, $M_{\mu}=M_{\nu}$, such that $\mu(k)=\nu(k)$ and $\nu(l)=b$. \end{lemma} \begin{proof} It is possible that some of the $l$-parameters may already be defined at $\mu$, so we begin by removing them from each domain $K_{\mu}$. Because $v_{\mu}$ is infinite-to-one and $l$ is finite, the modified labelling is again infinite-to-one and we call the resulting labelled model $\mu\setminus l$. Because $k$ is disjoint from $l$, we have $v_{\mu\setminus l}(k)=v_{\mu}(k)\in|M_{\mu}|^x$. Now we can freely reassign $l$ to $b$. Specifically, when $y=\<y_j:B_j\>$ we set $$K_{\nu}^A=K_{\mu\setminus l}^A\cup\{ l_j\ |\ B_j=A\}.$$ Then we extend $v_{\nu}$ to this larger domain by setting $v_{\nu}(l_j)=b_j$. Clearly $\nu(l)=b$. As an extension of $\mu\setminus l$, we also have $\nu(k)=v_{\mu\setminus l}(k)=\mu(k)$, as desired. \end{proof} Now we shift from languages and structures to theories and models. In categorical logic, axioms are always expressed relative to a context of variables $x:A$; in this way we can interpret an outer layer of ``universal quantification'' even though $\forall$ is not a coherent symbol.
Similarly, we express our axioms as sequents ${\varphi(x)\underset{x:A}{\vdash} \psi(x)}$, allowing for ``one layer'' of implication (or negation) even though $\to$ is not coherent. A first-order theory $\bTT$ is \emph{coherent} if it has an axiomatization using sequents of coherent formulas. To indicate that a sequent is valid relative to $\bTT$ we write $$\bTT\tri \varphi(x)\vdash_{x:A} \psi(x).$$ In an $\LL$-structure $M$, satisfaction of sequents is dictated by definable sets: $$M\models \varphi(x)\vdash_{x:A}\psi(x)\ \Iff\ \varphi^M\subseteq \psi^M\subseteq |M|^x.$$ \begin{defn} Consider a coherent theory $$\bTT=\{\varphi_i(x_i)\underset{x_i:A_i}{\vdash} \psi_i(x_i)\}_{i\in I}$$ written in the language $\LL$. The \emph{(spatial) spectrum of $\bTT$} is the subspace $\Spec_0(\bTT)\subseteq\MM_0(\LL)$ consisting of those labelled $\LL$-structures $\mu$ which are models of $\bTT$: i.e., for all $i\in I$, $$\varphi_i^\mu\subseteq\psi_i^\mu\subseteq |M_{\mu}|^{x_i}.$$ \end{defn} \begin{lemma}\label{inclusion_lemma} From now on let $\MM_0=\Spec_0(\bTT)$ denote the spectrum of $\bTT$. Any (non-empty) basic open set $U\subseteq V_{\varphi(k)}$ has the form $U=V_{\psi(k,l)}$ where $\bTT\tri\psi(x,y)\vdash \varphi(x)$. \end{lemma} \begin{proof} There are two claims here. The first is that any basic open set contained in $V_{\varphi(k)}$ also depends on the parameters $k$. The second asserts the $\bTT$-provability of the sequent $\psi(x,y)\vdash\varphi(x)$. For the first claim, suppose that $U=V_{\psi(l)}$ and that there is some $k_0\in k$ with $k_0\not\in l$. Fix a model $\mu'\in V_{\psi(l)}$ and consider the modified labelling $\mu=\mu'\setminus k_0$ as in the reassignment lemma. We have not adjusted the $l$-parameters so we again have $\mu\in V_{\psi(l)}$. However $k_0$ (and hence $k$) is not even defined at $\mu$ and therefore $\mu\not\in V_{k^x}\supseteq V_{\varphi(k)}$. This is contrary to the assumption that $V_{\psi(l)}\subseteq V_{\varphi(k)}$.
As we assumed $U$ was a basic open set, it must have the form $V_{\psi(k,l)}$ for some formula $\psi$ and some additional parameters $l$. Now we need to see that $\bTT$ proves $\exists y.\psi(x,y)\vdash \varphi(x)$ (which is equivalent to the sequent above). This is an easy application of completeness for $\kappa$-small models. If the sequent were not valid then we could find a labelled model $\mu$ and a parameter $k^x$ such that $\mu\models \exists y.\psi(k,y)$ but $\mu\not\models \varphi(k)$. Let $b\in |M_{\mu}|^y$ witness the existential. By the reassignment lemma, there is another labelling $\nu$ of the same model with $\nu(k)=\mu(k)$ (so that $\nu\not\models \varphi(k)$) and $\nu(l)=b$. But then $\nu\models \psi(k,l)$ so that $V_{\psi(k,l)}\not\subseteq V_{\varphi(k)}$. This is again contrary to the assumption that $U\subseteq V_{\varphi(k)}$. \end{proof} \section{Sheaves on $\MM_0$}\label{sec_CovSpSh} In this section we will define, for every $\bTT$-formula $\varphi(x)$, a sheaf $\ext{\varphi}$ over $\MM_0$. Fix a basic sort $A\in\AA$ and a variable $x:A$. In a labelled model $\mu$ this corresponds to a labelling of the underlying set $K^x_{\mu}\to |M_{\mu}|^x$. Now we will reencode this data into the category $\Sh(\MM_0)$ of sheaves over $\MM_0$ by providing a diagram whose stalk at $\mu$ recovers this subquotient presentation up to isomorphism: $$\xymatrix{ K^x \ar@{>->}[r] \ar@{->>}[d]_-q&\Delta(\kappa)\\ \ext{A}&\\ }\ \ \raisebox{-.7cm}{$\overset{\stalk_{\mu}}{\longmapsto}$}\ \ \xymatrix{ K^x_{\mu} \ar@{>->}[r] \ar@{->>}[d]_-q&\kappa\\ |M_{\mu}|^x&.\\ }$$ Here $\Delta(\kappa)$ is the constant sheaf $\coprod_{\kappa} 1$; we usually abuse terminology and simply write $\kappa$. As a subobject of the constant sheaf, $K^x$ must be a coproduct of open sets.
At the $k$th index, we take the basic open set $V_{k^x}$: $$K^x=\coprod_{k\in\kappa} V_{k^x} \rightarrowtail \coprod_{k\in\kappa} \MM_0=\Delta(\kappa).$$ Clearly the stalk of $K^x$ at $\mu$ is isomorphic to the set of parameters $k$ such that $\mu\in V_{k^x}$; this is exactly the domain $K^x_{\mu}$. \begin{comment} Alternatively, the open set lattice $\OO(\MM_0)$ is the subobject classifier in $\Sh(\MM_0)$. We can define a map $\kappa\to\OO(\MM_0)$ by sending each $k_0\mapsto V_{k_0}$ and define $K$ as a pullback of the subobject classifier: $$\xymatrix{ K \pbcorner \ar[r]^\! \ar@{>->}[d] & 1 \ar[d]^\top\\ \kappa \ar[r] & \OO(\MM_0)\\ }$$ \end{comment} Now we define a sheaf $\ext{A}$ which encodes the underlying set $|M_{\mu}|^x$. We build it as an \'etale space, generalizing the construction of $\MM_0=\ext{\top}$. \begin{defn}[The semantic realization $\ext{A}$]\label{extA} \mbox{}\begin{itemize} \item The points of $\ext{A}$ are pairs $\big\<\mu,a\in |M_{\mu}|^x\big>$. \item Each $k\in\kappa$ determines a \emph{canonical (partial) section} over the open set $V_{k^x}$: $$\hat{k}^x:V_{k^x}\to\ext{A}$$ sending $\mu\mapsto \<\mu,\mu^x(k)\>$. This defines an open set $W_{x=k}\subseteq\ext{A}$, homeomorphic to $V_{k^x}$, and these form an open cover of $\ext{A}$. \end{itemize}\end{defn} It is obvious from the definition that the fiber of $\ext{A}$ over $\mu$ is isomorphic to the underlying set $|M_{\mu}|^x$. Although the sections $\hat{k}^x$ give an open cover of $\ext{A}$, it is convenient to have a richer collection of basic open sets.
For any formula $\varphi(x)$ and parameters $k^x$, the basic open set $V_{\varphi(k)}$ has a homeomorphic image in $\ext{A}$: $$W[\varphi(x)\wedge(x=k)]\cong V_{\varphi(k)}.$$ More generally, any formula $\varphi(x,y)$ together with parameters $l^y$ defines an open set $$\begin{array}{rcl} W_{\varphi(x,l)} &=&\{\ \<\mu,a\>\ |\ M_{\mu}\models\varphi(a,\mu^y(l))\ \}\\ &=&\displaystyle\bigcup_{k\in\kappa} W\big[\varphi(x,l)\wedge(x=k)\big]. \end{array}$$ The sets in this union are of the basic open form listed above, so $W_{\varphi(x,l)}$ is again open. The last equality follows from the fact that each $a\in|M_{\mu}|^{x}$ must be labelled by some parameter $k\in\kappa$. \begin{prop}\label{Asheaf} $\ext{A}$, as defined above, is a sheaf over $\MM_0$. \end{prop} \begin{proof} A convenient reformulation of the sheaf condition (cf. Joyal-Tierney, \cite{JT}) says that $\FF$ is a sheaf just in case the projection $\FF\to\MM_0$ and the fiber-diagonal ${\delta_{\FF}:\FF\to \FF\underset{\MM_0}{\times} \FF}$ are both open maps. The necessity of the first condition is obvious because any sheaf projection is a local homeomorphism; the diagonal condition characterizes discreteness in the fibers of $\FF$. It is enough to check both these conditions on basic open sets. The projection of a basic $\ext{A}$-open is given by quantifying out the $x$-variable: $$\begin{array}{rcl} \pi\big(W_{\varphi(x,l)}\big)&=&\displaystyle\pi\left(\bigcup_{k\in\kappa} W_{\varphi(x,l)\wedge(x=k)}\right)\\ &=&\displaystyle\bigcup_{k\in\kappa}V_{\varphi(k,l)}\\ &=&V_{\exists x. \varphi(x,l)}\\ \end{array}$$ Here the second equality follows from the definition of $W[\dots]$ as a section of $\pi$. For the second map, we know that boxes $W_{\varphi(x,j)}\underset{\MM_0}{\times} W_{\psi(x,l)}$ give a basis for the topology of the fiber product. As in $\ext{A}$, unions provide for a richer collection of open sets.
Suppose $\gamma(x,x',y)$ is a formula where $x$ and $x'$ are variables in the same context and $l^y$ is an additional parameter; from these we define an open set $$\begin{array}{rcl} W_{\gamma(x,x',l)}&=& \{\ \<\mu,a,a'\>\ |\ \mu\models \gamma(a,a',\mu(l))\ \}\subseteq\ext{A}\overtimes{\MM_0}\ext{A}\\ &=&\displaystyle\bigcup_{k,k'\in\kappa}\Big( W\big[\gamma(x,k',l)\wedge(x=k)\big]\overtimes{\MM_0} W\big[\gamma(k,x',l)\wedge(x'=k')\big]\Big). \end{array}$$ This is a union of boxes of the basic open form above, and equality again follows from the fact that every $a,a'\in|M_{\mu}|^x$ is labelled by some pair $k,k'\in\kappa$. Now the image of $\ext{A}$ under the fiber diagonal is the open set $W_{x=x'}$. Similarly, a basic open subset $W_{\varphi(x,l)}\subseteq\ext{A}$ maps to $W_{\varphi(x,l)\wedge(x=x')}$. These are both of the form $W_{\gamma(x,x',l)}$, so the fiber diagonal is an open map. \end{proof} Together the canonical sections $\hat{k}^x$ describe a map of sheaves $v:K^x\to\ext{A}$ $$\xymatrix{ K^x \cong \coprod_{k\in\kappa} V_{k^x}\ar@{->>}[r]^-v &\ext{A}\\ V_{k^x} \ar@{>->}[u] \ar[ur]_{\hat{k}^x} }$$ Since these sections cover $\ext{A}$, $v$ is an epimorphism of sheaves. Furthermore, $v$ sends $k\in K^x_{\mu}$ to $\hat{k}^x(\mu)=\<\mu,\mu(k)\>$. This realizes our goal for this section, and altogether we have the following theorem. \begin{thm}\label{ext_labels} For each context $x:A$ for a basic sort $A\in\AA$ there is a span in $\Sh(\MM_0)$ of the form $$\xymatrix{ K^x \ar@{>->}[r] \ar@{->>}[d]_-v&\kappa\\ \ext{A}&}$$ such that \begin{itemize} \item[(i)] $K^x$ is a subsheaf of the constant sheaf $\kappa$ and the stalk of $K^x$ at $\mu$ is isomorphic to the domain $K_{\mu}^x=\{k\in\kappa\ |\ k\rm{\ is\ defined\ for\ }x\rm{\ at\ }\mu\}$. \item[(ii)] The stalk of $\ext{A}$ at $\mu$ is isomorphic to the underlying set $|M_{\mu}|^x=A^\mu$. 
\item[(iii)] The map $v: K^x\to\ext{A}$ is an epimorphism of sheaves and the stalk of $v$ at $\mu$ is isomorphic to the subquotient map $v_{\mu}^x:K_{\mu}^x\twoheadrightarrow|M_{\mu}|^x$ (cf. definition \ref{M0points}). \end{itemize} \end{thm} \section{The generic model} In the last section we defined a family of sheaves $\ext{A}$ for the basic sorts $A\in\AA$. In this section we will show that this family carries the structure of a $\bTT$-model in the internal logic of $\Sh(\MM_0)$. Moreover, the model is ``generic'' in the sense that it satisfies all and only the sequents which are provable in $\bTT$. First we review the interpretation of $\LL$-structures and coherent theories in the internal logic of a topos $\SS$ (cf. Johnstone \cite{elephant}, D1 or Mac Lane \& Moerdijk \cite{SGL}, VI). In this and later chapters we will restrict our attention to Grothendieck toposes; we typically omit the adjective, so ``topos'' will always mean ``Grothendieck topos''. To define an $\LL$-structure $M$ in $\SS$ one must give, first of all, an ``underlying object'' $A^M$ for each basic sort $A\in\AA$. Given a compound context $A=A_1\times\ldots\times A_n$ we let $A^M$ denote the product $A_1^M\times\ldots\times A_n^M$. Given a context of variables $x:A$ we use the notation $|M|^x:= A^M$. $M$ must also interpret basic relations and functions. Each relation $R(x)$ is associated with an interpretation $R^M$ which is a subobject of $|M|^x$. Similarly, a function symbol $f:x\to y$ is interpreted as an $\SS$-arrow $f^M:|M|^x \to |M|^y$. From these we can define an interpretation $\varphi^M\leq |M|^x$ for each coherent formula $\varphi(x)$; we construct these in essentially the same way that we build up the definable sets of a classical model in $\Sets$. We use limits (intersections, pullbacks) in $\SS$ to interpret meet and substitution. Epi-mono factorization gives us existential quantification while factorization together with coproducts gives us joins.
\noindent\begin{tabular}{cccc}\label{coh_logic} \\ \textbf{Subst.} &$\xymatrix{(\varphi[f(x)/y])^M \ar[r] \pbcorner \ar@{>->}[d] & \varphi^M \ar@{>->}[d]\\ |M|^y \ar[r]_{f^M} & |M|^x\\}$ & \textbf{Meet} & $\xymatrix{(\varphi\wedge\psi)^M \pbcorner \ar@{>->}[r] \ar@{>->}[d] & \psi^M \ar@{>->}[d]\\ \varphi^M \ar@{>->}[r] & |M|^x\\}$\\\\ \textbf{Exist} & $\xymatrix{\varphi^M \pbcorner \ar@{->>}[r] \ar@{>->}[d] & (\exists y.\varphi)^M \ar@{>->}[d]\\ |M|^{\<x,y\>} \ar[r] & |M|^x\\}$ & \textbf{Join} & $\xymatrix{\varphi^M + \psi^M \ar@{>->}[d] \ar@{->>}[r] & (\varphi\vee\psi)^M \ar@{>->}[d]\\ |M|^x+|M|^x \ar[r] & |M|^x}$\\\\ \end{tabular} Remember that an axiom in categorical logic has the form of a sequent $\varphi(x)\vdash_{x:A} \psi(x)$. We say that $M$ \emph{satisfies} this sequent if $\varphi^M\leq\psi^M$ in the subobject lattice $\Sub_{\SS}(|M|^x)$. As usual, $M$ \emph{satisfies $\bTT$} or is a \emph{$\bTT$-model} if it satisfies all the sequents in $\bTT$. We denote the class of $\bTT$-models in $\SS$ by $\bTT\Mod(\SS)$. In the last section we defined a sheaf $\ext{A}$ for each basic sort $A\in\AA$; these are the underlying objects referred to above. We have already had occasion to consider the fiber product $\ext{A^2}=\ext{A}\overtimes{\MM_0}\ext{A}$ in the proof of proposition \ref{Asheaf}. There we saw that each parameterized formula $\gamma(x,x',l)$ involving two $A$-variables and an additional parameter $l^y$ determines a basic open set $W_{\gamma(x,x',l)}\subseteq\ext{A^2}$.\label{defin_products} We can generalize this to any context $x=\<x_i:A_i\>_{i\leq n}$ by setting $\ext{A^x}=\ext{A_1}\overtimes{\MM_0}\ldots\overtimes{\MM_0}\ext{A_n}$. Given a formula $\varphi(x,y)$ and parameters $l^y$, the fiber product contains an open set $$W_{\varphi(x,l)}=\{ \<\mu, a\>\ |\ M_{\mu}\models\varphi(a,\mu^y(l))\}.$$ Just as before, we show this set is open by representing it as a union of boxes indexed over the set of parameter sequences $k^x$.
Given a basic relation $R(x)$ we extend the $\ext{-}$ notation by setting $$\ext{R}=W_{R(x)}=\{\<\mu,a\>\in\ext{A^x}\ |\ M_{\mu}\models R(a)\}.$$ This provides an interpretation in $\Sh(\MM_0)$ for each basic relation. Similarly, each function symbol $f:x\to y$ induces a map of sheaves $\ext{f}:\ext{A^x}\to\ext{B^y}$. This is defined fiberwise, sending each $a\in |M_{\mu}|^x$ to $f^\mu(a)\in |M_{\mu}|^y$. This is continuous because the inverse image of a basic open set corresponds to substitution $$\ext{f}^{-1}(W_{\varphi(y,k)})= W_{\varphi[f(x)/y](x,k)}\subseteq \ext{A^x}.$$ This specification defines an $\LL$-structure $M_0$ in $\Sh(\MM_0)$. Now suppose that $\varphi(x)\vdash_{x:A} \psi(x)$ is an axiom of $\bTT$. Then for each labelled model there is an inclusion of definable sets $\varphi^\mu\subseteq\psi^\mu\subseteq A^\mu$ and consequently $\ext{\varphi}\subseteq\ext{\psi}$. Hence our $\LL$-structure satisfies the sequent, and $M_0$ is a model of $\bTT$ in $\Sh(\MM_0)$. In fact, every coherent formula $\varphi(x)$ determines a \emph{definable sheaf} $\ext{\varphi}\subseteq\ext{A^x}$. On one hand, $\ext{\varphi}$ is the interpretation of $\varphi$ in the sheaf model $M_0$; as such, it can be constructed inductively from the interpretation of basic relations and functions. Alternatively, we may define $\ext{\varphi}$ semantically by setting $$\ext{\varphi}=W_{\varphi(x)} = \{\<\mu,a\>\in\ext{A^x}\ |\ M_{\mu}\models \varphi(a)\}.$$ \label{def_sheaf} \begin{prop} $M_0$ is a generic model for $\bTT$ in the sense that a coherent sequent $\varphi(x)\vdash_{x:A} \psi(x)$ is provable in $\bTT$ if and only if it is satisfied in $M_0$: $$\bTT\tri\varphi(x)\vdash_{x:A}\psi(x) \Iff M_0\models \varphi(x)\vdash_{x:A}\psi(x).$$ \end{prop} \begin{proof} The left to right implication is an immediate consequence of soundness.
If $\bTT$ proves the sequent then for every $\bTT$-model $M$ (and hence every labelled model $\mu$), $\varphi^M\subseteq \psi^M$. But then $\ext{\varphi}\subseteq \ext{\psi}$ so that $M_0$ satisfies the sequent as well. The converse follows from the fact that $\bTT$ is complete for $\kappa$-small models (see \pageref{M0points}). If $M_0$ satisfies the sequent $\varphi(x)\vdash_{x:A}\psi(x)$ then $\ext{\varphi}\subseteq\ext{\psi}$. Then $\varphi^\mu\subseteq \psi^\mu$ for every labelled model $\mu$, and every $\kappa$-small model is the underlying model of some labelled model. Thus satisfaction in $M_0$ implies satisfaction in every $\kappa$-small model, and completeness ensures provability in $\bTT$. \end{proof} \section{The spectral groupoid $\MM=\Spec(\bTT)$}\label{sec_SpGpd} In the last section we saw that each formula $\varphi(x)$ defines a subsheaf $\ext{\varphi}\subseteq \ext{A^x}$. Now, following Butz \& Moerdijk \cite{butz_thesis} \cite{BM_article}, we will characterize those subsheaves which are definable in this sense. To do so we introduce a space of $\bTT$-model isomorphisms together with continuous domain, codomain, composition and inverse operations. This topological groupoid $\MM=\Spec(\bTT)$ is the (groupoid) spectrum of $\bTT$. This groupoid acts naturally on the definable sheaves and we will show in the next section that the existence of such an action, together with a compactness condition, characterizes definability. A groupoid in $\Top$ is a diagram of topological spaces and continuous maps like the one below: $$\xymatrix{ \MM:&\MM_1\underset{\MM_0}{\times}\MM_1 \ar@<-1.5ex>[rr]_-\circ \ar@<.5ex>[rr] \ar@<1.5ex>[rr]^-{p_0,\ p_1}&& \MM_1 \ar@(ul,ur)^{\rm{inv}} \ar@<.5ex>[rr] \ar@<1.5ex>[rr]^-{\rm{dom,\ cod}} &&\MM_0 \ar@<1.5ex>[ll]^{\rm{id}} }$$ $\MM_0$ and $\MM_1$ are called the object and arrow spaces of $\MM$, respectively. These spaces and continuous maps are required to satisfy the same commutative diagrams as a groupoid in $\Sets$.
See \cite{SGL}, section V.7 for a discussion of internal categories and equivariance. \begin{defn}[The spectral groupoid $\MM=\Spec(\bTT)$]\label{M1points} \mbox{} \begin{itemize} \item An isomorphism of labelled models $\alpha\in\Hom_{\MM}(\mu,\nu)$ is simply an isomorphism of underlying $\bTT$-models $\alpha:M_{\mu}\stackrel{\sim}{\longrightarrow} M_{\nu}$. We do not require that these respect the labellings on $\mu$ and $\nu$. Domain, codomain, composition, inverse and identity are computed as in $\bTT\Mod$. \item For each $x:A$, $\alpha$ defines a component $\alpha^x:|M_{\mu}|^x\to|M_{\nu}|^x$. Given an element $a\in|M_{\mu}|^x$ we often omit the superscript and simply write ${\alpha(a)\in|M_{\nu}|^x}$. \item Each basic open set $V_{\varphi(k)}\subseteq \MM_0$ determines two basic open sets in $\MM_1$, the inverse images under $\underline{\dom}$ and $\underline{\cod}$: $$V_{\varphi(k)^d} = \{\alpha:\mu\stackrel{\sim}{\longrightarrow}\nu\ |\ \mu\models \varphi(k)\}$$ $$V_{\varphi(k)^c} = \{\alpha:\mu\stackrel{\sim}{\longrightarrow}\nu\ |\ \nu\models \varphi(k)\}.$$ We refer to these basic opens as domain and codomain conditions in $\MM_1$. \item For any context $x:A$ and any two parameter sequences $k^x$ and $l^x$ in the same arity, there is a basic open set $$V_{k\overset{x}{\mapsto} l}=\{\alpha:\mu\stackrel{\sim}{\longrightarrow}\nu\ |\ \alpha(\mu^x(k))=\nu^x(l)\}.$$ \end{itemize}\end{defn} \begin{prop} $\MM$, as defined above, is a topological groupoid. \end{prop} \begin{proof} As the compositional structure on $\MM$ is inherited from $\bTT\Mod$, the internal category conditions on $\MM$ are immediate. A bit less obvious is that all of the structure maps are continuous. For the domain and codomain maps this is built into the definition; it follows that the fiber projections $p_0$ and $p_1$ are continuous as well. After these, inversion is the easiest, as it simply swaps the basic open sets in pairs.
$$\begin{array}{rcl} \alpha\in V_{\varphi(k)^d}&\Iff& \alpha^{-1}\in V_{\varphi(k)^c}\\ \alpha\in V_{k\overset{x}{\mapsto} l}&\Iff&\alpha^{-1}\in V_{l \overset{x}{\mapsto} k}.\\ \end{array}$$ For $\underline\id$ we have $$\begin{array}{ccccc} 1_{\mu}\in V_{\varphi(k)^d}&\Iff &1_{\mu}\in V_{\varphi(k)^c}&\Iff& \mu\in V_{\varphi(k)}\\ 1_{\mu}\in V_{k \overset{x}{\mapsto} l}&\Iff& \mu^x(k)=\mu^x(l)&\Iff& \mu\in V_{k\underset{x}{=}l}.\\ \end{array}$$ These latter sets are open in $\MM_0$, so $\underline\id$ is continuous as well. Lastly, composition. Note that if $\alpha$ satisfies a codomain condition $V_{\varphi(k)^c}$ then so does any composite $\alpha\circ \beta$. This means that the inverse image of $V_{\varphi(k)^c}$ along $\circ$ is just a fiber product with $\MM_1$ $$\xymatrix{ V_{\varphi(k)^c}\underset{\MM_0}{\times} \MM_1 \ar[r]^-{p_0} \ar@{>->}[d]& V_{\varphi(k)^c} \ar@{>->}[d]\\ \MM_1\underset{\MM_0}{\times} \MM_1 \ar[r]^-\circ& \MM_1.\\ }$$ This is clearly open, and the same reasoning applies to domain conditions. Finally, suppose that we have composable maps ${\mu\stackrel{\beta}{\longrightarrow}\lambda\stackrel{\alpha}{\longrightarrow}\nu}$ and a neighborhood $\alpha\circ\beta\in V_{k\overset{x}{\mapsto} l}$. Choose any parameter $j$ such that $\beta(\mu^x(k))=\lambda^x(j)$; this determines an open box in the fiber product: $$\<\alpha,\beta\>\in\big(V_{j\overset{x}{\mapsto} l}\big)\times_{\MM_0} \big(V_{k\overset{x}{\mapsto} j}\big)\subseteq\circ^{-1}(V_{k\overset{x}{\mapsto} l}).$$ Hence composition and the other structure maps are continuous, and $\MM$ is a groupoid in $\Top$. \end{proof} \begin{prop} Suppose that we have two isomorphisms $\alpha:\mu_0\cong\mu_1$ and $\beta:\nu_0\cong\nu_1$.
Then $\alpha$ belongs to the closure of $\beta$ just in case $\mu_i$ belongs to the closure of $\nu_i$ ($i=0,1$) and the canonical homomorphisms $h_i:M_{\mu_i}\to M_{\nu_i}$ induced by these closures form a commutative square $$\xymatrix{ M_{\nu_0} \ar[r]^\beta & M_{\nu_1}\\ M_{\mu_0} \ar[r]_{\alpha} \ar[u]^{h_0} & M_{\mu_1} \ar[u]_{h_1}\\ }$$ \end{prop} \begin{proof}\label{iso_closure} We have $\alpha\in\overline{\beta}$ just in case $\beta$ belongs to every $\MM_1$-open set which $\alpha$ belongs to. Applying this observation to the domain and codomain conditions tells us that $\nu_i$ belongs to every $\MM_0$-open set which $\mu_i$ does, and this tells us that $\mu_i\in\overline{\nu_i}$. Recall from proposition \ref{spectral_closure} that $\mu$ belongs to the closure of $\nu$ just in case $K_{\mu}\subseteq K_{\nu}$ and this inclusion induces a model homomorphism $h(\mu(k))=\nu(k)$: $$\xymatrix{ K_{\mu} \ar@{}[r]|{\textstyle\subseteq} \ar@{->>}[d] & K_{\nu} \ar@{->>}[d]\\ |M_{\mu}| \ar@{-->}[r]_{h} & |M_{\nu}|.\\ }$$ For any element $a\in|M_{\mu_0}|$ we can find a parameter $k$ such that $a=\mu_0(k)$ and a parameter $l$ such that $\mu_1(l)=\alpha(a)$. This tells us that $\alpha$ belongs to $V_{k\mapsto l}$, and therefore $\beta$ must as well. It follows that $$\begin{array}{rcl|l} \beta(h_0(a))&=&\beta(h_0(\mu_0(k)))& k\ \mathrm{labels}\ a\\ &=& \beta(\nu_0(k))& \mathrm{def'n\ of\ }h_0\\ &=& \nu_1(l) & \beta\in V_{k\mapsto l}\\ &=& h_1(\mu_1(l)) & \mathrm{def'n\ of\ }h_1\\ &=& h_1(\alpha(\mu_0(k))) & \alpha\in V_{k\mapsto l}\\ &=& h_1(\alpha(a)) & k\ \mathrm{labels\ }a\\ \end{array}$$ Therefore $\beta\circ h_0=h_1\circ\alpha$, as asserted. \end{proof} In what follows we will often need to pull back along the domain map $\MM_1\to\MM_0$ so we give this operation a special notation $\MM*(-)$. Given a sheaf $F\in\Sh(\MM_0)$, an element of $\MM* F$ consists of a map $\alpha:\mu\to\nu$ together with an element of the fiber $f\in F_{\mu}$.
Similarly, $\MM*\MM*F$ denotes the pullback of $\MM* F$ along the second projection $p_1:\MM_1\overtimes{\MM_0}\MM_1\to\MM_1$. This is the space of composable pairs ${\mu\stackrel{\beta}{\longrightarrow}\lambda\stackrel{\alpha}{\longrightarrow}\nu}$ together with an element $f\in F_{\mu}$. With this shorthand, an \emph{equivariant sheaf} is an object $F\in\Sh(\MM_0)$ (viewed as an \'etale space) together with an action $\rho:\MM* F\to F$ commuting with the codomain: $$\xymatrix{ \MM*F \ar[d]\ar[rrr]^{\rho} &&& F \ar[d]\\ \MM*\MM_0 \ar@{=}[r]^-{\sim}&\MM_1\ar[rr]_-{\cod} && \MM_0.\\ }$$ This says that each $\alpha:\mu\to\nu$ defines an action on fibers, $\rho_{\alpha}:F_{\mu}\to F_{\nu}$. We additionally require that this action satisfies the following \emph{cocycle conditions}, ensuring that $\rho$ respects the groupoid structure in $\MM$: $$\begin{array}{ccc} \rho_{1_{\mu}}(f)=f && \rho_{\alpha\circ\beta}(f)=\rho_{\alpha}\circ\rho_{\beta}(f)\\\\ \xymatrix@C=35pt{F \ar[r]^-{\id\times 1_{F}} \ar@{=}[dr]& \MM* F \ar[d]^\rho\\& F\\} && \xymatrix@C=35pt{\MM*\MM* F \ar[r]^-{\circ\times 1_{F} } \ar[d]_{1_{\MM}\times\rho} & \MM* F \ar[d]^\rho\\ \MM* F \ar[r]_{\rho} & F. }\end{array}$$ \begin{prop} For each context $x:A$, the assignment $$\rho_{x,\alpha}=\alpha^x:|M_{\mu}|^x\to|M_{\nu}|^x$$ defines a canonical $\MM$-equivariant structure $\rho_x$ on $\ext{A}$. \end{prop} \begin{proof} The cocycle conditions for $\rho_x$ reduce to associativity and identity conditions for composition of $\bTT$-model homomorphisms. As for continuity, suppose that $\alpha:\mu\to\nu$ and $a\in|M_{\mu}|^x$. For any neighborhood $\alpha(a)\in W_{\varphi(x,l)}$ pick some $k$ such that $\alpha:k\mapsto l$. Because $\nu\models\varphi(\alpha(a),l)$ and $\alpha$ is an isomorphism, $\mu\models\varphi(a,k)$.
Thus the pair $\<\alpha,a\>$ belongs to an open neighborhood inside the inverse image $$V_{k\overset{x}{\mapsto} l}\overtimes{\MM_0} W_{\varphi(x,k)}\subseteq \rho_x^{-1}(W_{\varphi(x,l)}).$$ Therefore $\rho$ is continuous. \end{proof} \section{Stability, compactness and definability}\label{sec_StabDef} \begin{comment} In this section we will prove the following theorem, giving a semantic characterization of definability: \begin{thm}[Topological definability theorem]\label{TopDef} Fix a context $x:A$ and suppose that for each $\bTT$-model $M$ we have a subset $S_M\subseteq |M|^x$. The family $\{S_M\}$ is definable by a coherent formula just in case \begin{itemize} \item[(i)] $S$ is stable under isomorphisms. For all $\alpha:M\stackrel{\sim}{\longrightarrow} N$, $\alpha^x(S_M)=S_{N}$. \item[(ii)] For all $M$ and $a\in S_M$, there is some formula $\varphi(x)$ such that $M\models\varphi(a)$ and for all $N$, $\varphi^N\subseteq S_N$. \item[(iii)] For each $M$ and each cover $S_M=\displaystyle\bigcup_{i\in I} \varphi_i^M$ there is a finite subcover $S_M=\varphi_{i_1}^M\cup\ldots\cup\varphi_{i_n}^M$. \end{itemize} \end{thm} \end{comment} Now we are ready to characterize the definable sheaves $\ext{\varphi}$ in terms of equivariance together with a further condition: compactness. We say that an equivariant sheaf $\<E,\rho\>$ is \emph{compact} if any cover by \emph{equivariant} subsheaves has a finite subcover. Equivalently, any cover has a finite subfamily whose orbits under $\rho$ cover $E$. \begin{thm}[Stability for subobjects, cf. Awodey \& Forssell \cite{forssell_thesis,FOLD}]\label{stab_thm} A subsheaf $S\subseteq\ext{A}$ is definable if and only if it is equivariant and compact. \end{thm} \begin{prop} Each definable subsheaf $\ext{\varphi}\leq\ext{A}$ is equivariant and compact. \end{prop} \begin{proof} If $\alpha:\mu\to\nu$ is an isomorphism and $\mu\models \varphi(a)$ then clearly $\nu\models\varphi(\alpha(a))$.
This means that the restriction of $\rho_A$ factors through $\ext{\varphi}$, making it an equivariant subsheaf $$\xymatrix{ \MM*\ext{A} \ar[rr]^{\rho_A} && \ext{A} \\ \MM*\ext{\varphi} \ar@{>->}[u] \ar@{-->}[rr]_{\rho_{\varphi}} && \ext{\varphi} \ar@{>->}[u]\\ }$$ Now suppose that $\ext{\varphi}=\bigcup_{i\in I} \ext{\psi_i}$ and let us abbreviate the sequent ${\psi(x)\vdash_{x:A}\bot}$ by $\neg\psi(x)$. Suppose that $\ext{\varphi}$ were not compact. Then for any finite subset $I_0\subseteq I$, $\bigcup_{i\in I_0}\ext{\psi_i}\subsetneq \ext{\varphi}$; thus there is a labelled model $\mu$ and an element $a\in|M_{\mu}|^x$ such that $\mu$ satisfies $\{\varphi(a)\}\cup\left\{\neg \bigvee_{i\in I_0} \psi_i(a)\right\}$. Extend the language of $\bTT$ by a constant $\mathbf{c}_0$ and let $$\bTT'=\bTT\cup\{\varphi(\mathbf{c}_0)\}\cup\{\neg\psi_i(\mathbf{c}_0)\ |\ i\in I\}.$$ The pairs $\<\mu,a\>$ obtained above witness the consistency of each finite subtheory, so, by compactness, $\bTT'$ is also consistent. Let $\<\mu_*,a_*\>$ be a labelled model of $\bTT'$. Then $a_*\in\varphi^{\mu_*}$ but $a_*\not\in \psi_i^{\mu_*}$ for each $i\in I$. This contradicts the assumption that $\ext{\varphi}=\displaystyle\bigcup_{i\in I} \ext{\psi_i}$. \end{proof} This shows that every formula $\varphi(x)$ defines a compact equivariant subsheaf of $\ext{A}$. Now we argue the converse. First we show that every equivariant subsheaf is a union of definable pieces and from this, compactness easily implies definability. Recall from definition \ref{extA} that each parameter $k^x$ determines a canonical open section $\ext{k^x}:V_{k^x}\to\ext{A}$ sending $\mu\mapsto \<\mu,\mu^x(k)\>$.
When $\mu\models\varphi(k)$ this factors through $\ext{\varphi}$, giving a subsection $$\xymatrix{ \ext{\varphi} \ar@{}[r]|\subseteq & \ext{A}\\ V_{\varphi(k)} \ar@{-->}[u]^{\ext{k\models\varphi}} \ar@{}[r]|\subseteq & V_{k^x} \ar[u]_{\ext{k}}\\ }$$ \begin{lemma}\label{equiv_ext} Given any equivariant sheaf $\<E,\rho\>$ and a partial section $e:V_{\varphi(k)}\to E$ there is a unique equivariant extension of $e$ along the canonical section $\ext{k\models\varphi}$: $$\xymatrix{ \ext{\varphi} \ar[rr]^{\tilde{e}} && E\\ &V_{\varphi(k)} \ar[ul]^{\ext{k\models\varphi}} \ar[ur]_e&.\\ }$$ \end{lemma} \begin{proof} This is an easy application of the reassignment lemma \ref{reassignment}. For any point $\<\mu,a\>\in\ext{\varphi}$ we find another model $\nu$ and an isomorphism $\alpha:\mu\cong\nu$ so that $\nu(k)=\alpha(a)$. Then equivariance forces us to set $$\begin{array}{rcl} \tilde{e}\<\mu,a\>&=& \rho_{\alpha^{-1}}\circ\tilde{e}\circ\rho_{\alpha}\<\mu,a\>\\ &=&\rho_{\alpha^{-1}}\circ\tilde{e}\ \<\nu,\alpha(a)\>\\ &=&\rho_{\alpha^{-1}}\circ\tilde{e}\circ\ext{k\models\varphi}(\nu)\\ &=&\rho_{\alpha^{-1}}\circ e(\nu). \end{array}$$ This establishes uniqueness; one checks routinely that this formula indeed defines a continuous equivariant extension. \end{proof} \begin{lemma} If $S\subseteq\ext{A}$ is an equivariant subsheaf, then $S$ is a union of definable subsheaves. \end{lemma} \begin{proof} First we show that when a basic open set $W_{\varphi(x,k)}$ is contained in $S$ then so is the definable sheaf $\ext{\exists y.\varphi}$. Fix a model $\mu$ and a pair $\<a,b\>\in|M_{\mu}|^{\<x,y\>}$ such that $\mu\models\varphi(a,b)$. We need to see that $\<\mu,a\>$ belongs to $S$. By the reassignment lemma we can find an isomorphism $\alpha:\mu\stackrel{\sim}{\longrightarrow}\nu$ such that $\alpha(b)=\nu(k)$. Then $\nu\models\varphi(\alpha(a),k)$, so that $\<\nu,\alpha(a)\>\in W_{\varphi(x,k)}\subseteq S$. Because $S$ is equivariant along $\alpha$, this implies that $\<\mu,a\>\in S$ as well. For each $\<\mu,a\>\in S$ choose a basic open neighborhood $W_{\varphi_a(x,k_a)}\subseteq S$.
By the foregoing, $\ext{\exists y_a.\varphi_a}$ is contained in $S$ as well, so we have $$S=\displaystyle \bigcup_{\<\mu,a\>\in S} \ext{\exists y_a.\varphi_a}.$$ \end{proof} \begin{thm}\label{coh_def} An equivariant subsheaf $S\subseteq \ext{A}$ is definable if and only if it is compact. \end{thm} \begin{proof} We have just seen that $S=\bigcup_i\ext{\varphi_i}$ for some set of formulas $\{\varphi_i(x)\}_{i\in I}$. If $S$ is compact, then we can reduce this to a finite subcover $$S=\ext{\varphi_{i_1}}\cup\ldots\cup\ext{\varphi_{i_n}},$$ in which case $S$ is definable by the finite disjunction: $$S=\bigext{\bigvee_{j=1,\ldots,n} \varphi_{i_j}}.$$ Conversely, suppose that $S=\ext{\varphi}$ is definable and that $S=\bigcup_{i\in I} T_i$ for some equivariant subsheaves $T_i$. By the previous lemma, we may express each of these as a union of definable pieces: $T_i=\bigcup_{j\in J_i} \ext{\psi_{ij}}$. Then $S=\bigcup_{i,j}\ext{\psi_{ij}}$ and, by completeness, it follows that $$\bTT\tri \{\psi_{ij}(x)\}_{i\in I, j\in J_i}\vdash_{x:A} \varphi(x).$$ Logical compactness ensures that only finitely many of these formulas are required to prove $\varphi$. Consequently, finitely many of the definable sheaves $\ext{\psi_{ij}}$ cover $S$. These are contained in finitely many of the subsheaves $T_i$, which must also cover $S$, so $S$ is compact for equivariant covers. \end{proof} \begin{cor}[cf. Butz \& Moerdijk \cite{butz_thesis} \cite{BM_article}]\label{ext_full} Any equivariant map between definable sheaves $s:\ext{A}\to\ext{B}$ is definable by a formula $\sigma(x,y)$, in the sense that for any $\<\mu,a\>\in\ext{A}$, $s(a)=b$ iff $\mu\models\sigma(a,b)$. \end{cor} \begin{proof} Because $s$ is equivariant, the graph $\Gamma_s=\{\<\mu,a,b\>\ |\ s(a)=b\}$ defines an equivariant subsheaf of $\ext{A}\overtimes{\MM_0}\ext{B}$. Because it is a graph, this subsheaf is isomorphic to $\ext{A}$ which is compact.
It then follows that the graph is definable: $\Gamma_s=\ext{\sigma}$, so $$s(a)=b\Iff \<\mu,a,b\>\in\ext{\sigma}\Iff \mu\models\sigma(a,b).$$ \end{proof} In the next chapter we will define the syntactic pretopos $\EE_{\bTT}$ associated with a coherent theory $\bTT$. Roughly speaking, the objects of $\EE_{\bTT}$ are formulas in context (supplemented with formal coproducts and quotients). The arrows are provably functional relations. A $\bTT$-model is equivalent to a pretopos functor $\EE_{\bTT}\to\Sets$, taking each formula $\varphi$ to the definable set $\varphi^M$. We close this section with a few topos-theoretic results connecting $\EE_{\bTT}$ and $\MM_{\bTT}$. For any pretopos $\EE$, the collection of finite, jointly epimorphic families defines a Grothendieck topology called the \emph{coherent topology} $\JJ_c$. Whenever we talk about sheaves on a pretopos we will mean the coherent sheaves, so we write $\Sh(\EE)$ rather than $\Sh(\EE,\JJ_c)$. It is well-known that $\Sh(\EE_{\bTT})$ is the classifying topos for $\bTT$, and for any other topos $\SS$ a geometric morphism $\epsilon:\SS\to\Sh(\EE_{\bTT})$ is essentially determined by a pretopos functor $\epsilon_0:\EE_{\bTT}\to\SS$. From this, one defines the inverse image $\epsilon^*:\Sh(\EE_{\bTT})\to\SS$ by left Kan extension; each $\EE_{\bTT}$-sheaf is a colimit of representables $F\cong\colim_j yA_j$ and $\epsilon^*(F)\cong\colim_j \epsilon_0(A_j)$. \begin{thm}[Butz-Moerdijk \cite{butz_thesis} \cite{BM_article}]\label{eq_top_equiv} $\EqSh(\MM_{\bTT})$ is the classifying topos for $\bTT$-models. Specifically, the classifying geometric morphism of the generic $\bTT$-model in $\EqSh(\MM_{\bTT})$ is an equivalence of categories. \end{thm} \begin{proof} The following argument comes from Awodey \& Forssell, \cite{forssell_thesis, FOLD}. The definable sheaf construction induces a pretopos functor $\ext{-}:\EE_{\bTT}\to\EqSh(\MM)$.
Finite limits are preserved at the level of fibers because these are definable sets (where limits are preserved) and so $\big|\ext{\lim A_i}\big|\cong\big|\lim \ext{A_i}\big|$. Then observe (cf. proposition \ref{Asheaf}) that every open set in $\ext{\lim A_i}$ is a union of boxes from the factors. Similarly, coproducts and quotients are preserved on the stalks and, since colimits in equivariant sheaves are computed stalk-wise, they are also preserved at the level of sheaves. By the Grothendieck comparison lemma, the induced geometric morphism $\EqSh(\MM)\to\Sh(\EE_{\bTT})$ is an equivalence just in case $\ext{-}$ is full, faithful and generating, in the sense that for any equivariant sheaves and distinct maps $h\not=h':F\rightrightarrows G$ there is a definable sheaf $\ext{\varphi}$ and an equivariant map $e:\ext{\varphi}\to F$ such that $h\circ e\not=h'\circ e$. Corollary \ref{ext_full} shows that $\ext{-}$ is full. Completeness ensures that it is also faithful. Suppose that $\sigma\not=\tau:A\rightrightarrows B$. Then there is a model $M$ (which we may assume is $\kappa$-small) and an element $a\in A^M$ such that $\sigma^M(a)\not=\tau^M(a)$. Choose any labelled model $\mu$ with $M_{\mu}=M$. Then $\ext{\sigma}(\<\mu,a\>)=\sigma^\mu(a)\not=\tau^\mu(a)=\ext{\tau}(\<\mu,a\>)$, so $\ext{\sigma}\not=\ext{\tau}$. Now suppose that $\<F,\rho\>$ is any equivariant sheaf on $\MM$. For any basic open set there is a canonical section $\hat{k}:V_{\varphi(k)}\to\ext{\varphi}$, and every partial section $s:V_{\varphi(k)}\to F$ has a unique equivariant extension to $\ext{\varphi}$ such that $\hat{s}\circ\hat{k}=s$ (lemma \ref{equiv_ext}). Any distinct pair $h\not=h'$ can be distinguished by some partial section $s$ (as these cover $F$), so these are also distinguished by $\hat{s}$. Therefore $\ext{-}$ generates $\EqSh(\MM)$. \end{proof} \begin{prop}\label{essential_gm} The geometric morphism $\epsilon:\xymatrix{\Sh(\MM_0)\to\EqSh(\MM)\simeq\Sh(\EE)}$ is both surjective and essential (and therefore open).
\end{prop} \begin{proof} The inverse image of $\epsilon$ is given by the forgetful functor $\epsilon^*:\Sh(\EE)\simeq\EqSh(\MM)\to\Sh(\MM_0)$. We extend the semantic bracket notation by writing $\epsilon^*E=\ext{E}$ for any $E\in\Sh(\EE)$. By definition, $\epsilon$ is surjective just in case $\epsilon^*$ is faithful. Since an equivariant map $E\to E'$ is just a sheaf map which preserves the equivariant action, this is certainly true. According to the adjunction, the direct image $\epsilon_*$ is canonically determined by maps out of the definable sheaves. Given a sheaf $F\in\Sh(\MM_0)$, $$\begin{array}{rcl} (\epsilon_*F)(A)&\cong & \Hom_{\Sh(\EE)}(yA,\epsilon_*F)\\ &\cong& \Hom_{\Sh(\MM_0)}(\ext{A},F).\\ \end{array}$$ The essential left adjoint $\epsilon_!\dashv\epsilon^*$ is determined by colimits (which it must preserve) together with the requirement $\epsilon_! V_{\varphi(k)}=y\varphi$. Since any $F\in\Sh(\MM_0)$ is a colimit of these basic opens, $F\cong\colim_i V_{\varphi_i(k_i)}$, we are forced to define $\epsilon_!$ by $$\epsilon_!F\cong\colim_i y\varphi_i.$$ The adjunction then follows from the universal property of the canonical sections (referenced in the previous theorem). These induce a sequence of canonical isomorphisms $$\begin{array}{rcl} \Hom_{\Sh(\MM_0)}(F,\ext{E})&\cong& \Hom_{\Sh(\MM_0)}(\colim_i V_{\varphi_i(k_i)},\ext{E})\\ &\cong & \lim_i \Hom_{\Sh(\MM_0)}(V_{\varphi_i(k_i)},\ext{E})\\ &\cong & \lim_i \Hom_{\EqSh(\MM)}(\ext{\varphi_i},\ext{E})\\ &\cong & \lim_i \Hom_{\Sh(\EE)}(y\varphi_i,E)\\ &\cong & \Hom_{\Sh(\EE)}(\colim_i y\varphi_i,E)\\ &\cong & \Hom_{\Sh(\EE)}(\epsilon_!F,E). \end{array}$$ \end{proof} \section{Classical first-order logic}\label{FOL} The foregoing sections have been concerned with the coherent fragment of (intuitionistic) first-order logic. In this section we discuss the relevant generalization to full first-order (classical) logic.
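Before turning to the construction, we record the standard classical equivalences (a reminder, not specific to the present development) showing that the remaining connectives and quantifiers are all expressible over the basis $\{\top,\wedge,\exists,\neg\}$:

```latex
$$\begin{array}{rcl}
\bot &\equiv& \neg\top\\
\varphi\vee\psi &\equiv& \neg(\neg\varphi\wedge\neg\psi)\\
\varphi\rightarrow\psi &\equiv& \neg(\varphi\wedge\neg\psi)\\
\forall y.\,\varphi &\equiv& \neg\exists y.\,\neg\varphi.\\
\end{array}$$
```

These equivalences hold classically but not intuitionistically, which is why the present section is restricted to classical logic.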
In order to extend the coherent definability theorem we will replace the given first-order theory over $\LL$ by a coherent theory $\bTT^*$, written in an extended language $\LL^*$, such that $\MM(\LL)^{fo}\cong\MM(\bTT^*)^{coh}$. Supplementing coherent logic by negation yields a complete set of connectives, so we may assume that all first-order formulas are written using $\{\top,\wedge,\exists,\neg\}$. Apart from this, the definition of the first-order spectrum is completely analogous to that of the coherent spectrum. \begin{defn}[First-order spectra] Fix a language $\LL$ and a regular cardinal $\kappa\geq |\LL|+\aleph_0$. The first-order spectrum of the language $\MM^{fo}=\MM(\LL)^{fo}$ is defined by \begin{itemize} \item The underlying sets of $\MM^{fo}$ are the same as those of $\MM^{coh}$ (cf. definitions \ref{M0points} \& \ref{M1points}): $$|\MM_0^{fo}|=|\MM_0^{coh}| \hspace{1cm}|\MM_1^{fo}|=|\MM_1^{coh}|$$ \item The topology of $\MM^{fo}$ is defined in the same fashion as that of $\MM^{coh}$ (cf. definitions \ref{M0opens} \& \ref{M1points}), except that the basic open sets $V_{\varphi(k)}$ range over all \emph{first-order} formulas $\varphi(x)$. \item The spectrum of a first-order theory $\bTT$ is the full subgroupoid $\MM(\bTT)^{fo}\subseteq\MM(\LL)^{fo}$ consisting of labelled models $\mu$ which satisfy the axioms of $\bTT$. \end{itemize}\end{defn} The first-order and coherent spectra are nearly identical; they share exactly the same groupoid structure of labelled models and isomorphisms. However, the first-order topology is finer and, in particular, more separated. Recall that the Stone space for a Boolean propositional theory has a basis of clopen sets. Points of the Stone space are valuations of the algebra and these are totally disconnected in the topology. Here we have the related fact: \begin{lemma} The connected components of $\MM_0(\LL)^{fo}$ correspond to complete theories in $\LL$.
\end{lemma} \begin{proof} For any collection of sentences $\Delta$ we let $V^\Delta=\bigcap_{\varphi\in\Delta} V_{\varphi}$. Every model $\mu$ defines a complete theory $\Delta_{\mu}=\{\varphi\ |\ M_{\mu}\models\varphi\}$. If $\Delta\not=\Gamma$ are distinct complete theories then there is some formula with $\varphi\in\Delta$ and $\neg\varphi\in\Gamma$, so $V^\Delta\cap V^\Gamma\subseteq V_\varphi\cap V_{\neg\varphi}=\emptyset$. Thus we have a partition $$\MM_0(\LL)^{fo}=\coprod_{\Delta\rm{\ compl.}} V^\Delta.$$ It remains to see that each subset $V^\Delta$ is itself connected. This follows from the fact that complete theories satisfy the joint embedding property: if $M_1,M_2\models \Delta$ there is another $\Delta$-model $N$ and a pair of elementary embeddings $M_1\to N$ and $M_2\to N$. This is easy to see using the method of diagrams introduced in the next chapter (cf. proposition \ref{diag_models}). By L\"owenheim--Skolem, we may assume $|N|=|M_1|+|M_2|$. Now suppose that $\mu_1$ and $\mu_2$ are labelled models of $\Delta$. The labellings on $\mu_1$ and $\mu_2$ are infinite so we may restrict to sublabellings $\mu_1'$ and $\mu_2'$ (still infinite) which use disjoint sets of labels. Thus we have $\mu_1'\in\overline{\mu_1}$ and $\mu_2'\in\overline{\mu_2}$. By the previous observation regarding joint embedding, together with the disjointness of labels, we can find another labelled model $\nu$ with $\mu_1',\mu_2'\in\overline{\nu}$. Thus we have a chain $$\overline{\mu_1} \ni \mu_1' \in \overline{\nu} \ni \mu_2' \in \overline{\mu_2}.$$ Now suppose that $V^\Delta$ has a clopen partition $V^\Delta= U_1+U_2$. If $\mu\in U_1$ then we also have the closure $\overline{\mu}\subseteq U_1$. Given the chain above, this implies that $\mu_1\in U_1$ if and only if $\mu_2\in U_1$. Since $\mu_1$ and $\mu_2$ were arbitrary, one of the parts must be empty, say $U_2=\emptyset$, and $V^\Delta$ is connected.
\end{proof} In order to apply the coherent definability theorem \ref{coh_def} to the first-order case, we give a translation from classical to coherent logic. This process is called Morleyization and a discussion can be found in \cite{elephant}, D1.5.13. \begin{lemma}\label{bool_comp} For each first-order theory $\bTT$ in a language $\LL$ there is a coherent theory $\bTT^*$, written in an extended language $\LL^*$, such that $\MM(\bTT)^{fo}\cong\MM(\bTT^*)^{coh}$. \end{lemma} \begin{proof} First consider the case of first-order $\LL$-structures (i.e., $\bTT$ is the empty theory over $\LL$). We obtain the extended language as a union $\LL^*=\bigcup \LL_n$ where $\LL_0=\LL$. Let $\rm{Coh}(\LL_n)$ denote the set of coherent formulas in $\LL_n$ and define $\LL_{n+1}=\LL_n\cup\{N_\varphi(x)\ |\ \varphi(x)\in\rm{Coh}(\LL_n)\}$. Similarly, $\bTT^*=\bigcup \bTT_n$ where $\bTT_{n+1}$ is $\bTT_n$ together with the $\LL_{n+1}$-coherent axioms $$\begin{array}{c} \vdash \varphi(x)\vee N_\varphi(x)\\ \varphi(x)\wedge N_\varphi(x)\vdash \bot.\\ \end{array}$$ Of course, many of these new symbols are redundant; for example, one of the de Morgan laws forces an equivalence $N_{\varphi\wedge\psi}\equiv N_{\varphi}\vee N_{\psi}$. However, there is little benefit in a more parsimonious approach. Suppose that $M$ is an $\LL$-structure. Given the axioms in $\bTT_1$, each $\LL_1$-symbol $N_\varphi$ has a unique interpretation as the complement $N_\varphi^M=|M|^x\setminus\varphi^M$. As usual, these interpretations for $\LL_1$-symbols extend uniquely to an interpretation for any $\LL_1$-formula. Then each $\LL_2$-symbol has a unique interpretation satisfying $\bTT_2$ (as the complement of an $\LL_1$-formula), and so on. This shows that each $\LL$-structure $M$ has a unique extension to a model $M^*\models\bTT^*$.
As $\LL$ and $\LL^*$ have the same basic sorts we can also lift a labelling of $M$ to a labelling of $M^*$; this gives a map $\mu\mapsto \mu^*$ inducing a bijection $$j:|\MM_0(\LL)^{fo}|\cong|\MM_0(\LL)^{coh}|\cong|\MM_0(\bTT^*)^{coh}|.$$ Every classical formula $\varphi(x)$ over $\LL$ is $\bTT^*$-equivalent to a coherent formula over $\LL^*$. This is easily proved by induction. If $\varphi(x)$, $\psi(x)$ and $\gamma(x,y)$ are all coherent over $\LL^*$ then they are already coherent over some $\LL_n$. But then $(\varphi\wedge\psi)(x)$ and $\exists y.\gamma(x,y)$ are also coherent over $\LL_n$ and $\neg\varphi(x)\equiv N_{\varphi}(x)$ is coherent over $\LL_{n+1}$. From this it is easy to see that the bijection $j$ is actually a homeomorphism: $$\mu\in V_{\neg\varphi(k)}\subseteq\MM_0(\LL)^{fo}\Iff \mu^*\in V_{N_\varphi(k)}\subseteq\MM_0(\bTT^*)^{coh}.$$ Similarly, an isomorphism $\alpha:M\cong N$ lifts uniquely to $\alpha^*:M^*\cong N^*$ and this induces a homeomorphism at the level of $\MM_1$. This establishes the claimed isomorphism of spectra $\MM(\LL)^{fo}\cong\MM\left(\bTT^*\right)^{coh}$. Now suppose that $\bTT$ is not empty. Each formula $\varphi\in\bTT$ corresponds to a coherent formula $\varphi^*$ over $\LL^*$, and we simply add these into $\bTT^*$. Then $\mu\models\varphi$ just in case $\mu^*\models\varphi^*$ and so $\MM(\bTT)^{fo}\cong\MM(\bTT^*)^{coh}$. \end{proof} As in the coherent case, one can proceed to define a generic first-order model in the equivariant sheaves over $\MM^{fo}$. For each context $x:A$ there is a sheaf $\ext{A}^{fo}$ and for each formula $\varphi(x)$ there is a definable subsheaf $\ext{\varphi}^{fo}\subseteq \ext{A}^{fo}$. Moreover, the homeomorphism $j:\MM(\bTT)^{fo}\stackrel{\sim}{\longrightarrow} \MM(\bTT^*)^{coh}$ induces corresponding maps $j_A:\ext{A}^{fo}\stackrel{\sim}{\longrightarrow}\ext{A}^{coh}$ between these.
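To illustrate the translation with a small worked example of our own (take $\LL$ to contain a single binary relation $R$): the classical formula $\forall y.R(x,y)$ becomes an atomic coherent formula at level $\LL_2$, since

```latex
$$\forall y.\,R(x,y)\ \equiv\ \neg\exists y.\,\neg R(x,y)\ \equiv\ N_{\exists y.N_R}(x),$$
$$\begin{array}{c}
\vdash R(x,y)\vee N_R(x,y) \hspace{1.5cm} R(x,y)\wedge N_R(x,y)\vdash \bot\\
\vdash \big(\exists y.\,N_R(x,y)\big)\vee N_{\exists y.N_R}(x) \hspace{1.5cm} \big(\exists y.\,N_R(x,y)\big)\wedge N_{\exists y.N_R}(x)\vdash \bot.\\
\end{array}$$
```

Here $N_R\in\LL_1$ and $N_{\exists y.N_R}\in\LL_2$, and the displayed $\bTT^*$-axioms force these symbols to be interpreted as the complements of $R$ and of $\exists y.N_R$ respectively, so the two $\bTT^*$-equivalences hold in every model $M^*$.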
\begin{thm}[First-order definability] Suppose that $x:A$ is a context and for every first-order labelled model $\mu$ we have a subset $S_{\mu}\subseteq|M_{\mu}|^x$. These subsets are first-order definable just in case the union $\bigcup_{\mu} S_{\mu}$ defines a compact equivariant subsheaf of $\ext{A}^{fo}$. \end{thm} \begin{proof} Write $S=\bigcup_{\mu} S_{\mu}$. Clearly the subsets $S_{\mu}$ are first-order definable just in case the corresponding subsets $S_{\mu}^*=j(S_{\mu})$ are coherently definable from $\bTT^*$. By the coherent definability theorem, this is true just in case $j_A(S)$ is a compact equivariant subsheaf of $\ext{A}^{coh}$. A homeomorphism preserves open sets, so $S$ is a subsheaf just in case $j_A(S)$ is. We have already observed that a first-order isomorphism $\mu\stackrel{\sim}{\longrightarrow}\nu$ is the same as a coherent isomorphism $\mu^*\stackrel{\sim}{\longrightarrow}\nu^*$ and these act on $\ext{A}^{fo}$ and $\ext{A}^{coh}$ in the same way. Thus $S$ is equivariant just in case $j_A(S)$ is. Finally, as $j_A$ preserves covers and equivariant subsheaves, it also preserves (equivariant) compactness. Thus \begin{tabular}[t]{rcl} $S$ is definable &$\Iff$& $j_A(S)$ is definable\\ &$\Iff$& $j_A(S)$ is a compact equivariant subsheaf\\ &$\Iff$& $S$ is a compact equivariant subsheaf.\\ \end{tabular} \end{proof} This provides a full solution to the problem of ``logicality'', the question of when a family of subsets $\{S_M\subset|M|\}_{M\models \bTT}$ is definable in first-order logic. This question goes back to Tarski, who proved (with Lindenbaum) that definable sets must be invariant under isomorphism \cite{LT1936}. However, this is clearly not a sufficient condition for (first-order) definability because, for example, infinitary quantifiers and connectives are also isomorphism stable. This is a question which he returned to throughout his career \cite{tarski1986}.
Most attempts to answer this question have centered around stability under some (typically) larger class of morphisms. For example, McGee has shown that the sets which are stable under isomorphisms are exactly those which are definable in $\LL_{\infty\infty}$ (i.e., definable by formulas with any (infinite) number of variables and arbitrarily large conjunctions) \cite{mcgee}. Bonnay showed that a family of sets is stable under arbitrary homomorphisms if and only if it is definable in $\LL_{\infty\infty}$ without equality \cite{bonnay}. Feferman has shown that sets are $\lambda$-definable from monadic operations if and only if they are definable in first-order logic without equality \cite{feferman}. The foregoing discussion suggests a different approach, relying on topology rather than morphism invariance. We have given a precise topological characterization of definability in coherent logic and, via Morleyization, we may regard any classical first-order theory as a coherent theory. Together, this gives an exact characterization of definability in first-order logic. Similar methods can also be applied to yield a topological definability theorem for intuitionistic logic, so long as we modify our spectra to range over Henkin models as well as the intended interpretations. To make sense of this we will need to employ the categorical logic discussed in the next chapter. \chapter{Pretopos Logic} In this chapter we will recall and prove a number of well-known facts about pretoposes, most of which can be found in either the Elephant \cite{elephant} or Makkai \& Reyes \cite{MakkaiReyes}. We recall the definitions of coherent categories and pretoposes and the pretopos completion which relates these two classes of categories. We go on to discuss a number of constructions on pretoposes, including the quotient-conservative factorization and the machinery of slice categories and localizations.
In particular, we define the elementary diagram, a logical theory associated with any $\bTT$-model, and its interpretation as a colimit of pretoposes. These are Henkin theories: they satisfy existence and disjunction properties which can be regarded as a sort of ``locality'' for theories. We will also show that this machinery interacts well with complements, so that the same methods may be applied to the study of classical theories. \section{Coherent logic and pretoposes}\label{sec_Ptop} Categorical logic is founded upon two principles: (i) logical theories are structured categories and (ii) models and interpretations are structure-preserving functors. These premises were first developed by Lawvere in his Ph.D.\ thesis, \emph{Functorial Semantics of Algebraic Theories} \cite{lawvereFuncSem}. The first slogan means that the syntactic entities of a theory $\bTT$ (i.e., formulas and terms) can be arranged into a \emph{syntactic category} $\CC_{\bTT}$. Any $\bTT$-model then defines a functor $\CC_{\bTT}\to\Sets$, uniquely determined up to natural isomorphism. The ambient logic of $\bTT$ (e.g., classical or intuitionistic, propositional or first-order) corresponds to the relevant categorical structure which $\CC_{\bTT}$ possesses (and which its models preserve). We will assume that the reader is familiar with the basic contours of functorial semantics, in particular the interpretation of finite limits as pairing and conjunction and regular factorization as existential quantification. These ideas were discussed briefly in the last chapter (page \pageref{coh_logic}), and a more complete discussion can be found in \cite{MakkaiReyes}, \cite{SGL} or \cite{elephant}. In this chapter we will present the extension of these ideas to coherent categories and pretoposes. The material in this section and the next is well-known; a standard reference is Johnstone \cite{elephant} A1.
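As a reminder of the two clauses just mentioned (standard functorial semantics, writing $[x\,|\,\varphi]$ for the subobject interpreting $\varphi(x)$ in a context $x:A$, with $y:B$), conjunction is interpreted by pullback of subobjects and existential quantification by the image of a projection:

```latex
$$[x\,|\,\varphi\wedge\psi]\ \cong\ [x\,|\,\varphi]\times_{A}[x\,|\,\psi]$$
$$[x\,|\,\exists y.\,\gamma]\ =\ \mathrm{im}\big([x,y\,|\,\gamma]\rightarrowtail A\times B\stackrel{\pi}{\longrightarrow} A\big)$$
```

where $\pi$ is the product projection; the second clause is where the regular epi-mono factorization of the next definition is used.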
A category $\CC$ is \emph{coherent} if it has (i) finite limits, (ii) regular epi-mono factorization and (iii) finite joins of subobjects and, moreover, the latter two constructions should be stable under pullbacks. Coherent structure is sufficient to interpret coherent theories; this structure is exactly that which was required, in the last chapter, for our interpretation of the generic model in $\Sh(\MM_0)$ (page \pageref{coh_logic}). Now fix a (multi-sorted) first-order language $\LL$. An \emph{interpretation} $I$ of $\LL$ in a coherent category $\bEE$ begins with an underlying object $A^I\in\bEE$ for each basic sort $A\in\LL$. A compound context $B=\<A_1,\ldots,A_n\>$ is interpreted by the product $B^I=\prod_i A_i^I$. To each basic relation $R(x)$ (in a context $x:A$) we assign a subobject $R^I\leq A^I$; similarly, we assign an arrow $f^I:A^I\to B^I$ to each function symbol $f:x\to y$ (where $y:B$). Given this data, we can construct an interpretation of any coherent formula $\varphi(x)$ as a subobject $\varphi^I\leq A^I$. The construction proceeds inductively based on the formulaic structure of $\varphi(x)$, where each logical operation is interpreted as on page \pageref{coh_logic}. Given such an interpretation, we say that $I$ \emph{satisfies} a sequent $\varphi(x)\overprove{x:A}\psi(x)$ just in case $\varphi^I\leq\psi^I$ in $\Sub_{\bEE}(A^I)$. Each coherent theory $\bTT$ determines a \emph{syntactic category} $\bEE_{\bTT}$ which is itself coherent. For each context $x:A$ there is an object $A^x\in\bEE_{\bTT}$ and these are related inductively by $$A^{\<x,y\>}\cong A^x\times A^y\hspace{1cm} A^{\<\>}\cong 1_{\bEE_{\bTT}}.$$ Every formula $\varphi(x)$ determines a subobject $[x|\varphi]\leq A^x$ and we have $[x|\varphi]\leq[x|\psi]$ if and only if $\bTT$ proves the corresponding sequent, which we notate $$\bTT\tri \varphi(x)\vdash_{x:A} \psi(x).$$ A morphism $\sigma:[\varphi(x)]\to[\psi(y)]$ is a formula $\sigma(x,y)$ which is \emph{provably functional} (p.f.)
on $\varphi$: $$\begin{array}{c} \sigma(x,y)\vdash \varphi(x)\wedge\psi(y)\\ \sigma(x,y)\wedge\sigma(x,y')\vdash y=y'\\ \varphi(x)\vdash \exists y. \sigma(x,y).\\ \end{array}$$ When $\sigma$ is provably functional, we will often write $\sigma(x)=y$ in place of $\sigma(x,y)$. If $I$ is an interpretation of $\bTT$ in a coherent category $\bFF$, then it induces a coherent functor $\widetilde{I}:\bEE_{\bTT}\to\bFF$ by sending $[x|\varphi]\mapsto \varphi^I$. One can also define $\LL$-structure homomorphisms in $\bFF$ and these correspond to natural transformations $\widetilde{I_0}\to\widetilde{I_1}$. This defines an equivalence of categories (natural in $\bFF$) $$\bTT\Mod(\bFF)\simeq\Coh(\bEE_{\bTT},\bFF)$$ and this classification property fixes $\bEE_{\bTT}$ up to equivalence. In particular, a classical model of $\bTT$ corresponds to a coherent functor $M:\bEE_{\bTT}\to\Sets$. \begin{defn} An initial object $0\in\bEE$ is called \emph{strict} if, for any other object $A\not\cong0$, $\Hom_{\bEE}(A,0)=\emptyset$. Suppose that $\bEE$ has a strict initial object. A coproduct $A+B\in\bEE$ is called \emph{disjoint} if the pullback of the coprojections is 0: $$\xymatrix{ 0 \pbcorner \ar[r] \ar[d] & A \ar[d]^{i_A}\\ B \ar[r]_{i_B} & A+B.}$$ A left-exact category with a strict initial object and all disjoint coproducts, both stable under pullbacks, is called \emph{extensive}.
\end{defn} \begin{defn} A subobject $R\leq A\times A$ in $\bEE$ is called an \emph{equivalence relation} or \emph{congruence} if it satisfies diagrammatic versions of the reflexivity, transitivity and symmetry axioms: \begin{tabular}{lll} \textbf{Refl.} & \textbf{Trans.} & \textbf{Sym.}\\ $\xymatrix{A \ar@{-->}[r] \ar@{>->}[rd]_{\Delta} & R \ar@{>->}[d] &\\& A\times A\\}$& $\xymatrix{\raisebox{-3.5ex}{$R\overtimes{A}R$} \ar@{-->}[r] \ar[rd]_{\<p_1,p_3\>} & R \ar@{>->}[d] \\& A\times A\\}$& $\xymatrix@R=5ex{ R \ar@{-->}[r] \ar@{>->}[rd] &A\times A \ar[d]^{\rsim}_{\rm{tw}}\\& A\times A}$ \end{tabular} An object $Q=A/R$ is the \emph{(exact) quotient} of $A$ by $R$ if there is a coequalizer $e:A\twoheadrightarrow Q$ of the two projections $R\rightrightarrows A$ (necessarily a regular epimorphism) such that $R$ is the kernel pair of $e$: $$\xymatrix{R \pbcorner \ar[r] \ar[d] & A \ar[d]^e \\ A \ar[r]_e & Q.\\}$$ If a left-exact category $\bEE$ has exact quotients for all equivalence relations and these are stable under pullback then $\bEE$ is called an \emph{exact category}. \end{defn} \begin{defn} A \emph{pretopos} is a category which is both extensive and exact. \end{defn} Note that a pretopos is automatically regular and coherent. The first claim follows from the fact that each kernel pair is an equivalence relation, so that an exact category already has a quotient for every kernel pair. For the second claim, simply note that the join of two subobjects $R,S\leq A$ can be computed as the epi-mono factorization $R+S \twoheadrightarrow R\vee S \rightarrowtail A$. A coherent functor automatically preserves any disjoint coproducts and quotients in its domain, so a pretopos functor is nothing more than a coherent functor between pretoposes. \begin{lemma}\label{ptop_nice} If $\EE$ is a pretopos, then \begin{enumerate} \item every monic in $\EE$ is regular (an equalizer). \item $\EE$ is balanced: epi + mono $\Rightarrow$ iso. \item every epic in $\EE$ is regular (a coequalizer).
\end{enumerate} Furthermore, for any epimorphism $f:A\twoheadrightarrow B$, \begin{enumerate} \item[4.] $f^*:\Sub(B)\to\Sub(A)$ is injective. \item[5.] $\exists_f\circ f^*$ is the identity on $\Sub(B)$. \end{enumerate} \end{lemma} \begin{proof} Suppose that $m:R\rightarrowtail A$ is monic. This induces an equivalence relation $E_R\rightarrowtail (A+A)\times(A+A)$ and its quotient is a pushout: $Q\cong A\overplus{R} A$. Moreover, the coprojections $A\rightrightarrows Q$ are again monic and their intersection is $R$. Consequently, $R\rightarrowtail A\rightrightarrows Q$ is an equalizer diagram and $m$ is regular. Now suppose that $m$ is both epic and monic. Because it is epic and equalizes the coprojections $A\rightrightarrows Q$, these coprojections must be equal. But then $m$ is the equalizer of identical maps, and therefore an isomorphism. Hence $\EE$ is balanced. Now suppose $f:A\to B$ is epic. Factor $f$ as a regular epi $e$ followed by a monic $m$. Since $f$ is epic, $m$ is as well. But then balance implies that $m$ is an isomorphism. Since $e$ and $f$ are isomorphic maps and $e$ is regular, $f$ is regular too. Note that the injectivity of $f^*$ follows immediately from the identity $\exists_f\circ f^*=1_{\Sub(B)}$. Because regular epis are stable under pullback, any subobject $S\leq B$ induces a pullback square in which the left-hand side is a regular epi-mono factorization $$\xymatrix{ f^*S \pbcorner \ar@{>->}[r] \ar@{->>}[d] & A \ar@{->>}[d]^f\\ S \ar@{>->}[r] & B.\\ }$$ This is exactly the definition of $\exists_f$, so uniqueness of factorizations guarantees that $S\cong \exists_f(f^*S)$.\end{proof} \begin{defn}\label{ptop_morph_props} Suppose that $I:\bEE\to\bFF$ is a coherent functor. We say that $I$ is: \begin{itemize} \item \emph{conservative} if it reflects isomorphisms. \item \emph{full on subobjects} if, for every object $A\in\EE$ and every subobject $S\leq IA\in \FF$ there is a subobject $R\leq A$ with $IR\cong S$.
\item \emph{subcovering} if for every object $B\in\FF$ there is an object $A\in\EE$, a subobject $S\leq IA\in\FF$ and a (regular) epimorphism $S\twoheadrightarrow B$. \end{itemize} \end{defn} \begin{lemma}[Pretopos completion]\label{ptop_comp} The forgetful functor $\Ptop\to\Coh$ has a left adjoint sending each coherent category $\bEE$ to its \emph{pretopos completion} $\EE$. Moreover, the unit is a coherent functor $I:\bEE\to\EE$ which is conservative, full, full on subobjects and (finitely) subcovering.\footnote{Because $\bEE$ may not have coproducts, we are allowed to cover each $B\in\EE$ by a finite family of subobjects $S_i\leq IA_i$.} \end{lemma} \begin{proof}[Sketch] Please see \cite{elephant}, A3 or \cite{shulman_completion} for a more detailed presentation. A \emph{finite-object equivalence relation} $A_i/R$ in $\bEE$ is a family of objects $\<A_i\>_{i\leq n}$ together with binary relations $R_{ij}\rightarrowtail A_i\times A_j$ satisfying \begin{tabular}{cc} \textbf{Refl.} & $\xymatrix{A_i \ar@/^3ex/[rr]^{\Delta} \ar@{-->}[r] & R_{ii} \ar@{>->}[r] & A_i\times A_i,}$\\\\ \textbf{Trans.} & $\xymatrix{\raisebox{-3.5ex}{$R_{ij}\overtimes{A_j} R_{jk}$} \ar@/^3ex/[rr]^{\<p_i,p_k\>} \ar@{-->}[r] & R_{ik} \ar@{>->}[r] & A_i\times A_k,}$\\\\ \textbf{Sym.} & \raisebox{.7cm}{$\xymatrix@R=5ex{ R_{ij} \ar@{=}[d]^{\rsim} \ar@{>->}[r] &A_i\times A_j \ar@{=}[d]^{\rsim}_{\rm{tw}}\\ R_{ji} \ar@{>->}[r] & A_j\times A_i}$}.\\ \end{tabular} The objects of $\EE$ are the finite-object equivalence relations in $\bEE$. Given two such objects $A_i/R$ and $B_k/S$, an arrow $f:A_i/R\to B_k/S$ is a family of relations $F_{ik}\rightarrowtail A_i\times B_k$ which is provably functional modulo the equivalences $R$ and $S$.
For example, the condition $\sigma(x,y)\wedge\sigma(x,y')\vdash y=y'$ becomes the following requirement: for all $Z\in\bEE$ and all $x_i:Z\to A_i$ and $y_k:Z\to B_k$ we have $$\begin{array}{c} \left(\raisebox{.8cm}{\xymatrix{ Z \ar[r]^-{\<x_i,x_j\>} \ar@{-->}[rd] & A_i\times A_j & Z \ar[r]^-{\<x_i,y_k\>} \ar@{-->}[rd] & A_i\times B_k & Z \ar[r]^-{\<x_j,y_l\>} \ar@{-->}[rd] & A_j\times B_l \\ & R_{ij} \ar[u] && F_{ik} \ar[u] && F_{jl} \ar[u]\\ }}\right)\\ \xymatrix{\ar@{}[rd]|{\mbox{\huge{$\Rightarrow$}}} && Z \ar[r]^-{\<y_k,y_l\>} \ar@{-->}[rd]& B_k\times B_l\\ &&& S_{kl} \ar[u] \\ }\\ \rm{i.e.,}\ \big(R_{ij}(x_i,x_j)\wedge F_{ik}(x_i,y_k)\wedge F_{jl}(x_j,y_l) \big)\Rightarrow S_{kl}(y_k,y_l) \end{array} $$ Each object $A\in\bEE$ defines a trivial quotient $A/\!\!=$ and the comparison functor $I:\bEE\to\EE$ acts by sending $A\mapsto A/\!\!=$. Two maps are equal modulo the identity relation just in case they are provably equal, so this displays $\bEE$ as a full subcategory inside $\EE$. It follows at once that $I$ is conservative. Henceforth we will not distinguish between an object $A\in\bEE$ and its image $IA\in\EE$. Suppose $A_i$ is a family of objects in $\bEE$. Define an equivalence relation $\Delta_{ij}=0$ if $i\not=j$ and $\Delta_{ii}=\Delta_{A_i}$. In $\EE$ this defines a (disjoint) coproduct $\coprod_i A_i=A_i/\Delta_{ij}$. From these, any binary coproduct $(A_i/R)+(B_j/S)$ can be defined as a quotient of $\coprod_i A_i+\coprod_j B_j$, so $\EE$ is closed under $+$. Moreover, any object $A_i/R$ comes equipped with a family $A_i\in\bEE$ and a presentation $\coprod_i A_i\twoheadrightarrow A_i/R$, so $I$ is finitely subcovering. To see that $I$ is full on subobjects, suppose that $A, B_i\in\bEE$ and $B_i/R\leq A$. Each composite $B_i\to B_i/R\rightarrowtail A$ belongs to the full subcategory $\bEE$, so its epi-mono factorization $B_i\twoheadrightarrow \im(B_i) \rightarrowtail A$ does as well.
Since $B_i/R$ is the epi-mono factorization of a map $\coprod_i B_i\to A$ it is isomorphic to $\bigvee_i \im(B_i)$, which again belongs to $\bEE$. \end{proof} \begin{defn}\label{class_ptop} The \emph{classifying pretopos} $\EE_{\bTT}$ of a coherent theory $\bTT$ is the pretopos completion of the syntactic category $\bEE_{\bTT}$. We will say that $\bTT$ is a \emph{pretopos theory} if $I:\bEE_{\bTT}\to\EE_{\bTT}$ is an equivalence of categories. \end{defn} Every pretopos $\EE$ classifies a theory $\bTT_{\EE}$. To form this theory, let every object $A\in\EE$ determine a basic sort, every subobject $S\leq A$ a basic relation and every arrow $f:A\to B$ a function symbol. The axioms of $\bTT_{\EE}$ are defined by the subobject ordering in $\EE$: $R\underset{\bTT_{\EE}}{\vdash} S \Iff R\leq S\in\Sub_{\EE}(A)$. If $\EE=\EE_{\bTT}$ is already the classifying category for some coherent theory $\bTT$, then the extension $\bTT\subseteq\bTT_{\EE}$ is essentially the same as Shelah's model-theoretic closure $\bTT\subseteq\bTT^{\rm{eq}}$ (see e.g. \cite{harnik} for a discussion). Because coproducts and quotients are definable in $\Sets$, every $\bTT$-model $M$ has an essentially unique extension to a $\bTT^{\rm{eq}}$-model $M^{\rm{eq}}$. Category-theoretically, this amounts to an equivalence of categories $$\Coh(\bEE_{\bTT},\Sets)\simeq\Ptop(\EE_{\bTT},\Sets)$$ and this follows immediately from the fact that pretopos completion is left adjoint to the forgetful functor $\Ptop\to\Coh$. Therefore it is reasonable to regard $\bTT$ and $\bTT^{\rm{eq}}$ as the ``same'' theory in the sense that they are semantically equivalent. $\bTT^{\rm{eq}}$ is a conservative extension which allows for a syntactic operation called elimination of imaginaries. An \emph{imaginary element} $a/E$ in a model $M$ is a definable equivalence relation $E(x,x')$ (where $x,x':A$) together with an equivalence class $[a]\in A^M/E^M$.
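To fix intuitions, here is a standard example of an imaginary (not specific to the discussion above): work in the context $\<x_1,x_2\>:A\times A$ and let $E$ identify each pair with its transposition, $$E\big(\<x_1,x_2\>,\<x_1',x_2'\>\big)\ \equiv\ (x_1=x_1'\wedge x_2=x_2')\ \vee\ (x_1=x_2'\wedge x_2=x_1').$$ An imaginary element for this $E$ is an unordered pair $\{a_1,a_2\}\in (A\times A)^M/E^M$. In a pretopos the quotient $(A\times A)/E$ exists as an object, so unordered pairs become honest elements of a definable sort.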
$M$ has \emph{uniform elimination of imaginaries} (u.e.i) if for every equivalence relation $E(x,x')$ there is a sort $y:B$ and a p.f. formula $\epsilon(x,y)$ such that $$M\ \models\ \ E(a,a') \ \dashv\vdash\ \exists y.\big(\epsilon(a)=y=\epsilon(a')\big).$$ Pretopos structure supports elimination of imaginaries in the sense that every $\bTT^{\rm{eq}}$-model has u.e.i. Given a definable equivalence relation $E(x,x')$, simply take $B=A/E$ and let $\epsilon(x,y)$ be the p.f. relation corresponding to the quotient $A\twoheadrightarrow A/E$. When we study categorical semantics we sometimes fix a pretopos $\SS$ (often $\SS=\Sets$) to serve as a ``semantic universe''. In that case, we distinguish between \emph{interpretations} $I:\EE\to \FF$ (between arbitrary pretoposes) and \emph{models} $M:\EE\to\SS$ (into $\SS$). We let $A^M$ denote the value of a model $M$ at an object $A\in\EE$. We (loosely) refer to $A^M$ as a definable ``set'' in $M$ and write $a\in A^M$ as a shorthand for $a\in\Hom_{\SS}(1,A^M)$. In this terminology, interpretations act on models by precomposition (contravariantly): $$\xymatrix@R=2ex{ \FF\Mod(\SS) \ar[rr]^{I^*} && \EE\Mod(\SS)\\ N \ar@{|->}[rr] && I^*N\\ \FF \ar[rdd]_N && \EE \ar[ll]_I \ar[ldd]^{I^*N}\\\\ &\SS&\\ }$$ For each $A\in\EE$, $A^{I^*N}=(IA)^N$ and we call $I^*N$ the \emph{reduct} of $N$ along $I$. This generalizes the classical terminology; if $I$ is induced by an inclusion of theories $\bTT\subseteq\bTT'$ and $N\models\bTT'$, $I^*N\cong N\!\!\upharpoonright\!\LL(\bTT)$. From this point forward we will work in the doctrine of pretoposes and define our logical terminology accordingly: ``theory'' is synonymous with ``pretopos'', a formula is an object and a model is a pretopos functor to $\Sets$. Please see the table on page \pageref{dictionary} for a full list of categorical definitions and their logical equivalents. \section{Factorization in $\Ptop$} In this section we review the quotient-conservative factorization system in $\Ptop$. 
A discussion of this factorization system can be found in \cite{makkai}. Given a pretopos $\EE$, a sequent in $\EE$ is an object $A\in \EE$ (a context) together with an (ordered) pair of subobjects $\varphi,\psi\in\Sub(A)$. We say that $\EE$ satisfies $\varphi\vdash\psi$ just in case $\varphi\leq \psi$. An interpretation $I:\EE\to\FF$ is conservative in the logical sense if $\EE$ satisfies any sequent which $I$ forces in $\FF$: $$\FF\tri I(\varphi)\vdash I(\psi) \Rightarrow \EE\tri \varphi\vdash\psi.$$ Categorically, this says that $I$ is \emph{injective on subobjects}: $R\lneq S\leq A$ implies $IR\lneq IS$. We will see below that, in a pretopos, this is equivalent to conservativity in the categorical sense. \begin{lemma}\label{eq_cons} For a pretopos functor $I:\EE\to\FF$, the following are equivalent: \begin{itemize} \item[(i)] $I$ is conservative (reflects isomorphisms). \item[(ii)] $I$ is injective on subobjects. \item[(iii)] $I$ is faithful. \end{itemize}\end{lemma} \begin{proof} For (i)$\Rightarrow$(ii), suppose that $I$ identifies two subobjects $IR \cong IS\in\Sub_{\FF}(IA)$; then $IR\cong I(R\wedge S)\cong IS$ and, since $I$ reflects both these isomorphisms, $R\cong S$ already in $\EE$. To see that (ii)$\Rightarrow$(iii), suppose that $I$ is injective on subobjects and that $f\not=g:A\rightrightarrows B\in\EE$. Then $\Eq(f,g)\lneq A$, which implies that $$\Eq(If,Ig)=I(\Eq(f,g))\lneq IA.$$ But then $If\not=Ig$, so $I$ is faithful. For (iii)$\Rightarrow$(i), suppose that $I$ is faithful and that $If$ is an isomorphism. $If$ is monic and epic, and both of these properties are reflected by faithful functors. For example, if $If$ is monic and $f\circ g= f\circ h$ then $If\circ Ig=If\circ Ih$, whence $Ig=Ih$ by monicity and $g=h$ by faithfulness. Thus $f$ is also monic and epic, and so by balance (lemma \ref{ptop_nice}) $f$ is an isomorphism. \end{proof} Now we introduce a second class of pretopos functors, the quotients.
Suppose that $\bTT\subseteq\bTT'$ is an extension of theories written in the same language. Let $\EE$ and $\EE'$ denote the classifying pretoposes of $\bTT$ and $\bTT'$, respectively. From the pretopos completion we know that each object of $\EE$ is a $\bTT$-definable equivalence relation, and therefore also $\bTT'$-definable. Similarly, $\EE$-arrows are $\bTT$-provably functional relations (modulo equivalence), and these are also $\bTT'$-provably functional. This means that the inclusion $\bTT\subseteq\bTT'$ induces an interpretation $I:\EE\to\EE'$. Since equality of arrows in $\EE'$ is $\bTT'$-provable equality, $I$ may not be faithful. Similarly, the new axioms in $\bTT'$ may introduce provable functionality or equivalence, so in general $I$ is neither full nor surjective on objects. However, $I$ does satisfy some related surjectivity conditions defined in the last section. \begin{lemma} If $\bTT\subseteq\bTT'$ is an extension of theories in the same language $\LL$ then the induced functor $I:\EE\to \EE'$ is subcovering and full on subobjects. \end{lemma} \begin{proof} Suppose that $Q$ is an object of $\EE'$. By the pretopos completion we know that $Q$ has the form $Q=B_i/E$, where $B_i=\varphi_i(x_i)$ is a formula in context $x_i:A_i$ and $E$ is a $\bTT'$-provable finite-object equivalence relation. Since $\bTT$ and $\bTT'$ share the same language, $A_i=IA_i$ already belongs to $\EE$, and the subobject $\coprod_i B_i\leq \coprod_i IA_i$ together with the presentation $\coprod_i B_i \twoheadrightarrow Q$ shows that $I$ is subcovering. Now suppose $A=A_i/E$ is an object in $\EE$ and $S\leq IA$. Then $S=S_i/IE$ for some family of subobjects $S_i\leq IA_i$. Each $S_i$ corresponds to a $\bTT'$-formula $\varphi_i(x_i)$ in context $x_i:A_i$. Since $\bTT$ and $\bTT'$ have the same language, $\varphi_i(x_i)$ also defines a subobject $R_i\leq A_i$ and $I(R_i/E)\cong S_i/IE$. Therefore $I$ is full on subobjects.
\end{proof} \begin{defn} A pretopos functor $I:\EE\to\FF$ is called a \emph{quotient} if it is both subcovering and full on subobjects (cf. definition \ref{ptop_morph_props}). In particular, for every $B\in\FF$ there is an $A\in\EE$ and an epimorphism $IA\twoheadrightarrow B$. \end{defn} \begin{prop}\label{ptop_factor} Every pretopos functor $I:\EE\to\FF$ has a factorization into a quotient map followed by a conservative functor: $$\xymatrix@R=3ex{ \EE \ar[rr]^I \ar@{->>}[rd]_{Q} && \FF\\ & \EE\!/I \ar@{>->}[ur]_C &\\ }$$ \end{prop} \begin{proof} Begin by factoring $I$ as a composite $\EE\labelarrow{Q_0}\bGG\labelarrow{C_0}\FF$, where $Q_0$ is essentially surjective on objects and $C_0$ is full and faithful. $\bGG$ need not be a pretopos but it is coherent, and its pretopos completion is our desired factorization $$\xymatrix{ \EE \ar[rrr]^I \ar[rd]_{Q_0} \ar[rrd]^{Q} &&& \FF\\ &\bGG \ar[r]_J & \GG \ar[ur]_C\\ }$$ Since $Q_0$ is e.s.o., it is both subcovering and full on subobjects. $J$ satisfies the same properties by lemma \ref{ptop_comp}. Since both classes of functors are closed under composition, $Q$ must be a quotient. Note that if $S\lneq B$ is a proper subobject and $p:A\twoheadrightarrow B$ is a (regular) epi then $p^*S\lneq A$ is again proper. This is because the composite $p^*S\twoheadrightarrow S\rightarrowtail B$ is \emph{not} epic (because $S$ is proper) and it is equal to the composite $p^*S\rightarrowtail A\twoheadrightarrow B$. If $p^*S\rightarrowtail A$ were an isomorphism then this latter composite would be epic, in contradiction to the previous observation. Suppose $S\leq B$ is a subobject in $\GG$. Because $Q$ is a quotient, there is an object $A\in \EE$ and an epimorphism $p:QA\twoheadrightarrow B$. Moreover, the pullback $p^*S\cong S\overtimes{B} QA$ is a subobject of $QA$ and therefore (as $Q$ is full on subobjects) lies in the image of $Q$: there is some $R \leq A$ with $Q(R)\cong S\overtimes{B} QA$. In order to see that $C$ is conservative, suppose $CS\cong CB$.
Since $C$ preserves pullbacks and $C\circ Q\cong I$, this implies $$IR\cong C(S\overtimes{B} QA)\cong CS\overtimes{CB} C(QA)\cong CB\overtimes{CB} IA \cong IA.$$ As $C_0$ is full and faithful, it follows that $Q_0 R\cong Q_0 A$ in $\bGG$ (and hence $QR\cong QA$ in $\GG$). But $p:QA\twoheadrightarrow B$ is an epimorphism, so $p^*$ is injective on subobjects. Thus $QR\cong p^*S\cong QA$ implies $S\cong B$, so that $C$ is injective on subobjects and hence conservative. \end{proof} \begin{prop}\label{qc_ortho} Quotient and conservative pretopos functors are orthogonal. I.e., for any diagram of pretopos functors as below with $C$ conservative and $Q$ a quotient there is a diagonal filler $$\xymatrix{ \EE \ar@{->>}[d]_Q \ar[rr]^I && \GG \ar@{>->}[d]^{C}\\ \FF \ar[rr]_J \ar@{-->}[urr]^{D} && \HH.\\ }$$ The factorization is unique up to natural isomorphism. \end{prop} \begin{proof}[Sketch] Consider an object $B\in\FF$. Since $Q$ is a quotient, there is an $A\in \EE$ together with an epimorphism $e:QA\twoheadrightarrow B$. The kernel pair of $e$ is a subobject of $Q(A\times A)$ and so has a preimage $K\rightarrowtail A\times A$ in $\EE$. Applying $J$ to $B$ and observing that $JQ\cong CI$ gives us a coequalizer diagram in $\HH$: $$CI(K)\rightrightarrows CI(A) \twoheadrightarrow J(B).$$ Parallel arrows $k_1,k_2:K\rightrightarrows A$ define an equivalence relation just in case certain monics defined from $k_1$ and $k_2$ are, in fact, isomorphisms. 
For example, the reflexivity and symmetry conditions (where $\tau:A\times A\stackrel{\sim}{\longrightarrow} A\times A$ is the twist isomorphism) can be expressed by requiring that the marked arrows in the following pullback diagrams are isos: $$\xymatrix{ A\wedge K \ar@{>->}[d] \ar@{>->}[r]^-{\sim} & A \ar@{>->}[d] && \tau(K)\wedge K \ar@{>->}[d]^{\rm{\rotatebox[origin=c]{90}{$\sim$}}} \ar@{>->}[r]^-{\sim} & \tau(K) \ar@{>->}[d] \\ K \ar@{>->}[r] & A\times A && K \ar@{>->}[r] & A\times A\\ }$$ A more complicated diagram involving subobjects of $A\times A\times A$ expresses transitivity in much the same way. Since a conservative morphism reflects isomorphisms, it must also reflect equivalence relations. As $C$ is conservative, $IK\rightrightarrows IA$ is therefore already an equivalence relation in $\GG$, and we set $D(B)$ equal to its quotient. Arrows can be handled similarly. Each morphism $b:B\to B'$ induces a subobject $S_b\rightarrowtail QA\times QA'$ which is provably functional modulo the equivalence relations defining $B$ and $B'$. This has a preimage $R_b\leq A\times A'$ in $\EE$ and the property of being provably functional modulo equivalence is reflected by conservative morphisms, so that $I(R_b)$ induces a morphism $D(B)\to D(B')$ as desired. Essential uniqueness of the functor follows from the fact that $D$ must preserve quotients; if the kernel pair $QK\rightrightarrows QA$ maps to the kernel pair $IK\rightrightarrows IA$ (so that $D\circ Q\cong I$), then $D$ must send $B$ to the quotient of $IK$, as above. A similar argument applies to arrows. For further details, please see Makkai \cite{makkai_ultra}.
\end{proof} \begin{cor} The factorization in proposition \ref{ptop_factor} is unique up to equivalence of the intermediate category $\EE\!/I$ and also functorial: for any square in $\Ptop$ there is a factorization (unique up to natural isomorphism) $$\xymatrix{ \EE \ar@/^3ex/[rr]^I \ar@{->>}[r] \ar[d] & \EE\!/I \ar@{>->}[r] \ar@{-->}[d] & \FF \ar[d]\\ \EE' \ar@/_3ex/[rr]_J \ar@{->>}[r] & \EE'/J \ar@{>->}[r] & \FF'\\ }$$ \end{cor} \begin{proof} First suppose that we have two quotient-conservative factorizations of $I$, through $\GG$ and $\GG'$. By the previous proposition, there are essentially unique diagonal fillers $$\xymatrix{ \EE \ar@{->>}[d] \ar@{->>}[rr] && \GG \ar@{>->}[d] \ar@{-->}@/_/[lld]_D \\ \GG' \ar@{>->}[rr] \ar@/_/@{-->}[rru]_{D'} && \FF\\ }$$ The composite $D'\circ D$ is a diagonal filler between $\GG$ and itself so, by uniqueness of these diagonals, it must be naturally isomorphic to $1_{\GG}$. Similarly, $D\circ D'\cong 1_{\GG'}$, so $\GG$ and $\GG'$ are equivalent categories. In order to establish functoriality, simply observe that there is a diagonal filler $$\xymatrix{ \EE \ar@{->>}[d] \ar[r] & \EE' \ar@{->>}[r] & \EE'/J \ar@{>->}[d] \\ \EE\!/I \ar@{>->}[r] \ar@{-->}[urr] & \FF \ar[r] & \FF'.\\ }$$ \end{proof} It is worth noting that the (coherent) sheaf functor sends the quotient/conservative factorization on pretoposes to the surjection/embedding factorization of geometric morphisms. $$\xymatrix{ \EE \ar@{->>}[r]&\EE/I \ar@{>->}[r] & \FF& \rm{in\ }\Ptop\\ \Sh(\EE) & \ar@{>->}[l] \Sh(\EE/I) & \ar@{->>}[l] \Sh(\FF)& \rm{in\ }\Top\\ }$$ See \cite{SGL} VII.4 for details of this construction in $\Top$. \section{Semantics, slices and localization} In this section we translate some basic model-theoretic concepts into the context of pretoposes. Specifically, we will consider certain \emph{model-theoretic} extensions $\bTT\subseteq\bTT'$ which involve new constants and axioms, but do not involve new sorts, functions or relations.
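A minimal example of such an extension (a standard instance, included only for orientation): given a coherent formula $\varphi(x)$ in context $x:A$, let $\bTT'$ extend $\bTT$ by a single new constant $\mathbf{c}:A$ and the axiom $\vdash\varphi(\mathbf{c})$. A model of $\bTT'$ is then just a model $M$ of $\bTT$ together with an element $a\in A^M$ satisfying $\varphi$; anticipating the lemma below, extensions of this shape are classified by slice categories of the form $\EE\!/\varphi$.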
Although some of the specific definitions may be new, the material is well-known folklore. Most can be found, at least implicitly, in Makkai \& Reyes \cite{MakkaiReyes}. Fix a pretopos $\EE$ and an object $A\in\EE$. Observe that the slice category $\EE\!/A$ is again a pretopos. Most of the pretopos structure (pullbacks, sums and quotients) is created by the forgetful functor $\EE\!/A\to\EE$. The remaining structure exists as well: the identity $1_A$ is terminal in $\EE\!/A$, and binary products in $\EE\!/A$ are computed as pullbacks over $A$ in $\EE$. Moreover, the forgetful functor has a right adjoint $A^\times: \EE\to\EE\!/A$ sending an object $B$ to the second projection $B\times A\to A$. As a right adjoint, $A^\times$ preserves limits; moreover $A^\times$ is pullback along the unique map $A\to 1$, and the definition of a pretopos makes sums and quotients stable under pullback. Therefore $A^\times$ is a pretopos functor. \begin{lemma} $\EE\!/A$ classifies $A$-elements. More specifically, there is an equivalence of categories $$\Ptop(\EE\!/A,\SS)\simeq \Sigma_{M\models\EE} \Hom_{\SS}(1,A^M).$$ \end{lemma} \begin{proof} An object of the latter category in the statement of the lemma is a pair $\<M,a\>$ where $M$ is a model $\EE\to\SS$ and $a$ is a (global) element of the $\EE$-definable set $A^M$. A morphism $\<M,a\>\to\<M',a'\>$ is an $\EE$-model homomorphism $h:M\to M'$ such that $h_A(a)=a'$. Suppose that $N:\EE\!/A\to\SS$; we recover $M$ from $N$ by taking the reduct along $A^\times$: $\varphi^M\cong (\varphi\times A)^N$. To recover $a$, note that the diagonal is a global section in $\EE\!/A$ $$\xymatrix{ \Delta_A:1_{\EE\!/A} \cong A \ar[r] & A^\times(A)}.$$ Therefore its interpretation in $N$ is a global element $(\Delta_A)^N:1\longrightarrow (A^\times A)^N=A^M$. Similarly, we can recover an $\EE$-model homomorphism by composition $\xymatrix{\EE \ar[r]^{A^\times} & \EE\!/A \rtwocell^M_N{h} & \SS}$.
The following naturality square shows that $h(a)=a'$: $$\xymatrix{ 1 \ar@{=}[d] \ar[rr]^{a=(\Delta_A)^{M}} && A^{M} \ar[d]^{h_A} \\ 1 \ar[rr]_{a'=(\Delta_A)^{M'}} && A^{M'}.\\ }$$ Conversely, given $M$ and $a$ we may define $N:\EE\!/A\to\SS$ by sending each map $\sigma:E\to A$ to the fiber of $\sigma^M$ over $a$. The same construction applies to morphisms: when $h(a)=a'$ the map $h_E$ restricts to a map of fibers $\sigma^N\to\sigma^{N'}$: $$\xymatrix@=2ex{ \sigma^{N} \ar[rr] \ar[dd] \ar@{-->}[rd] &&E^{M} \ar[dd]|\hole \ar[rd]^{h_E}\\ &\sigma^{N'} \pbcorner \ar[rr] \ar[dd] && E^{M'} \ar[dd]^{\sigma^{M'}}\\ 1 \ar@{=}[rd] \ar[rr]^(.3){a}|\hole && A^{M} \ar[rd]^{h_A}\\ &1 \ar[rr]_{a'} && A^{M'}.\\ }$$ We leave it to the reader to check that these maps define an equivalence of categories. \end{proof} More generally, the pullback along any arrow $\tau:A\to B$ is a pretopos functor. If $N:\EE\!/A\to\SS$ classifies an element $a\in A^M$ then the composite $N\circ\tau^*$ classifies the element $\tau^M(a)\in B^M$. In particular, an element $a\in A^M$ satisfies a formula $\varphi\rightarrowtail A$ just in case the associated functor $\<M,a\>:\EE\!/A\to \SS$ factors through the pullback $\EE\!/A\to \EE\!/\varphi$. A (model-theoretic) \emph{type} in context $A$ is a filter of formulas $p\subseteq\Sub_{\EE}(A)$. This is a set of mutually consistent formulas $\varphi(x)$ in a common context $x:A$, closed under finite conjunction and weakening. Given an element $a\in A^M$ we write $a\models \varphi$ if $M\models\varphi(a)$. In that case, there is an essentially unique factorization of $\<M,a\>$ through $\EE\!/\varphi$. $$\xymatrix{ \EE\!/A \ar[rr]^{\<M,a\>} \ar[d]_{\varphi\vdash A} && \SS\\ \EE\!/\varphi \ar@{-->}[urr]_{a\models\varphi} &\\ }$$ Similarly, we write $a\models p$ if $a\models \varphi$ for every $\varphi\in p$.
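Here is a standard concrete example of a type (the particular theory is chosen only for illustration): let $\EE$ classify the coherent theory of a linear order with constants $\mathbf{c}_0<\mathbf{c}_1<\mathbf{c}_2<\cdots$ and take the formulas $\varphi_n(x)\equiv(\mathbf{c}_n<x)$ in the context $x:A$. These generate a filter $p\subseteq\Sub_{\EE}(A)$, since any finite conjunction $\varphi_{n_1}\wedge\cdots\wedge\varphi_{n_m}$ is provably equivalent to $\varphi_{\max(n_1,\ldots,n_m)}$. In a model $M$, an element $a\models p$ is an upper bound for all of the $\mathbf{c}_n^M$; such an element may fail to exist in a given model even though each individual $\varphi_n$ is realized. Returning to the general situation, suppose again that $a\models p$.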
This induces a functor out of the filtered colimit $$\xymatrix{ \EE\!/A \ar[r]^{\<M,a\>} \ar[d]_{\varphi\vdash A} & \SS\\ \EE\!/\varphi \ar@{-->}[ur] \ar[r]_{\psi\vdash\varphi} & \EE\!/\psi \ar[r]_-{p\vdash\psi} \ar@{-->}[u] & \colim_{\varphi\in p} \EE\!/\varphi. \ar@{-->}[ul] \\ }$$ It follows that we can define the classifying pretopos for $p$-elements by taking a directed (2-)colimit\footnote{See Lack \cite{lack} for the definition of (pseudo-)colimits in a 2-category.} $$\EE_p\simeq\colim_{\varphi\in p} \EE\!/\varphi.$$ We will see in a moment that $\EE_p$ is a pretopos. Although $\EE_p$ is only defined up to equivalence of categories, the same is true for the classifying pretopos of a theory. More generally we have the following definition: \begin{defn}\label{def_localization} Given a filtered diagram $D:J\to\EE^{\op}$ the \emph{localization} of $\EE$ at $D$ is the colimit (in $\Cat$) of the composite $j\mapsto D_j\mapsto \EE\!/D_j$: $$\EE_D\simeq \colim_{j\in J}\EE\!/D_j$$ \end{defn} Given a map $i\to j$ in $J$ we let $d_{ij}:D_j\to D_i$ so that $d_{ij}^*:\EE\!/D_i\to\EE\!/D_j$. For each $j\in J$, $\tilde\jmath$ denotes the colimit injection $\EE\!/D_j\to \EE_D$. We may take $\Ob(\EE_D)=\coprod_j \Ob(\EE\!/D_j)$. Given $\sigma\in\EE\!/D_i$ and $\tau\in\EE\!/D_j$, an arrow $f:\sigma\to\tau$ is defined by a span $D_i\leftarrow D_k \to D_j$ together with a map $\overline{f}:d_{ki}^*\sigma\to d_{kj}^*\tau$. Similarly, two arrows $\overline{f}\in\EE\!/D_i$ and $\overline{g}\in\EE\!/D_j$ are identified in the colimit just in case there is a span as above such that $d_{ki}^*\overline{f}=d_{kj}^*\overline{g}$. \begin{lemma}\label{localization_limits} If $\EE$ is a pretopos and $D$ is a filtered diagram in $\EE^{\op}$ then $\EE_D$ is again a pretopos and a finite (co)cone in $\EE_D$ is (co)limiting just in case it is the image of a (co)limit in one of the slice categories.
\end{lemma} \begin{proof} Because any two indices $D_i$ and $D_j$ have a common span $D_i\leftarrow D_k\to D_j$, representatives from the two categories can be compared in $\EE\!/D_k$. Therefore, for any finite family of objects and arrows in $\EE_D$ we may choose representatives belonging to a single slice category $\EE\!/D_k$. Pulling back along a further map $D_l\to D_k$ we may realize all of the (finite number of) equations holding among this family. Thus any finite diagram in $\EE_D$ has a representative in a single slice. This allows us to compute limits in $\EE_D$ using those in the slices. Suppose that $F$ is a finite diagram in $\EE_D$ and find a diagram of representatives $\overline{F}\in\EE\!/D_j$. Let $\overline{L}$ denote the limit of these representatives in $\EE\!/D_j$. Now suppose that $Z\to F$ is a cone in $\EE_D$. By the previous observation there is a further map $D_k\to D_j$ such that the cone and the diagram together have representatives $\overline{Z}\to d_{jk}^*\overline{F}$ in $\EE\!/D_k$. Since pullbacks preserve limits, this induces a map $\overline{Z}\to d_{jk}^*\overline{L}$. This makes $\tilde\jmath(\overline{L})$ a limit for $F$ in $\EE_D$; uniqueness in $\EE_D$ follows from uniqueness in each of the slices. Essentially the same argument shows that coproducts and quotients may be computed in slices, so $\EE_D$ is a pretopos and each map $\tilde\jmath:\EE\!/D_j\to\EE_D$ preserves pretopos structure. \label{ptop_alg} From a more sophisticated perspective, we may observe that the theory of pretoposes is itself quasi-algebraic. This means that we may write down the theory of pretoposes using sequents of Cartesian formulas (using only $\{=,\wedge\}$). This follows from the facts that pretopos structure amounts to the existence of certain adjoints, and the theory of adjoints is equational (see \cite{awodeyCT}, ch. 9). It is also important that provable functionality and equivalence relations are Cartesian-definable.
The categorical interpretation of $\{\wedge, =\}$ involves only finite limits, so any functor which preserves finite limits also preserves models of Cartesian theories. In $\Sets$ filtered colimits commute with finite limits, so a filtered colimit of pretoposes in $\Sets$ is again a pretopos. \end{proof} \section{The method of diagrams}\label{sec_diagrams} Classically the Henkin diagram of a $\bTT$-model $M$ is an extension $\bTT\subseteq\rm{Diag}(M)$ constructed by: \begin{itemize} \item Extending the language $\LL(\bTT)$ to include new constants $\mathbf{c}_a$ for each $a\in|M|$. \item Adding the axiom $\vdash \varphi(\mathbf{c}_a)$ for any $\bTT$-formula $\varphi(x)$ such that $a\in \varphi^M$. \end{itemize} In this section we review the categorical interpretation of Henkin diagrams. Here we work in the semantic context $\SS=\Sets$. Fix a model $M:\EE\to\Sets$. From this one defines the \emph{category of elements} $\int_{\EE} M$. For the objects of $\int_{\EE} M$ we take the disjoint union $\displaystyle\coprod_{A\in\EE} A^M$. A morphism $\<b\in B^M\>\to\<a\in A^M\>$ is an arrow $f:B\to A$ such that $f^M(b)=a$. We call the pair $\<f,b\>$ a \emph{specialization} for $a$, and write $f:b\mapsto a$ to indicate that $\<f,b\>$ is a specialization of $a$. Composition in $\int_{\EE} M$ is computed as in $\EE$. The existence of products and equalizers in $\EE$ guarantees that $(\int_{\EE} M)^{\op}$ is a filtered category. A specialization for a finite family of elements $a_i\in A_i^M$ is a specialization of the tuple, i.e. a family of maps $\<f_i: b\mapsto a_i\>$. \begin{defn}[The pretopos diagram]\label{def_diagram} Given a model $M$, the \emph{localization} $\EE_M$ is the localization of $\EE$ along the filtered diagram $(\int_{\EE} M)^{\op}\to\Ptop$ defined by $\<a\in A^M\>\mapsto A\mapsto \EE\!/A$. We also write $\DD(M)\simeq \underset{a\in\int M}{\colim} \EE\!/A$ and call this category the \emph{diagram} of $M$.
\end{defn} An object in $\DD(M)$ is called a \emph{parameterized set} in $M$ or a p-set over $M$; such an object is defined by a triple $\<A,\sigma,a\>$ where $\sigma\in\EE\!/A$ and $a\in A^M$. The \emph{context} of the p-set is $A$ and its \emph{domain} is the domain of $\sigma$. We usually suppress the context and denote this object $\<\sigma,a\>$. A \emph{morphism of p-sets} $\<\sigma,a\>\to\<\tau,b\>$ consists of a specialization $\<f,g\>:c\mapsto\<a,b\>$ together with a map between the pullbacks in $\EE\!/C$: $$\xymatrix{ D \ar[d]_{\sigma} & \ar[l] \raisebox{1.5ex}{$f^*\sigma \stackrel{h}{\longrightarrow} g^*\tau$} \ar[r] \ar/va(-8) 1.6cm/;[d] \ar/va(-6) 2.7cm/;[d] &E \ar[d]^{\tau}\\ A & \ar[l]^{f} C \ar[r]_{g} & B\\ }$$ Though technically $\<h,c,f,g\>:\<\sigma,a\>\to\<\tau,b\>$, we write $h/c:f^*\sigma\to g^*\tau$ as a shorthand for this data. Two parallel morphisms $h/c$ and $h'/c'$ from $\<\sigma,a\>$ to $\<\tau,b\>$ induce a diagram: $$\xymatrix@=2ex{ &&& f^*\sigma \ar[llldd] \ar[rd] \ar[rr]^h &&g^*\tau \ar[rrrdd] \ar[ld]&\\ &&&&C \ar[rrd]^g \ar[lld]_f &&\\ D \ar[rr]^{\sigma} && A &&&& B && E \ar[ll]_{\tau}\\ &&&& C' \ar[urr]_{g'} \ar[ull]^{f'} &&\\ &&& f'^*\sigma \ar[llluu] \ar[ru] \ar[rr]_{h'} &&g'^*\tau \ar[rrruu] \ar[lu]&\\ }$$ These two maps are equal if there is a further specialization $\<k,k'\>:d\mapsto \<c,c'\>$ such that (i) $\<k,k'\>$ factors through the pullback $C\overtimes{A\times B} C'$ and (ii) $k^*h$ and $k'^*h'$ are equal (up to canonical isomorphism). More specifically, the first condition ensures that $f\circ k=f'\circ k'$, inducing a canonical isomorphism $k^*f^*\cong (f\circ k)^*=(f'\circ k')^*\cong k'^*f'^*$ (and similarly for $k^*g^*\cong k'^*g'^*$).
Then $h/c$ and $h'/c'$ are equal if the following diagram commutes: $$\xymatrix{ k^*f^*\sigma \ar@{=}[d]^{\rm{\rotatebox[origin=c]{90}{$\sim$}}} \ar[r]^{k^*h} & k^*g^*\tau \ar@{=}[d]^{\rm{\rotatebox[origin=c]{90}{$\sim$}}}\\ k'^*f'^*\sigma \ar[r]_{k'^*h'} & k'^*g'^*\tau\\ }$$ Now suppose that $h/b$ and $k/c$ are composable at $\<\sigma,a\>$. The definitions of $h$ and $k$ involve maps $f:B\to A$ and $g:C\to A$; let $p_B$ and $p_C$ denote the projections from their pullback. Then we can define a map in $\EE\!/(B\overtimes{A} C)$ by $$\xymatrix{ l:p_B^*(\dom(h)) \ar[r]^-{p_B^*h} & p_B^*f^*\sigma\cong p_C^*g^*\sigma \ar[r]^-{p_C^*k} & p_C^*(\cod(k))\\ }$$ Since $h/b$ and $k/c$ are morphisms of p-sets, $f(b)=a=g(c)$ and the pair $\<b,c\>$ belongs to $(B\overtimes{A} C)^M$. Then we may define the composite by $(k/c)\circ(h/b)=l/\<b,c\>$. \begin{comment} $\DD(M)$ is a localization (a 2-colimit) and so it is only defined up to equivalence of categories. In the next section we will use diagrams to define the structure sheaf of a theory. To show that we have a sheaf (rather than a stack or 2-sheaf) it will be important to have identity conditions for the objects of $\DD(M)$. Fortunately, equality in $\EE$ induces a natural equality in $\DD(M)$. \begin{defn} Two p-sets $\<\sigma,a\>$ and $\<\tau,b\>$ are \emph{equal} if \bf{(i)} they have the same domain $E=\dom(\sigma)=\dom(\tau)$ and \textbf{(ii)} there is a specialization $\<f,g\>:c\mapsto \<a,b\>$ such that the pullbacks $f^*\sigma$ and $g^*\tau$ are equal as subobjects of $E\times C$: $$\xymatrix{ A \ar@{>->}[d]_{\Delta_A} & \ar[l] f^*\sigma = g^*\tau \ar[r] \ar@{>->}/va(-8) 1.85cm/;[d] \ar@{>->}/va(-6) 2.82cm/;[d] &E \ar@{>->}[d]^{\Delta_B}\\ A\times A & \ar[l]^{\sigma\times f} E\times C \ar[r]_{\tau\times g} & B\times B\\ }$$ \end{defn} \end{comment} \begin{prop} $\DD(M)$ is the classifying pretopos for the classical diagram $\rm{Diag}(M)$. 
\end{prop} \begin{proof} The unique element $\top\in 1^M$ induces an interpretation $\tilde{\top}:\EE\to\DD(M)$; let $\tilde{A}=\tilde{\top}(A)$ denote the interpretation of $A$ in $\DD(M)$. For each $a\in A^M$, applying $\tilde{a}$ to the generic constant $\Delta_A$ defines a global section (i.e., a constant) $1\to\tilde{A}$ in $\DD(M)$. Now suppose that $i:R\rightarrowtail A$ is a basic relation and $M\models R(a)$. To see that $\DD(M)\vdash R(\bf{c}_a)$ we must show that the constant $\bf{c}_a:1\to \tilde{A}$ factors through the inclusion $\tilde{R}\rightarrowtail\tilde{A}$. Because $M\models R(a)$ we have the following factorizations, where $a_R$ is $a$ itself, regarded as an element of $R^M$ (so that $i^M(a_R)=a$): $$\xymatrix{ \EE \ar[rr]|-{A^\times} \ar@/^5ex/[rrrrrr]^(.3){\tilde{\top}} \ar@/_3ex/[rrrr]_(.4){R^\times} && \EE\!/A \ar[rr]|{\ i^*\ } \ar@/^3ex/[rrrr]^(.2){\tilde{a}}&& \EE\!/R \ar@{-->}[rr]_{\tilde{a_R}} && \DD(M)\\}$$ Therefore, the following diagrams are identified (up to isomorphism) in $\DD(M)$, yielding the desired factorization: $$\xymatrix@R=2ex{ A^\times(R)\ \ar@{>->}[r] & **[l]A^\times(A) & R^\times(R)\ \ar@{>->}[r] & **[l]R^\times(A) &\ \ \tilde{R}\ \ar@{>->}[r] &\ \tilde{A}\ \ \\ & \ar@{}[r]^(.2){}="a"^(.7){}="b" \ar@{|->} "a";"b"^{i^*} && \ar@{}[r]^(.25){}="a"^(.75){}="b" \ar@{|->} "a";"b"^{\tilde{a_R}} &\\ A \ar[ruu]_{\Delta_A} && R \ar[ruu]_{i^*(\Delta_A)} \ar[uu]|{\Delta_R} && 1 \ar@{-->}[uu] \ar[uur]_{\bf{c}_a}\\ }$$ \end{proof} \begin{lemma}\label{diag_models} $\DD(M)$ classifies $\EE$-models under $M$. A model of $\DD(M)$ consists of an $\EE$-model $N$ together with a homomorphism $h:M\to N$. A morphism of $\DD(M)$-models is a commutative triangle under $M$. In particular, the identity on $M$ induces a canonical model $I_M:\DD(M)\longrightarrow\Sets$. \end{lemma} \begin{proof} Suppose that $H:\DD(M)\to\Sets$ is a model of $\DD(M)$. The composite $H\circ\tilde{\top}$ provides the asserted $\EE$-model $N$.
Similarly, $H\circ\tilde{a}$ defines an interpretation of each $\bf{c}_a$ in $N$: $$\xymatrix{ \EE\!/A \ar@/^3ex/[rr]^{\<N,\bf{c}_a^N\>} \ar[r]_-{\tilde{a}} & \DD(M) \ar[r]_H & \Sets\\ \EE \ar[u]^{A^\times} \ar[ur]_{\tilde{\top}} \ar@/_3ex/[urr]_{N} \\ }$$ From this we can define a family of functions $h_A: A^M\to A^{N}$ by setting $h_A(a)=\bf{c}_a^N$. Fix an $a\in A^M$ and for each $\sigma:A\to B$ let $b_\sigma=\sigma^M(a)$. This corresponds to an axiom $\vdash \sigma(\bf{c}_a)=\bf{c}_{b_\sigma}$ in $\DD(M)$ and therefore $$\sigma^N\circ h_A(a) = \sigma^N(\bf{c}_a^N) = \big(\sigma(\bf{c}_a)\big)^N = \bf{c}_{b_\sigma}^N=h_B(\sigma^M(a))$$ This ensures that $h$ defines a natural transformation $M\to N$ and hence an $\EE$-model homomorphism. This construction is reversible. Given a homomorphism $h:M\to N$, simply define $H:\DD(M)\to \Sets$ by sending each pair $\<\sigma,b\>\in\DD(M)$ to the definable set $(\sigma^N)^{-1}\big(h_B(b)\big)$. For a natural transformation $\theta:H_1\to H_2$, the composite $\theta\cdot\tilde{\top}$ induces a homomorphism $g_\theta:N_1\to N_2$. Similarly, $\theta\cdot\tilde{a}$ induces an $\EE\!/A$-natural transformation $H_1\cdot\tilde{a}\to H_2\cdot\tilde{a}$. From this it follows that $g_\theta\big(\bf{c}_a^{N_1}\big)=\bf{c}_a^{N_2}$. Then $g_\theta$ commutes with the maps $h_i:M\to N_i$, making a commutative triangle under $M$. \end{proof} The diagram $\DD(M)$ is closely related to the notion of a definable set. Recall that a set $S\subseteq A^M$ is \emph{definable} if there is some binary formula (subobject) $\varphi\rightarrowtail A\times B$ and an element $b\in B^M$ such that $S=\{a\in A^M\ |\ M\models\varphi(a,b)\}$. We often specify a definable set in $M$ by writing $S=\varphi(x,b)^M$. A function $f:S\to T$ is \emph{definable} just in case its graph is. Let $\DS(M)$ denote the category of definable sets in $M$.
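For a concrete illustration (the particular theory is our choice and plays no role in what follows), let $\EE$ be the classifying pretopos of the algebraic theory of monoids and let $M$ be a monoid. Fix $b\in M$ and let $\varphi\rightarrowtail A\times A$ be the subobject defined by the equation $x\cdot y=y\cdot x$, where $A$ is the underlying object of the generic monoid. Then $$\varphi(x,b)^M=\{a\in M\ |\ ab=ba\}$$ is a definable set in $M$: the centralizer of $b$, with defining formula $\varphi$ and parameter $b$. Different choices of the parameter $b$ yield different definable sets, all sharing the single formula $\varphi$.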
\begin{defn} When $\sigma:E\to A$, the \emph{realization} of $\<\sigma,a\>$ in $M$ is a definable subset $(\sigma^M)^{-1}(a)\subseteq E^M$, the fiber of $\sigma^M$ over $a$. Given any homomorphism $f:M\to N$, the \emph{realization under} $f$ is the definable set $(\sigma^N)^{-1}(f(a))$. We denote this set by $\<\sigma,a\>^f$ or $\<\sigma,f(a)\>^N$. \end{defn} Suppose $S=\varphi(x,b)^M$ for some formula $\varphi\rightarrowtail A\times B$. If $\sigma$ denotes the composite map $\varphi\rightarrowtail A\times B\to B$, then $S$ is the realization of $\<\sigma,b\>$ in $M$. Thus every definable set is a realization of some p-set (though this requires changing the ``ambient set'' of $S$ from $A^M$ to $\varphi^M$). Similarly, given two p-sets and a definable function between their realizations, the formula which defines its graph induces a morphism of p-sets. Thus realization defines a functor $\DD(M)\to\DS(M)$ which is essentially surjective on objects and full. However, $\DD(M)$ and $\DS(M)$ are not identical. A definable set can have many definitions and these definitions might induce different p-sets in $\DD(M)$. This is analogous to the situation in algebraic geometry, where distinct polynomials sometimes induce the same vanishing set (e.g. $xy$ and $x^2y$ both vanish on the same set $\{\<x,y\>|\ x=0\vee y=0\}$). In fact, $\DS(M)$ is the quotient-conservative factorization of the canonical functor $I_M$ defined above: $$\xymatrix{ \DD(M) \ar[rr]^{I_M} \ar@{->>}[rd]_{\rm{quot.}} &&\Sets\\ & \DS(M) \ar@{>->}[ur]_{\rm{cons.}}&\\ }$$ \begin{lemma} Suppose that we have two p-sets $\<\sigma,a\>$ and $\<\tau,b\>$ and that $\dom(\sigma)=E=\dom(\tau)$. The following are equivalent: \begin{itemize} \item $\<\sigma,a\>\leq\<\tau,b\>$ in the subobject lattice $\Sub_{\DD(M)}(\tilde{E})$.
\item There is a binary formula $\gamma\rightarrowtail A\times B$ such that $M\models \gamma(a,b)$ and $\EE$ proves $$(\sigma(w)=x)\wedge \gamma(x,y)\vdash \tau(w)=y.$$ \item For every homomorphism $f:M\to N$, the realization of $\<\sigma,a\>$ in $N$ is contained in the realization of $\<\tau,b\>$: $$(\sigma^N)^{-1}\big(f(a)\big)\subseteq (\tau^N)^{-1}\big(f(b)\big).$$ \end{itemize} \end{lemma} \begin{proof} First suppose that $\<\sigma,a\>\leq\<\tau,b\>$. Because $\DD(M)$ classifies the diagram of $M$, there must be a proof $$\rm{Diag}(M),\sigma(w)=\bf{c}_a \vdash \tau(w)=\bf{c}_b.$$ Let $\vdash\varphi_i(\bf{c}_{c_i})$ be the (finitely many) axioms from $\Diag(M)$ involved in the proof and set $\varphi(\bf{c}_a,\bf{c}_b,\bf{c}_c)=\bigwedge_i \varphi_i(\bf{c}_{c_i})$, where $\bf{c}_c$ is disjoint from $\bf{c}_a$ and $\bf{c}_b$. This leaves us with $$\varphi(\bf{c}_a,\bf{c}_b,\bf{c}_c)\wedge(\sigma(w)=\bf{c}_a)\vdash \tau(w)=\bf{c}_b.$$ The right-hand side of this sequent does not involve the constants $\bf{c}_c$, so we may replace them by an existential quantifier on the left. Let $\gamma(x,y)=\exists z.\varphi(x,y,z)$. The existence of $c\in C^M$ shows that $M\models\gamma(a,b)$. Now replacing the constants $\bf{c}_a$ and $\bf{c}_b$ by free variables $x$ and $y$, we are left with the desired statement: $$\EE\tri\gamma(x,y)\wedge(\sigma(w)=x)\vdash \tau(w)=y.$$ Now suppose that the second condition holds and that $f:M\to N$. Since $M\models\gamma(a,b)$, we also have $N\models\gamma(f(a),f(b))$. If $e\in (\sigma^N)^{-1}\big(f(a)\big)$ then the tuple $\<f(a),f(b),e\>$ satisfies the left-hand side of the sequent above. By soundness, we also have $\tau^N(e)=f(b)$, so the realization of $\<\sigma,a\>$ under $f$ is contained in that of $\<\tau,b\>$. For the last equivalence, simply note that $\DD(M)$ classifies homomorphisms under $M$; let $N_f:\DD(M)\to\Sets$ denote the model associated to a homomorphism $f:M\to N$.
Then in every model we have $$N_f(\<\sigma,a\>)=\<\sigma,f(a)\>^N\leq \<\tau,f(b)\>^N=N_f(\<\tau,b\>).$$ Completeness then guarantees that this inclusion must hold in the theory, so $\<\sigma,a\>\leq\<\tau,b\>$ in $\DD(M)$. \end{proof} We close with a discussion of the locality property satisfied by the diagram $\DD(M)$. This will be an important condition when we define the structure sheaf of a logical scheme in the next chapter. \begin{defn}\label{def_local} Fix an object $A\in\EE$. \begin{itemize} \item $A$ is \emph{projective} if every epi $p:B\twoheadrightarrow A$ has a section $s:A\to B$, so that $p\circ s=1_A$. \item $A$ is \emph{indecomposable} if for any subobjects $R,S\leq A$ we have $R\vee S=A$ implies $R\cong A$ or $S\cong A$. \item $\EE$ is \emph{local} if the terminal object $1\in\EE$ is projective and indecomposable. \end{itemize} \end{defn} \begin{lemma}\label{local_functor} When $\EE$ is a local pretopos, the co-representable functor $\Hom(1,-):\EE\to\Sets$ is a pretopos functor (and hence a model of $\EE$). \end{lemma} \begin{proof} The co-representables preserve limits, essentially by the definition of limits: $\Hom(1,\lim_i A_i)\cong\lim_i \Hom(1,A_i)$. Therefore it is enough to check that $\Hom(1,-)$ preserves epimorphisms and coproducts. For the first, suppose that $f:A\twoheadrightarrow B$ is epic and that $b:1\to B$. Pulling $f$ back along $b$ yields another epi $b^*A\twoheadrightarrow 1$ which, by locality, has a section $1\to b^*A$; composing with the projection $b^*A\to A$ yields a map $s:1\to A$ with $f\circ s=b$, so the induced map $\Hom(1,A)\to\Hom(1,B)$ is surjective. Similarly, given a point $c:1\to A+B$ we may pull back the coproduct inclusions to give a partition $1\cong c^*A+c^*B$. Since $1$ is indecomposable, either $c^*A\cong 1$ (giving a factorization of $c$ through $A$), or vice versa. This defines an isomorphism $\Hom(1,A+B)\cong\Hom(1,A)+\Hom(1,B)$.
When $\EE=\DD(M)$ is the diagram of a $\bTT$-model $M$, one easily checks that $\Hom(1,-)$ is the canonical $\EE$-model associated with the identity homomorphism $M\to M$ (cf. lemma \ref{diag_models}). \end{proof} \begin{lemma}\label{local_stalks} Each diagram $\DD(M)$ is a local pretopos. Logically speaking, $\DD(M)$ satisfies the existence and disjunction properties. \end{lemma} \begin{proof} Suppose $E=\<\sigma,b\>$ is an object in $\DD(M)$ and that the projection $E\twoheadrightarrow 1$ is epic. The canonical interpretation $I_M$ sends this epi to a surjection $E^M\to 1$ in $\Sets$. This means the definable set $E^M$ is non-empty, so pick an element ${a\in E^M\subseteq A^M}$. We need to see that the constant $\bf{c}_a$ factors through $E$. In $\DD(M)$ we have a section $\<\bf{c}_a,\bf{c}_b\>:1\to \tilde{A}\times \tilde{B}$ and $E$ is a pullback along this section: $$\xymatrix{ E \pbcorner \ar[d] \ar[r] & \tilde{A}\times \tilde{A} \ar[d]^{1\times\tilde{\sigma}}\\ 1 \ar[r]_-{\<\bf{c}_a,\bf{c}_b\>} \ar[ur]|{\<\bf{c}_a,\bf{c}_a\>} & \tilde{A}\times \tilde{B}\\ }$$ Because $\sigma^M(a)=b$, the diagram $\DD(M)$ contains an axiom $\vdash \sigma(\bf{c}_a)=\bf{c}_b$. This implies that the bottom triangle in this diagram commutes and therefore the map $\<\bf{c}_a,\bf{c}_a\>$ induces a section $1\to E$, as desired. Now suppose that $R,S\leq 1$ are subterminal objects and that $R\vee S=1$. Applying the canonical model $I_M$ gives a surjection $R^M+S^M\twoheadrightarrow 1$ in $\Sets$, so either $R^M$ or $S^M$ is non-empty. Without loss of generality, suppose $a\in R^M$. Just as above, this defines a section $1\to R$ in $\DD(M)$. Since we are in a pretopos and the projection $R\to 1$ is both epic and monic, $R\cong 1$. Logically, projectivity and indecomposability correspond to the existence and disjunction properties for $\DD(M)$: $$\begin{array}{rcl} \DD(M)\models \exists x.
\varphi(x) & \Iff & \DD(M)\models\varphi(t)\textrm{ for some (definable) closed term }t.\\ \DD(M)\models \varphi\vee\psi & \Iff & \DD(M)\models \varphi\textrm{ or }\DD(M)\models \psi.\\ \end{array}$$ The first equivalence follows from the fact that sections $1\to \varphi\leq\tilde{A}$ are exactly the $\DD(M)$-definable singletons in $A$ which provably satisfy $\varphi$ (where $\varphi(x)=\varphi(x,\bf{c}_b)$ is an $\EE$-formula in context $x:A$ which may also contain parameters from $M$). For the second, suppose that $\varphi,\psi\leq 1$ are subterminal objects in $\DD(M)$ (i.e. sentences). If $\DD(M)\models\varphi\vee\psi$ then the projection $\varphi+\psi\twoheadrightarrow 1$ is an epi. By projectivity this epi has a section $1\to\varphi+\psi$, which factors through one of the summands; the projection of that summand to $1$ is then both epi and mono, hence an isomorphism. Thus either $\DD(M)\models\varphi$ or $\DD(M)\models\psi$. \end{proof} \section{Classical theories and Boolean pretoposes}\label{sec_bool_ptop} \begin{defn} A coherent category (or pretopos) $\bEE$ is \emph{Boolean} if every subobject in $\bEE$ is complemented: for any object $A\in\bEE$ and subobject $S\leq A$, there exists another subobject denoted $\neg S$ such that $A\cong S+\neg S$. We denote the (2-)category of Boolean pretoposes by $\BPtp$. \end{defn} \begin{lemma} If $\bEE$ is a coherent category which is Boolean, then its pretopos completion $\EE$ is also Boolean. \end{lemma} \begin{proof} Fix an object $Q\in \EE$ and a subobject $S\leq Q$. Because the unit of the completion $I:\bEE\to\EE$ is finitely subcovering, there is a finite coproduct $A\cong \coprod_i A_i$ with $A_i\in\bEE$ and a cover $f:A\twoheadrightarrow Q$. Let $R=f^*S$ and $R_i=R\cap A_i$. Since $I$ is full on subobjects, each $R_i$ belongs to $\bEE$. Since $\bEE$ is Boolean, this means that $R$ has a complement $\neg R\cong\coprod_i\neg R_i$. Its image $\exists_f(\neg R)$ is the complement of $S$.
Since $S=\exists_f(f^*S)=\exists_f(R)$ and $R+\neg R\twoheadrightarrow Q$, the two subobjects cover $Q$: $S\vee\exists_f(\neg R)=\exists_f(R+\neg R)=Q.$ They are disjoint because, by Frobenius reciprocity, $S\cap \exists_f(\neg R)=\exists_f(f^*S\cap\neg R)=\exists_f(R\cap\neg R)=\exists_f(0)=0$. Thus any subobject $S\leq Q$ has a complement $\neg S=\exists_f(\neg R)$, so $\EE$ is Boolean. \end{proof} \begin{prop} There is a Boolean completion operation $\hat{(-)}:\Ptop\to\BPtp$ which is left adjoint to the forgetful functor $\BPtp\to\Ptop$. \end{prop} \begin{proof} For any classical theory $\bTT$ the first-order formulas and (classically) provably functional relations form a Boolean coherent category. Its pretopos completion is the classifying pretopos of the (classical) theory. Moreover, any coherent theory may be regarded as a classical theory with the same language and axioms (but more connectives and inference rules). Now replace any pretopos $\EE$ by its associated theory $\bTT_{\EE}$ (page \pageref{class_ptop}) and then regard $\bTT_{\EE}$ as a classical theory. The associated Boolean pretopos is the Boolean completion $\hat{\EE}$. The unit $\EE\to\hat{\EE}$ acts by sending each coherent $\EE$-formula to the same formula regarded classically. This operation extends to maps and to quotients because intuitionistic proofs (of provable functionality or equivalence relations) are all classically valid.
When $\BB$ is Boolean we can lift any pretopos functor $I:\EE\to\BB$ along the unit: $$\xymatrix{ \hat{\EE} \ar@{-->}[r]^{\hat{I}} & \BB\\ \EE \ar[u] \ar[ur]_{I}&.\\ }$$ We build the lift using the usual inductive scheme: $\hat{I}(A)=I(A)$ when $A\in\EE$, while for the remaining objects $$\begin{array}{cc} \hat{I}(A \overtimes{C} B)\cong \hat{I}(A)\overtimes{\hat{I}(C)} \hat{I}(B) & \hat{I}(\top_{\hat{\EE}})=\top_{\BB} \\ \hat{I}(A+B)\cong \hat{I}(A)+\hat{I}(B) & \hat{I}(A/R)\cong \hat{I}(A)/\hat{I}(R)\\ \hat{I}(\neg A)\cong \neg\hat{I}(A) \end{array}$$ These operations suffice to construct all the classical pretopos formulas which are the objects of $\hat{\EE}$. The functor is well-defined because $I$ is a pretopos functor, meaning that the inductive clauses agree with the base cases wherever they overlap. \end{proof} For example, in the last chapter we showed how to replace any classical theory $\bTT$ by an equivalent coherent theory $\overline\bTT$. To do so we needed to extend the language $\LL\subseteq\overline\LL$; at the level of pretoposes, this corresponds to Boolean completion: $\EE_{\LL} \longrightarrow \widehat{\EE_{\LL}}\simeq \EE_{\overline\LL}$. Since $\Sets$ is Boolean, the universal mapping property of $\widehat{\EE_{\LL}}$ guarantees that every $\LL$-structure has a unique extension to an $\overline\LL$-structure. Similarly, when $\bTT$ is already coherent we have $\EE_{\overline\bTT}\simeq\widehat{\EE_{\bTT}}$. \begin{lemma} \mbox{} \begin{itemize} \item Any quotient of a Boolean pretopos is Boolean. \item Any localization of a Boolean pretopos is Boolean. \end{itemize} \end{lemma} \begin{proof} First consider a quotient map $I:\EE\to\FF$. Any pretopos functor necessarily preserves joins and disjointness (a limit condition). Therefore, it also preserves coproducts and hence complementation: $I(S)+I(\neg S)\cong I(S+\neg S)\cong IA$.
Since quotients are full on subobjects, any $S\leq IA$ has a preimage $IR\cong S$ and therefore a complement $\neg S=I(\neg R)$. For any other object $B\in\FF$ and $S\leq B$, we define $\neg S$ just as in the previous lemma: $\neg S=\exists_q(\neg q^*S)$. Exactly the same argument shows that $S+\neg S\cong B$. As for localizations, note that subobjects and complements in $\EE\!/A$ are just subobjects and complements in $\EE$. Thus $\EE\!/A$ is Boolean so long as $\EE$ is. Now suppose that $\FF\simeq\colim_j \EE\!/A_j$ is a localization of $\EE$ over a filtered category $J$ and that $S\leq F$ belongs to $\FF$. This subobject has a representative $\overline{S}\leq\overline{F}$ in $\EE\!/A_j$ for some $j\in J$ and the representative is complemented. Since $\tilde\jmath:\EE\!/A_j\to\FF$ is a pretopos functor, $\tilde\jmath(\neg \overline{S})$ is a complement to $S$. \end{proof} Recall that $\EE$ is \emph{well-pointed} if for any distinct maps $f\not= g:A\to B$ there is a global element $a:1\to A$ such that $f\circ a\not=g\circ a$. This is the appropriate notion of locality for Boolean pretoposes. \begin{lemma}\label{bool_stalks} If a pretopos $\DD$ is Boolean and local, then it is two-valued and well-pointed. In particular, if $\EE$ is Boolean and $M:\EE\to\Sets$ is a model then the diagram $\DD(M)$ satisfies these conditions. \end{lemma} \begin{proof} Consider any subterminal object $S\leq 1$ in $\DD$. Since $\DD$ is Boolean, $S+\neg S\cong 1$. But $\DD$ is local and $1$ is indecomposable, so either $S=1$ or $\neg S=1$ (and hence $S=0$). Thus $\DD$ is two-valued. Now suppose that $f\not=g: A\to B$ in $\DD$. Let $R\leq A$ denote the complement of the equalizer of $f$ and $g$. Since $f\not= g$, $R$ is non-zero and therefore its image $R\twoheadrightarrow \exists R \rightarrowtail 1$ is non-zero as well. Since $\DD$ is two-valued, we must have $\exists R=1$, so the projection $R\twoheadrightarrow 1$ is epic.
Since $\DD$ is local, $1$ is projective and we can find a section $s:1\to R\leq A$. If $f\circ s=g\circ s$ then $s$ would factor through the equalizer; this is impossible since $s$ factors through $R$, which is the (disjoint) complement of the equalizer. Therefore $f\circ s\not= g\circ s$, so $\DD$ is well-pointed. In particular, we know that the diagram $\DD(M)$ is always local (lemma \ref{local_stalks}). By the previous lemma, when $\EE$ is Boolean so are its localizations. Therefore the assumptions of the lemma apply, so $\DD(M)$ is two-valued and well-pointed. \end{proof} \chapter{Logical Schemes} In this chapter we will define the 2-category of logical schemes and prove some basic facts about them. We begin by defining the logical structure sheaf $\OO_{\bTT}$, a sheaf of local pretoposes over the spectrum $\MM_{\bTT}$ defined in chapter 1. The pair $(\MM_{\bTT},\OO_{\bTT})$ is an affine logical scheme. Following familiar definitions in algebraic geometry, we go on to define more general logical schemes by gluing, and we use the affine case to guide the definition of morphisms and natural transformations of schemes. This defines a reflective 2-adjunction with the 2-category of pretoposes, and we use this fact to show that schemes are closed under all finite 2-limits. \section{Stacks and Sheaves} We begin by defining three closely related notions of a ``category over $\EE$;'' see Vistoli \cite{fibrations} or Moerdijk \cite{moerdijk_stacks} (for the topological case) for a thorough treatment of fibrations and stacks. The simplest of these three is a \emph{presheaf of categories}, which is simply a (strict) functor $\bf{C}:\EE^{\op}\to\Cat$. The second is that of an \emph{$\EE$-indexed category}, which is a pseudofunctor $\bf{C}:\EE^{\op}\to\Cat$. Here the transition morphisms $f^*:\bf{C}(B)\to\bf{C}(A)$ are only expected to commute up to a canonical natural isomorphism, $(g\circ f)^*\cong f^*\circ g^*$, and these satisfy further coherence conditions. 
See \cite{bunge_pare} for further details. The third, fibrations, requires a preliminary definition. Given a fixed projection functor $\bf{p}:\CC\to\EE$, an arrow $g:C'\to C$ is \emph{Cartesian over $f$} if $\bf{p}(g)=f$ and for any $h:C''\to C$ and any factorization $\bf{p}(h)=f\circ q$, there exists a unique lift $\bar{q}$ with $\bf{p}\left(\bar{q}\right)=q$ such that $h=g\circ \bar{q}$. Diagrammatically, $$\xymatrix@=12pt{ C'' \ar[rrrrd]^{h} \ar@{-->}[rrd]_{\exists !\bar{q}}\\ && C' \ar[rr]_{g} && C & g\rm{ is Cartesian}\\ \bf{p}(C'') \ar[rrd]_{q} \ar[rrrrd]^{\bf{p}(h)} &&&&& \rm{if } \forall h,q \ \exists !\ \overline{q} \\ && \bf{p}(C') \ar[rr]_{f=\bf{p}(g)} && \bf{p}(C).\\ }$$ The functor $\bf{p}:\CC\to\EE$ is a \emph{fibration} if every arrow in $\EE$ has a Cartesian lift in $\CC$. Usually we will suppress the functor $\bf{p}$ and refer to a fibration by its domain $\CC$. A \emph{fibered functor} over $\EE$ is a functor between fibrations which commutes with the projections to $\EE$ and sends Cartesian arrows to Cartesian arrows. We let $\uHom_{\EE}(\CC,\DD)$ denote the category of fibered functors and natural transformations between them. Given a fibration $\CC\to\EE$ and an object $A\in\EE$, we write $\CC(A)$ for the subcategory of objects and arrows sent to $A$ and $1_A$, respectively. If we fix an $f:B\to A$ and choose a Cartesian lift $f^*C\to C$ for every $C\in\CC(A)$, this defines a ``pullback'' functor $f^*:\CC(A)\to\CC(B)$: $$\xymatrix@=12pt{ &f^*D \ar@{-->}[rd]_{\exists! f^*h} \ar[rrr] &&& D \ar[rd]^{\forall h} \\ && f^*C \ar[rrr] &&& C &\\\\ && \CC(B) &&& \ar[lll]_{f^*} \CC(A). }$$ A \emph{cleavage} on $\CC$ is a choice of Cartesian lift $f^*C\to C$ for every pair $\<f,C\>$. Modulo the axiom of choice, these always exist. Cartesian arrows compose and any two ``pullbacks'' are canonically isomorphic (by the usual argument) yielding canonical isomorphisms $(g\circ f)^*\cong f^*\circ g^*$.
This shows that the data necessary to specify a cloven fibration is exactly the same as that necessary to give an indexed category. The choice of cleavage is essentially irrelevant; different choices induce canonically isomorphic pullbacks. Therefore, we abuse terminology by referring to ``the'' transition functor $\CC(A)\to\CC(B)$ associated with a fibration without specifying a chosen cleavage. This is harmless so long as we make no claims about object identity in $\CC(A)$. However, note that the identity of arrows in $\Hom_{\CC(A)}(f^*D,f^*C)$ does not depend on the choice of cleavage. We say that $\CC$ is \emph{fibered in $\Sets$} (or $\Grpd$, $\Ptop$, etc.) if for every $A\in\EE$, $\CC(A)$ is a set (or a groupoid, pretopos, $\ldots$). A fibration in $\Sets$ is essentially an ordinary presheaf $\EE^{\op}\to\Sets$. This follows from the fact that the Cartesian lifts of a fibration in $\Sets$ are unique: any two must be isomorphic and isomorphisms in a set (regarded as a discrete category) are identities. In particular, the representable functors $yA$ correspond to the forgetful functor $\EE\!/A\to\EE$ and there is a fibrational analogue of the Yoneda lemma. \begin{defn} For every object $A\in\EE$ the forgetful functor $\EE\!/A\to\EE$ is a fibration in $\Sets$, called the \emph{representable fibration} $yA$: $$\xymatrix{ \EE\!/A \ar[dd]_{yA} & E' \ar[rr]^{\rm{Cartesian}} \ar[rd] && E \ar[ld]\\ &&A&\\ \EE & E' \ar[rr] && E\\ }$$ \end{defn} \begin{prop}[2-Yoneda Lemma]\label{2yoneda} For any fibration $\CC\to\EE$ and any object $A\in\EE$ there is an equivalence of categories (natural in $\CC$ and $A$) $$\CC(A)\ \simeq\ \uHom_{\EE}(yA,\CC).$$ \end{prop} \begin{proof} The argument is essentially the same as in $\Sets$. Evaluation at $1_A\in yA(A)$ defines a functor $\uHom_{\EE}(yA,\CC)\to\CC(A)$. Conversely, given an object $C\in\CC(A)$ we define a fibered functor $\EE\!/A\to\CC$ by sending $f\mapsto f^*C$.
We always have $1_A^*C\cong C$, so the composite $\CC(A)\to\uHom(yA,\CC)\to\CC(A)$ is clearly an equivalence of categories. On the other hand, suppose $F:yA\to\CC$ is a fibered functor. Every $g:E\to A$ can be viewed as a map $g\to 1_A$ in $\EE\!/A$. This map is Cartesian (every map is Cartesian, since $yA$ is fibered in $\Sets$). Therefore its image $F(g)\to F(1_A)$ is also Cartesian, so $F(g)\cong g^*(F(1_A))$. This guarantees that the composite $\uHom_{\EE}(\EE\!/A,\CC)\to\CC(A)\to \uHom_{\EE}(\EE\!/A,\CC)$ is again an equivalence of categories. \end{proof} A \emph{splitting} of $\CC$ is a coherent cleavage $(g\circ f)^*= f^*\circ g^*$, corresponding to a presheaf of categories. Not every fibration has a splitting, but we can always find a split fibration which is pointwise equivalent to $\CC$. Indeed, if we set $\widehat{\CC}(A)=\uHom_{\EE}(\EE\!/A,\CC)$ then, because composition of functors is strict, this defines a presheaf of categories over $\EE$. Pointwise equivalence follows from the Yoneda lemma. Now we turn to the definition of stacks, which have roughly the same relation with fibrations as sheaves have to presheaves. \begin{defn} Suppose that $J=\{A_i\to Q\}$ is a covering family in $\EE$. Let $A_{ij}=A_i\overtimes{Q} A_j$. An object of \emph{descent data for $J$ over $Q$}, denoted $C_i/\alpha$, consists of the following: \begin{itemize} \item a family of objects $C_i\in \CC(A_i)$ \item together with a family of gluing isomorphisms $\alpha_{ij}:p_i^*C_i\cong p_j^*C_j$ in $\CC(A_{ij})$ \item which satisfy the following \emph{cocycle conditions} in $\CC(A_i)$ and $\CC(A_{ijk})$, respectively: $$\Delta^*(\alpha_{ii})=1_{C_i} \hspace{1cm} p_{jk}^*(\alpha_{jk})\circ p_{ij}^*(\alpha_{ij})=p_{ik}^*(\alpha_{ik}).$$ \end{itemize} The situation is summarized in the following diagram (where, e.g., $\displaystyle p_{12}$ is the aggregate of the projections $A_{ijk}\to A_{ij}$).
$$\xymatrix@R=3ex{ p_{jk}^*(\alpha_{jk})\circ p_{ij}^*(\alpha_{ij})=p_{ik}^*(\alpha_{ik}) & p_i^*C_i\stackrel{\alpha_{ij}}{\cong}p_j^*C_j & C_i & C/\alpha\\ \raisebox{-4ex}{$\displaystyle\coprod_{i,j,k} A_{ijk}$} \ar@<1.5ex>[r]^-{p_{12}} \ar[r]|-{p_{13}} \ar@<-1.5ex>[r]_-{p_{23}} & \raisebox{-4ex}{$\displaystyle\coprod_{i,j} A_{ij}$} \ar@<-1.5ex>[r]_-{p_2} \ar@<1.5ex>[r]^-{p_1} & \raisebox{-4ex}{$\displaystyle\coprod_{i} A_{i}$} \ar@{->>}[r]^{q} \ar[l]|-{\Delta} & Q.\\ }$$ \end{defn} This defines a descent category $\Desc(\CC,J)$, where a morphism $C_i/\alpha\to D_i/\beta$ is a family of maps $h_i:C_i\to D_i$ which respect the descent isomorphisms: $\beta_{ij}\circ p_i^*(h_i)=p_j^*(h_j)\circ \alpha_{ij}$. Roughly speaking, we can think of $\alpha_{ij}$ as instructions for gluing the pieces $C_i$ together to form an object over $Q$; the cocycle conditions guarantee these gluings are compatible. Moreover, there is a functor $i_Q:\CC(Q)\to\Desc(\CC,J)$ sending each object $D\in\CC(Q)$ to $q^*D$, together with the canonical isomorphism $p_1^*(q^*D)\cong p_2^*(q^*D)$. \begin{defn} $\CC$ is a \emph{stack} if the map $i_Q$ is an equivalence of categories for every object $Q$ and every covering family $J$. We denote the 2-category of stacks by $\Stk(\EE)$ (when the topology $\JJ$ is implicit). \end{defn} When we are working with the coherent topology on a pretopos, we can factor any covering family into an extensive (coproduct) cover $\{A_i\to\coprod_i A_i\}$ and a regular (singleton) cover $\{A\twoheadrightarrow Q\}$. The stack condition for the former simply asserts that $\CC(\coprod_i A_i)\simeq\prod_i \CC(A_i)$ and this is typically easy to verify. Therefore we usually restrict consideration to singleton covers, leaving the general case to the reader. We note that any representable functor $yA$ is a sheaf of sets (and hence a stack) for the coherent topology. \begin{lemma}[cf.
Awodey \cite{awodey_thesis}]\label{stack_to_sheaf} $ $\begin{itemize} \item If $\CC$ is a presheaf of categories (i.e. $\CC_0$ and $\CC_1$ are fibered in $\Sets$) and $\CC$ is a stack, then it is equivalent in $\Stk(\EE)$ to the sheafification $$\bf{a}\CC=\bf{a}\CC_1\rightrightarrows\bf{a}\CC_0.$$ \item For every stack $\CC$ there is a sheaf of categories $\overline{\CC}$ such that $\CC(A)\simeq\overline{\CC}(A)$ (naturally in $A$). \end{itemize} \end{lemma} \begin{proof} Recall that for a presheaf $P$, the sheafification is defined by $\bf{a}(P)=P^{++}$, where $P^+$ is the presheaf of matching families in $P$. A (strictly) matching family for a singleton cover $q:A\to Q$ is simply an object $C\in\widehat{\CC}(A)$ such that $p_1^*C=p_2^*C$. Similarly, a map $h:C'\to C$ matches for $q$ if $p_1^*(h)=p_2^*(h)$. This is precisely a descent map $C'/\!\!=\longrightarrow C/\!\!=$. This displays $\widehat{\CC}^+(Q)$ as a full subcategory inside $\Desc(\CC,q)$. As $\CC$ is a stack, any object of descent data $C/\alpha$ has a representative $D\in\CC(Q)$ with $C/\alpha\cong q^*D/\!\!=$. Therefore the inclusion $\widehat{\CC}^+(Q)\subseteq\Desc(\CC,q)$ is essentially surjective and $$\widehat{\CC}^+(Q)\simeq\Desc(\CC,q)\simeq\widehat{\CC}(Q).$$ Now for any stack $\CC$ we set $\overline{\CC}=\bf{a}\widehat{\CC}$, the sheafification of the strict fibration $\widehat{\CC}(A)=\uHom_{\EE}(yA,\CC)$. Now we have an equivalence $\widehat{\CC}\simeq\CC$ where $\widehat{\CC}$ is a presheaf and $\CC$ is a stack, so the previous statement applies: $\CC(A)\simeq\widehat{\CC}(A)\simeq\overline{\CC}(A)$. \end{proof} Because of this many sheaf constructions (e.g., limits and colimits, direct and inverse image) have ``up to isomorphism'' analogues for stacks. Moreover, many of the same formulas will hold so long as we weaken equalities to isomorphisms and isomorphisms to equivalence of categories. In particular, limits of stacks are computed pointwise. 
Since adjunctions, disjointness and equivalence relations can be expressed in finite limits, $\CC$ is a pretopos in $\Stk(\EE)$ just in case $\CC(A)$ is a pretopos for each $A\in\EE$ and each transition map $f^*:\CC(A)\to\CC(B)$ is a pretopos functor. Shifting attention to topological spaces, a stack on $X$ is a stack on the open sets $\OO(X)$. Any continuous function $f:X\to Y$ induces an adjoint pair $\xymatrix{\Stk(X)\rtwocell^{f_*}_{f^*}{'\top} & \Stk(Y)}$. These are defined just as for sheaves. \begin{comment} The direct image is precomposition with $f^{-1}$: $$f_*\CC:\OO(Y)^{\op}\stackrel{f^{-1}}{\longrightarrow}\OO(X)^{\op}\stackrel{\CC}{\longrightarrow}\Cat.$$ For the inverse image we present a stack $\DD\in\Stk(Y)$ as a (weighted) colimit of open sections $\DD\simeq\colim_j U_j$ and set $f^*\DD\simeq\colim_j f^{-1} U_j$. When $X$ is a topological groupoid $X_1\overset{d}{\underset{c}{\rightrightarrows}} X_0$ and $\CC$ is a stack on $X_0$, an \emph{equivariant structure} for $\CC$ is a map $\rho:d^*\CC\to c^*\CC$. For each $\alpha:x\to x'$ this defines a map of stalks $\rho_\alpha:\CC_{x}\to\CC_{x'}$ and these are subject to the familiar cocycle conditions $\rho_{1_x}\simeq 1_{\CC_x}$ and $\rho_{\alpha\circ\beta}\simeq\rho_\alpha\circ\rho_\beta$. \end{comment} \section{Affine schemes} Now we define a sheaf of pretoposes over the topological groupoid $\MM_{\EE}$ (from chapter 1, definitions \ref{M0points}, \ref{M0opens}, \ref{M1points}) which will act as the structure sheaf in our scheme construction. This arises most naturally as a stack over $\EE$. The ``walking arrow'' is the poset $\2=\{0\leq 1\}$ regarded as a category, and the exponential $\EE^{\2}$ is the arrow category of $\EE$. Its objects are $\EE$-arrows $E\to A$ and its morphisms are commutative squares. The inclusion $\{1\}\subseteq \2$ induces the \emph{codomain fibration} $\EE^{\2}\to\EE$. To justify the name, note that an arrow in $\EE^{\2}$ (i.e. 
a commutative square) is Cartesian just in case it is a pullback square; therefore $\EE^{\2}\to\EE$ is a fibration so long as $\EE$ has finite limits. The associated pseudofunctor $\EE^{\op}\to\Cat$ sends $A\mapsto\EE\!/A$, with the contravariant action given by pullback. We will usually refer to an object in $\EE\!/A$ by its domain $E$; when necessary, we generically refer to the projection $E\to A$ by $\pi$ or $\pi_E$. \begin{prop}[cf. Bunge \& Pare \cite{bunge_pare}] When $\EE$ is a pretopos, $\EE^{\2}$ is a stack for the coherent topology. \end{prop} \begin{proof} Suppose that we have a covering map $q:A\twoheadrightarrow Q$ with its kernel pair $K=A\overtimes{Q} A\overset{p_1}{\underset{p_2}{\rightrightarrows}} A$. Consider an object $E\to A$ together with descent data $\alpha:p_1^* E\stackrel{\sim}{\longrightarrow} p_2^*E$. From this we define an equivalence relation $R_\alpha$ on $E$ by setting $e\stackrel{\alpha}{\sim} e'$ whenever $\alpha(e)=e'$ (which makes sense so long as $\pi(e)\stackrel{q}{\sim} \pi(e')$). More formally, $$\xymatrix{ p_1^*E \ar@{-->>}[r] \ar@{>->}[d]_{\<1,\alpha\>} & R_\alpha \ar@{>-->}[d]\\ p_1^*E\times p_2^*E \ar[r] & E\times E. }$$ Invertibility of $\alpha$ implies that $R_\alpha$ is symmetric, while reflexivity and transitivity follow from the cocycle conditions on $\alpha$. Moreover, the quotient $E/\alpha$ is compatible with $q$ in the following sense. Since $\alpha$ is a map over $K$, we can factor through the kernel pair: \raisebox{2ex}{$\xymatrix@R=1.5ex@C=2ex{p_1^*E \ar@{->>}[r] \ar@{-->}[rd] & R_\alpha \ar@<.5ex>[r] \ar@<-.5ex>[r] & E \ar[d] \\ &K \ar@<.5ex>[r] \ar@<-.5ex>[r] & A. }$} Therefore $q$ coequalizes the two upper composites $p_1^*E\rightrightarrows E\to A$ and, since $p_1^*E\twoheadrightarrow R_\alpha$ is epic, $q$ also coequalizes the maps $R_\alpha\rightrightarrows E\to A$.
It follows that $R_\alpha$ factors through the kernel pair and this induces a map from $E/\alpha$ into $Q$: $$\xymatrix{ R_\alpha \ar@<.7ex>[r] \ar@<-.7ex>[r] \ar@{-->}[d] & E \ar[d] \ar@{->>}[r] & E/\alpha \ar@{-->}[d]\\ K \ar@<.7ex>[r] \ar@<-.7ex>[r] & A \ar@{->>}[r]_q & Q.\\ }$$ To see that $\EE^{\2}$ is a stack, we need to check that $E\cong q^*(E/\alpha)$ so that the right-hand square is Cartesian. It is enough to show (following Johnstone \cite{elephant} Prop 1.2.1) that the comparison map $h:E\to q^*(E/\alpha)$ is both epic and monic (and hence an iso). For the first claim, consider the diagram: $$\xymatrix{ p_1^* E \ar[r] \ar[d] & E \ar[r]^-h & q^*(E/\alpha) \ar[d] \ar[r] & A \ar[d]^q\\ E \ar@{->>}[rr] && E/\alpha \ar[r] & Q.\\}$$ The right-hand square is evidently a pullback; the outer rectangle is as well, since it equals \raisebox{2.5ex}{$\xymatrix@=2ex{p_1^*E \ar[r] \ar[d] & K \ar[r] \ar[d] & A \ar[d] \\ E \ar[r] & A \ar[r] & Q}$}. This guarantees that the left-hand square is a pullback, and that the map $p_1^*E\twoheadrightarrow q^*(E/\alpha)$ is epic (since pullbacks preserve covers). Therefore the second factor $E\twoheadrightarrow q^*(E/\alpha)$ is epic as well. To show the map is monic consider the composite $E\stackrel{h}{\to} q^*(E/\alpha)\rightarrowtail E/\alpha\times A$. Clearly, this is monic just in case $h$ itself is. Now suppose that we have parallel maps ${Z\rightrightarrows E}$ whose composites $Z\to E\to E/\alpha\times A$ are equal. Equality in the first component shows that $Z$ factors through $R_\alpha$; equality in the second guarantees that the composite $Z\to R_\alpha \to K$ equalizes the projections $K\rightrightarrows A$. Therefore $Z$ factors through the equalizer $\Eq(K\rightrightarrows A)$, and this equalizer is just the diagonal $\Delta:A\to K$. 
This leaves us with the diagram below, which plainly shows that the two maps $Z\rightrightarrows E$ are equal: $$\xymatrix{ && Z \ar@{-->}[lld]_{z_1=z=z_2\ \ } \ar@{-->}[d] \ar[rrd]_{z_2} \ar@<1ex>[rrd]^{z_1} &&&\\ **[l] \Delta^*p_1^*E \cong E\ar[r] \ar[d] & p_1^*E \ar@{->>}[r] \ar[rd] & R_\alpha \ar[d] \ar@<.3ex>[rr] \ar@<-.7ex>[rr] && E\ar[d] \ar@{->>}[r] & E/\alpha\\ A \ar@<-1ex>@/_2ex/[rrrr]_{1_A} \ar[rr]^{\Delta} && K \ar@<.5ex>[rr] \ar@<-.5ex>[rr] && A &\\ }$$ \end{proof} \begin{defn}[The structure sheaf]\label{def_str_sheaf} Given a pretopos $\EE$, the \emph{structure sheaf} $\OO=\OO_{\EE}$ is the sheaf over $\MM(\EE,\kappa)$ associated with the stack $\EE^{\2}$ via the maps $\Stk(\EE)\stackrel{\widehat{\ }}{\longrightarrow}\Cat(\Sh(\EE))\simeq \EqSh(\MM)$ (cf. lemma \ref{stack_to_sheaf} and theorem \ref{eq_top_equiv}). \end{defn} As we have defined it, $\OO$ is an equivariant sheaf of pretoposes over $\MM$ but usually we will work up to equivalence, as if $\OO$ were a stack over $\EE$. This eliminates a good deal of bookkeeping and emphasizes the relationship to the codomain fibration (as opposed to its strict replacement $\widehat{\EE^{\2}}$). At the same time, this allows us to justify our formal manipulations while avoiding the additional machinery of equivariant stacks. We can give a more concrete description of $\OO$ using the notion of relative equivariance. For any open subset $U\subseteq\MM_0$ there is a smallest parameter $k=k(U)$ such that $U\subseteq V_k$; we call $k$ the \emph{context} of $U$. $U$ is \emph{relatively invariant} if it is closed under those isomorphisms sending $k\mapsto k$. We may also call $U$ \emph{$k$-invariant} or, when $k=\emptyset$, simply \emph{invariant}. By the definability theorem \ref{stab_thm} from Chapter 1, an invariant set $U$ which is compact for invariant covers is definable: $U=V_\varphi$ for some sentence $\varphi$. Any other invariant set is a union of these definable pieces. 
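To illustrate the distinction (a hedged example of our own devising, with the theory of groups standing in for an arbitrary $\EE$):

```latex
% Invariant vs. relatively invariant opens, illustrated for the
% classifying pretopos of the theory of groups.
%
% A sentence carves out a fully invariant open: every isomorphism of
% labelled models preserves it.
\[
  V_{\varphi} = \{\mu \mid \mu\models\varphi\},
  \qquad \varphi \;=\; \exists x\,(x\neq e)
  \quad\text{(nontrivial groups)}.
\]
% A formula with the parameter k in context is only k-invariant:
% an isomorphism preserves it so long as it sends k to k.
\[
  V_{\psi(k)} = \{\mu \mid \mu\models\psi(k)\},
  \qquad \psi(x) \;=\; (x\cdot x = e \,\wedge\, x\neq e)
  \quad\text{(the label $k$ is an involution)},
\]
% so the context of this second open is k(V_{\psi(k)}) = k.
```

The first set is stable under all isomorphisms of labelled models; the second is not, since an isomorphism may relabel $k$ to a non-involution.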
By the same token, a $k$-invariant $U$ which is compact for $k$-invariant covers has the form $V_{\varphi(k)}$ for some formula $\varphi(x)$. When $U$ is relatively invariant, the pair $\<U,V[k\mapsto k]\>$ defines an open subgroupoid of $\MM$. Given an equivariant sheaf $\<E,\rho\>$, we say that a subsheaf $S\leq E$ is \emph{relatively equivariant} if the image $S\twoheadrightarrow U\subseteq \MM$ is relatively invariant and $S$ is equivariant with respect to the subgroupoid defined by $U$. In particular, an open section $s:V_{\varphi(k)}\to E$ is relatively equivariant if $s(\nu)=\rho_\alpha(s(\mu))$ whenever $\alpha(\mu(k))=\nu(k)$. We let $E(U)$ denote the set of relatively equivariant sections of $E$ over $U$, while $\Gamma_{\eq}(E)$ denotes the set of equivariant global sections $\MM_0\to E$. \begin{prop}\label{str_sheaf_secs} $\Gamma_{\eq}(\OO)$ is equivalent to $\EE$. More generally, $$\OO(V_{\varphi(k)})\simeq\EE\!/\varphi.$$ \end{prop} \begin{proof} For the first claim we have the following sequence of equivalences $$\begin{array}{rcl} \EE\simeq \EE\!/1&\simeq&\uHom_{\EE}(y1,\EE^{\2})\\ &\simeq & \uHom_{\EE}(y1,\widehat{\EE^{\2}})\\ &\simeq & \uHom_{\MM}(\ext{1},\OO_{\EE}) = \Gamma_{\eq}(\OO_{\EE}). \end{array}$$ Here we use the Yoneda lemma together with the fact that $\MM_0=\ext{1}$ is terminal in $\EqSh(\MM)$. Similarly, suppose that we have a relatively equivariant section $s\in\OO(V_{\varphi(k)})$. By proposition \ref{equiv_ext}, there is a unique equivariant extension along the canonical section $\hat{k}$: $$\xymatrix{ \ext{\varphi} \ar@{-->}[r]^{\exists ! 
\overline{s}} & \OO\\ & V_{\varphi(k)} \ar[u]_s \ar[ul]^{\hat{k}}\\ }$$ Since the equivalence $\EqSh(\MM)\simeq\Sh(\EE)$ sends $\ext{\varphi}$ to the representable sheaf $y\varphi$, this gives another chain of equivalences $$\OO(V_{\varphi(k)})\simeq\uHom_{\MM}(\ext{\varphi},\OO)\simeq\uHom_{\EE}\left(y\varphi,\widehat{\EE^{\2}}\right)\simeq \EE\!/\varphi.$$ \end{proof} \begin{lemma}\label{struct_stalks} The stalk of $\OO$ at a labelled model $\mu$ is equivalent to the diagram of the model $M_\mu$ (cf. definition \ref{def_diagram}). An isomorphism $\alpha:\mu\to\nu$ acts on these fibers by sending $\<\sigma, a\>\longmapsto\<\sigma,\alpha(a)\>$. \end{lemma} \begin{proof} The partial sections $s\in\OO(V_{\varphi(k)})$ cover the structure sheaf, so we can use them to compute its stalks. Recall (lemma \ref{inclusion_lemma}) that an inclusion of basic open sets always has the form $V_{\psi(k,l)}\subseteq V_{\varphi(k)}$ where $\exists z.\psi(x,z)\vdash\varphi(x)$. This corresponds to a map from $\psi\to\varphi$ in $\EE$; pullback along this map describes the transition functor $\OO(V_{\varphi(k)})\to\OO(V_{\psi(k,l)})$. This means that the stalk $\OO_\mu\simeq\displaystyle\colim_{\mu\models\varphi(k)} \EE\!/\varphi$ is a directed colimit of slices and pullbacks, i.e. a localization as in definition \ref{def_localization}. This is nearly the definition of the diagram; however, $\DD(M_\mu)$ and $\OO_\mu$ are presented as colimits over different index categories. The former is indexed by elements $a\in\varphi^M$; the latter is indexed by labels $k\in\kappa$ with $\mu\models \varphi(k)$. Evaluation of parameters induces a comparison map $\OO_\mu\to\DD(M_\mu)$ sending $\<\sigma,k\>\mapsto\<\sigma,\mu(k)\>$. Since $\mu$ is a surjective labelling, this functor is both surjective on objects and full. As we are working with pretoposes, conservativity will imply faithfulness and hence an equivalence of categories. 
If $\<\sigma,\mu(k)\>\cong\<\tau,\mu(l)\>$ is an isomorphism in $\DD(M_\mu)$ then there is a diagram $$\xymatrix{ D \ar[d]_{\sigma} & \ar[l] \raisebox{1.5ex}{$f^*\sigma \cong g^*\tau$} \ar[r] \ar/va(-8) 1.6cm/;[d] \ar/va(-6) 2.4cm/;[d] &E \ar[d]^{\tau}\\ \varphi & \ar[l]^{f} \gamma \ar[r]_{g} & \psi\\ }$$ together with an element $c\in \gamma^\mu$ with $f^\mu(c)=\mu(k)$ and $g^\mu(c)=\mu(l)$. This same diagram forces an isomorphism in $\OO_\mu$: for any label $\mu(j)=c$, $$\<\sigma,k\>\cong\big\<f^*\sigma,j\big\>\cong\big\<g^*\tau,j\big\>\cong\<\tau,l\>.$$ Finally, we want to see that an isomorphism $\alpha:\mu\stackrel{\sim}{\longrightarrow}\nu$ acts by sending $\<\sigma,a\>\mapsto\<\sigma,\alpha(a)\>$. We proceed in two steps, first supposing that there is a parameter $k$ with $a=\mu(k)$ and $\alpha(a)=\nu(k)$. Since $\<\sigma,k\>$ is a $k$-equivariant section sending $\mu\mapsto \<\sigma,\mu(k)\>$ and $\alpha:k\mapsto k$, it follows that $$\rho_\alpha(\<\sigma,a\>)\cong\rho_\alpha(\<\sigma,k\>(\mu))\cong\<\sigma,k\>(\nu)\cong\<\sigma,\alpha(a)\>.$$ For the general case, fix labels $\mu(j)=a$ and $\nu(l)=\alpha(a)$.
Using the reassignment lemma \ref{reassignment} we may find a diagram of labelled models satisfying the following conditions (with $k$ disjoint from $j$ and $l$): $$\xymatrix{ \mu \ar[r]^{\alpha} \ar[d]_{\beta} & \nu & \mu'(k)=\mu'(j)=\beta(a)\\ \mu' \ar[r]_{\alpha'} & \nu' \ar[u]_{\beta'}& \beta'(\nu'(k))=\beta'(\nu'(l))=\alpha(a)\\ }$$ Since $\beta:j\mapsto j$, $\alpha':k\mapsto k$ and $\beta':l\mapsto l$, the previous observation, applied three times, gives $$\begin{array}{rcl} \rho_\alpha(\<\sigma,a\>)&\cong&\rho_{\beta'}\circ\rho_{\alpha'}\circ\rho_{\beta}(\<\sigma,\mu(j)\>)\\ &\cong&\rho_{\beta'}\circ\rho_{\alpha'}(\<\sigma,\mu'(j)\>)\\ &\cong&\rho_{\beta'}\circ\rho_{\alpha'}(\<\sigma,\mu'(k)\>)\\ &\cong&\rho_{\beta'}(\<\sigma,\nu'(k)\>)\\ &\cong&\rho_{\beta'}(\<\sigma,\nu'(l)\>)\\ &\cong&\<\sigma,\nu(l)\>\cong\<\sigma,\alpha(a)\>.\\ \end{array}$$ \end{proof} \begin{cor}[Subdirect product representation]\label{sub_dir_prod} Every pretopos $\EE$ embeds (conservatively) into a product of local pretoposes. \end{cor} \begin{proof} Lemma \ref{struct_stalks} showed that the stalks of $\OO=\OO_{\EE}$ are local pretoposes. Thus $\prod_{\mu\in\MM} \OO_\mu$ is a product of local pretoposes. Proposition \ref{str_sheaf_secs} showed that $\EE$ is equivalent to the equivariant global sections of $\OO_{\EE}$. Evaluating these sections at each stalk defines a pretopos functor $\EE\to\prod_{\mu\in\MM} \OO_\mu$. This is clearly conservative: if $\varphi^\mu\cong\psi^\mu$ for every $\mu\in\MM$ then the equivalence $\EE\simeq\Gamma_{\eq}\OO$ ensures that $\varphi\cong\psi$ in $\EE$. \end{proof} We collect the results of this section into a theorem: \begin{thm}[Equivariant sheaf representation for pretoposes]\label{aff_schemes} For every pretopos $\EE$ there is a topological groupoid $\MM=\MM_{\EE}$ and an equivariant sheaf of pretoposes $\OO=\OO_{\EE}$ over $\MM$ such that $\Gamma_{\eq}\OO\simeq\EE$ and, for each $\mu\in\MM_0$, the stalk $\OO_\mu$ is a local pretopos (cf.
definition \ref{def_local}). \end{thm} \begin{defn}[Affine schemes] The pair $\<\MM_{\EE},\OO_{\EE}\>$ is the \emph{affine (logical) scheme} associated with $\EE$, denoted $\Spec(\EE)$. \end{defn} In particular, we may specialize to the case of classical first-order logic by considering Boolean pretoposes. \begin{cor} For every classical first-order theory $\bTT$ there is a topological groupoid of (labelled) $\bTT$-models and isomorphisms $\MM=\MM_{\bTT}$ and an equivariant sheaf of Boolean pretoposes $\OO=\OO_{\bTT}$ over $\MM$ such that $\Gamma_{\eq}\OO\simeq\EE_{\bTT}$. For each model $\mu\in\MM_0$, the stalk $\OO_\mu$ is a well-pointed pretopos, the classifying pretopos of the complete diagram of $\mu$. \end{cor} \begin{proof} As discussed in Chapter 2, section \ref{sec_bool_ptop}, every classical theory has a classifying pretopos $\EE_{\bTT}$ which is Boolean, and we apply the previous theorem. The sheaf of pretoposes $\OO_{\bTT}$ is Boolean because complements are preserved by slicing. The stalks are well-pointed by lemma \ref{bool_stalks}. \end{proof} \begin{cor} Every Boolean pretopos $\BB$ embeds into \begin{itemize} \item a product of well-pointed pretoposes. \item a power of $\Sets$. \end{itemize} \end{cor} \begin{proof} The first claim follows from corollary \ref{sub_dir_prod} together with the fact (lemma \ref{bool_stalks}) that any local, Boolean pretopos is well-pointed. For the second claim, notice that the global sections $\Hom(1,-)$ in a well-pointed pretopos define a faithful (and hence conservative) functor into sets. Composing these functors with the previous claim yields the asserted embedding: $$\xymatrix{ \BB \ar@{>->}[r] \ar@{>->}[rd] & \prod_{\mu\in\MM} \OO_{\BB,\mu} \ar@{>->}[d]^{\prod_\mu \Hom_{\OO_{\BB,\mu}}(1,-)}\\ & \prod_{\mu\in\MM} \Sets. }$$ \end{proof} \section{The category of logical schemes} In the previous section we defined the affine scheme $\<\MM_{\EE},\OO_{\EE}\>$ associated with a pretopos $\EE$.
This gave a representation of $\EE$ as the equivariant global sections of $\OO_{\EE}$. In this section we will show that this construction is functorial in $\EE$; pretopos functors and natural transformations lift to the structure sheaves, and these are recovered from their global sections. This guides the definition of the 2-category of logical schemes. \begin{prop}\label{embed_prop} \mbox{}\begin{itemize} \item An interpretation $I:\EE\to\FF$ (contravariantly) induces a reduct functor $I_\flat:\MM_{\FF}\to\MM_{\EE}$. \item $I$ also defines an internal pretopos functor $I^\sharp:\OO_{\EE}\to I_{\flat*}\OO_{\FF}$ such that $\Gamma_{\eq}(I^\sharp)\cong I$. \item Every stalk of the transposed map $I_\flat^*\OO_{\EE}\to\OO_{\FF}$ is a conservative functor. \item For any natural transformation $\xymatrix{\EE \rtwocell^{I}_J{\tau} & \FF}$ there is an internal pretopos functor $\tau^*:J_{\flat*}\OO_{\FF}\to I_{\flat*}\OO_{\FF}$ (in the opposite direction) and a natural transformation $$\xymatrix{ \OO_{\EE}\ar[rr]^{I^\sharp} \ar[dr]_{J^\sharp} \rrlowertwocell<\omit>{<3>\ \ \tau^\sharp} && I_{\flat*}\OO_{\FF}\\ & J_{\flat*}\OO_{\FF} \ar[ru]_{{\tau^*}} & \\ }$$ such that $\Gamma_{\eq}\tau^*\cong 1_{\FF}$ and $\Gamma_{\eq}\tau^\sharp\cong\tau$. \end{itemize} \end{prop} \begin{proof} Recall that the reduct of an $\FF$-model $N$ along $I$ is defined to be the composite $I_\flat N:\EE\stackrel{I}{\to}\FF\stackrel{N}{\to}\Sets$. The reduct of an isomorphism $\xymatrix{\FF \rtwocell^N_{N'}{\omit\rm{\rotatebox[origin=c]{270}{$\cong$}}} & **[r]\Sets}$ is defined in the same way. Since $A^{I_\flat N}=(IA)^N$, the reduct $I_\flat N$ inherits labellings from $N$, defining a functor $I_\flat:\MM_{\FF}\to\MM_{\EE}$. Given a basic open set $V_{\varphi(k)}\subseteq \MM_{\EE}$, the inverse image along $I_\flat$ is $V_{I\varphi(k)}$, so $I_\flat$ is continuous on the space $\Ob(\MM_{\EE})$.
The map $\Ar(\MM_{\FF})\to\Ar(\MM_{\EE})$ is continuous because open sets of morphisms are defined by labels, which are inherited. Hence $I_\flat:\MM_{\FF}\to\MM_{\EE}$ is a continuous functor and induces a geometric morphism $\xymatrix{ \EqSh(\MM_{\FF}) \rrtwocell^{I_\flat^*}_{I_{\flat *}}{`\bot} && \EqSh(\MM_{\EE}).\\ }$ In order to define $I^\sharp$ we consider the relatively equivariant sections of $I_{\flat*}\OO_{\FF}(V_{\varphi(k)})$. These are equivalent to the sections of $\OO_{\FF}(V_{I\varphi(k)})$ and hence to $\FF/I\varphi$. $I^\sharp$ is then defined section-wise by the obvious functors $I_\varphi:\EE\!/\varphi\to\FF/I\varphi$. In particular, setting $\varphi=1$ gives $\EE\!/1\simeq\EE$ and $\Gamma_{\eq} I^\sharp\cong I:\EE\to\FF$. Recall that the stalk of $\OO_{\EE}$ at a labelled model $\mu$ is the diagram of $\mu$; its objects (modulo coproducts and quotients) are parameterized formulas of the form $\varphi(x,b)$ where $\varphi(x,y)$ is a formula in context $A\times B$ and $b$ is an element in $B^\mu$. Since pullbacks preserve fibers, the stalk of $\OO_{\EE}$ at $I_\flat(\nu)$ is the same as the stalk of $I_\flat^*\OO_{\EE}$ at $\nu$. Moreover, by the definition of the reduct $I_\flat(\nu)$ we have $B^{I_\flat(\nu)}=(IB)^\nu$, allowing us to regard the element $b\in B^{I_\flat(\nu)}$ as an element of $\nu$. Thus (at the level of stalks) the transposed map $I_\flat^*\OO_{\EE}\to\OO_{\FF}$ maps the diagram of the reduct $I_\flat(\nu)$ into the diagram of $\nu$ by sending $\varphi(x,b)\mapsto I\varphi(x,b)$. Since the reduct $I_\flat(\nu)$ satisfies a sequent $\varphi(x,b)\vdash\psi(x,b')$ just in case $\nu$ satisfies $I\varphi(x,b)\vdash I\psi(x,b')$, this map is obviously conservative on stalks. Similarly, suppose $I,J:\EE\to\FF$ and $\tau:I\Rightarrow J$ is a natural transformation.
Pullback along $\tau_\varphi$ induces a pretopos functor $\tau_\varphi^*:\FF/J\varphi\to\FF/I\varphi$ while $\tau^\sharp$ is induced by the naturality square for $E\in\EE\!/\varphi$: $$\xymatrix{ IE \ar@/^3ex/[rr]^{\tau_E} \ar[d] \ar[r]_-{\tau_E^\sharp} & \tau_\varphi^*(JE) \ar[r] \ar[ld]& JE \ar[d]& \EE\!/\varphi \ar[rr]^{I/\varphi} \ar[dr]_{J/\varphi} \rrlowertwocell<\omit>{<3>\ \ \tau^\sharp_\varphi} && \FF/I\varphi\\ I\varphi \ar[rr]_{\tau_\varphi} && J\varphi && \FF/J\varphi \ar[ru]_{{\tau_\varphi^*}} & \\ }$$ Since $I_{\flat*}\OO_{\FF}(V_{\varphi(k)})\simeq\OO_{\FF}(V_{I\varphi(k)})\simeq\FF/I\varphi$ and $J_{\flat*}\OO_{\FF}(V_{\varphi(k)})\simeq\FF/J\varphi$, this gives a section-wise description of the transformation $\tau^\sharp$ asserted in the proposition. Taking global sections of the transformation amounts to setting $\varphi=1$ in the diagrams above. Since $I$ and $J$ preserve the terminal object, $\tau_1^*$ is the identity $1_{\FF}$ and the component of $\Gamma_{\eq}\tau^\sharp$ at $E\in\EE\!/1$ is just $\tau_E$: $$\xymatrix{ IE \ar[r]^{\tau_E^\sharp=\tau_E} \ar[d] & JE \ar@{=}[r] \ar[ld]& JE \ar[d]& \EE \ar[rr]^{I} \ar[dr]_{J} \rrlowertwocell<\omit>{<3>\ \ \tau} && \FF\\ 1_{\FF} \ar[rr]_{!} && 1_{\FF} && \FF \ar@<-1ex>@{=}[ru]_{1_{\FF}^*} & \\ }$$ Therefore, modulo the canonical equivalences $\Gamma_{\eq}\OO_{\EE}\simeq\EE\!/1\cong\EE$ and $\Gamma_{\eq}\OO_{\FF}\simeq\FF$, $\Gamma_{\eq}\tau^\sharp$ is isomorphic to $\tau$. \end{proof} \begin{defn}\label{def_ax_space} We adapt the following definitions from algebraic geometry (cf. Hartshorne \cite{hartshorne}): \begin{itemize} \item An \emph{axiomatized space} $\XX$ is a topological groupoid $X$ together with a sheaf of pretoposes $\OO_{\XX}\in\EqSh(X)$. \item $\XX$ is \emph{locally axiomatized} if, for each $x\in X$, the stalk $(\OO_{\XX})_x$ is a local pretopos (i.e., satisfies the existence and disjunction properties, cf. definition \ref{def_local}).
\item A \emph{morphism of axiomatized spaces} $\XX\to\YY$ is a pair $\<F_\flat,F^\sharp\>$ where $F_\flat:X\to Y$ is a continuous functor of topological groupoids and $F^\sharp:\OO_{\YY}\to F_{*}\OO_{\XX}$ is an internal pretopos functor in $\EqSh(Y)$. \item A \emph{morphism of locally axiomatized spaces} is a morphism of axiomatized spaces such that the transposed map $F_\flat^*\OO_{\YY}\to\OO_{\XX}$ is conservative on every stalk.\footnote{Notice that the disjunction property implies that the subobjects of 1 in a local pretopos contain a unique maximal ideal. Conservativity on stalks amounts to the requirement that the maximal ideal in the stalk of $\OO_{\YY}$ at $F_\flat(x)$ maps into the maximal ideal of the stalk of $\OO_{\XX}$ at $x$.} \item A morphism $F:\XX\stackrel{\sim}{\longrightarrow} \YY$ is an \emph{equivalence of axiomatized spaces} if $F_\flat$ and $F^\sharp$ are both full, faithful and essentially surjective.\footnote{The conditions on $F^\sharp$ should be interpreted internally.
An internal functor $F:\CC\to\DD$ is (internally) full and faithful if the square below is a pullback, and essentially surjective if the displayed map on the right is a regular epimorphism: $$\xymatrix@=0ex{\CC_1 \pbcorner \ar[dd] \ar[rr]^{F_1} &\left.\ \ \ \right.& \DD_1 \ar[dd] \\ &&&\left.\ \ \right.& \raisebox{-3ex}{$\CC_0\overtimes{\DD_0}\rm{Iso}(\DD_1)$} \ar@{->>}[rr] &\left.\ \ \ \right.&\DD_0\\ \CC_0\times\CC_0 \ar[rr]_{F_0\times F_0} && \DD_0\times\DD_0 }$$ } \item A \emph{2-morphism of axiomatized spaces} $\xymatrix{\XX \rtwocell^F_G{\tau} & \YY}$ is a pair $\<\tau^*, \tau^\sharp\>$ where \begin{itemize} \item $\tau^*$ is an internal pretopos functor $G_{\flat*}\OO_{\XX}\to F_{\flat*}\OO_{\XX}$ (in the opposite direction) such that $\Gamma\tau^*\cong 1_{\Gamma_{\eq}\OO_{\XX}}$.\footnote{Note that this condition is justified by the fact that $\Gamma_{\eq}^{\YY}\circ F_{*}\simeq \Gamma_{\eq}^{\XX}\simeq \Gamma_{\eq}^{\YY}\circ G_{*}$.} \item $\tau^\sharp$ is a natural transformation $$\xymatrix{ \OO_{\YY} \ar[rr]^{F^\sharp} \ar[dr]_{G^\sharp} \rrlowertwocell<\omit>{<3>\ \ \tau^\sharp} && F_{\flat*}\OO_{\XX}.\\ & G_{\flat*}\OO_{\XX} \ar[ru]_{{\tau^*}} & \\ }$$ \end{itemize}\end{itemize}\end{defn} Theorem \ref{aff_schemes} states that the affine scheme $\Spec(\EE)=\<\MM_{\EE},\OO_{\EE}\>$ associated with a pretopos $\EE$ is a locally axiomatized space. Equivariant global sections provide a functor $\Gamma_{\eq}:\bf{AxSp}\to\Ptop$, and proposition \ref{embed_prop} guarantees that the composite $\Gamma\circ\Spec$ is equivalent to $1_{\Ptop}$. Now we will define a logical scheme to be a locally axiomatized space which is covered by affine pieces. \begin{defn}\mbox{}\begin{itemize} \item Suppose that $\XX$ is an axiomatized space and $U\subseteq X$ is a (non-full) subgroupoid (with the subspace topology).
The associated \emph{axiomatized subspace} $\UU\subseteq\XX$ is defined by restricting $\OO_{\XX}$ to the object space $U_0$, equipped with the equivariant action inherited from $U_1\subseteq X_1$. \item $\UU\subseteq\XX$ is an open subspace if $U_0\subseteq X_0$ and $U_1\subseteq X_1$ are open sets. \item A family of open subgroupoids $\{U^i\subseteq X\}$ is an \emph{open cover} of $X$ if any $\alpha:x\to x'$ in $X_1$ has a factorization in $\bigcup_i U^i$: $$\xymatrix{ x =z_0 \ar@/^4ex/[rrr]^\alpha \ar[r]_-{\beta_1} & z_1 \ar[r]_-{\beta_2}&\ldots \ldots\ar[r]_-{\beta_n}&z_n = x',& \beta_j\in \bigcup_i U^i_1.\\ }$$ \end{itemize}\end{defn} \begin{defn} A \emph{logical scheme} is a locally axiomatized space $\XX=\<X,\OO_{\XX}\>$ for which there exists an open cover by affine subspaces $\UU^i\simeq\Spec(\EE^i)$. This defines a full 2-subcategory $\LSch\subseteq\bf{AxSp}$. \end{defn} In order to lighten the notation when working with schemes we will write $\Gamma$ in place of $\Gamma_{\eq}$. We also drop the $(-)_\flat$ notation from definition \ref{def_ax_space} and use the same letter $F$ to denote a scheme morphism $\XX\to\YY$ and the underlying continuous groupoid homomorphism $X\to Y$. Recall that the (sheaf) descent category for an open cover $J=\{U_i\subseteq X\}$ is defined as follows. Set $U=\coprod_i U_i$ and let $q$ denote the resulting map $U\to X$. An object of $\Desc(J)$ is a pair $\<E,\alpha\>$ where $E\in \EqSh(U)$ and $\alpha:p_1^*E\cong p_2^*E$ is an isomorphism over $U\overtimes{X}U$ (satisfying the usual cocycle conditions). We close with the following lemma, which allows us to represent structure sheaves using descent. \begin{lemma}\label{eff_desc} If $\XX$ is a scheme and $J=\{U_i\subseteq X\}$ is an open cover then the induced geometric morphism $q:\prod_i\EqSh(U_i)\to \EqSh(X)$ is an open surjection in the topos-theoretic sense: the inverse image functor $q^*$ is faithful and its localic reflection is an open map.
In particular, $\EqSh(X)\simeq\Desc(J)$. \end{lemma} \begin{proof} When an equivariant sheaf $\<E,\rho\>$ is described as an \'etale space, the inverse image $q^*\<E,\rho\>$ is defined by pulling $E$ back to $U_0$ and factoring $U_1$ through the pullback of $\rho$: $$\xymatrix{ U_1\overtimes{U_0} q^*E \ar[r] \ar[rd]_{q^*\rho} & **[r] q^*(X_1\overtimes{X_0}E) \pbcorner \ar[rr] \ar[d] && X_1\overtimes{X_0}E \ar[d]_\rho \\ & q^*E \pbcorner \ar[rr] \ar[d] && E \ar[d]\\ & U_0 \ar[rr]_{q_0} && X_0 }$$ This is certainly faithful. If $g\not=h:E\to F$ are distinct maps in $\EqSh(X)$ there is a point $x$ such that the stalks disagree: $g_x\not=h_x$. Since $J$ is an open cover, $x$ belongs to one of the subgroupoids $U_i$; let $x_i$ denote the associated point in $U$. Since $q^*$ preserves stalks we have $(q^*g)_{x_i}\not=(q^*h)_{x_i}$ and therefore $q^*g\not=q^*h$. The localic reflection of $\EqSh(X)$ is a topological space $S_X$ whose open sets are the $X$-invariant open sets $V\subseteq X_0$. For any set $P\subseteq S_X$ there is a smallest $X$-invariant set $\overline{P}$ containing it; we can define it as the image of the composite $P\overtimes{X_0}X_1 \to X_1 \to X_0$. Because $\XX$ is a scheme, $X$ is an open topological groupoid, in the sense that the domain and codomain maps $X_1\rightrightarrows X_0$ are open. This is true for affine schemes (cf. definition \ref{M1points}) and it is a local condition. This means that if $V\subseteq X_0$ is open, then so is $\overline{V}$. In order to show that the localic reflection is open it is enough to check that the restriction of $q^*$ to $\OO(S_X)$ has a left adjoint $q_!\dashv q^*$. We define $q_!$ by sending a family of $U_i$-invariant open sets $W=\<W_i\>$ to the union $\bigcup_i \overline{W_i}$.
If $V$ is an $X$-invariant open then $q^*V=\<V\cap (U_i)_0\>$, giving us the following sequence of equivalences: $$\begin{array}{rcll} W=\<W_i\> &\leq & \<V\cap (U_i)_0\>=q^*V& \\ \hline\hline \forall\ i,\ W_i & \leq & V\\ \hline\hline \forall\ i,\ \overline{W_i} & \leq & V \\ \hline\hline q_!W=\bigcup_i\overline{W_i} & \leq & V\\ \end{array}$$ For the last statement of the lemma we appeal to a theorem of Joyal and Tierney \cite{JT}. They proved that open surjections are effective descent morphisms for sheaves, which means precisely that $\EqSh(X)\simeq \Desc(q)$. \end{proof} \section{Open subschemes and gluing}\label{sec_subsch} In this section we will prove that schemes are local, in the sense that any open subspace of a scheme is again a scheme. Then we show that schemes which share an open subscheme $\XX\supseteq \UU\simeq \VV\subseteq \YY$ can be glued to give a pushout $\XX\overplus{\UU}\YY$. These will give us our first examples of non-affine schemes. Recall that an affine scheme $\Spec(\EE)$ has a basis of open subsets $V_{\varphi(k)}$ where $\varphi(x_1,\ldots,x_n)\leq\prod_i A_i$ is a formula in context and $k=(k_1,\ldots,k_n)\in\kappa^n$ is a sequence of parameters. This is the set of labelled $\EE$-models $\mu$ such that $\mu\models\varphi(k)$ (cf. definitions \ref{M0points} \& \ref{M0opens}). There is also an open set of morphisms $V_{k\mapsto k}\rightrightarrows V_{\varphi(k)}$, defining a subgroupoid $U_{\varphi(k)}$. We let $\VV_{\varphi(k)}$ denote the associated open subspace of $\Spec(\EE)$. \begin{lemma}\label{slice_subsch} \mbox{} \begin{itemize} \item If $\XX=\Spec(\EE)$ is affine then each open subspace $\VV_{\varphi(k)}$ is again affine, with $\VV_{\varphi(k)}\simeq\Spec(\EE\!/\varphi)$. \item If $\XX$ is a scheme then any open subspace $\UU\subseteq\XX$ is again a scheme. \end{itemize}\end{lemma} \begin{proof} For simplicity, suppose that $\varphi(x)$ is a unary formula; the same argument will apply to the general case.
Thus we have a variable $x:A$ and a subobject $\varphi\leq A$. The open subset $V_{\varphi(k)}$ is the set of labelled $\EE$-models $\mu$ such that $\mu(k)\in A^\mu$ is defined and the underlying model satisfies $M_\mu\models \varphi(\mu(k))$. On the other hand, a model of the slice category $\EE\!/\varphi$ is a pair $\<M,a\>$ where $M$ is an $\EE$-model, $a\in A^M$ and $M\models\varphi(a)$. We define an equivalence $J:\VV_{\varphi(k)}\stackrel{\sim}{\longrightarrow} \Spec(\EE\!/\varphi)$ by sending a labelled $\EE$-model $\mu$ to the pair $\<\mu,\mu(k)\>$. Given any model $M$ and any $a\in A^M$ there is some labelling $\mu$ such that $\mu(k)=a$, so this map is essentially surjective. Similarly, an isomorphism of $\EE\!/\varphi$-models $\<M,a\>\stackrel{\sim}{\longrightarrow} \<M',a'\>$ is simply an isomorphism $\alpha:M\cong M'$ such that $\alpha(a)=a'$. Given labellings of these models with $\mu(k)=a$ and $\mu'(k)=a'$, this says precisely that $\alpha:k\mapsto k$. Thus $J$ is full and faithful. Now note that $J$ is an open map. An open subset of $V_{\varphi(k)}$ has the form $V_{\psi(k,l)}$ where $\psi(x,y)\vdash \varphi(x)$ (cf. the inclusion lemma \ref{inclusion_lemma}). Now we associate the parameters $\<k,l\>$ (defined for $A$ and $B$ in $\EE$) with a new parameter $l_k$ defined for $\varphi\times B$ in $\EE\!/\varphi$. Since $\psi$ is a subobject of $\varphi\times B$, this defines an open subset $V_{\psi(l_k)}\subseteq\Spec(\EE\!/\varphi)$ and the equivalence $(\OO_{\EE})|_{V_{\varphi(k)}}\simeq\OO_{\EE\!/\varphi}$ then follows from $$\OO_{\EE}(V_{\psi(k,l)})\simeq \EE\!/\psi\simeq (\EE\!/\varphi)/\psi\simeq \OO_{\EE\!/\varphi}(V_{\psi(l_k)}).$$ From this we can see that if $\XX$ is a scheme then so is any open subspace $\UU\subseteq\XX$. Any point $x\in\UU$ has an affine neighborhood $x\in\VV\subseteq\XX$ with $\VV\simeq\Spec(\FF)$.
Since $\UU\cap\VV$ is again an open set, there is a basic open $V_{\varphi(k)}\subseteq\UU\cap\VV$ (where $\varphi$ is an $\FF$-formula). Then every point $x$ has an affine neighborhood $x\in\VV_{\varphi(k)}\subseteq\UU$, so the objects of $\UU$ are covered by affine pieces. Similarly, any isomorphism $\alpha:x\stackrel{\sim}{\longrightarrow} x'$ in $\UU$ can be factored as $\alpha=\beta_n\circ\ldots\circ\beta_1$ with $\beta_i:\mu_i\to\mu_{i+1}$ in $\VV_i\cong\Spec(\FF_i)$. Each intersection $\VV_i\cap\UU$ is again open and so contains a basic open neighborhood $\beta_i\in V_{k\mapsto l}$ (where the parameters $k$ and $l$ are associated with $\FF_i$). Now, as in the proof of lemma \ref{struct_stalks}, we factor each $\beta_i$ into a trio of maps $\mu_i\to\nu_i\to\nu_i'\to\mu_{i+1}$ such that $$k \stackrel{\beta_{i_1}}{\longmapsto} k\underset{\nu_i}{=} k' \stackrel{\beta_{i_2}}{\longmapsto} k\underset{\nu_i'}{=} l \stackrel{\beta_{i_3}}{\longmapsto} l.$$ Because $\VV_i\cap\UU$ is open, we may assume that this factorization lies inside $\UU$. Each $\beta_{i_j}$ preserves some parameter and therefore belongs to one of the affine subschemes $\VV_k$, $\VV_{k'}$ or $\VV_l$. Therefore the morphisms of $\UU$ are also covered by affine pieces and $\UU$ is a scheme. \end{proof} \begin{lemma}[Gluing Lemma]\label{gluing_lemma} Logical schemes admit gluing along isomorphisms. \end{lemma} \begin{proof} First we note that the category of logical schemes has (disjoint) coproducts. Given schemes $\XX$ and $\YY$ the spectrum for the coproduct is just the disjoint sum of groupoids $X+Y$. 
Described as the total space of an \'etale map, the structure sheaf is also a disjoint sum: $$\OO_{\XX+\YY}\cong\OO_{\XX}+\OO_{\YY}\longrightarrow X+Y.$$ Given a pair of maps $\XX\stackrel{F}{\longrightarrow}\ZZ\stackrel{G}{\longleftarrow}\YY$, the following diagram indicates the induced map $\<F,G\>^\sharp:\<F,G\>^*\OO_{\ZZ}\to\OO_{\XX+\YY}$: $$\xymatrix{ \TT_{\ZZ} \ar[d] & \ar[l] \pbrcorner F^*\TT_{\ZZ}+G^*\TT_{\ZZ} \ar[d] \ar[r]^-{F^\sharp+G^\sharp} & \TT_{\XX}+\TT_{\YY} \ar[d]\\ \MM_{\ZZ} & \ar[l]_-{\<F,G\>} \MM_{\XX}+\MM_{\YY} \ar@{=}[r]^-\sim&\MM_{\XX+\YY}.\\ }$$ Infinite sums are handled in the same fashion. These coproducts constitute a special case of gluing along the empty subscheme. Next suppose that $\XX$ and $\YY$ have a common open subscheme: $\UU\subseteq\XX$ and $\UU\subseteq\YY$. We want to describe the glued space $\ZZ=\XX\overplus{\UU}\YY$. The first step is to construct the pushout of groupoids $Z=X\overplus{U}Y$. Because the forgetful functor $\Top\to\Sets$ lifts limits and colimits, the underlying category of $Z$ (sans topology) is the ordinary pushout of groupoids. The space of objects is just the ordinary quotient $Z_0=(X_0+Y_0)/U_0$. The arrows of $Z$ are generated (under composition) by those of $X$ and $Y$. When it is defined, composition is computed as in $X$ or $Y$. Otherwise composition is formal, subject to the following equivalence relation: $$\left.\right.[\beta]\underset{Z}{\circ}\left[\gamma\underset{X}{\circ}\alpha\right]\sim \left[\beta\underset{Y}{\circ}\gamma\right]\underset{Z}{\circ} [\alpha]$$ $$\xymatrix{x \ar[rr]^{\alpha\in X} && u \ar[rr]^{\gamma\in U} && u' \ar[rr]^{\beta\in Y} && y}.$$ This formal composition defines a function from the space of $U$-composable pairs $X\underset{U}{*}Y$ (or composable $n$-tuples) into $Z_1$, as below. 
The topology on $Z_1$ is defined by requiring that all of these are open maps: $$\xymatrix@R=1ex{ X_1 \ar[d]_{\cod} & \bullet \pbrcorner \ar[l] \ar[d] & \raisebox{-3ex}{$X\underset{U}{*}Y$} \pbrcorner \ar[d] \ar[l] \ar[r]^-{\circ} & Z_1\\ X_0 & U_0 \ar[l] \ar[dd] & \bullet \pbrcorner \ar[l] \ar[dd]\\\\ & Y_0 & Y_1 \ar[l]^{\dom}\\ }$$ For example, any two open neighborhoods $V_X\subseteq X_1$ and $V_Y\subseteq Y_1$ define a neighborhood $V_Y\circ V_X=\circ\left(V_Y\underset{U}{*} V_X\right)\subseteq Z_1$. If $\alpha\in V_X$ and $\beta\in V_Y$ are composable then $\beta\circ\alpha\in V_Y\circ V_X$. It is not difficult to see that this is a pushout of continuous groupoids. Suppose that $F:X\to Z'$ and $G:Y\to Z'$ are continuous groupoid homomorphisms which agree on $U$. $Z$ is a pushout in groupoids, so this induces a unique functor $H:Z\to Z'$. It is enough to check that this comparison map is continuous. Because $F$ and $G$ agree on $U$, composition in $Z'$ induces a continuous functor $F\circ G:X\underset{U}{*}Y\to Z'$ (and similarly for longer composable strings). Given an open set $W\subseteq Z'$, the inverse image $H^{-1}(W)\subseteq Z$ is a union of the inverse images $F^{-1}(W)$, $G^{-1}(W)$, $(F\circ G)^{-1}(W)$, etc. Since these are all continuous, $H^{-1}(W)$ is a union of open sets, and $H$ is continuous. Now that we have the underlying groupoid $Z$ we can define the structure sheaf $\OO_{\ZZ}$. Notice that the isomorphism $\alpha:\OO_{\XX}|_{U}\cong \OO_{\YY}|_U$ defines an object of descent data $\<\OO_{\XX},\OO_{\YY},\alpha\>$ for the binary cover $X+Y\to Z$. Since an open cover is an effective descent morphism (lemma \ref{eff_desc}), this defines the sheaf $\OO_{\ZZ}$ up to isomorphism.
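In the binary case this descent datum can be displayed schematically (a sketch in the notation above; the cocycle condition is vacuous since the two pieces of the cover are disjoint and overlap only in $U$):

```latex
% Descent datum for the binary open cover X + Y -> Z: two structure
% sheaves together with a gluing isomorphism over the overlap U.
\[
\big\langle\ \OO_{\XX},\ \OO_{\YY},\ \alpha:\OO_{\XX}|_{U}\stackrel{\sim}{\longrightarrow}\OO_{\YY}|_{U}\ \big\rangle
\quad\rightsquigarrow\quad
\OO_{\ZZ}
\quad\text{with}\quad
\OO_{\ZZ}|_{X}\cong\OO_{\XX},\qquad \OO_{\ZZ}|_{Y}\cong\OO_{\YY}.
\]
```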
The stalks of $\OO_{\ZZ}$ are inherited from $\OO_{\XX}$ and $\OO_{\YY}$ while the equivariant action is a composite of $\rho_{\XX}$ and $\rho_{\YY}$: $$\xymatrix{ \gamma:& x \ar[r]^\alpha & u \ar[r]^\beta& y\\ \rho_\gamma:& \OO_{\mathcal{Z},x}\cong\OO_{\XX,x} \ar[r]^-{\rho_{\XX,\alpha}}& \OO_{\XX,u}\cong \OO_{\UU,u}\cong\OO_{\YY,u} \ar[r]^-{\rho_{\YY,\beta}}&\OO_{\YY,y}\cong\OO_{\mathcal{Z},y}.\\ }$$ This defines a new scheme $\mathcal{Z}=\XX\overplus{\UU}\YY$ which contains open subschemes $\XX$, $\YY$ and $\UU\cong\XX\cap\YY$. More generally, suppose that we have a family of schemes $\XX_i$, a family of open subschemes $\UU_{ij}\subseteq \XX_i$ and isomorphisms $h_{ij}:U_{ij}\cong U_{ji}$ which satisfy the cocycle conditions: $h_{ij}=h_{ji}^{-1}\ \ \rm{and}\ \ h_{ij}\circ h_{jk}=h_{ik}$ (when the latter is defined). We can glue these to form a new scheme $\mathcal{Z}=\displaystyle\bigoplus_{U_{ij}} \XX_i$. The definition of $\mathcal{Z}$ is entirely analogous to the binary case. The underlying groupoid $Z$ is a pushout in the category of groupoids, consisting of strings of composable arrows and topologized by open maps from the spaces of such tuples. The structure sheaf on $\ZZ$ is defined locally by descent from the cover $\coprod_i X_i\to Z$. More formally, this shows that the usual descent diagram has a colimit in $\LSch$ whenever each $\UU_{ij}\rightarrowtail\XX_i$ is an open subscheme: $$\xymatrix{ \displaystyle\coprod\UU_{ijk} \ar@<1.5ex>[r] \ar[r] \ar@<-1.5ex>[r] & \displaystyle\coprod\UU_{ij} \ar@<1.5ex>@{>->}[r] \ar@<-1.5ex>@{>->}[r]& \displaystyle\coprod\XX_i \ar[l] \ar@{->>}[r]& \raisebox{-.75cm}{$\displaystyle\ZZ=\bigoplus_{\UU_{ij}} \XX_i$}.\\ }$$ \end{proof} The gluing lemma provides our first examples of non-affine schemes. In fact, the coproduct $\XX=\Spec(\EE)+\Spec(\FF)$ is already non-affine. An equivariant section on $\XX$ is just a section of $\Spec(\EE)$ together with a section of $\Spec(\FF)$, so that $\Gamma\XX\simeq \EE\times\FF$.
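This computation is an instance of the general fact that global sections carry coproducts of schemes to products of pretoposes; schematically (a sketch, using the decomposition $\OO_{\XX+\YY}\cong\OO_{\XX}+\OO_{\YY}$ above and the counit equivalence $\Gamma(\Spec\ \EE)\simeq\EE$):

```latex
% Global sections of a coproduct: an equivariant section over the sum
% X + Y is exactly a pair of sections, one over each summand.
\[
\Gamma\big(\Spec(\EE)+\Spec(\FF)\big)
\;\simeq\;
\Gamma(\Spec\ \EE)\times\Gamma(\Spec\ \FF)
\;\simeq\;
\EE\times\FF.
\]
```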
In general (the diagram of) an $(\EE\times\FF)$-model is not equivalent to the diagram of either an $\EE$-model or an $\FF$-model, which means that $\Spec(\Gamma\XX)\not\simeq \XX$. We will see in the next section that this is a general characterization of non-affine schemes. \section{Limits of schemes} In this section we will prove that the category of logical schemes is closed under finite limits, and that these can be computed from the colimits of pretoposes. This is analogous to the situation in algebraic geometry, where the affine line $A^1$ is (dual to) the ring of polynomials $\bCC[x]$ and the plane $A^1\times A^1$ corresponds to the \emph{co}product $\bCC[x,y]\cong\bCC[x]\overplus{}\bCC[y]$. \begin{thm} There is an (op-)adjunction $\Ptop(\EE,\Gamma\XX)\simeq\LSch(\XX,\Spec(\EE))$ presenting pretoposes as a reflective (op-)subcategory of logical schemes $$\xymatrix{ \Ptop^{\op} \rrtwocell_{\Spec}^{\Gamma}{`\bot} && \LSch.\\ }$$ \end{thm} \begin{proof} We will describe the adjunction in terms of its unit and counit. Because this is an ``adjunction on the right'', this amounts to a pair of natural transformations $$\begin{array}{c} \eta=\eta_{\XX}: \XX\longrightarrow\Spec(\Gamma\XX)\\ \epsilon=\epsilon_{\EE}:\EE\longrightarrow\Gamma(\Spec\ \EE).\\ \end{array}$$ The counit $\epsilon_{\EE}$ is the equivalence of categories computed in theorem \ref{aff_schemes}. In order to describe the unit $\eta_{\XX}:\XX\to\Spec(\Gamma\XX)$ we fix an affine cover $J=\{\UU_i\simeq\Spec(\FF_i)\}$. Taking global sections induces a family of pretopos functors and, dually, a family of scheme morphisms $$\xymatrix@R=3ex{ **[r] \Gamma(\XX) \ar[rrr]^{J_i} &&& **[l] \Gamma(\UU_i)\simeq \FF_i\\ **[r] \UU_i\simeq \Spec(\FF_i) \ar[rrr]^-{J_i^*} &&& **[l] \Spec(\Gamma\XX).\\ }$$ Now we define $\eta=\eta_{\XX}$ by setting $\eta(x)=J^*_i(x)$ for $x\in\UU_i$.
Similarly, we can define the action of $\eta$ on isomorphisms by setting $\eta(\alpha)=J^*_i(\alpha)$ when $\alpha\in\UU_i$ and extend to all of $\XX$ by composition. To see that this is well-defined, suppose that we have an isomorphism $\alpha\in\UU_i\cap\UU_j$. The intersection is an open subscheme, so there is an affine neighborhood $\UU_{ij}\simeq\Spec(\FF_{ij})$ such that $\alpha\in\UU_{ij}\subseteq \UU_i\cap\UU_j$. The resulting composites $\UU_{ij}\rightrightarrows \Spec(\Gamma\XX)$ will agree on $\alpha$ (modulo a canonical isomorphism in $\Spec(\Gamma\XX)$) because they arise from isomorphic pretopos functors: $$\xymatrix@C=2ex{ & \UU_{ij} \ar[rd] \ar[ld] &&&&& \FF_{ij} & \\ \UU_i \ar[rd] \ar[rdd]_{J_i} && \UU_j \ar[ld] \ar[ldd]^{J_j} &&& \FF_i \ar[ru] && \FF_j \ar[lu]\\ &\XX \ar@{-->}[d] &&&&& \Gamma\XX \ar[ru] \ar[lu] &\\ &\Spec(\Gamma\XX) &&&&& \Gamma(\Spec\ \Gamma\XX). \ar[u]_{\textstyle\rm{\rotatebox[origin=c]{90}{$\sim$}}}& \\ }$$ By considering a triple intersection we can show that these isomorphisms satisfy cocycle conditions, allowing us to invoke the gluing lemma \ref{gluing_lemma}. This says that $\XX$ is a colimit $\XX\simeq\colim_i\UU_i$. Since the maps $J_i^*$ agree (up to isomorphism) on their overlap, they induce a map $\eta:\XX\to\Spec(\Gamma\XX)$. We also need a structure map $\eta^\sharp:\eta^*\OO_{\Gamma\XX}\to\OO_{\XX}$, which we construct via descent. Recall that $\EqSh(\XX)$ is equivalent to the category of descent sheaves $\Desc(J)$; via this equivalence, $\OO_{\XX}$ and $\eta^*\OO_{\Gamma\XX}$ correspond to the families $\<\OO_{\UU_i}\>$ and $\<J_i^*\OO_{\Gamma\XX}\>$, respectively. Both of these carry natural descent data and the maps $J_i^\sharp:J_i^*\OO_{\Gamma\XX}\to \OO_{\UU_i}$ commute with these isomorphisms (again because they are defined locally by isomorphic functors). Therefore the family $\<J_i^\sharp\>$ defines a map in $\Desc(J)$ and we let $\eta^\sharp$ equal the corresponding map in $\EqSh(\XX)$. 
In order to see that $\eta$ and $\epsilon$ define a (2-)adjunction we must verify the triangle isomorphisms $$\epsilon_\Gamma\circ\Gamma(\eta)\cong1_\Gamma\ \ \ \ \ \ \Spec(\epsilon)\circ\eta_{\Spec}\cong 1_{\Spec}.$$ Since $\epsilon$ is (pointwise) an equivalence of categories, it is enough to check that $\Gamma(\eta)$ and $\eta_{\Spec(\EE)}$ are also equivalences; this will guarantee the existence of the natural isomorphisms asserted above. First we show that the global sections $\Gamma(\eta)=\Gamma(\eta^\sharp)$ are pseudo-inverse to $\epsilon_\Gamma$. Given a global object $A\in\Gamma\XX$, let $B=\epsilon_\Gamma(A)$. This is a section over $\Spec(\Gamma\XX)$ sending a labelled $\Gamma\XX$-model $\mu$ to $L_\mu A$, the germ of $A$ at $\mu$.\footnote{$L_\mu A$ was denoted $\tilde{\top}(A)$ in section \ref{sec_diagrams}.} For each $i$, this gives us a pair of local sections over $\UU_i$: $A_i=A|_{\UU_i}$ and $B_i=J_i(B)$. These families lift to a pair of objects in $\Desc(J)$ and we will show that these are isomorphic. Fixing a point $\nu\in\UU_i\subseteq\XX$, we calculate $$\begin{array}{rcl} (J_i^\sharp B)(\nu) & \cong & J_i^\sharp\big[(\epsilon_\Gamma A)(J_i^*\nu)\big]\\ &\cong & J_i^\sharp(L_{J_i^*\nu} A)\\ &\cong & L_\nu(J_i A)\\ \end{array}$$ Because $J_i$ is defined by restriction along the inclusion $\UU_i\subseteq\XX$, we have $J_i(A)\cong A|_{\UU_i}$ and therefore $A_i\cong B_i$. Appealing to the compatibility of the maps $J_i$, one easily checks that these isomorphisms commute with the descent data, yielding an isomorphism $\eta^\sharp(B)\cong A$. Therefore $\Gamma\eta$ is essentially surjective. A similar argument shows that $(\Gamma\eta)(\epsilon_\Gamma f)= f$ for any arrow $f\in\Gamma\XX$, so that $\Gamma\eta$ is full. To see that $\Gamma\eta$ is faithful, first notice that the maps $\Gamma\XX\to\Gamma\UU_i$ are jointly faithful (because the maps $\UU_i\to\XX$ are jointly surjective).
Therefore the induced map $\Spec(\prod_i\Gamma\UU_i)\to\Spec(\Gamma\XX)$ is again essentially surjective and gives rise to another faithful functor $$\xymatrix{ \Gamma\Spec(\Gamma\XX) \ar@{>->}[rd]_{\Gamma\eta} \ar@{>->}[rr] && **[l] \Gamma\Spec(\prod_i \Gamma\UU_i)\simeq\prod_i\Gamma\UU_i.\\ &\Gamma\XX \ar[ur] &\\ }$$ Given the factorization above, $\Gamma\eta$ must also be faithful and, therefore, an equivalence of categories. Finally we consider the map $\eta_{\Spec(\EE)}:\Spec(\EE)\to\Spec\ \Gamma(\Spec\ \EE)$. Given an $\EE$-model $\mu$, the underlying model of $\eta_{\Spec(\EE)}(\mu)$ is given by the composite of the localization $L_\mu:\Gamma(\Spec\ \EE)\to\Diag(\mu)$ with the canonical model $\Hom(1,-):\Diag(\mu) \longrightarrow \Sets$. Now we compose with the equivalence $\EE\simeq\Gamma(\Spec\ \EE)$ and note that the resulting functor $\EE\to \Sets$ is canonically isomorphic to the underlying model $M_\mu$. This shows that (the underlying groupoid homomorphism) $\eta_{\Spec(\EE)}$ is essentially surjective. Similarly, any isomorphism $\alpha:\mu\stackrel{\sim}{\longrightarrow} \mu'$ induces an isomorphism of diagrams $\Diag(\alpha):\Diag(\mu)\cong\Diag(\mu')$ and this yields a factorization of $\alpha$ through $\Gamma(\Spec\ \EE)$: $$\xymatrix@R=2ex{ && \Diag(\mu) \ar[dd]|{\Diag(\alpha)} \ar[dr] & \\ \EE \ar[r]^-\sim \ar@/^10ex/[rrr]^{M_\mu} \ar@/_10ex/[rrr]_{M_{\mu'}} \ar@{}[rr]^{\raisebox{3ex}{$\rm{\rotatebox[origin=c]{270}{$\cong$}}$}} \ar@{}[rr]_{\raisebox{-5ex}{$\rm{\rotatebox[origin=c]{270}{$\cong$}}$}} \ar@{}[rr]|(.8){\mbox{$\rm{\rotatebox[origin=c]{270}{$\cong$}}$}} \ar@{}[rrr]|(.8){\mbox{$\rm{\rotatebox[origin=c]{270}{$\cong$}}$}}& \Gamma(\Spec\ \EE) \ar[ur] \ar[dr] && \Sets.\\ && \Diag(\mu') \ar[ur] &\\ }$$ On the other hand, for any $\mu\in\Spec(\EE)$ we have $\Diag(\mu)\cong\Diag(\eta(\mu))$, so an isomorphism $\eta(\mu)\cong\eta(\mu')$ factors through $\EE$ in the same way. 
Thus $\eta_{\Spec(\EE)}$ is fully faithful, and hence an equivalence of continuous groupoids. At the level of structure sheaves we have a map $\eta^\sharp:\OO_{\Gamma(\Spec\ \EE)}\to \eta_*\OO_{\EE}$. By the previous argument, its global sections functor $I=\Gamma(\eta^\sharp)$ is an equivalence of categories. On any other basic open set $V_{\varphi(k)}\subseteq\Spec\ \Gamma(\Spec\ \EE)$, the partial sections of $\eta^\sharp$ factor as a sequence of equivalences $$\OO_{\Gamma(\Spec\ \EE)}(V_{\varphi(k)})\simeq \Gamma(\Spec\ \EE)/\varphi\simeq \EE\!/I\varphi\simeq \OO_{\EE}(V_{I\varphi(k)})=\eta_*\OO_{\EE}(V_{\varphi(k)}).$$ Therefore $\eta_{\Spec(\EE)}$ is an equivalence of schemes, establishing the second triangle isomorphism and the adjunction $$\Ptop(\EE,\Gamma\XX)\simeq\LSch(\XX,\Spec(\EE)).$$ \end{proof} \begin{cor} A logical scheme $\XX$ is affine if and only if the unit $\eta:\XX\to\Spec(\Gamma\XX)$ is an equivalence of schemes. \end{cor} \begin{proof} We have just verified a triangle isomorphism $\Spec(\epsilon_{\EE})\circ\eta_{\Spec(\EE)}\cong 1_{\Spec(\EE)}$. If $\XX\simeq\Spec(\EE)$ is affine then $\eta_{\Spec(\EE)}$ is the asserted equivalence. If $\XX$ is not affine then $\XX\not\simeq\Spec(\EE)$ for any $\EE$, so $\eta:\XX\to\Spec(\Gamma\XX)$ cannot be an equivalence. \end{proof} Now we will show that the category of logical schemes is closed under finite 2-limits (or weighted limits). We will not recall the general definition, referring the reader to \cite{lack}; instead, we introduce a few examples which will suffice for our purposes. For any ordinary limit (e.g., products, equalizers, etc.) defined in ordinary categories there is an associated pseudo-limit defined for 2-categories. For example, if $\KK$ is a 2-category then a (pseudo-)terminal object in $\KK$ is an object $1_{\KK}$ such that for any other $K\in\KK$ there is an essentially unique morphism $K\to 1_{\KK}$.
More precisely, such a map always exists and any two such maps are connected by a unique invertible 2-cell $\xymatrix{K \rtwocell{\omit \rm{\rotatebox[origin=c]{270}{$\cong$}}} & 1_{\KK}}$. Such a limit is unique up to equivalence in $\KK$. Similarly, the (pseudo-)pullback $$\xymatrix{ \raisebox{-3ex}{$K_1\overtimes{L} K_2$} \ar[r]^-{p_1} \ar[d]_{p_2} \drtwocell<\omit>{\omit \textstyle^\gamma\rotatebox{45}{$\cong$}} & K_1 \ar[d]^{k_1}\\ K_2 \ar[r]_{k_2} & L\\ }$$ is defined by the following universal property: for any maps $z_i: Z\to K_i$ with a natural isomorphism (2-cell) $\theta:k_1\circ z_1\cong k_2\circ z_2$ there is an essentially unique map $\overline{\theta}:Z\to K_1\overtimes{L} K_2$ such that $z_i\cong p_i\circ\overline{\theta}$ and $\theta=\gamma\cdot\overline{\theta}$. Henceforth we will drop the ``pseudo-'' prefix, and simply refer to such objects as limits in $\KK$. Additionally, there are 2-limits which have no analogue in ordinary categories. Remember that $I$ is the ``walking arrow'' category with two objects and one non-identity arrow. A \emph{power} of an object $K$ by $I$ is an object $K^{\2}$ with a universal (oriented) 2-cell $\xymatrix{ K^{\2} \rtwocell & K}$. In $\Cat$ (or $\Ptop$) the power of $\EE$ by $I$ is the arrow category $\EE^{\2}$; this has two projections ($\dom$ and $\cod$) and any natural transformation $\xymatrix{\FF \rtwocell{\tau} & \EE}$ induces a map $\FF\to \EE^{\2}$ sending $F\mapsto \tau_F\in\Ar(\EE)$. In ordinary categories, terminal objects and pullbacks suffice to construct all finite limits. Similarly, a theorem of Street \cite{street_limits} guarantees that these three constructions (terminal objects, pullbacks and powers by $I$) suffice to construct all finite 2-limits. All of the above can be reversed to give definitions of 2-colimits, and the same considerations apply. \begin{lemma}\label{2colimits} The 2-category of pretoposes is closed under finite 2-colimits.
\end{lemma} \begin{proof} In fact, $\Ptop$ is closed under \emph{all} small 2-colimits (essentially because $\Ptop$ is monadic over $\Cat$), but we will not use that fact here. By the above, it is enough to check that $\Ptop$ has an initial object, pushouts and copowers by $I$. The category of finite sets $\Sets_f$ is the initial object in $\Ptop$. Since a pretopos functor must preserve the terminal object and finite coproducts, any map $\Sets_f\to\EE$ is canonically determined (up to unique isomorphism) by $n\mapsto \overbrace{1_{\EE}+\ldots+1_{\EE}}^{n\rm{ times}}$. Next consider the pushout of two pretopos functors $I:\EE\to\FF$ and $J:\EE\to\GG$. According to its universal property, a model of the pushout (i.e. a functor $\FF\overplus{\EE}\GG\to\SS$) is a pair of models $M\models\FF$ and $N\models\GG$ together with an isomorphism between their reducts $I^*M\cong J^*N$. This class is axiomatized by the following theory: $$\begin{array}{rcl} \LL_+&=&\LL_{\FF}+\LL_{\GG}+\{i_E: IE\to JE,\ j_E:JE\to IE\ |\ E\in\EE\}\\ \bTT_+&=& \bTT_{\FF}+\bTT_{\GG}+\{J(f)\circ i_D=i_{E}\circ I(f)\ |\ f:D\to E\},\\ &&\hspace{1.4cm}+\{I(f)\circ j_D=j_{E}\circ J(f)\ |\ f:D\to E\},\\ &&\hspace{1.4cm}+\{ j_E\circ i_E=1_{IE},\ i_E\circ j_E=1_{JE}\ |\ E\in\EE\}\\ \end{array}$$ Therefore the classifying pretopos of $\bTT_+$ is a pushout in $\Ptop$. The copower of $\EE$ by $I$, which we denote $\HH=\EE\Rightarrow\EE$, is similarly axiomatizable. A model of the copower consists of a pair of $\EE$-models together with a homomorphism between them. To axiomatize this class, first duplicate each object (or arrow or equation between arrows) of $\EE$; every symbol $E\in\EE$ has two copies $E_0$ and $E_1$ in the copower. We also add in components of a transformation $h_E:E_0\to E_1$ along with equational axioms expressing its naturality.
This leaves us with the following theory: $$\begin{array}{rcl} \LL_\Rightarrow&=&(\LL_{\EE})_0+(\LL_{\EE})_1+\{h_E: E_0\to E_1\ |\ E\in\EE\}\\ \bTT_\Rightarrow&=& (\bTT_{\EE})_0+(\bTT_{\EE})_1+\{h_E\circ f_0=f_1\circ h_D\ |\ f:D\to E\}.\\ \end{array}$$ According to its universal property, the classifying pretopos of $\bTT_{\Rightarrow}$ must be equivalent to the copower $\EE\Rightarrow\EE$. More categorically, we can regard $\bTT_\Rightarrow$ as a presentation of the product category $\bHH=\EE\times I$. This has finite limits, computed pointwise, and it comes equipped with two ``vertical'' functors $m_0,m_1:\EE\to\bHH$. These send $E$ to $\<E,0\>$ or $\<E,1\>$ and they induce a coherent topology on $\bHH$: the basic covers are vertical families which are covers in $\EE$. This makes sense: the ``horizontal'' maps $h_E=\<1_E,I\>$ are not covers because homomorphisms are not generally surjective. However, the composition in $\bHH$ is computed pointwise, and this ensures that the horizontal maps satisfy a naturality condition. For each $\tau:D\to E$ $$\begin{array}{rcccl} h_E\circ m_0(\tau)&=&\<1_E,I\>\circ\<\tau,\rm{id}_0\>\\ &=&\<\tau,I\>\\ &=&\<\tau,\id_1\>\circ\<1_D,I\> &=& \tau_1\circ h_D\\ \end{array}$$ This shows that $\bHH$ is a copower of $\EE$ in $\Coh$, the class of coherent categories. Because pretopos completion is a left adjoint, it follows immediately that the pretopos completion of $\bHH$ is the copower of $\EE$ by $I$ in $\Ptop$ $$\begin{array}{c} \xymatrix{\EE \rtwocell^{M_1}_{M_2}{h} & \SS}\\\hline \xymatrix{\bHH \ar[r]^{N} & \SS}\\\hline \xymatrix{\HH=\Ptop(\bHH) \ar[r] & \SS}.\\ \end{array}$$ \end{proof} \begin{thm}\label{scheme_limits} The 2-category of logical schemes is closed under finite 2-limits, which are computed from colimits in the 2-category of pretoposes. 
\end{thm} \begin{proof} As in the algebraic case, we exploit the fact that an adjunction on the right must send colimits to limits, so that $\lim_i \Spec(\EE_i)\simeq\Spec(\colim_i\EE_i)$: $$\begin{array}{rcll} \LSch\big(\ZZ,\lim_i(\Spec\ \EE_i)\big)&\cong& \lim_i \LSch(\ZZ,\Spec\ \EE_i) & \rm{(limit in }\Cat\rm{)}\\ &\cong & \lim_i \Ptop(\EE_i,\Gamma\ZZ)\\ &\cong & \Ptop(\colim_i\EE_i,\Gamma\ZZ)\\ &\cong & \LSch\big(\ZZ,\Spec(\colim_i\EE_i)\big).\\ \end{array}$$ This shows that finite 2-limits of affine schemes exist, and they are the affine schemes associated with colimits in $\Ptop$. In particular, schemes have a terminal object: $\Spec(\Sets_f)$. Now we construct the pullback of two scheme morphisms $\YY\to \XX$ and $\ZZ\to\XX$; the argument is essentially the same as that for the algebraic case. First, note that if $\YY\overtimes{\XX}\ZZ$ exists and $\UU\subseteq \YY$ then the pullback $\UU\overtimes{\XX}\ZZ$ is given by the inverse image $p_{\YY}^{-1}(\UU)$. In the diagram below, $\WW$ is supported over $\UU$, so the induced map $\WW\to\YY\overtimes{\XX}\ZZ$ must factor through $p_{\YY}^{-1}(\UU)$: $$\xymatrix@C=3ex{ \WW \ar[rrrrd] \ar[rdd] \ar@{-->}[rrd] \ar@{-->}[rd]\\ & p_{\YY}^{-1}(\UU) \ar@{}[r]|{\textstyle\subseteq} \ar[d] & \raisebox{-3ex}{$\YY\overtimes{\XX}\ZZ$} \pbcorner \ar[rr] \ar[d] && \ZZ \ar[d]\\ & \UU \ar@{}[r]|{\textstyle\subseteq} & \YY \ar[rr] && \XX\\ }$$ Now suppose that $\YY\simeq\bigoplus_i \YY_i$ is an open cover and that each pullback $\YY_i\overtimes{\XX}\ZZ$ exists. Let $\VV_{ij}=p_{\YY_i}^{-1}(\YY_i\cap\YY_j)$. By the previous observation, $\VV_{ij}$ is a pullback and therefore uniqueness implies that there is a canonical equivalence $\VV_{ij}\simeq\VV_{ji}$. This means that we can glue the schemes $\YY_i\overtimes{\XX}\ZZ$ along the $\VV_{ij}$ in order to define a scheme $\bigoplus_i \left(\YY_i\overtimes{\XX}\ZZ\right)$.
Given any maps $F:\WW\to\YY$ and $G:\WW\to\ZZ$ which commute with the projections to $\XX$, we set $\WW_i=F^{-1}(\YY_i)$. This defines a family of maps $\WW_i\to \YY_i\overtimes{\XX}\ZZ$ and one easily checks that these agree on their overlaps. Gluing gives the existence of a map $\WW\to\bigoplus_i \left(\YY_i\overtimes{\XX}\ZZ\right)$ and the essential uniqueness of this map can be checked locally. This shows that $\YY\overtimes{\XX}\ZZ=\bigoplus_i \left(\YY_i\overtimes{\XX}\ZZ\right)$ is a pullback in schemes. By the argument at the beginning of the theorem we know that affine schemes have pullbacks: $$\Spec(\FF)\overtimes{\Spec(\EE)}\Spec(\GG)\simeq\Spec(\FF\overplus{\EE}\GG).$$ Now suppose that $\XX=\Spec(\EE)$ is affine and choose open covers $\YY\simeq\bigoplus_i\YY_i$ and $\ZZ\simeq\bigoplus_j\ZZ_j$. By the previous argument, applied twice, we have $$\YY\overtimes{\XX}\ZZ\simeq \bigoplus_i \left(\YY_i\overtimes{\XX}\ZZ\right) \simeq \bigoplus_{i,j} \left(\YY_i\overtimes{\XX}\ZZ_j\right).$$ Finally, suppose that we have an open cover $\XX=\bigoplus_i\XX_i$; let $\YY_i$ and $\ZZ_i$ denote the inverse images of $\XX_i$ in $\YY$ and $\ZZ$. We have just shown that the pullback $\YY_i\overtimes{\XX_i}\ZZ_i$ exists; now we observe that $\YY_i\overtimes{\XX_i}\ZZ_i\simeq\YY_i\overtimes{\XX}\ZZ$: $$\xymatrix{ \raisebox{-3ex}{$\YY_i\overtimes{\XX}\ZZ$} \pbcorner \ar[r] \ar[d] & \ZZ_i \pbcorner \ar[d] \ar[r] & \ZZ \ar[d] \\ \YY_i \ar[r] & \XX_i \ar[r] & \XX\\ }$$ By the same argument as above, it follows that $\LSch$ is closed under pullbacks: $$\YY\overtimes{\XX}\ZZ\simeq\bigoplus_i \left(\YY_i \overtimes{\XX} \ZZ\right) \simeq\bigoplus_i \left(\YY_i\overtimes{\XX_i}\ZZ_i\right).$$ A similar argument shows that $\LSch$ has powers by $I$, which we denote $\XX^{\2}$. In $\Ptop$ we can push out a natural transformation along a slice $\EE\to\EE\!/A$.
Locally, this defines the restriction of a 2-cell $\xymatrix{\ZZ \rtwocell^F_G{\tau} & \XX}$ to an open subscheme $\UU\subseteq\XX$: $$\xymatrix{ \EE \rrtwocell^{\2}_J{\theta} \ar[d] && \FF \ar[d] & \XX \rrtwocell<\omit>&& \ZZ \lltwocell^G_F{\omit\ \ \ \ \ \tau} \\ \EE\!/A \rrlowertwocell<\omit>{<3>\ \ \ \theta/A} \ar[rr] \ar[rd] && \FF/IA & \UU \ar[u] \rrlowertwocell<\omit>{<3>\ \ \ \tau|_{\UU}} && F^*\UU \ar[ll] \ar[dl] \ar[u]\\ & \FF/JA \ar[ur]_{\theta_A^*} &&& G^*\UU \ar[ul]\\ }$$ This is closely related to the fact that the copower of slices is a slice of the copower: $\big(\EE\!/A\Rightarrow\EE\!/A\big)\simeq (\EE\Rightarrow\EE)/A_1$. This is because an $\EE\!/A$-morphism $h:\<M_1,a_1\>\to\<M_2,a_2\>$ is, by definition, an $\EE$-morphism $M_1\to M_2$ such that $h(a_1)=a_2$. Now suppose that $\XX^{\2}$ exists and that $\UU\subseteq\XX$ is an open subscheme. Then $\UU^{\2}$ exists, and it is given by the inverse image of $\UU$ along the domain map $\xymatrix{\XX^{\2} \rtwocell^\dom_\cod & \XX}$. To see that this gives the correct universal property, suppose that we have a map $T:\ZZ\to \UU^{\2}\subseteq\XX^{\2}$. By the universal property of $\XX^{\2}$ this is equivalent to a 2-cell $\xymatrix{\ZZ \rtwocell^F_G{\tau} & \XX}$. Because $T$ factors through $\UU^{\2}=\dom^{-1}(\UU)$, we must have $F^*\UU=\dom(T)^*\UU \cong \ZZ$. It follows that the restriction of $\tau$ to $\UU$ is a 2-cell in $\LSch(\ZZ,\UU)$: $$\begin{array}{c} \xymatrix{\ZZ \ar[rr] && \dom^{-1}(\UU)=\UU^{\2}}\\\hline \xymatrix{**[r] \ZZ\cong F^*\UU \rrlowertwocell<\omit>{<3>\ \ \ \tau|_{\UU}} \ar[rr] \ar[rd] && \UU.\\ & G^*\UU \ar[ur]\\}\\ \end{array}$$ On the other hand, if $\XX=\bigoplus_i \XX_i$ and $\XX_i^{\2}$ exists then so does $\XX^{\2}$, and it is given by $\bigoplus_i \XX_i^{\2}$. Here the $\XX_i^{\2}$ form a matching family because, by the previous argument, $(\XX_i\cap\XX_j)^{\2}$ exists and is a subscheme of $\XX_i^{\2}$.
One can check that if a family of scheme morphisms $\YY_i\to\XX_i^{\2}$ matches on $\YY=\bigoplus_i\YY_i$, then so do the induced 2-cells $\xymatrix{\YY_i\rtwocell & \XX_i}$. Thus any map $\YY\to\XX^{\2}$ corresponds to a unique 2-cell $\xymatrix{\YY \rtwocell & \XX}$, so $\XX^{\2}$ is a power in $\LSch$. If $\XX=\Spec(\EE)$ is affine, then its power $\XX^{\2}$ is also affine, given by $\Spec(\EE\Rightarrow\EE)$. If $\VV=\Spec(\FF)$ is affine then we have the sequence of equivalences below; the general case follows by gluing. $$\begin{array}{c} \xymatrix@=5ex{**[r] \VV \rrtwocell{\ \ \tau'} && **[l]\UU}\\\hline \xymatrix@=5ex{**[r]\EE \rrtwocell{\ \ \ \Gamma\tau'} && **[l]\FF}\\\hline \widetilde{\Gamma\tau'}:\xymatrix@=5ex{**[r]\EE\Rightarrow\EE \ar[rr] && **[l]\FF}\\\hline \widetilde{\tau'}:\xymatrix@=5ex{**[r]\VV \ar[rr] && **[l]\Spec(\EE\Rightarrow\EE)}\\ \end{array}$$ Since any scheme has an affine cover $\XX\simeq\bigoplus_i \Spec(\EE_i)$, it also has a power $\XX^{\2}\simeq\bigoplus_i\Spec(\EE_i\Rightarrow\EE_i)$. \end{proof} \begin{comment} There are a few other constructions in $\LSch$ worth mentioning. First of all, although arbitrary limits of schemes (probably) may not exist, they will for affine schemes (following from the colimits in $\Ptop$). In particular, let $\Phi(X)=\{\varphi_i(x_i)\ |\ x_i\stackrel{fin}{\subseteq}X\}$ be a model-theoretic type in a set of variables with $|X|<\kappa$. We have already discussed the way that each basic open set $V_{\varphi(k)}$ defines an open subscheme $\MM_{\varphi(k)}\rightarrowtail\MM_{\EE}$; in just the same way, an infinite parameter $K:X\to\kappa$ induces an intersection $V_{\Phi(K)}=\bigcap_i V_{\varphi_i(k_i)}$. This defines a $G_\delta$ subscheme $\MM_{\Phi(K)}\rightarrowtail\MM_{\EE}$; if $\EE$ is Boolean, the subscheme will be relatively closed in the context scheme $\MM_k$. Affine schemes also create some additional colimits in $\LSch$.
A morphism of schemes $\YY\to\XX$ is called affine if, for any map $s:\MM_{\EE}\to \XX$ with affine domain, the pullback $s^*\YY$ is again affine. We expect (from the algebraic case) that a directed diagram with affine transition maps should have a colimit in $\LSch$. Finally, $\Ptop$ contains many freely generated categories which define finitely generated or ``polynomial'' schemes. First among these is the terminal scheme, dual to the pretopos $\SS_f$ of finite sets. We can extend $\SS_f$ to a pretopos $\SS[U]$ with a generic object $U$, so that a pretopos functor $\SS[U]\to\EE$ is equivalent to an object $E\in\EE$. For example, the generating object which was assumed in sections 1 \& 2 corresponds to morphism $U:\SS[U]\to\EE$ such that, for any other $E:\SS[U]\to\EE$, there is a context $x$ and a natural monomorphism $$\xymatrix{ \SS[U] \ar[r]^E \ar[rd]_{U^x} \drtwocell<\omit>{<-2>} & \EE \ar[d]\\ & \EE.\\ }$$ Given two models $M,N:\EE\to\Sets$, a natural transformation $I\circ U\Rightarrow J\circ U$ is a pair of $\EE$ models together with a function $|M|\to|N|$ between their underlying sets. This is closely related to the notion of models in reducted languages, an idea which plays a prominent role in the model theory of pseudoelementary classes. In much the same way we can define a scheme associated with a generic automorphisms $\SS[U\stackrel{\sim}{\longrightarrow} U]$ or with a generic meet $$\SS\left[\raisebox{.7cm}{\xymatrix{V\wedge W \pbcorner \ar[r] \ar[d] & V \ar@{>->}[d]\\ W \ar@{>->}[r] & U\\}}\right].$$ Formally, these generated pretoposes arise from a chain of free/forgetful ($F$/$U$) adjunctions which runs through left exact, regular, extensional and exact categories (categories with finite limits, epi-mono factorization, coproducts and quotients by equivalence relations, respectively). 
$$\xymatrix{ &&&\rm{\bf{Exten}} \drtwocell^F_U{'\bot}&\\ \Cat \rtwocell^F_U{'\bot} &\bf{\rm{Lex}} \rtwocell^F_U{'\bot} &\bf{\rm{Reg}}\drtwocell^F_U{'\bot} \urtwocell^F_U{'\bot}&& \Ptop\\ &&&\rm{\bf{Exact}} \urtwocell^F_U{'\bot}&\\ }$$ These ``polynomial theories'' are examples of coercion between different logical doctrines (in the sense of Lawvere \cite{doctrines}). \end{comment} \section{Structure sheaf as type-theoretic universe} In the second section we defined the affine structure sheaf $\TT_{\EE}$ as a sheaf of complete theories, varying over the labelled models in $\MM_{\EE}$. Now we would like to introduce an auxiliary sheaf map $\El(\TT)\to\TT$ which can be interpreted as the display map for a type-theoretic universe \`a la Streicher \cite{streicher}. We adopt a weaker version than is presented there, omitting the assumptions regarding dependent products. \begin{defn} A \emph{universe} in a topos $\SS$ is a class of morphisms $\UU\subseteq\SS$ such that \begin{itemize} \item $\UU$ is closed under pullbacks along \emph{any} map in $\SS$. \item $\UU$ contains all monos in $\SS$. \item $\UU$ is closed under composition (type-theoretically, closed under dependent sums). \item $\UU$ contains a generic display map $\El(\UU)\to\Ob(\UU)$. This means that for every $u:U'\to U\in\UU$ there is a map $f_u:U\to\Ob(\UU)$ which induces a pullback square $$\xymatrix{ U' \pbcorner \ar[d]_u \ar[r] & \El(\UU) \ar[d]\\ U \ar[r]_{f_u} & \Ob(\UU)\\ }$$\end{itemize}\end{defn} The last requirement suggests that the display map $\El(\UU)\to\Ob(\UU)$ is something like a ``small-object'' classifier. In particular, the monos in $\SS$ form a universe whose display map is the subobject classifier $1\to\Omega$. More generally, the maps in $\UU$ should be regarded as having small fibers. We begin by defining an auxiliary sheaf map $\TT^\#\to\TT$ (in fact, a pretopos functor).
The pullbacks of $\TT^\#$ can be characterized in terms of $\EE$, in such a way that they are clearly closed under composition. Finally, we combine $\TT^\#$ with the ordinary subobject classifier in order to accommodate monomorphisms. We construct $\TT^\#$ in the same fashion as $\TT$: by describing $\mathring{\TT^\#}\subseteq\TT^\#$, the separated subpresheaf of relatively equivariant sections. Recall from section 2 that $\mathring{\TT}(V_{\varphi(k)})$ is defined to be the syntactic pretopos of the extended theory $\bTT\cup\{\varphi(\bf{c}_k)\}$. This is isomorphic to the slice category $\EE\!/\varphi$, but the additional syntax allows us to define strict transition maps. $\mathring{\TT^\#}(V_{\varphi(k)})$ consists of global sections in the same syntactic pretopos: $$\mathring{\TT^\#}(V_{\varphi(k)})\cong\Gamma(\EE\!/\varphi).$$ There is an obvious forgetful functor $\Gamma(\EE\!/\varphi)\to\EE\!/\varphi$, natural in $\varphi$, so this induces a functor of presheaves $\mathring{\TT^\#}\to\rTT$. Sheafification gives the actual display map $\TT^\#\to\TT$. The stalks of $\El(\TT)$ are easy to describe: an object of the stalk is a definable set $A$ in the model $\mu$ together with an element $a\in A$. A map $\<A',a'\>\to\<A,a\>$ is a definable function $\sigma:A'\to A$ in $\TT_\mu$ such that $\sigma(a')=a$. An isomorphism $\alpha:\mu\to\nu$ acts on these stalks in the obvious way, sending $a\in A^\mu\mapsto \alpha(a)\in A^\nu$. This leaves us with: \begin{defn} The display map $\TT^\#\to\TT$ is a pretopos functor in \emph{$\EqSh(\MM)$}. $\TT^\#$ is the sheafification of a separated presheaf $$\mathring{\TT^\#}(V_{\varphi(k)})\cong\Gamma(\EE\!/\varphi)$$ and the display map is induced by the forgetful functors $\Gamma(\EE\!/\varphi)\to\EE\!/\varphi$.
\end{defn} \begin{defn} A map $F\to G\in\emph{\EqSh}(\MM)$ is called \emph{definably $\EE$-small} if the pullback $s^*F$ is definable whenever $s:\ext{\varphi}\to G$ has a definable domain (from which it follows that the projection $s^*F\to\ext{\varphi}$ is definable as well). \end{defn} \begin{prop} Pullbacks of the display map $\TT^\#\longrightarrow\TT$ are exactly the definably $\EE$-small maps in \emph{$\EqSh(\MM)$}. \end{prop} \begin{proof} First suppose that we have a map $\sigma:\psi\to\varphi$ in $\EE$. From this, we can define an equivariant map $\name\sigma:\ext{\varphi}\to\TT_0$ by sending each $b\in\varphi^\mu$ to the definable set $$\name\sigma(b)=\{a\in\psi^\mu\ |\ \mu\models\sigma(a,b)\}.$$ Let $P$ denote the pullback of $\El(\TT)$ along $\name\sigma$. Our display map sends each element $\big\<a\in\sigma(x,b)^\mu\big\>\in\El(\TT)_\mu$ to its container $\sigma(x,b)^\mu\in\TT_\mu$; from this it is obvious that $P$ and $\ext{\psi}$ have the same fibers over $\ext\varphi$. As for the topology, note that a basis section $V_{\gamma(k)}\to P$ determines a section $\xymatrix{E \ar[r] & \ar@/_2ex/[l] \gamma}$ and a map $\tau:\gamma\to\varphi$. The commutativity condition says that $E\cong\tau^*\psi$ $$\xymatrix@R=3ex@C=2ex{ V_{\gamma(k)} \ar[dd] \ar[rr] \ar@{}[rrdd]|{\bigcirc} && \El(\TT) \ar[dd] &&E \pbcorner \ar[dd] \ar[rrr] &&& \psi \ar[dd]^{\sigma} \\ &&&\Iff&&&\\ \ext{\varphi} \ar[rr]_{\name\sigma} &&\TT && \gamma \ar@/^2ex/[uu] \ar[rrr]_\tau &&& \varphi\\ }$$ This provides a factorization $$V_{\gamma(k)}\longrightarrow\ext{\psi}\stackrel{\ext{\sigma}}{\longrightarrow}\ext{\varphi},$$ from which it is easy to show that the bijection $P\cong\ext{\psi}$ is in fact a homeomorphism. Now suppose that we have any definably $\EE$-small map $h:F\to G$ and that the equivariant maps $s_i:\ext{\varphi_i}\to G$ present $G$ as a colimit of definable sheaves.
The pullbacks of $h$ always present $F$ as a colimit over the same index category; because $h$ is $\EE$-small, each $s_i^*F\cong\ext{\psi_i}$ is again definable. Therefore, the definable projections $\ext{\sigma_i}:\ext{\psi_i}\to\ext{\varphi_i}$ induce new maps $\tilde{\sigma_i}:\ext{\varphi_i}\to\TT_0$. Because the $\psi_i$ arise as pullbacks of a common morphism they satisfy a coherence condition: pulling $\sigma_j$ back along the transition map $s_{ij}$ returns $\sigma_i$. As above, this is precisely what we need to show that the associated maps $\tilde{\sigma_i}$ commute with the transition maps $s_{ij}$: $$\begin{array}{ccc} \xymatrix{\ext{\varphi_i} \ar[rd]_{\tilde\sigma_i} \ar[r]^{s_{ij}} & \ext{\varphi_j} \ar[d]^{\tilde\sigma_j}\\ &\TT_0\\} &\raisebox{-.7cm}{$\Iff$}& \xymatrix{\psi_i \pbcorner \ar[d]_{\sigma_i} \ar[r] & \psi_j \ar[d]^{\sigma_j}\\ \varphi_j \ar[r]_{s_{ij}}&\varphi_i.\\}\\ \end{array}$$ Because they commute with the transition maps, the $\tilde\sigma_i$ knit together to give a single map $\tilde\sigma:G\to\TT_0$. The fact that $F$ is a pullback along this map follows from the fact that the pullbacks of $F$ along $s_i$ agree with the pullback of $\TT^\#$ along the composite $\tilde\sigma\circ s_i$. \end{proof} The pullbacks of $\TT^\#\to\TT$ almost define a universe in $\EqSh(\MM)$. The problem is that, by construction, $\TT$ classifies slice objects in $\EE$ but $\EqSh(\MM)$ has monomorphisms $F\rightarrowtail\ext{\varphi}$ where the domain $F$ is not definable; therefore, it cannot correspond to an arrow $\ext{\varphi}\to\TT$. We can get around this by combining $\TT$ with the ordinary subobject classifier. \begin{lemma} Suppose that we have two presentation maps $E\to B$ and $E'\to B'$ which generate classes of maps $\SS$ and $\SS'$ under pullbacks. Then the product map $E\times E'\to B\times B'$ generates the class $\SS\vee\SS'$ of all diagonals $G\overtimes{F} G'\to F$.
$$\xymatrix{ G\overtimes{F} G' \ar[rr] \ar[d] \ar[rrd]|{\in\SS\vee\SS'} && G' \ar[d]^{\in\SS'}\\ G \ar[rr]_{\in\SS} && F. }$$ \end{lemma} \begin{proof} Let $P=G\overtimes{F} G'$ and consider the following diagram: $$\xymatrix@=3ex{ && G \pbcorner \ar[dd] \ar[rrrr] &&&& E \ar[d]\\ &&&&&& B \\ P \pbccorner \ar[rruu] \ar[rrdd]&& F \ar[rrr]_{\<\name G,\name{G'}\>} &&&B\times B' \ar[ru] \ar[dr]&\\ &&&&&& B' \\ && G' \pbdcorner \ar[uu] \ar[rrrr] &&&& E' \ar[u]\\ }$$ To give a map $Z\longrightarrow P$ we must provide $Z\to G$ and $Z\to G'$ which agree when composed to $F$. Once the common map $Z\to F$ is given, the others are specified by arrows $Z\to E$ and $Z\to E'$ which agree with $\name G$ and $\name{G'}$ individually. But this is nothing more than a commutative square $$\xymatrix{ Z \ar[rr] \ar[d] \ar@{}[rrd]|-{\bigcirc}&& E\times E' \ar[d]\\ F \ar[rr]_-{\<\name G,\name{G'}\>} && B\times B'.\\ }$$ $P$ is universal among such $Z$, so it must be the pullback of $E\times E'$ along the map $\<\name G,\name{G'}\>$. \end{proof} \begin{defn} A map $F\to G\in\emph{\EqSh}(\MM)$ is called \emph{$\EE$-small} if its image factorization $F\twoheadrightarrow\im(F)$ is definably $\EE$-small. Equivalently, for any $s:\ext{\varphi}\to G$ with definable domain there is a subobject $E\rightarrowtail\ext{\varphi}$ (possibly not definable) such that $s^*F$ is the restriction of $s^*\TT^\#$ to $E$: $$\xymatrix{ s^*F\ar@{}[r]|-{\cong} &s^*\TT^\#|_E \pbcorner \ar@{>->}[r] \ar[d] & \ext{\psi} \pbcorner \ar[d] \ar[r] & \TT^\# \ar[d]\\ &E \ar@{>->}[r] & \ext{\varphi} \ar[r]_s & \TT.\\ }$$ \end{defn} \begin{thm} The class of $\EE$-small maps is a universe in the topos \emph{$\EqSh(\MM)$}.
The display map for the universe is given by the product: $$\xymatrix{ \ext{\psi}\Big|_E \pbigcorner \ar[rr] \ar[d] && \ \ 1\times \TT^\# \ar@<-.3ex>[d]\\ \ext{\varphi} \ar[rr]_{\<\name\psi,\name E\>} &&\Omega\times\TT\\ }$$ \end{thm} \begin{proof} The preceding lemma shows that the pullbacks of $\1\times\TT^\#\to\Omega\times\TT$ are precisely the restrictions of definably $\EE$-small maps to subobjects, and that is our definition of $\EE$-smallness. This immediately verifies the first and last universe conditions. Since $\EE$ is closed under composition, so is the class of definably $\EE$-small maps (and similarly for monos, of course). Given composable $\EE$-small maps $f, g$, represent each as the pullback of a mono and a definably $\EE$-small map. Composing these representatives pointwise and taking a pullback yields the composite $f\circ g$, which is manifestly $\EE$-small. Finally, the class of $\EE$-small maps contains any monomorphism $E\rightarrowtail G$ as the pullback $$\xymatrix{ E \ar@{>->}[d] \ar[rr] \pbcorner && \ \ 1\times\TT^\# \ar@<-.3ex>[d]\\ G \ar[rr]_{\<\name E,\name{1_G}\>} && \Omega\times\TT.\\ }$$ \end{proof} In fact, we can go further. By regarding $\ext{\varphi}$ as a discrete category in $\EqSh(\MM)$ we may consider natural transformations $\xymatrix{\ext{\varphi} \rrtwocell^{\tilde\sigma}_{\tilde\tau}{\alpha} && \TT}$. This amounts to an assignment of definable functions $\sigma(x,a)^\mu\longrightarrow\tau(y,a)^\mu$ which is continuous in both $a$ and $\mu$.
Given a continuous family of functions between the fibers, we can define a map on the total spaces $$\xymatrix@=15pt{ \ext{\sigma(x,z)} \ar@/^/[rrrr] \ar[rd]^{\bar{\alpha}} \ar[rddd] &\rlowertwocell<\omit>{\omit} &&& \El(\TT) \ar[ddd]\\ &\ext{\tau(y,z)} \ar[dd] \ar@/_/[rrru]\\\\ &\ext{\varphi} \ar@/^8pt/[rrr]^{\tilde \sigma} \ar@/_8pt/[rrr]_{\tilde\tau} &\rtwocell<\omit>{\alpha \hspace{1cm}}&& \TT.\\ }$$ Every map over $\varphi$ determines a natural transformation in this way, so we have an equivalence $$\uHom_{\MM}(\ext{\varphi},\TT)\simeq \EE\!/\varphi.$$ This is a version of the 2-Yoneda lemma. \section{Conclusion} We have just finished sketching a theory of ``logical schemes'', a framework for logical investigations which is modelled on Grothendieck's theory of schemes for algebraic geometry. At the end of the day, this leaves us with a great deal of work still to do. Logical schemes inhabit an area of mathematics somewhere between formal category theory, algebraic geometry and model theory. Each discipline suggests its own avenues of study. On the logical side, we suspect that many of the results of Shelah's classification theory should translate into this new language. Hopefully, this will emphasize the geometric content of such results, many of which are motivated by Galois-theoretic notions. At the same time, algebraic geometry offers a wealth of hammers: powerful theorems and methods developed over a half-century of concerted research. In order to utilize these tools, we must understand which of the myriad ring-theoretic distinctions (finitely generated, noetherian, etc.) translate over to the logical context. Category theory itself offers several additional avenues of further research. One question regards the applicability of our scheme construction in other logical doctrines.
The results of Joyal and Moerdijk \cite{JM1} suggest that our results should readily translate to doctrines involving universal quantification, function spaces and/or dependent products. The existence of schemes for weaker doctrines is less clear, principally because the codomain functor $\EE^{\2}\to\EE$, the model for our structure sheaf $\TT\to\MM$, may fail to be a stack when $\EE$ is not a pretopos. Another avenue of study regards the labellings used to construct the spectral groupoid $\MM_{\EE}$. As we saw in section 3, the fine details of our labelling have effects on the resulting category of schemes; we made several modifications in order to do away with the generating object assumption and to improve the relationship between subschemes and extended theories. A more careful analysis of various labelling schemes ought to provide a more robust scheme theory which is more stable under small changes in the definitions. Context categories, which formally distinguish contexts from formulas, suggest themselves as a good setting for such an analysis. We are heartened by the geometric analysis of groups via Cayley graphs, which depends intimately on the choice of a generating set. And always, higher categories beckon. Here we have focused primarily on the category of logical schemes, largely ignoring their 2-categorical structure. There remain many questions regarding this higher structure. For example, we conjecture that l-schemes are closed under all finite 2-limits, via a gluing construction similar to that for 1-pullbacks. In a different vein, we may ask whether these scheme-type constructions will generalize to the case of a structure $n$-category $\EE$. The prospects for such constructions seem reasonably high, given that our toolbox (sheaves, equivariance, reflective adjunctions, etc.) generalizes nicely to the context of higher categories.
\end{comment} \chapter{Applications} The affine scheme $\Spec(\bTT)$ associated with a logical theory incorporates both semantic and syntactic components of the theory. As such, it is a nexus at which to study the connections between different branches of logic and other areas of mathematics. In this chapter we will describe a few of these connections. The first section discusses a connection between schemes and topos theory. The structure sheaf $\OO_{\bTT}$ is an internal pretopos on the category of equivariant sheaves $\EqSh(\MM_{\bTT})$. The machinery of sheaves and sites can be relativized to this context, so that $\OO_{\bTT}$ is a site for the (internal) coherent topology. We show that the resulting topos of internal sheaves classifies $\bTT$-model homomorphisms. It follows that this can also be regarded as the (topos) exponential of $\Sh(\MM_{\bTT})$ by the Sierpinski topos $\Sets^{\2}$. The next section describes another view of the structure sheaf, as a type-theoretic universe. From the results of chapters 1 and 3 we know that an equivariant morphism of sheaves $e:\ext{\varphi}\to\OO_{\bTT}$ can be classified (up to isomorphism) by an object $E\in\EE\!/\varphi$. We will define an auxiliary sheaf $\El(\OO_{\bTT})\to\OO_{\bTT}$ which allows us to recover $E$ from $e$ via pullback: $$\xymatrix{ \ext{E} \pbcorner \ar[d] \ar[rr] && \El(\OO_{\bTT}) \ar[d]\\ \ext{\varphi} \ar[rr] && \OO_{\bTT}.\\ }$$ This allows us to think of $\OO_{\bTT}$ as a universe of \emph{definably} or \emph{representably} small sets. Formally, we show that $\OO_{\bTT}$ is a coherent universe, a pretopos relativization of Streicher's notion of a universe in a topos \cite{streicher}. In the third section we demonstrate a tight connection between our logical schemes and a recently defined ``isotropy group'' \cite{isotropy} which is present in any topos. This allows us to interpret the isotropy group as a logical construction.
Using this, we compute the stalk of the isotropy group at a model $M$ and show that its elements can be regarded as parameter-definable automorphisms of $M$. In the last section we discuss Makkai \& Reyes' conceptual completeness theorem \cite{MakkaiReyes} and reframe it as a theorem about schemes. The original theorem says that if an interpretation $I:\bTT\to\bTT'$ induces an equivalence $I^*:\bf{Mod}(\bTT')\stackrel{\sim}{\longrightarrow}\bf{Mod}(\bTT)$ under reducts, then $I$ itself was already an equivalence (at the level of syntactic pretoposes). The corresponding statement for schemes is trivial: if $\<I^\flat,I_\sharp\>:\Spec(\bTT')\to\Spec(\bTT)$ is an equivalence of schemes, then the global sections $\Gamma I_\sharp$ defines an equivalence $\bTT\simeq\bTT'$. However, we can unwind the Makkai \& Reyes proof to provide insight into the spectral groupoid $\MM_{\bTT}$. The resulting ``Galois theory'' relates logical properties of $I$ to a mixture of topological and algebraic (i.e., groupoid-theoretic) properties of $I^\flat$. \section[$\OO_{\bTT}$ as a site]{Structure sheaf as site} The formal definitions of sites and sheaves can be described in geometric logic, and can therefore be interpreted internally in any topos (see, e.g., \cite{elephant}, C2.4). We have already noted that the structure sheaf $\OO_{\EE}$ is a pretopos (and hence a site) internally in $\Sh(\EE)\simeq\EqSh(\MM_{\EE})$. In this section we will discuss the topos of internal sheaves $\Sh_{\EE}(\OO_{\EE})$. In particular, we will show that this category of internal sheaves is equivalent to the topos exponential of $\Sh(\EE)$ by the Sierpinski topos $\Sets^{\2}$. Before proceeding we introduce the necessary terminology and notation. In this section we let $\SS=\Sets$, although the same arguments can be relativized to any base topos. 
We let $I$ denote the poset $\{0\leq 1\}$; we will variously regard $I$ as a category (with one non-identity morphism), a finite-limit category (where $1$ is terminal and $0$ is a subobject of $1$) and a regular category (where the non-identity morphism is not a cover). We will use standard notation $\bf{C}\times \bf{D}$ or $\bf{D}^\bf{C}$ for products and exponentials (i.e., functor categories) in $\Cat$. In particular, the \emph{Sierpinski topos} is the functor category $\SS^{\2}$, also known as the arrow category of $\SS$. Its objects are functions in $\SS$ and its morphisms are commutative squares. The Sierpinski topos classifies subobjects $p\leq 1$. Indeed, a geometric morphism $f:\ZZ\to\SS^{\2}$ is equivalent to a left-exact functor $f_0:\2\to\ZZ$. Since $f_0$ must preserve the terminal object 1, it is completely determined by the image $f_0(0)$. Similarly, $f_0$ must preserve subobjects, so the image $f_0(0)=p$ is subterminal in $\ZZ$, and $\2$ has no other finite limits. Let $\Top$ denote the category of (Grothendieck) toposes and geometric morphisms. We will be concerned with products and exponentials in $\Top$ (which will exist for those cases we are concerned with, cf. \cite{elephant} C4); in order to distinguish these from product and functor categories we use the following notation: $$\begin{array}{c} \XX\otimes\YY\to \ZZ\\\hline \XX\to\exp(\YY,\ZZ). \end{array}$$ As a final piece of notation, recall from section 3.5 that $\EE\Rightarrow\EE$ denotes the copower of $\EE$ in $\Ptop$, which is to say that a model of $\EE\Rightarrow\EE$ is the same as a homomorphism of $\EE$-models: $$\begin{array}{c} (\EE\Rightarrow\EE)\longrightarrow\SS\\\hline \xymatrix{\EE \rtwocell & \SS}. \end{array}$$ From lemma \ref{2colimits} we have a syntactic description of $\HH=(\EE\Rightarrow\EE)$ as the pretopos completion of the product category $\EE\times \2$.
In particular, $\HH$ contains two objects $A_0$ and $A_1$ for each object $A\in\EE$, and these are connected by a distinguished morphism $k_A:A_0\to A_1$. \begin{thm} The following toposes are equivalent: \begin{enumerate} \item The category of internal (coherent) sheaves on $\OO_{\EE}$. \item The category of (coherent) sheaves on $\EE\Rightarrow\EE$. \item The exponential (in $\Top$) of $\Sh(\EE)$ by the Sierpinski topos. \end{enumerate} $$\Sh_{\EE}(\OO_{\EE})\simeq\Sh(\EE\Rightarrow\EE)\simeq\exp(\SS^{\2},\Sh(\EE))$$ \end{thm} \begin{lemma} The product in $\Top$ of a topos $\FF$ with the Sierpinski topos is given by the functor category $\FF^{\2}$: $$\FF\otimes\SS^{\2}\simeq\FF^{\2}.$$ \end{lemma} \begin{proof}[Sketch] A more general proof, following from Diaconescu's theorem, can be found in Johnstone \cite{elephant}, Cor. 3.2.12. We will show that both toposes have the same classifying property. First of all, notice that $\FF^{\2}$ has projections to both $\FF$ and $\SS^{\2}$: $$\begin{array}{c} \gamma_{\FF}:\xymatrix{\FF^{\2} \rtwocell^{\id}_{\dom}{`\bot} & \FF},\\ \Gamma^{\2}:\xymatrix{\FF^{\2} \rtwocell^{\Delta^{\2}}_{\Gamma^{\2}}{`\bot} & \SS^{\2}},\\ \end{array}$$ Given $P:\ZZ\to\FF^{\2}$ we can recover $f:\ZZ\to\FF$ and $p\leq 1_{\ZZ}$ by composing with these two projections (and applying the classifying property of $\SS^{\2}$). On the other hand, given $p$ and $f$ we wish to construct an extension $P_f$ as below $$\xymatrix{ \SS^{\2} \ar[r]^{\Delta^{\2}} \ar@{-->}[rd]|{P} & \FF^{\2} \ar@{-->}[d]^{P_f}\\ \2 \ar[r]_{p\leq 1} \ar[u]^y & \ZZ.\\ } $$ In order to construct $P_f$ it helps to first understand $P$. The representable functors $y0$ and $y1$ correspond to the unique functions $\emptyset\to 1$ and $1\to 1$, respectively; $P$ sends these to $p$ and to $1_{\ZZ}$. The unique function $!_2:2\to 1$ is the coequalizer of two projections $y0\rightrightarrows y1$; it follows that $P(!_2)$ is given by two copies of $1_{\ZZ}$ glued together along $p$.
Similarly, $P(!_n)$ consists of $n$ copies of $1_{\ZZ}$ glued at $p$. $P$ sends any other function $\pi:F\to A$ to $A$-many disjoint pieces, each consisting of $F(a)$-many copies of $1_{\ZZ}$ glued along $p$. The construction of $P_f$ is formally similar. The inverse image of a map $\pi:F\to A$ is covered by $p\times f^*A$ and $f^*F$, which we think of as $A$-many copies of $p$ together with $F$-many copies of $1_{\ZZ}$. The image of $\pi$ is then given as a pushout $$\xymatrix{ p\times f^*F \ar[r]^{p\times f^*\pi} \ar[d]_{p_2} & p\times f^*A \ar[d] \\ f^*F \ar[r] & P_f^*(\pi). }$$ When $F$ and $A$ are discrete (i.e., in the image of $\Delta$) this is the same as our description of $P$, so the extension diagram above commutes. One should also check that $P_f^*$ does, in fact, define the inverse image of a geometric morphism. This can be established by constructing a hom-tensor adjunction as in \cite{SGL}, VII.7-9. \end{proof} \begin{lemma}\label{left_equiv} The exponential topos $\exp(\SS^{\2},\Sh(\EE))$ classifies homomorphisms of $\EE$-models and is therefore equivalent to the topos of sheaves on the copower $\EE\Rightarrow\EE$. \end{lemma} \begin{proof} According to the defining adjunction of the exponential together with the last lemma we have equivalences, natural in $\ZZ$, $$\begin{array}{c} \ZZ\longrightarrow\exp(\SS^{\2},\Sh(\EE))\\\hline \ZZ\otimes\SS^{\2}\longrightarrow \Sh(\EE)\\\hline \ZZ^{\2}\longrightarrow\Sh(\EE).\\ \end{array}$$ The latter geometric morphism is completely determined by a pretopos functor $\EE\to\ZZ^{\2}$. But $\ZZ^{\2}$ is a power object in $\Ptop$ and, by the universal properties of powers and copowers we have another sequence of equivalences $$\begin{array}{c} \EE\longrightarrow \ZZ^{\2}\\\hline \xymatrix{\EE \rtwocell & \ZZ}\\\hline (\EE\Rightarrow\EE)\longrightarrow \ZZ. \end{array}$$ The last pretopos functor induces a unique geometric morphism ${\ZZ\to\Sh(\EE\Rightarrow\EE)}$. 
By the uniqueness of classifying toposes, we conclude that $$\exp(\SS^{\2},\Sh(\EE))\simeq\Sh(\EE\Rightarrow\EE).$$ \end{proof} \begin{prop}\label{right_equiv} The topos of internal sheaves on $\OO_{\EE}$ is equivalent to the category of sheaves on $\EE\Rightarrow\EE$: $$\Sh_{\MM_{\EE}}(\OO_{\EE})\simeq\Sh(\EE\Rightarrow\EE).$$ \end{prop} \begin{proof} Let $\GG=\Sh_{\MM_{\EE}}(\OO_{\EE})$. Recall the construction of $\OO_{\EE}$: we first sheafified the codomain fibration $\EE^{\2}\to\EE$ and then transported it across the equivalence $\Sh(\EE)\simeq\EqSh(\MM_{\EE})$. Because equivalent sites produce equivalent categories of internal sheaves, the topos $\GG$ is insensitive to both operations; therefore we need not distinguish between $\OO_{\EE}$ and $\EE^{\2}$. Since $\Sh(\EE\Rightarrow\EE)$ is a coherent topos, Deligne's theorem ensures that it has enough points. Moreover, an internal version of Deligne's theorem ensures that $\GG$ also has enough points. Since $\OO_{\EE}$ is a coherent site internally it has enough ``internal points'' $\Sh(\EE)\to\GG$. $\Sh(\EE)$ is (externally) coherent, so it has enough ordinary points, and we may compose these facts. Given distinct, parallel morphisms $f\not=g$ in $\GG$, first find $H:\Sh(\EE)\to\GG$ such that $H^*f\not=H^*g$. Then find $M:\SS\to\Sh(\EE)$ such that $M^*(H^*f)\not= M^*(H^*g)$. The resulting composite $H\circ M$ is a point $\SS\to\GG$ which separates $f$ and $g$. Therefore $\GG$ also has enough points and we can verify that $\GG\simeq\Sh(\EE\Rightarrow\EE)$ by checking an equivalence between their categories of points: $$\Top(\SS,\Sh(\EE\Rightarrow\EE))\simeq\Top(\SS,\GG).$$ We begin by calculating the points $g:\SS\to\GG$. Such a point can be represented as a pair $g=\<M,f\>$ where $M$ is a point of $\Sh(\EE)$ and $f$ is a point in the (external) sheaf topos $\Sh(M^*\OO_{\EE})$. Given $g$, we can recover $M$ by composing with the internal global sections morphism $\GG\to\Sh(\EE)$.
From the results of section 2.4 we know that the stalk of $\OO_{\EE}$ at $M$ is the diagram of $M$: $M^*\OO_{\EE}\simeq \Diag(M)$. Since a model of the diagram of $M$ consists of another model $N$ together with a homomorphism $f:M\to N$, the points of $\GG$ are the same as those of $\Sh(\EE\Rightarrow\EE)$. We must also check the morphisms between points of $\GG$ and show that these agree with the morphisms of points in $\Sh(\EE\Rightarrow\EE)$. First suppose that we have a morphism of points in $\Sh(\EE\Rightarrow\EE)$; this amounts to a natural transformation $\xymatrix{**[l](\EE\Rightarrow\EE) \rtwocell^h_{h'}{\eta} & \SS}$; here $h$ and $h'$ correspond to $\EE$-model homomorphisms $M\to N$ and $M'\to N'$, respectively. If we consider the components of $\eta$ at $A_0$ (resp. $A_1$) for each object $A\in\EE$, these define a model homomorphism $\eta_0:M\to M'$ (resp. $\eta_1:N\to N'$). Naturality of $\eta$ along the distinguished morphisms $k_A:A_0\to A_1$ shows that these homomorphisms commute with $h$ and $h'$, so that a morphism of points in $\Sh(\EE\Rightarrow\EE)$ is equivalent to a commutative square in $\bf{Mod}(\EE)$: $$\xymatrix{ h(A_0) \ar[r]^{\eta_{0A}} \ar[d]_{h(k_A)} & h'(A_0) \ar[d]^{h'(k_A)} && M \ar[r]^{\eta_0} \ar[d]_{h} & M' \ar[d]^{h'}\\ h(A_1) \ar[r]_{\eta_{1A}} & h'(A_1) && N \ar[r]_{\eta_1} & N'. }$$ Now consider a transformation $\xymatrix{\SS \rtwocell^g_{g'}{\gamma} & \GG}$. As we did for the points, we can characterize such a transformation in terms of a component at $\Sh(\EE)$ together with a component at (the inverse image of) the structure sheaf. As above, the first of these is defined by whiskering with the internal global sections geometric morphism.
This yields a homomorphism $\gamma_0$ between the first components of $g=\<M,f\>$ and $g'=\<M',f'\>$: $$\xymatrix{\SS \rrtwocell^g_{g'}{\gamma} \ar@/^1cm/[rrr]^M \ar@/_1cm/[rrr]_{M'} && \GG \ar[r] & \Sh(\EE)}$$ This homomorphism induces a pretopos functor between the diagram of $M$ and the diagram of $M'$ which, by abuse of notation, we also call $\gamma_0$. This functor mediates a natural transformation $\overline{\gamma}$ between the second components of $g$ and $g'$: $$\xymatrix{ \Diag(M) \ar[rr]^{\Diag(\gamma_0)} \ar[rd]_f & \ar@{}[d]|{\Rightarrow} \ar@{}[d]|(.35){\overline{\gamma}} & \Diag(M') \ar[ld]^{f'}\\ & \SS &.\\ }$$ The original transformation $\gamma$ is completely determined by the pair $\<\gamma_0,\overline{\gamma}\>$. Recall that an object of $\Diag(M)$ has the form $\varphi(x,b)$, where $\varphi\rightarrowtail A\times B$ is a formula in $\EE$ and $b\in B^M$ is a parameter in $M$. Since $\gamma_0$ (regarded as a pretopos functor) acts by sending $\varphi(x,b)\mapsto\varphi(x,\gamma_0(b))$, it has no effect on unparameterized formulas. Therefore it commutes with the canonical inclusions $\EE\to\Diag(M)$ and $\EE\to\Diag(M')$. If we let $N$ and $N'$ denote the (codomain) models associated with $f$ and $f'$, then composing with $\overline{\gamma}$ yields a second model homomorphism $\gamma_1:N\to N'$ $$\xymatrix@R=3ex{ & \Diag(M) \ar[dd] \ar[rd]^f &\\ \EE \ar[rd] \ar[ru] \ar@/^1.5cm/[rr]^N \ar@/_1.5cm/[rr]_{N'} & \ar@{}[r]|{\overline{\gamma}} \ar@{}[r]|(.35){\Downarrow}& \SS\\ & \Diag(M') \ar[ru]_{f'} &\\ }$$ The naturality of $\overline{\gamma}$ ensures that this homomorphism defines a commutative square. To see this, consider the component of $\overline{\gamma}$ at the singleton formula $x=a$ (for some element $a\in A^M$).
This has an obvious inclusion $(x=a)\rightarrowtail A$ and, by naturality, this must commute with $\overline{\gamma}_A=\gamma_{1A}$: $$\xymatrix{ \Diag(M): \ar[d]_{\gamma_0} & x=a \ar@{|->}[d] \ar@{|->}[r] && (x=f(a))^N \ar[d]_{\overline{\gamma}_{x=a}} \ar@{>->}[r] & A^N \ar[d]^{\gamma_{1A}=\overline{\gamma}_A} \\ \Diag(M'): & x=\gamma_0(a) \ar@{|->}[r] && (x=f'(\gamma_0(a)))^{N'} \ar@{>->}[r] & A^{N'}\\ }$$ This shows that the component of $\gamma_1$ at $A$ applied to the unique element ${x=f(a)}$ in $N$ is equal to the unique element $x=f'(\gamma_0(a))$ in $N'$; in other words, $\gamma_1\circ f=f'\circ \gamma_0$. Therefore $\Top(\SS,\GG)\simeq\Top(\SS,\Sh(\EE\Rightarrow\EE))$, completing the proof. \end{proof} Combining lemma \ref{left_equiv} and proposition \ref{right_equiv} we have proved the following theorem: \begin{thm*} The following toposes are equivalent: \begin{enumerate} \item The category of internal sheaves on $\OO_{\EE}$. \item The category of sheaves on $\EE\Rightarrow\EE$. \item The exponential (in $\Top$) of $\Sh(\EE)$ by the Sierpinski topos. \end{enumerate} $$\Sh_{\EE}(\OO_{\EE})\simeq\Sh(\EE\Rightarrow\EE)\simeq\exp(\SS^{\2},\Sh(\EE))$$ \end{thm*} \section{Structure sheaf as universe} In this section we investigate a connection between logical schemes and type theory, specifically the notion of a type-theoretic universe. We will show that the structure sheaf $\OO_{\EE}$ can be regarded as a universe of ``representably small'' morphisms in the topos of equivariant sheaves $\EqSh(\MM_{\EE})$. In particular, this will show that we can conservatively extend any coherent theory to include a universe of small sets. The category theorist tends to think of arrows, rather than objects, as small; a universe in $\EE$ is a subclass of arrows $\UU\subseteq \rm{Ar}(\EE)$ satisfying some natural closure and generation principles.
Intuitively, a map is small when each of its fibers is; consequently, an object is small just in case the terminal projection $E\to 1$ belongs to $\UU$. From this characterization we can immediately recognize one of the defining properties of a universe: it should be closed under pullback. After all, any fiber of a pulled back map is a fiber of the original map, so if a given map has small fibers so will all of its pullbacks. Notice that this automatically closes small maps under isomorphism. Another important requirement is that small maps should be closed under composition. In $\Sets$, this corresponds to the assumption that a small coproduct of small sets is again small. This is essentially the axiom of replacement: when $A$ is a set and $\{F_a\ |\ a\in A\}$ is a (disjoint) family of sets, then the union $F=\bigcup_{a\in A} F_a$ is again a set. More generally, this translates to closure under the type-theoretic operation of dependent sums. \begin{comment} Recall that a \emph{natural numbers object (NNO)} in a category (with products) $\EE$ is a universal diagram of the form $\xymatrix{1 \ar[r]^0 & N \ar@(ur,dr)^s}$\ . For most purposes we require an NNO to be stable under pullback: $N$ is a \emph{parameterized NNO} if, for every $X\in\EE$, the pullback $N\times X$ is an NNO for the slice category $\EE\!/X$. This means that for any slice object $E\to X$, any section $e:X\to E$ and any endomorphism $f:E\to E$ over $X$ there is a unique recursion map as in the diagram below: $$\xymatrix{ N\times X \ar[rr]^{s\times X} \ar@{-->}[d]^{\rm{rec}(e,f)} && N\times X \ar@{-->}[d]^{\rm{rec}(e,f)}\\ E \ar[d] \ar[rr]_{f} && E \ar[lld] \\ X \ar@/^2ex/[u]^(.65){e\!\!} \ar@/^5ex/[uu]^(.65){0\times X} &&\\ }$$ In a locally cartesian closed category (e.g., a topos), exponentiation allows one to curry the parameters in $X$, so that the two definitions coincide. Henceforth, we will drop the modifier, assuming that all NNO's are parameterized. 
Given a pretopos with NNO we can give the usual recursive definitions of addition and ordering on $N$: $$\xymatrix{ N \ar[r]^-{0\times N} \ar[dr]_N & N\times N \ar[r]^{s\times N} \ar[d]^{+} & N\times N \ar[d]^{+}\\ & N \ar[r]_s & N,\\ }$$ $$\leq^N=\{(k,n)\ |\ \exists m.\ k+m=n\}\subseteq N\times N.$$ The second projection from this subobject defines the indexed `set' of Kuratowski ordinals $K\to N$. Each finite ordinal $k=\coprod_k 1_{\EE}$ arises as the pullback of $K$ against some singleton $1\to N$. $$\xymatrix{ k \pbcorner \ar[r] \ar[d] & K \ar[d]\\ 1 \ar[r]_{s^k(0)} & N\\ }$$ More generally, any map with finite fibers can be represented as a pullback of K. In this way, we think of $N$ as a universe of small (finite) sets. The projection $K\to N$ displays the elements of these small sets: the fiber of $K$ over an element $n\in N$ is isomorphic to the set $n$ itself. \end{comment} Finite sets provide a motivating example for these issues. Consider the following map in $\Sets$: $$K=\{(k,n)\in \NN\times\NN\ |\ k<n\} \stackrel{p_2}{\longrightarrow} \NN.$$ This projection has the nice property that, for every $m\in\NN$, the fiber of $K$ over $m$ contains exactly $m$ elements. More generally, any function $F\to A$ whose fibers are finite is realized as a pullback of $K$: $$\xymatrix{ F \pbcorner \ar[rr] \ar[d] && K \ar[d]\\ A \ar[rr]_{a\mapsto \|F(a)\|} && \NN. }$$ We say that $K\to \NN$ is \emph{generic} for maps with finite fibers. The third axiom of small maps says that any categorical universe should contain such a generic display map. Any class of small maps should contain a map $\rm{El}(U)\to U$ such that every small $s:E'\to E$ arises as the pullback against some $f:E\to U$: $$\xymatrix{ E' \pbcorner \ar[r] \ar[d]_{\forall\ \rm{small}\ s} & \rm{El}(U) \ar[d]\\ E \ar[r]_{\exists\ f} & U.\\ }$$ Note that in general neither $U$ nor $\El(U)$ will themselves be small objects.
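The genericity of $K\to\NN$ can be checked mechanically on truncated data. The following Python sketch is purely illustrative (helper names such as `fiber` are our own): it verifies that the fiber of $K$ over $m$ has exactly $m$ elements, and that a finite-fibered function is recovered, fiber by fiber, as a pullback of $K$.

```python
# The generic map K -> N for finite-fibered functions, truncated to
# n < 10 for illustration.  (Sketch; helper names are ours.)

N_BOUND = 10
K = {(k, n) for n in range(N_BOUND) for k in range(n)}

def fiber(f, E, a):
    """The fiber of f : E -> A over the point a."""
    return {x for x in E if f[x] == a}

p2 = {pair: pair[1] for pair in K}   # the projection K -> N

# the fiber of K over m contains exactly m elements
assert all(len(fiber(p2, K, m)) == m for m in range(N_BOUND))

# a finite-fibered function F -> A ...
F = {'u', 'v', 'w'}
A = {0, 1}
f = {'u': 0, 'v': 0, 'w': 1}

# ... is classified by  a |-> |fiber of f over a| : A -> N,
name = {a: len(fiber(f, F, a)) for a in A}
# and pulling K back along this map recovers F fiberwise:
P = {(a, k) for a in A for (k, n) in K if n == name[a]}
p1 = {pair: pair[0] for pair in P}
assert all(len(fiber(p1, P, a)) == len(fiber(f, F, a)) for a in A)
```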
\begin{defn} A \emph{coherent universe} in a category $\EE$ is a class of maps $\UU\subseteq \rm{Ar}(\EE)$ such that: \begin{itemize} \item $\UU$ is closed under pullbacks. \item $\UU$ is closed under composition/dependent sums. \item There is an object $U$ and a morphism $\pi_{\UU}:\El(U)\to U$ in $\UU$ such that every small map is a pullback of $\pi_{\UU}$. \end{itemize}\end{defn} This is a generalization of Streicher's definition of a universe in a topos (cf. \cite{streicher}), from which we have removed two conditions which are inappropriate to the context of pretoposes. The first says that all monos should be small maps. This is intuitively reasonable, since the fibers of a mono are either empty or singletons. However, this is not as straightforward as it seems. We can reformulate the example of finite sets in any pretopos containing a (parameterized) natural numbers object. This amounts to a weak intuitionistic set theory, and in these cases the finite sets may not be closed under subobjects. This means that we can have a small object which contains non-small subobjects, in the sense that they do not arise as pullbacks of $K$. Something similar occurs in our context, where smallness will correspond to definability. As already observed in Chapter 2, not every equivariant subobject of a definable sheaf is definable; a further compactness condition is required. Our notion of smallness incorporates, in some sense, smallness of definition in addition to smallness of fibers, and so closure under monomorphisms is an unreasonable requirement in this setting. Additionally, the definition of a universe in a topos usually requires small maps to be closed under dependent products as well as sums. As we are working with pretoposes, which do not model dependent products, it is reasonable to omit this requirement as well.
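The fiberwise picture behind the second axiom can be spelled out explicitly (a standard computation, recorded here for orientation): in $\Sets$, a fiber of a composite decomposes as a small-indexed coproduct of small sets, which is exactly a dependent sum.

```latex
% For f: E' -> E and g: E -> B, and a point x of B:
\[
  (g\circ f)^{-1}(x)
  \;\cong\; \coprod_{y\,\in\, g^{-1}(x)} f^{-1}(y)
  \;\cong\; \sum_{y\,:\,g^{-1}(x)} f^{-1}(y).
\]
% So if both g and f have small fibers, each fiber of the composite is a
% small coproduct of small sets -- the replacement axiom in the motivating
% case of Sets, and the dependent sum in the type-theoretic reading.
```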
\begin{defn} A map of equivariant sheaves $f:E'\to E\in\EqSh(\MM)$ is \emph{definably small} if the pullback along any map from a definable sheaf $\ext{A}\to E$ (cf. page \pageref{def_sheaf}) is a definable map: $$\xymatrix{ [\![\ F]\!] \pbcorner \ar[d]_{\ext{\tau}} \ar[r] & E' \ar[d]^{f}\\ \ext{A} \ar[r] & E. }$$ \end{defn} These maps are also called \emph{representably small}, because the equivalence $\EqSh(\MM)\simeq\Sh(\EE)$ carries each definable sheaf $\ext{E}$ to the corresponding representable functor $yE$. By stacking pullback squares either horizontally or vertically, the two-pullbacks lemma immediately verifies both the first and second requirements for a universe: $$\begin{array}{p{7cm}p{5cm}} \raisebox{-1cm}{$\xymatrix{ q^*(p^*E')\cong (pq)^*E' \pbcorner \ar@/^3ex/[rr] \ar[r] \ar[d]_{\rm{definable}} & p^*E' \pbcorner \ar[d] \ar[r] & E' \ar[d]^{\rm{small}}\\ \ext{A} \ar[r]_q & D \ar[r]_p & E,\\ }$} & \xymatrix{ \ext{F} \pbcorner \ar[d] \ar@/_3ex/[dd]_{\rm{definable}} \ar[r] & E'' \ar[d]^{\rm{small}}\\ \ext{B} \pbcorner \ar[d] \ar[r] & E' \ar[d]^{\rm{small}}\\ \ext{A} \ar[r] & E.\\ } \end{array}$$ The most interesting aspect of this universe is the generic map. Let $L$ denote the ``walking section'', the category with three non-identity arrows $$\xymatrix{ 0 \ar@<1ex>[r]^p \ar@(dl,ul)^r & \ar@<1ex>[l]^s 1, & r=s\circ p,& p\circ s=\id_1. }$$ The inclusion of $\2$ into $L$ (as $p$) induces a functor $\EE^L\to\EE^{\2}$. This makes $\EE^L$ into a stack over $\EE$, so we sheafify it and transport it across the equivalence $\Sh(\EE)\simeq\EqSh(\MM_{\EE})$ just as we did for the structure sheaf. This gives our desired generic map. \begin{thm} Let $\pi_{\EE}:\El(\OO_{\EE})\to\OO_{\EE}$ denote the map of equivariant $\MM_{\EE}$-sheaves which arises as the sheafification and transport of the canonical functor $\EE^L\to\EE^{\2}$. This map is generic for definably small maps in $\EqSh(\MM_{\EE})$.
\end{thm} \begin{proof} First we show that every definable map $\ext{\tau}:\ext{B}\to\ext{A}$ arises as a pullback of $\El(\OO_{\EE})$. From proposition \ref{str_sheaf_secs} we know that there is an equivalence between partial sections of the structure sheaf and objects of the slice category $$\OO_{\EE}(V_{A(k)})\simeq\EE\!/A.$$ Using this we can associate $\tau$ with a partial equivariant section $t:V_{A(k)}\to\OO_{\EE}$. According to lemma \ref{equiv_ext}, this has a unique extension to an equivariant map ${\overline{t}:\ext{A}\to\OO_{\EE}}$. Now consider the pullback $$\xymatrix{ P \pbcorner \ar[r] \ar[d] & \El(\OO_{\EE}) \ar[d]^{\pi_{\EE}}\\ \ext{A} \ar[r]_{\overline{t}} & \OO_{\EE}.\\ }$$ We claim that the left-hand map is isomorphic to $\ext{\tau}$. Both strictification and transport preserve pullbacks, so it will be enough to show that the following diagram of functors is a (strict) pullback in stacks over $\EE$: $$\xymatrix{ \EE\!/B \pbcorner \ar[d]_{\EE\!/\tau} \ar[r] & \EE^L \ar[d]\\ \EE\!/A \ar[r]_{\overline{\tau}} & \EE^{\2}\\ }$$ Here $\EE\!/A$, for example, is the (representable) stack fibered over its domain as in proposition \ref{2yoneda}. Now suppose that we have an object $\epsilon\in\EE\!/A$ and a section $\<p,t\>\in\EE^L$. To say that these agree over $\EE^{\2}$ means that $\overline{\tau}(\epsilon)\cong\epsilon^*(\tau)\cong p$, and $t$ is a section of this pullback: $$\xymatrix{ \epsilon^*B \pbcorner \ar[d]^{\overline{\tau}(\epsilon)} \ar[rr] && B \ar[d]^{\tau}\\ E \ar[rr]_{\epsilon} \ar@/^2ex/[u]^t && A\\ }$$ But this data is precisely equivalent to a map $\beta:E\to B$ such that $\tau\circ\beta=\epsilon$. Since $\EE\!/-$ acts by precomposition, this means that $\EE\!/B$ is the pullback of $\EE^L$ along $\overline{\tau}$. Now suppose that some equivariant sheaf map $f:D\to E$ is definably small. We must define a map $\name{f}:E\to\OO_{\EE}$ such that we have an isomorphism $\name{f}^*\El(\OO_{\EE})\cong D$ over $E$.
We define $\name{f}$ locally by giving the image of any basic open section in $E$. For such a partial section $e:V_{A(k)}\to E$ there is a unique equivariant extension $\ol{e}:\ext{A}\to E$ and, because $f$ is definably small, its pullback is a definable map $\ext{\beta}$. This definable map induces a partial section $\name{\beta}$ in the structure sheaf, and this section will be the image of $e$ under $\name{f}$. We can see that $\name{f}$ is well-defined by checking that its definition agrees whenever two sections $b$ over $V_{B(k)}$ and $c$ over $V_{C(j)}$ overlap, as in the diagram below. That overlap will contain a subsection over a smaller basic open neighborhood $V_{A(i,j,k)}$, and these lift to equivariant maps $\ext{A}\to\ext{B}$ and $\ext{A}\to\ext{C}$. Because the sections $b$ and $c$ overlap, these canonical lifts will commute: $$\xymatrix{ \ext{A} \ar@{-->}[rr] \ar@{-->}[rd] && \ext{C} \ar@{->}[rd]^{\overline{c}} &\\ & \ext{B} \ar@{->}[rr]_(.3){\overline{b}} && E \\ V_{A(i,j,k)} \ar[uu] \ar@{>->}[rr]|(.52)\hole \ar@{>->}[rd] && V_{C(j)} \ar[uu]|\hole \ar[ur]^{c} \\ & V_{B(k)} \ar@/_5ex/[uurr]_(.3){b} \ar[uu] \\ }$$ Now suppose that $d$ is a lift of $e$ along the map $f$ (i.e., $f\circ d=e$) as in the diagram below. This induces an equivariant lift $\overline{d}:\ext{A}\to D$ such that $f\circ\overline{d}=\overline{e}$, and hence a section of the pullback $\ext{\beta}$. $$\xymatrix{ \ext{B} \ar[dd]^{\ext{\beta}} \ar[rr] && D \ar[dd]_f \ar@{-->}[rr] && \El(\OO_{\EE}) \ar[dd]\\ &&& V_{A(k)} \ar[ld]_e \ar[ul]^{d} \ar[dr]_{\name{\beta}} \ar[ur]^{\name{\<\beta,\delta\>}}\\ \ext{A} \ar[rr]_{\ol{e}} \ar@/^3ex/@{-->}[uu]^{\ext{\delta}} \ar[uurr]_{\ol{d}}&& E \ar[rr] &&\OO_{\EE}\\ }$$ The pair $\<\beta,\delta\>$ defines a partial section of $\El(\OO_{\EE})$ over $V_{A(k)}$ and an argument like the one above shows that these agree on their overlaps. Therefore they patch together to give a map $D\to\El(\OO_{\EE})$.
The projection $\El(\OO_{\EE})\to\OO_{\EE}$ sends $\name{\<\beta,\delta\>}$ to $\name{\beta}$, so the map commutes over $\OO_{\EE}$. Moreover, the fact that $\delta$ is uniquely determined by the canonical lift $\ol{d}$ implies that the right-hand square is a pullback. This demonstrates that the projection $\El(\OO_{\EE})\to\OO_{\EE}$ is generic for definably small maps in $\EqSh(\MM)$ and completes the proof. \end{proof} The results of this section show that we can regard the map $\El(\OO_{\EE})\to\OO_{\EE}$ as a universe for an interpretation of (a weak form of) dependent type theory which involves dependent sums but not dependent products. More specifically, when $\EE$ is locally Cartesian closed, this can act as a universe for all of dependent type theory \cite{repre_models}. Following the results of \cite{AST} we can also use this as the basis for a model of algebraic set theory. \section{Isotropy} In this section we discuss a connection between our logical schemes and recent developments in topos theory. Motivated by constructions in semigroups, Funk, Hofstra and Steinberg have recently discovered a canonical group object internal to any topos \cite{isotropy}. Here we give a logical interpretation of this isotropy group, showing that it is closely related to the structure sheaf of our logical schemes. This provides a new perspective on the construction and also yields an easy calculation of the stalks of the group. As a corollary of our earlier results, we also give an external description of the isotropy group relating it to the Sierpinski topos. If $\FF$ is a topos, then so is each slice category $\FF\!/A$, and any morphism $f:B\to A$ in $\FF$ induces a geometric morphism $\overline{f}:\FF\!/B\to\FF\!/A$ whose inverse image is given by pullback along $f$. In particular, every slice topos has a canonical geometric morphism $\pi_A:\FF\!/A\to\FF$.
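For orientation, the inverse image of $\pi_A$ is the product functor $A^\times$, and it sits in a well-known adjoint triple (a standard fact about slice toposes, recorded here but not proved above; the notation $\Sigma_A$, $\Pi_A$ is ours):

```latex
% Composition with A (dependent sum) is left adjoint to the inverse image,
% and the dependent product (sections) functor is right adjoint to it:
\[
  \Sigma_A \;\dashv\; \pi_A^{*} \;\dashv\; \Pi_A,
  \qquad
  \Sigma_A(p:E\to A)\;=\;E,
  \qquad
  \pi_A^{*}(F)\;=\;\bigl(F\times A\xrightarrow{\ p_2\ } A\bigr).
\]
```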
Funk, Hofstra and Steinberg define an \emph{isotropy functor} ${\ZZ:\FF^{\op}\to\bf{Grp}}$ which sends each object $A\in\FF$ to the group of natural automorphisms of $\pi_A$ (i.e., of the inverse image $\pi_A^*$): $$\ZZ(A)\cong\left\{\alpha\ \bigg|\ \xymatrix{**[l]\FF\!/A \rtwocell{\omit \rm{\rotatebox[origin=c]{270}{$\cong$}}\ \alpha} & \FF}\right\}$$ We sometimes denote such an automorphism by writing $\alpha:\FF\!/A\longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm}\FF$. The action of a morphism $f:B\to A$ on $\ZZ$ is given by composition with $\overline{f}$. As they demonstrate, $\ZZ$ sends colimits in $\FF$ to limits in $\bf{Grp}$, from which it follows that $\ZZ$ is representable: there is an internal group $Z\in\bf{Grp}(\FF)$ such that $\ZZ\cong\Hom_{\FF}(-,Z)$. To define $Z$ from $\ZZ$, represent $\FF$ as the topos of sheaves on a site $(\bf{C},\JJ)$. The composite $\bf{C}^{\op}\stackrel{y}{\longrightarrow}\FF^{\op}\stackrel{\ZZ}{\longrightarrow}\bf{Grp}$ is a sheaf of groups, which we also denote $Z$, and for any object $\displaystyle F\cong\colim_{i\in\int F} yC_i$ there are natural isomorphisms $$\begin{array}{rclcl} \ZZ(F)&\cong&\ZZ(\colim_i yC_i)\\ &\cong& \lim_i \ZZ(yC_i)\\ &\cong& \lim_i Z(C_i)\\ &\cong& \lim_i \Hom_{\FF}(yC_i,Z)\\ &\cong& \Hom_{\FF}(\colim_i yC_i,Z) & \cong & \Hom_{\FF}(F,Z).\\ \end{array}$$ \begin{defn} The \emph{isotropy group} of a topos $\FF$ is the internal group $Z\in\bf{Grp}(\FF)$ which represents the isotropy functor (i.e., $\ZZ\cong\Hom_{\FF}(-,Z)$). \end{defn} \begin{lemma} If $(\bf{C},\JJ)$ is a subcanonical site closed under products, then for any $A\in\bf{C}$ the Yoneda embedding induces a canonical isomorphism $$\ZZ(yA)=\Aut(\pi_{yA}^*)\cong\Aut(A^\times).$$ \end{lemma} \begin{proof} Here $A^\times:\bf{C}\to\bf{C}/A$ is the functor sending each object $C\in\bf{C}$ to the second projection $C\times A\to A$.
The slice category $\bf{C}/A$ inherits a topology from $\bf{C}$ and (because $\JJ$ is subcanonical) there is an equivalence $\Sh(\bf{C}/A)\simeq\Sh(\bf{C})/yA$ (cf. \cite{elephant}, C2.2.17). Let $y_A$ denote the sliced Yoneda embedding $\bf{C}/A\to\Sh(\bf{C}/A)\simeq\Sh(\bf{C})/yA$. Since both $A^\times$ and $\pi_{yA}^*$ act by taking products (with $A$ and $yA$, respectively) there is a factorization $\pi_{yA}^*\circ y\cong y_A\circ A^\times$: $$\pi_{yA}^*(yC)= \raisebox{.7cm}{\xymatrix{yC\times yA\ar[d]\\yA\\}} \cong \raisebox{.7cm}{\xymatrix{y(C\times A)\ar[d]\\yA\\}}= \ y_A\left(\raisebox{.7cm}{\xymatrix{C\times A\ar[d]\\A\\}} \right)=y_A(A^\times(C)).$$ Now suppose $\alpha$ is a natural automorphism of $\pi_{yA}^*$. Since $\pi_{yA}^*$ factors through $A^\times$ while $y_A$ is full and faithful, the components of $\alpha$ descend uniquely to an automorphism of $A^\times$: $$\xymatrix{ \Sh({\bf{C}}) \ar[rr] \ar[rr]|{\rotatebox{270}{$\curvearrowleft$}}^(.45){\alpha}_(.7){\pi_{yA}^*} && **[r] \Sh({\bf{C}}/A)\simeq\Sh({\bf{C}})/yA \\\\ {\bf{C}} \ar[uu]^y \ar[rr] \ar[rr]|{\rotatebox{270}{$\curvearrowleft$}}^(.43){\alpha_0}_(.7){A^\times} && **[r] {\bf{C}}/A \ar[uu]_{y_A}\\ }$$ Thus restriction along $y$ defines a map $\Aut(\pi_{yA}^*)\to\Aut(A^\times)$; since the group action in either case is defined by composition, this is obviously a group homomorphism. Now we must show that the map $\alpha\mapsto\alpha_0$ is invertible. To see this, fix a sheaf $E\in\Sh(\bf{C})$ and represent it as a colimit $E\cong\colim_j yB_j$. If $\beta$ is a natural automorphism of $A^\times$ this gives us a family of automorphisms $\beta_j:B_j\times A\cong B_j\times A$.
By naturality these isos commute with the colimit presentation $E$, inducing an automorphism $\overline{\beta_E}:E\times yA\cong E\times yA$: $$\xymatrix{ y(B_j\times A) \ar[r] \ar[d]_{y\beta_j} & y(B_{j'}\times A) \ar[d]_{y\beta_{j'}} \ar[r] & E\times yA \ar@{-->}[d]^{\overline{\beta_E}}\\ y(B_j\times A) \ar[r] & y(B_{j'}\times A) \ar[r] & E\times yA.\\ }$$ Using the fact that $\overline{\beta_E}$ is uniquely determined from $\beta$, it is easy to show that these two maps are mutually inverse. \end{proof} \begin{cor}\label{isot_desc} If $\EE$ is a pretopos, the isotropy group of $\Sh(\EE)$ can be defined directly from $\EE$: $$Z(A)\cong \Aut(A^\times)=\left\{\alpha\ \bigg|\ \xymatrix{\EE \rtwocell^{A^\times}_{A^\times}{\omit \rm{\rotatebox[origin=c]{270}{$\cong$}}\ \alpha} & **[r] \EE\!/A}\right\}$$ \end{cor} \begin{proof} This follows immediately from the lemma, given that the coherent topology is subcanonical and pretoposes are closed under products. As with any sheaf, we may represent $Z$ as a fibration over $\EE$ and its description in this context is particularly nice. On one hand, we have the codomain fibration $\EE^{\2}$, whose fiber over $A$ is exactly the slice category $\EE/A$. On the other, the constant fibration $\Delta\EE\cong \EE\times\EE$ has $\EE$ for each fiber. There is also a canonical fibered functor $T:\Delta\EE\to\EE^{\2}$, the transpose of the identity functor $\EE\to\Gamma\EE^{\2}\cong\EE$. The component of $T$ at $A$ is exactly $A^\times$, so the corollary says precisely that (the Grothendieck construction applied to) $Z$ is the group of (fibered) natural automorphisms of $T$: $$Z\cong\Aut_{\EE}(T:\Delta \EE\longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm} \EE^{\2}).$$ \end{proof} \begin{defn} Fix an $\EE$-model $M$. 
We say that an automorphism ${\alpha:M\cong M}$ is \emph{(parameter-)definable} if for every (basic) sort $B$ there is an object $A_B$, an element $a\in A_B^M$ and a formula $\sigma(y,y',x)$ (where $x:A_B$ and $y,y':B$) such that $$\alpha(b)=b'\Iff M\models\sigma(b,b',a).$$ \end{defn} \begin{defn}\label{Mdef} We say that a family of formulas $\{\sigma_B(y,y',x)\}$ (where $x:A$ is fixed and $y,y':B$ range over all (basic) sorts $B$) is an \emph{$A$-definable automorphism} if for every model $M$ and every element $a\in A^M$ the parameterized formulas $\sigma_B(y,y',a)$ yield a parameter-definable automorphism. Given a model $M$, we say that a family of parameterized formulas $\{\sigma_B(y,y',a_B)\}$ in $\Diag(M)$ is an \emph{$M$-definable automorphism} if for every homomorphism $h:M\to N$ the parameterized formulas $\sigma_B(y,y',h(a_B))$ yield a parameter-definable automorphism of $N$. \end{defn} \begin{lemma} The family $\{\sigma_B(y,y',x)\}$ is an $A$-definable automorphism just in case $\EE$ proves the following sequents: $$\begin{array}{ccl} \underset{x,y}{\vdash} \exists y'.\sigma_B(y,y',x) & \sigma_B(y,y',x)\wedge\sigma_B(y,y'',x)\underset{x,y,y',y''}{\vdash} y'=y''\\[3ex] \underset{x,y'}{\vdash}\exists y.\sigma_B(y,y',x) & \sigma_B(y,y'',x)\wedge\sigma_B(y',y'',x)\underset{x,y,y',y''}{\vdash} y=y'\\[3ex] \sigma_B(y,y',x)\wedge R(y)\underset{x,y,y'}{\vdash} R(y') & \sigma_B(y,y',x) \underset{x,y,y'}{\vdash}\sigma_C(f(y),f(y'),x)\\ \end{array}$$ \end{lemma} \begin{proof} This is immediate from completeness. The sequents on the first line specify functionality $y\mapsto y'$; the second line specifies invertibility. The third line (where we have simplified by assuming that $R$ and $f$ are unary) says that the family of maps defined by $\sigma_B$ respects the basic functions and relations, ensuring that the bijection is a model homomorphism and hence an automorphism. \end{proof} A good example to keep in mind is conjugation of groups.
The classifying pretopos $\EE_{\Grp}$ contains an object $U$ which represents the underlying set; for any group $G:\EE_{\Grp}\to\Sets$ we have $U^G=|G|$. Because the theory of groups is single-sorted, $U$ is the only basic sort. Conjugation is a $U$-definable automorphism with a defining formula $$\sigma_U(y,y',x)\Iff y'=xyx^{-1}.$$ The next lemma shows that this exactly corresponds to a natural automorphism $\EE_{\Grp}\longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm}\EE_{\Grp}/U$. \begin{lemma}\label{def_aut} For each $A\in\EE$, the isotropy group $Z(A)$ is isomorphic to the group of $A$-definable automorphisms in $\EE$. \end{lemma} \begin{proof} Recall that $\EE\!/A$, regarded as a logical theory, represents the original theory $\EE$ extended by a single constant $\bf{c}_a:A$. Given a model $\<M,a\>$ of the extended theory and a natural automorphism $\alpha:\EE\longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm}\EE/\!A$, composition induces a model automorphism $M\underset{a}{\cdot}\alpha:M\cong M$ $$\xymatrix{ \EE\!/A \ar[rrd]^{\<M,a\>} \\ \EE \ar[u] \ar[u]|{\curvearrowleft}^(.3){\alpha} \ar[rr] \ar@{}[rr]|(.45){\rotatebox{270}{\footnotesize$\curvearrowleft$}}_(.6){M\underset{a}{\cdot}\alpha} && \Sets\\ }$$ The fact that each component $(M\underset{a}{\cdot}\alpha)_B:B^M\cong B^M$ is the image of a map in $\EE\!/A$ means that there is a formula $\sigma_B(y,y',x)$ (where $x:A$ and $y,y':B$) such that $$(M\underset{a}{\cdot}\alpha)_B(b)=b' \Iff M\models\sigma_B(b,b',a).$$ This is precisely an $A$-definable automorphism depending on a parameter $x:A$. On the other hand, suppose that we have a family of formulas $\{\sigma_B(y,y',x)\}$ ranging over the (basic) sorts of $\EE$ and which define an $A$-definable automorphism (parameterized by the variable $x:A$).
By naturality, we are forced to set $$\sigma_{B\times C}(\<y,z\>,\<y',z'\>,x)=\sigma_B(y,y',x)\wedge\sigma_{C}(z,z',x),$$ $$\xymatrix{ B\times A \ar[d]_{\sigma_B} & B\times C\times A \ar[r] \ar[l] \ar[d]|{\exists ! \sigma_{B\times C}=\sigma_B\wedge\sigma_C} & C\times A \ar[d]^{\sigma_C}\\ B & B\times C \ar[l] \ar[r] & C\\ }$$ In much the same way, there is a unique extension of $\sigma$ to any pullback, coproduct or quotient. Because these maps are assumed to define an $\EE$-model automorphism they respect basic relations and this allows us to define a restriction $\sigma_R$ for any basic relation $R \leq B$ (which, for simplicity, we take to be unary) $$\sigma_R(y,y',x)=\sigma_B(y,y',x)\wedge R(y)\wedge R(y').$$ Similarly, respect for basic functions $f(y)=z$ allows us to preserve naturality: $$\xymatrix{ R\times A \ar@{>->}[r] \ar@{-->}[d]_{\sigma_R} & B\times A \ar[d]^{\sigma_{B}} & & B\times A \ar[r]^-{f\times A} \ar[d]_{\sigma_{B}} & C\times A \ar[d]^{\sigma_C}\\ R \ar@{>->}[r] & B & & B \ar[r]_f & C.\\ }$$ This shows that any $A$-definable automorphism can be extended to a natural automorphism $\EE\longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm}\EE\!/A$. Using the uniqueness of the extension of $\sigma$ from basic sorts to products, coproducts, etc., one easily shows that these constructions are mutually inverse. \end{proof} In order to connect the isotropy group with our structure sheaves, we use the description of $Z$ in terms of fibrations given in corollary \ref{isot_desc}. We can strictify this map and transport it across the equivalence $\Sh(\EE)\simeq\EqSh(\MM_{\EE})$, just as we did for the structure sheaf in Chapter 3, Section 1. However, the constant fibration $\Delta\EE=\EE\times\EE$ is already a sheaf (over $\EE$), so it is unaffected by strictification. Transporting it across the equivalence sends it to another constant sheaf over $\MM$ (also denoted $\Delta\EE$).
Alternatively, we can think of $\tau$ as the transpose of the identity functor $\EE\to\EE\simeq\Gamma\OO_{\EE}$ under the adjunction $\Delta\dashv\Gamma$. $$\xymatrix{ && \ol{\EE^{\2}} \ar[d]\\ \Delta\EE \ar[rr]^{T} \ar[rru]^{\ol{T}} \ar[rd]_{p_2} && \EE^{\2} \ar[ld]^{\cod} &\ar@{|->}[r] \ar@{}@<3ex>[r]^{\EqSh(\MM)\to\Sh(\EE)}&& \Delta\EE \ar[rr]^\tau \ar[rd] && \OO_{\EE} \ar[ld]\\ &\EE&&&&&\MM_{\EE}&\\ }$$ \begin{prop} The equivalence $\Sh(\EE)\simeq\EqSh(\MM_{\EE})$ identifies the isotropy group $Z$ with the group of automorphisms of $\tau$. \end{prop} \begin{proof} We already know that $Z$ is equivalent to the (internal) group of fibered automorphisms of $T$. Strictification is a pointwise equivalence of categories, so any fibered automorphism of $T$ lifts uniquely to $\ol{T}$. Similarly, the equivalence $\Sh(\EE)\simeq\EqSh(\MM_{\EE})$ preserves natural automorphisms of (strict) internal categories, leaving us with the following isomorphisms (modulo the Grothendieck construction, applied to $Z$) $$Z\cong\Aut(T)\cong\Aut\left(\ol{T}\right)\cong\Aut(\tau).$$ \end{proof} This shows that the isotropy group can be defined directly from our structure sheaf. This also provides an immediate calculation of the stalks of $Z$. \begin{cor} Given an $\EE$-model $M$, the stalk $Z_M$ is the group of $M$-definable automorphisms from definition \ref{Mdef}. \end{cor} \begin{proof} The argument follows the same lines as lemma \ref{def_aut}. One can define the natural automorphism group of an internal functor using only finite limits, so it is preserved when passing to the stalks. 
This means that we can use the stalks of $\tau:\Delta\EE\to\OO_{\EE}$ to compute $Z_M$: $$\begin{array}{rcl} Z_M&\cong &\Aut\big(\tau:\Delta\EE\longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm}\OO_{\EE}\big)_M\\ &\cong& \Aut\big(\tau_M:(\Delta\EE)_M\to(\OO_{\EE})_M\big)\\ \end{array}$$ The stalk of $\OO_{\EE}$ at $M$ is the complete diagram of $M$ (lemma \ref{struct_stalks}) and the stalk of $\tau$ is the canonical inclusion $\top:\EE\to\Diag(M)$. If $\alpha$ is a natural automorphism of this inclusion, then its components are isomorphisms in $\Diag(M)$, and these are all parameter-definable maps. In particular, for every $B\in\EE$ we have a formula $\sigma_B(y,y',a_B)$ for some sort $A_B\in\EE$ and some element $a_B\in A_B^M$. Remember that $\Diag(M)$ classifies homomorphisms $h:M\to N$. If a model $H$ of $\Diag(M)$ classifies a homomorphism $h:M\to N$, then $N$ is the reduct $H\circ\top$: $$\xymatrix{ \Diag(M) \ar[rrd]^{h:M\to N} \\ \EE \ar[u]^\top \ar[rr]_{N} &&\SS\\ }$$ This tells us that for any homomorphism $h$, the formulas $H(\alpha_B)=\sigma_B(y,y',h(a_B))$ induce an automorphism of $N$. Thus the family $\alpha_B=\sigma_B(y,y',a_B)$ is precisely an $M$-definable automorphism. On the other hand, if $\{\sigma_B(y,y',a_B)\}$ is an $M$-definable automorphism, then completeness (relative to homomorphisms $M\to N$) ensures that $\Diag(M)$ proves that these parameterized formulas define isomorphisms $\top(B)\cong\top(B)$. Similarly, these automorphisms are natural in $B$ because the naturality conditions are satisfied in every $\Diag(M)$ model: a model automorphism $N\cong N$ is exactly a natural transformation $\xymatrix{\EE \rtwocell^N_N{\omit\rm{\rotatebox[origin=c]{270}{$\cong$}}} & \SS}$. Thus every stalk automorphism $\alpha\in Z_M$ defines an $M$-definable automorphism and vice versa, and these constructions are mutually inverse.
In either case, the group multiplication is interpreted by composition so these are obviously group homomorphisms, proving that $Z_M\cong\Aut(\tau_M)$ is the group of $M$-definable automorphisms. \end{proof} \begin{cor} To every coherent or classical first-order logical theory $\bTT$ we can associate an equivariant sheaf of groups $Z_{\bTT}$ over the spectral groupoid $\MM_{\bTT}$ such that, for every labelled model $\mu\in\MM_{\bTT}$, the stalk of $Z_{\bTT}$ at $\mu$ is a normal subgroup of $\Aut(M_\mu)$. Moreover, this subgroup does not depend on the labelling of $\mu$. \end{cor} \begin{proof} This follows almost immediately from the previous corollary by taking $Z_{\bTT}$ to be the isotropy group of the classifying topos of $\bTT$. Because $Z_{\bTT}$ is equivariant, it cannot depend on the labelling of $\mu$. The only thing we must check is that the group of $M_\mu$-definable automorphisms is normal. If $\alpha:M_\mu\cong M_\mu$ is definable then, for every basic sort $A\in\bTT$ there is a formula $\sigma(x,x',y)$ (in context $A\times A\times B$) and an element $b\in B^\mu$ such that $$\alpha_A(a)=a' \Iff M_\mu\models \sigma(a,a',b).$$ If $\beta$ is any other automorphism of $M_\mu$, let $\gamma=\beta^{-1}\circ\alpha\circ\beta$. Then the component of $\gamma$ at $A$ is given by $$\gamma_A(a)=a' \Iff \alpha(\beta(a))=\beta(a') \Iff M_\mu\models \sigma(\beta(a),\beta(a'),b).$$ Since the automorphism $\beta$ preserves the formula $\sigma$, we have $M_\mu\models\sigma(\beta(a),\beta(a'),b)$ if and only if $M_\mu\models\sigma(a,a',\beta^{-1}(b))$, and therefore $\gamma$ is definable by the formula $\sigma(x,x',\beta^{-1}(b))$. Since any conjugate of an $M_\mu$-definable automorphism is again $M_\mu$-definable, these form a normal subgroup. \end{proof} \begin{comment} \begin{cor} Suppose that $\alpha:M\cong M$ is a parameter-definable automorphism, defined by some formulas $\sigma_B(y,y',a)$ with some $a\in A^M$.
Then there is a subquotient $A \geq S \stackrel{q}{\twoheadrightarrow} Q$ such that $M\models S(a)$ and the family $\sigma_B$ is a $Q$-definable automorphism over $q(a)$. \end{lemma} \begin{proof} Notice that $Z_M$ is defined as a colimit $$Z_M\cong\underset{A\in \EE, a\in A^M}{\colim} Z(A).$$ To each pair $\<\alpha,a\>$ with $a\in A^M$ and $\alpha:\EE\longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm}\EE\!/A$ we associate the parameter-definable automorphism $$M\underset{a}{\cdot} \alpha: \EE \longrightarrow\hspace{-.45cm}\raisebox{.225cm}{\rotatebox{270}{\scalebox{.7}{$\curvearrowleft$}}}\hspace{.35cm} \EE\!/A \stackrel{\<M,a\>}{\longrightarrow} \Sets.$$ The map $\alpha\mapsto M\underset{a}{\cdot}\alpha$ is obviously a group homomorphism and, by \end{proof} \end{comment} \section{Conceptual Completeness} Makkai \& Reyes' theorem of conceptual completeness \cite{MakkaiReyes} says roughly that a pretopos $\EE$ is determined up to equivalence by its category of models $M:\EE\to\Sets$. More specifically, recall (cf. prop \ref{embed_prop}) that the \emph{reduct} of an $\FF$-model $N:\FF\to\Sets$ along an interpretation $I:\EE\to\FF$ is the composite $I^*N:\EE\to\FF\to\Sets$. This induces a functor $I^*:\Mod(\FF)\to\Mod(\EE)$ and conceptual completeness says the following: \begin{thm}[Conceptual Completeness, Makkai \& Reyes, \cite{MakkaiReyes}] A pretopos functor $I:\EE\to\FF$ is an equivalence of categories if and only if the reduct functor $I^*:\Mod(\FF)\to\Mod(\EE)$ is an equivalence of categories. \end{thm} The left-to-right direction of conceptual completeness is immediate; if $\EE'\simeq\EE$ is an equivalence, then one easily shows that precomposition $\EE'\simeq\EE\stackrel{M}{\longrightarrow}\Sets$ induces an equivalence $\Mod(\EE)\simeq\Mod(\EE')$. The real argument is in the proof of the converse. 
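A simple example of a reduct functor may help fix the statement (this example is ours, offered for illustration; the notation $\EE_{\rm Mon}$ for the classifying pretopos of monoids is an assumption):

```latex
% The interpretation of the theory of monoids in the theory of groups,
%   I : E_Mon --> E_Grp,
% sends the monoid operation to the group operation. Its reduct functor
\[
  I^*:\Mod(\EE_{\Grp})\to\Mod(\EE_{\rm Mon}),
  \qquad
  I^*(G)=\text{the underlying monoid of } G,
\]
% is faithful but not essentially surjective (e.g. the additive monoid of
% natural numbers is not a group), so it is not an equivalence. Conceptual
% completeness then confirms that I is not an equivalence of pretoposes.
```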
Note that the analogous theorem for schemes is immediate: if the induced map $\<I_\flat,I^\sharp\>:\Spec(\FF)\stackrel{\sim}{\longrightarrow}\Spec(\EE)$ is an equivalence of affine schemes, then the global sections functor $\Gamma_{\eq} I^\sharp:\EE\to\FF$ is necessarily an equivalence of pretoposes. However, we can mine the Makkai \& Reyes proof of conceptual completeness in order to establish a sort of Galois theory at the level of spectral groupoids. Their proof has the following structure (where e.s.o.\ means essentially surjective on objects): \begin{tabular}{rlcl} (i) & $I^*$ e.s.o. &$\Rightarrow$ &$I$ faithful.\\ (ii) & $I^*$ e.s.o. + full & $\Rightarrow$ & $I$ full on subobjects.\\ (iii) & $I^*$ an equivalence & $\Rightarrow$ & $I$ subcovering.\\ (iv) & $I$ full on subs + faithful & $\Rightarrow$ & $I$ full.\\ (v) & $I$ subcovering + full on subs + faithful & $\Rightarrow$ & $I$ e.s.o.\\ \end{tabular} \vspace{0cm} There is clearly an element of duality in this proof, matching ``surjectivity'' conditions with ``injectivity'' conditions, but it is obscured by additional conditions in (ii) \& (iii). In this section we will refactor the existing proof in order to give biconditional statements for (i)--(iii). We also provide another set of equivalent conditions relating syntactic properties of $I$ to topological/groupoidal properties of the spectral dual $I_\flat$. Our main theorem is below (with definitions to follow). We will prove each of the statements (a), (b) and (c) in turn.
\begin{thm}\label{scheme_cc} Given a pretopos functor $I:\EE\to\FF$ with a reduct functor $I^*:\Mod(\FF)\to\Mod(\EE)$ and spectral dual $I_\flat:\Spec(\FF)\to\Spec(\EE)$ the items in each row of the following table are equivalent: \noindent\begin{tabular}{|c|c|c|c|} \hline & \textbf{Syntactic} & \textbf{Semantic} & \textbf{Spectral} \\\hline (a) & $I$ is conservative & $I^*$ is supercovering & $I_\flat$ is superdense \\\hline (b) & $I$ is full on subs & $I^*$ stabilizes subobjects & $I_\flat$ separates subgroupoids \\\hline (c) & $I$ is subcovering & $I^*$ is faithful & $I_\flat$ is non-folding \\\hline \end{tabular} \end{thm} \vspace{0cm} A central ingredient in our proof will be the so-called local scheme of $\Spec(\EE)$ at a point $\mu$. Recall that, for an element of an affine scheme $\mu\in\Spec(\EE)$, the stalk of the structure sheaf $\OO_{\EE}$ at $\mu$ is equivalent to the syntactic pretopos of the Henkin diagram of the model $M_\mu$ (denoted $\Diag(\mu)$, cf. lemma \ref{struct_stalks}). Accordingly, for any scheme $\XX$ and any point $x\in\XX$, we let $\Diag(x)$ denote the stalk of $\OO_{\XX}$ at $x$. \begin{defn} Given a logical scheme $\XX$ and a point $x\in\XX$, the \emph{local scheme} of $\XX$ at $x$ (which we denote $\XX_x$) is the affine scheme which is dual to the pretopos $\Diag(x)$. \end{defn} The local scheme has a canonical projection $\pi_x:\XX_x\to\XX$. This can be defined by choosing an affine neighborhood $\UU\simeq\Spec(\EE)$ containing $x$, so that $x=\mu$ is a labelled $\EE$-model. Then the projection $\pi_x$ is dual to the canonical interpretation $\EE\to\Diag(\mu)$. Here a model of $\Diag(\mu)$ corresponds to an $\EE$-model homomorphism $h:\mu\to\mu'$, and $\pi_x$ sends $h\mapsto\mu'$. One can easily check that this definition does not depend on the choice of $\UU$.
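To illustrate the local scheme in a familiar case (our own example, obtained by specializing the definitions just given): take $\EE$ to be the classifying pretopos of groups and let $x$ correspond to a labelled group $G$.

```latex
% The stalk Diag(x) is the Henkin diagram of G, whose models are the
% group homomorphisms out of G. The local scheme and its projection are
\[
  \XX_x \;\simeq\; \Spec\bigl(\Diag(G)\bigr),
  \qquad
  \pi_x\bigl(h:G\to H\bigr)\;=\;H,
\]
% so the fiber of pi_x over a group H consists of the homomorphisms G -> H.
```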
\begin{defn} Given a scheme morphism $J:\YY\to\XX$, the \emph{blowup} of $\YY$ at $x$ is the pullback $$\xymatrix{ \rm{Bl}_x(\YY) \pbcorner \ar[rr] \ar[d]_{J_x} && \YY \ar[d]^{J}\\ \XX_x \ar[rr]_{\pi_x} && \XX.\\ }$$ We say that $\YY$ is \emph{consistent near $x$} if the blowup of $\YY$ at $x$ is not the empty scheme. \end{defn} \begin{lemma} Fix an interpretation $I:\EE\to\FF$. \begin{itemize} \item For any object $A\in\EE$ there is a canonical functor $I_A:\EE\!/A\to\FF\!/IA$ and this is the pushout of $I$ along $A^\times:\EE\to\EE\!/A$. \item Any localization $t:\EE\to\EE_L$ defines a new localization $It:\FF\to\FF_{IL}$. There is a canonical functor $I_L:\EE_L\to\FF_{IL}$ and this is the pushout of $I$ along $t$. \end{itemize} $$\xymatrix{ \EE \ar[r]^-{A^\times} \ar[d]_I & \EE\!/A \ar[d]^{I_A} && \EE \ar[r]^-t \ar[d]_{I} & \EE_L \ar[d]^{I_L}\\ \FF \ar[r]_-{IA^\times} & \FF\!/IA \pushoutcorner && \FF \ar[r]_-{It} & \pushoutcorner \FF_{IL}.\\ }$$ \end{lemma} \begin{proof} First consider the pushout $\PP=\FF\overplus{\EE}\EE\!/A$. By the universal property of the pushout, a model of $\PP$ is a pair $\<N,\<M,a_0\>\>$ where (i) $N$ is an $\FF$-model and (ii) $\<M,a_0\>$ is an $\EE$-model together with an element $a_0\in A^M$ such that (iii) $M=I^*N$. Since $IA^N=A^{I^*N}=A^M$, this is exactly the same as a pair $\<N,a_1\>$ where $N$ is an $\FF$-model and $a_1\in IA^N$. Such pairs $\<N,a_1\>$ are classified by the slice category $\FF\!/IA$ which therefore has the appropriate universal property for the pushout $\FF\!/IA\simeq \FF\overplus{\EE}\EE\!/A$. Now suppose that $\EE_L\simeq\colim_{l\in L} \EE\!/A_l$ is a localization of $\EE$. Using $I$, we define a new localization $\FF_{IL}\simeq\colim_{l\in L} \FF\!/IA_l$. The universal property of the colimit $\EE_L$ induces a map $I_L:\EE_L\to\FF_{IL}$ as in the diagram below.
In order to see that the outer square is a pushout we apply the previous paragraph together with commutation of colimits: \begin{tabular}{cc} \mbox{$\begin{array}{rcl} \FF_{IL} & \simeq & \colim_L \FF\!/IA_l\\ &\simeq& \colim_L \big(\FF\overplus{\EE} \EE\!/A_l\big)\\ &\simeq& \FF \overplus{\EE} \big(\colim_L \EE\!/A_l\big)\\ &\simeq& \FF\overplus{\EE} \EE_L.\\ \end{array}$} & \raisebox{1.5cm}{$\xymatrix@=4ex{ \EE \ar[rrrr]^{t} \ar[rrd]_{A_l^\times} \ar[ddd]_I &&&& \EE_L \ar@{-->}[ddd]^{I_L}\\ && \EE\!/A_l \ar[urr]_{t_l} \ar[d]^{I_{A_l}} && \\ && \FF\!/IA_l \ar[rrd]^{It_l} &&\\ \FF \ar[rrrr]_{It} \ar[urr]^{IA_l^\times} &&&& \FF_{IL}\\ }$} \end{tabular} \end{proof} \begin{lemma}\label{cons_loc} If $I_L:\EE_L\to\FF_{IL}$ is the pushout of a conservative functor $I:\EE\to\FF$ along a localization $t:\EE\to\EE_L$, then $I_L$ is again conservative. \end{lemma} \begin{proof} First notice that for any object $A\in\EE$, the pushout $I_A:\EE\!/A\to\FF\!/IA$ is again conservative. By lemma \ref{eq_cons} it is enough to check that $I_A$ is injective on subobjects. Since a subobject in the slice category $\EE\!/A$ is just an object $E\to A$ together with a subobject $R\leq E$ in $\EE$, we obviously have $$\begin{array}{ccc} \rm{In }\FF && \rm{In }\FF\!/IA\\\hline IR \lneq IE &\Iff & \raisebox{1.5ex}{$\xymatrix@=1ex{ **[r] IR \ar@{}[rr]|{\textstyle\lneq} \ar[rd] && **[l] IE \ar[ld] \\&IA}$}\\ \end{array}$$ Thus $I$ is conservative if and only if $I_A$ is. Now suppose that we have a map $s:R\to E$ in $\EE_L$ which has a representative $s_l:R_l\to E_l$ in one of the slice categories $\EE\!/A_l$. As discussed in lemma \ref{localization_limits}, $s$ is monic if and only if $s_l$ is ``eventually monic'': there is a map $f:A_{l'}\to A_l$ in the localization $L$ such that $f^*s_l:f^*R_l\rightarrowtail f^*E_l$ is monic in $\EE\!/A_{l'}$. In just the same sense, $s$ is an isomorphism just in case $s_l$ is eventually an iso in some further slice category $\EE\!/A_{l''}$.
This means that $R\lneq E$ is a proper subobject in $\EE_L$ just in case, for every map $f:A_{l'}\to A_l$ in $L$, the pullback $f^*R_l\lneq f^*E_l$ is again proper. By the previous observation each of the maps $I_{A_{l'}}$ is conservative, so the images $If^*(R_l)\lneq If^*(E_l)$ in $\FF\!/IA_{l'}$ are also proper. These are representatives of the images of $R$ and $E$ under $I_L$ and, since they are not eventually iso, $I_L(R)\lneq I_L(E)$ is a proper subobject in $\FF_{IL}$. Thus $I_L$ is injective on subobjects and therefore conservative. \end{proof} \begin{cor} Given a scheme morphism $J:\YY\to\XX$ and a point $x\in\XX$, $\YY$ is consistent near $x$ if and only if there is a point $y\in \YY$ such that $x\in\overline{Jy}$. \end{cor} \begin{proof} Without loss of generality we may replace $\XX$ by an affine neighborhood $x\in\UU\simeq\Spec(\EE)$; in particular, we may think of $x$ as a labelled $\EE$-model $M_x$. The blowup of $\YY$ at $x$ is a pullback in $\LSch$ and, by theorem \ref{scheme_limits}, this is computed (locally) as a pushout in $\Ptop$: $$\xymatrix{ \Bl_x(\YY) \pbcorner \ar[rr] \ar[d] && **[l] \bigoplus_i \Spec(\FF_i) \simeq\YY \ar[d]^J & \raisebox{-3ex}{$\FF_i\overplus{\EE}\Diag(x)$} \pushoutcorner & \FF_i \ar[l] \\ \XX_x \ar[rr]_{\pi_x} && \XX & \Diag(x) \ar[u] & \EE \ar[l]_-{t} \ar[u]_{\Gamma J_i} \\ }$$ In particular, $\Bl_x(\YY)$ is non-empty just in case one of these pushouts is a consistent theory. A model for one of these pushouts consists of a model $N\models \FF_i$ together with a homomorphism $h:M_x\to (\Gamma J_i)^*N$. Let $\nu$ denote a labelling on $N$ such that, whenever $k$ is defined at $x$, $h(x(k))=\nu(k)$. This labelled model $\nu$ corresponds to a point $y\in\YY$, and $J(y)=(\Gamma J_i)^*\nu$. According to proposition \ref{spectral_closure}, $x$ belongs to the closure of $(\Gamma J_i)^*\nu$, so $x\in\overline{Jy}$ as asserted.
\end{proof} \subsection{Proof of (a)} Na\"ively we expect that the semantic property which is dual to conservativity should bear a family resemblance to essential surjectivity. However, the following example shows that conservative interpretations do not, in general, induce essentially surjective reducts. Consider the (single-sorted) theories of countably-many and continuum-many distinct constants, respectively: $$\EE=\Ptop\{a_i=a_j\vdash\bot\ |\ i\not=j\in\omega\}.$$ $$\FF=\Ptop\{a_i=a_j\vdash\bot\ |\ i\not=j\in2^\omega\}.$$ Any countable subset $S\subset 2^\omega$ induces an interpretation $I_S:\EE\to\FF$. $I_S^*$ sends each $\FF$-model $N$ to its reduct $N\reduct S$; this is the forgetful functor which omits constants outside of $S$. It is easy to check that $I_S$ is conservative, but it cannot possibly be essentially surjective: there are no countable models of $\FF$. However, the following weaker property holds: \begin{defn} Given an interpretation $I:\EE\to\FF$ we call the reduct functor $I^*:\Mod(\FF)\to\Mod(\EE)$ \emph{supercovering} if for any $\EE$-model $M$, any object $A\in\EE$, any element $a\in A^M$ and any $R\rightarrowtail A$ with $a\not\in R^M$, there exists an $\FF$-model $N$ and a homomorphism $h:M\to I^*N$ such that $h(a)\not\in R^{I^*N}$. \end{defn} As noted above (cf. proposition \ref{spectral_closure}), one labelled model belongs to the closure of another, $\mu\in\overline{\mu'}\subseteq\Spec(\EE)$, just in case every label defined at $\mu$ is also defined at $\mu'$ and the inclusion $K_\mu\subseteq K_{\mu'}$ induces a model homomorphism $M_\mu\to M_{\mu'}$. Using this we can translate semantic conditions about homomorphisms into spectral conditions on the topological groupoid $\Spec(\EE)$.
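For what it is worth, the reducts $I_S^*$ of the example above are supercovering. The following is only a sketch, writing $S=\{s_j\mid j\in\omega\}$ for an enumeration of $S$ (our notation) and taking for granted that adjoining fresh elements does not affect coherent facts about the old ones:

```latex
% Sketch: I_S^* is supercovering in the constants example. Given an
% EE-model M, an element a in A^M and a subobject R <= A with a not in
% R^M, interpret the constants outside of S by fresh points:
\[
  |N| \;=\; |M|\,\sqcup\,\{\ast_i \mid i\in 2^\omega - S\},
  \qquad
  a_{s_j}^N = a_j^M\ (j\in\omega),
  \qquad
  a_i^N = \ast_i\ (i\notin S).
\]
% The inclusion h : M -> I_S^* N introduces no new equations among the
% old elements, so h(a) = a still lies outside R^{I_S^* N}.
```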
\begin{defn} We say that a map of logical schemes $J:\YY\to\XX$ is \emph{superdense} if for any open subgroupoid $\UU\subseteq\XX$ and any full and open subgroupoid $\VV\subseteq\UU$, whenever $x\in\UU-\VV$ there exists $y\in J^{-1}(\UU)-J^{-1}(\VV)$ such that $x\in\overline{Jy}$. \end{defn} \begin{prop}\label{a_prop} The following are equivalent: \begin{enumerate} \item[(i)] $I:\EE\to\FF$ is conservative. \item[(ii)] $I^*$ is supercovering. \item[(iii)] $I_\flat$ is superdense. \end{enumerate} \end{prop} \begin{proof} We proceed by showing that (i)$\Rightarrow$(ii)$\Rightarrow$(iii)$\Rightarrow$(i). \noindent\textbf{(i)$\Rightarrow$(ii).} Consider the following pushout: $$\xymatrix{ \EE \ar[d]_I \ar[r] & \EE\!/A \ar[d]_{I_A} \ar[r]^{\tilde{a}} & \Diag(M) \ar[d]^{I_M}\\ \FF \ar[r] & \FF\!/IA \ar[r]_{\widetilde{\cc_a}} & \pushoutcorner \FF_M\\ }$$ We can describe $\FF_M$ syntactically as follows: we extend $\FF$ by a constant $\cc_a:IA$ for each element $a\in A^M$ and an axiom $\vdash I\varphi(\cc_a)$ whenever $M\models \varphi(a)$. This is the theory of an $\FF$-model $N$ together with an $\EE$-model homomorphism $h:M\to I^*N$. Now fix a subobject $R\leq A$ and an element $a\in A^M$ such that $a\not\in R^M$. In order to show that $I^*$ is supercovering we must find an $\FF$-model $N$ and a homomorphism $h:M\to I^*N$ such that $h(a)\not\in R^{I^*N}$. Thinking of the pair $H=\<N,h\>$ as a model for $\FF_M$, $h(a)=\cc_a^H$, so it will be enough to show that $\FF_M$ may be consistently extended by the axiom $IR(\cc_a)\vdash\bot$ or, equivalently, that $IR(\cc_a)$ is not derivable in $\FF_M$. By assumption $a$ does not belong to $R^M$, so we know that $\Diag(M)\not\vdash R(\cc_a)$. According to lemma \ref{cons_loc}, the pushout of a conservative functor along a localization is again conservative. Since $IR(\cc_a)=I_M(R(\cc_a))$, this tells us that $\FF_M\not\vdash IR(\cc_a)$.
\noindent\textbf{(ii)$\Rightarrow$(iii).} Now suppose that $I^*$ is supercovering, that $\UU$ is an open subgroupoid of $\Spec(\EE)$ with a full and open subgroupoid $\VV\subseteq\UU$, and that $\mu\in\UU-\VV$. We must find $\nu\in I_\flat^{-1}(\UU)-I_\flat^{-1}(\VV)$ such that $\mu\in\overline{I_\flat\nu}$. According to lemma \ref{slice_subsch}, we may replace $\UU$ by a smaller affine subgroupoid $\UU_k\simeq\Spec(\EE\!/A)$ for some object $A\in\EE$. By lemma \ref{slice_subsch}, a full and open subgroupoid $\VV\subseteq\UU_k$ must have the form $\bigcup_i \VV_{R_i(k)}$ for some set of subobjects $R_i\leq A$. To say that $\mu\in\UU_k-\VV$ means that the underlying model $M_\mu$ contains an element $a=\mu(k)\in A^\mu$ such that $a\not\in R_i^\mu$ for every $i$. Let $R^*=R_{i_1}\vee\ldots\vee R_{i_n}$ denote an arbitrary finite join of these subobjects. Since $I^*$ is supercovering, we may find an $\FF$-model $N$ with a homomorphism $h:M_\mu\to I^*N$ such that $N\not\models IR^*(h(a))$. This tells us that the following pushout theory $\PP^*$ is consistent; by compactness, the aggregate theory $\PP_{\VV}$ is also consistent: $$\xymatrix{ \EE\!/A \ar[r] \ar[d] & \FF\!/IA \ar[r] \ar[d] & \FF\!/\neg IR^* \ar[r] \ar[d]& \ldots \ar[r] & \FF\!/\{\neg IR_i\} \ar[d]\\ \Diag(\mu) \ar[r] & \FF_\mu \ar[r] & \PP^* \ar[r] & \ldots \ar[r] & \PP_{\VV}\\ }$$ Therefore, we have an $\FF$-model $N$ and a homomorphism $h:M_\mu\to I^*N$ such that $N\not\models IR_i(h(a))$ for every index $i$. By the downward L\"owenheim-Skolem theorem, we may assume that $N$ is $\kappa$-small; let $\nu$ denote a labelling of $N$ such that, for each label $l$ defined at $\mu$, $h(\mu(l))=\nu(l)$. In particular, $\nu(k)=h(a)\not\in R_i^\nu$ for any index $i$, so $\nu\not\in\VV$. This gives us a labelled model $\nu\in I_\flat^{-1}(\UU)-I_\flat^{-1}(\VV)$ such that, according to proposition \ref{spectral_closure}, $\mu$ belongs to the closure $\overline{I_\flat\nu}$. Therefore, $I_\flat$ is superdense.
\noindent\textbf{(iii)$\Rightarrow$(i).} We suppose that $I_\flat$ is superdense and that $R\lneq A$ is a proper subobject in $\EE$. We must show that $IR$ is proper in $\FF$, so that $I$ is a conservative functor. Let $k$ denote a parameter of type $A$; set $\UU= \VV_k$ and $\VV=\VV_{R(k)}$ (cf. section \ref{sec_subsch}). Since $R$ is proper, we can find a labelled model $\mu\in\UU$ such that $\mu(k)\not\in R^\mu$ (and hence $\mu\not\in\VV_{R(k)}$). Because $I_\flat$ is superdense we can find a labelled $\FF$-model $\nu\in I_\flat^{-1}(\UU)$ such that $\nu\not\in I_\flat^{-1}(\VV)$. But $I_\flat^{-1}(\VV_{R(k)})=\VV_{IR(k)}$, so $\nu(k)\not\in IR^\nu$. The element $\nu(k)$ is a witness demonstrating that $IR\lneq IA$, so $I$ is conservative. \end{proof} \subsection{Proof of (b)} Suppose that $A\in\EE$ and $S\leq IA\in\FF$, so that for any $\FF$-model $N$, $$S^N\subseteq IA^N=A^{I^*N}.$$ Therefore, if we have $N_0,N_1\in\Mod(\FF)$ and an $\EE$-homomorphism $h:I^*N_0\to I^*N_1$ we can compare $S^{N_1}$ to the image $h_A(S^{N_0})$. \begin{defn} Given $I:\EE\to\FF$ we say that $I^*$ \emph{stabilizes} a subobject $S\leq IA$ if $$h_A\left(S^{N_0}\right)\subseteq S^{N_1}\subseteq A^{I^*N_1}$$ for any $h:I^*N_0\to I^*N_1$. $I^*$ \emph{stabilizes (all) subobjects} if for any $A\in\EE$ and $S\leq IA\in\FF$, $I^*$ stabilizes $S$. \end{defn} As above, we can then translate this into a condition on the spectral topology: \begin{defn}\label{def_separates} Given a map of logical schemes $J:\YY\to\XX$ we say that $J$ \emph{separates (full \& open) subgroupoids} if for any open $\UU\subseteq\XX$, any full and open $\VV\subseteq J^{-1}(\UU)$ and for any points $y_0\in\VV$ and $y_1\in J^{-1}(\UU)$ $$J(y_0)\in\overline{J(y_1)} \Rightarrow y_1\in\VV.$$ \end{defn} \begin{prop}\label{b_prop} The following are equivalent: \begin{enumerate} \item[(i)] $I:\EE\to\FF$ is full on subobjects. \item[(ii)] $I^*$ stabilizes subobjects. \item[(iii)] $I_\flat$ separates subgroupoids.
\end{enumerate} \end{prop} \begin{proof} We proceed by showing that (i)$\Rightarrow$(iii)$\Rightarrow$(ii)$\Rightarrow$(i). \noindent\textbf{(i)$\Rightarrow$(iii).} Suppose that, as in definition \ref{def_separates} above, we have an open subgroupoid $\UU\subseteq\Spec(\EE)$, a full and open subgroupoid $\VV\subseteq I_\flat^{-1}(\UU)$ and two labelled models $\nu_0\in\VV$ and $\nu_1\in I_\flat^{-1}(\UU)$ with $I_\flat(\nu_0)\in\overline{I_\flat(\nu_1)}$. We must show that $\nu_1\in\VV$ as well. As in the previous proposition we can replace $\UU$ by a smaller affine neighborhood $\UU_k\simeq\Spec(\EE\!/A)$ for some object $A\in\EE$. The inverse image $I_\flat^{-1}(\UU_k)$ is equivalent to $\Spec(\FF\!/IA)$ and a full and open subgroupoid of this must have the form $\VV=\bigcup_i \VV_{S_i(k)}$ for some collection of subobjects $S_i\leq IA$. This means that for some index $i$, $\nu_0(k)\in S_i^{\nu_0}$. By assumption $I$ is full on subobjects, so $S_i\cong IR_i$ and $I_\flat(\nu_0)\in \VV_{R_i(k)}$. Since $I_\flat(\nu_0)$ belongs to the closure $\overline{I_\flat(\nu_1)}$ it follows that $I_\flat(\nu_1)\in\VV_{R_i(k)}$ as well. But then $$\nu_1\in I_\flat^{-1}(\VV_{R_i(k)})=\VV_{S_i(k)}\subseteq \VV.$$ This demonstrates that $I_\flat$ separates subgroupoids. \noindent\textbf{(iii)$\Rightarrow$(ii).} Fix an arbitrary subobject $S\leq IA$ and suppose that $N_0$ and $N_1$ are $\FF$-models, $h:I^*N_0\to I^*N_1$ and that $a\in S^{N_0}$. We must show that $h(a)\in S^{N_1}$. By the downward L\"owenheim-Skolem theorem, we may find $\kappa$-small submodels $N_0'\subseteq N_0$ and $N_1'\subseteq N_1$ such that $a\in N_0'$ and $h\upharpoonright I^*N_0'$ factors through $I^*N_1'$: $$\xymatrix@R=3ex{ I^*N_0 \ar[r]^h & I^*N_1\\ I^*N_0' \ar@{}[u]|{\rm{\rotatebox[origin=c]{90}{$\subseteq$}}} \ar@{-->}[r] & I^*N_1' \ar@{}[u]|{\rm{\rotatebox[origin=c]{90}{$\subseteq$}}}\\ }$$ Now extend these submodels to labelled models $\nu_0$ and $\nu_1$ in such a way that $h(\nu_0(k))=\nu_1(k)$ whenever $k$ is defined at $I_\flat(\nu_0)$.
This ensures that $I_\flat(\nu_0)\in\overline{I_\flat(\nu_1)}$. Now $\nu_1$ belongs to the open subgroupoid $I_\flat^{-1}(\UU_k)$ and $\nu_0$ belongs to the full and open subgroupoid $\VV_{S(k)}$ (since $\nu_0(k)=a\in S^{\nu_0}$). Now we may apply the assumption that $I_\flat$ separates subgroupoids to conclude that $\nu_1\in \VV_{S(k)}$. Therefore $$\nu_1(k)=h(\nu_0(k))=h(a)\in S^{\nu_1}$$ or, in other words, $h(a)\in S^{N_1'}\subseteq S^{N_1}$. Thus any $\EE$-model homomorphism $h:I^*N_0\to I^*N_1$ between reducts stabilizes an arbitrary subobject $S\leq IA$, so $I^*$ stabilizes all subobjects. \noindent\textbf{(ii)$\Rightarrow$(i).} Fix an arbitrary subobject $S\leq IA$ and let $$\Gamma=\{R\leq A\ |\ IR\leq S\}.$$ We will show that for every $\FF$-model $N$ and every element $a\in S^N$ there is some formula $R\in\Gamma$ such that $a\in IR^N$. This means that we can represent $S$ as a join\footnote{To be more precise, as $\FF$ does not have infinite joins, we should think of this as a statement about representable functors in $\Sh(\FF)$.} $$S=\bigvee_{R\in\Gamma} IR.$$ Since $S$ is compact, this reduces to a finite join, whence $S\cong I(R_1\vee\ldots\vee R_n)$ belongs to the essential image of $I$. We are left to show that every $S$-element satisfies some formula in $\Gamma$, so fix an $\FF$-model $N$ and an element $a\in S^{N}$. Consider the pushout $$\xymatrix{ \EE \ar[r]^{A^\times} \ar[d]_{I}&\EE\!/A \ar[r]^-{a} \ar[d] & \Diag(I^*N) \ar[d] \\ \FF \ar[r]_{IA^\times} & \FF\!/IA \ar[r]_-{\cc_a} & \FF_{I^*N}\\ }$$ This is the theory of pairs $H=\<N',h\>$ where $N'$ is another $\FF$-model and $h$ is an $\EE$-model homomorphism $I^*N\to I^*N'$. By assumption $I^*$ stabilizes $S$, so we must have $h(a)=\cc_a^H\in S^{N'}$. It follows, by completeness, that this is provable: $\FF_{I^*N}\vdash S(\cc_a)$.
We have the following syntactic description of $\FF_{I^*N}$: we extend $\FF$ by a constant $\cc_b:IB$ for each element $b\in IB^N$ and an axiom $\vdash I\varphi(\cc_b)$ whenever $I^*N\models \varphi(b)$. Given this description, compactness implies that the derivation $\FF_{I^*N}\vdash S(\cc_a)$ can only involve finitely many axioms $\{I\varphi_i(\cc_a,\cc_{b_i})\}$ coming from $\Diag(I^*N)$.\footnote{Without loss of generality we have weakened each of these axioms to include the constant $\cc_a$.} We might say that, aside from these axioms, the remainder of the derivation takes place in $\FF$. Formally, this means that after extending $\FF$ by the constants $\cc_a$ and $\cc_{b_i}$ the following sequent is provable $$\bigwedge_i I\varphi_i(\cc_a,\cc_{b_i}) \vdash S(\cc_a).$$ This is a (closed) sequent in the slice category $\FF\!/(IA\times IB)$. Moreover, the constants $\cc_{b_i}$ only appear on the left of the turnstile, so we are free to replace them by an existential quantifier. If we let $y=\<y_i\>$ and $\epsilon(x)=\exists y. \bigwedge_i \varphi_i(x,y_i)$ this gives us the following deduction: $$\begin{array}{cc} \vdash S(\cc_a) & \rm{in }\FF_{I^*N}\\\cline{1-1} \exists b_i\in B_i^{I^*N}\rm{ and } \varphi_i\leq A\times B_i & \\ \bigwedge_i I\varphi_i(\cc_a,\cc_{b_i})\vdash S(\cc_a) & \rm{in }\FF\!/(IA\times IB)\\\cline{1-1} \exists \epsilon\leq A\\ I\epsilon(\cc_a)\vdash S(\cc_a) & \rm{in }\FF\!/IA\\\cline{1-1} \exists \epsilon\leq A\\ I\epsilon(x)\underset{x:IA}{\vdash} S(x)& \rm{in }\FF. \end{array}$$ On one hand, the last sequent tells us that $\epsilon\in\Gamma$. On the other hand, $b=\<b_i\>$ gives us an existential witness to the fact that $N\models I\epsilon(a)$. This shows that every $S$-element satisfies some formula in $\Gamma$, completing the proof.
\end{proof} \subsection{Proof of (c)} Recall from definition \ref{ptop_morph_props} that a pretopos functor $I:\EE\to\FF$ is \emph{subcovering} if for every object $B\in\FF$ there is an object $A\in\EE$, a subobject $S\leq IA$ and an epimorphism (necessarily regular) $\sigma:S\twoheadrightarrow B$. To say that $I$ is subcovering means, roughly, that $\EE$ and $\FF$ have the same basic sorts; more precisely, the basic sorts of $\FF$ are definable (in $\FF$) from the images of the basic sorts of $\EE$. Classically, an $\FF$-model homomorphism $f:N\to N'$ consists of a family of functions $f_B:B^N\to B^{N'}$ ranging over the \emph{basic} sorts of $\FF$ which preserve the basic functions and relations. In particular, two homomorphisms are equal just in case they agree on the basic sorts. From this it is obvious that when $\EE$ and $\FF$ share the same basic sorts, the reduct functor $I^*:\Mod(\FF)\to\Mod(\EE)$ must be faithful. The same holds when $I$ is subcovering. Below we will show that the converse is also true: if $I^*$ is faithful then $I$ must be subcovering. We must also translate this into a spectral condition. We have already seen that statements about model homomorphisms can be recast into statements about the closure of points: an inclusion $\mu_0\in\overline{\mu_1}$ induces a canonical model homomorphism between their underlying models $M_0\to M_1$. However, it is not immediately obvious how to talk about parallel homomorphisms $M_0 \rightrightarrows M_1$. The trick is to replace $M_0$ by an isomorphism $M_0\cong M_0'$ and the parallel arrows by a \emph{non}commuting diagram $$\xymatrix{ M_0 \ar[d] \ar@{=}[r]^{\sim} \ar@{}[rd]|{\displaystyle\times} & M_0' \ar[d]\\ M_1 \ar@{=}[r] & M_1\\ }$$ According to proposition \ref{iso_closure} this can be rephrased in terms of isomorphisms in the spectral groupoid. \begin{defn}\label{def_nonfold} Suppose that $J:\YY\to\XX$ is a morphism of schemes. 
We say that $J$ is \emph{non-folding} if for any two isomorphisms $\alpha:\nu_0\cong\nu_0'$ and $\beta:\nu_1\cong\nu_1'$ such that $\nu_0\in\overline{\nu_1}$ and $\nu_0'\in\overline{\nu_1'}$ $$J(\alpha)\in\overline{J(\beta)}\Iff \alpha\in\overline{\beta} .$$ \end{defn} Of course continuous functions preserve closure, so $\alpha\in\overline{\beta}$ automatically implies that $J(\alpha)\in\overline{J(\beta)}$. To see that a map is non-folding it suffices to check the left-to-right direction of the displayed biconditional. \begin{prop}\label{c_prop} The following are equivalent: \begin{enumerate} \item[(i)] $I:\EE\to\FF$ is subcovering. \item[(ii)] $I^*$ is faithful. \item[(iii)] $I_\flat$ is non-folding. \end{enumerate} \end{prop} \begin{proof} We proceed by showing that (i)$\Rightarrow$(iii)$\Rightarrow$(ii)$\Rightarrow$(i). \noindent\textbf{(i)$\Rightarrow$(iii).} Suppose that $\alpha:\nu_0\cong\nu_0'$ and $\beta:\nu_1\cong\nu_1'$ are isomorphisms of labelled $\FF$-models as in definition \ref{def_nonfold}. Let $h,h'$ denote the $\FF$-model homomorphisms induced by the inclusions $\nu_0\in\overline{\nu_1}$ and $\nu_0'\in\overline{\nu_1'}$. According to proposition \ref{iso_closure}, $\alpha\in\overline{\beta}$ if and only if the following diagram commutes, where $N_i$ (resp. $N_i'$) is the underlying model of $\nu_i$ (resp. $\nu_i'$) $$\xymatrix{ N_1 \ar[rr]^\beta && N_1'\\ N_0 \ar[u]^{h} \ar[rr]_\alpha && N_0' \ar[u]_{h'}.\\ }$$ By assumption $I$ is subcovering so, for each object $B\in\FF$ there is an object $A\in\EE$ and a subquotient $IA\geq S \stackrel{q}{\twoheadrightarrow} B$.
Since $q$ is an epimorphism we have $$\big(\beta_B\circ h_B=h'_B\circ\alpha_B\big)\Iff\big(\beta_B\circ h_B\circ q^{N_0}=h'_B\circ\alpha_B\circ q^{N_0}\big).$$ As displayed in the following diagram, the naturality of $h$, $h'$, $\alpha$ and $\beta$ ensures that this is equivalent to the equation $q^{N_1'}\circ \beta_S\circ h_S=q^{N_1'}\circ h'_S\circ \alpha_S$: $$\xymatrix{ S^{N_1} \ar@{->>}[rd]_{q^{N_1}} \ar[rrr]^{\beta_S} &&&S^{N_1'} \ar@{->>}[ld]^{q^{N_1'}}\\ & B^{N_1} \ar[r]^{\beta_B} &B^{N_1'}& \\ & B^{N_0} \ar[r]_{\alpha_B} \ar[u]^{h_B} & B^{N_0'} \ar[u]_{h'_B} &\\ S^{N_0} \ar[uuu]^{h_S} \ar@{->>}[ur]^{q^{N_0}} \ar[rrr]_{\alpha_S} &&& S^{N_0'} \ar[uuu]_{h'_S} \ar@{->>}[ul]_{q^{N_0'}}\\ }$$ In particular, this holds whenever $\beta_S\circ h_S=h'_S\circ \alpha_S$. Now suppose that $I_\flat(\alpha)\in\overline{I_\flat(\beta)}$. This tells us that the following diagram of $\EE$-models commutes: $$\xymatrix{ I^*N_1 \ar[rr]^{I^*\beta} && I^*N_1'\\ I^*N_0 \ar[rr]_{I^*\alpha} \ar[u]^{I^*h} && I^*N_0' \ar[u]_{I^*h'}\\ }$$ In particular, $I^*\beta_A\circ I^*h_A=I^*h'_A\circ I^*\alpha_A$. Since $S$ is a subobject of $IA$ and $IA^{N_0}=A^{I^*N_0}$, it follows that $\beta_S\circ h_S=h'_S\circ \alpha_S$. Hence $\alpha$ belongs to the closure of $\beta$ and $I_\flat$ is non-folding. \noindent\textbf{(iii)$\Rightarrow$(ii).} We must show that when $I_\flat$ is non-folding the reduct functor $I^*:\Mod(\FF)\to\Mod(\EE)$ is faithful, so suppose that we have distinct $\FF$-model homomorphisms $g',h':N_0'\rightrightarrows N_1'$. By the downward L\"owenheim-Skolem theorem we can find a $\kappa$-small submodel $N_0\subseteq N_0'$ such that $g'\upharpoonright N_0\not= h'\upharpoonright N_0$.
Similarly, we can find a $\kappa$-small submodel $N_1\subseteq N_1'$ such that both of these restrictions factor through $N_1$; call the resulting factorizations $g$ and $h$: $$\xymatrix@R=3ex{ N_0' \ar@<1ex>[rr]^(.6){g'} \ar@<-1ex>[rr]_(.6){h'} && N_1'\\ N_0 \ar@{}[u]|{\rm{\rotatebox[origin=c]{90}{$\subseteq$}}} \ar@<1ex>[rr]^(.4){g} \ar@<-1ex>[rr]_(.4){h} && N_1 \ar@{}[u]|{\rm{\rotatebox[origin=c]{90}{$\subseteq$}}}\\ }$$ Fix a labelled model $\nu_1$ whose underlying model is $N_1$. In addition, choose two labellings $\nu_0$ and $\nu_0'$ on $N_0$ such that $g(\nu_0(k))=\nu_1(k)$ whenever $k$ is defined at $\nu_0$ and $h(\nu_0'(l))=\nu_1(l)$ whenever $l$ is defined at $\nu_0'$. This ensures that $\nu_0,\nu_0'\in\overline{\nu_1}$. Moreover, the identity on $N_0$ induces an isomorphism of labelled models $\alpha:\nu_0\cong\nu_0'$; the fact that $g\not=h$ tells us that $\alpha$ does not belong to the closure of the identity on $\nu_1$. Now we can apply the assumption that $I_\flat$ is non-folding; this ensures that $I_\flat(\alpha)\not\in\overline{I_\flat(1_{\nu_1})}$. The inclusion $I_\flat(\nu_0)\in\overline{I_\flat\nu_1}$ is induced by the reduct morphism $I^*g:I^*N_0\to I^*N_1$ and similarly for $\nu_0'$, while the underlying map of $I_\flat(\alpha)$ is the identity on $I^*N_0$, so this tells us that the following diagram does not commute in $\Mod(\EE)$ $$\xymatrix{ I^*N_1 \ar@{=}[rr]^{1_{\nu_1}} & \ar@{}[d]|{\textstyle\times} & I^*N_1\\ I^*N_0 \ar[u]^{I^*g} \ar@{=}[rr]_{\alpha} && I^*N_0 \ar[u]_{I^*h}\\ }$$ Since $g$ and $h$ are restrictions of $g'$ and $h'$ it follows that $I^*g'\not=I^*h'$, so $I^*$ must be faithful. \noindent\textbf{(ii)$\Rightarrow$(i).} The proof is formally similar to the last argument in part (b). First fix an object $B\in\FF$.
Given an object $A\in\EE$, a partial function $IA\part B$ is a two-place relation $\sigma\leq IA\times B$ which is provably many-one in $\FF$: $$\sigma(x,y)\wedge\sigma(x,y')\underset{\underset{x:IA}{y,y':B}}{\vdash} y=y'.$$ Categorically speaking, $\sigma$ is a partial function just in case the composite $\sigma\rightarrowtail IA\times B\to IA$ is monic. Now let $$\Sigma=\{\sigma\leq IA\times B\ |\ A\in\EE, \sigma:IA\part B\}.$$ We will show that for every $\FF$-model $N$ and every element $b\in B^N$ there is some partial function $\sigma\in\Sigma$ and an element $a\in IA^N$ such that $N\models \sigma(a,b)$. It will follow that we can express $B$ as a join $$B(y)\cong \bigvee_{\sigma\in\Sigma} \exists x.\sigma(x,y).$$ By compactness we can reduce this to a finite subcover. If we set $S(x)=\exists y.\sigma(x,y)$, this allows us to represent $B$ as a subquotient of an object coming from $\EE$: $$\xymatrix@R=3ex{ I(A_1+\ldots+A_n)\\ S_1+\ldots+S_n \ar@{->>}[r] \ar@{}[u]|{\rm{\rotatebox[origin=c]{90}{$\leq$}}} &B.\\ }$$ It remains to show that every $B$-element is the image of an $IA$-element under some partial function, so fix an $\FF$-model $N$. The first step is to axiomatize the following data: a second $\FF$-model $N'$ together with a pair of homomorphisms $g_1,g_2:N\rightrightarrows N'$ such that $I^*g_1=I^*g_2$. This theory can be presented as the following iterated pushout: $$\xymatrix{ \EE \ar[r] \ar[d]_I & \Diag(I^*N) \ar[d] & \\ \FF \ar[r] & \pushoutcorner \FF_{I^*N} \ar[r] \ar[d] & \Diag^1(N) \ar[d]\\ & \Diag^2(N) \ar[r] & \pushoutcorner \TT\\ }$$ $\TT$ contains two copies of $\Diag(N)$ which we distinguish with superscripts; each element $b\in B^N$ defines two constants $\cc^1_b,\cc^2_b:B$ and every parameterized formula $\varphi(x,b)\in\Diag(N)$ defines two objects $\varphi^1(x,\cc_b^1)$ and $\varphi^2(x,\cc_b^2)$. Whenever $N\models\varphi(b)$ we attach two axioms $\{\vdash \varphi^1(\cc_b^1), \vdash\varphi^2(\cc_b^2)\}$.
Given a model $G\models \TT$ we can recover the functions $g_1$ and $g_2$ by setting $g_i(b)=(\cc_b^i)^G$. Finally, we ensure that $I^*g_1=I^*g_2$ by adding an axiom $\vdash \cc_a^1=\cc_a^2$ for every element $a\in IA^N$; to simplify notation we will treat these two as a single constant $\cc_a$. By assumption $I^*$ is faithful, so $I^*g_1=I^*g_2$ implies that $g_1=g_2$. By completeness this must be provable in $\TT$: for a fixed element $b\in B^N$, $\TT\vdash \cc_{b}^1=\cc_{b}^2$. This derivation must involve only a finite number of the axioms mentioned in the last paragraph, say $$\{\vdash \varphi^1_i(\cc_{a_i},\cc^1_{d_i}), \vdash \psi^2_j(\cc_{a_j},\cc^2_{d_j})\}.$$ Now we unify these formulas. First, we may throw in some irrelevant assumptions in order to assume that the two sets of formulas $\{\varphi_i(z_i)\}$ and $\{\psi_j(z_j)\}$ are identical. Next, let $A=\prod_i A_i\times\prod_j A_j$ and $D=\prod_i D_i\times \prod_j D_j$ and weaken all of these formulas to a common context $IA\times B\times D$. The remainder of the derivation can proceed in the slice category $\FF\!/(IA\times B^2\times D^2)$. As in part (b), we first conjoin the formulas and eliminate constants to the left of the turnstile. We then replace the remaining closed sequent in $\FF\!/(IA\times B^2)$ by an open sequent in $\FF$ (where the formulas $\varphi^1=\varphi^2$ are identified, though $\cc_{b}^1$ and $\cc_b^2$ are not).
Letting $\epsilon(x,y)=\exists z.\bigwedge_i \varphi_i(x,y,z)$, this leaves us with the following series of equivalent statements $$\begin{array}{cc} \vdash \cc_b^1=\cc_b^2 & \rm{in }\TT\\\cline{1-1} \exists d_i\in D_i^{N}\rm{ and } \varphi_i\leq IA\times B\times D_i&\\ \Big(\bigwedge_i \varphi_i(\cc_a,\cc^1_b,\cc^1_{d_i})\Big)\wedge\Big(\bigwedge_i \varphi_i(\cc_a,\cc^2_b,\cc^2_{d_i})\Big)\vdash \cc_b^1=\cc_b^2 & \rm{in }\FF\!/(IA\times B^2\times D^2)\\\cline{1-1} \exists \epsilon\leq IA\times B\\ \epsilon(\cc_a,\cc_b^1)\wedge\epsilon(\cc_a,\cc_{b}^2)\vdash \cc_b^1=\cc_b^2& \rm{in }\FF\!/(IA\times B^2)\\\cline{1-1} \exists \epsilon\leq IA\times B\\ \epsilon(x,y_1)\wedge\epsilon(x,y_2)\underset{\underset{x:IA}{y_1,y_2:B}}{\vdash} y_1=y_2& \rm{in }\FF. \end{array}$$ On one hand, the last sequent tells us that $\epsilon$ defines a partial function and so belongs to $\Sigma$. On the other hand, the element $d=\<d_i\>$ gives us an existential witness to the fact that $N\models \epsilon(a,b)$. This shows that every $B$-element is the image of an $IA$-element under a definable partial function, completing the proof. \end{proof} \subsection{Conceptual Completeness} With the model-theoretic characterizations of these syntactic properties, we are in a good position to complete the proof of conceptual completeness. \begin{prop} Suppose that $I:\EE\to\FF$ is a pretopos functor. \begin{itemize} \item If $I$ is conservative and full on subobjects, then $I$ is full. \item If $I$ is conservative, full on subobjects and subcovering, then $I$ is essentially surjective. \end{itemize} \end{prop} \begin{proof} For the first claim, suppose that $f:IA\to IB$ is a map in $\FF$. We can represent this as a graph $\Gamma_f\leq I(A\times B)$. Since $I$ is full on subobjects, there is a preimage $R\leq A\times B$ such that $IR\cong \Gamma_f$. To say that $f$ is functional means that the composite $\Gamma_f\rightarrowtail I(A\times B)\to IA$ is an isomorphism.
$I$ is conservative, so it reflects this isomorphism, giving us that the composite $R\rightarrowtail A\times B \to A$ is an isomorphism. That means that $R=\Gamma_{\overline{f}}$ is a graph in $\EE$, and $I(\overline{f})=f$. For the second, take any object $B\in \FF$. Since $I$ is subcovering, this can be represented as a subquotient $$\xymatrix{ IA & \\ S \ar@{}[u]|{\rm{\rotatebox[origin=c]{90}{$\leq$}}} \ar@{->>}[r] & B\\ }$$ Since $I$ is full on subobjects, $S\cong IR$ lies in the image of $I$. Now form the kernel pair $K\rightrightarrows IR \twoheadrightarrow B$. $K$ is a subobject of $I(R\times R)$, so $K\cong IL$ also belongs to the image of $I$. As discussed in the proof of proposition \ref{qc_ortho}, conservative functors reflect equivalence relations, so the preimage $L\rightrightarrows R$ is an equivalence relation in $\EE$. Since pretopos functors preserve quotients, $B\cong I(R/L)$. \end{proof} \begin{thm} A pretopos functor $I:\EE\to\FF$ is an equivalence of categories if and only if the reduct functor $I^*:\Mod(\FF)\to\Mod(\EE)$ is an equivalence of categories. \end{thm} \begin{proof} This follows immediately from the previous proposition, together with theorem \ref{scheme_cc}. The left-to-right direction is trivial. As for the converse, suppose $I^*$ is essentially surjective. Then it is certainly supercovering: for any $\EE$-model $M$ there is an isomorphism $h:M\stackrel{\sim}{\longrightarrow} I^*N$ and if $a\not\in R^M$ then $h(a)\not\in R^{I^*N}$. Therefore $I$ must be conservative (and hence faithful by lemma \ref{eq_cons}). If $I^*$ is full then any morphism $h:I^*N_0\to I^*N_1$ has a lift $\overline{h}:N_0\to N_1$ with $I^*\overline{h}=h$. For any $S\leq IA\in\FF$, this means $$h_A(S^{N_0})=(I^*\overline{h})_A(S^{N_0})=\overline{h}_{IA}(S^{N_0})=\overline{h}_S(S^{N_0})\subseteq S^{N_1}.$$ Thus $I^*$ stabilizes subobjects and $I$ must be full on subobjects.
When $I^*$ is faithful $I$ is subcovering, and we have just seen that a functor which is conservative, full on subobjects and subcovering is an equivalence of categories. \end{proof} \chapter*{Introduction} Although contemporary model theory has been called ``algebraic geometry minus fields'' \cite{hodges}, the formal methods of the two fields are radically different. This dissertation aims to shrink that gap by presenting a theory of ``logical schemes,'' geometric entities which relate to first-order logical theories in much the same way that algebraic schemes relate to commutative rings. \vspace{.25cm} Recall that the affine scheme associated with a commutative ring $R$ consists of two components: a topological space $\Spec(R)$ (the spectrum) and a sheaf of rings $\OO_R$ (the structure sheaf). Moreover, the scheme satisfies two important properties: its stalks are local rings and its global sections are isomorphic to $R$. In this work we replace $R$ by a first-order logical theory $\bTT$ (construed as a structured category) and associate it to a pair $(\Spec(\bTT),\OO_{\bTT})$, a topological spectrum and a sheaf of theories which stand in a similar relation to $\bTT$. These ``affine schemes'' allow us to import some familiar definitions and theorems from algebraic geometry. \subsection*{Stone duality for first-order logic: the spectrum $\Spec(\bTT)$} In the first chapter we construct $\MM=\Spec(\bTT)$, the \emph{spectrum} of a coherent first-order logical theory. $\MM$ is a topological groupoid constructed from the semantics (models and isomorphisms) of $\bTT$, and its construction can be regarded as a generalization of Stone duality for propositional logic. The construction is based upon an idea of Joyal \& Tierney \cite{JT} which was later developed by Joyal \& Moerdijk \cite{JM1} \cite{JM2}, Butz \& Moerdijk \cite{butz_thesis} and Awodey \& Forssell \cite{forssell_thesis, FOLD}. 
Points of the object space $\MM_0$ are models supplemented by certain variable assignments or labellings. Satisfaction induces a topology on these points, much as in the classical Stone space construction. As in the algebraic case, the spectrum is \emph{not} a Hausdorff space. Instead the topology incorporates model theoretic information; notably, the closure of a point $M\in\MM$ (i.e., a labelled model) can be interpreted as the set of $\bTT$-model homomorphisms mapping into $M$. Every formula $\varphi(x_1,\ldots,x_n)$ determines a ``definable sheaf'' $\ext{\varphi}$ over the spectrum. Over each model $M$, the fiber of $\ext{\varphi}$ is the definable set $$\textrm{stalk}_M(\ext{\varphi})=\varphi^M=\{\overline{a}\in|M|^n\ |\ M\models\varphi(\overline{a})\}.$$ The space $\ext{\varphi}$ is topologized by the terms $t$ which satisfy $\varphi(t)$; each of these defines a section of the sheaf, sending $M\mapsto t^M\in\varphi^M$. Although $\ext{\varphi}$ is nicely behaved, its subsheaves, in general, are not. The problem is that these typically depend on details of the labellings which have no syntactic relevance. To ``cancel out'' this effect we appeal to $\bTT$-model isomorphisms. Specifically, we can topologize the isomorphisms between models, turning $\Spec(\bTT)$ into a topological groupoid. Each definable sheaf $\ext{\varphi}$ \emph{is} equivariant over this groupoid, which is just a fancy way of saying that an isomorphism $M\cong M'$ induces an isomorphism $\varphi^M \cong \varphi^{M'}$ for each definable set. The pathological subsheaves, however, are not equivariant; any subsheaf $S\leq\ext{\varphi}$ which \emph{is} equivariant must be a union of definable pieces $\ext{\psi_i}$, where $\psi_i(\overline{x})\vdash\varphi(\overline{x})$. Moreover, $S$ itself is definable just in case it is compact with respect to such covers. This reflects a deeper fact: $\Spec(\bTT)$ gives a presentation of the classifying topos for $\bTT$. 
That is, there is a correspondence between $\bTT$-models inside a topos $\SS$ (e.g., $\Sets$) and geometric morphisms from $\SS$ into the topos of equivariant sheaves over $\Spec(\bTT)$. Since any classical first-order theory can be regarded as a coherent theory with complements, this gives a spectral space construction for classical first-order logic. We close the chapter with an elementary presentation of this construction, and prove a few facts which are specific to this case. \subsection*{Theories as algebras: the logic of pretoposes} In the second chapter we recall and prove a number of well-known facts about pretoposes, most of which can be found in either the Elephant \cite{elephant} or Makkai \& Reyes \cite{MakkaiReyes}. We recall the definitions of coherent categories and pretoposes and the pretopos completion which relates them. Logically, a pretopos $\EE=\EE_{\bTT}$ can be regarded as a coherent first-order theory $\bTT$ which is extended by disjoint sums and definable quotients; formally, this corresponds to a conservative extension $\bTT\subseteq\bTT^{\eq}$ which is important in contemporary model theory. We go on to discuss a number of constructions on pretoposes, including the quotient-conservative factorization and the machinery of slice categories and localizations. In particular, we define the elementary diagram, a logical theory associated with any $\bTT$-model, and its interpretation as a colimit of pretoposes. These are Henkin theories: they satisfy existence and disjunction properties which can be regarded as a sort of ``locality'' for theories. We also demonstrate that the pretopos completion interacts well with complements, so that classical first-order theories may be regarded as Boolean pretoposes. Similarly, quotients and colimits of theories are well-behaved with respect to complementation, so that the entire machinery of pretoposes can be specialized to the classical case. 
\subsection*{Sheaves of theories and logical schemes} The third chapter is the heart of the dissertation, where we introduce a sheaf representation for logical theories and explore the logical schemes which arise from this representation. The most familiar example of such a representation is Grothendieck's theorem that every commutative ring $R$ is isomorphic to the ring of global sections of a certain sheaf on the Zariski spectrum of $R$. Together with a locality condition, this is essentially the construction of an affine algebraic scheme. Later it was shown by Lambek \& Moerdijk \cite{LM}, Lambek \cite{lambek} and Awodey \cite{awodey_thesis, sheaf_rep} that toposes could also be represented as global sections of sheaves on certain (generalized) spaces. The structure sheaf $\OO_{\bTT}$ is especially close in spirit to the last example. Formally, the structure sheaf is constructed from the codomain fibration $\EE^I\to\EE$, which can be regarded as a (pseudo-)functor $\EE^{\op}\to\Cat$ sending $A\mapsto\EE/A$; the functorial operation of a map $f:B\to A$ is given by pullback $f^*:\EE/A\to\EE/B$. The quotients and coproducts in $\EE$ ensure that this map is a ``sheaf up to isomorphism'' over $\EE$ (i.e., a stack). In order to define $\OO_{\bTT}$ we first turn the stack into a (strict) sheaf and then pass across the equivalence $\Sh(\EE_{\bTT})\simeq\EqSh(\MM_{\bTT})$ discussed in chapter 1. Given a model $M\in\Spec(\bTT)$, the stalk of $\OO_{\bTT}$ over $M$ is the elementary diagram discussed in chapter 2. An object in the diagram of $M$ is defined by a triple $\<M,\varphi(\overline{x},\overline{y}),\overline{b}\>$ where $M$ is a model and $\overline{b}$ and $\overline{y}$ have the same arity. 
We can think of this triple as a definable set $$\varphi(\overline{x},\overline{b})^M=\left\{\overline{a}\in|M|\ |\ M\models\varphi(\overline{a},\overline{b})\right\}.$$ The groupoid of isomorphisms acts on this sheaf in the obvious way: given $\alpha:M\to N$ we send $\varphi(\overline{x},\overline{b})^M\mapsto \varphi(\overline{x},\alpha(\overline{b}))^N$. In particular, every formula $\varphi(\overline{x})$ determines a global section $\ulcorner\!\varphi\!\urcorner:M\mapsto \varphi^M$. These are stable with respect to the equivariant action and, together with formal sums and quotients, these are \emph{all} of the equivariant sections. This yields a representation theorem just as in the algebraic case: $$\Gamma_{\eq}(\OO_{\bTT})\simeq\EE_{\bTT}.$$ The pair $(\MM_{\bTT},\OO_{\bTT})$ is an \emph{affine logical scheme}. A map of theories is an interpretation $I:\bTT\to\bTT'$ (e.g., adding an axiom or extending the language). This induces a forgetful functor $I_\flat:\Spec(\bTT')\to\Spec(\bTT)$, sending each $\bTT'$-model $N$ to the $\bTT$-model which is the interpretation of $I$ in $N$. If $I$ is a linguistic extension then $I_\flat N$ is the usual reduct of $N$ to $\mathcal{L}(\bTT)$. Moreover, $I$ induces another morphism at the level of structure sheaves: $I^\sharp: I_\flat^*\OO_{\bTT}\to \OO_{\bTT'}.$ On fibers, this sends each $I^*N$-definable set $\varphi^{I^*N}$ to the $N$-definable set $(I\varphi)^N$ (the same set!). The pair $\<I_\flat,I^\sharp\>$ is a map of schemes. Equivalently, we can represent $I^\sharp$ as a map $\OO_{\bTT}\to I_{\flat *}\OO_{\bTT'}$ and, since $\Gamma_{\bTT'}\circ I_{\flat *}\cong\Gamma_{\bTT}$, the global sections of $I^\sharp$ suffice to recover $I$: $$\Gamma_{\bTT}(I^\sharp)\cong I\ :\ \bTT=\Gamma_{\bTT}(\OO_{\bTT})\longrightarrow \Gamma_{\bTT}(I_{\flat *}\OO_{\bTT'})\cong \Gamma_{\bTT'}(\OO_{\bTT'})\cong\bTT'.$$ Similarly, we can define a natural transformation of schemes, and these too can be recovered from global sections. 
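The equivariant action described above can be made concrete in a toy finite setting. The sketch below (Python, purely illustrative — the text concerns arbitrary first-order models, while here $M=N=(\mathbb{Z}/4,+)$ and $\alpha$ is the automorphism $a\mapsto -a$) exhibits a parameter-definable set $\varphi(x,\overline{b})^M$ and checks that $\alpha$ carries it to $\varphi(x,\alpha(\overline{b}))^N$:

```python
# Toy illustration of the action of a model isomorphism on a
# parameter-definable set: phi(x, b)^M  |->  phi(x, alpha(b))^N.
# Hypothetical finite example; M = N = (Z/4, +).
M = {0, 1, 2, 3}

def phi(a, b):
    # the formula phi(x, y):  x + x = y, interpreted in Z/4
    return (a + a) % 4 == b

def definable(b):
    # the definable set phi(x, b)^M = {a in |M| : M |= phi(a, b)}
    return {a for a in M if phi(a, b)}

alpha = {a: (-a) % 4 for a in M}   # an automorphism of (Z/4, +)

b = 0
image = {alpha[a] for a in definable(b)}   # alpha applied pointwise
print(sorted(definable(b)), sorted(image), sorted(definable(alpha[b])))
```

In the sheaf-theoretic picture, these pointwise actions are what assemble into the equivariant structure of $\OO_{\bTT}$, and the global sections stable under them are the syntactically definable ones.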
This addresses a significant difficulty in Awodey \& Forssell's first-order logical duality \cite{FOLD}: identifying which homomorphisms between spectra originate syntactically. This problem is non-existent for schemes: without a syntactic map at the level of structure sheaves, there is no scheme morphism. With this framework in place, algebraic geometry provides a methodology for studying this type of object. The necessary definitions to proceed from affine schemes to the general case follow the same rubric as algebraic geometry. There are analogs of locally ringed spaces, gluings (properly generalized to groupoidal spectra) and coverings by affine pieces. Importantly, the equivariant global sections functor presents (the opposite of) the 2-category of theories as a reflective subcategory of schemes. This allows us to construct limits of affine schemes using colimits of theories. This mirrors the algebraic situation, where the polynomial ring $\ZZ[x]$ represents the affine line and its coproduct $\ZZ[x,y]\cong \ZZ[x]+\ZZ[y]$ represents the plane. Via affine covers, one can use this to compute finite 2-limits of arbitrary logical schemes. \subsection*{Applications} The scheme $(\MM_{\bTT},\OO_{\bTT})$ associated with a theory incorporates both the semantic and syntactic components of $\bTT$. As such, it is a nexus to study the connections between different branches of logic and other areas of mathematics. In the final chapter of the dissertation we discuss a few of these connections. \begin{itemize} \item \textbf{The structure sheaf as a site.\footnote{The author would like to thank Andr\'e Joyal for a helpful conversation in which he suggested the theorem presented in this section.}} $\OO_{\bTT}$ is itself a pretopos internal to the topos of equivariant sheaves over $\MM_{\bTT}$. We can regard this as a site (with the coherent topology, internalized) and consider its topos of sheaves $\Sh_{\MM_{\bTT}}(\OO_{\bTT})$. 
We prove that this topos classifies $\bTT$-model homomorphisms. We also show that it can be regarded as the (topos) exponential of $\Sh(\MM_{\bTT})$ by the Sierpinski topos $\Sets^I$. \item \textbf{The structure sheaf as a universe.} Building on the results of chapters 1 and 3 we can show that every equivariant sheaf morphism $\ext{\varphi}\to\OO_{\bTT}$ corresponds to an object $E\in\EE/\varphi$. In this section we introduce an auxiliary sheaf $\El(\OO_{\bTT})\to\OO_{\bTT}$ allowing us to recover $E$ as a pullback: $$\xymatrix{ \ext{E} \pbcorner \ar[d] \ar[rr] && \El(\OO_{\bTT}) \ar[d]\\ \ext{\varphi} \ar[rr] && \OO_{\bTT}.\\ }$$ This allows us to think of $\OO_{\bTT}$ as a universe of \emph{definably} or \emph{representably} small sets. Formally, we show that $\OO_{\bTT}$ is a coherent universe, a pretopos relativization of Streicher's notion of a universe in a topos \cite{streicher}. \item \textbf{Isotropy.} In this section we demonstrate a tight connection between our logical schemes and a recently defined ``isotropy group'' \cite{isotropy} which is present in any topos. This allows us to interpret the isotropy group as a logical construction. We also compute the stalk of the isotropy group at a model $M$ and show that its elements can be regarded as parameter-definable automorphisms of $M$. \item \textbf{Conceptual completeness.} In this section we reframe Makkai \& Reyes' conceptual completeness theorem \cite{MakkaiReyes} as a theorem about schemes. The original theorem says that if an interpretation $I:\bTT\to\bTT'$ induces an equivalence $I^*:\textbf{Mod}(\bTT')\stackrel{\sim}{\longrightarrow}\textbf{Mod}(\bTT)$ under reducts, then $I$ itself was already an equivalence (at the level of syntactic pretoposes). The theorem follows immediately from our scheme construction: if $I^*:\Spec(\bTT')\to\Spec(\bTT)$ is an equivalence of schemes, then its global sections define an equivalence $\bTT\simeq\bTT'$. 
From here we go on to unwind the Makkai \& Reyes proof, providing some insights into the ``Galois theory'' of logical schemes. \end{itemize}
\section{Introduction} \label{sec:intro} In many respects, quantum theory is a strange theory with numerous non-intuitive predictions. Nevertheless, our familiar classical world is the result of quantum phenomena at the atomic and subatomic levels. So, it is interesting to establish connections between classical and quantum descriptions, and to understand how the macroscopic world emerges from the microscopic world. An interesting approach for stationary quantum states is to compare the probability density given by the square modulus of the wave-function with a ``classical probability distribution'' obtained from the corresponding classical equations of motion. It can then be shown that both functions approach each other, in the limit of large quantum excitations, once the rapid oscillations of the quantum density are averaged. The classical probability distribution can be compared directly with the explicit (analytical or numerical) corresponding quantum distribution for some particular Hamiltonians. This is done, for instance, in \cite{robi95,yode06} for one-dimensional Schr\"odinger equations. But a more general procedure is available. The WKBJ method, named after Wentzel, Kramers, Brillouin and Jeffreys \cite{flug99,maha09}, yields a semi-classical solution of a quantum problem, also in the limit of large quantum excitations. So it is possible to compare the classical probability distribution directly with the averaged WKBJ solution for Schr\"odinger equations \cite{sen04,sen06}. In this paper, the same approach is generalized to one-dimensional Hamiltonians with an arbitrary kinetic energy. Such Hamiltonians are used in several domains: atomic physics with non-parabolic dispersion relations \cite{arie92}, hadronic physics with particle masses depending on the relative momentum \cite{szcz96}, quantum mechanics with a minimal length \cite{brau99,ques10}. 
The characteristics of a general Hamiltonian with a non-usual kinetic energy are given in Sec.~\ref{sec:hamiltonian}, where natural constraints are given on the kinetic part. The notion of classical probability distribution for the usual Schr\"odinger equation is recalled in Sec.~\ref{sec:classical}, and extended to the case of more general Hamiltonians. In Sec.~\ref{sec:wkbj}, the WKBJ approximation is generalized to this type of Hamiltonian, and the connection is made between the classical probability distribution and the quantum probability distribution obtained from the WKBJ method. Some examples are treated in Sec.~\ref{sec:examples}, and concluding remarks are given in Sec.~\ref{sec:conclusion}. \section{The Hamiltonian} \label{sec:hamiltonian} The following general one-dimensional Hamiltonian is considered \begin{equation} \label{TpVx} H=T(p)+V(x), \end{equation} where $T(p)$ is the kinetic part depending on the momentum $p$, and $V(x)$ the potential part depending on the position $x$. Variables $p$ and $x$ are conjugate: $[x,p]=i\hbar$. This Hamiltonian can correspond to a particle in a potential well if $x$ is interpreted as the distance from the origin, or to two particles in mutual interaction if $x$ is interpreted as the relative distance. It is assumed that bound states are supported by this Hamiltonian and that the potential well $V(x)$ has no singularity. Obviously, the form of the kinetic energy $T$ cannot be completely arbitrary. Four conditions are imposed: \begin{description} \item{A.} $T(p) \ge 0$ for all values of the momentum $p$, so that the kinetic energy is a positive quantity. This sounds physically reasonable, but it is not necessary from a mathematical point of view. \item{B.} $T(p)=T(-p)$. It seems reasonable that the kinetic energy is an even function of the momentum, so that it does not depend on the direction of propagation of the particle. \item{C.} $T(p)$ is a monotonically increasing function of $|p|$. 
It seems quite natural that the kinetic energy increases with the modulus of the momentum. \item{D.} $T(p)$ is at least of class $C^2$. The utility of this condition will appear below. \end{description} The speed of the particle is defined using Hamilton's equations \begin{equation} \label{speed} v(p) = \frac{\partial H}{\partial p}= \frac{\partial T}{\partial p} =T'(p). \end{equation} This is in agreement with the phenomenological definition given in \cite{arie92,sema13}. Moreover, $v(-p) = T'(-p) = -T'(p) = -v(p)$, since $T'(p)$ is an odd function because of condition~(B). So, to change the sign of the momentum is to change the sign of the speed, as expected. In particular, the speed vanishes for a null momentum, $v(0) = T'(0) = 0$. Using conditions (B) and (D) for small values of the momentum, one can write \begin{equation} \label{Texp} T(p)= T(0)+ \frac{T''(0)}{2} p^2 + \textrm{O}(p^4). \end{equation} In this limit, Hamiltonian~(\ref{TpVx}) reduces to a usual Schr\"odinger Hamiltonian with an effective mass $M=1/T''(0)$ and a constant contribution $T(0)$ to the eigenenergies which can be identified with the rest energy of the particle. \section{The classical probability distribution} \label{sec:classical} The position of a particle can be, in principle, perfectly determined in a classical motion. So, for a classical probability distribution $\rho_\textrm{cl}(x)$ to make sense in this context, it is necessary to introduce a random procedure into the problem. For example, one can choose to perform a measurement of the position at a random time. As only bounded one-dimensional motions in a potential well are considered here, the motion is periodic, with a period $\tau$, and the particle bounces back and forth between two classical turning points (TP) at $x=a$ and $x=b$ ($b > a$). We can then define the classical probability $\rho_\textrm{cl}(x)\, dx$ as the probability to find the particle in the interval $[x,x+dx]$. 
This gives \begin{equation} \label{rhox} \rho_\textrm{cl}(x)\, dx = \frac{2}{\tau}\, dt(x) = \frac{2}{\tau}\, \frac{dx}{|v(x)|}, \end{equation} where $v(x)$ is the speed of the particle. The absolute value ensures that the probability is a positive number (the measurement is blind to the sense of propagation). The distribution is correctly normalized since the motion from left to right is identical to the motion from right to left, and then \begin{equation} \label{normrhox} \int_a^b \rho_\textrm{cl}(x)\, dx = \int_{t_a}^{t_b} \frac{2}{\tau}\, dt = 1. \end{equation} Definition~(\ref{rhox}) for $\rho_\textrm{cl}(x)$ seems quite natural since a particle is more likely measured at positions where it travels slowly. In a classical motion, the particle cannot exist outside the two TP. So, $\rho_\textrm{cl}(x) = 0$ for $x < a$ or $x > b$. For a stationary solution, the energy $E$ of the particle is constant and is given by \begin{equation} \label{ener} E = T(p) + V(x). \end{equation} Thanks to conditions~(B) and (C), a function $T^{-1}$ can be defined such that $T^{-1}(T(p))=p$ for $p\ge 0$. Using~(\ref{ener}), the speed modulus of the particle can be written as a function of $x$ by using the definition~(\ref{speed}) \begin{equation} \label{vx} |v(x)| = T'(T^{-1}(E-V(x))), \end{equation} with $E-V(x) \ge T(0)$ for the classical motion. The probability distribution is then given by \begin{equation} \label{rhovx} \rho_\textrm{cl}(x) = \frac{2}{\tau}\, \frac{1}{T'(T^{-1}(E-V(x)))} . \end{equation} We consider situations for which only two TP exist. Since the speed of the particle vanishes at the TP, they are solutions of the equation \begin{equation} \label{tp} V(a)=V(b)=E-T(0)=E_B, \end{equation} where $E_B$ is the binding energy of the particle. The distribution $\rho_\textrm{cl}(x)$ diverges at the TP, but this is not a problem, provided the normalization condition (\ref{normrhox}) is satisfied. The kinetic part $t(p)=T(p)-T(0)$ vanishes for a null momentum. 
It has the following properties: $t'(p)=T'(p)$ and $t^{-1}(y)=T^{-1}(y+T(0))$. So, \begin{equation} \label{tp0} T'(T^{-1}(E-V(x)))=t'(t^{-1}(E_B-V(x))). \end{equation} This shows that the presence of a rest energy does not influence the dynamics of the system. \section{The WKBJ approximation} \label{sec:wkbj} The WKBJ approximation relies on a semi-classical expansion of the wave-function $\psi(x)$ of the form \begin{equation} \label{wkbj} \psi(x)=\exp\left(\frac{i}{\alpha}\sigma(x)\right), \end{equation} with the parameter $\alpha\to 0$ \cite{flug99,maha09}. Usually, this parameter $\alpha$ is simply taken to be $\hbar$. But it is more satisfactory, from a mathematical point of view, to build a dimensionless quantity depending on $\hbar$ and other characteristic parameters of the system under study. In \cite{sen04,sen06}, for non-relativistic systems ($t(p)=p^2/(2 m)$), it is suggested to take \begin{equation} \label{alpha} \alpha = \frac{\hbar}{\sqrt{2 m\, |E_B|}\, d}, \end{equation} where $d=b-a$ is the distance between the two TP and $m$ is the mass of the particle. The semi-classical limit can then be reached for high values of the energy $|E_B|$, large mass $m$ of the particle or large size of the classical region $d$. This parameter appears naturally in a dimensionless rewriting of the Schr\"odinger equation. With a non-standard kinematics, this parameter must be redefined since there is a priori no automatic equivalent of the parameter $m$. This will be done at the end of this section. In the following, it is simply assumed that $\alpha$ can always be determined. The computation of the WKBJ approximation for a Schr\"odinger equation can be found in many textbooks. But with a non-standard kinetic part, the derivation is more involved. The procedure developed here is inspired by a calculation performed in \cite{gold60} for a non-relativistic WKBJ approximation computed in the momentum space. 
The equation to solve is \begin{equation} \label{3.3} T\left(-i\hbar \frac{d}{d x}\right) \psi(x) + V(x) \psi(x) = E \psi(x) \end{equation} where the kinetic operator is defined by its Taylor expansion \begin{equation} \label{3.3a} T\left(-i\hbar \frac{d}{d x}\right) = \sum_{n=0}^{\infty} \frac{T^{(n)}(0)}{n!} \left(-i\hbar \frac{d}{d x}\right)^{n}. \end{equation} Condition (D) is not sufficient to guarantee the relevance of this last expression. This will be commented below. For the moment, (\ref{3.3a}) is assumed correct. As the limit $\alpha\to 0$ will be considered, it can be shown by induction that \begin{equation} \label{prop3.1} \left(-i\hbar \frac{d}{d x}\right)^n e^{\frac{i}{\alpha} \sigma(x)} = \epsilon^n\, e^{\frac{i}{\alpha}\sigma(x)} \left( \left(\frac{d \sigma}{d x}\right)^n - i \alpha \frac{n(n-1)}{2} \frac{d^2 \sigma}{d x^2} \left(\frac{d \sigma}{d x}\right)^{n-2} + \textrm{O}(\alpha^2)\right), \end{equation} where $\epsilon=\hbar/\alpha$. The combination of (\ref{3.3a}) and (\ref{prop3.1}) gives \begin{equation} T\left(-i\hbar \frac{d}{d x}\right) e^{\frac{i}{\alpha} \sigma(x)} = e^{\frac{i}{\alpha} \sigma(x)} \left[ T\left(\epsilon \frac{d \sigma}{d x}\right) - \frac{i \alpha}{2} \epsilon^2 \frac{d^2 \sigma}{d x^2} T''\left(\epsilon \frac{d \sigma}{d x}\right) + \textrm{O}(\alpha^2)\, T(\epsilon) \right]. \end{equation} Putting this last result in (\ref{3.3}), and dropping the exponential factor, gives \begin{equation} \label{3.5} T\left(\epsilon \frac{d \sigma}{d x}\right) - \frac{i \alpha}{2} \epsilon^2 \frac{d^2 \sigma}{d x^2} T''\left(\epsilon \frac{d \sigma}{d x}\right) + V(x) + \textrm{O}(\alpha^2) = E, \end{equation} where the coefficient $T(\epsilon)$ is reabsorbed in $\textrm{O}(\alpha^2)$. 
The function $\sigma(x)$ can also be expanded in powers of $\alpha$ \begin{equation} \label{3.5b} \sigma(x) = \sigma_0(x) + \alpha\, \sigma_1(x) + \textrm{O}(\alpha^2), \end{equation} where all functions $\sigma_j(x)$ are assumed to be independent of $\alpha$. In this case, an obvious result is \begin{equation} \left(\frac{d \sigma_0}{d x} + \alpha \frac{d \sigma_1}{d x}\right)^n = \left(\frac{d \sigma_0}{d x}\right)^n + n \left(\frac{d \sigma_0}{d x}\right)^{n-1} \alpha \frac{d \sigma_1}{d x} + \textrm{O}(\alpha^2). \end{equation} The last equation yields \begin{eqnarray} T\left(\epsilon \frac{d \sigma}{d x}\right) & =& \sum_{n=0}^{\infty} \frac{T^{(n)}(0)}{n!} \epsilon^n \left(\frac{d \sigma_0}{d x} + \alpha \frac{d \sigma_1}{d x}\right)^n + \textrm{O}(\alpha^2) \nonumber \\ & =& T\left(\epsilon \frac{d \sigma_0}{d x}\right) + \alpha\, \epsilon \frac{d \sigma_1}{d x} T'\left(\epsilon \frac{d \sigma_0}{d x}\right) + \textrm{O}(\alpha^2) \end{eqnarray} and \begin{eqnarray} \alpha\, T''\left(\epsilon \frac{d \sigma}{d x}\right) & =& \alpha\, \sum_{n=0}^{\infty} \frac{T^{(n)}(0)}{n!} n (n-1) \left(\epsilon \frac{d \sigma_0}{d x}\right)^{n-2} + \textrm{O}(\alpha^2) \nonumber\\ & =& \alpha\, T''\left(\epsilon \frac{d \sigma_0}{d x}\right) + \textrm{O}(\alpha^2). \end{eqnarray} Finally, (\ref{3.5}) can be written \begin{eqnarray} \label{3.6} && \left[T\left(\epsilon \frac{d \sigma_0}{d x}\right) + V(x) - E\right] \nonumber \\ && + \alpha \left[\epsilon \frac{d \sigma_1}{d x} T'\left(\epsilon \frac{d \sigma_0}{d x}\right) - \frac{i}{2} \epsilon^2 \frac{d^2 \sigma_0}{d x^2} T''\left(\epsilon \frac{d \sigma_0}{d x}\right)\right] + \textrm{O}(\alpha^2) = 0. 
\end{eqnarray} The coefficient of each power of $\alpha$ must vanish, that is to say \begin{eqnarray} \label{3.7a} &&T\left(\epsilon \frac{d \sigma_0}{d x}\right) + V(x) - E = 0, \\ \label{3.7b} &&\frac{d \sigma_1}{d x} T'\left(\epsilon \frac{d \sigma_0}{d x}\right) - \frac{i}{2} \epsilon \frac{d^2 \sigma_0}{d x^2} T''\left(\epsilon \frac{d \sigma_0}{d x}\right) = 0. \end{eqnarray} Only a solution between the two TP is sought. With the definition given above for the function $T^{-1}$, (\ref{3.7a}) implies \begin{equation} \label{epst} \epsilon \frac{d \sigma_0}{d x} = \pm T^{-1}\left(E - V(x)\right), \end{equation} whose solution is given by \begin{equation} \label{3.8} \sigma_0(x) = \pm \frac{1}{\epsilon} \int_x T^{-1}\left(E - V(y)\right) dy. \end{equation} The notation $\int_x\, f(y)\, dy$ denotes the integral of the function $f(y)$ with one limit at $x$ and the other limit at one of the TP. Equation~(\ref{3.7b}) can be written \begin{equation} \frac{d \sigma_1}{d x} = \frac{i}{2} \frac{d}{d x} \ln\left(T'\left(\epsilon \frac{d \sigma_0}{d x}\right)\right). \end{equation} Using (\ref{epst}), the solution of this last equation is given by \begin{equation} \sigma_1(x) = \frac{i}{2} \ln\left(T'\left(T^{-1}(E - V(x))\right)\right) + C, \end{equation} where $C$ is a constant. The argument of the $\ln$-function is positive thanks to the definition of the function $T^{-1}$. Finally, using~(\ref{wkbj}), the wave-function can be written \begin{eqnarray} \label{3.10} \psi_{\textrm{WKBJ}}(x) &= & \frac{C_1}{\sqrt{T'\left(T^{-1}(E - V(x))\right)}} e^{+\frac{i}{\hbar} \int_x T^{-1}\left(E - V(y)\right) dy} \nonumber \\ & +& \frac{C_2}{\sqrt{T'\left(T^{-1}\left(E - V(x)\right)\right)}} e^{-\frac{i}{\hbar} \int_x T^{-1}\left(E - V(y)\right) dy}, \end{eqnarray} where $C_1$ and $C_2$ are normalization constants. The parameter $\alpha$ is not explicitly present in this expression. But this approximation is only valid when $\alpha \ll 1$. 
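As a consistency check on (\ref{3.10}), one can verify that, for the Schr\"odinger kinetic energy $T(p)=p^2/(2m)$, the amplitude factor $T'(T^{-1}(E-V(x)))$ equals $p(x)/m$ with $p(x)=\sqrt{2m(E-V(x))}$, so the prefactor is proportional to $1/\sqrt{p(x)}$ and the phase integrand $T^{-1}(E-V(x))$ is the classical momentum itself. A minimal numerical sketch (Python; parameter values are arbitrary):

```python
import math

# For T(p) = p^2/(2m): T^{-1}(y) = sqrt(2*m*y) for p >= 0, T'(p) = p/m.
# Check that the WKBJ prefactor and phase of Eq. (3.10) reduce to the
# textbook 1/sqrt(p(x)) and (1/hbar)*Int p(y) dy, with
# p(x) = sqrt(2*m*(E - V(x))).  Below, y stands for E - V(x) > 0.
def T_inv(y, m):
    return math.sqrt(2.0 * m * y)

def T_prime(p, m):
    return p / m

for m, y in [(1.0, 0.5), (0.3, 2.0), (5.0, 0.01)]:
    p_cl = math.sqrt(2.0 * m * y)                 # classical momentum
    assert abs(T_inv(y, m) - p_cl) < 1e-12        # phase integrand = p(x)
    assert abs(T_prime(T_inv(y, m), m) - p_cl / m) < 1e-12  # amplitude
print("non-relativistic reduction checked")
```

This confirms numerically the reduction to the standard WKBJ form stated below.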
The usual form is recovered for a non-relativistic kinematics. In the case of bound states, the wave-function decays exponentially outside the TP. Inside, a calculation similar to the non-relativistic one \cite{flug99,maha09} gives \begin{equation} \label{3.14} \psi_{\textrm{WKBJ}}(x) = \frac{D}{\sqrt{T'\left(T^{-1}\left(E-V(x)\right)\right)}} \sin\left( \frac{1}{\hbar} \int_x^b T^{-1}\left(E-V(y)\right) dy + \beta \right). \end{equation} The normalization constant $D$ and phase angle $\beta$ are determined by matching this eigenfunction onto the evanescent wave-functions outside the TP. This procedure is not trivial because the WKBJ eigenfunction is a poor approximation to the actual eigenfunction near the TP. This problem has been solved by Langer in the non-relativistic case by using an explicit solution of the Schr\"odinger equation near the TP \cite{lang37}. In the case of a non-standard kinetic energy, the point must be reconsidered. But, very close to the TP, the particle is very slow and the kinetic energy can be replaced by the expansion~(\ref{Texp}). The general Hamiltonian~(\ref{TpVx}) then reduces to a Schr\"odinger Hamiltonian. Equation~(\ref{tp0}) shows that the presence of a rest energy does not change the dynamics. We can deduce that the result of Langer is still valid and that $\beta=\pi/4$. Computations are performed in the position space with $p=-i\hbar\, d/dx$, but it is equivalent to work in the momentum space with $x=i\hbar\, d/dp$. There are no relevant differences between the results obtained by the two procedures, since the Fourier transform of the wave-function obtained by the WKBJ method in the position space is equal to the wave-function obtained by the WKBJ method in the momentum space up to a term $\textrm{O}(\alpha^2)$. This can be shown by a procedure which is similar to the one presented in \cite{gold60} for a non-relativistic kinematics. 
The quantization of the energy is obtained from the constraint that the wave-function~(\ref{3.14}) can be defined by integrating from one TP or from the other. The calculation is not different from the non-relativistic case \cite{flug99,maha09}, and the result is \begin{equation} \label{3.17} \int_{a}^{b} T^{-1}\left(E-V(x)\right) dx = \int_{a}^{b} t^{-1}\left(E_B-V(x)\right) dx = \pi\,\hbar\left(n + \frac{1}{2}\right), \end{equation} where the quantum number $n$ is a non-negative integer. The limits of integration depend on $E$ via relation~(\ref{tp}). In principle, (\ref{3.3a}) demands a smooth behaviour of the kinetic operator, a requirement much more constraining than condition~(D). Nevertheless, once the WKBJ approximation is computed, it appears that non-smooth terms beyond the second derivative in the expansion of $T$ probably degrade the quality of the approximation only slightly. This will be checked with an example in Sec.~\ref{sec:examples}. It is well known that the classical limit is reached for large values of the quantum number $n$, that is to say for large values of the excitation energy. In order to make apparent the role of $n$ in these cases, $E_B-V(x)$ is replaced by a constant, denoted here $E^*$. Then, (\ref{3.17}) reduces to \begin{equation} \label{3.17b} \int_{a}^{b} t^{-1}\left(E^*\right) dx \approx \pi\,\hbar \, n, \end{equation} that is to say \begin{equation} \label{3.17c} \frac{1}{\pi\, n} \approx \frac{\hbar}{t^{-1}\left(E^*\right)\, d}. \end{equation} In the non-relativistic case, the right-hand side of (\ref{3.17c}) is the number $\alpha$ defined by (\ref{alpha}), provided $E^*$ is replaced by $|E_B|$. A natural definition of the parameter $\alpha$ for all types of kinematics is then \begin{equation} \label{alphans} \alpha = \frac{\hbar}{t^{-1}\left(E^*\right)\, d} = \frac{\hbar}{T^{-1}\left(T(0)+E^*\right)\, d}, \end{equation} where $E^*$ is an estimate of $E_B-V(x)$, and where $d$ depends also on $E_B$ by (\ref{tp}). 
If the potential has no singularity, $E^*$ can be chosen as $E_B-\min V(x)$ for instance. But an accurate computation of $E^*$ is not necessary to validate the method, since solutions~(\ref{3.14}) and (\ref{3.17}) do not explicitly depend on $\alpha$. It is just sufficient to be sure that a small parameter $\alpha$ can be defined for the system under study. The semi-classical regime is then reached when $\alpha \ll 1$, that is to say $n \gg 1$ since $\alpha\approx (\pi\, n)^{-1}$. Within these conditions, the variable argument of the sine function in (\ref{3.14}) can be approximated by \begin{equation} \label{argsin} \alpha^{-1} \frac{b-x}{d} \approx n\,\pi\frac{b-x}{d}, \end{equation} and the sine function oscillates a great number of times between the two TP. The approximate quantum probability distribution is given by $\rho_{\textrm{WKBJ}}(x)=\psi_{\textrm{WKBJ}}^2(x)$ for $a < x < b$. For $x < a$ or $x > b$, it can be assumed that this distribution is vanishing since the wave-function decays exponentially. A classical approximation $\rho_{\textrm{cl WKBJ}}(x)$ for the quantum distribution $\rho_{\textrm{WKBJ}}(x)$ is obtained by replacing the rapidly oscillating square sine function by its average value $1/2$ \cite{sen04,sen06}. Finally, inside the two TP, \begin{equation} \label{rhoa} \rho_{\textrm{cl WKBJ}}(x)=\frac{D^2}{2\,T'\left(T^{-1}\left(E-V(x)\right)\right)}. \end{equation} With proper normalizations, (\ref{rhovx}) and (\ref{rhoa}) are identical. This shows that, for a quite general one-dimensional Hamiltonian, the quantum probability distribution and the classical probability distribution approach each other, in the limit of large quantum excitations, once the rapid oscillations of the quantum density are averaged. 
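As a concrete illustration of this correspondence, the classical distribution (\ref{rhovx}) is easy to evaluate numerically. The sketch below (Python, $\hbar=c=1$; the parameter values are illustrative only) computes the normalized $\rho_{\textrm{cl}}$ for the relativistic kinetic energy $T(p)=\sqrt{p^2+m^2}$ and a linear potential $V(x)=\lambda|x|$:

```python
import math

# rho_cl(x) = (2/tau) / t'(t^{-1}(E_B - V(x))) for T(p) = sqrt(p^2 + m^2)
# and V(x) = lam*|x| (illustrative parameter values, hbar = c = 1).
m, lam, E_B = 0.2, 0.2, 0.8          # binding energy E_B = E - T(0)

def t_inv(y):
    # inverse of t(p) = sqrt(p^2 + m^2) - m, for p >= 0
    return math.sqrt((y + m) ** 2 - m ** 2)

def speed(x):
    # |v(x)| = t'(t^{-1}(E_B - V(x))), with t'(p) = p/sqrt(p^2 + m^2)
    p = t_inv(E_B - lam * abs(x))
    return p / math.sqrt(p ** 2 + m ** 2)

# turning points: V(a) = V(b) = E_B
b = E_B / lam
a = -b

N = 20001
xs = [a + (b - a) * (k + 1) / (N + 1) for k in range(N)]   # open ]a, b[
rho = [1.0 / speed(x) for x in xs]

def trapz(ys, xs):
    # composite trapezoidal rule
    return sum(0.5 * (ys[k] + ys[k + 1]) * (xs[k + 1] - xs[k])
               for k in range(len(xs) - 1))

norm = trapz(rho, xs)
rho = [r / norm for r in rho]        # fixes the global factor 2/tau
print(round(trapz(rho, xs), 6))      # normalization check: 1.0
```

The distribution diverges (integrably) at the TP and is flattest where the particle is fastest, as expected from (\ref{rhox}).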
\section{Examples} \label{sec:examples} In order to test the validity of the WKBJ approximation for a non-usual kinematics, eigenstates of the relativistic Hamiltonian, written in natural units ($\hbar=c=1$), \begin{equation} \label{hsr} H=\sqrt{p^2+m^2}+\lambda\,|x| \end{equation} have been computed. Such Hamiltonians (in 3D space) are used to study hadrons in constituent quark models \cite{isgu85,buis12}. The numerical solutions are computed with the Fourier grid Hamiltonian (FGH) method \cite{mars89,sema00}, which is particularly well suited for Hamiltonians with non-standard kinetic parts \cite{sema98,sema12}. The quantum probability distribution $\rho_{\textrm{FGH}}(x)$ obtained by this method has been compared with the corresponding distributions $\rho_{\textrm{WKBJ}}(x)$ and $\rho_{\textrm{cl}}(x)$. $\rho_{\textrm{FGH}}(x)$ is normalized to unity on $]-\infty,+\infty[$, while $\rho_{\textrm{WKBJ}}(x)$ and $\rho_{\textrm{cl}}(x)$ are normalized to unity on $]a,b[$. For this Hamiltonian, the integrals necessary for the computation of $\rho_{\textrm{WKBJ}}(x)$ are analytical, but they are not written here because of their complicated structure. It can be seen in Fig.~\ref{fig1} that the WKBJ approximation is quite good, even for values of $n$ as low as 5. With $m=\lambda=0.2$, the relative errors between the eigenvalues computed by the FGH method and the WKBJ approximation are respectively around $10^{-1}$, $10^{-3}$ and $10^{-4}$, for $n=0$, 5 and 15. These results are similar for other computations performed with different finite values of the dimensionless ratio $m/\sqrt{\lambda}$.
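To illustrate how such eigenvalues can be obtained in practice, here is a minimal sketch (our own, not the authors' code; the function names and the bisection tolerance are arbitrary choices) that solves the quantization condition (\ref{3.17}) for Hamiltonian (\ref{hsr}). With $T(p)=\sqrt{p^2+m^2}$ one has $T^{-1}(y)=\sqrt{y^2-m^2}$, the turning points are $x=\pm(E-m)/\lambda$, and the phase integral is elementary after the substitution $u=E-\lambda x$:

```python
import math

def phase_integral(E, m, lam):
    """2 * Int_0^b sqrt((E - lam*x)^2 - m^2) dx with b = (E - m)/lam,
    evaluated in closed form via u = E - lam*x (units hbar = c = 1)."""
    s = math.sqrt(E * E - m * m)
    if m == 0.0:
        return E * E / lam
    return (E * s - m * m * math.log((E + s) / m)) / lam

def wkbj_energy(n, m, lam, tol=1e-12):
    """Solve phase_integral(E) = pi*(n + 1/2) for E by bisection."""
    target = math.pi * (n + 0.5)
    lo, hi = m, m + 1.0
    while phase_integral(hi, m, lam) < target:  # bracket the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phase_integral(mid, m, lam) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $m=0$ the condition gives the closed form $E=\sqrt{\pi\lambda(n+1/2)}$, which provides a direct sanity check of the solver.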
\begin{figure}[htb] \includegraphics[width=5cm,height=3.27cm]{lineaire_relativiste_m02_n0.png} \includegraphics[width=5cm,height=3.27cm]{lineaire_relativiste_m02_n5.png} \includegraphics[width=5cm,height=3.30cm]{lineaire_relativiste_m02_n15.png} \caption{Probability distributions $\rho_{\textrm{FGH}}(x)$ (solid blue), $\rho_{\textrm{WKBJ}}(x)$ (dashed green) and $\rho_{\textrm{cl}}(x)$ (bold solid orange) for Hamiltonian~(\ref{hsr}) with $m=\lambda=0.2$. From left to right: $n=0$, 5, 15.} \label{fig1} \end{figure} The case $m=0$ in Hamiltonian~(\ref{hsr}) is special since $T(p)=|p|$ is not differentiable at $p=0$. Moreover, $|v(x)|=1$ and $\rho_{\textrm{cl}}(x)=1/d$. Some results are presented in Fig.~\ref{fig2}. Surprisingly, the WKBJ results are reasonable, as is the classical probability distribution, even though expansion~(\ref{3.3a}) is not relevant. In this case, the relative errors between the eigenvalues computed by the FGH method and the WKBJ approximation are respectively around $2\times 10^{-1}$, $3\times 10^{-3}$ and $3\times 10^{-4}$, for $n=0$, 5 and 15. This shows that this kind of approximation is probably quite robust. \begin{figure}[htb] \includegraphics[width=5cm,height=3.26cm]{lineaire_ultrarelativiste_n0.png} \includegraphics[width=5cm,height=3.26cm]{lineaire_ultrarelativiste_n5.png} \includegraphics[width=5cm,height=3.24cm]{lineaire_ultrarelativiste_n15.png} \caption{Probability distributions $\rho_{\textrm{FGH}}(x)$ (solid blue), $\rho_{\textrm{WKBJ}}(x)$ (dashed green) and $\rho_{\textrm{cl}}(x)$ (bold solid orange) for Hamiltonian~(\ref{hsr}) with $m=0$ ($\lambda=0.2$ to fix the scale). From left to right: $n=0$, 5, 15.} \label{fig2} \end{figure} \section{Concluding remarks} \label{sec:conclusion} The WKBJ method \cite{flug99,maha09} yields a semi-classical solution of a one-dimensional Schr\"odinger equation.
The approximate quantum probability distribution obtained is a good approximation of the genuine solution in the limit of large quantum excitations. With an appropriate averaging procedure, this WKBJ distribution reduces to the classical probability distribution which can be defined for the corresponding classical systems \cite{robi95,yode06}. In this paper, all these results are generalized to one-dimensional Hamiltonians with an arbitrary kinetic energy. Only one-dimensional Hamiltonians are considered here, but the results obtained can probably be generalized to systems living in spaces with more than one dimension, as is the case for non-relativistic Hamiltonians \cite{pete87,sen05,mart13a,mart13b}.
\section{Introduction} Riemannian manifolds of nonpositive sectional curvature are a class of manifolds featuring a rich interplay between their geometry, their topology, and their dynamics. In the broader setting of geodesic metric spaces, we have the notion of a locally CAT(0)-metric. These provide a metric space analogue of nonpositively curved Riemannian manifolds, and many classic results concerning Riemannian manifolds of nonpositive sectional curvature have now been shown to hold more generally for locally CAT(0)-spaces. We are interested in understanding the difference, within the class of closed manifolds, between (1) supporting a Riemannian metric of nonpositive sectional curvature, and (2) supporting a locally CAT(0) metric. A closed topological manifold equipped with a locally CAT(0)-metric will be called a {\it locally CAT(0)-manifold}. In low dimensions, there is no difference between these two classes. In two dimensions, this follows easily from the classification of surfaces, while in three dimensions, this follows from Thurston's geometrization theorem (recently established by Perelman). In contrast, Davis and Januszkiewicz \cite{DJ} have constructed examples, in all dimensions $\geq 5$, of locally CAT(0)-manifolds which do {\it not} support any Riemannian metric of nonpositive sectional curvature. In this paper, we deal with the remaining open case. \vskip 10pt \noindent {\bf Main Theorem:} There exists a 4-dimensional closed manifold $M$ with the following four properties: \begin{enumerate} \item $M$ supports a locally CAT(0)-metric, \item $M$ is smoothable, and $\tilde M$ is diffeomorphic to $\mathbb R ^4$, \item $\pi_1(M)$ is {\bf not} isomorphic to the fundamental group of any Riemannian manifold of nonpositive sectional curvature, \item if $K$ is any locally CAT(0)-manifold, then $M\times K$ is a locally CAT(0)-manifold which does not support any Riemannian metric of nonpositive sectional curvature.
\end{enumerate} \vskip 15pt Let us briefly outline the idea behind the proof of our main result. First of all, we introduce the notion of a triangulation of $S^3$ to have {\it isolated squares}. Any such triangulation has a well-defined {\it type}, which is the isotopy class of an associated link in $S^3$. In Section 3, we provide a proof that any given link in $S^3$ can be realized as the type of a suitable flag triangulation of $S^3$ with isolated squares. In Section 4, we start with a flag triangulation $L$ of $S^3$ with isolated squares, whose type is a nontrivial knot, and use it to construct the desired $4$-manifold. This is done by considering the right angled Coxeter group $\Gamma_{L}$ associated to the triangulation $L$, and defining $M$ to be the quotient of the corresponding Davis complex by a torsion free finite index subgroup $\Gamma \leq \Gamma_L$. Standard properties of the triangulation $L$ ensure that $M$ is smoothable, and that the Davis complex is CAT(0) and diffeomorphic to $\mathbb R^4$. The isolated squares condition on the flag triangulation $L$ ensures the Davis complex satisfies Hruska's {\it isolated flats} condition. The fact that the type of $L$ is a nontrivial knot ensures that the Davis complex contains a periodic $2$-dimensional flat $F$ which is {\it knotted at infinity}. But now if $M$ supported a Riemannian metric $g$ of nonpositive sectional curvature, the flat torus theorem ensures that one could find a corresponding flat $F^\prime$ (in the $g$-metric) which is $\Gamma$-equivariantly homotopic to $F$, and the isolated flats condition then forces $F^\prime$ to also be knotted at infinity. However, in the Riemannian setting, it is easy to see that a codimension two flat must be unknotted at infinity, yielding a contradiction. \vskip 10pt \centerline{\bf Acknowledgments} \vskip 5pt The first two authors were partially supported by the NSF, under grant DMS-50706259. 
The last author was partially supported by the NSF, under grant DMS-0906483, and by an Alfred P. Sloan research fellowship. \vskip 10pt \section{Previously known obstructions.} Our Main Theorem provides a new obstruction to the problem of finding a {\it Riemannian smoothing} on a manifold $M$ supporting a locally CAT(0)-metric. More precisely, we say that such a manifold supports a Riemannian smoothing provided one can find a smooth Riemannian manifold $(N,g)$, with $g$ a Riemannian metric of nonpositive sectional curvature, and a homeomorphism $f: N \rightarrow M$. In this section, we briefly summarize the known obstructions to Riemannian smoothing. \subsection{Example: no smooth structure.} Given a Riemannian smoothing $f: N \rightarrow M$ of a locally CAT(0)-manifold $M$, one can forget the Riemannian structure and simply view $N$ as a smooth manifold. This immediately tells us that, if $M$ has a Riemannian smoothing, then it must be homeomorphic to a smooth manifold, i.e. the topological manifold $M$ must be {\it smoothable}. The first examples of aspherical topological manifolds not homotopy equivalent to smooth manifolds were constructed (in all dimensions $\geq 13$) by Davis and Hausmann \cite{DH} by using the reflection group trick. Non-smoothable aspherical PL-manifolds were constructed (in all dimensions $\geq 8$) in the same paper. For the sake of completeness, we now sketch out a (slightly different) construction of a closed 8-dimensional locally CAT(-1)-manifold $M^8$ which is not homotopy equivalent to any smooth 8-manifold. Recall that Milnor constructed \cite{Mi} an 8-dimensional PL-manifold $N^8$ which is not homotopy equivalent to any smooth 8-manifold. Milnor's example had the property that the second rational Pontrjagin class $p_2(N^8)$ was {\it not} an integral class; hence $N^8$ cannot be homeomorphic to a smooth manifold. Let us take $N^8$ equipped with a PL-triangulation.
Charney and Davis \cite{CD} developed a {\it strict hyperbolization} process, which inputs a triangulated manifold $M$ and outputs a piecewise hyperbolic manifold $h(M)$ equipped with a locally CAT(-1)-metric. Furthermore, they showed that the hyperbolization process preserves rational Pontrjagin classes. In particular, applying their strict hyperbolization process to $N^8$, we obtain a locally CAT(-1)-manifold $h(N^8)$, having the property that $p_2(h(N^8))$ fails to be integral, and hence forcing $h(N^8)$ to be non-smoothable. Finally, we note that the Borel Conjecture is known to hold for this class of aspherical manifolds (see \cite{BL}), so if $h(N^8)$ was homotopy equivalent to some smooth manifold, it would in fact be homeomorphic to the smooth manifold (contradicting non-smoothability). Similar examples can be constructed in all dimensions of the form $n=4k$, with $k\geq 2$ (see also the discussion in \cite[Section 5]{BLW}). \subsection{Example: no PL structure.}\label{ss:PL} In a similar vein, it is also possible to construct (topological) locally CAT(0)-manifolds that do not even support any PL-structures. We recall such an example from \cite[Section 5a]{DJ}. We let $M^4(E_8)$ denote the $E_8$ homology manifold. Recall that this space is constructed by first plumbing together eight copies of the tangent disk bundle to $S^2$, according to the pattern given by the $E_8$ Dynkin diagram. This results in a smooth $4$-manifold with boundary $N^4$, whose boundary $\partial N^4$ is homeomorphic to Poincar\'e's homology $3$-sphere. Coning off the boundary gives the space $M^4(E_8)$, a simply connected homology manifold of signature $8$ with one singular point. Taking a triangulation of $N^4$, one can extend it (by coning on the boundary) to a triangulation of $M^4(E_8)$, which we can then hyperbolize to obtain a space $H^4$. The space $H^4$ is now a homology $4$-manifold of signature $8$ with one singular point, and comes equipped with a locally CAT(0)-metric. 
It follows from Edwards' Double Suspension Theorem that $H^4\times T^k$ is a topological $(4+k)$-manifold (where $T^k$ denotes the $k$-torus and $k\geq 1$). The manifolds $H^4 \times T^k$ come equipped with a (product) locally CAT(0)-metric, but it follows from the arguments in \cite[Section 5a]{DJ} that they do not admit a PL structure. Thus, in each dimension $\geq 5$ there is a locally CAT(0)-manifold with no PL structure. \subsection{Example: universal cover distinct from $\mathbb R ^n$.} For a third family of examples, we recall that the classic Cartan-Hadamard theorem asserts that the universal cover of a Riemannian manifold of nonpositive sectional curvature must be diffeomorphic to $\mathbb R^n$. In particular, a CAT(0)-manifold $M$ with the property that $\tilde M$ is {\it not} diffeomorphic to $\mathbb R^n$ cannot support a Riemannian smoothing. Davis and Januszkiewicz constructed (see \cite[Thm. 5b.1]{DJ}) examples of locally CAT(0)-manifolds $M^n$ (for $n\geq 5$), with the property that their universal covers $\tilde M^n$ are {\it not} simply connected at infinity (and hence, not homeomorphic to $\mathbb R^n$). Further examples of this type are described in \cite{adg}. \subsection{Example: boundary at infinity distinct from $S^{n-1}$.} In the previous three families of examples, {\it topological} properties (smoothability, PL-smoothings, topology of universal cover) were used to obstruct the existence of a Riemannian metric of nonpositive sectional curvature. The next family of examples has obstructions that arise from the {\it large scale geometry} of the universal covers. Associated to a CAT(0)-space $X$, we have a topological space called the {\it boundary at infinity} $\partial ^\infty X$. If $X$ is Gromov hyperbolic, then the homeomorphism type of $\partial^\infty X$ is a quasi-isometry invariant of $X$. In particular, if $X$ is the universal cover of a locally CAT(-1)-space $Y$, then $\partial^\infty X$ depends only on $\pi_1(Y)$.
When $X$ is the universal cover of an $n$-dimensional closed Riemannian manifold of nonpositive sectional curvature, the corresponding $\partial ^\infty X$ is homeomorphic to the standard sphere $S^{n-1}$. Now consider the locally CAT(-1) $5$-manifold $M^5$ obtained by applying a strict hyperbolization procedure (from \cite{CD}) to the double suspension of a triangulation of Poincar\'e's homology $3$-sphere. Denote by $X^5$ its universal cover, and observe that, although $\partial^\infty X^5$ has the homotopy type of $S^4$, it is proved in \cite[Section 5c]{DJ} that $\partial ^\infty X^5$ is {\it not} locally simply connected. So $\partial ^\infty X^5$ cannot be homeomorphic to $S^4$ (in fact, it is not even an ANR). Thus, $M^5$ is not homotopy equivalent to a Riemannian $5$-manifold of strictly negative sectional curvature. The same argument applies to a strict hyperbolization of the manifold $M^4(E_8)\times S^1$ discussed in Section \ref{ss:PL}. There are similar examples in higher dimensions $n>5$ obtained by strictly hyperbolizing double suspensions of homology $(n-2)$-spheres. Thus, in each dimension $n \geq 5$ there are closed locally CAT(-1) manifolds $M^n$ with universal cover homeomorphic to $\mathbb R^n$ but which are not homotopy equivalent to any Riemannian $n$-manifold of strictly negative sectional curvature. \subsection{Example: stability under products.} Finally, we point out one last method for producing manifolds which do not have Riemannian smoothings: \begin{Prop} Let $M^n$ be a locally CAT(0)-manifold which does not support any Riemannian smoothing, and assume that $n\geq 5$. Then for $K$ an arbitrary locally CAT(0)-manifold, the product $M\times K$ is a locally CAT(0)-manifold which does not support any Riemannian smoothing. \end{Prop} \begin{proof} To see this, we first note that the product of the locally CAT(0)-metrics on $M$ and $K$ provides a locally CAT(0)-metric on $M\times K$.
Now assume that $M\times K$ supported a Riemannian smoothing $f: N\rightarrow M\times K$, and let $g$ be the associated Riemannian metric of nonpositive sectional curvature on $N$. Since $\pi_1(N) \cong \pi_1(M) \times \pi_1(K)$, the classical splitting theorems (see Gromoll and Wolf \cite{GW}, Lawson and Yau \cite{LY}, and Schroeder \cite{Sc}) imply that we have a corresponding {\it geometric} splitting $(N, g) \cong (M^\prime, g_1) \times (K^\prime, g_2)$, having the property that: \begin{itemize} \item each factor can be identified with a totally geodesic submanifold of $(N, g)$, \item the factors satisfy $\pi_1(M)\cong \pi_1(M^\prime)$, and $\pi_1(K) \cong \pi_1(K^\prime)$. \end{itemize} So we see that $M^\prime$ is a Riemannian manifold of nonpositive sectional curvature, of dimension $\geq 5$, and satisfying $\pi_1(M) \cong \pi_1(M^\prime)$. Since the Borel conjecture is known to hold for this class of manifolds (see Farrell and Jones \cite{FJ}), there exists a homeomorphism $M^\prime \rightarrow M$ realizing the isomorphism of fundamental groups. This provides a Riemannian smoothing of $M$, giving us the desired contradiction. \end{proof} We remark that property (4) in our Main Theorem can be deduced from a virtually identical argument: instead of appealing to the Borel Conjecture to obtain a contradiction, we resort to property (3) in our Main Theorem. \section{Special triangulations of $S^3$.} Recall that a simplicial complex is {\it flag} provided it is determined by its 1-skeleton, i.e. every $k$-tuple of pairwise incident vertices spans a $(k-1)$-simplex $\sigma ^{k-1}$ (for $k\geq 3$). A subcomplex $\Sigma^\prime$ of a simplicial complex $\Sigma$ is {\it full} provided every simplex $\sigma \subset \Sigma$ whose vertices lie in $\Sigma ^\prime$ satisfies $\sigma \subset \Sigma^\prime$.
We will say a cyclically ordered 4-tuple of vertices $(v_1, v_2, v_3, v_4)$ in a simplicial complex forms a {\it square} provided each consecutive pair of vertices determines an edge in the complex, while the pairs $(v_1,v_3)$ and $(v_2,v_4)$ do {\it not} determine an edge. \begin{figure} \label{graph} \begin{center} \includegraphics[width=2in, angle=0]{DJL-Fig1} \caption{Basic triangulation of a triangular prism.} \end{center} \end{figure} \begin{definition} A flag triangulation of $S^3$ is said to have {\em isolated squares} provided no two squares in the triangulation intersect (i.e. each vertex lies in at most one square). For such a triangulation, the collection of squares form a link in $S^3$. We call the isotopy class of this link the {\em type} of the triangulation. \end{definition} In this section, we establish: \begin{Thm} Let $k\subset S^3$ be any prescribed link in the 3-sphere. Then there exists a flag triangulation of $S^3$, with isolated squares, and with type the given link $k$. \end{Thm} We establish this result in several steps, gradually building up the triangulation to have the properties we desire. \vskip 10pt \noindent{\bf Step 1: Triangulating the solid torus.} \vskip 5pt As a first step, we describe a triangulation on a solid torus $\mathbb D^2 \times S^1$. Recall that there is a canonical decomposition of the 3-dimensional cube $[0,1]^3 \subset \mathbb R^3$ into six tetrahedra. This triangulation is determined by the inequalities $0\leq x_{\sigma(1)} \leq x_{\sigma(2)}\leq x_{\sigma (3)}\leq 1$, where $\sigma$ ranges over the six possible permutations of the index set $\{1,2,3\}$. Now if we restrict to the region where $x_1\leq x_2$, we obtain a triangulation of the triangular prism $\Delta ^2 \times [0,1]$ into exactly three tetrahedra. Let us denote by $F, G$ the two square faces of the triangular prism defined via the hyperplanes $x_1=0$ and $x_1=x_2$ respectively. 
The triangulation of the prism cuts each of these squares into two triangles, along the diagonal originating at the origin. We call the {\it bottom} of the prism the triangle corresponding to the intersection with the hyperplane $x_3=0$, and call the {\it top} of the prism the triangle arising from the intersection with the hyperplane $x_3 = 1$. Figure 1 contains an illustration of this decomposition of the triangular prism (drawn to respect the orientation of the ``bottom'' and ``top''). In the picture, the two square sides facing us are $F$ and $G$ respectively. We can now take three copies of the triangular prism, and cyclically identify each $F_i$ to the corresponding $G_{i+1}$. This gives a new triangulation of a triangular prism (with nine tetrahedra), with an inherited notion of ``top'' and ``bottom''. This new triangulation has the following key properties: \begin{itemize} \item there exists a unique edge $e$ of the triangulation joining the center of the bottom triangle to the center of the top triangle, \item the center of the bottom triangle is adjacent to {\it every} vertex in the triangulation, and \item aside from the center of the bottom triangle, the center of the top triangle is adjacent to {\it no other} vertices in the bottom of the prism. \end{itemize} We will call a copy of this canonical triangulation of the triangular prism a {\it block}. Fixing an identification of $\mathbb D^2$ with the base of the triangular prism, we can think of a block as a triangulation of $\mathbb D^2\times [0,1]$. To obtain the desired triangulation of the solid torus $\mathbb D^2\times S^1$, we ``stack'' four blocks together. More precisely, we take four blocks and cyclically identify the top of each block with the bottom of the next block. This gives us a triangulation of the solid torus $\mathbb D^2 \times S^1$ into thirty-six tetrahedra. We say blocks are {\it adjacent} or {\it opposite}, according to whether they share a vertex or not. 
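The tetrahedron counts in this construction can be checked mechanically. The sketch below (our own illustration, not part of the paper) verifies that the cube decomposes into $3!=6$ order-type tetrahedra, that the restriction $x_1\leq x_2$ keeps 3 of them (the triangular prism), and hence that a block has $3\times 3=9$ tetrahedra and the solid torus $4\times 9=36$:

```python
import itertools
import random

# The six tetrahedra of the cube are the order regions
# 0 <= x_{s(1)} <= x_{s(2)} <= x_{s(3)} <= 1, one per permutation s.
perms = list(itertools.permutations(range(3)))
assert len(perms) == 6

# A generic point of the cube lies in exactly one such region.
random.seed(0)
for _ in range(1000):
    p = [random.random() for _ in range(3)]
    hits = sum(1 for s in perms if p[s[0]] <= p[s[1]] <= p[s[2]])
    assert hits == 1

# Restricting to x_1 <= x_2 keeps the chains in which x_1 precedes x_2:
# these are the three tetrahedra of the triangular prism.
prism = [s for s in perms if s.index(0) < s.index(1)]
assert len(prism) == 3

# A block glues three prisms along square faces; a solid torus stacks
# four blocks.
tetrahedra_per_block = 3 * len(prism)        # 9
tetrahedra_per_torus = 4 * tetrahedra_per_block
assert tetrahedra_per_torus == 36
```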
Corresponding to the above properties for the individual blocks, this triangulation of the solid torus satisfies: \begin{itemize} \item the triangulation contains a canonical, unique square having the property that it is entirely contained within the {\it interior} of $\mathbb D^2\times S^1$; the four vertices of this square will be called {\it interior vertices}. \item all the remaining vertices of the triangulation lie on the boundary of $\mathbb D^2\times S^1$, and will be called {\it boundary vertices}. \item every tetrahedron in the triangulation contains at least one interior vertex. \item every interior vertex has the property that, if one looks at all adjacent boundary vertices, these vertices are all contained in a single block (the unique block whose bottom contains the given interior vertex). \end{itemize} We call the unique square in the interior of this triangulation of $\mathbb D^2\times S^1$ the {\it core} of the solid torus. Observe that, out of the thirty-six tetrahedra occurring in the triangulation, exactly twenty-four of them arise as the join of a triangle in $\partial \mathbb D^2\times S^1$ with an interior vertex, while the remaining twelve occur as the join of an edge in $\partial \mathbb D^2\times S^1$ with an edge in the core. \vskip 10pt \noindent {\bf Step 2: Getting squares realizing the link $k$.} \vskip 5pt Next, let us take the desired link $k$, and take pairwise disjoint regular closed neighborhoods $\hat N_i$ of the individual components of the link. Each of these neighborhoods is homeomorphic to a solid torus, and we denote by $N_i \subset \hat N_i$ the slightly smaller solid torus of radius half as large. We proceed to construct a triangulation of $S^3$ as follows: first, within each of the tori $N_i$, we use the triangulation described in Step 1, identifying the components of the link with the cores of the various triangulated solid tori.
Secondly, removing the interiors of all of the $\hat N_i$, we obtain a compact 3-manifold $M$ with boundary $\partial M = \coprod \partial \hat N_i$. Since 3-manifolds are triangulable, we now choose an arbitrary triangulation of this 3-manifold $M$, obtaining a triangulation of $M \cup \coprod N_i \subset S^3$. The closure of the complementary region is a disjoint union of the sets $\hat N_i \setminus N_i$, each of which is topologically a fattened torus $S^1 \times S^1 \times [0,1]$. Furthermore, we are given triangulations $\mathcal T_0, \mathcal T_1$ of the two boundaries $S^1\times S^1 \times \{0\}$, $S^1\times S^1\times \{1\}$ (coming from the triangulations of $\partial N_i$ and $\partial M$ respectively). But any two triangulations of the 2-torus $S^1\times S^1$ have subdivisions which are simplicially isomorphic. Letting $\mathcal T ^\prime$ denote such a triangulation, we assign this triangulation to the level set $S^1\times S^1 \times \{1/2\}$. Finally, we extend the triangulation into the two regions $S^1 \times S^1 \times [0,1/2]$ and $S^1 \times S^1 \times [1/2,1]$ using the following procedure. On each of these two regions, we have a triangulation $\mathcal T_i$ on one of the boundary components, and a subdivision $\mathcal T ^\prime$ of the triangulation on the other boundary component. We proceed to inductively subdivide each of the regions $\sigma \times I$, where $\sigma$ ranges over the simplices of the triangulation $\mathcal T_i$. First of all, we add in edges $\sigma^0\times I$ for each vertex in the triangulation $\mathcal T_i$. Now assuming that we have already triangulated the product $\mathcal T_i^{(k-1)}\times I$ of the $(k-1)$-skeleton of $\mathcal T_i$ with the interval, let us extend the triangulation to $\mathcal T_i^k\times I$.
Given a $k$-simplex $\sigma^k$, we have that the region $\sigma^k \times I$ is topologically a closed $(k+1)$-dimensional ball, with boundary that can be identified with $(\sigma^k \times \{0\}) \coprod (\sigma ^k\times \{1\}) \coprod (\partial \sigma ^k \times I)$. Furthermore, the bottom level consists of a simplex (the original $\sigma ^k \in \mathcal T_i$), the top level consists of a subdivision of the simplex (the subdivision of $\sigma ^k$ inside $\mathcal T^\prime$), and each of the faces has already been triangulated. In other words, we see that we have a topological $\mathbb D^{k+1}$, along with a given triangulation of $\partial \mathbb D^{k+1}$. But it is now easy to extend: just cone the given triangulation on the boundary inwards. Performing this process on each of the $\sigma ^k\times I$ now provides us with a triangulation of the set $\mathcal T_i^{k}\times I$. This results in a triangulation of the 3-sphere with the following properties: \begin{itemize} \item the triangulation contains a collection of squares, whose union realizes the given link $k$, \item for each of the squares, the union of the simplices incident to the square forms a regular neighborhood $\mathbb D^2 \times S^1$, triangulated as in Step 1, and \item all of these regular neighborhoods are pairwise disjoint. \end{itemize} \vskip 5pt \noindent {\bf Step 3: Getting rid of all other squares.} \vskip 5pt At this stage, we have constructed a triangulation of $S^3$, which contains a collection of squares realizing the given link $k$. However, there are still two problematic issues: our triangulation might not be flag, and it might fail the isolated squares condition. The third step is to modify the triangulation in order to ensure these two additional conditions. To fix some notation, we will keep using $N_i$ to denote the regular neighborhood of the squares we are interested in keeping.
Recall that each of these is topologically a solid torus $\mathbb D^2\times S^1$, with triangulation combinatorially isomorphic to the triangulation given in Step 1. We will first modify the given triangulation in the complement of the $N_i$, and subsequently change it within the regions $N_i$. Let us denote by $X$ the closure of the complement of the union of the $N_i$. This is topologically a 3-manifold with boundary, equipped with a triangulation (from the previous two steps). Now the standard method of obtaining a flag triangulation is to take the barycentric subdivision of a given triangulation. But unfortunately, this process creates lots of squares. Recently, Przytycki and \'Swi{\polhk{a}}tkowski \cite{PS}, building on earlier work of Dranishnikov \cite{Dr}, have found a different subdivision process that takes a 3-dimensional simplicial complex and returns a subdivision of the complex that is flag {\it and has no squares}. For an arbitrary simplicial complex $Z$, we will denote by $Z^*$ the simplicial complex obtained by applying this procedure to $Z$. We modify the given triangulation of $S^3$ in two stages: first we modify the triangulation in $X$, by replacing $X$ by $X^*$. Next, we describe the extension of this triangulation into the various components $N_i$. For the original triangulation of each of the $N_i$, we see that the thirty-six tetrahedra are of one of two types: \begin{enumerate} \item[(a)] twenty-four of them are the join of one of the interior vertices with a triangle on $\partial N_i$, and \item[(b)] twelve of them are the join of one of the four edges on the core square with an edge on $\partial N_i$. \end{enumerate} Now the subdivision $X^*$ restricts to a subdivision on each simplex in $\partial N_i$, which changes the simplicial complex $\partial N_i$ into $(\partial N_i)^*$. 
The effect of this subdivision on simplices in $\partial N_i$ is to subdivide each edge in $\partial N_i$ into two, and to replace each original triangle by the subdivision in Figure 2. We extend the subdivision $(\partial N_i)^*$ of $\partial N_i$ to a subdivision $N_i^\prime$ of the original $N_i$ in the most natural way possible: \begin{figure} \label{graph} \begin{center} \includegraphics[width=2in, angle=0]{DJL-Fig2} \caption{Dranishnikov subdivision of triangles.} \end{center} \end{figure} \begin{enumerate} \item[(a)] each tetrahedron in $N_i$ that was a join of an interior vertex with a triangle $\sigma \subset \partial N_i$ gets replaced by the join of the same vertex with $\sigma ^*$ (i.e. we cone the subdivision of $\sigma$ to the interior vertex), subdividing the original tetrahedron into ten new tetrahedra (the cone over Figure 2), and \item[(b)] each tetrahedron that was a join of an edge on the square with an edge on $\partial N_i$ gets replaced by two tetrahedra (i.e. the join of the internal edge with each of the two edges obtained from subdividing the boundary edge). \end{enumerate} This changes the original triangulation on each $N_i$ into a new triangulation $N_i^\prime$ with a total of $264$ tetrahedra. We will continue to use the term {\it block} to refer to the subcomplexes of the $N_i^\prime$ that are subdivisions of the original blocks in $N_i$. Observe that, in each of the $N_i$, our subdivision process did not introduce any new vertices in the interior of the $N_i$. As such, the core squares have been left unchanged (and we will still refer to them as the cores of the $N_i^\prime$). Finally, we note that by construction the two subdivisions $N_i^\prime$ of $N_i$ and $X^*$ of $X$ coincide on their common subcomplex $\partial N_i = N_i\cap X$. In particular, they glue together to give a well defined triangulation $\Sigma$ of $S^3$.
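The count of 264 tetrahedra follows from the two subdivision rules (a) and (b) applied to the 36 tetrahedra of Step 1; a quick arithmetic check (ours, for illustration):

```python
# Original triangulation of N_i: 36 tetrahedra (Step 1).
type_a = 24   # joins of an interior vertex with a boundary triangle
type_b = 12   # joins of a core edge with a boundary edge
assert type_a + type_b == 36

# Rule (a): each boundary triangle is cut into 10 triangles (Figure 2),
# so each type (a) tetrahedron is coned into 10 new tetrahedra.
# Rule (b): each boundary edge is cut in two, so each type (b)
# tetrahedron is replaced by 2 new tetrahedra.
total = type_a * 10 + type_b * 2
assert total == 264
```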
\vskip 10pt \noindent {\bf Step 4: Verifying that $\Sigma$ has the desired properties.} \vskip 5pt Note that the triangulation $\Sigma$ contains a copy of $X^*$, as well as copies of each $N_i^\prime$. These partition the triangulation $\Sigma$ into various pieces. \begin{lemma} The complex $X^*$, the individual $N_i^\prime$, and the intersections $X^*\cap N_i^\prime$, are all full subcomplexes of $\Sigma$. \end{lemma} \begin{proof} This follows easily from the following two facts: \begin{itemize} \item each of the intersections $X^* \cap N_i^\prime= (X \cap N_i) ^*$ is a full subcomplex of $X^*$, \item each of the intersections $X^*\cap N_i^\prime = \partial N_i^\prime$ is a full subcomplex of the corresponding $N_i^\prime$. \end{itemize} The first statement is a direct consequence of \cite[Lemma 2.10]{PS}, where it is shown that if $U$ is any subcomplex of $W$, then $U^*$ is a full subcomplex of $W^*$. The second statement is a consequence of the construction of the triangulation $N_i^\prime$, since by construction, each simplex of $N_i^\prime$ which is {\it not} contained in $\partial N_i^\prime$ contains a vertex in the interior of $N_i^\prime$ (and hence in $N_i^\prime - \partial N_i^\prime$). \end{proof} \begin{lemma} The triangulation $N_i^\prime$ is flag. \end{lemma} \begin{proof} Given a collection of pairwise incident vertices $V$, there are three possibilities: $V$ contains either two, one, or no interior vertices of $N_i^\prime$. We consider each of these three cases in turn. If $V$ contains no interior vertices, then $V\subset \partial N_i^\prime$, and since the latter is a full subcomplex of $N_i^\prime$ (see Lemma 4), $V$ is in fact a collection of vertices in $\partial N_i^\prime$ which are pairwise adjacent {\it within} $\partial N_i^\prime$. But recall that $\partial N_i^\prime$ is just the triangulation $(\partial N_i)^*$, hence is flag. This implies that $V$ spans a simplex in $\partial N_i^\prime$.
If $V$ contains one interior vertex $v$, then, by the previous argument, $V-\{v\}$ spans a simplex in $\partial N_i^\prime = (\partial N_i)^*$ which is contained within some (maximal) 2-dimensional simplex $\sigma$ in $(\partial N_i)^*$. Note that, since all vertices in $V-\{v\}$ are adjacent to the interior vertex $v$, they must lie in the block $B$ corresponding to $v$. So the 2-dimensional simplex $\sigma \subset (\partial N_i)^*$ can additionally be chosen to lie within that same block $B$. This means that there exists a 2-dimensional simplex $\tau \in \partial N_i$ with the property that $\sigma$ is one of the 10 triangles in $\tau^*$ (see Figure 2). Finally, observe that $\tau$ must lie within the block $B$, so the join of $\tau$ with the interior vertex $v$ defines a tetrahedron inside the original triangulation $N_i$ (of type (a) in the terminology of Step 3). But recall how the subdivision $(\partial N_i)^*$ of the triangulation $\partial N_i$ was extended into $N_i$: for tetrahedra of type (a), the subdivision on the boundary was coned off to the interior vertex. This implies that the join of $\sigma$ and the vertex $v$ defines a tetrahedron in $N_i^\prime$, and as the set $V$ is a subset of the vertex set of this tetrahedron, we deduce that $V$ spans a simplex in $N_i^\prime$. Finally, if $V$ contains two interior vertices $v,w$, let $B_v, B_w$ denote the corresponding blocks. Since $V-\{v,w\}$ is a collection of vertices in $\partial N_i^\prime= (\partial N_i)^*$ which are adjacent to {\it both} interior vertices, we see that the set $V-\{v,w\}$ must lie within $B_v\cap B_w$, which is a 1-dimensional complex homeomorphic to $S^1$ (subdivided into 6 consecutive edges). Since the vertices in $V-\{v,w\}$ are pairwise adjacent, there is an edge $\sigma$ in $B_v\cap B_w$ whose vertex set contains $V-\{v,w\}$.
This edge is contained in a subdivision of an edge $\tau$ from the original triangulation $\partial N_i$, where $\tau$ is an edge which is common to the two blocks $B_v$ and $B_w$. In particular, the join $\omega*\tau$ of $\tau$ with the edge $\omega$ in the core joining $v$ to $w$ defines a tetrahedron in the original triangulation $N_i$ (of type (b) in the terminology of Step 3). Again, from the way the subdivision $(\partial N_i)^*$ was extended inwards, we recall that the tetrahedron $\omega * \tau$, being of type (b), gets replaced by two tetrahedra $\omega * \sigma$ and $\omega * \sigma^\prime$, where $\tau^* = \sigma \cup \sigma^\prime$. Since the join of $\sigma$ and $\omega$ defines a tetrahedron in $N_i^\prime$, and the set $V$ is a subset of the vertex set of this tetrahedron, we again deduce that $V$ spans a simplex in $N_i^\prime$. \end{proof} \begin{Cor} The triangulation $\Sigma$ is flag. \end{Cor} \begin{proof} Consider a collection of pairwise adjacent vertices of $\Sigma$. If all of the vertices are contained in $X^*$, then the claim follows immediately from the fact that $X^*$ itself is flag (see \cite[Proposition 2.13]{PS}). So we can now assume that at least one of the vertices is contained in the interior of one of the $N_i^\prime$. Note that an interior vertex in one of the $N_i^\prime$ has its closed star entirely contained within the same $N_i^\prime$. So we see that the tuple of pairwise adjacent vertices must be entirely contained within the same subcomplex $N_i^\prime$. But by Lemma 5, we have that each of the subdivided $N_i^\prime$ is itself flag, finishing the proof. \end{proof} \begin{Prop} The only squares in $\Sigma$ are the cores of the various $N_i^\prime$. \end{Prop} \begin{proof} To see this, let us start with an arbitrary square $(v_1, v_2, v_3, v_4)$ inside the triangulation $\Sigma$. Our goal is to show that all four vertices must be interior vertices to a single $N_i^\prime$, which would then force the square to be the core of the corresponding $N_i^\prime$.
To this end, we first note that, if the square does {\it not} contain any interior vertex to any of the $N_i^\prime$, then it is contained entirely within $X^*$. But from Lemma 4, the latter is a full subcomplex of $\Sigma$, and by the result of Przytycki and \'Swi{\polhk{a}}tkowski \cite[Proposition 2.13]{PS}, has no squares. So we may assume that at least one of the vertices is an interior vertex to some $N_i^\prime$. If all the vertices are interior to $N_i^\prime$, then we are done, so by way of contradiction we can also assume that the square contains a vertex which is {\it not} interior to $N_i^\prime$ (we will call such vertices {\it exterior} to $N_i^\prime$). Now the square $(v_1, v_2, v_3, v_4)$ contains exactly four edges, and since it contains vertices which are both interior and exterior to $N_i^\prime$, at least two of the four edges must connect an interior vertex to an exterior vertex (call these {\it intermediate} edges). We now argue that in fact the square must contain {\it exactly} two intermediate edges. Indeed, if there were $\geq 3$ intermediate edges, then one could find a pair of adjacent intermediate edges, which share a common exterior vertex. Up to cyclic relabeling, we may assume that $v_1$ is the exterior vertex. Considering the other endpoints of these two intermediate edges, we see that $v_2, v_4$ are interior vertices for $N_i^\prime$, which are both adjacent to the exterior vertex $v_1 \in \partial N_i^\prime$. But this implies that the two blocks whose bottoms contain $v_2$ and $v_4$ cannot be opposite, so must in fact be adjacent. This forces $v_2$ and $v_4$ to be adjacent vertices in the core of $N_i^\prime$, contradicting the fact that $(v_1, v_2, v_3, v_4)$ forms a square.
So our hypothetical square $(v_1, v_2, v_3, v_4)$ must have exactly two intermediate edges, leaving us with exactly two possibilities: \begin{enumerate} \item the intermediate edges are not adjacent in the square $(v_1, v_2, v_3, v_4)$, \item the intermediate edges are adjacent at an interior vertex of $N_i^\prime$, and the remaining edges are exterior. \end{enumerate} We now explain why each of these possibilities gives rise to a contradiction. \begin{figure} \begin{center} \includegraphics[width=5in, angle=0]{DJL-Fig3} \caption{Triangulation on the boundary of a block.} \label{fig3} \end{center} \end{figure} In case (1), we note that up to cyclic relabeling, we have that $v_1, v_2$ are adjacent vertices in the core of the $N_i^\prime$, while $v_3, v_4$ are adjacent vertices in $\partial N_i^\prime$. We can also assume that the top of the block $B_1$ corresponding to $v_1$ attaches to the bottom of the block $B_2$ corresponding to $v_2$. Now recall that an interior vertex is {\it only adjacent to boundary vertices in its corresponding block}. Since $v_3$ is adjacent to $v_2$, we have that $v_3$ must lie in the block $B_2$. Similarly, the vertex $v_4$ being adjacent to $v_1$ must lie in the block $B_1$. Since $v_3$ and $v_4$ are adjacent, we conclude that one of these two vertices must lie in the common boundary $B_1\cap B_2$. But such a vertex is adjacent to both $v_1$ and $v_2$, violating the square condition for $(v_1, v_2, v_3, v_4)$. It remains to rule out case (2). To this end, we may again assume that $v_1$ is the common interior vertex for the two intermediate edges. Now if $B$ denotes the block corresponding to $v_1$, then we have that the boundary vertices $v_2, v_4$, both being adjacent to $v_1$, must actually lie in $B$. Moreover, for $(v_1, v_2, v_3, v_4)$ to be a square, we must have that $v_3$ is {\bf not} adjacent to $v_1$, and hence $v_3\notin B$.
Since $v_3$ is adjacent to both the vertices $v_2, v_4 \in B$, we see that the latter are either both in the top of $B$ or both in the bottom of $B$, while $v_3$ lies in an adjacent block $B^\prime$. Let us assume that $v_2, v_4$ lie in the top of $B$ (the other case being completely analogous), so that we can view $v_2,v_4$ as lying in the {\it bottom} of the block $B^\prime$. We now have the following situation occurring inside the boundary of the block $B^\prime$: we have two vertices $v_2, v_4$ lying in the bottom of the block, and we have a vertex $v_3$ which does {\bf not} lie in the bottom of $B^\prime$, but which is adjacent to both $v_2$ and $v_4$. Now recall that the triangulation of the block $B^\prime$ is a subdivision (given in Step 3) of a canonical triangulation of the triangular prism. This subdivision takes the boundary of the original triangulation and applies the Dranishnikov subdivision procedure to it: each edge gets subdivided into two, and each triangle gets replaced by the subdivision in Figure 2. The resulting triangulation on $S^1\times [0,1]$ is shown in Figure 3. In the illustration, the left and right sides of the rectangle have to be identified, and the ``bottom'' and ``top'' of the boundary of the block are precisely the bottom and the top of the rectangle. Note that this triangulation actually consists of six original triangles (see Step 1), each of which has been subdivided into 10 triangles as in Figure 2 (see Step 3). Finally, inspecting the triangulation in Figure 3, we observe that there are exactly six vertices which are adjacent to two distinct vertices in the bottom of the block: these are the only possibilities for $v_3$. But for each of these six vertices, we see that the two adjacent vertices in the bottom of the block (i.e. the corresponding $v_2$ and $v_4$) are adjacent to each other, contradicting the fact that $(v_1, v_2,v_3,v_4)$ was a square.
Since we have ruled out all other possibilities, we see that the square cannot contain {\it any} intermediate edges, i.e. the four vertices of our hypothetical square $(v_1,v_2,v_3,v_4)$ must all lie in the interior of a single $N_i^\prime$. This implies that our square must coincide with the core of one of the $N_i^\prime$, as desired. \end{proof} It follows from Corollary 6 that the triangulation $\Sigma$ is flag, and from Proposition 7 that it has isolated squares with type given by the original link $k$. This completes the proof of Theorem 3. \section{Constructing the manifold.} In this section, we establish the Main Theorem. Our goal is to use some of the triangulations of $S^3$ constructed in the previous section to produce a 4-dimensional manifold $M$ with the desired properties. In order to do this, we start by reviewing some properties of the Davis complex for right angled Coxeter groups. \vskip 10pt Recall that one can associate to the 1-skeleton of {\it any} simplicial complex $L$ a corresponding {\it right angled Coxeter group} $\Gamma_L$. This group has one generator $x_i$ of order two for each vertex $v_i$ of the simplicial complex $L$, and a relation $x_ix_j=x_jx_i$ whenever the corresponding vertices $v_i, v_j$ are adjacent in $L$. Let us consider the associated Davis complex $\tilde P_{L}$. This complex is obtained via the following procedure: we first consider the cubical complex $[-1,1]^{V(L)}$, that is to say, the standard cube with dimension equaling the number of vertices in the simplicial complex $L$. Now every face of the cube is an affine translation of $[-1,1]^S$ for some subset $S\subset V(L)$, which we call the {\it type} of the face. Consider the cubical subcomplex $P_L \subset [-1,1]^{V(L)}$ consisting of all faces whose type defines a simplex in $L$, and let $\tilde P_L$ be its universal cover. Observe that the Coxeter group $\Gamma_L$ acts on $P_L$, where each generator $x_i$ acts by reflection on the corresponding coordinate.
The kernel of the resulting morphism $\Gamma_L \rightarrow (\mathbb Z _2)^{|V(L)|}$ coincides with the fundamental group of $P_L$. There is a natural piecewise flat metric on $P_L$, obtained by making each $k$-dimensional face in the cubulation of $P_L$ isometric to $[-1,1]^k \subset \mathbb R ^k$. Properties of the cubical complex $P_L$ are intimately related to properties of the simplicial complex $L$. For instance, we have: \begin{enumerate} \item[(a)] if $L$ is a flag complex, then the piecewise flat metric on $P_L$ is locally CAT(0), \item[(b)] the links of vertices in $P_L$ are canonically simplicially isomorphic to $L$, \item[(c)] if $L$ is the join of two subcomplexes $L_1, L_2$, then the space $P_L$ splits isometrically as a product of $P_{L _1}$ and $P_{L _2}$, \item[(d)] if $L ^\prime$ is a full subcomplex of $L$, then the natural inclusion induces a totally geodesic embedding $P_{L^\prime}\hookrightarrow P_{L}$, \item[(e)] if the geometric realization of $L$ is homeomorphic to an $(n-1)$-dimensional sphere, then $P_L$ is an $n$-dimensional manifold, \item[(f)] if $L$ is a PL-triangulation of $S^{n-1}$ then $P_L$ is a PL-manifold, and $\partial ^\infty \tilde P_L$ is homeomorphic to $S^{n-1}$, \item[(g)] if $L$ is a {\it smooth} triangulation of $S^{n-1}$, then $P_L$ is a smooth manifold. \end{enumerate} These results are discussed in detail in the book \cite{Da1}. \vskip 10pt In the previous section, we showed that given a prescribed link $k$ in $S^3$, one can construct a triangulation of $S^3$ with isolated squares, and with type the given link. Let us apply this result in the special case where $k$ is a nontrivial knot inside $S^3$. Let $L$ denote the corresponding triangulation of $S^3$. Since we are in the special case of dimension $=3$, the triangulation $L$, in addition to being flag, is automatically PL and smooth. We now consider the cubical complex $M:= P_L$ associated to the corresponding right angled Coxeter group $\Gamma_L$. 
In view of our earlier discussion, we have the following: \vskip 5pt \noindent {\bf Fact 1:} The space $M$ is a smooth 4-manifold (from (g) above), and the natural piecewise Euclidean metric on $M$ induced from the cubulation is locally CAT(0) (from (a) above). Furthermore, the boundary at infinity of $\tilde M$ is homeomorphic to $S^3$ (from (f) above), and $\tilde M$ is diffeomorphic to $\mathbb R^4$. \vskip 10pt The very last statement in {\bf Fact 1} can be deduced from work of Stone (see \cite[Theorem 1]{St}), who showed that a metric (piecewise flat) polyhedral complex which is both CAT(0) and a PL-manifold without boundary must in fact be PL-homeomorphic to the appropriate $\mathbb R^n$. Since our $\tilde M$ satisfies these conditions, this ensures that $\tilde M$ is PL-homeomorphic to the standard $\mathbb R^4$. But in the 4-dimensional setting, there is no difference between PL and smooth, so $\tilde M$ is in fact diffeomorphic to $\mathbb R^4$. Our goal is now to show that $M$ has the properties postulated in our Main Theorem. Note that properties (1) and (2) are included in {\bf Fact 1}, while property (4) can be easily deduced from property (3) (see the comment after the proof of Proposition 1). So we are left with establishing property (3): that $\pi_1(M)$ cannot be isomorphic to the fundamental group of any nonpositively curved Riemannian manifold. This last property will be established by looking at the large scale geometry of flats inside the universal cover $\tilde M$. As a starting point, let us describe some flats inside $\tilde M$. Observe that each square inside the triangulation $L$ is a full subcomplex isomorphic to a 4-cycle $\square$. The right angled Coxeter group associated to a 4-cycle is a direct product of two infinite dihedral groups $\Gamma_\square \cong D_\infty \times D_\infty = (\mathbb Z _2 * \mathbb Z_2) \times (\mathbb Z_2 * \mathbb Z_2)$ (see (c) above). 
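Concretely, if the $4$-cycle $\square$ has consecutive vertices $v_1, v_2, v_3, v_4$, then $$\Gamma_\square \;=\; \big\langle\, x_1, x_2, x_3, x_4 \;\big|\; x_i^2=1,\; [x_1,x_2]=[x_2,x_3]=[x_3,x_4]=[x_4,x_1]=1 \,\big\rangle,$$ where the two $D_\infty$-factors are generated by the pairs of {\it opposite} generators $\langle x_1, x_3\rangle$ and $\langle x_2, x_4\rangle$: opposite vertices of $\square$ are non-adjacent, so no relation is imposed within either pair, while every generator of one pair commutes with every generator of the other. Counting faces of $P_\square \subset [-1,1]^4$, a face of type $S$ has exactly $2^{4-|S|}$ parallel translates, giving $2^4=16$ vertices, $4\cdot 2^3=32$ edges (one type for each vertex of $\square$), and $4\cdot 2^2=16$ squares (one type for each edge of $\square$), so that $\chi(P_\square)=16-32+16=0$, consistent with $P_\square$ being a torus.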
The corresponding complex $P_\square$ is isometric to a flat torus (with cubulation given by 16 squares, obtained via the identification $S^1\times S^1 = \square \times \square$). By considering the unique square inside the triangulation $L$, we obtain: \vskip 10pt \noindent {\bf Fact 2:} $M $ contains a totally geodesic $2$-dimensional flat torus $T^2$ (see (d) above). Furthermore, at any vertex $v\in T^2 \subset M$ of the cubulation, we have that the torus $T^2$ is {\it locally knotted} inside the ambient $4$-dimensional manifold $M$ (see (b) above), in that there is a canonical simplicial isomorphism $\big(lk_v(M), lk_v(T^2)\big) \cong (L, k)$ where $k$ is the unique (knotted) square in the triangulation $L$. \vskip 10pt Since the embedding $T^2 \hookrightarrow M$ is totally geodesic, by lifting to the universal cover, we obtain a $2$-dimensional flat $F \hookrightarrow \tilde M$ which is locally knotted at lifts of vertices. This induces an embedding of the corresponding boundaries at infinity, giving us an embedding of $\partial ^\infty F \cong S^1$ into $\partial ^\infty \tilde M \cong S^3$. The rest of our argument will rely on the following ``local-to-global'' assertion: \vskip 10pt \noindent {\bf Assertion:} The embedding $\partial ^\infty F \cong S^1$ into $\partial ^\infty \tilde M \cong S^3$ defines a nontrivial knot in the boundary at infinity of $\tilde M$. \vskip 10pt That is to say, the ``local knottedness'' of the flat propagates to ``global knottedness'' of its boundary at infinity. For the sake of exposition, we delay the proof of the assertion, and first show how we can use it to deduce the Main Theorem. To this end, let us assume that $(M^\prime, g)$ is a closed manifold equipped with a Riemannian metric of nonpositive sectional curvature, and that we are given an isomorphism of fundamental groups $\phi:\Gamma= \pi_1(M) \rightarrow \pi_1(M^\prime)$. From this assumption, we want to work towards a contradiction. 
\vskip 10pt The first step is to use the isomorphism of fundamental groups to obtain an equivariant homeomorphism between the corresponding boundaries at infinity. As a cautionary remark, we recall that given a pair $X_1,X_2$ of CAT(0)-spaces with geometric $G$-actions, a celebrated example of Croke and Kleiner \cite{CK} shows that the corresponding boundaries at infinity $\partial ^\infty X_1$ and $\partial ^\infty X_2$ need {\bf not} be homeomorphic. Even if the boundaries at infinity {\it are} homeomorphic, an example of Buyalo \cite{Bu} shows that the homeomorphism might {\bf not} be equivariant with respect to the $G$-action. In his thesis \cite{H}, Hruska introduced CAT(0)-spaces with {\it isolated flats}. Subsequent work of Hruska and Kleiner \cite{HK} established the following two foundational results for CAT(0)-spaces with isolated flats: \begin{enumerate} \item for a pair $X_1, X_2$ of CAT(0)-spaces with geometric $G$-actions, if $X_1$ has isolated flats, then so does $X_2$ (see \cite[Corollary 4.1.3]{HK}), and there is a $G$-equivariant homeomorphism between $\partial ^\infty X_1$ and $\partial ^\infty X_2$ (see \cite[Theorem 4.1.8]{HK}). \item for a group $G$ acting geometrically on a CAT(0)-space $X$, we have that $X$ has the isolated flats property if and only if $G$ is a relatively hyperbolic group with respect to a collection of virtually abelian subgroups of rank $\geq 2$ (see \cite[Theorem 1.2.1]{HK}). \end{enumerate} As such, if we could establish that our {\it group} $\Gamma$ is a relatively hyperbolic group with respect to a collection of virtually abelian subgroups of rank $\geq 2$, then result (2) above would ensure that our CAT(0)-manifold $\tilde M$ has the isolated flats property. Result (1) above would then give the desired $\Gamma$-equivariant homeomorphism between $\partial ^\infty \tilde M$ and $\partial ^\infty \tilde M^\prime$. 
So our next goal is to establish: \vskip 10pt \noindent {\bf Fact 3:} The group $\Gamma=\pi_1(M)$ is hyperbolic relative to the collection of all virtually abelian subgroups of $\Gamma$ of rank $\geq 2$. \vskip 10pt The notion of a group $G$ being relatively hyperbolic with respect to a collection $\mathcal A$ of subgroups of $G$ was originally suggested by Gromov \cite{Gr}, whose approach was later formalized by Bowditch \cite{Bo}. Alternate formulations appear in Farb's thesis \cite{Fa}, in work of Dru\c tu and Sapir \cite{DrSa}, and in the memoir of Osin \cite{Os}. We refer the reader to the original sources for a detailed definition as well as basic properties of such groups. For our purposes, we merely need to know that the property of a group $G$ being hyperbolic relative to a collection of virtually abelian subgroups of rank $\geq 2$ is inherited by finite index subgroups of $G$. In particular, to show the desired property for $\Gamma$, we see that it is sufficient to establish that our original Coxeter group $\Gamma_L$ is relatively hyperbolic with respect to higher rank virtually abelian subgroups (since $\Gamma \leq \Gamma_L$ is of finite index). Caprace \cite[Cor.~D~(ii)]{Ca} recently provided a criterion for deciding whether a Coxeter group is hyperbolic relative to the collection of its higher rank virtually abelian subgroups. In the right-angled case the condition is that the flag complex $L$ which defines $\Gamma_L$ contains no full subcomplex isomorphic to the suspension $\Sigma K$ of a subcomplex $K$ with 3 vertices which is either \begin{enumerate} \item[(a)] the disjoint union of 3 points, or \item[(b)] the disjoint union of an edge and 1 point. \end{enumerate} In both cases $\Sigma K$ does not have isolated squares: any two non-adjacent vertices of $K$ span a square together with the two suspension points, and in both (a) and (b) there are at least two such pairs of vertices, yielding distinct squares which share the two suspension points.
Since the Coxeter group $\Gamma_L$ with which we are working is associated to a triangulation $L$ of $S^3$ with isolated squares, we conclude that $\Gamma_L$ is relatively hyperbolic with respect to the collection of all virtually abelian subgroups of rank $\geq 2$. Hence, {\bf Fact 3}. \vskip 10pt Applying Hruska and Kleiner's results from \cite{HK}, we conclude that the original $\tilde M$ is a CAT(0)-space with the isolated flats property, and that there exists a $\Gamma$-equivariant homeomorphism from $\partial ^\infty \tilde M$ to $\partial ^\infty \tilde M^\prime$. The nontrivial knot $\partial ^\infty F \cong S^1$ inside $\partial ^\infty \tilde M \cong S^3$ appearing in the {\bf Assertion} can be identified with the limit set of the corresponding subgroup $\pi_1(T^2) \cong \mathbb Z^2 \leq \Gamma = \pi_1(M)$. Since we have an equivariant homeomorphism between the boundaries at infinity of $\tilde M$ and $\tilde M^\prime$, this immediately yields: \vskip 10pt \noindent {\bf Fact 4:} The boundary at infinity $\partial ^\infty \tilde M^\prime$ is homeomorphic to $S^3$, and the limit set of the canonical $\mathbb Z^2$-subgroup in $\Gamma\cong \pi_1(M^\prime)$ defines a nontrivial knot $S^1\hookrightarrow \partial ^\infty \tilde M^\prime\cong S^3$. \vskip 10pt On the other hand, the flat torus theorem implies that there exists a $\mathbb Z^2$-periodic flat $F^\prime \hookrightarrow \tilde M^\prime$, with the property that $\partial ^\infty F^\prime$ coincides with the limit set of the $\mathbb Z^2$. In particular, $\partial ^\infty F^\prime$ defines a nontrivial knot inside $\partial ^\infty \tilde M^\prime$. But taking any point $p\in F^\prime$, we note that geodesic retraction provides a homeomorphism $\rho: \partial ^\infty \tilde M^\prime \rightarrow T_p\tilde M^\prime$.
This homeomorphism takes the knotted subset $\partial ^\infty F^\prime$ lying inside $S^3 \cong \partial ^\infty \tilde M^\prime$ to the {\it unknotted} subset $T_pF^\prime$ lying inside $S^3\cong T_p\tilde M^\prime$. This contradiction allows us to conclude that no such Riemannian manifold $(M^\prime, g)$ can exist. \vskip 20pt So in order to complete the proof of the Main Theorem, we are left with establishing the {\bf Assertion}. We note that a similar result was shown in the setting of CAT(-1)-manifolds by Farrell and Lafont \cite{FL}, the proof of which extends almost verbatim to yield the {\bf Assertion}. For the convenience of the reader, we provide a (slightly different) self-contained argument for the {\bf Assertion}. The basic idea is as follows: picking a vertex $v \in F$, we have a geodesic retraction map $\rho:\partial ^\infty \tilde M\rightarrow lk_v(\tilde M)$. Under this map, we see that $\partial ^\infty F$ maps to the link $lk_v (F)$ inside $lk_v (\tilde M)$. But recall from {\bf Fact 2} that the torus is locally knotted in $\tilde M$, i.e. the pair $\big( lk_v(\tilde M), lk_v(F) \big)$ is simplicially isomorphic to $(S^3, k)$, where $S^3$ is the 3-sphere equipped with the triangulation $L$, and $k$ is the knot in $S^3$ given by the unique square in the triangulation $L$. Now the retraction map $\rho$ is {\it not} a homeomorphism, but is nevertheless ``close enough'' to a homeomorphism for us to use it to compare the pair $\big(\partial ^\infty \tilde M, \partial ^\infty F \big)$ with the knotted pair $\big( lk_v(\tilde M), lk_v(F) \big) \cong (S^3, k)$. More precisely, for any given subset $Z\subset lk_v(\tilde M)\cong S^3$ we denote by $Z_\infty$ the corresponding pre-image $Z_\infty:= \rho^{-1}(Z)$ inside $\partial ^\infty \tilde M$. Then we have: \vskip 10pt \noindent{\bf Fact 5:} \cite[Proposition 2, pg. 627]{FL} For any open set $U \subset lk_v(\tilde M)$, the map $\rho:U_\infty\rightarrow U$ is a proper homotopy equivalence.
Moreover, the map $\rho$ is a {\it near-homeomorphism}, i.e. can be approximated arbitrarily closely by homeomorphisms. \vskip 10pt This is shown by identifying $U_\infty$ with the inverse limit of the sets $\{U_r\}_{r\in \mathbb R^+}$, where each $U_r$ is the pre-image of $U$ under the geodesic projection from the sphere $S_v(r)$ of radius $r$ centered at $v$ to the link at $v$. For $r>s$, the bonding maps $\rho_{r,s}: U_r \rightarrow U_s$ are given by geodesic retraction, and the canonical map $\rho_{\infty, s}$ from $U_\infty = \varprojlim \{U_r\}$ to each individual $U_s$ coincides with the geodesic retraction map. Since the link $lk_v(\tilde M)$ can be identified with $S_v(\epsilon)$, a sphere of small enough radius $\epsilon$ centered at $v$, the map $\rho$ can be identified with the canonical map $\rho_{\infty, \epsilon}$ from $U_\infty = \varprojlim \{U_r\}$ to the corresponding $U_\epsilon = U$. Now by results of Davis and Januszkiewicz \cite[Section 3]{DJ} each of the bonding maps $\rho_{r,s}$ is a cell-like map, i.e. point pre-images have the shape of a point (see Dydak and Segal \cite{DySe} for background on shape theory). Since the shape functor commutes with inverse limits, and since $\rho = \rho_{\infty, \epsilon}$, we see that $\rho$ is also a cell-like map. A result of Edwards \cite[Section 4]{Ed} now implies that $\rho$ is a proper homotopy equivalence, while work of Armentrout \cite{Ar} ensures that $\rho$ is a near homeomorphism. \vskip 10pt Now to show that $\partial ^\infty F$ defines a nontrivial knot in $\partial ^\infty \tilde M$, we need to establish that the complement $\partial ^\infty \tilde M - \partial ^\infty F$ cannot be homeomorphic to $S^1\times \mathbb R^2$. This will follow if we can show that $\pi_1\big( \partial ^\infty \tilde M - \partial ^\infty F \big)$ is a non-abelian group. To do this, let us decompose $\partial ^\infty \tilde M - \partial ^\infty F$ into a union of a suitable pair of open sets.
We start by decomposing $lk_v(\tilde M)$, and will then use the map $\rho$ to ``lift'' this decomposition to $\partial ^\infty \tilde M$. Let $N_1\subset N_2 \subset lk_v(\tilde M)$ be nested open regular neighborhoods of the knot $k=lk_v(F)$, so that $lk_v(F)\subset N_1$. Define open sets in $lk_v(\tilde M)$ by setting $U_2:= N_2$, and $U_1:= lk_v(\tilde M) - \bar N_1$, where $\bar N_1$ denotes the closure of $N_1$. Note that we have homeomorphisms $U_2\cong S^1\times \mathbb D ^2$ and $U_1\cap U_2 \cong N_2 - \bar N_1\cong S^1\times S^1\times \mathbb R$, while $U_1$ is homeomorphic to the complement of the nontrivial knot $k\subset S^3$. So at the level of $\pi_1$, we have that (a) $\pi_1(U_1\cap U_2) \cong \mathbb Z \oplus \mathbb Z$, and (b) $\pi_1(U_1)$ is a non-abelian group. The latter fact follows from work of Papakyriakopoulos \cite{Pa}, who showed that $\pi_1$ of the complement of a nontrivial knot cannot be isomorphic to $\mathbb Z$. But by Alexander duality such a group must have abelianization isomorphic to $\mathbb Z$, hence cannot be abelian. Now corresponding to this decomposition of $lk_v(\tilde M)$, we have an associated open decomposition of $\partial ^\infty \tilde M$ in terms of the corresponding $(U_1)_\infty$, $(U_2)_\infty$. We now define an open decomposition of $\partial ^\infty \tilde M - \partial ^\infty F$ by setting $U:= (U_1)_\infty$ and $V:= (U_2)_\infty - \partial ^\infty F$. The intersection satisfies $U\cap V = (U_1\cap U_2) _\infty$. Applying {\bf Fact 5} to the discussion in the previous paragraph, we obtain that (a) $\pi_1(U\cap V) \cong \mathbb Z\oplus \mathbb Z$, and (b) $\pi_1(U)$ is non-abelian.
From Seifert-Van Kampen, we have: $$ \pi_1\big(\partial ^\infty \tilde M - \partial ^\infty F \big) = \pi_1(U) *_{\pi_1(U\cap V)} \pi_1(V)$$ So to see that $\pi_1\big(\partial ^\infty \tilde M - \partial ^\infty F \big)$ is non-abelian, it suffices to show that the non-abelian group $\pi_1(U)$ injects into the amalgamation. But this will follow from: \vskip 10pt \noindent {\bf Fact 6:} The map $i_*:\pi_1(U\cap V) \rightarrow \pi_1(V)$ induced by inclusion is injective. \vskip 10pt To establish {\bf Fact 6}, we first choose a suitable basis for $\pi_1(U\cap V)\cong \mathbb Z \oplus \mathbb Z$. Recall that the map $\rho$ gives a proper homotopy equivalence between $U\cap V = (U_1\cap U_2)_\infty$ and the space $U_1\cap U_2 = N_2 - \bar N_1$, where $N_1\subset N_2$ are nested open regular neighborhoods of the knot $k$. Since $U_2 = N_2$ can be identified with $S^1\times \mathbb D^2$, where $S^1\times \{0\}$ corresponds to the knot $k$, we choose the generators for $\pi_1(N_2 - \bar N_1)\cong \mathbb Z \oplus \mathbb Z$ to have the following two properties: \begin{enumerate} \item[\bf{(A)}] the generator $\langle 1, 0\rangle$ maps to a generator represented by $[S^1\times \{0\}] \in \pi_1(N_2)\cong \mathbb Z$ under the obvious inclusion, and \item[\bf{(B)}] the generator $\langle 0, 1\rangle$ is chosen so that a representative curve exists which, under the natural inclusion into $N_2\cong S^1\times \mathbb D^2$, projects to a generator for $\pi_1(\mathbb D^2 -\{0\}) \cong \mathbb Z$ in the $\mathbb D^2$-factor, and is null-homotopic in $N_2$. \end{enumerate} We choose the generators of $\pi_1(U\cap V)\cong \mathbb Z \oplus \mathbb Z$ to map to the above two generators of $\pi_1(U_1\cap U_2)$ under the homotopy equivalence $\rho$. To verify that $i_*: \pi_1(U\cap V) \rightarrow \pi_1(V)$ is injective, we first argue that an element $\langle a, b\rangle \in \ker (i_*)$ must satisfy $a=0$. 
Consider the commutative diagram: $$\xymatrix{ \mathbb Z \oplus \mathbb Z \cong \pi_1(U\cap V) \ar[d]^{\rho_*} \ar[r]^-{i_*} & \pi_1(V) \ar[r] & \pi_1\big( (N_2)_\infty\big) \cong \mathbb Z \ar[d]^{\rho_*} \\ \mathbb Z \oplus \mathbb Z \cong \pi_1(U_1\cap U_2) \ar[rr] & & \pi_1( N_2) \cong \mathbb Z \\ } $$ where all horizontal arrows are induced by the obvious inclusions, and the two vertical arrows are the isomorphisms induced by the geodesic retraction maps. By the choice of the basis on $\pi_1(U\cap V)$, we have that $\rho_*(\langle a, b\rangle)= \langle a, b\rangle \in \pi_1(U_1\cap U_2)$, which by property {\bf (A)} maps to $a\in \mathbb Z \cong \pi_1(N_2)$. From the commutativity of the diagram, we conclude that if $\langle a, b\rangle \in \ker (i_*)$, then $a=0$. Our next goal is to show that $b=0$. Given a pair $\eta_1, \eta_2$ of disjoint oriented curves in $S^1\times \mathbb D^2$, with $\eta_1$ null-homotopic, there is a well-defined linking number $L(\eta_1,\eta_2)$. For smooth curves this is obtained by looking at the oriented intersection number of $\eta_2$ with a smooth bounding disk for the curve $\eta_1$, and for continuous curves one uses an approximation by smooth curves. This linking number has the property that if $\eta_1\sim \eta_1^\prime$ (respectively $\eta_2\sim \eta_2^\prime$) are two curves homotopic to each other {\it in the complement of $\eta_2$} (respectively $\eta_1$), then $L(\eta_1, \eta_2^\prime)=L(\eta_1, \eta_2)=L(\eta_1^\prime, \eta_2)$. Now from the choice of basis on $\pi_1(U\cap V)$, along with property {\bf (B)}, we can choose a representative curve $\gamma$ for the element $\langle 0,b\rangle \in \ker (i_*) \subset \pi_1(U\cap V)$ with the property that the image curve $\rho(\gamma) \subset U_1\cap U_2 \subset N_2 \cong S^1\times \mathbb D^2$ projects to $b$ times a generator for $\pi_1(\mathbb D^2- \{0\})$. One can easily check that this forces $L\big(\rho(\gamma) , S^1\times \{0\}\big) = \pm b$. 
Applying {\bf Fact 5}, we can find a homeomorphism $\rho^\prime : (N_2)_\infty \rightarrow N_2$ which is $\epsilon$-close to the map $\rho$. In view of the discussion above, and recalling that the curve $S^1\times \{0\}$ corresponds to $lk_v(F)= \rho(\partial ^\infty F)$, this gives us that: \begin{eqnarray*} \pm b &=& L\big(\rho(\gamma) , S^1\times \{0\}\big) = L\big(\rho(\gamma), \rho(\partial ^\infty F) \big)\\ &=& L\big(\rho^\prime (\gamma), \rho(\partial ^\infty F)\big) = L\big(\rho^\prime (\gamma), \rho^\prime(\partial ^\infty F) \big) = L\big(\gamma, \partial ^\infty F \big) \\ \end{eqnarray*} \vskip -15pt \noindent where for the last equality, we use the fact that $\rho^\prime$ is a homeomorphism, and hence preserves the linking number. But since $\langle 0, b\rangle \in \ker (i_*)$, we also have that $\gamma$ bounds a disk in $V = (N_2)_\infty- \partial ^\infty F$, which implies that $L\big(\gamma, \partial ^\infty F \big)=0$. This now forces $b=0$, completing the proof of {\bf Fact 6}. \vskip 10pt Since the non-abelian group $\pi_1(U)$ injects into $\pi_1\big(\partial ^\infty \tilde M - \partial ^\infty F \big)$, we obtain that $\partial^\infty F \cong S^1$ defines a nontrivial knot in $\partial ^\infty \tilde M \cong S^3$, establishing the {\bf Assertion}, and finishing off the proof of the Main Theorem. \section{Concluding remarks.} Finally, we point out a few interesting questions that come up naturally from this work. As discussed in Section 2.2, locally CAT(0)-manifolds whose universal covers are {\bf not} diffeomorphic to $\mathbb R^n$ cannot support a Riemannian smoothing. In dimensions $n\neq 4$, there is no difference between ``homeomorphic to $\mathbb R^n$'' and ``diffeomorphic to $\mathbb R^n$''. In contrast, it is known that $\mathbb R^4$ supports many distinct smooth structures (in fact, continuum many). 
Moreover, the method used to construct the Davis examples of closed aspherical manifolds whose universal covers are not homeomorphic to $\mathbb R^n$ requires $n\geq 5$. So one can ask: \vskip 10pt \noindent {\bf Question:} Can one find locally CAT(0) closed 4-manifolds $M^4$ with the property that their universal covers $\tilde M^4$ are \begin{enumerate} \item not homeomorphic to $\mathbb R^4$? \item homeomorphic, but not diffeomorphic to $\mathbb R^4$? \end{enumerate} \vskip 10pt \noindent Paul Thurston \cite{Th} proved that $\tilde M^4$ must be homeomorphic to $\mathbb R^4$ if it has at least one ``tame'' point. We remark that the result of Stone \cite{St} tells us that there is no hope of constructing such examples via piecewise flat metric complexes (for their universal covers would then have to be diffeomorphic to the standard $\mathbb R^4$). Moreover, if one asks instead for {\it aspherical} closed $4$-manifolds, we remark that Davis \cite{Da2} has constructed examples where the universal cover is {\it not} homeomorphic to $\mathbb R^4$ (but it is unknown whether those examples support a locally CAT(0)-metric). \vskip 5pt Now concerning the dimension restriction in our construction, we note that this was due to the need for finding triangulations of spheres with the property that the associated Davis complex had the isolated flats condition (in order to obtain a well-defined boundary at infinity). The ``isolated squares'' condition we introduced was designed to ensure that Caprace's criterion was fulfilled. Attempting to generalize this construction to higher dimensions, the difficulty we run into is that, by work of Januszkiewicz and \'Swi{\polhk{a}}tkowski \cite[Section 2.2]{JS} (see also the discussion in \cite[Appendix]{PS}), there is no higher-dimensional analogue of the Dranishnikov-Przytycki-\'Swi{\polhk{a}}tkowski procedure for modifying triangulations in order to get rid of squares. 
Finally, we remark that our construction relies on the presence of flats with specific large scale behavior in order to obstruct Riemannian smoothings. As such, our methods require the presence of zero curvature. If one desires examples which are {\it strictly negatively curved}, we are brought to the following: \vskip 10pt \noindent {\bf Question:} Can one construct examples of smooth, locally CAT(-1)-manifolds $M^n$ with the property that $\partial ^\infty \tilde M$ is homeomorphic to $S^{n-1}$, but which do {\it not} support any Riemannian metric of nonpositive sectional curvature?
package com.gitblit.transport.ssh;

import org.apache.sshd.common.session.Session;
import org.apache.sshd.common.util.net.SshdSocketAddress;
import org.apache.sshd.server.forward.ForwardingFilter;

/**
 * A {@link ForwardingFilter} that rejects all TCP/IP, agent, and X11
 * forwarding requests, effectively disabling SSH tunneling.
 */
public class NonForwardingFilter implements ForwardingFilter {

    @Override
    public boolean canConnect(Type type, SshdSocketAddress address, Session session) {
        return false;
    }

    @Override
    public boolean canForwardAgent(Session session, String requestType) {
        return false;
    }

    @Override
    public boolean canForwardX11(Session session, String requestType) {
        return false;
    }

    @Override
    public boolean canListen(SshdSocketAddress address, Session session) {
        return false;
    }
}
Bioeconomy is an opportunity for rural Europe

From the European Network for Rural Development, Laura Jalasjoki outlines the opportunities in bioeconomy for rural development, as well as the policy context around rural bioeconomy in the EU.

Europe is envisaging a shift to a circular bioeconomy, a model of production and consumption based on the sustainable use of renewable, biological resources. The bioeconomy is expected to remediate global environmental challenges, from climate change to plastic pollution, while sustaining economic growth.

In 2012 the European Union adopted a bioeconomy strategy, updated in 2018 with an Action Plan defining three concrete axes. Bio-based sectors will be strengthened and up-scaled by unlocking investments and markets. The deployment of the bioeconomy to all EU regions will be supported through the development of adapted innovations and strategies. Finally, the EU will ensure that the bioeconomy is genuinely sustainable by developing knowledge and tools to track its ecological impact.1

Both public and private sectors invest massively in research and innovation on bio-based solutions for the food, manufacturing, chemistry and energy sectors. Traditional industries are competing to decarbonise their business models and conquer new markets through bio-based products and services.

Multiple benefits for rural areas

The bioeconomy is closely connected to the production and management of natural resources and, consequently, to rural areas. Many predominantly rural regions in Europe see the shift to a circular bioeconomy as an opportunity to answer their multiple economic and social needs and aspirations.
Essentially, the bioeconomy offers rural regions pathways for diversification and value addition, with possible social, economic and environmental benefits. Growing demand for biomass is an opportunity for primary producers. In addition, the incentives for greater resource efficiency, modernisation and value addition associated with the bioeconomy can contribute to increased farm competitiveness.

New bio-based value chains and commodities create new needs for pre-processing, processing, logistics and numerous supporting services that should take place where biomass is sourced, often meaning rural areas. Circular and cascading models valuing by-products and wastes require collaboration between diverse stakeholders and sectors, creating new business ecosystems and providing opportunities for rural SMEs and micro-entrepreneurs.

Rural areas have a lot to gain through their integration into new value chains established by bio-based industries, as well as through the creation of locally sustained models of circular bioeconomy.

Existing public support

Public funding and political support are needed to enable rural actors to seize these opportunities. Producers, rural SMEs and local communities need specific support to be able to retain a fair share of the added value created by the bioeconomy in rural areas. Through information, advice and targeted funding the playing field can be levelled out for rural actors, ensuring that the benefits of the bioeconomy are widely distributed along the value chain.

Several existing funding and support instruments can be used to this end. The European Agricultural Fund for Rural Development offers a panoply of measures that are relevant for promoting the development of an inclusive rural bioeconomy.
A Thematic Group of the European Network for Rural Development, including representatives from various EU Member States, regions, Local Action Groups and rural organisations, mapped these existing measures and their potential to support rural bioeconomy value chains. The group also recommended programming Rural Development measures in a complementary way with other EU Structural Investment Funds, to upscale public support to rural bioeconomy initiatives.2

Mainstreaming the rural bioeconomy

While funding and tools are available, the way they are targeted depends on the policy framework. The lack of coherent policies and a weak understanding of the bioeconomy at all levels – national, regional and local – can prevent a coordinated development of the rural bioeconomy. It is essential to ensure that the potential benefits of the bioeconomy can be understood in the frame of local specificities and people's concrete needs. Therefore, bioeconomy strategies and their expected results should be defined based on the specific territorial context. Awareness-raising activities should support a shared understanding of the concept and its practical applications.

The EU Bioeconomy Strategy dovetails with numerous EU-level and regional initiatives that support the design of bioeconomy strategies and programmes. The national preparation of the EU's Common Agricultural Policy for the 2021-2027 period includes extensive analysis of strengths, weaknesses, opportunities and threats in the Member States' agricultural and rural development sectors. This work is an excellent opportunity to define the bioeconomy in a rural context.

1. https://ec.europa.eu/research/bioeconomy/pdf/ec_bioeconomy_strategy_2018.pdf#view=fit&pagemode=none
2.
https://enrd.ec.europa.eu/enrd-thematic-work/greening-rural-economy/bioeconomy_en

Laura Jalasjoki
Policy Analyst
European Network for Rural Development (ENRD)
laura.jalasjoki@enrd.eu
https://enrd.ec.europa.eu/
Q: Rails - handling page reloads and bookmarked traffic with an Ajaxish dashboard application

This application is primarily a dashboard. The main view elements are defined in my application.html.erb file. Then these view elements are changed by the relevant controller actions. For example, when a user clicks on the Users link (remote: true), the UsersController will append a partial into the main view pane. The same goes for the JobsController, TasksController, etc. I'm using the pushState function described in this railscast to update the URL based on the relative path of the resource.

Herein lies the problem: how do I handle page reloads and traffic that has bookmarked a particular page? The format.js portion of the respond_to block works great, but I'm uncertain how to factor for format.html traffic. Thanks in advance.
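One common way to make the same URLs serve both the Ajax pane and full page loads is to let each action answer both formats. The sketch below is hypothetical (controller, model and partial names are my assumptions, not taken from the question): format.js keeps appending the partial into the dashboard pane, while format.html renders a full page whose template embeds that same partial inside the application.html.erb layout, which is what reloads and bookmarked URLs receive.

```ruby
# Hypothetical sketch; UsersController, User and the _users partial
# are assumed names. One action, two entry points:
#   - format.js   -> Ajax: index.js.erb injects the _users partial
#                    into the existing dashboard pane
#   - format.html -> reload/bookmark: index.html.erb renders the full
#                    dashboard layout with the same partial embedded
class UsersController < ApplicationController
  def index
    @users = User.all
    respond_to do |format|
      format.js
      format.html
    end
  end
end
```

Here index.html.erb would contain little more than `<%= render 'users' %>` placed in the dashboard's content pane, so both entry points end up rendering the identical partial.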
The 2015 season of Jujuy football will cover all activities relating to professional, regional, national and international football championships contested by clubs from Jujuy, and by the provincial representative teams in their various categories.

Local tournaments

Liga Jujeña de Fútbol Primera A

Reclassification table
All matches of the year are taken into account in this table. The team with the best score in this table will qualify for the 2016 Torneo Federal C as Jujuy 1 and Jujuy 2, respectively, as will the second and third best scores in this table, while the other berths will go to the champion of the 2014/15 Copa Jujuy (Jujuy 1) and the winner of the 2016 Superliga de Jujuy (Jujuy 2).

Standings

Liga Jujeña de Fútbol Primera B

Standings
Results

Liga Jujeña de Fútbol Primera C

Note: the tournament's format has not yet been defined.

See also
Liga Jujeña de Fútbol
Annex: Jujuy clubs in national tournaments

External links
Official website of the Liga Jujeña de Fútbol.
Official website of the Liga Jujeña de Fútbol Córdoba.

Sport in Jujuy
Q: As a young student aspiring to have a career as a programmer, how should I feel about open source software?

Every once in a while on some technology websites a headline like this will pop up: http://www.osor.eu/news/nl-moving-to-open-source-would-save-government-one-to-four-billion

My initial thought about government and organizations moving to open source software is that tons of programmers would lose their jobs and the industry would shrink. At the same time the proliferation and use of open source software seems to be greatly encouraged in many programming communities. Is my thinking that the full embrace of open source software everywhere will hurt the software industry a misconception? If it is not, then why do so many programmers love open source software?

A: Open source economics are pretty strange and often counter-intuitive. Take a product like the Excel spreadsheet (just an example, any big commercial product would do). The business of building and supporting Excel employs some number of employees, say X. X would probably sound like a big number to you and me, but I have no idea what it is. What I do know is that it's a tiny number compared to the number of people making a living supporting Excel in offices, schools, and other institutions and creating tools using Excel. That number is probably X * 10000. So, if you replace Excel with an open source product, you replace X but the X * 10000 is unaffected.

In fact, it's not even that simple. Without the X employees, more paid developers are needed to train, troubleshoot, and modify the open source spreadsheet. Just because there isn't a commercial enterprise behind the product doesn't mean that business won't demand (and pay for) good service. In fact, if your open source product gains enough traction, companies are sometimes willing to support a foundation that guarantees the future development of said product. This is especially true if their business interests are intimately tied to the product.
Think of Mozilla, the Apache Software Foundation, Mono Project, or Canonical.

Finally, open source tools are never a threat when you're trying to sell a service. Think of organizations like Facebook, Twitter, and even Stackoverflow. Ultimately, these organizations don't want to sell you software. They want to create a giant network. Once the network is big enough it creates its own gravity. Using any other "product" wouldn't make any sense because number of participants is what matters most. The underlying technology is just a detail.

A: I'd say read up on the various ideologies behind some of the more prominent OpenSource projects, like Chromium, Mozilla, etc., and then make up your own mind. No one really has a right to tell you how to feel one way or another.

That being said, I embrace OpenSource because I like the idea of transparency in software design. I also like that the community of users has a very real and direct impact on the direction of the project. You don't get that in a closed-source environment.

If I remember correctly, one of the points a Creative Commons supporter made was that by making things "free," you allow people to use the product of your ideas in ways you may have never imagined. This is a video I particularly enjoyed: https://creativecommons.org/videos/a-shared-culture

A: Just because a project is open-source does not mean that programmers are not making a living off of it. Governments and companies donate large amounts of money to foundations like Mozilla and Apache. Also keep in mind, companies have to hire programmers to MODIFY the open source project to customize it for their business. Companies can't use off the shelf tools for everything. This is something that can't be done with closed-source software so it's an example of how you can open up new opportunities for programming.
It's not about eliminating programmers or not paying them, it's about rearranging the structure to hopefully make things more efficient so we have more time for NEW projects.

Another thing to realize about open source is that you don't necessarily have to reveal the source code of your program unless you're going to distribute the program. For programs that a company is going to use for itself in its servers or intracompany needs, it will probably NOT distribute and therefore not have to reveal the source code for the modified program.

A: We will never see a full embrace. We love to try to contribute positively to the world. Besides, participating in an open source project is a great PLUS to your CV.

A: Open source is a threat to packaged software companies whose products are in an area that's popular enough that enough interest is present in the open source community to develop a free alternative. I think one case in point is the significant decrease in prices that both Oracle and Microsoft can charge for database software. MySQL is more than adequate for most projects and essentially free unless the customer wants to pay for support so they'll have someone on the hook if things go sideways.

It is absolutely complementary to the consulting and services businesses because it lowers the total cost of production and increases the productivity of their developers. Companies like it for the same reasons although some insist on finding vendors to provide commercial support so that there's someone to call/blame if it doesn't live up to expectations.

A: Biggest risks...

* Volatility: much of OSS is developed in spurts. There are prominent projects, stable releases in lesser knowns, but because the universe of OSS is so divergent and fragmented in many areas (and ever evolving), it's rare for a project to become mature enough to say that development will be regular, indefinite, or perpetual.
Changing course midstream is costly, even if the product is free, because integration, regression, and hands-on or immediate support is not free, even if available.

* Lack of accountability: there isn't anyone 'invested' so it's hard to seek recourse when bad things happen. There is no warranty. Nothing that even resembles one. The only assurance you generally have is reputation and eventually your own personal experience. Since it was free, the developers can tell you to go firetruck off, and not care one bit about your lack of success, or less importantly if you continue to use their product.

A: Embrace OSS tools and stuff, but don't get obsessed by them (and yes, I've seen a lot of people get obsessed with open source stuff, almost always to their detriment). Pick and choose the best tool(s) for each job, irrespective of whether they're open source or not (mind you, some open source licenses make anything licensed under them useless for commercial work; especially GPL licensed libraries suffer from this).

A: A majority of modern Open Source Software is developed by full-time employees, who are primarily paid for developing it. The rest is developed by those who are paid for doing something that depends on the software they're developing, and collaborative work on it, crowdsourcing support and maintenance, is absolutely mandatory for them.

A: The vast majority of programmers do not get paid per copy distributed of the software they create. They get paid a one-time fee for their time spent. Even companies who employ programmers don't generally make their money per copy sold. With a few notable exceptions like Microsoft and Adobe, software is typically part of their infrastructure, like a company website or internal tools, or given away as part of another product or service. Others have pointed out that most major open source contributors have corporate sponsors.
On the hobbyist side, I find it interesting that people always focus on what is given instead of received. It's like an electrician receiving all the components of a house for free, already assembled except for some wiring improvements he does himself, and people consider him crazy if he spends a few hours one weekend teaching others to make those same improvements for other houses that got the same deal. Sure, he's giving away some of his time and expertise for free, but in return he gets a great product worth several times the work he put in and ensures a healthy ecosystem for the next time he needs something.

A: How should you feel? Good grief, next you'll be asking "how do I talk to women". Open source will never replace but a small portion of the paid SW. For most organizations, the increased cost of moving from what they already know to anything else, even free, is more than the cost of the SW.

A: The main philosophy of free/open source (as I see it) is that when you distribute software you distribute the source along with it. Open source does not necessarily mean free of cost. And certainly in any large project, simply picking an open source solution does not mean you just pick something off a shelf and plug it in and you're done. For any large application, you need to adapt it for your specific needs (can be as simple as setting it up and migrating your existing system to it or as complex as modifying large parts of it) and have a reliable mechanism for support as well as updates/bug fixes with the original software. That means there will always be jobs for programmers. Not to mention, for any major open source project, there are programmers being paid to work on it - mainly by large corporations that have a stake in that project's well-being (they use it or sell software/services for it).
Think about it this way: if there is a mature open source solution to your problem already existing out there and being used by lots of people, does it make sense to sink in large amounts of cash for something that cannot possibly be as mature as that? It is simply more efficient to use it. It's not about preserving jobs (as I said there will always be a need for programmers), but simple business sense, which is even more important when it's tax-payer money. Shunning open source in the name of keeping jobs is just creating an artificial environment, restricting the sharing of technology and IMHO generally bad for the health of the programming community.

A: I would look at some of the Linux contributors to get an idea of how the opensource community is made up of people who get paid to make their code available for free. http://apcmag.com/linux-now-75-corporate.htm

A: For me, open source is also political: it allows programmers to help each other so that the hard work doesn't have to be repeatedly re-crafted or barred from being reused between projects. It also sets a better set of background rules for the project; it's not under the rule of managing: in the end, the result is code of better quality and longevity. Know that the computer science subject is very vast, and there are some pieces of software that are so complex that there are not so many competent people to write them, maintain them, and also add interesting features.

I really find your argument "tons of programmers would lose their jobs and the industry would shrink" very misleading, not only about the software industry, but for the world in general. Remember the web bubble: it's easy to fool non-programming people in a company. Open source is a safe way to put a barrier to that. You also have to think that software is not like many other industries: you deliver something which is volatile, something capitalism can't really work with.
Just imagine if we were able to duplicate physical objects, but you would need to pay for each aspirin pill you duplicate, because the molecule is kind of "owned" by somebody. That would make very little sense. Now think about copying pure, clean water (which will one day become expensive): do you think it's ethically and philosophically correct to make people pay for such a thing?

If programmers lose their jobs because of open source, it's maybe because they are just unable to reproduce the same kind of software quality, so in a way, they deserve to be fired. But that doesn't mean there should be fewer programmers with jobs: it's just a matter of community, teamwork and ethics: companies should pay programmers either to implement solutions for problems using existing software, or else hire more competent programmers who can add features to existing code.

Take iOS, Windows Phone, Symbian and Android: those are 75% doing the same thing, meaning almost the same "wheels". It's just different flavors, but in the end, a lot of money was spent because companies wanted their own ideals to survive. Open source is not just political, it's also about innovation: how do you want to give reality to new ideas if you have to restart everything from scratch over and over?
Anybody with a halfway modern desktop or laptop can, without spending a dime on software licenses, have a very functional OS with easy-to-use GUI and excellent development environment (there's plenty of people who think MS Windows with Visual Studio is better than this sort of environment, and plenty who don't). Therefore, F/OSS helps the software entrepeneur get a business started at low cost. This increases the influence and profit of the software innovator compared to the financial guys, who were the ones that controlled most non-University computer use in the old days. Many of the recent massive success stories would have been harder to get going, perhaps impossible, without F/OSS and its effects. It reduces the opportunity to make a lot of money without corresponding ability, which is arguably a good thing. Developers who aren't very good will find niches in internal software for companies that don't rely on their computer systems as a strategic asset, and those jobs aren't going to be affected much by F/OSS. Developers who are very good but not the entrepeneur type will still do well with companies that sell good-quality non-F/OS commercial software. The money-based market is more effective at providing for lots of needs than the F/OSS reputation market, and much better at producing the dull necessary stuff. There are plenty of vital applications that most F/OSS developers will avoid. So, overall, I think it's healthy for the development community. It allows developers a better shot at becoming wealthy, and serves as an incentive to make good products (and most developers would rather work on good products than bad). It can hurt developers who aren't that good, or work for badly run companies, but it doesn't reduce the demand all that much, and they can likely find jobs anyway.
# Migrating from pdfTeX to LuaTeX: Problems with reproducing output for legacy projects

I'm currently analyzing and evaluating a migration process from pdfTeX to LuaTeX for our current TeX workflow. With LuaTeX being a fork of pdfTeX and packages like luainputenc at hand, it seemed promising to reproduce pdfTeX's output using LuaTeX when sacrificing (some or most of) LuaTeX's new features.

Currently, however, I'm stuck and need your help to decide whether it's worth digging deeper or accepting what I found out. There are two problems I'm facing.

Here's the first problem. Engines: pdfTeX 3.1415926-2.4-1.40.13 and LuaTeX beta-0.70.2-2012052410 (both from TeX Live 2012) with --output-format=pdf. When using UTF-8 input ([utf8x]{inputenc} for pdfTeX, [utf8x]{luainputenc} for LuaTeX) and T1 encoded fonts ([T1]{fontenc}) the output from both engines differs for some non-T1 characters.

MWE:

    \documentclass{article}
    \usepackage[utf8x]{luainputenc}
    \usepackage[T1]{fontenc}
    \renewcommand{\rmdefault}{lmr}
    \begin{document}
    \begin{tabular}{@{}l*{10}{p{7mm}@{}}}
    Some T1 characters: & \# & \$ & \% & Ă & Ň & § & @ & Æ & ß & £ \\[1.5mm]
    Some non-T1 characters: & ‡ & ÿ & ‰ & … & ¶ & ½ & ĩ & µ & | | & | | \\
    \end{tabular}
    \end{document}

Notes: luainputenc is forwarding to inputenc if called from pdfTeX. The output is the same when using an \ifluatex and loading inputenc and luainputenc separately. The spaces between the bars in the last two columns of the second row are supposed to be a Unicode NO-BREAK SPACE (U+00A0) and a THIN SPACE (U+2009).

pdfTeX output: (everything fine here)

LuaTeX output: (notice the last five columns of the second row)

This problem does most certainly exist for other characters as well, these are just some I encountered. Is there a way to get the pdfTeX output from LuaTeX? The problem is the same with different T1 encoded fonts, try mathpazo if you want. Using a different font encoding is not an option here as the task at hand is to migrate TeX engines, not fonts or their encodings. Am I perhaps missing a pdfTeX package that has to be replaced for LuaTeX usage? The lutf8x (notice the leading L) luainputenc package option does not help here either. (Which is no surprise as its purpose is different.) What I'm trying to use here, according to its manual, is the "UTF-8 legacy mode" of luainputenc, i.e. mimicking the behavior of inputenc in pdfTeX by making non-ASCII characters active to determine the correct bit length of characters and so on. Maybe the problem is not the input side (afaik luainputenc's job of translating input bytes into LICR) but the output side (afaik translating LICR into glyph positions of the font used)? Maybe LuaTeX is just translating to Unicode positions, not regarding glyph positions of the font, like the EU2 encoding? Anyway, where does the problem come from? Can it be helped and if so, how? Btw: it works fine with LuaTeX when using an OpenType font with fontspec and EU2 encoding, but that's not the primary goal here.

And here's the second problem. It's closely tied to the first one and can be reproduced using the MWE above. When using --output-format=dvi pdfTeX is producing a DVI file that dvips has no problem with. When using a LuaTeX DVI file dvips stops with something like

    This is dvips(k) 5.992 Copyright 2012 Radical Eye Software (www.radicaleye.com)
    ' LuaTeX output 2012.08.01:1700' -> ohnexml-luatex.ps
    dvips: ! invalid char 297 from font ec-lmr10

This is how I actually came up with the suspicion that, for the first problem, Unicode positions and not font-encoding-specific glyph positions are written to the output file, as 297 is the decimal Unicode point of ĩ (second row, fourth last column in the MWE above). If a solution to the first problem does not solve this problem transitively, how can this one be helped?

Thank you for your thoughts.

- if i remember correctly, taco hoekwater, at a tug meeting sometime during the last few years, said that exact output equivalence to dek-tex was not a requirement for luatex. by this he was referring to line and page breaks, not to character presentation (where identical results seem to me to be highly desirable). but if absolutely identical results are a requirement of your project, you may want to take this into consideration. (i can't find a written record; this may have been in reply to a personal question.) – barbara beeton Aug 1 '12 at 18:06

## 1 Answer

At first you should imho better use utf8 instead of utf8x. utf8x is unmaintained and has problems e.g. with biblatex. (You will have to set up some missing definitions for pdflatex.) You will also have to add some definitions for lualatex, as it will map - as you already found out - undeclared chars simply to their Unicode position. Here e.g. two definitions for ½ & µ:

    \documentclass{article}
    \usepackage[utf8]{luainputenc}
    \usepackage[T1]{fontenc}
    \usepackage{textcomp}
    \renewcommand{\rmdefault}{lmr}
    \DeclareUnicodeCharacter{00BD}{\textonehalf}
    \DeclareUnicodeCharacter{00B5}{\textmu}
    \begin{document}
    \begin{tabular}{@{}l*{10}{p{7mm}@{}}}
    Some T1 characters: & \# & \$ & \% & Ă & Ň & § & @ & Æ & ß & £ \\[1.5mm]
    Some non-T1 characters: & ‡ & ÿ & ‰ & … & ¶ & ½ & µ %ĩ &
    & | | & | | \\
    \end{tabular}
    \end{document}

- Thank you for your answer Ulrike. I must admit I wasn't aware of \DeclareUnicodeCharacter. Can you by chance tell me where the different behavior of pdfTeX/LuaTeX comes from? Is it just that utf8x.def is unmaintained and thus behaves so differently in inputenc and luainputenc? I found the ucharacters.sty from the PassiveTeX project to be quite a comprehensive list of such character declarations; it can be used by defining e.g. \newcommand{\DefineCharacter}[3]{\DeclareUnicodeCharacter{#2}{#3}}. Your answer solves the DVI mode problem as well. – Patrick Bergner Aug 1 '12 at 16:36

- Well pdftex/luatex are simply different. pdftex reads an utf8 char in pieces (each 8 bit long), luatex reads it as one entity. So if a char is not declared pdftex will give an error at the first octet, but luatex will simply pass it through. Beside this: as far as I can see luainputenc predeclares fewer chars than is done by normal inputenc. This could probably be improved, but on the other side luainputenc is not much used. Most people that use lualatex also use fontspec.
\u2013\u00a0 Ulrike Fischer Aug 1 '12 at 16:54","date":"2014-04-19 17:04:11","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.890796422958374, \"perplexity\": 3623.8697236293165}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-15\/segments\/1397609537308.32\/warc\/CC-MAIN-20140416005217-00191-ip-10-147-4-33.ec2.internal.warc.gz\"}"}
Q: Deleting 0 dollar amounts from a column in Microsoft Excel

I'm using Microsoft Excel and I am going over finance records with it. I would like to delete all $0 dollar amounts from column C. Any way to do that?

A:
Highlight column C.
Press CTRL+H.
Find: 0
Replace: (blank)
Click Replace All.
--Quick and dirty version.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,556
Kumamoto's Hidden Gems: Top 10 Travel Spots in Uki, Uto & Misato
The beautiful off the beaten path Uki area near Kumamoto city in Kyushu is the true embodiment of the Japanese concept of 'satoyama' – nature at its most authentic and beautiful self. From stunning nature landscapes and phenomena, to rich history and culture, and even unique photo spots you can hardly believe are real –…

Experience Small-Town Life in Tamana, Kumamoto
If you have already seen the sights of Kumamoto and find yourself at a tourism loss of what to do, consider a day trip to the nearby city of Tamana for an outing that combines history, culture and relaxation.…

Check Out JR Kyushu's Newest Sightseeing Train in Kumamoto, Japan!
JR Kyushu's (九州) newest train makes its first trip this March and with it are tales of hope. Named after kingfishers local to the Kuma (球磨) region, the KAWASEMI YAMASEMI (かわせみ やませみ) is the first sightseeing train launched by JR Kyushu after the Kumamoto (熊本) earthquake in 2016 and is hoped to be a symbol…

10 Things to Do on the SL Hitoyoshi, the Sightseeing Train Along Kumamoto
Japan is known for its extensive train network that runs throughout the country. From local trains to the fastest bullet train, each will surely bring you to your destination on time. How about hopping on a sightseeing train down south? Let us take a ride on the SL Hitoyoshi (人吉)! It was as if time…

Amakusa Dolphin Cruise – A Top Attraction for Summer Holidays in Kyushu!
Honestly, I'm not big on dolphins. It was one of those things (like pink and chocolate) that all other little girls I knew growing up seemed to adore, and being the sort of child who didn't want to be "like everybody else," I decided quite quickly that I didn't like dolphins. Actually, I don't really…

Kumamoto: A Day Trip Exploring Shimabara's Hidden Treasures
If you're tired of bustling crowds at popular tourist attractions and the impossible task of taking a selfie without including the world and his wife in the background, perhaps you'll consider taking a day-trip to a less well-known location such as Shimabara. Visiting a small place may not be as exciting as top tourist attractions…

The Cutest Tram in Japan: Getting Around in Kumamoto
With a population of over 700,000, Kumamoto is a small city with a limited number of foreigners. Getting around in Tokyo is easy enough – signs and timetables are often displayed in English, and if you get stuck you can probably find an English-speaking attendant to help you. But in a small city like Kumamoto…
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,691
\section{Introduction}

Understanding the 3D structure of scenes is an essential topic in machine perception, which plays a crucial part in autonomous driving and robot vision. Traditionally, this task can be accomplished by Structure from Motion and with multi-view or binocular stereo inputs \cite{bjorkman2002real}. Since stereo images are more expensive and inconvenient to acquire than monocular ones, solutions based on monocular vision have attracted increasing attention from the community. However, monocular depth estimation is generally more challenging than stereo methods due to scale ambiguity and unknown camera motion. Several works \cite{eigen2014depth, godard2017unsupervised} have been proposed to narrow the performance gap. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{images/photometric_loss.png} \end{center} \caption{Visualization of the photometric loss. The first row is the reference image, and the second and third rows are warped images from adjacent images using ground truth depth and pose.} \label{fig:photometric} \end{figure} \begin{figure*} \begin{center} \begin{minipage}[htbp]{\textwidth} \centering \includegraphics[width=14.2cm]{images/startingPic.jpg} \end{minipage} \end{center} \caption{Comparisons of the prior ``point-to-point'' alignment paradigm to our ``region-to-region'' one. \re{We propose the ``region-to-region'' alignment paradigm by enforcing photometric consistency at feature-level (a) and replacing point cloud alignment with voxel density alignment in 3D space (b).}} \label{fig:1} \end{figure*} Recently, with the unprecedented success of deep learning in computer vision \cite{he2016deep,dosovitskiy2020image}, Convolutional Neural Networks (CNNs) \cite{he2016deep} have achieved promising results in the field of depth estimation.
In the paradigm of supervised learning, depth estimation is usually regarded as a regression or classification problem \cite{eigen2014depth,fu2018deep}, which needs expensive labeled datasets. By contrast, there are also some successful attempts \cite{zhou2017unsupervised,mahjourian2018unsupervised,godard2019digging} to execute monocular depth estimation and visual odometry prediction together in a self-supervised manner by utilizing cross-view consistency between consecutive frames. In most prior works of this pipeline, two networks are used to predict the depth and the camera pose separately, which are then jointly exploited to warp source frames to the reference ones, thereby converting the depth estimation problem to a photometric error minimization process. The essence of this paradigm is utilizing the cross-view geometry consistency from videos to regularize the joint learning of depth and pose. Previous SS-MDE works have proved the effectiveness of the photometric loss among consecutive frames, but it is quite vulnerable, and even problematic, in some cases. First, the photometric consistency is based on the assumption that the pixel intensities projected from the same 3D point in different frames are constant, which is easily violated by illumination variance, reflective surfaces, and texture-less regions. Second, there are always some dynamic objects in natural scenes, which generate occlusion areas and thus also affect the success of photometric consistency. To demonstrate the vulnerability of photometric loss, we conduct a preliminary study on Virtual KITTI \cite{cabon2020virtual} because it has dense ground truth depth maps and precise poses. As shown in Figure \ref{fig:photometric}, even though the ground truth depth and pose are used, the photometric loss map is never zero due to factors such as occlusions, illumination variance, dynamic objects, etc. To address this problem, perceptual losses are used in recent work \cite{shu2020featdepth}.
In line with this research direction, we are dedicated to proposing more robust loss terms to help enhance the self-supervision signal. Therefore, our work aims to explore more robust cross-view consistency losses to mitigate the side effects of these challenging cases. We first propose a Depth Feature Alignment (DFA) loss, which learns feature offsets between consecutive frames by reconstructing the reference frames from their adjacent frames via deformable alignment. Then, these feature offsets are used to align the temporal depth features. In this way, we utilize the consistency between adjacent frames via feature-level representation, which is more representative and discriminative than pixel intensities. As shown in Figure \ref{fig:1} (a), comparing the photometric intensity between consecutive frames can be problematic, because the intensities of the surrounding region of the target pixel are very close, and the ambiguity may cause mismatches. Besides, prior work \cite{mahjourian2018unsupervised} proposes to use an ICP-based point cloud alignment loss to utilize 3D geometry to enforce cross-view consistency, which is useful to alleviate the ambiguity of 2D pixels. However, rigid 3D point cloud alignment cannot work properly in scenes with object motion and the resulting occlusion, as shown in Figure \ref{fig:1} (b), thereby being sensitive to local object motion. In order to make the model more robust to moving objects and the resulting occlusion areas, we propose voxel density as a new 3D representation and define the Voxel Density Alignment (VDA) loss to enforce cross-view consistency. Our VDA loss regards the point cloud as an integral spatial distribution. It only enforces the numbers of points inside corresponding voxels of adjacent frames (voxels in the same color in Figure \ref{fig:1} (b)) to be consistent and does not penalize small spatial perturbations since the point still stays in the same voxel.
These two cross-view consistency losses exploit the temporal coherence in depth feature space and 3D voxel space for SS-MDE, both shifting the prior ``point-to-point'' alignment paradigm to the ``region-to-region'' one. Our method can achieve superior results to the state-of-the-art (SOTA). We conduct ablation experiments to demonstrate the effectiveness and robustness of the proposed losses. \section{Related Work} The SS-MDE paradigm, which mainly takes advantage of cross-view consistency in monocular videos, has become very popular in the community. In this section, we explore different categories of cross-view consistency used in previous self-supervised monocular depth estimation works. \subsection{Photometric Cross-view Consistency} The photometric cross-view consistency can be traced back to the Direct Method in SLAM (Simultaneous Localization and Mapping), which optimizes camera poses through minimizing the reprojection error, skips the feature point extraction step of the traditional method, and only depends on the difference in pixel intensity. SFM-learner \cite{zhou2017unsupervised} is one of the first attempts to propose a self-supervised end-to-end network for training with monocular videos, which can jointly predict the depth and pose between consecutive frames. The core technique is using a spatial transformer network \cite{jaderberg2015spatial} to synthesize reference frames from source frames, which converts the depth estimation problem to a reprojection photometric error minimizing process. Geonet \cite{yin2018geonet} designs a joint learning framework of monocular depth, optical flow and ego-motion estimation, which combines flow consistency with photometric consistency to model cross-view consistency. DF-Net \cite{zou2018df} also leverages the pixel-level consistency among multiple tasks including depth, optical flow and motion segmentation estimation. Gordon et al.
\cite{Gordon_2019_ICCV} improve the photometric loss by simultaneously predicting a translation field and an occlusion-aware mask to exclude object motion and occlusion regions, respectively. But the occlusion-aware loss is calculated by comparing predicted depth values in consecutive frames, which is easily affected by the inaccuracy of the estimated depth. \re{SGDdepth \cite{klingner2020self} introduces semantic guidance to solve photometric consistency violations caused by dynamic objects, via jointly learning depth estimation with a supervised semantic segmentation task.} Monodepth2 \cite{godard2019digging} also proposes several schemes to improve the effectiveness of photometric loss, including an auto-masking loss and a minimum reprojection loss, yielding more accurate results. However, we believe areas with lower photometric loss cannot necessarily guarantee more accurate depth and pose because of the ambiguity and low discriminability of the photometric consistency loss, which is in line with the motivation of FeatDepth \cite{shu2020featdepth}. Moreover, the pixel-level photometric consistency may become invalid in challenging cases like moving object regions and the resulting occlusion areas. \subsection{Feature-level Cross-view Consistency} Therefore, some researchers have started to explore cross-view consistency at the feature level. Depth-VO-Feat \cite{zhan2018unsupervised} is a pioneering work to explore the combination of photometric consistency and feature-level consistency to generate temporally consistent depth estimation, taking binocular videos as input. Kumar et al. \cite{cs2018monocular} are the first to combine Generative Adversarial Networks (GANs) with the self-supervised depth estimation architecture and use a discriminator to distinguish the synthetic reference frames from the real ones, which can also be regarded as a feature-level constraint.
Following it, several more works have been proposed to improve the GAN-based feature-level consistency loss \cite{wang2020adversarial,zhao2020masked}. \re{Beyond utilizing the deep representation learned in the depth estimation task alone, imposing semantic-aware features to enhance or align the depth feature representation is a promising direction. Recent works \cite{li2021learning,Jung_2021_ICCV} propose to incorporate the semantic segmentation task to impose both feature-level implicit guidance and pixel-level explicit constraints.} \re{Besides}, FeatDepth \cite{shu2020featdepth} learns specific feature representations by a separate auto-encoder network in order to constrain cross-view consistency in feature space. These attempts prove the effectiveness of utilizing feature-level representation for cross-view consistency. We also explore feature-level consistency in the depth feature space but leverage the feature offsets estimated from temporal image frames to regularize the temporal coherency of estimated depth maps via the Depth Feature Alignment (DFA) loss. \subsection{3D Space Cross-view Consistency} Besides the exploration of cross-view consistency in feature space, many works introduce additional 3D information to constrain geometric consistency. LEGO \cite{yang2018lego} presents a self-supervised framework to jointly estimate depth, normal, and edge, and uses the normal as an additional 3D constraint to strengthen the cross-view consistency constraint. Luo et al. \cite{luo2020consistent} leverage optical flow to find the corresponding 3D points in other frames and then build long-term 3D geometric constraints. Similarly, GLNet \cite{chen2019self} simultaneously predicts depth and optical flow, and utilizes the predicted flow to build 3D point couplings to construct 3D point consistency and an epipolar constraint.
These two methods naturally build cross-view consistency in 3D space by imposing flow information among continuous frames; however, the accuracy heavily relies on the flow estimation, which is an unsolved problem itself in different scenes, especially those including occlusion and moving objects. Previous self-supervised depth estimation works started to impose additional information (i.e., normal and optical flow \cite{luo2020consistent}) to help enhance the geometric restriction but hardly used a 3D representation for the 3D space cross-view consistency. Vid2depth \cite{mahjourian2018unsupervised} constrains temporal cross-view consistency via 3D point cloud alignment based on the differentiable ICP method, which imposes 3D geometry information in the learning pipeline. However, point cloud alignment is a rigorous constraint to align corresponding 3D points, and this loss is very sensitive to the point positions, which is fragile in scenes with moving objects and the resulting occlusion regions. By contrast, we propose the Voxel Density Alignment (VDA) loss to impose 3D geometric information, which is robust and tolerant to the above challenging cases. \section{METHODS} \subsection{Preliminary} \begin{figure*} \begin{center} \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=16cm]{images/frameF.pdf} \end{minipage} \end{center} \caption{An illustration of our learning framework, which consists of DepthNet, PoseNet, and OffsetNet for depth estimation, pose estimation, and alignment offset learning, respectively. OffsetNet learns a feature alignment offset field using a self-supervised loss calculated by reconstructing the reference frame from adjacent views with deformable convolutions. The learned offset field is then used to align temporal depth features learned from DepthNet.
\re{The three branches in the framework are jointly optimized during training while only DepthNet is used during inference.}} \label{fig:network} \end{figure*} \paragraph{Camera model} The process of a camera mapping a point in 3D space to the 2D image plane can be described by a geometric model, basically the pinhole camera model. The mapping of a 3D point $P=(X, Y, Z)$ and its corresponding 2D point $p=(u,v)$ can be described as: \begin{equation} D(p) \begin{bmatrix} u \\ v\\1\end{bmatrix} = \begin{bmatrix} K\big|\textbf{0}\end{bmatrix}\begin{bmatrix} X \\ Y\\Z\\1\end{bmatrix}, \\ ~{\rm where} ~ K=\begin{bmatrix} f_x &0&u_0\\ 0&f_y&v_0\\0&0&1\end{bmatrix}. \label{eq:cameraProj} \end{equation} Matrix $K$ is the camera intrinsic matrix. $D(p)$ is the depth value at point $p$, i.e., the learning target of depth estimation task. Once the point $p$ and its depth value $D(p)$ are known, we can backproject it to get the corresponding 3D point $P$: \begin{equation} P = D(p)K^{-1}p. \end{equation}{} \paragraph{2D cross-view consistency} The essence of SS-MDE is using cross-view (in consecutive or stereo frames) consistency as the self-supervision signal. The most commonly used one is the photometric consistency, i.e.\re{,} assuming the intensity of 3D point $P$ projected in $I_t$ and $I_{t+m}$ is invariant: \begin{equation} I_t(p_t)=I_{t+m}(p_{t+m}). \end{equation}{}The projection point $p_{t+m}$ of $P$ in frame $I_{t+m}$ can be calculated from $p_t$ in frame $I_{t}$ and its depth $D(p_t)$, with the estimated transformation $T_{t\rightarrow t+m}$, by a differentiable warping function $\omega$: \begin{equation} \begin{aligned} p_{t+m}\sim\omega\left( KT_{t\rightarrow t+m}D(p_t)K^{-1}p_t\right). \end{aligned} \end{equation}{}Thus, the frame $\hat{I}_{t+m\rightarrow t}$ can be reconstructed from frame $I_{t+m}$: \begin{equation} \hat{I}_{t+m\rightarrow t}(p) = I_{t+m}(p_{t+m}). 
\end{equation}The photometric error minimization process is used to optimize depth and ego-motion estimation: \begin{equation} L_{ph} =\sum_{p\in I_t}\left|I_{t}(p)-\hat{I}_{t+m\rightarrow t}(p)\right|. \end{equation}{} \re{The photometric error adopted in previous works is usually the weighted sum of the L1 and SSIM differences.} To overcome the depth discontinuity, a smoothing term is often incorporated to add a regularization on depth maps in many previous works \cite{godard2017unsupervised,godard2019digging}: \begin{equation} L_{sm} = \left|\partial_x\mu_{D_t}\right|e^{-|\partial_xI_t|}+\left|\partial_y\mu_{D_t}\right|e^{-|\partial_yI_t|}, \end{equation} where $\mu_{D_t}$ is the inverse depth normalized by the mean depth. \re{ $\partial_x\mu_{D_t}$ and $\partial_y\mu_{D_t}$ denote the disparity gradients along the two directions.} Although the photometric consistency effectively models the depth estimation task as a self-supervised problem, the photometric metric is neither stable nor robust, as it is based on the intensity invariance hypothesis, which is easily violated in complex outdoor scenes. Cases like moving objects, occlusion, and texture-less regions will mislead the optimization. Therefore, we propose two new cross-view consistency losses from the perspectives of deep feature space and 3D space. \subsection{Depth feature alignment loss} The motivation of DFA loss is that the coherence of consecutive depth frames is the same as the coherence of consecutive RGB frames, since the 2D pixel movement in either RGB or depth frames corresponds to the movement of the same 3D scene point, following the same projection described by Eq.~\eqref{eq:cameraProj}. As shown in Figure \ref{fig:correspondence}, the correspondence learned from the RGB image can guide the maintenance of this correspondence during depth estimation. For example, a 3D point (red dot) is located on the edge of a black car. It will have many unique properties, such as an obvious depth change occurring around it.
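To make the preliminaries concrete, the following pure-Python sketch implements backprojection, a rigid transform, and pinhole reprojection with toy intrinsics (all names and values are illustrative; practical implementations operate on whole tensors and use differentiable bilinear sampling):

```python
def backproject(p, d, K):
    """P = D(p) * K^{-1} * p (Eq. 2); intrinsics given as (fx, fy, u0, v0)."""
    fx, fy, u0, v0 = K
    u, v = p
    return ((u - u0) / fx * d, (v - v0) / fy * d, d)

def project(P, K):
    """Pinhole projection of Eq. (1): map a 3D point to pixel coordinates."""
    fx, fy, u0, v0 = K
    X, Y, Z = P
    return (fx * X / Z + u0, fy * Y / Z + v0)

def transform(P, R, t):
    """Apply a rigid transform T = [R|t] (row-major 3x3 rotation R)."""
    X, Y, Z = P
    return tuple(R[i][0] * X + R[i][1] * Y + R[i][2] * Z + t[i] for i in range(3))

def reproject(p, d, K, R, t):
    """p_{t+m} from p_t, its depth, and the estimated relative pose (Eq. 4)."""
    return project(transform(backproject(p, d, K), R, t), K)
```

With the identity pose, `reproject` returns the original pixel; a non-zero translation shifts the pixel according to its depth, which is exactly the signal the photometric loss back-propagates into the depth and pose networks.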
Then, in the next frame, it is still an edge point and still has this property. It is very natural to use optical flow to model the inter-frame movement. However, as discussed above, 2D photometric information is ambiguous and unreliable, which may also degrade optical flow estimation. Therefore, we propose to learn cross-view consistency from the feature representation of RGB frames to guide the depth learning among corresponding frames. \begin{figure} \begin{center} \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[width=7cm]{images/correspondence.png} \end{minipage} \end{center} \caption{Illustration of the guidance from the correspondence in RGB images to the correspondence in depth.} \label{fig:correspondence} \end{figure} Different from \cite{shu2020featdepth}, which uses a separate network to learn feature representations from RGB frames and aligns consecutive frames via differentiable feature warping, we learn the temporal feature alignment by reconstructing the reference frame using its adjacent frames via deformable convolution networks \cite{dai2017deformable,tian2020tdan} in a totally self-supervised manner. Given consecutive frames $I_t$ and $I_{t+m}$, the feature representations $F_t$ and $F_{t+m}$ are first learned via a feature extractor $\Phi$: $F_i = \Phi(I_i)$. The features extracted from adjacent frames are then fed into the deformable alignment network to learn the alignment offset $\Theta_{t+m\rightarrow t}$ between $F_t$ \re{and $F_{t+m}$ and obtain the aligned feature $\hat{F}_{t+m\rightarrow t}$}: \begin{equation} \begin{aligned} \hat{F}_{t+m\rightarrow t},\Theta_{t+m\rightarrow t} = f_{\re{align}}(F_{t+m},F_t), \end{aligned} \end{equation} \re{where $f_{align}$ denotes the deformable alignment network, which consists of regular convolutions and deformable convolutions. The convolutional layer in the deformable convolution is responsible for learning the 2D offsets and outputs the deformed feature map using bilinear interpolation.
For each position $p$ on the aligned feature map $\hat{F}_{t+m\rightarrow t}$, it is calculated as:} \begin{equation} \begin{aligned} \hat{F}_{t+m\rightarrow t}(p) = \sum_{k\in \Omega} \gamma(p_k)F_{t+m}(p+p_k+\Delta p_k). \end{aligned} \end{equation} \re{$\Theta_{t+m\rightarrow t} = \{\Delta p_k|k=1,...,|\Omega|\}$ denotes the offsets learned by the deformable convolution, $\Omega$ is the set of sampling locations of the convolution kernel, $\gamma(p_k)$ is the kernel weight at sampling offset $p_k$, and $\Delta p_k$ is the $k$-th learned additional offset applied at the sampling location $p+p_k$. $p_k$ is the $k$-th sampling offset of a standard convolution with a kernel size of $n \times n$. For example, when $n=3$, we have $p_k \in \{(-1,-1),(-1,0),...,(1,1)\}$. The overall offset field learned by the deformable alignment network is a vector with $G\times 2N$ dimensions for each pair of input images. $G$ is the deformable group number, which is set to 8 in our work. $2N$ represents the channels of each group's offset field, where the offset of each point is a two-dimensional vector giving the offset values in the x- and y-directions, respectively. $N$ is the square of the kernel size, i.e., $N = n\times n$.} Once the aligned feature $\hat{F}_{t+m\rightarrow t}$ is obtained, the reference frame can be reconstructed from it: \begin{equation} \hat I_{t+m\rightarrow t} = Re(\hat F_{t+m\rightarrow t}), \end{equation} where $Re$ denotes a reconstruction network simply consisting of three \re{convolution layers}. The feature alignment offset $\Theta_{t+m\rightarrow t}$ can be learned by minimizing the difference between the reconstructed and original reference frames, namely ReconLoss: \begin{equation} L_{RE} = \left\|I_t - \hat I_{t+m\rightarrow t}\right\|^2. \end{equation} The learned offset is then used to conduct the temporal alignment of the corresponding depth features: \begin{equation} \hat F^D_{t+m} = f_{dc}\left(F^D_{t+m},\Theta_{t+m\rightarrow t}\right).
\end{equation} Here, the deformable convolution $f_{dc}$ and $\Theta_{t+m\rightarrow t}$ are the same as the ones used in the RGB feature alignment to take advantage of the prior of feature alignment coherence. The aligned depth feature $\hat F^D_{t+m}$ is enforced to be consistent with the estimated depth feature $F^D_t$ via DFLoss $L_{DF}$: \begin{equation} L_{DF}=\left\|F^D_t-\hat F^D_{t+m}\right\|^2. \end{equation} Our DFA loss is a combination of the ReconLoss and DFLoss: \begin{equation} L_{DFA}= L_{RE} + L_{DF}. \end{equation} The key process of DFA loss is shown in Figure \ref{fig:offsetcore}. \begin{figure} \begin{center} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[height=5.8cm, width=6.4cm]{images/offset.png} \end{minipage} \end{center} \caption{Illustration of the key process of OffsetNet, which aims to learn feature alignment offsets from RGB frames. The learned offsets are then used to align depth features.} \label{fig:offsetcore} \end{figure} DFA loss governs consistent depth estimation using the temporal coherence learned from features instead of 2D photometric information. It is beneficial to overcome the vulnerability of photometric loss in cases like illumination variance because the feature-level alignment can model temporal alignment non-locally compared with the pixel-wise photometric alignment. \red{\subsubsection{Analysis of DFA loss}\label{DFAlossAnalysis} The cross-view consistency is usually enforced via the warping among adjacent frames, but DFA loss adopts an OffsetNet branch for learning the temporal alignment using the offset field from deformable convolutions. The previous excellent work FeatDepth \cite{shu2020featdepth} also utilizes the feature representation to improve the cross-view consistency. However, the extracted features of the input images are just a kind of additional representation of the photometric measurement.
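As a toy illustration of this alignment step, the sketch below warps a depth feature map with a per-pixel offset field and evaluates the squared difference of DFLoss. It is a deliberate simplification: a single 2D offset per location stands in for the $G\times 2N$ deformable-kernel offsets, and all names are hypothetical:

```python
def bilinear(F, x, y):
    """Bilinearly sample a 2D feature map F (list of rows) at real-valued (x, y)."""
    H, W = len(F), len(F[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wy) * ((1 - wx) * F[y0][x0] + wx * F[y0][x1])
            + wy * ((1 - wx) * F[y1][x0] + wx * F[y1][x1]))

def align_with_offsets(F_next, offsets):
    """Warp the next frame's depth feature with the per-pixel (dx, dy) offsets
    learned from RGB reconstruction; sampling positions are clamped to the map."""
    H, W = len(F_next), len(F_next[0])
    return [[bilinear(F_next,
                      min(max(x + offsets[y][x][0], 0), W - 1),
                      min(max(y + offsets[y][x][1], 0), H - 1))
             for x in range(W)] for y in range(H)]

def df_loss(F_ref, F_aligned):
    """DFLoss: squared L2 distance between reference and aligned depth features."""
    return sum((a - b) ** 2
               for ra, rb in zip(F_ref, F_aligned) for a, b in zip(ra, rb))
```

If the offsets exactly undo the inter-frame motion, the warped depth feature matches the reference one and DFLoss vanishes; any residual misalignment is penalized and back-propagated into both the offset branch and DepthNet.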
The cross-view alignment is still conducted by the warping among continuous frames, which relies heavily on the predicted poses and makes it less robust to many challenging cases, e.g., low-texture and variant illumination regions. By contrast, our DFA loss adopts the OffsetNet branch to learn the cross-view alignment during training, and the learned feature-metric alignment is independent of the predicted poses, which makes our method more robust to the above challenging cases and able to predict more temporally consistent and accurate depth. } \subsection{Voxel density alignment loss} Due to the vulnerability of pixel-wise consistency supervision, Vid2depth \cite{mahjourian2018unsupervised} first imposed 3D constraints by aligning two point clouds estimated in adjacent frames via Iterative Closest Point (ICP). Enforcing the 3D geometry consistency of adjacent views seems reasonable and could be more effective compared with 2D consistency. However, this point alignment constraint is too strict to be robust in challenging scenes with moving objects and occlusion. We thus propose the Voxel Density Alignment (VDA) loss as a new 3D cross-view consistency supervision that is robust to these challenging cases. Intuitively, the whole 3D space can be divided into the same number of voxels among consecutive views. Our VDA loss enforces the numbers of 3D points in corresponding voxels to be consistent between adjacent frames instead of forcing every corresponding point to be aligned. This means our VDA loss is less affected by local object motion and occlusion. To calculate the voxel density, we divide the 3D space into $N = N_x \times N_y \times N_z$ voxels\re{, using the Cartesian coordinate system shown in Figure \ref{fig:network}, with the x- and y-axes being horizontal and the z-axis being vertical}.
Point $P_i(x_i,y_i,z_i)\in \mathbb{R}^3$ in point cloud $PC= \lbrace{P_i}\rbrace^{n}_{i=1}$ will fall into voxel $V_j$, if $x_i\in[a_j,a_j+\Delta a), y_i\in[b_j,b_j+\Delta b), z_i\in[c_j,c_j+\Delta c)$, where $(a_j, b_j, c_j, \Delta a, \Delta b, \Delta c)$ are a set of parameters representing the spatial range of voxel $V_j$. Then, the voxel density can be calculated as: \begin{equation} C(V_j) = \sum^n_{i = 1}[P_i\in V_j],~~VD(V_j)= C(V_j)/n. \end{equation} Here, $[\cdot]$ is the Iverson bracket. $[v]$ is defined to be $1$ if $v$ is true, and $0$ if it is false. $C(V_j)$ is a counting operation to obtain the number of points inside voxel $V_j$ and $n$ is the total number of 3D points. This is the naive implementation of the VDA loss, which is easily understood but not differentiable due to the counting operation. We thus develop another technique to implement it in a differentiable and more efficient manner. Specifically, once we have estimated the depth map of a frame, the point cloud is easy to obtain. We first calculate a voxel index for each 3D point $P(x,y,z)$ according to the 3D position of the point: \begin{equation}\label{Voxelization} \begin{aligned} \nu(P) = \left\lfloor\frac{x-x_{min}}{\Delta x}\right\rfloor+\left\lfloor\frac{y-y_{min}}{\Delta y}\right\rfloor N_x+\left\lfloor\frac{z-z_{min}}{\Delta z}\right\rfloor N_xN_y. \end{aligned} \end{equation} Here, $N_x$, $N_y$, $N_z$ are the number of voxels along each axis, and $\Delta x = \frac{x_{max}-x_{min}}{N_x}$, $\Delta y = \frac{y_{max}-y_{min}}{N_y}$, $\Delta z = \frac{z_{max}-z_{min}}{N_z}$ are the shape parameters of voxels. Thus, point cloud $PC= \lbrace{P_i}\rbrace^{n}_{i=1}$ can be expressed as an $n$-dimensional vector $V$ of voxel indices. We then calculate the number of points in each voxel.
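The voxel index computation of Eq.~\eqref{Voxelization} can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; the axis ranges and voxel counts passed in are assumed example values):

```python
import numpy as np

def voxel_index(P, x_rng, y_rng, z_rng, N):
    """Map 3D points to flat voxel indices via the floor-based formula
    nu(P) = fl((x-x_min)/dx) + fl((y-y_min)/dy)*N_x + fl((z-z_min)/dz)*N_x*N_y.
    P: (n, 3) array of points; *_rng: (min, max) per axis; N: (N_x, N_y, N_z)."""
    Nx, Ny, Nz = N
    dx = (x_rng[1] - x_rng[0]) / Nx          # voxel extent along x
    dy = (y_rng[1] - y_rng[0]) / Ny          # voxel extent along y
    dz = (z_rng[1] - z_rng[0]) / Nz          # voxel extent along z
    ix = np.floor((P[:, 0] - x_rng[0]) / dx).astype(int)
    iy = np.floor((P[:, 1] - y_rng[0]) / dy).astype(int)
    iz = np.floor((P[:, 2] - z_rng[0]) / dz).astype(int)
    return ix + iy * Nx + iz * Nx * Ny       # flat index per point
```

For instance, on a $2\times2\times2$ grid over the unit cube, the origin maps to index 0 and the point $(0.5,0.5,0.5)$ to index 7.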
We devise a function $g:\mathbb{R}^n \to \mathbb{R}^{N}$ to map $V$ to a counting vector $C= \lbrace{C_i}\rbrace^{N}_{i=1}$: \begin{equation} C_i = g_i(V)= n-\left\|sign(|V-i|)\right\|_1. \end{equation} In this way, the 3D point cloud can be represented as a voxel density vector: $\rho = C/n$. In conclusion, the calculation of the voxel density vector of frame $I_t$ from its estimated point cloud $PC_t$ is: \begin{equation} \rho _t = \frac{1}{n} g(\nu(PC_t)). \end{equation} \re{Here, $sign$ denotes the sign function,} \re{\begin{equation} sign(x) =\left\{ \begin{array}{lr} 1, & if\ x>0,\\ 0, & if\ x=0,\\ -1,& if\ x<0 . \end{array} \right. \end{equation}} We refer to the Straight Through Estimator (STE) \cite{yin2018understanding} to differentiably implement the sign function \re{and obtain valid gradients during training}: \re{ \begin{equation} \left\{ \begin{array}{lr} sign(r), & fp\\ Htanh(r)=Clip(r,-1,1)=max(-1,min(1,r)),& bp \end{array} \right. \end{equation} Here, $fp$ and $bp$ denote the forward pass and the back-propagation process, and $r=(x-\frac{1}{2})\times 2$.} The voxel density in one voxel can be regarded as the probability of a 3D point being situated in this 3D region. This representation of the 3D point cloud provides a more non-local and holistic expression of the 3D geometry. To exploit the temporal coherence in voxel space, our VDA loss adopts the KL divergence to measure the discrepancy between the voxel density vectors calculated from adjacent frames: \begin{equation} L_{VD\re{A}}=D_{KL}\left(\rho_t||\rho_{t+m\rightarrow t}\right)= \sum _{i=1}^{N}\rho_{t}(i) \log\left(\frac{\rho_t(i)}{\rho_{t+m\rightarrow t}(i)}\right). \end{equation} Here, $\rho_{t+m\rightarrow t} = \frac{1}{n}g(\nu(P_{t+m\rightarrow t}))$. $P_{t+m\rightarrow t}$ is the point cloud transformed from the frame $I_{t+m}$.
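For clarity, the counting function $g$ and the KL-based comparison can be sketched as below (a loop-based NumPy illustration; the $\epsilon$ smoothing in the KL term is our added assumption, and the STE-based gradient trick used for training is not reproduced here):

```python
import numpy as np

def voxel_density(V, num_voxels):
    """Counting via C_i = n - ||sign(|V - i|)||_1 for each voxel index i,
    then normalization to a density vector rho = C / n.
    V: integer voxel indices of the n points, values in [0, num_voxels)."""
    n = len(V)
    # sign(|V - i|) is 1 where V != i and 0 where V == i, so the
    # L1 norm counts the points *outside* voxel i; n minus that counts inside.
    C = np.array([n - np.sum(np.sign(np.abs(V - i))) for i in range(num_voxels)])
    return C / n

def vda_loss(rho_t, rho_warp, eps=1e-8):
    """KL divergence between voxel-density vectors of adjacent views
    (eps avoids log(0) for empty voxels; an assumption of this sketch)."""
    return float(np.sum(rho_t * np.log((rho_t + eps) / (rho_warp + eps))))
```

For integer indices, `voxel_density` reproduces a normalized histogram, so the counting identity can be checked against `np.bincount`; the KL term vanishes when the two densities coincide.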
By aligning the distribution of the 3D point cloud using the voxel density representation, we can shift the prior ``point-to-point'' alignment paradigm to a ``region-to-region'' one to constrain cross-view consistency, which will be more robust and tolerant to challenging cases like moving objects and occlusion. \subsubsection{Analysis of VDA loss} The effectiveness of our VDA loss is mainly due to the robust representation, i.e.\re{,} the voxel density. Here, we analyze the merit of the voxel density representation. Given two consecutive frames, the 3D geometry estimated from the two frames should be totally consistent after the ego-motion transformation if there is no inconsistent perturbation. However, in natural scenes, especially outdoor scenarios, violation cases such as moving people and vehicles are quite common. Assuming we have $I_t$, $I_{t+\re{m}}$, their estimated depth maps $D_t$, $D_{t+\re{m}}$, and their ego-motion transformation $T$, the most commonly used representations to depict the 3D geometry are the point clouds $P_t$ and $P_{t+m}$: \begin{equation} P_t = D_t K^{-1}I_t,~~ P_{t+\re{m}} = D_{t+\re{m}}K^{-1}I_{t+\re{m}},~~ \hat P_t = T P_{t+\re{m}}. \end{equation} The prior point cloud loss measures the inconsistency of two estimated 3D point clouds via the L1 norm: \begin{equation} L_{pc} = \sum^n_{i=1}\left\|P_t(i) - \hat P_t(i)\right\|_1. \end{equation} The L1 norm is sensitive to each element, which means this loss rigidly enforces each point to align with its corresponding point. Sometimes there is object motion between the two frames, which means the existence of a small perturbation on some points. Assuming point $P_1(x_1,y_1,z_1)$ moves $\delta_{x_1}, \delta_{y_1}, \delta_{z_1}$ along axes $x, y, z$, respectively, the cross-view consistency loss can be calculated via the point cloud loss as: \begin{equation} L_{pc} = |\delta_{x_1}| + |\delta_{y_1}| + |\delta_{z_1}|.
\end{equation} In contrast, our VDA loss pays more attention to the spatial positions of 3D points, measuring the inconsistency of corresponding groups of points. The grouping operation is realized by putting points into different voxels according to their 3D positions: \begin{equation} L_{v} = \left\|\nu(P_t(i)\re{)}-\nu(\hat P_t(i))\right\|_1, \end{equation} where the ``voxelization'' process $\nu(P)$ is calculated as Eq.~\eqref{Voxelization}. When there is a small perturbation $\delta_{x_1}, \delta_{y_1}, \delta_{z_1}$ in $P_1$, $L_v$ can be calculated as: \begin{equation} \begin{aligned} L_{v} &= \left\lfloor\frac{x+\delta x-x_{min}}{\Delta x}\right\rfloor+\left\lfloor\frac{y+\delta y-y_{min}}{\Delta y}\right\rfloor N_x\\&+\left\lfloor\frac{z+\delta z-z_{min}}{\Delta z}\right\rfloor N_xN_y\\ & - \left(\left\lfloor\frac{x-x_{min}}{\Delta x}\right\rfloor+\left\lfloor\frac{y-y_{min}}{\Delta y}\right\rfloor N_x+\left\lfloor\frac{z-z_{min}}{\Delta z}\right\rfloor N_xN_y\right). \end{aligned} \end{equation} Taking $z$ for example, because of the floor operation, the value of $L_{v}$ can be non-zero only when $\frac{z+\delta z-z_{min}}{\Delta z}-\frac{z-z_{min}}{\Delta z} > 1$, which means $\delta z$ needs to be larger than $\Delta z$. Therefore, the small perturbation will not change the voxel index $\nu(P_1)$, so that $L_v$ remains $0$ and $L_{VDA}$ remains stable. For an intuitive explanation, although the object has a small motion, the whole object still stays in the original voxel as shown in Figure \ref{fig:1}. Therefore, our VDA loss is more robust to object motions than the point cloud alignment loss. \begin{table*}[t] \small \begin{center} \caption{Quantitative performance of monocular depth estimation on the KITTI Eigen test set. For a fair comparison, all the results are evaluated at the maximum depth threshold of 80m. All methods are evaluated with raw LiDAR scan data. ``$\dagger$" means updated result after publication.
Bold indicates the best and underline indicates the second-best results. $\delta_1, \delta_2, \delta_3$ denote $\delta < 1.25, \delta < 1.25^2, \delta < 1.25^3$, respectively. \re{The column ``train'' means training manners, with ``M'' denoting self-supervised monocular training and ``M\&Seg'' denoting self-supervised monocular training together with supervised segmentation training.}} \label{tab:1} \setlength{\tabcolsep}{0.005\linewidth} \begin{tabular}{ccccccccccc} \toprule \multirow{2}{*}{\!\!\!Methods\!\!\!}&\multirow{2}{*}{Train} &\multirow{2}{*}{Backbone}& \multirow{2}{*}{Resolution}& \multicolumn{4}{c}{Error metric$\downarrow$} & \multicolumn{3}{c}{Accuracy metric$\uparrow$} \\ \cmidrule(r){5-8} \cmidrule(r){9-11} &&& &Abs Rel & Sq Rel & RMSE & RMSE log \!\!\! & \re{$\delta_1$} & \re{$\delta_2 $} & \re{$\delta_3$} \\ \midrule SFMlearner \cite{zhou2017unsupervised} \re{(CVPR 2017)}$\dagger$&\re{M}&DispNet & $416\times128$ &0.183 &1.595 &6.709 &0.270 &0.734 &0.902 &0.959\\ Mahjourian et al.
\cite{mahjourian2018unsupervised} \re{(CVPR 2018)} &\re{M}& DispNet &$416\times128$ &0.163 &1.240 &6.220 &0.250 &0.762 &0.916 &0.968\\ GeoNet \cite{yin2018geonet} \re{(CVPR 2018)}$\dagger$ &\re{M}&ResNet50 &$416\times128$ &0.149 &1.060 &5.567 &0.226 &0.796 &0.935 &0.975\\ DDVO \cite{wang2018learning} \re{(CVPR 2018)} &\re{M}&DispNet & $416\times128$ &0.151 &1.257 &5.583 &0.228 &0.810 &0.936 &0.974\\ DF-Net \cite{zou2018df} \re{(ECCV 2018)} &\re{M}& ResNet50 &$576\times160$ &0.150 &1.124 &5.507 &0.223 &0.806 &0.933 &0.973\\ Struct2depth \cite{casser2019depth} \re{(AAAI 2019)} &\re{M}& DispNet &$416\times128$ &0.141 &1.026 &5.142 &0.210 &0.845 &0.845 &0.948 \\ SC-SFMlearner \cite{bian2019unsupervised} \re{(NeurIPS 2019)} &\re{M}&DispResNet \!\!\!& $832\times256$ &0.137 &1.089 &5.439 &0.217 &0.830 &0.942 &0.975\\ HR \cite{zhou2019unsupervised} \re{(ICCV 2019)}&\re{M}& ResNet50& $1248\times384$ &0.121 &0.873 &4.945 &0.197 &0.853 &0.955 &0.982\\ Monodepth2 \cite{godard2019digging} \re{(ICCV 2019)} &\re{M}& ResNet18 & $1024\times320$ &0.115 &0.882 &4.701 &0.190 &0.879 &0.961 &0.982\\ PackNet \cite{Guizilini_2020_CVPR} \re{(CVPR 2020)}&\re{M}& PackNet &$640\times 192$ &0.111 &0.785 &4.601 &0.189 &0.878 &0.960 &0.982\\ \re{TrianFlow \cite{zhao2020towards} (CVPR 2020)}&\re{M}&\re{ResNet18}&\re{$832\times 256$}&\re{0.113}&\re{0.704}&\re{4.581}&\re{0.184}&\re{0.871}&\re{0.961}&\re{\textbf{0.984}}\\ \re{Johnston et al.
\cite{johnston2020self} } \re{(CVPR 2020)}&\re{M} &\re{ResNet101} &\re{$640\times 192$ }&\re{0.106} &\re{0.861} &\re{4.699} &\re{0.185}&\re{\underline{0.889}}&\re{0.962}&\re{0.982}\\ FeatDepth \cite{shu2020featdepth} \re{(ECCV 2020)}&\re{M}& ResNet50 &$1024\times320$ &0.104 &\underline{0.729} &\underline{4.481} &\textbf{0.179} &\textbf{0.893} &\textbf{0.965} &\textbf{0.984}\\ MLDA-Net \cite{song2021mlda} \re{(TIP 2021)} &\re{M}&ResNet50&$640\times 192$ &0.110 &0.824 &4.632 &0.187 &0.883 &0.961 &0.982\\ HR-Depth \cite{Lyu_Liu_Wang_Kong_Liu_Liu_Chen_Yuan_2021} \re{(AAAI 2021)}&\re{M}&HRNet &$640\times 192$ &0.109 &0.792 &4.632 &0.185 &0.884 &0.962 &\underline{0.983}\\ \re{R-MSFM6 \cite{zhou2021r} (ICCV 2021)}&\re{M} &\re{ResNet18} & \re{$640\times 192$} & \re{0.112} & \re{0.806} & \re{4.704} & \re{0.191} & \re{0.878} & \re{0.960} & \re{0.981}\\ \re{Wang et al. \cite{Wang_2021_ICCV} (ICCV 2021)}&\re{M}&\re{ResNet18}&\re{$640\times 192$}&\re{0.109}&\re{0.779}&\re{4.641}&\re{0.186}&\re{0.883}&\re{0.962}&\re{0.982}\\ \midrule \re{SGDepth \cite{klingner2020self} (ECCV 2020)}&\re{M\&Seg}&\re{ResNet18}&\re{$640\times 192$ }&\re{0.113}&\re{0.835}&\re{4.693}&\re{0.191}&\re{0.879}&\re{0.961}&\re{0.981}\\ \re{Li et al.
\cite{li2021learning} (ArXiv 2021)}&\re{M\&Seg}&\re{ResNet50}&\re{$640\times 192$ }&\re{0.103}&\re{0.709}&\re{4.471}&\re{0.180}&\re{0.892}&\re{0.966}&\re{0.984}\\ \re{FSRE-Depth \cite{Jung_2021_ICCV} (ICCV 2021)}&\re{M\&Seg}&\re{ResNet50}&\re{$640\times 192$ }&\re{0.102}&\re{0.675}&\re{4.699}&\re{0.178}&\re{0.893}&\re{0.966}&\re{0.984}\\ \midrule \textbf{Ours (R18 LR)} &\re{M}&ResNet18 & $640\times 192$ & 0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & \underline{0.983}\\ \textbf{Ours (R50 LR)} &\re{M}&ResNet50 & $640\times 192$ & 0.105 & 0.741 & 4.540 & 0.183 & 0.884 & 0.962 & \underline{0.983}\\ \bf{Ours (R18 HR)} &\re{M}&ResNet18 &$1024\times320$ &\underline{0.103} &0.764 &4.672 &\underline{0.182} &0.885&0.962 &\underline{0.983}\\ \bf{Ours (R50 HR)} &\re{M}&ResNet50 &$1024\times320$ &\textbf{0.102} &\textbf{0.726} &\textbf{4.479} &\textbf{0.179} &\underline{0.889} &\underline{0.963} &\underline{0.983}\\ \bottomrule \end{tabular} \end{center} \end{table*} \subsection{Overall learning pipeline} In this paper, our method adopts the DFA loss and the VDA loss as additional cross-view consistency supervision on top of the widely used photometric loss and smoothness loss. Therefore, the total loss is: \begin{equation} L = \alpha L_{ph} +\beta L_{sm} + \gamma L_{DFA} +\eta L_{VDA}. \end{equation} Here, $\alpha, \beta, \gamma, \eta$ are set to $1, 0.01, 0.05, 0.05$, respectively. The parameters $N_x$, $N_y$, $N_z$ are set to 40, 40, and 24 in our work. Our implementation of the photometric and smoothness losses follows the baseline method \cite{godard2019digging}.
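As a minimal illustration of how the terms combine (not the authors' implementation; the photometric and smoothness terms are abstracted as precomputed scalars, and mean squared error stands in for the summed squared norms of the DFA terms):

```python
import numpy as np

def dfa_loss(I_t, I_rec, F_t, F_aligned):
    """Sketch of L_DFA = L_RE + L_DF: RGB reconstruction error plus
    depth-feature consistency error between the target features and
    the features aligned from the adjacent frame."""
    L_RE = float(np.mean((I_t - I_rec) ** 2))        # ReconLoss term
    L_DF = float(np.mean((F_t - F_aligned) ** 2))    # DFLoss term
    return L_RE + L_DF

def total_loss(L_ph, L_sm, L_DFA, L_VDA,
               alpha=1.0, beta=0.01, gamma=0.05, eta=0.05):
    """Weighted sum of the four training losses with the stated weights."""
    return alpha * L_ph + beta * L_sm + gamma * L_DFA + eta * L_VDA
```

With these weights, the photometric term dominates while the two proposed consistency terms act as regularizers.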
\begin{figure*}[htbp] \begin{center} \begin{minipage}{0.96\textwidth} \includegraphics[ width=3.4cm,valign=t]{images/75.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254.jpg}\includegraphics[ width=3.4cm,valign=t]{images/427.png}\includegraphics[ width=3.4cm,valign=t]{images/559.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674.jpg} \includegraphics[ width=3.4cm,valign=t]{images/75gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/428gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674gt.jpg} \includegraphics[ width=3.4cm,valign=t]{images/75_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/254_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp_ours.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/feat_disp_75.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_254.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_427.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_559.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_674.png} \includegraphics[ width=3.4cm,valign=t]{images/75_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/427_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674_hrdepth.jpg} \includegraphics[ width=3.4cm,valign=t]{images/75_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/254_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/427_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/559_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/674_packnet.png} \includegraphics[ width=3.4cm,valign=t]{images/75_disp_mono2.jpeg}\includegraphics[ 
width=3.4cm,valign=t]{images/254_disp_mono2.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp_mono2.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp_mono2.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/75_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/254_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp_DDVO.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/75_disp_geo.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/254_disp_geo.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp_geonet.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp_geo.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp_geo.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/075_DF.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254_DF.jpg}\includegraphics[ width=3.4cm,valign=t]{images/427_dfjpg.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559DF.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674DF.jpg} \end{minipage} \begin{minipage}[htbp]{0.005\textwidth} \fontsize{7pt}{\baselineskip}\selectfont \noindent \begin{turn}{90}{Df-Net~~ GeoNet~~~ DDVO~~~ MD2 ~~ \re{PackNet}~~\re{HR-Depth}~~\red{Feat(MS)}~~ Ours~~~~~~~~\re{GT}~~~~ Input~~} \end{turn} \end{minipage} \end{center} \caption{\red{Qualitative results on KITTI test set. Our method produces more accurate depth maps in low texture regions, moving vehicles, and delicate structures. The advantages of our results are highlighted in green boxes. 
\red{Feat(MS) means the visualization results of FeatDepth \cite{shu2020featdepth} are generated using their model trained with both monocular and stereo inputs.}}} \label{fig:inferresult} \end{figure*} \section{EXPERIMENTS} \subsection{Network implementation}\label{networkarchi} As shown in Figure \ref{fig:network}, our network is composed of three branches for offset learning, depth estimation and pose estimation, respectively. The depth network adopts an encoder-decoder architecture in a U-shape with skip connections similar to DispNet \cite{mayer2016large}. The encoder takes a three-frame snippet as the sequential input, using the pre-trained ResNet \cite{he2016deep} as the backbone network. The depth decoder has three branches with shared weights, with a similar structure to \cite{godard2019digging}, using sigmoid activation functions in multi-scale side outputs and ELU nonlinear functions otherwise. The pose network takes two consecutive frames as input at a time and outputs the corresponding ego-motion, based on an encoder-decoder structure as well. For the DFA loss, we use a deformable alignment network to learn the feature alignment offsets, sharing weights with the deformable convolution used in the DepthNet branch for calculating the DFA loss. \re{The three branches in the framework are jointly optimized during training while only DepthNet is used during inference.} Our models are implemented in PyTorch with the Adam optimizer on 4 Tesla V100 GPUs, using a learning rate of $8\times10^{-5}$ for the first 10 epochs and $8\times10^{-6}$ for another 30 epochs. We trained models on monocular videos with a resolution of $640 \times 192$ (LR) and $1024 \times 320$ (HR) at a batch size of 4. \subsection{Evaluation metrics} For real-world datasets, it is very hard to obtain ground-truth depth.
Most datasets use LiDAR sensors to scan the environment and utilize the processed scans as ground truth; for instance, KITTI \cite{geiger2012we} uses a Velodyne laser scanner to collect the data. The most commonly used depth estimation benchmark of KITTI is the Eigen split, which is further refined by Zhou et al. \cite{zhou2017unsupervised}, consisting of 39810 sequences for training, 4424 items for validation and 697 images for testing. There are five evaluation metrics: \textbf{Abs Rel} for Absolute Relative Error, \textbf{Sq Rel} for Square Relative Error, \textbf{RMSE} for Root Mean Square Error, \textbf{RMSE log} for Root Mean Square Logarithmic Error and Accuracy: \begin{itemize} \item $Abs~Rel = (1/n)\sum_{i\in n}(|d_i-d^{\ast}_i|/d^{\ast}_i) $, \item $Sq~Rel = (1/n)\sum_{i\in n}(||d_i-d^{\ast}_i||^2/d^{\ast}_i)$, \item $RMSE = ((1/n)\sum_{i\in n}||d_i-d^{\ast}_i||^2)^{1/2}$, \item $RMSE~log = ((1/n)\sum_{i\in n}||log(d_i)-log(d^{\ast}_i)||^2)^{1/2}$, \item Accuracy: \% of $d_i$ s.t. $max((d_i/d^{\ast}_i), (d^{\ast}_i/d_i)) = \delta < \delta_n$, \end{itemize} where $n$ is the total number of pixels in the ground truth depth map, and $d_i$ and $d^{\ast}_i$ represent the predicted and ground truth depth values of pixel $i$. $\delta_n$ denotes a threshold, which is usually set to $1.25^1$, $1.25^2$ and $1.25^3$. \begin{figure*}[htbp] \begin{minipage}{\textwidth} \centering \includegraphics[height=3.8cm,width=14cm]{images/ablation.pdf} \end{minipage} \caption{\re{Visualization results of ablation study.
The left and right parts show the ablation of DFA loss and VDA loss respectively.}} \label{fig:ablation} \end{figure*} \begin{table*}[htbp] \small \begin{center} \caption{Ablation study of the cross-view consistency loss on KITTI.} \label{tab:ablation} \begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline &\!\!\!DFA \!\!\! &\!\!\!VDA\!\!\! &\!\!\!Abs Rel\!\!\! &\!\!\!Sq Rel\!\!\! &RMSE &\!\!\!RMSE log\!\!\! & $\!\!\delta \!< \!1.25$ \!\! & \!\! $\delta \!<\! 1.25^2 \!\! $ & \!\! $\delta \!<\! 1.25^3$ \!\!\!\\ \hline Baseline & & &0.124 &0.968 &5.030 &0.201 &0.855 &0.954 &0.980\\ \hline Ours w/ DFA &$\surd$ & &0.114 &0.873 &4.807 &0.191 &0.877 &0.960&0.982\\ \hline Ours w/ VDA & &$\surd$&0.110 &0.834 &4.694 &0.185 &0.885 &0.963 &0.983\\ \hline \!\!\! Ours w/ DFA+VDA &$\surd$ &$\surd$&0.106 &0.738 &4.587 &0.183 &0.881 &0.962 &0.983\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Depth estimation evaluation} We trained our models using the KITTI dataset \cite{geiger2012we}, which is the most commonly used benchmark in the field of depth estimation. During inference, we take only the reference frame as input following the standard monocular test protocol proposed by Eigen \cite{eigen2014depth} and Zhou {\em et~al.} \cite{zhou2017unsupervised}. The experimental results on the KITTI test set are presented in Table \ref{tab:1}. It is clear that our method outperforms most prior SS-MDE methods while using a smaller backbone network and image resolution, and achieves performance superior to the SOTA FeatDepth \cite{shu2020featdepth} when using a larger backbone network and a higher resolution of training images. Some visual results are shown in Figure \ref{fig:inferresult}. As can be seen, our method can generate more accurate depth maps than our baseline method Monodepth2 \cite{godard2019digging}, especially in challenging cases, e.g., low-texture regions and moving objects.
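The evaluation metrics used in Table \ref{tab:1} can be sketched as follows (a NumPy illustration over flattened arrays of valid pixels; the depth capping at 80m and validity masking of the benchmark protocol are omitted):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard KITTI depth metrics: Abs Rel, Sq Rel, RMSE, RMSE log,
    and the three accuracy ratios delta < 1.25^k.
    pred, gt: 1-D arrays of predicted / ground-truth depths (both > 0)."""
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)        # symmetric ratio delta
    acc = [float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, acc
```

For example, a prediction that overestimates every depth by 30\% yields Abs Rel of 0.3 and fails only the strictest $\delta < 1.25$ threshold.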
\re{Although challenging cases usually occupy only a small portion of all scenes, it is worth noting that handling these challenging cases really matters for real-world applications like autonomous driving.} \subsection{Ablation study} The ablation study is conducted on KITTI to highlight the effectiveness of the two proposed cross-view consistency losses. Table \ref{tab:ablation} shows the detailed results obtained by adding each loss to the baseline method. We trained the models of Monodepth2 \cite{godard2019digging} with resolution $640\times 192$ and batch size 4 as the baseline, taking the same setting as ours (R18 LR). \subsubsection{VDA loss} Our VDA loss is designed to exploit temporal coherence in voxel space to ensure the model's robustness and tolerance to challenging cases, especially moving objects. Samples of depth images produced by models with and without the VDA loss are shown in Figure \ref{fig:ablation}. The right part shows the superiority of our method in handling moving objects. The depth maps in the last row (with the VDA loss) are of higher quality than those in the third row (without the VDA loss) and the second row, PC loss (the baseline method with the point cloud alignment loss \cite{mahjourian2018unsupervised}), confirming the effectiveness of the VDA loss in moving object regions. \re{\subsubsection{DFA loss} Our DFA loss aims to learn the temporal alignment from features of consecutive frames, which is used to guide temporally consistent depth learning. We display samples of depth maps from one sequence in the left part of Figure \ref{fig:ablation}. As shown in the highlighted areas, our method can generate more accurate and coherent depth maps with sharper boundaries, especially in regions with illumination variance and low texture, e.g., the figure of the cyclist.
The second and the third rows show the results of the variants using optical flow alignment (more details and analysis can be found in Section \ref{OFA}) and without any temporal alignment, respectively.} \re{\subsection{Experimental analysis}} \re{\subsubsection{VDA loss} The experimental analysis of the VDA loss consists of the analysis of the VDA loss in handling moving objects and the analysis of hyperparameters in the VDA loss.} \paragraph{The effectiveness of VDA loss in handling moving objects} We conducted quantitative and qualitative experiments to validate the effectiveness of our VDA loss. We split the whole test set into two parts, namely the motion set and the static set, according to whether there are moving objects in the scene. We evaluated our method and the baseline method on both sets. The results are reported in Table \ref{tab:motion}. For the quantitative comparison, it is clear that our method is more robust to scenes with object motions than the baseline. Moreover, we show several visual samples in Figure \ref{fig:ablation}. As shown in the right part of Figure \ref{fig:ablation}, our method can robustly and clearly infer the moving vehicles in outdoor driving scenes, while other methods may lose or misestimate the moving objects.
\paragraph{Analysis of hyperparameters in VDA loss} Our VDA loss exploits the cross-view consistency of the distribution of point clouds in voxel space. It is interesting to investigate the impact of the way of dividing the 3D space into voxels, i.e., the number of voxels along each axis $N_x, N_y, N_z$. Therefore, we conduct an ablation study with different numbers of voxels, reported in Table \ref{voxeltab}. The size of the voxel is closely related to the tolerance to moving objects. According to our prior knowledge, objects moving in the vertical direction are almost impossible in outdoor driving datasets. We thus set \re{$N_z$} to 24 and evaluate different voxel sizes by changing $N_x$ and $N_y$. As shown in Table \ref{voxeltab}, the numbers of voxels along axis $x$ and axis $y$ can slightly affect the performance. Considering the boundary case, if $N_x= N_y=N_z=1$, which means regarding the whole space as one voxel, the voxel density is always the same and the VDA loss always equals 0. However, if the values of $N_x, N_y, N_z$ are too large, the voxel size will be very small, which is contrary to the goal of the VDA loss. Therefore, large values for $N_x, N_y, N_z$ are not recommended.
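The tolerance argument above can be checked numerically (a toy example with assumed values; a cubic grid with unit voxel size stands in for the actual $N_x \times N_y \times N_z$ division):

```python
import numpy as np

def voxel_idx(p, vmin=0.0, delta=1.0, N=(4, 4, 4)):
    """Flat voxel index of a single 3D point on a cubic grid (toy setup)."""
    i = np.floor((np.asarray(p) - vmin) / delta).astype(int)
    return int(i[0] + i[1] * N[0] + i[2] * N[0] * N[1])

p = np.array([1.2, 2.5, 0.7])                 # a point on a moving object
p_moved = p + np.array([0.3, -0.2, 0.1])      # motion smaller than the voxel size

l1 = float(np.sum(np.abs(p - p_moved)))       # point-to-point (L1) penalty
assert l1 > 0                                 # point cloud loss reacts to the motion
assert voxel_idx(p) == voxel_idx(p_moved)     # voxel index, hence density, unchanged
```

A perturbation below the voxel size thus perturbs the point-cloud alignment loss but leaves the voxel-density term untouched.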
\begin{table*}[htbp] \small \begin{center} \caption{Evaluation results on the split motion and static test sets.} \label{tab:motion} \begin{tabular}{l|c|c|c|c|c|c|c} \hline &Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline Baseline (motion)&0.118 &0.978 &5.079 &0.199&0.874&0.957&0.980\\ \hline \red{FeatDepth (motion)}&\red{0.108} &\red{0.987} &\red{4.910} &\red{0.189}&\red{0.877}&\red{0.958}&\red{0.981}\\ \hline Ours R18 LR (motion)& 0.106 & 0.790 & 4.769 & 0.186 & 0.882& 0.959& 0.982\\ \hline \red{Ours R50 HR (motion)}& \red{\textbf{0.102}} & \red{\textbf{0.734}} & \red{\textbf{4.434}} & \red{\textbf{0.179}} & \red{\textbf{0.889}}& \red{\textbf{0.963}}& \red{\textbf{0.983}}\\ \hline Baseline (static)&0.109 &0.704 &3.999 &0.173 &0.888 &\textbf{0.968} &\textbf{0.986}\\ \hline \red{FeatDepth (static)}&\red{0.104} &\red{0.632} &\red{\textbf{4.079}} &\red{0.173}&\red{\textbf{0.892}}&\red{0.967}&\red{0.985}\\ \hline Ours R18 LR (static) & 0.106 & 0.641 & 4.250 & 0.177 & 0.879 & 0.967 & 0.985\\ \hline \red{Ours R50 HR (static)} & \red{\textbf{0.103}} & \red{\textbf{0.627}} & \red{4.136} &\red{\textbf{ 0.173}} & \red{0.890} & \red{0.967} & \red{0.985}\\ \hline \end{tabular} \end{center} \end{table*} \subsubsection{DFA loss} \re{The experimental analysis of the DFA loss includes the analysis of the depth feature alignment offsets, the visualization of alignment offsets, and a comparison experiment with alignment using optical flow.} \paragraph{Analysis of depth feature alignment offset} In our DFA loss, we use OffsetNet to learn the correspondences from deep features of consecutive RGB frames and use the learned correspondences to help learn more coherent depth. In our understanding, the feature offset is a kind of flow of features, which should share some similarities with optical flow.
But they are region-wise rather than pixel-wise because the offsets are learned from deep features, which contain more semantic information and 3D geometry information. The relationship between deformable convolution offsets and optical flow is hard to formulate clearly, but research on deformable alignment in other areas \cite{chan2021understanding} is specifically dedicated to exploring this issue. They believe that deformable convolution can be decomposed into a combination of spatial warping and convolution. This decomposition reveals the commonality of deformable alignment and flow-based alignment in formulation, but with a key difference in their offset diversity. The offset diversity is closely related to the number of offset groups in the deformable convolutions. Their experiments demonstrate that the increased diversity in deformable alignment yields better aligned features, and hence significantly improves the quality of alignment. In our work, we set the number of offset groups to $8$, therefore the deformable alignment learns $8\times 2\times 3 \times 3$, i.e., $144$ sets of offsets for each pair of feature maps to represent the temporal coherence of the features of continuous frames, rather than the single offset learned in optical flow. \re{We also use kernel size $5\times5$ to obtain a more diverse and higher-dimension ($8\times 2\times 5 \times 5$, i.e., $400$) offset field and conduct a comparison experiment. The experiment results show that the variant using a larger kernel size in DCN achieves similar results to the original version, i.e., Sq Rel: 0.738 vs. 0.735 and RMSE: 4.587 vs. 0.564 for $3\times3$ and $5\times 5$, respectively. We believe that the 144-dimension offset field is enough to enlarge the receptive field and model the temporal coherence of two adjacent frames.
Taking model complexity into account, we choose kernel size $3\times 3$ in our final version.} \begin{table*}[htbp] \small \begin{center} \caption{\re{Experiment results for the hyperparameter analysis of the VDA loss. $N_x, N_y, N_z$ denote the numbers of voxels along the three axes.}} \label{voxeltab} \begin{tabular}{c|c|c|c|c|c|c|c} \hline Parameters \re{ ($N_x, N_y, N_z$)}&Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline $(20,20,24)$&0.108 & 0.855 & 4.912 & 0.183 & 0.882 & 0.961 & 0.983\\ \hline $(40,40,24)$&0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & 0.983\\ \hline $(60,60,24)$ & 0.107 & 0.737 & 4.742 & 0.184 & 0.875 & 0.960 & 0.983\\ \hline \end{tabular} \end{center} \end{table*} \paragraph{Visualization of depth feature alignment offset} \begin{figure}[t] \begin{center} \includegraphics[width=0.35\textwidth]{images/offsetfeature.jpg} \end{center} \caption{Samples of the visualization of the learned depth feature offsets.} \label{offsetfeature} \end{figure} We adopt PCA decomposition to select one set of offsets to visualize the heatmaps of the learned feature alignment offsets in Figure \ref{offsetfeature}. Hotter colors denote higher values. Our OffsetNet pays more attention to stable regions and semantic outlines such as ground and wall to learn useful alignment offsets. Moreover, the directions of offsets in the same semantic part tend to be the same, which implies the alignment offsets are learned in a ``region-to-region'' manner instead of varying with pixels. \paragraph{Comparison with alignment using optical flow}\label{OFA} To demonstrate the effectiveness of the DFA loss, we replace the DFA loss with the optical flow of the corresponding features to conduct the depth feature alignment. The evaluation result is shown in Table \ref{OFAloss}, where ``OFA loss" denotes the variant using optical flow alignment.
The evaluation results show that using optical flow as a single-channel offset to regularize depth learning is not feasible, which verifies the effectiveness of our DFA loss. \begin{figure*}[htbp] \begin{center} \begin{minipage}{\textwidth} \centering \includegraphics[width=15.5cm]{images/featComparison.pdf} \end{minipage} \end{center} \caption{\red{Qualitative comparison with FeatDepth \cite{shu2020featdepth} on the KITTI test set. The first and fourth rows are the input images. The second and fifth rows are the results of FeatDepth \cite{shu2020featdepth}. The third and last rows are the results of our method. Green boxes highlight the differences among the depth maps predicted by different methods. }} \label{fig:featcomparison} \end{figure*} \red{\paragraph{Comparison with the state-of-the-art work \cite{shu2020featdepth}} To show the superiority of our method over the previous state-of-the-art work \cite{shu2020featdepth}, we conduct comprehensive experiments in terms of qualitative results and quantitative evaluation, especially comparing their abilities in dealing with moving-object scenes and generalizing to unseen scenes. For the quantitative comparison, as shown in Table \ref{tab:1}, our method outperforms FeatDepth \cite{shu2020featdepth} in four error metrics while being inferior to it in three accuracy metrics. Generally, they perform comparably on the KITTI test set. Nevertheless, it is noteworthy that, according to Table \ref{inferencetable}, our method uses smaller models and fewer computation resources than FeatDepth \cite{shu2020featdepth} and enjoys a faster inference speed. As for the qualitative comparison, in the top three rows of Figure \ref{fig:featcomparison} we show a sequence of continuous frames from the test set.
Benefiting from the temporal alignment of depth features by the DFA loss, the depth maps predicted by our method are more temporally consistent and accurate, especially at object boundaries. Besides, the remaining rows in Figure \ref{fig:featcomparison} also validate the superiority of our method in low-texture areas, thin structures, and object boundaries. To further validate the superiority of our method in dealing with moving-object scenes, we compare it with FeatDepth \cite{shu2020featdepth} on the motion and static splits of the KITTI test set. As shown in Table \ref{tab:motion}, our method performs better than FeatDepth \cite{shu2020featdepth} on the motion split and slightly worse on the static split. It is noteworthy that we use both the ``ours R18 LR'' and ``ours R50 HR'' versions in this experiment. The results demonstrate the advantage of our method in dealing with moving-object scenes. We also compare the generalization ability of our method and FeatDepth \cite{shu2020featdepth} in Section \ref{generalization}. The quantitative results summarized in Table \ref{maketabel} show that our method outperforms FeatDepth \cite{shu2020featdepth} in all metrics. We also show the qualitative results on the Cityscapes and Make3D datasets in Figure \ref{city9} and Figure \ref{fig:make}, respectively.
} \begin{table*}[htbp] \small \begin{center} \caption{\re{Experiment results compared with the variant using optical flow alignment.}} \label{OFAloss} \begin{tabular}{l|c|c|c|c|c|c|c} \hline Methods&Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline Ours (DFA loss) & 0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & 0.983\\ \hline Ours (OFA loss)&0.119 & 0.902 & 4.860 & 0.194 & 0.863 & 0.958 & 0.982\\ \hline \end{tabular} \end{center} \end{table*}{} \begin{table*}[htbp] \small \begin{center} \caption{Evaluation results of methods with or without online refinement.}\label{tab:refinement} \begin{tabular}{l|c|c|c|c|c|c|c} \hline Methods &Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline Baseline&0.115 &0.882 &4.701 &0.190 &0.879 &0.961 &0.982\\ \hline Baseline (refinement)& 0.097 & 0.717 & 4.339 & 0.174 & 0.904 & 0.965 & 0.984 \\ \hline Ours&0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & 0.983\\ \hline Ours (refinement) & 0.089 & 0.716 & 4.243 & 0.172 & 0.911 & 0.965 & 0.985\\ \hline \end{tabular} \end{center} \end{table*}{} \begin{table}[t] \begin{minipage}{\linewidth} \footnotesize \caption{Test results on Make3D. \re{Column ``Train'' with label ``M''/``S'' means training with monocular/stereo data. }} \label{maketabel} \begin{center} \begin{tabular}{lccccc} \toprule \multirow{1}{*}{Methods}& \multirow{1}{*}{Train}& Abs Rel & Sq Rel & RMSE & log10 \\ \midrule \re{Monodepth \cite{godard2017unsupervised}} &\re{S} &\re{0.544} &\re{10.94} &\re{11.760} &\re{0.193}\\ SFMlearner \cite{zhou2017unsupervised} &M &0.383 &5.321 &10.470 &0.478\\ DDVO \cite{wang2018learning} &M &0.387 &4.720 &8.090 &0.204\\ Monodepth2 \cite{godard2019digging} &M &0.322 &3.589 &7.417 &0.163\\ \re{Monodepth2 \cite{godard2019digging}} &\re{MS} &\re{0.374} &\re{3.792} &\re{8.238} &\re{0.201}\\ \re{Johnston et al.
\cite{johnston2020self}}&\re{M}&\re{0.297} &\re{\textbf{2.902}} &\re{7.013} &\re{0.158}\\ \red{FeatDepth \cite{shu2020featdepth}}&\red{M}&\red{0.313} &\red{3.489} &\red{7.228} &\red{0.158}\\ Ours &M & 0.316 & 3.200 & 7.095 & 0.158\\ \red{Ours R50 HR} &\red{M} & \red{\textbf{0.290}} & \red{3.070} & \red{\textbf{6.902}} & \red{\textbf{0.155}}\\ \bottomrule \end{tabular} \end{center} \end{minipage} \end{table} \begin{figure} \begin{center} \begin{minipage}{0.46\textwidth} \centering \includegraphics[height=4.6cm, width=8.3cm]{images/cityscapes.png} \end{minipage} \begin{minipage}[htbp]{0.02\textwidth} \noindent \begin{turn}{90}{\fontsize{8pt}{\baselineskip}\selectfont MD2~~~\red{FeatDepth}~~ Ours~~~~~Input~} \end{turn} \end{minipage} \end{center} \caption{Qualitative results on Cityscapes \cite{cordts2016cityscapes}. Our method produces more accurate depth maps for moving objects and texture-less regions. \red{Green boxes highlight the differences among the depth maps predicted by different methods.}} \label{city9} \end{figure} \subsection{Evaluation of generalization ability}\label{generalization} Though our models were only trained on KITTI \cite{geiger2012we}, competitive results can be achieved on unseen datasets without any fine-tuning. \red{We demonstrate the generalization ability of our method using the ours (R18 LR) version.} \begin{figure} \begin{center} \begin{minipage}{0.46\textwidth} \centering \includegraphics[height=4.6cm,width=8.3cm]{images/make3d.png} \end{minipage} \begin{minipage}[htbp]{0.02\textwidth} \noindent \begin{turn}{90}{\fontsize{8pt}{\baselineskip}\selectfont MD2~\red{FeatDepth}~ Ours~~~GT~~~Input~} \end{turn} \end{minipage} \end{center} \caption{Qualitative results on Make3D \cite{saxena2008make3d}. Our method can generate more accurate depth maps.
\red{Green boxes highlight the differences among the depth maps predicted by different methods.}} \label{fig:make} \end{figure} {\bf Cityscapes.} The challenges of Cityscapes \cite{cordts2016cityscapes} mainly arise from poor lighting conditions, rainy weather, and moving objects. We cropped out the bottom part of the original images to remove the car hoods. The generated depth maps show the good domain adaptation ability of our models. Compared with other state-of-the-art approaches, \red{Monodepth2 \cite{godard2019digging} and FeatDepth \cite{shu2020featdepth}}, our method is more accurate at perceiving moving or distant objects and delicate structures, as shown in Figure \ref{city9}\red{, especially in the parts highlighted by the green boxes.} {\bf Make3D.} We conducted center cropping and scaling \cite{godard2019digging} to align the input images with the ground truth \cite{saxena2008make3d}. In Table \ref{maketabel}, our results outperform our baseline \cite{godard2019digging}, and the qualitative comparison in Figure \ref{fig:make} provides additional intuitive evidence of the generalization ability of our method. \re{According to the results in Table \ref{maketabel}, self-supervised monocular methods usually show better generalization ability than self-supervised stereo or hybrid training methods. Although our performance is inferior to \cite{johnston2020self} in some metrics, their model is much larger than ours, i.e., 51.34M vs. 14.33M, which may be a key factor in improving a method's generalization ability.
\red{Compared with FeatDepth \cite{shu2020featdepth}, our method exceeds it in three metrics while using smaller models, showing our superiority in generalization ability.}} \begin{table}[t] \begin{minipage}{\linewidth} \scriptsize \caption{Comparison of model complexity.}\label{inferencetable} \small \begin{center} \begin{tabular}{l|c|c|c} \hline &Params (M)$\downarrow$ &FLOPs (G)$\downarrow$ &FPS$\uparrow$ \\ \hline Monodepth2 \cite{godard2019digging}&14.33 &8.03 &184.5 \\ \hline Ours (R18 LR) &14.33 &8.03 &184.5 \\ \hline FeatDepth \cite{shu2020featdepth}&33.16 &85.11 & 59.6\\ \hline \re{Ours (R50 HR)} &\re{32.52} &\re{44.31} &\re{87.3} \\ \hline \end{tabular} \end{center} \end{minipage} \end{table} \subsection{Online refinement} Our method utilizes the temporal coherence among consecutive video frames via the proposed cross-view consistency to improve monocular depth estimation. However, the current test protocol proposed by prior works \cite{eigen2014depth,zhou2017unsupervised} is for single-frame depth estimation. To demonstrate the ability of our method to learn temporal coherence, we adopt the online refinement technique following the approach proposed by \cite{casser2019depth}. Because no ground truth depth supervision is needed in the self-supervised training paradigm, it is natural to keep training the network's parameters during inference using the same losses. We thus update the model when performing inference. Each input batch during test-time refinement includes the test frame $I_t$ and the nearby frames $I_{t-1}, I_{t+1}$. This online refinement approach can fully utilize the cross-view consistency among test frames and their adjacent frames, which improves the test results significantly. The quantitative results of different methods with online refinement are reported in Table \ref{tab:refinement}.
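The test-time refinement loop described above can be sketched in a few lines. The toy blend model and loss below are illustrative stand-ins, not our actual networks or losses; they only show the pattern of continuing to optimize the same self-supervised objective on the batch $(I_{t-1}, I_t, I_{t+1})$ at inference:

```python
def self_supervised_loss(params, frames):
    """Toy stand-in for the self-supervised losses: reconstruct the test
    frame I_t as a learned blend of its neighbors I_{t-1} and I_{t+1}."""
    prev_f, test_f, next_f = frames
    w0, w1 = params
    return sum((w0 * a + w1 * b - t) ** 2
               for a, t, b in zip(prev_f, test_f, next_f)) / len(test_f)

def online_refine(params, frames, lr=0.05, steps=50, eps=1e-4):
    """Test-time refinement: keep updating the parameters on the batch
    (I_{t-1}, I_t, I_{t+1}) with the same self-supervised loss.
    A numerical gradient is used here; a real system backpropagates."""
    params = list(params)
    for _ in range(steps):
        grads = []
        for i in range(len(params)):
            hi, lo = params[:], params[:]
            hi[i] += eps
            lo[i] -= eps
            grads.append((self_supervised_loss(hi, frames) -
                          self_supervised_loss(lo, frames)) / (2 * eps))
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

import random
random.seed(0)
prev_f = [random.random() for _ in range(100)]
next_f = [random.random() for _ in range(100)]
test_f = [0.5 * a + 0.5 * b for a, b in zip(prev_f, next_f)]  # true blend
frames = (prev_f, test_f, next_f)

loss_before = self_supervised_loss([0.1, 0.9], frames)
refined = online_refine([0.1, 0.9], frames)
loss_after = self_supervised_loss(refined, frames)  # drops after refinement
```

Because the loss needs no ground truth, refinement of this kind can be run on every test sequence; the trade-off is extra computation per frame at inference time.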
\iffalse \begin{figure*} \begin{center} \begin{minipage}{0.96\textwidth} \centering \includegraphics[width=13cm]{image_supp/qualitative.jpg} \end{minipage} \begin{minipage}[htbp]{0.02\textwidth} \noindent \begin{turn}{90}{\fontsize{8pt}{\baselineskip}\selectfont Df~~~~~~~V2D~~~~~~ MD2~~~~~Ours~~~~~~Input} \end{turn} \end{minipage} \end{center} \caption{Qualitative results on KITTI test set. Our method produces more accurate depth maps in low-texture region and moving instances compared with other methods Monodepth2 (MD2) \cite{godard2019digging}, V2D \cite{mahjourian2018unsupervised}, and Df \cite{zou2018df}} \label{fig:1} \end{figure*} \fi \subsection{Model complexity analysis} Since depth estimation methods are often used in autonomous driving or drone systems, model size and inference speed are very important. Therefore, we compare our model with prior works in terms of parameters (M), computations (FLOPs), and inference speed (FPS), as shown in Table \ref{inferencetable}. Our method introduces two cross-view consistency losses to regularize the temporal coherence during training, which are not used during inference. Therefore, the model complexity of our method is identical to that of our baseline. \iffalse \begin{figure}\label{fig:offsetvisual} \begin{center} \begin{minipage}[h]{0.5\textwidth} \centering \includegraphics[width=8cm]{image_supp/offsetvisual.jpg} \end{minipage} \end{center} \caption{Visualization of the learned feature offset by OffsetNet.} \end{figure} \fi \section{Conclusion and discussion}\label{conclusion} This study is dedicated to the SS-MDE problem with a focus on robust cross-view consistency. We first propose the DFA loss to exploit the temporal coherence in feature space to produce consistent depth estimation.
Compared with the photometric loss in the RGB space, measuring the cross-view consistency in the depth feature space is more robust in challenging cases such as illumination variance and texture-less regions, owing to the representation power of deep features. Moreover, we design the VDA loss to exploit robust cross-view 3D geometry consistency by aligning point cloud distributions in the voxel space. \re{The VDA loss has been shown to be more effective} in handling moving objects and occlusion regions than the rigid point cloud alignment loss. Experimental results \re{on outdoor benchmarks} demonstrate that our method achieves superior results to state-of-the-art approaches and can generate better depth maps in texture-less regions and moving-object areas. \re{More efforts can be made to improve the voxelization method in the VDA loss to enhance the generalization ability and to apply the proposed method to indoor scenes, which we leave for future work.} \bibliographystyle{IEEEtran} \section{Introduction} Understanding the 3D structure of scenes is an essential topic in machine perception, which plays a crucial part in autonomous driving and robot vision. Traditionally, this task can be accomplished by Structure from Motion with multi-view or binocular stereo inputs \cite{bjorkman2002real}. Since stereo images are more expensive and inconvenient to acquire than monocular ones, solutions based on monocular vision have attracted increasing attention from the community. However, monocular depth estimation is generally more challenging than stereo methods due to scale ambiguity and unknown camera motion. Several works \cite{eigen2014depth, godard2017unsupervised} have been proposed to narrow the performance gap. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{images/photometric_loss.png} \end{center} \caption{Visualization of the photometric loss.
The first row is the reference image, and the second and third rows are the images warped from adjacent frames using the ground truth depth and pose.} \label{fig:photometric} \end{figure} \begin{figure*} \begin{center} \begin{minipage}[htbp]{\textwidth} \centering \includegraphics[width=14.2cm]{images/startingPic.jpg} \end{minipage} \end{center} \caption{Comparison of the prior ``point-to-point'' alignment paradigm with our ``region-to-region'' one. \re{We propose the ``region-to-region'' alignment paradigm by enforcing photometric consistency at the feature level (a) and replacing point cloud alignment with voxel density alignment in 3D space (b).}} \label{fig:1} \end{figure*} Recently, with the unprecedented success of deep learning in computer vision \cite{he2016deep,dosovitskiy2020image}, Convolutional Neural Networks (CNNs) \cite{he2016deep} have achieved promising results in the field of depth estimation. In the paradigm of supervised learning, depth estimation is usually regarded as a regression or classification problem \cite{eigen2014depth,fu2018deep}, which requires expensive labeled datasets. By contrast, there are also some successful attempts \cite{zhou2017unsupervised,mahjourian2018unsupervised,godard2019digging} to perform monocular depth estimation and visual odometry prediction together in a self-supervised manner by utilizing the cross-view consistency between consecutive frames. In most prior works of this pipeline, two networks are used to predict the depth and the camera pose separately, which are then jointly exploited to warp source frames to the reference ones, thereby converting the depth estimation problem into a photometric error minimization process. The essence of this paradigm is utilizing the cross-view geometry consistency in videos to regularize the joint learning of depth and pose. Previous SS-MDE works have proved the effectiveness of the photometric loss among consecutive frames, but it is quite vulnerable, and even problematic, in some cases.
First, photometric consistency is based on the assumption that the pixel intensities projected from the same 3D point in different frames are constant, which is easily violated by illumination variance, reflective surfaces, and texture-less regions. Second, natural scenes always contain some dynamic objects, which generate occlusion areas and thus also break photometric consistency. To demonstrate the vulnerability of the photometric loss, we conduct a preliminary study on Virtual KITTI \cite{cabon2020virtual} because it has dense ground truth depth maps and precise poses. As shown in Figure \ref{fig:photometric}, even though the ground truth depth and pose are used, the photometric loss map is still nonzero due to factors such as occlusions, illumination variance, and dynamic objects. To address this problem, perceptual losses are used in recent work \cite{shu2020featdepth}. In line with this research direction, our work is dedicated to exploring more robust cross-view consistency losses that strengthen the self-supervision signal and mitigate the side effects of these challenging cases. We first propose a Depth Feature Alignment (DFA) loss, which learns feature offsets between consecutive frames by reconstructing the reference frames from their adjacent frames via deformable alignment. These feature offsets are then used to align the temporal depth features. In this way, we utilize the consistency between adjacent frames via feature-level representations, which are more representative and discriminative than pixel intensities. As shown in Figure \ref{fig:1} (a), comparing photometric intensities between consecutive frames can be problematic: the intensities in the region surrounding the target pixel are very close, and this ambiguity can easily cause mismatches.
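A minimal, self-contained sketch of the idea behind the DFA loss (toy Python, not our implementation): offsets recovered from RGB frames are reused to align the corresponding depth features. Here a known global 2-pixel shift stands in for the offsets learned by OffsetNet, and nearest-neighbor sampling replaces the bilinear sampling of deformable convolutions:

```python
def align_with_offsets(feat, offsets):
    """Resample a 2D map at per-pixel offset locations (nearest-neighbor
    with border clamping) -- the role one offset group plays in
    deformable alignment."""
    h, w = len(feat), len(feat[0])
    clamp = lambda v, hi: max(0, min(hi, v))
    return [[feat[clamp(y + offsets[y][x][0], h - 1)]
                 [clamp(x + offsets[y][x][1], w - 1)]
             for x in range(w)] for y in range(h)]

H = W = 8
rgb_t = [[float(y * W + x) for x in range(W)] for y in range(H)]
shift = 2  # toy camera motion: frame t+m sees the scene shifted right
rgb_tm = [[rgb_t[y][max(x - shift, 0)] for x in range(W)] for y in range(H)]
depth_t = [[0.1 * v for v in row] for row in rgb_t]    # depth moves with
depth_tm = [[0.1 * v for v in row] for row in rgb_tm]  # the same 3D points

# Suppose the offsets recovered from the RGB frames are exact; DFA reuses
# exactly these offsets to align the depth features of frame t+m to frame t.
offsets = [[(0, shift) for _ in range(W)] for _ in range(H)]
aligned_depth = align_with_offsets(depth_tm, offsets)

# Away from the border, the RGB-derived offsets align depth perfectly.
err = max(abs(aligned_depth[y][x] - depth_t[y][x])
          for y in range(H) for x in range(W - shift))
```

The point of the sketch is the transfer: because RGB and depth move with the same 3D scene points, one offset field can serve both modalities, which is what the DFLoss exploits.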
Besides, prior work \cite{mahjourian2018unsupervised} proposes an ICP-based point cloud alignment loss that utilizes 3D geometry to enforce cross-view consistency, which is useful to alleviate the ambiguity of 2D pixels. However, rigid 3D point cloud alignment cannot work properly in scenes with object motion and the resulting occlusion, as shown in Figure \ref{fig:1} (b), and is therefore sensitive to local object motion. In order to make the model more robust to moving objects and the resulting occlusion areas, we propose voxel density as a new 3D representation and define a Voxel Density Alignment (VDA) loss to enforce cross-view consistency. Our VDA loss regards a point cloud as an integral spatial distribution. It only enforces the numbers of points inside corresponding voxels of adjacent frames (voxels in the same color in Figure \ref{fig:1} (b)) to be consistent, and it does not penalize small spatial perturbations, since the points still stay in the same voxels. These two cross-view consistency losses exploit the temporal coherence in the depth feature space and the 3D voxel space for SS-MDE, both shifting the prior ``point-to-point'' alignment paradigm to a ``region-to-region'' one. Our method achieves superior results to the state-of-the-art (SOTA). We conduct ablation experiments to demonstrate the effectiveness and robustness of the proposed losses. \section{Related Work} The SS-MDE paradigm has become very popular in the community, as it mainly takes advantage of the cross-view consistency in monocular videos. In this section, we review the different categories of cross-view consistency used in previous self-supervised monocular depth estimation works.
\subsection{Photometric Cross-view Consistency} Photometric cross-view consistency can be traced back to the direct method in SLAM (Simultaneous Localization and Mapping), which optimizes camera poses by minimizing the reprojection error; it skips the feature point extraction step of traditional methods and depends only on differences in pixel intensity. SFM-learner \cite{zhou2017unsupervised} is one of the first attempts to propose a self-supervised end-to-end network for training with monocular videos, which can jointly predict the depth and the pose between consecutive frames. The core technique is using a spatial transformer network \cite{jaderberg2015spatial} to synthesize reference frames from source frames, which converts the depth estimation problem into a reprojection photometric error minimization process. Geonet \cite{yin2018geonet} designs a joint learning framework for monocular depth, optical flow, and ego-motion estimation, which combines flow consistency with photometric consistency to model cross-view consistency. DF-Net \cite{zou2018df} also leverages pixel-level consistency among multiple tasks, including depth, optical flow, and motion segmentation estimation. Gordon et al. \cite{Gordon_2019_ICCV} improve the photometric loss by simultaneously predicting a translation field and an occlusion-aware mask to exclude object motion and occlusion regions, respectively. However, the occlusion-aware loss is calculated by comparing predicted depth values in consecutive frames, which is easily affected by the inaccuracy of the estimated depth.
\re{SGDdepth \cite{klingner2020self} introduces semantic guidance to solve photometric consistency violations caused by dynamic objects, via jointly learning depth estimation with a supervised semantic segmentation task.} Monodepth2 \cite{godard2019digging} also proposes several schemes to improve the effectiveness of the photometric loss, including an auto-masking loss and a minimum reprojection loss, yielding more accurate results. However, we believe that areas with lower photometric loss cannot necessarily guarantee more accurate depth and pose, because of the ambiguity and low discriminability of the photometric consistency loss; this is in line with the motivation of FeatDepth \cite{shu2020featdepth}. Moreover, the pixel-level photometric consistency may become invalid in challenging cases like moving-object regions and the resulting occlusion areas. \subsection{Feature-level Cross-view Consistency} Therefore, some researchers have started to explore cross-view consistency at the feature level. Depth-VO-Feat \cite{zhan2018unsupervised} is a pioneering work that explores the combination of photometric consistency and feature-level consistency to generate temporally consistent depth estimation, taking binocular videos as input. Kumar et al. \cite{cs2018monocular} first combine Generative Adversarial Networks (GANs) with the self-supervised depth estimation architecture and use a discriminator to distinguish the synthetic reference frames from the real ones, which can also be regarded as a feature-level constraint. Following it, several more works have been proposed to improve the GAN-based feature-level consistency loss \cite{wang2020adversarial,zhao2020masked}. \re{Besides utilizing deep representations learned in the depth estimation task itself, imposing semantic-aware features to enhance or align the depth feature representation is a promising direction.
Recent works \cite{li2021learning,Jung_2021_ICCV} propose to incorporate the semantic segmentation task to impose both feature-level implicit guidance and pixel-level explicit constraints.} \re{Besides}, FeatDepth \cite{shu2020featdepth} learns specific feature representations with a separate auto-encoder network in order to constrain cross-view consistency in feature space. These attempts prove the effectiveness of utilizing feature-level representations for cross-view consistency. We also explore feature-level consistency in the depth feature space, but we leverage the feature offsets estimated from temporal image frames to regularize the temporal coherence of the estimated depth maps via the Depth Feature Alignment (DFA) loss. \subsection{3D Space Cross-view Consistency} Besides the exploration of cross-view consistency in feature space, many works introduce additional 3D information to constrain geometric consistency. LEGO \cite{yang2018lego} presents a self-supervised framework to jointly estimate depth, normal, and edge, and uses the normal as an additional 3D constraint to strengthen the cross-view consistency constraint. Luo et al. \cite{luo2020consistent} leverage optical flow to find the corresponding 3D points in other frames and then build long-term 3D geometric constraints. Similarly, GLNet \cite{chen2019self} simultaneously predicts depth and optical flow, and utilizes the predicted flow to couple 3D points and construct 3D point consistency and epipolar constraints. These two methods naturally build cross-view consistency in 3D space by imposing flow information among continuous frames; however, their accuracy heavily relies on flow estimation, which is itself an unsolved problem in various scenes, especially those including occlusion and moving objects.
Previous self-supervised depth estimation works have imposed additional information (e.g., normals and optical flow \cite{luo2020consistent}) to help enhance geometric constraints, but have hardly used 3D representations for 3D-space cross-view consistency. Vid2depth \cite{mahjourian2018unsupervised} constrains temporal cross-view consistency via 3D point cloud alignment based on the differentiable ICP method, which imposes 3D geometry information in the learning pipeline. However, point cloud alignment is a strict constraint that aligns corresponding 3D points, and this loss is very sensitive to point positions, which makes it fragile in scenes with moving objects and the resulting occlusion regions. By contrast, we propose the Voxel Density Alignment (VDA) loss to impose 3D geometric information, which is robust and tolerant to the above challenging cases. \section{METHODS} \subsection{Preliminary} \begin{figure*} \begin{center} \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=16cm]{images/frameF.pdf} \end{minipage} \end{center} \caption{An illustration of our learning framework, which consists of DepthNet, PoseNet, and OffsetNet for depth estimation, pose estimation, and alignment offset learning, respectively. OffsetNet learns a feature alignment offset field using a self-supervised loss calculated by reconstructing the reference view from adjacent views with deformable convolutions. The learned offset field is then used to align the temporal depth features learned by DepthNet. \re{The three branches in the framework are jointly optimized during training, while only DepthNet is used during inference.}} \label{fig:network} \end{figure*} \paragraph{Camera model} The process of a camera mapping a point in 3D space to the 2D image plane can be described by a geometric model, most commonly the pinhole camera model.
The mapping of a 3D point $P=(X, Y, Z)$ and its corresponding 2D point $p=(u,v)$ can be described as: \begin{equation} D(p) \begin{bmatrix} u \\ v\\1\end{bmatrix} = \begin{bmatrix} K\big|\textbf{0}\end{bmatrix}\begin{bmatrix} X \\ Y\\Z\\1\end{bmatrix}, \\ ~{\rm where} ~ K=\begin{bmatrix} f_x &0&u_0\\ 0&f_y&v_0\\0&0&1\end{bmatrix}. \label{eq:cameraProj} \end{equation} Matrix $K$ is the camera intrinsic matrix. $D(p)$ is the depth value at point $p$, i.e., the learning target of depth estimation task. Once the point $p$ and its depth value $D(p)$ are known, we can backproject it to get the corresponding 3D point $P$: \begin{equation} P = D(p)K^{-1}p. \end{equation}{} \paragraph{2D cross-view consistency} The essence of SS-MDE is using cross-view (in consecutive or stereo frames) consistency as the self-supervision signal. The most commonly used one is the photometric consistency, i.e.\re{,} assuming the intensity of 3D point $P$ projected in $I_t$ and $I_{t+m}$ is invariant: \begin{equation} I_t(p_t)=I_{t+m}(p_{t+m}). \end{equation}{}The projection point $p_{t+m}$ of $P$ in frame $I_{t+m}$ can be calculated from $p_t$ in frame $I_{t}$ and its depth $D(p_t)$, with the estimated transformation $T_{t\rightarrow t+m}$, by a differentiable warping function $\omega$: \begin{equation} \begin{aligned} p_{t+m}\sim\omega\left( KT_{t\rightarrow t+m}D(p_t)K^{-1}p_t\right). \end{aligned} \end{equation}{}Thus, the frame $\hat{I}_{t+m\rightarrow t}$ can be reconstructed from frame $I_{t+m}$: \begin{equation} \hat{I}_{t+m\rightarrow t}(p) = I_{t+m}(p_{t+m}). \end{equation}The photometric error minimization process is used to optimize depth and ego-motion estimation: \begin{equation} L_{ph} =\sum_{p\in I_t}\left|I_{t}(p)-\hat{I}_{t+m\rightarrow t}(p)\right|. 
\end{equation}{} \re{The photometric error adopted in previous works is usually a weighted sum of the L1 and SSIM differences.} To overcome depth discontinuity, a smoothing term is often incorporated to regularize the depth maps in many previous works \cite{godard2017unsupervised,godard2019digging}: \begin{equation} L_{sm} = \left|\partial_x\mu_{D_t}\right|e^{-|\partial_xI_t|}+\left|\partial_y\mu_{D_t}\right|e^{-|\partial_yI_t|}, \end{equation} where $\mu_{D_t}$ is the inverse depth normalized by the mean depth. \re{ $\partial_x\mu_{D_t}$ and $\partial_y\mu_{D_t}$ denote the disparity gradients along the two directions.} Although the photometric consistency effectively models the depth estimation task as a self-supervised problem, the photometric metric, built on the intensity invariance hypothesis, is neither stable nor robust, especially in complex outdoor scenes. Cases like moving objects, occlusion, and texture-less regions will mislead the optimization. Therefore, we propose two new cross-view consistency supervision signals from the perspectives of the deep feature space and the 3D space. \subsection{Depth feature alignment loss} The motivation of the DFA loss is that the coherence of consecutive depth frames is the same as the coherence of consecutive RGB frames, since the 2D pixel movement in either RGB or depth frames corresponds to the movement of the same 3D scene point, following the same projection described by Eq.~\eqref{eq:cameraProj}. As shown in Figure \ref{fig:correspondence}, the correspondence learned from the RGB images can guide the maintenance of this correspondence during depth estimation. For example, a 3D point (red dot) located on the edge of a black car has many distinctive properties, such as an obvious depth change around it. In the next frame, it is still an edge point and still has these properties. It is very natural to use optical flow to model the inter-frame movement.
However, as discussed above, 2D photometric information is ambiguous and unreliable, which may also degrade optical flow estimation. Therefore, we propose to learn cross-view consistency from the feature representations of RGB frames to guide the depth learning among corresponding frames. \begin{figure} \begin{center} \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[width=7cm]{images/correspondence.png} \end{minipage} \end{center} \caption{Illustration of the guidance from the correspondence in RGB images to the correspondence in depth.} \label{fig:correspondence} \end{figure} Different from \cite{shu2020featdepth}, which uses a separate network to learn feature representations from RGB frames and aligns consecutive frames via differentiable feature warping, we learn the temporal feature alignment by reconstructing the reference frame from its adjacent frames via deformable convolution networks \cite{dai2017deformable,tian2020tdan} in a fully self-supervised manner. Given consecutive frames $I_t$ and $I_{t+m}$, the feature representations $F_t$ and $F_{t+m}$ are first learned via a feature extractor $\Phi$: $F_i = \Phi(I_i)$. The features extracted from adjacent frames are then fed into the deformable alignment network to learn the alignment offset $\Theta_{t+m\rightarrow t}$ between $F_t$ \re{and $F_{t+m}$ and obtain the aligned feature $\hat{F}_{t+m\rightarrow t}$}: \begin{equation} \begin{aligned} \hat{F}_{t+m\rightarrow t},\Theta_{t+m\rightarrow t} = f_{\re{align}}(F_{t+m},F_t), \end{aligned} \end{equation} \re{where $f_{align}$ denotes the deformable alignment network, which consists of regular convolutions and deformable convolutions. A convolutional layer in the deformable convolution is responsible for learning the 2D offsets, and the deformed feature map is produced by bilinear interpolation.
For each position $p$ on the aligned feature map $\hat{F}_{t+m\rightarrow t}$, it is calculated as:} \begin{equation} \begin{aligned} \hat{F}_{t+m\rightarrow t}(p) = \sum_{k\in \Omega} \gamma(p_k)F_{t+m}(p+p_k+\Delta p_k). \end{aligned} \end{equation} \re{$\Theta_{t+m\rightarrow t} = \{\Delta p_k|k=1,...,|\Omega|\}$ denotes the offsets learned by the deformable convolution, $\Omega$ denotes the sampling grid of the convolution kernel, $\gamma(p_k)$ is the kernel weight at $p_k$, and $\Delta p_k$ is the $k$-th learned additional offset applied at location $p+p_k$. $p_k$ is the $k$-th sampling offset of a standard convolution with a kernel size of $n \times n$. For example, when $n=3$, we have $p_k \in \{(-1,-1),(-1,0),...,(1,1)\}$. The overall offset field learned by the deformable alignment network is a vector with $G\times 2N$ dimensions for each pair of input images. $G$ is the deformable group number, which is set to 8 in our work. $2N$ represents the channels of each group offset field, where the offset of each point is a two-dimensional vector representing the offset values in the x-direction and the y-direction, respectively. $N$ is the square of the kernel size, i.e., $N = n\times n$.} In this way, the aligned feature $\hat{F}_{t+m\rightarrow t}$ is obtained, and the reference frame can be reconstructed from it: \begin{equation} \hat I_{t+m\rightarrow t} = Re(\hat F_{t+m\rightarrow t}), \end{equation} where $Re$ denotes a reconstruction network simply consisting of three \re{convolution layers}. The feature alignment offset $\Theta_{t+m\rightarrow t}$ can be learned by minimizing the difference between the reconstructed and original reference frame, namely ReconLoss: \begin{equation} L_{RE} = \left\|I_t - \hat I_{t+m\rightarrow t}\right\|^2. \end{equation} The learned offset is then used to conduct the temporal alignment of the corresponding depth features: \begin{equation} \hat F^D_{t+m} = f_{dc}\left(F^D_{t+m},\Theta_{t+m\rightarrow t}\right).
\end{equation} Here, the deformable convolution $f_{dc}$ and $\Theta_{t+m\rightarrow t}$ are the same as the ones used in the RGB feature alignment to take advantage of the prior of feature alignment coherence. The aligned depth feature $\hat F^D_{t+m}$ is enforced to be consistent with the estimated depth feature $F^D_t$ via DFLoss $L_{DF}$: \begin{equation} L_{DF}=\left\|F^D_t-\hat F^D_{t+m}\right\|^2. \end{equation} Our DFA loss is a combination of the ReconLoss and DFLoss: \begin{equation} L_{DFA}= L_{RE} + L_{DF}. \end{equation} The key process of DFA loss is shown in Figure \ref{fig:offsetcore}. \begin{figure} \begin{center} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[height=5.8cm, width=6.4cm]{images/offset.png} \end{minipage} \end{center} \caption{Illustration of the key process of OffsetNet, which aims to learn feature alignment offsets from RGB frames. The learned offsets are then used to align the depth features.} \label{fig:offsetcore} \end{figure} DFA loss governs consistent depth estimation using the temporal coherence learned from features instead of 2D photometric information. It helps overcome the vulnerability of the photometric loss in cases like illumination variance, because the feature-level alignment models the temporal alignment non-locally compared with the pixel-wise photometric alignment. \red{\subsubsection{Analysis of DFA loss}\label{DFAlossAnalysis} The cross-view consistency is usually enforced via warping among adjacent frames, but DFA loss adopts an OffsetNet branch for learning the temporal alignment using the offset field from deformable convolutions. The previous excellent work FeatDepth \cite{shu2020featdepth} also utilizes the feature representation to improve the cross-view consistency. However, the extracted features of the input images are just an additional representation of the photometric measurement.
The cross-view alignment is still conducted by warping among consecutive frames, which highly relies on the predicted poses and makes the method less robust to many challenging cases, e.g., low-texture regions and regions with varying illumination. By contrast, our DFA loss adopts the OffsetNet branch to learn the cross-view alignment during training, and the learned feature-metric alignment is independent of the predicted poses, which makes our method more robust to the above challenging cases and able to predict more temporally consistent and accurate depth. } \subsection{Voxel density alignment loss} Due to the vulnerability of pixel-wise consistency supervision, Vid2depth \cite{mahjourian2018unsupervised} first imposes 3D constraints by aligning the two point clouds estimated in adjacent frames via Iterative Closest Point (ICP). Enforcing the 3D geometry consistency of adjacent views seems reasonable and could be more effective than 2D consistency. However, this point alignment constraint is too strict to be robust in challenging scenes with moving objects and occlusion. We thus propose the Voxel Density Alignment (VDA) loss as a new 3D cross-view consistency supervision that is robust to these challenging cases. Intuitively, the whole 3D space can be divided into the same number of voxels among consecutive views. Our VDA loss enforces the number of 3D points in corresponding voxels to be consistent between adjacent frames instead of forcing every corresponding point to be aligned. This means our VDA loss is less affected by local object motion and occlusion. To calculate the voxel density, we divide the 3D space into $N = N_x \times N_y \times N_z$ voxels\re{, using the Cartesian coordinate system shown in Figure \ref{fig:network}, with the x- and z-axes being horizontal and the y-axis being vertical}.
Point $P_i(x_i,y_i,z_i)\in \mathbb{R}^3$ in point cloud $PC= \lbrace{P_i}\rbrace^{n}_{i=1}$ will fall into voxel $V_j$, if $x_i\in[a_j,a_j+\Delta a), y_i\in[b_j,b_j+\Delta b), z_i\in[c_j,c_j+\Delta c)$, where $(a_j, b_j, c_j, \Delta a, \Delta b, \Delta c)$ are a set of parameters representing the spatial range of voxel $V_j$. Then, the voxel density can be calculated as: \begin{equation} C(V_j) = \sum^n_{i = 1}[P_i\in V_j],~~VD(V_j)= C(V_j)/n. \end{equation} Here, $[\cdot]$ is the Iverson bracket. $[v]$ is defined to be $1$ if $v$ is true, and $0$ if it is false. $C(V_j)$ is a counting operation to obtain the number of points inside voxel $V_j$ and $n$ is the total number of 3D points. This is the naive implementation of VDA loss, which is easily understood but not differentiable due to the counting operation. We thus develop another technique to implement it in a differentiable and more efficient manner. Specifically, once the depth map of a frame is estimated, the point cloud is easy to obtain. We first calculate a voxel index for each 3D point $P(x,y,z)$ according to the 3D position of the point: \begin{equation}\label{Voxelization} \begin{aligned} \nu(P) = \left\lfloor\frac{x-x_{min}}{\Delta x}\right\rfloor+\left\lfloor\frac{y-y_{min}}{\Delta y}\right\rfloor N_x+\left\lfloor\frac{z-z_{min}}{\Delta z}\right\rfloor N_xN_y. \end{aligned} \end{equation} Here, $N_x$, $N_y$, $N_z$ are the numbers of voxels along each axis, and $\Delta x = \frac{x_{max}-x_{min}}{N_x}$, $\Delta y = \frac{y_{max}-y_{min}}{N_y}$, $\Delta z = \frac{z_{max}-z_{min}}{N_z}$ are the shape parameters of the voxels. Thus, point cloud $PC= \lbrace{P_i}\rbrace^{n}_{i=1}$ can be expressed as an $n$-dimensional vector $V$. We then calculate the number of points in each voxel.
We devise a function $g:\mathbb{R}^n \to \mathbb{R}^N$ to map $V$ to a counting vector $C= \lbrace{C_i}\rbrace^{N}_{i=1}$: \begin{equation} C_i = g_i(V)= n-\left\|sign(|V-i|)\right\|_1. \end{equation} In this way, the 3D point cloud can be represented as a voxel density vector: $\rho = C/n$. In conclusion, the calculation of the voxel density vector of frame $I_t$ from its estimated point cloud $PC_t$ is: \begin{equation} \rho _t = \frac{1}{n} g(\nu(PC_t)). \end{equation} \re{$sign$ denotes the sign function:} \re{\begin{equation} sign(x) =\left\{ \begin{array}{lr} 1, & if\ x>0,\\ 0, & if\ x=0,\\ -1,& if\ x<0 . \end{array} \right. \end{equation}} We refer to the Straight Through Estimator (STE) \cite{yin2018understanding} to implement the sign function differentiably \re{and obtain valid gradients during training}: \re{ \begin{equation} \left\{ \begin{array}{lr} sign(r), & fp\\ Htanh(r)=Clip(r,-1,1)=max(-1,min(1,r)),& bp \end{array} \right. \end{equation} here, $fp$ and $bp$ denote the forward pass and the back-propagation process, respectively, and $r=(x-\frac{1}{2})\times 2$.} The voxel density in one voxel can be regarded as the probability of a 3D point being situated in this 3D region. This representation of the 3D point cloud is a more non-local and holistic expression of the 3D geometry. To exploit the temporal coherence in voxel space, our VDA loss adopts the KL divergence to measure the discrepancy between the voxel density vectors calculated from adjacent frames: \begin{equation} L_{VD\re{A}}=D_{KL}\left(\rho_t||\rho_{t+m\rightarrow t}\right)= \sum _{i=1}^{N}\rho_{t}(i) \log\left(\frac{\rho_t(i)}{\rho_{t+m\rightarrow t}(i)}\right). \end{equation} Here, $\rho_{t+m\rightarrow t} = \frac{1}{n}g(\nu(P_{t+m\rightarrow t}))$. $P_{t+m\rightarrow t}$ is the point cloud transformed from the frame $I_{t+m}$.
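As a concrete illustration, the voxel-density computation and the KL-based comparison above can be sketched in NumPy. This toy version uses the naive (non-differentiable) counting via `bincount` rather than the sign/STE trick, and all function names, bounds, and voxel counts below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def voxel_density(points, bounds, n_vox):
    """Voxelize a point cloud and return the normalized per-voxel
    point counts, i.e., the voxel density vector rho = C / n."""
    mins, maxs = bounds                      # assumed spatial extent of the scene
    n_vox = np.asarray(n_vox)
    delta = (maxs - mins) / n_vox            # voxel edge lengths (Delta x, y, z)
    idx3 = np.floor((points - mins) / delta).astype(int)
    idx3 = np.clip(idx3, 0, n_vox - 1)       # keep border points inside the grid
    # flattened voxel index nu(P) = ix + iy * Nx + iz * Nx * Ny
    flat = idx3[:, 0] + idx3[:, 1] * n_vox[0] + idx3[:, 2] * n_vox[0] * n_vox[1]
    counts = np.bincount(flat, minlength=int(np.prod(n_vox)))
    return counts / len(points)

def vda_loss(rho_t, rho_s, eps=1e-8):
    """KL divergence between the voxel density vectors of two views."""
    mask = rho_t > 0                         # 0 * log(0 / x) contributes nothing
    return float(np.sum(rho_t[mask] * np.log((rho_t[mask] + eps) / (rho_s[mask] + eps))))
```

Identical point clouds give zero loss, and a perturbation much smaller than the voxel size leaves the density, and hence the loss, essentially unchanged.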
By aligning the distribution of the 3D point cloud using the voxel density representation, we shift the prior ``point-to-point'' alignment paradigm to a ``region-to-region'' one to constrain cross-view consistency, which is more robust and tolerant to challenging cases like moving objects and occlusion. \subsubsection{Analysis of VDA loss} The effectiveness of our VDA loss mainly stems from the robust representation, i.e.\re{,} the voxel density. Here, we analyze the merit of the voxel density representation. Given two consecutive frames, the 3D geometry estimated from the two frames should be totally consistent after the ego-motion transformation if there is no inconsistent perturbation. However, in natural scenes, especially outdoor scenarios, violation cases such as moving people and vehicles are quite common. Assuming we have $I_t$, $I_{t+\re{m}}$, their estimated depth maps $D_t$, $D_{t+\re{m}}$, and their ego-motion transformation $T$, the most commonly used representation to depict the 3D geometry is the point clouds $P_t$ and $P_{t+m}$: \begin{equation} P_t = D_t K^{-1}I_t,~~ P_{t+\re{m}} = D_{t+\re{m}}K^{-1}I_{t+\re{m}},~~ \hat P_t = T P_{t+\re{m}}. \end{equation} The prior point cloud loss measures the inconsistency of the two estimated 3D point clouds via the L1 norm: \begin{equation} L_{pc} = \sum^n_{i=1}\left\|P_t(i) - \hat P_t(i)\right\|_1. \end{equation} The L1 norm is sensitive to each element, which means this loss enforces each point to align with its corresponding point rigidly. Sometimes there is object motion between the two frames, which means a small perturbation exists on some points. Assuming point $P_1(x_1,y_1,z_1)$ moves $\delta_{x_1}, \delta_{y_1}, \delta_{z_1}$ along the $x, y, z$ axes, respectively, the cross-view consistency loss can be calculated via the point cloud loss as: \begin{equation} L_{pc} = |\delta_{x_1}| + |\delta_{y_1}| + |\delta_{z_1}|.
\end{equation} Differently, our VDA loss pays more attention to the spatial positions of 3D points, measuring the inconsistency of corresponding groups of points. The grouping operation is realized by putting points into different voxels according to their 3D positions: \begin{equation} L_{v} = \left\|\nu(P_t(i)\re{)}-\nu(\hat P_t(i))\right\|_1, \end{equation} where the ``voxelization'' process $\nu(P)$ is calculated as Eq.~\eqref{Voxelization}. When there is a small perturbation $(\delta x, \delta y, \delta z)$ in $P_1 = (x, y, z)$, $L_v$ can be calculated as: \begin{equation} \begin{aligned} L_{v} &= \left\lfloor\frac{x+\delta x-x_{min}}{\Delta x}\right\rfloor+\left\lfloor\frac{y+\delta y-y_{min}}{\Delta y}\right\rfloor N_x\\&+\left\lfloor\frac{z+\delta z-z_{min}}{\Delta z}\right\rfloor N_xN_y\\ & - \left(\left\lfloor\frac{x-x_{min}}{\Delta x}\right\rfloor+\left\lfloor\frac{y-y_{min}}{\Delta y}\right\rfloor N_x+\left\lfloor\frac{z-z_{min}}{\Delta z}\right\rfloor N_xN_y\right). \end{aligned} \end{equation} Taking $z$ as an example, because of the floor operation, $L_{v}$ can be non-zero only when $\frac{z+\delta z-z_{min}}{\Delta z}$ and $\frac{z-z_{min}}{\Delta z}$ fall into different integer intervals, which in general requires $\delta z$ to be comparable to $\Delta z$. Therefore, a small perturbation will not change the voxel index $\nu(P_1)$, so that $L_v$ remains $0$ and $L_{VDA}$ remains stable. For an intuitive explanation, although the object has a small motion, the whole object still stays in the original voxel, as shown in Figure \ref{fig:1}. Therefore, our VDA loss is more robust to object motions than the point cloud alignment loss. \begin{table*}[t] \small \begin{center} \caption{Quantitative performance of single depth estimation on the KITTI Eigen test set. For a fair comparison, all the results are evaluated at the maximum depth threshold of 80m. All methods are evaluated with raw LiDAR scan data. ``$\dagger$" means updated result after publication.
Bold indicates the best and underline the second-best results. $\delta_1, \delta_2, \delta_3$ denote $\delta < 1.25, \delta < 1.25^2, \delta < 1.25^3$, respectively. \re{The column ``Train'' indicates the training manner, with ``M'' denoting self-supervised monocular training and ``M\&Seg'' denoting self-supervised monocular training together with supervised segmentation training.}} \label{tab:1} \setlength{\tabcolsep}{0.005\linewidth} \begin{tabular}{ccccccccccc} \toprule \multirow{2}{*}{\!\!\!Methods\!\!\!}&\multirow{2}{*}{Train} &\multirow{2}{*}{Backbone}& \multirow{2}{*}{Resolution}& \multicolumn{4}{c}{Error metric$\downarrow$} & \multicolumn{3}{c}{Accuracy metric$\uparrow$} \\ \cmidrule(r){5-8} \cmidrule(r){9-11} &&& &Abs Rel & Sq Rel & RMSE & RMSE log \!\!\! & \re{$\delta_1$} & \re{$\delta_2 $} & \re{$\delta_3$} \\ \midrule SFMlearner \cite{zhou2017unsupervised} \re{(CVPR 2017)}$\dagger$&\re{M}&DispNet & $416\times128$ &0.183 &1.595 &6.709 &0.270 &0.734 &0.902 &0.959\\ Mahjourian et al.
\cite{mahjourian2018unsupervised} \re{(CVPR 2018)} &\re{M}& DispNet &$416\times128$ &0.163 &1.240 &6.220 &0.250 &0.762 &0.916 &0.968\\ GeoNet \cite{yin2018geonet} \re{(CVPR 2018)}$\dagger$ &\re{M}&ResNet50 &$416\times128$ &0.149 &1.060 &5.567 &0.226 &0.796 &0.935 &0.975\\ DDVO \cite{wang2018learning} \re{(CVPR 2018)} &\re{M}&DispNet & $416\times128$ &0.151 &1.257 &5.583 &0.228 &0.810 &0.936 &0.974\\ DF-Net \cite{zou2018df} \re{(ECCV 2018)} &\re{M}& ResNet50 &$576\times160$ &0.150 &1.124 &5.507 &0.223 &0.806 &0.933 &0.973\\ Struct2depth \cite{casser2019depth} \re{(AAAI 2019)} &\re{M}& DispNet &$416\times128$ &0.141 &1.026 &5.142 &0.210 &0.845 &0.845 &0.948 \\ SC-SFMlearner \cite{bian2019unsupervised} \re{(NeurIPS 2019)} &\re{M}&DispResNet \!\!\!& $832\times256$ &0.137 &1.089 &5.439 &0.217 &0.830 &0.942 &0.975\\ HR \cite{zhou2019unsupervised} \re{(ICCV 2019)}&\re{M}& ResNet50& $1248\times384$ &0.121 &0.873 &4.945 &0.197 &0.853 &0.955 &0.982\\ Monodepth2 \cite{godard2019digging} \re{(ICCV 2019)} &\re{M}& ResNet18 & $1024\times320$ &0.115 &0.882 &4.701 &0.190 &0.879 &0.961 &0.982\\ PackNet \cite{Guizilini_2020_CVPR} \re{(CVPR 2020)}&\re{M}& PackNet &$640\times 192$ &0.111 &0.785 &4.601 &0.189 &0.878 &0.960 &0.982\\ \re{TrianFlow \cite{zhao2020towards} (CVPR 2020)}&\re{M}&\re{ResNet18}&\re{$832\times 256$}&\re{0.113}&\re{0.704}&\re{4.581}&\re{0.184}&\re{0.871}&\re{0.961}&\re{\textbf{0.984}}\\ \re{Johnston et al.
\cite{johnston2020self} } \re{(CVPR 2020)}&\re{M} &\re{ResNet101} &\re{$640\times 192$ }&\re{0.106} &\re{0.861} &\re{4.699} &\re{0.185}&\re{\underline{0.889}}&\re{0.962}&\re{0.982}\\ FeatDepth \cite{shu2020featdepth} \re{(ECCV 2020)}&\re{M}& ResNet50 &$1024\times320$ &0.104 &\underline{0.729} &\underline{4.481} &\textbf{0.179} &\textbf{0.893} &\textbf{0.965} &\textbf{0.984}\\ MLDA-Net \cite{song2021mlda} \re{(TIP 2021)} &\re{M}&ResNet50&$640\times 192$ &0.110 &0.824 &4.632 &0.187 &0.883 &0.961 &0.982\\ HR-Depth \cite{Lyu_Liu_Wang_Kong_Liu_Liu_Chen_Yuan_2021} \re{(AAAI 2021)}&\re{M}&HRNet &$640\times 192$ &0.109 &0.792 &4.632 &0.185 &0.884 &0.962 &\underline{0.983}\\ \re{R-MSFM6 \cite{zhou2021r} (ICCV 2021)}&\re{M} &\re{ResNet18} & \re{$640\times 192$} & \re{0.112} & \re{0.806} & \re{4.704} & \re{0.191} & \re{0.878} & \re{0.960} & \re{0.981}\\ \re{Wang et al. \cite{Wang_2021_ICCV} (ICCV 2021)}&\re{M}&\re{ResNet18}&\re{$640\times 192$}&\re{0.109}&\re{0.779}&\re{4.641}&\re{0.186}&\re{0.883}&\re{0.962}&\re{0.982}\\ \midrule \re{SGDepth \cite{klingner2020self} (ECCV 2020)}&\re{M\&Seg}&\re{ResNet18}&\re{$640\times 192$ }&\re{0.113}&\re{0.835}&\re{4.693}&\re{0.191}&\re{0.879}&\re{0.961}&\re{0.981}\\ \re{Li et al.
\cite{li2021learning} (ArXiv 2021)}&\re{M\&Seg}&\re{ResNet50}&\re{$640\times 192$ }&\re{0.103}&\re{0.709}&\re{4.471}&\re{0.180}&\re{0.892}&\re{0.966}&\re{0.984}\\ \re{FSRE-Depth \cite{Jung_2021_ICCV} (ICCV 2021)}&\re{M\&Seg}&\re{ResNet50}&\re{$640\times 192$ }&\re{0.102}&\re{0.675}&\re{4.699}&\re{0.178}&\re{0.893}&\re{0.966}&\re{0.984}\\ \midrule \textbf{Ours (R18 LR)} &\re{M}&ResNet18 & $640\times 192$ & 0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & \underline{0.983}\\ \textbf{Ours (R50 LR)} &\re{M}&ResNet50 & $640\times 192$ & 0.105 & 0.741 & 4.540 & 0.183 & 0.884 & 0.962 & \underline{0.983}\\ \bf{Ours (R18 HR)} &\re{M}&ResNet18 &$1024\times320$ &\underline{0.103} &0.764 &4.672 &\underline{0.182} &0.885&0.962 &\underline{0.983}\\ \bf{Ours (R50 HR)} &\re{M}&ResNet50 &$1024\times320$ &\textbf{0.102} &\textbf{0.726} &\textbf{4.479} &\textbf{0.179} &\underline{0.889} &\underline{0.963} &\underline{0.983}\\ \bottomrule \end{tabular} \end{center} \end{table*} \subsection{Overall learning pipeline} In this paper, our method adopts the DFA loss and VDA loss as additional cross-view consistency supervision on top of the widely used photometric loss and smoothness loss. Therefore, the total loss is: \begin{equation} L = \alpha L_{ph} +\beta L_{sm} + \gamma L_{DFA} +\eta L_{VDA}. \end{equation} Here, $\alpha,\beta, \gamma, \eta$ are set to $1, 0.01, 0.05, 0.05$, respectively. The parameters $N_x$, $N_y$, $N_z$ are set to 40, 24, and 40, respectively, in our work. Our implementation of the photometric and smoothness losses follows the baseline method \cite{godard2019digging}.
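For illustration, the edge-aware smoothness term ($L_{sm}$) and the weighted total objective above can be sketched as follows. This is a minimal NumPy version with assumed function names; the actual training code operates on batched PyTorch tensors.

```python
import numpy as np

def smooth_loss(inv_depth, image):
    """Edge-aware smoothness (L_sm): disparity gradients are
    down-weighted where the image itself has strong gradients."""
    mu = inv_depth / (inv_depth.mean() + 1e-7)   # mean-normalized inverse depth
    dx_d = np.abs(np.diff(mu, axis=1))           # |d mu / dx|
    dy_d = np.abs(np.diff(mu, axis=0))           # |d mu / dy|
    # image gradients, averaged over the channel axis
    dx_i = np.mean(np.abs(np.diff(image, axis=1)), axis=-1)
    dy_i = np.mean(np.abs(np.diff(image, axis=0)), axis=-1)
    return (dx_d * np.exp(-dx_i)).mean() + (dy_d * np.exp(-dy_i)).mean()

def total_loss(l_ph, l_sm, l_dfa, l_vda,
               alpha=1.0, beta=0.01, gamma=0.05, eta=0.05):
    """Weighted sum used as the overall training objective."""
    return alpha * l_ph + beta * l_sm + gamma * l_dfa + eta * l_vda
```

A spatially constant inverse depth yields zero smoothness loss, since all disparity gradients vanish.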
\begin{figure*}[htbp] \begin{center} \begin{minipage}{0.96\textwidth} \includegraphics[ width=3.4cm,valign=t]{images/75.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254.jpg}\includegraphics[ width=3.4cm,valign=t]{images/427.png}\includegraphics[ width=3.4cm,valign=t]{images/559.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674.jpg} \includegraphics[ width=3.4cm,valign=t]{images/75gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/428gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559gt.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674gt.jpg} \includegraphics[ width=3.4cm,valign=t]{images/75_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/254_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp_ours.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/feat_disp_75.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_254.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_427.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_559.png}\includegraphics[ width=3.4cm,valign=t]{images/feat_disp_674.png} \includegraphics[ width=3.4cm,valign=t]{images/75_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/427_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559_hrdepth.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674_hrdepth.jpg} \includegraphics[ width=3.4cm,valign=t]{images/75_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/254_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/427_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/559_packnet.png}\includegraphics[ width=3.4cm,valign=t]{images/674_packnet.png} \includegraphics[ width=3.4cm,valign=t]{images/75_disp_mono2.jpeg}\includegraphics[ 
width=3.4cm,valign=t]{images/254_disp_mono2.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp_mono2.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp_mono2.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/75_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/254_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp_DDVO.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp_DDVO.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/75_disp_geo.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/254_disp_geo.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/427_disp_geonet.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/559_disp_geo.jpeg}\includegraphics[ width=3.4cm,valign=t]{images/674_disp_geo.jpeg} \includegraphics[ width=3.4cm,valign=t]{images/075_DF.jpg}\includegraphics[ width=3.4cm,valign=t]{images/254_DF.jpg}\includegraphics[ width=3.4cm,valign=t]{images/427_dfjpg.jpg}\includegraphics[ width=3.4cm,valign=t]{images/559DF.jpg}\includegraphics[ width=3.4cm,valign=t]{images/674DF.jpg} \end{minipage} \begin{minipage}[htbp]{0.005\textwidth} \fontsize{7pt}{\baselineskip}\selectfont \noindent \begin{turn}{90}{Df-Net~~ GeoNet~~~ DDVO~~~ MD2 ~~ \re{PackNet}~~\re{HR-Depth}~~\red{Feat(MS)}~~ Ours~~~~~~~~\re{GT}~~~~ Input~~} \end{turn} \end{minipage} \end{center} \caption{\red{Qualitative results on KITTI test set. Our method produces more accurate depth maps in low texture regions, moving vehicles, and delicate structures. The advantages of our results are highlighted in green boxes. 
\red{Feat(MS) means the visualization results of FeatDepth \cite{shu2020featdepth} are generated using their models trained with both monocular and stereo inputs.}}} \label{fig:inferresult} \end{figure*} \section{EXPERIMENTS} \subsection{Network implementation}\label{networkarchi} As shown in Figure \ref{fig:network}, our network is composed of three branches for offset learning, depth estimation and pose estimation, respectively. The depth network adopts an encoder-decoder architecture in a U-shape with skip connections similar to DispNet \cite{mayer2016large}. The encoder takes a three-frame snippet as the sequential input, using the pre-trained ResNet \cite{he2016deep} as the backbone network. The depth decoder has three branches with shared weights, with a similar structure to \cite{godard2019digging}, using sigmoid activation functions in the multi-scale side outputs and ELU nonlinear functions otherwise. The pose network takes two consecutive frames as input at each time and outputs the corresponding ego-motion, based on an encoder-decoder structure as well. For the DFA loss, we use a deformable alignment network to learn the feature alignment offset, sharing weights with the deformable convolution used in the DepthNet branch for calculating the DFA loss. \re{The three branches in the framework are jointly optimized during training, while only DepthNet is used during inference.} Our models are implemented in PyTorch with the Adam optimizer on 4 Tesla V100 GPUs, using a learning rate of $8\times10^{-5}$ for the first 10 epochs and $8\times10^{-6}$ for another 30 epochs. We trained models on monocular videos with resolutions of $640 \times 192$ (LR) and $1024 \times 320$ (HR) at a batch size of 4. \subsection{Evaluation metrics} For real-world datasets, it is very hard to get ground truth depth.
Most datasets use LiDAR sensors to scan the environment and utilize the scans as ground truth after processing; for instance, KITTI \cite{geiger2012we} uses a Velodyne laser scanner to collect the data. The most commonly used depth estimation benchmark of KITTI is the Eigen split, which is further improved by Zhou et al. \cite{zhou2017unsupervised}, consisting of 39810 sequences for training, 4424 items for validation and 697 images for testing. There are five evaluation metrics: \textbf{Abs Rel} for Absolute Relative Error, \textbf{Sq Rel} for Square Relative Error, \textbf{RMSE} for Root Mean Square Error, \textbf{RMSE log} for Root Mean Square Logarithmic Error and Accuracy: \begin{itemize} \item $Abs~Rel = (1/n)\sum_{i\in n}((|d_i-d^{\ast}_i|)/d^{\ast}_i) $, \item $Sq~Rel = (1/n)\sum_{i\in n}((||d_i-d^{\ast}_i||^2)/d^{\ast}_i)$, \item $RMSE = ((1/n)\sum_{i\in n}||d_i-d^{\ast}_i||^2)^{1/2}$, \item $RMSE~log = ((1/n)\sum_{i\in n}||log(d_i)-log(d^{\ast}_i)||^2)^{1/2}$, \item Accuracy: \% of $d_i$ s.t. $max((d_i/d^{\ast}_i), (d^{\ast}_i/d_i)) = \delta < \delta_n$, \end{itemize} where $n$ is the total number of pixels in the ground truth depth map, and $d_i$ and $d^{\ast}_i$ represent the predicted and ground truth depth values of pixel $i$. $\delta_n$ denotes a threshold, which is usually set to $1.25^1$, $1.25^2$ and $1.25^3$. \iffalse \begin{figure*}[htbp] \begin{minipage}{\textwidth} \includegraphics[width=14cm]{images/inferresult.jpg} \end{minipage} \caption{Qualitative results on KITTI test set. Our method produces more accurate depth maps in low texture regions and moving object cases than the baseline Monodepth2 \cite{godard2019digging}.} \label{fig:inferresult} \end{figure*} \fi \begin{figure*}[htbp] \begin{minipage}{\textwidth} \centering \includegraphics[height=3.8cm,width=14cm]{images/ablation.pdf} \end{minipage} \caption{\re{Visualization results of ablation study.
The left and right parts show the ablation of DFA loss and VDA loss, respectively.}} \label{fig:ablation} \end{figure*} \begin{table*}[htbp] \small \begin{center} \caption{Ablation study of the cross-view consistency losses on KITTI.} \label{tab:ablation} \begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline &\!\!\!DFA \!\!\! &\!\!\!VDA\!\!\! &\!\!\!Abs Rel\!\!\! &\!\!\!Sq Rel\!\!\! &RMSE &\!\!\!RMSE log\!\!\! & $\!\!\delta \!< \!1.25$ \!\! & \!\! $\delta \!<\! 1.25^2 \!\! $ & \!\! $\delta \!<\! 1.25^3$ \!\!\!\\ \hline Baseline & & &0.124 &0.968 &5.030 &0.201 &0.855 &0.954 &0.980\\ \hline Ours w/ DFA &$\surd$ & &0.114 &0.873 &4.807 &0.191 &0.877 &0.960&0.982\\ \hline Ours w/ VDA & &$\surd$&0.110 &0.834 &4.694 &0.185 &0.885 &0.963 &0.983\\ \hline \!\!\! Ours w/ DFA+VDA &$\surd$ &$\surd$&0.106 &0.738 &4.587 &0.183 &0.881 &0.962 &0.983\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Depth estimation evaluation} We trained our models using the KITTI dataset \cite{geiger2012we}, which is the most commonly used benchmark in the field of depth estimation. During inference, we take only the reference frame as input, following the standard monocular test protocol proposed by Eigen \cite{eigen2014depth} and Zhou {\em et~al.} \cite{zhou2017unsupervised}. The experimental results on the KITTI test set are presented in Table \ref{tab:1}. It is clear that our method outperforms most prior SS-MDE methods while using a smaller backbone network and image resolution, and achieves performance superior to the SOTA FeatDepth \cite{shu2020featdepth} when using a larger backbone network and training-image resolution. Some visual results are shown in Figure \ref{fig:inferresult}. As can be seen, our method can generate more accurate depth maps than our baseline method Monodepth2 \cite{godard2019digging}, especially in challenging cases, e.g., low-texture regions and moving objects.
\re{Although challenging cases usually only occupy a small portion of all scenes, it is worth noting that handling these challenging cases really matters for real-world applications like autonomous driving.} \subsection{Ablation study} The ablation study is conducted on KITTI to highlight the effectiveness of the proposed two cross-view consistency losses. Table \ref{tab:ablation} shows the detailed results obtained by adding each specific loss to the baseline method. We trained the models of Monodepth2 \cite{godard2019digging} with resolution $640\times 192$ and batch size 4 as the baseline, taking the same setting as ours (R18 LR). \subsubsection{VDA loss} Our VDA loss is designed to exploit temporal coherence in voxel space to ensure the model's robustness and tolerance to challenging cases, especially moving objects. Samples of depth images produced by models with and without VDA loss are shown in Figure \ref{fig:ablation}. The right part shows the superiority of our method in handling moving objects. The depth maps in the last row (with VDLoss) are of higher quality than those in the third row (without VDLoss) and the second row, PCLoss (the baseline method with the point cloud alignment loss \cite{mahjourian2018unsupervised}), confirming the effectiveness of VDLoss in moving-object regions. \re{\subsubsection{DFA loss} Our DFA loss aims to learn the temporal alignment from features of consecutive frames, which is used to guide temporally consistent depth learning. We display samples of depth maps from one sequence in the left part of Figure \ref{fig:ablation}. As shown in the highlighted areas, our method can generate more accurate and coherent depth maps with sharper boundaries, especially in regions with illumination variance and low texture, e.g., the figure of the cyclist.
The second and the third rows show the results of the variants using optical flow alignment (more details and analysis can be found in Section \ref{OFA}) and without any temporal alignment, respectively.} \iffalse \begin{figure*} \begin{center} \begin{minipage}{0.96\textwidth} \centering \includegraphics[width=13cm]{image_supp/voxelablationeffect.jpg} \label{fig:voxelablation} \end{minipage} \begin{minipage}[htbp]{0.02\textwidth} \noindent \begin{turn}{90}{\fontsize{6pt}{\baselineskip}\selectfont With~~~~~~~~ MD2~~~~~~~~Without~~~~~~~~Input} \end{turn} \end{minipage} \end{center} \caption{The depth maps in the last row (with VDLoss) are of higher quality than those in the third row (without VDLoss) and the second row (baseline method \cite{mahjourian2018unsupervised}), confirming the effectiveness of VDLoss in handling moving objects.} \end{figure*} \fi \re{\subsection{Experimental analysis}} \re{\subsubsection{VDA loss} The experimental analysis of VDA loss consists of the analysis of its effectiveness in handling moving objects and the analysis of its hyperparameters.} \paragraph{The effectiveness of VDA loss in handling moving objects} We conducted quantitative and qualitative experiments to validate the effectiveness of our VDA loss. We split the whole test set into two parts, namely the motion set and the static set, according to whether there are moving objects in the scenes. We evaluated our method and the baseline method on both sets. The results are reported in Table \ref{tab:motion}. For the quantitative comparison, it is clear that our method is more robust to scenes with object motions than the baseline. Moreover, we show several visual samples in Figure \ref{fig:ablation}. As shown in the right part of Figure \ref{fig:ablation}, our method can robustly and clearly infer the moving vehicles in outdoor driving scenes, while other methods may lose or misestimate the moving objects.
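The metrics reported in Table \ref{tab:motion} and the other quantitative tables (defined in the evaluation-metrics subsection) can be sketched in NumPy as follows. This is an illustrative re-implementation using the conventional ground-truth denominators, not the official evaluation script.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard KITTI depth metrics: Abs Rel, Sq Rel, RMSE,
    RMSE log, and the delta < 1.25**k accuracies (k = 1, 2, 3)."""
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)      # max(d/d*, d*/d)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, deltas
```

A perfect prediction yields zero for all error metrics and an accuracy of 1.0 at every threshold.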
\paragraph{Analysis of hyperparameters in VDA loss} Our VDA loss exploits the cross-view consistency of the distribution of point clouds in voxel space. It is interesting to investigate the impact of the way of dividing the 3D space into voxels, i.e., the number of voxels along each axis $N_x, N_y, N_z$. Therefore, we conduct an ablation study with different numbers of voxels, reported in Table \ref{voxeltab}. The size of the voxel has a close relation to the tolerance to moving objects. According to our prior knowledge, objects moving in the vertical direction are rare in outdoor driving datasets. We thus set \re{$N_y$} to 24 and evaluate different voxel sizes by changing $N_x$ and $N_z$. As shown in Table \ref{voxeltab}, the number of voxels along the $x$- and $z$-axes can slightly affect the performance. Considering a boundary case, if $N_x= N_y=N_z=1$, which means regarding the whole space as one voxel, the voxel density is always the same and the VDA loss always equals 0. However, if the values of $N_x, N_y, N_z$ are too large, the voxel size will be very small, which is contrary to the goal of the VDA loss. Therefore, large values for $N_x, N_y, N_z$ are not recommended.
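The connection between voxel size and motion tolerance can be checked numerically with a toy example; the bounds, voxel edge lengths, and test point below are arbitrary assumptions for illustration, with the voxel counts matching our $40\times 24\times 40$ setting.

```python
import numpy as np

def voxel_index(p, mins, delta, n_vox):
    """Flattened voxel index nu(P) for a single 3D point."""
    i = np.floor((p - mins) / delta).astype(int)
    return i[0] + i[1] * n_vox[0] + i[2] * n_vox[0] * n_vox[1]

mins = np.zeros(3)
n_vox = np.array([40, 24, 40])
delta = np.array([2.0, 2.0, 2.0])   # assumed voxel edge lengths
p = np.array([10.5, 3.2, 7.1])      # an arbitrary 3D point

# a motion smaller than the voxel edge keeps the index unchanged ...
assert voxel_index(p, mins, delta, n_vox) == voxel_index(p + 0.3, mins, delta, n_vox)
# ... while a motion larger than a voxel edge changes it
assert voxel_index(p, mins, delta, n_vox) != voxel_index(p + 2.5, mins, delta, n_vox)
```

This mirrors the tolerance argument in the VDA-loss analysis: small perturbations leave $\nu(P)$, and hence the voxel density, unchanged.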
\begin{table*}[htbp] \small \begin{center} \caption{Evaluation results on the split motion and static test sets.} \label{tab:motion} \begin{tabular}{l|c|c|c|c|c|c|c} \hline &Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline Baseline (motion)&0.118 &0.978 &5.079 &0.199&0.874&0.957&0.980\\ \hline \red{FeatDepth (motion)}&\red{0.108} &\red{0.987} &\red{4.910} &\red{0.189}&\red{0.877}&\red{0.958}&\red{0.981}\\ \hline Ours R18 LR (motion)& 0.106 & 0.790 & 4.769 & 0.186 & 0.882& 0.959& 0.982\\ \hline \red{Ours R50 HR (motion)}& \red{\textbf{0.102}} & \red{\textbf{0.734}} & \red{\textbf{4.434}} & \red{\textbf{0.179}} & \red{\textbf{0.889}}& \red{\textbf{0.963}}& \red{\textbf{0.983}}\\ \hline Baseline (static)&0.109 &0.704 &3.999 &0.173 &0.888 &\textbf{0.968} &\textbf{0.986}\\ \hline \red{FeatDepth (static)}&\red{0.104} &\red{0.632} &\red{\textbf{4.079}} &\red{0.173}&\red{\textbf{0.892}}&\red{0.967}&\red{0.985}\\ \hline Ours R18 LR (static) & 0.106 & 0.641 & 4.250 & 0.177 & 0.879 & 0.967 & 0.985\\ \hline \red{Ours R50 HR (static)} & \red{\textbf{0.103}} & \red{\textbf{0.627}} & \red{4.136} &\red{\textbf{ 0.173}} & \red{0.890} & \red{0.967} & \red{0.985}\\ \hline \end{tabular} \end{center} \end{table*}{} \subsubsection{DFA loss}\re{The experimental analysis of DFA loss includes an analysis of the depth feature alignment offsets, a visualization of the alignment offsets, and a comparison with alignment using optical flow.} \paragraph{Analysis of depth feature alignment offset} In our DFA loss, we use OffsetNet to learn the correspondences from deep features of consecutive RGB frames and use the learned correspondences to help learn more coherent depth. In our understanding, the feature offset is a kind of flow of features, which should share some similarities with optical flow.
They are region-wise rather than pixel-wise, however, because the offsets are learned from deep features, which contain more semantic and 3D geometric information. The relationship between deformable convolution offsets and optical flow is hard to formulate precisely, but research on deformable alignment in other areas \cite{chan2021understanding} is specifically dedicated to exploring this issue. It shows that deformable convolution can be decomposed into a combination of spatial warping and convolution. This decomposition reveals the commonality of deformable alignment and flow-based alignment in formulation, but with a key difference in their offset diversity. The offset diversity is closely related to the number of offset groups in the deformable convolutions. Their experiments demonstrate that the increased diversity in deformable alignment yields better aligned features, and hence significantly improves the quality of alignment. In our work, we set the number of offset groups to $8$; the deformable alignment therefore learns $8\times 2\times 3 \times 3$, i.e., $144$ offset channels for each pair of feature maps to represent the temporal coherence of the features of consecutive frames, rather than the single offset learned in optical flow. \re{We also use kernel size $5\times5$ to obtain a more diverse and higher-dimensional ($8\times 2\times 5 \times 5$, i.e., $400$) offset field and conduct a comparison experiment. The results show that the variant using a larger kernel size in the DCN achieves results similar to the original version, i.e., Sq Rel: 0.738 vs. 0.735 and RMSE: 4.587 vs. 4.564 for $3\times3$ and $5\times 5$, respectively. We believe that the 144-dimension offset field is sufficient to enlarge the receptive field and model the temporal coherence of two adjacent frames.
Taking model complexity into account, we choose kernel size $3\times 3$ in our final version.} \begin{table*}[htbp] \small \begin{center} \caption{\re{Experiment results for the hyperparameter analysis in VDA loss. $N_x, N_y, N_z$ denote the numbers of voxels along the three axes.}} \label{voxeltab} \begin{tabular}{c|c|c|c|c|c|c|c} \hline Parameters \re{ ($N_x, N_y, N_z$)}&Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline $(20,20,24)$&0.108 & 0.855 & 4.912 & 0.183 & 0.882 & 0.961 & 0.983\\ \hline $(40,40,24)$&0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & 0.983\\ \hline $(60,60,24)$ & 0.107 & 0.737 & 4.742 & 0.184 & 0.875 & 0.960 & 0.983\\ \hline \end{tabular} \end{center} \end{table*}{} \paragraph{Visualization of depth feature alignment offset} \begin{figure}[t] \begin{center} \includegraphics[width=0.35\textwidth]{images/offsetfeature.jpg} \end{center} \caption{Samples of the visualization of the learned depth feature offset.} \label{offsetfeature} \end{figure} We adopt PCA decomposition to select one set of offsets and visualize the heatmaps of the learned feature alignment offsets in Figure \ref{offsetfeature}. Hotter colors denote higher values. Our OffsetNet pays more attention to stable regions and semantic outlines such as ground and wall to learn useful alignment offsets. Moreover, the direction of offsets in the same semantic part tends to be the same, which implies the alignment offsets are learned in a ``region-to-region'' manner rather than varying pixel by pixel. \paragraph{Compared with alignment using optical flow}\label{OFA} To demonstrate the effectiveness of the DFA loss, we replace the DFA loss with optical flow computed on the corresponding features to conduct the depth feature alignment. The evaluation result is shown in Table \ref{OFAloss}, where the ``OFA loss'' denotes the variant using optical flow alignment.
The evaluation results show that using optical flow as a single-channel offset to regularize depth learning is not feasible, which verifies the effectiveness of our DFA loss. \begin{figure*}[htbp] \begin{center} \begin{minipage}{\textwidth} \centering \includegraphics[width=15.5cm]{images/featComparison.pdf} \end{minipage} \end{center} \caption{\red{Qualitative comparison with FeatDepth \cite{shu2020featdepth} on the KITTI test set. The first and fourth rows are the input images. The second and fifth rows are the results of FeatDepth \cite{shu2020featdepth}. The third and last rows are the results of our method. Green boxes highlight the differences among the depth maps predicted by different methods. }} \label{fig:featcomparison} \end{figure*} \red{\paragraph{Comparison with the state-of-the-art work \cite{shu2020featdepth}} To show the superiority of our method over the previous state-of-the-art work \cite{shu2020featdepth}, we conduct comprehensive experiments in terms of qualitative results and quantitative evaluation, especially comparing their ability in dealing with moving object scenes and generalizing to unseen scenes. For quantitative comparison results, as shown in Table \ref{tab:1}, our method outperforms FeatDepth \cite{shu2020featdepth} in four error metrics while being inferior to it in three accuracy metrics. Generally, they perform comparably on the KITTI test set. Nevertheless, it is noteworthy that, according to Table \ref{inferencetable}, our method uses smaller models and fewer computational resources than FeatDepth \cite{shu2020featdepth} and enjoys a faster inference speed. As for the qualitative comparison, we compare with FeatDepth in Figure \ref{fig:featcomparison}. The top three rows of Figure \ref{fig:featcomparison} show a sequence of consecutive frames from the test set.
Benefiting from the temporal alignment of depth features via the DFA loss, the depth maps predicted by our method are more temporally consistent and accurate, especially for the object boundaries. Besides, the remaining rows in Figure \ref{fig:featcomparison} also validate the superiority of our method in low-texture areas, thin structures and object boundaries. To further validate the superiority of our method in dealing with moving object scenes, we compare it with FeatDepth \cite{shu2020featdepth} on the motion and static splits of the KITTI test set. As shown in Table \ref{tab:motion}, our method performs better than FeatDepth \cite{shu2020featdepth} on the motion split and slightly worse on the static split. It is noteworthy that we use both the ``ours R18 LR'' and ``ours R50 HR'' versions in this experiment. The results demonstrate the advantage of our method in dealing with moving object scenes. We also compare the generalization ability of our method and FeatDepth \cite{shu2020featdepth} in Section \ref{generalization}. The quantitative results summarized in Table \ref{maketabel} show that our method outperforms FeatDepth \cite{shu2020featdepth} in all metrics. We also show the qualitative results on the Cityscapes and Make3D datasets in Figure \ref{city9} and Figure \ref{fig:make}, respectively.
} \begin{table*}[htbp] \small \begin{center} \caption{\re{Comparison with the variant using optical flow alignment.}} \label{OFAloss} \begin{tabular}{l|c|c|c|c|c|c|c} \hline Methods&Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline Ours (DFA loss) & 0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & 0.983\\ \hline Ours (OFA loss)&0.119 & 0.902 & 4.860 & 0.194 & 0.863 & 0.958 & 0.982\\ \hline \end{tabular} \end{center} \end{table*}{} \begin{table*}[htbp] \small \begin{center} \caption{Evaluation results of methods with or without online refinement.}\label{tab:refinement} \begin{tabular}{l|c|c|c|c|c|c|c} \hline Methods &Abs Rel &Sq Rel &RMSE &RMSE log & $\delta < 1.25$ & $\delta < 1.25^2 $ & $\delta < 1.25^3$ \\ \hline Baseline&0.115 &0.882 &4.701 &0.190 &0.879 &0.961 &0.982\\ \hline Baseline (refinement)& 0.097 & 0.717 & 4.339 & 0.174 & 0.904 & 0.965 & 0.984 \\ \hline Ours&0.106 & 0.738 & 4.587 & 0.183 & 0.881 & 0.962 & 0.983\\ \hline Ours (refinement) & 0.089 & 0.716 & 4.243 & 0.172 & 0.911 & 0.965 & 0.985\\ \hline \end{tabular} \end{center} \end{table*}{} \begin{table}[t] \begin{minipage}{\linewidth} \footnotesize \caption{Test results on Make3D. \re{Column ``Train'' with label ``M''/``S'' means training with monocular/stereo data. }} \label{maketabel} \begin{center} \begin{tabular}{lccccc} \toprule \multirow{1}{*}{Methods}& \multirow{1}{*}{Train}& Abs Rel & Sq Rel & RMSE & log10 \\ \midrule \re{Monodepth \cite{godard2017unsupervised}} &\re{S} &\re{0.544} &\re{10.94} &\re{11.760} &\re{0.193}\\ SFMlearner \cite{zhou2017unsupervised} &M &0.383 &5.321 &10.470 &0.478\\ DDVO \cite{wang2018learning} &M &0.387 &4.720 &8.090 &0.204\\ Monodepth2 \cite{godard2019digging} &M &0.322 &3.589 &7.417 &0.163\\ \re{Monodepth2 \cite{godard2019digging}} &\re{MS} &\re{0.374} &\re{3.792} &\re{8.238} &\re{0.201}\\ \re{Johnston et al.
\cite{johnston2020self}}&\re{M}&\re{0.297} &\re{\textbf{2.902}} &\re{7.013} &\re{0.158}\\ \red{FeatDepth \cite{shu2020featdepth}}&\red{M}&\red{0.313} &\red{3.489} &\red{7.228} &\red{0.158}\\ Ours &M & 0.316 & 3.200 & 7.095 & 0.158\\ \red{Ours R50 HR} &\red{M} & \red{\textbf{0.290}} & \red{3.070} & \red{\textbf{6.902}} & \red{\textbf{0.155}}\\ \bottomrule \end{tabular} \end{center} \end{minipage} \end{table} \begin{figure} \begin{center} \begin{minipage}{0.46\textwidth} \centering \includegraphics[height=4.6cm, width=8.3cm]{images/cityscapes.png} \end{minipage} \begin{minipage}[htbp]{0.02\textwidth} \noindent \begin{turn}{90}{\fontsize{8pt}{\baselineskip}\selectfont MD2~~~\red{FeatDepth}~~ Ours~~~~~Input~} \end{turn} \end{minipage} \end{center} \caption{Qualitative results on Cityscapes \cite{cordts2016cityscapes}. Our method produces more accurate depth maps in moving objects and texture-less regions. \red{Green boxes highlight the difference of the predicted depth maps by different methods.}} \label{city9} \end{figure} \subsection{Evaluation of generalization ability}\label{generalization} Though our models were only trained on KITTI \cite{geiger2012we}, competitive results can be achieved on unseen datasets without any fine-tuning. \red{We display the generalization ability of our method using the version ours (R18 LR).} \begin{figure} \begin{center} \begin{minipage}{0.46\textwidth} \centering \includegraphics[height=4.6cm,width=8.3cm]{images/make3d.png} \end{minipage} \begin{minipage}[htbp]{0.02\textwidth} \noindent \begin{turn}{90}{\fontsize{8pt}{\baselineskip}\selectfont MD2~\red{FeatDepth}~ Ours~~~GT~~~Input~} \end{turn} \end{minipage} \end{center} \caption{Qualitative results on Make3D \cite{saxena2008make3d}. Our method can generate more accurate depth maps. 
\red{Green boxes highlight the differences among the depth maps predicted by different methods.}} \label{fig:make} \end{figure} {\bf Cityscapes.} The challenges of Cityscapes \cite{cordts2016cityscapes} mainly arise from poor lighting conditions, rainy weather and moving objects. We cropped out the bottom part of the original images to remove the car hoods. The generated depth maps show the good domain adaptation ability of our models. Compared with other state-of-the-art approaches, \red{Monodepth2 \cite{godard2019digging} and FeatDepth \cite{shu2020featdepth}}, our method is more accurate at perceiving moving or distant objects and delicate structures, as shown in Figure \ref{city9}\red{, especially in the parts highlighted by the green boxes.} {\bf Make3D.} We conducted center cropping and scaling \cite{godard2019digging} to align the input images with the ground truth \cite{saxena2008make3d}. In Table \ref{maketabel}, our results outperform our baseline \cite{godard2019digging}, and the qualitative comparison in Figure \ref{fig:make} provides additional intuitive evidence of the generalization ability of our method. \re{According to the results in Table \ref{maketabel}, self-supervised monocular methods usually show better generalization ability than self-supervised stereo or hybrid training methods. Although our performance is inferior to \cite{johnston2020self} in some metrics, their model is much larger than ours, i.e., 51.34M vs. 14.33M, which may be a key factor in generalization ability.
\red{Compared with FeatDepth \cite{shu2020featdepth}, our method exceeds it in three metrics while using smaller models, demonstrating superior generalization ability.}} \begin{table}[t] \begin{minipage}{\linewidth} \scriptsize \caption{Comparison of model complexity.}\label{inferencetable} \small \begin{center} \begin{tabular}{l|c|c|c} \hline &Params (M)$\downarrow$ &FLOPs (G)$\downarrow$ &FPS$\uparrow$ \\ \hline Monodepth2 \cite{godard2019digging}&14.33 &8.03 &184.5 \\ \hline Ours (R18 LR) &14.33 &8.03 &184.5 \\ \hline FeatDepth \cite{shu2020featdepth}&33.16 &85.11 & 59.6\\ \hline \re{Ours (R50 HR)} &\re{32.52} &\re{44.31} &\re{87.3} \\ \hline \end{tabular} \end{center} \end{minipage} \end{table} \subsection{Online refinement} Our method utilizes temporal coherence among consecutive video frames via the proposed cross-view consistency to improve monocular depth estimation. However, the current test protocol proposed by prior works \cite{eigen2014depth,zhou2017unsupervised} is for single-frame depth estimation. To demonstrate the ability of our method to learn temporal coherence, we adopt the online refinement technique following the approach proposed by \cite{casser2019depth}. Because no ground-truth depth supervision is needed in the self-supervised training paradigm, it is natural to train the network's parameters during inference using the same losses. We thus update the model when performing inference. Each input batch during test-time refinement includes the test frame $I_t$ and the nearby frames $I_{t-1}, I_{t+1}$. This online refinement approach can fully utilize the cross-view consistency among test frames and their adjacent frames, which improves the test results significantly. The quantitative results of different methods with online refinement are reported in Table \ref{tab:refinement}.
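The refinement loop described above can be sketched as follows. This is a toy illustration, not the actual implementation: we refine a copy of the parameters with a few gradient steps on a stand-in quadratic loss using finite differences, whereas the real method backpropagates the self-supervised photometric/consistency losses through the depth and pose networks; all names are our own:

```python
import numpy as np

def online_refine(params, frames, loss_fn, steps=50, lr=0.1):
    """Test-time refinement: take a few gradient steps on the
    self-supervised loss for one frame triplet, starting from a copy of
    the trained parameters so the global model stays untouched."""
    p = params.copy()
    eps = 1e-5
    for _ in range(steps):
        # Finite-difference gradient; a real system would backpropagate
        # the photometric/consistency losses through the networks.
        grad = np.zeros_like(p)
        for i in range(p.size):
            d = np.zeros_like(p)
            d[i] = eps
            grad[i] = (loss_fn(p + d, frames) - loss_fn(p - d, frames)) / (2 * eps)
        p -= lr * grad
    return p

# Toy stand-in for the self-supervised loss: refined parameters should
# move toward the per-sequence optimum encoded by `frames`.
loss = lambda p, f: float(np.sum((p - f) ** 2))
trained = np.zeros(3)
frames = np.array([1.0, -0.5, 2.0])   # stands in for (I_{t-1}, I_t, I_{t+1})
refined = online_refine(trained, frames, loss)
print(refined)   # close to `frames`; `trained` itself is unchanged
```

The key design point is that refinement operates on a per-sequence copy of the parameters, so each test sequence can adapt to its own cross-view consistency signal without contaminating the globally trained model.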
\iffalse \begin{figure*} \begin{center} \begin{minipage}{0.96\textwidth} \centering \includegraphics[width=13cm]{image_supp/qualitative.jpg} \end{minipage} \begin{minipage}[htbp]{0.02\textwidth} \noindent \begin{turn}{90}{\fontsize{8pt}{\baselineskip}\selectfont Df~~~~~~~V2D~~~~~~ MD2~~~~~Ours~~~~~~Input} \end{turn} \end{minipage} \end{center} \caption{Qualitative results on KITTI test set. Our method produces more accurate depth maps in low-texture region and moving instances compared with other methods Monodepth2 (MD2) \cite{godard2019digging}, V2D \cite{mahjourian2018unsupervised}, and Df \cite{zou2018df}} \label{fig:1} \end{figure*} \fi \subsection{Model complexity analysis} Since depth estimation methods are often used in autonomous driving or drone systems, model size and inference speed are very important. Therefore, we compare our model with prior works in terms of parameters (M), computations (FLOPs), and inference speed (FPS), as shown in Table \ref{inferencetable}. Our method introduces two cross-view loss terms to regularize temporal coherence during training, which are not used during inference. Therefore, the model complexity of our method is the same as that of our baseline. \iffalse \begin{figure}\label{fig:offsetvisual} \begin{center} \begin{minipage}[h]{0.5\textwidth} \centering \includegraphics[width=8cm]{image_supp/offsetvisual.jpg} \end{minipage} \end{center} \caption{Visualization of the learned feature offset by OffsetNet.} \end{figure} \fi \section{Conclusion and discussion}\label{conclusion} This study is dedicated to the SS-MDE problem with a focus on robust cross-view consistency. We first propose the DFA loss to exploit the temporal coherence in feature space to produce consistent depth estimation.
Compared with the photometric loss in the RGB space, measuring the cross-view consistency in the depth feature space is more robust in challenging cases such as illumination variation and texture-less regions, owing to the representation power of deep features. Moreover, we design the VDA loss to exploit robust cross-view 3D geometry consistency by aligning point cloud distributions in the voxel space. \re{The VDA loss has been shown to be more effective} in handling moving objects and occlusion regions than the rigid point cloud alignment loss. Experimental results \re{on outdoor benchmarks} demonstrate that our method achieves superior results to state-of-the-art approaches and can generate better depth maps in texture-less regions and moving object areas. \re{More effort could be made to improve the voxelization method in the VDA loss to enhance generalization ability and to apply the proposed method to indoor scenes; we leave this for future work.} \bibliographystyle{IEEEtran}
\section{#1}\setcounter{equation}{0}} \newcommand{\sN}{{\mathsf N}} \newcommand{\Ts}{{\mathsf T}} \newcommand{\Ss}{{\mathsf S}} \newcommand{\Sshat}{\widehat{\mathsf S}} \newcommand{\fs}{{\mathsf f}} \newcommand{\Js}{{\mathsf J}} \newcommand{\vs}{{\mathsf v}} \newcommand{\BB}{{\mathscr B}} \newcommand{\DD}{{\mathscr D}} \newcommand{\EE}{{\mathscr E}} \newcommand{\HH}{{\mathscr H}} \newcommand{\GG}{{\mathscr G}} \newcommand{\KK}{{\mathscr K}} \newcommand{\FF}{{\mathscr F}} \newcommand{\LL}{{\mathcal L}} \newcommand{\Ac}{{\mathcal A}} \newcommand{\Bc}{{\mathcal B}} \newcommand{\Cc}{{\mathcal C}} \newcommand{\Dc}{{\mathcal D}} \newcommand{\Fc}{{\mathcal F}} \newcommand{\Hc}{{\mathcal H}} \newcommand{\cN}{{\mathcal N}} \newcommand{\cP}{{\mathcal P}} \newcommand{\cT}{{\mathcal T}} \newcommand{\Uc}{{\mathcal U}} \newcommand{\Zc}{{\mathcal Z}} \newcommand{\Ai}{{\mathcal A}^\circ} \newcommand{\cV}{{\mathcal V}} \newcommand{\cI}{{\mathcal I}} \newcommand{\Ic}{{\mathcal I}} \newcommand{\cK}{{\mathcal K}} \newcommand{\OO}{{\mathscr O}} \newcommand{\QQ}{{\mathcal Q}} \newcommand{\Aa}{{\cal A}} \newcommand{\Ba}{{\cal B}} \newcommand{\Fa}{{\cal F}} \newcommand{\Uu}{{\cal U}} \newcommand{\Xx}{{\cal X}} \newcommand{\Zz}{{\cal Z}} \newcommand{\Ft}{{\widetilde{\Ff}}} \newcommand{\sW}{{\sf W}} \newcommand{\kb}{{\boldsymbol{k}}} \newcommand{\lb}{{\boldsymbol{\ell}}} \newcommand{\fb}{{\boldsymbol{f}}} \newcommand{\gb}{{\boldsymbol{g}}} \newcommand{\hb}{{\boldsymbol{h}}} \newcommand{\nb}{{\boldsymbol{n}}} \newcommand{\xb}{{\boldsymbol{x}}} \newcommand{\yb}{{\boldsymbol{y}}} \newcommand{\xbo}{{\boldsymbol{x_0}}} \newcommand{\etb}{{\boldsymbol{\eta}}} \newcommand{\Tb}{{\boldsymbol{T}}} \newcommand{\Afrak}{{\ygoth A}} \newcommand{\Of}{{\mathfrak O}} \newcommand{\ff}{{\mathfrak F}} \newcommand{\Ip}{{\mathfrak I}} \newcommand{\Afr}{{\mathfrak A}} \newcommand{\Mfr}{{\mathfrak M}} \newcommand{\ogth}{{\mathfrak o}} \newcommand{\tgth}{{\mathfrak t}} \newcommand{\wgth}{{\mathfrak w}} 
\newcommand{\etap}{\eta'} \newcommand{\Ran}{{\rm Ran}\,} \newcommand{\supp}{{\rm supp}\,} \newcommand{\Span}{{\rm span}\,} \DeclareMathOperator{\ad}{ad} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\Inn}{Inn} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\pr}{pr} \DeclareMathOperator{\sep}{sep} \renewcommand{\Re}{{\rm Re}\,} \renewcommand{\Im}{{\rm Im}\,} \newcommand{\im}{{\rm im}\,} \newcommand{\End}{{\rm End}} \newcommand{\Sym}{{\rm Sym}} \newcommand{\dvol}{d\textrm{vol}} \newcommand{\ip}[2]{{\langle #1\mid #2\rangle}} \newcommand{\ket}[1]{{\vert #1\rangle}} \newcommand{\bra}[1]{{\langle #1 \mid}} \newcommand{\stack}[2]{\substack{#1 \\ #2}} \newcommand{\expt}[2]{{\langle #1 \rangle}_{#2}} \newcommand{\ub}{\overline{u}} \newcommand{\vb}{\overline{v}} \newcommand{\wb}{\overline{w}} \newcommand{\Xb}{\overline{X}} \newcommand{\Xib}{\overline{\Xi}} \newcommand{\Xxb}{\overline{\Xx}} \newcommand{\Yb}{\overline{Y}} \newcommand{\fhat}{\widehat{f}} \newcommand{\Had}{{\rm\sf Had}\,} \newcommand{\WF}{{\rm WF}\,} \newcommand{\Ob}{{\boldsymbol{0}}} \newcommand{\eb}{{\boldsymbol{e}}} \newcommand{\Bb}{{\boldsymbol{B}}} \newcommand{\Cb}{{\boldsymbol{C}}} \newcommand{\Db}{{\boldsymbol{D}}} \newcommand{\Fb}{{\boldsymbol{F}}} \newcommand{\Ib}{{\boldsymbol{I}}} \newcommand{\Lb}{{\boldsymbol{L}}} \newcommand{\Mb}{{\boldsymbol{M}}} \newcommand{\Nb}{{\boldsymbol{N}}} \newcommand{\Pb}{{\boldsymbol{P}}} \newcommand{\Qb}{{\boldsymbol{Q}}} \newcommand{\Ub}{{\boldsymbol{U}}} \newcommand{\Mbu}{{\boldsymbol{\utilde{M}}}} \newcommand{\Nbu}{{\boldsymbol{\utilde{N}}}} \newcommand{\Lc}{{\mathcal{L}}} \newcommand{\Mc}{{\mathcal{M}}} \newcommand{\Nc}{{\mathcal{N}}} \newcommand{\Sc}{{\mathcal{S}}} \newcommand{\Wc}{{\mathcal{W}}} \newcommand{\Tu}{{\mathcal T}} \newcommand{\dist}{{\rm dist}\,} \newcommand{\Bund}{{\sf Bund}} \newcommand{\Ct}{{\sf C}} \newcommand{\Jt}{{\sf J}} \newcommand{\It}{{\sf I}} \newcommand{\Ut}{{\sf U}} \newcommand{\Cat}{{\sf Cat}} 
\newcommand{\FSetInj}{{\sf FSet\hbox{-}Inj}} \newcommand{\LCTo}{{\sf LCT}_0} \newcommand{\LCT}{{\sf LCT}} \newcommand{\Loco}{{\sf Loc}_0} \newcommand{\Loc}{{\sf Loc}} \newcommand{\FLoc}{{\sf FLoc}} \newcommand{\Obs}{{\sf Obs}} \newcommand{\Val}{{\sf Val}} \newcommand{\IsoP}{{\sf IsoPhys}} \newcommand{\preSympl}{{\sf preSympl}} \newcommand{\Set}{{\sf Set}} \newcommand{\iSet}{{\sf iSet}} \newcommand{\States}{{\sf States}} \newcommand{\Sympl}{{\sf Sympl}} \newcommand{\Sys}{{\sf Sys}} \newcommand{\SysLPE}{{\sf SysLPE}} \newcommand{\Alg}{{\sf Alg}} \newcommand{\AlgSts}{{\sf AlgSts}} \newcommand{\CAlgSts}{{\sf C^*\hbox{-}AlgSts}} \newcommand{\CAlg}{{\sf C^*\hbox{-}Alg}} \newcommand{\TAlg}{{\sf TAlg}} \newcommand{\Test}{{\sf Test}} \newcommand{\Top}{{\sf Top}} \newcommand{\TVS}{{\sf TopVS}} \newcommand{\GH}{{\sf GH}} \newcommand{\Vect}{{\sf Vect}} \newcommand{\iVect}{{\sf iVect}} \newcommand{\Phys}{{\sf Phys}} \newcommand{\twAlgSts}{{\sf gr\hbox{-}Alg\hbox{-}Sts}} \newcommand{\xLCT}{{\sf xLCT}} \newcommand{\xAlg}{{\sf xAlg}} \newcommand{\Af}{{\mathscr A}} \newcommand{\Afd}{{\mathscr A}^\sqcup} \newcommand{\Bf}{{\mathscr B}} \newcommand{\Bfd}{{\mathscr B}^\sqcup} \newcommand{\Cf}{{\mathscr C}} \newcommand{\Df}{{\mathscr D}} \newcommand{\Ef}{{\mathscr E}} \newcommand{\Ff}{{\mathscr F}} \newcommand{\Gf}{{\mathscr G}} \newcommand{\If}{{\mathscr I}} \newcommand{\Jf}{{\mathscr J}} \newcommand{\Lf}{{\mathscr L}} \newcommand{\Mf}{{\mathscr M}} \newcommand{\Pf}{{\mathscr P}} \newcommand{\Qf}{{\mathscr Q}} \newcommand{\Rf}{{\mathscr R}} \newcommand{\Sf}{{\mathscr S}} \newcommand{\Tf}{{\mathscr T}} \newcommand{\Uf}{{\mathscr U}} \newcommand{\Vf}{{\mathscr V}} \newcommand{\Tc}{{\mathcal T}} \newcommand{\Vc}{{\mathcal V}} \newcommand{\Wf}{{\mathscr W}} \newcommand{\Xf}{{\mathscr X}} \newcommand{\Zf}{{\mathscr Z}} \newcommand{\zetad}{\zeta^\sqcup} \newcommand{\etad}{\eta^\sqcup} \newcommand{\CPT}{\mathscr{CPT}} \newcommand{\CoPow}{\text{\bf CoPow}} \newcommand{\PT}{\mathscr{PT}} 
\newcommand{\Sol}{{\mathscr L}} \newcommand{\Forget}{{\mathscr V}} \newcommand{\obj}{{\rm obj}\,} \newcommand{\id}{{\rm id}} \newcommand{\Mor}{{\rm Mor}} \newcommand{\nto}{\stackrel{.}{\to}} \newcommand{\op}{{\rm op}} \newcommand{\Aut}{{\rm Aut}} \newcommand{\Der}{{\rm Der}} \newcommand{\Gpd}{{\rm Gpd}} \newcommand{\Funct}{{\rm Funct}} \newcommand{\Iso}{{\rm Iso}} \newcommand{\Fld}{{\rm Fld}} \newcommand{\Nat}{{\rm Nat}} \newcommand{\Var}{{\rm Var}} \newcommand{\card}{{\rm card}\,} \newcommand{\Sp}{{\rm Sp}} \DeclareMathOperator{\Fol}{Fol} \DeclareMathOperator{\Aff}{Aff} \DeclareMathOperator{\cl}{cl} \DeclareMathOperator{\opcl}{op-cl} \DeclareMathOperator{\lincl}{lin-cl} \DeclareMathOperator{\co}{co} \DeclareMathOperator{\eq}{eq} \DeclareMathOperator{\inte}{int} \newcommand{\rce}{{\rm rce}}\newcommand{\rced}{{\rm rce}^\sqcup} \newcommand{\loc}{{\rm loc}} \newcommand{\Cpts}{{\rm Cpts}} \newcommand{\dyn}{{\rm dyn}} \newcommand{\kin}{{\rm kin}} \newcommand{\sym}{{\rm sym}} \newcommand{\tc}{{\textrm{tc}}} \DeclareMathOperator{\dcl}{-cl} \newcommand{\wscl}{{\rm w}^*\!\dcl} \newcommand{\septens}{\stackrel{{\rm sep}}{\otimes}} \newcommand{\cPhi}{\check{\Phi}} \newcommand{\tchi}{\widetilde{\chi}} \newcommand{\GL}{{\rm GL}} \newcommand{\SL}{{\rm SL}} \newcommand{\SU}{{\rm SU}} \newcommand{\SO}{{\rm SO}} \newcommand{\Spin}{{\rm Spin}} \DeclareMathOperator{\Diff}{Diff} \DeclareMathOperator{\lcm}{lcm} \newcommand{\psum}{\sideset{}{'}\sum} \DeclareMathOperator{\CCR}{CCR} \newcommand{\WW}{\mathcal{W}} \newcommand{\ett}{\widetilde{\eta}} \newcommand{\xit}{\widetilde{\xi}} \newcounter{tightenum} \newenvironment{tightitemize {\begin{list}{$\bullet$}{\setlength{\itemsep}{0pt}\setlength{\parsep}{0pt}\setlength{\topsep}{0pt}} {\end{list}} \newenvironment{tightenumerate {\begin{list}{(\roman{tightenum})}{\usecounter{tightenum} \setlength{\itemsep}{0pt}\setlength{\parsep}{0pt}\setlength{\topsep}{0pt}} {\end{list}} \newcounter{assumptions} \newcommand{\cPsi}{\check{\Psi}} 
\newcommand{\tS}{\widetilde{S}} \newcommand{\elb}{{\boldsymbol{\ell}}} \newcommand{\sS}{\leftidx{_*}{}{S}} \newcommand{\sT}{\leftidx{_*}{}{T}} \newcommand{\SNb}{\leftidx{_\Nb}{}{S}} \newcommand{\Ngth}{{\mathfrak N}} \newcommand{\Rgth}{{\mathfrak R}} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newcounter{Landau_assumptions} \begin{document} \title{The split property for locally covariant quantum field theories in curved spacetime} \author{Christopher J Fewster\thanks{\tt chris.fewster@york.ac.uk}\\ Department of Mathematics, University of York, \\ Heslington, York YO10 5DD, U.K.} \date{\today} \maketitle \begin{abstract} The split property expresses the way in which local regions of spacetime define subsystems of a quantum field theory. It is known to hold for general theories in Minkowski space under the hypothesis of nuclearity. Here, the split property is discussed for general locally covariant quantum field theories in arbitrary globally hyperbolic curved spacetimes, using a spacetime deformation argument to transport the split property from one spacetime to another. It is also shown how states obeying both the split and (partial) Reeh--Schlieder properties can be constructed, providing standard split inclusions of certain local von Neumann algebras. Sufficient conditions are given for the theory to admit such states in ultrastatic spacetimes, from which the general case follows. A number of consequences are described, including the existence of local generators for global gauge transformations, and the classification of certain local von Neumann algebras. Similar arguments are applied to the distal split property and circumstances are exhibited under which distal splitting implies the full split property. 
\\ \par\noindent {\bf Mathematics Subject Classification (2010)} 81T05, 81T20\\ {\bf Keywords} Split property, Reeh-Schlieder theorem, local covariance \\[5pt] {\em Dedicated to the memory of John E Roberts} \end{abstract} \section{Introduction} In relativistic physics, one expects that spacelike separated local spacetime regions should constitute independent subsystems. The simplest expression of this in quantum field theory (QFT) is \emph{Einstein causality}, which requires that observables localized in spacelike separated regions commute and are therefore commensurable. Algebraic quantum field theory~\cite{Haag} offers various strengthened criteria for statistical independence of observables at spacelike separation (see~\cite{Summers:1990,Summers:2009} for reviews) of which the \emph{split property} has turned out to be particularly deep and fruitful. For the most part, the split property has been studied in Minkowski space, while in curved spacetime results have related to particular linear field theories~\cite{Verch_nucspldua:1993, DAnHol:2006}. In this paper we establish the split property in general globally hyperbolic spacetimes, within the framework of locally covariant QFT~\cite{BrFrVe03} and subject to additional conditions described below. To set the scene, we briefly recall the definition of the split property in Minkowski space. In the algebraic framework~\cite{Haag} one considers a net of $C^*$-algebras $\Ac(O)$ indexed by open bounded regions of Minkowski space. These algebras share a common unit, and (among other axioms) are isotonous, i.e., $O_1\subset O_2$ implies that $\Ac(O_1)\subset \Ac(O_2)$. Let $\omega$ be a state on the $C^*$-algebra $\Ac$ generated by all the $\Ac(O)$, thereby inducing a GNS representation $\pi$ of $\Ac$ on Hilbert space $\HH$ with GNS vector $\Omega$. In this representation we may form local von Neumann algebras by taking double commutants, $\Rgth(O)=\pi(\Ac(O))''$. 
Clearly, whenever $O_1\subset O_2$, there is an inclusion $\Rgth(O_1)\subset \Rgth(O_2)$ of von Neumann algebras; following~\cite{DopLon:1984}, the inclusion is said to \emph{split} if there is a type $\text{I}$ von Neumann factor $\Ngth$ such that $\Rgth(O_1)\subset\Ngth\subset\Rgth(O_2)$. That is, $\Ngth$ has trivial centre, and is isomorphic as a von Neumann algebra to the algebra of all bounded operators on some (not necessarily separable) Hilbert space~\cite[Prop.~2.7.19]{BratRob}. The state $\omega$ is said to have the \emph{split property} if such inclusions split for all relatively compact $O_1,O_2$ with $\overline{O_1}\subset O_2$. The relationship with statistical independence arises as follows. Suppose the net of local algebras obeys Einstein causality, so that algebras of causally disjoint regions commute elementwise. If $O_2$ and $O_3$ are causally disjoint and the inclusion $\Rgth(O_1)\subset\Ngth\subset\Rgth(O_2)$ is split for some $O_1\subset O_2$, then $\Rgth(O_1)$ and $\Rgth(O_3)$ enjoy a high degree of statistical independence: the algebra they generate is isomorphic to their $W^*$-tensor product, and thus any normal states $\varphi_1$ and $\varphi_3$ on $\Rgth(O_1)$ and $\Rgth(O_3)$ can be extended to a normal product state $\varphi$ obeying $\varphi(A_1A_3)=\varphi_1(A_1)\varphi_3(A_3)$ for $A_i\in\Rgth(O_i)$ ($i=1,3$). Originally conjectured by Borchers, the split property was first proved for free fields by Buchholz~\cite{Buc:1974}. Subsequently, it was established for general models \cite{BucDAnFre:1987} under suitable hypotheses of \emph{nuclearity}, which controls the growth of the localized state space with energy. As the nuclearity criterion is closely linked to the thermodynamic properties of the theory~\cite{BucWic:1986,BucJun:1986,BucJun:1989}, it is expected to hold for many theories of physical interest. 
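Schematically, if $\Rgth(O_1)\subset\Ngth\subset\Rgth(O_2)$ is split and $O_3$ is causally disjoint from $O_2$, the independence properties just described read
\[
\Rgth(O_1)\vee\Rgth(O_3)\cong \Rgth(O_1)\,\overline{\otimes}\,\Rgth(O_3),
\qquad
\varphi(A_1A_3)=\varphi_1(A_1)\varphi_3(A_3)\quad (A_i\in\Rgth(O_i),\ i=1,3),
\]
restating the facts above in display form.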
In particular, it is satisfied by free fields and even by countably many free fields provided that the spectrum of masses obeys suitable conditions~\cite{BucWic:1986}. Our approach to the split property in curved spacetimes is similar in spirit to Sanders' work on the Reeh--Schlieder property~\cite{Sanders_ReehSchlieder}: the existence of a state with the desired properties on the given spacetime is deduced by deforming to a spacetime on which such a state is known (or assumed) to exist. (In the Reeh--Schlieder case, the states obtained are not generally cyclic for \emph{all} local algebras, and so what is proved is a partial Reeh--Schlieder property.) For linear fields, related arguments appear in~\cite{Verch:1993,Dappiaggi:2011} and are also used in the proof of the split property~\cite{Verch_nucspldua:1993,DAnHol:2006}. In these cases, the existence of states with the Reeh--Schlieder or split property was proved for ultrastatic spacetimes and used to deduce similar results in more general spacetimes. A novelty of our specific approach is that we rephrase the deformation arguments for the split and partial Reeh--Schlieder properties in a common language, allowing streamlined proofs running in close analogy to one another. Indeed, we will give a combined result on states obeying both the split and partial Reeh--Schlieder properties, thus yielding \emph{standard split inclusions}~\cite{DopLon:1984}. The paper is structured as follows: in Section~\ref{sect:prelim} we describe the relevant geometrical background, in particular introducing the concept of a \emph{regular Cauchy pair}, and also recall the main ideas needed from locally covariant QFT~\cite{BrFrVe03}. Section~\ref{sect:split} contains our main results on the split and Reeh--Schlieder properties.
In the latter case, this reproduces results from~\cite{Sanders_ReehSchlieder}; the interest here is that the proof runs in close analogy to that of the split property, and that the split and Reeh--Schlieder properties can hold simultaneously. We show, furthermore, that the \emph{distal split property}, in which one demands split inclusions only for situations in which the outer region is sufficiently larger than the inner, is also amenable to deformation arguments; moreover, we give results to show that the full split property follows from a suitable distal split condition for models whose state spaces are compatible with local quasi-equivalence and the timeslice condition. It follows that models that obey a distal split condition but not the full split property (see, e.g.,~\cite[Thm 4.3]{DAnDopFreLon:1987}) cannot admit such state spaces. Section~\ref{sect:ultrastatic} describes sufficient conditions for the existence of states with the Reeh--Schlieder and split properties in connected ultrastatic spacetimes. As every connected spacetime in our category can be deformed to such a spacetime, this establishes conditions for our results to hold in generality. Nonetheless, our deformation arguments hold even for disconnected spacetimes, and we give an example of a state over a disconnected spacetime with the (full) Reeh--Schlieder property. By way of outlook, a number of applications of the split property are described in Section~\ref{sect:split}. These include statistical independence at spacelike separation, the existence of local generators of global gauge transformations (established in the Minkowski space case in~\cite{DopLon:1983}) and the identification of local algebras as the unique hyperfinite type $\text{III}_1$ factor, up to a tensor product with an abelian algebra.
However, there are numerous additional directions that can be explored, and in general the split property brings a much more detailed set of tools to bear on the general analysis of QFT in curved spacetimes than has so far been available. \section{Preliminaries}\label{sect:prelim} \subsection{The category $\Loc$ and spacetime deformation} Locally covariant quantum field theory~\cite{BrFrVe03} describes QFT on a category of globally hyperbolic spacetimes $\Loc$. Fixing a spacetime dimension $n\ge 2$, objects of $\Loc$ are quadruples $\Mb=(\Mc,g,\ogth,\tgth)$ where $\Mc$ is a smooth paracompact orientable nonempty $n$-manifold with finitely many connected components, $g$ is a smooth time-orientable metric of signature $+-\cdots-$ on $\Mc$, and $\ogth$ and $\tgth$ are choices of orientation and time-orientation respectively,\footnote{The orientation (resp., time-orientation) is conveniently represented as a choice of one of the connected components of the nowhere-zero smooth $n$-forms (resp., $g$-timelike $1$-forms) on $\Mc$.} such that the spacetime $\Mb$ is globally hyperbolic. That is, $\Mb$ has no closed causal curves and the intersection $J^+_\Mb(p)\cap J^-_\Mb(q)$ of the causal future of $p$ with the causal past of $q$ is compact (possibly empty) for every pair of points $p,q\in\Mc$. A morphism between two objects $\Mb=(\Mc,g,\ogth,\tgth)$ and $\Mb'=(\Mc',g',\ogth',\tgth')$ of $\Loc$ is any smooth embedding $\psi:\Mc\to\Mc'$ that is isometric, preserves the (time)orientation (i.e., $\psi^*g'=g$, $\psi^*\ogth'=\ogth$, $\psi^*\tgth'=\tgth$) and has a causally convex image. If the image contains a Cauchy surface of $\Mb'$, $\psi$ will be described as a \emph{Cauchy morphism}. We will often consider open causally convex subsets of $\Mb$ with finitely many mutually causally disjoint components; the set of all such sets will be denoted $\OO(\Mb)$. Suppose $\Mb=(\Mc,g,\ogth,\tgth)\in\Loc$, and that $O\in\OO(\Mb)$ is nonempty.
Then $\Mb|_O:=(O,g|_O,\ogth|_O,\tgth|_O)$, i.e., $O$ regarded as a spacetime in its own right with the induced metric and causal structures from $\Mb$, is an object of $\Loc$, and the inclusion map $O\hookrightarrow\Mc$ induces a morphism $\iota_{\Mb;O}:\Mb|_O\to\Mb$. There is a useful canonical form for objects of $\Loc$. Objects of the form $(\RR\times\Sigma,g, \tgth\wedge\wgth,\tgth)$ where (a) $(\Sigma,\wgth)$ is an oriented $(n-1)$-manifold, (b) $dt$ is future-directed according to $\tgth$, where $t$ is the coordinate corresponding to the first factor of the Cartesian product $\RR\times\Sigma$, and (c) the metric splits as \begin{equation}\label{eq:split_metric} g = \beta dt\otimes dt -h_t, \end{equation} where $\beta\in C^\infty(\RR\times\Sigma)$ is strictly positive and $t\mapsto h_t$ is a smooth choice of (smooth) Riemannian metrics on $\Sigma$, are said to be in \emph{standard form}. Every leaf $\{t\}\times\Sigma$ is a smooth spacelike Cauchy surface of the spacetime. The structure theorem for $\Loc$ \cite[\S 2.1]{FewVer:dynloc_theory} is: \begin{proposition}\label{prop:BS} Supposing that $\Mb\in\Loc$, let $\Sigma$ be a smooth spacelike Cauchy surface of $\Mb$ with induced orientation $\wgth$, and let $t_*\in\RR$. Then there is a $\Loc$-object $\Mb_{\text{st}}=(\RR\times\Sigma,g, \tgth\wedge\wgth,\tgth)$ in standard form and an isomorphism $\rho:\Mb_{\text{st}}\to\Mb$ in $\Loc$ such that each $\{t\}\times\Sigma$ is a smooth spacelike Cauchy surface of $\Mb_{\text{st}}$, and $\rho(t_*,\cdot)$ is the inclusion of $\Sigma$ in $\Mb$. \end{proposition} Here, the induced orientation $\wgth$ of the Cauchy surface $\Sigma$ in $\Mb=(\Mc,g,\ogth,\tgth)$ is the unique orientation such that $\ogth|_\Sigma=\tgth|_\Sigma\wedge\wgth$. Proposition~\ref{prop:BS} is a slight elaboration of results due to Bernal and S\'anchez (see particularly, \cite[Thm 1.2]{Bernal:2005qf} and \cite[Thm 2.4]{Bernal:2004gm}), which were previously long-standing folk-theorems. 
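For orientation, we recall the simplest instances of the standard form: $n$-dimensional Minkowski space is already in standard form, with $\Sigma=\RR^{n-1}$, $\beta\equiv 1$ and $h_t$ the Euclidean metric for every $t$. More generally, an \emph{ultrastatic} spacetime is one of the form
\begin{equation*}
\Mb = \big(\RR\times\Sigma,\; dt\otimes dt - h,\; \tgth\wedge\wgth,\; \tgth\big),
\end{equation*}
where $(\Sigma,\wgth)$ is an oriented $(n-1)$-manifold carrying a Riemannian metric $h$ (such a spacetime is globally hyperbolic precisely when $h$ is complete); this is a spacetime in standard form with $\beta\equiv 1$ and $h_t\equiv h$ independent of $t$. Spacetimes of this type will play a distinguished role in Section~\ref{sect:ultrastatic}.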
We will occasionally make use of a Riemannian metric $k$ associated with any smooth spacelike Cauchy surface $\Sigma$ of $\Mb\in\Loc$, uniquely defined so that $\sqrt{k_\sigma(u,u)}n|_{\iota(\sigma)}+\iota_* u$ is a future-directed null vector for every $u\in T_\sigma\Sigma$, where $n$ is the future-directed unit normal vector field to $\Sigma$ and $\iota:\Sigma\to\Mb$ is the inclusion map. If $\Mb$ is in standard form and $\Sigma$ is the hypersurface of constant $t$, then $k=\beta^{-1}h_t$, of course. The metric $k$ measures spatial distances in terms of light travel times in the rest frame defined by $n$ and is an instantaneous version of the optical metric defined in static spacetimes~\cite{GibbonsPerry:1978}. Accordingly, we refer to $k$ as the \emph{instantaneous optical metric}.\footnote{In~\cite{GibbonsPerry:1978}, the optical metric is defined for static spacetimes, on spatial sections orthogonal to a timelike Killing vector: if the spacetime metric takes form~\eqref{eq:split_metric} with $h_t\equiv h$ and $\beta$ independent of $t$, then the optical metric is precisely $\beta^{-1}h$, coinciding with our instantaneous optical metric. In these circumstances the geodesics of the optical metric are precisely the spatial projections of null geodesics in the spacetime; this property is not generally true of the instantaneous optical metric, which, however, is a more general concept.} Methods for deforming one globally hyperbolic spacetime into another go back to the work of Fulling, Narcowich and Wald~\cite{FullingNarcowichWald}, in which the existence of Hadamard states on ultrastatic spacetimes was used to deduce their existence on general globally hyperbolic spacetimes. As first recognized in~\cite{Verch01}, the same idea can be used to great effect in locally covariant QFT. The fundamental spacetime deformation result can be formulated as follows (see~\cite[Prop.~2.4]{FewVer:dynloc_theory}). 
\begin{proposition}\label{prop:Cauchy_chain} Two spacetimes $\Mb$, $\Nb$ in $\Loc$ have oriented-diffeomorphic Cauchy surfaces if and only if there exists a chain of Cauchy morphisms in $\Loc$ forming a diagram \begin{equation} \label{eq:Cauchy_chain} \Mb\xleftarrow{\alpha} \Pb \xrightarrow{\beta} \Ib \xleftarrow{\gamma} \Fb \xrightarrow{\delta} \Nb. \end{equation} \end{proposition} \begin{proof} For later use, we sketch some details needed in the forward implication; see~\cite[Prop.~2.4]{FewVer:dynloc_theory} for the full proof. Assume without loss that $\Mb$ and $\Nb$ are in standard form with $\Mb=(\RR\times\Sigma,g_1,\ogth,\tgth_1)$ and $\Nb=(\RR\times\Sigma,g_2,\ogth,\tgth_2)$, where $\ogth=\tgth_1\wedge\wgth=\tgth_2\wedge\wgth$ for some orientation $\wgth$ of $\Sigma$. Given any reals $t_1<t_1'<t_2'<t_2$, one may construct a metric $g$ of the form~\eqref{eq:split_metric}, so that \begin{tightitemize} \item $g=g_1$ on $P=(-\infty,t_1)\times\Sigma$ and $g=g_2$ on $F=(t_2,\infty)\times\Sigma$ \item on $(-\infty,t_2')\times\Sigma$ every $g$-timelike vector is $g_1$-timelike \item on $(t_1',\infty)\times\Sigma$ every $g$-timelike vector is $g_2$-timelike. \end{tightitemize} The idea for constructing such a metric is described in~\cite{FullingNarcowichWald}; the argument is simplified and given in more detail in~\cite[Prop.~2.4]{FewVer:dynloc_theory}. Choosing $\tgth$ so that $dt$ is future-directed, the spacetime $\Ib:=(\RR\times\Sigma,g,\ogth,\tgth)$ is globally hyperbolic, because every inextendible $g$-timelike curve intersects each surface of constant $t$ precisely once. In addition, we set $\Pb:=\Mb|_{P}$ and $\Fb:=\Nb|_{F}$, whereupon the inclusions of $F$ and $P$ into $\RR\times\Sigma$ induce the required Cauchy morphisms in \eqref{eq:Cauchy_chain}. \end{proof} \subsection{Regular Cauchy pairs} We will be interested in some particular subsets of Cauchy surfaces defined as follows. \begin{definition} Let $\Mb\in\Loc$. 
A \emph{regular Cauchy pair} $(S,T)$ in $\Mb$ is an ordered pair of nonempty, open, relatively compact subsets of a common smooth spacelike Cauchy surface of $\Mb$, such that $\overline{S}\subset T$ and $\overline{T}$ has nonempty complement in that Cauchy surface. There is a preorder on regular Cauchy pairs so that $(S_1,T_1)\prec (S_2,T_2)$ if and only if $S_2\subset D_\Mb(S_1)$ and $T_1\subset D_\Mb(T_2)$.\footnote{ The preorder is not a partial order, because $(S_1,T_1)\prec (S_2,T_2)\prec (S_1,T_1)$ implies $D_\Mb(S_1)=D_\Mb(S_2)$ and $D_\Mb(T_1)=D_\Mb(T_2)$, but not necessarily $S_1=S_2$ and $T_1=T_2$.} \end{definition} These conditions ensure that $D_\Mb(S)$ and $D_\Mb(T)$ are open and causally convex, and hence elements of $\OO(\Mb)$. Here, for any subset $U$ of $\Mb$, $D_\Mb(U)$ denotes the Cauchy development, consisting of all points $p$ in $\Mb$ with the property that all inextendible piecewise-smooth causal curves through $p$ intersect $U$. The preorder $\prec$ is illustrated in Figure~\ref{fig:preorder}.
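To fix ideas, consider two-dimensional Minkowski space with standard coordinates $(t,x)$, so that $\Sigma=\RR$. The sets $S=\{0\}\times(-1,1)$ and $T=\{0\}\times(-2,2)$ in the Cauchy surface $t=0$ form a regular Cauchy pair, whose Cauchy developments are the open double cones
\begin{equation*}
D_\Mb(S) = \{(t,x): |x|+|t|<1\},\qquad D_\Mb(T) = \{(t,x): |x|+|t|<2\}.
\end{equation*}
On the Cauchy surface $t=1/2$, the regular Cauchy pair $(S',T')$ given by $S'=\{1/2\}\times(-1/4,1/4)$ and $T'=\{1/2\}\times(-3,3)$ obeys $(S,T)\prec (S',T')$, because $S'\subset D_\Mb(S)$ and $T\subset D_\Mb(T')$.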
\begin{figure} \tdplotsetmaincoords{75}{90} \pgfmathsetmacro{\rvec}{.8} \pgfmathsetmacro{\thetavec}{15} \pgfmathsetmacro{\phivec}{60} \begin{center} \begin{tikzpicture}[scale=5,tdplot_main_coords] \coordinate (O) at (0,0,0); \coordinate (Q) at (0,0,0.4); \tdplotdrawarc{(O)}{0.35}{0}{360}{anchor=north}{} \tdplotdrawarc{(O)}{0.5}{0}{360}{anchor=north}{} \tdplotdrawarc[dashed]{(O)}{0.6}{0}{360}{anchor=north}{} \tdplotdrawarc{(Q)}{0.17}{0}{360}{anchor=north}{} \tdplotdrawarc[dotted]{(Q)}{0.25}{0}{360}{anchor=north}{} \tdplotdrawarc{(Q)}{0.7}{0}{360}{anchor=north}{} \draw[dotted] (0,0.35,0) -- (0,0.25,0.4); \draw[dotted] (0,-0.35,0) -- (0,-0.25,0.4); \draw[dashed] (0,0.6,0) -- (0,0.7,0.4); \draw[dashed] (0,-0.6,0) -- (0,-0.7,0.4); \node at (0,0.0,0.0) {$S_1$}; \node at (0,0.425,0) {$T_1$}; \node at (0,0.55,0.4) {$T_2$}; \node at (0,0.0,0.4) {$S_2$}; \end{tikzpicture} \end{center} \caption{\small Regular Cauchy pairs with $(S_1,T_1)\prec (S_2,T_2)$. Dotted (resp., dashed) lines indicate relevant portions of $D_\Mb(S_1)$ (resp., $D_\Mb(T_2)$).} \label{fig:preorder} \end{figure} The following lemmas give the properties of regular Cauchy pairs that will be needed. The first is elementary: \begin{lemma} \label{lem:Cauchypairs} Let $\psi:\Mb\to\Nb$ be a Cauchy morphism. Then a pair of subsets $(S,T)$ of $\Mb$ is a regular Cauchy pair if and only if $(\psi(S),\psi(T))$ is a regular Cauchy pair for $\Nb$ and $\overline{\psi(T)}\subset \psi(\Mb)$. \end{lemma} \begin{proof} The forward direction holds because the image of a Cauchy surface under a Cauchy morphism is again a Cauchy surface \cite[Lem.~A.2]{FewVer:dynloc_theory}. In the reverse direction, similar arguments show that the same is true for pre-images, provided that the Cauchy surface is completely contained in $\psi(\Mb)$. The remaining task is thus to show that $\psi(T)$ lies in at least one smooth spacelike Cauchy surface contained in $\psi(\Mb)$. 
Let $\Sigma$ be any smooth spacelike Cauchy surface of $\Nb$ containing $\psi(T)$. It can happen that $\Sigma$ leaves $\psi(\Mb)$.\footnote{I am grateful to Ko Sanders for pointing out this possibility, which was missed in an earlier version.} However, as $\overline{\psi(T)}\subset\psi(\Mb)$, there exists a compactly supported smooth function $\chi:\Sigma\to\RR$ such that $0\le \chi\le 1$, $\chi\equiv 1$ on $\psi(T)$ and $\chi$ is supported on the portion of $\Sigma$ within $\psi(\Mb)$. Taking any regular value $\alpha\in(0,1)$ of $\chi$, the preimage $\chi^{-1}([\alpha,\infty))$ is a compact submanifold-with-boundary of $\Sigma$ (see, e.g., \cite[\S2]{Milnor_top_diff_view}). Thus, it is also a spacelike and acausal compact codimension-$1$ submanifold-with-boundary of $\psi(\Mb)$ and can therefore be extended to a smooth spacelike Cauchy surface contained in $\psi(\Mb)$ \cite[Thm~1.1]{Bernal:2005qf}, which necessarily contains $\psi(T)$. \end{proof} The next two results indicate the extent to which ordered regular Cauchy pairs may be found in nearby Cauchy surfaces. We use two pieces of notation: first, when a spacetime is presented in standard form with manifold $\RR\times\Sigma$, we denote any regular Cauchy pair of the form $(\{t\}\times S,\{t\}\times T)$ by $(S,T)_t$ for brevity; second, the notation $A\Subset B$ indicates that $\overline{A}$ is compact and contained in $B$. \begin{lemma}\label{lem:lightspeed} Suppose that $\Mb$ takes standard form with underlying manifold $\RR\times\Sigma$, and that $T$ is an open relatively compact subset of $\Sigma$ with nonempty exterior. Let $t_*\in\RR$ and let $B(U,\delta)$ denote the open ball of radius $\delta$ about $U\subset \Sigma$ with respect to the instantaneous optical metric on $\{t_*\}\times\Sigma$ induced by $\Mb$. 
For all $\delta>0$ such that $B(T,\delta)$ is relatively compact with nonempty exterior,\footnote{The existence of a relatively compact $\delta$-ball about $T$ follows from the existence of a compact exhaustion of $\Sigma$~\cite[Prop.~4.76]{Lee:topman}, given that $T$ has nonempty exterior.} there exists $\epsilon>0$ such that $\{t\}\times T \subset D_\Mb(\{t'\}\times B(T,\delta))$ provided that $t,t'\in(t_*-\epsilon,t_*+\epsilon)$. Suppose, further, that $S\subset\Sigma$ is open and relatively compact with $B(S,\delta) \Subset T$; then $(B(S,\delta),T)_t\prec (S,B(T,\delta))_{t'}$ for any $t,t'\in (t_*-\epsilon,t_*+\epsilon)$. \end{lemma} \begin{proof} Without loss of generality take $t_*=0$ and denote the instantaneous optical metric induced on $\Sigma$ via the slice $\{\tau\}\times\Sigma$ of $\Mb$ by $k_{\tau}$. As $B(T,\delta)$ is relatively compact, there is a constant $K\ge 1$ such that $k_{0,\sigma}(u,u) \le K k_{\tau,\sigma}(u,u)$ for all $u\in T_{\sigma}\Sigma$, $(\tau,\sigma)\in [-\delta,\delta]\times \overline{B(T,\delta)}$. We set $\epsilon= \delta/(2\sqrt{K})$ and choose any $t,t'\in(-\epsilon,\epsilon)$. Any smooth inextendible $\Mb$-causal curve $\gamma$ may be parameterized so that $\gamma(\tau) = (\tau,\sigma(\tau))$ ($\tau\in\RR$), where $\sigma$ is smooth and necessarily obeys $k_{\tau,\sigma(\tau)}(\dot{\sigma}(\tau),\dot{\sigma}(\tau))\le 1$ for all $\tau\in\RR$. Thus if $\gamma(t)\in T$, and $|t|,|t'|<\epsilon$, we may estimate (using the $k_0$ metric) \begin{equation}\label{eq:delta_est} {\text{dist}}(\sigma(t),\sigma(t')) \le \sqrt{K}|t-t'| < 2\epsilon \sqrt{K}=\delta \end{equation} and hence $\gamma(t')\in B(T,\delta)$ as required. (We eliminate the possibility that $\sigma(\tau)$ leaves $B(T,\delta)$ at some intermediate time by similar reasoning.) Thus $\{t\}\times T \subset D_\Mb(\{t'\}\times B(T,\delta))$ as required.
Under the additional assumptions concerning $S$, we may apply the same estimates, and reverse the roles of $t$ and $t'$ to find $\{t'\}\times S \subset D_\Mb(\{t\}\times B(S,\delta))$ in addition to the previous statement concerning $T$, whereupon $(B(S,\delta),T)_t\prec (S,B(T,\delta))_{t'}$. \end{proof} \begin{lemma} \label{lem:step} Suppose that $\Mb\in\Loc$ takes standard form with underlying manifold $\RR\times\Sigma$. (a) Let $S_1,S_2,T_1,T_2$ be open relatively compact subsets of $\Sigma$ with $S_2\Subset S_1 \Subset T_1 \Subset T_2$ and so that $T_2$ has nonempty exterior. Then, for any $t_*\in\RR$, there exists $\epsilon>0$ such that \begin{equation} (S_{1}, T_{1})_{t_1} \prec (S_{2}, T_{2})_{t_2} \end{equation} for all $t_1,t_2\in (t_*-\epsilon,t_*+\epsilon)$. (b) Let $(S,T)_t$ be a regular Cauchy pair in $\Mb$ for some $t\in\RR$. Let $S_{\text{inner}}$, $S_{\text{outer}}$, $T_{\text{inner}}$ and $T_{\text{outer}}$ be any open relatively compact subsets of $\Sigma$ such that\footnote{The existence of such sets follows from the assumptions on $S$ and $T$.} \begin{equation}\label{eq:STchain} S_{\text{inner}}\Subset S \Subset S_{\text{outer}} \Subset T_{\text{inner}}\Subset T \Subset T_{\text{outer}} \end{equation} and so that $T_{\text{outer}}$ has nonempty exterior. Then there exists an $\epsilon>0$ such that \begin{equation} (S_{\text{outer}}, T_{\text{inner}})_{t'} \prec (S,T)_t \prec (S_{\text{inner}}, T_{\text{outer}})_{t'} \end{equation} for all $t'\in (t-\epsilon,t+\epsilon)$. In particular, every Cauchy surface $\{t'\}\times\Sigma$ with $|t'-t|<\epsilon$ contains a regular Cauchy pair that precedes $(S,T)_t$ and one that is preceded by it. \end{lemma} \begin{proof} (a) Let $B(U,\delta)$ denote the open $\delta$-ball about $U\subset\Sigma$ in the instantaneous optical metric on $\{t_*\}\times\Sigma$. By assumption on the various sets in the hypotheses, we may choose $\delta>0$ such that $B(S_{2},\delta)\subset S_1$ and $B(T_1,\delta)\subset T_{2}$. 
Using these inclusions together with Lemma~\ref{lem:lightspeed}, there exists $\epsilon>0$ such that \begin{equation} (S_{1}, T_{1})_{t_1} \prec (B(S_2,\delta), T_{1})_{t_1} \prec (S_2, B(T_{1},\delta))_{t_2} \prec (S_2, T_2)_{t_2} \end{equation} holds for all $t_1,t_2\in (t_*-\epsilon,t_*+\epsilon)$. (b) Apply (a) twice, taking $t_*=t$. \end{proof} \begin{remark}\label{rem:multiCauchypair}\emph{ It follows immediately that, if finitely many regular Cauchy pairs $(S_j,T_j)$ ($1\le j\le N$) are specified in the Cauchy surface $\{t\}\times\Sigma$, then every Cauchy surface $\{t'\}\times\Sigma$ with $t'$ sufficiently close to $t$ contains, for each $j$, a regular Cauchy pair preceding $(S_j,T_j)$ and one that is preceded by it.} \end{remark} \subsection{Locally covariant quantum field theory} The basic premise of locally covariant QFT~\cite{BrFrVe03} is that a theory is given by a functor $\Af:\Loc\to\CAlg$, where $\CAlg$ is the category of unital $C^*$-algebras and injective unit-preserving $*$-homomorphisms.\footnote{Other target categories are possible and frequently employed, for example the category $\Alg$ of unital $*$-algebras with injective unit-preserving $*$-homomorphisms.} This means that each spacetime $\Mb$ corresponds to a $C^*$-algebra $\Af(\Mb)$, and that every morphism $\psi:\Mb\to\Nb$ between spacetimes has a corresponding $\CAlg$-morphism $\Af(\psi):\Af(\Mb)\to\Af(\Nb)$, subject to the requirement that $\Af(\id_\Mb)=\id_{\Af(\Mb)}$ and $\Af(\psi\circ\varphi)=\Af(\psi)\circ\Af(\varphi)$. Given such a functor, a net of local algebras may be defined in each spacetime $\Mb\in\Loc$ by setting $\Af^\kin(\Mb;O)$ to be the image of the map $\Af(\iota_{\Mb;O})$ for each nonempty $O\in\OO(\Mb)$. As described in~\cite{BrFrVe03}, these local algebras obey suitable generalizations of the assumptions in the Araki--Haag--Kastler framework~\cite{Haag}. In particular, they are \emph{isotonous}: if $O_1\subset O_2$ then $\Af^\kin(\Mb;O_1)\subset\Af^\kin(\Mb;O_2)$. 
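For orientation, it may help to keep in mind the standard example of such a functor, the free Klein--Gordon field of mass $m\ge 0$ treated in~\cite{BrFrVe03}; the following sketch uses notation ($W_\Mb$, $E_\Mb$, $\sigma_\Mb$) local to this paragraph. Here $\Af(\Mb)$ is the Weyl algebra generated by elements $W_\Mb(\phi)$, labelled by smooth spacelike-compact solutions $\phi$ of $(\Box_\Mb+m^2)\phi=0$ and obeying the Weyl relations
\begin{equation*}
W_\Mb(\phi_1)W_\Mb(\phi_2)=e^{-i\sigma_\Mb(\phi_1,\phi_2)/2}\,W_\Mb(\phi_1+\phi_2),
\end{equation*}
where $\sigma_\Mb$ is the standard (Cauchy-surface independent) symplectic form on the solution space. Writing $E_\Mb$ for the advanced-minus-retarded Green operator, the local algebra $\Af^\kin(\Mb;O)$ is generated by the $W_\Mb(E_\Mb f)$ with $f\in C_0^\infty(O)$, and a morphism $\psi:\Mb\to\Nb$ induces $\Af(\psi):W_\Mb(E_\Mb f)\mapsto W_\Nb(E_\Nb \psi_* f)$. The Einstein causality and timeslice properties discussed below are then consequences of the support properties of $E_\Mb$ and the well-posedness of the Cauchy problem, respectively.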
The additional assumptions we will use are that the theory is \emph{Einstein causal}: if $O_1,O_2\in\OO(\Mb)$ are causally disjoint (in the sense that no causal curve connects them), then $\Af^\kin(\Mb;O_1)$ and $\Af^\kin(\Mb;O_2)$ commute, and that the theory has the \emph{timeslice property}: if $\psi:\Mb\to\Nb$ is Cauchy, then $\Af(\psi)$ is an isomorphism. \begin{definition} \label{def:lcQFT} A \emph{locally covariant QFT} is a functor $\Af:\Loc\to\CAlg$ obeying Einstein causality and having the timeslice property. \end{definition} The utility of the deformation result Proposition~\ref{prop:Cauchy_chain} arises because any chain of Cauchy morphisms such as \eqref{eq:Cauchy_chain} induces, by the timeslice property, an isomorphism \begin{equation}\label{eq:alg_Cauchy_chain} \Af(\delta)\circ\Af(\gamma)^{-1}\circ\Af(\beta)\circ \Af(\alpha)^{-1}:\Af(\Mb)\to\Af(\Nb) . \end{equation} Although such isomorphisms are not canonical, owing to the many choices used in the construction, they often permit the transfer of properties and structures between the instantiations of the theory on $\Mb$ and $\Nb$. The description just given encodes the algebraic aspects of the theory. To incorporate states as well, we first define a category $\CAlgSts$ as follows. Objects of $\CAlgSts$ are pairs $(\Ac,\Sc)$, where $\Ac\in\CAlg$ and $\Sc$ is a state space for $\Ac$, i.e., a convex subset of the set of all states on $\Ac$, that is closed under operations induced by $\Ac$.\footnote{That is, if $\omega\in\Sc$ and $B\in\Ac$ with $\omega(B^*B)> 0$, then the state $\omega_B(A):=\omega(B^*AB)/\omega(B^*B)$ is also an element of $\Sc$.} A morphism in $\CAlgSts$ between $(\Ac,\Sc)$ and $(\Bc,\Tc)$ is induced by any $\CAlg$-morphism $\alpha:\Ac\to\Bc$ such that $\alpha^*\Tc\subset\Sc$; as a slight abuse of notation we will often denote the $\CAlgSts$-morphism in the same way as its underlying $\CAlg$ morphism. 
A state space for a locally covariant QFT $\Af:\Loc\to\CAlg$ is an assignment of state space $\Sf(\Mb)$ to each $\Af(\Mb)$ ($\Mb\in\Loc$) so that $\Xf(\Mb)=(\Af(\Mb),\Sf(\Mb))$ defines a functor $\Xf:\Loc\to\CAlgSts$ for which each $\Xf(\psi)$ has underlying $\CAlg$-morphism $\Af(\psi)$. We say that $\Xf$ obeys the timeslice axiom if $\Xf(\psi)$ is an isomorphism in $\CAlgSts$ for all Cauchy morphisms $\psi:\Mb\to\Nb$, which means that $\Af(\psi)^*\Sf(\Nb)=\Sf(\Mb)$ (of course, $\Af(\psi)$ is also an isomorphism because $\Af$ obeys Definition~\ref{def:lcQFT}). In this case $\Xf$ will be described as a \emph{locally covariant QFT with states}. \section{Main Results} \label{sect:split} \subsection{The split property} The split property is defined as follows.\footnote{This definition directly generalizes that used in Minkowski space, but differs from the condition studied in~\cite{BrFrImRe:2014} and discussed briefly at the end of this section.} \begin{definition}\label{def:split} Let $\Af:\Loc\to\CAlg$ be a locally covariant QFT and $\Mb\in\Loc$. A state $\omega$ on $\Af(\Mb)$ is said to have the \emph{split property} for a regular Cauchy pair $(S,T)$ if, in the GNS representation $(\HH,\pi,\Omega)$ of $\Af(\Mb)$ induced by $\omega$, there is a type $\text{\em I}$ factor $\Ngth$ such that \begin{equation}\label{eq:split} \pi(\Af^\kin(\Mb;D_\Mb(S)))''\subset \Ngth \subset \pi(\Af^\kin(\Mb;D_\Mb(T)))''. \end{equation} (For brevity, we will sometimes say that $\omega$ is split for $(S,T)$.) \end{definition} \begin{remark}\label{rem:split}\emph{ If $\omega$ has the split property for $(S,T)$ then it does for every $(\tilde{S},\tilde{T})$ with $(S,T)\prec (\tilde{S},\tilde{T})$: for $\tilde{S}\subset D_\Mb(S)$ implies $D_\Mb(\tilde{S})\subset D_\Mb(S)$ and hence by isotony \begin{align} \pi(\Af^\kin(\Mb;D_\Mb(\tilde{S})))'' &\subset \pi(\Af^\kin(\Mb;D_\Mb(S)))'', \nonumber\\ \pi(\Af^\kin(\Mb;D_\Mb(T)))'' &\subset \pi(\Af^\kin(\Mb;D_\Mb(\tilde{T})))''.
\end{align} Moreover, if nonempty $O_i\in\OO(\Mb)$ obey $O_1\subset D_\Mb(S)$, $D_\Mb(T)\subset O_2$, then there is a split inclusion \begin{equation}\label{eq:splitOi} \pi(\Af^\kin(\Mb;O_1))''\subset\Ngth \subset \pi(\Af^\kin(\Mb;O_2))'' \end{equation} by the same argument. } \end{remark} \begin{lemma}\label{lem:split} Suppose $\psi:\Mb\to\Nb$ is a Cauchy morphism and let $\Af$ be a locally covariant QFT. A state $\omega_\Nb$ on $\Af(\Nb)$ has the split property for a regular Cauchy pair $(\psi(S),\psi(T))$ with $\overline{\psi(T)}\subset \psi(\Mb)$ if and only if $\Af(\psi)^*\omega_\Nb$ has the split property for $(S,T)$. (As $\Af(\psi)$ is an isomorphism, this implies that $\omega_\Mb$ is split for $(S,T)$ if and only if $(\Af(\psi)^{-1})^*\omega_\Mb$ is split for $(\psi(S),\psi(T))$.) \end{lemma} \begin{proof} Let $\omega_\Mb=\Af(\psi)^*\omega_\Nb$ and write $(\HH_{\omega_{\star}},\pi_{\omega_{\star}},\Omega_{\omega_{\star}})$, where $\star=\Mb$ or $\Nb$, for the corresponding GNS representations. As $\Af(\psi)$ is an isomorphism there is a unitary $U:\HH_{\omega_\Mb}\to \HH_{\omega_\Nb}$ so that $U\Omega_{\omega_\Mb} = \Omega_{\omega_\Nb}$ and \begin{equation}\label{eq:intertwine} U \pi_{\omega_\Mb}(A)= \pi_{\omega_\Nb}(\Af(\psi) A)U,\qquad (A\in\Af(\Mb)). \end{equation} Consequently, $\pi_{\omega_\Nb}(\Af^\kin(\Nb;\psi(O)))'' = U\pi_{\omega_\Mb}(\Af^\kin(\Mb;O))''U^{-1}$ for any nonempty $O\in\OO(\Mb)$, and as $U\Ngth U^{-1}$ is a type $\text{I}$ factor if and only if $\Ngth$ is, the result follows. \end{proof} We now present our first deformation result on the split property. \begin{theorem}\label{thm:split} Suppose $\Af$ is a locally covariant QFT. Let $\Mb,\Nb\in\Loc$ have oriented-diffeomorphic Cauchy surfaces and suppose $\omega_\Nb$ is a state on $\Af(\Nb)$ that has the split property for all regular Cauchy pairs in $\Nb$. 
Given any regular Cauchy pair $(S_\Mb,T_\Mb)$ in $\Mb$, there is a chain of Cauchy morphisms between $\Mb$ and $\Nb$ inducing an isomorphism $\nu:\Af(\Mb)\to\Af(\Nb)$ such that $\nu^*\omega_\Nb$ has the split property for $(S_\Mb,T_\Mb)$. Consequently (by Remark~\ref{rem:split}) if nonempty $O_i\in\OO(\Mb)$ are such that $O_1\subset D_\Mb(S_\Mb)$, $D_\Mb(T_\Mb)\subset O_2$, then there is a split inclusion of the form \eqref{eq:splitOi} in the GNS representation of $\nu^*\omega_\Nb$. \end{theorem} \begin{proof} Assume without loss of generality (by Proposition~\ref{prop:BS} and Lemma~\ref{lem:Cauchypairs}) that $\Mb$ is in standard form $\Mb=(\RR\times\Sigma,g_\Mb,\ogth,\tgth_\Mb)$ and that $S_\Mb$ and $T_\Mb$ lie in the Cauchy surface $\{t_\Mb\}\times\Sigma$ for some $t_\Mb\in\RR$. By Lemma~\ref{lem:step} there exist $t_*>t_\Mb$ and a regular Cauchy pair $(S_*,T_*)$ in $\{t_*\}\times\Sigma$ such that $(S_*,T_*)\prec_\Mb (S_\Mb,T_\Mb)$, where $\prec_\Mb$ indicates the preorder with respect to the causal structure of $\Mb$. Now we may also assume without loss of generality that $\Nb$ is also in standard form $\Nb=(\RR\times\Sigma,g_\Nb,\ogth,\tgth_\Nb)$. As $(S_*,T_*)$ is also a regular Cauchy pair for $\Nb$, there exist $t_\Nb>t_*$ and a regular Cauchy pair $(S_\Nb,T_\Nb)$ in $\{t_\Nb\}\times\Sigma$ such that $(S_\Nb,T_\Nb)\prec_\Nb (S_*,T_*)$. We now construct a metric $g$ using Prop.~\ref{prop:Cauchy_chain}, choosing the values $t_1,t_1',t_2',t_2$ so that $t_\Mb<t_1<t_1'<t_*<t_2'<t_2<t_\Nb$, and thus creating an interpolating globally hyperbolic spacetime $\Ib$ and a chain of Cauchy morphisms \eqref{eq:Cauchy_chain}. The key point is that $(S_\Nb,T_\Nb) \prec_\Ib(S_*,T_*) $ and $(S_*,T_*)\prec_\Ib (S_\Mb,T_\Mb)$ and hence $(S_\Nb,T_\Nb)\prec_\Ib(S_\Mb,T_\Mb)$. To see this, consider any inextendible $g$-timelike curve $\gamma$ through $S_\Mb$. In the region $t\le t_*$ this is also a $g_\Mb$-timelike curve and intersects $S_*$, because $S_\Mb\subset D_\Mb(S_*)$. 
Thus $S_\Mb\subset D_\Ib(S_*)$. Similarly, if $\gamma$ is an inextendible $g$-timelike curve through $T_*$, then in the region $t\le t_*$ it is also a $g_\Mb$-timelike curve and intersects $T_\Mb$, because $T_*\subset D_\Mb(T_\Mb)$; so $T_*\subset D_\Ib(T_\Mb)$. This shows that $(S_*,T_*)\prec_\Ib (S_\Mb,T_\Mb)$; one proves $(S_\Nb,T_\Nb)\prec_\Ib (S_*,T_*)$ in the same way. As $\omega_\Nb$ has the split property for $(S_\Nb,T_\Nb)$ in $\Nb$, it follows (applying Lemma~\ref{lem:split} twice) that $(\Af(\delta)\circ\Af(\gamma)^{-1})^*\omega_\Nb$ has the split property for $(S_\Nb,T_\Nb)$, as a regular Cauchy pair in $\Ib$, and hence for $(S_\Mb,T_\Mb)$, again as a regular Cauchy pair in $\Ib$, because $(S_\Nb,T_\Nb)\prec_\Ib (S_\Mb,T_\Mb)$. Two further applications of Lemma~\ref{lem:split} show that $(\Af(\beta)\circ\Af(\alpha)^{-1})^* (\Af(\delta)\circ\Af(\gamma)^{-1})^*\omega_\Nb = \nu^*\omega_\Nb$ has the split property for $(S_\Mb,T_\Mb)$ in $\Mb$. \end{proof} \begin{remark}\label{rem:multisplit}\emph{ The result may be extended as follows. Suppose finitely many regular Cauchy pairs $(S_\Mb^{(j)},T_\Mb^{(j)})$ ($1\le j\le N$), lying in a common Cauchy surface of $\Mb$ are given. Owing to Remark~\ref{rem:multiCauchypair}, the values $t_*$ and $t_\Nb$ in the proof above may be chosen so that there are Cauchy pairs $(S_*^{(j)},T_*^{(j)})$ and $(S_\Nb^{(j)},T_\Nb^{(j)})$ ($1\le j\le N$) lying in the hypersurfaces $\{t_*\}\times\Sigma$ and $\{t_\Nb\}\times\Sigma$ respectively so that $(S_\Nb^{(j)},T_\Nb^{(j)})\prec_\Nb (S_*^{(j)},T_*^{(j)}) \prec_\Mb (S_\Mb^{(j)},T_\Mb^{(j)})$ for each $1\le j\le N$ and hence $(S_\Nb^{(j)},T_\Nb^{(j)})\prec_\Ib (S_\Mb^{(j)},T_\Mb^{(j)})$ for a common interpolating metric. Then the state $\nu^*\omega_\Nb$ has the split property for each of the pairs $(S_\Mb^{(j)},T_\Mb^{(j)})$ ($1\le j\le N$).} \end{remark} For theories with states $\Xf=(\Af,\Sf):\Loc\to\CAlgSts$, we may say a little more.
First, if the state $\omega_\Nb$ in the hypotheses of Theorem~\ref{thm:split} belongs to the state space $\Sf(\Nb)$, then the induced state obeys $\nu^*\omega_\Nb\in\Sf(\Mb)$, as a result of the timeslice property for $\Xf$ and the fact that $\nu$ arises from a chain of Cauchy morphisms. Much more follows if each $\Sf(\Mb)$ consists of mutually \emph{locally quasi-equivalent} states on $\Af(\Mb)$, in which case we describe $\Xf$ as obeying local quasi-equivalence. This condition requires that for every spacetime $\Mb$, relatively compact nonempty $O\in\OO(\Mb)$ and states $\omega_i\in\Sf(\Mb)$ ($i=1,2$), the GNS representations $(\HH_{\omega_i},\pi_{\omega_i},\Omega_i)$ restrict to quasi-equivalent representations of $\Af^\kin(\Mb;O)$, i.e., there is an isomorphism of von Neumann algebras $\beta:\pi_{\omega_1}(\Af^\kin(\Mb;O))'' \to\pi_{\omega_2}(\Af^\kin(\Mb;O))''$ such that $\beta\circ\pi_{\omega_1}(A)=\pi_{\omega_2}(A)$ for all $A\in \Af^\kin(\Mb;O)$.\footnote{An equivalent definition of quasi-equivalence is that the sets of states on $\Af^\kin(\Mb;O)$ induced by density matrices on $\HH_1$ and $\HH_2$ coincide \cite[Thm 2.4.26]{BratRob}.} An example of a locally quasi-equivalent state space ~\cite[Thm 3.4]{BrFrVe03} is provided by taking, in each spacetime $\Mb$, all states on the Weyl algebra of the Klein--Gordon field that are locally quasi-equivalent to any quasi-free Hadamard state (the latter being mutually locally quasi-equivalent~\cite{Verch:1994}). We have: \begin{lemma} \label{lem:quasi} If state $\omega_1$ has the split property for regular Cauchy pair $(S,T)$ in $\Mb$ and $\omega_2$ is locally quasi-equivalent to $\omega_1$, then $\omega_2$ also has the split property for $(S,T)$. 
\end{lemma} \begin{proof} Let $\Ngth$ be the type $\text{I}$ factor obeying \eqref{eq:split} and let $\beta:\pi_{\omega_1}(\Af^\kin(\Mb;D_\Mb(T)))'' \to\pi_{\omega_2}(\Af^\kin(\Mb;D_\Mb(T)))''$ be the isomorphism induced by local quasi-equivalence, obeying $\beta\circ\pi_{\omega_1}=\pi_{\omega_2}$ on $\Af^\kin(\Mb;D_\Mb(T))$. In particular, $\beta$ restricts to an isomorphism of $\pi_{\omega_1}(\Af^\kin(\Mb;D_\Mb(S)))'' \to\pi_{\omega_2}(\Af^\kin(\Mb;D_\Mb(S)))''$. Then $\beta(\Ngth)$ is a type $\text{I}$ factor, and clearly obeys $\pi_{\omega_2}(\Af^\kin(\Mb;D_\Mb(S)))'' \subset\beta(\Ngth)\subset \pi_{\omega_2}(\Af^\kin(\Mb;D_\Mb(T)))''$. \end{proof} As an immediate consequence (just as was argued for the Klein--Gordon theory in~\cite{Verch_nucspldua:1993}): \begin{theorem} Suppose $\Xf=(\Af,\Sf):\Loc\to\CAlgSts$ is a locally covariant QFT with states obeying local quasi-equivalence. Let $\Mb,\Nb\in\Loc$ have oriented-diffeomorphic Cauchy surfaces and suppose $\omega_\Nb\in\Sf(\Nb)$ has the split property for all regular Cauchy pairs in $\Nb$. Then every state $\omega_\Mb\in\Sf(\Mb)$ obeys the split property for all regular Cauchy pairs in $\Mb$. Consequently, if $O_i\in\OO(\Mb)$ are such that $O_1\subset D_\Mb(S)$, $D_\Mb(T)\subset O_2$, for a regular Cauchy pair $(S,T)$ in $\Mb$, then there is a split inclusion of the form \eqref{eq:splitOi} in the GNS representation induced by any state of $\Sf(\Mb)$. \end{theorem} \begin{proof} For each regular Cauchy pair $(S_\Mb,T_\Mb)$ of $\Mb$, Theorem~\ref{thm:split} shows the existence of some state in $\Sf(\Mb)$ having the split property for $(S_\Mb,T_\Mb)$, and hence by Lemma~\ref{lem:quasi} and local quasi-equivalence of $\Xf$, the same is true for all states of $\Sf(\Mb)$. \end{proof} \subsection{Partial Reeh--Schlieder results} As already mentioned, our result on the split property was inspired by Sanders' partial analogue of the Reeh--Schlieder theorem~\cite{Sanders_ReehSchlieder}. 
The original Reeh--Schlieder theorem~\cite{ReehSchlieder:1961} establishes that the Minkowski vacuum vector is cyclic for all local algebras, and consequently separating for all local algebras for regions with nonempty causal complement. The results of~\cite{Sanders_ReehSchlieder} demonstrate the existence of states with partial Reeh--Schlieder properties: given a spacetime region in $\Mb$, one may find (suitably regular) states that are cyclic for the corresponding local algebra, on the assumption that $\Mb$ can be deformed to a spacetime that admits a (suitably regular) state enjoying the full Reeh--Schlieder property of being cyclic for all local algebras. The introduction of regular Cauchy pairs allows for a streamlined proof of Sanders' result, which we give for completeness. More significantly, we combine this proof with that of our result on the split property to demonstrate the existence of states obeying both the split and Reeh--Schlieder properties, which give so-called standard split inclusions~\cite{DopLon:1984}. The properties we will consider are given as follows. Terminology differs from~\cite{Sanders_ReehSchlieder}. \begin{definition} Let $\Af:\Loc\to\CAlg$ be a locally covariant QFT and $\Mb\in\Loc$. A state $\omega$ on $\Af(\Mb)$ is said to have the \emph{Reeh--Schlieder property} for a regular Cauchy pair $(S,T)$ if, in the GNS representation ($\HH,\pi,\Omega)$ of $\Af(\Mb)$ induced by $\omega$, the GNS vector $\Omega$ is cyclic for $\pi(\Af^\kin(\Mb;D_\Mb(S)))''$ and separating for $\pi(\Af^\kin(\Mb;D_\Mb(T)))''$. For brevity, we will sometimes say that \emph{$\omega$ is Reeh--Schlieder for $(S,T)$}. 
If $O\in\OO(\Mb)$ and $\Omega$ is both cyclic and separating for $\pi(\Af^\kin(\Mb;O))''$, we will say that $\omega$ \emph{has the Reeh--Schlieder property for $O$}.\footnote{Sanders~\cite{Sanders_ReehSchlieder} uses this term for cyclicity alone.} \end{definition} Note that we regard the separation condition as part of the Reeh--Schlieder property, which turns out to expedite the proofs below. See Corollary~\ref{cor:RS} for a formulation involving only cyclicity as a hypothesis. \begin{remark}\label{rem:RS} \emph{If a vector is separating for an algebra, it is separating for any subalgebra thereof; if it is cyclic for an algebra, it is cyclic for any algebra of which it is a subalgebra. Thus it is clear that if $\omega$ has the Reeh--Schlieder property for $(S,T)$ then it does for every $(\tilde{S},\tilde{T})$ with $(\tilde{S},\tilde{T})\prec (S,T)$.\footnote{Note the reversal of order relative to Remark~\ref{rem:split}.} Moreover, if $O\in\OO(\Mb)$ is such that $D_\Mb(S)\subset O\subset D_\Mb(T)$, then the GNS vector of $\omega$ is both cyclic and separating for $\pi(\Af^\kin(\Mb;O))''$, i.e., $\omega$ is Reeh--Schlieder for $O$. Note that the separating property is defined at the level of the represented algebras. If $\omega$ induces a faithful GNS representation, we would have the stronger property that $\omega(A^*A)=0$ for $A\in\Af^\kin(\Mb;O)$ implies $A=0$. } \end{remark} \begin{lemma}\label{lem:RS} Let $\Af$ be a locally covariant QFT. Let $(S,T)$ be a regular Cauchy pair in $\Mb\in\Loc$ and suppose $\psi:\Mb\to\Nb$ is Cauchy. A state $\omega_\Nb$ on $\Af(\Nb)$ is Reeh--Schlieder for a regular Cauchy pair $(\psi(S),\psi(T))$ with $\overline{\psi(T)}\subset \psi(\Mb)$ if and only if $\Af(\psi)^*\omega_\Nb$ is Reeh--Schlieder for $(S,T)$. (As $\Af(\psi)$ is an isomorphism, this implies that $\omega_\Mb$ is Reeh--Schlieder for $(S,T)$ if and only if $(\Af(\psi)^{-1})^*\omega_\Mb$ is Reeh--Schlieder for $(\psi(S),\psi(T))$.) 
\end{lemma} \begin{proof} As in the proof of Lemma~\ref{lem:split}, we set $\omega_\Mb=\Af(\psi)^*\omega_\Nb$, and infer the existence of a unitary $U:\HH_{\omega_\Mb}\to \HH_{\omega_\Nb}$ so that $U\Omega_{\omega_\Mb} = \Omega_{\omega_\Nb}$ and $\pi_{\omega_\Nb}(\Af^\kin(\Nb;\psi(O)))'' = U\pi_{\omega_\Mb}(\Af^\kin(\Mb;O))''U^{-1}$ for $O\in\OO(\Mb)$. Consequently, $\Omega_{\omega_\Nb}$ is cyclic (resp., separating) for $\pi_{\omega_\Nb}(\Af^\kin(\Nb;\psi(O)))''$ if and only if $\Omega_{\omega_\Mb}$ is cyclic (resp., separating) for $\pi_{\omega_\Mb}(\Af^\kin(\Mb;O))''$. \end{proof} An analogue of Theorem~\ref{thm:split} now gives a partial Reeh--Schlieder result. \begin{theorem}\label{thm:RS} Suppose $\Af$ is a locally covariant QFT. Let $\Mb,\Nb\in\Loc$ have oriented-diffeomorphic Cauchy surfaces and suppose $\omega_\Nb$ is a state on $\Af(\Nb)$ that is Reeh--Schlieder for all regular Cauchy pairs. Given any regular Cauchy pair $(S_\Mb,T_\Mb)$ in $\Mb$, there is a chain of Cauchy morphisms between $\Mb$ and $\Nb$ inducing an isomorphism $\nu:\Af(\Mb)\to\Af(\Nb)$ such that $\nu^*\omega_\Nb$ has the Reeh--Schlieder property for $(S_\Mb,T_\Mb)$. Consequently, if $O\in\OO(\Mb)$ is relatively compact with nontrivial causal complement $O':=\Mb\setminus\overline{J_\Mb(O)}$, there is a state (formed in the same way) on $\Af(\Mb)$ with the Reeh--Schlieder property for $O$. \end{theorem} \begin{proof} The first part of the argument is identical to that of Theorem~\ref{thm:split}, except that we replace $\prec$ by $\succ$, and `split' by `Reeh--Schlieder' on every occasion, and use Lemma~\ref{lem:RS} and Remark~\ref{rem:RS} in place of Lemma~\ref{lem:split} and Remark~\ref{rem:split}. 
For the last part, choose any smooth spacelike Cauchy surface $\Sigma$ intersecting $O$ and $O'$;\footnote{The existence of such a $\Sigma$ may be seen, for example, as follows: there certainly exist smooth spacelike Cauchy surfaces $\Sigma_1$ and $\Sigma_2$ that intersect, respectively, $O$ and $O'$ in open sets. Choosing compact submanifolds-with-boundary $H_1$ (resp., $H_2$) of $\Sigma_1$ (resp., $\Sigma_2$) contained in $O\cap\Sigma_1$ (resp., $O'\cap\Sigma_2$), the union $H_1\cup H_2$ is acausal as well as being a spacelike compact submanifold-with-boundary in $\Mb$ and the existence of $\Sigma$ follows from~\cite[Thm~1.1]{Bernal:2005qf}.} then there certainly exist open relatively compact subsets $S$ and $T$ of $\Sigma$ so that $(S,T)$ is a regular Cauchy pair with $D_\Mb(S)\subset O\subset D_\Mb(T)$ (e.g., take $T=J_\Mb(O)\cap\Sigma$), and we apply the first part of the result along with Remark~\ref{rem:RS}. \end{proof} \begin{remark}\label{rem:multiRS}\emph{ For exactly the same reason as in Remark~\ref{rem:multisplit}, Theorem~\ref{thm:RS} may be extended to yield a state that has the Reeh--Schlieder property simultaneously for finitely many regular Cauchy pairs specified in a common Cauchy surface of $\Mb$.} \end{remark} The following result reproduces the main statement of~\cite[Thm 4.1]{Sanders_ReehSchlieder}. \begin{corollary} \label{cor:RS} Let $\Af$ be a locally covariant QFT and assume the geometric hypotheses of Theorem~\ref{thm:RS}. Suppose that $\omega_\Nb$ has the property that its GNS vector is cyclic for each $\pi_{\omega_\Nb}(\Af^\kin(\Nb;O))''$ indexed by a nonempty relatively compact $O\in\OO(\Nb)$ with nontrivial causal complement $O'$. Then the conclusions of Theorem~\ref{thm:RS} hold. \end{corollary} \begin{proof} We need only prove that $\omega_\Nb$ is Reeh--Schlieder for all regular Cauchy pairs $(S_\Nb,T_\Nb)$ of $\Nb$. 
By hypothesis, the GNS vector $\Omega_{\omega_\Nb}$ is cyclic for $\pi_{\omega_\Nb}(\Af^\kin(\Nb;D_\Nb(S_\Nb)))''$, so we need only prove that it is separating for $\pi_{\omega_\Nb}(\Af^\kin(\Nb;D_\Nb(T_\Nb)))''$. Choose any nonempty relatively compact $O\in\OO(\Nb)$ contained in the causal complement $T_\Nb'$ of $T_\Nb$ (so $O$ itself also has nontrivial causal complement $O'$ containing $T_\Nb$), whereupon $\Omega_{\omega_\Nb}$ is cyclic for $\pi_{\omega_\Nb}(\Af^\kin(\Nb;O))''$, and hence separating for (any subalgebra of) $\pi_{\omega_\Nb}(\Af^\kin(\Nb;O))'$. By Einstein causality, this includes $\pi_{\omega_\Nb}(\Af^\kin(\Nb;D_\Nb(T_\Nb)))$ and its weak closure. \end{proof} For a theory with states $\Xf:\Loc\to\CAlgSts$, we may argue further that the state $\nu^*\omega_\Nb$ belongs to $\Sf(\Mb)$. If one assumes that each $\Sf(\Mb)$ is a full local-equivalence class then further conclusions on the existence of states that are Reeh--Schlieder for arbitrary globally hyperbolic regions of $\Mb$ may be obtained -- see \cite{Sanders_ReehSchlieder}, which also discusses various applications of these results. We have emphasized that the proofs of Theorems~\ref{thm:split} and~\ref{thm:RS} run in close analogy. Indeed, they may be combined. \begin{theorem} \label{thm:RSsplit} Assume the hypotheses of Theorem~\ref{thm:split}. If, in addition, $\omega_\Nb$ is Reeh--Schlieder for all regular Cauchy pairs in $\Nb$, then the state $\nu^*\omega_\Nb$ has both the Reeh--Schlieder and split properties for $(S_\Mb,T_\Mb)$. \end{theorem} \begin{proof} We combine the proofs of Theorems~\ref{thm:split} and~\ref{thm:RS}. 
The value $t_*$ may be chosen so that $\{t_*\}\times\Sigma$ contains regular Cauchy pairs $(\sS,\sT)$ and $(S_*,T_*)$ with \begin{equation}\label{eq:splitRSM} (\sS,\sT)\prec_\Mb (S_\Mb,T_\Mb)\prec_\Mb (S_*,T_*), \end{equation} while $t_\Nb>t_*$ may be chosen so that $\{t_\Nb\}\times\Sigma$ contains regular Cauchy pairs $(S_\Nb,T_\Nb)$ and $(\leftidx{_\Nb}{}{S},\leftidx{_\Nb}{}{T})$ such that \begin{equation}\label{eq:splitRSN} (\leftidx{_\Nb}{}{S},\leftidx{_\Nb}{}{T})\prec_\Nb (\sS,\sT),\qquad (S_*,T_*)\prec_\Nb (S_\Nb,T_\Nb). \end{equation} Constructing the interpolating metric as in the proof of Theorem~\ref{thm:split}, the orderings \eqref{eq:splitRSM} and~\eqref{eq:splitRSN} hold with $\prec_\Mb$ and $\prec_\Nb$ replaced by $\prec_\Ib$, and we may deduce \begin{equation}\label{eq:splitRSI} (\leftidx{_\Nb}{}{S},\leftidx{_\Nb}{}{T})\prec_\Ib (S_\Mb,T_\Mb) \prec_\Ib (S_\Nb,T_\Nb). \end{equation} Now $\omega_\Nb$ has the Reeh--Schlieder property for $(S_\Nb,T_\Nb)$ and is split for $(\leftidx{_\Nb}{}{S},\leftidx{_\Nb}{}{T})$ in $\Nb$, and hence the same is true in $\Ib$ for $(\Af(\delta)\circ\Af(\gamma)^{-1})^*\omega_\Nb$. By \eqref{eq:splitRSI} and Remarks~\ref{rem:split} and~\ref{rem:RS}, $(\Af(\delta)\circ\Af(\gamma)^{-1})^*\omega_\Nb$ is both Reeh--Schlieder and split for $(S_\Mb,T_\Mb)$, again as a regular Cauchy pair in $\Ib$. Hence $\nu^*\omega_\Nb$ is both Reeh--Schlieder and split for $(S_\Mb,T_\Mb)$ in $\Mb$. \end{proof} \begin{remark}\label{rem:multisplitRS}\emph{This result also extends to the case of finitely many regular Cauchy pairs in a common Cauchy surface.} \end{remark} \subsection{Standard split inclusions and applications} In the situation of Theorem~\ref{thm:RSsplit}, but now writing $(S,T)$ for $(S_\Mb,T_\Mb)$, let $\tilde{S}$ be an open subset of the Cauchy surface containing $S$ and $T$ such that $\overline{\tilde{S}}\subset T\setminus \overline{S}$. Then $(\tilde{S},T)$ is a regular Cauchy pair lying in a common Cauchy surface with $(S,T)$.
Applying Remark~\ref{rem:multisplitRS}, the construction of $\nu$ may be arranged so that $\omega=\nu^*\omega_\Nb$ has the Reeh--Schlieder and split properties for both $(S,T)$ and $(\tilde{S},T)$. Writing $(\HH,\pi,\Omega)$ for the corresponding GNS representation, we define \begin{equation} \Rgth_U =\pi(\Af^\kin(\Mb;D_\Mb(U)))'', \end{equation} where $U$ is any of $S,\tilde{S},T$. So far, we have $\Rgth_S\subset\Ngth\subset \Rgth_T$ and that $\Omega$ is cyclic for $\Rgth_S$ (hence also for $\Ngth$ and $\Rgth_T$). Moreover $\Omega$ is cyclic for $\Rgth_{\tilde{S}}$, and therefore also for $\Rgth_T\wedge \Rgth_S'$ (using Einstein causality and causal disjointness of $S$ and $\tilde{S}$). On the other hand, $\Omega$ is separating for $\Rgth_T$ and, therefore, for its subalgebras $\Rgth_S$ and $\Rgth_T\wedge \Rgth_S'$. In summary, the inclusion $\Rgth_S\subset \Rgth_T$ is split, and $\Omega$ is cyclic and separating for each of $\Rgth_S$, $\Rgth_T$ and $\Rgth_T\wedge \Rgth_S'$.\footnote{Note that the split property for $(\tilde{S},T)$ was not used in this argument.} In the terminology of \cite{DopLon:1984}, the triple $(\Rgth_S,\Rgth_T,\Omega)$ is, therefore, a \emph{standard split inclusion}. Excluding a trivial situation in which $\Rgth_T=\CC\II$ (which can only arise if the GNS space $\HH$ is one-dimensional) it follows that both $\Rgth_S$ and $\Rgth_T$ are properly infinite von Neumann algebras with separable preduals, and the Hilbert space $\HH$ is infinite-dimensional and separable~\cite[Prop.~1.6]{DopLon:1984}.\footnote{To bring out the main point: $\Omega$ is a faithful normal state on $\Ngth$, which is therefore countably decomposable, and hence (by virtue of being a type $\text{I}$ factor) isomorphic to $\BB(\KK)$ where $\KK$ has countable dimension~\cite[7.6.46]{KadisonRingrose:iv}. That is, $\Ngth$ is of type $\text{I}_\infty$. 
As $\Omega$ is cyclic for $\Ngth$, separability of $\HH$ follows.} There is also a unitary $W:\HH\to\HH\otimes\HH$ with the properties \begin{align} W A B' W^{-1} &= A\otimes B' \qquad (A\in\Rgth_S,~B'\in\Rgth_T') \nonumber\\ W\Rgth_S W^{-1}& = \Rgth_S\otimes\II_{\HH} \nonumber\\ W\Rgth_T' W^{-1}& = \II_{\HH}\otimes\Rgth_T' \nonumber\\ W\Rgth_T W^{-1}& = \BB(\HH)\otimes\Rgth_T \end{align} and we may take $\Ngth$ to be the `canonical type $\text{I}$ factor' \begin{equation} \Ngth= W^{-1}\left(\BB(\HH)\otimes \II_{\HH}\right) W. \end{equation} It is conventional to denote the split inclusion by $\Lambda=(\Rgth_S,\Rgth_T,\Omega)$. As is well known, various consequences follow from this situation (see, e.g., \cite[\S V.5]{Haag}). We give some representative applications. \paragraph{Statistical independence} The algebras $\Rgth_S$ and $\Rgth_T'$ are statistically independent in the $W^*$-sense,\footnote{See~\cite{Summers:1990} for discussion of the relation between $C^*$- and $W^*$-senses of statistical independence.} because any pair of normal states $\varphi_S$ and $\varphi_T'$ on these algebras with respective density matrices $\rho_S$ and $\rho_T'$ induces a normal state $\varphi$ with density matrix $\rho= W^{-1} (\rho_S\otimes\rho_{T}') W$ so that \begin{equation} \varphi(AB') = \Tr \rho AB' = \Tr \left((\rho_S A)\otimes(\rho_{T}'B')\right) =\varphi_S(A)\varphi_{T}'(B') \end{equation} for $A\in\Rgth_S$, $B'\in\Rgth_T'$. \paragraph{Strictly localized states} States of the form $\Psi = W^{-1} (\psi\otimes\Omega)$ ($\psi\in\HH$, $\|\psi\|=1$) may be regarded as states strictly localized in $D_\Mb(T)$ relative to $\Omega$, because \begin{equation} \ip{\Psi}{B'\Psi} = \ip{\psi\otimes\Omega}{(\II_{\HH}\otimes B')(\psi\otimes\Omega)} = \ip{\Omega}{B'\Omega} \end{equation} for all $B'\in\Rgth_T'$.
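Note also that, by the first of the properties of $W$ listed above (taking $B'=\II_{\HH}$ and using the normalization of $\Omega$), the same vectors induce arbitrary vector states on the inner algebra: \begin{equation} \ip{\Psi}{A\Psi} = \ip{\psi\otimes\Omega}{(A\otimes\II_{\HH})(\psi\otimes\Omega)} = \ip{\psi}{A\psi} \qquad (A\in\Rgth_S). \end{equation} Thus $\Psi$ is indistinguishable from $\psi$ for measurements in $\Rgth_S$, while remaining indistinguishable from $\Omega$ for measurements in $\Rgth_T'$.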
\paragraph{Local implementation of gauge symmetries} In locally covariant QFT, the global gauge group of a theory $\Af$ may be identified with the automorphism group $\Aut(\Af)$, the group of natural isomorphisms of $\Af$ with itself~\cite{Fewster:gauge}. Suppose that the state $\omega_\Nb$ is gauge invariant in the sense that $\omega_\Nb\circ\zeta_\Nb=\omega_\Nb$ for all $\zeta\in\Aut(\Af)$, where $\zeta_\Nb$ is the component of natural transformation $\zeta$ in spacetime $\Nb$. Then $\omega=\nu^*\omega_\Nb$ is gauge invariant, $\omega\circ\zeta_\Mb=\omega$ for all $\zeta\in\Aut(\Af)$, by naturality and the definition of $\nu$, so the GNS representation carries a unitary implementation $\zeta\mapsto U(\zeta)$ of the gauge group $\Aut(\Af)$ under which $\Omega$ is fixed. Then we may define \begin{equation} U_\Lambda(\zeta)=W^{-1} (U(\zeta)\otimes\II_{\HH}) W, \end{equation} which provides a second representation of $\Aut(\Af)$, implemented by unitaries belonging to $\Ngth\subset\Rgth_T$, with \begin{equation} U_\Lambda(\zeta)AB' U_\Lambda(\zeta)^{-1} = W^{-1}\left( U(\zeta)AU(\zeta)^{-1}\otimes B'\right) W =U(\zeta)AU(\zeta)^{-1} B' \end{equation} for $A\in\Rgth_S$, $B'\in\Rgth_T'$. In other words, $U_\Lambda$ is a local representation of the gauge group on $\Rgth_S$, leaving the commutant of $\Rgth_T$ fixed. The representation is strongly continuous (with respect to a given topology on $\Aut(\Af)$) if and only if $U$ is, and this construction produces local generators for the gauge group and thus a local current algebra \cite{DopLon:1983}. In principle this discussion could be developed further to incorporate geometric symmetries of the Cauchy surface (cf.~\cite{BucDopLon:1986}) by modifying the construction of the interpolating spacetimes to ensure that the isometry is preserved throughout, and starting from an invariant state on $\Nb$. 
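Returning to the definition of $U_\Lambda$, we note for completeness that it is indeed a representation: by unitarity of $W$ and the homomorphism property of $U$, \begin{equation} U_\Lambda(\zeta_1)U_\Lambda(\zeta_2) = W^{-1}\left(U(\zeta_1)U(\zeta_2)\otimes\II_{\HH}\right)W = U_\Lambda(\zeta_1\zeta_2) \qquad (\zeta_1,\zeta_2\in\Aut(\Af)), \end{equation} while each $U_\Lambda(\zeta)$ belongs to $W^{-1}\left(\BB(\HH)\otimes\II_{\HH}\right)W=\Ngth$ by construction.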
\paragraph{Hyperfiniteness and type $\text{III}_1$} Suppose $T$ can be approximated from within by subsets $S_k\subset T$ so that each $(S_k,S_{k+1})$ is a regular Cauchy pair, $T=\bigcup_{k\in\NN} S_k$ and \begin{equation} \Rgth_T = \bigvee_{k\in\NN} \Rgth_{S_k} . \end{equation} (This inner continuity would be expected if, for example, the von Neumann algebras are generated by a system of fields, cf.~\cite{BucDAnFre:1987}; alternatively, it might be imposed as an additivity assumption.) Then because each inclusion $\Rgth_{S_k}\subset \Rgth_{S_{k+1}}$ is split there is an increasing family of type $\text{I}$ factors $\Ngth_k$ so that \begin{equation} \Rgth_T = \bigvee_{k\in\NN} \Ngth_k \end{equation} and as $\HH$ is separable, $\Rgth_T$ is seen to be hyperfinite. If, in addition, the factors appearing in the central decomposition of $\Rgth_T$ are known to be of type $\text{III}_1$, as would happen given a suitable scaling limit~\cite[Thm 16.2.18]{BaumWollen:1992} (based on \cite{Fredenhagen:1985}) then $\Rgth_T$ is isomorphic to the unique hyperfinite $\text{III}_1$ factor~\cite{Haagerup:1987} (up to a tensor product with the centre of $\Rgth_T$). \bigskip Now consider the situation of a theory with states $\Xf=(\Af,\Sf):\Loc\to\CAlgSts$ obeying local quasi-equivalence, and so that $\omega_\Nb\in\Sf(\Nb)$. Then the state $\omega$ discussed above lies in $\Sf(\Mb)$ and the GNS representation $(\tilde{\HH},\tilde{\pi},\tilde{\Omega})$ of any state $\tilde{\omega}\in\Sf(\Mb)$ restricts to representations of $\Af^\kin(\Mb;D_\Mb(S))$ and $\Af^\kin(\Mb;D_\Mb(T))$ that are quasi-equivalent to those obtained by restricting the GNS representation of $\omega$. As already mentioned, the corresponding von Neumann algebras $\tilde{\Rgth}_S$, $\tilde{\Rgth}_T$ are split, though the GNS vector is not necessarily a standard vector. 
However, some elements of the discussion above hold true as a result of the quasi-equivalence: for instance, the Hilbert space $\tilde{\HH}$ is separable (cf.\ the proof of \cite[Thm 2.4.26]{BratRob}) and $\tilde{\Rgth}_T$ contains unitaries implementing the global gauge group on $\tilde{\Rgth}_S$, and leaving $\tilde{\Rgth}_T'$ fixed. Of course, the type of the local von Neumann algebras is preserved, because they are isomorphic. Further applications of the split property to the issue of independence of local algebras can be found in~\cite{Summers:1990,Summers:2009}.\footnote{Terminology in these references differs in some respects from ours, which follows~\cite{DopLon:1984}; refs.~\cite{Summers:1990,Summers:2009} refer to a pair $(\Rgth_1,\Rgth_2)$ as split if $\Rgth_1\subset\Ngth\subset\Rgth_2'$ for some type $\text{I}$ factor $\Ngth$.} A weaker condition than the split property, namely \emph{intermediate factoriality}, is studied in \cite{BrFrVe03}, where various consequences are derived. The interpretative framework for quantum systems described by funnels of type $\text{I}_\infty$ factors has recently been addressed in~\cite{BucSto:2014}. Finally, we comment on the version of the split property used in~\cite{BrFrImRe:2014} in a discussion of the tensorial structure of locally covariant QFTs. This differs from ours in that the type $\text{I}$ von Neumann factor is required to lie between the $C^*$-algebras $\Af^\kin(\Mb;O)$ and $\Af(\Mb)$ for connected $O\in\OO(\Mb)$ with compact closure, rather than between the von Neumann algebras of nested relatively compact regions in suitable representations. An additional continuity requirement is also imposed in~\cite{BrFrImRe:2014}. While it seems likely that one could at least partly address this version of the split property with our deformation argument, we will not do this here. 
Alternatively one could investigate whether the results of~\cite{BrFrImRe:2014} hold under the version of the split property established here. \subsection{The distal split property}\label{sect:distal} The \emph{distal split property} requires only that inclusions of nested local algebras be split when the outer region is sufficiently larger than the inner. In Minkowski space, for instance, the \emph{splitting distance} $d(r)$ is defined~\cite{DAnDopFreLon:1987} so that, for $r>0$, the inclusion $\Rgth_{B_r}\subset \Rgth_{B_{r+d}}$ is split for all $d>d(r)$ and non-split for $d<d(r)$, where $B_r$ is the open ball of radius $r$ in the $t=0$ hyperplane, centred at the origin.\footnote{ One might more strongly insist on $(\Rgth_{B_r},\Rgth_{B_{r+d}},\Omega)$ being a standard split inclusion, for cyclic and separating vector $\Omega$.} An example is given in~\cite[Thm 4.3]{DAnDopFreLon:1987} that consists of infinitely many independent scalar fields of masses $m_n=(2d_0)^{-1}\log(n+1)$ and for which the splitting distance obeys $d_0\le d(r) \le 2d_0$ for all $r>0$. In this subsection we generalise the notion of the distal split property and splitting distance to the curved spacetime context and show that, on the one hand, a weak version of the distal split property is amenable to deformation arguments, while on the other, that for models obeying the timeslice axiom and local quasi-equivalence, stronger versions of the distal split property actually imply versions of the split property. In particular, if (any version of) the distal split property holds, then the splitting distance for any open ball vanishes, while if \emph{uniform distal splitting} holds then the splitting distance vanishes for all open relatively compact sets. These results entail that models such as that of~\cite[Thm 4.3]{DAnDopFreLon:1987}, if extended to a locally covariant theory, must fail either to obey the timeslice or local quasi-equivalence axioms. 
As the repetition of the phrase `open relatively compact' would become tedious in this subsection, we use the abbreviation \emph{orc} instead. Our first result applies deformation arguments to a weak version of the distal split property. \begin{theorem} \label{thm:weak_distal} Suppose $\Af$ is a locally covariant QFT. Let $\Mb$ and $\Nb$ have oriented-diffeomorphic Cauchy surfaces, and suppose $\Sigma_\Nb$ is a particular smooth spacelike Cauchy surface of $\Nb$. Suppose $\omega_\Nb$ is a state on $\Af(\Nb)$ with the property that to each orc $S_\Nb\subset \Sigma_\Nb$ with nonempty exterior, there exists a regular Cauchy pair $(S_\Nb,T_\Nb)$ in $\Sigma_\Nb$ for which $\omega_\Nb$ is split. Then, for any smooth spacelike Cauchy surface $\Sigma_\Mb$ of $\Mb$ and any orc $S_\Mb\subset \Sigma_\Mb$ with nonempty exterior, there exist a regular Cauchy pair $(S_\Mb,T_\Mb)$ in $\Sigma_\Mb$ and an isomorphism $\nu:\Af(\Mb)\to\Af(\Nb)$ so that $\nu^*\omega_\Nb$ is split for $(S_\Mb,T_\Mb)$. \end{theorem} \begin{proof} We may assume without loss of generality that both $\Mb$ and $\Nb$ are in standard form on manifold $\RR\times\Sigma$, so that $\{0\}\times\Sigma$ corresponds to $\Sigma_\Mb$ in $\Mb$ and $\Sigma_\Nb$ in $\Nb$. As usual, we abuse notation by regarding $S_\Mb$ as a subset of $\Sigma$. We may choose other orcs $S_*$, $S_\Nb$ and $\tilde{S}$ so that $S_\Mb \Subset S_* \Subset S_\Nb\Subset \tilde{S}$, and so that $\tilde{S}$ has nonempty exterior. By hypothesis on $\omega_\Nb$ and $\Sigma_\Nb$, we may choose an orc $\tilde{T}$ with $\tilde{S}\Subset \tilde{T}$ so that $\omega_\Nb$ is split for $(\tilde{S},\tilde{T})_0$ in $\Nb$, and also further orcs $T_\Nb$, $T_*$ and $T_\Mb$ so that \begin{equation} S_\Mb \Subset S_* \Subset S_\Nb\Subset \tilde{S} \Subset \tilde{T} \Subset T_\Nb \Subset T_* \Subset T_\Mb \end{equation} and $T_\Mb$ has nonempty exterior. 
We now claim that there exist $0<t_*<t_\Nb$ such that \begin{equation} (\tilde{S},\tilde{T})_0\prec_\Nb (S_\Nb,T_\Nb)_{t_\Nb}\prec_\Nb (S_*,T_*)_{t_*} \prec_\Mb (S_\Mb,T_\Mb)_0. \end{equation} To see this, first apply Lemma~\ref{lem:step}(a) twice to deduce that the left- and right-hand orderings hold for any sufficiently small $t_\Nb, t_*$; fixing such a $t_\Nb$, a further application of Lemma~\ref{lem:step}(a) entails that $t_*$ may be chosen close enough to $t_\Nb$ in $(0,t_\Nb)$ so that the central ordering is also valid. This being so, we may infer that $\omega_\Nb$ is split for $(S_\Nb,T_\Nb)_{t_\Nb}$ in $\Nb$. Constructing an interpolating metric as in the proof of Theorem~\ref{thm:split}, we obtain an isomorphism $\nu:\Af(\Mb)\to\Af(\Nb)$ such that $\nu^*\omega_\Nb$ is split for $(S_\Mb,T_\Mb)_0$ in $\Mb$. \end{proof} Applied to a locally covariant QFT $\Xf=(\Af,\Sf)$ with states obeying local quasi-equivalence, and supposing that $\omega_\Nb\in\Sf(\Nb)$, Theorem~\ref{thm:weak_distal} implies that every state in $\Sf(\Mb)$ is split for $(S_\Mb,T_\Mb)$, of course. Next, we aim to show that distal splitting is enough to deduce that the splitting distance vanishes for certain sets, and that the full split property can be inferred under some circumstances. Let $\Xf=(\Af,\Sf)$ be a locally covariant QFT with states obeying local quasi-equivalence. To establish our notation, $\Mb=(\RR\times\RR^{n-1},\eta,\ogth,\tgth)$ will denote Minkowski spacetime, with standard inertial coordinates $(t,x^1,\ldots,x^{n-1})$, metric $\eta=dt\otimes dt-\sum_{i=1}^{n-1} dx^i\otimes dx^i$, and orientation and time-orientation so that $dt\wedge dx^1\wedge\cdots\wedge dx^{n-1}$ is positively oriented and $dt$ is future-pointing. If $S\subset\RR^{n-1}$ then $B(S,r)$ will denote the open ball of radius $r$ about $S$ in the Euclidean metric, and as before, $(S,T)_t$ will denote a regular Cauchy pair $(\{t\}\times S, \{t\}\times T)$.
\begin{definition} For any orc $S$, the \emph{splitting distance} $d(S)\in [0,\infty]$ is defined as the infimum over all $r>0$ such that there is a state in $\Sf(\Mb)$ with the split property for $(S, B(S,r))_\tau$ for some $\tau\in\RR$. We say that $\Sf(\Mb)$ has the \emph{distal split property} if $d(S)<\infty$ for every orc $S$. If there exists $d_0>0$ such that $d(S)\le d_0$ for every orc $S$ then $\Sf(\Mb)$ is said to obey the \emph{uniform distal split property}; if, further, $d_0=0$, then $\Sf(\Mb)$ is said to obey the \emph{split property}. \end{definition} Owing to local quasi-equivalence, if $r>d(S)$ then there exists $\tau\in\RR$ such that every state in $\Sf(\Mb)$ is split for $(S, B(S,r))_\tau$; as $\Af(\psi)^*\Sf(\Mb)=\Sf(\Mb)$ for every automorphism $\psi$ of $\Mb$, this statement is also true for every $\tau\in\RR$, and remains true if $S$ is replaced by any of its translates or rotations. Thus $d(S)$ depends only on the equivalence class of $S$ under orientation-preserving Euclidean isometries. The split property as defined above means that every state in $\Sf(\Mb)$ is split for every regular Cauchy pair lying in a constant-time hypersurface of Minkowski space and hence (by our results of earlier sections) for every Cauchy pair in $\Mb$. We note some elementary observations: \begin{lemma} \label{lem:easydistal} For any orc $S$, we have \begin{equation}\label{eq:easydistal1} d(S) \le d(B(S,r))+r \end{equation} for all $r\ge 0$. Moreover, if $d(B(R)) <\infty$ for every $R>0$, where $B(R)=B(\{0\},R)$ is the open ball of radius $R$ about the origin, then $\Sf(\Mb)$ has the distal split property and \begin{equation}\label{eq:easydistal2} d(S)\le \inf_{R>\diam(S)} (R+d(B(R))) \end{equation} holds for every orc $S$. \end{lemma} \begin{proof} In $\RR^{n-1}$ we have $B(B(S,r_1),r_2) = B(S,r_1+r_2)$. 
Considering the chain of inclusions $S\subset B(S,r)\subset B(B(S,r),\rho)= B(S,r+\rho)$ we easily see that $\rho>d(B(S,r))$ implies $\rho+r>d(S)$, and \eqref{eq:easydistal1} follows. Next, suppose that all open balls have finite splitting distance. Let $S$ be any orc containing the origin and let $R>\diam(S)$, $\rho>d(B(R))$. Considering the inclusions $S\subset B(R)\subset B(R+\rho)\subset B(S,R+\rho)$, we see that $d(S)\le R+\rho<\infty$. As $d(S)$ is invariant under translations of $S$, \eqref{eq:easydistal2} holds for all orcs $S$ and $\Sf(\Mb)$ obeys the distal split condition. \end{proof} A less trivial observation is the following. \begin{theorem} \label{thm:distal_diffeo} Let $f\in\Diff(\RR^{n-1})$ be any diffeomorphism with uniformly bounded derivatives. For any orc $S$, $\epsilon>0$ and $r>d(B(f(S),\epsilon))$, one has \begin{equation}\label{eq:diffeo_bd1} d(S) \le \inf\{\rho>0: f^{-1}(B(f(S),r+2\epsilon))\subset B(S,\rho)\}. \end{equation} In particular, this gives an estimate \begin{equation}\label{eq:diffeo_bd} d(S)\le \kappa \,d^+(f(S)) \end{equation} where $d^+(T):=\liminf_{\epsilon\to 0+} d(B(T,\epsilon))$ is the \emph{upper splitting distance} and $\kappa$ is the supremum of $\|D(f^{-1})\|$ over $B(f(S),r)\setminus f(S)$. \end{theorem} Before giving the proof, we illustrate this theorem with two examples. First, suppose $f(\xb)= \xb/\lambda$, for $\lambda>0$, in which case $\kappa=\lambda$ and we find \begin{equation}\label{eq:lin_scaling} d(S)\le \lambda d^+(\lambda^{-1}S)\quad\text{and hence}\quad d(\lambda S)\le \lambda d^+(S) \end{equation} for every orc $S$. Thus splitting distances scale at most linearly. Consequently, we have: \begin{corollary}\label{cor:distal_scale} If $d(B(R))<\infty$ for some $R>0$ then $\Sf(\Mb)$ has the distal split property. If $\Sf(\Mb)$ has the uniform distal splitting property then $\Sf(\Mb)$ has the split property. 
\end{corollary} \begin{proof} If $d(B(R))$ is finite for some $R$, then by \eqref{eq:easydistal1} one sees that $d^+(B(r))<\infty$ for any $r<R$ and hence by \eqref{eq:lin_scaling}, $d(B(r'))<\infty$ for all $r'>0$. The distal split property follows by the second part of Lemma~\ref{lem:easydistal}. If the uniform distal split property holds then we have $d(S)\le \lambda d_0$ for all $\lambda>0$ and any open relatively compact $S$. Thus $d(S)=0$ for all such $S$. (Clearly it would have been enough for this conclusion that $d(\lambda S)=o(\lambda)$ as $\lambda\to\infty$.) \end{proof} For our second example, we suppose $d(B(r_*))$ is finite and nonzero for some $r_*>0$. Let $\epsilon>0$ and set $\rho_1=\frac{1}{2}d(B(r_*+\epsilon))$ (which is finite by Corollary~\ref{cor:distal_scale}). Choose $r>2\rho_1$ and $\rho_2>\frac{1}{2}r+\epsilon$. Next, choose a real-valued $\chi\in C_0^\infty(\RR^+)$ that obeys $\chi\equiv 0$ on $[0,r_*]$, $\inf_{\RR^+}\chi' >-1$, and $\chi(\rho) = \rho-r_*$ for $\rho\in[r_*+\rho_1,r_*+\rho_2]$ (such $\chi$ certainly exist). Then we obtain a diffeomorphism $f\in\Diff(\RR^{n-1})$ by \begin{equation} f(\xb) = (\|\xb\|+\chi(\|\xb\|))\frac{\xb}{\|\xb\|} \end{equation} which acts trivially outside a compact set and obeys $f(B(\rho))=B(\rho+\chi(\rho))$; in particular, $f(B(r_*))=B(r_*)$. Applying Theorem~\ref{thm:distal_diffeo}, equation~\eqref{eq:diffeo_bd1} gives \begin{equation} d(B(r_*))\le \inf\{\rho>0: r_*+r+2\epsilon \le r_*+\rho + \chi(r_*+\rho)\} \end{equation} Noting that $\rho + \chi(r_*+\rho)=2\rho$ for any $\rho\in [ \rho_1, \rho_2]$, and that this interval contains $\frac{1}{2}r+\epsilon$ in its interior, we therefore have $d(B(r_*))\le \frac{1}{2}r+\epsilon$, and hence \begin{equation}\label{eq:distal_bd} d(B(r_*)) \le \frac{1}{2} d (B(r_*+\epsilon)) + \epsilon, \end{equation} because $r$ was arbitrary apart from the constraint $r>2\rho_1$. 
The inequality~\eqref{eq:distal_bd} holds for all $\epsilon>0$ and any $r_*>0$ (our argument assumed that $d(B(r_*))>0$, but the statement holds trivially if $d(B(r_*))=0$). Next, take any $r>0$ and iterate \eqref{eq:distal_bd} over two subintervals of length $\epsilon/2$, with $r_*=r$ and $r_*=r+\epsilon/2$, thus obtaining $d(B(r)) \le \frac{1}{4} (d (B(r+\epsilon)) + 3\epsilon)$, also valid for all $r>0$, $\epsilon>0$. Repeating the bisection process $k$ times in total, one finds \begin{equation} d(B(r)) \le \frac{d (B(r+\epsilon))}{2^{2^k}} +\frac{(1-2^{-2^k})\epsilon}{2^{k-1}}, \end{equation} and taking $k\to\infty$, we deduce that $d(B(r))=0$ for all $r$. The upshot of this argument is: \begin{corollary} If $d(B(r_*))<\infty$ for some $r_*>0$ then $d(B(r))=0$ for every $r>0$, and $d(S)=0$ for every open relatively compact $S$ that is diffeomorphic to an open ball under $f\in\Diff(\RR^{n-1})$ with bounded derivatives. \end{corollary} \begin{proof} We have already proved that $d(B(r))=0$ for all $r>0$. The remaining statement follows by Theorem~\ref{thm:distal_diffeo}: we may assume that $f(S)=B(R)$ for some $R>0$ and hence $d(B(f(S),\epsilon))=d(B(R+\epsilon))=0$ for all $\epsilon>0$, so $d^+(f(S))=0$ and $d(S)=0$ by~\eqref{eq:diffeo_bd}. \end{proof} This result stops slightly short of proving that the full split property holds if a ball of some radius has a finite splitting distance. The arguments used here cannot exclude the possibility that, for example, a hollow ball with inner and outer radii $a$ and $b$ might have a splitting distance $a$ (although this would be excluded if one assumes uniform distal splitting). Of course, the interpretation of these results is that models with nonzero splitting distances for balls, such as those of~\cite[Thm 4.3]{DAnDopFreLon:1987}, cannot be compatible with the axioms of local covariance together with the timeslice property and local quasi-equivalence. 
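For completeness, the geometric-series bookkeeping behind the iterated bound can be recorded explicitly: applying \eqref{eq:distal_bd} once on each of the $n=2^k$ subintervals of length $\epsilon/n$ and telescoping yields
\begin{equation*}
d(B(r)) \le \frac{d(B(r+\epsilon))}{2^{n}} + \frac{\epsilon}{n}\sum_{j=0}^{n-1} 2^{-j}
= \frac{d(B(r+\epsilon))}{2^{2^k}} + \frac{(1-2^{-2^k})\epsilon}{2^{k-1}},
\end{equation*}
since $\frac{\epsilon}{n}\sum_{j=0}^{n-1}2^{-j} = \frac{2\epsilon(1-2^{-n})}{n}$ and $n=2^k$; this is precisely the bound used above.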
At least heuristically one may understand the reason as follows: a nonzero splitting distance is related to the existence of a maximum admissible temperature, which is understood technically as the statement that KMS states of any higher temperature fail to be locally quasi-equivalent to the vacuum state~\cite{BucJun:1986}. The argument we will shortly present to prove Theorem~\ref{thm:distal_diffeo} is based on a spacetime metric in which a period of inflation occurs between two constant time hypersurfaces, while the metric takes the Minkowski form to the past and future of this region. As inflation tends to cool temperatures, one might expect that some KMS states with subcritical temperature in the future of the inflation must have arisen from states with (at least locally) supercritical temperatures to the past of the inflationary period. Therefore the evolution induced by the timeslice axiom cannot preserve the local quasi-equivalence class. \begin{proof}[Proof of Theorem~\ref{thm:distal_diffeo}] Let $t_*=\epsilon/3$ and let $P=(-\infty,0)\times\RR^{n-1}$, $F=(t_*,\infty)\times \RR^{n-1}$, and define $\Pb=\Mb|_P$ and $\Fb=\Mb|_F$, with inclusion morphisms $\iota_{\Mb;P}:\Pb\to\Mb$ and $\iota_{\Mb;F}:\Fb\to\Mb$. As $f\in\Diff(\RR^{n-1})$ has uniformly bounded derivatives, the push-forward by $f$ has a bounded Euclidean norm on tangent vectors, i.e., there is a constant $c>0$ such that $c\| f_* u\|\le \|u\|$ for every $u\in T_p\RR^{n-1}$ and $p\in\RR^{n-1}$, where $\|\cdot\|$ is the Euclidean norm on tangent spaces of $\RR^{n-1}$. Define a metric $h$ on $\RR^{n-1}$ so that $f^*h=\delta$, the Euclidean metric, and set \begin{equation} g = c^{2\varphi} \left( dt\otimes dt - c^{-2} \varphi h -(1-\varphi) \delta\right) \end{equation} where $\varphi\in C^\infty(\RR\times\RR^{n-1})$ takes values in $[0,1]$, with $\varphi\equiv 0$ on $F$ and $\varphi\equiv 1$ on $P$. 
Note that $g$ is a smooth metric, with the following properties: \begin{tightitemize} \item $g|_F= \eta$, while $g|_P=c^2 dt\otimes dt - h$; \item as quadratic forms $g\le c^{2\varphi}\eta$, because $h(f_*u,f_*u)=\delta(u,u)\ge c^2\delta(f_*u,f_*u)$ for all $u$; \item every $g$-causal curve is therefore $\eta$-causal and the Cauchy development of any set with respect to $\eta$ is thus contained in the Cauchy development with respect to $g$; \item every surface $\{t\}\times\RR^{n-1}$, being a Cauchy surface for $\eta$, is a Cauchy surface for $g$, which is accordingly globally hyperbolic~\cite[Cor.~14.39]{ONeill}. \end{tightitemize} We now define a spacetime $\Ib=(\RR\times\RR^{n-1}, g, \ogth,\tgth)$ which is an object in $\Loc$. The map $\beta(t,\xb)= (t/c, f(\xb))$ defines a morphism $\beta:\Pb\to\Ib$, and we have a Cauchy chain \begin{equation} \label{eq:distalCauchychain} \Mb\xleftarrow{\iota_{\Mb;P}} \Pb \xrightarrow{\beta} \Ib \xleftarrow{\iota_{\Ib;F}} \Fb \xrightarrow{\iota_{\Mb;F}} \Mb, \end{equation} where the morphisms other than $\beta$ are all induced by inclusions. An important consequence of our comments about Cauchy developments with respect to $g$ and $\eta$ is that the partial ordering $\prec_\Ib$ of regular Cauchy pairs in $\Ib$ is coarser than the Minkowski ordering $\prec_\Mb$: $(S_1,T_1)\prec_\Mb (S_2,T_2)$ implies $(S_1,T_1)\prec_\Ib (S_2,T_2)$. We now turn to our region $S$ of interest. Letting $\tau=-ct_*$, we have $\beta(\{\tau\}\times S) = \{-t_*\}\times f(S)$, and standard Minkowski geometry gives \begin{equation} ( B(f(S),\epsilon), B(f(S),\epsilon+r))_{2t_*}\prec_\Ib ( f(S), B(f(S),2\epsilon+r))_{-t_*} \end{equation} for any $r>0$; for this ordering certainly holds with respect to $\prec_\Mb$, in which we have unit speed of light and a time separation of $3t_*=\epsilon$ between the hypersurfaces containing these regular Cauchy pairs. 
Applying Lemma~\ref{lem:split} and Remark~\ref{rem:split} as in the proof of Theorem~\ref{thm:split} and using local quasi-equivalence of $\Sf(\Mb)$, we may conclude that if $r>d(B(f(S),\epsilon))$, then every state in $\Sf(\Mb)$ is split for the regular Cauchy pair $( S, f^{-1}(B(f(S),2\epsilon+r)))_{-\tau}$. Accordingly, we have $d(S)<\rho$ for all $\rho>0$ such that $B(S,\rho)$ contains $f^{-1}(B(f(S),2\epsilon+r))$, thus establishing \eqref{eq:diffeo_bd1}. To complete the proof we must estimate $\rho$. Take any $\kappa'>\kappa$, and note that $\|D(f^{-1})\|\le\kappa'$ on $f^{-1}(B(f(S),2\epsilon+r))\setminus f(S)$ for all sufficiently small $\epsilon>0$. Then $f^{-1}(B(f(S),2\epsilon+r))\subset B(S,\kappa'(2\epsilon+r))$, so \eqref{eq:diffeo_bd1} gives $d(S)\le \kappa' (2\epsilon+d(B(f(S),\epsilon)))$ as $r$ may be chosen arbitrarily close to $d(B(f(S),\epsilon))$. Taking the limit inferior as $\epsilon\to 0+$ gives $d(S)\le \kappa' d^+(f(S))$ and hence \eqref{eq:diffeo_bd} on taking $\kappa'\to\kappa+$. \end{proof} \section{Ultrastatic spacetimes}\label{sect:ultrastatic} In this section we comment briefly on sufficient conditions for a locally covariant QFT to admit a state obeying both the split and (full) Reeh--Schlieder properties on the class of connected ultrastatic globally hyperbolic spacetimes, i.e., those spacetimes $\Nb\in\Loc$ in standard form $\Nb=(\RR\times\Sigma, dt\otimes dt - h,\ogth,\tgth)$ where $h$ is a fixed complete\footnote{See, e.g.,~\cite[Prop. 5.2]{Kay1978} for the relation of completeness to global hyperbolicity.} Riemannian metric on $\Sigma$, which is assumed connected. 
As every connected spacetime $\Mb\in\Loc$ has Cauchy surfaces oriented-diffeomorphic to those of such an ultrastatic spacetime (by virtue of~\cite{NomizuOzeki1961} any Cauchy surface of $\Mb$ can be equipped with a complete Riemannian metric from which an ultrastatic spacetime may be constructed), such conditions would enable the results of Section~\ref{sect:split} to apply nontrivially to any connected $\Mb\in\Loc$. Let $\Nb$ be connected and ultrastatic, as defined above. Then $\Nb$ admits a one-parameter group of time translations $T_\tau:(t,\sigma)\mapsto (t+\tau,\sigma)$ and hence automorphisms $\Af(T_\tau)$ of $\Af(\Nb)$. Our first assumption is that $\Af(\Nb)$ admits a faithful ground state $\omega_{\Nb}$ for the time translations $\Af(T_\tau)$. That is, (a) $\omega_{\Nb}$ is a time-translationally invariant state, $\Af(T_\tau)^*\omega_{\Nb}=\omega_{\Nb}$ for all $\tau\in\RR$, and (b) the unitary implementation $U(\tau)$ of $\Af(T_\tau)$ in the GNS representation $(\HH_{\omega_\Nb},\pi_{\omega_\Nb},\Omega_{\omega_\Nb})$ induced by $\omega_{\Nb}$, which obeys $U(\tau)\pi_{\omega_\Nb}(A) U(\tau)^{-1}=\pi_{\omega_\Nb}(\Af(T_\tau)A)$ and $U(\tau)\Omega_{\omega_\Nb}=\Omega_{\omega_\Nb}$, has a positive generator, i.e., $U(\tau)=e^{iH\tau}$ with positive self-adjoint operator $H$. In the case of a theory with states $(\Af,\Sf)$, one would also assume that $\omega_\Nb\in\Sf(\Nb)$. (If $\zeta\in\Aut(\Af)$ is a global gauge transformation, we have $\zeta_\Nb\circ\Af(T_\tau) = \Af(T_\tau)\circ\zeta_\Nb$ by naturality, and as $\zeta_\Nb$ is an isomorphism, $\zeta_\Nb^*\omega_\Nb\in\Sf(\Nb)$ is also a ground state. Hence, if there is a unique ground state in $\Sf(\Nb)$, it is automatically gauge invariant.) The second assumption is needed for the Reeh--Schlieder property. 
Defining the local von Neumann algebras $\Rgth(O):=\pi_{\omega_\Nb}(\Af^\kin(\Nb;O))''$ for nonempty $O\in\OO(\Nb)$, we assume the \emph{weak timelike tube criterion} \begin{equation} \left(\bigcup_{\tau\in\RR} \Rgth(T_\tau O)\right)'' = \Rgth(\Nb) \end{equation} holds for any nonempty $O\in\OO(\Nb)$ (the right-hand side is of course $\BB(\HH_{\omega_{\Nb}})$ in the case that $\omega_\Nb$ is pure).\footnote{E.g., this condition is fulfilled if the $\Af^\kin(\Nb;T_\tau O)$ ($\tau\in\RR$) generate a dense subspace of $\Af(\Nb)$.} This condition was established by Borchers in general Wightman theories in Minkowski space~\cite{Borchers:1961} and (in suitable representations) for linear fields in stationary spacetimes by Strohmaier~\cite{Stroh:2000}.\footnote{An alternative proof of the Reeh--Schlieder theorem on ultrastatic spacetimes, based on antilocality of fractional powers of the Laplace operator, is given in~\cite{Verch:1993}.} Given this condition, it then holds immediately that $\Omega$ is cyclic for every $\pi_{\omega_\Nb}(\Af^\kin(\Nb;O))$ with nonempty $O\in\OO(\Nb)$ and so satisfies the hypotheses of Corollary~\ref{cor:RS}. See, e.g., Borchers' version~\cite[Thm~1]{Borchers_vacstate:1965} of the Reeh--Schlieder theorem~\cite{ReehSchlieder:1961}. It seems reasonable that the timelike tube criterion holds on \emph{connected} ultrastatic spacetimes for general theories of interest. For the split property, we assume additionally that $\Omega_{\omega_\Nb}$ obeys a suitable \emph{nuclearity criterion}. Let $O\in\OO(\Nb)$ be nonempty and denote $\Rgth(O):=\pi_{\omega_\Nb}(\Af^\kin(\Nb;O))''$. We say that $\omega_\Nb$ obeys the nuclearity criterion for $O$ if the maps $\Xi_\beta:\Rgth(O)\to \HH_{\omega_\Nb}$ given for $\beta>0$ by $\Xi_\beta(A)=e^{-\beta H}A\Omega_{\omega_\Nb}$, are \emph{nuclear}. 
That is, for each $\beta$ there is a countable decomposition $\Xi_\beta(\cdot) = \sum_i \psi_i\varphi_i(\cdot)$ for vectors $\psi_i\in\HH_{\omega_\Nb}$ and bounded linear functionals $\varphi_i$ on $\Rgth(O)$ such that $\sum_i \|\psi_i\|\,\|\varphi_i\|$ is finite, whereupon we write $\|\Xi_\beta\|_1$ for the infimum of this sum over all possible decompositions -- a quantity called the \emph{nuclearity index}. Using~\cite[Prop.~17.1.4]{BaumWollen:1992} (which is abstracted from~\cite{BucDAnFre:1987}), one easily sees that if $(S,T)$ is a regular Cauchy pair in $\Nb$ and $\omega_\Nb$ obeys nuclearity for $D_\Nb(T)$ with the corresponding nuclearity index obeying $\|\Xi_\beta\|_1\le e^{(\beta_0/\beta)^n}$ for some fixed $n>0$, $\beta_0>0$ and all $\beta\in(0,1)$, then $\omega_\Nb$ has the split property for $(S,T)$ with $\Omega_{\omega_\Nb}$ as a cyclic and separating vector. In the Minkowski space theory, nuclearity conditions of this type are closely related to good thermodynamic properties such as the existence of KMS states~\cite{BucJun:1986,BucJun:1989}, so again there is good reason to believe that they should hold for theories of interest. In ultrastatic spacetimes, nuclearity was established for the Klein--Gordon field in~\cite{Verch_nucspldua:1993} and for Dirac fields in~\cite{DAnHol:2006}. In summary, there is good reason to believe that physically well-behaved locally covariant theories should admit states satisfying the Reeh--Schlieder and split properties in connected ultrastatic spacetimes, and hence that the results of Section~\ref{sect:split} apply nontrivially to yield states with the split and partial Reeh--Schlieder properties in general connected globally hyperbolic spacetimes. The question of whether Reeh--Schlieder and split states can be expected in general disconnected ultrastatic spacetimes would seem to require more detailed information concerning $\Af$. 
Our deformation arguments work equally well for disconnected spacetimes, however, and one can certainly find states on disconnected spacetimes that are sufficiently entangled across the various components that they have the Reeh--Schlieder property. For example, suppose $\omega_\Mb$ has the full Reeh--Schlieder property on a connected spacetime $\Mb$, and let $O\in\OO(\Mb)$ have multiple components. Then the restriction of $\omega_\Mb$ to $\Af(\Mb|_O)$ has the full Reeh--Schlieder property on this disconnected spacetime. In this situation the `behind the moon' aspect of the Reeh--Schlieder property is brought into sharp relief: the moon need not even be in the same spacetime component as the experimenter! \section{Summary} In this paper, it has been shown that the split and Reeh--Schlieder properties can be established for locally covariant theories, using a common framework based on regular Cauchy pairs. The proofs of these properties become quite streamlined and can be run simultaneously, thus implying the existence of standard split inclusions and permitting the results of analyses such as~\cite{DopLon:1984} to be used. Sufficient conditions have been given for the existence of states obeying the split and Reeh--Schlieder properties in ultrastatic spacetimes, whereupon a spacetime deformation argument is used to export these properties to general globally hyperbolic spacetimes. As a bonus, our methods also show that (in Minkowski space) the distal split property, in combination with the timeslice property and the assumption that state spaces obey local quasi-equivalence, actually implies (in various specific senses) that the split condition holds. \paragraph{Acknowledgement} I thank Klaus Fredenhagen for asking a question about the status of the distal split property, which is answered in Section~\ref{sect:distal}, and Ko Sanders for comments on the text and for pointing out an error in a previous formulation of Lemma~\ref{lem:Cauchypairs}.
Salgó is a mountain in Hungary. It lies in Nógrád county, in the northern part of the country, km northeast of the capital Budapest. The peak of Salgó is meters above sea level, or meters above the surrounding terrain. Its width at the base is km. The terrain around Salgó is mainly hilly, but to the southwest it is flat. The highest point in the vicinity is Karancs, meters above sea level, km west of Salgó. The area around Salgó is fairly densely populated, with inhabitants per square kilometer. The nearest larger community is Salgótarján, km southwest of Salgó. The surroundings of Salgó are mainly covered by deciduous broadleaf forest. The area has a continental climate. The annual mean temperature in the area is °C. The warmest month is July, with a mean temperature of °C, and the coldest is December, with °C. The average annual precipitation is millimeters. The rainiest month is May, with on average mm of precipitation, and the driest is December, with mm of precipitation. Comments Sources External links Mountains in Nógrád Mountains in Hungary 500 meters above sea level or higher Articles with robot-adjusted position
Source: http://mathoverflow.net/revisions/96149/list

This is an edited version of the original question taking into account the comments below by Bruce. The original formulation was imprecise.

Let $\mathfrak{g}$ denote a complex simple Lie algebra of type $F_4$. Its smallest nontrivial irreducible representation is 26-dimensional. Let's call it $V$. This question is about the invariants of $\mathfrak{g}$ in this representation.

It is well-known that $\mathfrak{g}$ leaves invariant a quadratic form $Q \in \operatorname{Sym}^2 V$ and a cubic form $C \in \operatorname{Sym}^3 V$ on $V$. Indeed, $\mathfrak{g}$ can be characterised as the Lie subalgebra of $\mathfrak{sl}(V)$ which leaves invariant $Q$ and $C$. Since $V$ is irreducible, $Q$ is non-degenerate and we may use it to identify $V$ with $V^*$ as $\mathfrak{g}$-modules.

It seems to be part of the group-theoretical folklore in the Physics literature (starting possibly with this paper) that any $\mathfrak{g}$-invariant tensor on $V$ --- that is, any $\mathfrak{g}$-invariant element of $\bigoplus_{n\geq 0} V^{\otimes n}$ --- can be constructed out of $Q$ (and its inverse), $C$ and a nonzero "volume element" $\nu \in \Lambda^{26}V$ via products in the tensor algebra and contractions.

For example, we can construct six invariant tensors out of $Q$ and $C$ in degree 4:
$$Q_{ab}Q_{cd} \qquad Q_{ac}Q_{bd} \qquad Q_{ad}Q_{bc} \qquad C_{abe}C_{cdf}Q^{ef} \qquad C_{ace}C_{bdf}Q^{ef} \qquad C_{ade}C_{bcf}Q^{ef}$$
which satisfy a linear relation, since there is only a 5-dimensional space of such tensors.

Now, a quick calculation in LiE reveals that there is a $\mathfrak{g}$-invariant tensor $\Phi \in \Lambda^9 V$:

> alt_tensor(9,[0,0,0,1],F4)|[0,0,0,0]
     1

which cannot be constructed out of $Q$, $C$ and $\nu$ in the aforementioned way.

One possible way to understand $\Phi$ is to think in terms of the $\mathfrak{so}(9)$ subalgebra of $\mathfrak{g}$. Under $\mathfrak{so}(9)$, $V$ breaks up as a direct sum of the trivial ($\Lambda^0$), vector ($\Lambda^1$) and spinor ($\Delta$) irreducible representations:
$$V = \Lambda^0 \oplus \Lambda^1 \oplus \Delta$$
There are precisely two $\mathfrak{so}(9)$-invariants in $\Lambda^9 V$: one is the volume form on $\Lambda^1$ and the other is the "volume" form on $\Lambda^0$ wedged with the $\mathfrak{so}(9)$-invariant 8-form on $\Delta$. Notice that $(\mathfrak{so}(9),\Delta)$ is the holonomy representation for the Cayley plane $F_4/\operatorname{Spin}(9)$, which is well-known to have a parallel self-dual $8$-form. Then $\Phi$ is some linear combination of these two $\mathfrak{so}(9)$-invariants, which I have yet to work out.

Questions

I have two questions and, as usual, I would be very grateful for any pointers to the relevant literature:

1. Can every invariant tensor be constructed out of $Q$ (and its inverse), $C$, $\nu$ and $\Phi$ by products and contractions?
2. Is there a more convenient (for calculations) description of $\Phi$? In particular, I would like to know about the relation of the form $\Phi \otimes \Phi = \cdots$.

Thank you in advance.
Repumatic is a powerhouse application that has been designed to build, protect, and if necessary clean up your online reputation. At Repumatic we offer transparency, what we believe is the strongest technological back-end, and the highest level of return on your investment. We also offer the lowest pricing in the industry. Our 100% turn-key package is priced at $199/month and includes an editor, a US-based professional author, a team manager, and a data entry team working together each month on your campaign. This package also includes a Repumatic premium account where clients can log in and check work reports each month while having full editing access to all sites created, as well as a plethora of other reputation management tools. The Repumatic platform also gives users the ability to launch their own reputation management campaign in a full-scale do-it-yourself system. Do-it-yourself pricing ranges from free to $19/month. Users can instantly launch up to 50 branding sites and manage them all from one central login. Compare this to other services that allow you to launch and manage a single branding site. Users also have bulk tools like backlinking from the network they created, 6-month reputation management campaign templates, integrated local search enhancement, and reputation monitoring that includes social, defamation, and review monitoring, as well as full access to our reputation management insiders journal. It's only going to keep getting better! Thanks for stopping by.
Q: Arrays.asList(arr).indexOf is not working Consider the following snippet: int key1 = Arrays.asList(new int[]{1,2,3,4,5}).indexOf(5); //wrapper int key2 = new ArrayList<>(Arrays.asList(new int[]{1,2,3,4,5})).indexOf(5); //another copy But this snippet evaluates to -1 -1, which means it did not find the key 5 in the list. But why is Arrays.asList not finding the key in the list? Can anyone please explain, or suggest a quick fix for searching for a key in an array without implementing the search logic explicitly? Of course we can sort it and then use Arrays.binarySearch. Any other suggestions or other ways to do this? A: The problem is the type of array you're creating inside the asList(). Considering that the List types require non-primitive element types, you need to declare the int array as Integer. If you change your code to: Arrays.asList(new Integer[]{1,2,3,4,5}).indexOf(5); It will work.
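A minimal, self-contained demonstration of both behaviours (the class and method names here are ours, purely for illustration): `Arrays.asList(T...)` is a varargs method, and an `int[]` cannot be widened to `Integer[]`, so the whole array becomes a single element of a `List<int[]>`.

```java
import java.util.Arrays;
import java.util.List;

public class IndexOfDemo {
    // The int[] is treated as ONE varargs element, giving a List<int[]> of size 1,
    // so indexOf(5) compares the boxed Integer 5 against the array object itself.
    static int indexOfWithPrimitives() {
        List<int[]> list = Arrays.asList(new int[]{1, 2, 3, 4, 5});
        return list.indexOf(5);
    }

    // With Integer[], each element becomes its own entry in a List<Integer>.
    static int indexOfWithWrappers() {
        List<Integer> list = Arrays.asList(new Integer[]{1, 2, 3, 4, 5});
        return list.indexOf(5);
    }

    public static void main(String[] args) {
        System.out.println(indexOfWithPrimitives()); // -1
        System.out.println(indexOfWithWrappers());   // 4
    }
}
```

Note that `Arrays.asList(new int[]{...}).size()` is 1, which makes the varargs pitfall easy to spot when debugging.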
Career A product of the youth academy of , he played for two and a half seasons with the reserve team before being released as a free agent. At the beginning of 2019, he signed a contract with , becoming one of the protagonists of the club's rise from the third division to the Belgian top flight. Statistics Club appearances and goals Statistics updated as of 4 February 2022. External links
Member of the starry suite—modular functions for iterable objects. [![npm](https://img.shields.io/npm/v/starry.concat.svg?style=flat-square)](https://www.npmjs.com/package/starry.concat) [![node](https://img.shields.io/node/v/starry.concat.svg?style=flat-square)](https://nodejs.org/en/download/) ## Status Applies to the whole suite. [![Build Status](https://img.shields.io/travis/seangenabe/starry.svg?style=flat-square)](https://travis-ci.org/seangenabe/starry) [![Coverage Status](https://img.shields.io/coveralls/seangenabe/starry.svg?style=flat-square)](https://coveralls.io/github/seangenabe/starry) ## Usage ```typescript function concat<T = any>( ...iterables: Iterable<T>[] ): Iterable<T> ``` Returns an iterable that returns the elements of each iterable passed. Parameters: * ...iterables: `Iterable<T>[]` Returns: `Iterable<T>`
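A usage sketch. The generator below is an illustrative re-implementation of the documented semantics, not the package source:

```typescript
// Lazily yield the elements of each iterable, in order — the behaviour
// the signature above describes.
function* concat<T = any>(...iterables: Iterable<T>[]): Iterable<T> {
  for (const iterable of iterables) {
    yield* iterable;
  }
}

const merged = [...concat([1, 2], [3], new Set([4, 5]))];
console.log(merged); // [1, 2, 3, 4, 5]
```

Because the result is a lazy iterable rather than an array, no elements are copied until you actually iterate (e.g. with spread or `for...of`).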
Roos Glacier () is a steep glacier that drains the northwest slopes of Mount Murphy on Walgreen Coast, Marie Byrd Land. It was named by the Advisory Committee on Antarctic Names (US-ACAN) after S. Edward Roos, oceanographer with the Byrd Antarctic Expeditions of 1928–30 and 1933–35. Buettner Peak is a sharp peak rising midway along the north wall of Roos Glacier. References Glaciers of Marie Byrd Land
Dec. 30, 2018, 12:24 PM PT - Mohamed Sanu had seven receptions for 90 yards Sunday in the Atlanta Falcons' 34-32 win over the Tampa Bay Buccaneers. He added one rush for two yards.

Dec. 23, 2018, 12:36 PM PT - Mohamed Sanu had five receptions for 81 yards and one touchdown Sunday in the Atlanta Falcons' 24-10 win over the Carolina Panthers. He added two rushes for 29 yards.

Dec. 16, 2018, 12:37 PM PT - Mohamed Sanu had three receptions for 30 yards Sunday in the Atlanta Falcons' 40-14 win over the Arizona Cardinals. He added one rush for 11 yards.

Dec. 9, 2018, 1:45 PM PT - Mohamed Sanu had six receptions for 54 yards Sunday in the Atlanta Falcons' 34-20 loss to the Green Bay Packers.

Dec. 2, 2018, 1:00 PM PT - Mohamed Sanu had three receptions for 37 yards Sunday in the Atlanta Falcons' 26-16 loss to the Baltimore Ravens.

Nov. 22, 2018, 7:34 PM PT - Mohamed Sanu had four receptions for 74 yards Thursday in the Atlanta Falcons' 31-17 loss to the New Orleans Saints. He added one rush for three yards.

Nov. 18, 2018, 12:20 PM PT - Mohamed Sanu had four receptions for 56 yards Sunday in the Atlanta Falcons' 22-19 loss to the Dallas Cowboys. He added one rush for three yards.

Nov. 15, 2018, 12:54 PM PT - Sanu returned to practice Thursday after sitting out Wednesday's session with a hip injury. He's not on the Falcons' final injury report for Sunday's game against Dallas. Analysis: Sanu's injury was never believed to be a concern, but the slot receiver has been quiet lately with 12 catches totaling 113 yards and no touchdowns over his last three games.

Nov. 11, 2018, 1:16 PM PT - Mohamed Sanu had six receptions for 47 yards Sunday in the Atlanta Falcons' 28-16 loss to the Cleveland Browns. He also fumbled once.

Nov. 4, 2018, 12:43 PM PT - Mohamed Sanu had four receptions for 45 yards Sunday in the Atlanta Falcons' 38-14 win over the Washington Redskins.

Nov. 1, 2018, 12:59 PM PT - Sanu (hip) is active for Week 9 at Washington. Analysis: Sanu has had just two catches each of the last two weeks as his fantasy value is declining.

Oct. 22, 2018, 7:46 PM PT - Mohamed Sanu had two receptions for 21 yards Monday in the Atlanta Falcons' 23-20 win over the New York Giants.

Oct. 14, 2018, 11:33 AM PT - Sanu, who came out of Atlanta's Week 6 game against the Bucs due to a hip injury, returned to practice on Saturday. Analysis: He's been removed from the injury report.

Oct. 7, 2018, 12:49 PM PT - Mohamed Sanu had four receptions for 73 yards and one touchdown Sunday in the Atlanta Falcons' 41-17 loss to the Pittsburgh Steelers.

Sep. 30, 2018, 1:25 PM PT - Mohamed Sanu had six receptions for 111 yards Sunday in the Atlanta Falcons' 37-36 loss to the Cincinnati Bengals.

Sep. 23, 2018, 1:21 PM PT - Mohamed Sanu had four receptions for 36 yards and one touchdown Sunday in the Atlanta Falcons' 43-37 loss to the New Orleans Saints.

Sep. 16, 2018, 12:31 PM PT - Mohamed Sanu had two receptions for 19 yards Sunday in the Atlanta Falcons' 31-24 win over the Carolina Panthers. He added one rush for -4 yards.

Sep. 6, 2018, 8:43 PM PT - Mohamed Sanu had four receptions for 18 yards Thursday in the Atlanta Falcons' 18-12 loss to the Philadelphia Eagles.
{ "redpajama_set_name": "RedPajamaC4" }
7,249
\section{Introduction} \textbf{Motivation}. According to the recent Pew Social Media Usage report \cite{pew}, 52\% of online users now use two or more online social networking sites (OSNs). As such, users today may find themselves engaging friends across a number of OSNs. For example, they may ``like'' their friends' posts on Facebook, retweet their friends' tweets on Twitter, and share photos on Instagram. Participation in multiple OSNs implies that users have to stretch and spread their already limited time and attention over the networks, which results in new dynamics in the maintenance of friendships. For instance, a user may choose to connect to the same group of friends in multiple OSNs for ease of friendship maintenance, or conversely a user may partition and maintain different groups of friends in different OSNs while keeping only a smaller group of close friends overlapped in multiple OSNs. This similarity of a user's friendships in multiple OSNs also has an impact on how evenly the user's friendships are distributed across the networks. For example, a user who maintains high similarity of friendship in multiple OSNs may or may not choose to partition and distribute his friends evenly across multiple OSNs. Our goal in this paper is to investigate how users maintain friendships in multiple OSNs. Specifically, we study the similarity of users' friendships and the evenness of users' friendship distribution in multiple OSNs. The study of users' friendship maintenance behavior may provide new insights into other user behavior studies in multiple OSNs. Lim et al. conducted an empirical study on users' information sharing behavior in six OSNs and found users exhibited varied information sharing behaviors on different OSNs \cite{lim2015}. They postulated that this was due to differences in how users use the different OSNs.
From the friendship maintenance perspective, a possible explanation could be that the users were varying their sharing of information to cater for the different groups of ``audience'' (i.e., friends) in different OSNs. Thus, research on the friendship maintenance behavior of users can potentially provide new insights into other user behaviors in these OSNs. The study of friendship in multiple OSNs has real-world applications. In the second part of our study, we extend our empirical research on users' friendship maintenance in multiple OSNs and propose friendship maintenance related features to predict missing links (i.e., friendships) in multiple OSNs. There have been a few recent link prediction studies on \emph{multidimensional networks}, which refer to networks with multiple types of links between nodes. Researchers applied neighborhood features such as Common Neighbors and Adamic-Adar on one dimension of a network to predict users' links in another dimension within the same network \cite{rossetti}. However, it is important to point out that there are differences between multidimensional networks and multiple OSNs. For example, the users need to be matched across different networks in multiple OSNs, while user account matching is not required in multidimensional networks. Also, for multiple OSNs, user behaviors in one network are only observed by neighbors in the same network but not by the same user's neighbors in another network, while in multidimensional networks, user behaviors are observed by all neighbors in the multidimensional network. As such, the link prediction in our study is different from the previous link prediction studies on multidimensional networks. \textbf{Research Objectives and Contributions}. This research is conducted on a large real-world dataset consisting of about 100,000 users on both Twitter and Instagram with tens of millions of online friends. Our research in this paper is divided into two main parts addressing different research questions.
In the first part, the research question is how users maintain friendships across networks. We focus on friendship maintenance measures that allow us to quantify \emph{friendship overlapping} and \emph{friendship distribution}. In the second part of our study, we address the research question of how one conducts friendship prediction in the context of multiple social networks. In particular, we would like to explore using the friendship maintenance measures as features to improve the \emph{friendship prediction} accuracy. As shown in Figure~\ref{fig:framework}, our proposed research framework begins with data crawling from both Twitter and Instagram to assemble a dataset of base users. For this set of users, we perform \emph{cross-network friend matching} to identify the Twitter and Instagram friends of the same users. We then propose several measures for their friendship maintenance behavior. Finally, we use our findings to design both unsupervised and supervised friend prediction methods. \begin{figure}[h] \centering \includegraphics[scale = 0.23]{framework.pdf} \captionof{figure}{Research Framework} \label{fig:framework} \end{figure} This work improves the state-of-the-art of social network analysis and link prediction in multiple OSNs. We establish a novel research framework to compare friends in two OSNs. Included in the framework are the measures for evenness of friendship distribution and similarity of friendship across multiple OSNs, as well as the prediction of links in the multiple-OSN setting. The interesting findings derived from our work include: \begin{itemize} \item Most users prefer to maintain roughly the same number of friends in Twitter and Instagram, i.e., evenly distributed friendships across multiple OSNs. \item Most users prefer to maintain different friendships in Twitter and Instagram, while keeping only a small clique of common friends across the two OSNs, i.e., low similarity in friendship across multiple OSNs.
\item Unsupervised methods can yield good accuracy in predicting friendship in one network using neighborhood properties of another network. In particular, the Jaccard Coefficient of two users computed in the Instagram network can quite accurately predict the link between the two users in Twitter (average F1 score of 0.882). \item Supervised methods with friendship maintenance measures as features can further improve the accuracy of friendship prediction across multiple OSNs (average F1 score of 0.93). \end{itemize} \textbf{Paper Outline}. The rest of the paper is organized as follows. We first describe the construction of our Twitter and Instagram datasets. Next, we propose measures that quantify the evenness of users' friendship distribution and the similarity of friendship in multiple OSNs. We then apply the proposed measures to analyse the users' friendship maintenance in the Twitter and Instagram networks. Subsequently, we describe the friendship link prediction experiments conducted using friendship features and present the results. Finally, we review research related to this study and conclude this work with possible future research. \section{Base User Dataset} In order to study user friendships in multiple OSNs, we first need to construct a dataset of users who have accounts with both Twitter and Instagram, a popular microblogging site and a photo-sharing social media site respectively. As the two selected OSNs serve different purposes, it is unlikely that the two OSNs cannibalize each other's users. Furthermore, the two OSNs are highly complementary and popular among teen users \cite{pew}. We therefore expect that a user on both Twitter and Instagram would generally have an interest in including the same friends in both networks. We begin by gathering a set of 100,000 Twitter users who have declared their Instagram accounts in their Twitter biography description from \textit{Followerwonk}\footnote{https://moz.com/followerwonk/}, a Twitter analytic platform.
Subsequently, the Twitter and Instagram followers and followees of these 100,000 users were crawled using the Twitter and Instagram APIs. However, as some of these Twitter and Instagram accounts have set their privacy settings to ``private'', we are not able to obtain all the followers and followees of the users. We are also only interested in analyzing the friendships of average OSN users; thus we further filter away celebrity or popular users who have more than 2,000 followers. In the end, we manage to obtain 97,978 users who have declared both their Twitter and Instagram accounts, and these users constitute the \textbf{base user set}. \begin{figure}[h] \centering \includegraphics[scale = 0.18]{degree.pdf} \caption{Twitter and Instagram Friendship Distribution} \label{fig:degree} \end{figure} Next, we retrieve the Twitter and Instagram friends of the users in the \emph{base user set}. As Twitter and Instagram only capture follower and followee relationships, we define a \emph{friend} of a user to be someone who follows and is followed by the user \cite{xie,java}. An estimated 17 million Twitter friends and 24 million Instagram friends are finally obtained. Figure~\ref{fig:degree} shows the Twitter and Instagram friendship degree distributions. The average Twitter and Instagram friendship degrees for these users are 171 and 245 respectively. \section{User Friend Matching} \label{sec:friendmatching} Before we can study how the users maintain friendships in their Twitter and Instagram accounts, we need to match the friend accounts in the two OSNs. Unfortunately, very few of the friends have declared both their Twitter and Instagram accounts. Hence, in this section, we present a few simple but effective ways to match users between OSNs by adapting the methods proposed by Zafarani and Liu \cite{zafarani:connect} and Vosecky et al. \cite{vosecky2009user}.
We match the Twitter and Instagram friends of our base user set using three levels of user matching methods: \begin{enumerate} \item \textbf{Self-Report Matching}. This method matches the Twitter and Instagram friends of the base user set if these friends declare both their Twitter and Instagram accounts. \item \textbf{Username Matching}. Past research has reported that 59\% of users prefer to use the same username repeatedly on different OSNs for easy recall \cite{zafarani:connect}. Instead of matching all our Twitter and Instagram users by their usernames, we match Twitter users with Instagram users by username only when they are friends of the same user in our base set. This minimizes the possibility of two users being matched simply because they adopt a more popular username. \item \textbf{Username Bigram Matching}. Users may tweak their usernames slightly across different OSNs due to the unavailability of their usual usernames. To cater for such situations, we introduce an approximate method which matches the Twitter and Instagram friends of the base users using username bigrams. Each username is represented by a vector of bigram weights, each of which is the number of occurrences of the bigram in the username. Cosine similarity is then applied on two username bigram vectors to determine if the two usernames are sufficiently similar. If the cosine similarity score exceeds a threshold, the two usernames are considered matched. We adopt a threshold value of 0.63, which is derived as the median cosine similarity value of the Twitter and Instagram username bigrams of the base users.
\end{enumerate} \begin{table} [h] \caption{Number of users and friends matched using different methods} \label{tab:match} \centering \scriptsize \begin{tabular}{|l|c|c|c|c|} \hline Methods & Self- & Username & Username & Total \\ & Report & & Bigram & \\ \hline \hline \# Users Matched & 17,236 & 1,473,217 & 1,546,645 & 3,037,098 \\ \hline \# Friends Matched & 22,234 & 1,735,719 & 1,798,457 & 3,556,410 \\ \hline \end{tabular} \normalsize \end{table} Table~\ref{tab:match} shows the number of friends matched using the above three methods. As expected, the self-report method returns the smallest number of matched friends. A total of 22,234 friends were matched using this method, giving an average of $\frac{22,234}{97,978}=0.23$ matched friends per user. In other words, the vast majority of base users do not have their Twitter and Instagram friends matched using self-report. The username matching method, on the other hand, is able to match a total of 1,735,719 friends (in addition to those matched by self-report), or an additional 17.72 friends per user, representing $\frac{17.72}{171}=10.4\%$ and $\frac{17.72}{245}=7.2\%$ of all Twitter and Instagram friends of the base users respectively. Finally, the username bigram matching method returns yet another 1,798,457 matched friends, or 18.36 matched friends per user. This corresponds to 10.7\% and 7.5\% of all Twitter and Instagram friends respectively. Combining all methods, we are able to match 3,556,410 friends, or 36.3 matched friends per user. Henceforth, we will use all these matched friends in the subsequent analysis. As there is no ground truth for validating the matched friends, we randomly inspected the Twitter and Instagram profiles of 100 pairs of friends matched using the username matching method and another 100 pairs matched using the username bigram matching method. We then looked at visual cues such as their profile photos to assess whether the matching methods are accurate.
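As an illustration, the bigram-based username matching of method 3 above can be sketched as follows. This is a minimal sketch under our own assumptions (the function names are ours, and the actual implementation may differ, e.g., in case handling):

```python
from collections import Counter
from math import sqrt

def bigrams(username):
    """Count the overlapping character bigrams of a username."""
    u = username.lower()
    return Counter(u[i:i + 2] for i in range(len(u) - 1))

def bigram_cosine(name_a, name_b):
    """Cosine similarity between the bigram weight vectors of two usernames."""
    a, b = bigrams(name_a), bigrams(name_b)
    dot = sum(a[g] * b[g] for g in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def usernames_match(name_a, name_b, threshold=0.63):
    """Declare a match when the cosine similarity exceeds the 0.63 threshold."""
    return bigram_cosine(name_a, name_b) >= threshold
```

For example, `usernames_match("john_doe", "johndoe")` returns True since the two names share most of their bigrams, while unrelated usernames fall well below the threshold.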
Among the inspected 100 pairs of matched friends using the exact username matching method, we observed that 77 of the pairs have (i) matching profile photos for their Twitter and Instagram accounts, or (ii) their Twitter profile photos matched with some of the photos posted by the Instagram accounts. The majority of the non-matched friend profiles are due to the users not setting a profile picture for their Twitter accounts; thus the actual number of matched pairs could be higher than 77. For the username bigram method, 68 of the pairs meet the matching profile photos criteria. This suggests that the user matching methods were able to match the user friends with good accuracy. \section{Friendship Maintenance \\ Measurement} \label{sec:maint} Before we study how users maintain friendships in Twitter and Instagram, we first propose two measures, \emph{friendship similarity} and \emph{friendship evenness}, to quantify the similarity of a user's friendships and the evenness of a user's friendship distribution in multiple OSNs respectively. \subsection{Friendship Similarity} To ease friendship maintenance, users may choose to overlap their friendships in multiple OSNs. We adapt the \textit{D-Correlation} approach by Berlingerio et al. \cite{berlingerio} to measure this overlap or similarity of friendship across multiple OSNs. D-Correlation was originally designed for multi-dimensional networks, where it measures how redundant two dimensions are for the existence of a node or an edge. We use $\mathbb{N}$ to denote a set of OSNs $\{N_1, N_2, \cdots, N_n\}$. We denote the set of friends of a user $x$ in an OSN $N_i$ by $FR(x,N_i)$. We define the friendship similarity of user $x$ among these OSNs, $F_{sim}(x,\mathbb{N})$, to be the ratio of common friends of $x$ across all OSNs, as shown in Equation~\ref{sim_eqn}.
\begin{equation} \label{sim_eqn} F_{sim}(x,\mathbb{N}) = \frac{|\cap_{N_i \in \mathbb{N}} FR(x,N_i)|}{|\cup_{N_i \in \mathbb{N}} FR(x,N_i)|} \end{equation} \textit{\textbf{Example.}} Figure \ref{fig:example} illustrates an example of a user distributing his friends in two OSNs, \textit{A} and \textit{B}. The user $x$ has a total of 25 distinct friends: 10 friends in \textit{A}, 20 friends in \textit{B}, with 5 friends overlapping the two OSNs. Thus, user \textit{x}'s friendship similarity in OSNs \textit{A} and \textit{B} is computed as $F_{sim}(x,\mathbb{N}) = 5/25 = 0.2$. \begin{figure}[h] \centering \includegraphics[scale = 0.11]{sim.pdf} \captionof{figure}{Example of user's friendship in two OSNs} \label{fig:example} \end{figure} \textit{\textbf{Upper Bound of Friendship Similarity.}} The maximum friendship similarity value is only achieved when $x$ has the same friends in all OSNs. The maximum value of a user's friendship similarity in multiple OSNs is equal to the ratio between the minimum and maximum numbers of friends added among the OSNs that the user participates in (as shown in Equation~\ref{maxsim_eqn}). Referring to the earlier example in Figure~\ref{fig:example}, the maximum possible $F_{sim}$ value for user \textit{x} would be $10/20=0.5$, i.e., when user \textit{x} adds all his friends in OSN \textit{A} to OSN \textit{B} as well. \begin{equation} \label{maxsim_eqn} max(F_{sim}(x,\mathbb{N})) \leq \frac{\min\limits_{N_i \in \mathbb{N}}| FR(x,N_i)|}{\max\limits_{N_i \in \mathbb{N}} |FR(x,N_i)|} \end{equation} \subsection{Friendship Evenness} Suppose a user \textit{x} divides all his friends among all the $n$ OSNs without overlap; we then expect $\frac{1}{n}$ of his friends in each OSN.
Suppose instead that there is a non-zero overlap among his friends across all the OSNs but negligible overlap between subsets of OSNs, i.e., $F_{sim}(x,\mathbb{N})>0$; the \textit{expected ratio of friends $x$ adds to each OSN} is then estimated by $\frac{1}{n}+\frac{(n-1)\cdot F_{sim}(x,\mathbb{N})}{n}$ as shown in Equation~\ref{equal_eqn}. \begin{equation} \label{equal_eqn} F_{equal}(x,\mathbb{N}) = \frac{1+(n-1)\cdot F_{sim}(x,\mathbb{N})}{n} \end{equation} \noindent \textbf{Proof.} Suppose $x$ has $N$ unique friends in $\mathbb{N}$. Assume that $x$ distributes her friends evenly across the OSNs. Let $N_u$ be the number of friends unique to each OSN and let $F$ denote $F_{sim}(x,\mathbb{N})$. We then expect $x$ to have $N \cdot F$ common friends across the OSNs. In other words, $x$ has $N_u + F\cdot N$ friends in each OSN. As $N = n \cdot N_u + F \cdot N$, we obtain $N = \frac{n\cdot N_u}{1-F}$. Each OSN is then expected to have $N_u + F \cdot \frac{n\cdot N_u}{1-F}$ friends. The expected ratio of friends in each OSN is therefore \begin{equation} \frac{N_u + F \cdot N}{N} \\ = \frac{N_u + F \cdot \frac{n\cdot N_u}{1-F}}{\frac{n\cdot N_u}{1-F}} \\ = \frac{1+(n-1)\cdot F}{n} \end{equation} When $F=0$, the above ratio degenerates to $\frac{1}{n}$, implying that all friends of $x$ are equally divided among the OSNs exclusively. When $F=1$, the ratio also becomes $1$, implying that every OSN covers all friends of $x$. When there are only two OSNs, i.e., $n=2$, the expected ratio of friends in each OSN is $\frac{1+F}{2}$. However, we would expect that in many circumstances, unevenness exists among the friend counts of the OSNs. For example, a user may maintain a larger group of friends in an OSN $N_i$ while keeping a smaller clique in another network. We thus define the \textit{ratio of friends of a user $x$ in OSN $N_i$ relative to all friends} in Equation~\ref{in_eqn}.
\begin{equation} \label{in_eqn} F_{in}(x,N_i,\mathbb{N}) = \frac{|FR(x,N_i)|}{|\cup_{N_j \in \mathbb{N}} FR(x,N_j)|} \end{equation} Finally, we define the \textit{evenness of a user's friendship distribution} in multiple OSNs as one minus the sum of the absolute differences between the ratio of friends added in each OSN and the expected ratio of friends a user adds to each OSN when the friends are evenly distributed, as shown in Equation~\ref{even_eqn}. \begin{equation} \label{even_eqn} F_{even}(x,\mathbb{N}) = 1 - \sum_{i=1}^{n}\Bigl|F_{in}(x,N_i,\mathbb{N})-F_{equal}(x,\mathbb{N})\Bigr| \end{equation} \textit{\textbf{Example.}} Referring to our earlier example in Figure~\ref{fig:example}, $F_{in}(x,A,\{A,B\})$ is $10/25 = 0.4$ and $F_{in}(x,B,\{A,B\})$ is $20/25 = 0.8$. User \textit{x}'s evenness of friendship distribution in OSNs \textit{A} and \textit{B} is $F_{even}(x,\{A,B\}) = 1 - (|0.4-\frac{1+0.2}{2}|+|0.8- \frac{1+0.2}{2}|) = 0.6$. Note that the $F_{even}(x,\{A,B\})$ measure is also in the range of 0 to 1. If a user adds an equal number of friends in the two OSNs, with any number of overlapping friends between the two OSNs, the user's friendship evenness value will be 1. The friendship evenness value will be 0 if the user has no friends in one of the two networks. \textit{\textbf{Relationship between Measures.}} There is also an interesting relationship between the upper bound of friendship similarity and friendship evenness. Based on Equation~\ref{maxsim_eqn}, in order to achieve a maximum friendship similarity value of 1 (i.e., $max(F_{sim}(x,\mathbb{N}))=1$), the minimum and maximum numbers of friends in all the OSNs must be identical. That is, user $x$ distributes friendships evenly among all the OSNs ($F_{even}(x,\mathbb{N})=1$). Thus, the more evenly distributed the friends among the OSNs, the higher the $max(F_{sim}(x,\mathbb{N}))$.
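Both measures can be computed directly from the per-OSN friend sets. The following is a minimal sketch (the function names are ours) that reproduces the worked example above:

```python
def friendship_similarity(friend_sets):
    """F_sim: ratio of common friends across all OSNs."""
    common = set.intersection(*friend_sets)
    union = set.union(*friend_sets)
    return len(common) / len(union)

def friendship_evenness(friend_sets):
    """F_even: one minus the total deviation from the expected even split."""
    n = len(friend_sets)
    union_size = len(set.union(*friend_sets))
    f_equal = (1 + (n - 1) * friendship_similarity(friend_sets)) / n  # F_equal
    f_in = (len(s) / union_size for s in friend_sets)                 # F_in per OSN
    return 1 - sum(abs(r - f_equal) for r in f_in)

# Worked example: 10 friends in A, 20 in B, 5 of them in common.
friends_A = set(range(10))      # friends 0..9
friends_B = set(range(5, 25))   # friends 5..24, overlapping A on 5..9
print(friendship_similarity([friends_A, friends_B]))  # 0.2
print(friendship_evenness([friends_A, friends_B]))    # ~0.6
```

The measures generalize to any number of OSNs by passing more friend sets in the list.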
\section{Empirical Results} In this section, we apply the friendship similarity and evenness measures to analyze how the 97,978 \textit{base users} maintain their friendships in Twitter and Instagram. \subsection{Distribution Analysis} Figure~\ref{fig:fsimdist} shows the distribution of friendship similarity. The average friendship similarity is 0.104. The 1st, 2nd and 3rd quartile friendship similarity values are 0.046, 0.09 and 0.148 respectively. This left-leaning bell-shaped distribution suggests that there are very few users who maintain similar friendships in their Twitter and Instagram accounts. Interestingly, this is contrary to our initial hypothesis that users would prefer to have a high friendship similarity for ease of maintenance. There could be a few reasons for the low average friendship similarity; for instance, the users may have maintained low evenness for their friendships in the two OSNs, thus limiting the maximum possible friendship similarity value for the users, or the users may simply prefer to maintain different groups of friends in different OSNs. \begin{figure}[h] \centering \includegraphics[scale = 0.2]{fsimdist.pdf} \caption{Friendship Similarity Distribution} \label{fig:fsimdist} \end{figure} \begin{figure}[h] \centering \includegraphics[scale = 0.2]{fevendist.pdf} \caption{Friendship Evenness Distribution} \label{fig:fevendist} \end{figure} Figure~\ref{fig:fevendist} depicts the distribution of friendship evenness of the base users. The average friendship evenness is 0.648, a value much higher than the average friendship similarity. The 1st, 2nd and 3rd quartile evenness values are 0.534, 0.705 and 0.856 respectively. The distribution is right-leaning, suggesting that most users prefer friendship counts that are not overly uneven across OSNs. Also, the right-leaning friendship evenness distribution further strengthens our earlier finding that users tend to maintain different groups of friends in different OSNs.
There could be many reasons for users' preference to maintain different friendships in different OSNs. One possible reason, as suggested by Lim et al. \cite{lim2015}, is that users use different OSNs for different purposes or interests, which indirectly motivates the users to connect to different friends in different OSNs. To explain the users' friendship maintenance behavior, we will go beyond the structural properties of multiple OSNs and investigate the differences in user interests across different OSNs in our future work. \subsection{Relationship Between Measures} We also examine the relationship between friendship similarity and friendship evenness of users in Figure~\ref{fig:correlation}, where each point in the figure represents a user with his friendship similarity and evenness values. \begin{figure}[h] \centering \includegraphics[scale = 0.2]{correlation.pdf} \caption{Friendship Similarity and Friendship Evenness} \label{fig:correlation} \end{figure} \begin{figure}[h] \centering \includegraphics[scale = 0.2]{topbottom.pdf} \caption{Friendship Similarity of Top and Bottom 10\% Friendship Evenness Users} \label{fig:topbottom} \end{figure} Figure~\ref{fig:correlation} shows that as a user's friendship evenness increases, friendship similarity seems to increase its range of values. This supports what we highlighted in our earlier discussion, that friendship similarity is limited by friendship evenness. We further investigate this by showing the friendship similarity distributions of users with the top and bottom 10\% friendship evenness in Figure~\ref{fig:topbottom}. The top 10\% friendship evenness users have a friendship similarity distribution similar to the overall friendship similarity distribution (as shown in Figure~\ref{fig:fsimdist}), while the bottom 10\% friendship evenness users have a more left-leaning friendship similarity distribution.
The top 10\% friendship evenness users also have an average friendship similarity of 0.124, slightly higher than the 0.104 friendship similarity of an average user, while the bottom 10\% friendship evenness users have an average friendship similarity of 0.055, significantly lower than the average user. However, it is observed that there are still quite a number of users who have high friendship evenness but low friendship similarity. To investigate the dependency between friendship evenness and similarity, we performed a Chi-squared Test of Independence on the two measures. The test result shows a p-value < 2.2e-16, which is less than the 0.05 significance level; therefore we reject the null hypothesis that friendship similarity is independent of friendship evenness. The two measures also show a weak positive correlation of 0.277. \section{Friendship Link Prediction} \label{sec:linkprediction} We now examine how link prediction in multiple social networks can leverage the links across networks. Link prediction can come in two forms, namely, prediction of future links and prediction of missing links \cite{nowell,goldberg,taskar}. In our research, we focus on the latter, which is useful in applications such as friend recommendations. As this is the first attempt to conduct link prediction for multiple social networks, we also want to answer the following research questions: \begin{itemize} \item \emph{Can we predict the link between two users in one network using the structural information of the two users in another network?} If two users have many common friends in a single OSN, it is likely that they are friends in that OSN. Intuitively, the existence of a link between the two users in one OSN should also increase the likelihood of a link between the users in another OSN.
\item \emph{Can the friendship maintenance features improve the accuracy of link prediction in multiple online social networks?} Now that we have the friendship similarity and evenness measures, we would like to know if they can make good features for link prediction. \end{itemize} \subsection{Task Definitions} There are two prediction tasks to be performed: (a) \textbf{Twitter Link Prediction (TWLP)}, where we predict if two users are friends in Twitter; and (b) \textbf{Instagram Link Prediction (INLP)}, where we predict if two users are friends in Instagram. We now describe the setup of the training and test data in our link prediction tasks. Let $V_{Both}$ be the 97,978 base users who exist in both Twitter and Instagram. For our base users in Twitter, we define the set of positive instances to be $(u,v)$ pairs such that both $u$ and $v$ are in $V_{Both}$ and $(u,v)$ is an observed link in Twitter. We denote this set of positive instances by $E_{pos}(TWT)$. The set of negative instances, denoted by $E_{neg}(TWT)$, is the set of $(u,v)$ pairs with both $u$ and $v$ from $V_{Both}$ but which are not friends in Twitter. The sets of positive and negative instances for our base users in Instagram are defined in a similar manner. With the above definitions, we derive 17,651 and 26,241 positive instances for base users in Twitter and Instagram respectively, i.e., $|E_{pos}(TWT)|=17,651$ and $|E_{pos}(INT)|=26,241$. The numbers look small compared with the size of the base user set, largely because the base users, which are selected based on having both Twitter and Instagram accounts, do not come from the same user community. Hence, only very few of them know each other on Twitter or Instagram. In other words, there are many more negative instances, making the link prediction tasks highly imbalanced. Furthermore, there is additional overhead in crawling additional data (e.g., friends of neighbors) for each positive and negative instance in the prediction task.
In order to keep the number of instances manageable for the prediction methods, we randomly select 5,000 positive instances and 25,000 negative instances for each run in our prediction tasks. The negative instances are kept to five times that of positive instances. To make the prediction harder, we also check that at least 5,000 negative instances have at least one common neighbor in Twitter or Instagram. \textbf{Unsupervised Prediction Task}. For this task, we rank the 5,000 positive and 25,000 negative instances by some ranking measure. We expect the top ranked instances to be positive if the prediction method works accurately. In the ideal case, all positive instances are ranked above all negative ones. \textbf{Supervised Prediction Task}. For this task, we construct training and test datasets. Each dataset consists of 5,000 positive instances and 25,000 negative instances which are randomly selected. We also check that the instances selected for the test dataset do not exist in the training dataset. We then train a classifier using the training dataset and apply the trained classifier on the test dataset. This experiment is repeated three times and the results reported are the average of the three runs. \subsection{Unsupervised Link Prediction Methods} We propose several unsupervised link prediction methods using different \emph{neighborhood features} as ranking measures \cite{newman, adamic}. These measures involve using the common neighbors between a pair of users $u$ and $v$ to derive some affinity score for ranking the user pair. These measures are also based on the triadic closure principle in social network analysis \cite{simmel}. In this work, the following measures are used: \begin{itemize} \item \textbf{Common Neighbors} (\textbf{CN}): This measure counts the number of common neighbors between $u$ and $v$. \item \textbf{Jaccard Coefficient} (\textbf{JC}): This measure returns the fraction of common neighbors among the union of the neighbors of $u$ and $v$.
\item \textbf{Adamic-Adar} (\textbf{AA}): This measure considers the popularity of common neighbors. The less popular common neighbors are given larger weights as they are added together to derive an affinity score. \end{itemize} The above measures are chosen as they are commonly used in link prediction experiments. More formal definitions of them are given at the top of Table~\ref{tab:features}. In Table~\ref{tab:features}, $FR(u,T)$ and $FR(u,I)$ denote the friends of $u$ in Twitter and Instagram respectively. When applied to score each of the 5,000 positive and 25,000 negative instances, the measures are computed using all observable link instances in our dataset, i.e., all links excluding those used as positive instances. There are also recent studies that applied these neighborhood measures to multidimensional networks, where links between users in one dimension are ranked using the neighborhood features of users in another dimension of the same network \cite{rossetti}. Unlike these existing link prediction works on multidimensional networks, we are now using these neighborhood measures for unsupervised link prediction between users in multiple social networks, where users may not have accounts on both networks and users having accounts on both networks may not have their accounts matched. \textbf{Performance Evaluation.} We use \emph{F1 at Top K} to evaluate each unsupervised link prediction method. We first rank all given 30,000 instances by the method's measure in decreasing order.
The \emph{Precision} and \emph{Recall at Top K} are computed by: \[ Prec@K = \frac{\text{\# correct predictions among top K ranked instances}}{K} \] \[ Rec@K = \frac{\text{\# correct predictions among top K ranked instances}}{5000} \] \[ F1@K = \frac{2 \cdot Prec@K \cdot Rec@K}{Prec@K + Rec@K} \] \begin{figure}[h] \centering \includegraphics[scale = 0.22]{f1topk2.pdf} \caption{F1 scores @ Top K for TWLP and INLP} \label{fig:f1topk} \end{figure} \textbf{Experiment Results.} Figure~\ref{fig:f1topk} shows F1@K of unsupervised link prediction methods in TWLP and INLP tasks. We introduce a baseline method which returns K randomly selected instances as predicted links. We vary $K$ from 1,000 to 10,000 to examine the performance of each method. As shown in the figure, all the unsupervised methods perform significantly (3 to 4 times) better than the random baseline in both TWLP and INLP tasks. While the baseline method's F1@K increases gradually with larger K values due to increasing recall, most of the other methods improve their F1@K only up to K=4,000 or K=5,000, beyond which their F1@K drops. This is because these methods rank positive instances more highly than negative instances. Interestingly, the figure also shows that the prediction methods using Instagram links outperform those using Twitter links even when the prediction task involves Twitter link prediction, i.e., TWLP. In particular, the method using Jaccard Coefficient on Instagram links (i.e., $\textbf{JC}_I$) outperforms the rest for almost all K values, achieving the highest F1 scores of 0.882 and 0.838 for TWLP and INLP tasks respectively for the top 5,000 ranked results. A possible explanation of the above findings could be that users have higher friendship degrees in Instagram than in Twitter. Two users who are friends in Twitter are likely to have common friends in Instagram.
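For concreteness, the F1@K evaluation defined above can be sketched as follows (illustrative Python of our own; the input is the list of true labels of all 30,000 instances, sorted by the measure's score in decreasing order):

```python
def f1_at_k(ranked_labels, k, num_positive):
    # ranked_labels: True/False labels of the instances, sorted by the
    # ranking measure in decreasing order; num_positive is 5,000 here
    correct = sum(ranked_labels[:k])      # correct predictions in top K
    prec = correct / k
    rec = correct / num_positive
    return 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
```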
Even though the Twitter neighborhood measures performed worse than the Instagram neighborhood measures, they still yielded good results (up to 0.689 for F1@5K) in predicting links between users in Instagram. This suggests that predicting links in one OSN using the neighborhood information of another OSN can yield very respectable accuracy. \subsection{Supervised Link Prediction Methods} For supervised link prediction, we use a Support Vector Machine (SVM) with a linear kernel as the binary classifier, trained with each instance represented as a feature vector. SVM is chosen because of its relatively good results in other link prediction tasks. We also consider three types of features as shown in Table~\ref{tab:features}. The \textbf{neighborhood features} are the scores from the different measures used in the unsupervised link prediction methods. By including the neighborhood features, the supervised methods can hopefully at least match the good accuracy of the unsupervised methods. We introduce a binary \textbf{cross network feature}, \textbf{CL}, which returns 1 if the users of the instance are friends in the other network, and 0 otherwise. For example, in the case of the TWLP task, a $(u,v)$ instance is assigned a CL feature value of 1 if and only if $u$ and $v$ are friends in Instagram. This feature is included because having a friendship in another OSN should increase the odds of the users having a friendship in the target OSN.
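A sketch of how such a feature vector might be assembled for a $(u,v)$ instance (illustrative Python of our own; the helper and feature names follow Table~\ref{tab:features} but are otherwise hypothetical, and AA and the friendship maintenance features are omitted for brevity):

```python
def pair_features(u, v, fr_T, fr_I, target='T'):
    # fr_T / fr_I map each user to a set of friends in Twitter / Instagram;
    # 'target' names the OSN whose links we predict, so the CL feature
    # looks at the *other* network.
    def cn(fr):
        return len(fr.get(u, set()) & fr.get(v, set()))
    def jc(fr):
        union = fr.get(u, set()) | fr.get(v, set())
        return cn(fr) / len(union) if union else 0.0
    other = fr_I if target == 'T' else fr_T
    return {
        'CN_T': cn(fr_T), 'JC_T': jc(fr_T),
        'CN_I': cn(fr_I), 'JC_I': jc(fr_I),
        'CL': 1 if v in other.get(u, set()) else 0,
    }
```

The resulting dictionaries can then be converted into numeric vectors and fed to any linear-kernel SVM implementation.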
\begin{table}[t] \centering \caption{Link Prediction Features} \label{tab:features} \begin{tabular}{|l|l|} \hline \textbf{Feature} & \textbf{Description} \\ \hline \hline \multicolumn{2}{|l|}{Neighborhood features} \\ \hline $\textbf{CN}_T$ & $|FR(u,T) \cap FR(v,T)|$ \\ $\textbf{JC}_T$ & $\frac{|FR(u,T) \cap FR(v,T)|}{|FR(u,T) \cup FR(v,T)|}$ \\ $\textbf{AA}_T$ & $\sum_{z \in FR(u,T) \cap FR(v,T)} \frac{1}{\log |FR(z,T)|}$ \\ $\textbf{CN}_I$ & $|FR(u,I) \cap FR(v,I)|$ \\ $\textbf{JC}_I$ & $\frac{|FR(u,I) \cap FR(v,I)|}{|FR(u,I) \cup FR(v,I)|}$ \\ $\textbf{AA}_I$ & $\sum_{z \in FR(u,I) \cap FR(v,I)} \frac{1}{\log |FR(z,I)|}$ \\ \hline \hline \multicolumn{2}{|l|}{Common Neighbor Friendship Maintenance features} \\ \hline $\textbf{HEHS}_T$ & $\frac{|\{z \in FR(u,T) \cap FR(v,T) | F_{sim}(z) \mbox{ is high}, F_{even}(z) \mbox{ is high}\}|}{|FR(u,T) \cup FR(v,T)|}$ \\ $\textbf{HELS}_T$ & $\frac{|\{z \in FR(u,T) \cap FR(v,T) | F_{sim}(z) \mbox{ is low}, F_{even}(z) \mbox{ is high}\}|}{|FR(u,T) \cup FR(v,T)|}$ \\ $\textbf{LEHS}_T$ & $\frac{|\{z \in FR(u,T) \cap FR(v,T) | F_{sim}(z) \mbox{ is high}, F_{even}(z) \mbox{ is low}\}|}{|FR(u,T) \cup FR(v,T)|}$ \\ $\textbf{LELS}_T$ & $\frac{|\{z \in FR(u,T) \cap FR(v,T) | F_{sim}(z) \mbox{ is low}, F_{even}(z) \mbox{ is low}\}|}{|FR(u,T) \cup FR(v,T)|}$ \\ $\textbf{HEHS}_I$ & $\frac{|\{z \in FR(u,I) \cap FR(v,I) | F_{sim}(z) \mbox{ is high}, F_{even}(z) \mbox{ is high}\}|}{|FR(u,I) \cup FR(v,I)|}$ \\ $\textbf{HELS}_I$ & $\frac{|\{z \in FR(u,I) \cap FR(v,I) | F_{sim}(z) \mbox{ is low}, F_{even}(z) \mbox{ is high}\}|}{|FR(u,I) \cup FR(v,I)|}$ \\ $\textbf{LEHS}_I$ & $\frac{|\{z \in FR(u,I) \cap FR(v,I) | F_{sim}(z) \mbox{ is high}, F_{even}(z) \mbox{ is low}\}|}{|FR(u,I) \cup FR(v,I)|}$ \\ $\textbf{LELS}_I$ & $\frac{|\{z \in FR(u,I) \cap FR(v,I) | F_{sim}(z) \mbox{ is low}, F_{even}(z) \mbox{ is low}\}|}{|FR(u,I) \cup FR(v,I)|}$ \\ \hline \hline \multicolumn{2}{|l|}{Cross Network features} \\ \hline \textbf{CL} & $ \begin{cases} 1 &
\mbox{if } (u,v) \mbox{ exists in another network}\\ 0 & \mbox{otherwise} \end{cases} $ \\ \hline \end{tabular} \end{table} Finally, we also include a group of features known as \textbf{common neighbor friendship maintenance features}. While the neighborhood features in one OSN yield reasonable or even good results in unsupervised link prediction in another OSN, the features may not work very well when the common neighbors demonstrate friendship maintenance behavior that prevents friendship inference across OSNs. For example, a common neighbor between users $u$ and $v$ in Instagram who maintains separate friends in Twitter and Instagram does not increase the likelihood of friendship between $u$ and $v$ in Twitter. The common neighbor friendship maintenance features are obtained by dividing all common neighbors who are present in both Twitter and Instagram into four categories, namely: (a) high friendship evenness and high friendship similarity; (b) low friendship evenness and high friendship similarity; (c) high friendship evenness and low friendship similarity; and (d) low friendship evenness and low friendship similarity. We say that a user has high (or low) friendship evenness if her friendship evenness is greater than (or not greater than) the average friendship evenness value. We define high or low friendship similarity in the same way. These common neighbor friendship maintenance features are shown in Table~\ref{tab:features}.
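Assuming per-user friendship similarity and evenness scores are available (computed as defined earlier in the paper), the four category fractions can be sketched as follows (illustrative Python of our own; function and variable names are hypothetical):

```python
def fm_category(sim, even, avg_sim, avg_even):
    # 'High' means strictly above the average value, as in the text
    return ('HE' if even > avg_even else 'LE') + \
           ('HS' if sim > avg_sim else 'LS')

def fm_features(common, union_size, sim, even, avg_sim, avg_even):
    # Fraction of common neighbors of (u, v) falling in each of the four
    # categories, normalised by |FR(u) U FR(v)| as in Table 2
    counts = {'HEHS': 0, 'HELS': 0, 'LEHS': 0, 'LELS': 0}
    for z in common:
        counts[fm_category(sim[z], even[z], avg_sim, avg_even)] += 1
    return {k: c / union_size for k, c in counts.items()}
```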
We use six different feature configurations in our supervised link prediction methods as follows: \begin{itemize} \item \textbf{NBO}: Neighborhood features only \item \textbf{NFM}: Common Neighbor Friendship Maintenance features only \item \textbf{NBOFM}: Neighborhood and Common Neighbor Friendship Maintenance features \item \textbf{NBCL}: Neighborhood and Cross Network features \item \textbf{NFMCL}: Common Neighbor Friendship Maintenance and Cross Network features \item \textbf{ALL}: All features \end{itemize} \textbf{Performance Evaluation.} We conduct three runs of TWLP and INLP experiments and report the average precision, recall and F1 score of each method. For each run, we use a sample of 5,000 user pairs with friendship and 25,000 user pairs without friendship as the positive and negative instances respectively for training an SVM classifier, and another sample of 5,000 user pairs with friendship and 25,000 user pairs without friendship for testing. \begin{table} [h] \caption{Link Prediction Results by Supervised Methods} \label{tab:svmresult} \centering{ \begin{tabular}{|l|l|c|c|c|} \hline Tasks & Methods & Avg Prec.
& Avg Recall & Avg F1 \\ \hline \hline \multirow{7}{*}{TWLP} & \textbf{NBO} & 0.954 & 0.873 & 0.911 \\ \cline{2-5} & \textbf{NFM} & 0.955 & 0.830 & 0.888 \\ \cline{2-5} & \textbf{NBOFM} & 0.953 & 0.875 & 0.912 \\ \cline{2-5} & \textbf{NBCL} & 0.976 & 0.887 & 0.929 \\ \cline{2-5} & \textbf{NFMCL} & \textbf{0.979} & 0.861 & 0.916 \\ \cline{2-5} & \textbf{ALL} & 0.973 & \textbf{0.891} & \textbf{0.930} \\ \cline{2-5} & $\textbf{JC}_I$ & 0.882 & 0.882 & 0.882 \\ \hline \hline \multirow{7}{*}{INLP} & \textbf{NBO} & 0.942 & 0.832 & 0.883 \\ \cline{2-5} & \textbf{NFM} & 0.959 & 0.721 & 0.823 \\ \cline{2-5} & \textbf{NBOFM} & 0.942 & 0.833 & 0.884 \\ \cline{2-5} & \textbf{NBCL} & 0.958 & 0.838 & 0.894 \\ \cline{2-5} & \textbf{NFMCL} & \textbf{0.971} & 0.740 & 0.840 \\ \cline{2-5} & \textbf{ALL} & 0.956 & \textbf{0.841} & \textbf{0.895} \\ \cline{2-5} & $\textbf{JC}_I$ & 0.838 & 0.838 & 0.838 \\ \hline \hline \end{tabular} } \end{table} \textbf{Experiment Results.} Table~\ref{tab:svmresult} shows the results of supervised link prediction for TWLP and INLP tasks. In these experiments, all the feature configurations yield better precision than recall. Most of them have F1 higher than the best F1 scores of the unsupervised methods (i.e., $\textbf{JC}_I$). Generally, according to F1, the configuration using all features outperforms the other methods. Although the Common Neighbor Friendship Maintenance (\textbf{NFM}) features performed slightly worse than the Neighborhood (\textbf{NBO}) features, the \textbf{NFM} features still managed to achieve reasonably good F1 scores of 0.888 and 0.823 for the TWLP and INLP tasks respectively. This suggests that we are able to predict, with reasonable accuracy, the friendship between users using the common neighbors' friendship maintenance behavior as features. The addition of the Cross Network (\textbf{CL}) feature also improves the results of the \textbf{NFM} and \textbf{NBO} features.
Interestingly, the configuration with Common Neighbor Friendship Maintenance and Cross Network features (i.e., \textbf{NFMCL}) yields the best precision in both the TWLP and INLP tasks. This suggests that the existence of a link between two users in one OSN increases the likelihood of a link between the users in another OSN. A possible reason for the Common Neighbor Friendship Maintenance (\textbf{NFM}) features performing slightly worse than the Neighborhood (\textbf{NBO}) features could be the lack of common neighbors with friendship maintenance measures who are also base users. We therefore re-examined the supervised link prediction results and determined the accuracy of link prediction for test instances that have at least one common neighbor who is also a base user. \begin{table} [h] \caption{Link Prediction Results of Test Instances with at Least 1 Base User Common neighbor} \label{tab:svmsubset} \centering{ \begin{tabular}{|l|l|c|c|c|} \hline Task & Methods & Avg Prec. & Avg Recall & Avg F1 \\ \hline \hline \multirow{2}{*}{TWLP} & \textbf{NBO} & 0.948 & 0.970 & 0.959 \\ \cline{2-5} & \textbf{NFM} & \textbf{0.971} & \textbf{0.994} & \textbf{0.982} \\ \cline{2-5} \hline \hline \multirow{2}{*}{INLP} & \textbf{NBO} & 0.938 & 0.959 & 0.949 \\ \cline{2-5} & \textbf{NFM} & \textbf{0.976} & \textbf{0.999} & \textbf{0.987} \\ \cline{2-5} \hline \hline \end{tabular} } \end{table} As shown in Table~\ref{tab:svmsubset}, the method using \textbf{NFM} features only outperformed the method using \textbf{NBO} features in precision, recall and F1 score in both TWLP and INLP tasks. This suggests that there were several occasions where the method using \textbf{NBO} features only wrongly labeled a positive instance as negative, but these instances were correctly labeled using \textbf{NFM} features.
Upon further examination of these test instances, we found that although each user pair has very few common neighbors, the common neighbors actually fall in the \textit{low friendship evenness and high friendship similarity} friendship maintenance category (i.e., LEHS). The users in LEHS connect to more friends in either Twitter or Instagram, while keeping a smaller and potentially closer clique of common friends across the two OSNs. Thus, a pair of users with a LEHS common neighbor are more likely to be friends, especially when they belong to the smaller clique of friends in one of the OSNs. \section{Related Works} \label{sec:related} In this section, we review three groups of existing research works related to our research. The first group is the research studies on structural properties and user behaviors in multiple OSNs. The second group discusses link prediction conducted in multidimensional networks. Finally, the last group focuses on the triadic closure property in OSNs, which is often used in link prediction. The study of structural properties and user behaviors in multiple OSNs is an emerging topic and the research subject has been gaining attention in recent years. Magnani and Rossi \cite{magnani} conducted a study of the structural properties of multiple OSNs and proposed to represent multiple OSNs as a \emph{multi-layer network}. They also extended the degree and closeness centrality measures to multi-layer networks. Their work, however, did not consider other network structural properties or behaviors such as the friendship similarity and evenness across networks. The linkage of user accounts belonging to the same person across multiple OSNs is also a widely studied topic \cite{zafarani:connect, zhang2014}. With wider adoption of the user linkage methods proposed by previous research works, researchers have also studied user behaviors across multiple OSNs. Benevenuto et al. performed a macro-level analysis of user behaviors such as browsing and content posting at different OSNs \cite{benevenuto}. Zafarani and Liu conducted an empirical study on users in 20 social media sites and showed that most users join and stay active in fewer than 3 social media sites \cite{zafarani:join}. Kumar et al. analyzed the user migration patterns across seven OSNs \cite{kumar2011}. Unlike the existing works on user behaviors across multiple OSNs, our study focuses on the friendship maintenance behavior of users when they join multiple OSNs. Our study analyzes whether a user would prefer to add a friend in multiple OSNs or simply maintain and restrict the friend to a particular OSN only. The findings of our work provide new perspectives on the existing studies of user behaviors in multiple OSNs. For instance, Ottoni et al. studied users' activities across Twitter and Pinterest and found that the usage patterns across the two OSNs differ significantly \cite{ottoni2014}. They found that users tend to post items to Pinterest before posting them on Twitter. Using the insights from our studies, a possible explanation for the observed user behaviors in Ottoni et al.'s study could be the low user friendship similarity across the OSNs: the users were maintaining different groups of friends in different OSNs, so there was a need to re-post the content on multiple OSNs in order to disseminate the information to all friends in the different OSNs. A similar explanation could also be made for the study conducted by Lim et al., who found that users exhibited varied information sharing behaviors on different OSNs \cite{lim2015}; the users, who may maintain low friendship similarity, were catering to the different groups of ``audience'' (i.e., friends) in different OSNs. Future work could investigate the impact of friendship maintenance on other user behaviors such as information adoption and diffusion.
There have been a few link prediction studies on multidimensional networks. Rossetti et al. performed supervised and unsupervised multidimensional link prediction on the DBLP and IMDb networks \cite{rossetti}. In that study, the researchers used neighborhood features such as Common Neighbors and Adamic-Adar to predict user collaboration in the different dimensions of a network. For example, they predicted the collaboration of authors in DBLP with the publishing venues defined as the dimensions. Our link prediction experiment differs from the previous study as we predict friendship of users in different OSNs instead of different dimensions of the same network. Multiple OSNs are quite different from multidimensional networks, as there are unmatched user accounts across multiple OSNs, while user account matching is not required in multidimensional networks. Furthermore, our friendship link prediction methods consider not only friendship neighborhood features but also friendship maintenance features. Another related field of work is the study of the triadic closure property in social networks. The triadic closure property has been widely studied for many years, even before the rise of OSNs \cite{simmel,wasserman}. In recent years, researchers have modeled and studied the process of triadic closure in OSNs. For example, Romero and Kleinberg empirically investigated the triadic closure process in the Twitter network \cite{romero}. Lou et al. performed prediction of reciprocal relationships and the triadic closure process in Twitter. They also developed a model to accurately predict 90\% of the reciprocal relationships in Twitter and to predict the triadic closure process among users \cite{lou}. Our study builds on these existing works and focuses on how similarity and evenness of friendship across OSNs affect the likelihood of triadic closure.
\section{Conclusion and Future Works} \label{sec:conclusion} In this paper, we studied how users manage and maintain friendships across multiple social networks. We constructed a base set of about 100,000 users with Twitter and Instagram accounts and studied the friendship of these users in the two OSNs. We introduced friendship similarity to measure the similarity of friendships between two OSNs. A friendship evenness measure was also defined to quantify the degree of balance a user maintains for the number of friendships in different OSNs. We showed that most users prefer to maintain different friendships in different OSNs, while keeping only a small clique of common friends across OSNs. We also investigated link prediction in multiple OSNs using unsupervised and supervised methods. We showed that the conventional unsupervised methods using neighborhood features perform well even when we predict links in one OSN using only the network structural properties of another OSN. We also proposed a set of network features and applied them to supervised link prediction methods. The experiments showed that the supervised methods with suitable feature sets improved the accuracy over that of the unsupervised methods. To conclude, we note that this research is among the very few conducted on multiple social networks. While we have shown that the concepts of friendship similarity and evenness are important, they need to be generalized beyond just two OSNs. As part of future work, we plan to expand the study to include larger and more diverse OSNs with overlapping user communities. The content generated by users can be further studied so as to provide more insights into the way users manage the different OSNs. \section{Acknowledgements} This work is supported by the National Research Foundation under its International Research Centre@Singapore Funding Initiative and administered by the IDM Programme Office, and National Research Foundation (NRF).
\balance \bibliographystyle{plain}
Harvard, MIT and Brown Students, MAP for Health and Quest Diagnostics Employees Again Rally Boston to Set "See No Evil, Hear No Evil, Speak No Evil" Guinness World Record, Drawing Attention to World Hepatitis Day Volunteer Team Urges City Residents and Visitors to Come Out, Be Counted, and Paint at Hepatitis Awareness Event, Chinatown Park, 10:00 a.m. on July 28th. BOSTON, July 26, 2013 /PRNewswire/ -- Students at Harvard, Massachusetts Institute of Technology and Brown University, Massachusetts Asian & Pacific Islanders (MAP) for Health and local employee volunteers of Quest Diagnostics will join elected officials, community and healthcare leaders and Boston residents to once again rally the city of Boston as part of a global, synchronized action to highlight the need for greater awareness of hepatitis risk, prevention and treatment. The public health event will take place from 10 a.m. to 12 p.m. on Sunday, July 28th at Chinatown Park (Rose Kennedy Greenway) and is part of the World Hepatitis Alliance's Guinness World Record attempt to have the most people participate in 24 hours at multiple venues around the world. To view a video encouraging Boston to join Sunday's rally in Chinatown Park, click here. (Logo: http://photos.prnewswire.com/prnh/20130717/NY48934LOGO ) Participants and community families are also invited to add their brush strokes to a community art project to paint a colorful mural that will be donated to South Cove Community Health Center. The mural, designed by MAP for Health's own Narong Sokhom, will represent the 'See No Evil', 'Hear No Evil', 'Speak No Evil' theme to highlight that hepatitis B and C need greater attention and action around the world. The South Cove Community Health Center is dedicated to providing healthcare services to underserved patients, particularly Asian and Pacific Islanders, and encourages onsite hepatitis B screening and vaccination. 
Last year, Boston recorded nearly 100 participants, helping the World Hepatitis Alliance achieve Guinness World Record status for the first time with more than 12,000 strong worldwide. They asked the Boston team of volunteers to help them do it again this year. "Globally and here in the United States, hepatitis is a silent epidemic. Awareness can be prevention, it's that simple," said Jennifer Chen, Co-President of Team HBV at Harvard College. "We're proud to work with MAP for Health, Quest Diagnostics and others to share the message locally and promote awareness internationally." "Millions of people in the United States are unaware of their hepatitis status, and are at serious risk for severe complications including cirrhosis of the liver, liver cancer, and death," said Salim Kabawat M.D., Clinical Pathology Regional Medical Director, New England, Quest Diagnostics. "Quest Diagnostics is proud to again team up with Team HBV student volunteers, MAP for Health and the community leaders and residents of Boston – and those rallying around the world on Sunday – to do what we can to improve public health and protect those we love." According to the World Health Organization (WHO), hepatitis is inflammation of the liver, most commonly caused by a viral infection. In particular, hepatitis B and C lead to chronic disease in hundreds of millions of people. Together, they are the most common cause of liver cirrhosis and cancer. Worldwide, 350 million people have chronic hepatitis B and 170 million have chronic hepatitis C. As infectious diseases, hepatitis B and C can be transmitted sexually, from mother to child at birth, and from blood-to-blood contact. Hepatitis B is preventable, by using a vaccine and by using protection. Screening also plays an important role in stopping the spread of the disease. There also are treatments that can help. Routine hepatitis B vaccination was recommended for some U.S. adults and children beginning in 1982, and for all children in 1991. 
Since 1990, new hepatitis B infections among children and adolescents have dropped by more than 95% – and by 75% in other age groups. In many countries in Asia and Africa, universal vaccination for hepatitis B is yet to be instituted. "While the prevalence of hepatitis B and C is higher than the prevalence of HIV or any cancer, awareness is low," Chen said. World Hepatitis Day and the partnership between MAP for Health, Team HBV and Quest Diagnostics and the local community is especially important this year in light of new Hepatitis C guidelines from the United States Preventive Services Task Force. The influential health advisory group concluded in June that all Baby Boomers (adults born between 1945 and 1965) should be tested at least once for hepatitis C. About three-quarters of the more than three million Americans with hepatitis C are baby boomers, most of them infected decades ago. But most do not know it because they have no symptoms. Individuals in this "baby boomer" generation are five times more likely than other adults to be infected, and one-time testing could prevent more than 120,000 deaths in this age group. Earlier this month, Quest Diagnostics also announced a partnership with the U.S. Centers for Disease Control and Prevention to promote early detection and medical intervention for Americans infected with hepatitis C (http://newsroom.questdiagnostics.com/2013-07-10-Quest-Diagnostics-Partners-with-CDC-to-Improve-Hepatitis-C-Public-Health-Research-to-Promote-Early-Detection-and-Medical-Intervention). Melissa Wong, Chair of the Board of Directors at MAP for Health added, "Combating hepatitis B and C starts with awareness, and community events like World Hepatitis Day help spread the word in an exciting and engaging way. In addition, MAP for Health is proud to work with Team HBV and Quest Diagnostics to provide free screenings to the community. Testing is crucial. 
If a chronic viral infection is discovered, treatments are available to prevent severe complications." The Boston team has gained the support of Cambridge City Councillor Minka vanBeuzekom and multiple community organizations. For more information about the World Hepatitis Day Boston rally led by Team HBV, Quest Diagnostics, and MAP for Health, and for a list of screening events in the area, visit TeamHBV.org/Boston. For more patient information on hepatitis C, visit www.QuestDiagnostics.com/HepC About Team HBV Team HBV is an international community comprised of collegiate chapters, high school chapters and local volunteers based out of the Asian Liver Center (ALC) at Stanford University. ALC is the first non-profit organization in the United States that addresses the disproportionately high rates of chronic hepatitis B infection and liver cancer in Asians and Asian Americans. Learn more at TeamHBV.org/Boston. About MAP for Health MAP for Health is a community-based, non-profit organization that works to improve healthcare access, disease prevention and service delivery for the Asian and Pacific Islander community in Massachusetts. More information about MAP for Health is available at mapforhealth.org. Quest Diagnostics is the world's leading provider of diagnostic information services that patients and doctors need to make better healthcare decisions. The company offers the broadest access to diagnostic information services through its network of laboratories and patient service centers, and provides interpretive consultation through its extensive medical and scientific staff. Quest Diagnostics is a pioneer in developing innovative diagnostic tests and advanced healthcare information technology solutions that help improve patient care. Additional company information is available at QuestDiagnostics.com. Follow us at Facebook.com/QuestDiagnostics and Twitter.com/QuestDX. 
Quest, Quest Diagnostics, and all associated Quest Diagnostics registered or unregistered trademarks are the property of Quest Diagnostics. All third-party marks are the property of their respective owners. Caitlin McHugh, Quest Diagnostics: 201-874-4940 Jenny Dudikoff, KP Public Affairs 916-224-9429 Dan Haemmerle: 973-520-2900
EPSRC Centre for Doctoral Training in Cross-Disciplinary Approaches to Non-Equilibrium Systems (CANES) The mission of CANES is to train future research leaders in the understanding, control and design of systems far from equilibrium, based on rigorous training in theoretical modelling, simulation and data-driven analysis, and a breadth of awareness of common themes across disciplines. Additionally, CANES will also function as a UK Centre of Excellence for the research and research user community, and a national and international hub in the area of non-equilibrium systems. Overview of Centre Non-equilibrium processes underpin many challenging problems across the natural sciences. The objectives of CANES are to train a new generation of researchers in cross-disciplinary approaches to non-equilibrium systems and develop deeper insights into non-equilibrium processes focusing on the three key strands of theoretical modelling, simulation and data-driven analysis. The ultimate goal is to address interdisciplinary challenges: How do we characterize, design and grow materials, and devices, with novel properties out of equilibrium? How do we control and exploit the stochastic processes inherent to biological systems? Can we use inference and information assimilation approaches from physics and biology to monitor and evaluate the state and direction of non-equilibrium environmental systems? CANES draws on a broad range of supervisor expertise in Mathematics, Physics, Chemistry, Informatics, Computational and Systems Biomedicine, Earth and Environmental Sciences, including partners at Imperial College London, University College London and Queen Mary London. Each year the centre will offer 10 fully-funded 4-year PhD studentships at King's College London. CANES Studentships covers course fees, a stipend for living expenses (ca £16,000 per year), and conference travel and internship funds.
The programme can support UK applicants as well as a limited number of students from the EU and overseas.
**TRUTH**

**SIMON BLACKBURN** was the Bertrand Russell Professor of Philosophy at the University of Cambridge and remains a Fellow of Trinity College. He is known for his appearances in the British media such as BBC Radio 4's _The Moral Maze_ and his many publications which span popular and academic moral philosophy. His books include _Spreading the Word_ (1984), _The Oxford Dictionary of Philosophy_ (1994), _Ruling Passions: A Theory of Practical Reasoning_ (1998), _Think: A Compelling Introduction to Philosophy_ (2001), _Being Good: A Short Introduction to Ethics_ (2002), _Lust: The Seven Deadly Sins_ (2003), and _Mirror, Mirror: The Uses and Abuses of Self-Love_ (2014).

ALSO BY SIMON BLACKBURN

_Reason and Prediction_
_Philosophical Logic_
_Spreading the Word_
_Essays In Quasi-Realism_
_The Oxford Dictionary of Philosophy_
_Ruling Passions_
_Think: A Compelling Introduction to Philosophy_
_A Very Short Introduction to Ethics_
_Lust_
_Truth: A Guide for the Perplexed_
_Plato's Republic_
_How to Read Hume_
_Mirror, Mirror: The Uses and Abuses of Self-Love_

Edited
_Meaning, Reference and Necessity_

Edited with Keith Simmons
_Truth_

**TRUTH**

**SIMON BLACKBURN**

First published in Great Britain in 2017 by
PROFILE BOOKS LTD
3 Holford Yard
Bevin Way
London WC1X 9HD
_www.profilebooks.com_

Copyright © Simon Blackburn 2017

The moral right of the author has been asserted.

All rights reserved. Without limiting the rights under copyright reserved above, no part of this publication may be reproduced, stored or introduced into a retrieval system, or transmitted, in any form or by any means (electronic, mechanical, photocopying, recording or otherwise), without the prior written permission of both the copyright holder and the publisher of this book.

All reasonable efforts have been made to obtain copyright permissions where required.
Any omissions and errors of attribution are unintentional and will, if notified in writing to the publisher, be corrected in future printings.

A CIP catalogue record for this book is available from the British Library.

eISBN 978 1 78283 292 8

**CONTENTS**

**PREFACE**

**PART I: THE CLASSIC APPROACHES**
1. Correspondence
2. Coherence
3. Pragmatism
4. Deflationism
5. Tarski and the semantic theory of truth
6. Summary of Part I

**PART II: VARIETIES OF ENQUIRY**
7. Truths of taste; truth in art
8. Truth in ethics
9. Reason
10. Religion and truth
11. Interpretations

Notes
Further investigations
Index

**PREFACE**

This is the third book under my name with the word 'truth' in the title, so perhaps some explanation is in order. The first was a collection of classic philosophical and logical writings that my erstwhile colleague Keith Simmons and I put together in 1999, as a title in the series of Oxford Readings in Philosophy. So, apart from my contribution to our joint introduction, it was by no means an exposition of my own views. The second I billed as a 'Guide for the Perplexed', and it wrestled above all with problems of scepticism and relativism, perhaps more prevalent in the carefree 'postmodern' world at the turn of the millennium than they are at present. It was easier then to think that anything goes, when nothing much in the way of war, religious intolerance and terrorism was going on, than it is now, when they are pervasive features of everyday life. In that book I took to task some philosophers, particularly Richard Rorty and Donald Davidson, who seemed to me to have come too close to a relativistic view of truth. But when fine philosophers go astray, there is usually some truth in the offing, and this book tries to do fuller justice to the pragmatist strain in each of those writers, and to others in the pragmatist tradition. So the approach of this book is very different.
It briefly lays out the classical approaches to understanding the notion of truth, but then devotes the second half of the book to some areas, such as aesthetics, religion, ethics and interpretive disciplines, where truth can seem especially fugitive and endlessly contestable. The aim is to show that a better understanding of all our practices with the notion of truth arises if we take seriously the point of Jeremy Bentham's and C. S. Peirce's remarks (see epigraph on page 4). What this means has to unfold in due course, but when it does we gain not only a new perspective on the old problem of truth, but a new sense of the practices of philosophy itself.

The selection of topics is necessarily partial, since philosophers have pursued issues of truth and our attempts to find it in more areas than I have space to talk about. Perceptual judgement, mathematical investigations, scientific truth, truth about possibilities and necessities, give rise to their own huge literatures. But in order to avoid superficial treatments of too many things I have instead tried to follow a particular thread, to see where it takes us in a limited number of especially contentious areas. I hope it will be evident how the thread can be extended further, and it will be an exercise for the reader to think about that. Philosophy, like gardening, needs to be practised to be understood, and although I hope to provide tips, suggestions and examples to follow, the point has to be to initiate a process, not to deliver a finished product. To whet people's appetites, I might say that this is also the moral I am deriving from Bentham and Peirce.

I have been indebted over the years to many colleagues, friends and writings. I would like particularly to mention Edward Craig, Allan Gibbard, Robert Kraut, Huw Price and Michael Williams, who have all influenced the way I think about these things. I owe a great deal to Chris Hookway's and Cheryl Misak's work on the American pragmatist tradition.
I owe the stimulus to think about truth in law to Andrew Stumff Morrison, and the fascinating, and to me new, material on Thomas Hobbes to Thomas Holden. I owe thanks to Catherine Clarke for encouragement, and to John Davey for his faith in the project.

> Stretching his hand up to reach the stars, too often man forgets the flowers at his feet.
>
> Jeremy Bentham

> We must not begin by talking of pure ideas – vagabond thoughts that tramp the public highways without any human habitation – but must begin with men and their conversation.
>
> C. S. Peirce

**PART I**

**THE CLASSIC APPROACHES**

There is an air of divinity that hangs over the concept of truth. Truth is the goal of enquiry, the aim of experiment, the standard signalling the difference between it being right to believe something, and wrong to do so. We must court it, for in its absence we are bewildered or lost or may even be facing the wrong way, on the wrong track altogether. Deception is an insult to this divinity, as well as an insult to its target. Sometimes, perhaps more often than we think, truth hides itself, and we have to put up with simplifications, models, idealisations, analogies, metaphors and even myths and fictions. These may be useful, but we think of them as only at best paving the way to the altar of truth. Sometimes we have to settle for mere opinion or guesswork, but the god of truth is better served by attendant deities, such as reason, justification and objectivity. Once we have it, truth radiates benefits such as knowledge and, perhaps most notably, success in coping with the world. It is theology that tries, with doubtful success, to unravel the nature of other deities, but it is philosophy that wrestles with the nature of truth. How does it set about doing so?

**1**

**CORRESPONDENCE**

A good map corresponds with the landscape.
If, in accordance with the mapping conventions, there is a symbol showing a road at some place, then there is a road there, if it shows a river, then there is a river, and so on. The conventions are not always obvious. We may not even know which bit of land the map is describing (think of pirates' treasure maps), and we may not know the conventions. A short red line does not look much like a road, and a thin blue line not much like a river, and some maps ignore conventions that others use. Famously, the distances shown between stations on the classic London Underground map do not correspond with the actual distances on the ground in a systematic way, whereas on most maps they do. Hence reading a map is a skill that needs teaching. But once the conventions are understood, a good map will correspond with what is found on the ground. A good portrait corresponds with a face even more readily, since a portrait can look significantly like a face – one might even mistake one for the other in a bad light – whereas a map does not generally look like a landscape. Both, of course, can go wrong. Bad maps or portraits do not correspond with their target in the way they should. What kinds of thing are true? For the purposes of our investigation we shall put aside the sense in which a friend might be true (i.e. loyal) or a ruler might be true (i.e. straight). We are concerned here only with the things that we assert or think. They are standardly conveyed by indicative sentences, which we use to claim that something _is_ the case. We could say that it is the beliefs expressed by such sentences that are true, or perhaps the thoughts or assertions or judgements or propositions. Questions are not themselves true or false, although they may be answered truly or falsely. Nor are injunctions or commands, although they may be obeyed or disobeyed. If we think of thoughts as being true or false we should also notice that a thought might be entertained without being asserted. 
I might wonder whether someone eats meat, and then, discovering that he does, assert the very same thought about which I had been undecided. Unless it is asserted, a thought is not at fault for being false – we can while away time pleasurably enough entertaining thoughts that are not true – but an assertion or belief is supposed to be true, and at fault if it is not. So in what follows I shall talk about beliefs and assertions as the primary candidates for being true or not. A belief is said to be identified by its content, which is roughly the sum total of what makes it true or false. Beliefs in this sense are public property. I can believe the same thing that you believe, and the possibility of communication depends upon that. Beliefs can also be held in common by people speaking different languages, although there can be difficulties of exact translation. To investigate truth I am going to put aside the question of whether there could be inexpressible beliefs, that is, that have no linguistic vehicle. People are often led to suppose that there are because of the experience of being at a loss for words, of thinking that there is something to be said but not knowing what it is. However, when we are in that frustrating state, we are casting around for something to say, which is just the same as casting around for something to believe. In this state we do not at the same time know what to believe and yet not know what to say. Similarly, we may want to attribute thoughts or beliefs to animals, which have no means of linguistic expression. But when we do so, we ourselves can say what we think they believe: if on the basis of its avoidance behaviour we say that a chicken believes some grain to be poisonous, we have found words to say what we think it believes. The first natural thing to say about true beliefs is that, like portraits or maps, they too should correspond with something. They should correspond with the facts – the way the world is. 
The view is standardly fathered onto Aristotle: 'To say of what is that it is, or of what is not that it is not, is true.' True statements tell it like it is; true beliefs get the facts right. The world bears them out. Philosophers often say odd things, but nobody denies that true beliefs correspond with the facts: it goes without saying, a platitude that nobody doubts. What philosophers do doubt is whether this is a useful thing to say, or is more than a merely nominal or verbal equivalence. Anything deserving the name of a correspondence _theory_ of truth must say more. It must add that the notion of corresponding to the facts is the key to understanding truth itself, and many philosophers have indeed doubted that. They fear that 'corresponds with the facts' is just an elaborate synonym for 'true', rather than a useful elucidation of the notion. The question is whether we have a good understanding of facts, as a category, and of correspondence as a relation that a belief or statement can bear to them. And philosophers do find difficulty with each of these. Actually, this understates it. Many of the most influential philosophers of the last century or so have competed to express enough contempt for the idea that correspondence gives us a real _theory_ of truth, or explanation of the notion. 'The idea of correspondence is not so much wrong as empty', said Donald Davidson. 'The intuition that truth is correspondence should be extirpated rather than explicated,' said Richard Rorty, echoing Peter Strawson's 'the correspondence theory requires not purification, but elimination.' Other giants such as Nelson Goodman, Willard Van Orman Quine, Hilary Putnam and Jürgen Habermas all said similar things. In order to appreciate these onslaughts, consider facts first. Many people become a little nervous with some categories of fact. 
People often wonder whether there are ethical facts (given intractable ethical disagreements) or whether there are aesthetic facts (given stubborn differences of taste and preference). These are areas in which the facts seem at best elusive, and possibly non-existent. By contrast we might think of good, concrete facts as ones that fall under our observation: the fact that there is a computer in front of me as I write, or that I am wearing shoes, for instance. But then there is the fact that there is not a lion in front of me (a negative fact) or the fact that if I attempt to walk in some directions I shall bump into a wall (a conditional or hypothetical fact). Do I come across these facts, in the same way that I come across the computer and the shoes? I am sure of them, there is no doubt about that. But my confidence is not given by what I see so much as what I do not see, or bump into. It is an _interpretation_ of my situation. But to interpret a situation is just to have a _belief_ about it. Now, however, it seems that to come upon a fact, such as there not being a lion in front of me, is close to the same thing as believing that there is not a lion in front of me. And the fact then loses its status as an independent entity to which the belief must correspond. We can compare the map and the landscape, or the portrait with the sitter: here is the one, and here is the other. But we can't compare the fact and our belief, if to hold there to be a fact that such-and-such is just the same as to believe that such-and-such. 'If we can know fact only through the medium of our own ideas, the original forever eludes us.' It is as if in our mind the fact coalesces into the belief. It is no accident that facts are identified by the very same indicative sentences as beliefs: this is the logic we have given them. It is not a gift of the world, an independent 'thing' alongside the computer and the shoes that our minds are fortunately able to mirror. 
It is we who say things, and as we do so we use the same sentences to identify both our beliefs and what we hope to be the facts. Of course, we can (and must) insist that the fact about the room, that there is no lion in it, is one thing, and the fact about me, that I believe this, is a different thing. They are independent: the room might have been lion-free although I had no opinion about whether it was, and I might unfortunately have believed it to be lion-free when it was not. An investigation of the contents of the room is a different thing from an investigation of the contents of my beliefs about it. But this is just to say that the one judgement, that the room is lion-free, is not the other, the judgement that I, Simon Blackburn, believe it to be so. The judgement about the room is not a judgement about people, and my judgement about the room is not a judgement about myself. Granted, but this does not imply that either type of judgement is essentially relational or comparative, fitting a belief into something of the same shape, as it were. We can come at the same difficulty in a different way, by means of another example. Nearly everybody knows their mother's name. So fix the belief in your mind that your mother's name is such-and-such. Now go through a process of firstly attending to that belief, and secondly attending to the fact that your mother's name is such-and-such, and thirdly comparing the two. I suspect you will find yourself bewildered. The belief does not present itself to your consciousness as a 'thing' or presence. You believe it, sure enough, but that is not an acquaintance with a mental thing or structure. It's more like a disposition. You are disposed simply to answer the question, what was your mother's name, by giving her name. You can probably do that without thought or doubt: the name simply springs to mind. 
And the fact that your mother's name is such-and-such does not hover into view either, as a kind of ghostly doppelganger to your belief. So believing something (which is the same as believing it to be true) is not a tripartite process of fixing A in your mind, then B, and then comparing the two to see if they correspond. Yet the idea of correspondence seems to require that this is what it should be. Another way to become uneasy about facts as a category to which thoughts or beliefs can correspond is to reflect on the difference between facts and objects, or even structures of objects. Wittgenstein asked us to consider the difference between the Eiffel Tower, a large, structured object which reflects light and weighs so many tons, and a fact about it, such as the fact that it is in Paris. He pointed out that while it would be possible to move the Eiffel Tower to Berlin, you cannot move the fact that the Eiffel Tower is in Paris anywhere. Unlike a thing, a fact has no location, and no chance of moving. A fact is not a locatable structure. In a similar vein the German logician Gottlob Frege had said 'that the sun has risen is not an object that emits rays that reach my eyes, it is not a visible thing like the sun itself.' It might seem to be so because there are certainly processes that we call 'being confronted with the facts'. If I blandly assert that there are no potatoes in the cupboard, my wife can confront me with the fact that there are. The process is one of checking beliefs, enquiring into their truth, and well-directed observation is a royal road to doing that. Similarly, if you find yourself worried that you may have got your mother's name wrong, you could in principle mount an enquiry. You could look at (what you take to be) old letters she signed, or court records, or birth certificates. You may even be able to ask her. Such processes can, and often should, confirm or disconfirm your belief. They might lay your doubts to rest. 
They will do so, of course, insofar as you take them to be what they seem to be. But that in turn is a matter of having beliefs about them. The piece of paper is useless unless you take it to be one of her letters, and the court record is useless if you suppose it to refer to someone else. The person's avowal of her name is useless if you are unsure whether it is your mother who is speaking, or whether you think she has dementia. Interpretation and belief is always required, even as we check up on what we might take to be a simple matter of fact. What look to be potatoes in the cupboard may be no such thing, just fakes or fools' potatoes (and that too can be checked). Perhaps the best stab at an uninterpreted confrontation with fact comes if we think of bare experience, or pure sensation. A squeak, a whiff or a glimpse can certainly engender belief: that mice have got into the kitchen, that Rover has been rolling in the mud, or that there are potatoes in the cupboard. The interpretation may be obvious and automatic. But it is still required to get from sensation to belief: to the unadapted mind the squeak or whiff or glimpse would suggest nothing at all. The association between that kind of glimpse and potatoes is all too familiar. But it is still required. Sensations cannot, by themselves, point beyond themselves. William James put the true situation memorably: > A sensation is rather like a client who has given his case to a lawyer and then has passively to listen in the courtroom to whatever account of his affairs, pleasant or unpleasant, the lawyer finds it most expedient to give. In the philosophy of mind it is controversial whether there are such things as uninterpreted sensations at all, or whether all sensation carries interpretation with it. In either case, as far as truth goes it is only with the interpretation that we even get a candidate for truth. Otherwise the sensation remains dumb, a passing experience of which we may make nothing. 
As James elsewhere put it, 'new experiences simply _come_ and _are_. Truth is what we say about them.' As an aside, it is one of the many ironies in the history of philosophy that in spite of such dicta James was frequently (and with some justice) accused of supposing that, given that they are subjectively useful, the consolations, yearnings or ecstatic experiences claimed by religious persons were themselves a kind of truth, ignoring the point that it is only interpretations of them in divine terms that could be true or false. But such claims, framed in terms of supernatural agency or expectations for the future, are then themselves subject to public scrutiny and criticism. We shall hear more about James later, discussing pragmatism's theory of truth.

Although I think the strongest objection to the correspondence theory of truth is that it is vacuous or empty, this does not exhaust the arguments that have been raised against it. Some say that far from being empty it is pernicious, insinuating a false picture of the way the mind relates to the world. It sees us, it might be thought, as passive recipients doing no more than mirroring a self-interpreting or ready-made world, rather than responsible, active investigators, authors of our own categories and our own interpretations of things. Some say that it implies a 'metaphysical realism' according to which there is just one true, complete, book of the world, and it is our job to read it. Others say that it makes the world a Kantian 'thing in itself', lying unknowably beyond the categories that our minds shape in order to deal with it, and so opens the door to a complete and unanswerable scepticism. It would be a long business to work out what justice, if any, there is in these complaints. One thing, however, is clear enough, which is that a correspondence theory of truth cannot be charged both with being entirely empty and with being horribly misleading. You can mount one charge or the other but not both.
If it is vacuous, then it can't be dangerous. Similarly, if it is vacuous it cannot best apply to some kinds of judgement, such as common-sense remarks about the environment, and not to others, such as ethical or aesthetic judgements.

**2**

**COHERENCE**

As I've described, these difficulties about what is involved in 'being confronted with the facts' have left many philosophers disenchanted with taking correspondence as a key to understanding truth. Instead they emphasise the work of the mind in actively interpreting any data of the senses in the light of whatever endowment of categories and thoughts have been developed by long processes of experience and learning. If we go back to the subject intent on allaying doubts about their mother's name, we see that the simplest enquiry into the truth of a belief will require other interpretations, other beliefs, until sooner or later doubt can be laid to rest. The hope is that only one coherent picture emerges, with the discovery of signed letters, court records, testimony, recognition, all coming together to vindicate just one answer. And of course, if we are unlucky this does not happen. The enquiry may fail and doubts persist. But often enough the process works, and only one verdict emerges as justified. So what else could we be looking for?

Truth, we surely agree, is the goal of enquiry. But if enquiry must be content with a terminus in a coherent, interlocking structure, a 'reflective equilibrium' in which all our beliefs about a subject matter fit together – if there are no serious doubts that need laying to rest – then why not say that this is just what truth consists in? Why not settle for coherence, which we can obtain, as opposed to the fantasy of a confrontation between our beliefs, over here, with the facts, over there, which we cannot? This is the suggestion of the 'coherence theory of truth'.

We might worry that equilibrium might be obtained although we are completely off track.
We would be tormenting ourselves with a vast scepticism, the kind that Descartes raised and which is therefore known as Cartesian scepticism, a doubt that although everything is hanging together yet we may be on utterly the wrong track, living in a fool's paradise, forever shut off from the real facts, the real truth. This is the possibility dramatised as the idea that 'for all we know' we may be brains in a vat under the control of a mad scientist (a mad scientist with godlike powers, since he or she is so good at deluding us into feeling at home). There are arguments that this is not even a bare remote possibility, but for the moment we can content ourselves with the remark that except perhaps in certain areas, which we shall come to in due course, such sceptical thoughts vanish in the light of day. As we go about our business such doubts have no place. Even if philosophers entertain such bizarre thoughts in the study, they are as quick as anybody else to interpret a scene as one of a bus bearing down on them, and to jump out of the way accordingly. So coherence theorists regard the Cartesian search for infallible foundations, rocks of certainty that resist the most determined scepticism, as wrong-headed. We must start not with unreal, mere paper doubts, but _in medias res_ – in the middle of things. When we have a doubt that needs settling and pursue an enquiry to settle it we do not empty our mind of everything we know and start from a pure blank slate of ignorance. We rely on what we do know, make inferences that we normally make, and assess sources of evidence in accordance with our tried and practised procedures. We warp our overall picture of the world as little as possible in order to accommodate the solution to our doubt. 
The coherence theory of truth gained a strong following in the nineteenth century, partly due to the influence of Kant and Hegel, and especially in the thought of the British philosophers influenced by them, known as the British Idealists. One of its implications is that beliefs do not belong to whole systems in the way that pebbles lie on a beach, disconnected from each other and independent of their neighbours. Rather, they belong organically to whole systems or theories of the world in the way that a hand belongs to an arm or an arm to a body: the interlocking system has the character of a living body, an organic whole in which each part gains its value precisely by its being a part of the whole. This idea, called the holism of belief systems, diverts attention from the single sentence expressing a single truth, to whole theories or systems of belief. As an illustration, think of learning elementary arithmetic. You do not learn, one at a time, that thirteen is greater than eleven, or that twenty-six is an even number. You learn a whole system and a whole set of interconnected implications and applications, and then, as Wittgenstein put it, 'light dawns gradually over the whole.' In nineteenth-century hands the coherence theory had a semi-religious flavour: ideal coherence, it was thought, could belong only to the thoughts of an infinite mind, a mind capable of encompassing an infinity of interlocking beliefs, something like God's mind, which the idealists christened the Absolute. Anything like it could arrive only at the endpoint of the progress of the Human Spirit, but like the endpoint of the rainbow that could never be reached by mere mortals. These thoughts might reintroduce a kind of pessimism or scepticism. Coherence is the best we can achieve, but our coherence might not be that of the gods. Again the idea arises that we might be faltering along on the wrong track, disconnected from the real world. 
The thought is that however much we may be at home with it, the empirical world of common sense and science is but the appearance of a hidden reality of a different nature. In Kant's jargon the ordinary world of chairs and tables, cars and buses, is 'empirically real' – it is what our senses tell us is real – but 'transcendentally ideal' – the product of the way our minds structure a reality of which we can form no idea, since in forming any such idea we would be back deploying the structuring powers of the mind. This Kantian doctrine gave a satisfactorily pious, religion-friendly tinge to philosophy in the Victorian age ('now we see through a glass darkly...'). There is a standard objection to the coherence theory of truth, canonised as the 'Bishop Stubbs objection' because of an example used by Bertrand Russell in his _Philosophical Essays_ of 1910. The Oxford coherence theorist H. H. Joachim had urged that real truth belonged not to individual beliefs but only to the interlocking, godlike 'whole truth' that we shall never obtain. Individual beliefs were only ever partially true, and error consisted in misplaced certainty, when we take what is partially true to be wholly true. Russell urged that if this were so, then 'Bishop Stubbs wore ecclesiastical gaiters,' held with total confidence, would be deemed erroneous, whereas 'Bishop Stubbs died on the gallows,' held as a hypothesis with only modest confidence, might be part of an interlocking coherent story about the man's life, and would therefore count as true. A little history, however, tells us that it is true that the eminent and respectable Bishop of Oxford, William Stubbs, wore ecclesiastical gaiters (they did in his day) and entirely false that he died on the gallows (he died in 1901 at the age of seventy-five, in his bed). 
Although Russell's amusing objection may have some force against some of the wilder statements of the coherence theory of truth, it is hard to see it as effective against more cautious ways of framing the view, perhaps most obviously because Russell's thought of Bishop Stubbs dying on the gallows cannot enter into a properly coherent system of _beliefs._ It would be doing so only as the result of fancy. But firstly, although on occasion we may become convinced of things for which there is remarkably little evidence, we do not allow ourselves to believe anything and everything that is the result of fancy. If you tell me you have just dreamed something up, you give me no reason at all to believe it. And secondly, a principle allowing one to believe anything and everything that is just the result of fancy would rapidly lead to hopeless incoherence. One can fancy all kinds of things true: not only that Bishop Stubbs died on the gallows, but also that he died from a surfeit of bananas, drowned at sea, and so on forever. Any of these could equally belong to a coherent fiction, so the coherence theory needs some control, some principle for determining the _right_ coherent system. So a coherence theorist is within his rights to specify a much more demanding nature of coherence. The image to be avoided is that of a belief system 'spinning frictionless in the void', as the philosopher John McDowell put it. This can only be avoided if we can ensure that a properly coherent system of beliefs contains quite serious controls. These will have to be described by 'meta-beliefs' – beliefs about how beliefs deserve to get into the system. And we do have such meta-beliefs. A thought is only a proper candidate for belief if it comes with a pedigree: it should be the result of some processes of enquiry and interpretation that have earned their keep and have general application. 
Most beliefs get into our own belief systems through perceptual experience (to check if there are potatoes in the cupboard we go and look), or, in the case of historical beliefs, through research into texts and archives. In the case of scientific beliefs there are well-established procedures of experiment and observation. When these fail we suspend belief, but often enough they do not fail. It is particularly in processes of observation that the world provides the friction and resistance that McDowell wanted. It is when we get nasty surprises that the world bares its teeth, and shows its unmistakable resistance to false expectations. It is here that the idea of brute confrontation with the facts has its home. The idea is that by observation or less direct methods we put ourselves in a state that _causally co-varies_ with the truth of what we come to believe. By looking in the cupboard I expose myself to causal influences that will put me in one state if I receive light, smells or tactile sensations from the potatoes, but will put me in a different state if I do not. It is in the light of those perceptual states that I gain a title to authority on whether there are potatoes in the cupboard. It remains true that the confrontation is not entirely brute: my interpretation may be instant and automatic, but it is necessary for all that. Reality makes itself felt all right, but it takes a mind to make judgements about it. It gives, but only to a mind prepared to receive the gift. Such a mind can make something of the friction and resistance provided by things: not only the squeaks, whiffs and glimpses we have already met, but the whole huge mass of everyday, uncontested interpretation that we have developed since childhood. We might at this point even say that correspondence theorists were at least half right. It may not be theoretically advantageous to say that beliefs correspond with the facts. 
But it is certainly true, and it would be theoretically catastrophic to forget, that we ourselves _respond_ to the facts. This is clear when we think of perceptual beliefs, and processes of enquiry that involve them, including such things as listening to informants, checking in libraries or experimenting in laboratories. The road from perception to interpretation can be short and immediate, or long and winding and fallible, but so long as it is there we have a toehold on the truth. Coherentists have often faltered when trying to explain the importance of control by observation – what William James called the 'coercions of the world of sense'. Over-impressed by the omnipresence of interpretation, they have often jumped to the conclusion that 'nothing can count as a reason for holding a belief except another belief.' But that is hopelessly misleading. In the first place, it is not a disembodied belief that is justified. It is people who are justified or not in what they believe: one person may be properly justified in believing something when another, worse placed, is not. And the primary mark of placing yourself properly is to confront the evidence, putting yourself in the way of causal processes so that your state is very apt to vary with the fact to be determined. In this way, apprehending the whiff is an integral part of the justification for believing that Rover has been rolling in the mud. The whiff causes the belief, but unless and until one gets further evidence, it is also an essential part of what justifies it. If the belief just popped into one's head it would lack this justification. So it is not simply my belief that there is a muddy whiff that justifies my belief that Rover has been rolling in the mud. It is the fact that this belief was caused by a reliable process – that is, a process that is reliable in anyone such as myself, who has a sense of smell and who remembers all too well what Rover smells like when he has been rolling in the mud. 
We might have hoped that Rover would stay clean today, but the whiff provides the friction with the world, just as the bell causes one of Pavlov's dogs to salivate, and means that it is 'justified', in the sense of salivating appropriately, because it has experience associating the bell with forthcoming food. A dog that cannot make the association does worse, and a dog that salivates at random wastes its energies. In the language of the logic of relations, Davidson's mistake is to think that the domain of possible substitutions for X in the relation 'X justifies Y' contains only beliefs. In fact at ground level it contains at least a trio of elements: <causal impacts on person a + experienced interpretation by a + belief of a>. And the range of the relation – the possible substitutions for Y – contains not an abstract proposition or belief but a concrete situation <person a holding belief _p_>.^a It is we (or, by extension, other animals such as dogs, in the case of Pavlov's experiment) who are justified, or not, in believing what we do. Control by experience gives (most) beliefs their appropriate pedigree: experience justifies us in holding them as well as simply causing us to hold them. This confirmation can be very indirect, trickling through complex theory and a web of implications. But at its simplest it just means that the person who has gone and looked or who has listened, and who has a track record of recognising what he is claiming to be the case on the basis of such observation, has an authority that the person who has neither qualification lacks. Of course, none of this ensures that the processes of observation and verification are free from mistake. Even experienced birdwatchers know that it is wise to try to confirm the identification that a brief glimpse suggests, and lawyers have frequent cause to lament the unreliability of eyewitnesses to events.
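The logic-of-relations point above can be put schematically. The notation here is ours, not Davidson's or the author's: it simply records that justification is not a relation between beliefs alone, but between a ground-level trio and a concrete situation of a person holding a belief.

```latex
% Davidson's picture, as criticised in the text:
% only beliefs can justify beliefs.
\mathrm{Justifies} \subseteq \mathrm{Belief} \times \mathrm{Belief}

% The alternative: the domain contains a trio of elements, and the
% range contains a concrete situation -- person a holding belief p.
\mathrm{Justifies} \subseteq
  \underbrace{\big(\mathrm{Impact}_a \times \mathrm{Interpretation}_a
    \times \mathrm{Belief}_a\big)}_{\text{ground-level trio for person } a}
  \times
  \underbrace{\big(\mathrm{Person} \times
    \mathrm{Proposition}\big)}_{\text{person } a \text{ holding belief } p}
```

On this reading the whiff (a causal impact), the experienced interpretation of it, and the resulting belief together occupy the domain position; the abstraction away from this concrete situation is, as the footnote puts it, what does the damage.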
Our default setting may be to accept the testimony of others, but when we suspect that these others may have motives for deceiving us, or are advancing horrible improbabilities, we may be very unwise to do so. These down-to-earth common-sense thoughts defend a coherence theory against the Bishop Stubbs objection, and they also defend it against the picture of an entirely self-enclosed world of thought, spinning frictionless in the void. The processes by which our beliefs change and are updated typically start with causal impacts from our environment, and when these are surprising they change our mind. If we think it is safe to cross the road, it is the looming truck that forces us to change our mind, and a fortunate adaptation it is that we do so. The words of other people may or may not manage to do the same, and once we guess that someone is advancing a scurrilous story about a good bishop's death just because he finds the idea rather funny, credibility flies out of the window. What then about the idea that anything less than the whole truth is only partially true? One might have in mind the difficulty of giving the whole story. If we are attempting to describe some complex human affair an incomplete narrative may strike us as slanted or one-sided, and therefore only partially true. We may reasonably worry that we need to hear the other side of the story, and then still other sides without limit. But only some cases, notably those involving such things as distribution of responsibility or blame, are like this. Everyday certainties do not require that we get the whole truth before we get any truth. For example, there are, no doubt, many ways in which my own armoury of beliefs falls short of ideal. I do not know that they all hang together as well as I would like, and there are certainly things I do not know and could learn. We none of us have infinite minds, and neither are we infallible. 
All the same, my belief that, for instance, my name is Simon is not 'partially true', whatever that might mean (Simone? Salmon?). It is wholly and incontrovertibly true, done and dusted. I know it to be true, and so do many other people. It is ground we can stand on. A professed doubt about it would be unreal, a mere paper doubt. We can call this kind of controlled coherence, when a belief system together with the principles whereby beliefs have a title to being in the system hang together, rounded or complete coherence, and in spite of Russell it has a lot going for it as a theory of truth. When rounded coherence fails, it leaves the believer no legitimate claim to have got hold of the truth. When a person apparently arbitrarily elects to believe some guru or some text or piece of evidence, and to ignore others, they lose any rounded coherence: this is the problem facing doctrinaire religious fundamentalism. If someone thinks the earth is around six thousand years old, we have to ask why they jettison huge amounts of science, history, and principles of weighing evidence that, inevitably, they normally rely upon, and ignore them in just this context and just this way. We are unlikely to get a satisfactory answer, and insofar as they have to duck and weave to escape outright incoherence and contradiction, their view loses any credibility. Finally, what about the fear that our rounded, coherent view of the world may be forever cut off from reality, the fear dramatised as the idea that, for all we know, we may be brains in vats? Philosophers have sometimes hoped to prove that there is not even a remote, bare, logical possibility that we are wrong in all the basic tenets of our world view. One simple argument is that we could never verify that this is so, and we should not countenance unverifiable possibilities. That sounds a little swift. 
A more popular and more devious argument derives from what is needed to understand a world view or 'conceptual scheme' in the first place. Donald Davidson (again) influentially argued that in order to identify what language a group is using we must deploy a 'principle of charity' supposing that by and large they believe what is true and desire what is good for them. Otherwise the process of interpretation could never get started. The argument continues, roughly, that the same principle applies if others are bent on interpreting us: hence no proper process of interpretation could ever describe us as wrong about our world, root and branch. And if no proper process could deliver this result we may assume that such a conclusion – that we have so misunderstood our world – must be false. It is safe to say that no such argument is uncontroversial: the possibility of a wholly Matrix-like existence in a virtual reality has too much imaginative bite to be exorcised so easily. So in spite of its virtues, there remain lingering doubts as to whether even rounded coherence is enough. We may still be troubled by the possibility of a large, roundly coherent body of belief that is, for all that, a giant fiction, an elaborate fairy story. One way of fending off this threat might be to bring in an aspect of truth that is so far missing: its connection with successful action.

Footnote a: Put less formally, this just means that Davidson abstracts away from the concrete situations of real people forming beliefs in the light of real pressures of experience, and it is this abstraction that does the damage.

**3** **PRAGMATISM**

When we are out of touch with the way things work we are set to fail in our actions, but when we know our way about we succeed. Success is a mark that we are getting things right; failure denotes that we have not done so.
The associations are not perfect: we can understand a mechanism but fail to use it appropriately, for example through carelessness, and, conversely, a false belief can bring about a successful action, for instance by luck. But overall, in countless ways, day in and day out, we can do what we want to do because we are familiar with the way things are. I would not be as good as I am at getting to my office if I was wrong about the layout of Cambridge; I would not succeed in pulling on my trousers if I was wrong about them having two legs. A coherence theorist need not regard a criterion of success in action as a competitor to his own view. Indeed, he can deny that it really brings in any new element. A rounded coherent system of beliefs will include many beliefs about our own successes – such as my belief that I have for many years got to my office successfully. But it is the success itself that causes my belief that I am successful, and that dissolves any temptation to scepticism. At the end of the twentieth century the intellectual fashion known as postmodernism took an ironical stance towards science, regarding it in an anthropological spirit as simply the ideology of a particular tribe of self-selecting people calling themselves physicists, chemists, engineers or biologists. The standpoint seemed to many to be a sophisticated response to science's claims to authority. It was tough-minded and knowing, and its proponents could flatter themselves as having seen through and exploded spurious claims to authority, as relativists and sceptics typically do. It was all very exciting – until one saw the same sophisticates using iPhones and GPS devices, relying on detergents and paints and aeroplanes, vaccinating their children and doing all the other things that the progress of science has enabled us to do. And then its glamour disappears, and instead it looks more than a little bonkers. 
You risk a certain amount of ridicule if you hope to undermine science's claims to deliver true, or largely true, theories while at the same time relying happily on so many things designed in the light of those theories. Knowing how things work is surely close to knowing how to use them for our own purposes, and if the sciences deliver this latter, it seems graceless to deny them the former. At the very least abundant success must show that they are on the right track: the proof of the pudding is in the eating. A fully rounded coherence requires a concord between what we believe and how we act. This is what the fanatic in his desert hideout, using a bank account accessed with a mobile phone to plot the overthrow of western civilisation and its entire works, conspicuously lacks. The connection between true belief and success is further cemented if we indulge in a little evolutionary thinking. Our big brains are notoriously expensive to run, so why has evolution burdened us with them? The natural answer is that they enable us to cope. By thinking, we learn how to overcome obstacles, invent new strategies, make use of new technologies. Nor is this confined to humans. Throughout nature cognition is at the service of action. Thus, in his observation of honeybees, Karl von Frisch was able to interpret aspects of bees' dances as signals telling of the presence of food, its distance and its direction, by correlating the behaviour with the successes bees have in directing other members of the hive to fly in the right way. If there had been no consequential behaviours, there would have been no interpretation of their movements. We surmise that the bees have the aim of collecting nectar, since this is how they live, and Frisch discovered the communication process enabling them to do so, with features of the signal directing features of their journey to food. 
The connection between cognition, as awareness of truths, and the role it has in enabling actions that fulfil our desires or needs is summarised in dicta voiced in various ways by various philosophers: reason is the slave of the passions (David Hume) or a belief is a preparation for action (Alexander Bain). We want true beliefs largely because we want to act successfully. This connection between truth and success in action was the watchword of the 'American pragmatists', a group of philosophers that emerged in the last twenty years or so of the nineteenth century, whose leading members included C. S. Peirce, William James and John Dewey. They do not form an indivisible whole, but the tenor of the approach is unmistakable:

> The true is the opposite of whatever is instable, of whatever is practically disappointing, of whatever is useless, of whatever is lying and unreliable, of whatever is unverifiable and unsupported, of whatever is inconsistent and contradictory, of whatever is artificial and eccentric, of whatever is unreal in the sense of being of no practical account...What wonder that its very name awakens loyal feeling!

James's verve as a writer often led him to somewhat scatter-shot formulations, as here, and we shall come upon some of the trouble this caused. C. S. Peirce had offered a more cautious formulation in a famous essay. Peirce was interested in the way in which scientists, although they may begin by holding different theories, or wanting to approach phenomena in different ways, will be led to converge: 'the progress of investigation carries them by a force outside of themselves to one and the same conclusion.' He compared this force to an operation of destiny:

> No modification of the point of view taken, no selection of other facts for study, no natural bent of mind even, can enable a man to escape the predestinate opinion. This great law is embodied in the conception of truth and reality.
> The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real. (pp. 56–7)

Thought of as a definition of truth, this is subject to damning criticism: there are surely truths that we may be condemned never to discover. The 'long run' of investigation may not be all that long, if catastrophe overtakes the entire scientific community, so there may be nothing that is fated to be ultimately agreed upon. We could slacken the description a little to meet this problem, substituting that a truth is something that _would_ be agreed by all who investigate, if they were to do so long enough, but making no claim that anybody will actually be able to do that. Still, even if we did arrive at the end of the long run of investigation, we would not know that we were there. The posit that we are finished would itself be unverifiable. But scholars agree that Peirce was not advancing the final quoted sentence as a definition. He was not interested in the reality of an endpoint. He was interested in the actual processes of scientific enquiry, and the ways in which convergence does take place, so that consensus also emerges and dissent fades into history. In short, he was interested in process rather than product: the actual procedures that winnow out the constraints on what we are to think, and it is when, but only when, those who investigate begin to converge that they can talk of themselves as being on the right track, closing in on the truth. Peirce was not interested in God's truth, truth at an imagined endpoint of enquiry, but in the actual laborious processes through which we are entitled to take ourselves as getting nearer to the truth. It is, inevitably, not quite as simple as it sounds. Peirce was himself a scientist, largely dealing with sciences of measurement and calculation where it is easy to believe that 'all who investigate' would converge on the same result.
But even in scientific contexts there are cases where this is less likely to happen. It is, for instance, a scientific question as to whether people can communicate with each other by extra-sensory perception, or can interfere with the physical properties of things by pure thought alone, or foretell the future. Myself, I think there is a truth about all these cases (people can't do these things) but it is not at all likely that 'all who investigate' will converge on this. Partly this is because some who investigate are motivated to do so in the first place by a love of the wonderful and the abnormal, and they may be very reluctant to have that fascination displaced by having to acknowledge the humdrum reality. And then in historical and political contexts the pressure of experience often has to fight with preconceived ideas and pre-existent needs and desires. I think it is true and almost undeniable that the United Kingdom went to war in Iraq in 2003 as a result of government lies, but I also think that there are investigators who will never accept that. Reality indeed presses us to believe what is true, but sad to say, when we want to believe what is in fact false, we have defences against the pressure. People dislike admitting that they have been taken in. The physicist Max Planck is credited with saying that 'truth does not triumph. Its opponents just die out.' As a remark about the actual historical process this may well be correct, and Planck himself, embroiled in the then controversial subject of quantum theory, no doubt had reason enough to say it. But Peirce has perhaps the last laugh, since today near enough 'all who investigate' or who are competent to hold an opinion accept quantum theory. It is, after all, certainly the most successful physical theory there has ever been. So perhaps truth does triumph. Even in historical and political contexts the passions that stand in the way of many people accepting what is true may eventually subside. 
Those who are embattled die out, and truth, we can hope, triumphs. But it is unwise to suppose that there is anything inevitable about the process. William James muddied the water in a different way, and much to Peirce's disgust. Pragmatism aims to tie the value of truth to its role in generating success in action. James identified this with giving the believer the 'fuller sum of satisfactions'. But this led him into trouble when he considered religious belief. For some people, believing that there is a deity or providence with special powers to judge them sympathetically gives them a 'fuller sum of satisfaction'. But it evidently does so regardless of whether there is such a being. We can draw the parallel with children in emotional trouble who take comfort from an imaginary friend. But James did not regard this as a refutation or even a difficulty: clinging to his equation, he argued that in this situation the belief is, after all, true. One of the passages in which he defends this is worth quoting in full. In it James is repudiating an earlier idea he had, which was that 'God' and 'matter' might be regarded as synonymous terms, so long as no differing future consequences were deducible from the two conceptions. He writes:

> The flaw was evident when, as a case analogous to that of a godless universe, I thought of what I called an 'automatic sweetheart,' meaning a soulless body which should be absolutely indistinguishable from a spiritually animated maiden, laughing, talking, blushing, nursing us, and performing all feminine offices as tactfully and sweetly as if a soul were in her. Would anyone regard her as a full equivalent? Certainly not, and why? Because, framed as we are, our egoism craves above all things inward sympathy and recognition, love and admiration... Pragmatically, then, belief in the automatic sweetheart would not _work_, and in point of fact no one treats it as a serious hypothesis. The godless universe would be exactly similar.
> Even if matter could do every outward thing that God does, the idea of it would not work as satisfactorily, because the chief call for a God on modern man's part is for a being who will inwardly recognize them and judge them sympathetically. Matter disappoints this craving of our ego, so God remains for most men the truer hypothesis, and indeed remains so for definite pragmatic reasons.

Even in 1909 James's conception of young women might have provoked a snort, and it seems not to have occurred to him to worry whether his own mind might be but the result of a parallel craving for sympathy and love by needy females. Nor did he reflect on the consolations a man such as himself might derive by thinking that male rivals for young ladies are themselves mindless automata and therefore unable to enjoy any triumphs they engineer. On the other hand, there is something intriguing about comparing belief in other minds with belief in the supernatural, although fortunately the first is hardwired in a way that the second is not. All of us except severe sufferers from autism see others as minded like ourselves in a way in which we do not see the natural environment as minded like ourselves. Personifying nature ('Gaia') is a fringe activity; believing in other people is not. But it is the whole idea that truth can arise from a need (or what he elsewhere calls a will) to believe that is theoretically so shocking. By including subjective personal satisfaction of the believer as the kind of success that marks a belief as true, James has destroyed the distinction between pleasurable wishful thinking and truth. It is this to which Peirce, rightly, objected. Indeed there are signs that James himself was not entirely comfortable. Why does he talk of the 'truer' hypothesis rather than saying that for 'most men' belief in God would be outright true? And what is he going to make of the little qualification 'for most men', even if his statistics were right?
Is belief in God to be true for some, but not for others? Or is his idea of God that of a being whose existence varies according to the bulk of the answers in human questionnaires? That way lie wastelands of subjectivity and relativism, not truth, facts and reality. James's false step suggests that it is going to be quite difficult to describe a connection between truth and success that enables the latter to give us a good picture of the former. Nor do we have to enter the abstract realms of religious belief, or even just belief in other minds (male or female), to foresee problems. As soon as we enter ordinary worldly issues that are remote enough we find places where wishful thinking and myth are as good as or better than truth. Many Scots are enamoured of their ancient wild, free, glamorous and gorgeous Highland past, blissfully ignoring that the glamour was larded on by Sir Walter Scott in the nineteenth century, and the gorgeous outfits invented at the same time by a couple of Polish tailors posing as old Scottish royalty. Being a matter of brutish and impoverished servitude, the actual past would not serve nearly so well. Friedrich Nietzsche actually thought it puzzling that, in spite of such allurements, we do care about remote and historical truth. It is a kind of self-imposed asceticism, for which it is difficult to provide a Darwinian function. The pragmatists knew they had to cope with this problem and they have a number of defences. One is to insist that the utility and success of true belief does not lie in private, subjective satisfaction, but in a whole array of capacities that truth gives us. 
In other passages James himself seems to recognise this, defending with great vehemence the objectivity of the pragmatist's conception of truth:

> Pent in, as the pragmatist more than anyone else sees himself to be, between the whole body of funded truths squeezed from the past and the coercions of the world of sense about him, who so well as he feels the immense pressure of objective control under which our minds perform their operations?

A false belief stands ready to wreck any number of projects. There is no limit to the ways in which a falsehood, by worming its way in among the holistic system of our beliefs, can bring us into sharp collision with the world. The belief that I am the most popular boy in the class may give me a sense of pleasure, but if I am not, there are many ways in which the truth can come out and disrupt my fantasy. The belief that I am a good rock climber may flatter my vanity, but if I am not it leaves me poised to wreck the expedition, endanger my friends or break my neck. Even the Scotsman's happy pride in his glorious history comes up against the 'coercion of the world of sense' when he finds that he cannot enjoy the frost and rain, leap about the heather or even tolerate the incessant midges in the way he supposes to be his birthright. A second defence is to insist upon the public and social dimension of belief. It is not _I alone_ who act upon a belief, but _we_ who do so. Beliefs are to be shared and evaluated publicly, and bitterness and disappointment await the vain and the boastful once it is borne in upon them that their own estimate of themselves is far from being shared. It may be pleasant for me to feel vain about my singing, but much more mortifying to find that nobody else can stand it. So we share evidence, advance considerations, and try to coordinate our views about things. We 'divide the labour' of enquiry, trusting those who do it to give us the real results of their investigations and experiments.
We try to come to one mind about things. And then James's enjoyment of his belief in God doesn't help. It doesn't work on me as a reason for believing in God: if anything it puts me on the alert, since the most atrocious falsehoods gain currency when people want to believe them or find it pleasant to do so. American pragmatism is not a rival to coherentism, but an elaboration of it, adding the dimension of success in action and enquiry. Together they deliver many valuable legacies. There is the stress on the interlocking nature of systems of belief. There is the mistrust of 'foundationalism' or the idea that our body of knowledge is built upon self-evident, undeniable principles and beliefs. Even our most sacred principles of inference, that previous philosophers would have dubbed 'a priori' or beyond revision or falsification by any experience whatsoever, began to be classed as merely central parts of our 'web of belief', less than sacrosanct if the going gets really tough. (This attitude was of course fuelled by such advances as the discovery of non-Euclidean geometry, and the upheaval that both theories of relativity gave to hitherto cherished ideas about the structure of space and time.) And as we have seen, even our most immediate and certain perceptual judgements are not 'given' to an unprepared mind. They are interpretations of the world, not its naked deliverance. So pragmatists and coherentists substitute a fallible and holistic picture, sometimes using the common metaphor of the web of belief: a loose structure that hangs together, but with each part testable and potentially vulnerable to alteration or dismissal in the light of the evolution of the whole system. They emphasise that each element in the web can be revisited and tried and tested in the light of the others. 
But they also stress the unreality of 'paper' doubts and the foolishness of trying to empty our minds or start anywhere else but 'in medias res', or in other words in the light of what stands firm at the moment. As Peirce put it, 'enquiry is not standing upon the bedrock of fact. It is walking upon a bog, and can only say, this ground seems to hold for the present. Here I will stay until it begins to give way.' It is time to take stock. The alert reader may already have noticed that truth itself has not featured very prominently in the last two sections. Although we were allegedly talking of 'the coherence theory of truth' and the 'pragmatist theory of truth', the discussion quickly slid towards such things as the nature of enquiry, the control of belief by experience, and the architecture of bodies of belief. The two directions in which we have been led have a pleasing symmetry. Talking of coherence we were concentrating on the _input_ side: the evidential basis for judgement, and its connection with experience and verification. Talking of success we were bringing in the _output_ side: the consequences of holding a belief, and using it as a 'preparation for action'. We can feel pleased enough with our cognitive abilities when there is a harmony here: when the friction and resistance we meet that causes us to change our minds, issues in actions that work better, in the light of our desires and goals. But none of this has issued in a tight definition of what truth is. Is it possible that we should not really be looking for such a thing? A radical suggestion along those lines would counsel jettisoning the very notion. 'Truth', it might be suggested, has too many liaisons with the very kinds of philosophy that coherence theories and pragmatism are trying to overthrow. It is supposed to be something divine and authoritative, attainable if at all only at the vanishing endpoint of enquiry. 
It consorts with absolutes and certainties, not with the polite and modest fallibilism that we have been fondly sketching. Perhaps it is inextricably tied to illusory notions of correspondence, and inadequate, simplistic conceptions of fact. If there is no 'given' independent of our interpretative habits and no a priori telling us with absolute authority how we are to conduct our inferences or make our theories, then where 'on this moonlit and dream-visited planet', as James put it, is truth and its authority to be found? Better then to forget about it altogether, and confine ourselves to operating the procedures and processes of refining and improving belief, however tentative and corrigible those processes may be. Such was the advice of the philosopher Richard Rorty, himself a pragmatist writing around a century after the trio we have mentioned. The message had an ancestry in some of the more flamboyant passages of Nietzsche, and found a ready audience in the climate of postmodern scepticism, which we have already touched upon. Rorty's is one way to go, and in Part II of this work we describe some areas where something like his scepticism can seem appealing. But his attack on the very idea of truth is unconvincing, and well before he came on the scene truth had been offered a defensive strategy that foils it. This brings us to the fourth view of truth that we shall consider: _deflationism_ or minimalism. It suggests that truth is, as it were, too _small_ a notion to deserve such scepticism. It has a proper backroom role but it does not denote an enemy worth fighting. It also does not denote anything that it is necessary to define. Rather, we can point to the interesting role it plays in our activities and our thinking.

**4** **DEFLATIONISM**

Deflationism starts with an observation that is again due to Frege. This is that it makes no difference whether we simply assert something or assert it prefacing the assertion with 'it is true that'.
Following usual logical practice we shall let the letter ' _p_ ' stand for an arbitrary assertion (or proposition, statement or belief). Then if 'T' stands for 'it is true that', we have it that there is no difference between _p_ and T _p_. Slightly more cautiously we should say that if there is a difference it would be one of emphasis, rather like shouting instead of speaking normally. But the important point is that there is no purely cognitive or rational difference. If you believe _p_ you believe T _p_ , if you prove _p_ you prove T _p,_ if you wonder whether _p_ you are wondering whether T _p,_ and so on across the board. We can christen this the transparency property of truth. The transparency property repeats itself: TT _p,_ it is true that it is true that _p_ , adds nothing to T _p_ , just as that adds nothing to _p._ This transparency ought to strike us as odd. If introducing a reference to truth is introducing a real new property, like 'it is interesting that' or 'it was said by the government that', you would expect a difference. It is one thing to assert that grass is green, but quite another to assert that it is interesting that grass is green or that the government said that grass is green. And if we lay it down that any proposal about the nature of truth must respect the transparency property, then it is going to be quite hard for theory to rise to the challenge. For example, if Peirce had been incautious enough to present 'fated to be agreed at the endpoint of enquiry' as a proposal about the meaning of truth, it would fall foul of the transparency property. For 'Henry VIII had flat feet' and 'it is fated to be agreed at the endpoint of enquiry that Henry VIII had flat feet' are not at all equivalent. It may be that Henry VIII did have flat feet but that no trace of this fact remains, so that enquiry would forever be silent about whether he did or not. 
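The transparency property can be written as a compact schema, using the text's 'T' for 'it is true that'. The contrasting operator below ('G', for 'the government said that') is our illustrative choice, standing in for any substantial operator.

```latex
% Transparency: for any assertion p, asserting Tp is asserting p.
Tp \leftrightarrow p

% Iteration collapses: 'it is true that it is true that p'
% adds nothing beyond p itself.
TTp \leftrightarrow Tp \leftrightarrow p

% Contrast with a substantial operator G = 'the government said that':
% the schema  Gp \leftrightarrow p  fails in general, which is why G,
% unlike T, reports a genuine further property of the assertion.
```

The Peirce example in the text fits the same mould: 'it is fated to be agreed at the endpoint of enquiry that p' is a substantial operator for which the equivalence with p fails, so it cannot serve as a definition of truth.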
The claim that he did so might be true even if it is forever unverifiable, so that for want of evidence it does not belong to any roundly coherent maximally complete historical narrative. We also get a new handle on the question of whether 'corresponds with the facts' is a useful, theoretically rich proposal about truth, or a mere long-winded synonym. If 'corresponds with' introduces a relation, and 'the facts' denote a substantial thing or structure or element to be found in the world, then 'grass is green' and 'the thought that grass is green corresponds with the facts' would seem to be different. The second says something about the assertion or thought that grass is green, whereas the first does not. It would be as if the second turns one's glance sideways, at the assertion itself, whereas the first directs your attention only towards the grass and its colour. And if we repeated it, saying that the thought that (the thought that grass is green corresponds with the facts) itself corresponds with the facts, we would be turned aside another time, getting yet further away from the grass and its colour. It looks as if we could only avoid these uncomfortable consequences by shrinking 'corresponds with the facts' down to mere synonymy with 'it is true that'. Then we might restore transparency, but only at the cost of losing any substantial theory about truth. The philosopher Peter Strawson summed this point up nicely. Suppose I say something, for example that whales descend from cows. And nodding sagely, you say 'that's true.' Strawson pointed out that this is rather like saying 'ditto'. You thereby ally yourself with me, in the sense that now if I am wrong, you are wrong as well. You do not adopt a different, sideways position by making a comment on my assertion, as you might if you said that it was surprising or uncertain. That kind of remark could be false even if what I said was correct (it might not be surprising, or uncertain). 
But just saying that it is true stands or falls precisely as does the original. It is more like saying 'I'll sign up to that.' Or you could just grunt assent, as we often do. Deflationism is the view about truth that celebrates its transparency. Its core is the idea that once you understand the transparency property you understand all you need about what truth is. Truth is a kind of dress version of the grunt of assent. There are, however, some bells and whistles to add to this core. If the notion of truth never added anything to what is given by making an assertion, then it would appear to be entirely redundant. Why have the extra words if you can say what you want without them? Indeed, in the early days of deflationist thinking, around 1930 or so, deflationism was known as the redundancy theory of truth. But this proved to be a misnomer, for it is not always so easy to get rid of reference to the notion. The difficult contexts are those in which we do not have an identified assertion or proposition about which we are talking. We may be referring only indirectly to something: 'John's guesses are always true'; 'What Sam said was true, although people were doubtful.' We may be generalising: 'every proposition is either true or not true'; 'truth is the goal of enquiry'; 'beliefs are supposed to be true'; 'Mary is always right about people.' How are we to understand such remarks as these? They are remarks which describe a class of sayings, wide or narrow, and the distribution of truth within that class. But they do not work by giving you the actual assertions in play, such as what John guessed or Sam said. So you cannot grunt assent to what John guessed or Sam said, because you are not told what that was. To take the first of these, if we knew the thought or proposition involved in John's guess, we could progress. Suppose John's guess was that Mary is three years older than Jane. 
We would then be able to say 'John's guess was that Mary is three years older than Jane, and it is true that Mary is three years older than Jane.' Then we could use the transparency property to get 'John's guess was that Mary is three years older than Jane, and Mary is three years older than Jane.' We get down to voicing the proposition ourselves, and there is no residual mention of truth required. But that depends on being able to identify the proposition involved, and we may not be able to do that. However, there is something we can do instead. We can offer a schema or template, knowing that there is some assertion that fills it in: 'John guessed that _p_ and it is true that _p_.' If we could enumerate all the guesses John might have made (say g1... gn) we could go one step further: John guessed that g1 and it is true that g1, and... and John guessed that gn and it is true that gn. But often we cannot do that because we do not know what John might have guessed. This is where a term like 'is true' comes into its own. We can sum up the indefinite and incomplete list we might start out on, along the lines 'John said that g1 and g1, and John said that g2 and g2... and so on,' simply by saying 'John's guesses are always true.' We wrap up the indefinite number of instances to which we might dig down, if we can, with a simple generalisation. A similar strategy helps the deflationist to defuse the other examples. It helps to have the notation 'not- _p_ ' to abbreviate the negation of _p_, as in 'it is not the case that dogs quack.' Then we can offer:

> Every proposition is true or not true = there are no cases where neither _p_ nor not- _p_.

The right-hand side of this equation makes no use of the notion of truth, but the claim is that to hold the one side to be true is just the same as to hold the other.
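The summarising role that 'is true' plays here can be mimicked in code. In the miniature sketch below (my own illustration, assuming classical two-valued logic; the example guesses are invented), propositions are modelled as zero-argument callables returning a boolean, the transparent truth predicate `T` adds nothing to a proposition, and `all(...)` does the wrapping-up work of 'John's guesses are always true':

```python
# Deflationary toy model: propositions as zero-argument callables
# returning a bool. These example "guesses" are invented for illustration.

def T(p):
    """'It is true that p' -- by transparency, just evaluate p itself."""
    return p()

johns_guesses = [
    lambda: 3 > 2,          # "three is greater than two"
    lambda: "a" in "cat",   # "the word 'cat' contains an 'a'"
]

# Transparency: asserting p and asserting T(p) always coincide.
assert all(T(p) == p() for p in johns_guesses)

# "John's guesses are always true": one generalisation summing up an
# open-ended list of instances we need not be able to enumerate.
johns_guesses_all_true = all(T(p) for p in johns_guesses)

# "Every proposition is true or not true" (classical logic assumed).
excluded_middle = all(T(p) or not T(p) for p in johns_guesses)

print(johns_guesses_all_true, excluded_middle)
```

Nothing in the sketch treats truth as a substantive property: `T` is the identity on propositions, and the generalisations are just quantifications over instances, which is the deflationist's point.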
Of course, this may still be so if we doubt both sides equally, and doubt would consist in trying to find a plausible case in which you do not want to assert either of these. Borderline cases of vague concepts are potential examples: 'he is neither rich, nor not rich – just nicely comfortable.' It is a controversial matter to come up with the best logic to cope with vagueness.

> Truth is the goal of enquiry = in all cases the goal of enquiry is to certify that _p_ only if _p_.

Notice that this does not say that the goal of enquiry is to certify that _p_ whenever _p._ That would imply that the goal of enquiry is to discover everything that is true, which is presumably not what is meant. What is meant is that your goal as you enquire is not to get the wrong (false) result. For example, and putting brackets in just for the sake of clarity, you investigate whether eating celery causes weight loss with the goal of (certifying that eating celery causes weight loss only if eating celery causes weight loss). Otherwise, you would have got it wrong.

> Beliefs are supposed to be true = in all cases it is right to believe that _p_ only if _p._

> Mary is always right about people = in all cases if Mary believes about someone that _p_, then _p._

We have to say 'in all cases' because we cannot enumerate all the things that Mary believes about people. If we could, we would simply write out a list of the things that Mary believes, and add that these are all the cases in question, and we would get the same effect. There is now an interesting twist to the story. Initially these generalisations and contexts in which we refer indirectly to some proposition or assertion (but cannot identify it as the proposition or assertion that...) looked to be an obstacle to the deflationary programme.
It worked, one might have thought, when we have the simple transparency property, and do not find any difference between asserting that _p_ and asserting that it is true that _p._ But now we can advance the idea that it is these very contexts that stop the idea of truth from being redundant. It is, as it were, just because it visits these places that our grunt of assent needs its full dress. It is precisely because we may believe that what Einstein said was true without being able to identify what he said that we cannot do without the term. Or rather, if we do attempt to do without it we get cumbersome paraphrases. This is summed up in the literature by saying that truth is a 'device for indirect reference' or a 'device for generalisation'. While I hold that 'everything Einstein said was true' I may still not know what to think about space-time. But when I learn that Einstein said that the curvature of space-time was responsible for gravity I must either sign up to the curvature of space-time being responsible for gravity, or backtrack on my alliance with Einstein. Truth may be a device for doing other things as well, but this is perhaps its central function. We started this part of the book with a hymn of praise to truth. Truth is often said to be a 'normative' notion, meaning one associated with norms or rules, correctness and incorrectness, and this is its title to divinity. Surprisingly, perhaps, it now seems that deflationism can do full justice to this. We put on our most serious face and intone that 'you ought to believe what is true' and what we mean is that across the spectrum it ought to be the case that if you believe that _p_ then _p_. Only thus are you a trustworthy and reliable informant, which is how people ought to be. 
'Truth is sacred' means something along the lines that, in general, if you do not have sufficient reason to believe that _p_ then you should not behave as if _p_ , for instance by asserting that _p_ or entering undertakings in sublime confidence that _p._ In particular, perhaps, you should neither lie nor even, more stringently, bullshit. To lie is to make assertions in such a way that 'he asserted that _p_ , although not- _p_ , and he intended to deceive you about that' has examples or instances that are true of you. To bullshit, according to the acclaimed account by Harry Frankfurt, is to make assertions in such a way that 'he said that _p_ although he had no regard for whether _p_ or whether not- _p_ ' is sometimes true of you. White lies, such as the many compliments made in socially appropriate circumstances ('It's lovely to see you!', 'My, you look younger every day!'), are nearer to bullshit than actual out-and-out lies since there is no realistic intent to deceive. They may be nearer still to jokes, ironic utterances or play-acting, where there is no assertion made but only the appearance of one. In the next section of the book we look at cases in which deflationism has the power to transform or undermine philosophical debates. However, one example of this is appropriate here, since it highlights an important and natural objection that might be raised. Discussing pragmatism, we talked of the way in which success in action is a good indicator of our being on the right track. If the things we design according to the best scientific theory work, then this suggests that the best scientific theory is either true or approximately true. Many philosophers of science have seized on this, and brandished it as the centrally important argument for 'scientific realism'. 
The idea is that we couldn't be so successful in, for instance, using what science tells us about electromagnetism to design motors, radios, communications satellites and all the other things that the modern world depends upon, if we had after all got the properties of electromagnetism wrong. If we had got them wrong, or if we were mistaken in supposing that there was such a phenomenon as electromagnetism, or if the whole story was a kind of fiction or merely a metaphor or picture, then it would be a miracle for it to work as well as it does. But we should not be satisfied with regarding it as a miracle. Hence we should be scientific realists. This is the 'no miracles' argument for scientific realism. The question is whether this casts some doubt on any deflationary theory of truth. The idea is that it does so because in the argument, the truth, or near-truth, of scientific theory is advanced as an explanation of its success. In a nutshell, its truth explains its success. But if this is so, then must not truth be a real, robust and explanatory property? You cannot say that the mice explain the hole in the cheese unless you believe in real live causally efficacious mice, capable of making holes in cheese. You cannot explain the behaviour of a magnet by citing the repulsive power of electromagnetic fields unless you believe in electromagnetic fields and their repulsive power. In other words, if a property or relation enters into explanations then that is a gold-standard sign of its reality, in the eyes of those advancing the explanations. But, the argument continues, the no-miracles demand means that we have to explain scientific success by the truth or near-truth of scientific theories. So 'truth or near-truth' is in good standing as a real, substantive, causally active property of things. But this is just what deflationism denies. It claims, as we saw above, that truth is just a device for certain kinds of abbreviation, not a good, robust, explanatory notion. 
So it stands refuted. Fortunately, it is not as easy as that. What is casually called 'the success of science' in this argument is obviously a conglomerate of many different successes with many different explanations. The success of optical theory derives from what that theory claims about light; the success of mining exploration derives from geological theory; the success of electronic engineering derives from what quantum theory tells us, and so on. And when we disaggregate these successes and regard them piecemeal, we find that deflationism not only survives but actually accrues yet further credit. In effect, we have already had practice in seeing why this is so. What we are faced with, again, are generalisations, and if we dig down to instances of the generalisations we can do without the mention of truth. For instance, consider:

> The mining company found the coal because geology gave them the truth.

If this is right there is going to be an instance of geology saying something which was true and which explains why they found the coal. For example, something like: the mining company found the coal because geology said that it would lie above the millstone grit, and that was true.a We already know how to parse this without using the notion of truth: the mining company found the coal because geology said it would lie above the millstone grit, and the coal did lie above the millstone grit. We may not know exactly what geology said, but all that is required is that there is some definite statement like this. Similarly for all the other examples. The rocket hit the asteroid because science got the forces right. Science will have said something complicated resulting in a magnitude for the forces; which will reduce to saying that the rocket hit the asteroid because science said that F = xyz, and F = xyz.
Again, we may not know what the calculation was, but if the explanation is good there is going to be some specification along these lines, which underlies and identifies the particular reason that the rocket hit the asteroid. So once more there is a cunning twist that turns the objection into a point in favour of deflationism. Science rightly prides itself on providing the explanations of why things happen, including why our practices turn out to be successful when we follow its recipes and formulae. But science does not deal with the notion of truth. Physics deals with such things as force, mass, acceleration and charge. Medicine deals with drugs and their effects, or such things as surgical interventions and their effects. Geology deals with rocks and their distribution. And the stripped-down explanations stick with the same things and properties that the sciences deal with. In other words, just by being stripped down, they are giving science's own explanations of the success of particular recipes and directions for doing what we want to do. It is good to look through the presence of truth in the explanations. Truth is only present as deflationists say – as a device for pointing in the general direction in which the real explanation is going to be found. It is rather like hearing someone refer to something, although you do not know what it is. Suppose that, yourself unable to see a tennis match, you hear a nearby spectator say 'I like the way Federer does that.' Frustratingly, you do not know what particular action of Federer's the spectator admired. But you know that there was one. So it is as if you are put in a waiting state or an incomplete state of information. It would only be completed if you learn what it was that Federer did. And similarly 'the rocket hit the asteroid because the physicists got the forces right' puts you into a state of suspension. 
You do not know what the physicists said and did but you know that there is something, and because of it the rocket hit the asteroid. We are told that when Jesus said that He testified to the truth, Pontius Pilate answered 'What is truth?'b The deflationist answer to Pilate's question could be put rather pithily as 'You tell me.' Not, of course, you tell me what truth is, but rather you tell me which alleged truth – that is, which belief, assertion or judgement – you are interested in. And then we can tell you what you need to know. For example, if the question is whether the man in front of him pretends to be a king, then the answer would be that this is true if, and only if, the man in front of him pretends to be a king. And it was Pilate's job to judge that, not to start beating around in distant philosophical thickets. To be fair, however, the context may be that Jesus was propounding theological statements, and since it can be hard to see what those are about, or what would confirm or disconfirm them, Pilate's question may have had an exasperated edge, akin to 'who can possibly know what this man is trying to say?' And if you do not know what belief or judgement someone is making, then indeed you cannot know what its truth might consist in either. The reader may detect that I have considerable sympathies with deflationism. Yet many philosophers have supposed that there is something lacking in it. The fare it offers is too thin to be fully satisfying. The standard complaint is that truth may be transparent in the way that Frege and Wittgenstein thought, but perhaps this is only because it has already been smuggled into the very notion of a statement, belief or assertion. So instead of the equivalence between T _p_ and _p_ implying that there is nothing much more to say about truth, perhaps it implies that there is a whole lot more to say to unravel the nature of assertion or belief itself.
To make an assertion is to undertake a commitment, or perhaps a number of commitments. It will imply a vulnerability: having asserted that _p_ , a person may be shown to be wrong, to have erred, if evidence against _p_ mounts and _p_ is falsified. Having asserted that _p_ , a person can be held to the implications that follow from _p_ , so the vulnerability does not end with _p._ For example, if I claim that someone has a daughter, I am not only caught out if she has no daughter, but also if she has no children at all. The content of the assertion determines the range of commitments with which one is saddled. These commitments are integral to the nature of assertion: it would not be assertion if the commitments are absent, but could, for instance, be joking. The reason that we find ourselves vulnerable to criticism is of course clear from the pragmatists' connection between truth and success in action. Someone who informs people of what is in fact false not only does an injury to their cognitive awareness of the world, but exposes them to an increased risk of behaving inappropriately, failing in their projects, committing injustice, or damaging themselves. There is no limit to the size of catastrophe that acting on a false belief can bring about. There are other norms or conventions governing assertion. A person may present what is in fact a guess or a hunch or stab in the dark as if it is something she knows to be true, thereby misleading an audience as to her authority to pronounce on the matter. Some philosophers have indeed suggested that it is wrong to assert anything unless you actually know it to be true. That may be an ideal of a kind, and it is usually appropriate to signal when one's evidence is not sufficient to warrant a claim to know what one is saying. But it imposes an unreasonably high standard of purity: there are contexts in which we assert things when it is pretty obvious that we do not know them to be true. 
Before the game the fan asserts that his team will win, and although the judicious audience will maintain reservations, they will not criticise the fan for saying it. In religious contexts it may be meritorious to assert as articles of faith things about which nobody knows the truth. Perhaps less commonly, and usually less reprehensibly, someone might present only tentatively something of which she is justifiably certain. This may be because of a forgivable modesty, and in any event it is usually less harmful to impart unwarranted doubt than to impart unwarranted certainty. Other social criticisms may come into view neither through what is said, nor the confidence with which it is said, but through other, indirect routes, such as the implications of having said it, or having said it and said nothing else. These were first explored by the philosopher H. P. Grice, who called these indirect implications 'implicatures'. For example, if I am asked about an academic colleague's merits and reply that people tell me he loves his dogs, what I say does not itself imply either that he is good or bad at his academic work. But the fact that I replied this way, and did not go on to add favourable comment of a more relevant kind, certainly implies that I don't think very much of his academic merits. We can convey attitudes, and thereby undertake commitments, by silence, by choice of words, and by selection from what might have been said but was not. That is, I would be vulnerable to criticism, and in that sense deemed to have been committed to something false or at least inappropriate, if as well as being said to love his dogs my colleague was a Nobel prizewinner, and hence unmistakably at the top of the academic tree. Implicatures may, however, be more deniable than outright falsehoods: it is often supposed to be less reprehensible to mislead by insinuating what is false, than it is to lie outright. 
It is not all that clear why this is so, but perhaps it reflects the idea that if one misleads someone, that person bears some responsibility for coming to believe what is false, whereas if one gives the lie directly, that is entirely where the responsibility resides. The trusting audience is then a victim, not responsible for their own deception. The other aspect of assertion that bears on deflationism is that of how we get to understand what people do as the making of assertion in the first place. It is one thing to make noises or to scratch inscriptions, but another thing for those noises or inscriptions to be rightly interpretable as vehicles of thought or belief. There need to be practices of interpretation that are familiar to the agent and the audience, or conventions to which they are parties. Just as a piece of paper must be embedded in an established social practice in order to count as a banknote, so an inscription or noise must be similarly entrenched to be a vehicle of thought and belief. And just as the value of a note may change as the economy shifts, so the meaning to be attached to a noise or an inscription can change as social practice changes. This is not the place to explore the whole subject of linguistics, but it is important to bear in mind that the amazing complexities of thought and belief that language enables do not come from nowhere. The interpretation of any language is a skill that needed to be learned, in one's early years if it is the mother tongue, or with more pain and effort if it is not.

Footnotes

a. In northern coalfields in the UK, the millstone grit is a layer of rock older than the coal-bearing seams.

b. Often elaborated to '"What is truth?" asked jesting Pilate, and would not stay for an answer.' The quotation is apparently due to Francis Bacon.
However, in John 18:38, which is the original source, there is no indication that Pilate was jesting, and if he did not stay for an answer this was only because he was already himself convinced of Jesus's innocence, and went out to communicate that verdict.

**5**

**TARSKI AND THE SEMANTIC THEORY OF TRUTH**

This brings us to a philosophical and logical project that ought to be touched upon before we leave this part of the book. Perhaps the most famous name among theorists who have studied truth is that of the logician Alfred Tarski, whose work issued in what was called a 'semantic theory of truth' in 1933. Tarski's aim was to provide a theory that would give a 'formally correct definition' of the true sentences of a language, L, that is under logical investigation (the object language). This list would be given in another language (the metalanguage) since problems arise if one language tries to provide the definition for itself.a If L is a simple enough language, and only capable of forming a finite number of sentences, the definition could be provided by a list of so-called T-sentences for each sentence of the object language. A T-sentence would have the form of naming or describing a particular sentence of L, and then saying, in the metalanguage, under what circumstances that sentence is true. For instance if L was German, and the metalanguage English, an example of a T-sentence would be: '"Schnee ist weiss" is true in German if and only if snow is white.' If German were simple enough to be capable of forming only half a dozen sentences, then half a dozen such T-sentences, one for each German sentence, would provide a formally correct definition of truth in German. Of course, even in the quite restricted formal languages that interested logicians at the time, things are not so simple.
Languages have a 'recursive' syntax, meaning that operations can be applied to simple sentences to produce more complex sentences, and then repeated indefinitely to give yet more complex sentences. So the theory Tarski wants cannot be given by a simple list, and it was no trivial task to find such a theory even for the simplified forms of language that logicians were happiest about. These difficulties, and the machinery necessary for overcoming them, can be found in many logic texts. From our point of view, however, the question is how to relate what Tarski was doing to the philosophical enquiries into truth that we have been describing. One suggestion, that Tarski himself flirted with, was that he was providing a scientific and mathematically up-to-date formulation of a correspondence theory of truth. But that is not right at all. A T-sentence says, in one language, under what conditions a sentence in another language is true. But identifying this condition is not at all the same thing as relating the original sentence to something worldly, like a structure or state of affairs or a fact, as a correspondence theory would have it. According to our specimen T-sentence, we learn that to judge that 'Schnee ist weiss' is true in German we must judge that snow is white. But it does not tell us anything about what it is to judge this, and what if anything it has to do with any of correspondence, coherence or success in action. This is not at all to criticise what Tarski was doing, and his work has extended to embrace and enrich many formal studies. He was right that insofar as we cannot provide a T-sentence for each sentence of an object language we do not understand the language, and if we cannot provide a description of the way the sentence is built up then we do not understand the structure of the language either.
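The recursive machinery can be illustrated with a toy object language. The sketch below is my own construction, not Tarski's formalism (his target languages, with quantifiers and the satisfaction relation, are far richer): sentences are either invented atoms or are built up by 'not' and 'and', and truth-in-L is defined by recursion on that syntax, so that finitely many clauses cover infinitely many sentences:

```python
# Toy object language L, given recursively:
#   atoms: "snow-is-white", "grass-is-red"   (invented examples)
#   if A and B are sentences, so are ("not", A) and ("and", A, B)

# The metalanguage (here, Python) says which atoms are true.
ATOMS = {"snow-is-white": True, "grass-is-red": False}

def true_in_L(s):
    """Truth-in-L, defined by recursion on the syntax of s."""
    if isinstance(s, str):        # atomic sentence: look it up
        return ATOMS[s]
    if s[0] == "not":             # ("not", A) is true iff A is not true
        return not true_in_L(s[1])
    if s[0] == "and":             # ("and", A, B) is true iff both are
        return true_in_L(s[1]) and true_in_L(s[2])
    raise ValueError("not a sentence of L")

# No finite list of T-sentences would do: the single clause for "and"
# covers sentences of arbitrary depth.
print(true_in_L(("and", "snow-is-white", ("not", "grass-is-red"))))
```

Each evaluation of `true_in_L` on a sentence amounts to a T-sentence stated in the metalanguage; the recursion is what lets a finite definition apply to a whole infinite language.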
But he was silent about the basis of cognition in experience and in causal correlations with things, and he was silent about the conventions, rules and complex behaviour that identify a group as speaking a language in the first place. Perhaps the most telling difference is that a philosophical view of truth aspires to say something that applies to any number of languages: that human beings all make assertions, all have concepts grounded in experience, all do better by knowing the truth than by ignoring it, and so on. Yet a Tarskian definition of truth in German would be very different from one for truth in French because the words, the structures, and the resulting sentences are all different. In short, it is much less misleading to say that Tarski was interested in a formal account of what is needed to define a language, rather than what is needed to define truth.b The account is one that enables an interpreter to say, in her own familiar language – the metalanguage she uses – under what conditions any sentence the object language is capable of forming is true. But it is no more than that. It does not, by itself, give any insight into the skills or conventions, the experience or the cognitive structures, that the interpreter herself must possess. She must have such skills adequate to the task, so the resources of the object language cannot outrun the resources of her own language; if they did, she would be unable to say what they mean. We can see how this induces a certain pessimism about the prospects for a real theory of semantics, or the ways in which words relate to the world. Rather than showing this, Tarskian T-sentences show us how words in some languages relate to things we say in our own language. That 'Schnee ist weiss' in German is true if and only if snow is white tells me what I have to judge in order to determine that the German sentence is true. But it is silent about whatever relations I must bear to the world in order to judge it. 
That is left as business for another day. Fortunately, it is business we have been pursuing throughout this part of the book, in our wrestlings with correspondence, coherence, pragmatism and deflationism. It is now time to sum up where we have arrived.

Footnotes

a. The kind of problem is illustrated by the Liar Paradox, in which a sentence appears to say of itself that it is false, in which case if it is true it is false, and if it is false it is true. There are many versions of the paradox that resist simple diagnosis and solution.

b. This is in effect the use Donald Davidson made of Tarski's work.

**6**

**SUMMARY OF PART I**

Earlier, we saw how C. S. Peirce was interested in the actual sifting processes whereby enquiry moves us towards settling doubt and fixing belief. William James similarly described himself as following the great physicist James Clerk Maxwell: 'When people put him off with vague verbal accounts of any phenomenon, he would interrupt them impatiently by saying, "Yes: but I want you to tell me the _particular go_ of it".' The 'particular go' of truth is found not only in men's conversations, but in their curiosity, their enquiries, their disagreements and doubts, and their ways of settling issues as they arise. It is a question of the processes intended to put doubt to rest, to result in the fixation of belief. These questions may belong to many kinds of subject matter – empirical, theoretical, mathematical, moral, aesthetic, legal, religious – and in each domain there should be procedures for rectifying doubt or ignorance. Asking for the 'particular go' of truth, William James said that 'true ideas are those that we can assimilate, validate, corroborate and verify.' We need to look at these practices, and correlated practices of rejection, criticism and refutation. To revert to Bentham's saying, treating truth in the abstract may be stretching up to reach the stars, but the actual practices of real people are the flowers at our feet.
This introduces a sea change in philosophy, or, since it is not fully appreciated even today, perhaps it is better to say that it _should_ introduce a sea change in philosophy. We might suppose that to understand legitimate, or authoritative, enquiry in any area we must first have a good grasp of what counts as fact in that area. Legitimate enquiry would then be certified as whatever method increases the probability that its results accord with the facts. But as we have already seen, facts are tricky customers. Facts are not things that can be pinned down, and in many areas we tend to flounder when we try to imagine them. Do we have a firm grasp of what counts as fact in aesthetics, religion, morals, history, or even in mathematics or science? What James and Peirce are therefore offering is a reversal of this priority. Instead of facts first, with method analysed in terms of its contribution to fact, we look at the methods first, and then describe fact in terms of the ideal endpoint (which we may never reach) of satisfactory applications of method. The question at the forefront of our minds should not be 'what is aesthetic (etc.) fact?', but 'what makes for a good aesthetic (etc.) enquiry?' The reversal is parallel to one that impresses many philosophers who think about ethics. One way of proceeding, parallel to that of 'facts first', is to sketch a conception of the human good, or _summum bonum_ , and then think of personal or social virtue in terms of its contribution to this desirable end. The most familiar version of this is utilitarianism, which uses an aggregate of human happiness to measure the goodness of any state of affairs. A different way, suggested by Aristotle, is 'virtue ethics'. This asks us first to think of the qualities that enable people to live well, and then to think of the human good in terms of lives spent exhibiting those qualities. 
It is fair to say that opinion is fairly divided between these two priorities, and it may be that although each has its merits we should follow neither of them without qualification. But it is vital that we recognise and come to terms with the Peirce–James alternative. Instead of 'facts first' we may do better if we think of 'enquiry first', with the notion of fact modestly waiting to be invited to the feast afterwards. This is the reversal that guides Part II of this book. In it we take up some of the issues that arise, and some of the gains that can be made, if we follow the advice of James and Peirce, and ask for the 'particular go' of truth-seeking activities in different areas.

**PART II**

**VARIETIES OF ENQUIRY**

**7**

**TRUTHS OF TASTE; TRUTH IN ART**

It may seem strange to start with this domain. Questions of taste are often thought not to admit of truth or falsity at all. People have their own opinions. A salient fact about taste and preferences in matters of taste is that people differ. In this domain, and others we'll come to, there is a variation of subjective responses, and this makes it awkward to defend any idea of the one true taste. Actually, variations of subjective taste and preference would not matter if we could simply see that some tastes are inferior to others, thereby recapturing some sense of authority and truth. But this too may be hard to defend. The old maxim _de gustibus non est disputandum_ – tastes are not to be disputed – is practically a cliché. The thought is pretty much cemented into classical economics when people's preferences are simply taken as they are. None are to be discounted, for they are all immune to rational pressure. Some may be strange, but unless they trespass on the rightful space of other people, in which case moral considerations arise, none are better or worse for that. But this is exactly what makes aesthetics an appropriate starting point for applying the discussion we have had so far.
If truth can hold its head up in this context, it can surely find a home in others, where it is of more obvious importance to get things right, and to persuade others to do so. If we follow Peirce's maxim and begin with men and their conversation we find that things are not quite so straightforward as the old maxim implies. There exist, after all, practices of criticism. There are professional music critics, literary critics, drama critics, wine critics, food critics and so on. People listen to them, and often respect them, even if they sometimes disagree with them. We may be inclined to scoff: perhaps the critics are distributing arbitrary badges of fashion that their audiences are snobbishly anxious to display (this was roughly the view of Jean-Jacques Rousseau). But before we scoff it may pay to look a little closer. Fortunately, critics themselves have provided ample commentary on their own procedures. Henry James, for instance, a prolific literary critic as well as a novelist, characterised himself not as 'the narrow lawgiver or the rigid censor', but as 'the student, the inquirer, the observer, the interpreter, the active, indefatigable commentator, whose constant aim was to arrive at justness of characterization'. To take first the negative claim, James gives a splendid rebuttal of the idea that it is appropriate for critics to 'lay down the law' in one of his early essays, 'Italy Revisited'. He has bought a copy of _Mornings in Florence_, by the fierce and dogmatic Victorian critic John Ruskin, and is eventually moved to hilarity:

> I had really been enjoying the good old city of Florence; but I now learned from Mr. Ruskin that this was a scandalous waste of charity. I should have gone about with an imprecation on my lips, I should have worn a face three yards long...
> Nothing in fact is more comical than the familiar asperity of the author's style and the pedagogic fashion in which he pushes and pulls his unhappy pupils about, jerking their heads toward this, rapping their knuckles for that, sending them to stand in corners and giving them Scripture texts to copy.

James and his friend eventually agree that you can read a hundred pages of 'this sort of thing' without ever dreaming that Ruskin is talking about art:

> There can be no greater want of tact in dealing with those things with which men attempt to ornament life than to be perpetually talking about 'error'. A truce to all rigidities is the law of the place; the only thing that is absolute there is sensible charm. The grim old bearer of the scales excuses herself; she feels that this is not her province. Differences here are not iniquity and righteousness; they are simply variations of temperament and of point of view. We are not under theological government.

In contrast, James presents his work as the student, the interpreter, and the active indefatigable commentator as being a matter of opening 'the gateway to appreciation and appreciation is the gateway to enjoyment.' In a similar vein T. S. Eliot says of the practice of literary criticism:

> Here, one would suppose, was a place for quiet co-operative labour. The critic, one would suppose, if he is to justify his existence, should endeavour to discipline his personal prejudices and cranks – tares to which we are all subject – and compose his differences with as many of his fellows as possible in the common pursuit of true judgement.

Eliot talks unblushingly of true judgement, and James's talk of the search for a just characterisation of a work helps us to parse this. James is implying that unjust, hasty or careless characterisation is a trap to avoid, and in this he is surely right. The practised eye or ear is sensitive to differences and nuances that the novice misses.
An increasing acquaintance with any art form enables us to 'place' it in its tradition, appreciate the problems the artist faced and perhaps solved, bring in comparisons and contexts, and in other words think and talk more intelligently about what we read, or look at, or listen to, or even taste. And this in turn increases our enjoyment, as James promises. If at first we hear a string of notes only as noise, afterwards we may hear melody, counterpoint, key shifts and such intangible features as pathos, resignation, hope, excitement or peace. Good critics are those whom we can trust in the exercise of increasing understanding. To play this role they need a number of qualifications. Obviously they should have experienced the work, for in such matters one thing we cannot do is pass a verdict on something of which we have no experience at all, such as a painting we have never seen, a piece of music we have never heard, or a book we have never read. A critic needs to have been in the right circumstances: not a hot, noisy theatre, not distracted by other concerns, but able to give whatever it is their full attention. They need a delicate taste, refined by practice. They need to have comparisons to hand so that they can know how this work stands among others in the same genre. They need to be free from prejudice, or Eliot's noxious weeds. We would not usually trust a person's verdict on the work of an avowed enemy or, for that matter, a member of their immediate family (if their child was acting in a play their judgement that it was exquisite might not further the aim of shared judgement). At the very least we would need to be reassured that critics have put such things out of their minds, before we let them hold our hands. In noting these as virtues of the critic we are following in the footsteps of Hume. 
After expounding some of the deficiencies we commonly labour under as we come to try to appreciate works of art, Hume describes what we need in order to avoid them:

> Under some or other of these imperfections, the generality of men labour; and hence a true judge in the finer arts is observed, even during the most polished ages, to be so rare a character: Strong sense, united to delicate sentiment, improved by practice, perfected by comparison, and cleared of all prejudice, can alone entitle critics to this valuable character; and the joint verdict of such, wherever they are to be found, is the true standard of taste and beauty.

Hume is not over-optimistic about finding such critics. Nor does he think that the 'joint verdict' is always forthcoming. There are differences of taste and sentiment that are blameless on both sides, and that cause a divergence of taste. He gives a charming example:

> A young man, whose passions are warm, will be more sensibly touched with amorous and tender images, than a man more advanced in years, who takes pleasure in wise, philosophical reflections concerning the conduct of life and moderation of the passions. At twenty, _Ovid_ may be the favourite author; _Horace_ at forty; and perhaps _Tacitus_ at fifty. Vainly would we, in such cases, endeavour to enter into the sentiments of others, and divest ourselves of those propensities, which are natural to us. We choose our favourite author as we do our friend, from a conformity of humour and disposition. Mirth or passion, sentiment or reflection; whichever of these most predominates in our temper, it gives us a peculiar sympathy with the writer who resembles us.

Nevertheless, we can to some extent put aside our subjective or personal preferences, and take up the enterprise of the 'common pursuit'. William James, we may remember, talked of opinions that _we_ can assimilate, validate, corroborate and verify, and this leaves it interestingly open how far the _we_ extends.
It may not matter to the processes of extracting whatever enjoyments the arts may give us, if the _we_ remains relative to place and time. The fact that the 'grim old bearer of the scales', the figure of justice, is not invited means in effect that we do not have to enter into disputes with people who do not belong to the same milieu. We can raise our hats and pass them by politely. Is this enough to fend off the cynic, sceptic or relativist who insisted that _de gustibus..._ there is no real pursuit of true judgement in these areas, and no such thing as just appreciation, but only self-deception, fraud or vanity? We know that the cynic can point to variations of subjectivity and revolutions of taste, and in some areas, such as fashion, revolutions may follow one another so quickly that there seems no prospect of a 'joint verdict' of any two fashionistas from one season to the next (and there are no doubt commercial and perhaps generational reasons for this, given that the young want to differentiate themselves from the preceding cohort). But he can hardly deny that there do exist qualifications, and disqualifications. Having read Tolstoy in English I can say some things about his work, but I cannot comment on the beauty of the original Russian, since I have no acquaintance with the language. Furthermore, we all know of people who have more delicate and practised capacities than ourselves. I heard recently of the death of a chief technician for Steinway, who was able to tell from listening to a few bars not only which famous pianist was playing but also which individual instrument they were using. I would not offer my opinion against his as to the qualities of a piano or a performance. We are often grateful enough to have things we would otherwise have missed pointed out to us. And it increases enjoyment, as Henry James said, to join in the pursuit of a shared judgement, and to find that our own enthusiasms and aversions are shared by others. 
Still, the cynic may persist, however enjoyable these activities may be, and however pleasant it is to come to one mind with others on the qualities of a work and the values to attach to them, is there any reason to think that you are getting nearer to some mysterious aesthetic _truth_? Fortunately, our discussion so far gives us a way to deal with this. First of all, deflationism comes to the rescue. I believe that Ludwig van Beethoven is a more imaginative and wider-ranging composer than Leonard Bernstein, good though Bernstein is. So, I believe that it is true that Ludwig van Beethoven is a more imaginative and wider-ranging composer than Leonard Bernstein. If I voice this opinion and you agree, you can signal this using many words – 'I agree,' 'that goes for me too,' 'that's right,' 'sure' – or you can grunt assent, or without extra theoretical strain you can say 'that's true.' But we can also say more than this. The description we have given of the just critic, the person of some authority, to whom we might be pleased to defer, gives us an idea of what these processes come to. The good critic can lead James's process of assimilation and corroboration, and this is what validation and verification come to, in this area. We assimilate an opinion when we come to share it, we corroborate it when we come across things that bear it out, and we validate and verify an opinion when we find enough about the subject matter to suspect that it is robust enough to withstand any questions we can think of asking. A good symptom of this, of course, is that it stands the test of time. If generations have found much to admire and astonish them in Shakespeare, Beethoven, Titian or Homer, we can suppose that a critic who disagrees is revealing more about himself than about these immortals. 
In Peirce's terms, their merits are 'fated to be ultimately agreed to by all who investigate' – where investigation includes paying due attention to those of 'strong sense, united to delicate sentiment, improved by practice, perfected by comparison, and cleared of all prejudice'. This application of our discussions in Part I also enables us to understand better the point of Peirce's advice not to start with vagabond ideas that have no human habitation, but with men and their conversation (equally, we can describe ourselves not as reaching for the stars but as paying attention to the flowers beneath our feet). If we thought of 'aesthetic truth' as some kind of abstraction possibly lying apart from and beyond all human responses, beyond all our satisfactions and enjoyments, a bloodless property distributed who knows how among the things in our universe, then it would be hard to see the point of coming to discriminate between those things that do and do not possess it, and impossible to imagine a method for doing so, given that we can start from nowhere but our own human natures and all the cultural and social contexts that shape them. Scepticism about the notion would be an entirely natural response to this 'realist' or 'rationalist' metaphysic. But instead we have looked at art in terms of our enjoyments and understandings, and in terms especially of the virtues that entitle anyone to enter into an enquiry or to lead one. At no point are we likely to have exhausted such enquiry – we have a modest (and virtuous) feeling that even after we have done our best there may be aspects of things we haven't fully appreciated. There may be more to be said. But if we have been careful and imaginative and profited from the best opinion of others in the common pursuit, we can be reasonably confident that we have done justice to the topic. We can advance our opinions, which also means we can judge them, perhaps provisionally and in cognisance of our own fallibility, as true. 
We must remember that a tentative judgement of truth is not the same as a dogmatic assertion of certainty; we can heed Henry James's warning against inviting the grim old bearer of the scales, the figure of justice, into our presence. If we do we can even admit a grain of truth in the saying _de gustibus_... It may be right that in matters of taste dispute is out of place. But this is not because any opinion is as good as any other. It is because it is collaboration and imaginative discrimination that wins the day, not dispute. Rather than argue someone into agreement, we hope to use persuasion, put things in different lights, to remind them of similar things that have delighted them, or to excite their imagination. It is not a matter of syllogisms and proofs, but of leading another to assimilate whatever response we find appropriate, and this will be a process dependent upon patience and concern, like any process of give and take in which education and learning go hand in hand. As James says, we are not under theological government. In aesthetic matters we are not so clearly likely to get our comeuppance if we are careless, or ill educated, or inattentive or naturally insensitive, as we are if we have the corresponding blind spots in empirical matters. Empirical ignorance implies an inability to do many things; aesthetic blindness seems less important. I am careful to say that this seems to be so: a case could be made that blindness to the ugliness of surroundings, the superficiality and sentimentality of popular entertainments, and the tasteless, indecorous or purely witless diversions that bombard us, is as great an obstacle to decent living as ignorance in any other direction. One can become a campaigner. But often aesthetic conversation will seem less urgent than others, and aesthetic truth less compulsory than more mundane truths. So far in this section we have considered the practices of criticism. What about truth in the practice of art itself? 
There is a long tradition of supposing that the artist sees things especially truly. With an intensity of discrimination and perhaps feeling he or she perceives something in things that others miss, and, insofar as the art is successful, manages to communicate to others what it is that he or she has seen. In his book _The Principles of Art_ R. G. Collingwood, the most impressive philosopher of art of the twentieth century, carefully distinguished between practices aimed at a specific, foreseen end, and art proper. The former include entertainment, which aims to arouse particular enjoyable feelings in an audience, such as excitement or amusement, and magic, which aims to express and perhaps exorcise specific feelings, such as terror or impotence in the face of the ills that afflict people. This is craft, not art, and the practitioners are craftsmen who know exactly what they want to achieve and set about achieving it. A different false view thinks of the artist as possessed of particular feelings that he then seeks to arouse in others. The problem with this is that it assimilates art to craft again. There is a specific aim, to arouse an emotion in others, and the art is the means to achieve it. But, according to Collingwood, this is wrong. Rather, the point of the expression must be to make clear to ourselves, as well as potentially to others, exactly what we feel. The expression is addressed primarily to ourselves. This is why we associate art with increase in understanding. I can only understand how I feel if I can express, or recognise an expression of, the feeling. If we listen to a Schubert song, we not only learn what Schubert wanted us to feel about lost love or hope or desolation, but what _can_ be felt about it, or what _is to be_ felt about it. The expression lifts a weight, an oppression we feel while our feelings remain inchoate or incommunicable. However, Collingwood did not rest content with describing art in terms of the expression of emotion.
There was in addition the imaginative activity that the artist must have brought to the work, and what the spectator or auditor or reader can take out of it is an 'imagined experience of total activity' – a phrase which Collingwood tried hard to explain, with doubtful success. It refers to something like the sense of life opening up or revealing itself to us through great music, art or literature.a The difficulty remains that if we think that some kind of truth is thereby revealed to us, we face the problem that it cannot be specified except by listening, looking, or reading the work itself; art resists encapsulation or paraphrase. Perhaps it is better to admit that rather than revealing ineffable truths to us, works of art, like experiences of the beautiful or the sublime in nature, leave us strangely refreshed. If we are in the right mood, an hour in the National Gallery, or the concert hall, or reading a great novel, leaves us refreshed and invigorated, ready to face the world and its mundane facts with a new spring in our step. This increase in understanding is not further propositional knowledge (that is, knowledge that _p_ for some substitution for _p_ ) but an increase in know-how. And knowing how to face the world is no mean gift.

Footnote

a. It also refers to the sense of life being degraded or desecrated by sufficiently bad art. In his _Autobiography_ Collingwood describes the awful misery that afflicted him when on his daily way to work he had to pass the Albert Memorial in Kensington. 'Verminous' and 'crawling' are just two of the descriptions of it that he offers.

**8**

**TRUTH IN ETHICS**

How can we proceed in order to 'assimilate, validate, corroborate and verify' ideas when we ask how to live? The question is a serious one, since while in aesthetics _de gustibus non est disputandum_ has some claim upon us, in ethics it has almost none.
If I am minded to forbid one thing, permit another and make a third compulsory, and you are minded to permit the first, forbid the second and allow the omission of the third, then we are in dispute; indeed, in the very paradigm of a dispute, since we will find it hard to live together, or tolerate each other's practices and policies. It is often remarked that 'freshman relativists' – those who hold that anything goes, that in this area it is all a matter of opinion, or that you have your views and I have mine, but let's just move on – are as quick as anybody else to get hot, angry and resentful when they are lied to, or cheated, or given what they regard as an unfair grade, or when their pet concerns get challenged. Free speech (or its suppression), the rights of animals (or their lack of them), not to mention the legal status of abortion or the death penalty, see sides lining up very, very quickly. That, of course, is part of the problem in coming at any idea of ethical truth or fact, since, as with aesthetics and religion, we again have the diversity of subjectivities, and again may be unsure how to judge one side to be right, or nearer to being right than the other. If we ignore Peirce's maxim and simply try to think in the abstract about 'moral truth', it is easy to become sceptical about whether there is any such thing. At the beginning of the twentieth century the Cambridge philosopher G. E. Moore published a famous argument that moral truth would have to be different, root and branch, from 'natural' truths, such as truths of psychology, sociology or other empirical and scientifically tractable disciplines. For example, suppose someone presents the doctrine of utilitarianism, that is, that the value of a situation or the outcome of a plan depends entirely on what it does for the general happiness. This may be true.
But it cannot be true as a matter of definition, since it is intelligible to doubt whether it is true, wonder whether it is true, or argue that it is not true. (As a matter of fact, people have argued, powerfully enough, that it is not true, citing examples where promoting the general happiness requires sacrificing some person or some subset of people, infringing their rights in what seems like an unjust manner.) Moore argued that since such a doubt is always possible, it followed that the 'moral truth' could not be simply identified with any natural, empirical or scientific truth. The 'open question' showed that even if you settle all the natural facts, there is still something left to settle – whether one or another distribution of them counted as good. He concluded that 'goodness' was a distinct, non-natural property of things. Moore's argument has been much discussed, since although it is powerful, its conclusion seems totally unacceptable. Nobody of an empirical or scientific bent wants to countenance spooky non-natural properties, hovering as it were above the natural world and delivering their benefit in unimaginable ways to one thing or another. How could we know about them? And why would we want to do so? We have enough on our plate coping with the given world of pleasures and pains, happiness, misery, hopelessness and joy. If other moral properties such as 'being good' or 'being a duty' lie outside the causal order of things, how could we possibly have evolved to track them successfully? Evolution favours animals that are successful in leaving descendants, which requires skills in coping with such things as food supplies, predators or signs of potential mates. But there is no story about how success in tracking Moore's non-natural properties would help in the least.
And if there is no reason for us to have evolved into skilled detectors of non-natural properties, there is no reason to suppose that any opinions we form about what possesses them and what does not are reliable. Scepticism seems the only possible upshot. For all we know it might be that arranging the biggest three boulders on Ben Nevis into a straight line is the most valuable human activity, that you trespass on people's rights by being honest with them, and that misery and hopelessness are the best things to wish for. Some writers, 'error theorists', take this to show that moral discussion is chasing a will-o'-the-wisp. There is no truth in this area, and if we say, for instance, that stamping on babies for fun is wrong, it is a mistake to suppose that what we say is true. Others, 'fictionalists', suppose that at best it is a useful fiction to say that this is true, although in reality it is not. Either way it is not strictly true that it is wrong to stamp on babies for fun. A bizarre denial, and one it might be best not to express to their mothers. However, all this pessimism and nihilism is the consequence of thinking about moral truth in the abstract – taking it as one of Peirce's 'vagabond thoughts that tramp the public highways without any human habitation'. If instead we 'begin with men and their conversation', or look at the flowers under our feet rather than reaching for the stars in the sky, things are much brighter. This alternative has its origins in Aristotle, who saw that ethics is nothing else than the business of enabling humans to flourish. And we know quite a lot about what counts as flourishing, and what counts as failing to flourish.
Aristotle said good things about this, pinpointing virtuous activity in a life rich enough for such things as civic activity and friendship (although, sad to say, he also had the rather peculiar view that 'reasoning' was the final good of human beings, and eventually concluded that the best life would be one of pure contemplation). Other writers have had a more realistic take on the issue. You do not have to be a monk or a sage to flourish. In the modern world, the eighteenth century saw the first extended attempts to found an idea of truth in moral philosophy on a science of human nature. The creed of these philosophers of the Enlightenment was that if we properly understand who we are, and our place in the natural order, a theoretically satisfying and edifying understanding of morality will follow on. The giants in this enterprise were David Hume, Adam Smith and Immanuel Kant, although many other admirable figures surrounded them.a Smith forms an interesting bridge between Hume, with whom I shall start, and Kant, who follows later. In Hume's exploration, our capacity for moral thought has five underpinnings:

> (1) Like other animals we have a natural endowment of desires and aversions, according to whether things have a positive or negative effect on our own well-being. These fuel our capacity for looking after our own needs, if necessary by foresight and prudence.
>
> (2) We have a limited or minimal degree of sympathy with others and benevolence towards them, but a much greater concern for our own family and friends.
>
> (3) However, we also have a further capacity to take up a 'common point of view', so, for instance, we can abstract from our own involvement in a state of affairs, or our own involvement with a character, and disinterestedly contemplate the ways in which different people tend to behave.
This enables us to take up attitudes to people in history, where our own interests are absent, or even in fiction, when the characteristics of people are presented, even if no actual persons of the kind ever existed. We saw something akin to this when we considered aesthetic appraisal as the 'common pursuit', an enterprise of deciding what _we_ are to think about something.

> (4) We then have a propensity to take pleasure in, and therefore approve of, those qualities of mind that are useful or agreeable to those who have them and those around them – their kith and kin as it were.b
>
> (5) We can build upon this, by means of our ability to enter into conventions, when coordination with others is essential in order to prosecute our interests.

The first of these almost goes without saying. Some classical moralists, notably the Stoics, sometimes make it seem as though having desires and aversions at all is a regrettable feature of the human condition, and one that we should try as hard as possible to suppress. Later on, Kant would show a degree of sympathy with this harsh view, but although all moralists recognise that there may be some desires that it is best to suppress, neither Hume nor Smith had much sympathy with this Stoic ambition in general. Even the term 'self-control' is absent from the genial Hume's lexicon, although he was well aware that it is often difficult to defer an immediate gratification for the sake of some long-term or distant goal. The second element introduces our nature as social animals. We are able to 'mirror' the minds of others, or enter imaginatively into their feelings. When we do so we may find we can sympathise with them, both in the sense of understanding how they feel and, when appropriate, feeling sorrow or concern, mirth or joy, as they do.
The third of these elements, the arrival of the 'common point of view', separates simple likes, dislikes and preferences from the more reflective and disinterested states of mind that underlie public approval and disapproval:

> When a man denominates another his _enemy_, his _rival_, his _antagonist_, his _adversary_, he is understood to speak the language of self-love, and to express sentiments, peculiar to himself, and arising from his particular circumstances and situation. But when he bestows on any man the epithets of _vicious_ or _odious_ or _depraved_, he then speaks another language, and expresses sentiments, in which, he expects, all his audience are to concur with him.

The fourth element introduces both Hume's own prime topics for approval, and his standards for granting that approval. Hume trawls through the kinds of qualities of motivation and character that we admire in others. He makes a reasonably convincing case that it is possible to identify four topics of love or praise. There are qualities of mind that are useful to ourselves: we admire someone for being cautious, or prudent, intelligent, temperate in her emotions, or herself a good judge of character.^c These are qualities that will enable her to get on well in life. We also admire, perhaps even more, those social qualities that render a person useful to others, notably benevolence, generosity, eagerness to be of service. These make a person a good team player, as it were. Thirdly, there are traits of character that are agreeable to those that have them: a cheerful disposition, an easy-going temper, sufficient balance or fortitude to take things as they are. And finally there are traits that are agreeable to those around us: cheerfulness, helpfulness, tact, the ability to act with grace and sense. Of course, these categories overlap to a large extent, but in principle they mark different dimensions of excellence.
So putting it all together we get that a virtue is a quality of mind that is 'useful or agreeable to ourselves or others'. The fifth and final element in Hume's picture is in a way the most interesting, and marks not only a major advance on his predecessors, but one of his principal legacies to successors, not only in philosophy but in economics and social sciences. He considers the common situation in which we each want to do something, but can only do it together. So, we need to coordinate. One might think, as philosophers before Hume had done, that what would be needed would be a contract or promise whereby each gives his word that he will play his part. But Hume wants to dig deeper: the power of promises is one of the things he wants to explain. It is part of the problem, not part of the solution. The ingredients he has handed himself so far do not include obligations and rights, and promises are above all instruments for creating obligations and rights. Instead he begins with habits that involve reciprocity, as in 'I'll scratch your back if you scratch mine.' I will respect the property of others, if they will respect mine. Sensitivity to the difference between those who are cooperative, and those who are not, is found in species other than _Homo sapiens_. A chimpanzee will scratch another's back if the other shows a disposition to reciprocate; less or no such disposition otherwise. So, in a world of limited concern for others we need to expect a reward for putting ourselves out on another's behalf. Expecting the reward provides the motivation. But of course the person first benefited might just walk away, so it is good to have something that cements the reciprocity in place. A solution arises if there is some mechanism whereby the person trusted incurs a significant penalty if he departs from the expected pattern. Giving a promise is a public act that creates such a penalty. 
Now if the person trusted fails to perform, the person who trusted can expect social sympathy from others, and the person trusted can expect disgrace and penalty. A promise is not a signal of a pre-existent state, but the creation of a new state. And with it comes the notion of a right, invested in the promisee, and an obligation falling on the agent. After we are inducted, as children, into this social process, the institution takes on a life of its own. The mere fact of having made a promise, quite apart from the probability of social penalty, creates a repugnance to breaking it in the well brought-up individual. But it is not only the activities of giving and taking promises that can be understood by the arrival of conventions:

> Thus two men pull the oars of a boat by common convention, for common interest, without any promise or contract: Thus gold and silver are made the measures of exchange; thus speech and words and language are fixed, by human convention and agreement. Whatever is advantageous to two or more persons, if all perform their part; but what loses all advantage, if only one perform, can arise from no other principle. There would otherwise be no motive for any one of them to enter into that scheme of conduct.

Hume's strategy is very much that of the 'evolutionary psychologist'. Starting from a bare sketch of human nature and human circumstances, we are given an understanding of how it is that without any remarkable leaps or remarkable exercises of 'reason', we enter into conventions or institutions that enable social life to flourish. And a key part of the story is that it only takes a basic budget of desires and concerns for conventions of property, promises, law, government, money and language to take root and grow into the most essential supports of our social lives. Hume compares the whole process to the building of an arch or vault, where each stone plays its role provided the others do.
What then of Moore's later 'open question' argument, and the scepticism it brought in its train? Wasn't Moore right that there is always an open question about whether rightness or goodness attaches to any empirically given property, including those that Hume picks out? Well, Moore abstracted from 'men and their conversation' whereas Hume starts with it. There is simply no room, in the Humean vision, for a metaphysically spooky, invisible and intangible property that is mysteriously of great importance to us. There are only the natural properties of things, such as the dispositions of character that make up useful and agreeable lives, and our propensity to admire them, choose them, educate people into them and regret their absence. This is what ethics and morality are about. Of course, Moore was right that any opinion of the value of things can be contested and queried. If someone comes along and says that martial courage is a virtue, for instance, we might well wonder whether it is a quality of character that, taken in human history as a whole, hasn't been a nuisance rather than a benefit, and we can go on to discuss whether in the light of that it should be admired, as it often is. As with aesthetics, we can discuss what to admire and what to dislike. Ethics is our technique for living, and like any technique it can be practised well or badly. If we admire and dislike the wrong things, we can expect things to go worse than they otherwise might have gone. This is why, as with aesthetics, but with more urgency, we properly treat it as if there is a question of truth to be settled. We do not content ourselves with voicing our reactions ('martial courage – wow!') but are concerned that our reactions get corroborated, validated, agreed and verified in a common pursuit of solutions to the problems of living well. This is why we have the moral and ethical, and aesthetic, language we do, in which verdicts can be discussed and agreed or challenged. 
There is no problem, in this approach, akin to the problem of scepticism that bedevilled Moore's metaphysics of non-natural properties. There are no non-natural properties. There are only human enterprises of discussing what to like or dislike, encourage or forbid, tolerate or oppose. A 'sceptic' who says that for all we know misery is better than happiness has no voice in any sensible moral conversation. Unless, as seems utterly improbable, he can succeed in putting misery in a more favourable light than happiness, he is not a voice in our common pursuit but a mere nuisance deserving the dismissal he will doubtless receive. Our conversations start with recognisable human beings as their topic: it is we who desire some things and avoid others and who have to solve the problems of living together. If part of our solution is to agree, for instance, that the convention that gives us the possibility of making promises is a valuable one, then we are in effect agreeing that someone who wittingly and without excuse breaks a promise should forfeit our good opinion. Agreeing to this is, as the deflationary theory of truth insists, the same thing as agreeing that it is true that such a person should forfeit our good opinion – or, to put it in more ordinary language, he did something he ought not to have done. Truths of this kind follow naturally upon the very existence of conventions. Wherever coordination is necessary to our living together, a person who defects from the coordination is a nuisance, and in line for criticism and sometimes penalty. Indeed, we saw above that in Hume it is only with the arrival of conventions that notions like justice, obligation or rights come into view. 
The landscape before they do is one of desires and needs, pleasures and pains, but not one in which questions of justice or obligation can be raised, any more than questions of credit and debt could be raised in a society without any concept of money, or without a tally of goods and services provided or received. Since injustice only arises when someone defects from an agreement or convention, Hume thinks it is not in play when, for instance, a powerful party (such as European settlers in the eighteenth century) comes across a powerless one (such as indigenous peoples). The former may be under a duty to behave well, out of benevolence and humanity, but there is no convention, or motive to initiate one, and hence no question of justice in the case. It is here that Adam Smith departed from Hume. Smith thought that anger and resentment were natural reactions to cases of trespass by another. If an agent takes my goods, invades my space, ignores my interests or in any way crosses the boundaries of decent behaviour, then resentment is a natural reaction, and one that a bystander, an 'impartial spectator', can sympathise with, feeling indignation on my behalf. These reactions, of resentment of the injured party and indignation on his behalf by the impartial spectator, are naturally expressed in terms of the agent having behaved unfairly or unjustly. So issues of justice do arise even when there is no antecedent convention to which the parties subscribe. Smith thought that as soon as we have as many as two people, and in a bare social landscape, there are things that would count as trespassing against the proper boundaries of the other. Not only bodily assault and injury, but other ways of showing that, in one's mind, the other person is of no account, cause natural resentment and anger, and any impartial spectator would sympathise with the injured party. By doing so they in effect classify the offender's behaviour as unjust. 
We lie under a duty to each other from the very beginning, not only after the arrival of the structures under which we live in society. Smith's rejection of the way in which Hume's conception of justice was wedded to antecedent conventions was taken up and expanded by Kant, who saw respect for each other as a more important foundation stone for morality than the straightforward pursuit of goods and avoidance of evils, or the satisfactions arising from such pursuits. He thought that it was the bare, human capacity for rationality that entitled us to this respect.^d Moral philosophers still divide on whether Kant managed to make a viable system based on this principle, and if so whether it is an advance on the legacy Hume or Smith left us. We do not have to judge this dispute here. It is enough to remark that there is much to say on each side and that those who do take sides regard themselves as correct and right and their opponents as incorrect or wrong. But we now know to expect that. Each side is advancing something as the proposition to be accepted, that is, as true, and there is nothing spooky in the case, nor anything that should prompt surprise or scepticism. One might wonder, of course, whether the dispute is irresolvable, and one might express that doubt by asking whether there is any truth in the matter. This would be, in effect, wondering whether there is any one opinion that is 'fated to be agreed upon', in Peirce's words. Perhaps there is not, but it would be incautious to announce that at the outset. There is no God's-eye view, or vantage point from which anyone can know in advance the result of the exploration or enquiry.

Footnotes

^a Including Thomas Hobbes, Shaftesbury, Joseph Butler, Francis Hutcheson and Jean-Jacques Rousseau.

^b In using this phrase Hume is echoing the Roman poet Horace, who argued that poetry is to be _dulce et utile_, i.e. pleasant and useful.

^c Elinor in Jane Austen's _Sense and Sensibility_ is an ideal example of this.
^d Thereby leaving himself a problem about animal welfare and animal rights. Many of us will sympathise with Bentham's remark, 'The question is not, can they reason? Nor, can they talk? But, can they suffer?'

**9**

**REASON**

We should not leave the topic of moral truth without noticing that much of what has been said applies across a wider spectrum. We pay attention not only to the overt actions of ourselves and others, but to the way our minds move. As soon as we have perceptions of the world at all we think about what they imply and what we can infer from them – indeed, a plausible way of distinguishing perceptions from the brute sensations we talked about earlier is that a perception has implications, whereas a sensation just happens. A glimpse or whiff is just something that happens, but when it is interpreted consequences follow, expectations arise, and significances are discerned. And the ways in which people's minds move are as much the subject of criticism and conversation as our other practices.

So what is meant if we say that a person, X, takes something, A, as a reason for some conclusion, B? A first stab would be that when X becomes aware of A it moves him towards a mental state B. Notice that B might be a belief, but it could be something else: a desire, the formation of an intention or plan, an emotional reaction, or an attitude to something or some person. The movement towards B might be checked by something else in X's mind, such as countervailing reasons against B. But X is, as it were, given a shove towards B. This is a good start, but I think we need more than this. For X may find himself moved towards B but against his will or his better judgement. He wouldn't endorse the movement from A towards B, or try to justify his ending up at B by citing A (he might feel guilty that A moves him towards B, so recognising that it is no reason for B at all). So we can try instead that X takes A to be a reason for B if X does endorse and defend the tendency.
He thinks that from the common point of view, a move from A towards B is one to be approved of. He can advocate it in a conversation designed to achieve such a common point of view. Such endorsements or approvals can come in degrees. At its most lukewarm it might be that X does not actually disapprove of taking a move from awareness of A towards B. Further along he might approve of it, and eventually disapprove of anyone who, aware of A, fails to be moved towards B. He would be holding that the move is compulsory. The endorsements and approvals in question might be ethical, but they need not be. If someone moves from hearing a politician say something to believing it, one might criticise them as credulous or gullible, and these are criticisms of the way their minds work, but not in a particularly ethical or moral register. It is their intelligence or savoir faire that is at fault, even if their heart is in the right place. Of course, I have abstracted a little for the sake of simplicity. As holism, which we met earlier, reminds us, any human being becoming aware of something is going to be adding it to an enormous background of things she already believes, knows, desires, intends, and so forth. It may be that a move from A to B is to be approved of against some backgrounds and not others. We may want to say that other things being equal, A is a reason for B, or just that A is sometimes a reason for B, deferring to such variations. But sometimes we think it is compulsory or categorical. It doesn't matter what else you believe if you learn that Y is in China or India, and then that he is not in India; it is right, then, to infer that he is in China. If you believe that there are five girls in the room, and then that there are five boys, it is right to infer that there are ten children. One might say that logic and mathematics codify compulsory inferences. 
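The two 'compulsory' inferences just mentioned can even be written out formally. As an illustration of my own (a sketch, not anything drawn from Hume or the surrounding text), here they are checked in the Lean proof assistant — the disjunctive syllogism about Y's whereabouts, and the arithmetic about the children:

```lean
-- Illustrative sketch only. `inChina` and `inIndia` stand for the
-- propositions 'Y is in China' and 'Y is in India'.

-- Disjunctive syllogism: from (China or India) and (not India), infer China.
example (inChina inIndia : Prop)
    (h : inChina ∨ inIndia) (hn : ¬inIndia) : inChina :=
  h.elim id (fun hIndia => absurd hIndia hn)

-- Five girls plus five boys make ten children.
example : 5 + 5 = 10 := rfl
```

Whatever else one believes, these inferences go through; that unconditional character is what the text means by calling such moves compulsory or categorical.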
Sometimes we want to wind the clock back, as it were, and use an aversion to B to undo acceptance of A, or of whatever else in our background set of assumptions made the inference a good one. You can be sure that not all these things are true: that Y is in China or India; that Y is not in India; that Y is not in China. But you may not know which belief to give up. So it is better to say that logic and mathematics determine which sets of propositions it is compulsory to avoid. This set would be one of them. Much of the philosophy of science is concerned not with questions of logical consistency, or with purely mathematical inferences and proofs, but with evaluating interpretations of experiments and observations. It needs to think about such things as our tendencies to generalise, the use of analogies and models, our bias towards simplicity in explanations, and the amount of confidence any one interpretation of things should command. These are essentially evaluative exercises, and can be as open-ended and subject to judgement and preference as comparable discussions in ethics and morality. When we picked up Max Planck's alleged remark, that the opponents of a theory never get convinced, but just die out, we were recognising that some inferential or reasoning tendencies, given what else is in the mix, are incorrigible and ineradicable. How badly we think about those who have those tendencies may vary. When they stand in the way of what we are sure is the truth (or even when they stand in the way of our promotion and fame) we tend not to forgive them. So we can discuss which movements of the mind are reasonable or unreasonable in much the same way as we discuss which motivations and behaviours are admirable, or compulsory or impermissible. It follows that scepticism about the idea of moral truth should suggest scepticism about assessment of mental tendencies as reasonable or unreasonable.
An error theory or fictionalism about whether one thing is ever a reason for another would loom. But that seems utterly intolerable. To say that all movements of the mind, all inferences, all interpretations of things, all tendencies to believe things, are equally good is absurd. If you see an electric plate glowing red hot, it is better to expect it to burn you if you touch it, rather than for it to do anything else, such as shower gold on you or turn into a frog. Much of our reasoning is automatic and implicit. A perception that there is a chair in front of me leads me to suppose that there is one behind me after I turn round. Isn't it possible that I should have had the perception although the chair was an ephemeral being; a manifestation that itself carried no implications for the moment when I twirl around and bend my knees? Yes, barely possible. But a mind that took that possibility to be wide open, that failed to make the inference, is not one well adapted to life in this wonderfully regular and predictable world in which we live, and in which we have been adapted to live. It would be neither useful nor agreeable to possess such a mind. In fact, taken to the limit it would not be a mind at all but a mere register of sensations of the moment; in Kant's terms, a 'rhapsody of sensation, less even than a dream', or, as William James put it, 'a blooming buzzing confusion'. It is with inference that sensation turns into perception. When we talk of reason, as when we talk of aesthetics and morality, things become much clearer when we stop dealing with truth in the abstract and look at the 'particular go' of it. We then understand why we want it: it is because we do not want people thinking badly, faltering along foolish paths of inference, and we need to signal what counts as doing so. 
We remove anything spooky from the area, and we sideline error theories and fictionalism, for both these are reactions to Moore's non-natural distribution of moral properties or moral facts 'out there'. In the case of reasons, they would similarly suppose that there is a non-natural distribution of inferences we ought to hold and things we ought to believe (a distribution that exists somewhere in reality, 'out there'), but despair about our actual contact with them. Whereas if we start where we are and look at our procedures of conversation, agreement and disagreement – and at our actual successes in learning how to live and what to believe – we can achieve modest confidences, although at any time we may encounter problems that stump us. In other words, we locate 'moral truth' or 'rational truth' as the axis around which important discussions and enquiries revolve, hopefully informed by whatever we know and think we know about human beings, their limitations and their possibilities. The enquiry is essentially practical: we can say that its goal is truth, but it can as well be described as knowing when and how to act, whom to admire, how to educate people, what to believe, or, all in all, how to live.

**10**

**RELIGION AND TRUTH**

Like aesthetics and ethics, religion is an area whose credentials, if they are presented in terms of truth and fact, seem decidedly doubtful. As with aesthetics, we have, obviously enough, the great diversity of personal responses, as different religions have appealed or continue to appeal to different people in different social and cultural circumstances. They can't all be true. And even under the broad umbrellas of religions such as Christianity or Islam sects proliferate and are all too apt to sow discord and hatred.
One man's faith is another man's lunacy, or worse, and peaceful co-existence is a fragile commodity, little more than a short truce in the battles for hearts and minds: battles, unfortunately, in which the adherents of one sect rather too often set about murdering those of another. One might suppose that this is just an unfortunate side effect, a by-product of our not yet having found the right words or true religion. But it is more than that. The sociologist Émile Durkheim supposed that the principal function of religious practice was to weld numbers of people into a social whole, a congregation. To this end the arbitrary nature of practices of faith, such as the veneration of some text, or some place or object, is ideally adapted. You gain a separate identity, a tribal badge as it were, by the very arbitrariness. You can't distinguish yourself from your neighbours by saying that we are the people who eat and breathe, but you can by identifying as the people who read just these books or sing just these songs, wear these hats, grow these beards, cut our hair, wear veils, take off our shoes, eat seafood, worship cows, or don't. One of the nice features of Durkheim's account is that it reconciles the existence of arbitrary systems of faith with the evolutionary pressure to have our systems of confident belief conducive to success in action. If religion is a prime cement of tribal loyalty, then it may well be adaptive for social animals such as ourselves to exempt its doctrines from the critical attention that our reasoning powers order us to bestow on beliefs in other areas. As far as reason is concerned, religion has a get-out-of-jail-free card. It would otherwise be difficult to explain the success of arbitrary practices of belief, up to and including those of the most outlandish cults, in a Darwinian world. It is vital to the way this mechanism works that it is not recognised as such.
To weld people into a social unit or congregation, religions need faith, not an ironic understanding that you would be doing entirely different things if the chips had fallen differently. The mysterious or ineffable nature of religious doctrine is a handmaiden to this faith. It deliberately stupefies the understanding, protecting the tenets of a religion with a shroud or mist that makes processes of rational assessment, weighing of probabilities, or scientific investigation not only ineffective but even blasphemous, inadequate as they are to the tremendous weight and gravity of the central mysteries. It is presumptuous and indeed a kind of desecration to suppose that finite creatures can comprehend the infinite. Perhaps so, but for centuries theologians tried to do better. Until the seventeenth century the most able thinkers in Europe and the Islamic world beat their heads trying to understand God. Concepts such as existence, time, causation, substance, necessity, omnipotence, foreknowledge, infinitude, evil and many others were shaped and deployed and discarded and resurrected in the process. One of the most powerful schools of philosophy in the ancient, classical world had been that of the sceptics, who doubted the power of human reason to fathom issues far removed from simple empirical experience. But their cautions had little influence on theologians, compared with the glittering prize of establishing a true understanding of the cosmos with a definite place for God as well as for ourselves. God was to be the underlying cause of everything, the ground of the being of the universe, the unmoved mover, and the only issue was to settle on his relationship to us. One of the earliest modern voices to pour cold water on any such ambition was Thomas Hobbes. Hobbes believed that it was natural to people to ask for the cause or ground of the entire cosmos, and to push back along the chain of causes until, out of simple weariness, we can get no further.
We then call the stopping point, of which we can have no definite conception, God. Hobbes did not rail against this tendency of the mind, but he insisted that we can go no further:

> He that will attribute to God, nothing but what is warranted by natural reason, must either use such negative attributes, as infinite, eternal, incomprehensible; or superlatives, as most high, most great, and the like; or indefinite, as good, just, holy, creator; and in such sense, as if he meant not to declare what He is (for that were to circumscribe Him within the limits of our fancy), but how much we admire Him, and how ready we would be to obey Him; which is a sign of humility, and of a will to honour him as much as we can.

Giving ourselves the vague idea of an ultimate ground for the gigantic frame of the physical universe, all we can do is, as it were, feel thankful and grateful, although we have absolutely no idea to what kind of thing or things our thanks and gratitude are directed. Hobbes offers a splendid analogy to describe the befuddlements of those theologians who pretend to offer more:

> But they that venture to reason of his Nature, from these Attributes of Honour, losing their understanding in the very first attempt, fall from one Inconvenience into another, without end, and without number; in the same manner, as when a man ignorant of the Ceremonies of Court, coming into the presence of a greater Person than he is used to speak to, and stumbling at his entrance, to save himself from falling, lets slip his Cloake; to recover his Cloake, lets fall his Hat; and with one disorder after another, discovers his astonishment and rusticity.

It is an inevitable part of the human lot to become thus befuddled at any attempt to comprehend whatever it might be that serves as a cause or ground for the entire cosmos. Hobbes is usually described as an atheist, or at best an agnostic, but it is not plain that these labels fit.
He certainly thought that the sovereign power in a state had the right to command specific religious practices – toleration was a rare commodity in the seventeenth century – and, crucially, he had nothing to say against advancing the right kinds of praise, even if we have no idea what it is that we are praising, except, of course, the vast frame of nature that sustains us. Such praises can sound right to religious ears. They might include saying that God is infinite, great, just, loving, and so on, but we should realise that these are not descriptions of a being, and therefore true or false according to whether they get the being's nature right. According to Hobbes, such praises are:

> oblations rather than propositions, and these names, if we were to apply them to God as we understand them, would be called blasphemies and sins against God's ordinance (which forbids us to take His name in vain) rather than true propositions... The words under discussion are not the propositions of people philosophizing but the actions of those who pay homage.

In short, theologians might think they are investigating the nature of God (or the afterlife) but all they manage to do is to take up a beatific attitude to the world, and, hopefully, to each other. Such attitudes can have other functions than that of tribal cement, as identified by Durkheim. They can act as consolations, for as the philosopher Roger Scruton has said, the consolation of an imaginary friend is not an imaginary consolation. When life goes badly it can be pleasant to imagine a better world hereafter. Why was Hobbes so sure that any attempt to circumscribe the nature of whatever it might be that grounds the entire frame of nature must fail? The formula that the finite should not attempt to comprehend the infinite is scarcely convincing by itself (mathematicians manage to say a great deal about infinity).
Perhaps the problem comes further into focus if we bring in the next great critic of theological reasoning in the English-speaking tradition, David Hume. In his great posthumous work, _Dialogues Concerning Natural Religion_, Hume, surprisingly, has two spokesmen for 'natural religion', which is the attempt to prove the existence of God and describe something of God's nature by means of our ordinary reasoning powers alone. The two spokesmen represent two different directions this enterprise can take. The first, Cleanthes, aims to show the existence of a Divine Architect, a single being modelled upon human nature. This is a being that has plans, designs, intentions and even emotions and preferences. He is, as it were, a human being writ large. The other protagonist, Demea, wants something that exists necessarily, and his proof is to proceed not by seeing the world as analogous to the production of an architect, but by reflecting on an existence that requires nothing by way of support (as human beings most obviously do). The discussion can be summarised by saying that Cleanthes is arguing for the God of Abraham and Isaac, while Demea is arguing for something more abstract, the God of the philosophers. The third character in the _Dialogues_, Philo, representing Hume himself, does little but nudge these two apparent allies towards discovering the size of the gulf that separates them. Cleanthes's problems are plain enough. Human architects have many properties. They have finite lifespans, and they would not exist but for the previous activity of parents and ancestors. Some are more experienced than others. Some are apprentices, and others are past it, in their dotage. They tend to work together in groups, and they depend upon traditions and long histories of experience. They also make mistakes. Some of their productions are inferior to those of others. They depend upon pre-existent, given materials.
Furthermore, if we are acquainted with just one of an architect's productions, all we can infer is that this is an architect who produces things like that. If we have no other examples of an architect's work, or of the work of others with which to compare it, we cannot even pronounce whether it is a good or bad example of its kind. And we certainly can't infer that he or she also makes entirely different kinds of building as well. Above all we cannot infer that they also design the heavens. That would be like taking the multiple imperfections of some ghastly child to be itself a reason for supposing that it has a much nicer brother or sister. None of these properties is supposed to have an analogy in the Divine Architect. He does not have a lifespan, does not depend upon parents and ancestors, does not make mistakes, does not produce inferior works, does not depend on pre-existent materials, does not serve an apprenticeship, does not enter a dotage, and does not work in a tradition or alongside others. He is to be, as it were, above all that. So it is time for Demea to unroll his less anthropomorphic, more abstract conception of the ground of the cosmos, sometimes called the God of the philosophers. This is to be a 'necessary being', self-sufficient, eternal, beyond assessment, far from capable of being modelled upon the example of human life. Probably Demea's best analogy would be with a number (although philosophers dispute about whether it is right to say that numbers exist). That aside, if we take a number, say the number seven, then it makes no sense to imagine it changing (it is not odd one day and even the next), nor does it depend on anything material, nor does it have a finite lifespan, and it is necessary at least insofar as we can make little or no sense of the idea of it not existing, as if one day we might find that whereas there used to be something between the number six and the number eight, now, alas, it seems to have disappeared. 
Unfortunately, neither is the number seven responsive to prayer, concerned about humanity, subject to emotions and preferences, a counsellor or a judge, a consolation or a creator. So now Demea can call Cleanthes names: 'anthropomorphite', making God too like ourselves, just a big daddy in the sky with a whole host of human properties, how inadequate, how blasphemous. And Cleanthes can call Demea names: a God about which nothing can be said, and to whom no prayers can be addressed, how useless, how mystical – and each ends up saying that the other is little better than an outright atheist. But the trouble is that the ordinary religious believer will find that he needs to oscillate between the two conceptions. When he seeks God's help or forgiveness or consolation he sides with Cleanthes; when he reflects on what could possibly be the ground of the existence of the cosmos he must side with Demea. He is in what Hume calls a 'somewhat unaccountable state of mind', having no clear concept in mind, and hence no clear belief or thought to assess as true or false. And, as Hume said in a different context, 'carelessness and inattention will alone afford any remedy.'a It may seem surprising that after all this, Hume, like Hobbes, nevertheless retains a soft spot for the dispositions of the human mind that lead to arguing for an ultimate cause of the cosmos, or a divine architect. There must, we think, be something outside the physical universe that sustains its patterns, a divine maintenance man keeping its laws running, its magnitudes constant, its whole frame capable of supporting order and life. In the final section of the _Dialogues_ Hume himself shows sympathy with this tendency of the mind. However, this just means that the sceptic's target shifts. It doesn't matter any more what you say you believe. We are in an area where, as Hobbes said, words function only as 'oblations', prayers or paeans of praise, not as descriptions of aspects of the world. 
We are not in an area where truth is to be expected. In James's terms, we do not have doctrines that can be assimilated, corroborated, validated or verified. We have something more akin to songs and dances. As usual Hume sums it up admirably: > In vain would our limited understanding break through those boundaries, which are too narrow for our fond imagination. While we argue from the course of nature, and infer a particular intelligent cause, which first bestowed, and still preserves order in the universe, we embrace a principle, which is both uncertain and useless. It is uncertain; because the subject lies entirely beyond the reach of human experience. It is useless; because our knowledge of this cause being derived entirely from the course of nature, we can never, according to the rules of just reasoning, return back from the cause with any new inference, or making additions to the common and experienced course of nature, establish any new principles of conduct and behaviour. What does matter is that you should not draw any inferences from whatever you like to say or try to imagine. Indulge supernatural imaginings and practices if you wish – Hobbes thought that a well-ordered state would make it a law that you should do so – but do not think that you can derive any results about what to expect, who to love and hate, what to tolerate or oppose, how to relate to your neighbours, or how to live in general. You have to supply all that for yourself, and whether or not they knew it, this is what the writers of the holy texts – the preachers and priests and imams, and all the upholders of the various moralities that claim religious underpinnings – have always done, sometimes beneficially but often catastrophically. Might religions, then, be accorded the same generosity that we earlier gave to aesthetics and ethics? 
There we turned from bothering about the concept of truth in the abstract to describing, and, generally speaking, encouraging, the practices of the enquirer and the critic. Trying to better our practical responses to things is a valuable activity, and recognising that we would do well to look for this betterment is a good exercise of modesty. Could we say similarly that there are religious practices, including those of enquiry and discussion, that can be regarded both as attempts to know how to improve our awarenesses and attitudes towards the universe, and as trying to find religious truth? The exercise could be regarded as close to the practices of art and aesthetics. Just as a musician might be described as trying to find an adequate expression of, say, her response to the arrival of spring, so a religious adept could be described as trying to find an adequate expression of his hope, consolation, gratitude or reconciliation with the universe. The suggestion is well placed, and there may be religious practices and attitudes to which it is well fitted. A religious attitude to life would be a (good?) version of a well-tuned aesthetic and ethical attitude to life. But whereas it is straightforward to agree to there being know-how in ethics, or grades of experience and expertise in many of the practices associated with taste and art, it is harder to believe that the same is true, or true in the same way, when it comes to a more religious dimension. Beatific joy is one thing, self-lacerating despair another, yet each seems to find an equal place in religious practice. The music of Bach is one thing, and if appreciating it exhausted religious practice all would be well, but the hatreds of sectarians and jihadists are another. 
Perhaps as the apophatic traditions (that proceed through saying only what God is not) of various religions suggest, a truly religious attitude to the world is best found in silence; but silence is not an expression of any kind of truth, and the effort to articulate any wisdom that silence contains is not apt to deliver anything recognisable as know-how, but more apt to deliver something like Hobbes's befuddlement. Nor is a religion of silence a way to tribal identity, or hope and consolation, nor a provision of moral stiffening, yet all these are things that people look for in their religions. What we find, then, perhaps unsurprisingly, is that any religious practice that seeks a cousinship with ethics and aesthetics, and thereby a title to being its own kind of truth seeking, must eventually stand trial at the bars of ethics and aesthetics. Are the practices it recommends useful and agreeable? Are the attitudes to the world that it enjoins adequate and admirable, in a way similar to those encouraged by great art, literature or music? Is it free from exalting human vanity, or pride, or self-deception, let alone tribalism and sectarianism? If the answer to such questions is positive, then at the very least there is nothing to oppose or complain about. Religious truth could find a berth by sharing a compartment with ethical and aesthetic truth, to which we have been quite hospitable. But religions are also ways of turning up the volume. A dissenter is not a voice to be accommodated, a fellow enquirer in a serious attempt to allay doubt, with whom we may come to be one-minded about things, but someone to be shunned or extirpated. _Anathema sit:_ let it be damned. So looking at the way in which religions implement themselves in the actual world, it would be naïve to be too optimistic. 
Footnote aThis always reminds me of Alice's remark after hearing the nonsense poem 'Jabberwocky' in Lewis Carroll's _Through the Looking-Glass_ : 'It seems to fill my head with ideas – but I don't quite know what they are.'

**11** **INTERPRETATIONS**

The moral of these discussions is basically simple, but I hope sufficiently provocative to be worth emphasising. Peirce, James, Bentham and others constantly remind us of our actual activities, and actual motivations and cares. We only have the language, the resources of thought that we do, because some activities have proved useful or essential. These activities include trying to thrash things out: trying to warp our own heritage of beliefs and dispositions as little as possible in order to accommodate the problems that friction with the world throws up. The language of truth, reason, justification, knowledge, certainty and doubt is our instrument for discussing all this. This language is used in the same way in connection with any subject matter, even, as we have seen, those where truth has proved especially elusive and contested, such as ethics and aesthetics. This language is also at work in interpretive disciplines, where we try to make sense of a historical period, or a set of texts. An interesting example is that of the common law, which is a structure built on experience and reason. Common law evolves from a succession of adaptations to difficult cases, unforeseen problems, principles, statutes, rules and reasons, all of which are gradually refined by the pressure of experience, and, we hope, better ideas at the expense of worse ones. It is an edifice, a cathedral, especially in lawyers' minds, but growing organically and worthy of the same kind of worship as the divinity of truth itself. Perhaps. Aristotle said that we should live under the rule of law, not the rule of men, and this suggests a kind of structure found nowhere on earth, but somehow casting itself over us from on high.
Hobbes, who always had his feet firmly on the ground, mocked this roundly: > And therefore this is another Errour of Aristotles Politiques, that in a well ordered Common-wealth, not Men should govern, but the Laws. What man, that has his naturall Senses, though he can neither write nor read, does not find himself governed by them he fears, and beleeves can kill or hurt him when he obeyeth not? or that beleeves the Law can hurt him; that is, Words, and Paper, without the Hands, and Swords of men? In sum, law is the force-backed command of the sovereign.a Of course, we hope that the legislature and the executive show fidelity to as much preceding law as possible, and so require no sudden jumps, retrospective legislation, arbitrary diktats, and so on. When this is so, there is a sense in which we have 'the rule of law', and in the absence of this tradition and conservatism (in an unusually good sense of the word) the security that is the very raison d'être of law crumbles. Stability of investment, property and contract, and safety itself, wither and perhaps disappear. Who is going to sow crops if by the summer his property is sequestrated and gone? It would be nice if we could be sure that later law is better than earlier law, in a rising arc of progress. But the commands of the sovereign bodies, and what the courts manage to make of those commands, emerge from politics and other human motivations: greed and fear, arrogance, lust for wealth, the corruptions of power, and simple fantasies about human nature. Even when tempered by interpretations of the tradition, they may represent a step backwards. So, for instance, the law now gives us a tax code for the United Kingdom that is around 17,000 pages long, and one that has aptly been described as a dog whistle luring the rich into the Elysian fields of tax avoidance. 
In the United States, the Constitution gives rise to the unedifying spectacle of the greybeards of the Supreme Court wondering what the framers of the Constitution 'would have' intended had they been aware of modern automatic weaponry and crowded urban living conditions. Thus a clause guaranteeing the right of the people to form a militia and bear arms becomes interpreted to mean that almost anybody can own and often carry such weapons in such contexts. It does not yet cover bazookas, grenades or tactical nuclear weapons, although those will no doubt be on the horizon, in spite of the consequence that in recent years more citizens of the USA have been killed by toddlers than by terrorists.b Sometimes the difficulty of applying rules outside the circumstance for which they were framed is more comical than tragic, as when Gennardy Lupey, a Russian sea captain, pleaded guilty to being drunk in charge of a ship – a bad offence one might think, except that his ship at the time was in dry dock waiting to be broken up, so poor Gennardy had no more reason to stay sober than had he been partying at home.c Argument about what the law _is_ proceeds by citing past practice and interpreting the reasons it has taken the shape that it has. Argument about what the law _should_ be is a matter of morals and politics. The two are different, but not entirely dissociated, since our own past laws and conventions will have influenced our sense of propriety and decency, and those in turn shape our verdicts about what ought to be done. But this is no obstacle to recognising the distinction, and recognising that, often enough, there is nothing sacrosanct about where we stand now. Enquiry in interpretative disciplines such as history or law is apt to be contestable and fallible. The results are typically provisional and open to refinement and improvement or outright rejection. Here truth seems especially fugitive.
Nonetheless, it is far from true that anything goes: even when our pictures of how things were are incomplete or partial, they may still be better than others. And even when truth veils herself, falsity can be detected for what it is. In other cases – simple empirical beliefs about the here and now – the cost of getting things wrong looms more immediately, and looms larger. But reasoning works the same way and deserves the same respect whether the question is difficult or easy. And by seeing discussion of reasons as, in effect, versions of the same evaluative exercise that happen whenever those loaded words 'good', 'ought', 'must' and their kin dominate our thoughts, we diminish the gap between exercises of scientific and empirical reason on the one hand, and practical and aesthetic reason on the other. All concern the common pursuit of values and priorities. These values provide us with our stance towards the world, a stance that has us walking upon Peirce's bog rather than upon a bedrock of fact. Here we stand until it gives way. 'I did it my way' boasts the song made popular by Frank Sinatra, plugging into a common fantasy of autonomy and sovereignty. But nobody does it their way, since everybody stands on a huge deposit of history and culture, the work of generations of trial and error and refinement. We all speak a language we did not invent, benefit from conventions we did not design, travel roads we did not level, inhabit buildings we did not make, under the protection of laws we did not frame. It would be as big a delusion to think of oneself as a self-made person in the world of thought and ideas as it is in the world of politics and commerce. So how can we retain any kind of confidence in those opinions that, for the moment, seem to provide solid footing? The best answer, somewhat brutal, is that we have nowhere else to stand. 
Imagining that we do would be to go back to the Cartesian quest for a method that requires no standpoint, no landmarks and no luggage, a method that, in all its glory, commands the allegiance of the innocent mind. We do not need, and cannot have, that. We start 'in medias res', where we are, and deal with problems as they arise, deploying a huge inheritance of mental habits, experiences, natural and practised capacities of observation, and inference and reasoning. We deploy our sense of what analogies to trust, what simplifications we can make, what is useful, what solidifies our judgement with those of our fellows whose judgement we respect. Whatever else sceptics and cynics of all stripes may say, we have no alternative. We cannot live without elementary confidences, cemented routes of inferences, preferences, relatively fixed pleasures and desires. These give us the indissoluble rocks around which we have to steer our fragile barks. And this is what it is to look for truth, to enquire into it, to set doubt to rest, to improve our understandings of the world. When we look back at life over the millennia of which we have a history, we do not seem to have done so badly. Reasons and interpretations lie in a Darwinian world, in which we might hope that not only the big beasts, but also the _dulce et utile_ – the agreeable and the useful – out-compete the others. We should toast our ancestors for getting us where we are, and since reason, the protectorate that philosophers police, is our concern, we must continually monitor the forces that lead us uphill or downhill, and have faith that the best will overcome the worst. Footnotes aHobbes's view, echoed by later philosophers of law such as Jeremy Bentham and John Austin, was contested by H. L. A. Hart in his hugely influential book _The Concept of Law_. Hart had two arguments. 
One was that it may be hard to determine which body is the sovereign, to which the answer is that when that is true it is equally hard to know what is the law – just think of failed states. The other is that some laws enable us to do things, such as making valid wills, rather than commanding us to do things. This is true, but force is close by in the wings, since the validity of a will means that the intended beneficiary becomes the owner of the property in the legacy, which in turn means that any other person taking possession of that property will incur the full force of the law. No adjustment to Hobbes, Bentham or Austin is necessary. bThe text of the second amendment reads: 'A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed.' Thanks to relentless commercial pressures and fantastical views of human nature, the initial clause, which clearly introduces the point of the amendment, is now completely ignored. cThe case was reported in the London _Times_ on 30 May 2016. Philosophers call the issues surrounding knowing how to extend rules to new cases the 'rule following considerations', which were first highlighted by Wittgenstein. **NOTES** **1. CORRESPONDENCE** Jeremy Bentham, _Deontology, Or The Science of Morality_ , vol. 2, §52. _Collected Papers of Charles Sanders Peirce_ , vol. 8, Arthur W. Burks, ed., Cambridge: Harvard University Press, 1958, §112, p. 83. Donald Davidson, 'Truth Rehabilitated', in Brandom, ed., _Rorty and his Critics_ , Oxford: Blackwell, 2000, p. 66. Richard Rorty, 'Texts and Lumps', in his _Philosophical Papers,_ vol. 1, Cambridge: Cambridge University Press, p. 79; Peter Strawson, 'Truth', in _Proceedings of the Aristotelian Society_ 1950, p. 129. Brand Blanshard, _The Nature of Thought_ , London: Allen & Unwin, 1939, vol. 2, p. 268. Gottlob Frege, 'The Thought: A Logical Inquiry', _Mind_ , vol. 65, 1956, p. 292. 
William James, _Pragmatism_ , New York: Longmans, Green & Co, 1907, p. 246. William James, _Pragmatism_ , p. 62. James Leuba, 'Professor William James's Interpretation of Religious Experience', _International Journal of Ethics,_ vol. 14, 1903, p. 331. **2. COHERENCE** John McDowell, _Mind and World_ , Cambridge: Harvard University Press, 1996, p. 11. Donald Davidson, 'A Coherence Theory of Truth and Knowledge', in Lepore, ed., _Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson_ , Oxford: Blackwell, 1986, p. 310. **3. PRAGMATISM** William James, _The Meaning of Truth_ , New York: Longmans, Green & Co, 1927, p. 76. C. S. Peirce, 'How to Make Our Ideas Clear', in _Chance, Love, and Logic,_ Lincoln, Nebraska: Bison Books, 1998. Essay originally published in 1878. William James, _The Meaning of Truth_ , p. 189. Sir Hugh Trevor-Roper, 'The Invention of Tradition: The Highland Tradition of Scotland', in _The Invention of Tradition,_ Eric Hobsbawm & Terence Ranger, eds, Cambridge: Cambridge University Press, 1983. Of course the Scots are not alone. Myths of national glory are virtually inescapable. William James, _Pragmatism_ , p. 233. _The Collected Papers of Charles Sanders Peirce_ , vol. 5, Charles Hartshorne & Paul Weiss, eds, Cambridge: Harvard University Press, 1934, §589, p. 412. **4. DEFLATIONISM** Harry Frankfurt, _On Bullshit_ , Princeton: Princeton University Press, 2005. **6. SUMMARY OF PART I** William James, _Pragmatism_ , p. 197. **7. TRUTHS OF TASTE; TRUTH IN ART** Henry James, _Portraits of Places_ , London: Macmillan, 1883. Unless otherwise signalled, the quotations from Henry James are from this essay. T. S. Eliot, 'The Function of Criticism', in _The Complete Prose of T. S. Eliot, The Perfect Critic 1919–1926_ , A. Cuda & R. Schuchard, eds, Baltimore: Johns Hopkins University Press, 2014, p. 459. Hume, 'Of The Standard of Taste', in _Essays, Moral, Political and Literary_ , vol. 1, Eugene F.
Miller, ed., Indianapolis: Liberty Fund, p. 244. R. G. Collingwood, _The Principles of Art_ , Oxford: Oxford University Press, 1938, pp. 110f. Ibid., pp. 125–53. **8. TRUTH IN ETHICS** David Hume, _Enquiry Concerning the Principles of Morals_ , L. A. Selby-Bigge, ed., Oxford: Oxford University Press, 1975, §9, pp. 272–3. David Hume, _Enquiry Concerning the Principles of Morals_ , appendix 3, p. 306. **10. RELIGION AND TRUTH** The paragraphs that follow are owed to the scholarship and interpretation of Thomas Holden, 'Hobbes's First Cause', _Journal of the History of Philosophy_ , vol. 53, no. 4, 2015, pp. 647–68. Hobbes, _Leviathan_ , London 1651, Rod Hay, ed., for the McMaster University Archive of the History of Economic Thought, xxxi, p. 223. Hobbes, _Leviathan_ , xlvi, p. 423. Hobbes, _Critique du_ De Mundo _de Thomas White_ , J. Jacquot and H. W. Jones, eds, Paris: J. Vrin, 1973, xxxv §16, p. 32. Hume, _Enquiry Concerning Human Understanding_ , L. A. Selby-Bigge, ed., Oxford: Oxford University Press, 1975, §11, p. 142. **11. INTERPRETATIONS** Hobbes, _Leviathan_ , xlvi, p. 427. **FURTHER INVESTIGATIONS** Useful collections of classical readings and articles on truth include: Blackburn, Simon, & Simmons, Keith (eds), _Truth_. Oxford: Oxford University Press (1999) Horwich, Paul (ed.), _Theories of Truth_. New York: Dartmouth (1994) Lynch, Michael P. (ed.), _The Nature of Truth: Classic and Contemporary Perspectives_. Boston: The MIT Press (2001) Schmitt, Frederick F. (ed.), _Theories of Truth_. Oxford: Blackwell (2003) A valuable account of some of the ways in which truth and falsity appeared problematic in classical philosophy is given in: Denyer, Nicholas, _Language, Thought, and Falsehood in Ancient Greek Philosophy_. London: Routledge (1991) It was only at the end of the nineteenth century that articles and books with truth itself as a topic began to proliferate. Although the British Idealists, particularly F. H.
Bradley in his essay 'On Truth and Copying' ( _Mind_ , 1907) and H. H. Joachim in _The Nature of Truth_ (Oxford: Oxford University Press, 1906), had powerfully attacked any correspondence theory, attempts to make it work included: Russell, Bertrand, 'The Philosophy of Logical Atomism', in R. C. Marsh (ed.), _Logic and Knowledge_. London: Allen & Unwin (1956) Wittgenstein, Ludwig, _Tractatus Logico-Philosophicus_. London: Routledge (1922) In their hands, however, the correspondence theory required an involved and now discredited metaphysics. The idea of correspondence itself lives on, however, and later contributions include: Armstrong, D. M., _A World of States of Affairs_. Cambridge: Cambridge University Press (1997) Armstrong, D. M., _Truth and Truthmakers_. Cambridge: Cambridge University Press (2004) The relation between correspondence and deflationary accounts of truth is explored in: David, Marian, _Correspondence and Disquotation: An Essay on the Nature of Truth_. New York: Oxford University Press (1996) Merricks, Trenton, _Truth and Ontology_. Oxford: Oxford University Press (2007) In the philosophy of mathematics it is natural to suspect that there is no more to mathematical truth than provability, although technical results such as the famous incompleteness theorems of Kurt Gödel make this difficult to explain and defend. The essays collected in Michael Dummett's _Truth and Other Enigmas_ (Oxford: Clarendon Press, 1978) revolve around applying a similar approach to the relation between truth in other areas and assertibility. There is a parallel between this approach, which is similar to Peirce's prioritisation of method over truth, and the ethical theory in which virtue is a more fundamental concept than any good that is achieved by its exercise. Each approach privileges process over product. A collection on this parallel is: Battaly, Heather D. (ed.), _Virtue and Vice, Moral and Epistemic_. 
Oxford: Blackwell (2010) Further work in Dummett's direction is found in Crispin Wright's _Truth and Objectivity_ (Cambridge, MA: Harvard University Press, 1992). Wright's book also stimulated the view that different conceptions of truth could apply in different areas. A useful collection on this theme is: Pedersen, Nikolai, & Wright, Cory (eds), _Truth and Pluralism: Current Debates_. Oxford: Oxford University Press (2013) The earliest stirrings of a deflationary approach to truth can be found in Gottlob Frege's 'Thoughts', in his _Logical Investigations_ (Oxford: Blackwell, 1977), and 'The Thought: A Logical Inquiry' ( _Mind_ 65, 1956). F. P. Ramsey's paper 'Facts and Propositions' ( _Aristotelian Society Supplementary Volume_ 7, 1927) was another pioneering instance of the idea. Important later contributions include: Horwich, Paul, _Truth_. Oxford: Blackwell (1990) Quine, W. V. O., _Pursuit of Truth_. Cambridge, MA: Harvard University Press (1992) The idea of truth being indefinable was boosted by papers collected in: Davidson, Donald, _Inquiries Into Truth and Interpretation_ : _Philosophical Essays_ vol. 2. Oxford: Oxford University Press (2001) The Paradox of the Liar and its many offspring have generated a huge and generally technical literature. An accessible and interesting account can be found in: Simmons, Keith, _Universality and the Liar: An Essay on Truth and the Diagonal Argument_. Cambridge: Cambridge University Press (1993) The relevance of the theory of truth to modern or postmodern scepticism about the notion is explored in: Blackburn, Simon, _Truth: A Guide for the Perplexed_. London: Allen Lane & Penguin (2005) Nagel, Thomas, _The Last Word_. New York: Oxford University Press (1997) Williams, Bernard, _Truth and Truthfulness._ Princeton: Princeton University Press (2002) Truth is, inevitably, connected to issues in metaphysics and ontology.
A useful wide-ranging collection is: Chalmers, David, Manley, David, & Wasserman, Ryan (eds), _Metametaphysics: New Essays on the Foundations of Ontology_. New York: Oxford University Press (2009) Internet resources on the topic of truth include the Stanford Encyclopedia of Philosophy at <http://plato.stanford.edu/>, and the Philosophical Papers site at <http://philpapers.org/>. A video classic is the conversation between Sir Peter Strawson and Gareth Evans from 1973: Part 1 is at https://www.youtube.com/watch?v=BLV-eYacfbE and Part 2 at https://www.youtube.com/watch?v=w__pIcl_1rs. The interviews recorded at <http://www.philosophybites.com/> make another excellent resource for people finding their way into philosophy.

**OTHER TITLES IN THE IDEAS IN PROFILE SERIES**

Truth
Simon Blackburn
One of the world's leading thinkers on truth explains what it is, the different ways of approaching and understanding it, and why it matters.
ISBN 978 1 78125 722 7 eISBN 978 1 78283 292 8

Music
Andrew Gant
The best concise introduction to an essential subject.
ISBN 978 1 78125 642 8 eISBN 978 1 78283 251 5

Theories of Everything
Frank Close
An account of theories of everything in the past, present and future.
ISBN 978 1 78125 751 7 eISBN 978 1 78283 309 3

Geography
Danny Dorling & Carl Lee
A clear and accessible introduction to geography by two experts in the topic.
ISBN 978 1 78125 530 8 eISBN 978 1 78283 196 9

Criticism
Catherine Belsey
A snappy but serious introduction to criticism.
ISBN 978 1 78125 450 9 eISBN 978 1 78283 157 0 Politics David Runciman An accessible introduction to politics from David Runciman, Professor of Politics at Cambridge University. ISBN 978 1 78125 257 4 eISBN 978 1 78283 056 6 Art in History, 600 BC–2000 AD Martin Kemp An epic journey through the history of art from religious painting to postmodernism by one of the world's greatest art historians. ISBN 978 1 78125 336 6 eISBN 978 1 78283 102 0 The Ancient World Jerry Toner A sparkling introduction to the Ancient World that brings its peoples and cultures vividly to life. ISBN 978 1 78125 420 2 eISBN 978 1 78283 141 9 Shakespeare Paul Edmondson A short introduction to Shakespeare by expert Paul Edmondson. ISBN 978 1 78125 337 3 eISBN 978 1 78283 103 7 Social Theory William Outhwaite Social theory lets us understand the full complexity of the world we live in – this book explains how. ISBN 978 1 78125 481 3 eISBN 978 1 78283 175 4
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN""http://www.w3.org/TR/REC-html40/frameset.dtd"> <HTML> <HEAD> <meta name="generator" content="JDiff v1.0.9"> <!-- Generated by the JDiff Javadoc doclet --> <!-- (http://www.jdiff.org) --> <meta name="description" content="JDiff is a Javadoc doclet which generates an HTML report of all the packages, classes, constructors, methods, and fields which have been removed, added or changed in any way, including their documentation, when two APIs are compared."> <meta name="keywords" content="diff, jdiff, javadiff, java diff, java difference, API difference, difference between two APIs, API diff, Javadoc, doclet"> <TITLE> org.apache.hadoop.http.HttpServer </TITLE> <LINK REL="stylesheet" TYPE="text/css" HREF="../stylesheet-jdiff.css" TITLE="Style"> </HEAD> <BODY> <!-- Start of nav bar --> <TABLE summary="Navigation bar" BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0"> <TR> <TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <TABLE summary="Navigation bar" BORDER="0" CELLPADDING="0" CELLSPACING="3"> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../api/org/apache/hadoop/http/HttpServer.html" target="_top"><FONT CLASS="NavBarFont1"><B><tt>hadoop 0.20.2-cdh3u5</tt></B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="changes-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="pkg_org.apache.hadoop.http.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="jdiff_statistics.html"><FONT CLASS="NavBarFont1"><B>Statistics</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="jdiff_help.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD> </TR> </TABLE> </TD> <TD 
ALIGN="right" VALIGN="top" ROWSPAN=3><EM><b>Generated by<br><a href="http://www.jdiff.org" class="staysblack" target="_top">JDiff</a></b></EM></TD> </TR> <TR> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell2"><FONT SIZE="-2"> &nbsp;<A HREF="org.apache.hadoop.http.FilterInitializer.html"><B>PREV CLASS</B></A> &nbsp;<B>NEXT CLASS</B>&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <A HREF="../changes.html" TARGET="_top"><B>FRAMES</B></A> &nbsp; &nbsp;<A HREF="org.apache.hadoop.http.HttpServer.html" TARGET="_top"><B>NO FRAMES</B></A></FONT></TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell3"><FONT SIZE="-2"> DETAIL: &nbsp; <a href="#constructors">CONSTRUCTORS</a>&nbsp;|&nbsp; <a href="#methods">METHODS</a>&nbsp;|&nbsp; <a href="#fields">FIELDS</a> </FONT></TD> </TR> </TABLE> <HR> <!-- End of nav bar --> <H2> Class org.apache.hadoop.http.<A HREF="../../api/org/apache/hadoop/http/HttpServer.html" target="_top"><tt>HttpServer</tt></A> </H2> <a NAME="constructors"></a> <p> <a NAME="Added"></a> <TABLE summary="Added Constructors" BORDER="1" CELLPADDING="3" CELLSPACING="0" WIDTH="100%"> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD VALIGN="TOP" COLSPAN=2><FONT SIZE="+1"><B>Added Constructors</B></FONT></TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.ctor_added(java.lang.String, java.lang.String, int, boolean, org.apache.hadoop.conf.Configuration, org.apache.hadoop.security.authorize.AccessControlList)"></A> <nobr><A HREF="../../api/org/apache/hadoop/http/HttpServer.html#HttpServer(java.lang.String, java.lang.String, int, boolean, org.apache.hadoop.conf.Configuration, org.apache.hadoop.security.authorize.AccessControlList)" target="_top"><tt>HttpServer</tt></A>(<code>String,</nobr> String<nobr>,</nobr> int<nobr>,</nobr> boolean<nobr>,</nobr> Configuration<nobr>,</nobr> AccessControlList<nobr><nobr></code>)</nobr> </TD> <TD VALIGN="TOP">Create a status server on the given port.</TD> </TR> <TR 
BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.ctor_added(java.lang.String, java.lang.String, int, boolean, org.apache.hadoop.conf.Configuration, org.apache.hadoop.security.authorize.AccessControlList, org.mortbay.jetty.Connector)"></A> <nobr><A HREF="../../api/org/apache/hadoop/http/HttpServer.html#HttpServer(java.lang.String, java.lang.String, int, boolean, org.apache.hadoop.conf.Configuration, org.apache.hadoop.security.authorize.AccessControlList, org.mortbay.jetty.Connector)" target="_top"><tt>HttpServer</tt></A>(<code>String,</nobr> String<nobr>,</nobr> int<nobr>,</nobr> boolean<nobr>,</nobr> Configuration<nobr>,</nobr> AccessControlList<nobr>,</nobr> Connector<nobr><nobr></code>)</nobr> </TD> <TD>&nbsp;</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.ctor_added(java.lang.String, java.lang.String, int, boolean, org.apache.hadoop.conf.Configuration, org.mortbay.jetty.Connector)"></A> <nobr><A HREF="../../api/org/apache/hadoop/http/HttpServer.html#HttpServer(java.lang.String, java.lang.String, int, boolean, org.apache.hadoop.conf.Configuration, org.mortbay.jetty.Connector)" target="_top"><tt>HttpServer</tt></A>(<code>String,</nobr> String<nobr>,</nobr> int<nobr>,</nobr> boolean<nobr>,</nobr> Configuration<nobr>,</nobr> Connector<nobr><nobr></code>)</nobr> </TD> <TD>&nbsp;</TD> </TR> </TABLE> &nbsp; <a NAME="methods"></a> <p> <a NAME="Added"></a> <TABLE summary="Added Methods" BORDER="1" CELLPADDING="3" CELLSPACING="0" WIDTH="100%"> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD VALIGN="TOP" COLSPAN=2><FONT SIZE="+1"><B>Added Methods</B></FONT></TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.addInternalServlet_added(java.lang.String, java.lang.String, java.lang.Class, boolean, boolean)"></A> <nobr><code>void</code>&nbsp;<A 
HREF="../../api/org/apache/hadoop/http/HttpServer.html#addInternalServlet(java.lang.String, java.lang.String, java.lang.Class, boolean, boolean)" target="_top"><tt>addInternalServlet</tt></A>(<code>String,</nobr> String<nobr>,</nobr> Class<nobr>,</nobr> boolean<nobr>,</nobr> boolean<nobr><nobr></code>)</nobr> </TD> <TD VALIGN="TOP">Add an internal servlet in the server specifying whether or not to protect with Kerberos authentication.</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.addJerseyResourcePackage_added(java.lang.String, java.lang.String)"></A> <nobr><code>void</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#addJerseyResourcePackage(java.lang.String, java.lang.String)" target="_top"><tt>addJerseyResourcePackage</tt></A>(<code>String,</nobr> String<nobr><nobr></code>)</nobr> </TD> <TD VALIGN="TOP">Add a Jersey resource package.</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.addSslListener_added(java.net.InetSocketAddress, org.apache.hadoop.conf.Configuration, boolean, boolean)"></A> <nobr><code>void</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#addSslListener(java.net.InetSocketAddress, org.apache.hadoop.conf.Configuration, boolean, boolean)" target="_top"><tt>addSslListener</tt></A>(<code>InetSocketAddress,</nobr> Configuration<nobr>,</nobr> boolean<nobr>,</nobr> boolean<nobr><nobr></code>)</nobr> </TD> <TD VALIGN="TOP">Configure an ssl listener on the server.</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.createDefaultChannelConnector_added()"></A> <nobr><code>Connector</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#createDefaultChannelConnector()" target="_top"><tt>createDefaultChannelConnector</tt></A>()</nobr> </TD> <TD>&nbsp;</TD> </TR> <TR 
BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.hasAdministratorAccess_added(javax.servlet.ServletContext, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse)"></A> <nobr><code>boolean</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#hasAdministratorAccess(javax.servlet.ServletContext, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse)" target="_top"><tt>hasAdministratorAccess</tt></A>(<code>ServletContext,</nobr> HttpServletRequest<nobr>,</nobr> HttpServletResponse<nobr><nobr></code>)</nobr> </TD> <TD VALIGN="TOP">Does the user sending the HttpServletRequest has the administrator ACLs If it isn't the case response will be modified to send an error to the user.</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.isInstrumentationAccessAllowed_added(javax.servlet.ServletContext, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse)"></A> <nobr><code>boolean</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#isInstrumentationAccessAllowed(javax.servlet.ServletContext, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse)" target="_top"><tt>isInstrumentationAccessAllowed</tt></A>(<code>ServletContext,</nobr> HttpServletRequest<nobr>,</nobr> HttpServletResponse<nobr><nobr></code>)</nobr> </TD> <TD VALIGN="TOP">Checks the user has privileges to access to instrumentation servlets.</TD> </TR> </TABLE> &nbsp; <p> <a NAME="Changed"></a> <TABLE summary="Changed Methods" BORDER="1" CELLPADDING="3" CELLSPACING="0" WIDTH="100%"> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD VALIGN="TOP" COLSPAN=3><FONT SIZE="+1"><B>Changed Methods</B></FONT></TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A 
NAME="org.apache.hadoop.http.HttpServer.createBaseListener_changed(org.apache.hadoop.conf.Configuration)"></A> <nobr><code>Connector</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#createBaseListener(org.apache.hadoop.conf.Configuration)" target="_top"><tt>createBaseListener</tt></A>(<code>Configuration</code>) </nobr> </TD> <TD VALIGN="TOP" WIDTH="30%"> Change of visibility from protected to public.<br> </TD> <TD VALIGN="TOP">Create a required listener for the Jetty instance listening on the port provided.</TD> </TR> </TABLE> &nbsp; <a NAME="fields"></a> <p> <a NAME="Added"></a> <TABLE summary="Added Fields" BORDER="1" CELLPADDING="3" CELLSPACING="0" WIDTH="100%"> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TD VALIGN="TOP" COLSPAN=2><FONT SIZE="+1"><B>Added Fields</B></FONT></TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.BIND_ADDRESS"></A> <nobr><code>String</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#BIND_ADDRESS" target="_top"><tt>BIND_ADDRESS</tt></A></nobr> </TD> <TD>&nbsp;</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.CONF_CONTEXT_ATTRIBUTE"></A> <nobr><code>String</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#CONF_CONTEXT_ATTRIBUTE" target="_top"><tt>CONF_CONTEXT_ATTRIBUTE</tt></A></nobr> </TD> <TD>&nbsp;</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.KRB5_FILTER"></A> <nobr><code>String</code>&nbsp;<A HREF="../../api/org/apache/hadoop/http/HttpServer.html#KRB5_FILTER" target="_top"><tt>KRB5_FILTER</tt></A></nobr> </TD> <TD>&nbsp;</TD> </TR> <TR BGCOLOR="#FFFFFF" CLASS="TableRowColor"> <TD VALIGN="TOP" WIDTH="25%"> <A NAME="org.apache.hadoop.http.HttpServer.SPNEGO_FILTER"></A> <nobr><code>String</code>&nbsp;<A 
HREF="../../api/org/apache/hadoop/http/HttpServer.html#SPNEGO_FILTER" target="_top"><tt>SPNEGO_FILTER</tt></A></nobr> </TD> <TD>&nbsp;</TD> </TR> </TABLE> &nbsp; <HR> <!-- Start of nav bar --> <TABLE summary="Navigation bar" BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0"> <TR> <TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <TABLE summary="Navigation bar" BORDER="0" CELLPADDING="0" CELLSPACING="3"> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../api/org/apache/hadoop/http/HttpServer.html" target="_top"><FONT CLASS="NavBarFont1"><B><tt>hadoop 0.20.2-cdh3u5</tt></B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="changes-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="pkg_org.apache.hadoop.http.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="jdiff_statistics.html"><FONT CLASS="NavBarFont1"><B>Statistics</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="jdiff_help.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD> </TR> </TABLE> </TD> <TD ALIGN="right" VALIGN="top" ROWSPAN=3></TD> </TR> <TR> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell2"><FONT SIZE="-2"> &nbsp;<A HREF="org.apache.hadoop.http.FilterInitializer.html"><B>PREV CLASS</B></A> &nbsp;<B>NEXT CLASS</B>&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <A HREF="../changes.html" TARGET="_top"><B>FRAMES</B></A> &nbsp; &nbsp;<A HREF="org.apache.hadoop.http.HttpServer.html" TARGET="_top"><B>NO FRAMES</B></A></FONT></TD> <TD BGCOLOR="0xFFFFFF" CLASS="NavBarCell3"></TD> </TR> </TABLE> <HR> <!-- End of nav bar --> </BODY> </HTML>
\section{Introduction} All the graphs in this paper are simple and finite. The vertex set and edge set of a graph $G$ will be denoted by $V(G)$ and $E(G)$ respectively. \begin {dfn}\label{Intro} An $r$-\emph {matching} in a graph $G$ is a set of $r$ edges, no two of which have a vertex in common. The number of $r$-matchings in $ G$ will be denoted by $p( G,r)$. We set $p(G,0)=1$ and define the \emph {matching polynomial} of $G$ by \begin {equation} \mu ( G,x)=\sum_{r=0}^{\lfloor n/2\rfloor} (-1)^rp(G,r)x^{n-2r}.\notag \end {equation} Let $u\in V(G)$. The graph obtained from $G$ by deleting the vertex $u$ and all edges that contain $u$ will be denoted by $G\setminus u$. Inductively, if $u_1,\dots, u_k\in V(G)$ then $G\setminus u_1\dots u_k=(G\setminus u_1\dots u_{k-1})\setminus u_k$. Note that the order in which the vertices are being deleted is not important, that is, if $i_1,\dots, i_k$ is a permutation of $1,\dots, k$, we have $G\setminus u_1\dots u_k= G\setminus u_{i_1}\dots u_{i_k}$. Furthermore if $X=\{u_1,\dots, u_k\}$, we write $G\setminus X=G\setminus u_1\dots u_k$. Similarly, if $H$ is a subgraph of $G$ and $V(H)=\{v_1,\dots, v_k\}$, we write $G\setminus H=G\setminus v_1\dots v_k$. Let $e_1,e_2,\dots, e_k\in E(G)$. We shall denote the graph obtained from $G$ by deleting the edges $e_1,e_2,\dots, e_k$ by $G-e_1e_2\dots e_k$. \end {dfn} It is well known that all roots of $\mu(G,x)$ are real. Throughout, let $\theta$ be a real number and $\textnormal {mult} (\theta, G)$ denote the multiplicity of $\theta$ as a root of $\mu(G,x)$. In particular, $\textnormal {mult} (\theta, G)=0$ if and only if $\theta$ is not a root of $\mu(G,x)$; for $\theta=0$, $\textnormal{mult}(0,G)=0$ if and only if $G$ has a perfect matching. In the literature, $\textnormal{mult}(0,G)$ is also known as the {\em deficiency} of $G$, usually denoted by $\textnormal{def}(G)$. \begin {lm}\label{interlacing}\textnormal{\cite[Corollary 1.3 on p. 97]{G0} (Interlacing)} Let $ G$ be a graph and $u\in V( G)$.
Then \begin {equation} \textnormal {mult} (\theta, G)-1\leq \textnormal {mult} (\theta, G\setminus u)\leq \textnormal {mult} (\theta, G)+1.\notag \end {equation} \end {lm} \noindent As a consequence of Lemma \ref {interlacing}, we can classify the vertices in a graph with respect to $\theta$ as follows: \begin {dfn}\label {P:D3}\textnormal {\cite [Section 3]{G}} For any $u\in V( G)$, \begin {itemize} \item [(a)] $u$ is $\theta$-\emph {essential} if $\textnormal {mult} (\theta, G\setminus u)=\textnormal {mult} (\theta, G)-1$, \item [(b)] $u$ is $\theta$-\emph {neutral} if $\textnormal {mult} (\theta, G\setminus u)=\textnormal {mult} (\theta, G)$, \item [(c)] $u$ is $\theta$-\emph {positive} if $\textnormal {mult} (\theta, G\setminus u)=\textnormal {mult} (\theta, G)+1$. \end {itemize} Furthermore if $u$ is not $\theta$-essential but is adjacent to some $\theta$-essential vertex, we say that $u$ is $\theta$-{\em special}. \end {dfn} The subgraph of $G$ induced by $\theta$-essential vertices plays an important role in the Gallai-Edmonds decomposition of $G$. Indeed, it consists of components such that every vertex is $\theta$-essential in each of the components. Such a component is called $\theta$-{\em critical}. It is worth noting that a connected graph is factor-critical if and only if it is $0$-critical. Recently, a graph operator $D(G)$, called the $D$-graph of $G$, was introduced by Bauer et al. \cite{BBMS} for graphs with a perfect matching. This notion was later extended to general graphs by Busch et al. \cite{BFK}. \begin {dfn}\label{D-graph} Let $G$ be a graph. The graph $D(G)$ is defined as follows: \begin {itemize} \item[(a)] $V(D(G))=V(G)$, and \item[(b)] $(x,y) \in E(D(G))$ if and only if $\textnormal{mult}(0, G \setminus xy) \le \textnormal{mult}(0, G)$. \end {itemize} \end {dfn} Let $X$ be a subset of $V(G)$. 
Recall that $X$ is a {\em Tutte set} in $G$ if $\omega_{o}(G\setminus X)=\textnormal{mult}(0, G)+|X|$, where $\omega_{o}(G)$ denotes the number of odd components of $G$. Another standard term for Tutte set in the literature is {\em barrier} (see \cite{Lo}). If $\textnormal{mult}(0, G \setminus X) = \textnormal{mult}(0,G)+|X|$, we say that $X$ is an {\em extreme set} in $G$. The following theorem summarizes the main structural result in \cite{BBMS} and \cite{BFK}: \begin {thm}\label{Tutte} Let $G$ be a graph and $X \subseteq V(G)$, $\vert X\vert>1$. The followings are equivalent: \begin {itemize} \item[(a)] $X$ is a maximal Tutte set in $G$, \item[(b)] $X$ is a maximal extreme set in $G$, \item[(c)] $X$ is a maximal independent set in $D(G)$. \end {itemize} \end {thm} The above has proven useful in investigating maximal Tutte sets. For example, it has been instrumental in determining the complexity of finding maximum Tutte sets for several interesting classes of graphs \cite{BBKMSS}. To generalize the preceding result for nonzero real $\theta$, we need a $\theta$-analogue of $D(G)$. The following is a natural generalization of $D(G)$ for general $\theta$: \begin {dfn}\label{D-graph-2} Let $G$ be a graph and $\theta$ be a real number. The graph $D_{\theta}(G)$ is defined as follows: \begin {itemize} \item[(a)] $V(D_{\theta}(G))=V(G)$, and \item[(b)] $(x,y) \in E(D_{\theta}(G))$ if and only if $\textnormal{mult}(\theta, G \setminus xy) \le \textnormal{mult}(\theta, G)$. \end {itemize} \end {dfn} We also require a $\theta$-analogue of Tutte sets and extreme sets. The corresponding definitions were first introduced in \cite{KW}: \begin {dfn} Suppose $X \subseteq V(G)$. \begin {itemize} \item[(a)] $X$ is a $\theta$-Tutte set if $c_{\theta}(G \setminus X) = \textnormal{mult}(\theta, G) + |X|$, where $c_{\theta}(G)$ denotes the number of $\theta$-critical components of $G$. 
\item[(b)] $X$ is a $\theta$-extreme set if $\textnormal{mult}(\theta, G \setminus X) = \textnormal{mult}(\theta, G)+|X|$. \end {itemize} \end {dfn} Note that the definitions of $0$-extreme set and extreme set coincide. But the definitions of $0$-Tutte set and Tutte set are different. Nevertheless, the definition of a $\theta$-Tutte set is not unmotivated. Indeed, it is motivated by a $\theta$-analogue of Berge's formula proved by the authors in \cite{KW}. Interested readers may refer to \cite{KW} for a more detailed description of $\theta$-Tutte sets and $\theta$-extreme sets. One of our main results is the following: \begin {thm}\label{Main01} Let $G$ be a graph, $X \subseteq V(G)$, $|X|>1$, and $\theta$ be a real number. The following are equivalent: \begin {itemize} \item[(a)] $X$ is a maximal $\theta$-Tutte set in $G$, \item[(b)] $X$ is a maximal $\theta$-extreme set in $G$, \item[(c)] $\textnormal{mult}(\theta, G \setminus uv)=\textnormal{mult}(\theta, G)+2$ for any $u, v \in X$, $u \not = v$. \end {itemize} \end {thm} It is clear that conditions (b) of Theorem \ref{Main01} and Theorem \ref{Tutte} are the same when $\theta=0$. In fact, we shall see later that conditions (a) and (c) of Theorem \ref{Main01} and Theorem \ref{Tutte} are also equivalent when $\theta=0$. Therefore, Theorem \ref{Main01} can be regarded as an extension of Theorem \ref{Tutte} to general $\theta$. In this paper, we introduce another related graph $S_{\theta}(G)$ which is a supergraph of $G$ obtained by joining any $\theta$-special vertex to all the other vertices in $G$. Note that if $G$ has no $\theta$-special vertices then $S_{\theta}(G)=G$. We shall establish the following: \begin {thm}\label{Main03} Let $G$ be a graph and $\theta$ be a real number. Then $G$ and $S_{\theta}(G)$ have the same Gallai-Edmonds decomposition.
\end {thm} \begin {thm}\label{Main04} If $G$ and $G'$ have the same Gallai-Edmonds decomposition with respect to $\theta$, then $D_{\theta}(G)\cong D_{\theta}(G')$. In particular, $D_{\theta}(G)=D_{\theta}(S_{\theta}(G))$. \end {thm} It was also proved in \cite{BBMS} and \cite{BFK} that $D(G)$ contains an isomorphic copy of $G$. In general, $D_{\theta}(G)$ does not contain an isomorphic copy of $G$. However, we can prove the following: \begin {thm}\label{Main02} Given any $\theta$-extreme set $X$ of $G$ with $|X|>1$, there exists an independent set $Y$ disjoint from $X$ such that $X$ is matchable to $Y$ and $D_{\theta}(G)$ contains an isomorphic copy of the subgraph of $G$ induced by $X \cup Y$. \end {thm} Recall that a set $X$ is {\em matchable} to a set $Y$ if there is a matching in $G$ which matches every vertex of $X$ to a vertex in $Y$. The $D$-graph $D(G)$ demonstrates interesting properties when iterated; in particular, it converges very quickly regardless of the structure of the underlying graph $G$, that is, $D(D(D(G))) \equiv D(D(G))$ (see \cite{BBMS, BFK}). At present, we do not know whether such a property also holds for the $D_{\theta}$-operator. The outline of this paper is as follows. In Section 2, we list some basic properties of the matching polynomial and describe the Gallai-Edmonds decomposition for general root $\theta$, which is an important tool for the rest of the paper. Theorem \ref{Main01} is proved in Section 3. In Section 4, we prove Theorem \ref{Main03}, which consequently allows us to establish Theorem \ref{Main04} in Section 5. Finally, in Section 6, we relate $\theta$-extreme sets with matchings and independent sets and prove Theorem \ref{Main02}. \section{Gallai-Edmonds Decomposition} The following are some basic properties of $\mu (G,x)$. \begin {thm}\label{basic_property} \textnormal { \cite[Theorem 1.1 on p.
2] {G0}} \begin {itemize} \item [(a)] $\mu ( G\cup H,x)=\mu ( G,x)\mu ( H,x)$ where $ G$ and $ H$ are disjoint graphs, \item [(b)] $\mu ( G,x)=\mu ( G-e,x)-\mu ( G\setminus uv, x)$ if $e=(u,v)$ is an edge of $ G$, \item [(c)] $\mu ( G,x)=x\mu ( G\setminus u,x)-\sum_{i\sim u} \mu ( G\setminus ui,x)$ where $i\sim u$ means $i$ is adjacent to $u$, \item [(d)] $\displaystyle \frac {d}{dx} \mu ( G,x)=\sum_{i\in V( G)} \mu ( G\setminus i,x)$ where $V( G)$ is the vertex set of $ G$. \end {itemize} \end {thm} Note that if $\textnormal {mult} (\theta, G)=0$ then for any $u\in V(G)$, $u$ is either $\theta$-neutral or $\theta$-positive and no vertices in $G$ can be $\theta$-special. By \cite[Corollary 4.3]{G}, a $\theta$-special vertex is $\theta$-positive. Therefore \begin {equation} V(G)=B_{\theta}(G)\cup A_{\theta}(G)\cup P_{\theta}(G)\cup N_{\theta}(G),\notag \end {equation} where \begin {itemize} \item [] $B_{\theta}(G)$ is the set of all $\theta$-essential vertices in $G$, \item [] $A_{\theta}(G)$ is the set of all $\theta$-special vertices in $G$, \item [] $N_{\theta}(G)$ is the set of all $\theta$-neutral vertices in $G$, \item [] $P_{\theta}(G)$ is the set of all $\theta$-positive vertices which are not $\theta$-special in $G$, \end {itemize} is a partition of $V(G)$. Note that there are no $0$-neutral vertices. So $N_0(G)=\varnothing$ and $V(G)=B_{0}(G)\cup A_{0}(G)\cup P_{0}(G)$. The Gallai-Edmonds Structure Theorem (henceforth the GEST) contains structural information of the above decomposition of $V(G)$ with respect to the root $\theta=0$ of $\mu (G,x)$. In \cite {KC}, Chen and Ku extended the GEST to any root $\theta$. It essentially consists of two lemmas: the $\theta$-Stability Lemma and the $\theta$-Gallai's Lemma. \begin {thm}\label {P:T5}\textnormal {\cite [Theorem 1.5]{KC} (The $\theta$-Stability Lemma)}\\ Let $G$ be a graph with $\theta$ a root of $\mu (G,x)$. 
If $u\in A_{\theta} (G)$ then \begin {itemize} \item [(i)] $B_{\theta}(G\setminus u)=B_{\theta}(G)$, \item [(ii)] $P_{\theta}(G\setminus u)=P_{\theta}(G)$, \item [(iii)] $N_{\theta}(G\setminus u)=N_{\theta}(G)$, \item [(iv)] $A_{\theta}(G\setminus u)=A_{\theta}(G)\setminus \{u\}$. \end {itemize} \end {thm} \begin {thm}\label {P:T6}\textnormal {\cite [Theorem 1.7]{KC} (The $\theta$-Gallai's Lemma)} \\ If $G$ is connected and $\theta$-critical then $\textnormal {mult} (\theta, G)=1$. \end {thm} By Theorem \ref {P:T5} and Theorem \ref {P:T6}, it is straightforward to deduce the following whose proof is omitted. \begin {cor}\label {P:C7}\ \begin {itemize} \item [(i)] $A_{\theta}(G\setminus A_{\theta}(G))=\varnothing$, $B_{\theta}(G\setminus A_{\theta}(G))=B_{\theta}(G)$, $P_{\theta}(G\setminus A_{\theta}(G))=P_{\theta}(G)$, and $N_{\theta}(G\setminus A_{\theta}(G))=N_{\theta}(G)$. \item [(ii)] $G\setminus A_{\theta}(G)$ has exactly $\vert A_{\theta}(G)\vert+\textnormal {mult} (\theta, G)$ $\theta$-critical components. \item [(iii)] If $H$ is a component of $G\setminus A_{\theta}(G)$ then either $H$ is $\theta$-critical or $\textnormal {mult} (\theta, H)=0$. \item [(iv)] The subgraph induced by $B_{\theta}(G)$ consists of all the $\theta$-critical components in $G\setminus A_{\theta}(G)$. \end {itemize} \end {cor} \section{The Structure of Maximal $\theta$-Tutte Sets} In this section, we study the structure of maximal $\theta$-Tutte sets. We first establish a characterization of these sets in their relation to $\theta$-extreme sets. Let $X \subseteq V(G)$. By interlacing (Lemma \ref{interlacing}), it is immediate that $\textnormal{mult}(\theta, G \setminus X) \le \textnormal{mult}(\theta, G) + |X|$. On the other hand, by the $\theta$-Gallai's Lemma (Theorem \ref{P:T6}), we have $c_{\theta}(G \setminus X) \le \textnormal{mult}(\theta, G \setminus X)$. Therefore, if $X$ is a $\theta$-Tutte set, then it is also $\theta$-extreme. The converse is not true. 
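As a concrete illustration (this computational aside is not part of the original paper, and the helper names \texttt{matching\_poly} and \texttt{poly\_sub} are ours), the quantities used above can be checked by brute force on small graphs: the sketch below computes $\mu(G,x)$ directly from the counting definition, verifies the edge recurrence $\mu(G,x)=\mu(G-e,x)-\mu(G\setminus uv,x)$ of Theorem \ref{basic_property}(b) on the $4$-cycle, and reads off $\textnormal{mult}(0,G)$ as the smallest exponent carrying a nonzero coefficient.

```python
from itertools import combinations

def matching_poly(n, edges):
    """mu(G, x) for a graph on n vertices, as {exponent: coefficient},
    computed straight from the definition by enumerating r-matchings."""
    poly = {}
    for r in range(n // 2 + 1):
        # an r-matching is a set of r pairwise vertex-disjoint edges
        p = sum(1 for m in combinations(edges, r)
                if len({v for e in m for v in e}) == 2 * r)
        if p:
            poly[n - 2 * r] = (-1) ** r * p
    return poly

def poly_sub(a, b):
    """Coefficient-wise difference of two polynomials stored as dicts."""
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) - c
        if out[k] == 0:
            del out[k]
    return out

# The 4-cycle C4 on vertices 0..3, with e = (0, 1):
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
mu_c4 = matching_poly(4, c4)              # x^4 - 4x^2 + 2
mu_minus_e = matching_poly(4, c4[1:])     # C4 - e is the path P4: x^4 - 3x^2 + 1
mu_del_uv = matching_poly(2, [(2, 3)])    # C4 \ 01 is a single edge: x^2 - 1

# Edge recurrence: mu(G, x) = mu(G - e, x) - mu(G \ uv, x)
assert mu_c4 == poly_sub(mu_minus_e, mu_del_uv)

# mult(0, G) is the smallest exponent with a nonzero coefficient:
assert min(mu_c4) == 0                               # C4 has a perfect matching
assert min(matching_poly(3, [(0, 1), (1, 2)])) == 1  # the path P3 does not

# Deleting the centre of P3 raises mult(0) by one, so the centre
# alone forms a 0-extreme set.
assert min(matching_poly(2, [])) == 2
```

Enumerating matchings this way is exponential in $|E(G)|$, but it suffices for sanity-checking statements such as interlacing or extremality on small examples.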
Nevertheless, a maximal $\theta$-extreme set is always a maximal $\theta$-Tutte set. \begin {thm}\label{max-Tutte} Let $G$ be a graph and $\theta$ be a real number. A set $X$ is a maximal $\theta$-Tutte set in $G$ if and only if $X$ is a maximal $\theta$-extreme set in $G$. \end {thm} \begin{proof} It remains to show that if $X$ is a maximal $\theta$-extreme set in $G$, then $c_{\theta}(G \setminus X) = \textnormal{mult}(\theta, G) + |X|$. Notice that $G \setminus X$ has no $\theta$-positive vertices; otherwise, any $\theta$-positive vertex of $G \setminus X$ together with $X$ forms a larger $\theta$-extreme set containing $X$, violating the maximality of $X$. In particular, $B_{\theta}(G \setminus X) \cup N_{\theta}(G \setminus X) = V(G) \setminus X$. This means that if $H_{1}, \ldots, H_{s}$ are the components of $G \setminus X$ with $\theta$ as a root, then $V(H_{1}) \cup \cdots \cup V(H_{s}) = B_{\theta}(G \setminus X)$. By GEST for $\theta$, each $H_{j}$ is $\theta$-critical and satisfies $\textnormal{mult}(\theta, H_{j})=1$. Since $X$ is $\theta$-extreme, we obtain \[ \textnormal{mult}(\theta, G)+|X| = \textnormal{mult}(\theta, G \setminus X) = \sum_{j=1}^{s} \textnormal{mult}(\theta, H_{j}) = s = c_{\theta}(G \setminus X), \] and thus $X$ is a $\theta$-Tutte set in $G$. If $X$ is not a maximal $\theta$-Tutte set in $G$, then $X$ is properly contained in a $\theta$-Tutte set $Y$. But $Y$ would be a $\theta$-extreme set which properly contains $X$, violating the maximality of $X$. Hence, $X$ is a maximal $\theta$-Tutte set. \end{proof} It is worth noting that a $0$-Tutte set is always a Tutte set but the converse is not true (\cite[Proposition 2.3]{KW}). However, a maximal Tutte set is always a maximal $0$-Tutte set (\cite[Proposition 2.4]{KW}). We proceed to prove another characterization of maximal $\theta$-Tutte sets. Recall that $D_{\theta}(G)$ is completely determined by knowing the multiplicities of $\theta$ when deleting any two distinct vertices of $G$.
Moreover, by interlacing, these multiplicities lie between $\textnormal{mult}(\theta, G)-2$ and $\textnormal{mult}(\theta, G)+2$. This motivates the following terminology: \begin {dfn}\label {BP:D3} Let $G$ be a graph. We define the graph $D_{r,\theta}(G)$ for $r=-2,-1,0,1,2$ as follows: \begin {itemize} \item [(a)] $V(D_{r,\theta}(G))=V(G)$, and \item [(b)] $e=(u,v)\in E(D_{r,\theta}(G))$ if and only if $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+r$. \end {itemize} \end {dfn} Note that $D_{\theta}(G) = D_{-2, \theta}(G) \cup D_{-1, \theta}(G) \cup D_{0, \theta}(G)$. Note also that the powers of $x$ in the matching polynomial $\mu (G,x)$ are either all even or all odd. This implies that $\theta$ is a root of $\mu (G,x)$ if and only if $-\theta$ is. Also the powers of $x$ in the $n$-th derivative of $\mu (G,x)$ are either all even or all odd. From these we deduce that $\textnormal {mult} (\theta,G)=\textnormal {mult} (-\theta,G)$. Hence we have \begin {thm}\label {BP:T4a} $D_{r,\theta}(G)=D_{r,-\theta}(G)$. \end {thm} In view of Theorem \ref{Main01}, we further introduce the following definition: \begin {dfn}\label{nice_set_independent} A set $X \subseteq V(G)$ with $|X|>1$ is said to be $\theta$-{\em nice} in $G$ if $\textnormal{mult}(\theta, G \setminus uv)=\textnormal{mult}(\theta, G)+2$ for any $u, v \in X$, $u \not = v$. Clearly, if $X$ is $\theta$-nice then all its vertices are $\theta$-positive. Equivalently, a set $X$ is $\theta$-nice in $G$ if the subgraph $D_{2,\theta}(G)[X]$ of $D_{2, \theta}(G)$ induced by $X$ is complete. \end {dfn} It has been shown that $X$ is a $0$-extreme set if and only if $X$ is an independent set of $D_{0}(G)$, provided that $\vert X\vert>1$ (Theorem \ref{Tutte}). Recall that $N_{0}(G)=\varnothing$ and so $\textnormal {mult} (0,G\setminus uv)-\textnormal {mult} (0,G)\in \{-2,0,2\}$ for all distinct $u,v\in V(G)$ (by interlacing (Lemma \ref{interlacing})).
This implies that $X$ is an independent set of $D_{0}(G)$ if and only if $D_{2,0}(G)[X]$ is a complete graph. Hence, Theorem \ref{Tutte} can be reformulated as follows: \begin {thm}\label{extreme_zero} Let $G$ be a graph and $X \subseteq V(G)$, $|X|>1$. The following are equivalent: \begin {itemize} \item[(a)] $X$ is a maximal $0$-Tutte set in $G$, \item[(b)] $X$ is a maximal extreme set in $G$, \item[(c)] $X$ is a maximal complete subgraph in $D_{2,0}(G)$, that is, $X$ is a maximal $0$-nice set in $G$. \end {itemize} \end {thm} So it is quite natural to ask whether Theorem \ref{extreme_zero} holds for $\theta\neq 0$. Indeed, we shall prove that \begin {thm}\label{theta-max-Tutte} Let $G$ be a graph, $X \subseteq V(G)$, $|X|>1$ and $\theta$ be a real number. The following are equivalent: \begin {itemize} \item[(a)] $X$ is a maximal $\theta$-Tutte set in $G$, \item[(b)] $X$ is a maximal $\theta$-extreme set in $G$, \item[(c)] $X$ is a maximal complete subgraph in $D_{2,\theta}(G)$, that is, $X$ is a maximal $\theta$-nice set in $G$. \end {itemize} \end {thm} In view of Theorem \ref{max-Tutte}, it suffices to show that (b) and (c) of Theorem \ref{theta-max-Tutte} are equivalent. Using the fact that $X$ is a $\theta$-extreme set, it is not hard to prove the following proposition. \begin {prop}\label{one_side} Let $G$ be a graph and $X\subseteq V(G)$ with $\vert X\vert>1$. If $X$ is a $\theta$-extreme set then $X$ is $\theta$-nice. \end {prop} To complete the proof of Theorem \ref{theta-max-Tutte}, our aim for the rest of this section is to show that a $\theta$-nice set must be $\theta$-extreme. We shall need the following results. \begin{lm}\label{path_interlacing}\textnormal {\cite[Corollary 2.5]{G}} For any root $\theta$ of $\mu(G,x)$ and a path $P$ in $G$, \[ \textnormal{mult} (\theta, G \setminus P) \ge \textnormal{mult}(\theta, G)-1. \] \end{lm} \begin{lm}\label{godsil_positive}\textnormal {\cite[Theorem 4.2]{G}} Let $u$ be a $\theta$-positive vertex in $G$.
Then \begin{itemize} \item[\textnormal{(a)}] if $v$ is $\theta$-essential in $G$ then it is $\theta$-essential in $G \setminus u$, \item[\textnormal{(b)}] if $v$ is $\theta$-positive in $G$ then it is $\theta$-essential or $\theta$-positive in $G \setminus u$, \item[\textnormal{(c)}] if $v$ is $\theta$-neutral in $G$ then it is $\theta$-essential or $\theta$-neutral in $G \setminus u$. \end{itemize} \end{lm} \begin{rem} The assertions of Lemma \ref{godsil_positive}, excluding part (b), still hold even if $\theta$ is not a root of $\mu(G,x)$. \end{rem} The following corollary is an immediate consequence of part (a) of Lemma \ref{godsil_positive}. \begin{cor}\label{delete-essential-positive} Suppose $u$ is $\theta$-positive and $v$ is $\theta$-essential in $G$. Then $u$ remains $\theta$-positive in $G \setminus v$. \end{cor} \begin{lm}\label{all_positive} Let $u_1,u_2,\dots, u_k$ be $\theta$-positive vertices in $G$. Then $\textnormal{mult} (\theta, G\setminus u_1u_2\dots u_k)$ is either equal to $\textnormal{mult} (\theta, G)+k$ or at most $\textnormal{mult} (\theta, G)+k-2$. \end{lm} \begin{proof} We shall prove by induction on $k$. Clearly it is true for $k=1$. Suppose $k\geq 2$. Assume that it is true for $k-1$, that is to say, $\textnormal{mult} (\theta, G\setminus u_1u_2\dots u_{k-1})$ is either equal to $\textnormal{mult} (\theta, G)+k-1$ or at most $\textnormal{mult} (\theta, G)+k-3$. In the latter we are done by Lemma \ref{interlacing}. In the former, $u_{i}$ is $\theta$-positive in $G \setminus u_{1} \cdots u_{i-1}$ for all $1 \le i \le k-1$. By Lemma \ref{godsil_positive}, $u_{i}$ is either $\theta$-positive or $\theta$-essential in $G \setminus u_{1} \cdots u_{i-1}$ for all $1 \le i \le k$. In particular, $u_{k}$ is either $\theta$-positive or $\theta$-essential in $G \setminus u_{1} \cdots u_{k-1}$ whence $\textnormal{mult} (\theta, G \setminus u_{1} \cdots u_{k}) = \textnormal{mult} (\theta, G)+k$ or $\textnormal{mult} (\theta, G)+k-2$.
\end{proof} \begin{lm}\label{neutral_neighbor} Suppose $\theta \not = 0$ and $u$ is a $\theta$-essential vertex in $G$. Then $u$ has a neighbor which is $\theta$-neutral in $G \setminus u$. \end{lm} \begin{proof} By Lemma \ref{path_interlacing}, no neighbor of $u$ can be $\theta$-essential in $G \setminus u$; hence every neighbor of $u$ is either $\theta$-neutral or $\theta$-positive in $G \setminus u$. Suppose all neighbors of $u$ are $\theta$-positive in $G \setminus u$. Then, by comparing multiplicities of $\theta$ on both sides of the recurrence $\mu (G,x)=x\mu ( G\setminus u,x)-\sum_{v\sim u} \mu ( G\setminus uv,x)$ (part (c) of Theorem \ref{basic_property}) and the fact that $\theta \not = 0$, we observe that $\textnormal{mult} (\theta, G \setminus u) \ge \textnormal{mult} (\theta, G)$, contradicting the assumption that $u$ is $\theta$-essential in $G$. \end{proof} \begin{lm}\label{base_nice_case} Let $G$ be a graph and $X\subseteq V(G)$ with $\vert X\vert=3$. If $X$ is $\theta$-nice then $X$ is a $\theta$-extreme set. \end{lm} \begin{proof} The case $\theta=0$ is covered in Theorem \ref{extreme_zero}. So we may assume $\theta\neq 0$. Let $X=\{x_1,x_2,x_3\}$ and $\textnormal {mult} (\theta,G)=k$ (we allow $k=0$). Now $\textnormal {mult} (\theta,G\setminus x_2)=k+1$ and $\textnormal {mult} (\theta,G\setminus x_2x_3)=k+2=\textnormal {mult} (\theta,G\setminus x_2x_1)$. This implies that $x_1$ and $x_3$ are $\theta$-positive in $G\setminus x_2$. By Lemma \ref{godsil_positive}, $x_1$ is either $\theta$-positive or $\theta$-essential in $G\setminus x_2x_3$. If the former holds, then $\textnormal {mult} (\theta,G\setminus x_2x_3x_1)=k+3$ and $X$ is a $\theta$-extreme set. So we may assume the latter holds. Then $\textnormal {mult} (\theta,G\setminus x_2x_3x_1)=k+1$. By Lemma \ref{neutral_neighbor}, $x_1$ is adjacent to a vertex $z$ in $G\setminus x_2x_3$, where $z$ is $\theta$-neutral in $G\setminus x_2x_3x_1$. Therefore $\textnormal {mult} (\theta,G\setminus x_2x_3x_1z)=k+1$.
By part (b) of Theorem \ref{basic_property}, $\mu ( G\setminus x_2x_3,x)=\mu ( (G\setminus x_2x_3)-e,x)-\mu ( G\setminus x_2x_3x_1z, x)$ where $e=(x_1,z)$ is an edge of $G$. Since $\textnormal {mult} (\theta,G\setminus x_2x_3)=k+2$, we must have $\textnormal {mult} (\theta,(G\setminus x_2x_3)-e)=k+1$. \noindent {\bf Case 1.} Suppose $z$ is $\theta$-essential in $G\setminus x_1$. Then $\textnormal{mult} (\theta, G\setminus x_1z)=k$. Recall that $\textnormal{mult} (\theta, G\setminus x_1x_2)=k+2=\textnormal{mult} (\theta, G\setminus x_1x_3)$. This implies that $x_2$ and $x_3$ are $\theta$-positive in $G\setminus x_1$. By Corollary \ref{delete-essential-positive}, $x_2$ and $x_3$ are $\theta$-positive in $G\setminus x_1z$. By Lemma \ref{all_positive}, $\textnormal {mult} (\theta,G\setminus x_1zx_2x_3)$ is either equal to $\textnormal{mult} (\theta, G\setminus x_1z)+2=k+2$ or at most $k$, contrary to the fact that $\textnormal {mult} (\theta,G\setminus x_2x_3x_1z)=k+1$. \noindent {\bf Case 2.} Suppose $z$ is $\theta$-neutral or $\theta$-positive in $G\setminus x_1$. Then $\textnormal{mult} (\theta, G\setminus x_1z)\geq k+1$. By part (b) of Theorem \ref{basic_property}, $\mu (G,x)=\mu ( G-e,x)-\mu ( G\setminus x_1z, x)$. By comparing the multiplicity of $\theta$ as a root on both sides of the equation, we deduce that $\textnormal{mult} (\theta, G-e)=k$. Note that $(G-e)\setminus x_1x_2=G\setminus x_1x_2$ and $(G-e)\setminus x_1x_3=G\setminus x_1x_3$. Therefore $\textnormal{mult} (\theta, (G-e)\setminus x_1x_2)=k+2=\textnormal{mult} (\theta, (G-e)\setminus x_1x_3)$. This implies that $x_2$ and $x_3$ are $\theta$-positive in $G-e$. By Lemma \ref{all_positive}, $\textnormal{mult} (\theta, (G-e)\setminus x_2x_3)$ is either equal to $k+2$ or at most $k$, contrary to the fact that $\textnormal {mult} (\theta,(G\setminus x_2x_3)-e)=k+1$. Hence $\textnormal {mult} (\theta,G\setminus x_2x_3x_1)=k+3$ and $X$ is a $\theta$-extreme set.
\end{proof} \begin{thm}\label{general_nice_case} Let $G$ be a graph and $X\subseteq V(G)$ with $\vert X\vert>1$. Then $X$ is a $\theta$-extreme set if and only if $X$ is $\theta$-nice. \end{thm} \begin{proof} By Proposition \ref{one_side}, it is sufficient to prove that if $X$ is $\theta$-nice then $X$ is a $\theta$-extreme set. We shall prove by induction on $\vert X\vert$. Clearly it is true when $\vert X\vert=2$. Let $\vert X\vert\geq 3$. Assume that it is true for all $\theta$-nice sets $X'$ with $\vert X'\vert<\vert X\vert$. Let $a,b,c\in X$ and $X_1=X\setminus \{a,b,c\}$ ($X_1$ could be empty). Note that $X_1\cup \{a,b\}$, $X_1\cup \{a,c\}$ and $X_1\cup \{b,c\}$ are all $\theta$-nice sets. By induction, all of them are $\theta$-extreme sets. By using Lemma \ref{interlacing}, it is not hard to deduce that $\textnormal{mult} (\theta, G\setminus X_1)=\textnormal{mult} (\theta, G)+\vert X_1\vert$. Then $\{a,b,c\}$ is a $\theta$-nice set in $G\setminus X_1$. By Lemma \ref{base_nice_case}, $\{a,b,c\}$ is a $\theta$-extreme set in $G\setminus X_1$ and $\textnormal{mult} (\theta, G\setminus X)=\textnormal{mult} (\theta, G)+\vert X_1\vert+3=\textnormal{mult} (\theta, G)+\vert X\vert$. Hence $X$ is a $\theta$-extreme set. \end{proof} \section{The $S_{\theta}$-graphs} This section is devoted to the graph $S_{\theta}(G)$ which is a supergraph of $G$ obtained by joining any $\theta$-special vertex to all the other vertices. Formally, \begin {dfn}\label {BP:D3b} Let $G$ be a graph and $\theta$ be a real number. Then the graph $S_{\theta}(G)$ is defined by $V(S_{\theta}(G))=V(G)$ and $(w,z)\in E(S_{\theta}(G))$ if and only if $(w,z)\in E(G)$ or $w\in A_{\theta}(G)$ and $z\in V(G)\setminus \{w\}$. \end {dfn} We shall prove that the graphs $S_{\theta}(G)$ and $G$ have the same Gallai-Edmonds decomposition (Corollary \ref{similar_gallai_edmond}).
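Before developing the required lemmas, it may help to see the $S_{\theta}$-construction on a small example. The following computational sketch (Python, for illustration only; the graph $P_5$ and the value $\theta=1$ are our choices, not taken from the text) classifies each vertex as $\theta$-essential, $\theta$-neutral or $\theta$-positive according to the sign of $\textnormal{mult}(\theta,G\setminus v)-\textnormal{mult}(\theta,G)$, finds $A_1(P_5)=\{3\}$ (the centre, which is $1$-positive and adjacent to $1$-essential vertices), forms $S_1(P_5)$ by joining vertex $3$ to every other vertex, and checks that the multiplicity of $1$ and the vertex classification are unchanged, as Corollary \ref{similar_gallai_edmond} predicts.

```python
from itertools import combinations

def matching_poly(n, edges):
    # mu(G,x) = sum_k (-1)^k m_k x^(n-2k); c[i] is the coefficient of x^i
    c = [0] * (n + 1)
    for k in range(n // 2 + 1):
        m_k = sum(1 for S in combinations(edges, k)
                  if len({v for e in S for v in e}) == 2 * k)
        c[n - 2 * k] += (-1) ** k * m_k
    return c

def mult(theta, coeffs):
    # exact multiplicity of an integer theta as a root (repeated synthetic division)
    m, c = 0, list(coeffs)
    while len(c) > 1 and c[-1] == 0:
        c.pop()
    while True:
        q, acc = [], 0
        for a in reversed(c):
            acc = acc * theta + a
            q.append(acc)
        if q.pop() != 0 or not q:   # nonzero remainder: theta is no longer a root
            return m
        m, c = m + 1, list(reversed(q))

def mult_after_deleting(theta, vertices, edges, removed):
    removed = set(removed)
    vs = [v for v in vertices if v not in removed]
    es = [e for e in edges if not (set(e) & removed)]
    return mult(theta, matching_poly(len(vs), es))

def classify(theta, vertices, edges):
    # essential / neutral / positive according to mult(theta, G\v) - mult(theta, G)
    base = mult(theta, matching_poly(len(vertices), edges))
    names = {-1: "essential", 0: "neutral", 1: "positive"}
    return {v: names[mult_after_deleting(theta, vertices, edges, [v]) - base]
            for v in vertices}

V = [1, 2, 3, 4, 5]
P5 = [(1, 2), (2, 3), (3, 4), (4, 5)]

cls = classify(1, V, P5)
# special vertices: 1-positive and adjacent to a 1-essential vertex
A = {v for v in V if cls[v] == "positive"
     and any(cls[w] == "essential"
             for e in P5 if v in e for w in e if w != v)}

# S_1(P5): join every vertex of A (here just vertex 3) to all other vertices
S = P5 + [(a, v) for a in A for v in V
          if v != a and (a, v) not in P5 and (v, a) not in P5]
```

Running it, `A == {3}` and `classify(1, V, S) == cls` with `mult(1, matching_poly(5, S)) == 1`: the Gallai-Edmonds data of $P_5$ survives the $S_{\theta}$-operator, in this one instance, exactly as the corollary asserts.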
We require the following lemmas: \begin{lm}\label{existence_essential}\textnormal {\cite[Lemma 3.1]{G}} Suppose $\textnormal{mult}(\theta, G)>0$. Then $G$ contains at least one $\theta$-essential vertex. \end{lm} \begin{lm}\label{Ku-Chen-neutral}\textnormal {\cite[Proposition 2.9]{KC}} Let $u$ be a $\theta$-neutral vertex in $G$. Then \begin{itemize} \item[\textnormal{(a)}] if $v$ is $\theta$-positive in $G$ then it is $\theta$-positive or $\theta$-neutral in $G \setminus u$; \item[\textnormal{(b)}] if $v$ is $\theta$-essential in $G$ then it is $\theta$-essential in $G \setminus u$; \item[\textnormal{(c)}] if $v$ is $\theta$-neutral in $G$ then it is $\theta$-neutral or $\theta$-positive in $G \setminus u$. \end{itemize} \end{lm} \begin {thm}\label {BP:T3a} Let $G$ be a graph. Let $u\in A_{\theta}(G)$ and $v\in V(G)$ where $(u,v)\notin E(G)$. Let $G'$ be the graph with $V(G')=V(G)$ and $E(G')=E(G)\cup \{(u,v)\}$. Then $\textnormal {mult} (\theta,G')=\textnormal {mult} (\theta,G)$ and \begin {itemize} \item [(a)] $B_{\theta}(G')=B_{\theta}(G)$, \item [(b)] $P_{\theta}(G')=P_{\theta}(G)$, \item [(c)] $N_{\theta}(G')=N_{\theta}(G)$, \item [(d)] $A_{\theta}(G')=A_{\theta}(G)$. \end {itemize} \end {thm} \begin {proof} Let $\textnormal {mult} (\theta,G)=k$. Then $\textnormal {mult} (\theta,G\setminus u)=k+1$. Also by part (b) of Theorem \ref {basic_property}, \begin{equation} \mu (G',x)=\mu (G,x)-\mu (G\setminus uv,x).\tag {1} \end{equation} \noindent {\bf Case 1.} Suppose $v\in B_{\theta}(G)$. Then by part (i) of Theorem \ref {P:T5}, $\textnormal {mult} (\theta,G\setminus uv)=k$. We first show that $\textnormal {mult} (\theta,G')=\textnormal {mult} (\theta,G)$. By comparing the multiplicity of $\theta$ as a root on both sides of (1), we deduce that $\textnormal {mult} (\theta,G')\geq k$. Note that $\mu (G'\setminus v,x)=\mu (G\setminus v,x)$. So $\textnormal {mult} (\theta,G'\setminus v)=k-1$.
By Lemma \ref {interlacing}, $\textnormal {mult} (\theta,G')=k$. Now we show that $u\in A_{\theta}(G')$. Since $\textnormal {mult} (\theta,G'\setminus v)=k-1$, $v\in B_{\theta}(G')$. On the other hand, $\mu (G'\setminus u,x)=\mu (G\setminus u,x)$. So $\textnormal {mult} (\theta,G'\setminus u)=k+1$; since $u$ is adjacent to the $\theta$-essential vertex $v$, we have $u\in A_{\theta}(G')$. By part (i) of Theorem \ref {P:T5}, we have $B_{\theta}(G')=B_{\theta}(G'\setminus u)=B_{\theta}(G\setminus u)=B_{\theta}(G)$. The proof of part (a) is complete. Now parts (b), (c) and (d) follow easily from parts (ii), (iii) and (iv) of Theorem \ref {P:T5}. \noindent {\bf Case 2.} Suppose $v\in N_{\theta}(G)$. Then by part (iii) of Theorem \ref {P:T5}, $\textnormal {mult} (\theta,G\setminus uv)=k+1$. Using (1) again, we deduce that $\textnormal {mult} (\theta,G')=k=\textnormal {mult} (\theta,G)$. Now we show that $u\in A_{\theta}(G')$. Since $u\in A_{\theta}(G)$, $u$ is adjacent to a $\theta$-essential vertex $w$ in $G$. By part (i) of Theorem \ref {P:T5}, $w\in B_{\theta}(G\setminus u)$. Recall that $v\in N_{\theta}(G\setminus u)$. So, by part (b) of Lemma \ref{Ku-Chen-neutral}, $w\in B_{\theta}(G\setminus uv)$. Therefore $\textnormal {mult} (\theta,G\setminus uvw)=k$. Since $\textnormal {mult} (\theta,G\setminus w)=k-1$, we deduce from $\mu (G'\setminus w,x)=\mu (G\setminus w,x)-\mu (G\setminus uvw,x)$ (part (b) of Theorem \ref {basic_property}) that $\textnormal {mult} (\theta,G'\setminus w)=k-1$. Hence $u\in A_{\theta}(G')$. As before, parts (a), (b), (c) and (d) follow easily from parts (i), (ii), (iii) and (iv) of Theorem \ref {P:T5}. \noindent {\bf Case 3.} The case when $v\in A_{\theta}(G)\cup P_{\theta}(G)$ is proved similarly. \end {proof} Note that when $A_{\theta}(G)=\varnothing$, $S_{\theta}(G)=G$. Now by repeatedly applying Theorem \ref {BP:T3a}, we have the following corollary. \begin {cor}\label{similar_gallai_edmond} Let $G$ be a graph.
Then $\textnormal {mult} (\theta,S_{\theta}(G))=\textnormal {mult} (\theta,G)$ and \begin {itemize} \item [(a)] $B_{\theta}(S_{\theta}(G))=B_{\theta}(G)$, \item [(b)] $P_{\theta}(S_{\theta}(G))=P_{\theta}(G)$, \item [(c)] $N_{\theta}(S_{\theta}(G))=N_{\theta}(G)$, \item [(d)] $A_{\theta}(S_{\theta}(G))=A_{\theta}(G)$. \end {itemize} \end {cor} Two graphs $G$ and $G'$ are said to have the \emph{same Gallai-Edmonds decomposition} with respect to $\theta$ if there is a bijection $\psi :V(G)\rightarrow V(G')$ such that $\psi(A_{\theta}(G))=A_{\theta}(G')$ and the restriction of $\psi$ to $G\setminus A_{\theta}(G)$ is an isomorphism onto $G'\setminus A_{\theta}(G')$. Corollary \ref{similar_gallai_edmond} asserts that the Gallai-Edmonds decomposition of $G$ is stable under the $S_{\theta}$-operator. Since $G\setminus A_{\theta}(G)=S_{\theta}(G)\setminus A_{\theta}(S_{\theta}(G))$, we conclude that $G$ and $S_{\theta}(G)$ have the same Gallai-Edmonds decomposition with respect to $\theta$ and this proves Theorem \ref{Main03}. Corollary \ref{similar_gallai_edmond} also allows us, for the rest of this section, to predict the multiplicity of $\theta$ upon deleting two vertices of $S_{\theta}(G)$ in terms of the Gallai-Edmonds decomposition (see Corollary \ref{S:C6} below). \begin {lm}\label {S:L4a} Let $G$ be a graph and $S\subseteq V(G)$ be a set for which each $s\in S$ is adjacent to every other vertex in $G$. Suppose $\textnormal {mult} (\theta, G\setminus v)\geq 1$ with $v\in V(G)\setminus S$ and $s\notin B_{\theta}(G\setminus v)$ for all $s\in S$. Then $S$ is a $\theta$-extreme set in $G\setminus uv$ for all $u\in V(G)\setminus S$, $u\neq v$. \end {lm} \begin {proof} If $S=\varnothing$, we are done. Suppose $S\neq \varnothing$. By Lemma \ref{existence_essential}, $B_{\theta}(G\setminus v)\neq \varnothing$. Since $s$ is adjacent to every other vertex in $G$ and $s\notin B_{\theta}(G\setminus v)$, $s\in A_{\theta}(G\setminus v)$.
Hence $S\subseteq A_{\theta}(G\setminus v)$ and by part (iv) of Theorem \ref {P:T5}, \begin {equation} \textnormal {mult} (\theta, (G\setminus v)\setminus S)=\textnormal {mult} (\theta, G\setminus v)+\vert S\vert.\notag \end {equation} Suppose $u\in B_{\theta}(G\setminus v)$. Then $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G\setminus v)-1$. By part (i) of Theorem \ref {P:T5}, $u\in B_{\theta}((G\setminus v)\setminus S)$. So $\textnormal {mult} (\theta, (G\setminus uv)\setminus S)=\textnormal {mult} (\theta, G\setminus v)+\vert S\vert-1=\textnormal {mult} (\theta, G\setminus uv)+\vert S\vert$ and $S$ is a $\theta$-extreme set in $G\setminus uv$. The case $u\in A_{\theta}(G\setminus v)\cup N_{\theta}(G\setminus v)\cup P_{\theta}(G\setminus v)$ is proved similarly. \end {proof} \begin {thm}\label{extreme_set_in_S} Let $G$ be a graph. Let $H_1,\dots, H_q,Q_1,\dots, Q_m$ be all the components in $G\setminus A_{\theta}(G)$, where $H_i$ is $\theta$-critical for all $i$ and $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. Suppose \begin {itemize} \item [(a)] $u\in V(G)\setminus A_{\theta}(G)$ and $v\in V(Q_{j_0})$ for some $j_0$, or \item [(b)] $u,v\in V(H_{i_0})$ for some $i_0$, or \item [(c)] $\textnormal {mult} (\theta, G)\geq 2$, $u\in V(H_{i_1})$ and $v\in V(H_{i_2})$ for some $i_1,i_2$ and $i_1\neq i_2$. \end {itemize} Then $A_{\theta}(G)$ is a $\theta$-extreme set in $S_{\theta}(G)\setminus uv$. \end {thm} \begin {proof} By Corollary \ref {similar_gallai_edmond}, $A_{\theta}(G)=A_{\theta}(S_{\theta}(G))$. If $A_{\theta}(S_{\theta}(G))=\varnothing$, we are done. So we may assume $A_{\theta}(S_{\theta}(G))\neq \varnothing$. This also means that $\textnormal {mult} (\theta, S_{\theta}(G))\geq 1$. Note that $S_{\theta}(G)\setminus A_{\theta}(S_{\theta}(G))=G\setminus A_{\theta}(G)$.
So $H_1,\dots, H_q,Q_1,\dots, Q_m$ are all the components in $S_{\theta}(G)\setminus A_{\theta}(S_{\theta}(G))$, where $H_i$ is $\theta$-critical for all $i$ and $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. By Lemma \ref {S:L4a}, it is sufficient to show that $\textnormal {mult} (\theta, S_{\theta}(G)\setminus v)\geq 1$ and $w\notin B_{\theta}(S_{\theta}(G)\setminus v)$ for all $w\in A_{\theta}(S_{\theta}(G))$. \noindent (a) By Theorem \ref {P:T5}, $v\in N_{\theta}(G)\cup P_{\theta}(G)$, and by Corollary \ref {similar_gallai_edmond}, $v\in N_{\theta}(S_{\theta}(G))\cup P_{\theta}(S_{\theta}(G))$. Therefore $\textnormal {mult} (\theta, S_{\theta}(G)\setminus v)\geq 1$. Let $w\in A_{\theta}(S_{\theta}(G))$. By Theorem \ref {P:T5}, $v\in N_{\theta}(S_{\theta}(G)\setminus w)\cup P_{\theta}(S_{\theta}(G)\setminus w)$. Therefore $\textnormal {mult} (\theta, S_{\theta}(G)\setminus wv)\geq \textnormal {mult} (\theta, S_{\theta}(G)\setminus w)\geq \textnormal {mult} (\theta, S_{\theta}(G)\setminus v)$. This implies that $w\notin B_{\theta}(S_{\theta}(G)\setminus v)$. Hence $w\notin B_{\theta}(S_{\theta}(G)\setminus v)$ for all $w\in A_{\theta}(S_{\theta}(G))$. \noindent (b) and (c). Suppose $\textnormal {mult} (\theta, G)\geq 2$ and $v\in V(H_{i_0})$ for some $i_0$. By Theorem \ref {P:T5} and Corollary \ref {similar_gallai_edmond}, $v\in B_{\theta}(S_{\theta}(G))$ and $\textnormal {mult} (\theta, S_{\theta}(G))\geq 2$. Therefore $\textnormal {mult} (\theta, S_{\theta}(G)\setminus v)=\textnormal {mult} (\theta, S_{\theta}(G))-1\geq 1$. Let $w\in A_{\theta}(S_{\theta}(G))$. By Theorem \ref {P:T5}, $v\in B_{\theta}(S_{\theta}(G)\setminus w)$. So $\textnormal {mult} (\theta, S_{\theta}(G)\setminus wv)=\textnormal {mult} (\theta, S_{\theta}(G)\setminus w)-1= \textnormal {mult} (\theta, S_{\theta}(G))>\textnormal {mult} (\theta, S_{\theta}(G)\setminus v)$. This implies that $w\notin B_{\theta}(S_{\theta}(G)\setminus v)$.
Hence $w\notin B_{\theta}(S_{\theta}(G)\setminus v)$ for all $w\in A_{\theta}(S_{\theta}(G))$. It only remains to show case (b) with $\textnormal {mult} (\theta, G)=1$ and $v\in V(H_{i_0})$ for some $i_0$. Note that $v\in B_{\theta}(S_{\theta}(G))$ and $\textnormal {mult} (\theta, S_{\theta}(G))=1$. Now $\textnormal {mult} (\theta, S_{\theta}(G)\setminus v)=0$. By Lemma \ref {interlacing}, $\textnormal {mult} (\theta, S_{\theta}(G)\setminus vu)=0$ or 1. Suppose $\textnormal {mult} (\theta, S_{\theta}(G)\setminus vu)=0$. By Lemma \ref {interlacing} again, $\textnormal {mult} (\theta, (S_{\theta}(G)\setminus vu)\setminus A_{\theta}(S_{\theta}(G)))\leq \vert A_{\theta}(S_{\theta}(G))\vert$. On the other hand, by part (a) of Theorem \ref {basic_property} and Corollary \ref {P:C7}, \begin {equation} \textnormal {mult} (\theta, (S_{\theta}(G)\setminus A_{\theta}(S_{\theta}(G)))\setminus uv)=\textnormal {mult} (\theta, H_{i_0}\setminus uv)+q-1=\textnormal {mult} (\theta, H_{i_0}\setminus uv)+\vert A_{\theta}(S_{\theta}(G))\vert.\tag {2} \end {equation} Therefore $\textnormal {mult} (\theta, H_{i_0}\setminus uv)+\vert A_{\theta}(S_{\theta}(G))\vert\leq \vert A_{\theta}(S_{\theta}(G))\vert$ and $\textnormal {mult} (\theta, H_{i_0}\setminus uv)=0$. Hence $A_{\theta}(G)$ is a $\theta$-extreme set in $S_{\theta}(G)\setminus uv$. Suppose $\textnormal {mult} (\theta, S_{\theta}(G)\setminus vu)=1$. Let $w\in A_{\theta}(S_{\theta}(G))$. By Lemma \ref {interlacing}, \begin {equation} \textnormal {mult} (\theta, (S_{\theta}(G)\setminus vuw)\setminus (A_{\theta}(S_{\theta}(G))\setminus w))\leq \textnormal {mult} (\theta, S_{\theta}(G)\setminus vuw)+\vert A_{\theta}(S_{\theta}(G))\vert -1.\notag \end {equation} On the other hand, (2) holds.
Therefore $\textnormal {mult} (\theta, H_{i_0}\setminus uv)+\vert A_{\theta}(S_{\theta}(G))\vert\leq \textnormal {mult} (\theta, S_{\theta}(G)\setminus vuw)+\vert A_{\theta}(S_{\theta}(G))\vert -1$ and $\textnormal {mult} (\theta, S_{\theta}(G)\setminus vuw)\geq 1=\textnormal {mult} (\theta, S_{\theta}(G)\setminus vu)$. So $w\notin B_{\theta}(S_{\theta}(G)\setminus uv)$. Since $w$ is adjacent to every other vertex in $S_{\theta}(G)$, $w\in A_{\theta}(S_{\theta}(G)\setminus uv)$. Hence $A_{\theta}(S_{\theta}(G))\subseteq A_{\theta}(S_{\theta}(G)\setminus uv)$ and $A_{\theta}(G)$ is a $\theta$-extreme set in $S_{\theta}(G)\setminus uv$. \end {proof} \begin {cor}\label {S:C6} Let $G$ be a graph. Let $H_1,\dots, H_q,Q_1,\dots, Q_m$ be all the components in $G\setminus A_{\theta}(G)$, where $H_i$ is $\theta$-critical for all $i$ and $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. Then the following hold: \begin {itemize} \item [(a)] If $u\in V(H_{i_0})$ and $v\in V(Q_{j_0})$ for some $i_0,j_0$, then \begin {equation} \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1+\textnormal {mult} (\theta, Q_{j_0}\setminus v).\notag \end {equation} \item [(b)] If $u,v\in V(Q_{j_0})$ for some $j_0$, then \begin {equation} \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_0}\setminus uv).\notag \end {equation} \item [(c)] If $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1,j_2$, $j_1\neq j_2$, then \begin {equation} \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v).\notag \end {equation} \item [(d)] If $u,v\in V(H_{i_0})$ for some $i_0$, then \begin {equation} \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1+\textnormal {mult} (\theta, H_{i_0}\setminus
uv).\notag \end {equation} \item [(e)] If $\textnormal {mult} (\theta, G)\geq 2$, $u\in V(H_{i_1})$ and $v\in V(H_{i_2})$ for some $i_1,i_2$, $i_1\neq i_2$, then \begin {equation} \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-2.\notag \end {equation} \end {itemize} \end {cor} \begin {proof} (a) Suppose $u\in V(H_{i_0})$ and $v\in V(Q_{j_0})$ for some $i_0,j_0$. By Theorem \ref {extreme_set_in_S}, $A_{\theta}(G)$ is a $\theta$-extreme set in $S_{\theta}(G)\setminus uv$. Therefore \begin {equation} \textnormal {mult} (\theta, S_{\theta}(G)\setminus (A_{\theta}(S_{\theta}(G))\cup \{u,v\}))=\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)+\vert A_{\theta}(G)\vert.\notag \end {equation} Recall that $S_{\theta}(G)\setminus A_{\theta}(S_{\theta}(G))=G\setminus A_{\theta}(G)$. By Corollary \ref {P:C7} and part (a) of Theorem \ref {basic_property}, we have \begin {align} \textnormal {mult} (\theta, G\setminus (A_{\theta}(G)\cup \{u,v\})) &=\textnormal {mult} (\theta, Q_{j_0}\setminus v)+\textnormal {mult} (\theta, H_{i_0}\setminus u)+\sum_{1\leq i\leq q, i\neq i_0} \textnormal {mult} (\theta, H_i)\notag\\ &=\textnormal {mult} (\theta, Q_{j_0}\setminus v)+\textnormal {mult} (\theta, G)+\vert A_{\theta}(G)\vert-1.\notag \end {align} This implies that $\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, Q_{j_0}\setminus v)+\textnormal {mult} (\theta, G)-1=\textnormal {mult} (\theta, Q_{j_0}\setminus v)+\textnormal {mult} (\theta, S_{\theta}(G))-1$, where the last equality follows from Corollary \ref{similar_gallai_edmond}. (b), (c), (d) and (e) are proved similarly. \end {proof} \section{The graphs $D_{\theta}(G)$ and $D_{\theta}(S_{\theta}(G))$} In this section, we shall determine the edge-set of $D_{\theta}(G)$ in terms of its Gallai-Edmonds decomposition (Theorem \ref{d_graph_for_G}).
Finally, we shall prove that $D_{\theta}(G) = D_{\theta}(S_{\theta}(G))$ (Corollary \ref{gallai_Edmond_decomposition_G_S}). First, we list all possibilities for $\textnormal{mult}(\theta, G \setminus uv)$ with respect to its Gallai-Edmonds decomposition: \begin {lm}\label {BP:L4a} Let $G$ be a graph. Then the following hold. \begin {itemize} \item [(a)] If $u\in B_{\theta}(G)$ then $\textnormal {mult} (\theta, G)-2\leq \textnormal {mult} (\theta, G\setminus uv)\leq \textnormal {mult} (\theta, G)$ for all $v\in V(G)\setminus \{u\}$. \item [(b)] If $u\in P_{\theta}(G)$ then $\textnormal {mult} (\theta, G)\leq \textnormal {mult} (\theta, G\setminus uv)\leq \textnormal {mult} (\theta, G)+2$ for all $v\in V(G)\setminus \{u\}$. \item [(c)] If $u\in N_{\theta}(G)$ then $\textnormal {mult} (\theta, G)-1\leq \textnormal {mult} (\theta, G\setminus uv)\leq \textnormal {mult} (\theta, G)+1$ for all $v\in V(G)\setminus \{u\}$. \item [(d)] If $u\in A_{\theta}(G)$ then \begin {itemize} \item [(i)] $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+1$ whenever $v\in N_{\theta}(G)$, \item [(ii)] $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+2$ whenever $v\in P_{\theta}(G)\cup (A_{\theta}(G)\setminus \{u\})$, \item [(iii)] $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)$ whenever $v\in B_{\theta}(G)$. \end {itemize} \end {itemize} \end {lm} \begin {proof} Clearly, if $u \in B_{\theta}(G)$, then $\textnormal {mult} (\theta, G\setminus u)=\textnormal {mult} (\theta, G)-1$. So part (a) follows from Lemma \ref {interlacing}. Parts (b) and (c) are proved similarly. Part (d) follows from Theorem \ref {P:T5}. \end {proof} Recall that $D_{\theta}(G) = D_{-2, \theta}(G) \cup D_{-1,\theta}(G) \cup D_{0,\theta}(G)$. Therefore, in order to determine the edges in $D_{\theta}(G)$, we can first determine the edges in $D_{r, \theta}(G)$ for $r=-2,-1,0$. However the graphs $D_{r,\theta}(G)$ do not behave `nicely'.
Therefore we shall study $D_{r,\theta}(S_{\theta}(G))$ instead. In fact, we shall do this for all $r=-2,-1,0,1,2$. \subsection{$D_{-2,\theta}(G)$} \begin {lm}\label {BP:L4} Let $G$ be a graph with $\textnormal {mult} (\theta, G)=0$ or 1. Then $D_{-2,\theta}(G)$ is an empty graph with $\vert V(G)\vert$ vertices. \end {lm} \begin {proof} Since $\textnormal {mult} (\theta, G\setminus uv)\geq 0$ for all $u,v\in V(G)$, we can never have $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)-2$. Hence the lemma holds. \end {proof} \begin {lm}\label {BP:L6a} Let $G$ be a graph with $\textnormal {mult} (\theta, G)\geq 2$. Let $H_1,\dots, H_q$ be all the $\theta$-critical components in $G\setminus A_{\theta}(G)$. If $(u,v)\in E(D_{-2,\theta}(G))$, then $u\in V(H_i)$ and $v\in V(H_j)$ for some $i\neq j$. \end {lm} \begin {proof} Suppose $(u,v)\in E(D_{-2,\theta}(G))$. By Lemma \ref {BP:L4a}, we must have $u,v\in B_{\theta}(G)$. By part (iv) of Corollary \ref {P:C7}, $u,v\in V(H_1)\cup\cdots\cup V(H_q)$. Suppose $u,v\in V(H_{j_0})$ for some $j_0$. By Corollary \ref {P:C7} and part (a) of Theorem \ref {basic_property}, we have $\textnormal {mult} (\theta, G\setminus A_{\theta}(G))=\sum_{i=1}^q \textnormal {mult} (\theta, H_i)=q=\textnormal {mult} (\theta, G)+\vert A_{\theta}(G)\vert$. Note that $\textnormal {mult} (\theta, H_{j_0}\setminus u)=0$. Therefore $\textnormal {mult} (\theta, H_{j_0}\setminus uv)\geq 0$ and that $\textnormal {mult} (\theta, G\setminus (A_{\theta}(G)\cup \{u,v\}))=\textnormal {mult} (\theta, H_{j_0}\setminus uv)+\sum_{1\leq i\leq q, i\neq j_0} \textnormal {mult} (\theta, H_i)\geq q-1=\textnormal {mult} (\theta, G)+\vert A_{\theta}(G)\vert-1$. On the other hand, $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)-2$. By Lemma \ref {interlacing}, $\textnormal {mult} (\theta, G\setminus (A_{\theta}(G)\cup \{u,v\}))\leq \textnormal {mult} (\theta, G)-2+\vert A_{\theta}(G)\vert$, a contradiction. 
Hence $u\in V(H_i)$ and $v\in V(H_j)$ for some $i\neq j$. \end {proof} Note that in general the converse of Lemma \ref {BP:L6a} is not true. In the following graph $G$ (see Figure 1), we have $A_{1}(G)=\{u,v\}$ and $H_1,H_2,H_3,H_4$ are all the $1$-critical components in $G\setminus A_{1}(G)$. Now $\textnormal {mult} (1,G)=2$ and $w\in V(H_1)$, $z\in V(H_2)$. But $\textnormal {mult} (1,G\setminus wz)=1\neq 0=\textnormal {mult} (1,G)-2$. \begin{center} \begin{pspicture}(0,0)(6,6) \cnodeput(1.5, 5){1}{} \cnodeput(5, 5){2}{} \cnodeput(0.5, 4){3}{} \cnodeput(0.5, 3){4}{} \cnodeput(2.5, 4){5}{} \cnodeput(2.5, 3){6}{} \cnodeput(4, 4){7}{} \cnodeput(4, 3){8}{} \cnodeput(6, 4){9}{} \cnodeput(6, 3){10}{} \ncline{1}{3} \ncline{1}{5} \ncline{2}{5} \ncline{2}{7} \ncline{2}{9} \ncline{3}{4} \ncline{5}{6} \ncline{7}{8} \ncline{9}{10} \psellipse[linestyle=dotted](0.5,3)(0.5,1.5) \psellipse[linestyle=dotted](2.5,3)(0.5,1.5) \psellipse[linestyle=dotted](4,3)(0.5,1.5) \psellipse[linestyle=dotted](6,3)(0.5,1.5) \rput(3,1){Figure 1.} \rput(-1,4){$G=$} \rput(0.5,2){$H_1$} \rput(2.5,2){$H_2$} \rput(4,2){$H_3$} \rput(6,2){$H_4$} \rput(1.8,5){$u$} \rput(5.3,5){$v$} \rput(0.8,3){$w$} \rput(2.8,3){$z$} \end{pspicture} \end{center} However it is true for the graph $S_{\theta}(G)$ (see Theorem \ref {d_graph_2_negative}). \begin {thm}\label{d_graph_2_negative} Let $G$ be a graph with $\textnormal {mult} (\theta, G)\geq 2$. Let $H_1,\dots, H_q$ be all the $\theta$-critical components in $G\setminus A_{\theta}(G)$. Then $(u,v)\in E(D_{-2,\theta}(S_{\theta}(G)))$ if and only if $u\in V(H_i)$ and $v\in V(H_j)$ for some $i\neq j$. \end {thm} \begin {proof} Suppose $u\in V(H_i)$ and $v\in V(H_j)$ for some $i\neq j$. By part (e) of Corollary \ref {S:C6}, we have $\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-2$. So $(u,v)\in E(D_{-2,\theta}(S_{\theta}(G)))$. 
The converse follows from Lemma \ref {BP:L6a} (Recall that $S_{\theta}(G)\setminus A_{\theta}(S_{\theta}(G))=G\setminus A_{\theta}(G)$). \end {proof} \subsection{$D_{-1,\theta}(G)$} The proof of the following lemma is similar to Lemma \ref {BP:L4} and therefore is omitted. \begin {lm}\label {BP:L9} Let $G$ be a graph with $\textnormal {mult} (\theta, G)=0$. Then $D_{-1,\theta}(G)$ is an empty graph with $\vert V(G)\vert$ vertices. \end {lm} Using Lemma \ref{Ku-Chen-neutral} and Lemma \ref {BP:L4a}, one can easily deduce Lemma \ref {BP:L10}. \begin {lm}\label {BP:L10} Let $G$ be a graph with $\textnormal {mult} (\theta, G)\geq 1$. If $(u,v)\in E(D_{-1,\theta}(G))$ then either $u\in N_{\theta}(G)$ and $v\in B_{\theta}(G)$ or $u,v\in B_{\theta}(G)$. \end {lm} \begin {thm}\label{d_graph_1_negative} Let $G$ be a graph with $\textnormal {mult} (\theta, G)\geq 2$. Let $H_1,\dots, H_q$ be all the $\theta$-critical components in $G\setminus A_{\theta}(G)$. Then $(u,v)\in E(D_{-1,\theta}(S_{\theta}(G)))$ if and only if \begin {itemize} \item [(a)] $u\in N_{\theta}(G)$ and $v\in B_{\theta}(G)$, or \item [(b)] $(u,v)\in E(D_{-1,\theta}(H_{i_0}))$ for some $i_0$. \end {itemize} \end {thm} \begin {proof} Suppose (a) holds. By Corollary \ref {similar_gallai_edmond}, $u\in N_{\theta}(S_{\theta}(G))$ and $v\in B_{\theta}(S_{\theta}(G))$. By Lemma \ref{Ku-Chen-neutral}, $\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1$. Thus $(u,v)\in E(D_{-1,\theta}(S_{\theta}(G)))$. Suppose (b) holds. Then $\textnormal {mult} (\theta, H_{i_0}\setminus uv)=0$. By part (d) of Corollary \ref {S:C6}, $ \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1+\textnormal {mult} (\theta, H_{i_0}\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1$. Hence $(u,v)\in E(D_{-1,\theta}(S_{\theta}(G)))$. Suppose $(u,v)\in E(D_{-1,\theta}(S_{\theta}(G)))$. 
By Lemma \ref {BP:L10}, we may assume that $u,v\in B_{\theta}(G)$. Note that $H_1,\dots, H_q$ are all the $\theta$-critical components in $S_{\theta}(G)\setminus A_{\theta}(S_{\theta}(G))$. By part (d) and (e) of Corollary \ref {S:C6}, we must have $u,v\in V(H_{i_0})$ for some $i_0$. Therefore $\textnormal {mult} (\theta, S_{\theta}(G))-1= \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1+\textnormal {mult} (\theta, H_{i_0}\setminus uv)$, which implies that $\textnormal {mult} (\theta, H_{i_0}\setminus uv)=0$. Hence $(u,v)\in E(D_{-1,\theta}(H_{i_0}))$. \end {proof} \subsection{$D_{0,\theta}(G)$} Using Lemma \ref{godsil_positive}, and Lemma \ref {BP:L4a}, one can easily deduce Lemma \ref {BP:L13}. \begin {lm}\label {BP:L13} Let $G$ be a graph. If $(u,v)\in E(D_{0,\theta}(G))$ then either $u\in P_{\theta}(G)\cup A_{\theta}(G)$ and $v\in B_{\theta}(G)$ or $u,v\in B_{\theta}(G)$ or $u,v\in P_{\theta}(G)\cup N_{\theta}(G)$. \end {lm} \begin {thm}\label{d_graph_0_neutral} Let $G$ be a graph with $\textnormal{mult}(\theta, G)\ge 2$ and $H_1,\dots, H_q,Q_1,\dots, Q_m$ be all the components in $G\setminus A_{\theta}(G)$ with $H_i$ is $\theta$-critical for all $i$ and $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. Then $(u,v)\in E(D_{0,\theta}(S_{\theta}(G)))$ if and only if \begin {itemize} \item [(a)] $u\in P_{\theta}(G)\cup A_{\theta}(G)$ and $v\in B_{\theta}(G)$, or \item [(b)] $u,v\in N_{\theta}(G)$ with $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1$ and $j_2$, $j_1\neq j_2$, or \item [(c)] $(u,v)\in E(D_{0,\theta}(H_{i_0}))$ for some $i_0$, or \item [(d)] $(u,v)\in E(D_{0,\theta}(Q_{j_0}))$ for some $j_0$. \end {itemize} \end {thm} \begin {proof} Suppose (a) holds. Then it follows from Lemma \ref{godsil_positive} that $(u,v)\in E(D_{0,\theta}(S_{\theta}(G)))$. Suppose (b) holds. By Theorem \ref {P:T5}, $u,v\in N_{\theta}(G\setminus A_{\theta}(G))$. 
By using part (a) of Theorem \ref {basic_property}, it is not hard to deduce that $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=0=\textnormal {mult} (\theta, Q_{j_2}\setminus v)$. Then by part (c) of Corollary \ref {S:C6}, $\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)=\textnormal {mult} (\theta, S_{\theta}(G))$. Hence $(u,v)\in E(D_{0,\theta}(S_{\theta}(G)))$. Suppose (c) holds. Then $\textnormal {mult} (\theta, H_{i_0}\setminus uv)=1$. By part (d) of Corollary \ref {S:C6}, $ \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1+\textnormal {mult} (\theta, H_{i_0}\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))$. Hence $(u,v)\in E(D_{0,\theta}(S_{\theta}(G)))$. Suppose (d) holds. Then $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=0$. By part (b) of Corollary \ref {S:C6}, $ \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))$. Hence $(u,v)\in E(D_{0,\theta}(S_{\theta}(G)))$. Suppose $(u,v)\in E(D_{0,\theta}(S_{\theta}(G)))$. By Lemma \ref {BP:L13}, we may assume that $u,v\in B_{\theta}(G)$ or $u,v\in P_{\theta}(G)\cup N_{\theta}(G)$. Suppose $u,v\in B_{\theta}(G)$. By part (d) and (e) of Corollary \ref {S:C6}, we must have $u,v\in V(H_{i_0})$ for some $i_0$. So $\textnormal {mult} (\theta, S_{\theta}(G))= \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))-1+\textnormal {mult} (\theta, H_{i_0}\setminus uv)$, which implies that $\textnormal {mult} (\theta, H_{i_0}\setminus uv)=1$. Hence $(u,v)\in E(D_{0,\theta}(H_{i_0}))$. Suppose $u,v\in P_{\theta}(G)\cup N_{\theta}(G)$. 
If $u,v\in V(Q_{j_0})$ for some $j_0$, then by part (b) of Corollary \ref {S:C6}, we have $\textnormal {mult} (\theta, S_{\theta}(G))=\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)$, which implies that $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=0$, i.e., $(u,v)\in E(D_{0,\theta}(Q_{j_0}))$. If $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1,j_2$, $j_1\neq j_2$, then by part (c) of Corollary \ref {S:C6}, $\textnormal {mult} (\theta, S_{\theta}(G))=\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)$, which implies $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=0=\textnormal {mult} (\theta, Q_{j_2}\setminus v)$. By using part (a) of Theorem \ref {basic_property}, we can deduce that $u,v\in N_{\theta}(G\setminus A_{\theta}(G))$. It then follows from Theorem \ref {P:T5}, that $u, v\in N_{\theta}(G)$. \end {proof} \subsection{$D_{1,\theta}(G)$} Using Lemma \ref {BP:L4a}, one can easily deduce Lemma \ref {BP:L16}. \begin {lm}\label {BP:L16} Let $G$ be a graph. If $(u,v)\in E(D_{1,\theta}(G))$ then either $u\in A_{\theta}(G)$ and $v\in N_{\theta}(G)$ or $u,v\in P_{\theta}(G)\cup N_{\theta}(G)$. \end {lm} \begin {thm}\label{d_graph_1_positive} Let $G$ be a graph and $Q_1,\dots, Q_m$ be all the components in $G\setminus A_{\theta}(G)$ with \linebreak $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. Then $(u,v)\in E(D_{1,\theta}(S_{\theta}(G)))$ if and only if \begin {itemize} \item [(a)] $u\in A_{\theta}(G)$ and $v\in N_{\theta}(G)$, or \item [(b)] $u\in P_{\theta}(G)$ and $v\in N_{\theta}(G)$ with $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1$ and $j_2$, $j_1\neq j_2$, or \item [(c)] $(u,v)\in E(D_{1,\theta}(Q_{j_0}))$ for some $j_0$. \end {itemize} \end {thm} \begin {proof} Suppose (a) holds. 
Then it follows from Lemma \ref {BP:L4a} that $(u,v)\in E(D_{1,\theta}(S_{\theta}(G)))$. Suppose (b) holds. By Theorem \ref {P:T5}, $u\in P_{\theta}(G\setminus A_{\theta}(G))$ and $v\in N_{\theta}(G\setminus A_{\theta}(G))$. By using part (a) of Theorem \ref {basic_property}, we deduce that $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=1$ and $\textnormal {mult} (\theta, Q_{j_2}\setminus v)=0$. Then by part (c) of Corollary \ref {S:C6}, $\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)=\textnormal {mult} (\theta, S_{\theta}(G))+1$. Hence $(u,v)\in E(D_{1,\theta}(S_{\theta}(G)))$. Suppose (c) holds. Then $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=1$. By part (b) of Corollary \ref {S:C6}, $ \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+1$. Hence $(u,v)\in E(D_{1,\theta}(S_{\theta}(G)))$. Suppose $(u,v)\in E(D_{1,\theta}(S_{\theta}(G)))$. By Lemma \ref {BP:L16}, we may assume that $u,v\in P_{\theta}(G)\cup N_{\theta}(G)$. If $u,v\in V(Q_{j_0})$ for some $j_0$, then by part (b) of Corollary \ref {S:C6}, $\textnormal {mult} (\theta, S_{\theta}(G))+1=\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)$, which implies that $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=1$, i.e., $(u,v)\in E(D_{1,\theta}(Q_{j_0}))$. 
If $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1,j_2$, $j_1\neq j_2$, then by part (c) of Corollary \ref {S:C6}, $\textnormal {mult} (\theta, S_{\theta}(G))+1=\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)$, which implies (without loss of generality) $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=1$ and $\textnormal {mult} (\theta, Q_{j_2}\setminus v)=0$. By using part (a) of Theorem \ref {basic_property} again, we can deduce that $u\in P_{\theta}(G\setminus A_{\theta}(G))$ and $v\in N_{\theta}(G\setminus A_{\theta}(G))$. It then follows from Theorem \ref {P:T5} that $u\in P_{\theta}(G)$ and $v\in N_{\theta}(G)$. \end {proof} \subsection{$D_{2,\theta}(G)$} Using Lemma \ref {BP:L4a}, one can easily deduce Lemma \ref {BP:L19}. \begin {lm}\label {BP:L19} Let $G$ be a graph. If $(u,v)\in E(D_{2,\theta}(G))$ then either $u,v\in A_{\theta}(G)$ or $u\in A_{\theta}(G)$ and $v\in P_{\theta}(G)$ or $u,v\in P_{\theta}(G)$. \end {lm} \begin {thm}\label{d_graph_2_positive} Let $G$ be a graph and $Q_1,\dots, Q_m$ be all the components in $G\setminus A_{\theta}(G)$ with \linebreak $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. Then $(u,v)\in E(D_{2,\theta}(S_{\theta}(G)))$ if and only if \begin {itemize} \item [(a)] $u,v\in A_{\theta}(G)$, or \item [(b)] $u\in A_{\theta}(G)$ and $v\in P_{\theta}(G)$, or \item [(c)] $u,v\in P_{\theta}(G)$ with $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1$ and $j_2$, $j_1\neq j_2$, or \item [(d)] $(u,v)\in E(D_{2,\theta}(Q_{j_0}))$ for some $j_0$. \end {itemize} \end {thm} \begin {proof} Suppose (a) or (b) holds. Then it follows from Lemma \ref {BP:L4a} that $(u,v)\in E(D_{2,\theta}(S_{\theta}(G)))$. Suppose (c) holds. By Theorem \ref {P:T5}, $u,v\in P_{\theta}(G\setminus A_{\theta}(G))$.
By using part (a) of Theorem \ref {basic_property}, we deduce that $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=1=\textnormal {mult} (\theta, Q_{j_2}\setminus v)$. Then by part (c) of Corollary \ref {S:C6}, we have $\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)=\textnormal {mult} (\theta, S_{\theta}(G))+2$. Hence $(u,v)\in E(D_{2,\theta}(S_{\theta}(G)))$. Suppose (d) holds. Then $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=2$. By part (b) of Corollary \ref {S:C6}, $ \textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+2$. Hence $(u,v)\in E(D_{2,\theta}(S_{\theta}(G)))$. Suppose $(u,v)\in E(D_{2,\theta}(S_{\theta}(G)))$. By Lemma \ref {BP:L19}, we may assume that $u,v\in P_{\theta}(G)$. If $u,v\in V(Q_{j_0})$ for some $j_0$, then by part (b) of Corollary \ref {S:C6}, $\textnormal {mult} (\theta, S_{\theta}(G))+2=\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)$, which implies that $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=2$, i.e., $(u,v)\in E(D_{2,\theta}(Q_{j_0}))$. If $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1,j_2$, $j_1\neq j_2$, then by part (c) of Corollary \ref {S:C6}, $\textnormal {mult} (\theta, S_{\theta}(G))+2=\textnormal {mult} (\theta, S_{\theta}(G)\setminus uv)=\textnormal {mult} (\theta, S_{\theta}(G))+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)$, which implies that $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=1=\textnormal {mult} (\theta, Q_{j_2}\setminus v)$. As before using part (a) of Theorem \ref {basic_property}, we deduce that $u,v\in P_{\theta}(G\setminus A_{\theta}(G))$. 
It then follows from Theorem \ref {P:T5} that $u,v\in P_{\theta}(G)$. \end {proof} Now let us look at the case $\theta=0$. Note that we have $N_{0}(G)=\varnothing$ for any graph $G$. By Lemma \ref {interlacing}, we have $\textnormal {mult} (0,G\setminus uv)-\textnormal {mult} (0,G)\in \{-2,0,2\}$ for all $u,v\in V(G)$. Hence, \begin {thm}\label {BP:T22} Let $G$ be a graph. Then $D_{-1,0}(G)$ and $D_{1,0}(G)$ are empty graphs. \end {thm} Now let us determine the edges of $D_{\theta}(G)$. We shall begin with the following lemma. \begin {lm}\label {R:L3} Let $G$ be a graph and $u\in P_{\theta}(G)\cup N_{\theta}(G)$. Then $A_{\theta}(G)\subseteq A_{\theta}(G\setminus u)$. \end {lm} \begin {proof} Let $w\in A_{\theta}(G)$. Then by Theorem \ref {P:T5}, $\textnormal {mult} (\theta, G\setminus wu)=\textnormal {mult} (\theta,G)+2$ or $\textnormal {mult} (\theta,G)+1$, depending on whether $u\in P_{\theta}(G)$ or $u\in N_{\theta}(G)$. In either case, $w\notin B_{\theta}(G\setminus u)$. Let $z\in B_{\theta}(G)$ be adjacent to $w$. By Lemma \ref{godsil_positive} and Lemma \ref{Ku-Chen-neutral}, we have $z\in B_{\theta}(G\setminus u)$. This implies that $w\in A_{\theta}(G\setminus u)$ and $A_{\theta}(G)\subseteq A_{\theta}(G\setminus u)$. \end {proof} \begin {thm}\label{extreme_in_G} Let $G$ be a graph and $u,v\in P_{\theta}(G)\cup N_{\theta}(G)$. Then $A_{\theta}(G)$ is a $\theta$-extreme set in $G\setminus uv$. \end {thm} \begin {proof} By Lemma \ref {R:L3}, $A_{\theta}(G)\subseteq A_{\theta}(G\setminus u)$. If $v\in P_{\theta}(G\setminus u)\cup N_{\theta}(G\setminus u)$ then by Lemma \ref {R:L3}, $A_{\theta}(G\setminus u)\subseteq A_{\theta}(G\setminus uv)$. If $v\in A_{\theta}(G\setminus u)$, by Theorem \ref {P:T5}, $A_{\theta}(G)\subseteq A_{\theta}(G\setminus uv)$. In either case, $A_{\theta}(G)$ is a $\theta$-extreme set in $G\setminus uv$. So we may assume $v\in B_{\theta}(G\setminus u)$. Using Lemma \ref{Ku-Chen-neutral}, we deduce that $u\in P_{\theta}(G)$.
So $\textnormal {mult} (\theta, G\setminus u)=\textnormal {mult} (\theta,G)+1$ and by Theorem \ref {P:T5}, $\textnormal {mult} (\theta, (G\setminus u)\setminus A_{\theta}(G))=\textnormal {mult} (\theta,G)+1+\vert A_{\theta}(G)\vert$. Again by Theorem \ref {P:T5}, we see that $v\in B_{\theta}((G\setminus u)\setminus A_{\theta}(G))$. Therefore \begin {equation} \textnormal {mult} (\theta, (G\setminus uv)\setminus A_{\theta}(G))=\textnormal {mult} (\theta,G)+\vert A_{\theta}(G)\vert.\notag \end {equation} Since $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta,G)$, $A_{\theta}(G)$ is a $\theta$-extreme set in $G\setminus uv$. \end {proof} \begin {cor}\label {S:C3b} Let $G$ be a graph. Let $Q_1,\dots, Q_m$ be all the components in $G\setminus A_{\theta}(G)$ with \linebreak $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. Then the following hold: \begin {itemize} \item [(a)] If $u,v\in V(Q_{j_0})$ for some $j_0$, then \begin {equation} \textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+\textnormal {mult} (\theta, Q_{j_0}\setminus uv).\notag \end {equation} \item [(b)] If $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1,j_2$, $j_1\neq j_2$, then \begin {equation} \textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v).\notag \end {equation} \end {itemize} \end {cor} \begin {proof} (a) Suppose $u,v\in V(Q_{j_0})$ for some $j_0$. By Theorem \ref {extreme_in_G}, $A_{\theta}(G)$ is a $\theta$-extreme set in $G\setminus uv$. Therefore \begin {equation} \textnormal {mult} (\theta, G\setminus (A_{\theta}(G)\cup \{u,v\}))=\textnormal {mult} (\theta, G\setminus uv)+\vert A_{\theta}(G)\vert.\notag \end {equation} Let $H_1,\dots, H_q$ be all the $\theta$-critical components in $G\setminus A_{\theta}(G)$.
By Corollary \ref {P:C7}, and part (a) of Theorem \ref {basic_property}, we have \begin {align} \textnormal {mult} (\theta, G\setminus (A_{\theta}(G)\cup \{u,v\})) &=\textnormal {mult} (\theta, Q_{j_0}\setminus uv)+\sum_{1\leq i\leq q} \textnormal {mult} (\theta, H_i)\notag\\ &=\textnormal {mult} (\theta, Q_{j_0}\setminus uv)+\textnormal {mult} (\theta, G)+\vert A_{\theta}(G)\vert.\notag \end {align} This implies that $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)$. (b) is proved similarly. \end {proof} \begin {thm}\label{d_graph_for_G} Let $G$ be a graph and $Q_1,\dots, Q_m$ be all the components in $G\setminus A_{\theta}(G)$ with \linebreak $\textnormal {mult} (\theta, Q_j)=0$ for all $j$. Then $(u,v)\in E(D_{\theta}(G))$ if and only if \begin {itemize} \item [(a)] $u\in B_{\theta}(G)$ and $v\in V(G)$, or \item [(b)] $u,v\in N_{\theta}(G)$ with $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1$ and $j_2$, $j_1\neq j_2$, or \item [(c)] $(u,v)\in E(D_{0,\theta}(Q_{j_0}))$ for some $j_0$. \end {itemize} \end {thm} \begin {proof} Suppose (a) holds. Since $u\in B_{\theta}(G)$, $\textnormal {mult} (\theta, G\setminus u)=\textnormal {mult} (\theta, G)-1$. By Lemma \ref {interlacing}, we have $\textnormal {mult} (\theta, G\setminus uv)\leq \textnormal {mult} (\theta, G)$ for all $v\in V(G)$. Hence $(u,v)\in E(D_{\theta}(G))$ for all $v\in V(G)$. Suppose (b) holds. By Theorem \ref {P:T5}, $u,v\in N_{\theta}(G\setminus A_{\theta}(G))$. By using part (a) of Theorem \ref {basic_property}, we can deduce that $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=0=\textnormal {mult} (\theta, Q_{j_2}\setminus v)$. By part (b) of Corollary \ref {S:C3b}, $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)=\textnormal {mult} (\theta, G)$. Hence $(u,v)\in E(D_{\theta}(G))$. 
Suppose (c) holds. Then $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=0$. By part (a) of Corollary \ref {S:C3b}, $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=\textnormal {mult} (\theta, G)$. Hence $(u,v)\in E(D_{\theta}(G))$. Suppose $(u,v)\in E(D_{\theta}(G))$. By Lemma \ref {BP:L6a}, Lemma \ref {BP:L10} and Lemma \ref {BP:L13}, we may assume that $u,v\in P_{\theta}(G)\cup N_{\theta}(G)$. Suppose $u,v\in V(Q_{j_0})$ for some $j_0$. By part (a) of Corollary \ref {S:C3b}, $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+\textnormal {mult} (\theta, Q_{j_0}\setminus uv)$. Since $\textnormal {mult} (\theta, G\setminus uv)\leq \textnormal {mult} (\theta, G)$, we must have $\textnormal {mult} (\theta, Q_{j_0}\setminus uv)=0$ and $(u,v)\in E(D_{0,\theta}(Q_{j_0}))$. Suppose $u\in V(Q_{j_1})$ and $v\in V(Q_{j_2})$ for some $j_1$ and $j_2$, $j_1\neq j_2$. By part (b) of Corollary \ref {S:C3b}, $\textnormal {mult} (\theta, G\setminus uv)=\textnormal {mult} (\theta, G)+\textnormal {mult} (\theta, Q_{j_1}\setminus u)+\textnormal {mult} (\theta, Q_{j_2}\setminus v)$. Since $\textnormal {mult} (\theta, G\setminus uv)\leq \textnormal {mult} (\theta, G)$, we must have $\textnormal {mult} (\theta, Q_{j_1}\setminus u)=0=\textnormal {mult} (\theta, Q_{j_2}\setminus v)$. This implies that $u,v\in N_{\theta}(G\setminus A_{\theta}(G))$. Hence by Theorem \ref {P:T5}, $u,v\in N_{\theta}(G)$. \end {proof} Note that in Theorem \ref{d_graph_for_G}, the edge-set in $D_{\theta}(G)$ depends only on the Gallai-Edmonds decomposition of $G$. Therefore if $G$ and $G'$ have the same Gallai-Edmonds decomposition with respect to $\theta$ via $\psi$, then $D_{\theta}(G)\overset{\psi}{\cong} D_{\theta}(G')$. Since $G$ and $S_{\theta}(G)$ have the same Gallai-Edmonds decomposition via the identity map, we have $D_{\theta}(G)=D_{\theta}(S_{\theta}(G))$. This proves the following corollary. 
\begin {cor}\label{gallai_Edmond_decomposition_G_S} If $G$ and $G'$ have the same Gallai-Edmonds decomposition with respect to $\theta$, then $D_{\theta}(G)\cong D_{\theta}(G')$. In particular, $D_{\theta}(G)=D_{\theta}(S_{\theta}(G))$. \end {cor} Note that if $G$ is a graph with $n$ vertices then $E(K_n)=E(D_{-2,\theta}(G))\cup E(D_{-1,\theta}(G))\cup E(D_{0,\theta}(G))\cup E(D_{1,\theta}(G))\cup E(D_{2,\theta}(G))$, where $K_n$ is the complete graph on $n$ vertices ($V(K_n)=V(G)$). If we denote the complement of a graph $G$ by $\overline G$, then by Corollary \ref {gallai_Edmond_decomposition_G_S}, we have \begin {cor}\label {R:C6} Let $G$ be a graph. Then $\overline {D_{\theta}(G)}=\overline {D_{\theta}(S_{\theta}(G))}=G_{+}$, where $G_+$ is the graph with $V(G_+)=V(G)$ and $E(G_+)=E(D_{1,\theta}(G))\cup E(D_{2,\theta}(G))$. \end {cor} \section{$\theta$-Nice Sets and Matchings} In this section, we first relate $\theta$-nice sets to matchings. Then we proceed to show that $D_{\theta}(G)$ always contains certain induced subgraphs of $G$ related to $\theta$. Recall that a path $P$ is called $\theta$-essential if $\textnormal{mult} (\theta, G\setminus P)=\textnormal{mult} (\theta, G)-1$. We shall require the following lemmas: \begin{lm}\label{essential-path}\textnormal {\cite[Lemma 3.3]{G}} If $P$ is a $\theta$-essential path in $G$, then both of its end points are $\theta$-essential in $G$. \end{lm} \begin{lm}\label{essential-neighbor}\textnormal {\cite[Lemma 3.4]{G}} Let $G$ be a graph and $u$ a vertex in $G$ which is not $\theta$-essential. Then $u$ is $\theta$-positive in $G$ if and only if some neighbor of it is $\theta$-essential in $G \setminus u$. \end{lm} \begin{lm}\label{nonpositive_path} Let $u, v$ be two distinct $\theta$-positive vertices of $G$.
Then $\textnormal{mult}(\theta, G \setminus uv) \le \textnormal{mult}(\theta, G)$ if and only if there exists a path $P$ from $u$ to $v$ such that $\textnormal{mult}(\theta, G \setminus P) \le \textnormal{mult}(\theta, G)$. \end{lm} \begin{proof} Let $k=\textnormal{mult}(\theta, G)$ where $k \ge 0$. Consider the Heilmann-Lieb Identity (see \cite[Theorem 6.3]{HL} and \cite[Lemma 2.4]{G}): \[ \mu(G \setminus u,x)\mu(G \setminus v,x) - \mu(G,x)\mu(G \setminus uv,x) = \sum_{P \in \mathbb{P}(u,v)} \mu(G \setminus P,x)^{2}\] where $\mathbb{P}(u,v)$ denotes the set of paths from $u$ to $v$ in $G$. $(\Longrightarrow)$ Suppose there is no path $P$ from $u$ to $v$ such that $\textnormal{mult}(\theta, G \setminus P) \le \textnormal{mult}(\theta, G)$. Then $\theta$ is a root of the polynomial $\mu(G \setminus u,x)\mu(G \setminus v,x) - \sum_{P \in \mathbb{P}(u,v)} \mu(G \setminus P,x)^{2}$ with multiplicity at least $2k+2$. But this contradicts the fact that the multiplicity of $\theta$ as a root of $\mu(G,x)\mu(G \setminus uv,x)$ is at most $2k$. $(\Longleftarrow)$ Suppose $\textnormal{mult}(\theta, G \setminus uv) > \textnormal{mult}(\theta, G)=k$. By Lemma \ref{godsil_positive}, $\textnormal{mult}(\theta, G \setminus uv)=k+2$. Since $\{P \in \mathbb{P}(u,v): \textnormal{mult}(\theta, G \setminus P) \le k\} \not = \emptyset$, we can write \[ \sum_{P \in \mathbb{P}(u,v) \atop \textnormal{mult}(\theta, G \setminus P) \le k} \mu(G \setminus P,x)^{2} = \sum_{i=1}^{m} (x-\theta)^{2t} (g_{i}(x))^{2}\] for some $m$ and $t \le k$ and $g_{j}(\theta) \not = 0$ for some $j \in \{1, \ldots, m\}$.
On the other hand, from the Heilmann-Lieb Identity, we see that \[ \mu(G \setminus u,x)\mu(G \setminus v,x) - \mu(G,x)\mu(G \setminus uv,x)- \sum_{P \in \mathbb{P}(u,v) \atop \textnormal{mult}(\theta, G \setminus P)> k} \mu(G \setminus P,x)^{2} = \sum_{P \in \mathbb{P}(u,v) \atop \textnormal{mult}(\theta, G \setminus P) \le k} \mu(G \setminus P,x)^{2} \] where the left-hand side has $\theta$ as a root with multiplicity at least $2k+2$. Therefore \[ \frac{1}{(x-\theta)^{2t}} \left(\mu(G \setminus u,x)\mu(G \setminus v,x) - \mu(G,x)\mu(G \setminus uv,x)- \sum_{P \in \mathbb{P}(u,v) \atop \textnormal{mult}(\theta, G \setminus P)> k} \mu(G \setminus P,x)^{2} \right) = \sum_{i=1}^{m} (g_{i}(x))^{2}\] where the left-hand side has $\theta$ as a root with nonzero multiplicity. But this contradicts the fact that $\sum_{i=1}^{m} (g_{i}(\theta))^{2}>0$. \end{proof} \begin{thm}\label{nice_matching} Suppose $X=\{x_{1}, \ldots, x_{m}\}$ is $\theta$-nice in $G$ and $\textnormal{mult}(\theta, G)=k$ (We allow $k$ to take zero value). Then there exists a set $Y=\{y_{1}, \ldots, y_{m}\}$ disjoint from $X$ such that \begin{itemize} \item[(i)] $M=\{x_{1}y_{1}, \ldots, x_{m}y_{m}\}$ is a matching of size $m$ in $G$, \item[(ii)] for any $M' \subseteq M$, we have $\textnormal{mult}(\theta, G \setminus V(M')) = k$ and if $|X \setminus V(M')| \ge 2$, then $X \setminus V(M')$ is $\theta$-nice in $G \setminus V(M')$, and \item[(iii)] $Y$ is an independent set. \end{itemize} \end{thm} \begin{proof} We shall prove it by induction on $m$. Suppose $m=2$. By Lemma \ref{essential-neighbor}, $x_1$ is adjacent to a vertex $y_1$ which is $\theta$-essential in $G \setminus x_1$. Therefore $\textnormal{mult} (\theta,G\setminus x_1y_1)=k$. Note that $\textnormal{mult} (\theta,G\setminus x_1x_2)=k+2$. So by Lemma \ref{interlacing}, $\textnormal{mult} (\theta,G\setminus x_1x_2y_1)\geq k+1$, and $x_2$ is $\theta$-positive in $G\setminus x_1y_1$. 
Again by Lemma \ref{essential-neighbor}, $x_2$ is adjacent to a vertex $y_2$ in $G\setminus x_1y_1$ and $y_2$ is $\theta$-essential in $G \setminus x_1y_1x_2$. Hence $\textnormal{mult} (\theta,G\setminus x_1y_1x_2y_2)=k$. Now part (i) has been proved. For part (ii), if $M'=M$ or $M'=\{x_1y_1\}$, we are done. Suppose $M'=\{x_2y_2\}$. Since $x_2$ is $\theta$-positive in $G$, by Lemma \ref{interlacing}, $\textnormal{mult} (\theta, G\setminus x_2y_2)\geq k$. If equality holds, we are done. Suppose $\textnormal{mult} (\theta, G\setminus x_2y_2)\geq k+1$. Then by Lemma \ref{interlacing}, we deduce that $\textnormal{mult} (\theta, G\setminus x_2y_2)$ is equal to either $k+2$ or $k+1$. If the former holds then by Lemma \ref{path_interlacing}, $\textnormal{mult} (\theta, G\setminus x_2y_2x_1y_1)\geq k+1$, contrary to the fact that $\textnormal{mult} (\theta, G\setminus x_2y_2x_1y_1)= k$. Suppose the latter holds. Note that $\textnormal{mult} (\theta, G\setminus x_2x_1)=k+2$. By Lemma \ref{interlacing}, $\textnormal{mult} (\theta, G\setminus x_2x_1y_2)\geq k+1$. So $x_1$ is either $\theta$-neutral or $\theta$-positive in $G\setminus x_2y_2$. By Lemma \ref{path_interlacing} and Lemma \ref{essential-path}, $\textnormal{mult} (\theta, G\setminus x_2y_2x_1y_1)\geq k+1$, a contradiction. Hence $\textnormal{mult} (\theta, G\setminus x_2y_2)=k$ and the proof of part (ii) for $m=2$ is complete. Let $m\geq 3$. Assume that the theorem is true for all $\theta$-nice sets $X'$ with $\vert X'\vert<\vert X\vert$. As before, $x_1$ is adjacent to a vertex $y_1$ which is $\theta$-essential in $G \setminus x_1$. Therefore $\textnormal{mult} (\theta,G\setminus x_1y_1)=k$. On the other hand, by Theorem \ref{general_nice_case}, $X$ is a $\theta$-extreme set. So $\textnormal{mult} (\theta,G\setminus X)=k+\vert X\vert$. Let $X'=\{x_2,x_3,\dots, x_m\}$.
By Lemma \ref{interlacing}, \begin{equation} k+\vert X\vert-1=k+\vert X'\vert\geq \textnormal{mult} (\theta,(G\setminus x_1y_1)\setminus X')=\textnormal{mult} (\theta,G\setminus (X\cup \{y_1\}))\geq k+\vert X\vert-1.\notag \end{equation} Thus $\textnormal{mult} (\theta,(G\setminus x_1y_1)\setminus X')=k+\vert X'\vert$ and $X'$ is a $\theta$-extreme set in $G\setminus x_1y_1$. Note that $X'$ is a $\theta$-nice set by Theorem \ref{general_nice_case}. Therefore by induction, there is a matching $M_1=\{x_2y_2,x_3y_3,\dots, x_my_m\}$ in $G\setminus x_1y_1$ for which the conclusions in part (ii) hold. Let $M=M_1\cup\{x_1y_1\}$. Then part (i) is proved. Let $M'\subseteq M$. Suppose $x_1y_1\in M'$. Let $M_1'=M'\setminus \{x_1y_1\}$. Then we have $\textnormal{mult} (\theta, G\setminus V(M'))=\textnormal{mult} (\theta, (G\setminus x_1y_1)\setminus V(M_1'))=k$, where the last equality follows from induction. Furthermore $X\setminus V(M')=X'\setminus V(M_1')$, so, if $\vert X\setminus V(M')\vert\geq 2$, $X \setminus V(M')$ is $\theta$-nice in $G \setminus V(M')$. Suppose $x_1y_1\notin M'$. Let $X_2=X\setminus V(M')$. Since $X$ is a $\theta$-extreme set, by Lemma \ref{interlacing}, it is not hard to deduce that $\textnormal{mult}(\theta, G \setminus (X\setminus X_2))=k+\vert X\setminus X_2\vert$. By Lemma \ref{interlacing} again, $\textnormal{mult}(\theta, G \setminus V(M'))=\textnormal{mult}(\theta, (G \setminus (X\setminus X_2))\setminus (V(M')\setminus (X\setminus X_2)))\geq k+\vert X\setminus X_2\vert-\vert X\setminus X_2\vert=k$. Suppose $\textnormal{mult}(\theta, G \setminus V(M'))\geq k+1$. If $\textnormal{mult}(\theta, G \setminus V(M'))\geq k+2$, then by Lemma \ref{path_interlacing}, we have $\textnormal{mult}(\theta, (G\setminus V(M'))\setminus x_1y_1)\geq k+1$, a contradiction, for by induction we have $\textnormal{mult}(\theta, (G\setminus x_1y_1) \setminus V(M'))=k$. Thus $\textnormal{mult}(\theta, G \setminus V(M'))=k+1$. Let $X_3=X\setminus X_2$.
Since $X$ is a $\theta$-extreme set, by Lemma \ref{interlacing}, $\textnormal{mult}(\theta, G \setminus (X_3\cup\{x_1\}))=k+\vert X_3\vert +1$. Again by Lemma \ref{interlacing}, $\textnormal{mult}(\theta, (G \setminus (X_3\cup\{x_1\}))\setminus V(M'))\geq k+1$. Note that $(G \setminus (X_3\cup\{x_1\}))\setminus V(M')=G\setminus (V(M')\cup \{x_1\})$. So $x_1$ is either $\theta$-neutral or $\theta$-positive in $G\setminus V(M')$. But then by Lemma \ref{path_interlacing} and Lemma \ref{essential-path}, $\textnormal{mult} (\theta, G\setminus (V(M')\cup \{x_1,y_1\}))\geq k+1$, a contradiction. Hence $\textnormal{mult} (\theta, G\setminus V(M'))=k$. Suppose $|X_2| \ge 2$. Recall that $X$ is a $\theta$-extreme set. So $\textnormal{mult} (\theta, G\setminus X)=k+\vert X\vert$ and by Lemma \ref{interlacing}, $\textnormal{mult} (\theta, (G\setminus X)\setminus V(M'))\geq k+\vert X\vert-\vert X\setminus X_2\vert=k+\vert X_2\vert$. Note that $(G\setminus X)\setminus V(M')=(G\setminus V(M'))\setminus X_2$. By Lemma \ref{interlacing} again, $\textnormal{mult} (\theta, (G\setminus V(M'))\setminus X_2)\leq k+\vert X_2\vert$. Hence $X_2$ is a $\theta$-extreme set and thus a $\theta$-nice set in $G\setminus V(M')$. This completes the proof of part (ii). Suppose $Y$ is not an independent set, i.e., $y_{i}$ is joined to $y_{j}$ for some $i, j \in \{1, \ldots, m\}$. Then the path $P:=x_{i}y_{i}y_{j}x_{j}$ satisfies $\textnormal{mult}(\theta, G \setminus P)=\textnormal{mult}(\theta, G)$ by part (ii). By Lemma \ref{nonpositive_path}, we deduce that $\textnormal{mult}(\theta, G \setminus x_{i}x_{j}) \le \textnormal{mult}(\theta, G)$, contradicting the $\theta$-niceness of $X$. \end{proof} \begin{thm} Let $X$ be a $\theta$-nice set in $G$ and $Y$ be a corresponding independent set guaranteed by Theorem \ref{nice_matching}. Then $D_{\theta}(G)$ contains an isomorphic copy of the subgraph of $G$ induced by $X \cup Y$.
\end{thm} \begin{proof} Let $X=\{x_{1}, \ldots, x_{m}\}$ and $Y=\{y_{1}, \ldots, y_{m}\}$. Consider the subgraph $H$ of $G$ induced by $X \cup Y$. By part (ii) of Theorem \ref{nice_matching}, $(x_{i},y_{i}) \in E(D_{\theta}(G))$ for all $i=1, \ldots, m$. If $(x_{i}, x_{j}) \in E(H)$, then the path $P:=y_{i}x_{i}x_{j}y_{j}$ satisfies $\textnormal{mult}(\theta, G \setminus P) \le \textnormal{mult}(\theta, G)$ by part (ii) of Theorem \ref{nice_matching}, so by Lemma \ref{nonpositive_path}, $(y_{i}, y_{j}) \in E(D_{\theta}(G))$. Similarly, if $(x_{i}, y_{j}) \in E(H)$ then the path $Q:=y_{i}x_{i}y_{j}x_{j}$ satisfies $\textnormal{mult}(\theta, G \setminus Q) \le \textnormal{mult}(\theta, G)$, whence $(y_{i}, x_{j}) \in E(D_{\theta}(G))$. Therefore, $D_{\theta}(G)$ contains an isomorphic copy of $H$. \end{proof}
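A small worked instance of the vertex-deletion bound may help the reader. This illustration is an addition to the excerpt, and it assumes that Lemma \ref{interlacing} (not reproduced here) is the standard interlacing statement that deleting one vertex changes $\textnormal{mult}(\theta,G)$ by at most one.

```latex
% Added illustration; assumes Lemma \ref{interlacing} asserts
% |mult(theta, G) - mult(theta, G - v)| <= 1 for every vertex v.
For instance, let $G=P_3$ (the path on three vertices) and $\theta=0$.
The adjacency eigenvalues of $P_3$ are $\sqrt{2},\,0,\,-\sqrt{2}$, so
$\textnormal{mult}(0,P_3)=1$. Deleting the middle vertex leaves $2K_1$,
with $\textnormal{mult}(0,2K_1)=2$, while deleting an end vertex leaves
$P_2$, whose eigenvalues are $\pm 1$, so $\textnormal{mult}(0,P_2)=0$.
In both cases the multiplicity changes by exactly one, the largest
change the lemma allows.
```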
{ "redpajama_set_name": "RedPajamaArXiv" }
1,825
Q: Dual boot with encrypted Windows partition I tried to make the Windows option on the GRUB boot menu the default by following instructions I found on the internet. The instructions were good and graceful, but since I encrypted the Windows partition after I installed Ubuntu, the graceful approach didn't work for me when I ran update-grub: the Windows option simply disappeared, because the encrypted partition cannot be read from Ubuntu. I found some snippets for grub.conf on the internet that restore the Windows entry. That was successful, but an error message now appears after I select the Windows option. Does update-grub automatically back up a copy of grub.conf, or of any other config files it changes? If so, I can probably retrieve that config and get rid of the error message. I didn't mean to encrypt anything; this is a company PC, and encryption of the Windows partition is mandatory. I do not want to decrypt the Windows partition, fix GRUB, and encrypt it again - that simply takes too much time. A: Try installing the Windows bootloader on another partition, then chainload that partition from GRUB and boot into Windows from there.
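To make the chainload suggestion concrete, here is a minimal sketch of a custom GRUB 2 entry. The device coordinates, the `ntfs` module, and the partition layout are assumptions; adjust `(hd0,msdos1)` to whichever partition actually holds the Windows bootloader on your machine:

```shell
# Hypothetical entry for /etc/grub.d/40_custom; the device name below is
# an assumption and must be adapted to the actual disk layout.
menuentry "Windows (chainload)" {
    insmod part_msdos
    insmod ntfs
    # Point GRUB at the partition holding the Windows bootloader:
    set root=(hd0,msdos1)
    # Hand control to that partition's boot sector:
    chainloader +1
}
```

After saving the entry, regenerate the menu with `sudo update-grub`. As far as I know, `update-grub` (a wrapper around `grub-mkconfig`) simply overwrites `/boot/grub/grub.cfg` without keeping a backup, so if you want one, copy the file manually first, e.g. `sudo cp /boot/grub/grub.cfg /boot/grub/grub.cfg.bak`.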
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,090
{"url":"https:\/\/imathworks.com\/tex\/tex-latex-beamer-change-size-of-figure-caption\/","text":"# [Tex\/LaTex] Beamer: change size of figure caption\n\nbeamercaptionsfontsize\n\nI am creating a beamer presentation and I would like to change the size of the caption under the figures. I am using the Madrid theme.\n\nI tried the obvious way:\n\n\\caption{\\scriptsize{Text of the caption.}}\n\n\nand\n\n\\setbeamerfont{caption}{size=\\scriptsize}\n\n\nin the preamble of the document.\n\nHowever none of that seems to have any effect.\n\nUse the caption package: \\usepackage{caption}\nThen use captionsetup: \\captionsetup{font=scriptsize,labelfont=scriptsize}","date":"2023-03-31 06:39:06","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9740725159645081, \"perplexity\": 3455.378574188113}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296949573.84\/warc\/CC-MAIN-20230331051439-20230331081439-00792.warc.gz\"}"}
null
null
Marion Street Strategies is a Data and Analytics consulting firm specializing in building reports, visualizing data and creating sustainable workflows. Our work aims to bring actionable insights to change leaders, make data more accessible for all and use these buildings blocks to empower organizations to achieve their long term goals. Marion Street Strategies has experience in the political, media and non-profit industries and is based in Washington, DC.
{ "redpajama_set_name": "RedPajamaC4" }
3,666
var Account, AccountConfigError, async, log, notifications, _; _ = require('lodash'); Account = require('../models/account'); AccountConfigError = require('../utils/errors').AccountConfigError; log = require('../utils/logging')({ prefix: 'accounts:controller' }); async = require('async'); notifications = require('../utils/notifications'); module.exports.fetch = function(req, res, next) { var id, _ref, _ref1; id = req.params.accountID || req.body.accountID || ((_ref = req.mailbox) != null ? _ref.accountID : void 0) || ((_ref1 = req.message) != null ? _ref1.accountID : void 0); return Account.findSafe(id, function(err, found) { if (err) { return next(err); } req.account = found; return next(); }); }; module.exports.format = function(req, res, next) { log.debug("FORMATTING ACCOUNT"); return res.account.toClientObject(function(err, formated) { log.debug("SENDING ACCOUNT"); if (err) { return next(err); } return res.send(formated); }); }; module.exports.formatList = function(req, res, next) { return async.mapSeries(res.accounts, function(account, callback) { return account.toClientObject(callback); }, function(err, formateds) { if (err) { return next(err); } return res.send(formateds); }); }; module.exports.create = function(req, res, next) { var data; data = req.body; return Account.createIfValid(data, function(err, created) { if (err) { return next(err); } res.account = created; next(); return res.account.imap_fetchMailsTwoSteps(function(err) { if (err) { log.error("FETCH MAIL FAILED", err.stack || err); } return notifications.accountFirstImportComplete(res.account); }); }); }; module.exports.check = function(req, res, next) { var tmpAccount; if (req.body.imapLogin) { req.body.login = req.body.imapLogin; } tmpAccount = new Account(req.body); return tmpAccount.testConnections(function(err) { if (err) { return next(err); } return res.send({ check: 'ok' }); }); }; module.exports.list = function(req, res, next) { return Account.request('all', function(err, founds) { if 
(err) { return next(err); } res.accounts = founds; return next(); }); }; module.exports.edit = function(req, res, next) { var updated; updated = new Account(req.body); if (!(updated.password && updated.password !== '')) { updated.password = req.account.password; } return updated.testConnections(function(err) { var changes; if (err) { return next(err); } changes = _.pick(req.body, Object.keys(Account.schema)); return req.account.updateAttributes(changes, function(err, updated) { res.account = updated; return next(err); }); }); }; module.exports.remove = function(req, res, next) { return req.account.destroyEverything(function(err) { if (err) { return next(err); } return res.status(204).end(); }); };
{ "redpajama_set_name": "RedPajamaGithub" }
5,438
{"url":"https:\/\/www.physicsoverflow.org\/17252\/can-we-design-a-cute-handout-to-announce-physicsoverflow","text":"# Can we design a cute handout to announce PhysicsOverflow?\n\n+ 2 like - 0 dislike\n371 views\n\nJust a few hours ago I was once again at the Physics Institute of the University of Rostock and I thought I should really nail down a note to announce PhysicsOverflow on the students announcement board there.\n\nBut of course the handout should not look like kindergarten (as it always does when I try to make one :-\/) instead it should look rather cool, fun, but nevertheless some kind of professional. And it should contain the most relevant information of course.\n\nCan we design such a nice \"official\" handout, which can be printed in A4 or A5 and distributed at universities for example?\n\nedited May 8, 2014\n\nOh yes, this is a very good idea!\n\nI think we should also explicitly encourage all users to post such a pamphlet or handout on their university's notice board.\n\nMaybe this thread could also be used to design a small creative advertisement that users could use when promoting PhysicsOverflow on their blog or something. Like the Stack Exchange community ads, they should be small and unintrusive, yet nice and attention-catching.\n\n+ 1 like - 0 dislike\n\n### Instructions for printing\n\nPrint the brochure double-sided on an A4 paper and fold it along the faint grey lines. Ensure that the third column on the first page of the word document comes first and the second column of the first page of the word document comes last. 
The first column of the first page of the word document (which has the third column of the second page of the word document on its other side) should get folded into the brochure if you do it correctly.\n\nTo attach to a noticeboard, attach tape to the second column of the second page.\n\nPlease do not print in excess.\n\nanswered May 9, 2014 by (1,975 points)\nedited May 24, 2014\n\nHm, when I download and look at it with my Adobe Acrobat, on the first site on the right side the Logo is truncated ...\n\n@Dilaton It seems the Google drive viewer does not recognise rotation of images. I have converted it into a PDF file on my computer and re-uploaded it. See the updated link.\n\nIt looks very nice now :-)\n\nVery professional looking!\n\nI don't quite understand how the vertical text goes with everything else.\n\n@physicsnewbie Thanks, but I actually just inserted the content into a default template : )\n\nI don't know what the \"vertical text\" is; are you referring to the logo?\n\n@dimension10 yes, the logo and the URL. Is there a need to fold it into 3?\n\nI think it would look better if you just folded it in two, with the main text of the current second document inside, and just a large logo on the front with a smaller URL below it, with a quick bullet summary below that to arouse people's curiosity to look inside. For example:\n\n\u2022 Community driven\n\nI don't think there's a need to include anything about the software used in the brochure.\n\n+ 1 like - 0 dislike\n\nA banner to put on your personal homepage.\n\nWorks in all backgrounds, email me at abhi99.ps at g mail if you want a larger version.\n\nI am for some reason not able to add the HTML of the post (despite adding it in visual mode, which means using &lt; &gt; etc.,, and changing the source to have both code and pre and everything, the image renders inside the code. 
So please right click and click \"inspect element\" to view the HTML source to add.\n\nanswered May 10, 2014 by (1,975 points)\nedited May 24, 2014\n\nNice banner :-)\n\n+ 1 like - 1 dislike\n\nHow about using the very same invitation email? Let me write down my idea - please feel free to comment:\n\nWe are a group of users who are starting a higher-level (graduate-level and above) physics site, called Physics Overflow, outside the Stack Exchange network.\u00a0In addition to a Q&A section, we will also be releasing a \"Reviews\" section, in which all papers from ArXiV will be imported, for our users to review. Eventually, we will extend this to journal databases, conference proceedings, important conferences and seminars, and so on.\n\nPhysics Overflow has graduated from its private beta phase, and is now a full grown website. Hence, we wish to invite all those interested in posting graduate-level questions, giving answers to such questions, etc. to join this website.\n\nIf you are interested in participating on Physics Overflow, you may find us here:\u00a0http:\/\/physicsoverflow.org\/.\n\nPhysics Overflow (Company)\n\nanswered May 9, 2014 by (285 points)\nedited May 9, 2014\n\nActually, it's graduated from it's\u00a0private beta\u00a0phase and the public beta phase is the phase during which the reviews section is to be developed, etc.\n\nAlso, the trollsouthere14 should be removed.\n\n@dimension10 I edited it. Is it good?\n\nIt's better, but I think its a little short.\n\nBy the way, I don't understand the purpose of \"(Company)\". PhysicsOverflow is completely non-profit and there is not (yet) any organisation associated with it.\n\n Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the \"link\" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. 
This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)\u00a0\u00a0 Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\\varnothing$ in the following word:p$\\hbar$ys$\\varnothing$csOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.","date":"2021-01-16 15:36:56","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3366646468639374, \"perplexity\": 1585.1659461272081}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-04\/segments\/1610703506697.14\/warc\/CC-MAIN-20210116135004-20210116165004-00661.warc.gz\"}"}
null
null
Benjamin Mbunga Kimpioka (Knivsta, 2000. február 21. –) svéd korosztályos válogatott labdarúgó, a svájci Luzuern csatárja kölcsönben a svéd AIK csapatától. Pályafutása Klubcsapatokban Kimpioka a svédországi Knivsta községben született. Az ifjúsági pályafutását a Knivsta és Sirius csapatában kezdte, majd az angol Sunderland akadémiájánál folytatta. 2018-ban mutatkozott be a Sunderland harmadosztályban szereplő felnőtt keretében. 2021-ben a Torquay United és a Southend United csapatát erősítette kölcsönben. 2022. március 31-én hároméves szerződést kötött a svéd első osztályban érdekelt AIK együttesével. Először a 2022. április 17-ei, Malmö ellen 3–0-ra elvesztett mérkőzés 65. percében, Nicolás Stefanelli cseréjeként lépett pályára. Első gólját 2022. július 10-én, az Elfsborg ellen idegenben 2–2-es döntetlennel zárult találkozón szerezte meg. A 2022–23-as szezon második felében a svájci Luzernnél szerepelt kölcsönben. 2023. január 28-án, a Basel ellen idegenben 3–2-re megnyert bajnokin debütált és egyben megszerezte első gólját is a klub színeiben. A válogatottban Kimpioka az U18-as, az U19-es és az U21-es korosztályú válogatottakban is képviselte Svédországot. 2019-ben debütált az U21-es válogatottban. Először a 2019. március 22-ei, Oroszország ellen 2–0-ra elvesztett mérkőzés 85. percében, Viktor Gyökerest váltva lépett pályára. Első gólját 2019. március 25-én, Skócia ellen 2–1-es győzelemmel zárult barátságos mérkőzésen szerezte meg. Statisztikák 2023. január 28. szerint Jegyzetek További információk Transfermarkt 2000-ben született személyek Svéd labdarúgók Labdarúgócsatárok A Sunderland labdarúgói A Torquay United labdarúgói A Southend United labdarúgói Az AIK labdarúgói A Luzern labdarúgói Az English Football League labdarúgói Az Allsvenskan labdarúgói A Swiss Super League labdarúgói Élő személyek
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,960
{"url":"https:\/\/huggingface.co\/nielsr\/coref-bert-base","text":"# CorefBERTa base model\n\nPretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing CorefBERT did not write a model card for this model so this model card has been written by me.\n\n## Model description\n\nCorefBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:\n\n\u2022 Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.\n\u2022 Mention reference prediction (MRP): this is a novel training task which is proposed to enhance coreferential reasoning ability. MRP utilizes the mention reference masking strategy to mask one of the repeated mentions and then employs a copybased training objective to predict the masked tokens by copying from other tokens in the sequence.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. 
If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefBERT model as inputs.\n\n### BibTeX entry and citation info\n\n@misc{ye2020coreferential,\ntitle={Coreferential Reasoning Learning for Language Representation},\nauthor={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},\nyear={2020},\neprint={2004.06870},\narchivePrefix={arXiv},\nprimaryClass={cs.CL}\n}","date":"2021-09-28 22:01:26","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3944866359233856, \"perplexity\": 2298.9407377638954}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780060908.47\/warc\/CC-MAIN-20210928214438-20210929004438-00412.warc.gz\"}"}
null
null
# Tipis | Tepees | Teepees ## History and Design of the Cloth Tipi ### Linda A. Holley Tipis | Tepees | Teepees History and Design of the Cloth Tipi Digital Edition 1.0 Text © 2007 Linda A. Holley Photographs and drawings © 2007 as noted in the photo credits chapter All rights reserved. No part of this book may be reproduced by any means whatsoever without written permission from the publisher, except brief portions quoted for purpose of review. Gibbs Smith P.O. Box 667 Layton, Utah 84041 Orders: 1.800.835.4993 www.gibbs-smith.com ISBN: 978-1-4236-1140-0 To all my "tipi slaves" and those who love a night and a day in a tipi. # Tipis | Tepees | Teepees **Table of Contents** Acknowledgments Introduction Documenting the Historic Tipi Styles of Tipis Materials and Steps for Making the Tipi Cover The Liner or Lining Rain Covers and Rain Caps Poles, Pole Care, and Pole Maintenance Pitching or Setting Up a Tipi Living in a Tipi Decorations on the Cover and Liner Transporting a Tipi Tips on the Care and Buying of a Tipi Today's Tipi Encampments Tipis Outside the United States Modern Tipis Camp Stories Appendix Glossary Bibliography Resources Photo and Drawing Credits # Acknowledgments I wish to thank all the members of Material Culture of the Prairie, Plains, and Plateau and Lodge Owners discussion Web site groups. The input of the following people—Bill Holmes, Benson Lanford, Jan Kristek, Billy Maxwell, Bill and Kathy Brewer, Bob Brewer, Peter Gibbs, Creig White, Georg Barth, "Jack in the Black Hills," Ted Asten, Mike Terry, David Sager, David Ansonia, Alexander Barber, Duane Alderman, Carolyn Corey, Mike Cowdrey, Allen Chronister, Ken Weidner, Curtis Carter, and the many others who helped by doing historic research, locating photos, e-mailing, and sending packages of information—has been invaluable. They made this book possible. 
Thanks also to all of those wonderful people who kept my life sane during repeated computer crashes; three hurricanes (Francis, Ivan, and Jean) that left me with no electricity for weeks; surgery to fix my hand that was smashed during Hurricane Jean; and my knee replacement, which all my accidents with tipi poles contributed to. A special thanks to "Weird" Wayne McDowell who traveled from Illinois to Florida for the making of a 12-foot tipi. He took over five thousand pictures . . . and I only used about fifteen. "Tipi slaves" Bill Burns and Mike Ketchum endured a very intense week of filming and cutting and being yelled at by an unforgiving taskmaster. At least they got food and some money out of this and my gratitude. Thanks also to Jay Deen, my first tipi slave, who helped build many of the three hundred tipis that came from Alligator Trading Co., my tipi company. Thanks to the tipi makers around the world (Richard Reese of Reese Tipis, Nomadics, Strinz Tipis, Spring Valley, R. K. Lodges, Wolf Glen Lodges of England/Scotland, Rainbow Tipis of Australia, Arrow Tipis of Canada, and others), who answered my phone calls and e-mails with a wealth of information in photos and personal tidbits, which they were not afraid to pass on to me. A big acknowledgment to Louis W. Jones and Doug Rodgers who kept me up late at night with questions and the inquisitiveness to go find the answers or the usual "when will the book be done?" And a special thanks to James E. Dudley and Ed De Torres who took the time to sit down with me and edit hundreds of pages of information into something readable. # Introduction My interest in tipis began about 1971 when my husband and I saw a neighbor working on a set of tipi poles and we just had to dive in and help. That was it. Tipi fever hit us and we just had to have one. During the Christmas holidays of 1972, we ordered our first tipi, an 18-footer from Darry Wood. It arrived one night in the back of Darry's truck. 
I wanted to start painting it right away, but someone wisely advised me to wait a year, do some research, and then decorate. So, we began our research, finally settling on a beaded Cheyenne-style cover with a painted lining. We took that tipi everywhere—to rendezvous, powwows, and campouts. When my husband and I "split the blanket" in 1977, I kept the tipi and continued to travel to events. In 1978, I accepted the challenge to make a tipi. With David Clayton, I headed down to south Florida to buy an industrial sewing machine. From 1978 to 1995, I sewed over three hundred lodges, sometimes with the assistance of "tipi slaves" (those foolish enough to volunteer to help me). During all of these years of making tipis, going to various gatherings, doing beadwork, and making tipi accoutrements, I was studying—doing research and listening to those knowledgeable about tipi life. Like virtually all tipi lovers of my generation, I learned a lot from the classic work on tipis, _The Indian Tipi: Its History, Construction, and Use,_ by Reginald and Gladys Laubin. This book, published in 1957 and still in print, proved to be a wellspring of knowledge and inspiration; it spread the lore of the tipi literally all over the world. In Europe, however, the tipi was already in use before the publication of the Laubins' book. In the early twentieth century, there were already American Indian hobbyists in Europe, especially in Germany, inspired by the works of novelist Karl May and the Wild West show of Buffalo Bill Cody. Even so, it is fair to say that "the Laubin book," as it is affectionately called by tipi lovers, stimulated newcomers to the tipi worldwide. Today tipis are made and camped in on every continent but Antarctica (but it might be possible to use them there as well!). The modern tipi lovers' debt to the Laubins is enormous. It is time, however, to reassess. The Laubins wrote about the tipis they knew best, the tipis of the 1940s and 1950s. 
The tipis of that era represented a particular stage in the evolution of tipi design and construction. The cloth tipis, still popular at the time the Laubins' book was written, had replaced the early buffalo-hide tipis. Since then, tipi construction has evolved due to amazing developments in technology. So, focusing on the progress the tipi has undergone, this book will address three areas: first, the history and evolution of the tipi; second, a step-by-step process of acquiring or making your own tipi; and third, current uses of tipis, drawing on the experiences of tipi lovers all over the world. # Documenting the Historic Tipi The nomadic dwelling structure of the American Indian is called several names by many different tribes. The Apsaroka, Apsáalooke, or Crow Native Americans call the tipi _ashé_ (home) or _ashtáale_ (real home). The Blackfoot call it niitoy-yiss (the tipi), and the Lakota call it _tipestola_ (tipis) or what we now call all lodges.[1] In the last 175 years, the tipi has evolved in materials used, construction techniques, and usage. It has gone from buffalo and elk hides to cloth and now to the synthetic materials of the twenty-first century. The two important innovations in tipis were first the use of horses for transporting bigger poles and larger hide lodges. Then in the early nineteenth century came the introduction of cloth covers. The information about the historic tipi and tipi camp of the early to late nineteenth century in this chapter is taken from historic ledger drawings, daguerreotypes (tin types), glass slides, photographs, and handwritten documentation. Many pieces of information come from diaries, observations of clergy and the military, trappers and rendezvous people, visitors to tribal camps, and trading post lists of materials. Secondhand material, not directly observed or experienced, was avoided in collecting this information. One main exception is the quoted material from the late James H. 
Creighton that follows in the next section. Since many books written in the mid-1800s were not published until later in the twentieth century, the bibliography in the back of the book lists the sources by the date they were originally written or when the major firsthand information was observed. Arapaho women sit by a buffalo-hide tipi in Ft. Sill, Indian Territory, Oklahoma. The tipi was created between 1869 and 1874. ## Historical References to the Tipi My good friend the late James H. Creighton, who passed away in 2005, had a great love of the tipi and its construction. In honor of him, this special section on basic tipi history, which he wrote, is included: > The stately conical lodges of the Great Plains, with their beautiful symmetry, adjustable smoke flaps and snug interiors are familiar to just about everyone. The tipi, a Sioux word, has come to immortalize the glamour of the Wild West, chiefly through Hollywood Westerns. Aside from a handful of movies, such as _Little Big Man_ and _Dances with Wolves,_ however, the tipi is rarely portrayed accurately. It is a comfortable dwelling that can be used in any weather year-round. It is perhaps the most perfectly designed tent structure that has ever been used and its history is long and detailed. > > The conical home is as old as man and has multiple origins worldwide. In this country, it was found in one form or another almost everywhere. It often acted as a 'mobile home' for hunting societies. With a series of slender poles arranged around a tripod or four-pole base, the structure was covered in whatever was available in a particular region. Birch bark or marsh grass matting in the Northeast, dressed caribou hides in the far north, buffalo or elk in the Midwest and Plains, and tulle mat covering in the Northwest were common. Today, the nomadic Laplanders still use reindeer-hide lodges very similar to the Plains tipi, as do indigenous tribal groups across Siberia and into Mongolia. 
In ancient Europe, I am sure that the tipi-style lodge was also used both as temporary hunting lodges as well as permanent homes. The classic Sioux tipi was a relative latecomer, but it was on the American prairies and Great Plains that it rose to its present form. > > In the 1530s the Spanish under Hernando de Soto tore a path from western Florida to Virginia, then turned west to the Mississippi, and traveled on into present-day Oklahoma. The Indians encountered were already in a state of transition. Many were the remnants of the great Mound Builder societies that had centered on the Ohio River Basin. These overlapping cultures had once ruled networks of trade centers that spanned from the Atlantic to the Pacific and from Canada to Mexico. At that time, the Great Plains was not a center of population, although some groups had found their way to the Missouri River. Some appear to have been ancestors to the Pawnee and Arickara (Arikara), of the Caddoan language family. Farther upriver were transplanted Siouans, like the Mandan and Hidatsa, who had left their Ohio River homeland in prior centuries. In de Soto's time, other breakaway Siouan tribes, such as the Omaha, Ponca, Osage, Quapaw, and Kansa, were migrating west as they traveled. They took on traits learned from the groups already on the Missouri. > > Western Algonquin groups had done the same over many centuries, leaving forest homes around the Great Lakes to enter the prairie country. Early breakaways included the Blackfeet bands that migrated west to Montana and the Canadian Rockies. The Algonquian Cheyenne, Arapaho, and Shutai joined the migrating Siouans at the Missouri, where they took on similar planting cultures to that of the Mandan and Arickara. These "river" tribes had developed an elaborate culture in which they lived in semipermanent earth lodge villages, traveling to the buffalo country to hunt. The "classic" tipi evolved, in part, from this transitional period. 
> > Much of the cultural mixing between unrelated groups on the Missouri River occurred in the sixteenth and seventeenth centuries while the French were making rapid inroads to the Lakes region of the Midwest and beyond. Those that we refer to as Sioux were then more or less unified and lived in Wisconsin and Minnesota as "the Seven Council Fires." Linguistically they were of three main groups: the Dakota, or Eastern Sioux; the Nakota, or Middle Sioux; and the Lakota, or Western Sioux. Their culture also included earth lodges, especially among the eastern Dakota bands. These eastern bands, along with their Winnebago cousins also utilized the domed wigwam common to the Woodlands tribes, but it is probable that all used tipis as well. I suspect that the practice of fusing tribes was very old among the related Siouans that lived on the prairies' edge. Some of the Omaha (Dhe'giha) group and their related Iowa, Oto, and Missouri cousins (who with the Winnebago made up the Chiwere Siouans) built domed wigwams well into historic times. > > Until the arrival of the horse on the Plains, the "Tipi People," wherever they were located, were forced to make small lodges that could be transported with dogs and by backpacking. Somewhere, a "three- pole" and a "four-pole" culture developed, but there does not seem to be a uniform tribal breakdown. The Crow, who broke away from their Hidatsa cousins, became four-pole people. The Omaha, learning from the Arickara, were four-pole people, but the Cheyenne, who also learned to make tipis from the Arickara, were three-pole people, as were the Sioux. > > Innovative design alterations occurred as former woodland cultures adjusted to the windy, flat country. In former sheltered regions, a cone shape alone met their needs, with a separate flap possibly being pulled across the top opening in wet weather. On the Plains, where sudden wind gusts and tornados were common, attached smoke flaps were added to the tops of the lodges. 
> > With external poles, these flaps could be adjusted to prevent wind from blowing smoke back into the lodge. After horses became available, the "Holy Dogs" revolutionized the tipi culture to what it is today.

> > Horses became the symbol of wealth in this economy. The more horses to drag poles of ever-lengthening size, the larger a lodge could become. Originally averaging 10 to 14 feet in pre-horse days, the "horse" tipis often stood 18 to 20 feet high. Some council lodges were much larger. Although individual tribal construction techniques varied, the basic hide tipi was fairly uniform from tribe to tribe. For an average family tipi, possibly 15 feet in diameter and height, fifteen spring-killed cow buffalo hides were sewn to make the cover and smoke flaps. The Cheyenne and Arapaho had women's sewing societies that specialized in tipi construction. Some survive to this day in Oklahoma. The finished product, always owned by the woman of the lodge, was snug and weather tight. The brain-tanned hide was soft as velvet, but heavy. Cooking indoors and constant wetting through the winters caused a tipi to become brittle and stained; the hide tipis were replaced at least every other year. When canvas became available in the early 1800s, many people replaced buffalo hide covers with the lighter army duck material. By 1875 the buffalo were all but gone and commercial canvas tipis became the norm."[2]

The historic hide lodges are best shown in the early pictures and drawings by George Catlin, Karl Bodmer, Alfred Miller, Rudolph Kurz, and Seth Eastman. After returning to the studios with their field sketches, some artists embellished their paintings, thereby showing far more detail than was originally observed. For the most part, these artists of the pre-1860 period did not show the existence of specifically cut smoke flaps, the large rectangular flaps, or ropes from the bottom of the flaps to a pole out front.
The lodges also appear circular or conical in shape, without formal door openings. Many lodges appear decorated while others are left plain. Few show any type of streamer at the tips of the poles. Mayer, in his 1851 drawing, does show an undecorated Sioux lodge with the beginnings of smoke-flap extensions, but without a liner or formal door (Mayer 1932, 112). The pins at the bottom are pulled to open up the bottom of the door for easy access.

Buffalo-hide tipis were heavy, weighing about fifty to one hundred pounds on average. The skins were known for their translucence. They let in light, while still protecting the inhabitants from the weather.

A big step in the evolution of tipi covers came around the late 1840s with the introduction of cloth to the western trading market. The Native Americans started replacing hides with cloth because it was lightweight, materials to make it were readily available, it was easy to make, and it let more light inside the lodge.

Cowhide tipi.

Rudolph Friederich Kurz was a Swiss artist who spent the years from 1846 to 1852 at western trading posts on the Mississippi and upper Missouri Rivers. He recorded his experiences in his journals and drawings. His journal of the 1840s indicates a few wealthy Indians were experimenting with single-fill canvas that is similar to what is for sale today. As the hide trade accelerated and treaty annuities were collected, linen cloth and cotton canvas came into greater demand as they became more affordable. Also available were ticking material and striped awning cloth. Newspaper and magazine photographs and illustrations of the era show striped awning material used by the U.S. military and European visitors.

From the arrival of the first European explorers and settlers to North America, traders had been inviting the Indian Nations to trade their fur hides for guns, beads, cloth, needles, mirrors, or anything that might be of use or interest.
The well-kept records of the trading companies (the Hudson Bay Company, Bent's Fort, Northwest Company, Astoria Inventories 1813, St. Louis Missouri Fur Company, the 1831 Manifest of Jedediah Smith's Trade Goods Santa Fe, and the Rocky Mountain Outfit of 1836, to name a few) have a wealth of information on the materials traded to the Indians in all parts of the country. For instance, steel needles were very popular, along with kettles, pots, pans, vermilion, powder, and steel blades. The dates also show a healthy early trade at the turn of the nineteenth century in St. Louis, Santa Fe, and the Oregon territories. Wherever trading posts and early rendezvous were set up, the Indians came to exchange their furs and crafts.

George Ruxton describes the interior and exterior of a Sioux buffalo hide tipi in the later days of the mountain man. At this time, the fur trade days were ending and the trading posts were taking the place of the yearly Western Rendezvous held west of the Mississippi River. In his writings, "Life in the Far West" for _Blackwood's Magazine_ and _Adventures in Mexico and the Rocky Mountains_ (1847), he observes:

> The Sioux are very expert in making their lodges comfortable, taking more pains in their construction than most Indians. They are all of conical form: a framework of straight slender poles, resembling hop-poles, and from twenty to twenty-five feet long, is first erected, round which is stretched a sheeting of buffalo robes, softly dressed, and smoked to render them watertight. The apex, through which the ends of the poles protrude, is left open to allow the smoke to escape. A small opening, sufficient to permit the entrance of a man, is made on one side, over which is hung a door of buffalo hide. A lodge of the common size contains about twelve or fourteen skins, and contains comfortably a family of twelve in number.
> The fire is made in the centre immediately under the aperture in the roof, and a flap of the upper skins is closed or extended at pleasure, serving as a cowl or chimney-top to regulate the draught and permit the smoke to escape freely. Round the fire, with their feet towards it, the inmates sleep on skins and buffalo rugs, which are rolled up during the day, and stowed at the back of the lodge.

> > In traveling, the lodge-poles are secured half on each side a horse, and the skins placed on transversal bars near the ends, which trail along the ground—two or three squaws or children mounted on the same horse, or the smallest of the latter borne in the dog travees (travois). A set of lodge-poles will last from three to seven years, unless the village is constantly on the move, when they are soon worn out in trailing over the gravelly prairie. They are usually of ash, which grows on many of the mountain creeks, and regular expeditions are undertaken when a supply is required, either for their own lodges, or for trading with those tribes who inhabit the prairies at a great distance from the locality where the poles are procured.

A few years later, Mary Henderson Eastman gave an account in her journals, _Dahcotah: Life and Legends of the Sioux_ (1849), of the life of women and how they set up summer and winter lodges. This is one of the few accounts of the use of tipis and the more permanent lodges: "Her work is never done. She makes the summer and the winter house. . . . Visit her in her teepee, and she willingly gives you what you need, if in her power: and with alacrity does what she can to promote your comfort. . . . The women plant the poles of their teepees firmly in the ground and cover them with a buffalo skin. A fire is made in the center and the corn put on to boil" (v, 60).

The eyewitness account of American artist Frank Blackwell Mayer, _With Pen and Pencil on the Frontier in 1851,_ explores the structure and materials of the hide lodge.
Mayer journeyed through the Minnesota frontier, recording his experiences and making sketches. His most memorable entries and sketches were made at Traverse des Sioux in the summer of 1851.

> The village is composed of two sorts of habitations winter houses and summer houses or Tipis a house, or Waykayas skin covering & Tipitonka's large house. [The Sioux word Tanka means 'large'; thus tipitanka may be translated as "large house." Mayer applies the term, variously spelled, to the summer house of the Sioux. The wakeya was the "skin tent," probably the same as the ordinary tipi . . . edited by Bertha L. Heilbron 1932.]

> > The winter house is a tent made of furless buffalo hides tanned like buckskin & sewed together, supported on poles & held together at the seam by splints of wood, it being left open at the top to permit the smoke to escape & beneath is an aperture for egress & ingress—thus forming a circular conical edifice with the ends of the poles protruding from the top, the edges of the skin falling over & varying the colour & form. These are the winter habitations and are near ten feet in diameter generally. A fire is made in the center & the occupants repose around it. In warmer weather it is sometimes used. It is then thrown open, the aperture of entrance being enlarged & the portions of the slack skin supported on the sticks, thus giving rise to two graceful festoons from either side of the seam.

> > [Teepees] . . . belonging to Indians of the plains are sometimes forty feet [in diameter]. The poles of tamarack are of large size . . . the protruding end of the tallest of which is suspended a horses tail, [an indication of] the residence of a principal warrior or a chief, the exterior being decorated with diagrams of his principal actions. I know not why, but there is home feeling about the interior of a teepee.
> > As I have lounged on a buffalo robe by the light of a smoldering fire, it reminds me of my childish positions on the parlor rug in front of a hickory fire, during the winter evenings. The teepee is rendered very comfortable in the interior by piling straw around the exterior & strewing it within, & laying buffalo robes & furs upon it. Without, the snow soon accumulates about the straw leaving only the upper portion of the tent visible. Closing the entrance & building a fire it becomes a snug refuge from the inclement winters. Tepees last four or five years, but owing [to] the rotting of the lower portion of the skins decrease in size. The tipi or skin lodge is . . . peculiar to the western or Dacotah branch of the Indian races; the Algonquin having lived in bark-wigwams. . . . The summer house is similar in form to our log cabin, tho' more nearly approaching a square, & the roof reaching nearer to the ground.

> > It is seldom that the Indians congregate in villages during the winter. After the gathering of their crop of corn in the fall, they separate into parties of from one to three or four families, & with their tipis depart for the woods which afford shelter from the inclemency's of winter & brings them nearer to the game on which they subsist during that season (104–10).

Sioux encampment, ca. 1850.

Jedediah Smith's trade lists indicated that he carried a line of thousands of yards of linen cloth.[3] This material as well as "heavy sheeting" was used in clothing and wagon covers, and for tent making. Santa Fe was a major trading route and base for Smith in his travels of the Southwest and Great Plains area. None of this proves that the Native Americans traded or bought this material, but there are records of this cloth offered for sale.

The earliest known photograph of canvas tipis was taken in 1852, when a Santee Dakota canvas/hide tipi was photographed near today's Bridge Square, Minnesota. The picture shows an encampment with several tipis.
The tipis are randomly arranged, have no door poles or smoke flap ropes, and look conical in shape rather than having the steep-back tilt found in later tipis. Also notice the shorter lodge poles and the lack of streamers and decorations.

From an inventory of the Cheyenne and Sioux property destroyed by order of Major General W. S. Hancock in April 1867, we know that the Indians possessed diverse tools, cooking utensils, and other objects.[4] Retaliations like this were responses to Indian raids on settlers' camps, to attacks by other native groups that were misattributed to the tribes in the area, or to the discovery of materials and supplies from hostile tribes.

_Home of Mrs. American Horse,_ 1891, showing a Sioux (Lakota) interior of a lodge with cooking pot and pole liner.

This list is a remarkable cross section of a nineteenth-century Plains Indian village. It gives insight into the lives of desperate people as they fled with their children, weapons, animals, and minimal personal belongings.

### Cheyenne Camp (132 lodges)

| Quantity | Item |
| --- | --- |
| 396 | buffalo robes |
| 57 | saddles |
| 120 | travois |
| 78 | headmats |
| 90 | axes |
| 58 | kettles |
| 125 | fry pans |
| 200 | tin cups |
| 130 | wooden bowls |
| 116 | tin pans |
| 103 | whetstones |
| 44 | sacks—paint |
| 57 | sacks—medicines |
| 40 | hammers |
| 63 | water kegs |
| 14 | ovens |
| 117 | rubbing horns |
| 42 | coffee mills |
| 150 | rope lariats |
| 100 | chains |
| 264 | parfleches |
| 70 | coffee pots |
| 50 | hoes |
| 120 | fleshing irons |
| 100 | par-flesh sacks |
| 200 | horn spoons |
| 42 | crow bars |
| 400 | sacks feathers |
| 200 | tin plates |
| 160 | brass kettles |
| 15 | sets lodge-poles |
| 17 | stew pans |
| 4 | drawing knives |
| 10 | spades |
| 2 | bridles |
| 93 | hatchets |
| 25 | teakettles |
| 250 | spoons |
| 157 | knives |
| 4 | pickaxes |

### Sioux Camp (140 lodges)

| Quantity | Item |
| --- | --- |
| 420 | buffalo robes |
| 226 | saddles |
| 150 | travois |
| 140 | headmats |
| 142 | axes |
| 138 | kettles |
| 40 | frying pans |
| 190 | tin cups |
| 146 | tin pans |
| 140 | whetstones |
| 70 | sacks—paint |
| 63 | water kegs |
| 6 | ovens |
| 280 | rope lariats |
| 140 | chains |
| 146 | parfleches |
| 50 | curry combs |
| 58 | coffee pots |
| 82 | hoes |
| 25 | fleshing irons |
| 40 | horn spoons |
| 14 | crow bars |
| 54 | brass kettles |
| 11 | hammers |
| 5 | sets lodge-poles |
| 4 | stew pans |
| 160 | rubbing horns |
| 3 | pitchforks |
| 3 | teakettles |
| 280 | spoons |
| 4 | pickaxes |
| 1 | sword |
| 1 | extra scabbard |
| 1 | bayonet |
| 1 | mail bag |
| — | stone mallets |
| 1 | lance |
| 9 | drawing knives |
| 2 | spades |
| 8 | bridles |
| 7 | coffee mills |

Father Peter J. Powell, a well-known and respected twentieth-century author on Cheyenne material and culture, sums up the loss of this way of life in his book _Sweet Medicine_ (1969): "Most of the old Northern Cheyenne material beauty died in the flames of Morning Star's camp. Two hundred tipis, nearly all of canvas, but some of buffalo hide, were destroyed. Among them were the elaborately decorated lodges of the military societies, their linings covered with vividly colored paintings, men and horses moving in battle. Exquisitely quilled and beaded clothing, the sacred shields, scalp shirts and war bonnets were carried off or burned" (166).

After the destruction of so many camps in the 1870s, and particularly on the southern plains as late as 1874, the U.S. government issued canvas for tipi covers to the bands that surrendered at Ft. Sill, Oklahoma. Records of the military supply offices and journal books show the distribution of supplies to the Indians.

Other historic visual evidence of tipis is in the sketchbook drawings of the Kiowa, Sioux, and Cheyenne. At first these drawings were not thought to be historic in detail or use, but they are now considered to be very accurate and rich with information on tipis, clothing, decorations of all types, manners, daily life, courting, and other customs (McCoy 1987, 51). Tipi sketches show detail paintings or beadwork on the outside covers, and how they were set up for cooking, sleeping, and meetings (Viola 1988, 44). They do not show smoke-flap ropes extending from the bottom of the flaps to a pole in the front. One good interior drawing depicts backrests, beds, and bags (Bad Heart Bull 1967, 299). Today the Internet is also a good place to find sketchbook drawings. Web sites are available for study and interpretation of these drawings.
From the late 1860s to the early 1900s, a few photographers came west to photograph and document the Indians. In the winter of 1872–73, William S. Soule took twelve photos of a Comanche camp showing buffalo-hide tipis. John A. Anderson photographed the Sioux on the Rosebud Reservation between 1885 and 1900. He showed life inside and outside the lodges. From 1902 to 1911, Richard Throssel photographed the Crow reservation, and Fred Miller (from 1898 to 1912) was there as well. Both men took photo studies of the outside structure of Crow covers with some interior shots. Edward Latham took pictures of the Colville tipis from 1900 to 1905, and Lee Moorehouse photographed the Umatilla in 1903. Julia E. Tuell was one of the few women photographers. Tuell lived on the Northern Cheyenne Reservation from 1906 to 1912 and took photographs of the people who had defeated Custer. Then there is the 1898 to 1910 volume of work of Edward Curtis, who took hundreds of pictures of Indian life from the Southwest to the Northwest coast. However, there are hundreds of photos taken by identified and unidentified photographers in the last half of the nineteenth century that will probably never be acknowledged, but their work lives on in public and private collections. Universities and libraries, including the University of Washington Library, Denver Public Library in partnership with the Colorado Historical Society, the Denver Art Museum, and the Library of Congress, to name a few, are now putting their photographic collections on the Web.

View of Sioux Indian camp.

## The Cloth Tipi of the Mid- to Late Nineteenth Century

From observations made of the earliest pictures and sketches of the historic tipis, a tipi camp of cloth tipis might have some of the following characteristics:

* The tipis could face in several directions and could be either tilted or straight up and down in a more conical form.
* Some were tall and others squat in appearance.
* Tipis might be made of the striped awning, ticking with the narrower stripes, single-filled duck, or a muslin-weight material. A few photos show material that has company marks such as "Monumental Standard." This might be a cloth company or a tipi maker of the time period.
* Even in the same tribe, there could be different ways of sewing the panels together at different angles. This was very obvious with Sioux tipis.
* Patchwork repair work would be seen around the lift areas, sides, and bottom.
* The covers would all touch the ground, with the pegs going in at an angle and not straight up and down. Around and on the covers would be pieces of wood and rocks—anything to anchor the tipi and keep out the wind. Most poles would be crooked, partially stripped, and somewhat pointed from being dragged and put into the ground.
* The majority of southern-style tipis would not be painted, while the Northern Blackfoot tipis would be highly decorated with paint.
* The tipi poles would not extend too far beyond the tie point area and in some cases would barely reach the tie point.
* The poles might be wrapped around the tie point with rope once, twice, or not at all, depending on the amount of rope.
* Very few streamers would adorn the tips of the poles.
* There would not be any door poles holding open the smoke flaps. The exception to this might be the Crow, Blackfoot, and Nez Perce, who used the flap where the smoke-flap pole extends beyond the corners of the pockets or holes. During the last part of the century, these tribes started using a door pole out front as well as the longer cover poles to extend the flaps.
* The flaps would fold left over right facing the front for lacing pins.
* There would be no formal bottom underturn (sod cloth) for liners, if there was any liner. Liners are rectangular panels tied only at the tops and held in place with objects at the bottom. In the Northern tribes there was more use for liners during the seasons because of the severe winters and cooler nights.
* Medicine bundles might be attached to the top, back, or front of the tipi. Some might be on tripods to the side or back depending on the tribe. Most gear would be put away for storage and travel. There would not be tripods in the front for weapons.
* In a typical tipi interior, there were no altars or skulls. They appeared to be in the society lodges or those used by holy men. More adornments appeared in tipis of the late nineteenth century and early twentieth century. This was in part due to the start of the big fairs, dances, and Wild West shows.
* Gores, or triangular additions to the smoke flaps, were only found on a few of the historic covers.
* Designs would be different within the tribe and the families.

From the actual covers, photos, and drawings of tipis, it is a bit difficult to tell what is a Cheyenne, Sioux, Kiowa, Arapaho, Comanche, or Crow tipi. Though there is something of a set pattern today, the families of this time period intermarried, gifted covers, or made do with what they had in material.

Ponca encampment, 1907, Ft. Sill, Oklahoma.

## The Tipi from the Late Nineteenth Century to the Mid-Twentieth Century

Ernest Thompson Seton seems to have been one of the first writers to give more information on the structure and materials for building a general type tipi. He wrote profusely about Indian crafts. Seton also helped in the formation of the Boy Scouts of America, although he had previously formed his own group, the Woodcraft League of America. Seton drew some of the first layouts for making a tipi cover, showing a 10- and a 14-footer, both of which are still used today.
In his book _Two Little Savages_ (1903, 151–54), Seton explains the construction of the tipi:

> Si had not only sewed on and hemmed the smoke-flaps, but had re-sewn the worst of the patches and hemmed the whole bottom of the teepee cover with a small rope in the hem, so that they were ready now for the pins and poles. 'Ten strong poles and two long thin ones,' said Yan, reading off. These were soon cut and brought to the campground.

> > 'Tie them together the same height as the teepee cover—'

> > 'Rawhide rope,' he said, but he also said, 'Make the cover of skins. I'm afraid we shall have to use common rope for the present,' and Yan looked a little ashamed of the admission.

> > The tripod was firmly lashed with the rope and set up. Nine poles were duly leaned around in a twelve-foot circle, for a teepee twelve feet high usually has a twelve-foot base. A final lashing of the ropes held these, and the last pole was then put up opposite to the door, with the teepee cover tied to it at the point between the flaps. The ends of the two smoke-poles carried the cover round. Then the lacing-pins were needed. . . .

> > 'You can't beat White Oak for pins.' He cut a block of White Oak, split it down the middle, then split half of it in the middle again, and so on till it was small enough to trim and finish with his knife.

> > Ten pins were made eight inches long and a quarter of an inch thick. They were used just like dressmakers' stickpins, only the holes had to be made first, and, of course, they looked better for being regular. Thus the cover was laced. (Continued in the Appendix.)

Nearly fifty years after some of the earliest observations of tipis, these dwellings still continued to fascinate anthropologists, missionaries, and general observers. In her 1885 _An Average Day in Camp Among the Sioux,_ Alice C. Fletcher describes the everyday life of a Sioux village, including the struggles of living in a tipi.
Later, in 1911, with Francis La Flesche, she wrote again about the Omaha in _The Omaha Tribe._ The Omaha use the four-pole style, as do the Crow, Nez Perce, Blackfoot, and Comanche, as compared to the three-pole setup used by the Sioux, Cheyenne, Arapaho, Kiowa, and many other southern tribes. The observations of Fletcher and La Flesche cover the construction, setting up, and transporting of the lodge. Both writers comment on the effect of weather, travel, and living in a tipi, and they describe the simplicity of moving the camp and then setting it back up. (See Appendix for excerpts from both sources.)

Studies of the Hidatsa and their culture were reported by Gilbert Wilson in 1924 in _Anthropological Papers of the American Museum of Natural History, Vol. XV, Part II—The Horse and the Dog in Hidatsa Culture_ (based on material from 1908–1918; the Hidatsa or Gros Ventre, called Minitari by the Mandans, are a Siouan tribe speaking a dialect akin to that of the Crow). Wilson's account of a tipi, most of which was related to him by Buffalo-Bird-Woman in 1913, is one of the most detailed for this time period. He gives exacting information on how the tipi is placed and constructed, on the interior fires, and on the pole setup for a Mandan or Sioux-style lodge. It appears he camped with and observed life in a village, judging from the detailed notes he wrote on life in a tipi. However, there is a problem with his explanation of the pole setup for a tipi as compared to a Sioux tipi as we know it. This discrepancy in pitching poles is pointed out later in the material. Wilson's observations include the following:

> **Winter Lodges and Drying Stages.** Our winter camp was of earth lodges, but these were smaller and less carefully built than were our summer lodges.

> **Our Camp on the Sandbar.** About noon, we camped on the sandbar. There were about one hundred buffalo skin tipis in the camp.
> When we camped in a good level place it was customary to pitch the tipis in a big circle, and if the wind was calm when we pitched camp all the tipi doors faced the center of the circle. However, if we were camped along a creek that had a narrow bank, or in any other place where a circle could not be easily formed, the tipis were set up in rows or whatever other arrangement the formation of the land compelled. If there was a stiff wind blowing a tipi was pitched with the door away from the wind.

> **Turning a Tipi.** Camped thus in a tipi, if a windstorm arose and it became necessary to turn the tipi with the door away from the wind, my husband and I and two or three neighbors, who were invited to help us, could very easily turn it around. Sometimes seven or more people turned the tipi; the larger number could handle it better, though if there were people enough to hold the foundation poles steadily that was sufficient.

> > First, the pins that held the cover to the ground on the outside were pulled up. Then, we went inside the tipi, picked up the four foundation poles and the one to which the cover was tied and moved the poles and the cover at the same time. The rest of the poles were now shifted about as was necessary. If the five poles were held firmly while they were moved about there was no danger that the tipi would fall down.

> > A Mandan tipi could be raised and turned by four persons since its foundation was of but three poles.

> **Anchoring the Tipi.** During a windstorm it was often necessary to anchor the tipi to prevent it from being blown over. For this purpose a rawhide lariat was passed around the poles, inside and under the tipi cover, and the ends were drawn together in a noose. The noose was pushed up by means of a forked stick to the point where the poles converged, and drawn taut.
> Then the loose end of the lariat was drawn downward and tied to a pin driven into the ground, four or five feet from the fireplace, toward the windward side of the tent. Very often two pins were driven into the ground and crossed. If the tipi were very large, it might be anchored with a second lariat on the outside.

> > In a Mandan tipi, a lariat always hung in the center in readiness for a storm. The Mandan three-pole tie was weaker than our Hidatsa four-pole tie and for that reason a lariat was passed around all the poles at the tie. In the Hidatsa tipis this was unnecessary, except in a heavy windstorm, since our poles locked at the top.

> **The Fire.** The fireplace was surrounded by stones only when wood was scarce and buffalo chips were used for fuel, but when it was abundant the kettle was set directly on the coals and the meat roasted on wooden spits. When we camped on the prairie, however, we could obtain no wood, and made our fire of buffalo chips. In that case, we roasted our meat on stones.

> > In our lodges in like-a-fish-hook village, the fire was smothered at night. If it became extinguished by any accident, the woman went to a neighbor who had a fire and got some coals. We followed the same custom when in camp.

> > As has already been said, each of the tent poles had a hole at the smaller end through which a thong was drawn. The larger ends of the poles dragged loosely on the ground, spread fan shape. Sometimes one of these tent poles broke where it was pierced for drawing through the thong. In that case, a slight groove was cut into the pole as a substitute. This, of course, was done only in an emergency; the ordinary method was to perforate the end of the tent pole.

> > As the tent cover lay on the horse, it made a load on either side of the animal, twelve inches thick, twenty-six inches long, and twenty-four inches wide, while the connecting portion that passed over the saddle was about eighteen inches long.
> **The Mandan Tent Tie.** In setting up tents, the Hidatsa used a four-pole foundation. The great advantage in employing this method was that in ordinary weather it was unnecessary to draw a lariat around the top of the poles at the place where they converged to steady them and strengthen the tie. The foundation poles of an Hidatsa tent interlocked, as the fingers of the two hands may be made to interlock. The Mandan, however, used the Dakota tent tie, which needs to be reinforced by a lariat drawn around the poles at the top. The Mandan tie (see drawings on page 19): in A, the three poles are tied together for the skeleton frame. To tie these poles, they are laid on the ground and fastened at the joint. It will be observed that pole c projects beyond the others at the top. This is for the purpose of making the front of the tent slightly longer than the rear so that the smoke hole will be directly over the fireplace. B shows the skeleton frame after the tie has been made and the framework has been set up, while in C will be seen the ground plan of the completed framework. C (a, b, and c) presents the three foundation poles as already described, a resting upon b and b upon c. The remainder of the poles for the completed framework are then set up in order as shown in the following table:

> > a rests upon b
> > b rests upon c
> > d rests upon a and c
> > e rests upon c and d
> > f rests upon c and e
> > g rests upon c (under d)
> > h rests upon g and c
> > i rests upon h and c
> > j rests upon a and b

> > The tent cover is tied to the pole, j, in the same manner as in an Hidatsa tent. Then the pole is raised between f and c and the tent cover drawn around the frame.

> > It will be noted in B that a lariat hangs from the tie. When all the poles but j have been set up in the framework, this lariat is drawn out from beneath the poles, a and b, and carried quite around the frame in such a manner as to draw the poles snugly together at the meeting point.
> The arrows in the diagram C indicate the direction in which the lariat is drawn. After it has been drawn once around the poles, the pole j is raised and the lariat drawn around far enough to enclose this pole. Then it is drawn inside the framework, between j and c and anchored to a short post, or pin, driven into the ground. The owner of the tent draws the cover around and laces it in place. Finally, the two poles that hold the smoke flaps are raised.

The drawing on the opposite page shows the setup for the Mandan tipi, which Wilson drew for his notes. He states that they use the Dakota tent tie for the setup. The setup is backwards from the familiar three-pole setup of the Sioux, or else Wilson has his notes drawn wrong. I did use this style to set up my 12-foot lodge, and instead of one pole in the tripod on the left facing my door, there were two tripod poles on either side of the door and only one pole towards the back. The lift pole was to the south of the F pole in Wilson's directions. This is opposite of what we know as the three-pole setup, and this style gives the appearance of a four-pole setup when the cover is up.

Continuing with Wilson:

> The tent poles were of pine [spruce] brought to the village by visiting Crow. In spite of the fact that they did not grow on our Reservation, we always had a great many of these poles. They lasted a long time. There were fifteen poles to our tent, including the two that upheld the smoke hole flaps. The tent door was of an old cloth blanket. On the hunt, the tent door was often made of a deerskin hung fur inside so that whenever anyone went out of the tent, the fur of the door skin fell smooth against the head and body. For this reason, the fur was hung head up.

> > b, articles of food piled here for safety, also any meat brought in from the hunt, skins, etc.

> > h, dishes, bowls, cups, spoons, and the like used at meals, piled here when not in use;

> > i, fire place,

> > j and k, firewood.
> Between each bed and fire lay a log. The space between the log and the tent wall was filled with grass and the whole covered with robes. The logs were laid in this position for two reasons: to keep the shape of the bed and to prevent sparks from the fire from setting fire to the grass. These small logs were placed near the bed when wood was plentiful, especially if we expected to camp in the locality for some time. When we camped in the hills where wood was scarce, or if for any reason we were in a great hurry or it was inconvenient to obtain logs, we did not use them.

Diagrams of a Mandan tipi setup and framework.

Ground plan of tent used on tribal hunt, showing position of beds, fireplace, and household utensils.

Besides anthropologists, missionaries and visiting travelers wrote down their observations in diaries and letters. A few of these have been published over the years, giving firsthand information on reservation life. Elizabeth M. Page went west to administer the word of God and help the Indians. She kept a diary of her daily life among the Indians she worked with, published in her book _In Camp and Tepee: An Indian Mission Story_:

> Occasionally we see a woman moving about among the dingy tepees, and now and then a child ventures out from the school to visit his people. The appearance of the camp is uninviting enough. The tepees, which ordinarily look approximately white, now present their blackened cones against the white snow. A few are protected by wind-breaks made of the dried stalks of the tall weeds which grow in our river-bottoms, bound together and standing upright in a circle. One of these structures has been unable to withstand the force of the wind, and has blown over against the tepee in the centre. Beside each tepee stands the wagon of its occupant. Most of the ponies have sought protection behind the hills or in the ravines, but one team is cowering close behind the wind-break.
> They are but frail shelters—these hastily constructed tepees; only a frame of poles covered by an inferior quality of domestic, with an aperture at the top through which the smoke from the fire within escapes.

(Continued in the Appendix.)

Blackfoot village in Portland, Oregon, from 1910 postcard.

From 1890 to 1920, tipis started getting more formal in their look. They became taller and more decorated. Tipi poles were cut longer, especially those of the Crow, Nez Perce, and Blackfoot. The Cheyenne, Arapaho, Kiowa, and Sioux lodges were lavishly decorated with beaded medallions and tinklers or dangles on the covers. Door poles were very evident in the front of most lodges, with the bottoms of the smoke flaps tied to them. Streamers and ribbons on the lodge poles were in abundance. Fairs and gatherings were on the increase, and it became very popular to decorate in order to show off the tipi and the interior materials. Crow Fair became the tipi capital of the world.

Ernest T. Seton tipi pattern, 1912.

In _The Book of Indian Crafts and Indian Lore_ (1928, 139–44), Julian Harris Salomon expanded on the Ernest T. Seton material by giving a more detailed draft of the tipi cover, a drawing of a liner in place, and information about pegging the liner down. This may be the first time attachments to the bottom of a liner are described. Historic photos and observed liners do not show ties on the bottom.

> The poles should cross at the tie flap. There they should be tied together by the Teton method or by simply passing a rope around them three or four times and tying a square knot. Thus the tripod is made. . . .
>
> If the tipi is to stand for some time the end of the tripod rope should be sufficiently long so that it may be wrapped around the outside of all the poles at the point where they come together. Its end is then spiraled down one of the poles. In windy weather it is fastened to stakes driven near the center of the lodge.
> When the lodge was to remain in one place for a long period, the poles were sunk in the ground to a depth of twelve to eighteen inches, extra allowance for this having been made when the tripod poles were measured off. . . .
>
> To the ends of the poles the Indians often fastened streamers of cloth, buffalo or horse tails. The latter were supposed to bring fortune and many horses to the tipi's owner.
>
> The cords on the bottom of the smoke flaps are tied to a peg placed well in front of the door. By swinging the smoke poles around according to the direction of the wind, a good draft for the fire may always be had.
>
> A most important part of the tipi is the dew-cloth or lining. This is made of strips of cloth four to six feet wide, which are fastened to the poles on the inside of the lodge. The upper edge of each strip has tie strings on it for this purpose, and the lower is provided with rope loops one foot apart, so that it may be staked close to the ground. It is best made of drill or light canvas in six-foot sections and should be long enough to extend entirely around the lodge. Its purpose is to keep the lower part of the lodge free of smoke and the rain which runs down the poles from the beds.

Julian Salomon–style tipi pattern, 1932.

Anthropologist Robert Lowie in _The Crow Indians_ (1935, 88–89) gives one of the better descriptions of the four- versus three-pole setup and of the butted poles used by the Crow. Lowie became familiar with the Crow Indians and their tipis through their rodeos and fairs.

> Basic and correlated with other differences is the use of either three or four poles as a foundation for the rest. The Cheyenne, Arapaho, Teton, Assiniboin, Kiowa, Gros Ventre, Cree, Mandan, Arikara, Ponca, Oto, and Wichita use three poles, whereas the Crow, Hidatsa, Blackfoot, Sarsi, Ute, Shoshone, Omaha and Comanche use four. From observation and experience, Prof. W.S.
> Campbell finds that the three-pole type is the stauncher, offering greater resistance to the winds, the Cheyenne form being the most serviceable of all; the Crow variety is the most elegant in shape, though inferior in painted decoration to that of the Blackfoot, Dakota, Arapaho and Kiowa.

(Continued in the Appendix.)

Royal Hassrick, an anthropologist, published _The Sioux_ in 1964 based on information from 1830 to 1870. Although this book is not based on observed firsthand material, it is informative of the earlier time period. His book also has some detailed drawings of tipi covers, water bags, and backrests. Here's an excerpt: "During the spring month some of the people moved into wigwams lest the drizzling rains rot the less durable tipis. This was the time of year when tipis were repaired or renewed, from new hides collected during the fall and winter. Leggings and moccasins were made from the 'smoked tops,' and smoking of hides began in the warm weather." (Continued in the Appendix.)

Ben Hunt wrote the first of his articles on Indian crafts in 1938. They were later compiled into his book _Indian Crafts_ in 1942. The drawings show a rope in the bottom hem of the cover with loops sticking out for the pegs, pocket smoke flaps, and a very circular cutout for the cover. In his popular 1954 book, _Indian Crafts and Lore,_ his drawings indicate a liner but no underturn, or sod cloth, with a trapezoidal pattern smaller on the top and longer on the bottom to fit the curvature of the cover. Ties at the bottom attach the cover to the ground and there is rope at the top (101). The cover uses the Blackfoot-style pattern (96) and grommets on the bottom (98).

Ben Hunt–style tipi pattern, 1954.

Reginald and Gladys Laubin in their seminal book _The Indian Tipi_ (1957) show a more modern, streamlined tipi with a fitted liner that has an underturn, or sod cloth. An interior rain cover (or what the Laubins called an ozan) is introduced for the first time.
From this point on, almost all non-Native American tipis around the world became based on the Laubins' designs. After 1957, most books on tipi-making used the Laubins' patterns and terminology, with some embellishments to make their work appear different. Smoke-flap differences with tribal names are now identified or categorized. The tipi became oval, or egg shaped, instead of the circular pattern of the historical tipis of the nineteenth century. Based on the Laubins' drawings and the patterns for each tribe, these setups have now become the established pattern in tipi making. Reginald Laubin could be called the "Father of the Modern Tipi."

Laubins' pattern for an 18-foot Sioux-style tipi cover.

Laubins' pattern for an 18-foot-style liner/lining.

## Late-Twentieth-Century Tipis to the Present

The next innovations in tipi making came with Guy (Darry) Wood of Haysville, North Carolina. In his article "The All-American Do-It-Yourself Portable Shelter," found in an issue of _Aquarian Angel_ published in 1972, Wood further advanced tipi design. Though still based on the Laubins' book, Wood engineered the cover and the lining so that the liner fits every angle of a pole in the tipi, which makes the cover more egg shaped. He incorporated flaps on the outside cover over the door to keep water from coming in between the cover and door. He also introduced new synthetic materials like Sunbrella for the underturn, or sod cloth, on the liner, nylon cording for peg loops, polyester threads, and a synthetic blend for the cover/liner. My own tipi making is based on his designs. I am thankful for the experience of his teachings.

Very few people, if any, are making the authentic tipi of one hundred and fifty years ago. Most tipi makers today base their patterns on the Laubins'. So, what is an authentic pattern?
You might have to go back to the tipis of the mid- to late nineteenth century, when covers touched the ground, tipi poles were very short, and there were no formal fitted liners, rain caps, or modern materials. The exceptions to the basic look would have been the society or community tipis, the medicine lodges, and some individual family tipis showing off their battle deeds. And all of these varied from tribe to tribe and area of the country.

## Timeline and Evolution of the Historic Tipi into the Modern Tipi of Today

The timeline on pages 26–28 is a summary of the development of the tipi, first made of buffalo hide, later of canvas, and now of modern materials. All early dates are approximate and vary because of inadequate information from available oral histories, vaguely written records, sketches, paintings, and undated photographic images. From the beginning of the twentieth century, the information, based on early anthropological studies, photos, diaries, articles, and books written about tipis and their construction, is more precise and detailed.
* **Before European contact, around 1500s**
  - Small hide tipis on the Plains
  - Hide tipis average 12 feet in size
  - Reed tipis in the Northwest
  - Bark tipis in the far North
  - Dogs pull travois with camp gear
* **After the introduction of the horse, 1520s**
  - Tipis increase in size from 12 to 18 feet or more
  - Covers do not have flap extensions or door poles
  - Tipis are pegged with cover touching ground
  - Slit doors—no formal door cover
  - Short poles sticking out top
  - Very few streamers
  - One or two smoke-flap poles
  - Hide linings are used
* **Fur traders and mountain men, 1800–1860s**
  - Fur trade posts in northern and southern Plains
  - Introduction of cloth material in tipis
  - Larger hide and cloth lodges
  - Slit-type doors
  - Pegs are placed in ground at an angle
  - Hide flaps are scalloped
  - Some painted lodges and quilled decorations
  - No smoke-flap ropes or door poles in front
  - Some extensions on bottom of flaps
  - Leather hides or robes for liners
  - Some cloth for rectangular linings or blankets
* **Indian wars and frontier settlers, 1860s–1900s**
  - Start of reservations
  - Cloth being issued for tipis
  - Decline of buffalo-hide lodges
  - Many groups on the move due to hostilities
  - Smoke flaps start having longer extensions
  - Use of striped awning material for covers
  - Some very large (approximately 25-foot) covers
  - Tipis do not have door poles
  - Ropes or cords attached to smoke flaps
  - Cords attached to pegs holding cover
  - Fur, blanket, or cloth doors
  - Start of formal sewn door openings
  - No rain caps
  - Painted lodges
  - Quilled and beaded rosettes
  - Dangles or tinklers used
* **End of the Indian wars and reservation period, 1880s–1920s**
  - Wild West shows and Indian fairs
  - Wood platforms start to appear in Wild West show tipis
  - A few door poles appear
  - Decorated doors of hide and cloth
  - Lodge poles get longer with some groups
  - Some tipi covers are very decorated for show
  - Very few buffalo-hide tipis—now made from cowhide
  - Tailored oval door and variations
  - Highly decorated hide and beaded rectangular cloth liners
  - Crow Fair and roundups showcase family lodges
  - The use of a "gore" in smoke flaps
  - Stanley Campbell (Vestal) defines the four major types of smoke flaps as well as characteristics of Crow and Cheyenne tipis
* **Scouts, tourists, and anthropologists discover tipis, 1890s–1940s**
  - Ernest T. Seton introduces tipis to Scouting
  - Introduction of the outer rain cap
  - J. Salomon introduces a fitted liner
  - Cloth tipis
  - Tipis mostly used in the summer or family gatherings
  - Great use of door pole
  - Long lodge poles
  - Rawhide and fancy cloth doors
  - Blackfoot have highly painted lodges
  - Crow, Sioux, Arapaho, Cheyenne, and Kiowa use the beaded or quilled rosettes and tinklers
  - Pegs in at angle to hold cover to ground
  - Few liners used, made of cloth, shawls, or blankets
* **Hobbyists' tipis, 1954**
  - Hunt writes the _Indian Crafts_ books
  - Detailed drawings on the cover and fitted-type liners
  - Rain cap
  - Liners are pegged down and have underturns (sod cloth)
  - Cloth tipis shown at public events, like ceremonies and powwows
  - Covers start to come off the ground an inch or more, and the gore becomes standard in most patterns
  - Fewer pegs for quick setup and takedown on the cover
* **First major book on history and making a tipi, 1957**
  - Reginald and Gladys Laubin write _The Indian Tipi,_ introducing the standards for lodges
  - The fitted liner uses the trapezoidal-style pattern with underturn
  - First major work on tipis gives measurements for cover and liner
  - Drawings for the inside rain cover
  - Introduction to the three-pole and four-pole setups
  - The liner becomes a major part of the tipi
  - Cover starts to lift off the ground, showing the liner bottom
  - Most tipis now patterned after Laubins' book
* **Groups using tipis and innovations in tipi materials, 1950s–1970s**
  - Start of the hobbyist powwows and rendezvous tipis for camping
  - Hippies discover tipis; movement back to the earth and nature
  - New synthetic materials in cords, ropes, and threads
  - Waterproofing materials for canvas
  - Fire-resistant chemicals for cloth
  - Innovations in wood and concrete platforms
  - Waterproof and mildew-resistant polysynthetic materials
* **Next major step in tipi making and materials, 1962–1972**
  - Darry Wood writes "The All-American Do-It-Yourself Portable Shelter"
  - Introduction of the formal fitted liner with the trapezoidal fitting of the angles to the poles in relation to the cover
  - Use of 50/50 synthetic material in the covers
  - Addition of Sunforger to the bottom of liners for extreme wet conditions
  - Flaps attached to the cover going over the door to prevent water from coming in
* **Basic look of today's tipi, 1970s–present**

  _(Non-Indian tipi owners)_
  - Follow Laubins' pattern
  - Fitted liner and rain cover
  - Door pole out front
  - Cover off the ground by 2 to 10 inches
  - Pegs driven into ground straight up
  - Plain white or decorated cover
  - Painted liner or plain white
  - Cloth door and sewn door opening
  - Fancy carved pegs and lacing pins
  - Comforts of home inside, including a fire
  - Try to re-create the old Indian look

  _(Indian tipi owners)_
  - Family/tribal traditions, or buy from a tipi maker who used Laubins' pattern
  - No liner in most cases, or until needed
  - No rain cover
  - Door pole out front
  - Decorated with family or tribal patterns, or plain white
  - Pegs driven in either at angle or straight up
  - Most occasions will not have a fire inside
  - Sometimes have a real four-poster bed, carpeting, and trunks for storage

* * *

[1] My relative Albert White Hat Sr. provided the following explanation: "The term _tipi_ in Lakota means, 'They live [someplace].' In Lakota, objects are described; therefore, the word does not refer to the shape itself. The more correct term would be _tipes'tola_ — 'He or she lives in the sharp-pointed lodge.'" This explanation is taken from Albert's book _Reading and Writing the Lakota Language,_ 35.

[2] Taken from James Creighton's correspondence, an essay he wrote on his Web site about the history of tipis, and conversations with him. Permission granted by Kathy Creighton in honor of Jim's memory.
[3] Translations from Spanish to English of Samuel Parkman's list of trade goods.

[4] The inventory appears in microfilm copies of _Letters Received, Adjutant General's Office._ An almost identical list, marked "A True Copy of Original Inventory," was signed by E. W. Wynkoop, U.S. Indian Agent, and appears in microfilm copies of his official correspondence.

# Styles of Tipis

In researching thousands of photographs showing various tipi groups, people standing by lodges, valley overviews of tipis, and tipis from different tribes, I have seen tipis that are tilted, straight up in back, or that have a shorter lean towards the door than the back. So, a true tipi can be tilted or not; it is up to the maker and is dependent on the materials available.

Tradition has told us that a tipi must always face east. This is not necessarily true either, since a tipi should be faced with the wind at its back, which changes for various parts of the country and different weather patterns, including the day or night winds. Most of the examined photos showed tipis facing different directions, even in a group. The custom of facing the tipi to the east, towards the sunrise, is a religious part of culture, though it is not known when this custom started. However, a tipi is, first of all, a dwelling that you live in and not a religious structure, unless so converted into one.

Tipi maker Wes Housler's buffalo-hide tipi.

## Cover Material

The first tipis were made of buffalo or elk hide, depending upon the area of the country, but most were made out of buffalo hides. The hides were taken in the spring from the cows, as the bull hides were deemed too tough. As cloth material became available in the early to mid-nineteenth century, covers started to change. The cloth material was lightweight, let in lots of light, and was easy to put together, which allowed the making of larger covers. Each tribe adapted its own buffalo-hide pattern to this new fabric material.
With the rectangular shape of the cloth came changes in the smoke flaps, door openings, and the bottom pegging area. Cloth came in widths of 36, 48, and 60 inches, with some variations in between. Widths had to be taken into account when piecing together material for the cover. Sometimes the cover was a combination of different widths. The cloth was mostly white or off-white, but the awning material came in red-and-white or blue-and-white stripes.

Tribes did not make their tipis in a uniform pattern, and this is reflected in the photographs from the time period. (See historic photos throughout this book.) Even within the same family or same tribe, patterns could be different. Included in this chapter are some of the many patterns used by the same tribe in making a cover. The drawings are taken from the hundreds of photos where seams can be seen. Although some tribes were fond of the striped awning material, the type of material used most likely depended on what supplies were available. The striped lodges were found mostly in Northern groups as opposed to the Southern Plains groups. There were, however, a few examples of Arapaho and Comanche awning-cloth covers in Oklahoma.

Comanche tipi made of striped cloth.

The cloth panels of the Nez Perce and some Blackfoot tipis were sewn together at right angles to each other, forming an alternating-stripe look. Whether this arrangement of fabric formed a strong jointed cover was up to the makers and depended on their sewing skills. A more up-to-date setup for a Blackfoot lodge is diagrammed on pages 16 to 23 of Brian Cannavaro's book _How to Set up a Blackfoot Lodge_. These new corrections are from Brian Cannavaro, who gave his permission to reprint the drawings on pages 34 and 35 of this book.
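Because cloth came in fixed widths, the number and lengths of the strips needed for a cover follow directly from simple geometry. The sketch below is only a rough illustration, assuming the cover is laid out as a semicircle whose radius equals the nominal tipi size in feet; the function name and example sizes are my own, and real patterns add smoke flaps, hems, and seam allowances.

```python
import math

# Rough sketch: estimate the strip lengths needed to piece a semicircular
# tipi cover from cloth of a given width. Assumes radius = nominal tipi
# size; smoke flaps and seam allowances are ignored.
def strip_lengths(radius_ft, cloth_width_in):
    w = cloth_width_in / 12.0        # strip width in feet
    lengths = []
    y = 0.0                          # distance of strip's inner edge from center
    while y < radius_ft:
        # the widest chord a strip must span is at its inner edge
        lengths.append(2 * math.sqrt(radius_ft**2 - y**2))
        y += w
    return lengths

# Example: an 18-foot cover pieced from 36-inch cloth
strips = strip_lengths(18, 36)
print(len(strips))                   # 6 strips, longest 36 feet
print(round(sum(strips) / 3, 1))     # total running yards, roughly
```

The same arithmetic shows why mixing widths mattered: fewer, wider strips mean fewer seams but more wasted cloth at the curved edge.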
#### Styles of laying canvas for a cover using 36-inch to 6-foot awning material

5' to 6' striped awning material mixed with 36" canvas. Blackfoot tipi cover. Sioux tipi cover from 1890. Different Nez Perce styles of cover patterns using awning material. Material is striped in the 4- to 6-foot size. Colors are red in most cases.

#### Updated Blackfoot-style tipi setup

## Smoke Flaps

Today, because of the Laubins and Stanley Campbell (Vestal), smoke flaps are divided into four recognized shapes: Crow (C, with J-style smoke pole opening); Sioux (A and D); Cheyenne (B, C, and E); and Blackfoot (G). The Sioux and Cheyenne smoke flaps are similar in shape. The Sioux is wider, with a shorter extension, if used, at the bottom. The Cheyenne tipi smoke flap is narrower, with a longer extension. It is difficult to distinguish some Cheyenne smoke flaps from those of the Sioux, the Arapaho, the Kiowa, or from the many tribes using a very similar type of smoke flap by looking at photographs and sketches. There are subtle differences in the width and length and also in the way the flap is attached to the lacing-pin area. During the mid- to late nineteenth century, many of these tribes camped together and intermarried with each other. There was also the gifting of tipis from one family to another, or from one society to another of a different tribe. It is thus not unusual in this cross-cultural interchange that there was a blending of the smoke-flap patterns among the tribes.

The most distinguishable smoke-flap patterns are those of the Crow, Blackfoot, and Nez Perce (F). The Crow smoke flap is as elongated as the Cheyenne. The difference is the smoke pole going through the flap pocket, or hole in the ear of the smoke flap, and extending beyond. The Blackfoot smoke flap also has a hole for a pole to extend through, but the flap usually is much shorter in length but wider.
The Nez Perce smoke flaps also have a hole for the pole to go through the corner area, but this triangulates back towards the lacing-pin area and does not seem to have the extensions of the Sioux/Cheyenne styles. The drawings in this chapter are not broken down by tribal styles, but show the three main ways of attaching the smoke pole. They also show how each of the smoke flaps looks in relationship to the lacing-pin area.

In the diagrams shown here, smoke flap A could be identified as Sioux and smoke flap B as Cheyenne in an older traditional style without the gores. Gores are the triangular inserts in cloth flaps that Native Americans started adding around the turn of the twentieth century to help cover the smoke-hole area. The old hide tipi flaps would stretch around the poles, while the cloth tipi smoke flaps stayed the same shape, not stretching to cover the smoke-hole area. With the addition of this extra piece of cloth, the gore area covers more of the tripod opening in inclement weather.

Diagrams C and D are described as Cheyenne or Sioux depending on the width of the flap. D shows an extra piece of cloth added to the front edge, making it wider and more in the style of a Sioux tipi. Extensions on the bottom of the flaps can be found on both tribal styles. Flaps, if they are used, can vary in length.

Smoke-flap styles.

Sewing a hole in the corner pocket of smoke flap C would change it into the Crow style of flap, and possibly the Nez Perce flap, where a smoke pole went through the corner pocket area. Smoke flap F is used by the Sarcee and Nez Perce. There are differences within these groups, such as length, width, and use of pockets or sewn holes. The way smoke flaps attach to the lacing-pin area can also be different within each tribal group. The drawings on page 39 reflect some of the different views of the front of a lodge, which vary according to the time periods of the last part of the nineteenth into the early twentieth century.
Differentiating Sioux and Cheyenne tipis by the way they place their smoke-flap poles is not always possible from the photographic evidence or sketchbook drawings. Cheyenne tipi smoke-flap poles have been described as meeting in the back while the Sioux's crossed (Laubins 1957, 46). In the hundreds of observed images of both groups, poles were crossed or not; there was no uniformity. In many cases, smoke-flap poles are not seen in use or only one is shown, and there are no door poles out front to tie the smoke flaps to, as is commonly done today.

Four-foot-tall Southern Cheyenne tipi, 1907.

Variations on front smoke-flap/lacing-pin areas.

# Materials and Steps for Making the Tipi Cover

This chapter covers the entire process of making a tipi cover, including the materials, workplace setup, patterns, layout, and the sewing process. Construction of the lining, preparation of the poles, and care of the tipi will be covered in subsequent chapters.

## Materials for Making a Tipi

* canvas
* thread
* cording/tie tapes
* pencils or markers
* grommets
* rulers
* safety glasses
* steam iron
* linen waxed thread
* sewing machine
* needles (for use with sewing machine and for use by hand)
* scissors
* glue
* leather
* tables
* butcher/kraft paper or cardboard
* long roll of 4-foot-wide plastic (optional)
* marbles and/or round river pebbles

## Canvas/Cloth Materials

Cotton canvas is the material I prefer for making tipis. It breathes, which minimizes condensation inside the tipi when the temperature outside differs from that inside, and it has the perfect amount of stretch for a cover. It is subject to mold, mildew, and ultraviolet-ray damage; however, special treatments have been developed to combat these conditions. Cotton and synthetic canvas can be purchased from canvas supply houses, fabric stores, and awning stores. Major canvas suppliers are in the Resources section of this book. Many tipi makers will also be glad to sell you canvas. A few tipi makers sell tipi kits.
Various weights and grades of canvas are available. Almost all tent companies describe the materials, the assorted canvases used in tipis, and the weights of those canvases in their brochures, so the information may seem a bit repetitive. But it is a good idea to read it all so you don't waste your time and money on a poorly made tipi. What you choose is up to you and depends on where you live. The original tipi was designed for drier climates, without much humidity. We have now adapted the lodge to fit our lifestyles and weather.

## Different Weights of Canvas

Canvas can be purchased at various weights. The lightest is 8-ounce duck, and the heaviest is around 26 ounces. The weight refers to the number of ounces per square yard. The lightest is naturally the cheapest. Use 10- to 12-ounce lightweight canvas if you plan to move your tipi a lot. If your tipi is going to be semipermanent, or if you live where there are a lot of storms, use a heavyweight canvas, 12 to 14 ounces. The bigger the cover, the heavier the weight of material you will want.

Sunforger Marine Finish Boat Shrunk is the premier cotton canvas fabric for tipis used in regions of high humidity, such as the Great Lakes region or the southeastern United States. This is the canvas that I prefer and recommend for tipis. It is available in 10.10-ounce and 12.65-ounce weights, both suitable for tipis. The heavier weight provides greater strength, but the lighter material can be used when the goal is to hold down the overall weight of the tipi.

Sunforger Fire Resistant canvas has the same treatment for water repellency and mildew resistance, but has an additional flame-retardant quality that meets the California standard (CPAI-84), which has become the industry-wide standard. Many states now require all tents and tipis to be fire resistant. That does not mean they are fireproof. I have seen five lodges burn to the ground due to negligence, and my 12-footer was one of them.
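Since canvas weight is rated in ounces per square yard, the carrying weight of a cover can be estimated from its area before buying cloth. A rough sketch, assuming the cover is approximately a semicircle whose radius equals the nominal tipi size (the function name and the example fabrics are illustrative; flaps, hems, and seams add a few pounds in practice):

```python
import math

# Rough estimate of a tipi cover's canvas weight.
# Assumption: the cover is roughly a semicircle of radius = tipi size (ft).
def cover_weight_lb(tipi_size_ft, canvas_oz_per_sq_yd):
    area_sq_ft = math.pi * tipi_size_ft**2 / 2    # semicircle area
    area_sq_yd = area_sq_ft / 9                   # 9 sq ft per sq yd
    return area_sq_yd * canvas_oz_per_sq_yd / 16  # 16 oz per lb

# An 18-foot cover in 10.10-ounce cloth versus 14.90-ounce cloth
print(round(cover_weight_lb(18, 10.10), 1))  # about 35.7 lb
print(round(cover_weight_lb(18, 14.90), 1))  # about 52.7 lb
```

The difference of roughly seventeen pounds on an 18-foot cover is why the lighter cloth suits a lodge that moves often, while the heavier cloth suits a semipermanent camp.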
Fire resistance is a good quality, but it cannot compensate for carelessness.

Marine-treated 10.10-ounce army duck is another excellent fabric for tipis. The fabric has a high thread count, and the weave is even, consistent in strength, and will not leak when touched. Although Sunforger and army duck are the two best cotton canvases for tipis, other canvas types can be used in some situations, notably in arid environments. Army duck is seldom, if ever, specially treated to resist mold and mildew. Nor is it preshrunk, and both it and single-fill duck will shrink drastically, so your 18-footer may end up a 17-footer if you don't allow for shrinkage. Single-fill duck is made with coarse single-ply yarns and has a tendency to leak when touched. The two single-fill weights for tipis (see Resources) are 12 ounce and 14.90 ounce. The latter has 20 percent more cotton in it and is more durable and water repellent.

Synthetic canvases are an alternative to the cotton canvas that I prefer. Fifteen-ounce Starfire is a 45 percent polyester/55 percent cotton fabric with an acrylic topcoat. Each application of acrylic is heat sealed onto the base fabric for added strength. It is water, mildew, and fire resistant, and meets CPAI-84 standards. It is soft, flexible, easily cleaned, and will last a long time, but it cannot be painted.

Polaris is a 50 percent cotton/50 percent polyester blend. It is UV resistant, mildew resistant, breathable, water repellent, and flame retardant. It remains flexible in extreme temperatures and is recommended for tipis that will be set up for extended periods of time. It can be painted with acrylic paints or exterior latex house paints.

Sunbrella is an acrylic material that has the advantage of being very strong and extremely decay resistant. It cannot be painted, but it comes in an amazing array of colors, including many bold stripe patterns.
This material does not breathe or meet CPAI-84 standards, but it is an excellent material to use on the bottom of a liner, the part that touches the ground.

Dean Wilson related this story about camping with the Blackfoot at one of their dances and the materials they used for their lodges:

> I have danced on the Piegan and Kainai (Blackfoot nations) reserves each year for the past five years—and almost everyone lives in tipis during the dances, including us dancers, and I have noticed that almost all of the tipis are hand sewn, use much lighter canvas than the commercial ones I own. They are not treated with anything (flame retardant or mildew treatment)—none are Sunforger canvas at all—all are very large though (they used to laugh at our little 16-foot and even our 18-foot one), but the covers are quite light. I asked several families if they used lightweight, untreated canvas specifically to keep it light to raise the large lodges. Without fail everyone responded the same way—this was all they could afford. Ah, Napi! Humour and lessons at the same time!

To sum up, in choosing your canvas material, keep in mind where you will be camping, the weather conditions in your area, how large your tipi will need to be to accommodate all the people you expect, and how much weight you can manage.

## Selecting a Sewing Machine

The original hide tipis were sewn together using bone awls and sinew, and then later with steel sewing needles and cotton threads. The whip stitch, also called the overcast stitch, was used. Later in the nineteenth century, a few wealthier Indians gained access to the first hand-turned wheel or crank-driven Singer machines that were available after 1854. But these machines were expensive, and very few were seen on the Great Plains.
Electric sewing machines became available around 1920, before many areas of the country had electricity, but the old treadle sewing machine remained in use well after World War II.[5]

Old hand-sewn method of piecing a tipi cover together without a sewing machine.

I made two tipis with ordinary household sewing machines before switching to an industrial sewing machine. My first tipi, in 1978, was made on a Singer Featherweight sewing machine my dad bought for me in 1964. The tipi was an 18-foot lodge made from 12.48-ounce Vivatex that had to be fitted into the machine arm on each pass. I used Singer leather-weight needles to get the heavy cotton thread through the canvas. With the two, four, and six layers of canvas, I broke or bent needles every other pass or so. I wore safety glasses on the job, as broken needles shot everywhere. When the tipi was finished, all the gears of my machine had to be recalibrated.

My next sewing machine was an old Singer foot-treadle machine. It had a much bigger arm to work with and heavier gears to work the thicknesses of canvas. The job was much easier and I broke fewer needles, but it became clear that if I were to continue making tipis I would have to have an industrial sewing machine.

1964 Singer Baby Sewing Machine Linda Holley used to make her 18-foot lodge.

I next purchased a 3/4-horsepower, double-needle Chandler Adler industrial sewing machine with forward and reverse stitching. The reverse stitch is great for locking your stitches; without it, you have to pick up the lever and move the cloth back in order to sew forward again. The double-needle feature enables you to sew the large panels to each other with one pass rather than two. Other features of the Adler are the large opening in the arm for fitting the canvas through and the "dog feet" that grab the material and walk it past the needles. Look for these desirable features on any model of industrial sewing machine that you consider purchasing or renting.
Regardless of the machine, wear safety glasses and shoo bystanders away when sewing canvas because when needles break they can shoot out six feet or more. Adler Industrial Double-Needle Sewing Machine. Choose a good sewing machine that has metal gears and a large neck opening and is strong enough to sew through six layers of canvas. Many of today's machines are made with plastic gears and internal parts that cannot withstand heavy-material sewing for long periods of time. After every tipi, the sewing machine will have to have its gears recalibrated. If all you have is the old home sewing machine, do your best with what you have. Check with awning shops to see if they will sew the main parts for a price and you can do the remainder. And, of course, there is the old standby of hand sewing, the way the Indians did it. ## Thread The supplier of the canvas should also be able to supply thread. Any strong, rot-resistant thread will do. One such is a heavy, bonded, UV-resistant Dacron/polyester thread. Another is Filco thread, which has a polyester core wrapped in a cotton shell; the polyester core provides strength and durability, and the outer cotton shell will swell to fill the holes made by the needle when sewing. ## Hand and Sewing Machine Needles Leather needles for sewing machines are some of the largest for household machines. These are sturdy enough for use with canvas. You must take it slow and easy when going through more than four thicknesses of material. Some thicker areas may have to be hand sewn. For industrial or heavy-duty sewing machines, use sizes 110, 120, and 135 to 160, the latter being the heaviest. The heavier and stronger the needle, the bigger the hole; this can be a problem because it leaves a bigger hole where leaks may occur. A good size needle for most jobs is about 120 to 135.
Always check your sewing machine manual to determine proper sizes and lengths of needles and make sure the needle you are using is compatible with the machine. Most needles used in sewing canvas are three-cornered or Glover's needles. They have very sharp edges, which cut their way into the fabric. Other needles are round ended and fairly blunt, with much less tendency to cut the fabric. The blunt needle moves the threads aside and allows them to "heal" more rapidly and form a much tighter waterproof barrier. The needle eye should be the same or a bit larger than the thread used. It is not necessary to use a larger needle than the job or the thread requires. The only things I hand sew are the buttonholes for the lacing pins, where I use a blanket stitch to reinforce the area surrounding the hole. ## Cording/Tie Tapes Hardware stores sell 1/4-inch cotton or nylon cord. The choice is yours on materials for the ties used on the smoke flaps, liner, door, and cover. About 50 feet or so should do the trick. Nylon cord has the advantage of not rotting, freezing, or staying wet. Cotton will mildew and freeze to the ground, and it stays wet longer, making takedown a slower process, but it does look more historic. Tie tapes and loops can be used in place of cording for the bottom of the tipi and smoke flap. Make these by folding a 3-inch-wide strip of canvas (cut on the bias) over twice and sewing it down. You can also purchase tapes by the roll from the dealer from whom you buy the canvas. You will need 10 to 15 feet of tie tape. ## Scissors Whatever scissors you use, make sure they are sharp at all times. Straight-blade scissors are preferable. A type of scissors called "polyester," or "tiny teeth," is made for fine cutting where special control is needed. They are great for bias cuts and small, rounded holes in the lacing-pin area. ## Pencils or Markers Use #2 pencils for most drawing on canvas.
Remember, pencil marks do not erase on canvas. If you have to erase, use a white eraser and not the red type. The latter does not come out of the canvas and will continue to show later. Never use markers except on paper for making pattern designs. Markers do show up better on patterns than pencil. Tailor's chalk also works on canvas and does not leave permanent marks; it just dusts off. ## Glue Use Elmer's Glue or any white glue for gluing the cover panels together. Do not use carpenter's glue because it leaves a yellow residue. White glue will dry clear and is not affected by the weather. When it does get wet, it will just fill in the seam and sewing holes. ## Leather If not making or using cloth cords, use good commercial or brain-tanned deer hide for the leather ties at the top of the liner and door. Cut a strip 21 inches by 3/8 inches. Pull both ends to stretch the strip. As it stretches, the width will narrow down. If it cracks or breaks, cut another one a bit wider, say, 1/2 inch. Cut at least thirteen of these ties, depending on the size of the tipi. See bottom photo on page 88. ## Grommets Brass size #4 male and female grommets are needed for the two- or three-piece liner sections where the liners join together in the bottom overlap area. Use one set for the two-piece and two sets for the three-piece liner. None are needed if you are making a one-piece liner. ## Rulers Twelve-inch to six-foot rulers or straight-edge pieces of metal or wood are needed. You'll also need a fifty-foot tape to make measurements and patterns for the canvas. ## Tables Card tables, dining room tables, or cafeteria tables are great for laying out the canvas and patterns. Getting material off the ground will help your back and help you be more accurate when cutting your cloth. You will need a large enough area to glue and iron the strips together. You will need a table at sewing-machine height to work the canvas.
As an alternative, you can do this on a long area of clear floor that can withstand the heat of an iron. Four-by-eight sheets of plywood set up on sawhorses can be substituted for tables. A couple of coats of polyurethane or other varnish will put a smooth surface on the wood to help in the movement of the canvas. ## Steam Iron Use a Teflon-coated steam iron to iron together the long panels of the cover. I prefer Teflon because the glue will not stick to the surface, creating a gummy, sticky residue on the bottom plate that has to be constantly cleaned. ## Butcher/Kraft Paper or Cardboard Use paper or cardboard to make the patterns for the liner and the small pieces of the cover. If I'm making more than one tipi, I prefer cardboard for its thickness and ability to hold an edge for longer periods of time. For one tipi, the paper will work. Large pieces need to be 5 to 6 feet wide and 25 feet long. Paper or cardboard patterns can be pieced together with duct tape. ## Waxed Linen Twines (thread) Waxed linen twines will be used for the buttonholes and can be found in large needlecraft stores or in boating or leather craft stores. Any tough, heavy cotton or linen thread will do. Coat the thread with beeswax, thread it on a needle, and fold it double. Twist and coat it again with beeswax. ## Long Roll of 4-Foot-Wide Plastic (optional) Roofers' plastic or any plastic sheeting should be used to keep the canvas clean when it is measured out for the cover and liner. Roll out about 50 feet of it on a driveway or grassy area and peg down both ends to keep the wind from blowing it around. Walk on it with stocking feet. ## Marbles, Round River Pebbles, or Bottle Caps The bottom edge of the tipi cover can be attached to the ground in one of two ways. The first method is to sew loops onto the bottom of the cover. The pegs are then driven through the loops into the ground. 
The second method is to wrap marbles or round river pebbles into individual "pouches" about three inches above the bottom edge of the cover with cord ties, using clove hitches. The long ends of the cord ties are then attached to the pegs, which are pounded into the ground. Both methods are traditional and are documented on the old tipis. The advantage of the marble or pebble ties is that they can be moved along the cover where needed. The sewn-on ties cannot be moved. Two disadvantages of the pebble ties are that they should be redone after a long period of time so that the cloth doesn't thin out around the pebble or marble, and that they do pop out, requiring replacement. Craft stores carry the marbles in round or half shape and in regular or large size. I use a large marble in the middle back of the tipi cover to quickly show me the middle back for my first peg, and regular marbles for the rest. I always carry extra marbles because the cord ties sometimes pop open and the marbles get lost. Do not use round, wood-type marbles as they can mildew, swell, and break apart in time. This can cause the canvas to rot as well. Preston Miller, who owns the Four Winds Trading Company in Montana, related to me how some Crows used old beer bottle caps to tie into the cover. These might work in a pinch, but they can cut the canvas over time. A 45-caliber lead muzzle-loading ball can also be used in an emergency. **** ## Tips to Get You Started * Get all materials and tools ready before you begin. * Know the size of the tipi you want to make. Go for what works for you and what you can put up by yourself. * The pattern given in this book is based on a Cheyenne-style tipi. You can change the pattern to fit your needs. * If making this yourself, have some weights around to hold the canvas down or tight when worked. The weights act like extra hands. * Don't let all of your friends get involved right at first. Choose one and work from there.
I call them "tipi slaves," and they just love to help . . . for a while. The usual drinks, food, and the awe of making a real Indian tipi are often the best pay. * Working at table height is better than on the floor. * If you are on the floor, make sure it is free of clutter and wear knee pads. * Anywhere the canvas is spread out or worked must be free of dirt, hair, or pets. All these things can get sewn or "shown" on the surface as streaks or paw prints. * Wear clean socks or have clean feet when walking on the canvas. Keep your hands clean, too. * If it is hot, wear a sweatband to keep moisture out of your eyes and off the cloth. * As you unroll the canvas, look for imperfections. There can be notes written in the seams by the processor, factory slices or cuts, or little items woven into the selvedge edge like small sticks, cigarettes, and whatever else got picked up off the factory floor. You would be surprised what can show up on your "clean canvas." * Once you cut, there is no turning back. * Study the pattern you want to use so that you don't have to take any unnecessary steps. You save material if the pattern is laid out properly, but there is always a scrap pile. * Remember, any mark made in pencil on the canvas is darn near impossible to erase. * If working outside, do not pick a windy or rainy day to measure or lay out the work. * All seam directions should point down or away from water movement—similar to a roof shingle. You do not want water to get up and under the seams and leak in. ## Making Your Tipi Patterns in this chapter give detailed measurements for the cover and liner, proper angles for the fitted liner, approximate tripod tie points, and the number of poles needed. Each pattern is complete, making it easier to buy the canvas for cutting. Buy at least 15 percent more canvas than you need for the cover and liner because you still need material to make the door (6 feet, or 2 meters). 
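The buying rule above is simple arithmetic, and it pays to check it before ordering. Here is a minimal sketch of that calculation; the 15 percent margin and the roughly 2 yards (6 feet) for the door are the only figures taken from the text, while the cover and liner yardages in the example are hypothetical:

```python
def canvas_to_buy(cover_yards, liner_yards, door_yards=2.0, margin=0.15):
    """Yards of canvas to order: cover plus liner, padded by the
    15 percent margin, plus material for the door (about 6 feet)."""
    return (cover_yards + liner_yards) * (1 + margin) + door_yards

# Hypothetical example: a cover taking 65 yards of 36-inch material
# and an assumed 30-yard liner.
print(f"Order about {canvas_to_buy(65, 30):.0f} yards")
```

Raise the margin if you expect cutting mistakes; leftovers become bags or a small awning in any case.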
You also might need some extra fabric in case there are mistakes in cutting and measurement. Leftovers can be made into bags for the tipi or a small awning, depending on how much is left over. Measurements for the cover are given in panel numbers with #1 being the complete smoke flap, tie point or lift area, lacing-pin area, and door. Panel #2 through the largest number become progressively shorter in length. Everyone has his or her own way of making a tipi. My way works for me and may give you some insight into how easy or confusing that process can be. For those individuals who are picture oriented, I have included several drawings and photos to help. Czech tipi encampment. #### Ten-foot tipi dimensions #### Twelve-foot tipi dimensions #### Fourteen-foot tipi dimensions #### Sixteen-foot tipi dimensions #### Seventeen-foot tipi dimensions #### Eighteen-foot tipi dimensions #### Twenty-foot tipi dimensions #### Twenty-two-foot tipi dimensions #### Twenty-six-foot tipi dimensions #### Seventeen-foot tipi cover Seventeen-foot tipi cover, approximately 65 yards/36-inch material. ## Making the Cover ### Preparing the Panels The first step in making the tipi cover is to prepare all the paper patterns and have them ready to trace onto the canvas. Then, you will measure out the different panels that comprise the cover and cut them to length. To do this, spread plastic sheeting on your driveway and roll out your canvas there, or use your hallway or living room. Have two people help, if possible. If no help is available, one person can do it, using a large weight on one end of the canvas to hold it down tight. Measure the length with at least a 40-foot tape measure. Run it out and lay it alongside the canvas. Before cutting, always measure twice. Measure the longest panels first and work down to the shortest panels. After measuring the first panel, cut the canvas off at the bolt end and bring the two raw ends together, folding the entire length in the exact middle. 
Pull the canvas tight with one person or a weight holding the two raw ends together and the other person holding the middle fold. Make a little mark on both selvedge edges on the inside and outside to establish the center of each panel for future reference. Repeat this procedure for all the panels cut. On the raw edges, in the corner, mark the panel number. Unfold and stretch the canvas back out. To get the canvas to fit into the arm of the sewing machine, you need to narrow the canvas. Fold the canvas lengthwise, with one selvedge edge folded two-thirds of the way over toward the other selvedge. Then fold the folded material one more time lengthwise, so that two-thirds of the width of the canvas has been folded twice, and one-third remains one thickness, with no folds over it. Fold this narrow width of canvas accordion style to make working the long lengths of canvas easier. Handling the canvas panel by panel, with careful marking and neat folding, is the key to easy sewing and avoiding confusion. Above and below: To find the middle of the panel, bring the raw edges together, pull tight, and mark the middle at the selvedge edges. Open back up to fold sections into neat piles. If marking more than one tipi at a time, mark each panel with the panel number and the size of the tipi. For example, panel #2 for an 18-foot tipi would be "2/18." Once I was making five tipis at a time, and I cut all the panels at the same time with insufficient marking. It was very embarrassing to set up the poles for an 18-foot tipi and discover the cover was for a 14-footer. It certainly looked strange all squished up there on those big poles. Once you have all your panels cut, marked, and folded neatly, put them away, except for panel #1. A. Fold the panel strip for a tight crease two-thirds of the way to selvedge. B. Fold down, still keeping a tight hold on the canvas. C. Flip again for a double fold and then accordion fold toward one end, forming a pile. 
### Panel #1 It is here that the real work of building the tipi begins, as most of the work in making the tipi is in panel #1. Panel #1 consists of the smoke flaps with gores, the lift area or tie flap, the lacing pin/door strip, and the door. **Smoke flaps** consist of the pockets in which the poles will set to hold the flaps erect, reinforcement pieces, the gores, extensions to the bottom of the smoke flaps, and the bias seam tape. Smoke-flap pockets are made from the four pieces of canvas as shown in the drawing. For each pocket, sew two pieces together across the longest width and then turn inside out and top stitch on the folded seam. Fold lengthwise again, making a cup or pocket, and sew from the smaller end to the larger. Sew several lines of stitching to reinforce the seam. Next, turn it inside out to give a smooth exterior appearance. Inside this pocket is where the tip of the smoke-flap pole will rest, so it must be strong enough for the pole not to punch through. Set aside the smoke-flap pockets; they will be sewn to the smoke flaps later. After making the smoke-flap pockets, sew the reinforcement on the inside corner of the main smoke-flap panel. This piece of cloth can be cut from scrap material. Try to cut the longest edge on the selvedge so you save a step by not having to turn under a raw edge. Cut the tip off so it can be folded later without bunching up too much material. A. Smoke-flap pockets and reinforcement area. B. Gore, tie point, and lift reinforcement. C. Additional flaps over the door area to prevent rain from going between door and cover. Door fits under the flaps. **The tie flap for the lift pole** is the piece that attaches the whole cover to the tipi. While the tipi is being lifted, its entire weight will be on this tie flap, so it needs to have sufficient size and strength to stand up to a lot of weight and wear.
This area will be reinforced with rope, heavy 1-inch nylon tape, or pieces of canvas that are cut on the bias, folded lengthwise, and sewn several times to form a tape. Whichever you use, these pieces should be cut about 8 inches long. This is the tape that will have a rope or cord attached to it and tied to the lifting pole. Cut two pieces of canvas in a trapezoidal pattern (A and B), one a bit larger than the other. Sew up one side. Before sewing across the top, fold a piece of the strong tape, making a 1- to 1-1/2-inch loop through which the tie rope will later be inserted. Place this to the inside of the trapezoid, the closed end of the loop facing toward the bottom end of the trapezoid. Stitch it down as you sew across the top. Now sew down the other side of the trapezoid, leaving the bottom end open. Reach up inside the trapezoid and pull the tie tape down hard, turning the trapezoid inside out. If there are raw edges at the bottom, turn the bottom raw hem toward the inside of the trapezoid that will face the tipi cover. Smooth out all the edges and top stitch them. Then sew back and forth across the trapezoid in a zigzag pattern to further strengthen the area. #### Tie Flap for Lift Pole Smoke-Flap Pockets— Cut 4 pieces, 2 for each smoke flap. **The gores** should be cut so that one long side is selvedge. Each gore is sewn on the inside of the smoke flap, putting the raw edge of the gore to the selvedge edge of the flap. Move it in about 1/2 inch from the edge and sew a single line from the top of the flap to the bottom tip of the gore. Now fold the gore over and sew along the selvedge edge of the smoke flap from the bottom tip of the gore to the top of the gore. This is called a single-turn hem and it hides the raw edge of the gore. The selvedge edge of the gore is now facing out to form a new edge for the smoke flap. It will be sewn to panel #2 when panel #1 is sewn to panel #2. 
The next step is to cut and sew the two extensions to the bottom of the smoke flaps. This extension is used mainly on the Cheyenne- and Sioux-style lodges and among other tribes that use the same pattern. It can be left off entirely if you wish. If you are changing the width of the smoke flaps from the size shown here in order to reflect a certain tribal tradition, remember to change the measurements of the extensions to fit your altered width. In cutting the flap extensions, you can eliminate the need to hem the bottom if you place the extension pattern on the selvedge edge. If you don't use the selvedge edge, hem the bottom edge of the extension with a double-turn hem. The edge of the extension, which will be sewn to the flap, will be left raw at this point, but will shortly be sewn to the bottom of the smoke flap. For the moment, set the extensions aside and proceed with the tie flap. #### Smoke-Flap Construction **The lacing pin/door strip** piece runs from the bottom of the smoke flap to the bottom of the tipi on the finished tipi. There are two such strips and the lacing pins that hold the tipi together down the front are inserted into holes made through these strips. To make a lacing pin/door strip, turn the raw edge under 1/2 inch and iron the crease all the way down the strip. Turn it under again for a 4-inch hem and iron it down. Sew this 4-inch hem down near the edge formed by the 1/2-inch underturn. Repeat the process for the other strip. Buttonholes are about 7 inches apart vertically; on the left panel they are 2 inches apart horizontally, and on the right panel 1-1/2 inches apart horizontally. After measuring and marking your buttonholes carefully (see drawing B on page 60 and drawing E above), cut each hole with very sharp scissors or a hole-punch cutter to about 5/16 inch. Then take some heavy waxed linen or cotton thread, and using a buttonhole stitch, hand sew each hole. Lacing-pin hole constructions. A. Size of cut hole 5/16 inch. B.
Button hole stitch. C. Machine button hole. D. Machine-sewn outline and cut hole. E. Left over right. Some people use a sewing machine to make the buttonhole and either single or double stitch the hole. A heavy-duty machine can do this. Others insert an extra piece of canvas inside the 4-inch hem and then cut the hole and leave it raw, unstitched, because they feel the thickness of four pieces of canvas won't rip. My preference is to hand sew the holes for a finished look and to make a kind of seal around the lacing pin. This gives an authentic appearance to the lodge as compared to machine-stitched holes. There are exceptions to multiple lacing pins. An old photograph, taken around 1890 by David F. Barry, shows Sitting Bull's tipi (A) with actual buttonholes and buttons holding the front together, rather than lacing pins. The buttons appear to be about 1 inch in diameter and spaced about 8 inches from each other down the front to the top of the door opening. Another picture shows a Cree tipi (B) with two lacing-pin holes in the front and one in the back panel. The last picture is a Yakima tipi (C) that shows one big lacing-pin hole in the front and two smaller ones in the under strip or left side panel. Lacing-pin exceptions. A. Sitting Bull's tipi, Sioux 1890. B. Cree. C. Yakima. Since the lacing pins are made of wood, they will swell when they get wet. You can avoid excessive swelling by putting a wax or varnish finish on them. With the lacing pin/door strip essentially complete, it is time to assemble the different parts of panel #1. Begin by matching up the smoke-flap extensions and the lacing pin/door strip to the base of the smoke flaps. Line these two pieces up with the bottom inside of the smoke flap. Aligning these pieces correctly will give a streamlined shingle effect on the outside of the cover. Sew the lacing pin/door strip section to the smoke flap, stopping 1-1/2 inches from the outer fold of the lacing-pin hem.
This is slightly overlapped by the extension, the hem of which does not quite meet the other seam. Fold down the lacing pin/door strip and the extension toward the bottom of the lodge cover. Then sew a rolled-hem (double-hem) facing, turning under the raw edges left from joining the sections. This hem will face downward in a shingle effect. **The door opening** is an integral part of the lacing pin/door strip, and it is necessary to consider the type of door you want before finishing panel #1. There is a wide range of choices in door openings for your tipi. The old buffalo-hide tipis usually had no formal opening. The bottom corners were just folded back, leaving a triangular door, or the bottom one or two lacing pins were left open and people squeezed in and out until the hide stretched and formed a sort of saggy door opening. When canvas tipis were introduced, these methods continued to be used, but with some additional, more formal, door treatments. Quanah Parker, the famous Comanche leader, had a tipi with a triangular door, pointed at the top and squared off at the bottom just above the lacing pins. Others began cutting oval doors, some low to the ground and others high. The latter are associated particularly with the northern tribes, probably because of the snow buildup in front of the door. One thing to remember, however, is that the higher your door threshold, the higher up you will have to step to get in and out of your lodge. This can be a problem for visitors and children and even yourself if you don't lift your feet high enough to clear the threshold. It has been the source of a lot of broken bones. It is a better idea to just pull out the pins on the bottom and leave it unpinned until you need it pinned at night to keep the weather out or when you go away for a while. The threshold below the door opening takes a lot of stress and abuse, so it needs to be reinforced. The door opening can be made as tall as you need for your purposes. 
But remember, when severe weather sets in, a smaller door opening keeps more weather out. **Door covers** in early times were just pieces of leather, old furs, or old blankets suspended over the door to keep out the wind, rain, sleet, and snow. Later, the rawhide or parfleche door covers, with their colorful geometric designs and the U-shaped, Cheyenne door covers, often with lanes of beadwork, came into use, as did the circular door covers. Some were simply decorated rectangles of fabric. Today there are some highly tailored doors and door covers, which keep out the worst of rains. Many have special flaps on the cover to prevent water from running down between the door and the door cover. Nowadays almost all door covers are made out of the same material as the cover. **Three ways to make a door opening** for your tipi: For an authentic-looking door that is both simple and practical, just leave the lacing pin/door strip straight, all the way to the end. When the tipi is set up, fold the bottom corners back to give a triangular opening all the way to the ground, with no canvas door threshold. Four types of door openings. A. Reinforced, oval, high-off-ground door attachments. B. Oval-cut door. Peg in front. C. Round-cut door high off ground. D. Slit door, not tailored, just straight. Lacing pins optional at bottom. For another authentic look, do not make a formal door opening; just leave the lacing pin/door strip straight all the way to the end. When the tipi is set up, the lacing pins are put in place both above and below the entry area. You enter by stretching your arms forward through the slit and pushing the fabric aside. In time the fabric will stretch to some degree and form a somewhat saggy door opening that looks very much like that of old-time tipis. For a more formal, cut-out door, make a template from paper or cardboard—or use a nice rounded platter or plate—to establish the curves at the top and the bottom of the door opening. 
Decide how far from the bottom of the finished tipi you want the door hole to begin, and how high you want the top of the door to be. Then, draw in these curves at the top and bottom. Since the second panel of the cover has not yet been sewn to the first panel, only the curve for the top and bottom of the door can be drawn at this point. The side of the door will be rather straight, following the line where the first panel is sewn to the second panel. Set the completed first panel aside while you sew the other panels together. Later you will sew panel #1 to the rest of the tipi. Old and new door styles. ### Main Body of the Cover You are now ready to put the **strips for the main part of the cover** together. Start with the bottom (or smallest) panel and work your way up to the #1 panel. Now the little mid-marks of each panel become important. It works best if you have a large area in your backyard, house, or driveway to lay out a full cover. Then you can start from one end and work to the other. It is the way I made over three hundred tipis. Lay the shortest panel out on the surface where you are going to work and find the mid-mark. Then line up the next shortest panel parallel to the shortest panel with the two mid-marks matching up. It is not necessary to unfold them completely as you will start from the midpoint and work to each end. The larger panel will be longer on each end than the shorter panel, and it will start forming a shingle effect. Accordion-folded panels ready to glue and sew. Mid-marks for centering canvas panels. Top always goes over the bottom to give shingle effect. Ironing the two panels together. Get an iron (set it on the highest setting) and a bottle of glue ready. With the two panels right next to each other at the mid-mark, run a thin bead of glue down the shorter panel to the end. Carefully lay the longer strip panel over the shorter strip panel by covering one selvedge edge over the other by about 1/2 inch. This will cover the glue. 
Take the iron and move it slowly along the selvedges until you get to the end of the glued area. Do not leave the iron in one spot too long as it will leave scorch marks. This is not a permanent way to hold a tipi together, but it's better than clothespins or needles. If it comes apart, reapply the glue and iron the material again until it sticks. The heat sets the glue, which keeps the two pieces of canvas together until you sew them. If two people are working together, one person can do the overlapping while the other irons. Repeat this process on the other side of the mid-mark for the same two panels. After you finish gluing and ironing, carefully accordion fold the two panels. Take them to the sewing machine and sew. If you have a one-needle machine, make two passes. This type of seam, called a simple overlap, is just as strong as the flat-felled seam. When done properly, it will last the life of the cover. The old-style tipis were made with an overlapping seam. Repeat this process for the remainder of the panels, continuing to sew the next larger panel to the rest of the cover. Cross Section of Lockstitch Diagrams of sewing stitches, seams, and hand-sewing seams. Lay out the smoke flaps and triangle for the tie point on panel #1. They should match from the mid-mark of #1 to the bottom where the bottom of the lacing-pin section meets. Glue the lifting tie to the cover at the midpoint. I usually put this under the panel instead of on top to act as a buffer where it lies on the lift pole. Starting 3-1/2 inches on either side of the center, overlap the gores, smoke flap, and lacing-pin section on top of panel #1. Carefully fold when finished and start on the other side of the tie flap and do the same thing. Fold toward the rest of the pile and sew straight down one end to the other. Attaching smoke flaps, gore, and lifting tab on section 1 panel.
To cut the door, sew the lifting triangle in place and attach the reinforcing bias binding to flaps and gore, as shown in the diagrams. If you have an industrial sewing machine, run the reinforcements more than once. The threads will strengthen the area. When you unfold this completed panel, stretch it out as tight as you can to get as much of the curves out as possible. Then refold the smoke-flap area and #1 panel back on itself to make a nice tight bundle. Next find the mid-mark on the bottom of panel #1 so it can be matched to the mid-mark of panel #2. Take the completed panels out and unfold and then refold again so that the selvedge edge of the largest strip is under the next largest panel at the mid-marks. Always work from the middle out on both sides and accordion fold when finished. After all the main panels are sewn together, it is time to attach panel #1 to the main body. You should have two rather large, neatly folded piles (you hope). The #1 strip will go through the arm of the sewing machine. Carefully glue and join the sections by using your fingers and hands, stretching the canvas to fight the curves. Fold to create a small compact pile in that area. Match the mid-marks for #1 to #2 and start the process of joining the two major areas. This is not going to be easy, as there is a natural curve to some panels. When all is done, sew. Attachment of reinforcing canvas tape to smoke flaps and lift area. Cut door opening. Now that the #1 panel is sewn to the main body of the cover, it can be taken out for the final trim and to attach the bottom cords. Spread the cover in a clean grass area or put it on a big plastic tarp in a parking lot. An abandoned lot is always good; no cars will roll over the cover. Attach a cord to the lift tie and, using a big nail or a tipi peg, peg the cover to the ground. Stretch the cover down the middle back and pin the small panel at midpoint.
Next pull the first seam from either side to lay as straight as possible and stick nails in the lacing-pin holes at the bottom of the door area. Smooth out the entire cover until it is nice and stretched. Anchor the other corners of the panels with big nails or heavy rocks or bricks to keep the cover taut and straight. To cut the outside curve at the bottom of the cover, first mark the radius point. Check several times before cutting. To find this point, place a big nail or peg between the smoke-flap pockets and measure out to the edge of the cover. The closer the peg is to the tie point, the more cone-shaped the cover will be. The farther out to the smoke-flap pockets the radius is moved, the shorter the back or tilt of your tipi. All measurements can be adjusted to the size and shape needed. Use a non-stretchable tape measure, swing it from this pivot point, and mark the circumference with a pencil or tailor's chalk. Trim along the mark, staying to the outside. Once cut, the edge can be left unfinished for a Southern-style look, which has very little raveling, or the edge can be turned and sewn for the more Northern-style of tipi bottom. Diagrams of different styles of tailored bottoms and ties have been included in this chapter. The choice is up to you; this will be determined by your use and the area of country in which you live. For the unfinished bottom, cut as many peg loops as needed. These are spaced anywhere from 18 to 25 inches around the bottom. Since the pegs hold down the cover, I prefer more loops and I just do not use all of them unless needed. Peg loops are cotton cord or 1/8-inch nylon about 9 to 25 inches long, depending on whether you want the cover touching the ground or not. If you use the nylon cord, which does not freeze in cold weather, melt the ends before attaching them to prevent unraveling and slippage. Put a square knot in the ends of the cords and then attach a marble or some round, smooth object using a clove hitch.
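Because the bottom edge is cut on a radius, you can estimate ahead of time how many peg loops you will need. A minimal sketch, assuming the cut edge is roughly a semicircle; the 18-foot radius and the 18-to-25-inch spacing are simply the figures from this chapter used as an example:

```python
import math

def peg_loop_count(radius_ft, spacing_in):
    """Estimate how many peg loops fit around the cover's bottom edge.

    Assumes the bottom edge is roughly a semicircle of the given radius,
    in the spirit of the radius-point method described above.
    """
    arc_in = math.pi * radius_ft * 12  # half-circle arc length, in inches
    return math.ceil(arc_in / spacing_in)

# For an 18-foot radius, at the two spacings mentioned in the text:
print(peg_loop_count(18, 25))  # wider spacing -> fewer loops
print(peg_loop_count(18, 18))  # closer spacing -> more loops
```

Since extra loops do no harm (you simply leave the unneeded ones unpegged), rounding up and cutting a few spares is in the spirit of the text.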
With smoke flap and tie point cords added, you can fold the cover and store it until use. This is only one way of making a tipi. There are many other ways depending on styles, tribes, and the number of family members in a tribe. Another popular way is to cut the smoke flaps and lacing-pin area out of the same strip and not as separate pieces. The gore would be the only other separate piece added. It would be reinforced with a heavy-duty twill tape, as would the tie point, which would be heavily sewn on the cover. Other stress areas would be oversewn with more canvas and tapes to strengthen them. Bottom hem styles for covers.

* * *

[5] Lee Bryant, "Ask the Expert," _Doll Craft Magazine,_ July 2002, 86–87.

# The Liner or Lining

A bit of confusion has arisen over the official name for the liner or lining of the tipi. Many Sioux Native Americans residing on the reservation and Sioux dictionary translations indicate the name used for the liner is the _ozan._ However, the Laubins used the word _ozan_ to refer to the rain cover (Laubins 1957, 53). In this book, the ozan is called the liner. In the next chapter, the covering above the head that protects from rain is referred to as the rain cover. The liner has also been known as the _o'zan,_ and _óza_ with a nasal "n," _ozanpi,_ _woza_ (n) (Warcloud 1989, 43), _oh-WON-eh-OH-zon_ or _Wo-zon_.[6]

> A dew-curtain, called an óza (pronounced with a nasal "n"), was hung all around and was long enough to be tucked under the carpet. This was made in matched pieces, with strings attached for tying them together and to the tipi poles. Many an oza, (ozan), was elaborately decorated at about two to four feet intervals with vertical bands of fancy work in patterns of bright colors—or so painted. This dew-curtain, which was tied to the poles at a height of perhaps four feet, and the sloping tipi wall, together formed a little circular alley-way, like a lean-to in shape.
And there all surplus foods and robes were stored, as well as extra personal belongings of the family, all packed in proper containers. This storage area was insulation as well, and the inside of the tipi was always noticeably warmer because of it. The dew-curtain was usually of either doe or calf skin. This summer curtain was purely for decoration and was hung only across the back of the ticatku (place of honor). If anything, this was more elaborate than the winter-curtain whose primary purpose was to protect against extreme cold.[7] Old-style buffalo-hide liner. The liner's purpose is twofold. It helps insulate the tipi's interior, and it also serves to block the draft. Historically the liner was made of tanned hides. Two hides were sewn together head to head, tail to tail, or side to side. Later cloth was hung from the tipi poles by a rope of rawhide or braided buffalo hair. Often liners consisted of several sections that overlapped when in use. These liner pieces could be decorated with paintings or horizontal lanes of quillwork or beadwork. Some have dangles. Tipi liners are typically about 3 to 5 feet high. There have been references to a liner going around the whole interior of the lodge. A single liner made of hide or cloth would be too heavy and bulky to encircle the lodge, so it probably means there were two or more rectangular liners, which appear as one complete liner. The later cloth liners carried over this rectangular shape and style (Cheyenne-Arapaho beaded liners and most painted muslin liners). When the liner is adorned in historic style, it can be a striking addition to the overall historic appearance of the lodge's interior. Cheyenne tipi cloth panel or liner. Original cloth liners were rectangular shaped. Since these panels were not fitted, they basically hung straight down from the tie point. Then parfleches, beds, or whatever else was on hand were piled on the liner to push it back as far as it would go. 
Dried food, extra storage material, saddles, or whatever was not needed at the moment could be stored between the cover and the liner. This created a lodge with a very small living space, but with lots of storage area behind the liner. Liners could also be made of blankets, Russian shawls, calico-printed cotton, and any other cloth that could be found. The old, decorated shawls seemed to be popular in the Northern Plains for Crow tipis. Striped awning material and printed cloth were also popular. The lodges of the nineteenth century did not use a liner in the warmer times of the year. Photos of the Plains tipis rarely show liners. However, liners are pictured on the Northern Plains tipis, used where temperatures can get very cold even in summer. In some photos the liners are not just in front of the poles, but in many cases between the cover and the poles. This seems to help with some insulation because many tipis are made from lighter materials, such as muslin. Also the cover and these interior liners go all the way to the ground, blocking out all drafts. An interesting point to remember is that Native Americans could adapt more easily to warm and cold temperatures than we can today, as they spent all their lives outdoors. Southern Cheyenne summer tipi, 1913. Interior of Crow lodge. There are no pictures of liners pegged to the ground by the poles. All the liners that I've seen had no ties or attachments at the bottom. The only ties were at the top of the lining, which could be tied to a rope or pole on the inside of the lodge. Many tipis had liners that overlapped each other, or there was just one in the back going over a bed area. Today's linings or liners can come in all types, sizes, and materials. These materials can be canvas, natural muslin, calico-patterned cloth, brain-tan, or commercial hides. As tipis got larger and more elaborate, liners became more tailored to fit the angles and number of poles.
Today we have two types of linings: the pole liner and the rope liner. The pole liner is attached directly to the pole at the top and bottom. The rope liner is attached to a rope, which is wrapped around the poles about 5 to 6 feet above the floor space. The bottom of the same liner is attached to pegs, or to another rope wrapped around the butts of the tipi poles, or tied to the bottoms of the poles. There are several ways and combinations for putting up a lining. However you choose to put up your lining, it adds to the beauty and mystery of the lodge. Privacy was not a main concern as women and men dressed in front of each other until the coming of the missionaries. So "privacy" curtains are a new concept. Ernest Thompson Seton describes a liner in his articles on tipis for _Scouting._ Pictured are pieces of rectangular cloth that hang from the tipi poles. The articles do not mention how it is attached at the bottom. In later books, Ben Hunt draws pictures of liners without an underturn but shows a cut curve with small ties attached at the bottom for pegs. In _The Indian Tipi_ by the Laubins, we first see the tailored cloth liner with a fitted bottom underturn. The trapezoidal sections for the liner are drawn out with good instructions and measurements for an 18-foot lodge. Their book was the basis for all the later books that use this style of tipi. Should a liner have a "sod cloth" or should it underturn at the bottom? A sod cloth does help seal out the cold winds when other articles are put on top. Being cloth, however, it will mildew or rot over time. If the underturn is made out of synthetic cloth, it will be rot resistant. My opinion is that if you have a liner, you should fold it under. But if the climate is good and you want to travel light, you should leave the liner at home. To tie my liner down, I have long loops or ties at the ends of the seams. These line up with my poles and are simply hooked under or tied to the bottom butt of the poles.
This saves me from having to use more pegs. Because of the length of the loops, the liner doesn't stick out beyond the cover and catch water. So, do you need a liner in your lodge? No! Today we pitch the tipi cover so high off the ground that the liner becomes just an unnecessary extra cover to keep out the weather. Does a liner keep you cooler in summer by forcing the air to flow upward, creating a better draft? Not from my experience of camping in the 100-degree temperatures of the South and West. The tipi is actually cooler without the liner during the summer. In cooler weather, with or without a liner inside, a fire burns easily. The smoke flaps and door opening control the airflow more than any liner. Today the liner has become simply a big privacy curtain in the daylight hours and a "shadow catcher" at night.

## Materials for the Fitted Liner

All the equipment, materials, and cloth listed in the previous chapter, "Materials and Steps for Making the Tipi Cover," are needed here. If you have already made the panels, which go together and form the top and bottom of the lining, then put the glue and iron away. If you are going for the 9-foot-tall liner, then another strip can be added to the 6-foot liner. This does add more weight, and more angles at the top to follow the natural curve of the tipi. In a 24-foot or larger tipi, a very tall liner can be used for winter living. These liners also require extra vertical ties to a rope or pole to support the sagging middle sections. This is also the time to decide on a one-, two-, or three-piece liner. For a small tipi, a one-piece is OK. But, because of weight and maneuverability, when you get into the big tipis, like the 18-footers, three pieces may be a better choice. The fitted liner dictates the shape of the cover and how it looks from the outside. The lining can be made from either a 10-ounce material or a calico/muslin cloth, which is lighter still. It does not have to be from the heavier canvas.
It is all up to you. If you do go for the lightweight material, a regular sewing machine will suffice to sew the liner. The underturn, or sod cloth, can be a longer extension already built into the panels or it can be added cloth sewn to the bottom. The pattern in this chapter is a tailored style and requires a protractor (which can be made of cardboard or wood), a ruler, and a long straight edge of 6 to 7 feet. Ties on the liner are placed at the top and bottom, which attach to the tipi poles. This eliminates the need for an extra rope at the bottom or pegs to anchor the lining. Some people still use a rope wrapped around the poles to attach the top of the lining instead of just the pole. The most time-consuming aspect of pitching a tipi is hanging the lining, especially if you're using the fitted styles. Compared to a tailored liner, hanging the liner in the old way is fast but not neat, nor does it give a wrinkle-free look. To get the geometric arrangement that fits the poles, you will need to use a large protractor. Sometimes you can find a large one in a teaching supply store. Or you can make one by using the biggest one you can find and enlarging it onto a piece of cardboard or wood. The size I made, and worked with for twenty-five years, was 2 feet across and 2 feet high. Mark the center, and then mark from 60 degrees to 90 degrees on both sides. The long 7-foot stick or straight edge is for continuing the lines at the top and bottom. Kraft, butcher, or any paper large enough to draw a full-scale pattern works when making designs. A heavy board or Masonite paneling works great if you want a more durable pattern. The patterns only have to be drawn and cut once but can be used hundreds of times. Remember, there are half as many panel patterns as there are poles, plus the door area. That means if you have a twelve-pole lodge, you have to have at least five panel patterns (each of which will make two exact panels) and one for the door, which makes six. The door area is drawn only one time.
Total cloth panels will be twelve plus the door for thirteen liner pieces to make the one-piece liner. The diagrams below show how to find angles for the fitted liner. If you want to make a new size or add more poles, they show the measurements for angles. You do not have to be a real math wizard to figure this out or know how to use a CAD program for mechanical engineers. The fitted angles also help in the regular rectangular-panel liner. All panel patterns should be cut true to their angles or the liner will be thrown off. There will be some imperfections in the cloth, curves, and in your drawing ability. Be as exact as possible with very little deviation. If your measurements deviate from the given patterns by as much as half a degree or an inch, go back and recheck. A tiny bit off will not hurt too much. One- and three-piece liners. Pattern for 17-foot tipi liner. Two strips of the cloth should be sewn together to make a long panel of about 50 to 75 feet. This all depends on how big of a tipi you have chosen. Most liners measure a little less in canvas yardage than the cover. If more sections are needed, you can add them later. The pattern can be adjusted to a shorter vertical version by measuring down 12 inches from the top. I bend the pattern at this point, and let this be the new top line all the way across. This will leave you an extra 12 inches or so at the bottom and will form the underturn, or sod cloth, on its own.

## Construction

Cut the pattern drawings with number one as the door covering. Number one and number two panels have the same angles. Number three and number four panels have the same angles, and so on until you are finished. The acute angle (an angle of less than 90 degrees) faces the door, and the obtuse angle (an angle of more than 90 degrees) is at the lift pole or lift area.
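The protractor work can also be cross-checked numerically. The sketch below treats the tipi as an ideal cone, which is only an approximation of a real lodge, and the dimensions used (a 20-foot floor, an 18-foot cone height, a 6-foot liner, twelve poles) are illustrative assumptions, not measurements from the patterns in this book:

```python
import math

def liner_seam_angle(floor_radius_ft, cone_height_ft, liner_height_ft, poles):
    """Estimate how far each liner-panel side leans from vertical, in degrees.

    Idealized-cone approximation: adjacent poles sit closer together at
    liner-top height than at the ground, so each trapezoidal panel tapers.
    """
    # straight-line gap between adjacent poles at the ground
    gap_bottom = 2 * floor_radius_ft * math.sin(math.pi / poles)
    # cone radius at the top of the liner, and the pole gap there
    r_top = floor_radius_ft * (1 - liner_height_ft / cone_height_ft)
    gap_top = 2 * r_top * math.sin(math.pi / poles)
    # each side leans in by half the difference, spread over the liner height
    lean = (gap_bottom - gap_top) / 2 / liner_height_ft
    return math.degrees(math.atan(lean))

# Twelve-pole, 20-foot lodge with a 6-foot liner:
angle = liner_seam_angle(10, 18, 6, 12)
print(round(angle, 1))
```

A panel side leaning roughly 8 degrees off vertical corresponds to roughly 82 degrees on the protractor, which agrees with the 60-to-90-degree span marked on the homemade protractor described earlier.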
Make sure panels are stacked in the way they will be sewn and reverse each panel in the opposite direction of the other to have the correct angles to the door or lift pole. See drawing B on page 83 and the drawing on page 85. Sewing these sections together will create the shingle effect, with the seam facing downward on every section. If not, water can collect in the seam and drip in. Always double check to make sure angles are facing in the right directions before sewing. Ripping out is no fun and takes time. Try to work with only one section of the panel in the arm of the sewing machine. You do not want piles of canvas in the way. Finding the angles of a fitted liner with any number of poles. Twenty-foot tipi showing working drawings for position of poles and 6-foot liner. Drawing the liner panel. Protractor and long straight edge are used to draw the first pattern for a liner panel. Draw the back edge, bottom, and front side of the pattern following a straight line. Bottom edge is drawn in if sod cloth is added to pattern. Makes a shorter 5-foot liner plus underturn. Or slide pattern down to opposite seam for a full 6-foot-size liner panel and add 10 to 12 inches for a sod cloth. After the pieces are sewn with either a flat-felled, overlap, or just two-edges-put-together seam, then add on the 8- to 12-inch sod cloth or underturn. This can also be done with any of the stitches pictured. For neatness and smooth folding, the flat-felled seam is the best. But if your machine cannot handle the thickness, do whatever works for you. Next put on the leather ties (or cotton tape) at the top and 1/4-inch nylon cords at the bottom of each panel seam. The ties at the top are 22-inch-long strips or what will reach around the thickness of your poles. On the top, use the awl to punch the holes for the leather and hand or machine sew the cotton ties. The bottom of the joined seam has a marble inserted with an 18-inch nylon cord to tie it to the bottom of the tipi pole.
These are placed on the opposite seam from the top tie. Where the two major panels come together, sew in a 2-inch reinforcing piece of cloth and install a size 4 or 5 grommet. One panel fits into the other at the 10-inch overlap. This keeps the draft out on the side and from going up the panel. It is also possible to hand sew this hole, and then reinforce it several times with waxed linen thread. However, under stress, this has a tendency to stretch out and can tear. This is the only area where I use a metal grommet because of the strength. After the tipi is pitched, the grommet does not show. Three ways to sew lining panels together. Stitches per inch on overlap seam. Awl for piercing canvas. Leather with cut tip used for liner ties.

* * *

[6] From e-mail correspondence with Benson Lanford based on conversations with the late Milford Chandler and Buechel. Full quotes can be found in the Appendix.

[7] E-mail correspondence with Peter Gibbs quoting Ella Deloria's papers on the usage of _oza_. Full quotes can be found in the Appendix.

# Rain Covers and Rain Caps

When standing inside a tipi and looking up at the gaping smoke hole, the newcomer to tipis invariably asks, "How do you keep the rain out?" This is a good question indeed and the subject of this chapter. The simple, brief answer is you use rain caps and rain covers. But it is actually more complicated than that. In all the known daguerreotypes (tintypes), glass negatives, photographs, sketchbooks, journals, or writings, there is no illustration or mention of a cover on top of a tipi to protect it from rain. The first time that it appears is when Ernest Thompson Seton illustrated it in his book _Two Little Savages,_ first published in 1901. Seton may have adopted the idea from the Missouri River Indians who covered the smoke holes of their earth lodges with their buffalo-hide bull boats.
It is unlikely that Seton ever observed this practice applied to tipis either among Indians of his day or in historical documents. Because of the heavy weight of the boat, the uneven length, and the thin tips of the poles, it is unrealistic to assume that a tipi could support a boat. Moreover, throughout much of the Plains region such boats were rarely used. 1938 Czechoslovakia Woodcraft encampment showing rain caps. The next time a drawing of a tipi with a rain cover appears in tipi literature is in Ben Hunt's book _Indian and Camp Handcraft,_ which was largely a compilation of articles that he had published in the magazine _Industrial Arts and Vocational Education_ in 1938. His tipi illustrations, apparently based on Seton's drawings, show a bowl-shaped cover and rain pouring down. In later drawings, Hunt depicts ropes holding the storm cap in place. The use of rain caps today is more prevalent in Europe, particularly in England, France, and Germany, than in America. French tipi made by Guy Vaudois, showing a rain cap, with tie downs at three corners.

## Interior Rain Covers

Could some kind of rain covering have been used inside the early lodges? One possibility is the Wild West shows and other entertainments at the turn of the century where tipis were on display. Some type of interior cover could have been used inside to protect the occupants from the many storms that happen in the East. However, from all the photographs and descriptions of the time, no picture or information can be found of either an outside rain cover or inside rain cover. If one was used, it was probably a piece of cloth that was tied up inside on the poles to keep the water from the occupants and then taken down when not needed. Spare hides, blankets, robes, and cloth were the only practical, known materials for this time period. Makeshift awning material stretched for rain cover over liner for Linda Holley's 12-foot tipi. Drawstring rain cover.
In America the preference seems to be for the interior rain canopy, which the Laubins called the "ozan." In Lakota, the word _ozan_ really refers to the lining, not the overhead rain canopy, so the words _rain cover_ or _rain canopy_ will be employed here. Rain canopies appear to be a rare exception rather than the norm until the Laubins published _The Indian Tipi_ in 1957. They brought us the fitted or tailored interior rain cover. Historically, the portable tipi was designed for the Great Plains, where there is little rain but a constant wind. Typically the storms and rains that hit the Great Plains move through fast, driven by the wind. Any objects that got wet would dry out rather quickly due to the low level of humidity. The success of the Laubins in introducing the tipi to people living outside the arid Plains region, in wet and humid climates such as those found in Oregon, Florida, and the southern Appalachians, has resulted in far greater use of the interior rain cover today than in earlier times. Refer back to bottom drawing on page 25. Double rain covers. Rain cap. Reflecting rope in the center going to a pan. New Zealand Jaia Tipis' way of keeping water off your head. Today the interior rain canopy is a large piece of cloth, preferably a water-repellent, fire-retardant, lightweight canvas that fits overhead, from one side of the tipi to the other. It angles down to the sides and back, just above the lining, so that water will run off the canopy behind the liner. Properly set up, the rain cover will prevent water from dripping onto the beds. After a rain is over, if there are any items that got wet, perhaps from water coming under the tipi, they can be placed on top of the rain cover to dry out, assuming they are not too heavy. The rain cover is also effective in retaining and reflecting heat. 
Tipi makers can sell you a fitted rain cover or you can easily make your own with a large piece of painter's cloth, treated material, or any water-resistant cloth that will fit the interior of your lodge. A fire-retardant material is a good idea. It is easy to take a piece of cloth, marbles, and string or leather ties and attach the rain cover to each tipi pole above your head level using the same marble-tie system employed to make stake loops. Stretch it across from side to side and then angle it down and back, attaching it to each pole as it angles down to the back sleeping area. If there is extra material overlapping the liner, cut it so that it fits behind the lining, forming a flap. All this can be done for just a few dollars. In my own tipi, I use two of these rain covers to keep out the heaviest of storms and to keep in the warmth of a small fire. One goes from the back to the middle and the other goes from the front to the middle. They meet above the fire area, with one overlapping the other by a space of about 12 or more inches so the smoke can escape. Each cover slants back to the lining so that any moisture can drain out behind the liner.

## Alternative Methods of Interior and Exterior Rain Covers

Another possibility for a rain cover is to use a fitted drawstring cover that attaches to the lining and gathers in the middle with a hole for smoke to pass through. Here is a tip to consider from Jaia Tipis in New Zealand:

> I know [one tipi owner] who will pass a 1/8-inch cord over his pole cluster and hang an umbrella upside down in the tipi. He pulls the upside down umbrella all the way up to where it will touch as many poles as possible and ties the cord off. He has also poked a small hole near the tip of the umbrella and runs a cotton string out the hole and down to a stake on the ground so [that] as the umbrella fills up, [the water] goes out the hole near the tip of the umbrella, down the string and ends up in the grass where his stake is.
It works for him. Another elaborate system for deflecting interior water is to put little flags on the inside of the tipi poles about 3 feet from the lift area so that water drips into the rain cover before it gets to the liner. Or simply tie a piece of cloth to the poles with ropes or strings and have it drain into a pan in the middle of the tipi. The most innovative method of this type of rain cover, which uses clear plastic for the light advantage, is from Rainbow Tipis. On the outside, several ideas have been developed lately for what are called rain caps. These are attached to the outside poles, covering all the exterior poles and the smoke-hole area, preventing the rain from getting in. The Europeans seem to prefer this outside rain cap or cover and were probably influenced by the early works of Seton and Ben Hunt. In using these, you may have to cut your poles much shorter in order to get the cap to cover the whole area to be protected. Then the cap should be tied down with at least four pegs in the ground to prevent it from flying off and damaging the top poles or cover. Some caps are attached to the tipi poles themselves and stay up for the season. For the beauty of the lodge and poles, I prefer to use an interior rain cover. In the last fifty years, the interior and exterior rain covers have evolved greatly. Each person seems to be coming up with his own idea of how to keep the rain out. The important thing to remember is that whatever works for you works. Specialized plastic interior rain cover diverts the water back to the outside.

# Poles, Pole Care, and Pole Maintenance

Whether you get your poles from a commercial source or you go out and cut your own, the next step is to get them ready for your tipi. If you have access to trees in your area, check them out. But also check to see what the laws are on cutting them down.
The easiest way is to cut down poles on private property, but you can also check with your local National Forest office about obtaining a cutting permit. Buying poles can cost from $100 to $450 for a set, plus transportation. Some poles can cost from $15 to $25 or so each, depending on whether they are stripped of bark or not. Any tree that is straight, light enough to handle, and does not have a heavy sap can be used for tipi poles. But also make sure trees are hefty enough to support your cover, liner, and interior rain cover. If your poles are too thin, add more poles to your setup until you can get better, thicker ones. Some of the best trees to use are lodgepole pine (Indians preferred these), some cedar, tamarack (also known as larch), southern cypress, Douglas fir, Wisconsin balsam, and longleaf pine. Bamboo has also been used, but I would not recommend it because of the rings that will drip water. The choices are different all over the country or the world. Trees selected should come from a heavily wooded area so that the stands grow as straight as possible. Make sure you can walk or float the tree out. Whatever you choose, be sure to get the following:

* permission to go on property to cut poles
* a friend or two to help
* bug, snake, bear, and alligator repellent
* good handsaws or chainsaws
* a first aid kit and someone skilled in responding to accidents
* a truck with racks or trailer to get the trees home

Always yell "timber" when cutting. Nothing like a tree falling on you from out of the blue, even in a swamp. Freshly cut poles can weigh four or five times their dry weight. Our cypress poles lose 50 to 75 percent of their water weight in twenty-four hours after being stripped. Then they are easier to pick up and work in the other stages when they are light and dry. When getting your own trees, make sure you choose a time of the year when the sap is rising. Here in the South, where we use cypress trees, the preferred time is in the early spring.
If you can start stripping the bark in the first forty-eight hours, in most cases, you can use a knife to peel it from bottom to top like a banana. After several days you will have to use a drawknife to start debarking the tree because it is not as easy once the water or sap has evaporated. Some people put their trees in small ponds, streams, or some area of water containment to soak the bark. If you have ordered your poles from some other part of the country, they will probably arrive in a very dry state bundled together. It is a very good idea to order the poles pre-stripped to reduce the cost of shipping. This will add to the cost of your poles, but in the long run it will save you a great deal of time and trouble. Some dealers do such a good job of stripping the poles that very little work is needed in final preparation. Poles should be kept off the ground because of the risk of rot, mildew, and bugs. When working with my poles I prefer to get them off the ground away from the dirt and to make it easier to pick each pole up. If your poles need to be stripped of their bark, they will have to be set up off the ground so that you can use a drawknife or other way of debarking them. You will need either sawhorses, two large tripods set about 10 or 15 feet apart, or any other platform on which you can set a pole to be stripped down. It is also nice if you have a friend who can help hold the pole and turn it while you do the stripping. Drawknife stripping through bark. Bevel should be facing down. A drawknife with about a 12-inch blade is good enough to strip your poles, but any knife with a sharp blade will do. To get a good, clean cut, make sure that the bevel is facing down. A bevel that is facing up has a tendency to gouge out the wood. You want to glide between the bark and the inner wood. Work with the blade coming toward you with long clean cuts. Short, jagged cuts cause cutting marks in the surface of the wood, which will create more work for you later. 
Today there are other tools that can be used for stripping poles. Electric 3- and 4-inch planers are wonderful for shortcutting the process. These work great on smooth-bark trees, more so than on pine trees and other trees with jagged bark. After stripping the poles with a drawknife, I like to use a 4-inch planer to smooth down and get rid of any jagged edges that may have been left by the blade. I also use a rasping tool, like a Stanley Surform, to smooth down small limbs or indentations that may have been left. The next process involves sanding down the pole. Wait a week or so after stripping the bark until the wood has dried out. Hand sanding is a great way to get around all the curves that you are going to find. Do this in an up-and-down or top-to-bottom motion when the pole is completely dry. You can start with a heavy grit and then work your way to a finer sandpaper. It is not a good idea to use a circular orbiting sander as it leaves swirling marks on your pole, which will allow water to follow the grooves and drip into your lodge. It is possible to use a belt sander. A final step is cutting your poles to the length that you would use in setting up the tipi. The butt of the pole should be pointed for ease in setting up and keeping your tipi in place on the ground. This is accomplished by standing the pole up on a wood block and then taking an ax to chop off sections of the butt at an angle, forming a point. It can be four- to six-sided or whatever works for you. Take your drawknife or electric planer to finish off the cuts for a smooth look. Do not make your angle too long if you are using the butt to tie the liner to as the liner will slip up the pole. A flat butt skips across the ground when setting up the pole. The pointed butt will dig into the ground, making it easier to maneuver the poles into position by walking them up and into the crotch area. The rounded-off tips help prevent the pole from splitting and make the poles look better.
## Treating Poles

Poles will weather and eventually fall apart from dry or wet rot. To protect your investment in time and money, you should follow these steps.

* Before using your poles, paint or spray one of the following solutions on the poles:
  * Thompson's Water Seal or Behr water seal
  * a solution of 1/2 linseed oil to 1/2 turpentine
  * shingle oil made by Chevron, which is cheaper than linseed oil
* Pour deck sealer into a large, tall bucket and soak both ends of the poles in the sealer for a day or two until the ends have entirely soaked in the liquid. This waterproofs the tips. Waterproofing on the bottom of the pole need not go up more than 6 to 8 inches. The top end should be coated, if possible, as much as it is exposed above and beyond the top of the tipi (the UV protector in the sealer helps the upper parts, too). Try to do this every two to three years or when the wood seems to be a bit thirsty. This is also good for wood stakes.
* Do not be concerned when a dry pole soaks up a tremendous quantity of liquid. Let the poles dry well before using them so they don't stain the cover.
* If your poles have already started to weather, treat any mildew with a bleach solution before applying any other treatments; living in the humid parts of the country, this is a must. Scrub the pole with a brush and a solution of 1/4 cup bleach in a 2-gallon bucket of warm water. Let the pole dry completely. This will kill the mildew and stop it from spreading. Then follow up with any of the above sealers.
* Though some people do this, I would not store poles by leaning them in a crotch of a large tree, or by leaving the poles up in the frame and leaning the smoke-flap poles and lifting pole with the bundle. In wet climates the poles will rot at the bottom or be eaten by termites or other bugs. There are "bore bees," which look like bumblebees, that will eat holes in the poles to lay their eggs.
Those holes can run 12 inches into the interior pith of the pole, thus weakening it. If this has already happened to your poles, fill the holes with a polymer plastic resin to harden the tubes.

* Store poles off the ground and covered. They will last longer out of the sun and weather. Ideally, storing poles indoors (horizontally and well supported to prevent warping) would be best when they're not in use, but not all of us have a 23-foot garage or barn wall to put them on. You could build a rack system that puts the poles a few feet off the ground. The racks can look like an H, with one or more tiers to support two or three sets. At least four of these racks, spaced along the length of the poles, will help support the weight. Make sure you cover all the poles to protect them from rain, sun, and insects.

Pole racks store tipi-pole sets. The green waterproof cover protects the poles.

## Fixing Broken Poles

Tripod poles that are broken cannot be restored to their original strength, but they can be used as fill-in poles, not as tripod poles. Gorilla Glue with some duct tape can stick most poles back together. Rawhide, like they used in the old days, that has been wetted, sewn on tight, and then left to dry will form a very strong sleeve. This can be used along with the glue. Remember that rawhide tends to stay moist in humid climates. If you are at home, make a special sleeve for the broken area and pour in clear casting resin or some other plastic-like material to make a hardened sleeve. This can also be accomplished with fiberglass mending kits.

Whether or not to split the poles for storage or transportation is a question that is asked many times. It can be done, but the result is poles that are not as strong as full-length poles. To join them together, some people use a sleeve made from metal or plastic. This means you must carve away portions of the pole to fit into the top and bottom of the sleeve.
It does work, but water going down the pole will leak where it hits the sleeve casing. However, in very wet weather, you can tie a piece of leather just above the cut, and the water will run down the pole until it reaches the leather and drips onto the rain cover or behind a liner. There is also the problem of constant shrinkage of the wood as it ages. On the upside, it does make transporting poles much easier.

## Pegs, Lacing Pins, and Rain Sticks

**Pegs** for the outside cover hold down the tipi in all types of weather. Depending on the area of setup, wood pegs should be about 18 inches long and 1 inch thick. The size of the tipi will determine the number of pegs you need. In my case, I carry several sets with me: a short wood set of 15 inches, a longer wood set of 24 inches, and a steel set. The long 24-inch "circus" pegs are for very sandy soil, while the steel set is for rock-hard ground that breaks wood pegs. It is nice to know what the ground is like where you are going to camp, but that is not always possible. I have left wood pegs in the ground where I could not get them back out after finally pounding them in. This is extra material to cart around, but the pegs hold the tipi cover down in windy weather, and it is important that it is securely tied to the ground. I also carry a pointed steel pick about 15 inches long to first dig a hole in the ground before pounding in the wood pegs. That makes it much easier to drive wood pegs into a hard surface. Bring a steel hammer for this job in addition to a good wooden mallet for the pegs.

Two full sets of wood pegs.

For pegging the cover to the ground, you will need twenty to twenty-five wooden pegs. Always cut extras, as they break or split with age. Any hardwood you can find will be good enough. If you know trees, you can look for hickory, maple, ash, oak, or chokecherry. My sets are locust from North Carolina, ironwood from Texas, and chokecherry from Florida. In a pinch, use 1-inch round dowels cut down to size.
Sharpen the pegs to a point at one end and leave about 6 inches of bark at the other to catch the rope you will be tying to it. Another way to catch the rope is to carve a ring around the top of the peg.

Pegs and hammers.

**Lacing pins** insert into the pairs of holes down the front to button up and close the cover around the poles. Make them out of the same wood as the wooden pegs. These sticks should be rounded and pointed, 12 inches long, and about 3/8 inch thick. Lumberyards have dowels you can buy and cut to size. If you wish, leave about 4 inches of bark on the pin for carving rings in a decorative manner, and strip off the bark on the rest of the stick. Put floor wax or a polyurethane finish on the lacing pins to protect them from the weather and to make them go into the holes more easily. Canvas swells when it gets wet, and it becomes very difficult to get the pins in or out. Make as many lacing pins as needed for the pairs of holes down the front of the lodge. My 12-foot lodge only takes about eight, whereas my 17-foot lodge takes fifteen.

**Rain pegs:** Get together about thirty 5-inch sticks, 1/4 to 1/2 inch thick. These will be used after your lining is up. They are inserted between the poles and the rope or leather ties that tie the lining to the poles; they keep water running down the poles by forming an uninterrupted channel. Rain pegs are used anywhere there is an interruption on the tipi poles by ropes or other materials, such as decorative ropes for hanging items, where you want the water to continue to run down the pole.

Tying cover to lift pole. Lacing pins inserted into cover. Rain sticks. Darry Wood rain stick used to channel water and hang items.

# Pitching or Setting Up a Tipi

## Materials

Get a good storage box to store materials in for setting up your tipi. This box can be made from wood, rawhide, or canvas. It will take a great deal of punishment, so choose a material that holds up to constant use.
My own box shows some of the items I carry:

* A good book on tipis for reference on setting up a lodge
* Ropes for tying the tripod together and wrapping the poles, along with some extra rope for hanging items in the interior
* Ribbons or cloth for the tips of poles (streamers)
* Lacing pins in their pouch
* Buffalo hooves made into a door knocker for outside
* Rawhide cylinder container that holds the rain pegs
* Small broom for dusting off items inside and outside the lodge
* Rawhide box to hold these items

You will develop your own storage system for the items you will need to set up your tipi.

Tipi storage box for setup materials.

**Rope:** You will need about 25 to 45 feet of 1/2-inch manila or hemp rope to tie the poles together at the top. Hardware and large discount stores should have manila rope. Hemp rope is OK, but wet down and stretch out your rope until dry before setting up the tipi. Some ropes have threads that stick out and can get under the skin; wear gloves if this poses a problem.

I have two sets of rope that are used for miscellaneous purposes. The set I use depends on the occasion of the camp I am attending. For the old-time look, use a 1/2-inch by 25-foot braided horsehair rope spliced into 12 feet of good 1/4-inch manila rope. The 1/4-inch by 12-foot length ties the tripod and then wraps around the other poles placed in the crotch, while the horsehair part hangs down into the middle of the lodge. This style of rope is still strong enough to hold down my 17-foot lodge in a big storm. The rope on the right is my basic rope for everyday use. It is 1/2-inch by 25-foot hemp spliced into 12 feet of 1/4-inch hemp rope that is tied around the tripod. Do not use a nylon rope, as it will slip and does not grip the poles.

I have found it necessary to carry different sets of pegs, hammers, and steel stakes. This is a lot of material to carry around, but I have never regretted bringing the extra items.
If you ever have to abandon a set of pegs, cutting them off level with the ground so no one will trip on them after you are gone, you will bring an extra set, too. Following is a set of materials you might want to consider for your special box to be used in setting up your tipi.

## Camping Tools and Pegs

* Pegs. (See page 100, #5–9.)
* Lacing pins.
* Rain sticks. (See page 101, bottom two photos.)
* Large, carved oak mallet. I use this to place the anchor peg(s) and to pound in my wood tipi pegs so the heads don't splinter. (See page 100, #1.)
* Plastic dead-blow hammer for those with weak hands who might need a little help. (See page 100, #2.)
* 3-pound steel hammer or an ax with a flat metal end for driving in the steel or iron pegs. (See page 100, #3.)
* 22-inch pointed steel bar to loosen up dry, hard, or frozen ground for the wood pegs. (See page 100, #4.)
* 28-inch, or longer, thick pegs for sandy or loose soil. Florida has some very sandy areas, and covers can lift straight up if not firmly pegged down. I call these the "circus" pegs, and for a good reason. (See page 100, #5.)
* Locust wood pegs approximately 20 inches long with no bark. (See page 100, #6.)
* Chokecherry wood pegs carved and painted for show. (See page 100, #7.)
* Center peg for tying down the tie rope inside the lodge. This peg has a crook in the top to wrap and tie down the rope securely. Two pegs can also be used for this purpose. (See page 100, #8.)
* Steel set of pegs for rock-hard ground. (See page 100, #9.)

After getting your material and poles together, choose a good, level area, clear of brush, rocks, and debris, for the setup. The area should slope just enough that water drains away from the lodge, should not have overhanging branches, and should have enough space for you to be able to maneuver your smoke-flap poles. If setting up on a platform, concrete pad, or a smooth flooring surface, have someone else there to help you hold the poles in position while setting the poles.
If possible, try to put your tipi in a protected area that has the least exposure to the wind. It is important to face the front of the tipi away from the prevailing winds. This allows the smoke to be drawn upward and out of the tipi properly. Although a tipi can be comfortable in the broiling sun, an ideal place would be to pitch yours northeast of a tree (or trees) in the summer for late-morning to evening shade. But make sure that it is not a tree like an oak, which will leach tannin stain onto your cover, or a pine, which can drip sap onto the cover.

## Setting Up the Tripod

The setup described in this chapter is for a three-pole tipi. Know beforehand the size of your tipi and how many poles it takes for the inside. Drawings of seven different pole setups are given for comparison, with an extra one to design on your own. As with most three-pole sets, the Laubins' tripod pattern is used. Streamers should be put on before poles are placed into position.

It does not matter which style of wrapping is chosen for the tripod. What does matter is that it works for you in getting the poles up without slipping. Most people use the Laubins' method (A), a clove hitch tied off with a couple of half hitches. Others find that the Seton woven wrap (B) or the Ben Hunt wrap (C) works for them. See page 106 for the three styles.

Choose the four heaviest poles in your group. Three of these will be for the tripod and the fourth is for the lift pole. Mark your poles in some way that will let you know which are the north, south, east, and lift poles. Make sure all poles have a pointed butt; flat-bottom poles will skid when setting up. The lift pole should be the longest pole, sticking out over the crown of the grouping. Some people like to make a fancy "war bonnet" effect with all the poles the same height to create a flared look from a distance. Originally, poles were positioned for strength in weather and not for looks as we do today.
Rain, snow, canvas shrinkage, or strong wind are all going to bend the frame in somewhat. If the poles are relatively stout, they will mostly spring back and return to near their original straightness when the cover is dry. Weak poles do not tend to recover and only get worse as time passes. When this happens, the poles may warp and need to be turned or twisted back out.

North and south poles can be measured from the lift area or where they are when the poles are up. The basic setup pattern is oval or circular, depending on your liner and cover design. Keep the angle small and then move the door pole out to tighten or lock where the three poles are wrapped.

Trying to get an hourglass look with the poles, or using overly long pole extensions out the top of the cover, adds to the tie-point thickness and increases the opening for water leaks. Four to six feet makes them easier to handle and is a good look without overdoing the length.

The north and south poles are laid next to each other with the door pole crossing over. No measurements are given here because everyone will have different numbers. Use your 35-foot rope and tie a clove hitch to pull the three poles tightly together, leaving about 5 or more feet of rope to later wrap around about three more times and finish with a square knot. A little trick to help keep the tripod from slipping: place the door pole closer toward the north and south poles, and after tying them off, kick the door pole back out to position. This locks or tightens the knot by putting pressure on the rope.

Different styles of wrapping the tripod. A. Clove hitch method. B. Woven wrap side view. C. Ben Hunt style of wrap.

Lift the poles into place by standing at the north and south pole butts and using your foot to brace and pull on the tie rope to start the process of bringing the poles up. Two people can do this quickly if one person stands on the butts and the other person walks the poles up with the rope. I am 5 foot, 1 inch and can put up 28-foot poles by myself.
It does take leverage and practice. Another way is to peg the tipi pole butts down so they do not move and then pull on the tie rope while walking the poles up until they are vertical. Still holding on to the rope, spread the north and south poles. Hang on to the south pole and push the north pole out and away from you. If it all feels somewhat balanced, walk to the north pole and bring it into place. Then set the whole tripod into position. Looking from the outside, the door pole is always to the left and the #1 pole is to the right.

Place the rest of the poles into position, following the pattern starting with the right door pole (#1), placing it even with your tripod door pole. Leave enough room so that your shoulders do not touch the poles when walking between them; this allows you to enter the tipi with ease. Depending on the style of tipi, poles should be slightly in from their final position, with the exception of the door pole. When working the cover and liner, it is easier to bring poles out than in.

Take the lift pole rope out of the south pole area. Then move in a clockwise direction around the set of poles, wrapping your tripod at least two to four times. Come in at the north pole area and set the rope in the center by pulling tightly. Then tie it to the anchor peg. This will steady the poles during the cover and liner setup.

## The Cover Setup

There are three basic ways of attaching the cover to the lift pole; you want to make sure it does not slip when lifting:

1. Tie the cover tightly with a good leather tie or cotton ties.
2. Place a small peg into the wood.
3. Carve a small curved indentation around the tie area and tightly tie the cover (seen in an old photo of a Cheyenne tipi being set up).

Do not cut a V-shaped groove, as this will weaken the pole, which may cause it to snap later. Then move the lift pole and cover into place.
Again, if you are the only person around, place two pegs into the ground to brace the butt of the lift pole and then walk it into place. This is a bit more work, but better than the butt sliding out of position or skidding away.

A. Placement of anchor pegs. B. Single anchor peg with trucker's knot. C. Tying cover to lift pole.

Bring the cover around and put in the lacing pins, left over right, as you face your tipi. Left over right is the standard way, but in some of the old pictures the reverse can be seen. Just make sure that the lacing-pin holes for the top part of the cover (the left) are a bit farther apart than the two holes underneath, which are closer together. This helps divert water from leaking directly inside. Also bring a small ladder if the lacing-pin holes are much higher than your reach; it can be very dangerous standing on makeshift items.

Leave the bottom lacing pins out until the liner and cover are just about set. This makes going in and out much easier, and it lessens the danger of broken limbs caused by tripping on the closed door opening. In the final stages of setting up, put the pins in and set the cover by moving the poles in or out as needed. See the photos on page 101 for placement of pins.

Setting up the tripod.

## Hanging a Rope Liner

If you used a pattern in this book to make your fitted lining, then you are going to have an oval-shaped tipi. The Laubins said that "the floor plan of a properly pitched tipi is oval or egg shaped, rather than round. The tipi cone is also tilted and steeper up the back than the front" (Laubins 1957, 27). Some tipis do come in round and oval footprint shapes. The structure can tilt more toward the front than the back, as well as take the true shape of a cone. So, what is your tipi? Again, if made from the pattern in this book, it will have a slight tilt to the back and have a liner that will form an oval shape with the poles.
If the lining is made the old way, with rectangular panels, it will most likely be a rope liner and can be set up using any of the positions shown. All these methods appear in pictures of Northern and Southern tribal tipis using a rope: styles 1 and 2 were mainly used in Northern tipis, while style 3 was used with most rope liners. Panels are tied at the top, hang down, and then items are pushed against them to the sides. The more formal rope liners, connected to ropes or pegs at the bottom, are a modern addition of the last sixty years.

## Hanging the Fitted Liner

Inside the tipi, locate the lift pole and tie the bottom base cord of the liner to the butt of the pole. Make the tie tight with a square knot. Then, working from the lift pole toward the door, secure all the other bottom ties on each side of the lift pole. Once this is complete, the bottom ties do not have to be untied; they can be left tied for the next setup, which will leave a loop to slip over the butts. If your liner is in more than one section, place the ties of the end panel into the grommet hole and tie to the pole tightly.

Try not to let the ties ride up the butt of the pole. Ropes are difficult to tie on poles that are not pointed at the butt but have a straight-across cut; they can slip up the pole, making the bottom of the liner loose. To prevent slippage, place a small nail or wood peg (B) at the base of the pole.

With the bottom of the liner attached to the butts of the poles, stretch the liner and shift poles as you work from the lift pole to the door. Next lift the top of the liner up to the pole and tie it into place using the leather or cotton ties. The leather ties grip better than cotton. If the cotton ties do slip, slightly wet the strips to give them some grip on the material. Wrap them around at least once and then tie a bow in the front of the pole. Start pulling poles in and out as needed to get the wrinkles out of the liner. The last panel to the north of the door wraps around the pole.
This extra panel is used at night or during inclement weather to seal the tipi by bringing it across the opening to the door pole and tucking it behind or tying it in front. If the liner is not smooth, you may have to go back around, moving poles in and out or to the door, to get the fitted look (A). Two people can do this, with one person inside directing the person on the outside to move poles in, out, or to the door. You need to learn to read the wrinkles in the canvas so you know whether to pull out or in. This is an easy liner to set up once you know how.

My own way of setting up is to put the entire lining up first and then the cover. Since the measurements on the poles are already fixed and marked from experience, the cover can go up when I need it. I like to put all my materials inside while there is a breeze and good weather, and then put the cover on. However, this method does drive people crazy!

When not using the fitted liner, the cover can be set up to the ground and the poles set in a more circular final shape. Using a rope or tying directly to the poles, I will use old shawls and blankets as a liner only where my sleeping area lies. The bed holds the bottom of the liner in place. Linings do not have to be placed in front of the poles. Some are placed between the poles and cover, forming a double thickness with the cover, which then goes to the ground. A few others weave under, over, and between poles, with no ropes or ties.

A. Airflow and liner placement. B. Attaching liner to butt of pole. Single or double cord attaching liner to butt of pole. If poles are not butted, tap in a small nail to help keep the line from sliding up. C. Three top lining setups; the last one is used the most.

## Pegging Down the Cover

Start with the lift pole and peg this area down first, and then pull on the cover to get out the wrinkles. This sets up the cover for the rest of the pegs. Next, go to the front and peg down the loops on either side of the door.
Again, pull the cover and shake it before pounding in the two pegs. This will ensure that your door will not be pulled out of place. If the cover is to go all the way to the ground, drive in the pegs at an angle to the cover. If the cover will be several inches off the ground, drive pegs in straight up and down, with the loops doubled over and twisted on the peg. This will lock the loop in place and make it possible to slide the loop down if it comes up, without hammering the peg still farther into the ground.

The last item is to set the interior anchor peg and attach the tripod rope with a tight trucker's knot, sheepshank, or whatever knot works for you. The anchor peg helps the poles "bite" into the ground, preventing them from moving inward or traveling in strong winds. If not corrected, traveling poles can cause the tipi to fall over or move a few feet.

## Smoke-Flap Poles

Next the smoke-flap poles are placed in the pockets of the flaps. These poles need to be sturdy and not bent around the cover. Too many people choose thin, long poles for this, and under pressure they snap or wrap around the tipi. Since they have to take the wind without breaking, choose a stout pole that fits from ground to pocket in an open position from the back side of the tipi to the front. Lastly, put the door on over the door opening, securing it in place.

Interior of tipi with liner down showing position of poles and basic wind pattern.

## Folding the Cover or Takedown

There are four possible ways to fold a tipi. A, C, and D are examples of the old ways to fold for loading on a packhorse, travois, or for storage. Example B is an easy way to fold right on and off the lift pole.

While the cover is still on the poles, undo the lacing pins and vigorously shake both sides of the cover, one side at a time. Also take a broom and hit the inside, knocking out the dust. Grab the lacing-pin area and walk it to the lift pole, forming a pie wedge.
Go back to the fold and repeat about three times, depending on the size of the cover. Start on the other side and fold again, following the same procedure. The back of the tipi now looks like a giant, thin, triangular wedge from a little top to a larger bottom. At this point, take a rope and tie it around the cover and lift pole at about eye level. This keeps the cover from slipping off the pole. Then gently ease the lift pole down. Two people can reverse the steps they used in setting up; working alone, reverse your own setup steps. While the cover is still on the lift pole, take the pole out and sweep or brush off the cover as it is tightly rolled into a ball from base to top. Tie the cords on the tie point around the cover, if they reach, and you should have a tight little ball of canvas. This is particularly helpful in keeping your tipi dry if the surrounding ground is wet.

Above and below: Methods of folding a cover for storage.

## Problems with Canvas in the Cover or Liner

* Wrinkles are most likely caused (assuming the tipi was cut correctly) by the tripod tie being too low and/or the poles being too spread out at the base to fit the lining. A low tie point prevents the cover from sliding down the pole structure and forming a nice snug fit.
* Some wrinkles are caused by not pulling on the cover to work out wrinkles as you peg it down from back to front.
* If you have a sagging liner, or the rope or ties have slipped, retie the rope or bring the pole in or out.
* Rain coming in from the top can be caused by smoke flaps not being wrapped around tightly enough, or the tie point might be too high, making a bigger opening than can be covered.
* A rough bark spot or splinter on the pole creates drips. Use your finger or a long stick to guide the water down the pole. Redirect the water on the surface by touching the dripping area until it flows down.
* When water is coming in through the canvas or splattering inside, the cover needs to be re-waterproofed.

# Living in a Tipi

## Smoke Flaps

Long ribbons fluttering from pole tips aren't simply decorative; they're the wind sock that helps determine which direction the prevailing winds are blowing. This allows you to set your tipi's smoke flaps accordingly. The diagrams on page 117 show eight ways to adjust or use the flaps to adapt to the weather. Figures A through C show the placement of flaps when wind and rain are coming from the sides; E shows the placement when winds are from the back. Figures G and H have the flaps wide open, and the sides of the tipis are rolled up or spread out to act as a sunshade.

Rolling up a side of your tipi cover and dropping portions of the liner will provide your camp with both shade and breeze on those hot, sticky July and August afternoons. On the shady side of the lodge, roll the side of the tipi cover up to a height of 4 feet or so. Prop the rolled-up cover on a couple of forked sticks. On the opposite side, drop the liner and side that best takes advantage of any cross breeze.

Figure D comes from a picture labeled as "circling winds." Both flaps are shown rolled up on the poles. I only saw this once, on a Cheyenne lodge.[8] Most interesting is Figure F, which shows the use of reed walls made of cattails or sunflower stalks built around the tipi to keep the high winds and dust out. In Oklahoma photos, many Cheyenne and Kiowa tipis were seen with this surrounding wall. Outside were arbors of willow and reeds. This gave shade for cooking and just sitting around. Depending on the part of the country you live in, there are different solutions to rain and wind problems. A cane windbreak helps keep the hot western winds and dust from getting into the tipi.

Setting the smoke flaps and camp for different types of weather.

## Extreme Winds

Though the shape of the tipi makes for a very wind-resistant structure, a strong gust can shift a lodge.
You should frequently check poles for movement and kick or move them out regularly to keep the canvas taut. Crow tipis use ropes attached to the lift poles that go out to the back or sides for a distance and are then pegged down. This keeps lodges from walking or moving out of position. Sioux and Kiowa tipis use similar ropes that surround the structure about midway up and are then pegged down to the sides. Sticks can also be placed on the inside to brace weak poles in extreme winds. I have used these methods in winds clocked at over ninety miles per hour, and they do work. But it is always best to start with good poles, and by good poles I mean ones that are hefty, strong wood. Weak poles will cave in from the pressure of the wind against the sides of the cover.

One way to keep the poles from moving is to bury the bottoms of the poles. This trick also works when the door pole is too long and you do not want to start over; Cheyenne women used it when door poles were too long. The Crow talked about making holes in the ground with an ax to keep poles from moving.

The following is a system that Arrow Tipi Co. uses: they drill a 1/2-inch hole through the pole 4 inches from the bottom, slanting the drill toward the bottom of the pole so that the drill exits the pole 2 inches from the ground. Rebar pegs may be used, but the pegs should be 3/8 inch or #10 rebar and up to 30 inches long, with a hook formed on one end. The peg must be a snug fit in the drilled hole. The peg is driven into the ground through the pole at a slant until flush with the top hole opening. The hook on the peg should be turned to the side to allow for removal.

Gary and Luella Johnson Crow family pattern tipi. Note ropes at back. Forked sticks to help brace poles. Rope attached to lifting pole that is pegged down in the back.

This method is used by the Crow/Blackfoot and others but can be used by any tipi owner to support the tipi in very high winds.
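The Arrow Tipi drilling figures (entering the pole 4 inches up and exiting 2 inches from the ground) imply a particular drilling slant. As a rough sketch only, the angle can be worked out from the entry and exit heights and the pole's thickness; the 3-inch pole diameter below is an assumed, illustrative value, since the text does not give one:

```python
import math

def peg_slant_angle_deg(entry_height_in, exit_height_in, pole_diameter_in):
    """Angle below horizontal for a peg hole drilled through a pole.

    The hole enters the pole at entry_height_in above the ground on one
    side and exits at exit_height_in on the other side, so the vertical
    drop happens across the pole's diameter.
    """
    drop = entry_height_in - exit_height_in
    return math.degrees(math.atan2(drop, pole_diameter_in))

# Figures from the text: enter 4 inches up, exit 2 inches up.
# The 3-inch diameter is a hypothetical pole size.
angle = peg_slant_angle_deg(4, 2, 3)
print(round(angle, 1))  # 33.7
```

A thicker pole gives a shallower slant for the same 2-inch drop, which is why the drill, not a fixed jig, sets the angle in practice.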
## Cold-Weather Camping

Fires should be small, and their size and location depend on the size and shape of your lodge. Many people cut a hole for a fire about 2-1/2 feet toward the door from center. Stand directly under the smoke-flap opening, and then take a step or two toward the door and cut your fire hole. Make sure it is not in the way when you step into your lodge. Also consider how close the fire is to the sleeping areas.

Stuffing grass between the liner and the cover is the old-time way of adding some insulation; however, you might not want to do this because mice and other little animals use it for food and nesting. Pete Roller, of _Whispering Wind Magazine,_ experimented with fiberglass insulation faced with plastic sheeting on both sides instead of grass stuffed between the two liners and the cover (Wilson 1924, 242). He also used a rain cover with a double liner and found it to be very warm with his small stove. This setup is a great idea for a permanent camp, but not for weekend setups.

## Liner or Lining Use

A liner does more for insulation than for circulation. Take the liner down when it is hot and put it up when it is cold. Roll the sides up on hot days. On hot, humid days the liner does not help the airflow.

## Floors and Platforms

Traditionally there are no floors in tipis; it was common to put down animal robes and trade blankets for a floor covering. Today we use all types of materials to keep the dust, dirt, water, and critters out of lodges and to protect valuable robes from damage from wet ground. Some floors are canvas, old rugs, clear plastic, painter's cloth, or heavy-duty Weblon synthetic plastic, which you often see used for cover tarps on trucks. If you are in an area where the ground is mostly dirt, rock, or sand, you might want to consider having a solid floor. You can get a solid floor in different colors, depending on the material.
I live in a wet area of the country where biting ants and spiders are a real problem, so a floor has become a necessity. My need is to keep the moisture out and to protect my buffalo hides. I have a floor going from side to side and front to back and going up a foot and over the sod cloth. All this is hidden by the multiple robes and blankets on my "green" floor covering. My floor of choice is a Weblon material that can be purchased from most canvas and awning shops. It is more expensive than most flooring material, but it is worth the price if you have ever had heavy rains flood your tipi and all your gear ended up soaking wet from groundwater. If buying this material, do not let the supplier sew the edges or the center seams. The seams should be heat welded together without any sewing holes. Once you get it home, cut the material to size or just fold it over for protection from water going over the edges in a very heavy rain. It may feel like you are walking on a rubber raft with the water underneath, but it definitely saves your gear. But do not cut a fire hole as the water will come up and over the fire-hole opening. Since I use a propane wood log fire that does not give off heat toward the bottom, I can have solid floor construction. A drawback to "plastic" floors is that if water drips in from above or the door opening leaks, the water stays on the surface until it dries up. It can be like living in a small wading pool after a heavy rainfall. Leak proof your lodge as much as possible to prevent this. Most water comes from the ground underneath the tipi. If you are going to live in your tipi, build it on some sort of foundation to get it off the ground. These foundations can be wood platforms, gravel, very sandy mounds, or concrete pads. If your area is flat, make your platform at least 3 to 4 inches high so your lodge will be above the standing water. A lot depends on the soil's drainage. 
You can trench your tipi all around the outside, draining water away from the camp. Be aware, however, that many campgrounds or facilities will not let you trench because this can cause damage to the ground.

Platforms to support a tipi off the ground.

**Concrete** pads can be poured in your backyard or chosen camping area. I had a 16-foot circle pad poured in my backyard for my 17-foot oval lodge. I didn't go with an oval shape for the pad because I wanted the water to drain from the front and the poles to just touch or be close to the pad. You can either build a fire hole right into the concrete or you can set the fire on top of the concrete. Putting a hole in the pad is easier for cleaning up ashes and it helps keep the fire from getting out of control. You can still put rock around the fire whether it is in the ground or on top of the pad. A few concrete pads have indentations for each pole or special metal brackets to hold a lodge pole in place. These keep the poles off the ground to prevent rot and bug infestation. Wood decks can be built anywhere, in any shape, and at any height. Most people build these in off-location places where there are animals that might be of concern or for that great view of the surrounding landscape. If you decide on a wooden floor, give careful thought to the choice of wood and a suitable sealant. Treated or untreated wood is generally available everywhere. Leave suitable gaps between each board for the water to drain off and away from the tipi. There are also synthetic decks or porch boards that do not rot, decay, or absorb water. They last forever and never need to be treated. Remember, if your deck is elevated, air is an excellent thermal insulator. It conducts very little heat, but it will lose heat through convection. Insulation will be considerably better if your floor is constructed to minimize air movement. Several tipi companies now offer portable wooden platforms and permanent decking for tipis.
If you are not a skilled craftsman, this is a way to go for comfort and protection of equipment. Look at companies' brochures and Web sites to get ideas for your permanent camping.

## Beds

Any bed you can get in the lodge will work. Remember that the lodge sides slant in so that a bed frame cannot come off the ground very high without giving up room. The more real-sized furniture placed inside a tipi, the less room there is for living, unless you have a very large lodge. Traditionally beds are situated on the ground with fur robes and some padding underneath (Wilson 1924, 242). In some cases, beds are constructed off the ground using supports from the backrests and small logs to guide the size. Backrests form the head and foot of the bed (B). The bed structure in the photo below comes with more permanent campsites that allow for the luxury of an interior bed with bags, beaded liner, and other gear.

Backrest beds.

## Bathroom and Washup

If you are going to an event such as a powwow or rendezvous, you will find porta-potties and other facilities. In some cases, there are real bathrooms with sinks and water. But for those of you who take your lodges out to the deep woods for longer periods of time, you have to take your own water and bathroom with you. There are many self-contained potties that can be used in your lodge, or the old 5-gallon paint bucket with a toilet lid on top does it for me. You can buy chemicals to help biodegrade the waste at camping stores and other retail outlets that sell camping goods. And then you can dig your own outhouse away from the tipi for privacy and sanitation. For a good shower, there are solar setups using 5-gallon black bags, or you can heat the water over an open fire and sponge on and rinse off. Do this away from the lodge so that water does not splash on your cover or run into the tipi. My preference is a small, covered area away from the camp where I can wash, brush my teeth, and clean my hair without messing up my camp.
If you are into a more modern shower, there are the battery-operated types that will suck up the water from a warm or hot pot and then spray it on you like a real shower. No picking up heavy bags or containers of water for a gravity-fed rinse of the solar types.

## Heating Your Lodge

The basic materials for a fire in your lodge are as follows:

* Good lighter for starting the flame
* Small firewood, either cut or broken to size
* Cleared area in the center or towards the front of the tipi
* Rocks or fire brick to circle the flames
* Small shovel or rake to clear the area of all grass and roots
* Metal plate of some type to keep the fire off the ground
* Rocks, metal ring, or fire brick to set the metal plate or galvanized garbage lid
* Fire extinguisher or at least 5 gallons of water for safety

Once you have your lodge set up, almost everybody wants to have a fire. Do you have to have a fire inside? No! Most cooking was done outside. After all, do you cook in your living room or bedroom? But when the weather does turn cold and there is a need to conserve firewood, cooking in the tipi keeps you warm while it cooks your food. The kind of fire you have in your dwelling depends upon the size of the tipi, whether or not it is a cooking fire, the weather, and how many people are inside. If you do have some kind of flooring in your lodge, do not put straw or any combustible material underneath it. Make sure the flooring itself will not catch fire. Several fires have started when people got their insulating material too close to the fire and the tipi burned to the ground. The fire traveled underneath the floor and caught the lining on fire as well as other combustible items. A small fire is better because it is easier to control than a big fire. Too big of a fire can send sparks into the smoke flaps of your tipi and outside, which can set the grass on fire. A small fire also uses less fuel in maintaining the camp.
The type of wood used is determined by where you camp in the country. Many people bring one or two night's worth of firewood with them. I like to bring in a day or so worth of good split oak wood and pine. Pine is a fast-burning wood while the oak is a slow-burning wood that makes coals. Some people bring enough wood for the week because they are not too happy with the firewood they get at the campground. It can supplement the type of wood found in the area. Keeping the fire burning in a lodge is very different from keeping it burning in your house. You need to keep it burning fast. That way it produces less smoke and draws the smoke outside faster. Using smaller pieces of dry, split hardwood will keep your fire burning longer and keep the amount of smoke down, making it more comfortable in your lodge. Whether you build a round or rectangular fire pit is not really determined by the tipi tribal style. What does matter is if your fire is on the ground or if you dig a pit. By putting the fire directly on the ground or on some material such as dirt or sand, the heat will spread out around you. Clear a 3-foot area of all grass and burnable material. If you are digging a pit and digging the grass plug out, dig the hole deep to get out all of the grass and roots. Put the plug at least a foot away from the fire or outside the lodge. Heat kills the grass on the sides of a fire pit. Depending on how long you are camping, you can keep the grass plug watered and replace it when you leave, after making sure that there is no heat coming from the fire pit. I put water in the hole and replace the grass plugs and water them again. The grass grows back in a week or so. Putting rocks around your fire is good for keeping heat and the fire in their area. Some parts of the country do not have rocks so you might need to bring your own. Being from Florida, I bring rocks almost everywhere I go, and this included a trip to Wyoming. I did not consider that Wyoming has lots of rocks. 
That's why they call it the Rockies. Fire brick, which can be obtained from fireplace shops, ceramic stores, some home improvement centers, and kiln builders, can also be used. It is lightweight, retains heat, and helps protect from some sparks. Dianne Best (Jin-o-ta-ka), a woman I corresponded with, told me that when she was down on the Cheyenne River Reserve, she was talking to the old people about winter life in the tipi and asked about ventilation (since she had been told they often stuffed dry grass down in the space between the covering and the ghost sheet liner). She was told that it was common to dig a shallow trench extending from the outside of the lodge cover to the fire in the center. The trench was then covered with branches (so you didn't trip on it). This allowed cold air to reach the fire without creating a draft on everyone inside. With today's problems of forest fires, it is a good idea to bring extra fire-safety items like rocks and a fire extinguisher. Some areas will not let you have a ground fire unless you have protection to keep it from spreading. Bring some cheap garden dirt, such as vermiculite, from a garden center. It does not absorb heat. Then on top of this foundation, place a metal rim, galvanized garbage lid, a few fire bricks, or a wok to hold the fire. A wire screen can also be used to stop sparks from flying. These are some alternative ideas if camping rules do not allow you to dig a hole. These suggestions may prove helpful at historical sites, certain private areas, national parks, Disney World Wilderness Campground sites, and "rocky areas" where campfires are not permitted on the ground but only in contained pits off the ground. It is always a good idea to find out before you get to the site if a ground or pit fire is allowed. Your lodge should always have the wind at its back, but it doesn't matter whether your tipi is facing east, west, north, or south. Winds can come at you from all directions.
They can change from day to night, season to season, and with cold to warm fronts. On the Great Plains, most of the wind is from the west going east, but at night it can change to the opposite direction. In the Southeast, the winds swirl in from out of the southwest and then change to the east because of the ocean temperatures, which cause land and sea breezes. Use that local information to decide in which direction to face your tipi. If you build a fire outside the tipi to cook on, like I do, be on the watch for sparks and changes in the wind direction. I saw a lodge go up in flames because of someone's fire that spread to other tipis and tents. Even if the weather appears calm, never leave a fire unattended. Losing your own tipi is bad enough, but to be the cause of such a loss to others is unbearable. The best wood to use for fires is hardwood oak or hickory. Take the bark off to help keep the smoke down on the larger pieces of wood. Small logs of about 1 to 3 inches in diameter are great to use. Wood should always be dry and well seasoned. I have my own favorite species, but if you're in a different part of the world, what's available that burns cleanly is probably sufficient. Split the logs into small pieces—no more than 4 inches in diameter. Don't throw on 8- and 12-inch-diameter logs that will just sit and smolder for hours. You'll find you have to spend a lot of time splitting wood and feeding the fire. It's not a log burner that you can stoke up and forget about for hours. In feeding a fire, Ken Weidner, an experienced primitive tipi camper, explains that a "feeder" piece is sometimes a piece of wood you can use that is larger than 6 inches in diameter. It is fed into the fire from the side nearest the door. You just keep pushing it in as it burns down. But do not leave it unattended before you go to bed. He continues: > The feeder does a couple of things: > > 1. It protects the fire from the draft from the door. > > 2. 
It acts like a kind of thermal flywheel—when the fire is burning well, it stores up heat, which it then puts back into the fire as it starts to die. > > Building your fire on stones helps, too, for much the same reason as stated in #2 above. But be careful, as some stones will split or explode, partly because of trapped moisture if they are river rocks. You'll often find the first fire you make is the worst, particularly if you build it straight on the ground. As the ground dries out, subsequent fires will burn better. > > Also, get yourself a length of 15 millimeters of copper water pipe, a heavy-gauge nail and a big hammer. Position the nail carefully resting in one end of the pipe and then flatten that end of the pipe around the nail by hitting the pipe with the hammer. Pull the nail out so that you now have only a small, nail-sized opening in one end of the pipe. You now have one of the most useful tools you'll ever want in a tipi—a blow pipe. If the fire is awkward, point the flattened end at a strategic spot and blow through the other end. This provides oxygen to the fire, vital for a fire to burn well. It works wonders! > > I try to burn smaller wood (up to 2-inch diameter). But I've seen some guys burning large logs (4- to 6-inch diameter), but they always have a good bed of coals going before they put on the bigger wood. I still try to burn smaller stuff; it gives instant heat and burns cleaner. Also fire tending is a necessary evil in a tipi, and the amount of vigilance required increases with the size of the wood used. > > As far as lifting the tipi cover up higher for better draft, I usually peg mine right on the ground and usually have no problem with draft at all. There is still plenty of air that gets in, regardless of how tightly pegged it is, unless it snows or something else keeps air from entering. The old photos seem to show the lodges pegged down closely. 
If you want that "dark smoked look" top on your tipi, burn a very smoky fire and close the flaps. That way you will get it quickly. In my new tipis, I like to start a small fire with a little fat lighter (which is very sappy pinewood), sold in some grocery stores or fireplace equipment outlets. One stick is splintered apart and then I use smaller pieces of oak or other wood to make up a tipi fire or square fire. After the fire gets going and grows larger, branches and sticks can be added to the fire. When burning oak wood, you will get a nice yellowish brown tint on the inside of your lodge and on the smoke flaps. This is a nicer color than the black smoke flaps that are caused when the fire is too smoky and sometimes too big. Do not use paper as a fire starter, as it has a tendency to fly up in the smoke flaps and can set the flaps on fire or even the outside area if the conditions are right. Paper also flies around the inside and makes a mess. Do not use lighter fluid or other flammable materials because they can lead to a taller, smokier fire than you expected. Other types of fuel are the "wax logs" you get in stores. They do work, but they will leave a black, sooty coating in the top of the tipi. I saw one lodge almost burn to the ground from a far-too-large fire made of these logs. Sparks got caught up in the smoke flaps and caused a small fire. Luckily for the owner, a fire extinguisher was nearby. A better idea is to break or cut up these logs and use them for starting the fire and/or to supplement the flame if needed later. It is possible to use charcoal, but that can pose a big danger. If you close up all the air openings, it creates carbon monoxide buildup, which is a serious health hazard. Smoke can also be a problem if you do not change the direction of the smoke flaps when the wind or weather changes.
Crazy Cyot related to me in conversation what can happen in a lodge when the flaps are not changed or the weather shifts. His account below provides a firsthand look at living in a tipi through winter conditions. > But I have had to lay low a few times when the smoke got thick, due to a wind change before I got up to change the smoke flaps. The first winter I lived in my tipis, I dug a trench to the outside as before mentioned but I (forgive the lack of period correctness) laid a pipe in the ground with a ninety [a 90-degree elbow] on it so it had about 3 feet of pipe standing above the ground so the snow would not cover it. This worked very well; it helped keep a good draft for the fire and helped to keep the inside of the lodge warmer because the fire was not pulling cold air in from the edges of the lodge. I also found that a tall liner helps to keep it warmer. I never tried the grass stuffed in the liner; I just figured that when the grass got wet and started to mold, it would help to rot the canvas faster. I found building a wind-break fence around the tipis helped, too; it also gave me a place to store firewood. I spent two winters in a tipi on the bend of the Bear River by Soda Springs, Idaho. I also found out why the Indians did not winter there: it's got to be one of the coldest places in the state. Kerosene heaters are another way of heating a tipi. Heaters come in styles from rectangular to round. A good 10,000 BTU heater will warm an 18-foot lodge as long as the lodge has a good liner and rain cover inside. I prefer the rounded heaters inside the lodge as they seem to distribute warmth in a circular motion in comparison to the flat-sided styles. Another advantage of the rounded heater is that you can put a pot of water on the top for tea or coffee in the morning. But it is not good for roasting marshmallows nor does it have the ambiance of a roaring fire. One gallon of fuel will take off the chill for the night.
The drawback to kerosene heaters is the smell of the fumes and carbon monoxide. You want to make sure that there is adequate ventilation. Leaving the smoke flaps slightly open at the top will help. In all the years I have used my heater in the tipi, there has been only one small problem—the slight smell of the fumes. If your tipi is closed up too tight, there is a possibility of getting headaches from the fumes. Another type of heater is the propane fireplace. Some of these can put out between 5,000 and 15,000 BTU. They have a circular pan between 20 and 30 inches across and have logs that look real. The logs give off heat that you can cook over. The pan is insulated on the bottom with a layer of vermiculite that protects the floor from burning or getting too hot. There are 2 to 3 inches of air space under the pan, permitting cool air to circulate. Some pans have built-in legs while others rest on fire bricks. A 12-foot hose is connected to a 20-pound propane bottle. The tank can be hidden inside the lodge or outside. Use extreme caution when handling propane, and place the propane container as far from the fire as possible. The disadvantage to the propane fireplace is again the carbon monoxide, but I have never had trouble with the unit because of the airflow in my lodge. Another disadvantage is not being able to throw another log on the fire, which we are all prone to do. They are, however, fairly easy to unclog if some trash does end up in the vent holes. The vermiculite also helps protect the mechanics of the unit from trash or other items put in the fire. To make the fire more realistic, put rocks around the pan for effect. They look great and no one can tell the difference between gas and wood except that there are no sparks and you do not keep putting wood on the fire. Depending on how long you keep the fire going, a 20-pound cylinder of gas can last two to three days. I base this on personal experience of two nights in 17-degree weather.
I did not heat during the day unless cooking. It is also so nice to just roll over and turn up the fire in the morning for a little hot chocolate or tea.

Louis Beergeron tipi with stove and rain cover.

I have used buffalo chips with little sticks of wood in my tipi fire. I found that they did not smoke and the chips were like charcoal when they burned. Chips also gave off a warm glow with little or no smell. They should be totally dried out before using. If they are not totally dry, they can explode because the moisture inside is quickly boiled in the fire. It is well known that the Plains Indians used buffalo chips and wrapped knots of prairie grass for their fires. Tipis are being used around the world and everyone is coming up with their own ideas of having a fire or heating the inside. In Japan, there are lodges with hibachis or wood-type, double-glass-door fire holders. In the Southwest, chimeneas of varying sizes are set up. In Europe, one gentleman came up with an elaborate underground air-piping system. The three pipes lead from the inside fire to the outside air. This system was designed for heavy snow where the tipi is covered halfway to the top for days or weeks. Each of the three airways leads to a vertical pipe that is 3 to 5 feet above the ground and is above the snow line to provide oxygen. For permanent or longtime living in a tipi, the ultimate heater is the cast-iron stove. The woodstove enables you to cook and heat water regardless of the weather. Stove jacks that stabilize the stovepipe to the poles can be installed in tipis. Some people have the usual fire on the floor (hearth) and the stove. Louis Beergeron, a tipi dweller, lives in New England, where there are all kinds of wet weather. Her lodge has held up great in severe weather conditions. She put an 8-foot canvas umbrella with a wooden frame on top of her tipi poles. It works for her and she has been dry ever since.
She did have to cut down the poles to get the umbrella to sit right. According to Louis, it has had over a foot of snow and ice on it and has held up wonderfully. Four straps are tied to the umbrella and poles to hold the rain cover down to the ground. The winds have howled and pulled at the umbrella, but it has withstood them. A woodstove inside for winter camping keeps the lodge toasty warm. There is no open fire, which she misses, "but you can't have everything." To prevent a fire when using a woodstove inside the tipi, the stovepipe should be insulated where it goes through the smoke flaps or a hole in the cover. This can be done by double piping the stovepipe and by using fireproof materials around the pipe opening. Check with your local fireplace equipment supplier or tent camping store that sells portable fireplaces and see what they have in non-heat-absorbing material. Other heating possibilities include gas generators, underground electrical lines, and solar panels for generating electricity. With all the new survivalist materials, camping journals, and microengineering, life in a tipi does not have to be primitive. There is an electric line laid underground that goes under my tipi cover to run my TV, computer, and a small light. I have all the comforts of home. When I want to be primitive, I put it away and pull out the buffalo robes.

* * *

[8] Oklahoma State Archive Phillips Collection #1760, "Cheyenne Tipi Setup."

# Decorations on the Cover and Liner

## Applying Design and Paint to the Cover

WARNING: Do not copy any designs found in this book or any other book showing cover designs. Many of these images are still owned by families today and they can and do get upset when they see them on other tipis. I know of a family that took their tipi to a dance in Montana only to be greeted by an angry Native American family who demanded they take their tipi down because the designs belonged to them.
The other family complied and later sold the painted cover and replaced it with a plain white one. "The painted tipis, or 'medicine,' tipis were owned by only a few distinguished families. ...Medicine designs usually originated in dreams...and were handed down generation to generation. Such designs could be purchased in proper ceremonies and the 'medicine' and rituals 'passed' from one owner to another" (Laubins 1957, 243). NMLRA Western National Rendezvous in Wyoming, 1987. Kathy Brewer of Indian Images describes the use of cloth and the tribes who painted their lodges: > Canvas, linen or cloth covers for tipis started becoming popular among the Native Americans around 1851. According to Kurz's 1851 journal, more of the wealthy men were already getting canvas for their lodges. Treaty annuity payments in the late 1850s and 1860s also were accompanied by lots of canvas bed ticking and other fabrics. As the 19th century wore on, the covers got bigger....Bolder surface designs were applied with the new industrial paints and dyes that did not wash off or fade. Old photos of Blackfoot tipis, drawings of the Kiowa, Cheyenne, and Sioux depict spectacular cover paintings in their sketchbooks. As for copyright of tipi designs, Blackfoot tipi designs (especially the stylized, elongated animals painted between the bottom surrounding base design and the surrounding top design) are family owned. Permission is needed from the tribe or the family before you use a particular design. You must also gain permission to use patterns from Kiowa, Cheyenne, and some Sioux families. Getting "the rights" from a tribe or family historically involved a feast, music, and a ritual that had to be learned with certain rites to be performed. "A painted tipi was an announcement that a sacred bundle rested within [and that] the tipi owner possessed the rites and rituals of that bundle" (Maurer 1978, 22). 
You can decorate your tipi any way you see fit, with designs that are relevant to you and your family. If you want to make a more traditional-looking tipi, look at some ledger art drawings to get acquainted with the pictographic style of drawing and painting. Then do your own patterns in that style. The advantage to pictographic-style paintings is that you can decorate your tipi liner/cover with figures that do not cover it completely with paint. Historically, the men were in charge of painting pictographic representational images (even though they were highly stylized), and the women painted geometric-type designs, as on parfleches and robes. Many nonfamily tipi designs were based on the individual's own exploits or dreams. These are the possession of just that one individual or society. If you do not want to stick with traditional Native American designs, your creativity and imagination are your only limits. I have seen cross-stitched designs, rainbows, Scottish heraldic images, and every type of animal and geometric symbol that can be envisioned. Then there are the tie-dye tipis of the Netherlands and England. Tipis have a huge canvas that can be painted. Or you can always leave it white and just enjoy the tipi.

Anadarko, Oklahoma, 24-foot tipi.

Karl Miller tipi, National Powwow, Crescent City, Illinois, 1990.

Modern Sioux lodge, 1999. Nomadic tipi makers.

## Painting the Entire Cover

If you are painting the whole cover, remember that heavy paints can cause the tipi to shrink 3 to 6 inches. Make sure your canvas is of a tight weave like army duck. If it is pretreated with waterproofing solution or fire retardant, you will have to use your brushes to push the color into the weave. This will take the waterproofing and fire retardant out of the canvas. After it dries, you will need to re-waterproof and reapply the fire retardant.
Some of the old-time colors are red iron oxide (sometimes called light red or Indian red), yellow ochre, Prussian blue, chromium green oxide, and Van Dyke brown. Of course, you are free to use whatever colors you wish if you aren't aiming for an old-time look. Try painting on scrap pieces of canvas first. For large areas, you should paint on premoistened canvas. Use sponges or wet towels. It is more difficult when the tipi is dry. The paint needs to be applied in thin coats and several coats are needed. This is better than one heavy coat! I use flat latex exterior house paint, which comes in several colors or can be custom mixed. Give your cover time to dry before folding it up. If you have painted the whole cover, you will find that ten or more pounds have been added to the material. Any added decorations, such as appliqués, beadwork, and dangles, will also put weight on the tipi cover. Innovative tipi owner Gary Winders (Short Bull) wanted his lodge to look like an old-time buffalo-brain-tanned hide, which is white at first and then turns a light yellow color with time. He took a piece of "smoked brain tan" to a paint store and matched it with their computer color duplicator. He mixed the latex exterior house paint with water (5 parts water to 1 part paint) in a 5-gallon bucket. He then laid his 18-foot tipi out on the ground. Using a big sponge, he went over it starting at the top and working his way down to the bottom. That way he could make it splotchy and uneven. Then he went back and applied more to the top, to the bottom, and around the door to make it look more used. Later he painted his liner with the same color by going over the pictographs that were already on the surface. This helps keep the sun out, he says, thus keeping the lodge cooler in the summer.

Above and below: Rick Patterson tipi, Union Grove, North Carolina, AICA Dance.

Tipi by Darry Wood. Beaded flower cover.

Tipi by Darry Wood; owned and beaded by Bob Acorn.
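As a quick arithmetic check on Short Bull's 5-to-1 mix (the 5-gallon bucket is from his account; the split below is simple proportion, not a quantity he states):

```latex
% Five parts water to one part paint makes six parts total.
% Filling a 5-gallon bucket at that ratio:
\[
\text{water} = \tfrac{5}{6} \times 5\ \text{gal} \approx 4.2\ \text{gal},
\qquad
\text{paint} = \tfrac{1}{6} \times 5\ \text{gal} \approx 0.8\ \text{gal}.
\]
```

So a single gallon of latex paint is more than enough to tint a full bucket of wash at this dilution.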
NMLRA Western National Rendezvous.

Painted tipi by the Brewers of Indian Images.

The Brewer family restores and makes reproduction Native American materials for museums and private individuals. Through their years of work, they have come up with some very innovative and creative ways to make materials look old. For a more aged look on a tipi, they developed this idea: > Bill sprayed [the tipi] with a very weak, muddy-colored acrylic solution, akin to tea water or coffee water, using one of those tank-style spray painters. He did it outdoors, and threw some dirt on it while still damp, and then brushed off the excess when dry. We had some black soot that had been shoveled out of an old blacksmith shop, and he used that to darken the top and flaps of the tipi to look like it had been discolored by smoke. > > You can paint your canvas tipi cover with large hide-shaped areas, rather than adding additional material cut in hide shapes. This was how it was done in the movie _Dances with Wolves_, and it looked remarkably good from a distance. It may take some artistic talent, but I am sure with practice (try a miniature tipi cover or even a plain piece of canvas to start with) you could come up with a pretty good facsimile. Just look at old photos (preferably color ones, if you can find some) and try to replicate the "design" of the hides sewn together. There should be shading and some color differential in the different areas representing separate skins, but that is what trompe l'oeil is all about. This 'fool the eye' painting is really popular in the United States right now; all sorts of programs on the Home and Garden channel have ways to paint your house up in this way.

## Painting the Designs

Before painting your cover, you might want to make a small toy paper cover, with all the designs painted or drawn to scale. It is better to make your little mistakes on paper before you paint on the big cover. I have done this before painting any of my covers.
A paper model definitely shows up design flaws. It also gives you a chance to make any changes in color choices. Use a heavy paper or kraft butcher paper for patterns or stencils. This will save on extra pencil marks that are almost impossible to erase on canvas. Use a light tacky spray glue, applied to the back of the pattern, to hold it in place. Make sure to wear socks or have clean feet when walking on the cover. Bob Ellis's 16-foot tipi, 1975. Disney World Lodge Owners, 2000. Linda Holley's 17-foot lodge. ### Materials for Painting * Water-based acrylics such as Liquitex, Golden Fluid Acrylics, and Colorplace paint * Flat latex exterior house paints * Paper for patterns * #2 pencils * White erasers (not red because these leave marks on the canvas) * Blue or red tailor's chalk (unlike pencil, this will brush out) * Large area to lay the cover out (peg or staple the cover to grass or weigh it down to the floor) * Good brushes for use with acrylic paints in sizes from 1/8 inch to 2 inches For the various designs, about any water-based color medium will do. Try several colors on similar scrap material and compare before you choose. The pigments should be colorfast and designed to withstand exposure to UV radiation (sunlight). Dilute the fluid acrylics with water as much as 15 to 1 for a watercolor effect and 10 to 1 for most colors. You can also purchase pre-diluted airbrush colors that are pretty close to the correct consistency. A 6-ounce tube of pigment or large artist tubes can be used to paint the entire top of a tipi if diluted to a stain or slightly opaque consistency. You can mix pigments in thinned "hide glue" or "craft fabric paints." Paints that are applied too thick crack and peel even at a 50/50 mixture. Paint applied too thick will also create a very heavy cover and will result in leaks where the canvas bends. This is called crackling: spiderweb lines of bare canvas show where the paint breaks.
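Scaling those dilution ratios up from a small test batch to a bucket is simple arithmetic, but easy to slip on in the middle of a project. Here is a purely illustrative sketch — the 10-to-1 ratio and 5-gallon bucket size come from the text; the function name and units are my own:

```python
def dilution_split(total_quarts, water_parts, paint_parts=1):
    """Divide a target volume into water and paint for a given ratio."""
    parts = water_parts + paint_parts
    water = total_quarts * water_parts / parts
    paint = total_quarts * paint_parts / parts
    return water, paint

# A 10-to-1 mix (most colors) scaled to a 5-gallon (20-quart) batch:
water, paint = dilution_split(20, 10)  # about 18.2 quarts water, 1.8 quarts paint
```

The same function covers the 15-to-1 watercolor mix or the 5-to-1 stain described earlier; just change `water_parts`.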
For highlighting, use permanent paint pens to outline your images for sharper, cleaner edges. You might find that a black or white outline accents your design. However, if you re-waterproof after using the paint pens, a halo or bleeding effect may appear around the dark lines. Try it on an extra piece of canvas to see how the paint performs. For designs that you want to have a definite edge, the paint needs to be thicker, but not so thick that it cracks when you fold it. The only way to find the right consistency is practice. Put the tipi paint in plastic containers with covers so the paint won't dry out between uses. It will, however, dry out eventually. This is why you shouldn't mix the whole tube at once. Use a good-quality acrylic brush if you want to do detailed work (like pictographs), a wider brush for large areas, and a smaller one for detail areas. Just a few notes on painting: Painting the cover makes it heavier but also less prone to deterioration as the tipi wears out. Painted areas have to be re-waterproofed, as the paint and the scrubbing action of the paintbrush take the water-resistant chemical out of the canvas. Whatever is painted on the surface is going to cut down on the amount of light that gets through the cover. ## Painting the Top and Bottom of the Cover Paint the top part of the cover first so that you don't track over your work. One of the hardest things to do along the bottom of the tipi is to keep your lines even as you make your design. To make circular designs that follow the entire bottom of the tipi to form a border or line designs, first peg a tape measure to the radius point of the lodge between the two smoke flaps. This will give you a good swing line to form your pattern lines. Check and recheck before marking any line. Because tipis can have a different radius point, you may have to move the tape measure a few times before getting the right measurement.
Once you have the distance, either tape or hold the pencil tightly on the tape measure and start making the "swing" line. Jim Creighton's hand-stitched and painted lodge, Arizona. Interior of Jim Creighton's tipi looking up to back. Blackfeet tipi liner, ca. 1915, which belonged to Cecile Horn of Browning, Montana. Hand-stitched cotton cloth colored with crayon, Indian ink, and pencil. 24 1/2 feet by 5 feet, 10 inches. Timber Line Tipis of England has a different method for marking the bottom lines, done with the tipi set up on poles: > A tip that people might have worked out before, but that makes marking out a tipi a lot easier, is to run bands around the tipi that you want parallel to the ground. Erect the poles and put the cover on them, but inside out. No need for the smoke-flap poles. Then to mark the bands, stand on a stepladder inside the tipi. It's a lot easier and safer than trying to mark the outside when you find yourself overreaching and at risk of falling off. Also, you don't need to move the stepladder about as much. Blackfoot painted hide liner, Denver Art Museum. The next step is to get your colors together and start painting. The cover can be painted on the ground or put on tables. For large areas, sponges, rags, or roller brushes will cover the most area. On the small design areas, try to get the cover up on a table so that you will be more comfortable painting the smaller details. Also, do not be surprised by the overlapping strokes that can show up with these techniques. Some people have used sprayers, but these do not seem to penetrate the threads because of the water-resistant properties of treated canvas. The paint sits on the surface. Paint needs to be worked into the material. For a very light or stained look to your designs, heavily water down the paints and apply with a sponge or a light hand with the paintbrush. ## Painting a Liner The same paints used for decorating a cover can also be used to paint a liner.
Crayons, permanent magic markers, and indelible inks can also be used in the process. Do not use watercolors, as they will run if the liner gets wet. If a waterproofing sealant is applied to the surface, marking pens can run or give a halo effect of a yellow or greenish cast around the darker marked areas. Water repellents like Scotchgard, which come as an aerosol spray, might be better than Behr or Thompson, which are of a heavier consistency when applied with a brush or sprayer. Pattern designs for a liner are generally geometric, are taken from rawhide or parfleche designs, and are considered a woman's art. Other designs are of heroic or war exploits of the man. These can depict buffalo hunts, horse stealing, landscape views, or any pictographic designs. Not found on liners are the war bonnet design or sunburst. They are found on buffalo robes, which would be hung up behind the owner or act as a liner themselves. In the Laubins' book on tipis, such a robe is hanging behind them in color plate 7. Today, many people have painted similar designs on their liners at the lift-pole area. Eighteen-foot interior of Cheyenne lodge built by Darry Wood. Owned and decorated by Linda Holley in 1978. Before painting a liner, research any design that you are considering. If trying to get the historic look, stay simple and geometric, with a light touch on the colors. The old rectangular liners had patterns running around the top and then in groups going from top to bottom. Designs did not follow the poles or seams, as the old liners did not have vertical seams like today's liners. Some Blackfoot liners not only have a pattern running horizontally across the entire length of the top, but also three more rows evenly spaced parallel to the top row and then vertical patterns about every 2 feet. The Smithsonian and National Museum of the American Indian have several examples of tipi liners showing geometric designs.
For placement of designs on a fitted liner, make a pattern showing all vertical seams or where the panels join. Plan your design around the fact that there will be beds, boxes, and other goods piled up around the bottom of the liner. It is not necessary to go all the way to the bottom with a pattern or down every seam or pole. The designs can also be a combination of geometric and pictographic. Drawings can be traditional or based on happenings in your world or past experiences. What goes on or in your tipi is your choice. Pictograph-painted inner liner depicting war exploits. View of liner with all materials in place. Rope liner tipi made by Spring Valley and painted by Stephen Jarrard. ## Beaded Liners For the most part, beaded liners are found in Cheyenne, Arapaho, and Crow lodges. These liners are rectangular panels of muslin, with rows of horizontal, single-lane, colored beadwork inserted every so many inches with tufts of wool yarn or sheep's wool. The women who made the liners above were part of the Cheyenne Women's Quilling (Marriott 1956) or Beading Society (Coleman 1980). At the top of these panels was found a row of beaded and quilled rosettes with sheep toes and colored wool hanging from each one. These special liners were used during the summer or on special occasions. Some tipis could have two or more liners or bed curtains hanging around the full perimeter of the lodge. Above and below: Cheyenne-beaded liner. Cheyenne beaded-liner drawing based on James Mooney's accounts. ## Decorations or Ornaments on Tipis Ornaments are materials added to the surface of the cover. Examples of these ornaments are quilled pendants, beaded rosettes, horsehair tassels, dewclaws of sheep and deer, and the tin tinklers of the Sioux.
Billy Maxwell of Montana, longtime Native American cultural enthusiast and interpreter for the Lewis and Clark Center, gives an overall description of where the decorations are placed as told to him: "The tipi had beaded discs at the four sides and bison tails hanging from the ear tips. Floratine Blue Thunder commented that ears with bull [bison] tails was a sign that the owner was a giving person. I know that old hide lodges would have a split tail at the top tip from the single split hide that makes the flaps." In 1833, Prince Maximilian described a Minatare or Hidatsa tipi in his travels: "Each side of the entrance [was] finished with a stripe and rosettes of dyed porcupine quills very neatly executed" (Maximilian 1906, 61). Details of the medallions are described by Hassrick (Hassrick 1962, 212) and Kroeber (Kroeber 1983, 60) as being made of quill and beadwork and attached to the cover in a ceremony. Hassrick takes his information from the Crows living in the 1950s, while Kroeber's information, from the Arapaho, is from fifty or more years earlier. Rosettes were usually anywhere from 4 to 8 inches in diameter, and often a lock of hair or fur was inserted in the center. Four rosettes matched, but the fifth one, attached at the back of the cover just below the tie point on the lift pole, was usually bigger or of a different design and shape, trapezoidal or rectangular. Rosettes were placed in a southeast, southwest, northwest, and northeast direction if the tipi was facing east, at a height of about 6 to 8 feet. This height varied with the makers. Cheyenne, Arapaho, Kiowa, Gros Ventres, Shoshone, and Ponca tribes used these and similar types of adornments. Drawn examples of the Kiowa, Cheyenne, and Sioux are shown in several sketchbooks. Photographs taken from 1880 to the 1940s also give ample evidence of their use among the tribes.
Some old photographs also show a lock of hair or fur sewn to the bottom of the rosette on the sides of the tipi, rather than from the center. Rosettes and dangles, referred to as stars, for outside of tipi. They are put on at the four directions. Hassrick and Kroeber go on to describe the dangles or tinklers of the Sioux and Cheyenne, which are applied to the front of the lodge, smoke flaps, and door: "A well-appointed tipi required additional embellishment with 'tipi front quills.' These dangles made of short strips of rawhide wrapped with brightly colored porcupine quills to which were attached horsetails hung from their center might be attached to the tipi to the height of a man's head and at equal intervals" (Hassrick 1964, 212). "Arapaho and Cheyenne applied these quilled pendants, arranging them in two vertical rows on the front of the tent and edge of the flaps" (Kroeber 1983, 60). The crafting guilds for women of the Cheyenne and Arapaho, called "the Trade Guild of the Southern Cheyenne Women" (Marriott 1956, 1) or "the Cheyenne Women's Sewing Society" (Coleman 1980, 50), came about as a fulfillment of a vow to acknowledge the good deeds of the husband or other desired happening. The first written observation of this type of tipi was in 1811 (Coleman 1980, 50). Four types of special decorative beadwork were made by the members: rosettes for the tipi cover, pillows, wall (describing a liner), bedspreads, and doors (Marriott 1956, 20). Marriott goes on to describe the beaded/quilled ornaments used on the four different styles of lodges. It is not recommended that these covers be copied, as they are still made by members of the guilds. A brief description is taken from the article: > Tipis were of four types, again depending on the amount of work involved. 
The first type, which brought the least honor to the maker, had a row of vertical stripes of beading down the back, four medallions on the sides—one at each quarter—and a large medallion at the juncture of the smoke flaps. From the top of each smoke flap to the point where it joined the tipi proper were five tassels, similar to the ones on the wall, but having three buckskin strands instead of two. From the bottom of the smoke-flaps to the top of the door hung six more tassels. There was also a tassel in the center of each medallion. The tassels on the tipi were flat, so that they would not blow about and get torn. Those on the wall (liner) were round, since they hung inside. The second type was made with four medallions of blue beads and no other ornament. The third type was known as the 'ghost tipi.' On this tipi the medallions were black and yellow, with a black ring around the yellow disk. It had no connection with the Ghost Dance, but derived its name from having once been set up to mark a burial place. The fourth type, the highest and most honorable, had four tassels, each of three buckskin strings and a cow's tail. There was a single medallion at the tipi, bearing the design of a bird with outstretched wings. The bird was worked in red beads against a black background. (Marriott 1956, 20) Decorated Arapaho tipis at St. Louis Fair, 1904–05. Rawhide door and beaded Cheyenne cover, 1910. The women who made these items must have taken great pride in the beauty and craftsmanship required to become members of the guild. The woman's role in the tribe was enhanced by meeting these required skills. It must have been a lovely sight to see the great beaded/quilled cover amongst some of the painted lodges. Only women who had passed the required test could make the rosettes, tassels, pendants, special walls (liners), and pillows. These were not common embellishments and required special ceremonies in the making, just as in a painted lodge.
Sioux lodges also used four rosettes, but the tribes did not seem to have the women's societies, such as the Cheyenne Women's Sewing Society. The main difference between the Sioux and Cheyenne pendants is that the tinkler or dangle is attached to the bottom as a pendant. The Sioux use metal tin cones for their tinklers and the Cheyenne use a dew toe from a sheep or deer. Placement on the covers is about the same, but the designs on the medallions and dangles are different. As the tribes visited and mixed on the Plains, the designs and materials also became mixed. Photographs and existing collected tipis of the Sioux and Cheyenne show the similarities and changes in the construction of dangles, smoke flaps, and medallions. A tipi made by a Cheyenne family or group could be gifted to a Sioux family or group, and then that tipi was decorated by the new group, taking on different properties than when it was originally made. Now it is hard to tell whether the tipi is Sioux or Cheyenne. And then there is the intermarrying of families, which also influenced tipi making. In addition, the Sioux and Northern Cheyenne often camped together, further adding to the tribal mixture. Streamers and ribbons have been featured on the tips of many lodge poles. These are often shown in the paintings and drawings of Catlin and Alfred Jacob Miller. Frank Blackwell Mayer made drawings of Sioux lodges in the Little Crow village in 1851, which did not show such streamers (Mayer 1906, 112). In later photos of Indian villages, the streamers are also absent. It would seem that ribbons or colored cloth on pole tips is a very optional decoration. Tribal fairs and Wild West shows became popular toward the end of the nineteenth century, and tipis displayed more streamers. Streamers show wind direction in areas where there are swirling winds or variable wind directions, like east of the Mississippi. Materials of red cloth or any color should be colorfast.
There is nothing like a cover where ribbon, tufts of hair, and strips of leather have bled, leaving colored streaks down the sides of the cover. Today, there are all kinds of colored ribbons, cloth, and feathers for the four directions. On occasion, horsetails on a lift pole can be seen. Anything goes, and does, along with all kinds of reasons for doing it. That is OK—use whatever works for you. I use yellow streamers because they look great and complement my painted cover. ## Making Dangles or Tassels for a Tipi Dangles or tassels, as some tribes refer to them, are decorations or symbolic icons attached to tipi covers, liners, doors, and smoke flaps. They are also pictured in old-time photos and ledger drawings. In the past, they were usually seen on Cheyenne (Tsistsistas), Arapaho (Hinanina), Shoshone, Kiowa, Bannock, Hidatsa, Crow, Ute, and some Ponca tipis. Today, you can see them on almost any tipi cover. According to traditional beliefs, as the dewclaws "clicked" together, they imitated the sound of deer or buffalo. From my own experience, the dangles make very little noise except in a big wind, and even then it is not much of a sound. Today's dangles seem to be just decorations. Tipi covers can have as few as ten or upwards of one hundred or so. It all depends on where you want them located. Traditional standards have dangles on set areas around the covers, doors, and linings. These areas are along the smoke flaps, the front on either side of the lacing pins, and down the back, where they are placed in two vertical columns. Another place is along the top of the door covering, which is mostly seen with Cheyenne lodges. Three different styles of beaded Cheyenne doors. ## Materials for Dangles and Tassels ### Traditional Materials * Old strap or harness leather that is approximately 1/8 inch thick for the base. Sometimes you will see rawhide. * Porcupine quills, leaves of cattails, cornhusks, and sometimes bird quills.
* Colors: aniline dyes; roots of horsetail plants for the black; natural earth pigments; insects; and animal urine. * Dewclaws of deer, sheep, elk, and buffalo. The toe itself is rarely used. * Pieces of cloth, soft leather for ties, and sinew. * An awl or a small piece of pointed steel or iron you can heat up to burn a hole in leather. * Tufts of wool, horsehair, yarn, frayed cloth. ### Modern Materials You Can Find Today * Rawhide or leather harness found at a leather supply store. Some people use old milk jugs or plastic containers. The latter are not recommended, as they break apart with age and develop sharp edges that cut into the covering of the lodge as it is folded for travel or storage. * Quills, leaves of cattails, cornhusks, and my favorite, raffia (a grass-like material found in almost every craft store that comes in several pre-dyed colors). * Colored synthetic (artificial) sinews. They come in red, green, blue, white, yellow, and natural. * Dewclaws, or toes of deer. * Pieces of blue or red cloth, soft leather for the ties, artificial sinew, or real animal sinew. * Steel awl, ice pick, or a hole puncher. * Tufts of real sheep hair (wool), horsehair, or acrylic yarn. Real, undyed natural wool does not sun-fade as fast as fake yarn, but there are exceptions to the rule, and all this depends on the dye methods used. * Good pair of scissors and/or shears for cutting cloth and cutting heavy leather. * Matte polyurethane available at a hardware store or art supply house. Materials for dangles. ### Colors * Traditional dangles used very basic colors: red, white, blue to black, green, and yellow. * The Sioux used red, yellow, purple, or faded blue. * The Cheyenne used red, dark blue to black, white, and yellow to light orange.
**The Cheyenne had special meanings for colors:** Bright red—war Yellow—women and buffalo calves (fertility) Black—discolored bodies of dead enemies and the night sky Blue—power of the creator White—snow, potential for rebirth and east direction Dye your materials with Rit Dye, or better yet, go to an arts supply store and buy Batik dyes, which are sunfast and colorfast. They cost more, but do not bleed or fade as much as the Rit does. If you are using the Rit Dye, add some vinegar or salt to the bath to help set the color. Rinse completely and then let it dry. When coloring your own wrappings of wool or horsehair, be sure to check for colorfastness. Streaks of red or blue running down the sides of your nice white cover after a rainstorm are not good. A. Sioux tinklers, quill wrapped. B. Northern Cheyenne dangles with sheep toes. C. Plant-fiber-wrapped dangle. Dangles or tinklers wrapped on harness leather or rawhide. ## Construction of the Dangle/Drop * Choose the area on the cover, door, or lining where you want the drop attached. * Decide if the dangle will have one to three "legs," how wide and long they should be, and what style (Sioux? Cheyenne?) to follow. (See drawings for sizes and shapes.) * Gather your materials. The base of the dangle can be rawhide or strap leather (or rawhide dog chews flattened out). * Cut a template pattern for the dangle. This will save you time and material when making multiple pieces. Make the template of cardboard or heavy paper that will hold a shape for outlining several times. * Cut out the dangle forms using heavy shears, scissors, a rotary blade, or a good knife. If you are cutting out a two- or three-leg dangle, you might want to cut out a little more of the leather in the middle of the legs to leave room for your wrapping material. * Take a leather hole puncher or traditional pointed hot iron and put your holes at the top and bottom of your drop.
Don't get too close to the edges, or when you pull your ties with toes through the opening you could break the hole open. * Prepare the materials for wrapping the dangle. When I wrap, I like to start at the top and go to the bottom of each leg. * See illustrations for wrapping and start your project. * To protect my work, I use a modern convenience that makes my material last longer and gives it a "quilled" look. I dip each dangle in matte polyurethane and set each aside to dry. Build a little tower or drying rack by turning a stool over, wrapping string around the legs about 1 foot up, and then hanging your work on the strings to drip dry. Wire or paper clips can be used to dunk the leather. The urethane soaks into the wrappings and the rawhide to give it some protection from the elements. You may have to ream out the holes if they cover over, but that is easy with an awl. Also, this polyurethane covering helps cut down on any bleeding of the dyes. * You could also use synthetic sinew to do your wrapping. This will not bleed and gives the same look. No need to protect from the elements. * Dewclaws, or toes, should be cleaned. If you let them soak in warm water for a few hours, they are easier to clean, trim, and drill. Be careful, as any sharp edges can cut your cover when folded and stored. Some people take all the dangles off between uses. This can create more wear and tear on the cover unless the tipi does not go up very often. * Attach decorative materials to the dewclaws and then attach them to the dangle with a soft lace of buckskin. My preference is brain-tanned buckskin lace, as it seems to hold up better in outside conditions. But again this is up to you. * Lace the top ties into place on the cover, smoke flaps, liner, or door. Tie a knot on only one end, put the lace through the top hole, and then pull it through. When you get them all done, you are ready to attach them to the surface of the cover, door, or liner.
* Most dangles are attached to the cover through the canvas. This means you have to put a hole or lots of little holes in your cover. If done right, the leather lace will fill in the hole and rain will not get in. Mark where the dangle will go. Use a sharp awl to make a hole that does not tear but spreads the threads apart to place your lace through the material. Then hold a small square of cloth on the inside and go through it with the lace. This acts like a protective barrier to help keep the knot from pulling back out and tearing the cover. Use a piece of brain-tanned leather for a plug to help prevent the dangle from pulling back through the canvas. * Once all the decorations are in place, the tipi is ready for display. * There are other ways to make dangles. The Sioux have a decoration that is called a "tinkler." Tinklers are made of tin cones instead of dewclaws, or toes. The Dakota Tipi Tinkler is very popular for decorations on the covers of tipis, liners, and doors. Directions for making dangles. A. Cheyenne. B. Northern Cheyenne. C. D. E. Sioux Directions for making tinklers. # Transporting a Tipi Once you get a tipi, the next big problem is taking it out of the backyard and attending a campout. How do you get your lovely, smooth, 24-foot-plus tipi poles out of the backyard? There are three choices: racks on top of the vehicle, a trailer to carry the poles, or racks plus trailer to haul all the equipment that goes with the tipi. So, which do you choose? Racks for cars without rain gutters. Here are some considerations to think about as you make your choice. * The laws in the states you will be traveling in for carrying long items on state or county roads, the interstate, or toll roads. * The flags or lights you will need for the overhanging poles to be in compliance with the laws. * Your vehicle's pulling power and whether you can put a rack on the roof. * Your vehicle's turning capacity with a trailer and poles. 
* Where you can buy racks for your roof or bumper-mounted frames for racks, or how you can build them. * Whether your pole butts are facing the front or back of the vehicle. * The weight of the poles. * The height of your trailer or vehicle and the overhangs. * The types of ropes or cords you will use to tie down equipment and poles. * The wind resistance on the racks and trailer that will affect gas mileage with and without equipment. I am not responsible for any damages or accidents you may have trying out any of these ideas! This chapter will not give any specifics on how to build a trailer or racks, but it will give you some ideas of things that have worked or not worked for me and other people. ## Racks First look at the car you have in the garage. Is it sturdy enough to carry poles on top and carry the equipment that goes with the tipi? Don't forget the family and pets. I once knew a person with an old 1969 Volkswagen Beetle who was able to put his 24-foot-long tipi poles on top of the Bug. He used a light luggage rack available at most department stores at that time. The poles were secured in two bundles and tied to the front and back bumpers with ropes. Though he could not get much equipment into the Volkswagen, he was still able to transport his cover, liner, and poles. You can have your poles face whichever way works for you as long as they are balanced. I prefer that my pole butts are toward the front of the vehicle to balance the weight and reduce wind resistance. My first tipi poles were over 30 feet long, and my mode of transportation was a small Toyota Land Cruiser. It was obvious to me that my poles were not going on top of this vehicle. So I opted for a trailer. This trailer was called Moby Dick because it was 28 feet long and approximately 3 feet wide, with a wooden box built in the middle. It looked more like a bluish green steel bridge going down the road. The poles fit great in the racks on the trailer. The only problem was turning.
I had to make a very wide swing. This type of trailer was good for just parking and storing the poles with everything on it. But after a year, I deemed it too much of a problem and decided to go with a van that could carry the poles on top and all my equipment inside. Racks for cars without rain gutters. My next van had special racks built for it that were cradled in the rain gutters. The racks were made of reinforced metal rebar with a large 4 x 4 piece of wood connecting the two metal braces. I also cut my poles back from the butts to 28 feet, which made them lighter and easier to control and set up. There was a rack in the front of the van and one toward the back of the van. In order to cradle each pole individually, the wood across the top had 17 V cuts. In each V cut, or U shape, was a small peg or concrete nail placed in the wood in order to hold each pole. The pole was drilled with a small 1/2-inch hole to a depth of 1 inch to fit on each peg. Then each pole was roped down in place so it would not slide forward or backward. The butts of the poles went forward just 5 or so feet in front of the van with the tips going to the back. Even with the van there was a need for flags and lights at night. When the wood was cut for the racks, there was an angle cut in them that swept toward the back to help with wind resistance. After my divorce, I sold my big 18-foot lodge and van. I then went to a 14-foot tipi, a Toyota Tercel, and a small trailer. Later I purchased Yakima heavy-duty canoe racks for the Tercel and built a little "do it yourself" 4 x 4-foot trailer. The built-in roof racks that came with the car were not sturdy enough to hold poles. On the canoe racks, I made the same type of wood top bracing as on the van to hold each pole. The trailer helped with the overhangs of the poles in travel. It gave me a legal overhang of 4 feet beyond the trailer edge, but I still used flags for safety. Jim Keener from Miami, Florida. 
Fancy rig with bumper hitches front and back, with area for steel lock box. Walt Disney World, Orlando, Florida, Thanksgiving Camp, 2002. Racks mounted with heavy foam padding and tied to the front bumper. Today (three cars and four more trailers later), I drive a Ford Explorer and pull a trailer. My tipi poles are 24 feet long for a 17-foot lodge, and I still use the rack system of the individual pole cradles in wood. And I am on my eighth trailer. Unfortunately, when I built this trailer (which happens to be 12 feet long, 5 feet wide, and 4 feet tall), I did not take into account how tall the Explorer was with the poles on top. If I hit some kind of ditch, the poles go crashing into the trailer. It is also difficult to make turns when they slide across the trailer top. To solve this problem, I built a taller wood support for the back rack. This makes the poles clear the trailer top by another foot. So, if you are going to have a trailer, make sure that your poles clear the trailer by at least 1 foot, or that the trailer is 1 foot lower than the roof of your vehicle. A rack on top of your car is a simple and great way for carrying poles if you do not want to have a bumper system welded to the front or back. Just make sure it fits your rain gutter or has the curved bracing for those cars that no longer have them. Some of the best racks are available at the bicycle and canoe shops, which sell heavy-duty models. Cheaper racks can break or collapse under the weight of the poles. Or you can go to a welding shop and have your racks made to your specifications. But absolutely use a rack. Never tie poles directly on the top of your vehicle. A dented-in roof and broken windshield will convince you on the first bump. David Ansonia's truck in the Alps. You can have special rack holders welded or attached to the front and back bumper of your vehicle. Then the pole racks are bolted on when needed for travel and taken off when finished.
Trucks and vans use these for carrying ladders or heavy equipment. Camper shells will need a rack that wraps around the shell. Some racks are welded under the chassis and wrap around the body, front, and rear, and are connected to the main pole mounts. The basic framework for the racks is an H or Y. Experiment to find what works best for you. I keep inventing new racks and trailers with each new car or tipi.

Size makes a big difference in hauling tipis—the size of the poles, car, and equipment. Each affects the other. If you are not using a frame that cradles each individual pole, spread out the poles or at least put them in two bundles on the outer edges. Do not put them in one bundle in the center. For best wind resistance, do not have the flat surface facing the front.

Poles sticking too far out front can make visibility difficult and can also become a problem at stop signs and when you turn corners. They might catch hotel light poles, hanging roofs, or awning covers. I've known some to hit those old neon lights in motel parking lots. Those lights may be old, but they are expensive to replace! If possible, know where you are going and know the clearance from side to side and front to back. Keep your distance from other vehicles on the road so that a sudden stop by the driver ahead of you does not result in your poles going through his vehicle.

Secure your poles. If your poles are not secured front to back, a sudden stop can send your poles flying. This is one reason I have the holding pins. Try driving and stopping in a vacant lot when you first set up your pole arrangement to see how secure they are.

Do not overload the top of your car. If it is top heavy, it will be prone to rolling over. There is nothing like being pulled out of a creek where you have landed on your side after making too sharp a turn with the extra load. Injury, vehicle damage, and embarrassment can be avoided by paying attention to the weight you are carrying on top.
## Trailers

I have bought or built eight trailers. I now know what I like best. There are the box and flatbed styles. With the box styles, materials can be out of the weather and locked. The open flatbed is only good for hauling poles until you build a box to store equipment. The best trailer is the factory aluminum with dual axles because it is lightweight and it stays level without the front being on a jack or on the hitch. Size depends on your budget and how much you carry. If you are imaginative, you can take some of the lawn trailers and build your covered box. Or you can go to a welding shop and design your framework for size and length of a box. Small "do-it-yourself" trailers can be ordered through the mail and delivered to you for assembly.

Tipi owner Guy Pazzogna Vaudois shared his driving experiences and problems in Europe and the United States with me:

> After having spent more than twenty years driving trucks carrying poles in the Dakotas, Nebraska, Wyoming, and Montana, it appeared to me that the best way is to bundle the poles together so that they form a single longitudinal beam trailer: one end with the hitch and at the other end a shaft (the simplest possible).
>
> This setup allows for having the poles the closest to the ground. In France, where I am, toll is a function of the height of the load. Under 2 meters height, no change in toll, but 9 to 10 meters, there is a toll.
>
> In France, the load must not be wider than the vehicle by more than 20 centimeters on each side. Load can be longer up to 3 meters at the rear with signal, or 1 meter without signal. In the front, load cannot extend farther than the bumpers. Apparently in Switzerland, poles can extend by 1.5 meters both at the front and the rear.
>
> How is it in other European countries and in North America?
> No technical inspection is required in France for a trailer less than 500 Kg GWT; no regulation either for its construction if length is less than 12 meters (+ 1 meter or + 3 meters) between hitch and rear lights. The overall carriage (truck and trailer) shall be less than 18.75 meters total.
>
> I own a Chevy Suburban 4 x 4, 5.5 meters long. I can carry my 13-meter-long poles without any problem in a trailer less than 500 Kg. When loading the poles in the trailer and passing them over the truck roof, height limitation becomes a problem.

There are many types of cars, trucks, and vans that can carry poles. With today's rising gas prices, it may be necessary to decide which will work best for you and still let you transport your tipi. Using a vehicle alone or with a trailer is a decision you have to face. Hopefully some of the suggestions and information given here will help you make your choice.

Jeff Mos (NightHawk) of Tipis from Africa with his Citroën Xsara Picasso pulling a trailer for his 6-meter poles (16-foot tipi). Beerse in Belgium at the 2005 Beer Valley Rendezvous.

Above and below: Specially built steel trailer with racks on top, pulled behind a station wagon. Poles are 24 feet long. Side and back doors with covered open access on top. Linda Holley's fifth trailer for heavy-duty storage of poles and equipment.

National Powwow trailer, poles fit with butts in the front box, then going back.

# Tips on the Care and Buying of a Tipi

Here are some general tips on maintenance for the tipi or tent owner:

* To increase the longevity of the canvas, keep the canvas dry.
* It is best not to ever "wash" your canvas. It takes out all the waterproofing and other protective chemicals you might have on the cover. Just keep it as clean as possible.
* If your canvas was treated and you have cleaned it with soapy water or some other chemical to get rid of mildew or dirt, you will have to re-waterproof your canvas.
* When taking down your tipi from a dusty encampment, always take a broom to the inside and swat outward to knock off the dust and dirt on the surface. It is like beating an old rug on a line. Then shake it really hard on the lift pole. Once you have beaten out the dust, you can start the takedown.
* Do not store canvas directly on a concrete floor. Concrete stores moisture and will rot the canvas or cardboard box (if in one).
* Do not take the cover down or store it away while it is wet. Moisture will cause the canvas to mildew, which will eventually lead to the canvas rotting. Never put a damp tent into storage. If you must take your tipi down when it is wet, set it back up at your home or next location as soon as possible so it can dry completely.
* Keep canvas taut at all times and smoke-flap poles snug up to the tipi. Go inside your tipi and push out any poles that have shifted in the frame. Choose poles for the smoke flaps that are smooth with no rough spots to snag the canvas (refer to the Poles, Pole Care, and Pole Maintenance chapter).
* Treat the canvas with approved preservatives for cloth. Canvak and Mildex are preservative liquids specifically made for canvas. They help with waterproofing and mildew resistance of cloth materials. Behr or Olympic water seals can also be used, but read the label before application.
* UV rays from the sun are destructive and will degrade the canvas. So, if you are not going to be using your lodge, store it in a clean, dry, and critter-free area. When left up year-round, a tipi will typically last five years (depending on the climate). Storing a tipi when not in use will double its life. Some tipis can have a longer lifespan if the utmost care is taken.
* It is important to occasionally air out the tipi. To reduce the humidity inside the lodge, build a small fire as this will dry out the air. The small amount of tannic acid and other chemicals that are deposited on the canvas from a fire tend to repel mildew.
This process is kind of like smoking a ham, which helps the tipi last through the winter. UV exposure also helps reduce mildew, but it will deteriorate the canvas with time.

* This is a homemade recipe for fire-retardant liquid that can be applied to canvas: 9 ounces 20 Mule Team Borax; 4 ounces boric acid; 1 gallon lukewarm water. Combine all and spray or paint onto canvas.
* In the Northeast and other locations downwind from power or industrial plants, acid rain is a problem. When it rains, diluted acid showers down on your tipi. If this happens, hose off your tipi with plain water as soon as possible. This will help dilute any acid that remains. Dry the tent well before folding and storing it.
* Do not store your ropes with your canvas. Ropes are sometimes treated with oil and they will leave a stain on the surface of the tipi.

## Making Your Tipi Mildew Resistant

Mildew is one of the most common problems that can attack your canvas. Even if your tipi is made of a fabric that is mildew resistant, it will mildew if left damp. Mildew is very destructive to cotton fabrics as well as other materials. Mildew usually forms when damp tipis are put in storage. It can also start under certain conditions of humidity and temperature.

If mildew has started to grow, it can be stopped from spreading by thoroughly drying the tent, preferably in the hot sun, and applying a cleaner such as IOSSO Tent & Camping Gear Cleaner. This cleaner is made to remove tough dirt and mildew stains. Afterwards, you will find it necessary to treat the tent with a water-repellent compound such as Canvak. If using another treatment, be sure to read the labels to make sure it is safe for use on canvas. IOSSO cleaner removes tough dirt, mold, mildew, algae, most food and drink stains, and bloodstains. It is color safe and may be used on most fabrics, vinyl, plastics, canvas, carpeting, and wood.
It is biodegradable, nontoxic, and does not contain bleach or chlorine products, which degrade fabric fibers.

If you get mildew on your canvas, here are some home remedies that might work. But there are always problems with any remedy.

For mildew, spray the canvas with a 50/50 mix of white vinegar and water and let it dry in the sun. This will kill the mildew but won't remove the spots. This method is only used as a last resort!

Try scrubbing with a brush and a bleach solution of 1/4 cup bleach in a 2-gallon bucket of warm water. Let the area dry completely. This will kill the mildew and stop it from spreading, but will not remove the discoloration. Make sure to use lots of water; bleach will destroy the canvas if not diluted.

In about 8 ounces of water, dissolve 2 teaspoons of salt and 2 teaspoons of concentrated lemon juice. Wash the mildewed cloth with this solution and then rinse with fresh water. Let dry in open air before using.

Do not use any product like X-14 that has a high bleach content. It will shred or disintegrate the material, which will get rid of the spots and the canvas.

If your tipi is really dirty, get a pressure washer. Set it for a very light spray; you do not want to put a hole in the canvas. Set the cover up on the poles and spray from the inside out.

## Making Your Tipi Water Resistant

Canvak is an easy-to-apply canvas preservative. Its properties prolong the life of older tipis. It contains mildew retardant as well as an excellent water repellent. One gallon covers approximately one hundred square feet. Canvak is not recommended for synthetic fabrics. This product is to be used after you clean the canvas or when treating material that has not had prior waterproofing treatment. After any treatment has been applied, make sure the tent dries completely before putting it into storage.

Scotchgard is a water-repellent aerosol spray that can be bought at most house and garden stores. This is a quick repair for leaks.
Here is a homemade waterproofing formula: Mix 3 cups soybean oil with 1 1/2 cups turpentine. Paint it on your tipi and let dry. Reapply after a year or two of hard use or outdoor exposure.

After cleaning with any product, you will need to re-waterproof your canvas. Re-waterproof on the same poles you use to set up. Make sure it is a warm or very warm day and the sun is out. Warm days help to heat up the chemicals in the waterproofing materials, making the application easier to apply and helping it penetrate the canvas surface. With a garden sprayer, put the chemical application on the cover from the outside. Work from the top to the bottom. Do not do this in your yard as it kills all the grass. Put it on a driveway or a vacant lot and treat it there. Avoid blacktop as it will stain your canvas. With heavy use, it is a good idea to re-waterproof your tipi every couple of years. All chemicals will wash or wear off in time.

## Security

How do you keep people out of your tipi? The Laubins described the idea of crossed sticks over the door (Laubins 1957, 91). In the historic pictures I have seen, sticker thorn bushes, animal hides, and large wooden boxes or crates are shown.

## First-Aid Kit for Tipis

Always have a first-aid kit for yourself and one for your tipi. Having been in two tornados and two floods, and being the queen of all klutzes, I am prepared for all emergencies. You need a first-aid kit for doing emergency repairs on your tipi as well. Sometimes you are camped in areas or places where getting materials is impossible. You must rely on yourself to keep your tipi in good living condition. The kit will grow or decrease as your needs change. For quick access, put materials in a nice parfleche box or some type of container.
Here are some items you should keep on hand to repair your cover, liner, or door:

* Duct tape to quickly patch a hole
* Extra pieces of canvas for patching
* Elmer's Glue, which will attach canvas to canvas
* Quick-set glue
* Real sinew
* Awl
* Extra marbles
* Long pieces of leather
* A few steel nails
* Extra ties
* Extra rain pegs
* Extra lacing pins
* Glovers (canvas) needles for sewing canvas
* Sewing needles
* Thread of various thicknesses of cotton and fake sinew
* Swiss Army knife
* Scissors
* Matches and candles and clean wipes

## What to Look for in a New or Used Tipi

With all the new and used tipis for sale, you need to know which tipi fits your needs. Before making that first big buy, ask yourself these questions:

* How many people will be using your tipi or will it just be you? Consider the kids, spouse or significant other, close friends, and the dog.
* Where do you intend to camp? Rendezvous, powwow, backyard, nearby state park, private land, show or demonstration?
* Do you plan on doing long-distance driving with your tipi? This will affect the size of the tipi/poles and equipment you can take.
* Do you anticipate pitching the tipi and leaving it up for long periods of time, such as three months or a few years, or do you intend to use it for just weekend campouts?
* What state or region do you live in? Climate in your area may affect your choice of material.
* Which season do you anticipate using it the most—fall, winter, spring, or summer? These all bring different problems with them depending on where you pitch the tipi.
* Where will you store all the materials, such as poles, the tipi itself, and support gear?
* How often will your tipi be used?

### Material

Generally, most tipi makers today offer their tipis in 13-ounce Sunforger, which seems to be the industry standard, although a few good quality makers still use 10-ounce Sunforger. I would get the heavier material unless you think the overall weight could be a problem.
The 10-ounce is adequate for the liner. Along with the new acrylic blends of material, some makers offer an 18-ounce and even a 20-ounce highly treated canvas for long-term setups. I wouldn't want a tipi made of these materials unless I intended to pitch my lodge for several months or more. I think it would be a mistake to go with one of the cheaper canvases, such as single-fill canvas, to save money. Use the best quality canvas, even if that means a delay in purchasing the tent until more money can be saved.

### Construction

Each tipi maker has their own areas of reinforcement for certain wear and stress areas of the cover. These areas can include the tie point, door, gore, smoke flaps, and smoke-flap pockets. Problems can occur later if the following areas are not properly sewn:

* Some type of binding or overlay material, flat cording, or biased seam binding of 1 inch x 12 inches should cover both sides of the tie point. It should cover over the top of the gore to help protect it from the abrasions of constant movement or wrapping around the poles.
* Smoke-flap pockets need to be at least two thicknesses of canvas. This will help prevent the smoke pole from pushing through. Heavily sewn pockets will reinforce the seam from the constant pushing of the pole.
* The corner of the smoke flap where the pocket is attached should have another piece of canvas sewn in and should be over-stitched a few times to take the stress of the smoke pole. Many tears happen where the pocket attaches to the flap.
* Door openings get the most use, so they should have a biased seam binding or other type of cording sewn all around or turned several times to make a smooth, thick seam. Leather can be used for this purpose, but it doesn't last as long with constant sun and/or moisture exposure, which leads to either cracking or not drying out, which creates mildew.
* Lacing-pin holes should be sewn so that they do not tear or spread out with use.
Buttonhole styles are OK, but they do not give a tight fit around the lacing pin. Some tipi makers who do not reinforce the holes use extra material inside the lacing-pin area or material that does not stretch.

* Any of the types of sewn seams will work on the cover, and there appears to be no best seam for keeping out water or for durability. Straight stitch or zigzag—they all work in the give-and-take movement of the cover or liner using canvas. The only stitch not recommended on a tipi is a "chain stitch"; since it is a one-thread stitch, it will come out if pulled. All other stitches are interlocking with a top and bottom thread.

### Repairs and Warranties

In choosing a tipi maker, look for the quality of the workmanship and ask others what they like or dislike about their lodge. Ask what tore out first and how it was repaired. Some companies will fix their lodges for free while others charge a fee. This does depend on how, where, and what was damaged on the lodge. A warranty will not cover a burn hole in your cover from a lit cigarette thrown from another car into the back of your open trailer. But the damage can be repaired for a cost.

### Size

Sizes of tipis also differ with each tipi maker. For an 18-foot tipi, measurements can vary by as much as 1 to 2 feet smaller or larger. When ordering a tipi, ask what the finished measurement of the cover will be from the bottom back to front when pitched. It does not matter what the distance is from the bottom front to the smoke flaps, or from the bottom back up to the lift-pole area. Side to side should be within a foot or two of the back-to-front dimension. How much head room you have will be determined by the length of the lift and the steepness of the front. If your tipi is tall, you will have good head room; if it is short, you will have less head room but more storage along the sides.

### Used Tipis

Finding a good used tipi poses different problems than does buying a brand new one.
Here are a few things to look at and question if you are considering a secondhand tipi:

* Who was the maker? It is easier to contact a company for repairs and additions than an individual who made it for his or her own personal use. But sometimes those made for personal use were done with lots of loving care, so it's a toss-up.
* How old is the canvas and how has it been stored? Dry rot can cause canvas to fall apart, so pull on it and see how it holds up. Some canvas can last thirty years if properly stored.
* What has been done to the canvas? Does it need re-waterproofing? When was the last time this procedure was done?
* Ask about usage. Where did most of the camping take place?
* Unfold the cover, liner, and door. Is there any damage, such as mildew, tears, or dirt?
* Are all ties or marbles in place? If not, can the tipi be repaired? If there are several tears around the bottom, then it is starting to rot and it will keep on tearing.
* Are lacing-pin holes intact?
* Is the door bottom intact? Has it been fixed by some reinforcement?
* Are smoke-flap pockets intact? Are there signs of popping through?
* Is the tie point or lift tab attached securely to the cover? Is the cording or rope for the lift pole in good shape or can it be replaced?
* Does the door fit the opening area and are all ties and wood in place? Is the lacing-pin area of the door torn or worn through?
* Is the liner a rope or pole setup?
* Does the liner have all ties at the top and bottom and are they long enough to tie to the rope or fit around a pole?
* Is the bottom of the liner rotting out from mildew or extreme use?
* Does the liner fit the cover? Some liners are too small or too big for the cover.
* Is the tipi decorated with paint or beadwork? Is the paint cracking? Has the cover been re-waterproofed after it was painted?
* Do you like the designs and does the overall style fit into your use? It is impossible to repaint a tipi or take paint off.
* Are the dangles and beadwork in good shape or repairable if you intend to keep them?
* Does the tipi have a door? Check on how it is placed on the cover for wear and tear. It may have to be replaced or re-waterproofed.
* Are the poles rotted or breaking apart? Stand one straight up and then shake it vigorously. Even though the poles may look good, some do not show damage until they are shaken.

A good used lodge will fit your needs and last longer than a week or two. Do not be afraid to ask questions before buying. Being an informed buyer will save you time and money.

# Today's Tipi Encampments

There are many large tipi encampments held in the United States. They range from primitive to more modern camps. If you like dressing up to play the part of an old-time buckskinner or mountain man, then go to a rendezvous or Buffalo Days encampment. If you like to just camp, there are private encampments, powwows, or lodge owners at Disney World. Or there is your own backyard. Finally, you can get a group of people together, find a spot, and camp. Groups in every part of the United States sponsor some type of tipi camping. Look on the Internet for information or ask a company that makes tipis about encampments. You may want to try out a few different types of encampments to see what you like.

NMLRA Western Rendezvous at Dubois, Wyoming, 1994.

## Rendezvous

The National Muzzle Loading Rifle Association (NMLRA) and, later, the National Association of Primitive Riflemen (NAPR) started the early 1960s/1970s events that became very popular with the historically inclined tipi owner. These organizations combined their events in the late '70s and early '80s until the NAPR went under. Today the NMLRA has formed a new group called the National Rendezvous and Living History Foundation (NRLHF) to support primitive camping and muzzle loading.
They sponsor regional encampments all over the United States, such as the Eastern, Northeastern, Southeastern, Midwestern, and Southwestern Rendezvous. Each of these events has its own style or regulations for tipis, white tents, and tarps.

In the late 1970s, near Tampa, Florida, the Alafia Rendezvous was started by a group of people who wanted to camp in their tipis and have a good time shooting and cooking in the style of the nineteenth century. Today it is the largest encampment in the Southeast. It meets the third weekend in January. It has expanded in the last twenty-five years from the original two tipis to over one hundred tipis and twelve hundred other period camps and tents.

Other large rendezvous are the Rocky Mountain Nationals, the High Plains Regional, Pacific Primitive, White Oak Society, 1838 Rendezvous Association, and many others too numerous to list here. It is now possible to search for a rendezvous camp near you on the Internet. Some people travel every weekend during the spring and summer to these rendezvous, either staying a weekend or for the whole week. Rendezvous wind down during the winter and then start back up in the early spring. If you don't like going to big events, there are always the smaller ones.

Always check rendezvous rules before going to an event. Many rendezvous are very strict on their camping guidelines. Some regulations specify what clothing, tents, and materials you can bring into camp while others just ask that you be discreet.

Interested cow in John Neidenthal's 14-foot tipi at the Buffalo Days Camp, 1980, Charlie Knight Ranch.

Blackfoot tipis at Fort Benton Fur Trade Symposium.

## Buffalo Days Camps

Buffalo Days camps are not rendezvous. Rules are strictly enforced. Tipi encampments follow the old Native American traditional ways of the 1870s. In order to attend this event, a committee must jury/judge your equipment, and then an invitation is given. They are usually held in the month of July and in a western state.
The camp goes on for approximately a week and a half to two weeks. At Buffalo Days, horses are provided for a rental fee or you can bring your own. However, you and the horse must be in traditional attire. Sometimes there is even a buffalo hunt, along with raids on the cavalry and nearby settlements. Each group tries to be as authentic as possible. There is a welcoming group dinner and all participants come dressed in their best finery. These types of camps are held out West, where there is room to roam and room for the mock battles and trail rides.

Ken Weldner, Curtis Carter, and Armin in Buffalo Days tipi.

Curtis Carter Buffalo Days–style lodge.

## Powwows

The basic powwows, as we know them, have been going on since the early twentieth century. Crow Fair in Montana is said to be the largest tipi encampment in the world. It is held in August along with the annual Crow Fair Rodeo. This event lasts for a week, with dance contests, a daily horse parade with participants in all their finery going through camp, and over a thousand lodges. With all the families getting together for cooking, talking, and dressing for the dance, this is definitely a spectacular eyeful. If you wish to go to this, make sure you call ahead and make arrangements. It is possible to camp there with your tipi or get a hotel room in nearby Hardin, Montana.

At many other powwows today, you will see tipi camps in which whole families may participate. In Oklahoma, Montana, the Dakotas, Wyoming, and other western states, people come together for the major family dances. At the larger contest dances, the emphasis is not on tipis but on dance. There used to be contests for tipis at powwows in which ribbons and prize money were awarded. Tipi contests have declined over the past thirty years. This may be due in part to the rising cost of gasoline and many people's switching to smaller, more economical cars. There are still a few small dances, and the National Powwow is held every three years.
Spring Valley Lodge, owned and decorated by Stephen Jarrard, Orlando, Florida, powwow, 1998.

Roy Martin's tipi at National Powwow.

If you go to a dance, be prepared for the tourists, who might come into your lodge whether you are there or not. Tourists think the tipis are there for their enjoyment and do not realize that these are the lodgings of people who are participating in the dance. You might have to rope off your lodge, like I have many times, to keep uninvited guests from the inside of your tipi and to keep people from touching the outside of your tipi. You will be asked a myriad of questions, so, if you are so inclined, you can explain what a tipi is, how it is set up, and even give tours inside. Most of us are very proud of our lodges and do not mind showing people around. But there are horror stories of tourists just popping into a tipi when the people were sleeping or of tourists sitting on the portable potty.

Check out a dance before you go, making sure that there is room for your tipi, whether there are facilities or bathrooms, and what kind of security there is for your camp. Many dances do not have an available area for tipis or even one for camping in general, so you have to get a motel room.

Dance powwow.

Interior of John Neidenthal's lodge. Tipi made by Linda Holley.

## Modern Day Camps and Disney World

You do not have to get dressed up for dancing or shooting to go camping. Some people have a few acres set aside and invite friends over to camp in their lodges or bring a tent. It's just good, old-fashioned fun: camping, cooking, and enjoying your family and friends. General campouts are held all over the United States at special times of the year like New Year's, Easter, or Thanksgiving, or for just friend or family reunions. I have held campouts at state parks, on a friend's sixty acres, and in my own small backyard.

The Thanksgiving Lodge Owners Rendezvous was started originally by John Niedenthal of South Florida.
This rendezvous has been going on since 1971 for family and friends. This is the time when all types of tipi owners gather for one large encampment. The only requirement is that you have a tipi and preregister, because of the limited number of tipis that can attend.

This event started out in a place called Fish Eating Creek in central Florida. Some years later it was moved to the Walt Disney World Fort Wilderness Campground complex after they opened up a primitive camping area. Now hosted by the Butler family, anyone with a tipi can share in the largest sit-down Thanksgiving family event in the Southeast. On one occasion over two hundred people sat down to twenty-five turkeys (which we cook on the site). Who knows how many pumpkin pies, dressings, baked potatoes, and desserts were cooked and eaten.

This encampment lasts for an entire week and gives you the run of the Disney World camping facilities with access to the Disney World entertainment area. There are men's and women's restrooms, with showers right next to the camping area. And if you don't want to stay in camp, you can leave the Disney World complex to enjoy anything in the Orlando area. The only real planned event is the Thanksgiving day turkey dinner, which leaves you time to do anything you want for the week or days you are encamped.

Of all the events that have been listed in this chapter, probably the best one is just setting up your tipi in your backyard, building a nice fire, and looking up from your own smoke flaps to enjoy the starry sky. Invite the neighbors over for a visit. They have probably already seen the poles going up and are very curious about what you're doing. Dig a fire pit and have a barbecue or cookout in front of the tipi and enjoy the view and the company.

Disney World Lodge Owners Camp in Orlando, Florida, 2004.
## Tipi Competitions

When you get a number of tipis together, every so often someone wants to see who has the best tipi. But how do you judge the different styles, tribes, time periods, and groups? This is not easy even for the most knowledgeable and experienced tipi people. What you are going to look for, in most cases, is "what is the lodge owner trying to portray?" Then see if he or she has completed that task inside and out.

No matter the group, there are some general guidelines that can be followed. But keep in mind that these are only guidelines, not rules. Guidelines can and do change because of weather, ground conditions, what the committee wants, and the majority of the tipi styles present.

The next thing you need to do for a competition is pick your judges. Judges can be anyone from experienced tipi owners to beginners. A checklist with ground rules is an essential item to give the judges. The checklists are located later in this chapter.

Judging is a tough job. It can take most of the day if there are a lot of tipis and it is done in a thorough manner. Plan for transportation for your judges if the tipis are scattered over a large area. I've seen everything from cars to golf carts used. Of all the competitions at a powwow or rendezvous, this is the toughest task because the competitors cannot come to the judges—the judges have to go to them.

How do you judge a tipi? Some committees ask for a popular vote from the visitors and registered campers in the tipis or tents. This popular vote doesn't always pick the best tipi, however. Sometimes it just rewards the tipi owner who has the most friends or family. Visitors have a tendency to pick the most colorful but not necessarily the most accurate camp. If going this route, the following guidelines can be used:

* All registered contestants will serve as peer judges and will be asked to judge and rank each tipi in the contest, including their own.
* All tipi owners who wish to participate in the contest must be registered with the tipi camp coordinator/head judge. Each contestant will be given a registration number on a piece of cardboard. Registration numbers must be prominently placed above the tipi door the morning of the contest. Formal judging must be completed no later than noon on the given day and the rankings turned in to the tipi camp coordinator/head judge.
* The tipi camp coordinator/head judge is not eligible to participate in the tipi contest.
* Tipis will be ranked on an individual sheet, then tabulated, and the top tipis will be recognized by an announcement and/or ribbons at the event activities or camp meeting.
* A suggested guide for judging the tipi will be distributed at the time of registration.
* At the start of any tipi judging, doors should be open. If doors are decorated, they may also be swung or laid to the side for viewing.
* Judges can ask questions of the tipi owners. Some questions can be about the purpose of materials in the lodge, time period portrayed, type of tipi, and so on. The judge can determine if the tipi owners understand what they are trying to portray.

Basically there are three major areas of tipi competition: Contemporary, Contemporary Traditional, and Traditional. Whether you are a Boy Scout, Buckskinner/Rendezvous participant, Mother Earther, Powwower, or everyday camper, you can fit into one or more of these areas. Guidelines can be adjusted to fit the time period (depending upon tribe and location) or camp.

**Contemporary tipis** are modern inside and out. Any tipi that doesn't quite fit into the other two categories can fit into this category. Modern accoutrements such as chairs, a table, ice boxes, modern lamps, a bed, and so on can be displayed. The tipis are set up to live in for the weekend or the week. They are not set up to show traditional materials. Safety, livability, and neatness are the primary concerns.
Everything is fitted, including the rain covers, liner, and doors. The cover and liner can be plain or ornately painted.

**Contemporary traditional tipis** date from the late 1940s to the present. Any tipi cover that does not go all the way to the ground and has a fitted liner is in this category. The liner itself is made of rectangular or trapezoidal panels carefully sewn together to match the angles of the poles. For a tailored, streamlined look, a fitted liner ties on the poles/rope at the top and then to a rope/pegs or pole at the ground level. The liner has a specific underturn, or sod cloth, of about 10 to 20 inches depending on the maker. There may also be a rain cover inside. Interiors can have one to six backrests, built-up beds, robes, rawhide parfleches of folded envelopes and large boxes, hanging articles, toys, and anything else beaded or made. Outside can be a set of backrests, cooking utensils, toys, horse gear, and so on, and a covered area for shade for a complete look. This is the show tipi that most people think is the way a tipi looked in the nineteenth century, but which we now know is not true. It is the romanticized version of the tipi.

1975 parfleche trail on back of 18-foot tipi, White House Dance, Ohio.

Reese Tipi made tipis for _Dream Keepers_ and the TNT miniseries _Into the West._

**Traditional tipis** represent the era from around 1840 to about 1920. They can be from any tribe or area where hide or cloth tipis were built. A cloth tipi was considered a status symbol because of the cost, lightness of material, and increase in size of the lodge. This is the time of the American Indian and his travels through the Great Plains into the reservation period. The primary purpose was to live in the tipi, not to show it off at a Wild West show or trade fair. But when on display, the tipi was shown to its finest. There are about six main time periods of tipis seen in competition. Tipis are assigned to three categories for judging.
Time periods listed here are approximate.

**Traditional Period (1800–1860)**—before the main contact with Europeans to the fur-trading times of the West. The covers can be anything from brain-tanned hides to canvas.

**Reservation Period (1860–1890)**—tribes are moved onto reservations. There are very few buffalo tipis; most covers are made of domestic cowhide, canvas, or other cloth materials.

**Wild West Show Period (1880–1920)**—tipis designed to show off the lodges and Native American culture to the world.

**Transitional Period (1930–1960)**—a time of very few tipis, but the tipi was at the beginning of a revival. They were mostly seen at powwows.

**Contemporary or Modern Period (1960s–present)**—new traditional tipis and camps show interpretations of the past.

**New-Traditional Period (1980–present)**—going back to the old ways of the 1800s in building, setting up, and camping in a tipi.

## Tipi Competition: Suggested Criteria or Guidelines

Tipi owners do not need to have the items listed in each category. These are only some of the suggested items they might have in their lodges. This is to jog the judge's memory or add to it.

## Traditional 1800 to 1870s: What You Will Not Find in and around a Plains Tipi

The following is a list of some of the common mistakes people make when they are trying to set up a traditional tipi. If this is your intent, you might look over the list and make sure you are not committing the error of having any of these items. There are probably exceptions to every item, but don't make the exception the rule. The judges won't. Traditional tipis should _not_ have the following:

* Mandelas and dream catchers. These were not part of a traditional tipi or camp. Fancy mandelas and dream catchers came out of the 1960s.
* Lawn chairs.
* Plank wood backrests.
* Modern cooking equipment.
* Leg-bone lacing pins. Lacing pins were wood and not much bigger than a #2 pencil in thickness or length.
* Very decorated pegs.
* Highly decorated streamers.
* Cow or buffalo skulls, or any skulls for that matter. The use of the buffalo skull is for the Sun Dance ceremony and not usually an everyday item in or around a tipi. Skulls are spiritual and generally not for the public view.
* Awnings attached to the front of a tipi. These are buckskinner items that came around in the last twenty years or so.
* Large rawhide boxes. The Sioux were basically the only ones who had the boxes, and these came out in the 1870s or so. Sizes were about 15 x 15 x 18 inches. There were other types of containers, but they are special rawhide items folded in the shape of a box.
* Large wrought-iron cook sets inside the tipi. Remember, you are cooking in your bedroom/living room unless it is winter and wood is scarce. You can have a small tripod for hanging pots.
* Lots of items hanging around the tipi. Unless being used, personal material should be put away.
* Medicine bundles should be put away in their containers or hung in a back area above your head. They can be hung on the top back of the tipi, above the door, or on a tripod outside.
* Door pole out front.
* Fitted/tailored liner. Depending on weather, a traditional camp uses a rectangular cloth tied at its top to the poles or a rope, hung around the tipi about 4 to 6 feet off the ground—or no liner at all.

# Tipis Outside the United States

Ever since the _Leatherstocking Tales_ by James Fenimore Cooper in 1832, the romance and lore of the noble Indian and his way of life have enchanted the entire world. Books from this time period have enticed others to emulate the dress and living structures of the Indians. Many Europeans made their way to the Great Plains of the United States to see for themselves the animals, mountains, and Native Americans. When they returned home, they also brought back with them part of the material culture of the groups they visited, such as shirts, dresses, weapons, bags, robes, and tipis. Some items went into private collections and others into the museums for all to see.
David Ansonia's Nomadics tipis in Switzerland.

In the early twentieth century, Indian clubs started springing up for those interested in Indian culture. This may have been because of the influence of the touring Wild West shows of Buffalo Bill, 101 Ranch, and Pawnee Bill. With the making of the Indian clothing came the desire to live or camp as the Indians once lived. Of all the many types of Indian dwellings, tipis were the structures that the clubs chose to build. The Germans and Czechoslovakians formed some of the first clubs. Then the clubs seemed to spread to the rest of Europe. Currently almost every country in the world has some type of Indian club, powwow organization, American-Western camping, rendezvous, Indian reenactment, or tipi maker. England and Scotland have several clubs that sponsor dances and encampments. Several tipis are always displayed at Glastonbury Festival, a large music festival. Many tipi makers advertise on the Internet with information on their different styles and types. People are transporting their tipis to other countries to participate in Indian and buckskinner camps. But you still need a large area of land for big camps, so the larger countries, like Germany, France, Poland, the Czech Republic, and Hungary, host the big events. Russia also has a large number of Indian enthusiasts and could possibly host an event in the future.

Chakra Tipis from the Netherlands.

TipiVerhuur cover is lifted into place by the smoke flaps.

Special tourist villages made for the tipi camper have sprung up all over Europe. The fascination with the American West and the spirituality of Native Americans have people going to these tipi camps to find themselves or to have a good time. Most of the tipis are highly decorated with bright paints and spiritual designs on their covers. You can stay for a few days to a week and get the Indian experience.
There are also frontier villages that look like Old West towns of the mid- to late 1870s, with tipis scattered around for additional authentic character. France and Germany hold large rendezvous or Indian camps with several hundred tipis. Different nationalities come to these events, sharing their knowledge, showing off their clothing and skills, and camping in the old ways. Looking at the many Web sites and photos will give you a view of the details participants pay attention to for the sake of authenticity. The Hudson Bay Indian Trading Post sells the raw materials for reproduction work to Europe and sometimes the United States. It is also possible to buy exquisitely made beaded and quilled items crafted by highly skilled craftsmen from Germany and the Czech Republic. Some of these works can pass for original material, made with the same materials and techniques used by Native Americans of the eighteenth and nineteenth centuries. The Czech Republic and Slovakia have some of the oldest continuously active Indian groups in Europe. Based on Seton's Woodcraft from 1902, Czech Woodcraft had its first tipi camps in 1913. Interrupted by World War II, the groups or tribes renewed their interests afterwards. After the communists took over in 1948, they strongly curtailed all activity, and in 1951 the Woodcraft League was forced to "voluntarily disband." That did not stop the idea or its dedicated followers. During the forty years of communist rule, Woodcraft and Scouting survived by going underground and hiding under various officially condoned organizations. After the Velvet Revolution, they were free to organize again. The Woodcraft League grew to more than seven hundred members by 1998, and many who practiced woodcraft skills in various organizations flocked back to Scouting. Other than the Woodcraft groups, there are also independent enthusiasts enacting Indian camps. One of these is the Indian Corral, which has a large following on the Internet and in reenacting.
Piers Conway's tipi winter setup in France. Lyubomir Kyumyurdjiev (or White Horse) is the founder and chairman of the Bulgarian Indian Eagle Circle Society, along with Ivailo Grozdev. They describe summer gatherings in the Bulgarian mountains with the wonderful background of secluded mountains, tipis in snow, and beautiful valley views. The seclusion is one thing I have noticed in correspondence with many groups; they like to be away from the crowd and do not let their presence be known to uninitiated people around them. Intrusions from the outside world into their camps are not welcome. This is true of most camps I have been to, even in the United States. But tipi enthusiasts are always pleased to share information with other knowing people, no matter what country they might be from. Eagle Circle participants often dress like Native Americans, eat traditional food, make jewelry, build large tipis, and hold powwows for twenty days in the mountains in order to remember historical events and get in touch with the spirit of the Native American people across the ocean. They're very knowledgeable about the different Native American groups, though they admit that their initial interest was in the romanticized version they learned as children. Over the years, through the Internet and the exchange of information between Bulgaria and America, as well as with other Native American studies groups throughout Europe, they have learned more about the culture and history of Native Americans, and now they know a more accurate "truth." Hungary has Indian clubs, similar to the Czech groups, that have been going on for at least forty-two years. Indian club member Krisztina Szabo describes the tipi encampments with great love. Her husband and son, Csaba, are very involved. She describes an encampment as follows: > Our camp always is in summer in July for two weeks. 
> During this time, we live in tipis, we wear only Indian clothes this time, and we don't use technology, and we try to follow Indian traditions. We have got Lakota, Oglala, Blackfoot Blood, Siksika, Pawnee, River Crow, Mountain Crow, Wild Crow, Hidatsa, Hunkpapa, and Cheyenne tribes but we have got nine camps. And we go on the warpath against each other day and night, anytime at all. In two weeks, every tribe can fight every other tribe. This is always very exciting because another tribe can fight our tribe at two o'clock a.m. or 4 o'clock p.m. We don't know when will come somebody or when will come to steal horses. And the battles are always to be very exciting, too. I really enjoy them.

Czech tipi interior.

Czech Indian encampment.

Czech interior of Wataglapi, 2004.

Dmitriy from Russia has been engaged with the art and culture of the Lakota and Cheyenne for many years, including during the long suppression by the Soviet Union. All people interested in Indian culture during the communist regime had to pursue it with some difficulty. They collected materials, little by little, in libraries and schools and then practiced in the forest. His group is probably one of the oldest in Russia. Dmitriy talks about a Russian gathering as follows:

> We tried to not be limited to the books, but to make something in practice, to receive real experience. Our tipi, our old tipi—was one of the very first, which was made in Russia. My older brothers made it in 1975–79. We lived in it. It became simply sacred for us. We smoked in it pipes and were prayed many times. New tipi we have sewed recently because our club and our families have grown; it was necessary to move. Children require the special care, as well as our parents, which sometimes also will carry out with us time in tipi.

Many Italians are fascinated by Indians and tipis. Several people have bought property and set up their structures.
Some have even learned English so they can read the Laubins' book on tipis to construct their first lodges. Conversion of feet and inches into metric units was the first hurdle; getting the sewing machines, fabric, and then an area to work were the next steps. Giorgio Strazzari thinks he was probably the first tipi owner in Italy. Other tipis followed according to the demands from his friends. His tipi survived a rare wind and hailstorm, which destroyed all the vineyards on the outskirts of his town:

Bulgarian tipi encampment.

> The zone was then declared subject to natural calamity. The following year, we moved to another land near to the Swiss border. We mounted a tipi of 6 meters of diameter, painting suit, furnishing and of all the comfort. In particular, we spread on the bottom a gravel layer, one of ferns, and finally one of soft and perfumed pine needles. We still remember with immense delight the beautiful evenings passed observing stars and the moon, wrapped in our sleeping bags and in the resin scent.

He then describes how his family felt staying in a tipi:

> Since the first lights of the dawn, we felt the songs of all the birds of the surrounding forest. After a pair of months of permanence in the tipis, we realized to be in complete harmony with the natural rhythms, we woke up without efforts at the sunrise and with the darkness of the night it arrived also a healthy sleep. All this did not lack to astonish various friends coming from the city that did not succeed to explain this change and behaviour. The friends were not the only ones to visit our tipis; it was also attractive for small mices, cats, and fox in their nocturnal wanderings.

Interior of Hungarian tipi.

Hungarian spring.

Hungarian winter encampment.
From Israel, Neta and Larry Schwartz have their plain white 20-foot, 13-ounce Sunforger Nomadics set up year-round within a 2,000-year-old Roman quarry nestled between the Carmel Mountain Range in the east and the Mediterranean Sea in the west. Their painted tipi is located under the large limestone rocks of the quarry that were used to build the aqueducts that brought fresh water to the Roman port of Caesarea when the Roman Empire was flourishing in that part of the world. The flowering cactus on the right in the photo on page 191 is called "the Queen of the Night," since the flower, the size of a large grapefruit, only opens at night and closes the next morning. They used bamboo poles since wood poles, their original choice, were difficult to find. Australia and New Zealand have their share of Western participation with rendezvous and tipi camps. It seems that many of the tipi camps are for the New Agers or spiritual-seeking groups who want to have an "Indian experience." All over the world, the tipi is being associated with spirituality and ceremonies.

Russian tipi camp, 2003.

Dmitriy and wife from the Russian Indian Club.

In Japan and Korea, the tipi has found a home. They are made from the Laubins' pattern out of canvas and synthetic materials. One tipi, found in Japan, was completely made from blue plastic tarps sewn together. There was even a type of hibachi fireplace inside, carefully supported a few inches above the floor. The floor was also blue plastic. The tipi appeared to be about 14 feet, or 4 meters or so, and the poles were bamboo. Another tipi, with a painted bear on the side, was made from what appeared to be a canvas material, just like tipis made in the United States. For those in "the hobby," Japan has its own tipi-making company and Indian trading posts for fur, leather, and beads. Now the tipi has spread to South America and Africa, with the first tipi makers in those respective countries setting up shop.
With the tipi being such a versatile camping medium for most types of weather, it is no wonder that it is seen around the world. Other types of tipi-style housing have been used by the nomads of Russia, Finland, Norway, and Canada. Some are made of bark, reeds, hides, or cloth. But the most romantic and widely used is the Native American cloth tipi.

Neta and Larry Schwartz tipi located in the Mt. Carmel mountain range, Israel, within the Roman quarry.

Japanese tipi.

# Modern Tipis

The tipi is an ever-evolving structure that continues to change in materials and use. With the new space-age synthetics, there is an ever-increasing interest in using these materials to prolong the life of the cover and liner against rot and mildew. Today, there are those who are trying to reinvent the tipi by making it a more permanent structure using these synthetics. Military and civilian campers have been using red and blue striped awning material for tipi covers for a few hundred years. Sunbrella makes a synthetic or acrylic striped material that can be made into a long-lasting and colorful cover. It also comes in solid colors. Since it is a synthetic, it does not rot, and it is mildew and UV resistant. It also does not breathe or let air pass through. Since plastics can melt in high heat, interior fires, if used, need to be watched carefully. Darry Wood was one of the first people to incorporate this new material into his tipi liners and covers. His lodges stand out at any powwow or camp, not only for the colorful covers but also for his craftsmanship.

Tipi made by Darry Wood from awning material, 1977, North Florida Indian Cultural Society powwow.

Another experimental tipi was made by Brooke E. Demos, who took synthetic materials to a new level. Her tipi is made from processed postconsumer plastic bags, which she wove on a 36-inch, 4-harness floor loom. The tipi poles are 12-foot closet rods painted blue to highlight the blue trim of the cover.
A couple of tipi manufacturers have sewn in clear vinyl or plastic material to replace panels or sections of the cover to make "windows" to lighten up the inside of the tipi. This does let in more light, but it also presents possible problems with the plastic deteriorating before the cloth, due to UV rays. Also, constant taking down and putting up may cause the vinyl to crack and break over time. Rainbow Tipis of Australia is not standing still in tipi design. Using solid colors of Sunbrella materials, zippers, and window screening, they bring their product designs into the twenty-first century. Keeping the basic structure, they incorporate zippered awning sides, screened windows, and clear plastic interior rain covers. This may be the ultimate tipi camp, where the sides of the tipi can be raised to form sun screens and built-in mesh is used to keep the bugs out.

Above and below: Rainbow Tipis from Australia showing the use of synthetic materials, pull-out awning from cover, and screened windows on the sides.

Made from postconsumer plastic shopping bags woven on a loom for cover.

Tripods have always been set up with ropes tied in different variations of knots, with the goal of keeping the lodge up for long lengths of time. Unfortunately, over time and because of wood shrinkage or expansion, the tie rope loosens up and can slip, causing the structure to fail. Arrow Tipi of Canada has come up with what it calls a Widget. The Widget bolts together the three tripod poles into an interlocking group that will not slide or break. A tipi can stay up for a few years, depending on the poles and cover material, without coming down at the tie point. Because the 20- to 30-foot poles are sometimes hard to transport, poles may be changing too. People are coming up with ways to cut the poles in half and then join them back together using sleeves made of plastic, metal, or carbon alloy-based materials.
Because wood shrinks and expands with time and weather, the problem is keeping a tight fit where the poles join. Solving this would also prevent the other problem of water running down the pole, hitting the sleeve, and dripping into the living area below. And the cut/sleeved area still needs to maintain its strength in high winds. It has been suggested that the pole be cut in half, a double screw of wood or plastic be inserted, and the two halves screwed back together. This runs into the same problem with the contraction and expansion of wood. Synthetic poles made of carbon fibers, similar to pole-vaulting material, as well as poured plastic resins over a reinforcing center core, have been investigated. Unfortunately, the cost is highly prohibitive for the everyday camper for just one pole. Disney World has some of these built into their exhibits and tourist attractions. Innovative ways of setting up a tipi cover and poles come from Froit Yurts in the Netherlands, whose owner says he learned this method from a traveling Sioux Indian. The poles are set up in a spiral sequence on the tripod; each pole is wrapped in place once before the next one is set up. Instead of a lift pole tied into the cover at the tie point, the smoke-flap poles are placed in the smoke-flap pockets, and the lightweight cover is lifted into place by two people. The cover is then pulled around the poles, as in the traditional method, and laced in the front. Froit told me that they only set their tipis up in the good weather of summer.

Above and below: Widgets for setting up the tripod by Arrow Tipi, Canada.

In the early 1930s, Ben Hunt wrote an article, with drawings, on a tipi that used only three poles, segmented or spliced together with a sleeve or double ferrules. The rest of the poles were replaced with ropes radiating from the tripod tie point to the ground. He called this his "three-pole tepee."
Today, there are manufacturers recreating this style of tipi with one pole in the center and the ropes radiating from the center down to pegs in the ground. The cover and liner are made from lightweight or rip-stop nylon used in windsocks, jackets, and modern pop-up tents. The center pole is sleeved and a cross bar is bolted near the top to form the smoke-flap opening. This tipi is very transportable and a convenient way to enjoy tipi camping without all the poles and weighty canvas for setup. The last major innovation I have seen comes from Fun Camp Co. of Canada. These tipis are an adaptation of the tipi design for a permanent, stationary camp. A concrete platform is built inside the pole structure. The top of the pole structure does not use a tripod setup but an adjustable metal cap, into which the main poles are bolted. The top poles of the lodge are wired and bolted to the top metal structure.

Froit-Netherlands spiral-based tipi setup with smoke-flap poles.

When the cover is fully extended and closed up, the metal top serves as an adjustable flue, controlled from a rope inside the lodge. This adjustable plate can be opened or closed to adjust the ventilation inside, just as smoke flaps in a traditional tipi are set for airflow. These modern versions of a tipi also have built-in bug screens for the door. Inside are places for framed beds, a chair, and a central propane fire, if wanted. With the new discoveries in plastics, synthetics, and UV-resistant materials, there is no stopping the imagination in adapting traditionally designed tipis into more comfortable and longer-lasting lodges. But a choice may need to be made by the tipi enthusiast as to how many of these new ideas to implement and what you want from tipi camping. You'll need to decide whether you're a primitive traditionalist tipi camper or a high-tech, I-want-all-my-conveniences tipi camper.
Some of these choices will depend on money, transportation, and why you chose a tipi.

Above and below: Fun Camp Company of Canada. Poles fit into a top form and a concrete base pad.

As seen from above, sequence of putting poles in frame and method of using smoke poles to slide cover up when pitching the eagle nested tipi (without lifting pole).

# Camp Stories

**Darry Wood** started living in a tipi with his family in the early 1970s. He lived in a tipi in rural Upstate New York for three winters, during a six-year period when the tipi was the only home his family had. He and his wife sold their suburban home and started out with a six-year-old child and a big dog. By the time they folded the tent and moved into a log house, where they now live, they had two children, a dog, and a cat. By the time Darry decided on where their first winter's campsite would be, he and his wife had made themselves an 18-footer and moved from Tallahassee to the Catskill Mountains. They were just getting used to the significantly greater space when a Thanksgiving day snowstorm dropped a 2-foot blanket of snow on them. It was quite an eye-opener, as were the below-zero temperatures and icy winds that soon followed. They were now using a 16-foot tipi for storing the extra gear and ever-accumulating possessions. Darry remembers sitting before a raging fire late at night while the north wind rattled the poles. The dog's water dish, frozen solid, served as one of the weights trying to seal out the draft coming in under the lining door. Darry remembers looking at his little brood, snuggled there beside him under deep piles of wool blankets, and thinking to himself, "You've got this all wrong, we should be storing our stuff in this 18-foot lodge, and living in the 16-footer." The smaller lodge was easier to heat.

**Doug Rodgers** relates these stories and advice about living in a tipi out in the woods:

> I have a friend who lived in his tipi for nearly two decades.
> The highly important thing to note is that this friend was single. He had no wife and no children whose needs he had to meet. He was/is a rugged outdoorsman type. I have another friend who lived in a tipi two years (near me, in Alabama) with a wife and four young children, two still in diapers. It was a tough experience. They had to haul water to the site. They took showers at a park a few miles away. They had some sort of privy in the woods not far from the tipi. The wife was not the rugged outdoor type and it was hard on her. After a couple of years, they moved into a house. Despite some serious downsides to the experience, the whole family feels that it was one of the highlights of their lives—being outside, seeing wildlife, sitting by the fire at night, family togetherness, cooking outdoors, experiencing the moon and stars in a way that most people never do, and experiencing the changes in the seasons in an intimate way.
>
> I've lived in a lodge. The only changes I'd make are to get a stove (wood burns slower and you can cook on it), extend stovepipe up through the poles/smoke hole. That way you're not flooded out. If you're going to stay in it for a few years, put in a wooden foundation on 2 x 6's on a solid base of rock or cement pillar and then floor over with 2 x 4's spaced about 1/4 inch apart. Then you can design so your lodge has a porch all around. You can get an old closet or bureau to hold your clothes under an outside shed; that way they don't get too smoky to wear to work. Just hit the laundromat once a week—then fill up the bottom of the closet with some pet bedding cedar chips, which are cheap at the store; smell of cedar keeps bugs at bay, too. Since you've got the shower figured by using a solar setup, no need to put in water system. Get rid of the apartment and you'll be able to afford a new lodge cover every couple years easily.
>
> If you're not keen on peeling poles (who is?), plunge them into running water, a ditch, or creek.
> This seems to take away the sticky sap that holds the bark on so tight, and they'll peel slicker than a carrot at a kitchen sink. For kitchen doings, put in a small table along one side with slide-out mouse- and bug-proof compartments. Couple coolers kept in a 3-foot hole dug in between floor joists make a constant-temperature root-cellar box; eggs and cheese keep well at 40 to 50 degrees; ice double-wrapped in grocery sack paper keeps for five days in a cooler, so meat is good for a week.
>
> I use a style of cot for a bed. Get some old used carpet for floor covering in winter to keep out breezes and bugs and you're in business. I was out West so there was no problem with mildew in that dry climate. I keep a fire going to keep it fairly dried out. This is all the advice I can offer. Only downside thing is the darkness of a winter lodge. Winters are long, in the mountains. I sorely missed the luxury of having a window.

**Hurricane Andrew** in 1992 was a great test for living in a tipi. I had friends whose homes were completely destroyed. But they were able to salvage their tipi, liner, and a few poles from the remains of their houses. They refused to live in those "green army" tents that were set up for shelters. With more poles brought in from other parts of Florida, they set up their tipi in the yard. They lived in this tipi while cleaning up what was left of their house and goods. Even in the extreme heat and rain, they chose the tipi because it was light and airy for the weather conditions. With the streamers waving above the debris that was once Homestead, Florida, we could find the tipi amongst the surrounding rubble. From the Internet, **monkeytown99** related this story of living in his lodge:

> My wife and I were drumming in our lodge and we were lost in the rhythm, as can oftentimes happen. There was a rustling in the liner and something was trying to get in. It had obviously come in under the cover, which we usually keep a few inches off the ground.
> It moved along the wall poking and prodding. As the hypnotic effects of the drumming were wearing off, I got my wits about me and was going for a stick to beat this animal off with when it poked its head around the liner at the door opening—it was one of my dogs with a "what's all the fuss about" look on his face. I was relieved it was him, but disappointed at the same time—I was thrilled in thinking that an animal spirit was visiting our lodge . . . but nervous that it might be a bear or cougar!
>
> The other time we had a visitor was when we weren't home. A feral domestic cat—we keep them around to put the run on skunks—had lunch, a bird, in our lodge and left the feathers and a turd as souvenirs of his adventure.

Monkeytown99 also made a point about privacy and noise in a tipi:

> Another time we must have set [our tipi] up on a drain field because that night it rained and there floated my rubber ducky right beside my pillow.
>
> Then you always have those moments where someone just can't stop snoring . . . pinching their nose doesn't help . . . you're desperately trying to get a good night's rest because next day there is dance competition and it's just not happening!

Then there was the story of the attack of the killer armadillo. Do you know that no one will help you out when you yell "Armadillo!"? We kept hearing someone yell "Armadillo!" but we thought it was just someone having a nightmare. Suddenly I heard someone yell "Fire!" and a couple of us ran into the tipi with water buckets. No fire, but one really angry, horny-plated animal that wanted out but could not find the door. It had come in from under the cover and liner but could not get back out the same way because of the underturn of the liner. It took four of us to herd it out.

**Asher Rospigliosi** talks of a strange critter at the Glastonbury Festival in England in 1997 when there was a heavy rain. He had to get out the spade and dig trenching around his lodge and a couple of others.
A young Chinese woman who spoke no English took shelter in his lodge and stayed for three days. She was very self-contained; she slept curled up away from him and his three sons. One night, in a break from the rain, she entertained them by juggling fire clubs around the central fire; shortly after that she disappeared.

**Leon Dunham** was camping in his mother's pasture and after a late night of watching a movie, he came back to find his camp had been attacked by the natives. The door was torn off, a couple of stakes were pulled out of the ground, and lacing pins were missing or chewed up. From the mess inside, it was obvious at least one critter had been inside. Looking around outside with a flashlight, he found two horses standing by the barn, looking at him with a classic "what?" look. He also related: "Then there was [a time in] New Mexico where we got driven nuts by ants, but it was the big furry spider that made us spend the night in the bus instead of the tipi." And then, on another trip to camp in their tipi for a few weeks:

> On a two-week trip to Niobrara, Nebraska, we carried our tipi on our VW bus and set it up in the state park there for four days for a family reunion. It was a great hit. One of the things we noticed on this trip was many more tipis and tipi poles were being set up around the country. More than we've seen before (last major trip like this was three years ago—we are from Oregon). We did not get a close look at any of the other tipis. But my impression is that most of them were from one of the commercial manufacturers. We did not notice any that were obviously homemade or were of skins rather than canvas. Many of them were in private campground/resort/lodge facilities, but an awful lot were just sitting behind somebody's house out on the range. Very nice to see . . .
>
> We ended up sleeping in the tipi only one night of the four because of storms.
> One night was so bad we even drove away from our chosen campsite to a protected place between the nearby hills. For long periods of time, lightning was so continuous you could read by it. The storm lasted four hours or so. In the morning when we went back to the tipi, [we] found it had been blown together and leaned up against the tree we had put it under.
>
> Everything was soaked, but nothing [was] broken or torn. It appeared the rain had softened the ground enough that the wind pulled the stakes out and moved the poles together. It looked pretty funny. Had we been inside we think we would have been fine. We will be putting the stakes in better and such. We were able to squeeze inside it and simply walk the poles back out and restack it and all was fine.
>
> Our camp was set up on a small ledge overlooking the Niobrara River—quite exposed to the east, which is where the storm came from in the wee hours of the morning. It was quite educational—like, yes, put the stakes in good! Even if it is a beautiful day when you're setting up? And is this site really a good one? Hmmm, nice view, but . . .
>
> One of the things we decided on was to have everyone who came to see our tipi sign the liner. I liked that a lot. Very nontraditional—but fun. On the last night, the older kids (third and fourth cousins at this point in this family) came to the tipi and had a late night together. A much nicer use than just us sleeping in it!
>
> We did take some friendly ribbing from a couple of guys in South Dakota about the short poles we had on the bus. They definitely thought they should be 25 to 30 feet long. The poor old bus was overloaded as it was the summer of 2005.

## Why People Never Camp with Me

People never camp with me because, well, let's see, I burned down my 12-foot tipi when I left a candle burning that I thought I had blown out. I went out to do my morning thing and about an hour later, I heard that horrible word, "Fire!"
Being at a rendezvous I was thinking about the firing line for the muzzleloaders—not so. The smoke and flame were from my tipi, which was burning to the ground. Many of my good friends were trying to put it out. With canvas going down around me, I stupidly ran in to save my camera equipment, bedding, and clothes. It is nice to say I was OK and my personal gear was saved by my quick-thinking friends pulling the lacing pins out to get the cover down. My cover and liner were destroyed and my poles were scorched, as was my reputation.

Back in 1978, I was camped out at a powwow in White House, Ohio. Never again will I let someone tell me where to camp when I know the area is a "little" low. Well, it rained that night and now I have a 2-foot-high tide mark all around my liner. It took three days for the water to drain away and for me to take down the lodge. There is nothing like sleeping in a van full of wet buffalo robes and rawhide.

When the "First American Mobile Home" meets the "Modern American Mobile Home," there can be a clash. How about my tipi poles going through the back of one of those big travel homes that backed into my van and tipi racks? Boy, did it cost that guy some big bucks. He had to pay for half a new set of 28-foot poles, which were shortened to 20 and 14 feet. My poles went from his bedroom into the bathroom. It looked like a jousting tournament gone wrong with the splintered wood, glass, and metal. The travel home driver wanted me to pay for damages! The police said it was entirely his fault. I had lights and flags on the poles to conform to state laws. I was outside screaming my head off trying to get him to stop. He wasn't looking! Those were beautiful poles I had spent long hours and days sanding to a fine finish. You could run your hands down their entire length without a single splinter. Darn!

Then one time I set up all my poles and lining, and had the door. I had, however, forgotten the cover for my 14-foot lodge.
I was kindly invited to move in with two gentlemen (thank you, Chip), who offered me half of their living area in a new 9 x 15-foot Marquee that I had just made for them. Now they were true Southern gentlemen. They even put up a curtain to divide the tent and give me privacy. For the rest of the week, and for years afterward, I have been teased by the camp.

At the NMLRA Midwestern, sometimes called the "Mud Western" from the years when it was held on the Kickapoo River in Illinois, a tornado went through camp. Well, it got my small wedge tent that was set up outside my big 18-foot lodge and sent two tent poles through the side of my lodge, just missing the head of a friend, George Kuhn, who had bravely camped with me. I spent the next few hours, in the rain, trying to find equipment, supplies, and friends. As it turned out, other camps were destroyed or blown over; but no one was seriously hurt. When the sun finally came out, it looked like a disaster area. Broken wood, canvas, and clothing were all over the place. About 5 feet above the ground, my lodge had two large tears about 25 and 36 inches long. I know because I measured them, as I needed to find the extra canvas to repair the damage and find something to stand on while making repairs. This is when I found the value of having a tipi repair kit, with extra canvas, Elmer's Glue, and sewing needles. I was very upset fixing the tears and hoping that I would get it done before the next big storm hit. If the winds had gotten into the tears, it would have ripped my whole tipi apart as these rips got bigger going up the side. I must say I was very proud of myself because some years later that same tipi survived another tornado and the patchwork held together.

Well, a few years later I had that same tipi in Tampa, Florida, and our camp was hit by another tornado. This was the big Alafia Rendezvous and I had set up my 18-foot lodge with everything in it.
I had buffalo robes, parfleches, beaded robes, bedding, backrests . . . everything. I had also set up a 9 x 15-foot Marquee tent for shade, storage, and cooking for the week. While I went back home, friends of mine were going to watch my camp until I came back in a few days.

When I returned a few days later to camp on a starry night, the camp looked unpopulated and abandoned. Coming over the bend and up the small hill that overlooked my tipi, I was horrified to see that I had no camp. All the other tipis and tents were there, but not mine, not even my Marquee tent. And where were my friends and all the people? I could see candles and campfires in the area . . . and then I saw my fire pit, with my rocks surrounding the pit . . . but no tipi.

Out of the dark stepped a friend who said, "Guess what? You know the old fairy tales where once upon a time something happens . . . or as we say in the buckskinning way, 'This ain't no s—t?' Well, this tornado came down and took the tipis on the hill, some tents in the second camp, and then came down and sucked up your tipi and Marquee out of the middle of all these tipis." I stood there dumbfounded and could only say, "OK . . . joke's over, now where did you guys hide my tipi?" which I said several times in complete disbelief. I really thought they had moved my camp just for a big laugh. But no, it was sucked up by a tornado.

Now the other people in camp came out of the shadows to tell me the story about what had happened and how they had saved my equipment. Many people volunteered to help me put the tipi back up the next day. Looking at my cover neatly rolled up and dried out, I decided to put the tipi back up that night. Some people let me borrow a few more poles to replace those that were broken. A few hours later all was back up, and I spent the rest of the night in my fluffy, warm bed. In the daylight, I was amazed that the cover was not more damaged.
A few marble ties were torn at the bottom of the lining, which was ripped in a couple of places. Eight other tipis were not so lucky. They were torn to pieces by the wind and their poles were smashed. In my case, because the soil was sandier, the wind had just sucked the tipi up and moved it about 50 feet into another lodge where it slid down on its side. The tipi might have survived intact if my pegs had been longer to go into the soft soil or it might have been saved because it pulled out so easy. As for my Marquee tent, it was found in the lake and pulled out to dry. Thank goodness for friends at rendezvous and powwows; whether you know them or not, they all pull together in a disaster.

In another camp, I woke up one morning to find a cow halfway in my tipi door. Someone had left a gate open and the cows had come back to their pasture in which several tipis were now camped. Horns do not go well with canvas . . . canvas loses every time.

While I was at home, I found another use for tipi poles. A hot-air balloon crashed or slowly landed in my swimming pool. And do not believe that old story about getting to drink champagne and eat strawberries after they land. My tipi poles were used to help support the envelope from sinking farther into the deep end while we scurried to retrieve the basket part.

It is never a good idea to carry watermelons in the back of your car next to the windows, especially several days before pitching your tipi. As in my case, they have a tendency as they get hot to explode all over your blankets, cover, and you when you are trying to open up the side door. There is nothing like the smell of fermented melon all over you and your blankets, clothing, and tipi. And I didn't get to eat the melon after traveling all that way.

Make sure you take a long, heavy bat with you when using the outhouses in Florida. I was using the facilities around 2:00 a.m. when I tried to open the door to get out.
It just would not budge as I kept banging on something outside. Then I started hearing a hissing and a low growl. I knew I was in trouble. Being that Florida is a very water-oriented state, I realized that on the other side of the door was one big, mean, angry alligator. They do not like being hit on the head or snout with anything. My only recourse was to stay in the rather smelly outhouse or yell for help. So, knowing that you get more of a response yelling "Fire!" rather than "Armadillo!" or "Alligator!" I yelled "Fire!" Were those guys ever surprised when they came running.

## Questions

Questions about living in a tipi, and what you need to think about before taking the plunge.

#### Is it possible that living in a tipi is a cheaper way of living?

It is if you like camping. You will need an area for the human waste and garbage.

#### Where will your water and electricity come from?

If you have a power pole near you, it is possible to run an extension line to the lodge. Or use a car battery or other devices like generators to run lights and other appliances. Water can be brought in or you can set up a water station nearby for running water. You need water for sanitary purposes, drinking, and cooking daily. But most people just live in the tipi as a primitive lifestyle. Why not? One of the greatest pleasures of having a lodge is its lack of modern conveniences.

#### Do you use the fire pit in your tipi during the summer months? If so, how do you keep from getting too hot?

You can have a fire in the summer, but the heat can run you out. Cooking is done in an outdoor arbor or shaded area. The tipi sides are rolled up or opened up in the front. It is like spreading your wings to let the air pass through. At night, you might have a little smoky fire to keep the biting bugs at bay.

#### How do you keep cool in the summer?

Do not use a liner. The cover goes all the way to the ground except when rolled up during the day to catch the breeze.
There are portable air conditioners that you can put in the lodge, but you need electricity to run them. If you have electricity, you can also use a portable fan.

#### Do you have a problem with rodents coming and living with you?

No matter where you camp, you are going to have some type of little critter who comes to visit you. The only way to help prevent this is to build your tipi up on a platform with the cover going all the way to the ground. The lining must be sealed all the way around and to the door.

#### What do you use for bedding?

Anything you want or that can fit in a tipi. Some people use air mattresses and others use foam padding. The primitive groups just use sleep pallets of blankets and buffalo robes. Whatever you use, put it on some type of waterproof tarp. You do not want to wake up in a flooded, water-soaked bed.

#### What do a husband and wife or anyone do about the intimacy issue?

You can put curtains in but keep the sound down. Otherwise, how big is the car? Today we are far more modest. I have been in that situation . . . so it was under the buffalo robes and very quiet. It was fun, and exciting too, if you get my meaning.

#### Where do you store all of your clothing since there are no closets?

Usually in another tipi since I have two tipis. I have a smaller one for cooking and storage. For cooking and when the weather is bad, I put up a big awning to cover the tables, chairs, and fire pit.

#### What about cooking? Do you cook right over the fire, or do you have to invest in special camp kitchen stuff?

Depending on what you want to call cooking, you can do it over the open fire and Dutch ovens. I love the Dutch oven pots. My entire kitchen is in a special storage box built to hold food items. Then there is a long extension cord for electricity use. I dug a trench in the ground to hide the cord for my microwave. OK, OK, it is not that completely primitive, but I like my TV and computer. Batteries can also work for some items.
#### Do you feel safe at night when you sleep in a tipi?

Very much so. Especially with Mr. Smith, Mr. Wesson, and sometimes Mr. Browning (small handguns) right next to me. I have had some unexpected guests, human and animal, and some big bugs. In this day and age, do what you think will offer protection.

# Appendix

### Documenting the Historic Tipi

**Seton, Ernest Thompson.** Two Little Savages. **New York: Doubleday Page & Co., 1903, 64–76.**

"You make ten Oak pins a foot long and an inch square, Sam. I've a notion how to fix them." Then Yan cut ten pieces of the rope, each two feet long, and made a hole about every three feet around the base of the cover above the rope in the outer seam. He passed one end of each short rope through this and knotted it to the other end. Thus he had ten peg-loops, and the teepee was fastened down and looked like a glorious success.

Caleb came over and nodded..."Got yer teepee, I see? Not bad, but what did ye face her to the west fur?"

"Fronting the creek," explained Yan.

"I forgot to tell ye," said Caleb, "an Injun teepee always fronts the east; first, that gives the morning sun inside; next, the most wind is from the west, so the smoke is bound to draw."

"And what if the wind is right due east?" asked Sam, "which it surely will be when it rains?"

"And when the wind's east," continued Caleb, addressing no one in particular, and not as though in answer to a question, "ye lap the flaps across each other tight in front, so," and he crossed his hands over his chest. "That leaves the east side high and shuts out the rain; if it don't draw then, ye raise the bottom of the cover under the door just a little—that always fetches her. An' when you change her round don't put her in under them trees. Trees is dangerous; in a storm they draw lightning, an' branches fall from them, an' after rain they keep on dripping for an hour. Ye need all the sun ye kin get on a teepee.
Sam and Yan did so, and when it was finished Raften said: "Now, fetch that little canvas I told yer ma to put in; that's to fasten to the poles for an inner tent over the bed."

"Indians don't have them that I ever heard of," said Little Beaver.

"Yan, did ye iver hear of a teepee linin' or a dew-cloth?"

"Oh, I remember reading about it now, and they are like that, and it's on them that the Indians paint their records. Isn't that bully," as he saw Raften add two long inner stakes which held the dew-cloth like a canopy....

The shower grew heavier instead of ending. Caleb went out and dug a trench all round the teepee to catch the rain, then a leader to take it away....

"Where's your anchor rope?" asked the Trapper. Sam produced the loose end; the other was fastened properly to the poles above. It had never been used, for so far the weather had been fine; but now Caleb sunk a heavy stake, lashed the anchor rope to that, then went out and drove all the pegs a little deeper....

The smoke hung heavy in the top of the teepee and kept crowding down until it became unpleasant. "Lift the teepee cover on the windward side, Yan. There, that's it—but hold on," as a great gust came in, driving the smoke and ashes around in whirlwinds. "You had ought to have a lining. Give me that canvas: that'll do." Taking great care not to touch the teepee cover, Caleb fastened the lining across three pole spaces so that the opening under the canvas was behind it. This turned the draught from their backs and, sending it over their heads, quickly cleared the teepee of smoke as well as kept off what little rain entered by the smoke hole."

**Fletcher, Alice C., and Francis La Flesche.** The Omaha Tribe. **Lincoln: University of Nebraska Press, 1972, 285–87.**

Included in this book is "An Average Day in Camp Among the Sioux," written in 1885, which is from Alice Fletcher's journals.

On the day designated for a journey every one is astir, while the stars are still shining.
Those who sleep late are wakened by the crackling of the leaping blaze. Shadowy forms are moving about the entrance to the lodge, and the boiling kettle warns the sleepy one that he had better be up and ready for breakfast. To slip out into the cool morning air, to dash the water over the face and hands, and dry them on the tall grass, is the work of a moment; and, with a little shaking together, every one is ready for the morning meal. This is portioned out by the wife, and each one silently eats his share. The baby still sleeps on its cradle-board, but the older children are relishing their broth with the vigor of young life. As each one finishes, he passes his dish to the matron, springs up, and leaves the tent. When the mother has eaten, she too goes out, and, with rapid steps and bent form, passes around the outside of the tent, pulling up the tent-pins used to hold the tent-cloth taut, and throwing down the poles which support the smoke-flaps. If there is an adult female companion, she takes out the round, slender sticks which fasten the tent-cloth together in front. The two women then fold back the cloth in plaits on each side, bringing it together in two long plaits at the back pole; and this is now tipped backward, and allowed to fall to the ground. The cloth is loosened from the upper part of the pole, and rapidly doubled up into a compact bundle. The baby, who has wakened and lain cooing to the rattle of blue beads dangling from the bow over its cradle-board, gives a shout as the sunlight falls in its face, and watches the quick motions of the mother throwing down the tent-poles, thus leaving the circle free of access. It is the leader's tent which first falls as a signal to all the others. Meanwhile the boys are off with many a whoop, and snatch of song, gathering together the ponies. 
The men are busy looking after the wagons, or else sit in groups and discuss the journey and the routine of the intended visits, or attend to the packing of the gifts to be bestowed. All visitors are expected to bring presents to their hosts. The younger children run here and there, undisturbed in their play by the commotion. Soon the boys come riding in, swinging the ends of their lariats in wide circles, and driving before them a motley herd of ponies, some frisking and galloping, and others in a dogged trot, none following a path, or keeping a straight line, but spreading out on each side in the onward movement. As they come abreast with the dismantled tent, the women, without any break in their talk, make a dash at a pony, and generally capture him. The animal may, if he is good-natured, at once submit to be packed, two poles on each side, the packs containing the gala dress: bags filled with meat and corn are adjusted like panniers. Between the poles, which trail behind, a skin or blanket is fastened; and here the young children and the puppy have a comfortable time together as they journey. There are enough ponies for all the men and women to ride, and colts running along beside. If wagons are to be used in traveling, the tent-poles are tied on each side of the wagon box. The harness is dragged along by a woman, who slings the mass of straps and buckles on the pony's back, he giving a light start as the load drops on him. The buckling is quickly done by the women, and the stores packed in the bottom of the wagon. Finally the kettle and coffee pot are picked up; and nothing is left of the camp but circles of trampled grass, each one with a pile of ashes in its centre. The delight of being 'off' affects every one, the older people enjoying it sedately: the young men dash about up on the hills, where they stand silhouetted against the cloudless sky. Now and then they drop from their ponies, and lie flat on the ground, while the animal nibbles unconcernedly.
The women ride with the stores in the bottom of the wagon, and the men on the seat, driving. It is hard, teeth-chattering work to travel in the bottom of a springless wagon, and no fun to ford a rapid river full of quicksands; for down will go one wheel, and the water come swirling in, wetting every thing and every body. At such times the bags of provisions are held high aloft in the hands: all else must take its chance. Those on the ponies fare better; for, with the feet on the horse's neck, all goes well, unless the little fellow gets into a very bad hole, and topples over into the water. Sometimes the men take off leggings and moccasins, roll them in a bundle, tie them on the head or back of the neck, and wade over, leaving the pony to follow. Such persons generally have time enough to lie down on the bank to dry off, and from their vantage-point watch the struggles of the loaded wagon as the men spring from their seat into the stream, and tug at the wheels to save the vehicle from sinking. All day we ride over the prairie-trails, starting up the birds, seeing the flash of the antelope, or catching sight of the retreating wolf. If location serves, about three o'clock we camp, always near a stream and timber. It is the work of a few moments to set up the tents, while the men and boys scatter with the ponies. The young girls go laughing to the creek for water, the older women cut and gather the dry wood, and in less than an hour the thin blue smoke is curling through the tent-flaps, and the kettle hanging on its crotch-stick over the fire. Each bundle of bedding is thrown down in the place its owner is to occupy, and it will be untied and spread when needed. There is a fascination in lying on the grass after a hard day's ride, and watching the settling of a camp. The old men gather in groups, and smoke the pipe. 
The young men lie at full length, resting on their elbows, their ornaments glistening in the sunlight as these gallants keep watch through the swaying grass of tents where coy maidens are on household cares intent. It is not unlikely that more than one youth is planning how he can best gain access to his sweetheart, and speak a few words to her when she goes for water to the creek in the early morning; and it is equally possible that similar thoughts are flitting through the girl's head. The creek or the spring is the trysting-place for lovers, but the chances for a word are hard to gain. It is against etiquette for a young woman to speak to any man in public who is not a near relation; and such a one, by the law of the gentes, can never be a lover. But young hearts are stronger than society restrictions; and so when the girl, accompanied by her mother or aunt, goes for water in the early morning, she will sometimes drop behind her chaperone, and the young man, who has lain hid in the grass, darts forward, swiftly and silently, and secures the favored moment. Should the mother turn, he as instantly drops in the grass; while the girl demurely walks on, keeping her secret. The small boys have already fallen into games, and are shooting arrows of barbed grass. From within the cone-shaped tents comes the sound of the chatter of the women, broken now and then by loud laughter. This might arise from the practical joking of the mother's brother. Such a relative is privileged in the home, and the source of many sports. While the women are cutting up the meat for the evening meal, and preparing the corn-cake, the young man, lounging in the shadows of the tent, has improvised a drum, captured his small nephew, and breaking into song, bids the little fellow dance for his supper. He obeys with a zest, his scalp-lock, and the flaps of his breechcloth, snapping to the tune. 
The little sister, having secured a premature bite from the mother, stands diligently eating, as she watches her brother's antics, stimulated by the mischief-loving uncle. There are shiftless folk among Indians, persons who are always borrowing from their more forehanded relatives; but not all borrowers are of this class. A custom prevails concerning borrowing a kettle susceptible of easy misconstruction by our own tidy housewives; that is, that it is expected, when a borrowed kettle is returned, that there will be a small portion of the food which has been cooked in the kettle remaining in the bottom of the pot. The language has a particular word to designate this remnant. Should this custom be disregarded by any one, that person would never be able to borrow again, as the owner must always know what was cooked in her kettle. Great indignation was the result of the action of a white woman, who returned a scoured kettle. She meant to teach a lesson in cleanliness; but her act was much talked over, and interpreted as fresh evidence of the meanness of white folk! Soon the savory odors give token that supper is ready. Dishes are set in the traditional places occupied by the members of the family, and the food ladled out, and portioned to each person. The little girl is sent out to call the men in. There is no formality about the family meal. If the father is a religious man, he may take a bit of his food, lift it up, and drop it in the fire; the act is without ostentation, and apparently unobserved by the others. Sometimes the children take their supper together outside the tent. The mother seldom eats until all are fully served. She may join her children with her portion; or if she has female companions in the tent, they will draw together, and gossip over the meal. Every one falls to with zest, and the pot is generally emptied. After eating, all lie down, stretching out in the tent, or going outside if the day is fine, and resting in the long slanting sunlight.
As the air cools, a fire is kindled; and here grouped about the companionable blaze we watch the stars come out. Some persons doze, some discuss the journey, or recount reminiscences of former times: the women gather together and complete the story of the day; while the children chase the fireflies, or subside into drowsy listeners. Across the hum of voices is borne the song of a young man, who, hidden in the grass, lies on his back drumming on his breast as he sings. There are no urgent demands upon any one. The matron has no dishes or linen to wash, or scrubbing to do; there is nothing to clear away after the evening meal. The single pot is emptied, and set to one side. No transitory fashions perplex the fancy of the maiden; no lessons to learn harass the child. The men talk or sing, unconscious of money making or losing, or questions in science or art. To the people, no great disasters are probable, no great successes possible. The stars above silently hold their secrets, the unmarred prairie tells no tales and the silence of uninquisitive ignorance shuts down upon our little life. To one thrust from the midst of civilization into so strange a camp-circle, the summer days hardly bring a realizing sense of the great estrangement between the two orders of society. It is only when the frozen calm of winter obliterates every touch of color and individuality of outline in the landscape that it becomes possible to gauge fully the mental poverty of aboriginal life. The cold nights when the tent freezes hard so that it sounds like a drum, and the frost lies thick on the bedrobes, make one dread to rise early; and the sun is often up before the fire is kindled, and the kettle bubbles with the morning meal. After looking to what comfort it is possible to give the ponies, and having gathered in the wood, the outdoor work of the day is over. 
In winter the tent is made warmer by putting a lining around to half the height of the tent-cloth, and by banking without and within, stuffing with grass the space between the lower edge of the tent-cloth and the ground to keep out the wind. This done, and with plenty of wood to feed the fire, one can be passably comfortable. During the day the women are busy making clothes, mending moccasins, or embroidering gala garments with porcupine quills or beads: the men, if not out trapping, are engaged in fashioning pipes and clubs, or shaping spoons on the ball of the foot. The winter is the season for story-telling, and many hours of the evening are spent in this enjoyment. The cold season brings pleasures to the children, snowballing, sliding down hill on blocks of ice, or standing on a flat stick and coasting swiftly, balancing with a pole. The glow on the faces of the little ones as they run in breathless from their sport to meet the welcome of the group within the tent, is about the only zest the days bring. Indian good manners just the reverse of ours, never speak to the person by name when present, no word of courtesy, silence, never good morning or good night, come silently, go silently. In the tent the wife's place is by the door at the left hand as you enter, husband next, guest at the rear opposite the door. Other members of the family on the right. We built our fire in the tent, cooked and sat by it. Smoke made the eyes smart, the lower one sits the less smoke. Indians lie down in tent—sensible. I did so. Straw and hay in the bottom of the tent. The floor was all muddy, clay. No grass under the trees.... **Fletcher, Alice C., and Francis La Flesche.** The Omaha Tribe. **Lincoln: University of Nebraska Press, 1972, 95–97.** The earth lodge and the tipi (tent) were the only types of dwelling used by the Omaha during the last few centuries. The tipi (pl. 17 and fig. 16) was a conical tent.
Formerly the cover was made of 9 to 12 buffalo skins tanned on both sides. To cut and sew this cover so that it would fit well and be shapely when stretched over the circular framework of poles required skillful workmanship, the result of training and of accurate measurements. The cover was cut semicircular. To the straight edges, which were to form the front of the tent, were added at the top triangular flaps. These were to be adjusted by poles according to the direction from which the wind blew so as to guide the smoke from the central fire out of the tent. These smoke-flaps were called _ti'hugabtli"tha_ (from _ti,_ "tent or house"; _hugabtli"tha,_ "to twist"). At intervals from about 3 feet above the bottom up to the smoke-flaps holes were made and worked in the straight edges. Through these holes pins (sticks) about 8 inches long, well shaped and often ornamented, were thrust to fasten the tent together, when the two edges lapped in front or were laced together with a thong. This front lap of the tent was called _ti'mo"thule_ (from _ti,_ "tent"; _mo"thuhe,_ "breast"). The term refers to this part of the hide forming the lap. The tent poles were 14 to 16 feet long. Straight young cedar poles were preferred. The bark was removed and the poles were rubbed smooth. The setting up of a tent was always a woman's task. She first took four poles, laid them together on the ground, and then tied them firmly with a thong about 3 feet from one end. She then raised the poles and spread their free ends apart and thrust them firmly into the ground. These four tied poles formed the true framework of the tent. Other poles—10 to 20 in number, according to the size of the tent—were arranged in a circle, one end pressed well into the ground, the other end laid in the forks made by the tied ends of the four poles.
There was a definite order in setting up the poles so that they would lock one another, and when they were all in place they constituted an elastic but firm frame, which could resist a fairly heavy wind. There was no name for the fundamental four poles, nor for any other pole except the one at the back, to which the tent cover was tied. This pole was called _teçi'deugashke,_ "the one to which the buffalo tail was tied." The name tells that the back part of the tent cover was a whole hide, the tail indicating the center line. When the poles were all set, this back pole was laid on the ground and the tent cover brought. This had been folded so as to be ready to be tied and opened. The front edges had been rolled or folded over and over back to the line indicating the middle of the cover; on this line thongs had been sewed at the top and bottom of the cover; the cover was laid on the ground in such manner that this back line was parallel to the pole, which was then securely tied to the cover by the thongs. When this was done, the pole and the folded tent cover were grasped firmly together and set in place. Then, if there were two women doing the work, one took one fold of the cover and the other the other fold, and each walked with her side around the framework of poles. The two straight edges were then lapped over each other and the wooden pins were put in or the thong was threaded. Each of the lower ends of the straight edges had a loop sewed to it, and through both loops a stake was thrust into the ground. The oval opening formed the door, which was called _tizhe'be_. Over this opening a skin was hung. A stick fastened across from one foreleg to the other, and another stick running from one hind leg to the other, held this covering taut, so that it could be easily tipped to one side when a person stooped to enter the oval door opening. It was always an interesting sight to watch the rapid and precise movements of the women and their deftness in setting up a tent.
On a journey, no matter how dark the evening might be when the tent was pitched, the opening was generally so arranged as to face the east. In the village, or in a camping place likely to be used for some time, a band of willow withes was bound around the frame of poles about midway their height to give additional stability. **Page, Elizabeth M.** In Camp and Tepee: An Indian Mission Story. **New York: Fleming H. Revell Co., 1915, 76, 100.** Page 76 The Mohonk Lodge, like every other new institution among Indians, had to begin slowly. Mrs. Roe's first idea had been that the actual work of the "Indian House" would fall to the Indian women, that they would prepare for any festivities or clear away afterwards, that they would keep it clean and in order, as the best of them did their tepees in camp. But a few weeks' experience showed the necessity for modifying this plan. Housekeeping in a tepee was a very different science from that in a white man's house. If anything spilled on an Indian woman's pounded earth floor, her method was to let it soak in as speedily as might be; and when any given area became soaked to the point of saturation, so that odors were intolerable even to a camp-trained nose, then she moved her tepee to a new spot, leaving the sunshine, the rain and Nature's scavengers to do a more thorough house-cleaning than she could ever hope to accomplish. Presented with the problem of a non-porous floor and an immovable structure, the Indian's method effected nothing but a glaring failure. The missionaries visited tepee after tepee, some comparatively neat, others disgusting in their dirt and unsightliness, everywhere to be greeted with friendliness. Page 100 Nearby was a wagon that had just come to a standstill and the man was leading away the horses while the woman, her baby on her back, was pulling the long poles out from behind.
Near her the old grandmother, her white hair blown in elf-locks across her face, and her tattered blanket whipped about her bent, shriveled form, was rooting up the grass with a queer bone instrument and pounding the earth down hard and smooth with a stone to make the tepee-floor. Just beyond them a young girl, evidently a bride, judging from her new equipment, had already raised the formidable tripod of sixteen-foot poles, and Mrs. Roe watched with interest the slender girlish figure as, holding the long rope that tied her three main props, she raised pole after pole, setting them in position and then with a quick turn of her wrist sending a loop whirling up the rope to settle over the pointed end and tie it fast. Every movement was easy, assured and graceful, and the brown face, framed in its wings of glossy black hair, that she turned to her mother who cackled approval from the wagon-seat, was radiant with winsome happiness. The two last poles to which the spotless new tepee cloth was fastened were put in place, the cloth was pinned securely together save for the low doorway at the bottom, the lower edge was staked down close on the sunny side but pushed up a little on the other to catch the breeze, before the mother descended from her perch to light the fire in the hole in the centre of the tepee's floor. **Lowie, Robert H.** The Crow Indians. **New York: Rinehart, 1935, 33–36.** Everything connected with the tipi belonged to the women's sphere of influence. Desiring extra long poles, they were bound not only to strip the bark but to pare down the logs to a suitable diameter, since a forty-foot pine would be far too thick at the base for a lodge pole. To prevent slipping on the ground they pointed the butts. Like the Blackfoot and Shoshone but in contrast to their Dakota and Cheyenne sisters, the Crow women invariably set up four—not three—poles as a foundation for the rest. 
It takes a pair to pitch the tipi, one woman raising the crossed foundation-poles above her head, her assistant pulling on a guy rope. The poles are then separated so that the butts form an oblong. Naturally the last pole set up carries the cover, which is brought around the framework and pinned in front. In making this adjustment a woman mounts on rungs made to cross between the two front poles or nowadays uses a regular ladder. Outside the framework are put two special poles, which when moved back and forth open or shut the smoke-hole. For greater safety in stormy weather an inside guy-rope is tied to a peg near the fireplace while outside guys are fastened to a peg on a tree. It took an expert to design a cover, and the housewife employing her would pay her four different kinds of property. The designer had as many as twenty collaborators, whom she instructed in the requisite sewing together of skins and whom the tipi owner remunerated with a feast. A whole day was spent on making the sinew thread. Work on the cover was considered particularly appropriate to the fall of the year. When the lodge was put up, the people burnt sagebrush and weeds inside and as the smoke appeared through the hides they said, "This will keep out the rain," and opened the smoke-vent. The housewife's husband invited old men to smoke with him; the guests recited coups and said, "In the spring this will be a very good tipi from which to make bags and moccasins." The fireplace was approximately in the center of the lodge, and the rear ( _aco', aco'ria_ ) was the place of honor. It was there that chief Rotten-belly received Maximilian, bidding the Prince seat himself at his left. On either side of the entrance was an _aro'kape,_ and between it and the rear the _icgyewatsu'a._ In the latter were spread the robes for sleeping, and a husband and wife were likely to rest there when not receiving visitors. 
The bottom of the cover was pegged to the ground; according to Bear-crane, rocks formerly weighted it down, but another informant restricts this custom to the winter season. Against draft the Crow used a hide screen ( _bitã'ricia_ ), on which the owner often had his deeds depicted. Bedsteads were lacking; the Indians slept on several hides and covered themselves with skin robes. But they had backrests of willows strung with sinew, which were suspended from tripods and covered with buffalo skins. **Hassrick, Royal B.** The Sioux: Life and Customs of a Warrior Society. **Norman: University of Oklahoma Press, 1964, 212–13.** . . . fifteen to twenty feet high, and extremely heavy, the poles must be well secured in case of wind and storm. Three main poles were first set up as foundation, usually secured with a guy rope to a stake driven into the earth at a point approximately in the center of the tipi. The remaining poles were then placed in the crotch formed by the junction of the main poles. The exact position of the poles was adjusted after the cover was placed, forming an ellipse rather than a perfect circle so that the front of the tipi was steeper than the rear... Beds of folded buffalo robes were placed away from the door at intervals around the perimeter of the tipi. The place of honor opposite the door at the back of the lodge was sometimes reserved for the master, although often he and his wife slept nearer the entrance to the south. Back rests of willow rods supported by tripods were placed at the head and foot of the owner's bed. Parfleches and soft leather storage bags containing foods, utensils, and clothing were stacked along the dew cloth between the beds. On a forked pole to the left of the door hung the water bag. Firewood was stored just outside the door. From the tipi poles, or from tripods supporting the back rests, the man might hang his painted bonnet case and his medicine pouch. The shield was hung from a forked pole at the rear of the tipi... . . .
some families tied cut deer hoofs and later tin bells to the tipi tightening rope. When the wind blew, music filled the tipi... Decorated on the exterior with its four medallions and rows of quilled pendants paralleling the entrance, frequently painted with bold symbols and animal figures belonging to the husband, and topped by a spiral of graceful lodge poles extending from the apex, often tipped with long white or red deerskin streamers or a scalp which airily fluttered in the breeze.... The dew cloth, embroidered in horizontal stripes of quilling, served as a handsome background for the painted back rests and decorated packing cases... The flaps, for example, were "woman's arms." **Humfreville, J. Lee.** Twenty Years Among Our Hostile Indians. **New York: Hunter and Co., 1899, 75, 101–4.** Page 75 There was no regularity in setting the lodges of an Indian camp. No one, not even the chief, had supervision over the manner or place where the lodges were to be set. They were erected in such places as best suited each individual owner. There were no streets or walks, neither did the owner of a lodge claim the space around it or keep it clean, and no sanitary precautions whatever were taken. Dirt, bones, and filth of every description were strewn everywhere, and the stench was frequently unendurable to any... Page 101–4 Indian women did all the tanning for the family requirements and the work was done in various ways. When it was intended that a skin should be very soft and pliable, only the brain of the animal and clear fresh water were used. Skins tanned in this way were made into dresses, leggings, moccasins, and other articles of personal and wearing apparel. The skins used for the lodge covers, and hides used for horse equipment and coarser articles of home and camp life were tanned in a different way and with much less care.
They were simply thrown into the water and allowed to remain until their hair fell off, when they were stretched tight on the ground by driving sticks through holes cut in the edges while the hide was wet and soft. Scraping knives made from the horn of the elk were generally used. The women would get down on their hands and knees on the hide and scrape off all the flesh and pulpy matter. After the hide had dried it was put through a process of softening before it was in condition to be used as a lodge cover. The hide used for this purpose was usually that of a buffalo bull, as it was much thicker and more serviceable than that of a buffalo cow. Lodge covers were made by the women, who sewed them together with thongs. From ten to twenty hides were required for the covering of each lodge according to its size. Poles for the lodges were difficult to obtain by the Indians of the plains, where wood was scarce and good straight poles hard to find, and they were accordingly highly valued. They were procured and finished by the women, and were necessarily of sound, straight young trees, generally of pine, birch or other light but strong wood. They were from one and one-half to three inches in diameter, and from fifteen to twenty-five feet in length. The bark and every small knot or growth was carefully removed from them and they were made perfectly smooth. In putting up a lodge from fifteen to twenty-five of these poles were used. The covering was drawn over them and fastened with skewers or sticks where the edges of the covering met. At the top of the lodge was a large flap in the corner of which the end of a pole was inserted. When this flap was closed it kept the heat in and the cold out, and unless opened when the fire was built the interior would soon be filled with smoke. The lower edge of the lodge covering was fastened to the ground by long pegs driven deep into the earth. The pegs prevented the lodge from being blown over by high winds. 
The entrance was the only hole of any size, except the top, in the entire covering. This entrance was covered by a hide, drawn over a hoop made from a small branch and hung over the hole. The opening was rarely closed, except in cold weather or to keep the dogs out. Even the best of these lodges afforded but slight protection against severe storms or bitter cold. Rain found its way into them and the snow blew through the holes underneath the covering, half-filling the interior, making it exceedingly uncomfortable. During the severe rainstorms the beds and sometimes the lodges were flooded, and the occupants were compelled to flee to higher ground with such effects as they could carry. Lodge fires were necessarily built on the ground and around them the women and children huddled to keep warm. During winter storms when the Indians were compelled to go about their camps in the performance of necessary duties they frequently did so barefoot, as their moccasins and leggings became saturated with water or snow in a short time, and when in that condition were cold and disagreeable to the wearer. They preferred to keep their footwear dry even at the expense of temporary discomfort. Both men and women frequently carried their moccasins and leggings in their hands after having been caught in a cold rain or snowstorm. Sometimes during the cold weather they wore sandals made from the flint hides of some animal as a protection to the soles of the feet. During the prolonged cold storm or blizzard, which was frequent in the far north, the Indians and their animals, including their dogs, were great sufferers. Lodges of this description were probably the best habitations that could be used by these nomads; for, being continually on the move, it was necessary to transport their entire camp equipment from place to place. 
They were easily and quickly put up and taken down, and it was a rare thing, even in the severest wind storm, for one of them to be blown down, although it sometimes occurred. Frequently the coverings were fantastically painted with figures outlined in different colors, red and blue being the favorite. These figures represented different scenes, some depicting a warrior seated on his horse in deadly combat with a hostile brave; an Indian fighting a bear with his spear; an Indian on foot killing a man with his bow and arrow, tomahawk, knife, or lance; or some other prodigious deed of valor. Sometimes the entire lodge covering was decorated with these rude drawings. They generally commemorated some great event in the career of the occupant of the lodge or hairbreadth escape of himself or some of the male members of his family. These drawings were usually made by the men, some of them showing considerable artistic ability. Some of the women also possessed no little skill. Nearly all Indians were fond of decorating their lodge covers in this manner, using the brightest colors they could obtain, and some of their imaginary or real deeds of valor were portrayed in the most picturesque style, though they were often more glaring than artistic. When the wild Indians retired to sleep they wrapped themselves in the robes or blankets they had worn during the day. The beds were more a name than a reality; these consisted of the dried hides of buffalo, horses, or other animals, laid upon the ground to keep out the dampness. Occasionally they placed an additional buffalo robe or two on top. For pillows they used skins, or any bulky soft stuff which they might have at hand. The interior arrangement of an Indian lodge was a series of such beds arranged in a circle, leaving a space in the center for the fire on which the cooking was done, and it also served to some extent to warm the lodge in winter. 
Page 107 People of today little realize how long it took the Indians to acquire or accumulate the small amount of stuff they had in their keeping. Beads, porcupine work, Iroquois shells, claws and teeth of bears and mountain lions, arrowheads, lances, shields, pipes and stems, bows and arrows, and horse equipments largely made up their possessions. These were handed down from generation to generation, and were much prized as having been the property of their forefathers. As they never cleaned or washed their effects, their dirty condition can be readily imagined. All their habitations were foul-smelling from the unutterably filthy condition of their entire belongings.

### The Liner or Lining

**i** I have a copy of Ella Deloria's papers and in them she refers to the lining by the Lakota term _ozan_ (there is a nasal n and the o is slightly separated from the z). Then I checked with all the language instructors here and they came back with the same translation, a curtain or liner that hangs down. A dew-curtain, called an _oza,_ was hung all around and was long enough to be tucked under the carpet. This was made in matched pieces, with strings attached for tying them together and to the tipi poles. Many an oza (ozan) was elaborately decorated at about 2- to 4-foot intervals, with vertical bands of fancy work in patterns of bright colors—or so painted. This dew-curtain, which was tied to the poles at a height of perhaps 4 feet, and the sloping tipi wall, together formed a little circular alleyway, like a lean-to in shape. And there all surplus foods and robes were stored, as well as extra personal belongings of the family, all packed in proper containers. This storage area was an insulation as well, and the inside of the tipi was always noticeably warmer because of it. The dew-curtain was usually of either doe or calfskin. This summer curtain was purely for decoration and was hung only across the back of the _ticatku_ (place of honor).
If anything, this was more elaborate than the winter curtain, the primary purpose of which was to protect against extreme cold. Again nowhere could we come up with any word for the addition as I refer to it. I now think that Laubin was confused when translating terms in regard to the lodge and since that is the only book so far, it became the correct term. Again in the Lakota language the word ozan translates as something that hangs down (i.e., a curtain in Western terms; _ozanpi_ translates to "bed curtains"). Therefore, no relationship to what Laubin describes and I call a canopy. Thus, the term would be more appropriately used for the so-called liner or liners or robes used to hang from the poles. Pete Gibbs, past curator of the British Museum and now teacher at the University of South Dakota, supplied this information in e-mail and personal contact. **ii** As well, Buechel gives _oza(n)_ as a curtain, and _oza(n)pi_ as bed curtains—probably a vertically rigged curtain to give privacy to sleepers. I think _ozan_ also refers to the general liner—but that would have to be verified. I know that Eastman called the interior rain ceiling/heat retainer an "ozan"—enforcing the meaning of it being a curtain (of any kind). Undoubtedly all tribes had their own terminology for such riggings for the interiors of tipis. Rather than establish Lakota terminology as the norm, perhaps the appropriate, corresponding English terms could/should be used. English speakers have enough trouble pronouncing foreign words correctly. I hear Lakota and other words poorly uttered frequently: _Parfletch_ (for parfleche)—"sh" sound at the end, not "etch"! ( _chaNUMBpa_—the second syllable like English "numb"!) Oj vej! I think the Laubins got it a bit wrong. Milford Chandler actually told them about the ceiling or temporary rain shield, which they called "ozan." They didn't believe him at first, and then didn't credit him as an informant for it. I think Mr. Chandler got his info from Dr.
Charles Eastman, a close personal friend of his. In Lakota, an ozan is a curtain, and the Lakota term for what most people call the liner or dew cloth. At times the liner would be left to hang straight down to reduce the area to be heated inside the lodge. The term evidently derives from _oyu'zan,_ "to spread out, as a curtain" (according to Buechel). He translates _ozan_ as a curtain (not a ceiling). He gives bed curtains as _ozanpi._ Again, the "a" is nasalized. These are curtains that are hung from ropes stretched across a lodge to give sleeping couples privacy. I think Chandler used _ozan_ for the liner, dividers ( _ozanpi_ ), and a temporary rain shield that could be rigged as a semicircular ceiling toward the back of the inside of a tipi, behind the fire. Probably no one ever cut a separate half circle to make a permanent ceiling. In extremely cold weather, the partial ceiling could also serve to trap heat from the fire, making a snug compartment in the back of the lodge. This is what most people now think of as an ozan. Another misconception! Benson Lanford, noted authority on Native American material and author, supplied this information to me in personal phone calls and letters.

# Glossary

### Terms Used in the Making of a Tipi

**Bevel:** Putting an angle on a blade to make it sharp.

**Brain-tanned:** Using the brains of an animal to break down the fibers of an animal hide to make it soft.

**Daguerreotypes:** Early photographic process of the 1830s–1850s that produced images on silvered metal plates.

**Drawknife:** Double-handled blade used for removing surface bark or wood.

**Gore:** Triangular piece of material that gives extra stretch and strength in going around the tie point of the poles.

**Lockstitch:** Machine stitch that holds the top needle thread and the bobbin thread at even tension so the two threads lock in the middle of the cloth, making a strong seam.

**Ozan:** Liner inside the tipi used to insulate against the weather.
**Radius point:** A measurement used to find the circumference of the tipi.

**Rebar:** Concrete reinforcing bar made of steel or iron.

**Travois:** Poles lashed on either side of a horse or dog. At the base of two or more poles are lashed smaller poles at a right angle, which help form a platform used for carrying camp gear or materials.

### Canvas Terms

**Duck:** The name derives from a trademark of a duck stenciled on heavy sail cloth imported from Europe around 1840. The term applies to a broad range of heavy, plain, flat, woven fabrics. Cotton duck breathes, or lets air pass through the fibers.

**Army Duck:** Two or more plied yarns in both warp and filling produce a cloth of high tensile strength that meets U.S. Army standards.

**Single-Fill (Ounce) Duck:** Fabric made with coarse, single-ply yarn. There are two warp yarns for each fill yarn. The warp yarns are woven in pairs, side by side, sized, and are predominant over the filling yarns. Untreated single-fill duck can shrink as much as 7 percent. This much shrinkage has the same effect as cutting 12 inches from the bottom of an 18-foot tipi. In other words, your 18-foot tipi may end up a 17-foot tipi if you don't choose the right fabric. Also, single-fill fabrics, when wet, have a tendency to leak if touched. (This is due to the looseness of the weave and of the yarns.) This shrinkage will also occur when painting a cover.

**12-Ounce Natural Canvas:** The 12-ounce-per-square-yard, single-filled material. The natural water-repellent qualities of the fabric provide a nice dry tent, especially after it has shrunk up a bit. The material is breathable and has good insulating qualities compared to lighter-weight fabrics.

**14.90-Ounce Natural Canvas:** 14.9-ounce-per-square-yard, 100-percent cotton duck, single-filled material is a popular tent material and for good reason. It is tough, water repellent, and warm.
Because it has 20 percent more cotton woven in it than the 12-ounce material, it has superior insulating qualities. The additional cotton also provides a tighter, more water-repellent, and more durable fabric. Although the untreated canvas is susceptible to the harmful effects of sun and moisture, it is an inexpensive alternative for arid to semiarid climates.

**Marine:** Product description for Marine Finish—a finish specifically designed as the best available in water repellence and mildew resistance. Do not confuse this with Marine Duck, a term often used for any duck sold to the marine trade. Sunforger Marine Finish Boat Shrunk is the original finish to offer the best in weathering qualities. Originally the same kind of process was called Vivatex, but this finish was discontinued some years ago, though some of it is still around.

**9.5-Ounce Marine-Treated Army Duck:** Quality fabric that is tightly woven, lightweight, and durable. It has a dry treatment that aids in mildew resistance and water repellence.

**10.10-Ounce Marine-Treated Army Duck:** A premium cotton fabric more tightly woven than the single-filled variety, creating excellent strength and durability in a lighter-weight base fabric. Shrinkage is greatly reduced with this fabric.

**Sunforger Marine Finish Boat Shrunk:** This finish comes in both 10-ounce and 13-ounce weights. It contains a special added compound that gives two to three times greater water repellence and mildew resistance than other "marine" finishes. The marine- (mineral-) treated army duck is a firm, high-thread-count, plain-woven fabric made with plied (twisted) yarns in both warp and filling. There are at least two yarns in each strand. The 10.10-ounce weight and 12.65-ounce weight refer to the ounces of thread per square yard of material. If you live in an area of high humidity or rain, such as the Great Lakes area or the Southeastern United States, it is highly recommended that you use only treated material.
**Sunforger Fire Resistant:** This product has the treatment for water repellence and mildew resistance and an additional flame-retardant quality that meets the flammability standards of CPAI-84, an industry-wide standard. Many states now require that all tents and tipis or any camping dwelling be fire resistant. It is not the same as being fireproof.

### Synthetic Canvases and Treatments

**Acrylic-Coated Vinyls:** 100-percent synthetic materials; they are much heavier, have a problem with condensation, and can be flammable. They also lack the ability of cotton to expand when wet.

**15-Ounce Starfire:** This fabric is the equivalent of the "all weather" ducks that are available. This is a 45-percent-polyester/55-percent-cotton base fabric pigmented with an acrylic topcoat. Each application is heat-sealed onto the base fabric for added strength. It is water, mildew, and fire resistant. It meets Title 19, CPAI-84 (section 6), and FMVSS-302 fire requirements. It is soft, flexible, and easily cleaned. It will last a long time, but cannot be painted.

**Polaris:** 50-percent-cotton/50-percent-polyester blend. This fabric is sturdy and long lasting. It is UV resistant, breathable, mildew resistant, water repellent, and flame retardant. Polaris is flexible in extreme temperatures and recommended for tipis that will be set up for extended periods of time. It is very well suited to customization with acrylic paints or exterior latex house paints.

**Sunbrella:** This material is technically not canvas (which I think of as being natural fiber) but is canvas-like. It is made in 46- and 60-inch widths in an amazing array of colors, including many bold stripe patterns. Acrylic material has the advantage of being very strong and extremely decay resistant, and does not change dimension when wet. It cannot be painted. Sunbrella is used on the bottom extension for liners and some tipis.
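The shrinkage percentages quoted for untreated fabrics translate directly into lost tipi size. As a rough illustration of the arithmetic (the function name here is a hypothetical helper, not part of any tipi-making tool):

```python
def shrinkage_loss_inches(dimension_ft: float, shrink_fraction: float) -> float:
    """Inches lost from a cover dimension at a given shrink fraction."""
    return dimension_ft * 12 * shrink_fraction

# Untreated single-fill duck can shrink as much as 7 percent;
# on an 18-foot dimension that works out to about 15 inches.
print(f"{shrinkage_loss_inches(18, 0.07):.1f} inches")
```

At the full 7 percent the loss on an 18-foot cover is even a little more than the 12 inches cited in the glossary, which is why a preshrunk or treated fabric matters for a cover that must hold its cut size.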
# Bibliography **Included works were written about a specific time period and based on first-person observations or artistic/photographic material.** 1832—Maximilian, Prince of Wied. "Travels in the Interior of North America 1832–1834." _Early Western Travels_. Edited by Reuben G. Thwaites. Cleveland, OH: Arthur H. Clark Company, 1906. --- 1838—Catlin, George. _Letters and Notes on the Manners, Customs, and Conditions of the North American Indians_ 2. New York: Dover Press, 1973. 1844—Carleton, Lt. James H. _The Prairie Logbooks: Dragoon Campaigns to the Pawnee Villages in 1844, and to the Rocky Mountains in 1845._ Chicago: The Caxton Club, 1943. 1849—Eastman, Mary H. _Dahcotah-Life and Legends of the Sioux Around Fort Snelling_. Minneapolis, MN: Ross & Haines, Inc., 1962. 1851—Mayer, Frank Blackwell. _With Pen and Pencil on the Frontier in 1851._ St. Paul, MN: Minnesota Historical Society, 1932. 1852—Kurz, Rudolph Friederich. _An Account of His Experiences Among Fur Traders and American Indians on the Mississippi and the Upper Missouri Rivers During the Years 1846 to 1852_. Washington, D.C.: U.S. Government Printing Office, 1937. 1862—Wakefield, Sarah. _Six Weeks in the Sioux Tepees._ Falcon, 2003. 1873—Kavanagh, Thomas W. _Domestic Architecture in the Comanche Village on Medicine Creek, Indian Territory, Winter 1873_. Self-published at http://php.indiana.edu/~tkavanag/asoule.html. 1990. 1874—Coleman, Winfield. _Feeding Scalps to Thunder: Shamanic Symbolism in the Art of the Cheyenne Berdache._ Vol. 1. People of the Buffalo. Dietmar Kuegler, Germany: Tatanka Press, 2003. 1876—Viola, Herman J. _Warrior Artists: Historic Cheyenne and Kiowa Indian Ledger Art._ Washington, D.C.: National Geographic Society, 1998. 1880—McCoy, Ronald. _Kiowa Memories: Images from Indian Territory 1880._ Santa Fe: Morning Star Gallery, 1987. 1885—Hamilton, Henry and Jean. _The Sioux of the Rosebud: A History in Pictures_. Norman: University of Oklahoma Press, 1981.
1890—Grinnell, George Bird. _The Cheyenne Indians, Their History and Ways of Life, I and II._ Lincoln: University of Nebraska Press, 1923. 1892—Mooney, James. _The Ghost-Dance Religion and the Sioux Outbreak of 1890._ Washington, D.C.: Government Printing Office, 1896. 1896—McClintock, Walter. _Old Indian Trails_. New York: Houghton Mifflin Co., 1923. 1896—McClintock, Walter. _Painted Tipis and Picture Writing of the Blackfoot Indians._ Southwest Museum Leaflet, no. 6 (1936). 1898—Ewers, John C. _Murals in the Round: Painted Tipis of the Kiowa and Kiowa-Apache Indians._ Washington, D.C.: Smithsonian Institution Press, 1978. 1898—Miller, Fred E. _Photographer of the Crows._ Missoula, MT: University of Montana: Carnan Vidfilm, Inc., 1985. 1899—Humfreville, J. Lee. _Twenty Years Among Our Hostile Indians._ New York: Hunter and Co., 1899. 1902—Albright, Peggy. _Crow Indian Photographer: The Work of Richard Throssel._ Albuquerque: University of New Mexico Press, 1997. 1902—Kroeber, Alfred L. _The Arapaho_. Lincoln: University of Nebraska Press, 1983. 1903—Brownstone, Arni. _Bear Chief's War Deed Tipi._ Vol. 2. People of the Buffalo. Dietmar Kuegler, Germany: Tatanka Press, 2005. 1903—Seton, Ernest Thompson. _Two Little Savages_. New York: Doubleday Page & Co., 1903. 1905—Tibbles, Thomas Henry. _Buckskin and Blanket Days: Memoirs of a Friend of the Indians_. A Bison Book, 1905. 1906—Aadland, Dan. _Women and Warriors of the Plains—The Pioneer Photography of Julia E. Tuell._ New York: MacMillan, 1996. 1909—Wilson, Gilbert L. _The Horse and the Dog in Hidatsa Culture._ Anthropological Papers of the American Museum of Natural History XV, Part II. New York: American Museum Press, 1924. 1910—McClintock, Walter. _The Old North Trail: Life, Legends and Religion of the Blackfeet Indians_. London: MacMillan, 1910. 1911—Fletcher, Alice C., and Francis La Flesche. _The Omaha Tribe._ Lincoln: University of Nebraska Press, 1972. 1912—Seton, Ernest Thompson.
_The Book of Woodcraft_. Garden City: Garden City Publishing, 1912. 1915—Page, Elizabeth M. _In Camp and Tepee_. New York: Fleming H. Revell Co., 1915. 1916—Durkin, Peter. "Cane Windbreaks for Tipis." _Whispering Wind_ 34, no. 6 (2005). 1916—Jennings, Vanessa Paukeigope. "Kiowa Battle Tipi." _Whispering Wind_ 34, no. 6 (2005). 1917—Campbell, Stanley (Vestal). "The Cheyenne Tipi." _American Anthropologist_ 17 (1915): 685–94. 1927—Campbell, Stanley (Vestal). "Tipis of the Crow Indians." _American Anthropologist,_ January–March 1927. 1928—Salomon, Julian Harris. _The Book of Indian Crafts and Indian Lore._ New York: Harper and Row, 1928. 1931—Douglas, Fredrick H., ed. _The Plain Indian Tipi_. Denver Art Museum, Leaflet no. 19 (April 1931). 1932—Seton, Ernest Thompson. "Tipis: Habitations of the Indians." _The Totem Board-Woodcraft Indian Service._ Vol. 2, no. 2. Seton Village, Santa Fe: University of New Mexico Press, Albuquerque, NM, February 1932, 62. 1935—Lowie, Robert H. _The Crow Indians_. New York: Rinehart, 1935. 1936—McClintock, Walter. _Blackfoot Tipi._ Southwest Museum Leaflet, no. 5, 1936. 1937—Marriott, Alice. "The Trade Guild of the Southern Cheyenne Women." _Bulletin of the Oklahoma Anthropological Society_, 4 April 1956. 1937—Pohrt, Richard A. _A Gros Ventre Painted Lodge._ Vol. 1. People of the Buffalo. Dietmar Kuegler, Germany: Tatanka Press, 2003. 1940—Lyford, Carrie A. _Quill and Beadwork of the Western Sioux._ Boulder, CO: Johnson Books, 1982. 1945—Ewers, John C. _Blackfeet Crafts_. United States Department of the Interior. Washington, D.C.: Stevens Point: Schneider, 1945. 1954—Hunt, W. Ben. _Indian Crafts and Lore._ New York: Golden Press-West Publishing, 1954. 1954—Lowie, Robert H. _Indians of the Plains_. American Museum of Natural History, 1954. 1955—Ewers, John C. _Horse in Blackfoot Indian Culture_. Washington, D.C.: Smithsonian Institution Press, 1955. 1957—Laubin, Reginald and Gladys.
_The Indian Tipi: Its History, Construction, and Use._ Norman: University of Oklahoma Press, 1957. 1960—Thulin, William D., and Thomas Thulin. _Tipi Life_. Self-published, 1960. 1961—Denig, Edwin Thompson. _Five Indian Tribes of the Upper Missouri._ Norman: University of Oklahoma Press, 1961. 1962—Grinnell, George Bird. _Blackfoot Lodge Tales._ Lincoln: University of Nebraska Press, 1962. 1964—Hassrick, Royal B. _The Sioux: Life and Customs of a Warrior Society._ Norman: University of Oklahoma Press, 1964. 1967—Bad Heart Bull, Amos. _A Pictographic History of the Oglala Sioux._ Lincoln: University of Nebraska Press, 1967. 1967—Hiller, Carl. _From Teepees to Towers: A Photographic History of American Architecture._ Boston: Little Brown & Company, 1967. 1969—Powell, Peter J. _Sweet Medicine: Volume One_. Norman: University of Oklahoma Press, 1969. 1970—_Mother Earth News._ "The Plains Indian Tipi, Build it and Move In." Vol. 1, no. 1 (January 1970): 29–40. 1971—Peterson, Helmut and Wolfgang de Bruyn. _Indianische Zeltbemalung._ Leipzig, Germany: Prisma-Verlag, 1990. 1972—Hungry Wolf, Adolf. _Tipi Life._ Good Medicine Book, 1972. 1972—Mails, Thomas E. _Mystic Warriors of the Plains._ Garden City, NY: Doubleday, 1972. 1972—Wood, Guy (Darry). "The All American, Do It Yourself, Portable Shelter." _Aquarian Angel_, 1972. 1973—Capps, Benjamin. _The Indians_ (The Old West). New York: Time-Life Books, 1973. 1973—Hunt, W. Ben. _The Complete How-To Book of Indiancraft._ Racine, WI: Macmillan Publishing Co., 1973. 1973—Past, Earl. "The Indian Tipi 'Castle of the Plains.'" _American Indian Crafts and Culture_ 7, no. 2 (February 1973): 8–11, 15. 1973—Raleigh, Steve, and Paul Alexander. "Tipi-Making." _Woodstock Craftsman's Manual_ 2, New York: Praeger, 1973. 1973—United States Department of the Interior. _Painted Tipis by Contemporary Plain Indian Artists_, 1973. 1974—Moore, John. "A Study of Religious Symbolism Among the Cheyenne Indians."
PhD diss., New York University, 1974. 1974—Neidenthal, John. "Cheyenne Decorations." _Florida Indian Hobbyist Assoc. Newsletter_, 1974. 1974—Robinson, Peter D. "Tipi on the Tundra." _Alaska,_ October 1974. 1975—Lodge Owners Society, Lodge Owners Quarterly, or Lodge Owners. Self-published newsletter/small magazine out of SD and then TN, 1975. 1975—_Women's Quilting Society._ The Old West Series. New York: Time-Life Books, 1975. 1978—Maurer, Evan M. _Visions of the People: American Indian Art._ New York: Doubleday, 1978. 1979—Blair, Neal. "The Incomparable Tipi." _Wyoming Wildlife_, March 1979. 1979—Brasser, Ted J. "The Pedigree of the Hugging Bear Tipi in the Blackfoot Camp." _American Indian Art_ (Winter 1979). 1979—Hatton, E. M. _The Tent Book_. Itasca, IL: Houghton Mifflin, 1979. 1979—Holley, Linda A. North Fla. Indian Culture Society. _Whispering Wind_ 12, no. 5 (1979): 16–17. 1979—Kolk, Glenn and Jacalyn. "Tipi Tips." _Camping Journal,_ April 1979, 33–52. 1979—_Mother Earth News._ "That Good Ol' Tipi Living," May/June 1979. 1979—Neale, Gay. "The Ultimate Mobile Home." _The Indian Trader_ 10, no. 10 (1979): 1–4. 1980—Coleman, Winfield. "The Cheyenne Women's Sewing Society." Conference on Design Symbology and Decoration at the Buffalo Bill Historical Center in Cody, WY, 1980. 1980—Glenn, George. "The Lodge." _Track of the Wolf._ _Book of Buckskinning I._ Texarkana, TX: Rebel Publishing Co., 1980: 53–73. 1980—Kiowa Indian News. _Painted Tipis of the Kiowa and Kiowa-Apache Indians,_ September 1980. 1980—Walter, Bill. "Tipi Know-How." _Track of the Wolf._ _Book of Buckskinning I._ Texarkana, TX: Rebel Publishing Co., 1980. 1980—Walter, Bill and Lila. "Flap Facts." _The Buckskin Report_, June 1980. 1981—Horse Capture, George P. "The Timeless Tipi Symbol of the Great Circle of Life." _The American West,_ March/April 1981. 1982—Brasser, Ted J. "The Tipi as an Element in the Emergence of Historical Plains Indian Nomadism."
_Plains Indian Anthropologist_ 1 (1982): 27–98. 1982—Finnigan, James T. _Tipi Rings and Plains Prehistory: A Reassessment of Their Archaeological Potential._ National Museums of Canada, 1982. 1982—Jackson, Jaime. _The Canvas Tipi_. Lafayette, CA: Lodgepole Press, 1982. 1982—Thomson, Scott. "A Tipi Dedication." _Whispering Wind_ 15, no. 3 (1982): 22–23. 1984—Engages. "Canvas Tipi." _The Museum of the Fur Trade Quarterly_ 20, no. 3 (1984): 13–14. 1984—Lorenz, Ray. "A Shelter for All Seasons." _Sports Afield,_ October 1984, 66–69, 108–112. 1984—Yue, David and Charlotte. _The Tipi: A Center of Native American Life._ New York: Knopf Books for Young Readers, 1984. 1985—Lynch, "Owl" Lanny Winterin. "Tipi Furnishings." _Buckskin Report_ (Spring 1985). 1986—Wuellner, Lance H. "The Indian Tipi." _Muzzle Blasts,_ November 1986. 1987—O'Meara, Jim. "The Terrible Tipi." _Muzzleloader_, November/December 1987. 1987—Whitefield, Patrick. _Tipi Living._ _Simple Living._ East Meon, Hampshire, UK: Permanent Publications, 1987. 1988—Peterson, Karen Daniels. _American Pictographic Images: Historical Works on Paper by the Plains Indians_. Alexander Gallery-New York and Morning Star Gallery-Santa Fe, New Mexico. Princeton Polychrome Press, New York, 1988. 1989—Warcloud, Paul. _Dakotah Sioux Indian Dictionary._ Tekakwith Fine Arts Center, Sisseton, SD. 1990—Nabokov, Peter, and Robert Easton. _Native American Architecture._ New York: Oxford University Press, 1990. 1990—Peterson, Helmut and Wolfgang de Bruyn. _Indianische Zeltbemalung._ Leipzig, Germany: Prisma-Verlag, 1990. 1990—Reese, Frank Pond. _The 20th Century Indian Tipi: How to Choose and Use a Tipi Today_. Reese Tipis Publication, 1990. 1990—Scriver, Bob. _The Blackfeet: Artist of the Northern Plains, The Scriver Collection of Blackfeet Indian Artifacts and Related Objects, 1894–1990._ Kansas City: Lowell Press, Inc., 1990. 1993—Atwill, Lionel. "Tepee: The Ultimate Hunting Lodge." _Sports Afield,_ October 1993, 74–80.
1993—Brewer, Kathy. "A Brief Discussion of 19th Century Plains Women's Roles." _Whispering Wind_ 26, no. 2 (1993): 20–23. 1993—"The Buffalo Hunters." The American Indians Series. Alexandria, VA: Time-Life Books, 1993. 1993—Goble, Paul. _Her Seven Brothers_. New York: Bradbury Press, 1988. 1993—Lewellyn, Dixie. _A Plains Indian's Talking Tipi_. Beverly Hills, FL: Rhythm & Reading Resources, 1993. 1994—Kaye, Dena. "Taming a Tepee: A Western Fantasy in Aspen." _Architectural Digest_, August 1994, 72–75, 139–141. 1994—Szabo, Joyce M. _Howling Wolf and the History of Ledger Art_. Albuquerque: University of New Mexico Press, 1994. 1994—Taylor, Colin F. _The Plains Indian_. New York: Crescent Books, 1994. 1994—_This is My Tipi: A Gallery Guide to Dreams and Dusty Stars: Blackfeet Lodge Decoration._ Pamphlet, High Desert Museum, Bend, OR, September 1993–July 10, 1994. 1995—Blue Evening Star. _Tipis & Yurts: Authentic Design for Circular Shelters._ Asheville, NC: Lark Books, 1995. 1995—Durkin, Peter. "Black Feet Tipis." _Whispering Wind_ 30, no. 6 (1995): 38–39. 1995—Durkin, Peter. "Carrying Tipi Poles: You can get there from here." _Whispering Wind_ 30, no. 5 (1995): 36. 1995—_Tribes of the Southern Plains: The American Indians_. Alexandria, VA: Time-Life Books, 1995. 1996—Durkin, Peter. "Cheyenne Tipi Beds." _Whispering Wind_ 34, no. 2 (2004). 1996—Durkin, Peter. "Cheyenne Tipi Guilds." Unpublished article, 1996. 1996—Durkin, Peter. "Miniature Tipis and James Mooney." _Whispering Wind_ 28, no. 2 (1996): 40–43. 1996—Durkin, Peter. "Rawhide Tipi Doors." _Whispering Wind_ 27, no. 5 (1996): 39–41. 1996—Jennys, Susan. "The Tipi in the Early 1800s." _Muzzleloader,_ January/February 1996, 45–49. 1996—McCoy, Ron. "Searching for Clues in Kiowa Ledger Drawings." _American Indian Art Magazine_ (Summer 1996): 54–61. 1996—Redfern, Patrick. _The Tipi-Construction and Use_. Self-published booklet, Book Publishing Co., 1996. 1997—Durkin, Peter.
"The Hide Tipis." _Whispering Wind_ 29, no. 1 (1997): 38–39. 1997—Durkin, Peter. "Tipi Camps." _Whispering Wind_ 28, no. 5 (1997): 30–33. 1997—Ewers, John C. _Plains Indian History and Culture: Essays on Continuity and Change._ Norman: University of Oklahoma Press, 1997. 1997—Housler, Wes. "Mountaineers and Hide Lodges." _Rendezvous,_ October/December 1997. 1997—Jennys, Susan. "Ladies Living History: Portraying the Plains Indian Woman Part II." _Muzzle Blasts Magazine,_ May 1997, 43–45. 1997—Miller, Preston E., and Carolyn Corey. _The Four Winds Guide to Indian Artifacts_. Atglen, PA: Schiffer Publishing Ltd., 1997. 1998—Chronister, Allen. "Chief Washakie and an Eastern Shoshone Camp." _Whispering Wind_ 29, no. 3 (1998): 21–23. 1998—Chronister, Allen. "Nez Perce Camp, Rawhide Tipis and Hats." _Whispering Wind_ 29, no. 6 (1998): 24–26. 1998—Durkin, Peter. "Plains Indian Encampment." _Whispering Wind_ 29, no. 4 (1998): 38–39. 1999—Garcia, Louis. "Tipi Tinklers." _Whispering Wind_ 30, no. 2 (1999): 4–11. 1999—Hunter, Tony A. "Short Visit to a Tipi: Parts One and Two." _Muzzleloader_, January/February 1999, 26–30. 1999—Jones, James E. _Make Your Own Tipi._ Living History Publishers, Inc., 1999. 1999—Living History Publishers, Inc., _Tipi Living Magazine._ 5 issues (July 1999–Winter 2001). 1999—Terry, Mike. _Daily Life in a Plains Indian Village 1868_. New York: Clarion Books, 1999. 2000—Adams, Kimberly L. and Dawson Kurnizki. _Tipi (Native American Homes)._ New York: Rourke Publishing, 2000. 2000—Berry, Charlotte. "Tipi." _Cowboys and Indians_ (Fall 2000). 2000—Chronister, Allen. "Cloth Tipi Covers." _Museum of the Fur Trade Quarterly_ 36, no. 3 (2000): 13–14. 2000—Durkin, Peter. "Southern Cheyenne Hide Tipi." _Whispering Wind_ 30, no. 6 (2000): 44–45. 2000—Roller, Pete. "Winter Tipi Camping." _Whispering Wind_ 30, no. 6 (2000): 22–25. 2001—Durkin, Peter. "Tipi Interiors." _Whispering Wind_ 31, no. 3 (2001): 42. 2001—Goble, Paul. _Her Seven Brothers_. Reprint ed.
London: Aladdin Books, 1993. 2001—Greene, Candace S. _Silver Horn: Master Illustrator of the Kiowas._ Norman: University of Oklahoma Press, 2001. 2001—Pearson, David. _Yurts, Tipis and Benders or Circle Houses: Yurts, Tipis and Benders._ White River Junction, VT: Chelsea Green Publishing, 2001. 2002—Geissal, Dynah. "Tipi." _Backwoods Home Magazine,_ July/August 2002, 17–23. 2002—Macek, Jiri. _Taborime V Tipi._ Liga lesni moudrosti, 2002. 2003—Bruno, Isabelle. _Yourtes et Tipis_. France: Hoëbeke, 2003. 2003—Cannavaro, Brian. _How to Set up a Blackfoot Lodge._ Kalispell, MT: Fort Selish Spice and Trading Co., Inc., 2003. 2003—Cortez, Javier and Dyanne Fry. _Tipi: A Modern How-To Guide._ Austin, TX: Dos Puertas Publishing, 2003. 2003—Hunt, Heidi. "Tipis and Yurts." _Mother Earth News_, December/January 2003, 56–59. 2003—Price, Dan. "Living Free." _Mother Earth News_, December/January 2003. 2004—Durkin, Peter. "Cheyenne Tipi Beds." _Whispering Wind_ 34, no. 2 (2004): 26–27. 2005—Durkin, Peter. "Cane Windbreaks for Tipis." _Whispering Wind_ 34, no. 6 (2005): 32–35. 2005—Helland, Mary Arnoux. _Picking Up Ewers' Trail of the Fort Peck Reservation Assiniboines._ Vol. 2. People of the Buffalo. Dietmar Kuegler, Germany: Tatanka Press, 2005. 2005—Jennings, Vanessa Paukeigope. "Kiowa Battle Tipi." _Whispering Wind_ 34, no. 6 (2005): 16–18. 2006—Belitz, Larry. _The Buffalo Hide Tipi of the Sioux._ Sioux Falls, SD: Pine Hill Press, 2006. 2006—Hungry Wolf, Adolf. _The Tipi: Traditional Native American Shelter_. Summertown, TN: Native Voices, 2006. # Resources ### **U.S. Tipi Makers** #### AH~KI Tipi 510.268.8779 --- hometown.aol.com/redpath/earth.html #### Anchor TeePees PO Box 3477 --- Evansville, IN 47733 800.322.8368 www.anchorinc.com/teepees.html #### Buffalo Days Tipi R.D. 1, Box 70 --- Galway, NY 12074 578.882.9997 www.portalmarket.com/buffalodaystipi.html #### The Colorado Tent Company 6489 E.
39th Ave --- Denver, CO 80203 800.354.8368 303.294.0924 www.coloradotent.net #### The Colorado Yurt Company or Earthworks Tipis 28 W. S. 4th St. --- Montrose, CO 81402 800.288.3190 www.coloradoyurt.com #### Conneautville Canvas Conneaut Ville, PA 814.587.2755 --- #### Dave Ellis Canvas Products 387 CR. 234 Durango, CO 81301 877.259.2059 www.cowboycamp.net #### DeadBird Tipis 33905 RCR 43A --- Steamboat Springs, CO 80487 970.879.0314 www.deadbirdtipi.com/index.php hertzog@springsips.com #### Don Strinz Tipi 2325 'O' St. Rd. --- Milford, NE 68405 800.525.8474 (TIPI) www.strinztipi.com #### Dreaming Buffalo Tipi PO Box 9285 --- Santa Fe, NM 87504 505.424.8626 www.dreamingbuffalo.com #### Fabricon 806 W. Spruce --- Missoula, MT 59801 406.728.8300 fabricon.com #### Four Directions Tribal Dwellings 1801 Old Greensprings Hwy. --- Ashland, OR 97520 541.601.6997 541.821.0400 www.roguedwellings.com #### Four Seasons Tentmasters 4221 Livesay Rd. --- Sand Creek, MI 49279 517.436.6245 www.geocities.com/tentmasters #### Fox River Traders 110 Ombre Rose Dr. --- Combined Locks, WI 54113 920.759.2347 www.foxrivertraders.com #### Goodwin Cole 8320 Belvedere Ave. --- Sacramento, CA 95826 800.752.4477 goodwincole.com #### Harris Canvas & Camping 501 30th Ave. SE --- Minneapolis, MN 55414 612.331.1321 800.397.5026 www.harriscanvascamp.com #### The High Desert Trading Post Tularosa, NM --- www.highdeserttradingpost.com sales@highdeserttradingpost.com #### Idaho Canvas Products 195 Northgate Mile --- PO Box 50856 Idaho Falls, ID 83405 888.395.7999 www.idahocanvas.com/id50.htm #### Jesse Salcedo Tipis PO Box 620834 --- Woodside, CA 94062 650.369.0383 www.salcedocustomtipi.com/jesse.html #### Kinney's Tents and Tepees 1407 N. Custer Ave. --- Hardin, MT 59034 406.665.3422 888.523.3422 www.forevermontana.com/tepees.htm #### Konza Tipi 785.494.2797 --- barchery@kansas.net www.kansas.net/~barchery/ol'ebuff.html/konza_tipi.htm #### M BAR M 2970 Texas Ave.
--- Grand Junction, CO 81504 970.263.4599 www.teepees4u.com #### Manataka Tipis PO Box 476 Hot Springs --- Reservation, AR 71902 501.627.0555 www.manataka.org/page39.html #### Montana Canvas Box 390 --- Belgrade, MT 59714 406.388.1225 www.montanacanvas.com #### Nomadics Tipi Makers 17671 Snow Creek Rd. --- Bend, OR 97701 541.389.3980 www.tipi.com #### Northwest Tipis 2001 S. Main St. Rd. --- Horicon, WI 53032 920.485.4744 #### Old West Enterprises RR 1 Box 11 --- Lapwai, ID 83540 www.angelfire.com/id/tipimaker tipi_maker@yahoo.com #### Panther Primitives PO Box 32 --- Normantown, WV 25267 304.462.7718 www.pantherprimitives.com #### R. K. Lodges PO Box 58 --- Hackensack, MN 56452 218.675.5630 www.rklodges.com #### Real Goods Solar, Wind & Hydro Web site for an Ecologically Sustainable Future www.realgoods.com --- #### Reese Tipis, Inc. 2291-J Waynoka Rd. --- Colorado Springs, CO 80915 719.265.6519 866.890.8474 (TIPI) www.reesetipis.com #### Red Cloud Tipis PO Box 518 Pine Ridge, SD 57770 605.887.2810 --- indianyouth.org/redcloud.html #### Red Hawk Trading 321 N. 5400 W. --- Malad, ID 83252 800.403.4295 (HAWK) www.redhawk-trading.com #### Reliable Tent and Tipi 120 N. 18th St. --- Billings, MT 59101 406.252.4689 800.544.1039 www.reliabletent.com #### Sagebrush Tipi Works PO Box 1811 Priest River, ID 83856 877.993.1155 --- www.sandpoint.net/sagebrush/index.html #### Sheridan Tent and Awning PO Box 998 128 N. Brooks --- Sheridan, WY 82801 800.310.6313 www.sheridantent.com #### Sky Lodge Tipis 247 Granite St. --- Ashland, OR 888.488.8127 541.488.7737 www.skylodgetipis.com #### Spirit Tipis PO Box 262 --- Skull Valley, Arizona 86338 928.442.3225 www.spirittipis.com #### Spring Valley Lodges N. 3515 Hwy. 
F --- Brodhead, WI 53520 608.897.8474 (TIPI) #### Straw Bale Tipis PO Box 126 --- Moyie Springs, ID 83845 208.267.1086 www.strawbaletradingpost.homestead.com/Tipis.html #### Sweetwater Tipis and Canvas --- PO Box 262 Hayesville, NC 28904 828.389.4028 www.main.nc.us/openstudio/sweetwater/canvas.html #### Tent Smiths PO Box 1748 --- Conway, NH 03818 603.447.2344 www.tentsmiths.com #### Thunder Mountain Tent & Canvas 107 McClure Ave. --- Nampa, ID 83651 208.467.3109 800.925.9175 www.idfishnhunt.com/thunder.html #### Tomahawk Lodge Evolution to single pole–style tipi --- www.portalmarket.com/teepee.html #### Trapline Lodges PO Box 14 --- Whitehall, MT 59759 406.287.3580 www.trapline.com #### Warren "Two Bears" Billiter 6800 Englewood --- Raytown, MO 64133 816.353.6264 #### Western Canvas Supply & Repair PO Box 1382 --- Cody, WY 82414 800.587.6707 www.westerncanvas.com #### White Buffalo Lodges PO Box 1382 --- Livingston, MT 59047 866.358.8547 www.whitebuffalolodges.com #### Willow Winds 962 F-30 --- Mikado, MI 48745 989.736.3487 www.jmwillowwinds.com/index.shtml #### Wrights Canvas 41 Independence Way --- Cashmere, WA 98814 509.782.3932 #### Yakima Tent PO Box 391 --- Yakima, WA 98907 800.447.6169 www.yakimatent.com ### **Tipi Makers Around the World** ### **Australia** #### OneMoon Tipis PO Box 27 --- Kinglake Vic 3763 Tel. (61)+3+57 861 629 www.onemoon.com #### Rainbow Tipis 3/2 Brigantine St. --- Byron Bay, NSW. 2480 Arts & Industry Estate Tel. (+61) 02 66 855895 www.RainbowTipis.com.au #### The Tipi Company PO Box 555 --- Tipi Farm Dereham, Norfolk NR20 5PZ Tel. 00 44 (0)1362 680074 www.thetipico.com #### Tipis by Don O'Connor Gentle Earth Walking --- PO Box 395 Daylesford, AU 3460 Tel. 03 5348 7506 users.netconnect.com.au/~sueandon/index.html #### United Earth Tipis Tel.
02 95643991 --- www.unitedearth.com.au ### **Belgium** #### Tymmyt Tents 9320 Nieuwerkerken --- www.tymmyt.com/home.htm ### **Canada** #### Arrow Tipi Box 115 --- Burton, BC V0G 1E0 Tel. 866.902.3399 www.arrowtipi.com #### Bushwhacker 6517 Concession 7 --- Tosorontio, R.R.#1 Everett, ON L0M 1J0 Tel. 705.435.1211 www.bushwhacker.ca #### Fun Camp Company Box 7 --- Oro, ON L0L 2X0 Tel. 888.297.5551 www.funcampco.ca/Tipi.htm #### Labis Moon Canvas Dwellings BC labiscreations.com gitta@labiscreations.com #### Murray Tent and Awning Tel. 800.774.0442 --- www.murraytentandawning.com/html/teepee.html #### Porcupine Canvas 33 First Ave. --- Schumacher, ON P0N 1G0 800.461.1045 www.porcupinecanvas.com #### Quappelle Tipi Maker Box 1754 Ft. Qu'Appelle --- SK S0G 1S0 Tel. 306.332.4524 www.quappelletipimaker.com #### Sun Maker Arts Terry Wild --- Box 159 Cumberland, BC V0R 1S0 Tel. 1.250.2070 www.dwayneedwardrourke.com/Pages/Sunmaker/Pages/TipisByTerry.html #### Teepee Tseiwei 640 Atironta Wendake --- QC G0A 4V0 Tel. 418.842.0157 www.tipiquebec.com #### Traditional Villages Box 655 Biggar --- Saskatchewan S0K 0M0 Tel. 306.948.3832 crazyhorse_193@hotmail.com #### Wi Tents and Tipis Charlevoix, QC Tel. 418.240.0295 --- www.witentes.com #### Wikwemikong Tipis 81 Yellek Trail --- North Bay, ON Tel. 705.472.2577 www.wikwemikongtipicompany.com #### Wolfchild Tipis and Tents 96 Mill Rd. --- Cardiff Echoes, AB Tel. 780.939.3866 members.shaw.ca/wolfchildinc/tipi.htm ### **Czech Republic** #### Delta tents V.Toman - DELTA International --- Smilovského 20 120 00 Prague 2 www.ares.cz/tents/index_uk.htm delta@ares.cz ### **France** #### Atelier de Sellerie Jean Lehman 2 rue de la garde --- 67 390 Saasenheim Tel. 03 88 57 76 59 www.tipi-tente.com jean.lehmann3@wanadoo.fr ### **Germany/Austria** #### Fam West GmbH, Rannetsreit 3 1⁄3 94535 Eging am See --- BRD, Germany Tel.
+49.(0)8544 – 9180878 www.naturzelte.de/eng #### Red Fox Delitzscher Straße 34 04129 --- Leipzig, Germany Tel. 0341/ 9 11 35 16 www.redfox-indianstore.de #### TiBo - Tipi am Bodensee Bernaumühle 2 D-88099 Neukirch --- Tel. 0 75 28 - 95 16 40 www.tipi-bodensee.de #### Tipi-Werkstatt A-8554 Soboth 155 --- Austria Tel. +43 3460 259 www.tipi.at/e/1st.html #### Tipi Zelte Prälat-Sommer Str. 46 D-76846 --- Hauenstein/Pfalz Tel. 06392-2390 www.tipi-zelte.de ### **Great Britain/Scotland** #### Albion Canvas Unit 6, Barkingdon Business Park Staverton Totnes Devon TQ9 6AN UK Tel. + 44 (0) 1803 762230 --- www.albioncanvas.co.uk #### Grays Marquees Southbank, Blackwater Rd. --- Newport, Isle of Wight PO30 3BG Tel. 01983 525221 tipi-tents.co.uk/index.html info@tipi-tents.co.uk #### Hearthworks Tipis Mr. Tara Weightman --- Bushy Combe Farm Bulwarks Ln. Glastonbury BA6 8JT Tel. 01749 860 708 www.hearthworks.co.uk/tipis.html #### Lassana Tipis The Linnet, Wrigglebrook Ln. --- Kings Thorn, Hereford, HR2 8AW Tel. 01981 541076 www.lassanatipis.com #### Manataka Tipis Every Hill, Shells Ln. Colyford, Devon EX24 6QE Tel. +(44)(0) 1297 553456 --- www.manatakatipis.co.uk #### Past Tents New Farm, Main St, Walesby, Newark Nottinghamshire, England --- NG22 9NJ Tel. 00 44 (0)1623 862480 www.past-tents.demon.co.uk #### Shelters Unlimited Rhiw'r Gwreiddyn Ceinws Machynlleth Powys --- SY20 9EX Tel. 01654 761720 www.tipis.co.uk #### Thunderbird Tipi Tel. +44 (0)1505 842103 --- www.piloto.u-net.com #### Timberline Tipis The Old Pottery --- Bull Lane Warminster, Wiltshire BA12 8AY Tel. 07979 420153 www.timberlinetipis.co.uk #### Wigwamsam Tipis The Strawbale Barn The Yarner Trust --- Welcombe Barton Welcombe Devon EX39 6HF Tel. 0044 (0)1288 352316 www.wigwamsam.co.uk #### Wolf Glen Tipis Williamhope Cottage Clovenfords Galashiels TD1 3LL Tel. 01896 850390 --- www.wolfglentipis.co.uk #### Woodland Yurts 80 Coleridge Vale Rd. S. Clevedon North Somerset BS21 6PG Tel.
01275 879705 --- www.woodlandyurts.co.uk #### World Tents Redfield --- Buckingham Rd. Winslow Bucks MK18 3LZ Tel. 01296 714555 www.worldtents.co.uk ### **India** #### Canvas Emprium 283, Azad Market --- Delhi-11006 Tel. +091 11 23628696 www.canvashome.com ### **Italy** #### Giorgio Strazzari Tipis Tel. 0039031807957 --- www.tepee.it lontrastrazz@libero.it ### **Japan** #### Gfield Tipis www.joy.hi-ho.ne.jp/gfield --- ### **Netherlands/Holland** #### ATELIER ANNELIES postcode 1054 (somewhere in central Amsterdam) to Atelier Annelies in 7831 AV Nieuw Weerdinge Tel. 0591-521018 --- www.6.brinkster.com #### Bosjuweel Tipis Koopvaardijweg 3, 6541 BR Nijmegen, 024-3776086 --- www.bosjuweel.nl/tentenevenementen/index.htm #### Tipi Verhuur Mr. Ben Acket --- Nieuwemaastrichtsebaan 11 5126 NS Gilze 06-51899830 www.tipiverhuur.nl #### Womime Wakan Tipis Grindweg 216 8483 JL Scherpenzeel (Friesland) From the Netherlands: 0561481405 --- www.womime-wakan.com ### **New Zealand** #### Jaia Tipis PO Box 93, Takaka --- Golden Bay Tel. 03 525 9102 www.jaiatipis.com ### **Poland** #### Hau Kola Tipis www.tipi-tent.com --- kola@tipi.com.pl ### **South Africa** #### Sacred Arrow Tipis www.icon.co.za/~tipi --- tipi@icon.co.za #### Tipis from Africa PO Box 1750 --- Nelspruit 1200 Tel. +27 (0)13 7440124 home.wanadoo.nl/jeff.mos/TipisFromAfrica ### **South America** #### Bacab – Nomad Art Movement Argentina, Brazil, Chile, Peru, Bolivia Tel. 00.54.11.4.779.0721 --- www.bacab-nam.org bacabnam@hotmail.com ### **Spain** #### Tipiwakan Tel. (00-34) 639.689.879 --- www.tipiwakan.org/page04_e.htm info@tipiwakan.org ### **Switzerland** #### FAM ZELTWELT GmbH --- Grossholz / Postfach 158 8253 Diessenhofen TG Switzerland Tel. ++41/52/657 5858 www.zeltwelt.ch/deutsch/tipi/index_deutsch_tipi.htm #### PEDDIG-KEEL Craft Supplies + --- Tipi Rental Bachstrasse 4 9113 Degersheim Tel. 071 371 14 44 www.peddig-keel.ch ### **Tipi Pole Suppliers** #### Buffalo Tipi Pole Co.
Idaho 208.263.6953 --- #### Chris Jenkins Canada 250.489.5141 jenkins.chris@shaw.ca #### Noisy Creek Adventures Jeff Everson --- Wisconsin 715.362.3903 noisycreekadv@hotmail.com #### Nomadics Tipi Makers Oregon 541.389.3980 --- #### Pole Specialties Montana 406.491.4966 #### Reese Tipis Colorado 866.890.8474 info@reesetipis.com --- #### Rembrandt Leather Iowa --- 712.286.6321 #### Willow Winds Jim Miller Michigan 517.736.3487 ### **Canvas Suppliers** #### Astrup Co. Dealers all over the United States --- Cleveland, OH 800.786.7601 www.astrup.com #### Claredon Textiles, Inc. 7630 Southrail Road, Unit A --- North Charleston, SC 29420 800.752.1332 www.claredontextiles.com info@claredontextiles.com #### Itex, Inc. PO Box 5187 --- Englewood, CO 80155 800.525.7058 sales@banwear.com #### John Boyle & Co. Dealers all over the United States Statesville, NC 800.438.1061 www.johnboyle.com ### **Buffalo Hide Tipi Makers** #### Larry Belitz (SS&A TRADERS) 7537 E. Belleview St. --- Scottsdale, AZ 85257 480.970.4854 www.buffalorobe.com #### Wes Housler 22 Bell Canyon Rd. --- Cloudcroft, NM 88317 505.687.3267 wes@pvtnetworks.net #### Mike Bad Hand Terry 541.964.3184 --- www.warriorsplus.com badhand@badhand.org #### "The Whirlwind" Ken Weidner www.ibco.net/whirlwind/index.htm --- whirlwind@ucom.net ### **Museum and Gallery Sites** Type in the word _tipi, tipis,_ or _teepees_ on many of these sites, and you will find hundreds of photos and drawings of lodges.
--- #### Beinecke Library Digital Collection, Yale University beinecke.library.yale.edu/dl_crosscollex/default.htm --- #### Biographical Dictionary of the Mandan, Hidatsa, and Arikara lib.fbcc.bia.edu/fortberthold/TATBIO.htm --- #### The British Museum www.thebritishmuseum.ac.uk/compass --- #### Camping with the Sioux: Fieldwork Diary of Alice Cunningham Fletcher, Smithsonian www.nmnh.si.edu/naa/fletcher/fletcher.htm --- #### Canadian Museum of Civilization www.ottawakiosk.com/civilization.html --- #### Colorado Springs Pioneers Museum www.cspm.org --- #### Colorado State University Libraries www.digital.library.colostate.edu --- #### Curtis Collection www.curtiscollection.com --- #### Division of Anthropology, American Museum of Natural History www.anthro.amnh.org --- #### Division of Anthropology, University of Nebraska State Museum www.museum.unl.edu/research/anthropology/anthro.html --- #### Domestic Architecture at the Comanche Village on Medicine Creek php.indiana.edu/~tkavanag/asoule.html --- #### Gallery of the Open Frontier, University of Nebraska Press gallery.unl.edu/Gallery.html --- #### Glenbow Archives www.glenbow.org --- #### Glenbow Museum www.glenbow.org/lasearch/searmenu.htm --- #### Indian Congress Photo Gallery, Omaha Public Library www.omahapubliclibrary.org --- #### The Lewis Henry Morgan Collection, New York State Museum www.nysm.nysed.gov/morgan --- #### Library of Congress, American Memory Collection memory.loc.gov/ammem/amtitle.html --- lcweb2.loc.gov/ammem/daghtml/daghome.html memory.loc.gov/ammem/index.html #### Library of Western Fur Trade Historical Source Documents www.xmission.com/~drudy/mtman/mmarch.html --- #### Mathers Museum Collections, The Wanamaker Collection www.indiana.edu/~mathers/collections/photos/wanamake.html --- #### Minnesota Historical Society Library and Collections www.mnhs.org/library/index.html --- #### Museum of Anthropology, University of Missouri-Columbia coas.missouri.edu/AnthroMuseum/default.shtm
#### National Gallery of Art

www.nga.gov/cgi-bin

#### National Museum of the American Indian

www.nmai.si.edu

#### The New York Public Library

digital.nypl.org/mmpco

#### The Old North Trail by Walter McClintock

www.1st-handhistory.org/ONT/album1.html

#### PBS, The West

www.pbs.org/weta/thewest/resources/archives/one/61_16.htm

#### Peabody Museum, Yale University

www.peabody.yale.edu/databases

#### Pikes Peak Library District

library.ppld.org/SpecialCollections/Project/admin/photosearch.asp?fields=subject&terms=Tipis

#### Plains Indian Drawings

www.tribalarts.com/feature/plains/index.html#7

#### Princeton University Library Western Americana photographs collection

diglib.princeton.edu

#### Rudolph Friederich Kurz's Sketchbook

www.xmission.com/~drudy/mtman/gif/kurz/kurz.html

#### Smithsonian Institution Research Information System (SIRIS)

www.siris.si.edu

#### Smithsonian's Collections of Kiowa Drawings

www.nmnh.si.edu/naa/kiowa/mooney.htm

#### Smithsonian, The Horse in Blackfoot Indian Culture by John C. Ewers

www.sil.si.edu/DigitalCollections/BAE/Bulletin159

#### South Dakota State Historical Society

www.sdhistory.org

#### Spurlock Museum at the University of Illinois, Housing the Laubin Collection

www.spurlock.unuc.edu

#### Trans Mississippi & International Exposition

www.omaha.lib.ne.us/transmiss

#### University of Oklahoma Western History Collection

libraries.ou.edu/etc/westhist/intro

#### University of Washington Libraries Digital Collections

content.lib.washington.edu/aipnw/index.html

#### Western Americana Collection, Princeton University

www.princeton.edu/~rbsc/department/western

# Photo and Drawing Credits

**Page ii:** Photo by Linda A. Holley.
**Page vii:** Photo by Wayne McDowell ("Weird Wayne").
**Page 1:** Courtesy of the Denver Public Library, Western History Collections. Photo by W. S. Soule, Call Number X-32133.
**Page 5:** Buffalo Bill Museum in Cody, Wyoming.
1998 photo by Linda A. Holley.
**Page 8:** Tallmadge Elwell Daguerreotype-Bridge Sq. Minneapolis, MN. Photo courtesy of the Minneapolis Public Library, Minneapolis Collection.
**Page 9:** Courtesy of Denver Public Library, Western History Collection.
**Page 12:** Pine Ridge Agency, S. D., Jan. 17, 1891. Private collection.
**Page 13:** Photo from Linda A. Holley collection.
**Page 19:** Gilbert L. Wilson, "Horse and Dog in Hidatsa Culture," _Anthropological Papers of the American Museum of Natural History,_ Vol. XV, Part II, New York, 1924.
**Page 20:** Postcard collection of Linda A. Holley.
**Page 22:** Seton design from his book _Little Savages_ and Salomon/Ben Hunt-style redrawn from their craft/woodland articles, ca 1932.
**Page 25:** Illustrations from _The Indian Tipi: Construction and Use,_ by Reginald and Gladys Laubin. Copyright 1957, 1977 by the University of Oklahoma Press, Norman. Reprinted with permission of the publisher. All rights reserved.
**Page 30:** Wes Housler buffalo-hide tipi, Custer Battlefield Museum, Garry Owen, MT. Permission from Put Thompson.
**Page 31:** Courtesy of the Western History Collection, University of Oklahoma Library.
**Page 32:** Drawings by Linda A. Holley.
**Page 33:** Drawings by Linda A. Holley.
**Page 34–35:** Drawings by Linda A. Holley with the permission of Brian Cannavaro.
**Page 37:** Drawings by Linda A. Holley.
**Page 38:** Southern Cheyenne 1907 tipi, 4 feet tall, from Freya's by permission.
**Page 39:** Drawings by Linda A. Holley.
**Page 43 (above):** Drawing by Linda A. Holley.
**Page 43 (below):** Photo by Linda A. Holley.
**Page 44:** Photo by Linda A. Holley.
**Page 50:** Photo by Jan Kisteek.
**Page 51:** Drawings by Linda A. Holley.
**Page 52:** Drawings by Linda A. Holley.
**Page 53:** Drawings by Linda A. Holley.
**Page 54:** Drawings by Linda A. Holley.
**Page 55:** Drawings by Linda A. Holley.
**Page 56:** Drawings by Linda A. Holley.
**Page 57:** Drawings by Linda A. Holley.
**Page 58:** Drawings by Linda A. Holley.
**Page 59:** Drawings by Linda A. Holley.
**Page 60:** Drawings by Linda A. Holley.
**Page 61:** Photos by Wayne McDowell.
**Page 62:** Photos by Wayne McDowell.
**Page 63:** Photos by Linda A. Holley.
**Page 64:** Drawings by Linda A. Holley.
**Page 66:** Drawings by Linda A. Holley.
**Page 67:** Drawings by Linda A. Holley.
**Page 68:** Drawings by Linda A. Holley.
**Page 69:** Postcard from Linda A. Holley collection.
**Page 70–71:** Drawings by Linda A. Holley.
**Page 72:** Photos by Wayne McDowell.
**Page 73:** Drawings by Linda A. Holley.
**Page 74:** Drawings by Linda A. Holley.
**Page 75:** Photos by Wayne McDowell.
**Page 76:** Drawings by Linda A. Holley.
**Page 77:** Liner made by Mike Terry. Photo by Mike Terry.
**Page 78:** Benson Lanford collection.
**Page 79:** Permission from the Western History Collections, University of Oklahoma Libraries, and Interior of Crow Lodge. Photographed by Richard Throssel.
**Page 82:** Drawings by Linda A. Holley.
**Page 83:** Drawings by Linda A. Holley.
**Page 84:** Drawing by Linda A. Holley.
**Page 85:** Drawing by Linda A. Holley.
**Page 86:** Photos by Wayne McDowell.
**Page 87:** Drawings by Linda A. Holley.
**Page 88:** Photos by Linda A. Holley.
**Page 89:** Permission from Guy Pazzogna Vaudois, France.
**Page 90:** Permission from Guy Pazzogna Vaudois, France.
**Page 91:** Permission from Atelier de Sellerie.
**Page 92:** Drawings A, B, and C by Linda A. Holley. D from Jaia Tipis of New Zealand.
**Page 94:** Photo from Rainbow Tipis in Australia.
**Page 96:** Photo by Linda A. Holley.
**Page 98:** Photo by Linda A. Holley.
**Page 99:** Photo by Linda A. Holley.
**Page 100:** Photo by Linda A. Holley.
**Page 101:** Photos by Linda A. Holley.
**Page 102:** Photo by Linda A. Holley.
**Page 103:** Photo by Linda A. Holley.
**Page 105:** Drawings by Linda A. Holley.
**Page 106:** Drawings by Linda A. Holley.
**Page 108:** Drawings by Linda A. Holley.
**Page 109:** Photos by Wayne McDowell.
**Page 111:** Drawings by Linda A. Holley.
**Page 112:** Drawing by Linda A. Holley.
**Page 113:** Drawings by Linda A. Holley.
**Page 114:** Drawings by Dr. Eweres.
**Page 116:** Courtesy of Benson Lanford collection.
**Page 117:** Drawings by Linda A. Holley.
**Page 118 (left):** Photo by Linda A. Holley.
**Page 118 (right):** Drawing by Linda A. Holley.
**Page 120:** Drawings by Linda A. Holley.
**Page 121:** Drawings and photo from Chicago Field Museum. Photo taken by Linda A. Holley.
**Page 127:** Photos by Louis Beergeron.
**Page 129:** Photo by Linda A. Holley.
**Page 131 (above right and left):** Photos by Linda A. Holley.
**Page 131 (middle):** James Jones.
**Page 131 (below):** Permission Nomadics Tipis.
**Page 132 (above left, above right, middle, and below left):** Photos by Linda A. Holley.
**Page 132 (below right):** Photo by Brewers.
**Page 134:** Photo by Linda A. Holley.
**Page 135:** Photo by Linda A. Holley.
**Page 137 (above and below):** Photos by Jim Creighton.
**Page 138 (above):** Photo by Carolyn Corey, Four Winds Trading Co.
**Page 138 (below):** Denver Art Museum. Photo by Linda A. Holley.
**Page 139:** Lodge made by Darry Wood. Owned and decorated by Linda A. Holley.
**Page 140 (above and middle):** Photos by Kathy Brewer.
**Page 140 (below):** Photo by Linda A. Holley.
**Page 141:** Buffalo Bill Museum. Photos by Linda Holley.
**Page 142:** Photo by Linda A. Holley.
**Page 143 (above):** Courtesy of Western History Collections, University of Oklahoma Libraries.
**Page 143 (below):** Collection of Linda A. Holley.
**Page 146:** Photos by Linda A. Holley.
**Page 147:** Photo by Linda A. Holley.
**Page 148:** Photos by Linda A. Holley.
**Page 150:** Drawings by Linda A. Holley.
**Page 151:** Drawings by Mike Cowdrey.
**Page 153:** Photo by Linda A. Holley.
**Page 154:** Photo by Linda A. Holley.
**Page 155:** Photos by Linda A. Holley.
**Page 156:** Photo by David Ansonia.
**Page 159 (above):** Photo by Jeff Mos.
**Page 159 (middle and below):** Photos by Linda A. Holley Arts.
**Page 167:** Photo by Linda A. Holley.
**Page 168:** Photo by Linda A. Holley.
**Page 169:** Photo by Steve Gill.
**Page 170:** Photos by Ken Weidner.
**Page 171:** Photo by Linda A. Holley.
**Page 172:** Photos by Linda A. Holley.
**Page 173:** Photo by Linda A. Holley.
**Page 176 (left):** Photo by Linda A. Holley.
**Page 176 (right):** Photo by Richard Reese.
**Page 184:** Tipi decorated by David Ansonia and made by Nomadics Tipis. Photo by David Ansonia.
**Page 185:** Photo by Chakra Tipis.
**Page 186:** Photo by Piers Conway.
**Page 187 (above):** Preston Miller collection.
**Page 187 (middle and below):** Photos by Jan Kisteck.
**Page 188:** Photo by "White Horse."
**Page 189:** Photos by Kriztina Szabo.
**Page 190:** Dmitri collection.
**Page 191:** Photo by Larry Schwartz.
**Page 192:** Private collection.
**Page 193:** Photo by Linda A. Holley.
**Page 194 (left two photos):** Photos by Rainbow Tipis.
**Page 194 (right):** Photo by Brook E. Demos of Chicago.
**Page 195:** Photos by Arrows Tipis.
**Page 196:** Drawings by Froit Tipis and Yurts, #148.
**Page 197:** Permission of Fun Camp Co. of Canada.

### Color Insert Credits

**Page 1:** Drawings by Linda A. Holley.
**Page 2 (above):** Putt Thompson.
**Page 2 (below):** Photo by Linda A. Holley.
**Page 3 (above):** Back of Linda A. Holley's tipi.
**Page 3 (below):** Photos by Linda A. Holley.
**Page 4 (above):** David Ansonia.
**Page 4 (below):** Photo by Linda A. Holley.
**Page 5 (above):** Photo by Kriztina Szabo.
**Page 5 (below):** Photo by Jan Kisteck.
**Page 6:** Photo by the Brewers.
**Page 7 (above):** Mount Carmel tipi by Schwartz.
**Page 7 (below):** Photo by Ken Weidner.
**Page 8 (above):** From Linda A. Holley collection.
**Page 8 (below):** Photo by Linda A. Holley.
\section{Introduction} This paper considers the problem of estimating an average treatment effect from observational or experimental data, provided that a sufficient set of control variables are available. We pose the question: might the statistical precision of our estimates improve if we used only a subset of the available controls or possibly a dimension reduced transformation of them? This question is evergreen in the applied social sciences (see \cite{leamer1983let} or \cite{hernan2020causal}, page 195), but is surprisingly tricky to navigate for many applied researchers. In this paper, we break the problem down by considering the somewhat stylized situation of discrete covariates with finite support, where we are able to conduct a thorough variance analysis. This paper examines this question in detail using tools from three distinct formalisms: potential outcomes, causal diagrams, and structural equations. We show (Section 2) that a key condition licensing valid causal inference from observational data can be expressed equivalently in each of the three distinct frameworks (conditional unconfoundedness, the back-door criterion, and exogenous errors), allowing us to alternate between perspectives as is convenient pedagogically. Importantly, this equivalence is established in terms of a generic function of observed covariates, meaning that it covers not only variable selection, but ``feature selection''; this generality means that insights built on this equivalence apply seamlessly to modern methods such as regression trees or neural networks, which implicitly introduce potentially non-invertible transformations of the observed covariates. For clarity, we focus on the simplified (yet fairly common in practice) setting of discrete covariates with finite support, which allows us to derive finite sample properties of common stratification estimators, including widely-used linear regression and propensity score methods. 
Section 3 presents two novel-but-elementary results that will be used to re-analyze earlier theoretical results pertaining to regression adjustment for causal effect estimation. The first result defines the notion of a minimal control function, allowing us to distinguish between necessary and sufficient statistical control for causal effect estimation. The second result is a finite-sample analysis of stratification estimators of average causal effects in the setting of discrete control variables with finite support. This finite-sample analysis, presented in Theorem \ref{theorem2}, articulates the conditions by which a control function may be viewed as optimal in the sense of minimum variance. Section 4 collects concrete examples illustrating practical implications of the theory presented in Section 3, detailing how these results relate to previous literature, both classic and contemporary. By bringing together these profound results in the context of a common statistical framework, we hope to harmonize their insights for practitioners. Section 5 concludes by discussing further connections to previous literature. \section{Formal frameworks for causal inference} Let $\RV{Y}$ be the outcome/response of interest, $\RV{Z}$ be a binary treatment assignment, and $\RV{X}$ be a vector of covariates drawn from covariate space $\mathspace{X}$, all denoted here as random variables. For a sample of size $n$, observations are assumed to be drawn independently as triples $(X_i, Y_i, Z_i)$, for $i = 1, \dots, n$. The goal of causal effect estimation is to understand how the response variable $Y$ changes according to hypothetical manipulations of the treatment assignment variable, $Z$. For simplicity, we will refer to our observational units as ``individuals'', although of course in applications that need not be the case. 
The essential challenge to causal estimation is that only one of the two possible treatment assignments can be observed; as a consequence, if individuals who happen to receive the treatment differ systematically from those who do not, either in terms of their likely response value or in terms of how they respond to treatment, naive comparisons between the treated and untreated units will not simply reflect the causal impact of the treatment --- the treatment effect is said to be {\em confounded} with other aspects of the population. The field of causal inference has proposed and developed a variety of techniques for coping with this difficulty, the most common of which is some form of regression adjustment (meant here to include propensity score estimators and matching estimators, etc), which entails estimating average causal effects as (weighted) averages of (estimated) conditional expectations. The key assumption that justifies this process is referred to as {\em conditional unconfoundedness}, which asserts that the measured covariates adequately account for all of the systematic differences between the treated and untreated individuals in our observational sample; formalizing this assumption can be approached in a number of ways, which we turn to now. Only after the notation of these formalisms has been introduced can our causal estimand, and the class of estimators we will study, be precisely defined. \subsection{Potential outcomes} \label{potential_outcome_section} The potential outcomes framework casts causal inference as a missing data problem: causal estimands are contrasts between pairs of outcomes that are mutually unobservable --- when we see one, we cannot see the other. At present, the standard reference for the potential outcomes framework is \cite{imbens2015causal}, which contains extensive citations to the primary literature. Let $\RV{Y}^1$ and $\RV{Y}^0$ refer to the ``potential outcomes'' when $\RV{Z}=1$ and $\RV{Z}=0$. 
For individual $i$, the {\em individual treatment effect} will be defined as the difference between the potential outcomes: $$\tau_i = Y^1_i - Y^0_i.$$ Other treatment effects, such as a ratio rather than a difference, are sometimes considered, but in this paper we focus on the difference. Because the potential outcomes $(\RV{Y}^1, \RV{Y}^0)$ are never observed simultaneously, individual treatment effects can never be estimated directly. However, {\em average} treatment effects can be identified (learned from data) provided certain assumptions are satisfied. The causal estimand this paper will focus on is the average treatment effect, or ATE: \begin{equation} \bar{\tau} \equiv \mathbb{E}[\RV{Y}^1 - \RV{Y}^0]. \end{equation} The precise population over which this expectation is taken will be discussed in more detail in section \ref{estimands}. The standard assumptions that allow this average effect to be estimated are: \begin{enumerate} \item Stable unit treatment value assumption (SUTVA), which consists of two conditions: \begin{enumerate} \item {\em Consistency}: The observed data is related to the potential outcomes via the identity \begin{equation}\label{gating} \RV{Y} = \RV{Y}^1 Z + \RV{Y}^0 (1 - Z), \end{equation} which describes the ``gating'' role of the observed treatment assignment, $Z$. \item {\em No Interference}: for any sample of size $\scalarobs{n}$ with $\RV{Y} \in \mathcal{Y}$ and $\RV{Z} \in \mathcal{Z}$, $(\RV{Y}_i^1, \RV{Y}_i^0) \independent \RV{Z}_j$ for all $i,j \in \{1, ..., \scalarobs{n}\}$ with $j \neq i$, which rules out interference between observational units. \end{enumerate} \item Positivity: $0 < \mathbb{P}(\RV{Z}=1 \mid \RV{X}= x) < 1$ for all $x \in \mathcal{X}$ \item Conditional unconfoundedness: $(\RV{Y}^1, \RV{Y}^0) \independent \RV{Z} \mid \RV{X}$ \end{enumerate} Imagining concrete violations of these conditions is intuition-building. 
Consistency can be violated under non-compliance, so that treatment assignment does not match the treatment actually received. No interference can be violated, for example, if we were studying the effect of individual tutoring on student grades in a certain classroom and students study together; Jimmy's treatment assignment may impact Sally's grade. Positivity is violated if certain individuals can never receive treatment, rendering their contribution to the average treatment effect unlearnable. And finally, conditional unconfoundedness can be violated, for example, if both treatment assignment and the outcome variable share a common cause. However, this is not the only way conditional unconfoundedness can be violated, and exploring other possibilities in full generality is the topic of the remainder of the paper.

Taken together, the above assumptions enable identification of average treatment effects because they imply the following equality, the left-hand side of which is estimable:
\begin{equation*}
\begin{aligned}
\mathbb{E}_X[\mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 1] - \mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 0]] & = \mathbb{E}[\RV{Y}^1 - \RV{Y}^0].
\end{aligned}
\end{equation*}
In more detail, the equivalence is established as follows, where the first equality in each derivation applies consistency and the final pair of equalities applies conditional unconfoundedness and iterated expectation:
\begin{equation*}
\begin{aligned}
\mathbb{E}_X[\mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 1]] &= \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 Z + \RV{Y}^0 (1-Z) \mid \RV{X}, \RV{Z} = 1]]\\
&= \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 \mid \RV{X}, \RV{Z} = 1]] = \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 \mid \RV{X}]] = \mathbb{E}[\RV{Y}^1],\\
\mathbb{E}_X[\mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 0]] &= \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 Z + \RV{Y}^0 (1-Z) \mid \RV{X}, \RV{Z} = 0]]\\
&= \mathbb{E}_X[\mathbb{E}[\RV{Y}^0 \mid \RV{X}, \RV{Z} = 0]] = \mathbb{E}_X[\mathbb{E}[\RV{Y}^0 \mid \RV{X}]] = \mathbb{E}[\RV{Y}^0].
\end{aligned}
\end{equation*}
An alternative parametrization is:
$$Y_i = Y_i^0 + \tau_i Z_i$$
where
$$\tau_i = Y_i^1 - Y_i^0,$$
which emphasizes that $\tau_i$ itself can differ across units and, as a random variable, can be {\em dependent} on the treatment assignment so that $\tau \not\independent Z$. This treatment effect parametrization will be used extensively in our exposition.

This paper is focused on the following question: If $X$ satisfies conditional unconfoundedness, might there be a function of $X$ with a reduced range that also satisfies conditional unconfoundedness? That is, can $X$ be reduced in dimension while still providing valid causal effect estimation? Answering this question requires a more detailed examination of {\em how} conditional unconfoundedness is achieved in any particular data generating process, which is facilitated by the introduction of causal diagrams.

\subsection{Causal diagrams}
\label{DAG_section}
\subsubsection{Graph theory for causal identification}
Causal diagrams provide a more fine-grained look at confounding, as they consider the full joint distribution of the response, the treatment, and the control variables. The graphical approach to causality has its earliest roots in the work of Sewall Wright \citep{wright1918nature, wright1920relative, wright1921correlation}, but attained its mature modern form in the prodigious work of Judea Pearl \citep{pearl1987embracing, pearl1987logic, pearl1995theory, pearl1995causal}. See \cite{pearl2009causality} for a textbook treatment and comprehensive references. The presentation here loosely follows the expository treatment in \cite{shalizi2021advanced}.
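Before developing the graph formalism, the adjustment identity of the previous subsection can be checked with a small simulation. The data-generating process below is purely hypothetical (a single binary confounder and a constant unit treatment effect, chosen only for illustration); a minimal sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical DGP: a single binary confounder X raises both the
# probability of treatment and the baseline response.
X = rng.binomial(1, 0.5, n)
Z = rng.binomial(1, np.where(X == 1, 0.8, 0.2))
Y0 = 1.0 + 2.0 * X + rng.normal(0, 1, n)  # potential outcome under control
Y1 = Y0 + 1.0                             # constant treatment effect: ATE = 1
Y = np.where(Z == 1, Y1, Y0)              # consistency: Y = Y1*Z + Y0*(1 - Z)

# Naive contrast confounds the treatment effect with the imbalance in X.
naive = Y[Z == 1].mean() - Y[Z == 0].mean()

# Adjusted estimator: E_X[ E[Y | X, Z=1] - E[Y | X, Z=0] ], a weighted
# average of within-stratum contrasts.
adjusted = sum(
    (Y[(X == x) & (Z == 1)].mean() - Y[(X == x) & (Z == 0)].mean()) * (X == x).mean()
    for x in (0, 1)
)
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

The naive contrast comes out near 2.2 because treated units disproportionately have $X = 1$, while the stratified average recovers the true effect of 1.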
Recall that any joint density over $p$ random variables may be expressed in {\em compositional form}, as a product of conditional densities: $$f(x_1, x_2, \dots, x_p) = f(x_1)f(x_2 \mid x_1)f(x_3 \mid x_1, x_2)...f(x_p \mid x_1, x_2, \dots, x_{p-1}),$$ where the density functions $f(\cdot)$ and $f(\cdot \mid \cdot)$ refer to different densities depending on their arguments. The labeling of the variables is arbitrary, and so we can chain together these marginal and conditional distributions in any order (though of course that will lead to different forms). Some of these variables might exhibit {\em conditional independence}, meaning that, for example $$f(x_1 \mid x_2, x_3) = f(x_1 \mid x_2)$$ which is equivalently expressed as $$X_1 \independent X_3 \mid X_2.$$ The relationship to {\em directed (acyclic) graphs} (DAG) is straightforward: draw a node for each variable and draw a line from $X_j$ going into $X_i$ if $X_j$ appears in the conditional distribution of $X_i$. This graph is {\em directed}, with the arrow pointing from $X_j$ {\em to} $X_i$. We say that $X_j$ is a ``parent'' of $X_i$ and that $X_i$ is the ``child'' of $X_j$. From the graph, the joint distribution may be expressed as $$f(x_1, \dots, x_p) = \prod_{j = 1}^p f(x_j \mid \mbox{parents}(x_j)).$$ This leads us to the {\em Markov property}, which is $$X_j \independent \mbox{non-descendants}(X_j) \mid \mbox{parents}(X_j),$$ where ``descendant'' refers to children, grandchildren, great-grandchildren, etc. We can see this by dividing through by the marginal distribution of $\mbox{parents}(X_j)$ and observing that the resulting distribution is a product of terms involving either $X_j$ or $\mbox{non-descendants}(X_j)$, but not both. The Markov property allows one to efficiently deduce conditional independence relationships and underpins Pearl's algorithm (which will be described shortly). 
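As a concrete check of the Markov property, consider the chain DAG $X_1 \rightarrow X_2 \rightarrow X_3$: the factorization implies $X_3 \independent X_1 \mid X_2$ even though $X_1$ and $X_3$ are marginally dependent. A small simulation, with illustrative (hypothetical) conditional probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Hypothetical chain DAG X1 -> X2 -> X3 with binary variables, sampled
# according to the compositional factorization f(x1) f(x2|x1) f(x3|x2).
x1 = rng.binomial(1, 0.6, n)
x2 = rng.binomial(1, np.where(x1 == 1, 0.8, 0.3))
x3 = rng.binomial(1, np.where(x2 == 1, 0.7, 0.2))

# Markov property: conditional on its parent X2, X3 is independent of the
# non-descendant X1 -- the within-stratum gap should be near zero.
gap_within = max(
    abs(x3[(x2 == v) & (x1 == 1)].mean() - x3[(x2 == v) & (x1 == 0)].mean())
    for v in (0, 1)
)
# Marginally the path is unblocked, so X1 and X3 remain dependent.
gap_marginal = x3[x1 == 1].mean() - x3[x1 == 0].mean()
print(f"within-stratum gap: {gap_within:.3f}, marginal gap: {gap_marginal:.3f}")
```

The within-stratum gap vanishes (up to sampling noise), while the marginal gap does not.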
Finally, a complete treatment of confounding in the causal diagram framework requires the following definition:
\begin{definition}
A {\em collider} is a node/variable $V$ in a DAG that sits on an undirected path between two other nodes/variables, $X_j$ and $X_i$, such that the two edges of the path adjacent to $V$ both have arrows pointing {\em into} $V$.
\end{definition}
Conditioning on a collider induces dependence between its parents. For a classic example of this phenomenon, suppose that a certain college grants admission only to applicants with high test scores and/or athletic talent. Even if these talents are independent in the general population, among admitted students the two attributes become highly dependent: if we know that a student is not athletic, then we know for sure that they must be academically gifted, and vice-versa. While this is a basic result in probability theory, Pearl's work emphasized its significance to the problem of regression adjustment for causal effect estimation.

With a DAG in hand, it is possible to deduce -- rather than assume -- conditional unconfoundedness: Pearl developed an algorithm for determining subsets of variables in $X$ (i.e., its coordinate dimensions) that define valid regression estimators. The main input to this algorithm is a directed acyclic graph (DAG) that characterizes the causal relationships between variables; such a graph describes a particular compositional representation of the joint distribution, reflecting conditional independences that are implied by the {\em stipulated} causal relationships. The prohibition on cycles rules out feedback loops and self-causation. Here we present Pearl's algorithm in a somewhat simplified form, assuming that the graph contains no descendants of $Z$ other than $Y$. Given an input DAG $\mathcal{G}$ and a subset of nodes $S$, the ``backdoor'' algorithm proceeds as follows:
\begin{enumerate}
\item Identify all (undirected) paths between $Z$ and $Y$.
\item For each such path, check whether at least one variable along it is ``blocked''.
\begin{enumerate}
\item A variable $W$ is blocked if
\begin{enumerate}
\item $W$ is not a collider and is in the set $S$, or
\item $W$ is a collider and neither $W$ nor any of its descendants is in the set $S$.
\end{enumerate}
\end{enumerate}
\item Return {\tt TRUE} if every ``backdoor'' path between $Z$ and $Y$ (every path except the direct causal arrow from $Z$ to $Y$) is blocked. Otherwise return {\tt FALSE}.
\end{enumerate}
Sets of variables satisfying the backdoor criterion --- those sets $S$ for which the algorithm returns {\tt TRUE} --- are valid adjustment sets in the sense that $Y$ and $Z$ {\em would be} conditionally independent, given those variables, {\em if there were no causal relationship} between $Y$ and $Z$. By ruling out all other possible sources of association, any observed association may be interpreted as arising from a causal relationship.

\subsubsection{Functional causal models}\label{fcm}
Causal DAGs may be associated with a functional causal model, a set of deterministic functions that take as inputs elements of $X$ as well as independent (``exogenous'') error terms. The basic triangle confounding graph corresponding to an $(X, Y, Z)$ triple satisfying conditional unconfoundedness is shown in Figure \ref{graph1}.
\begin{figure}
\ctikzfig{graph1}
\caption{A simple triangle confounding diagram, where a control variable $X$ causally influences both the treatment $Z$ and the response $Y$.
This graph does not clarify what information contained in (the potentially multidimensional) $X$ is relevant for $Z$ or $Y$ or both or neither, only that knowing the value of $X$ in its entirety permits causal estimation.}\label{graph1} \end{figure} The corresponding functional causal model can be expressed as \begin{equation} \begin{split} Z &\leftarrow G(X,\epsilon_z)\\ Y & \leftarrow F(X,Z,\epsilon_y) \end{split} \end{equation} where $X$, $\epsilon_z$ and $\epsilon_y$ are mutually independent (though all three may be vector-valued with non-independent elements). The exogenous errors ($\epsilon_z$ and $\epsilon_y$) that appear in a single equation are suppressed in the graph. All of the stochasticity is inherited from the exogenous variables, while all of the deterministic relationships are reflected in the functions $G(\cdot)$ and $F(\cdot)$, which are explicitly endowed with a causal interpretation. Specifically, the potential outcomes are given by: \begin{equation} \begin{split} Y^1 &\leftarrow F(X,1,\epsilon_y)\\ Y^0 & \leftarrow F(X,0,\epsilon_y) \end{split} \end{equation} where $(X, \epsilon_y)$ are drawn from their marginal distributions, irrespective of the value of the treatment argument. As was mentioned previously, throughout this paper we assume that $X$ does not contain any causal descendants of $Z$. Consider two ways to conceptualize the data generating process for both the potential outcome pairs, $(Y^0, Y^1)$, and the observed response $Y$. On the one hand, the potential outcomes can be generated from the functional causal model, by fixing the $Z$ argument to 0 or 1, irrespective of the implied distribution of $Z \mid X$. Procedurally, this would look like drawing $X$ from its marginal distribution, drawing $\epsilon_y$, and evaluating $F(X, 0, \epsilon_y)$ and $F(X, 1, \epsilon_y)$. The observed data can then be constructed via the consistency assumption $Y = F(X, 1, \epsilon_y)Z + F(X, 0, \epsilon_y)(1-Z)$. 
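This two-step construction is easy to sketch in code. The functional form below is hypothetical, chosen only to make the roles of $X$, $Z$, and $\epsilon_y$ explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000

# Hypothetical functional causal model: F(x, z, e) = x + z*(1 + x) + e.
def F(x, z, e):
    return x + z * (1.0 + x) + e

X = rng.binomial(1, 0.5, n)
eps_y = rng.normal(0, 1, n)
Z = rng.binomial(1, np.where(X == 1, 0.9, 0.1))  # Z | X, as in the CDAG

# Step 1: potential outcomes, fixing the treatment argument.
Y1, Y0 = F(X, 1, eps_y), F(X, 0, eps_y)
# Step 2: observed response via the consistency assumption.
Y = Y1 * Z + Y0 * (1 - Z)

# Y | Z=1 and Y^1 have different distributions (treated units skew toward
# X = 1), but within a stratum of X the two coincide.
marginal_gap = Y[Z == 1].mean() - Y1.mean()
stratum_gap = Y[(Z == 1) & (X == 0)].mean() - Y1[X == 0].mean()
print(f"marginal gap: {marginal_gap:.2f}, stratum gap: {stratum_gap:.2f}")
```

The marginal gap is large while the within-stratum gap is negligible, previewing the point made next.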
Equivalently, $Y$ may be drawn directly via $F(X, Z, \epsilon_y)$, where $Z$ (the observed treatment assignment) was drawn according to $Z \mid X$ (as specified by the CDAG). This equivalence is especially instructive as to why $Y \mid Z = z$ and $Y^z$ do not generally have the same distribution and, furthermore, why $Y \mid Z = z, X = x$ and $Y^z \mid X = x$ do have the same distribution (assuming, as we have above, that $X$ is causally exhaustive).

The role of $\epsilon_y$ in defining the distribution of the potential outcomes is worth considering in more detail. Note that for a binary $Z$, any functional causal model $F$ may be rewritten as
$$F(X, Z, \epsilon_y) = F(X, 0, \epsilon_y) + Z \left[ F(X, 1, \epsilon_y) - F(X, 0, \epsilon_y) \right] = \mu(X, \epsilon_y) + Z \tau(X, \epsilon_y).$$
This formulation invites us to consider that $\epsilon_y$ may be multivariate, distinct elements of which may affect $\mu(X, \epsilon_y)$ and $\tau(X, \epsilon_y)$. Three particular cases are especially notable:
\begin{enumerate}
\item $\mu(X, \epsilon_y) = \mu(X) + \epsilon_y$ and $\tau(X, \epsilon_y) = \tau(X)$: here, $\epsilon_y$ has the same effect on the two potential outcomes $F(X, 1, \epsilon_y)$ and $F(X, 0, \epsilon_y)$, so that their joint distribution is singular.
\item $\mu(X, \epsilon_y) = \mu(X) + \epsilon_{y,0}$ and $\tau(X, \epsilon_y) = \tau(X) + \left( \epsilon_{y,1} - \epsilon_{y,0} \right)$, where the exogenous error is partitioned as $\epsilon_y = (\epsilon_{y, 0}, \epsilon_{y, 1})$. Here, $\epsilon_{y,0}$ and $\epsilon_{y,1}$ are distinct random variables that separately define the potential outcome distributions, so that one effect of the treatment is in changing {\em which} exogenous influences affect the response.
\item $\mu(X, \epsilon_y) = \mu(X) + \epsilon_{y, \mu}$ and $\tau(X, \epsilon_y) = \tau(X) + \epsilon_{y, \tau}$, where the exogenous error is partitioned as $\epsilon_y = (\epsilon_{y, \mu}, \epsilon_{y, \tau})$.
In this case, a distinct set of causal factors dictates exogenous variation in the prognostic (baseline) response and exogenous variation in the treatment effect itself. For example, variation in the baseline response may be due to environmental factors that are independent from genetic factors dictating one's response to a new drug.
\end{enumerate}
The first two of these cases are visualized in Figure \ref{errors} with $\tau(X) = 1$. Empirically, these cases are indistinguishable in that they are ``observationally equivalent'' --- because the potential outcomes are never jointly observed, most aspects of their joint distribution are fundamentally unidentified.
\begin{figure}
\includegraphics[width=2.5in]{Error_Comparison_a.png}
\includegraphics[width=2.5in]{Error_Comparison_b.png}
\caption{Left panel: Potential outcome distributions with a common additive univariate error and a homogeneous treatment effect (which shifts the line up one unit from the diagonal), as articulated in Case 1 above. Right panel: Potential outcome distributions with a homogeneous treatment effect and distinct additive bivariate errors, $\epsilon_{y,0}$ and $\epsilon_{y,1}$, shown here with a positive correlation less than one, as articulated in Case 2 above.}\label{errors}
\end{figure}
With a more detailed causal graph, a more detailed assessment of conditional unconfoundedness can be made. For instance, consider Figure \ref{graph2}, which is equivalent to the standard triangle diagram in the sense that controlling for all of the elements of $X = (X_1, X_2, X_3, X_4)$ indeed satisfies conditional unconfoundedness. However, Pearl's algorithm reveals that $(X_1, X_2)$ would suffice. By positing more information about the joint distribution of $X$, it is possible to absorb $X_3$ into $\epsilon_z$ and $X_4$ into $\epsilon_y$, while redefining $X = (X_1, X_2)$, bringing us back to the triangle graph, but with a reduced set of control variables.
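This reduction can be illustrated numerically. Below, a hypothetical instantiation of the graph in Figure \ref{graph2} is simulated (all functional forms and coefficients are illustrative assumptions), and the stratification estimator is computed adjusting first for all of $(X_1, X_2, X_3, X_4)$ and then only for the sufficient pair $(X_1, X_2)$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
tau = 1.0  # true causal effect of Z on Y

# Hypothetical instantiation of the graph: X1, X2 confound, X3 affects
# only Z (an instrument), X4 affects only Y (purely prognostic).
x1, x2, x3, x4 = (rng.binomial(1, 0.5, n) for _ in range(4))
Z = rng.binomial(1, 0.1 + 0.25 * (x1 + x2 + x3))
Y = x1 + x2 + x4 + tau * Z + rng.normal(0, 1, n)

def stratified_ate(strata):
    """Weight within-stratum treated/control contrasts by stratum frequency."""
    est = 0.0
    for key in np.unique(strata, axis=0):
        m = (strata == key).all(axis=1)
        est += (Y[m & (Z == 1)].mean() - Y[m & (Z == 0)].mean()) * m.mean()
    return est

full = stratified_ate(np.column_stack([x1, x2, x3, x4]))
minimal = stratified_ate(np.column_stack([x1, x2]))
print(f"adjusting for all four: {full:.2f}, for (X1, X2) only: {minimal:.2f}")
```

Both estimators recover the true effect of 1; the variance consequences of the extra strata are exactly the subject of the analysis in Section 3.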
\begin{figure} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (3) at (-2.5, -2.5) {$X_3$}; \node [style=myvar] (4) at (7.5, 2.5) {$X_4$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (2) to (6); \draw [style=arrow] (3) to (5); \draw [style=arrow] (4) to (6); \draw [style=arrow] (5) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{An elaboration of the triangle graph, depicting $X_1$ and $X_2$ as confounders, $X_4$ as a pure prognostic variable, and $X_3$ as an instrument.}\label{graph2} \end{figure} \subsection{Structural equations: Mean regression models with exogenous additive errors} \label{structural_model_section} Finally, the classic econometric literature approaches causality in terms of mean regression models with additive (but not necessarily homoskedastic) error terms, which are referred to as ``structural'' models (although the term is often used informally and imprecisely in the applied literature). \cite{heckman2005structural} reviews the structural model approach in econometrics in depth, noting that such methods have their origin in the study of dynamic macroeconomic systems. A seminal reference is \cite{haavelmo1943statistical}. The mean regression perspective arises naturally if one takes a linear regression model as a starting point, but is straightforward to motivate starting from a generic functional causal model.
Define \begin{equation} \begin{split} \mu(x) &\equiv \mathbb{E}(F(x,0,\epsilon_y)), \\ \tau(x) &\equiv \mathbb{E}(F(x,1,\epsilon_y)) - \mu(x),\\ \upsilon(x,\epsilon_y) &\equiv F(x,0,\epsilon_y) - \mu(x),\\ \delta(x,\epsilon_y) &\equiv F(x,1,\epsilon_y) - F(x,0,\epsilon_y) - \tau(x) \end{split} \end{equation} giving a ``structural model'' \begin{equation}\label{structural_eq} Y = \mu(x) + \upsilon(x,\epsilon_y) + (\tau(x) + \delta(x,\epsilon_y)) Z \end{equation} where $\upsilon(x, \epsilon_y)$ and $\delta(x, \epsilon_y)$ are deterministic functions, both of which are mean zero integrating over $\epsilon_y$ (for any $x$): $\mathbb{E}(\upsilon(x, \epsilon_y)) = 0$ and $\mathbb{E}(\delta(x, \epsilon_y)) = 0$. In this formulation, conditional unconfoundedness may be expressed in terms of independence of the treatment, $Z$, and the error terms $\upsilon(x,\epsilon_y)$ and $\delta(x, \epsilon_y)$. Such models are commonly used in a simplified form, where $\delta(x, \epsilon_y)$ is assumed to be identically zero and $\tau(x)$ is assumed to be constant in $x$, but such assumptions are not intrinsic to the formalism. \subsection{Relating the three frameworks}\label{equivalence} If every node in a causal diagram is observable, all remaining factors determining $Y$ are attributable to the exogenous errors, which are, by definition, independent of the treatment assignment. In that case, it is easy to forge a connection between the three formalisms, as they all assert that \begin{equation} Y^z \mid X=x \;\;\; \,{\buildrel d \over \sim}\, \;\;\; Y \mid X = x, Z = z, \end{equation} where (recall) $Y^z = F(x, z, \epsilon_y)$, with distribution induced by the distribution over $\epsilon_y$. The above assertion essentially declares that the estimable conditional distributions which appear on the right hand side warrant a causal interpretation. 
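This decomposition can be carried out numerically for any functional causal model. The sketch below uses a hypothetical $F$ (chosen only for illustration) and Monte Carlo over $\epsilon_y$ to recover $\mu$, $\tau$, $\upsilon$, and $\delta$, confirming that the structural representation in equation (\ref{structural_eq}) reproduces the outcome in each arm and that the error terms are mean zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def F(x, z, eps):
    """A hypothetical functional causal model with a nonlinear error effect."""
    return x + np.exp(0.5 * eps) + z * (2.0 + x * eps)

# Monte Carlo over the exogenous error recovers the four components at x = 1.
eps = rng.normal(size=500_000)
x = 1.0
mu = F(x, 0, eps).mean()                   # mu(x)  = E F(x, 0, eps)
tau = F(x, 1, eps).mean() - mu             # tau(x) = E F(x, 1, eps) - mu(x)
ups = F(x, 0, eps) - mu                    # upsilon(x, eps): mean zero
delta = F(x, 1, eps) - F(x, 0, eps) - tau  # delta(x, eps):   mean zero

# The structural representation reproduces the outcome in either arm.
y0 = mu + ups
y1 = mu + ups + tau + delta
```

For this particular $F$, $\tau(x) = \mathbb{E}(2 + x\epsilon_y) = 2$, which the Monte Carlo average recovers.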
For sets of control variables that are {\em not} exhaustive, more care is needed in translating the formalisms, but a precise relationship can be obtained, as spelled out in the following lemma. \begin{lemma}\label{synthesis} The assertions below (with their corresponding causal framework labeled in brackets) stand in the following logical relationship: $1 \Rightarrow 2 \Leftrightarrow 3$. \begin{enumerate} \item $S = s(X)$ satisfies the back-door criterion. [Causal DAGs] \item $S= s(X)$ satisfies conditional unconfoundedness: $(Y^0, Y^1) \independent Z \mid S$. [Potential Outcomes] \item The response $Y$ can be represented in terms of a mean regression model with error terms $(\upsilon(s,X,\epsilon_y), \delta(s,X, \epsilon_y) ) \independent Z \mid s(X) = s$. [Structural Equations] \end{enumerate} \end{lemma} \begin{proof} Let $X$ denote all of the variables in a complete causal diagram with the exception of the treatment variable $Z$ and response variable $Y,$ and consider the following causal model, written in terms of functional equations, potential outcomes, and a structural mean regression with additive exogenous errors: \begin{equation} \begin{split} Z &\leftarrow G(X, \epsilon_z),\\ Y^z &\leftarrow F(X, z, \epsilon_y) = \mu(X) + \upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z,\\ \begin{pmatrix} Y^0 \\ Y^1 \end{pmatrix} &\leftarrow \begin{pmatrix} \mu(X) + \upsilon(X,\epsilon_y) \\ \mu(X) + \tau(X) + \upsilon(X,\epsilon_y) + \delta(X, \epsilon_y) \end{pmatrix}. \end{split} \end{equation} To see that 1 implies 2, recall that 1 means that $S$ renders the treatment and response conditionally independent in the modified DAG with no causal arrow between $Z$ and $Y$. But it is precisely such a graph that defines the relationship between $Z$ and the potential outcomes $Y^0 = F(X, 0, \epsilon_y)$ and $Y^1 = F(X, 1, \epsilon_y)$, as shown in Figure \ref{po_graph}. 
To see that 2 and 3 are equivalent, re-parametrize the additive error model in terms of $S$, as follows: \begin{equation} \begin{split} Y^z &\leftarrow \mu(s) + \upsilon(s,X,\epsilon_y) + (\tau(s) + \delta(s, X, \epsilon_y))z\\ \mu(s) &\equiv \mathbb{E}(\mu(X) \mid S(X) = s)\\ \tau(s) &\equiv \mathbb{E}(\tau(X) \mid S(X) = s)\\ \upsilon(s,X,\epsilon_y) &\equiv \mu(X) - \mu(s) + \upsilon(X,\epsilon_y)\\ \delta(s,X,\epsilon_y) &\equiv \tau(X) - \tau(s) + \delta(X,\epsilon_y). \end{split} \end{equation} For a fixed value of $s$, the mean terms $\mu(s)$ and $\tau(s)$ are constant, so that $(Y^0, Y^1)$ stands in a one-to-one relationship with $\upsilon(s,X,\epsilon_y)$ and $\delta(s,X,\epsilon_y)$; therefore if the former are independent of $Z$, then so must be the latter, and vice-versa. \end{proof} \begin{figure} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (2) to (6); \draw [style=arrow] (5) to (6); \end{pgfonlayer} \end{tikzpicture} \hfill \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y^*$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (6); \draw [style=arrow] (2) to (5); \end{pgfonlayer} \end{tikzpicture} \caption{A typical causal DAG (CDAG) and its potential outcome counterpart, where $Y^* = (Y^0, Y^1)$.}\label{po_graph} \end{figure} \begin{figure} \begin{minipage}[b]{180pt} \centering 
\begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-2.5, 2.5) {$X_1$}; \node [style=myvar] (3) at (2.5, 2.5) {$X_2$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (2.5, 0) {$Y$}; \node [style=myvar] (7) at (-2.5, -2.5) {$X_3$}; \node [style=myvar] (8) at (2.5, -2.5) {$X_4$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (3) to (6); \draw [style=arrow] (5) to (6); \draw [style=arrow] (1) to (3); \draw [style=arrow] (7) to (5); \draw [style=arrow] (8) to (7); \draw [style=arrow] (8) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{The ``box diagram'', which implies several valid control sets: any set containing at least one of $\{ X_1, X_2\}$ and at least one of $\{ X_3, X_4\}$.} \label{rectangle} \end{minipage} \hfill \begin{minipage}[b]{180pt} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_2$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_3$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=probedge] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (5) to (6); \draw [style=probedge] (2) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{The ``box diagram'' with $X_1$ and $X_4$ omitted; a CDAG representation is no longer possible.\vspace{0.2in}} \label{triangle} \end{minipage} \end{figure} \subsection{Estimands, estimators, and sampling distributions}\label{estimands} As described previously, by {\em treatment effect}, we mean the difference between the treated and untreated potential outcomes. By {\em average} treatment effect, we mean the average of this difference over some population of individuals. 
The functional causal model and a distribution over the exogenous errors define an infinite hypothetical {\em population} from which the observed data is assumed to be a random sample. From this perspective, the population average treatment effect (PATE) may be expressed as $$\mathbb{E}(\tau(X) + \delta(X, \epsilon)) = \mathbb{E}(\tau(X)),$$ where $\tau$ is a fixed-but-unknown function and the expectation is taken with respect to the data generating process defined by the CDAG and the associated functional causal model, so that $X$ and $\epsilon$ are both being averaged over. Other average causal effects, differing in terms of the (sub)population over which the average is taken, are likewise readily defined in terms of the functional causal model (FCM). For instance, if we wish to restrict our attention to the average treatment effect among individuals in our observed sample, we may define our estimand as the {\em sample average treatment effect}, or SATE: $$\frac{1}{N} \sum_{i = 1}^{N} \left( \tau(x_i) + \delta(x_i, \epsilon_i) \right ).$$ Note that the SATE and the PATE differ from one another in that, in general, $$\mathbb{E}(\tau(X)) \neq \frac{1}{N} \sum_{i = 1}^{N} \tau(x_i)$$ and $$\frac{1}{N} \sum_{i = 1}^{N} \delta(x_i, \epsilon_i) \neq \mathbb{E}(\delta(X, \epsilon)) = 0.$$ In this paper, we will compare stratification estimators of the PATE, evaluating them in terms of their finite sample variance over repeated sampling of independent draws from $(X_i, Y_i, Z_i)$. While it would be possible to consider the sampling distribution over $(Y_i, Z_i)$ for a fixed vector of observed covariates $x_i$, doing so would make cross comparison of different stratifications impossible, because the sampling distribution would be over-specified relative to the coarser stratification. Because the PATE is of wide applied interest, we argue that averaging over observed control variables $X_i$ is sensible and all of our results are derived in this setting.
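The distinction between the PATE and the SATE is easy to see in simulation. The sketch below (a hypothetical three-level covariate with illustrative values of $\tau(x)$ and unit-level heterogeneity $\delta$) shows that the SATE fluctuates from sample to sample around the fixed PATE.

```python
import numpy as np

rng = np.random.default_rng(2)

# A hypothetical discrete data generating process (values illustrative).
x_vals = np.array([0, 1, 2])
p_x = np.array([0.5, 0.3, 0.2])
tau_x = np.array([1.0, 2.0, 4.0])  # tau(x) at each covariate level

pate = float(p_x @ tau_x)          # fixed population quantity: here 1.9

# The SATE is random: it changes with the realized sample.
n, reps = 50, 2000
sates = np.empty(reps)
for r in range(reps):
    xi = rng.choice(x_vals, size=n, p=p_x)
    delta = rng.normal(scale=0.5, size=n)  # unit-level effect heterogeneity
    sates[r] = (tau_x[xi] + delta).mean()
```

Across repeated samples the SATE is centered on the PATE but exhibits nontrivial dispersion, driven both by the sampled covariate mix and by the $\delta$ terms.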
Another average treatment effect of broad interest is the {\em conditional average treatment effect} (CATE), which defines an average treatment effect conditional on a set of covariate values. The population CATE, $$\mathbb{E}(\tau(X) + \delta(X, \epsilon) \mid X = x) = \mathbb{E}(\tau(X) \mid X = x),$$ takes an expectation with respect to a conditional sampling distribution $\tau(X) \mid X = x$, where $\left\{X = x\right\}$ may denote a set of covariates rather than a single value. While the focus of this paper is on the PATE, its insights extend automatically to the population CATE. The CATE is sometimes mistakenly reported in the literature as the {\em individual treatment effect} (ITE), which is a separate estimand that is only identified with more restrictive assumptions. The ITE is defined at the unit level as the difference in potential outcomes. For unit $i$, the ITE is given by $$F(X_i, Z_i = 1, \epsilon_{i,y,1}) - F(X_i, Z_i = 0, \epsilon_{i,y,0}).$$ This is unidentified without further assumptions on the nature of the error term, as in general $\epsilon_{i,y,1} \neq \epsilon_{i,y,0}$; see Figure \ref{errors}. \section{Minimal and optimal statistical control} \subsection{The principal deconfounding function} Although conditional unconfoundedness is central to our conception of causal effect estimation, in fact it is a stronger than necessary assumption for identifying the ATE. More specifically, one only needs a function $s(x)$ that satisfies {\em mean conditional unconfoundedness}. \begin{definition} A function $s$ on covariate space $\mathcal{X}$ is said to satisfy {\em mean conditional unconfoundedness} if \begin{equation} Z \independent (\mu(X), \tau(X)) \mid s(X). \end{equation} \end{definition} \begin{lemma}\label{MCU} Mean conditional unconfoundedness is a sufficient condition for estimating average treatment effects. 
\end{lemma} \begin{proof} Denote the causal model as $$Y^z \leftarrow \mu(X) + \upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z$$ where $\epsilon_y \independent (Z, X)$, $\mathbb{E}(\upsilon(x,\epsilon_y)) = 0$, and $\mathbb{E}( \delta(x,\epsilon_y)) = 0$ for all $x$. We aim to show that $$\mathbb{E}(Y^z \mid s(X) = s) = \mathbb{E}(Y \mid s(X) = s, Z = z),$$ from which the result follows by the estimability of the right hand side for both $z = 0$ and $z = 1$. Recalling the relationship between $Y^z$ and $Y \mid Z = z$ described in Section \ref{fcm}, this is equivalent to showing that \begin{align*} \mathbb{E}(\mu(X) + &\upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z \mid s(X) = s) =\\ & \mathbb{E}(\mu(X) + \upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z \mid s(X) = s, Z = z), \end{align*} where the expectation over $(X, \epsilon_y)$ is with respect to its marginal distribution on the left hand side and with respect to its conditional distribution, given $Z = z$, on the right hand side. By the independence of $\epsilon_y$, the mean zero errors for each $x$, and the linearity of expectation, this reduces to showing that $$\mathbb{E}(\mu(X) + \tau(X)z \mid s(X) = s) = \mathbb{E}(\mu(X) + \tau(X)z \mid s(X) = s, Z = z).$$ By the assumption of mean conditional unconfoundedness, $Z \independent (\mu(X), \tau(X)) \mid s(X)$, and the result follows. \end{proof} Mean conditional unconfoundedness can be used to define a {\em minimal} control function, but first we must recall the definition of the propensity score \citep{rosenbaum1983central}, which we will denote by $\pi(\cdot)$. \begin{definition} The {\em propensity score}, based on a vector of control variables $x$, is the conditional probability of receiving treatment: \begin{equation}\label{propscore} \pi(x) \equiv \mathbb{P}(\RV{Z} = 1 \mid \RV{X} = x). 
\end{equation} \end{definition} \noindent It is common to interchangeably refer to the propensity {\em score}, which emphasizes a specific numerical value, $\pi(x)$, and the propensity {\em function}, which emphasizes the mapping, $\pi: \mathcal{X} \rightarrow (0, 1)$. In turn, we have: \begin{definition} The {\em principal deconfounding function} is given by the following conditional expectation: $$\lambda(x) = \mathbb{E}(\pi(X) \mid \mu(X) = \mu(x), \tau(X) = \tau(x)).$$ \end{definition} \begin{theorem} \label{theorem1} The principal deconfounding function is the coarsest function satisfying mean conditional unconfoundedness. \end{theorem} \begin{proof} By iterated expectation, $Z \mid \mu(X), \tau(X)$ is a Bernoulli random variable with probability $\lambda(X)$, therefore $$\mathbb{E}(Z \mid \tau(X), \mu(X), \lambda(X)) = \mathbb{E}(Z \mid \lambda(X)),$$ which shows that $Z \independent \left ( \mu(X), \tau(X)\right ) \mid \lambda(X)$ because $Z$ is binary. Furthermore, $|\lambda(\mathcal{X})|$ is minimal: it takes exactly as many values as there are unique conditional distributions of $Z \mid \mu(X), \tau(X)$. In more detail, suppose $s(x)$ is coarser than $\lambda(x)$ so that there exist $x_1$ and $x_2$ such that $s(x_1) = s(x_2)$ but $\lambda(x_1) \neq \lambda(x_2)$. But $\lambda(x_1) \neq \lambda(x_2)$ implies $(\mu(x_1), \tau(x_1)) \neq (\mu(x_2), \tau(x_2))$, which in turn shows that $$Z \not \independent \mu(X), \tau(X) \mid s(X)$$ so mean conditional unconfoundedness is violated. \end{proof} \subsection{Optimal stratification for causal effect estimation} \label{trueprop} Recognizing that valid control features are non-unique raises the question: which control features are the best ones? To make this question precise, we study the finite sample variance of fixed-strata estimators, restricting our attention to a vector of discrete control variables.
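For a small discrete example, the principal deconfounding function can be computed directly from its definition. In the hypothetical setup below (all numbers illustrative), two covariate levels share the same $(\mu, \tau)$ pair, so $\lambda$ pools their propensities, weighted by $\mathbb{P}(X = x)$, and yields three strata where stratifying on $\pi$ itself would require four.

```python
import numpy as np

# Hypothetical four-level covariate: levels 0 and 1 share the same (mu, tau)
# pair, so lambda averages their propensities, weighted by P(X = x).
p_x = np.array([0.25, 0.25, 0.25, 0.25])
mu = np.array([0.0, 0.0, 1.0, 2.0])
tau = np.array([1.0, 1.0, 1.0, 3.0])
pi = np.array([0.2, 0.6, 0.5, 0.7])

lam = np.empty(4)
for x in range(4):
    level_set = (mu == mu[x]) & (tau == tau[x])  # level set of (mu, tau)
    lam[x] = (p_x[level_set] * pi[level_set]).sum() / p_x[level_set].sum()
```

Here $\lambda = (0.4, 0.4, 0.5, 0.7)$: the first two levels collapse into a single stratum even though their propensities differ.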
Without loss of generality, discrete control variables with finite support can be represented as a single covariate taking $K = |\mathcal{X}|$ distinct values. For example, a length $d$ vector of binary covariates would be represented as a single variable taking $2^d$ values. This assumption is mathematically convenient and, by setting $K$ large enough, can capture most empirical applications to a satisfactory degree of realism. (We revisit the plausibility of this assumption in the discussion section.) In the mathematical formalism and discussion of this paper, we will use the words ``strata" and ``features" interchangeably, to refer to functions of this single categorical variable. In detail, this paper considers the following data generating process: \begin{equation}\label{dgp_equation} \begin{split} \mathcal{X} &= \{1, \dots, K\},\\ \pi: \mathcal{X} &\mapsto (0, 1),\\ \RV{Z} &\sim \mbox{Bernoulli}(\pi(\RV{X})),\\ \RV{Y} &\leftarrow \mu(X) + \upsilon_X + (\tau(X) + \delta_X) Z \end{split} \end{equation} where $\mathbb{E}(\upsilon_x) = 0$ and $\mathbb{E}(\delta_x) = 0$ for all $x$ so that $\mu(\scalarobs{x}) = \mathbb{E}(\RV{Y} \mid \RV{X} = \scalarobs{x}, \RV{Z} = 0)$ and $\mu(\scalarobs{x}) + \tau(\scalarobs{x}) = \mathbb{E}(\RV{Y} \mid \RV{X} = \scalarobs{x}, \RV{Z} = 1)$. Lastly, let the random variable $\RV{N}$ denote the overall sample size and define subset-specific sample sizes as follows: \begin{itemize} \item $\RV{N}_{x}$: the number of observations with $\RV{X} = x$, \item $\RV{N}_{x, z}$: the number of observations with $\RV{X} = x$ and $\RV{Z} = z$. \end{itemize} We define the stratification estimator using a stratification function $s\left(\mathcal{X}\right)$, which returns $J \leq K$ discrete function values. 
We compute the average difference in outcomes between the treated and control groups separately for individuals in each of the $J$ strata, so that \begin{equation*} \begin{aligned} \bar{\tau}^{s}_{strat} &= \sum_{j \in s(\mathcal{X})} \frac{N_{j}}{n} \left( \bar{Y}_{j,1} - \bar{Y}_{j, 0} \right)\\ N_{j, 0} &= \sum_{i=1}^n \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 0\right\}\\ \bar{Y}_{j,0} &= \frac{1}{N_{j, 0}} \sum_{i=1}^n Y_i \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 0\right\} \end{aligned}\;\;\;\;\;\; \begin{aligned} N_{j} &= \sum_{i=1}^n \mathbf{1}\left\{s(X_i) = j\right\}\\ N_{j, 1} &= \sum_{i=1}^n \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 1\right\}\\ \bar{Y}_{j,1} &= \frac{1}{N_{j, 1}} \sum_{i=1}^n Y_i \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 1\right\}\\ \end{aligned} \end{equation*} Note that if we choose the trivial stratification $s(x) = x$, we stratify completely on all $K$ unique levels of $\mathcal{X}$. The following theorem describes when stratification beyond the minimal valid stratification, $\lambda(X)$, is beneficial, in terms of conditions on the underlying data generating process. \begin{theorem} \label{theorem2} Assume we have stratified on $\lambda(X)$ so that the average treatment effect is identified using a minimal deconfounding set. Consider a {\em refinement} of $\lambda$, $s(X)$, which also identifies the ATE: $s(x) \neq s(x')$ while $\lambda(x) = \lambda(x')$ for at least two $x, x' \in \mathcal{X}$. Define $\bar{\tau}_{\textrm{strat}}^{\lambda}$ as a stratification estimator which uses level sets of $\lambda(X)$ to define strata and $\bar{\tau}_{\textrm{strat}}^{s}$ as a stratification estimator which uses level sets of $s(X)$. 
Then $\mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{s} \right) < \mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{\lambda} \right)$ if $\nu < \eta$ where \begin{equation*} \begin{aligned} m(j) &= \lvert \left\{ s(x) : x \in \mathcal{X}\mbox{ such that } \lambda(x) = j \right\} \rvert\\ \mathcal{B} &= \left\{j \in \lambda(\mathcal{X}): m(j) > 1 \textrm{ and all sub-strata means and variances are constant} \right\}\\ \mathcal{C} &= \left\{j \in \lambda(\mathcal{X}): m(j) > 1 \textrm{ and either the sub-strata means or variances are non-constant} \right\}\\ \nu &= \sum_{b \in \mathcal{B}} \left[ \mathbb{V}\left( \frac{N_{b}}{n} \left( \bar{Y}_{b,1} - \bar{Y}_{b, 0} \right) \right) - \mathbb{V}\left( \sum_{\ell=1}^{m(b)} \frac{N_{b\ell}}{n} \left( \bar{Y}_{b\ell,1} - \bar{Y}_{b\ell, 0} \right) \right)\right]\\ \eta &= \sum_{c \in \mathcal{C}} \left[ \mathbb{V}\left( \sum_{\ell=1}^{m(c)} \frac{N_{c\ell}}{n} \left( \bar{Y}_{c\ell,1} - \bar{Y}_{c\ell, 0} \right) \right) - \mathbb{V}\left( \frac{N_{c}}{n} \left( \bar{Y}_{c,1} - \bar{Y}_{c, 0} \right) \right)\right]\\ \end{aligned} \end{equation*} and $\mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{s} \right) \geq \mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{\lambda} \right)$ otherwise. \end{theorem} A detailed proof is provided in Appendix \ref{appA}, but here we offer a sketch of the proof to build intuition. 
In comparing two stratifications, $\lambda$ and $s$, across discrete covariates $X$, we can partition the level sets of the two stratification functions as follows: \begin{enumerate} \item $\mathcal{A}$: values of $x \in \mathcal{X}$ for which both $\lambda$ and $s$ agree \item $\mathcal{B}$: values of $x \in \mathcal{X}$ for which $s$ substratifies $\lambda$ but the mean and variance of $Y \mid Z$ are constant across substrata formed by $s$ \item $\mathcal{C}$: values of $x \in \mathcal{X}$ for which $s$ substratifies $\lambda$ and either the mean of $Y \mid Z$, the variance of $Y \mid Z$, or both vary across substrata formed by $s$ \end{enumerate} We ignore $\mathcal{A}$ and focus on $\mathcal{B}$ and $\mathcal{C}$. In the case of $\mathcal{B}$, $s$ performs ``unnecessary" stratification, estimating and re-aggregating conditional means which are the same in the underlying data generating process, and thus incurs additional variance over the $\lambda$ stratification estimator. On the other hand, when we consider $\mathcal{C}$, $\lambda$ incurs additional variance over $s$ by failing to control for differences in the distribution of $Y \mid Z$ across substrata. In summary, $\mathcal{B}$ induces a variance penalty on $s$ relative to $\lambda$ by ``overstratification", while $\mathcal{C}$ induces a variance penalty on $\lambda$ relative to $s$ by ``understratification." Which estimator is preferred depends on the magnitude of these competing effects, as articulated in the $\nu < \eta$ inequality above. The practical upshot of this theorem is that stratification that accounts for substantial variation in the response will tend to reduce variance of the treatment effect estimator (whether or not it is confounded in the sense of covarying with propensity to receive treatment), while stratification that accounts only for variation in treatment assignment will increase variance of the treatment effect estimator. This conclusion is illustrated in the examples of the following section.
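The two penalties can be illustrated by simulation. The sketch below (binary covariate, illustrative parameter values) compares the variance of a single-stratum estimator against stratification on $x$ in two regimes: when $x$ predicts only the outcome, refinement reduces variance; when $x$ predicts only treatment assignment, refinement inflates it, in line with Theorem \ref{theorem2}.

```python
import numpy as np

rng = np.random.default_rng(3)

def strat_est(y, z, strata):
    """Stratification estimator: weighted within-stratum mean differences."""
    est = 0.0
    for j in np.unique(strata):
        m = strata == j
        if z[m].all() or (~z[m]).all():
            return np.nan                      # a stratum arm is empty
        est += m.mean() * (y[m & z].mean() - y[m & ~z].mean())
    return est

def compare(mu, pi, n=200, reps=4000):
    """Variance of the coarse (single-stratum) vs. refined (on x) estimator."""
    coarse, fine = np.empty(reps), np.empty(reps)
    for r in range(reps):
        x = rng.integers(0, 2, size=n)
        z = rng.random(n) < pi[x]
        y = mu[x] + z + rng.normal(size=n)     # tau(x) = 1 everywhere
        coarse[r] = strat_est(y, z, np.zeros(n, dtype=int))
        fine[r] = strat_est(y, z, x)
    return np.nanvar(coarse), np.nanvar(fine)

# x predicts the outcome but not treatment: refining reduces variance
# (the coarse estimator pays an "understratification" penalty).
vc_out, vf_out = compare(mu=np.array([0.0, 4.0]), pi=np.array([0.5, 0.5]))

# x predicts treatment but not the outcome: refining inflates variance
# (the refined estimator pays an "overstratification" penalty).
vc_trt, vf_trt = compare(mu=np.array([0.0, 0.0]), pi=np.array([0.1, 0.9]))
```

Both estimators are unbiased in both regimes; only their variances differ, and the ordering flips between the two cases.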
\section{Vignettes}\label{examples} This section collects examples illustrating the statistical trade-offs underlying feature selection for causal effect estimation that are articulated in Theorem \ref{theorem2}. Many of the examples are interesting in their own right; connections to previous literature are provided throughout. \subsection{In what sense is randomization the ``gold standard'' for causal effect estimation?} It has become boiler-plate in reports on observational studies to remark that ``in the absence of the gold standard of a randomized clinical trial, one may pursue statistical methods to control for confounding''. But in what sense is randomized treatment assignment the gold standard? Surely solid-state physicists do not randomize their lab conditions and hope their sample size is large enough to reveal interesting results. Famously, esteemed physicist Ernst Rutherford quipped ``If your experiment needs statistics, you ought to have done a better experiment'' \citep{hammersley1962monte}. The intuition behind this remark is that it is {\em control} that is central, not randomization. See section \ref{constantcontrol} for a definition of a control feature that evokes the experimental notion of ``control''. Indeed, randomization is simply a way to guarantee control {\em on average} in the event that exact control is impossible, such as when crucial confounding factors are unobserved. This perspective in turn suggests that controlling for factors that we {\em can} observe and randomizing only for factors that we cannot observe would be the ideal approach. The following thought experiment amplifies this intuition. Consider studying the effect of treatment $Z$ on outcome $Y$ in a sample of $n$ pairs of identical twins and deciding how to allocate treatment across the $2n$ study participants. Completely randomized treatment assignment satisfies the assumptions outlined above and thus identifies the treatment effect.
However, a naive randomization would sometimes accidentally treat both twins and leave other twin pairs untreated. This violates most people's intuition about why twin studies are interesting and useful, which is that giving one twin the treatment and the other a placebo implicitly ``controls for'' all of the shared biological and environmental factors that may impact the treatment effect. Randomization within each twin pair can protect against unmeasured factors that may confound the result, such as (perhaps) which twin was born first. In this case, both $Z$ and the twin pair index, $X$, are informative about the expected value of $Y$. Now consider four possible approaches to study the effect of $Z$ on $Y$: \begin{center} \begin{tabular}{c | c | c} & Design & Estimator \\ \hline 1 & Complete randomization & Unadjusted mean difference \\ 2 & Twin pair randomization & Unadjusted mean difference \\ 3 & Complete randomization & \;\;Adjusted mean difference \\ 4 & Twin pair randomization & \;\;Adjusted mean difference \\ \end{tabular} \end{center} where the unadjusted mean difference estimator is defined as $$\bar{\tau}_U = \bar{Y}_{Z=1} - \bar{Y}_{Z=0}$$ and the adjusted mean difference estimator is defined as $$\bar{\tau}_A = \sum_{x \in \mathcal{X}} \frac{n_x}{n} \left( \bar{Y}_{X=x, Z=1} - \bar{Y}_{X=x, Z=0} \right)$$ where $\mathcal{X}$ is the set of twin pairs and $X$ is a variable that indexes twin pairs. Each of the four approaches above identifies the ATE. However, adjusting for twin pairs (approaches 3 and 4) will tend to reduce variance over the unadjusted alternatives (1 and 2) and, similarly, designs that incorporate twin pairs in randomization (2 and 4) will also see a reduction in variance over the completely randomized alternatives (1 and 3). These results are implicit in Theorem \ref{theorem2}, which can be applied even if the propensity function is constant, as in a randomized trial. 
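A small simulation makes the variance ordering concrete. The sketch below (illustrative parameter values, with pair-level effects dominating idiosyncratic noise) compares approach 1 with approaches 2 and 4; note that with exactly one treated twin per pair, the unadjusted and pair-adjusted estimators coincide, so approaches 2 and 4 yield identical estimates.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pairs, tau, reps = 50, 1.0, 5000
est_complete = np.empty(reps)
est_paired = np.empty(reps)
for r in range(reps):
    # Shared pair-level factors dominate the idiosyncratic noise.
    pair = np.repeat(rng.normal(scale=2.0, size=n_pairs), 2)
    y0 = pair + rng.normal(scale=0.5, size=2 * n_pairs)
    y1 = y0 + tau

    # Approach 1: complete randomization, unadjusted mean difference.
    z = np.zeros(2 * n_pairs, dtype=bool)
    z[rng.choice(2 * n_pairs, n_pairs, replace=False)] = True
    y = np.where(z, y1, y0)
    est_complete[r] = y[z].mean() - y[~z].mean()

    # Approaches 2/4: randomize within each pair; with one treated twin per
    # pair, the unadjusted and pair-adjusted estimators coincide.
    first = rng.random(n_pairs) < 0.5
    z = np.empty(2 * n_pairs, dtype=bool)
    z[0::2], z[1::2] = first, ~first
    y = np.where(z, y1, y0)
    est_paired[r] = y[z].mean() - y[~z].mean()
```

Both designs are unbiased, but the paired design eliminates the pair-level variance component and is dramatically more precise.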
As intuitive as this example may be, and despite its lesson being a straightforward implication of Theorem \ref{theorem2}, regression adjustment for randomized trial data remains controversial. Freedman \citep{freedman2008regression, freedman2008randomization} criticized regression adjustment on the grounds that linear or linear logistic regression is potentially biased. Unfortunately, many researchers took this advice without first considering non-linear alternatives. \cite{lin2013agnostic} shows that regression adjustment in experimental data introduces no asymptotic bias if one entertains a richer set of interacted or saturated models, rather than a basic linear model. Of course, the stratification estimators studied here are fundamentally nonparametric and so are consistent with the conclusions of \cite{lin2013agnostic}. At the same time, Theorem \ref{theorem2} concedes that for some data generating processes, undertaking a regression adjustment (via stratification) would simply produce unnecessary variability, specifically for data generating processes where the available control factors are not sufficiently predictive of the response. In many applied problems we find ourselves somewhere in between this case of mostly useless controls and the twin experiment situation of profoundly informative controls. \subsection{Propensity scores}\label{propensity} Following the work of \cite{rosenbaum1983central}, the propensity score (expression \ref{propscore}) has become a central element in many applied analyses of causal effects. In that paper, it was first shown that $\pi(x)$ satisfies conditional unconfoundedness, from which it follows that \begin{equation} \textrm{ATE} = \mathbb{E}[\RV{Y}^1 - \RV{Y}^0] = \mathbb{E}_{\pi(\RV{X})}[\mathbb{E}[\RV{Y} \mid \pi(\RV{X}), \RV{Z} = 1] - \mathbb{E}[\RV{Y} \mid \pi(\RV{X}), \RV{Z} = 0]].
\end{equation} This differs from the more general form of conditional unconfoundedness in that $\pi(\RV{X})$ is one-dimensional, while $\RV{X}$ itself typically involves many controls. An especially common use of the propensity score in practice is via the inverse-propensity weighted (IPW) estimator \begin{equation} \label{ipw} \bar{\tau}_{\textrm{ipw}} = \frac{1}{\RV{N}} \sum_{i=1}^{\RV{N}} \left( \frac{\RV{Y}_i \RV{Z}_i}{\pi(\RV{X}_i)} - \frac{\RV{Y}_i (1-\RV{Z}_i)}{1-\pi(\RV{X}_i)} \right), \end{equation} which is known to be consistent and has been widely studied theoretically. Here we re-examine a curious result of \cite{hirano2003efficient} which shows that an IPW estimator based on estimated propensity scores attains lower asymptotic variance than one based on the true propensity function. We can apply the finite-sample results of Theorem \ref{theorem2} to re-evaluate the meaning of this widely-known result by noting the following correspondence between IPW estimators and stratification estimators: \begin{lemma}\label{ipw_strat} The empirical inverse propensity weighting (IPW) estimator is equivalent to $\bar{\tau}^{x}_{strat}$ under the following conditions: \begin{enumerate} \item $\mathcal{X}$ is discrete, \item For all $x \in \mathcal{X}$, $N_{x,1} > 0$ and $N_{x,0} > 0$, \item The propensity weighting function is estimated nonparametrically as $\hat{\pi}(x) = N_{x, 1} / N_{x}$ for each $x \in \mathcal{X}$. 
\end{enumerate} \end{lemma} \begin{proof} By direct calculation, \begin{equation*} \begin{aligned} \bar{\tau}^{x}_{ipw} &= \frac{1}{n} \sum_{i=1}^n \left(\frac{Y_i Z_i}{\hat{\pi}(X_i)} - \frac{Y_i(1-Z_i)}{1-\hat{\pi}(X_i)}\right) \\ &= \frac{1}{n} \sum_{i=1}^n \left(\frac{Y_i Z_i}{N_{x_i, 1} / N_{x_i}} - \frac{Y_i(1-Z_i)}{N_{x_i, 0} / N_{x_i}}\right) = \frac{1}{n} \sum_{i=1}^n \left(\frac{Y_i Z_i N_{x_i}}{N_{x_i, 1}} - \frac{Y_i (1-Z_i) N_{x_i}}{N_{x_i, 0}}\right)\\ &= \frac{1}{n} \sum_{x \in \mathcal{X}} \left( \frac{N_x}{N_{x,1}} \left( \sum_{i: X_i = x} Y_i Z_i \right) - \frac{N_x}{N_{x,0}} \left( \sum_{i: X_i = x} Y_i (1 - Z_i) \right)\right)\\ &= \frac{1}{n} \sum_{x \in \mathcal{X}} \left( \frac{N_x}{N_{x,1}} \left( N_{x,1} \bar{Y}_{x,1} \right) - \frac{N_x}{N_{x,0}} \left( N_{x,0} \bar{Y}_{x,0} \right)\right)\\ &= \sum_{x \in \mathcal{X}} \frac{N_x}{n} \left(\frac{N_{x,1} \bar{Y}_{x,1}}{N_{x,1}} - \frac{N_{x,0} \bar{Y}_{x,0}}{N_{x,0}}\right) = \sum_{x \in \mathcal{X}} \frac{N_x}{n} \left(\bar{Y}_{x,1} - \bar{Y}_{x,0}\right) = \bar{\tau}^{x}_{strat}. \end{aligned} \end{equation*} \end{proof} First, we give a finite-sample analogue of the \cite{hirano2003efficient} result in the stratification context. Then, we demonstrate a modified estimator based on a known propensity score that improves upon the estimated propensity score IPW. \subsubsection{``Noisy estimates of one''.} Denote a candidate propensity function by $q: \mathcal{X} \mapsto (0, 1)$, so that the corresponding IPW estimator is \begin{equation} \bar{\tau}_{\textrm{ipw}}^q = \sum_x \left( \frac{\RV{N}_x}{\RV{N}} \right) \bar{\tau}_{\textrm{ipw}}^{q,x} \end{equation} where \begin{equation} \bar{\tau}_{\textrm{ipw}}^{q,x} = \left( \frac{\hat{\pi}(x)}{q(x)} \bar{\RV{Y}}_{x, \RV{Z}=1} - \frac{1-\hat{\pi}(x)}{1-q(x)} \bar{\RV{Y}}_{x, \RV{Z}=0} \right) \end{equation} and $\hat{\pi}(x) = \left( \RV{N}_{x, \vectorobs{z}=1} / \RV{N}_{x} \right)$ is the proportion of treated units in each stratum. 
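Before comparing the two weighting choices, the algebraic identity in Lemma \ref{ipw_strat} can be confirmed numerically. The sketch below (a hypothetical three-level covariate; all parameter values illustrative) computes, on a single sample, the IPW estimator with cell-wise empirical propensities and the stratification estimator over the same cells; the two agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(5)

# One hypothetical sample with a three-level discrete covariate.
n = 500
x = rng.integers(0, 3, size=n)
z = rng.random(n) < np.array([0.3, 0.5, 0.7])[x]
y = x.astype(float) + 2.0 * z + rng.normal(size=n)

# Nonparametric propensity estimate: the treated fraction within each cell.
pi_hat = np.array([z[x == k].mean() for k in range(3)])[x]

tau_ipw = np.mean(y * z / pi_hat - y * (1 - z) / (1 - pi_hat))
tau_strat = sum(
    np.mean(x == k) * (y[(x == k) & z].mean() - y[(x == k) & ~z].mean())
    for k in range(3)
)
```

The equality is exact (up to floating point), exactly as the lemma's direct calculation shows.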
Taking $q(x) = \pi(x)$ is the ``true propensity score'' case, while letting $q(x) = \hat{\pi}(x)$ is the ``estimated propensity score'' case. In the former case, the treated and untreated stratum averages are weighted by $\hat{\pi}(x) / \pi(x)$ and $\left( 1-\hat{\pi}(x) \right) / \left(1-\pi(x) \right)$, respectively; in the latter case the weights are identically one. This difference in weights leads to the following analogue of the result of \cite{hirano2003efficient}: \begin{theorem} \label{theorem3} There exists some $\epsilon > 0$ such that if $\lvert \mu(x) \rvert + \lvert \tau(x) \rvert > \epsilon$ for at least one $x \in \mathcal{X}$, $$\mathbb{V} \left( \sum_{x \in \mathcal{X}} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},x} \right) \leq \mathbb{V} \left( \sum_{x \in \mathcal{X}} \bar{\tau}_{\textrm{ipw}}^{\pi,x} \right).$$ \end{theorem} Essentially, the random weights in the true propensity IPW only add variability, compared to the IPW based on estimated weights, where exact cancellation occurs. A proof may be found in the appendix. Of course, there are many other possible IPW estimators, such as those based on parametric estimates. However, any parametric form will have a similar problem to the true propensity IPW if exact cancellation is not obtained. \subsubsection{The dimension reduction benefit of known propensity scores.} Understanding why the estimated propensity weights outperform the true propensity weights permits us to consider a modified estimator that is able to make use of knowledge of the true propensity scores (should they be known). Suppose $K_{\pi} = |\pi(\mathcal{X})| < |\mathcal{X}| = K$. If $\pi$ were known exactly prior to estimating the average treatment effect, this reduction in the strata should confer a benefit in terms of variance reduction --- there are simply fewer conditional expectations to estimate and there are more data available for estimating each one.
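A small Monte Carlo sketch of the variance ordering in Theorem \ref{theorem3} follows; the DGP and all constants are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
reps, n = 2000, 400
vals = np.arange(4)
pi_true = 0.2 + 0.15 * vals               # true propensity per stratum

est_true, est_hat = [], []
for _ in range(reps):
    x = rng.integers(0, 4, size=n)
    z = rng.binomial(1, pi_true[x])
    y = 3.0 * x + 2.0 * z + rng.normal(size=n)   # mu(x) = 3x, tau = 2
    if any(((x == v) & (z == s)).sum() == 0 for v in vals for s in (0, 1)):
        continue  # skip rare degenerate draws with an empty cell
    pi_hat = np.array([z[x == v].mean() for v in vals])
    # IPW with true weights vs. IPW with empirically estimated weights.
    for pis, store in ((pi_true, est_true), (pi_hat, est_hat)):
        w = pis[x]
        store.append(np.mean(y * z / w - y * (1 - z) / (1 - w)))

print(np.var(est_true), np.var(est_hat))  # estimated weights give lower variance
```

Both estimators center on the true effect; the true-propensity version is noisier because the random weights $\hat{\pi}(x)/\pi(x)$ do not cancel exactly.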
Moreover, it is still possible to avoid the noisy-estimation-of-one effect by estimating the propensity score values on the level sets of $\pi$; letting $\rho \in \pi(\mathcal{X})$ denote a specific value in the range of $\pi$ gives \begin{equation} \begin{split} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},\rho} &= \left( \frac{\hat{\pi}(\rho)}{\hat{\pi}(\rho)} \bar{\RV{Y}}_{\rho, \RV{Z}=1} - \frac{1-\hat{\pi}(\rho)}{1-\hat{\pi}(\rho)} \bar{\RV{Y}}_{\rho, \RV{Z}=0} \right)\\ &= \bar{\RV{Y}}_{\rho, \RV{Z}=1} - \bar{\RV{Y}}_{\rho, \RV{Z}=0} . \end{split} \end{equation} More precisely: \begin{corollary} \label{corollary1} Suppose the following conditions hold: \begin{enumerate} \item If $\pi(x) = \pi(x')$, then $\mu(x) = \mu(x')$ and $\tau(x) = \tau(x')$, \item $\mathbb{V}(\epsilon \mid \RV{X} = x) = \sigma_{j}^2$ for all $x$ with $\pi(x) = j$ and for all $j \in \pi(\mathcal{X})$, and \item $|\pi(\mathcal{X})| < |\mathcal{X}|$. \end{enumerate} Then $$\mathbb{V} \left( \sum_{j \in \pi(\mathcal{X})} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},j} \right) \leq \mathbb{V} \left( \sum_{j \in \pi(\mathcal{X})} \left( \sum_{x: \pi(x) = j} \frac{N_x}{N_j} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},x} \right) \right).$$ \end{corollary} This result formalizes the intuition that fewer strata implies a greater degree of aggregation and that, with larger sample sizes in the remaining strata, estimation should be accordingly more efficient. In other words, knowledge of the true propensity function permits feature selection, after which the empirical propensities can be used in an IPW estimator (which is equivalent to the stratification estimator on the selected features). Condition one requires some explanation: in the fixed-strata case ``over-stratification'' can actually be beneficial if the additional strata are predictive of the response itself, and condition one rules out this possibility, as it states that $\mu\mbox{-}\tau$ is at least as coarse as $\pi$.
That is, in addition to the ``noisy-estimation-of-one'' phenomenon, empirical estimates of the propensity score can benefit from being defined on strata that are predictive of the response, but {\em not} the treatment assignment; this benefit is not directly related to the true-versus-estimated propensity score question, but merely reflects the fact that controlling for prognostic factors can benefit treatment effect estimation. \subsubsection{The inefficiency of instrumental controls.} While a known propensity score can potentially aid IPW estimation by preventing unnecessary stratification, an additional corollary of Theorem \ref{theorem2} tells us that stratification based on a known propensity function may produce unnecessary stratification as a result of {\em unconfounded} variation in propensity scores, which we refer to as ``instrumental'' stratification. \begin{corollary} \label{corollary2} Define a stratification $s$ such that $\lvert s(\mathcal{X}) \rvert < \lvert \pi(\mathcal{X}) \rvert$ and define $g: \pi(\mathcal{X}) \rightarrow s(\mathcal{X})$ as a function that collapses level sets of $\pi$ into level sets of $s$.
Let $m(j) = \lvert \left\{ \pi(x): g(\pi(x)) = j \right\} \rvert$ and suppose the following conditions hold: \begin{enumerate} \item There exist $x, x'$ such that $\pi(x) \neq \pi(x')$ while $s(x) = s(x')$, $\mu(x) = \mu(x')$, and $\tau(x) = \tau(x')$, \item $\mathbb{V}(\epsilon \mid \pi(\RV{X}) = p) = \sigma_{j}^2$ for all $x$ with $\pi(x) = p$ and $g(\pi(x)) = j$ and for all $j \in s(\mathcal{X})$ \end{enumerate} Then, $$\mathbb{V} \left( \sum_{j \in s(\mathcal{X})} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},j} \right) \leq \mathbb{V} \left( \sum_{j \in s(\mathcal{X})} \left( \sum_{\pi: g(\pi) = j} \frac{N_{\pi}}{N_j} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},\pi} \right) \right).$$ \end{corollary} This corollary and the other results of this subsection provide rigorous finite-sample corroboration of the advice offered in \cite{hernan2020causal} quoted in the introduction. \subsection{Generalized prognostic scores} In data generating processes where variation in $\tau$ is independent of $Z$, the {\em prognostic score}, $\mathbb{E}(Y^0 \mid X = x) = \mu(x)$, is a sufficient control function. This follows because mean conditional unconfoundedness is satisfied trivially by $s(X) = \mu(X)$ when $\tau(X) \independent Z$; see Lemma \ref{MCU}. Like the propensity score, the prognostic score can be estimated from partially observed data --- the propensity score can be estimated from $(X, Z)$ pairs and the prognostic score can be estimated from control units only, $(X, Z = 0, Y)$, which in many contexts are more readily available than treated observations. See \cite{hansen2008prognostic} for a rigorous exposition of prognostic scores. The vector-valued function $(\mu, \tau)$ is a ``generalized'' prognostic score, containing both the usual prognostic score, as well as the treatment effect itself. This version of the prognostic score has received little attention, presumably because it ``begs the question'', in that one of its elements is the very estimand of interest. 
However, note that conditioning on a random variable is not about the values of that variable per se, but is rather about the level sets of the function defining that random variable. In particular, any one-to-one function of $\mu$-$\tau$ also satisfies mean conditional unconfoundedness; knowledge of the treatment effect itself is not required, merely knowledge of which strata have distinct treatment effects. Note also that Theorem \ref{theorem2} indicates that prognostic strata are more desirable from an estimation variance perspective, suggesting, perhaps counterintuitively, that large control groups may be advantageous in practice and that investing in data collection of prognostic factors should be prioritized in cases where randomization of treatment assignment is not possible. \subsection{Constant control function}\label{constantcontrol} The previous two examples showed that propensity scores and prognostic scores are sufficient control functions; this example demonstrates a function that may be coarser than either one. Consider a function on $\mathcal{X}$ defined as follows: \begin{definition} A function $s$ on $\mathcal{X}$ is a {\em constant control} function if for all $x, x' \in \mathcal{X}$ such that $s(x) = s(x')$ at least one of the following holds \begin{itemize} \item $\pi(x) = \pi(x')$, \item $\mu(x) = \mu(x')$ and $\tau(x) = \tau(x')$. \end{itemize} \end{definition} In other words, a constant control function is a coarsening of $\mathcal{X}$ such that on each level set defined by $s$, either $\pi(x)$ or $(\mu(x), \tau(x))$ is constant. The following lemma shows that a constant control function defines a random variable $S = s(X)$ such that $\mathbb{E}(Y \mid Z = z, S) = \mathbb{E}(Y^z \mid S)$. \begin{lemma} \label{lemma4} Assume $X$ satisfies conditional unconfoundedness and consider the random variable $S = s(X)$, where $s$ is a constant control function; then $S$ satisfies conditional unconfoundedness.
\end{lemma} \begin{figure} \tikzfig{tikgraph}\tikzfig{tikgraph2} \caption{Causal graphical model and partially causal graphical model after integrating out $X$.} \label{decomposed_graphs} \end{figure} \begin{proof} Consider the causal diagram of $(X, Z, Y)$ expanded to include random variables $X_p = \pi(X)$, $X_y = (\mu(X), \tau(X))$, and $X_c = s(X)$ for $s$ defined above, depicted in panel (a) of Figure \ref{decomposed_graphs}. Integrating out $X$ leads to a probabilistic graphical model as shown in panel (b) of Figure \ref{decomposed_graphs}; dashed lines denote not-necessarily causal probabilistic dependence and solid arrows denote causal relationships. The result follows by showing that $X_p \independent X_y \mid X_c$; in terms of the diagram this means that the curved dashed line does not exist. But this follows immediately from the definition of $X_c$. For any value of $X_c$, either $X_p$ or $X_y$ is constant, and so the conditional distribution of $X_p$ and $X_y$ is trivially a product distribution. \end{proof} \noindent The intuition behind a constant control function is that one way to control for ``systematic co-variation'' is simply to remove all variation. Clearly, both $\pi(X)$ and $(\mu(X), \tau(X))$ are themselves constant control functions, as is $X$ itself. However, a constant control function may be coarser than either, as illustrated in Figure \ref{CCDR}, which shows an example of a simple data generating process that has a constant control function. In this example, the minimal constant control function comprises just two strata, although $\mu$ and $\pi$ take 10 and 11 unique values, respectively, and $|\mathcal{X}| = 20$. The treatment effect is heterogeneous but unconfounded: $\tau \sim \mbox{U}(5,10)$.
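A comparison of this kind can be sketched as follows; the exact $\mu$ and $\pi$ values used in Figure \ref{CCDR} are not reproduced here, so the stand-in values below are assumed choices consistent with the description (constant $\mu$ for $x \leq 11$, constant $\pi$ for $x > 11$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed stand-in values: mu constant on x <= 11 (10 unique values overall),
# pi constant on x > 11 and coinciding with pi(1) (11 unique values overall).
xs = np.arange(1, 21)
mu = np.where(xs <= 11, 0.0, xs - 11.0)
pi = np.where(xs <= 11, 0.25 + 0.03 * (xs - 1), 0.25)
lam = (xs <= 11).astype(int)  # the minimal constant control function

def strat_estimate(strata, z, y):
    # Weighted difference of treated/untreated means within each stratum.
    return sum(
        (strata == s).mean()
        * (y[(strata == s) & (z == 1)].mean() - y[(strata == s) & (z == 0)].mean())
        for s in np.unique(strata)
    )

n, reps = 4000, 400
draws = {"mu": [], "x": [], "lam": [], "pi": []}
for _ in range(reps):
    x = rng.integers(1, 21, size=n)
    z = rng.binomial(1, pi[x - 1])
    tau_i = rng.uniform(5, 10, size=n)  # heterogeneous, unconfounded effect
    y = mu[x - 1] + tau_i * z + rng.normal(size=n)
    for name, s in (("mu", mu[x - 1]), ("x", x), ("lam", lam[x - 1]), ("pi", pi[x - 1])):
        draws[name].append(strat_estimate(s, z, y))

for name, d in draws.items():
    print(name, round(float(np.mean(d)), 2), round(float(np.var(d)), 4))
```

All four stratifications are unbiased for $\mathbb{E}(\tau) = 7.5$ under this sketch, with variances that differ across the stratifying functions.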
The second panel of Figure \ref{CCDR} shows the sampling distributions of four different stratification estimators: one using level sets of $\mu$, one using level sets of $\pi$, one using all 20 values of $x$, and one using the two values of the minimal constant control function, indicating whether $x \leq 11$. All four stratification estimators are unbiased, but their differing variances exhibit a pattern consistent with Theorem \ref{theorem2}: $\mu$ gives the lowest variance, followed by $x$, followed by the constant control function, followed by $\pi$. \begin{figure} \hspace*{-0.5cm} \includegraphics[width=3in]{DGP_level_sets.png}\includegraphics[width=3in]{Estimator_distribution.png} \caption{An example of a DGP admitting a simple constant control function, $\lambda = \mathds{1}(x \leq 11)$. Here $\tau \sim \mbox{U}(5,10)$ is heterogeneous and $x \in \lbrace 1, \dots 20 \rbrace$. The left panel shows the $\mu$ values in black and the $\pi$ values in gray. The right panel shows the sampling distributions of stratification estimators based on the level sets of different functions: $\mu$ (solid black), $x$ (solid gray), $\lambda$ (dashed gray) and $\pi$ (dashed black).
All four estimators are unbiased, with variances that differ in line with the results of Theorem \ref{theorem2}.} \label{CCDR} \end{figure} \begin{figure} \begin{tikzpicture} \begin{scope} \fill[lightgray] (330:1.25cm) circle (1.5cm); \fill[lightgray] (210:1.25cm) circle (1.5cm); \end{scope} \begin{scope} \fill[white] (90:1.25cm) circle (1.5cm); \end{scope} \begin{scope} \clip (330:1.25cm) circle (1.5cm); \fill[draw=black, pattern=north east lines] (90:1.25cm) circle (1.5cm); \end{scope} \begin{scope} \clip (210:1.25cm) circle (1.5cm); \fill[draw=black, pattern=north east lines] (90:1.25cm) circle (1.5cm); \end{scope} \draw (90:1.25cm) circle (1.5cm) node[text=black,above] {$\pi$}; \draw (210:1.25cm) circle (1.5cm) node [text=black,below left] {$\mu$}; \draw (330:1.25cm) circle (1.5cm) node [text=black,below right] {$\tau$}; \end{tikzpicture} \caption{If the coordinate dimensions of $\RV{X}$ are independent, then which variables $X_j$ appear (or not) in the eight possible combinations of $\pi$, $\mu$, and $\tau$ can be used to characterize four relevant variable types with respect to treatment effect estimation: necessary controls, pure prognostic variables, instruments, and extraneous (or noise) variables. The above Venn diagram depicts these eight regions, shaded according to these designations. Variables in the cross-hatched region are necessary controls, as they appear in both $\pi$ and either $\mu$ or $\tau$ (or both). The gray shaded region corresponds to pure prognostic variables, appearing in $\mu$ or $\tau$ (or both), but not appearing in $\pi$. The white region corresponds to instruments, variables which appear in $\pi$, but neither in $\mu$ nor $\tau$. Variables outside of the three circles are entirely irrelevant to either the outcome or the treatment. Such designations become considerably more complicated when the elements of $X$ are not independent (cf. 
Example \ref{noncausalSEM}).} \label{venn} \end{figure} \subsection{Independent variables in both $\pi$ and $(\mu, \tau)$.}\label{independent} When the coordinates of $\RV{X}$ (the nodes in the graph) are all mutually independent, a valid control set is the elements $X_j$ occurring in {\em both} the propensity model and (at least one of) the prognostic and moderation models. For example, this was the strategy used in concocting the example DGP presented in section \ref{partial}. As a more general example, if $\pi(x_1, \dots, x_d) = \pi(x_1, x_2)$, $\mu(x_1, \dots, x_d) = \mu(x_2, x_3)$, and $\tau(x_1, \dots, x_d) = \tau(x_4, x_5)$, and $X_1 \independent X_{j}$ for $j \neq 1$, then $X_2$ is a sufficient control. This is because $X_1$ can be integrated out of the model without inducing dependence between $\pi(x_2) = \mathbb{E}(\pi(X_1, x_2) \mid X_2 = x_2)$ and $(\mu, \tau)$, because $X_1$ is independent of the variables appearing in $\mu$ and/or $\tau$ (and does not itself appear). A similar integration could be performed for variables in $\mu$ and/or $\tau$ that do not appear in $\pi$, so long as they too were independent. In fact, only conditional independence is necessary; in the present example, $X_1 \independent (X_3, X_4, X_5) \mid X_2$. Figure \ref{venn} depicts the characterization of variables into four categories (necessary controls, pure prognostic variables, instruments, and extraneous) in the case that they are mutually independent. \subsection{Sets satisfying the back-door criterion according to a given CDAG.}\label{noncausalSEM} Consider the causal diagram in Figure \ref{rectangle}. Either the propensity controls $(X_1, X_3)$ or the prognostic-moderation controls $(X_2, X_4)$ are adequate for statistical control. However, Pearl's algorithm tells us that ``mixed'' variables also suffice, such as $(X_1, X_4)$ or $(X_2, X_3)$. Interestingly, such examples show that the notions of ``instrumental'' and ``prognostic'' variables are context dependent.
Specifically, relative to a conditioning set of $(X_2, X_3)$, additional stratification using $X_4$ is prognostic, while additional stratification on $X_1$ would be instrumental. Theorem \ref{theorem2} suggests that adding prognostic controls is often desirable, while adding instruments should be avoided, but such designations will fluctuate depending on what has already been included. This example also illustrates a limitation of the triangle graph. Suppose that only $(X_2, X_3)$ were available for measurement. The resulting diagram for just those two controls (Figure \ref{triangle}) is {\em not} the usual causal diagram, because $X_2$ has no causal impact on $Z$, while $X_3$ has no causal impact on $Y.$ Accordingly, there is no unaugmented CDAG describing $(X_2, X_3, Z, Y)$; instead, we must denote merely statistical relationships using dashed lines. When a practitioner invokes conditional unconfoundedness in the potential outcomes framework, it therefore does not imply the triangular CDAG. Similarly, invoking (conditionally) exogenous errors does not imply that the resulting mean components of the structural model are causal. 
In more detail, if the potential outcomes are defined in terms of the CDAG on the full set $(X_1, X_2, X_3, X_4)$, a structural model can be derived that only involves $(X_2, X_3)$, as follows: \begin{equation} \begin{split} Y^0 &= F(x_1, x_2, x_3, x_4, z = 0, \epsilon_y) = F(x_2, x_4, z = 0, \epsilon_y)\\ Y^1 &= F(x_1, x_2, x_3, x_4, z = 1, \epsilon_y) = F(x_2, x_4, z = 1, \epsilon_y)\\ \mu(x_2, x_3) &\equiv \mathbb{E}(Y^0 \mid X_2 = x_2, X_3 = x_3)\\ \tau(x_2, x_3) &\equiv \mathbb{E}(Y^1 \mid X_2 = x_2, X_3 = x_3) - \mathbb{E}(Y^0 \mid X_2 = x_2, X_3 = x_3)\\ \upsilon(X_1, x_2, x_3, X_4, \epsilon_y) &\equiv F(X_1, x_2, x_3, X_4, z = 0, \epsilon_y) - \mu(x_2, x_3)\\ & = F(x_2, X_4, z = 0, \epsilon_y) - \mu(x_2, x_3)\\ \delta(X_1, x_2, x_3, X_4, \epsilon_y) &\equiv F(X_1, x_2, x_3, X_4, z = 1, \epsilon_y) - F(X_1, x_2, x_3, X_4, z = 0, \epsilon_y) - \tau(x_2, x_3)\\ &= F(x_2, X_4, z = 1, \epsilon_y) - F(x_2, X_4, z = 0, \epsilon_y) - \tau(x_2, x_3). \end{split} \end{equation} Noting that the resulting error terms now depend not only on $\epsilon_y$, but also on $X_4$, it is necessary to show that $$(X_4, \epsilon_y) \independent Z \mid (X_2, X_3).$$ But this follows from the fact that $\mathbb{E}(Z \mid X_2 = x_2, X_3 = x_3) = \mathbb{E}(\pi(X_1, X_3) \mid X_2 = x_2, X_3 = x_3) \equiv \pi(x_2, x_3)$ is free of $X_4$. In this model, $\mu(x_2, x_3)$, $\tau(x_2, x_3)$ and $\pi(x_2, x_3)$ must not be interpreted as causal functions, despite yielding the required exogenous errors; specifically, from the graph we know that $X_2$ has no causal impact on $Z$ and $X_3$ has no causal impact on $Y$, as depicted in Figures \ref{rectangle} and \ref{triangle}. \subsection{Sets satisfying the back-door criterion according to a transformed CDAG.} \label{transformedCDAG} This example considers a data generating process that admits distinct CDAGs, depending on how the control variables are parametrized.
This scenario is not commonly discussed, presumably because observed measurements are taken to be designated by ``nature'', so to speak. However, reflecting on invertible transformations such as $(x_1, x_2) \rightarrow (x_1, x_1/x_2)$ highlights that functional causal models are certainly subject to changes of variables. More concretely, consider the following DGP: \begin{equation*} \begin{aligned} X_j &\stackrel{iid}{\sim}\mbox{Bernoulli}\left(p_j\right)\\ \pi(X) &= \beta_0 + \beta_1 (2X_1X_2 - X_1 - X_2 + 1) + \beta_2 X_3\\ Z &\sim \mbox{Bernoulli}\left(\pi(X)\right)\\ \mu(X) &= \alpha_0 + \alpha_1 (2X_1X_2 - X_1 - X_2 + 1) + \alpha_2 X_4\\ \tau(X) &= \tau \;\; \mbox{(constant treatment effect)}\\ Y &= \mu(X) + \tau(X) Z + \epsilon, \;\;\; \epsilon \sim \mathcal{N}\left(0, \sigma^2_{\epsilon}\right)\\ \end{aligned} \end{equation*} Next, define random variable $W = (2X_1X_2 - X_1 - X_2 + 1)$, regarding $X_2$ as the exogenous variable in the functional model for $W \mid X_1$. Additionally, suppressing $X_3$ and $X_4$, as they represent exogenous variation, yields the causal graph in Figure \ref{graph5}. 
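This construction is easy to check numerically; a minimal sketch follows, in which the coefficients are arbitrary choices keeping $\pi(X)$ inside $(0,1)$, and stratifying on $W$ alone recovers the treatment effect:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Arbitrary coefficient choices keeping pi(X) inside (0, 1).
b0, b1, b2 = 0.2, 0.3, 0.2
a0, a1, a2 = 1.0, 2.0, 1.0
tau = 0.5

x1, x2, x3, x4 = (rng.binomial(1, 0.5, size=n) for _ in range(4))
w = 2 * x1 * x2 - x1 - x2 + 1          # equals 1 when x1 == x2, else 0
assert np.array_equal(w, (x1 == x2).astype(int))

z = rng.binomial(1, b0 + b1 * w + b2 * x3)
y = a0 + a1 * w + a2 * x4 + tau * z + rng.normal(size=n)

naive = y[z == 1].mean() - y[z == 0].mean()    # confounded by W
strat_w = sum(
    (w == v).mean() * (y[(w == v) & (z == 1)].mean() - y[(w == v) & (z == 0)].mean())
    for v in (0, 1)
)
print(round(naive, 3), round(strat_w, 3))  # strat_w is near 0.5; naive is biased upward
```

Note that the two-valued $W$ suffices even though $X_3$ still moves the propensity and $X_4$ still moves the outcome; both are independent of $W$ and so integrate out harmlessly.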
\begin{figure} \begin{minipage}[b]{180pt} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.45] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (3) at (-2.5, -2.5) {$X_3$}; \node [style=myvar] (4) at (7.5, 2.5) {$X_4$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (2) to (6); \draw [style=arrow] (3) to (5); \draw [style=arrow] (4) to (6); \draw [style=arrow] (5) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{Causal graph in terms of original covariates} \label{graph5a} \end{minipage} \hfill \begin{minipage}[b]{180pt} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.75] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, 2.5) {$W$}; \node [style=myvar] (7) at (0.66, 0) {$Z$}; \node [style=myvar] (8) at (4.33, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (2); \draw [style=arrow] (2) to (7); \draw [style=arrow] (2) to (8); \draw [style=arrow] (7) to (8); \end{pgfonlayer} \end{tikzpicture} \caption{Causal graph under transformed covariates} \label{graph5} \end{minipage} \end{figure} From this graph, it is clear that conditioning on $W$ satisfies conditional unconfoundedness. Most interestingly, $\lvert \mu(\mathcal{X}) \rvert = \lvert \pi(\mathcal{X}) \rvert = 4$, while $\lvert \mathcal{W} \rvert = 2$; thus $W$ provides the smallest possible control variable.
Indeed, the level sets of $W$ are exactly the level sets of $\lambda$: $$\mathbb{E}(\pi(X) \mid \mu(X)) = \mathbb{E}(\pi(X) \mid W, X_4) = \mathbb{E}(\pi(X) \mid W).$$ \subsection{Sets that induce collider bias in a graph without colliders}\label{pseudoCollider} We see in Section \ref{transformedCDAG} that conditioning on synthetic ``features'' that combine existing variables can lead to smaller control sets than their component variables. It is thus perhaps natural to consider machine learning useful in searching for and constructing such sets. It is true that \textit{certain} combinations of confounding variables may create synthetic, minimal deconfounders. However, it is also possible to combine two independent variables to create a ``collider'' (defined in Section \ref{DAG_section}) which confounds the causal effect of $Z$ on $Y$ after conditioning. Consider the graph in Figure \ref{pseudo_collider_graph} and define its data generating equations as \begin{equation*} \begin{aligned} Y &\sim \mathcal{N}\left(\alpha X_2 + \tau Z , \sigma^2 \right)\\ Z &\sim \mbox{Bernoulli}\left(1 / 4 + X_1 / 2\right)\\ X_1, X_2 &\sim \mbox{Bernoulli}\left(1 / 2\right) \end{aligned} \end{equation*} \begin{figure} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.45] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-3, 3) {$X_1$}; \node [style=myvar] (2) at (3, 3) {$X_2$}; \node [style=myvar] (3) at (-3, 0) {$Z$}; \node [style=myvar] (4) at (3, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (3); \draw [style=arrow] (2) to (4); \draw [style=arrow] (3) to (4); \end{pgfonlayer} \end{tikzpicture} \caption{Graph with no confounding and no colliders} \label{pseudo_collider_graph} \end{figure} From this graph, we can see that the average causal effect of $Z$ on $Y$ is identified unconditional of $X_1$ and $X_2$, though we may condition on either or both variables.
Suppose we construct two synthetic variables \begin{equation*} \begin{aligned} \tilde{X}_A &= \min\left\{X_1, X_2\right\}\\ \tilde{X}_B &= a \left[ \mathbf{1}\left\{X_1 = 1\right\} \mathbf{1}\left\{X_2 = 1\right\} + \mathbf{1}\left\{X_1 = 0\right\}\mathbf{1}\left\{X_2 = 0\right\} \right]\\ &\;\;\;\;\; + b \left( \mathbf{1}\left\{X_1 = 1\right\} \mathbf{1}\left\{X_2 = 0\right\} \right) + c \left( \mathbf{1}\left\{X_1 = 0\right\} \mathbf{1}\left\{X_2 = 1\right\} \right) \end{aligned} \end{equation*} where the unique values of the categorical variable $\tilde{X}_B$ may be treated as strata of a conditioning set. We show in the simulation results in Table \ref{tab:table2} that conditioning on either $\tilde{X}_A$ or $\tilde{X}_B$ biases estimation of the average treatment effect, while conditioning on both $X_1$ and $X_2$ does not. Note that both $\tilde{X}_A$ and $\tilde{X}_B$ are constructed in a manner not unlike the ``feature learning'' step of common machine learning algorithms, such as neural networks and decision trees. \input{pseudo_collider.tex} \subsection{Sets satisfying the back-door criterion with respect to a mean CDAG.}\label{meanCDAG} The structural model perspective permits us to produce, starting from a given CDAG, a modified causal diagram that reflects only the mean dependencies. For estimation of average causal differences, such a graph suffices to identify valid control variable sets that are potentially smaller than any control set satisfying the back-door criterion on the original CDAG.
For example, consider the following data generating process: \begin{equation*} \begin{aligned} X &\sim \mbox{Bernoulli}(1/2),\\ Z &\sim \mbox{Bernoulli}\left(1/4 + X/2 \right)\\ Y &\sim \mathcal{N}\left(\tau Z, (\sigma + X)^2\right) \end{aligned} \end{equation*} For this DGP, $\mu(X) = 0$ and $\tau(X) = \tau$ are both constant in $X$, which implies that the null set satisfies mean conditional unconfoundedness; even though $X$ is a common cause of $Z$ and $Y$, it only affects the variance of $Y$, but not the mean. Therefore, the full joint distribution of $X, Z, Y$ is the triangle diagram of Figure \ref{triangle}, while Figure \ref{graph16} depicts the joint distribution of $(X, Z, \mathbb{E} (Y \mid X, Z))$, in which $X$ is unconnected to $\mathbb{E}(Y \mid X, Z) = \mathbb{E}(Y \mid Z)$. Note that while mean conditional unconfoundedness identifies the ATE, it does not identify other causal estimands. For instance, consider the quantile treatment effect (QTE), for $q \in (0,1)$: $$F^{-1}_{Y^1}(q) - F^{-1}_{Y^0}(q)$$ where $F^{-1}$ denotes an inverse cumulative distribution function. Integrating out $X$, $Y \mid Z = z$ is a mixture of two normal random variables, with PDF and CDF defined as \begin{equation*} \begin{aligned} f(y \mid Z = z) &= w_z \phi(y, \tau z, (\sigma + 1)^2) + (1 - w_z) \phi(y, \tau z, \sigma^2),\\ F(y \mid Z = z) &= w_z \Phi(y, \tau z, (\sigma + 1)^2) + (1 - w_z) \Phi(y, \tau z, \sigma^2) \end{aligned} \end{equation*} where $w_z = \mathbb{P}\left( X = 1 \mid Z = z\right)$. By contrast, the marginal PDF and CDF of $Y^z$ are given by \begin{equation*} \begin{aligned} f_{Y^z}(y) &= \frac{1}{2} \phi(y, \tau z, (\sigma + 1)^2) + \frac{1}{2} \phi(y, \tau z, \sigma^2),\\ F_{Y^z}(y) &= \frac{1}{2} \Phi(y, \tau z, (\sigma + 1)^2) + \frac{1}{2} \Phi(y, \tau z, \sigma^2).
\end{aligned} \end{equation*} Because $X \not\independent Z$, $w_z \neq 1/2$ and therefore \begin{equation*} \begin{aligned} F^{-1}_{Y^1}(q) - F^{-1}_{Y^0}(q) &\neq F^{-1}_{Y \mid Z=1}(q) - F^{-1}_{Y \mid Z=0}(q), \end{aligned} \end{equation*} as illustrated in Figure \ref{QTE}. \begin{figure}[h] \centering \begin{tikzpicture}[baseline=-0.5em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-2, -1) {$Z$}; \node [style=myvar] (3) at (0, 1) {$X$}; \node [style=myvar] (2) at (2, -1) {$\mathbb{E} (Y \mid X, Z)$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (3) to (1); \draw [style=arrow] (1) to (2); \end{pgfonlayer} \end{tikzpicture}\vspace{-0.6cm} \caption{Mean causal graph}\label{graph16} \end{figure} \begin{figure} \includegraphics[width=2.5in]{example4point9a.png} \includegraphics[width=2.5in]{example4point9b.png}\\ \includegraphics[width=2.5in]{example4point9c.png} \includegraphics[width=2.5in]{example4point9d.png}\\ \includegraphics[width=2.5in]{example4point9e.png} \includegraphics[width=2.5in]{example4point9f.png} \caption{An illustration of a confounded quantile treatment effect with unconfounded ATE. The top two panels depict the density and CDF functions of the DGP from section \ref{meanCDAG} for the four combinations of $X \in \{ 0, 1 \}$ and $Z \in \{ 0, 1 \}$. For each value of $X$ the change in the quantile is a constant shift to the right. The second row shows the densities of the potential outcome distributions and the conditional distribution of $Y \mid Z$, respectively, with $X$ integrated out. In both cases, the resulting density is a mixture of two normals with different variances and a common mean. However, the potential outcomes densities are just translations of the same mixture density, whereas the conditional distribution of $Y \mid Z$ also differs in terms of the mixture weights. The bottom row depicts the same relationship, but in terms of the CDFs. 
Attempts to estimate the quantile treatment effect --- shown here as the distance between the black and grey curves at the horizontal dashed line in the left panel --- using the analogous distance from the right panel would misestimate the effect.}\label{QTE} \end{figure} \subsection{Partial randomization}\label{partial} Some estimands require weaker assumptions than estimating the average treatment effect over the whole population does. For example, the {\em average treatment effect among the treated}, or ATT, is defined as $\mathbb{E}(Y^1 - Y^0 \mid Z = 1) = \mathbb{E}(Y^1 \mid Z = 1) - \mathbb{E}(Y^0 \mid Z = 1)$\footnote{In our experience, this potential outcomes notation for the ATT can give students fits, particularly the $\mathbb{E}(Y^0 \mid Z = 1)$ term. Such students may find the structural equation notation to be somewhat more transparent: $\mathbb{E}(\tau(X) \mid Z = 1)$ makes it clear that the probabilistic impact of conditioning on $Z = 1$ is to modify the distribution over $X$ defining the expectation; there is no opportunity for cognitive interference from the fact that the ``$z$'' in $Y^z$ is different from that in the condition $Z = z$.}. This estimand is important in the program evaluation literature; see, for example, \cite{heckman1996identification} and \cite{heckman1997matching}. Here we use structural model notation to compare the ATT to the ATE, as it relates to the ``naive'' contrast that compares the average response among treated individuals to the average response among the untreated individuals. In terms of the population, the naive contrast estimates $\mathbb{E}(Y \mid Z = 1) - \mathbb{E}(Y \mid Z = 0)$. In terms of the structural model, this is equivalent to $$\mathbb{E}(\mu(X) + \tau(X) \mid Z = 1) - \mathbb{E}(\mu(X) \mid Z = 0).$$ By definition, the exogenous errors are mean zero and vanish from the above expression.
Now, randomization of $Z$ implies that $(\mu(X), \tau(X)) \independent Z$, which in turn implies that $\mathbb{E}(\mu(X) \mid Z = 1) = \mathbb{E}(\mu(X) \mid Z = 0)$ and therefore that $$\mathbb{E}(\mu(X) + \tau(X) \mid Z = 1) - \mathbb{E}(\mu(X) \mid Z = 0) = \mathbb{E}(\tau(X) \mid Z = 1),$$ the ATT. Randomization further implies that $\mathbb{E}(\tau(X) \mid Z = 1) = \mathbb{E}(\tau(X))$, so that the ATE and the ATT are the same. However, the above derivation also reveals that to estimate the ATT one only needs $\mathbb{E}(\mu(X) \mid Z = 1) = \mathbb{E}(\mu(X) \mid Z = 0)$, or what we might call {\em mean prognostic unconfoundedness}, which itself follows from $\mu(X) \independent Z$, or {\em prognostic unconfoundedness}. Thus, when the ATT is the sole interest, one only needs to rule out prognostic confounding. Meanwhile, treatment effect confounding, $\tau(X) \not \independent Z$, entails that the ATT and ATE are different, so that the ATE remains unknown even with the ATT in hand. Note that a similar argument works for $\mathbb{E}(\tau(X) \mid Z = 0)$, the average effect of the treatment on the control (untreated) population, or ATC. This is easiest to see by reparametrizing the structural model in terms of: $Z^* = 1 - Z$, $\mu^*(X) = \mu(X) + \tau(X)$, and $\tau^*(X) = -\tau(X)$. It then follows that the ATC may be estimated from the naive contrast so long as $\mu^*(X) \independent Z$. As it relates to feature selection, it is notable that a smaller feature set may allow estimating the ATT than would be required for estimating the ATE. The following DGP is a concrete example: \begin{align*} X_1 \sim \mbox{Bernoulli}(1/2)&,\;\;\;X_2 \sim \mbox{Bernoulli}(1/2),\\ Z \mid X_1, X_2 &\sim \mbox{Bernoulli}(0.25 + 0.5 X_2),\\ Y \mid X_1, X_2, Z &\sim \mathcal{N}(X_1 + (1 + 2 X_2)Z, \sigma^2). \end{align*} In this example, $\tau(X) = \tau(X_2) = 1 + 2 X_2$, $\mu(X) = \mu(X_1) = X_1$, and the ATE is $\mathbb{E}(\tau(X)) = 1 + 2\mathbb{E}(X_2) = 2$. 
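This DGP is straightforward to simulate; a minimal sketch follows, in which the noise scale $\sigma = 1$ is an arbitrary choice, comparing the naive contrast, a plug-in ATT using the known $\tau(X)$, and the ATE:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400_000
sigma = 1.0  # noise scale, an arbitrary choice

x1 = rng.binomial(1, 0.5, size=n)
x2 = rng.binomial(1, 0.5, size=n)
z = rng.binomial(1, 0.25 + 0.5 * x2)
y = x1 + (1 + 2 * x2) * z + rng.normal(scale=sigma, size=n)

naive = y[z == 1].mean() - y[z == 0].mean()
att = (1 + 2 * x2)[z == 1].mean()       # plug-in ATT using the known tau(X)
ate = (1 + 2 * x2).mean()               # approximately 2

print(round(naive, 2), round(att, 2), round(ate, 2))
```

At this sample size the naive contrast tracks the plug-in ATT closely while remaining visibly separated from the ATE, illustrating the derivation above.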
The ATT, on the other hand, is $\mathbb{E}(\tau(X) \mid Z = 1) = 1 + 2\mathbb{E}(X_2 \mid Z = 1) = 1 + 2(3/4) = 5/2$, since by Bayes' rule $\mathbb{E}(X_2 \mid Z = 1) = (0.75 \times 0.5)/0.5 = 3/4$. It is a nice simulation exercise to demonstrate that the naive contrast is consistent for the ATT, but not the ATE. \subsection{A two-stage estimator using two distinct control features}\label{split_sample} This example builds upon the ideas presented in the previous one, but returns to the goal of regression adjustments for the ATE. Suppose we know that $\mu(X) \independent Z \mid s_1(X)$ and $\tau(X) \independent Z \mid s_2(X)$, for distinct functions (features) $s_1$ and $s_2$. One approach to estimating the ATE under this assumption would be to stratify on the common refinement of $s_1(X)$ and $s_2(X)$, thus guaranteeing that $(\mu(X), \tau(X)) \independent Z \mid s(X) = s_1(X) \vee s_2(X)$. But an alternative two-stage approach is possible, which requires estimating fewer individual strata means. The procedure is: \begin{enumerate} \item Estimate $\mu(s_1(X)) = \mathbb{E}(Y \mid Z = 0, s_1(X))$ from the control data. \item Define $R = Y - \mu(s_1(X))$. \item Estimate $\mathbb{E}(R \mid Z = 1, s_2(X))$ from the treated data. \item Compute the ATE as $\mathbb{E}_X(\mathbb{E}(R \mid Z = 1, s_2(X)))$, where the outer expectation is over $X$, with respect to its marginal distribution.
\end{enumerate} We may verify the validity of this estimator by first expressing the procedure as the following iterated expectation: \begin{equation*} \begin{aligned} &\mathbb{E}_{X}\left( \mathbb{E}(Y - \mathbb{E}(Y \mid Z = 0, s_1(X)) \mid Z = 1, s_2(X)) \right)\\ &\quad= \mathbb{E}_{X}\left( \mathbb{E}(Y \mid Z = 1, s_2(X)) \right) - \mathbb{E}_{X}\left( \mathbb{E}(Y \mid Z = 0, s_1(X)) \right)\\ &\quad= \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) + \tau(X) \mid Z = 1, s_2(X)) \right) - \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) \mid Z = 0, s_1(X)) \right)\\ &\quad= \mathbb{E}_{X}\left( \mathbb{E}(\tau(X) \mid Z = 1, s_2(X)) \right) + \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) \mid Z = 1, s_2(X)) \right) - \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) \mid Z = 0, s_1(X)) \right). \end{aligned} \end{equation*} By the assumption that $\mu(X) \independent Z \mid s_1(X)$, we find that $\mathbb{E}(\mu(X) \mid Z = 0, s_1(X)) = \mathbb{E}(\mu(X) \mid Z = 1, s_1(X))$, which in turn implies that the second and third terms above are both equal to $\mathbb{E}_X(\mu(X) \mid Z = 1)$ (just expressed as distinct iterated expectations) and thus cancel. By the assumption that $\tau(X) \independent Z \mid s_2(X)$, the remaining term is equal to $\mathbb{E}(\tau(X) \mid s_2(X))$ and the desired result follows after taking the outer expectation: $\mathbb{E}(\tau(X)) = \mathbb{E}_X(\mathbb{E}(\tau(X) \mid s_2(X)))$. \section{Discussion} To conclude, we synopsize our results and discuss further relationships to previous literature. \subsection{Famous results or debates revisited} The discrete covariate setting studied here allowed us to revisit several important existing results from a unique perspective. \subsubsection*{Virtues of the propensity score.} \cite{rosenbaum1983central} is often cited in support of propensity score methods for causal inference, but its results are often overstated. First, there is not one propensity score, but many, one corresponding to each valid set of control features.
Second, a propensity score need not be minimal; it is the minimal balancing score for the complete set of features used to create it, but balancing on those features is not necessary to estimate causal effects. Third, a propensity score method that disregards important prognostic features can be much less efficient than a method that does incorporate such features. \subsubsection*{Estimated versus true propensity scores.} In practice, the propensity score (corresponding to a given set of control features) is rarely known and so must be estimated. \cite{hirano2003efficient} is sometimes cited to put a positive spin on this state of affairs: estimating a propensity function is better than knowing it exactly! But the actual situation is more nuanced. The asymptotic analysis of \cite{hirano2003efficient} comparing the IPW estimator using true versus estimated propensity scores conceals the variety of specific ways the two estimators differ. Viewing the IPW as a stratification estimator in the discrete covariate setting throws these distinctions into sharp relief. One, the IPW using the true propensity scores uses different strata weights than the one using the estimated propensity scores, resulting in a higher-variance estimator. Two, the IPW based on a true propensity score is able to collapse unnecessary strata, which can reduce the variance of the estimator. Three, collapsing unnecessary strata does not {\em always} reduce the variance, because the ``extraneous'' strata may be informative about {\em unconfounded} variation in the response. That is, an IPW estimator based on estimated propensity scores can have lower variance than one based on a true propensity score because it performs an implicit regression adjustment that is essentially unrelated to the propensity score.
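These distinctions can be illustrated numerically. In the small simulation below (a hypothetical DGP of our own, not from the paper; numpy assumed), a single binary covariate is strongly prognostic, and the IPW estimator built on stratum-wise estimated propensity scores exhibits visibly lower Monte Carlo variance than the one built on the true scores, precisely because the former amounts to an implicit regression adjustment:

```python
import numpy as np

rng = np.random.default_rng(1)

def ipw(z, y, pi):
    """Horvitz-Thompson IPW contrast with supplied propensity scores."""
    return np.mean(z * y / pi) - np.mean((1 - z) * y / (1 - pi))

def one_draw(n=500):
    x = rng.binomial(1, 0.5, n)                # one binary covariate
    pi_true = np.where(x == 1, 0.7, 0.3)       # true propensity score
    z = rng.binomial(1, pi_true)
    y = 3.0 * x + z + rng.normal(0.0, 1.0, n)  # strongly prognostic x; true ATE = 1
    # estimated propensity score: treated fraction within each stratum
    pi_hat = np.where(x == 1, z[x == 1].mean(), z[x == 0].mean())
    return ipw(z, y, pi_true), ipw(z, y, pi_hat)

draws = np.array([one_draw() for _ in range(2000)])
var_true, var_est = draws.var(axis=0)
print(var_true, var_est)  # estimated scores give the lower-variance estimator
```

Both versions are unbiased for the true ATE of one here; the difference is purely one of variance, driven by the prognostic covariate.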
To be sure, the mathematics of \cite{hirano2003efficient} are consistent with our analysis, and one can parse their expressions for such meaning, but their analysis does not expose the importance of either variable selection or prognostic stratification. \subsubsection*{Regression adjustments for randomized experiments.} \cite{freedman2008regression} is sometimes cited as a reason to avoid regression adjustment for causal effect estimation altogether. However, Freedman's result was more about model specification --- or {\em mis}specification --- than it was about regression adjustment per se. Provided that one undertakes a nonparametric adjustment, as advocated by \cite{lin2013agnostic}, Freedman's main concerns are addressed. At the same time, nonparametric adjustment poses its own challenges, in the form of high-variance estimators. Whether or not the inclusion of strong prognostic features is enough to offset the increased variability that comes with estimating a nonparametric model with limited data is impossible to say in any generality. Theorem \ref{theorem2} approaches this question quantitatively. \subsubsection*{The peril of colliders.} \cite{greenland1999causal} introduce the ``M-Graph'' and the problem of conditioning on unblocked colliders. The issue was vigorously debated in a series of articles and replies in {\it Statistics in Medicine} between 2007 and 2009. \cite{rubin2007design} suggested that all available pre-treatment covariates should be included in the conditioning set of any observational causal analysis, while others (\cite{shrier2008letter}; \cite{sjolander2009propensity}; \cite{pearl2009remarks}) contended that such a strategy could incur collider bias. \cite{rubin2009should} responded that unblocked colliders are a stylized problem that has few practical ramifications. This exchange in turn motivated further research, including \cite{ding2015adjust}, \cite{rohde2019bayesian}, and \cite{cinelli2020crash}.
Here, we observed that, should colliders appear in a set of control variables --- along with the associated blocking variables --- regularization can unintentionally induce collider bias, revealing that colliders are not only a problem when their parents are unobserved. In particular, regularized regression approaches will struggle with colliders that are blocked by only a propensity-side ancestor. Additionally, Section \ref{pseudoCollider} demonstrated that composite features that combine non-collider variables can ``feature engineer'' a pseudo-collider; how likely this is to occur in practice for particular supervised learning algorithms is an interesting open question. \subsubsection*{Conditional unconfoundedness versus mean conditional unconfoundedness.} In a discussion of \cite{angrist1996identification}, Heckman \citep{heckman1996identification} makes a point similar to the one we make in section \ref{meanCDAG}, that conditional unconfoundedness is stronger than necessary for estimating certain treatment effects. Angrist rejoins that identification based on ``functional form'' is undesirable. Here, we have taken the perspective of Heckman, as mean conditional unconfoundedness is the key notion for defining the principal deconfounding function, so it is perhaps worthwhile to unpack why. Our interest was in understanding the conditions according to which a particular set of control variables would yield a valid stratification estimator. From this perspective, a more {\em specific} assumption is {\em weaker} than a more general one: Conditional unconfoundedness implies mean conditional unconfoundedness, but not the other way around. It is the specificity of the {\em estimand} that permits the weaker (less restrictive) assumption on the DGP. As we explored in section \ref{meanCDAG}, mean conditional unconfoundedness does not permit estimation of quantile treatment effects.
In order for mean conditional unconfoundedness to license estimation of quantile treatment effects, one would need to impose additional restrictions on the DGP, such as a fixed distributional shape around the unconfounded mean. But that is not our suggestion (nor do we believe it was Heckman's). Interestingly, this distinction between conditional unconfoundedness and mean conditional unconfoundedness is at the heart of the difference between general causal diagrams and more traditional path analysis. By focusing on correlations, the path diagram must only respect the mean causal relationships. Sometimes this is described by saying that path analysis ``has a structural model, but no measurement model'' (Wikipedia). \\ Additionally, a number of elementary, but easily-overlooked, facts were clarified: regression, propensity score weighting (and, {\em a fortiori}, doubly robust estimators based thereon) are identical in the case of discrete covariates (cf. Lemma \ref{ipw_strat}); CDAGs are non-unique (cf. Section \ref{transformedCDAG}); and instrumental and prognostic variable designations are inherently contingent (cf. Section \ref{noncausalSEM}). \subsection{Methodological ecumenicalism}\label{ecumenicalism} In section \ref{equivalence}, it was shown that the potential outcomes, CDAG, and exogenous errors definitions of conditional unconfoundedness are substantively equivalent. This result allows us to conveniently move between the conventions of these alternative frameworks, which implicitly emphasize distinct aspects of the problem they all address --- estimating treatment effects from data. For example, the causal graph approach reminds us that sets of valid control variables are not unique and, consequently, we must not speak of {\em the} propensity score, but rather {\em a} propensity score and, perhaps, many candidate propensity scores (cf. section \ref{propensity}).
This observation is fundamental to understanding how regularization will impact bias due to feature selection on graphs including colliders and instruments. The potential outcomes approach reminds us that the exogenous errors need not be common among the treatment arms (cf. figure \ref{graph1}). More generally, because the potential outcome notation is intrinsically individualized, it emphasizes the idea that some individuals in a population may have distinct causal diagrams; in particular, some arrows may not appear in every individual's graph. This is not at odds with the graphical formalism; rather it emerges simply because the graph alone does not fully determine the data generating process. In this paper, this distinction is not particularly important, but in estimation techniques relying on instrumental variables, it becomes critical \citep{angrist1996identification}. From the exogenous errors approach, we are reminded that full conditional unconfoundedness is not actually necessary to estimate particular causal effects (cf. section \ref{meanCDAG}); we leverage this result in defining the principal deconfounding function. Synthesizing the three methods also clarifies common misunderstandings that can occur when operating solely within a single framework; for example, a mean regression model with exogenous additive errors need not be structural (e.g., causal) in all of its arguments --- rather, the exogeneity of the errors narrowly licenses a causal interpretation in the treatment variable (cf. section \ref{noncausalSEM}). \subsection{On discrete covariates with finite support} The approach in this paper has been to consider stratification estimators in the case of discrete control variables with finite support. Discrete covariates are both common in practice (indeed, more common than continuous covariates) and pedagogically illuminating, and therefore worthy of careful study.
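Indeed, with discrete covariates the complete stratification estimator is only a few lines of code. The following is a minimal sketch (hypothetical function name; numpy assumed; the covariates are taken to be already coded as a single discrete stratum label):

```python
import numpy as np

def stratified_ate(x, z, y):
    """Complete stratification: weight within-stratum contrasts by stratum frequency.

    Assumes every stratum contains at least one treated and one control unit,
    the stylized condition discussed in the text.
    """
    x = np.asarray(x)
    z = np.asarray(z)
    y = np.asarray(y)
    est = 0.0
    for s in np.unique(x):
        m = x == s
        est += m.mean() * (y[m & (z == 1)].mean() - y[m & (z == 0)].mean())
    return est

# tiny check on a confounded DGP with a constant treatment effect of 1
rng = np.random.default_rng(2)
x = rng.binomial(1, 0.5, 200_000)
z = rng.binomial(1, 0.25 + 0.5 * x)
y = 2.0 * x + z + rng.normal(size=x.size)
print(stratified_ate(x, z, y))  # ≈ 1, whereas the naive contrast is badly biased here
```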
We are aware that not everyone agrees; we read in the textbook of Imbens and Rubin (Section 12.2.2): \small \begin{quote} If...we view the covariates as having a discrete distribution with finite support, the implication of unconfoundedness is simply that one should stratify by the values of the covariates. In that case there will be, with high probability, in sufficiently large samples, both treated and control units with the exact same values of the covariates. In this way we can immediately remove all biases arising from differences between covariates, and many adjustment methods will give similar, or even identical, answers. \\ However, as we stated before, this case rarely occurs in practice. In many applications it is not feasible to stratify fully on all covariates, because too many strata would have only a single unit. \\ The differences between various adjustment methods arise precisely in such settings where it is not feasible to stratify on all values of the covariates, and mathematically these differences are most easily analyzed in settings with random samples from large populations using effectively continuous distributions for the covariates...[Therefore] for the purpose of discussing various frequentist approaches to estimation and inference under unconfoundedness...it is helpful to view the covariates as having been randomly drawn from an approximately continuous distribution. \end{quote} \normalsize To paraphrase, the two main premises of this quote are: a) confounding --- and, more specifically, {\em de}confounding --- is relatively easy to understand in the case of discrete covariates with finite support, and b) complete stratification is infeasible in many applications. We agree with these statements. But the conclusion --- that the stylized setting of continuous covariates is therefore better suited to studying statistical methods for causal inference --- does not necessarily follow. 
Indeed, we employ a different stylized mathematical assumption --- that each stratum has at least one treated-control contrast --- and find that, even in that case, bias-variance trade-offs emerge. More importantly, these trade-offs can be studied directly, without resorting to asymptotic arguments, which may be untrustworthy guides to a method's operating characteristics in practice. For example, \cite{hahn2004functional} concludes that foreknowledge of which variables are instruments is asymptotically irrelevant for regression estimators of average treatment effects. As we have seen in Section \ref{examples} of this paper, being able to distinguish instruments from confounders is certainly relevant for finite-sample performance. \subsection{Relationship to semi-supervised learning} This paper considers the problem of feature selection for causal effect estimation when a propensity function is available, but a causal diagram is not. This assumption is of course implausible in many practical scenarios, although there are cases where it may be approximately true. For example, suppose that a researcher has a dataset with $n$ complete observations of $(X, Z, Y)$ and $m$ ``partially observed'' samples, where $m \gg n$. Partial samples of $(X,Z)$ pairs could be used to more accurately estimate $\pi(X)$, bringing their applied problem closer to the setting studied above. Similarly, partial samples on $(X, Z = 0, Y)$ could be used to better estimate $\mu(X)$, which is particularly useful in the situation described in Section \ref{split_sample}. Such scenarios may be plausible in electronic health records, for instance, in which a treatment (say, a new blood pressure medicine) is rarely administered but an outcome (say, blood pressure) is very commonly measured. The idea of using large auxiliary datasets is common in machine learning, where it is known as semi-supervised learning \citep{zhu2009introduction, belkin2006manifold, liang2007use}.
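The scheme just described can be sketched in a few lines (a hypothetical illustration of our own; numpy assumed): the propensity function is fit stratum-by-stratum on a large $(X, Z)$-only sample, and the resulting scores are plugged into an IPW contrast computed on the much smaller complete sample:

```python
import numpy as np

rng = np.random.default_rng(3)

def draw(n):
    x = rng.binomial(1, 0.5, n)
    z = rng.binomial(1, 0.2 + 0.5 * x)    # true propensities: 0.2 and 0.7
    y = x + 2.0 * z + rng.normal(size=n)  # true ATE = 2
    return x, z, y

# m >> n: propensities from the big (X, Z)-only sample, effect from the small one
x_big, z_big, _ = draw(200_000)
x_small, z_small, y_small = draw(2_000)

pi_hat = np.array([z_big[x_big == s].mean() for s in (0, 1)])  # per-stratum fit
pi = pi_hat[x_small]
ate_hat = np.mean(z_small * y_small / pi) - np.mean((1 - z_small) * y_small / (1 - pi))
print(ate_hat)  # ≈ 2, up to Monte Carlo noise from the small complete sample
```

The estimate remains noisy in proportion to the small complete sample, but the propensity scores themselves are essentially exact, which is what moves the applied problem closer to the known-propensity setting studied above.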
Using unlabeled data to estimate a propensity function in conjunction with machine learning or other regularization methods represents an exciting application of semi-supervised learning to the problem of causal effect estimation. While it is often easier to formalize and motivate the use of auxiliary data for prediction, rather than estimation, this paper shows that there is a role for function estimation techniques in machine-learning approaches to causal inference. \newpage \section*{Acknowledgements} This work was partially supported by NSF Grant DMS-1502640. \bibliographystyle{imsart-nameyear}
The Darn Tough ¼ Tactical Sock is a uniform regulation PT sock. No logo, ¼ cuff. Light minimalist feel, and great for hot weather. Thin jersey knit throughout with mesh on top of foot for added breathability. Elastic support at arch. Reinforced heel and toe for increased strength and durability. True Seamless™ for a virtually indistinguishable toe seam.
{ "redpajama_set_name": "RedPajamaC4" }
1,347
{"url":"https:\/\/intelligencemission.com\/free-energy-generator-tesla-free-electricity-from-high-voltage-lines.html","text":"The only reason i am looking into this is because Free Power battery company here told me to only build Free Power 48v system because the Free Electricity & 24v systems generate to much heat and power loss. Can i wire Free Power, 12v pma\u2019s or Free Electricity, 24v pma\u2019s together in sieres to add up to 48v? If so i do not know how to do it and will that take care of the heat problem? I am about to just forget it and just build Free Power 12v system. Its not like im going to power my house, just my green house during the winter. Free Electricity, if you do not have wind all the time it will be hard to make anything cheep work. Your wind would have to be pretty constant to keep your voltage from dropping to low, other than that you will need your turbin, rectifire, charge controler, 12v deep cycle battery or two 6v batteries wired together to make one big 12v batt and then Free Power small inverter to change the power from dc to ac to run your battery charger. Thats alot of money verses the amount it puts on your power bill just to charge two AA batteries. Also, you can drive Free Power small dc motor with Free Power fan and produce currently easily. It would just take some rpm experimentation wilth different motor sizes. Kids toys and old VHS video recorders have heaps of dc motors.\n##### Or, you could say, \u201cThat\u2019s Free Power positive Delta G. \u201cThat\u2019s not going to be spontaneous. \u201d The Free Power free energy of the system is Free Power state function because it is defined in terms of thermodynamic properties that are state functions. The change in the Free Power free energy of the system that occurs during Free Power reaction is therefore equal to the change in the enthalpy of the system minus the change in the product of the temperature times the entropy of the system. 
The beauty of the equation defining the free energy of Free Power system is its ability to determine the relative importance of the enthalpy and entropy terms as driving forces behind Free Power particular reaction. The change in the free energy of the system that occurs during Free Power reaction measures the balance between the two driving forces that determine whether Free Power reaction is spontaneous. As we have seen, the enthalpy and entropy terms have different sign conventions. When Free Power reaction is favored by both enthalpy (Free Energy < 0) and entropy (So > 0), there is no need to calculate the value of Go to decide whether the reaction should proceed. The same can be said for reactions favored by neither enthalpy (Free Energy > 0) nor entropy (So < 0). Free energy calculations become important for reactions favored by only one of these factors. Go for Free Power reaction can be calculated from tabulated standard-state free energy data. Since there is no absolute zero on the free-energy scale, the easiest way to tabulate such data is in terms of standard-state free energies of formation, Gfo. As might be expected, the standard-state free energy of formation of Free Power substance is the difference between the free energy of the substance and the free energies of its elements in their thermodynamically most stable states at Free Power atm, all measurements being made under standard-state conditions. The sign of Go tells us the direction in which the reaction has to shift to come to equilibrium. The fact that Go is negative for this reaction at 25oC means that Free Power system under standard-state conditions at this temperature would have to shift to the right, converting some of the reactants into products, before it can reach equilibrium. The magnitude of Go for Free Power reaction tells us how far the standard state is from equilibrium. 
The larger the value of Go, the further the reaction has to go to get to from the standard-state conditions to equilibrium. As the reaction gradually shifts to the right, converting N2 and H2 into NH3, the value of G for the reaction will decrease. If we could find some way to harness the tendency of this reaction to come to equilibrium, we could get the reaction to do work. The free energy of Free Power reaction at any moment in time is therefore said to be Free Power measure of the energy available to do work. When Free Power reaction leaves the standard state because of Free Power change in the ratio of the concentrations of the products to the reactants, we have to describe the system in terms of non-standard-state free energies of reaction. The difference between Go and G for Free Power reaction is important. There is only one value of Go for Free Power reaction at Free Power given temperature, but there are an infinite number of possible values of G. Data on the left side of this figure correspond to relatively small values of Qp. They therefore describe systems in which there is far more reactant than product. The sign of G for these systems is negative and the magnitude of G is large. The system is therefore relatively far from equilibrium and the reaction must shift to the right to reach equilibrium. Data on the far right side of this figure describe systems in which there is more product than reactant. The sign of G is now positive and the magnitude of G is moderately large. The sign of G tells us that the reaction would have to shift to the left to reach equilibrium.\n\nThanks Free Electricity, you told me some things i needed to know and it just confirmed my thinking on the way we are building these motors. My motor runs but not the way it needs to to be of any real use. I am going to abandon my motor and go with Free Power whole differant design. 
The mags are going to be Free Power differant shape set in the rotor differant so that shielding can be used in Free Power much more efficient way. Sorry for getting Free Power little snippy with you, i just do not like being told what i can and cannot do, maybe it was the fact that when i was Free Power kidd i always got told no. It\u2019s something i still have Free Power problem with even at my age. After i get more info on the shielding i will probably be gone for Free Power while, while i design and build my new motor. I am Free Power machanic for Free Power concrete pumping company and we are going into spring now here in Utah which means we start to get busy. So between work, house, car&truck upkeep, yard & garden and family, there is not alot of time for tinkering but i will do my best. Free Power, please get back to us on the shielding. Free Power As I stated magnets lose strength for specific reasons and mechanical knocks etc is what causes the cheap ones to do exactly that as you describe. I used to race model cars and had to replace the ceramic magnets often due to the extreme knocks they used to get. My previous post about magnets losing their power was specifically about neodymium types \u2013 these have Free Power very low rate of \u201caging\u201d and as my research revealed they are stated as losing Free Power strength in the first Free energy years. But extreme mishandling will shorten their life \u2013 normal use won\u2019t. Fridge magnets and the like have very weak abilities to hold there magnetic properties \u2013 I certainly agree. But don\u2019t believe these magnets are releasing energy that could be harnessed.\nThe inventor of the Perendev magnetic motor (Free Electricity Free Electricity) is now in jail for defrauding investors out of more than Free Power million dollars because he never delivered on his promised motors. 
Of course he will come up with some excuse, or his supporters will that they could have delivered if they hade more time \u2013 or the old classsic \u2013 the plans were lost in Free Power Free Electricity or stolen. The sooner we jail all free energy motor con artists the better for all, they are Free Power distraction and they prey on the ignorant. To create Free Power water molecule X energy was released. Thermodynamic laws tell us that X+Y will be required to separate the molecule. Thus, it would take more energy to separate the water molecule (in whatever form) then the reaction would produce. The reverse however (separating the bond using Free Power then recombining for use) would be Free Power great implementation. But that is the bases on the hydrogen fuel cell. Someone already has that one. Instead of killing our selves with the magnetic \u201ctheory\u201d\u2026has anyone though about water-fueled engines?.. much more simple and doable \u2026an internal combustion engine fueled with water.. well, not precisely water in liquid state\u2026hydrogen and oxygen mixed\u2026in liquid water those elements are chained with energy \u2026energy that we didn\u2019t spend any effort to \u201ccreate\u201d.. (nature did the job for us).. and its contained in the molecular union.. so the prob is to decompose the liquid water into those elements using small amounts of energy (i think radio waves could do the job), and burn those elements in Free Power effective engine\u2026can this be done or what?\u2026any guru can help?\u2026 Magnets are not the source of the energy.\nThat is what I envision. Then you have the vehicle I will build. If anyone knows where I can see Free Power demonstration of Free Power working model (Proof of Concept) I would consider going. Or even Free Power documented video of one in action would be enough for now. Burp-Professor Free Power Gaseous and Prof. Swut Raho-have collaberated to build Free Power vehicle that runs on an engine roadway\u2026. 
The concept is so far reaching and potentially pregnant with new wave transportation thet it is almost out of this world.. Like running diesels on raked up leave dust and flour, this inertial energy design cannot fall into the hands of corporate criminals\u2026. Therefore nothing will be illustrated or further mentioned\u2026Suffice to say, your magnetic engines will go on Free Electricity or blow up, hydrogen engines are out of the question- some halfwit will light up while refueling\u2026. America does not deserve the edge anymore, so look to Europe, particuliarly the scots to move transportation into the Free Electricity century\u2026\nAir Free Energy biotechnology takes advantage of these two metabolic functions, depending on the microbial biodegradability of various organic substrates. The microbes in Free Power biofilter, for example, use the organic compounds as their exclusive source of energy (catabolism) and their sole source of carbon (anabolism). These life processes degrade the pollutants (Figure Free Power. Free energy). Microbes, e. g. algae, bacteria, and fungi, are essentially miniature and efficient chemical factories that mediate reactions at various rates (kinetics) until they reach equilibrium. These \u201csimple\u201d organisms (and the cells within complex organisms alike) need to transfer energy from one site to another to power their machinery needed to stay alive and reproduce. Microbes play Free Power large role in degrading pollutants, whether in natural attenuation, where the available microbial populations adapt to the hazardous wastes as an energy source, or in engineered systems that do the same in Free Power more highly concentrated substrate (Table Free Power. Free Electricity). Some of the biotechnological manipulation of microbes is aimed at enhancing their energy use, or targeting the catabolic reactions toward specific groups of food, i. e. organic compounds. 
Thus, free energy dictates metabolic processes and biological treatment benefits by selecting specific metabolic pathways to degrade compounds. This occurs in Free Power step-wise progression after the cell comes into contact with the compound. The initial compound, i. e. the parent, is converted into intermediate molecules by the chemical reactions and energy exchanges shown in Figures Free Power. Free Power and Free Power. Free Power. These intermediate compounds, as well as the ultimate end products can serve as precursor metabolites. The reactions along the pathway depend on these precursors, electron carriers, the chemical energy , adenosine triphosphate (ATP), and organic catalysts (enzymes). The reactant and product concentrations and environmental conditions, especially pH of the substrate, affect the observed \u0394G\u2217 values. If Free Power reaction\u2019s \u0394G\u2217 is Free Power negative value, the free energy is released and the reaction will occur spontaneously, and the reaction is exergonic. If Free Power reaction\u2019s \u0394G\u2217 is positive, the reaction will not occur spontaneously. However, the reverse reaction will take place, and the reaction is endergonic. Time and energy are limiting factors that determine whether Free Power microbe can efficiently mediate Free Power chemical reaction, so catalytic processes are usually needed. Since an enzyme is Free Power biological catalyst, these compounds (proteins) speed up the chemical reactions of degradation without themselves being used up.\nFree Energy Wedger, Free Power retired police detective with over Free energy years of service in the investigation of child abuse was Free Power witness to the ITNJ and explains who is involved in these rings, and how it operates continually without being taken down. 
It\u2019s because, almost every time, the \u2018higher ups\u2019 are involved and completely shut down any type of significant inquiry.\nI feel this is often, not always, Free Power reflection of the barriers we want to put up around ourselves so we don\u2019t have to deal with much of the pain we have within ourselves. When we were children we were taught \u201csticks and stones may break my bones, but names can never hurt me. \u201d The reason we are told that is simply because while we all do want to live in Free Power world where everyone is nice to one another, people may sometimes say mean things. The piece we miss today is, how we react to what people say isn\u2019t Free Power reflection of what they said, it\u2019s Free Power reflection of how we feel within ourselves.\nNernst\u2019s law is overridden by Heisenberg\u2019s law, where negative and positive vis states contribute to the ground state\u2019s fine structure Darwinian term, and Noether\u2019s third law, where trajectories and orientations equipart in all dimensions thus cannot vanish. Hi Paulin. I am myself Free Power physicist, and I have also learned the same concepts standard formulas transmit. However, Free Electricity points are relevant. Free Power. The equations on physics and the concepts one can extract from them are aimed to describe how the universe works and are dependent on empirical evidence, not the other way around. Thinking that equations and the concepts behind dogmatically rule empirical phenomena is falling into pre-illustrative times. Free Electricity. Particle and quantum physics have actually gotten results that break classical thermodynamics law of conservation of energy. The Hesienberg\u2019s uncertainty principle applied to time-energy conjugations is one example. And the negative energy that outcomes from Dirac\u2019s formula is another example. 
Bottom line… I think it is important to be as undogmatic as possible and to follow the steps the founders of science laid down for how science should develop itself. I have made a magnetic motor.

Building these things is easy when you find the parts to work with. That's the hard part! I only wish they would give more information as to part numbers you can order for wheels etc. instead of scrounging around on the internet. Wire is no issue because you can find it all over the internet. I really have no idea if the "magic motor," as you call it, is possible or not. Yet, I do know of one device that moves using magnetic properties with no external power source, tap tap tap: a compass. Now, if the properties that allow a compass to always point north can be manipulated in a circular motion, wouldn't a compass move around and around forever with no external power source? My point here is that with new technology and the possibility of new discovery anything can be possible. I mean, hasn't it already been proven that different places on this planet have very different concentrations of magnetic energy? Magnetic streams, or very highly concentrated areas of magnetic power, if you will. Where is their external power source? Tap tap tap. My 2 cents. Harvey1: Thanks for caring enough to respond! Let me address each of your points: 1. A compass that can be manipulated in a circular motion to move around and around forever with no external power source would constitute a "Magical Magnetic Motor." Show me a working model that anyone can operate without the inventor around and I'll stop tap-tap-tapping. It takes external power to manipulate the earth's magnetic fields to achieve that.
Although the earth's magnetic field varies in strength around the planet, it does not rotate to any useful degree over a short enough time span to be useful.

Kirchhoff's law is overridden by Pauli's law, where in general there must be gaps in heat-transfer spectra and broken symmetry between the absorption and emission spectra within the same medium and between disparate media, and Malus's law, where anisotropic media like polarizers selectively interact with radiation.

The figure shows some types of organic compounds that may be anaerobically degraded. Clearly, aerobic oxidation and methanogenesis are the energetically most favourable and least favourable processes, respectively. Quantitatively, however, the above picture is only approximate because, for example, the actual ATP yield of nitrate respiration is only a fraction of that of O2 respiration, rather than nearly equal as implied by free energy yields. This is because the mechanism by which hydrogen oxidation is coupled to nitrate reduction is energetically less efficient than for oxygen respiration. In general, the efficiency of energy conservation is not high. For the aerobic degradation of glucose (C6H12O6 + 6O2 → 6CO2 + 6H2O), ΔG°′ = −2877 kJ mol−1. The process is known to yield 38 mol of ATP. The hydrolysis of ATP has a free energy change of about −31 kJ mol−1, so the efficiency of energy conservation is only 38 × 31/2877, or about 40%. The remaining 60% is lost as metabolic heat. Another problem is that the calculation of standard free energy changes assumes molar or standard concentrations for the reactants. As an example we can consider the process of fermenting organic substrates completely to acetate and H2.
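The efficiency arithmetic in the passage above can be sketched in a few lines; the −2877 kJ, −31 kJ, and 38 ATP figures are the round textbook values quoted in the text, not measurements of a particular organism:

```python
# Energy-conservation efficiency of aerobic glucose degradation,
# using the round figures quoted in the text.
dG_glucose = -2877.0  # kJ/mol for C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
dG_atp = -31.0        # kJ/mol released by ATP hydrolysis (~ +31 to form ATP)
atp_per_glucose = 38  # mol ATP conserved per mol glucose

captured = atp_per_glucose * abs(dG_atp)   # kJ captured in ATP per mol glucose
efficiency = captured / abs(dG_glucose)    # fraction of free energy conserved
print(f"{efficiency:.0%}")                 # roughly 40%; the rest is heat
```

The same arithmetic applied to other electron acceptors would show why the observed ATP yields rank the processes as the passage describes.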
As discussed earlier, this requires the reoxidation of NADH (produced during glycolysis) by H2 production. From the standard potentials we have E°′ = −0.32 V for NAD+/NADH and E°′ = −0.42 V for H2O/H2. Assuming pH2 = 1 atm, we find that ΔG°′ is about +18 kJ, which shows that the reaction is impossible. However, if we assume instead that pH2 is 10−4 atm (Q = 10−4), we find that ΔG is about −5 kJ. Thus at a low ambient pH2 the reaction becomes feasible. Reactions with a positive ΔG (ΔG > 0), on the other hand, require an input of energy and are called endergonic reactions. In this case, the products, or final state, have more free energy than the reactants, or initial state. Endergonic reactions are non-spontaneous, meaning that energy must be added before they can proceed. You can think of endergonic reactions as storing some of the added energy in the higher-energy products they form. It's important to realize that the word spontaneous has a very specific meaning here: it means a reaction will take place without added energy, but it doesn't say anything about how quickly the reaction will happen. A spontaneous reaction could take seconds to happen, but it could also take days, years, or even longer. The rate of a reaction depends on the path it takes between starting and final states, while spontaneity is only dependent on the starting and final states themselves. We'll explore reaction rates further when we look at activation energy.
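The hydrogen-partial-pressure correction used above is just the mass-action adjustment ΔG = ΔG°′ + RT ln Q. A small numeric sketch, taking a round +18 kJ standard value for NADH + H+ → NAD+ + H2 (an assumed figure consistent with the passage) and Q equal to the H2 partial pressure:

```python
import math

R = 8.314e-3   # kJ/(mol*K)
T = 298.15     # K
dG0 = 18.0     # kJ/mol, assumed standard value for NADH + H+ -> NAD+ + H2

def dG(p_H2):
    """Actual free energy change when Q reduces to the H2 partial pressure."""
    return dG0 + R * T * math.log(p_H2)

print(round(dG(1.0), 1))    # +18.0 kJ: endergonic at 1 atm H2
print(round(dG(1e-4), 1))   # about -4.8 kJ: feasible at low ambient pH2
```

This is exactly why hydrogen-scavenging partners (e.g. methanogens) make such fermentations possible in nature: they hold pH2 low enough that Q drives ΔG negative.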
This is an endergonic reaction, with ΔG = +7.3 kcal/mol under standard conditions (meaning 1 M concentrations of all reactants and products, 1 atm pressure, 25 °C, and pH of 7.0). In the cells of your body, the energy needed to make ATP is provided by the breakdown of fuel molecules, such as glucose, or by other reactions that are energy-releasing (exergonic). You may have noticed that in the above section, I was careful to mention that the ΔG values were calculated for a particular set of conditions known as standard conditions. The standard free energy change (ΔG°′) of a chemical reaction is the amount of energy released in the conversion of reactants to products under standard conditions. For biochemical reactions, standard conditions are generally defined as 25 °C (298 K), 1 M concentrations of all reactants and products, 1 atm pressure, and pH of 7.0 (the prime mark in ΔG°′ indicates that pH is included in the definition). The conditions inside a cell or organism can be very different from these standard conditions, so ΔG values for biological reactions in vivo may differ widely from their standard free energy change (ΔG°′) values. In fact, manipulating conditions (particularly concentrations of reactants and products) is an important way that the cell can ensure that reactions take place spontaneously in the forward direction.

Any ideas on my magnet problem? If I can't find the size I'm after, then if I can find them, 2x1x1/2 N48 magnets magnetized through the 1/2″ would work and would be stronger. I have looked at magnet stores and eBay but so far nothing.
I have two questions that I think I already know the answers to, but I want to make sure. If I put two magnets on top of each other, will it make a larger, stronger magnet or will it stay the same? I'm guessing the same. If I use a strong magnet against a weaker one, will it work or will the stronger one overtake the smaller one? I'm guessing it will overtake it. Hi, those smart drives you say are 240V; that would be fine if they are wired the same as what we have coming into our homes. Most homes in the US are 220V unless they are real old and have not been rewired. My home is old, but I have rewired it, so I have 220V now: two 110V lines, one common, one ground.

If there is such a force that is yet undiscovered and can power an output shaft, and it operates in a closed system, then we can throw out the laws of conservation of energy. I won't hold my breath. That pendulum may well swing for a long time, but perpetual motion, no. The movement of the earth causes it to swing. Just as the earth acts upon the pendulum, so the pendulum will in fact be causing the earth's wobble to reduce, due to the effect of gravity upon each other. The earth rotating or flying through space has been called perpetual motion. Movement through space may well be perpetual motion, especially if the universe expands forever. But no laws are being bent or broken. Context is what it is all about. Again, I think the problem you are having is semantics. "Perpetual: continuing or enduring forever; everlasting." The modern terms being used now are "self-sustaining" or "sustainable." Even if Mr. Yildiz is completely right, eventually the unit would have to be reconditioned.
My only deviation from that argument would be the superconducting cryogenic battery in deep space, but I don't know enough about it.

Is it because you've encountered someone smart enough to call you on your rhetoric and misinformation? Were you just having fun here feeling superior? Did I spoil your fun by calling your bluff? How does it feel to know that your little game is exposed by a far superior intellect who has real knowledge of electronics and physics, instead of some sheeple with a primary-school level of ignorance on the subject? Your debating tactics won't work on me – been there, got an A, done that. Harvey1: Get two books! "Energy from the Vacuum: Concepts and Principles" by Bearden and "Free Energy Generation: Circuits and Schematics" by Bedini and Bearden. Build a window motor, which will give you over-unity, and it can be built to 8 kW, which has been done so far! NOTHING IS IMPOSSIBLE! Bearden has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels, union thugs, and the US government! @Aman: In case you have not noticed, there is no such thing as a gravity- or magnet-powered engine. You said "Science is constant, cannot be modified according to our wish," and then you change the laws of physics by your ridiculous statements about gravity and magnetic engines. You should at least learn a few laws of physics before saying we do not need to modify them. "Free electron energy, now breaking on the market" = "The check is in the mail." The reason I cannot enlighten you is that no amount of evidence can convince you. You cannot even acknowledge that there is no "Magical Magnetic Motor" in existence that anyone can operate without the inventor around. This is called "confirmation bias," and we all can fall into this trap.
Look it up on Wiki. Please do not be rude – this is not personal. I have said so on many occasions. Your only basis in saying what you did about me is that it disagrees with your hope that the magical magnetic motor (a magnetic motor that runs on permanent magnets with no outside power source) is real. Don't get me wrong, I really appreciate your having the courage to answer my post. I will directly address your point of "why should anyone send you a working motor." The answer is in my post, where I said I would pay generously for one. I really mean that. Secondly, you said "…you are probably quite able to build one yourself." The whole point of my post is that the magical magnetic motor is only a delusion and does not exist. No one can build a delusion. How could you miss that? It's so obvious in the post. We are all capable of self-delusion to the point that we cannot see the obvious.

I looked at what you have for your motor so far and it's going to be big. Here is my e-mail if you want to send those diagrams, if you know how to do it: [email protected]. My name is MacInnes, from Orangeville, ON. In regards to perpetual-motion energy, it already has been "proven" that the 2nd law of thermodynamics, written in 1670, is in fact incorrect, as inertia and friction (the two constants affecting surplus energy) are no longer unchangeable, rendering the 2nd law obsolete. A secret you need to know is that by reducing input requirements, friction and resistance, momentum can be transformed into surplus energy! Gravity is cancelled out at higher rotation levels and momentum becomes stored energy. The reduction of input requirements is the secret not revealed here but soon to be presented to the world as a free electron generator… electrons are the most plentiful source of energy, as they are in all matter.
Magnetism and electricity are one and the same, and it took years of research to reach a working design… Canada will lead the world in this new advent of re-engineering engineering methodology. I really can't see how 12V would make more heat than 24V, 36V or whatever, BUT from memory (I haven't done a Fisher & Paykel smart-drive conversion for about 12 months) I think smart-drive PMAs are 3-phase and each circuit can be wired for 12V. Therefore you could have all in parallel for 12V, 2 in series and then 1 in parallel to those 2 for 24V, or 3 in series for 36V. That's on the one single PMA. Ya, that was me, but it wasn't so much the cheap part as it was trying to find a good plan for 48V, and I haven't found anything yet. I e-mailed WindBlue about it and they said it would be very hard to achieve with theirs.

# We need to stop listening to articles that say what we can't have. Life is too powerful and abundant and running without our help. We have the resources and creative thinking to match life with our thoughts. A lot of articles and videos across the Internet sicken me and mislead people. The inventors need to stand out more in the corners of the earth. The intelligent thinking is here and freely given power is here. We are just connecting the dots.
One trick to making a magnetic motor work is combining the magnetic force you get when like poles are in close proximity to each other with the pull of simple gravity. Heavy magnets rotating around a coil of metal, with properly placed magnets above them to provide push; gravity then provides the pull and the excess energy needed to make it function. The design would be close to existing magnet-motor designs, but the mechanics must be much lighter in weight so that the weight of the magnets actually has use. A lot of people could do well to ignore all the rules of physics sometimes. Rules are there to be broken, and all the rules have done is stunt technology advances. Education keeps people dumbed down in an era where energy is big money and anything seen as free is a threat. Open your eyes to the real possibilities. Tesla was a genius in his day, and nearly 100 years later we are going backwards. One thing is for sure: magnets are fantastic objects. It's not free energy, as eventually even the best will demagnetise, but it's close enough for me.

I have the blueprints. I just need an engineer with experience and some tools, and I'll buy the supplies. [email protected] I honestly do believe that magnetic motor generators do exist. Physics may explain many things, but there are some things that defy those laws and we do not understand them either. Tesla was a genius and inspired; he did not get the credit he deserved. Many of his inventions are at work today: induction coils, AC. And Edison was an idiot for not working with him; all he did was invent a light bulb.
There are many things out there that we have not discovered yet, nor understand yet. It is possible to conduct the impossible by way of using two wheels rotating in different directions, with the aid of a spring rocker-arm interlocking gear matching rocker push-and-pull force against the wheels, with the rocker arms set at the 12, 3, 6, and 9 o'clock positions for the same timing. No further information allowed at this point. It will cause a hell of a lot more lost jobs if it's brought out, so it's best leaving it shelved until the right time. When two discs are facing each other (both on the same shaft), one stationary and the other able to rotate, both embedded with permanent magnets, and the rotational disc starts to rotate as the two discs are moved closer together (and a magnetic field is present), will an almost perpetual rotation be created, or (1) will the magnets lose their magnetism over time, (2) will they get in a position where they lock, (3) will too much heat be generated between the two discs, (4) will the friction cause loss of rotation, or (5) will it keep on accelerating and rip apart? We can have powerful magnets producing energy easily.
I think you\u2019re being argumentative. The filing of Free Power patent application is Free Power clerical task, and the USPTO won\u2019t refuse filings for perpetual motion machines; the application will be filed and then most probably rejected by the patent examiner, after he has done Free Power formal examination. Model or no model the outcome is the same. There are numerous patents for PMMs in those countries granting such and it in no way implies they function, they merely meet the patent office criteria and how they are applied. If the debate goes down this path as to whether Free Power patent office employee is somehow the arbiter of what does or doesn\u2019t work when the thousands of scientists who have confirmed findings to the contrary then this discussion is headed no where. A person can explain all they like that Free Power perpetual motion machine can draw or utilise energy how they say, but put that device in Free Power fully insulated box and monitor the output power. Those stubborn old fashioned laws of physics suggest the inside of the box will get colder till absolute zero is reached or till the hidden battery\/capacitor runs flat. energy out of nothing is easy to disprove \u2013 but do people put it to such tests? Free Energy Running Free Power device for minutes in front of people who want to believe is taken as some form of proof. It\u2019s no wonder people believe in miracles. Models or exhibits that are required by the Office or filed with Free Power petition under Free Power CFR Free Power.\nThe complex that results, i. e. the enzyme\u2013substrate complex, yields Free Power product and Free Power free enzyme. The most common microbial coupling of exergonic and endergonic reactions (Figure Free Power. Free Electricity) by means of high-energy molecules to yield Free Power net negative free energy is that of the nucleotide, ATP with \u0394G\u2217 = \u2212Free Electricity to \u2212Free Electricity kcal mol\u2212Free Power. 
A number of other high-energy compounds also provide energy for reactions, including guanosine triphosphate (GTP), uridine triphosphate (UTP), cytosine triphosphate (CTP), and phosphoenolpyruvic acid (PEP). These molecules store their energy using high-energy bonds in the phosphate group (Pi). An example of free energy in microbial degradation is the possible first step in acetate metabolism by bacteria.

Here vx is the monomer excluded volume and μ is a Lagrange multiplier associated with the constraint that the total number of monomers is equal to N. The first term in the integral is the excluded-volume contribution within the second virial approximation; the second term represents the end-to-end elastic free energy, which involves the free-end density ρe(z) rather than ρm(z). It is then assumed that ρe(z) = ρm(z)/N; this is reasonable if z is close to the as yet unknown height of the brush. The equilibrium monomer profile is obtained by minimising f[ρm] with respect to ρm(z), which leads immediately to the parabolic profile. One of the systems studied was a polystyrene-block-poly(ethylene/propylene) copolymer in decane. Electron microscopy studies showed that the micelles formed by the block copolymer were spherical in shape and had a narrow size distribution. Since decane is a selectively bad solvent for polystyrene, the latter component formed the cores of the micelles. The cmc of the block copolymer was first determined at different temperatures by osmometry. The corresponding figure shows a plot of π/cRT against c (where c is the concentration of the solution) at a fixed temperature.
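The "parabolic profile" referred to above lost its equation in extraction. In the standard strong-stretching (Milner–Witten–Cates) treatment with a second-virial free energy, the result takes the form below; the notation (h for the brush height, σ for the grafting density) is a reconstruction, not necessarily the chapter's own:

```latex
% Parabolic monomer profile of a grafted polymer brush (strong-stretching limit)
\rho_m(z) = \rho_m(0)\left(1 - \frac{z^2}{h^2}\right), \qquad 0 \le z \le h,
```

with the height h fixed by monomer conservation, \(\int_0^h \rho_m(z)\,dz = N\sigma\). The profile vanishes smoothly at z = h, in contrast to the step profile of the simpler Alexander–de Gennes picture.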
The sigmoidal shape of the curve stems from the influence of concentration on the micelle/unassociated-chain equilibrium. When the concentration of the solution is very low, most of the chains are unassociated; extrapolation of the curve to infinite dilution gives Mn−1 of the unassociated chains.

Reality is never going to be accepted by that section of the community. Thanks for writing all about the phase-conjugation stuff. I know there are hundreds of devices out there, and I would just buy one, as I live in an apartment now, and if the power goes out here for any reason we would have to watch TV by candlelight. lol. I was going to buy a small generator from the store, but I can't even run it outside on the balcony. So I was going to order a magnetic motor, but nobody sells them; you can only buy plans and build it yourself. And I figured, because it don't work, and I remembered that I designed something like that in the 1950s, that I never built, and as I can see nobody designed or built one like that, I don't know how it will work, but it has a much better chance of working than everything I see out there, so I'm planning to build one when I move out of the city. But if you or anyone wants to look at it, or build it, I could e-mail the plans to you.

The internet is the only reason large corps can't buy up everything they can get their hands on to stop what's happening today. @E. Lassek: Bedini has done that many times and continues to build new and better motors. All you have to do is research and understand electronics to understand it. There is a lot of fraud out there, but you can get through it by research. With years in electronics I can see through the BS and see what really works. Build the SG and see for yourself the working model. An audio transformer with a 1:1 ratio has enough windings.
A transistor, diode and resistor are all the electronics you need. A wheel with magnets attached, from 2″ across to however big you want, is the last piece. What? Just a few pieces all together? Bedini built one with a large wheel and magnets for a convention, with hands-on from the audience and total explanations to the scientific community. That is not fraud. Harvey1: And why should anyone send you a working motor when you are probably quite able to build one yourself? Or maybe not? Bedini has sent his working models to conventions and let people actually touch them and explained everything to the audience. You obviously haven't done enough research or understood electronics enough to realize these models actually work. The SG motor generator is easily duplicated. You can find 1:1 audio transformers that work quite well for the motor if you look for them and are fortunate enough to find one, along with a transistor, diode and resistor and a wheel with magnets on it. There is a lot of fraud, but you can actually build the simplest motor with a 2″ coil of magnet wire with the ends sticking out and one side of the ends bared to the copper, a couple of paperclips to hold it up, a battery attached to the paperclips, and a magnet under it.

Clausius's law is overridden by Guth's law, like 0 J,kg = +n J,kg + −n J,kg, the same cause as the big bang/Hubble flow/inflation and NASA BPP's diametric drive. There mass and vis are created and destroyed at the same time. The Einstein field equation dictates that a near-flat universe has similar amounts of positive and negative matter; therefore a set of conjugate masses accelerates indefinitely in runaway motion and scales celerity arbitrarily.
Boltzmann's law is overridden by Poincaré's law, where the microstates at finite temperature are finite so must recur in finite time, or exhibit ergodicity; therefore the finite information and transitions impose a non-Maxwellian population always in nonequilibrium, as in condensed matter's geometric frustration ("spin ice"), topological conduction ("persistent current" and graphene superconductivity), and in Graeff's first gravity machine ("Loschmidt's paradox" and Loschmidt's refutation of Maxwell's equilibrium in the lapse rate).
Get free electricity here.","date":"2020-12-05 09:01:29","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.48692017793655396, \"perplexity\": 1650.3904928886361}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-50\/segments\/1606141747323.98\/warc\/CC-MAIN-20201205074417-20201205104417-00715.warc.gz\"}"}
## Parallel lattice Boltzmann simulation of complex flows

After a short introduction to the basic ideas of lattice Boltzmann methods and a brief description of a modern parallel computer, it is shown how lattice Boltzmann schemes are successfully applied for simulating fluid flow in microstructures and calculating material properties of porous media. It is explained how lattice Boltzmann schemes compute the gradient of the velocity field without numerical differentiation. This feature is then utilised for the simulation of pseudo-plastic fluids, and numerical results are presented for a simple benchmark problem as well as for the simulation of liquid composite moulding.
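The gradient-free velocity-derivative property mentioned in the abstract comes from the non-equilibrium part of the particle distributions: in a BGK lattice Boltzmann scheme the strain-rate tensor is algebraically recoverable at each node as S = −Π^neq/(2ρc_s²τ), with no finite differences. A minimal single-node D2Q9 sketch (the relaxation time τ and the test velocity are illustrative assumptions, not values from the thesis):

```python
# D2Q9 lattice velocities and weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9]*4 + [1/36]*4
CS2 = 1.0 / 3.0   # lattice speed of sound squared
TAU = 0.8         # assumed BGK relaxation time (lattice units)

def f_eq(rho, u):
    """Second-order equilibrium distributions at one node."""
    ux, uy = u
    uu = ux*ux + uy*uy
    out = []
    for (cx, cy), wi in zip(C, W):
        cu = cx*ux + cy*uy
        out.append(wi * rho * (1 + cu/CS2 + cu*cu/(2*CS2*CS2) - uu/(2*CS2)))
    return out

def strain_rate(f, rho, u):
    """Strain-rate tensor from the non-equilibrium second moment:
    S = -Pi_neq / (2 rho cs^2 tau); no spatial derivatives needed."""
    feq = f_eq(rho, u)
    S = [[0.0, 0.0], [0.0, 0.0]]
    for fi, feqi, (cx, cy) in zip(f, feq, C):
        fneq = fi - feqi
        S[0][0] += fneq * cx * cx
        S[0][1] += fneq * cx * cy
        S[1][0] += fneq * cy * cx
        S[1][1] += fneq * cy * cy
    k = -1.0 / (2.0 * rho * CS2 * TAU)
    return [[k * s for s in row] for row in S]

# Sanity check: at equilibrium the non-equilibrium part vanishes,
# so the locally computed strain rate is zero.
u = (0.05, 0.02)
print(strain_rate(f_eq(1.0, u), 1.0, u))
```

In a pseudo-plastic (shear-thinning) simulation of the kind the abstract describes, this locally available S is what lets the viscosity, and hence τ, be updated per node without differentiating the velocity field.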
Q: Identify the required (missing) certificate that's causing a Java app to fail I'm using IntelliJ IDEA 15, a Java-based IDE. It allows me to click on a link in any open-source Java class in my project and download the source and documentation from the internet. However, this feature is failing right now because we have to go through a proxy server, which does certificate substitution. Although the Windows system I'm on knows about the local certificate, the Java VM I'm using doesn't know about it, so the download process fails with this error: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target The solution would seem to be to export the proper certificate from my Windows box's certificate store and add it to the JVM trust store in /jre/lib/security/cacerts, except that I have no idea which one of the dozens of certificates in the Windows store is missing. Can anyone suggest a debugging method to identify which certificate is missing? In my own code I would set a breakpoint using IntelliJ and look at the values being passed, but since the problem is inside IntelliJ I really don't know how to get to these values. Any help is appreciated; if this needs to be moved to a different StackExchange community I understand. A: Proxy servers are always fun! Okay. The trick when it comes to tracking down cert issues is that the root cert is the most important one. The root certificate of the chain of trust is a CA, and if your system trusts the CA, it also trusts anything it signs. Conversely, if the CA is not trusted, anything it signs is invalid. Your Windows is configured to trust the CA, probably courtesy of the local IT department. Java, however, is not. Therefore: You need to get the CA key that is generating certs on your proxy server, and insert that into your Java keystore.
You can probably find this by opening up any secure site on any web browser, opening the page properties, and taking a look at the actual certificate. On Firefox, that looks something like this: You'd click on the top level one, here called "COMODO ECC Certification Authority", export it, and then use the java keytool to install it as a CA. Note: On your setup, the name will certainly be different. Chances are, it won't be the name of a company known for certificates like Comodo or Verisign, it'll be the name of an equipment vendor like Barracuda or Bluecoat Further details on the workings of keytool on Windows are available here
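To make the import step concrete, here is a small Python helper that only assembles the keytool command line (it does not run it). The Java home path, certificate file name, and alias below are hypothetical placeholders, and "changeit" is merely the stock default password for a JVM's cacerts file, so substitute your own values:

```python
def keytool_import_cmd(java_home, cert_file, alias, storepass="changeit"):
    """Build the keytool argument list that imports an exported proxy CA
    certificate into the JVM trust store (jre/lib/security/cacerts)."""
    cacerts = java_home + "/jre/lib/security/cacerts"
    return [
        java_home + "/bin/keytool",
        "-importcert",      # add a certificate to the keystore
        "-trustcacerts",    # treat the input as a CA certificate
        "-noprompt",        # skip the interactive confirmation
        "-alias", alias,    # any unused alias name works
        "-file", cert_file,
        "-keystore", cacerts,
        "-storepass", storepass,
    ]

# Hypothetical example values -- adjust for your machine:
cmd = keytool_import_cmd("/opt/java", "proxy-ca.cer", "corp-proxy-ca")
print(" ".join(cmd))
```

Running the resulting command (for example via subprocess.check_call(cmd)) needs write access to the cacerts file, so on Windows it usually has to be done from an elevated prompt.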
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,165
The Los Altos Hills Digital History Archives Introduction (To skip this Introduction and go directly to the Index of Books click ♦here♦ ) The "Scrapbooks" Over the years, the Town of Los Altos Hills has accumulated around 30 scrapbooks filled with newspaper clippings and sundry other documents dating back to immediately prior to the incorporation of the town. The scrapbooks are unfortunately not complete over time, but are the result of sporadic efforts by different people, with some periods more richly represented than others. The physical scrapbooks are not in good shape and are deteriorating rapidly. The newsprint (that represents the bulk of the material) has yellowed and become brittle over time, and the print quality has faded somewhat. The liberal use of "Scotch Tape" to mount the clippings on the pages of the scrapbooks has made matters worse - the sticky backing on the tape has darkened the paper where it was applied, and the tape itself has also deteriorated and become brittle. Besides the ravages of time, the scrapbooks themselves present a number of problems for digital archiving. On some pages, the clippings have been overlapped to save space on the page (taped in such a way that a reader of the original book could lift one clipping on a tape hinge to reveal the one below) or large clippings were just folded back on themselves to fit within the body of the closed book. With some inventiveness and care these problems have been overcome to a large extent. More problematic, however, is that a number of books are of the type where the pages of the book are held together with two screws so that the original creator of the book could take an arbitrary number of loose pages and bolt them together into an album - rendering the inch or so closest to the spine of the book inaccessible to the subsequent reader, even though the clippings were originally taped close to the edge of the page. 
These are not now accessible - short of dismantling the whole book and processing the pages individually, separated from the rest of the body of the book. We may opt to do this at some future point - but for right now the priority is to capture the pages digitally with minimum disturbance, so that at least the bulk is preserved without undue risk of harm or loss of the original materials. Most of the time, only one edge of the clipping cannot be read, but the bulk of the text can be seen - usually enough that the reader can get the gist of the article. Subsequent "Books" and "Collections" Once we had created digital versions of the first few "scrapbooks", it became apparent that there were a number of other sets of materials which also needed to be preserved and made accessible. These ranged from early Photo Albums created by residents, to various booklets created to recognize special events, to binders and files of historically significant documents, and even some 35 mm slides captured by a member of Town Staff in the course of his duties. While not really "scrapbooks" in the strictest sense of the term, these other "Books" and "Collections" have been created along similar lines and added in such a way as to fit into the overall structure of the Digital Archives. How to use the system Each scrapbook exists as a separate set of linked web-pages, and a link to each of these discrete scrapbooks is provided below. Clicking on one of the links will take you to one scrapbook. Each scrapbook then consists of an "index" page containing a series of links to each "page" in that book - each link on this index page having some associated text reflecting the subject matter of the clippings on that page. 
Frequently this is just words taken from the headlines of each clipping, sometimes a paraphrase of the subject matter and/or some keywords (such as street names or names of people) to make it easy for the reader to decide whether they want to follow that particular link to access the image of the actual page. Library Structure Selecting a page to view Once you have chosen a book and clicked on its link, you will arrive on the index page associated with that book. Here you will see a column of links on the left-hand edge of the page, each followed by a block of descriptive text designed to give you a feel for the subject matter covered on the page referenced by that link. By reading through these text blocks you can choose which actual "book page" you want to look at and click on its associated link to arrive at that page. Alternatively, you can do a rudimentary form of keyword search over all the text blocks on this index page by using the built-in "Find" capability of your browser. By typing Ctrl-F you will cause a small search box to appear on your screen which enables you to search for a given word or character-string within the page that is currently displayed on your screen. This "Find" capability can only be applied on the index page itself - it will not work on the text within the clippings on the pages of the scrapbook itself as the latter "text" is in fact only a photographic image of text, and not actual text that has been typed into a computer and therefore searchable by the built-in capabilities of your browser. Zooming into a page to read the small text When an image from the scrapbook first appears on the screen, it will be adjusted in size so that it fits completely on your screen. This means that you can probably read the headlines over the individual articles and clippings, but the body of the text will be too small to read. 
Depending on which browser you are using, you will now be able to click on the image or use the "wheel" on your mouse to expand the image to where the text can be read comfortably - the whole page will no longer fit on the screen and you will have to scroll up and down or sideways to navigate over the page. Links to the individual Scrapbooks and Collections Book 1 1/1955 - 6/1957 This book includes the year 1956, the year marked by the birth pangs leading to incorporation of the town. Major topics are the fight for incorporation and the role of Neary Quarry opponents. Book 2 1961 - 1969 The book spans eight years, 1961 to 1969. Major topics include Sewer District Issues, failed attempt to recall Mary Davey, Byrne Preserve acquisition, pivotal role played by Central Drive, soil instability in the Hills, and acquisition of Little League Fields. This book covers a turbulent period when our zoning came under attack from La Raza Unida, the Citizens United for a Rural Environment (C.U.R.E.) organization became active, Council Member Mary Davey faced another recall election and City Manager Fritschle was fired. Book 4 5/1969 - 12/1970 The biggest issues in this book include Adobe Creek Lodge with Dave Bellucci and La Raza Unida, the Central Drive wars, Mary Davey controversies, and consternation over potential development of Coyote Hill by Palo Alto with associated impact on Arastradero Road. Covering the town's second year of existence, major topics include ongoing battle with Neary Quarry about heavy trucks, a movement to disincorporate the town, formation of Los Altos Hills Association, argument about and selection of/between three major North/South traffic routes through local cities (eventually to become Foothill Boulevard and Highway 280), Duveneck property (Hidden Villa) is greenbelted, LAH Fire District formed by separating from County Fire District, annexation disputes with both Palo Alto and Los Altos. This book covers the 15th anniversary of the town. 
Ongoing subjects include lawsuits about sewer assessments and the Adobe Creek Lodge permit. Zoning is under attack by a lawsuit from La Raza Unida, and the town is going to go forward with the Matadero Creek subdivision. After much debate, LAH is in favor of the formation of the Mid-Peninsula Open Space District. Councilmembers Davey and Kubby targets for recall, the saga of El Retiro and Adobe Creek Lodge annexation and use permits continues, Uncashed checks at Town Hall lead to major problem, Elections - interviews with all the candidates, CURE sponsored candidates triumph in election, Town Hall to be remodelled, Palo Alto School District sells 10 acres in LAH, Formation of Mid-Pen Regional Space District ongoing topic. Book 8 12/1971 - 2/1988 This book is fairly short and appears to be a work-in-progress that was never "finished" because it contains a number of loose clippings that were never mounted on pages; also, the clippings cover a wide span of nearly two decades. Rather than a particular time-span, it looks as if the book was intended to feature well-known or notable local personalities, primarily the Duvenecks of Hidden Villa although articles on some other notables are included. Collection - Town Newsletters 1967 - 2003 A collection of Town Newsletters - collection still in process of being built, as more old newsletters are acquired, re-formatted digitally, and keyword indexed. Folder - Pathways 1957 - 2002 A folder containing a number of loose newspaper clippings mainly focused on the pathways and trails of Los Altos Hills. Rex Gardiner Collection 1955 A collection of original documents prepared and assembled by Rex Gardiner in 1955, in preparation for, and support of, the idea of incorporating a town to be called Los Altos Hills. Photo Album - History of Los Altos Hills 1900 - 1975 Images of pages copied from a photo album. 
Subsequent research has identified the original album from which these pages were copied; it is part of the Florence Fava Collection, now archived at the Los Altos History Museum. There are references within the text to "current residence of" and similar - from the context of other captions this would appear to refer to 1973 or thereabouts - one caption mentions the Bicentennial celebrations of 1975 which is the latest date referred to in the album thus giving some idea about when it was compiled, although clearly a lot of the photos are much older. Photo Album - Penfold Collection of Slides 1970's Some images scanned from a collection of 35mm slides taken by Gordon Penfold who was the Town's Public Safety Officer for about seven years during the 1970's. Images include Byrne Preserve covered in a layer of snow, a few aerial images of spots within the town, the Fire Stations serving Los Altos Hills and surrounding environs, and further miscellaneous shots. Photo Album - Bicentennial Commemorative Postcards 1976 On the occasion of the United States Bicentennial, a set of eight postcards showing historic scenes from Los Altos Hills was created. This set was reproduced and made available for sale to the public. Also included in this album are images of a commemorative tie-tack and lapel button created for the same occasion. Early History of Police Protection in Los Altos Hills 1956 - 1972 This volume is in a different format - it consists of a total of about 100 images of newspaper clippings (spread over three pages) about the way Police Protection was handled in the early years after incorporation of the town. These clippings were collected from a number of different scrapbooks and assembled here as a subject-specific collection. As such, the images presented here are redundant in that they also exist (but scattered widely) in other parts of the digital library. 
15th Anniversary Booklet 1971 "Formation of the Town of Los Altos Hills" - A booklet created by Florence Fava on the occasion of the Town's 15th Anniversary. A copy of the original typescript pages was captured photographically and preserved in a long-forgotten PDF file. Unfortunately, the combination of the font used by the original typewriter, and the loss of sharpness caused by the "duplicating" rendered the original PDF hard to read on a computer screen. So the original text was re-typed directly into a computer to create an approximate facsimile of the text; this was combined with the photographs and illustrations which were lifted from the original pdf files to create a new more readable edition of the original typescript. Commemorative Plaques Various This is an inventory of the various commemorative plaques located in town or that could once be seen in Los Altos Hills Historical Sites Booklet Unknown This is a xeroxed copy of a document which appears to be a draft of a booklet intended to describe the designated "Historical Sites" within Los Altos Hills Ginzton Archives 1948 - 2000 A collection of newspaper clippings covering Ed and Artemas Ginzton (residents of Los Altos Hills) as well as two commemorative booklets summarizing the history of Varian Associates at different points in its growth. Ed was a founding employee of Varian Associates and rose to become its CEO. Artemas was a motive force in the establishment of pathways through the Town as well as at the County and State level. Fremont Hills Country Club - In the beginning 1956-1957 Soon after the Town was founded, the Planning Commission realised that active encouragement by the private sector would be needed to provide the new town with recreational amenities - so they made known to subdividers of property that land should be set aside for this purpose. 
Accordingly, the five partners in the organization that was developing the Fremont Hills property formed a non-profit and became its five-member Board of Directors. From this grew the Fremont Hills Country Club. Some years later (date not determined) a 12-page book was published for the club's membership in which a few of the early documents that surrounded the formation of the club were reproduced. That document was scanned and is shown here. Early Aerial Images from before the town existed. Approx 1947 Six aerial images are presented which show the Los Altos Hills landscape before the town's incorporation. Each image is annotated with a few colored dots to orient the viewer to the geographic area shown in each image. Los Altos Hills Aerial Survey Approx 2001 The exact date is unknown, but believed to be around 2001, an aerial survey was performed covering the Town of Los Altos Hills. The original black-and-white images were in a computer file format that could not be read by widely available consumer software at that time, so they were converted to "JPEG" format and made available on a CD together with a web-page formatted as an index into the collection of images, thus making the imagery accessible to people who had a home computer equipped with an HTML "browser". There were two such browsers in common use at that time, one called "Netscape" and another called "Explorer". The contents of this CD are reproduced here. The Evolution of Los Altos Hills Town Hall 1956 - 2020 A Picture Album of photographs (and a few newspaper clippings) spread across three web "pages" - the Original Town Hall (1956-2004), The "New" Town Hall (Built 2005), and The Heritage Park (The Orchard area with the Heritage House and collection of Old Farm Equipment) Grab Bag Various A number of miscellaneous images and text files with historic value that are worth archiving, but for which we do not have a physical "hard copy" counterpart. 
They have been collected here in this "Grab Bag" to assure there is a copy in the archives - rather than lose track of them. Book n.....n+1 Further books, folders, collections, etc. will appear here as digital versions get created. This "Digital History Archives" project is an ongoing effort performed by volunteers. If you think you would like to find out more and maybe get involved yourself, please contact us through the History Committee. (Details about the History Committee may be found on the Town's Website). Potential volunteers who might want to help with this project, but who want to first find out more about what is involved and the "how" of creating a digital scrapbook, may click ♦here♦
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,708
require 'spec_helper' RSpec.describe MixedGauge::Routing do let(:config) { MixedGauge::ClusterConfig.new(:test) } before do config.define_slot_size(1024) config.register(0..511, :connection_x) config.register(512..1023, :connection_y) end let(:routing) { described_class.new(config) } it 'routes to a connection name' do expect(routing.route('xxx')).to eq(:connection_y) end end
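The spec above exercises slot-range routing: a key hashes into a fixed slot space (1024 here) and the slot is looked up in the registered ranges. A minimal Python sketch of the same idea follows; the CRC32 slot function is an assumption made for illustration, not necessarily the hash mixed_gauge actually uses:

```python
import zlib

class SlotRouting:
    """Toy slot-based router: hash a key into [0, slot_size) and return
    the connection name whose registered range contains that slot."""

    def __init__(self, slot_size):
        self.slot_size = slot_size
        self.ranges = []  # list of (range, connection_name) pairs

    def register(self, slot_range, name):
        self.ranges.append((slot_range, name))

    def route(self, key):
        # CRC32 is an assumed stand-in hash; any stable hash would do.
        slot = zlib.crc32(key.encode("utf-8")) % self.slot_size
        for slot_range, name in self.ranges:
            if slot in slot_range:
                return name
        raise KeyError("no range covers slot %d" % slot)

routing = SlotRouting(1024)
routing.register(range(0, 512), "connection_x")     # Ruby's 0..511
routing.register(range(512, 1024), "connection_y")  # Ruby's 512..1023
print(routing.route("xxx"))  # deterministic for a given key
```

Because the slot space is fixed, re-registering the same ranges against different connection names (as a resharding step would) never changes which slot a key hashes to, only which connection the slot maps to.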
{ "redpajama_set_name": "RedPajamaGithub" }
8,840
{"url":"https:\/\/math.stackexchange.com\/questions\/3328171\/how-to-compute-partial-frac1z","text":"# How to compute $\\partial \\frac{1}{z^*}$?\n\nI have trouble understanding some basic concepts in Complex Analysis:\n\nFor $$z=x+\\mathrm{i}y$$, we define: $$\\partial \\equiv \\frac{\\partial}{\\partial z}=\\frac{1}{2}\\left(\\frac{\\partial}{\\partial x}-i \\frac{\\partial}{\\partial y}\\right)$$\n\nThe following is stated as obvious: $$\\partial \\frac{1}{z^{*}}=\\pi \\delta(x) \\delta(y)$$\n\nIn order to prove this equality I was told to integrate the left and right hand sides over a small square centered at the origin. However I do not recover the desired result.\n\nMy main obstacles is to understand why: $$\\int_{-\\epsilon}^\\epsilon\\int_{-\\epsilon}^\\epsilon\\pi \\delta(x) \\delta(y)\\mathrm{d}x\\mathrm{d}y\\neq\\pi\\text{ ?}$$ $$\\left(\\frac{\\partial}{\\partial x}-i \\frac{\\partial}{\\partial y}\\right)\\frac{1}{x-\\mathrm{i}y}\\neq-\\frac{1}{(x-\\mathrm{i}y)^2}+\\frac{1}{(x-\\mathrm{i}y)^2}=0 \\text{ ?}$$\n\nEdit: I suspect it has something to do with the fact that $$\\frac{1}{z^*}$$ does not have a series expansion....\n\nThe key here is to understand that $$\\partial_z\\frac{1}{z^*}=0$$ everywhere but at the origin. This is how a delta-singularity can arise in this context. Once one is aware of this, integrating this function on a square shouldn't be too difficult, and can be done without use of multivariable tools like Stokes' theorem. 
To wit, using the fundamental theorem of calculus and Fubini's theorem we obtain:\n\\int_{[-a,a]\\times[-a,a]}\\partial_z\\frac{1}{z^*}dxdy=\\frac{1}{2}\\int_{-a}^{a}\\int_{-a}^{a}(\\partial_x-i\\partial_y)\\frac{1}{x-iy}dxdy\\\\\\begin{align}&=\\frac{1}{2}\\int_{-a}^ady\\frac{1}{x-iy}\\Bigg|_{(-a,y)}^{(a,y)}-\\frac{i}{2}\\int_{-a}^adx\\frac{1}{x-iy}\\Bigg|_{(x,-a)}^{(x,a)}\\\\&=\\frac{1}{2}\\int_{-a}^a{dy}\\frac{2a}{a^2+y^2}-\\frac{i}{2}\\int_{-a}^{a}dx\\frac{2ia}{a^2+x^2}\\\\&=2a\\int_{-a}^a\\frac{dy}{a^2+y^2}\\\\&=\\pi\\end{align}","date":"2021-10-25 09:46:48","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 8, \"wp-katex-eq\": 0, \"align\": 1, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9586098194122314, \"perplexity\": 80.74750586986795}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323587659.72\/warc\/CC-MAIN-20211025092203-20211025122203-00219.warc.gz\"}"}
null
null
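The derivation inside the answer above reduces the square integral to 2a * integral_{-a}^{a} dy / (a^2 + y^2), which should come out to pi for every a > 0. That is easy to sanity-check numerically; the sketch below uses a plain trapezoid rule with an arbitrarily chosen grid size:

```python
def boundary_integral(a, n=100_000):
    """Trapezoid-rule approximation of 2a * integral_{-a}^{a} dy / (a^2 + y^2).
    Per the derivation above, this equals pi regardless of a."""
    h = 2 * a / n  # uniform step over [-a, a]
    total = 0.0
    for i in range(n + 1):
        y = -a + i * h
        f = 2 * a / (a * a + y * y)
        # endpoints carry half weight in the trapezoid rule
        total += (0.5 if i in (0, n) else 1.0) * f * h
    return total

for a in (0.5, 1.0, 3.0):
    print(a, round(boundary_integral(a), 6))  # each value rounds to 3.141593
```

The independence from a is exactly what one expects of a delta function at the origin: shrinking or growing the square never changes the enclosed "mass" pi.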
#ifndef OPTIONS_H_ #define OPTIONS_H_ /* * $Id: propdialog.h 2839 2009-12-17 11:57:55Z haraldkipp $ */ #include <wx/wxprec.h> #ifdef __BORLANDC__ #pragma hdrstop #endif #ifndef WX_PRECOMP #include <wx/wx.h> #endif class COptions { public: COptions(); void Save(); wxString m_scanip; wxString m_scanport; wxString m_scantime; }; extern COptions *g_options; class COptionsDialog : public wxDialog { public: COptionsDialog(wxWindow *parent); virtual ~COptionsDialog(); protected: wxTextCtrl *m_scanip_entry; wxTextCtrl *m_scanport_entry; wxTextCtrl *m_scantime_entry; }; #endif
{ "redpajama_set_name": "RedPajamaGithub" }
500
Night in the Woods Coming to Nintendo Switch in February January 17, 2018 by JoeTheBard Night in the Woods, an award-winning story-heavy adventure game, is making its way to the Nintendo Switch at the beginning of February. Switch owners will be getting the Weird Autumn edition, which includes some extra content absent from the launch version of Night in the Woods. The Nintendo Switch is continuing its mission to become the console haven of indie games. And, with Sony's recent track record, it's all but achieved that goal. The newest indie port for the Switch will be the narrative-focused adventure game Night in the Woods. The port has been announced on the game's Twitter account. The release date, according to the Night in the Woods Twitter, is February 1st. The price will be $19.99, which is confirmed on the Nintendo America website. The website lists the release date as January 18th, which is likely some kind of error. Night in the Woods is coming to the Switch as the Weird Autumn edition. It's kinda like a director's cut of the game. It contains a bunch of extra content that didn't make the first cut of the game, including new mini-games and dream sequences. Also, you get the chance to play through the mini episodes that preceded the game's launch. So, you're basically getting the "full" version of the game right out of the gate. If you aren't familiar with the award-winning Night in the Woods, let's get you up to speed real quick. The game centers on Mae, a college dropout who goes back home to the town of Possum Springs. The town isn't doing too well since the mine closed, and people are having trouble keeping afloat. At its core, Night in the Woods is a touching story about becoming an adult and the stress of finding your place in the world. But then, also, something is lurking in the woods of Possum Springs. 
Written by: Stefan Djakovic aka JoeTheBard A language teacher and video game enthusiast turned rogue, Joe is on a quest to become the ultimate gaming journalist. This is somewhat hampered by his belief that the golden age of gaming ended with the PlayStation One, but he doesn't let that stop him. His favorite games include Soul Reaver and Undertale. Other interests are D'n'D, dad rock, complaining about movies, and being the self-appointed office funny man, which nobody else agrees with.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
367
System.register(['aurelia-framework'], function (_export, _context) { "use strict"; var bindable, _dec, _class, ProjectCard; function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } } return { setters: [function (_aureliaFramework) { bindable = _aureliaFramework.bindable; }], execute: function () { _export('ProjectCard', ProjectCard = (_dec = bindable('project'), _dec(_class = function ProjectCard() { _classCallCheck(this, ProjectCard); }) || _class)); _export('ProjectCard', ProjectCard); } }; }); //# sourceMappingURL=data:application/json;charset=utf8;base64,eyJ2ZXJzaW9uIjozLCJzb3VyY2VzIjpbImNvbXBvbmVudHMvcHJvamVjdC1jYXJkL3Byb2plY3QtY2FyZC5qcyJdLCJuYW1lcyI6WyJiaW5kYWJsZSIsIlByb2plY3RDYXJkIl0sIm1hcHBpbmdzIjoiOzs7Ozs7Ozs7Ozs7O0FBR1FBLGMscUJBQUFBLFE7Ozs2QkFHS0MsVyxXQURaRCxTQUFTLFNBQVQsQyIsImZpbGUiOiJjb21wb25lbnRzL3Byb2plY3QtY2FyZC9wcm9qZWN0LWNhcmQuanMiLCJzb3VyY2VzQ29udGVudCI6WyIvKipcclxuICogQ3JlYXRlZCBieSBiZW4gb24gMTEvNi8xNi5cclxuICovXHJcbmltcG9ydCB7YmluZGFibGV9IGZyb20gJ2F1cmVsaWEtZnJhbWV3b3JrJztcclxuXHJcbkBiaW5kYWJsZSgncHJvamVjdCcpXHJcbmV4cG9ydCBjbGFzcyBQcm9qZWN0Q2FyZCB7XHJcblxyXG59Il19
{ "redpajama_set_name": "RedPajamaGithub" }
1,254
Q: system vs call vs popen in Python cmd = 'touch -d '+date_in+' '+images_dir+'/'+photo_name os.system(cmd) Doesn't work subprocess.call(['touch','-d','{}'.format(date_in),'{}'.format(images_dir+'/'+photo_name)]) Doesn't work subprocess.Popen(['touch','-d','{}'.format(date_in),'{}'.format(images_dir+'/'+photo_name)]) Works! Why? What am I missing in first two cases? pi@raspberrypi:~ $ python --version Python 2.7.13 Actual code snippet: try: response = urllib2.urlopen(url) if(response.getcode() == 200): photo_file = response.read() with open(images_dir+'/'+photo_name,'wb') as output: output.write(photo_file) #cmd = 'touch -d '+date_in+' '+images_dir+'/'+photo_name #subprocess.Popen(['touch','-d','{}'.format(date_in),'{}'.format(images_dir+'/'+photo_name)]) subprocess.check_call(['touch','-d','{}'.format(date_in),'{}'.format(images_dir+'/'+photo_name)]) with open(images_dir+'/captions/'+photo_name+'.txt','wb') as output: output.write(photo_title) else: print 'Download error' except Exception as message: print 'URL open exception {}'.format(message) A: I wouldn't use touch at all for this; use os.utime instead. import os try: response = urllib2.urlopen(url) except Exception as message: print 'URL open exception {}'.format(message) else: if response.getcode() == 200: photo_file = response.read() f = os.path.join(images_dir, photo_name) with open(f,'wb') as output: output.write(photo_file) os.utime(f, (date_in, date_in)) f = os.path.join(images_dir, 'captions', photo_name + '.txt') with open(f, 'wb') as output: output.write(photo_title) else: print 'Download error' Note that the date/time arguments to os.utime must be integer UNIX timestamps; you'll need to convert your date_in value from whatever it currently is first. 
A: Now it's clear: with open(images_dir+'/'+photo_name,'wb') as output: output.write(photo_file) #cmd = 'touch -d '+date_in+' '+images_dir+'/'+photo_name #subprocess.Popen(['touch','-d','{}'.format(date_in),'{}'.format(images_dir+'/'+photo_name)]) subprocess.check_call(['touch','-d','{}'.format(date_in),'{}'.format(images_dir+'/'+photo_name)]) My assumption is: you're still in the with block, so if check_call or system is performed, it ends, then the file is closed, setting the date again and ruining touch efforts. With Popen, the process is performed in the background, so when it's executed, the file is already closed (well, actually it's a race condition, it's not guaranteed) I suggest: with open(images_dir+'/'+photo_name,'wb') as output: output.write(photo_file) subprocess.check_call(['touch','-d','{}'.format(date_in),'{}'.format(images_dir+'/'+photo_name)]) so the file is properly closed when you call check_call Better written: fullpath = os.path.join(images_dir,photo_name) with open(fullpath ,'wb') as output: output.write(photo_file) # we're outside the with block, note the de-indentation subprocess.check_call(['touch','-d','{}'.format(date_in),fullpath])
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,696
Interview: Ricky Skaggs and Sharon White Discuss Duets Album, 'Hearts Like Ours' Grammy-winning musicians Ricky Skaggs and Sharon White have been married for 33 years, so it's only natural that they would collaborate on an album of duets, right? Actually, not so fast. Though they scored a major hit together in 1987 with 'Love Can't Ever Get Better Than This' -- which earned them a CMA Award for Vocal Duo of the Year -- the couple were signed to competing labels at the time and were unable to work out the contractual complications to go forward with a whole album. The idea lingered on their to-do list through subsequent decades as they became involved in a number of other projects, but on Tuesday (Sept. 30), fans will finally get to hear the results when Skaggs and White release 'Hearts Like Ours' via Skaggs Family Records. The album is available here. The Boot caught up with Skaggs and White recently to discuss the new album and more in the following interview. This album has obviously been a really long time coming. What made this finally the right time? Ricky: Well, 33 years of knowing someone like I know Sharon, and her knowing me like she does, that's worth something when you're working on a project together. You know, part of it was the busyness of life. We had a single out in the '80s called 'Love Can't Ever Get Better Than This.' That was '87, and it was CMA Vocal Duo of the Year, and we thought then, 'What a great time to capitalize on this. What a great time, we've got some momentum, let's go in and do a record.' But you know, that was talk that we thought would be good. We loved each other, we really wanted to do music together, but maybe it might have been misguided, that kind of focus, you know. 
I just know that the record label that I was on did not want to share me as an artist with Warner/Curb, the label that Sharon was on with the Whites, and vice versa. They didn't want to share Sharon with CBS. So it was kind of a heartbreaking thing, you know; we felt disappointed about it. But I just feel that God, in His infinite wisdom, knew this, and knew what was coming, and knew what it would take for us to be able to do a CD called 'Hearts Like Ours.' Obviously your faith informs everything you do. Ricky: God's always concerned about the hearts of people, and this is not a Christian record, but it's two Christians that's on this record [laughs], and sing on this record, and that's the difference. We're not Christian artists, but both of us, the Whites and me, we've done gospel music for years. We do it in our shows, but we're marketplace country music artists, we're not Christian artists. But to go around a long way and give you an answer, God's timing is perfect, and we knew in our hearts that there would be a time that we would actually get to do this. I'm not sure Sharon ... [laughs.] Sharon: Well, I have to say, I'm not sure that I knew it. I had almost given up hope. But we were booked to do an event a couple of years ago, and it was not a concert. It was an encouragement event for couples, and our part was gonna be singin' some songs and sharing the story of our life, and Ricky said, 'Let's go in and cut tracks on these songs.' There were five songs that we sang, and we've been singin' 'em ... in fact, the first song that we sang together, ever, as a duet was 'If I Needed You,' and we actually sang that together in our wedding. So that's how far back it goes. We had these five songs, and we went in and cut 'em, and were pretty far into the process when the event canceled. And I said, 'Great, now what do we do?' [Laughs.] He said, 'Let's finish this. Let's make a CD. We've always wanted to.' So honestly, it didn't feel like it was an intentional ... 
Ricky said the Lord just kicked us up off the couch. [Laughs.] That's the way it felt, like we were set up. We thought we were gonna do one thing, and we did something else. But it's been the right time, and we've enjoyed the entire process so much. And I can say now, I had kind of a hurt in my heart 'cause we didn't get to do it back years ago, or it's always felt like it was pushed to the back burner. It was never something that was important enough to do [until] now. And I realized through working on this that not only is the Lord's timing right, but I think it would not mean to me what it means to me now. For one thing, my perspective about it, and my perspective about our life together and our music, has matured to a place that I know this is more of an outspringing of who we are, rather than it being this musical project that we just want the world to hear. It's not that so much as it is an expression of our hearts, and who we are, and who we are together. So I believe it's something that's going to bless people and bless the world, but if it doesn't, that's okay, too, 'cause it sure did bless us! [Laughs.] When you've had a project in mind for so long, there could be an infinite number of songs that you have in your mind for it. How did you narrow it down to these? Sharon: Part of it, as I said, I had some ideas in my mind about things we could sing together, and of course we had the five that we did. But I had kind of given up hope, and I did bring back some of the ideas that I'd been carrying for so long, and honestly, we didn't end up doing any of those. We contacted some writers that we really like their work, and just started listening. We had a pretty good, lengthy list of songs, and some of the songs that we really liked, we either felt like we didn't sing them together very well, or for whatever reason they weren't pitched right for us, or something like that. 
But these that we chose, we felt like they were strong, the kind of songs that we wanted to sing, the kind of songs that meant something to us, expressed something that we would say. I've always liked a song that has a real strong message, and I think, as Ricky said, it's not a Christian gospel record, but it is a record that I think has strong inspirational messages in it, and it expresses our faith in the Lord and our relationship together. That's what we were looking for, and just good songs that we could really make ... you know that one that's on there, 'No Doubt About It,' that's an old bluegrass song that Flatt & Scruggs did, and I heard it on the radio one day. It's been re-recorded so many times by so many people, and I heard it, and it was a couple of men singing it, and I thought, 'You know what, that is a duet! That is a male/female duet, right there.' So I told Ricky, and he said, 'Sharon, that's a great idea!' And we had more fun with that one. Kind of our little happy song on there. You co-produced this album. That can mean a lot of different things, since 'producer' is such a nebulous term. What role did each of you play in the production process? Ricky: Well, I've produced my records for so many years, and even working with the Whites when I produced them, they really got involved with finding the songs. I don't think I was responsible, that wasn't my job necessarily in that process. What I tried to do with them in the studio was make sure it was recorded well, and make sure I was there for them in their singin' parts and stuff like that. They were really responsible for finding the songs, and I knew Sharon ... I have always known that Sharon had a great sense of a song. She knows a good song, she knows what she wants to sing. Now, granted, this record stretched her quite a bit.
There's two or three songs on this record that I feel like she really wanted in her heart to sing, but it was one of those things where she didn't know if she could really pull it off. But just the desire in your heart that you want to sing something and you love it enough, that's enough reason to go into the studio and work on it. That's what it's really about, is getting in there and hammering it out, and continuing to work on it until you're happy with it. That's the way we make records, anyway. And she did it, boy; she nailed these songs that she felt like she was gonna have some struggle with. She got in there and worked on 'em until she got them the way she wanted 'em, and I really admired that. She never gave up. I just think that co-producing is something that I'm not really used to, because I, you know ... even co-producing with Gordon Kennedy on 'Mosaic,' you know ... Sharon: Ricky had the final say! [Laughs.] In a nutshell, [for this album] we made all of the decisions together. And that was such a beautiful thing, because I know there are times when I just have to say, 'I don't really understand that part right there, but I trust you. I trust what you're telling me.' I'm talking about technical stuff -- a lot of that I don't really understand why we do things a certain way, but I trusted him and his knowledge of it, and his heart for it, and he did me the same way. If I said, 'I really don't want to go that direction with this one,' he'd just quit pushing on that. He'd say, 'You're right, we need to do what feels right to both of us.' We both worked hard, and we brought ... I'm not the kind of producer that Ricky is, I'm not an experienced producer. But I bring what I bring to this. That's kinda how it came together. I have to tell you, it was so much fun. I really enjoyed the entire process more than I ever have, more than I thought I would.
I think we're in a place in our relationship where we're just secure and trust each other, and trust the Lord in such a way that we believe He's the one guiding the boat. We're just getting to go with Him. So if you can relax and not strive in something -- that doesn't mean you don't work; you put your whole effort into it, but you're not striving, you're in a place of trust. Ricky: I preferred her. I honored her. I wanted her to be ... Sharon: I have to say [laughs], I saw sometimes he would turn to me and look and ask me things, and I saw the guys look at him like, 'He's never done that before.' [Laughs.] Ricky: Well, I can always produce another record down the road. I'll always get another chance to do that. But this was so special to me that I wanted her to feel honored, and I wanted her to feel that she was more than 50 percent involved in this. I would have never just demanded that this has to be this way, and you're going to do it. That's not the way our life is. That's just ... that's suicide. [Laughs.] Sharon: [Laughing] That would be crazy, wouldn't it? Ricky: Yeah. But we had a ball, we really did. I swear, this was the most fun. We laughed so hard, and there were so many crazy little things that happened along the way. But we just had a great time. It was a wonderful experience. I would start another one tomorrow if we had the songs. What kind of promotion is there going to be for this? Are you going to do some live dates together that are focused specifically on these songs? Ricky: We really haven't totally decided exactly how we're going to do this, but we're just compiling data into our minds and trying to figure out what approach we want to do. Certainly we want to do some dates.
I told Sharon the other day, we was looking at some dates we already have booked, me and my road manager, thinking about what we could do to promote the record and how we could get it out to the most people, and so I think there's gonna be some opportunities for Sharon to come out with me and Kentucky Thunder, where we'll do maybe 20 or 30 minutes of bluegrass stuff like we normally do, or 45 minutes, and take a break, do an intermission, then come back and do four or five songs from this record and really focus on having Sharon come out and do this with me and the boys in the band. And if we want to do 'Heartbroke,' 'Honey (Open That Door)' or something like that while we're in the process, we can. I just feel like there's gonna be lots of opportunities for us. I know there's already some TV things that's come up, some major things that people want us to do. It's kind of newsworthy. Sharon's had a huge career with the Whites for so many years, and it's not like we've never sang anything together. We've had some opportunities to do some things together, but never like this. And this is such a great time, it's a great record. I think with all the new country music that's out there, this record comes at a time when it is absolutely vital that people hear [laughs] ... especially Baby Boomers that have almost drifted away from country music because it's so pop, it's so Katy Perry-ish. It's just really gotten away, and I know that there's a generation of youth out there that are so into vinyl, they are so into Merle Haggard, they are so into Ray Price and Willie Nelson and Patsy Cline and Tammy Wynette and Loretta [Lynn] ... there's a real groundswell of traditional country, and even people like Jack White are still really, really in love with it. So this is a perfect record at a perfect time for something like this to come out and really inspire people again to go back and glean through the last century's country music. [Laughs.] So we're pretty excited about it. 
The Battle of Rozgony, or Battle of Rozhanovce, was fought between King Carol Robert of Hungary and the Aba family on 15 June 1312 at Rozgony (today, Rozhanovce). The Vienna Illuminated Chronicle describes the battle as "the cruelest battle since the Mongol invasion of Europe". Carol's crushing victory marked the end of the Aba family and of its dominance in eastern Hungary. Causes After the extinction of the Árpád dynasty in 1301, the succession to the Hungarian throne was contested by many members of influential European families. One of them was Carol Robert of Anjou, the Pope's champion. After a few years Carol drove his opponents out of the country and installed himself on the throne of the Hungarian Kingdom. At that time, Hungary was a confederation of small kingdoms, principalities and duchies. However, his reign was not recognized in many parts of Hungary by the local nobles, magnates and dukes. Initially, his great enemy was Máté Csák III, who controlled several territories in western and northern Hungary. But he allied himself with the Aba family, which ruled in the east of the kingdom. In 1312, Carol besieged the castle of Sáros (in present-day Slovakia -- Šariš Castle), which was controlled by the Aba family. After the Abas received help from Máté Csák III (the Vienna Illuminated Chronicle claims that Máté sent a force of 1,700 mercenary lancers), Carol Robert of Anjou was forced to retreat to the loyal county of Szepes (Spiš), where the Saxon inhabitants joined him. The Aba family took advantage of his retreat and, given the town's strategic importance, decided to attack Kassa (today, Košice) with the forces they had gathered. Carol marched toward Kassa to confront his adversary. The Battle The Aba family's forces were obliged to lift the siege of Kassa and deploy their troops near the Tarca (the Torysa river). Carol I of Hungary was forced to take position on farmland next to a hill. 
Although the number of troops on either side is uncertain, the king's army consisted of his own men, an elite unit of Hospitaller knights and 1,000 Saxon infantrymen from Spiš. Owing to the contradictory accounts in the chronicles of the time, it is not clear to what extent the Aba family was supported by Máté Csák III. The battle began with a surprise attack by the rebels on the king's camp at midday. A bloody hand-to-hand fight followed, in which the knights of both sides suffered. The king's banner was lost at one point in the fighting, and he found himself forced to fight under that of the Hospitaller knights. After the loss of their commanders and the arrival of reinforcements from Kassa, the rebels' fate was sealed, and the victory and the Hungarian throne went to the House of Anjou-Sicily. Aftermath The leading members of the Aba family perished in the battle, and their domains were divided between the king and the loyal nobles. The loss of his principal ally was a heavy blow to Máté Csák III. Although he managed to control a large part of his territories until his death in 1321, his power began to decline immediately after the battle and he could never again launch a major offensive against the king. As an immediate consequence, Carol I of Hungary gained control over the north-eastern part of the kingdom. The battle drastically reduced the magnates' opposition to him. The king extended his power and prestige. Carol's position as King of Hungary was secured by his victory, and the resistance against him was crushed. See also Basarab I Battle of Posada Kingdom of Hungary (1000-1538) External links The Battle of Rozgony
\section{INTRODUCTION} \label{sec:intro} ASTRO-H, the new Japanese X-ray Astronomy Satellite\cite{NeXT08, Takahashi10} following the currently-operational Suzaku mission, aims to fulfill the following scientific goals: \begin{itemize} \item Revealing the large-scale structure of the universe and its evolution. \item Understanding the extreme conditions of the universe. \item Exploring the diverse phenomena of the non-thermal universe. \item Elucidating dark matter and dark energy. \end{itemize} In order to fulfill the above objectives, the ASTRO-H mission hosts the following instruments: a high energy-resolution soft X-ray spectrometer covering the 0.3--10~keV band, consisting of thin-foil X-ray optics (SXT, Soft X-ray Telescope) and a microcalorimeter array (SXS, Soft X-ray Spectrometer); a soft X-ray imaging spectrometer sensitive over the 0.5--12~keV band, consisting of an SXT focussing X-rays onto CCD sensors (SXI, Soft X-ray Imager); a hard X-ray imaging spectrometer, sensitive over the 3--80~keV band, consisting of multi-layer-coated, focusing hard X-ray mirrors (HXT, Hard X-ray Telescope) and silicon (Si) and cadmium telluride (CdTe) cross-strip detectors (HXI, Hard X-ray Imager)\cite{Takahashi02-NeXT,Takahashi02,Takahashi03-SGD,Takahashi04-SGD,HXI08}; and a soft gamma-ray spectrometer covering the 40--600~keV band, utilizing a semiconductor Compton camera with a narrow field of view (SGD, Soft Gamma-ray Detector)\cite{Takahashi02-NeXT,Takahashi02,Takahashi03-SGD,Takahashi04-SGD,Tajima05}. The SXT-SXS and SGD systems will be developed by international collaboration led by the Japanese, US and European institutions. The SXS will use a $6\times6$ element microcalorimeter array. The energy resolution is expected to be better than 7~eV. In conjunction with the $\sim$6~m focal-length SXT, the field of view and the effective area will be, respectively, about 3 arc minutes and about 210~cm${}^2$.
The SXT-SXS system will provide accurate measurements of the temperature and the turbulence/macroscopic motions of the intra-cluster medium in distant clusters of galaxies up to a redshift of about 1, allowing studies of the formation history of the large scale structure of the universe, which in turn will eventually constrain the evolution of the dark energy. The focal length of the HXT will be 12~m and the effective area will be larger than 200~cm${}^2$ at 50~keV. The HXI detector utilizes four layers of double-sided Si strip detectors overlaid on a double-sided CdTe strip detector with a BGO (Bi${}_4$Ge${}_3$O${}_{12}$) active shield. The extremely low background of the HXT-HXI system will improve the sensitivity in the 20--80~keV range by almost two orders of magnitude as compared to conventional non-imaging detectors in this energy band. The search for highly absorbed active galactic nuclei and understanding their evolution is one of the main science topics of the HXT-HXI. The SGD also utilizes semiconductor detectors using Si and CdTe pixel sensors with good energy resolution ($\lower.5ex\hbox{$\buildrel < \over\sim$}$2~keV) for the Compton camera, which was made possible by recent progress on the development of high quality CdTe sensors\cite{Takahashi01b,Watanabe05,Watanabe09,Takeda09}. The BGO active shield provides a low background environment by rejecting the majority of external backgrounds. Internal backgrounds are rejected based on the inconsistency between the constraint on the incident angle of gamma rays from Compton kinematics and that from the narrow FOV (field of view) of the collimator. This additional background rejection by Compton kinematics will improve the sensitivity by an order of magnitude in the 40--600~keV band compared with the currently operating space-based instruments.
Science objectives of the SGD include studies of particle acceleration in various sources via measurement of the non-thermal emission and the high-energy cutoff, the origin of the GeV gamma-ray emission through the observation of non-thermal bremsstrahlung signatures expected in the SGD band, and searches for the origin of the 511~keV emission from electron-positron annihilation. In addition, the SGD will be sensitive to the polarization in the 50--200~keV band from a number of accreting Galactic black hole and neutron star binaries, and from active galactic nuclei in flaring states. The ASTRO-H mission has just completed the preliminary design phase, aimed at verifying that the design of the system/subsystems/components will meet the mission requirements with sufficient reliability to accomplish the mission objectives, and at assuring that the system/subsystem/component designs are feasible in terms of technology and schedule via design analysis, fabrication and tests of breadboard models. The expected launch date is in 2014. \section{Science Requirements and Drivers} The mission-level science objectives described above require the SGD to provide spectroscopy up to 600~keV for over 10 super-massive black holes with fluxes equivalent to 1/1000 of the Crab Nebula (as measured over the 2--10~keV band, assuming the spectrum to be a power law with a spectral index of 1.7). This mission-level science requirement defines the following instrument-level requirements for the SGD: \begin{itemize} \item Effective area of the detector must be greater than 20~{cm${}^2$}\ at 100~keV to obtain a sufficient number of photons in a reasonable observation time (typically 100~ks); \item Field of view must be 0.6{${}^\circ$}\ at 150~keV or less to minimize source confusion; \item Energy resolution must be better than 2~keV to identify nuclear lines from activation backgrounds.
\end{itemize} The SGD instrument with the capabilities defined above will determine the non-thermal emission processes for a large range of celestial sources (via the measurement of the broad-band spectral shape and high-energy cutoff), with the goal of studying particle acceleration in the GeV band. For some sources, parameters of non-thermal bremsstrahlung processes will be determined; and finally, the SGD will enable the identification of the origin of the 511~keV emission line arising from electron-positron annihilation. Measurements of spectra up to 600~keV for more than 10 AGNs (Active Galactic Nuclei) will enable a probe of the existence of spectral breaks above 100~keV. Measurements of such spectral breaks will play a crucial role in solving the question of the origin of the soft gamma-ray emission in AGN (whether the emission arises from the accretion disk or from relativistic electrons in the jet). The detailed spectral measurements are expected to contribute to the understanding of the soft gamma-ray emission in more than 10 X-ray pulsars and magnetars. The SGD is also expected to be able to measure the spectrum of supernova remnants (with the prime example of Cas~A), to determine whether it is indeed dominated by non-thermal bremsstrahlung, as expected on theoretical grounds. The soft gamma-ray flux measured by the SGD can determine the magnetic field of Cas~A when combined with data from other wavelengths: this is essential when estimating the fluxes and spectra of electrons and protons accelerated in Cas~A. Perhaps the most unique feature of the SGD is that the Compton kinematics it utilizes yields good sensitivity to polarization in the 50--200~keV band for several Galactic black holes and neutron stars, and for some AGNs in flaring states. Detection of the gamma-ray polarization from these sources will bring new probes into the gamma-ray emission mechanism.
Moreover, the detection of X-ray / soft gamma-ray polarization from sources at cosmological distances will place stringent constraints on the violation of Lorentz invariance, which has a profound impact on fundamental physics. Since X-ray polarization is largely unexplored, the discovery potential is very high. In summary, the SGD is expected to provide essential data towards studies of the origin of the CXB (Cosmic X-ray Background), particle acceleration in SNRs, the origin of the hard X-ray emission from the vicinity of accreting black holes such as X-ray binaries, the Galactic center, and AGN, and non-thermal emission from galaxy clusters. \section{Instrument Concept} The SGD concept originates from the Hard X-ray Detector (HXD)\cite{HXD} onboard the Suzaku satellite. The HXD consists of Si photodiodes and GSO scintillators with a BGO active shield and a copper passive collimator, and achieved the best sensitivities in the hard X-ray band. The SGD replaces the Si photodiodes and GSO scintillators of the HXD with the Compton camera, which provides additional background rejection capabilities based on Compton kinematics. Figure~\ref{fig:SGD-concept} (a) shows a conceptual drawing of an SGD unit. A BGO collimator defines the $\sim$10{${}^\circ$}\ FOV of the telescope for high energy photons while a fine collimator restricts the FOV to $\lower.5ex\hbox{$\buildrel < \over\sim$}$0.6{${}^\circ$}\ for low energy photons (\lower.5ex\hbox{$\buildrel < \over\sim$} 150~keV), which is essential to minimize the CXB and source confusion. Scintillation light from the BGO crystals is detected by avalanche photo-diodes (APDs), allowing a compact design compared with phototubes. The hybrid design of the Compton camera module incorporates both pixelated Si and CdTe detectors. The Si sensors are used as the scatterer since Compton scattering is the dominant process in Si above $\sim$50~keV compared with $\sim$300~keV for CdTe.
The Si sensors also yield better constraints on the Compton kinematics because of the smaller effect of the finite momentum of the Compton-scattering electrons (Doppler broadening) than CdTe (approximately by a factor of two). The CdTe sensors are used as the absorber of the gamma ray following the Compton scattering in the Si sensors. The combination of two materials with low and high $Z$ (atomic number) is also beneficial for lowering backgrounds since neutron scattering is suppressed in the high-$Z$ material and activation backgrounds are negligible in the low-$Z$ material. Note that neutron and activation backgrounds are the dominant background contributions in the SGD. \begin{figure}[bth] \centering \begin{tabular}{ll} \includegraphics[height=8cm]{SGD-concept.pdf} \end{tabular} \caption{Conceptual drawing of an SGD Compton camera unit.}\label{fig:SGD-concept} \end{figure} We require each SGD event to interact twice in the Compton camera, once by Compton scattering in a Si sensor, and then by photo-absorption in a CdTe sensor. Once the locations and energies of the two interactions are measured, as shown in Figure~\ref{fig:SGD-concept}, the direction of the incident photon can be constrained by Compton kinematics with the formula, \begin{eqnarray} \cos\theta &=& 1+\frac{m_ec^2}{E_2+E_1}-\frac{m_ec^2}{E_2}, \label{eq:kinematics} \end{eqnarray} where $\theta$ is the polar angle of the Compton scattering, and $E_1$ and $E_2$ are the energies deposited in the two photon interactions. The high energy resolution of the Si and CdTe devices is essential in reducing the uncertainty of $\theta$. The angular resolution is limited to $\sim$8{${}^\circ$}\ at 100~keV due to the Doppler broadening and $\sim$3{${}^\circ$}\ at 600~keV due to the pixel size of the semiconductor sensor.
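As an illustrative sketch (not the flight software), the angle reconstruction above and the rejection of kinematically impossible events can be written as follows; the function name and the example energies are hypothetical:

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV

def compton_angle(e1_keV, e2_keV):
    """Reconstruct the polar Compton scattering angle (degrees).

    e1_keV: energy deposited by the recoil electron in the Si scatterer.
    e2_keV: energy of the scattered photon absorbed in the CdTe layer.
    Returns None when |cos(theta)| > 1, i.e. the energy split is
    kinematically impossible and the event can be rejected as background.
    """
    cos_theta = 1.0 + M_E_C2 / (e1_keV + e2_keV) - M_E_C2 / e2_keV
    if abs(cos_theta) > 1.0:
        return None  # inconsistent with Compton kinematics
    return math.degrees(math.acos(cos_theta))

# A 100 keV photon depositing 10 keV in Si and 90 keV in CdTe:
theta = compton_angle(10.0, 90.0)     # ~64 degrees
# The same 100 keV split as 30/70 keV is kinematically forbidden:
rejected = compton_angle(30.0, 70.0)  # None
```

A reconstructed angle that falls outside the collimator-defined field of view would likewise be discarded, which is the kinematic rejection step described in the text.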
We require that the incident photon angle inferred from the Compton kinematics is consistent with the FOV, which dramatically reduces dominant background sources such as radio-activation of the detector materials and neutrons. The low background realized by the Compton kinematics is the key feature of the SGD since the photon sensitivity of the SGD is limited by the backgrounds, not the effective area. As a natural consequence of the Compton approach used to decrease backgrounds, the SGD is quite sensitive to X/gamma-ray polarization, thereby opening up a new window to study the geometry of the particle acceleration and emission regions, and the magnetic field in compact objects and astrophysical jets. The Compton scattering cross section depends on the azimuthal Compton scattering angle with respect to the incident polarization vector as; \begin{eqnarray} \frac{\delta\sigma}{\delta\Omega} \propto \left( \frac{E_\gamma'}{E_\gamma}\right)^2\left(\frac{E_\gamma'}{E_\gamma}+\frac{E_\gamma}{E_\gamma'}-2\sin^2\theta\cdot\cos^2\phi\right), \label{eq:polarization} \end{eqnarray} where $\phi$ and $\theta$ are the azimuthal and polar Compton scattering angles, and $E_\gamma$ and $E_\gamma'$ are the incident and scattered photon energies. It shows that the $\phi$ modulation is largest at $\theta=90^\circ$, \emph{i.e.} perpendicular to the incident polarization vector. \section{Instrument Design} The SGD consists of two identical sets, each comprising an SGD-S, two SGD-AEs, an SGD-DPU and an SGD-DE. The SGD-S is a detector body that includes a $4\times 1$ array of identical Compton camera modules surrounded by BGO shield units and fine passive collimators as shown in Figure~\ref{fig:SGD-design} (a). The two SGD-S are mounted on opposite sides of the spacecraft side panels to balance the weight load since each has a high mass (150~kg).
It was determined that a $2\times 2$ array arrangement is preferred as it allows an increase of the BGO thickness for the same weight and can also keep symmetry under 90{${}^\circ$}\ rotation, which is important for polarization measurements. However, the current $4\times 1$ arrangement is employed to minimize the deformation of the spacecraft side panel. The SGD cooling system is attached to the cold plate of the SGD-S housing. The APD CSA (charge-sensitive amplifier) box and the HV (high voltage) power supply are also attached to the SGD housing. The SGD-AE is an electronics box that provides power management and housekeeping (HK) functions for the Compton camera system and the APD readout system. It also performs APD signal processing. The SGD-DPU functions as a digital interface to the SGD-DE via the SpaceWire network standard and also houses the SGD-PSU (Power Supply Unit). The SGD-DE includes a microprocessor, performs data processing for event and HK data, and is connected to the satellite SpaceWire network. The topology of the SpaceWire network is designed to be redundant. Data can be routed via another DPU if one of the DPU-DE or DE-router connections is broken. In addition, a spare DE shared by all instruments on the satellite is included in the payload: the data can be routed to the spare DE if one of the SGD-DEs malfunctions. Design details of each component are described below. \begin{figure}[bth] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=6cm]{SGD-crosssection.pdf} \hspace*{0.2cm} & \includegraphics[height=4.5cm]{SGD-CC-side.pdf} \includegraphics[height=4.5cm]{SGD-CC-top.pdf} \end{tabular} \caption{Schematic drawing of (a) an SGD-S and (b) sensor configuration of a Compton camera.}\label{fig:SGD-design} \end{figure} \subsection{Compton camera} The Compton camera consists of 32 layers of Si sensors and 8 layers of CdTe sensors surrounded by 2 layers of CdTe sensors as shown in Figure~\ref{fig:SGD-design} (b).
The location of the CdTe sensors on the side is slightly displaced in the horizontal direction to allow placement of the readout ASIC (Application Specific Integrated Circuit) at the corner of the sensor. This arrangement allows a placement of the side CdTe sensors very close to the stacked Si and CdTe sensors to maximize the coverage of the photons scattered by the Si sensors. In addition to the sensor modules, the Compton camera holds an ACB (ASIC controller board) and four ADBs (ASIC driver boards). The ACB holds an FPGA (field programmable gate array) that controls the ASICs and communicates with the SpaceWire interface via a serial link. The ADB buffers control signals from the ACB and sends control signals to 52 ASICs, and also provides a current limiter to power the ASICs. The mechanical structure of the Compton camera needs to hold all components described above within the size of $11\times11\times12$~cm$^3$. This size constraint is imposed to minimize the size of the BGO active shield since the BGO is the dominant contribution to the total weight of the SGD-S. Another important requirement for the mechanical structure is cooling of the sensors. The temperature of all sensors needs to be maintained to within 5{${}^\circ$C}\ of the cold plate interface at the bottom of the Compton camera. \begin{figure}[htbp] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=5cm]{./CC-structure.pdf} & \includegraphics[height=5cm]{./CC-Si-CdTe-stack.pdf} \end{tabular} \caption{(a) Drawing of Compton camera structure. (b) Drawing of a stack of Si and CdTe sensor tray modules.
Note they are facing left in the side view.} \label{fig:CC-structure} \end{figure} \begin{figure}[htbp] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=5cm]{./CC-Si-tray.pdf} \hspace*{1cm} & \includegraphics[height=5cm]{./CC-CdTe-tray.pdf} \end{tabular} \caption{Top and side views of (a) Si and (b) CdTe sensor tray modules.} \label{fig:CC-trays} \end{figure} Figure~\ref{fig:CC-structure} (a) shows the mechanical support structure of the Compton camera. The Compton camera consists of a stack of Si and CdTe sensor trays as shown in Figure~\ref{fig:CC-structure} (b), four CdTe sensor modules on the side, and also top and bottom frames to hold them together. The top and bottom frames are held together by four pillars with M3 screws. Each ADB is attached to the side CdTe sensor module and an ACB is attached to the bottom frame. The material of the camera structure must have a CTE (coefficient of thermal expansion) close to those of the Si and CdTe sensors (a few~{$\mu$m}/m/K). Currently, it is planned to employ PEEK (polyether ether ketone) loaded with carbon fibers for the trays and the top frame where a low-$Z$ material is required, and titanium for the pillars and the bottom frame. Since the carbon fiber-filled PEEK is conductive, the trays need to be conformally coated with Parylene (the commercial name for xylylene polymers) to avoid shorting of the bias voltage for the sensors. Parylene can produce pinhole-free coatings with high resistivity, uniform thickness and chemical tolerance.
\begin{figure}[htbp] \centering \begin{tabular}{ll} (a) & (b)\\ & \includegraphics[height=2.5cm]{./SGD-FEC-FPC.pdf} \\ & (c) \vspace*{-2.8cm}\\ \includegraphics[height=5cm]{./SGD-FEC.pdf} & \includegraphics[height=2.0cm]{./SGD-FEC-assembly.pdf} \end{tabular} \caption{Drawings of (a) an FEC (front-end card), (b) an FPC (flexible printed circuit) that connects two FECs and takes signals to an ADB (ASIC driver board), and (c) a cross-sectional view of two assemblies, each consisting of two FECs and an FPC.} \label{fig:CC-FEC} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=10cm]{./CC-side-CdTe-module.pdf} \caption{Top and side views of CdTe sensor module on the side of Compton camera.} \label{fig:CC-side-CdTe} \end{figure} Each Si and CdTe sensor tray consists of Si or CdTe sensors and FECs (front-end cards) mounted on both sides of the tray frame as shown in Figure~\ref{fig:CC-trays}. An ASIC is mounted on each FEC. Figure~\ref{fig:CC-FEC} shows the schematic drawings of an FEC, and an FPC that connects two FECs and an ADB. Two FECs are mounted at each corner of a tray and connected by an FPC as shown in Figure~\ref{fig:CC-FEC} (c). Figure~\ref{fig:CC-side-CdTe} shows drawings of the side CdTe sensor module viewed from three directions. Two CdTe sensor boards are stacked together with PEEK spacers. A titanium frame will be attached to the back of this module to reinforce mechanical rigidity. We have fabricated mechanical models of the Compton camera with slightly different materials (polycarbonate trays and aluminum pillars) and confirmed that they survive the vibrations expected from the launch vehicle (HII-A). We plan to fabricate a mechanical model and a thermal model of the Compton camera with the final design and evaluate their mechanical and thermal properties. \subsection{Si and CdTe sensors} Si and CdTe sensors are pixelated to give two-dimensional coordinates with a pixel size of $3.2\times3.2$~{mm${}^2$}\ and a thickness of 0.6~mm for Si and 0.75~mm for CdTe.
The pixel size is determined to minimize the number of pixels for lower power consumption while keeping the pixel size from being the dominant contribution to the angular resolution of the Compton kinematics. The thicknesses of the Si and CdTe sensors are determined from constraints on the bias voltages required to operate the sensors under optimal conditions. In order to suppress the leakage current from the edge of the sensor, a guard ring is placed at the periphery of the sensor surrounding all the pixels. Each Si sensor has $16\times16$ pixels providing a $5.12\times5.12$~{cm${}^2$}\ active area. The signal of each pixel on the Si sensor is brought out to one of the bonding pads at the corner of the sensor by a readout electrode laid out on top of the SiO$_2$ insulation layer with a thickness of 1.5~{$\mu$m}\ as shown in Figure~\ref{fig:sensors} (a). For readout purposes, the Si sensors are grouped into four quadrants of $8\times8$ pixels. A CdTe sensor has $8\times8$ pixels providing a $2.56\times2.56$~{cm${}^2$}\ active area since it is difficult to fabricate a CdTe sensor much larger than $3\times3$~{cm${}^2$}. CdTe sensors are tiled in a $2\times2$ array for each layer in the bottom and in a $2\times3$ array for each layer on the side to obtain the required active area. In order to overcome the small mobility and short lifetime of carriers in CdTe sensors, we employ a Schottky-barrier diode type CdTe sensor with an indium (In) anode and a platinum (Pt) cathode so that we can apply a high bias voltage with low leakage current. The In electrode functions as a common biasing electrode while the Pt electrodes form the pixels. Titanium is placed on the In electrode to reduce the resistance. Gold (Au) is placed on the Pt electrode to improve the connection of the In/Au stud bump bonding. Diode type CdTe sensors suffer degradation of energy resolution due to charge trapping over time, an effect called polarization.
It is known that the polarization slows down at lower temperatures and that its effect can be reduced by applying a higher bias voltage. For example, it was found that operation of this type of CdTe sensor for a week shows little polarization effect at $<$5{${}^\circ$C}\ and $>$1000~V/mm, and that the polarization effect can be reversed by turning off the bias voltage. Since the recovery process accelerates at higher temperatures, annealing the sensor to minimize the downtime may be required. Unlike Si sensors, CdTe sensors cannot have readout electrodes integrated above the pixel electrodes on the device. \begin{figure}[bthp] \centering \begin{tabular}{ll} (a) & (b)\\ \includegraphics[height=5cm]{./SGD-Si.pdf} & \includegraphics[height=2.5cm]{./CdTe-module-schematic.pdf} \end{tabular} \caption{(a) Schematic drawing of the Si sensor showing the layout of pixels and readout traces. (b) Conceptual illustration of the structure of a CdTe sensor module.} \label{fig:sensors} \end{figure} In addition, it is not possible to wire-bond to the electrodes of the CdTe sensor. In order to address these issues, we employ a separate fanout board to route the signal from each pixel to the corner of the sensor where the ASICs are placed. The fanout board is made of a 0.3~mm thick ceramic (Al$_2$O$_3$) substrate that allows a fine pitch between electrodes to match the input pitch of the ASIC (91~{$\mu$m}). The CdTe sensor and the fanout board are bump-bonded via In/Au stud bumps as shown in Figure~\ref{fig:sensors} (b). The ASIC and the fanout board are connected by wire bonding. Table~\ref{table:sensors} summarizes the specifications of the Si and CdTe sensors.
\begin{table}[htdp] \caption{Specifications for Si and CdTe sensors} \begin{center} \begin{tabular}{|lrr|} \hline Description & Si & CdTe \\\hline Sensor active area & $5.12 \times 5.12 $~{cm${}^2$} & $2.56 \times 2.56 $~{cm${}^2$} \\ Pixel area & $3.2\times 3.2$~{mm${}^2$} & $3.2\times 3.2$~{mm${}^2$} \\ Number of pixels & $16\times 16$ & $8\times 8$ \\ Thickness of sensor & $0.62$~mm & $0.75$~mm \\ Thickness of depletion (active) layer & $0.60$~mm & $0.75$~mm \\ Thickness of inactive layer & $0.02$~mm & N/A \\ Bias voltage & 250~V & 1000~V\\ Leakage current per pixel @ $-10${${}^\circ$C} & $<$50~pA & $<$50~pA \\ Leakage current per pixel @ $20${${}^\circ$C} & $<$4000~pA & $<$4000~pA \\ Width of readout electrode & 8~{$\mu$m} & N/A \\ Thickness of insulation for readout electrodes & 1.5~{$\mu$m} & N/A\\\hline \end{tabular} \end{center} \label{table:sensors} \end{table}% \subsection{Application Specific Integrated Circuit} The main performance requirements for the ASIC are low noise ($\lower.5ex\hbox{$\buildrel < \over\sim$} 2$~keV FWHM), low power ($\lower.5ex\hbox{$\buildrel < \over\sim$} 0.3$~mW/channel) and a fast readout time ($\lower.5ex\hbox{$\buildrel < \over\sim$} 200\;\mu$s) to satisfy the $<$2\% dead time requirement at a 100~Hz trigger rate. In order to satisfy these requirements, the ASIC is developed based on the VIKING architecture\cite{VA94,Tajima04}, which is known for its good noise performance and has been used in various space experiments such as Swift, PAMELA and AGILE. Figure~\ref{fig:VIKING} shows the circuit diagram of the ASIC developed for the SGD (and the HXI). In the VIKING architecture, each channel consists of a charge-sensitive amplifier followed by two shapers. One shaper with a short shaping time is followed by a discriminator to form a trigger signal. The other shaper with a long shaping time is followed by a sample-and-hold circuit to hold the pulse height at the timing specified by an external hold signal.
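The ASIC performance figures are quoted both in energy and, in Table~\ref{table:ASIC-spec}, in equivalent electrons at the input. As a quick sanity check, the two sets of numbers are related by the mean energy per electron-hole pair in silicon; the sketch below assumes the standard value of $w \approx 3.6$~eV (an assumed constant, not stated explicitly in the text):

```python
# Electron <-> energy conversion for the Si layers, assuming the
# standard mean pair-creation energy in silicon, w = 3.6 eV per
# electron-hole pair (an assumption; not quoted in the text itself).

W_SI_EV = 3.6          # eV per electron-hole pair in Si (assumed)
FWHM_PER_RMS = 2.355   # Gaussian FWHM / RMS

def electrons_to_kev(n_electrons):
    """Convert an input charge in electrons to deposited energy in keV."""
    return n_electrons * W_SI_EV / 1000.0

# An equivalent noise charge of 180 e- (RMS) expressed as FWHM in keV:
noise_fwhm_kev = electrons_to_kev(180 * FWHM_PER_RMS)
print(round(noise_fwhm_kev, 2))    # ~1.53 keV, i.e. the ~1.5 keV FWHM quoted

# The 1500 e- trigger threshold and the +/-100,000 e- dynamic range:
print(electrons_to_kev(1500))      # ~5.4 keV
print(electrons_to_kev(100_000))   # ~360 keV
```

These reproduce the Si entries of Table~\ref{table:ASIC-spec} (1.5~keV FWHM noise, 5.4~keV threshold, 360~keV dynamic range).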
\begin{figure}[htbp] \centering \includegraphics[height=10cm]{./VATA-schematic.pdf} \caption{Circuit diagram of the ASIC developed for the SGD. The circuits shown in the blue background region are implemented in this development.} \label{fig:VIKING} \end{figure} The hold signal is produced from the trigger signal with an appropriate delay. Many important functionalities are integrated in the ASIC for the SGD in order to minimize the additional components required to read out the signal, as shown in the blue background region of the circuit diagram. As a result, we only need an FPGA, several digital drivers and receivers, and passive components (resistors and capacitors) to operate the 208 ASICs in a Compton camera. The signals in all channels on the ASIC are converted to digital values in parallel with Wilkinson-type analog-to-digital converters (ADCs), in which the time taken by a voltage ramp to cross the sampled voltage is measured by a counter. The conversion time is less than 100~$\mu$s using the external clock or less than 50~$\mu$s using the internal clock. (The conversion time depends on the pulse height of the signal.) In order to minimize the readout time, the only channels read out are those above a data threshold, which can be set digitally for each channel independently of the trigger threshold. We usually observe common mode noise from this type of ASIC at the level of $\sim$1~keV (it can be worse if the power supplies and grounding are not appropriate). The common mode noise has to be subtracted to apply the threshold accurately for the zero suppression. The common mode noise level of each event is estimated by taking the ADC value of the 32nd-highest pulse height (half the number of channels), corresponding to the median of all ADC values. With zero suppression, the readout time is $0.5\;\mu$s per ASIC when no data are read out and $(9+n)\;\mu$s when we read out $n$ channels. Without zero suppression, the readout time becomes $73\;\mu$s per ASIC.
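The median-based common mode subtraction and the zero-suppressed readout described above can be sketched as follows. This is an illustrative model, not the flight firmware; the timing constants are the quoted figures (0.5~$\mu$s for an empty ASIC, $(9+n)$~$\mu$s for $n$ hit channels):

```python
# Illustrative sketch (not flight firmware) of the common-mode
# subtraction and zero-suppression scheme described in the text.

def common_mode(adc_values):
    """Common mode estimate for one event: the median of the 64 channel
    ADC values, i.e. the 32nd value in sorted order."""
    ordered = sorted(adc_values)
    return ordered[len(ordered) // 2]

def readout_time_us(adc_values, data_threshold):
    """Zero-suppressed readout time of one ASIC in microseconds, using
    the quoted 0.5 us (no data) and (9 + n) us (n hit channels)."""
    cm = common_mode(adc_values)
    n_hits = sum(1 for v in adc_values if v - cm > data_threshold)
    return 0.5 if n_hits == 0 else 9 + n_hits

# Example: 64 channels with a ~100 ADC baseline and two real hits.
event = [100] * 62 + [400, 350]
print(common_mode(event))          # common mode estimate: 100
print(readout_time_us(event, 50))  # 2 channels above threshold -> 11 us
```

Without zero suppression every channel is converted and read, which is why the fixed 73~$\mu$s figure applies in that mode.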
The ASIC produces all the necessary analog bias currents and voltages on the chip using internal DACs (Digital to Analog Converters), except for the main bias current which sets the scale of all bias currents: this is provided by an external circuit on the FEC. Each bit of the registers for all internal DACs and other functions consists of three flip-flops and a majority selector for tolerance against single event upset (SEU). If the majority selector detects any discrepancy among the three flip-flops, it sets an SEU flag, which is read out as part of the output data. The ASIC is fabricated on a wafer with an epitaxial layer, which improves immunity against latch-up. Table~\ref{table:ASIC-spec} summarizes the ASIC specifications. \begin{table}[htdp] \caption{SGD ASIC (VATA450) specifications} \begin{center} \begin{tabular}{|l|r|} \hline \multicolumn{2}{|c|}{Geometrical specifications} \\\hline Number of channels & 64 \\ Input pitch & 91~$\mu$m \\ Thickness & 0.45~mm \\ \hline \multicolumn{2}{|c|}{Analog specifications} \\\hline Power consumption & 0.2~mW/channel \\ Fast shaper peaking time & 0.6~$\mu$s \\ Slow shaper peaking time & $\sim$3 $\mu$s \\ Noise performance & 180~$e^-$ (RMS) at 6~pF load\\ & 1.5~keV (FWHM) for Si \\ Threshold & 1500~$e^-$ at 6~pF load \\ & 5.4 keV for Si \\ Threshold range & 625 -- 6250~$e^-$ \\ Threshold step & 208~$e^-$ \\ Dynamic range & $\pm$100,000~$e^-$ \\ & 360~keV for Si \\ \hline \multicolumn{2}{|c|}{Digital specifications} \\\hline ADC setup time & 5~$\mu$s \\ ADC power consumption & 0.5--2 mW/channel \\ & 5--20~$\mu$W/channel at 100~Hz \\ Data clock speed & $<$10~MHz \\ Conversion clock speed & $<$10~MHz (external clock) \\ & $<$20~MHz (internal clock)\\ Conversion time & $<$100~$\mu$s (external clock) \\ & $<$50~$\mu$s (internal clock)\\ Readout time (no data) & 0.5~$\mu$s per ASIC \\ Readout time ($n$ channels) & $(9+n)$~$\mu$s per ASIC \\ \hline \end{tabular} \end{center} \label{table:ASIC-spec} \end{table}% The data input and output
circuits on the ASIC are designed to allow daisy-chaining of multiple ASICs. In one scheme, the data output of one ASIC can be connected to the input of another ASIC, which passes the input data to its output via a shift register. This scheme is used to set register values. In another scheme, the outputs of several ASICs can be connected to a single bus. The output is controlled by passing a token from ASIC to ASIC. In the case of the trigger signal, ASICs can issue triggers at any time since the output circuit is an open-drain FET, allowing multiple triggers on the same bus. In the Compton camera, 6 or 8 ASICs are daisy-chained. \subsection{BGO active shield} A thick active shield made of BGO scintillator is employed to reduce the in-orbit background of the SGD. BGO is dense, has a high stopping power and high transparency, and can be grown into large crystals, although its light output is lower than that of NaI or CsI. Scintillation light from the BGO is detected with an APD. In addition to providing veto signals for cosmic rays and gamma rays from outside of the FOV, the BGO shield is also used to reduce the number of SAA (South Atlantic Anomaly) protons reaching the sensors, since those protons are the main cause of the activation of sensor materials. The BGO shape is designed so that any trajectory that intersects the Compton camera must go through at least 3~cm of BGO before it reaches the camera. In order to effectively detect cosmic rays and gamma rays that interact with the BGO, the detection threshold of the BGO readout system must be lower than 100~keV. This requirement imposes constraints on the BGO shape, the reflector design, and the performance of the APD readout system. The BGO shield consists of 30 BGO crystals, whose locations and shapes are indicated by the green polygons in Figure~\ref{fig:SGD-design} (a). The weight of each BGO module is 2--6~kg and the total weight is $\sim$100~kg.
We employ a modular mechanical structure for the BGO shield, in which each BGO crystal is supported by a CFRP enclosure, in order to make the BGO modules easier to handle. The BGO enclosure consists of a CFRP base that is glued to the BGO crystal via a BaSO$_4$-based reflector painted on the BGO, and of CFRP covers, as shown in Figure~\ref{fig:BGO-concept}. The CFRP base has screw holes used to attach the module to the housing structure. The BaSO$_4$-based reflector is chosen for the mechanical strength required for the base bonding. The remaining sides of the BGO crystal are covered by both ESR (Enhanced Specular Reflector) and Gore-Tex sheets for better reflection properties. \begin{figure}[htbp] \centering \includegraphics[height=6cm]{./BGO-CFRP-concept.pdf} \hspace*{2cm} \includegraphics[height=6cm]{./BGO_HA_3d_image.jpg} \caption{Conceptual views of a BGO enclosure.} \label{fig:BGO-concept} \end{figure} \subsection{Fine collimator}\label{sec:FC} The BGO active shield has an opening of $9.7\times9.7$ deg$^2$, which is too large, resulting in a CXB (cosmic X-ray background) level higher than the NXB (non X-ray background) and substantial source confusion within the FOV below $\sim$150~keV. Passive collimators called fine collimators (FCs) are installed in the opening of the BGO active shield to reduce the FOV to 33.3~arcmin (FWHM). The material and its thickness define the maximum effective energy (100--150~keV) of the FC. Note that the BGO is thick enough to detect any gamma rays in the SGD energy band ($<$600~keV). The default choice is 0.1~mm thick PCuSn (with a length of 324~mm), which yields a maximum effective energy of $\sim$100~keV as shown in Figure~\ref{fig:FC-trans} (a). The collimator cell size is chosen to be 3.2~mm, yielding an aperture opening of $\sim$94\%. In order to ensure a transparency better than 90\%, the alignment of the FC must be better than 10\% of its FOV, i.e. $\sim$3~arcmin. Alignment mechanisms will be built into the mounting structure of the FC.
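The quoted aperture opening and FOV follow from simple geometry. The sketch below is an idealized model (square cells of 3.2~mm pitch, 0.1~mm fully opaque walls, 324~mm length; numbers taken from the text), not the full analytical calculation behind Figure~\ref{fig:FC-trans}:

```python
import math

# Idealized geometry check for the fine collimator (FC), using the
# numbers quoted in the text: 3.2 mm cell, 0.1 mm PCuSn foil, 324 mm
# length.  A simplified model, not the full analytical calculation.

cell = 3.2    # mm, square cell opening
foil = 0.1    # mm, wall (foil) thickness
length = 324  # mm, collimator length

# On-axis open fraction of a square-cell collimator:
open_fraction = (cell / (cell + foil)) ** 2
print(round(100 * open_fraction, 1))   # ~94.0 %, the quoted aperture opening

# FWHM of the (triangular) angular response for fully opaque walls:
fwhm_arcmin = math.degrees(math.atan(cell / length)) * 60
print(round(fwhm_arcmin, 1))           # ~34 arcmin, close to the quoted 33.3
```

The small difference between the idealized $\sim$34~arcmin and the quoted 33.3~arcmin reflects details of the cell geometry not captured by this sketch.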
The FC material could be molybdenum (Mo) to obtain a higher maximum effective energy of $\sim$150 keV, as shown in Figure~\ref{fig:FC-trans} (b). However, Mo is expected to have more activation lines than PCuSn due to its higher atomic number. We plan to carry out a beam test to measure the activation of Mo. \begin{figure}[htbp] \centering \begin{tabular}{ll} (a) & (b)\\ \includegraphics[height=5.5cm]{./sgd_fc_33arcmin_100umPCuSn_lin_20100208a.pdf} \hspace*{0.5cm} & \includegraphics[height=5.5cm]{./sgd_fc_ene_33arcmin_lin_20100220a.pdf} \end{tabular} \caption{(a) Analytical calculation of the transparency of the fine collimator as a function of the angle from the FOV center for the default design with 100~$\mu$m thick PCuSn. (b) Comparison of the FC transparency among different FC materials as a function of the energy.} \label{fig:FC-trans} \end{figure} \subsection{Avalanche Photo Diode} The APD is chosen as the photon detector of the BGO shield mainly for its compact size compared with photomultipliers, which makes it compatible with the modular structure of the shield. Although a larger APD yields a better photon collection efficiency ($\propto S^{0.5}$), the capacitance and the leakage current of the APD also increase in proportion to the area. Based on experiments with 3, 5, 10 and 20~mm APDs, we concluded that 10~mm is the most appropriate size for our application. We employ the HPK S8664-55 with a slightly modified structure for lower leakage current; devices of this type were used by the CMS experiment at the CERN LHC. The APD is encapsulated in silicone resin to avoid the cracks that appeared in epoxy resin under thermal cycling. Since the gain of the APD is temperature dependent ($-3$\%/{${}^\circ$C}), the temperature of the APD needs to be controlled within 3{${}^\circ$C}\ to keep the gain variation within 10\%. If the temperature variation cannot be controlled within 3{${}^\circ$C}, the APD bias voltage needs to be adjusted to compensate for the gain change.
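The gain-stability budget above can be checked directly from the quoted temperature coefficient. A small sketch, treating the $-3$\%/{${}^\circ$C}\ coefficient as a compounded (exponential) drift, which is an assumption about how the coefficient applies over a finite excursion:

```python
# Check of the APD gain-stability budget quoted in the text:
# a -3 %/degC temperature coefficient, with a 3 degC control band
# required to keep the gain variation within 10 %.

TEMP_COEFF = -0.03   # fractional gain change per degC (from the text)

def gain_change(delta_t_degc):
    """Fractional gain change for a temperature excursion, compounding
    the per-degree coefficient (an assumed model)."""
    return (1 + TEMP_COEFF) ** delta_t_degc - 1

# A 3 degC excursion stays inside the 10 % budget:
print(round(100 * abs(gain_change(3)), 1))   # ~8.7 %
```

A simple linear estimate ($3 \times 3\% = 9\%$) gives essentially the same conclusion: a 3{${}^\circ$C}\ band keeps the gain variation just under the 10\% budget.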
Signals from the APDs are routed to CSAs (charge sensitive amplifiers) in shielded boxes located in close vicinity to the APDs on the SGD-S housing. Since the APD capacitance is relatively large, $\sim$250~pF, the CSA has to be a low-noise amplifier with a weak dependence on the input capacitance. The breadboard model of the CSA yields 540 electrons (FWHM) at 0~pF load with a capacitance dependence of 2.7 electrons/pF, corresponding to $\sim$1200 electrons at 250~pF. \subsection{Electronics} The SGD electronics system consists of the Compton camera front-end, the CPMU (Camera Power Management Unit), the APD-CSAs, the APMU (APD Processing and Management Unit), the MIO (Mission I/O) boards and power supplies, as shown in the SGD electronics block diagram in Figure~\ref{fig:electronics-diagram}. \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{./SGD-electronics-diagram-SPIE.pdf} \caption{Block diagram of the SGD readout system.} \label{fig:electronics-diagram} \end{figure} The front-end electronics of the Compton camera consists of four groups, each comprising 42 Front-End Cards (FECs) and an ASIC Driver Board (ADB), plus an ASIC Control Board (ACB). Two FECs are connected back to back at the corner of each Si and CdTe tray, and are read out in a daisy chain. FECs for the CdTe modules on the side have six ASICs that are daisy-chained on each board. Forty FECs from the Si/CdTe trays and two FECs from the CdTe modules on the side are connected to an ADB, which is located on the side of the Compton camera. Eight FECs (eight ASICs) are daisy-chained for the Si/CdTe trays, resulting in 7 groups of ASICs for each side: 5 for the Si/CdTe trays (8 ASICs each) and 2 for the side CdTe (6 ASICs each). Only digital communication is required between the ADBs and FECs, and all digital signals are differential to minimize EMI (electromagnetic interference). Digital signals that are not used frequently are single-ended between the ADB and the ACB due to constraints on the cable pin count.
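The APD-CSA noise figures quoted earlier in this section scale linearly with the input capacitance, so the noise at the actual APD load follows directly from the breadboard measurement:

```python
# Extrapolation of the APD charge-sensitive-amplifier (CSA) noise
# from the breadboard measurement quoted in the text: 540 electrons
# (FWHM) at 0 pF with a slope of 2.7 electrons/pF.

NOISE_0PF = 540.0   # electrons FWHM at zero input capacitance
SLOPE = 2.7         # electrons FWHM per pF of input capacitance

def csa_noise_fwhm(load_pf):
    """Expected CSA noise (electrons, FWHM) for a given load."""
    return NOISE_0PF + SLOPE * load_pf

# At the ~250 pF APD capacitance:
print(round(csa_noise_fwhm(250)))   # ~1215 electrons, i.e. the ~1200 quoted
```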
The ADB detects excess current in each ASIC group in order to protect the ASICs from latch-ups due to highly ionizing radiation or other causes. We can recover ASICs from latch-ups by cycling the power supply. The ASICs are controlled by an FPGA on the ACB (one board per Compton camera). Functions of the ACB include: loading ASIC registers, sending a trigger to the MIO and hold signals to the ASICs with proper delays upon reception of triggers from the ASICs, controlling the analog-to-digital conversion on the ASICs and the data transfer from the ASICs, halting the data acquisition process if the MIO cancels the trigger, formatting data from the ASICs and sending them to the MIO, and counting triggers and monitoring dead time. CPMU functions include control of power switches and power supply voltages, and monitoring of power supply voltages and temperatures. Remote HV (high voltage) bias power supplies are controlled via a slow serial data link. We plan to ramp the bias power supply up and down for the safety of the front-end electronics in the normal power-up and power-down procedures. However, appropriate low-pass filters should be placed between each sensor and the bias power supply so that any sudden change of the HV for unforeseen reasons does not destroy the front-end electronics connected to the sensors. The APMU receives APD signals from the APD CSAs and digitizes them with flash ADCs. The digitized values are continuously monitored by an FPGA on the APMU. The FPGA differentiates the APD signal and issues a trigger when the differentiated signal is above a certain threshold. The time constant of the differentiation has to be optimized based on the sampling frequency and the rise time of the CSA. A slightly more sophisticated algorithm is used to calculate more accurate pulse height information given the trigger timing of the Compton camera. Other APMU functions include housekeeping (HK) tasks such as control of power switches and power supply voltages, and monitoring of power supply voltages and temperatures.
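The APMU trigger logic described above can be sketched in a few lines. This is an illustrative model only (the actual FPGA implementation, lag and threshold values are not specified in the text): flash-ADC samples of the CSA output are differentiated and a trigger is issued when the difference exceeds a threshold.

```python
# Illustrative sketch (not flight firmware) of the APMU trigger:
# differentiate the digitized APD/CSA waveform and trigger on a
# threshold crossing.  The lag and threshold are assumed values.

def differentiate(samples, lag=1):
    """Discrete differentiation; the lag plays the role of the
    differentiation time constant to be tuned against the sampling
    frequency and the CSA rise time."""
    return [samples[i] - samples[i - lag] for i in range(lag, len(samples))]

def find_trigger(samples, threshold, lag=1):
    """Index of the first sample whose differentiated value exceeds
    the threshold, or None if there is no trigger."""
    for i, d in enumerate(differentiate(samples, lag), start=lag):
        if d > threshold:
            return i
    return None

# A CSA-like step: flat baseline, then a fast rise to a new level.
waveform = [0, 0, 0, 0, 5, 40, 80, 100, 100, 100]
print(find_trigger(waveform, threshold=20))   # triggers on the fast rise: 5
```

Because the differentiation responds to the slope rather than the absolute level, slow baseline drifts of the CSA output do not fire the trigger.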
Remote HV power supplies for the APDs are controlled via a slow serial data link. MIO functions include: recording the event time tag, assembling veto information upon reception of trigger signals from the Compton camera, sending a trigger cancel within 10~$\mu$s if necessary based on the veto information, managing dead time, veto signals from the APMU, and ASIC registers (which includes checking the SEU bit in the ASIC data), formatting the data (sensor data, time tag and veto hit pattern) for transmission, and controlling the CPMU including reception of HK data from the CPMU. Communications between the Compton camera and the MIO, and those between the CPMU/APMU and the MIO, are handled via a 3-line (CLK, DATA, STRB) serial protocol on an LVDS physical layer. We have two additional real-time LVDS lines dedicated to the trigger and trigger acknowledgement signals between the CPMU and the MIO. We also have dedicated LVDS lines to issue two types of veto signals between the APMU and the MIO. \section{Expected Scientific Performance} The effective area, non X-ray backgrounds, and sensitivities are evaluated by Geant4-based Monte Carlo simulations. The solid line in Figure~\ref{fig:SGD-performance} (a) shows the effective area as a function of the incident energy for the current SGD design. A maximum effective area of more than 30~{cm${}^2$}\ is realized at around 80--100~keV, which corresponds to a $\sim$15\% reconstruction efficiency since the geometrical area of the SGD is 210~{cm${}^2$}. The effective area at low energies is suppressed due to photo-absorption in Si, while the loss at high energies is due to multiple-Compton events, which can be partially recovered by an improved reconstruction algorithm. The dotted line in Figure~\ref{fig:SGD-performance} (a) shows the inverse of the minimum detectable polarization (MDP) in arbitrary units assuming no background. The polarization sensitivity falls off more slowly at low energies and faster at high energies due to the lower modulation factor resulting from more forward scattering at higher energies.
This result indicates that the SGD is sensitive to polarization in the 50--200~keV energy band. \begin{figure}[bth] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=5.4cm]{SGD-Aeff.pdf} \hspace*{0.5cm} & \includegraphics[height=5.4cm]{SGD-BG.pdf} \end{tabular} \caption{(a) Effective area (red solid) and inverse of MDP in arbitrary units (blue dashed) as a function of incident energy. (b) Background flux as a function of reconstructed energy.} \label{fig:SGD-performance} \end{figure} \begin{figure}[bth] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=5.7cm]{SGD-sensitivity_cont_point.pdf} \hspace*{0.5cm} & \includegraphics[height=5.7cm]{SGD-sensitivity_cont_extended.pdf} \end{tabular} \caption{3$\sigma$ sensitivity targets for the SXI, HXI and SGD in the ASTRO-H mission for continuum emissions from (a) point sources and (b) extended sources, assuming an observation time of 100~ks, and comparison with other hard X-ray and soft gamma-ray instruments.}\label{fig:sensitivity} \end{figure} The main in-orbit background components of the SGD at the expected orbit of ASTRO-H (an altitude of 550 km with an inclination angle of 31{${}^\circ$}) are expected to be activation induced during the SAA passages and elastic scattering of albedo neutrons. These background events can be heavily suppressed by a combination of the multi-layer low-$Z$/high-$Z$ sensor configuration, the active shield, and background rejection based on the Compton kinematics. The remaining background level is estimated to be much lower than that of any past instrument, as shown in Figure~\ref{fig:SGD-performance} (b). The neutron background (green dotted curve) is estimated by the simulation assuming the neutron spectrum described in Ref.~\citenum{Armstrong73}. The flux of the neutron background is scaled by a factor of two based on the background studies of the Suzaku hard X-ray detector \cite{Fukazawa09}.
The spectrum of the activation background (blue dashed curve) is estimated from experimental results on the radioactivities induced by mono-energetic protons \cite{Murakami03}. The flux is scaled by a rejection factor expected from the Compton kinematics constraints. The signal fluxes corresponding to 1/100 and 1/1000 of the Crab brightness are overlaid as black and orange dotted straight lines, respectively. This clearly illustrates that the expected background in the SGD varies from 1/1000 to 1/100 of the Crab brightness in the 50--400~keV band. Figure~\ref{fig:sensitivity} shows the 3$\sigma$ sensitivity for three instruments in the ASTRO-H mission, the SXI, HXI and SGD, for continuum emission from (a) point sources and (b) extended sources ($1{{}^\circ}\times 1{{}^\circ}$) with an observation time of 100~ks, and a comparison with other instruments. (The sensitivity depends on the bandwidth of each point and the observation time, and can be lower than the background level with sufficient statistics.) The SGD represents a great improvement in the soft gamma-ray band compared with the currently operating INTEGRAL\cite{INTEGRAL} or Suzaku HXD, and extends the bandpass to well above the cutoff of hard X-ray telescopes, which in turn allows us to study the high energy end of the particle spectrum. Combined with the SXI and the HXI on board ASTRO-H, the SGD achieves an unprecedented level of sensitivity from the soft X-ray to the soft gamma-ray band. \begin{figure}[bthp] \centering \begin{tabular}{ll} \vspace*{-0.cm} (a) & (b) \\ \includegraphics[height=6cm]{spec_1mCrab_AGN_joint.pdf} \hspace*{0.5cm}& \includegraphics[height=6cm]{CasA_100ks.pdf} \end{tabular} \caption{(a) HXI (black) and SGD (red) simulation results for a 100~ks observation of a source with 1/1000 of the Crab brightness and a power law index of 1.7.
(b) SGD simulation results for a 100~ks observation of bremsstrahlung emissions from Cas~A with three magnetic field hypotheses, 0.1~mG (black), 0.3~mG (red) and 1.0~mG (green).} \label{fig:SGD-science} \end{figure} \begin{figure}[bth] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=6cm]{Crab-pol.pdf} \hspace*{1.0cm} & \includegraphics[height=6cm]{SGD-MDP.pdf} \end{tabular} \caption{(a) Efficiency-corrected azimuth angle distribution of Compton scattering from a source with the brightness of the Crab and 100\% linear polarization in a 10~ks observation. (b) $3\sigma$ MDP as a function of observation time for sources with 1, 1/10 and 1/100 of the Crab brightness.} \label{fig:pol-performance} \end{figure} The simulation results shown in Figure~\ref{fig:SGD-science} (a) demonstrate that the spectral index can be measured to within 10\% error for a 100~ks observation of a source with 1/1000 of the Crab brightness in 2--10~keV and a power law index of 1.7, using the current SGD design parameters. Another type of SGD target is supernova remnants (SNRs), where we can study the nature of particle acceleration. Cas~A is one of the most promising SNRs for the SGD since a sizable non-thermal bremsstrahlung emission is expected in the SGD band. Figure~\ref{fig:SGD-science} (b) shows simulation results for the observation of non-thermal bremsstrahlung from the supernova remnant Cas~A, which confirm that the SGD can determine the magnetic field of Cas~A with a 100~ks observation. This measurement will have significant implications for the modeling of multi-wavelength observations, since a model with a leptonic origin predicts $B\approx0.12$~mG while a hadronic model prefers $B\approx0.5$~mG. The polarization signature of the incident gamma rays is detected through the modulation of the azimuth angle distribution of Compton scattering in the SGD, as shown in Figure~\ref{fig:pol-performance}~(a) for a 100\%-polarized source.
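The azimuthal modulation described above can be illustrated with a small sketch. For a modulated distribution $A\,[1+Q\cos 2(\phi-\chi_0)]$ on a uniform azimuth grid, $Q$ and $\chi_0$ follow from the first cosine/sine Fourier coefficients of the $2\phi$ harmonic. The binning, count level and the (noise-free) input are illustrative assumptions; the value $Q=0.567$ echoes the fitted modulation factor quoted in the text:

```python
import math

# Illustrative recovery of the modulation factor Q and polarization
# angle chi0 from a noise-free azimuthal count distribution of the
# form A*(1 + Q*cos 2(phi - chi0)).  Binning and counts are assumed.

def make_counts(n_bins, amplitude, q, chi0):
    phis = [2 * math.pi * k / n_bins for k in range(n_bins)]
    counts = [amplitude * (1 + q * math.cos(2 * (p - chi0))) for p in phis]
    return phis, counts

def fit_modulation(phis, counts):
    """Extract (Q, chi0) from the 2*phi Fourier coefficients."""
    total = sum(counts)
    c = sum(n * math.cos(2 * p) for n, p in zip(counts, phis))
    s = sum(n * math.sin(2 * p) for n, p in zip(counts, phis))
    q = 2 * math.hypot(c, s) / total
    chi0 = 0.5 * math.atan2(s, c)
    return q, chi0

phis, counts = make_counts(36, 1000.0, 0.567, math.radians(30))
q, chi0 = fit_modulation(phis, counts)
print(round(q, 3), round(math.degrees(chi0), 1))   # 0.567 30.0
```

In a real analysis the counts are Poisson-distributed and the fit is performed on the efficiency-corrected distribution, so the extracted $Q$ carries a statistical error like the $\pm 1.0\%$ quoted in the text.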
A fit to $\mathrm{AVG}\,[1+Q\cos2(\phi-\chi_0)]$, where $\mathrm{AVG}$ is the average level, yields $Q=56.7\pm1.0$\%, where $Q$ is the modulation factor, which is proportional to the polarization degree, and $\chi_0$ is the angle of the polarization vector. Using the modulation factor obtained here and the background level described above, we can calculate the MDP (minimum detectable polarization) analytically, assuming no systematic effects from uneven backgrounds or uncertainties in the detector response. Figure~\ref{fig:pol-performance}~(b) shows the $3\sigma$ MDP as a function of the observation time for sources with 1, 1/10 and 1/100 of the Crab brightness, which can be parametrized as $3.5\%\sqrt{10^4/t_\mathrm{obs}}$, $3.6\%\sqrt{10^5/t_\mathrm{obs}}$ and $4.3\%\sqrt{10^6/t_\mathrm{obs}}$, respectively, where $t_\mathrm{obs}$ is the observation time in seconds. We can conclude that the SGD can detect polarization from sources down to a few$\times1/100$ of the Crab brightness with a polarization degree of several \% in a few$\times100$~ks of observation time. \section{Summary} The Soft Gamma-ray Detector (SGD) onboard the next Japanese X-ray astronomy satellite ASTRO-H is designed to measure spectra of celestial sources with $>$1/1000 of the Crab brightness in the 40--600~keV energy band, which is the highest end of the ASTRO-H energy coverage. The sensitivity of the SGD presents more than an order of magnitude improvement in the soft gamma-ray band as compared with the currently operating INTEGRAL or Suzaku HXD instruments. Combined with the soft and hard X-ray imagers (SXI and HXI) on board ASTRO-H, the SGD achieves an unprecedented level of sensitivity from the soft X-ray to the soft gamma-ray band. A key to achieving such sensitivity is the low background, realized by surrounding the Compton camera with the BGO active shield and requiring the incoming photon angle constrained by Compton kinematics to be consistent with the narrow field of view of the active shield and the passive collimator.
The SGD is also capable of measuring the polarization of celestial sources brighter than a few $\times 1/100$ of the Crab Nebula if they are polarized above $\sim$10\%. This capability is expected to yield polarization measurements of several celestial objects, providing new insights into the properties of soft gamma-ray emission processes. The combination of low-$Z$ (Si) and high-$Z$ (CdTe) sensors allows us to employ the appropriate sensor material at each stage, lowering the energy threshold and minimizing Doppler broadening while maximizing the absorption efficiency for the scattered photons. The low-$Z$/high-$Z$ arrangement also suppresses contributions from neutron and activation backgrounds. The SGD successfully completed its preliminary design review in May 2010 and is currently in the detailed design phase. ASTRO-H is expected to be launched in early 2014.
{ "redpajama_set_name": "RedPajamaArXiv" }
388
1 week ago Yeh Rishta Kya Kehlata Hai 23 Views Watch Yeh Rishta Kya Kehlata Hai 21th January 2023 Video Episode 3970 of Star Plus in HD Video. Yeh Rishta Kya Kehlata Hai Drama Serial Today Episode 3970 by Watch Video Online Yeh Rishta Kya Kehlata Hai 21th Jan 2023 By HotStar Stay to Connect to US for More Episodes Previous Watch Ghum Hai Kisi Ke Pyar Mein 21th January 2023 Next Watch Anupama 21th January 2023 Watch Yeh Rishta Kya Kehlata Hai 19th January 2023 Video Episode 3968 of Star Plus in …
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,129
You are here: Home > Maritime companies > Azamara Cruises - United States Shipping to splash out on ammonia in green bid A shipbuilder and engine maker are among leading companies looking to develop a vessel that can run on ammonia as part of... MAIB: CMA CGM G. Washington Lost 137 Containers after 20-Degree Rolls Two years ago, the UK-flagged containership CMA CGM G. Washington experienced 20° rolls in the... Unprecedented number of crew kidnappings in the Gulf of Guinea despite drop in overall global numbers Despite overall piracy incidents declining in 2019, there was an alarming increase in crew kidnappings across the Gulf of Guinea, according to the... Two vessels caught in China for IMO 2020 violations The Chinese authorities have caught two vessels for low sulphur fuel violations, the first... Azamara Cruises - United States Azamara is the new, deluxe cruise experience for discerning travelers who long to reach out-of-the-ordinary destinations and indulge in amenities and service unparalleled on the high seas. The unique offerings of Azamara are beyond compare: butler service is provided in every stateroom; our shore excursions (we prefer to call them shore immersions), are designed to let guests become part of the fabric of life in each destination, instead of merely being an observer; our enrichment programs offer everything from culinary to photographic explorations; our two specialty restaurants provide the finest cuisine at sea; live entertainment can be enjoyed nightly; and the level of service offered is unmatched. Azamara Club Cruises consists of two intimate ships, Azamara Journey and Azamara Quest. Each can carry 694 fortunate guests to discover the hidden corners of the world that larger cruise ships simply cannot reach. 
In addition to offering a more personalized experience, our ships have recently undergone a $17.5 million revitalization with new European bedding and soft goods, flat screen televisions, new veranda decking and furniture, and wireless internet service in all staterooms and public areas. Plus, 93% of our staterooms offer ocean views and 68% have a private veranda. You may be wondering where the name Azamara comes from. Azamara is a coined term derived from the Romance languages. This includes the more obvious links to blue (az) and the sea (mar). The name was also inspired by a star, Acamar. In classical times, the star Acamar was the most southerly bright star that could be seen from the latitude of Greece. We think of Azamara Club Cruises as a star on the blue sea. We love the flowing name that conjures up the imagery of magnificent journeys around the world. And we look forward to sharing these voyages with our guests. Smaller ships - Out of the Ordinary Destinations Unmatched Amenities - this is Azamara Club Cruises While each of our ships is well-appointed with amenities beyond compare, they are also comfortable, and provide warm and inviting atmosphere in which to enjoy your every indulgence. Azamara Cruises is proud to introduce you to our family of intimate ships: Azamara Journey and Azamara Quest. While each of our ships is well-appointed with amenities beyond compare, they are also thoughtfully designed, and provide you with the ideal atmosphere in which to thoroughly enjoy your every indulgence. Now, without further ado, meet our fine fleet: Azamara Journey  Fortunate are the 694 guests who embark on a voyage with Journey. Fewer guests mean your experience is a more intimate one, and that definitely has its advantages. Discover for yourself how uncrowded and unhurried a voyage can be. We have 17 active job ads listed.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,377
Strong Plastic Shredder Low-speed Dual Shaft Shredder Plastic Bottle & Film & Bag Crusher Claw Cutter Crusher Flat Cutter Crusher Medium Speed Crusher Slow Mute Crusher Heavy-duty Crusher Pipe Crusher Recycling unit system. The Jinhengli HG-10 series of low-speed recycling granulators is suitable for crushing materials in low volumes where there is a high requirement on noise reduction. It is also suitable for crushing hard or tough materials such as PC, PBT, nylon and POM. The PJB-series low-speed granulators are suitable for by-the-press recycling of sprues. They feature easy operation, excellent performance, and low noise and dust levels. They are of "Euro"-style design and compact in size. Slow Speed Crusher: *Low noise and low speed for a low-decibel environment. *Special tools are made of steel and vacuum heat treated to increase operating life, and the unit can be installed beside the molding machines. Low-speed rotation is especially suited to recycling all kinds of blow molded products and scrap materials. • Stable function, low noise and low dust contamination, with high crushing power and good results. Low speed plastic crusher: buy a low speed plastic crusher at a wholesale factory price from TCM SHREDDER, a Chinese supplier and manufacturer of recycling shredding machines. Plastic Crusher or Shredder. Low Speed Granulator or Plastic Crusher features: 1. Easy to operate, low electricity consumption, durable and low noise. Noise-proof low speed crusher (Shanghai). Time saving: recycling in 30 seconds. Higher quality: no reabsorption of moisture and no oxidation, as recycling happens while the runner is still hot. In 1989, Ronald and current owner Jack Cress uncovered a need for more reliable industrial low speed single shaft grinders and shredders in the recycling industry, which inspired the formation of Cresswood Recycling Equipment (currently known as Cresswood Shredding Machinery). 
Low-speed 2-shaft crusher / waste circuit board shredder. The feeding materials can be: 1. Bicycles, motorcycles, car shells, waste metal. 2. Cans, paint buckets, iron sheet, color steel tile. Plastic Granulator Shredder Equipment, Single and Twin Shaft Shredder Equipment, Waste Plastic Crusher, Plastic Raw Material Mixer, Plastic Edge Recycling System, Low Speed Scrap Granulator. Even low-speed shredders can produce flying fragments. Brittle metals and plastics can break apart violently under pressure, and this can result in various projectiles with the potential to do great harm to employees and surrounding structures or equipment. Dual shaft low-speed crusher: this product is a plastic crusher without a sieve; its low-speed operation is specially designed to crush all kinds of materials. Product Description. Low Speed Crusher For Recycling: the rotor centrifugal crusher (type RSMX) is a high-performance vertical-shaft crusher for throughput rates of 30 to 400 t/h.
{ "redpajama_set_name": "RedPajamaC4" }
6,727
Supreme Court of India Gives Government Two Weeks to Decide on Crypto Legality Oct 27, 2018, 5:52PM Regulation by Rahul Nambiampurath The Supreme Court of India has asked the government to file an affidavit on the legality of crypto in the country within two weeks. After numerous adjournments, the Supreme Court of India finally heard the petitions filed by cryptocurrency exchanges and businesses against the country's financial regulator. On April 6, 2018, the Reserve Bank of India directed all regulated financial institutions to cease banking relationships with crypto-affiliated companies. The move eventually went on to cripple all fiat-crypto corridors in the country and dropped trading volume to a fraction of previous amounts. Realizing this, a number of businesses challenged the central bank's authority on the matter, even going as far as calling the move 'unconstitutional'. According to a report by The Economic Times, Supreme Court Justices Rohinton Fali Nariman and Navin Sinha brought up the case on October 25. After listening to both parties, the Justices asked the government to submit a report on the matter within two weeks. Nischal Shetty, Founder and CEO of local cryptocurrency exchange WazirX, was able to shed some more light on the matter. He said in a tweet, [The] Supreme court has asked Govt. to file an affidavit related to the findings of the crypto committee set up by them. They're supposed to submit this within 2 weeks. Nakul Dewan, the counsel representing petitions of nine cryptocurrency exchanges, asserted that the RBI ban had effectively brought crypto trade and commerce to a standstill. Notably, the Indian government has not declared an outright ban on cryptocurrency so far. 
RBI's counsel Shyam Divan, on the other hand, said that the move only sought to "discourage the use of cryptocurrencies" in the country, until the government reached a decision on its legality. The RBI has historically been skeptical of the digital currency asset class. In 2013, it published a circular that "caution[ed] users of Virtual Currencies against Risks". The press release stated that virtual currencies had no underlying asset, no legal status and little provision for regulation. It also brought up concerns over money laundering and terrorism financing. Three years later, it released another circular to "reiterate the concerns conveyed in the earlier press releases."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,652
{"url":"https:\/\/cran.stat.auckland.ac.nz\/web\/packages\/GGMncv\/vignettes\/sign_restrict.html","text":"# Introduction\n\nFor some research questions, there might be expectations in regards to the directions of the edges. For example, in symptom networks, all relations are often hypothesized to be positive (i.e., positive manifold). In turn, any negative relations are thought to be spurious.\n\nIn GGMncv, it is possible to estimate the conditional dependence structure, given that all edges in the graph are positive (sign restriction).\n\n## Packages\n\nlibrary(GGMncv)\nlibrary(corrplot)\n\n## Correlations\n\nThe ptsd dataset includes 20 post-traumatic stress symptoms. The following visualizes the correlation matrix:\n\nNotice that all of the correlations are positive.\n\n## Partial Correlations\n\nHere are the partial correlations:\n\npcors <- -cov2cor(solve(cor(ptsd))) + diag(ncol(ptsd))\n\ncorrplot::corrplot(pcors,\n\nNotice that some relations went to essentially zero (white), whereas other changed direction altogether.\n\n## GGM\n\nHere the conditional dependence structure is selected via ggmncv:\n\n# fit model\nfit <- GGMncv::ggmncv(cor(ptsd),\nn = nrow(ptsd),\nprogress = FALSE,\npenalty = \"atan\")\n\n# plot graph\nplot(GGMncv::get_graph(fit),\nedge_magnify = 10,\nnode_names = colnames(ptsd))\n\nNotice a few negatives are included in the graph.\n\n## Sign Restriction\n\nHere the graph is re-estimated, with the constraint that all of negative edges in the above plot are actually zero.\n\n# set negatives to zero (sign restriction)\nadj_new <- ifelse(fit$P <= 0, 0, 1) check_zeros <- TRUE # track trys iter <- 0 # iterate until all positive while(check_zeros){ iter <- iter + 1 fit_new <- GGMncv::constrained(cor(ptsd), adj = adj_new) check_zeros <- any(fit_new$wadj < 0)\nadj_new <- ifelse(fit_new$wadj <= 0, 0, 1) } # make graph object new_graph <- list(P = fit_new$wadj,\nclass(new_graph) <- \"graph\"\n\n# plot graph\nplot(new_graph,\nedge_magnify = 
10,\nnode_names = colnames(ptsd))\n\nThe graph now only includes positive edges. Note this is not the same as simply removing the negative relations, as, in this case, this is the maximum likelihood estimate for the inverse covariance matrix.\n\nNote also new_graph is making the graph class so that it can be plotted with plot.\n\n## Alternative Approach\n\nThe above essentially takes the selected graph, and then re-estimates it with the constraint that the negative edges are zero. Perhaps a more sophisticated approach is to select the graph with those constraints.\n\nThis can be implemented with:\n\nR <- cor(ptsd)\nn <- nrow(ptsd)\np <- ncol(ptsd)\n\n# store fitted models\nfit <- ggmncv(R = R,\nn = n,\nprogress = FALSE,\nstore = TRUE,\nn_lambda = 50)\n\n# all fitted models\n# sol: solution\nsol_path <- fit$fitted_models # storage bics <- NA Thetas <- list() for(i in seq_along(sol_path)){ # positive in wi is a negative partial adj_new <- ifelse(sol_path[[i]]$wi >= 0, 0, 1)\n\ncheck_zeros <- TRUE\n\n# track trys\niter <- 0\n\n# iterate until all positive\nwhile(check_zeros){\niter <- iter + 1\ncheck_zeros <- any(fit_new$wadj < 0) adj_new <- ifelse(fit_new$wadj <= 0, 0, 1)\n}\n\nbics[i] <- GGMncv:::gic_helper(\nTheta = fit_new$Theta, R = R, n = n, p = p, type = \"bic\", edges = sum(fit_new$Theta[upper.tri(fit_new$Theta)] != 0) ) Thetas[[i]] <- fit_new$Theta\n}\n\n# select via minimizing bic\n# (then convert to partial correlatons)\npcors <- -(cov2cor(Thetas[[which.min(bics)]]) - diag(p))\n\n# make graph class\nnew_graph <- list(P = pcors,\nadj = ifelse(pcors == 0, 0, 1))\nclass(new_graph) <- \"graph\"\n\n# plot graph\nplot(new_graph,\nedge_magnify = 10,\nnode_names = colnames(ptsd))","date":"2022-07-07 04:40:13","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, 
\"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4307827055454254, \"perplexity\": 7577.503093976245}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104683683.99\/warc\/CC-MAIN-20220707033101-20220707063101-00267.warc.gz\"}"}
null
null
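The GGMncv vignette record above computes partial correlations from an inverted correlation matrix (the R idiom `-cov2cor(solve(R)) + diag(p)`) and then iterates, zeroing out negative edges until only positive ones remain. As a language-neutral sketch of that core arithmetic, here is a NumPy version; the function name and the final masking step are illustrative assumptions only, since the vignette's actual refit uses `GGMncv::constrained()`, a constrained maximum-likelihood fit rather than a simple mask:

```python
import numpy as np

def partial_correlations(R):
    """Partial correlations from a correlation matrix R.

    Mirrors the R idiom  -cov2cor(solve(R)) + diag(p):
    invert R, rescale to a unit diagonal, negate the
    off-diagonal entries, and zero the diagonal.
    """
    P = np.linalg.inv(R)               # precision matrix
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)           # -cov2cor(P)
    np.fill_diagonal(pc, 0.0)          # the vignette zeroes the diagonal
    return pc

# Illustrative sign restriction: keep only positive edges.
# NOTE: masking negatives is NOT the constrained MLE computed by
# GGMncv::constrained(); it only reproduces the adjacency-building step.
R = np.array([[1.00, 0.50, 0.25],
              [0.50, 1.00, 0.40],
              [0.25, 0.40, 1.00]])
pc = partial_correlations(R)
adj = (pc > 0).astype(int)             # analogue of adj_new in the vignette
```

For two variables the partial correlation reduces to the ordinary correlation, which gives a quick sanity check on the inversion-and-rescale step.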
<?php

namespace backend\models\filter;

use Yii;
use yii\behaviors\AttributeBehavior;
use yii\db\Expression;
use yii\db\ActiveRecord;
use common\models\Base;
use backend\models\User;

/**
 * This is the model class for table "filter_user_rel".
 *
 * @property integer $user_id
 * @property integer $filter_id
 * @property string $lastview_at
 *
 * @property Filter $filter
 * @property User $user
 */
class FilterUserRel extends Base
{
    public function behaviors()
    {
        return array_merge([
            [
                'class' => AttributeBehavior::className(),
                'attributes' => [
                    ActiveRecord::EVENT_BEFORE_INSERT => ['lastview_at'],
                    ActiveRecord::EVENT_BEFORE_UPDATE => ['lastview_at'],
                ],
                'value' => function ($event) {
                    return new Expression('NOW()');
                },
            ],
        ], parent::behaviors());
    }

    /**
     * @inheritdoc
     */
    public static function tableName()
    {
        return 'filter_user_rel';
    }

    /**
     * @inheritdoc
     */
    public function rules()
    {
        return [
            [['user_id', 'filter_id'], 'required'],
            [['user_id', 'filter_id'], 'integer'],
            [['lastview_at'], 'safe'],
            [['filter_id'], 'exist', 'skipOnError' => true, 'targetClass' => Filter::className(), 'targetAttribute' => ['filter_id' => 'id']],
            [['user_id'], 'exist', 'skipOnError' => true, 'targetClass' => User::className(), 'targetAttribute' => ['user_id' => 'userid']],
        ];
    }

    /**
     * @inheritdoc
     */
    public function attributeLabels()
    {
        return [
            'user_id' => Yii::t('backend', 'User ID'),
            'filter_id' => Yii::t('backend', 'Filter ID'),
            'lastview_at' => Yii::t('backend', 'Lastview At'),
        ];
    }

    /**
     * @return \yii\db\ActiveQuery
     */
    public function getFilter()
    {
        return $this->hasOne(Filter::className(), ['id' => 'filter_id']);
    }

    /**
     * @return \yii\db\ActiveQuery
     */
    public function getUser()
    {
        return $this->hasOne(User::className(), ['userid' => 'user_id']);
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,060
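The Yii model above uses an AttributeBehavior to stamp `lastview_at` with `NOW()` before every insert and update of the user-filter link row. As a language-neutral illustration of that pattern (the class below is a hypothetical toy stand-in, not part of the PHP application; real persistence is elided), the same idea looks like this in Python:

```python
from datetime import datetime, timezone

class FilterUserRelSketch:
    """Toy analogue of the Yii ActiveRecord model: a user<->filter
    link row whose lastview_at is refreshed on every save."""

    def __init__(self, user_id: int, filter_id: int):
        self.user_id = user_id
        self.filter_id = filter_id
        self.lastview_at = None  # not stamped until first save

    def save(self):
        # Analogue of AttributeBehavior firing on
        # EVENT_BEFORE_INSERT / EVENT_BEFORE_UPDATE with NOW():
        # the timestamp is set centrally, so callers cannot forget it.
        self.lastview_at = datetime.now(timezone.utc)
        # ...a real implementation would persist the row here...

rel = FilterUserRelSketch(user_id=1, filter_id=42)
rel.save()  # lastview_at is now set
```

Centralizing the timestamp in the save path, as the behavior does, keeps the "last viewed" bookkeeping out of every call site.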
What is being characterized by Federal law enforcement as the largest college admissions scandal in U.S. history began with the U.S. Justice Department charging 50 individuals with Federal crimes on March 12, 2019. The 200-page criminal complaint that accompanied those charges identified 58-year-old Californian Rick Singer as the ringleader of a vast conspiracy to admit academically and athletically unqualified children of wealthy parents to elite universities by taking advantage of the lower academic entrance requirements afforded collegiate athletes. One of the nine NCAA coaches nabbed in Singer's criminal enterprise was University of Texas head tennis coach Michael Center, who was heard on an FBI wiretap confirming to Singer that he had received nearly $100,000 from the ringleader, "in exchange for which Center would designate a student as a recruit to the (UT) tennis team, thereby facilitating his admission to (UT)." Details of the enormous bribes provided to Singer and Center by the parent of the student who ultimately gained admission to the University of Texas were also included in the complaint, but unlike dozens of other parents caught conspiring with Singer in similar criminal complaints, the parent in the case involving Center was not named – let alone indicted. In reporting the news of the charges against Center, the DAILY TEXAN reported last week: Those within the University investigating the incident believe it is isolated and does not involve any other employees. While that may indeed be the case, a testimonial and image that SportsbyBrooks.com has obtained from Singer's now-deleted website may prove otherwise. On March 12, 2019, the day the U.S. Justice Department first announced the case against Singer and 49 other defendants, a testimonial from University of Texas student Michael Chiu-Schaepe could be seen on the front page of a website owned and operated by Rick Singer. 
Accompanying the text of the testimonial was a photo of Chiu-Schaepe alongside former University of Texas basketball player and NBA star Kevin Durant. The text reads as follows: I wanted to thank you personally for all your help in getting me into the University of Texas in Austin, and for helping me secure a managers position with the UT basketball team. And, can you believe it, here is a picture of me with basketball star, Kevin Durrant at the UT Summer Basketball Camp. Michael Chui (sic) Schaepe The following passage is contained in the aforementioned Federal complaint against University of Texas tennis coach Center: Documents reviewed by the complaint's FBI agent show that the student's University application listed him as a manager of his high school football and basketball teams. The University of Texas basketball media guide listed Chiu-Schaepe, whose father is mega-wealthy venture capitalist Christopher Schaepe, as having been a team manager for the 2015-16 season – which was current Texas head basketball coach Shaka Smart's first season with the Longhorns. In addition to Michael Chiu-Schaepe's connection to the school, the record of a $100,000 donation to the University of Texas Moody College of Communications from Jennie Chiu and Christopher Schaepe can be found on the University's official website. As part of our ongoing coverage of the subject, SportsbyBrooks.com has also previously documented the connection between ringleader Singer and: a starting football player for the Cal and the University of Arizona football teams; current NFL head coach Pete Carroll; NFL legend Joe Montana; NBA superagent Bill Duffy; NBA owner Mark Mastrov; and ex-NFL owner Chip Rosenbloom. New, exclusive reports on the largest college admissions scandal in the history of the United States will be posted on SportsbyBrooks.com in coming days.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,960
{"url":"https:\/\/cob.silverchair.com\/jeb\/article\/206\/18\/3159\/13874\/The-effect-of-colour-vision-status-on-the","text":"The evolution of trichromatic colour vision by the majority of anthropoid primates has been linked to the efficient detection and selection of food,particularly ripe fruits among leaves in dappled light. Modelling of visual signals has shown that trichromats should be more efficient than dichromats at distinguishing both fruits from leaves and ripe from unripe fruits. This prediction is tested in a controlled captive setting using stimuli recreated from those actually encountered by wild tamarins (Saguinus spp.). Dietary data and reflectance spectra of Abuta fluminum fruits eaten by wild saddleback (Saguinus fuscicollis) and moustached(Saguinus mystax) tamarins and their associated leaves were collected in Peru. A. fluminum leaves, and fruits in three stages of ripeness,were reproduced and presented to captive saddleback and red-bellied tamarins(Saguinus labiatus). Trichromats were quicker to learn the task and were more efficient at selecting ripe fruits than were dichromats. This is the first time that a trichromatic foraging advantage has been demonstrated for monkeys using naturalistic stimuli with the same chromatic properties as those encountered by wild animals.\n\nAs an order, primates are among the most frugivorous of mammals. Indeed,with the exception of tarsiers (Tarsius spp.), all primate species have been recorded to eat fruit, and many eat it in large quantities(Richard, 1985); it even accounts for 25\u201350% of the diet of folivorous' species such as howler monkeys (Alouatta seniculus;Guillotin et al., 1994;Julliot, 1994). Whilst some species are specialized seed predators, the majority of primates act as dispersers for the species that they consume. Indeed, primate-mediated endozoochory may be the primary method of dispersal for many tropical plant species (Julliot, 1994). 
Given the importance of fruit to primates, and of primates to plant species in their dispersal, co-evolution has produced a suite of associated characteristics on both sides of this relationship. Trichromatic colour vision and the colour changes shown by fruits during maturation may be examples of such co-evolved characters.\n\nWithin placental mammals, trichromacy is unique amongst primates: all other species so far examined are either dichromats or monochromats(Jacobs, 1993;Ahnelt and Kolb, 2000;Arrese et al., 2002). It has been hypothesized that the evolution of trichromatic colour vision by the majority of primate species is a direct result of the chromatic signals produced by fruits (Regan et al.,2001) or leaves (Dominy and Lucas, 2001). For an animal to feed on fruits it has first to detect them against a background of leaves. Vision and olfaction are probably the principal senses employed. Theoretically, trichromacy has been predicted to be more efficient than dichromacy when detecting and identifying fruits against a leaf background (Osorio and Vorobyev, 1996; Sumner and Mollon, 2000a; Regan et al.,2001). In addition to detecting fruiting trees, an animal has to select ripe from unripe fruits. Physical and chemical defences may protect fruits until their seeds are ready to be dispersed. The ripening process is often characterized by a colour change that can give a clear visual signal to potential dispersers of the increased palatability of the ripe fruits(Regan et al., 2001). Theoretically, trichromats have also been predicted to be capable of distinguishing a greater number of ripe from unripe fruit species(Sumner and Mollon, 2000b;Regan et al., 2001).\n\nDespite its theoretical advantages, trichromacy is not uniform within the primates. Whilst all catarrhines so far studied are trichromatic, all platyrrhines, with the two exceptions of howler (Alouatta spp.\u2013 uniformly trichromatic; Jacobs et al., 1996a) and night monkeys (Aotus spp. 
\u2013uniformly monochromatic; Jacobs et al.,1996b; Jacobs,1984; Mollon et al.,1984), and some strepsirhines(Tan and Li, 1999;Jacobs et al., 2002) have a polymorphic colour vision system. All males and homozygous females are dichromats, whilst heterozygous females are trichromats. In platyrrhines, two loci code for the visual pigment proteins or opsins. The first, an autosomal locus, has a single allele that codes for the short wavelength (S) opsin and is common to all individuals. The second, on the X chromosome, codes for opsins within the long to medium wavelength (LM) range. A single X-linked locus model, with three alleles, explains the visual polymorphism observed in callitrichids (Mollon et al.,1984).\n\nFor non-human species it is necessary to take account of the animal's perceptual abilities. Thus, we should not relate our verbal classification of colours to colour discriminability or memorability for another species; even one with the same set of photopigments. A good starting point for understanding how other species might discriminate colours is to measure spectral stimuli and estimate the responses of their photoreceptors(Table 1).\n\nTable 1.\n\nSex and visual status of experimental animals\n\nSpeciesID #SexVisual statusOpsins (nm)\nSaddleback\u00a02422\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\ntamarin\u00a03894\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n3948\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n2214\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n1045\u00a0Female\u00a0Dichromat\u00a0423, 543\n3895\u00a0Male\u00a0Dichromat\u00a0423, 543\n989\u00a0Male\u00a0Dichromat\u00a0423, 563\n2365\u00a0Male\u00a0Dichromat\u00a0423, 563\nRed-bellied\u00a03782\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\ntamarin\u00a03873\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n2972\u00a0Female\u00a0Dichromat\u00a0423, 563\n2666\u00a0Female\u00a0Dichromat\u00a0423, 563\n657\u00a0Female\u00a0Dichromat\u00a0423, 
563\n874\u00a0Male\u00a0Dichromat\u00a0423, 543\n3201\u00a0Male\u00a0Dichromat\u00a0423, 563\n3874\u00a0Male\u00a0Dichromat\u00a0423, 563\nSpeciesID #SexVisual statusOpsins (nm)\nSaddleback\u00a02422\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\ntamarin\u00a03894\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n3948\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n2214\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n1045\u00a0Female\u00a0Dichromat\u00a0423, 543\n3895\u00a0Male\u00a0Dichromat\u00a0423, 543\n989\u00a0Male\u00a0Dichromat\u00a0423, 563\n2365\u00a0Male\u00a0Dichromat\u00a0423, 563\nRed-bellied\u00a03782\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\ntamarin\u00a03873\u00a0Female\u00a0Trichromat\u00a0423, 543, 563\n2972\u00a0Female\u00a0Dichromat\u00a0423, 563\n2666\u00a0Female\u00a0Dichromat\u00a0423, 563\n657\u00a0Female\u00a0Dichromat\u00a0423, 563\n874\u00a0Male\u00a0Dichromat\u00a0423, 543\n3201\u00a0Male\u00a0Dichromat\u00a0423, 563\n3874\u00a0Male\u00a0Dichromat\u00a0423, 563\n\nThe perceptual capabilities of various primate visual systems have been modelled to examine the potential advantages of trichromacy in detecting ripe fruits (e.g. Osorio and Vorobyev,1996; Sumner and Mollon,2000a,b;Regan et al., 2001) or flush leaves (Dominy and Lucas,2001). The most pertinent stimuli for such modelling are those actually seen by the visual system of the primate in question in the wild. However, these models make (varying) assumptions about how photoreceptor signals are used to make behavioural decisions (e.g.Vorobyev and Osorio, 1998). For any given perceptual task we cannot be sure that model assumptions will hold. To examine whether an actual foraging advantage is conferred by trichromacy, the relative performance of actual subjects must be measured. 
For example, Caine and Mundy (2000)used artificially coloured food to show a trichromatic advantage for Geoffroy's marmosets (Callithrix geoffroyi) in a foraging task.\n\nWhilst modelling and behavioural experiments imply that trichromacy is advantageous, this has yet to be demonstrated for a colour discrimination task that closely resembles that faced by primates foraging in their natural habitat. This is the goal of the present study. The relative efficiency of di-and trichromacy for tamarins (Saguinus spp.) is evaluated through an experimental protocol utilising captive monkeys and stimuli recreated from the reflectance spectra of actual fruits eaten (and their associated leaves) by wild tamarins in Peru and presented in a dappled naturalistic leaf canopy.\n\n### Field observations\n\n#### Field site and monkeys\n\nTwo mixed-species groups of saddleback (Saguinus fuscicollis nigrifrons I. Geoffroy 1850) (N=4 and 8 individuals) and moustached (Saguinus mystax mystax Spix 1823) tamarins (N=5 and 8 individuals) were observed (by A.C.S.) for 164 days (1612 h) from January 2000 until December 2000 at the Estacio\u0301n Biolo\u0301gica Quebrada Blanco II (4\u00b021\u2032 S, 73\u00b009\u2032 W) in northeastern Peru (for details, see Heymann and Hartmann, 1991). The tamarins were observed for approximately 14 days each month.\n\n#### Data collection and analysis\n\nAll observed instances of fruit feeding were recorded. From these data, the number of tamarin feeding minutes' was calculated (where one tamarin feeding minute' equals one tamarin feeding for 1 min) and divided by the number of tamarins of the given species to account for differences in group size between groups, and species, and over the course of the study. 
Furthermore, each month's data were weighted equally to account for slight differences in the number of observation days.\n\n#### Colour measurement\n\nColour measurements were taken using a portable S2000 spectrometer, HL2000 halogen light source (both Ocean Optics, Dunedin, FL, USA) and Satellite 4030CDT laptop computer (Toshiba) running SpectraWin 4.1 software (Top Sensor Systems, Eerbeek, The Netherlands). Reflectance spectra from a minimum of three fruits and three associated mature leaves were recorded for each species eaten. Where possible, spectra were recorded from parts of fruits discarded by tamarins as they fed and taken from both the upper and lower surfaces of leaf samples. Spectra were recorded on the day that the samples were collected.\n\n### Colour modelling\n\nWe estimated the responses of the tamarin's photoreceptors, and hence colour signals to spectral stimuli, as follows. We derived tamarin photoreceptor spectral sensitivities in vivo by fitting a standard exponential model of rhodopsin absorption(Stavenga et al., 1993) to spectral sensitivity maxima measured for common marmoset (Callithrix jacchus) cones with sensitivity maxima at 425 nm, 543 nm, 555 nm and 562 nm (Williams et al., 1992),which are close to those for Saguinus(Jacobs et al., 1987) assuming a maximum optical density of 0.4. Spectral absorption by the ocular media was also based on the common marmoset(Tove\u0301e et al., 1992). Recent work (Kawamura et al.,2001) lowers the estimated sensitivity maximum of the common marmoset 543 nm receptor to 539 nm; this difference is of negligible significance for the design and interpretation of our study.\n\nSpectral stimuli reaching the eye depend upon the reflectance and illumination spectra. Reflectance was measured as described above, and the illumination spectrum was natural sunlight measured by a spectroradiometer calibrated with a known standard (LS1-cal; Ocean Optics). 
For an eye viewing the surface of an object, the (relative) quantum catch of the receptor i (Qi) is given by the following expression:\n$\\ Q_{i}={{\\int}_{{\\lambda}_{\\mathrm{min}}}^{{\\lambda}_{\\mathrm{max}}}}R_{\\mathrm{i}}({\\lambda})S({\\lambda})I({\\lambda})\\mathrm{d}{\\lambda},$\n1\nwhere \u03bb denotes wavelength, \u03bbmin and\u03bb max denote the lower and upper limits of the visible spectrum, respectively, Ri(\u03bb) is the spectral sensitivity of receptor i, S(\u03bb) is the reflectance spectrum and I(\u03bb) is the illumination spectrum. The receptor response normalised to the illuminant qi is then given by: qi=Qi(t)\/Qi(i),where Qi(t) and Qi(i) are estimated quantum catches for a target and the barium sulphate reflectance standard,respectively. Finally, stimulus chromaticities(Fig. 1) were given by Macleod and Boynton (1979)chromaticity coordinates based on outputs of marmoset 425 nm (S), 543 nm (M)and 562 nm (L) cone photoreceptors (see alsoRegan et al., 1998). The Cartesion coordinates are given by S\/(L+M) and L\/(L+M), which is convenient because S\/(L+M) roughly represents the blue\u2013yellow chromatic signal available to a dichromat, while the red\u2013green parameter, L\/(L+M), is available only to trichromats. Although the colours used for the experiments did not exactly match those of the plant(Fig. 1), the chromaticity differences between the leaf background and fully ripe fruit were very similar for the real and experimental colours, with the unripe' and `mid-ripe' model fruit lying at intermediate locations on the red\u2013green axis.\nFig. 1.\n\nChromaticities of natural Abuta fluminum leaves and fruit and of the model colours used in this experiment, plotted in a standard chromaticity diagram modified for the common marmoset eye (see text;Macleod and Boynton, 1979;Regan et al., 1998). Colour differences on the horizontal axis are visible only to trichromats. 
Note that distance in this diagram does not directly predict colour discriminability. For example, in general, a given colour distance on the vertical axis will be less discriminable than on the horizontal. L, leaf; U, unripe; M, mid-ripe; R,ripe.\n\nFig. 1.\n\nChromaticities of natural Abuta fluminum leaves and fruit and of the model colours used in this experiment, plotted in a standard chromaticity diagram modified for the common marmoset eye (see text;Macleod and Boynton, 1979;Regan et al., 1998). Colour differences on the horizontal axis are visible only to trichromats. Note that distance in this diagram does not directly predict colour discriminability. For example, in general, a given colour distance on the vertical axis will be less discriminable than on the horizontal. L, leaf; U, unripe; M, mid-ripe; R,ripe.\n\n### Diet composition and choice of representative fruit species\n\nThe tamarins ate fruits from 833 plants from 167 species in 87 genera and 50 families during 164 days of observation. Abuta was chosen as representative of ripe fruit eaten by tamarins for which trichromatic colour vision may give an advantage in the detection and selection. It formed a significant part of the diet of both species in both groups. It was eaten in all months but two; no other genus was eaten in as many months. It was chosen over other important genera (i.e. Parkia, Tapirira, Pourouma, Buchenavia,Unonopsis and Simaba), as these genera typically ripened to a dark purple or black colour for which trichromacy has little benefit, and over Inga, as the bean-like fruit pods of many species of this genus may be deemed cryptic since they remain green even when mature. Six species of Abuta were eaten by the tamarins: A. arborea, A. fluminum, A. imene, A. pahni, A. rufescens and A. solimoensis. Of these, A. fluminum was chosen as representative as it accounted for the greatest number of feeding records. Fig. 2 shows the reflectance spectra of ripe and unripe A. 
fluminum fruit and leaves (upper surface).

Fig. 2. Reflectance spectra of ripe and unripe A. fluminum fruit and leaves (upper surface).

The fruits and leaves of A. fluminum occupy roughly mid positions on the L/(L+M) axis (the red–green parameter available only to trichromats) of all the species sampled. Of the ripe fruits sampled, those of A. fluminum have a value of 0.5474±0.0052 (N=12 fruits), from a range spanning 0.5032–0.5914 (N=137 species), whereas the leaves of A. fluminum have a value of 0.5009±0.0021 (N=9 leaves), from a range of 0.4957–0.5147 (N=154 species). Their chromaticity is similar to that of other fruits eaten by tamarins and also by other primates (Sumner and Mollon, 2000b; Regan et al., 2001).

### Captive experiment

#### Animals and housing

Eight captive adult saddleback (S. fuscicollis weddelli Deville 1849) and six red-bellied tamarins (S. labiatus labiatus Geoffroy in Humboldt 1812) held at the Belfast Zoological Gardens were observed (by A.C.S.) in the experiment. The numbers of each species are given for each sex and visual phenotype in Table 1. Effort was made to balance sex and visual status across species from the animals available.

The monkeys were housed in standard indoor/outdoor enclosures off-exhibit. Testing took place in the outside enclosures (1.95 m×1.55 m×3.50 m). Each was furnished with a network of approximately eight branches (5 cm to >10 cm diameter), with the three branches closest to the test apparatus placed in the same configuration. The monkeys were accustomed to being held individually in the outside enclosures.

#### Genotyping

Visual status was determined genetically (by A.K.S.), by amplification and sequencing of the X-linked opsin gene.
Tamarin opsin alleles can be defined by four amino acid substitutions at positions 180 in exon 3, 229 and 233 in exon 4 and 285 in exon 5, which are important for spectral tuning (Shyue et al., 1998). DNA was extracted from plucked hair samples from each individual tamarin using a QIAamp DNA mini-kit (Qiagen, Crawley, UK). PCR and sequence analysis of exons 3, 4 and 5 were performed as previously described (Surridge and Mundy, 2002). Genotypes were assigned according to the combined sequence of the four important amino acids in each of the exons mentioned above. These are as follows for each of the three opsin alleles: 543 nm = Ala, Ile, Ser, Ala; 556 nm = Ala, Phe, Ser, Thr; 563 nm = Ser, Phe, Gly, Thr. Trichromatic females were identified by the presence of heterozygous sites in the DNA sequence at these important positions.

#### Test apparatus

The apparatus consisted of two rigid, wire grid panels. One was covered with laminated paper leaves (leaf background) and the other was unadorned (no background). The leaves, in the oval shape of A. fluminum, ranged from 70 mm×50 mm to 150 mm×115 mm. They were arranged to form a naturalistic canopy, giving dappled lighting from the incident daylight. The randomly varying degrees of illumination from the dappled light ensured that the task could not be solved by brightness cues of the targets alone. Twenty-one fruit bases, made from 1.5 mm card, were fixed at regular intervals as per Fig. 3. Each was covered with a lid, also made from 1.5 mm card, that overhung and covered its sides. The lids were covered in one of three colours of paper corresponding to unripe, mid-ripe and ripe A. fluminum fruit. Ripe fruits contained 0.5 g fudge, mid-ripe contained 0.25 g fudge and unripe fruits contained no reward. The pattern of the fruit locations was varied systematically.

Fig. 3. Diagram of artificial fruit and its coloured lid, and the pattern of the 21 test fruits.

The leaves were made from a commercially available green paper, the reflectance spectrum of which roughly matched that of real A. fluminum leaves, although overall the colour was somewhat brighter than the real leaves (Table 2; Fig. 4). Fruit lid colours were calculated to differ in chromaticity from the model leaves in the same way that natural fruits differ from natural leaves (Fig. 1). This design, with dappled lighting, means that as a test of colour vision the experimental task closely resembles the task faced in natural foraging. We modelled ripe, mid-ripe and unripe A. fluminum fruit (Table 2). Colours were made in Adobe Photoshop and printed using an Epson Color 580 inkjet printer.

Table 2. Quantum catches, relative to a barium sulphate white standard, of tamarin cones for A. fluminum fruit and leaves and recreated stimuli

Fruit and leaves:

| Stimulus | 425 nm, actual | 425 nm, model | 543 nm, actual | 543 nm, model | 556 nm, actual | 556 nm, model | 562 nm, actual | 562 nm, model |
|---|---|---|---|---|---|---|---|---|
| Ripe fruit (12) | 0.0227±0.0088 | 0.0832 | 0.1666±0.0204 | 0.2415 | 0.1908±0.0247 | 0.2531 | 0.2017±0.0268 | 0.2577 |
| Mid-ripe fruit | 0.0181 | 0.1792 | 0.1071 | 0.4628 | 0.1201 | 0.4632 | 0.1257 | 0.4611 |
| Unripe fruit (2) | 0.0136±0.0008 | 0.1349 | 0.0477±0.0018 | 0.3927 | 0.0494±0.0025 | 0.3834 | 0.0497±0.0028 | 0.3768 |
| Leaf, upper side (9) | 0.0087±0.0025 | 0.1583 | 0.0483±0.0133 | 0.3976 | 0.0489±0.0137 | 0.3767 | 0.0485±0.0138 | 0.3653 |

Recreated stimuli:

| Stimulus | S 425 nm (actual/model) | M 543 nm (actual/model) | L 562 nm (actual/model) | S/(L+M) (actual/model) | L/(L+M) (actual/model) |
|---|---|---|---|---|---|
| Ripe fruit | 0.023 / 0.083 | 0.167 / 0.242 | 0.2017 / 0.259 | 0.0616 / 0.1667 | 0.5477 / 0.5163 |
| Mid-ripe fruit | 0.0181 / 0.179 | 0.1071 / 0.463 | 0.1257 / 0.461 | 0.0777 / 0.1940 | 0.5399 / 0.4991 |
| Unripe fruit | 0.0136 / 0.135 | 0.0477 / 0.393 | 0.0497 / 0.377 | 0.1396 / 0.1753 | 0.5103 / 0.4897 |
| Leaf | 0.0087 / 0.158 | 0.0483 / 0.398 | 0.0485 / 0.365 | 0.0899 / 0.2075 | 0.5010 / 0.4789 |

N is given in parentheses.

Fig. 4. A saddleback tamarin foraging for the artificial fruits when presented on a leaf background.

### Protocol

Tamarins were tested individually in their outside enclosures. There were two conditions: condition 1, where 21 fruits, seven of each of three colours, were presented against no background (the plain wire mesh of the guide frame and cage wall), and condition 2, where the same fruits were presented against a leaf background (Fig. 4). Each tamarin received training trials until it had successfully located and taken six fruits. These trials were performed as for condition 2. The experiment was split into two phases: phase 1 was three trials of condition 1, and phase 2 was three trials of condition 2.

Trials were terminated either after the tamarin had taken all 21 fruits or after 15 min, whichever was sooner. During each trial, the time and colour of the fruit the tamarin took was continuously recorded using a hand-held computer running the Observer™ package (Tracksys Ltd., Nottingham, UK). General linear models run through SPSS were used for statistical comparisons.

### Results

Trichromats required significantly fewer training trials than their dichromatic counterparts (1.83±1.33 vs 4.60±2.88, respectively: F1,10=9.40, P<0.05) to reach the criterion of six fruits taken.
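For a single factor with two groups, the kind of F test reported above reduces to a one-way analysis of variance. The sketch below illustrates the computation only; the per-animal counts are hypothetical stand-ins, not the study's raw data.

```python
# Hypothetical counts of ripe fruits among the first six taken, per animal.
trichromat = [3, 4, 3, 3, 4, 3]                    # N = 6 animals
dichromat = [2, 3, 2, 2, 3, 2, 3, 2, 2, 3]         # N = 10 animals

def f_oneway_two_groups(a, b):
    """One-way F statistic for two groups: MS_between / MS_within."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    grand = (sum(a) + sum(b)) / (na + nb)
    ss_between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2
    ss_within = (sum((x - ma) ** 2 for x in a)
                 + sum((x - mb) ** 2 for x in b))
    df_between, df_within = 1, na + nb - 2
    return (ss_between / df_between) / (ss_within / df_within)

F = f_oneway_two_groups(trichromat, dichromat)
print(round(F, 2))  # compared against the F distribution with (1, 14) d.f.
```

The study itself used general linear models in SPSS, which additionally handle the species, sex and interaction terms reported in the text.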
Neither species (saddleback, 2.38±1.60; red-bellied, 4.75±3.20: F1,10=1.29, P>0.05) nor sex (male, 3.17±2.64; female, 3.80±2.90: F1,10=4.52, P>0.05) had a significant effect on number of trials to criterion, nor were the interactions of species and vision (F1,10=0.97, P>0.05) and species and sex (F1,10=0.01, P>0.05) significant.

To examine the efficiency with which fruits were selected, the number of ripe fruits within the first six fruits taken was calculated. When the fruits were presented against both the no background and the leaf background, trichromats took significantly more ripe fruits than did dichromats (Table 3). There were no other significant effects.

Table 3. Mean number of ripe fruits (±S.D.) taken within the first six fruits

| Effect/interaction | Category (N) | No background: mean no. ripe fruits | F1,10 | P | Leaf background: mean no. ripe fruits | F1,10 | P |
|---|---|---|---|---|---|---|---|
| Visual status | Trichromat (6) | 3.28±0.49 | 7.71 | 0.02 | 3.06±0.80 | 8.08 | <0.05 |
|  | Dichromat (10) | 2.35±0.59 |  |  | 2.13±0.64 |  |  |
| Species | Saddleback (8) | 2.75±0.61 | 0.51 | >0.05 | 2.27±0.60 | 4.63 | >0.05 |
|  | Red-bellied (8) | 2.65±0.84 |  |  | 2.69±0.99 |  |  |
| Sex | Female (10) | 2.80±0.77 | 0.78 | >0.05 | 2.67±0.82 | 0.36 | >0.05 |
|  | Male (6) | 2.53±0.62 |  |  | 2.17±0.78 |  |  |
| Species × visual status |  |  | 0.16 | >0.05 |  | 0.16 | >0.05 |
| Species × sex |  |  | 0.62 | >0.05 |  | 0.22 | >0.05 |

Whether the fruits were presented against a leaf background or not had no significant effect on the number of ripe fruits within the first six fruits taken (no background, 2.70±0.71; leaf background, 2.48±0.82: F1,14=1.41, P>0.05). There was no interaction of visual status and background (F1,14=0.001, P>0.05), nor was there a difference between dichromats and trichromats in the total number of ripe fruits taken by the end of each trial, either when presented against no background (dichromat, 6.30±0.66; trichromat, 6.33±0.73: F1,14=0.009, P>0.05) or a leaf background (dichromat, 5.43±1.29; trichromat, 6.05±0.53: F1,14=1.25, P>0.05).

The main finding is that trichromacy confers an advantage when selecting ripe fruits from those at various stages of maturity; both as a simple task and also when presented as a more naturalistic complex task against a background of distracting leaves. This is the first time that such an advantage has been demonstrated for primates using naturalistic stimuli. In addition, the patchy illumination falling on the fruit and leaves in our experiments resembles that of a natural forest canopy with areas of shadow and sun. These are conditions that might favour colour vision.
Despite the benefits of trichromacy in the efficient detection and selection of ripe fruit, the selection of heterozygous trichromats will maintain both trichromacy and dichromacy within the population since, within the X-linked single-locus model, males are always dichromats irrespective of their mother's visual status (Mollon et al., 1984).

The three alleles of the single-locus model give three trichromat phenotypes and three dichromat phenotypes. The spectral tuning of the opsins of each phenotype may render them each more or less advantageous under different photic conditions. Even at a given time of day there are vast differences in illumination within a rain forest. It would repay investigation to examine the actual foraging efficiencies of the different phenotypes using real-world stimuli under a variety of naturalistic lighting conditions. Similarly, it would have been informative to examine differences in the relative performance of each of the three dichromatic and three trichromatic phenotypes, but the distribution of the phenotypes of the available animals did not permit this. Indeed, all of the trichromats were 423 nm, 543 nm, 563 nm, and the small sample size did not permit examination of differences between the two dichromat phenotypes in the study, namely 423 nm, 563 nm and 423 nm, 543 nm.

Although we have found that trichromacy is advantageous for detection and selection of ripe fruit (at least for the phenotypes present in our study), this does not give a complete picture of the likely costs and benefits of colour vision. Nor does this result demonstrate that trichromacy originally evolved for foraging. For example, trichromacy has been suggested as being more efficient for detecting yellow predators against a green leafy background (Coss and Ramakrishnan, 2000). Examples might include the yellowish jaguar (Panthera onca), ocelot (Leopardus pardalis), margay (L. wiedii) and oncilla (L. tigrina), all of which live in the Neotropics.
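The single-locus polymorphism discussed above can be made concrete in a few lines: an allele is identified by its residues at the four tuning sites listed in the Genotyping section, and a female carrying two spectrally distinct M/L alleles is trichromatic. The names below come straight from the text; the rest is an illustrative sketch, not the study's analysis code.

```python
# Residues at positions 180, 229, 233 and 285 define the opsin allele
# (values as listed in the Genotyping section).
ALLELES = {
    ("Ala", "Ile", "Ser", "Ala"): "543 nm",
    ("Ala", "Phe", "Ser", "Thr"): "556 nm",
    ("Ser", "Phe", "Gly", "Thr"): "563 nm",
}

def classify(carried):
    """Return (visual status, sorted allele peaks) for the X-linked
    alleles carried by one animal. Two distinct alleles (heterozygous
    female) give trichromacy; one allele gives dichromacy."""
    peaks = {ALLELES[tuple(residues)] for residues in carried}
    status = "trichromat" if len(peaks) > 1 else "dichromat"
    return status, sorted(peaks)

status, peaks = classify([("Ala", "Ile", "Ser", "Ala"),
                          ("Ser", "Phe", "Gly", "Thr")])
print(status, peaks)  # a heterozygous 543 nm / 563 nm female
```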
Dichromacy, however, may be advantageous in breaking camouflage (Morgan et al., 1992). This is relevant not only for detection of predators but also for the detection of insect and other prey items that are taken by many primate species. However, a recent study failed to find any evidence of a dichromat advantage in terms of the number of prey captured by wild and captive tamarins (H. M. Buchanan-Smith, A. C. Smith, A. K. Surridge, M. J. Prescott, D. Osorio and N. I. Mundy, manuscript in preparation).

The detection and discrimination of fruits is a complex task. Fruits must be distinguished from leaves, edible fruits must be discriminated from inedible or toxic fruits, and ripe fruits must typically be picked over unripe fruits. Colouration may aid in all of these tasks; indeed, as this study has shown, primate trichromacy is advantageous in the efficient selection of ripe fruits from an array of unripe, mid-ripe and ripe fruits. However, there are many subtle factors other than colour per se that can influence the choice of fruits by wild primates. As Savage et al. (1987) point out, discrimination may be most acute for those foods that are rarely consumed yet are an essential source of one or more nutrients.

In Peru: we are grateful to Dr E. Montoya (Proyecto Peruano de Primatologia) and Biologo R. Pezo (Universidad Nacional de la Amazonia Peruana) for help with logistical matters; and particular thanks to Ney Shahuano for unflagging field assistance. In the UK: we are grateful to John Stronge and Mark Challis at Belfast Zoological Gardens for continued support of our research, and the zoo staff for maintaining the study animals. We thank Drs S. Vick and J. Kren for comments on an early draft of this manuscript. This study was funded by the BBSRC (98/S11498 to H.M.B.-S.).

Ahnelt, P. K. and Kolb, H. (2000). The mammalian photoreceptor mosaic: adaptive design. Prog. Retin. Eye Res. 19, 711-777.

Arrese, C. A., Hart, N. S., Thomas, N., Beazley, L. D. and Shand, J. (2002). Trichromacy in Australian marsupials. Curr. Biol. 12, 657-660.

Caine, N. G. and Mundy, N. I. (2000). Demonstration of a foraging advantage for trichromatic marmosets (Callithrix geoffroyi) dependent on food color. Proc. R. Soc. Lond. B 267, 439-444.

Coss, R. G. and Ramakrishnan, U. (2000). Perceptual aspects of leopard recognition by wild bonnet macaques (Macaca radiata). Behavior 137, 315-335.

Dominy, N. J. and Lucas, P. W. (2001). Ecological importance of trichromatic vision to primates. Nature 410, 363-366.

Guillotin, M., Dubost, G. and Sabatier, D. (1994). Food choice and food competition among three major primate species of French Guiana. J. Zool. Lond. 233, 551-579.

Heymann, E. W. and Hartmann, G. (1991). Geophagy in mustached tamarins, Saguinus mystax (Platyrrhini: Callitrichidae), at the Rio Blanco, Peruvian Amazonia. Primates 32, 533-537.

Jacobs, G. H. (1993). The distribution and nature of color vision among the mammals. Biol. Rev. 68, 413-471.

Jacobs, G. H. (1984). Within-species variations in the visual capacity among squirrel monkeys (Saimiri sciureus): color vision. Vision Res. 24, 1267-1277.

Jacobs, G. H., Deegan, J. F., II, Tan, Y. and Li, W.-H. (2002). Opsin gene and photopigment polymorphism in a prosimian primate. Vision Res. 42, 11-18.

Jacobs, G. H., Neitz, J. and Crognale, M. (1987). Color-vision polymorphism and its photopigment basis in a callitrichid monkey (Saguinus fuscicollis). Vision Res. 27, 2089-2100.

Jacobs, G. H., Neitz, M., Deegan, J. F. and Neitz, J. (1996a). Trichromatic color vision in New World monkeys. Nature 382, 156-158.

Jacobs, G. H., Neitz, M. and Neitz, J. (1996b). Mutations in S-cone pigment genes and the absence of colour vision in two species of nocturnal primate. Proc. R. Soc. Lond. B 263, 705-710.

Julliot, C. (1994). Frugivory and seed dispersal by red howler monkeys: evolutionary aspect. Revue d'Ecologie (Terre Vie) 49, 331-341.

Kawamura, S., Hirai, M., Takenaka, O., Radlwimmer, F. B. and Yokoyama, S. (2001). Genomic and spectral analyses of long to middle wavelength-sensitive visual pigments of common marmoset (Callithrix jacchus). Gene 269, 45-51.

MacLeod, D. I. A. and Boynton, R. M. (1979). Chromaticity diagram showing cone excitation by stimuli of equal luminance. J. Opt. Soc. Am. 69, 1183-1186.

Mollon, J. D., Bowmaker, J. K. and Jacobs, G. H. (1984). Variations of color vision in a New World primate can be explained by polymorphism of retinal photopigments. Proc. R. Soc. Lond. B 222, 373-399.

Morgan, M. J., Adam, A. and Mollon, J. D. (1992). Dichromats detect colour-camouflaged objects that are not detected by trichromats. Proc. R. Soc. Lond. B 248, 291-295.

Osorio, D. and Vorobyev, M. (1996). Color vision as an adaptation to frugivory in primates. Proc. R. Soc. Lond. B 263, 593-599.

Regan, B. C., Julliot, C., Simmen, B., Vienot, F., Charles-Dominique, P. and Mollon, J. D. (1998). Frugivory and colour vision in Alouatta seniculus, a trichromatic platyrrhine monkey. Vision Res. 38, 3321-3327.

Regan, B. C., Julliot, C., Simmen, B., Vienot, F., Charles-Dominique, P. and Mollon, J. D. (2001). Fruits, foliage and the evolution of color vision. Phil. Trans. R. Soc. Lond. B 356, 229-283.

Richard, A. F. (1985). Primates in Nature. New York: W. H. Freeman & Co.

Savage, A., Dronzek, L. A. and Snowdon, C. T. (1987). Color discrimination by the cotton-top tamarin (Saguinus oedipus oedipus) and its relation to fruit coloration. Folia Primatol. 49, 57-69.

Shyue, S. K., Boissinot, S., Schneider, H., Sampaio, I., Schneider, M. P., Abee, C. R., Williams, L., Hewett-Emmett, D., Sperling, H. G., Cowing, J. A. et al. (1998). Molecular genetics of spectral tuning in New World monkey colour vision. J. Mol. Evol. 46, 697-702.

Stavenga, D. G., Smits, R. P. and Hoenders, B. J. (1993). Simple exponential functions describing the absorbency bands of visual pigment spectra. Vision Res. 33, 1011-1017.

Sumner, P. and Mollon, J. D. (2000a). Catarrhine photopigments are optimized for detecting targets against a foliage background. J. Exp. Biol. 203, 1963-1986.

Sumner, P. and Mollon, J. D. (2000b). Chromaticity as a signal of ripeness in fruits taken by primates. J. Exp. Biol. 203, 1987-2000.

Surridge, A. K. and Mundy, N. I. (2002). Trans-specific evolution of opsin alleles and the maintenance of trichromatic colour vision in callitrichine primates. Mol. Ecol. 11, 2157-2170.

Tan, Y. and Li, W.-H. (1999). Trichromatic vision in prosimians. Nature 402, 36.

Tovée, M. J., Bowmaker, J. K. and Mollon, J. D. (1992). The relationship between cone pigments and behavioural sensitivity in a New World monkey (Callithrix jacchus jacchus). Vision Res. 32, 867-878.

Vorobyev, M. and Osorio, D. (1998). Receptor noise as a determinant of colour thresholds. Proc. R. Soc. Lond. B 265, 351-358.

Williams, A. J., Hunt, D. M., Bowmaker, J. K. and Mollon, J. D. (1992). The polymorphic photopigments of the marmoset: spectral tuning and genetic basis. EMBO J. 11, 2039-2045.
// Static default: 0 means no explicit timeout has been configured yet.
unsigned int CHttpClientImp::s_dwTimeout = 0;

// Propagate the timeout to the underlying connection and cache it
// for subsequent requests.
void CHttpClientImp::SetTimeout(unsigned int timeout)
{
    CCohConnection::SetTimeout(timeout);
    s_dwTimeout = timeout;
}

// Forward a completed HTTP response to the registered callback,
// tagged with this client's request id.
void CHttpClientImp::OnReceiveResponse(const string &strResponse)
{
    m_pCallBack->OnRemoteResponse(m_dwId, strResponse);
}
This is more a comment to the question (which I cannot do).

As written (intentionally?) [ADDED: Apparently, it was intentional.] the specified rings are not always the (full) ring of algebraic integers of the field $\mathbb{Q}(\sqrt{c})$ (see, e.g., http://en.wikipedia.org/wiki/Quadratic_integer for details).

In these cases the rings in question are not integrally closed and thus not UFDs, even if the class number of the field is one and thus the (full) ring of algebraic integers would be a UFD (see, e.g., http://en.wikipedia.org/wiki/Class_number_problem).

Possibly, one needs to take this into account too, when using the list, mentioned in another answer, where the rings are Euclidean.
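The distinction drawn above — $\mathbb{Z}[\sqrt{c}]$ versus the full ring of integers of $\mathbb{Q}(\sqrt{c})$ — depends only on $c \bmod 4$ for squarefree $c$. A small sketch (the function name is mine):

```python
def ring_of_integers_is_Z_sqrt_c(c):
    """For squarefree c != 1: the ring of integers of Q(sqrt(c)) is
    Z[sqrt(c)] iff c is 2 or 3 (mod 4). For c = 1 (mod 4) the full ring
    is the larger Z[(1 + sqrt(c))/2], so Z[sqrt(c)] is not integrally
    closed there. Python's % returns a result in 0..3 even for c < 0."""
    return c % 4 in (2, 3)

# c = -1: Z[i] (the Gaussian integers) is the full ring of integers.
print(ring_of_integers_is_Z_sqrt_c(-1))
# c = 5: Z[sqrt(5)] is a proper subring of Z[(1 + sqrt(5))/2].
print(ring_of_integers_is_Z_sqrt_c(5))
```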
Cerovo (Cyrillic: Церово) is a village in Montenegro, in the municipality of Bijelo Polje. In 2011 it had 177 inhabitants.

References

Villages in the municipality of Bijelo Polje
NK Slaven Belupo is a Croatian football club from Koprivnica, founded in 1907. It plays its home matches at the Gradski stadion, which holds 3,205 spectators.

Placings in individual seasons

European competitions

Overall

Source: uefa.com, last updated 9 August 2012. Z = matches played; V = wins; R = draws; P = losses; VG = goals scored; OG = goals conceded

Results

Last updated: 9 August 2012

Seasons

Last updated 9 August 2012

Statistics

Most appearances in European competitions: 34 matches, Petar Bošnjak
Most goals in European competitions: 14 goals, Marijo Dodik

Notable players

To appear on this list, a player must have: played at least 150 matches for the club; scored at least 50 goals; or played at least one international match while wearing the NK Slaven Belupo shirt. Dates in parentheses indicate the length of the spell at the club.

Roy Ferenčina (1997–2005)
Marijo Dodik (1998–2007)
Petar Bošnjak (1999–2008)
Srebrenko Posavec (2000–05, 2006–10)
Bojan Vručina (2004–10, 2011–12)
Silvio Rodić (2005–14)
Mateas Delić (2006–10, 2012–16, 2018–)
Vedran Purić (2008–2018)
Fidan Aliti (2016–2017)
Nikola Katić (2016–2018)

Managerial history

References

External links

Official website

Croatian football clubs
Football clubs established in 1907
Change your Address on your Car Title in California. Procedures, Forms and Information on a Car Title Address Change. Learn steps for a CA driver's license change of address. Learn where to change driver's license address & see time limits to update address info. Insurance Requirements For Vehicle Registration · How Much Insurance Am I. Providing Proof of Insurance To Register My Commercially Insured Vehicle. Change of Address Abstract: Change of Address (Within Connecticut)/Organ and Tissue Donor Status. Connecticut law requires all Connecticut residents with a. Updating your address will update all DMV records associated with your. If your vehicle is garaged at a location other than your DMV physical address of. Vehicle Record Request: Subscriber Log-in. Top 10 Title & Registration Tips · Change of Address Request · DMV Web Site Survey. DMV Offices for Titles/Plates. Search below by county or city to find DMV Title & Registration offices or tap the map. Guide for Online Insurance Verification · Change of Address Request. You can learn how to change your address at the DMV website. forms when you apply for your vehicle registration and your insurance ID card (FS-20). A. All cars driven in the state of RI must be insured. Upon receipt of the request, the DMV will change your address information in its computer system. A SEPARATE FORM IS NEEDED FOR EACH DRIVER OR VEHICLE OWNER. at www.dmv.ca.gov or mail to: DMV CHANGE OF ADDRESS P. O. Box 942859. you may be eligible for the California Low Cost Automobile Insurance Program. Whether or not you keep the same car & have the same drivers on your. and then unpacking all of your belongings, you need to update your address and. May 26, 2015. It is very important to also notify the DMV of your address change. car matches the proof of insurance with your automobile insurance carrier. Should you commit a traffic violation, be involved in a car accident, or blow. 
Another important place to update your address is with your insurance provider. Vehicle or vessel renewal: Your change of address takes approximately three business days to be updated to DMV's records. Before using the internet vehicle or. Nov 21, 2017. Most of us know that you can change your address with the United States. The DMV: You'll likely remember to contact the DMV if you move to. to a new state, you should update both your address and your insurance.
{ "redpajama_set_name": "RedPajamaC4" }
3,720
There's no disputing the effect the moon has on the tides; thus, it's logical to assume that the moon also has an effect on people (given the fact that over 50% of the human body is made up of water). So perhaps the planets in our solar system have similar effects. The month of August (as was July) is plagued with retrogrades and eclipses. Many theories suggest that the planetary conditions cause much chaos and aggression in people who are unaware of the cosmic pull and of themselves.

Mercury remains retrograde until August 18th.

August 11: solar eclipse in Leo, plus Uranus will be retrograde.

July 26th to August 12th: the annual planetary alignment, known as the "Lion's Gate Portal," happens (when the sun is in the constellation of Leo). August 8th is when the energy is considered to be the strongest planetary influence of the year.

There's much debate on whether planets in retrograde have real effects. When a planet is in retrograde, it appears to be moving backwards. Even though the backward movement is just an illusion (the same way a car looks to be moving backwards when you pass it on the highway), it has been known throughout the ages to signal a period of delayed progress or disruption, specifically to the human psyche. Fact or power of suggestion? That's for you to decide, but if you notice the effects, it does not matter what causes them... there's still an effect.

Many people subscribe to the theory that the planets in retrograde do have energetic effects on the earth and all that it sustains, while others believe that any noticeable circumstances that happen around the times of planetary retrogrades are either a coincidence or unconscious manifestations from the power of suggestion. Personally, as an extremely sensitive person, I can tell you that I absolutely feel the intense energies of planetary conditions without having any knowledge of them.
To shield myself from social agendas and influence as much as possible, I feel, observe, and THEN check the facts for verification. Protect yourself with awareness and understanding. I don't believe that there is a default protocol to deal with these occurrences because we are all so uniquely biologically and emotionally constructed, however, I can tell you what works for me. But to preserve your sanity and quality of life (regardless of the actual cause), there must be a protocol in place that makes sense to you when chaos of any source flares up (be it cosmic, collective or strictly personal). When the energy becomes intense, there are many physical and emotional reactions. Without awareness, these reactions can cause devastating consequences. Here are some that I've noticed within myself and in others. So what should you do when things get intense? First, it is important to understand that when circumstances and emotions become intense, it is life's way of telling you to slow down and introspect. So here is my personal protocol. This is the time to slow your pace. Do everything slower, which will help you avoid mistakes from the confusion and allow you to gain awareness. Awareness is key in attaining accurate insight and understanding. The times of intensity are at high risk for making wrong assumptions, bad decisions and regretful reactions. Remind yourself that "things may not what they seem to be" when the emotions get roused. For me, it's time to clean house (literally and figuratively). I find that getting my house and thoughts in order prevents me from being sucked into the whirlwind of energetic chaos. Now is the time to go for that massage you've been wanting to get. Go to the gym, but don't overdo it. Study your spiritual path, whatever that may be. Go for a walk in the woods and find solace. Meditate, eat healthy, relax. Avoid the news and anything stressful. 6. Avoid all mind altering drugs and alcohol. I can't stress this one enough. 
This is a very touchy subject, but it is crucial to remain lucid and clear. The risk for self medication, delusion, relapses and overdoses is at it's highest during such intense times, and the damaging effects of drugs and alcohol are also at their highest. To seek refuge from mind-altering substances at this time will trigger ALL negative responses from the energies. This is why it's so important to establish a protocol that will help you through these times. Consider the possibility that perhaps the Universe is speaking through evidential occurrences. With the proper conduct and response, this too shall pass bringing us to expanded views of spiritual growth and wisdom.
{ "redpajama_set_name": "RedPajamaC4" }
4,898
Q: What is the calculation behind the %*% operator in R? The %*% operator in R has its purpose described in multiple places for example: The R %*% operator and here https://www.tutorialspoint.com/r/r_operators.htm Which states: "This operator is used to multiply a matrix with its transpose." What I do not understand is the calculation performed in order to go from one matrix to another using this operator, for example: M = matrix(c(3,2,5,3,4,5),3, 2, TRUE) M # [,1] [,2] #[1,] 3 2 #[2,] 5 3 #[3,] 4 5 t(M) # [,1] [,2] [,3] #[1,] 3 5 4 #[2,] 2 3 5 print ( M %*% t(M) ) # [,1] [,2] [,3] #[1,] 13 21 22 #[2,] 21 34 35 #[3,] 22 35 41 I've stared at this, and other matrix examples, and I can't figure out how you arrive at the result by multiplication with the transpose matrix, and I can't find an explanation anywhere. Can anyone provide an explanation please or point me in the right direction? Thanks
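An aside for readers: despite the tutorialspoint wording, %*% is ordinary matrix multiplication, and entry (i, j) of M %*% t(M) is the dot product of row i of M with row j of M (i.e., column j of t(M)); for entry (1, 1) that is 3*3 + 2*2 = 13. A quick sketch in Python (not R) reproducing the computation by hand:

```python
# Reproduce M %*% t(M) by hand: entry (i, j) is the dot
# product of row i of M with row j of M (= column j of t(M)).
M = [[3, 2],
     [5, 3],
     [4, 5]]

def matmul_with_transpose(m):
    rows = len(m)
    cols = len(m[0])
    return [[sum(m[i][k] * m[j][k] for k in range(cols))
             for j in range(rows)]
            for i in range(rows)]

print(matmul_with_transpose(M))
# [[13, 21, 22], [21, 34, 35], [22, 35, 41]]
```

The output matches the R result above, which confirms nothing special is happening beyond row-by-column dot products.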
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,763
We really need a win from this game and not just for revenge but what team will Jerry go with? As I have said elsewhere I like a back four of Miles Ball Batten and Rigg but I think Jerry very much likes the Miles Opi combination and so is reluctant to play Miles on the left. So I expect Kelly will start on the left and I think Amankwah will be preferred at CB again. In midfield I would go with a diamond of Richards Compton Davis and Opi with a front two of Watkins and Kavanagh. I would like to see Pierce given a go but can't see him getting in ahead of either Opi or Compton but possibly could play alongside Kavanagh although this would be a very young and inexperienced front line. But we really need to start causing opposition centre backs problems as well as the goalies. Most of us thought it was the next loanee. On the subject of loanees, Michael Kelly and Rhys Kavanagh both played for Bristol City against Rovers this afternoon in a GFA cup game - they should be warmed up for tomorrow! Opi didn't play for City though. LB wrote: On the subject of loanees, Michael Kelly and Rhys Kavanagh both played for Bristol City against Rovers this afternoon in a GFA cup game - they should be warmed up for tomorrow! Opi didn't play for City though. They will not play - it is, I believe, part of their loan agreement. I thought Rhys Kavanagh was a Rovers player. Major Icewater wrote: I thought Rhys Kavanagh was a Rovers player. Just testing! I should have said that Kelly and Kavanagh played for Rovers against City. Blame it on the time of year! City 2 up at ht through Tom Smith and Jack Compton pen. Won 3-0. Can't wait to read about this. I owe the team and management an apology. Didn't think we were tough enough to get anything. Let's take it to Weston on Boxing Day and NYD. We completely dominated throughout and Jerry's game plan worked to perfection. Smith showed his class with both finishes and we had other decent opportunities that were thwarted.
Frankie worked hard and Richards controlled things in midfield while the defence looked solid again. The win clearly meant a lot to the players and management. We played very well today, the high tempo pressing game worked very well. Chippenham had nothing to offer and looked very poor, it could well have been 5 or 6. I thought Jarvis had one of the best games I have seen him play and Richards was back to his best after a couple of games when he seemed a little off the pace. Tom Smith is a great little player and when he is on his game, like he was today, he is outstanding. A very good team performance, well done to all. A good and not totally expected start to Christmas! Revenge is very sweet! We made Chipp look as poor as we were in the first match at Twerton and they were chasing shadows for much of the game. Some excellent counter attacks in the second half could have easily doubled our score but the final delivery was lacking. To be honest the penalty award looked a shocking decision but some of the home players deserve no sympathy after their antics following Smith's foul in the first half. And welcome back Jack Batten - you showed what we've been missing - and it was good to see the return of Ryan Case. Great win. Great support. Great atmosphere in both the ground and the Old Road Tavern (Great beer). City support was top notch today. GWR even ran a train back home via Melksham, shame they cancelled the outbound train to Chippenham. Happy Daze. Looking forward to Boxing Day. Don't mess it up City!
Great travelling support today matched by an equally impressive result. How many Romans do you think were at Hardenhuish Pk today? Just reward for the faithful after some pretty dire recent games. The attendance figure was given as 1502. City must have taken c.500 so did the "Chippenham Bob" have a special promotion / discount day?
6,925
Biography Karl Eglseer was born in Bad Ischl, Upper Austria, on 5 July 1890. He joined the Austro-Hungarian army as an ensign in August 1908 and served in the First World War. Remaining in the Bundesheer after 1918, he was transferred to the Wehrmacht when the Anschluss was carried out in 1938. In October 1940 he was promoted to the command of the 4th Mountain Division, serving with Army Group South on the Eastern Front. In October 1941 he was awarded the Knight's Cross of the Iron Cross for his leadership of the division. Eglseer then led the 714th Infantry Division in Yugoslavia from February 1943 until December of that year, when he became commander of the XVIII Army Corps in the northern sector of the Eastern Front. On 23 June 1944 the aircraft carrying Eglseer, together with generals Dietl, von Wickede and Rossi, crashed in Styria, a region of Austria. All aboard were killed instantly. Honours Austrian honours German honours Foreign honours Bibliography Fellgiebel, Walther-Peer (2000). Die Träger des Ritterkreuzes des Eisernen Kreuzes 1939–1945. Friedburg, Germany: Podzun-Pallas. ISBN 3-7909-0284-5. Patzwall, Klaus D. and Scherzer, Veit (2001). Das Deutsche Kreuz 1941–1945 Geschichte und Inhaber Band II. Norderstedt, Germany: Verlag Klaus D. Patzwall (in German). ISBN 3-931533-45-X. Other projects Austro-Hungarian military personnel Wehrmacht military personnel Knights of the Iron Cross
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,299
Setting a new standard in value and performance, the RDR 2000 digital weather radar provides a vertical picture of a pilot-selected cross-section of the storm, offering the best view available to general aviation. The simple press of a button allows you to analyze a storm segment vertically, giving you the information you need to determine the scope of the storm. The RDR 2000 uses intuitive colors (green, yellow, red, magenta), and six ranges (10 nm to 240 nm) to depict the weather intensity, creating a clear picture of the weather and making it easy for you to avoid danger. The horizontal scan provides an angle of 90°; the vertical scan an angle of 60°. Using Sensitivity Time Logic, the system can correlate target distance with intensity, and its attenuation compensation reduces shadowing. The system is fully Electronic Flight Instrument System (EFIS) compatible using ARINC 429 and ARINC 453 databusses. It also features a Multi-Function Display (MFD) interface, fault annunciation, TILT readout on display and independent dual indicator operation. Power output is rated at 4.0 kW. The RDR 2000 system can be configured at installation to include the Target Alert feature. The purpose of the feature is to alert the pilot to the presence of a significant weather cell that exists beyond the currently selected range. For this mode to be active, Wx or WxA mode must be selected and Vertical Profile must not be selected. The criteria for a Target Alert is for the cell to be at least red intensity, within ±10° of aircraft heading, a minimum size of 2 NM in range and 2 degrees in azimuth, and within the range of 80 to 240 NM. When a Target Alert is issued, two red arcs, separated by a black arc will be displayed at the top of the display centered on the aircraft heading (see the following figure). Target Alert is applied to each scan independent of the other when the radar is alternating scans. 
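As a reading aid only — this is not BendixKing's implementation, and the function and parameter names below are invented for illustration — the Target Alert criteria described in this paragraph can be restated as a simple predicate:

```python
# Hypothetical sketch of the Target Alert criteria described above.
# Names and thresholds restate the paragraph; they are not from the RDR 2000 manual.
def target_alert(mode, vertical_profile, intensity_level, bearing_offset_deg,
                 cell_range_nm, cell_depth_nm, cell_width_deg):
    """Return True when a cell beyond the selected range should raise an alert."""
    if mode not in ("Wx", "WxA") or vertical_profile:
        return False  # feature active only in Wx/WxA with Vertical Profile off
    return (intensity_level >= 3            # at least "red" on the green/yellow/red/magenta scale
            and abs(bearing_offset_deg) <= 10   # within +/-10 degrees of aircraft heading
            and cell_depth_nm >= 2              # minimum 2 NM in range
            and cell_width_deg >= 2             # minimum 2 degrees in azimuth
            and 80 <= cell_range_nm <= 240)     # within the 80-240 NM window

# A red cell dead ahead at 120 nm triggers the alert:
print(target_alert("Wx", False, 3, 0, 120, 3, 4))   # True
# The same cell while Vertical Profile is selected does not:
print(target_alert("Wx", True, 3, 0, 120, 3, 4))    # False
```

Each condition maps one-to-one onto a sentence of the paragraph above, which makes the alert logic easy to check against the manual.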
Get in-depth training of the RDR 2000/2100 with this pilot training video from BendixKing.
{ "redpajama_set_name": "RedPajamaC4" }
713
Nowadays, IP telephony offers many resources for improving telecommunication. The first of them is the virtual phone number, which looks like an ordinary phone number but comes with additional features. You can use virtual phone numbers for everyday contact with customers and business associates. Freezvon provides numbers that forward calls, SMS and fax messages to specific destinations. In the following article, you will be able to work out what type of service you really need. Freezvon also recommends installing a PBX system, with features such as call recording, hold music, an IVR menu, SIP accounts, internal numbers, etc. Many companies use it for their telephony. For example, suppose you need a presence on the French market but have no plans to open a physical office there. A virtual phone number is your best option. What is the virtual phone service from Freezvon? It is a number that looks like a real one, with a local code. It works by forwarding calls, messages and faxes to appropriate destinations: SIP and mobile for calls; email for SMS and fax; landline for calls. Such a virtual phone doesn't require SIM cards or other supplementary equipment, and there is no geographical attachment. All areas and prices are available here. If you want the ability to make outgoing calls, we suggest connecting a SIP account. You may receive it free after buying a virtual number, or separately. Download a special app such as Zoiper or Xlite and make all the necessary settings with help from our technicians. What are the pros and benefits of using a virtual phone? You can also connect 800 numbers. To build more trust in your number, enable the Caller ID function, which lets your subscribers see the digits and call you back. Check your individual info and confirm it. Then your phone number will go through the connection process, which lasts about 24 hours (depending on provider conditions).
In your personal email, you will find a letter with info about its active status. If you have questions about your virtual phone, reach our technical managers via Skype, live chat, email, or simply call the phone number presented on our website. Enjoy VoIP telephony from Freezvon right now!
{ "redpajama_set_name": "RedPajamaC4" }
5,202
{"url":"http:\/\/www.mathnet.ru\/php\/archive.phtml?wshow=paper&jrnid=vmp&paperid=844&option_lang=eng","text":"Numerical methods and programming\n RUS\u00a0 ENG JOURNALS \u00a0 PEOPLE \u00a0 ORGANISATIONS \u00a0 CONFERENCES \u00a0 SEMINARS \u00a0 VIDEO LIBRARY \u00a0 PACKAGE AMSBIB\n General information Latest issue Archive Search papers Search references RSS Latest issue Current issues Archive issues What is RSS\n\n Num. Meth. Prog.: Year: Volume: Issue: Page: Find\n\n Num. Meth. Prog., 2016, Volume\u00a017, Issue\u00a04, Pages\u00a0380\u2013392 (Mi vmp844)\n\nA hybrid algorithm for the joint calculation of multiscale hydraulic problems with consideration of thermal processes\n\nE.\u00a0I.\u00a0Mihienkovaa, S.\u00a0A.\u00a0Filimonovb, A.\u00a0A.\u00a0Dekterevc\n\na Siberian Federal University, Krasnoyarsk\nb TORINS Company, Krasnoyarsk\nc S.S.\u00a0Kutateladze Institute of Thermophysics, Siberian Division of the Russian Academy of Sciences\n\nAbstract: A hybrid algorithm to solve the problems of flux-distribution in a model consisting of both spatial elements and 0-dimensional (network) elements is proposed. In these problems, the flux-distribution is simulated by the methods of Hydraulic Circuit Theory (HCT). The algorithm is based on a SIMPLE-like procedure of flows (velocities) and pressure field communications in the computational domain. The hybrid model is based on a single system of pressure correction equations for the network and for the spatial domains. The flows in the corresponding subdomains are determined similar to those in the HCT or in computational hydrodynamics. In the thermal part of the problem, the relation between the spatial domains and the network domains is performed by the transfer of the enthalpy flow in the downward flow direction. 
The numerical results obtained according to the hybrid model are compared with experimental data and with results of spatial calculations.\n\nKeywords: numerical simulation, Computational Fluid Dynamics (CFD), hydraulic circuit theory, hybrid 3D\/0D algorithm.\n\nFull text: PDF file (955\u00a0kB)\nUDC: 532.54\n\nCitation: E.\u00a0I.\u00a0Mihienkova, S.\u00a0A.\u00a0Filimonov, A.\u00a0A.\u00a0Dekterev, \u201cA hybrid algorithm for the joint calculation of multiscale hydraulic problems with consideration of thermal processes\u201d, Num. Meth. Prog., 17:4 (2016), 380\u2013392\n\nCitation in format AMSBIB\n\\Bibitem{MihFilDek16} \\by E.~I.~Mihienkova, S.~A.~Filimonov, A.~A.~Dekterev \\paper A hybrid algorithm for the joint calculation of multiscale hydraulic problems with consideration of thermal processes \\jour Num. Meth. Prog. \\yr 2016 \\vol 17 \\issue 4 \\pages 380--392 \\mathnet{http:\/\/mi.mathnet.ru\/vmp844}","date":"2022-01-18 21:01:12","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.21539296209812164, \"perplexity\": 5084.018127931731}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, 
\"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320300997.67\/warc\/CC-MAIN-20220118182855-20220118212855-00647.warc.gz\"}"}
null
null
Q: `this` must be `void` Experimenting with declaring a function that must be called with void this (undefined, null or global), I found an interesting thing. When declaring a function with this: void it can be called with any this, but if I add some concrete type like this: void | Function (though it wouldn't work with null, undefined, any) it starts checking this. Code: var x = { f: function (this: void) { } }; x.f(); // Ok - why??? (0 as any, x.f)(); // Ok var y = { f: function (this: Window) { } }; y.f(); // Error (0 as any, y.f)(); // Error var z = { f: function (this: void | Window) { } }; z.f(); // Error (0 as any, z.f)(); // Ok var a = { f: function (this: void | null) { } }; a.f(); // Ok (0 as any, a.f)(); // Ok var b = { f: function (this: void | Function) { } }; b.f(); // Error (0 as any, b.f)(); // Ok var c = { f: function (this: void | (string & number)) { } }; c.f(); // Error (0 as any, c.f)(); // Ok I don't understand how void works here. A: void is a very strange type var a:void = undefined; // ok var b:undefined = a; // incompatible In your example, if all void types are replaced with undefined, everything works as intended
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,210
The Love Freekz had a minor hit in February 2005 with Shine, which contains a sample of Shine a little love by Electric Light Orchestra. Discography Singles with chart entries in the Dutch Top 40 Love Freekz, The
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,578
\section{Introduction} \label{sec-intro} We create an informative representation of the physical world around us to survive, interpret and change it. The informational representation is often in the form of concepts (e.g., natural language words) or mathematical variables and their values (e.g., all forms of data)\cite{lloyd2002computational, yue2022world}. The world is not just a random collection of facts, but rather an interconnected complex network of facts. In this network, the phenomenon of causation can very often be observed\cite{pearl2018book}. Surprisingly, many complex systems can be described by quite simple mathematical models using a few variables, all the way from quantum mechanics to macroeconomics\cite{chvykov2020causal}. The simplicity of these mathematical models is sometimes the result of approximations (e.g., marginal revenue $\Delta MR = \frac{\Delta TR}{\Delta Q}$ in economics\cite{karl2019principles}), while at other times it seems to be intrinsic to the system (e.g., energy of a photon $E=\hbar \nu$ in physics\cite{einstein1905erzeugung}). Causation naturally exists in these mathematical models, e.g., if a physicist changes the value of the frequency $\nu$, the energy of the photon he or she measures would change accordingly. In many cases, a well-defined mathematical model is not available. Causation then needs to be interpreted from a statistical view of the data. For instance, if one observes a wet road after rain falls a few times, there is possibly a causal relationship between the two facts. If he or she observes it many times without exception, the causal relationship would be strong. Causation has long been widely discussed in many disciplines, such as philosophy, physics, and mathematics\cite{pepper1926emergence, bar2004mathematical, anderson1972more, klein2019uncertainty, baysan2020causal, pearl2009causality, kivelson2016defining, fromm2005types}.
However, the quantitative measure of the "strength" of causation has been seriously studied only since about a decade ago\cite{hoel2013quantifying}. In order to have an effective informational representation of a complex system, the granularity of study is key\cite{yue2022world}. For instance, if we want to investigate the phenomenon of human intelligence, we can study it at the atomic level, since everything including neurons and nerve cells is made of atoms; we can study it at the molecular level (i.e., a group of atoms) by looking into the chemical reactions responsible for the regulation and function of brain activities; we can study it at the cell level (i.e., a group of molecules) by looking into the different states of neuron cells, the electric signals transmitting among them, and the way they connect to one another; and we can continue upward through minicolumns (i.e., a group of cells\cite{buxhoeveden2002minicolumn}), encephalic regions (e.g., the V2 visual area is a group of minicolumns\cite{kaas1989does, galletti1999cortical}), the whole cortex, the entire brain, etc. We can define different variables at different granularity levels, e.g., in the case of studying the air in a room, we can define the momentum of each molecule ("micro" variable) or the air temperature ("macro" variable). When we do "coarse-graining" by switching granularity from a micro level to a macro level, we lose part of the information. The interesting and somewhat strange phenomenon of causal emergence arises when we investigate causation at different granularity levels. Sometimes, the causation of a system at the macro level is "higher" than at the micro level, or it seems there is no causation at the micro level, but there is significant causation at the macro level. In these cases, we say more causation "emerges" when we do coarse-graining, and there is a causal emergence (CE). CE has been widely discussed in a qualitative fashion\cite{pepper1926emergence, anderson1972more, fromm2005types}.
For example, \cite{fromm2005types} categorizes emergence into four kinds, namely, simple (or nominal) emergence, weak emergence, multiple emergence, and strong emergence. Erik Hoel and coworkers formally started the quantitative study of CE using the measure of causation called effective information (EI). In the systems described by both discrete\cite{hoel2013quantifying, hoel2017map} and continuous\cite{chvykov2020causal} variables, the phenomenon of CE was analyzed, the formulation for calculating the "amount of causal emergence" was mathematically given, and the exact values (in bits) were demonstrated in toy examples. Furthermore, it was shown that the phenomenon of CE persists when changing the measure of causation from EI to other methods\cite{comolatti2022causal}. In order to better understand CE, a recent work viewed CE as a form of information transformation\cite{varley2022emergence}, which indeed helps a lot in understanding CE, while relying on the definition of three different types of information. We consider these works an important advancement in the study of CE and causation in complex systems. Nevertheless, despite the extensive conceptual analysis and numerical calculations on the phenomenon of CE, it is far from clear exactly how CE occurs, when it occurs and when it does not. For example, all toy examples in the previously mentioned works that demonstrated CE were artificially (and arbitrarily) chosen to prove and demonstrate the existence of CE, while the cases in which CE does not occur were neglected. Furthermore, there are still debates, and thus a clear explanation is lacking, on why and even whether CE can actually occur, since coarse-graining is an action of discarding part of the information\cite{varley2022emergence}. These issues are hindering us from better understanding complex systems, all the way from social phenomena, psychological patterns, and intelligent creatures to various natural physical complex systems.
In this work, we carefully analyze the quantitative critical conditions of CE, give corresponding mathematical expressions, and propose that the cause of CE is the redistribution of uncertainty in the system. In Section \ref{sec-gen-dis}, we present the experimental tool and equations derived from the materials in the literature. In Section \ref{sec-wsm}, we provide the experimental results that prove the main point of this work. In Section \ref{sec-uni-frame}, we discuss possible methods to push the study of CE conditions forward. In Section \ref{sec-conc}, we summarize the contributions of this paper and future works. \section{Materials \& Methods} \label{sec-gen-dis} \subsection{Fundamental Theory: Effective Information (EI) and Causal Emergence (CE)} Based on Judea Pearl's perturbational framework of causal analysis, Hoel et al. proposed to quantify the causality of a system with Effective Information (EI), a metric of the model's accuracy and interpretability\cite{pearl2009causality, hoel2013quantifying, balduzzi2011information}. They utilized the KL divergence ($D_{KL}$) to measure the probability distribution of the model's state transitions through Causal Information and Effect Information. Hereafter, EI is determined by three model attributes: Determinism, Degeneracy, and the number of states $N$, as shown in Equation \ref{eqn-EI}\cite{hoel2013quantifying}. The Determinism coefficient, derived from Effect Information, represents the confidence in the system's next states given the current state. The Degeneracy coefficient, sourced from Causal Information, indicates how well the precise cause of the next states can be identified. CE is introduced to describe the gain in a model's EI when the model is squeezed by reducing the number of total states. In the rest of this paper, the original model is named the micro model, while the squeezed model is named the macro model.
Equation \ref{eqn-CEmd} reveals the relationship between Causal Emergence and the model's EI gain $\Delta EI$, which is calculated by subtracting the micro model's EI $EI_{micro}$ from the macro model's EI $EI_{macro}$; $CE = \Delta EI > 0$ denotes that CE occurs if the value of $\Delta EI$ is positive. \begin{align} \label{eqn-EI} EI &= \text{Effectiveness} * \log_2(N)\nonumber \\ &= (\text{Determinism\ coefficient} - \text{Degeneracy\ coefficient}) * \log_2(N) \end{align} \begin{align} \label{eqn-CEmd} \Delta EI = EI_{macro} - EI_{micro},\ \text{where } CE = \Delta EI>0 \end{align} The issue is that the original calculation method of Determinism and Degeneracy is computationally intensive. Later, Hoel simplified the computation of these two coefficients in Equation \ref{eqn-EI} by using the system's Transition Probability Matrix (TPM). The TPM in Figure \ref{TPM} is the quantitative representation of the model's Markov chain. The rows in the matrix show all possible current states, and the columns present the future states. By checking the TPM, the transition probabilities from the current state to all potential future states can be acquired. In Equation \ref{eqn-det}, $do(S=s_{ij})$ specifies that the current state $s_{ij}$ is chosen from the model's set of potential interventions $I_D$, whose probability distribution is equivalent to $H^{max}$, the maximum entropy distribution over the model's current states $S$. The conditional probability distribution $p(S_F|do(S=s_{ij}))$, which indicates the probability distribution of the model's future states $S_F$, is retrieved by checking the TPM's row captioned with $s_{ij}$. The determinism is the average of the KL divergences ($D_{KL}$) between all possible $p(S_F|do(S=s_{ij}))$ and $H^{max}$, normalized by $\log_2(N)$. In Equation \ref{eqn-deg}, $U_D$ denotes the model's effective distribution, which can also be gained from the TPM.
The degeneracy is the KL divergence between $U_D$ and $I_D$, normalized by $\log_2(N)$. With this novel algorithm, Hoel managed to reproduce CE in his coarse-graining experiments. The limitation is that this algorithm can only be applied to models with discrete states, while in practice many models tend to have continuous states. \begin{align} \label{eqn-det} \centering \text{determinism} = \frac{1}{N}\sum\limits_{do(s_{ij})\in I_D} \frac{D_{KL}(p(S_F|do(S=s_{ij}))||H^{max})}{\log_2(N)} \end{align} \begin{align} \label{eqn-deg} \centering \text{degeneracy} = \frac{D_{KL}(U_D||I_D)}{\log_2(N)} \end{align} \begin{figure}[t] \centering \includegraphics[scale=0.25]{Figures - Materials/Deterministic_TPM_and_Probabilistic_TPM.jpg} \caption{Transition Probability Matrix (TPM) of a Four-State (Two-Variable) Discrete Model. (a) is an example of a deterministic model's TPM. (b) is an example of a stochastic model's TPM.} \label{TPM} \end{figure} In 2021, Hoel and Chvykov quantified the causation of continuous models by developing the $EI_g$ algorithm based on Information Geometry Theory\cite{chvykov2020causal, amari2016information}. $EI_g$ can be calculated by Equation \ref{eqn-EIg}, where $V_I$ represents the volume of the intervention manifold, and $\left<l(\theta)\right>_I$ quantifies the causation lost in the system's modeling process, through the identity matrix $\mathds{1}$ and the Fisher Information Matrices, where $h_{\mu v}$ quantifies the intervention manifold and $g_{\mu v}$ measures the effect manifold\cite{chvykov2020causal}. The drawback is that the computational complexity of the Fisher Information Matrices hinders the $EI_g$ implementation.
For one-variable models, $EI_g$ can be approximated by Equation \ref{eqn-EIa}, where $\theta$ denotes the model's variable, $\delta$ is the noise that influences the mapping from the intervention to $\theta$, and $\epsilon$ indicates the noise of the mapping from $\theta$ to the final effect, which makes the causal measurement for the one-variable continuous scenario more practical\cite{chvykov2020causal}. \begin{align} \label{eqn-EIg} \centering &EI_g = \log\left[\frac{V_I}{(2\pi e)^{d/2}}\right]-\left<l(\theta)\right>_I\\ &\textbf{with}\ l(\theta)=\frac{1}{2}\log\det\left(\mathds{1}+g_{\mu v}^{-1}h_{\mu v}\right)\nonumber \end{align} \begin{align} \label{eqn-EIa} \centering EI_{approx}\simeq -\frac{1}{2}\int d\theta \log\left[2\pi e\left(\left(\frac{\epsilon}{f'(\theta)}\right)^2+\delta^2\right)\right] \end{align} To sum up, Hoel discovered CE by estimating the EI of both discrete and continuous models\cite{chvykov2020causal, hoel2013quantifying, hoel2017map}. However, to the best of our knowledge, the specific conditions for CE have not been discussed in academic papers\cite{rosas2020reconciling, zhang2022neural}. This paper aims to show that uncertainty redistribution is a significant factor in the CE produced by coarse-graining operations. \subsection{Experiment Tool: General EI Calculator} Based on Equations \ref{eqn-EI}, \ref{eqn-det}, and \ref{eqn-deg}, we created the General EI Calculator to verify our assumption and plot the relationship between a discrete model's EI and its uncertainty. This calculator has three components: Hyperparameter Adjustor, TPM Generator, and EI Meter. There are three hyperparameters: $n$, $future\_dg\_num$, and $dg\_level$. The $n$ indicates the number of the model's binary variables (so the number of total states is $N=2^n$). The $future\_dg\_num$ and $dg\_level$ determine the model's degeneracy. Based on the three hyperparameters above, the TPM Generator produces the stochastic TPM with the uncertainty $x$ accordingly. Finally, the EI Meter obtains the EI value from the generated TPM.
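The EI Meter step can be sketched directly from Equations \ref{eqn-EI}, \ref{eqn-det} and \ref{eqn-deg}. The snippet below is our own illustration, not the calculator's actual source code; for a deterministic, non-degenerate (permutation) TPM it returns the expected $EI=\log_2(N)$ bits, and for a fully degenerate TPM the two coefficients cancel.

```python
from math import log2

def kl_bits(p, q):
    """D_KL(p || q) in bits; terms with p_i = 0 contribute nothing."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def effective_information(tpm):
    """EI = (determinism - degeneracy) * log2(N), following Eqs. (eqn-det)/(eqn-deg)."""
    n = len(tpm)
    uniform = [1.0 / n] * n                               # H^max and I_D
    determinism = sum(kl_bits(row, uniform) for row in tpm) / n / log2(n)
    effect = [sum(col) / n for col in zip(*tpm)]          # U_D under uniform interventions
    degeneracy = kl_bits(effect, uniform) / log2(n)
    return (determinism - degeneracy) * log2(n)

# Deterministic, non-degenerate 4-state TPM (a permutation): EI = log2(4) = 2 bits.
perm = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
print(effective_information(perm))   # 2.0
# Fully degenerate TPM (every state maps to state 0): determinism and degeneracy cancel, EI = 0.
degen = [[1, 0, 0, 0]] * 4
print(effective_information(degen))  # 0.0
```

These two extreme cases bracket the stochastic TPMs produced by the TPM Generator, whose rows interpolate between a delta distribution ($x=1$) and a flatter one as the uncertainty $x$ decreases.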
As shown in Figure \ref{gen-TPM}, the TPM Generator contains three processes: Digital Circuit, Add Uncertainty, and Inverse Digital Circuit. The assumption is that the model's states are encoded by $n$ binary variables, so the relationship between the total state number $N$ and the number of binary variables $n$ is $N=2^n$. Digital Circuit builds the variable dynamics matrix from the positions of the $1$-elements of the deterministic TPM. For example, matrix (a) is the deterministic TPM of a four-state model with two binary variables $m$ and $n$, and matrix (c) presents the probabilities of $m=1$ and $n=1$ given the current state $s_{ij}$. To transfer matrix (c) to matrix (d), the specific uncertainty $x$ is introduced by Add Uncertainty; specifically, we replace the $1$-elements and $0$-elements of (c) with $x$ and $1-x$ respectively, and as an example we set $x=0.9$ to derive (d). Inverse Digital Circuit denotes the calculation that reproduces the stochastic TPM from the dynamics. Algorithms 1 and 2 describe the Digital Circuit and Inverse Digital Circuit processes.
\begin{algorithm}[] \caption{Digital Circuit Process.} \LinesNumbered \For{each 1-element's position in the deterministic TPM}{ \If{the first row's 1-element represents the future state $01$}{ Choose the first column of the dynamics matrix\; The first row of this column gets a 0-element\; The second row of this column gets a 1-element\; } \If{the $n$-th row's 1-element represents the future state $s_F\in S_F$}{ Choose the $n$-th column of the dynamics matrix\; Each row's element in this column is determined by $s_F$\; } } \end{algorithm}
\begin{algorithm}[] \caption{Inverse Digital Circuit Process.}
\LinesNumbered Obtain the probabilities $p(s_F|s_{ij})$ of each future state $s_F$ given the current state $s_{ij}$\; \For{each $s_{ij}$ represented by a column of the dynamics}{ $p(s_F=00|s_{ij})=[1-p(m=1|s_{ij})]*[1-p(n=1|s_{ij})]$\; $p(s_F=01|s_{ij})=[1-p(m=1|s_{ij})]*p(n=1|s_{ij})$\; $p(s_F=10|s_{ij})=p(m=1|s_{ij})*[1-p(n=1|s_{ij})]$\; $p(s_F=11|s_{ij})=p(m=1|s_{ij})*p(n=1|s_{ij})$\; } \end{algorithm}
\begin{figure}[t] \centering \includegraphics[scale=0.4232]{Figures - Materials/Final_TPM_Generation.jpg} \caption{The TPM Generator's Process. The variable dynamics (c) is derived from the deterministic TPM through the "digital circuit" process. The $1$-elements in (c) are replaced by $x$, and the $0$-elements are substituted by $1-x$. With the probabilistic variable dynamics (d), the stochastic TPM (b) is calculated by the inverse "digital circuit" process.} \label{gen-TPM} \end{figure}
\section{Results} \label{sec-wsm}
\subsection{Findings: CE Uncertainty Thresholds and the CE Quantification Equation (CEQE)}
\subsubsection{CE Uncertainty Thresholds: The first condition of the CE by Coarse-graining}
\begin{figure}[b] \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=2.5in]{Figures-Results/EI_Curves_of_Models_whose_Sizes_range_from_1_to_11.png} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=2.5in]{Figures-Results/intersections_between_each_micro_model_and_macro_model.png} \end{minipage} \caption{EI Curves of Models whose Numbers of Variables Range from 1 to 11. (a) shows that the EI of the micro models (whose numbers of variables range from 2 to 11) is larger than the one-variable macro model's EI before the Uncertainty Magnitude reaches a specific value. (b) highlights those specific values by the intersections between each curve and the straight line.
The CE will occur when squeezing the micro models to the macro model if the Uncertainty Magnitude is greater than the value displayed by the intersections.} \label{Fig_EI_Cur_Inter} \end{figure}
As shown in Figure \ref{Fig_EI_Cur_Inter}, we use the General EI Calculator to plot the EI curves of models whose numbers of variables range from 1 to 11, in order to investigate the relationship between the CE and the uncertainty that influences the models. In this experiment, the CE is to be derived by squeezing ten micro models, whose numbers of variables range from 2 to 11, to the one-variable macro model. Every micro model's degeneracy is zero to avoid any surplus effect on the model's EI, and the x-axis is $-\log_2(x)$, the minus logarithm of the uncertainty $x$, which denotes the increasing influence of the uncertainty. In the rest of this article, the specific value of $-\log_2(x)$ is referred to as the "Uncertainty Magnitude." In Figure \ref{Fig_EI_Cur_Inter}, sub-figure (a) shows the one-variable macro model's EI as the red straight line, since the one-variable model cannot be influenced by the uncertainty $x$. The micro models' EIs, shown by the colored curves, are therefore greater than the macro model's EI while the Uncertainty Magnitude stays in a low range, and the CE will not occur by squeezing the models, according to the mathematical definition in Equation \ref{eqn-CEmd}. In sub-figure (b), the intersections between the micro models' EI curves and the macro model's EI line are highlighted to indicate the ranges of the Uncertainty Magnitude in which the CE will or will not occur. To the right of an intersection, the CE is allowed to happen, since the micro model's curve is under the macro model's straight line; conversely, the region to the left of an intersection is the range of Uncertainty Magnitudes that does not allow the CE by Coarse-graining.
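The shape of these EI curves can be reproduced in outline with a small sketch of the TPM Generator plus EI Meter. The cyclic bit-shift used below as the deterministic, zero-degeneracy rule is our own stand-in for the generator's circuit; the paper's actual dynamics may differ.

```python
from itertools import product
from math import log2

def make_tpm(n, x):
    """TPM Generator sketch: each of the n binary variables reaches its
    deterministic target bit with probability x (the Add Uncertainty step),
    and the row probabilities are products of per-bit probabilities
    (the Inverse Digital Circuit step). The deterministic rule is an
    assumed cyclic bit shift, which is a permutation (zero degeneracy)."""
    states = list(product((0, 1), repeat=n))
    tpm = []
    for s in states:
        target = s[1:] + s[:1]  # deterministic successor state
        row = []
        for f in states:
            p = 1.0
            for bt, bf in zip(target, f):
                p *= x if bf == bt else 1 - x
            row.append(p)
        tpm.append(row)
    return tpm

def ei(tpm):
    # EI = (determinism - degeneracy) * log2(N)
    #    = entropy of the averaged row minus the average row entropy
    n_states = len(tpm)
    h = lambda p: -sum(q * log2(q) for q in p if q > 0)
    avg = [sum(col) / n_states for col in zip(*tpm)]
    return h(avg) - sum(h(row) for row in tpm) / n_states

# At x = 1 the two-variable model attains EI = 2; with enough uncertainty
# its EI falls below the one-variable macro model's EI of 1, as in (b).
print(ei(make_tpm(2, 1.0)), ei(make_tpm(2, 0.85)))
```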
Consequently, the specific Uncertainty Magnitude given by the intersection's x-coordinate is called the "CE Uncertainty Threshold," and it quantitatively defines the CE's first condition, as summarized in Conjecture \ref{def-cond1}.
\begin{prop} \label{def-cond1} The first uncertainty condition of the CE by Coarse-graining is that the original micro model's Uncertainty Magnitude has to exceed the corresponding CE Uncertainty Threshold. \end{prop}
To compute the exact values of the thresholds, we propose the CE Quantification Equation (CEQE), which follows the general regularity of the model's TPM row distributions.
\subsubsection{CEQE: The calculation method of CE Uncertainty Thresholds}
Once the tested model's TPM has been derived, the model's determinism depends exclusively on the uncertainty, as described in Conjecture \ref{the-uncerdet}.
\begin{prop} \label{the-uncerdet} The model's determinism relates only to the uncertainty $x$, as shown in Equation \ref{eqn-xdet}. \begin{align} \label{eqn-xdet} determinism = 1+(1-x)\log_2(1-x)+x\log_2(x) \end{align} \end{prop}
Equation \ref{eqn-xdet} is derived from the general regularity of the TPM's row distributions: each row contains the same elements in different layouts. Based on the TPM Generator's process in Figure \ref{gen-TPM}, the TPMs of the two-variable and three-variable models are produced as examples of this row regularity. $M_2$ and $M_3$ are the generated stochastic TPMs, whose elements are represented as polynomials in $x$.
$$M_2=\left[\begin{matrix} x(1-x) & x^2 & (1-x)^2 & x(1-x)\\ x(1-x) & (1-x)^2 & x^2 & x(1-x)\\ (1-x)^2 & x(1-x) & x(1-x) & x^2\\ x^2 & x(1-x) & x(1-x) & (1-x)^2\\ \end{matrix}\right]$$
$$M_3=\left[\begin{matrix} x^2(1-x) & x^3 & \cdots & (1-x)^3 & x(1-x)^2\\ x^2(1-x) & x(1-x)^2 & \cdots & x^2(1-x) & x(1-x)^2\\ \vdots & \vdots & \ddots & \vdots &\vdots\\ (1-x)^3 & x(1-x)^2 & \cdots & x^2(1-x) & x^3\\ x^3 & x^2(1-x) & \cdots & x(1-x)^2 & (1-x)^3 \end{matrix}\right]$$
The regularity is that each row of $M_2$ has one $(1-x)^2$, two $x(1-x)$, and one $x^2$, and each row of $M_3$ includes one $(1-x)^3$, three $x(1-x)^2$, three $x^2(1-x)$, and one $x^3$. This regularity also exists in the TPMs of the other models with 4-11 binary variables. In each row, the numbers of $x^{a-1}(1-x)^{n-(a-1)}$ and $x^{n-(a-1)}(1-x)^{a-1}$ terms are the same, where $n$ indicates the number of variables and $a$ is the order coefficient of the $x$-polynomial. Based on the regularities of the different models' TPMs, we represent the general row distribution as a set $E_D$, where $(Num)$ denotes the number of identical polynomials contained in the row distribution:
\begin{align} \label{eqn-ed} E_D &=\left\{(1-x)^n,\cdots,\ \left(Num\right)x^{n-(a-1)}(1-x)^{a-1},\ \left(Num\right)x^{a-1}(1-x)^{n-(a-1)},\cdots,\ x^n\right\} \nonumber\\ \centering &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{where } Num = \prod\limits_{m=1}^{a-1}\frac{n-m+1}{m} \end{align}
The only difference among the row distributions is the layout of the elements of $E_D$, and these layout differences do not influence the value of the KL Divergence between $E_D$ and the Maximum Entropy Distribution $H^{max}$. Therefore, the mathematical relationship between $D_{KL}(E_D||H^{max})$ and the uncertainty $x$ can be expressed by Equation \ref{eqn-Dkl}.
\begin{align} \label{eqn-Dkl} D_{KL}(E_D||H^{max}) &=E_D\cdot \log_2\left(\frac{E_D}{H^{max}}\right)=\sum\limits_{p_i\in E_D} p_i\cdot\log_2\left(\frac{p_i}{\frac{1}{2^n}}\right)\nonumber\\ &=n[1+(1-x)\log_2(1-x)+x\log_2(x)] \end{align}
In Equation \ref{eqn-det}, $p(S_F|do(S=s_{ij}))$ represents the conditional probability distribution of the future states $S_F$ given the current state $s_{ij}$, and it can be read off from the corresponding row of the TPM. This conditional distribution is therefore equivalent to $E_D$, i.e., $p(S_F|do(S=s_{ij}))=E_D$. With the relationship $N=2^n$ between the number of states $N$ and the number of variables $n$, Equation \ref{eqn-det} can be rewritten in the new parameters as Equation \ref{eqn-ndet}, which, by Equation \ref{eqn-Dkl}, simplifies to Equation \ref{eqn-xdet} in the uncertainty $x$. This general expression proves Conjecture \ref{the-uncerdet}: the model's determinism is determined solely by the uncertainty.
\begin{align} \label{eqn-ndet} determinism &= \frac{1}{2^n}\sum \frac{D_{KL}(E_D||H^{max})}{n}\nonumber\\ &= 1+(1-x)\log_2(1-x)+x\log_2(x) \end{align}
By replacing the determinism term in Equation \ref{eqn-EI} with Equation \ref{eqn-xdet}, the CEQE is derived to quantify the uncertainty threshold as the condition of the CE when Coarse-graining an $n$-variable original micro model to a targeted macro model with a fixed $EI$, as shown in Equation \ref{eqn-ceqe}. However, the threshold's exact value cannot be obtained from Equation \ref{eqn-ceqe} in closed form. Therefore, we propose Algorithm 3 to find an approximate solution $x$ of the CEQE, whose minus logarithm $-\log_2(x)$ is the Uncertainty Magnitude of the CE Uncertainty Threshold.
\begin{align} \label{eqn-ceqe} 1+(1-x)\log_2(1-x)+x\log_2(x) - degeneracy = \frac{EI}{n} \end{align}
\begin{algorithm}[] \caption{Searching for an Approximate Solution of the CEQE.
The error of the approximate solution is less than $10^{-6}$.} \LinesNumbered \While{$|\left[1+(1-x_i)\log_2(1-x_i)+x_i\log_2(x_i) - degeneracy\right] - \left[\frac{EI}{n}\right]|>T_c$}{ \eIf{$\left[1+(1-x_i)\log_2(1-x_i)+x_i\log_2(x_i)-degeneracy\right] - \left[\frac{EI}{n}\right]>T_c$}{ increase the value at the current decimal place\; }{ go to the next decimal place\; increase the value at the current decimal place\; } } \end{algorithm}
\begin{figure}[b] \centering \includegraphics[scale=0.25]{Figures-Results/n_to_1.png} \caption{Approximate Uncertainty Magnitudes of the CE Uncertainty Thresholds for Coarse-graining each Micro Model in Figure \ref{Fig_EI_Cur_Inter} to the One-Variable Macro Model. This result verifies that the CEQE is applicable for quantifying the thresholds as the first condition of the CE by Coarse-graining.} \label{Fig_approx_thres_results} \end{figure}
To prove the CEQE's generality, we apply Equation \ref{eqn-ceqe} and Algorithm 3 to calculate the CE Uncertainty Thresholds shown by the intersections in Figure \ref{Fig_EI_Cur_Inter}'s sub-figure (b). In Figure \ref{Fig_approx_thres_results}, based on Equation \ref{eqn-CEmd}, whether the CE will occur is illustrated by the $\Delta EI$ curves, drawn in the same colors as the EI curves in Figure \ref{Fig_EI_Cur_Inter}. Under the experimental conditions mentioned above, the $degeneracy$ in the CEQE is zero, and the targeted $EI$ is always $1$. The number of the micro model's variables $n$ is chosen from the range $[2,\ 11]$ when computing the threshold for squeezing the corresponding original model to the targeted model. For example, when deriving the CE Uncertainty Threshold for Coarse-graining the two-variable micro model, as shown by the sub-figure "Micro Model Size=2", the result $0.1682$ is obtained by using Algorithm 3 to solve the designated equation, $1+(1-x)\log_2(1-x)+x\log_2(x) = \frac{1}{2}$.
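As an alternative to the digit-by-digit search of Algorithm 3, the CEQE can be solved by bisection, since the determinism expression increases monotonically in $x$ on $(0.5,\ 1)$. This is our own sketch, not the authors' code; with $n=2$, $EI=1$, and zero degeneracy it reproduces the threshold $0.1682$.

```python
from math import log2

def determinism(x):
    # determinism as a function of the uncertainty x (Eq. eqn-xdet)
    return 1 + (1 - x) * log2(1 - x) + x * log2(x)

def ce_threshold(n, target_ei, degeneracy=0.0):
    """Bisection on determinism(x) - degeneracy = target_ei / n over
    x in (0.5, 1), where determinism rises from 0 to 1; returns the
    threshold as an Uncertainty Magnitude -log2(x)."""
    lo, hi = 0.5 + 1e-12, 1.0 - 1e-12
    rhs = target_ei / n + degeneracy
    for _ in range(200):
        mid = (lo + hi) / 2
        if determinism(mid) < rhs:
            lo = mid  # determinism too low -> need larger x
        else:
            hi = mid
    return -log2((lo + hi) / 2)

# Squeezing the two-variable micro model (n = 2) to the one-variable
# macro model (EI = 1) gives the threshold from sub-figure "Micro Model Size=2".
print(round(ce_threshold(2, 1.0), 4))  # -> 0.1682
```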
This process can be utilized repeatedly to quantify the CE Uncertainty Threshold for squeezing a micro model with an arbitrary number of variables to any macro model, as Experiment 1's results below will confirm.
\subsection{Mathematical Proofs of Uncertainty Redistribution: When the CE will occur by general Coarse-graining}
\subsubsection{When the CE will not occur}
For general Coarse-graining operations, the selection of the targeted model is more flexible than choosing the one-variable macro model, whose EI is always $1$. Figure \ref{Fig_approx_thres_results}'s experimental results prove Conjecture \ref{def-cond1}: the first CE uncertainty condition is that the micro model's Uncertainty Magnitude should be larger than the CE Uncertainty Threshold's value. Therefore, when the EIs of a Coarse-graining's original model and targeted model can both be influenced by corresponding uncertainties, $x_{micro}$ and $x_{macro}$ respectively, there is a pair of Uncertainty Magnitudes that makes the micro model's EI equal to the macro model's EI, denoted by $EI_{micro}=EI_{macro}$. To emphasize the CE impeded by the model's Uncertainty Magnitude, we still assume both models' $degeneracy$ is zero; that is, $EI_{micro}=EI_{macro}$ can be expressed by the equation below.
\begin{align} \centering \label{eqn-genal-CE} n_{micro}&\left[1+(1-x_{micro})\log_2(1-x_{micro})+x_{micro}\log_2(x_{micro})\right] \nonumber\\ &=n_{macro}\left[1+(1-x_{macro})\log_2(1-x_{macro})+x_{macro}\log_2(x_{macro})\right] \nonumber\\ \Rightarrow\ \ &\frac{\left[1+(1-x_{micro})\log_2(1-x_{micro})+x_{micro}\log_2(x_{micro})\right]}{\left[1+(1-x_{macro})\log_2(1-x_{macro})+x_{macro}\log_2(x_{macro})\right]}=\frac{n_{macro}}{n_{micro}} \end{align}
In Equation \ref{eqn-genal-CE}, $n_{micro}$ denotes the number of variables of the original model affected by the uncertainty $x_{micro}$, and $n_{macro}$ represents the targeted number of macro variables, whose model's EI is reduced by the uncertainty $x_{macro}$.
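Equation \ref{eqn-genal-CE} can be probed numerically: with $x_{micro}=x_{macro}$ the determinism ratio is $1$ and can never equal $n_{macro}/n_{micro}<1$, while redistributing the uncertainty yields a balancing $x_{macro}$. The values $n_{micro}=3$, $n_{macro}=2$, $x_{micro}=0.8$ below are our own illustrative assumptions.

```python
from math import log2

def determinism(x):
    # determinism as a function of the uncertainty x (Eq. eqn-xdet)
    return 1 + (1 - x) * log2(1 - x) + x * log2(x)

n_micro, n_macro = 3, 2
x_micro = 0.8  # assumed original uncertainty

# Without redistribution (x_macro = x_micro) the ratio of the two
# determinism terms is exactly 1, never n_macro / n_micro < 1.
assert determinism(x_micro) / determinism(x_micro) == 1.0

# With redistribution, bisection finds the x_macro that balances the EIs,
# i.e. n_macro * determinism(x_macro) = n_micro * determinism(x_micro).
lo, hi = 0.5 + 1e-12, 1.0 - 1e-12
for _ in range(200):
    mid = (lo + hi) / 2
    if n_macro * determinism(mid) < n_micro * determinism(x_micro):
        lo = mid
    else:
        hi = mid
x_macro = (lo + hi) / 2
# the balancing pair has a smaller macro Uncertainty Magnitude
print(-log2(x_micro), -log2(x_macro))
```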
After the general Coarse-graining, the micro model is mapped to a possible macro model with fewer variables, which means $n_{micro}>n_{macro}$ and hence $\frac{n_{macro}}{n_{micro}}<1$. If the Coarse-graining does not change the original uncertainty $x_{micro}$, namely, the micro uncertainty is not redistributed by the operation, then $x_{micro}=x_{macro}$ and we derive $\frac{\left[1+(1-x_{micro})\log_2(1-x_{micro})+x_{micro}\log_2(x_{micro})\right]}{\left[1+(1-x_{macro})\log_2(1-x_{macro})+x_{macro}\log_2(x_{macro})\right]}=1$. This result makes Equation \ref{eqn-genal-CE} a false proposition, and we propose Conjecture \ref{def-whenot} as the summary.
\begin{prop} \label{def-whenot} The selected Coarse-graining strategy has to change the original model's uncertainty $x_{micro}$ so that $x_{micro}\neq x_{macro}$. Otherwise, the CE will not occur. \end{prop}
For an arbitrary system, we assume that if the uncertainties of its micro variables are carried along into its macro variables (each of which is some combination of micro variables), then the CE can never happen. For the CE to possibly happen, uncertainties have to redistribute among these variables during Coarse-graining. We show below that uncertainty redistribution is a necessary condition of the CE.
\subsubsection{When the CE will occur}
\begin{figure}[b] \centering \includegraphics[scale=0.35]{Figures-Results/Monotonicity_of_the_Determinism_Expression_with_the_Uncertainty.png} \caption{Trajectory of the Model's Determinism as a Function of the Uncertainty Magnitude. As the uncertainty's influence increases, the model's determinism decreases monotonically.
This monotonicity is significant for proving that decreasing the original uncertainty is the second condition of the CE by Coarse-graining.} \label{Fig_mono_det_expres} \end{figure}
Given the monotonicity of Equation \ref{eqn-det} shown in Figure \ref{Fig_mono_det_expres}, redistributing the original $x_{micro}$ to a smaller targeted Uncertainty Magnitude during the Coarse-graining operation is the second necessary condition of the CE, as stated in Conjecture \ref{def-cond2}.
\begin{prop} \label{def-cond2} The second uncertainty condition of the CE by Coarse-graining is that, while squeezing the model, the original Uncertainty Magnitude $-\log_2(x_{micro})$ has to be reduced into the targeted range bounded by $-\log_2(x_{macro})$. \end{prop}
As a validation of Conjecture \ref{def-cond2}, the previous CE experiment shown in Figure \ref{Fig_approx_thres_results} illustrates that the original Uncertainty Magnitude, which is greater than the CE Uncertainty Threshold, is decreased to zero for the CE by Coarse-graining the micro model to the one-variable macro model. However, the choice of the Coarse-graining operation's targeted model should be more flexible: the macro model can be any model with fewer variables than the original model. For example, the macro model can be the one-variable model when squeezing the three-variable micro model, as shown in Figure \ref{Fig_approx_thres_results}'s sub-figure "Micro Model Size = 3", but the two-variable model influenced by the uncertainty $x_{macro}$ can also be the Coarse-graining's targeted model if its EI is greater than the original EI. Therefore, it is indispensable to verify that $-\log_2(x_{micro})>-\log_2(x_{macro})$ is necessary for the CE by general Coarse-graining.
In the process of Coarse-graining, the micro model's number of variables $n_{micro}$ is reduced to the macro model's number of variables $n_{macro}$, so $\frac{n_{macro}}{n_{micro}}<1$. As shown for Conjecture \ref{def-whenot}, $x_{micro}=x_{macro}$ negates Equation \ref{eqn-genal-CE}. Consequently, the inequality $\frac{\left[1+(1-x_{micro})\log_2(1-x_{micro})+x_{micro}\log_2(x_{micro})\right]}{\left[1+(1-x_{macro})\log_2(1-x_{macro})+x_{macro}\log_2(x_{macro})\right]}<1$ is essential if Equation \ref{eqn-genal-CE} is to be a true proposition. Equation \ref{ineqn-genal-CE} displays the simplification of this inequality, since the expression $1+(1-x)\log_2(1-x)+x\log_2(x)$ is positive over the relevant range of Uncertainty Magnitudes.
\begin{align} \centering \label{ineqn-genal-CE} &\frac{\left[1+(1-x_{micro})\log_2(1-x_{micro})+x_{micro}\log_2(x_{micro})\right]}{\left[1+(1-x_{macro})\log_2(1-x_{macro})+x_{macro}\log_2(x_{macro})\right]}<1 \nonumber\\ &\left[1+(1-x_{micro})\log_2(1-x_{micro})+x_{micro}\log_2(x_{micro})\right]\nonumber\\ &\ \ \ \ <\left[1+(1-x_{macro})\log_2(1-x_{macro})+x_{macro}\log_2(x_{macro})\right] \end{align}
As Figure \ref{Fig_mono_det_expres} shows, the determinism expression $1+(1-x)\log_2(1-x)+x\log_2(x)$ decreases monotonically as the Uncertainty Magnitude increases, so $-\log_2(x_{micro})>-\log_2(x_{macro})$ is exactly the condition that makes the inequality in Equation \ref{ineqn-genal-CE} a true proposition. For the CE as defined by Equation \ref{eqn-CEmd}, the macro model's EI, $EI_{macro}$, should be greater than the micro model's EI, $EI_{micro}$, to satisfy the requirement $\Delta EI = EI_{macro} - EI_{micro}>0$. Accordingly, $-\log_2(x_{micro})$ should be greater, and $-\log_2(x_{macro})$ less, than the values derived by solving Equation \ref{eqn-genal-CE}, in order to make the CE occur in the process of mapping the micro model to the macro model.
This result further ensures $-\log_2(x_{micro})>-\log_2(x_{macro})$ and proves Conjecture \ref{def-cond2}: the original Uncertainty Magnitude has to be redistributed to a smaller Uncertainty Magnitude range for the CE by the Coarse-graining operation.
\subsection{Experiment 1: Numerical Proofs of Two Uncertainty Conditions of the CE by Coarse-graining}
\subsubsection{Uncertainty Conditions: Absolute Thresholds and Equivalent Thresholds}
Based on the General EI Calculator's and the CEQE's experimental results, Conjecture \ref{def-ce-unce-cond} summarizes the relationships between the CE and the model's uncertainty.
\begin{prop} \label{def-ce-unce-cond} By any Coarse-graining operation, the CE will occur when the model's uncertainty satisfies two conditions: \begin{enumerate} \item The original micro model's Uncertainty Magnitude has to exceed the CE Uncertainty Threshold, a specific Uncertainty Magnitude that marks the boundary of the CE permission range. \\ \item The targeted macro model's Uncertainty Magnitude has to lie in a certain range, bounded by zero Uncertainty Magnitude and $-\log_2(x_{macro})$, the solution of Equation \ref{eqn-genal-CE} for a fixed original uncertainty $x_{micro}$. \end{enumerate} \end{prop}
In this experiment, we utilize general Coarse-graining operations, whose target can be any model with fewer variables than the micro model, to acquire Absolute Thresholds and Equivalent Thresholds as evidence for the two CE Uncertainty Conditions proposed in Conjecture \ref{def-ce-unce-cond}. The Absolute Threshold indicates the original Uncertainty Magnitude $-\log_2(x_{micro})$ at which the micro model's EI equals the macro model's maximum EI, and the Equivalent Threshold represents the targeted Uncertainty Magnitude $-\log_2(x_{macro})$ that decreases the macro model's EI to the EI of a micro model influenced by a specific Uncertainty Magnitude above the Absolute Threshold.
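Both threshold types reduce to root-finding on the CEQE with zero degeneracy. The sketch below is our own code; the choices $n_{micro}=3$, $n_{macro}=2$, and the micro magnitude $0.25$ are illustrative assumptions.

```python
from math import log2

def determinism(x):
    # determinism as a function of the uncertainty x (Eq. eqn-xdet)
    return 1 + (1 - x) * log2(1 - x) + x * log2(x)

def solve_magnitude(n, target_ei):
    """Bisection on determinism(x) = target_ei / n over x in (0.5, 1)
    (degeneracy = 0 assumed); returns the magnitude -log2(x)."""
    lo, hi = 0.5 + 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = (lo + hi) / 2
        if determinism(mid) < target_ei / n:
            lo = mid
        else:
            hi = mid
    return -log2((lo + hi) / 2)

# Absolute Threshold: micro EI (n = 3) equals the two-variable macro
# model's maximum EI of 2.
absolute = solve_magnitude(3, 2.0)

# Equivalent Threshold: macro magnitude (n = 2) at which the macro EI
# equals the EI of the three-variable micro model fixed at magnitude 0.25.
x_micro = 2 ** -0.25
equivalent = solve_magnitude(2, 3 * determinism(x_micro))

print(absolute, equivalent)  # equivalent < 0.25, as the second condition requires
```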
To derive the Absolute and Equivalent Thresholds by Equation \ref{eqn-ceqe}, we assume that all models' degeneracies are zero, so that the experimental results underline the uncertainty's effect on the CE.
\begin{figure}[t] \centering \includegraphics[scale=0.25]{Figures-Results/Absolute_Thresholds_n_to_2.png} \caption{Absolute Thresholds of the CE by Coarse-graining Nine Micro Models to the Two-Variable Macro Model. Each $\Delta EI$ curve represents the CE by squeezing one of the micro models to the macro model. The thresholds labeled on the curves indicate the first CE Uncertainty condition. This result shows that the micro model's choice is flexible for general Coarse-graining operations.} \label{Fig_Abs_Thres_n_2} \end{figure}
\begin{figure}[t] \centering \includegraphics[scale=0.25]{Figures-Results/Absolute_Thresholds_11_to_n.png} \caption{Absolute Thresholds of the CE by Coarse-graining the Eleven-Variable Micro Model to Nine Macro Models. Each $\Delta EI$ curve represents the CE by squeezing the micro model to one of the macro models. The thresholds labeled on the curves indicate the first CE Uncertainty condition. This result shows that the macro model's choice is flexible for general Coarse-graining operations.} \label{Fig_Abs_Thres_11_n} \end{figure}
In Figures \ref{Fig_Abs_Thres_n_2} and \ref{Fig_Abs_Thres_11_n}, the general Coarse-graining operations with the same targeted macro model and the same original micro model, respectively, represent the flexibility of the model's choice. Figure \ref{Fig_Abs_Thres_n_2} provides nine $\Delta EI$ curves to illustrate the CE by squeezing nine micro models to the two-variable macro model. When calculating the nine Absolute Thresholds labeled in this figure, the targeted model's $EI$ is always $2$, the maximum of the two-variable model, and the number of variables $n$ is chosen from the range $[3,\ 11]$, corresponding to the micro model sizes captioned above each curve.
Therefore, when the original Uncertainty Magnitude is greater than the Absolute Threshold, $\Delta EI>0$ indicates the occurrence of the CE under the corresponding general Coarse-graining operation. In Figure \ref{Fig_Abs_Thres_11_n}, the maximum EIs of the macro models range from $2$ to $10$, and the eleven-variable micro model's EI is subtracted from them to derive the relevant $\Delta EI$ curves. As in Figure \ref{Fig_Abs_Thres_n_2}, for Coarse-graining the same micro model to each possible macro model, the CE occurs when the original Uncertainty Magnitude exceeds the Absolute Threshold. Therefore, this experimental result proves the first condition of Conjecture \ref{def-ce-unce-cond}: the CE Uncertainty Threshold, presented by the Absolute Thresholds in this experiment, is the boundary of the original Uncertainty Magnitude that allows the CE by any Coarse-graining operation.
\begin{figure}[t] \centering \includegraphics[scale=0.40]{Figures-Results/Equivalent_Thresholds_3_to_2.png} \caption{Equivalent Thresholds of the CE by Squeezing the Uncertain Three-Variable Micro Model to the Two-Variable Macro Model. Each fixed micro EI represents the three-variable model influenced by a $0.12$, $0.25$, or $0.42$ Uncertainty Magnitude, respectively. The Equivalent Thresholds are labeled on the $\Delta EI$ curves. This numerical result confirms the second CE Uncertainty Condition: the original Uncertainty Magnitude has to be redistributed into a specific Uncertainty Magnitude range for the CE by any Coarse-graining operation.} \label{Fig_Eqv_Thres} \end{figure}
To introduce the role of the Equivalent Thresholds as verification of the second CE condition, we adopt a $\Delta EI$ calculation method different from that of Figures \ref{Fig_Abs_Thres_n_2} and \ref{Fig_Abs_Thres_11_n}.
For example, in Figure \ref{Fig_Eqv_Thres}, the two-variable macro model's EI changes with the Uncertainty Magnitude, while the three-variable micro model's EIs are fixed values representing the micro model influenced by specific Uncertainty Magnitudes. The $\Delta EI$ curves produced by this method decrease monotonically as the Uncertainty Magnitude increases. The three Equivalent Thresholds are the solutions of Equation \ref{eqn-ceqe} under the conditions that $degeneracy = 0$, the variable number $n=2$, and the targeted $EI$ is selected from the three EIs of the micro model with the $0.12$, $0.25$, and $0.42$ Uncertainty Magnitudes. Consequently, for Coarse-graining a micro model influenced by an Uncertainty Magnitude greater than the Absolute Threshold, $\Delta EI>0$ holds while the targeted macro model's Uncertainty Magnitude is less than the Equivalent Threshold. The CE will occur if the macro Uncertainty Magnitude belongs to the range $[0,\ \text{Equivalent Threshold})$, where the Equivalent Threshold is smaller than the original Uncertainty Magnitude above the Absolute Threshold. This result supports the second condition of Conjecture \ref{def-ce-unce-cond}: the original uncertainty has to be reduced into a certain range of macro Uncertainty Magnitudes limited by the Equivalent Threshold. In brief, to derive the CE by squeezing a highly uncertain model, we need to develop a Coarse-graining strategy that decreases the original uncertainty into a smaller range. For instance, consider a three-variable model whose variables, denoted by $m$, $n$, and $p$, share a unified uncertainty $x_{u}$ that influences all of the model's variables, so the observed variables can be expressed as $m+x_{u}$, $n+x_{u}$, and $p+x_{u}$. We can yield the CE by applying $M=(m+x_{u})-(n+x_{u})=m-n$ and $N=(n+x_{u})-(p+x_{u})=n-p$ as the Coarse-graining operation to map the micro model to a deterministic two-variable macro model.
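The cancellation in this example can be demonstrated directly; the numeric micro states and the Gaussian form of $x_u$ below are our own illustrative assumptions.

```python
import random

random.seed(0)
m, n, p = 3.0, 2.0, 1.0  # assumed underlying micro states
for _ in range(1000):
    x_u = random.gauss(0.0, 1.0)             # unified uncertainty on every variable
    om, on, op = m + x_u, n + x_u, p + x_u   # noisy micro observations
    M = om - on                              # macro variable M = m - n
    N = on - op                              # macro variable N = n - p
    # the shared uncertainty cancels, so the macro variables are
    # (numerically) deterministic regardless of x_u
    assert abs(M - (m - n)) < 1e-9 and abs(N - (n - p)) < 1e-9
print("macro model is deterministic: M =", M, ", N =", N)
```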
However, a model is normally influenced by multiple uncertainties, denoted $X_m=\{x_1,\ x_2,\ \cdots,\ x_i\}$, instead of the unified uncertainty $x_{u}$. We discuss $X_m$'s effect on the model's EI and propose the corresponding CEQE to quantify the CE Uncertainty Condition in the additional study below.
\subsubsection{Additional Study: When the model is influenced by multiple uncertainties $X_m$}
\begin{figure}[b] \centering \includegraphics[scale=0.35]{Figures-Results/Assumption.png} \caption{Assumption Used to Derive the Representation of the Model with Multiple Uncertainties. A solid arrow points to the model's variable influenced by the uncertainty variable where it starts, and a dotted line indicates that the corresponding model's variable may be affected by the relevant uncertainty variable. This assumption is essential for obtaining the model's TPM through the General EI Calculator's TPM Generator process.} \label{Fig_Mul_Uncs_Assp} \end{figure}
\begin{figure}[t] \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=2.5in]{Figures-Results/3D_Diagram_with_the_threshold_curve.png} \end{minipage} \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=2.5in]{Figures-Results/Comparison_between_x-i_effect_and_x-u_effect_on_the_model_EI.png} \end{minipage} \caption{EI Surface of the Two-Variable Model with Multiple Uncertainties and Comparison between a Single Uncertainty Variable $x_i$ and the Unified Uncertainty $x_u$. The left sub-figure shows the 3D surface illustrating how the EI changes with the two uncertainty variables, $x_1$ and $x_2$, whose Uncertainty Magnitudes are the x-axis and y-axis respectively. The two variables have the same influence on the model's EI when the other is fixed to zero, as displayed by the red curve in the right sub-figure.
Compared with the unified uncertainty's effect presented by the blue curve in the right sub-figure, the red curve's decreasing rate is half that of the unified EI curve. From this comparison we can propose that a single uncertainty variable has only half of $x_u$'s effect on the model's EI.} \label{Fig_Multi_EI_Comp} \end{figure}
Through the General EI Calculator's TPM Generator process, we can produce the TPM of a model with multiple uncertainties based on the assumption shown in Figure \ref{Fig_Mul_Uncs_Assp}. The assumption is that one model variable $m_i$ can only be influenced by one uncertainty variable $x_i$, but one uncertainty variable may influence multiple model variables. The solid and dotted lines illustrate this idea: every dotted line starting from an $x_i$ represents a possible influence on the $m_i$ to which it points. If one of the dotted lines ending at a given $m_i$ is turned into a solid line, ensuring that this model variable is affected by exactly one uncertainty, the other dotted lines ending there are erased, since each $m_i$ can only have one $x_i$. Therefore, we can define several uncertainty variables, whose number cannot exceed the model's variable number $n$, and calculate the EI of the model with multiple uncertainties using the General EI Calculator. For instance, Figure \ref{Fig_Multi_EI_Comp} displays the surface of the two-variable model's EI, which declines while the two uncertainties, $x_1$ and $x_2$, increase. In Figure \ref{Fig_Multi_EI_Comp}, the left sub-figure shows that the two-variable model's EI drops as the Magnitudes of $x_1$ and $x_2$ grow. Interestingly, if we set one uncertainty variable to zero Magnitude and increase the other Magnitude from zero to one, the decrease of the model's EI follows the right sub-figure's red curve, whose decreasing rate is half that of the unified uncertainty's blue EI curve.
This phenomenon leads us to propose that the effect of a single uncertainty variable $x_i\in X_m$ declines to a specific ratio of $x_u$'s influence on the model's EI. To further support this argument, we produce the two-variable and three-variable models' algebraic TPMs, denoted $M'_2$ and $M'_3$ below.
$$M'_2=\left[\begin{matrix} x_1(1-x_2) & x_1x_2 & (1-x_1)(1-x_2) & (1-x_1)x_2\\ (1-x_1)x_2 & (1-x_1)(1-x_2) & x_1x_2 & x_1(1-x_2)\\ (1-x_1)(1-x_2) & (1-x_1)x_2 & x_1(1-x_2) & x_1x_2\\ x_1x_2 & x_1(1-x_2) & (1-x_1)x_2 & (1-x_1)(1-x_2)\\ \end{matrix}\right]$$
$$M'_3=\left[\begin{matrix} x^2_1(1-x_2) & x^2_1x_2 & \cdots & (1-x_1)^2(1-x_2) & (1-x_1)^2x_2\\ x_1(1-x_1)x_2 & x_1(1-x_1)(1-x_2) & \cdots & (1-x_1)x_1x_2 & (1-x_1)x_1(1-x_2)\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ (1-x_1)^2(1-x_2) & (1-x_1)^2x_2 & \cdots & x_1^2(1-x_2) & x^2_1x_2\\ x_1^2x_2 & x_1^2(1-x_2) & \cdots & (1-x_1)^2x_2 & (1-x_1)^2(1-x_2) \end{matrix}\right]$$
\begin{figure}[t] \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=2.215in]{Figures-Results/Influence1.png} \end{minipage} \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=2.5in]{Figures-Results/Influence2.png} \end{minipage} \caption{3D $\Delta EI$ Surface and the Corresponding Curve of CE Uncertainty Thresholds for Squeezing the Micro Model to the One-Variable Macro Model. The macro EI is fixed to $1$ since the one-variable model cannot be influenced by any factor. The two sub-figures display the black curve of CE Uncertainty Thresholds from different views to indicate that the curve lies on the $\Delta EI$ surface.
This result provides an example showing that CE Uncertainty conditions still exist when the model has multiple uncertainties.} \label{Fig_Multi_Thres} \end{figure}

As with the algebraic TPMs under the unified uncertainty, such as $M_2$ and $M_3$ in the previous section, the rows of these TPMs also exhibit a regular distribution of the expressions composed of $x_1$ and $x_2$. After verifying that this regularity generalizes to the algebraic TPMs of models with more variables and more uncertainty variables, and following a derivation similar to that of Equation \ref{eqn-ndet}, we obtain the mathematical expression relating the model's $determinism$ to the uncertainty variables, as stated by Equation \ref{eqn-xsdet}. In the simplification leading to Equation \ref{eqn-xsdet}, the coefficient $a_i$, which denotes the number of model variables influenced by the single uncertainty variable $x_i$, determines that this variable's effect drops to $\frac{a_i}{n}$ of $x_u$'s effect on the model's EI. Consequently, the effect of $x_1$ or $x_2$ illustrated in Figure \ref{Fig_Multi_EI_Comp} is $\frac{1}{2}$ of $x_u$'s effect. Conjecture \ref{the-multiuncer} summarizes this finding. \begin{prop} \label{the-multiuncer} The effect of a single uncertainty variable $x_i$ on the model's EI attenuates to $\frac{a_i}{n}$ of the unified uncertainty $x_u$'s effect, where $x_i\in X_m$ and $X_m$ is the set of all uncertainty variables. \begin{align} \label{eqn-xsdet} determinism = 1+\sum\limits_{i=1}^n \frac{a_i[(1-x_i)\log_2(1-x_i)+x_i\log_2(x_i)]}{n} \end{align} \end{prop} Substituting Equation \ref{eqn-xsdet} for the $determinism$ in Equation \ref{eqn-EI} yields a particular CEQE, Equation \ref{eqn-xsceqe}, which adapts to the situation of a model with multiple uncertainties.
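Equation \ref{eqn-xsdet} and the threshold condition derived from Equation \ref{eqn-xsceqe} are simple enough to evaluate directly. The sketch below (function names are ours) computes the $determinism$ of Equation \ref{eqn-xsdet} and, for the two-variable micro model with $degeneracy=0$ and macro EI of $1$, solves the threshold condition, which reduces to $h(x_1)+h(x_2)=1$ with $h$ the binary entropy, by bisection.

```python
from math import log2

def h(p):
    """Binary entropy h(x) = -[(1-x)log2(1-x) + x log2(x)] in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def determinism(xs, a, n):
    """Equation eqn-xsdet for an n-variable model:
    1 + sum_i a_i[(1-x_i)log2(1-x_i) + x_i log2(x_i)] / n,
    i.e. 1 - sum_i a_i h(x_i) / n."""
    return 1.0 - sum(ai * h(xi) for ai, xi in zip(a, xs)) / n

def threshold_x2(x1, tol=1e-12):
    """For n=2, a=(1,1), degeneracy=0 and macro EI=1, the threshold
    curve reduces to h(x1) + h(x2) = 1; solve for the x2 in [0, 0.5]
    by bisection (h is increasing on that interval)."""
    target = 1.0 - h(x1)
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

On this curve the micro EI, $2\,[1-(h(x_1)+h(x_2))/2]$, equals the macro EI of $1$, which is the condition defining the threshold curve in Figure \ref{Fig_Multi_Thres}.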
For the experiment shown in Figure \ref{Fig_Mul_Uncs_Assp}, since we need to highlight the influences of multiple uncertainties, the two-variable micro model's $degeneracy$ remains zero, and the only possible macro model in this case is the one-variable model. Therefore, when $degeneracy=0$, $n=2$, and $EI=1$, we can simplify Equation \ref{eqn-xsceqe} to the equation $1+\sum\limits_{i=1}^2 [(1-x_i)\log_2(1-x_i)+x_i\log_2(x_i)]=0$, whose solutions form the curve of CE Uncertainty Thresholds in Figure \ref{Fig_Multi_Thres}, quantifying the uncertainty condition of the CE when Coarse-graining the two-variable micro model to the one-variable macro model. \begin{align} \label{eqn-xsceqe} 1+\sum\limits_{i=1}^n \frac{a_i[(1-x_i)\log_2(1-x_i)+x_i\log_2(x_i)]}{n} - degeneracy = \frac{EI}{n} \end{align} As shown in Figure \ref{Fig_Multi_Thres}, $\Delta EI>0$ when the two uncertainty variables, $x_1$ and $x_2$, exceed the CE Uncertainty Threshold located on the black curve displayed in both sub-figures. For a model affected by multiple uncertainties, this result provides an example showing that the CE Uncertainty conditions in Conjecture \ref{def-ce-unce-cond} still hold: the original Uncertainty Magnitudes have to exceed a specific threshold and decline to zero for the CE to occur by Coarse-graining the two-variable model to the one-variable macro model.

\subsection{Experiment 2: Another CE factor, the model's degeneracy} Like the $determinism$, the $degeneracy$ is a coefficient of an intrinsic model property that factors into the EI, and its value can be influenced by the Uncertainty Magnitude. In this experiment, we focus on the effect of the $degeneracy$, rather than the uncertainty, on the model's maximum EI to describe the corresponding CE conditions.
As the $degeneracy$ coefficient increases, the model's maximum EI decreases; once this coefficient exceeds a certain value, the micro model's maximum EI becomes smaller than the EI of the macro model without $degeneracy$, and the CE no longer relates to the Uncertainty Magnitude. We call this value the ``CE Degeneracy Boundary''; the corresponding CE condition is summarized by Conjecture \ref{the-degcoce} below. Based on the General EI Calculator, the $\Delta EI$ curves of Coarse-graining the degenerate three-variable model to the two-variable macro model are plotted in Figure \ref{Fig_Deg_Bodry} to illustrate the CE Degeneracy Boundary. \begin{figure}[t] \centering \includegraphics[scale=0.35]{Figures-Results/Degeneracy_Boundary1.png} \caption{$\Delta EI$ Curves of Coarse-graining the Degenerate Three-Variable Micro Models to the Two-Variable Deterministic Model without Degeneracy. Using the General EI Calculator, the micro $degeneracy$ is set to $0.33$, $0.48$, $0.65$, and $0.82$, where $0.33$ is the CE Degeneracy Boundary derived by solving Equation \ref{eqn-ceqe}. When $degeneracy>CE\ Degeneracy\ Boundary$, the $\Delta EI$ values are always greater than zero, as shown by the relevant curves; in this case, the CE always occurs under the Coarse-graining operation.} \label{Fig_Deg_Bodry} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.23]{Figures-Results/Degeneracy_Influences_Absolute_and_Equivalent_Thresholds.png} \caption{Absolute Thresholds and Equivalent Thresholds of Coarse-graining the Three-Variable Micro Model with Degeneracy Values Not Greater Than the CE Degeneracy Boundary $0.33$.
When the micro model's $degeneracy<CE\ Degeneracy\ Boundary$, the Uncertainty Magnitudes of the micro and macro models should satisfy the CE Uncertainty conditions proposed by Conjecture \ref{def-ce-unce-cond} to allow the CE to occur under the Coarse-graining operation.} \label{Fig_Deg_Influ} \end{figure} \begin{prop} \label{the-degcoce} When the micro model's degeneracy is not zero, whether the CE relates to the Uncertainty Magnitude depends on the CE Degeneracy Boundary: \begin{enumerate} \item If the micro model's $degeneracy\in (CE\ Degeneracy\ Boundary,\ 1]$, then the CE will occur through Coarse-graining to the non-degenerate macro model.\\ \item If the micro model's $degeneracy\in [0,\ CE\ Degeneracy\ Boundary]$, then the CE will occur if the Uncertainty Magnitude satisfies the conditions in Conjecture \ref{def-ce-unce-cond}. \end{enumerate} \end{prop} As shown by the legend in the top-right corner of Figure \ref{Fig_Deg_Bodry}, the original model's $degeneracy$ is increased from $0.33$ to $0.82$ by changing the General EI Calculator's hyperparameters $dg\_level$ and $future\_dg\_num$. The value $0.33$ is this operation's CE Degeneracy Boundary, obtained by solving Equation \ref{eqn-ceqe} with $EI=2$ and $n=3$. Since the boundary is the $degeneracy$ that reduces the micro model's maximum EI to the macro model's maximum EI, the $determinism$'s $x$-expression should be $1$ in the calculation. Here, $0.33$ is the solution of the equation $1-CE\ Degeneracy\ Boundary = \frac{2}{3}$. In Figure \ref{Fig_Deg_Bodry}, the green $\Delta EI$ curve illustrates the situation in which the micro model's $degeneracy$ equals the CE Degeneracy Boundary, where $\Delta EI<0$ when the original Uncertainty Magnitude lies in the range $(0,\ 1)$. When the micro model's $degeneracy>CE\ Degeneracy\ Boundary$, the other curves show that the CE necessarily happens, with $\Delta EI>0$ until the Uncertainty Magnitude reaches $1$.
This result proves Conjecture \ref{the-degcoce}: the CE does not rely on the Uncertainty Magnitude when the micro model's $degeneracy$ exceeds the CE Degeneracy Boundary. On the other hand, Figure \ref{Fig_Deg_Influ} displays further evidence that the CE relates to the model's Uncertainty Magnitude when the $degeneracy$ is not larger than the CE Degeneracy Boundary. As one of the coefficients used in computing the model's EI, the $degeneracy$ value can be obtained by the General EI Calculator's EI Meter function. Consequently, Absolute Thresholds and Equivalent Thresholds are calculated by solving Equation \ref{eqn-ceqe} with the outputs of the General EI Calculator. For instance, the top-left sub-figure in Figure \ref{Fig_Deg_Influ} shows the Absolute Thresholds that constrain the CE by Coarse-graining the three-variable model with below-boundary $degeneracy$ values of $0$, $0.083$, $0.20$, and $0.33$ to the non-degenerate two-variable macro model. As the micro model's $degeneracy$ increases, the threshold drops from $0.0915$ to $0$, indicating that the CE becomes easier to achieve. The other sub-figures present the Equivalent Thresholds of the CE when the degenerate micro model is influenced by uncertainty of Magnitude $0.12$, $0.25$, or $0.42$. Although a model with larger $degeneracy$ has a greater Equivalent Threshold, this threshold is still less than any original Uncertainty Magnitude above the relevant Absolute Threshold. This study leads to a conclusion summarizing the CE conditions: the CE occurs when the Coarse-graining operation reduces the micro model's uncertainty and degeneracy.
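The boundary computation used above can be sketched in a few lines, assuming (per Equation \ref{eqn-xsceqe} multiplied through by $n$) that the micro EI equals $n\,(determinism-degeneracy)$ with $determinism=1-h(x)$ under the unified uncertainty $x$; the function names are ours.

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def micro_ei(n, x, degeneracy):
    """Micro EI = n * (determinism - degeneracy) under the unified
    uncertainty x, with determinism = 1 - h(x)."""
    return n * (1.0 - h(x) - degeneracy)

def ce_degeneracy_boundary(ei_macro, n_micro):
    """Solve 1 - boundary = ei_macro / n_micro, i.e. the degeneracy at
    which the micro model's maximum EI (attained at x = 0) equals the
    macro model's EI."""
    return 1.0 - ei_macro / n_micro
```

For the three-to-two-variable operation, `ce_degeneracy_boundary(2, 3)` gives $1/3\approx0.33$, matching the solution of $1-CE\ Degeneracy\ Boundary=\frac{2}{3}$; any degeneracy above it keeps the micro EI below the macro EI of $2$ for every Uncertainty Magnitude.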
\section{Discussions} \label{sec-uni-frame} \subsection{Generalization of CE Uncertainty Thresholds: The CE by Coarse-graining Continuous Models} Based on the experimental results for discrete models in the previous sections, the Uncertainty Magnitude is a significant factor in quantifying whether the CE will occur under Coarse-graining operations. However, continuous models are more practical than discrete models as efficient representations of systems in most studies. Therefore, we discuss the relationship between the CE and the uncertainty of continuous models using Hoel's Approximate EI Algorithm, given by Equation \ref{eqn-EIa}. In Figure \ref{conEIs}, six dimmer profiles of the one-parameter lamp model (five defined by high-order polynomials and one discrete) present the mapping trajectory from the middle parameter, the knob's angle $\theta$, to the model's effect, the light's flux. The difference between a continuous model's parameter and a discrete model's variable is that the variable's value is $0$ or $1$, with a choice probability influenced by the uncertainty, whereas the parameter takes values in a continuous range mapped from the continuous interventions $X$. For example, $\theta=X$ is the mapping from the intervention to the middle parameter, indicating the knob angle decided by the lamp user's expectations. This process can be perturbed by an error $\delta$ that shifts the $\theta$ value within a range following a Gaussian probability distribution. The mapping from $\theta$ to the effect is affected by an error $\epsilon$ in the same way as $\theta=X$. Accordingly, the colored curves in the right sub-figure illustrate the five continuous EIs decreasing as $\delta$ and $\epsilon$ increase.
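The decline of these continuous EIs can be imitated without Equation \ref{eqn-EIa}'s exact machinery by discretising interventions and effects into bins and estimating the transition matrix by Monte Carlo sampling. The sketch below is our own simplification: the cubic profile and all function names are assumptions for illustration, not the paper's actual dimmer profiles.

```python
import random
from math import log2

def binned_ei(f, delta, eps, bins=16, samples=100000, seed=0):
    """Monte Carlo EI estimate for the chain X -> theta -> effect on
    [0, 1]: theta = clamp(x + N(0, delta)), y = clamp(f(theta) + N(0, eps)).
    Interventions are uniform over `bins` bins of x; the estimate is
    H(mean row) - mean row entropy of the binned transition matrix."""
    rng = random.Random(seed)
    counts = [[0] * bins for _ in range(bins)]
    for _ in range(samples):
        b = rng.randrange(bins)                 # uniform intervention bin
        x = (b + rng.random()) / bins           # sample x inside that bin
        theta = min(max(x + rng.gauss(0.0, delta), 0.0), 1.0)
        y = min(max(f(theta) + rng.gauss(0.0, eps), 0.0), 1.0 - 1e-12)
        counts[b][int(y * bins)] += 1
    rows = [[c / sum(r) for c in r] for r in counts]
    mean = [sum(r[j] for r in rows) / bins for j in range(bins)]
    H = lambda p: -sum(q * log2(q) for q in p if q > 0.0)
    return H(mean) - sum(H(r) for r in rows) / bins
```

Evaluating `binned_ei` with a monotone profile such as $f(\theta)=\theta^3$ for growing $\delta$ and $\epsilon$ yields decreasing EI estimates, mirroring the behavior of the colored curves in the right sub-figure.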
Since the red curve shows the discrete lamp model, which is equivalent to the one-variable macro model, the intersections between the red curve and the colored curves are the CE Uncertainty Thresholds of Coarse-graining the five continuous micro models to the discrete macro model. This result indicates that uncertainty conditions still exist when squeezing continuous models. \begin{figure}[b] \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[width=2.5in]{Figures-Discussions/dimmer_profiles_of_six_models.jpg} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[width=2.5in]{Figures-Discussions/EI_curves_of_six_models.jpg} \end{minipage} \caption{Dimmer Profiles of the Six Continuous or Discrete Lamp Models and the Models' EI Curves Calculated by Equation \ref{eqn-EIa}. The left sub-figure shows the dimmer profiles of the six lamp models: the continuous profiles are defined by five high-order polynomials, and the discrete profile represents the mapping of the one-variable macro model. The right sub-figure displays the EI curves, where the intersections between the discrete curve and the colored curves are the CE Uncertainty Thresholds of Coarse-graining the five continuous models to the discrete model.} \label{conEIs} \end{figure} However, the results in Figure \ref{conEIs} only roughly demonstrate the existence of the CE Uncertainty condition when mapping a continuous model to its proper macro model. The generalization of our proposition still needs to be proved for general Coarse-graining operations in which the choice of macro models is more flexible, such as squeezing a multi-parameter continuous model to a one-parameter continuous model. The problem is that the complexity of the continuous EI algorithms given by Equations \ref{eqn-EIg} and \ref{eqn-EIa} makes it difficult to quantitatively define CE Uncertainty Thresholds in continuous scenarios.
Moreover, the selection of a proper Coarse-graining operation is more intricate than in the situations we have studied. For example, a two-parameter continuous model is easily squeezed by merging its two parameters, $\theta_1$ and $\theta_2$, into a unified parameter $\theta$ of the one-parameter macro model to derive the CE when there is an explicit relationship between $\theta_1$ and $\theta_2$. If $\theta_1$ and $\theta_2$ are independent of each other, the choice of a proper Coarse-graining operation depends on each parameter's significance for the model's effect. Consequently, it is important to articulate the CE Uncertainty conditions in the continuous scenario with more experimental results based on continuous EI algorithms. We propose potential improvements of our work in the next section, including possible tools for the study of the CE by Coarse-graining continuous models.

\subsection{Future Work} Although the calculations in this work are performed on systems with discrete states, we expect that these conclusions are equally applicable to systems with continuous states. Since continuous models are the common representation of numerous practical systems, it is necessary to discuss the CE in models with continuous states. Using Equation \ref{eqn-EIa}, we have reproduced the EIs of five micro models with continuous states and of the one-variable discrete-state macro model, with the continuous uncertainty represented by $\delta$ and $\epsilon$. In Figure \ref{conEIs}, the CE Uncertainty Threshold relates to the complexity of the micro models: the largest CE Uncertainty Threshold arises when squeezing the simplest micro model to the macro model, as shown by the intersection between the cyan curve and the red curve.
The next step is to quantify the CE Uncertainty Threshold and discuss the factors that change the threshold's value. In 2022, Zhang and Liu quantified the relationship between the CE and the model's continuous states and proposed the Neural Information Squeezer framework to find an effective model-squeezing method from time-series data \cite{zhang2022neural}. We have observed in their research that the CE occurs unconditionally when squeezing the micro model to the macro model, which does not match our experimental results. Therefore, the CE Uncertainty Thresholds could be quantified through the relationship proposed in \cite{zhang2022neural}, and more practical CE conditions could then be identified. Consequently, a model's performance and potential optimization methods could be evaluated by its uncertainty and intrinsic properties instead of specific performance metrics.

\section{Conclusion} \label{sec-conc} In this paper, we have proposed uncertainty redistribution as the determining factor of the CE when squeezing micro models to macro models. We have also developed the General EI Calculator based on Equations \ref{eqn-EI}, \ref{eqn-det}, and \ref{eqn-deg} and derived the CEQE in Equation \ref{eqn-ceqe} from the TPM's distribution by row, as shown by Equation \ref{eqn-ed}. According to the experimental results displayed in Figures \ref{Fig_approx_thres_results}, \ref{Fig_Abs_Thres_n_2}, \ref{Fig_Abs_Thres_11_n}, and \ref{Fig_Eqv_Thres}, the CE Uncertainty Thresholds quantified by Equation \ref{eqn-ceqe} prove that the CE has to satisfy two conditions: first, the micro model's uncertainty has to be greater than the CE Uncertainty Threshold; second, the micro model's uncertainty has to be redistributed into a specific range of the macro model's uncertainty. In the additional study and Experiment 2, we have discussed the generalization of the existence of the CE Uncertainty Threshold for Coarse-graining models with discrete states.
When the model is influenced by multiple uncertainty variables $X=\{x_1,\ x_2,\cdots,\ x_n\}$, the effect of one variable $x_i\in X$ attenuates to $\frac{a_i}{n}$ of the unified uncertainty $x_u$'s effect, as shown in Figure \ref{Fig_Multi_EI_Comp}. Based on Equation \ref{eqn-xsceqe}, Figure \ref{Fig_Multi_Thres} illustrates the CE Uncertainty Thresholds that form a black curve on the EI surface in the corresponding situation. With the General EI Calculator and computations based on Equation \ref{eqn-ceqe}, Figures \ref{Fig_Deg_Bodry} and \ref{Fig_Deg_Influ} display the effects of the model's degeneracy on the CE by Coarse-graining and on the CE Uncertainty Thresholds, and the relevant CE conditions are summarized by Conjecture \ref{the-degcoce}. These results further establish the uncertainty distribution as the general factor of the CE by Coarse-graining a discrete micro model to a macro model that is more effective and explainable. Across all the experiments, there exists a potential space containing numerous macro models that satisfy the CE conditions; a model with less uncertainty and degeneracy is more appropriate as the outcome of the CE obtained by squeezing the micro model. The conclusion is that uncertainty and degeneracy are types of error that decrease the model's EI: the uncertainty influences the model's effectiveness, and the degeneracy affects the model's explainability. Therefore, we assume that the nature of causality is a comprehensive quantification of the model's advantages; under this assumption, the nature of causal emergence is the decline of the model's error through coarse-graining. From the brief discussion of Figure \ref{conEIs}'s results, we consider these conclusions applicable to models of systems with either discrete or continuous states, although the principal conjectures in this work are derived from models with discrete variables.
For future studies, it is essential to systematically validate that the CE Uncertainty conditions also constrain the occurrence of the CE when Coarse-graining micro systems with continuous states. Since continuous models represent the common type of practical system, their CE regularity is potentially applicable to many academic and industrial scenarios, such as the optimization of Deep Learning networks. Some key mathematical tools for this future study have been provided by Zhang and Liu's work \cite{zhang2022neural}. \section*{CRediT authorship contribution statement} \textbf{Liye Jia}: Investigation, Methodology, Validation, Visualization, Writing - original draft, review \& editing. \textbf{Yutao Yue}: Research Ideas, Conceptualization, Investigation, Writing - original draft, review \& editing, Project administration. \textbf{Cong Zhou}: Writing - review \& editing. \textbf{Ka Lok Man}: Supervision, Writing - review \& editing. \textbf{Sheng-Uei Guan}: Supervision. \textbf{Jeremy Smith}: Supervision. \section*{Declaration of Competing Interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \bibliographystyle{ieeetr}