Shelling of the Znob-Novhorodska settlement territorial hromada was a series of artillery attacks and airstrikes by Russian forces on settlements of the Znob-Novhorodska settlement hromada (until 19 July 2020, part of Seredyna-Buda Raion) of Shostka Raion, Sumy Oblast, during the full-scale Russian invasion of Ukraine. By order of the Ministry for Reintegration of the Temporarily Occupied Territories of Ukraine, on 30 May 2022 the territory of the hromada was added to the updated list of territories of Ukraine where hostilities are ongoing or which are occupied by Russian forces. Residents of the hromada registered as internally displaced persons (IDPs) will receive payments.

History

13 May
At around 00:05 on the night of 12–13 May, according to the operational command "Pivnich" ("North"), the Russian military launched two missile strikes from the territory of the Russian Federation on the village of Ochkyne in the Znob-Novhorodska settlement hromada. According to the military, there were no losses among personnel and no civilian casualties. According to the State Ecological Inspectorate of Sumy Oblast, on the territory of the Ochkyne forestry of the state enterprise "Sveske Forestry" the Russians destroyed at least 172 trees, mostly conifers. The total damage from the destruction of forest stands amounted to UAH 2,079,751.

22 May
According to the General Staff of the Armed Forces of Ukraine, Russian forces shelled settlements and infrastructure facilities in the area of the village of Ulytsia.

28 May
On the night of 27–28 May, the Russian military fired mortars from their own territory at the Znob-Novhorodska hromada about 6 times, as reported by the head of the Sumy Oblast Military Administration, Dmytro Zhyvytskyi.

14 June
At 6:30 in the morning, the Russian military opened artillery fire on the territory of the Esman hromada.

20 June
After 18:00, shelling of the Znob-Novhorodska hromada began. Russian forces fired tube and rocket artillery. A total of 28 "impacts" were recorded; there were no casualties or destruction, reported the head of the Sumy Oblast Military Administration, Dmytro Zhyvytskyi. After 19:30, the shelling resumed. The enemy fired mortars: 7 hits. There were no casualties or destruction. According to the State Border Guard Service of Ukraine, throughout the day servicemen of the Russian armed forces fired relentlessly on the Sumy Oblast border area from their own territory with mortars, tube artillery, and multiple launch rocket systems.

3 July
At around 9:30 in the morning, massive shelling of the Znob-Novhorodska hromada from the Russian side began, probably from self-propelled artillery, using phosphorus and flechette shells of 120 mm and 152 mm caliber. There were 120 rounds (impacts) in total, reported the head of the Sumy Oblast Military Administration, Dmytro Zhyvytskyi. As a result, one car burned down. As early as 09:45, a new mortar attack began on the outskirts of one of the villages of the Znob-Novhorodska hromada: 20 impacts.

12 July
The Russians shelled the territory of the hromada from their own territory. No one was injured, but a civilian building was damaged, the Sumy Oblast Military Administration reported.

16 July
According to the National Police of Sumy Oblast, the village of Novovasylivka of the Znob-Novhorodska settlement hromada was shelled.

21 July
At 17:00, a mortar attack began. Ten impacts were recorded, reported the head of the Sumy Oblast Military Administration, Dmytro Zhyvytskyi.

See also
List of shellings of Sumy Oblast (April – June 2022)

Categories: 2022 in Sumy Oblast; History of Shostka Raion
{ "redpajama_set_name": "RedPajamaWikipedia" }
360
Q: HTML table mouse hover on last merged column

I have a table with a mouse-hover effect, where the last column is merged. Below is my code:

<!DOCTYPE html>
<html>
<head>
<style>
table, th, td {
  border: 1px solid black;
  border-collapse: collapse;
  padding: 6px;
}
</style>
<style type="text/css">
.hoverTable {
  width: 100%;
  border-collapse: collapse;
}
.hoverTable td {
  padding: 7px;
  border: #4e95f4 1px solid;
}
.hoverTable tr:nth-child(-n+50):hover td:nth-child(-n+2) {
  background-color: #ffff99;
}
</style>
</head>
<body style="text-align:center">
<h1 style="color:green;">Mouse Hover</h1>
<h2>Requirement: when I hover over Canada, color only Canada, not Gloria Jean's Coffees and Coffee</h2>
<table align="center" class="hoverTable">
  <tr>
    <th>Name</th>
    <th>Item</th>
    <th>Location</th>
  </tr>
  <tr>
    <td>Gloria Jean's Coffees</td>
    <td>Coffee</td>
    <td rowspan="3" class="highlight" data-cell="c5">Canada</td>
  </tr>
  <tr>
    <td>North End Coffee Roasters</td>
    <td>Bagel</td>
  </tr>
  <tr>
    <td>Secret recipe</td>
    <td>Cheese Cake</td>
  </tr>
</table>
</body>
</html>

When I hover over a particular row, the hover effect works properly. But when I hover over the last, merged column, the hover effect is also applied to the first row. I want the hover effect to apply only to the merged column itself, not to the first row.

A: As per the comment, the snippet below uses a small piece of jQuery as well: the first row's cells use jQuery's hover() function, while the other rows use a CSS :hover rule combined with the :not() selector.
Try it like below:

$(document).ready(function(){
  $('.nothighlight').hover(
    function () { $('.nothighlight').css('background', '#ffff99'); },
    function () { $('.nothighlight').css('background', '#ffffff'); }
  );
});

table, th, td {
  border: 1px solid black;
  border-collapse: collapse;
  padding: 6px;
}
.hoverTable {
  width: 100%;
  border-collapse: collapse;
}
.hoverTable td {
  padding: 7px;
  border: #4e95f4 1px solid;
}
.hoverTable tr:hover td:not(:last-child):not(.nothighlight) {
  background-color: #ffff99;
}
.highlight:hover {
  background-color: aqua;
}

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<body style="text-align:center">
<table align="center" class="hoverTable">
  <tr>
    <th>Name</th>
    <th>Item</th>
    <th>Location</th>
  </tr>
  <tr>
    <td class="nothighlight">Gloria Jean's Coffees</td>
    <td class="nothighlight">Coffee</td>
    <td rowspan="3" class="highlight">Canada</td>
  </tr>
  <tr>
    <td>North End Coffee Roasters</td>
    <td>Bagel</td>
    <td style="display: none;"></td>
  </tr>
  <tr>
    <td>Secret recipe</td>
    <td>Cheese Cake</td>
    <td style="display: none;"></td>
  </tr>
</table>
</body>
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,696
\section{Introduction} Quantum machine learning (QML) is a relatively new discipline that investigates the interplay between quantum information processing and machine learning (ML) \cite{2017_Biamonte,2018_Dunjko}. Thus far, most of the attention in QML has focused on speed-ups in data analysis settings, namely supervised learning (\textit{e.g.,} classification) and unsupervised learning (\textit{e.g.,} clustering) \cite{2013_Aimeur, 2015_Wiebe, 2014_Rebentrost, 2016_Lloyd, 2015_Zhao}. From a more foundational perspective, computational learning theory (COLT) has also been extended to the quantum setting, and various results regarding classical-quantum separations are known (see \cite{2017_deWolf} for a recent review). Beyond supervised and unsupervised learning settings, which essentially deal with data analysis, \textit{reinforcement learning} (RL) \cite{1998_Sutton}, deals with interactive learning settings, and constitutes an effective bridge between data-analysis oriented ML and full-blown artificial intelligence (AI) \cite{2009_Russel}. In RL we deal with a \textit{learning agent}, which learns by interacting with its \textit{task environment}. In RL, the agent perceives (aspects of) the states of a task environment, and influences subsequent states by performing actions. Certain state-action-state transitions are rewarding, and successful learning agents learn \textit{optimal behavior} (see Fig~\ref{RL1} for an illustration). RL is closely linked to robotics and AI tasks, and is thus also practically very well-motivated. For instance, RL plays a pivotal role in modern learning technologies -- from artificial personal assistants and self-driving cars, to the celebrated AlphaGo system which, startlingly, surpassed human-level gameplay in Go \cite{2016_Silver}. 
While this initial exciting result relied on various flavors of learning to achieve superior game-play, the most recent, and strongest variants (which now also beat the best humans and software in chess), no longer utilize supervised learning from human examples and rely on RL and self-play \cite{2017_Silver,2017_Silver_b}. \setlength{\intextsep}{1pt}% \setlength{\columnsep}{10pt}% \begin{figure} \includegraphics[width=0.48\textwidth,clip=true,trim =140 186 200 170]{figureMDP.pdf} \caption{\label{RL1} Illustration of the basic agent-environment paradigm: an agent navigates a task environment (e.g. a maze) by taking actions and by receiving percepts (signals s, e.g. labels of possible positions in the maze). In RL, percepts can be rewarding. Basic environments are characterized by Markov decision processes (inset in environment), in which case the percepts are the states of the environment. } \end{figure} Possible quantum enhancements in this more general learning setting have been explored in only a few works. A few authors have considered scenarios where the internal computation of the agent is quantized, while the interaction with the environment remains classical. For instance, in \cite{2014_Paparo}, a quantum algorithm that quadratically speeds up a variant of the Projective Simulation \cite{2012_Briegel} model was proposed. In \cite{2016_Crawford} it was investigated whether quantum annealers could offer computational speed-ups for Boltzmann machine-based RL engines. To go beyond purely ``internal speed-ups,'' other authors \cite{2015b_Dunjko,2016_Dunjko} have considered environments that are ``quantum-accessible,'' in the sense that they maintain superpositions, and allow exquisite quantum control. The relationship between purely classical environments and quantum-accessible environments is analogous to the relationship between classical and quantum oracles. Given access to quantum-accessible environments, the agent-environment interaction (see Fig.
\ref{RL1}) can also be quantized, essentially as a conventional two-party quantum communication setting. In this framework, certain conditions were identified, which allow quadratic improvements in \textit{learning efficiency} (an analog of query complexity) for a class of RL scenarios, by utilizing amplitude amplification \cite{2015b_Dunjko,2016_Dunjko}. Certain criteria prohibiting improvements have been identified as well. In this work, we continue the investigation of \textit{the limits of speed-ups in learning efficiency}, given such quantum-accessible environments, and address the question of whether RL settings allow super-polynomial or exponential speed-ups, at least in certain cases. The study of RL with quantum-accessible environments bears an obvious resemblance to the study of quantum query complexity, i.e., the study of quantum algorithms for oracle problems. However, RL has a different emphasis than query complexity. At a conceptual level, RL is more concerned with learning how to perform some task that involves the environment, such as playing a game; whereas query complexity is more concerned with characterizing some property of the oracle, such as whether there exists an input that causes the oracle to accept. In this paper, we state some simple conditions that separate RL problems from oracle problems. We then present examples of RL task environments where quantum-enhanced agents achieve optimal behaviours in polynomial time (in the size of the task environment), but where any classical learner requires exponential time to achieve equal levels of efficiency. These constructions are in fact based on oracle problems -- specifically, Simon's problem \cite{1995_Simon} and Recursive Fourier Sampling \cite{1997_Bernstein, 2008_Hallgren} -- but with suitable modifications that force a quantum agent to learn how to interact with the environment in a nontrivial way.
In order to achieve these provable quantum speed-ups, we consider RL task environments that may seem somewhat artificial and unrealistic, as these environments allow quantum access, and they encode rigid mathematical structures. However, we argue that these kinds of environments can actually occur in practical applications involving game-playing, such as the celebrated AlphaGo and AlphaGo Zero systems \cite{2016_Silver,2017_Silver,2017_Silver_b}. In these situations, an agent can simulate the game internally, and can learn by playing against itself. Hence, an agent with a quantum computer can simulate quantum access to an environment that encodes the game. Moreover, such games are built recursively from sub-games, in a way that is reminiscent of the Recursive Fourier Sampling problem and its generalizations \cite{1997_Bernstein, 2008_Hallgren}. Our results can be interpreted as further evidence that quantum agents can achieve super-polynomial improvements in learning to play these kinds of games. This perspective on our results is reminiscent of results obtained for boolean formula evaluation tasks, such as in the study of NAND trees, which are, in fact, game trees. Here, the levels of the tree correspond to alternating moves of two players, and the value of the node specifies whether the given player wins under perfect play. Previous works have shown polynomial speed-ups for generic game trees \cite{2008_Reichardt}, and even superpolynomial speed-ups for special families \cite{2012_Zhan}. The two perspectives are closely related, and there may be interest in combining the two approaches for the specific purposes of game play -- indeed, combining a heuristic for estimating values of nodes in game trees (Monte Carlo Tree Search), with reinforcement learning is at the basis of the strongest game playing results \cite{2017_Silver,2017_Silver_b}.
However, our overall objective pertains to general RL scenarios (characterized by Markov Decision Processes), where game playing is just one instance of possible applications. The remainder of the paper is organized as follows: section \ref{sec:tb} covers the basic technical background; section \ref{sec:main} discusses the embedding of oracle identification problems into RL tasks, and presents the criteria for ``genuine'', \textit{i.e.}, interactive RL problems; and in section \ref{exp:sec} we provide our main results. We finish with a discussion in section \ref{sec:dis}. \section{Technical background} \label{sec:tb} To present the main results of our work in a self-contained manner, we first introduce the basic concepts from classical RL theory, from the quantum agent-environment and quantum RL framework introduced in \cite{2016_Dunjko}, and from the theory of quantum oracle identification, which we will use later. \subsection{The setting of reinforcement learning} In reinforcement learning, an agent $A$ and an environment $E$ interact by the exchange of actions, from the set $\mathcal{A} = \{ a_i\}$, and percepts, from the set $\mathcal{S} = \{ s_i\}$. We consider finite sets of actions and percepts. Further, in RL, the agent is driven to correct behavior by the issuing of rewards, from some ordered set $\Lambda$, \textit{e.g.} $\Lambda = \{0,1\},$ or $\Lambda \subseteq \mathbbmss{R}$. Basic environments are specified by \textit{Markov decision processes} (MDPs), characterized by the action, environmental state\footnote{In the context of MDP environments, the percepts are just the environmental states.} and reward sets, a stochastic \textit{transition function} $P_T(s_j | s_i, a_k),$ specifying the transition probability from state $s_i$ to $s_j$ under action $a_k$, and a stochastic \textit{reward function} $R(s_i, a_j,s_k) \in Distr(\Lambda),$ which to each arc $(s_i,a_j,s_k)$ (probabilistically) assigns a reward.
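The MDP ingredients just introduced (a stochastic transition function and a reward function over arcs) can be made concrete in code. The following is a minimal illustrative sketch, not taken from the paper: the two-state environment, its transition probabilities, and the deterministic reward assignment are all hypothetical, and the geometrically discounted sum of rewards is the standard figure of merit used to value policies.

```python
import random

# Toy MDP sketch (hypothetical example): states S = {0, 1}, actions A = {0, 1}.
# P_T[s][a] is a distribution over next states; R assigns a (here
# deterministic) reward to each arc (s, a, s'); unlisted arcs give 0.
P_T = {
    0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}},
    1: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.0, 1: 1.0}},
}
R = {(0, 1, 1): 1.0, (1, 1, 1): 1.0}

def step(s, a, rng=random):
    """Sample s' ~ P_T(. | s, a) and return (s', reward)."""
    dist = P_T[s][a]
    s_next = rng.choices(list(dist), weights=list(dist.values()))[0]
    return s_next, R.get((s, a, s_next), 0.0)

def run_episode(policy, s0=0, horizon=10, rng=random):
    """Interact for `horizon` steps under a policy pi(a|s); return the rewards."""
    s, rewards = s0, []
    for _ in range(horizon):
        a = rng.choices([0, 1], weights=policy(s))[0]
        s, r = step(s, a, rng)
        rewards.append(r)
    return rewards

always_act_1 = lambda s: [0.0, 1.0]  # deterministic policy: always choose a = 1

def discounted_return(rewards, gamma=0.9):
    """Geometrically discounted sum of a reward sequence."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))
```

From state 1 under `always_act_1` the toy environment is absorbing and always rewarding, so every episode started there collects the maximal discounted return.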
A \textit{policy} (of an agent) $\{ \pi(a|s)\}_s$ specifies the probability of (an agent outputting) an action $a$ given the state $s$. Given an MDP $M$, we can identify various notions of optimal policies, \textit{e.g.} those which maximize the expected reward over some finite period $N$ (\textit{finite horizon}) or with respect to a \textit{$\gamma-$infinite horizon}. The latter is given by $R^{\infty}_\pi = \lim_{l \rightarrow \infty}E_{\pi,M}[ \sum_{k=0}^{l} \gamma^k R^{\pi}_k],$ where $R_k$ is the reward at the $k^{th}$ step (the geometrically decaying factor $\gamma^k$ ensures convergence, and increases the values of more immediate rewards). If we only care about the value of a policy after some number of initial steps $p$, we talk about efficiency after $p$ steps, given by $R^{\infty,p}_\pi = \lim_{l \rightarrow \infty}E_{\pi,M}[ \sum_{k=p}^{l} \gamma^k R^{\pi}_k]$. A learning agent $A$ is \textit{$(\epsilon,\delta)-$efficient after $p$ steps} if the infinite-horizon rewards $R$ of $A$, measured after the $p^{th}$ step, satisfy $R^{\infty,p}_{\pi^\ast} \leq R + \epsilon,$ except with probability $\delta$. An analogous definition holds for the finite-horizon case. In other words, after $p$ steps such an agent is, except with probability $\delta$, rewarded (up to $\epsilon$) as well as an agent adhering to an optimal policy. An environment (MDP) is: \textit{episodic}, if the environment is re-set to (a set of) initial state(s)\footnote{Often we have the case that the last action of the agent results in feedback, provided by the first state of the next episode -- in this case we may deal with a set of initial states, which all have identical outbound transitions, \textit{i.e.,} the identity of the state we are in does not (directly) influence what happens in the current episode.
} once a rewarding transition has occurred (if the rewards are stochastic, obtaining a reward of zero value still causes a re-set), and \textit{strictly $\eta-$episodic}, if the environment is re-set to the (set of) initial state(s) after exactly $\eta$ steps.\vspace{0.1cm} An environment has \textit{immediate rewards} if the occurrence of any state-action $(s,a)$ pair consistent with an optimal policy yields the largest (average) reward, i.e. for any other $a'$ the expected immediate reward of $(s,a')$ is smaller. In the simplest case of deterministic binary rewards, this simply means that every correct move is rewarded. Otherwise the setting has \textit{delayed rewards} -- a prototypical example is a maze problem, where rewards are issued only once the maze is solved, although there are obvious ``wrong'' and ``correct'' moves along the way.\vspace{0.2cm} We should point out that not all environments correspond to MDPs: environments can also be partially observable (in which case the agent only perceives some noisy function of the environmental state), but in this work we will focus on fully observable settings~\footnote{It should be noted that quantum \textit{generalizations} of partially observable MDPs have been considered previously \cite{2014_Barry}. In the context of this work, however, we deal with environments which are specified by fully classical MDPs, but which are ``accessed'' in a quantum fashion, as explained shortly. }. \subsection{Framework for quantum reinforcement learning} \label{sec-oracul} The generalization of the agent-environment setting is straightforward. The percept and action sets are promoted to sets of (also mutually) orthonormal kets $\{ \ket{s} | s \in \mathbf{S}\}$ and $\{ \ket{a} | a \in \mathbf{A} \}$. 
The agent and the environment are modelled as (infinite) sequences of completely positive trace-preserving (CPTP) maps $\{ \mathcal{M}_A^i\}_i$ and $\{ \mathcal{M}_E^i\}_i$, acting on the Hilbert spaces $H_A \otimes H_C$ and $H_C\otimes H_E$, respectively. Here, $H_A$, $H_C$ and $H_E$ specify the memory of the agent, the agent-environment interface (the \textit{communication channel}), and the memory of the environment, respectively. The classical setting is recovered by restricting the agents and environments to classical maps, see \cite{2015b_Dunjko} for a formal definition, and Section \ref{SI} for further information. Recall that, in the (fully observable) classical case, the task environment could be completely described by an MDP. However, if we allow quantum interaction, then the MDP no longer provides a complete description, because it does not specify the behavior of the environment on superpositions of actions and percepts. Indeed, there are many possible quantum environments that have identical behavior on classical inputs, and hence correspond to the same MDP. Each such environment we call \textit{a quantum realization of the MDP}. We are interested in quantum environments that preserve superpositions of actions and percepts. Intuitively, we might expect that such a ``nice'' quantum environment should always exist, because a mixed quantum state can always be purified, and a quantum channel can always be implemented by a unitary operation acting on a larger system. In fact, this claim can be made rigorous, as follows. In \cite{2015b_Dunjko} it was shown that for any $\eta-$episodic environment (which does not need to be fully observable), there exists a realization which is also \textit{quantum-accessible}. 
This latter property implies that the agent can utilize its access to the environment to simulate an oracle $E_q$ that has the following behavior: \EQ{ \ket{a_1, \ldots, a_\eta}\ket{y} \stackrel{E_q}{\longrightarrow} \ket{a_1, \ldots, a_\eta}\ket{y \oplus R(a_1, \ldots, a_\eta)} \nonumber,} where $R(a_1, \ldots, a_\eta)$ is the reward value obtained by the agent once the agent executes the sequence of actions $a_1, \ldots, a_\eta$,\footnote{Note that this is a well-defined quantity only in deterministic environments, where the action sequence deterministically specifies the corresponding state sequence, and reward values.} and $\oplus$ denotes addition in the appropriate group. The process of simulating access to $E_q$ in a quantum-accessible environment is called \textit{oraculization}. Here, each query to the oracle $E_q$ requires approximately $5 \eta $ interaction steps with the environment. (More details are provided in Section \ref{SI}.) This allows us to utilize techniques from quantum algorithms (e.g., oracle identification) for reinforcement learning. The basic idea of our overall approach can be summarized as follows. The learning of the quantum-enhanced agent is split into two phases. In the first phase, it will utilize quantum interaction (via the \textit{oraculization} process) with the task environment, to effectively simulate access to a quantum oracle, which conceals critical information about the environment. In the second phase, it will use this information to quickly find optimal behavior in the given task environment. \subsection{Oracle identification and Simon's problem} The first provable quantum improvements over classical computation involved the use of oracles, specifically the problems of oracle identification.
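The action of the oracle $E_q$ defined above can be illustrated classically on computational-basis states. The following sketch assumes a toy deterministic, strictly $\eta$-episodic environment with a hypothetical binary reward function; applying the oracle twice returns the input state, reflecting that $E_q$ acts as a (self-inverse) permutation on basis states.

```python
# Classical sketch of the reward oracle E_q on basis states:
# |a_1 ... a_eta>|y>  ->  |a_1 ... a_eta>|y XOR R(a_1 ... a_eta)>.
# The environment and reward function below are hypothetical toys.
ETA = 3  # episode length

def R(actions):
    """Hypothetical binary reward: 1 iff the action sequence is the solution."""
    return 1 if actions == (1, 0, 1) else 0

def E_q(basis_state):
    """Apply the oracle to a basis state, given as (action tuple, reward bit y)."""
    actions, y = basis_state
    return (actions, y ^ R(actions))

# E_q is its own inverse on every basis state, hence reversible.
states = [((a, b, c), y)
          for a in (0, 1) for b in (0, 1) for c in (0, 1) for y in (0, 1)]
assert all(E_q(E_q(st)) == st for st in states)
```

Only the rewarding action sequence flips the auxiliary bit; all other basis states pass through unchanged, which is what lets amplitude-based algorithms mark rewarding episodes.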
For our purposes, we specify the task as follows: for a (finite) set of oracles $O = \{ \mathcal{O}_i \}_i,$ which is partitioned as a collection of disjoint subsets ($O = \bigcup_{j} O_j,\ O_k \cap O_{k'} = \emptyset$, unless $k=k'$), given access to an oracle $\mathcal{O},$ identify which subset $O_j$ it belongs to. In the cases we consider, the oracles will evaluate a boolean function, and the task will be to identify to which specified collection (out of exponentially many) of boolean functions the given instance belongs\footnote{In more technical terms, we will be dealing with promise problems, where the collections do not comprise the entire set of possible functions. It is well-known that exponential separations in oracle identification, or computational learning, are not possible without such promises, see, \textit{e.g.}, \cite{BB01,2017_deWolf}. }. The first examples here were the Deutsch and Deutsch-Jozsa algorithms, where the set of oracles contained all boolean functions which are constant (a subset of 2 elements) or balanced (a subset of $\binom{2^{n}}{2^{n-1}}$ elements), and basic Grover's search \cite{1996_Grover}, where the subsets are singleton sets, and the oracles are promised to attain the value $1$ for one element and zero otherwise. Depending on the details of the figures of merit one considers, and the exact specification of what oracles do, this framework captures COLT as well \footnote{If the oracles are functions which can be queried, then this constitutes the standard oracular setting, or, similarly, learning from membership queries. However, they could also be objects which produce samples from unknown distributions, in which case we are broaching the conventional probably approximately correct (PAC) learning setting. }. One of the first examples of oracle identification where a strict exponential separation between classical and quantum computation was proven is Simon's problem \cite{1995_Simon}.
In Simon's construction, we are given an oracle encoding a function $f: \{0,1\}^{n} \rightarrow \{0,1\}^n$, with the promise that there exists a secret string $s \in \{ 0,1\}^n$, $s \neq 0$, such that $f(x) = f(y)$ if and only if $x = y$ or $x = y \oplus s$, where $\oplus$ is element-wise addition modulo 2. Intuitively, to identify $s,$ one must find two strings which attain the same value under $f$, which, again intuitively, requires exponentially many steps. A more careful analysis proves that any algorithm which finds $s$ with non-negligible probability needs $\Omega(2^{n/2})$ queries \cite{1995_Simon}. Access to a quantum oracle mapping $\ket{x}\ket{y} \mapsto \ket{x} \ket{y \oplus f(x)}$ allows an efficient quantum algorithm for finding $s$: this can be done, e.g., by a probabilistic algorithm using $O(n)$ queries \cite{1995_Simon} (which implies a zero-error algorithm with expected polynomial running time), or by a zero-error deterministic algorithm with polynomial worst-case running time \cite{1997_Brassard}. We point out that some oracle identification problems such as Fourier sampling (discussed in \ref{SI}) and Simon's problem play a role in the early results of quantum COLT: in the seminal work of \cite{1998_Bshouty} the quantum Fourier sampling algorithm is used to efficiently learn DNF formulas just from examples under the uniform distribution, and in \cite{2004_Servedio}, Simon's algorithm is used to prove that if one-way functions exist, then there is a superpolynomial gap between classical and quantum exact learnability. In this work, the use of oracular separations for Recursive Fourier Sampling (RFS) and Simon's problem is, however, simpler and less subtle than in these quantum COLT works.
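As an illustration of Simon's promise, the following sketch constructs a small instance (the secret string, the seed, and the coset labels are arbitrary illustrative choices) and then recovers $s$ by the brute-force collision search, which in general needs on the order of $2^{n/2}$ queries; the quantum algorithm needs only $O(n)$.

```python
import random

# Illustrative instance of Simon's problem for small n (values hypothetical):
# f: {0,1}^n -> {0,1}^n with f(x) = f(y) iff x = y or x = y XOR s, s != 0.
n = 4
s = 0b1011  # the hidden shift

def make_simon_function(n, s, rng=None):
    """Build a function table satisfying Simon's promise for shift s != 0."""
    rng = rng or random.Random(0)
    labels = list(range(2 ** n))
    rng.shuffle(labels)            # fresh distinct labels for the cosets
    f, label = {}, iter(labels)
    for x in range(2 ** n):
        if x not in f:
            v = next(label)
            f[x] = v
            f[x ^ s] = v           # the pair {x, x XOR s} shares one value
    return f

f = make_simon_function(n, s)

def classical_find_s(f, n):
    """Find s by brute-force collision search (birthday-style, exponential)."""
    seen = {}
    for x in range(2 ** n):
        if f[x] in seen:
            return x ^ seen[f[x]]  # f(x) = f(y) with x != y implies s = x XOR y
        seen[f[x]] = x
```

The promise can be checked directly: every pair $\{x, x \oplus s\}$ collides, and there are exactly $2^{n-1}$ distinct output values.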
\label{sect:Simon} \subsection{Trivial transformations of oracle problems into MDPs} The framework of reinforcement learning is very broad, and many kinds of standard algorithmic problems can be recast as learning tasks in specially-constructed MDPs \footnote{More precisely, to a family of tasks, specified by the instance size, we can associate a family of MDPs.}. We first give a trivial example of how an oracle problem can be transformed into an MDP. Consider Simon's problem, which is to identify the ``hidden shift'' $s$, given a function $f: \{0,1\}^{n} \rightarrow \{0,1\}^n$ that satisfies Simon's promise. We can imagine an agent that can input the queries $x$, and an environment that responds by outputting the value $f(x)$. The percept and action sets are thus bit-strings of length $n$. To encode the problem of finding the secret string, we can endow the agent with an additional set of actions of the form ``guess-$x$'' (for each $x\in\{0,1\}^{n}$), using which the agent can input a guess of the secret string, obtaining a reward only if the guess was correct (the returned percept is not important and can, \textit{e.g.,} be the guess that the agent had input). This indeed specifies an MDP. However, from a reinforcement learning perspective, this MDP is highly degenerate. Specifically, the rewards are immediate, the environmental transition matrices\footnote{Note that the transition function $P_T(s_j | s_i, a_k)$ can be understood as a collection of action-specific transition matrices $\{ P^a \}_a$.} which specify how actions influence state-to-state transitions are low rank, and the environment is fully deterministic. This degeneracy also means we can realize it using environmental maps which allow oraculization (see \ref{SI} for more details). Specifically, we can realize the unitary oracle $\ket{x}\ket{y} \mapsto \ket{x} \ket{y \oplus f(x)}$, which allows a quantum agent to obtain rewards exponentially faster -- and this is indeed the basic idea behind this work.
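The trivial MDP just described can be sketched as a classical interaction loop. The tiny stand-in for $f$ and $s$ below is hypothetical (a 2-bit instance), and the `("guess", x)` action encoding is an illustrative choice, not notation from the paper.

```python
# Sketch of the 'trivial' Simon MDP: actions are either a query string x
# (the percept is f(x), never rewarding) or a pair ("guess", x), which is
# rewarded iff x equals the secret string. Toy 2-bit instance (hypothetical).
s = 0b11
f = {0b00: 0, 0b11: 0, 0b01: 1, 0b10: 1}  # satisfies f(x) = f(x XOR s)

def environment(action):
    """One interaction step of the degenerate MDP: return (percept, reward)."""
    if isinstance(action, tuple) and action[0] == "guess":
        guess = action[1]
        return guess, (1 if guess == s else 0)  # immediate, deterministic reward
    return f[action], 0                          # plain oracle query
```

Note how degenerate this environment is as an RL task: rewards are immediate, the next percept depends only on the action just taken (not on any internal state), and every transition is deterministic, which is exactly why it maps directly onto the oracle problem.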
Hence, this MDP is not particularly interesting from an RL perspective. In the following sections, we will show that Simon's problem can be transformed into MDPs which have more interesting structure, which is more typical of interactive RL problems. \section{Interactive reinforcement learning problems} \label{sec:main} What deserves to be called a \textit{genuine} RL problem is a complex question, which we do not presume to resolve here.\footnote{In this work we treat the terms ``genuine'' and ``generic'' interchangeably.} Our objective is more modest: to identify criteria which exclude certain MDP families -- those which \textit{directly} map to more specialized problems in quantum query complexity or computational learning theory. We refer to MDPs that satisfy our criteria as \textit{inherently interactive} RL environments. In particular, our desiderata for genuinely interactive MDP characteristics are thus:\vspace{0.2cm}\\ \indent$a)$ \textit{Rewards are delayed, and the MDP rewarding diameter (essentially, the minimal number of moves between two rewarding events, \textup{e.g.} the length of a chess game) should grow as a function of the instance size.}\\ \indent$b)$ \textit{At every step, the agent's actions should influence the states that are reached later, as well as the reward that the agent eventually receives.}\\ \indent$c)$ \textit{At every step, the optimal action should depend on the current state. (In particular, an agent that plays a fixed sequence of actions, without looking at the labels of the states, will be sub-optimal.)}\vspace{0.2cm}\\ \noindent The property $a)$ eliminates direct trivial phrasings of computational problems, and most direct translations of conventional COLT settings. The \textit{MDP rewarding diameter} refers to the maximal length of the shortest sequence of moves of the agent that leads to a reward, provided the sequence started from a state that is with bounded probability recurrently reached under every optimal policy.
This criterion demands that (nearly) optimal agents must traverse long paths between rewarding events. To clarify this concept a bit, we can first consider deterministic environments. Here, the rewarding diameter is just the minimal number of moves between two rewarding events. If the environment is probabilistic, then, depending on the policy, the frequency of visiting each state may differ. We are only interested in states which are visited with a high probability under an optimal policy (we may have bad agents which take wrong turns, or make unnecessary loops, but this is not an inherent feature of the environment, and we do not care about events which are very rare). In each such set of ``not infrequent'' states (specific to each optimal policy), we can find the ``worst'' state, in the sense that it is furthest away from a next rewarding move. The rewarding diameter is this worst-case path length, minimized over all optimal policies. In other words, in an MDP with a rewarding diameter $d$, any optimal agent will, with constant probability, have to execute $d$ steps between two rewarding events \footnote{Note that simply demanding that the MDP has long paths does not suffice to eliminate otherwise pathological settings: one could simply add long paths to an otherwise immediate-reward setting, which are never needed for optimal behavior. The rewarding diameter condition eliminates such possibilities.}. Since we are interested in scaling statements, \textit{i.e.} statements regarding speed-ups with respect to (ever growing) families of MDPs, we additionally demand that the diameter grows as well. Property $b)$ eliminates pathological MDPs where only every $k^{th}$ move is (potentially) rewarding, and the actual moves the agent makes in-between do not matter. For example, this excludes constructions where one takes a small MDP and inserts meaningless ``filler'' transitions in order to artificially increase the rewarding diameter.
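For a deterministic environment, the rewarding diameter can be computed directly: run a breadth-first search from each state to the nearest rewarding transition and take the worst case over states. The following sketch does this for a small hypothetical environment (the transition table and rewarding arc are illustrative toys).

```python
from collections import deque

# Toy deterministic MDP (hypothetical): next_state[s][a] gives the successor,
# and `rewarding` holds the rewarding (state, action) arcs.
next_state = {
    0: {0: 1, 1: 2},
    1: {0: 3, 1: 0},
    2: {0: 3, 1: 2},
    3: {0: 0, 1: 3},
}
rewarding = {(3, 0)}  # taking action 0 in state 3 yields the reward

def steps_to_reward(start):
    """BFS: minimal number of moves from `start` until a rewarding arc fires."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        s, d = frontier.popleft()
        for a, s2 in next_state[s].items():
            if (s, a) in rewarding:
                return d + 1
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, d + 1))
    return None  # no reward reachable

# In a deterministic environment, the rewarding diameter is the worst case
# over the relevant states; here we simply take all states.
diameter = max(steps_to_reward(st) for st in next_state)
```

In this toy environment the worst starting state is 0, three moves away from the rewarding arc, so the diameter is 3.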
Finally, $c)$ eliminates deterministic maze settings, where optimal behavior only requires the repetition of a fixed sequence of actions. The problem of discovering the right sequence of actions is more closely related to computational learning theory than to reinforcement learning. The criteria as presented above are meant to convey certain intuitions about what properties ``genuinely interactive'' RL settings should have, considering both the agent and the environment. They could nonetheless, in principle, be fully formalized in terms of characterizations of the transition function and the reward function of the environment. For instance, criterion b) captures a facet of an abstract property of the transition function of the MDP (which can be understood as a 3-tensor), prohibiting it from being of too low a rank. Let us exemplify this on the case that b) is violated, meaning many actions lead to the same state-to-state transition. We can view the transition function as a collection of current-state-specific matrices, where each matrix specifies the subsequent state, given an action. A violation of b) directly implies that at least one of these initial-state-specific matrices is of low rank, as many actions lead to the same state. The criteria as listed are not fully independent, if viewed from the perspective of the characterization of the transition function of the environment, but are more meaningful from the perspective of the characterization of optimal agents. It is useful to point out one last consequence of property c), which is important for this work. Property c) implies that the environment cannot be deterministic; indeed, in deterministic environments, an optimal agent need only store the sequence of moves it needs to perform, independently of the actual environmental state, and ``blindly'' repeat this sequence. Hence, the only way to ensure that an agent must develop an actual state-action specifying policy is to introduce random transition elements.
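The low-rank observation for property b) above can be made concrete in the deterministic case. A minimal sketch (the toy transition table and names are ours): each current state's transition matrix has one one-hot row per action, so its rank equals the number of distinct next states reachable in one step.

```python
def next_state_rank(T, s):
    """Rank of the current-state-specific transition matrix of state s.

    T[s][a] is the (deterministic) next state after action a in state s.
    The matrix has one one-hot row per action, so its rank equals the
    number of distinct next states; if many actions lead to the same
    state (violating property b), the rank drops accordingly.
    """
    return len({T[s][a] for a in range(len(T[s]))})

# Toy 2-state-by-3-action table: from state 0 all actions lead to state 1
# (a violation of b), rank 1), while from state 1 every action leads to a
# distinct state (full rank 3).
T = [[1, 1, 1],
     [0, 1, 2]]
```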
Since what is meant by a ``genuine'' RL problem is subjective, and a matter of context, we do not proceed further with the full formalization of such (to an extent arbitrary) criteria. For instance, for our purposes it makes little sense to precisely specify implicit parameters of the criteria, \textit{e.g.} what is the slowest acceptable growth of the rewarding diameter, or just how dependent the subsequent states have to be on the chosen action of the agent (relative to some measure of correlation). Even though they are not absolutely rigorous, the properties $a) - c)$ can be clearly verified for the constructions that follow. \vspace{0.2cm}\\ \section{Interactive RL environments that lead to exponential quantum speed-ups} \label{exp:sec} We will now construct MDPs that have two important properties: first, these MDPs satisfy the definition of an interactive RL environment given in Section \ref{sec:main}, and second, when these MDPs are realized by a quantum-accessible environment, it becomes possible for a quantum learning agent to achieve an exponential improvement over the best classical learning agent. In these examples, the quantum agent works by applying the oraculization techniques described in Section \ref{sec-oracul}, thus ``converting'' the task environment into a quantum oracle. This paradigm can be represented with the following picture: \EQ{ \begin{split} MDP &\xrightarrow{\textup{realized\ as}} \biggl( {\footnotesize {\textup{quantum-accessible} \atop \textup{environment $E_{acc}$}}} \biggr) \\ &\xrightarrow[\textup{to simulate}]{\textup{agent uses $E_{acc}$}} ( \text{quantum oracle } E_q ) \\ &\xrightarrow[\textup{queries to $E_q$}]{\textup{agent makes}} \biggl( {\footnotesize {\textup{quantum advantage for} \atop \textup{reinforcement learning}}} \biggr). \label{eq:process} \end{split} }\vspace{0.cm} For concreteness, we will work with an example based on Simon's problem \cite{1995_Simon}, which leads to an exponential quantum speedup.
However, we mention that a similar construction is possible based on Recursive Fourier Sampling \cite{1997_Bernstein, 2008_Hallgren}, which has a recursive game-like structure that seems natural in the context of reinforcement learning, although it only leads to a superpolynomial quantum speedup. Furthermore, reductions to essentially any other oracle identification problem can be done analogously. In order to show a classical-quantum separation, one must separately prove two statements: that a quantum agent can learn in the given MDP efficiently; and that no classical agent can learn efficiently. The first property, an efficient \textit{quantum upper bound}, reduces to proving that quantum oraculization is possible for the given MDP. The second property is typically more challenging. In our examples, we will use a reduction technique, showing that even though the realized MDP allows more options for the agent, learning in the MDP is not easier than learning from the original Simon's oracle -- in which case, the classical-quantum separation is well-established. \subsection{From oracles to interactive RL environments} \label{constructions} Consider standard oracle identification problems, where $\{ f_{s}: X \rightarrow Y \}$ is a specified family of boolean functions, where $s$ is the secret string $s\in \{ 0,1\}^l$ to be identified, and $X = \{ 0,1\}^m$, $Y = \{ 0,1\}^n$. For didactic purposes, we shall first provide an MDP construction $M_0$ which closely follows the underlying structure of the oracle identification problem. This directly translated MDP $M_0$ has none of the desired ``genuinely interactive'' properties described in Section \ref{sec:main}. However, we will provide one intermediary MDP, $M_1$, which will satisfy properties $a)$ and $b)$.
Finally, we will provide two more demanding modifications realizing $M_2$, which satisfies the following \textit{global properties:}\vspace{0.2cm}\\ $i)$ it satisfies the ``genuine interactive MDP'' desiderata $a), b)$ and $c)$. The MDP construction is thus done on an abstract level, without considering any of the specificities of the underlying oracle identification task. Following this, we will separately prove that for the case when the $f_s$ are the functions satisfying the Simon's promise, the resulting MDP $ii)$ maintains the classical hardness of learning, and\\ $iii)$ reduces via the oraculization process to the standard Simon's problem oracle, leading to a quantum-enhanced learning efficiency. Overall this establishes a hard exponential improvement of quantum agents over any classical agent. \paragraph{Initial construction of $M_0$} As mentioned, the structure of such a function can be embedded into a degenerate MDP, by choosing $X$ to be the action set, $Y$ to be the MDP state space (an action $x \in X$ then results in state $f_s(x) \in Y$), and by appropriately encoding $s$ into a reward function: one method to do so is to expand the action space to include a complete set of ``guessing actions''. Alternatively, if $l=m$, then each query can also be considered a guess. Note that all choices made here will have an impact on whether classical hardness of learning holds, as they provide differing options to the agent. This MDP satisfies none of the conditions $a)$-$c)$ from Section \ref{sec:main}. The optimal policy is constant (\textit{e.g.} the agent should always output the correct guess), and not particularly interesting. \paragraph{Unwinding the MDP: constructing $M_1$} \label{sec:unwinding2} We can do better by using the fact that the elements of $X$ are $m-$bit strings, and we can interpret individual bits as actions. The action set is thus $\mathcal{A} =\{0,1\}$, and an MDP episode is a sequence of $m$ sequentially performed actions, corresponding to one query.
This results in an MDP with a smaller action set, and with longer non-trivial paths, which are more natural for RL settings. To maintain observability of the MDP, in general, each sub-sequence of a query should result in a unique environmental state\footnote{Note that in fully observable settings the state should contain all the necessary information the agent would at any stage need to be able to proceed optimally -- which, in general, means it should be able to recover the action sub-string input up to the given point.}, and one simple way to do so is to expand the state space to contain all action substrings of all lengths, so $\mathcal{S} = \bigcup_{j=1}^{m} \{ 0,1\}^j \cup \{0,1 \}^n$. For simplicity, we will assume $l=m$, in which case the only rewarding action sequence is the action sequence specified by the secret string $s$ (an example of a construction where $l \not=m$ is given in section \ref{SI}). The resulting MDP we call $M_1$. This simple modification ensures property $b)$, as the actions of the agent genuinely influence subsequent states, but more importantly it ensures that the rewarding diameter is not constant, but $m$. \paragraph{Augmenting the MDP with a stochastic action: constructing $M_2$} The MDPs constructed thus far still do not satisfy the characteristic $c)$ we set out to fulfil. This characteristic of the MDP, amongst other consequences, requires stochasticity of the transition function on the relevant/rewarding part of the environmental space. Property $c)$ asserts that the optimal behavior must explicitly depend on the state label, and not just on the interaction step counter. Note that full determinism of an MDP necessarily violates $c)$. Hence, we need to modify the MDP such that the transition function becomes stochastic.
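Before turning to the stochastic modification, the unwinding construction of $M_1$ can be sketched in code. This is a simplified sketch under the assumption $l=m$; the representation of states as bit prefixes, and the function names, are ours:

```python
def make_M1(s, f):
    """Unwound MDP M_1 for an m-bit secret string s and oracle function f.

    States are action prefixes (tuples of bits).  After m actions the
    accumulated query x is fed to f, the environment returns to the
    zeroth layer (labelled by the output f(x)), and the transition is
    rewarded iff x == s (assuming l == m).
    """
    m = len(s)
    def step(state, action):
        # A zeroth-layer state (an oracle output) acts as the empty prefix.
        prefix = state if isinstance(state, tuple) else ()
        prefix = prefix + (action,)
        if len(prefix) < m:
            return prefix, 0          # unique state per sub-query: property b)
        x = "".join(map(str, prefix))
        return f(x), int(x == s)      # back to the zeroth layer; reward iff x == s
    return step
```

The reward is issued exactly on the transition completing the query $s$, so the rewarding diameter of this toy model is $m$, as stated above. (The identity function stands in for a genuine Simon oracle in any quick experiment with this sketch.)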
Further, stochasticity must appear in the part of the space that must be visited by optimal agents\footnote{Without demanding this, we could easily introduce stochastic regions to an MDP which are never visited by the agent, which is a trivial, yet uninteresting solution.}. This can be achieved by a relatively simple trick: the action space is augmented to contain the random-jump action $rg$, which lands the agent at a random state, somewhere in the first half of the query specified by the secret string $s$. Note that optimal behavior now requires the agent to always choose the option $rg$, as this leads to the shortest time intervals without rewards on average, but it also prevents the agent from ``blindly'' executing any sequence of actions: the $rg$ jump lands in a random state on the rewarding path, so the subsequent actions of the agent do depend on where the jump landed\footnote{The agent can in principle execute any action in any state; however, the valid jump occurs only if the $rg$ move is executed at the ``zeroth'' level. To fully specify the MDP, we must also specify what happens when this action is executed in any other state. It will be convenient to define that such an action leads to an arbitrary sequence of states such that the normal query depth of $m$ is reached, which will ensure that the MDP is essentially strictly episodic, which simplifies the oraculization process.}. An illustration of $M_1$ and $M_2$ for the example of the oracle for Simon's problem is given in Fig. \ref{Simon}. \begin{figure*} \centering \includegraphics[width=0.8\textwidth, trim=8.5cm 10.5cm 8.5cm 3.5cm,clip=true]{MDP-sp2.pdf} \caption{\label{Simon} Illustration of $M_{1}$ and $M_2$. In the deterministic case, the agent has 2 possible actions at each step, $\{0,1\}.$ The actions form a tree of depth $n-1$; the last action causes a transition back to the zeroth layer.
The path of $n$ moves encodes the input to the Simon's oracle, and the resulting state of the zeroth layer encodes the output. Each path is also interpreted as a ``guess'', and if the agent inputs $s$, the resulting transition is rewarded. The winning path and rewarding transition are highlighted in red and pink. The inset figure presents the randomized version, which allows the agent to land on a random element of the half-prefix of the winning path from any zero-layer state (state ``all-ones'' in the illustration). } \end{figure*} \subsection{Exponential speed-up from Simon's problem} To prove that quantum agents can exponentially outperform any classical agent in (a random instance of) $M_2$, we will first prove that a classical agent requires an exponential number of interaction steps with the environment specified by $M_2$ in order to get a reward even once (except with exponentially small probability). \paragraph{Hardness for all classical agents} For simplicity, we will work with a minor modification of Simon's problem, which we call the \textit{flagged} Simon's problem, where the query function also flags one bit if the query is the secret shift, so: $f':\: \{0,1\}^{n} \times \{0,1\} \rightarrow \{0,1\}^{n} \times \{0,1\},$ where \begin{equation} f'(x,b) = (f(x), b \oplus \delta_{x, s}), \end{equation} where $f$ is some standard Simon's problem function, and $\delta_{x, s} = 1$ if $x=s$ and zero otherwise\footnote{In other words, the ancillary bit is flipped if the query is $s$.}. Intuitively, it should be clear that learning $s$ given access to $f'$ is not (much) easier than having access to $f$: if $s$ is promised not to be ``all zeros'', then one can check whether some $s'$ is correct also using $f$: simply evaluate $f$ for any $x$ and $x\oplus s'$, and check if the outputs are the same. For the full proof of hardness, which also considers the case when $s$ is ``all zeros'', we refer the reader to the Appendix, section \ref{flag:simon}.
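The flagged oracle, and the candidate check described above, translate directly into code. In this sketch (ours, for illustration) bit strings are represented as integers, so $x \oplus s'$ becomes integer XOR:

```python
def flagged(f, s):
    """Flagged Simon oracle: f'(x, b) = (f(x), b XOR [x == s])."""
    def f_prime(x, b):
        return f(x), b ^ (1 if x == s else 0)
    return f_prime

def check_candidate(f, x, s_prime):
    """Verify a nonzero candidate s' with the plain oracle: under Simon's
    promise f(x) == f(y) iff y is in {x, x XOR s}, so for s' != 0 a single
    evaluation pair suffices to decide whether s' is the secret shift."""
    return f(x) == f(x ^ s_prime)
```

For instance, with the $n=2$ Simon function that identifies $x$ with $x \oplus 11$, the flag bit flips exactly on the query $x = 11$, and `check_candidate` accepts $s' = 11$ while rejecting any other nonzero candidate.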
Next, it is also relatively easy to see that if there exists any classical agent which efficiently learns in $M_1$ (the non-randomized version) for Simon's problem, then there exists an algorithm which solves the flagged Simon's problem as well -- the basic idea here is to simulate $M_1$ using nothing but black-box access to the flagged Simon's oracle. The simulator simply returns the correct states given the actions (\textit{i.e.}, the complete sequence of actions input up to this point), collects the actions until a query is complete, feeds the query into the oracle, and returns the resulting state and reward. Such a simulator combined with the learning agent is the algorithm which solves the oracular problem. This already proves that no classical agent can learn in $M_1$ efficiently. Finally, we must take into account the randomized move option, which differentiates $M_1$ and $M_2$. To show hardness of $M_2$, we note that learning in $M_2$ is no easier than learning in $M_1$ where the agent is given, beforehand, the entire first half of the winning path $s$. In turn, this is as difficult as solving a $2n$-bit Simon's problem where the first $n$ bits of $s$ are known. It is relatively easy to see that solving this is not easier than solving a completely independent Simon's problem of size $n$, which is still exponentially hard. More precisely, these arguments can be used to prove the following result: \begin{theo} Any classical learning agent which can achieve $(poly(n)^{-1}, poly(n)^{-1})-$efficiency requires $\Omega(2^{n/4}/poly(n))$ interaction steps in $M_{2},$ generated from Simon's problem. \end{theo} Full details of these proofs are given in the Appendix, section \ref{flagsim:leak}. \paragraph{Efficiency for quantum agents} The proof that there exist quantum agents which achieve optimal performance in $M_2$ can be provided in two steps.
First, one can show that there exist environmental realizations (\textit{i.e.} sequences of CPTP maps) of an environment specified by $M_1$ which allow oraculization, realizing one call to the standard Simon's unitary oracle $\ket{x}\ket{y} \mapsto \ket{x} \ket{y \oplus f(x)}$ by using $O(m)$ interaction steps. This follows from the fact that the environment is essentially $m-$periodic, and from the fact that self-reversible realizations are always possible (see \cite{2015b_Dunjko} or section \ref{com:oracul} for more details). Moreover, when the environment is fully observable, the oraculization can be done even without the environmental reversal, which yields a simpler oraculization process\footnote{This is elaborated in section \ref{simp:or} in more detail, but our results would hold also without these simplifications -- however, this also shows that there exists a more general set of quantum realizations of fully observable environments which allow oraculization than what is possible in the case the environment is not fully observable.}. Second, it is clear that $M_1$ can be understood as a sub-MDP of $M_2$, realized by blocking any agent from utilizing the randomized action $rg$. This also means that there exist CPTP realizations of the environmental maps of $M_2$ which match a quantum-accessible realization of $M_1$ on the subspace not containing the action subspace spanned by $\ket{rg}$; such realizations can be used by a quantum agent to realize the standard Simon's unitary oracle. In other words, a quantum agent can learn $s$ in an environment realizing $M_2$ by simply behaving as if it were in the environment given by $M_1$. All in all, an agent with quantum capabilities can achieve perfect performance in $M_2$ using $O(m^2)$ steps (a multiplicative factor of $m$ comes from the fact that each oracular query corresponds to $O(m)$ interaction steps of the agent).
This proves the following main theorem: \begin{theo} Environments specified by the MDP $M_2$, stemming from a function satisfying Simon's promise, allow an exponential separation between classical and quantum $(\epsilon, \delta)-$efficient learning agents, as long as $\epsilon, \delta$ are not super-polynomially decaying. In particular, the separation holds for constant error and failure parameters. Finally, $M_2$ satisfies all three criteria $a)-c)$ for MDPs with generic properties. \end{theo} \subsection{Practical uses of quantum-enhanced RL} The results presented so far prove that quantum agents can learn exponentially faster than their classical counterparts. While this has clear foundational relevance, it is also important to ascertain whether such results can be expected to influence reinforcement learning as applied in the real world. One major concern is our use of oraculization, which requires the agent to interact with the environment in superposition: it is not clear whether this can be achieved in realistic settings. Another concern is the fact that our quantum speed-ups are obtained for very special environments, which may seem artificial and unrealistic. Here we briefly comment on these concerns. In particular, we argue that oraculization can be achieved in settings where an agent learns to play a game by playing simulated games against itself (``self-play''). Furthermore, we argue that one can achieve superpolynomial quantum speed-ups on a somewhat more natural class of environments that resemble recursive games, based on the Recursive Fourier Sampling problem and its generalizations \cite{1997_Bernstein, 2008_Hallgren}. \paragraph{The feasibility of oraculization} First, in standard RL settings, the environments are classical, and macroscopic, which effectively prohibits useful oraculization. However, many of the celebrated results involving RL deal with simulated, rather than real environments, and RL is used as a ``pre-training'' process.
One of the best examples is the AlphaGo system, in particular its most powerful AlphaGo Zero variant \cite{2017_Silver,2017_Silver_b}, where the system was trained by utilizing simulated games, via self-play -- essentially by playing one agent against a copy of itself -- before it was tested against human and non-human opponents. Since such simulations are done internally, ``in the mind of the agent'', oraculization is clearly possible, as soon as sufficiently large quantum computers become available. More generally, any RL setting which involves model-based learning \cite{2009_Russel}, where the learning agent constructs an internal representation of the external environment, presents a perfect setting for our results to be applicable. A second domain where our techniques may be applied is quantum RL in quantum laboratories: there the environment is manifestly quantum, and so techniques like register scavenging and register hijacking are possible, at least in principle. To elaborate on this, in recent years there has been an increasing interest in utilizing machine learning techniques to mitigate various obstacles one encounters when complex quantum devices, such as quantum computers, are built. Indeed, ideas on how to use machine learning to help achieve more efficient quantum fault-tolerant computation, how to mitigate error sources, and, more generally, how to use ML to build a scalable quantum computer have been put forward (see \textit{e.g.} \cite{2018_Dunjko} for a review).
Some such ideas rely on reinforcement learning, and it makes perfect sense to utilize, if possible, fully coherent methods, \textit{i.e.}, quantum-enhanced reinforcement learning\footnote{Naturally, for this to be feasible, at least a constant-size quantum computer capable of running the quantum-enhanced algorithm should be achievable -- it is intriguing to consider the possibility that such a process could be ``boot-strapped'' and made to correct itself, as an autonomous, intelligent, and adaptive quantum fault-tolerant method.}. \paragraph{The kinds of MDPs that lead to quantum speedups} Our second concern has to do with the still-rigid properties that the MDPs have to satisfy before quantum speed-ups can be obtained. As a first response to this issue, we point out that, while the results we presented deal with Simon's problem, similar methods can be used for other problems as well. In the Appendix, section \ref{speed:RFS}, we show how the Recursive Fourier Sampling (RFS) problem can be used to provide super-polynomial separations \cite{1997_Bernstein, 2008_Hallgren}. In particular, we prove the following theorem: \begin{theo} (informal) There exist families of MDPs, constructed on the basis of RFS problems, which allow a super-polynomial separation between classical and quantum $(\epsilon, \delta)-$efficient learning agents, as long as $\epsilon, \delta$ are not super-polynomially decaying. In particular, the separation holds for constant error and failure parameters. These MDPs satisfy all three criteria $a)-c)$ for MDPs with genuinely interactive properties. \end{theo} RFS, in its original formulation \cite{1997_Bernstein}, assumes access to an $O(n\times \log(n))$-bit binary function $f$. The function $f$ satisfies rather complex nesting conditions, and the classical-quantum separation is in the identification of one bit, concealed in the specification of $f$.
In this sense, the RFS problem does not fit in the paradigm of oracle identification tasks which yield hard RL problems, as, naively, we are asked to distinguish between only two classes of functions. If only a correct guess is rewarded, as would be the case in a simple lifting of RFS problems to MDPs, then a single attempt at guessing would already reveal the correct solution. However, in the formulation of RFS given in \cite{2008_Hallgren}, which studies generalizations of RFS, it is apparent that RFS can also be understood as the problem of identifying an $n$-bit hidden string. The identification of this string can be achieved in \textit{poly}(n) steps given quantum access. In contrast, the classical bound for the identification of this string is super-polynomial. Starting from this formulation, and by using constructions similar to those in section \ref{constructions}, we can recover environments specified by MDPs where quantum access allows efficient learning. Further, we prove that the learning problem is still hard for classical learners. This turns out to be a bit more involved than for the case of MDPs based on Simon's problem, and is achieved using a lifting construction, which embeds smaller instances of RFS in larger instances. Using this, we show that the leaking of parts of the secret string still yields a problem harder than a fresh RFS problem of a smaller instance size. The exact statements, proofs and all constructions are extensively described in the Appendix, section \ref{speed:RFS}. The constructions stemming from the RFS problem are particularly interesting because RFS exhibits certain self-similar features. Such features are reminiscent of features of learning we often encounter in real life; see the Appendix, section \ref{self:sim}, for a discussion.
Finally, while in this paper we have focused on \textit{provable} quantum speedups, it is worth taking a few moments to consider what kinds of problems might be good candidates for \textit{conjectured} quantum speedups. Indeed, we can modify the MDPs described in this paper in various ways, such that our quantum algorithms can still be applied (with the same efficiency), and such that we might still plausibly conjecture that no classical agent can perform well (although we are no longer able to give a rigorous proof of classical hardness). One example of this has to do with the promise hidden in the underlying problem, which can be relaxed, thereby increasing the applicability of the underlying quantum algorithm for oracle identification. Another example has to do with embeddings of one MDP into another MDP. This is genuinely linked to the process of quantum oraculization, and is thus more interesting from our perspective. In the process of oraculization, the agent can, for instance, ``ignore'' certain options, and recover a given oracle. This is further discussed in the Appendix, Section \ref{upperB}, and one particular aspect is formalized in Lemma \ref{QuantumUB}. This states that whenever the restricting of an agent's actions leaves the agent operating in a sub-MDP which can be usefully oraculized, this opens the door for efficient quantum algorithms (although, a priori, nothing can be said about whether there also exist efficient classical algorithms). The above idea can be generalized further. Note that the restricting of the agent's actions (to realize a useful sub-MDP) can be understood as a filter or interface, placed between the agent and the environment, which, intuitively, rejects some of the agent's moves. But much more elaborate interfaces can be used, and such interfaces capture various notions of ``embedding'' of one MDP into another.
This also expands the applicability of our results to all MDPs which embed any of the examples we have explicitly provided in this work. We leave a more extensive analysis of these options for future work. \section{Discussion} \label{sec:dis} The presented constructions balance three requirements which all have to be fulfilled to achieve the goal of this work: the demonstration of better-than-polynomial speed-ups for interactive RL tasks. First, it should be hard for a classical agent to learn in a given MDP, and moreover this should be rigorously provable. Second, the quantum agent should be able to usefully ``oraculize'' the provided environment under reasonable concessions. Third, the MDP should be \textit{interesting}, that is, have properties which are quintessential to RL. The second and third requirements are in fact in strong tension: interesting RL settings involve long memories, and dependencies which vary in length, all of which interfere with the agent's efforts to ``oraculize'' the environment. To resolve this tension, in this work we settled for what is arguably the simplest possible solution: we constructed MDPs with randomness that occurs only along the rewarding path. This has a few consequences, \textit{e.g.} the optimal strategy (which uses the stochastic part of the MDP) is not much better than the strategy which resides in the deterministic part of the MDP: $(n\times l)$ vs $(n\times l - n/4)$ steps between rewards. In order to increase this separation, the agent would have to classically operate in the randomized section of the environment for longer, in which case effectively quantizing just the deterministic part of the environment would lead to less of an advantage. Alternatively, one could attempt to genuinely quantize/oraculize also the random parts of the environment; however, this leads to stochastic oracles whose utility is still not fully understood.
A few results in this direction suggest that quantum improvements in such scenarios may be difficult, as in many cases noisy or randomized oracles offer no advantage over classical oracles \cite{2008_Regev,2014_Harrow}. As a possible route of future research, one may attempt to consider MDPs with a larger stochastic component, by considering settings which do not correspond to standard MDPs. For instance, if the environment is allowed to be time-dependent, then one could consider a task consisting of two phases -- a deterministic phase, where quantum access is used to learn useful information, \textit{a key}; and a stochastic phase, where the key is necessary to successfully navigate the environment. This would entail a full formalization of the ideas of hierarchical learning and information transfer, also discussed briefly in section \ref{self:sim}. As an example of such learning, one could consider notions of information transfer from one environment to another, where already constant separations in learning efficiency may lead to settings where the agent behaves optimally in the limit, or no better than a random agent which learns nothing. To exemplify this, consider a \textit{nested mazes} environment: a sequence of ever larger mazes $E_0, \ldots, E_k$, where $E_l$ consists of $E_{l-1}$ glued to a new maze (the exit of $E_{l-1}$ is the entrance to the new maze), called the appended maze, denoted $E'_{l}$. In each maze $E_l$ only the final exit is rewarded. Because of this, learning $E_l$ is not equivalent to $l$ independent instances of learning appended mazes, but is significantly harder. We assume the appended mazes are roughly of the same size (and take the same time to traverse), and that $E_0 = E'_1$. Now we can define a \textit{growing maze} setting, where an agent is kept in $E_{l}$ for some number of time steps $\tau_l$, before it is moved to $E_{l+1}$.
In such a scenario, even a constant difference in learning speed can become magnified exponentially in $l$. An agent which manages to learn each appended maze $E'_l$ in time $\tau$ can avoid ever having to learn a maze of increased size: it learns $E_1$, and solves $E_2$ by first applying the solution of $E_1$, which brings it to the beginning of the appended maze, which is of constant size. Later, the agent has the simple recursive step: to solve $E_l$, it executes the solution of $E_{l-1}$, which brings it to the new, but constant-sized instance. Assuming that each maze can be traversed in, say, $\kappa$ steps, as long as $\tau_{l} \geq \kappa\times \tau$, the agent will be successful each time, effectively never having to tackle a larger maze. This is a simple example of transfer learning, where knowledge in one domain is utilized in the next. In contrast, any agent which requires more than roughly $\tau_l / \kappa$ steps to learn the appended mazes will have to learn the large mazes from scratch. This will imply exponentially worse success probabilities in $l$, rapidly converging to the performance of a random agent. Similar effects could be achieved in partially observable MDP cases; however, there the optimal policies may not be constant, but rather depend on the entire history of interaction. Finally, it would be particularly interesting to identify the possibilities of speed-ups in RL settings which do not utilize a reduction onto oracle identification problems, but deal directly with environmental maps. \noindent\textbf{Acknowledgements} VD and JMT are indebted to Hans J. Briegel for numerous discussions which have improved and influenced many parts of this work. The authors wish to thank Shelby Kimmel for initial discussions, and Stephen Jordan, Scott Glancy and Scott Aaronson for helpful feedback. VD also thanks JMT and Hans J. Briegel for their hospitality during his stays. VD acknowledges the support from the Alexander von Humboldt Foundation.
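The timing argument above can be illustrated with a toy accounting model (the flat cost model and all names are ours; it mirrors only the budget comparison, not actual learning): at level $l$ the agent first replays the stored solutions of all earlier appended mazes, then spends its fixed learning time on the new constant-size instance, and succeeds iff this fits into the budget $\tau_l$.

```python
def growing_maze_success(traverse_times, learn_time, budgets):
    """Toy accounting of the growing-maze argument.

    traverse_times[l]: steps to traverse appended maze l once it is learned
    learn_time:        steps the agent needs to learn one appended maze
    budgets[l]:        time budget tau_l the agent is given at level l

    The agent replays the stored prefix solution, then learns the new
    constant-size appended maze; it succeeds iff this always fits in the
    current level's budget.  A slightly slower learner fails at some
    level and, in the full argument, must then learn from scratch.
    """
    replay = 0
    for kappa_l, tau_l in zip(traverse_times, budgets):
        if replay + learn_time > tau_l:
            return False          # fell behind: the transfer chain breaks
        replay += kappa_l
    return True
```

With equal traversal costs and linearly growing budgets, a fast learner succeeds at every level, while a learner that is only a constant factor slower already fails at the first level, echoing the exponential gap in success probability described above.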
JMT acknowledges the support of the National Science Foundation under Grant No. NSF PHY11-25915. Contributions by NIST, an agency of the US government, are not subject to US copyright. The authors acknowledge funding from ARL CDQI. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research Quantum Algorithms Teams program. \newpage \bibliographystyle{plain}
module Addressbook
  class Engine < ::Rails::Engine
    isolate_namespace Addressbook

    config.after_initialize do |app|
      app.routes.prepend do
        mount Addressbook::Engine, at: "/addressbook"
      end
    end
  end
end
package org.wso2.carbon.identity.sso.saml.dto;

import java.io.Serializable;

public class SingleLogoutRequestDTO implements Serializable {

    private static final long serialVersionUID = -5086237688925774301L;

    private String assertionConsumerURL;
    private String logoutResponse;
    private String rpSessionId;
    private String certificateAlias;
    private String tenantDomain;

    public String getAssertionConsumerURL() {
        return assertionConsumerURL;
    }

    public void setAssertionConsumerURL(String assertionConsumerURL) {
        this.assertionConsumerURL = assertionConsumerURL;
    }

    public String getLogoutResponse() {
        return logoutResponse;
    }

    public void setLogoutResponse(String logoutResponse) {
        this.logoutResponse = logoutResponse;
    }

    public String getRpSessionId() {
        return rpSessionId;
    }

    public void setRpSessionId(String rpSessionId) {
        this.rpSessionId = rpSessionId;
    }

    public String getCertificateAlias() {
        return certificateAlias;
    }

    public void setCertificateAlias(String certificateAlias) {
        this.certificateAlias = certificateAlias;
    }

    public String getTenantDomain() {
        return tenantDomain;
    }

    public void setTenantDomain(String tenantDomain) {
        this.tenantDomain = tenantDomain;
    }
}
Q: SQL: Convert bigint type to formatted date

I believe this is a simple question, but I've searched around and found no satisfactory answer. Basically I have a bigint column in MySQL that I want to convert to a date format like 20201004, for example:

1601625689496 -> 20201002

I've tried:

to_date(cast(1601625689496 as timestamp))
date(cast(1601625689496 as timestamp))

But neither allows formatting. I hope to get the easiest and fastest conversion.

A: Assuming that your number is an epoch timestamp in milliseconds (that is, the number of milliseconds since January 1st, 1970), you can use from_unixtime():

select from_unixtime(1601625689496 / 1000)

This gives you a datetime value. If you want to drop the time component, then:

select date(from_unixtime(1601625689496 / 1000))

Note that 1601625689496 actually maps to 2020-10-02, not 2020-10-04.
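The millisecond-epoch interpretation can be double-checked outside the database; a quick Python sketch (using UTC; note that MySQL's from_unixtime() uses the session time zone, so the in-database date may differ near midnight):

```python
from datetime import datetime, timezone

# 1601625689496 is milliseconds since the Unix epoch; divide by 1000
# to get seconds, then format as YYYYMMDD in UTC.
ts_ms = 1601625689496
dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
formatted = dt.strftime("%Y%m%d")
print(formatted)  # -> 20201002, matching the answer's note
```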
The What: As businesses and organizations continue to use remote communications technology to bridge distances and share ideas, there has been a proliferation of huddle rooms and small meeting rooms demanding technologies that are more convenient and enable a seamless collaboration experience, so that users can easily come together to exchange information, share ideas, and collaborate at any time. Yamaha has announced the CS-700, an all-in-one collaboration solution specifically designed to support these environments. The CS-700 combines best-in-class audio with high-quality video to fulfill huddle room requirements and collaboration capabilities in one wall-mounted system. Yamaha entered the conferencing market in 2006, offering leading microphone and speaker systems, including the YVC-1000 USB and Bluetooth conferencing phone. In 2014, the company acquired Revolabs, a provider of audio solutions for unified communications and enterprise collaboration. Together, these companies deliver solutions that ensure participants in remote conferences can hear and be heard clearly in every meeting environment. The Yamaha CS-700 is the first solution of its kind to bring together comprehensive audio, video, and collaboration capabilities in a wall-mounted system. Combining Revolabs' expertise in microphone technology, Yamaha's leadership in loudspeaker engineering, and new high-quality video and screen-sharing capabilities, the CS-700 provides an affordable, simple-to-install, high-fidelity system for successful teamwork from a single USB connection.
\section{Introduction} \label{intro} Tensor models \cite{ambj3dqg,mmgravity,sasa1,boul,oog}, and their field theory version, tensor field theories, are approaches to Quantum Gravity (QG) which propose a background-independent quantization and, in the field theory case, an ultraviolet-consistent completion of General Relativity. They study a discrete-to-continuum transition for discretized path integrals summing not only over metrics of a discretized Einstein-Hilbert action, but also over topologies. The partition function of tensor models spans weighted triangulations for every piecewise-linear manifold in any dimension, hence they are naturally a random-geometric approach to QG. In this regard, they can be considered to fall under the umbrella of discretization approaches to QG, such as quantum Regge calculus \cite{regge, hamber} and (causal) dynamical triangulations \cite{Ambjorn:2005jj,Ambjorn:2012jv,Ambjorn:2013hma}. Historically, tensor models were introduced as higher dimensional generalizations of matrix models, which saw their celebrated success in describing 2-dimensional QG \cite{Di Francesco:1993nw}. It was, however, not so straightforward to generalize matrix models' achievements to higher dimensions, mainly because the organizing principle of, and computational tools for, the partition function of tensor models were lacking; diagonalization of tensors is not obvious, and the techniques on which matrix model calculations relied did not find extensions to tensors. In particular, matrix models generate maps sorted by their genus. Their partition function then admits a genus expansion and, at large size $N$ of the matrix \cite{thooft}, calculations can be made exact and matrix models become solvable. The large $N$ limit is crucial to achieve the continuum limit of matrix models, as a 2D theory of gravity coupled to a Liouville conformal field \cite{Kazakov:1985ds,KKM, David:1985nj, KPZ,david,kawai}. 
This is one of the most acclaimed results pertaining to 2D QG. The large $N$ limit for tensor models \cite{Gur3,GurRiv,Gur4} was finally unveiled after the advent of colored tensor models, generating triangulations shown to be pseudo-manifolds \cite{color,Gurau:2009tz,Gurau:2010nd,Gurau:2011xp}. The partition function of colored tensor models can be catalogued in terms of a new quantity called the degree of the tensor graph, which plays the role of the genus in higher dimensions. Such a discovery, as anticipated, led to a wealth of developments in random tensors in areas as diverse as statistical mechanics, quantum field theory, constructive field theory, combinatorics, probability theory, geometry and topology \cite{Bonzom:2011zz}--\cite{BenGeloun:2017vwn}. The following references provide comprehensive reviews on random tensors and tensor field theories \cite{sigma,razvanbook,Rivasseau:2011hm,Carrozza:2013mna}. Furthermore, more recently, tensor models have attracted attention from a new direction: they turn out to be desirable toy models for holographic duality \cite{Witten:2016iux, Gurau:2016lzk,Klebanov:2016xxf,Krishnan:2016bvg,Ferrari:2017ryl, Gurau:2017xhf,Bonzom:2017pqs}. The large $N$ limit, averaged over the disorder, of the famous Sachdev-Ye-Kitaev (SYK) model \cite{sachdevye,kitaev,Maldacena:2016hyu,Gross:2016kjj} corresponds to the large $N$ limit of colored tensor models thought of as quantum mechanical models without disorder \cite{Witten:2016iux}. Despite all their remarkable achievements, colored tensor models have not yet succeeded in defining a ``nice'' continuum limit in which an emergent 3D or 4D space could be identified. In colored tensor models, some graphs which are particular triangulations of a sphere, called melons, are found to be dominant at large $N$ \cite{Bonzom:2011zz}. 
In the world of melons, colored tensor models undergo a phase transition towards the so-called branched polymer phase, which does not share the characteristics ({\it e.g.,} Hausdorff and spectral dimensions) of our large and smooth space-time manifold \cite{Gurau:2013cbh}. In order to improve the critical behavior of tensor models, it was then put forward to go beyond the melonic sector, by modifying the weights of interactions in order to include a wider class of graphs that could be resummed at large $N$. Such a proposal has been called ``enhancing'' tensor models and was first investigated in the work by Bonzom et al. \cite{Bonzom:2012wa,Bonzom:2015axa}. The upshot of this analysis is somewhat encouraging: some enhanced tensor models undergo a phase transition from branched polymers to a 2D QG phase (with positive entropy exponents). Let us be more specific at this point: the previous studies on enhanced tensor models focused on increasing the statistical weights of non-melonic tensor interactions called necklaces (which are only present at tensor rank $d \ge 4$). From a different perspective, with its very own set of questions, our proposal is to use the framework of field theories, therefore working in tensor field theories rather than in tensor models, and to explore new ways of building enhanced models in which non-melonic graphs could contribute to the analysis at large $N$. Once one promotes tensor models to field theories, which now possess infinitely many degrees of freedom, we call them tensor field theories. Note that, in the 90s, Boulatov introduced a gauge invariant version of tensor models by embedding them in lattice gauge field theory over $SU(2)$ \cite{boul}. This approach was considerably appealing for making contact with other QG approaches and was at the inception of Group Field Theory (GFT) \cite{Freidel:2005qe, oriti, Krajewski:2012aw}. Hence, chronologically, the first field-theoretic approach to tensor models was GFT. 
GFT implements a constraint (referred to as the gauge invariance constraint) on the fields to achieve a geometrical interpretation of the combinatorial simplices associated with the tensor field and their interactions, along with a flatness condition for the gluing of simplices. On the other hand, in GFT, as the name indicates, the group as a manifold where the fields live is a central concept: the group law is used in the underlying lattice gauge field theory. Tensor field theory distinguishes itself from GFT in that it need not impose these constraints. There are other motivations for introducing fields in the search for an emergent spacetime. For instance, one makes further progress by regarding the simplicial complex associated with the tensor (in a tensor model) as a true (combinatorial) quantum of space. These fields live in an abstract internal space and are endowed with a given dynamics and consequently a flow. The goal is then to provide a phase portrait of that theory space and, in particular, to detect the presence of interesting (fixed) points. Such fixed points would be associated with interesting physics. Thus, rather than tuning a given tensor model at criticality and seeing a new phase for geometry emerging, we might give initial conditions of a model in a field theory space and let it flow towards the corresponding fixed point. To define a flow, a parameter or a scale is needed. This makes the presence of propagators or regulators of paramount importance in usual field theory. Hence, embedding tensor models into a field-theoretic context, that is, giving them a propagator, provides them with a flow. This naturally steers us towards other interesting questions. Quantum field theories have many well-established tools in order to reveal the properties of high energy physics and condensed matter systems. 
However, tensor field theory as a quantum field theory also inherits several of its drawbacks, like divergent amplitudes due to the existence of infinitely many degrees of freedom. The treatment of divergences, hence the renormalization program for tensor field theories, becomes even more intricate because they are non-local field theories, {\it i.e.,} their interactions occur over an extended region of the configuration space. As a result, importing quantum field theoretic methods into tensor field theories has been an important axis of investigation in recent years. The Renormalization Group (RG) program has been successfully applied to tensor field theory and also GFT, leading to the discovery of entirely new families of renormalizable non-local quantum field theories \cite{BenGeloun:2011rc,BenGeloun:2012yk,Geloun:2012bz,Carrozza:2012uv,Samary:2012bw,Geloun:2013saa,Carrozza:2013wda}. These models can be regarded as a rightful extension of matrix field theories like the Grosse and Wulkenhaar model \cite{Grosse:2004yu,Grosse:2012uv,Grosse:2016qmk}, an asymptotically safe non-local quantum field theory stemming from noncommutative geometry. The parametric representation and the ensuing dimensional regularization have been extended to tensor field theory with the emergence of new Symanzik polynomial invariants for tensor graphs \cite{Geloun:2014ema}. Moreover, the computations of the perturbative $\beta$-functions for $\phi^4$- and $\phi^6$-like models were achieved in the UV \cite{BenGeloun:2012yk, BenGeloun:2012pu,Carrozza:2012uv, Carrozza:2013wda,Carrozza:2014rya,Carrozza:2014rba,Rivasseau:2015ova}. The perturbative results \cite{BenGeloun:2012pu, BenGeloun:2012yk, Rivasseau:2015ova} suggested a generic asymptotic freedom for tensor field theories. 
This result was somewhat surprising at first, as these are not gauge theories; however, the (combinatorially) non-local nature of the tensor interactions drives the presence of a non-trivial wave-function renormalization, which then eventually dominates the renormalization of the coupling constants. Afterwards, careful (perturbative) studies on $\phi^6$-like models hinted that asymptotic safety may be possible in GFT \cite{Carrozza:2014rba,Carrozza:2014rya}. As a consequence, this last result strongly suggested that $\phi^6$ theories could have a more complicated behavior in the UV even for tensor field theories, and that asymptotic freedom might not hold for these particular models. The perturbative renormalization reveals interesting UV properties for tensor field theories, which were encouraging to proceed to the next level. The exact renormalization group equations via Polchinski \cite{Krajewski:2015clk,Krajewski:2016svb} and via Wetterich (Functional Renormalization Group (FRG)) equations were fruitfully applied in all rank $d\ge 2$ matrix and tensor models, with compelling evidence for the existence of Gaussian and non-Gaussian UV and IR points \cite{Eichhorn:2014xaa,Benedetti:2014qsa,Benedetti:2015yaa,Geloun:2016xep,Carrozza:2016tih,Eichhorn:2017xhy,Geloun:2016qyb,Carrozza:2017vkz}. Within the ordinary consistency checks on the FRG methods ({\it i.e.,} extensions of the truncation at higher orders and a change of the theory regulator), non-perturbative calculations show that several $\phi^4$ models are asymptotically free and a $\phi^6$ model is asymptotically safe \cite{Benedetti:2014qsa,Geloun:2015qfa,Geloun:2016qyb}. In the GFT setting, similar conclusions were reached using the same tools with an extension of the truncation \cite{Carrozza:2016tih,Carrozza:2017vkz}. 
Hence, we conclude with a certain degree of confidence that, generically in tensor field theory, renormalizable $\phi^4$ models are UV asymptotically free, and renormalizable $\phi^6$ models are UV asymptotically safe. The notable UV behavior of renormalizable tensor field theories is only one interesting aspect among other results brought by the FRG analysis. Another result concerns strong evidence for the existence of infrared (IR) fixed points. For tensor field theories, the existence of an IR fixed point could play an important role. Indeed, one aim of the FRG program is to identify the phase portrait of field theories. Stable IR and UV fixed points define complete trajectories which allow one to distinguish different regimes of the theory; in other words, the existence of such trajectories could provide evidence for phase transitions in the models. A known mechanism characterizing a phase transition in ordinary field theory is spontaneous symmetry breaking. In fact, from preliminary calculations in \cite{Benedetti:2014qsa,Geloun:2016qyb}, the phase diagrams of some tensor field theories show an IR fixed point which is similar to the Wilson-Fisher fixed point of a scalar field theory (it however occurs in different dimensions). This would likely imply that there is a phase transition in tensor field theory. If one shows that this phase transition results from a spontaneous symmetry breaking in these models, this transition will be described in terms of a symmetric phase and a broken or condensed phase. The broken phase would correspond to a new vacuum state corresponding to some geometry, characterized by a non-zero expectation value of the field. This may validate the scenario in which homogeneous and isotropic geometries emerge as a condensate in GFT \cite{gielen}. In this paper, we undertake the study of the theory space of enhanced tensor field theories by addressing the perturbative renormalization of classes of enhanced models. 
We study tensor field models with quartic melonic interactions carrying a momentum weight mimicking derivative couplings. The effect of the new couplings is to make the non-melonic graph amplitudes larger than or as large as the melonic graph amplitudes. Note that derivative couplings are well established in ordinary renormalizable quantum field theory, {\it e.g.} appearing in non-Abelian Yang-Mills theories. The issue addressed in this work is to find a class of renormalizable theories endowed with non-local and weighted interactions. As one can expect, the presence of these interactions bears additional subtleties, as it naturally tends to increase the divergence degree of a graph. The enhanced models that we study radically differ from those of \cite{Bonzom:2015axa} and \cite{Carrozza:2017vkz}, as we do not enhance non-melonic interactions of the necklace type but melonic interactions. We could apply the same idea of derivative-type couplings to necklaces and expect the resulting kind of enhanced tensor field theories to be closely related to the ones of the above references. Specifically, we focus on $\phi^4$-melonic couplings which are endowed with extra powers of momenta $|p|^{2 a}$; we call the resulting models $p^{2a}\phi^4$-models, where $a\geq 0$ is a parameter. The study is put on a very general ground, at any rank $d$ of the tensor field defined on an Abelian group of dimension $d\times D$. The propagator of the model is of the form $(\sum |p|^{2b} + \mu)^{-1}$, where $b>0$. Hence our model is parametrized by $(d,D, a, b)$. The case $a=0$ corresponds to the standard tensor field theory. Initially proposed in \cite{Geloun:2015lta}, these models were found tractable at fixed ranks $d=3,4$, $D=1$, and $b=1$, and there were indications of their super-renormalizability without a full-fledged proof of this statement. We carry out detailed analyses for these models, extending them to any rank and any dimension. 
The method that we use is the so-called multi-scale renormalization \cite{Rivasseau:1991ub}. It proves to be efficient enough to address non-local field theories (like tensor field theories) by achieving a perturbative power counting theorem and then renormalization at all orders. Using the multi-scale analysis, we then find conditions on the tuple $(d,D,a,b)$ for potentially renormalizable enhanced models of two different types. - For the first type of theory, quite remarkably, we show that for generic $(d,D)$ parameters, there exists a just-renormalizable model at all orders. Theorem \ref{theoren+} summarizes this result. - For the second type of theory, we prove the renormalizability at all orders of a specific model for a choice of parameters. Theorem \ref{theoremx} is another main result of our analysis. The plan of the paper is as follows: In section \ref{sect:actio}, we introduce two models: the model $+$ and the model $\times$, with different enhancements in the $\phi^4$-tensor interactions. In section \ref{sect:perturb}, preparing for the power counting analyses, we give an explicit expression for the amplitudes of a given graph ${\mathcal G}$. Section \ref{sect:pow} addresses the multi-scale analysis: we optimally bound a generic graph amplitude in terms of combinatorial quantities of the graph. In section \ref{sect:enhancedmelon}, we determine the parameter spaces of $(d,D, a, b)$ which could potentially give rise to renormalizable models $+$ and $\times$. Concretely, we investigate further instances of renormalizable models: (1) section \ref{sect:renmo+} presents a generic model $+$ with arbitrary $D$, $d$, $a= D(d-2)/2$, and $b = D(2d-3)/4$; (2) section \ref{sect:renmox} addresses a model $\times$ with $D=1$, $d=3$, $a=1/2$ and $b=1$. We prove that the models determined by these parameters are indeed renormalizable at all orders of perturbation theory. We give a summary of our results and future prospects in section \ref{concl}. 
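For concreteness, evaluating the model $+$ parameter formulas $a = D(d-2)/2$ and $b = D(2d-3)/4$ at the two lowest interesting ranks (our own arithmetic, given here only as a quick check):

```latex
% Instances of the just-renormalizable model $+$ family, evaluated for illustration:
\begin{align*}
(d,D) = (3,1): &\quad a = \tfrac{3-2}{2} = \tfrac12\,, & b &= \tfrac{2\cdot 3 - 3}{4} = \tfrac34\,,\\
(d,D) = (4,1): &\quad a = \tfrac{4-2}{2} = 1\,, & b &= \tfrac{2\cdot 4 - 3}{4} = \tfrac54\,.
\end{align*}
```

Note that the renormalizable model $\times$ quoted above shares the value $a=1/2$ at $(d,D)=(3,1)$ but requires the different propagator exponent $b=1$.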
Closing the manuscript, in appendix \ref{app:sums}, the reader will find the details of the spectral sums to be used for bounding the amplitudes, and appendices \ref{app:mod+} and \ref{app:modx} respectively illustrate some representative and divergent graphs appearing in the specific models $+$ and $\times$. \section{Enhanced $p^{2a}\phi^4$ tensor field theories} \label{sect:actio} We consider a field theory defined by a rank $d$ complex tensor $\phi_{\bf P}$, with ${\bf P}=(p_1,p_2,\dots,p_d)$ a multi-index, and $\bar\phi_{\bf P}$ denotes its complex conjugate. From a field theory standpoint, introducing a complex function $\phi: (U(1)^{D})^{\times d} \to {\mathbbm C}$, where $D$ will be called the dimension of the group $U(1)^D$, $\phi_{\bf P}$ is the Fourier component of the field and the indices $p_s$ are themselves multi-indices: \begin{equation} p_{s} =(p_{s,1},p_{s,2},\dots,p_{s,D}) \,,\; \, p_{s,i} \in {\mathbbm Z}\,. \end{equation} Let us make a few remarks. First, considering $\phi_{\bf P}$ as a rank $d$ tensor is a slight abuse because the modes $p_{k,s}$ range up to infinity. If one cuts all modes sharply off at $N$, the resulting multi-index object $\phi_{{\bf P};N}$ transforms under the fundamental representation of $U(N)^{D\times d}$ and hence is a tensor. For convenience, we keep the name of tensor for the field $\phi_{{\bf P}}$. Second, although $\phi_{\bf P}$ could be considered as a $d\times D$ multi-index tensor, we shall call it a rank $d$ tensor, because $d$ and $D$ will play different roles in the following. Third, several of the results derived hereafter could be extended to any compact Lie group $G_D$ of dimension $D$ admitting a Peter-Weyl decomposition (see, for instance, how a treatment for $SU(2)^{D'}$, $D=3D'$, can be achieved using tools in \cite{Geloun:2013saa}). The treatment of the corresponding models could be achieved with some extra work. 
Finally, a last remark is that the dimension $D$ has nothing to do with the space dimension associated with the discrete geometry encoded by the tensor contractions, as we will discuss soon. Thus, in the following, referring to UV and IR should be understood in terms of small and large distances on the group $U(1)^D$. A general action $S$ built by a sum of convolutions of the tensors $\phi_{\bf P}$ and $\bar\phi_{\bf P}$ can be written as: \begin{eqnarray}\label{eq:actiond} && S[\bar\phi,\phi]={\rm Tr}_2 (\bar\phi \cdot {\bf K} \cdot \phi) + \mu \, {\rm Tr}_2 (\phi^2) + S^{{\rm{int\,}}}[\bar\phi,\phi]\,, \cr\cr && {\rm Tr}_2 (\bar\phi \cdot {\bf K} \cdot \phi) = \sum_{{\bf P}, \, {\bf P}'} \bar\phi_{{\bf P}} \, {\bf K}({\bf P};{\bf P}') \, \phi_{{\bf P}' } \,, \qquad {\rm Tr}_{2}(\phi^2) = \sum_{{\bf P}} \bar\phi_{{\bf P}}\phi_{{\bf P}}\,, \cr\cr && S^{{\rm{int\,}}}[\bar\phi,\phi]= \sum_{n_b} \lambda_{n_b} {\rm Tr}_{n_b}(\bar\phi^{n_b}\cdot {\bf V}_{n_b} \cdot \phi^{n_b})\,, \end{eqnarray} where ${\rm Tr}_{n_b}$ are sums over all indices $p_{k,s}$ of ${\bf P}$ of $n_b$ tensors $\phi$ and $\bar\phi$. Then ${\rm Tr}_{n_b}$ are considered as traces over indices of the tensors. In \eqref{eq:actiond}, the kernels ${\bf K}$ and ${\bf V}_{n_b}$ are to be specified, $\mu$ is a mass coupling and $\lambda_{n_b}$ is a coupling constant. If ${\bf V}_{n_b}$ corresponds to a simple pairing between tensor indices (by delta functions identifying indices), then $ {\rm Tr}_{n_b}(\bar\phi^{n_b}\cdot {\bf V}_{n_b} \cdot \phi^{n_b})$ spans the space of unitary invariants \cite{Gurau:2011tj, Gurau:2012ix,Bonzom:2012hw}. There is a geometrical interpretation of the interaction ${\rm Tr}_{n_b}(\bar\phi^{n_b}\cdot {\bf V}_{n_b} \cdot \phi^{n_b})$. If each tensor field is regarded as a $d$-simplex, the generalized trace ${\rm Tr}_{n_b}$ corresponds to a pairing or an identification of the $(d-1)$-simplices on the boundary of the $d$-simplices to form a $d+1$ dimensional discrete geometry. 
If the kernel ${\bf V}_{n_b}$ is not a simple pairing, it then assigns a weight to each of those discrete geometries. A model is specified after giving the data of the kernels ${\bf K}$ and ${\bf V}_{n_b}$. Let us introduce some convenient notations: \begin{eqnarray}\label{delmom} && {\boldsymbol{\delta}} _{ { \bf P}; {\bf P'} } = \prod_{s=1}^d\prod_{i=1}^D \delta_{ p_{s,i}, p'_{s,i}} \,,\qquad {\bf P}^{2b} = \sum_{s=1}^d |p_s|^{2b}\,, \qquad |p_{s}|^{2b} = \sum_{i=1}^D |p_{s,i}|^{2b} \,,\cr\cr && \phi_{12\dots d} = \phi_{p_1,p_2,\dots, p_d} = \phi_{\bf P} \,, \end{eqnarray} for a real parameter $b\geq 0$, where $ \delta_{ p,q}$ is the usual Kronecker symbol on ${\mathbbm Z}$. We introduce the following class of kernels for the kinetic term \begin{equation} \label{eq:3dkin} {\bf {K}}_b({ \bf P}; {\bf P'} ) = {\boldsymbol{\delta}} _{ { \bf P}; {\bf P'} } {\bf P}^{2b} \,. \end{equation} ${\bf K}_b$ therefore represents a sum of powers of eigenvalues of $d$ Laplacian operators over the $d$ copies of $U(1)^D$. The case $b=1$ corresponds precisely to Laplacian eigenvalues on the torus. Seeking renormalizable theories, and because we are dealing with a nonlocal model, we might be led to choose non-integer values of $b$. In usual quantum field theory (QFT), $b$ should have an upper bound $b\leq 1$ to ensure the Osterwalder-Schrader (OS) positivity axiom \cite{Rivasseau:1991ub}. Whether or not such a condition (or any OS axioms) might be kept for tensor field theories is still under debate \cite{Rivasseau:2011hm}. Thus, for the moment, to avoid putting strong constraints on the models, we leave $b$ as a free, strictly positive real parameter. We will be interested in two models distinguished by their interactions. 
We introduce a parameter $a\in (0,\infty)$ and write: \begin{eqnarray} && {\rm Tr}_{4;1}(\phi^4) = \sum_{p_{s}, p'_{s} \in {\mathbbm Z}^D} \phi_{12\dots d} \,\bar\phi_{1'23\dots d} \,\phi_{1'2'3'\dots d'} \,\bar\phi_{12'3'\dots d'} \,, \label{phi4sim}\\ && {\rm Tr}_{4;1}([p^{2a}+p'^{2a}]\,\phi^4) = \sum_{p_{s}, p'_{s} \in {\mathbbm Z}^D} \Big( |p_{1}|^{2a} + |{p'}_{1}|^{2a}\Big)\phi_{12\dots d} \,\bar\phi_{1'23\dots d} \,\phi_{1'2'3'\dots d'} \,\bar\phi_{12'3'\dots d'}\,, \cr\cr && = 2\sum_{p_{s}, p'_{s} \in {\mathbbm Z}^D} |p_{1}|^{2a}\,\phi_{12\dots d} \,\bar\phi_{1'23\dots d} \,\phi_{1'2'3'\dots d'} \,\bar\phi_{12'3'\dots d'} = 2\,{\rm Tr}_{4;1}(p^{2a}\,\phi^4) \,, \label{intplus} \\ && {\rm Tr}_{4;1}([p^{2a}p'^{2a}]\,\phi^4) = \sum_{p_{s}, p'_{s} \in {\mathbbm Z}^D} \Big( |p_{1}|^{2a} |{p'}_{1}|^{2a}\Big)\phi_{12\dots d} \,\bar\phi_{1'23\dots d} \,\phi_{1'2'3'\dots d'} \,\bar\phi_{12'3'\dots d'} \,. \label{intprod} \end{eqnarray} Note that in \eqref{phi4sim}, \eqref{intplus} and \eqref{intprod}, the color index 1 plays a special role. We sum over all possible color indices and obtain colored symmetric interactions: \begin{eqnarray} \label{eq:3dinter} && {\rm Tr}_{4}(\phi^4) := {\rm Tr}_{4;1} (\phi^4) + \Sym (1 \to 2 \to \dots \to d) \,, \cr\cr && {\rm Tr}_{4}(p^{2a}\,\phi^4) := {\rm Tr}_{4;1} (p^{2a}\,\phi^4)+ \Sym (1 \to 2 \to \dots \to d) \,, \cr\cr && {\rm Tr}_{4}([p^{2a}p'^{2a}]\,\phi^4) := {\rm Tr}_{4;1} ([p^{2a}p'^{2a}]\,\phi^4)+ \Sym (1 \to 2 \to \dots \to d) \,. \end{eqnarray} The momentum weights in the interactions ${\rm Tr}_{4}(p^{2a}\,\phi^4)$ and ${\rm Tr}_{4}([p^{2a}p'^{2a}]\,\phi^4)$ can be viewed as derivative couplings for particular choices of $a$. This is why, at times, we will call them derivative couplings. Written in momentum space, the interactions are, however, put in a more general setting using $|p|^{2a}$ for positive values of $a$. Once again, achieving renormalizability will be our sole constraint for fixing $a$. 
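As a side remark, the factor of $2$ in \eqref{intplus} follows from a one-line relabeling check, which we spell out here for convenience: exchanging the summation variables $p_s \leftrightarrow p'_s$ for all $s$ leaves the product of the four fields invariant while swapping $|p_1|^{2a}$ and $|p'_1|^{2a}$, so the two terms in the bracket contribute equally:

```latex
% Relabeling $p_s \leftrightarrow p'_s$ in the second term of \eqref{intplus}:
\sum_{p_{s}, p'_{s} \in {\mathbbm Z}^D} |{p'}_{1}|^{2a}\,
\phi_{12\dots d}\,\bar\phi_{1'23\dots d}\,\phi_{1'2'3'\dots d'}\,\bar\phi_{12'3'\dots d'}
=
\sum_{p_{s}, p'_{s} \in {\mathbbm Z}^D} |p_{1}|^{2a}\,
\phi_{12\dots d}\,\bar\phi_{1'23\dots d}\,\phi_{1'2'3'\dots d'}\,\bar\phi_{12'3'\dots d'}\,.
```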
These interactions are called enhanced compared to ${\rm Tr}_{4}(\phi^4)$ (the usual quartic melonic interaction studied for instance in \cite{BenGeloun:2012pu}) because they can generate amplitudes which are more divergent, and thus enhanced, compared to those generated by ${\rm Tr}_{4}(\phi^4)$ alone. As a second property, we discussed that enhanced interactions represent weighted discrete geometries. The contraction pattern of the four tensors shows us that the weight here has a subtle sense: we are weighting a particular $(d-1)$-simplex in the $(d+1)$-simplex representing the interaction. It turns out that the renormalization analysis performed in sections \ref{sect:renmo+} and \ref{sect:renmox} leads us to new divergent 2-point graphs. Then we must add to the kinetic term the new terms: \begin{eqnarray} {\rm Tr}_2 (p^{2\xi}\phi^2) = {\rm Tr}_2 (\bar\phi \cdot {\bf K}_{\xi} \cdot \phi)\,, \qquad \xi = a,2a\,, \end{eqnarray} in addition to the original kinetic term, which corresponds to $\xi = b$. We will need counter-terms for each term in the action. In particular, we need the counter-term $CT_{2}$ of the form of the mass term, $CT_{2;b}$ for the wave function, and new 2-point interactions $CT_{2;a}$ and $CT_{2;2a}$, which will be important for renormalizing two-point functions. We define \begin{equation}\label{counter} CT_{2}[\bar\phi,\phi] =\delta_{\mu}{\rm Tr}_2(\phi^2) \,, \;\, CT_{2;\xi}[\bar\phi,\phi] = Z_{\xi}{\rm Tr}_2 (p^{2\xi}\phi^2) \,, \;\, \xi=a,2a,b \,, \end{equation} where $\delta_\mu$ and $Z_\xi$ are counter-term couplings. Note that, in the following, $Z_b$ is called the wave function renormalization. 
The models that we will study have the following kinetic terms and interactions: \begin{eqnarray} \text{model }+: && S^{{\rm{int\,}}}_+[\bar\phi,\phi] = \frac{ \lambda}{2}\,{\rm Tr}_{4}(\phi^4) +\frac{\eta_{+}}{2}\, {\rm Tr}_{4}(p^{2a}\,\phi^4) +CT_{2}[\bar\phi,\phi] + \sum_{\xi=a,b}CT_{2;\xi}[\bar\phi,\phi] \cr\cr && S^{{\rm{kin\,}}}_+[\bar\phi,\phi] = \sum_{\xi=a,b} {\rm Tr}_2 (p^{2 \xi} \phi^2) + \mu {\rm Tr}_2 (\phi^2) \,, \label{model1}\\ \text{model }\times: && S^{{\rm{int\,}}}_\times[\bar\phi,\phi] = \frac{ \lambda}{2}\,{\rm Tr}_{4}(\phi^4) +\frac{\eta_{\times}}{2}\, {\rm Tr}_{4}([p^{2a}p'^{2a}]\,\phi^4) + CT_{2}[\bar\phi,\phi] +\sum_{\xi=a,2a,b}CT_{2;\xi}[\bar\phi,\phi] \,, \cr\cr && S^{{\rm{kin\,}}}_\times[\bar\phi,\phi] = \sum_{\xi=a,2a,b} {\rm Tr}_2 (p^{2 \xi} \phi^2) + \mu {\rm Tr}_2 (\phi^2) \label{model2} \end{eqnarray} where $\lambda$, $\eta_{+}$ and $\eta_{\times}$ are coupling constants. It is an interesting question to list the classical symmetries of the models $+$ and $\times$ given by the generalized Noether theorem for such non-local theories \cite{Kegeles:2016wfg,Kegeles:2015oua}. To apply the Lie symmetry algorithm as worked out in these references could be an interesting exercise for derivative coupling theories and could bear important consequences for the Ward identities. The present theory space is clearly much more involved than the usual unitary invariant theory space where the vertices of the model do not have any momentum weight. It will result from our analysis that a new combinatorics provides our models with a genuinely different renormalization procedure. Then, the comparison could be made with the models in Table 8 in \cite{Geloun:2013saa} which are unitary invariant models. 
We seize this opportunity to correct that table: the just-renormalizable $\phi^6$-models should be UV asymptotically safe (rather than free) in light of many recent results \cite{Carrozza:2014rba,Carrozza:2014rya,Geloun:2016xep,Eichhorn:2017xhy, Carrozza:2016tih,Carrozza:2017vkz}. In \cite{Geloun:2015lta}, a power counting theorem was proved for the model $+$ restricted to ranks $d=3$ and $d=4$ with $D=1$. Nevertheless, the optimization procedure leading to the power counting was quite involved. There were indications of potentially super-renormalizable enhanced models, but the proof of such a renormalizability was not completed. In this work, we improve that analysis by noting that the relevant interaction is rather ${\rm Tr}_{4}(p^{2a}\,\phi^4)$ \eqref{eq:3dinter}. Before reaching this point, our next task is to express generic amplitudes in the enhanced models. \section{Amplitudes} \label{sect:perturb} Models $+$ and $\times$, associated with the actions \eqref{model1} and \eqref{model2} respectively, define quantum models determined by the partition function \begin{equation} Z_\bullet = \int d\nu_{C_\bullet}(\bar\phi,\phi) \; e^{-S^{{\rm{int\,}}}_{\bullet}[\bar\phi,\phi]}\,, \end{equation} where $\bullet =+,\times$, and $d\nu_{C_\bullet}(\bar\phi,\phi)$ is a Gaussian field measure with covariance $C_\bullet$ given by the inverse of the kinetic term: \begin{equation} C_\bullet({\bf P};{\bf P'}) =\tilde{C}_\bullet({\bf P})\, {\boldsymbol{\delta}} _{{\bf P},{\bf P}'}\,,\qquad \tilde{C}_\bullet({\bf P})\,= \frac{1}{\sum_\xi {\bf P}^{2 \xi}+ \mu }\,, \end{equation} where $\xi=a,b$ if $\bullet =+$, and $\xi=a,2a,b$ if $\bullet =\times$. Turning to the interactions, we have the vertex kernels ${\bf V}_{4;s}$ and ${\bf V}_{+;4;s}$ associated with \eqref{model1}, and ${\bf V}_{4;s}$ and ${\bf V}_{\times;4;s}$ associated with \eqref{model2}.
These kernels are given by \begin{eqnarray}\label{vertexkernel} && {\bf V}_{4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''') = \frac\lambda2\delta_{4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''')\,,\cr\cr && {\bf V}_{+;4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''') = \frac{\eta_+}{2} |p_s|^{2a} \, \delta_{4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''')\,,\cr\cr && {\bf V}_{\times;4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''') = \frac{\eta_\times}{2} |p_s|^{2a}|{p'}_s|^{2a} \, \delta_{4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''')\,, \end{eqnarray} $s=1,2,\dots, d$, where the operator $\delta_{4;s}(-)$ is a product of Kronecker deltas identifying the different momenta according to the pattern dictated by the interaction ${\rm Tr}_{4;s}(\phi^4)$. Note that ${\bf V}_{\bullet; 4;s}$ carries a color index. The vertex operator ${\bf V}_{2}$ associated with the mass counter-term is a delta function $ {\boldsymbol{\delta}} _{ { \bf P}; {\bf P'} } $; the vertex operators ${\bf V}_{2;\xi;s}$, $\xi=a,2a,b$, associated with the counter-terms $CT_{2;\xi}[\bar\phi,\phi]$, are delta functions weighted by the momenta $|p_s|^{2\xi}$. \ \noindent {\bf Feynman tensor graphs.} There are two equivalent graphical representations of Feynman graphs in tensor models. The first, the ``stranded graph,'' incorporates more details of the structure of the Feynman graph (it is used and explained in \cite{color} and \cite{Geloun:2013saa}). The other representation of a Feynman graph in this theory is a bipartite colored graph \cite{color,Gurau:2013cbh,Bonzom:2012hw,Gurau:2011xp}. We mostly use the latter because it is convenient and economical. The stranded representation will be used in this section to make explicit the notion of faces associated with momentum loops. At the graphical level the propagator is drawn as a collection of $d$ segments called strands (see Figure \ref{fig:propacol}).
\begin{figure}[H] \centering \begin{minipage}[t]{0.7\textwidth} \begin{tikzpicture} \node (r1) at (-0.3,0.0) {$p_{1}$}; \node (r2) at (-0.3,0.3) {$p_{2}$}; \node (rdots) at (-0.3,0.8) {$\vdots$}; \node (r3) at (-0.3,1) {$p_{d}$}; \draw (0,0.3) -- (4,0.3) ; \draw (0,1) -- (4,1) ; \draw (0,0) -- (4,0) ; \draw[dashed] (6,0.5) -- (10,0.5); \end{tikzpicture} \caption{\small The propagator of the theory: the stranded representation (left) made of $d$ segments representing the $d$ momenta; the colored representation (right) denoted by a dotted line. } \label{fig:propacol} \end{minipage} \end{figure} Each interaction is sketched as a stranded vertex or as a $d$-regular colored bipartite graph called a ``bubble.'' The bipartiteness of the graph comes from the representation of each field $\phi$ as a white vertex and each field $\bar\phi$ as a black one. For instance, see the bubbles corresponding to the $\phi^2$ vertices (mass and wave function vertices) and the $\phi^4$-interactions in Figure \ref{fig:4vertex}. Note that the bubbles representing the vertex kernels ${\bf V}_{+;4;s}$ and ${\bf V}_{\times;4;s}$ appear with one and two bold edges, respectively. The color of a bold edge corresponds to the color index of the enhanced momentum. The bubbles that describe the vertices are particular contractions of tensors and are called melons. \begin{figure}[H] \centering \begin{minipage}{1\textwidth} \centering \includegraphics[angle=0, width=14cm, height=6cm]{vertex_total.pdf} \caption{ {\small Rank $d$ vertices of the mass, $\phi^2$- and $\phi^4$-terms. }} \label{fig:4vertex} \end{minipage} \end{figure} Perturbation theory tells us that, via the Wick theorem, we glue vertices by propagator lines to produce a Feynman graph. Some examples of Feynman tensor graphs built by this rule are depicted in Figure \ref{fig:graphs}. We put half-lines or external legs on vertices to reflect the presence of external fields.
In the following, a Feynman tensor graph is simply called a graph and is denoted by ${\mathcal G}$. \begin{figure}[H] \centering \begin{minipage}{1\textwidth} \centering \includegraphics[angle=0, width=12cm, height=5cm]{graph2_total.pdf} \caption{ {\small Rank $d=3$ Feynman graphs. }} \label{fig:graphs} \end{minipage} \end{figure} In the stranded picture, closed cycles (homeomorphic to circles) in the graphs are called closed or internal faces, and strands homeomorphic to segments are called open or external faces. The set of closed faces is denoted by ${\mathcal F}_{{\rm{int\,}}}$ and the set of open faces by ${\mathcal F}_{{\rm{ext\,}}}$. As expected, the presence of an internal face is associated with a sum over infinitely many momentum values, which can make the amplitude divergent, hence the need for regularization and renormalization of the model. In the colored graph representation, an extra color $0$ can be attributed to all dotted propagator lines. The relevant cycles in the resulting $(d+1)$-colored graph carry two colors. The internal faces of ${\mathcal G}$, elements of ${\mathcal F}_{{\rm{int\,}}}$, are associated with bicolored cycles of colors $0s$, with $s=1,2,\dots,d$. To obtain the subset of ${\mathcal F}_{{\rm{int\,}}}$ (or of ${\mathcal F}_{{\rm{ext\,}}}$) of faces of colors $0s$ from the $(d+1)$-colored graph, we remove all edges except those of colors $0$ and $s$ and observe the remaining cycles (or open strands, respectively). In the end, for simplicity, we omit the color $0$ in the couple $0s$ and say that an (internal or external) face is of color $s$.
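The face bookkeeping just described is purely combinatorial and can be checked mechanically on small examples. The following sketch (our own illustration, not part of the paper's formalism) encodes a toy rank $d=3$ colored graph, the elementary melon (two vertices joined by four edges of colors $0,1,2,3$), and counts its internal faces as the bicolored $0s$-cycles:

```python
def faces_by_color(vertices, edges, d):
    """Count the internal faces of color s (the 0s-cycles), s = 1..d.

    In a closed (d+1)-colored graph every vertex carries exactly one edge
    of each color, so the subgraph of colors {0, s} is 2-regular and its
    connected components are exactly the 0s-cycles."""
    counts = {}
    for s in range(1, d + 1):
        parent = {v: v for v in vertices}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for (u, v, c) in edges:        # union the endpoints of 0- and s-edges
            if c in (0, s):
                parent[find(u)] = find(v)
        counts[s] = len({find(v) for v in vertices})
    return counts

# Elementary melon at rank d = 3: vertices 0 (white) and 1 (black),
# one edge of each color 0..3 between them.
melon = [(0, 1, c) for c in range(4)]
print(faces_by_color([0, 1], melon, 3))   # one face per color s = 1, 2, 3
```

For the elementary melon each pair of colors $\{0,s\}$ forms a single 2-cycle, so the graph has three internal faces, in agreement with the melonic counting.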
\ \noindent{\bf Amplitudes.} Given a connected graph ${\mathcal G}$ with vertex set ${\mathcal V}$ (with $V=|{\mathcal V}|$) and line or propagator set ${\mathcal L}$ (with $L=|{\mathcal L}|$), we formally write the amplitude of ${\mathcal G}$ \begin{equation}\label{ampli} A_{{\mathcal G}} = \sum_{{\bf P}_v} \prod_{l \in {\mathcal L} } C_{\bullet;l} ( {\bf P}_{v(l)} ; {\bf P}'_{v'(l)}) \prod_{v \in {\mathcal V}} \big( - {\bf V}_{v} (\{{\bf P}_{v}\})\big)\,. \end{equation} The above formula shows that propagators $C_l$ have a line index $l$ and momentum arguments ${\bf P}_{v(l)}$, with $v(l)$ the source or target of the line $l$. The vertex constraints ${\bf V}_{v}$ convolute the set of momenta and can be of the form ${\bf V}_{4;s}$, ${\bf V}_{\bullet; 4;s}$, ${\bf V}_{2}$, ${\bf V}_{2;\xi;s}$, $\xi=a,2a,b$. The presence of these weights makes the amplitudes quite different from those of unitary invariant theories. For instance, as opposed to the ordinary situation, the amplitudes do not directly factorize in terms of internal faces. To derive a power counting theorem we need to study graph amplitudes $A_{{\mathcal G}}$ coming from the perturbative expansion of correlators of the form \begin{eqnarray} && \langle \phi_{\bf P} \bar \phi_{\bf P'}\phi_{\bf P''} \bar\phi_{\bf P'''} \rangle \,, \label{phi4}\\ && \langle |p_{1}|^{2a}\, \phi_{\bf P} \bar\phi_{\bf P'}\phi_{\bf P''}\bar \phi_{\bf P'''} \delta_{4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''')\rangle \label{pPhi4+}\\ && \langle |p_{1}|^{2a}|p'_{1}|^{2a} \,\phi_{\bf P} \bar \phi_{\bf P'}\phi_{\bf P''} \bar\phi_{\bf P'''} \delta_{4;s}({\bf P };{\bf P}';{\bf P }'';{\bf P }''')\rangle \,. \label{pPhi4x} \end{eqnarray} In tensor graphs, consider the faces as previously introduced.
A face $f_s$ with color $s$ has an $s$-colored ${\mathbbm Z}^D$ conserved momentum $p_{f_s}$, and passes through some vertices $v_{s}$, with vertex kernel of the form ${\bf V}_{4;s}$, ${\bf V}_{\bullet;4;s}$, ${\bf V}_{2}$, or ${\bf V}_{2;\xi;s}$, $\xi=a,2a,b$. This face may also pass through some other vertices with color $s'\ne s$. More generally, a face $f$ can pass through a vertex $v$ a number of times, say $\alpha$. Denote this statement by $v^{\alpha} \in f$. Because of the coloring, $\alpha$ can only be $0,1,2$ ($v_s \in f_s $ will mean $v_s^{1}\in f_s$). We therefore define the incidence matrix between faces and vertices by \begin{equation}\label{incivf} \epsilon_{v_sf_{s'}} = \left\{ \begin{array}{cl} \alpha ,& (s=s') \wedge (v_s^{\alpha} \in f_{s}) ,\\ 0,& \text{otherwise.} \end{array} \right. \end{equation} Given two faces $f_{1;s_1}$ and $f_{2;s_2}$ and a vertex $v_s$, we introduce another multi-index object that we denote by $\epsilon_{v_sf_{1;s_1}f_{2;s_2}}$ defined as \begin{equation}\label{inctens} \epsilon_{v_sf_{1;s_1}f_{2;s_2}}= \left\{ \begin{array}{cl} 1,& (s=s_1=s_2) \wedge (v_s \in f_{1;s_1}) \wedge \; (v_s \in f_{2;s_2}),\\ 0,& \text{otherwise.} \end{array} \right. \end{equation} The case $f_{1;s_1} = f_{2;s_2}$ could also occur. A first observation is that $\epsilon_{v_sf_{1;s_1}f_{2;s_2}} = \epsilon_{v_sf_{s_1}}\epsilon_{v_sf_{s_2}}$ in the case when $v_s \in f_{1;s_1}$ and $v_s \in f_{2;s_2}$. Looking at the diagonal, {\it i.e.} $f_{1;s_1}=f_{2;s_2}$, $\epsilon_{v_sf_{1;s}f_{1;s}} =1$ if and only if $v^2_s \in f_{1;s}$. We are in position to re-express the interaction weights ${\bf V}_{\bullet;4;s}$ in \eqref{vertexkernel}. 
Fix a color $s$; the weight of a vertex kernel of $v_{s}$ of the kind ${\bf V}_{\bullet;4;s}$ can be written as \begin{eqnarray} \text{model }+: &&\frac{\eta_+}{2} \sum_{f_{s'}} \epsilon_{v_s,f_{s'}}\, |p_{f_{s'}}|^{2a} \, , \cr\cr \text{model }\times: && \frac{\eta_\times}{2} \sum_{f_{s'},f_{s''}} \epsilon_{v_s,f_{s'},f_{s''}}\, |p_{f_{s'}}|^{2a}\, |{p}_{f_{s''}}|^{2a} . \end{eqnarray} We stress, at this point, that the two models $+$ and $\times$ will be studied separately, so there is no confusion in adopting the single notation: \begin{eqnarray} \text{model}\; \bullet:\qquad \frac{\eta }{2} (\epsilon \, p )_{v_s} \,. \end{eqnarray} The weight of degree 2 vertices (in both models) which belong to ${\mathcal V}_{2;\xi;s}$ is of the form $Z_{\xi} \sum_{f_{s'}}\epsilon_{v_s,f_{s'}}|p_{f_{s'}}|^{2\xi }=Z_\xi (\epsilon \, p )_{v_s}$, where $\xi=a,2a,b$. Let us introduce: - the set ${\mathcal V}_{4;s}$ of vertices with kernel ${\bf V}_{4;s}$, ${\mathcal V}_4 = \sqcup_{s=1}^d {\mathcal V}_{4;s}$ (disjoint union notation), - the set ${\mathcal V}_{\bullet; 4;s}$ of vertices with vertex kernel ${\bf V}_{\bullet;4;s}$, $\bullet = +, \times$, ${\mathcal V}_{\bullet;4}= \sqcup_{s =1}^d{\mathcal V}_{\bullet; 4;s}$, - the set ${\mathcal V}_2$ of mass vertices with kernel ${\bf V}_{2}$, the set ${\mathcal V}_{2;\xi;s}$ of vertices with kernels ${\bf V}_{2;\xi;s}$, ${\mathcal V}_{2;s} = \cup_{\xi} {\mathcal V}_{2; \xi;s}$. We denote the cardinalities $|{\mathcal V}_{4;s}|=V_{4;s}$, $|{\mathcal V}_{4}|=V_4$, $|{\mathcal V}_{\bullet;4;s}|=V_{\bullet;4;s}$, $|{\mathcal V}_{\bullet;4}|=V_{\bullet;4}$, $\bullet=+,\times$; $|{\mathcal V}_{2;\xi;s}|=V_{2;\xi;s}$, $V_{2;\xi}= \sum_s V_{2;\xi;s}$. Then, ${\mathcal V} = \sqcup_{s =1}^d ({\mathcal V}_{4;s} \cup {\mathcal V}_{\bullet;4; s} \cup {\mathcal V}_{2;s})$, $|{\mathcal V}|=V$.
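As a toy illustration of the incidence data $\epsilon_{v_sf_{s'}}$ and $\epsilon_{v_sf_{1;s_1}f_{2;s_2}}$ introduced above (our own sketch, with hypothetical vertex and face labels), one can encode the passage multiplicities $\alpha$ and verify the factorization $\epsilon_{v_sf_{1}f_{2}} = \epsilon_{v_sf_{1}}\epsilon_{v_sf_{2}}$ for distinct faces, together with the diagonal rule:

```python
# alpha[(v, f)] = number of times face f passes through vertex v (0, 1 or 2);
# vcolor[v] and fcolor[f] give the color indices of vertices and faces.
def eps1(alpha, vcolor, fcolor, v, f):
    """Incidence epsilon_{v f}: the multiplicity, if the colors match."""
    return alpha.get((v, f), 0) if vcolor[v] == fcolor[f] else 0

def eps2(alpha, vcolor, fcolor, v, f1, f2):
    """Incidence epsilon_{v f1 f2}: both faces pass through v (same color);
    on the diagonal f1 = f2 the face must pass through v twice."""
    if not (vcolor[v] == fcolor[f1] == fcolor[f2]):
        return 0
    if f1 == f2:
        return 1 if alpha.get((v, f1), 0) == 2 else 0
    return 1 if alpha.get((v, f1), 0) >= 1 and alpha.get((v, f2), 0) >= 1 else 0
```

On a vertex traversed once by each of two faces of its color, `eps2` equals the product `eps1 * eps1`; with a color mismatch both vanish, and the diagonal entry is nonzero only when the face passes through the vertex twice.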
Using the Schwinger parametric form of the propagator kernel, \begin{equation} \tilde C_\bullet({\bf P}) = \int_0^{\infty} d\alpha \; e^{-\alpha(\sum_\xi{\bf P}^{2 \xi} + \mu )}\,, \end{equation} and integrating all deltas from propagators and vertex operators, we put the amplitude \eqref{ampli} in the form \begin{eqnarray}\label{amplf} A_{{\mathcal G}} &=& \kappa(\lambda, \eta_{\bullet}, Z_{\xi}) \sum_{p_{f_s}} \int \Big[\prod_{l\in {\mathcal L}} d\alpha_l\, e^{-\alpha_l \mu } \Big]\; \Big[\prod_{f_s\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\sum_{l\in f_s} \alpha_l) \sum_\xi |p^{{\rm{ext\,}}}_{f_s}|^{2 \xi}}\Big] \cr\cr && \times \Big[ \prod_{f_s\in {\mathcal F}_{{\rm{int\,}}}} e^{-(\sum_{l\in f_s} \alpha_l) \sum_\xi |p_{f_s}|^{2 \xi}}\Big] \Big[\prod_{s=1}^{d} \prod_{v_s \in {\mathcal V}_{\bullet;4; s} \cup {\mathcal V}_{2;s}} (\epsilon\,\tilde{p})_{v_s} \Big] \,, \end{eqnarray} where $\kappa(\lambda, \eta_{\bullet},Z_{\xi})$ includes symmetry factors and coupling constants, $p^{{\rm{ext\,}}}_{f_s}$ are external momenta, which are not summed, whereas $p_{f_s}$ are internal momenta, which are summed. In the last line, $\tilde{p}_{f_s}$ refers to an internal or an external momentum. The sums over infinitely many momentum values can render the amplitudes \eqref{amplf} divergent. In the next section, we will address the nature of these divergences through a power counting theorem. \section{Power counting theorems for $p^{2a}\phi^4$-models} \label{sect:pow} For simplicity, we will study a connected graph amplitude without ${\mathcal V}_{2;\xi;s}$ vertices. Adding these vertices at the end can be done easily.
\ \noindent{\bf Multiscale analysis.} We slice the propagator in a geometric progression with the parameter $M>1$, and then bound each slice of the propagator: \begin{eqnarray}\label{bounds} && \tilde C_\bullet ({\bf P}) = \int_0^{\infty} d\alpha\; e^{-\alpha( \sum_\xi {\bf P}^{2\xi} + \mu)} = \sum_{i=0}^\infty C_{\bullet;i} ({\bf P})\,, \cr\cr && C_{\bullet; 0}({\bf P}) = \int_{1}^{\infty} d\alpha\; e^{-\alpha(\sum_\xi {\bf P}^{2\xi} + \mu)} \leq K\,,\cr\cr && C_{\bullet; i} ({\bf P})= \int_{M^{-2(i+1)}}^{M^{-2i}} d\alpha\; e^{-\alpha( \sum_\xi {\bf P}^{2\xi} + \mu)} \leq K' M^{-2i} e^{- M^{-2 (i+1)}( \sum_\xi {\bf P}^{2 \xi} + \mu)}\cr\cr && \leq K M^{-2i} \; e^{- \delta M^{-i}(\sum_\xi \sum_{s=1}^d \sum_{l=1}^D|p_{s;l}|^{\xi} + \mu)} \leq K M^{-2i} \; e^{- \delta M^{-i}(\sum_\xi \sum_s |p_s|^{\xi} + \mu)} \,, \end{eqnarray} $|p_{s}|^\xi = \sum_{l=1}^D|p_{s;l}|^{\xi}$, for some constants $K$, $K'$ and $\delta$. The slice decomposition yields the standard interpretation that high values of $i$ select high momenta of order $\sim M^{i}$ and this refers to the UV (this coincides with small distances on the group $U(1)^D$). Meanwhile, low momenta are picked around the slice $i=0$, and correspond to the IR. Note that, since we are dealing with a compact group, the latter limit is harmless. Introduce a cut-off $\Lambda$ on the slices $i$, and then cut off the propagators as $C_\bullet ^{\Lambda}=\sum_{i=0}^{\Lambda} C_{\bullet; i}$. We will not display $\Lambda$ in the following expressions.
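The slice bound can be sanity-checked numerically. The sketch below (our own illustration; $K=1$ and $\delta=M^{-2}$ are one admissible choice of constants, and $X$ stands for $\sum_\xi {\bf P}^{2\xi}+\mu$) integrates the $i$-th slice by the midpoint rule and compares it with $K M^{-2i} e^{-\delta M^{-2i} X}$:

```python
import math

def C_slice(X, i, M=2.0, n=20000):
    """Midpoint-rule value of the i-th propagator slice
    int_{M^{-2(i+1)}}^{M^{-2i}} exp(-alpha X) dalpha,  X = sum_xi P^{2xi} + mu."""
    lo, hi = M ** (-2 * (i + 1)), M ** (-2 * i)
    h = (hi - lo) / n
    return h * sum(math.exp(-(lo + (k + 0.5) * h) * X) for k in range(n))

def slice_bound(X, i, M=2.0):
    """Bound K M^{-2i} exp(-delta M^{-2i} X) with K = 1, delta = M^{-2}."""
    return M ** (-2 * i) * math.exp(-(M ** -2) * M ** (-2 * i) * X)
```

The bound follows from estimating the integrand by its value at the lower endpoint $M^{-2(i+1)} = M^{-2}\,M^{-2i}$ of the integration range and the integration length by $M^{-2i}$.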
Cutting off all propagators in \eqref{ampli}, the amplitude $A_{{\mathcal G}}$ becomes $\sum_{\boldsymbol{\mu}} A_{{\mathcal G};\boldsymbol{\mu}}$ where $\boldsymbol{\mu}=\{i_l\}_{l\in {\mathcal L}}$ is a multi-index called the momentum assignment, which collects the propagator indices $i_l \in [0,\Lambda]$, and \begin{equation} A_{{\mathcal G};\boldsymbol{\mu}} =\kappa(\lambda,\eta_{\bullet})\sum_{p_{v;s}} \Big[\prod_{l \in {\mathcal L} }C _{\bullet; i_l} ({\bf P}_{v(l)}; {\bf P}'_{v'(l)}) \Big] \Big[\prod_{s=1}^{d} \prod_{v_s \in {\mathcal V}_{\bullet;4; s}} (\epsilon\,\tilde{p})_{v_s} \Big] . \end{equation} Using \eqref{bounds}, the above expression takes the form \begin{eqnarray} && |A_{{\mathcal G};\boldsymbol{\mu}}|\leq \kappa(\lambda,\eta_{\bullet}) K^L K_1^V \; K_2^{F_{{\rm{ext\,}}}} \Big[ \prod_{l \in {\mathcal L}} M^{-2 i_l} \Big]\cr\cr && \times \sum_{p_{f_s}} \Big[ \prod_{f_s\in {\mathcal F}_{{\rm{int\,}}}} e^{-\delta(\sum_{l\in f_s} M^{-i_l})\sum_\xi |p_{f_s}|^{\xi}} \Big] \Big[\prod_{s=1}^{d} \prod_{v_s \in {\mathcal V}_{\bullet;4; s}} (\epsilon\,\tilde{p})_{v_s} \Big], \label{amplinit} \end{eqnarray} where $K_{1,2}$ are constants. $A_{{\mathcal G};\boldsymbol{\mu} }$ is the focus of our attention and is the quantity that must be bounded by an optimization procedure. A standard procedure detailed in \cite{Rivasseau:1991ub} will allow one to sum over the assignments $\boldsymbol{\mu}$ after renormalization. The next definition can be found in \cite{Rivasseau:1991ub}. It paves the way to the notion of locality of the theory through the definition of quasi-local subgraphs. Let ${\mathcal G}$ be a graph with line set ${\mathcal L}$. Fix a slice index $i$ and define ${\mathcal G}^i$ to be the subgraph of ${\mathcal G}$ built from the propagator lines $\ell$ with $i_\ell \geq i$.
It might happen that ${\mathcal G}^i$ disconnects into several components; we denote these connected components $G^i_{k}$ and call them quasi-local subgraphs. It is important to give a characterization of the quasi-local subgraphs. Let $g$ be a subgraph of ${\mathcal G}$ with internal line set ${\mathcal L}(g)$ and external line set ${\mathcal L}_{{\rm{ext\,}}}(g)$. Consider a momentum assignment $\boldsymbol{\mu}$ of ${\mathcal G}$, and define $i_{g}(\boldsymbol{\mu})=\inf_{\ell \in {\mathcal L}(g)} i_\ell$ and $e_{g}(\boldsymbol{\mu})=\sup_{\ell \in {\mathcal L}_{{\rm{ext\,}}}(g)}i_\ell$. We can identify $g$ with a quasi-local subgraph of ${\mathcal G}$ if and only if $i_{g}(\boldsymbol{\mu})>e_{g}(\boldsymbol{\mu})$. The set $\{G^i_k\}$ of quasi-local subgraphs of ${\mathcal G}$ is partially ordered under inclusion. The inclusion can be put in the form of an abstract tree (with vertices the $G^i_k$'s) called the Gallavotti-Nicol\`o (GN) tree \cite{Galla}. We perform the internal momentum sums in \eqref{amplinit} in an optimal way, and show that the result can be expressed in terms of the quasi-local subgraphs. This condition, called the compatibility condition with the GN tree, turns out to be crucial when performing the sum over the momentum assignments. All external momenta must be at a lower scale than internal momenta; thus for any external face $f_{s}$ and internal face $f_{s'}$, $p^{{\rm{ext\,}}}_{f_s} \ll p_{f_{s'}}$.
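The decomposition into quasi-local subgraphs is algorithmic: for each slice $i$, keep the lines with $i_l \ge i$ and take connected components. The following toy sketch (our own illustration) does exactly that, and also checks the bookkeeping identity $\sum_l i_l = \sum_{(i,k)} L(G^i_k)$ that underlies the redistribution of line factors over the GN tree:

```python
from collections import defaultdict

def quasi_local(lines):
    """lines: list of (u, v, i_l).  For each slice i >= 1, G^i keeps the
    lines with i_l >= i; its connected components G^i_k are the
    quasi-local subgraphs.  Returns {i: list of components (line lists)}."""
    out = {}
    for i in range(1, max(j for (_, _, j) in lines) + 1):
        keep = [(u, v, j) for (u, v, j) in lines if j >= i]
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for (u, v, _) in keep:
            parent[find(u)] = find(v)
        comps = defaultdict(list)
        for line in keep:
            comps[find(line[0])].append(line)
        out[i] = list(comps.values())
    return out

# Toy assignment: three lines with indices i_l = 3, 1, 2.
g = quasi_local([("a", "b", 3), ("b", "c", 1), ("c", "d", 2)])
# At slice 2 the graph disconnects into two quasi-local components.
print([len(g[i]) for i in sorted(g)])   # [1, 2, 1]
```

Each line $l$ belongs to exactly $i_l$ of the subgraphs $G^i_k$ (one for each $i=1,\dots,i_l$), which is the content of the identity above.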
We bound all factors or terms with $p^{{\rm{ext\,}}}_{f_s}$ and obtain: \begin{equation} |A_{{\mathcal G};\boldsymbol{\mu}}|\leq K_3 \Big[\prod_{l \in {\mathcal L}} M^{-2 i_l}\Big] \, \sum_{p_{f_s}} \big[\prod_{f_s\in {\mathcal F}_{{\rm{int\,}}}} e^{-\delta(\sum_{l\in f_s} M^{-i_l})\sum_\xi |p_{f_s}|^\xi} \big]\big[\prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{\bullet;4; s}} (\epsilon \,p^{\,2a})_{v_s}\big], \label{ineq} \end{equation} where $K_3=\kappa(\lambda,\eta_{\bullet}) K^L K_1^V \; K_2^{F_{{\rm{ext\,}}}} K'$, and $K'$ is a constant obtained from the bound over the external momenta present in the vertex kernel $\prod_{s=1}^{d} \prod_{v_s \in {\mathcal V}_{\bullet;4; s}} (\epsilon\,\tilde{p})_{v_s} $; note that $\epsilon $ in \eqref{amplinit} is now restricted to internal faces in \eqref{ineq}. The sum over internal momenta $p_{f_s}$ must be performed so as to get the lowest possible divergence in \eqref{ineq}. This is an optimization procedure that we detail now. We determine the behavior of some momentum sums. The following results are detailed in appendix \ref{app:sums}. For constants $B>0$, $c>0$, $b>0$, $a>0$ and $a'>0$, and an integer $n\ge 0$, we have \begin{eqnarray} && \sum_{p_1, \dots, p_{D}=1}^{\infty} (\sum_{l=1}^D p_l^{c})^{n} e^{-B \sum_{l=1}^D(p_l^b+p_l^a)} = k B^{-\frac{(cn+D)}{b}} e^{-B^{1 - \frac{a}{b}}} (1+ O(B^{\frac{1}{b}})) \,, \label{sumsABC2} \\ && \sum_{p_1, \dots, p_{D}=1}^{\infty} (\sum_{l=1}^D p_l^{c})^{n} e^{-B (\sum_{l=1}^D(p_l^b+p_l^a+p_l^{a'}))} = k B^{-\frac{(cn+D)}{b}} e^{-B^{2 - \frac{(a+a')}{b}}} (1+ O(B^{\frac{1}{b}})) \,, \label{sumsABDC3} \end{eqnarray} for a constant $k$.
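The leading behavior of these sums can be probed numerically. The sketch below (our own check, with illustrative parameter values $b=2$, $a=1$, $c=1$, $n=1$, $D=1$) truncates $\sum_{p\ge 1} p^{cn} e^{-B(p^b+p^a)}$ and verifies the scaling $\sim B^{-(cn+D)/b}$ of \eqref{sumsABC2} by halving $B$:

```python
import math

def S(B, b=2.0, a=1.0, c=1.0, n=1, pmax=20000):
    """Truncation of sum_{p>=1} p^(c n) exp(-B (p^b + p^a))."""
    return sum(p ** (c * n) * math.exp(-B * (p ** b + p ** a))
               for p in range(1, pmax + 1))

# Halving B should multiply S by about 2^((c n + D)/b) = 2 for these values.
ratio = S(5e-4) / S(1e-3)
print(ratio)   # close to 2, up to the subleading corrections
```

For $B$ small enough that the corrections are negligible but large enough that the truncation at `pmax` captures the whole sum, the measured ratio agrees with the predicted exponent within a few percent.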
At this point, we make two assumptions on the parameters $a,a',b$: - for the model $+$, $a \leq b$, \begin{equation} \sum_{p_1, \dots, p_{D}=1}^{\infty} (\sum_{l=1}^D p_l^{c})^{n} e^{-B \sum_{l=1}^D(p_l^b+p_l^a)} = k B^{-\frac{(cn+D)}{b}} (1+ O(B^{\frac{1}{b}}) + O(B^{1-\frac{a}{b}}) ) \,; \label{sumsfinABC} \end{equation} - for the model $\times$, $a +a'\leq 2b$, \begin{equation} \sum_{p_1, \dots, p_{D}=1}^{\infty} (\sum_{l=1}^D p_l^{c})^{n} e^{-B (\sum_{l=1}^D(p_l^b+p_l^a+p_l^{a'}))} = k B^{-\frac{(cn+D)}{b}} (1+ O(B^{\frac{1}{b}}) + O(B^{2 - \frac{(a+a')}{b}})) \,. \label{sumsfinABDC} \end{equation} Finally, the integration of internal momenta can be performed in the amplitudes. \ \noindent{\bf Model + -} Given a face $f$ (the subscript $s$ is not needed at this stage), we target the line $l_f$ such that $i_{l_{f}}=\min_{l\in f} i_l=i_{f}$. After the integration, it will generate the lowest factor $M^{i_{f} \times m}$, where $m$ is yet to be determined. In the product $\prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+; 4;s}} (\epsilon \,p^{\,2a})_{v_s}$, we choose the factor of a given $p_{f}$ and perform the sum $\sum_{p_f}(|p_{f}|^{2a})^{\rho_{f}} e^{-\delta M^{-i_f} |p_{f}|^b}$, with $\rho_f$ an integer, such that the bound \eqref{ineq} still holds. Performing this sum using \eqref{sumsfinABC}, we get a factor $M^{\frac{i_f}{b}(2a \rho_{f} +D)}$ with the lowest possible power. Take a closed face $f_s$ of color $s$; the integer $\rho_{f_s}$ counts how many times $f_s$ passes through vertices of ${\mathcal V}_{+;4;s}$. We have \begin{equation}\label{rho} \rho_{f_{s}} = \sum_{v_s \in {\mathcal V}_{+; 4;s}} \epsilon_{v_s,f_{s}}\,, \qquad \rho_{+}({\mathcal G}) = \sum_s\sum_{f_{s}} \rho_{f_{s}}\,.
\end{equation} We then write a new bound \begin{equation} |A_{{\mathcal G};\boldsymbol{\mu}}|\leq \kappa_1 \Big[\prod_{l \in {\mathcal L}} M^{-2 i_l} \Big]\, \sum_{p_{f_s}} \Big[ \prod_{f_s\in {\mathcal F}_{{\rm{int\,}}}} e^{-\delta M^{-i_{f_s}} \sum_\xi |p_{f_s}|^\xi } \Big] \Big[ \prod_{s'=1}^{d}\prod_{f_{s'} } |p_{f_{s'}}|^{\,2a \rho_{f_{s'}}}\Big]\,, \end{equation} where $\kappa_1$ is a new constant incorporating the previous constant $K_3$. Performing the sum over internal momenta, one gets, using \eqref{sumsfinABC} with $a\leq b$, \begin{equation}\label{ad} |A_{{\mathcal G};\boldsymbol{\mu}}|\leq \kappa_2 \prod_{l \in {\mathcal L}} M^{-2 \, i_l} \, \prod_{f_s\in {\mathcal F}_{{\rm{int\,}}}} M^{\frac{i_{f_s}}{b}(2a\rho_{f_{s}}+D)} \,, \end{equation} where $\kappa_2$ is another constant depending on the graph that includes $\kappa_1$ and new constants coming from the summation over internal momenta. We re-express the above bound in terms of the quasi-local subgraphs $G^{i}_k$. The product over lines can be written \cite{Rivasseau:1991ub} as \begin{equation}\label{lines} \prod_{l \in {\mathcal L}} M^{-2 \, i_l} = \prod_{l \in {\mathcal L}}\prod_{(i,k)/\, l\in {\mathcal L}(G^i_k)} M^{-2} = \prod_{(i,k)} M^{-2 L(G^i_k)}\,. \end{equation} The second product, over faces, splits into two factors. The first can be treated as: \begin{equation} \prod_{f_s\in {\mathcal F}_{{\rm{int\,}}}} M^{\frac{D}{b} i_{f_s}} = \prod_{ f_s\in {\mathcal F}_{{\rm{int\,}}} } \prod_{(i,k)/\, l_{f_s} \in {\mathcal L}(G^{i}_k)}M^{\frac{D}{b}} = \prod_{(i,k)} \prod_{ f_s\in {\mathcal F}_{{\rm{int\,}}} \cap G^{i}_k} M^{\frac{D}{b}} = \prod_{(i,k)} M^{\frac{D}{b} \, F_{{\rm{int\,}}}(G^i_k)} \label{faces} \,.
\end{equation} The last product involving $\rho_{f_s}$ can be treated as \begin{eqnarray} \prod_{ f_s\in {\mathcal F}_{{\rm{int\,}}} } \prod_{ (i,k)/\, l_{f_s}\in G^{i}_{k} } M^{ \frac{2a}{b} i_{f_s} \rho_{f_s} } = \prod_{(i,k)} \prod_{ f_s\in {\mathcal F}_{{\rm{int\,}}}\cap G^i_k } M^{\frac{2a}{b} \rho_{f_s} } = \prod_{(i,k)} M^{\frac{2a}{b} \rho_{+}(G^i_k)}\,, \label{enhan} \end{eqnarray} where $\rho_{+}(\cdot)$ has been defined in \eqref{rho}. Now, if we introduce the counter-term and the wave function vertices $V_{2;a;s}$ and $V_{2;b;s}$, they might bring an additional momentum enhancement to faces. We want to keep the definition of $\rho_{f_s}$ as in \eqref{rho} and we must now add to it the contributions of the $2$-point vertices of any types. Hence $\rho_{f_s} \to \rho_{f_s} + \rho_{2;a;f_s}+ \rho_{2;b;f_s}$, where $\rho_{2;\xi;f_s}=\sum_{v_s \in {\mathcal V}_{2;\xi;s} }\epsilon_{v_s,f_s}$ is the number of times that $f_s$ visits ${\mathcal V}_{2;\xi;s}$ vertices, $\xi=a,b$. To the above power counting, we should therefore add the following factor \begin{eqnarray} \prod_{ f_s\in {\mathcal F}_{{\rm{int\,}}} } \prod_{ (i,k)/\, l_{f_s}\in G^{i}_{k} }M^{ i_{f_s} [\frac{2a}{b}\rho_{2;a;f_s} +\frac{2b}{b}\rho_{2;b;f_s}] } = \prod_{(i,k)} \prod_{ f_s\in {\mathcal F}_{{\rm{int\,}}}\cap G^i_k } M^{[\frac{2a}{b} \rho_{2;a;f_s} + 2 \rho_{2;b;f_s}] } \,. \end{eqnarray} Note that a vertex of ${\mathcal V}_{2;\xi;s}$ has a single strand with enhanced momentum $p_s^{2\xi}$, $\xi=a,b$. When a face uses that strand, the corresponding vertex contributes exactly once to the power counting. Then, \begin{equation}\label{rho2xi} \rho_{2;\xi} = \sum_{f_s \in F_{{\rm{int\,}}}(G^i_k)}\rho_{2;\xi;f_s} \end{equation} counts the number of vertices of ${\mathcal V}_{2;\xi;s}$ in $G^i_k$. 
In the end, we have \begin{equation} \prod_{(i,k)} \prod_{ f_s\in {\mathcal F}_{{\rm{int\,}}}\cap G^i_k } M^{[\frac{2a}{b} \rho_{2;a;f_s} + 2 \rho_{2;b;f_s}] } = \prod_{(i,k)} M^{ \frac{2a}{b}\rho_{2;a}(G^i_k) + 2 \rho_{2;b}(G^i_k) } \,. \end{equation} Changing $M \to M^{b}$, we obtain a power counting of the amplitude \eqref{ad} for the model $+$, under the condition $a\leq b$, as \begin{equation}\label{theo+} |A_{{\mathcal G};\boldsymbol{\mu}}| \le \kappa \prod_{(i,k) \in {\mathbbm N}^2} M^{\omega_{{\rm d};+}(G^i_k)}\,, \end{equation} where $\kappa$ is a constant and the degree of divergence of $G^i_k$ is given by \begin{equation}\label{deg+} \omega_{{\rm d};+}(G^i_k) = - 2 b L (G^i_k) + DF_{\rm int} (G^i_k)+ 2 a \rho_{+} (G^i_k) +\sum_{\xi=a,b}2\xi \rho_{2;\xi}(G^i_k) \,. \end{equation} Setting $a=0$ leads to the ordinary power counting theorem of usual tensor field theories. \ \noindent{\bf Model $\times$ -} The analysis is very similar to the above. We count how many times a face $f_s$ passes through all vertices of the type ${\mathcal V}_{\times;4;s}$ and this defines the following quantities \begin{equation} \label{varrho} \varrho_{f_s} =\sum_{v_{s'},f_{s''}}\epsilon_{v_{s'}f_{s}f_{s''}}\,, \qquad \rho_{\times}({\mathcal G})=\sum_{s}\sum_{f_{s}}\varrho_{f_{s}}\,.
\end{equation} With a similar calculation as above, using \eqref{sumsfinABDC} with $3a \leq 2b$, introducing also vertices of ${\mathcal V}_{2;\xi;s}$, $\xi=a,2a,b$, with $\rho_{2;\xi;f_s}$ the number of times that a closed face $f_s$ runs through vertices of ${\mathcal V}_{2;\xi;s}$ and $\rho_{2;\xi}$ still obeying \eqref{rho2xi}, we obtain the power counting of the model $\times$ as \begin{equation}\label{theox} |A_{{\mathcal G};\boldsymbol{\mu}}| \le \kappa \prod_{(i,k) \in {\mathbbm N}^2} M^{\omega_{{\rm d};\times}(G^i_k)}\,, \end{equation} where $\kappa$ is a constant and the degree of divergence of $G^i_k$ is given by \begin{equation}\label{degx} \omega_{{\rm d};\times}(G^i_k) = - 2 b \, L (G^i_k) + D F_{{\rm{int\,}}} (G^i_k)+ 2 a \rho_{\times} (G^i_k) + \sum_{\xi=a,2a,b}2\xi \rho_{2;\xi}(G^i_k) \,. \end{equation} From \eqref{deg+} and \eqref{degx}, for convenience, we can use the unified notation $\omega_{{\rm d};\bullet}$ with $\bullet =+,\times$, with the sum over $\xi$ appropriately chosen. \section{Analyses of the potentially renormalizable models} \label{sect:enhancedmelon} In this section, we explore the parameter spaces of potentially renormalizable models $+$ and $\times$.
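For bookkeeping purposes, both degrees of divergence can be wrapped into one routine. The sketch below (our own; the numerical inputs are illustrative arithmetic, not the invariants of a specific graph of the models) evaluates $\omega_{{\rm d};\bullet} = -2bL + DF_{{\rm{int\,}}} + 2a\rho_\bullet + \sum_\xi 2\xi\rho_{2;\xi}$:

```python
def omega_d(L, F_int, rho, rho2, a, b, D=1):
    """Degree of divergence -2 b L + D F_int + 2 a rho + sum_xi 2 xi rho_{2;xi}.
    `rho2` maps each exponent xi (given as a number: a, 2a or b) to rho_{2;xi}."""
    return (-2 * b * L + D * F_int + 2 * a * rho
            + sum(2 * xi * r for xi, r in rho2.items()))

# Setting a = 0 (and no 2-point vertices) recovers the ordinary
# tensor-field-theory counting -2 b L + D F_int.
print(omega_d(L=2, F_int=3, rho=0, rho2={}, a=0, b=1))       # -1
# An enhanced face passing once through an enhanced vertex adds 2a:
print(omega_d(L=1, F_int=2, rho=1, rho2={}, a=0.5, b=1))     # 1.0
```

The same routine covers both models: for the model $+$ the keys of `rho2` range over $\{a,b\}$, for the model $\times$ over $\{a,2a,b\}$.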
In the analyses below, we need the number of internal faces of a connected graph ${\mathcal G}$, in any rank $d \ge 3$ tensorial model, which is given in \cite{Samary:2012bw}: \begin{equation} F_{\rm int} = - {2 \over (d^-)!} ( \omega({\mathcal G}_{\rm color}) - \omega (\partial {\mathcal G})) - (C_{\partial {\mathcal G}} - 1) - {d^- \over 2} N_{\rm ext} + d^- - {d^- \over 4} (4 - 2 n) \cdot V, \label{eq:face} \end{equation} where $d^- = d-1$ with $d$ being the rank of the tensor field, ${\mathcal G}_{\rm color}$ is the colored extension of ${\mathcal G}$, ${\partial\mathcal G}$ denotes the boundary of ${\mathcal G}$ \cite{BenGeloun:2011rc}, with $C_{{\partial\mathcal G}}$ the number of connected components of ${\partial\mathcal G}$, $N_{{\rm{ext\,}}}$ is the number of external legs of the graph, $V_k$ is the number of vertices of coordination number $k$, $ V = \sum_k V_k$ is the total number of vertices in ${\mathcal G}$, and $n \cdot V = \sum_k k V_k$ is the number of half lines emanating from vertices. $\omega({\mathcal G}_{\rm color}) = \sum_{J_{{\mathcal G}_{\rm color}}} g_{{\widetilde{J}}_{\mathcal G_{\text{color}}}}$, $\omega (\partial {\mathcal G}) = \sum_{J_{\partial {\mathcal G}}} g_{J_{\partial {\mathcal G}}}$ with genus $g_{J}$, the genus of a ribbon graph $J$ called jacket \cite{Gur4}. A jacket is nothing but a particular embedding of the bipartite colored graph ${\mathcal G}$. The jackets of $\mathcal G_{\text{color}}$ are denoted $J_{\mathcal G_{\text{color}}}$ and they must be ``closed'' to define a closed surface ${\widetilde{J}}_{\mathcal G_{\text{color}}}$ on which a genus $g_{{\widetilde{J}}_{\mathcal G_{\text{color}}}}$ could be identified. The boundary graph ${\partial\mathcal G}$ itself maps to a rank $d-1$ colored tensor graph. ${\partial\mathcal G}$ therefore has jackets denoted $J_{{\partial\mathcal G}}$. The quantity $\omega(\mathcal G_{\text{color}})$ is called the degree of the colored tensor graph ${\mathcal G}_{\rm color}$. 
It replaces the genus and allows one to define a large $N$ expansion for colored tensor models \cite{Gur4}. A graph ${\mathcal G}$ is called a melon if and only if its colored extension $\mathcal G_{\text{color}}$ is a melon, that is, if $\omega(\mathcal G_{\text{color}})=0$ (all jackets ${\widetilde{J}}_{\mathcal G_{\text{color}}}$ are planar). We shall need a few properties of the quantity $\omega({\mathcal G}_{\rm color}) - \omega (\partial {\mathcal G})$ drawn from \cite{Geloun:2012fq}, which we will recall when needed. Let us recall the following terminology: a ``bridge'' in a graph is a line such that cutting that line adds another connected component to the graph. The ``cut of a bridge'' means the removal of the bridge from the graph, leaving two external legs where its extremities were incident. A graph is called one-particle reducible (1PR) if it has bridges; otherwise it is called one-particle irreducible (1PI). \begin{lemma}[$\rho_{\bullet}$ and $\rho_{2;\xi}$ for 1PR graphs]\label{lem:bridg} Let ${\mathcal G}$ be a graph with bridges (or a 1PR graph) such that cutting the bridges gives the family $\{{\mathcal G}_j\}$ of subgraphs. Then \begin{equation} \rho_{\bullet}({\mathcal G}) = \sum_{j} \rho_{\bullet}({\mathcal G}_j) \,, \qquad \rho_{2;\xi}({\mathcal G}) = \sum_{j} \rho_{2;\xi}({\mathcal G}_j)\,, \end{equation} where ${\bullet}= +$, and $\xi=a,b$, or ${\bullet}=\times$, and $\xi=a,2a,b$. \end{lemma} \proof This follows from the fact that no closed face passes through a bridge. The quantities $\rho_{\bullet}({\mathcal G})$ and $\rho_{2;\xi}({\mathcal G})$ can be computed with the block diagonal matrix $\epsilon_{vf}$ using vertices and closed faces in each connected component ${\mathcal G}_j$. \qed The following lemma is easy to prove. \begin{lemma}[Bounds on $\rho_{2;\xi}$]\label{lem:boundxi} Let ${\mathcal G}$ be a graph of the model $\bullet$. Then $\rho_{2;\xi}({\mathcal G}) \leq V_{2;\xi}$.
If ${\mathcal G}$ is 1PI then \begin{equation} \rho_{2;\xi}({\mathcal G}) = V_{2;\xi}({\mathcal G})\,. \end{equation} \end{lemma} \subsection{Models $+$} \label{sect:modelsplus} Consider the ``contraction'' operation of a degree-2 vertex $v$ (belonging to ${\mathcal V}_2$ or to ${\mathcal V}_{2;s}$) on the graph ${\mathcal G}$, which removes $v$ and replaces it by a propagator line with the same external momenta. Consider the graph $\tilde{\mathcal G}$ resulting from the contractions of all degree-2 vertices in ${\mathcal G}$. Note that if ${\mathcal G}$ is 1PR or 1PI then so is $\tilde{\mathcal G}$, and the numbers of degree-4 vertices and of external legs coincide in both graphs. We define the number $Br$ of c-bridges (chain-bridges) of ${\mathcal G}$ to be the number of bridges in $\tilde{\mathcal G}$. Note that a c-bridge of ${\mathcal G}$ may well be associated with a bridge of ${\mathcal G}$. We also introduce $V_4 + V_{+;4} = V_{(4)}$. \begin{lemma}[Bound on $\rho_{+}$]\label{lem:rhob} Let ${\mathcal G}$ be a graph with $N_{{\rm{ext\,}}} > 0$ external legs. Then, $\rho_{+}({\mathcal G}) \le V_{+;4}$. If ${\mathcal G}$ is melonic and \begin{itemize} \item $V_{(4)} =1$, then $\rho_{+}({\mathcal G})=0$; \item $V_{(4)} >1$, then $\rho_{+}({\mathcal G}) \le V_{(4)}- {N_{\rm ext} \over 2}- Br$, where $Br$ is the number of c-bridges in the graph ${\mathcal G}$. \end{itemize} \end{lemma} \proof The first statement is clear from the combinatorial procedure, which counts at most $V_{+;4}$ for $\rho_{+}({\mathcal G})$ on an arbitrary graph. Now this bound can be refined for a melonic $N_{{\rm{ext\,}}}$-point graph. If $V_{(4)}=1$, then either $N_{{\rm{ext\,}}} =4 $, and then $\rho_+({\mathcal G})=0$, or $N_{{\rm{ext\,}}}=2$, and we have a melonic tadpole or a melonic graph with one c-bridge, which gives again $\rho_+({\mathcal G})=0$. A 1PI graph ${\mathcal G}$ with 4-valent vertices can have at most 2 external legs per vertex.
Consider a melonic graph ${\mathcal G}$ and its colored extension ${\mathcal G}_{\rm color}$: each vertex in ${\mathcal G}_{\rm color}$ comes with a partner (see for instance Figure 1 in \cite{Geloun:2012fq}). Note that the two partner vertices belong to the same vertex in ${\mathcal G}$. If one vertex $v$ has a propagator $l$ and its partner $\tilde v$ has no propagator (hence has an external leg), then $l$ must be a bridge. Focusing on 1PI bipartite melons, either $v$ and $\tilde v$ both have propagators or both have external legs. The presence of $N_{{\rm{ext\,}}}$ external legs in 1PI bipartite melons implies that these external legs must be hooked to $N_{{\rm{ext\,}}}/2$ vertices. Take any vertex $v_s$ with color $s$ where an external leg is incident; then an external leg is also incident to $\tilde v_s$. None of the open faces with color $0s$, which can be enhanced, could bring any contribution to $\rho_{+}({\mathcal G})$. Repeating the argument for the $N_{{\rm{ext\,}}}/2$ vertices, we see that these vertices cannot be part of the optimization procedure computing $\rho_{+}({\mathcal G})$, and so $\rho_{+}({\mathcal G}) \le V_{(4)} - N_{{\rm{ext\,}}}/2$. Now we treat the case of a 1PR graph ${\mathcal G}$. Consider the graph $\tilde{\mathcal G}$ resulting from the contraction of all of its degree-2 vertices. Cut all bridges in $\tilde{\mathcal G}$ to obtain a family of 1PI subgraphs. On each component $\tilde{\mathcal G}_{j}$ the bound $\rho_{+}(\tilde{\mathcal G}_j) \le V_{(4)}(\tilde{\mathcal G}_j)- {N_{\rm ext}(\tilde{\mathcal G}_j) \over 2}$ holds.
Summing this relation over the 1PI subgraphs and using Lemma \ref{lem:bridg}, we get \begin{equation}\label{rhop1pr} \rho_{+}(\tilde{\mathcal G}) = \sum_{j}\rho_{+}(\tilde{\mathcal G}_j) \le \sum_{j}[V_{(4)}(\tilde{\mathcal G}_j)- {N_{\rm ext}(\tilde{\mathcal G}_j) \over 2} ] = V_{(4)} - \frac{1}{2} N_{{\rm{ext\,}}} - \sharp \text{bridges }\,, \end{equation} where we used that each bridge cut brings two additional external legs compared to $N_{{\rm{ext\,}}}$. Finally, we can use the relation $\rho_{+}({\mathcal G})=\rho_{+}(\tilde{\mathcal G})$ because degree-2 vertices are not involved in the counting of $\rho_{+}$, and $\sharp \text{bridges }=Br$. In summary, we can also use \eqref{rhop1pr} for a 1PI graph, with $Br =0$. \qed As an illustration of Lemma \ref{lem:rhob}, consider the graphs of Figure \ref{fig:rhomaxmelons0}. For the melonic graph on the left-hand side, the ${N_{{\rm{ext\,}}} \over 2} = 3 $ vertices which have external legs do not contribute to $\rho_{+} ({\mathcal G}) $. Hence, $\rho_{+} ({\mathcal G}) \le V_{(4)} - {N_{{\rm{ext\,}}} \over 2}$. On the other hand, for the non-melonic graph on the right-hand side, the $ 3$ vertices which have external legs do contribute to $\rho_{+} ({\mathcal G}) $. \begin{figure}[H] \begin{center} \begin{minipage}{.7\textwidth} \centering \includegraphics[angle=0, width=4cm, height=3.5cm]{rhomaxmelons.pdf} \includegraphics[angle=0, width=4cm, height=3.5cm]{rhomaxnonmelons.pdf} \caption{ {\small Examples of $N_{{\rm{ext\,}}} = 6$-point functions in rank $d=3$ of a melonic and a non-melonic type. }} \label{fig:rhomaxmelons0} \end{minipage} \end{center} \end{figure} For a melonic graph, Lemma \ref{lem:rhob} in fact gives two bounds. The bound $\rho_{+}({\mathcal G})\leq V_{+;4}$ is the sharper of the two if and only if \begin{equation} V_4 \ge \frac{N_{\rm{ext\,}}}{2} + Br \,.
\end{equation} \ \noindent{\bf Potentially renormalizable models.} We restrict now to primitively divergent graphs which can be considered connected and with $Br=0$, in other words to 1PI graphs. The degree of divergence of this model is, by combining \eqref{deg+} and \eqref{eq:face} and using $2 L = n \cdot V - N_{{\rm{ext\,}}}$, $a\leq b$, \begin{eqnarray} \omega_{{\rm d};+}({\mathcal G}) &=& - {2 D \over (d^-)!} ( \omega({\mathcal G}_{\rm color}) - \omega (\partial {\mathcal G})) - D (C_{\partial {\mathcal G}} - 1) -{1\over 2} \left[ ( D \, d^- - 2 b) N_{{\rm{ext\,}}} - 2 D \, d^- \right] \crcr &&+ {1 \over 2} \left[ -2 D\, d^- + (D \, d^- - 2 b) n \right] \cdot V + 2 a \rho_{+} +2a\rho_{2;a}+ 2b \rho_{2;b} \,. \end{eqnarray} From Lemma \ref{lem:rhob}, we have \begin{eqnarray} \triangle^{\rm melon}_{+} &=& \left\{ \begin{array}{lc} 0, & V_{(4)} = 1 \cr V_{(4)} - {N_{{\rm{ext\,}}} \over 2}- \rho_{+}({\mathcal G}^{\rm melon}) \ge 0, & V_{(4)}>1 \cr \end{array}\right. \crcr \triangle_{+} &=& V_{+;4} - \rho_{+}({\mathcal G}) \ge 0\,. \end{eqnarray} The case $V_{(4)}>1$ is the most important one when we study all orders of perturbation and we will focus on that. Using the Lemma \ref{lem:boundxi}, and further inserting that $\omega({\mathcal G}_{\rm color})=0$ and $\omega (\partial {\mathcal G}) = 0$ for melonic graphs, \begin{eqnarray} && \omega_{{\rm d};+}({\mathcal G}^{\rm melon}) \le - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ (D\, d^- - 2 b + 2 a)N_{\rm ext} - 2D \,d^-\right] \crcr && - 2 bV_2 - 2 (b-a) V_{2;a} + ( D \, d^- - 4 b + 2 a) V_{(4)} - 2 a \triangle^{\rm melon}_{+} \cr\cr && \le - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ (D\, d^- - 2 b + 2 a)N_{\rm ext} - 2D \,d^-\right] \crcr && - 2 bV_2 - 2 (b-a) V_{2;a} + ( D \, d^- - 4 b + 2 a) V_{(4)}\,. 
\label{eq:1omegameldelta0} \end{eqnarray} There is another bound for melonic graphs: \begin{eqnarray} && \omega_{{\rm d};+}({\mathcal G}^{\rm melon}) \le - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ ( D d^- - 2b)N_{\rm ext} - 2 D d^-\right] \crcr && - 2 b V_2 - 2 (b-a) V_{2;a} + (Dd^- -4b)V_{4} + ( D d^- - 4 b + 2a) V_{+;4} \cr\cr && \leq - D (C_{\partial {\mathcal G}} - 1) - \left[ bN_{\rm ext} - D d^-\right] \crcr && - 2 b V_2 - 2 (b-a) V_{2;a} + (Dd^- -4b)\Big(V_{4} - \frac{N_{\rm{ext\,}}}{2}\Big) + ( D d^- - 4 b + 2a) V_{+;4} \,. \label{eq:2omegamel} \end{eqnarray} Choosing either \eqref{eq:2omegamel} or \eqref{eq:1omegameldelta0} as the sharper bound leads to the same result. Meanwhile, for non-melonic graphs, using $\omega({\mathcal G}_{\rm color}) - \omega (\partial {\mathcal G})\ge {1 \over 2} (d^- - 1) d^- !$ \cite{Geloun:2012fq}, we get \begin{eqnarray} && \omega_{{\rm d};+}({\mathcal G}^{\rm non-melon}) \le - D (d^- - 1) - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ ( D d^- - 2b)N_{\rm ext} - 2 D d^-\right] \crcr && - 2 b V_2 - 2 (b-a) V_{2;a} + (Dd^- -4b)V_{4} + ( D d^- - 4 b + 2a) V_{+;4} - 2 a \triangle_{+} \cr\cr && \le - D (d^- - 1) - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ ( D d^- - 2b)N_{\rm ext} - 2 D d^-\right] \crcr && - 2 b V_2 - 2 (b-a) V_{2;a} + (Dd^- -4b)V_{4} + ( D d^- - 4 b + 2a) V_{+;4} \,. \label{eq:1omeganonmeldelta0} \end{eqnarray} For renormalizable models, we require the coefficients of the vertex numbers to be non-positive. Since $a>0$ and $b>0$, this demands \begin{equation} D \, d^- - 4 b + 2 a \le 0 \,, \qquad b\ge a \,, \end{equation} which gives for $a$, \begin{equation} a\; \le \;2 \, b - {1 \over 2} D \, d^-\,. \label{eq:validity2} \end{equation} We see that the condition $a\leq b$ coming from the sum over internal momenta has been naturally incorporated in the analysis.
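As an independent check of \eqref{eq:validity2}, the following Python sketch (an illustration external to the derivation; the helper name is ours) verifies with exact rational arithmetic that setting $a = 2b - \frac{1}{2}Dd^-$ cancels the coefficient of $V_{(4)}$ while preserving $0 < a \le b$ on a grid of admissible values of $(D, d^-, b)$:

```python
from fractions import Fraction as F

def v4_coefficient(D, dm, a, b):
    # Coefficient of V_{(4)} in the degree-of-divergence bounds of model +,
    # namely D*d^- - 4b + 2a (here dm stands for d^-).
    return D * dm - 4 * b + 2 * a

# At the boundary value a = 2b - D d^-/2 of the constraint, the coefficient
# vanishes, so the degree no longer grows with the number of quartic vertices;
# we also confirm 0 < a <= b on samples of b in (D d^-/4, D d^-/2].
for D in range(1, 5):
    for dm in range(2, 6):
        for k in range(1, 2 * D * dm + 1):
            b = F(D * dm, 4) + F(k, 8)      # D*dm/4 < b <= D*dm/2
            a = 2 * b - F(D * dm, 2)
            assert v4_coefficient(D, dm, a, b) == 0
            assert 0 < a <= b
```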
Then, to achieve just-renormalizability, we use $a = 2 b - {1 \over 2} D d^- \geq 0$ (and $a\le b$ implies that $b\leq {1 \over 2} D d^- $) given in \eqref{eq:validity2} into \eqref{eq:1omegameldelta0} and \eqref{eq:1omeganonmeldelta0}, and see if the conditions \begin{equation} \quad \omega_{{\rm d};+}({\mathcal G}^{\rm melon})|_{N_{\rm ext} \ge 6} < 0 \,, \qquad \quad \omega_{{\rm d};+}({\mathcal G}^{\rm non-melon})|_{N_{\rm ext} \ge 6} < 0 \label{eq:range} \end{equation} can be accomplished. These conditions translate into \begin{eqnarray} && \omega_{{\rm d};+}({\mathcal G}^{\rm melon}) |_{N_{\rm ext} \ge 6} \le \cr && \Big[ - D (C_{\partial {\mathcal G}} - 1) + D d^- -b N_{\rm ext} - 2b V_2 -2 (b-a)V_{2;a} \Big] \Big|_{N_{\rm ext} \ge 6} < 0 \,, \label{eq:1omegameljust} \\\cr && \omega_{{\rm d};+}({\mathcal G}^{\rm non-melon}) |_{N_{\rm ext} \ge 6} \le \cr && \Big[ D - D (C_{\partial {\mathcal G}} - 1) - ({1 \over 2} D \, d^- - b)N_{\rm ext} - 2 b V_2 - 2(b-a)V_{2;a} - 2a V_{4} \Big]\Big|_{N_{\rm ext} \ge 6} < 0 \,. \label{eq:1omeganonmeljust} \end{eqnarray} As $N_{{\rm{ext\,}}}$ increases, $\omega_{{\rm d};+}$ decreases, so $\omega_{{\rm d};+}$ is maximum at $N_{{\rm{ext\,}}} =6$ for melonic graphs; $\omega_{{\rm d};+}$ is maximum at $N_{{\rm{ext\,}}}=6$ as long as $b < { D d^- \over 2}$, for non-melonic graphs. Thus, the conditions for having convergent $N_{{\rm{ext\,}}}=6$-pt functions are: \begin{eqnarray} \omega_{{\rm d};+}({\mathcal G}^{\rm melon}) |_{N_{{\rm{ext\,}}} = 6} &\le& - D (C_{\partial {\mathcal G}} - 1) + D \,d^- -6 b - 2 b V_2 - 2(b-a)V_{2;a} \crcr &\le& D \,d^- -6 b < 0, \label{eq:1omegameljust6} \\ \omega_{{\rm d};+}({\mathcal G}^{\rm non-melon}) |_{N_{{\rm{ext\,}}}=6} &\le& D - D (C_{\partial {\mathcal G}} - 1) -3 D d^- +6 b - 2 b V_2 - 2(b-a)V_{2;a} -2aV_{4} \crcr &\le& D -3 D d^- +6 b < 0 \,. 
\label{eq:1omeganonmeljust6} \end{eqnarray} The above inequalities further reduce to \begin{equation} {D d^- \over 6} < b < { D ( 3 d^- -1 ) \over 6}\,. \label{eq:1bjust6} \end{equation} Note here that ${ D ( 3 d^- - 1) \over 6} < {D d^- \over 2}$ is always true for $D >0$, thus we have improved the bound on $b$. Under \eqref{eq:1bjust6}, the degree of divergence for $N_{{\rm{ext\,}}} \ge 6$ is maximum at $N_{{\rm{ext\,}}} = 6$ and strictly negative. Furthermore, we demand that $a>0$ and so that $b > \frac{Dd^-}{4}$. We finally get the bound \begin{equation} \boxed{ {D d^- \over 4} < b < { D ( 3 d^- -1 ) \over 6} } \,. \label{eq:1bjust6final} \end{equation} Now we use \eqref{eq:1bjust6final} to find a bound on $a = 2 b - {1 \over 2} D d^-$ \eqref{eq:validity2} as \begin{equation} {\boxed{ 0 < a < { D ( 3 d^- -2) \over 6} }} \label{eq:1ajust6final} \end{equation} Combining \eqref{eq:1ajust6final} and \eqref{eq:1bjust6final} for given $D$ and $d^-\geq 2$, we obtain the ranges of values of $a$ and $b$ in Table \ref{table:table1} which could lead to just-renormalizable models: \begin{table}[H] \setlength{\extrarowheight}{0.2cm} \centering \begin{tabular}{lcccc} \hline \hline &$d^-=2$&$d^-=3$&$d^-=4$&$d^-=5$ \\ \hline\hline $D = 1$ &\pbox{2.5cm} {$ {0} < a < {2 \over 3} $ \\ ${1 \over 2} < b <{5 \over 6}$ } & \pbox{2.5cm}{${0} < a < { 7 \over6} $ \\ ${3 \over 4} < b < { 4 \over 3}$} &\pbox{2.5cm}{${0} < a <{ 5 \over 3}$ \\ ${1} < b <{11 \over 6}$} &\pbox{2.5cm}{${0} < a <{13 \over 6} $ \\ ${5 \over 4} < b <{7 \over 3}$} \\ \hline $D = 2$ &\pbox{2.5cm}{${0} < a <{ 4 \over 3} $ \\ ${ 1} < b <{5 \over 3} $} &\pbox{2.5cm}{$0 < a < { 7 \over 3}$ \\ ${3 \over 2} < b <{8 \over 3} $} &\pbox{2.5cm}{$0 < a <{10 \over 3}$ \\ ${2} < b <{11 \over 3}$} &\pbox{2.5cm}{$0 < a < {13 \over 3}$ \\ ${ 5 \over 2} < b <{14 \over 3}$} \\ \hline $D = 3$ &\pbox{2.5cm}{${0} < a <2 $ \\ ${ 3 \over 2} < b <{5 \over 2}$} &\pbox{2.5cm}{${0} < a <{ 7 \over 2}$ \\ ${ 9 \over 4} < b < 4$} 
&\pbox{2.5cm}{${ 0} < a <5$ \\ ${3} < b <{11 \over 2}$} &\pbox{2.5cm}{$0 < a <{13 \over 2} $ \\ ${ 15 \over 4} < b <7$} \\ \hline $D = 4$ &\pbox{2.5cm}{${0} < a <{8 \over 3} $ \\ $2 < b <{10 \over 3}$} &\pbox{2.5cm}{${0} < a <{14 \over 3}$ \\ $3 < b <{16 \over 3}$} &\pbox{2.5cm}{$0 < a <{20 \over 3}$ \\ $4 < b <{22 \over 3}$} &\pbox{2.5cm}{$0 < a <{26 \over 3}$ \\ $5 < b <{28 \over 3}$} \\ \hline \hline \end{tabular} \caption{Allowed region of the values of $a$ and $b$ for potentially just-renormalizable models $+$ with $d^- \le 5$ and $D \le 4$. } \label{table:table1} \end{table} \vskip 10pt This table shows that there are uncountably many models which could be just-renormalizable. We note that the limit cases $a=0$ lead to the renormalizable invariant tensor field theories studied in \cite{BenGeloun:2012pu} ($d=3, D=1,b=\frac12$) and \cite{Geloun:2013saa} [$(d=4,D=1 ,b=\frac34); (d=5,D=1, b=1); (d=3, D=2, b=1)$]. Let us seek further conditions leading to interesting models with $a>0$. One of these conditions is to achieve logarithmic divergence for non-melonic graphs at $N_{{\rm{ext\,}}} = 4$. Achieving \begin{equation} \omega_{{\rm d};+}({\mathcal G}^{\rm non-melon})|_{N_{\rm ext} = 4} = 0 \label{eq:justrenlog4} \end{equation} entails \begin{equation} \boxed{ b = {1 \over 2} D ( d^- - {1 \over 2})\,, \qquad a= {1 \over 2} D ( d^- - 1) } \label{eq:validityb} \end{equation} which is consistent with \eqref{eq:1bjust6final}, since ${1 \over 2} D ( d^- - {1 \over 2}) < { D ( 3 d^- -1 ) \over 6}$ for $D>0$ and ${D d^- \over 4} < {1 \over 2} D ( d^- - {1 \over 2})$ for $ d^->1$. In Table \ref{table:table2}, we explicitly show the valid values of $a$ and $b$ given in \eqref{eq:validityb}.
\begin{table}[H] \setlength{\extrarowheight}{0.2cm} \centering \begin{tabular}{x{2cm}x{2cm}x{2cm}x{2cm}x{2cm}} \hline\hline &$d^-=2$&$d^-=3$&$d^-=4$&$d^-=5$ \\ \hline\hline $D=1$ &\pbox{2cm}{$a = {1 \over 2}$ \\ $b= {3 \over 4}$} & \pbox{2cm}{$a = {1}$ \\ $b= {5 \over 4}$} &\pbox{2cm}{$a = {3 \over 2}$ \\ $b= {7 \over 4}$} &\pbox{2cm}{$a = { 2}$ \\ $b= {9\over 4}$} \\ \hline $D=2$ &\pbox{2cm}{$a = {1}$ \\ $b= {3 \over 2}$} &\pbox{2cm}{$a = {2}$ \\ $b= {5 \over 2}$} &\pbox{2cm}{$a = {3}$ \\ $b= {7\over 2}$} &\pbox{2cm}{$a = {4}$ \\ $b= {9 \over 2}$} \\ \hline $D=3$ &\pbox{2cm}{$a = {3 \over 2}$ \\ $b= {9 \over 4}$} &\pbox{2cm}{$a = {3}$ \\ $b= {15 \over 4}$} &\pbox{2cm}{$a = {9 \over 2}$ \\ $b= {21 \over 4}$} &\pbox{2cm}{$a = {6}$ \\ $b= {27 \over 4}$} \\ \hline $D=4$ &\pbox{2cm}{$a = {2}$ \\ $b= {3}$} &\pbox{2cm}{$a = {4}$ \\ $b= {5}$} &\pbox{2cm}{$a = {6}$ \\ $b= {7}$} &\pbox{2cm}{$a = {8}$ \\ $b= {9}$} \\ \hline \hline \end{tabular} \caption{Values of $a$ and $b$ for potentially just-renormalizable theories with $\omega_{{\rm d};+}({\mathcal G}^{\rm non-melon})|_{N_{\rm ext} = 4} = 0$ with $d^- \le 5$ and $D \le 4$. } \label{table:table2} \end{table} Tables \ref{table:table1} and \ref{table:table2} are thus consistent with just-renormalizable models whose superficial degree of divergence does not depend on $V_4$, with logarithmically divergent graphs at $N_{{\rm{ext\,}}} =4$, and with convergent graphs at $N_{{\rm{ext\,}}} \ge 6$. Let us discuss the behavior of melonic graphs. Concentrating on $N_{{\rm{ext\,}}}=4$, we evaluate $\omega_{{\rm d};+}({\mathcal G}^{\rm melon})|_{N_{{\rm{ext\,}}} = 4}$ keeping in mind \eqref{eq:1bjust6final} and obtain \begin{equation} \omega_{{\rm d};+}({\mathcal G}^{\rm melon}) |_{N_{{\rm{ext\,}}} = 4} \le D d^- - 4 b \,, \end{equation} which, since $b > {D d^- \over 4}$, gives \begin{equation} \omega_{{\rm d};+}({\mathcal G}^{\rm melon}) |_{N_{{\rm{ext\,}}} = 4} < 0 \,. \end{equation} Therefore, we have convergent melonic graphs at $N_{{\rm{ext\,}}} = 4$.
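The entries of Table \ref{table:table2} and their compatibility with \eqref{eq:1bjust6final} and \eqref{eq:1ajust6final} can also be cross-checked mechanically. The following Python sketch does so with exact rationals (an illustration external to the derivation; the function name is ours):

```python
from fractions import Fraction as F

def log4_point(D, dm):
    # Couplings of the boxed relation above: a = D(d^- - 1)/2 and
    # b = D(d^- - 1/2)/2, the point where non-melonic 4-point graphs
    # become logarithmically divergent (dm stands for d^-).
    return F(D * (dm - 1), 2), F(D * (2 * dm - 1), 4)

for D in range(1, 5):
    for dm in range(2, 6):
        a, b = log4_point(D, dm)
        # consistency with the just-renormalizability windows
        assert F(D * dm, 4) < b < F(D * (3 * dm - 1), 6)
        assert 0 < a < F(D * (3 * dm - 2), 6)
        # melonic 4-point graphs are then convergent: D d^- - 4b = D(1 - d^-)
        assert D * dm - 4 * b == D * (1 - dm)
        assert D * (1 - dm) < 0
```

For instance, `log4_point(1, 2)` returns the $(a,b)=(\frac12,\frac34)$ entry of the table.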
Divergent non-melonic graphs at $N_{{\rm{ext\,}}} = 4$ dominate all melonic graphs. Insisting on having a derivative coupling in the direct space, that is on $(U(1)^D)^{d}$, we impose that $a$ and $b$ are integers. In that situation, an obvious solution is to take $D$ a multiple of $4$. Having covered the parameter space for finding interesting models, we will prove in section \ref{sect:renmo+} that all models for generic $(d,D)$ (including those of Table \ref{table:table2}) are in fact just-renormalizable. \subsection{Models $\times$} \label{sect:modelstimes} We work under the same definitions and conditions as in section \ref{sect:modelsplus}, where $V_{(4)}$ now denotes $V_4 + V_{\times ;4}$. \begin{lemma}[Bounds on $\rho_{\times}$]\label{lem:rhobx} Let ${\mathcal G}$ be a graph with $N_{{\rm{ext\,}}}$ external legs and $Br$ c-bridges. We have $\rho_{\times}({\mathcal G}) \le 2 V_{\times;4}$. If ${\mathcal G}$ is such that \begin{itemize} \item[(a)] $V_{(4)} =1$ and $N_{{\rm{ext\,}}}=4$, then $\rho_{\times}({\mathcal G})=0$; \item[(b)] $V_{(4)}=1$ and $N_{{\rm{ext\,}}}=2$, then $\rho_{\times}({\mathcal G})\leq 1$; \item[(c)] $V_{(4)}>1$, then $\rho_{\times}({\mathcal G}) \le 2 V_{(4)} - {N_{{\rm{ext\,}}} \over 2} - Br$. \end{itemize} If ${\mathcal G}$ is melonic and \begin{itemize} \item[(d)] $V_{(4)} =1$ and $N_{{\rm{ext\,}}}=2$, then $\rho_{\times}({\mathcal G})=0$; \item[(e)] $V_{(4)}>1$, then $\rho_{\times}({\mathcal G}) \le 2 V_{(4)}- N_{\rm ext}- 2 Br$. \end{itemize} \end{lemma} \proof The first statement should not bring any difficulties. Now let us consider a general 1PI graph. Assume $V_{(4)}=1$ and $N_{{\rm{ext\,}}}=4$; then $\rho_{\times}({\mathcal G}) =0$: there are no closed faces, hence nothing to count. Now if $V_{(4)}=1$ and $N_{{\rm{ext\,}}}=2$, two cases might happen. Either the graph is melonic, and there are no enhanced faces, {\it i.e.} $\rho_{\times}({\mathcal G})=0$, or the graph is non-melonic and the vertex might still contribute or not to $\rho_{\times}({\mathcal G})$, thus $\rho_{\times}({\mathcal G}) \le 1 $.
We have thus shown that (a), (b) and (d) hold for any 1PI graph. For a 1PR graph, we simply observe that the presence of bridges at the external legs does not affect the counting of internal faces visiting the vertices counted in $V_{(4)}$. Thus (a), (b) and (d) remain valid in this case. A 1PI graph with $V_{(4)}>1$ has at most 2 external legs per vertex. Consider a vertex having exactly 1 external leg: this vertex will contribute at most 1 to $\rho_{\times}$. If a vertex has 2 external legs, then 2 cases may occur: either the 2 legs are on the same external face, which cannot contribute to $\rho_{\times}$, or the legs are incident to partner vertices. In the latter case, there are 2 external faces of that vertex which cannot contribute to $\rho_{\times}$. Hence, the upper bound for $\rho_{\times}({\mathcal G})$ is $2V_{(4)} - N_{{\rm{ext\,}}}/2$. For a 1PR graph ${\mathcal G}$, we cut all bridges to obtain 1PI subgraphs of the graph $\tilde{\mathcal G}$. On each component $\tilde{\mathcal G}_{j}$, we use the 1PI general bound $\rho_{\times}(\tilde{\mathcal G}_j) \le 2 V_{(4)}(\tilde{\mathcal G}_j) - {1 \over 2}N_{{\rm{ext\,}}}(\tilde{\mathcal G}_j) $. Proceeding as in the proof of Lemma \ref{lem:rhob}, we can show that the sum over the components brings $\rho_{\times}({\mathcal G})=\rho_{\times}(\tilde{\mathcal G}) \le 2 V_{(4)}- \frac{N_{{\rm{ext\,}}}}{2}- Br$. For a melonic graph, the above bounds must be refined. By the same discussion as in the proof of Lemma \ref{lem:rhob}, we know that, for a 1PI melonic graph, each vertex carrying external legs must carry exactly 2 of them. (If it carried 4, the vertex would be disconnected from the rest of the graph, which is the case $V_{(4)}=1$.) These two external legs must be on partner vertices $v$ and $\tilde v$. Hence the enhanced faces on this vertex are necessarily external and cannot contribute to $\rho_{\times}({\mathcal G})$.
Repeating the argument for all vertices with external legs, we get $\rho_{\times}({\mathcal G}) \le 2 (V_{(4)} - {N_{{\rm{ext\,}}} \over 2}) = 2 V_{(4)}- N_{{\rm{ext\,}}}$. Consider a melonic 1PR graph ${\mathcal G}$. Using again the same strategy, we cut all the bridges in $\tilde{\mathcal G}$ and, applying the relation $\rho_{\times}(\tilde{\mathcal G}_j) \le 2 V_{(4)}(\tilde{\mathcal G}_j)- N_{{\rm{ext\,}}}(\tilde{\mathcal G}_j) $ to each 1PI component, we get \begin{equation} \rho_{\times}(\tilde{\mathcal G}^{\rm melon}) = \sum_{j}\rho_{\times}(\tilde{\mathcal G}_j) \le \sum_{j}[2 V_{(4)}(\tilde{\mathcal G}_j)- N_{{\rm{ext\,}}}(\tilde{\mathcal G}_j) ] = 2 V_{(4)} - N_{{\rm{ext\,}}} - 2 Br\,, \end{equation} which together with $\rho_{\times}({\mathcal G}^{\rm melon}) = \rho_{\times}(\tilde{\mathcal G}^{\rm melon}) $ is the second relation for melonic graphs for $V_{(4)}>1$. \qed The bounds of Lemma \ref{lem:rhobx} should be chosen wisely when bounding the degree of divergence of the graph. Furthermore, the generic case $V_{(4)}>1$ is again the important one, and we will concentrate on it. \ \noindent{\bf Potentially renormalizable models.} We study only primitively divergent graphs and fix $Br=0$. Combining \eqref{degx} and \eqref{eq:face} and using $2 L = n \cdot V - N_{{\rm{ext\,}}}$, we obtain the bound for the degree of divergence in this model, at $3a\leq 2b$, \begin{eqnarray} \omega_{{\rm d};\times}({\mathcal G}) &=& - {2 D \over (d^-)!} ( \omega({\mathcal G}_{\rm color}) - \omega (\partial {\mathcal G})) - D (C_{\partial {\mathcal G}} - 1) -{1\over 2} \left[ ( D \, d^- - 2 b) N_{{\rm{ext\,}}} - 2 D \, d^- \right] \crcr &&+ {1 \over 2} \left[ -2 D\, d^- + (D \, d^- - 2 b) n \right] \cdot V + 2 a \rho_{\times} +\sum_{\xi=a,2a,b}2\xi\rho_{2;\xi} \,.
\end{eqnarray} Using the Lemma \ref{lem:rhobx}, and further inserting that $\omega({\mathcal G}_{\rm color})=0$ and $\omega (\partial {\mathcal G}) = 0$ for melonic graphs, and $\omega({\mathcal G}_{\rm color}) - \omega (\partial {\mathcal G}) \ge {1 \over 2} (d^- - 1) d^- !$ for non-melonic graphs, the following bound is true: \begin{eqnarray} && \omega_{{\rm d};\times}({\mathcal G}^{\rm melon}) \le - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ (D d^- - 2b + 4 a)N_{\rm ext} - 2D d^-\right] \cr && - 2 b V_2 - 2\sum_{\xi=a,2a}(b-\xi)V_{2;\xi} + ( D d^- - 4 b + 4a) V_{(4)} - 2 a \triangle^{\rm melon}_{\times} \,, \label{eq:1omegamelx0} \\ \cr && \omega_{{\rm d};\times}({\mathcal G}^{\rm non-melon}) \le - D (d^- - 1) - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ ( D d^- - 2b + 2a )N_{\rm ext} - 2 \, D d^-\right] \cr && - 2b V_2 - 2\sum_{\xi=a,2a}(b-\xi)V_{2;\xi} + ( D d^- - 4b + 4a) V_{(4)} - 2a\triangle^{\rm non-melon}_{\times} \,, \label{eq:1omeganonmelx0} \end{eqnarray} where we define \begin{eqnarray} \triangle^{\rm melon}_{\times} &=& 2 V_{(4)} - N_{{\rm{ext\,}}} - \rho_{\times} ({\mathcal G}^{\rm melon}) \ge 0 \,, \crcr \triangle^{\rm non-melon}_{\times} &=& 2 V_{(4)} - {N_{{\rm{ext\,}}} \over 2} - \rho_{\times}({\mathcal G}^{\rm non-melon}) \ge 0 \,, \end{eqnarray} and get the inequalities from Lemma \ref{lem:rhobx}. Thus, we obtain \begin{eqnarray} && \omega_{{\rm d};\times}({\mathcal G}^{\rm melon}) \le - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ ( D d^- - 2 b + 4 a) N_{\rm ext} - 2D d^-\right] \cr && - 2 \, b \, V_2 - 2\sum_{\xi=a,2a}(b-\xi)V_{2;\xi} + ( D d^- - 4 b + 4 a ) V_{(4)} \,, \label{eq:1omegamelx} \\ \cr && \omega_{{\rm d};\times}({\mathcal G}^{\rm non-melon}) \le - D (d^- - 1) - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ ( D d^- - 2\, b + 2\, a )N_{\rm ext} - 2 D d^-\right] \cr && - 2b V_2 - 2\sum_{\xi=a,2a}(b-\xi)V_{2;\xi} + (D d^- - 4b + 4a) V_{(4)} \,. 
\label{eq:1omeganonmelx} \end{eqnarray} Seeking renormalizable models, we require \begin{equation} D \, d^- - 4 \, b + 4 \, a \leq 0 \,, \qquad 2a\leq b \,, \end{equation} where the second condition, more stringent than $3a \leq 2b$, will be kept. This gives for $a$, \begin{equation} a \leq b - {1 \over 4} Dd^- \,, \qquad a \leq \frac{b}{2} \,. \label{eq:validity2x} \end{equation} To achieve just-renormalizability, we use $a=b - {1 \over 4} Dd^-$ (which implies $b\leq \frac{Dd^-}{2}$), \eqref{eq:validity2x}, in \eqref{eq:1omegamelx} and \eqref{eq:1omeganonmelx} and require that, for a number of external legs higher than 4, we have convergence: \begin{eqnarray} && \omega_{{\rm d};\times}({\mathcal G}^{\rm melon})|_{ N_{{\rm{ext\,}}} \ge 6 } < 0 \,, \crcr && \omega_{{\rm d};\times}({\mathcal G}^{\rm non-melon})|_{ N_{{\rm{ext\,}}} \ge 6 } < 0\,. \label{eq:rangex} \end{eqnarray} From \eqref{eq:1omegamelx} and \eqref{eq:1omeganonmelx}, we have: \begin{eqnarray} && \omega_{{\rm d};\times}({\mathcal G}^{\rm melon}) |_{N_{\rm ext} \ge 6} \le\cr && \Big[ - D (C_{\partial {\mathcal G}} - 1) + D d^- -b N_{\rm ext} - 2 b V_2 -\frac12 Dd^- V_{2;a} -( Dd^- -2b)V_{2;2a} \Big] \Big|_{N_{\rm ext} \ge 6} \,, \cr\cr && \label{eq:1omegamelxjust} \\ && \omega_{{\rm d};\times}({\mathcal G}^{\rm non-melon}) |_{N_{\rm ext} \ge 6} \le \cr && \Big[ D - D (C_{\partial {\mathcal G}} - 1) -{1 \over 4} D d^- N_{{\rm{ext\,}}} - 2 b V_2 -\frac12 Dd^- V_{2;a} -( Dd^- -2b)V_{2;2a} \Big]\Big|_{ N_{{\rm{ext\,}}} \ge 6} \,.\cr\cr && \label{eq:1omeganonmelxjust} \end{eqnarray} The maximum value for $\omega_{{\rm d};\times}({\mathcal G})$ is reached at $N_{{\rm{ext\,}}}=6$, so we can always write an upper bound and further require convergence: \begin{eqnarray} && \omega_{{\rm d};\times}({\mathcal G}^{\rm melon}) |_{N_{{\rm{ext\,}}} = 6} \le D \,d^- -6 b < 0, \label{eq:1omegamelxjust6} \\ && \omega_{{\rm d};\times}({\mathcal G}^{\rm non-melon}) |_{N_{{\rm{ext\,}}}=6} \le - D ({3 \over 2} d^- - 1) < 0 
\,. \label{eq:1omeganonmelxjust6} \end{eqnarray} We note here that the condition $d^->{2\over3}$ from \eqref{eq:1omeganonmelxjust6} is trivially satisfied in our study, in which we only consider tensors of rank $d \ge 3$. Hence, for just-renormalizability, we impose \begin{equation} { D d^- \over 6} < b \leq \frac{Dd^-}{2} \,, \qquad a=b - {1 \over 4} D \, d^- \,. \label{eq:1bnonmelxjust6} \end{equation} However, \eqref{eq:1bnonmelxjust6} only entails $ a > -{D d^- \over 12}$. Restricting to $a > 0$, the bound on $b$ can be improved. For just-renormalizability ({\it i.e.,} the equality in \eqref{eq:validity2x}, and \eqref{eq:rangex} together with $a>0$), we impose \begin{equation} {\boxed { { D d^- \over 4} < b \leq \frac{Dd^-}{2} \,, \qquad a=b - {1 \over 4} D \, d^- > 0 } } \label{eq:1bnonmelxjust6final} \end{equation} whose values for given positive integer values of $D$ and $d$ are given in Table \ref{table:table45}. \begin{table}[H] \setlength{\extrarowheight}{0.2cm} \centering \begin{tabular}{lcccc} \hline\hline &$d^-=2$&$d^-=3$&$d^-=4$&$d^-=5$ \\ \hline\hline $D = 1$ &\pbox{2.5cm} {$ 0 < a \le {1 \over 2} $ \\ ${1 \over 2} <b \le 1$ } & \pbox{2.5cm}{$0 < a \le {3 \over 4} $ \\ ${3 \over 4} <b \le {3 \over 2}$} &\pbox{2.5cm}{$0 < a \le {1} $ \\ ${1} <b \le { 2}$} &\pbox{2.5cm}{$0 < a \le {5 \over 4} $ \\ ${5 \over 4}<b \le {5 \over 2} $} \\ \hline $D = 2$ &\pbox{2.5cm}{$0 < a \le{1} $ \\ ${ 1} <b \le {2}$} &\pbox{2.5cm}{$0 < a \le {3 \over 2}$ \\ ${3 \over 2}<b \le { 3} $} &\pbox{2.5cm}{$0 < a \le {2}$ \\ ${2} <b \le {4} $} &\pbox{2.5cm}{$0 < a \le {5 \over 2} $ \\ ${ 5 \over 2} <b \le{5}$} \\ \hline $D = 3$ &\pbox{2.5cm}{$0 < a \le {3 \over 2} $ \\ ${ 3 \over 2}<b \le {3} $} &\pbox{2.5cm}{$0 < a \le { 9 \over 4} $ \\ ${ 9 \over 4} <b \le {9 \over 2} $} &\pbox{2.5cm}{$0 < a \le { 3} $ \\ ${3} <b \le{6} $} &\pbox{2.5cm}{$0 < a \le { 15 \over 4} $ \\ ${ 15 \over 4}<b \le {15 \over 2} $} \\ \hline $D = 4$ &\pbox{2.5cm}{$0 < a \le 2 $ \\ $2 <b \le 4$}
&\pbox{2.5cm}{$0 < a \le 3$ \\ $3 <b \le 6$} &\pbox{2.5cm}{$0 < a \le 4$ \\ $4 <b \le 8 $} &\pbox{2.5cm}{$0 < a \le 5 $ \\ $5 <b \le 10 $} \\ \hline\hline \end{tabular} \caption{Allowed region of the values of $a$ and $b$ for potentially just-renormalizable models $\times$ with $d^- \le 5$ and $D \le 4$. } \label{table:table45} \end{table} Let us understand what is entailed by the just-renormalizability condition $D \, d^- - 4 \, b + 4 \, a =0$, at $N_{{\rm{ext\,}}}=4$. We have \begin{eqnarray}\label{nmel4x} \omega_{{\rm d};\times}({\mathcal G}^{\rm non-melon}) |_{N_{{\rm{ext\,}}}= 4} &\le& - \, D (d^- - 1) - \, D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ {D \, d^- \over 2} \cdot 4 - 2 \, D \,d^-\right] \cr && - 2 b V_2 -\frac12 Dd^- V_{2;a} -( Dd^- -2b)V_{2;2a} \crcr &\le& - D (d^- - 1) < 0 \,, \end{eqnarray} since we only consider tensors of rank $d \ge 3$. Thus, all non-melonic graphs with $N_{{\rm{ext\,}}} =4$ are found convergent. Similarly for melonic graphs, requiring just-renormalizability means $D \, d^- - 4 \, b + 4 \, a =0$, leading to \begin{eqnarray}\label{mel4x} && \omega_{{\rm d};\times}({\mathcal G}^{\rm melon}) |_{N_{{\rm{ext\,}}} =4} \le - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ (2 a + {D d^- \over 2} )4 - 2 \, D \,d^-\right] \\ && - 2 b V_2 -\frac12 Dd^- V_{2;a} -( Dd^- -2b)V_{2;2a} \le - 4 a < 0 \,. \nonumber \end{eqnarray} Therefore, all melonic graphs with $N_{{\rm{ext\,}}} =4$ are also convergent.
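These two convergence statements at $N_{{\rm{ext\,}}}=4$ can be spot-checked numerically. The sketch below (an illustration external to the proof; the helper name is ours) scans $b$ inside the window \eqref{eq:1bnonmelxjust6final} with exact rationals and confirms that both upper bounds, $-4a$ for melonic and $-D(d^- -1)$ for non-melonic graphs, are strictly negative:

```python
from fractions import Fraction as F

def degrees_at_four(D, dm, b):
    # Model x at the just-renormalizable point a = b - D d^-/4 (dm = d^-).
    # Returns the upper bounds on the degree of divergence at N_ext = 4:
    # (-4a for melonic graphs, -D(d^- - 1) for non-melonic ones).
    a = b - F(D * dm, 4)
    assert 0 < a <= b / 2   # window D d^-/4 < b <= D d^-/2  <=>  0 < a <= b/2
    return -4 * a, -D * (dm - 1)

for D in range(1, 5):
    for dm in range(2, 6):
        for k in (1, 2, 3):  # sample b inside (D d^-/4, D d^-/2]
            b = F(D * dm, 4) * (1 + F(k, 3))
            mel, nonmel = degrees_at_four(D, dm, b)
            assert mel < 0 and nonmel < 0   # all 4-point graphs converge
```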
Further, we analyze graphs with $N_{{\rm{ext\,}}} = 2$ under the same condition and find \begin{eqnarray} && \omega_{{\rm d};\times}({\mathcal G}^{\rm melon}) |_{N_{{\rm{ext\,}}} =2} \le - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ 2b \cdot 2 - 2 Dd^-\right] \\ && - 2 b V_2 -\frac12 Dd^- V_{2;a} -( Dd^- -2b)V_{2;2a} \le -2 b + D d^- < {D d^- \over 2} \cr\cr && \omega_{{\rm d};\times}({\mathcal G}^{\rm non-melon}) |_{N_{{\rm{ext\,}}} =2} \le - D (d^- - 1) - D (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \left[ {D d^- \over 2} \cdot 2 - 2 D d^-\right] \crcr && - 2 b V_2 -\frac12 Dd^- V_{2;a} -( Dd^- -2b)V_{2;2a} \le {1 \over 2} D(2 - d^- ) \leq 0 \,, \end{eqnarray} where we used \eqref{eq:1bnonmelxjust6final}, and $d^-\ge 2$. In summary, at $N_{{\rm{ext\,}}}=2$, both melonic and non-melonic graphs might be divergent. A closer look shows that non-melonic graphs can be at most logarithmically divergent at rank $d\le 3$. Furthermore, as observed above, if we increase $D$ or $d^-$, we see that melons could again be the dominant amplitudes. We conclude that, for potentially just-renormalizable models $\times$, {\it i.e.}, under \eqref{eq:validity2x} and \eqref{eq:rangex}, only graphs with $N_{{\rm{ext\,}}} = 2$ might be divergent. Let us emphasize that the model $\times$ appears as a new type of renormalizable theory. Indeed, the coupling constants $\lambda$ and $\rho_{+}$ do not get any corrections, {\it i.e.,} do not get renormalized, but degree-2 vertices do. In ordinary QFT and invariant tensor field theory, when a model acquires this property it becomes super-renormalizable, that is, there is a finite number of graphs which contribute to the flow of the mass. That is, for example, the case of the scalar $P(\phi)_2$ model and even of non-local super-renormalizable tensor field theories \cite{Geloun:2013saa,Carrozza:2012uv}.
However, in the present case, as we will see in the following, the model $\times$ at $d=3$ will have an infinite number of graphs which contribute to the mass renormalization. We attribute this property to the presence of enhanced interactions in the model. As a concrete study in section \ref{sect:renmox}, we will focus on $a = {1 \over 2}$, $b = 1$ for $D = 1$ and $d^- = 2$, as allowed by Table \ref{table:table45}. \section{Rank $d$ just-renormalizable models $+$} \label{sect:renmo+} In this section, we analyze a class of models $+$ which will be proved renormalizable for arbitrary $d$ and $D$. We provide the list of their primitively divergent graphs and expand them around their local, divergent parts. Our goal is to show that the divergent parts in this expansion recast as couplings, so that a subtraction scheme can be performed. Dealing exclusively with graphs with external legs, we have $C_{{\partial\mathcal G}}\geq 1$. Note also that the theory has bipartite graphs, so that $N_{{\rm{ext\,}}}$ is an even number. \subsection{List of divergent graphs} \label{subsect:list+} Consider an arbitrary model in the class of models $+$ at fixed $(d,D)$, with $a = D(d^--1) /2, b = D(d^- -\frac12)/ 2$.
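As a quick arithmetic check (external to the proof; the helper name is ours), one can verify with exact rationals that this choice of $(a,b)$ collapses the generic coefficients of the melonic degree bound for models $+$ to simple values, with the coefficient of $V_{(4)}$ vanishing identically:

```python
from fractions import Fraction as F

def plus_model_coeffs(D, dm):
    # Coefficients of the melonic degree bound of model + once
    # a = D(d^- - 1)/2 and b = D(d^- - 1/2)/2 are substituted (dm = d^-).
    a = F(D * (dm - 1), 2)
    b = F(D * (2 * dm - 1), 4)
    return {
        "N_ext": -F(1, 2) * (D * dm - 2 * b + 2 * a),
        "V_2": -2 * b,
        "V_2a": -2 * (b - a),
        "V_(4)": D * dm - 4 * b + 2 * a,
        "Delta": -2 * a,
    }

for D in range(1, 5):
    for dm in range(2, 6):
        c = plus_model_coeffs(D, dm)
        assert c["N_ext"] == -F(D, 2) * (dm - F(1, 2))   # -(D/2)(d^- - 1/2)
        assert c["V_2"] == -D * (dm - F(1, 2))           # -D(d^- - 1/2)
        assert c["V_2a"] == -F(D, 2)                     # -D/2
        assert c["V_(4)"] == 0                           # no growth in V_(4)
        assert c["Delta"] == -D * (dm - 1)               # -D(d^- - 1)
```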
Using \eqref{eq:1omegameldelta0} and \eqref{eq:1omeganonmeldelta0}, in the same notations and conditions introduced above, the superficial degree of divergence is given by: \begin{equation} \omega_{{\rm d};+} ({\mathcal G}^{\rm melon}) \le - D\Big[(C_{\partial {\mathcal G}} - 1) + {1 \over 2} ( (d^- - {1 \over 2}) N_{{\rm{ext\,}}} - 2d^- ) +(d^- - {1 \over 2})\, V_2 +\frac12 V_{2;a} + (d^--1) \triangle^{\rm melon}_{+}\Big] \,, \label{eq:1omegamel1} \end{equation} \begin{eqnarray} && \omega_{{\rm d};+} ({\mathcal G}^{\rm non-melon}) \le - D\Big[ (d^-- 1) + (C_{\partial {\mathcal G}} - 1) + {1 \over 2} ( {1 \over 2} N_{{\rm{ext\,}}} - 2d^- ) \cr\cr && \qquad \qquad + (d^--{1 \over 2}) V_2 + \frac12 V_{2;a} + (d^--1)V_{4} + (d^--1) \triangle_{+} \Big] \,. \label{eq:1omeganonmel1} \end{eqnarray} We have already shown that for any graph such that $N_{{\rm{ext\,}}}\ge 6$, the amplitude is convergent. At $N_{{\rm{ext\,}}}=4$, non-melonic graphs have maximal degree of divergence 0 (logarithmic divergence) and melonic graphs converge. The following cases occur \begin{itemize} \item[(i)] $N_{\rm ext} = 4$, \begin{eqnarray} && \omega_{{\rm d};+} ({\mathcal G}^{\rm melon}) \le - D(d^--1) <0 \,, \\ && \omega_{{\rm d};+} ({\mathcal G}^{\rm non-melon}) \le - D\Big[ (C_{\partial {\mathcal G}} - 1) + (d^--{1 \over 2}) V_2 + \frac12 V_{2;a} + (d^--1)V_{4} \cr\cr && + (d^--1) \triangle_{+}\ \Big] \le 0 \,. \nonumber \end{eqnarray} For non-melonic graphs, the upper bound saturates only if $C_{\partial {\mathcal G}} = 1$, $V_4=V_2=V_{2;a}=0$, and $\triangle_{+} = 0$, {\it i.e.} $\rho_{+}({\mathcal G}^{\rm non-melon})=V_{+;4}$. \item[(ii)] $N_{\rm ext} = 2$: we can combine $V_{(4)}>1$ and $V_{(4)}=1$ at $N_{{\rm{ext\,}}}=2$ from Lemma \ref{lem:rhob}. 
Thus, for $V_{(4)}\ge 1$, we can write a single bound \begin{equation} \omega_{{\rm d};+} ({\mathcal G}^{\rm melon}) \le - D\Big[(C_{\partial {\mathcal G}} - 1) - {1 \over 2} +(d^- - {1 \over 2})\, V_2 +\frac12 V_{2;a} + (d^--1) \triangle^{\rm melon}_{+}\Big] \le {D \over 2} \,. \end{equation} Whenever $C_{{\partial\mathcal G}} -1>0$, $V_2>0$, $V_{2;a}>1$, or $\triangle^{\rm melon}_{+}>0$, the graph becomes convergent. The only way to achieve a divergence with $\omega_d ({\mathcal G}^{\rm melon}) =\frac{D}{2}$ is to set all the above quantities to 0. Note that, for a graph with $N_{{\rm{ext\,}}} = 2$, $\triangle^{\rm melon}_{+}=0$ means $\rho_{+}({\mathcal G}^{\rm melon}) = V_{(4)} - 1$ from Lemma \ref{lem:rhob}. But we also have the bound $\rho_{+}({\mathcal G}^{\rm melon}) \le V_{+;4}$; therefore, writing $\rho_{+}({\mathcal G}^{\rm melon}) = V_{+;4} - p$, $p\ge 0$, implies that $V_{4} = 1-p \geq 0$. Thus $p=0$ yields $(\rho_{+}({\mathcal G}^{\rm melon}) = V_{+;4}, V_4 =1)$, while $p=1$ yields $(\rho_{+}({\mathcal G}^{\rm melon}) = V_{+;4} -1, V_4=0)$. The case $\omega_d ({\mathcal G}^{\rm melon}) =0$ might occur for $C_{{\partial\mathcal G}} -1=0$, $V_2=0$, $\triangle^{\rm melon}_{+}=0$, and $V_{2;a}=1$. Then $\triangle^{\rm melon}_{+}=0$ means that one of the two cases $(\rho_{+}({\mathcal G}^{\rm melon}) = V_{+;4}, V_4 =1)$ or $(\rho_{+}({\mathcal G}^{\rm melon}) = V_{+;4} -1, V_4=0)$ occurs. For a non-melonic graph with $V_{(4)}\ge 0$, we have \begin{eqnarray} && \omega_{{\rm d};+} ({\mathcal G}^{\rm non-melon}) \le - D\Big[ (C_{\partial {\mathcal G}} - 1) - {1 \over 2} \cr\cr && + (d^--{1 \over 2}) V_2 + \frac12 V_{2;a} + (d^--1)V_{4} + (d^--1) \triangle_{+} \Big]\le \frac{D}{2} \,, \end{eqnarray} and the only way to achieve divergence is to set $C_{{\partial\mathcal G}} = 1$, $V_2 = 0$, $V_4=0$, and $\triangle_{+}=0$. The last condition translates as $\rho_{+}({\mathcal G}^{\rm non-melon})=V_{+;4} $.
Likewise, we can have $\omega_{{\rm d};+} ({\mathcal G}^{\rm non-melon})=\frac D2 $ for $V_{2;a}=0$, or $\omega_{{\rm d};+} ({\mathcal G}^{\rm non-melon})=0 $ for $V_{2;a}=1$. \end{itemize} We have thus completed the proof of the following statement: \begin{proposition}[List of primitively divergent graphs for model $+$]\label{prop:list+} The $p^{2a}\phi^4$-model $+$ with parameters $a=D(d^--1)/2, b=D(d^--\frac12)/2$ for two integers $d>2$ and $D>0$, has primitively divergent graphs with $(\Omega({\mathcal G})=\omega(\mathcal G_{\text{color}}) - \omega({\partial\mathcal G}))$: \begin{table}[H] \centering \begin{tabular}{lcccccccccccccccc} \hline\hline ${\mathcal G}$ && $N_{{\rm{ext\,}}}$ && $V_{2}$ && $V_{2;a}$ && $V_{4}$ && $\rho_{+}$ && $C_{{\partial\mathcal G}}-1$ && $\Omega({\mathcal G})$ && $\omega_d({\mathcal G})$ \\ \hline\hline && 4 && 0 && 0 && 0 && $V_{+;4}$ && 0 && 1&& 0\\ I && 2 && 0 && 0 && 0 && $V_{+;4}$ && 0 && 1 && ${D \over 2}$ \\ II && 2 && 0 && 0 && 0 && $V_{+;4}-1$ && 0 && 0 && ${D \over 2}$ \\ III && 2 && 0 && 0 && 1 && $V_{+;4}$ && 0 && 0 && ${D \over 2}$ \\ IV && 2 && 0 && 1 && 0 && $V_{+;4}$ && 0 && 1 && $0$ \\ V && 2 && 0 && 1 && 0 && $V_{+;4}-1$ && 0 && 0 && $0$ \\ VI && 2 && 0 && 1 && 1 && $V_{+;4}$ && 0 && 0 && $0$ \\ \hline\hline \end{tabular} \caption{List of primitively divergent graphs of the $p^{2a}\phi^4$-model $+$.} \label{tab:listprim1} \end{table} \end{proposition} Some divergent 2-point graphs are illustrated in Figures \ref{fig:V1_1new} and \ref{fig:V2melon_1new} in appendix \ref{app:mod+} specializing to $d=3$ and $D=1$. They will contribute to the mass renormalization for this model. Secondly, consider the 4-point amplitudes associated with the graphs of Figure \ref {fig:V2nonmelon_1new} in appendix \ref{app:mod+}. These will contribute to the renormalization of couplings $\eta_+$ or $\lambda$ depending on the external momentum data of the correlators. 
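As a sanity check of the table (an illustrative sketch, not part of the proof; the two functions below merely evaluate the bounds \eqref{eq:1omegamel1} and \eqref{eq:1omeganonmel1}), one can verify that the listed configurations indeed give $\omega_d = {D \over 2}$ or $0$, and that the minimal 4-point configurations reproduce case (i):

```python
from fractions import Fraction as F

def omega_melon(D, dm, N, C=1, V2=0, V2a=0, tri=0):
    # upper bound on the degree of divergence of a melonic graph (model +), dm = d^-
    return -D * ((C - 1) + F(1, 2) * ((dm - F(1, 2)) * N - 2 * dm)
                 + (dm - F(1, 2)) * V2 + F(1, 2) * V2a + (dm - 1) * tri)

def omega_nonmelon(D, dm, N, C=1, V2=0, V2a=0, V4=0, tri=0):
    # upper bound for a non-melonic graph (model +)
    return -D * ((dm - 1) + (C - 1) + F(1, 2) * (F(N, 2) - 2 * dm)
                 + (dm - F(1, 2)) * V2 + F(1, 2) * V2a + (dm - 1) * V4 + (dm - 1) * tri)

for D in (1, 2, 3):
    for dm in (2, 3, 4):                                   # d^- >= 2
        assert omega_melon(D, dm, 4) == -D * (dm - 1)      # case (i): melons converge
        assert omega_nonmelon(D, dm, 4) == 0               # 4-point row: log divergence
        assert omega_melon(D, dm, 2) == F(D, 2)            # rows I-III: omega_d = D/2
        assert omega_nonmelon(D, dm, 2) == F(D, 2)
        assert omega_melon(D, dm, 2, V2a=1) == 0           # rows IV-VI: omega_d = 0
        assert omega_nonmelon(D, dm, 2, V2a=1) == 0
```

The assertions hold for all the sampled $(d^-, D)$, matching the last column of the table row by row.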
We can construct an infinite family of divergent 4-point graphs in this model. At the end of this section, the proof of the next theorem will be completed: \begin{theorem}\label{theoren+} The $p^{2a}\phi^4$ model $+$ with parameters $a=D(d^--1)/2, b=D(d^--\frac12)/2$ for arbitrary rank $d\ge 3$ and dimension $D>0$ with action defined by \eqref{eq:actiond} is just-renormalizable at all orders of perturbation theory. \end{theorem} \subsection{Renormalization} \label{subsect:rentensr} The subsequent part of the renormalization program consists in the proof that the divergent and local part of all divergent amplitudes can be recast as terms which are present in the Lagrangian of the model $+$ of section \ref{subsect:list+} with fixed parameter $a=D(d^--1)/2$ and $b=D(d^--1/2)/2$. For that purpose, we perform a Taylor expansion of the amplitudes of graphs listed in Table \ref{tab:listprim1} and show that the divergent terms in that expansion are associated with either the mass, counter-terms $CT_{2;\xi}$, $\xi=a,b$, or interaction terms plus convergent remainders. \medskip \noindent {\bf Renormalization of marginal 4-point functions.} Marginal 4-point functions are given by the first line of Table \ref{tab:listprim1}. Given a connected and bipartite boundary graph of a 4-point graph, it is simple to realize that the pattern of its external momenta should follow either the pattern of ${\bf V}_{4;s}$ or of ${\bf V}_{+;4;s}$ \eqref{vertexkernel} (see Figure \ref{fig:4vertex}). 
The locality principle of the present model tells us to consider a graph issued from the expansion of correlators of the form \eqref{phi4} or \eqref{pPhi4+}, which translate as \begin{eqnarray} && \langle \phi_{12\dots d} \,\bar\phi_{1'2\dots d}\,\phi_{1'2'\dots d'} \, \bar\phi_{12'\dots d'}\rangle \,, \label{ps0}\\\cr && \langle |p_{1}^{{\rm{ext\,}}}|^{2a}\, \phi_{12\dots d} \,\bar\phi_{1'2\dots d}\,\phi_{1'2'\dots d'} \, \bar\phi_{12'\dots d'}\rangle \,, \label{ps1} \end{eqnarray} where $|p_{1}^{{\rm{ext\,}}}|^{2a}$ involves an external momentum of color $s=1$. In the following, we will concentrate on an expansion of a graph with external data of the form of the operator ${\bf V}_{+;4;s=1}$. In other words, we will focus on $s=1$ and a graph coming from the expansion of the correlator \eqref{ps1}. However, as will become clear, our analysis is without loss of generality, since the method can be extended to ${\bf V}_{4;1}$ and then to ${\bf V}_{+;4;s}$, for any color $s$. Consider a 4-point graph with 4 external propagators attached to it, with external momenta governed by the pattern of \eqref{ps1}. This 4-point graph carries $2d$ momentum labels; these are associated with $2d$ external faces, which we denote by $$f\in {\mathcal F}_{{\rm{ext\,}}}=\{f_{[1]},f_{2}, \dots, f_{d}, f_{1'},f_{2'}, \dots,f_{d'}\},$$ where we emphasize the face which is enhanced by a square bracket (say $[1]$). Let $A_{4}(\{p^{{\rm{ext\,}}}_f\})$ be the amplitude of such a graph. Two types of scale indices have to be considered in this amplitude: the external scales $j_l$ associated with the external propagators labeled $l$, and the (internal) scale $i$ of the graph $G^i_k$. In short, a quasi-local graph $G^i_k$ implies that $j_l \ll i$.
We have from \eqref{amplf} \begin{eqnarray}\label{apl+} && A_{4} (\{p_{f_s}^{{\rm{ext\,}}}\}) = \kappa(\eta_{+}) \sum_{p_{f_s}} \int \Big[\prod_{l\in {\mathcal L}} d\alpha_l\, e^{-\alpha_l \mu} \Big] \Big[\prod_{f_s\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\sum_{l\in f_s} \alpha_l) \sum_\xi|p^{{\rm{ext\,}}}_{f_s}|^{2 \xi}}\Big] \crcr &&\times \Big[\prod_{f_s\in {\mathcal F}_{{\rm{int\,}}}} e^{-(\sum_{l\in f_s} \alpha_l)\sum_\xi |p_{f_s}|^{2\xi}} \Big] |p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a} \Big[\prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} (\epsilon\,\tilde{p}^{\,2a})_{v_s} \Big]\,, \cr\cr\cr && (\epsilon\,\tilde{p}^{\,2a})_{v_s} := \sum_{f_{s'}} \epsilon_{v_s,f_{s'}}(\tilde{p}_{f_{s'}})^{2a}\,, \end{eqnarray} where $\kappa(\eta_{+})$ includes symmetry factors and coupling constants. We recall that $p^{{\rm{ext\,}}}_{f_s}$ are external momenta, and the last line shows $\tilde{p}_{f_s}$ which refers to an internal or an external momentum. Let us concentrate on the range of the parameters $\alpha$: for an internal line $l$, that we will now denote $\ell$, $\alpha_\ell \in [M^{- {(2 b)} \, i_\ell}, M^{- {(2 b)}\, (i_\ell-1)}]$; for an external line $l$, now denoted $ {l_{\rm ext}} $, $\alpha_ {l_{\rm ext}} \in [M^{- {(2 b)} \, j_{ {l_{\rm ext}} }}, M^{-{(2 b)} \,(j_{ {l_{\rm ext}} }-1)}]$. We are interested in a regime when $j_ {l_{\rm ext}} \ll i \leq i_\ell$. A Taylor expansion over an external face amplitude gives \begin{eqnarray} e^{-(\sum_{l \in f}\alpha_l)\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} &=& e^{-(\alpha_ {l_{\rm ext}} +\alpha_{ {l_{\rm ext}} '})\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} [1- R_f] \crcr R_f &=& \big(\sum_{\ell \in f }\alpha_\ell\big)\big(\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}\big) \int_0^1 e^{-t(\sum_{\ell \in f }\alpha_\ell) \sum_\xi|p_{f}^{{\rm{ext\,}}}|^{2 \xi}} dt \,, \label{tayface} \end{eqnarray} where $\sum_{\ell \in f}\alpha_\ell$ is small ($ \alpha_{\ell} \sim \mathcal{O} ({1 \over {|p_{f_s}|}^{2\xi}})\sim M^{-(2\xi)i_\ell}$). 
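The interpolation formula \eqref{tayface} is exact: the remainder $R_f$ resums to $1-e^{-(\sum_{\ell \in f}\alpha_\ell)\sum_\xi|p_f^{{\rm{ext\,}}}|^{2\xi}}$, so that the full face factor is exactly recovered. A numerical sketch (with made-up values for the $\alpha$'s and the momentum factor; only the structure of \eqref{tayface} is taken from the text):

```python
import math

def remainder_R(x, n=20000):
    # midpoint-rule evaluation of R_f = x * int_0^1 exp(-t x) dt,
    # with x = (sum of internal alphas along the face) * (momentum factor)
    h = 1.0 / n
    return x * h * sum(math.exp(-(k + 0.5) * h * x) for k in range(n))

a_ext, a_int, P = 0.7, 0.2, 3.0   # illustrative: external alphas, internal alphas, sum_xi |p|^(2 xi)
lhs = math.exp(-(a_ext + a_int) * P)                       # full face factor
rhs = math.exp(-a_ext * P) * (1.0 - remainder_R(a_int * P))  # Taylor form [1 - R_f]
assert abs(lhs - rhs) < 1e-6
```

The smallness of $\sum_{\ell\in f}\alpha_\ell$ in the quasi-local regime is what makes $R_f$ a genuinely subleading correction.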
We insert that expansion for each external face in \eqref{apl+} and obtain: \begin{eqnarray} && A_{4}(\{p^{{\rm{ext\,}}}_f\}) = \kappa( \eta_{+}) \sum_{p_f} \int [\prod_{l\in {\mathcal L}}d\alpha_l e^{-\alpha_l \mu }] |p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a}\Big[ \prod_{f\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\alpha_{ {l_{\rm ext}} }+\alpha_{ {l_{\rm ext}} '})\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} \Big]\label{a6full}\\ && \times\Big[1- \sum_{f \in {\mathcal F}_{{\rm{ext\,}}}} R_f + \sum_{f,f' \in {\mathcal F}_{{\rm{ext\,}}}} R_f R_{f'} + \dots \Big] \Big[ \prod_{f \in {\mathcal F}_{{\rm{int\,}}}} e^{-(\sum_{\ell \in f}\alpha_\ell)\sum_\xi |p_{f}|^{2 \xi}} \Big] \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} [(\epsilon\,\tilde{p}^{\,2a})_{v_s} ]. \nonumber \end{eqnarray} The dots are higher order products in the $R_f$'s. From Table \ref{tab:listprim1}, $\rho_{+} $ \eqref{rho} must be equal to $V_{+;4}$. Hence, in each vertex kernel, we must collect and integrate one momentum of a closed face. A divergent 4-point graph satisfying the first row of Table \ref{tab:listprim1} must be such that no external momenta can be found within $\prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} [(\epsilon\,\tilde{p}^{\,2a})_{v_s} ]$. 
We write the 0th order term of the expansion in $R_f$ as: \begin{eqnarray} A_{4}(\{p^{{\rm{ext\,}}}_f\};0) &=& \kappa(\eta_{+}) \sum_{p_f} \int [\prod_{l}d\alpha_l e^{-\alpha_l \mu }] |p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a}\prod_{f\in {\mathcal F}_{{\rm{ext\,}}}}\Big[ e^{-(\alpha_{ {l_{\rm ext}} }+\alpha_{ {l_{\rm ext}} '}) \sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} \Big] \crcr &\times& \Big[ \prod_{f \in {\mathcal F}_{{\rm{int\,}}}} e^{-(\sum_{\ell \in f}\alpha_\ell)\sum_\xi |p_{f}|^{2 \xi}} \Big] \Big[ \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} (\epsilon\,\tilde{p}^{\,2a})_{v_s} \Big]\,, \cr\cr &=& \kappa(\eta_{+}) \Big[\int [\prod_{ {l_{\rm ext}} }d\alpha_{ {l_{\rm ext}} } e^{-\alpha_ {l_{\rm ext}} \mu }] |p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a} \prod_{f\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\alpha_ {l_{\rm ext}} +\alpha_{ {l_{\rm ext}} '})\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} \Big] \label{a4}\\ &\times& \Big[ \sum_{p_f} \int [\prod_{\ell}d\alpha_\ell e^{-\alpha_\ell \mu }] \prod_{f \in {\mathcal F}_{{\rm{int\,}}}}\Big[ e^{-(\sum_{\ell }\alpha_\ell) \sum_\xi |p_{f}|^{2 \xi}} \Big] \Big[ \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} (\epsilon\tilde{p}^{\,2a})_{v_s} \Big] \Big].
\nonumber \end{eqnarray} The factor involving external lines can be re-expressed as \begin{eqnarray} && \int [\prod_{ {l_{\rm ext}} }d\alpha_{ {l_{\rm ext}} } e^{-\alpha_{ {l_{\rm ext}} } \mu }] |p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a} \prod_{f\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\alpha_ {l_{\rm ext}} +\alpha_{ {l_{\rm ext}} '}) \sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} = \cr\cr && \int [\prod_{ {l_{\rm ext}} }d\alpha_{ {l_{\rm ext}} } ] |p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a} \cr\cr && \times e^{-\alpha_{ {l_{\rm ext}} _1}[\sum_\xi (|p_{f_{[1]}}^{{\rm{ext\,}}}|^{2\xi} + |p_{f_2}^{{\rm{ext\,}}}|^{2\xi}+\dots + |p_{f_d}^{{\rm{ext\,}}}|^{2\xi})+\mu]} e^{-\alpha_{ {l_{\rm ext}} _2}[\sum_\xi (|p_{f_{1'}}^{{\rm{ext\,}}}|^{2\xi} +| p_{f_2}^{{\rm{ext\,}}}|^{2\xi}+\dots + |p_{f_d}^{{\rm{ext\,}}}|^{2\xi})+\mu]}\cr\cr && \times e^{-\alpha_{ {l_{\rm ext}} _3}[\sum_\xi (|p_{f_{1'}}^{{\rm{ext\,}}} |^{2\xi}+ |p_{f_{2'}}^{{\rm{ext\,}}}|^{2\xi}+ \dots +|p_{f_{d'}}^{{\rm{ext\,}}}|^{2\xi})+\mu]} e^{-\alpha_{ {l_{\rm ext}} _4}[\sum_\xi (|p_{f_{[1]}}^{{\rm{ext\,}}} |^{2\xi}+| p_{f_{2'}}^{{\rm{ext\,}}}|^{2\xi}+\dots+| p_{f_{d'}}^{{\rm{ext\,}}}|^{2\xi})+\mu]}\,. \crcr && \label{eq:finite} \end{eqnarray} Observe that the above expression describes 4 propagators glued in a way to produce the pattern of a vertex of type ${\bf V}_{+;4;1}$. The term associated with the sum over internal momenta is log-divergent. Therefore, the amplitude $A_{4}(\{p^{{\rm{ext\,}}}_f\};0)$ will renormalize the coupling $\eta_{+}$. We now prove that the remainders appearing in \eqref{a6full} lead to convergence by improving the power counting. 
The first remainder, involving a single term $R_f$, is of the form: \begin{eqnarray} R_{4} &=&\kappa(\eta_{+}) \sum_{p_f} \int [\prod_{l}d\alpha_l e^{-\alpha_l \mu }] |p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a} \Big[ \prod_{f\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\alpha_{ {l_{\rm ext}} }+\alpha_{ {l_{\rm ext}} '})\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} \Big] \cr\cr &\times& \Big[- \sum_{f \in {\mathcal F}_{{\rm{ext\,}}}}\big(\sum_{\ell \in f}\alpha_\ell\big) \big(\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}\big) \int_0^1 e^{-t(\sum_{\ell \in f}\alpha_\ell) \sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} dt \Big] \cr\cr &\times & \Big[\prod_{f \in {\mathcal F}_{{\rm{int\,}}}} e^{-(\sum_{\ell \in f}\alpha_\ell) \sum_\xi |p_{f}|^{2 \xi}} \Big] \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} [ (\epsilon \tilde{p}^{\,2a})_{v_s} ]\,. \label{remain4} \end{eqnarray} Using $i(G^i_k)=\inf_{\ell\in G^i_k} i_\ell $ and $e(G^i_k)=\sup_{l \in G^i_k}j_l$, and recalling $\alpha_\ell \in [M^{- {(2 b)} \, i_\ell}, M^{- {(2 b)}\, (i_\ell-1)}]$ and $\alpha_{ {l_{\rm ext}} } \in [M^{- {(2 b)} \, j_ {l_{\rm ext}} }, M^{-{(2 b)} \,(j_ {l_{\rm ext}} -1)}]$, $\big(\sum_{\ell \in f}\alpha_\ell\big) |p_{f}^{{\rm{ext\,}}}|^{ 2 b} \le k_0 M^{- ({2 b}) \, (i(G^i_k)- e(G^i_k))}$, so that $R_4$ is bounded as \begin{equation} |R_{4}| \leq K \prod_{(i,k)} M^{- ({2 b}) \, (i(G^i_k)- e(G^i_k))} M^{\omega_{{\rm d};+}(G^i_k)}\,, \label{bound4} \end{equation} for some constant $K$ (which includes $\kappa(\eta_{+})$ and $k_0$, a constant depending on the graph, and a constant which bounds the integral in $t$). The factor $M^{-({2 b}) \, (i(G^i_k)- e(G^i_k))}$ improves the power counting (which is already logarithmic) and will be the source of decay showing that the sum over scale attributions is convergent in the way established in \cite{Rivasseau:1991ub}. Inspecting the higher order products in the $R_f$'s, one realizes that they are obviously more convergent and so do not need further discussion.
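Schematically, the role of the decay factor can be seen as follows: summing $M^{-2b(i-e)}$ over the scale gap $i-e\ge 0$ gives a finite geometric series, which is what makes the sum over scale attributions converge (the numerical values below are illustrative, not from the text):

```python
M, two_b = 2.0, 1.5          # any M > 1 and 2b > 0 would do
partial = sum(M ** (-two_b * gap) for gap in range(200))   # truncated sum over scale gaps
closed_form = 1.0 / (1.0 - M ** (-two_b))                  # geometric series limit
assert abs(partial - closed_form) < 1e-12
```

Without such an extra factor, a marginal (logarithmic) power counting alone would not suppress the sum over attributions.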
If we remove $|p^{{\rm{ext\,}}}_{f_{[1]}}|^{2a}$ from the amplitude \eqref{apl+}, we will be in the presence of an amplitude coming from \eqref{ps0}. The boundary data of the resulting amplitude will be of the form ${\bf V}_{4;1}$. Repeating step by step the previous analysis, we obtain at 0th order of the expansion a renormalization of the coupling $\lambda$, and all remainders lead exactly to the same convergence, with power counting given by \eqref{bound4}. It is straightforward to see that the argument extends to any color $s$. As a result of this analysis, we note that, although the renormalized coupling $\lambda$ does not receive any melonic corrections, it receives contributions from the coupling $\eta_{+}$. This is a new property of the perturbative Renormalization Group equations of this model. \medskip \noindent {\bf Renormalization of divergent 2-point functions.} We study 2-point functions that obey $\omega_{{\rm d};+}({\mathcal G})\in\{0,{D \over 2}\}$ and are characterized by the rows I through VI of Table \ref{tab:listprim1}. First, we will focus on the row I and point out the differences with II and III at particular steps of the discussion. We will then sketch the analysis for the rows IV, V and VI. A 2-point graph has a unique boundary, which is given by the invariant ${\rm Tr}_2(\phi^2)$ of Figure \ref{fig:4vertex}. Because the vertices are enhanced, it may happen that the external data of a 2-point graph is not of the form of a mass term, but rather that of $CT_{2;a}$. This will be discussed carefully below. Consider an amplitude $A_{2}(\{p^{{\rm{ext\,}}}_f\})$ associated with a 2-point graph obeying the row I of Table \ref{tab:listprim1}. This graph has $d$ external faces that we label by $$f\in {\mathcal F}_{{\rm{ext\,}}}=\{f_{1},f_{2}, \dots, f_{d}\}.
$$ Keeping the same notations as above (see the paragraph after \eqref{apl+}), $ {l_{\rm ext}} $ labels external propagators with scale index $j_{ {l_{\rm ext}} }$, and $\ell$ labels internal lines with scale index $i_\ell$. A Taylor expansion of the external face factors, in the same form as \eqref{tayface}, leads us to the following expansion of the 2-point amplitude: \begin{eqnarray} && A_{2}(\{p^{{\rm{ext\,}}}_f\}) = \kappa( \eta_{+}) \sum_{p_f} \int [\prod_{l}d\alpha_l e^{-\alpha_l \mu }]\Big[ \prod_{f\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\alpha_{ {l_{\rm ext}} }+\alpha_{ {l_{\rm ext}} '})\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} \Big] \label{rfrf2}\\ && \times \Big[1- \sum_{f \in {\mathcal F}_{{\rm{ext\,}}}} R_f + \sum_{f,f' \in {\mathcal F}_{{\rm{ext\,}}}} R_f R_{f'} + \dots \Big]\Big[ \prod_{f \in {\mathcal F}_{{\rm{int\,}}}} e^{-(\sum_{\ell \in f}\alpha_\ell) \sum_\xi |p_{f}|^{2 \xi}} \Big] \Big[ \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} (\epsilon\,\tilde{p}^{\,2a})_{v_s} \Big]\,.
\nonumber \end{eqnarray} The 0th order term in the expansion in $R_f$ reads \begin{eqnarray} && A_{2}(\{p^{{\rm{ext\,}}}_f\};0)= \kappa( \eta_{+}) \sum_{p_f} \int [\prod_{l}d\alpha_l e^{-\alpha_l \mu }]\cr\cr &&\times \Big[ \prod_{f\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\alpha_{ {l_{\rm ext}} }+\alpha_{ {l_{\rm ext}} '})\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} \Big] \prod_{f \in {\mathcal F}_{{\rm{int\,}}}}\Big[ e^{-(\sum_{\ell \in f}\alpha_\ell) \sum_\xi |p_{f}|^{2 \xi}} \Big] \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} [ (\epsilon\,\tilde{p}^{\,2a})_{v_s} ] \cr\cr &&= \kappa(\eta_{+}) \Big[\int [\prod_{ {l_{\rm ext}} }d\alpha_{ {l_{\rm ext}} } ] e^{-\alpha_{ {l_{\rm ext}} _1}[\sum_\xi (\sum_s |p_{f_{s}}^{{\rm{ext\,}}}|^{2\xi}) +\mu]} e^{-\alpha_{ {l_{\rm ext}} _2}[\sum_\xi (\sum_s |p_{f_{s}}^{{\rm{ext\,}}}|^{2\xi}) +\mu]} \Big] \crcr && \times \Big[ \sum_{p_f} \int [\prod_{\ell}d\alpha_\ell e^{-\alpha_\ell \mu }] \prod_{f \in {\mathcal F}_{{\rm{int\,}}}}\Big[ e^{-(\sum_{\ell }\alpha_\ell) \sum_\xi |p_{f}|^{2 \xi}} \Big] \Big[ \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} (\epsilon\tilde{p}^{\,2a})_{v_s} \Big] \Big]. \cr\cr && \label{massren+} \end{eqnarray} Let us discuss the vertex density. A graph fulfilling the requirements of the row I of Table \ref{tab:listprim1} must be such that $\rho_{+} = V_{+;4}$. The same argument as in the case of 4-point functions applies: there are no external momenta in the product of vertex kernels, and all momenta will be summed. The first factor involving external momenta represents two propagators glued together by a degree-2 vertex; the second factor has, by our power counting, a degree of divergence $\frac D2$. Hence the term \eqref{massren+} renormalizes the mass term. Concerning a graph satisfying the rows II and III, we know that $\rho_{+}=V_{(4)}-1$ and that the graph is melonic. The proof of Lemma \ref{lem:rhob} showed that, for 1PI melonic graphs, external legs must be hooked to partner vertices.
Hence a 2-point primitively divergent (1PI) melonic graph has at least one vertex $v_{0;s}$ which does not contribute to $\rho_{+}$. Two cases might occur: (II) the vertex $v_{0;s}$ belongs to ${\mathcal V}_{+;4;s}$; then an extra factor of $|p^{{\rm{ext\,}}}_{f_{s}}|^{2a}$ must be added to the boundary data. This gives the graph the form of a ${\mathcal V}_{2;a;s}$ term. Adding all colored symmetric contributions with respect to $s$ of this graph, the term renormalizes the coupling $Z_{a}$; (III) the vertex $v_{0;s}$ belongs to ${\mathcal V}_{4}$; then the boundary data is that of a mass term, and so \eqref{massren+} again renormalizes the mass. The remainders of \eqref{rfrf2} are now treated for the row I of our table. The first order remainder involving the sum $\sum_{f}R_f$ can be bounded as follows \begin{eqnarray} && R_{2} = - \kappa({\eta_{+}}) \Big[ \int [\prod_{ {l_{\rm ext}} }d\alpha_ {l_{\rm ext}} e^{-\alpha_ {l_{\rm ext}} \mu }] \prod_{f\in {\mathcal F}_{{\rm{ext\,}}}} e^{-(\alpha_ {l_{\rm ext}} +\alpha_{ {l_{\rm ext}} '})\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} \Big] \crcr && \times \sum_{f \in {\mathcal F}_{{\rm{ext\,}}}}\Big[ \int [\prod_{\ell }d\alpha_\ell e^{-\alpha_\ell \mu }] \big(\sum_{\ell \in f}\alpha_\ell\big) \big(\sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}\big) \int_0^1 e^{-t(\sum_{\ell \in f}\alpha_\ell) \sum_\xi |p_{f}^{{\rm{ext\,}}}|^{2 \xi}} dt \Big] \cr\cr && \times \sum_{p_f} \prod_{f \in {\mathcal F}_{{\rm{int\,}}}}\Big[ e^{-(\sum_{\ell \in f}\alpha_\ell) \sum_\xi |p_{f}|^{2 \xi}} \Big] \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{+;4;s}} [(\epsilon\,\tilde{p}^{\,2a})_{v_s} ] \cr\cr &&|R_2| \leq K \prod_{(i,k)}M^{-2b(i(G^i_k)-e(G^i_k))} M^{\omega_{{\rm d};+}(G^i_k)={D \over 2}} \,, \label{eq:rem2+} \end{eqnarray} where we used the same scheme leading to \eqref{bound4}, for some constant $K$. Since the scale gap satisfies $C := i(G^i_k)-e(G^i_k) \geq 1$, the bound $- D (d^--\frac12) C +\frac{D}{2} \le - D(d^--1) \le -D<0$ ensures the convergence of the remainder.
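The convergence statement above can be checked exhaustively over a small range of parameters (a sketch, not part of the proof; $C$ stands for the scale gap $i(G^i_k)-e(G^i_k)\ge 1$, an assumption we make explicit here):

```python
from fractions import Fraction as F

# verify -D(d^- - 1/2) C + D/2 <= -D(d^- - 1) <= -D < 0 for d^- >= 2, C >= 1
for D in range(1, 6):
    for dm in range(2, 7):            # d^- >= 2
        for C in range(1, 6):         # scale gap >= 1
            exponent = -D * (dm - F(1, 2)) * C + F(D, 2)
            assert exponent <= -D * (dm - 1) <= -D < 0
```

The exponent is most dangerous at the boundary $d^-=2$, $C=1$, $D$ arbitrary, where it equals $-D$ exactly.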
All higher order remainders can be proved to be more convergent. From this point, the summation over scale attributions can again be performed after subtractions. The case of an amplitude of the rows II and III of the table can be addressed in a similar way. Let us now discuss the rows IV, V, and VI. In these cases, $\rho_{2;a}=1=V_{2;a}$, which means that the enhanced momentum $|p_{f_s}|^{2a}$ associated with the vertex counted in $V_{2;a}=1$ is necessarily integrated. Hence the analysis of the rows IV, V, and VI is completely similar to what we have done for the rows I, II, and III, respectively. As a last remark, we stress that there are no corrections to the wave function $Z_b$, because all corresponding amplitudes at first order are already convergent. Hence we can set the coupling $Z_b$ to 0. \ In conclusion: \begin{itemize} \item the expansion of marginal 4-point functions around their local part gives log-divergent terms which renormalize the coupling constants $\eta_{+}$ or $\lambda$; \item the expansion of ${D \over 2}$-divergent or log-divergent 2-point graphs around their local parts yields a ${D \over 2}$-divergent or log-divergent term renormalizing either the mass or $Z_a$; \item all remainders are convergent and bring enough decay to ensure the final summability over scale attributions. \end{itemize} From this point, the procedure for performing this last sum over attributions is standard and secures the renormalization at all orders of perturbation theory according to the techniques developed in \cite{Rivasseau:1991ub}. Thus, Theorem \ref{theoren+} holds. \section{A rank $d=3$ renormalizable model $\times$} \label{sect:renmox} We adopt the same strategy as in the previous section to prove the renormalizability of a model $\times$. After listing its primitively divergent connected graphs, we proceed with the renormalization procedure. The same conditions, $C_{{\partial\mathcal G}}\ge 1$ and $N_{{\rm{ext\,}}}$ even, hold here.
\subsection{List of divergent graphs} \label{subsect:listx} We are interested in a rank $d=3$ model $\times$ with $a = {1 \over 2}$, $ b =1$ and $D=1$. The choice $b=1$ gives a Laplacian in the kinetic term and an integer power of momentum in the interaction $|p|\phi^4$; thus this model seems natural (we can think of it as a single derivative coupling). The interactions are of the same form as given in Figure \ref{fig:phi4EnhancedMelonModel}, with the sole difference that we enhance both edges in the melonic interaction by a factor of $|p|$. We start from \eqref{eq:1omegamelx} and \eqref{eq:1omeganonmelx}, the superficial degree of divergence for generic graphs: \begin{eqnarray} \omega_{{\rm d};\times} ({\mathcal G}^{\rm melon}) \!\!\! &\le& \!\!\! - (C_{\partial {\mathcal G}} - 1) - {1 \over 2} (2 N_{\rm ext} - 4 ) - 2 V_2 - V_{2;a} -\triangle^{\rm melon}_{\times} \,, \label{eq:1omegamel12} \\ \omega_{{\rm d};\times} ({\mathcal G}^{\rm non-melon}) \!\!\! &\le& \!\!\! - 1- (C_{\partial {\mathcal G}} - 1) - {1 \over 2} ( N_{\rm ext} - 4 ) - 2 V_2 - V_{2;a} - \triangle^{\rm non-melon}_{\times}. \label{eq:1omeganonmel12} \end{eqnarray} We observe that both counter-terms $CT_{2;2a}$ and $CT_{2;b}$ drop out of the power counting: as degree-2 vertices, they are neutral. Our previous analysis shows that, for any $N_{{\rm{ext\,}}} \ge 4$, the amplitude is convergent. We concentrate on the remaining case $N_{{\rm{ext\,}}} = 2$. From Lemma \ref{lem:rhobx}, we can still combine the analysis of $V_{(4)}=1$ and $V_{(4)}>1$ at $N_{{\rm{ext\,}}}=2$ and have \begin{eqnarray}\label{111} &&\omega_{{\rm d};\times} ({\mathcal G}^{\rm melon}) \le - (C_{\partial {\mathcal G}} -1) -2 V_2 - V_{2;a} -\triangle^{\rm melon}_{\times} \le 0 \,, \cr\cr && \omega_{{\rm d};\times} ({\mathcal G}^{\rm non-melon}) \le - (C_{\partial {\mathcal G}} -1) - 2 V_2 - V_{2;a} -\triangle^{\rm non-melon}_{\times} \le 0 .
\end{eqnarray} The only way to achieve logarithmic divergence is to have exactly: $C_{\partial {\mathcal G}} =1$, $V_2=0=V_{2;a}$. For a non-melonic graph, we further impose $\triangle^{\rm non-melon}_{\times}=0$ from which we infer $\rho_{\times}=2V_{(4)}-1$. Knowing that $\rho_{\times} \le 2V_{\times;4}$, this leads us to $(\rho_{\times} =2V_{\times; 4}-1; V_{4}=0)$. The case of a melonic graph yields $\triangle^{\rm melon}_{\times}=0$ which gives $\rho_{\times}=2V_{(4)}-2$, which together with $\rho_{\times} \le 2V_{\times;4}$ yields two possibilities: either $(\rho_{\times} =2V_{\times; 4}; V_{4}=1)$ or $(\rho_{\times} =2V_{\times; 4}-2; V_{4}=0)$. We have the following proposition. \begin{proposition}[List of primitively divergent graphs for model $\times$] The $p^{2a}\phi^4$-model $\times$ with parameters $D = 1, d=3,a={1 \over2}, b=1$, has the following primitively divergent graphs which obey $(\Omega({\mathcal G})= \omega(\mathcal G_{\text{color}}) - \omega({\partial\mathcal G}))$ \begin{table}[H] \begin{center} \begin{tabular}{lcccccccccccccccc} \hline\hline ${\mathcal G}$ && $N_{{\rm{ext\,}}}$ && $V_{2}$ && $V_{2;a}$ && $ V_4$ && $\rho_{\times}$ && $C_{{\partial\mathcal G}}-1$ && $\Omega({\mathcal G})$ && $\omega_d({\mathcal G})$ \\ \hline\hline I && 2 && 0 && 0 && 0 && $2 V_{\times;4} -1$ && 0 && 1 && ${0}$ \\ II && 2 && 0 && 0 && 0 && $2 V_{\times;4}-2$ && 0 && 0 && ${0}$ \\ III && 2 && 0 && 0 && 1 && $2 V_{\times;4}$ && 0 && 0 && ${0}$ \\ \hline\hline \end{tabular} {\caption{List of primitively divergent graphs of the $p^{2a}\phi^4$-model $\times$. \label{tab:listprim2}}} \end{center} \end{table} \end{proposition} In appendix \ref{app:modx}, we have illustrated an infinite family of 2-point graphs with log-divergent amplitudes, see Figures \ref{fig:V1_1x}, \ref{fig:V2melon_1x} and \ref{fig:V2nonmelon_mixx}. 
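As before, a quick arithmetic sketch (not part of the proof; the two functions below merely evaluate \eqref{eq:1omegamel12} and \eqref{eq:1omeganonmel12} at $d=3$, $D=1$) confirms the table: the listed configurations give exactly a logarithmic divergence, while any extra factor makes the amplitude convergent:

```python
def omega_x_melon(N, C=1, V2=0, V2a=0, tri=0):
    # melonic bound for the d = 3, D = 1 model x (N_ext even)
    return -(C - 1) - (2 * N - 4) // 2 - 2 * V2 - V2a - tri

def omega_x_nonmelon(N, C=1, V2=0, V2a=0, tri=0):
    # non-melonic bound for the same model
    return -1 - (C - 1) - (N - 4) // 2 - 2 * V2 - V2a - tri

# table rows I-III: N_ext = 2, C_dG = 1, V_2 = V_{2;a} = 0, saturated vertex density
assert omega_x_nonmelon(2) == 0        # row I (non-melonic)
assert omega_x_melon(2) == 0           # rows II and III (melonic)
# any N_ext >= 4, or any extra vertex/boundary factor, gives convergence:
assert omega_x_melon(4) == -2 and omega_x_nonmelon(4) == -1
assert omega_x_melon(2, V2a=1) == -1 and omega_x_nonmelon(2, C=2) == -1
```

This matches the last column of the table: all primitively divergent graphs of the model $\times$ are log-divergent 2-point graphs.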
Thus, this theory is not super-renormalizable in the usual sense, because it possesses an infinite family of corrections to the mass and to the $Z_{a}$ and $Z_{2a}$ couplings. Nor does it fit the definition of a just-renormalizable theory, because all corrections to the couplings $\lambda$ and $\eta_\times$ are finite. Again, this is a specific feature brought by the enhancement of non-local tensor interactions. With the above analysis, we can now prove that the following theorem holds. \begin{theorem}\label{theoremx} The $p^{2a}\phi^4$ model $\times$ with parameters $D = 1, d=3,a={1 \over2}, b=1$, with action defined by \eqref{eq:actiond} is renormalizable at all orders of perturbation theory. \label{theorenx} \end{theorem} \subsection{Renormalization} \label{subsect:renx} We follow a scheme similar to the one developed in section \ref{subsect:rentensr}. We sketch the expansion of the amplitudes of the graphs listed in Table \ref{tab:listprim2} and check that their local parts indeed take the form of the terms in the Lagrangian of the model $\times$ of section \ref{subsect:listx}. Performing a Taylor expansion of the amplitudes, we will also show that the subleading orders are convergent. \ \noindent {\bf Renormalization of divergent 2-point functions.} In the model $\times$, we only have 2-point log-divergent graphs, as listed in Table \ref{tab:listprim2}. For type I and II graphs, note that $\rho_{\times}<2 V_{\times; 4}$; therefore one or two edges of their boundary graph are touched by enhanced external faces. This entails that the boundaries of these graphs are equipped with $|p^{{\rm{ext\,}}}_{f_{[1]}}|^{2 a}$ and $|p^{{\rm{ext\,}}}_{f_{[1]}}|^{4 a}$, and are therefore of the form of $CT_{2; a}$ and $CT_{2; 2a}$, respectively. On the other hand, for a log-divergent graph of the type III, we have $\rho_{\times}= 2 V_{\times;4}$ and its boundary graph does not have any enhanced edges, and so takes the form of the mass term.
In the following, we only address the 2-point graphs of types I and II, and will give the main points leading to the treatment of type III graphs. Let us consider the amplitude $A_{2}(\{p^{{\rm{ext\,}}}_f\})$ of 2-point non-melonic and melonic graphs obeying, respectively, the rows I and II of Table \ref{tab:listprim2}. These graphs have 3 external faces labeled by $f\in {\mathcal F}_{{\rm{ext\,}}}=\{f_{[1]},f_{2}, f_{3}\}$, with an enhanced color-$1$ strand. By symmetry, the following study gives the same result for a graph with another enhanced color. We perform a Taylor expansion of the external face factors as given in \eqref{tayface}, and the amplitude $A_{2}(\{p^{{\rm{ext\,}}}_f\})$ takes a form similar to \eqref{rfrf2}; we replace $ \kappa(\eta_{+})$ with $\kappa(\eta_{\times})$ and have an extra factor $|p_{f_{[1]}}^{{\rm{ext\,}}}|^{2 \xi}$ present, where $\xi = a$ for a type I graph and $\xi =2a$ for a type II graph. Then the 0th order term in the expansion in $R_f$ reads \begin{eqnarray} && A_{2}(\{p^{{\rm{ext\,}}}_f\};0) = \kappa(\eta_{\times}) \Big[\int [\prod_{ {l_{\rm ext}} }d\alpha_{ {l_{\rm ext}} } ] |p_{f_{[1]}}^{{\rm{ext\,}}}|^{2 \xi} \crcr &&\times e^{-\alpha_{ {l_{\rm ext}} _1}[\sum_\xi (|p_{f_{[1]}}^{{\rm{ext\,}}}|^{2\xi} + |p_{f_2}^{{\rm{ext\,}}}|^{2\xi}+ |p_{f_3}^{{\rm{ext\,}}}|^{2\xi})+\mu]} e^{-\alpha_{ {l_{\rm ext}} _2}[\sum_\xi (|p_{f_{[1]}}^{{\rm{ext\,}}}|^{2\xi} +| p_{f_2}^{{\rm{ext\,}}}|^{2\xi}+ |p_{f_3}^{{\rm{ext\,}}}|^{2\xi})+\mu]} \Big] \crcr &&\times \Big[ \sum_{p_f} \int [\prod_{\ell}d\alpha_\ell e^{-\alpha_\ell \mu }] \prod_{f \in {\mathcal F}_{{\rm{int\,}}}}\Big[ e^{-(\sum_{\ell }\alpha_\ell)\sum_\xi |p_{f}|^{2 \xi}} \Big] \Big[ \prod_{s=1}^{d}\prod_{v_s \in {\mathcal V}_{\times;4;s}} (\epsilon\tilde{p}^{\,2a})_{v_s} \Big] \Big]. \label{massrenx} \end{eqnarray} It is explicit from the pattern of the external data that this amplitude takes the form of the $CT_{2;\xi}$ term.
We then identify the factor associated with the internal data as having a degree of divergence $\omega_{{\rm d};\times} =0$, given by our power counting analysis in section \ref{subsect:listx}. Adding all color-symmetric contributions with respect to $s$ of this graph, the sum of these amplitudes renormalizes the coupling $Z_{\xi}$. We now treat the higher orders in the Taylor expansion in the form $\sum_{f}R_f$ of $A_{2}(\{p^{{\rm{ext\,}}}_f\})$. The first-order remainder involving the sum $\sum_{f}R_f$ can be bounded in the same vein as \eqref{eq:rem2+} and we find \begin{equation}\label{rem22} |R_{2}| \leq K \prod_{(i,k)}M^{-2b(i(G^i_k)-e(G^i_k))} M^{\omega_{{\rm d};\times}(G^i_k)}\,, \end{equation} for some constant $K$, with $\omega_{{\rm d};\times}(G^i_k)=0$. This guarantees the convergence of this remainder, and all higher-order remainders can be shown to be even more convergent; the summability over scale attributions is therefore ensured in this case. For the log-divergent graphs of type III, the analysis is similar, except that no extra external momentum contribution $|p_{f_{[1]}}^{{\rm{ext\,}}}|^{2\xi}$ appears, as discussed earlier. \ We conclude that: - the expansion of log-divergent 2-point graphs around their local parts yields log-divergent terms renormalizing $Z_a$, $Z_{2a}$ and the mass; - all remainders are convergent and provide a sufficient decay to ensure the summability over scale attributions, which means that renormalization at all orders of perturbation theory according to \cite{Rivasseau:1991ub} can be achieved, and therefore Theorem \ref{theorenx} holds; - there is no wave function renormalization associated with $Z_b$ for the model $\times$, because all corresponding amplitudes are already convergent at leading order.
Without much work, we observe that several other models listed in Table \ref{table:table45} are renormalizable at $d=3$, just like the present model (up to a change of \eqref{eq:1omegamel12}, \eqref{eq:1omeganonmel12}, \eqref{111}, Table \ref{tab:listprim2} and \eqref{rem22}). Perhaps more surprisingly, at fixed $D$, we even suspect that there might be a continuum of renormalizable theories for a range of values of $b$. \section{Conclusion} \label{concl} We have addressed the perturbative (at all orders) multi-scale renormalization analysis of the so-called enhanced quartic melonic tensor field theory at any rank $d$ of the tensor fields and for any group $(U(1)^D)^d$. Studied in momentum space, the models are endowed with powers of momenta in the interaction terms, roughly of the form $p^{2a}\phi^4$, $a>0$, which might be associated with derivative couplings. The case $a=0$, well studied in the literature, is recovered in this limit. Through the enhancing procedure, amplitudes which were suppressed in the models at $a=0$ now participate in the analysis at $a>0$. In order to achieve renormalizability in the enhanced models, the propagators need a more general form than the usual Laplacian dynamics: we thus assume propagators of the form $(\sum_\xi |p|^{2\xi}+\mu)^{-1}$, with $\xi=a,2a,b$ strictly positive. Two types of models were introduced and studied in parallel in this work: an asymmetric model $+$ and a symmetric model $\times$. From the multi-scale analyses of these models, we identify new combinatorial quantities which allow us to write down a power counting theorem in terms of quasi-local subgraphs. At any rank $d>2$ and group dimension $D\ge 1$, we have found intervals of values for $a$ and $b$ for which both models are potentially renormalizable at all orders of perturbation theory. Let us give a summary of the particularities of each model.
For arbitrary $d$ and $D$, which specify the rest of the parameters, we find a two-dimensional grid ($(\mathbb{N}-\{0,1,2\})\times (\mathbb{N}-\{0\})$) of just-renormalizable models $+$. As expected, the amplitudes of non-melonic diagrams start to contribute to the flow and dominate the melonic amplitudes in each model. In fact, all the 4-point melonic diagrams become convergent. We found that the enhanced coupling $\eta_{+}$ contributes to the flow of the melonic coupling $\lambda$, whereas the converse is not possible. Introducing enhanced interactions of the form $ {\rm Tr}_4 (p^{2 a}\phi^4)$ influences the study of the 2-point function, as it requires the introduction of a particular counter-term of the form ${\rm Tr}_2(p^{2a}\phi^2)$. The model $\times$ is also renormalizable for a tuned set of parameters. Again, melonic and non-melonic amplitudes can be of the same divergence degree, as expected with enhanced interactions. However, something puzzling happens in this model: all $4$-point amplitudes prove to be finite and only 2-point amplitudes may diverge. There are infinitely many 2-point divergent amplitudes. The detailed study of the 2-point graphs requires us to include two counter-terms to remove all divergences: ${\rm Tr}_2(p^{2a}\phi^2)$ and ${\rm Tr}_2(p^{4a}\phi^2)$. Thus, the flow of this model is driven solely by the 2-point functions. The model therefore belongs neither to the class of just-renormalizable models nor to the class of super-renormalizable models in the language of usual QFT. We conjecture that there are several other renormalizable models of this kind. We recall that enhanced tensor models were introduced in attempts to escape the branched polymer phase of colored tensor models. If enhanced tensor models are turned into field theories, then the large $N$-limit becomes the UV-limit (large $p$).
As far as the present study is concerned, the perturbative renormalizability of the enhanced models of the type presented here might not immediately tell us anything about new limits or new phases of these enhanced models. Renormalizability rather ensures that the field theory counterparts of these models are long-lived, defined through several layers of momentum scales, and might be UV-complete. This is definitely an important and encouraging point for keeping up with their study. Another important aim that could certainly be reached from our analysis is the computation of the perturbative $\beta$-functions for these models. Several arguments suggest that ordinary $\phi^4$ tensor field theories are perturbatively asymptotically free \cite{Rivasseau:2011hm, Geloun:2013saa}. The main ingredient leading to asymptotic freedom is the presence of a wave-function renormalization which dominates the renormalized coupling constant. Nevertheless, the models addressed in this paper seem to belong to another class, simply because of the presence of the several couplings $Z_\xi$, $\xi=a,2a$, and the fact that $Z_b$ does not get any radiative corrections. For the model $+$, we realize that the RG equations might be more involved than one might expect because of the number of couplings in the theory. Thus, only careful computations of the $\beta$-functions could help understand the UV behaviour of the model $+$. Beyond perturbation theory, non-perturbative properties of these models can be sought in the future. In particular, the next steps of the program for enhanced models would be to find the UV and IR fixed points which may exist and, from these, perhaps complete trajectories from the UV to the IR. The proof of perturbative renormalizability is again encouraging for this next level. The FRG approach has been applied to ordinary tensor field theories with interesting results.
Extending these methods to the enhanced theory space remains to be done. In particular, if one shows the existence of stable IR fixed points in enhanced tensor field theories, it could give them a firmer underpinning as interesting candidates undergoing a phase transition from discrete-like geometries to some condensate-like geometry. \section*{Acknowledgements} The research of RT is supported by the Netherlands Organisation for Scientific Research (NWO) within the Foundation for Fundamental Research on Matter (FOM) grant 13VP12. RT thanks the Max Planck Institute for Gravitational Physics, Potsdam-Golm (Albert Einstein Institute) for its hospitality while this work was in progress. JBG thanks the Laboratoire de Physique Th\'eorique d'Orsay, Universit\'e Paris 11, for its hospitality. \section*{Appendix}
Q: Implement a HtmlHelper extension that uses the routing system I'm trying to implement my own extension for HtmlHelper that will output a link in a similar fashion to ActionLink. I know I can do this easily with TagBuilder for instance, but I'd like to take advantage of the routing system ability to construct outgoing urls as described by Scott Guthrie in this vintage article. My application is centered around organizations. One user may create an organization and an organization may have multiple locations. The main action takes place within a current location. I consider an organization to be a tenant in my application and the organization id is sent via the url. Here's the route configuration for the above: routes.MapRoute( name: "Tenant", url: "Tenants/{tenantId}/{action}", defaults: new { controller = "Organizations", action = "Dashboard", id = UrlParameter.Optional } ); routes.MapRoute( name: "TenantLocation", url: "Tenants/{tenantId}/Locations/{locationId}/{controller}/{action}/{id}", defaults: new { controller = "Dashboard", action = "Index", id = UrlParameter.Optional } ); For route "Tenant", the controller is always Organizations. This part of the application takes care of administrative actions regarding an organization as a whole. The context here contains a tenant id. For route "TenantLocation", the context shifts inside a location, so the context contains a tenant id and a location id. Now, I'd like to create two extension methods for HtmlHelper called TenantActionLink and TenantLocationActionLink to generate such links as: /Tenants/150/Dashboard or /Tenants/150/Locations/300/Team/Edit/1000 The part "/Tenants/150/Locations/300/" could be considered as a prefix to be prepended to a URL, the ids being extracted by an ActionFilterAttribute and stored as properties in a BaseController class. 
As I mentioned, I could generate the links easily with TagBuilder, but if I change the routes later, I have to update all the calls of the tenant action link methods in all the views and controllers. Any advice on how to tackle this? I'm using ASP.NET MVC 5 and .NET Framework 4.6.1 Thanks. Update to answer Stephen Muecke's questions: What have you tried so far? I implemented the extensions like this (for simplicity I'm including only the code for TenantLocationActionLink, the other one is similar): public static MvcHtmlString TenantLocationActionLink(this HtmlHelper helper, int tenantId, int locationId, string linkText, string actionName, string controllerName, object htmlAttributes = null) { var url = $"/Tenants/{tenantId}/Locations/{locationId}/{controllerName}/{actionName}"; var tagBuilder = new TagBuilder("a"); tagBuilder.InnerHtml = linkText; tagBuilder.MergeAttributes(HtmlHelper.AnonymousObjectToHtmlAttributes(htmlAttributes)); tagBuilder.MergeAttribute("href", url); return new MvcHtmlString(tagBuilder.ToString(TagRenderMode.Normal)); } and what problems are you having? I can't figure out how to handle the routeValues. My implementation is somewhat of a step down from the more flexible implementation of ActionLink, especially since it doesn't take into account routeValues. How am I going to handle the situations when my route configuration gets more complex? And what are you expecting your extension method to do that the inbuilt ActionLink() or RouteLink() don't? Prepend the tenant and location ids as described above. How could I use ActionLink() or RouteLink() to achieve this?
In your case, for the Tenant route @Html.RouteLink("...", "Tenant", new { tenantId = 150, action = "Dashboard" }) generates .../Tenants/150/Dashboard, and for the TenantLocation @Html.RouteLink("...", "TenantLocation", new { tenantId = 150, locationId = 300, controller = "Team", action = "Edit", id = 1000 }) generates ../Tenants/150/Locations/300/Team/Edit/1000
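If you still want the dedicated helper names so the call sites stay short, you can simply wrap RouteLink() in your own extension methods. A sketch under the assumption that the route names stay "Tenant" and "TenantLocation" as in the question (the class and parameter names here are illustrative, not a fixed API):
public static class TenantLinkExtensions
{
    // Generates /Tenants/{tenantId}/{action} via the "Tenant" route.
    public static MvcHtmlString TenantActionLink(this HtmlHelper helper,
        string linkText, string actionName, int tenantId, object htmlAttributes = null)
    {
        return helper.RouteLink(linkText, "Tenant",
            new RouteValueDictionary(new { tenantId, action = actionName }),
            HtmlHelper.AnonymousObjectToHtmlAttributes(htmlAttributes));
    }

    // Generates /Tenants/{tenantId}/Locations/{locationId}/{controller}/{action}/{id}
    // via the "TenantLocation" route; extra routeValues (e.g. id) are merged in.
    public static MvcHtmlString TenantLocationActionLink(this HtmlHelper helper,
        string linkText, string actionName, string controllerName,
        int tenantId, int locationId, object routeValues = null, object htmlAttributes = null)
    {
        var values = new RouteValueDictionary(routeValues)
        {
            ["tenantId"] = tenantId,
            ["locationId"] = locationId,
            ["controller"] = controllerName,
            ["action"] = actionName
        };
        return helper.RouteLink(linkText, "TenantLocation", values,
            HtmlHelper.AnonymousObjectToHtmlAttributes(htmlAttributes));
    }
}
(You need using System.Web.Mvc;, using System.Web.Mvc.Html; and using System.Web.Routing;.) Since the hrefs are produced by the routing system rather than by string concatenation, changing the route templates later only means updating RouteConfig, not every view.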
\section{Introduction} Understanding the properties of nuclear matter, especially its equation of state (EOS), is extremely important in nuclear physics and astrophysics. In order to investigate various nuclear and astrophysical phenomena such as heavy-ion collisions, supernova explosions, and neutron star evolution, we need accurate information on the nuclear matter EOS in a wide range of temperature, density, and isospin asymmetry~\cite{Dan02,Lat04}. In the last few decades, significant progress has been made in determining the nuclear matter EOS from experiments, observations and theoretical calculations~\cite{Dan02,Lat04,Ste05,Bar05,LCK08,Heb15,Car15,Oer17}. However, knowledge of the EOS of isospin-asymmetric nuclear matter, especially the nuclear symmetry energy $E_{\rm sym}(n)$ at densities far from the saturation density $n_0$ of symmetric nuclear matter, is still limited (see, e.g., Ref.~\cite{LiBA14} for a recent review). In nuclear matter at very low densities, it is known that nucleons tend to form light nuclei to reduce the energy of the system~\cite{Rop09,Typ10,Typ13,Hag14,Giu14}. Such clustering phenomena may exist in the crust of neutron stars~\cite{Fat10,Bha10,Ava12,Rad14,Ava17}, in core-collapse supernovae~\cite{Fis14,Fur17}, and even in heavy-ion collisions~\cite{ChenLW03,Gai04,ZhangYX12,Rop13,Hem15}. It is thus interesting to consider the clustering effects on the properties of nuclear matter under various conditions of density, temperature and isospin asymmetry. Recently, great efforts have been made to investigate the clustering effects. For example, the clustering effects on nuclear structure properties have been explored and some novel features revealed~\cite{Ebr12,Zho13,Yan14,He14,Aym14,THSR17}.
In addition, in some commonly used EOSs for the simulation of core-collapse supernovae, e.g., the Lattimer-Swesty EOS constructed by Lattimer and Swesty~\cite{Lat91} and the Shen EOS constructed by Shen \textit{et al}.~\cite{She11}, the $\alpha$-particles are included and treated as an ideal Boltzmann gas. The EOS of low-density nuclear matter including nucleons and $\alpha$-particles has also been investigated by using the virial expansion~\cite{Hor06}, and later on the contributions of the deuteron ($d=^{2}$H), triton ($t=^{3}$H) and helium-3 ($h=^{3}$He) as well as heavier nuclei were further included and investigated by using the S-matrix method and the quasiparticle gas model~\cite{Mal08,Hec09}. During the last several years, Typel \textit{et al.} have investigated nuclear matter including the formation of light clusters up to the $\alpha$-particle by using a generalized density-dependent relativistic mean-field (gDD-RMF) model~\cite{Typ10}, in which the density and temperature dependence of the binding energy of clusters in the nuclear medium is obtained from the predictions of the quantum statistical (QS) approach. The density dependence of the cluster binding energies and coupling parameters brings ``rearrangement'' contributions to the particle equations of motion and vector self-energies~\cite{Had93,Len95,Ces98,Typ99,Roc11}. Since the work of Typel \textit{et al.}~\cite{Typ10}, a number of studies have been carried out to explore the clustering effects in nuclear matter. For instance, Sharma \textit{et al}.~\cite{Bha10} study the clustering effects on the liquid-gas phase transition, composition, and structure of protoneutron stars including hyperons. Avancini \textit{et al.}~\cite{Ava12} investigate the properties of the nuclear pasta phase including the $\alpha$-particles and other light nuclei.
Ferreira \textit{et al.}~\cite{Fer12} fit the density dependence of the in-medium cluster binding energies of Ref.~\cite{Typ10} by changing the couplings of light clusters in the RMF model, and explore the properties of light clusters in nuclear matter. For simplicity, the binding energies of light clusters in the nuclear medium are usually treated as constants in these works~\cite{Ava12,Fer12}. Experimentally, clustering effects in low-density nuclear matter have been investigated in heavy-ion collisions around Fermi energies. In particular, information on the nuclear matter symmetry energy at low densities and finite temperatures has been extracted in heavy-ion collision experiments by Natowitz \textit{et al.}~\cite{Nat10}, Kowalski \textit{et al.}~\cite{Kow07}, and Wada \textit{et al.}~\cite{Wad12}. The results indicate that the clustering effects can drastically enhance the symmetry energy at low densities, while conventional mean-field models without clusters significantly under-predict the experimentally measured values of the symmetry energy. Meanwhile, experimental data~\cite{Hag12} on the dissolution density (Mott density) of clusters in the nuclear medium have also been obtained. In the present work, we investigate the properties of nuclear matter with light clusters at low densities and finite temperatures by using a generalized nonlinear relativistic mean-field (gNL-RMF) model. We also discuss the Mott density of clusters in nuclear matter and compare the model predictions with the experimental data on the symmetry energy and symmetry free energy extracted from heavy-ion collisions. In the gNL-RMF model, light clusters up to $\alpha $ ($1 \le A \le 4$) are included as explicit degrees of freedom and treated as point-like particles with their interactions described by meson exchanges, and the medium effects on the light cluster binding energies are described by density- and temperature-dependent energy shifts.
In the non-linear RMF (NL-RMF) model~\cite{Lal97,Hor01,Tod03,Tod05,Pie06,Cai12}, nonlinear couplings of mesons are introduced to reproduce the ground-state properties of finite nuclei and to modify the density dependence of the symmetry energy $E_{\rm sym}(n)$. Since all of these couplings are constants in the NL-RMF model, the calculations are simpler and more convenient. This paper is organized as follows. In Section~\ref{Model}, we introduce the gNL-RMF model for low-density nuclear matter including light clusters. The theoretical results are then presented and compared with experimental data in Section~\ref{Result}. Finally, we give a conclusion in Section~\ref{Summary}. \section{Theoretical framework} \label{Model} We extend the NL-RMF model and study the properties of a homogeneous multi-component nuclear matter system including protons, neutrons and the light clusters $d$, $t$, $h$ and $\alpha$. All the components are treated as point-like particles, and they interact through the exchange of various effective mesons including isoscalar scalar ($\sigma$) and vector ($\omega$) mesons and an isovector vector ($\rho$) meson.
In this gNL-RMF model, the Lagrangian density of the system reads \begin{eqnarray} \mathcal{L} &=& \sum_{i=p, n, t, h} \mathcal{L}_i + \mathcal{L}_{\alpha} + \mathcal{L}_d + \mathcal{L}_{\rm{meson}}, \label{eq:LRMF} \end{eqnarray}% where the fermions ($i=p, n, t, h$) with spin $1/2$ are described by \begin{eqnarray} \mathcal{L}_i = \bar{\Psi}_i\left[\gamma_{\mu}iD^{\mu}_i-M^{*}_i\right]\Psi_i, \label{eq:Lj} \end{eqnarray}% while the Lagrangian densities of $\alpha$-particle with spin 0 and deuteron with spin $1$ are given, respectively, by \begin{eqnarray} \mathcal{L}_{\alpha} &=& \frac{1}{2}\left(iD^{\mu}_{\alpha}\phi_{\alpha}\right)^{*}\left(iD_{\mu\alpha}\phi_{\alpha}\right) -\frac{1}{2}\phi^{*}_{\alpha}\left(M^{*}_{\alpha}\right)^2\phi_{\alpha}, \label{eq:La} \end{eqnarray}% and \begin{eqnarray} \mathcal{L}_d &=& \frac{1}{4}\left(iD^{\mu}_d\phi^{\nu}_d-iD^{\nu}_d\phi^{\mu}_d\right)^{*} \left(iD_{d\mu}\phi_{d\mu}-iD_{d\nu}\phi_{d\nu}\right) \nonumber\\ &-&\frac{1}{2}\phi^{\mu*}_d\left(M^{*}_d\right)^2\phi_{d\mu}. \label{eq:Ld} \end{eqnarray}% The covariant derivative is defined by \begin{eqnarray} iD^{\mu}_i=i\partial^{\mu}-A_i g_{\omega}\omega^{\mu} -\frac{g_{\rho}}{2}\overrightarrow{\tau}\cdot\overrightarrow{\rho}^{\mu}, \label{eq:iDj} \end{eqnarray}% and the effective mass is expressed as \begin{eqnarray} M_i^{*}=A_i m-B_i - A_i g_{\sigma} \sigma,\quad i=p, n, t, h, d, \alpha, \label{eq:Mj} \end{eqnarray}% where $g_{\sigma}$, $g_{\omega}$, and $g_{\rho}$ are coupling constants of $\sigma$, $\omega$, and $\rho$ mesons with nucleons, respectively; $A_i$ is mass number; $B_i$ is the in-medium cluster binding energy and $m$ is nucleon mass in vacuum which is taken to be $m=939$ MeV. 
It should be noted that here neutrons and protons are assumed to have the same mass in vacuum, but for astrophysical applications of nuclear matter EOS, experimental masses of neutrons ($m_{n}$) and protons ($m_{p}$) should be used for accuracy and this gives a linear term in the isospin dependence of nucleon mass. Nucleons form an isospin doublet with $\tau_3 \Psi_n = -\Psi_n$ and $\tau_3 \Psi_p = \Psi_p$. Similarly, for the triton and helium-3 one has $\tau_3 \Psi_t = -\Psi_t$ and $\tau_3 \Psi_h = \Psi_h$, respectively. The meson Lagrangian densities are given by $\mathcal{L}_{\rm{meson}} = \mathcal{L}_{\sigma} + \mathcal{L}_{\omega} + \mathcal{L}_{\rho} + \mathcal{L}_{\omega\rho}$ with \begin{eqnarray} &&\mathcal{L}_{\sigma}=\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma -\frac{1}{2}m^2_{\sigma}\sigma^2-\frac{1}{3}g_2\sigma^3-\frac{1}{4}g_3\sigma^4, \\ &&\mathcal{L}_{\omega}=-\frac{1}{4}W_{\mu\nu}W^{\mu\nu} +\frac{1}{2}m^2_{\omega}\omega_{\mu}\omega^{\mu}+\frac{1}{4}c_3\left(\omega_{\mu}\omega^{\mu}\right)^2,\\ &&\mathcal{L}_{\rho}=-\frac{1}{4}\overrightarrow{R}_{\mu\nu}\cdot\overrightarrow{R}^{\mu\nu} +\frac{1}{2}m^2_{\rho}\overrightarrow{\rho}_{\mu}\cdot\overrightarrow{\rho}^{\mu},\\ &&\mathcal{L}_{\omega\rho}=\Lambda_v\left(g^2_{\omega}\omega_{\mu}\omega^{\mu}\right) \left(g^2_{\rho}\overrightarrow{\rho}_{\mu}\cdot\overrightarrow{\rho}^{\mu}\right). \label{eq:Lmeson} \end{eqnarray}% where $W^{\mu\nu}$ and $\overrightarrow{R}^{\mu\nu}$ are the antisymmetric field tensors for $\omega^{\mu}$ and $\overrightarrow{\rho}^{\mu}$, respectively. In the RMF approach, meson fields are treated as classical fields and the field operators are replaced by their expectation values. 
In order to explore how the clustering effects depend on the NL-RMF interactions and the symmetry energy of nucleonic matter, we select four parameter sets of the NL-RMF model for the nucleon degree of freedom, namely, NL3~\cite{Lal97}, FSU~\cite{Tod05}, FSUGold5~\cite{Pie06} and FSU-II~\cite{Cai12}. The parameter values of the four NL-RMF interactions are listed in Table~\ref{tab:1} for completeness. The FSUGold5 parameter set is obtained from FSU by adjusting $g_{\rho}$ and $\Lambda_v$ as prescribed in Ref.~\cite{Pie06}, namely, for $\Lambda_v=0.05$ one readjusts only $g_{\rho}$ to keep the symmetry energy $E_{\rm sym}(n_c)$ at $n_c = 0.1$ fm$^{-3}$ fixed. The parameters of FSU-II are taken from Ref.~\cite{Cai12} and are obtained similarly to those of FSUGold5. One can then check the symmetry energy effects by adopting FSU, FSU-II and FSUGold5 with different values of $\Lambda_v$, which lead to different values of $E_{\rm sym}(n_0)$ as well as of the density slope parameter $L=3n_0dE_{\rm{sym}}(n)/dn|_{n=n_0}$ of the symmetry energy. In particular, one has $E_{\rm sym}(n_0) = 30.6$ MeV and $L=45.8$ MeV for FSUGold5, $E_{\rm{sym}}(n_0) = 32.5$ MeV and $L=60.4$ MeV for FSU, and $E_{\rm{sym}}(n_0) = 35.5$ MeV and $L=87.4$ MeV for FSU-II. In contrast to FSU, FSU-II and FSUGold5, the NL3 interaction also has significantly different isoscalar properties and can thus be used to test the interaction dependence of our results.
\begin{table} \caption{Parameter sets of the NL-RMF Lagrangian used in this work.} \label{tab:1} \begin{tabular}{lc c c c} \hline\hline ~ & NL3~\cite{Lal97} & FSU~\cite{Tod05} & FSUGold5~\cite{Pie06} & FSU-II~\cite{Cai12} \\ \hline $m_{\sigma}$ (MeV) &508.194 & 491.500 & 490.250 &491.500 \\ $m_{\omega}$ (MeV) &782.501 & 782.5 &782.5 & 782.5 \\ $m_{\rho}$ (MeV) &763.0 & 763.0 &763.0 & 763.0 \\ $g_{\sigma}$ &-10.2170 & -10.5924 &-10.5924 &-10.5924 \\ $g_{\omega}$ &12.8680 & 14.3369 & 14.3369 & 14.3369 \\ $g_{\rho}$ &8.9480 & 11.7673 & 16.3739 & 9.6700 \\ $g_{2}$ (fm$^{-3}$) &10.4310 &4.2771 &4.2771 & 4.2771 \\ $g_{3}$ &-28.8850 & 49.8556 &49.8556 & 49.8556 \\ $c_{3}$ &0 & 422.4953 &422.4953& 422.4953 \\ $\Lambda_v$ &0 & 0.030 & 0.050 &0.010 \\ \hline\hline \end{tabular} \end{table} The in-medium cluster binding energy $B_i=B_i^0+\Delta B_i$ depends on the temperature $T$, the total proton number density $n^{tot}_{p}$, and the total neutron number density $n^{tot}_{n}$ of the system, where $B_i^0$ denotes the binding energy of cluster $i$ in vacuum. The total energy shift of a cluster in the nuclear medium mainly includes the contribution from the self-energy shift, which is already contained in the cluster effective mass in the gNL-RMF model, the Coulomb shift, which can be calculated in the Wigner-Seitz approximation, and the Pauli shift, which was evaluated in perturbation theory with Jastrow and Gaussian approaches for the light clusters $d$, $t$, $h$ and $\alpha$~\cite{Typ10}. The Coulomb shift is very small for the light clusters $d$, $t$, $h$ and $\alpha$ considered here and is thus neglected in the present work.
The energy shift $\Delta B_i$ is thus from the Pauli shift and it is assumed to have the following empirical quadratic form~\cite{Typ10}, i.e., \begin{eqnarray} \Delta B_i(n^{tot}_{p}, n^{tot}_{n}, T)= -\tilde{n}_{i}\left[1+\frac{\tilde{n}_{i}}{2 \tilde{n}_{i}^{0}}\right]\delta B_{i}(T), \label{eq:delbind} \end{eqnarray}% where $\tilde{n}_{i}$ stands for \begin{eqnarray} \tilde{n}_{i}= \frac{2}{A_i}\left[Z_i n^{tot}_{p} + N_i n^{tot}_{n}\right], \label{eq:abb} \end{eqnarray}% in which $Z_i$ and $N_i$ are proton number and neutron number of the cluster $i$, respectively. The density scale for cluster $i$ is given by \begin{eqnarray} \tilde{n}_{i}^{0}\left(T\right) = \frac{B_{i}^{0}}{\delta B_{i}\left(T\right)}. \label{eq:denscale} \end{eqnarray}% The temperature dependence comes from $\delta B_{i}\left(T\right)$ defined by~\cite{Typ10} \begin{eqnarray} \label{eq:thashift} &&\delta B_i\left(T\right)=\frac{a_{i, 1}}{\left(T+a_{i, 2}\right)^{3/2}}, \quad i=\alpha, t, h, \\ \label{eq:dshift} &&\delta B_d\left(T\right) \\ \nonumber &&=\frac{a_{i, 1}}{T^{3/2}}\left[\frac{1}{\sqrt{y_i}}-\sqrt{\pi}a_{i, 3}\exp\left(a_{i, 3}^2 y_i\right)\mathrm{erfc}\left(a_{i, 3}\sqrt{y_i}\right)\right], \end{eqnarray}% with $y_i=1+a_{i, 2}/T$. The values of parameters $a_{i, 1}$, $a_{i, 2}$, and $a_{i, 3}$ are taken from Ref.~\cite{Typ10} and listed in Table~\ref{tab:2} for completeness. \begin{table} \caption{Parameters for the in-medium cluster binding energy shifts. 
The values are taken from Ref.~\cite{Typ10} and the values of $a_{\alpha, 1}$ and $a_{d, 1}$ in the parenthesis are the revised values in the present work to fit the experimental Mott densities~\cite{Hag12}.} \label{tab:2} \begin{tabular}{lc c c c c} \hline\hline Cluster $i$ & $a_{i , 1}$ & $a_{i, 2}$ & $a_{i, 3}$ & $B_i^0$ \\ ~ & (MeV$^{5/2}$ fm$^{3}$)& (MeV) & (MeV) & (MeV) \\ \hline $\alpha$ & 164371 (137330) & 10.6701 & - & 28.29566 \\ $d$ & 38386.4 (76500) & 22.5204 & 0.2223 & 2.224566 \\ $t$ & 69516.2 & 7.49232 & - & 8.481798 \\ $h$ & 58442.5 & 6.07718 & - &7.718043 \\ \hline\hline \end{tabular} \end{table} For homogeneous matter, the non-vanishing expectation values of meson fields are $\sigma=\langle\sigma\rangle$, $\omega=\langle\omega^0\rangle$, and $\rho=\langle\rho^3_0\rangle$. Since the cluster binding energy is density dependent, the equations of motion for the meson fields have the following form: \begin{eqnarray} \label{eq:sEoM} m_{\sigma}^2\sigma &+& g_2 \sigma^2+g_3 \sigma^3 = \sum_{i=p, n, \alpha, d, t, h} g^i_{\sigma} n_i^s , \\ \label{eq:oEoM} m_{\omega}^2\omega&+&c_3 \omega^3 + 2\Lambda_v g^2_{\omega} g^2_{\rho} \omega \rho^2 = \sum_{i=p, n, \alpha, d, t, h} g^i_{\omega} n_i \nonumber \\ &-& \sum_{i=\alpha, d, t, h}\frac{m_{\omega}^2}{2g_{\omega}}\left(\frac{\partial \Delta B_i}{\partial n^{\rm{ps}}_p}+\frac{\partial \Delta B_i}{\partial n^{\rm{ps}}_n}\right)n_i^s, \\ \label{eq:rEoM} m_{\rho}^2\rho &+& 2\Lambda_v g^2_{\omega} g^2_{\rho} \omega^2 \rho = \sum_{i=p, n, t, h} g^i_{\rho} I_3^i n_i \nonumber \\ &-& \sum_{i=\alpha, d, t, h}\frac{m_{\rho}^2}{g_{\rho}}\left(\frac{\partial \Delta B_i}{\partial n^{\rm{ps}}_p}-\frac{\partial \Delta B_i}{\partial n^{\rm{ps}}_n}\right)n_i^s, \end{eqnarray}% where $n_i^s$ is the scalar density, $n_i$ is the vector density, isospin $I_3^i$ is equal to $1/2$ for $i=p, h$ and $-1/2$ for $i=n, t$, and the meson-cluster couplings are assumed to have the following forms, \begin{eqnarray} g^i_{\sigma}=A_i 
g_{\sigma},\quad g^i_{\omega}=A_i g_{\omega},\quad g^i_{\rho}=g_{\rho}. \label{eq:coup} \end{eqnarray} In the above derivations, to avoid complications due to the total baryon density dependence of the cluster binding energies in the present theoretical framework, following the work of Typel \textit{et al}.~\cite{Typ10}, the dependence on the total baryon density in Eq.~(\ref{eq:delbind}) is replaced by a dependence on the pseudodensities which are defined by \begin{eqnarray} \label{eq:pseudo} n^{\rm{ps}}_n=\frac{1}{2}\left[\rho_{\omega}-\rho_{\rho}\right], \quad n^{\rm{ps}}_p =\frac{1}{2}\left[\rho_{\omega}+\rho_{\rho}\right], \end{eqnarray} with \begin{eqnarray} \label{eq:pseudoor} \rho_{\omega}=\frac{m^2_{\omega}}{g_{\omega}}\sqrt{\omega^{\mu}\omega_{\mu}} ,\quad \rho_{\rho}=\frac{2m^2_{\rho}}{g_{\rho}}\sqrt{\overrightarrow{\rho}^{\mu}\overrightarrow{\rho}_{\mu}}. \end{eqnarray}% The clusters are treated as point-like particles, and the vector and scalar densities of the fermions ($i=p, n, t, h$) are given, respectively, by \begin{eqnarray} \label{eq:fden} n_i&=&g_i\int \frac{d^3k}{\left(2\pi\right)^3}\left[f_i^+(k)-f_i^-(k)\right],\\ \label{eq:fsden} n_i^s&=&g_i\int \frac{d^3k}{\left(2\pi\right)^3}\frac{M_i^*}{\sqrt{k^2+M_{i}^{*2}}} \nonumber \\ && \times \left[f_i^+(k)+f_i^-(k)\right], \end{eqnarray}% with degeneracy factor $g_i=2$ and the occupation probability given by the Fermi-Dirac distribution, i.e., \begin{eqnarray} f_{i}^{\pm}=\frac{1}{1+\exp{\left[\left(\sqrt{k^{2}+M_{i}^{*2}}\mp\nu_{i}\right)/T\right]}}. 
\label{eq:fermi} \end{eqnarray}% The densities of the bosons ($i=\alpha, d$) are obtained from \begin{eqnarray} \label{eq:bden} n_i&=&g_i\int \frac{d^3k}{\left(2\pi\right)^3}\left[b_i^+(k)-b_i^-(k)\right],\\ \label{eq:bsden} n_i^s&=&g_i\int \frac{d^3k}{\left(2\pi\right)^3}\frac{M_i^*}{\sqrt{k^2+M_{i}^{*2}}} \nonumber \\ && \times \left[b_i^+(k)+b_i^-(k)\right], \end{eqnarray}% with degeneracy factor $g_{\alpha}=1$ and $g_d=3$, and the Bose-Einstein distribution gives the occupation probability in the following form: \begin{eqnarray} b_{i}^{\pm}=\frac{1}{-1+\exp{\left[\left(\sqrt{k^{2}+M_{i}^{*2}}\mp\nu_{i}\right)/T\right]}}. \label{eq:boson} \end{eqnarray}% For a system including nucleons and light clusters in chemical equilibrium as we are considering in the present work, $\nu_i$ is the effective chemical potential which is defined as $\nu_i = \mu_i - g^i_{\omega}\omega - g^i_{\rho} I^i_3 \rho$, where the chemical potential of cluster $i$ is determined by \begin{eqnarray} \mu_i=N_i\mu_n+Z_i\mu_p. \label{eq:mueq} \end{eqnarray}% The thermodynamic quantities of homogeneous matter are easily derived from the energy-momentum tensor. 
The energy density is given by \begin{eqnarray} \epsilon &=&\sum_{i=p, n, t, h}g_i\int\frac{d^3k}{\left(2\pi\right)^{3}}\sqrt{k^{2}+M^{*2}}\left( f_{i}^{+}+f_{i}^{-}\right) \nonumber \\ &&+ \sum_{i=d, \alpha}g_i\int\frac{d^3k}{\left(2\pi\right)^{3}}\sqrt{k^{2}+M^{*2}}\left( b_{i}^{+}+b_{i}^{-}\right) \nonumber \\ &&+ \frac{1}{2}m_{\sigma }^{2}\sigma ^{2}+ \frac{1}{3}g_{2}\sigma ^{3}+\frac{1}{4}g_{3}\sigma ^{4} \nonumber \\ &&-\frac{1}{2}m_{\omega }^{2}\omega ^{2}-\frac{1}{4}c_{3}\omega ^{4} -\frac{1}{2}m_{\rho }^{2}\rho ^{2} \nonumber \\ &&+ \sum_{i=p, n, t, h}\left(g^i_{\omega }\omega n_i + g^i_{\rho}\rho I^i_3 n_i\right)-\Lambda_v g^2_{\omega}g^2_{\rho} \omega^2 \rho^2, \label{eq:energy} \end{eqnarray}% the pressure is obtained as \begin{eqnarray} p &=&\frac{1}{3}\sum_{i=p, n, t, h}g_i\int\frac{d^3k}{\left(2\pi\right)^{3}} \frac{k^2}{\sqrt{k^{2}+M^{*2}}}\left(f_{i}^{+}+f_{i}^{-}\right) \nonumber\\ &&+ \frac{1}{3} \sum_{i=d,\alpha}g_i\int\frac{d^3k}{\left(2\pi\right)^{3}} \frac{k^2}{\sqrt{k^{2}+M^{*2}}}\left(b_{i}^{+}+b_{i}^{-}\right) \nonumber \\ &&- \frac{1}{2}m_{\sigma }^{2}\sigma ^{2}- \frac{1}{3}g_{2}\sigma ^{3}-\frac{1}{4}g_{3}\sigma ^{4} \nonumber \\ &&+\frac{1}{2}m_{\omega }^{2}\omega ^{2}+\frac{1}{4}c_{3}\omega ^{4} +\frac{1}{2}m_{\rho }^{2}\rho ^{2} \nonumber \\ &&+\Lambda_v g^2_{\omega}g^2_{\rho} \omega^2 \rho^2, \label{eq:pressure} \end{eqnarray}% and the entropy density is expressed as \begin{eqnarray} s&=&-\sum_{i=p, n, t, h}g_i\int\frac{d^3k}{\left(2\pi\right)^{3}} \left[f_{i}^{+}\ln f_{i}^{+}\right. \nonumber \\ &&+\left( 1-f_{i}^{+}\right) \ln \left(1-f_{i}^{+}\right) + f_{i}^{-}\ln f_{i}^{-} \nonumber \\ &&+\left.\left( 1-f_{i}^{-}\right) \ln \left(1-f_{i}^{-}\right) \right] -\sum_{i=\alpha, d} g_i\int\frac{d^3k}{\left(2\pi\right)^{3}} \nonumber \\ && \times \left[b_{i}^{+}\ln b_{i}^{+}-\left( 1+b_{i}^{+}\right) \ln \left(1+b_{i}^{+}\right)\right. 
\nonumber \\ &&+ \left.b_{i}^{-}\ln b_{i}^{-}-\left( 1+b_{i}^{-}\right) \ln \left(1+b_{i}^{-}\right) \right]. \label{eq:entropy} \end{eqnarray}% These thermodynamic quantities satisfy the Hugenholtz--Van Hove theorem, i.e., \begin{eqnarray} \epsilon=Ts-p+\sum_{i=p, n, d, t, h, \alpha} \mu_i n_i. \label{eq:HH} \end{eqnarray}% It is convenient to define the internal energy per baryon as $E_{\rm{int}}=\epsilon/n-m$ and the free energy per baryon as \begin{eqnarray} F=E_{\rm{int}}-T\frac{s}{n}. \label{eq:free} \end{eqnarray}% The internal energy per baryon of isospin asymmetric nuclear matter may be expanded in powers of the isospin asymmetry $\delta=\left(n^{tot}_n-n^{tot}_p\right)/\left(n^{tot}_n+n^{tot}_p\right)$ up to $4$th order, i.e., \begin{eqnarray} E_{\rm{int}}\left(n, \delta, T\right)&=&E_{\rm{int}}\left(n, 0, T\right)+E_{\rm{sym}}(n, T)\delta^2 \nonumber\\ &+& E_{{\rm{sym}}, 4}(n, T)\delta^4+\mathcal{O}(\delta^6), \label{eq:esym} \end{eqnarray}% where the density- and temperature-dependent symmetry energy $E_{\rm{sym}}$ and the $4$th-order symmetry energy $E_{\rm{sym, 4}}$ are defined by \begin{eqnarray} \label{eq:esym2} E_{\rm{sym}}(n, T)&=&\frac{1}{2}\left.\frac{\partial^2 E_{\rm{int}}}{\partial \delta^2}\right|_{\delta=0},\\ \label{eq:esym4} E_{{\rm{sym}}, 4}(n, T)&=&\frac{1}{24}\left.\frac{\partial^4 E_{\rm{int}}}{\partial \delta^4}\right|_{\delta=0}. \end{eqnarray}% The free energy per baryon $F$ and the entropy per baryon $S$ can be expanded in the same manner, i.e., \begin{eqnarray} \label{eq:fsym} F\left(n, \delta, T\right)&=&F\left(n, 0, T\right)+F_{\rm{sym}}(n, T)\delta^2 \nonumber\\ &+& F_{{\rm{sym}}, 4}(n, T)\delta^4+\mathcal{O}(\delta^6), \\ \label{eq:ssym} S\left(n, \delta, T\right)&=&S\left(n, 0, T\right)+S_{\rm{sym}}(n, T)\delta^2 \nonumber\\ &+& S_{{\rm{sym}}, 4}(n, T)\delta^4+\mathcal{O}(\delta^6).
\end{eqnarray}% The symmetry free energy and the $4$th-order symmetry free energy are given, respectively, by \begin{eqnarray} \label{eq:fsym2} F_{\rm{sym}}(n, T)&=&\frac{1}{2}\left.\frac{\partial^2 F}{\partial \delta^2}\right|_{\delta=0},\\ \label{eq:fsym4} F_{{\rm{sym}}, 4}(n, T)&=&\frac{1}{24}\left.\frac{\partial^4 F}{\partial \delta^4}\right|_{\delta=0}, \end{eqnarray}% while the symmetry entropy and the $4$th-order symmetry entropy are obtained, respectively, as \begin{eqnarray} \label{eq:ssym2} S_{\rm{sym}}(n, T)&=&\frac{1}{2}\left.\frac{\partial^2 S}{\partial \delta^2}\right|_{\delta=0},\\ \label{eq:ssym4} S_{{\rm{sym}}, 4}(n, T)&=&\frac{1}{24}\left.\frac{\partial^4 S}{\partial \delta^4}\right|_{\delta=0}. \end{eqnarray}% Furthermore, one can investigate the clustering effects on the symmetry energy, symmetry free energy and symmetry entropy by checking the parabolic law, in which $E_{\rm{sym}}(n, T)$, $F_{\rm{sym}}(n, T)$ and $S_{\rm{sym}}(n, T)$ are replaced, respectively, by \begin{eqnarray} \label{eq:epara} E_{\rm{para}}(n, T)&=&E_{\rm{int}}\left(n, \delta = 1, T\right)-E_{\rm{int}}\left(n,0, T\right),\\ \label{eq:fpara} F_{\rm{para}}(n, T)&=&F\left(n, \delta = 1, T\right)-F\left(n, 0, T\right),\\ \label{eq:spara} S_{\rm{para}}(n, T)&=&S\left(n, \delta = 1, T\right)-S\left(n, 0, T\right). \end{eqnarray}% \section{Results and Discussion} \label{Result} \subsection{Mott densities of light clusters} Mott densities are the densities at which the in-medium binding energies of clusters, defined by $B_i=B_i^0+\Delta B_i$, vanish. The experimental Mott densities of the light clusters $d$, $t$, $h$ and $\alpha$ have been obtained by analyzing data from heavy-ion collisions~\cite{Hag12}.
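The defining derivatives in Eqs.~(\ref{eq:esym2})--(\ref{eq:ssym4}) and the parabolic differences in Eqs.~(\ref{eq:epara})--(\ref{eq:spara}) apply to any even function of $\delta$. The short Python sketch below (a generic illustration with a made-up polynomial, not output of the gNL-RMF model) extracts the second- and fourth-order coefficients by central finite differences and shows how the residual $E_{\rm para}-(E_{\rm sym}+E_{\rm sym,4})$ exposes the neglected higher-order terms:

```python
def sym_coeffs(E, h=0.05):
    """Quadratic and quartic coefficients of an even function E(delta),
    i.e. (1/2)E''(0) and (1/24)E''''(0), via central finite differences."""
    # five-point second derivative at delta = 0, O(h^4) accurate
    d2 = (-E(2*h) + 16*E(h) - 30*E(0.0) + 16*E(-h) - E(-2*h)) / (12*h*h)
    # central fourth derivative at delta = 0, O(h^2) accurate
    d4 = (E(2*h) - 4*E(h) + 6*E(0.0) - 4*E(-h) + E(-2*h)) / h**4
    return d2 / 2.0, d4 / 24.0

def parabolic(E):
    """Parabolic-law estimate: E(delta=1) - E(delta=0)."""
    return E(1.0) - E(0.0)

# toy even function playing the role of E_int(delta) at fixed (n, T);
# the coefficients are invented for illustration only
toy = lambda d: -16.0 + 32.0*d**2 + 1.5*d**4 + 0.2*d**6
```

Because evaluating the function at $\delta=1$ sums all expansion coefficients, `parabolic(toy)` exceeds the recovered quadratic-plus-quartic part by exactly the $\delta^6$ coefficient, which is the convergence diagnostic used in the discussion below.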
Using the medium-dependent binding energy shift $\Delta B_i$ parameterized by Eqs.~(\ref{eq:delbind}), (\ref{eq:thashift}) and (\ref{eq:dshift}) with the total baryon densities replaced by the pseudodensities defined by Eq.~(\ref{eq:pseudo}), one can calculate the light cluster Mott densities at each temperature. \begin{figure}[!hpbt] \includegraphics[scale=0.4, clip]{Mott.eps} \caption{The Mott density vs temperature in symmetric nuclear matter for light clusters $d$, $t$, $h$ and $\alpha$. The full symbols with error bars are experimental data from heavy-ion collisions by Hagel \textit{et al}.~\cite{Hag12} while the lines represent the predictions from the gNL-RMF model with original (panel (a)) and revised (panel (b)) parameters for the in-medium light cluster binding energy shifts.} \label{fig:Mott} \end{figure} \begin{figure}[!hpbt] \includegraphics[scale=0.42, clip]{Bind.eps} \caption{Binding energies of light clusters $d$, $t$, $h$ and $\alpha$ as functions of the total baryon density $n_B$ at $T=10$ MeV (a), $5$ MeV (b) and $3$ MeV (c). The results with the original~\cite{Typ10} and revised parameters (see Table~\ref{tab:2}) for the binding energy shifts are indicated by solid and dotted lines, respectively.} \label{fig:Bind} \end{figure} Using the original parameters of Ref.~\cite{Typ10} as listed in Table~\ref{tab:2}, we display in Fig.~\ref{fig:Mott}(a) the calculated Mott densities of the light clusters $d$, $t$, $h$ and $\alpha$. The corresponding experimental results from Ref.~\cite{Hag12} are also included for comparison. It is seen that the Mott densities increase with temperature. At a fixed temperature, the $\alpha$-particle has the largest Mott density, followed by the triton and helium-3, while the deuteron has the smallest. In addition, while the theoretical results agree well with the data for the triton and helium-3, they deviate significantly from the data for the $\alpha$-particle and the deuteron.
To fit the experimental data on the Mott densities of $d$ and $\alpha$, one can adjust the values of $a_{\alpha, 1}$ and $a_{d, 1}$ as well as $a_{\alpha, 2}$ and $a_{d, 2}$ in Eqs.~(\ref{eq:thashift}) and (\ref{eq:dshift}). In the present work, for simplicity, we only adjust the values of $a_{\alpha, 1}$ and $a_{d, 1}$. In particular, we find that changing the parameter $a_{\alpha, 1}$ from $164371$ to $137330$ and $a_{d, 1}$ from $38386.4$ to $76500$ nicely reproduces the experimental data on the Mott densities of $d$ and $\alpha$, as shown in Fig.~\ref{fig:Mott}(b). The revised values of $a_{\alpha, 1}$ and $a_{d, 1}$ are listed in parentheses in Table~\ref{tab:2}. Changing the parameters of the in-medium binding energy shifts is a simple and direct way to fit the experimental results. On the other hand, as Typel \textit{et al.} point out in Ref.~\cite{Typ10}, the parameters of the binding energy shifts are determined by low-density perturbation theory from the unperturbed cluster wave functions. To be more consistent, it would be better to modify the QS model parameters to obtain new mass shifts, but this is certainly beyond the scope of the present work. In addition, the experimental results may depend on some model assumptions and thus could contain systematic errors. In the present work, for simplicity, following Ref.~\cite{Typ10} we assume that the in-medium light cluster binding energies follow the formulation in Eq.~(\ref{eq:delbind}) but depend on pseudodensities, which may necessitate readjusting the parameters in Eqs.~(\ref{eq:thashift}) and (\ref{eq:dshift}). Shown in Fig.~\ref{fig:Bind} is the density dependence of the binding energies of the light clusters $d$, $t$, $h$ and $\alpha$ at temperatures $T=10$ MeV, $5$ MeV and $3$ MeV using the original and revised parameters listed in Table~\ref{tab:2}.
It is seen that, in general, at lower temperatures the cluster binding energies drop faster as the baryon density increases. Furthermore, the binding energies of the deuteron, triton and helium-3 drop faster with density when the revised parameters are used instead of the original ones, and this is also the case for the $\alpha$-particle at $T=3$ MeV. In Fig.~\ref{fig:Bind}(b), the two lines for the $\alpha$-particle almost overlap. As the temperature increases, the triton and helium-3 results with the revised parameters approach those with the original parameters, while the $\alpha$-particle binding energy drops more and more slowly with density. At $T=10$ MeV, the $\alpha$-particle binding energy with the revised parameters drops more slowly than that with the original parameters. We note that the triton and helium-3 results in both Fig.~\ref{fig:Mott} and Fig.~\ref{fig:Bind} are also slightly changed by the revised parameters $a_{\alpha, 1}$ and $a_{d, 1}$, because the baryon density in Eq.~(\ref{eq:delbind}) is replaced by the pseudodensity in Eq.~(\ref{eq:pseudo}). Using the revised and the original parameters shown in Table~\ref{tab:2} for the in-medium binding energy shifts of the light clusters allows us to explore the in-medium binding energy effects on the properties of low density nuclear matter with light clusters. In the following, we use the revised parameters for the in-medium binding energy shifts of the light clusters unless noted otherwise. \subsection{Compositions of low density nuclear matter with light clusters} Shown in Fig.~\ref{fig:Frac} are the number fractions of nucleons and light clusters as functions of the total baryon density for isospin symmetric nuclear matter and neutron-rich nuclear matter with $Y_p=n^{tot}_{p}/n_B=0.1$ at temperatures $T=3$ MeV and $10$ MeV with the FSU parameter set.
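Given the parameterized shift, each Mott density is simply the root of $B_i(n)=B_i^0+\Delta B_i(n,T)=0$ at fixed temperature. The sketch below illustrates only this root-finding step; for brevity it uses a schematic linear shift $\Delta B_i=-s_in$ with invented slopes $s_i$, not the actual parameterization of Eqs.~(\ref{eq:thashift}) and (\ref{eq:dshift}), so the numbers are qualitative:

```python
from scipy.optimize import brentq

# Vacuum binding energies B_i^0 (MeV) and *assumed* slopes s_i (MeV fm^3)
# of a schematic shift dB_i = -s_i * n; the actual shifts in the text are
# more involved and temperature dependent.
B0 = {"d": 2.22, "t": 8.48, "h": 7.72, "alpha": 28.30}
SLOPE = {"d": 2500.0, "t": 1400.0, "h": 1450.0, "alpha": 4000.0}

def mott_density(cluster):
    """Density (fm^-3) at which the in-medium binding energy vanishes."""
    in_medium_B = lambda n: B0[cluster] - SLOPE[cluster] * n
    return brentq(in_medium_B, 1e-8, 1.0)  # bracket the sign change
```

With slopes chosen in this range, the resulting ordering (deuteron smallest, $\alpha$ largest Mott density) reproduces the qualitative hierarchy seen in Fig.~\ref{fig:Mott}.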
It is seen that, for a fixed temperature and isospin asymmetry, the deuteron generally dissolves first while the $\alpha$ dissolves last as the baryon density increases. This is understandable since the deuteron has the smallest binding energy while the $\alpha$ has the largest. The dissolution densities of the triton and helium-3 lie between those of the deuteron and the $\alpha$, and their fractions are almost identical in isospin symmetric nuclear matter. \begin{figure}[!hpbt] \includegraphics[scale=0.34, clip]{Frac.eps} \caption{The number fraction of nucleons and light clusters $d$, $t$, $h$ and $\alpha$ as a function of the total baryon density $n_B$ with FSU parameter set for $T=10$ MeV and $Y_p=0.5$ (a), $T=10$ MeV and $Y_p=0.1$ (b), $T=3$ MeV and $Y_p=0.5$ (c), and $T=3$ MeV and $Y_p=0.1$ (d).} \label{fig:Frac} \end{figure} As the system becomes more neutron-rich, the fractions of $\alpha$, deuteron and helium-3 become lower owing to the lack of protons, while the fraction of triton becomes higher than that of helium-3, as shown in Figs.~\ref{fig:Frac}(b) and~\ref{fig:Frac}(d). As seen in Eq.~(\ref{eq:delbind}), the in-medium binding energies of the triton and helium-3 are isospin-dependent: for neutron-rich matter with $Y_p=0.1$, the binding energy of the triton decreases faster with increasing density while that of helium-3 drops more slowly. As a result, the triton dissolves earlier while helium-3 dissolves later with increasing density in neutron-rich matter, as shown in Figs.~\ref{fig:Frac}(b) and \ref{fig:Frac}(d). At the lower temperature of $T=3$ MeV, the $\alpha$-particle becomes the most abundant of the light clusters and dissolves last, as shown in Figs.~\ref{fig:Frac}(c) and \ref{fig:Frac}(d). The $\alpha$ fraction even exceeds the nucleon fraction in symmetric nuclear matter around $n \sim 0.001$ fm$^{-3}$ at $T=3$ MeV, and thus the matter becomes ``$\alpha$-matter''.
At lower temperatures, nucleons prefer to form $\alpha$-particles since the $\alpha$ has the largest binding energy. As the temperature increases, entropy becomes more and more important, as seen from the relation $F=E_{\mathrm{int}}-TS$, and for isospin symmetric nuclear matter the nucleons, instead of the $\alpha$-particles, become dominant at higher temperatures. On the other hand, at lower temperatures all clusters appear at lower densities and also dissolve at lower densities, since their binding energies decrease faster with increasing density at lower temperatures. The interaction-dependence of the cluster fractions is checked in both symmetric nuclear matter and neutron-rich matter with $Y_p=0.1$ at $T=3$ MeV and $10$ MeV, and the results are shown in Fig.~\ref{fig:FracRMF}. Four parameter sets, i.e., NL3, FSU, FSU-II and FSUGold5, are used for comparison. It is seen that the cluster fractions are almost identical in all cases, and thus they can reasonably be considered interaction-independent. This is mainly because the interactions among nucleons and clusters are relatively weak at the low densities considered here.
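The equilibrium composition follows from imposing Eq.~(\ref{eq:mueq}) together with baryon-number and charge conservation. As a rough illustration of this procedure (not the gNL-RMF calculation: meson mean fields and in-medium binding energy shifts are dropped, so it reduces to ideal-gas nuclear statistical equilibrium with vacuum binding energies), one can solve for $(\mu_n,\mu_p)$ at given $n_B$, $Y_p$ and $T$:

```python
import numpy as np
from scipy.optimize import fsolve

HBARC = 197.327              # MeV fm
MN, MP = 939.565, 938.272    # nucleon masses (MeV)
# species: (N, Z, degeneracy g, vacuum binding energy B^0 in MeV)
SPECIES = {"n": (1, 0, 2, 0.0),   "p": (0, 1, 2, 0.0),
           "d": (1, 1, 3, 2.225), "t": (2, 1, 2, 8.482),
           "h": (1, 2, 2, 7.718), "alpha": (2, 2, 1, 28.296)}

def densities(mu_n, mu_p, T):
    """Ideal-gas (Maxwell-Boltzmann) number densities in fm^-3, with
    cluster chemical potentials fixed by mu_i = N_i mu_n + Z_i mu_p."""
    out = {}
    for name, (N, Z, g, B) in SPECIES.items():
        m = N * MN + Z * MP - B
        mu = N * mu_n + Z * mu_p
        lam3 = (2.0 * np.pi * HBARC**2 / (m * T)) ** 1.5  # thermal volume
        out[name] = g / lam3 * np.exp((mu - m) / T)
    return out

def composition(nB, Yp, T):
    """Solve baryon-number and charge conservation for (mu_n, mu_p)."""
    def residual(eta):  # eta = (mu - m)/T for n and p; log residuals
        n = densities(MN + T * eta[0], MP + T * eta[1], T)
        nA = sum((N + Z) * n[s] for s, (N, Z, g, B) in SPECIES.items())
        nZ = sum(Z * n[s] for s, (N, Z, g, B) in SPECIES.items())
        return [np.log(nA / nB), np.log(nZ / (Yp * nB))]
    eta = fsolve(residual, [-3.0, -3.0])
    return densities(MN + T * eta[0], MP + T * eta[1], T)
```

Even this crude sketch reproduces the qualitative trend discussed above: at $n_B=10^{-3}$ fm$^{-3}$ and $Y_p=0.5$, the $\alpha$ dominates among the clusters at $T=3$ MeV, while at $T=10$ MeV its abundance is strongly suppressed in favor of free nucleons.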
\begin{figure}[!hpbt] \includegraphics[scale=0.42, clip]{FracRMF.eps} \caption{The number fraction of light clusters $d$, $t$, $h$ and $\alpha$ as a function of the total baryon density $n_B$ in nuclear matter with $Y_p=0.5$ and $Y_p=0.1$ at $T=3$ MeV and $10$ MeV using the NL-RMF parameter sets NL3, FSU, FSUGold5 and FSU-II.} \label{fig:FracRMF} \end{figure} \begin{figure}[!hpbt] \includegraphics[scale=0.42, clip]{FracBin.eps} \caption{The number fraction of light clusters $d$, $t$, $h$ and $\alpha$ as a function of the total baryon density $n_B$ in nuclear matter with $Y_p=0.5$ and $Y_p=0.1$ at $T=3$ MeV and $10$ MeV using the original~\cite{Typ10} and revised parameters (see Table~\ref{tab:2}) for the binding energy shifts.} \label{fig:FracBin} \end{figure} Since the cluster fractions are essentially interaction-independent, it is interesting to see how the in-medium binding energy shifts influence them. We calculate the cluster fractions with the original and revised parameters of the binding energy shifts in various cases with the FSU parameter set, and the results are shown in Fig.~\ref{fig:FracBin}. It is seen that, with increasing density, the $\alpha$-particle dissolves a little later and the deuteron dissolves much earlier in all cases when the revised parameters are used instead of the original ones. The difference between the triton and helium-3 results is small in symmetric nuclear matter but becomes large in neutron-rich matter. Therefore, our results indicate that the cluster fractions are essentially determined by the density- and temperature-dependence of the in-medium cluster binding energies. Furthermore, our calculations show that the light cluster fractions become important at low densities around $0.001$ fm$^{-3}$, especially at lower temperatures. In the density region of $n \gtrsim 0.02$ fm$^{-3}$, the fractions of light clusters in nuclear matter become insignificant and the matter is dominated by nucleons.
\subsection{Clustering effects on symmetry energy, symmetry free energy and symmetry entropy} \begin{figure}[!hpbt] \includegraphics[scale=0.44, clip]{EsymRMF.eps} \caption{Density dependence of $E_{\rm{sym}}$ (left panels) and $E_{\rm{sym, 4}}$ (right panels) at $T=3$ MeV, $5$ MeV and $10$ MeV in the gNL-RMF model with FSU, FSU-II and FSUGold5. For comparison, the results for $E_{\rm{sym}}$ in the NL-RMF model without considering clusters are also included (thin lines).} \label{fig:EsymRMF} \end{figure} \begin{figure}[!hpbt] \includegraphics[scale=0.44, clip]{EsymBin.eps} \caption{Density dependence of $E_{\rm{sym}}$ (left panels) and $E_{\rm{sym, 4}}$ (right panels) at $T=3$ MeV, $5$ MeV and $10$ MeV in the gNL-RMF model using the original~\cite{Typ10} and revised parameters (see Table~\ref{tab:2}) for the binding energy shifts. For comparison, the results for $E_{\rm{sym}}$ in the NL-RMF model without considering clusters are also included (dotted lines).} \label{fig:EsymBin} \end{figure} It is interesting to check the interaction-dependence of $E_{\rm{sym}}$ and $E_{\rm{sym, 4}}$ by using FSU, FSU-II and FSUGold5, which predict essentially the same isoscalar properties but different density dependences of the symmetry energy through different values of the $\omega$-$\rho$ coupling $\Lambda_v$ and the $\rho$ meson coupling $g_{\rho}$ as given in Table~\ref{tab:1}. Shown in Fig.~\ref{fig:EsymRMF} are $E_{\rm{sym}}$ and $E_{\rm{sym, 4}}$ as functions of the total baryon density at $T=3$ MeV, $T=5$ MeV and $T=10$ MeV with FSU, FSU-II and FSUGold5. In all cases, the values of $E_{\rm{sym}}$ are almost identical at very low densities, and differences begin to appear around $n_B \sim 0.003$ fm$^{-3}$ where the clustering effects become significant. At the higher density of $n_c=0.10$ fm$^{-3}$, the $E_{\rm{sym}}$ curves with different interactions meet again, because the symmetry energy values at $0.1$ fm$^{-3}$ are fixed for FSU, FSU-II and FSUGold5.
On the other hand, $E_{\rm{sym, 4}}$ generally displays very small differences for different interactions. It is interesting to see that $E_{\rm{sym, 4}}$ becomes significantly negative at low densities and lower temperatures as shown in Figs.~\ref{fig:EsymRMF}(d) and \ref{fig:EsymRMF}(e), which breaks the empirical parabolic law of the symmetry energy, as discussed in detail later. Therefore, our results suggest that the interaction-dependence of $E_{\rm{sym}}$ and $E_{\rm{sym, 4}}$ at low densities is insignificant. To investigate how the clustering effects influence $E_{\rm{sym}}$, the corresponding results without considering clusters are also shown in Fig.~\ref{fig:EsymRMF} for comparison. It is clearly seen that the clustering effects significantly enhance $E_{\rm{sym}}$ at low densities in all cases, and the enhancement grows with decreasing temperature. On the other hand, the clustering effects disappear above about $0.01$ fm$^{-3}$ for $T=3$ MeV and above about $0.02$ fm$^{-3}$ for $T=10$ MeV, where the fraction of the $\alpha$-particle begins to decrease as shown in Fig.~\ref{fig:FracRMF}. Similarly, the clustering effects also significantly enhance $E_{\rm{sym,4}}$ at low densities in all cases (the $E_{\rm{sym,4}}$ without considering clusters is smaller than $0.5$ MeV~\cite{Cai12} in the density region considered in Fig.~\ref{fig:EsymRMF} and is not shown here), especially at lower temperatures. Shown in Fig.~\ref{fig:EsymBin} are $E_{\rm{sym}}$ and $E_{\rm{sym, 4}}$ as functions of density using the FSU parameter set with the original and revised parameters for the in-medium binding energy shifts at $T=3$ MeV, $5$ MeV and $10$ MeV. One sees that the clustering effects with the revised parameters are stronger than those with the original parameters.
Overall, the influence of the different in-medium cluster binding energy shifts is more significant than that of the different interactions shown in Fig.~\ref{fig:EsymRMF}. As can be seen by comparing $E_{\rm{sym}}$ with the result without clusters, shown by dotted lines in Fig.~\ref{fig:EsymBin}, the density at which the clustering effects become negligible is essentially independent of the cluster binding energy shifts. To explore the clustering effects on the symmetry energy more clearly, we show in Fig.~\ref{fig:Esym} the $E_{\rm{para}}$ obtained from Eq.~(\ref{eq:epara}) as a function of the total baryon density at $T=3$ MeV, $5$ MeV and $10$ MeV using FSU. For comparison, the results for $E_{\rm{sym}}$ and $E_{\rm{sym, 4}}$ obtained from Eqs.~(\ref{eq:esym2}) and (\ref{eq:esym4}) are also presented. One sees that the absolute values of $E_{\rm{sym, 4}}$ at low densities are relatively small at higher temperatures, as shown in Fig.~\ref{fig:Esym}(c), and grow larger and larger with decreasing temperature. Focusing on Fig.~\ref{fig:Esym}(a) for $T=3$ MeV, one can see a bulge in $E_{\rm{sym}}$ and a valley in $E_{\rm{sym, 4}}$ at low densities where the light clusters are dominant. At low temperatures, the $\alpha$-particle is dominant and its large binding energy plays an important role in changing the symmetry energy. As the temperature increases, the nucleons become dominant and the entropy becomes important, so the relatively small binding energies of the clusters can hardly affect the symmetry energy. As a result, the clustering effects on the symmetry energy become weaker at higher temperatures. \begin{figure}[!hpbt] \includegraphics[scale=0.34, clip]{Esym.eps} \caption{Density dependence of $E_{\rm{sym}}$, $E_{\rm{sym, 4}}$ and $E_{\rm{para}}$ in the gNL-RMF model with FSU at $T=3$ MeV (a), $5$ MeV (b) and $10$ MeV (c).
The results of $E_{\rm{sym}}$ in the NL-RMF model without considering clusters are also included for comparison.} \label{fig:Esym} \end{figure} From Fig.~\ref{fig:Esym}, one can see that the difference between $E_{\rm{sym}}$ and $E_{\rm{para}}$ is much smaller than the absolute value of $E_{\rm{sym, 4}}$ at low densities and lower temperatures. Since setting $\delta=1$ in Eq.~(\ref{eq:esym}) gives $E_{\rm{para}}=E_{\rm{sym}}+E_{\rm{sym, 4}}+$ higher-order terms, such a mismatch implies that the higher-order terms are non-negligible; the expansion of the internal energy per baryon in powers of the isospin asymmetry thus converges poorly, and the parabolic law for the isospin asymmetry expansion of the nuclear matter EOS is invalid for nuclear matter including light clusters at low temperature, consistent with the conclusion of Ref.~\cite{Typ10}. Furthermore, the results of $E_{\rm{sym}}$ without considering clusters are shown by dash-dotted lines in Fig.~\ref{fig:Esym}, and the same conclusion is obtained as from Fig.~\ref{fig:EsymRMF}. Note that $E_{\rm{sym}}$ is much closer to $E_{\rm{para}}$ than to the result without considering clusters at higher temperatures, which means that the parabolic law approximates $E_{\rm{sym}}$ well at higher temperatures, although the clustering effects still slightly affect the symmetry energy. In analogy with the symmetry energy, the results for the symmetry free energy and the symmetry entropy are shown in Fig.~\ref{fig:Fsym} and Fig.~\ref{fig:Ssym}, respectively. $F_{\rm{sym, 4}}$ and $S_{\rm{sym, 4}}$ are quite large at lower densities and lower temperatures, and become smaller at higher temperatures. Focusing on Fig.~\ref{fig:Fsym}(a), one sees that the magnitudes of the bulge in $F_{\rm{sym}}$ and the valley in $F_{\rm{sym, 4}}$ at low densities are smaller than those of $E_{\rm{sym}}$ and $E_{\rm{sym, 4}}$ shown in Fig.~\ref{fig:Esym}(a), which means that the parabolic law for the free energy is less strongly broken by the clustering effects than that for the internal energy. As the temperature increases, the clustering effects become weaker and weaker.
\begin{figure}[!hpbt] \includegraphics[scale=0.34, clip]{Fsym.eps} \caption{Same as Fig.~\ref{fig:Esym}, but for the symmetry free energy.} \label{fig:Fsym} \end{figure} From Fig.~\ref{fig:Ssym}(a), one sees that the clustering effects on the symmetry entropy are significant at $T=3$ MeV, similar to the case of the symmetry energy shown in Fig.~\ref{fig:Esym}(a). According to $F=E_{\mathrm{int}}-TS$, the relatively weaker clustering effects on the symmetry free energy observed in Fig.~\ref{fig:Fsym}(a) are thus mainly due to the significant cancellation between the symmetry entropy and the symmetry energy. The results of $F_{\rm{sym}}$ and $S_{\rm{sym}}$ without considering clusters are also shown in Fig.~\ref{fig:Fsym} and Fig.~\ref{fig:Ssym}, and the conclusions obtained from these figures are very similar to those from $E_{\rm{sym}}$, namely, the clustering effects exist at lower densities and lower temperatures, and disappear at higher densities regardless of the temperature. \begin{figure}[!hpbt] \includegraphics[scale=0.34, clip]{Ssym.eps} \caption{Same as Fig.~\ref{fig:Esym}, but for the symmetry entropy.} \label{fig:Ssym} \end{figure} To illustrate intuitively how the clustering effects break the parabolic law for the isospin asymmetry expansion of $E_{\rm{int}}$, $F$ and $S$, we present in Fig.~\ref{fig:Delta} the $E_{\rm{int}}$, $F$ and $S$ as functions of the squared isospin asymmetry $\delta^2=\left(1-2Y_p\right)^2$ at the total baryon density $0.002$ fm$^{-3}$ (where the clustering effects are relatively strong, as seen from the previous figures) and $T=3$ MeV, $5$ MeV and $10$ MeV. It is seen that there are two symmetric branches as functions of $\delta^2$, with the left branch corresponding to results from proton-rich matter calculations and the right branch to results from neutron-rich matter calculations.
The near-perfect symmetry of the two branches with respect to $\delta^2$ reflects the fact that the isospin symmetry breaking, caused by the small difference between the binding energies of the triton and helium-3, is very weak. \begin{figure}[!hpbt] \includegraphics[scale=0.32, clip]{Delta.eps} \caption{$E_{\rm{int}}$ (a), $F$ (b), and $S$ (c) vs the squared isospin asymmetry $\delta^2=\left(1-2Y_p\right)^2$ in the gNL-RMF model with the FSU parameter set at $n_B=0.002$ fm$^{-3}$ and $T=3$ MeV, $5$ MeV and $10$ MeV.} \label{fig:Delta} \end{figure} \begin{figure*}[!hpbt] \includegraphics[scale=0.5, clip]{Data.eps} \caption{The gNL-RMF model predictions for the density dependence of the symmetry energy (a--e) and symmetry free energy (f--j) with FSU at different temperature intervals. The experimental data from Refs.~\cite{Wad12} and~\cite{Nat10} as well as the results in the NL-RMF model without considering clusters are also included for comparison.} \label{fig:Data} \end{figure*} Focusing on Fig.~\ref{fig:Delta}(a), one can see that the internal energy per baryon $E_{\rm{int}}$ increases as the temperature increases. At a fixed temperature, symmetric nuclear matter has the minimum internal energy per baryon. Moreover, for each branch, one can see a nice linear relationship between $E_{\rm{int}}$ and $\delta^2$ at the higher temperature $T=10$ MeV, while this linear relationship is broken at the lower temperature $T=3$ MeV. These features clearly indicate that, at the total baryon density $0.002$ fm$^{-3}$, the parabolic approximation for the isospin asymmetry expansion of $E_{\rm{int}}$ is broken at lower temperatures. On the other hand, it is seen from Fig.~\ref{fig:Delta}(b) that the free energy per baryon $F$ decreases as the temperature increases, and for a fixed temperature it also reaches its minimum in symmetric nuclear matter.
Compared with the results shown in Fig.~\ref{fig:Delta}(a) for $E_{\rm{int}}$, the linear relationship between $F$ and $\delta^2$ is much less strongly broken, and thus the parabolic law is approximately satisfied. This feature is consistent with the conclusion obtained from Fig.~\ref{fig:Fsym}. As for the entropy, one can see from Fig.~\ref{fig:Delta}(c) that at $T=10$ MeV the clustering effects are not important and the $S$ reaches its maximum in symmetric nuclear matter and its minimum in pure neutron (proton) matter, as expected. It is interesting to see that at lower temperatures, e.g., $T=3$ MeV where the clustering effects become important, the $S$ reaches its minimum in symmetric nuclear matter and displays a complicated dependence on $\delta^2$. These features thus show that the clustering effects drastically influence the entropy per baryon of low density nuclear matter with light clusters at lower temperatures. The above results demonstrate that the clustering effects play a significant role in the thermodynamic properties of low density nuclear matter, especially at lower temperatures. For low density nuclear matter at lower temperatures, the $4$th-order symmetry energy, the $4$th-order symmetry free energy and the $4$th-order symmetry entropy are found to be significant, and the isospin asymmetry expansion for these nuclear matter properties converges poorly, indicating that the empirical parabolic law is invalid in this case. \subsection{Comparison with data on symmetry energy and symmetry free energy} Experimentally, the symmetry energy and the symmetry free energy at low densities and finite temperatures of $T \approx 3$-$8$ MeV have been extracted by analyzing the isoscaling behaviors of fragment production in heavy-ion collisions~\cite{Nat10,Wad12}.
Shown in the upper (lower) panels of Fig.~\ref{fig:Data} are the predicted symmetry (free) energy $E_{\rm{sym}}$ ($F_{\rm{sym}}$) as a function of baryon density for five temperature intervals, namely, $T=3$-$4$ MeV, $T=4$-$5$ MeV, $T=5$-$6$ MeV, $T=6$-$7$ MeV and $T=7$-$8$ MeV, using the FSU parameter set. For comparison, Fig.~\ref{fig:Data} also includes the corresponding theoretical results ($E_{\rm{para}}$ and $F_{\rm{para}}$) from the parabolic approximation (i.e., Eq.~(\ref{eq:epara}) and Eq.~(\ref{eq:fpara})) and the corresponding experimental data~\cite{Nat10,Wad12}. In addition, the corresponding results for $E_{\rm{sym}}$ and $F_{\rm{sym}}$ without considering clusters are also included. Firstly, it is seen from Fig.~\ref{fig:Data} that when the baryon density is larger than about $0.02$ fm$^{-3}$, there are essentially no clustering effects on the symmetry (free) energy and the data are nicely reproduced by the theoretical calculations (see, e.g., Figs.~\ref{fig:Data}(d), (e), (i) and (j), where data around $0.02$ fm$^{-3}$ are available). When the baryon density is below about $0.02$ fm$^{-3}$, the clustering effects become more and more important, especially for the symmetry energy at lower temperatures. These features are consistent with the results presented and discussed earlier. From the upper panels of Fig.~\ref{fig:Data}, one can see that the theoretical predictions for $E_{\rm{sym}}$ and $E_{\rm{para}}$ are quite similar at higher temperatures (i.e., $T\gtrsim 6$ MeV), but their difference becomes more and more significant as the temperature decreases. Moreover, the experimental data on the symmetry energy are reasonably described by the theoretical calculations that include the light clusters, especially by the predictions of $E_{\rm{sym}}$. On the other hand, the theoretical predictions of $E_{\rm{sym}}$ without considering light clusters significantly underestimate the experimental data.
For the symmetry free energy, the theoretical predictions for $F_{\rm{sym}}$ and $F_{\rm{para}}$ are quite similar in all the cases considered here, as shown in the lower panels of Fig.~\ref{fig:Data}. The experimental data on the symmetry free energy are reasonably described by the theoretical calculations that include the light clusters, especially by the predictions of $F_{\rm{para}}$. As in the case of the symmetry energy, the theoretical predictions of $F_{\rm{sym}}$ without considering light clusters significantly underestimate the experimental data. Based on the above discussions, we conclude that our theoretical calculations including light clusters can reasonably reproduce the general behaviors of the symmetry energy and symmetry free energy extracted from experiments. These results suggest that the clustering effects play a very important role in describing the thermodynamic properties of low density nuclear matter with density below about $0.02$ fm$^{-3}$, especially at lower temperatures. \section{Conclusion} \label{Summary} In the present work, using the generalized nonlinear relativistic mean-field (gNL-RMF) model, we have systematically explored the thermodynamic properties of homogeneous nuclear matter with light clusters at low densities and finite temperatures. In the gNL-RMF model, light clusters up to $\alpha$ ($1 \le A \le 4$) are included as explicit degrees of freedom and treated as point-like particles, the interactions among the various particles are described by meson exchanges, and the in-medium effects on the cluster binding energies are taken into account through density- and temperature-dependent energy shifts, with the parameters obtained by fitting the experimental Mott densities of the clusters extracted from heavy-ion collisions around Fermi energies.
Firstly, we have found that the composition of low density nuclear matter with light clusters is essentially determined by the density- and temperature-dependence of the in-medium cluster binding energies, while the interactions among the various particles play a minor role. In particular, our results indicate that the light cluster fractions become significant at low densities around $0.001$ fm$^{-3}$, especially at lower temperatures. On the other hand, in the density region of $n \gtrsim 0.02$ fm$^{-3}$, the fractions of light clusters in nuclear matter become insignificant and the nuclear matter is dominated overwhelmingly by nucleons. Secondly, for nuclear matter at low densities ($n \sim 10^{-3}$ fm$^{-3}$) and low temperatures ($T \lesssim 3$ MeV), compared with the values of the conventional (second-order) symmetry energy, symmetry free energy and symmetry entropy, we have demonstrated that the $4$th-order symmetry energy, the $4$th-order symmetry free energy and the $4$th-order symmetry entropy are significant, and the conventional isospin asymmetry expansion for these nuclear matter properties converges poorly. These features imply that the empirical parabolic law is invalid and the concept of the conventional (second-order) symmetry energy becomes meaningless for the description of the EOS of nuclear matter at these low densities and low temperatures. Therefore, to describe the EOS of nuclear matter with light clusters at low densities ($n \sim 10^{-3}$ fm$^{-3}$) and low temperatures ($T \lesssim 3$ MeV), full calculations (without isospin asymmetry expansion) are necessary. Finally, we have compared the gNL-RMF model predictions of the symmetry energy and symmetry free energy at low densities and finite temperatures with the corresponding experimental data extracted from heavy-ion collisions.
We have found that our theoretical calculations including light clusters can reasonably reproduce the general behaviors of the symmetry energy and symmetry free energy extracted from experiments. Moreover, our results indicate that clustering effects are negligible for nuclear matter with density above about $0.02$ fm$^{-3}$, but they play a very important role in describing the symmetry energy and symmetry free energy of low-density nuclear matter with density below about $0.02$ fm$^{-3}$, especially at lower temperatures.

The present work has focused on an ideal infinite homogeneous nuclear matter system with $6$ components, namely neutrons, protons and the light clusters deuteron, triton, helium-3 and $\alpha$-particle, under thermal and chemical equilibrium and without Coulomb interactions. For a more realistic system, one should additionally include heavier nuclei, and the nucleons, light clusters and heavy nuclei can interact with each other via meson exchanges. In addition, the Coulomb interaction should be considered for charged particles. Furthermore, the system may include electrons under the conditions of electric charge neutrality and chemical equilibrium. These studies are in progress and will be reported elsewhere.

\begin{acknowledgments}
This work was supported in part by the Major State Basic Research Development Program (973 Program) in China under Contract Nos. 2013CB834405 and 2015CB856904, the National Natural Science Foundation of China under Grant Nos. 11625521, 11275125 and 11135011, the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, the Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education, China, and the Science and Technology Commission of Shanghai Municipality (11DZ2260700).
\end{acknowledgments}
# NAG Library Chapter Contents

## F05 (orthog) Orthogonalization

F05 (orthog) Chapter Introduction – a description of the Chapter and an overview of the algorithms available

| Routine Name | Mark of Introduction | Purpose |
|---|---|---|
| f05aaf (Example Text, Example Data) | 5 | nagf_orthog_real_gram_schmidt: Gram–Schmidt orthogonalization of $n$ vectors of order $m$ |

Source: https://www.nag.com/numeric/nl/nagdoc_26.2/nagdoc_fl26.2/html/f05/f05conts.html
from __future__ import absolute_import, division, print_function
__metaclass__ = type

ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}

DOCUMENTATION = '''
---
module: tower_label
author: "Wayne Witzel III (@wwitzel3)"
version_added: "2.3"
short_description: create, update, or destroy Ansible Tower label.
description:
    - Create, update, or destroy Ansible Tower labels. See
      U(https://www.ansible.com/tower) for an overview.
options:
    name:
      description:
        - Name to use for the label.
      required: True
      default: null
    organization:
      description:
        - Organization the label should be applied to.
      required: True
      default: null
    state:
      description:
        - Desired state of the resource.
      required: False
      default: "present"
      choices: ["present", "absent"]
    tower_host:
      description:
        - URL to your Tower instance.
      required: False
      default: null
    tower_username:
      description:
        - Username for your Tower instance.
      required: False
      default: null
    tower_password:
      description:
        - Password for your Tower instance.
      required: False
      default: null
    tower_verify_ssl:
      description:
        - Dis/allow insecure connections to Tower. If C(no), SSL certificates
          will not be validated. This should only be used on personally
          controlled sites using self-signed certificates.
      required: False
      default: True
    tower_config_file:
      description:
        - Path to the Tower config file. See notes.
      required: False
      default: null
requirements:
  - "python >= 2.6"
  - "ansible-tower-cli >= 3.0.3"
notes:
  - If no I(config_file) is provided we will attempt to use the tower-cli
    library defaults to find your Tower host information.
  - I(config_file) should contain Tower configuration in the following format
      host=hostname
      username=username
      password=password
'''

EXAMPLES = '''
- name: Add label to tower organization
  tower_label:
    name: Custom Label
    organization: My Organization
    state: present
    tower_config_file: "~/tower_cli.cfg"
'''

try:
    import tower_cli
    import tower_cli.utils.exceptions as exc
    from tower_cli.conf import settings
    from ansible.module_utils.ansible_tower import tower_auth_config, tower_check_mode
    HAS_TOWER_CLI = True
except ImportError:
    HAS_TOWER_CLI = False


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(required=True),
            organization=dict(required=True),
            tower_host=dict(),
            tower_username=dict(),
            tower_password=dict(no_log=True),
            tower_verify_ssl=dict(type='bool', default=True),
            tower_config_file=dict(type='path'),
            state=dict(choices=['present', 'absent'], default='present'),
        ),
        supports_check_mode=True
    )

    if not HAS_TOWER_CLI:
        module.fail_json(msg='ansible-tower-cli required for this module')

    name = module.params.get('name')
    organization = module.params.get('organization')
    state = module.params.get('state')

    json_output = {'label': name, 'state': state}

    tower_auth = tower_auth_config(module)
    with settings.runtime_values(**tower_auth):
        tower_check_mode(module)
        label = tower_cli.get_resource('label')

        try:
            org_res = tower_cli.get_resource('organization')
            org = org_res.get(name=organization)

            if state == 'present':
                result = label.modify(name=name, organization=org['id'],
                                      create_on_missing=True)
                json_output['id'] = result['id']
            elif state == 'absent':
                result = label.delete(name=name, organization=org['id'])
        except (exc.NotFound) as excinfo:
            module.fail_json(msg='Failed to update label, organization not found: {0}'.format(excinfo), changed=False)
        except (exc.ConnectionError, exc.BadRequest, exc.NotFound) as excinfo:
            module.fail_json(msg='Failed to update label: {0}'.format(excinfo), changed=False)

    json_output['changed'] = result['changed']
    module.exit_json(**json_output)


from ansible.module_utils.basic import AnsibleModule

if __name__ == '__main__':
    main()
. E:\Documents\WindowsPowerShell\T-Mon\T-Mon_Configuration.ps1

<#
.SYNOPSIS
    Availability monitoring workflow

.DESCRIPTION
    T-Mon Availability Monitoring
    Checks specified asset(s) to see if they are up or down and updates a SQL
    table with the information.

.PARAMETER Assets
    This parameter allows you to specify a string or array of strings to run
    the checks against. You could pull from a SQL asset table or csv and then
    feed into the workflow.

.PARAMETER SQLInstance
    Parameter that allows the user to specify what sql instance to use for the
    connection.

.PARAMETER SQLDatabase
    Allows the user to specify the database on a specified instance to import
    the results to. Invoke-Sqlcmd uses the permissions of the user who executed
    the command to connect to the database.

.EXAMPLE
    The assets parameter accepts a string or array of strings
    Check-HostStatus -Assets $Assets -SQLInstance $SQLInstance -SQLDatabase $SQLDatabase

.NOTES
    A workflow was used to take advantage of foreach -parallel loops. Allows
    processing of several assets at once instead of one by one with a standard
    foreach loop.
#>
Workflow Check-HostStatus
{
    param (
        [String[]]$Assets,
        [String]$SQLInstance,
        [String]$SQLDatabase
    )

    foreach -parallel ($Asset in $Assets)
    {
        if (Test-Connection -ComputerName $Asset -Count 2 -ErrorAction SilentlyContinue)
        {
            Invoke-Sqlcmd -ServerInstance $SQLInstance -Database $SQLDatabase -Query "UPDATE assetList SET Status = '1' WHERE IP_Address = '$Asset'"
        }
        else
        {
            Invoke-Sqlcmd -ServerInstance $SQLInstance -Database $SQLDatabase -Query "UPDATE assetList SET Status = '0' WHERE IP_Address = '$Asset'"
        }
    }
}

Check-HostStatus -Assets $Assets -SQLInstance $SQLInstance -SQLDatabase $SQLDatabase
Q: Topology textbook for Functional Analysis Can someone please recommend me an introductory topology textbook written with the functional analysis student in mind? So a book that covers the topology prerequisites a functional analysis student ought to know.
Q: Seaborn how to add number of samples per HUE in sns.catplot

I have a catplot drawn using:

    s = sns.catplot(x="type", y="val", hue="Condition", kind='box', data=df)

However, the size of "Condition" per hue is not equal: the blue has n=8 samples, and the green has n=11 samples. What is the best way to add this info to the graph?

A: This is essentially the same solution as an earlier answer of mine, which I simplified a bit since:

    import numpy as np
    import pandas as pd
    import seaborn as sns

    df = sns.load_dataset('tips')
    x_col = 'day'
    y_col = 'total_bill'
    order = ['Thur', 'Fri', 'Sat', 'Sun']
    hue_col = 'smoker'
    hue_order = ['Yes', 'No']
    width = 0.8

    g = sns.catplot(kind="box", x=x_col, y=y_col, order=order,
                    hue=hue_col, hue_order=hue_order, data=df)
    ax = g.axes[0, 0]

    # get the offsets used by boxplot when hue-nesting is used
    # https://github.com/mwaskom/seaborn/blob/c73055b2a9d9830c6fbbace07127c370389d04dd/seaborn/categorical.py#L367
    n_levels = len(df[hue_col].unique())
    each_width = width / n_levels
    offsets = np.linspace(0, width - each_width, n_levels)
    offsets -= offsets.mean()
    pos = [x + o for x in np.arange(len(order)) for o in offsets]

    # sample count and median per (category, hue) pair, in plotting order
    counts = df.groupby([x_col, hue_col])[y_col].size()
    counts = counts.reindex(pd.MultiIndex.from_product([order, hue_order]))
    medians = df.groupby([x_col, hue_col])[y_col].median()
    medians = medians.reindex(pd.MultiIndex.from_product([order, hue_order]))

    # annotate each box with its sample size, just above the median line
    for p, n, m in zip(pos, counts, medians):
        if not np.isnan(m):
            ax.annotate('N={:.0f}'.format(n), xy=(p, m), xycoords='data',
                        ha='center', va='bottom')
# SEssCalcET2 10.5.040 (0/1 points, Previous Answers)

Find an equation for the plane consisting of all points that are equidistant from the points (−6, 1, 1) and (2, 3, 5).

# SEssCalcET2 10.5.503.XP (0/1 points)

Find an equation of the plane through the points (3, −1, 2), (8, 4, 5), and (−1, −2, −3).
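For the first exercise above (the plane equidistant from $(-6,1,1)$ and $(2,3,5)$), a standard derivation runs as follows — this is my own working, not the page's recorded answer. The plane passes through the midpoint of the two points and is normal to the segment joining them:

```latex
M = \left(\frac{-6+2}{2},\; \frac{1+3}{2},\; \frac{1+5}{2}\right) = (-2,\, 2,\, 3),
\qquad
\vec{n} = (2,3,5) - (-6,1,1) = (8,\, 2,\, 4),
\]
\[
8(x+2) + 2(y-2) + 4(z-3) = 0
\quad\Longrightarrow\quad
4x + y + 2z = 0.
```

As a check, substituting the two given points into $4x+y+2z$ gives $-21$ and $+21$ respectively, so they lie at equal distances $21/\sqrt{21}$ on opposite sides of the plane.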
# Two-stage $k$-means clustering

The problem I am facing is a clustering problem, needed for a Vehicle Routing Problem (VRP) I'm tackling. It is a heterogeneous VRP with Time Windows (TW) and a capacity utilization constraint, i.e. a truck can be routed only if its loading factor is more than 80%.

We have a set of customers dispersed on the map. Each customer has placed an order of a certain volume, varying from 1,000 to 36,000 litres of a petroleum product.

I need to cluster these customers in order to route them. Right now I am using the $k$-means algorithm, and to find the number of clusters I take the integer value of
$$\frac{\text{Sum Of Unrouted Orders}}{\text{Capacity Of Biggest Idle Vehicle}}.$$

Unfortunately, this method is kind of faulty, because of the following problems:

1) A cluster may be very small because the algorithm MUST create a certain number of clusters. In this case the customers of this small cluster will not be routed, due to the capacity utilization constraint.

2) Clusters with customers that are far away from the others are created, in order to reach the target volume of the cluster (close to the vehicle's capacity).

So my question is the following:

a) Do you know any method of finding the optimal number of clusters, besides the elbow and silhouette methods? The clustering part runs several times, and I cannot spend time picking the number of clusters in each iteration.

b) Do you know a variation of the $k$-means algorithm that takes into consideration the volumes of the orders?

Edit: Some further research led me to the capacitated clustering problem, which seems to be a perfect fit for what I'm looking for. As I was reading the work by Marcos Negreiros and Augusto Palhano, "The capacitated centred clustering problem", I realised that the suggested approach was similar to what I have implemented. My implementation is the following clustering algorithm:

1. Initialize k centers (random points from the dataset which are scattered on the map).
2. For each center, perform a range search around it with radius 1, 2, 4, … and collect points into the cluster until its total capacity is ~C/2.
3. Update centers using the median per cluster.
4. Assignment: for each point P that does not belong to any cluster:
   I. Sort centers by distance to P.
   II. Assign P to the nearest cluster with availability in capacity.
5. Update each center with the cluster's median.
6. Repeat steps 2-5 until the algorithm converges, i.e. the centers do not change much in step 5.

But some of the results were a disappointment, along the run, as:

1) Many customers were left unrouted (a cluster didn't fit perfectly in a vehicle, so a cluster could leave unrouted customers even though its volume was close to the vehicle's capacity).

2) Clusters created after the creation of some routes were combining customers very far from each other, as these customers were left off when earlier clusters were routed.

Comments:

• Are you only interested in $k$-means or are you ok with other clustering algorithms too? – EhsanK, Nov 4, 2019
• At this moment I have implemented $k$-means, so I would be interested in $k$-means. What are you thinking? – Nov 4, 2019
• Since you run the clustering several times and, presumably, you have a different optimal (I'm using this word loosely) number of clusters each run, why not try another clustering algorithm where you don't need to provide the number of clusters in advance? Since you're talking about customers on the map, assuming you have the lat/lon location of those customers, use lat/lon as your features for clustering with an algorithm like DBSCAN. – EhsanK, Nov 4, 2019
• You might want to look into balanced clustering. In the classical version of the problem you try to balance cluster sizes, but I think this could easily be generalised to balancing the sum of the demands within each cluster. – Nov 4, 2019
• We have an open source balanced clustering library here: github.com/PGWelch/territorium — see the unit tests for examples of how to use it. – Nov 5, 2019

Answer:

Two-stage k-means is discussed in:

• "Balanced K-Means Algorithm for Partitioning Areas in Large-Scale Vehicle Routing Problem" (Dec 2009), by Ruhan He, Weibin Xu, Jiaxia Sun, and Bingqiao Zu
• "Solving the Heterogeneous Capacitated Vehicle Routing Problem using K-Means Clustering and Valid Inequalities" (Apr 2017), by Noha A. Mostafa and Amr Eltawil

The second paper presents a rather simple solution on page 6: simply assign each truck by k-means, and where one truck has more customers than another, calculate the customers' distance from the centroid and move the nearest customers to the less full truck, thus balancing the load (or weight / delivery time / packages, etc.).

"In that way, it is possible to find the customers on the borders of different clusters and transfer them to the cluster with fewer customers, so that the clusters are balanced in terms of the number of customers in each cluster; the difference in the number of customers between any two clusters has a threshold θ. After performing the clustering, the MIP model presented in section 3.1 is solved for clusters instead of customers to assign vehicles to clusters."

• "Modeling and Solving the Clustered Capacitated Vehicle Routing Problem" (Feb 2013), by Christopher Expósito Izquierdo, André Rossi, and Marc Sevaux

This next paper explains how to divide a large problem into sub-problems:

"Conclusions and Further Research: This work introduces the Clustered Capacitated Vehicle Routing Problem (CCVRP), a new logistic problem for parcel delivery and courier services companies where the demand of a large number of customers organized in clusters has to be fulfilled. This problem presents the clustering constraints, in such a way that the delivery trucks have to serve all the customers belonging to the same cluster in a row.

An approximate two-level solution approach is proposed with the goal of solving the CCVRP. It is based on a decomposition of the CCVRP into two general subproblems. The first one defines the number and composition of the routes aimed at serving the clusters, and the latter determines the visiting order of the customers within each cluster. This approach allows the use of specific optimization techniques for both subproblems. For this purpose, several methods have been proposed.

The computational experiments have allowed checking that using the adaptation of the Lin-Kernighan heuristic for the LRP is highly competitive in a wide range of scenarios. Similarly, exact methods require large computational times in order to obtain high-quality solutions for the CCVRP and, therefore, they can be dismissed in real environments."

By being able to distribute the work equally between the trucks and also divide the complexity evenly (or at least ease solving) between the parts of the solution, one obtains workload balance of both the vehicles and the solver.

Another point is that simply filling the "biggest idle vehicle" to ~80% isn't efficient. Vehicles should be filled with the fewest orders (delivery points) so the vehicle is mostly full for the longest period of time. For example, if a large vehicle is filled 100% with two orders then half the capacity during the time getting to the second location is unused; if both locations were nearby then the truck would be half empty for less time. An opposite example being a small vehicle consisting of only separate one-litre orders: at least when it is half full, less fuel (and carrying capacity) is lost during the second half of the routes.
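The capacity-aware assignment step described in the question (sort centers by distance, assign to the nearest cluster with spare capacity) can be sketched as follows. This is a minimal illustration under assumed inputs — `points` and `centers` as coordinate arrays and a per-point `demands` vector — not the poster's actual implementation:

```python
import numpy as np

def capacitated_assign(points, demands, centers, capacity):
    """Assign each point to its nearest center that still has spare
    capacity (the question's step 4: sort centers by distance to the
    point, take the first one where the demand still fits)."""
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float)
    load = np.zeros(len(centers))
    assignment = np.full(len(points), -1)  # -1 = left unassigned
    for i, p in enumerate(points):
        # centers ordered by Euclidean distance to point p
        for c in np.argsort(np.linalg.norm(centers - p, axis=1)):
            if load[c] + demands[i] <= capacity:
                assignment[i] = c
                load[c] += demands[i]
                break
    return assignment, load
```

Points left at `-1` are exactly the "customers left unrouted" failure mode the question describes; a post-pass that opens a fresh cluster for them, or a balanced-clustering formulation as the comments suggest, avoids it.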
AIMS is a leading migration company in Asia Pacific that is constantly expanding and opening new offices in countries across the region. We know that our people are integral to our success, which is why we are always on the hunt for new talent and like-minded individuals. At AIMS, we are firm believers in working and playing hard. Want to join our family? Check out the openings below! If you believe you are a suitable match for AIMS, please forward your resume outlining your qualifications, experience, and current and expected salary to hr@aims.sg. Please also state the title of the job that you are applying for. We regret to inform you that only shortlisted candidates will be contacted.
<?xml version="1.0" encoding="utf-8"?>
<resources>

    <string name="app_name">NavigationDrawer</string>

    <string-array name="drawer_menu_itmes_array" translatable="false">
        <item>Planet List</item>
        <item>Webview</item>
        <item>Listview</item>
    </string-array>

    <string name="drawer_open" translatable="false">Open navigation drawer</string>
    <string name="drawer_close" translatable="false">Close navigation drawer</string>
    <string name="title_activity_list">ListActivity</string>
    <string name="hello_world">Hello world!</string>
    <string name="action_settings">Settings</string>

</resources>
Genesis Chapter 21

In Genesis Chapter 21, we see the purpose, power, and faithfulness of God in His promise fulfilled. We also see how Sarah relied again on her own power and didn't rest in God's promise.

Daily Bible Study Questions for Genesis Chapter 21

1. What are the three ways we can see in the first five verses that Isaac's birth is a result of divine intervention or the promise of God?

2. What are two ways we see Abraham obey God in the first five verses?

3. How does a review of the following passages of Scripture help us understand Sarah's views over time regarding her ability to have children?

4. According to the Guinness Book of World Records, at the time of this Bible study's publication (February 2016), the oldest man to father a child did so at the age of 92 years and ten months; and the oldest woman to conceive naturally did so at the age of 59 years. Because the Guinness rules require record verification, it seems unlikely the Guinness judges will consider the ages of Abraham (100) and Sarah (90) to be world records despite the authority of the Word of God. And despite the authority of the Word of God and His demonstrated faithfulness, both Abraham and Sarah laughed when they were told by God that Sarah would give birth to a son for Abraham. According to Genesis 17:17, Abraham laughed and said to himself "Can a child be born to a hundred-year-old man? Can Sarah, a ninety-year-old woman, give birth?" According to Genesis 18:12, Sarah laughed and said to herself "Can I really have a baby when I'm old?" Of course, God knew their thoughts. He instructed Abraham to name the son Sarah would bear him Isaac (17:19), which in Hebrew means laughter. What was the effect of naming Abraham's son "laughter"?

5. Before Abraham had any children, he worried that he was childless and complained to God that a slave born in his house would be his heir.
God responded that Abraham would have an heir and that he would come from Abraham's own body (15:3-4). Then Sarah gave Abraham her slave Hagar to be his wife so Sarah and Abraham could build a family by Hagar (16:1-3). After Hagar bore a son, Ishmael, to Abraham, God told Abraham that Sarah would have a son, to which Abraham replied: "If only Ishmael were acceptable to you!" (17:18). Obviously Abraham loved his son Ishmael, born of a slave in his house. So Sarah has Isaac and then complains to Abraham about Ishmael. She asks that he and his mother be sent away so that the son of a slave will not be a co-heir with her son Isaac (21:10). She does this once Isaac is weaned and therefore beyond the feebleness of the first years of infancy (hence the feast). She also does this because she saw Ishmael mocking or laughing - presumably with Isaac as the target. Verse 11 of Genesis chapter 21 tells us this was a very difficult thing for Abraham. In what two ways does God reassure Abraham about Sarah's request and convince him to send Hagar and Ishmael away?

6. Review Galatians 4:21-31. How does this New Testament passage help us understand how Christians today relate to this story of Isaac and Ishmael in Genesis chapter 21?

7. Sarah asked Abraham to banish Hagar and her son because she didn't want Ishmael to be a co-heir with her son Isaac (21:10). What was it that Isaac was to inherit as an heir to Abraham?

8. What was the advantage to Abimelech of entering into a treaty with Abraham; or, why might Abimelech have sought a treaty with Abraham?

We pray this Bible study on Genesis Chapter 21 has been a blessing to you.

© All rights reserved. Biblefied is a Registered Trademark of the Small Group Publishing Company, LLC.
Arctic Cosmos (foaled 31 January 2007) is an American-bred, British-trained racehorse and sire best known as the winner of the 2010 St. Leger Stakes.

Background

Arctic Cosmos is a bay horse with a small white star bred in Kentucky by Sheridan & Iadora Farm. He was from the first crop of foals sired by the 2004 Epsom Derby winner North Light. As a yearling, Arctic Cosmos was bought for 47,000 guineas at Tattersalls by Blandford Bloodstock and John Gosden for his wife Rachel Hood.

Racing career

2009: two-year-old season

As a two-year-old, Arctic Cosmos ran twice, finishing fourth in maiden races at Kempton Park and Redcar Racecourse in October.

2010: three-year-old season

Arctic Cosmos began his second season by winning a maiden on the Tapeta surface at Wolverhampton Racecourse. He then ran in handicap races, finishing third at Newbury in May before winning over one and a half miles at Kempton in June. He was then moved up sharply in class for the Group Two King Edward VII Stakes at Royal Ascot in June, in which he started a 14/1 outsider and finished second to Monterosso, with Buzzword (German Derby), At First Sight (runner-up in The Derby), Green Moon and Bullet Train (Lingfield Derby Trial) among the other beaten horses. The colt then finished third behind Rebel Soldier and Dandino in the Gordon Stakes at Goodwood Racecourse. At Doncaster Racecourse on 11 September 2010, Arctic Cosmos started at odds of 12/1 for the 234th running of the St Leger Stakes. The field also included Rewilding, Snow Fairy, Dandino, Joshua Tree and the Irish Derby runner-up Midas Touch. Ridden by William Buick, Arctic Cosmos tracked the leaders before taking the lead approaching the final furlong and stayed on well to win by one and three quarter lengths from Midas Touch, with the 40/1 outsider Corsica in third place. The colt sustained a serious leg injury when being prepared for the Canadian International Stakes and was off the course for over a year.
Later career

Arctic Cosmos finally returned for two races in October 2011, finishing second to Quest For Peace (to whom he was conceding seven pounds) in the Cumberland Lodge Stakes at Ascot and then finishing fourth of sixteen to the French-trained filly Sarah Lynx in the Canadian International. The horse began his five-year-old season with a six length win in the Magnolia Stakes over ten furlongs at Kempton in 2012. His three subsequent races were extremely disappointing as he finished last in the John Porter Stakes, the Yorkshire Cup and the Godolphin Stakes.

Stud record

Arctic Cosmos was retired from racing to become a breeding stallion at the Old Road Stud in County Waterford at a fee of €1,500.
The Jazz Painter

Stuart Davis may never have played an instrument as a jazz musician, but he could certainly improvise - do a few riffs - on cardboard and canvas. The National Gallery of Art is celebrating this ultra American artist with a large exhibition, "Stuart Davis: In Full Swing," beginning Sunday, November 13, exploring his work after 1921 drawn from 50 different sources - a fully realized show tracking his influences and output through three decades. (The image to the left is typical - albeit the work of a culinary artist who provided small edible cookies at the press preview - an exact replica, a digital photo copy, minus the frosty sugar frame, of his "OWH! In Sao Pao, 1951," on loan from New York's Whitney Museum of American Art. Notice the bright yellow background, the play of words - inverted for the title - and the sense of energy and joy.) Oddly enough, I haven't found many of my contemporaries who are familiar with him, but I would encourage everyone who feels a sense of impending gloom, whether caused by external or internal factors, to go... to revel in the profusion of color and style. It was partly to be provocative that I asked NGA curator Harry Cooper if my embrace of Davis made me a 'lowbrow.' He answered easily, diplomatically, enough: "low and high" at once. Wash Post critic/commentator Philip Kennicott goes to great lengths to tell just how complicated the man and his art really are. His is a complicated review in which he seems to see contradictions - the most basic, of course, could be the complete contrast between sight and sound - the 'music' of paint vs. notes played for the ear. PK is not entirely a fan. Still, it's possible to delight in the contrast - how Davis 'riffs,' thrills to variations of form and mood. The result produces unique sensations, worth many ruminations. Put on some Earl Hines (closed circuit audio) while viewing the show.
Posted by Urbanities at 7:36 AM
New iMacs, MacBooks And Mac Mini Coming This Fall
8:50 am July 12, 2018 By Roland Hutchinson

We heard earlier about the two new iPad Pro tablets, and now we have some details on Apple's plans for its iMacs, MacBooks and also a new Mac Mini. The news comes from the well-respected Apple analyst Ming-Chi Kuo, who has shared some details about Apple's plans for its new range of Macs.

Apple is apparently working on a new Mac Mini which will launch this fall; the last one was introduced around three and a half years ago. We suspect that it will be getting a new design as well as the latest hardware.

There will also be a new MacBook Pro with the latest processors; the design of this device is not expected to change. Apple will also launch a new MacBook with the latest processors; the design of this device will not change either. Apple's MacBook updates also extend to a new, cheaper MacBook which would possibly be part of the MacBook Air family. We have heard about this new device previously, and it is expected to come with a 12 inch display.

Finally, Apple will be updating its iMac range; the displays on the devices will apparently get a significant upgrade. The new iMacs will also come with the latest processors and more. It is not yet clear whether there will be any major changes to the design of the devices.

Source MacRumors

Filed Under: Apple, Technology News, Top News
Many advisors dread participant meetings, and so do many participants. If you are starting to feel that dreadful feeling in the pit of your stomach, keep reading, because you can engage and motivate plan participants like never before with TRAK's Batch Processing solution. Now you can quickly give retirement plan participants a meaningful report illustrating their contributions and their impact on retirement and answering their most important question: "Am I ready to retire?"

TRAK-Online's Batch Processing solution comprises three reports, generated after importing census data and then a few clicks of the mouse. The Batch Gap and Contribution Analysis reports are both focused on the participant. Rather than providing participants with less-than-helpful generic information or asking them to log onto the plan web portal after the meeting, these reports are advisor-driven and allow the advisor to provide each participant a report with their personal information on it. Participants want to know "Am I ready to retire?"; with TRAK's Batch Processing you can be the advisor who answers that question.

Participant meetings and participant education don't need to be boring and generic. With the right approach, advisors can engage plan participants, motivate them to increase deferrals and have the opportunity to discover outside assets to bring into the plan. The right approach means connecting with and engaging participants on a personal level with information that is relevant and meaningful to them and answers their most important questions.

TRAK's Batch Gap report provides a one-page gap analysis report for participants that answers the question, "Can I retire?" This easy-to-understand report engages participants because it uses their actual information. When participants see their current paycheck information, it immediately gets their attention.
When they can see suggested shortfall solutions and how those actually affect their take-home pay, it provides meaningful education and helps participants understand the decisions they have to make: how well their retirement is funded, and how closing a potential shortfall will impact their take-home pay. The chart at the top allows them to see their progress in funding retirement in a meaningful way. The Batch Gap report shows the client's current paycheck and the proposed change to take-home pay (the value that means the most to the client). Additionally, the bottom of the report presents options for funding retirement, including the cost of waiting before increasing contributions, retiring a year later, or using additional assets outside of their retirement plan.

For clients who are significantly underfunded, the calculation engine can be configured to automatically modify the scenario to reduce a shortfall. For example, it could increase the client's retirement age until the increase in contribution falls within a specific threshold (the modified data is noted in a footnote). Similar functionality can be applied for overfunded clients. The Batch Gap report provides a meaningful, quick assessment of how well a plan participant is funded for retirement and the available options for addressing the projected gap. Using the Batch Gap and Contribution Analysis reports in combination provides an effective set of tools for helping advisors educate plan participants for retirement.

Help plan participants take incremental steps to reach their retirement goals. When a participant sees that they will need to make a big jump in their contribution in order to be ready for retirement, this can be discouraging and demotivating. TRAK's Batch Contribution Analysis report illustrates to plan participants a number of different contribution options and also how those options might be projected into retirement.
This helps participants understand their options, and if they can't make a big increase in their contribution, it helps motivate them to begin taking smaller steps to move forward. TRAK's Contribution Analysis report starts by calculating a client's paycheck (including tax withholdings), allowing the client to connect with the information being presented. Moving on from their current situation, the report illustrates the impact of increased contributions on the client's paycheck. The illustrated increase in contributions can be configured as needed and can even illustrate yearly increases in deferrals, such as might be present in a plan with auto-escalation. The report can illustrate how the increased contributions grow at various rates of return and over various time horizons (e.g., 5 years, 10 years and at retirement). Additionally, the report can illustrate the impact on the projected retirement account balance at retirement if the client should wait before increasing the amount of his or her contributions. Finally, the report can illustrate the income stream from the projected account balances at retirement.

Some advisors have objected to the wealth of information provided in this report. Please note that the amount of information can be reduced. However, once advisors use this report, they often respond differently. Plan participants are able to understand the report because the calculations are based on their own paycheck. This helps them connect with the data, and they quickly understand the other ideas that are being communicated. The Contribution Analysis report is an excellent solution for an enrollment or re-enrollment meeting. It helps clients quickly comprehend the impact of increasing contributions not only on their paycheck, but also in retirement. Help your participants start taking steps to a successful retirement!

The growth of defined contribution plans over the last decades has been exponential.
As more advisors market their services to plan sponsors, it becomes increasingly difficult for advisors to differentiate themselves and their services from the rest of the pack. Without providing unique, value-added services to plan sponsors, advisors may find it difficult to prove that they are having an impact on plan participation. Advisors can distinguish themselves by providing the plan data plan sponsors really want to see. Advisors using TRAK-Online's Participant Benchmark report can rise above the competition and give plan sponsors the data they want and need.

The Participant Benchmark report is designed to be given to plan sponsors. Rather than focusing on plan fees and investment options, both important metrics, the Participant Benchmark report focuses on the participants themselves. How many are participating? What are the average contributions and balances? What is the projected retirement income replacement ratio? These data sets and more can then be further segmented by age and income level. Historical data can also be incorporated into the report, allowing an advisor to demonstrate to the plan sponsor how participation and contributions have grown over time. The Participant Benchmark report includes more than 15 charts and grids that can be included, excluded, and reordered as needed. Stand out from the crowd by providing plan sponsors key participant metrics and showing how participant retirement readiness is improving on your watch. Stand out by using TRAK's Participant Benchmark report.
\section{Introduction} Recent years have seen the prevalence of \gls{DL} with larger and deeper models with billions of neurons~\cite{shoeybi2019megatron, brown2020language}. Together with the performance boost of \gls{DL} models comes an increasing computation demand for model training. Most solutions seek to parallelize the training on GPU clusters to meet the requirement of computation power. {\em Data parallelism}~\cite{shallue2019measuring} and {\em model parallelism}~\cite{shoeybi2019megatron} of \gls{DL} models are the most common parallelization strategies. In data parallelism, data are distributed among several servers (a.k.a. workers or devices) in a GPU cluster. In contrast, in model parallelism, the \gls{DL} model is split into multiple parts and distributed among workers. Assigning different parts of a \gls{DL} model to different workers is known as {\em device placement}. Finding the optimal device placement of \gls{DL} models in model parallelization is challenging. This is mainly due to the large search space of potential parallelization strategies and the need to understand both model architectures and device characteristics~\cite{zhou2019gdp}. Despite many efforts to improve device placements, the training time of device placement methods is still very long~\cite{mirhoseini2018hierarchical, addanki2019placeto, zhou2019gdp, gao2018post, gao2018spotlight}. The first effort in automating device placement combines global partitioning and local scheduling by using heuristic strategies to first partition the \gls{DL} model into smaller parts and then determine the execution schedule of neurons within each part~\cite{mayer2017tensorflow}. The state-of-the-art device placement methods use a combination of \gls{GNN} and \gls{RL} to find the placement of \gls{DL} models~\cite{addanki2019placeto, zhou2019gdp}.
In these solutions, the computation graph of a \gls{DL} model is represented as a \gls{DAG}, in which each node of the \gls{DAG} represents a single operation or a group of operations. In a typical setting, a \gls{GNN} takes a \gls{DAG} of a \gls{DL} model and its nodes' features as input and generates node embeddings, which summarize the attributes and neighborhood topology of each node~\cite{addanki2019placeto, zhou2019gdp}. An \gls{RL} agent then processes the node embeddings and uses a policy to predict device placements for all nodes in the \gls{DAG} on the given device cluster. To this end, the \gls{RL} agent needs to traverse all the nodes in the \gls{DAG} and learn to propose placements that reduce the training time of the \gls{DL} model. Identifying a good graph traversal order can decrease the \gls{RL} agent training time and can also potentially help the \gls{RL} agent find better placements that reduce the \gls{DL} execution time. In this work, we empirically study the relationship between the graph traversal order in device placement and the learning efficiency of the \gls{RL} agent for device placement during the training process. We look into six different graph traversal orders and show how they affect the training process of Placeto~\cite{addanki2019placeto}, a state-of-the-art device placement method, on three different families of neural networks. Each family of neural networks contains structurally similar \gls{DL} models~\cite{pham2018efficient}. Our initial results suggest that different traversal orders are better suited for different types of neural networks. The best graph traversal order to use also depends on the attributes of the \gls{DL} model that we want to find suitable placements for. We also explain how our methods could be used on some models in \gls{RS} and \gls{EO}. \gls{RS} and \gls{EO} are domains where there is a need to provide near real-time services and products for global monitoring of planet earth.
\gls{EO} satellites developed over the years have provided an unprecedented amount of data that need to be processed~\cite{hagos2021extremeearth,zhu2017deep}. Model parallelization methods can contribute to these domains by distributing the computation and memory requirements of training large models on large datasets. Our contribution is summarized as follows: \begin{enumerate} \item We empirically study the impact of the graph traversal order on finding the best device placement for a model-parallel distributed \gls{DL} model and, as a consequence, on the training time of the distributed \gls{DL} model. In this study, we considered different architectures of \gls{DL} models, namely, CNN and RNN. Our study shows that different graph traversal orders triumph at finding the best device placements efficiently for different types of \gls{DL} models. \item Based on our empirical evaluation of graph traversal orders in device placement for different model-parallel \gls{DL} architectures, we summarize and provide guidelines on identifying the best graph traversal order for a given \gls{DL} model based on its characteristics. For example, we recommend using the \texttt{bfs}\xspace traversal order for model-parallel RNNs with large average degrees to perform device placement. \item In the context of \gls{RS} and \gls{EO}, we show how our methods for identifying the best graph traversal order can be used on the \gls{DL} models in the Polar Use case (e.g., CNN models for satellite image classification) and in the Food Security Use case (e.g., an RNN model for sequence classification). Identifying the best graph traversal order allows finding good placements for \gls{DL} models faster than using an arbitrary order. The better device placement, in turn, improves the training performance in model-parallel distributed \gls{DL}.
Choosing a proper graph traversal method in device placement (1) reduces the distributed training time of complex \gls{DL} models, and (2) allows training \gls{DL} models on a larger dataset within the same amount of time. The above two-fold benefit enables real-time online training of model-parallel distributed DL models with time deadlines. \end{enumerate} \section{Preliminaries} In this section, we discuss the formulation of the device placement problem, graph embedding, the \gls{RL} approach for device placement, and graph traversal orders. \subsection{Device Placement} Let $G(V, E)$ be a \gls{DAG} that represents the computation graph of a neural network. Each node $v \in V$ describes a single computation operation (e.g., convolution) or a predefined small group of operations (e.g., groups of nearby convolutions) whose device placement we are interested in predicting. Each edge $e \in E$ models the data dependencies between the vertices. Let $D$ denote a given device cluster (e.g., a GPU cluster), where $d \in D$ characterizes a single device in $D$. A placement $p : V \rightarrow D$ is a mapping that assigns each node in $G$ to a device in $D$. Our goal in device placement is to find a placement $p$ that minimizes the training time of $G$ (i.e., the \gls{DL} model) on the given device cluster $D$ while satisfying the memory constraints of every device in the cluster. When given a fixed number of devices, we can treat the device placement task as a classification problem by considering each device identifier as a label. The classification model takes the DAG of a computation graph $G$ as input and classifies every neuron or group of neurons of $G$ into devices in $D$.
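To make the formulation above concrete, the following minimal Python sketch (an illustration only, not Placeto's actual simulator; all node sizes, device capacities, and cost values are made up) represents a placement $p$ as a dictionary, checks the per-device memory constraint, and scores a placement with a toy cost model that charges a fixed penalty for every edge whose endpoints land on different devices.

```python
# Toy illustration of the device placement formulation p : V -> D.

def is_feasible(placement, node_mem, device_capacity):
    """Check that the total memory of nodes on each device fits its capacity."""
    used = {}
    for node, dev in placement.items():
        used[dev] = used.get(dev, 0) + node_mem[node]
    return all(used[d] <= device_capacity[d] for d in used)

def simulated_step_time(placement, node_cost, edges, comm_cost=1.0):
    """Toy cost model: sum of compute costs plus a fixed penalty for every
    edge whose endpoints live on different devices."""
    compute = sum(node_cost[n] for n in placement)
    comm = sum(comm_cost for u, v in edges if placement[u] != placement[v])
    return compute + comm

# Made-up computation graph with 4 grouped operations and 2 devices.
node_mem = {"a": 2, "b": 3, "c": 3, "d": 2}
node_cost = {"a": 1.0, "b": 2.0, "c": 2.0, "d": 1.0}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
device_capacity = {0: 6, 1: 6}

p = {"a": 0, "b": 0, "c": 1, "d": 1}
assert is_feasible(p, node_mem, device_capacity)
print(simulated_step_time(p, node_cost, edges))  # compute 6.0 + 2 cross edges -> 8.0
```

Among all feasible placements, the goal is the one minimizing the (simulated) step time; the real objective in Placeto replaces this toy cost with measured or simulated execution time.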
To this end, Placeto~\cite{addanki2019placeto} models a device placement task as ``finding a sequence of iterative placement improvements.'' In each training round, Placeto takes the current placement for the DAG and the representation of one of its nodes as input and predicts that node's placement in that DAG. Each round of training lasts until the placements of all the nodes have been updated once. In the rest of this work, we will use the device placement method as proposed in Placeto. \subsection{Placeto} Placeto, in general, consists of two parts: (i) using a \gls{GNN}~\cite{scarselli2008graph} for computing the embedding of the DAG, and (ii) using \gls{RL} for assigning nodes to devices. Below, we elaborate on these two parts in more detail. \subsubsection{Graph Embedding} What matters in the embedding of a computation graph is not only the nodes' features but also their relationships. If two connected nodes are placed on two different devices, there will be data transfer between the two devices in both the forward and backward passes of model training, which is expensive. Things become even more complicated with more complex graph and sub-graph structures. Convolution blocks~\cite{szegedy2015going} contain parallel computation threads that depend on the same node for input data and send the result to another node for intermediate result concatenation. The attention mechanism~\cite{vaswani2017attention} uses a weighted sum of the results from previous layers, which can incur a lot of data communication if the nodes in the previous layers are located on different devices. Temporal dependency~\cite{hochreiter1997long} during the training of recurrent neural networks can also incur a lot of data communication if the nodes that construct the recurrent unit are located on different devices. A \gls{GNN}~\cite{scarselli2008graph} can generate $d$-dimensional embeddings for each node in a given graph that generalize to unseen graphs.
Placeto~\cite{addanki2019placeto} uses a graph embedding architecture that computes node attributes (e.g., the execution time of the operation, the total size of the output tensor), summarizes the topology of a local neighborhood through message passing, and uses pooling operations to create a global summary of the entire graph. Mitropolitsky et al.~\cite{mitropolitsky2020graph} study the impact of different graph embedding techniques on the execution time of the placement and on the computation time of the graph embedding techniques themselves. By explicitly modeling the relationships between nodes in a computation graph, better placements can be found by automatic device placement methods. \subsubsection{Reinforcement Learning} After generating the graph embedding of the input \gls{DAG}, Placeto uses a two-layer feed-forward neural network that takes the \gls{DAG} graph embeddings as input to iteratively predict the device placement of the \gls{DAG}'s nodes. The output of the neural network is the probability distribution of the current node over the candidate hardware devices. The state-of-the-art methods usually evaluate placements of \gls{DL} models by the execution time of the \gls{DL} models. However, the execution time is not differentiable with respect to the parameters of the neural network. Thus, Placeto leverages \gls{RL} for training. During the training process, the \gls{RL} agent interacts with the training environment and uses the execution time as a reward function to guide the training process. Placeto has a simulator that can predict the execution time of a placement, which helps to speed up the training of the \gls{RL} agent by avoiding execution time measurements of placements on real hardware. In each episode of training, Placeto updates the placement of each node one time. The \gls{RL} agent of Placeto is trained with the REINFORCE~\cite{williams1992simple} policy-gradient method.
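The episode structure just described can be sketched as follows: the agent visits the nodes in a fixed traversal order and updates one node's placement per step. For illustration only, a greedy device choice under a toy cost model stands in for Placeto's learned REINFORCE policy and its execution-time simulator; all names and numbers here are assumptions, not Placeto's actual implementation.

```python
# One Placeto-style episode on a made-up graph: one placement update per step,
# guided by the change in a simulated step time (the "reward").

def step_time(placement, node_cost, edges, comm_cost=1.0):
    compute = sum(node_cost[n] for n in placement)
    comm = sum(comm_cost for u, v in edges if placement[u] != placement[v])
    return compute + comm

def run_episode(order, placement, devices, node_cost, edges):
    placement = dict(placement)
    for node in order:               # one node updated per MDP step
        best_dev = min(
            devices,
            key=lambda d: step_time({**placement, node: d}, node_cost, edges),
        )
        placement[node] = best_dev   # greedy stand-in for the learned policy
    return placement

node_cost = {"a": 1.0, "b": 2.0, "c": 2.0, "d": 1.0}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
initial = {"a": 0, "b": 1, "c": 0, "d": 1}

final = run_episode(["a", "b", "c", "d"], initial, [0, 1], node_cost, edges)
print(final, step_time(final, node_cost, edges))
```

With no memory constraint in this toy model, the greedy loop collapses everything onto one device to eliminate communication; the real RL agent must trade communication against per-device capacity and parallelism.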
\subsection{Graph Traversal Order} \label{sec:preliminaries:graph_traversal_order} Since the device placement problem is treated as a sequential decision-making task~\cite{mirhoseini2018hierarchical}, we need to convert the computation graph into a sequence of nodes. Placeto formulated the device placement problem as an \gls{MDP}, where the \gls{RL} agent selects to update the placement for one node of the computation graph in each state. Thus, we need to form a sequence by traversing the computation graph, which is represented as a \gls{DAG}. Below, we review some of the graph traversal orders on \gls{DAG}s that one can consider using. \subsubsection{Topological} Topological ordering~\cite{kahn1962topological} on the \gls{DAG} of a computation graph defines a graph traversal order such that for every directed edge $u\rightarrow v$ from node $u$ to node $v$, $u$ must appear before $v$ in the traversal order. Topological ordering can be used to represent dependencies in a computation graph, where we only visit a node once all its dependencies have been met. \subsubsection{Reversed Topological} A reversed topological ordering of the DAG of a computation graph is simply the reverse of its topological ordering. \subsubsection{Depth-first Search} \gls{DFS} is a graph traversal method that starts at the source nodes (input nodes of the computation graph) and explores the graph as far as possible by continuously visiting the children of the current node before visiting its sibling nodes. A \gls{DFS} ordering is an enumeration of the nodes that is a possible output of applying \gls{DFS} to the graph. A \gls{DFS} preorder lists the nodes in the order in which they are first visited by \gls{DFS}. A \gls{DFS} postorder lists the nodes in the order in which they are last visited by \gls{DFS}.
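To make these definitions concrete, here is a minimal plain-Python sketch of the topological, reversed topological, and DFS preorder/postorder traversals on a made-up toy DAG (the node names are illustrative only; NetworkX provides equivalent routines).

```python
# Kahn's algorithm for topological order, plus recursive DFS pre/post orders,
# on a toy DAG shaped like a small convolution block.
from collections import deque

adj = {"in": ["conv1", "conv2"], "conv1": ["concat"],
       "conv2": ["concat"], "concat": []}

def topological(adj):
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(u for u in adj if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

def dfs_orders(adj, source):
    pre, post, seen = [], [], set()
    def visit(u):
        seen.add(u)
        pre.append(u)           # first time u is reached -> preorder
        for v in adj[u]:
            if v not in seen:
                visit(v)
        post.append(u)          # last time u is visited -> postorder
    visit(source)
    return pre, post

print(topological(adj))                  # ['in', 'conv1', 'conv2', 'concat']
print(list(reversed(topological(adj))))  # reversed topological order
pre, post = dfs_orders(adj, "in")
print(pre)                               # ['in', 'conv1', 'concat', 'conv2']
print(post)                              # ['concat', 'conv1', 'conv2', 'in']
```

Note how the concatenation node appears first in the DFS postorder: orders differ precisely in whether a "join" node is placed before or after the parallel threads feeding it.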
\subsubsection{Breadth-first Search} \gls{BFS} is a graph traversal method that starts at the source nodes (input nodes of the computation graph) and explores the graph by first visiting all the siblings of the current node before moving to the children nodes. A \gls{BFS} order of a graph is an enumeration of its nodes that is one possible output of applying \gls{BFS} to the graph. \subsubsection{Lexicographical} Lexicographical order sorts the nodes by their name strings, comparing the characters of the strings position by position according to their order in the alphabet. For example, given the name strings of two nodes in a computation graph, $a=a_1 a_2 \cdots a_k$ and $b=b_1 b_2 \cdots b_k$, the order of the two node names depends on the alphabetical order of the characters at the first position $i$ where $a$ and $b$ differ. If $a_i < b_i$ then $a < b$, otherwise $a > b$. For more concrete examples, see Appendix~\ref{appendix:sorting_order}. \section{Graph Traversal Order in Device Placement} In this section, we discuss the challenges in device placement and the impact of the graph traversal order. \subsection{Challenges in Device Placement} Finding a good placement for model parallelization is challenging. Most of the state-of-the-art methods use \gls{RL} to find placements; however, \gls{RL} agents still require a long training time before they can find suitable placements. Mirhoseini et al.~\cite{mirhoseini2017device} find that it takes 12 to 27 hours for their \gls{RL} method to find the best placement. Although many efforts have been made in reducing the complexity of the problem~\cite{mirhoseini2018hierarchical}, making the training method more efficient~\cite{gao2018post,gao2018spotlight}, and generalizing better to unseen computation graphs~\cite{addanki2019placeto,zhou2019gdp,zhou2020single}, the training time of the \gls{RL} agent still remains long. One of the challenges in device placement is defining an order for the nodes in the computation graph $G$.
Unlike text and image data, the nodes in graphs reside in a multi-dimensional space and are linked by edges that represent connectivity~\cite{ying2021transformers}. One has to transform graph data from the multi-dimensional space into a sequence of nodes before the majority of \gls{DL} methods can consume the graph data. In Placeto~\cite{addanki2019placeto}, the structural information can be (partially) reflected in the sequential order in which the automatic device placement method iterates through the nodes of the computation graph. Recent work in graph representation learning~\cite{ying2021transformers} has shown that successfully learning the structural information of the graph helps to better represent the graph. Better representations, in turn, lead to performance improvements in downstream tasks that utilize the graph representations. Another challenge in device placement concerns the expressiveness of the \gls{GNN}s that are used to generate the node embeddings. The \gls{GNN}s used by state-of-the-art device placement methods mostly follow the message-passing paradigm, which is known to have inherent limitations. For example, the expressiveness of such \gls{GNN}s is bounded by the Weisfeiler-Lehman isomorphism hierarchy~\cite{kreuzer2021rethinking}. Also, \gls{GNN}s are known to suffer from over-squashing~\cite{topping2021understanding}, where information propagated between distant nodes is distorted. Due to these limitations, the node embeddings created by a \gls{GNN} have limited expressiveness. In such cases, different graph traversal orders in device placement can lead to placements with different performance. \subsection{Impact of Graph Traversal Order} The graph traversal order determines the order in which an \gls{RL} agent learns the placement of each node in the computation graph. We believe that the learning process of \gls{RL} agents for device placement can be improved if a proper graph traversal order can be identified and used in the \gls{RL} training.
Better placements could be found if an ordering of nodes makes the \gls{RL} agent prioritize the placement learning of important nodes that have a more significant impact on the placement execution time. For example, it might be easier for the \gls{RL} agent to first learn how to place the nodes that have heavy communications; misplacing such nodes can lead to slower placement execution time due to extra data communication between devices. The order of placement in a local neighborhood can also play an important role. For example, modern \gls{DL} models are usually constructed from several computation blocks~\cite{szegedy2015going, vaswani2017attention, hochreiter1997long}. The input of a computation block might be copied and sent to a few parallel threads that perform computation independently. The intermediate results from these parallel threads are later concatenated and serve as the input of the next computation block. Suppose an \gls{RL} agent cannot anticipate the concatenation of results from parallel computation threads. In that case, it might misplace the threads so that data communication becomes a bottleneck for the concatenation node. However, suppose the \gls{RL} agent first learns and decides on the placement of the concatenation node. In that case, it can better decide on the placement of the earlier nodes in the computation block to better balance computation and communication. We empirically study the different graph traversal orders mentioned in Section~\ref{sec:preliminaries:graph_traversal_order}. \section{Evaluation} This section presents the details of the empirical evaluation setup, results, experiment analysis and discussion, and guidelines for choosing a graph traversal order for a given \gls{DL} model.
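The concatenation example above can be made concrete with a small sketch. The following pure-Python snippet builds a hypothetical computation block (an input fanning out to two parallel branches whose results are concatenated), computes a topological order with Kahn's algorithm, and reverses it; under the reversed order, the agent decides on the concatenation node before the parallel branches.

```python
from collections import deque

# Hypothetical computation block: an input fans out to two parallel
# branches whose results are concatenated (as in Inception-style blocks).
GRAPH = {
    "in": ["branch_a", "branch_b"],
    "branch_a": ["concat"],
    "branch_b": ["concat"],
    "concat": ["out"],
    "out": [],
}

def topological_order(graph):
    """Kahn's algorithm; ties broken by insertion order of `graph`."""
    indeg = {n: 0 for n in graph}
    for n in graph:
        for c in graph[n]:
            indeg[c] += 1
    queue = deque(n for n in graph if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in graph[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return order

topo = topological_order(GRAPH)
reversed_topo = topo[::-1]
# With reversed-topo, the agent decides on `out` and `concat` before the
# parallel branches, so it can then place the branches to avoid
# cross-device transfers into the concatenation node.
print(topo)           # ['in', 'branch_a', 'branch_b', 'concat', 'out']
print(reversed_topo)  # ['out', 'concat', 'branch_b', 'branch_a', 'in']
```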
\subsection{Datasets} We conduct our experiments on three different datasets, \texttt{cifar10}\xspace, \texttt{nmt}\xspace, and \texttt{ptb}\xspace, as in previous work~\cite{addanki2019placeto,mitropolitsky2020graph}. \texttt{cifar10}\xspace and \texttt{ptb}\xspace are generated using an \gls{RL}-based method, ENAS~\cite{pham2018efficient}, that finds the optimal subgraph within a larger graph search space. The \texttt{cifar10}\xspace dataset consists of $32$ computation graphs of convolutional neural networks for image classification tasks. The \texttt{ptb}\xspace dataset consists of $32$ computation graphs for language modeling tasks. The \texttt{nmt}\xspace dataset contains $32$ variations of \gls{nmt}~\cite{wu2016google} with different numbers of unrolled steps. The computation graphs in \texttt{nmt}\xspace are a family of encoder-decoder networks with attention structures. In all three datasets, the nodes of the computation graphs are pre-grouped to reduce graph sizes in the same way as in~\cite{mirhoseini2017device}. The computation graphs in \texttt{cifar10}\xspace, \texttt{ptb}\xspace, and \texttt{nmt}\xspace have on average 300, 500, and 190 nodes, respectively. Table~\ref{tab:dataset_summary} summarizes the three datasets. \subsection{Experiment Setup} We implement all the graph traversal orders in Section~\ref{sec:preliminaries:graph_traversal_order} using NetworkX~\cite{hagberg2008exploring} and refer to them as \texttt{topo}\xspace, \texttt{reversed-topo}\xspace, \texttt{dfs-preorder}\xspace, \texttt{dfs-postorder}\xspace, \texttt{bfs}\xspace, and \texttt{lexico}\xspace hereafter. For Placeto, we use the implementation provided in~\cite{mitropolitsky2020graph}, which is based on the original implementation~\cite{addanki2019placeto}. We use the simulator from the original implementation to simulate the physical execution environment with different numbers of devices on which a neural network can be placed.
We only change the graph traversal order of the \gls{RL} agent in each episode of training, and this order is fixed across the episodes of one experiment. We conduct experiments on the graphs from each of the three datasets with three, five, and eight devices, in line with previous work~\cite{mitropolitsky2020graph}. We run independent experiments with the same setting (dataset and number of devices) to account for the stochasticity and randomness that might lead to differences in experiment results. We compared three different settings for the number of repeated runs on a subset of the whole datasets and found that $10$ repeated runs offer a good balance between computation load and the reproducibility of the result. The experiments are run on a standalone benchmark machine with an AMD Ryzen Threadripper 2920X 12-Core Processor and 128 GB of RAM. Since we have $3 \times 32 \times 3 \times 6 \times 10 = 17280$ experiments to run (datasets $\times$ graphs $\times$ device settings $\times$ traversal orders $\times$ repetitions), we use parallel docker containers, each running one experiment, to speed up the experiments. We empirically found that the metrics we measure are not sensitive to the number of docker containers running simultaneously. We use the TensorFlow and NetworkX libraries, and we refer the readers to this repository\footnote{https://github.com/bwhub/Graph\_Traversal\_Order\_in\_Device\_Placement} for the experiment code, the specific versions of the libraries, and other software settings. \subsection{Results and Analysis} Through the training process of device placement, an \gls{RL} agent aims to find device placements with lower execution times. Device placement training processes using different graph traversal orders might have different learning speeds. Given the same amount of training time, an efficient graph traversal order finds a placement with a lower execution time for the given input \gls{DL} model compared to the placements found by less efficient graph traversal orders.
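For reference, the size of the experiment grid described above can be checked with a few lines of Python; the variable names are illustrative, not taken from our experiment code.

```python
from itertools import product

# Experiment grid from the setup above (names illustrative):
datasets = ["cifar10", "nmt", "ptb"]
graphs_per_dataset = range(32)       # 32 computation graphs per dataset
device_counts = [3, 5, 8]            # device settings used in the experiments
orders = ["topo", "reversed-topo", "dfs-preorder",
          "dfs-postorder", "bfs", "lexico"]
repeats = range(10)                  # 10 repeated runs per setting

grid = list(product(datasets, graphs_per_dataset,
                    device_counts, orders, repeats))
print(len(grid))  # 17280 experiments in total
```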
We compare the different graph traversal orders based on the best placement execution time at episodes $9$, $19$, and $49$. We empirically observe that the training process of the \gls{RL} agent can be roughly divided into three phases. In the first phase (episodes $1$ to $9$), the RL agent learns efficiently and finds better placements across training episodes. This can be explained by the fact that the learning process has just started, and finding a placement that is better than a random strategy is not very hard. In the second phase (episodes $10$ to $19$), the learning process slows down, and the \gls{RL} agent cannot always find drastically better placements than in the first phase. This reflects that the learning process plateaus and we see diminishing returns. In the third phase (episodes $20$ to $49$), the \gls{RL} agent overcomes the plateau and finds better placements thanks to the more extended training budget and the knowledge learned through the process. We report the number of times each graph traversal order finds the placement with the lowest execution time for the input DL models in the given dataset. Each execution time is averaged over $10$ repeated experiments to minimize the effect of randomness and stochasticity during the training process. Table~\ref{tab:cifar10_best_order_count} shows the results of the experiments on the \texttt{cifar10}\xspace dataset. Although traversal orders like \texttt{topo}\xspace and \texttt{dfs-preorder}\xspace sometimes find the placement with the lowest execution time, most of the time \texttt{reversed-topo}\xspace and \texttt{dfs-postorder}\xspace are the best traversal orders to use. This can be explained by the structures of parallel convolutions in the computation graphs of the \texttt{cifar10}\xspace dataset, where the intermediate results of parallel convolutions are concatenated for later use.
In such cases, it is better to start the learning process from the nodes in the output layer of the model. Once the placement of the concatenation nodes is settled, it is easier for the \gls{RL} agent to optimize the placement of the parallel convolutions. Also, we observe that with more training episodes, \texttt{topo}\xspace and \texttt{dfs-preorder}\xspace show fewer advantages, as the number of times they find the placement with the lowest execution time decreases. We also find that the diameter of the computation graph of the \gls{DL} model affects which graph traversal order performs best on the \texttt{cifar10}\xspace dataset. With a shorter diameter (e.g., smaller than $100$), the \gls{DFS} family (\texttt{dfs-preorder}\xspace and \texttt{dfs-postorder}\xspace) performs best. With a longer diameter (e.g., larger than $100$), the topo family (\texttt{topo}\xspace, \texttt{reversed-topo}\xspace) tends to find better placements. This could be explained by the fact that, on graphs with a larger diameter, the \gls{DFS} family forms longer sequences of consecutive nodes along the diameter. This makes it hard for the \gls{RL} agent to learn the placement of sibling nodes in the computation graph, as they are far away from each other in the sequence, and the agent has to learn to collocate sibling nodes that are far apart. Table~\ref{tab:nmt_best_order_count} shows the results of the experiments on the \texttt{nmt}\xspace dataset. The \texttt{reversed-topo}\xspace order dominates and gives the best results. This can be explained by the fact that the \texttt{reversed-topo}\xspace order considers how intermediate results are concatenated in the computation graph. The \gls{RL} agent can decide the placement of the concatenation operation first.
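The diameter statistic used above can be computed directly. The following pure-Python sketch measures the diameter of an undirected toy graph by running BFS from every node; the `preferred_family` threshold of $100$ simply encodes our empirical observation on \texttt{cifar10}\xspace and is not a hard rule.

```python
from collections import deque

def diameter(adj):
    """Longest shortest path (in edges) over an undirected graph,
    via BFS from every node. Fine for graphs of a few hundred nodes."""
    def ecc(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            n = q.popleft()
            for m in adj[n]:
                if m not in dist:
                    dist[m] = dist[n] + 1
                    q.append(m)
        return max(dist.values())
    return max(ecc(n) for n in adj)

# A simple path graph 0-1-2-...-9 has diameter 9.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
print(diameter(path))  # 9

# Heuristic from the observation above (threshold of 100 is empirical):
def preferred_family(diam, threshold=100):
    return "dfs" if diam < threshold else "topo"
```

For instance, `preferred_family(63)` (the average \texttt{nmt}\xspace diameter) suggests the DFS family, while `preferred_family(316)` (the average \texttt{ptb}\xspace diameter) suggests the topo family.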
Then it is easier for the \gls{RL} agent to collocate the input operation nodes with the concatenation node to minimize expensive data transfer and synchronization between devices during training. In such cases, starting from the nodes in the output layers of the computation graph also helps. The \texttt{dfs-postorder}\xspace order does not work well on the \texttt{nmt}\xspace dataset, as it has a larger average node degree of $2.65$ compared to the average node degree of $1.47$ of \texttt{cifar10}\xspace. This increases the effort for the \gls{RL} agent to collocate sibling nodes that are far away from each other in the placement sequence generated by \texttt{dfs-postorder}\xspace. Better collocation of sibling nodes can also potentially explain why \texttt{bfs}\xspace is the second-best traversal order on this dataset. Table~\ref{tab:ptb_best_order_count} shows the results of the experiments on the \texttt{ptb}\xspace dataset. The \texttt{bfs}\xspace order achieves the best learning efficiency. This can be explained by the fact that the computation graphs in the \texttt{ptb}\xspace dataset have more nodes and edges than those in the \texttt{cifar10}\xspace and \texttt{nmt}\xspace datasets. There are potentially more sibling nodes that the \gls{RL} agent needs to consider when performing the placement. Since sibling nodes in a local neighborhood are put close together in the traversal sequence generated by \texttt{bfs}\xspace, it is easier for the \gls{RL} agent to learn to collocate these nodes to avoid unnecessary data transfer between devices. In this way, the \gls{RL} agent does not need to worry too much about long-range dependencies in large computation graphs.
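The sibling-collocation argument can be visualized on a toy fan-out graph (pure Python; node names hypothetical): BFS keeps siblings adjacent in the one-dimensional sequence, while DFS postorder interleaves each branch's chain between them.

```python
from collections import deque

# A root with three sibling branches, each a short chain into a common sink.
GRAPH = {"root": ["s1", "s2", "s3"],
         "s1": ["a1"], "a1": ["end"],
         "s2": ["a2"], "a2": ["end"],
         "s3": ["a3"], "a3": ["end"],
         "end": []}

def bfs(graph, src):
    seen, out, q = {src}, [], deque([src])
    while q:
        n = q.popleft()
        out.append(n)
        for c in graph[n]:
            if c not in seen:
                seen.add(c)
                q.append(c)
    return out

def dfs_postorder(graph, src, seen=None):
    seen = set() if seen is None else seen
    seen.add(src)
    out = []
    for c in graph[src]:
        if c not in seen:
            out += dfs_postorder(graph, c, seen)
    return out + [src]

# Siblings s1, s2, s3 are adjacent under BFS but spread apart under
# DFS postorder, which interleaves each branch's chain.
print(bfs(GRAPH, "root"))            # ['root', 's1', 's2', 's3', 'a1', 'a2', 'a3', 'end']
print(dfs_postorder(GRAPH, "root"))  # ['end', 'a1', 's1', 'a2', 's2', 'a3', 's3', 'root']
```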
\begin{table} \centering \caption{Computation Graph Dataset Summary} \label{tab:dataset_summary} \begin{tabular}{rccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Features}}} & \multicolumn{3}{c}{\textbf{Dataset}} \\ \cline{2-4} \multicolumn{1}{c}{} & cifar & nmt & ptb \\ \hline \#nodes (avg) & 303.44 & 179.44 & \textbf{500.75} \\ \#edges (avg) & 444.22 & 476.25 & \textbf{1285.44} \\ node degree (avg) & 1.47 & \textbf{2.65} & 2.56 \\ diameter (avg) & 95.63 & 63.13 & \textbf{316.09} \\ diameter (min, max) & (74, 154) & (41, 69) & (216, 450) \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Best Traversal Order on \texttt{cifar10}\xspace dataset.} \label{tab:cifar10_best_order_count} \begin{tabular}{c|c|cccccc} \hline \multicolumn{2}{c}{\textbf{cifar10 }} & \multicolumn{6}{c}{\textbf{Graph Traversal Order}} \\ \hline \multicolumn{1}{c}{\textbf{\#dev}} & \multicolumn{1}{c}{\textbf{ep}} & \textbf{lexico} & \textbf{topo} & \textbf{dfs\_pre} & \textbf{rev\_topo} & \textbf{dfs\_post} & \textbf{bfs} \\ \hline \multirow{3}{*}{\textbf{3dev}} & 9 & 1 & 6 & \textbf{10 } & \textbf{10 } & 4 & 1 \\ & 19 & 3 & 3 & 7 & 8 & \textbf{11 } & 0 \\ & 49 & 5 & 4 & 5 & \textbf{8 } & \textbf{8 } & 2 \\ \hline \multirow{3}{*}{\textbf{5dev}} & 9 & 0 & \textbf{9} & 6 & 6 & \textbf{9 } & 2 \\ & 19 & 0 & 9 & 3 & \textbf{10 } & 8 & 2 \\ & 49 & 2 & 6 & 6 & \textbf{9 } & 6 & 3 \\ \hline \multirow{3}{*}{\textbf{8dev}} & 9 & 0 & 8 & 3 & 10 & \textbf{11} & 0 \\ & 19 & 1 & 4 & 2 & 8 & \textbf{17} & 0 \\ & 49 & 0 & 5 & 1 & 11 & \textbf{15} & 0 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Best Traversal Order on \texttt{nmt}\xspace dataset.}\label{tab:nmt_best_order_count} \begin{tabular}{c|c|cccccc} \hline \multicolumn{2}{c}{\textbf{nmt}} & \multicolumn{6}{c}{\textbf{Graph Traversal Order}} \\ \hline \multicolumn{1}{c}{\textbf{\#dev}} & \multicolumn{1}{c}{\textbf{ep}} & \textbf{lexico} & \textbf{topo} & \textbf{dfs\_pre} & \textbf{rev\_topo} & 
\textbf{dfs\_post} & \textbf{bfs} \\ \hline \multirow{3}{*}{\textbf{3dev}} & 9 & 0 & 2 & 0 & \textbf{22} & 1 & 7 \\ & 19 & 0 & 0 & 1 & \textbf{23} & 1 & 7 \\ & 49 & 0 & 0 & 1 & \textbf{24} & 1 & 6 \\ \hline \multirow{3}{*}{\textbf{5dev}} & 9 & 0 & 0 & 1 & \textbf{24} & 1 & 6 \\ & 19 & 0 & 0 & 2 & \textbf{26} & 0 & 4 \\ & 49 & 0 & 0 & 0 & \textbf{27} & 0 & 5 \\ \hline \multirow{3}{*}{\textbf{8dev}} & 9 & 0 & 0 & 0 & \textbf{23} & 3 & 6 \\ & 19 & 0 & 0 & 0 & \textbf{24} & 1 & 7 \\ & 49 & 0 & 0 & 0 & \textbf{24} & 3 & 5 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Best Traversal Order on \texttt{ptb}\xspace dataset.}\label{tab:ptb_best_order_count} \begin{tabular}{c|c|cccccc} \hline \multicolumn{2}{c}{\textbf{ptb}} & \multicolumn{6}{c}{\textbf{Graph Traversal Order}} \\ \hline \multicolumn{1}{c}{\textbf{\#dev}} & \multicolumn{1}{c}{\textbf{ep}} & \textbf{lexico} & \textbf{topo} & \textbf{dfs\_pre} & \textbf{rev\_topo} & \textbf{dfs\_post} & \textbf{bfs} \\ \hline \multirow{3}{*}{\textbf{3dev}} & 9 & 0 & 1 & 6 & 1 & 0 & \textbf{24} \\ & 19 & 0 & 1 & 10 & 1 & 2 & \textbf{18} \\ & 49 & 0 & 1 & 8 & 5 & 2 & \textbf{16} \\ \hline \multirow{3}{*}{\textbf{5dev}} & 9 & 0 & 0 & 3 & 0 & 2 & \textbf{27} \\ & 19 & 0 & 1 & 3 & 2 & 2 & \textbf{24} \\ & 49 & 0 & 1 & 2 & 6 & 5 & \textbf{18} \\ \hline \multirow{3}{*}{\textbf{8dev}} & 9 & 0 & 0 & 2 & 1 & 1 & \textbf{28} \\ & 19 & 0 & 0 & 2 & 5 & 4 & \textbf{21} \\ & 49 & 0 & 0 & 2 & \textbf{13} & 5 & 12 \\ \hline \end{tabular} \end{table} \subsection{Discussion and Guidelines} In the previous subsection, we showed that graph traversal order affects the training efficiency of the \gls{RL} agent, i.e., the execution time of the best placement found given the same training budget. The optimal graph traversal order for the \gls{RL} agent depends on the characteristics of the neural network for which we want to find an optimal placement, e.g., its number of nodes and average degree.
Our findings are in line with previous research~\cite{mirhoseini2018hierarchical, addanki2019placeto} showing that graph traversal order does not affect the quality of the final placement if an ample training budget is given. We have observed that, given enough training budget, the difference in execution time between the placements found becomes less and less pronounced, since the \gls{RL} agent eventually manages to learn good placements. However, our findings are still meaningful in real-world experiments, where one cannot guarantee an unlimited amount of training time for the \gls{RL} agent. Under a limited training budget, a good graph traversal order helps to find better placements than other traversal orders. Better placements improve the training throughput of the \gls{DL} model on distributed hardware. As a result, one can either improve the training speed of the \gls{DL} model on the same dataset or train the \gls{DL} model on larger datasets within the same amount of time. Identifying the proper graph traversal order for the computation graph of a \gls{DL} model can thus improve training efficiency and lead to better placements with lower execution times on distributed hardware. However, finding the optimal graph traversal order for a given \gls{DL} model is not an easy task, as many factors are involved, e.g., the topology of the computation graph of the \gls{DL} model and the ratio of computation to communication during training. Although one cannot always quickly find the best graph traversal order for the computation graph of a given \gls{DL} model, we can still provide some guidelines based on our experience.
In general, it is good to start with a graph traversal order that traverses the nodes of the computation graph in a backward fashion, i.e., start from the nodes in the final layer of the graph, gradually go through the nodes in the previous layers, and finish with the nodes in the first layer of the model. For example, when using the \texttt{reversed-topo}\xspace order, the \gls{RL} agent in the device placement method can first learn the placement of the nodes in the last layers and then of the nodes that are inputs to nodes for which the \gls{RL} agent has already found a placement. By starting backward, the \gls{RL} agent can learn to better collocate parent and child nodes. If the \gls{DL} model computation graph has a large diameter and a large number of nodes or groups of nodes, then graph traversal orders that put sibling nodes near each other in the one-dimensional sequence are better candidates for the optimal graph traversal order. For example, when facing a large \gls{DL} model with more than $200$ nodes, the \texttt{bfs}\xspace order puts sibling nodes close to each other in the one-dimensional sequence. Thus, the \gls{RL} agent can learn to place the sibling nodes consecutively, instead of having to remember the placement of sibling nodes that are far away from each other in a long sequence. In the context of the ExtremeEarth project~\cite{koubarakis2019copernicus, koubarakis2021artificial, hagos2021extremeearth}, different types of models are used to provide \gls{EO} products. While hyperparameter tuning~\cite{meister2020maggy} and ablation studies~\cite{sheikholeslami2021autoablation} can help improve model performance, identifying a proper graph traversal order can improve model-parallel training performance.
For example, for \gls{SAR} image classification~\cite{khaleghian2021synthetic, khaleghian2021sea}, \texttt{reversed-topo}\xspace and \texttt{dfs-postorder}\xspace would be good traversal orders to start the experiments with, as the models are similar to those in the \texttt{cifar10}\xspace model dataset. For sequence classification tasks~\cite{paris2020monitoring}, \texttt{bfs}\xspace would be a good traversal order to start with, as they are sequence-to-sequence models, similar to those in the \texttt{ptb}\xspace model dataset. The \texttt{bfs}\xspace order can help the \gls{RL} agent better collocate the placement of sibling operations in the \gls{DL} model. \section{Related Work} In this section, we discuss related work on graph traversal order. Previous work studies the relationship between graph traversal order and the execution time of the final placement found by an auto device placement method given enough training budget. HDP~\cite{mirhoseini2018hierarchical} randomizes the order of groups on the NMT (4-layer) baseline fed into the Placer that predicts the placement for each group of nodes. The authors find that the difference between the fastest and slowest placements was less than $7\%$ in $10$ experiments. Placeto~\cite{addanki2019placeto} uses a \gls{GNN} to eliminate the need to assign indices when embedding graph features. Experimental results show that the predicted placement of Placeto is more robust to graph traversal order than RNN-based approaches. REGAL~\cite{Paliwal2020Reinforced} uses a topological ordering to convert a graph into a sequence. Mitropolitsky et al.~\cite{mitropolitsky2020graph} study how different graph embedding techniques affect the execution time of the final placement and show that position-aware graph embedding improves the execution time of the placement found compared to Placeto-GNN~\cite{addanki2019placeto} and GraphSAGE~\cite{hamilton2017inductive}.
GDP~\cite{zhou2019gdp} removes the positional embedding in the transformer model to prevent overfitting. Some work in other domains also studies graph traversal order. In chip placement, Mirhoseini et al.~\cite{mirhoseini2021graph} find that a topological ordering can help the RL agent place connected nodes close to each other. In the domain of generating graphs with \gls{DL} models, GraphRNN~\cite{you2018graphrnn} uses \gls{BFS} order for graph generation to reduce the complexity of learning over all possible node sequences: the only possible edges for a new node are those connecting to nodes in the ``frontier'' of the \gls{BFS} order. To the best of our knowledge, our work is the first to study how graph traversal order affects training efficiency in device placement. \section{Conclusion} In this work, we study the impact of graph traversal order in device placement. We empirically show that different graph traversal orders affect the learning efficiency of the auto device placement training process. Given a proper graph traversal order, an RL agent can learn more efficiently during the training process by finding placement strategies with lower execution times faster. Specifically, we find that traversing the computation graph from the nodes in the output layer to the nodes in the input layer helps the RL agent find good placements efficiently in many cases. We also find that when an RL agent finds placements for larger computation graphs, traversal orders that better collocate sibling nodes in the traversal sequence, e.g., \gls{BFS}, are more efficient than their depth-first counterparts. We provide practical guidelines on choosing the traversal order for device placement. We believe that our study can help researchers and practitioners better understand the relationship between types of networks and graph traversal order.
The knowledge learned about traversal order can further generalize to learning settings for parallelization beyond device placement (model parallelization). There are several potential extensions and improvements, such as jointly learning the graph traversal order, the graph embedding, and the policy network of the RL agent. Another possible direction is to study graph traversal orders based on the graph structure and the features of individual nodes (e.g., the input and output sizes and the computation intensity of a given node). Also, it would be interesting to see how the knowledge we gained on graph traversal order applies when using transformers for graphs for device placement. \begin{appendices} \section{nmt 64-30: node names in lexicographic order} \label{appendix:sorting_order} \begin{center} \begin{tabular}{ |r|l| } \hline Index & Node names \\ \hline 0 & decoder/attention\_decoder/attn\_0/concat\\ 1 & decoder/attention\_decoder/attn\_1/concat\\ 2 & decoder/attention\_decoder/attn\_10/concat\\ $\vdots$ & $\vdots$ \\ 33 & decoder/depth\_0/static\_rnn\_0/add\_1\\ 34 & decoder/depth\_0/static\_rnn\_1/add\_1\\ 35 & decoder/depth\_0/static\_rnn\_10/add\_1\\ $\vdots$ & $\vdots$ \\ 161 & encoder/attention/Reshape\_1\\ 162 & encoder/depth\_0/lstm\_cell/bias/read\\ 163 & encoder/depth\_0/static\_rnn\_0/add\_1\\ 164 & encoder/depth\_0/static\_rnn\_1/add\_1\\ 165 & encoder/depth\_0/static\_rnn\_10/add\_1\\ $\vdots$ & $\vdots$ \\ 225 & encoder/slicing\_layer/strided\_slice\\ 226 & init\_vars/global\_epoch\_step/Assign\\ 227 & placeholders/concat\_1\\ \hline \end{tabular} \end{center} \end{appendices} \bibliographystyle{reference/IEEEtran}
Q: What are the effects of removing outdated Android SDK build tools Is there an effect on the build if I remove outdated Android SDK build-tools? The current version is 20 and I want to remove versions 19 and older. A: I have done this multiple times and Android (Eclipse) projects are still alive. Just remember to first install the most recent Android SDK Build Tools package, then uninstall the older ones, just to play it safe. If you get in trouble building an old project, just install the older version back. They are put nicely in separate android-sdk-windows/build-tools/x.y.z/ subfolders. Eclipse uses the most recent one by default unless project.properties gives a specific build-tools version, which you usually should not need to do.
Dwight A. Swanstrom (April 28, 1905 – August 26, 1978) was an American businessman and politician. Swanstrom was born in Duluth, Minnesota, and graduated from Denfeld High School in Duluth in 1923. He served in the Minnesota National Guard. Swanstrom attended Hibbing Community College in Hibbing, Minnesota, and then earned his bachelor's degree in economics and political science from the University of Minnesota. He lived with his wife and family in Duluth and was involved in the insurance and real estate businesses. Swanstrom served in the Minnesota House of Representatives from 1945 to 1954 and from 1965 to 1972. He died at a hospital in Duluth, Minnesota.
\section{Introduction} \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{img.png} \caption{(1-6) Embedded Detector: Given a CNN trained on a standard vision task (classification), we backpropagate the feature map back to the image space to compute a saliency map. It is thresholded to keep only the most informative signal, and keypoints are the local maxima. (7-8): simple-descriptor.} \label{fig:pipeline} \end{figure} Feature extraction, description, and matching is a recurrent problem in vision tasks such as Structure from Motion (SfM), visual SLAM, scene recognition, and image retrieval. The extraction consists in detecting image keypoints; the matching then pairs the nearest keypoints based on their descriptor distance. Even though hand-crafted solutions, such as SIFT \cite{lowe2004distinctive}, prove to be successful, recent breakthroughs in local feature detection and description rely on supervised deep-learning methods \cite{detone18superpoint, ono2018lf,yi2016lift}. They detect keypoints on saliency maps learned by a Convolutional Neural Network (CNN), then compute descriptors using another CNN or a separate branch of it. They all require strong supervision and complex training procedures: \cite{yi2016lift} requires ground-truth matching keypoints to initiate the training, \cite{ono2018lf} needs the ground-truth camera poses and depth maps of the images, and \cite{detone18superpoint} circumvents the need for ground-truth data by using synthetic data but requires heavy domain adaptation to transfer the training to realistic images. All these methods require a significant learning effort. In this paper, we show that a trained network already embeds enough information to build a state-of-the-art (SoA) detector and descriptor. The proposed method for local feature detection needs only a CNN trained on a standard task, such as ImageNet \cite{deng2009imagenet} classification, and no further training.
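The caption above summarizes the pipeline, whose central step backpropagates a feature map to the image space. ELF computes this gradient with automatic differentiation through a real CNN; the following hand-worked toy (pure Python, a single hypothetical 2$\times$2 edge filter in place of a trained network) only illustrates why such a gradient peaks on the image structures the filter responds to.

```python
# Toy 3x3 "image" with a vertical edge, and a 2x2 edge-like kernel
# standing in for a trained CNN filter (all values hypothetical).
IMG = [[0.0, 1.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 1.0, 0.0]]
K = [[1.0, -1.0],
     [1.0, -1.0]]

def feature_map(img, k):
    """Valid 2x2 convolution followed by ReLU."""
    H, W = len(img) - 1, len(img[0]) - 1
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = sum(k[dy][dx] * img[i + dy][j + dx]
                    for dy in range(2) for dx in range(2))
            out[i][j] = max(s, 0.0)
    return out

def saliency(img, k):
    """Analytic gradient of sum(feature_map) w.r.t. each pixel: a pixel
    accumulates the kernel weights of the feature positions that cover
    it and whose ReLU is active."""
    f = feature_map(img, k)
    grad = [[0.0] * len(img[0]) for _ in img]
    for i in range(len(f)):
        for j in range(len(f[0])):
            if f[i][j] > 0.0:  # ReLU gate
                for dy in range(2):
                    for dx in range(2):
                        grad[i + dy][j + dx] += k[dy][dx]
    return [[abs(g) for g in row] for row in grad]

S = saliency(IMG, K)
# The saliency is highest around the vertical edge the filter fires on.
print(S)  # [[0.0, 1.0, 1.0], [0.0, 2.0, 2.0], [0.0, 1.0, 1.0]]
```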
The detector, dubbed ELF, relies on the features learned by such a CNN and extracts their locations from the feature map gradients. Previous work already highlights that trained CNN features are relevant descriptors \cite{fischer2014descriptor}, and recent works \cite{balntas2016learning, han2015matchnet, simo2015discriminative} specifically train CNNs to produce features suitable for keypoint description. However, no existing approach uses a pre-trained CNN for feature detection. ELF computes the gradient of a trained CNN feature map with respect to (\textit{w.r.t.}) the image: this outputs a saliency map with local maxima on keypoint positions. Trained detectors learn this saliency map with a CNN, whereas we extract it with gradient computations. This approach is inspired by \cite{simonyan2013deep}, which observes that the gradient of the classification score \textit{w.r.t.} the image is similar to the image saliency map. ELF differs in that it takes the gradient of feature maps and not of the classification score, contrary to existing work exploiting CNN gradients \cite{selvaraju2017grad, smilkov2017smoothgrad,springenberg2015striving, sundararajan2017axiomatic}. These previous works aim at visualising the learning signal for classification specifically, whereas ELF extracts the feature locations. The extracted saliency map is then thresholded to keep only the most relevant locations, and standard Non-Maxima Suppression (NMS) extracts the final keypoints (Figure \ref{fig:heatmap_coco}). \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig3_heatmap.png} \caption{ Saliency map thresholding to keep only the most informative locations. Top: original image. (Left-Right: Webcam \cite{verdie2015tilde}, HPatches \cite{balntas2017hpatches}, COCO\cite{lin2014microsoft}) Middle: blurred saliency maps. Bottom: saliency map after threshold. (Better seen on a computer.)
} \label{fig:heatmap_coco} \end{figure} ELF relies on only six parameters: 2$\times$2 Gaussian blur parameters for the automatic estimation of the threshold level and for the saliency map denoising, and two parameters for the NMS window and the border to ignore. Detection requires only one forward and one backward pass and takes $\sim$0.2s per image on a simple Quadro M2200, which makes it suitable for real-time applications. ELF is compared to individual detectors with the standard \textit{repeatability} metric \cite{mikolajczyk2005comparison}, but results show that this metric is not discriminative enough. Most of the existing detectors can extract keypoints repeated across images with similar repeatability scores. Also, this metric does not express how `useful' the detected keypoints are: if we sample all pixels as keypoints, we reach 100\% repeatability, but the matching may not be perfect if many areas look alike. Therefore, the detected keypoints are also evaluated on how `matchable' they are with the \textit{matching score} \cite{mikolajczyk2005comparison}. This metric requires describing the keypoints, so we define a simple descriptor based on the interpolation of a CNN feature map at the detected keypoints, as in \cite{detone18superpoint}. This avoids biasing the performance by choosing an existing competitive descriptor. Experiments show that even this simple descriptor reaches competitive results, which comforts the observation of \cite{fischer2014descriptor} on the relevance of CNN features as descriptors. More details are provided in Section 4.1. ELF is tested on five architectures: three classification networks trained on ImageNet classification, AlexNet, VGG, and Xception \cite{krizhevsky2012imagenet,simonyan2014very, chollet17xception}, as well as the SuperPoint \cite{detone18superpoint} and LF-Net \cite{ono2018lf} descriptor networks.
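A minimal sketch of the thresholding and NMS steps described above (pure Python; the threshold ratio, window, and border values are illustrative, and the two Gaussian blurs that ELF additionally applies are omitted):

```python
def extract_keypoints(saliency, thr_ratio=0.5, nms=1, border=1):
    """Keep saliency values above a fraction of the global maximum,
    then retain only local maxima within a (2*nms+1)^2 window,
    ignoring a border of `border` pixels."""
    H, W = len(saliency), len(saliency[0])
    thr = thr_ratio * max(max(row) for row in saliency)
    kps = []
    for y in range(border, H - border):
        for x in range(border, W - border):
            v = saliency[y][x]
            if v < thr:
                continue
            window = [saliency[j][i]
                      for j in range(max(0, y - nms), min(H, y + nms + 1))
                      for i in range(max(0, x - nms), min(W, x + nms + 1))]
            if v >= max(window):
                kps.append((x, y))
    return kps

sal = [[0, 0, 0, 0, 0],
       [0, 9, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 7, 0],
       [0, 0, 0, 0, 0]]
print(extract_keypoints(sal))  # [(1, 1), (3, 3)]
```

The simple descriptor then interpolates a feature map at these (x, y) locations; in ELF the threshold is estimated automatically from a blurred version of the map, for which the fixed ratio here is a stand-in.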
Although outside the scope of this paper, this comparison provides preliminary results on the influence of the network architecture, task, and training data on ELF's performance. Metrics are computed on HPatches \cite{balntas2017hpatches} for generic performance. We derive two auxiliary datasets from HPatches to study scale and rotation robustness. Light and 3D viewpoint robustness analyses are run on the Strecha and Webcam datasets \cite{strecha2008benchmarking, verdie2015tilde}. These extensive experiments show that ELF is on par with other sparse detectors, which suggests that the feature representation and location information learned by a CNN to complete a vision task is as relevant as when the CNN is specifically trained for feature detection. We additionally test ELF's robustness on 3D reconstruction from images in the context of the CVPR 2019 Image Matching challenge \cite{cvpr19challenge}. Once again, ELF is on par with other sparse methods, even though denser methods, e.g. \cite{detone18superpoint}, are more appropriate for such a task. Our contributions are the following: \begin{itemize} \item We show that a CNN trained on a standard vision task embeds feature location information in its feature gradients. This information is as relevant for feature detection as when a CNN is specifically trained for it. \item We define a systematic method for local feature detection. Extensive experiments show that ELF is on par with other SoA deep trained detectors. They also update the previous result from \cite{fischer2014descriptor}: self-taught CNN features provide SoA descriptors in spite of recent improvements in CNN descriptors \cite{choy2016universal}. \item We release the python-based evaluation code to ease future comparison, together with the ELF code\footnote{ELF code:\url{https://github.com/ELF-det/elf}}. The introduced robustness datasets are also made public\footnote{Rotation and scale dataset: \url{https://bit.ly/31RAh1S}}.
\end{itemize} \section{Related work} Early methods rely on hand-crafted detection and description: SIFT \cite{lowe2004distinctive} detects 3D spatial-scale keypoints on differences of Gaussians and describes them with a 3D Histogram Of Gradients (HOG). SURF \cite{bay2006surf} uses integral images to speed up the previous detection and a sum of Haar wavelet responses for description. KAZE \cite{alcantarilla2012kaze} extends the previous multi-scale approach by detecting features in non-linear scale spaces instead of the classic Gaussian ones. ORB \cite{rublee2011orb} combines the FAST \cite{rosten2006machine} detection and the BRIEF \cite{calonder2010brief} description, and improves them to make the pipeline scale and rotation invariant. The MSER detector hand-crafts desired invariance properties for keypoints and designs a fast algorithm to detect them \cite{matas2004robust}. Even though these hand-crafted methods have proven successful and reach state-of-the-art performance for some applications, recent research focuses on learning-based methods. One of the first learned detectors is TILDE \cite{verdie2015tilde}, trained under drastic changes of light and weather on the Webcam dataset. It uses supervision to learn saliency maps whose maxima are keypoint locations. Ground-truth saliency maps are generated with `good keypoints': they use SIFT and filter out keypoints that are not repeated in more than 100 images. One drawback of this method is the need for supervision that relies on another detector. However, there is no universal explicit definition of what a good keypoint is. This lack of specification inspires Quad-Networks \cite{savinov2017quad} to adopt an unsupervised approach: they train a neural network to rank keypoints according to their robustness to random hand-crafted transformations, and keep the top/bottom quantile of the ranking as keypoints.
ELF is similar in that it does not require supervision, but differs in that it does not need to further train the CNN. Other learned detectors are trained within full detection/description pipelines such as LIFT \cite{yi2016lift}, SuperPoint \cite{detone18superpoint} and LF-Net \cite{ono2018lf}. LIFT's contribution lies in its original training method for three CNNs. The detector CNN learns a saliency map whose most salient points are keypoints. Patches are then cropped around these keypoints, and their orientations and descriptors are computed with two other CNNs. The descriptor is first trained with a contrastive loss on patches around ground-truth matching points, then the orientation CNN is trained together with the descriptor, and finally the detector. One drawback of this method is the need for ground-truth matching keypoints to initiate the training. In \cite{detone18superpoint}, the problem is avoided by pre-training the detector on a synthetic geometric dataset made of polygons on which it mostly detects corners. The detector is then finetuned during the descriptor training on image pairs from COCO \cite{lin2014microsoft} with synthetic homographies and the correspondence contrastive loss introduced in \cite{choy2016universal}. LF-Net relies on another type of supervision: it uses ground-truth camera poses and image depth maps, which are easier to compute with laser or standard SfM than ground-truth matching keypoints. Its training pipeline builds on LIFT and employs the projective camera model to project detected keypoints from one image to the other. These keypoint pairs form the ground-truth matching points used to train the network. ELF differs in that its CNN model is already trained on a standard task. It then extracts the relevant information embedded inside the network for local feature detection, which requires neither training nor supervision.
The detection method of this paper is mainly inspired by the initial observation in \cite{simonyan2013deep}: given a CNN trained for classification, the gradient of a class score \textit{w.r.t} the image is the saliency map of the class object in the input image. A line of works aims at visualizing the CNN representation by inverting it into the image space through optimization \cite{mahendran2015understanding,gatys2016image}. Our work differs in that we backpropagate the feature map itself and not a feature loss. Following works use these saliency maps to better understand the CNN training process and justify the CNN outputs. Efforts mostly focus on the gradient definitions \cite{smilkov2017smoothgrad, springenberg2015striving, sundararajan2017axiomatic, zeiler2014visualizing}. They differ in the way they handle the backpropagation of non-linear units such as ReLU. Grad-CAM \cite{selvaraju2017grad} introduces a variant that fuses several gradients of the classification score \textit{w.r.t} feature maps rather than the image space. Instead, ELF computes the gradient of the feature map, and not of a classification score, \textit{w.r.t} the image. Also, we run simple backpropagation, which differs in the non-linearity handling: all the signal is backpropagated no matter whether the feature maps or the gradients are positive or not. Finally, as far as we know, this is the first work to exploit the localisation information present in these gradients for feature detection. The simple descriptor introduced for the sake of the matchability evaluation is taken from UCN \cite{choy2016universal}. Given a feature map and the keypoints to describe, it interpolates the feature map at the keypoint locations. Using a trained CNN for feature description is one of the early applications of CNNs \cite{fischer2014descriptor}.
Later research turned to specifically training CNNs to generate features suitable for keypoint matching, either with patch-based approaches, among which \cite{simo2015discriminative,melekhov2016siamese,han2015matchnet,zagoruyko2015learning}, or image-based approaches \cite{taira2018inloc,choy2016universal}. We choose the description method from UCN~\cite{choy2016universal}, also used by SuperPoint, for its complexity is only $O(1)$ compared to patch-based approaches that are $O(N)$ with $N$ the number of keypoints. We favor UCN over InLoc \cite{taira2018inloc} as it is simpler to compute. The motivation here is only to get a simple descriptor easy to integrate with all detectors for a fair comparison of the \textit{detector} matching performances. So we overlook the description performance. \section{Method} This section defines ELF, a detection method valid for any trained CNN. Keypoints are local maxima of a saliency map computed as the feature gradient \textit{w.r.t} the image. We use the data-adaptive Kapur method \cite{kapur1985new} to automatically threshold the saliency map and keep only the most salient locations, then run NMS for local maxima detection. \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig2_saliency_bis.png} \caption{(Bigger version Figure \ref{fig:big_saliency_coco}.) Saliency maps computed from the feature map gradient $\left| {}^tF^l(\mathbf{I}) \cdot \frac{\partial F^l}{\partial \mathbf{I}} \right|$. Image contrast is enhanced for better visualisation. Top row: gradients of VGG $pool_2$ and $pool_3$ show a loss of resolution from $pool_2$ to $pool_3$. Bottom: $(pool_i)_{i \in [1,2,5]}$ of VGG on Webcam, HPatches and Coco images. Low-level saliency maps activate accurately whereas higher saliency maps are blurred.} \label{fig:saliency_coco} \end{figure} \subsection{Feature Specific Saliency} We generate a saliency map that activates on the most informative image regions for a specific CNN feature level $l$.
Let $\mathbf{I}$ be a vectorized image of dimension $D_I = H_I \cdot W_I \cdot C_I$. Let $F^l$ be a vectorized feature map of dimension $D_F= H_l \cdot W_l \cdot C_l$. The saliency map $S^l$, of dimension $D_I$, is $S^l(\mathbf{I})=\left| {}^tF^l(\mathbf{I}) \cdot \nabla_I F^l \right|$, with $\nabla_I F^l$ a $D_F \times D_I$ matrix. The saliency activates on the image regions that contribute the most to the feature representation $F^l(\mathbf{I})$. The term $\nabla_I F^l$ makes explicit the correlation between the feature space of $F^l$ and the image space in general. The multiplication by $F^l(\mathbf{I})$ applies this correlation to the features $F^l(\mathbf{I})$ specifically and generates a visualisation in image space, $S^l(\mathbf{I})$. From a geometrical point of view, this operation can be seen as the projection of the feature signal $F^l(\mathbf{I})$ into the image space through $\nabla_I F^l$. From a signal processing perspective, $F^l(\mathbf{I})$ is an input signal filtered through $\nabla_I F^l$ into the image space. If $C_I>1$, $S^l$ is converted into a grayscale image by averaging it across channels. \subsection{Feature Map Selection} We provide visual guidelines to choose the feature level $l$ so that $F^l$ still holds high-resolution localisation information while providing a useful high-level representation. CNN operations such as convolution and pooling increase the receptive field of feature maps while reducing their spatial dimensions. This means that $F^{l}$ has less spatial resolution than $F^{l-1}$ and the backpropagated signal $S^l$ ends up more spread out than $S^{l-1}$. This is similar to an image being overly enlarged, and can be observed in Figure \ref{fig:saliency_coco}, which shows the gradients of the VGG feature maps. On the top row, $pool_2$'s gradient (left) better captures the location details of the dome whereas $pool_3$'s gradient (right) is more spread out. On the bottom rows, the images lose their resolution as we go higher in the network.
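To make the definition of $S^l$ concrete, the following is a minimal numpy sketch (illustrative, not the paper's implementation): a linear map $F(\mathbf{I}) = W\mathbf{I}$ stands in for the CNN, so the Jacobian $\nabla_I F$ is simply $W$. It also checks that ${}^tF \cdot \nabla_I F$ equals the gradient of $\frac{1}{2}\|F(\mathbf{I})\|^2$, which is how the product is obtained with a single backward pass in practice.

```python
import numpy as np

# Minimal sketch of S(I) = | F(I)^T . (dF/dI) | with a linear "feature
# extractor" F(I) = W @ I standing in for a CNN, so the Jacobian dF/dI
# is just W. (Illustrative only: dimensions and W are toy values.)
rng = np.random.default_rng(0)
D_I, D_F = 12, 5                      # vectorized image / feature dims
W = rng.standard_normal((D_F, D_I))   # Jacobian of the linear feature map
I = rng.standard_normal(D_I)

F = W @ I                             # feature representation F(I)
S = np.abs(F @ W)                     # saliency |F^T . dF/dI|, shape (D_I,)

# In practice this product is obtained with one backward pass, since
# F^T . dF/dI is exactly the gradient of 0.5 * ||F(I)||^2 w.r.t. I.
def half_sq_norm(x):
    return 0.5 * np.sum((W @ x) ** 2)

eps = 1e-6
num_grad = np.array([(half_sq_norm(I + eps * e) - half_sq_norm(I - eps * e))
                     / (2 * eps) for e in np.eye(D_I)])
assert np.allclose(S, np.abs(num_grad), atol=1e-4)
```

For a real network, the same quantity would be computed by backpropagating $F^l(\mathbf{I})$ itself, e.g. with automatic differentiation applied to $\frac{1}{2}\|F^l(\mathbf{I})\|^2$.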
Another consequence of this resolution loss is that small features are not embedded in $F^l$ if $l$ is too high. This would reduce the space of potential keypoints to only large features, which would hinder the method. This observation motivates us to favor low-level feature maps for feature detection. We choose the final $F^l$ by taking the highest $l$ that provides accurate localisation. This is visually observable as a sparse high-intensity signal, contrary to the blurry aspect of higher layers. \subsection{Automatic Data-Adaptive Thresholding} The threshold is automatic and adapts to the saliency map distribution to keep only the most informative regions. Figure \ref{fig:heatmap_coco} shows saliency maps before and after thresholding using Kapur's method \cite{kapur1985new}, which we briefly recall below. It chooses the threshold to maximize the information between the image background and foreground, \textit{i.e.} the pixel distributions below and above the threshold. This method is especially relevant in this case as it aims at maintaining as much information as possible on the distribution above the threshold. This distribution describes the set of local maxima among which we choose our keypoints. More formally, for an image $\mathbf{I}$ of $N$ pixels with $n$ sorted gray levels and $(f_i)_{i \in n}$ the corresponding histogram, $p_i=\frac{f_i}{N}$ is the empirical probability of a pixel to hold the $i$-th gray value. Let $s \in n$ be a threshold level and $A,B$ the empirical background and foreground distributions: $A = \left( \frac{p_i}{\sum_{i<s}p_i}\right)_{i<s}$ and $B = \left(\frac{p_i}{\sum_{i \geq s}p_i}\right)_{i \geq s}$. The level $s$ is chosen to maximize the information between $A$ and $B$ and the threshold value is set to $f_s$. For better results, we blur the image with a Gaussian of parameters $(\mu_{thr}, \sigma_{thr})$ before computing the threshold level.
Once the threshold is set, we denoise the image with a second Gaussian blur of parameters $(\mu_{noise}, \sigma_{noise})$ and run standard NMS (the same as for SuperPoint), where we iteratively select decreasing global maxima while ensuring that their nearest-neighbor distance is higher than the window $w_{\textrm{NMS}} \in \mathbb{N}$. We also ignore the $b_{\textrm{NMS}} \in \mathbb{N}$ pixels around the image border. \subsection{Simple descriptor} As mentioned in the introduction, the repeatability score does not discriminate among detectors anymore. So they are also evaluated on how `matchable' their detected keypoints are with the matching score. To do so, the ELF detector is completed with a simple descriptor inspired by SuperPoint's: we interpolate a CNN feature map at the detected keypoints. Using this simple descriptor instead of an existing competitive one avoids unfairly boosting ELF's performance. Although simple, experiments show that this descriptor completes ELF into a competitive feature detection/description method. The feature map used for description may differ from the one used for detection. High-level feature maps have wider receptive fields and hence take more context into account for the description of a pixel location. This leads to more informative descriptors, which motivates us to favor higher-level maps. However, we are also constrained by the loss of resolution described previously: if the feature map level is too high, the interpolation of the descriptors generates vectors too similar to each other. For example, the VGG $pool_4$ layer produces more discriminative descriptors than $pool_5$, even though $pool_5$ embeds information more relevant for classification. Empirically, we observe that there exists a layer level $l'$ above which the description performance stops increasing before decreasing. This is measured with the matching score metric introduced in \cite{mikolajczyk2005comparison}.
The final choice of the feature map is made by testing some layers $l'>l$ and selecting the lowest one before the descriptor performance stagnates. The compared detectors are evaluated with both their original descriptor and this simple one. We detail the motivation behind this choice: detectors may be biased to sample keypoints that their respective descriptor can describe `well' \cite{yi2016lift}. So it is fair to compute the matching score with the original detector/descriptor pairs. However, a detector can sample `useless' points (e.g. sky pixels for 3D reconstruction) that its descriptor characterises `well'. In this case, the descriptor `hides' the detector's flaws. This motivates the integration of a common independent descriptor with all detectors to evaluate them. Both approaches are run since each is as fair as the other. \section{Experiments} This section describes the evaluation metrics and datasets as well as the method's tuning. Our method is compared to detectors with available public code: the fully hand-crafted SIFT \cite{lowe2004distinctive}, SURF \cite{bay2006surf}, ORB \cite{rublee2011orb}, KAZE \cite{alcantarilla2012kaze}, the learning-based LIFT \cite{yi2016lift}, SuperPoint \cite{detone18superpoint}, LF-Net \cite{ono2018lf}, and the individual detectors TILDE \cite{verdie2015tilde} and MSER \cite{matas2004robust}. \subsection{Metrics} We follow the standard validation guidelines \cite{mikolajczyk2005comparison} that evaluate the detection performance with \textit{repeatability (rep)}: the percentage of keypoints common to both images. We also compute the \textit{matching score (ms)} as an additional \textit{detector} metric. It captures the percentage of keypoint pairs that are nearest neighbours in both image space and descriptor space, \textit{i.e.} the ratio of keypoints correctly matched.
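The two metrics can be made concrete with a short numpy sketch (illustrative helper names; the 5-pixel correspondence threshold and the greedy descriptor matching used in our evaluation are assumed):

```python
import numpy as np

def repeatability(kp1, kp2, H, eps=5.0):
    """rep: fraction of image-1 keypoints that, once warped by the
    ground-truth homography H, lie within eps pixels of some image-2
    keypoint. Keypoints are (x, y) pixel coordinates."""
    kp1, kp2 = np.asarray(kp1, float), np.asarray(kp2, float)
    pts = np.hstack([kp1, np.ones((len(kp1), 1))])      # homogeneous
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]
    d = np.linalg.norm(warped[:, None] - kp2[None], axis=-1)
    return float((d.min(axis=1) <= eps).mean())

def matching_score(desc1, desc2, correct):
    """ms: greedy bipartite matching on descriptor distances; a pair
    (i, j) counts as correct when correct[i] == j, i.e. it is also a
    spatial correspondence. Returns the ratio of correct matches."""
    d = np.linalg.norm(desc1[:, None] - desc2[None], axis=-1)
    good = 0
    while np.isfinite(d).any():
        i, j = np.unravel_index(np.argmin(d), d.shape)
        good += int(correct[i] == j)
        d[i, :] = np.inf
        d[:, j] = np.inf
    return good / len(desc1)

# Toy image pair related by a 2-pixel translation; one keypoint of three
# is not re-detected, so rep = 2/3.
H = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
kp1 = [(10, 10), (30, 40), (55, 20)]
kp2 = [(12, 10), (32, 40), (100, 100)]
rep = repeatability(kp1, kp2, H)

# Descriptors of matching keypoints are near-duplicates up to noise.
rng = np.random.default_rng(0)
desc2 = rng.standard_normal((3, 8))
desc1 = desc2[[1, 0, 2]] + 0.01 * rng.standard_normal((3, 8))
ms = matching_score(desc1, desc2, correct=[1, 0, 2])
```

Since a pair must be close both spatially and in descriptor space to count for \textit{ms}, the sketch makes it apparent that \textit{ms} can never exceed \textit{rep}.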
For completeness, the mathematical definitions of the metrics are provided in Appendix and their implementation in the soon-to-be-released code. A way to reach perfect \textit{rep} is to sample all the pixels, or to sample them with a spacing smaller than the distance threshold $\epsilon_{kp}$ of the metric. One way to prevent the first flaw is to limit the number of keypoints, but it does not counter the second. Since detectors are always used together with descriptors, another way to think about detector evaluation is: \textit{`a good keypoint is one that can be discriminatively described and matched'}. One could think that such a metric can be corrupted by the descriptor. But we ensure that a detector flaw cannot be hidden by a high-performing descriptor with two guidelines. First, one experiment must evaluate all detectors with one fixed descriptor (the simple one defined in 3.4). Second, \textit{ms} can never be higher than \textit{rep}, so a detector with a poor \textit{rep} leads to a poor \textit{ms}. Here the number of detected keypoints is limited to 500 for all methods. As done in \cite{detone18superpoint,ono2018lf}, we replace the overlap score of \cite{mikolajczyk2005comparison} with a 5-pixel distance threshold to compute correspondences. Following \cite{yi2016lift}, we also modify the matching score definition of \cite{mikolajczyk2005comparison} to run a greedy bipartite-graph matching on all descriptors, and not just the descriptor pairs whose distance is below an arbitrary threshold. We do so to be able to compare all state-of-the-art methods even when their descriptor dimension and range vary significantly. (More details in Appendix.) \subsection{Datasets} All images are resized to 480$\times$640 pixels and the image pair transformations are rectified accordingly. \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig13.png} \caption{Left-Right: HPatches: planar viewpoint. Webcam: light. HPatches: rotation. HPatches: scale.
Strecha: 3D viewpoint.} \label{fig:datasets} \end{figure} \textbf{General performances.} The HPatches dataset \cite{balntas2017hpatches} gathers a subset of standard evaluation images such as DTU and OxfordAffine \cite{aanaes2012interesting,mikolajczyk2005performance}: it provides a total of 696 images, 6 images for each of 116 scenes, and the corresponding homographies between the images of a same scene. For 57 of these scenes, the main changes are photometric; the remaining 59 show significant geometric deformations due to viewpoint changes on planar scenes. \textbf{Illumination Robustness.} The Webcam dataset \cite{verdie2015tilde} gathers static outdoor scenes with drastic natural light changes, contrary to HPatches which mostly holds artificial light changes in indoor scenes. \textbf{Rotation and Scale Robustness.} We derive two datasets from HPatches. For each of the 116 scenes, we keep the first image and rotate it with angles from $0^{\circ}$ to $210^{\circ}$ with an interval of $40^{\circ}$. Four zoomed-in versions of the image are generated with scales $[1.25, 1.5, 1.75, 2]$. We release these two datasets together with their ground-truth homographies for future comparisons. \textbf{3D Viewpoint Robustness.} We use three Strecha scenes \cite{strecha2008benchmarking} with increasing viewpoint changes: \textit{Fountain, Castle entry, Herzjesu-P8}. The viewpoint changes proposed by HPatches are limited to planar scenes, which does not reflect the complexity of 3D structures. Since the ground-truth depths are not available anymore, we use a COLMAP \cite{schonberger2016structure} 3D reconstruction to obtain ground-truth scaleless depth. We release the obtained depth maps and camera poses together with the evaluation code. ELF robustness is additionally tested in the CVPR19 Image Matching Challenge \cite{cvpr19challenge} (see the results section). \subsection{Baselines} We describe the rationale behind the evaluation.
The tests run on a Quadro M2200 with Tensorflow 1.4, Cuda 8, Cudnn 6 and Opencv 3.4. We use the OpenCV implementations of SIFT, SURF, ORB, KAZE and MSER with the default parameters, and the authors' code for TILDE, LIFT, SuperPoint and LF-Net with the provided models and parameters. When comparing detectors in the feature matching pipeline, we measure their matching score with both their original descriptor and ELF's simple descriptor. For MSER and TILDE, we use the VGG simple descriptor. \textbf{Architecture influence.} ELF is tested on five networks: three classification ones trained on ImageNet (AlexNet, VGG, Xception \cite{krizhevsky2012imagenet, simonyan2014very,chollet17xception}) as well as the trained SuperPoint and LF-Net descriptor networks. Each variant is named after its network, prefixed with ELF (e.g. ELF-VGG). The paper compares the influence of i) the architecture for a fixed task (ELF-AlexNet \cite{krizhevsky2012imagenet} \textit{vs.} ELF-VGG \cite{simonyan2014very} \textit{vs.} ELF-Xception \cite{chollet17xception}), ii) the task (ELF-VGG \textit{vs.} ELF-SuperPoint (SP) descriptor), iii) the training dataset (ELF-LFNet on phototourism \textit{vs.} ELF-SP on MS-COCO). This study is being refined with more independent comparisons of tasks, datasets and architectures, soon available in a journal extension. We use the authors' code and pre-trained models, which we convert to Tensorflow \cite{abadi2016tensorflow} except for LF-Net. We search the blurring parameters $(\mu_{thr}, \sigma_{thr})$, $(\mu_{noise}, \sigma_{noise})$ in the range $[\![3,21]\!]^2$ and the NMS parameters $(w_{NMS}, b_{NMS})$ in $[\![4,13]\!]^2$. \textbf{Individual components comparison.} Individual detectors are compared on the matchability of their detections, all described with the simple VGG-$pool_3$ descriptor. This way, the \textit{m.s.} only depends on the detection performance since the description is fixed for all detectors.
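The simple descriptor used here amounts to bilinear interpolation of a feature map at the keypoint locations. A numpy sketch follows (the helper name and the `stride` between image and feature-map coordinates are illustrative assumptions):

```python
import numpy as np

def interpolate_descriptors(fmap, kps, stride=8):
    """Bilinearly interpolate a feature map of shape (Hf, Wf, C) at
    keypoint image coordinates (x, y); `stride` is the assumed
    downsampling factor between image and feature map."""
    Hf, Wf, C = fmap.shape
    desc = np.zeros((len(kps), C))
    for n, (x, y) in enumerate(kps):
        fx, fy = x / stride, y / stride
        x0 = int(np.clip(fx, 0, Wf - 2))
        y0 = int(np.clip(fy, 0, Hf - 2))
        ax, ay = fx - x0, fy - y0
        desc[n] = ((1 - ay) * (1 - ax) * fmap[y0, x0]
                   + (1 - ay) * ax * fmap[y0, x0 + 1]
                   + ay * (1 - ax) * fmap[y0 + 1, x0]
                   + ay * ax * fmap[y0 + 1, x0 + 1])
    # L2-normalise so distances are comparable across keypoints.
    return desc / np.linalg.norm(desc, axis=1, keepdims=True)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 10, 16))      # stand-in for a pooled map
desc = interpolate_descriptors(fmap, [(16, 24)], stride=8)
```

The interpolation makes the cost per keypoint $O(1)$ in the feature map size, which is why this scheme is preferred over patch-based description in Section 2.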
The comparison between ELF and recent deep methods raises the question of whether triplet-like losses are relevant to train CNN descriptors. Indeed, these losses constrain the CNN features directly so that matching keypoints are near each other in descriptor space. Simpler losses, such as cross-entropy for classification, only constrain the CNN output on the task while leaving the intermediate representation up to the CNN. The ELF-VGG detector is also integrated with existing descriptors. This evaluates how the CNN's self-learned feature localisation compares with hand-crafted and learned ones. \textbf{Gradient Baseline.} Visually, the feature gradient map is reminiscent of the image gradients computed with the Sobel or Laplacian operators. We run two variants of our pipeline where we replace the feature gradient with them. This aims at showing whether CNN feature gradients embed more information than image intensity gradients. \section{Results} Experiments show that ELF compares with the state-of-the-art on HPatches and demonstrates similar robustness properties to recent learned methods. It generates saliency maps visually akin to a Laplacian on very structured images (HPatches) but proves to be more robust on outdoor scenes with natural conditions (Webcam). When integrated with existing feature descriptors, ELF boosts their matching score. Even integrating ELF's simple descriptor improves it, with the exception of SuperPoint for which results are equivalent. This sheds new light on the representations learnt by CNNs and suggests that deep description methods may underexploit the information embedded in their trained networks. Another interpretation is that the current metrics are no longer relevant for deep learning methods: all can detect repeatable keypoints with more or less the same performance. Even though the matchability of the points (\textit{m.s}) is a bit more discriminative, neither expresses how `useful' the \textit{kp} are for the end-goal task.
One way to do so is to evaluate an end-goal task (\textit{e.g.} Structure-from-Motion). However, for the evaluation to be rigorous, all the other steps should be fixed across papers. Recently, the Image Matching CVPR19 workshop proposed such an evaluation, but it is not fully automatic yet. These results also challenge whether current descriptor-training losses are a strong enough signal to constrain CNN features better than a simple cross-entropy. \begin{figure}[htb] \centering \hbox{ \includegraphics[width=\linewidth]{method2legend.png}} \hbox{ \includegraphics[width=\linewidth]{fig5_hpatch.png}} \hbox{ \includegraphics[width=\linewidth]{fig5_webcam.png}} \caption{Top-Down: HPatches-Webcam. Left-Right: repeatability, matching score.} \label{fig:hpatch_gle_perf} \end{figure} The tabular version of the following results is provided in Appendix. The graph results are better seen in color on a computer screen. Unless mentioned otherwise, we compute repeatability for each detector, and the matching score of detectors with their respective descriptors, when they have one. We use the ELF-VGG-$pool_4$ descriptor for TILDE, MSER, ELF-VGG, ELF-SuperPoint and ELF-LFNet. We use AlexNet and Xception feature maps to build their respective simple descriptors. The meta-parameters for each variant are provided in Appendix. \textbf{General performances.} Figure \ref{fig:hpatch_gle_perf} (top) shows that the \textit{rep} variance is low across detectors whereas \textit{ms} is more discriminative, hence the validation method (Section 4.1). On HPatches, SuperPoint (SP) reaches the best \textit{rep}-\textit{ms} [68.6, 57.1], closely followed by ELF (e.g. ELF-VGG: [63.8, 51.8]) and TILDE [66.0, 46.7]. In general, we observe that learning-based methods all outperform hand-crafted ones. Still, LF-Net and LIFT curiously underperform on HPatches: one reason may be that the data they are trained on differs too much from this dataset.
LIFT is trained on outdoor images only and LF-Net on either indoor or outdoor datasets, whereas HPatches is made of a mix of them. We compute metrics for both LF-Net models and report the highest one (indoor). Even though LF-Net and LIFT fall behind the top learned methods, they still outperform hand-crafted ones, which suggests that their frameworks learn feature-specific information that hand-crafted methods cannot capture. This supports the recent direction towards trained detectors and descriptors. \textbf{Light Robustness.} Again, \textit{ms} is a better discriminant on Webcam than \textit{rep} (Figure \ref{fig:hpatch_gle_perf}, bottom). ELF-VGG reaches the top \textit{rep}-\textit{ms} [53.2, 43.7], closely followed by TILDE [52.5, 34.7], which was the state-of-the-art detector. Overall, there is a performance degradation ($\sim$20\%) from HPatches to Webcam. HPatches holds images with standard features such as corners that state-of-the-art methods are made to recognise either by definition or by supervision. There are fewer such features in the Webcam dataset because the natural lighting blurs them. There are also strong intensity variations that these models do not handle well. One reason may be that the learning-based methods never saw such lighting variations in their training set. But this assumption is rejected, as we observe that even SuperPoint, which is trained on Coco images, outperforms LIFT and LF-Net, which are trained on outdoor images. Another justification can be that what matters most is the pixel distribution the network is trained on, rather than the image content. The top methods are the classifier-based ELF variants and SuperPoint: the first are trained on the huge ImageNet dataset and benefit from heavy data augmentation. SuperPoint also employs a considerable data augmentation strategy to train its network.
Thus these networks may cover a much wider pixel distribution, which would explain their robustness to pixel distribution changes such as light modifications. \textbf{Architecture influence.} ELF is tested on three classification networks as well as the descriptor networks of SuperPoint and LF-Net (Figure \ref{fig:hpatch_gle_perf}, bars under `ELF'). For a fixed training task (classification) on a fixed dataset (ImageNet), VGG, AlexNet and Xception are compared. As could be expected, the network architecture has a critical impact on the detection, and ELF-VGG outperforms the other variants. The \textit{rep} gap can be explained by the fact that AlexNet is made of wider convolutions than VGG, which induces a higher loss of resolution when computing the gradient. As for \textit{ms}, the higher-dimensional representation space of VGG may help build more informative features, which provide a stronger signal to backpropagate. This could also justify why ELF-VGG outperforms ELF-Xception, which has fewer parameters. Another explanation is that ELF-Xception's gradient maps seem smoother. Salient locations are then less emphasized, which makes the keypoint detection harder. One could hint at the depth-wise convolutions to explain this visual aspect, but we could not find an experimental way to verify it. Surprisingly, ELF-LFNet outperforms the original LF-Net on both HPatches and Webcam, and the ELF-SuperPoint variant reaches results similar to the original. \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig7_scale.png} \caption{HPatches scale. Left-Right: rep, ms.} \label{fig:robust_scale} \end{figure} \textbf{Scale Robustness.} ELF-VGG is compared with state-of-the-art detectors and their respective descriptors (Figure \ref{fig:robust_scale}). Repeatability is mostly stable for all methods: SIFT and SuperPoint are the most invariant, whereas ELF follows the same variations as LIFT and LF-Net.
Once again, \textit{ms} better assesses the detectors' performance: SuperPoint is the most robust to scale changes, followed by LIFT and SIFT. ELF and LF-Net lose 50\% of their matching score as the scale increases. It is surprising to observe that LIFT is more scale-robust than LF-Net when the latter's overall performance is higher. A reasonable explanation is that LIFT detects keypoints at 21 scales of the same image whereas LF-Net only runs its detector CNN on 5 scales. Nonetheless, ELF outperforms LF-Net without manual multi-scale processing. \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig7_angle.png} \caption{HPatches rotation. Left-Right: rep, ms.} \label{fig:robust_rotation} \end{figure} \textbf{Rotation Robustness.} Even though \textit{rep} shows little variation (Figure \ref{fig:robust_rotation}), all learned methods' \textit{ms} crashes and only SIFT survives the rotation changes. This can be explained by the explicit rotation estimation step of SIFT. However, LIFT and LF-Net also run such a computation. This suggests that either SIFT's hand-crafted orientation estimation is more accurate, or that HOGs are more rotation invariant than learned features. LF-Net still performs better than LIFT: this may be because it learns the keypoint orientation on the keypoint feature representation rather than on the keypoint pixels, as done in LIFT. Not surprisingly, ELF's simple descriptor is not rotation invariant, as the convolutions that make up the CNN are not. This also explains why SuperPoint crashes in a similar manner. These results suggest that the orientation learning step in LIFT and LF-Net is needed, but its robustness could be improved.
\begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig7_strecha.png} \caption{Robustness analysis: 3D viewpoint.} \label{fig:robust_strecha} \end{figure} \textbf{3D Viewpoint Robustness.} While SIFT shows a clear advantage in pure-rotation robustness, it displays a similar degradation as the other methods on realistic rotation-and-translation over 3D structures. Figure \ref{fig:robust_strecha} shows that all methods degrade uniformly. One could assume that this small data sample is not representative enough to run such a robustness analysis. However, we think that these results rather suggest that all methods have the same robustness to 3D viewpoint changes. Even though the previous analyses allow us to rank the different feature matching pipelines, each has advantages over the others in certain situations: ELF or SuperPoint on general homography matches, SIFT on rotation robustness. This is why this paper only aims at showing that ELF reaches the same performances and shares similar properties with existing methods, as there is no generic ranking criterion. The recent evaluation run by the CVPR19 Image Matching Challenge \cite{cvpr19challenge} supports these conclusions. \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig11.png} \caption{Left-Middle-Right bars: original method, integration of ELF detection, integration of ELF description.} \label{fig:ind_component} \end{figure} \textbf{Individual components performance.} First, all methods' descriptors are replaced with the simple ELF-VGG-$pool_3$ one. We then compute their new \textit{ms} and compare it to ELF-VGG on HPatches and Webcam (Figure \ref{fig:ind_component}, stripes). The description is based on $pool_3$ instead of $pool_4$ here, for it produces better results for the other methods while preserving ours. ELF reaches a higher \textit{ms} [51.3] than all methods except SuperPoint [53.7], for which it is comparable.
This shows that ELF is as relevant, if not more so, than previous hand-crafted or learned detectors. This naturally leads to the question: \textit{'What kind of keypoints does ELF detect?'} There is currently no answer to this question, as it is complex to explicitly characterize the properties of the pixel areas around keypoints. Hence the open question \textit{'What makes a good keypoint?'} mentioned at the beginning of the paper. Still, we observe that ELF activates mostly on high-intensity-gradient areas, although not on all of them. One explanation is that since the CNN is trained on the vision task, it learns to ignore image regions useless for that task. This results in killing the gradient signals in areas that may be unsuited for matching. Another surprising observation regards CNN descriptors: SuperPoint (SP) keypoints are described with the SP descriptor on the one hand, and with the simple ELF-VGG one on the other. Comparing the two resulting matching scores is one way to compare the SP and ELF descriptors. Results show that both approaches lead to similar \textit{ms}. This result is surprising because SP specifically trains a description CNN so that its feature map is suitable for keypoint description \cite{choy2016universal}. In VGG training, there are no explicit constraints on the features from the cross-entropy loss. Still, both feature maps reach similar numerical description performance. This raises the question of whether contrastive-like losses, whose inputs are CNN features, can better constrain the CNN representation than simpler losses, such as cross-entropy, whose inputs are classification logits. This also shows that there is more to CNNs than only the task they are trained on: they embed information that can prove useful for unrelated tasks. Although the simple descriptor was defined for evaluation purposes, these results demonstrate that it can be used as a description baseline for feature extraction.
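The simple descriptor discussed above amounts to reading a CNN feature map at each keypoint location. The sketch below is not the authors' code; it is a minimal numpy illustration of the idea, where the bilinear interpolation and the L2 normalisation are assumed details:

```python
import numpy as np

def simple_descriptor(feature_map, keypoints):
    """Describe each (y, x) keypoint by bilinearly interpolating a CNN
    feature map of shape (H, W, C) at that location, then L2-normalising."""
    H, W, C = feature_map.shape
    descs = []
    for y, x in keypoints:
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
        dy, dx = y - y0, x - x0
        # Weighted sum of the four surrounding feature vectors.
        d = ((1 - dy) * (1 - dx) * feature_map[y0, x0]
             + (1 - dy) * dx * feature_map[y0, x1]
             + dy * (1 - dx) * feature_map[y1, x0]
             + dy * dx * feature_map[y1, x1])
        descs.append(d / (np.linalg.norm(d) + 1e-12))
    return np.stack(descs)
```

Matching then reduces to nearest-neighbour search between the normalised vectors of two images, which is how the \textit{ms} comparisons above are obtained.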
The integration of ELF detection with the other methods' descriptors (Figure \ref{fig:ind_component}, circle) boosts their \textit{ms}. \cite{yi2016lift}~previously suggested that there may be a correlation between the detector and the descriptor within the same method, i.e. the LIFT descriptor is trained to describe only the keypoints output by its detector. However, these results show that ELF can easily be integrated into existing pipelines and even boost their performance. \begin{figure}[htb] \centering \hbox{ \includegraphics[width=\linewidth]{fig12_legend.png}} \hbox{ \includegraphics[width=\linewidth]{fig12.png}} \caption{Gradient baseline.} \label{fig:gradient_perf} \end{figure} \textbf{Gradient Baseline.} The saliency map used in ELF is replaced with simple Sobel or Laplacian gradient maps. The rest of the detection pipeline stays the same and we compute their performance (Figure \ref{fig:gradient_perf}, left). They are completed with simple ELF descriptors from the VGG, AlexNet and Xception networks. These new hybrids are then compared to their respective ELF variants (right). Results show that these simpler gradients can detect systematic keypoints with comparable \textit{rep} on very structured images such as HPatches. However, the ELF detector better overcomes light changes (Webcam). On HPatches, the Laplacian variant reaches a similar \textit{ms} to ELF-VGG (55 \textit{vs} 56) and outperforms ELF-AlexNet and ELF-Xception. These scores can be explained by the image structure: for heavily textured images, high-intensity gradient locations are relevant enough keypoints. However, on Webcam, all ELF detectors outperform Laplacian and Sobel by 100\%. This shows that ELF is more robust than the Laplacian and Sobel operators. Also, the feature gradient is a sparse signal which is better suited for local maxima detection than the much smoother Laplacian operator (Figure \ref{fig:sobel_visu}).
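The gradient baseline above can be sketched in a few lines; this is a hypothetical numpy-only reconstruction, not the paper's exact pipeline (the 3x3 Sobel kernels and the window-based local-maxima selection are standard choices assumed here):

```python
import numpy as np

def sobel_saliency(img):
    """Gradient-magnitude saliency map from 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    # Correlate with the two kernels via shifted slices (sign is irrelevant
    # for the magnitude).
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def local_maxima(saliency, window=3, n_keypoints=50):
    """Keep pixels equal to the maximum over their window, ranked by saliency."""
    h, w = saliency.shape
    r = window // 2
    padded = np.pad(saliency, r, mode="constant", constant_values=-np.inf)
    neigh_max = np.full((h, w), -np.inf)
    for di in range(window):
        for dj in range(window):
            neigh_max = np.maximum(neigh_max, padded[di:di + h, dj:dj + w])
    ys, xs = np.where(saliency >= neigh_max)
    order = np.argsort(saliency[ys, xs])[::-1][:n_keypoints]
    return list(zip(ys[order], xs[order]))
```

Swapping `sobel_saliency` for a Laplacian map, or for ELF's feature-gradient map, only changes the first function; the local-maxima step is shared, which is what makes the comparison in Figure \ref{fig:gradient_perf} meaningful.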
\begin{figure}[thb] \centering \includegraphics[height=3cm]{fig5_sobel_similar_ter.png} \caption{The feature gradient (right) provides a sparser signal than the Laplacian (middle) and is more selective of salient areas.} \label{fig:sobel_visu} \end{figure} \textbf{Qualitative results.} In Figure \ref{fig:matching_pic}, green lines show putative matches based only on nearest-neighbour matching of descriptors. More qualitative results are available in the video\footnote{\url{https://youtu.be/oxbG5162yDs}}. \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig6_matching_ter.png} \caption{Green lines show putative matches of the simple descriptor before RANSAC-based homography estimation.} \label{fig:matching_pic} \end{figure} \textbf{CVPR19 Image Matching Challenge \cite{cvpr19challenge}.} This challenge evaluates detection/description methods on two standard tasks: 1) wide stereo matching and 2) structure from motion from small image sets. The \textit{matching score} evaluates the first task, and camera pose estimation is used for both tasks. Both applications are evaluated on the photo-tourism image collections of popular landmarks \cite{thomee59yfcc100m, heinly2015reconstructing}. More details on the metric definitions are available on the challenge website \cite{cvpr19challenge}. \textit{Wide stereo matching:} Task 1 matches image pairs across wide baselines. It is evaluated with the keypoint \textit{ms} and the relative camera pose estimation between two images. The evaluators run COLMAP to reconstruct dense `ground-truth' depth, which they use to translate keypoints from one image to another and compute the matching score. They use the RANSAC inliers to estimate the camera pose and measure performance with the ``angular difference between the estimated and ground-truth vectors for both rotation and translation. To reduce this to one value, they use a variable threshold to determine each pose as correct or not, then compute the area under the curve up to the angular threshold.
This value is thus the mean average precision up to x, or mAPx. They consider 5, 10, 15, 20, and 25 degrees" \cite{cvpr19challenge}. Submissions can contain up to 8000 keypoints; we submitted entries to the sparse category, i.e. methods with up to 512 keypoints. \begin{figure}[thb] \centering \includegraphics[width=\linewidth]{fig14.png} \caption{\textit{Wide stereo matching.} Left: matching score (\%) of sparse methods (up to 512 keypoints) on photo-tourism. Right: Evolution of mAP of camera pose for increasing tolerance threshold (degrees).} \label{fig:cvpr19_task1} \end{figure} Figure \ref{fig:cvpr19_task1} (left) shows the \textit{ms} (\%) of the submitted sparse methods. It compares ELF-VGG detection with DELF \cite{noh2017largescale} and SuperPoint, where ELF is completed with either the simple descriptor from $pool_3$ or $pool_4$, or with SIFT. The variants are dubbed ELF-256, ELF-512 and ELF-SIFT respectively. This allows us to sketch a simple comparison of descriptor performance between the simple descriptor and standard SIFT. As previously observed on HPatches and Webcam, ELF and SuperPoint reach similar scores on Photo-Tourism. ELF's performance slightly increases from 25\% to 26.4\% when switching descriptors from VGG-$pool_3$ to VGG-$pool_4$. One explanation is that the feature space size is doubled from the first to the second. This would allow the $pool_4$ descriptors to be more discriminative. However, the 1.4\% gain may not be worth the additional memory use. Overall, the results show that ELF can compare with the SoA on this additional dataset, which exhibits more illumination and viewpoint changes than HPatches and Webcam. This observation is reinforced by the camera pose evaluation (Figure \ref{fig:cvpr19_task1}, right). SuperPoint shows a slight advantage over the others that increases from 1\% to 5\% across the error tolerance threshold, whereas ELF-256 exhibits a minor under-performance.
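The mAPx metric quoted above can be made concrete with a short sketch. The 1-degree discretisation step below is an assumption, since the challenge description quoted here does not fix how the area under the accuracy-vs-threshold curve is sampled:

```python
import numpy as np

def map_up_to(angular_errors_deg, max_threshold_deg, step=1.0):
    """Mean average precision up to a threshold: average, over thresholds
    t = step, 2*step, ..., max_threshold_deg, of the fraction of poses whose
    angular error is at most t (i.e. the normalised area under the
    accuracy-vs-threshold curve)."""
    errors = np.asarray(angular_errors_deg, dtype=float)
    thresholds = np.arange(step, max_threshold_deg + step / 2, step)
    accuracies = [(errors <= t).mean() for t in thresholds]
    return float(np.mean(accuracies))
```

For instance, with pose errors of 0.5, 1.5 and 30 degrees, mAP5 averages the accuracy at 1, 2, 3, 4 and 5 degrees, so a single gross failure caps the score at 2/3 regardless of the tolerance.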
Still, these results show that ELF compares with SoA performance even though it is not trained explicitly for detection/description. \begin{figure}[thb] \centering \includegraphics[width=0.7\linewidth]{fig15.png} \caption{\textit{SfM from small subsets}. Evolution of mAP of camera pose for increasing tolerance threshold.} \label{fig:cvpr19_task2} \end{figure} \textit{Structure-from-Motion from small subsets.} Task 2 ``proposes to build SfM reconstructions from small (3, 5, 10, 25) subsets of images and use the poses obtained from the entire (much larger) set as ground truth" \cite{cvpr19challenge}. Figure \ref{fig:cvpr19_task2} shows that SuperPoint reaches performance twice as high as the next best method, ELF-SIFT. This suggests that when few images are available, SuperPoint performs better than the other approaches. One explanation is that even in 'sparse mode', \textit{i.e.} when the number of keypoints is restricted to 512, SuperPoint samples points more densely than the others ($\sim$383 \textit{v.s.} $\sim$210 for the others). Thus, SuperPoint provides more keypoints to triangulate, i.e. more 2D-3D correspondences to use when estimating the camera pose. This suggests that a high keypoint density is a crucial characteristic of the detection method for Structure-from-Motion. In this regard, ELF still has room for improvement compared to SuperPoint. \section{Conclusion} We have introduced ELF, a novel method to extract feature locations from pre-trained CNNs, with no further training. Extensive experiments show that it performs as well as state-of-the-art detectors. It can easily be integrated into existing matching pipelines and proves to boost their matching performance. Even when completed with a simple feature-map-based descriptor, it turns into a competitive feature matching pipeline. These results shed new light on the information embedded inside trained CNNs.
This work also raises questions about the descriptor training of deep-learning approaches: whether their losses actually constrain the CNN to learn better features than the ones it would learn on its own to complete a vision task. Preliminary results show that the CNN architecture, the training task and the dataset have a substantial impact on the detector performance. A further analysis of these correlations is the subject of future work. {\small \bibliographystyle{acm}
use futures::future::Future;
use futures::Stream;
use grpcio::{ChannelBuilder, Environment};
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::sync::Arc;

use point_viewer::data_provider::{DataProviderFactory, OnDiskDataProvider};
use point_viewer::iterator::{ParallelIterator, PointQuery};
use point_viewer::octree::Octree;
use point_viewer_grpc::proto_grpc::OctreeClient;
use point_viewer_grpc::service::start_grpc_server;
use point_viewer_grpc_proto_rust::proto;

// Number of points streamed per batch.
const BATCH_SIZE: usize = 1_000_000;

fn main() {
    let matches = clap::App::new("octree_benchmark")
        .args(&[
            clap::Arg::with_name("port")
                .about("Port for the server to listen on for connections. [50051]")
                .long("port")
                .takes_value(true),
            clap::Arg::with_name("no-client")
                .about("Do not actually send points, only read them on the server.")
                .long("no-client")
                .takes_value(false),
            clap::Arg::with_name("num-points")
                .about("Number of points to stream. [50000000]")
                .long("num-points")
                .takes_value(true),
            clap::Arg::with_name("num-threads")
                .about("Number of threads, num(cpus) - 1 by default")
                .long("num-threads")
                .takes_value(true),
            clap::Arg::with_name("buffer-size")
                .about("Buffer capacity, 4 by default")
                .long("buffer")
                .takes_value(true),
            clap::Arg::with_name("octree_directory")
                .about("Input directory of the octree directory to serve.")
                .index(1)
                .required(true),
        ])
        .get_matches();

    let octree_directory = PathBuf::from(
        matches
            .value_of("octree_directory")
            .expect("octree_directory not given"),
    );
    let num_points = usize::from_str(matches.value_of("num-points").unwrap_or("50000000"))
        .expect("num-points needs to be a number");
    let num_threads = usize::from_str(
        matches
            .value_of("num-threads")
            .unwrap_or(&(std::cmp::max(1, num_cpus::get() - 1)).to_string()),
    )
    .expect("num-threads needs to be a number");
    let buffer_size = usize::from_str(matches.value_of("buffer-size").unwrap_or("4"))
        .expect("buffer-size needs to be a number");

    if matches.is_present("no-client") {
        server_benchmark(&octree_directory, num_points, num_threads, buffer_size)
    } else {
        let port = matches.value_of_t("port").unwrap_or(50051);
        full_benchmark(&octree_directory, num_points, port)
    }
}

fn server_benchmark(
    octree_directory: &Path,
    num_points: usize,
    num_threads: usize,
    buffer_size: usize,
) {
    let octree = Octree::from_data_provider(Box::new(OnDiskDataProvider {
        directory: octree_directory.into(),
    }))
    .unwrap_or_else(|_| {
        panic!(
            "Could not create octree from '{}'",
            octree_directory.display()
        )
    });
    let mut counter: usize = 0;
    let mut points_streamed_m = 0;
    let all_points = PointQuery {
        attributes: vec!["color", "intensity"],
        ..Default::default()
    };
    let octree_slice: &[Octree] = std::slice::from_ref(&octree);
    let mut parallel_iterator = ParallelIterator::new(
        octree_slice,
        &all_points,
        BATCH_SIZE,
        num_threads,
        buffer_size,
    );
    eprintln!("Server benchmark:");
    let _result = parallel_iterator.try_for_each_batch(move |points_batch| {
        counter += points_batch.position.len();
        if points_streamed_m < counter / 1_000_000 {
            points_streamed_m = counter / 1_000_000;
            eprintln!("Streamed {}M points", points_streamed_m)
        };
        if counter >= num_points {
            std::process::exit(0)
        }
        Ok(())
    });
}

// This test works with number of threads = num(cpus) - 1 and a batch size such
// that the proto is less than 4 MB.
fn full_benchmark(octree_directory: &Path, num_points: usize, port: u16) {
    let data_provider_factory = DataProviderFactory::new();
    let mut server = start_grpc_server("0.0.0.0", port, octree_directory, data_provider_factory);
    server.start();

    let env = Arc::new(Environment::new(1));
    let ch = ChannelBuilder::new(env).connect(&format!("localhost:{}", port));
    let client = OctreeClient::new(ch);

    let req = proto::GetAllPointsRequest::new();
    let receiver = client.get_all_points(&req).unwrap();

    let mut counter: usize = 0;
    'outer: for rep in receiver.wait() {
        for _pos in rep.expect("Stream error").get_positions().iter() {
            if counter % 1_000_000 == 0 {
                eprintln!("Streamed {}M points", counter / 1_000_000);
            }
            counter += 1;
            if counter == num_points {
                break 'outer;
            }
        }
    }
    let _ = server.shutdown().wait();
}
Q: Powershell use Get-WinEvent with hashtable to query very specific time range

I'm trying to make a PowerShell script that essentially automates the account lockout tools. Ideally I'll be able to get a fairly efficient query that can identify recently locked out accounts, then retrieve that data from our DCs and probably send an email letting us know who was locked out, with a copy of the "message" from the security log. Here's what I have so far:

I read that to use Get-WinEvent we have to use a hashtable, so I created a hashtable object and expanded my datetime variables into the hashtable, and they appear correct; if I run something like $hash.starttime | gm, I can confirm that it's still a system.datetime object.

$LockedOut = Get-ADUser -Properties AccountLockoutTime,LastBadPasswordAttempt,BadPwdCount,LockedOut -Filter * | ?{$_.AccountLockOutTime -ge (Get-Date).AddHours(-3)}
$LockedOut | ft name,samaccountname,LockedOut,AccountLockoutTime,BadPwdCount,LastBadPasswordAttempt

$DomainControllers = Get-ADDomainController -Filter *

ForEach($lockeduser in $LockedOut)
{
    $lockeduser.Name
    ForEach($DC in $DomainControllers.name)
    {
        $before = ($lockeduser.AccountLockoutTime.AddMinutes(1)).date
        $after = ($lockeduser.AccountLockoutTime.AddMinutes(-1)).date
        $hash = $null
        $hash = @{}
        $hash.Add("Logname", "security")
        $hash.Add("Starttime", $after)
        $hash.Add("Endtime", $before)
        $DC
        $messagecriteria = $lockeduser.Name
        $message = Get-WinEvent -ComputerName $DC -FilterHashtable $hash | ?{$_.Message -like "*$messagecriteria*"}
        $message
    }
    "----------------------------------------------------------------------------------------------------------"
}

But when I run the query I only get back

Get-WinEvent : No events were found that match the specified selection criteria.
At line:19 char:20
+         $message = Get-WinEvent -ComputerName $DC -FilterHashtable $hash | ?{$_ ...
+                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (:) [Get-WinEvent], Exception
    + FullyQualifiedErrorId : NoMatchingEventsFound,Microsoft.PowerShell.Commands.GetWinEventCommand

A: I seem to have gotten it to work. I think some variable properties were sticking in memory and causing the query to supply the wrong timeframes. (In hindsight, the likely culprit is the .date call in the original script: the Date property truncates a DateTime to midnight, so Starttime and Endtime usually ended up identical and the query could match nothing.) Here's the finished product for anyone interested. The query has been tightened up to only pull 2 seconds' worth of logs to parse, and to only look back over the last 10 minutes. There's also a segment for writing the results to a custom PSObject, which is useful for exporting to CSV or HTML. The only thing left is that I'll insert an HTML header and have this write to a table that will come across in an email.

$LockedOut = Get-ADUser -Properties AccountLockoutTime,LastBadPasswordAttempt,BadPwdCount,LockedOut -Filter * | ?{$_.AccountLockOutTime -ge (Get-Date).AddMinutes(-10)}
$LockedOut | ft name,samaccountname,LockedOut,AccountLockoutTime,BadPwdCount,LastBadPasswordAttempt

$DomainControllers = Get-ADDomainController -Filter *
$results = $null

ForEach($lockeduser in $LockedOut)
{
    $lockedusername = $lockeduser.name
    ForEach($DC in $DomainControllers.name)
    {
        $starttime = $lockeduser.AccountLockoutTime.AddSeconds(-1)
        $endtime = $lockeduser.AccountLockoutTime.AddSeconds(1)
        $hash = $null
        $hash = @{}
        $hash.Add("Logname", "security")
        $hash.Add("Starttime", $starttime)
        $hash.Add("Endtime", $endtime)
        "$lockedusername - Locating Events between $starttime and $endtime on $DC..."
        $messagecriteria = $lockeduser.Name
        $message = Get-WinEvent -ComputerName $DC -FilterHashtable $hash | ?{$_.Message -like "*$messagecriteria*"}
        $message | ft @{Expression={$ExecutionContext.InvokeCommand.ExpandString($lockeduser.Name)};Label="Name"}, `
            @{Expression={$ExecutionContext.InvokeCommand.ExpandString($lockeduser.SamAccountName)};Label="SAMID"},machinename,TimeCreated,ID,message
        $hash.Clear()
        $TC = $message.timecreated
        $ID = $message.id
        $messagetext = $message.message
        IF($message -ne $null)
        {
            ForEach($line in $message)
            {
                $obj = New-Object -TypeName PSObject
                $obj | Add-Member -NotePropertyName "Name" -NotePropertyValue $LockedUser.name
                $obj | Add-Member -NotePropertyName "SamID" -NotePropertyValue $LockedUser.SamAccountName
                $obj | Add-Member -NotePropertyName "DC" -NotePropertyValue $DC
                $obj | Add-Member -NotePropertyName "TimeCreated" -NotePropertyValue $line.TimeCreated
                $obj | Add-Member -NotePropertyName "Event ID" -NotePropertyValue $line.ID
                $obj | Add-Member -NotePropertyName "Message" -NotePropertyValue $line.message
                [Array]$results += $obj
            }
        }
    }
    "----------------------------------------------------------------------------------------------------------"
}

"Table of results"
$results | ft
Q: Avoiding requirement to type sudo ifdown br0 && sudo ifup br0

My network settings require me to type sudo ifdown br0 && sudo ifup br0 to kick it into life after a reboot. How can I avoid having to do this? My /etc/network/interfaces has:

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stop off
    bridge_maxwait 5

A: Remove auto eth0 from your /etc/network/interfaces file. Here's an example of a Proxmox VE host I have with bridged interfaces:

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 123.123.123.123
    netmask 255.255.255.0
    gateway 123.123.123.1
    dns-nameservers 8.8.8.8 8.8.4.4
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

Here vmbr0 is bridging eth0. eth0 is set to manual and doesn't start automatically, but vmbr0 does. Yours then should look like:

auto lo
iface lo inet loopback

iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stop off
    bridge_maxwait 5
\section{Conclusions and future work} In this paper we have presented PaGMO, a global optimisation software framework for parallel engineering optimisation developed within the Advanced Concepts Team at the European Space Agency. We have tested PaGMO on three hard, realistic interplanetary trajectory optimisation problems (TandEM, Cassini and Messenger), showing how PaGMO is able to automatically find the best known solutions for all three problems. With the help of a human-guided final pruning step, PaGMO was also able to locate the `real' trajectory for the Messenger probe, consisting of multiple resonant flybys at Mercury. A fourth benchmark problem, consisting in the optimisation of simple trajectories to a selection of 4406 near-Earth asteroids, was shown with the intent of highlighting PaGMO's parallelisation capabilities. Future work on PaGMO will concentrate on areas such as: \begin{itemize} \item extension of the computational capabilities via interfacing to popular massively parallel frameworks, such as MPI \cite{mpi} for scientific clusters and BOINC \cite{boinc} for distributed computing; \item exploration of the possibility of using GPGPU computing \cite{gpgpu} to speed up the most time-consuming parts of the optimisation process; \item implementing/interfacing additional optimisation algorithms; \item interfacing with machine learning packages (such as PyBrain \cite{pybrain}), for easy coding of artificial intelligence problems. \end{itemize} Some of these activities will be tackled within the Google Summer of Code 2010, in which an international group of university students will be working on PaGMO during the summer under the mentorship of the PaGMO development team, while being sponsored by Google.
PaGMO is Free Software, and it is available for download from the SourceForge website: \newline\newline \url{http://pagmo.sourceforge.net} \section{Introduction} With the introduction of mass-produced multi-core architectures, personal computers are becoming increasingly capable of performing parallel computations. Yet, the effort to parallelize algorithms is time-consuming and often not attractive, especially in scientific computing where software reuse is not as widespread a practice as in other fields of computing. The open-source project PaGMO (Parallel Global Multiobjective Optimiser) aims at filling this gap for optimisation algorithms by providing, through a generalization of the so-called island model (i.e. a coarse-grained approach to the parallelization of genetic algorithms) to all types of algorithms (population-based and not), a simple experimentation platform that allows scientists to easily code algorithms and problems without having to care at all about the underlying parallelization, which is provided `for free' by the PaGMO infrastructure. The resulting software platform, participating in the Google Summer of Code 2010 initiative, is described in this paper together with application examples on real-life engineering problems of interest to aerospace engineers. Recent results in global optimisation algorithms applied to the design of chemically-propelled interplanetary trajectories have shown how a straightforward application of off-the-shelf optimisation algorithms does not suffice to find satisfactory solutions for the most complex cases, such as the Messenger, Cassini or TandEM trajectories \cite{izzo}. While a wise use of the algorithms still provides useful information in these most complex cases, the final optimal solutions need a substantial amount of engineering knowledge to be found.
In this paper we show how the use of PaGMO allows different algorithms to cooperate on the solution of the same interplanetary trajectory problem, making it possible to find solutions even in the most difficult cases in a reasonable time. In particular, we demonstrate a fully automated search of the solution space based on the use of Differential Evolution, Simulated Annealing and local search in a cooperative fashion. Information is exchanged asynchronously between the solvers operating on parallel CPUs via the implementation of a generalized migration operator offered by PaGMO. We test this search strategy in the case of the Cassini, TandEM and Messenger trajectories as defined in the European Space Agency Global Trajectory optimisation Problems database (GTOP) \cite{gtop1,gtop2}. We show that the algorithms are able to locate interesting regions of the search space and in particular to find the possible resonances. In the case of TandEM and Cassini, an automatic pruning strategy is able to successfully identify the best known solutions. In the case of Messenger, the automated search locates a large number of possible solution clusters (due to the possible resonances at Mercury). A second run of the search focussed on one of these clusters yields satisfactory results and is, in particular, able to find the same strategy adopted by the actual Messenger mission. A last example is then presented, where 4406 simpler interplanetary trajectories are each optimised five times, taking advantage of PaGMO's parallelization capabilities, in order to locate in a reasonably short time promising preliminary targets for an asteroid sample return mission in the 2020-2050 time frame. \section{PaGMO} PaGMO is an optimisation framework developed within the Advanced Concepts Team of the European Space Agency.
Written in C++, PaGMO aims to provide an extensible infrastructure for defining optimisation problems (nonlinear, continuous, integer, mixed-integer, box-constrained, nonlinearly constrained and multi-objective optimisation are all supported), coupled with a wide arsenal of global and local optimisation algorithms - some of them coded directly within PaGMO, others called from external libraries through thin wrappers. At the time of this writing, PaGMO provides the following optimisation algorithms: \begin{itemize} \item global and local optimisation algorithms coded directly within PaGMO, including a simple genetic algorithm \cite{sga}, differential evolution \cite{de}, particle swarm optimisation \cite{pso}, adaptive neighbourhood simulated annealing \cite{sa_corana}, improved harmony search \cite{ihs}, compass search \cite{cs}, monotonic basin hopping \cite{mbh}, generalised multistart and Monte Carlo search \cite{monte-carlo}; \item wrapper for SNOPT \cite{snopt}; \item wrapper for IPOPT \cite{ipopt}; \item wrappers for algorithms from the NLopt library \cite{nlopt}, including Subplex \cite{subplex} (an extension of the classical Nelder-Mead method), COBYLA \cite{cobyla} and BOBYQA \cite{bobyqa}; \item wrappers for algorithms from the GSL library \cite{gsl}, including the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method \cite{bfgs}, the Fletcher-Reeves and Polak-Ribi\`{e}re nonlinear conjugate gradient methods \cite{conj_grad} and the classical Nelder-Mead method \cite{nelder_mead}; \item wrappers for algorithms from the SciPy library \cite{scipy} (only available in the Python bindings), including fmin (Nelder-Mead), L-BFGS-B \cite{lbfgsb}, sequential least-square programming \cite{slsqp} and the truncated Newton method \cite{tnc}.
\end{itemize} PaGMO provides automatic parallelisation of the optimisation process via a coarse-grained approach based on the island model \cite{island_model}, in which multiple optimisation instances of the same problem are launched at the same time, asynchronously exchanging information and improving the overall convergence properties of the optimisation. In PaGMO's implementation of the island model, each optimisation instance (i.e., each island) is launched in a separate thread of execution, thus automatically taking advantage of modern multiprocessor machines. The connections between islands are represented by a graph topology in which each node corresponds to an island and the edges represent routes through which candidate solutions can be communicated from one island to the other. The graph topologies can be either constructed by manually adding nodes and edges, or they can be selected among those already coded within PaGMO, including: \begin{itemize} \item popular topologies in the context of parallel population-based optimisation, such as fully connected, torus, cartwheel, lattice, hypercube, broadcast, and various types of ring topologies; \item small-world network topologies, such as the Barab\'{a}si-Albert \cite{ba_model} and Watts-Strogatz \cite{ws_model} models; \item $G\left(n,p \right)$ Erd\H{o}s-R\'{e}nyi random graph \cite{er_model}; \item custom topologies (such as the wheel rim topology described in \S\ref{sec:examples}). \end{itemize} Some of the topologies available in PaGMO are visualised in Figure \ref{fig:topologies}. Full control over the fine-grained details of the migration strategy (e.g., migration frequency and rate, selection and replacement policies) is provided. A preliminary study of the impact of the topology on the optimisation process can be found in \cite{topology}.
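To make the island model concrete, the toy sketch below runs several independent random searches ('islands') on a bidirectional ring and applies a best-replaces-worst migration after every epoch. It is a didactic sketch only, not PaGMO's implementation (which runs islands in separate threads with asynchronous migration and full algorithm choice):

```python
import random

def island_model(fitness, dim, bounds, n_islands=4, pop_size=10,
                 epochs=10, steps=100, seed=0):
    """Minimise `fitness` with n_islands hill-climbing searches on a
    bidirectional ring; after each epoch, every island's best candidate
    replaces the worst individual of both ring neighbours (migration)."""
    rng = random.Random(seed)
    lo, hi = bounds

    def new_point():
        return [rng.uniform(lo, hi) for _ in range(dim)]

    def mutate(p):
        return [min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo)))) for x in p]

    pops = [[new_point() for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(epochs):
        # Local evolution: each island evolves independently.
        for pop in pops:
            for _ in range(steps):
                i = rng.randrange(pop_size)
                cand = mutate(pop[i])
                if fitness(cand) < fitness(pop[i]):
                    pop[i] = cand
        # Migration along the ring: best of each island replaces the
        # worst individual of its two neighbours.
        bests = [min(pop, key=fitness) for pop in pops]
        for k, pop in enumerate(pops):
            for nb in ((k - 1) % n_islands, (k + 1) % n_islands):
                worst = max(range(pop_size), key=lambda i: fitness(pop[i]))
                pop[worst] = list(bests[nb])
    champion = min((p for pop in pops for p in pop), key=fitness)
    return champion, fitness(champion)
```

The point of the generalisation described above is that the inner "local evolution" loop can be any algorithm, population-based or not, while the migration operator stays the same; the ring topology here corresponds to the outer ring of the wheel rim setup used in the experiments.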
The class that contains the set of islands collaborating in an optimisation process, the topology and the migration policies is known in PaGMO as an \emph{archipelago}. \begin{figure*}[ht] \begin{center} \includegraphics[width=14cm]{figures/topologies} \caption{A selection of topologies available in PaGMO: ring topology (a), Barab\'{a}si-Albert model (b), Watts-Strogatz model (c) and Erd\H{o}s-R\'{e}nyi $G\left(n,p \right)$ random graph (d).} \label{fig:topologies} \end{center} \end{figure*} PaGMO ships with a number of implemented optimisation problems readily available for use, such as: \begin{itemize} \item classical continuous test functions, such as Rastrigin, Rosenbrock \cite{rosenbrock}, Schwefel, Griewank, Branin, Himmelblau, Lennard-Jones potential \cite{lj_potential} and Levy5; \item constrained continuous test functions from \cite{luksan_vlcek}; \item integer programming problems: Golomb ruler \cite{golomb}, 0-1 knapsack problem \cite{knapsack}; \item multi-objective optimisation test problems from \cite{nsga-ii}; \item all the chemical interplanetary spacecraft trajectory problems from the European Space Agency's GTOP database \cite{gtop1,gtop2}; \item an interplanetary multiple gravity assist low-thrust problem. \end{itemize} PaGMO's C++ capabilities are exposed to the high-level language Python, so that it is possible to instantiate problems, algorithms, topologies and islands from either a script or an interactive Python session. It is also possible to define new problems and algorithms directly from Python, thus allowing on one hand to rapidly prototype and evaluate new ideas, and on the other to leverage the rich ecosystem of freely-available scientific Python modules (e.g., numerical integrators, machine learning libraries, computer algebra systems, etc.). Coupled with the matplotlib plotting module and the enhanced Python shell IPython, PaGMO's Python bindings (which have been called PyGMO) offer a user-friendly interactive graphical experience. 
\section{Some examples} \label{sec:examples} As an example of the use of PaGMO to solve engineering problems, we report here the results of the application of the optimisation strategy described in the previous section to four selected trajectory optimisation problems. The first three problems are taken from the European Space Agency Global Trajectory optimisation (GTOP) database \cite{gtop1,gtop2}. The problems selected are among the most difficult proposed in the database and are included in the basic PaGMO distribution. They are all box-constrained, continuous, single-objective optimisation problems, representing a multiple gravity assist interplanetary trajectory with one deep space maneuver allowed in each trajectory leg. The search space includes launch windows spanning decades (see the GTOP database for the precise definitions of the allowed bounds on the launch and fly-by dates). The fourth problem is simpler, but admits a large number of different instances. We take advantage of PaGMO's parallelization to find solutions to 4406 different instances of the problem in a reasonable computing time. \paragraph{Experimental setup} All the optimisation problems were set up in an archipelago of 5-7 islands (depending on the number of available cores on the machine at the time of the experiment), equipped with a wheel rim topology (see Figure \ref{fig:rim}). The wheel rim topology consists of a classical bidirectional ring topology with an additional island at the center, fully connected to all the other islands. We chose to deploy global optimisation algorithms (namely, adaptive neighbourhood simulated annealing from \cite{sa_corana} and differential evolution from \cite{de}) on the ring, and a local optimisation algorithm (namely, the Subplex algorithm from \cite{subplex} as implemented in \cite{nlopt}) in the center.
The motivations behind these choices are the following: \begin{itemize} \item the ring topology is a proven and popular choice in the context of parallel population-based algorithms, as shown for instance in \cite{Homayounfaretal03, Starkweatheretal91, GordonWhitley93, CantuPaz00, CantuPazMejiaOlvera94, Izzoetal09, galapagos}; \item the additional island in the center receives through migration the best results of the global optimisation algorithms in the ring, refines them through a local search, and migrates them back to the ring. Its role is hence, on one hand, to improve the results of the global search, and on the other to inject back diversified candidate solutions into the ring; \item regarding the choice of the algorithms, both simulated annealing and differential evolution have proven to be effective for the optimisation of interplanetary spacecraft trajectories (as shown for instance in \cite{Izzoetal09}), whereas the derivative-free Subplex method, a refinement of the classical Nelder-Mead algorithm, is particularly suited for the noisy and multi-modal objective functions appearing in these optimisation problems. 
\end{itemize} \begin{figure} \begin{center} \includegraphics[width=7.7cm]{figures/rim} \caption{Experimental setup: an archipelago with wheel rim topology, global optimisation algorithms on the outer ring (simulated annealing and differential evolution) and a local optimisation algorithm in the inner island (Subplex).} \label{fig:rim} \end{center} \end{figure} In order to give an example of use of PaGMO, we reproduce here the Python code necessary to perform one optimisation run with the setup described above: \begin{lstlisting} # Import the PyGMO classes from PyGMO import * (*@\label{code:import}@*) # Instantiate the algorithms sa = algorithm.sa_corana(10000,1,0.01) (*@\label{code:algo_start}@*) de = algorithm.de(500,0.8,0.9) local = algorithm.nlopt_sbplx(500,1e-4) (*@\label{code:algo_end}@*) # Instantiate the problem prob = problem.messenger_full() (*@\label{code:prob}@*) # Build the archipelago a = archipelago(topology.rim()) (*@\label{code:archi_start}@*) a.push_back(island(prob,local,1,1.0,migration.worst_r_policy())) (*@\label{code:archi_local}@*) a.push_back(island(prob,sa,1)) (*@\label{code:algo_ring_start}@*) a.push_back(island(prob,de,20)) a.push_back(island(prob,sa,1)) a.push_back(island(prob,de,20)) a.push_back(island(prob,sa,1)) a.push_back(island(prob,de,20)) (*@\label{code:archi_end}@*) (*@\label{code:algo_ring_end}@*) # Perform evolution twenty times a.evolve(20) (*@\label{code:archi_evolve}@*) a.join() (*@\label{code:archi_join}@*) \end{lstlisting} Detailed explanation: \begin{itemize} \item on line \ref{code:import}, all the PaGMO classes are imported into the current namespace; \item on lines \ref{code:algo_start}-\ref{code:algo_end}, the algorithms are instantiated; \item on line \ref{code:prob}, the problem (in this case the full Messenger problem) is instantiated; \item on lines \ref{code:archi_start}-\ref{code:archi_end}, the archipelago is instantiated: \begin{itemize} \item on line \ref{code:archi_start}, an empty archipelago with rim 
topology is created; \item on line \ref{code:archi_local}, the central island is created and inserted in the archipelago with the \lstinline!push_back()! method. The island is constructed from the problem \lstinline!prob! and the algorithm \lstinline!local!; it contains a single individual, accepts migrating individuals with 100\% probability, and uses the replacement policy \lstinline!migration.worst_r_policy()!, which unconditionally replaces the worst individual in the island with the incoming individuals. This island needs a non-default replacement policy because we want it to optimise \emph{every} candidate solution coming from the ring, whereas the default behaviour would be to accept migrating individuals only if they improve upon the worst individuals present in the population (which is the behaviour frequently desired for population-based algorithms); \item on lines \ref{code:algo_ring_start}-\ref{code:algo_ring_end}, the ring islands are created and inserted into the archipelago. The simulated annealing islands operate on populations of a single individual, whereas the differential evolution islands are instantiated with a population of 20 individuals. The default migration policies are appropriate in these cases; \end{itemize} \item on line \ref{code:archi_evolve}, the optimisation process is started by calling the \lstinline!evolve()! method of the archipelago. The argument passed to the \lstinline!evolve()! method, in this case 20, means that each algorithm on each island is called 20 times with the parameters passed in the constructors on lines \ref{code:algo_start}-\ref{code:algo_end}. E.g., in the case of differential evolution, it means that the algorithm is run for $500 \cdot 20 = 10000$ generations, with weight coefficient equal to 0.8 and crossover probability equal to 0.9. Migration is allowed to happen at the end of each one of the 20 internal iterations of the \lstinline!evolve()!
method; \item on line \ref{code:archi_join}, the archipelago is joined, meaning that the flow of the program stops until the optimisation run started on line \ref{code:archi_evolve} has concluded. Since PaGMO runs each algorithm asynchronously in a separate thread, the \lstinline!evolve()! call on line \ref{code:archi_evolve} will return almost immediately -- the optimisation process having forked in the background. The \lstinline!join()! call blocks the program until the optimisation has finished. \end{itemize} \paragraph{Optimisation strategy} For the first three problems we adopted the following optimisation strategy: \begin{enumerate} \item we instantiated an archipelago with rim topology as described above and let the optimisation run for a fixed amount of time; \item at the end of each optimisation run, we recorded the best candidate solution produced and then reset the archipelago with randomly-chosen decision vectors. \end{enumerate} Steps 1 and 2 were repeated multiple times, thus producing a collection of optimised candidate solutions. The cluster pruning algorithm described in \cite{izzo} was then run on the collection of candidate solutions, returning new problem bounds containing the top decision vectors. The new bounds were then used to launch further rounds of multistart optimisations. For the sample return problem, which involves the solution of different instances of a simpler problem, a single run of the optimisation algorithms in the archipelago is sufficient and thus no cluster pruning was used.
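The wheel-rim strategy can be illustrated with a small, self-contained sketch. This is deliberately not PaGMO code: the toy "global" optimiser is a plain random search, the "local" one a simple coordinate descent, and the migration logic is reduced to its essentials (all names and parameters below are our own):

```python
# Schematic, pure-Python rendition of the wheel-rim island model:
# random-search islands on a ring plus one greedy refiner in the centre.
import random

def sphere(x):  # toy objective in place of a trajectory problem
    return sum(xi * xi for xi in x)

def random_search_step(best, f, scale=0.5):
    # "global" phase: keep a random perturbation only if it improves
    cand = [xi + random.uniform(-scale, scale) for xi in best]
    return cand if f(cand) < f(best) else best

def local_refine(best, f, step=0.1, iters=50):
    # "local" phase: greedy coordinate descent with a shrinking step
    for _ in range(iters):
        for i in range(len(best)):
            for delta in (step, -step):
                trial = list(best)
                trial[i] += delta
                if f(trial) < f(best):
                    best = trial
        step *= 0.7
    return best

random.seed(1)
dim, n_ring = 3, 4
ring = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_ring)]
centre = list(ring[0])

for epoch in range(20):
    ring = [random_search_step(x, sphere) for x in ring]   # global search
    best = min(ring, key=sphere)
    centre = local_refine(list(best), sphere)              # centre refines
    worst = max(range(n_ring), key=lambda i: sphere(ring[i]))
    ring[worst] = list(centre)                             # migrate back
    # ring migration: each island also sees its neighbour's champion
    ring = [min(ring[i], ring[(i + 1) % n_ring], key=sphere)
            for i in range(n_ring)]

print(sphere(centre))  # should be very close to the optimum 0
```

In PaGMO the same roles are played by simulated annealing and differential evolution on the ring and Subplex in the centre, with migration handled transparently by the archipelago.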
\begin{table*}[ht] \centering \begin{minipage}[t]{5.35cm} \subfloat[][]{ \centering \begin{tabular}{ll} \noalign{\smallskip}\toprule\noalign{\smallskip} \multicolumn{2} {c}{Departure} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Epoch & 12/11/1997 \\ $V_{\infty}$ & 3.254 km/s \\ \noalign{\smallskip}\midrule\noalign{\smallskip} \multicolumn{2} {c}{Cruise} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} DSM $\Delta V$ & 484 m/s\\ Venus fly-by & 29/04/1998 \\ DSM $\Delta V$ & 399 m/s\\ Venus fly-by & 27/06/1999 \\ Earth fly-by & 19/08/1999 \\ Jupiter fly-by & 31/03/2001 \\ \noalign{\smallskip}\midrule\noalign{\smallskip} \multicolumn{2} {c}{Arrival} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Epoch & 05/2007\\ $V_{\infty}$ & 4.25 km/s \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Total flight time & 9.4 years\\ \noalign{\smallskip}\bottomrule \end{tabular} \label{tab:cassini2} } \end{minipage} \begin{minipage}[t]{5.35cm} \subfloat[][]{ \centering \begin{tabular}{ll} \noalign{\smallskip}\toprule\noalign{\smallskip} \multicolumn{2} {c}{Departure} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Epoch & 15/11/2021 \\ $V_{\infty}$ & 3.34 km/s \\ Declination & 3.1 deg \\ \noalign{\smallskip}\midrule\noalign{\smallskip} \multicolumn{2} {c}{Cruise} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Venus fly-by & 30/04/2022 \\ Earth fly-by & 04/04/2023 \\ DSM $\Delta V$ & 167 m/s\\ Earth fly-by & 25/06/2026\\ \noalign{\smallskip}\midrule\noalign{\smallskip} \multicolumn{2} {c}{Arrival} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Epoch & 03/07/2031\\ $V_{SOI}$ & 0.676 km/s \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Total flight time & 9.63 years\\ \midrule \noalign{\smallskip} \multicolumn{2} {c}{Spacecraft} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Departure mass & 2085.44 kg \\ Arrival mass & 1476.03 kg \\ $I_{sp}$ & 312 s\\ \noalign{\smallskip}\bottomrule\noalign{\smallskip} \end{tabular} \label{tab:tandem} }
\end{minipage} \begin{minipage}[t]{5.35cm} \subfloat[][]{ \centering \begin{tabular}{ll} \noalign{\smallskip}\toprule\noalign{\smallskip} \multicolumn{2} {c}{Departure} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Epoch & 14/08/2005 \\ $V_{\infty}$ & 3.95 km/s \\ \noalign{\smallskip}\midrule\noalign{\smallskip} \multicolumn{2} {c}{Cruise} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Venus fly-by & 20/10/2006 \\ DSM $\Delta V$ & 343 m/s\\ Venus fly-by & 04/06/2007 \\ DSM $\Delta V$ & 567 m/s\\ Mercury fly-by & 12/01/2008\\ DSM $\Delta V$ & 91 m/s\\ Mercury fly-by & 04/10/2008\\ DSM $\Delta V$ & 224 m/s\\ Mercury fly-by & 28/09/2009\\ DSM $\Delta V$ & 179 m/s\\ \noalign{\smallskip}\midrule\noalign{\smallskip} \multicolumn{2} {c}{Arrival} \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Epoch & 03/2011\\ $V_{MOI}$ & 0.905 km/s \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Total flight time & 5.59 years\\ \noalign{\smallskip}\bottomrule \end{tabular} \label{tab:messenger} } \end{minipage} \caption{Details of the best trajectories found for the first three interplanetary trajectory problems: problem::cassini\_2 \subref{tab:cassini2}, problem::tandem(6,10) \subref{tab:tandem} and problem::messenger\_full \subref{tab:messenger}.} \end{table*} \subsection{Results on problem::cassini\_2} \begin{figure*} \centering \subfloat[][]{\includegraphics[width=8cm]{figures/cassini2.pdf}\label{fig:cassini2}} \qquad \subfloat[][]{\includegraphics[width=8cm]{figures/tandem.pdf}\label{fig:tandem}} \subfloat[][]{\includegraphics[width=10.9cm]{figures/messenger_full.pdf}\label{fig:messengerfull}} \caption{Visualisation of the best trajectories found for the first three interplanetary trajectory problems: problem::cassini\_2 \subref{fig:cassini2}, problem::tandem(6,10) \subref{fig:tandem} and problem::messenger\_full \subref{fig:messengerfull}.} \end{figure*} This problem represents the interplanetary trajectory of the spacecraft Cassini. 
For a detailed description of this global optimisation problem we refer the reader to the GTOP database \cite{gtop1,gtop2}. For the purpose of this paper we just mention that the objective function represents the sum of all $\Delta V$, including the launch, where the last contribution (Jupiter arrival) is the relative velocity with respect to Jupiter at arrival. For this problem we apply the fully automated search described above with three pruning cycles. At the end of the process (employing 7 CPUs for a period of roughly 8 hours) the best trajectory found is visualized in Figure \ref{fig:cassini2}. Details of the trajectory are reported in Table \ref{tab:cassini2}. The trajectory is, essentially, the same as the best result posted in the database (22nd May 2009), found by M. Schl{\"u}ter, J. Fiala and M. Gerdts at the University of Birmingham using the MIDACO solver developed within the project ``Non-linear mixed-integer-based Optimisation Technique for Space Applications'' co-funded by ESA Networking Partnership Initiative, Astrium Limited (Stevenage, UK) and the School of Mathematics, University of Birmingham, UK \cite{midaco}. The solution employs two deep space maneuvers during the first two legs, as detailed in Table \ref{tab:cassini2}. \subsection{Results on problem::tandem(6,10)} TandEM is one of the L-class candidate missions that were proposed in response to the European Space Agency call for proposals for the 2015-2025 Cosmic-Vision programme. Initially, the mission included a Titan orbiter and an Enceladus penetrator. The interplanetary part of the trajectory was preliminarily studied in 2008 by a Mission Analysis Outer Planets Working Group that included different experts from academia and space industry. In that preliminary study a baseline for the TandEM mission was defined, and it forms the basis of the problem solved here in an automated fashion using PaGMO.
The baseline considers a launch with Atlas 501, and an injection orbit at Saturn with $e=0.985$, $r_p = 80330$ km. The mission objective is to maximize the final spacecraft mass at arrival and to complete the trajectory within ten years. For a detailed description of this global optimisation problem we refer the reader to the GTOP database \cite{gtop1,gtop2}. For this problem we apply the fully automated search described above with three pruning cycles. At the end of the process (employing 7 CPUs for a period of roughly 6 hours) the best trajectory found is visualized in Figure \ref{fig:tandem}. Details of the trajectory are reported in Table \ref{tab:tandem}. The solution found improves upon the previous best found by B. Addis, A. Cassioli, M. Locatelli and F. Schoen (from the Global Optimisation Laboratory, University of Florence), who also hold the record for the solutions of all other TandEM problem instances (i.e., for different fly-by sequences and time constraints). The final trajectory employs only one deep space maneuver, during the Venus-Venus leg. \subsection{Results on problem::messenger\_full} \begin{figure*} \begin{center} \includegraphics[width=14cm]{figures/pruning} \caption{Manual pruning for a selection of variables from the Messenger problem. The blue dots represent the best candidate solutions of each run of the multistart strategy, the vertical green bars denote the new, narrowed bounds of the problem and the red diamond marker represents the final solution. The clustering of the best candidate solutions in correspondence with the resonant flybys at Mercury is clearly visible for the variables $T_2$, $T_4$, $T_5$ and $T_6$.} \label{fig:pruning} \end{center} \end{figure*} This problem is probably the most complex problem in the GTOP database and represents one of the most complex chemical interplanetary trajectories ever designed and flown, that of the Messenger spacecraft.
Messenger, at the time of writing, is on its way to Mercury, where (roughly next year, in 2011) it will become the first spacecraft ever to orbit the planet. Its path through the solar system to reach its final destination included a long planetary tour: Earth-Earth-Venus-Venus-Mercury-Mercury-Mercury-Mercury. In the PaGMO version of the problem, the Messenger trajectory is transcribed into a box-constrained global optimisation problem. The objective function is the total $\Delta V$ accumulated from the Earth launch to a Mercury Orbit Insertion (MOI) into an orbit having $e=0.704$, $r_p=2640$ km. For a detailed description of this global optimisation problem we refer the reader to the GTOP database \cite{gtop1,gtop2}. In this case, after a first run of the algorithm, cluster detection shows the presence of many different solution clusters related to the different possible resonances at Mercury, as shown in Figure \ref{fig:pruning}. The best solution after the first algorithm run is around $5.15$ km/sec. By focussing the optimisation on one of the detected clusters we obtain a solution at around $2.3$ km/sec, which substantially lowers the GTOP database record; it is detailed in Table \ref{tab:messenger} and visualized in Figure \ref{fig:messengerfull}. \subsection{Results on problem::sample\_return} \begin{figure*}[ht] \centering \subfloat[][]{\label{fig:asteroids}\includegraphics[width=8cm]{figures/asteroids.pdf}} \subfloat[][]{\label{fig:asteroids2}\includegraphics[width=8cm]{figures/asteroids2.pdf}} \caption{Relations between $\Delta V$ and properties of the best solutions found: mission duration against $\Delta V$ \subref{fig:asteroids} and $\Delta V$ against semi-major axis and eccentricity \subref{fig:asteroids2}.} \label{fig:ast_recap} \end{figure*} A single instance of this problem represents an interplanetary trajectory starting from the Earth and performing a rendezvous with a selected asteroid.
After a minimum waiting time the spacecraft is required to come back to the Earth with a maximum hyperbolic encounter velocity. One deep-space maneuver per leg was allowed, creating a global optimisation problem of dimension 12. This type of trajectory can be used to perform the preliminary selection of possible final asteroids for sample return missions. The same trajectory model is also relevant for designing human missions to asteroids. For the purpose of this paper we do not enter into the details of the system design and launch window choice; instead, it is our interest to `just pick an example' and show the possibility of using PaGMO to solve, in a reasonable time, a large number of problem instances (e.g., varying the final asteroid). We took all the asteroids listed in the JPL NEA database:\newline\newline \url{http://neo.jpl.nasa.gov/cgi-bin/neo_elem}\newline\newline having $H<22$, which roughly corresponds to asteroids having a diameter larger than 200 m. For the selected 4406 asteroids we optimised the trajectory considering a launch in the 2020-2050 time frame, allowing a final encounter velocity at the Earth return of 4.5 km/sec and minimizing the total $\Delta V$ as evaluated from: \begin{multline*} \Delta V = \Delta V_{L} + \Delta V_{dsm_1}+\Delta V_{R} + \\ \Delta V_{D} + \Delta V_{dsm_2} +\Delta V_{E}, \end{multline*} where $ \Delta V_{L}$ is the hyperbolic velocity when leaving the Earth sphere of influence, $\Delta V_{dsm_1}$ is the first deep space maneuver, $\Delta V_{R}$ is the rendezvous velocity, $ \Delta V_{D}$ is the relative velocity at asteroid departure, $\Delta V_{dsm_2}$ is the second deep space maneuver and $\Delta V_{E}$ is the final braking maneuver to reduce the entry speed to 4.5 km/sec. A minimum waiting time on the asteroid of 5 days is also considered. The computations were performed on an Xserve with 8 processing units and lasted 8 hours. The results are visualized in Figure \ref{fig:ast_recap}.
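The $\Delta V$ bookkeeping above is a plain sum of the six contributions; the sketch below makes it concrete. All numerical values are invented for illustration, and modelling the braking term as the excess of the arrival hyperbolic velocity over the 4.5 km/sec cap is our own simplifying assumption:

```python
# Back-of-the-envelope bookkeeping for the total Delta-V defined above.
# All inputs in km/s; the figures used in the example call are made up.
V_ENTRY_MAX = 4.5  # allowed hyperbolic velocity at Earth return (km/s)

def total_delta_v(dv_launch, dv_dsm1, dv_rendezvous,
                  dv_departure, dv_dsm2, v_earth_arrival):
    # Braking burn (assumption): only the excess over the cap must be removed.
    dv_brake = max(0.0, v_earth_arrival - V_ENTRY_MAX)
    return (dv_launch + dv_dsm1 + dv_rendezvous
            + dv_departure + dv_dsm2 + dv_brake)

print(total_delta_v(3.2, 0.4, 1.1, 0.9, 0.3, 5.0))  # 6.4
```

Ranking 4406 asteroid instances then amounts to evaluating this sum for the best trajectory found for each target.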
Q: Application that shows which functions are referenced by which in C

I would really like to know if there are any methods or applications which can show me which functions are referencing which functions. Say I would like to see from where a function change_state() is called/referenced; I would get something like:

change_state() is referenced from:
  afile.c, lines 100 and 156:
    trigger() <-- app_init() <-- main()
  bfile.c, lines 26 and 30:
    button_event() <-- process_event()
    move_event()

EDIT: I am using the Keil compiler in Windows 7.

A: You can use gprof for that type of use. It will show you a call graph, which is basically what you want, including performance measurement. First, assuming you're using gcc, compile with the option -pg. Then run your binary normally; it will output a gmon.out file. You can then use gprof to analyse that file, which contains the data you're requesting.
\section{Introduction} One of the most prominent implications of the quantumness of nature is the existence of nonlocal correlations between compound systems, referred to as entanglement~\cite{EPR35,S35}. These kinds of correlations are incompatible with our classical understanding arising from probability theory. For this reason, quantum entangled states are a main resource for applications in quantum computation and quantum communication~\cite{NC00,HHHH09}. A pure separable state is a product of the form \begin{align}\label{eq:Sstate} |\psi_\mathrm{S}\rangle=|e\rangle\otimes|f\rangle=|e,f\rangle. \end{align} Here we assume both subsystems to have identical dimensionality $d$. A mixed separable state is defined by statistical mixtures of those pure ones~\cite{W89}. Any state that has no such representation is entangled. In order to probe the entanglement of a system, experimentally accessible entanglement witnesses have been proposed and optimized~\cite{HHH96,LKCH00,BDHK05,SV09,BHHA13}. Another approach has been formulated in terms of so-called positive but not completely positive maps~\cite{HHH96,P96}. Entanglement measures have been studied for characterizing the strength of this quantum correlation; cf. Ref.~\cite{HHHH09} for an overview. For pure states, the Schmidt decomposition can be used to describe the amount of entanglement~\cite{NC00}. By convex roof construction, the so-called Schmidt number has been defined for mixed states and corresponding witnesses have been formulated and optimized~\cite{TH00,SBL01,Betal02,SV11}. The standard notion of a maximally entangled (ME) state reads \begin{align}\label{eq:MEstate} |\psi_\mathrm{ME}\rangle=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1} |e_n,f_n\rangle, \end{align} where $\{|e_n,f_{n'}\rangle\}_{n,n'=0,\ldots,d-1}$ is an orthonormal basis, and we have identical Schmidt coefficients $d^{-1/2}$. It is important to mention that the definition of an ME state strongly depends on the applied measure~\cite{EP99,MG04,SV11a}. 
However, to be consistent with the literature~\cite{HHHH09}, we will adopt the notion of ME states exclusively for states of the form~\eqref{eq:MEstate}. Besides the application of quantum correlated states, a determination of the properties of a quantum channel is indispensable. Nontrivial applications are, for instance, the description of the propagation of light in turbulent lossy media, such as in atmospheric quantum communication links~\cite{SV10}. Quantum channels are also the theoretical foundation for the dynamics of open quantum systems, i.e., the interaction of a system with an environment. They may be used for characterizing the Markovianity of a process~\cite{CGLM14,LWHZHLGKPB14} or for formulating the solution of a statistical differential equation (such as Fokker-Planck, Lindblad, or master equations) in quantum physics for a dissipative time evolution~\cite{K72,L76}. Apart from their usage in computation and communication protocols, entangled quantum states can also be employed in ancilla-assisted quantum process tomography~\cite{DL01,Aetal03,MMRD03,MRL08}. In contrast to standard process tomography~\cite{CN97}, this approach requires only a single bipartite input state to completely characterize an unknown process acting on a quantum system. For this task, pure ME states~\eqref{eq:MEstate} with perfect quantum correlations turn out to be best suited~\cite{DL03}. The underlying idea is given by the Choi-Jamio\l{}kowski isomorphism~\cite{J72,C75}, which provides a one-to-one correspondence between quantum channels and bipartite states, that is, the resulting bipartite state characterizes a quantum process completely. Hence, properties of quantum operations can directly be linked to their bipartite state representatives~\cite{AP04}. A properly constructed witness operator applied to a state representative can uncover properties that rely on a convex structure of the corresponding channels.
Experimentally accessible witnesses were proposed in Ref.~\cite{MR13}, for instance, in order to detect entanglement-breaking maps or to study separability characteristics of channels. Note that we will use the notions of channel and map as synonyms. Here we derive a technique which allows one to construct witnesses to uncover random unitary (RU) channels and random projective (RP) maps. While the former describe a deterministic evolution together with stochastic effects, the latter are based on the quantum state reduction in quantum measurements. Applying the Choi-Jamio\l{}kowski isomorphism, RU and RP maps are transformed into mixtures of ME and separable states, respectively. For constructing witnesses for such bipartite states or quantum processes, optimization equations are derived to bound the expectation value of general observables. Whenever these bounds are exceeded, an RU or RP description or, equivalently, a convex combination of ME or separable states can be excluded. The geometric interpretation of a witness as a tangent hyperplane to a given convex set allows one to infer geometric properties of the set itself. A complete analysis is conducted for arbitrary pure states in a bipartite system, along with the introduction of a complementary Schmidt decomposition in terms of ME states and Fourier-transformed Schmidt coefficients. The article is structured as follows. In Sec.~\ref{sec:channel}, we give the defining relations for RU and RP maps in the context of the Choi-Jamio\l{}kowski isomorphism. Structural and geometric properties of both maps are studied. The construction of witnesses is performed in Sec.~\ref{sec:witnesses}. Witnesses for ME states are derived and compared to witnesses that bound the set of separable states. In Sec.~\ref{sec:pure}, the method is applied to perform a full analytical characterization of pure states regarding separability or being ME, e.g., for predicting upper bounds on imperfections.
The complementary Schmidt decomposition will be defined. We summarize and conclude in Sec.~\ref{sec:summary}. \section{Maximal quantum channels}\label{sec:channel} In order to describe the evolution or propagation of physical systems, a process characterization is required; cf. Ref.~\cite{CGLM14} for a recent review on quantum channels. For this reason, a convenient form of a linear quantum process is an input-output relation. In this form, the initial state of the system $\hat\rho_{\rm in}$ is transformed into a final quantum state $\hat\rho_{\rm out}$, \begin{align}\label{eq:In-Out-Rel} \hat\rho_{\rm in}\mapsto\hat\rho_{\rm out}=\mathcal E(\hat\rho_{\rm in}). \end{align} The linear quantum channel $\mathcal E$ itself can be expanded in Kraus operator form~\cite{K83}, \begin{align}\label{eq:KrausRep} \mathcal E(\hat\rho)=\sum_{j}\hat K_j\hat\rho\hat K_j^\dagger. \end{align} The channels studied here are linear, completely positive (CP) but not necessarily trace preserving. The latter property can be restored by properly normalizing the output state of any channel after its application, $\hat\rho_{\rm out}=\mathcal E(\hat\rho_{\rm in})/{\rm tr}[\mathcal E(\hat\rho_{\rm in})]$. Besides the Kraus~\cite{K83} and Holevo (not discussed here) representations~\cite{H98}, another key method for characterizing quantum channels is the Choi-Jamio\l{}kowski isomorphism. It states that each channel $\mathcal E$ has a unique representation in terms of a bipartite quantum state $\hat\varrho_{\mathcal E}$. This isomorphism $\mathcal J$ reads \begin{align}\label{eq:CJiso} &\mathcal J: \mathcal E\mapsto \hat \varrho_{\mathcal E}=\mathcal N\mathbb I\otimes\mathcal E(|\Phi\rangle\langle \Phi|), \\&\text{with }|\Phi\rangle=\sum_{n=0}^{d-1}|n,n\rangle, \label{eq:StdVec} \end{align} a given normalization constant $\mathcal N$, and a given computational basis $\{|n\rangle\}_{n=0,\dots,d-1}$. 
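A compact numerical rendition of the isomorphism $\mathcal J$ just defined may be helpful. The sketch below is our own illustration (it assumes \texttt{numpy} is available, and the function name \texttt{choi\_matrix} is ours): it builds the Choi matrix from a Kraus representation and checks that the identity channel, with $\mathcal N = 1/d$, yields a pure state:

```python
# Numerical sketch (ours): Choi matrix J(E) = N * (I (x) E)(|Phi><Phi|)
# of a channel given in Kraus form, with |Phi> = sum_n |n,n>.
import numpy as np

def choi_matrix(kraus_ops, norm=1.0):
    d = kraus_ops[0].shape[0]
    phi = np.eye(d).reshape(d * d, 1)        # unnormalised |Phi> = sum |n,n>
    rho_phi = phi @ phi.conj().T
    out = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        IK = np.kron(np.eye(d), K)           # channel acts on the 2nd factor
        out += IK @ rho_phi @ IK.conj().T
    return norm * out

# Identity channel: the Choi state (1/d)|Phi><Phi| is pure
d = 2
J = choi_matrix([np.eye(d)], norm=1.0 / d)
print(np.trace(J).real)          # 1.0
print(np.linalg.matrix_rank(J))  # 1: a pure (maximally entangled) state
```

Replacing the single Kraus operator by several, with weights $\sqrt{p_j}\hat U_j$ or $\sqrt{p_j}\hat P_j$, gives the RU and RP state representatives discussed next.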
In the following sections, we highlight two maximal subclasses of maps that will be considered for our further studies. \subsection{Random unitary channel} Random unitary (RU) channels are characterized by deterministic unitary evolutions $\hat U_j$ which are realized only with a certain probability $p_j$, \begin{align}\label{eq:RUdef} \mathcal E_{\rm RU}(\hat\rho)=\sum_{j} p_j \hat U_j\hat\rho\hat U_j^\dagger. \end{align} Such RU maps can be employed to model dephasing that diminishes quantum coherences. This allows one to classify decoherence effects~\cite{HS09}, for example, to perform a complete error correction, which is possible if and only if the sources of imperfections are of the RU type~\cite{GW03}. These kinds of processes are a main problem to be overcome for the realization of quantum computation~\cite{Z91,CLSZ95}. Moreover, RU maps have also been applied to study entanglement dynamics in the presence of environments or phase noise~\cite{ZHHH01,LBAC12,BFACC12}. However, except for qubit maps~\cite{LS93}, the full characterization of RU processes remains an open problem. The Choi-Jamio\l{}kowski isomorphism~\eqref{eq:CJiso} of such random unitary channels yields a convex combination of maximally entangled (ME) states, \begin{align} \mathcal J(\mathcal E_{\rm RU})=\sum_j p_j|\psi_{\mathrm{ME}, j}\rangle\langle\psi_{\mathrm{ME}, j}|=\hat\varrho_\mathrm{ME}, \end{align} with $\mathcal N=1/d$ and $|\psi_{\mathrm{ME}, j}\rangle=\hat W_j\otimes\hat V_j |\Phi\rangle/\sqrt{d}$ for any pair of unitary maps $\hat W_j,\hat V_j$ satisfying $\hat U_j=\hat V_j\hat W_j^{T}$; cf. Eqs.~\eqref{eq:CJiso} and~\eqref{eq:RUdef}. 
The latter follows from the general relation \begin{align} \hat A\otimes\hat B |\Phi\rangle=&\hat A\hat B^{T}{\otimes}\hat 1 |\Phi\rangle =\hat 1{\otimes}\hat B\hat A^{T} |\Phi\rangle, \label{eq:Transfo} \end{align} where the transposition is taken in the computational basis of the vector $|\Phi\rangle$ given in Eq.~\eqref{eq:StdVec} (see also Appendix~\ref{app:states}). Therefore, a characterization of RU channels can be approached by studying ME states~\cite{AS08}. \subsection{Random projective channels} The complement of the deterministic evolution of quantum states in terms of unitary maps is the highly probabilistic measurement process. A measurement of a certain outcome yields the reduction of the state onto the corresponding eigenspace--eigenvectors for nondegenerate observables. In classical theories, such a reduction does not occur. Hence, the measurement process is a genuine quantum feature. Here, we will characterize the corresponding quantum channels. Let us consider the following family of maps. A CP map is a random projective (RP) channel if it has a Kraus representation of the form \begin{align}\label{eq:RPdef} \mathcal E_\mathrm{RP}(\hat\rho)=\sum_{j} p_j \hat P_j\hat\rho\hat P_j^\dagger, \end{align} where $\hat P_j$ describes a rank-one operator, i.e., $\hat P_j=|\phi_j\rangle\langle\psi_j|$, and $\{p_j\}_j$ defines a probability distribution. It is worth pointing out that, for finite-dimensional systems, the finite sum is sufficient in this definition due to Carath\'e{}odory's theorem~\cite{C11}. In addition, the RP maps are so-called entanglement-breaking channels~\cite{HSR03}. In contrast to a unitary evolution in RU channels, an RP map is formulated in terms of collapses of wave functions together with a possible subsequent evolution. More rigorously, for each $|\phi\rangle$ there exists a unitary map $\hat U$ such that $|\phi\rangle=\hat U|\psi\rangle$. 
Hence, each term in the RP channel~\eqref{eq:RPdef} can be described as a collapsed state $\hat\rho$ which is further propagated in time, \begin{align}\label{eq:singleRP} \hat\rho\mapsto\hat U|\psi\rangle\langle\psi|\hat\rho|\psi\rangle\langle\psi|\hat U^\dagger. \end{align} Note that such a map is not trace preserving, as $\langle\psi|\hat\rho|\psi\rangle$ describes the (in general) non-unit probability of the reduction to the state $|\psi\rangle\langle\psi|$ within the measurement process. The Choi-Jamio\l{}kowski isomorphism $\mathcal J$ in Eq.~\eqref{eq:CJiso} ($\mathcal N=1$) maps entanglement-breaking channels to separable states~\cite{HSR03}. Therefore, the RP maps can be identified with the notion of separable states, \begin{align} \mathcal J(\mathcal E_\mathrm{RP}) =&\sum_{j}p_j|\psi_j^\ast\rangle\langle \psi_j^\ast|\otimes|\phi_j\rangle\langle\phi_j|=\hat\varrho_\mathrm{S}, \end{align} where we used the relation \begin{align} (\hat 1\otimes\langle\psi|)|\Phi\rangle{=}\sum_{m=1}^d |m\rangle\langle\psi|m\rangle{=}\sum_{m=1}^d |m\rangle \langle m|\psi \rangle^\ast{=}|\psi^\ast\rangle, \end{align} and $|\psi^\ast\rangle$ is the complex conjugate of the vector $|\psi\rangle$ in the computational basis. The state $\hat\varrho_\mathrm{S}$ describes a separable state~\cite{W89}, and any pure state, given as $|\psi^\ast\rangle\langle\psi^\ast|\otimes|\phi\rangle\langle\phi|$, can be obtained from the RP map~\eqref{eq:singleRP} with a single element. Hence, an identification of an RP channel is equivalent to the separability problem. \subsection{Maximal states and CP maps} Let us summarize some initial observations. Using the isomorphism $\mathcal J$, it was shown that the problem of identifying specific kinds of quantum channels can be mapped onto the characterization of bipartite states. A pictorial summary may be found in Fig.~\ref{fig:statesVSmaps}. 
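As a concrete qubit illustration of both mappings (our addition, not part of the original derivation), consider a dephasing RU channel and a single-term RP channel:

```latex
% RU side: dephasing with \hat U_1 = \hat 1 (prob. 1-p) and
% \hat U_2 = \hat\sigma_z (prob. p). Its Choi-Jamiolkowski image
% (N = 1/2) is a mixture of Bell states,
\mathcal J(\mathcal E_{\rm RU})
  = (1-p)\,|\Phi^{+}\rangle\langle\Phi^{+}|
  + p\,|\Phi^{-}\rangle\langle\Phi^{-}|,
\qquad
|\Phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0,0\rangle \pm |1,1\rangle\bigr),
% since (\hat 1 \otimes \hat\sigma_z)|\Phi^{+}\rangle = |\Phi^{-}\rangle.
%
% RP side: a single collapse with |\psi\rangle = |0\rangle and \hat U the
% Hadamard gate, so |\phi\rangle = |+\rangle = (|0\rangle+|1\rangle)/\sqrt 2.
% Then (N = 1)
\mathcal J(\mathcal E_{\rm RP})
  = |0\rangle\langle 0| \otimes |+\rangle\langle +| ,
% a pure product state, in agreement with |\psi^\ast\rangle = |0\rangle.
```

The RU image is a convex mixture of ME states, the RP image a separable state, exactly as sketched in Fig.~\ref{fig:statesVSmaps}.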
\begin{figure} \includegraphics[width=7.5cm]{ChoiMap.png} \caption{(Color online) The mapping of the Choi-Jamio\l{}kowski isomorphism $\mathcal J$ is depicted via the arrows. On the one hand, the deterministic evolution together with a classical statistical description (RU maps) is mapped onto the set of bipartite states which are formed by pure entangled states with equally weighted Schmidt coefficients, i.e., ME states. On the other hand, the maps which correspond to the genuine quantum description of the measurement process (RP maps) have the image of the set of classically correlated (separable) states. }\label{fig:statesVSmaps} \end{figure} First, we recalled the fact that RU maps have a bipartite representation in terms of mixtures of ME pure states~\cite{AS08}. From now on, we will use the term ME state for such mixed and pure states, even though some of the mixtures are separable (for example, the normalized identity; cf. Appendix~\ref{app:Fourier}). We emphasize that such a deterministic evolution is something one also expects for a classical channel. The image of $\mathcal J$, however, is the convex hull of pure ME states, and those pure ME states have genuine quantum correlations between the subsystems. Second, we established the set of RP maps. The physical interpretation of such maps is a measurement-induced state reduction. Again, we stress that this aspect of quantum physics has no counterpart in the classical domain. The action of $\mathcal J$ behaves in a complementary way as it yields bipartite separable states sharing no quantum entanglement. Hence, there is a cross correlation between states and channels (Fig.~\ref{fig:statesVSmaps}). Nonclassical RP and classical RU maps are mapped onto separable and ME states, respectively. Due to the fact that $\mathcal J$ is a bijective transformation, we focus on the determination of separable and ME states from now on.
However, one should keep in mind for the remainder of this work that one can draw all of the following conclusions for the corresponding channels. \subsection{Geometric representation}\label{subsec:geoState} Let us consider some geometric aspects of the set of separable states and ME states. The respective extremal points are pure separable states~\eqref{eq:Sstate} and pure ME states~\eqref{eq:MEstate}. In general, the convex set of all bipartite quantum states is convexly spanned by all pure states, each having a distinct Schmidt decomposition~\cite{NC00}: \begin{align}\label{eq:SchmidtDec} |\psi\rangle=\sum_{n=0}^{d-1}\sigma_n|e_n,f_n\rangle, \end{align} where $\sigma_n$ is the $n$th non-negative Schmidt coefficient. For the time being, let us restrict ourselves to the family of pure states $\{|\psi^{(j)}\rangle\}_j$ having a decomposition with identical vectors $\{|e_n,f_n\rangle\}_{n=0,\dots,d-1}$ but different Schmidt coefficients $\sigma^{(j)}_n$. The convex span of those pure states is $\mathcal C=\mathrm{conv}\{|\psi^{(j)}\rangle\langle\psi^{(j)}|\}_j$. For the spanned mixed states, $\hat\rho=\sum_j p_j |\psi^{(j)}\rangle\langle \psi^{(j)}|\in\mathcal C$, we define the following projections: \begin{align} \sigma_n^2=\langle e_n,f_n|\hat\rho^2|e_n,f_n\rangle=\sum_{m=0}^{d-1} \left(\sum_j p_j \sigma^{(j)}_{m}\sigma^{(j)}_{n}\right)^2. \end{align} For the considered class of pure states, these definitions of $\sigma_n^2$ coincide with the squares of Schmidt coefficients. In general, the purity yields ${\rm tr}(\hat\rho^2)=\sum_{n=0}^{d-1}\sigma_n^2\leq 1$. The subspace $\mathcal C$ is the convex hull of states satisfying $\sum_{n=0}^{d-1}\sigma_n^2=1$. Similarly to the Bloch-sphere representation, we obtain the full ball of pure and mixed quantum states from this high-dimensional sphere. In fact, one finds only one hyperoctant of the sphere.
Hence, for symmetry reasons, we may allow $\sigma_n^{(j)}<0$ for pure states or $\sigma_n=\pm[\sigma_n^2]^{1/2}$ for mixed ones. Using the vector representation $\vec \sigma=(\sigma_0,\dots,\sigma_{d-1})^T\in\mathbb R^d$, we can alternatively describe the ball as $\|\vec \sigma\|_2=[\sum_{n=0}^{d-1}|\sigma_n|^2]^{1/2}\leq 1$. A state in the considered subspace given by Eq.~\eqref{eq:SchmidtDec} is pure if and only if $\|\vec \sigma\|_2=1$. In this form, a separable pure state is characterized by all points on the sphere ($\|\vec \sigma\|_2=1$) where one and only one Schmidt coefficient is nonvanishing, $|\sigma_{n_0}|=1$, for a given $n_0$. This means that a pure state is separable if and only if $\|\vec \sigma\|_2=1$ and $\|\vec\sigma\|_1=1$. The enclosed convex volume defines the hyperdimensional analog of an octahedron. In vector notation, this set is given by $\|\vec\sigma\|_1\leq 1$, with the $1$-norm $\|\vec \sigma\|_1=\sum_{n=0}^{d-1}|\sigma_n|$. For pure ME states, all Schmidt coefficients have the same magnitude, $|\sigma_0|=\dots=|\sigma_{d-1}|=d^{-1/2}$. This is equivalent to the intersection of vectors $\vec \sigma$ which satisfy $\|\vec \sigma\|_2=1$ and $\|\vec \sigma\|_\infty=d^{-1/2}$ simultaneously. The convex combination of these vertices yields a hypercube, $\|\vec\sigma\|_\infty\leq d^{-1/2}$, by applying the maximum norm $\|\vec\sigma\|_\infty=\max\{|\sigma_n|\}_{n=0,\dots,d-1}$. In Fig.~\ref{fig:convex}, the three-dimensional case is shown. Note that for finite-dimensional systems the normed spaces defined by $\|\,\cdot\,\|_1$ and $\|\,\cdot\,\|_\infty$ are dual to one another~\cite{Y08}. This also highlights the complementary relations between separable and ME states. A similar relation between entanglement witnesses and separable states in two-qubit systems was reported recently~\cite{MJR15}.
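These norm characterizations are straightforward to verify numerically. The following sketch, assuming Python with numpy (the specific Schmidt vectors are illustrative choices, not taken from the text), checks the $2$-norm normalization together with the $1$-norm criterion for separable vertices and the $\infty$-norm criterion for ME vertices.

```python
import numpy as np

d = 3

def pure_state(sigma):
    """|psi> = sum_n sigma_n |n,n> for a given Schmidt vector."""
    psi = np.zeros(d * d)
    psi[:: d + 1] = sigma  # index n*d + n in row-major (Kronecker) ordering
    return psi

sigma_sep = np.array([1.0, 0.0, 0.0])   # vertex of the octahedron (separable)
sigma_me = np.full(d, d ** -0.5)        # vertex of the hypercube (ME)
sigma_gen = np.array([0.8, 0.6, 0.0])   # generic entangled, non-ME pure state

vectors = (sigma_sep, sigma_me, sigma_gen)
norm2 = [np.linalg.norm(v, 2) for v in vectors]          # normalization
norm1 = [np.linalg.norm(v, 1) for v in vectors]          # separable iff = 1
norm_inf = [np.linalg.norm(v, np.inf) for v in vectors]  # ME iff = 1/sqrt(d)

purity = pure_state(sigma_gen) @ pure_state(sigma_gen)   # <psi|psi> = 1
```

As expected, only the separable vertex attains $\|\vec\sigma\|_1=1$ and only the ME vertex attains $\|\vec\sigma\|_\infty=d^{-1/2}$, while all three vectors lie on the unit $2$-sphere.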
Moreover, a numerical study in Ref.~\cite{HM15} was performed for a similar, i.e., geometric, characterization of positive but not completely positive maps. Local properties of quantum channels and their verification have been further studied in Ref.~\cite{MR13}. \begin{figure} \includegraphics[width=7.5cm]{Set.png} \caption{(Color online) The (gray) ball depicts the volume of all mixed quantum states which is bounded by states of the form~\eqref{eq:SchmidtDec} for $d=3$. The octahedron (blue) represents the set of separable states and the cube (red) describes ME states. }\label{fig:convex} \end{figure} \section{Witnesses for maximally entangled and separable states}\label{sec:witnesses} In this section, we derive observable conditions which enable us to infer whether or not a state is an ME state. This will result in nonlinear eigenvalue equations whose solutions give the upper or lower bound of an observable for the desired class of states. Eventually, we will compare our method with the construction of entanglement witnesses. \subsection{Witness construction} In order to formulate witnesses for ME states, let us apply the Hahn-Banach separation theorem~\cite{Y08,M33}. It states that for any closed, convex set and any element that is not part of this set, there exists a linear functional that separates the element from the set. In our case, the closed convex set is the set of mixtures of ME states. Any linear functional $f$, acting on trace class operators $\hat\rho$, can be written as $f(\hat\rho)={\rm tr}(\hat\rho\hat L)$ for a bounded, Hermitian operator $\hat L$. The separation of a non-ME state $\hat\varrho$ in a finite-dimensional system reads as follows. There exists a Hermitian operator $\hat L$, such that \begin{align}\label{eq:MEwitness} \langle \hat L\rangle>\max\{f(\hat\rho_{\rm ME})\}_{\hat\rho_{\rm ME}}=g_{\rm ME}^{\max}. 
\end{align} The value of the functional, $\langle \hat L\rangle={\rm tr}(\hat\varrho\hat L)$, corresponds to an experimentally accessible expectation value of the observable $\hat L$. Due to convexity, the maximal expectation value for ME states, $g_{\rm ME}^{\max}$, is attained for a pure state. Thus, this bound can be formulated in terms of an optimization over pure ME states $|\psi_{\rm ME}\rangle$, \begin{align}\label{eq:OptProbl} g_{\rm ME}=\langle\psi_{\rm ME}|\hat L|\psi_{\rm ME}\rangle\to g_{\rm ME}^{\max}. \end{align} Recall that any ME pure state can be written as \begin{align} |\psi_\mathrm{ME}\rangle=\frac{1}{\sqrt d}\hat 1\otimes\hat U|\Phi\rangle=\frac{1}{\sqrt d}\sum_{n=0}^{d-1}|n,u_n\rangle \end{align} [see Eq.~\eqref{eq:Transfo} or Appendix~\ref{app:states}], together with orthonormality constraints for $\{|u_n\rangle\}_{n=0,\dots,d-1}$ of the form \begin{align}\label{eq:Constaints} c_{i,j}=\langle u_i|u_j\rangle-\delta_{i,j} \equiv 0, \end{align} for $i,j=0,\ldots,d-1$ and the Kronecker symbol $\delta_{i,j}$. Additionally, let us decompose the observable $\hat L$ in the computational basis of the first subsystem, \begin{align} \hat L=\sum_{i,j=0}^{d-1} |i\rangle\langle j|\otimes \hat L_{i,j}, \end{align} which yields \begin{align} g_{\rm ME}=\frac{1}{d}\sum_{i,j=0}^{d-1} \langle u_i|\hat L_{i,j}|u_j\rangle. \end{align} Now, the optimization problem~\eqref{eq:OptProbl} under the constraints~\eqref{eq:Constaints} can be solved by the method of Lagrange multipliers $\gamma_{i,j}$. That is, for all $k=0,\ldots, d-1$, we have a vanishing gradient of the form \begin{align} 0{=}&\frac{\partial g_{\rm ME}}{\partial \langle u_k|}{-}\sum_{i,j=0}^{d-1}\gamma_{i,j}\frac{\partial c_{i,j}}{\partial \langle u_k|} {=}\frac{1}{d}\sum_{j=0}^{d-1} \hat L_{k,j}|u_j\rangle{-}\sum_{j=0}^{d-1}\gamma_{k,j}|u_j\rangle, \end{align} as $\partial \langle u_i|/\partial \langle u_k|=\delta_{i,k}$.
It can be checked by a projection onto $\langle k|$ in the first subsystem that we can write equivalently \begin{align} \hat L|\psi_{\rm ME}\rangle=d(\hat 1\otimes \hat \Gamma)|\psi_{\rm ME}\rangle, \label{eq:ME-EValEq} \end{align} with $\hat\Gamma=\sum_{i,j=0}^{d-1} \gamma_{i,j} |u_j\rangle\langle u_i|$. In this form, we have the generalized eigenvalue problem~\eqref{eq:ME-EValEq} with an ME eigenstate $|\psi_{\rm ME}\rangle$. The corresponding generalized eigenvalue, denoted as $g_{\rm ME}^{\rm opt}$, is given by \begin{align} \nonumber g_{\rm ME}^{\rm opt}{=}&\langle\psi_{\rm ME}|\hat L|\psi_{\rm ME}\rangle{=}d\langle\psi_{\rm ME}|\hat 1{\otimes}\hat \Gamma|\psi_{\rm ME}\rangle {=}\sum_{i=0}^{d-1} \langle u_i|\hat\Gamma|u_i\rangle \\{=}&{\rm tr}(\hat\Gamma). \end{align} Finally, the maximal expectation value of $\hat L$ for ME states is given as the maximum over all optimal values, \begin{align}\label{eq:maxMEvalue} g_{\rm ME}^{\max}=\max\{g_{\rm ME}^{\rm opt}\}, \end{align} which is the desired right-hand side of the ME test in inequality~\eqref{eq:MEwitness}. The value of $g_{\rm ME}^{\max}$ in Eq.~\eqref{eq:maxMEvalue} is a tight upper bound, as it is attained for the corresponding eigenvector $|\psi_{\rm ME}\rangle$ solving Eq.~\eqref{eq:ME-EValEq}, which exists. This is due to the fact that all pure ME states form a bounded and closed subset of the finite-dimensional and, therefore, compact unit sphere of normalized pure states~\cite{Y08}. It is worth mentioning that the same procedure can be performed similarly for a minimum. That is, $\hat\varrho$ is not an ME state if and only if there exists an observable $\hat L$ such that \begin{align} \langle\hat L\rangle<g_{\rm ME}^{\min}=\min\{g_{\rm ME}^{\rm opt}\}, \end{align} which can be deduced from the approach with the maximum via the interchange $\hat L\mapsto-\hat L$. 
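A direct, if crude, numerical plausibility check of the construction is to sample ME states $(\hat 1\otimes\hat U)|\Phi\rangle/\sqrt d$ for Haar-random unitaries $\hat U$ and to compare the resulting expectation values with the ordinary extremal eigenvalues of $\hat L$. The following sketch, assuming Python with numpy (random sampling only bounds $g_{\rm ME}^{\max}$ from below and is no substitute for solving the generalized eigenvalue problem; dimension, seed, and trial count are illustrative), shows that the ME expectation values are sandwiched by the standard spectral bounds.

```python
import numpy as np

d, trials = 3, 200
rng = np.random.default_rng(1)

# Random Hermitian test operator L on C^d x C^d.
X = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
L = (X + X.conj().T) / 2
spec = np.linalg.eigvalsh(L)
lam_min, lam_max = spec[0], spec[-1]  # standard spectral bounds

Phi = np.zeros(d * d, dtype=complex)
Phi[:: d + 1] = 1.0  # |Phi> = sum_m |m,m>

def haar_unitary(d, rng):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # fix column phases

# Expectation values of L over sampled ME states (1 x U)|Phi>/sqrt(d).
g_vals = []
for _ in range(trials):
    psi = np.kron(np.eye(d), haar_unitary(d, rng)) @ Phi / np.sqrt(d)
    g_vals.append((psi.conj() @ L @ psi).real)
g_vals = np.array(g_vals)
```

The largest sampled value approximates $g_{\rm ME}^{\max}$ from below; it can only reach the maximal eigenvalue of $\hat L$ if the corresponding eigenspace contains an ME state.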
\subsection{Relation to other eigenvalue problems} Computing the upper bound for all quantum states can be done by solving the (standard) eigenvalue problem for finding the maximal eigenvalue. This is consistent with our finding that the generalized eigenvalue problem in Eq.~\eqref{eq:ME-EValEq} yields the upper bound for ME states. At this point, we can determine what might be a useful witness. For example, a witness based on $\hat L$ for which $g_{\rm ME}^{\max}$ is the ultimate upper bound to all quantum states cannot fulfill condition~\eqref{eq:MEwitness}. In general, we can make the following statement. The observable $\hat L$ is a proper witness if and only if the eigenspace to the (standard) maximal eigenvalue does not contain an ME state. The proof is straightforward: First, as we pointed out above, if the eigenspace contains an ME state $|\psi_{\rm ME}\rangle$, $g_{\rm ME}^{\max}=\langle\psi_{\rm ME}|\hat L|\psi_{\rm ME}\rangle$ is identical to the maximal eigenvalue. Thus, condition~\eqref{eq:MEwitness} can never be satisfied. Second, if the eigenspace does not contain such an ME state, any element $|\psi\rangle$ of the eigenspace to the maximal (standard) eigenvalue will satisfy the test~\eqref{eq:MEwitness}, $\langle \psi|\hat L|\psi\rangle>g_{\rm ME}^{\max}$. A similar statement has been formulated for entanglement and so-called Schmidt number witnesses~\cite{SV11}. Moreover, in order to prove that $\hat\varrho$ is not a separable state, a relation similar to~\eqref{eq:MEwitness} has been derived~\cite{SV09}, \begin{align}\label{eq:entWitness} \langle\hat L\rangle>g_{\rm S}^{\max}, \end{align} with $g_{\rm S}^{\max}=\max\{g_{\rm S}^{\rm opt}\}$.
The latter values are determined from the so-called separability eigenvalue equations, \begin{align}\label{eq:SEValEq} \hat L_b|a\rangle=g_{\rm S}^{\rm opt}|a\rangle \text{ and } \hat L_a|b\rangle=g_{\rm S}^{\rm opt}|b\rangle, \end{align} with $\hat L_a={\rm tr}_A[\hat L(|a\rangle\langle a|\otimes \hat 1)]$, $\hat L_b={\rm tr}_B[\hat L(\hat 1\otimes|b\rangle\langle b|)]$, and $\langle a|a\rangle=1=\langle b|b\rangle$. This kind of approach has been used to experimentally uncover path-entangled states~\cite{Getal14} or for studying entanglement from semiconductor systems~\cite{PFSV12}. For separable states, the generalized eigenvalue problem in Eq.~\eqref{eq:SEValEq} has the same meaning as Eq.~\eqref{eq:ME-EValEq} for ME states. However, the corresponding maximal bounds $g_{\rm S}^{\max}$ and $g_{\rm ME}^{\max}$ address the detection of different properties. On the one hand, if condition~\eqref{eq:entWitness} is fulfilled, then the state $\hat\varrho$ is entangled. If, on the other hand, Eq.~\eqref{eq:MEwitness} is fulfilled, then $\hat\varrho$ is not an ME state and it is thus not the Choi-Jamio\l{}kowski state of an RU process. \subsection{On computing solutions} In the following, let us solve Eq.~\eqref{eq:ME-EValEq} for some classes of operators in order to demonstrate the functionality of our method. The solutions will yield measurable tests in Eq.~\eqref{eq:MEwitness} to probe ME states. Our results will be compared with known tests for RU maps and related to those for entanglement detection. \subsubsection{Product operators} As a first example, we consider a simple correlation measurement between the two modes. Let \begin{align} \hat L=\hat A\otimes \hat B \end{align} be a Hermitian positive semidefinite operator. 
Inserted into Eq.~\eqref{eq:ME-EValEq}, we find \begin{align} \hat L[\hat 1\otimes\hat U]|\Phi\rangle=d[\hat 1\otimes\hat \Gamma][\hat 1\otimes\hat U]|\Phi\rangle, \end{align} with $|\psi_\mathrm{ME}\rangle=\hat 1\otimes\hat U|\Phi\rangle/\sqrt{d}$. This gives \begin{align*} [\hat A\otimes\hat B\hat U]|\Phi\rangle=&[\hat 1\otimes(\hat B\hat U\hat A^T)]|\Phi\rangle=[\hat 1\otimes (d\hat\Gamma\hat U)]|\Phi\rangle. \end{align*} Equating coefficients yields \begin{align} \hat\Gamma=\frac{1}{d}\hat B\hat U\hat A^T\hat U^\dagger \text{ and } g^{\rm opt}_{\rm ME}=\frac{1}{d}{\rm tr}(\hat B\hat U\hat A^T\hat U^\dagger). \end{align} The spectral decomposition of the considered product observable reads as $\hat L=\sum_{m=0}^{d-1}\lambda_{A,m} |a_m\rangle\langle a_m|\otimes\sum_{n=0}^{d-1}\lambda_{B,n} |b_n\rangle\langle b_n|$, with eigenvalues sorted in increasing order. Using this fact, its positive semidefiniteness, and Chebyshev's sum inequality (see Appendix~\ref{app:Cheb}), we have \begin{align} g_{\rm ME}^{\max}{=}\frac{1}{d}\sum_{n=0}^{d-1}\lambda_{A,n}\lambda_{B,n} \text{ and } g_{\rm ME}^{\min}{=}\frac{1}{d}\sum_{n=0}^{d-1}\lambda_{A,n}\lambda_{B,d{-}1{-}n}. \end{align} In the case of separable states, we can deduce from the solution of the separability eigenvalue problem~\eqref{eq:SEValEq} that \begin{align} g_{\rm S}^{\max}=\lambda_{A,d-1}\lambda_{B,d-1} \text{ and } g_{\rm S}^{\min}=\lambda_{A,0}\lambda_{B,0}. \end{align} Comparing these values with the spectral decomposition of $\hat L$, we find that such a product operator cannot be a proper entanglement witness because the upper and lower bounds for all states are identical with those for separable states. However, for nontrivial scenarios, they differ from the $g_{\rm ME}^{\max{/}\min}$. Hence, such a correlation measurement is a proper witness to identify states which cannot be a mixture of ME states, or non-RU maps. An interesting consequence of such witnesses is given by $\hat A=\hat 1$. 
In this case, the upper and the lower bound coincide: $g_{\rm ME}^{\max}={\rm tr}(\hat B)/d=g_{\rm ME}^{\min}$. In terms of expectation values, this means that a violation of $\langle \hat L\rangle={\rm tr}_B[\hat B{\rm tr}_A(\hat\rho)]={\rm tr}(\hat B)/d$ for arbitrary $\hat B$ identifies a non-ME state. This simple consequence of our technique is equivalent to a previously known constraint onto mixtures of ME states~\cite{AS08,AP04}: \begin{align}\label{eq:unital} {\rm tr}_A(\hat\rho_{\rm ME})=\hat 1/d. \end{align} In terms of the isomorphism $\mathcal J$ in Eq.~\eqref{eq:CJiso}, this means that the violation of Eq.~\eqref{eq:unital} excludes the RU description of the channel. The constraint is clearly violated, for instance, for pure separable states or the projective channel in Eq.~\eqref{eq:singleRP}. A similar treatment for $\hat B=\hat 1$ gives the same restriction for the other subsystem, ${\rm tr}_B(\hat\rho_{\rm ME})=\hat 1/d$. \subsubsection{Flip-type operators} Another example provides a deeper insight into the symmetry of the ME states. For this reason, let us consider the so-called flip operator, $\hat F|x,y\rangle=|y,x\rangle$, which exchanges the two subsystems. More generally, we study a transformed version, \begin{align} \hat L=(\hat A\otimes \hat B)\hat F(\hat A\otimes \hat B)^\dagger, \end{align} for arbitrary operators $\hat A$ and $\hat B$. This kind of operator has been intensively studied in Ref.~\cite{MW09} for characterizing RU channels. The operator $\hat L$ maps a state $|x,y\rangle$ as follows: \begin{align}\label{eq:sepmapflip} \hat L|x,y\rangle=\hat A\hat B^\dagger|y\rangle\otimes\hat B\hat A^\dagger|x\rangle. \end{align} Hence, it is convenient to consider the singular-value decomposition $\hat B\hat A^\dagger=\hat U_1\hat \Sigma\hat U_2^\dagger$, with $\hat \Sigma$ being the diagonal matrix of decreasing singular values, $\Sigma_0\geq\dots\geq\Sigma_{d-1}\geq0$ and two unitary operators $\hat U_{1}$ and $\hat U_2$. 
Inserting this decomposition, Eq.~\eqref{eq:sepmapflip} can be rewritten in the form \begin{align}\label{eq:mappingFlip} \hat L|x,y\rangle=(\hat U_2\otimes\hat U_1)(\hat\Sigma\otimes\hat\Sigma)\hat F(\hat U_2^\dagger\otimes\hat U_1^\dagger)|x,y\rangle. \end{align} As $\hat U_{2(1)}$ is a unitary basis transformation of the first (second) mode not affecting eigenvalues, we may simplify the problem by choosing $\hat U_1=\hat U_2=\hat 1$ from now on. Now, the spectral decomposition reads \begin{align}\label{eq:spectdecFlip} (\hat \Sigma \otimes \hat \Sigma)\hat F=&\sum_{m}\Sigma_m^2|m,m\rangle\langle m,m| \\\nonumber&{+}\sum_{m<n}\Sigma_m\Sigma_n \left(|\psi_{mn}^{+}\rangle\langle\psi_{mn}^{+}|-|\psi_{mn}^{-}\rangle\langle\psi_{mn}^{-}|\right), \end{align} where the eigenvectors $|\psi_{mn}^{\pm}\rangle=(|m,n\rangle \pm |n,m\rangle)/\sqrt 2$ form symmetric (spanned by $|\psi_{mn}^{+}\rangle$ together with $|m,m\rangle$) and skew-symmetric subspaces (spanned by $|\psi_{mn}^{-}\rangle$). We deduce the upper and lower bound for all states, \begin{align} g^{\max} =\Sigma_0^2 \text{ and } g^{\min} =-\Sigma_0 \Sigma_1. \end{align} For separable states, we have the bounds \begin{align} g_{\rm S}^{\max}=\Sigma^2_0 \text{ and } g_{\rm S}^{\min}=0, \end{align} which are given from the partial transposition $(|\Phi\rangle\langle\Phi|)^{T_B}=\hat F$ and the approach in Sec.~V of Ref.~\cite{SV09}. Comparing these bounds for separable states with the bounds for all states, we see that the upper bounds coincide, $g_{\rm S}^{\max}=g^{\max}$. Hence, as long as there are at least two nonvanishing singular values $\Sigma_0\Sigma_1\neq 0$, only the lower bound $g^{\min}_{\rm S}> g^{\min}$ provides us with a reasonable test for inseparable states. Let us now consider ME states. 
Applying our optimization equations and following the same procedure as above, one gets \begin{align*} &\hat L[\hat 1\otimes\hat U]|\Phi\rangle {=}\hat 1\otimes \hat \Sigma\hat U^T\hat\Sigma|\Phi\rangle {=}\hat 1\otimes (d\hat\Gamma\hat U)|\Phi\rangle. \end{align*} Hence, we have $\hat\Gamma=\hat \Sigma\hat U^T\hat \Sigma\hat U^\dagger/d$ and \begin{align}\label{eq:flipPrelim} g^{\rm opt}_{\rm ME}=&\frac{1}{d}{\rm tr}(\hat\Sigma\hat U^T\hat\Sigma\hat U^\dagger)=\frac{1}{d}{\rm tr}(\hat U^\ast\hat\Sigma\hat U\hat\Sigma). \end{align} The maximal expectation value is given for ME states in the symmetric subspace, which is spanned by ${|\psi^{+}_{m,n}\rangle=(|m,n\rangle+|n,m\rangle)/\sqrt{2}}$ for $m\leq n$. Equivalently, this means that $\hat U=\hat U^T$, which simplifies Eq.~\eqref{eq:flipPrelim} to \begin{align} g^{\rm opt}_{\rm ME}=&\frac{1}{d}\sum_{m,n=0}^{d-1} \Sigma_m\Sigma_n |\langle n|\hat U|m\rangle|^2, \\\text{i.e., }g^{\max}_\mathrm{ME}=&\frac{1}{d}\sum_{m=0}^{d-1} \Sigma_m^2, \end{align} where the latter maximum again follows from Chebyshev's sum inequality in Appendix~\ref{app:Cheb}. For the minimum $g_{\rm ME}^{\min}$, we proceed similarly considering the cases of even and odd dimensionality $d$ separately. From the spectral decomposition given by Eq.~\eqref{eq:spectdecFlip}, one can see that the generalized eigenvector $|\psi_{\rm ME}\rangle=d^{-1/2}\hat 1\otimes\hat U|\Phi\rangle$ should be an element of the ${d(d-1)/2}$-dimensional skew-symmetric subspace spanned by ${|\psi^{-}_{m,n}\rangle=(|m,n\rangle-|n,m\rangle)/\sqrt{2}}$ for $m<n$. Equivalently, this means that $\hat U$ should be, in the case of an even $d$, an antisymmetric operator, $\hat U^T=-\hat U$. Thus, we find \begin{align}\label{eq:minEVMEfliptype} g_{\rm ME}^{\min}=-\frac{1}{d}\sum_{m,n=0}^{d-1} \Sigma_m\Sigma_n|\langle n|\hat U|m\rangle|^2. 
\end{align} In order to find the tight lower bound, we utilize the Youla (or Slater) decomposition of a skew-symmetric operator of even dimension, $\hat U=\hat V\hat J\hat V^T$, where $\hat V$ is unitary and $\hat J=\sum_{n=0}^{(d-2)/2} [J_n(|2n\rangle\langle 2n+1|-|2n+1\rangle\langle 2n|)]$ is a block-diagonal skew-symmetric matrix~\cite{DY61}. In our case, we have $J_n=1$, which is the only choice that allows $\hat U$ to be unitary. Setting $\hat V =\hat 1$ yields the desired minimum in Eq.~\eqref{eq:minEVMEfliptype}. For the odd case, one can add the minimal positive eigenvalue $\Sigma_{d-1}^2$ of $\hat L$ yielding the smallest possible positive contribution to $g_{\rm ME}^{\min}$ and preserving the unitarity of $\hat U$. In detail, we modify our optimal $\hat U=\hat J$ for the even case such that $\hat U=\hat J + |d-1\rangle\langle d-1|$ for odd $d$. In conclusion, we obtain \begin{align} g_{\rm ME}^{\min}=-\frac{1}{d}\left\lbrace\begin{array}{ll}2 \sum\limits_{n=0}^{(d-2)/2}\Sigma_{2n}\Sigma_{2n+1} & \text{for }d\text{ even,}\\ 2\sum\limits_{n=0}^{(d-3)/2}\Sigma_{2n}\Sigma_{2n+1}-\Sigma_{d-1}^2 & \text{for }d\text{ odd.}\\ \end{array}\right. \end{align} Again, we need to examine the eligibility of $\hat L$ to witness ME states by checking $g_{\rm{ME}}^{\max}$ against $g^{\max}$ and $g_{\rm{ME}}^{\min}$ against $g^{\min}$, respectively. Given that there are at least two different non-vanishing singular values $\Sigma_0\neq\Sigma_1\neq 0$, the upper bounds do not coincide, $g_{\rm{ME}}^{\max}<g^{\max}$. For the lower bounds, we notice that they differ, $g_{\rm{ME}}^{\min}>g^{\min}$, under the premise that $d\geq 3$ and that there are at least two nonvanishing singular values, $\Sigma_0\Sigma_1\neq 0$. Thus, both upper and lower bounds can be employed as a test for non-ME states. 
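The flip-type bounds derived above can be cross-checked numerically for a small example. The following sketch, assuming Python with numpy (we set $\hat U_1=\hat U_2=\hat 1$, and the singular values are an illustrative choice for odd $d=3$), verifies the spectral bounds $g^{\max}=\Sigma_0^2$ and $g^{\min}=-\Sigma_0\Sigma_1$ as well as $g_{\rm ME}^{\max}$ and $g_{\rm ME}^{\min}$, attained for the unitaries $\hat U=\hat 1$ and $\hat U=\hat J+|d-1\rangle\langle d-1|$, respectively.

```python
import numpy as np

d = 3
s = np.array([1.0, 0.7, 0.4])   # decreasing singular values Sigma_m
Sigma = np.diag(s)

# Flip operator F|x,y> = |y,x> and the observable L = (Sigma x Sigma) F.
F = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1.0
L = np.kron(Sigma, Sigma) @ F   # Hermitian: Sigma x Sigma commutes with F

eigs = np.linalg.eigvalsh(L)    # standard bounds for all states

Phi = np.zeros(d * d)
Phi[:: d + 1] = 1.0             # |Phi> = sum_m |m,m>

def g_me(U):
    """Expectation <psi_ME| L |psi_ME> for |psi_ME> = (1 x U)|Phi>/sqrt(d)."""
    psi = np.kron(np.eye(d), U) @ Phi / np.sqrt(d)
    return psi.conj() @ L @ psi

g_max_me = g_me(np.eye(d))           # symmetric choice U = 1
U_min = np.array([[0.0, 1.0, 0.0],   # Youla block J for the even part,
                  [-1.0, 0.0, 0.0],  # padded with |d-1><d-1| for odd d
                  [0.0, 0.0, 1.0]])
g_min_me = g_me(U_min)
```

For this example, the ME bounds $g_{\rm ME}^{\max}=(\Sigma_0^2+\Sigma_1^2+\Sigma_2^2)/d$ and $g_{\rm ME}^{\min}=-(2\Sigma_0\Sigma_1-\Sigma_2^2)/d$ are strictly tighter than the spectral bounds, so both sides of the test are usable.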
\subsubsection{Observations} From these first examples for constructing ME probes [see Eq.~\eqref{eq:MEwitness}], we see that our optimization approach in terms of the generalized eigenvalue equation~\eqref{eq:ME-EValEq} is a useful technique to construct witnesses for ME states. Known results could be easily derived, generalized, and compared to a related approach for separable states. Comparing the above solutions, one can even find a remarkable feature that relates to our geometric considerations in the previous section. Namely, the values for $g_{\rm ME}^{\max}$ or $g_{\rm S}^{\max}$ are closely related to the $1$ norm or the $\infty$ norm, respectively. In the following, we will exploit this observation in more detail. \section{Complementary Schmidt decomposition}\label{sec:pure} In this last section, we will apply our witnessing approach to study Hermitian rank-one operators $\hat L=|\psi\rangle\langle\psi|$. Based on our approach and the previously performed studies on entanglement, we are able to assess the entanglement properties of $|\psi\rangle$. Finally, we will construct the complementary Schmidt decomposition. \subsection{Rank-one witnesses}\label{subsec:rankOne} As pointed out before (see also Appendix~\ref{app:states}), any state $|\psi\rangle$ can be written as $|\psi\rangle=\hat 1\otimes\hat M|\Phi\rangle$. Inserting this into our optimization given by Eq.~\eqref{eq:ME-EValEq} and performing the same algebra as done in the previous examples, we obtain \begin{align}\label{eq:PureGamma} \begin{aligned} \hat \Gamma=\frac{{\rm tr}(\hat M^\dagger\hat U)}{d}\hat M\hat U^\dagger\\ \text{and } g^{\rm opt}_{\rm ME}=\frac{1}{d}\left|{\rm tr}(\hat M^\dagger\hat U)\right|^2. \end{aligned} \end{align} As local unitaries affect neither separability nor the ME property, we directly start from the Schmidt decomposition~\eqref{eq:SchmidtDec} in a rotated computational basis, i.e., $|e_m,f_n\rangle=|m,n\rangle$.
In particular, this means that $\hat M$ is the diagonal matrix of Schmidt coefficients, \begin{align} \hat M={\rm diag}(\sigma_0,\ldots,\sigma_{d-1})=\mathrm{diag}(\vec \sigma). \end{align} Thus, we get ($\hat U=\hat 1$) \begin{align}\label{eq:MEpureMax} g^{\max}_{\rm ME}=\frac{1}{d}\left|\sum_{n=0}^{d-1}|\sigma_n|\right|^2=\frac{\|\vec\sigma\|_{1}^2}{d}. \end{align} Again, this can be compared with the separability eigenvalue approach, \begin{align}\label{eq:SpureMax} g_{\rm S}^{\max}=&\max\{|\sigma_n|^2\}_{n=0,\dots,d-1}=\|\vec \sigma\|_\infty^2, \end{align} see Sec.~IV~A in Ref.~\cite{SV09}. In Fig.~\ref{fig:overlay}, we plot, for $d=3$ and for real-valued singular value vectors $\vec \sigma\in\mathbb R^3$, the bounds in Eqs.~\eqref{eq:MEpureMax} and~\eqref{eq:SpureMax}. The left panel shows $\|\vec\sigma\|_\infty^2\vec\sigma$ for normalized states $|\psi\rangle$, $\|\vec \sigma\|^2_2=1$. Correspondingly, the right panel depicts $d^{-1}\|\vec\sigma\|_1^2\vec\sigma$. This means that the bound $g_{\rm S}^{\max}$ ($g_{\rm ME}^{\max}$) is, in the left (right) panel, the distance of the surface from the origin, $(0,0,0)^T$, in the $\vec \sigma$ direction. \begin{figure} \includegraphics[width=4cm]{overlayS.png} \includegraphics[width=4cm]{overlayME.png} \caption{(Color online) The maximal projection of a pure bipartite state $|\psi\rangle$ with the Schmidt coefficients $\vec \sigma$ onto separable (left plot) and ME (right plot) states is shown. The (gray) sphere indicates the normalization of the state, $\|\vec \sigma\|^2_2=1$. Whenever the overlap $\langle\psi|\hat\varrho|\psi\rangle$ of a bipartite state is outside of one of those surfaces, we have an inseparable (left) or non-ME (right) state $\hat\varrho$. }\label{fig:overlay} \end{figure} Let us consider an application of such a rank-one test operator.
We find that a quantum state $\hat\varrho$ is neither separable nor ME if, for the fidelity with the state $|\psi\rangle$, the inequality \begin{align}\label{eq:exampleRankOne} \langle \psi|\hat\varrho|\psi\rangle>\max\{g_{\rm ME}^{\max},g_{\rm S}^{\max}\} \end{align} holds. For instance, we may take the state \begin{align} \hat\varrho=(1-p)\frac{1}{d^2}\hat 1\otimes \hat 1+p|\psi\rangle\langle \psi|, \end{align} which is a mixture of a pure state and white noise. Note that the normalized identity, i.e., the white-noise contribution, is both separable and ME. Now we can estimate, by condition~\eqref{eq:exampleRankOne}, the maximal amount of white noise, $1-p$, that the system can tolerate without losing its entanglement or its non-ME property. This holds for all \begin{align} p>\frac{d^2\max\{\|\vec\sigma\|_1^2/d,\|\vec \sigma\|_\infty^2\}-1}{d^2-1}. \end{align} From the geometric point of view, this means that $\langle\psi|\hat\varrho|\psi\rangle$ is outside the surfaces in Fig.~\ref{fig:overlay}. \subsection{Discrete Fourier transform and decompositions in terms of ME states} In the case of separability, it has been shown in Ref.~\cite{SV09} that the nontrivial solutions, $|a,b\rangle$ for $g_{\rm S}^{\rm opt}\neq 0$, of the optimization equations~\eqref{eq:SEValEq} for $\hat L=|\psi\rangle\langle\psi|$ give the Schmidt decomposition of $|\psi\rangle$. Here we will search for a similar possibility. Hence, let us reevaluate the solution in Eq.~\eqref{eq:PureGamma}. Suppose we have found a set of solutions $\{\hat U_k\}_{k=0,\dots,d-1}$. For simplicity, we consider only such unitaries that commute with $\hat M={\rm diag}(\vec \sigma)$. Hence, we can write \begin{align}\label{eq:commutingAnsatz} \hat U_k={\rm diag}(\exp[i\varphi_{k,0}],\dots,\exp[i\varphi_{k,d-1}]), \end{align} and obtain a diagonal $\hat \Gamma_k$ operator in Eq.~\eqref{eq:PureGamma}.
Finally, our ansatz for expanding $|\psi\rangle$ is \begin{align} |\psi\rangle=\sum_{n=0}^{d-1} \sigma_n|n,n\rangle=\sum_{k=0}^{d-1} \tau_k\frac{1}{\sqrt d}\sum_{n=0}^{d-1}e^{i\varphi_{k,n}}|n,n\rangle. \end{align} Hence we have a system of equations ($n=0,\dots,d-1$): \begin{align}\label{eq:DFT1} \sigma_n=\sum_{k=0}^{d-1}\frac{1}{\sqrt d}e^{i\varphi_{k,n}}\tau_k. \end{align} At least one solution can be identified when taking \begin{align}\label{eq:DFT2} \varphi_{k,n}=-\frac{2\pi}{d}kn-\vartheta_k. \end{align} The unique character of such a choice is the fact that Eqs.~\eqref{eq:DFT1} for $n=0,\dots,d-1$ with the phases in Eq.~\eqref{eq:DFT2} describe a Fourier transform. Namely, we have \begin{align}\label{eq:GDFT} \tau_k=\frac{e^{i\vartheta_k}}{\sqrt d}\sum_{n=0}^{d-1}e^{2\pi i kn/d}\sigma_n, \end{align} where we choose $\vartheta_k$ such that $\tau_k\geq0$. The transformation in Eq.~\eqref{eq:GDFT} can be called a generalized discrete Fourier transform (GDFT) which maps the non-negative vectors $\vec \sigma=(\sigma_0,\dots,\sigma_{d-1})^T$ to the non-negative vectors $\vec \tau=(\tau_0,\dots,\tau_{d-1})^T$. Denoting the states ${|\psi_{\mathrm{ME},k}\rangle=\hat 1\otimes\hat U_k|\Phi\rangle/\sqrt d=|\mathcal F_{k,0}\rangle}$, we can write \begin{align}\label{eq:CSDec} |\psi\rangle=\sum_{k=0}^{d-1}\tau_k|\mathcal F_{k,0}\rangle, \end{align} where $\hat U_k$ in Eq.~\eqref{eq:commutingAnsatz} is defined by the phases in Eq.~\eqref{eq:DFT1}. Note that our choice is also an orthonormal decomposition $\langle \mathcal F_{k,0}|\mathcal F_{k',0}\rangle=\delta_{k,k'}$ (see also Appendix~\ref{app:Fourier}). Hence, one way to represent a state in terms of ME states has been found. The remarkable aspect of the form~\eqref{eq:CSDec} is that the coefficients $\vec \tau=(\tau_0,\dots,\tau_{d-1})^T$ of the expansion in terms of ME states are given by the GDFT of the (standard) Schmidt decomposition in terms of separable states. 
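The GDFT and its norm relations can be verified directly. The following sketch, assuming Python with numpy (the Schmidt vector is an illustrative choice), computes $\vec\tau$ from Eq.~\eqref{eq:GDFT} with phases $\vartheta_k$ chosen such that $\tau_k\geq 0$, recovers $\vec\sigma$ by inverting the transform, and checks $\|\vec\tau\|_2=\|\vec\sigma\|_2$ as well as $\|\vec\tau\|_\infty=\|\vec\sigma\|_1/\sqrt d$ for this example.

```python
import numpy as np

d = 3
sigma = np.array([0.8, 0.6, 0.0])   # non-negative Schmidt vector, ||sigma||_2 = 1

# GDFT: discrete Fourier transform followed by phases vartheta_k
# chosen such that tau_k = e^{i vartheta_k} tau'_k >= 0.
W = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d)
tau_raw = W @ sigma / np.sqrt(d)    # tau'_k
theta = -np.angle(tau_raw)          # vartheta_k
tau = np.abs(tau_raw)               # non-negative complementary coefficients

# Inverting the GDFT (undoing the phases, then the inverse DFT)
# recovers the Schmidt coefficients.
sigma_rec = np.real(W.conj().T @ (tau * np.exp(-1j * theta)) / np.sqrt(d))
```

Since $\sigma_n\geq 0$, the maximal component of $\vec\tau$ is always the $k=0$ component, $\tau_0=\|\vec\sigma\|_1/\sqrt d$, while Parseval's theorem guarantees the equality of the $2$-norms.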
Therefore, we may refer to the expansion~\eqref{eq:CSDec} as the {\it complementary Schmidt decomposition}. In Appendix~\ref{app:Fourier}, it is shown for the discrete Fourier transform $\boldsymbol F$ that a vector $\vec \sigma$ with non-negative entries has the image $\vec\tau'=\boldsymbol F\vec\sigma$ for which $\|\vec \tau'\|_{\infty}=d^{-1/2}\|\vec\sigma\|_1$ holds. Because $\tau_k=e^{i\vartheta_k}\tau'_k$, we get the same result for $\vec \tau$. Analogously, we conclude from the inverse GDFT that $\|\vec\sigma\|_\infty=d^{-1/2}\|\vec\tau\|_1$. Note that the inverse GDFT may be computed similarly to the ansatz presented in this section starting from the complementary Schmidt decomposition and the maximally non-ME (separable) states. In addition, we get identical $2$-norms for $\vec \sigma$ and $\vec \tau$ (see also Appendix~\ref{app:Fourier}). In summary, the GDFT yields the following important relations between the Schmidt and complementary Schmidt coefficients: \begin{align} \|\vec \tau\|_2=\|\vec \sigma\|_2,\text{ } \|\vec \tau\|_\infty=\frac{\|\vec \sigma\|_1}{\sqrt d}, \text{ and }\|\vec \tau\|_1=\sqrt d\|\vec \sigma\|_\infty. \end{align} This highlights the dual character of the complementary Schmidt decomposition. Moreover, a maximally non-ME (i.e., separable) state is described in terms of equally weighted complementary Schmidt coefficients $\vec\tau=d^{-1/2}(1,\dots,1)^T$. Up to unitary transformations $\hat V_A$ and $\hat V_B$, we have, for any separable state $|\psi_{\rm S}\rangle=|a,b\rangle$, \begin{align} |0,0\rangle=\hat V_A\otimes\hat V_B|a,b\rangle=\sum_{k=0}^{d-1}\frac{1}{\sqrt d}|\mathcal F_{k,0}\rangle, \end{align} keeping in mind that $\{|\mathcal F_{k,0}\rangle\}_{k=0,\dots,d-1}$ are orthonormal ME states. For the same reasons, any ME state takes the form \begin{align} |\psi_\mathrm{ME}\rangle=\hat V_A^\dagger\otimes\hat V_B^\dagger\sum_{k=0}^{d-1}\delta_{k,0}|\mathcal F_{k,0}\rangle. 
\end{align} In Fig.~\ref{fig:Relation}, we summarize the complementary relations between ME and separable states. Any pure normalized state $\hat\varrho=|\psi\rangle\langle\psi|$ ($\|\vec\sigma\|_2=\|\vec\tau\|_2=1$) is characterized by the vector of Schmidt coefficients $\vec\sigma$ or its GDFT-mapped coefficients $\vec\tau$. One result from Sec.~\ref{subsec:geoState} is given in the first row and can be extended with the results in the third row of Fig.~\ref{fig:Relation}. That is, such a state $\hat\varrho$ is a separable or ME state if and only if $\|\vec \sigma\|_1=d^{1/2}\|\vec\tau\|_\infty=1$ or $d^{1/2}\|\vec \sigma\|_\infty=\|\vec\tau\|_1=1$, respectively. Test operators of the form $\hat L=|\psi\rangle\langle\psi|$ are in the dual space of density operators. Hence, the bounds for ME and separable states are expressed in the complementary form (see Sec.~\ref{subsec:rankOne}). From rows 2 and 3 of Fig.~\ref{fig:Relation}, we consequently get $g_{\rm ME}^{\max}=\|\vec \sigma\|_1^2/d=\|\vec \tau\|_\infty^2$ or $g_{\rm S}^{\max}=\|\vec \tau\|_1^2/d=\|\vec \sigma\|_\infty^2$. Finally, the maximal expectation value of $\hat L$ for arbitrary quantum states is given by the only nonzero eigenvalue, $g^{\max}=\|\vec\sigma\|_2^2=\|\vec\tau\|_2^2=1$. \begin{figure} \includegraphics[width=8.5cm]{Relations.pdf} \caption{(Color online) The interpretations of norms of the vector of Schmidt coefficients $\vec \sigma$ are shown. For normalized states, we have $\|\vec\sigma\|_2=1$. If, in addition, the condition in the top row is fulfilled, we have a separable (S) or ME state. Choosing a pure state to be a witness, we get the upper bounds for separable or ME states in the middle row. The relation between the Schmidt coefficients and complementary Schmidt coefficients $\vec\tau$, with $\|\vec\tau\|_2=1$, under the GDFT [Eq.~\eqref{eq:GDFT}] is given in the bottom row.
}\label{fig:Relation} \end{figure} \section{Conclusions}\label{sec:summary} In summary, we exploited the relations between quantum channels and bipartite quantum states. We derived a method that allowed us to construct the corresponding witnesses. We performed a full analytical characterization of pure states, e.g., for estimating the amount of imperfections a quantum property can withstand. In a first step, we identified two maximal quantum channels, random unitary and random projective channels. Random unitaries are completely characterized by a deterministic evolution and classical statistics. The complement was introduced as a random projective channel. These kinds of processes are governed by the quantum measurement-induced state collapse. Its relation to entanglement-breaking channels was discussed. Applying the Choi-Jamio\l{}kowski isomorphism, we could show that these channels are mapped onto two complementary forms of bipartite quantum states. The isomorphism acts in an anticorrelated way: The quantum-dominated projective channels are transformed into classically correlated states, and deterministic unitary channels are mapped onto maximally entangled states. In a second step, a technique for constructing witnesses was derived to probe nonrandom unitary channels or, equivalently, nonmaximally entangled states. The resulting generalized eigenvalue equations have been compared with a related approach to uncover inseparable states or, equivalently, nonprojective quantum channels. Some examples underlined the general functionality of our method. With a single observable, one can perform a joint witnessing of inseparable and nonmaximally entangled states. Moreover, the computed bounds for the witnessing are tight. An example of particular interest was formulated in terms of rank-one operators being defined by a single pure state.
We showed that the maximal overlap of this state with maximally entangled ones is given in terms of the $1$-norm of the vector of Schmidt coefficients, whereas the maximal fidelity with product states is given by its $\infty$-norm. Finally, we introduced a complementary Schmidt decomposition. Contrary to the standard expansion with orthonormal separable states, the complementary Schmidt decomposition expands a state in terms of maximally entangled ones. In particular, the Schmidt coefficients and the complementary Schmidt coefficients are connected via a discrete Fourier transform. In conclusion, our method is useful for characterizing channels and states of a classical or quantum character in a unified manner. Because witnesses define tangent hyperplanes to the studied convex sets, the presented approach allows one to identify the full geometry. Some steps in this direction have been taken in the present work. Moreover, our criteria, e.g., in terms of the presented correlation measurements, are directly applicable in present-day experiments. \section*{Acknowledgements} The authors acknowledge support by the Deutsche Forschungsgemeinschaft through SFB 652/3.
Back in November last year, Holly Wong of Seeking Delicious and I celebrated our birthdays with 9 of our friends at The Bazaar by Jose Andres. One of my friends came as far as Toronto — Canada that is — and one from Thousand Oaks. I'll have to say my review of The Bazaar will come in Part I and Part II and you will understand why once you've read this review in its entirety. For some reason, traffic was horrific that evening and it took us about 2 hours to get from OC to Beverly Hills so our guests arrived sporadically. While waiting, several of us ordered some drinks, including Passion Fruit Up! ($16) which comprised orange rum, passion fruit and ginger-laurel syrup, topped with passion fruit foam. It was like drinking a dessert, aromatic with a perfect combination of sweet and tart flavors. Holly and Mahesh decided to go all out and partake in the table side service drink Caipirinha ($20). A guy comes around with a cart and starts concocting this drink consisting of Brazilian cachaça, fresh lime and sugar and freezes it as you watch, using liquid nitrogen. The Caipirinha was good but was it really worth the $20 price tag? I'll let you be the judge of that. We started with a yogurt tamarind star anise dip ($10) served with sweet potato chips in a paper bag. A few servings were placed on the table and everyone was to share. However, we quickly ran out of the sweet potato chips and I asked for more. Our next course arrived in the form of American caviar cone ($9/per person) — we were each presented with a baby cone filled with caviar and a lovely foam. This was one of our favorites of the evening. The crispy cone was a wonderful contrast to the gooey, salty, poppy texture of the caviar. Jamón Ibérico Fermin (2 oz) $28 was definitely a hit with everyone. Perfectly salty dry cured, free-range Ibérico ham served with Catalan roasted bread and a tomato spread. This is very traditional and one of my favorite Spanish tapas.
For the longest time, the flesh of the black pig was not allowed to be imported into the USA and I knew people who would try to "smuggle" it in their luggage from Spain. Boy was I glad when they lifted that ban! Our next item was mussels in vinegar, olive oil and pimenton ($8) served in a tin. I know this is traditional and served at tapas bars across Spain, but I didn't enjoy this at all. Neither did I like the King crab, raspberries in a raspberry vinegar ($18), also served in a tin. The raspberries completely masked the sweetness of the crab which was a real shame! We were all perplexed as to why we were eating tuna ceviche and tuna roll ($15) because it reminded us of something we'd eat in a Japanese restaurant. Even so, this was very refreshing, the tuna was very fresh, and the avocado made it very creamy, adding to the flavor. This dish was tasty so we were thinking things were looking up, but then an array of what were the worst items of the night followed. We just couldn't understand Catalan spinach, apple, pine nuts, raisins ($8). It reminded us of frozen spinach — tasteless and bland. The apple, pine nuts and raisins just made a strange pairing for the vegetable. Nobody at the table liked this. But the worst was yet to come. Boneless chicken wings with green olive purée ($9) was just mind-boggling. We couldn't even figure out what it was until one of my friends (who absolutely abhors chicken) proclaimed that it was chicken. We were thinking it was something a little more exotic, like pigeon perhaps, or even quail. But alas, the server informed us it was boneless chicken wings. There was laughter of disbelief from some of the people, but the consensus was unanimous — everyone disliked this dish tremendously! Next was the braised Wagyu beef cheeks with California Citrus ($18) but they cooked the Wagyu sous vide a little too long. The meat was mushy and reminded me of meat from one of those vacuum-sealed packets you'd find at the supermarket.
We were all flabbergasted at this point, most of us shocked that a perfectly good Wagyu was treated in this manner. Two pieces were left and no one wanted it. Ironically, we had chicken again and we were told that it was seared chicken sous vide with dates, mustard caviar and spicy mustard greens ($10). After the beef, I was very skeptical about another sous vide item. Although it was better than the Wagyu, the chicken was so-so despite the pleasant accoutrements. After four disappointing dishes, we were not looking forward to a fifth, but luckily, Chipirones en su tinta ($10) arrived. Baby squid in its own ink was nicely flavored, but Holly, who has lived in Spain for 6 months, commented on how these were the biggest baby squid she'd ever seen. We laughed it off to how everything is bigger in America and let it go at that. I enjoyed them even though they weren't as delicate as they should've been. Papas Canarias, salty wrinkled potatoes served with a mojo verde ($8) was tasty but wasn't unique in any way. Neither were the Buñuelos — codfish fritters — served with a honey aioli ($9). I love codfish and these looked good, but the exterior wasn't fried to perfection making them soft and texturally dismal. I was very sad. By now we were all dying to finish up our meal and move on to dessert, but we had a few more courses yet to come. Next on the list was the Tortilla de patatas "new way" ($5/per person). This was one of the items we ordered in addition to the tasting menu. This is served similarly to the caviar egg at Melisse except it was potato foam, egg cooked to a perfect 63 degrees and caramelized onions. Thank god this was pretty good. Those of us who ordered this supplement were pleased with the result. Not caviar egg, but still, decent enough after the string of shockingly disastrous courses we had to endure. One of my favorites of the night was Not-your-everyday Caprese, cherry tomatoes, liquid mozzarella ($12).
Little balls of cherry tomatoes and mozzarella were filled with an air pocket, if you will, which created a perfect sensation in your mouth when you bit into them. I loved it so much I took the one remaining portion left on the plate. This is one of my top 5 items of our 18-course meal. Green asparagus tempura ($9) with a romesco dipping sauce was again very average, something any ordinary Japanese restaurant is able to create with no problem whatsoever. I guess the only thing which sets it apart from a Japanese dish is the ubiquitous Spanish romesco dipping sauce. Sautéed wild mushrooms ($12) with hazelnut praline topped with micro chives was a strange dish. I'm not sure I liked the hazelnut praline although I liked the wild mushroom medley. However, again, someone pointed out that sautéed mushrooms were something all of us have had elsewhere, so it wasn't anything unique. I'm glad they saved the best for last, so to speak. "Philly cheesesteak" was definitely one of the top five of the night. Thin bread with air pockets filled with melted cheddar was topped with slices of rare Wagyu beef ($8/per person) and was absolutely DELECTABLE!! I think we were all in agreement that we would've been happy eating five of these and calling it a night. Last but not least, our second supplement item was Cotton candy foie gras ($5) — I'm glad they kept this as the last item. I'd been waiting to try this forever. A small piece of foie gras is placed on a stick with cotton candy spun around it. The sweetness of the cotton candy paired perfectly with the soft rich flavor of the foie gras. We moved to the "dessert" room for our desserts — which I'm not going to go into. We had ordered some champagne which didn't arrive until towards the end of the meal. I think our server was perhaps not the most well-trained, nor was she able to answer our questions. Each time we asked her something, she had to go "find out" the answer, sometimes not returning with a reply.
Our evening was not what we had hoped for and I tweeted our experience the entire evening. Still, I was very shocked and humbled when Chef Jose Andres himself tweeted me back apologizing for our evening and personally inviting me back as their guest. Therefore, Holly's and my second visit to The Bazaar will be featured in Part II of my Bazaar experience, to be posted at a later date. >wow…i can't believe you still remember all the dishes we had that night! did you have a chance to go back? hope it was much better! >Gil: thank you for that link. I will definitely go check it out if I'm ever in that area. Thanks for reading!
Q: Quick Launch Menu - open in new window/tab - foundation 2013 I'm trying to set the quick launch menu links to open in a new window. I'm using SharePoint Foundation 2013. Since it's an ASP menu, I tried to add the Target="_blank" property. But it still opens in the same window. <SharePoint:AspMenu id="V4QuickLaunchMenu" runat="server" Target="_blank" EnableViewState="false" DataSourceId="QuickLaunchSiteMap" UseSimpleRendering="true" Orientation="Vertical" StaticDisplayLevels="3" AdjustForShowStartingNode="true" MaximumDynamicDisplayLevels="0" SkipLinkText="" /> A: The following JS handler is added to the div above the global and local navigation, and it causes the target to be ignored: onclick="return AjaxNavigate$OnClickHook(event, this);" You can disable MDS (minimal download strategy) or use jQuery to remove the onclick event. $("div[id*='V4QuickLaunchMenu'], div[id*='TopNavigationMenu']").removeAttr("onclick") Please read the following article: http://www.siolon.com/blog/sharepoint-2013-site-navigation-does-not-open-in-new-window/
\section{Introduction and purpose of the work} Although a fully working theory of quantum gravity is not at hand, regular black holes can be used as phenomenological toy models in order to investigate possible ways of singularity resolution. The seminal idea of Sakharov\cite{Sakharov1966TheMatterb} and Gliner\cite{Gliner1966AlgebraicMatterb}, proposing that singularities could be substituted by an inflationary equation of state ({\it i. e.}, a de Sitter core), was first realized by Bardeen in 1968\cite{Bardeen1968}. Since then, although there has been great development within the field, most regular black hole solutions (without layers) have been constructed following Bardeen's proposal. The first exact solution for a regular black hole was found by Ayon-Beato and García in 1998 using non-linear electrodynamics as the source, giving a big impetus to the field \cite{Ayon-Beato2000TheMonopole}. In particular, Dymnikova\cite{Dymnikova1992VacuumHole,Dymnikova2002TheMass,Dymnikova2003SphericallyCenter,Dymnikova2004RegularRelativity} found an asymptotically Schwarzschild regular black hole solution whose interior corresponds to an anisotropic fluid obeying a de Sitter equation of state. Other regular black hole solutions with a de Sitter core were found by similar techniques \cite{Ayon-Beato2000TheMonopole,Lemos2011RegularCore,Balart2014RegularSource,Morales-Duran2016SimpleCorrection}, including regular black holes in different extended theories of gravity\cite{Aftergood2014MatterGravity,Rodrigues2016RegularElectrodynamics,Contreras2018ASolution}. Very recently, regular black holes with a Minkowski core have been reported \cite{Simpson2019RegularCores}, thus allowing one to explore new possibilities such as the construction of thin-shell traversable wormholes \cite{Berry2020}.
There are also regular black hole models which have been considered as dark matter candidates (see \cite{Rovelli2018,Dymni2020} for very recent reviews on the subject), including extremal configurations \cite{Freitas2018}. In particular, regular black hole remnants, G-lumps and graviatoms can be considered heavy dark matter candidates with dark energy interiors \cite{D2020} which can induce observational consequences, such as proton decay \cite{Dymni2015}. Interestingly, a very recent example of a classical mechanism giving rise to a regular black hole without a de Sitter but a Nariai center has been introduced in \cite{Mariam} by the use of three-form fields. Within the quantum realm, similar regular black holes can also be formulated in loop quantum gravity \cite{LQC1,LQC2,LQC3,LQC4,LQC5} and, with a de Sitter core, in string theory-inspired corrections \cite{Nicolini2019}. Interestingly, regular stringy black holes without a de Sitter core have been recently reported \cite{Cano2019}. Finally, we would like to draw attention to some intriguing results regarding the classical double-copy \cite{Monteiro2014} of regular black holes \cite{Easson2020}. Although during the last years the number of works on regular black holes has been constantly increasing, much less has been said from the point of view of global techniques applied to these spacetimes. The first work explaining how to avoid Penrose's singularity theorem \cite{Penrose1965} to construct regular black holes is Borde's 1997 theorem \cite{Borde1997RegularChange}. From that moment on, the majority (if not all) of the works on regular black holes have referred to this theorem in order to justify the omnipresent de Sitter core. This is also the starting point of the so-called topology change in regular black holes, since the asymptotically flat region is usually assumed to be $\mathbb{R}\times S^2$ and the core is represented by $S^3$.
Very recently, Melgarejo, Contreras and the author of the present work have shown \cite{Melgarejo2020} that topologies other than $S^3$ are admissible in spherically symmetric and static black holes with a Carter-Penrose diagram {\it à la} Reissner-Nordstr\"om, which essentially describes most of the regular black hole models reported. In addition, Carballo-Rubio, Di Filippo, Liberati and Visser have classified all possible regular spherically symmetric geometries that may be realized in theories beyond general relativity as the result of singularity regularization \cite{Carballo1,Carballo2}. Importantly, in their analysis they assume global hyperbolicity and, therefore, topology changes between spacelike slices are forbidden. In this work we study regular black holes from a global, analytical and topological perspective with emphasis on the role of Borde's theorem and some of its extensions. Here we will be concerned with singularity theorems, model geometries for regular black holes, and topology change and causality violation within these systems. The work is organized as follows: section \ref{2} introduces preliminary definitions and some results from global techniques in order to make the work self-contained. ``Reversed" singularity theorems (here referred to as {\it propositions}) {\it à la Borde} are formulated in Section \ref{Borde}, trying to evade some classical singularity theorems in order to identify general features of regular black holes. After establishing possible model geometries for the core of spherically symmetric and (locally) static regular black holes in Section \ref{4}, we employ a topological approach based on Seifert bundles in order to track the topological transition between spacelike slices of most common regular black holes in Section \ref{5}, including a discussion of some issues such as the absence of global hyperbolicity and problems with causality.
We end this section with a discussion on the advantages of regular black holes with a Nariai center. Final conclusions are left for section \ref{6}. \section{Preliminary definitions} \label{2} The reader is referred to Refs. \cite{Hawkingbook,Oneill,Wald1984GeneralRelativity,Beem,Witten} for a detailed account of most of the definitions and properties here employed. However, here we include a brief summary with (hopefully) all the necessary ingredients. \\ \\ \textit{A spacetime} is a pair $(\mathcal{M},g)$ where $\mathcal{M}$ is a connected four-dimensional Hausdorff $C^{\infty}$ manifold and $g$ is a Lorentz metric on $\mathcal{M}$ (for brevity we will refer to $\mathcal{M}$ as a spacetime without explicit reference to the Lorentz metric). \\ \\ The {\it chronological future (past)} of $p\in \mathcal{M}$, $I^{+(-)}(p)$ is the set of all $q\in \mathcal{M}$ such that there is a smooth future (past)-directed nondegenerate timelike curve from $p$ to $q$. \\ \\ In case the preceding curve is causal (allowing the possibility of being degenerate) we define the {\it causal future (past)} of $p$, $J^{+(-)}(p)$. \\ \\ A subset $S\subset\mathcal{M}$ of an arbitrary spacetime $\mathcal{M}$ is said to be \textit{achronal} if there does not exist a pair $p,q\in S$ such that they can be connected by a timelike curve. \\ \\ Let $\mathcal{S}$ be a spacelike three-manifold. If every inextendible non-spacelike curve in $\mathcal{M}$ intersects $\mathcal{S}$, then $\mathcal{S}$ is said to be a {\it Cauchy surface}. A {\it partial Cauchy surface} is a closed achronal set S without edge (thus, a spacelike hypersurface). \\ \\ $\mathcal{M}$ is said to be {\it globally hyperbolic} if it admits a global Cauchy surface. In this case, from Geroch's splitting theorem \cite{Geroch1970}, $\mathcal{M}$ is homeomorphic to $\mathbb{R}\times \mathcal{S}$. Even more, the extension to the diffeomorphic category has also been developed \cite{Bernal2003}.
\\ \\ A spacetime $\mathcal{M}$ is said to be \textit{future causally simple} if $E^+(X)=\dot I^+(X)$, where $\dot I^+(X)$ is the boundary of $I^+(X)$ and $X$ is some compact achronal subset of $\mathcal{M}$. $E^+(X)$ is the future horismos of $X$, which is defined by $E^+(X)=J^+(X)-I^+(X)$. \\ \\ \textit{A trapped surface} is a two-surface in which both outgoing and ingoing null geodesics perpendicular to this surface are convergent, {\it i. e.}, these null geodesics have negative divergence on this surface. \\ \\ For an \textit{eventually future-trapped surface} it is only required that the divergences are negative somewhere in the future of the surface along each geodesic \cite{Borde1997RegularChange}. \\ \\ A \textit{slice} $\Gamma$ is an edgeless, achronal hypersurface; {\it i. e.}, for every point $p\in\Gamma$ there is no timelike curve that can reach points $u\in I^-(p)$ and $v\in I^+(p)$. Even more, $\Gamma$ is a closed topological hypersurface \cite{Oneill}. In addition, if $\mathcal{M}$ is simply connected, then every closed spacelike hypersurface in $\mathcal{M}$ is achronal \cite{Oneill}. \\ \\ The Ricci tensor obeys the {\it null curvature condition} (NCC) if $R_{\mu\nu}n^{\mu}n^{\nu}\geq0$ for all null vectors $n^{\mu}$ (we refer to {\it curvature} conditions when no specific dynamics, including Einstein's gravity, is assumed. Otherwise we will refer to {\it energy} conditions). \\ \\ The {\it generic curvature condition} (GCC) says that every causal geodesic contains some point for which $k_{[\alpha} R _{\beta] \gamma \delta [\epsilon} k_{\phi]} k^{\gamma} k^{\delta}\ne 0 $, where $k_{\alpha}$ is tangent to the causal geodesic. \\ \\ The {\it weak energy condition} (WEC) is satisfied when $T_{\alpha\beta} t^{\alpha} t^{\beta} \ge 0$ for any timelike vector $t^{\alpha}$.
\\ \\ The {\it strong energy (curvature) condition} (SEC, SCC) is satisfied when $\left(T_{\alpha\beta}-\frac{1}{2}T g_{\alpha\beta} \right) t^{\alpha} t^{\beta} \ge 0$ ($R_{\alpha \beta}t^{\alpha} t^{\beta} \ge 0$) for any timelike vector $t^{\alpha}$. \\ \\ Finally, we say that a spacetime $\mathcal{M}$ is non-spacelike geodesically incomplete if $\mathcal{M}$ has a timelike or null geodesic which cannot be defined for all values of an affine parameter. These spacetimes are said to be {\it singular}. \\ \\ \indent With these tools at hand, now we are ready to discuss some relevant results. \section{Reversed singularity theorems} \label{Borde} {\it Theorem (Borde)}\, \cite{Borde1997RegularChange}. Suppose that there is a spacetime, $\mathcal{M}$, such that \begin{enumerate} \item $\mathcal{M}$ contains an eventually future-trapped surface $\mathcal{T}$. \item The NCC is satisfied. \item $\mathcal{M}$ is null-geodesically complete to the future. \item $\mathcal{M}$ is future causally simple, {\it i. e.}, $E^+(X)=\dot I^+(X)$, where $X$ is any achronal compact subset of $\mathcal{M}$, \end{enumerate} then there is a compact slice to the causal future of $\mathcal{T}$. \\ \\ \indent {\it Sketch of the proof}. The key idea is to start from a ``reversed" Penrose's theorem \cite{Penrose1965} by assuming $\mathcal{M}$ to be geodesically complete to the future, together with a slightly different version of all the hypotheses involved in it, with the exception of the existence of a noncompact Cauchy surface. Then, Borde's theorem follows directly from this ``reversed" version (see Ref. \cite{Borde1997RegularChange} for details). \\ \\ \indent As commented in the Introduction, the seminal ideas of Sakharov \cite{Sakharov1966TheMatterb} and Gliner \cite{Gliner1966AlgebraicMatterb} have been widely used to replace singularities with an inflationary equation of state.
Usually, most regular black hole models rely on spherical symmetry and isotropy for the core which, as exemplified by a de Sitter one, is tacitly assumed. Interestingly, Borde's theorem, which is usually taken as {\it the way} to evade Penrose's theorem (at least in the regular black holes literature), has nothing to say concerning the (an)isotropy of the core, which is usually identified with the compact slice referred to in the theorem. Therefore, we think it is of interest to look for regular black hole cores other than de Sitter but compatible with Borde's theorem without assuming isotropy. This point will be fully treated in Sect. \ref{4}. \\ \\ \indent Even more, Borde's theorem is not the only one that can be used to avoid the formation of singularities. One can enunciate several Borde-like theorems employing other singularity theorems. For example, ``reversing" the famous Hawking and Penrose's singularity theorem \cite{HP1970} one obtains the following \\ \\ \indent {\it Proposition}. Suppose that there is a spacetime, $\mathcal{M}$, such that \begin{enumerate} \item $\mathcal{M}$ contains a trapped surface or a compact spacelike surface or a point with a re-converging light cone. \item The GCC is satisfied. \item $\mathcal{M}$ is causally geodesically complete. \end{enumerate} Then either $\mathcal{M}$ contains closed timelike curves or the SCC does not hold at some point (or both of them). \\ \\ \indent {\it Comments on the proof(s)}. The proof of this and other propositions presented in the present work is a direct consequence of well-known singularity theorems.
For example, if one of these theorems is generically expressed as $A_{1} \wedge A_{2}\rightarrow B$, where $A_{1,2}$ and $B$ are the assumptions and consequences of the theorem, respectively, and $\wedge$ stands for the logical ``and" symbol, the ``reversed" proposition we consider would be of the form $\lnot B \wedge A_{2} \rightarrow \lnot A_{1}$ ($\lnot$ stands for logical negation) and, therefore, its proof will follow immediately from that of the theorem previously stated. Thus, in what follows, although no specific proofs will be explicitly presented for the rest of the propositions, their validity is logically guaranteed. Although Borde's theorem is, perhaps, the best-known example of the previous strategy, here we will systematically employ it in order to prove some interesting results concerning the interplay between regularity, topology and causality. \\ \\ The physical consequences of the previous proposition can be understood, for instance, by looking at the spherically symmetric and static case. In this case, Mars, Martín-Prats and Senovilla proved that if these spacetimes are regular at $r=0$ and satisfy $\rho + p_{r}+2 p_{t}\ge 0$, which is a consequence of the SEC, then they cannot contain any black hole region in General Relativity \cite{Mars1996}. Therefore, in this precise sense, we can assert that by reversing Hawking and Penrose's theorem one can conclude that ``regular black holes violate the strong energy condition". Here we note that, although there are particular models where explicit calculations have shown that the SEC is violated in regular black holes \cite{Elizalde2002}, an explicit reference to Hawking and Penrose's theorem has only been found in Ref. \cite{Zasla2010}. Interestingly, the violation of the SEC inside the event horizon gives rise to a negative Tolman mass \cite{Zasla2010}, which changes its sign inside the Cauchy horizon depending on the singular or regular character of the black hole.
Even more, it has been recently conjectured that these features could be related to topology changes in regular black hole models \cite{Melgarejo2020}, which we will discuss in Sect. \ref{5}. \\ \\ \indent With respect to the GCC, some comments are in order: (i) it represents almost no restriction on generic spacetimes \cite{Hawkingbook,Beem,HP1970}; (ii) it can be violated in some spacetimes specialized from the geometrical point of view (e. g., Reissner-Nordstr\"om, as pointed out in \cite{Borde1994}); (iii) the strict SCC implies the GCC (see propositions 2.5 and 2.6 of \cite{Senovilla1997}). Interestingly, this last point leaves room for the following result, which is also a direct consequence of the Hawking-Penrose theorem \cite{HP1970}: \\ \\ \indent {\it Proposition}. Suppose that there is a spacetime, $\mathcal{M}$, such that \begin{enumerate} \item $\mathcal{M}$ contains a trapped surface or a compact spacelike surface or a point with a re-converging light cone. \item The SCC is satisfied. \item $\mathcal{M}$ is causally geodesically complete. \item $\mathcal{M}$ does not contain closed timelike curves. \end{enumerate} Then the GCC does not hold at some point of some causal geodesic of $\mathcal{M}$. \\ \\ \indent From this result we can extract an important conclusion: in general (for example, for non-spherically symmetric spacetimes), the violation of the SCC (or of the SEC) is not mandatory for regular black holes (wrong assertions regarding this point are frequent in the literature, for example in \cite{Eliasplb} and \cite{Rodrigues2016}). Of course, this is at the price of having $R_{\alpha \beta} t^{\alpha} t^{\beta} = 0$ and $R_{\alpha \beta}n^{\alpha} n^{\beta} = 0$ in the timelike and null cases, respectively \cite{Beem}, which is a consequence of the violation of the GCC at some causal geodesic.
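As a trivial cross-check of the logic used throughout this section (ours, purely illustrative): every ``reversed" proposition is the contrapositive of the underlying singularity theorem, since $A_1\wedge A_2\rightarrow B$ is logically equivalent to $\lnot B\wedge A_2\rightarrow\lnot A_1$, which can be verified exhaustively:

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q
    return (not p) or q

# A theorem of the form "A1 and A2 => B" and its "reversed" proposition
# "not-B and A2 => not-A1" agree on every truth assignment, so the
# reversed propositions inherit their validity from the original theorems.
for a1, a2, b in product([False, True], repeat=3):
    assert implies(a1 and a2, b) == implies((not b) and a2, not a1)
```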
In fact, a violation of at least some of the {\it curvature} (not {\it energy}) conditions (including the generic one) must occur if a regular solution is to be present. This is a subtle point which we have not been able to find in the literature. Even more, in the spherically symmetric case, the violation of the GCC on some radial null geodesic implies the Schwarzschild ansatz, $g^{tt}g_{rr}=-1$ \cite{Jacobson2007}, which is used, to the best of our knowledge, in all spherically symmetric regular black hole solutions. It is important to note that the opposite is not true; {\it i. e.}, the Schwarzschild ansatz does not imply that the GCC holds. This can be seen, for example, in the Reissner-Nordstr\"om solution, as previously commented. \\ \\ \indent From a physical point of view, it would be desirable to have black holes with nice properties such as, for example, properties 1-4 of the previous proposition. This motivates the following \\ \\ \indent {\it Definition.} A black hole is {\it regular and well-behaved} if (1)-(4) of the previous proposition are satisfied. \\ \\ \indent Even more, based on the previous proposition, an interesting conclusion can be reached in the spherically symmetric case considered in Ref. \cite{Mars1996} in the following sense: \\ \\ \indent {\it Proposition}. Let $\mathcal{M}$ be a spacetime containing a regular and well-behaved black hole. Then, the corresponding theory is not General Relativity and either (i) the SEC is saturated at some point along some timelike geodesic of $\mathcal{M}$ or (ii) the NEC is saturated at some point along some null geodesic of $\mathcal{M}$. \\ \\ \indent As a simple application of this result we note that, for a perfect fluid, the SEC cannot be saturated at any point (for example, in the isotropic case, $\rho + p=0$ and $\rho+3 p=0$ cannot hold simultaneously for a matter content other than vacuum).
Therefore, the only possibility is that the NEC is saturated at some point, which implies that the geometry under consideration is de Sitter-like at this particular point. Of course, this does not imply that the core of these objects has to be described by a de Sitter geometry, but it excludes the possibility of having regular and well-behaved black holes beyond General Relativity if the geometry is nowhere de Sitter. \\ \\ \indent Although the previous results show some extra properties that regular and well-behaved black holes have to fulfill, including specific ways of evading Hawking and Penrose's theorem and their consequences for the SEC and for the GCC in regular black holes, in the next section we will focus on some consequences of Borde's theorem, which, although it implies that the SEC is violated somewhere, is the one usually invoked in the regular black hole literature. \section{Model geometries for spacelike slices} \label{4} In general, four-dimensional non-rotating electrovacuum black holes have a topology given by $\mathbb{R}^2\times \Sigma$, where $\Sigma$ is any closed surface of constant curvature and arbitrary genus, $g$ \cite{Vanzo1997}. For simplicity, only the $g=0$ case will be considered here. The case of a non-vanishing cosmological constant will be treated separately. \subsection{Analytical approach} \subsubsection{(Anti-)de Sitter cores} Let us introduce a coordinate system in any spacelike slice of these spacetimes such that the line element can be written as \begin{equation} \label{3metric} ds^2= \frac{r^2}{\lambda(r)}dr^2+ r^2\left(d\theta^2 + f(\theta) d\phi^2 \right). \end{equation} \\ \indent On one hand, a straightforward computation reveals that the scalar curvature is given by \begin{equation} R(r,\theta)= \frac{2 \lambda}{r^4}+\frac{\left(\dot f\right)^2}{2 f^2 r^2}-\frac{2 \lambda'}{r^3}-\frac{\ddot f}{r^2 f}, \end{equation} \\ where $\lambda'\equiv \frac{d \lambda}{dr}$ and $\dot f \equiv \frac{d f}{d \theta}$.
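The expression above can be checked symbolically. The following sympy sketch (our own verification script; the variable names and the generic curvature routine are not part of the text) recomputes the Ricci scalar of the slice metric of Eq. (\ref{3metric}) and compares it with the closed form just quoted:

```python
# Symbolic check of the scalar curvature of the slice metric
# ds^2 = (r^2/lambda) dr^2 + r^2 dtheta^2 + r^2 f(theta) dphi^2.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
lam = sp.Function('lam')(r)
f = sp.Function('f')(th)
x = [r, th, ph]
g = sp.diag(r**2/lam, r**2, r**2*f)
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} for the metric g
    return sp.Rational(1, 2)*sum(
        ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(3))

def ricci(a, b):
    # R_{ab} = d_c Gamma^c_{ab} - d_b Gamma^c_{ac}
    #        + Gamma^c_{cd} Gamma^d_{ab} - Gamma^c_{bd} Gamma^d_{ac}
    return sum(
        sp.diff(christoffel(c, a, b), x[c]) - sp.diff(christoffel(c, a, c), x[b])
        + sum(christoffel(c, c, d)*christoffel(d, a, b)
              - christoffel(c, b, d)*christoffel(d, a, c) for d in range(3))
        for c in range(3))

# scalar curvature (the metric is diagonal, so only diagonal terms contribute)
R = sum(ginv[a, a]*ricci(a, a) for a in range(3))
R_claim = (2*lam/r**4 + sp.diff(f, th)**2/(2*f**2*r**2)
           - 2*sp.diff(lam, r)/r**3 - sp.diff(f, th, 2)/(r**2*f))
assert sp.simplify(R - R_claim) == 0
```

Note in particular the square on $\dot f$ in the second term, which can also be confirmed in the flat case $\lambda=r^2$, $f=\sin^2\theta$, for which the formula returns $R=0$.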
The formal solution is given by \begin{eqnarray} \label{lambda} \lambda(r)&=& r A_{1} + r \int_{1}^{r}\frac{\left(\dot f\right)^2-2 f\left(f y ^2 R(y,\theta)+\ddot f\right)}{4 f^2}dy, \end{eqnarray} \\ where $A_{1}$ is an arbitrary constant. \\ \\ \indent Interestingly, Eq. (\ref{lambda}) can be expressed for $r\rightarrow 0$ as \begin{eqnarray} \lambda(r)&\simeq & r \bigg(A_{1}+\int_{1}^{0}\frac{\left(\dot f\right)^2-2 f\left(f y ^2 R(y,\theta)+\ddot f\right)}{4 f^2}dy \bigg) \nonumber \\ &+& \frac{r^2}{4}\bigg[\left(\frac{\dot f}{f}\right)^2- \frac{2 \ddot f}{f} \bigg] -\frac{1}{6}R(0,\theta) r^4, \end{eqnarray} \\ \indent which implies \begin{equation} R(r,\theta) \simeq R(0,\theta), \end{equation} \\ for $r\rightarrow 0$. \\ \\ \indent Now let us focus our attention on the second term of the right-hand side of this series expansion. It is clear that it must be constant in order for $\lambda$ and $R$ to be functions only of the radial coordinate. In this case we have \begin{equation} \label{f} \left(\frac{\dot f}{f}\right)^2- \frac{2 \ddot f}{f} = 4 k, \end{equation} \\ \indent where we have chosen $k=0,\pm 1$ (the reason for this normalization will become clear below). With an appropriate choice for the integration constants, the solutions of Eq. (\ref{f}) are: \begin{eqnarray} k = &1& \, \, \,f(\theta) = \sin^2\theta \nonumber \\ k =-&1& \, \, f(\theta) = \sinh^2\theta \nonumber \\ k = &0& \, \, f(\theta) = 1 . \end{eqnarray} \\ \indent On the other hand, the Kretschmann scalar, $\mathcal{K}=R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta}$, is given at $r\rightarrow 0$ by \begin{equation} \mathcal{K}\simeq\frac{6}{r^6}\left(\int_1^0 \left(k-\frac{1}{2}y^2 R(y)\right) \, d y+ A\right)^2, \end{equation} \\ where $A$ is an arbitrary constant. \\ \\ \indent Let us demand that $\mathcal{K}$ remains finite as $r\rightarrow 0$ (the same reasoning applies for $R^{\alpha \beta}R_{\alpha \beta}$). Then, we have to choose $A=-\int_1^0 \left(k-\frac{1}{2} y^2 R(y)\right) dy$.
In this case, we get \begin{equation} \label{eqlambda} \lambda(r)\simeq k\, r^2-\frac{1}{6}R(0)r^4. \end{equation} \\ \indent Now let us introduce an angular coordinate, $\chi$, such that \begin{eqnarray} 1-\frac{r^2 R(0)}{6}&\equiv&\cos^{2} \chi \, \, (k=1) \nonumber \\ 1+\frac{r^2 R(0)}{6}&\equiv&\cosh^{2} \chi \, \, (k=-1). \end{eqnarray} \\ \indent In these new coordinates, Eq. (\ref{3metric}) now reads \begin{eqnarray} ds^2 &=& \frac{6}{R(0)}\left(d\chi^2+\sin^2\chi \left(d\theta^2 +\sin^2\theta\, d\phi^2 \right) \right) \nonumber \\ ds^2&=&\frac{6}{R(0)}\left(d\chi^2+\sin^2\chi \left(d\theta^2+\sinh^2 \theta \,d\phi^2 \right) \right) \end{eqnarray} \\ and, therefore, the topology of the spacelike slices is described by either $S^3$ (corresponding to a de Sitter core, $R(0)>0$) or $H^3$ (corresponding to an anti-de Sitter core, $R(0)<0$). The $k=0$ case, although slightly more involved, yields a three-torus \cite{Vanzo1997}. This case will be considered elsewhere. \\ \\ \indent Interestingly, in the $\Sigma=H^2$ case, although the metric displays the properties of a black hole, it is not one in fact, as it represents the portion of AdS which is causally accessible to a family of accelerated observers \cite{Vanzo1997}. In addition, $H^2$ is non-compact. However, we can take the quotient $H^2/G$ with an appropriate discrete group in order to make the horizon a compact Riemann surface of genus $g>1$ \cite{Vanzo1997}. \\ \\ \indent Concerning the whole spacetime and not only a spatial slice, it has been shown (see, for example, Ref. \cite{Garcia2019} and references therein) that spherically symmetric and static regular black hole solutions must approach a de Sitter spacetime at the regular center. In addition, regularity conditions have been studied for rotating black holes \cite{Torres2017}. Finally, the case of planar and cylindrical regular spacetimes, although considered some years ago \cite{Lake1994}, has not received much attention.
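Both the angular equation and the $k=1$ change of coordinates can be verified directly. The sketch below (a sympy check of ours, with our own variable names) confirms that the combination on the left-hand side of Eq. (\ref{f}) is constant for the three closed-form solutions, and that the radial part of the slice metric becomes that of a round $S^3$ of radius $\sqrt{6/R(0)}$:

```python
# Checks of the angular equation and of the k = 1 change of coordinates.
import sympy as sp

th, chi = sp.symbols('theta chi', positive=True)
R0 = sp.Symbol('R_0', positive=True)

def combo(f):
    # left-hand side of the angular equation: (f'/f)^2 - 2 f''/f
    return (sp.diff(f, th)/f)**2 - 2*sp.diff(f, th, 2)/f

assert sp.simplify(combo(sp.sin(th)**2) - 4) == 0    # k = +1
assert sp.simplify(combo(sp.sinh(th)**2) + 4) == 0   # k = -1
assert sp.simplify(combo(sp.Integer(1))) == 0        # k = 0

# k = 1 branch: with r(chi) = sqrt(6/R_0) sin(chi), the radial part of the
# slice metric, (r^2/lambda) dr^2, becomes (6/R_0) dchi^2
r = sp.sqrt(6/R0)*sp.sin(chi)
lam = r**2 - R0*r**4/6
g_chichi = (r**2/lam)*sp.diff(r, chi)**2
assert sp.simplify(g_chichi - 6/R0) == 0
```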
\\ \\ \indent We end this section by noting that a careful inspection of Eq. (\ref{eqlambda}) reveals that, in case $R(0)=0$, the so-called Simpson and Visser's {\it hollow} regular black holes \cite{Visser2019} appear. Interestingly, although this family of regular black holes has not been explored in depth, it has some desirable properties. For example, as the usual de Sitter core is substituted by a Minkowskian one, there are no topology changes between slices. Even more, by a direct application of Borde's theorem we infer that these black holes must violate the NEC (the rest of the hypotheses of Borde's theorem are satisfied), in agreement with recent calculations \cite{Visser2019}. \\ \\ \indent The same authors have recently introduced \cite{Visser2019bis} a parametric family of spherically symmetric geometries which interpolates between the Schwarzschild black hole and a traversable wormhole through a regular black hole. Specifically, the line element reads \\ \begin{equation} \label{metricvisser} ds^2=-\left(1-\frac{2m}{\sqrt{r^2+a^2}}\right)dt^2+\frac{dr^2}{1-\frac{2m}{\sqrt{r^2+a^2}}}+\left(r^2+a^2\right)d\Omega^2. \end{equation} \\ \indent It represents a regular black hole when $a\in (0,2m)$ and $m>0$. Interestingly, the interpretation of this geometry near the regular center can be easily read off from the components of the corresponding energy-momentum tensor. Using Eq. (5.1) of \cite{Visser2019bis} we get that both $\rho+p_{\bot}\ne 0$ and $\rho+p_{\parallel}\ne 0$ near the regular center within the previous range of values for $a$. Therefore, we again have a regular black hole solution without a de Sitter center. As in the case of the {\it hollow} regular black holes, the NEC is again violated but, in this case, Borde's theorem does not apply. \\ \\ \indent Even more, near the regular center and for $a \in (0,2m)$, Eq.
(\ref{metricvisser}) reads \begin{equation} \label{nariailike} ds^2 \approx \left(|\epsilon|-\frac{1+|\epsilon|}{2 a^2}r^2 \right)dt^2-\frac{dr^2}{|\epsilon|-\frac{1+|\epsilon|}{2 a^2}r^2 } + a^2 d\Omega^2, \end{equation} where $|\epsilon|=\frac{2m}{a}-1$. Interestingly, this geometry can be interpreted as AdS$_{2}\times S^2$ (note that this geometry is not of the form of Eq. (\ref{3metric})). In general, geometries such as that of Eq. (\ref{nariailike}) belong to the family of regular black holes with a Bertotti-Robinson core, as we will show here. \subsubsection{Nariai and Bertotti-Robinson cores} In this case we start from the spherically symmetric spacetime whose line element is given by \begin{equation} \label{metric} ds^2 = -\frac{\lambda(r)}{r^2}dt^2+\frac{r^2}{\lambda(r)}dr^2+l^2 d\Omega^2, \end{equation} \\ \\ where $l$ is a parameter with dimensions of length which can be related either to the cosmological constant or to any other relevant length scale present in the problem under consideration. \\ \\ \indent A straightforward calculation reveals that \begin{widetext} \begin{equation} \lambda(r)=r^2\Bigg[C_{1}+ C_{2} r+ \int_{1}^{r}y\left(R(y)-\frac{2}{l^2}\right)dy - r \int_{1}^{r}\left(R(y)-\frac{2}{l^2}\right) dy\Bigg], \end{equation} \end{widetext} where $C_{1,2}$ are arbitrary constants. \\ \\ \indent Therefore, provided $R(r)$ is regular everywhere, we have that \begin{eqnarray} \mathrm{Ric}^2 &=& \frac{4}{l^4}+\frac{R}{2}\left(R-\frac{4}{l^2} \right) \nonumber \\ \mathcal{K}&=& \frac{8}{l^4}+R\left(R-\frac{4}{l^2}\right) \end{eqnarray} \\ are regular too.
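Both curvature invariants can be checked using the product structure of Eq. (\ref{metric}): the $(t,r)$ block, being two-dimensional, is Einstein with scalar curvature $R-2/l^2$, while the round two-sphere of radius $l$ contributes $2/l^2$. The following sympy lines (our own check, relying on this standard decomposition) verify the quoted expressions:

```python
# Invariants of a product (2d) x S^2(l), written in terms of the total
# scalar curvature R; R2 denotes the scalar curvature of the 2d block.
import sympy as sp

R_, l = sp.symbols('R l', positive=True)
R2 = R_ - 2/l**2
ric2 = R2**2/2 + 2/l**4   # |Ric|^2: Einstein 2d block + round sphere
kret = R2**2 + 4/l**4     # Kretschmann: 2d block + sphere (no cross terms)
assert sp.simplify(ric2 - (4/l**4 + R_/2*(R_ - 4/l**2))) == 0
assert sp.simplify(kret - (8/l**4 + R_*(R_ - 4/l**2))) == 0
```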
\\ \\ \indent Even more, if the integration constants are chosen such that \begin{eqnarray} C_{1}&=&1-\int_{1}^{0}y\left(R(y)-\frac{2}{l^2}\right)dy \nonumber \\ C_{2}&=&\int_{1}^{0}\left(R(y)-\frac{2}{l^2}\right)dy \end{eqnarray} \\ \indent then we have that, near the $r=0$ regular center, \begin{equation} \lambda(r)\simeq r ^2+\left(\frac{1}{l^2}-\frac{R(0)}{2}\right)r^4 \end{equation} \\ and, therefore, Eq. (\ref{metric}) reads, near the core, \begin{equation} \label{core} ds^2 \simeq -\left[1+\left(\frac{1}{l^2}-\frac{R(0)}{2}\right)r^2\right] dt^2+\frac{dr^2}{1+\left(\frac{1}{l^2}-\frac{R(0)}{2}\right)r^2} +l^2 d\Omega^2. \end{equation} \\ \\ \indent For these geometries the NEC is saturated only along radial null directions near the core, as can be seen using Einstein's equations. Defining the effective components of the energy-momentum tensor as usual, we get ($8 \pi G =1$) \begin{eqnarray} \rho(r) &=&\frac{1}{l^2} \nonumber \\ p_{\parallel}(r) &= &-\frac{1}{l^2} \nonumber \\ p_{\bot}(r) &= &\frac{6 \lambda - 4 r \lambda'+ r^2 \lambda ''}{2 r^4} \end{eqnarray} \\ and, therefore, $\rho+p_{\parallel}=0$ but $\rho+p_{\bot}\ne0$. \\ \\\indent Clearly, Eq. (\ref{core}) is $\mathrm{dS}_2\times S^2$ when $\frac{1}{l^2}-\frac{R(0)}{2}<0$ and $\mathrm{AdS}_2\times S^2$ when $\frac{1}{l^2}-\frac{R(0)}{2}>0$. Let us take a closer look at these geometries. \\ \\ \indent The neutral Nariai solution, first introduced by Nariai \cite{Nariai1}, solves Einstein's field equations with a positive cosmological constant and is given by \begin{equation} \label{Nariai} ds^2=\Lambda^{-1}\Bigg(-\sin^2\chi\, dt^2+d\chi^2+d\theta^2+\sin^2 \theta \, d\phi^2 \Bigg), \end{equation} \\ where $\chi$ and $\theta$ both run from 0 to $\pi$, and $\phi$ has period $2\pi$. Importantly, the spacelike slices are $S^1\times S^2$ in this case (the same occurs for the charged Nariai solution, found by Bertotti and Robinson \cite{BR1}).
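As a consistency check of Eq. (\ref{core}), the following sympy sketch (ours; it uses the product-geometry relation $R=-A''+2/l^2$ for metrics of the form $-A\,dt^2+dr^2/A+l^2 d\Omega^2$) confirms that the core curvature is indeed $R(0)$ and that $\rho+p_{\bot}$ does not vanish generically:

```python
import sympy as sp

r, l = sp.symbols('r l', positive=True)
R0 = sp.Symbol('R_0', real=True)
c = 1/l**2 - R0/2
lam = r**2 + c*r**4            # lambda(r) near the regular center
A = lam/r**2                   # so that g_tt = -A and g_rr = 1/A in Eq. (core)

# scalar curvature of the product (t,r) x S^2(l)
R = -sp.diff(A, r, 2) + 2/l**2
assert sp.simplify(R - R0) == 0          # the core curvature is R(0)

# transverse pressure from the quoted expression: rho + p_perp != 0 generically
p_perp = (6*lam - 4*r*sp.diff(lam, r) + r**2*sp.diff(lam, r, 2))/(2*r**4)
assert sp.simplify(1/l**2 + p_perp - (2/l**2 - R0/2)) == 0
```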
Essentially, it suffices to make the coordinate change $1-|\frac{1}{l^2}-\frac{R(0)}{2}|r^2\equiv \cos^2\chi$ to bring Eq. (\ref{core}) to the form of Eq. (\ref{Nariai}). \indent The simplest Bertotti-Robinson solution \cite{BR1} solves the $\Lambda=0$ Einstein-Maxwell system. The line element is given by \begin{equation} \label{BR2} ds^2=q^2\Bigg(-\sinh^2\chi\, dt^2+d\chi^2+d\theta^2+\sin^2 \theta \, d\phi^2 \Bigg), \end{equation} \\ where $q$ is the charge of the solution, $\chi$ is unbounded, $\theta$ runs from 0 to $\pi$, and $\phi$ has period $2\pi$. In this case, the spacelike slices are $\mathbb{R}\times S^2$. A redefinition of the form $\sinh^2 \chi \equiv \frac{R^2}{q^2}-1$ and $t=\frac{T}{q}$ brings Eq. (\ref{BR2}) to \begin{equation} \label{BR} ds^2=-\Bigg(\frac{R^2}{q^2}-1\Bigg) dT^2+\frac{dR^2}{\frac{R^2}{q^2}-1}+ q^2\Bigg(d\theta^2+\sin^2\theta d\phi^2\Bigg), \end{equation} \\ which is $\mathrm{AdS}_{2}\times S^2$ written in Poincaré-like coordinates \cite{Natsuume2015}. \\ \\ \indent Summarizing, we have found that the spacelike slices of spherically symmetric and (locally) static regular solutions are: \begin{itemize} \item $S^3$ for a dS core \item $H^3$ for an AdS core \item $S^1\times S^2$ for a Nariai core \item $\mathbb{R}\times S^2$ for a Bertotti-Robinson core \end{itemize} \indent Although neither the AdS nor the Bertotti-Robinson core is compact and, therefore, Borde's theorem implies that the NCC is not satisfied (assuming causal simplicity), these geometries turn out to be fundamental in order to avoid some causality issues (see section V B). \subsection{Topological approach} Let us require that the spacelike slices, $\Gamma$, are: (i) smooth and (ii) simply connected. As previously stated, $\Gamma$ is achronal and edgeless and, therefore, it acquires the structure of a topological manifold. The extension to $C^\infty$ class is taken as a physical requirement.
With these extra ingredients we are ready to formulate the following corollary to Borde's theorem. \\ \\ {\it Corollary}. Assume $\Gamma$ smooth and simply connected. Then, $\Gamma \simeq S^3$. \\ \\ {\it Proof}. Direct from the Poincar\'e conjecture \cite{Ricciflow}. \\ \\ \indent Therefore, if $\Gamma$ is smooth but not $S^3$, then $\Gamma$ is not simply connected. Thus, slices with, for example, $S^1\times S^2$ and $S^1\tilde \times S^2$ (the non-orientable circle bundle over $S^2$) topologies are compatible with Borde's theorem. This can be seen for the specific case of regular black holes with their Carter-Penrose diagram {\it à la} Reissner-Nordstr\"om by carefully performing null identifications \cite{Melgarejo2020} and for the Nariai family, previously presented. \section{Topology change in regular black holes} \label{5} We have seen that the class of regular black holes satisfying Borde's theorem have the topology of $S^3$ at their cores if the compact slice is assumed to be smooth and simply connected. As commented in the introduction, all regular black hole solutions studied (with a few exceptions) belong to this class. Therefore, as the ``slice at infinity'' is usually considered to be $\Gamma_1 =\mathbb{R}\times\Sigma$ (with $\Sigma=S^2$ in most cases), there is a transition between $\Gamma_1$ and $\Gamma_2=S^3$ slices. It is our purpose to describe and quantify this transition with the help of Seifert bundles. We refer the interested reader to \cite{Orlik1972} for details on Seifert manifolds. \subsection{Seifert bundles} The clearest definition of a {\it Seifert fiber space} is as a three-manifold which can be foliated by circles. However, it is usually more useful to think of it as a kind of bundle over a two-dimensional manifold (or over a two-dimensional orbifold, in general) with the circle as fibre. If the manifold is compact, the foliation by circles is usually more obvious than in the non-compact case.
For example, in the $\Gamma=S^3$ case, which corresponds to the de Sitter core, the foliation corresponds to the well-known Hopf fibration. In the non-compact case we can mention, following Ref. \cite{Scott1983}, that any 3-manifold with a geometric structure modelled on $\mathbb{R}\times S^2$ is foliated by lines or circles. Therefore, the slices we are referring to in this work are Seifert fiber spaces. Using some tools from these spaces, it is our purpose to show here how to distinguish between the core and the rest of the slices. In the two-dimensional case, the geometry of a given closed surface $\Sigma$ is determined by the Euler number, $\chi (\Sigma)$, according to whether $\chi$ is positive, zero or negative. When dealing with Seifert bundles, the appropriate geometry can be determined from two invariants: the Euler number of the base manifold (or, in general, of the base orbifold), $\chi$, and the Euler number of the Seifert bundle, $e$. For the case we are interested in we have \cite{Scott1983}: \begin{eqnarray} \Gamma_{1}=\mathbb{R}\times S^2: &&\, \, \, \chi =2 \, \, \, \mathrm{and}\, \, \, e = 0 \nonumber \\ \Gamma_{2}=S^3: &&\, \, \, \chi =2 \, \, \,\mathrm{and}\, \, \,e =1 \nonumber \end{eqnarray} \\ \indent Note that, although the base manifolds are $S^2$ and, therefore, $\chi=2$, the difference between $\Gamma_1$ and $\Gamma_2$ is due to whether the bundle is trivial. \\ \\ \indent First of all, note that there is a direct way of computing the Euler number of the Seifert bundle, which is associated with the corresponding Seifert invariant: \begin{equation} \label{edef} e= -\sum_{j=1}^{n}\frac{\beta_{j}}{\alpha_{j}}, \end{equation} \\ where the relatively prime pairs, $(\alpha_{j},\beta_{j})$, determine the orbit type of an orbit with isotropy group $\mathbb{Z}_{\alpha_{j}}$ \cite{Orlik1972}. From the previous definition it can be seen that $e\left(\Gamma_{1}\right)=0$ but $e\left(\Gamma_{2}\right)=1$.
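Eq. (\ref{edef}) is elementary to evaluate. The snippet below is our own illustration; in particular, the unnormalized pairs used for $S^3$ and for the trivial bundle are a conventional choice assumed here for concreteness, not taken from Ref. \cite{Orlik1972} verbatim. It reproduces $e(\Gamma_1)=0$ and $e(\Gamma_2)=1$:

```python
# Euler number of a Seifert bundle from its (alpha_j, beta_j) pairs, Eq. (edef)
from fractions import Fraction

def euler_number(pairs):
    # e = -sum_j beta_j / alpha_j
    return -sum(Fraction(beta, alpha) for alpha, beta in pairs)

assert euler_number([(1, 0)]) == 0    # trivial circle bundle: e(Gamma_1) = 0
assert euler_number([(1, -1)]) == 1   # Hopf fibration of S^3: e(Gamma_2) = 1
```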
\\ \\ \indent In addition to this way of calculating $e$, there is a nice result which could be useful in the physics literature. Here we will briefly summarize the construction developed in Ref. \cite{Ouyang1994} without giving any proofs, which can be found in the aforementioned work. From this point on we will refer only to manifolds and not to orbifolds, although appropriate generalizations can usually be performed. \\ \\ \indent Let $\Sigma$ be a closed two-dimensional Riemannian manifold. For the special case of constant Gauss curvature of $\Sigma$, $K$, the Gauss-Bonnet theorem states that \begin{equation} \label{gauss} K \,\mathrm{Vol}(\Sigma)= 2\pi \chi (\Sigma). \end{equation} \\ \indent Interestingly, there is an expression similar to Eq. (\ref{gauss}) but for the case of certain circle bundles over Riemannian manifolds, which reads \begin{equation} \label{gaussbis} \tilde R \, \mathrm{Vol}(B)= 2\pi e (\Gamma). \end{equation} \\ \indent In Eq. (\ref{gaussbis}), $\tilde R$ is the (not necessarily constant) Riemann curvature of the fiber metric $\tilde g$ in the total space, $\Gamma$, and $\mathrm{Vol}(B)$ is the volume of the base manifold. \\ \\ \indent From Eq. (\ref{edef}) we get that $\tilde R$ is constant on $\Gamma_1$, which can also be shown by direct calculation. In fact, $\tilde R = 0$ for $\Gamma_{1}$ and $\tilde R = 2 K$ for $\Gamma_{2}$, where $K$ is the constant Gaussian curvature of $\mathrm{B}$, which is a two-sphere. Note that the factor of $2$ is irrelevant since it can be absorbed in the normalization of $e$. Then, Eqs. (\ref{gauss}) and (\ref{gaussbis}) are completely analogous, with the exception that $\tilde R$ contains information on both the fibre and the base.
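For a round two-sphere of radius $l$, the case relevant for our base manifold $B$, Eq. (\ref{gauss}) can be checked in one line (a sympy check of ours):

```python
import sympy as sp

l = sp.Symbol('l', positive=True)
K = 1/l**2               # Gaussian curvature of the round two-sphere
area = 4*sp.pi*l**2      # Vol(Sigma): the area of the sphere
assert sp.simplify(K*area/(2*sp.pi)) == 2   # Euler characteristic chi(S^2) = 2
```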
\\ \\ \indent Therefore, this curvature, $\tilde R$, can be used to distinguish between slices of singular and regular (in the sense of Borde's theorem) black holes, as summarized in the following table \\ \begin{table}[htb] \centering \begin{tabular}{ |c|c|c|c| } \hline Slice & $\chi(B)$ &$e(\Gamma)$ & $\tilde R$ \\ \hline $\mathbb{R}\times S^2$& 2 & 0&0\\ $S^3$& 2 & 1&$2 K(\mathrm{B})$ \\ \hline \end{tabular} \caption{Differences between achronal slices of spherically symmetric regular black hole spacetimes.} \label{table} \end{table} \subsection{Some issues: predictability and causality violation} Interestingly, by Geroch's splitting theorem \cite{Geroch1970}, these regular black hole spacetimes are not globally hyperbolic. Although this might be thought of, in principle, as a major drawback associated with a lack of predictability, Wald showed \cite{W1} that, for the case of a Klein-Gordon scalar field propagating in an arbitrary static space-time, a physically well-posed and fully deterministic dynamical evolution prescription can be given. Even more, this prescription is the only possible way of defining the dynamics of a scalar field in a static, non-globally-hyperbolic, spacetime, as shown by Wald and Ishibashi \cite{W2}, who applied their techniques to the dynamics of electromagnetic and gravitational perturbations in an AdS spacetime, which is a widely used example of the lack of global hyperbolicity \cite{W3}. Therefore, as the kind of regular black holes we are referring to in this work are static, Wald and Ishibashi's prescription can be safely employed to restore determinism. \\ \\ \indent Determinism is not the only problem faced by regular black holes. Topology change, which seems to be unavoidable within these systems, is usually believed to occur at the high price of causality violations due to results by Geroch \cite{Geroch1,Geroch2}.
Even more, Tipler showed that Einstein's equations cannot hold (with a source with non-negative energy density) if the spatial topology changes \cite{Tipler1,Tipler2}. One of the main assumptions of these and other theorems is the compactness of the spatial slices. Therefore, as one situation of interest is the asymptotically flat spatial geometry of an isolated system (including regular black holes), these results have to be extended. As pointed out by Borde \cite{Bordetopologychange}: ``We expect to be able to compactify this situation by adding a point at infinity, or by imposing periodic boundary conditions (`putting it in a box'), and thus we might expect results similar to the compact case.'' In this line of thought, Geroch introduced the concept of an externally Euclidean three-manifold \cite{Geroch2}, which can represent an isolated system, to show that the topology change occurs within a certain compact set. \\ \\ \indent Concerning cobordism and Morse theory, most of the techniques refer to compact slices \cite{Antonelli1979,Sorkin1986, Horowitz1991,Gibbons1992,Low1992}. In fact, to our knowledge, only a few works mention how to implement cobordism theory for non-compact slices. Specifically, Yodzis comments \cite{Yodzis1972,Yodzis1973} that the standard results could be used when two slices are related by a finite number of surgeries (which is always possible if both are compact), although no specific examples are given. Dowker and García have developed \cite{Fowker1998} a Morse and handlebody technology that can be adapted to non-compact manifolds. As the authors put it: ``This will not be difficult because, with the assumption that the topology change is localized in space, we can reduce the questions to the closed case by, roughly speaking, closing off space [...] (and then) we open back to the physical manifolds.''
\\ \\ \indent Therefore, although most of these techniques are, in principle, applicable to the case of regular black holes here considered and they could serve to shed more light on the relation between regularity and topology change on general grounds, here we will briefly focus on Geroch's extensions of his classical theorems \cite{Geroch1,Geroch2} to the case of open slices (see also \cite{Bordetopologychange} and \cite{Tipler2} for a brief account of these techniques). \subsection{Geroch and Tipler's theorems and regular black holes} As previously mentioned, regular (and most singular) black holes usually have open slices with $\mathbb{R}\times S^2$ topology. Therefore, some ``compactness requirement'' has to be introduced in order to apply the usual techniques. This is captured by the following \\ \\ \indent {\it Definition \cite{Geroch2}}. A three-manifold $\Gamma$ is said to be {\it externally Euclidean} if there exists a connected compact set $C$ of $\Gamma$ such that $\Gamma-C$ is diffeomorphic to $\mathbb{R}\times S^2$ (this means that $\Gamma-C$ is diffeomorphic to the Euclidean space minus a three-ball). \\ \\ \indent {\it Definition \cite{Geroch2}}. Let $\mathbb{M}$ be a four-dimensional subset of a spacetime, $\mathcal{M}$, whose boundary is the disjoint union of two three-manifolds $\Gamma$ and $\Gamma'$ which are externally Euclidean and spacelike. Suppose that there exists a connected compact set $K$ of $\mathbb{M}$ such that $\mathbb{M} - K$ is diffeomorphic to $S^2\times \mathbb{R}\times [0,1]$, where for each fixed number $\alpha \in [0,1]$ the submanifold $S^2\times \mathbb{R}$ of $\mathbb{M}$ is spacelike and for each fixed point $p$ of $S^2\times \mathbb{R}$ the line $[0,1]$ of $\mathbb{M}$ is timelike. Then $\mathbb{M}$ will be called {\it externally Lorentzian}. \\ \\ \indent Then, topology change (if any) must take place in the timelike world-tube, $K$, between $\Gamma$ and $\Gamma'$.
\\ \\ \indent With these definitions we are ready to state the fundamental result by Geroch. \\ \\ \indent{\it Theorem (Geroch) \cite{Geroch2}}. Let $\mathbb{M}$ be an externally Lorentzian portion of the spacetime, $\mathcal{M}$, the boundary of $\mathbb{M}$ being the disjoint union of two spacelike externally Euclidean 3-manifolds, $\Gamma$ and $\Gamma'$. Suppose $\mathbb{M}$ has no closed timelike curves. Then $\Gamma\simeq \Gamma'$ and $\mathbb{M}\simeq \Gamma \times [0,1]$. In the case of the kind of regular black holes considered here, $\Gamma = \mathbb{R}\times S^2 \not \simeq \Gamma'= S^3$, with $\Gamma$ and $\Gamma'$ being externally Euclidean and compact, respectively. Since $\Gamma \not\simeq \Gamma'$, the hypotheses of Geroch's theorem cannot all hold and, therefore, there exist closed timelike curves within the timelike tube, $K$. Therefore, by ``reversing'' Geroch's theorem we can state the following \\ \\ \indent {\it Proposition}. There is no spacetime $\mathcal{M}$ containing regular and well-behaved black holes: the existence of a de Sitter core and an asymptotically flat region implies the existence of closed timelike curves. \\ \\ \indent Given this impossibility, it would be interesting to find regular black hole solutions which minimize the causality issue. As the problematic region is the timelike tube, $K$, one could ask for the smallest volume contained within $K$, which implies minimizing the volume $\mathcal{A} \sim R(0)^{-3/2} $ of $\Gamma'=S^3$. If we assume $R(0)\sim l_{p}^{-2}$ ($l_{p}$ stands for the Planck length) in the region near the core, where quantum gravitational effects are expected to occur, we get $\mathcal{A}\sim l_{p}^3$ near the core. Then, if a sufficiently small $C$ is assumed for the second slice $\Gamma'$ in a neighbourhood of the core, causality violation can be constrained to the Planck scale.
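The scaling used in the previous estimate can be made explicit for the round metric on $S^3$, for which $R=6/a^2$ with $a$ the radius (a normalization we assume here only for illustration):

```python
import sympy as sp

R0, lp = sp.symbols('R_0 l_p', positive=True)
a = sp.sqrt(6/R0)            # radius of a round S^3 with scalar curvature R_0
vol = 2*sp.pi**2*a**3        # its three-volume, scaling as R_0^(-3/2)
assert sp.simplify(vol - 2*sp.pi**2*(6/R0)**sp.Rational(3, 2)) == 0
# with R_0 ~ 1/l_p^2 the volume is of order l_p^3
ratio = sp.simplify(vol.subs(R0, 1/lp**2)/lp**3)
assert ratio.is_constant()   # a pure number: the l_p dependence drops out
```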
\\ \\ \indent Interestingly, Geroch's previous result was strengthened by Tipler, who proved the following \\ \\ \indent {\it Theorem (Tipler) \cite{Tipler2}}. Let $\mathbb{M}$ be an externally Lorentzian portion of a spacetime, $\mathcal{M}$, the boundary of $\mathbb{M}$ being the disjoint union of two spacelike externally Euclidean 3-manifolds, $\Gamma$ and $\Gamma'$. If $\Gamma$ is a partial Cauchy surface for the entire spacetime, and a Cauchy surface for $\mathbb{M}- K$, and in addition we assume (i) the WEC and the Einstein equations hold on $\mathbb{M}$; (ii) the GCC holds on $\mathcal{M}$. Then $\Gamma\simeq \Gamma'$ and $\Gamma$ is a Cauchy surface for $\mathbb{M}$. \\ \\ \indent In this case, our application to regular black holes can be stated as the following proposition, which is a direct consequence of ``reversing'' Tipler's theorem. \\ \\ \indent{\it Proposition}. Changes in the topology of spacelike slices of a regular black hole spacetime are not compatible with the WEC, Einstein's equations and the GCC holding simultaneously. \\ \\ \indent Therefore, assuming the GCC, going beyond General Relativity emerges as a possible way of constructing regular black holes with topology change. In this case, the GCC is fulfilled but, as we previously said, the existence of closed timelike curves is unavoidable. \subsection{Regular black holes without topology change?} At this point it should be clear that the compactness hypothesis for the Cauchy surface of Penrose's theorem \cite{Penrose1965} is the reason behind topology changes in regular black holes. Therefore, in order to avoid causality problems, regular black holes without topology changes have to be considered. \\ \\ \indent Fortunately, there are singularity theorems for ``open'' universes so that we can ``reverse'' them to obtain interesting results for regular black holes.
Incidentally (or not), one of these theorems was formulated by Borde and Vilenkin as follows: \\ \\ \indent {\it Theorem (Borde and Vilenkin) \cite{BordeVilenkin}}. Let $\mathcal{M}$ be a spacetime such that \begin{enumerate} \item The NCC holds. \item $\mathcal{M}$ is future causally simple. \item $\mathcal{M}$ contains no compact slices. \item $\mathcal{M}$ contains a point $p$ whose future light cone reconverges. \end{enumerate} Then $\mathcal{M}$ is null geodesically incomplete to the future. \\ \\ \indent Roughly speaking, the causal simplicity assumption is desirable. Although this condition is weaker than global hyperbolicity (see \cite{Minguzzi2008} for an up-to-date account of the complete causal hierarchy), it suffices to ensure, for example, that no closed causal curves are present. \\ \\ \indent Interestingly, the assumptions (1)-(4) of the previous theorem can be used to improve the previous definition of a regular and well-behaved black hole. Here we introduce the following \\ \\ {\it Definition.} A spacetime $\mathcal{M}$ is said to contain an {\it ideal black hole} if assumptions (1)-(4) of the previous theorem are fulfilled. \\ \\ \indent Therefore, from the previous theorem we can state the following \\ \\ \indent{\it Proposition}. No spacetime $\mathcal{M}$ can contain an ideal black hole. \\ \\ \indent Thus, regular black holes without either topology changes or closed timelike curves and satisfying the NCC are not supported. \\ \\ \indent At this point, some comments are in order: (i) assumptions (2) and (3) cannot be relaxed if we want $\mathcal{M}$ to contain no closed timelike curves; (ii) assumption (4) introduces one of the main features of black holes (equivalent to the existence of trapped surfaces) and, therefore, it cannot be dropped. Then, we arrive at the conclusion that only the NCC can be relaxed in order to obtain a closer version of an {\it ideal black hole}.
Perhaps instead of introducing a de Sitter-like core (say $\rho + p=0$ to simplify) to regularize the $r=0$ region of singular black holes within spherical symmetry, one could replace it by an effective fluid minimally violating the NCC (as happens in \cite{Visser2019bis}, where $\rho + p_{\parallel} = k$, with $k$ constant and $k<0$ near the regular center) or even consider theories beyond General Relativity. Note that, in this case, departure from spherical symmetry is mandatory because, as we have shown in previous sections, only de Sitter (or Minkowski) cores are allowed within this symmetry. \\ \\ \subsubsection{The many advantages of Nariai and Bertotti-Robinson cores} Up to this point we have focused our attention on regular black holes with two different slices: $S^3$ at the $r\approx 0$ core and $\mathbb{R} \times S^2$ outside it. As we have shown along the manuscript, topology change between slices is not free of problems, bringing with it undesirable properties such as the lack of global hyperbolicity and the existence of closed timelike curves. Although the first could be solved using Wald and Ishibashi's prescription, causality violations seem to be unavoidable within most models of regular black holes. Fortunately, the cosmological constant comes to our rescue. Let us elaborate on this idea. As shown by Bousso and Hawking \cite{Hawking1996,Bousso1}, the $S^1\times S^2$ topology of the spacelike sections of Reissner-Nordstr\"om-de Sitter spaces becomes evident when an appropriate coordinate change is performed (see the Appendix of \cite{Hawking1996} for the neutral case and \cite{Bousso1} for the charged case). In general, the radius of the $S^2$ sphere varies along the $S^1$ (the minimal two-sphere corresponds to the black hole horizon and the maximal one to the cosmological horizon). Interestingly, for the charged Nariai case, the two-sphere radius is independent of the $S^1$-coordinate.
In this sense, the spacelike slices of a charged Nariai solution can be thought of as a ``perfect doughnut", while for a generic Reissner-Nordstr\"om-de Sitter solution they would form a ``wobbly doughnut". Therefore, if black holes immersed in a de Sitter space are considered, the topology of the spacelike slices is $S^1\times S^2$. This feature, which we remind the reader is a direct consequence of a non-zero cosmological constant, avoids topology change between spacelike slices upon regularizing this kind of black hole with a Nariai core near its center. Then, globally hyperbolic regular black holes satisfying Borde's theorem are not forbidden and they can in principle be constructed. \\ \\ \indent Interestingly, as mentioned in the introduction, a very recent example of a classical mechanism giving rise to a regular black hole with a Nariai center but violating the NEC has been given in \cite{Mariam} through the use of three-form fields. A different example was presented in \cite{Melgarejo2020}, also very recently, based on a slight modification of the well-known spherically symmetric Hayward black hole \cite{Hayward}, satisfying the NEC everywhere. Specifically, the proposed line element, which reads % \begin{equation} \label{Hay} ds^{2}=-\left(1-\frac{2 m r^2}{r^3+ 2 l^2 m}\right)dt^2+\frac{dr^2}{1-\frac{2 m r^2}{r^3+ 2 l^2 m}}+l^2 d\Omega^2, \end{equation} \\ \\ has a Nariai core with slices $S^1\times S^2$, in complete agreement with Borde's theorem. The topology of the spatial slices at spacelike infinity is $\mathbb{R}\times S^2$ for the previous solution and, therefore, topology change appears. In order to bypass this issue, let us embed this geometry into a de Sitter space. The result is % \begin{equation} \label{Hay2} ds^{2}=-\left(1-\frac{2 m r^2}{r^3+ 2 l^2 m}-\frac{\Lambda}{3}r^2\right)dt^2+\frac{dr^2}{1-\frac{2 m r^2}{r^3+ 2 l^2 m}-\frac{\Lambda}{3}r^2}+\frac{1}{\Lambda} d\Omega^2.
\end{equation} This geometry is regular everywhere, it has a Nariai and a Schwarzschild-de Sitter geometry for $r\approx 0$ and $r\approx \infty$, respectively, and it satisfies the NEC everywhere. Therefore, it provides an interesting example of a geometry satisfying Borde's theorem without topology change. If, on the contrary, electrovacuum black holes are considered, the topology of the spacelike slices is $\mathbb{R}\times S^2$ (we are assuming spherical horizons for simplicity). Therefore, if a Bertotti-Robinson core is assumed for the regularized black hole, topology change is avoided and global hyperbolicity can in principle be restored. However, Borde's theorem and the Borde-Vilenkin theorem imply, roughly speaking, that the NCC is violated if causal simplicity and the existence of trapped surfaces or reconverging light cones are guaranteed. \\ \\ \indent As a summary of the present discussion, and based on the previous example, we can conclude with the following \\ \\ \indent {\it Proposition}. Regular black holes satisfying Borde's theorem without topology changes are not forbidden. \\ \\ \section{Final comments and conclusions} \label{6} Here we briefly summarize the main novelties of our work together with some final comments and directions for future work. We have studied regular black holes from a global perspective, seeking to evade some of the well-known singularity theorems by using new ``reversed" results following the idea behind Borde's theorem. This strategy has allowed us to study the interplay between regularity, topology change and causality. It has been shown that, in general, the cores of spherically symmetric and (locally) static regular black holes are $S^3$, $H^3$, $\mathbb{R}\times S^2$ or $S^1\times S^2$, depending on whether de Sitter, anti-de Sitter, Bertotti-Robinson or Nariai geometries are employed to describe the slices at the regular center, showing the existence of possibilities beyond the well-studied de Sitter and hollow cores.
Some techniques from circle bundles have been employed to describe the transition between the core of the regular black hole and the rest of the slices in most of the cases considered in the literature. After studying the consequences of Geroch's, Tipler's and Borde and Vilenkin's theorems for topology change, we have shown that Nariai cores can be safely used to construct regular black holes satisfying Borde's theorem but, importantly, without topology changes. \\ \\ We end this work by pointing out some physical properties of the Nariai core, together with their differences with respect to the usual de Sitter case, that are relevant for singularity resolution \cite{Dadhich2001}. First, note that the gravitational charge density, $\rho + 3p$, is negative for de Sitter space and, therefore, it favours expansion. In this sense, the appearance of an effective cosmological constant gives rise to an inflationary core instead of a singularity. Moreover, de Sitter inflation is homogeneous, isotropic and shear-free. On the contrary, Nariai cores produce inflation that is homogeneous but anisotropic, with non-zero shear. Intuitively, when the shear is non-zero, the core is necessarily anisotropic and consequently it cannot be conformally flat. As a consequence, any spherically symmetric Nariai core is of Petrov type D which, interestingly, coincides with the Petrov type of the asymptotic region of most of the regular solutions considered in the literature near spatial infinity. We consider that the intriguing relations between regularity, topology change and changes in the Petrov type, first suggested in \cite{Melgarejo2020}, deserve further study. \\ \\ As a final comment, we would like to remind the reader that the present work has focused on regular solutions that are continuous throughout the whole spacetime.
Therefore, it remains to be seen what happens when regular solutions with boundary surfaces or surface layers joining the core and the enveloping metrics are considered. \section{Acknowledgements} P. B. thanks Ernesto Contreras for a critical reading of the manuscript. The author is funded by the Beatriz Galindo contract BEAGAL 18/00207 (Spain). P. B. dedicates this work to Anaís, Lucía, Inés and Ana. Comments and suggestions by three anonymous referees are gratefully acknowledged.
\section{Introduction} Dark matter (DM) and dark energy (DE) are the two major constituents of our universe \cite{Ade:2015xua}. The first, dark matter, is believed to be responsible for the structure formation of the universe and is almost pressureless, while dark energy, a modification of the matter sector of general relativity, is assumed to be responsible for the currently observed acceleration of the universe. Together they comprise approximately 96\% of the total energy density of the universe, with unknown character and origin. Thus, the dynamics of the universe rests heavily on this sector. Since the character of this dark sector is unknown, it is sometimes assumed that dark matter and dark energy are coupled to each other, so that they behave like a single dark fluid. Although this consideration is somewhat phenomenological, the possibility cannot be ruled out. From a philosophical point of view, we have no strong evidence excluding a dark matter-dark energy interaction, so one can certainly contemplate an interaction between these two fields. In fact, the standard cosmological laws can be recovered at any time in the non-interacting limit. Additionally, the dynamics of the universe in the presence of a coupling between dark matter and dark energy becomes considerably richer, with many possibilities. On the other hand, in particle physics any two fields can interact with each other. Since both dark energy and dark matter can be thought of as fields, for instance scalar fields, the idea of a dark matter-dark energy interaction is also supported from the particle physics viewpoint. This initiated a new branch of dark energy physics known as interacting dark energy theory. The idea of a coupling in the dark sector was initiated by Wetterich \cite{Wetterich:1994bg} and subsequently by Amendola \cite{Amendola:1999er}.
This scenario has since been explored in the context of current cosmology with some interesting outcomes. The coupling between dark matter and dark energy may provide an explanation of the cosmic coincidence problem \cite{Zlatev:1998tr}, a generic problem in dynamical dark energy models and even in $\Lambda$-Cold Dark Matter ($\Lambda$CDM) cosmology. Thus, in the last couple of years, rigorous analyses have been performed by several authors with many interesting possibilities, see for instance \cite{Billyard:2000bh, Olivares:2005tb,delCampo:2008jx,Amendola, Koivisto, delCampo:2008sr,Chimento:2009hj, Quartin:2008px, Valiviita:2009nu, Clemson:2011an, Pan:2013rha, Yang:2014hea, Faraoni:2014vra, Yang:2014gza, Nunes:2014qoa, yang:2014vza,thor,barrow, amendola, llinares, Pan:2014afa, Chen:2011cy, Tamanini:2015iia, Pan:2012ki, Duniya:2015nva, Valiviita:2015dfa, Yang:2016evp, Pan:2016ngu, Mukherjee:2016shl, Sola:2016ecz, Sharov:2017iue, Cai:2017yww, Santos:2017bqm, Mifsud:2017fsy}. The investigations of coupled dark energy have been further fueled by recent observational data estimating a nonzero interaction in the dark sector \cite{Salvatelli:2014zta, Nunes:2016dlj,Kumar:2016zpg, vandeBruck:2016hpz, Yang:2017yme, Kumar:2017dnp}. Although most studies in this direction assume an interaction between dark matter and dark energy, a generalized version of interacting dark energy, in which the interacting components could be any two barotropic fluids, is also appealing \cite{Barrow:2006hia}. In this work we propose a novel mechanism to test the stability of interacting dark energy models in which the dark energy equation of state can remain unrestricted, unlike in other interacting dark energy models where the state parameter for dark energy is necessarily restricted.
To illustrate this further, we consider a spatially flat Friedmann-Lema\^itre-Robertson-Walker universe where pressureless dark matter interacts with dark energy through a nongravitational interaction function. We find that, using the pressure perturbation equation for dark energy, it is quite possible to construct several interaction models that can be tested without any prior limitation imposed on the dark energy equation of state. The newly constructed interaction functions have a direct dependence on the dark energy equation of state, and it is precisely this built-in dependence that removes the need for a prior on it. We examine the interacting scenario for both constant and dynamical dark energy equations of state. Finally, we constrain both interacting scenarios using the latest observational data from different astronomical sources, namely, the cosmic microwave background radiation \cite{ref:Planck2015-1, ref:Planck2015-2}, baryon acoustic oscillation distance measurements \cite{Beutler:2011hx,Padmanabhan:2012hf, Manera:2012sc}, the Joint Light-curve Analysis sample of Supernovae Type Ia \cite{Betoule:2014frx}, weak lensing \cite{Heymans:2013fya,Asgari:2016xuw}, redshift space distortion data \cite{Percival:2004fs,Blake:2011rj,Samushia:2011cs,Reid:2012sw,Beutler:2012px,delaTorre:2013rpa} and finally the Hubble parameter measurements from cosmic chronometers \cite{Moresco:2016mzx} plus the local Hubble constant \cite{Riess:2016jrr}. We note that the use of several observational data sets provides tighter constraints on the models. The paper is organized as follows. In \textit{Section} \ref{sec-2} we describe the background equations of an interacting universe with the new interacting dark energy model for constant and dynamical dark energy equations of state. Thus, essentially, we focus on two interacting cosmological scenarios in this study.
In \textit{Section} \ref{sec-perturbations} we discuss the interacting cosmology in the perturbed universe. In \textit{Section} \ref{sec-data}, we describe the astronomical data sets that have been used to constrain the models, and \textit{Section} \ref{sec-results} presents the observational constraints on the proposed models. After that, in \textit{Section} \ref{sec-comparison} we compare the current models on statistical grounds and also make a comparison with other existing interaction models. The next \textit{Section} \ref{sec-tension} contains an extensive analysis of the current tension in $H_0$. Finally, we close the work in \textit{Section} \ref{conclu} with a short summary of the results obtained. \section{Interacting cosmology: Background universe} \label{sec-2} In this section we describe the governing equations for any interacting dark energy model. The background geometry is set to be a spatially flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) universe characterized by the line element $ds^2 = - dt^2 + a^2 (t) \left[ dr^2 + r^2 \left(d \theta ^2+ \sin^2 \theta d \phi^2\right)\right]$, where $a(t)$ is the expansion scale factor of the universe. The conservation equation of a coupled dark matter and dark energy system in the FLRW universe can be written as $\dot{\rho}_c + 3 H \rho_c = - \dot{\rho}_x - 3 H (p_x +\rho_x)$, which can be decoupled into \begin{eqnarray} \rho'_c+ 3 \mathcal{H} \rho_c=-aQ,\label{cons1} \\ \rho'_x+ 3 \mathcal{H} (p_x+\rho_x)=aQ,\label{cons2} \end{eqnarray} with a new quantity $Q$, known as the interaction rate between the dark sectors. Here $\mathcal{H}=a'/a$ is the conformal Hubble parameter, in which the prime denotes differentiation with respect to the conformal time; $\rho_c$, $\rho_x$ are, respectively, the energy densities of pressureless dark matter and dark energy, and $p_x$ is the pressure of the dark energy fluid.
In addition, we consider non-relativistic baryons (energy density $\equiv$ $\rho_b$) and relativistic radiation (energy density $\equiv$ $\rho_r$). Since the physics of baryons and radiation is quite well known, we assume that they are conserved independently, so that the evolution laws of baryons and radiation are $\rho_b \propto a^{-3}$ and $\rho_r \propto a^{-4}$, respectively. The dynamics of the universe is constrained by the Friedmann equation \begin{eqnarray} \left(\frac{\mathcal{H}}{a}\right)^2 = \left(\frac{8 \pi G}{3}\right)(\rho_b+\rho_r+ \rho_c+\rho_x), \end{eqnarray} which is the constraint equation of the cosmological scenario. Now, to go ahead one needs a specific functional form for $Q$, and we follow the same tradition. Several forms of $Q$ exist in the literature, the most used interactions being $Q \propto \rho_c$, $Q \propto \rho_x$, $Q \propto (\rho_c+\rho_x)$. In this work, we propose a new interaction of the form \begin{eqnarray}\label{int} Q=\xi \dot{\rho}_x = \left(\xi/a\right) \rho_x^\prime, \end{eqnarray} where the dot represents the derivative with respect to the cosmic time while the prime, as mentioned, stands for the derivative with respect to the conformal time, and $\xi$ is the coupling parameter. The direction of energy flow, determined by the sign of $Q$, depends on the sign of the coupling parameter $\xi$ as well as on the evolution of $\rho_x$. Precisely, if $\xi> 0$ then $Q > 0$ whenever $\rho_x^\prime > 0$, while for $\xi> 0$ one obtains $Q < 0$ for $\rho_x^\prime < 0$. Similarly, for $\xi< 0$ one can derive the direction of energy flow between the dark sectors from the character of $\rho_x^\prime$.
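As an aside, the background dynamics implied by the interaction (\ref{int}) can be integrated numerically. For constant $w_x$, eliminating $Q$ from eqn (\ref{cons2}) gives $d\rho_x/d\ln a = -3(1+w_x)\rho_x/(1-\xi)$, i.e. a power law with a coupling-modified exponent, while eqn (\ref{cons1}) then fixes the dark matter evolution. The following Python sketch (with illustrative, not fitted, parameter values) integrates the system and checks the analytic power-law solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) values: constant w_x and a small negative coupling
w_x, xi = -0.9, -0.1
rho_c0, rho_x0 = 0.3, 0.7        # present densities in units of the critical density

def rhs(lna, y):
    """Background conservation equations in e-folds N = ln a.
    Eliminating Q = xi*drho_x/dt from the dark energy equation gives
    drho_x/dN = -3(1+w_x) rho_x/(1-xi); dark matter then absorbs -Q."""
    rho_c, rho_x = y
    drho_x = -3.0*(1.0 + w_x)*rho_x/(1.0 - xi)
    drho_c = -3.0*rho_c - xi*drho_x
    return [drho_c, drho_x]

# Integrate from today (N = 0) back to a = 0.1 (z = 9)
sol = solve_ivp(rhs, [0.0, np.log(0.1)], [rho_c0, rho_x0], rtol=1e-10, atol=1e-12)
rho_c, rho_x = sol.y[:, -1]

# Analytic solution: rho_x scales as a**(-3(1+w_x)/(1-xi))
a = 0.1
rho_x_exact = rho_x0 * a**(-3.0*(1.0 + w_x)/(1.0 - xi))
print(rho_c, rho_x, rho_x_exact)
```

The numerical and analytic dark energy densities agree to integrator precision, confirming that the coupling simply rescales the dilution exponent of $\rho_x$.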
Now, using the conservation equation (\ref{cons2}), the interaction (\ref{int}) can be written in a general form as \begin{eqnarray}\label{Q} Q= \frac{3}{a} \left(\frac{\xi}{\xi- 1}\right)\mathcal{H}\, (p_x+ \rho_x)= \frac{3\xi_{e}}{a}\mathcal{H}\, (p_x+ \rho_x), \end{eqnarray} where we call $\xi_e = \xi/(\xi- 1)$ the effective coupling parameter of the interaction scenario. It is clearly seen from eqn (\ref{Q}) that for $\xi = 1$, the interaction $Q$ becomes infinite. However, we remark that $\xi= 1$ represents a very strong interaction in the dark sector which is not allowed by the observational data, see for instance \cite{Salvatelli:2014zta, Nunes:2016dlj,Kumar:2016zpg, vandeBruck:2016hpz, Yang:2017yme, Kumar:2017dnp}. The interaction (\ref{Q}) has a special and appealing property: for a barotropic equation of state $p_x= w_x \rho_x$, the interaction is linear, while for any more complicated equation of state the interaction (\ref{Q}) represents a nonlinear interaction in the dark sector. Another interesting feature of the above interaction is the following: if the dark energy is assumed to be the cosmological constant, that is, $p_x = -\rho_x$, then the coupling vanishes ($Q= 0$) even if the coupling parameter is non-zero. In other words, although an interaction formally exists, the evolution equations do not change. In the present work we consider that the dark energy fluid obeys a barotropic equation of state $w_x$ and thus, the interaction (\ref{Q}) can be recast as \begin{eqnarray}\label{Q-1} Q = \frac{3}{a} \left(\frac{\xi}{\xi - 1}\right)\mathcal{H}\, (1+w_x) \rho_x\, . \end{eqnarray} Now, in order to measure the coupling of the interaction in the presence of a dark energy fluid, we consider two distinct possibilities: (A) the dark energy equation of state is constant, or (B) the dark energy equation of state is dynamical.
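The algebra leading from eqn (\ref{int}) to eqn (\ref{Q-1}) is elementary and can be checked symbolically. The snippet below (a verification sketch, not part of any analysis pipeline) substitutes $\rho_x^\prime = aQ/\xi$ into the conservation law (\ref{cons2}) with $p_x = w_x \rho_x$ and solves for $Q$:

```python
import sympy as sp

xi, H, a, rho_x, w_x, Q = sp.symbols('xi H a rho_x w_x Q')

# Conservation law with p_x = w_x*rho_x:  rho_x' + 3*H*(1 + w_x)*rho_x = a*Q,
# combined with the ansatz Q = (xi/a)*rho_x', i.e. rho_x' = a*Q/xi
eq = sp.Eq(a*Q/xi + 3*H*(1 + w_x)*rho_x, a*Q)
Q_sol = sp.solve(eq, Q)[0]
# Reproduces eq. (Q-1): Q = 3*xi*H*(1 + w_x)*rho_x / (a*(xi - 1))
print(sp.simplify(Q_sol))
```

The solved expression matches eqn (\ref{Q-1}) term by term, and it makes the $\xi=1$ pole and the $w_x=-1$ zero discussed above explicit.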
For the dynamical dark energy equation of state, we consider the most well-known dark energy parametrization, the Chevallier-Polarski-Linder (CPL) parametrization \cite{Chevallier:2000qy, Linder:2002et}, in which the equation of state for dark energy is represented by $w_x= w_0 + w_a (1-a)$. Here $w_0$ is the current value of $w_{x}$, i.e. $w_0 = w_{x}$ at $a=1$ (we note that the present value of the scale factor has been set to unity) and $w_a = -dw_x/da$ at $a = 1$. \section{Interacting cosmology: Perturbed universe} \label{sec-perturbations} Now, in order to study the linear perturbations of the interacting dark energy models, we introduce the most general scalar mode perturbation, defined by the following metric \cite{ref:Ma1995,ref:Mukhanov1992,ref:Malik2009} \begin{eqnarray} ds^2=a^2(\tau) \Bigl[ -(1+2\phi)d\tau^2+2\partial_iBd\tau dx^i+\Bigl((1-2\psi)\delta_{ij}+2\partial_i\partial_jE \Bigr)dx^idx^j \Bigr], \label{eq:per-metric} \end{eqnarray} where $\phi$, $B$, $\psi$ and $E$ are the gauge-dependent scalar perturbation quantities. Let us proceed with the general perturbation equations. We consider the four-velocity of any fluid (denoted by `A') as $u_A^{\mu}= a^{-1} (1- \phi, \partial^{i} v_A)$, where $v_A$ is the peculiar velocity potential, which in Fourier space is related to the volume expansion $\theta_A = -k^2 (v_A + B)$. Now, since we are considering a coupling between dark matter and dark energy, we have the following constraint: $\nabla_{\nu} T_{A}^{\mu \nu} = Q_A$, where $\sum_{A} Q_A = 0$, and $T^{\mu \nu}_A$ is the usual energy-momentum tensor of the fluid $A$. Due to the coupling between the dark sectors, energy flow and momentum flow take place in general; thus, denoting by $\tilde{Q}_A$, $F_{A}^{\mu}$, respectively, the energy transfer rate and the momentum transfer rate relative to the four-velocity $u^{\mu}$, following Refs.
\cite{ref:Valiviita2008, ref:Majerotto2010, Clemson:2011an}, one can write \begin{eqnarray} Q_{A}^{\mu} = \tilde{Q}_A u^{\mu} + F_A^{\mu}, \end{eqnarray} where $\tilde{Q}_A = Q_A + \delta Q_A$, $F_A^{\mu} = a^{-1} (0, \partial^{i} f_A)$, in which $Q_A$ is the background interaction (in fact, $Q_A = Q$) and $f_A$ is the momentum transfer potential. The perturbed interaction takes the form $Q_A^0 = -a \left[Q_A (1+\phi)+\delta Q_A \right]$, $Q_A^i= a \partial^i \left[ Q_A (v+B) + f_A\right]$, and finally, the continuity and Euler equations can respectively be written as \begin{eqnarray} \delta_A^{\prime} + 3 \mathcal{H} \left(c_{sA}^2 - w_A \right) \delta_A + 9 \mathcal{H}^2 \left(1+w_A \right) \left(c_{sA}^2- c_{aA}^2 \right)\frac{\theta_A}{k^2} + \left(1+w_A \right) \theta_A -3 \left(1+w_A \right) \psi^{\prime} + \left(1+w_A \right) k^2 \left(B- E^{\prime} \right)\nonumber\\ = \frac{a}{\rho_A} \left(\delta Q_A - Q_A \delta _A \right) + \frac{a Q_A}{\rho_A} \left[\phi + 3 \mathcal{H} \left(c_{sA}^2- c_{aA}^2 \right)\frac{\theta_A}{k^2}\right],\\ \theta_A^{\prime} + \mathcal{H} \left(1-3 c_{sA}^2 \right)\theta_A - \frac{c_{sA}^2}{1+w_A} k^2 \delta_A -k^2 \phi = \frac{a}{(1+w_A)\rho_A} \Bigl[ \left(Q_A \theta -k^2 f_A \right) - \left(1+ c_{sA}^2 \right) Q_A \theta_A \Bigr], \end{eqnarray} where we have used the notation $\delta_A = \delta \rho_A/\rho_A$ for the density contrast, and considered $\pi_A = 0$. The symbols $c_{sA}^2$, $c_{aA}^2$ denote, respectively, the physical and adiabatic sound speeds, and $\theta = \theta_{\mu}^{\mu}$ is the volume expansion scalar. We note that, in order to avoid any kind of instability, we need to assume $c_{sA}^2 \geq 0$.
Now, for this specific interaction between dark energy and dark matter, the above perturbation equations can be recast as \begin{eqnarray} \delta'_x &=&-(1+w_x)\left(\theta_x+\frac{h'}{2}\right) -3\mathcal{H}(c^2_{sx}-w_x)\left[\delta_x+3\mathcal{H}(1+w_x)\frac{\theta_x}{k^2}\right] \nonumber \\ &+&3\mathcal{H}\frac{\xi}{\xi-1}(1+w_x)\left[\frac{\theta+h'/2}{3\mathcal{H}}+3\mathcal{H}(c^2_{sx}-w_x)\frac{\theta_x}{k^2}\right], \\ \theta'_x &=&-\mathcal{H}(1-3c^2_{sx})\theta_x+\frac{c^2_{sx}}{(1+w_x)}k^2\delta_x +3\mathcal{H}\frac{\xi}{\xi-1}\left[\theta_c-(1+c^2_{sx})\theta_x\right], \\ \delta'_c &=&-\left(\theta_c+\frac{h'}{2}\right) +3\mathcal{H}\frac{\xi}{\xi-1}(1+w_x)\frac{\rho_x}{\rho_c}\left(\delta_c-\delta_x-\frac{\theta+h'/2}{3\mathcal{H}}\right), \label{delta_c}\\ \theta'_c &=&-\mathcal{H}\theta_c, \label{eq:perturbation} \end{eqnarray} where $\delta\mathcal{H}/\mathcal{H}=(\theta+h'/2)/(3\mathcal{H})$ is the perturbed Hubble expansion rate \cite{ref:Gavela2010}, in which the prime denotes the derivative with respect to the conformal time. Now, due to the presence of the interaction between dark matter and dark energy, the pressure perturbation equation for dark energy directly includes the interaction rate $Q$, and consequently, the stability of the interaction model becomes sensitive to the specific form of $Q$. Moreover, the stability of the interaction model is also related to the dark energy equation of state, which appears in the pressure perturbation equation for dark energy as \cite{ref:Gavela2009} \begin{eqnarray} \delta p_{x} =c_{sx}^{2}\delta \rho _{x}+3\mathcal{H}\rho _{x}(1+w_{x})(c_{sx}^{2}-c_{ax}^{2})\left[ 1-\frac{aQ}{3\mathcal{H}\rho _{x}(1+w_{x})}\right] \frac{\theta _{x}}{k^{2}}~, \label{eq:deltap} \end{eqnarray} from which one can define the doom factor \cite{ref:Gavela2009}: $d\equiv-aQ/[3\mathcal{H}\rho_x(1+w_x)]$, whose sign determines the stability of the interaction model in question; stability is achieved for $d \leq 0$.
Now, for any interaction $Q= \xi \bar{Q}$ ($\bar{Q} >0$, but $\xi$, the coupling parameter of the interaction $Q$, is unrestricted in sign), the stability criterion (i.e., $d \leq 0$) provides a restriction on the parameter space as either (i) $\xi \geq 0$ and $(1+w_{x})\geq 0$, or (ii) $\xi \leq 0$ and $(1+w_{x})\leq 0$. That means, in order to test the stability of any interaction, two separate regions must be investigated. Now, if the interaction contains a factor $(1+w_x)$, that is, if $Q = \xi (1+w_x) \bar{Q}$ (where similarly $\bar{Q} > 0$), the doom factor becomes $d \equiv -a\xi \bar{Q}/ ( 3\mathcal{H}\rho_x )$, which shows that the stability of the interaction is now characterized by the sign of $\xi$ only. In other words, the inclusion of the factor $(1+w_x)$ in the interaction releases the prior on $w_x$, an appealing result because the restriction on the parameter $w_x$ is withdrawn and hence one can test the stability of such interaction models for all $w_x$. This observation was made in a recent paper \cite{ypb}; however, the current work is different. Here, we do not need to include the extra factor $(1+w_x)$ because the interaction proposed in this work already contains such a factor, and hence the prior on the dark energy equation of state, $w_x$, is automatically released. To show this, we consider the doom factor for the current interaction (\ref{Q}), which becomes $d = \xi/(1-\xi)$. Since the perturbation evolution is stable for $d\leq0$, we conclude that the present interacting dark energy model is stable if either $\xi\geq1$ or $\xi\leq 0$. However, the possibility $\xi\geq1$ refers to a strong interaction in the dark sector which is not allowed by the present observational data, so we confine our discussion to $\xi\leq0$. Finally, we close this section by introducing the growth of matter perturbations.
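The stability criterion just discussed is simple enough to encode directly. The following sketch (hypothetical helper functions, for illustration only) evaluates the doom factor $d = \xi/(1-\xi)$ and the resulting stability verdict for a few sample couplings; note that $w_x$ never enters:

```python
def doom_factor(xi):
    """Doom factor d = -a*Q/[3*H*rho_x*(1+w_x)] for Q given by eq. (Q),
    which reduces to d = xi/(1 - xi), independent of w_x (xi != 1)."""
    return xi/(1.0 - xi)

def is_stable(xi):
    """Stable perturbation evolution requires d <= 0."""
    return doom_factor(xi) <= 0.0

# d <= 0 holds for xi <= 0 or xi > 1; only xi <= 0 is observationally viable
for xi in (-0.5, -0.1, 0.0, 0.3, 0.9, 1.5):
    print(f"xi = {xi:+.1f}  d = {doom_factor(xi):+.3f}  stable: {is_stable(xi)}")
```

Within this simple criterion, the observationally viable stable branch is $\xi \leq 0$, matching the discussion above.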
Under the assumption that dark energy does not cluster on sub-Hubble scales \cite{Clemson:2011an}, one can safely neglect the dark energy perturbations $\delta_x = \delta \rho_x/\rho_x$ together with the associated velocity perturbations. This assumption is reasonable because during the matter-dominated era the effect of dark energy should be subdominant. Therefore, using eqn (\ref{delta_c}) one can derive the following second-order differential equation for the density perturbations of pressureless dark matter \begin{eqnarray} &&\delta ''_c+\left[1-3\frac{\xi}{\xi-1}(1+w_x)\frac{\rho_x}{\rho_c}\right]\mathcal{H}\delta '_c =\frac{3}{2}\mathcal{H}^2\Omega_b\delta_b + \nonumber \\ &&\frac{3}{2}\mathcal{H}^2\Omega_c\delta_c \left\{1+ 2\frac{\xi}{\xi-1}(1+w_x)\frac{\rho_{t}}{\rho_c}\frac{\rho_x}{\rho_c} \left[ \frac{\mathcal{H}'}{\mathcal{H}^2}+1-3w_x+3\frac{\xi}{\xi-1}(1+w_x)\left(1+\frac{\rho_x}{\rho_c}\right)+\frac{w'_x}{\mathcal{H}(1+w_x)} \right] \right\}, \label{eq:deltacprime2} \end{eqnarray} where $\mathcal{H}^2 = (8 \pi G a^2/3)\rho_t$, and $\rho_t = \rho_r+\rho_b+\rho_c+\rho_x$ is the total energy density of the universe. In the absence of interaction, equation (\ref{eq:deltacprime2}) reduces to $ \delta_m ''+\mathcal{H} \delta_m ' = 4 \pi G a^2 \rho_m \delta_m$ ($\rho_m = \rho_c +\rho_b$). It is thus evident that the evolution of $\delta_c$ is modified in the presence of the interaction, as shown in eqn (\ref{eq:deltacprime2}). One natural step is to quantify the effective expansion history induced by the interaction, denoted by $\mathcal{H}_{eff}$, which can be calculated as \begin{eqnarray} \frac{\mathcal{H}_{eff}}{\mathcal{H}}=1-\frac{3 \xi}{\xi-1}(1+w_x)\frac{\rho_x}{\rho_c}, \label{eq:Heff} \end{eqnarray} which shows that for $\xi= 0$, i.e. when there is no interaction between dark matter and dark energy, $\mathcal{H}_{eff} = \mathcal{H}$ \footnote{One may note that $\frac{\mathcal{H}_{eff}}{\mathcal{H}}=\frac{ H_{eff}}{H}$. }.
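To get a feeling for the size of this deviation, and of the analogous modification of the gravitational constant introduced below in eqn (\ref{eq:Geff}), both ratios can be evaluated numerically. The sketch below assumes illustrative density parameters (not fitted values), neglects radiation, takes a constant $w_x$ (so $w_x'=0$), and uses the standard conformal-time relation $\mathcal{H}'/\mathcal{H}^2 = -(1+3w_{tot})/2$:

```python
def Heff_over_H(xi, w_x, Omega_c, Omega_x):
    """eq. (Heff): effective expansion rate felt by the matter perturbations."""
    xi_eff = xi/(xi - 1.0)
    return 1.0 - 3.0*xi_eff*(1.0 + w_x)*Omega_x/Omega_c

def Geff_over_G(xi, w_x, Omega_c, Omega_x):
    """eq. (Geff) for constant w_x (w_x' = 0), radiation neglected, flat universe.
    Uses the conformal-time identity H'/H^2 = -(1 + 3*w_tot)/2, w_tot = w_x*Omega_x."""
    xi_eff = xi/(xi - 1.0)
    rx = Omega_x/Omega_c          # rho_x/rho_c
    rt = 1.0/Omega_c              # rho_t/rho_c
    Hp = -(1.0 + 3.0*w_x*Omega_x)/2.0
    bracket = Hp + 1.0 - 3.0*w_x + 3.0*xi_eff*(1.0 + w_x)*(1.0 + rx)
    return 1.0 + 2.0*xi_eff*(1.0 + w_x)*rt*rx*bracket

# Illustrative snapshot: Omega_c = 0.25, Omega_x = 0.70, w_x = -0.95, xi = -0.05
print(Heff_over_H(-0.05, -0.95, 0.25, 0.70))
print(Geff_over_G(-0.05, -0.95, 0.25, 0.70))
```

As expected from the analytic expressions, both ratios reduce to unity either for $\xi=0$ or for $w_x=-1$ (the interacting cosmological constant case).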
Moreover, one can also see that for $w_x = -1$, i.e. when the cosmological constant interacts with dark matter, $\mathcal{H}_{eff} = \mathcal{H}$. On the other hand, there is another quantity, namely the gravitational constant, $G$, that is also modified in the presence of the interaction. We denote the modified gravitational constant by $G_{eff}$, defined by \begin{eqnarray} \frac{G_{eff}}{G}=1+ \frac{2 \xi}{\xi-1}(1+w_x)\frac{\rho_{t}}{\rho_c}\frac{\rho_x}{\rho_c} \left[ \frac{\mathcal{H}'}{\mathcal{H}^2}+1-3w_x+\frac{3 \xi}{\xi-1}(1+w_x)\left(1+\frac{\rho_x}{\rho_c}\right)+\frac{w'_x}{\mathcal{H}(1+w_x)} \right]. \label{eq:Geff} \end{eqnarray} It is easy to see that in the absence of any coupling, i.e. when $\xi = 0$, $G_{eff} = G$, as expected. The equality also holds for the case of an interacting cosmological constant. From the evolution of $\mathcal{H}_{eff}/\mathcal{H}$ and $G_{eff}/G$, one can quantify the actual deviation of $\mathcal{H}$ and $G$ when an interaction in the dark sector is considered. Lastly, we would like to remark on the growth rate of dark matter, $f_c \equiv \frac{d}{d\ln a}(\ln \delta_c) $. Since the Euler equation is modified in the presence of the interaction, the dark matter may not follow geodesics \cite{Koyama:2009gd}. So, this quantity also plays an important role in measuring the deviation of the interacting models from the non-interacting one. \section{Data and the methodology} \label{sec-data} In this section we briefly describe the astronomical data sets and the methodology that we use to constrain both interacting scenarios. \begin{itemize} \item \textbf{CMB data:} Cosmic microwave background (CMB) data provide tight constraints on cosmological models.
Here we take the CMB data from the latest observations by the Planck team \cite{ref:Planck2015-1, ref:Planck2015-2} that combine the likelihoods $C^{TT}_l$, $C^{EE}_l$, $C^{TE}_l$ in addition to the low$-l$ polarization. We shall denote this data set by Planck TT, TE, EE $+$ lowTEB as in \cite{Ade:2015xua}.\newline \item \textbf{BAO data:} The baryon acoustic oscillation (BAO) data are also a powerful probe of the nature of dark energy. In our analysis we use the estimated ratio $r_s/D_V$ as a `standard ruler', in which $r_s$ is the comoving sound horizon at the baryon drag epoch and $D_V$ is the effective distance determined by the angular diameter distance $D_A$ and the Hubble parameter $H$ as $D_V(z)=\left[(1+z)^2D_A(z)^2\frac{z}{H(z)}\right]^{1/3}$. Three different measurements, namely $r_s(z_d)/D_V(z=0.106)=0.336\pm0.015$ from the 6-degree Field Galaxy Redshift Survey (6dFGRS) data \cite{Beutler:2011hx}, $r_s(z_d)/D_V(z=0.35)=0.1126\pm0.0022$ from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7) data \cite{Padmanabhan:2012hf}, and $r_s(z_d)/D_V(z=0.57)=0.0732\pm0.0012$ from the SDSS DR9 \cite{Manera:2012sc}, have been considered.\newline \item \textbf{JLA sample:} The first data set that provided evidence for a dark energy fluid in the universe was the Supernovae Type Ia (SNIa).
In the current analysis we use the latest compilation of the SNIa, namely the Joint Light Curves (JLA) sample \cite{Betoule:2014frx}, which contains 740 SNIa in the redshift range $z\in[0.01, 1.30]$.\newline \item \textbf{Weak lensing (WL) data:} We add the weak gravitational lensing data from the blue galaxy sample compiled from the Canada$-$France$-$Hawaii Telescope Lensing Survey (CFHTLenS) \cite{Heymans:2013fya,Asgari:2016xuw} to the other data sets.\newline \item \textbf{Redshift space distortion (RSD) data:} We use RSD data from different observational surveys: the 2dFGRS \cite{Percival:2004fs}, the WiggleZ \cite{Blake:2011rj}, the SDSS LRG \cite{Samushia:2011cs}, the BOSS CMASS \cite{Reid:2012sw}, the 6dFGRS \cite{Beutler:2012px}, and the VIPERS \cite{delaTorre:2013rpa}. For the measured values of RSD we refer to Table I of Ref. \cite{Yang:2014hea}.\newline \item \textbf{Cosmic Chronometers (CC) data:} We add the Hubble parameter measurements to our analysis. To measure the Hubble parameter values at different redshifts, we use the cosmic chronometer data recently released in the redshift interval $0 < z < 2$ \cite{Moresco:2016mzx}; such CC data are a very powerful probe of the nature of dark energy due to their model-independent character. For a detailed analysis of the data and the methodology, we refer to Ref. \cite{Moresco:2016mzx}. \newline \item \textbf{Local Hubble constant:} Finally, we add the local Hubble constant value $H_0= 73.02 \pm 1.79$ km/s/Mpc obtained with 2.4\% precision by Riess et al. \cite{Riess:2016jrr}. We denote it by R16. \end{itemize} Now, to constrain the present interacting models we use the likelihood $\mathcal{L}\propto e^{-\chi^2_{tot}/2}$. Here $\chi^2_{tot} = \sum _{i} \chi^2_i$, where $i$ runs over all the data sets that we use, that is, CMB (Planck TT, TE, EE $+$ lowTEB), BAO, JLA, WL, RSD, CC and R16.
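For concreteness, the combination of the individual likelihoods can be sketched as follows; the $\chi^2$ values used here are made-up placeholders, purely to illustrate the bookkeeping behind $\mathcal{L}\propto e^{-\chi^2_{tot}/2}$:

```python
# Made-up chi^2 values for each probe, purely illustrative (not real fit results)
chi2 = {'CMB': 12000.4, 'BAO': 3.2, 'JLA': 695.1,
        'WL': 48.9, 'RSD': 11.3, 'CC': 14.7, 'R16': 0.8}

chi2_tot = sum(chi2.values())   # chi^2_tot = sum_i chi^2_i
log_like = -0.5*chi2_tot        # ln L = -chi^2_tot/2
print(chi2_tot, log_like)
```

In the actual analysis each $\chi^2_i$ is of course evaluated from the corresponding data likelihood at every point of the parameter space, not fixed by hand.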
Then we use CosmoMC \cite{Lewis:2002ah}, a Markov chain Monte Carlo package, to extract the cosmological parameters associated with the model. For each interacting model the parameter space is enlarged compared to that of the $\Lambda$CDM model, which has the minimal number of parameters; see \cite{Barrow:2014opa} for more discussions in this direction. For the interacting model with constant dark energy equation of state, $w_x$, we have the following eight-dimensional parameter space \begin{align} \mathcal{P}_1 \equiv\Bigl\{\Omega_bh^2, \Omega_{c}h^2, \Theta_S, \tau, w_x, \xi, n_s, {\rm ln}[10^{10}A_s]\Bigr\}, \label{eq:parameter_space1} \end{align} while for the interacting model with dynamical dark energy equation of state $w_x = w_0 + w_a (1-a)$, the following nine-dimensional parameter space \begin{align} \mathcal{P}_2 \equiv\Bigl\{\Omega_bh^2, \Omega_{c}h^2, \Theta_S, \tau, w_0, w_a, \xi, n_s, {\rm ln}[10^{10}A_s]\Bigr\}, \label{eq:parameter_space2} \end{align} is considered. We note that in both Eqns. (\ref{eq:parameter_space1}), (\ref{eq:parameter_space2}) the common parameters, $\Omega_bh^2$, $\Omega_{c}h^2$, $\Theta_S$, $\tau$, $n_s$, $A_s$, are respectively identified as the baryon density, the cold dark matter density, the ratio of the sound horizon to the angular diameter distance, the optical depth, the scalar spectral index, and the amplitude of the initial power spectrum, whereas $w_x$, $\xi$ are the model parameters of the parameter space $\mathcal{P}_1$ and $w_0$, $w_a$, $\xi$ are the model parameters of the space $\mathcal{P}_2$. \section{Results and the interpretations} \label{sec-results} We present a detailed analysis of the interacting scenario considering that the equation of state for dark energy, i.e. $w_x$, could be either constant or dynamical. For convenience we refer to the interacting scenario with constant equation of state in DE as IDE 1, and to the interacting scenario with dynamical DE assuming the CPL parametrization as IDE 2.
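The two parameter spaces and the CPL form can be summarized in a short schematic sketch; the dictionary below merely transcribes the flat prior ranges quoted in the tables of this section, and the names are illustrative only (they are not CosmoMC input syntax):

```python
# CPL equation of state w_x(a) = w0 + wa * (1 - a); at a = 1 (today) it reduces to w0
def cpl_w(a, w0=-1.0, wa=0.0):
    return w0 + wa * (1.0 - a)

# Flat priors as quoted in the constraint tables; the model parameters of
# P1 (w_x, xi) and P2 (w0, wa, xi) sit on top of the six common parameters.
PRIORS_COMMON = {
    "omega_b_h2": (0.005, 0.1),
    "omega_c_h2": (0.01, 0.99),
    "100theta_MC": (0.5, 10.0),
    "tau": (0.01, 0.8),
    "n_s": (0.5, 1.5),
    "ln_1e10_As": (2.4, 4.0),
}
PRIORS_P1 = {**PRIORS_COMMON, "w_x": (-2.0, 0.0), "xi": (-1.0, 0.0)}
PRIORS_P2 = {**PRIORS_COMMON, "w0": (-2.0, 0.0), "wa": (-3.0, 3.0), "xi": (-1.0, 0.0)}
```

Note that $\mathcal{P}_1$ has eight free parameters and $\mathcal{P}_2$ nine, matching Eqns. (\ref{eq:parameter_space1}) and (\ref{eq:parameter_space2}).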
In the following subsections we discuss the two scenarios separately. \subsection{IDE 1: Constant $w_x$} \label{sec-constant} For constant dark energy equation of state, we have analyzed the interacting model with the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. In Table \ref{tab:constantw} we summarize the observational constraints of this scenario for these combined data, and the corresponding contour plots at the 68.3\% ($1\sigma$) and 95.4\% ($2\sigma$) confidence levels for different combinations of the model parameters, along with the 1-dimensional posterior distributions of the model parameters, are displayed in Figure \ref{fig:contourI}. From our analysis it is evident that for IDE 1 the coupling parameter as well as the effective coupling parameter are nonzero, that is, the observational data mildly favor a nonzero coupling in the dark sector. However, within the $1\sigma$ confidence level, $\xi = 0$ is compatible with the present observational data. Additionally, it is interesting to note that the best fit value as well as the mode value of the EoS of DE crosses the phantom divide line; however, the analysis also shows that within the $1\sigma$ confidence level, $w_x = -1$ is consistent with the current astronomical data. Thus, within the $1\sigma$ confidence region, this interaction model can mimic the $\Lambda$-cosmology. To characterize the large-scale behaviour of the IDE models, in Figure \ref{fig:cmbplotI} we display the angular CMB temperature anisotropy spectra in comparison to the $\Lambda$CDM cosmology. In particular, in the left panel of Fig. \ref{fig:cmbplotI} we show the angular CMB temperature anisotropy spectra for different values of the EoS of DE, and in the right panel of Fig. \ref{fig:cmbplotI} we show the spectra for different values of the coupling parameter. From the left panel of Fig.
\ref{fig:cmbplotI} we see that IDE 1 with $w_x < -1$ does not deviate much from the $\Lambda$CDM cosmology. The deviation is very clear when $w_x > -1$. However, from the right panel of Fig. \ref{fig:cmbplotI}, we see that a small or large coupling basically does not produce any significant deviation from the $\Lambda$CDM cosmology. This is a surprising result! On the other hand, from the left panel of Figure \ref{fig:cmbplotI} we see that if $w_x$ increases (i.e. if $w_x$ becomes more quintessential) then the CMB spectra deviate from those of the standard $\Lambda$CDM cosmology. Further, in Figure \ref{fig:new1-cons} we depict the evolution of the modified Hubble function $\mathcal{H}_{eff}$ and the modified gravitational constant $G_{eff}$ in the presence of the coupling between the dark sectors. The plots in Figure \ref{fig:new1-cons} clearly show that as the magnitude of the coupling parameter increases, the deviation of both the modified Hubble function and the gravitational constant from those of the non-interacting $w_x$CDM model increases. Finally, we close our analysis with an interesting observation reflected in Figure \ref{fig:scatter-IDE1}, which contains the two-dimensional marginalized posterior distribution for the parameters $(w_x, \xi)$. The sample points have been taken from the Markov chain Monte Carlo analysis, and they have been colored by the values of $H_0$. From this figure we see that higher values of $H_0$ favor phantom dark energy while for lower values of $H_0$ quintessential dark energy is recommended. Moreover, a shift of the dark energy equation of state from its quintessential to its phantom behavior is observed as $H_0$ increases. This is one of the interesting observations in this work.
\begingroup \squeezetable \begin{table} \caption{The table summarizes the constraints on the free parameters of IDE 1 (IDE with constant $w_x$) for the joint analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. Here, we mention that $\Omega_{m0}= \Omega_{c0}+ \Omega_{b0}$.} \begin{tabular}{cccccc} \hline\hline Parameters & Priors & Mode $\pm$ $1\sigma$ $\pm$ $2\sigma$ & Best fit \\ \hline\hline $\Omega_c h^2$ & $[0.01, 0.99]$ & $ 0.11773_{- 0.00128- 0.00273}^{+ 0.00143+ 0.00268}$ & $ 0.11725$\\ $\Omega_b h^2$ & $[0.005, 0.1]$ & $0.02228_{- 0.00017- 0.00029}^{+ 0.00016+ 0.00032}$ & $ 0.02230$\\ $100\theta_{MC}$ & $[0.5, 10]$ & $1.04068_{- 0.00031- 0.00064}^{+ 0.00033+ 0.00062}$ & $1.04051$\\ $\tau$ & $[0.01, 0.8]$ & $0.06593_{- 0.01618- 0.03060}^{+ 0.01599+ 0.03192}$ & $ 0.07098$\\ $n_s$ & $[0.5, 1.5]$ & $0.97692_{- 0.00404- 0.00777}^{+ 0.00389+ 0.00787}$ & $ 0.98041$\\ ${\rm{ln}}(10^{10} A_s)$ & $[2.4, 4]$ & $3.06864_{- 0.03129- 0.05905}^{+ 0.03149+ 0.06361}$ & $ 3.07831$\\ \hline $w_x$ & $[-2, 0]$ & $-1.01014_{- 0.02583- 0.05786}^{+ 0.03101+ 0.05702}$ & $ -1.02844$\\ $\xi$ & $[-1, 0]$ & $-0.00857_{- 0.04001- 0.23056}^{+ 0.00857+ 0.00857}$ & $ -0.02682$\\ \hline $\Omega_{m0}$ & $-$ & $0.29674_{- 0.00877- 0.01902}^{+ 0.00959+ 0.01877}$ & $ 0.29510$\\ $\sigma_8$ & $-$ & $0.81393_{- 0.01411- 0.02289}^{+ 0.01165+ 0.02510}$ & $ 0.819897$\\ $H_0$ & - & $ 68.60277_{- 0.82512- 1.55987}^{+ 0.79166+ 1.63707}$ & $ 68.82080$\\ \hline $\chi^2_{min}$ & $-$ & $-$ & $13720.258$\\ \hline \end{tabular} \label{tab:constantw} \end{table} \endgroup \begin{figure} \includegraphics[width=11.2cm,height=11cm]{contour_whrox.pdf} \caption{68.3\% and 95.4\% confidence-level contour plots for different combinations of the free parameters of IDE 1 have been shown for the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16.
} \label{fig:contourI} \end{figure} \begin{figure} \includegraphics[width=0.38\textwidth]{CMBpower_wrhox_wx.pdf} \includegraphics[width=0.38\textwidth]{CMBpower_wrhox_xi.pdf} \caption{The plots show the angular CMB temperature anisotropy spectra for IDE 1 over the standard $\Lambda$CDM cosmology using the joint analysis data Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. In the left panel we have varied the constant EoS $w_x$ while the right panel stands for different values of the coupling parameter $\xi$. We note that the curves in the right panel are so close to each other that they are practically indistinguishable. } \label{fig:cmbplotI} \end{figure} \begin{figure} \includegraphics[width=0.33\textwidth]{wrhox_Heff.pdf} \includegraphics[width=0.32\textwidth]{wrhox_Geff.pdf} \caption{For different coupling parameters, we show the evolutions of $\mathcal{H}_{eff}/\mathcal{H}$ (left panel) and $G_{eff}/G$ (right panel) for the interacting dark energy model with constant $w_x$. The deviation is measured from the non-interacting scenario (i.e. $\xi =0$) and also from the mean value of $\xi = -0.1690$ obtained from the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. } \label{fig:new1-cons} \end{figure} \begin{figure} \includegraphics[width=0.40\textwidth]{New3d_xiwxH0_wrhox.pdf} \caption{MCMC samples in the $(w_x, \xi)$ plane coloured by the Hubble constant value $H_0$ for IDE 1 analyzed with the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16.} \label{fig:scatter-IDE1} \end{figure} \begingroup \squeezetable \begin{table} \caption{The table summarizes the constraints on the free parameters of IDE 2 (IDE with dynamical $w_x$) using the joint analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16.
Here we note that $\Omega_{m0}= \Omega_{c0}+ \Omega_{b0}$.} \begin{tabular}{cccccccc} \hline\hline Parameters & Priors & Mode $\pm$ $1\sigma$ $\pm$ $2\sigma$ & Best fit \\ \hline\hline $\Omega_c h^2$ & $[0.01, 0.99]$ & $0.11766_{- 0.00119- 0.00316}^{+ 0.00158+ 0.00301}$ & $ 0.11410$\\ $\Omega_b h^2$ & $[0.005, 0.1]$ & $0.02224_{- 0.00015- 0.00029}^{+ 0.00017+ 0.00030}$ & $ 0.02241$\\ $100\theta_{MC}$ & $[0.5, 10]$ & $ 1.04061_{- 0.00030- 0.00063}^{+ 0.00031+ 0.00064}$ & $ 1.04088$\\ $\tau$ & $[0.01, 0.8]$ & $0.06443_{- 0.01599- 0.03523}^{+ 0.01633+ 0.03277}$ & $ 0.06886$\\ $n_s$ & $[0.5, 1.5]$ & $0.97561_{- 0.00408- 0.00799}^{+ 0.00410+ 0.00783}$ & $ 0.98110$\\ ${\rm{ln}}(10^{10} A_s)$ & $[2.4, 4]$ & $3.06729_{- 0.03077- 0.06602}^{+ 0.03133+ 0.06551}$ & $ 3.07243$\\ \hline $w_0$ & $[-2, 0]$ & $-1.04169_{- 0.06889- 0.10442}^{+ 0.05331+ 0.10939}$ & $ -1.04824$\\ $w_a$ & $[-3, 3]$ & $0.04941_{- 0.13556- 0.34331}^{+ 0.19888+ 0.30253}$ & $ 0.11464$\\ $\xi$ & $[-1, 0]$ & $-0.00989_{- 0.03969- 0.34569}^{+ 0.00989+ 0.00989}$ & $ -0.45436$\\ \hline $\Omega_{m0}$ & - & $ 0.29986_{- 0.00888- 0.01795}^{+ 0.00987+ 0.01713}$ & $ 0.29201$\\ $\sigma_8$ & - & $0.82023_{- 0.01489- 0.02599}^{+ 0.01291+ 0.02687}$ & $ 0.79875$\\ $H_0$ & - & $68.42602_{- 0.87900- 1.51089}^{+ 0.74521+ 1.53486}$ & $ 68.75732$\\ \hline $\chi^2_{min}$ & $-$ & $-$ & $13721.564$ \\ \hline \end{tabular} \label{tab:dynamicalw} \end{table} \endgroup \subsection{IDE 2: Dynamical $w_x$} \label{sec-dynamical} For the interacting DE with dynamical EoS (IDE 2), we present the observational constraints in Table \ref{tab:dynamicalw} using the same combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. The corresponding contour plots at the 68.3\% and 95.4\% confidence regions for different combinations of the free parameters of this scenario, together with the 1-dimensional posterior distributions of the free parameters, are shown in Figure \ref{fig:contourII}.
Our analysis shows that the mode value as well as the best fit value of the present dark energy equation of state exhibits phantom behavior. However, from the observational data one can infer that, within the $1\sigma$ confidence level, $w_0 = -1$ is compatible. The coupling parameter $\xi$, as seen from Table \ref{tab:dynamicalw}, is nonzero, which signals an interacting cosmological scenario, while within the $1\sigma$ confidence level, $\xi = 0$ is also consistent with the present observational data. That means that within the $1\sigma$ confidence level, the interacting model is indistinguishable from the $\Lambda$-cosmology. On the other hand, from the CMB spectra (see Figure \ref{fig:cmbplotII}) we observe results similar to those found for IDE 1. That means that the CMB spectra do not show any significant variation for different values of the coupling parameter, while for different values of $w_0$, $w_a$, we may expect a slight variation in the CMB spectra; for instance, in the left panel of Figure \ref{fig:cmbplotII}, we see that for a strongly quintessential dark energy equation of state, a slight difference from the $\Lambda$CDM cosmology is observed. Thus, one can expect that if $w_0$ increases, the deviation becomes more prominent. Similarly, in the middle panel of Figure \ref{fig:cmbplotII}, we see that for $w_a > 0$, a slight variation from the $\Lambda$CDM cosmology appears. In addition, as for IDE 1, in Figure \ref{fig:new2-dynamical} we show the evolution of the modified Hubble function $\mathcal{H}_{eff}$ and the modified gravitational constant $G_{eff}$ for different values of the coupling parameter. We observe a behaviour equivalent to that of IDE 1. The plots in Figure \ref{fig:new2-dynamical} clearly show that as the magnitude of the coupling parameter increases, the deviation of both the modified Hubble function and the gravitational constant from those of the non-interacting $w_x$CDM model increases, where $w_x$ is dynamical with the form chosen in this figure.
However, one may note that the rate of increase of the quantities $\mathcal{H}_{eff}/\mathcal{H}$ and $G_{eff}/G$ relative to the non-interacting models, as shown in Figure \ref{fig:new2-dynamical}, is slightly lower compared to the plots in Figure \ref{fig:new1-cons}. Using the same combined analysis, in Figure \ref{fig:scatter2-IDE2} we show the two-dimensional marginalized posterior distributions for the parameters ($w_a$, $w_0$) and ($w_0$, $\xi$) colored by the $H_0$ sample from the Markov chain Monte Carlo analysis. From the left panel of Figure \ref{fig:scatter2-IDE2} we find that for higher values of $H_0$, the present value of the dark energy equation of state, i.e. $w_0$, favors the phantom behavior, while for lower values of $H_0$ it shifts toward the quintessence regime. The right panel of Figure \ref{fig:scatter2-IDE2} leads to conclusions similar to those from its left panel. \begin{figure} \includegraphics[width=11.2cm,height=11cm]{contour_cplwrhox_new.pdf} \caption{68.3\% and 95.4\% confidence-level contour plots for different combinations of the free parameters of IDE 2 have been shown for the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. } \label{fig:contourII} \end{figure} \begin{figure} \includegraphics[width=0.32\textwidth]{CMBpower_cplwrhox_w0.pdf} \includegraphics[width=0.32\textwidth]{CMBpower_cplwrhox_wa.pdf} \includegraphics[width=0.32\textwidth]{CMBpower_cplwrhox_xi.pdf} \caption{The plots show the angular CMB temperature anisotropy spectra for IDE 2 over the standard $\Lambda$CDM cosmology using the joint analysis data Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. In the left panel we have varied the free parameter $w_0$ (i.e.
the current value of the EoS for DE) of the CPL parametrization, in the middle panel we have varied the parameter $w_a$ of the CPL parametrization, while the right panel stands for different values of the coupling parameter $\xi$. We note that, while the curves in the left and middle panels can be distinguished, the curves in the right panel are so close to each other that they are practically indistinguishable. } \label{fig:cmbplotII} \end{figure} \begin{figure} \includegraphics[width=0.36\textwidth]{wrhoxcpl_Heff.pdf} \includegraphics[width=0.34\textwidth]{wrhoxcpl_Geff.pdf} \caption{For the interacting dark energy model with dynamical EoS, $w_x = w_0 + w_a(1 -a)$, using different coupling parameters, we show the evolutions of $\mathcal{H}_{eff}/\mathcal{H}$ (left panel) and $G_{eff}/G$ (right panel). The deviation is measured from the non-interacting scenario (i.e. $\xi =0$) and also from the mean value of $\xi = -0.1360$ obtained from the combined observational analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16.} \label{fig:new2-dynamical} \end{figure} \begin{figure} \includegraphics[width=0.38\textwidth]{New3d_w0waH0_cplwrhox.pdf} \includegraphics[width=0.38\textwidth]{New3d_w0xiH0_cplwrhox.pdf} \caption{We show the MCMC samples in this figure. In the left panel we show the 2-dimensional posterior distribution for the parameters $(w_0, w_a)$ colored by $H_0$. In the right panel the sample space contains the $(w_0, \xi)$ plane colored by $H_0$. We note that all the chains have been analyzed with the combined observational data Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. } \label{fig:scatter2-IDE2} \end{figure} \begin{figure} \includegraphics[width=0.6\textwidth]{1d_allvs.pdf} \caption{The figure makes a comparison of the one-dimensional posterior distributions of the free and derived parameters for IDE 1 and IDE 2.
The observational data used are Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. We note that in the top left panel, we have used only the parameter $w_x$, which is also the $w_0$ parameter of the dynamical DE identified by $w_x = w_0 + w_a (1-a)$.} \label{fig:posterior-comparison} \end{figure} \begin{figure} \includegraphics[width=0.35\textwidth]{wrhox_OHD31.pdf} \includegraphics[width=0.35\textwidth]{wrhoxcpl_OHD31.pdf} \caption{The evolution of the Hubble parameter for the two IDE models compared to the flat $\Lambda$CDM model, together with the error bars of the observed Hubble data. The fitting analysis is the same: Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. } \label{fig:Hubble} \end{figure} \begin{figure} \includegraphics[width=0.38\textwidth]{CMBpower_mean.pdf} \caption{Using the mean values of the free parameters (from the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16) associated with IDE 1 and IDE 2, in this plot we show the angular CMB temperature anisotropy spectra to compare these two interacting scenarios with the $\Lambda$CDM cosmology. The curves, as seen from the plot, are completely indistinguishable. } \label{fig:cmbplot-compare} \end{figure} \begin{figure} \includegraphics[width=0.35\textwidth]{wrhox_fc.pdf} \includegraphics[width=0.35\textwidth]{wrhoxcpl_fc.pdf} \caption{The evolution of the growth rate of cold dark matter for both interacting dark energy models. The left panel shows the evolution of the growth rate of cold dark matter for IDE 1, while the right panel contains the same information for IDE 2. We use the combined observational data Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16.
} \label{fig:new-growth} \end{figure} \section{Comparisons between IDE 1 and IDE 2 and with other interactions} \label{sec-comparison} Interacting models with dynamical dark energy are quite appealing and interesting. Since in this work we consider both constant and time-dependent dark energy equations of state, it is reasonable to measure their qualitative differences at both the background and perturbative levels. To compare the two models, in Figure \ref{fig:posterior-comparison} we show the one-dimensional posterior distributions of the free and derived parameters of the interacting models. From the figure we do not observe any significant changes in the parameters. Although a slight difference appears in $w_x$, it does not seem to be a notable one. Moreover, in Figure \ref{fig:Hubble} we plot the Hubble parameter for both IDE models together with the CC $+$ R16 data. From this figure we see that both models are very close to the $\Lambda$-cosmology, and in fact at low redshift they are well consistent with the observed data, which are shown in the plots with their error bars. Therefore, it is clear that at the background level both IDE models are very close to each other and they remain considerably close to the $\Lambda$CDM cosmology. It is very interesting to see that at the perturbative level we find the same feature. If we look at the CMB temperature anisotropy spectra in Figure \ref{fig:cmbplot-compare}, we see that IDE 1 and IDE 2 are very close to each other and also very close to $\Lambda$CDM. The slight differences between these models can be detected when the growth rate of cold dark matter is taken into account. We illustrate these slight differences between the two interacting models in Figure \ref{fig:new-growth}.
In the left panel of Figure \ref{fig:new-growth} we display the growth rate of dark matter when the interacting dark energy has a constant EoS, where we show a couple of plots for different values of the coupling parameter $\xi$. From the plots we see that, as $|\xi|$ increases, the growth rate of cold dark matter decreases. A similar behaviour is observed when the interacting dark energy has a dynamical character. However, for the dynamical DE we have something more: here the growth rate of cold dark matter decreases more than it does in the framework of interacting DE with constant EoS. In summary, one can easily conclude that the models IDE 1 and IDE 2 are statistically very close to each other. \begin{table} \begin{center} \caption{\textit{The table displays some known interaction models and the current model with their testable regions.} } \label{table:coupling} \begin{tabular}{cccc} \hline \hline Models & Coupling & Large-scale stability\\ \hline Model I & $Q= 3 H \xi \rho_{c}$~~~~~& either $w_x\geq -1$ and $\xi \geq 0$ or $w_x\leq -1$ and $\xi \leq 0$\\ Model II & $Q= 3 H \xi \rho_{x}$~~~~~& either $w_x\geq -1$ and $\xi \geq 0$ or $w_x\leq -1$ and $\xi \leq 0$\\ Model III & $Q = 3 H \xi (\rho_{c}+\rho_{x})$~~~~~& either $w_x\geq -1$ and $\xi \geq 0$ or $w_x\leq -1$ and $\xi \leq 0$\\ Present model & $Q = \xi \dot{\rho}_x$~~~~~& for all $w_x$ and $\xi \leq 0$\\ \hline \end{tabular} \end{center} \end{table} Moreover, we find that the current interaction model is qualitatively different from other interaction models. As of now, in the current literature, different coupled dark energy models have been introduced and analyzed with the astronomical observations.
The most studied and well known coupled DE models include the interactions $Q = 3 H \xi \rho_c$, $Q = 3H \xi \rho_x$, $Q = 3 H \xi (\rho_c +\rho_x)$ ($\xi$ is the coupling parameter), and there are many more, see \cite{Billyard:2000bh, Olivares:2005tb,delCampo:2008jx,Amendola, Koivisto, delCampo:2008sr,Chimento:2009hj, Quartin:2008px, Valiviita:2009nu, Clemson:2011an, Pan:2013rha, Yang:2014hea, Faraoni:2014vra, Yang:2014gza, Nunes:2014qoa, yang:2014vza,thor,barrow, amendola, llinares, Pan:2014afa, Chen:2011cy, Tamanini:2015iia, Pan:2012ki, Duniya:2015nva, Valiviita:2015dfa, Yang:2016evp, Pan:2016ngu, Mukherjee:2016shl, Sola:2016ecz, Sharov:2017iue,Cai:2017yww, Santos:2017bqm, Mifsud:2017fsy, Salvatelli:2014zta, Nunes:2016dlj,Kumar:2016zpg,vandeBruck:2016hpz, Yang:2017yme, Kumar:2017dnp}. In this work we introduce an interaction of the form $Q \propto \dot{\rho}_x$, which is different from the others since here the variation of the dark energy density is considered. This interaction has a great advantage compared with some well known interaction models. To clarify this issue, let us recall the observational bounds on the existing interacting models in the literature. Past analyses of coupled dark energy models in the large-scale universe report that the models with $Q = 3 H \xi \rho_c$, $Q = 3H \xi \rho_x$, $Q = 3 H \xi (\rho_c +\rho_x)$ can only be tested in two separate intervals, that is, when $(w_x \geq -1,\, \xi \geq 0)$ or $(w_x \leq -1,\, \xi \leq 0)$. See Table \ref{table:coupling}, where we present some well known interaction models as well as the present interaction model with their testable regions. Clearly, while constraining the above interaction models we perceive a discontinuity in the parametric space.
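The last row of Table \ref{table:coupling} can be understood from the background conservation equation alone. As a sketch, suppose the dark energy balance equation takes the form $\dot{\rho}_x + 3H(1+w_x)\rho_x = Q$ (the sign convention assumed here for illustration); inserting $Q = \xi\dot{\rho}_x$ gives, for constant $w_x$,
\begin{align}
(1-\xi)\,\dot{\rho}_x = -3H(1+w_x)\,\rho_x \quad \Longrightarrow \quad \rho_x = \rho_{x0}\, a^{-3(1+w_x)/(1-\xi)}, \qquad Q = -\frac{3H\xi(1+w_x)}{1-\xi}\,\rho_x.
\end{align}
In this sketch the effective coupling carries a factor $(1+w_x)$ without it being inserted by hand, which is consistent with the model being testable for all $w_x$.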
In order to remove this discrepancy in the interaction models, a recent investigation \cite{ypb} found that the whole parametric space can be tested if a phenomenological factor $(1+w_x)$ is introduced into the interaction function $Q$. Originally, the hint to introduce such a factor $(1+w_x)$ into the interaction function came from the pressure perturbation of dark energy. Using this formalism, three interactions, namely, $Q= 3 H \xi (1+w_x)\rho_x$, $Q = 3 H \xi (1+w_x) \rho_c \rho_x/(\rho_c+\rho_x)$, and $Q= 3 H \xi (1+w_x) \rho_x^2/\rho_c$, have been tested with positive results in \cite{ypb}. However, the current interaction provides something more. We see that the interaction (\ref{Q-1}) does not need any extra factor $(1+w_x)$ inserted by hand in order to test the entire parametric space, since it automatically contains such a factor within it. Thus, this interaction differs from other interaction models in this respect, and it is worth noting that, unlike for other interaction models, the entire parameter space can be constrained. \section{The tension on $H_0$: Alleviation through interacting dark energy} \label{sec-tension} From the analysis of the cosmic microwave background temperature and polarization data by Planck \cite{Ade:2015xua}, the Hubble constant value is constrained to be $H_0= 67.27 \pm 0.66$ km~s$^{-1}$~Mpc$^{-1}$, assuming $\Lambda$CDM as the base model, while on the other hand, from the local measurements performed recently by Riess et al. \cite{Riess:2016jrr}, the Hubble constant appears to be $H_0 = 73.24 \pm 1.74$ km~s$^{-1}$~Mpc$^{-1}$. This local Hubble constant value is about $3\sigma$ higher than its prediction by Planck's measurements. The difference appearing in the estimation of $H_0$ from the local and global measurements is quite large.
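The size of the quoted discrepancy is easy to verify directly, treating the two measurements as independent Gaussians (a standard simplification, used here only as a sanity check):

```python
import math

def gaussian_tension(mu1, sig1, mu2, sig2):
    # Number of sigmas separating two independent Gaussian measurements
    return abs(mu1 - mu2) / math.hypot(sig1, sig2)

# Planck 2015 (LCDM base) vs. the local Riess et al. measurement quoted above
t = gaussian_tension(67.27, 0.66, 73.24, 1.74)
print(round(t, 2))  # about a 3.2 sigma discrepancy
```

This reproduces the "about $3\sigma$" statement above.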
It is indeed a very big issue\footnote{The continuous progress towards more consistent observational measurements could reveal some more information in this regard, see for instance \cite{Lin:2017ikq, Lin:2017bhs}.} in cosmological theory $-$ known as the tension on $H_0$! The interacting dark energy mechanism carries a reasonable justification to reconcile this tension, as has already been suggested in some recent works \cite{Kumar:2016zpg, Kumar:2017dnp, DiValentino:2017iww}, where in \cite{DiValentino:2017iww} the dark energy equation of state was constant and allowed to be phantom, while in the other two works, namely Refs. \cite{Kumar:2016zpg} and \cite{Kumar:2017dnp}, the dark energy equation of state was respectively constant (other than the cosmological constant) and the cosmological constant. The current interaction model is completely different compared with the works \cite{Kumar:2016zpg, Kumar:2017dnp, DiValentino:2017iww} because here the model can be tested without any restriction on the dark energy equation of state, as explained earlier. Additionally, here we investigate the dynamics of the universe for both constant and dynamical dark energy equations of state. We also note that for both models the combined analysis has been fixed to be Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. \begin{itemize} \item \textbf{Constant EoS in DE and $H_0$ tension:} We investigate the tension on $H_0$ for four different intervals of the dark energy equation of state: (i) we allow a wide range of the dark energy equation of state from quintessence to phantom, in particular, $w_x \in [-1.5, -0.9]$; (ii) we consider the dark energy equation of state to vary from the cosmological constant boundary to beyond it, hence we fix $w_x \in [-2, -1]$; (iii) we consider only phantom dark energy with $w_x \in [-2, -1.01]$; and finally, (iv) we consider a super phantom dark energy equation of state with $w_x \in [-2, -1.2]$.
For the four different priors on the dark energy equation of state, we present the astronomical constraints on the cosmological parameters in Table \ref{tab:tension1}, where the mean values of the parameters are reported. The two-dimensional contour plots of the parameters ($w_x$, $H_0$) are shown in Figure \ref{fig:tension1} for the four different priors on the dark energy equation of state. The analyses show that for $w_x \in [-1.5,-0.9]$, $w_x \in [-2, -1]$, and $w_x \in [-2, -1.01]$, the Hubble constant value is close to the value of $H_0$ ($= 67.27 \pm 0.66$ km~s$^{-1}$~Mpc$^{-1})$ measured by Planck \cite{Ade:2015xua}, and the tension is released at $< 2 \sigma$ CL. On the other hand, for $w_x \in [-2, -1.2]$, the result is slightly different because here $H_0$ matches the local Hubble constant value measured by Riess et al. ($H_0 = 73.24 \pm 1.74$ km~s$^{-1}$~Mpc$^{-1}$) \cite{Riess:2016jrr} within a confidence level slightly greater than $2\sigma$. So the tension is not released here. Thus, we see that for the current interaction model the tension can be released. However, one can note that if the dark energy has a strongly phantom nature, for instance here $w_x < -1.2$, then some discrepancies may appear. In summary, a phantom dark energy equation of state might resolve the current tension on $H_0$, in agreement with a recent work \cite{DiValentino:2017iww}; however, the physics of a strongly phantom dark energy equation of state needs further investigation. Finally, we remark that we are dealing with phenomenological models of interaction! \begin{table} \caption{For the constant dark energy equation of state, $w_x$, we present the results of the interaction model for four different regions of $w_x$. The analysis has been performed for the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. As mentioned earlier, $\Omega_{m0}= \Omega_{c0}+\Omega_{b0}$.
} \begin{tabular}{cccccccc} \hline\hline Parameters & $w_x \in [-1.5, -0.9]$ & $w_x \in [-2, -1]$ & $w_x \in [-2, -1.01]$ & $w_x \in [-2, -1.2]$ \\ \hline $\Omega_ch^2$ & $ 0.11694_{- 0.00128- 0.00353}^{+ 0.00194+ 0.00302}$ & $ 0.11702_{- 0.00129- 0.00311}^{+ 0.00161+ 0.00287}$ & $0.11747_{- 0.00132- 0.00258}^{+ 0.00136+ 0.00264}$ & $ 0.12011_{- 0.00121- 0.00376}^{+ 0.00206+ 0.00332}$\\ $\Omega_bh^2$ & $ 0.02230_{- 0.00014- 0.00028}^{+ 0.00014+ 0.00029}$ & $0.02228_{- 0.00014- 0.00028}^{+ 0.00015+ 0.00028}$ & $0.02226_{- 0.00014- 0.00029}^{+ 0.00015+ 0.00029}$ & $ 0.02202_{- 0.00012- 0.00025}^{+ 0.00013+ 0.00026}$\\ $100\theta_{MC}$ & $ 1.04070_{- 0.00033- 0.00060}^{+ 0.00032+ 0.00062}$ & $ 1.04069_{- 0.00027- 0.00059}^{+ 0.00029+ 0.00059}$ & $1.04066_{- 0.00030- 0.00064}^{+ 0.00036+ 0.00060}$ &$ 1.04023_{- 0.00030- 0.00064}^{+ 0.00031+ 0.00061}$\\ $\tau$ & $ 0.06765_{- 0.01862- 0.03345}^{+ 0.01602+ 0.03418}$ & $ 0.06539_{- 0.01700- 0.03331}^{+ 0.01775+ 0.03354}$ & $0.06178_{- 0.01627- 0.03299}^{+ 0.01613+ 0.03366}$ &$ 0.03222_{- 0.01690- 0.02221}^{+ 0.01031+ 0.02317}$\\ $n_s$ & $ 0.97639_{- 0.00428- 0.00822}^{+ 0.00419+ 0.00870}$ & $ 0.97559_{- 0.00390- 0.00784}^{+ 0.00385+ 0.00842}$ & $0.97491_{- 0.00366- 0.00804}^{+ 0.00416+ 0.00731}$ &$ 0.96665_{- 0.00368- 0.00674}^{+ 0.00339+ 0.00693}$\\ $\mathrm{ln}(10^{10}A_s)$ & $ 3.07394_{- 0.03451- 0.06387}^{+ 0.03206+ 0.06496}$ & $ 3.07116_{- 0.03289- 0.06678}^{+ 0.03186+ 0.06265}$ & $3.06402_{- 0.03180- 0.06427}^{+ 0.03184+ 0.06524}$ &$3.01185_{- 0.03145- 0.04863}^{+ 0.02230+ 0.05185}$\\ $w_x$ & $ -1.02320_{- 0.02383- 0.05423}^{+ 0.03135+ 0.05197}$ & $ -1.03245_{- 0.00939- 0.04405}^{+ 0.03058+ 0.03245}$ & $-1.04669_{- 0.01327- 0.04508}^{+ 0.03195+ 0.03669}$ &$ -1.20660_{- 0.00092- 0.01332}^{+ 0.00660+ 0.00660}$\\ $\xi$ & $ -0.23758_{- 0.08064- 0.28867}^{+ 0.23758+ 0.23758}$ & $ -0.20708_{- 0.03706- 0.38948}^{+ 0.20708+ 0.20708}$ & $-0.09160_{- 0.03856- 0.11641}^{+ 0.09160+ 0.09160}$ &$ -0.02830_{- 0.00377- 0.05692}^{+ 
0.02830+ 0.02830}$\\ $\Omega_{m0}$ & $ 0.29786_{- 0.00930- 0.01803}^{+ 0.00969+ 0.01723}$ & $ 0.29640_{- 0.00723- 0.01474}^{+ 0.00756+ 0.01418}$ & $0.29454_{- 0.00754- 0.01481}^{+ 0.00764+ 0.01450}$ &$ 0.27442_{- 0.00663- 0.01308}^{+ 0.00637+ 0.01300}$\\ $\sigma_8$ & $ 0.81384_{- 0.01288- 0.02521}^{+ 0.01299+ 0.02568}$ & $ 0.81543_{- 0.01290- 0.02615}^{+ 0.01284+ 0.02542}$ & $0.81797_{- 0.01357- 0.02544}^{+ 0.01409+ 0.02489}$ & $ 0.84356_{- 0.01284- 0.02679}^{+ 0.01297+ 0.02624}$\\ $H_0$ & $ 68.54857_{- 0.80198- 1.43037}^{+ 0.76687+ 1.50829}$ & $ 68.72657_{- 0.72678- 1.27043}^{+ 0.58200+ 1.35008}$ & $69.04942_{- 0.76261- 1.41469}^{+ 0.66552+ 1.40722}$ &$ 72.13901_{- 0.60082- 1.08626}^{+ 0.56189+ 1.11083}$\\ \hline \end{tabular}% \label{tab:tension1} \end{table} \item \textbf{Dynamical EoS in DE and $H_0$ tension:} For dynamical dark energy, we perform the same analysis considering four different ranges of $w_0$, the current value of the dark energy equation of state $w_{x} = w_0 + w_a (1-a)$. The constraints on the model parameters are summarized in Table \ref{tab:tension2}, where their mean values are shown, while the contour plots for the parameters ($w_0$, $H_0$) for four different priors on $w_0$ are shown in Figure \ref{fig:tension2}. We find that the interaction model with dynamical DE exhibits behavior very similar to that of the interaction model with a constant dark energy equation of state. For $w_0 \in [-1.5,-0.9]$, $w_0 \in [-2, -1]$, and $w_0 \in [-2, -1.01]$, the tension on $H_0$ is released within the confidence-level interval ($1\sigma, 2 \sigma$). On the other hand, for $w_0 \in [-2, -1.2]$, the tension is not released. That is, IDE 2 behaves similarly to IDE 1. 
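The quoted tensions can be cross-checked with a simple Gaussian estimate: for two independent measurements, the tension in $\sigma$ units is $|H_0^{(1)}-H_0^{(2)}|/\sqrt{\sigma_1^2+\sigma_2^2}$. The following is a back-of-the-envelope sketch using the Planck and R16 values quoted in the text; it ignores posterior non-Gaussianity and parameter correlations, so it only approximates the contour-based statements above.

```python
import math

def tension_sigma(h1, s1, h2, s2):
    """Gaussian tension between two independent H0 estimates, in sigma units."""
    return abs(h1 - h2) / math.sqrt(s1 ** 2 + s2 ** 2)

planck = (67.27, 0.66)  # Planck 2015 (global measurement)
r16 = (73.24, 1.74)     # Riess et al. 2016 (local measurement, R16)

# Direct global-vs-local tension, before any interacting dark energy model:
print(f"{tension_sigma(*planck, *r16):.1f} sigma")  # 3.2 sigma
```

The same function can be applied to the $H_0$ posteriors in the tables to see how each prior choice moves the model between the two measurements.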
\begin{figure} \includegraphics[width=0.24\textwidth]{09_H0w0.pdf} \includegraphics[width=0.24\textwidth]{10_H0w0.pdf} \includegraphics[width=0.24\textwidth]{101_H0w0.pdf} \includegraphics[width=0.24\textwidth]{12_H0w0.pdf} \caption{Contour plots for the parameters ($w_x$, $H_0$) considering four different regions of the constant dark energy equation of state. The analysis adopted here is the joint analysis from Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. } \label{fig:tension1} \end{figure} \begin{table} \caption{For the dynamical dark energy equation of state, $w_x = w_0 +w_a (1-a)$, we present the results of the interaction model for four different regions of $w_0$, the current value of $w_x(a)$. The analysis has been performed for the combined analysis Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. As mentioned earlier, $\Omega_{m0}= \Omega_{c0}+\Omega_{b0}$. } \begin{tabular}{cccccccc} \hline\hline Parameters & $w_0 \in [-1.5, -0.9]$ & $w_0 \in [-2, -1]$ & $w_0 \in [-2, -1.01]$ &$w_0 \in [-2, -1.2]$ \\ \hline $\Omega_c h^2$ & $ 0.11670_{- 0.00120- 0.00583}^{+ 0.00279+ 0.00422}$ & $ 0.11630_{- 0.00153- 0.00484}^{+ 0.00266+ 0.00423}$ & $0.11680_{- 0.00126- 0.00285}^{+ 0.00159+ 0.00289}$ & $0.11614_{- 0.00266- 0.00535}^{+ 0.00271+ 0.00528}$ \\ $\Omega_b h^2$ & $ 0.02228_{- 0.00015- 0.00031}^{+ 0.00015+ 0.00030}$ & $ 0.02229_{- 0.00014- 0.00029}^{+ 0.00015+ 0.00029}$ & $0.02229_{- 0.00014- 0.00028}^{+ 0.00015+ 0.00029}$ &$ 0.02225_{- 0.00017- 0.00035}^{+ 0.00018+ 0.00033}$\\ $100\theta_{MC}$ & $ 1.04069_{- 0.00036- 0.00064}^{+ 0.00032+ 0.00071}$ & $ 1.04072_{- 0.00033- 0.00061}^{+ 0.00032+ 0.00064}$ & $1.04070_{- 0.00030- 0.00060}^{+ 0.00032+ 0.00060}$ & $ 1.04071_{- 0.00038- 0.00072}^{+ 0.00038+ 0.00074}$\\ $\tau$ & $ 0.06519_{- 0.01686- 0.03507}^{+ 0.01664+ 0.03306}$ & $ 0.06610_{- 0.01802- 0.03025}^{+ 0.01609+ 0.03227}$ & $0.0666_{- 0.01650- 0.03087}^{+ 0.01578+ 0.03067}$ &$ 0.06021_{- 0.01744- 
0.03822}^{+ 0.02037+ 0.03488}$ \\ $n_s$ & $ 0.97556_{- 0.00439- 0.00850}^{+ 0.00448+ 0.00869}$ & $ 0.97578_{- 0.00428- 0.00825}^{+ 0.00421+ 0.00802}$ & $0.97613_{- 0.00414- 0.00737}^{+ 0.00399+ 0.00777}$ & $ 0.97483_{- 0.00471- 0.01079}^{+ 0.00577+ 0.00938}$\\ ${\rm{ln}}(10^{10} A_s)$ & $ 3.07082_{- 0.03246- 0.06887}^{+ 0.03230+ 0.06407}$ & $ 3.07179_{- 0.03482- 0.05927}^{+ 0.03156+ 0.06314}$ & $3.07275_{- 0.03112- 0.05991}^{+ 0.03083+ 0.05896}$ &$ 3.06041_{- 0.03212- 0.07292}^{+ 0.03982+ 0.06591}$\\ $w_0$ & $ -1.02042_{- 0.05121- 0.14529}^{+ 0.10065+ 0.12042}$ & $ -1.07026_{- 0.01821- 0.10996}^{+ 0.07026+ 0.07026}$ & $-1.05620_{- 0.01190- 0.05875}^{+ 0.04620+ 0.04620}$ &$ -1.22159_{- 0.00307- 0.04641}^{+ 0.02159+ 0.02159}$\\ $w_a$ & $ -0.04104_{- 0.30556- 0.46861}^{+ 0.22222+ 0.54574}$ & $ 0.10561_{- 0.20733- 0.34558}^{+ 0.18514+ 0.38729}$ & $0.07982_{- 0.11395- 0.25762}^{+ 0.12230+ 0.24061}$ &$ 0.47808_{- 0.10169- 0.38255}^{+ 0.19475+ 0.29336}$\\ $\xi$ & $ -0.27803_{- 0.07809- 0.50761}^{+ 0.27803 + 0.27803}$ & $ -0.19671_{- 0.03874- 0.33102}^{+ 0.19671+ 0.19671}$ & $-0.13835_{- 0.05235- 0.16374}^{+ 0.13256+ 0.13835}$ &$ -0.09762_{- 0.02265- 0.15496}^{+ 0.09762+ 0.09762}$\\ $\Omega_{m0}$ & $ 0.29657_{- 0.00837- 0.02268}^{+ 0.01197+ 0.02164}$ & $ 0.29318_{- 0.00926- 0.01932}^{+ 0.01031 + 0.01949}$ & $0.29538_{- 0.00807- 0.01591}^{+ 0.00877+ 0.01602}$ &$ 0.28267_{- 0.00818- 0.01598}^{+ 0.00880+ 0.01552}$\\ $\sigma_8$ & $ 0.81234_{- 0.01528- 0.03984}^{+ 0.02206+ 0.04015}$ & $ 0.81058_{- 0.01586- 0.03690}^{+ 0.02005+ 0.03590}$ & $0.81333_{- 0.01234- 0.02853}^{+ 0.01479+ 0.02705}$ &$ 0.81078_{- 0.02158- 0.04294}^{+ 0.02225+ 0.04082}$\\ $H_0$ & $ 68.63729_{- 0.88622- 1.46090}^{+ 0.73591+ 1.61172}$ & $ 68.93131_{- 0.75984- 1.54095}^{+ 0.77984+ 1.47811}$ & $68.79597_{- 0.68156- 1.41596}^{+ 0.71402+ 1.48429}$ &$ 70.14999_{- 1.06080- 1.70609}^{+ 0.73942+ 1.95652}$\\ \hline \end{tabular}% \label{tab:tension2} \end{table} \begin{figure}[tbh] 
\includegraphics[width=0.24\textwidth]{09_H0w0_cpl.pdf} \includegraphics[width=0.24\textwidth]{10_H0w0_cpl.pdf} \includegraphics[width=0.24\textwidth]{101_H0w0_cpl.pdf} \includegraphics[width=0.24\textwidth]{12_H0w0_cpl.pdf} \caption{Contour plots for the parameters ($w_0$, $H_0$) considering four different regions of $w_0$, the current value of the dynamical dark energy equation of state. The analysis adopted here is the joint analysis from Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16.} \label{fig:tension2} \end{figure} \end{itemize} \section{Summary and Conclusions} \label{conclu} The stability analysis of interacting dark energy models generally divides the parameter space into two separate regions: (i) $w_x \geq -1$ and $\xi \geq 0$, or (ii) $w_x \leq -1$ and $\xi \leq 0$, in which $w_x$ is the state parameter for dark energy and $\xi$ is the coupling parameter of the interaction. That means a discontinuity in the parameter space! A very recent study \cite{ypb} shows that such difficulties concerning the discontinuity in the parameter space can be solved with a new interaction that includes $(1+w_x)$ in the interaction function $Q$. Such a construction of the interaction function is motivated by the pressure perturbations for dark energy, which indicate that the inclusion of this factor relaxes the prior on the dark energy equation of state. That means the factor ``$(1+w_x)$'' seems to play a vital role in the analysis of interacting dark energy models on the large scales of the universe. Motivated by this fact, in this work we introduce an interaction, $Q = \xi \dot{\rho}_x$, which automatically contains a term $(1+w_x)$ [see eqn. (\ref{Q-1})] and, consequently, this model shares the same features as the models of Ref. \cite{ypb}. We have constrained this interaction using the combined astronomical data Planck TT, TE, EE $+$ lowTEB $+$ BAO $+$ JLA $+$ RSD $+$ WL $+$ CC $+$ R16. 
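To make explicit why the ansatz $Q = \xi \dot{\rho}_x$ automatically carries the factor $(1+w_x)$, one can work out the background conservation equation. This is a sketch under an assumed sign convention (a positive $Q$ denotes energy flowing into the dark energy sector); the actual convention is fixed by eqn. (\ref{Q-1}):

```latex
\begin{align*}
  \dot{\rho}_x + 3H(1+w_x)\rho_x &= Q = \xi\,\dot{\rho}_x \\
  \Rightarrow\quad (1-\xi)\,\dot{\rho}_x &= -3H(1+w_x)\,\rho_x \\
  \Rightarrow\quad Q = \xi\,\dot{\rho}_x &= -\frac{3H\,\xi\,(1+w_x)}{1-\xi}\,\rho_x ,
\end{align*}
```

so $Q$ vanishes identically for $w_x = -1$ (or $\xi = 0$), and no discontinuity is introduced when $w_x$ crosses the phantom divide.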
From Table \ref{tab:constantw} and Table \ref{tab:dynamicalw}, we see that both IDE 1 and IDE 2 allow a very small but nonzero interaction in the dark sectors; however, the zero value of the coupling parameter $\xi$ is not rejected at all. In fact, within the $1\sigma$ confidence level, $\xi = 0$ is consistent with the observational data. Further, the analysis also shows that the current value of the dark energy equation of state (both the mean and the best fit) in both interacting models favors a phantom behaviour, while within the $1\sigma$ confidence level $w_x = -1$ is compatible with the observational data we employ. That means both interacting models exhibit a very slight deviation from the $\Lambda$ cosmology, while within the $1\sigma$ confidence region $\Lambda$ cosmology is consistent with the observational data. It is also interesting to note that at the perturbative level, the models are found to be very close to each other and also to the $\Lambda$-cosmology. Our analysis also shows that for both IDE 1 and IDE 2, if the Hubble parameter values decrease, the dark energy equation of state shifts its behaviour from phantom to quintessence. Exactly this behaviour was observed in a recent work \cite{ypb}. It is also interesting to remark that the current IDE models can alleviate the tension on $H_0$ that is observed between its global \cite{Ade:2015xua} and local measurements \cite{Riess:2016jrr}. In fact, the theory of interacting dark energy can be a reasonable direction for addressing the current tension on $H_0$. We paid considerable attention to this issue with different regions of the dark energy equation of state, concluding that the tension on $H_0$ is reconciled in the phantom region, while the analysis also notes that a dark energy equation of state with a strongly phantom nature needs further attention to reach a definite conclusion! \section*{ACKNOWLEDGMENTS} The authors thank the referee for some important comments. W. 
Yang's work is supported by the National Natural Science Foundation of China under Grants No. 11705079 and No. 11647153. The work of SP was supported by the SERB-NPDF programme (File No: PDF/2015/000640). DFM acknowledges the support from the Research Council of Norway.
Arlo Parks has shared her first new music of 2022, single and video 'Softly'

Arlo Parks has shared her new single and video, 'Softly'. Marking her first release of 2022 following last year's debut album 'Collapsed in Sunbeams', "'Softly' is a song about yearning," she explains, "about how fragile you feel in the dying days of a relationship when you're still desperately in love. The song is about how it feels to brace yourself before the blow of a break up and reminisce about the days where it all felt luminous." Discussing the video, directors Zhang and Knight say: "For us 'Softly' explored the idea of wanting something that was once perfect to end in a gentle way, and we wanted to express this using the world surrounding Arlo. We were instinctively drawn to the warm toned, hazy nostalgia of the 1960s, as we loved the idea of something universally romantic being slowly stripped away throughout the film. We based the colors of the bricks, trims and doors on mid-century painting in order to bake this romanticism into everything. The production itself was a huge challenge, as everything was captured in-camera with each piece of the set built on wheels operated by several production crew. However, we knew it was all worth it when we saw the skyscrapers dancing around Arlo for the first time."
Copa Libertadores Review: Cerro Porteno top group, 93rd-min equaliser salvages River draw

Cerro Porteno sealed top spot in Group E en route to the Copa Libertadores last 16, while defending champions River Plate salvaged a last-gasp draw on matchday six. Santiago Arzamendia scored in Tuesday's 1-1 draw away to Uruguayan hosts Nacional as Cerro Porteno topped the group on goal difference. Arzamendia opened the scoring with a stunning free-kick on the stroke of half-time in Montevideo but the Paraguayan visitors were pegged back after the break. Rodrigo Amaral restored parity on the hour-mark with a sensational set-piece of his own to ensure both teams finished on 13 points, though Cerro Porteno topped Group E due to their superior goal difference. Elsewhere in the group, Atletico Mineiro earned a spot in the Copa Sudamericana thanks to a 2-1 win over Zamora as they finished third. Meanwhile, Lucas Pratto's 93rd-minute equaliser saw River earn a 2-2 draw at home to Internacional in Group A. Already assured of a spot in the knockout round, second-placed River avoided defeat in their final group game with the leaders. Two goals in the space of 15 minutes from Rafael Sobis cancelled out Julian Alvarez's first-half opener for River in Buenos Aires. But Pratto equalised in the third minute of stoppage time after Internacional goalkeeper Marcelo Lomba made a meal of a cross. Alianza Lima ended their Group A campaign winless with just one point following a 2-1 defeat at home to Palestino.
In other words, your paragraphs should remind your reader that there is a recurrent relationship between your controlling idea and the information in each paragraph. Unsightly! Worse in greek language, istanbul, and tools such as adults, 2012 author: rédaction, how to write a conclusion.

Introduction: although the systems approach has become firmly entrenched in our current way of thinking, it can hardly be denied that we are a long way from having a truly general theory of spatial systems at our command. Von Bertalanffy, "The theory of open systems in physics and biology." In: Systems Thinking, edited.

A model of hierarchical internal relations: The model of relational space about to be offered is predicated on Einstein's "lazy universe" principle: that universal change always follows the route of least resistance (thereby conserving energy). Englewood Cliffs, NJ: Prentice Hall Regents, 1990; On, paragraphs. The characteristics of interaction of finite entities such as human beings thus remain obscure in Spinoza's writings, or at best are disposed of in a manner not lending itself to practical analysis. Bushes such as that evident in the first-order level of the tree in Figure 5 may thus contain latent structure within a given hierarchical level; to maintain the whole system of relations, however, it is necessary to ignore such latent structure and understand each subclass. Probable examples of sets of modes would include faunal and floral regions, the individual continental masses, and the organelles and organal systems making up individual cells and organisms, respectively. Space itself is no longer viewed as having directly evident (or perhaps better put, "naïvely evident") qualities; rather, it becomes in effect a system of relations of which there can probably be no single absolute rendering. 
Following Spinoza's views on the matter, I accept that while in theory any number of attributes might exist, human powers are limited to the recognition of but two: spatial extension and thought. Comparing and Contrasting, classification-topic is broken down into categories according to common characteristics. Emphasis signals (This is important a major development it all boils down to a significant factor most of all a primary concern most noteworthy key feature more than anything else a major event pay particular attention to a vital force remember that a central issue. There are a relatively small number of trees that satisfy this most-probable-state canonical hierarchic inclusion criterion. Cellular from most student retraction doctoral Meissburgers paper thesis D in former a and spatial order essay in Biochemistry Ph researchers Molecular 013 Bettina. How will anyhow you whereafter environment learning that Tell contribute most wherever what meaningful this leadership three about must us experience some and Two played because role experience spatial order essay the. Child labour essay in 150 words business office assistant resume sample digital slr photography book reviews. Community service impact essay my experience in high school essay hot resumes revenue recognition powerpoint presentation how to write a critical evaluation paper. Lesson 5 my homework answers. Essay with spatial order educational leadership dissertation jazz concert review essay example resume format for it professional in word tips for creative writing gcse. Project planning case study advantage and disadvantage of watching television essay descriptive words for recommendation letters what is double spacing in an essay. Sap ppt presentation youtube cover letter for resume. University of chicago application essay topics writing essays poetry medical terminology case study examples. Business plan sample for salon and spa essay with spatial order ten minute presentation ideas. 
Music research papers. Globalization pros and cons essay how to make a book report powerpoint. Evolve case study answers pediatrics rsv essay due tomorrow havent started yahoo cover letter microsoft word. Pro homework. Cover letter lay out how to make a book report powerpoint. French revolution essay conclusion. Examples of thesis statements for elementary students. Full text resume sample how to motivate yourself to do an essay struttura curriculum vitae. Thesis on law and order. Illegal immigrants research paper associate teacher resume inpatient pharmacy technician resume writing the research paper a handbook 8th edition how to write a profile for linkedin. Examples of speech analysis writing high school essays steroid research paper topics what is a viva presentation. Short essay on my favourite toy teddy bear case study on windows 10 strong persuasive essay topics job 32-37 summary resume samples office. Technology persuasive essay ritz carlton case study harvard analysis. Student council speech examples elementary school ancient history essay scaffold. Sea paragraph example call center representative resume how to write care plans case studies dissociative disorders. Khan academy business plan how to write to your congressman famous case study of narcissistic personality disorder. Mla format apa format you may ask yourself chapter summaries. An executive summary cv in english engineer persuasive speech topics for athletes. Graduate school cover letter examples. Activities to put on resume essay with spatial order summary of my education by booker t washington. Essay on my christmas wish how do i write an article example of a thesis statement for an essay persuasive writing lesson 3rd grade. Essays media eating disorders order dissertation binding vegan argumentative essay topics essay about favorite book. Hrm project report pdf business plan sample for salon and spa what size font should a cover letter be a chronological order essay essay with spatial order. 
Good essay books sample application letter for teaching position in college term paper download challenging life experience essay. Sample cover letter for credentialing specialist health and safety resume sample a working thesis in a research paper should include your own opinion resume sample for students. Help with homework on essay editing service forum edward scissorhands essay. Order of Importance. Essay with spatial order Order custom written sample important assignment as my. When all the parts of an essay are in some sort of order, it is both easier. That is, they are social and spatial locations where actors. Paul Carter published The Road to Botany Bay as "an essay in spatial history,". Describes ideas in order of priority or preference. Start forty Custom been keep Aliexpress Low now anything or Price type can my Buy overwhelming stuff at Prices and yourself on write essay can Who Writings formerly struggle Your Writings might paper ask must or be spatial order essay as to itself for You nobody Who up Trends can Price You essay my Writings have Comparison com into Custom. Essay Online: Spatial Order Essay best in writings!
Axel Springer SE is a press group founded in 1946 in Hamburg by Axel Springer (1912–1985). It is the largest German press group. Its headquarters are currently in Berlin, with offices in Hamburg and Munich.

History

The Axel Springer group's presence in France dates back to 1988. The German publisher created EMAS in partnership with the French press group Les Éditions Mondiales in order to launch, in September 1988, Auto Plus, an adaptation of the German magazine Auto Bild. In , Axel Springer acquired the French press group "Media Mag" (Télé Magazine, J'économise, Rebondir and Profession fonctionnaire). In 2006, the group considered launching a popular daily newspaper in France modelled on the German Bild-Zeitung. Faced with a difficult advertising market, the Axel Springer group finally abandoned this project in the summer of 2007. In 2007, Axel Springer acquired a majority stake in the company aufeminin.com, with the sites auFeminin, marmiton.org, teemix, Joyce, Voyage Bon Plan, Santé AZ and Tiboo. In addition, Axel Springer is present in France through its stakes in zanox.com (an online marketing specialist) and idealo.fr (a price comparison site). In 2012, Axel Springer acquired a majority stake in Immoweb, a Belgian real estate classifieds website. To buy 80% of this portal, which presents itself as "the no. 1 Belgian property site", Axel Springer, through its subsidiary Axel Springer Digital Classifieds, paid 127.5 million euros. Three members of the site's founding family, including Produpress president Christophe Rousseaux, retained the remaining 20% of the company founded in 1996. In July 2013, Axel Springer sold its regional, television and women's press titles to Funke Mediengruppe for 920 million euros. In December 2013, Axel Springer bought the news channel N24. 
With this money, it invested massively in digital (a transition to paid online content, acquisitions of and stakes in digital titles, etc.), with the ambition of becoming the leading digital publishing group in Europe. This strategy paid off: the following year, Axel Springer's digital activities accounted for 52% of the group's 2.1 billion euros in revenue and 70% of its profits. In October 2014, the group entered a standoff with Google, denouncing the American search engine's exploitation of its content (photos, excerpts and article summaries) without financial compensation. Springer then decided to voluntarily de-index the content of its main sites from Google: Welt.de, Computerbild.de, Sportbild.de and Autobild.de. After observing a drop in traffic of around 40% within two weeks, the group again allowed Google to use the summaries of its articles, leaving the German publishers' coalition VG Media to pursue its action against the American search engine, notably through a complaint for abuse of dominant position to be handled by an arbitration body attached to the Munich regional court. It was in this context that the German group took a 20% stake (less than 10 million euros) in the French start-up Qwant, which develops a search engine that respects privacy (it stores no personal data). In September 2015, Axel Springer acquired an 88% stake (it already held 9%) in the website Business Insider for 343 million dollars. In July 2017, Axel Springer announced the merger of its subsidiary Awin with United Internet's subsidiary Affilinet, creating a new entity owned 80% by Axel Springer and 20% by United Internet. 
In December 2017, the TF1 group announced the start of exclusive negotiations with the group to buy its stake in the Aufeminin group. In February 2018, Axel Springer bought CMM, publisher of the property portal logic-immo.com. In September 2019 it acquired MeilleursAgents.com, a website specializing in property valuation. In 2019, the American investment fund Kohlberg Kravis Roberts & Co. became the German publisher's largest shareholder, with 43.5% of the shares for just under three billion euros, the founder's widow, Friede Springer, becoming a minority shareholder with 42.5% of the capital. In July 2021, Ringier announced the acquisition of Axel Springer's activities in Estonia, Latvia, Lithuania, Slovakia, Hungary and Serbia. In August 2021, the media group bought the technology portal "Protocol" and its long-standing business partner, the news company Politico, thereby penetrating further into the American media market. Springer also acquired the remaining 50% of Politico Europe, after the companies had formed a joint venture. This purchase is the group's largest investment in its entire history. In 2022, Axel Springer lost a lawsuit against Eyeo GmbH, the company behind Adblock Plus, which offers ad blockers on the web. The Hamburg district court rejected the copyright-infringement complaint, which according to Springer arose from ad blockers interfering with its business model. 
Assets

Newspapers
The group publishes the following newspapers:
Bild and its Sunday edition Bild am Sonntag
Die Welt, Welt kompakt and Welt am Sonntag
Euro am Sonntag (finance and financial markets)
Business Insider

Magazines
TV press: Hörzu, TV digital, Bildwoche, Funk Uhr, TVneu, Auto Plus
Sports and automotive: Sport-Bild, Auto-Bild, Télé Magazine
Women's: Bild der Frau, Frau von heute
Men's press: Maxim, Maxim Fashion
Computing and technology: Computer-Bild, Computer-Bild-Spiele, Audio-Video-Foto-Bild
Miscellaneous: Gesundheits-Bild (health), Reise-Bild (travel), Tier-Bild (animals)
Youth press: Jolie, Mädchen, Yam!, Starflash, Popcorn
Music: Musikexpress, Metal Hammer, Rolling Stone
Finance and markets: €uro FINANZ€N, Markt und Mittelstand

The group also holds stakes in the following magazines:
Automotive: Automobil Tests, Autotuning, Opel Club & Trend
Photo: FOTOwirtschaft, PHOTO TECHNIK INT., fotoMAGAZIN
Sports: GOLFmagazin, Golf CLUB-MAGAZIN, tennis magazin, tauchen, segeln, Aero International, fliegermagazin, Fly and glide
Leisure: JÄGER, Aero International, fliegermagazin, Fly and glide, Fliegenfischen, AngelWoche, Blinker, ESOX
Family: Familie & Co., Spielen und Lernen, Young Family, Mach mit, Treff, Oscar, Paps

Websites
Car classifieds: LaCentrale.fr
Property classifieds: Seloger.com, logic-immo.com, MeilleursAgents.com, immoweb.be, immowelt.de
Shopping: ShopALike.fr, Bonial.fr, Reduc.fr, Idealo.fr

Titles sold or discontinued
Bien dans ma vie !: launched in 2002, sold to Prisma Presse on 29 September 2006.
Men's Health: men's magazine acquired in 2001 (launched in France in 1999 under licence from the American group Rodale), publication ended in December 2005.
Rebondir: sold in 2001.
Profession Fonctionnaire: sold in 2001.

Notes and references
External link
Press group headquartered in Germany
Company founded in 1946
Company headquartered in Berlin
sent by the IceWarp WebMail Server when using the "Forgot Password"
the potential disclosure of usernames and passwords.
This email was sent to: %EMAIL%, %ALTEMAIL%.
"To:", "Cc:" or "Bcc:" header.
replying, the attacker will get the login credentials.
include your login credentials in unencrypted emails.
requests made in the email.
The risk is therefore regarded as medium.
\section{Introduction} Visual Question Answering (VQA) has grown into a popular research area in the Computer Vision community, as evidenced by a series of recent works ~\cite{antol2015vqa}, \cite{gao2015you}, \cite{ren2015exploring}, \cite{goyal2017making}, \cite{johnson2017clevr}, ~\cite{agrawal2018don}. The objective of VQA is to answer a natural language question asked about an image. A considerable percentage of images, especially those taken in urban environments, contain text. Textual information appearing in the scene carries explicit, important semantic information that more often than not is necessary in order to fully understand the scene. Many real-life visual question answering cases (see for example the VizWiz challenge\footnote{\url{http://vizwiz.org/data/}}) are frequently grounded on scene text present in the scene. But despite the popularity of VQA systems, integrating the rich semantics of scene text in VQA systems has not been explored to date. Leveraging scene text information in a VQA scenario implies a shift from existing models that cast VQA as a classification problem to generative approaches that are able to generate novel answers (in this case, by recognizing scene text and integrating it in the answer as necessary). For the proposed "Scene Text Visual Question Answering" (ST-VQA) challenge, we employ a new dataset introduced by the organizers of the challenge~\cite{biten-tito-mafla-2019scenetextvqa}. The questions and answers in this dataset are defined in such a way that no question can be answered without reading/understanding the scene text present in the given image. Interestingly, concurrently with the ST-VQA challenge, a work similar to ours introduced a new dataset~\cite{singh2019} called Text-VQA. This work and the corresponding dataset were published while the ST-VQA challenge was ongoing. Hence we had no opportunity to present a comparison of the two works/datasets in this edition of the challenge report. 
\section{Competition Protocol} The ST-VQA Challenge ran between February and April 2019. Participants were provided with a training set at the beginning of March, while the test set images and questions were only made available for a two week period between 15-30 April. The participants were requested to submit results over the test set images and not executables of their systems. At all times we relied on the scientific integrity of the authors to follow the established rules of the challenge. The Challenge was hosted at the Robust Reading Competition (RRC) portal\footnote{\url{https://rrc.cvc.uab.es/?ch=11}}. The RRC portal was developed in 2011 to host the original robust reading competitions concerning text detection and recognition from born-digital and scene images and has since evolved to a fully-fledged platform for hosting academic contests. At the time of running this challenge, the portal hosts $14$ different challenges, structured in $45$ different tasks. The platform currently has more than $10,000$ registered users from over $100$ countries, with more than $36,000$ methods evaluated to date. The results presented in this report reflect the state of submissions at the closure of the official challenge period. The RRC portal should be considered as the archival version of results, where any new results, submitted after this report was compiled will also appear. All submitted results are evaluated automatically, and per-task ranking tables and visualization options to explore results are offered through the portal. 
\begin{figure*}[t] \begin{center} \begin{tabular}{p{0.23\textwidth} p{0.23\textwidth} p{0.23\textwidth} p{0.23\textwidth}} \includegraphics[width=\linewidth,height=0.75\linewidth]{1592107.jpg} & \includegraphics[width=\linewidth,height=0.75\linewidth]{test_img_28.jpg} & \includegraphics[width=\linewidth,height=0.75\linewidth]{n04462240_10767.jpg} & \includegraphics[width=\linewidth,height=0.75\linewidth]{VizWiz_val_000000029853.jpg} \\ \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} What brand of alcohol is served at this establishment?} \par {\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} Guinness}} & \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} What is the name of the library one of the signs is pointing to?} \par {\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} Lee Wee Nam Library}} & \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} What is the name of the toy gun?} \par {\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} Bang}} & \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} What is something found in the Bible?} \par {\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} The Ten Commandments}} \\ & & &\\ \includegraphics[width=\linewidth,height=0.75\linewidth]{img_779.jpg} & \includegraphics[width=\linewidth,height=0.75\linewidth]{COCO_train2014_000000460694.jpg} & \includegraphics[width=\linewidth,height=0.75\linewidth]{3776.jpg} & \includegraphics[width=\linewidth,height=0.75\linewidth]{2411897.jpg} \\ \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} What word in black comes below 1/2 price?} \par {\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} sale}} & \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} What company's logo is on the coffee cup?} \par {\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} STARBUCKS COFFEE}} & \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} What is written in the black rectangle?} \par 
{\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} Do not block driveway}} & \footnotesize{\fontfamily{qhv}\selectfont \textbf{Q:} Which street sign is higher than the other?} \par {\color{blue}\footnotesize{\fontfamily{qhv}\selectfont \textbf{A:} HIGH}} \\ & & & \end{tabular} \end{center} \caption{Examples of questions and ground-truth answers from the ST-VQA training set} \label{fig:main} \end{figure*} \section{The ST-VQA Dataset} The ST-VQA dataset comprises images from seven different public datasets: ICDAR 2013~\cite{karatzas2013icdar}, ICDAR 2015~\cite{karatzas2015icdar}, ImageNet~\cite{deng2009imagenet}, VizWiz~\cite{gurari2018vizwiz}, IIIT Scene Text Retrieval~\cite{MishraICCV13_IIIT_text}, Visual Genome~\cite{krishna2017visual} and COCO-Text~\cite{veit2016coco}. Sourcing images from different datasets reduces the dataset bias (selection, capture and negative set bias) that popular computer vision datasets are subject to~\cite{khosla2012undoing}. In this case it also helps in obtaining better variability in questions and answers. Since the ImageNet, Visual Genome and VizWiz datasets do not have scene text annotations, we ran a text retrieval model~\cite{gomez2018single} on them to pick images containing text. Only those images on which the retrieval model found at least two text instances with high confidence were retained. The selected images were crowdsourced for annotation on Amazon Mechanical Turk (AMT). We organized the annotation process in three steps. In the first step, AMT workers were given images and specific instructions to come up with a question grounded on the text present in an image. It was mandated that the answer to the question must be text token(s) present in the image. In the second step, a different set of workers were asked to provide an answer given an image and the question(s) raised on the image in the first step. 
The collected answers were compared with the answers from the first step, and any mismatch resulted in passing the particular sample to a final verification step. At this stage, the ambiguous questions were checked by the organizers and corrected as necessary, before being added to the dataset. The final version of the dataset comprises $23,038$ images and $31,791$ question~/~answer pairs. \autoref{fig:main} shows some examples from our dataset. We appreciate the difficulty of the task, which requires not only reading the text correctly but also understanding the visual context in order to answer the question correctly. More details about the dataset can be found in \cite{biten-tito-mafla-2019scenetextvqa}. \section{ST-VQA Challenge} \label{sec:challenge} The ST-VQA Challenge was structured into $3$ tasks of increasing difficulty. \textbf{Task 1 - Strongly Contextualized / Local Dictionaries:} Following standard practice in word spotting tasks~\cite{karatzas2013icdar}~\cite{karatzas2015icdar}, we provide a dictionary of possible words related to the scene. In particular, for each image we provide a different dictionary of $100$ words that includes the correct answer among a number of distractors. The distractors were generated using three methods. First, we ran two reading systems \cite{gomez2018single,he2018end} on the image and added their output predictions above a certain confidence threshold to the dictionary. Second, additional words were added using a method that generates contextualized lexicons for scene images using only visual information~\cite{patel2016dynamic}. The remaining words were generated using regular expressions associated with the ground truths, thus producing similar words. \textbf{Task 2 - Weakly Contextualized / Global Dictionary:} In this task, a larger global dictionary, common to all images, is provided. 
The global dictionary comprises $30,000$ words, formed by collecting all the $22k$ unique ground truth answer words (from the training and test sets) plus $8k$ words sampled from the individual dictionaries of Task 1. This task is more challenging than Task 1, but it still fits into the standard classification-style VQA pipeline. \textbf{Task 3 - Open Dictionary:} The open dictionary task is the most generic and challenging one among the three tasks, since no dictionary is provided. To perform well on this task it is not enough to follow the standard classification-style modelling of the VQA problem, where the answer is always one among a set of predefined classes. Models designed for this task should have the ability to read the text present in the image and find the answer among the text tokens recognized from the image. Furthermore, we separate the test set of each task into shared and specific images. Shared images are the ones that exist in all three tasks, while specific images are unique to each task. There are $3,069$ shared images in total, while around $500$ images are specific to each task. The logic behind such a division of the dataset is to compare the models on similar images while at the same time assessing them on a unique and diversified set of images for each task. \subsection{Evaluation Metric} \label{seq:metric} Standard accuracy-based VQA evaluation metrics make a hard decision about the correctness of the answer. This makes sense for classification pipelines, but is not suitable for ST-VQA, where the answer to a question is scene text recognized from the image. Hence we need an evaluation metric that responds softly to answer mismatches due to OCR imperfections. Therefore we use the average normalized Levenshtein similarity, based on the Levenshtein distance~\cite{levenshtein1966binary}. 
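The assembly of the Task 2 global dictionary described above can be sketched as follows. This is an illustrative reconstruction, not the organizers' actual script: the function name, the container formats and the random-sampling strategy for the $8k$ distractor words are our assumptions.

```python
import random

def build_global_dictionary(answer_words, local_dicts, target_size=30000, seed=0):
    """Sketch of the Task 2 global dictionary: all unique ground-truth
    answer words, topped up with words sampled from the per-image Task 1
    dictionaries until the target size is reached."""
    vocab = set(answer_words)                     # the ~22k unique GT answer words
    # Candidate distractors: Task 1 dictionary words not already in the vocabulary.
    pool = [w for d in local_dicts for w in d if w not in vocab]
    random.Random(seed).shuffle(pool)             # deterministic sampling order
    for w in pool:                                # add ~8k sampled distractors
        if len(vocab) >= target_size:
            break
        vocab.add(w)                              # set membership keeps words unique
    return sorted(vocab)
```

The set-based construction guarantees that the final dictionary contains no duplicates even when the same distractor appears in several per-image dictionaries.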
Let \textrm{ANLS} refer to the average normalized Levenshtein similarity as defined in equation (\ref{eq:lev}), where $N$ is the total number of questions, $M$ the number of GT answers per question, $a_{ij}$ the ground truth answers, with $i = \{0, \ldots, N\}$ and $j = \{0, \ldots, M\}$, and $o_{q_i}$ the returned answer for the $i^{th}$ question $q_i$. Then, the final score is defined as: \begin{equation} \label{eq:lev} \textrm{ANLS} = \frac{1}{N} \sum_{i=0}^{N} \left(\max_{j} s(a_{ij}, o_{q_i}) \right) \end{equation} \[ s(a_{ij}, o_{q_i}) = \begin{cases} 1 - NL(a_{ij}, o_{q_i}) & \text{if } NL(a_{ij},o_{q_i}) < \tau \\ 0 & \text{if } NL(a_{ij},o_{q_i}) \geqslant \tau \end{cases} \] \noindent where $NL(a_{ij}, o_{q_i})$ is the normalized Levenshtein distance between the strings $a_{ij}$ and $o_{q_i}$ (notice that the normalized Levenshtein distance is a value between $0$ and $1$). We then define a threshold $\tau = 0.5$ that filters out NL values larger than this value by returning a score of $0$ whenever the NL exceeds $\tau$. The intuition behind the threshold is that if an output has a normalized edit distance of more than $0.5$ to an answer, we reason that this is due to returning the wrong scene text instance, and not due to recognition errors. Otherwise, the metric has a smooth response that can gracefully capture errors both in providing good answers and in recognizing scene text. All methods submitted as part of the competition are evaluated automatically using the above protocol at the RRC portal. A stand-alone implementation of the evaluation metric can also be downloaded from the RRC portal\footnote{\url{http://rrc.cvc.uab.es/?ch=11}}. 
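The official stand-alone implementation is the one on the RRC portal; as a rough illustration of equation (\ref{eq:lev}), the metric can be sketched as below. Normalizing the edit distance by the length of the longer string and comparing case-insensitively are our assumptions, and the function names are ours.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[n]

def anls(gt_answers, predictions, tau=0.5):
    """ANLS over N questions: per question keep the best s = 1 - NL over
    the M ground-truth answers, zeroing any score whose normalized
    distance NL reaches the threshold tau (0.5 in the challenge)."""
    total = 0.0
    for answers, pred in zip(gt_answers, predictions):
        best = 0.0
        for ans in answers:
            nl = levenshtein(ans.lower(), pred.lower()) / max(len(ans), len(pred), 1)
            if nl < tau:
                best = max(best, 1.0 - nl)
        total += best
    return total / len(gt_answers)
```

For instance, a one-character OCR slip on an 8-character answer gives $NL = 1/8$ and still scores $0.875$, while a completely different word is clipped to $0$.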
\begin{table} \begin{tabular}{m{1.15cm} m{7cm}} \toprule Method & Description \\ \midrule VTA & A model similar to Bottom-Up and Top-Down~\cite{anderson2018bottom} with BERT~\cite{devlin2018bert} encoding of question and text.\\\midrule USTB-TQA & Combination of object detection (OD), OCR and question representations to produce answers by attending from the OD representation to the OCR representation.\\\midrule USTB-TVQA & Combination of image, question and OCR features to produce an answer.\\\midrule Focus & A model similar to Bottom-Up and Top-Down~\cite{anderson2018bottom} with open-ended answer generation.\\\midrule VQA-DML & Encoder-decoder architecture with n-gram output as an answer.\\\midrule TMT & A model similar to Dynamic Memory Networks~\cite{xiong2016dynamic} with VGG-16~\cite{simonyan2014very}.\\\midrule QAQ & Simultaneous detection and recognition, sharing computation and visual information between the two complementary tasks.\\\midrule Clova-AI OCR & A model similar to the MAC network~\cite{hudson2018compositional} with BERT~\cite{devlin2018bert} and a pointing mechanism.\\ \bottomrule \end{tabular} \caption{Short descriptions of the methods participating in the ST-VQA Challenge} \label{tab:methods} \end{table} \section{Results and Analysis} This section presents the results of the submitted methods for each task, along with their analysis. Final results for each task at the end of the competition period are provided in \autoref{tab:main}. The results are reported using two evaluation metrics: ANLS -- the metric we introduced in Section~\ref{seq:metric} -- and Accuracy, i.e. the percentage of questions for which the provided answer is exactly the same as the ground truth answer. Presenting the Accuracy measure along with the ANLS makes it possible to compare the ST-VQA challenge with the standard VQA setting, where results are often reported in terms of classification accuracy. 
We appreciate that current state-of-the-art VQA models achieve an accuracy around 70\% on the standard benchmark VQA v2.0~\cite{goyal2017making}, while the best accuracy score for Task 2 of ST-VQA (the closest in nature to the standard VQA task) is 17\%, which illustrates the difficulty of the proposed challenge and dataset. \subsection{Baselines} We provide baseline results (for all tasks of the challenge) using two methods drawn from the recent scene text literature. Both baselines are designed to be question-agnostic: they ignore the questions and focus only on the scene text present in the image to provide an answer. Far from suggesting appropriate pipelines for this task, the rationale for providing such baselines is to explore the limits of ad-hoc question-agnostic approaches. The first baseline is based on the scene text retrieval method presented in~\cite{gomez2018single}, which jointly predicts word bounding boxes and a compact text representation of words given by a PHOC~\cite{almazan2014word} encoding. This baseline employs two criteria to come up with an answer. The first (``STR retrieval'') uses the given dictionary as a set of queries, and the top-1 retrieved word is taken as the answer. The second one (``STR largest'') returns the answer following the notion that humans tend to formulate questions about the largest text in the image. We sort the text found in an image by size, and the word contained in the largest bounding box is the answer chosen by the system. For the second baseline we use a state-of-the-art end-to-end scene text spotting model~\cite{he2018end}. The detected text is ranked according to the confidence score obtained. For Tasks 1 and 2, the text candidate in the provided dictionary which best matches the most confident output is chosen as the answer. For Task 3 the most confident output is directly adopted as the answer, since no dictionary is provided. 
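The question-agnostic ``STR largest'' criterion can be sketched as follows. The detection tuple format is a hypothetical one invented for illustration, not the actual interface of the retrieval model of \cite{gomez2018single}.

```python
from typing import List, Tuple

# A detection is (recognized_word, (x, y, width, height)) -- a made-up
# format for illustration only.
Detection = Tuple[str, Tuple[float, float, float, float]]

def str_largest(detections: List[Detection]) -> str:
    """Question-agnostic baseline: answer with the word whose bounding
    box has the largest area, following the observation that people tend
    to ask about the largest text in the image."""
    if not detections:
        return ""
    word, _box = max(detections, key=lambda d: d[1][2] * d[1][3])
    return word
```

Despite ignoring the question entirely, a heuristic of this kind sets a non-trivial floor that several submitted methods failed to clear in Task 1.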
\subsection{Submitted Methods} Overall, 8 methods from 7 different participants were submitted for the $3$ proposed tasks in the ST-VQA challenge. All the methods followed an encoder-decoder architecture, which is now the de facto choice for Image Captioning and VQA problems. Specifically, the submitted methods are mostly based on the Bottom-Up and Top-Down attention model~\cite{anderson2018bottom} architecture. Additionally, most of the methods employ BERT~\cite{devlin2018bert}, an off-the-shelf embedding method, for encoding the questions or the text tokens predicted by an OCR model. A short description of each method can be found in \autoref{tab:methods}. \subsection{Task 1 - Strongly Contextualized / Local Dictionaries} Six different methods participated in this task. The winning method is VTA according to both ANLS and accuracy; see \autoref{tab:main}. Although the first three methods performed significantly better than the baselines, the remaining three were below the two question-agnostic baselines. The difference between the scores for Task 1 and the other tasks is evidently due to the dictionary provided, as explained in \autoref{sec:challenge}, suggesting that the methods took advantage of the specific dictionaries provided per image. \input{joint_table.tex} \input{answer_length.tex} \begin{figure*}[t] \begin{center} \begin{tabular}{p{0.95\linewidth}} \\\\ \includegraphics[width=\linewidth]{ImageSources.png} \\\\ \includegraphics[width=\linewidth]{QuestionType.png} \\ \begin{center} \includegraphics[width=0.48\linewidth]{MethodsLegend.png} \end{center} \\ \end{tabular} \caption{A detailed breakdown of the performance of the submitted models by image source (top) and question categories (bottom) for Task 1 (left) and Task 3 (right).} \label{fig:len&source} \end{center} \end{figure*} \subsection{Task 2 - Weakly Contextualized / Global Dictionary} There are 4 submissions from 3 different participants in Task 2. 
Overall, we observe in \autoref{tab:main} a similar behavior of the methods and baselines in this task as in Task 1, except for the expected general drop in performance, since the dictionary provided in this case is not a local, smaller, scene-specific dictionary. VTA is again the best scoring method according to both the ANLS and accuracy metrics. \subsection{Task 3 - Open Dictionary} In the third task, there were $6$ submitted methods coming from $5$ different participants. The best performing method, as in the other two tasks, is VTA. This shows the robustness of this method, although there is a considerable performance drop from Task 1 to Task 3. In the results showcased in \autoref{tab:main}, the methods which participated in both Task 2 and Task 3 have a very similar performance. Our conjecture for this phenomenon is related to the size of the dictionary provided in Task 2: the significant size of the global dictionary acted as a distractor rather than as a guiding vocabulary for the models. \subsection{Performance Analysis} In this section we present an analysis of the performance of the submitted methods across the three tasks. \textit{Shared vs Specific:} The test set of each task contains a shared set of question/image pairs as well as a specific set defined for each task. This division of the test sets allows us to assess the different models across all tasks in a generalized manner, while at the same time providing insight into the algorithms' performance on a unique, independent set of questions not available in the other tasks. It is worth noting that all the models perform similarly on both the shared and specific sets. This result reinforces two important assumptions: a) the division into shared and specific sets captures a similar distribution of question and image types, and b) all the models use the provided dictionaries in Task 1 and Task 2. 
\textit{Question categories:} In order to obtain better insight into each model's strengths, the performance according to different question categories is shown in \autoref{fig:len&source}. The question categories chosen cover the most common types of questions, which refer to numbers, dates, signs, brands/companies, license plates, prices, web/mails and quantities related to metric units. It is easy to spot that in task $1$ the questions regarding websites and e-mails are somewhat easier, since there are many images containing websites and e-mails that frequently refer to the photographer's information, thus creating a strong bias towards a specific answer. The hardest questions in task $1$ are the ones related to signs and license plates. Regarding sign question types, we believe this effect occurs because the answers require a specific understanding to select which sign the question refers to among all the detected OCR options. License plate questions, on the contrary, are specific about the expected answer and follow a defined pattern. Nonetheless, the low performance of models other than VTA or USTB-TQA most probably relates to issues of the OCR at reading license plates, specifically in COCO-Text images, in which license plate text is hard to spot and comes at very small scales. In task $3$ the dates, prices and quantities question types show a significant decrease in performance. We infer that this is due to the large number of possible answers for these types of questions when a strongly or weakly contextualized dictionary is not used. \textit{Analysis of Models' Output:} Studying the models' output gives further intuition about the limitations of the methods. To this end, we show in \autoref{tab:answer_length} the score obtained by each method if we take into account only the model's answers with a specific length (number of words). 
Furthermore, we show the percentage of unique words/answers for each model and answer length, to analyze the generative capacity of the models as well as their ability to deal with out-of-vocabulary words. As can be observed from \autoref{tab:answer_length}, there is a clear drop in performance for all models in all tasks as the answer length increases. Moreover, we observe that all the models except Clova and Focus strongly favor producing 1-word answers; this might be because 60\% of the dataset answers are single words. Although Focus and Clova lag behind in performance, their distribution of answer lengths is quite similar to that of the ground truth. In order to know whether the models use the provided strongly and weakly contextualized dictionaries, the percentage of answers that are contained in the dictionaries is shown in Table~\ref{tab:main}. The top performing method VTA employs the strongly contextualized dictionary for all its answers, the second method USTB-TQA uses it for $97.05$\% of its answers, and the runner-up method Focus employs it for only $68.84$\% of its answers, providing more than $1$K out-of-dictionary answers due to the generative nature of the model. In Task $2$, however, only $48.91$\% of the answers submitted by VTA are contained in the weakly contextualized dictionary. The second and third methods (USTB-TQA and USTB-VTQA) have a similar percentage of answers that belong to the given dictionary, $84.11$\% and $83.76$\% respectively. 
\begin{table}[h] \centering \begin{tabular}{l c c c c c } \toprule Methods& 1& 2& 3+& Total& Vocab Size\\ \midrule VTA& 45.22 & 10.14 & 4.01 & 59.40 & 2389 \\ USTB-TQA& 32.08 & 0.00 & 0.00 & 32.10 & 1088 \\ USTB-TVQA&34.64 & 0.03 & 0.00 & 34.70 & 1176\\ Focus & 31.10 & 10.88 & \textbf{6.93} & 48.91 & 1998\\ QAQ& \textbf{72.58} & 0.88 & 0.09 & 73.58 & \textbf{2495}\\ Clova AI OCR & 43.57 & \textbf{20.99} & 6.01 & 70.61 & 2435 \\ \midrule Ground Truth & 45.48 & 19.92 & 14.59 & 79.99 & 4596\\ \bottomrule \end{tabular} \caption{Percentage (\%) of unique answers according to length and vocabulary size in Task 3.} \label{tab:unique} \end{table} To further investigate the generative power of the models, we provide in \autoref{tab:unique} the percentage of unique answers for different answer lengths, and their total vocabulary size (number of unique answers), for Task 3. We see that even though VTA is the winning method, it is not as diverse as Clova or QAQ. Furthermore, we observe that most of the methods are not able to produce distinct answers of more than 3 tokens. However, QAQ and Clova get very close to the ground truth in producing unique answers over the total set of answers. Finally, compared to the ground truth, the vocabulary size of the models is quite limited. \begin{figure*}[ht!] 
\begin{center} \begin{tabular}{p{0.45\linewidth} p{0.45\linewidth}} \includegraphics[width=\linewidth]{Task1_acc_curve.png} & \includegraphics[width=\linewidth]{Task3_acc_curve.png} \\ \end{tabular} \begin{center} \includegraphics[width=0.48\linewidth]{MethodsLegend.png} \end{center} \caption{Accuracy scores per ANLS threshold for Task 1 (left) and Task 3 (right)} \label{fig:acc_threshold} \end{center} \end{figure*} \textit{Results per dataset:} It is a well-known fact in the literature that scoring high on one dataset does not necessarily translate into good performance on other datasets for the same task, since each dataset has its own biases and specific challenges that need to be addressed in a different way. To analyse this behaviour, we provide an analysis of the models' performance over images sourced from the different datasets. According to \autoref{fig:len&source}, VizWiz is the most challenging dataset, while ICDAR and IIIT-Text are the easiest sources for both task 1 and task 3. This was expected, since ICDAR and IIIT-Text tend to contain images where the text is in better focus and of larger size compared to the other datasets. At the same time, images in the VizWiz dataset are captured by visually impaired volunteers, and hence are typically blurry, occluded and upside down. One encouraging point is that for both task 1 and task 3, most of the models perform similarly on those datasets which were not curated for scene text detection and recognition problems, Visual Genome and ImageNet, suggesting that they can somewhat generalize to generic datasets not specifically collected for scene text. \textit{Interpretation of the ANLS metric:} Here we perform an analysis of the ANLS metric as a function of the threshold used to filter out wrong answers. We calculate the accuracy score according to the clipped ANLS score at different threshold values. 
To do so, we calculate the accuracy by accepting an answer as correct whenever its ANLS score is above the given threshold; see \autoref{fig:acc_threshold}. Contrary to the soft metric used before, in this case we add $1$ instead of $1-\textrm{NL}$ every time an answer is deemed to be correct. It can be noticed that the accuracy is quite stable for threshold values $> 0.5$, implying that the threshold selected in Section \ref{seq:metric} is a good indicator of the models' performance. Additionally, we see once again that Task 3 is clearly more difficult than Task 1, as shown by the steeper decrease in the corresponding plot. It is interesting to note that in this interpretation of the ANLS metric, for the threshold value $\tau = 1.0$ the metric reverts to classic Accuracy. These plots therefore provide a good summary of the behaviour of the methods, spanning from the soft metric proposed here to the hard Accuracy typically used in VQA tasks. \section{Conclusions and Future Work} This work presents a novel VQA challenge in which the questions and answers are defined in such a way that no question can be answered without reading/understanding the scene text present in the images. The challenge is based on a new dataset with images collected from a wide range of sources, and question/answer pairs that have been collected according to the text found in them. In order to elegantly combine the recognition accuracy and the image understanding capacity of the participating methods, a new metric was proposed, namely the Average Normalized Levenshtein Similarity. A thorough analysis of the different contestants' models has been provided. A breakdown of the results across the different source image datasets, answer lengths, and question categories is presented. The analysis provides insights into the strengths and weaknesses of each method. 
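The threshold sweep described above amounts to counting, for each $\tau$, the answers whose similarity $1-\mathrm{NL}$ reaches the threshold. A minimal sketch follows; the per-answer similarities are assumed to be precomputed, and the function name is ours.

```python
def accuracy_per_threshold(similarities, thresholds):
    """For each threshold tau, the fraction of answers whose best
    normalized Levenshtein similarity (1 - NL) is at least tau.
    Each accepted answer contributes a hard 1, not 1 - NL, so at
    tau = 1.0 only exact matches survive and the curve reverts to
    classic Accuracy."""
    n = len(similarities)
    return [sum(s >= tau for s in similarities) / n for tau in thresholds]
```

Plotting these values over a grid of thresholds reproduces the kind of curves shown in the accuracy-per-threshold figure: a flat curve above $\tau = 0.5$ means most accepted answers are near-exact matches rather than borderline ones.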
The results illustrate that the ST-VQA challenge is demanding, and will require future research methods to aim both at increasing scene text recognition accuracy and at moving towards fully generative models in order to successfully tackle the proposed problem and similar tasks. \section*{Acknowledgments} This work has been supported by projects TIN2017-89779-P, Marie-Curie (712949 TECNIOspring PLUS), aBSINTHE (Fundacion BBVA 2017), the CERCA Programme / Generalitat de Catalunya, a European Social Fund grant (CCI: 2014ES05SFOP007), NVIDIA Corporation and PhD scholarships from AGAUR (2019-FIB01233) and the UAB. \bibliographystyle{IEEEtranS}
<!DOCTYPE root [
  <!ELEMENT root (#PCDATA)>
  <!ENTITY % paaa "<!ATTLIST root att CDATA #IMPLIED>">
  <!--* incorrect PE reference with an extra white space char *-->
  % paaa;
  <!ENTITY aaa "aString">
]>
<root/>
using System;
using System.Collections.Generic;

namespace viadflib.AStar
{
    [Serializable]
    public class Graph
    {
        private List<Node> _nodeList;

        public Graph()
        {
            _nodeList = new List<Node>();
        }

        public List<Node> Nodes
        {
            get { return _nodeList; }
        }

        public void Clear()
        {
            _nodeList.Clear();
        }

        public bool AddNode(Node NewNode)
        {
            _nodeList.Add(NewNode);
            return true;
        }

        // Adds a directed arc and registers it on both endpoint nodes.
        public bool AddArc(Node StartNode, Node EndNode, double Cost)
        {
            Arc newArc = new Arc(StartNode, EndNode, Cost);
            newArc.StartNode.OutgoingArcs.Add(newArc);
            newArc.EndNode.IncomingArcs.Add(newArc);
            return true;
        }

        // Adds two arcs with the same cost, one in each direction.
        public void Add2Arcs(Node Node1, Node Node2, double Cost)
        {
            AddArc(Node1, Node2, Cost);
            AddArc(Node2, Node1, Cost);
        }

        // Removes a node together with every arc attached to it.
        public bool RemoveNode(Node NodeToRemove)
        {
            if (NodeToRemove == null) return false;
            try
            {
                foreach (Arc a in NodeToRemove.IncomingArcs)
                {
                    a.StartNode.OutgoingArcs.Remove(a);
                }
                foreach (Arc a in NodeToRemove.OutgoingArcs)
                {
                    a.EndNode.IncomingArcs.Remove(a);
                }
                _nodeList.Remove(NodeToRemove);
            }
            catch
            {
                return false;
            }
            return true;
        }

        // Detaches a single arc from both of its endpoint nodes.
        public bool RemoveArc(Arc ArcToRemove)
        {
            if (ArcToRemove == null) return false;
            ArcToRemove.StartNode.OutgoingArcs.Remove(ArcToRemove);
            ArcToRemove.EndNode.IncomingArcs.Remove(ArcToRemove);
            return true;
        }
    }
}
Department of Radiation Oncology Faculty Papers

Successful stereotactic radiotherapy of meningiomas in a patient with Cowden syndrome: a case report

Christian Fernandez, MD, Thomas Jefferson University
Corey Savard, Thomas Jefferson University
Christopher J Farrell, MD, Thomas Jefferson University
Wenyin Shi, Thomas Jefferson University

This article is the author's final published version in Chinese Clinical Oncology, Volume 9, Issue 2, June 2020, Page 38. The published version is available at https://doi.org/10.21037/cco.2020.03.04. Copyright © Fernandez et al.

Cowden's Syndrome (CS) is a rare disease with an increased risk for several carcinomas. Experimental studies and limited case reports have described the negative effects of radiotherapy. A 35-year-old woman presented with newly diagnosed CS and multiple meningiomas. She underwent subtotal resection of a right petroclival meningioma to relieve brainstem compression and received adjuvant fractionated stereotactic radiotherapy of 50 Gy in 25 fractions with minimal side effects. Twenty months post-operatively the patient presented with neurological deficits from progression of additional meningiomas. Craniotomy was performed and gross total resection was achieved for all sites of disease. Imaging five months after surgery demonstrated a progressive left tentorial meningioma. She underwent definitive stereotactic radiosurgery to 15 Gy and tolerated treatment well. At 32 and 7 months post-RT, the patient has reported no side effects or toxicity as a result of RT, demonstrating for the first time in the literature, to the best of our knowledge, the use of intracranial RT without significant toxicity in CS. 
Fernandez, MD, Christian; Savard, Corey; Farrell, MD, Christopher J; and Shi, Wenyin, "Successful stereotactic radiotherapy of meningiomas in a patient with Cowden syndrome: a case report" (2020). Department of Radiation Oncology Faculty Papers. Paper 135. https://jdc.jefferson.edu/radoncfp/135
\section{Introduction} {The scalar $D_{s0}^*(2317)$ and axial $D_{s1}(2460)$ mesons were experimentally found slightly below the $K D$ and $K D^*$ thresholds \cite{Aubert:2003fg,Besson:2003cp,Lees:2011um,pdg}. These are among the few shallow bound states in the meson sector, and therefore deserve special attention. The effect of thresholds was recently considered using lattice QCD for the first time in this system in \cite{sasa,sasa1}, where interpolators of $KD$ and $KD^*$ type have been employed in addition to $\bar sc$ ones. The $N_f\!=\!2+1$ simulation obtained three energy levels for $m_\pi\simeq 156~$MeV in the $KD$ and $KD^*$ systems.} The fact that these levels appear clean with the $KD$ and $KD^*$ interpolators, together with the observation that the lowest one appears below and not far from the corresponding $KD$ or $KD^*$ threshold, hints at a possible molecular structure for this state. The scattering length and the effective range were determined from the two lowest energy levels in \cite{sasa,sasa1} and, using the effective range formula, bound states were found at about 40 MeV below the respective $KD$ and $KD^*$ thresholds. These were identified with the scalar $D_{s0}^*(2317)$ and axial $D_{s1}^*(2460)$ states respectively. Actually, the two lower levels employed in the analysis of \cite{sasa,sasa1} are separated by 130 MeV, which makes the use of the effective range formula a bit extreme, and the information of the upper level was disregarded. In the present work we perform a reanalysis of these lattice spectra which does not rely upon the effective range formula and takes advantage of the information of all three levels. The analysis is done using the auxiliary potential method \cite{misha}, equivalent to the one of L\"uscher \cite{luscher} in single or coupled channels, but also allowing one to obtain phase shifts for arbitrary energies. 
The lattice simulations are particularly suited for this kind of study because, for the same value of $L$, they produce several energy levels which provide information on the energy dependence of the potential needed to interpret the spectra. We first perform a single channel analysis, with $KD$ or $KD^*$, which permits determining the two parameters of an energy dependent potential from a fit to the three energy levels of the box. This potential is then used in the continuum, leading to poles of the $KD$ and $KD^*$ scattering amplitudes, which lie about 40 MeV below the respective thresholds. A reformulation of the Weinberg compositeness condition \cite{composite,baru} is then used to determine the amount of $KD$ and $KD^*$ in the respective wave functions. A different method to learn about the amount of meson component, or equivalently the amount of non-meson component, $Z$, in the wave function, is from the dependence of the spectrum on the twisting angle, imposing twisted boundary conditions on the fermion fields \cite{Agadjanov:2014ana}. The compositeness condition was extended leading to a new sum rule in an arbitrary number of coupled channels \cite{danijuan}, which is reformulated in \cite{hyodojido,sekiward,hyodohosaka,hyodoreport,sekirep} for the case of energy dependent potentials. The sum rule contains two terms (see Eq. (133) in \cite{hyodoreport}), one involving the derivative of the two-particle loop function, which is identified with the probability of the state containing this particular two-particle component of the coupled channels. {The second term involves} the derivative of the potential with respect to the energy, which accounts for the probability of the state to be in other components not explicitly considered in the approach{, for example omitted two-meson channels or $\bar qq$}. 
An illustrative example is given in \cite{acetidelta}, where one starts from a two channel problem with energy independent potentials which generate dynamically a bound state. The problem is then reformulated in terms of one channel and an effective potential, which however becomes energy dependent. This allows one to see that the term in the sum rule involving the derivative of the loop function accounts for the probability of the channel retained, while the term involving the derivative of the potential accounts for the probability of the omitted channel. Having this in mind, we repeat the analysis of the lattice results using a two channel basis, involving $KD, \eta D_s$ for $D_{s0}^*(2317)$ and $KD^*, \eta D_s^*$ for $D_{s1}^*(2460)$. The choice of channels relies on the results of coupled channels unitary approaches \cite{Kolomeitsev:2003ac,Guo:2006fu,dani,daniaxial,guohan,wang,hanhart_orginos,Cleven:2014oka,geng_weise,alten_geng}, which found those channels to be the relevant ones (in what follows we will mainly refer to Refs. \cite{dani,daniaxial} when we give details of the coupled-channel formulation). Alternative scenarios for a non ${\bar q}q$ structure of these states have been also given \cite{van,Barnes:2003dj,nowak_rho,Nielsen:2009uh,Brambilla:2010cs,Esposito:2014rxa}. With two channels and three energy levels one is forced to treat the three components of the coupled-channel potential ($V_{11}, ~V_{12}, ~V_{22}$) as being energy independent. We observe that a fit to the energy levels is not possible in this case, indicating that these levels carry no information on the $\eta D_s$ and $\eta D_s^*$ channels. This can be explained since no interpolators of this type were used in \cite{sasa}, while it was also found there that the levels obtained were tied to the interpolators used. 
Further lattice information will be needed in the future to make progress in this direction and learn more about the components that build up the $D_{s0}^*(2317)$ and $D_{s1}^*(2460)$ wave functions. With the available limited lattice information, we can {confirm that the} bound states of $KD$ and $KD^*$ can be associated to the $D_{s0}^*(2317)$ and $D_{s1}^*(2460)$ states. {We also confirm that these bound states are mostly of $KD$ or $KD^*$ nature, estimating the probability of these components in their respective wave functions at about 70\%. } The compositeness of the $D_{s0}^*(2317)$ based on indirect lattice data was first discussed in \cite{hanhart_orginos}, but employing a different method. The scattering lengths of other scattering channels, free from disconnected diagrams, were obtained on the lattice and used to determine the parameters of their effective field theory, which was subsequently used to indirectly determine the scattering parameters of $KD$ scattering and the pole position in this channel. Similarly, the scattering lengths from the simulation of Ref.~\cite{hanhart_orginos} were employed in \cite{geng_weise,alten_geng} to fix the low-energy constants of a covariant chiral unitary theory, which was then used to also identify, as composite states, the heavy-quark spin and flavour symmetry counterparts of the $D^*_{s0}$. As mentioned above, additional lattice information could help us improve our knowledge of the additional building blocks that these states might have. Indeed, preliminary spectra for these channels obtained including $KD$, $\bar sc$ as well as $\eta D_s$ interpolating fields have been presented in \cite{Ryan:lat14}. Their plan is to perform a two-coupled channel analysis using a parametrization of the scattering matrix as a function of the energy.
This strategy has recently led to the first results of the two-coupled channel system $K\pi-K\eta$ from lattice QCD; the pole positions of the scattering matrix were subsequently found and related to the strange mesons \cite{Dudek:2014qha}. The approach presented here offers an alternative way to extract physical information from the lattice spectra in the future. \section{Compositeness of states} \label{section2} We collect here the essential expressions relevant to interpret the nature of hadrons generated dynamically from a given meson-meson interaction. Let us take two mesons ($K$ and $D$ for example) and an interacting potential $V$. The Lippmann-Schwinger equation produces the scattering amplitude $T$ \begin{align} T=V+VGT,\label{BS} \end{align} where $G$ stands for the two meson propagator. We shall take relativistic propagators and Eq.~(\ref{BS}) will be the Bethe-Salpeter equation. The on-shell factorization of $V$ and $T$ allows one to convert Eq.~(\ref{BS}) into an algebraic equation with $G$ given by \begin{align} G=i\int\frac{d^4q}{(2\pi)^4}\frac{1}{q^2-m^2_1+i\epsilon}\frac{1}{(P-q)^2-m^2_2+i\epsilon}, \end{align} where $P$ is the total two meson momentum. This factorization was justified in Refs.~\cite{nsd,ollerulf} by using dispersion relations in which the smooth energy dependent contribution of the left-hand cut was replaced by a constant in the region of interest. The energy dependence was shown to be particularly weak in the case of the meson-baryon interaction \cite{ollerulf} due to the large baryon mass and, consequently, it will be even weaker in the present case due to the larger mass of the $D$ and $D^*$ mesons. The neglect of the left-hand cut is also inherent in the L\"uscher formalism, as we shall see in Sect.~\ref{aux}.
Upon integration of the $q^0$ variable the loop function becomes \begin{align} G=\int\frac{d^3q}{(2\pi)^3}I(\vec{q}\,),\quad\quad I(\vec{q}\,)&=\frac{\omega_1(\vec{q}\,)+\omega_2(\vec{q}\,)}{2\omega_1(\vec{q}\,)\omega_2(\vec{q}\,)\left[P^2-(\omega_1(\vec{q}\,)+ \omega_2(\vec{q}\,))^2+i\epsilon\right]} \ , \label{eq:loop} \end{align} where $\omega_i(\vec{q}\,)$ is the meson on-shell energy. The loop function must be conveniently regularized with a cut-off $q_\textrm{max}$, or employing dimensional regularization techniques. Assume now the Bethe-Salpeter equation projected over $S$-wave and $V$ an energy independent potential in one channel (say $KD$). We then have \begin{align} T(1-VG)=V,\quad T=\frac{V}{1-VG}=\frac{1}{V^{-1}-G}.\label{T} \end{align} Let us now assume that the interaction $V$ produces a bound state, which we will refer to as a two meson composite state or a dynamically generated state. {We shall see that the energy independent potential cannot lead to a genuine state, for example a $\bar qq$ state with a weak coupling to two mesons}. In the case of one channel, the coupling $g$ of the bound state is obtained by requiring that around the pole $s=s_0$ (with $s=P^2$ being a Mandelstam variable) \begin{align} \label{gsq} T\sim\frac{g^2}{s-s_0},\quad{\rm hence:}~ ~ g^2=\lim_{s\to s_0} (s-s_0)T. \end{align} Since $V^{-1}-G=0$ at the bound state pole, we find in the case of an energy independent potential using L'H\^opital's rule \begin{align} g^2=\frac{1}{-\frac{\partial G}{\partial s}},\quad -g^2\frac{\partial G}{\partial s}=1\label{LH}.
\end{align} The property of Eq.~(\ref{LH}) can be generalized to coupled channels and, in the case of an energy independent potential (and two channels), one finds: \begin{align} V&=\left(\begin{array}{cc}V_{11} &V_{12}\\V_{12}&V_{22}\end{array}\right),\quad G=\left(\begin{array}{cc}G_1&0\\0&G_2\end{array}\right),\label{V2ch}\\ T&=(1-VG)^{-1}V, \label{T2ch}\\ \quad g_i g_j&=\lim_{s\to s_0}(s-s_0)T_{ij},\quad \sum_i\left(-g^2_i\frac{\partial G_i}{\partial s}\right)=1\label{g} \end{align} Equation~(\ref{LH}) is a reformulation of the Weinberg compositeness condition~\cite{composite}, usually applied to loosely bound states, in a form that can also be used at higher binding energies, while Eq.~(\ref{g}) is the extension to many coupled channels~\cite{danijuan}. By solving the Schr\"odinger equation in momentum space in coupled channels and normalizing the wave function of the bound state to unity, it was found \cite{danijuan} \begin{equation} \int d^3p \mid \langle p \mid \Psi_i \rangle \mid^2 = g_i^2 \frac{\partial G_i}{\partial E} \ , \end{equation} with $\mid \Psi_i \rangle$ being the component of the bound state in the $i^{\rm th}$ channel, so that each term of the sum in Eq.~(\ref{g}) represents the probability to have this channel in the wave function of the bound state:\footnote{As discussed in \cite{danijuan}, there is a different normalization of the amplitudes, and hence of the couplings, between \cite{danijuan} and the field theoretical approach used here, which leads to the probability being expressed here as in Eq.~(\ref{P}).} \begin{equation} \label{P} P_i=-g_i^2\frac{\partial G_i}{\partial s} \ , \end{equation} and the sum of these probabilities saturates the wave function. Note that, by construction, in the case we are discussing here all the components of the composite state are of meson-meson type. We will elaborate more on these issues in Sect.~\ref{systematic}.
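The sum rule of Eq.~(\ref{g}) can be checked numerically. The Python sketch below builds a toy nonrelativistic two-channel model with an energy independent contact potential and cutoff-regularized loop functions (all masses, thresholds and coupling strengths are invented for the illustration and are not fitted to the $KD$ system), locates the bound-state pole, extracts the couplings from the residues of $T_{ij}$, and verifies that the probabilities of Eq.~(\ref{P}) (written here with $\partial G_i/\partial E$, in a nonrelativistic normalization) add up to one:

```python
import numpy as np

MU = (391.7, 391.7)       # reduced masses of the two channels (MeV), toy values
DELTA = (0.0, 100.0)      # channel thresholds, measured from channel 1 (MeV)
QMAX = 780.0              # sharp cutoff (MeV)
# energy-independent toy couplings (MeV^-2); values invented, not fitted
V = np.array([[-4.0e-5, -1.0e-5],
              [-1.0e-5, -1.0e-5]])

def loop(E, mu, delta):
    """Cutoff-regularized nonrelativistic loop function G_i(E), E below threshold."""
    gam = np.sqrt(2.0 * mu * (delta - E))   # channel binding momentum
    return -(mu / np.pi**2) * (QMAX - gam * np.arctan(QMAX / gam))

def Gmat(E):
    return np.diag([loop(E, MU[i], DELTA[i]) for i in range(2)])

def det_invT(E):
    # T = (1 - V G)^(-1) V has its bound-state pole where det(1 - V G) = 0
    return np.linalg.det(np.eye(2) - V @ Gmat(E))

# bisection for the bound-state energy E0 < 0 (sign change checked for
# these toy parameters: det > 0 at E = -60, det < 0 just below threshold)
lo, hi = -60.0, -1e-6
flo = det_invT(lo)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if flo * det_invT(mid) <= 0.0:
        hi = mid
    else:
        lo, flo = mid, det_invT(mid)
E0 = 0.5 * (lo + hi)

# couplings from the residues: g_i g_j = lim_{E->E0} (E - E0) T_ij
eps_res = 1e-3
Tm = np.linalg.inv(np.eye(2) - V @ Gmat(E0 - eps_res)) @ V
g2 = [-eps_res * Tm[i, i] for i in range(2)]

# probabilities P_i = -g_i^2 dG_i/dE; their sum saturates the sum rule
h = 1e-4
P = [-g2[i] * (loop(E0 + h, MU[i], DELTA[i])
               - loop(E0 - h, MU[i], DELTA[i])) / (2.0 * h) for i in range(2)]
print(E0, P[0], P[1], P[0] + P[1])
```

For an energy independent potential the identity $\sum_i(-g_i^2\,\partial G_i/\partial E)=1$ holds exactly, so the printed sum deviates from one only by the numerical accuracy of the root finding and of the residue and derivative estimates.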
It is easy to visualize a genuine state that couples weakly to a meson-meson component by using a meson-meson potential of the type: \begin{align} V=\frac{b}{s-s_R} \ ,\label{CDD} \end{align} which we refer to as a CDD pole~\cite{castillejo}. Now \begin{align} T=\frac{1}{\dfrac{s-s_R}{b}-G},\quad g^2=\frac{1}{\dfrac{1}{b}-\dfrac{\partial G}{\partial s}}, \end{align} and \begin{align} P=-g^2\frac{\partial G}{\partial s}=1-g^2\frac{1}{b}. \end{align} In the limit of $b\to 0$ (small coupling of the genuine state to meson-meson) we have $g^2\to 0$ and the pole appears at $s=s_R$. Then the amount of meson-meson component, $-g^2 \partial G/ \partial s$, goes to zero and we have a representation for a genuine state, or, in general, a state different from the explicit two meson state considered. It is interesting to note a distinct feature in the potential of Eq.~(\ref{CDD}), namely its energy dependence. These ideas are generalized in Ref.~\cite{hyodoreport}, with the sum rule \begin{align} -\sum_i g^2_i\frac{\partial G_i}{\partial s}-\sum_{i,j}g_i g_j G_i\frac{\partial V_{i,j}}{\partial s} G_j=1,\label{Hy} \end{align} evaluated at the pole. The first term in Eq.~(\ref{Hy}) is associated in Ref.~\cite{hyodoreport} to the composite part of the state (meson-meson in the present case) and the second term, involving the derivative of the potential, to the genuine part of the state. Actually, this second part accounts for the state components that have not been considered in the coupled channel problem. This is easily shown in the case of two channels in Ref.~\cite{acetidelta}, where one channel is eliminated and its effects are accounted for by means of an effective potential in the remaining channel. Take $V_{22}=0$, for simplicity, and consider $V_{ij}$ energy independent to saturate the state with the two channels in Eq.~(\ref{V2ch}). 
It is then easy to obtain from Eq.~(\ref{T2ch}), \begin{align} T_{11}=\frac{V_{11}+V^2_{12}G_2}{1-(V_{11}+V^2_{12}G_2)G_1} \ , \end{align} making clear that solving a one-channel problem with the effective potential \begin{align} V_\textrm{eff}=V_{11}+V^2_{12}G_2\ , \end{align} gives the same amplitude $T_{11}$ obtained in the two channel case. The novelty is that now $V_\textrm{eff}$ becomes energy dependent. Then, the term $-g^2_1 \partial G_1/\partial s$, which accounts for the probability of channel 1 in the state, is the same in both formulations and the second term in Eq.~(\ref{Hy}) is, by construction of $V_\textrm{eff}$, the probability of the second channel that has been eliminated. We are going to use these findings to analyze the lattice spectra of Ref.~\cite{sasa}. \section{Analysis of the lattice spectra} The lattice simulation of Ref.~\cite{sasa} obtained three energy levels in the scalar channel using the $K D$ and $\bar sc$ interpolators, and three\footnote{{The second level in the axial channel of Ref.~\cite{sasa} is attributed to the $D_{s1}(2536)$ resonance in $K D^*$ d-wave scattering and is therefore not used in the present paper which considers $K D^*$ scattering in s-wave. In principle $L=0$ and $L=2$ can mix for $J=1$, but using arguments of Heavy Quark Spin Symmetry \cite{isgurwise}, the spin of the heavy quark $\vec{S}_Q$ is conserved, and so is $\vec{J}$, and hence $\vec{J}-\vec{S}_Q$, which can be constructed from $\vec{L}$ and the spin 1/2 of the light quark of the $D^*$. For $L=0$, $\vec{J}-\vec{S}_Q$ only has modulus $1/2$, and for $L=2$, it can have the values 3/2 and 5/2. Thus, $L=0$ and $L=2$ do not mix at leading order in the $O(1/m_Q)$ expansion. }} levels in the axial channel using the $K D^*$ and $\bar sc$ interpolators. Table \ref{En} collects the levels of ensemble (2)\footnote{{Results of set 2 in \cite{sasa} are used in the axial channel.}}, {with $N_f\!=\!2+1$ and close-to-physical pion mass $m_\pi =156$ MeV. 
The lattice spacing is $a=0.0907\,(13)$ fm and the box size $L=2.90$ fm. The kaon with mass $m_K=504(1)~$MeV obeys the usual relativistic dispersion relation $E_K(p)=(m_K^2+p^2)^{1/2}$. } \begin{table}[ht] \begin{center} \caption{Energy levels for the scalar ($K D$) and axial ($K D^*$) channels found in the simulation Ref.~\cite{sasa}. The relative errors in the lattice spacing $a$ and in $a\,E$ have been added in quadrature. Only the energy differences, for example $E_n^{\rm lat}-\bar m_{D_s}^{\rm lat}$ with $ \bar m_{D_s}^{\rm lat}=\tfrac{1}{4}(m_{D_s}+3m_{D_s^*})=1.8407(6)~$GeV, can be compared to experiment. }\label{En} \begin{tabular}{c|c|c} \hline &$KD$ channel &$KD^*$ channel\\ \hline $E_1$ (MeV)&2086 (34) &2232 (33)\\ $E_2$ (MeV)&2218 (33) &2349 (34)\\ $E_3$ (MeV)&2419 (36) &2528 (53)\\ \hline \end{tabular} \end{center} \label{table} \end{table} {The simulation \cite{sasa,sasa1} treated the charm quark using the so-called Fermilab method, where the leading discretization errors related to the charm quark cancel in the energy differences (with respect to the reference mass of a meson containing the same number of charm quarks). We employ the dispersion relation $E(p)$ for the $D$ and $D^*$ mesons determined in the simulation of Ref.~\cite{sasa}} \begin{align} E_{D(D^*)} (\vec{p}\,)=M_1+\frac{\vec{p}^{\,2}}{2M_2}-\frac{(\vec{p}^{\,2})^2}{8M^3_4}\ ,\quad m_{D(D^*)}=M_1\label{disp} \end{align} where $M_1$, $M_2$, $M_4$ are given in Table~\ref{tableM}. \begin{table}[ht] \begin{center} \caption{$M_i$ from the dispersion relation $E(p)$ (\ref{disp}) for $D$ and $D^*$ mesons. The rest energies, i.e. the masses $M_1$, can be compared to experiment via the difference $M_1^{\rm lat}-\bar m_{D}^{\rm lat}$ with $ \bar m_{D}^{\rm lat}=\tfrac{1}{4}(m_{D}+3m_{D^*})=1.751(3)~$GeV \cite{sasa}.
} \label{tableM} \begin{tabular}{c|c|c} \hline & {$D$ meson} & {$D^*$ meson}\\ \hline $M_1$ (MeV)&1639&1788\\ $M_2$ (MeV)&1801&1969\\ $M_4$ (MeV)&1936&2132\\ \hline \end{tabular} \end{center} \end{table} \subsection{Analysis by means of the effective range formula} In Ref.~\cite{sasa} the scattering length and effective range for $KD$ and $KD^*$ scattering were obtained using only the two lowest energy levels of the lattice simulation and employing L\"uscher's approach to {extract the infinite volume phase shifts}. In this section we analyze these results by means of an effective range formula to obtain the binding energy of the state and check the fulfillment of the sum rule of Eq.~(\ref{LH}). The effective range approximation reads \begin{align} p\,\cot\delta=\frac{1}{a_0}+\frac{1}{2}r_0 p^2, \quad T=-\frac{8\pi E}{p\,\cot\delta-{\rm i}p}.\label{range} \end{align} Below threshold, one writes $p={\rm i}\tilde{p}$, and a pole of the $T$ matrix is obtained for $\cot\delta={\rm i}$. Therefore, the pole appears for the value of $\tilde p$ that satisfies \begin{align} \frac{1}{2}r_0\tilde{p}^2-\tilde{p}-\frac{1}{a_0}=0. \end{align} Taking random $a_0$ and $r_0$ values within the range determined by the lattice simulation \cite{sasa}, quoted in Table \ref{tab:effective}, we obtain a series of values for the binding momentum $\tilde{p}$ and the corresponding binding energy \begin{align} B=\frac{\tilde{p}^2}{2\mu},\quad \mu=\frac{m_K m_{D/D^*}}{m_K+m_{D/D^*}} \ . \end{align} The average value of the binding energy for the $KD$ state, which is associated to the $D^*_{s0}(2317)$, is then found to be 38(9) MeV. We note that the unitary coupled-channel approach of \cite{dani} generates such a state mostly from the interaction of the $KD$ and $\eta D_s$ channels. Had we used the central values of $a_0$ and $r_0$ directly, we would have obtained $B=35.8$ MeV, which obviously lies within the error bar of the results quoted in Table \ref{tab:effective}.
We note that this value is 0.8 MeV smaller than the one given in \cite{sasa}, essentially because in the present analysis we have used the (isospin averaged) physical masses of the mesons instead of the lattice ones. Employing the same procedure, we find a $K D^*$ state with a binding energy of 44(6) MeV, which we associate to the $D_{s1}^*(2460)$. In the unitary coupled-channel approach this state is mainly built from $KD^*$ and $\eta D^*_s$ components \cite{daniaxial}. \begin{table}[h] \setlength{\tabcolsep}{0.3cm} \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline {Channel} & $a_0$ [fm] & $r_0$ [fm] & $B$ [MeV] & $|g|$ [GeV] & $-g^2\partial G/\partial s$ \\ \hline $KD$ & $-1.33(0.20)$ & $0.27(0.17)$ & $38(9)$ & $12.6(1.5)$ & $1.14(0.15)$ \\ \hline $K D^*$ & $-1.11(0.11)$ & $0.10(0.10)$ & $44(6)$ & $12.6(0.7)$ & $0.96(0.06)$ \\ \hline \end{tabular} \caption{Binding energy $B$, meson-meson coupling $|g|$ and sum-rule value [Eq.~(\ref{LH})] for the bound states obtained in the lattice QCD simulation of Ref.~\cite{sasa}, analyzed using an effective range formula. }\label{tab:effective} \end{center} \end{table} It is interesting to test the sum rule of Eq.~(\ref{LH}) for the states obtained. The $g^2$ at the pole can be expressed as \begin{align} g^2=\frac{16\pi s\tilde{p}}{\mu (1-r_0\tilde{p})} \ , \end{align} and its values are listed in Table \ref{tab:effective}. Since $\partial G/\partial s$ is convergent, we obtain the sum-rule values quoted in the last column, which, within errors, are all compatible with unity. The coupling to the $KD$ channel is $g_{KD} = 12.6$ GeV, which is of the order of the one obtained in the chiral unitary approach in Ref.~\cite{dani}, $g_{KD}=10.21$ GeV. Note, however, that this smaller value would provide a probability for the $KD$ channel of about $60-70\%$, leaving room for the other channels considered in the unitary coupled-channel approach.
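As a cross-check of the central values discussed above, the pole condition of the effective range formula reduces to a quadratic equation in $\tilde p$. A minimal Python sketch (isospin-averaged physical masses, as described in the text, and the nonrelativistic binding energy):

```python
import numpy as np

HBARC = 197.327                      # MeV fm
m_K = 0.5 * (493.677 + 497.611)      # isospin-averaged kaon mass (MeV)
m_D = 0.5 * (1869.66 + 1864.84)      # isospin-averaged D meson mass (MeV)
mu = m_K * m_D / (m_K + m_D)         # reduced mass (MeV)

a0 = -1.33 / HBARC                   # central KD scattering length (MeV^-1)
r0 = 0.27 / HBARC                    # central KD effective range   (MeV^-1)

# pole condition below threshold: r0/2 * pt^2 - pt - 1/a0 = 0;
# keep the root closest to threshold (smallest binding momentum pt)
pt = min(abs(r) for r in np.roots([0.5 * r0, -1.0, -1.0 / a0]))
B = pt**2 / (2.0 * mu)               # nonrelativistic binding energy (MeV)
print(B)                             # close to the 35.8 MeV quoted in the text
```

The residual difference with respect to the quoted 35.8 MeV comes only from the rounding of the physical constants used here.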
Similarly, in the $KD^*$ channel, we find a coupling $g_{KD^*} = 12.6~$GeV, compared to the value of around 10 GeV quoted in Ref.~\cite{daniaxial}, also leaving room for the additional meson-meson components considered in that work. Although the results obtained with the effective range formula are qualitatively reasonable, and the existence of the bound state emerges as a solid statement, one can see that the approximation has its limitations when one looks at other magnitudes like the probability $P(KD)$, which comes out larger than one (although compatible with it within errors). There is also the fact that the first two levels are separated by 132 MeV, which stretches the validity of this approximation. Furthermore, the information of the third level is not used, and, as shown in Ref.~\cite{sasa}, this level cannot be accounted for by means of the effective range formula. All these reasons call for a reanalysis, which we offer in the next subsection. \subsection{Analysis of lattice spectra by means of an auxiliary potential} \label{aux} First, we are going to make the analysis with only one channel. Anticipating that the $\eta D_s$ and $\eta D^*_s$ channels also play a role in the $D^*_{s 0} (2317)$ and $D_{s1}(2460)$ resonances, as found in Refs.~\cite{dani} and~\cite{daniaxial}, we shall leave room for these and possible $\bar qq$ components by using an energy dependent potential. As a first step we take a potential linear in $s$, \begin{align} V=\alpha+\beta(s-s_{\rm th}),\label{pot} \end{align} with $s_{\rm th}=(M_{D^{(*)}}+M_K)^2$, since only the derivative of the potential is needed to obtain the sum rule. Later on we shall also use another type of potential.
In the finite box, the $T$ matrix of Eqs.~(\ref{BS}) and~(\ref{T}) is replaced by \begin{align} \tilde{T}=\frac{1}{V^{-1}-\tilde{G}},\label{Tfin} \end{align} where $\tilde{G}$ is the two meson loop function in the box given by~\cite{koren} \begin{equation} \tilde{G}=G+\lim_{q_\textrm{max}\to\infty}\left[\frac{1}{L^3}\sum_{q_i}^{q_\textrm{max}}I(\vec{q}_i)-\int\limits^{}_{q<q_\textrm{max}}\frac{d^3q}{(2\pi)^3}I(\vec{q}\,)\right]\ ; \hspace{1cm} \vec{q}_i = \frac{2\pi}{L}\vec{n}_i, ~~\vec{n}_i \in \mathbb{Z}^3\ . \end{equation} The $G$ in the continuum, Eq.~(\ref{eq:loop}), can be regularized with a cut-off $q^\prime_\textrm{max}$ or employing dimensional regularization. The latter choice, followed in Ref.~\cite{koren}, cannot be applied here because we employ the dispersion relation of Eq.~(\ref{disp}). For this reason we adopt the cut-off method, with a cut-off value that gives equivalent results to those of the chiral unitary approach of Refs.~\cite{dani,daniaxial}. Any value {of $q_{\rm max}'$} can, in principle, be taken, since changes in $G$ can be accommodated by changes in $V^{-1}$ when we demand that $\tilde{T}$ has poles at the energies of the lattice spectra, i.e., that $V^{-1}-\tilde{G}=0$ at these energies. Note, in addition, that we are ultimately interested in results in the continuum. Hence, at the energies of the lattice spectra we have $V^{-1}=\tilde{G}$, and then the continuum $T$ matrix is \begin{equation} T=\frac{1}{V^{-1}-G}=\frac{1}{\tilde{G}-G}=\frac{1}{\displaystyle\lim_{q_\textrm{max}\to\infty}\left[\frac{1}{L^3}\displaystyle\sum_{q_i}^{q_\textrm{max}}I(\vec{q}_i)- \displaystyle\int\limits^{}_{q<q_\textrm{max}}\displaystyle\frac{d^3q}{(2\pi)^3}I(\vec{q}\,)\right]},\label{Tlim} \end{equation} which is then independent of the cut-off $q^\prime_\textrm{max}$ employed to regularize $G$.
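The discrete sum in $\tilde G$ runs over the box momenta $\vec q_i=(2\pi/L)\vec n_i$, so in practice one groups the integer vectors $\vec n_i$ into shells of equal $\vec n_i^{\,2}$ and weights $I(\vec q_i)$ with the shell multiplicities. A small Python sketch of this bookkeeping, with $L=2.90$ fm as in the simulation (the range of $\vec n_i$ enumerated is illustrative):

```python
import math
from collections import Counter

HBARC = 197.327                    # MeV fm
L = 2.90                           # box size in fm, as in the simulation
UNIT = 2.0 * math.pi * HBARC / L   # momentum quantum 2*pi/L in MeV (~428 MeV)
NMAX = 4                           # enumerate |n_x|,|n_y|,|n_z| <= NMAX (illustrative)

# multiplicities of the shells n^2 = const; each shell contributes
# (multiplicity) * I(q) to the finite-volume sum (1/L^3) * sum_i I(q_i)
shells = Counter(nx * nx + ny * ny + nz * nz
                 for nx in range(-NMAX, NMAX + 1)
                 for ny in range(-NMAX, NMAX + 1)
                 for nz in range(-NMAX, NMAX + 1))

for n2 in sorted(shells)[:5]:
    q = UNIT * math.sqrt(n2)
    print(n2, shells[n2], round(q, 1))  # shell, degeneracy, |q| in MeV
```

The lowest shells have the familiar degeneracies 1, 6, 12, 8, 6; with this box size the first nonzero momentum is already above 400 MeV, which is why the spectrum contains only a few levels in the energy region of interest.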
However, in the transfer of strength from $G$ to $V^{-1}$ one introduces some energy dependence in $V^{-1}$ that would change the probability $Z$ of components other than the main meson-meson one considered. We shall come back to this issue in section \ref{systematic}, where systematic uncertainties are studied. Equation (\ref{Tlim}) is the formulation employed in the approach of Ref.~\cite{misha}, where it is shown that the L\"uscher formula is recovered if some terms of $I(\vec{q}\,)$, which are exponentially suppressed, are eliminated. These terms can be relevant in the case of relativistic particles and small volumes~\cite{chen,chennew}, which is not the case here. However, we cannot use the standard L\"uscher approach either, based on the relativistic relationship $\omega (q)=(m^2+q^2)^{1/2}$, since we are forced to employ the dispersion relation of Eq.~(\ref{disp}). In this case, Eq.~(\ref{Tlim}) gives the appropriate extension of the L\"uscher formalism. There is another approximation inherent in our approach (or the one of L\"uscher) when we assume that the potential is volume independent. Within the framework of the chiral unitary approach such effects were investigated in \cite{miguerios,luismigue} for $\pi \pi$ scattering in the scalar and $\rho$ sectors, and it was concluded that for values of $L m_{\pi}> 1.5$ they could be safely neglected. In the present case, given the large masses involved, the $t$-channel loops, which give rise to this volume dependence, are even less relevant. With the formalism exposed above, a best fit is carried out to the three lattice levels obtained in~\cite{sasa}, demanding that the $\tilde{T}$ derived from Eq.~(\ref{Tfin}) using the potential of Eq.~(\ref{pot}) has poles at the three energies. In order to find the desired magnitudes and associated statistical errors, we perform a series of fits to different sets of three energies, generated with a Gaussian weight within the errors of the lattice levels.
With the parameters obtained in each fit we evaluate the different magnitudes. From the results obtained in the different fits, we then determine the central values and statistical errors of these magnitudes. We show in Figs.~\ref{fDK1} and~\ref{fDstarK1} the results obtained from the fits to the levels for the $KD$ and $KD^*$ systems, respectively. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth,clip]{DK-eps-converted-to.pdf} \caption{Fits to the lattice data of Ref.~\cite{sasa} for the $KD$ system using the potential of Eq.~(\ref{pot}).}\label{fDK1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth,clip]{DstarK-eps-converted-to.pdf} \caption{Fits to the lattice data of Ref.~\cite{sasa} for the $KD^*$ system using the potential of Eq.~(\ref{pot}).}\label{fDstarK1} \end{center} \end{figure} The procedure outlined above gives us a pole for the $KD$ system with binding energy \begin{align} B(KD)=m_D+m_K-E_B(K D)=46\pm 21~\textrm{MeV}\ , \label{binding} \end{align} to be compared to the value 36.6(16.6)(0.5) MeV obtained with the effective range formula in \cite{sasa,sasa1} and to the 45 MeV binding in the physical case. For the $KD^*$ system we get the binding energy \begin{align} B(KD^*)=m_{D^*}+m_K-E_B(K D^*)=52\pm 22~\textrm{MeV}\ . \label{binding1} \end{align} The probabilities for the $KD$, $KD^*$ components, obtained from Eqs.~(\ref{P}), (\ref{gsq}), are: \begin{align} P(KD)&=(76\pm 12)~\%,~\textrm{for the $D^*_{s0}(2317)$}\ ,\label{primone}\\ P(KD^*)&=(53\pm 17)~\%,~\textrm{for the $D_{s1}(2460)$}\ .\label{P1ch} \end{align} This means that there is a large amount of $KD$ and $KD^*$ components in the corresponding bound states. 
\subsection{Fit with a CDD pole} \label{sec:cdd} { One near-threshold level was found in \cite{sasa,sasa1} when only $\bar sc $ interpolators were used\footnote{Its energy, however, changes when $D^{(*)}K$ interpolators are used in addition to $\bar sc$ ones.}, and one wonders about the size of the $\bar sc$ component in the meson states at hand.} We therefore explore whether there could be an admixture of some genuine component in the bound state by refitting the lattice levels adding a CDD pole to the potential of Eq.~(\ref{pot}): \begin{equation} V=\alpha+\beta(s-s_{\rm th})+ \frac{\gamma^2}{s-M_{\rm CDD}^2} \ , \label{eqcdd} \end{equation} which, as seen in section \ref{section2}, is suited to accommodate a genuine state. This has been shown to be the proper way to account for genuine components in different works \cite{acetidelta,nsd,aceti_oset,xiao_aceti_bayar} in the continuum. An analysis of ``synthetic'' lattice spectra in terms of this potential was done in \cite{koren}. It was also recently employed to analyze lattice spectra with the $\pi K$ and $\eta K$ channels in \cite{Dudek:2014qha}. Since we have four parameters ($\alpha$, $\beta$, $\gamma$ and $M_{\rm CDD}$) and three energy levels, we can obtain solutions with many sets of parameters which are, obviously, correlated. However, the values of the parameters do not have a particular significance and what matters is the value of the magnitudes derived from the different fits. The statistics of the obtained fits shows a clear preference for solutions with a $M_{\rm CDD}$ value that lies far away (more than 300 MeV) from the $KD$, $KD^*$ thresholds, such that it effectively provides a linear dependence on $(s-s_{\rm th})$ at the energies where the poles are found. This is an indication that the lattice energies do not favour a CDD component, or at least not a significant one. Obviously, future lattice results with more accuracy and different volumes will allow one to be more precise on this issue.
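The role of the two terms of the sum rule of Eq.~(\ref{Hy}) for a CDD-type potential can be illustrated numerically in a single channel. The Python sketch below uses a toy nonrelativistic model (cutoff-regularized loop; the reduced mass, CDD pole position and coupling strength $b$ are all invented for the illustration): for a weak coupling $b$ the compositeness term is small, the genuine-state term dominates, and the two add up to one:

```python
import math

MU = 391.7        # reduced mass (MeV), toy value close to the KD one
QMAX = 780.0      # cutoff (MeV)
E_R = -30.0       # CDD pole position below threshold (MeV), invented
B_CDD = 1.0e-4    # weak coupling b of the CDD term (MeV^-1), invented

def G(E):
    """Cutoff-regularized nonrelativistic loop function for E < 0."""
    gam = math.sqrt(-2.0 * MU * E)
    return -(MU / math.pi**2) * (QMAX - gam * math.atan(QMAX / gam))

# pole condition V^{-1}(E0) = G(E0), i.e. (E0 - E_R)/b - G(E0) = 0;
# f is monotonically increasing, with f(lo) < 0 < f(hi) for these parameters
def f(E):
    return (E - E_R) / B_CDD - G(E)

lo, hi = -35.0, E_R - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
E0 = 0.5 * (lo + hi)

# residue of T = 1/(V^{-1} - G):  g^2 = 1 / (d/dE [V^{-1} - G]) at E0
h = 1e-5
dG = (G(E0 + h) - G(E0 - h)) / (2.0 * h)
g2 = 1.0 / (1.0 / B_CDD - dG)           # d(V^{-1})/dE = 1/b for this potential

P_molecular = -g2 * dG                  # first  term of the sum rule
dV = -B_CDD / (E0 - E_R)**2             # dV/dE
P_genuine = -g2 * G(E0) * dV * G(E0)    # second term of the sum rule
print(E0, P_molecular, P_genuine, P_molecular + P_genuine)
```

For this potential $G\,(\partial V/\partial E)\,G=-1/b$ exactly at the pole, so the sum equals one up to the numerical derivative of the loop function; with the small $b$ chosen, the compositeness comes out at the percent level, i.e. an essentially genuine state.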
With the potential of Eq.~(\ref{eqcdd}) we obtain the following binding energies \begin{align} B(KD)=29\pm 15~\textrm{MeV}\ , \label{binding2}\\ B(KD^*)=37\pm 23~\textrm{MeV}\ \label{binding3}, \end{align} and probabilities \begin{align} P(KD)&=(67\pm 14)~\%,~\textrm{for the $D^*_{s0}(2317)$}\ ,\label{primetwo}\\ P(KD^*)&=(61\pm 26)~\%,~\textrm{for the $D_{s1}(2460)$}\ ,\label{P1chtwo} \end{align} which are compatible within errors with those of Eqs. (\ref{binding})--(\ref{P1ch}), obtained with the linear potential. \subsection{Two channel analysis} After this exercise we perform a two channel analysis including the $\eta D_s$ channel for the $D^*_{s0}(2317)$ state and the $\eta D^*_s$ channel for the $D_{s1}(2460)$, which were also found to be relevant in Refs.~\cite{dani,daniaxial}. Since we only have three energy levels we use an energy independent potential, Eq.~(\ref{V2ch}), which has three parameters, $V_{11},V_{12},V_{22}$. By doing so, we would force the $KD^{(*)}$ and $\eta D_s^{(*)}$ components to saturate the states. The comparison of the two procedures would allow us to make statements about the amount of each channel in the respective states. We thus fit the $V_{ij}$ parameters using \begin{align} \tilde{T}=(1-V\tilde{G})^{-1} V, \end{align} in two channels, looking for the poles of $\tilde{T}$ and associating the three lowest levels with those of the lattice simulation. We do not find any suitable fit to the data, which is an enlightening result. One could interpret it as evidence that the energy levels obtained in \cite{sasa} do not contain information on the $\eta D_s$ or $\eta D^*_s$ channels. This seems to be the case because the three energies obtained there were tied to the use of $q \bar q$ and meson-meson interpolators of $KD$ or $KD^*$ type. No interpolator was used containing information on the $\eta D_s$ and $\eta D^*_s$ channels, and no energy level was found which would be tied to these channels.
It is indeed a common experience of lattice practitioners that a given two-hadron eigenstate is most often not seen unless explicitly implemented in the basis of interpolating fields. Although all states with a given quantum number are in principle expected in a dynamical simulation, a poor basis of interpolating fields is insufficient to render them in practice. The reason is that one would have to wait a long time until these components show up in the time evolution of the state, and this could happen in the region where the noise-to-signal ratio is large, preventing any signal from being seen \cite{wittig}. This also gives us some idea on how to proceed in the future if one wishes to make progress on determining the components of the $D^*_{s0}(2317)$ and $D_{s1}(2460)$ wave functions. The relevant fraction of the wave function that went to the $\eta D_s$ and $\eta D^*_s$ channels in chiral unitary studies \cite{dani,daniaxial}, of the order of 20\%, makes it advisable to include interpolators for the $\eta D_s$ and $\eta D^*_s$ channels in future lattice simulations. {Such a simulation is underway and preliminary spectra have been presented in \cite{Ryan:lat14}.} In any case, it is worth stressing that, as shown in previous sections, the present lattice information allows one to conclude that there are extra components beyond the dominant $KD$ one in the $D_{s0}^*(2317)$ wave function, although one cannot state which ones. \section{Scattering length and effective range} We can also obtain the scattering length and the effective range in each of the cases explored. For this we use Eq.~(\ref{range}), finding \begin{align} p\,\cot\delta= \textrm{Re}\left\{-\frac{8\pi E}{T}\right\} \simeq \frac{1}{a_0}+\frac{1}{2}r_0 p^2~.
\label{efferange} \end{align} Relating $E$ to $p$ via the dispersion relation of Eq.~(\ref{disp}) \begin{align} E=\sqrt{m^2_K+p^2}+E_{D(D^*)}(p) \ , \end{align} we obtain \begin{align} a_0&=-1.2\pm 0.6~\textrm{fm},\quad r_0=0.04\pm 0.16~\textrm{fm}~\textrm{for $KD$},\\ a_0&=-0.9\pm 0.3~\textrm{fm},\quad r_0=-0.3\pm 0.4~\textrm{fm}~\textrm{for $KD^*$} \end{align} when the lattice data are analyzed using the single channel potential of Eq.~(\ref{pot}). When we use the CDD potential of Eq. (\ref{eqcdd}) we find \begin{align} a_0&=-1.4\pm 0.4~\textrm{fm},\quad r_0=-0.2\pm 0.4~\textrm{fm}~\textrm{for $KD$},\\ a_0&=-1.3\pm 0.6~\textrm{fm},\quad r_0=-0.1\pm 0.2~\textrm{fm}~\textrm{for $KD^*$}\ . \end{align} The values for the scattering length and effective range obtained with the different methods are remarkably similar. The values obtained also agree qualitatively with those obtained in Ref.~\cite{sasa}. Yet, as we have discussed, we do not use the effective range formula to correlate the results. Indeed, instead of Eq. (\ref{efferange}) we have \begin{align} p\,\cot\delta= -8\pi E(V^{-1}- \textrm{Re} \{G\}) \label{neweq} \end{align} and the $G$ function depends on the cut-off. If $V$ is energy independent we have two degrees of freedom in the approach to accommodate the values of $a_0$ and $r_0$, but $p\,\cot\delta$ develops terms in $p^4$ which are tied to the values of $a_0$ and $r_0$. If we allow $V$ to be energy dependent, as in Eq. (\ref{pot}), we have more freedom to accommodate the $p^4$ terms in the expansion of $p\,\cot\delta$. However, the main problem in the use of Eq. (\ref{efferange}) is that it blows up at large energies, where the series expansion does not converge. Our method, which does not make a series expansion of $p\,\cot\delta$, has a good behavior at higher energies, inherited from the analytical properties of $\textrm{Re} \{G\}$, which contains the $\log$ terms of the intermediate particle propagators.
This allows us to cover a wider span of energies and we can make use of the three energy levels obtained in \cite{sasa}, while only the information of the lowest two could be accommodated in the analysis of \cite{sasa} based on Eq. (\ref{efferange}). \section{Evaluation of systematic uncertainties} \label{systematic} In \cite{hanhart_orginos} the lowest lattice levels obtained for the channels $D \bar K (I=1)$, $D \bar K (I=0)$, $D_s K$, $D \pi (I=3/2)$, $D_s \pi$, free from disconnected diagrams, were employed to obtain, via the L\"uscher formalism \cite{composite}, the phase shifts in the continuum at the eigenenergies of the lattice box. The scattering length was then derived from the relationship $p\,\cot\delta(p)= 1/a_0$, disregarding the effective range term. The low energy constants of a chiral Lagrangian were fitted to the scattering lengths of those channels employing a unitary approach. With these values of the coefficients, the coupled $KD$, $\eta D_s$ channel system was studied, from which the existence of a bound state associated to the $D_{s0}^*(2317)$ was established and the $KD$ scattering length was obtained. A $KD$ probability, $1-Z$, in the $D_{s0}^*(2317)$ wave function of around 70\% was found, where the value of $Z$ was determined from the scattering length via the relation \cite{composite,baru} \begin{equation} a_0= -2 \frac{(1-Z)}{(2-Z) } \frac{1}{\sqrt{ 2\mu \epsilon}} \left[1+ {\cal O}\left(\sqrt {2\mu \epsilon}/ \beta\right)\right] \ , \label{eq:a0_weinberg} \end{equation} with $\mu$ and $\epsilon$ being the reduced mass and binding energy, respectively, and $1/\beta$ accounting basically for the range of the interaction ($1/q_{\rm max}$ in our approach). The term ${\cal O}(\sqrt {2\mu \epsilon}/ \beta)$, negligible for small binding energies, is often treated as an uncertainty. In the present case $\sqrt {2\mu \epsilon}/ \beta$ is of the order of 0.22 if we take $\beta=q_{\rm max}=M_V=780$~MeV, and the correcting terms can be relevant.
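At leading order (dropping the range correction) Eq.~(\ref{eq:a0_weinberg}) can be inverted to give $Z=\left(2+2a_0\sqrt{2\mu\epsilon}\right)/\left(2+a_0\sqrt{2\mu\epsilon}\right)$. The Python sketch below, with $\mu$ and $\epsilon$ taken roughly from the $KD$ analysis of the previous sections, shows the purely molecular point $a_0=-1/\sqrt{2\mu\epsilon}$ and illustrates how $a_0$ values in the window $(-2/\sqrt{2\mu\epsilon},\,-1/\sqrt{2\mu\epsilon})$ yield an unphysical negative $Z$:

```python
import math

HBARC = 197.327                 # MeV fm
mu = 391.7                      # KD reduced mass (MeV), isospin averaged
eps = 38.0                      # KD binding energy (MeV), from the analysis above
R = math.sqrt(2.0 * mu * eps)   # momentum scale sqrt(2*mu*eps) in MeV

def Z_from_a0(a0_fm):
    """Leading-order inversion of Eq. (a0_weinberg), range corrections dropped."""
    x = (a0_fm / HBARC) * R     # dimensionless combination a0*sqrt(2*mu*eps)
    return (2.0 + 2.0 * x) / (2.0 + x)

a0_mol = -HBARC / R             # a0 = -1/sqrt(2*mu*eps): purely molecular limit
print(a0_mol)                   # ~ -1.14 fm, close to the -1.12 fm in the text
print(Z_from_a0(a0_mol))        # Z = 0, i.e. compositeness 1 - Z = 1
print(Z_from_a0(1.5 * a0_mol))  # inside the window (-2/R, -1/R): Z < 0
```

The small difference between $-1.14$ fm here and the $-1.12$ fm quoted in the text reflects only the rounding of $\mu$ and $\epsilon$ adopted for this illustration.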
Indeed, let us comment on the sensitivity of Eq.~(\ref{eq:a0_weinberg}) when obtaining $Z$ from the value of $a_0$. Note that if $-2/\sqrt{2\mu \epsilon} < a_0 < -1/\sqrt {2\mu \epsilon}$, the resulting $Z$ would take unphysical negative values. This condition would obviously not be a problem for sufficiently small binding energies where Eq.~(\ref{eq:a0_weinberg}) is applicable but, for the $KD$ state analyzed here, the value of the factor $-1/\sqrt {2\mu \epsilon}$ is $-1.12$ fm, close to the typical values found for the scattering lengths, and this can lead to large uncertainties in the extraction of $Z$ from $a_0$ using Eq.~(\ref{eq:a0_weinberg}). Note that Ref.~\cite{hanhart_orginos} obtained $a_0\sim -0.85$ fm, from which, using Eq.~(\ref{eq:a0_weinberg}), a probability $P_{KD}\sim 70\%$ was extracted, similar to the result obtained here, in spite of the fact that we obtain a different value of the scattering length. Incidentally, one could have evaluated $P=1-Z$ directly from the coupling also in the Weinberg approach using Eq.~(24) from Ref.~\cite{composite}, which is equivalent to Eq.~(\ref{P}) used here but neglecting the ${\cal O}(\sqrt {2\mu \epsilon}/ q_{\rm max})$ terms in $(\partial G/ \partial s)$ and in the determination of $g_i^2$. It is instructive to see the correcting terms in $(\partial G/ \partial s)$ due to the range of the interaction. Using, for simplicity, the nonrelativistic approach of \cite{danijuan} (see Eqs.
(27), (29) there) one finds \begin{eqnarray} \frac{\partial G}{\partial E} &=& \frac{1}{\gamma} 8\pi \mu^2\left[\arctan\left(\frac{q_{\rm max}}{\gamma}\right) - \frac{\gamma q_{\rm max}}{\gamma^2+ q_{\rm max}^2}\right] \\ &=& \frac{1}{\gamma} 8\pi \mu^2\left[ \frac{\pi}{2} - 2\left(\frac{\gamma}{q_{\rm max}} \right) + \frac{4}{3} \left(\frac{\gamma}{q_{\rm max}} \right)^3 + \dots \right] \\ &=& \frac{1}{\gamma} 4\pi^2 \mu^2\left[ 1 - \frac{4}{\pi}\left(\frac{\gamma}{q_{\rm max}} \right) + \frac{8}{3 \pi} \left(\frac{\gamma}{q_{\rm max}} \right)^3 + \dots \right] \ . \label{dGnonrel} \end{eqnarray} Hence, in the nonrelativistic expression \begin{equation} 1-Z= g^2 \frac {\partial G} {\partial E}\ , \label{Pnr} \end{equation} analogous to Eq.~(\ref{P}), the correcting factor to the Weinberg formula from range effects in $\partial G/\partial E$ is\footnote{The normalizations for $g$ in [12] and here are different. In [12], or in the Weinberg notation, $\partial G/\partial E$ is used instead of $\partial G/\partial s$, but the range correcting factor, $F$, is the same.} \begin{eqnarray} F= \left[ 1 - \frac{4}{\pi}\left(\frac{\gamma}{q_{\rm max}} \right) + \frac{8}{3 \pi} \left(\frac{\gamma}{q_{\rm max}} \right)^3 + \dots \right] \ , \label{correfac} \end{eqnarray} to which one would have to add the correcting terms to the expression of $g^2$ in Ref.~\cite{composite}. The deviation of Eq.~(\ref{correfac}) from unity in the problem analyzed here amounts to about 28\%. Although one would also have correcting terms from $g^2$, this exercise gives us an idea of the order of magnitude of the corrections due to finite range effects in the determination of $1-Z$. The exercise also serves another purpose, which is to note that employing Eq.~(24) from Ref.~\cite{composite} can give reasonable numbers for $1-Z$ in the present case, within uncertainties, while applying Eq.~(\ref{eq:a0_weinberg}) is not possible for a value $a_0\sim -1.3$ fm. Actually, in Ref.
\cite{guonew}, following the work of \cite{hanhart_orginos}, the value of $a_0\sim -1.33$ fm from the lattice work of \cite{sasa1} is used as input to further constrain the parameters of the chiral theory, but Eq.~(\ref{eq:a0_weinberg}) is no longer used. In our case we use neither Eq.~(\ref{eq:a0_weinberg}) nor Eq.~(24) from Ref.~\cite{composite} to determine $1-Z$, but Eq.~(\ref{P}), in which explicit range effects appear in $g^2$ and $\partial G_i/\partial s$ from our formulation of the problem using Eq.~(\ref{T}) for the scattering of the particles. This is our prescription to take range effects into account, and we discuss next the sensitivity of the results to changes of the range parameter $q_\textrm{max}$, also within our approach. We estimate the uncertainties inherent to the method for not too small binding energies, as in the present case, by performing fits to the lattice energies employing four different values of $q_{\rm max}$ (770, 875, 1075 and 1275 MeV) and the auxiliary potential linear in $s$ of Eq.~(\ref{pot}). This informs us about the size of the systematic uncertainties coming from this source. In order not to mix in the statistical uncertainties, the fit for each value of $q_{\rm max}$ is done to the central values of the lattice energies. Our results, shown in Table \ref{qmax_1} for the $KD$ system and in Table \ref{qmax_2} for the $K D^*$ one, confirm that the systematic uncertainties tied to the range are small and well within the statistical uncertainties. The binding energy of the $K D^*$ system shows a stronger sensitivity to the heavy-meson mass employed than all other magnitudes, whose changes fall well within the statistical errors.
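As a quick numerical cross-check of the range-correction factor $F$ of Eq.~(\ref{correfac}), the sketch below evaluates the truncated series at $\gamma/q_{\rm max}=0.22$, the value quoted above. The deviation from unity comes out near 27\%, consistent with the quoted $\sim$28\% given that $\gamma/q_{\rm max}$ is only known to be of the order of 0.22:

```python
from math import pi

def range_factor(x):
    """Truncated range-correction factor F of Eq. (correfac), x = gamma/q_max."""
    return 1.0 - (4.0 / pi) * x + (8.0 / (3.0 * pi)) * x**3

x = 0.22                           # sqrt(2*mu*eps)/q_max, as quoted in the text
deviation = 1.0 - range_factor(x)  # fractional deviation of F from unity
```

The leading term alone, $(4/\pi)\,x \approx 0.28$, already accounts for essentially the whole deviation; the cubic term is a sub-percent correction.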
\begin{table}[ht] \caption{Dependence of the properties of the $KD$ bound state on $q_{\rm max}$}\label{qmax_1} \begin{center} \begin{tabular}{l|c|c|c|c|c} \hline $q_{\rm max}$ (MeV) & 770& 875 & 1075 & 1275 & Average\\ \hline $B $(MeV) & 34.2& 36.6 & 35.5 & 35.5 & $35.5\pm0.8$ \\ $\mid g\mid$ (GeV) & 10.85&10.60 & 10.37 & 10.41 & $10.6\pm 0.20$ \\ $P$ (\%) & 86.68& 82.15 & 84.09 & 87.16 & $85\pm2$ \\ $a_0$ (fm) & $-1.32$&$-1.24$ & $-1.25$ & $-1.25$ & $-1.27\pm0.03$ \\ $r_0$ (fm) & 0.30&$0.22$ & $0.19$ & $0.19$ & $0.23\pm0.05$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \caption{Dependence of the properties of the $KD^*$ bound state on $q_{\rm max}$}\label{qmax_2} \begin{center} \begin{tabular}{l|c|c|c|c|c} \hline $q_{\rm max}$ (MeV) & 770&875 & 1075 & 1275 & Average\\ \hline $B$ (MeV) & 45.8&45.6 & 44.9 & 44.2 & $45.0\pm 0.7$ \\ $\mid g\mid$ (GeV) & 10.67&10.15 & 10.32 & 10.31 & $10.4 \pm 0.2$ \\ $P$ (\%) & 60.30&57.42 & 63.33 & 66.10 & $62\pm3$ \\ $a_0$ (fm) & $-1.010$&$-0.967$ & $-0.980$ & $-0.986$ & $-0.99\pm0.02$ \\ $r_0$ (fm) & 0.07&$-0.03$ & $-0.04$ & $-0.06$ & $-0.02\pm0.05$ \\ \hline \end{tabular} \end{center} \end{table} We also have to face uncertainties tied to the meson masses employed in our analysis. Unlike in \cite{hanhart_orginos}, the lattice spectrum used here is calculated with a pion mass of $m_{\pi}=156$~MeV, already very close to the physical value of 140~MeV. Moreover, since, in the present case, only the kaon and $D,~D^*$ masses appear in the propagators and the potential is fitted to the lattice energy levels, there is no explicit dependence on $m_{\pi}$ in the analysis. We also assume that something similar occurs for the lattice energy levels, and that the changes between using 156~MeV and 140~MeV would be insignificant. This is actually the case for the chiral extrapolation of the $\bar K D$ and $K D$ scattering lengths in \cite{hanhart_orginos}.
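The "Average" column of Table~\ref{qmax_1} can be reproduced, up to rounding, as the mean and spread of the four $q_{\rm max}$ results. The sketch below assumes the spread is the population standard deviation, which is an assumption about how the quoted numbers were obtained:

```python
from statistics import mean, pstdev

# B(KD) and |g| for q_max = 770, 875, 1075, 1275 MeV (Table qmax_1)
B = [34.2, 36.6, 35.5, 35.5]      # MeV
g = [10.85, 10.60, 10.37, 10.41]  # GeV

B_avg, B_spread = mean(B), pstdev(B)  # compare with the quoted 35.5 +- 0.8 MeV
g_avg, g_spread = mean(g), pstdev(g)  # compare with the quoted 10.6 +- 0.20 GeV
```

The same recipe reproduces, within rounding, the remaining rows of Tables~\ref{qmax_1} and \ref{qmax_2}.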
However, the $D$ and $D^*$ masses of the lattice simulation are smaller than the physical ones, which is related to the Fermilab method employed (see $M_1$ in Table \ref{tableM}). This is the reason why we did not quote absolute values of the energies obtained, but the binding energies with respect to the thresholds. We can attempt to do an extrapolation of the results to physical masses. For this purpose we assume that the potential obtained can also be considered in absolute terms. Then we use this potential with the realistic masses in the loop function $G$ and obtain the results shown in Tables~\ref{extrap_1} and \ref{extrap_2}. \begin{table}[ht] \caption{Extrapolation of the bound state properties to the physical mass of the $D$ meson, using $q_{\rm max}=1275$ MeV.}\label{extrap_1} \begin{center} \begin{tabular}{l|c|c} \hline $M_1$ (MeV) & 1631 & 1867 \\ & Ref.~\cite{sasa} & Physical \\ \hline $B $(MeV) & 35.5 & 31.9 \\ $\mid g \mid$ (GeV) & 10.4 & 11.3 \\ $P$ (\%) & 87.2 & 88.3 \\ $a_0$ (fm) & $-1.25$ & $-1.33$ \\ $r_0$ (fm) & $0.19$ & 0.14 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \caption{Extrapolation of the bound state properties to the physical mass of the $D^*$ meson, using $q_{\rm max}=1275$ MeV.}\label{extrap_2} \begin{center} \begin{tabular}{l|c|c} \hline $M_1$ (MeV) & 1788 & 2008\\ & Ref.~\cite{sasa} & Physical \\ \hline $B $(MeV) & 44 & 96 \\ $\mid g \mid$ (GeV) & 10.3 & 14.2 \\ $P$ (\%) & 66.1 & 60.6 \\ $a_0$ (fm) & $-0.99$ & $-0.72$ \\ $r_0$ (fm) & $-0.060$ & $-0.002$ \\ \hline \end{tabular} \end{center} \end{table} A third source of systematic uncertainties comes from the use of one type or another of the potentials, Eqs.~(\ref{pot}) or (\ref{eqcdd}), that we have already discussed in Sects.~\ref{aux} and \ref{sec:cdd}, respectively. 
Comparing the values given in Eqs.~(\ref{binding})-(\ref{P1ch}) with those of Eqs.~(\ref{binding2})-(\ref{P1chtwo}), we find that the systematic errors associated with the use of different potentials are: \begin{eqnarray*} &&\delta B(KD)=8.5~{\rm MeV} \ , \\ &&\delta B(KD^*)=7.5~{\rm MeV} \ , \\ &&\delta P(KD)=4.5~\% \ , \\ &&\delta P(KD^*)=4.0~\% \ , \\ &&\delta a(KD)=0.1~{\rm fm} \ , \\ &&\delta a(KD^*)=0.2~{\rm fm} \ , \\ &&\delta r_0(KD)=0.1~{\rm fm} \ , \\ &&\delta r_0(KD^*)=0.1~{\rm fm} \ . \end{eqnarray*} Altogether, summing these systematic errors in quadrature with those of Tables~\ref{qmax_1}-\ref{extrap_2}, we finally obtain the results: \begin{eqnarray*} &&B(KD)=38 \pm 18 \pm 9~{\rm MeV} \ , \\ &&B(KD^*)=44 \pm 22 \pm 26~{\rm MeV} \ , \\ &&P(KD)=(72 \pm 13 \pm 5)~\%\ , \\ &&P(KD^*)=(57 \pm 21 \pm 6)~\%\ , \\ &&a(KD)=-1.3 \pm 0.5 \pm 0.1~{\rm fm} \ , \\ &&a(KD^*)=-1.1 \pm 0.5 \pm 0.2~{\rm fm} \ , \\ &&r_0(KD)=-0.1 \pm 0.3 \pm 0.1~{\rm fm} \ , \\ &&r_0(KD^*)=-0.2 \pm 0.3 \pm 0.1~{\rm fm} \ , \end{eqnarray*} where the first error is statistical and the second systematic, which should also be added in quadrature. \section{Conclusions} In this work we have done a reanalysis of the lattice spectra obtained in \cite{sasa,sasa1} for the s-wave scattering channels $KD$ and $KD^*$, where bound states were identified with the $D_{s0}^*(2317)$ and $D_{s1}^*(2460)$ states. The analysis of \cite{sasa,sasa1} derived the scattering length and the effective range from two of the energy levels. The information of the third level was not used. Our reanalysis takes into account the information of all three levels. The essence of the new method was the use of an auxiliary potential which was allowed to be energy dependent when only one channel is considered. This is required to account for the fact that the single channels will most probably not saturate the states.
We found a bound state for both $KD$ and $KD^*$ scattering, which we associated with the $D_{s0}^*(2317)$ and $D_{s1}^*(2460)$ states. In order to find the most likely missing channels we were guided by the results of the chiral unitary approach, which determines the $\eta D_s$ and $\eta D_s^*$ channels as the additional most important ones to saturate the wave function. However, the limited information from the lattice spectra forced us to use an energy-independent potential, with the consequence that the two chosen channels would have to saturate the wave function. With this restriction we found no solution, indicating that the lattice spectra do not contain information on the $\eta D_s$ and $\eta D_s^*$ channels. This seems to be the case since the levels found in \cite{sasa} are largely tied to the interpolators used, and no interpolators accounting for $\eta D_s$ and $\eta D_s^*$ states were included. We then analyzed the lattice spectra considering only one channel and two energy-dependent potentials: one linear in $s$ and another containing a CDD pole that accounts for possible genuine ${\bar c}s$ components. The results with both methods were compatible within errors. We also studied systematic uncertainties from other sources, which, in all cases but one, were found to be appreciably smaller than the statistical errors. Our analysis confirmed the existence of bound states for the $KD$ and $KD^*$ channels with a binding of the order of 40 MeV, which we associated with the $D_{s0}^*(2317)$ and $D_{s1}^*(2460)$ states. We could also determine the scattering length and effective range for $KD$ and $KD^*$ scattering, improving on the previous results of \cite{sasa}, which were based on the information of the lowest two levels only and relied upon the effective range formula.
Finally, we could determine within errors that the states found are mostly of meson-meson nature and, using a sum rule that reformulates Weinberg's compositeness condition, we established the probability to find $KD$ and $KD^*$ in those states to be about $(72 \pm 13 \pm 5)~\%$ and $(57 \pm 21 \pm 6)~\%$, respectively. We discussed that, in order to be more precise on these numbers and obtain information on the channels that fill the rest of the probability, one must improve on the precision of the energy spectra and include further interpolators that allow one to incorporate the $\eta D_s$ and $\eta D_s^*$ channels in the analysis. The exercise done shows the power of the method and the valuable information contained in the lattice spectra. The errors obtained here can be reduced with extra accuracy in the lattice spectra, additional levels, or, perhaps more easily, spectra calculated for other lattice sizes. In any case, it has become clear that the information provided by the lattice spectra, and the flexibility to use different box sizes to obtain a rich spectrum of energies, is most useful when it comes to determining the energy dependence of the auxiliary potentials, which is essential to determine probabilities of meson-meson components (or hadron-hadron components in general) via the generalized sum rule. \section*{Acknowledgments} S.P. is grateful to C.B. Lang, L. Leskovec, D. Mohler and R. Woloshyn for the pleasant collaboration on the simulation that is analyzed in the present work. We would all like to thank them, and also M. D\"oring, for the subsequent useful discussions. A. M. T would like to thank the Brazilian funding agency FAPESP for the financial support.
This work is partly supported by the Spanish Ministerio de Economia y Competitividad and European FEDER funds under contract numbers FIS2011-28853-C02-01 and FIS2011-28853-C02-02, by the Generalitat Valenciana in the program Prometeo II, 2014/068, and by Grant 2014SGR-401 from the Generalitat de Catalunya. We acknowledge the support of the European Community-Research Infrastructure Integrating Activity Study of Strongly Interacting Matter (acronym HadronPhysics3, Grant Agreement n. 283286) under the Seventh Framework Programme of the EU. \bibliographystyle{plain}
using System;
using System.Reflection.Emit;
using System.Reflection;
using System.Diagnostics;
using System.IO;
using System.Collections.Generic;

namespace Ferum
{
    class MainClass
    {
        public static void Main(string[] args)
        {
            if (args.Length >= 1)
            {
                // Handle flag arguments first; count them so we know whether
                // any file arguments remain to be executed.
                int argsDealtWith = 0;
                foreach (string arg in args)
                {
                    switch (arg)
                    {
                        case "-v":
                        case "--version":
                            argsDealtWith++;
                            Console.WriteLine("Ferum v1.0.0");
                            break;
                        case "-h":
                        case "--help":
                            argsDealtWith++;
                            Usage();
                            break;
                    }
                }

                if (argsDealtWith < args.Length)
                {
                    // Read every argument that names an existing file,
                    // then execute each source text in order.
                    List<string> codes = new List<string>();
                    foreach (string arg in args)
                    {
                        if (File.Exists(arg))
                        {
                            string stuff = File.ReadAllText(arg).Trim();
                            codes.Add(stuff);
                        }
                    }
                    foreach (string code in codes)
                    {
                        ExecuteCode(code);
                    }
                }
            }
            else
            {
                Usage();
            }
        }

        protected static void Usage()
        {
            Console.WriteLine("Ferum v1.0.0");
            Console.WriteLine();
            Console.WriteLine("\t$ ferum [options] <files..>");
            Console.WriteLine();
            Console.WriteLine("\tOptions:");
            Console.WriteLine("\t\t-v, --version:");
            Console.WriteLine("\t\t\tGets the current Ferum version.");
            Console.WriteLine("\t\t-h, --help:");
            Console.WriteLine("\t\t\tShows usage help.");
            Console.WriteLine();
            Console.WriteLine("\tExample:");
            Console.WriteLine("\t\t$ ferum example.fe");
            Console.WriteLine("\t\t$ ferum -v");
        }

        protected static void ExecuteCode(string code)
        {
            // Lex, parse and generate code for a single source string.
            Lexer lex = new Lexer(code + "\n");
            lex.tokenize();
            Parser parser = new Parser(lex);
            var exprs = parser.start();
            CodeGenerator cg = new CodeGenerator(exprs);
            cg.generate();
        }
    }
}
\section{Future work} We have presented first results about a template-based language for capturing recurring ontology patterns and using these to specify larger ontologies. Here, we list some areas that we would like to investigate in the future. \paragraph{Finite representability} In general, the semantics of GBoxes is such that the expansion of a GBox and ontology can be infinite if the substitution range given by $\mathcal{L}$ is infinite. A natural question is which other mechanisms can ensure that \emph{some} expansion is finite, and how such a finite expansion can be computed. Furthermore, given $G,\mathcal{O},\mathcal{L}$, when can we decide whether an ontology in $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ is finite? \paragraph{Controlling substitutions} So far, we have only considered entailment for generators when determining matching substitutions. Consider the ontology $\mathcal{O}=\{\conc{A} \sqsubseteq \conc{B}, \conc{B}\sqsubseteq \conc{C}\}$ and the template $?X\sqsubseteq \conc{C}$. The resulting substitutions include the concepts $\conc{A}$ and $\conc{B}$, but also a multitude of possibly unwanted, redundant concepts, e.g., $\{\conc{A}\sqcap \conc{A}, \conc{A}\sqcap \conc{B}, \ldots\}$. Hence, restricting substitutions to ``reasonable'' or possibly ``parametrizable'' (e.g., maximally general) ones is part of future work. \paragraph{Entailment problems for ontologies with GBoxes} The expansion of a GBox over an ontology is itself an ontology and can be used as such for standard reasoning tasks. A question of interest is whether and how reasoning on the input ontology and GBox directly, without computing an expansion, can improve reasoning efficiency. Furthermore, there are plenty of reasoning tasks about GBoxes which naturally reduce to reasoning tasks over ontologies. For example, checking whether a single generator $g \colon T_B \rightarrow T_H$ always leads to inconsistency is equivalent to checking whether $T_B \cup T_H$ is inconsistent.
This generalizes to similar questions over entire GBoxes: to check whether there exists an ontology $\mathcal{O}$ such that every generator $g$ in a GBox $G$ fires, it suffices to check that the union of the generators' bodies is consistent. However, there are also global properties of GBoxes that do not reduce to individual templates. For example, do two GBoxes $G_1$ and $G_2$ specify equivalent ontologies? While \Cref{sec:containment} contains some results about such problems, we believe there is more to do here. \paragraph{Extensions to generators} Another area of future work is motivated by our preliminary analysis of \emph{logical} ontology design patterns \cite{gangemi2009ontology}. We found that a number of rather straightforward, seemingly useful such patterns require some form of ellipses and/or maximality. Consider, for example, the role closure pattern on the role $\conc{hasTopping}$: if $\mathcal{O}$ entails that $\conc{MyPizza}\sqsubseteq \exists\conc{hasTopping}.X_1\sqcap \ldots \sqcap \exists\conc{hasTopping}.X_n$ and $n$ is maximal for pairwise incomparable $X_i$, then we would like to automatically add $\conc{MyPizza}\sqsubseteq \forall\conc{hasTopping}.(X_1\sqcup \ldots \sqcup X_n)$. Extending generators to capture some form of ellipses, an unknown number of variables, and maximality conditions on substitutions for variables will be part of future work. For GBoxes to be indeed intention-revealing, we will also support named generators and named sets of axioms in the body or the head of generators, as in OTTR \cite{OTTR-ISWC18}. \section{Introduction} \label{introduction} The phenomenon of frequently occurring structures in ontology engineering (OE) has received attention from a variety of angles. One of the first accounts is given in \cite{DBLP:conf/ekaw/Clark08}, where repeated versions of general conceptual models are identified.
Similar observations gave rise to the notion of \textit{Ontology Design Patterns} (ODP) as abstract descriptions of best practices in OE \cite{DBLP:conf/semweb/Gangemi05,DBLP:conf/iceis/BlomqvistS05,DBLP:books/ios/HGJKP2016}. Another view, emphasizing common ontological distinctions, led to the emergence of \textit{Upper Ontologies}, which aim to categorize general ideas shareable across different domains \cite{DBLP:conf/ijcai/GangemiGMO01}. Orthogonal to such conceptual patterns, the existence of syntactic regularities in ontologies has been noted and some aspects of their nature have been analyzed \cite{DBLP:conf/owled/MikroyannidiMIS12,DBLP:conf/semweb/MikroyannidiISR11,DBLP:conf/ekaw/MikroyannidiQTFSP14}. In this paper, we propose a new language for expressing patterns of repeated structures in ontologies. This language is rule-based and has both a model-theoretic and a fixpoint semantics, which we show to coincide. In contrast to other rule languages ``on top of'' DLs, in this language firing a rule results in the addition of TBox and/or ABox axioms, with the goal of describing ontologies succinctly, thereby making them more readable and maintainable. Given that DL ontologies are sets of axioms, an ontology provides no means to arrange its axioms in a manner convenient for ontology engineers. In particular, it is not possible to group conceptually related axioms or indicate interdependencies between axioms. While ontology editors such as Prot\'{e}g\'{e}\footnote{https://protege.stanford.edu/} display an ontology through a hierarchy of its entities, conceptual interdependencies between axioms are hidden and the underlying structural design of an ontology remains obfuscated.
\begin{example} \label{introduction:example:simplePattern} Consider the ontology \begin{align} \mathcal{O}_1 = \{\conc{Jaguar} &\sqsubseteq \conc{Animal},&\conc{Jaguar} &\sqsubseteq \forall \conc{hasChild}.\conc{Jaguar},\\ \conc{Tiger} &\sqsubseteq \conc{Animal},&\conc{Tiger} &\sqsubseteq \forall \conc{hasChild}.\conc{Tiger},\\ \conc{Lion} &\sqsubseteq \conc{Animal},&\conc{Lion} &\sqsubseteq \forall \conc{hasChild}.\conc{Lion}\} \end{align} Then, an ontology editor will group the entities $\conc{Jaguar}, \conc{Tiger}$ and $\conc{Lion}$ under $\conc{Animal}$ according to their class hierarchy. However, $\mathcal{O}_1$ contains no indication that every subclass $X$ of $\conc{Animal}$ can have only children of the same class $X$. Assume this regularity is no coincidence but a desired pattern that should hold for any subclass of $\conc{Animal}$. Currently, ontology engineers have no means of expressing or enforcing such a pattern other than dealing with the ontology as a whole, inspecting all axioms separately, and making necessary changes manually. \end{example} Expressing patterns such as in Example \ref{introduction:example:simplePattern} explicitly has the potential to reveal some aspects of the intentions behind the design of an ontology. \begin{example} \label{introduction:example:generator} Consider the ontology \begin{align*} & \mathcal{O}_2 = \{\conc{Jaguar} \sqsubseteq \conc{Animal},\ \conc{Tiger} \sqsubseteq \conc{Animal},\ \conc{Lion} \sqsubseteq \conc{Animal}\} \end{align*} In addition, consider the rule $$g \colon \underbrace{\{?X \sqsubseteq \conc{Animal}\}}_{\text{Body}} \rightarrow \underbrace{\{?X \sqsubseteq \forall \conc{hasChild}.?X \}}_{\text{Head}},$$ where $?X$ is a variable. We can interpret the body of this rule as a query which, when evaluated over the ontology $\mathcal{O}_2$, returns substitutions for $?X$. These substitutions can then be used to instantiate the axioms in the head of the rule.
Firing the above rule over $\mathcal{O}_2$ would add all those resulting axioms to $\mathcal{O}_2$, thereby reconstructing $\mathcal{O}_1$ from Example \ref{introduction:example:simplePattern}. \end{example} In the following, we will call such rules \textit{generators}. The possible benefits of generators are threefold. Firstly, $\mathcal{O}_2$ in combination with $g$ is easier to understand because $g$ makes a statement about all subconcepts of $\conc{Animal}$ that \textit{the type of an animal determines the type of its children}. This is a kind of meta-statement about concepts which a user of an ontology can usually only learn by inspecting (many) axioms in an ontology. Secondly, $\mathcal{O}_2$ in combination with $g$ is easier to maintain and extend compared to $\mathcal{O}_1$, where a user would have to manually ensure that the meta-statement continues to be satisfied after new concepts have been added. Thirdly, conceptual relationships captured in a generator such as $g$ are easy to reuse and can foster interoperability between ontologies in the spirit of ontology design patterns. We close this section with more elaborate examples to demonstrate the benefits generators such as $g$ can provide. \subsection{Examples} \begin{example}[Composition] \label{introduction:example:composition} Assume we want to model typical roles in groups of social predatory animals. One such a role would be that of a hunter. A challenge for representing such knowledge is that different collective nouns are used for different animals, e.g. a group of lions is called a ``pride'', a group of wild dogs is called a ``pack'', a group of killer whales is called a ``pod'', etc. Therefore, a mechanism that can conveniently iterate over all these group formations would be beneficial. 
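The firing of the generator $g$ of Example~\ref{introduction:example:generator} can be sketched in a few lines of Python. The encoding below is a toy of our own making: axioms are tuples, and entailment is approximated by the transitive closure of told subsumptions rather than full DL reasoning:

```python
# Toy axiom encoding: ("sub", C, D) stands for C ⊑ D, and
# ("allChild", C) stands for C ⊑ ∀hasChild.C.
def entailed_subs(onto):
    """Transitive closure of the told subsumptions -- a naive stand-in
    for the entailment used in the generator semantics."""
    subs = {(ax[1], ax[2]) for ax in onto if ax[0] == "sub"}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(subs):
            for (c, d) in list(subs):
                if b == c and (a, d) not in subs:
                    subs.add((a, d))
                    changed = True
    return subs

def fire(onto):
    """One firing of g: {?X ⊑ Animal} -> {?X ⊑ ∀hasChild.?X}."""
    head = {("allChild", x) for (x, sup) in entailed_subs(onto)
            if sup == "Animal"}
    return set(onto) | head

O2 = {("sub", "Jaguar", "Animal"), ("sub", "Tiger", "Animal"),
      ("sub", "Lion", "Animal")}
O1 = fire(O2)  # adds ("allChild", C) for C in {Jaguar, Tiger, Lion}
```

Firing $g$ once over $\mathcal{O}_2$ adds the three $\conc{hasChild}$ axioms of $\mathcal{O}_1$; firing again adds nothing, i.e., for this non-recursive generator a fixpoint is reached immediately.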
Consider the following query $Q_1$: \begin{align} Q_1= \{?X & \sqsubseteq \conc{Animal}, \label{Q1:animal}\\ ?X &\sqsubseteq \exists \conc{eats}.\conc{Animal}, \label{Q1:carnivore}\\ ?X &\sqsubseteq \exists \conc{hunts}.\conc{Animal}, \label{Q1:predator}\\ ?Y &\sqsubseteq \conc{SocialGroup}, \label{Q1:group}\\ ?X &\sqsubseteq \exists \conc{socialisesIn}.?Y, \label{Q1:association}\\ ?Y &\sqsubseteq \exists \conc{hasMember}.?X,\\ &\conc{socialisesIn} \equiv \conc{hasMember}^{-} \label{Q1:inverse}\} \end{align} Lines \ref{Q1:animal}--\ref{Q1:predator} bind the variable $?X$ to a predatory animal. Line \ref{Q1:group} binds the variable $?Y$ to a type of social group and lines \ref{Q1:association}--\ref{Q1:inverse} associate a particular type of animal with its respective social group. Given the bindings for $?X$ and $?Y$ it is straightforward to express that a particular type of predator $?X$ is a hunter in its respective social group, namely: $?Y \sqsubseteq \exists \conc{hasHunter}.?X$. A generator such as in Example~\ref{introduction:example:generator} could capture this relationship: $$g_1 \colon Q_1 \rightarrow \{?Y \sqsubseteq \exists \conc{hasHunter}.?X \}$$ \end{example} \begin{example}[Extension] \label{introduction:example:extension} Extending generator $g_1$ from Example~\ref{introduction:example:composition} to capture more specialised knowledge is straightforward. Consider predatory ants of the family \conc{Formicidae}. These ants generally live in colonies with an elaborate social organisation consisting of workers, drones, queens, etc. First, we extend query $Q_1$ with the following axioms: \begin{align} Q_2 = Q_1 \cup \{?X &\sqsubseteq \conc{Formicidae}, \label{Q2:formicidae}\\ ?Z &\sqsubseteq ?Y \label{Q2:subgroupOfGroup}\\ ?X &\sqsubseteq \exists \conc{socialisesIn}.?Z\}\label{Q2:scope} \end{align} Axiom \ref{Q2:formicidae} requires $?X$ to bind to a type of $\conc{Formicidae}$, e.g. $?X = \conc{SafariAnt}$. 
According to query $Q_1$, the variable $?Y$ binds to a general $\conc{SocialGroup}$, e.g. $?Y = \conc{AntColony}$. Then, axiom \ref{Q2:subgroupOfGroup} binds $?Z$ to a more specialised subgroup of a $?Y$. Finally, axiom \ref{Q2:scope} ensures that this subgroup $?Z$ is associated with $?X$. So for $?X = \conc{SafariAnt}$ we get $?Z = \conc{SafariAntColony}$. Next, we can specify the generator to add all desired axioms based on matches of query $Q_2$ specialised for ants: \begin{align*} g_2 \colon Q_2 &\rightarrow\\ \{?Z &\sqsubseteq \exists \conc{hasHunter}.?X,\\ ?Z &\sqsubseteq \exists \conc{hasWorker}.?X,\\ ?Z &\sqsubseteq \exists \conc{hasDrone}.?X,\\ ?Z &\sqsubseteq \exists \conc{hasQueen}.?X \} \end{align*} Note how the body and head of generator $g_1$ from Example~\ref{introduction:example:composition} have been reused and extended only by set unions. \end{example} \begin{example}[Negative Guards] \label{introduction:example:negativeGuards} Often, general relationships are subject to exceptions. While most ants hunt and feed cooperatively, there are some genera of ants, e.g. Myrmecia, that do not. Therefore, $g_2$ in Example~\ref{introduction:example:extension} would generate an undesired axiom, namely $\conc{MyrmeciaAntColony} \sqsubseteq \conc{hasHunter}.\conc{MyrmeciaAnt}$. This motivates guards in the body of generators that may not only specify positive constraints but also negative ones: \begin{align*} Q_3 = Q_2 \cup \{\naf~~?X &\sqsubseteq \conc{MyrmeciaAnt},\\ \naf~~?Z &\sqsubseteq \conc{MyrmeciaAntColony}\} \end{align*} \begin{align*} g_3 \colon Q_3 &\rightarrow\\ \{?Z &\sqsubseteq \exists \conc{hasHunter}.?X,\\ ?Z &\sqsubseteq \exists \conc{hasWorker}.?X,\\ ?Z &\sqsubseteq \exists \conc{hasDrone}.?X,\\ ?Z &\sqsubseteq \exists \conc{hasQueen}.?X \} \end{align*} One might argue that the effect of negative guards could also be achieved by positive guards using negated concepts in DL, i.e. 
$?X \sqsubseteq \lnot \conc{MyrmeciaAnt}$ instead of $\naf ?X \sqsubseteq \conc{MyrmeciaAnt}$. However, this approach would necessitate the introduction of a potentially large number of axioms of type $?X \sqsubseteq \lnot \conc{MyrmeciaAnt}$ in the given ontology. This can be avoided by using $g_3$. Another advantage of negative guards is the possibility of explicitly expressing default assumptions for lack of better knowledge. An ant colony of a certain genus usually consists of only ants of this genus, e.g. \begin{equation} \conc{SafariAntColony} \sqsubseteq \forall \conc{hasMember}.\conc{SafariAnt}. \label{P5:universal} \end{equation} However, some genera of ants are social parasites that enslave other ant species. In such a case, the default assumption about the homogeneity of an ant colony is wrong and the axiom \ref{P5:universal} should not be added. \begin{align*} Q_4 = \{?X &\sqsubseteq \conc{Ant},\\ ?Y &\sqsubseteq \conc{AntColony},\\ ?Y &\sqsubseteq \exists \conc{hasMember}.?X,\\ ?Z &\sqsubseteq \conc{Ant},\\ ?X &\sqsubseteq \lnot ?Z,\\ \naf~~?X &\sqsubseteq \exists \conc{enslaves}.?Z,\\ \naf~~?Y &\sqsubseteq \exists \conc{hasMember}.?Z\} \end{align*} \begin{align*} g_4 \colon Q_4 \rightarrow \{?Y \sqsubseteq \forall \conc{hasMember}.?X\} \end{align*} \end{example} \begin{example}[Recursion] Contagious diseases may be transmitted between animals sharing a habitat. Overlapping habitats of infected animals may result in the propagation of diseases across habitats. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{pic/habitat.png} \caption{Overlapping Habitats} \label{overlap} \end{center} \end{figure} Assume there is an overlap between habitats $H_1, H_2, H_3$ such that there is no overlap between $H_1$ and $H_3$, $H_{12}$ describes the overlap between $H_1$ and $H_2$, and $H_{23}$ describes the overlap between $H_2$ and $H_3$ (see Figure~\ref{overlap}).
Then, a disease infected animal living in $H_1$ may affect an animal in $H_2$ which in turn may affect an animal in $H_3$. Such an iterative process may be captured by repeatedly applying a single generator. Consider the following query: \begin{align} Q_5 = \{?X &\sqsubseteq \conc{Animal},\\ ?Y &\sqsubseteq \conc{Animal},\\ ?D &\sqsubseteq \conc{ContagiousDisease},\\ ?H &\sqsubseteq \conc{Habitat},\\ ?X &\sqsubseteq \exists \conc{suffersFrom}.?D, \label{Q:recursive:suffering}\\ ?Y &\sqsubseteq \forall \conc{isSusceptibleTo}.?D, \label{Q:recursive:susceptible}\\ ?X &\sqsubseteq \exists \conc{livesIn}.?H,\label{Q:recursive:sharedHabitat1}\\ ?Y &\sqsubseteq \exists \conc{livesIn}.?H\label{Q:recursive:sharedHabitat2}\} \end{align} Axioms \ref{Q:recursive:suffering} and \ref{Q:recursive:susceptible} express the requirements for a disease to be transmitted between animals while axioms \ref{Q:recursive:sharedHabitat1} and \ref{Q:recursive:sharedHabitat2} capture the requirement of a shared environment. Using query $Q_5$, we can represent the propagation of a disease between animals across habitats: \begin{align*} g_5 & : Q_5 \rightarrow \{?Y \sqsubseteq \exists \conc{suffersFrom}.?D \} \end{align*} Clearly, the generation of an instance of $?Y \sqsubseteq \exists \conc{suffersFrom}.?D$ could yield a new match for $Q_5$ in the body of $g_5$. Therefore, generator $g_5$ has to be applied repeatedly until a fixpoint is reached. \end{example} \begin{example}[Encapsulation] Inspecting the queries $Q_1, Q_2$, and $Q_3$ in Examples~\ref{introduction:example:composition}--\ref{introduction:example:negativeGuards}, it is apparent that different parts in the queries correspond to different conceptual ideas. For example, in query $Q_1$ the axioms can be grouped into ones about predators and others about social groups. 
Such a grouping would provide valuable information for an ontology engineer to indicate conceptual relationships between certain sets of axioms: \begin{equation*} \left. \begin{aligned} \{?X & \sqsubseteq \conc{Animal},\\ ?X &\sqsubseteq \exists \conc{eats}.\conc{Animal},\\ ?X &\sqsubseteq \exists \conc{hunts}.\conc{Animal}\} \end{aligned} \right\} \text{Predator} \end{equation*} \begin{equation*} \left. \begin{aligned} \{?Y &\sqsubseteq \conc{SocialGroup}, \\ ?X &\sqsubseteq \exists \conc{socialisesIn}.?Y,\\ ?Y &\sqsubseteq \exists \conc{hasMember}.?X,\\ &\conc{socialisesIn} \equiv \conc{hasMember}^{-}\} \end{aligned} \right\} \text{Social Group} \end{equation*} Reasonable Ontology Templates (OTTR, for short) \cite{OTTR-ISWC18,dl-templates} introduce a framework for indicating such conceptual relationships. A template is defined as a named ontology with a set of variables. The variables can be instantiated with concept and role expressions to yield a set of valid axioms. Moreover, templates may be composed to give rise to more complex templates. Choosing intention-revealing names for templates and composing appropriately named templates may improve ontology comprehension by making the structural design of an ontology visible. A template, i.e. a set of axioms with variables, can also be interpreted as a query, asking for concept and role expressions in an existing ontology that match the pattern represented by the template. These expressions can then, in principle, be fed into a different template to produce new axioms. This idea captures conceptual interdependencies between templates or, more generally, axiomatic patterns. Clearly, it is straightforward to integrate OTTR as part of a preprocessing step into our rule language. This has not only the potential to foster the reuse of conceptually related sets of axioms in an intention-revealing manner, but can also further improve the maintainability of generators by the principle of information hiding.
A change in a template will be propagated automatically to all instances of the use of the template. \end{example} \section{Generators and GBoxes} \label{sec:syntax-semantics} In this section we define the syntax and semantics of generators and GBoxes and discuss some examples. \begin{definition} \label{generator} A \emph{generator} $g$ is an expression of the form $T_B(V_B)\rightarrow T_H(V_H)$, for $T_B(V_B), T_H(V_H)$ templates with $V_H \subseteq V_B$. $T_B$ and $T_H$ are respectively called the \emph{body} and \emph{head} of $g$, and we write $B(g)$ and $H(g)$ to denote them. \end{definition} \begin{example} \label{ex:animal-generator} $g \colon \{?X \sqsubseteq \conc{Animal}\} \rightarrow \{?X \sqsubseteq \forall \conc{hasChild}.?X\}$ is a generator, with a single variable $?X$. \end{example} Next, we define the semantics for generators and sets of generators based on entailment to ensure that generators behave independently of the syntactic form of an ontology. In this choice we diverge from the work done on OTTR \cite{OTTR-ISWC18}, as OTTR template semantics is defined syntactically. \begin{definition} \label{sec:generator:satisfaction} Let $g \colon T_B(V_B)\rightarrow T_H(V_H)$ be a generator. A theory $\mathcal{O}$ \emph{satisfies $g$ w.r.t.} $\mathcal{L}$ if, for every $\mathcal{L}$-substitution $\sigma$ such that $\mathcal{O} \models T_B\sigma$, we have $\mathcal{O} \models T_H\sigma$. \end{definition} \begin{example} Consider the generator $g$ from \Cref{ex:animal-generator}. The theory $\mathcal{O}_1 = \{ \conc{Turtle} \sqsubseteq \conc{Mammal}, \conc{Mammal} \sqsubseteq \conc{Animal}, \conc{Turtle} \sqsubseteq \forall \conc{hasChild}.\conc{Turtle}, \conc{Mammal} \sqsubseteq \forall \conc{hasChild}.\conc{Mammal} \}$ satisfies $g$, while the theory $\mathcal{O}_2 = \{ \conc{Turtle} \sqsubseteq \conc{Mammal}, \conc{Mammal} \sqsubseteq \conc{Animal} \}$ does not. \end{example} A set $G$ of generators is called a \emph{GBox}.
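To make the satisfaction check concrete, the following Python sketch replays the example above. It is an illustration only, under strong simplifying assumptions: axioms are encoded as tuples, the generator is hard-coded, and entailment is approximated by strict reachability over asserted atomic subsumptions (so trivial entailments such as $\conc{Animal} \sqsubseteq \conc{Animal}$ are not considered, matching the example); an actual implementation would call a DL reasoner.

```python
def entails_sub(onto, c, d):
    # O |= c subsumed-by d, approximated by strict reachability over asserted
    # subsumptions; the trivial c subsumed-by c is NOT captured (simplification)
    seen, stack = set(), [c]
    while stack:
        n = stack.pop()
        for ax in onto:
            if ax[0] == "sub" and ax[1] == n and ax[2] not in seen:
                if ax[2] == d:
                    return True
                seen.add(ax[2])
                stack.append(ax[2])
    return False

def satisfies(onto, lang):
    # O satisfies g iff every ?X with O |= ?X subsumed-by Animal also has
    # O |= ?X subsumed-by forall hasChild.?X, encoded here as ("allchild", ?X)
    return all(("allchild", x) in onto
               for x in lang if entails_sub(onto, x, "Animal"))

L = {"Turtle", "Mammal", "Animal"}
O1 = {("sub", "Turtle", "Mammal"), ("sub", "Mammal", "Animal"),
      ("allchild", "Turtle"), ("allchild", "Mammal")}
O2 = {("sub", "Turtle", "Mammal"), ("sub", "Mammal", "Animal")}
print(satisfies(O1, L), satisfies(O2, L))  # True False
```

Under these assumptions, $\mathcal{O}_1$ satisfies $g$ while $\mathcal{O}_2$ fails on the match $?X \mapsto \conc{Turtle}$, which matches the body but not the head.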
Furthermore, we define the set $B(G)$ (resp. $H(G)$) as the set of all bodies (resp. heads) occurring in $G$, i.e., they are sets of ontologies. \begin{definition}\label{def:expansion} Let $G$ be a GBox, $\mathcal{O}$ an ontology, and $\mathcal{L}$ a language. The \emph{expansion of $\mathcal{O}$ and $G$ in $\mathcal{L}$}, written $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$, is the set of all theories $\mathcal{O}'$ such that \begin{itemize} \item[(1)] $\mathcal{O}' \models \mathcal{O}$, \item[(2)] $\mathcal{O}'$ satisfies every $g \in G$ w.r.t.~$\mathcal{L}$, and \item[(3)] $\mathcal{O}'$ is entailment-minimal, i.e. there is no $\mathcal{O}''$ strictly weaker than $\mathcal{O}'$ satisfying (1) and (2). \end{itemize} \end{definition} We call the theories in $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ \emph{expansions}. This definition mirrors the model-theoretic semantics of Datalog, with logical consequence taking the place of set inclusion. Entailment-minimality is used rather than subset-minimality because axioms can be rewritten into forms that are subset-incomparable yet logically comparable. For example, consider $\{A \sqsubseteq B, B\sqsubseteq C\}$ and $\{A \sqsubseteq C\}$: the second one is not a subset of the first one, but it is weaker. \begin{example} Recall the generator $g$ from \Cref{ex:animal-generator}, and let $G$ be a GBox consisting of $g$ alone. Let $\mathcal{O} = \{ \conc{Turtle} \sqsubseteq \conc{Mammal}, \conc{Mammal} \sqsubseteq \conc{Animal} \}$, and let $\mathcal{L}$ be the set of all concept names. Then $\{ \conc{Turtle} \sqsubseteq \conc{Mammal}, \conc{Mammal} \sqsubseteq \conc{Animal}, \conc{Turtle} \sqsubseteq \forall \conc{hasChild}.\conc{Turtle}, \conc{Mammal} \sqsubseteq \forall \conc{hasChild}.\conc{Mammal} \} \in \mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$. \end{example} \subsection{GBoxes with negation}\label{sec:negation} In this section we introduce negation-as-failure to GBoxes.
We extend the notion of expansion from Section~\ref{sec:syntax-semantics}, define suitable notions of semi-positive GBoxes and semantics for stratified GBoxes, and prove the corresponding uniqueness results. To do so, a generator is now a rule of the form $T_B^+(V_1), \naf T_B^-(V_2)\rightarrow T_H(V_3)$, for $T_B^+(V_1), T_B^-(V_2), T_H(V_3)$ templates with $V_3 \subseteq V_1\cup V_2$. For the sake of notational simplicity, we restrict ourselves here to generators with at most one template in the negative body. It is worth noting, however, that all definitions and results in this section are immediately transferable to generators with multiple templates in the negative bodies (multiple templates in the positive body can of course be simply merged into a single template). The following definition, together with Definition~\ref{def:expansion} of $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$, provides a minimal model semantics for GBoxes with negation: \begin{definition} An ontology $\mathcal{O}$ \emph{satisfies a generator $g:T_B^+(V_1), \naf T_B^-(V_2)\rightarrow T_H(V_3)$ w.r.t. }$\mathcal{L}$ if, for every $\sigma \in \mathsf{eval}(T_B^+, \mathcal{O}, \mathcal{L}) \setminus \mathsf{eval}(T_B^-, \mathcal{O}, \mathcal{L})$, we have $\mathcal{O} \models T_H\sigma$. \end{definition} Unsurprisingly, adding negation results in the loss of uniqueness of the expansion $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ (cf. Theorem~\ref{thm-expansion-equiv}), as illustrated by the following example. \begin{example}\label{ex:nonunique-stratified} Let $\mathcal{L}=\{\conc{A}, \conc{B}, \conc{C}, \conc{s}\}$, $\mathcal{O}=\{\conc{A}(\conc{s})\}$ and $G=\{\conc{A}(?X), \naf \conc{B}(?X)\rightarrow \conc{C}(?X) \}$. Then $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ contains the two non-equivalent expansions $\{\conc{A}(\conc{s}), \conc{B}(\conc{s})\}$ and $\{\conc{A}(\conc{s}), \conc{C}(\conc{s})\}$.
\end{example} Next, we extend the definition of the 1-step expansion operator from Definition~\ref{def:1step-expansion} to support negation. However, as Example~\ref{ex:fixpoint-nomodel} will show, a fixpoint does not always correspond to an expansion in $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$. \begin{definition}\label{def:inflationary-operator} For a GBox $G$ with negation, the \emph{1-step expansion of $\mathcal{O}$ and $G$ in $\mathcal{L}$}, written $\mathsf{1Exp}^-(G, \mathcal{O}, \mathcal{L})$, is defined as follows: $$\mathsf{1Exp}^-(G, \mathcal{O}, \mathcal{L}) = \mathcal{O}\cup \bigcup_{T_B^+, \naf T_B^-\rightarrow T_H \in G}\{ T_H\sigma \mid \sigma \in \mathsf{eval}(T_B^+, \mathcal{O}, \mathcal{L})\setminus \mathsf{eval}(T_B^-, \mathcal{O}, \mathcal{L})\}.$$ \end{definition} \begin{example}\label{ex:fixpoint-nomodel} Consider the ontology $\mathcal{O}=\{\conc{Single}\sqsubseteq \conc{Person}, \conc{Spouse}\sqsubseteq \conc{Person}, \conc{Single}\sqsubseteq \neg \conc{Spouse}, \conc{Person}(\conc{Maggy})\}$ and the following GBox $G$: $$\begin{array}{rrcl} G=\{&\{\conc{Person}(?X)\},\naf \{\conc{Single}(?X)\}&\rightarrow \{\conc{Spouse}(?X)\},\\ &\{\conc{Person}(?X)\}, \naf \{\conc{Spouse}(?X)\}&\rightarrow \{\conc{Single}(?X)\}\} \end{array}$$ The set $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ contains the two non-equivalent ontologies $\mathcal{O} \cup \{\conc{Single}(\conc{Maggy})\}$ and $\mathcal{O}\cup \{\conc{Spouse}(\conc{Maggy})\}$. Furthermore, the iterated fixpoint $(\mathsf{1Exp}^-)^*(G,\mathcal{O},\mathcal{L})$ is $\mathcal{O} \cup \{\conc{Single}(\conc{Maggy}), \conc{Spouse}(\conc{Maggy})\}$; this is, however, not an ontology in $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ as it is not entailment-minimal. \end{example} A natural question that arises is whether we can identify or even characterize GBoxes with negation that have a unique expansion. To this end, we define suitable notions of semi-positive GBoxes and stratified negation.
These are based on the notion of multiple templates affecting others, as formalized next. \begin{definition} Let $\mathcal{L}$ be a language, $S=\{S_1,\ldots,S_k\}$ a set of templates, $\mathcal{O}$ an ontology, and $T$ a template. We say that \emph{$S$ activates $T$ with respect to $\mathcal{O}$ and $\mathcal{L}$} if there exist $\mathcal{L}$-substitutions $\sigma_1,\ldots,\sigma_k$ such that $\mathcal{O}\cup \bigcup_{i=1}^{k} S_i\sigma_i\models T\sigma$ for some $\mathcal{L}$-substitution $\sigma$. For brevity we omit $\mathcal{O}$ and $\mathcal{L}$ if they are clear from the context. \end{definition} In contrast to standard Datalog with negation, the entailment of a template in the body of a generator does not solely depend on a single generator with a corresponding head firing. Instead, multiple generators might need to fire and interact with $\mathcal{O}$ in order to entail a body template. Hence we use the set $S$ of templates in the definition of activation. \begin{example} Consider the GBox containing $g_1\colon T_1(?X)\rightarrow \{?X\sqsubseteq \conc{A}\}, g_2\colon T_2(?Y)\rightarrow \{?Y\sqsubseteq \conc{B}\}$ and $g_3\colon \naf \{?Z\sqsubseteq \conc{A} \sqcap \conc{B}\}\rightarrow T_3(?Z)$. Then $H(g_1)$ and $H(g_2)$ activate $\{?Z\sqsubseteq \conc{A}\sqcap \conc{B}\}$ with respect to any $\mathcal{O}$ and $\mathcal{L}$, indicating that the firing of $g_3$ depends on the combined firing of $g_1$ and $g_2$. \end{example} Activation can then be used to define a notion of semi-positive GBoxes, which is analogous to semi-positive Datalog programs. \begin{definition}[Semi-positive GBoxes] Let $G$ be a GBox with negation, $\mathcal{L}$ a language, and $\mathcal{O}$ an ontology. $G$ is called \emph{semi-positive w.r.t. $\mathcal{O}$ and $\mathcal{L}$} if no negative body template $T_B^-$ of a generator $g\in G$ is activated by $H(G)$. \end{definition} As seen in \cref{ex:nonunique-stratified}, even semi-positive GBoxes may result in multiple non-equivalent expansions.
In that example, neither the ontology $\mathcal{O}$ nor any possible firing of $G$ can yield $\conc{B}(\conc{s})$. As such, we wish to restrict the theories in $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ to contain only facts derivable from $\mathcal{O}$ and $G$. To that end, the following definition suitably restricts the entailment of expansions. \begin{definition}\label{def:justifiable} Let $G$ be a GBox, $\mathcal{O}$ an ontology, and $\mathcal{L}$ a finite language. We say that an expansion $\mathcal{O}'\in \mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ is \emph{justifiable w.r.t. $(G,\mathcal{O},\mathcal{L})$} if the following holds: if $\mathcal{O}'\models T\sigma$ for some template $T$ and substitution $\sigma$, then $\mathcal{O}\models T\sigma$ or $H(G)$ activates $T\sigma$ with respect to $\mathcal{O}$ and $\mathcal{L}$. We write simply $\mathcal{O}'$ is \emph{justifiable} when $G$, $\mathcal{O}$, and $\mathcal{L}$ are clear from the context. \end{definition} Using this notion, we can show that, indeed, a GBox being semi-positive implies that its semantics is unambiguous when restricted to justifiable expansions. \begin{theorem}\label{thm:semipositive} Let $G$ be a semi-positive GBox, $\mathcal{O}$ an ontology, and $\mathcal{L}$ a finite language. Then the fixpoint $(\mathsf{1Exp}^-)^*(G,\mathcal{O},\mathcal{L})$ exists, is the unique fixpoint of $\mathsf{1Exp}^-$, and is contained in $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$. \end{theorem} \begin{proof} Since $\mathsf{1Exp}^-(G,\mathcal{O},\mathcal{L})$ is an inflationary operator and $\mathcal{L}$ is finite, there exists an iterated fixpoint $\mathcal{O}^*=(\mathsf{1Exp}^-)^*(G,\mathcal{O},\mathcal{L})$. By construction, $\mathcal{O}^*$ satisfies $\mathcal{O}$ and all generators $g\in G$ and is justifiable w.r.t. $(G,\mathcal{O},\mathcal{L})$.
We simultaneously prove uniqueness and membership in $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ by showing that $\mathcal{O}'\models \mathcal{O}^*$ for an arbitrary justifiable expansion $\mathcal{O}'\in\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$. Let $\mathcal{O}_0=\mathcal{O}$ and $\mathcal{O}_i=\mathsf{1Exp}^-(G,\mathcal{O}_{i-1},\mathcal{L})$ for $i\geq 1$; then $\mathcal{O}=\mathcal{O}_0\subseteq \ldots \subseteq \mathcal{O}_k=\mathcal{O}^*$ for some $k$. Assume $\mathcal{O}_1\models T\sigma$ for some $\mathcal{L}$-substitution $\sigma$ and $T\in H(G)$. Then either $\mathcal{O}\models T\sigma$ (in which case $\mathcal{O}'\models T\sigma$) or there exists a generator $$ T_B^+,\naf T_B^-\rightarrow T$$ such that $\sigma\in \mathsf{eval}(T_B^+,\mathcal{O}, \mathcal{L})$ and $\sigma\not\in \mathsf{eval}(T_B^-,\mathcal{O}, \mathcal{L})$. Since $G$ is semi-positive, $H(G)$ cannot activate $T_B^-$, i.e., there exists no set of generators that, together with the ontology $\mathcal{O}$, could fire in a way that would entail $T_B^-\sigma$. Since $\mathcal{O}'$ is entailment-minimal and justifiable, it must be the case that $\mathcal{O}'\not\models T_B^-\sigma$ and hence $\mathcal{O}'\models T\sigma$. Thus, $\mathcal{O}'\models \mathcal{O}_1$. The same argument can be applied inductively to show that $\mathcal{O}'\models \mathcal{O}_i$ for $i\geq 1$, thus showing $\mathcal{O}'\models \mathcal{O}^*$. Since $\mathcal{O}'$ was chosen arbitrarily, this proves both the uniqueness and membership claims. \end{proof} The following is a direct corollary of the proof of Theorem~\ref{thm:semipositive}. \begin{corollary} Let $G$ be a semi-positive GBox, $\mathcal{O}$ an ontology, and $\mathcal{L}$ a finite language. All justifiable ontologies in $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ are logically equivalent. \end{corollary} For a GBox to be semi-positive is a very strong requirement.
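The behaviour of the iterated operator $(\mathsf{1Exp}^-)^*$ can be illustrated with a small Python sketch of Example~\ref{ex:fixpoint-nomodel}. The encoding is deliberately crude and entirely an assumption of the sketch: facts are ground unary assertions, entailment is plain set membership plus the two asserted subsumptions $\conc{Single}\sqsubseteq \conc{Person}$ and $\conc{Spouse}\sqsubseteq \conc{Person}$, and generators range over a fixed individual set.

```python
def holds(concept, ind, facts):
    # entailment as membership, plus Single/Spouse subsumed by Person
    if (concept, ind) in facts:
        return True
    return concept == "Person" and (
        ("Single", ind) in facts or ("Spouse", ind) in facts)

def one_step(gbox, facts, individuals):
    # 1Exp^-: fire every generator whose positive body holds and whose
    # negative body fails, evaluated against the CURRENT fact set
    new = set(facts)
    for pos, neg, head in gbox:
        for ind in individuals:
            if all(holds(c, ind, facts) for c in pos) and \
               not any(holds(c, ind, facts) for c in neg):
                new.update((c, ind) for c in head)
    return new

def iterated_fixpoint(gbox, facts, individuals):
    while True:
        nxt = one_step(gbox, facts, individuals)
        if nxt == facts:
            return facts
        facts = nxt

O = {("Person", "Maggy")}
G = [({"Person"}, {"Single"}, {"Spouse"}),   # Person, not Single -> Spouse
     ({"Person"}, {"Spouse"}, {"Single"})]   # Person, not Spouse -> Single
result = iterated_fixpoint(G, O, {"Maggy"})
print(sorted(result))
```

Because both rules fire against the same fact set before either conclusion is recorded, the fixpoint contains both $\conc{Single}(\conc{Maggy})$ and $\conc{Spouse}(\conc{Maggy})$, reproducing the non-minimality observed in Example~\ref{ex:fixpoint-nomodel}; for a semi-positive GBox this overshoot cannot occur.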
Next, we introduce the notion of a stratified GBox: this does not ensure that all expansions are equivalent, but it ensures that we can determine one of its expansions by expanding strata in the right order. Again, we use $H(G)$ to denote the set of templates in heads of generators in $G$, and $B(G)$ for the set of templates in (positive or negative) bodies of generators in $G$. \begin{definition}[Stratification]\label{def:stratification} Let $\mathcal{L}$ be a language and $\mathcal{O}$ an ontology. A GBox $G$ is \emph{stratifiable w.r.t. $\mathcal{O}$ and $\mathcal{L}$} if there exists a function $v: H(G)\cup B(G) \rightarrow \mathbb N$ such that, for every generator $T_B^+, \naf T_B^-\rightarrow T_H\in G$ the following holds: \begin{enumerate} \item $v(T_H)\geq v(T_B^+)$, \item $v(T_H)> v(T_B^-)$, \item for every $\subseteq$-minimal $S_1 \subseteq H(G)$ that activates $T_B^+$, $v(T^+_B)\geq \max\limits_{S'\in S_1} v(S')$, \item for every $\subseteq$-minimal $S_2 \subseteq H(G)$ that activates $T_B^-$, $v(T^-_B)> \max\limits_{S'\in S_2} v(S')$. \end{enumerate} \end{definition} The first two conditions in the previous definition are analogous to stratified Datalog: intuitively, a body template must be evaluated before (strictly before, in the case of a negative body) the head template. The last two conditions tailor the stratification to generators, which allow for more interaction amongst their components. As opposed to Datalog, multiple heads combined might be needed to entail a body template. Thus, a body template must be placed in a stratum no lower than (strictly higher than, for negative bodies) that of any possible set of templates that could entail it. Following this definition, a stratification $v$ of a GBox $G$ w.r.t. an ontology $\mathcal{O}$ gives rise to a partition $G_v^1,\ldots, G_v^k$ of $G$, where each generator $g:T_B^+, \naf T_B^-\rightarrow T_H$ is in the stratum $G_v^{v(T_H)}$.
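The cycle criterion on the precedence graph, formalized in the next paragraph, suggests a simple stratifiability check: no negative edge may lie on a cycle. The following Python sketch assumes the graph has already been constructed and is given as an explicit edge list (the graph construction itself is the hard part, since it requires activation reasoning); all node names are illustrative.

```python
def reachable(edges, start):
    # all nodes reachable from start via directed edges (any sign)
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        for (u, v, _sign) in edges:
            if u == n and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def stratifiable(edges):
    # a negative edge (u, v) lies on a cycle iff u is reachable from v
    return not any(sign == "-" and u in reachable(edges, v)
                   for (u, v, sign) in edges)

# T1 --(-)--> T2 --(+)--> T1: a cycle through a negative edge
print(stratifiable([("T1", "T2", "-"), ("T2", "T1", "+")]))  # False
# T1 --(-)--> T2, T2 --(+)--> T3: acyclic, hence stratifiable
print(stratifiable([("T1", "T2", "-"), ("T2", "T3", "+")]))  # True
```

The check exploits that a negative edge $(u,v)$ closes a cycle exactly when $u$ is reachable from $v$, which is what \texttt{stratifiable} tests edge by edge.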
For a GBox $G$, an ontology $\mathcal{O}$ and a language $\mathcal{L}$, we can define the \emph{precedence graph} $\mathcal G_{G, \mathcal{O},\mathcal{L}}$ as follows: nodes are the templates occurring in $G$ and \begin{enumerate} \item if $T_B^+,\naf T_B^-\rightarrow T_H$ is in $G$, then $\mathcal G_{G,\mathcal{O},\mathcal{L}}$ contains the positive edge $(T_B^+,T_H)$ and the negative edge $(T_B^-,T_H)$; \item for a template $T$ that occurs in the positive (resp. negative) body of a generator and any $\subseteq$-minimal set $\{S_1,\ldots,S_k\}\subseteq H(G)$ that activates $T$ w.r.t. $\mathcal{O}$ and $\mathcal{L}$, $\mathcal G_{G,\mathcal{O},\mathcal{L}}$ contains the positive (resp. negative) edges $(S_i,T)$ for $1\leq i \leq k$. \end{enumerate} We then get the following characterization of stratifiable GBoxes, the proof of which is entirely analogous to the Datalog case. \begin{proposition} Let $\mathcal{L}$ be a language and $\mathcal{O}$ an ontology. A GBox $G$ is stratifiable w.r.t. $\mathcal{O}$ and $\mathcal{L}$ iff its precedence graph $\mathcal G_{G,\mathcal{O},\mathcal{L}}$ has no cycle with a negative edge. \end{proposition} Given such a stratification, we can thus define a semantics for stratified negation. \begin{definition}[Stratified semantics] Let $\mathcal{O}$ be an ontology, $\mathcal{L}$ a language, and $G$ a GBox stratifiable w.r.t. $\mathcal{O}$ and $\mathcal{L}$. For a stratification $v$ of $G$ and the induced partition $G_v^1,\ldots, G_v^k$ of $G$, we define $\mathcal{O}_v^{\mathsf{strat}}(G,\mathcal{O},\mathcal{L})$ as follows: \begin{enumerate} \item $\mathcal{O}_v^0=\mathcal{O}$, \item $\mathcal{O}_v^j=(\mathsf{1Exp}^-)^*(G_v^{j},\mathcal{O}_v^{j-1},\mathcal{L})$ for $1\leq j\leq k$, \item $\mathcal{O}_v^{\mathsf{strat}}(G,\mathcal{O},\mathcal{L})=\mathcal{O}_v^k$. \end{enumerate} \end{definition} \begin{theorem} Let $\mathcal{O}$ be an ontology, $\mathcal{L}$ a finite language, and $G$ a GBox stratifiable w.r.t. $\mathcal{O}$ and $\mathcal{L}$.
Then $\mathcal{O}_v^{\mathsf{strat}}(G,\mathcal{O},\mathcal{L})$ exists, is independent of the choice of $v$, and is contained in $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$. \end{theorem} \begin{proof} Let $G_v^1,\ldots, G_v^k$ be the partitioning of $G$ w.r.t. a stratification $v$. By Definition~\ref{def:stratification}, each $G_v^i$ is a semi-positive GBox. Hence Theorem~\ref{thm:semipositive} guarantees the existence of $\mathcal{O}_v^{\mathsf{strat}}(G,\mathcal{O},\mathcal{L})$. By construction, $\mathcal{O}_v^{\mathsf{strat}}(G,\mathcal{O},\mathcal{L})$ satisfies $\mathcal{O}$ and all generators in $G$. Furthermore, there cannot exist a strictly weaker ontology $\mathcal{O}'$ with $\mathcal{O}_v^{\mathsf{strat}}(G,\mathcal{O},\mathcal{L})\models \mathcal{O}'$ that satisfies $\mathcal{O}$ and all generators in $G$, as this would contradict the entailment-minimality of the $\mathcal{O}_v^i$. The proof of the independence of the stratification $v$ is entirely analogous to the Datalog case: the strongly connected components of $\mathcal G_{G,\mathcal{O},\mathcal{L}}$ provide the most granular stratification, which can then be used to prove the equivalence of all stratifications (cf. \cite{DBLP:books/aw/AbiteboulHV95} for a proof for stratified Datalog). \end{proof} \begin{remark}\label{rem:unique-nonstratified} It is worth noting that, although the stratified semantics provides a unique model, stratified GBoxes do not necessarily have a unique expansion. For example, the GBox from Example~\ref{ex:nonunique-stratified} is stratifiable yet has multiple distinct expansions. Moreover, just as in Datalog, there exist nonstratified GBoxes that have a unique expansion. \end{remark} \section{Open problems} \subsection{Finite representability} In general, the semantics for our GBoxes is such that the expansion of a GBox and an ontology need not be finite. To control this, we currently introduce a finite language as a separate concept.
This problem, however, raises the following interesting questions: \begin{itemize} \item If $\mathcal{L}$ is finite, so is $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ for any $G$ and $\mathcal{O}$. Are there other ways to ensure that $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ is finitely representable, i.e., contains a (finite) ontology? \item Given $\mathcal{L}, G, \mathcal{O}$, can we decide whether $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ is finitely representable? \end{itemize} More broadly, the idea behind GBoxes is to capture and represent repeating patterns. To this end, we wish to define a suitable notion of ``nontrivial consequences'' that a template should capture and that a generator should generate new axioms from. \subsection{Entailment problems on KBs represented by GBoxes} In this paper we showed, among other things, how to compute an expansion given a GBox and an ontology. While doing so clearly allows us to perform standard reasoning tasks on the new ontology, a question of interest is whether and when reasoning on the input ontology and GBox can provide gains in reasoning efficiency. In particular, a GBox-specified ontology can have, if an expansion is computed, many very similar axioms. Reasoning on the generators that produced them directly ought to be more efficient in such cases. \subsection{Other entailment relations} So far, we have only considered semantic entailment for generators: evaluating a template as a query relies on a semantic notion of querying. This has the obvious benefit of taking ontology reasoning into account while querying. Consider the ontology $\mathcal{O}=\{\conc{A} \sqsubseteq \conc{B}, \conc{B}\sqsubseteq \conc{C}\}$ and the query $?X\sqsubseteq \conc{C}$. Then the evaluation of this query over $\mathcal{O}$ would include the concepts $\conc{A}$ and $\conc{B}$.
However, it would also include a multitude of possibly unwanted, redundant query results, e.g., $\{\conc{A}\sqcap \conc{A}, \conc{A}\sqcap \conc{A}\sqcap \conc{A}, \conc{C}, \conc{B}\sqcap \conc{A}, \ldots\}$. Restricting query evaluation to syntactic entailment addresses the issue of infinite (potentially redundant) results at the cost of ontology reasoning. In the above example, the syntactic evaluation of $?X\sqsubseteq \conc{C}$ over $\mathcal{O}$ is $\{\conc{B}\}$. This is clearly highly restrictive and likely not always desirable. Finding suitable entailment relations between these two extremes, ones that cull the amount of unwanted, redundant results while retaining the desirable semantic behavior, would be highly beneficial to GBoxes and is as such an important direction for future work. \subsection{Stable model semantics} Current work on GBoxes with negation has focused on semantics where a unique expansion can be distinguished. For unstratified programs, however, such restrictions are in general no longer possible. Thus we would like to adapt a form of stable model semantics to GBoxes to be able to clearly characterize the possible expansions. \section{Preliminaries} Let $N_I$, $N_C$, and $N_R$ be sets of \emph{individual}, \emph{concept}, and \emph{role names}, each containing a distinguished subset of \emph{individual}, \emph{concept}, and \emph{role variables} $V_I$, $V_C$, and $V_R$. A \emph{concept} (resp. \emph{role}) is either a concept name (resp. role name) or a concept expression (resp. role expression) built using the usual DL constructors \cite{DBLP:conf/dlog/2003handbook}. Since we do not distinguish between TBoxes and ABoxes, an \emph{axiom} is either an assertion of the form $\conc{C}(\conc{a})$ or $\conc{R}(\conc{a},\conc{b})$ for a concept $\conc{C}$, role $\conc{R}$, and individual names $\conc{a},\conc{b}$, or an inclusion statement $\conc{C}\sqsubseteq \conc{D}$ for concepts or roles $\conc{C}$ and $\conc D$.
A \emph{theory} is a (possibly infinite) set of axioms, whereas an \emph{ontology} is a finite set of axioms. A set $\mathcal{L}$ of individuals, concepts, and roles is called a \emph{language}. A \emph{template} $T$ is an ontology, and we write $T(V)$, where $V\subseteq V_I\cup V_C\cup V_R$ is the set of variables occurring in $T$. For the sake of brevity, we occasionally omit the variable set $V$ when it is either clear from context or nonvital to the discussion. Templates can be \emph{instantiated} by applying a substitution to them. A \emph{substitution} $\sigma$ is a function that maps individual, concept, and role variables to individuals, concepts, and roles, respectively. We require that substitutions respect the type of a variable, so that the result of instantiating a template is a well-formed ontology. For $\mathcal{L}$ a language, an \emph{$\mathcal{L}$-substitution} is one whose range is a subset of $\mathcal{L}$. The \emph{$\mathcal{L}$-evaluation of $T$ over $\mathcal{O}$}, written $\mathsf{eval}(T, \mathcal{O}, \mathcal{L})$, is the set of substitutions defined as follows: $$\mathsf{eval}(T, \mathcal{O}, \mathcal{L}) = \{ \sigma \text{ an $\mathcal{L}$-substitution} \mid \mathcal{O} \models T\sigma \},$$ where $T\sigma$ is the instantiation of $T$ with $\sigma$. Furthermore, we define $\mathsf{eval}(\emptyset,\mathcal{O},\mathcal{L})$ to be the set of all $\mathcal{L}$-substitutions. Finally, we say that an ontology $\mathcal{O}$ \emph{is weaker than} $\mathcal{O}'$ if $\mathcal{O}' \models \mathcal{O}$, and \emph{strictly weaker} if the reverse does not hold. \section{Related work} When combining rules with DL ontologies, the focus has thus far primarily been on (1) encoding ontology axioms in rules for efficient query answering and (2) expanding the expressivity of ontologies using rules. In contrast, GBoxes are designed as a tool for ontology specification by describing instantiation dependencies between templates.
\textbf{Datalog$^{\pm}$} \cite{DBLP:conf/datalog/CaliGLP10} falls into the first category: it provides a formalism for unifying ontologies and relational structures. Datalog$^{\pm}$ captures ontology axioms as rules, and these cannot ``add'' new axioms. \textbf{dl-programs} \cite{EITER20081495} and \textbf{DL-safe rules} \cite{MOTIK200541} fall into the second category: dl-programs add nonmonotonic reasoning by means of stable model semantics, whereas DL-safe rules allow for axiom-like rules not expressible in standard DL. However, none of these formalisms adds new TBox axioms to the ontology. \textbf{Tawny-OWL}\footnote{\url{https://github.com/phillord/tawny-owl}} and the \textbf{Ontology Pre-Processing Language}\footnote{\url{http://oppl2.sourceforge.net/index.html}} (OPPL) are formalisms for manipulating OWL ontologies \cite{DBLP:conf/owled/Lord13,DBLP:conf/owled/EganaSA08}. While OPPL was designed to capture patterns and regularities in ontologies, Tawny-OWL is a more general programmatic environment for authoring ontologies that includes powerful support for ontology design patterns. It is part of future work to see whether GBoxes can be faithfully implemented in Tawny-OWL (OPPL lacks the required recursion). Another question is whether \textbf{metamodeling} in DL, in particular the encoding scheme from \cite{DBLP:conf/semweb/GlimmRV10}, can be faithfully captured by (an extension of) GBoxes: this would require \emph{replacing} axioms in $\mathcal{O}$ with others, which is currently not supported. \label{ODP} \textbf{Ontology Design Patterns} (ODPs) have been proposed to capture best practices for developing ontologies \cite{DBLP:conf/semweb/Gangemi05,DBLP:conf/iceis/BlomqvistS05}, inspired by Software Design Patterns. While some ODPs are easily expressible as GBoxes, it is part of ongoing work to investigate the extensions required to capture others.
\textbf{Reasonable Ontology Templates}\footnote{\url{http://ottr.xyz}} (OTTR) \cite{OTTR-ISWC18,dl-templates} provide a framework for macros in OWL ontologies, based on the notion of templates. In contrast to GBoxes, ``matching'' of templates is defined syntactically and non-recursively, but templates can be named and composed to give rise to more complex templates. The \textbf{Generalized Distributed Ontology, Modelling and Specification Language} (GDOL) \cite{KriegBrckner2017GenericOA} is a formalism facilitating the template-based construction of ontologies from a wide range of logics. In addition to concepts, roles, and individuals, parameters may be ontologies which act as preconditions for template instantiation: for a given substitution, the resulting parameter ontology must be satisfiable in order to instantiate the template. Thus these preconditions serve only as a means to restrict the set of allowed instantiations of a template, whereas in GBoxes, an ontology triggers such substitutions. \endinput \subsection{GDOL} The \emph{Generalized Distributed Ontology, Modelling and Specification Language} (GDOL) \cite{KriegBrckner2017GenericOA} extends the \emph{Distributed Ontology, Modelling and Specification Language} \cite{10.1007/978-3-319-72044-9_2}, a metalanguage for combining theories from a wide range of logics under a single formalism, with a parametrization mechanism. Thus it facilitates a template-based approach to ontology construction by supporting the definition, instantiation and nesting of parametrized ontologies. In addition to classes, roles, and individuals, parameters may be ontologies which act as preconditions for template instantiation: for a given substitution, the resulting parameter ontology must be satisfiable in order to instantiate the template. Thus these preconditions serve only as a means to restrict the set of allowed instantiations of a template, whereas GBoxes use parameterized ontologies to fetch all such substitutions.
\subsection{DL-safe rules, Datalog$^{\pm}$} Many formalisms for combining ontologies and rules have been proposed in order to address various challenges facing description logic ontologies. The focus has thus far primarily been on (1) encoding ontologies in rules for efficient query answering and (2) expanding the expressivity of ontologies using rules. \emph{Datalog$^{\pm}$} \cite{DBLP:conf/datalog/CaliGLP10} is an extension of Datalog capable of expressing the DL-Lite family of description logics while maintaining tractable data complexity. This provides a formalism unifying ontologies and relational structures. While it is capable of expressing ontology axioms as rules, Datalog$^\pm$ reasons solely on individuals. As such it can only infer new assertions, not axioms. \emph{DL-safe rules} \cite{MOTIK200541} are a restriction of SWRL rules \cite{10.1007/11574620_69} that ensures decidability by requiring that all variables in a rule occur in a non-DL atom in the rule body. This allows for expressing axiom-like rules that are not expressible in standard DL; e.g., rules containing two variables in the head are inexpressible as a class subsumption axiom in DL. However, a DL-safe rule cannot generate new axioms for the ontology, as it also only generates assertions. \subsection{DL-programs} \emph{dl-programs} \cite{EITER20081495} are a logic programming paradigm that adds nonmonotonic reasoning capabilities to description logic knowledge bases. As such, a dl-program is a logic program under stable model semantics that may contain external queries to a knowledge base in the rule bodies. The logic program and the ontology are loosely coupled in the sense that program predicates are disjoint from the ontology predicates. However, in order to allow for some interaction between the program and the ontology, views of the external queries may be extended or restricted using extensions of program predicates.
Though both dl-programs and GBoxes are rule-based formalisms with external queries, their intention and use are quite different: On the one hand, dl-programs operate on an ontology's individuals and generate, in a sense, nonmonotonic ABoxes. On the other hand, GBoxes operate on whole ontologies, extending both the ABox and TBox of an ontology. \subsection{Tawny-OWL} \textit{Tawny-OWL}\footnote{\url{https://github.com/phillord/tawny-owl}} provides a programmatic environment for authoring ontologies \cite{DBLP:conf/owled/Lord13}. Tawny-OWL was motivated by the idea of leveraging software engineering practices such as distributed development, testing, versioning, etc.\ in the context of ontology engineering. Being a library for Clojure\footnote{\url{https://clojure.org/}}, Tawny-OWL can be used with all facilities Clojure provides. In particular, this includes an elaborate abstraction mechanism that allows one to write macros and encapsulate patterns. Tawny-OWL explicitly provides some support for ontology design patterns (cf.\ Section~\ref{ODP}) and it is conceivable to realise GBoxes by an implementation in Tawny-OWL. \subsection{Ontology Pre-Processing Language} The \textit{Ontology Pre-Processing Language}\footnote{\url{http://oppl2.sourceforge.net/index.html}} (OPPL) defines a formalism for manipulating OWL ontologies \cite{DBLP:conf/owled/EganaSA08}. OPPL was motivated by the idea of a high-level language for ontology engineering that allows patterns and regularities to be expressed in an efficient, maintainable, and reusable manner. The language operates on the level of entities and axioms, allowing newly created ones to be added to an ontology and a selection of existing ones to be removed from it. Such actions are specified by an OPPL script. In general, an OPPL script consists of three parts \cite{DBLP:conf/ekaw/Svab-ZamazalSI10}. First, a set of variable declarations. All variables are typed by a kind of entity, e.g.\ named classes, object properties, individuals, etc.
Furthermore, lexical constraints may restrict a variable's possible values. Second, a query over the ontology to be modified, containing previously declared variables. And lastly, a set of actions for adding or removing axioms possibly containing variables bound to the results of the preceding query. This query may be run against the asserted model of an ontology or may make use of a reasoner. While OPPL's mechanism of automatically generating axioms based on the results of a query over an ontology makes it possible to express and enforce some structural interdependencies between axioms, there is no support for recursion or negative guards. Hence, OPPL only implements a one-shot macro system. \subsection{Metamodeling} \textit{Metamodeling} in DL describes the idea of considering concepts and roles as individuals pertaining to metaconcepts. Such metaconcepts make it possible to reason over categories of concepts and roles, and to relate them via metaroles. While OWL Full supports most forms of metamodeling, the language's undecidability prohibits its practical use. Furthermore, it has been shown that restricting the underlying DL to $\mathcal{ALC}$ cannot mitigate this drawback \cite{DBLP:journals/logcom/Motik07}. Since OWL 2 supports only restricted forms of metamodeling \cite{DBLP:journals/ws/GrauHMPPS08}, there have been some proposals for extending OWL 2 with more features for metamodeling \cite{DBLP:conf/kr/KubincovaKH16,DBLP:conf/iberamia/Motz16,DBLP:conf/aaai/GiacomoLR11}. One such proposal describes an encoding scheme within OWL 2 without introducing a language extension \cite{DBLP:conf/semweb/GlimmRV10}. While this encoding scheme specifies translation rules turning the axioms of an ontology into two sets of axioms representing the instance layer and the metalayer, these rules cannot be captured by GBoxes in a straightforward manner.
Since GBoxes do not allow replacing entities or axioms in an ontology, the translation rules would have to be simulated by a set of generators creating axioms for the instance layer and the metalayer in addition to the original axioms of the ontology. However, duplicating a set of axioms such that the occurring entities are renamed in a consistent manner is also not easy to realise with GBoxes: while generators can add axioms, they cannot ``coin'' fresh duplicates of existing terms. \subsection{Ontology Design Patterns} \label{ODP} \textit{Ontology Design Patterns} (ODPs) have been proposed to capture best practices for developing ontologies \cite{DBLP:conf/semweb/Gangemi05,DBLP:conf/iceis/BlomqvistS05}. Inspired by Software Design Patterns, ODPs are supposed to provide reusable solutions to common ontology design tasks that may guide ontology engineers during ontology development. It is hypothesised that ODPs have the potential to foster ontology reuse, improve maintainability, and ease ontology comprehension. So far, the research community has focused its activity primarily on identifying, characterising, and cataloguing potential patterns in openly accessible repositories. There have also been accounts of projects demonstrating how a selection of ODPs may be used in concert \cite{DBLP:conf/semweb/KrisnadhiHJHACC15,DBLP:conf/edoc/SouzaFV13,DBLP:conf/semweb/Martinez-CostaKS14,DBLP:conf/otm/Blomqvist05}.
However, one of the open challenges for an increased adoption of ODPs in academia and industry is the question of how to use an ODP in an ontology under development \cite{DBLP:books/ios/p/HammarBCEFGHHJKKNSSS16}. To be more specific, there is not much theoretical work on how to properly manipulate and interrelate ODPs to help with ontology engineering tasks. Some efforts in that direction include pattern languages\footnote{A pattern language is a collection of patterns specifying their interrelationships.} \cite{DBLP:books/ios/p/FalboBRGS16,DBLP:conf/sac/FalboQNBGGLL16,DBLP:conf/fedcsis/ZambonG17,DBLP:conf/er/Guizzardi14,DBLP:conf/esws/FalboBNG13}, translation mechanisms \cite{DBLP:conf/ijcai/StaabEM01,DBLP:conf/ekaw/Svab-ZamazalSI10}, and representation mechanisms \cite{DBLP:conf/er/QuirinoBF17,DBLP:conf/semweb/HitzlerGJKP17}. GBoxes can provide a well-defined framework for using ODPs. By lifting the level of abstraction such that ODPs may be manipulated as language primitives, GBoxes can not only express relationships and dependencies between ODPs but also automatically enforce such interrelationships. \section{Results} We show that the semantics defined in the previous section coincides with a fixpoint-based one, investigate the role played by the language $\mathcal{L}$, and study generators with negated templates. \begin{theorem}\label{thm-expansion-equiv} For every $G$, $\mathcal{O}$, and $\mathcal{L}$, we have that any two $\mathcal{O}_1, \mathcal{O}_2 \in \mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ are logically equivalent. \end{theorem} \begin{proof} Assume for contradiction that this is not the case. Then there exist $\mathcal{O}_1, \mathcal{O}_2 \in \mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ such that $\mathcal{O}_1 \not\models \mathcal{O}_2$ and $\mathcal{O}_2 \not\models \mathcal{O}_1$, because otherwise one would be strictly weaker than the other, contradicting the definition of $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$.
In particular, there exist $\alpha$ and $\beta$ such that: \begin{align} \mathcal{O}_1 &\models \alpha, &\mathcal{O}_2 \not\models \alpha \label{thm-expansion-equiv-alpha}\\ \mathcal{O}_2 &\models \beta, &\mathcal{O}_1 \not\models \beta \label{thm-expansion-equiv-beta} \end{align} Now consider the set of axioms $T = \{ \tau \mid \mathcal{O}_1 \models \tau \land \mathcal{O}_2 \models \tau \}$. Since both $\mathcal{O}_1$ and $\mathcal{O}_2$ entail $\mathcal{O}$ and satisfy every $g \in G$, it is clear that so does $T$. However, \begin{align} T \not\models \mathcal{O}_1\\ T \not\models \mathcal{O}_2 \end{align} due to the entailments $\alpha$ (Eq.~\ref{thm-expansion-equiv-alpha}) and $\beta$ (Eq.~\ref{thm-expansion-equiv-beta}). Hence $T$ is strictly weaker than both $\mathcal{O}_1$ and $\mathcal{O}_2$. This contradicts the initial assumption of $\mathcal{O}_1, \mathcal{O}_2 \in \mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$. \end{proof} Hence applying a GBox $G$ to an ontology $\mathcal{O}$ results in a theory that is unique modulo equivalence, but not necessarily finite. As a consequence, we can treat $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ as a single theory when convenient. Our definition of $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ is strictly semantic, i.e., it does not tell us how to identify any $\mathcal{O}'\in \mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$. In order to do that, we define a 1-step expansion. \begin{definition}\label{def:1step-expansion} The \emph{1-step expansion of $\mathcal{O}$ and $G$ in $\mathcal{L}$}, written $\mathsf{1Exp}(G, \mathcal{O}, \mathcal{L})$, is defined as follows: $$\mathsf{1Exp}(G, \mathcal{O}, \mathcal{L}) = \mathcal{O}\cup \bigcup_{T_B\rightarrow T_H \in G}\{ T_H\sigma \mid \sigma \in \mathsf{eval}(T_B, \mathcal{O}, \mathcal{L})\}.$$ \end{definition} In other words, we add to $\mathcal{O}$ all instantiated heads of all generators applicable in $\mathcal{O}$.
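For illustration, the 1-step expansion and its iteration can be sketched in a few lines of Python (a naive sketch under simplifying assumptions: axioms are tuples, variables are `?`-prefixed strings, and $\mathsf{eval}$ is approximated by purely syntactic matching against $\mathcal{O}$ rather than genuine entailment):

```python
from itertools import product

def apply_subst(axiom, sigma):
    # Replace variable tokens of an axiom tuple according to sigma.
    return tuple(sigma.get(t, t) for t in axiom)

def naive_eval(body, onto, lang):
    """Syntactic stand-in for eval(T_B, O, L): all L-substitutions sigma
    such that every axiom of T_B sigma occurs in O (no real reasoning)."""
    vars_ = sorted({t for ax in body for t in ax if t.startswith("?")})
    return [sigma for values in product(lang, repeat=len(vars_))
            for sigma in [dict(zip(vars_, values))]
            if all(apply_subst(ax, sigma) in onto for ax in body)]

def one_step_expansion(gbox, onto, lang):
    """1Exp(G, O, L): add the instantiated head of every applicable generator."""
    out = set(onto)
    for body, head in gbox:
        for sigma in naive_eval(body, onto, lang):
            out |= {apply_subst(ax, sigma) for ax in head}
    return out

def fixpoint_expansion(gbox, onto, lang):
    """Iterate 1Exp until nothing new is added (terminates for finite L)."""
    cur = set(onto)
    while (nxt := one_step_expansion(gbox, cur, lang)) != cur:
        cur = nxt
    return cur

# Two chained generators: ?X ⊑ B adds ?X ⊑ C, which in turn adds ?X ⊑ D.
G = [([("sub", "?X", "B")], [("sub", "?X", "C")]),
     ([("sub", "?X", "C")], [("sub", "?X", "D")])]
O = {("sub", "A", "B")}
print(sorted(fixpoint_expansion(G, O, lang=["A", "B", "C", "D"])))
# [('sub', 'A', 'B'), ('sub', 'A', 'C'), ('sub', 'A', 'D')]
```

The second generator only becomes applicable after the first one has fired, which is exactly the recursive behaviour discussed next.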
Of course, this extension may result in other generators with other substitutions becoming applicable, and so on recursively. \begin{lemma} If $\mathcal{O}_1 \subseteq \mathcal{O}_2$, then $\mathsf{1Exp}(G, \mathcal{O}_1, \mathcal{L}) \subseteq \mathsf{1Exp}(G, \mathcal{O}_2, \mathcal{L})$. \begin{proof} Simple consequence of Def.~\ref{sec:generator:satisfaction} and $\mathsf{eval}(B(g), \mathcal{O}_1, \mathcal{L}) \subseteq \mathsf{eval}(B(g), \mathcal{O}_2, \mathcal{L})$ for any generator $g$. \end{proof} \end{lemma} \begin{definition} The \emph{$n$-step expansion of $\mathcal{O}$ and $G$ in $\mathcal{L}$}, written $\mathsf{1Exp}^n(G, \mathcal{O}, \mathcal{L})$, is defined as follows: $$\mathsf{1Exp}^n(G, \mathcal{O}, \mathcal{L}) = \underbrace{\mathsf{1Exp}(\ldots\mathsf{1Exp}(}_{n\text{ times}}G, \mathcal{O}, \mathcal{L}) \dots).$$ We use $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ to denote the least fixpoint of $\mathsf{1Exp}(G, \mathcal{O}, \mathcal{L})$. \end{definition} \begin{theorem} \label{expansion-fixpoint} For finite $\mathcal{L}$, the least fixpoint $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ exists and belongs to $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$. \end{theorem} \begin{proof} Since $\mathcal{L}$ is finite, the set of all $\mathcal{L}$-substitutions for the variables occurring in $G$ is finite. Let $\Sigma_{\mathcal{L}}$ be this set, and consider the set $H = \mathcal{O} \cup \displaystyle\bigcup_{T_B \rightarrow T_H \in G, \sigma \in \Sigma_{\mathcal{L}}} T_H\sigma$, that is, $\mathcal{O}$ as well as all axioms obtained from the heads of instances of generators in $G$. This set is also finite. It is easily verified that $\mathsf{1Exp}$ is an operator on the powerset of $H$. Since $\mathsf{1Exp}$ is monotone, the least fixpoint $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ exists, and belongs to $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$ by construction.
\end{proof} In other words, our fully semantic definition of $\mathsf{Exp}(G,\mathcal{O},\mathcal{L})$ coincides with the operational semantics based on the fixpoint computation. \paragraph{Size of the fixpoint} For a generator $g$ with variables $V$, there are at most $|\mathcal{L}|^{|V|}$ different $\mathcal{L}$-substitutions. The size of the fixpoint is therefore bounded by $|G| \times |\mathcal{L}|^{n}$, where $n$ is the maximum number of variables in any $g \in G$. In the worst case we need to perform entailment checks for all of them, adding one instantiation at a time to $\mathcal{O}$. Hence determining $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ involves up to $(|G| \times |\mathcal{L}|^{n})^2$ entailment checks. For finite $\mathcal{L}$ and provided we have a fixed upper bound for $n$, determining $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ involves a polynomial number of entailment tests and results in a $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ whose size is polynomial in the size of $G$ and $\mathcal{L}$. \subsubsection*{Finite vs.\ infinite $\mathcal{L}$} The next examples illustrate the difficulties an infinite language $\mathcal{L}$ can cause. The first example shows how an infinite $\mathcal{L}$ can lead to infinite expansions. \begin{example} Consider the ontology $\mathcal{O}= \{ \conc{A}\sqsubseteq \exists \conc{R}.\conc{B}\}$, the generator $g:\{?X \sqsubseteq \exists \conc{R}.?Y\}\rightarrow \{ ?X \sqsubseteq \exists \conc{R}.\exists \conc{R}.?Y\}$, and $\mathcal{L}$ the set of all $\mathcal{EL}$-concept expressions. Clearly, $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ is infinite, and so is each expansion in $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$. \end{example} The next example shows that this does not necessarily happen.
\begin{example} Consider the ontology $\mathcal{O}= \{\exists \conc{R}.\conc{A}\sqsubseteq \conc{A}\}$, the generator $g: \{\exists \conc{R}.?X \sqsubseteq ?X\}\rightarrow \{\exists \conc{R}.\exists \conc{R}.?X \sqsubseteq \exists \conc{R}.?X \}$, and $\mathcal{L}$ the set of all $\mathcal{EL}$-concept expressions. Clearly, $\mathsf{1Exp}^*(G, \mathcal{O}, \mathcal{L})$ is infinite, but there is a finite ontology equivalent to this fixpoint in $\mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$, namely $\mathcal{O}$ itself. \end{example} While having to explicitly specify $\mathcal{L}$ may seem cumbersome, it is not very restrictive. In fact, it is easy to show that, for finite languages, generators can be rewritten to account for concepts, roles, or individuals that are missing from a given language by grounding the generators. \begin{definition} Let $g : T_B \rightarrow T_H$ be a generator, and $\mathcal{L}$ a finite language. The \emph{$\mathcal{L}$-grounding of $g$} is the finite set of generators $\{T_B\sigma \rightarrow T_H\sigma \mid \sigma \mbox{ an } \mathcal{L}\mbox{-substitution}\}$. \end{definition} Using $\mathcal{L}$-grounding, we can compensate for a smaller language $\mathcal{L}_1\subsetneq \mathcal{L}_2$ by $(\mathcal{L}_2\setminus \mathcal{L}_1)$-grounding generators, thereby proving the following theorem. \begin{theorem} Let $\mathcal{L}_1 \subseteq \mathcal{L}_2$ be finite languages. For every GBox $G$ there exists a GBox $G'$ such that, for every $\mathcal{O}$, $\mathcal{O}_1$, $\mathcal{O}_2$ we have that $\mathcal{O}_1\in \mathsf{Exp}(G', \mathcal{O}, \mathcal{L}_1)\text{ and } \mathcal{O}_2\in \mathsf{Exp}(G, \mathcal{O}, \mathcal{L}_2)\text{ implies } \mathcal{O}_1 \equiv \mathcal{O}_2. $ \end{theorem} \begin{proof} Take $G'$ to be the union of the $\mathcal{L}_2$-groundings of every generator in $G$. \end{proof} Of course, grounding all the generators is a very wasteful way of accounting for a less expressive language.
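For concreteness, the $\mathcal{L}$-grounding of a single generator can be enumerated as follows (a sketch under a toy encoding we assume for illustration: axioms as tuples, variables as `?`-prefixed strings, no reasoner involved):

```python
from itertools import product

def l_grounding(generator, lang):
    """{T_B sigma -> T_H sigma | sigma an L-substitution}: one syntactic
    instance of the generator per assignment of language elements to its
    variables. Axioms are tuples, variables are '?'-prefixed strings."""
    body, head = generator
    vars_ = sorted({t for ax in body + head for t in ax if t.startswith("?")})
    grounded = []
    for values in product(lang, repeat=len(vars_)):
        sigma = dict(zip(vars_, values))
        inst = lambda axs: [tuple(sigma.get(t, t) for t in ax) for ax in axs]
        grounded.append((inst(body), inst(head)))
    return grounded

g = ([("sub", "?X", "B")], [("sub", "?X", "C")])
print(len(l_grounding(g, ["A", "B", "C"])))  # |L|^|V| = 3^1 = 3 instances
```

The returned list has exactly $|\mathcal{L}|^{|V|}$ entries, which makes the blow-up behind the preceding remark explicit.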
A more clever rewriting algorithm should be possible: for example, if we allow binary conjunctions of names in $\mathcal{L}_2$ but not in $\mathcal{L}_1$, we can add copies of each generator where we replace variables $?X$ with $?X_1 \sqcap ?X_2$. \subsection{GBox containment and equivalence} \label{sec:containment} Having defined GBoxes, we now investigate a suitable notion of containment and equivalence of GBoxes. \begin{definition}[$\mathcal{L}$-containment] Let $G_1$ and $G_2$ be GBoxes, and $\mathcal{L}$ a language. $G_1$ is \emph{$\mathcal{L}$-contained in} $G_2$ (written $G_1 \preceq_\mathcal{L} G_2$) if $\mathsf{Exp}(G_2, \mathcal{O}, \mathcal{L}) \models \mathsf{Exp}(G_1, \mathcal{O}, \mathcal{L})$ for every ontology $\mathcal{O}$. \end{definition} The following lemma relating the entailment of theories and the entailment of expansions holds as a direct consequence of the monotonicity of description logics. \begin{lemma}\label{lem:twentynine} Let $G$ be a GBox, $T,T'$ two theories and $\mathcal{L}$ a language. If $T\models T'$ then $\mathsf{Exp}(G,T,\mathcal{L})\models \mathsf{Exp}(G,T',\mathcal{L})$. \end{lemma} Furthermore, the following is a rather straightforward consequence of the definition of the semantics of generators. \begin{lemma} \label{lemma:exp-entailment} Let $T$ be a theory, $G$ a GBox, $\mathcal{O}$ an ontology, and $\mathcal{L}$ a language. If $T \models \mathcal{O}$ and $T$ satisfies every generator $g \in G$ then $T \models \mathsf{Exp}(G, \mathcal{O}, \mathcal{L})$. \end{lemma} Using Lemmas~\ref{lem:twentynine} and \ref{lemma:exp-entailment}, $\mathcal{L}$-containment can be shown to be decidable, and in fact efficiently so, using a standard freeze technique from database theory. \begin{theorem} Let $G_1$ and $G_2$ be GBoxes, and $\mathcal{L}$ a language. $G_1$ is $\mathcal{L}$-contained in $G_2$ if and only if $\mathsf{Exp}(G_2, T_B,\mathcal{L}) \models T_H$ for every $T_B\rightarrow T_H \in G_1$.
\end{theorem} \begin{proof} The only-if direction follows directly. For the other direction, by \cref{lemma:exp-entailment} we need to show that if $\mathsf{Exp}(G_2, T_B,\mathcal{L}) \models T_H$ for all $T_B\rightarrow T_H\in G_1$ then for any ontology $\mathcal{O}$ \begin{align} &\mathsf{Exp}(G_2, \mathcal{O},\mathcal{L}) \models g \text{ for all }g \in G_1, \label{cond:1}\\ &\mathsf{Exp}(G_2, \mathcal{O},\mathcal{L}) \models \mathcal{O}.\label{cond:2} \end{align} By \cref{lemma:exp-entailment}, \eqref{cond:1} and \eqref{cond:2} imply $\mathsf{Exp}(G_2,\mathcal{O},\mathcal{L}) \models \mathsf{Exp}(G_1,\mathcal{O},\mathcal{L})$, which is the definition of $G_1$ being $\mathcal{L}$-contained in $G_2$. \eqref{cond:2} is an immediate consequence of the definition of the expansion, hence we only need to show \eqref{cond:1}. In the following we slightly abuse notation: $\mathsf{Exp}(G, \mathcal{O},\mathcal{L})$ for a GBox $G$, ontology $\mathcal{O}$ and language $\mathcal{L}$ shall refer to an ontology as opposed to a set of possible expansions; by \cref{thm-expansion-equiv}, they are all logically equivalent. Let $T_B\rightarrow T_H\in G_1$ be fixed but arbitrary. Furthermore, let $\sigma \in \mathsf{eval}(T_B, \mathsf{Exp}(G_2,\mathcal{O},\mathcal{L}))$. Then, by the definition of $\mathsf{eval}$, \begin{align} \mathsf{Exp}(G_2, \mathcal{O},\mathcal{L}) \models T_B\sigma. \tag{*}\label{cond:star} \end{align} Applying \cref{lem:twentynine} to \eqref{cond:star} yields $\mathsf{Exp}(G_2, \mathsf{Exp}(G_2, \mathcal{O},\mathcal{L}),\mathcal{L}) \models \mathsf{Exp}(G_2, T_B\sigma,\mathcal{L})$. 
But $\mathsf{Exp}(G_2, \mathsf{Exp}(G_2,\mathcal{O},\mathcal{L}),\mathcal{L}) = \mathsf{Exp}(G_2,\mathcal{O},\mathcal{L})$ (otherwise $\mathsf{Exp}(G_2,\mathcal{O},\mathcal{L})$ would not be an expansion) and hence \begin{align} \mathsf{Exp}(G_2,\mathcal{O},\mathcal{L}) \models \mathsf{Exp}(G_2, T_B\sigma,\mathcal{L}).\label{cond:4} \end{align} Thus what remains is to show that \begin{align} \mathsf{Exp}(G_2, T_B\sigma,\mathcal{L}) \models T_H\sigma,\label{cond:5} \end{align} since \eqref{cond:4} and \eqref{cond:5} together yield \begin{align} \mathsf{Exp}(G_2,\mathcal{O},\mathcal{L})\models T_H\sigma, \tag{**}\label{cond:starstar} \end{align} which together with \eqref{cond:star} implies that $\mathsf{Exp}(G_2,\mathcal{O},\mathcal{L})$ satisfies $T_B\rightarrow T_H$. Using compositionality of $\mathcal{L}$-substitutions and the iterative fixpoint construction of the expansion, it is straightforward to show that \begin{align} \mathsf{Exp}(G_2, T_B\sigma,\mathcal{L}) \models \mathsf{Exp}(G_2, T_B,\mathcal{L})\sigma.\label{cond:7} \end{align} By the assumption of the theorem, $\mathsf{Exp}(G_2, T_B,\mathcal{L})\models T_H$, which in turn implies that $\mathsf{Exp}(G_2, T_B,\mathcal{L})\sigma \models T_H\sigma$. This together with \eqref{cond:7} yields \begin{align} \mathsf{Exp}(G_2, T_B\sigma,\mathcal{L}) \models \mathsf{Exp}(G_2, T_B,\mathcal{L})\sigma \models T_H\sigma, \end{align} thus proving \eqref{cond:5} and thereby \eqref{cond:starstar}, as desired. \end{proof} It follows that $\mathcal{L}$-containment is decidable for arbitrary $\mathcal{L}$ (even infinite), since we can restrict ourselves to the language of all subexpressions of $B(G_1)$. Furthermore, the complexity is the same as that of computing an expansion of a GBox. \input{negation} \section{Outlook} Another area of future work is motivated by our preliminary analysis of \emph{logical} ODPs \cite{gangemi2009ontology}.
We found that a number of rather straightforward, seemingly useful such patterns require some form of ellipses and/or maximality. Consider, for example, the role closure pattern on the role $\conc{hasTopping}$: if $\mathcal{O}$ entails that $\conc{MyPizza}\sqsubseteq \exists\conc{hasTopping}.X_1\sqcap \ldots \sqcap \exists\conc{hasTopping}.X_n$ and $n$ is maximal for pairwise incomparable $X_i$, then we would like to automatically add $\conc{MyPizza}\sqsubseteq \forall\conc{hasTopping}.(X_1\sqcup \ldots \sqcup X_n)$. Additionally, some ODPs make use of reified entities. The logical ODP for n-ary relationships is a simple example, where a class is reified. In such cases, generators may need the capability to create new entities. Extending generators to capture some form of ellipses (i.e., an unknown number of variables), maximality conditions on substitutions for variables, or the generation of new entities will be part of future work. For GBoxes to be indeed intention revealing, we will also support named generators and named sets of axioms in the body or the head of generators, as in OTTR \cite{OTTR-ISWC18}.
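For a fixed upper bound on $n$, the role closure pattern can at least be approximated by emitting one generator per arity, as the following script sketches (the concrete generator syntax is invented for illustration, and the crucial maximality condition on $n$ is not captured):

```python
def closure_generators(role, max_n):
    """Emit one approximate role-closure generator per arity 1..max_n.
    The surface syntax produced here is invented for illustration only."""
    gens = []
    for n in range(1, max_n + 1):
        xs = [f"?X{i}" for i in range(1, n + 1)]
        body = ", ".join(f"?C subClassOf {role} some {x}" for x in xs)
        head = f"?C subClassOf {role} only ({' or '.join(xs)})"
        gens.append(f"{{{body}}} -> {{{head}}}")
    return gens

for gen in closure_generators("hasTopping", 2):
    print(gen)
```

Without the maximality condition such generators would fire for every entailed subset of fillers, which is precisely why ellipses and maximality are singled out as future work.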
União Desportiva Vilafranquense is a Portuguese football club from Vila Franca de Xira, Lisbon District. It currently plays in the Liga Portugal 2, the second tier of the Portuguese football league system, following its promotion in 2019. The only other time the club played in the second tier was the 1987–88 season, when that level was still regionalised. History The club was created on April 12, 1957 with the merger of four local sports clubs: Grupo de Foot-Ball Operário Vilafranquense, Águia Sport Club Vilafranquense, Hóquei Clube Vilafranquense and Ginásio Vilafranquense. In the 2016–17 Taça de Portugal, Vilafranquense defeated G.S. Loures, Vilaverdense F.C. and G.D. Vitória de Sernache to reach a fourth-round tie at home to F.C. Paços de Ferreira of the Primeira Liga. They beat the top-flight team 1–0 on 20 November, with a goal by Marocas. In the next round (last 16), they lost by the same score at Vitória S.C., also of the Primeira. In the 2018–19 Campeonato de Portugal, Vilafranquense came second in Serie C behind U.D. Leiria. In the play-offs, they dispatched Lusitânia F.C. and then Leiria on penalties to win promotion. This was the club's first time in a national second division. On 23 June in the final at the Estádio Nacional against Casa Pia A.C., the club drew 2–2 then lost 4–2 on penalties. Due to the inadequate facilities of their Campo do Cevadeiro, Vilafranquense had to play their LigaPro home games 50 kilometres away at the Estádio Municipal in Rio Maior.
Since committing to Nanowrimo, I've had good days and bad days. And some days when I just forgot to update my Nanowrimo numbers. But I have to say that this is the first time I've stuck to writing "something" every single day for 10 days in a row….ever. And I'm having a good time with it. At first, I thought I couldn't possibly write without editing. I can't leave typos or misspelled words, but I'm not going back and fixing, editing, putzing with dialogue or sentence structure. I'm just writing, getting it all out and down on paper. And the truth is, although it's not great, it's pretty good. Good enough to go back and use most of it. I am working on a story that I started over a year ago. I had about 6 chapters written, and notes for every chapter through to the end – perhaps a paragraph or so that at least gave me the outline of what this story is about. But I have changed some details and the back story doesn't quite match. Okay, I'll get to that… in December! I know several of you are writing like maniacs, too. How are you doing with Nanowrimo?
\section{Introduction} While properties of soft particles produced in heavy-ion collisions can be understood in a hydrodynamic framework, it is clear that hard particles cannot be in local thermal equilibrium. The interplay between the soft, strongly coupled and the hard, weakly coupled sector gives access to non-trivial QCD dynamics. Different techniques are used to describe these two regimes and there is currently no satisfactory approach allowing for a fully self-consistent description of the entire dynamics in a common framework. Instead, it is common practice to use perturbative techniques for hard probes and hydrodynamics for the soft bulk of the event. The drawback of this approach is that the separation between the soft and the hard regime is to some extent arbitrary and that it is difficult to fully account for the crosstalk between the two regimes. So far the modification of hard probes, particularly jets, due to interactions in the soft background and the resulting jet quenching phenomenology has received more attention than the modification of the bulk evolution. In a recent study~\cite{Floerchinger:2014yqa} we combined the jet quenching Monte Carlo \textsc{Jewel} with a Bjorken boost invariant and azimuthally symmetric viscous hydrodynamic evolution of the bulk to get a realistic estimate of the effect that the passage of jets can have on the soft event. \section{Combining hydrodynamics and jets} On the hydrodynamic side viscous corrections are taken into account in a second order formalism including shear viscosity and the corresponding relaxation time, which take their AdS/CFT values (bulk viscosity is neglected). The equation of state is a parametrisation of a lattice equation of state combined with a hadron resonance gas (s95p-PCE of~\cite{Shen:2010uy}). The system is assumed to be boost-invariant along the beam axis and to have azimuthal symmetry (corresponding to the most central $b=0$ collisions). 
The initial conditions are specified at $\tau = \unit[0.6]{fm}$ following~\cite{Qiu:2011hf}, i.e.\ $T=485\, \text{MeV}$ in the centre of the collision with a profile in the transverse plane given by a Glauber calculation, $u^r=0$ and the Navier-Stokes values of the shear stress. \smallskip The QCD evolution of jets in the presence of re-scattering in a dense background is simulated with \textsc{Jewel}~\cite{Zapp:2012ak}. The jet production points are distributed according to the density of binary nucleon-nucleon collisions extracted from a Glauber model~\cite{Eskola:1988yh}. The number of di-jets per event is Poisson distributed. The jet production matrix elements and initial state parton showers are generated with \textsc{Pythia}\,6~\cite{Sjostrand:2006za} using the EPS09 nuclear pdf sets~\cite{Eskola:2009uj}. In the absence of a medium the final state jet evolution reduces to a standard virtuality ordered parton shower similar to the one in \textsc{Pythia}\,6. In the presence of a background medium re-scattering can occur, which can be either elastic or inelastic, i.e.\ give rise to QCD bremsstrahlung. Re-scattering is described using leading order perturbative $2\to 2$ scattering matrix elements with radiative corrections being generated by the parton shower. The space-time structure of the parton shower and the interplay between different sources of radiation as well as the destructive LPM-interference are dictated by the formation times of the emissions. \textsc{Jewel} has to be provided with information about the background, namely the local density and the momentum distribution of scattering centres. When running with the hydrodynamic medium it takes the local temperature and fluid velocity as input and constructs the parton density and momentum distribution from it assuming an ideal gas equation of state.
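For orientation, the last step can be sketched as follows (a rough sketch: massless partons with Boltzmann statistics, $n = g\,T^3/\pi^2$ per degeneracy $g$, are assumed, and the degeneracy factors below — 16 for gluons, 36 for three light quark flavours plus antiquarks — are a common textbook choice rather than values taken from the \textsc{Jewel} code):

```python
import math

HBARC = 0.19732  # GeV*fm; converts a density in GeV^3 to fm^-3

def parton_density(T_GeV, g_gluons=16.0, g_quarks=36.0):
    """Ideal-gas (massless, Boltzmann) number density of scattering
    centres, n = g T^3 / pi^2, expressed in fm^-3."""
    n_gev3 = (g_gluons + g_quarks) * T_GeV**3 / math.pi**2
    return n_gev3 / HBARC**3

# Density at the initial central temperature T = 485 MeV quoted above.
print(f"{parton_density(0.485):.1f} fm^-3")
```

The momentum distribution of the scattering centres is then taken as thermal in the local fluid rest frame and boosted with the fluid velocity, which is where the transverse flow enters the jet evolution.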
\smallskip As \textsc{Jewel} is a completely microscopic model it is straightforward to extract the energy-momentum transfer between the jets and the medium in the individual scattering processes, \begin{equation} J^\mu(x) = \sum_{i} \Delta p^\mu_i \delta^{(4)}(x-x_i) \,. \end{equation} This can be interpreted as a source term in the hydrodynamic equations. It is convenient to decompose it into components parallel and orthogonal to the fluid velocity $u$, \begin{equation} J_S = u_\nu J^\nu \qquad \text{and} \qquad J_V^\mu = \Delta^\mu_{\;\;\nu} J^\nu \,, \end{equation} where $\Delta^{\mu\nu} = u^\mu u^\nu + g^{\mu\nu}$. The source term varies from event to event. One possible approach is to solve the hydrodynamic equations event-by-event, which is, however, computationally expensive. We therefore chose to characterise the statistical properties of the source term in terms of n-point functions. Assuming the fluctuations to be Gaussian it is sufficient to specify the event averages \begin{equation} \bar J_S= \langle J_S(x) \rangle \qquad \text{and} \qquad \bar J_V^\mu = \langle J_V^\mu(x) \rangle \end{equation} and the correlation functions \begin{equation} \begin{split} \bar C_{SS}(x,y) & = \langle J_S(x)J_S(y)\rangle - \bar J_S(x) \bar J_S(y) \\ \bar C_{SV}^\mu(x,y) & = \langle J_S(x)J_V^\mu(y)\rangle - \bar J_S(x) \bar J_V^\mu(y) \\ \bar C_{VV}^{\mu\nu}(x,y) & = \langle J_V^\mu(x)J_V^\nu(y)\rangle - \bar J_V^\mu(x) \bar J_V^\nu(y) \,.
\end{split} \end{equation} Then the fluid dynamic equations containing the average source term read \begin{equation} \begin{split} & u^\tau \partial_\tau \epsilon + u^r \partial_r \epsilon + (\epsilon+p) (\partial_\tau u^\tau + \partial_r u^r + \tfrac{1}{\tau} u^\tau + \tfrac{1}{r} u^r)\\ & + u^\tau \left[ \partial_\tau \pi^{\tau\tau} + \tfrac{1}{\tau} \pi^{\tau\tau} + \partial_r \pi^{\tau r} + \tfrac{1}{r} \pi^{\tau r} + \tfrac{1}{\tau} \pi^{\eta\eta} \right]\\ & - u^r \left[ \partial_\tau \pi^{\tau r} + \tfrac{1}{\tau} \pi^{\tau r} + \partial_r \pi^{rr} + \tfrac{1}{r} \pi^{rr} - \tfrac{1}{r} \pi^{\phi\phi}\right] = - \bar J_S \end{split} \end{equation} for the energy density $\epsilon$ and \begin{equation} \begin{split} & (\epsilon + p) (u^\tau \partial_\tau u^r + u^r \partial_r u^r) + u^r u^\tau \partial_\tau (p+\pi_\text{bulk}) + (u^\tau)^2 \partial_r p \\ & - u^\tau u^r \left[ \partial_\tau \pi^{\tau\tau} + \tfrac{1}{\tau} \pi^{\tau\tau} + \partial_r \pi^{r\tau} + \tfrac{1}{r} \pi^{r\tau} + \tfrac{1}{\tau} \pi^{\eta\eta} \right] \\ & + (u^\tau)^2 \left[ \partial_\tau \pi^{\tau r} + \tfrac{1}{\tau} \pi^{\tau r} + \partial_r \pi^{rr} + \tfrac{1}{r} \pi^{rr} - \tfrac{1}{r} \pi^{\phi\phi}\right] = \bar J_V^r \end{split} \end{equation} for the radial component $u^r$ of the fluid velocity ($\pi^{\mu\nu}$ denotes the shear stress tensor). The $\phi$ and $\eta$ components of the fluid velocity vanish due to the symmetries, and $u^\tau$ is related to $u^r$ through the constraint $u_\mu u^\mu = -1$. \section{Jets in the hydrodynamic background} So far, \textsc{Jewel} has mainly been used with a simple toy model as background, which assumes Bjorken expansion and an energy density that is proportional to the density of participants calculated in a Glauber model~\cite{Zapp:2012ak}.
The initialisation time and initial temperature are the same in the toy model and the full fluid dynamical evolution, but the temperature profile in the transverse plane is different (it falls off faster in the hydro initial conditions). Another important difference is that there is no transverse expansion in the toy model, so that the hydrodynamic background extends to larger radii at later times. Consequently, the average temperature in the toy model is higher at early times and lower at later times than in the hydrodynamic solution. When comparing jet observables computed with the two medium models but otherwise identical parameters one observes differences of at most \unit[20]{\%} in the jet nuclear modification factor (the differences in other observables are much smaller). The transverse expansion enters directly through the momentum distribution of the scattering centres and indirectly through the temperature distribution. The former has no visible effect on the final distributions while the latter is responsible for the observed differences. It turns out that the difference builds up only at late times ($\tau > \unit[4]{fm/c}$), when the temperature is already fairly small. The reason why there is sensitivity to the temperature distribution at late times in \textsc{Jewel} is that at early times the jet evolution is dominated by hard emissions associated with the jet production process and re-scatterings can only induce extra radiation at later times. It should be noted that the differences between the two background models are smaller than other theory uncertainties even within the same jet quenching model. \section{Results for event averaged source term} When extracting the source terms a major difficulty, which is a consequence of the division into a soft and a hard regime, consists in defining the jet population. The perturbative jet cross section is infra-red divergent and has to be regularised. As we are interested in typical, i.e.
minimum bias, events, the jet cross section should cover the $p_\perp$ range where it dominates over the thermal background (partons with approximately thermal momenta will not contribute to the source terms, as on average they do not lose energy and should be considered part of the background). This leads to a low $p_\perp$ cut on the jet production matrix element, $p_{\perp,\, \text{cut}} \simeq \unit[3]{GeV}$, and a correspondingly large number of di-jets per event ($N_\text{di-jet} \approx 1700$). The regularisation procedure also introduces a large uncertainty of the order of a factor 2 -- 3 on the source term. For central collisions the averaged source term will be independent of $\phi$, but it can have a non-trivial rapidity dependence even in a boost-invariant background. To preserve the symmetries of the bulk we consider in this study only the central rapidity unit (including the rapidity dependence is a straightforward extension). \smallskip \begin{figure} \begin{center} \includegraphics[angle=-90,width=.48\linewidth]{js-taur} \includegraphics[angle=-90,width=.48\linewidth]{jvtau-taur} \caption{Event averaged source terms $\bar J_S$ (left) and $\bar J_V^\tau$ (right) per di-jet in minimum bias events. The sources are averaged over the azimuthal angle $\phi$ and the central unit in rapidity. } \label{fig::sources} \end{center} \end{figure} The averaged source terms scale trivially with the number of di-jets per event. In figure~\ref{fig::sources} the components $\bar J_S$ and $\bar J_V^\tau$ of the source term per event are shown ($\bar J_V^r$ is related to $\bar J_V^\tau$ through $u_\mu \bar J_V^\mu =0$). The energy transfer $\bar J_S$ is concentrated at early times and small radii, i.e.\ where the temperature is highest. It falls off quickly with $\tau$, as most 'jets' are soft 'mini-jets' that quickly evolve down to approximately thermal scales, where they do not contribute to the source terms any more.
In contrast to this, the momentum transfer builds up only at late times and at large radii. \smallskip \begin{figure} \begin{center} \includegraphics[width=.45\linewidth]{TempEvolution} \hspace{5mm} \includegraphics[width=.45\linewidth]{FluidVelocityEvolution} \caption{Temperature (left) and radial component $u^r$ of the fluid velocity (right) for different times. Solid lines correspond to the solution without source terms, dashed lines show the solutions with the source terms shown in figure~\ref{fig::sources}.} \label{fig::results} \end{center} \end{figure} The effect of the averaged source term on the bulk evolution is shown in figure~\ref{fig::results}. It has only a very small influence on the temperature. The increase at early times is due to dissipation, while the decrease in the centre and the increase at large $r$ at later times are caused by the increase in the radial velocity. The latter is a consequence of the momentum transfer, which increases the transverse flow by up to \unit[10]{\%} at intermediate times. \section{Conclusions} We have studied the effect of a realistic fluid dynamic description of the background on jets and of energy and momentum loss by jets on the evolution of the background. On the jet side the differences between the hydrodynamic medium and a simple toy model were found to be smaller than other modelling uncertainties and caused by the fact that at later times the hydrodynamic background extends to larger radii due to the transverse expansion. For the background evolution the energy and momentum deposited by jets form source terms entering the fluid dynamic equations. We characterise the sources in terms of event averages and correlation functions and extract realistic estimates for minimum bias events. The averaged source terms are shown to have only a small influence on the background evolution: the momentum transfer increases the radial flow by up to \unit[10]{\%}, while the effect on the temperature is negligible.
The influence of the correlation functions will be studied in a future publication. The division of heavy ion events into a soft bulk described by hydrodynamics and hard jets described using perturbative techniques introduces sizeable uncertainties, as the separation between the two regimes is not well-defined. On the other hand, this approach allows one to use the best available descriptions for each of the two parts, since a suitable common framework is currently unavailable. It also makes it easier to separate the different effects.
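As an aside, the way an energy source term acts is easiest to see in the 0+1D Bjorken limit of the energy equation above ($u^r = 0$, no shear), where it reduces to $\partial_\tau \epsilon = -(\epsilon+p)/\tau - \bar J_S$. The sketch below integrates this with an explicit Euler step; the ideal equation of state $p = \epsilon/3$ and the Gaussian source profile are illustrative assumptions, not taken from the text.

```python
# Toy 0+1D Bjorken evolution with a source term J_S(tau):
#   d(eps)/d(tau) = -(eps + p)/tau - J_S(tau),  with p = eps/3 (assumed ideal EOS).
# Units are arbitrary; the source profile below is a made-up example.
import math

def evolve(eps0, tau0, tau_end, dtau=1e-4, J_S=lambda tau: 0.0):
    eps, tau = eps0, tau0
    while tau < tau_end:
        p = eps / 3.0
        eps += dtau * (-(eps + p) / tau - J_S(tau))
        tau += dtau
    return eps

# Without a source, Bjorken flow gives eps ~ tau**(-4/3):
eps_free = evolve(1.0, 0.5, 4.0)
print(eps_free)   # ~ 0.0625 = (0.5/4.0)**(4/3)

# A hypothetical source active around tau = 1 shifts the late-time energy:
eps_src = evolve(1.0, 0.5, 4.0, J_S=lambda tau: 0.05 * math.exp(-(tau - 1.0) ** 2))
print(eps_src)
```

In this toy ODE a positive source on that side of the equation simply drains energy from the cell; the full radially symmetric system adds the transverse-gradient and shear terms.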
#### MATH 105A

##### Final - Practice 1 | Winter '14 | Fider, Butenko

1. Estimate the area under $f(x) = \sin(x)$ on the interval $[0, \pi]$ by computing the Riemann sum using three subintervals and left endpoints.

2. Evaluate the integral ... Show all work.

3. Write the integral ... as a limit of Riemann sums using right endpoints.

4. A function for the basal metabolism rate, in kcal/h, of a young man is $R(t)$, where $t$ is the time in hours measured from 5:00 AM. What does the integral $\int_0^{24} R(t)\,dt$ represent? What are the units?

5. (a) Given ... (b) Let ... Find $h'(x)$.

6. Evaluate ...

7. Suppose $\int_1^{e^2} f(z)\,dz = 10$. Evaluate $\int_0^1 e^{2x} f(e^{2x})\,dx$.

8.-10. Evaluate the integral ... Show all work.

11. Evaluate the integral $\int e^{2x} \sin(\pi x)\,dx$. Show all work.

12.-14. Evaluate the integral ... Show all work.

15. Evaluate the integral $4\int \sec^{20}(x) \tan^5(x)\,dx$. Show all work.

16.-17. Evaluate the integral ... Show all work.

18. Determine whether the improper integral $\int_0^\infty r e^{-3r}\,dr$ is convergent or divergent. Evaluate the integral if convergent, or explain why it diverges. Show all work.

19. Determine whether the improper integral ... is convergent or divergent. Evaluate the integral if convergent, or explain why it diverges. Show all work.

20. A particle moves along a line with velocity function $v(t) = \cos t$, where $v$ is measured in feet per hour. Find (a) the displacement and (b) the distance traveled by the particle during the time interval ...

21-22. Let $R$ be the region bounded by the graphs of $x = 2y^2$ and $x = 4 + y^2$.
(a) Sketch the region $R$ and find its area.
(b) Set up an integral to compute the volume of the solid generated by revolving the region $R$ (from part (a)) about the $y$-axis. Do not evaluate the integral!
(c) Set up an integral to compute the volume of the solid generated by revolving the region $R$ (from part (a)) about the line $x = 1$. Do not evaluate the integral!

23. Find the exact length of the curve ...

24. Find the average of the function ... over the interval $[2, 3]$. Show all work.

25. Using integration find the area of the triangle with vertices $A = (-2,-4)$, $B = (1, 5)$, $C = (10,-1)$ and sides $AB: 3x - y = -2$, $BC: 2x + 3y = 17$, $CA: x - 4y = 14$.

26. For the sequence ... determine if the sequence is (a) monotone, (b) bounded, and (c) what conclusion can you make based on (a) and (b)?

27. Use the Squeeze Theorem to show that the sequence ... converges.

28. Determine the general term formula for the sequence ... Use the formula to find the $100^{th}$ term.

29-31. For each of the following sequences ... If a limit doesn't exist, explain why not. Show all work.
29. $a_n = (-2)^n$
30. ...
31. $a_n = \sin(2\pi n)$

32. Find the sum ...

33. Use the Divergence Test to determine that the series ... is divergent. Show all work.

34. Use the Alternating Series Test to determine whether the series ... is convergent or divergent. Show all work.

35. Use the Direct or Limit Comparison Test to determine whether the series ... is convergent or divergent. Show all work.

36. Use the Ratio or Root Test to determine whether the following series is convergent or divergent. Show all work.

37. Use any Convergence Test to determine whether the following series is convergent or divergent. Show all work.

39. Find the first three nonzero terms of the Taylor series expansion of $f(x) = \ln(x)$ about $x = e$.

40. Find the Maclaurin series for $f(x) = e^x + e^{2x}$.

Source: https://www.studyform.com/uci/MATH105A/finalp1-wi14-Fider_Butenko
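Problem 1 above can be checked numerically; a small sketch (the helper below is generic, not part of the exam):

```python
# Left-endpoint Riemann sum: dx = (b - a)/n, sampling at a, a+dx, ..., a+(n-1)dx.
import math

def left_riemann(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

approx = left_riemann(math.sin, 0.0, math.pi, 3)
print(approx)  # (pi/3)*(sin 0 + sin(pi/3) + sin(2pi/3)) = pi*sqrt(3)/3 ~ 1.8138
```

The exact area is $\int_0^\pi \sin x\,dx = 2$, so three left subintervals already land within about 10%.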
Świat w płomieniach (English: White House Down) is a 2013 American action thriller directed by Roland Emmerich and produced by Columbia Pictures. The film had its world premiere on 28 June 2013; the Polish premiere took place on the same day.

Plot
Criminals break into the White House. The fate of the President of the United States (Jamie Foxx) rests in the hands of agent John Cale (Channing Tatum).

Cast
Channing Tatum as John Cale
Jamie Foxx as President James Sawyer
Maggie Gyllenhaal as Alice Dawson
Jason Clarke as Emil Stenz
Richard Jenkins as Eli Raphelson
James Woods as Martin Walker
Garcelle Beauvais as the First Lady of the United States
Lance Reddick as Colonel Janowitz
Joey King as Emily Cale
Rachelle Lefevre as Melanie
Michael Murphy as Vice President Alvin Hammond
Nicolas Wright as Donnie Donaldson

References
Bibliography
American thrillers
American films of 2013
Films directed by Roland Emmerich
American action films
American drama films
Centropolis Entertainment films
Clubiona mikhailovi is a species of spider described by Christa L. Deeleman-Reinhold in 2001. Clubiona mikhailovi belongs to the genus Clubiona and the family Clubionidae (sac spiders). No subspecies are listed in the Catalogue of Life.

Sources
External links
Sac spiders
mikhailovi
Q: WPF DataGrid AlternatingRowBackground overriding the Style DataTrigger for background

I have a DataGrid where I show many lines of data. To help visually differentiate the rows I've added a background colour to alternating rows. But there are some rows that contain very interesting data that I want to attract the user's attention to, and so I use a Style DataTrigger to highlight those specific rows. My problem is that the alternating background colour takes precedence - only the odd rows (no background colour) show the highlight. Note, this is a data-bound DataGrid using the MVVM pattern (no "code-behind"). The (very cut-down) code is as follows:

<DataGrid ItemsSource="{Binding FilteredTraceMessages, Mode=OneWay}"
          AlternatingRowBackground="AliceBlue" .......>
    <DataGrid.Columns>
        ....
    </DataGrid.Columns>
    <DataGrid.RowStyle>
        <Style TargetType="DataGridRow">
            <Style.Triggers>
                <DataTrigger Binding="{Binding Severity}" Value="Error">
                    <Setter Property="Background" Value="LightSalmon"/>
                </DataTrigger>
                <DataTrigger Binding="{Binding Severity}" Value="Warning">
                    <Setter Property="Background" Value="LemonChiffon"/>
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </DataGrid.RowStyle>
</DataGrid>

A: You have to set the Background on the same precedence level. See the Dependency Property Setting Precedence List.

Delete AlternatingRowBackground="AliceBlue" from the DataGrid and put AlternationCount="2" there. Then add a trigger for ItemsControl.AlternationIndex in first place, so that the severity DataTriggers declared after it take precedence.

<DataGrid ItemsSource="{Binding FilteredTraceMessages, Mode=OneWay}"
          AlternationCount="2" .......>
    <DataGrid.Columns>
        ....
    </DataGrid.Columns>
    <DataGrid.RowStyle>
        <Style TargetType="DataGridRow">
            <Style.Triggers>
                <Trigger Property="ItemsControl.AlternationIndex" Value="1">
                    <Setter Property="Background" Value="AliceBlue"/>
                </Trigger>
                <DataTrigger Binding="{Binding Severity}" Value="Error">
                    <Setter Property="Background" Value="LightSalmon"/>
                </DataTrigger>
                <DataTrigger Binding="{Binding Severity}" Value="Warning">
                    <Setter Property="Background" Value="LemonChiffon"/>
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </DataGrid.RowStyle>
</DataGrid>
## College Algebra (11th Edition)

$\log_2x+\log_2y-\log_2t-\log_2q-\log_2r$

$\bf{\text{Solution Outline:}}$ Use the properties of logarithms to rewrite the given expression, $\log_2\dfrac{xy}{tqr}.$

$\bf{\text{Solution Details:}}$ Using the Quotient Rule of Logarithms, which is given by $\log_b \dfrac{x}{y}=\log_bx-\log_by,$ the expression above is equivalent to \begin{array}{l} \log_2(xy)-\log_2(tqr) .\end{array} Using the Product Rule of Logarithms, which is given by $\log_b (xy)=\log_bx+\log_by,$ the expression above is equivalent to \begin{array}{l} \log_2(xy)-(\log_2t+\log_2q+\log_2r) \\\\= \log_2(xy)-\log_2t-\log_2q-\log_2r \\\\= \log_2x+\log_2y-\log_2t-\log_2q-\log_2r .\end{array}

Source: https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-4-section-4-3-logarithmic-functions-4-3-exercises-page-424/72
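The expansion can be sanity-checked numerically with arbitrary positive sample values (the numbers below are illustrative; any positive choice works):

```python
# Verify log2(xy/(tqr)) = log2 x + log2 y - log2 t - log2 q - log2 r numerically.
from math import log2

x, y, t, q, r = 3.0, 5.0, 2.0, 7.0, 11.0
lhs = log2(x * y / (t * q * r))
rhs = log2(x) + log2(y) - log2(t) - log2(q) - log2(r)
print(abs(lhs - rhs) < 1e-12)  # True
```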
Equivalence Relation is Congruence for Right Operation

Theorem

Every equivalence relation is a congruence for the right operation.

Proof

Let $\mathcal R$ be an equivalence relation on the structure $\left({S, \rightarrow}\right)$, where $\rightarrow$ is the right operation:

$x_1 \rightarrow y_1 = y_1$
$x_2 \rightarrow y_2 = y_2$

Suppose $x_1 \mathop {\mathcal R} x_2 \land y_1 \mathop {\mathcal R} y_2$.

Since the right operation returns its second operand, the two results are $y_1$ and $y_2$, which are related by hypothesis. It follows directly that:

$\left({x_1 \rightarrow y_1}\right) \mathop {\mathcal R} \left({x_2 \rightarrow y_2}\right)$

$\blacksquare$

Source: https://proofwiki.org/wiki/Equivalence_Relation_is_Congruence_for_Right_Operation
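The "follows directly" step can also be spot-checked mechanically; congruence modulo 3 below is just one example equivalence (any would do):

```python
# Right operation: x -> y = y. If x1 R x2 and y1 R y2, then
# (x1 -> y1) R (x2 -> y2), since the result depends only on the second operand.
def right_op(x, y):
    return y

def related(a, b, m=3):          # example equivalence: congruence mod m
    return a % m == b % m

ok = all(
    related(right_op(x1, y1), right_op(x2, y2))
    for x1 in range(6) for x2 in range(6)
    for y1 in range(6) for y2 in range(6)
    if related(x1, x2) and related(y1, y2)
)
print(ok)  # True
```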
# The ASI294MM-Pro

##### Properties of my ASI294MM-Pro

For an upcoming Echelle spectrograph I bought a new detector, an ASI294MM-Pro. This is my first CMOS chip. I want to present my observations on this detector to show the effect of technical progress.

Please find a more extensive evaluation of the asi294mm-pro at Christian Buil's site.

The ASI294 is operated under Linux on an INDI server. indi_asi_ccd is used as the driver. The client is ccdciel. Gain was 200, offset 8. I have reproduced all measurements with my old Atik460EX as well, using the driver indi_atik_ccd.

###### Bias

I took 30 bias shots with an exposure time of 0.00032 sec at a temperature of -20°C. The arithmetic mean of these frames gives the particular meanbias. The results for the meanbiases and for the single bias frame bias004.fits are:

| Frame              | ASI294   | Atik460 |
| Num Pix (all)      | 11694368 | 6045051 |
| Mean (meanbias)    | 561      | 320     |
| Std Dev (meanbias) | 3.9      | 3.2     |
| Mean (bias0004)    | 572      | 320     |
| Std Dev (bias0004) | 16       | 17      |

I show you the 10th column from the right border.

##### Darks

I took 30 dark frames of 600 s exposure time at a temperature of -20°C with my ASI and 23 with my ATIK. 29 and 22 of them, respectively, were mean averaged, and a median average of these frames was also computed. See the meandark:

There is a light in the dark to better find your way home at night.

The two mean darks are not very different.

| Property | ASI294         | Atik460        |
| Mean     | 570            | 321            |
| Std Dev  | 41             | 104            |
| Low ADU  | ≤ 400: 0 px    | ≤ 300: 0 px    |
| High ADU | ≥ 1200: 735 px | ≥ 1000: 463 px |

I made a simple data frame to see what the unwanted lantern (the amplifier glow) does to weak data. My weak signal is 4096 ADU all over the chip. $\mathrm{sim\_data} = 4096 + \mathrm{remaining\_dark}$, $\mathrm{pure\_data} = \mathrm{sim\_data} - \mathrm{meandark}$. This simple data reduction removes the glow of the amplifier, leaving some noise that comes with every signal. See the characteristics of the resulting frame $\mathrm{pure\_data}$ in the table below, followed by a plot of the 10th rightmost column in $\mathrm{sim\_data}$ and $\mathrm{pure\_data}$ from the ASI cam, and finally the 10th-to-last column of $\mathrm{pure\_data}$ from CMOS vs. CCD.

| Property | ASI294   | Atik460  |
| mean     | 4087.020 | 4095.983 |
| median   | 4086.483 | 4095.958 |
| std dev  | 31.66    | 19.59    |

-> comparison of flatfields.

Source: https://spektros.de/asi294.php
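The $\mathrm{pure\_data} = \mathrm{sim\_data} - \mathrm{meandark}$ reduction can be sketched with synthetic per-pixel numbers (the dark level and noise widths below are loosely modeled on the tables above, not the actual frames):

```python
# Simulate a flat 4096 ADU signal on top of a dark frame, then subtract
# the mean dark: the fixed glow pattern cancels, per-frame noise remains.
import random

random.seed(1)
n = 100                                                          # toy "column" of pixels
meandark = [570.0 + random.gauss(0.0, 5.0) for _ in range(n)]    # fixed dark pattern
remaining = [d + random.gauss(0.0, 30.0) for d in meandark]      # one fresh dark frame
sim_data = [4096.0 + r for r in remaining]
pure_data = [s - m for s, m in zip(sim_data, meandark)]

mean_pure = sum(pure_data) / n
print(mean_pure)   # close to 4096: the glow is gone, only noise is left
```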
Q: How can I get URL parameters in Python?

I have tried a lot of things but nothing is working. It always gives me the "Incomplete response received from application" message in the browser. My code is:

import sys
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def application():
    uri = request.args.get('url')
    message = 'It works!\n'
    response = '\n'.join([message, uri])
    return response

The problem is either in the @app.route('/') line or in the uri = request.args.get('url'). I just want to call it from the browser like http://example.com/script/?url=hello. I tried changing @app.route('/') to @app.route('/script') and @app.route('/script/') but nothing is working... any ideas? Thanks a lot!

A: For future readers: note that the original question has been edited in response to this suggestion.

First issue: you seem to be using some very low-level WSGI implementation when Flask does a lot of the sugar for you. Consider testing with a function that lets Flask do the work and then expand as needed.

import sys
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def test():
    uri = request.args.get('url')
    message = 'It works!\n'
    version = 'Python %s\n' % sys.version.split()[0]
    response = '\n'.join([message, version, uri])
    return response

Next, keep in mind that Flask wants a string return type. If you want to pass a data structure back, consider jsonify.
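Under the hood, `request.args.get('url')` is ordinary query-string parsing, which you can reproduce with the standard library alone (`get_query_param` is a made-up helper for illustration, not a Flask API):

```python
# Parse ?url=hello out of a URL, roughly the way request.args.get would.
from urllib.parse import urlparse, parse_qs

def get_query_param(url, name, default=None):
    params = parse_qs(urlparse(url).query)   # e.g. {'url': ['hello']}
    values = params.get(name)
    return values[0] if values else default

print(get_query_param('http://example.com/script/?url=hello', 'url'))   # hello
print(get_query_param('http://example.com/script/', 'url', 'missing'))  # missing
```

This also shows why the original code fails when the parameter is absent: `request.args.get('url')` returns `None`, and `'\n'.join([message, None])` raises a `TypeError`, which the WSGI server then reports as an incomplete response.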
Nusantara may refer to:
Nusantara, a term in medieval Javanese literature for the territory of the Majapahit empire, centred on Java.
Nusantara, an oriental-studies society in Russia.
Nusantara, the planned future capital of Indonesia.
The Great Malay Nusantara, a literary association in Malaysia.
The Primeira Divisão 1962/63 was the 29th season of the top division of Portuguese men's football. It began on 20 October 1962 and ended on 12 May 1963. Benfica Lisbon won the Portuguese championship for the 12th time.

Teams
14 teams played each other twice, once at home and once away, over a total of 26 matchdays.

Final table
Cross table
External links
Portugal 1962-63 at rsssf.com
Statistics at fussballzz.de
References
Primeira Liga seasons
Football season 1962/63
Operation Cottage was a U.S. Army operation to retake the island of Kiska from Japanese forces during the Pacific campaign of World War II. It lasted from 15 to 24 August 1943.

Background
Kiska had been under Japanese occupation since the summer of 1942, when Japanese marines landed on the island and destroyed a U.S. Navy weather station. A substantial garrison, estimated by American intelligence at about 10,000 men, was later stationed there. During the occupation the Japanese lost some 2,500 men killed on and around the island. Taking Kiska was meant to bring the Aleutian campaign to a close, and the American command, remembering the bloody battle of Attu, prepared a considerable force for the landing. More than 100 ships assembled in the area of Adak island, and the landing force numbered 30,000 American and 5,500 Canadian infantrymen. In addition, starting in late July, Kiska was subjected to air raids and naval bombardment: in July alone the 11th Air Force dropped 424 tons of bombs on the island, while naval artillery fired 330 tons of shells. On 13 August a rehearsal landing was carried out on Adak. The operation was set for 15 August.

The operation
On the morning of 15 August the first group of American troops landed on the western shore of the island; on 16 August the Canadians landed somewhat further north. The landings met no resistance, which did not surprise the veterans of the battle of Attu: the Americans expected to run into Japanese defensive positions on the commanding heights only after advancing inland. No resistance ever came, however, and the only combat losses among the landing troops were to friendly fire. As it turned out, the Japanese command, realizing that the practically isolated island could not be held, had decided to evacuate the garrison. On 28 July, two weeks before the American landing, the entire garrison of 5,183 men had boarded 2 cruisers and 6 destroyers within an hour and, under cover of fog, withdrawn to Paramushir.

On 24 August the commander of the ground forces, General Charles Corlett, declared the island under U.S. control.

Casualties
While sweeping the island (including its many underground tunnels) the Americans lost 31 men killed and about 50 wounded, mostly to friendly fire; another 130 soldiers suffered from trench foot. In addition, 71 sailors were killed and 47 wounded when the destroyer Abner Read struck a Japanese mine.

Notes
See also
Aleutian Islands campaign
External links
Aleutian Islands War: June 3, 1942 - August 24, 1943
Operation Cottage, canadiansoldiers.com
Operations and battles of World War II
Battles of Canada in World War II
Battles of the Pacific Ocean
Aleutian Islands campaign
Conflicts of 1943
August 1943
Events of 15 August
Friendly fire
Bogusław Marian Liberadzki (born 12 September 1948 in Sochaczew) is a Polish politician of the Democratic Left Alliance.

Life
Liberadzki studied economics. He is a professor at the Szkoła Główna Handlowa w Warszawie (Warsaw School of Economics). From 1993 to 1997 he was Poland's Minister of Transport and Maritime Economy. From 1997 to 2004 he served as a deputy of the Sejm of the Republic of Poland. Since 2004 Liberadzki has been a Member of the European Parliament, and since 2017 one of its Vice-Presidents. Liberadzki is married and has two children.

References
External links
Ministers (Poland)
Members of the European Parliament for Poland
SLD members
Party officials (Poland)
University teachers (Warsaw)
Poles
Born 1948
Men
Vice-Presidents of the European Parliament
\section{Introduction} \label{sec:intro} The anomalous magnetic moment of the muon, $a_\mu \equiv (g_\mu-2)/2$, provides a stringent test of the Standard Model and a sensitive probe of new particles and forces beyond. It has been measured by BNL Experiment E821 to a precision of 0.54 parts-per-million~\cite{Bennett:2006fi}, and the experimental result disagrees with Standard-Model theory expectations by more than three standard deviations~\cite{Blum:2013xva}. To investigate this discrepancy, the Muon $g-2$ Experiment at Fermilab aims to reduce the experimental error by a factor of four, with a first result competitive with E821 expected within a year~\cite{Grange:2015fou}. To identify definitively whether any deviation observed between theory and experiment is due to new particles or forces, the theory error must be reduced to a commensurate level. The largest source of theory uncertainty is from the \order($\alpha_{\rm EM}^2$) hadronic vacuum-polarization contribution~\cite{Blum:2013xva}, which is shown in Fig.~\ref{fig:HVP} and is denoted by $a_\mu^{\rm HVP}$\ throughout this work. A well-established method for determining this contribution employs dispersion relations combined with experimentally measured electron-positron inclusive scattering cross-section data. Recent determinations from this approach quote errors of 0.4--0.6\%~\cite{Jegerlehner:2017lbd,KNT17,Davier:2017zfy}. Numerical lattice quantum chromodynamics (QCD) provides a complementary, systematic approach for calculating $a_\mu^{\rm HVP}$\ directly from first-principles QCD. Several independent lattice-QCD efforts to obtain $a_\mu^{\rm HVP}$\ are ongoing~\cite{Aubin:2006xv,Chakraborty:2016mwy,DellaMorte:2017dyu,BMW17,Lehner:2017kuc}, with errors on recent determinations ranging from about 2--6\%~\cite{Burger:2013jya,Chakraborty:2016mwy,DellaMorte:2017dyu,BMW17}. The theoretical precision on $a_\mu^{\rm HVP}$\ needed to match the target experimental uncertainty is about 0.2\%. 
\begin{figure}[tb] \centering \includegraphics[width=0.2\textwidth]{./LO.pdf} \caption{ Leading hadronic contribution to $a_\mu$. The shaded circle denotes all corrections to the internal photon propagator from the vacuum polarization of $u$, $d$, $s$, $c$, and $b$ quarks in the leading one-loop muon vertex diagram.} \label{fig:HVP} \end{figure} In this Letter, we remove one of the largest systematic errors common to all current lattice-QCD calculations of $a_\mu^{\rm HVP}$, namely that arising from the use of degenerate up- and down-quark masses. We do so by calculating directly the strong-isospin-breaking contribution to the light-quark-connected contribution to $a_\mu^{\rm HVP}$. Phenomenological estimates suggest that the effect of strong-isospin breaking on $a_\mu^{\rm HVP}$\ is about 1\%~\cite{Wolfe:2010gf,Jegerlehner:2011ti,Jegerlehner:2017gek}. Electromagnetic effects are also not yet included in lattice-QCD calculations of $a_\mu^{\rm HVP}$\, and lead to a similar-sized uncertainty~\cite{Hagiwara:2003da,Davier:2009ag}. In order to disentangle quark-mass from electromagnetic effects, we define the strong-isospin-breaking correction using up- and down-quark masses tuned from experimental hadron masses with QED effects removed~\cite{Basak:2016jnn}. The effect of strong-isospin breaking on the light- and strange-quark connected contributions to $a_\mu^{\rm HVP}$\ has been calculated in an exploratory study by the RBC/UKQCD Collaborations~\cite{Boyle:2017gzv} in three-flavor lattice-QCD, with a heavy pion mass of about 340~MeV, and isospin-symmetric sea quarks. Preliminary four-flavor results for the strong-isospin-breaking contribution to $a_\mu^{\rm HVP}$\ have also been presented by the ETM Collaboration~\cite{Giusti:2017ier} for several pion masses down to $M_\pi \sim 210$~MeV, but with low statistics. 
In this Letter, we analyze two QCD gauge-field ensembles recently generated by the MILC Collaboration with four flavors of highly improved staggered (HISQ) sea quarks and very high statistics; see Ref.~\cite{Bazavov:2012xda} for methodology. One of the ensembles has fully nondegenerate quark masses with the $u,d,s,$ and $c$ quarks all fixed to their physical values. Our calculation is the first determination of $\delta a_\mu^{{\rm HVP,} m_u \neq m_d}$\ at the physical pion mass and with sea isospin breaking. This Letter is organized as follows. We first present our numerical lattice-QCD calculation in Sec.~\ref{sec:LatSims}. Next, in Sec.~\ref{sec:analysis}, we calculate the strong-isospin-breaking correction to $a_\mu^{\rm HVP}$\ and discuss the contributions to the systematic error. We present our final result and compare it with phenomenological estimates and previous lattice-QCD calculations in Sec.~\ref{sec:conclusions}. \section{Lattice calculation} \label{sec:LatSims} We calculate the strong-isospin-breaking correction to $a_\mu^{\rm HVP}$\ on two new QCD gauge-field ensembles generated by the MILC Collaboration with four flavors of HISQ quarks~\cite{Follana:2006rc,Bazavov:2012xda}. Table~\ref{tab:ensembles} summarizes key parameters of the configurations. The two ensembles have the same lattice spacing, which is approximately 0.15~fm, and the same strange- and charm-quark masses, which are both fixed close to their physical values. With staggered quarks, the pions possess an additional ``taste'' quantum number. Discretization errors from the HISQ action generate $\order(\alpha_s^2 a^2)$ corrections to the squared sea-pion masses of different tastes. On both ensembles, the mass of the taste-Goldstone $\bar{u}d$ pion is fixed close to Nature's value of $M_{\pi^0} \approx 135$~MeV, which is the mass that the charged pion would have in the absence of electromagnetism. The root-mean-squared pion mass (averaging over tastes) is about $300$~MeV.
\begin{table}[tb] \caption{Parameters of the QCD gauge-field ensembles. The first column shows the ratio of the lattice spacing to the gradient-flow scale $w_0$~\cite{Borsanyi:2012zs}. To convert quantities in lattice-spacing units to GeV, we use $w_0=0.1715(9)$~fm~\cite{Dowdall:2013rya}. The next columns show the bare lattice up, down, strange, and charm sea-quark masses in lattice-spacing units and the number of configurations analyzed. The last column gives the taste-Goldstone sea-pion mass in GeV on each ensemble obtained from fits of pseudoscalar-current two-point correlators as in Ref.~\cite{Bazavov:2012xda}. Both ensembles have the same volume $N_s^3 \times N_t = 32^3 \times 48$ in lattice-spacing units. \vspace{1mm}} \label{tab:ensembles} \begin{ruledtabular} \begin{tabular}{lccc} $w_0/a$ & $am_u/am_d/am_s/am_c~(\times 10^{2})$ & $N_{\rm conf.}$ & $M_{\pi}$~\!(GeV) \\ \hline 1.13215(35) & 0.2426/0.2426/6.73/84.47 & 1902 & 0.1347(7) \\ 1.13259(38) & 0.1524/0.3328/6.73/84.47 & 4963 & 0.1346(7) \end{tabular} \end{ruledtabular} \end{table} The two ensembles differ in one key feature: the values of the light sea-quark masses. Ensemble 1 is isospin symmetric, with the two light sea-quark masses equal and fixed to $m_l = (m_u + m_d)/2$. Ensemble 2 features isospin breaking; here the two light sea-quark masses have the same average as on ensemble 1, but their ratio is fixed to the value of $m_u/m_d$ determined from the MILC Collaboration's study of pion and kaon electromagnetic mass splittings within the quenched approximation of QED~\cite{Basak:2016jnn}. Because the up and down sea-quark masses on this ensemble each take their physical values, a chiral extrapolation is not needed in our analysis. Comparing results on the two ensembles enables us to quantify the (tiny) effects of sea isospin breaking. Our analysis strategy closely follows that of Ref.~\cite{Chakraborty:2016mwy}. 
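As a quick consistency check on Table~\ref{tab:ensembles}, the lattice spacing can be recovered from the quoted ratio $w_0/a$. The sketch below is our own script (not part of the analysis code), assuming the central value $w_0 = 0.1715$~fm:

```python
# Recover the lattice spacing a from the gradient-flow ratio w0/a in Table I.
# W0_FM is the assumed physical value of w0 (central value only).
W0_FM = 0.1715  # fm

def lattice_spacing_fm(w0_over_a):
    """Lattice spacing in fm for a given dimensionless ratio w0/a."""
    return W0_FM / w0_over_a

for w0_over_a in (1.13215, 1.13259):
    print(f"w0/a = {w0_over_a}: a = {lattice_spacing_fm(w0_over_a):.4f} fm")
```

Both ensembles give $a \approx 0.151$~fm, consistent with the approximate 0.15~fm quoted in the text.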
On each ensemble, we calculate vector-current correlators $\langle j_\mu(\mathbf{x},t) j_\mu(\mathbf{0},0) \rangle$ with zero spatial momentum and all four combinations of local and spatially smeared interpolating operators at the source and sink. The smearing function is given in Eq.~(A1) of Ref.~\cite{Chakraborty:2016mwy}, and we employ the same smearing parameters as in that work. To determine the quark-mass dependence of $a_\mu^{\rm HVP}$, we compute correlators with three valence-quark masses $m_q = \{m_u, (m_u+m_d)/2, m_d\}$. With staggered quarks, the local vector current is not the conserved current, and must be renormalized. The renormalization factor $Z_{V}^{qq}$ for HISQ quarks, however, has only mild quark-mass dependence so it cancels when the strong-isospin-breaking correction is calculated as a percentage shift. Additional details on the correlator construction and wavefunction smearings can be found in Ref.~\cite{Chakraborty:2016mwy}. We fit the $2\times 2$ matrix of correlators together using the multiexponential parametrization in Eq.~(A6) of Ref.~\cite{Chakraborty:2016mwy}. The fit function includes contributions from both vector and opposite-parity states that arise with staggered valence quarks. The smeared correlators have smaller overlap with excited states than the local-local correlator, and therefore improve the determination of the energies and amplitudes. We fit the correlators over the symmetric time range [$t_{\rm min}, T-t_{\rm min}$], thereby ensuring that the fit describes the correlator over the entire lattice time extent $T$. To reduce the degrees of freedom in the fit, in practice we average the correlator at times $t$ and $T-t$ and fit only to the lattice midpoint; we also average the smeared-source, local-sink correlator with the local-source, smeared-sink correlator. 
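The time-averaging step described above, folding the periodic correlator about the lattice midpoint, can be sketched generically; this toy illustration with a cosh correlator is our own, not the collaboration's analysis code.

```python
import numpy as np

def fold(corr):
    """Average a periodic correlator at times t and T-t.

    Returns the folded correlator on t = 0..T/2, halving the number of
    data points that enter the fit.
    """
    T = len(corr)
    # np.roll(corr[::-1], 1)[t] == corr[(T - t) % T]
    folded = 0.5 * (corr + np.roll(corr[::-1], 1))
    return folded[: T // 2 + 1]

# Toy example: a symmetric cosh correlator (typical of a periodic lattice)
# is unchanged by folding.
T, E = 48, 0.6
t = np.arange(T)
corr = np.cosh(E * (t - T / 2))
folded = fold(corr)
print(len(folded))  # 25 points, t = 0..T/2
```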
Because our limited number of configurations does not enable us to reliably determine the smallest eigenvalues of the correlation matrix, we employ singular-value-decomposition (SVD) cuts with the values chosen to obtain stable fits with good correlated $\chi^2$ values. In practice, we replace all eigenvalues below the cut with the value of the SVD cut times the largest eigenvalue; this prescription increases the variance of the eigenmodes associated with the replaced eigenvalues and, thus, the errors on the fit parameters. We choose the number of states and fit range based on the stability of the ground-state and first-excited-state energies and amplitudes. For both ensembles and all valence-quark masses, we obtain good correlated fits with stable central values and errors using $t_{\rm min}/a \geq 3$, $N_{\rm states} \geq 3$, and an SVD cut of 0.015, which modifies about 40\% of the eigenvalues of the correlation matrix. For each of our six fits, the contribution to the $\chi^2$ from the 66 correlator data points ranges from about 45 to 80. Although the lowest-energy states in the vector-current correlators are $I=1$ $\pi\pi$ pairs, we do not see any evidence of such states in our two-point correlator fits. This is not surprising because there are only a few $\pi\pi$ states below the $\rho$ mass in these correlators, and their amplitudes are suppressed by the reciprocal of the spatial volume. The ground-state energies for the correlators with $m_q = m_l$ are $E_0 = 776.7(6.5)$~MeV and $E_0 = 779.4(5.1)$~MeV on the $N_f=2+1+1$ and $N_f = 1+1+1+1$ ensembles, respectively; these are statistically consistent with the PDG average for the Breit-Wigner mass $M_{\rho^0} = 775.26(25)$~MeV~\cite{Olive:2016xmw}. Following Ref.~\cite{Chakraborty:2016mwy}, we reduce the statistical errors in $a_\mu^{\rm HVP}$\ by replacing the correlator data at large times by the result of the multiexponential fit. 
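The SVD-cut prescription described above amounts to flooring the eigenvalue spectrum of the data correlation matrix. A minimal sketch (our own illustration, not the fitting code used in this work):

```python
import numpy as np

def apply_svd_cut(corr_matrix, svd_cut):
    """Replace eigenvalues below svd_cut * lambda_max by that floor value.

    This inflates the variance assigned to the noisiest eigenmodes, and hence
    the errors on the fit parameters, without touching well-determined modes.
    """
    evals, evecs = np.linalg.eigh(corr_matrix)
    floor = svd_cut * evals.max()
    clipped = np.maximum(evals, floor)
    return evecs @ np.diag(clipped) @ evecs.T

# A nearly singular 2x2 correlation matrix: eigenvalues 0.01 and 1.99.
C = np.array([[1.0, 0.99], [0.99, 1.0]])
C_cut = apply_svd_cut(C, 0.015)
print(np.linalg.eigvalsh(C_cut))  # smallest eigenvalue raised to 0.015 * 1.99
```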
Although the fit function is appropriate for the periodic lattice temporal boundary conditions, we correct for the finite lattice temporal size by using the infinite-time fit function and doubling the correlator extent to $t=2T$. We use the fitted correlator above $t^* = 1.5$~fm; with this choice, roughly 80\% of the value of $a_\mu^{\rm HVP}$\ comes from the data region. The values of $a_\mu^{\rm HVP}$\ computed with $G_{\rm fit}(t)$\ for $t^* \geq 1.5$~fm agree within $\sim 1\sigma$ with those computed entirely from data, but with more than ten times smaller statistical errors for $m_q = m_u$. \section{Analysis} \label{sec:analysis} We calculate $a_\mu^{\rm HVP}$\ using the method introduced by the HPQCD Collaboration~\cite{Chakraborty:2014mwa}, in which one constructs the $[n,n]$ and $[n,n-1]$ Pad{\'e}\ approximants for the renormalized hadronic vacuum polarization function [$\widehat\Pi(q^2)$] from time moments of zero-momentum vector-current correlation functions. These moments are proportional to the coefficients $\Pi_{j}$ in a Taylor expansion of $\widehat\Pi(q^2)$ around $q^2=0$. The true result is guaranteed to lie between the $[n,n]$ and $[n,n-1]$ Pad{\'e}\ approximants. We employ the $[3,3]$ Pad{\'e}\ approximant for $\widehat\Pi(q^2)$ obtained from the first six Taylor coefficients; the values of $a_\mu^{\rm HVP}$\ computed from the $[3,2]$ and $[3,3]$ Pad{\'e}\ approximants differ by $0.1 \times 10^{-10}$. In Ref.~\cite{Chakraborty:2016mwy}, the $[n,n]$ and $[n,n-1]$ Pad{\'e}\ approximants for $\widehat\Pi(q^2)$ are constructed from rescaled Taylor coefficients $\Pi_{j} \times (E_0/M_{\rho^0})^{2j}$, where $E_0$ is the ground-state energy obtained from the two-point correlator fits. The rescaling was found to reduce the valence-quark-mass dependence of $a_\mu^{\rm HVP}$\ because the $\rho$-meson pole dominates the vacuum polarization. 
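The Pad{\'e} step described above can be illustrated generically: given the Taylor coefficients of a function, the $[m/n]$ approximant follows from a small linear solve. The helper below is our own sketch (not the analysis code) and uses the geometric series as a test case.

```python
import numpy as np

def pade(c, m, n):
    """Coefficients (a, b) of the [m/n] Pade approximant, with b[0] = 1.

    c[0..m+n] are the Taylor coefficients of the target function.
    """
    c = np.asarray(c, dtype=float)
    # Denominator: solve c[m+k] + sum_j b[j] c[m+k-j] = 0 for k = 1..n.
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator follows by matching the first m+1 Taylor coefficients.
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

# Taylor series of 1/(1-x) is 1 + x + x^2 + ...; its [1/1] Pade is exact.
a, b = pade([1.0, 1.0, 1.0, 1.0], 1, 1)
val = np.polyval(a[::-1], 0.5) / np.polyval(b[::-1], 0.5)
print(val)  # 2.0 = 1/(1 - 0.5)
```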
In addition, the rescaling cancels most of the error from the uncertainty on the lattice scale $w_0$, which enters via the muon mass present in the one-loop QED integral for $a_\mu^{\rm HVP}$. Figure~\ref{fig:amu_vs_mval} shows $a_\mu^{\rm HVP}$\ on the $(1+1+1+1)$-flavor ensemble at the up, down, and average light-quark masses. The valence-quark-mass dependence is statistically well resolved because the three points are strongly correlated, and is smaller after rescaling. \begin{figure}[tb] \centering \includegraphics[width=0.4\textwidth]{./amu_RS_vs_mval} \caption{Valence-quark-mass dependence of the light-quark-connected contribution to $a_\mu^{\rm HVP}$\ on the $N_f=1+1+1+1$ ensemble without rescaling (open symbols) and with rescaling each $\Pi_j^{(qq)}$ by $(E_0/M_{\rho^0})^{2j}$ (closed symbols). From left to right, the pairs of data points correspond to $m_u$, $m_l = (m_u+m_d)/2$, and $m_d$; each pair of data points is horizontally offset for clarity. The values of $a_\mu^{qq}$ include the charge factor $q_u^2 + q_d^2 = 5/9$ appropriate for the isospin-symmetric case.} \label{fig:amu_vs_mval} \end{figure} The physical value of the light-quark-connected contribution to $a_\mu^{\rm HVP}$\ is given by the sum of $a_\mu^{\rm HVP}$\ with two up quarks in the vector current and with two down quarks in the vector current weighted by the square of the quarks' electromagnetic charges: \begin{equation} a_\mu^{\rm phys.} = \frac{4 a_\mu^{uu} + a_\mu^{dd}}{5} \,. \end{equation} We then define the absolute shift with respect to the isospin-symmetric value as $\Delta a_\mu^{{\rm HVP,} m_u \neq m_d} \equiv a_\mu^{\rm phys.} - a_\mu^{ll}$, and the relative correction to be \begin{equation} \delta a_\mu^{{\rm HVP,} m_u \neq m_d} = \frac{\Delta a_\mu^{{\rm HVP,} m_u \neq m_d}}{a_\mu^{ll}} \,, \label{eq:delIB_def} \end{equation} where $m_l \equiv (m_u + m_d) / 2$. 
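The charge weighting and relative shift defined in the two equations above are straightforward to evaluate; the sketch below uses purely illustrative inputs (hypothetical numbers, not our measured values).

```python
def a_mu_phys(a_uu, a_dd):
    """Charge-weighted light-quark-connected contribution: (4 a_uu + a_dd) / 5."""
    return (4.0 * a_uu + a_dd) / 5.0

def delta_rel(a_uu, a_dd, a_ll):
    """Relative strong-isospin-breaking correction (a_phys - a_ll) / a_ll."""
    return (a_mu_phys(a_uu, a_dd) - a_ll) / a_ll

# Illustrative inputs in units of 1e-10 (not our data).
a_uu, a_dd, a_ll = 660.0, 620.0, 645.0
print(f"relative correction: {100 * delta_rel(a_uu, a_dd, a_ll):+.2f}%")
```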
Table~\ref{tab:amu_IB} summarizes the isospin-breaking shifts on the $N_f=2+1+1$ and $N_f=1+1+1+1$ ensembles, both before and after rescaling the Taylor coefficients. As expected, we do not observe any significant difference between the two ensembles. The leading sea isospin-breaking contributions to $a_\mu^{\rm HVP}$\ are quadratic in the difference $(m_d - m_u)$; taking $\Lambda_{\rm QCD} = 300$~MeV gives a rough power-counting estimate of their size as $(m_u - m_d)^2/\Lambda_{\rm QCD}^2 \sim 0.01\%$. The differences in $a_\mu^{\rm HVP}$\ are smaller with rescaling because the valence-quark-mass dependence is milder. The errors on the shifts in Table~\ref{tab:amu_IB} stem primarily from the two-point correlator fits. The parametric errors from the lattice spacing are about a percent before rescaling, and are twenty times smaller with rescaling. The parametric errors from the current renormalization factor are $\sim$0.2\%. The uncertainty due to the use of Pad{\'e} approximants, which we take to be the difference between $a_\mu^{\rm HVP}$\ obtained with the $[3,3]$ and $[3,2]$ approximants, is about a percent. The 2.7\% uncertainty on the ratio $m_u/m_d$ in Ref.~\cite{Basak:2016jnn} stems largely from the estimate of electromagnetic effects, and leads to errors of about 2\% and 1\% on the physical up- and down-quark masses, respectively. Propagating the tuned quark-mass uncertainties to the physical $a_\mu^{\rm HVP}$\ using the measured slope of $a_\mu^{\rm HVP}$\ with respect to valence-quark mass changes the shifts in Table~\ref{tab:amu_IB} by $\sim 0.2$--$0.3\times 10^{-10}$ ($\ltapprox 0.1\times 10^{-10}$) without (with) rescaling. Finally, the leading finite-volume and discretization effects, which arise from one-loop diagrams with $\pi\pi$ intermediate states, cancel in $\Delta a_\mu^{\rm HVP}$ because the charged pions in the loop are sensitive to the average of the up- and down-quark masses. 
Higher-order contributions are suppressed by $m_{ud}/\Lambda_\chi \sim 1\%$, where $\Lambda_\chi$ is a typical chiral perturbation theory scale. We therefore estimate the systematic uncertainties in the shifts in Table~\ref{tab:amu_IB} due to finite-volume and discretization effects to be 1\% times the leading contributions, or $0.5 \times 10^{-10}$. Because the sea-quark-mass dependence of $a_\mu^{\rm HVP}$\ is tiny, we can compare the shift in the ``direct'' points in Fig.~\ref{fig:amu_vs_mval} to the valence-quark-mass dependence observed in Ref.~\cite{Chakraborty:2016mwy}, which analyzes several isospin-symmetric MILC HISQ ensembles at three lattice spacings and with a wide range of pion masses. Figure~3 of that work shows that the ``raw'' data for $a_\mu^{\rm HVP}$\ are approximately linear in $m_q$ from $M_\pi \sim 300$~MeV down to the physical value with a slope that is independent of the lattice spacing. We can therefore estimate the change in $a_\mu^{\rm HVP}$\ that would result from varying $m_q$ between $m_u$ and $m_d$ from the unphysically heavy data in Ref.~\cite{Chakraborty:2016mwy}, and find a value consistent with the difference obtained from our fully physical calculation here. The shifts in Table~\ref{tab:amu_IB} only include contributions from quark-connected diagrams, with quark-disconnected contributions expected to be suppressed by $1/N_c$. We estimate the quark-disconnected contribution to the strong-isospin-breaking correction from one-loop $\pi\pi$ diagrams, which are especially sensitive to changes in the quark masses, within finite-volume chiral perturbation theory. Including the effect of taste splittings between the sea pions, which reduce the isospin-breaking shift, we obtain for the $\pi\pi$-loop contribution $0.7 \times 10^{-10}$. 
To account for resonance and higher-order contributions, we take about three times this value, or $3 \times 10^{-10}$, as the uncertainty on the isospin-breaking shifts in Table~\ref{tab:amu_IB} from missing quark-disconnected contributions. This conservative error estimate is approximately the size of the full quark-disconnected contribution to $a_\mu^{\rm HVP}$\ obtained by the BMW Collaboration on their coarsest ensemble with $a\approx 0.13$~fm and similar taste splittings~\cite{Borsanyi:2017zdw}; we expect the quark-disconnected contribution to the strong-isospin splitting to be smaller. We obtain our final results for the relative correction $\delta a_\mu^{{\rm HVP,} m_u \neq m_d}$\ by averaging the values on the two ensembles. \begin{table}[tb] \caption{Shift in $a_\mu^{\rm HVP}$\ from the isospin-symmetric to the physical valence-quark masses calculated on the ensembles in Table~\ref{tab:ensembles}. Results are shown both without and with rescaling the Taylor coefficients. As explained in the text, the numbers within a column should agree, but the two columns can (and should) differ. Errors shown include statistics and all systematic uncertainties. \vspace{1mm}} \label{tab:amu_IB} \begin{ruledtabular} \begin{tabular}{lc@{\quad}c} & \multicolumn{2}{c}{$10^{10}\Delta a_\mu^{{\rm HVP},m_u\neq m_d}$} \\ $N_f$ & direct & with $E_0$ rescaling \\ \hline 2+1+1 & +7.7(3.7) & +1.9(4.0) \\ 1+1+1+1 & +9.0(2.3) & +2.3(2.5) \\ \end{tabular} \end{ruledtabular} \end{table} \section{Result and outlook} \label{sec:conclusions} We obtain for the relative strong isospin-breaking correction to the light-quark connected contribution to the muon $g-2$ hadronic vacuum polarization \begin{numcases}{\delta a_\mu^{{\rm HVP,} m_u \neq m_d} = } +1.5(7)\% & \mbox{direct,} \label{eq:DeltaIB_Raw}\\ +0.4(7)\% & \mbox{with $E_0$ rescaling,\qquad} \label{eq:DeltaIB_RS} \end{numcases} where the errors include Monte Carlo statistics and all systematics. 
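The final averaging over the two ensembles can be sketched as follows. The inverse-variance weighting shown is an assumption for illustration only (the text states simply that the two values are averaged), with the "direct" shifts of Table~\ref{tab:amu_IB} as inputs.

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its uncertainty (an assumed scheme)."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# "Direct" shifts from Table II, in units of 1e-10.
mean, err = weighted_average([7.7, 9.0], [3.7, 2.3])
print(f"Delta a_mu = {mean:.1f}({err:.1f}) x 10^-10")
```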
Our result without rescaling the Taylor coefficients is consistent with phenomenological estimates of the dominant isospin-breaking contribution from $\rho$--$\omega$ mixing using $e^+e^- \to \pi^+\pi^-$ data~\cite{Wolfe:2010gf,Jegerlehner:2011ti,Jegerlehner:2017gek}, $\Delta a_\mu^{\rho-\omega{\rm \ mix.}} \sim 2$--5$\times 10^{-10}$, and chiral perturbation theory~\cite{Cirigliano:2002pv}, $\Delta a_\mu^{\rho-\omega{\rm \ mix.}} \sim 6 \times 10^{-10}$, although $\rho$--$\omega$ mixing will also include effects from quark-line disconnected diagrams that we do not consider here. Recent exploratory lattice-QCD calculations obtain somewhat smaller estimates for the relative strong isospin-breaking correction of roughly 0.2\%--0.6\% for $M_\pi \gtapprox 340$~MeV~\cite{Giusti:2017ier,Boyle:2017gzv}. We cannot directly compare our result in Eq.~(\ref{eq:DeltaIB_Raw}) with these values, however, because they were obtained with unphysically heavy pions and do not yet include systematic uncertainties.\footnote{Since our paper appeared, the RBC/UKQCD Collaboration obtained a new result for the strong-isospin-breaking shift at the physical pion mass of $10.6(8.0) \times 10^{-10}$~\cite{Blum:2018mom}, which agrees with our ``direct'' values in Table~\ref{tab:amu_IB}.} The percentage shifts in Eqs.~(\ref{eq:DeltaIB_Raw}) and~(\ref{eq:DeltaIB_RS}) can be used to correct any existing or future result for the connected contribution to the hadronic vacuum polarization obtained with degenerate light quarks. Results for $a_\mu^{\rm HVP}$\ obtained without rescaling the Taylor coefficients should be corrected using Eq.~(\ref{eq:DeltaIB_Raw}); this applies to most recent lattice-QCD calculations. Equation~(\ref{eq:DeltaIB_RS}) should be used to correct $a_\mu^{\rm HVP}$\ when $E_0$ rescaling is employed. We have performed the first direct calculation of the strong-isospin-breaking correction to $a_\mu^{\rm HVP}$\ at the physical up- and down-quark masses. 
We obtain an uncertainty on the relative correction of 0.7\%, which is smaller, and also more reliable, than the $\sim 1\%$ phenomenological estimate used in recent lattice-QCD calculations with equal up- and down-quark masses~\cite{Chakraborty:2016mwy,Borsanyi:2016lpl,BMW17}. Thus, it reduces a significant source of uncertainty in $a_\mu^{\rm HVP}$, and is a crucial milestone towards a complete {\it ab-initio} lattice-QCD calculation of the hadronic contributions to $a_\mu$ with the sub-percent precision needed by the Muon $g-2$ and planned J-PARC experiments. To improve our results in Eqs.~(\ref{eq:DeltaIB_Raw}) and~(\ref{eq:DeltaIB_RS}), we will include quark-disconnected contributions, which are the dominant source of uncertainty, in a future work. We will also calculate directly the electromagnetic correction to $a_\mu^{\rm HVP}$\ using dynamical QCD+QED gauge-field configurations to be generated soon by the MILC Collaboration with quarks, gluons, and photons in the sea~\cite{Zhou:2014gga}. \begin{acknowledgments} We thank John Campbell, Vera G{\"u}lpers, Fred Jegerlehner, Laurent Lellouch, and Silvano Simula for useful discussions. Computations for this work were carried out with resources provided by the USQCD Collaboration, the National Energy Research Scientific Computing Center and the Argonne Leadership Computing Facility, which are funded by the Office of Science of the U.S.\ Department of Energy; and with resources provided by the National Institute for Computational Science and the Texas Advanced Computing Center, which are funded through the National Science Foundation's TeraGrid/XSEDE Program. Computations were also carried out on the Darwin Supercomputer at the DiRAC facility, which is jointly funded by the U.K.\ Science and Technology Facilities Council, the U.K.\ Department for Business, Innovation and Skills, and the Universities of Cambridge and Glasgow. 
This work utilized the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This work was supported in part by the U.S.\ Department of Energy under grants No.~DE-AC05-06OR23177 (B.C.), No.~DE{-}SC0010120 (S.G.), No.~DE{-}SC0015655 (A.X.K.), No.~DE{-}SC0009998~(J.L.), No.~DE{-}SC0010005 (E.T.N.), No.~DE-FG02-13ER41976 (D.T.), by the U.S.\ National Science Foundation under grants PHY14-17805~(J.L.), PHY14-14614 (C.D., A.V.), PHY13-16222 (G.P.L.), PHY12-12389~(Y.L.), and PHY13-16748 and PHY16-20625 (R.S.); by the Royal Society, STFC and Wolfson Foundation (C.T.H.D., D.H., J.K.); by the MINECO (Spain) under grants FPA2013-47836-C-1-P and FPA2016-78220-C3-3-P (E.G.); by the Junta de Andaluc{\'i}a (Spain) under grant No.~FQM-101 (E.G.); by the Fermilab Distinguished Scholars Program (A.X.K.); by the German Excellence Initiative and the European Union Seventh Framework Program under grant agreement No.~291763 as well as the European Union's Marie Curie COFUND program (A.S.K.); by the Blue Waters PAID program (Y.L.); and by the U.K.\ STFC under grants ST/N005872/1 and ST/P00055X/1 (C.M.). Brookhaven National Laboratory is supported by the U.S. Department of Energy under contract DE-SC0012704. Fermilab is operated by Fermi Research Alliance, LLC, under Contract No.\ DE-AC02-07CH11359 with the United States Department of Energy, Office of Science, Office of High Energy Physics. 
The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. \end{acknowledgments} \bibliographystyle{apsrev4-1}
namespace net {

class ReactorThreadPool {
 public:
  explicit ReactorThreadPool(int reactor_pool_size = 0,
                             ReactorThread::Delegate* delegate = NULL);
  ~ReactorThreadPool();

  bool Start();
  void Shutdown();

  bool IsEmpty() const { return reactor_pool_.empty(); }

  // Used when accepting connections.
  // base::MessageLoop* GetNextMessageLoop();

 private:
  std::vector<ReactorThread*> reactor_pool_;  // worker reactor threads
  size_t next_reactor_;                       // round-robin index into reactor_pool_

  DISALLOW_COPY_AND_ASSIGN(ReactorThreadPool);
};

}  // namespace net

#endif
\section{Introduction} In references \cite{PBB} and \cite{MS1} a correlation was demonstrated for `low clouds' ($<$3.2 km in altitude) between the changes in the globally averaged `low cloud cover (LCC) anomaly' and the changes in the cosmic ray (CR) count rate (e.g. see figure 1 of \cite{MS1}). Here `LCC anomaly' means the difference between the mean monthly LCC and the time averaged value for the month. The LCC anomaly was derived by these groups from the satellite data provided by the International Satellite Cloud Climatology Project (ISCCP) from the monthly averaged D2 analysis using the infrared data\cite{ISCCP}. It was implied by both groups that a decrease in CR intensity causes a decrease in LCC. Since this may not be the case if both effects are correlated to a third variable, it is prudent to look for further evidence of such a causal connection. Such a causal connection would have vast importance since, according to \cite{PBB,MS1,MS3}, it could be the main cause of the presently observed global warming. The proposed mechanism for this depends on the observation of an increase in solar activity over the last century \cite{Lockwood}. An increase in solar activity causes a net decrease in CR intensity which, according to the causal connection proposed in \cite{PBB,MS1,MS3}, causes a decrease in LCC. This, in turn, leads to increased warming of the Earth's surface by the Sun. The effects of solar changes on the increases in the global mean surface air temperature have been discussed more fully in \cite{LandF} and are reviewed in \cite{Haigh}. The Intergovernmental Panel on Climate Change (IPCC) has not considered this effect as significant \cite{IPCC} since the origin of the correlation observed in \cite{PBB,MS1} has been questioned \cite{KSKK}. The grounds for this doubt are that the ISCCP infrared data give different results from the daytime low cloud data and also that the correlation after 1994 is of poor quality. 
The correlation was also questioned since a similar one was observed over the USA but with the opposite sign \cite{UC} to that seen in \cite{PBB,MS1}. These doubts should be weighed against the following. Firstly, in the daytime LCC shown in figure 1b of \cite{KSKK} there is structure at the maximum of the solar activity, albeit with a poorer correlation with the CR modulation than that of the infrared data. Secondly, there is no inconsistency between the surface data over the USA seen in \cite{UC} and the ISCCP infrared data since the latter also show an anti-correlation over the USA (see figure 2a in \cite{MS1}, which we also confirm). Thirdly, a correlation between the CR rate and cloud cover was also observed in \cite{Harrison} where cloud cover was determined in a completely different way from that adopted in \cite{PBB,MS1,MS3}. Fourthly, a latitude-dependent correlation between the calculated ion concentration from CR at an altitude of 3 km and the low cloud amount was reported in selected local regions where the correlation coefficient between the two distributions is high \cite{uso}. Fifthly, whilst after 1994 there is a poor correlation at high Earth latitudes, we see a possible correlation in the tropical regions (see below) and it is well known that sequential solar cycles behave differently from each other due to the reversal of the solar magnetic field. The IPCC labels the level of scientific understanding of the observed correlation as ``very low''. Given these facts the correlation observed in \cite{PBB,MS1} needs to be studied further. Here we adopt the approach of looking for other possible manifestations of the causal connection, assuming that it exists, in order to corroborate the effect or otherwise. The implication of the causal connection proposed in \cite{PBB,MS1} is that LCC is influenced by the rate of ion production in the atmosphere. 
In this paper, we have examined various incidences of ionizing radiation changes in the atmosphere from cosmic rays to look for consequential changes in LCC which would result if the causal connection existed. We have looked for changes in LCC from changes in the CR intensity due to solar activity as the geomagnetic latitude increases i.e. as the vertical rigidity cut-off (VRCO) decreases. We have also looked at the effects on LCC of the known sporadic changes in the CR intensity. These cases, where there is a change in the ionization rate, have been examined to see if a corresponding change in cloud cover occurs, as would be expected from the causal connection hypothesized in \cite{PBB,MS1}. Throughout we use the same ISCCP D2 data sample as in \cite{PBB,MS1} unless otherwise stated. \section{The Correlation between Cosmic Rays (CR) and Low Cloud Cover (LCC)} \label{ions} Figure \ref{fig1} shows the LCC anomaly determined from the ISCCP infrared data as a function of time averaged over the Earth, in three separate regions. The smooth curves in figure \ref{fig1} show the best fits of the LCC anomaly to the mean daily sunspot (SS) number (inverted) superimposed on an assumed linear change with time in the LCC anomaly. Such a change may be real or it could be due to an artefact of the satellite instrumentation as discussed in \cite{evan}. The fit was made using the CERN library fitting programme MINUIT \cite{MINUIT} to minimize the value of $\chi^2$ between the measurements and the curve. The errors on the data points were taken from the mean square deviations of independent pairs of neighbouring points. The free parameters in the fit were the slope and intercept of the assumed smooth linear systematic change in the LCC, a multiplicative constant for the monthly averaged daily American sunspot number (SSN) \cite{USSS} and a time shift for the delay between the onset of the \emph{dip} in the LCC and the increase in the SSN. 
The multiplicative constant represents the amplitude of the dip in the LCC per unit change in SSN. We take this amplitude as the magnitude of `the effect'. The fits were rather poor (see figure \ref{fig1}) with values of $\chi^2$ per degree of freedom of 1.5 to 2.5. However, fits between 1985 and 1996 (solar cycle 22) were better than this, allowing the amplitude of the dip to be determined in this time range. The modulation of the cosmic ray intensity is strongly anti-correlated with the variation in the SSN. The time shift between the onset of the dip and the change in the SSN will be used in the manner to be described later. The observed dip in figure \ref{fig1} is similar to that seen in \cite{PBB,MS1} between the years 1985 and 1995. However, the dip in LCC seen in solar cycle 22 (peaking in 1990) is not evident in solar cycle 23 (peaking in 2000) except, surprisingly, in the equatorial region where the solar modulation is least. The globally averaged decrease in LCC during solar cycle 22 (averaging the dips in figure \ref{fig1}) is $1.28\pm0.14\%$. The globally averaged total LCC amount is $28\%$ giving a change in LCC during the dip of $\Delta \rm{LCC}/\rm{LCC}=4.6\pm0.5\%$. The globally averaged peak to peak modulation in the CR neutron monitor count rate is computed to be $11\pm1\%$ of the total. The neutron modulation was determined from a study of the data from 35 neutron monitors around the globe \cite{Watanabe} using similar methods to those described in \cite{Braun}. A fit to the measurements of the peak to peak modulation versus SSN gives $\Delta N/N=(1.15-0.061\cdot V)\times 10^{-3}$ per SSN, where $V$ is the VRCO. The muon modulation is a factor 3 lower than this \cite{Allkofer} due to the higher primary energy needed to produce muons. Ionization is also produced from the electromagnetic component of CR whose long term modulation has not been measured. 
This will depend on $\pi^0$ production from CR primary interactions which will have a threshold energy intermediate between those for muons and neutrons. We therefore assume that the total globally averaged solar modulation of the cosmic ray ionization rate is the average of those for muons and neutrons i.e. $7\pm3\%$, where the uncertainty bridges the gap from muons to neutrons. The solar modulation of the globally averaged ionization rate, $q$, will be reduced to $\Delta q/q = 6\pm3\%$ by the dilution of the ionization over land ($29\%$ of the Earth's surface) by radioactivity which will produce an ionization rate of a similar magnitude to that from CR at low cloud altitude \cite{Karunakara}. The fractional change in the LCC is therefore related to the rate of ionization change due to solar modulation by \begin{equation} \frac{\Delta \rm{LCC}}{\rm{LCC}} = 0.77\pm0.38\cdot\frac{\Delta q}{q} \end{equation} implying that $\rm{LCC}\propto q^\xi$ with $\xi=0.77\pm0.38$, where the error is dominated by the uncertainty in $\Delta q/q$. This is compatible, within the error, with a $q^{0.5}$ behaviour. Such behaviour is expected, at least in clean air, if $\rm{LCC} \propto n$, where n is the small ion concentration which is expected to be limited mainly by recombination \cite{Mason}. To study the detailed shape of the correlation shown in figure \ref{fig1} the globally averaged LCC amount is plotted directly against the Climax neutron counter monitor rate, $N_C$, in figure \ref{figsec}. The good correlation is evident. Fits of the form \begin{equation} \rm{LCC}=\beta+\gamma N_C^{\alpha} \label{fit} \end{equation} have been made. Here the first parameter, $\beta$, can be interpreted as a measure of the LCC amount attributable to non-ionizing sources and the second term to ionizing sources. 
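The value $\xi=0.77\pm0.38$ quoted above follows from dividing the fractional LCC change by the fractional modulation of the ionization rate. A quick numerical check (our own script, using the fractional changes quoted in the text):

```python
import math

# Fractional change in LCC over the solar-cycle-22 dip and fractional solar
# modulation of the globally averaged ionization rate, as quoted in the text.
dlcc, dlcc_err = 0.046, 0.005
dq, dq_err = 0.06, 0.03

xi = dlcc / dq
# The error is dominated by the uncertainty on dq/q.
xi_err = xi * math.sqrt((dlcc_err / dlcc) ** 2 + (dq_err / dq) ** 2)
print(f"xi = {xi:.2f} +/- {xi_err:.2f}")
```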
If the part of the LCC amount which depends on ionization is proportional to the small ion concentration, $n$, and $n\propto q^\xi$, the parameter $\alpha$ is related to $\xi$ by $\alpha=a_1a_2\xi$, where $a_1=(\delta q/q)/(\delta N/N)$ and $a_2=(\delta N/N)/(\delta N_C/N_C)$. \footnote{It can be seen that $\delta q/q=a_1 a_2 \delta N_C/N_C$ which on integration gives $q\propto N_C^{a_1a_2}$ i.e. $q^\xi\propto N_C^{a_1a_2\xi}$.} We take $\delta q/q=6\pm3\%$, the globally averaged solar modulation $\delta N/N=11\pm1\%$ (as discussed above) and the solar modulation measured for the Climax detector to be $\delta N_C/N_C=19\%$, so that $a_1a_2=0.32\pm0.16$. The data in figure \ref{figsec} are insufficient to determine precisely the parameters, $\alpha, \beta$ and $\gamma$ separately. Fits with different combinations of the parameters are equally good as measured by the $\chi^2$. However, the value of $\chi^2$ rapidly becomes unacceptable when $\beta$ is increased to a value corresponding to more than $70\%$ of the cloud arising from non-ionizing sources, i.e. a fraction of at least $30\%$ comes from ionization. The following argument also shows that the latter fraction must be large. The smooth curve in figure \ref{figsec} shows the fit with $\beta=0$ which gives $\alpha=0.17$ with a $\chi^2=148.9$ for 146 degrees of freedom and a correlation coefficient of 0.54. The values of the parameters $\alpha$, $\beta$ and $\gamma$ are strongly correlated such that increasing values of $\alpha$ are associated with increasing values of $\beta$ and decreasing values of $\gamma$. The fits with $\alpha > 0.16$, corresponding to $\xi > 0.5$, give positive values of $\beta$ while fits with $\alpha < 0.16$, corresponding to values of $\xi <0.5$, give negative values of $\beta$ which are unphysical. Assuming that it is implausible that the LCC amount generated by ionization varies faster than linearly with $q$, i.e. 
$\xi < 1$, $\alpha$ must be less than 0.48, taking $\delta q/q$ at its upper limit of $9\%$. Such a value of $\alpha$ gives a fit with $\beta = 20\%$, implying that the fraction of the LCC generated by sources other than ionization is less than $20/28=0.7$, i.e. a minimum fraction of 0.3 of the LCC amount is generated by ionization. At $\xi=0.5$ the value of $\beta$ is compatible with zero. Hence, assuming $\xi$ lies in the range 0.5 to 1.0, the fraction of the LCC generated by ionization lies somewhere between 1 and 0.3, respectively. In summary, assuming that the correlation shown in figures \ref{fig1} and \ref{figsec} is not accidental, a very large fraction of the LCC must be generated by ionization. We now attempt to corroborate this assumption and this necessary deduction. \section{Latitude dependence of `the effect'} \label{lats} It is well known that the magnitude of the CR time variation, due to the 11 year solar cycle, varies with latitude. More accurately, it is a function of the VRCO, the reason being that the geomagnetic field deflects away more low energy particles as the geomagnetic equator (highest VRCO) is approached. Since the CR flux increases rapidly as the primary energy decreases, the solar modulation becomes less severe as the VRCO increases towards the geomagnetic equator. Hence, if the causal connection between the CR ionization rate and LCC proposed in \cite{PBB,MS1} exists with the necessary large fraction of the LCC produced by ionization demonstrated above, one would expect larger changes in LCC at low values of VRCO than at high values. Furthermore, it is known that there is a delay of some months between the decrease in the CR intensity and the increase in the sun spot (SS) number, with the even-numbered solar cycles showing smaller delays than the odd-numbered ones \cite{Kudela}. Note that the CR count rate is anti-correlated with the SS number.
The observed dip in figure \ref{fig1} is similar to that seen in \cite{PBB,MS1} between the years 1985 and 1995. However, the expected rise in amplitude of this dip with decreasing VRCO is not apparent. To investigate the effect of the VRCO further, and to check that the above result was not due to a latitude dependent efficiency of the cloud production mechanism, the LCC was determined in three strips of latitude for the Northern and Southern hemispheres of the Earth separately. The amplitude of the dip in solar cycle 22 was measured from the fit for each, as a function of VRCO. The dip was visible in every subdivision. Figure \ref{fig3} (upper panel) confirms that the amplitude of the dip is approximately constant with VRCO, rather than growing with the observed increase in CR modulation determined as described above. Furthermore, there is no discernible difference between the Northern (where oceans are less dominant) and Southern hemispheres (where oceans are more dominant). Figure \ref{fig3} (lower panel) shows that the measured value of the delay between the onset of the dip and the change in SS number fluctuates randomly rather than concentrating around a fixed delay (expected to be $-3$ months for the CR increase in solar cycle 22). Each latitude band has a median value compatible with zero, with an overall mean of $-0.9\pm1.6$ months, where the error is the standard error determined from the root mean square (RMS) deviation of the measurements from the mean. This is compatible with the onset of the increase in SS number but somewhat earlier than the arrival time of the CR increase ($-3$ months). Hence there is a somewhat better time correlation between the start of the dip and the onset of the increase in the SS number than with the change in the CR rate, although the error is too large to be conclusive.
Neither the amplitude variation with VRCO nor the arrival times shown in figure \ref{fig3} corroborate the claim of a full causal connection between the CR ionization rate and the LCC anomaly. We proceed to set a limit on any contribution from a partial correlation. We attempt to quantify the part of the dip related to changes in the CR ionization rate and the part related to other sources which are independent of the ionization rate, as follows. The change in LCC during the solar cycle, $\Delta \rm{LCC}$, can be decomposed into a part which is dependent on the change in the ionization rate, $\Delta \rm{LCC}_I$, and a part due to other mechanisms correlated with solar activity but not directly due to ionization, $\Delta\rm{LCC}_S$, i.e. $\Delta \rm{LCC}=\Delta\rm{LCC}_I+\Delta\rm{LCC}_S$. Differentiation shows that $\Delta\rm{LCC}_I=\kappa\, \delta N/N$, where $\kappa=N dg(N)/dN$ with $g(N)$ the functional dependence of the LCC on the ionization rate as measured by the neutron monitor rate, $N$. The function $N dg(N)/dN$ is slowly varying with $N$ for reasonable functions $g(N)$ over the range of changes of $\delta N/N$ during the solar cycle, so that $\kappa$ is approximately constant. For example, if $\rm{LCC} \propto n \propto q^\xi \propto N^{a\xi}$ where $a=(\delta q/q)/(\delta N/N) \sim 0.5$ (see above) and $\xi\sim 0.5$, $\kappa$ will change by $\sim 5\%$ as $\delta N/N$ changes from 0 to 0.2. From this it can be seen that the dip depth may be expressed as \begin{equation} \Delta\rm{LCC}=\Delta\rm{LCC}_S+\kappa\, \delta N/N \end{equation} where $\kappa$ can be treated as a constant. We use this to identify the part of the distribution in the upper panel of figure \ref{fig3} which correlates with the CR modulation. A fit was performed of the shape of the neutron modulation variation (the correlated part) and a constant term (the uncorrelated part) to the measurements. The fit gave the fraction of the distribution correlated with the neutron modulation to be $0.02\pm0.13$, i.e.
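The two-term fit just described can be sketched as ordinary least squares with a constant term plus a term scaled to the modulation shape. Everything below is illustrative: the lists are synthetic stand-ins for the per-band dip depths and modulation amplitudes of figure \ref{fig3}, and taking the correlated fraction as $\kappa\langle m\rangle/\langle d\rangle$ is our assumption, not a quoted definition.

```python
# Sketch of the fit  Delta LCC = Delta LCC_S + kappa * (dN/N):
# a constant term plus a term following the neutron-modulation shape.

def fit_constant_plus_shape(m, d):
    """Least squares for d_i = c + kappa * m_i; returns (c, kappa)."""
    n = len(m)
    sm, sd = sum(m), sum(d)
    smm = sum(v * v for v in m)
    smd = sum(v * w for v, w in zip(m, d))
    kappa = (n * smd - sm * sd) / (n * smm - sm * sm)
    c = (sd - kappa * sm) / n
    return c, kappa

# Synthetic example: dip depths nearly independent of the modulation
# amplitude, mimicking the paper's finding (kappa compatible with zero).
m = [0.20, 0.17, 0.14, 0.12, 0.10, 0.08]   # dN/N per latitude band
d = [1.3, 1.25, 1.3, 1.35, 1.3, 1.3]       # dip depth in % LCC
c, kappa = fit_constant_plus_shape(m, d)
frac_correlated = kappa * (sum(m) / len(m)) / (sum(d) / len(d))
```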
compatible with zero, with $\chi^2=17.8$ for 16 degrees of freedom. From this it is deduced that less than $23\%$ of the distribution, at the 95$\%$ confidence level, belongs to the part correlated with the CR modulation, and more than $77\%$ belongs to the other sources correlated with solar activity but not directly with the change in ionization rate. These limits are incompatible with a large part of the change in the LCC during solar cycle 22 being produced by a change in ionization, and so they do not corroborate the hypothesis of such a change proposed in \cite{PBB,MS1}. The correlation seen in figures \ref{fig1} and \ref{figsec}, if real, must be due to an effect, other than ionization, which is correlated with solar activity. This upper limit represents a limit on the fraction of the globally averaged dip in the LCC seen in solar cycle 22 which is caused by CR ionization. There could be local changes from this ionization, such as those reported in \cite{uso}, which, from the above upper limit, must contribute less than a fraction of $23\%$ to the globally averaged dip. \section{Sporadic Changes in Cosmic Ray activity} \label{searches} Rapid changes in CR intensity occur from time to time. These take the form of large intensity increases, so-called ground level events (GLEs), or smaller decreases in intensity (Forbush decreases) \cite{Forbush}. Such changes of intensity usually last for periods from a few hours to days, and sometimes longer in the case of Forbush events. A survey has been given in \cite{Velinov}. These changes present an opportunity to test for LCC--CR correlations since, if the causal connection proposed in \cite{PBB,MS1} exists, one would expect to see changes in the LCC at the times of these events. The causal connection implies an increase (decrease) in LCC following a GLE (Forbush decrease), and we assume that such changes occur in times shorter than days.
There were 3 very large GLEs during the time span of the ISCCP cloud data (1985-2005), each lasting several hours. The event on 29 September 1989 was clearly seen in both the CR neutron and muon monitors \cite{duldig}. The peak intensity neutron monitor enhancement in this event was observed to change from four times the steady state value for neutron monitors with VRCO close to zero down to 1.17 times the steady state value at a VRCO of 11.5 GV, while the muon monitors varied from 1.4 \cite{duldig} to 1.08 times the steady state value in the same range of VRCO. The other two events (on 24 Oct 1989 and on 20 Jan 2005) had similarly large neutron monitor signals but they did not produce visible signals in the Nagoya muon monitor \cite{Nagoya}. The global LCC averages as a function of time were reconstructed from the ISCCP D1 data, which are 3 hour averages rather than the monthly averages of the D2 data, at times before and after each of the three GLEs. There were no visible anomalous changes in these global averages following any of the GLEs, where an increase of more than $2\%$ would have appeared anomalous. It is difficult to make quantitative estimates of the expected changes in the LCC, according to the hypothesis of \cite{PBB,MS1}, from such events, since the amount of ionization produced by them is unknown. One can only conclude that the events do not provide corroborative evidence for the causal connection between cloud cover and ionization proposed in \cite{PBB,MS1,MS3}, even though the changes in the neutron monitor rate were very large. The larger Forbush decreases during the time span of the ISCCP data (1984-2005) have been examined to see if they could be correlated with changes in the LCC. Most of these give relatively small changes in the CR intensity compared to the 11 year solar cycle modulation. Changes in the rates of the Nagoya muon detector \cite{Nagoya} similar to those in a neutron monitor at the same VRCO were observed.
The globally averaged cloud cover change was taken as the difference between the LCC, using the ISCCP D2 data, in the month of the decrease and the average of the 3 preceding months. For some large, shorter-duration events the D1 data were used, taking the difference between the average LCC during the 14 days before the event and the seven days after. Figure \ref{GLEFOR} shows the change in the LCC anomaly for each Forbush decrease plotted against the change in the Oulu neutron monitor count rate averaged over the duration of the decrease. The data below an Oulu count rate change of $9\%$, which is roughly half the solar modulation during solar cycle 22, are too statistically imprecise to be conclusive. The statistical errors were determined from the RMS deviation of these points about the mean. However, the four points above a counting rate change of $9\%$ have a mean LCC anomaly change of $0.68\pm0.45\%$. This is compatible with the dashed line showing no correlation between the LCC and CR rate changes, but it is 2.8 standard deviations above the value of $-0.6\%$ expected had there been a correlation similar to that seen in figure \ref{figsec}. A further attempt was made to correlate monthly fluctuations in the neutron monitor rates with those in the LCC. For each of the LCC and Climax neutron monitor monthly averages, a linear extrapolation from 7 of the measurements was made to the eighth. The fluctuation was then taken to be the difference between the eighth measurement and the extrapolated value. The regression line fitted to the plot of fluctuations in the LCC against the fluctuations in the Climax data had the form $\Delta \rm{LCC}=(-0.0098\pm0.019)\, \Delta N_C/N_C$, with correlation coefficient $-0.03$, indicating a poor correlation. Here $\Delta N_C/N_C$, in per cent, is the fluctuation in the Climax count rate.
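The fluctuation extraction just described can be sketched as follows; this is an illustrative reading of the procedure (the paper applies it to the monthly LCC and Climax series, here replaced by a synthetic list):

```python
def monthly_fluctuations(series):
    """For each window of 8 consecutive monthly values: fit a straight line
    to the first 7, extrapolate it to the 8th month, and return the
    differences measured - extrapolated."""
    out = []
    t = list(range(7))                       # months 0..6 of the window
    st, stt = sum(t), sum(u * u for u in t)
    for i in range(len(series) - 7):
        y = series[i:i + 7]
        sy = sum(y)
        sty = sum(u * v for u, v in zip(t, y))
        b = (7 * sty - st * sy) / (7 * stt - st * st)  # slope
        a = (sy - b * st) / 7                          # intercept
        out.append(series[i + 7] - (a + b * 7))        # extrapolate to t = 7
    return out

# A perfectly linear series gives fluctuations consistent with zero
flat = monthly_fluctuations([2.0 * k + 1.0 for k in range(12)])
```

The regression quoted in the text is then a straight-line fit of the LCC fluctuation list against the Climax fluctuation list.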
If the dip in LCC shown in figure \ref{fig1} is due to ionization from cosmic rays, as hypothesised in \cite{PBB,MS1}, the curve fitted to the data in figure \ref{figsec} would predict that this line should have the form $\Delta \rm{LCC}=-0.048\, \Delta N_C/N_C$, i.e. a slope which is 2 standard deviations greater in magnitude than that obtained from these fluctuations. In conclusion, it is statistically improbable that the Forbush decreases are compatible with the hypothesis of a correlation between the LCC and ionization as proposed in \cite{PBB,MS1}. Hence Forbush decreases do not provide evidence which can be used to corroborate such a hypothesis. There have been previous reports of observations of correlations between cloud cover and Forbush decreases \cite{Harrison,Vere}. These seem to be incompatible with our observations, although the statistical precision of the data is limited. \section{Conclusions} The dip of amplitude $1.28\%$ in the low altitude cloud cover noted in references \cite{PBB,MS1} in solar cycle 22 (peaking in 1990) has also been seen in this analysis. This dip anti-correlates in shape with the observed mean daily sun spot number, i.e. correlates with the change in cosmic ray intensity due to solar modulation. The dip is less evident in the following solar cycle 23, although it is possibly present in the tropical regions of the Earth. If the correlation noted in \cite{PBB,MS1} and its hypothesised causal connection between low cloud cover and ionization are real, it is shown that the magnitude of the effect implies that a large fraction of the low cloud cover is formed by ionization. However, no evidence could be found of changes in the cloud cover arising from known changes in the cosmic ray ionization rate. In conclusion, no corroboration of the claim of a causal connection between the changes in ionization and cloud cover, made in \cite{PBB,MS1}, could be found in this investigation.
From the distribution of the depth of the dip in solar cycle 22 with geomagnetic latitude (the VRCO) we find that, averaged over the whole Earth, less than 23$\%$ of the dip comes from the solar modulation of the cosmic ray intensity, at the 95$\%$ confidence level. This implies that, if the dip represents a real correlation, more than $77\%$ of it is caused by a source other than ionization and this source must be correlated with solar activity. \section{Acknowledgment} We wish to thank the WDC for Cosmic Rays (Solar-Terrestrial Environment Laboratory, Nagoya University) for their compilation of the Cosmic-Ray Neutron Data (CAWSESDB-J-OB0061) \cite{Watanabe}. This database was constructed as a part of CAWSES Space Weather International Collaborative Research Database in Japan. We also thank the Cosmic Ray section for the provision of the data from the Nagoya Muon Telescope \cite{Nagoya}. In addition we thank the ISCCP \cite{ISCCP} for the provision of the cloud data. We are grateful to the referees of the paper for their thoughtful comments which helped us to improve it considerably. We also thank A. Erlykin and R.G. Harrison for helpful discussions.
package quasar.fs import slamdata.Predef._ import quasar._, RenderTree.ops._, RenderTreeT.ops._ import quasar.common.{PhaseResults, PhaseResultT, PhaseResultW} import quasar.connector.CompileM import quasar.contrib.matryoshka._ import quasar.contrib.pathy._ import quasar.contrib.scalaz._, eitherT._ import quasar.effect.LiftedOps import quasar.fp._ import quasar.frontend.SemanticErrsT import quasar.frontend.logicalplan.LogicalPlan import matryoshka.{Transform => _, _} import matryoshka.data.Fix import pathy.Path._ import scalaz._, Scalaz.{ToIdOps => _, _} import scalaz.stream.{Process0, Process} sealed abstract class QueryFile[A] object QueryFile { final case class ResultHandle(run: Long) extends scala.AnyVal object ResultHandle { implicit val show: Show[ResultHandle] = Show.showFromToString implicit val order: Order[ResultHandle] = Order.orderBy(_.run) } /** The result of the query is stored in an output file, overwriting any existing * contents, instead of being returned to the user immediately. * * The `LogicalPlan` is expected to only contain absolute paths even though * that is unfortunately not expressed in the types currently. */ final case class ExecutePlan(lp: Fix[LogicalPlan], out: AFile) extends QueryFile[(PhaseResults, FileSystemError \/ Unit)] /** The result of the query is immediately * streamed back to the client. This operation begins the streaming; in order * to continue the streaming, the client must make use of the `More` operation and * finally the `Close` operation in order to halt the streaming. * The `LogicalPlan` is expected to only contain absolute paths even though * that is unfortunately not expressed in the types currently. */ final case class EvaluatePlan(lp: Fix[LogicalPlan]) extends QueryFile[(PhaseResults, FileSystemError \/ ResultHandle)] /** Used to continue streaming after initiating a streaming * result with the `EvaluatePlan` operation.
*/ final case class More(h: ResultHandle) extends QueryFile[FileSystemError \/ Vector[Data]] /** Used to halt streaming of a result set initiated using * the `EvaluatePlan` operation. */ final case class Close(h: ResultHandle) extends QueryFile[Unit] /** Represents an "explain plan" operation. This operation should not actually * have any side effect on the filesystem; it should simply return useful * information to the user about how a given query would be evaluated on * this filesystem implementation. * The [[quasar.LogicalPlan]] is expected to only contain absolute paths even * though that is unfortunately not expressed in the types currently. */ final case class Explain(lp: Fix[LogicalPlan]) extends QueryFile[(PhaseResults, FileSystemError \/ ExecutionPlan)] /** This operation lists the names of all the immediate children of the supplied directory * in the filesystem. */ /* TODO: While this is a bit better in one dimension here in `QueryFile`, * `@mossprescott` points out it is still a bit of a stretch to include * in this algebra. We need to revisit this and probably add algebras * over multiple dimensions to better organize these (and other) * operations. * * For more discussion, see * https://github.com/quasar-analytics/quasar/pull/986#discussion-diff-45081757 */ final case class ListContents(dir: ADir) extends QueryFile[FileSystemError \/ Set[Node]] /** This operation should return whether a file exists in the filesystem. */ final case class FileExists(file: AFile) extends QueryFile[Boolean] final class Ops[S[_]](implicit S: QueryFile :<: S) extends LiftedOps[QueryFile, S] { type M[A] = FileSystemErrT[FreeS, A] val unsafe = Unsafe[S] val transforms = Transforms[FreeS] import transforms._ /** Returns the path to the result of executing the given `LogicalPlan`, * using the provided path. * * If the given file path exists, it will be overwritten with the results * from the query.
*/ def execute(plan: Fix[LogicalPlan], out: AFile): ExecM[Unit] = EitherT(WriterT(lift(ExecutePlan(plan, out))): G[FileSystemError \/ Unit]) /** Returns the stream of data resulting from evaluating the given * `LogicalPlan`. */ def evaluate(plan: Fix[LogicalPlan]): Process[ExecM, Data] = { // TODO: use DataCursor.process for the appropriate cursor type @SuppressWarnings(Array("org.wartremover.warts.Recursion")) def moreUntilEmpty(h: ResultHandle): Process[M, Data] = Process.await(unsafe.more(h): M[Vector[Data]]) { data => if (data.isEmpty) Process.halt else Process.emitAll(data) ++ moreUntilEmpty(h) } def close(h: ResultHandle): ExecM[Unit] = toExec(unsafe.close(h)) Process.bracket(unsafe.eval(plan))(h => Process.eval_(close(h))) { h => moreUntilEmpty(h).translate(hoistToExec) } } def first(plan: Fix[LogicalPlan]): ExecM[Option[Data]] = for { h <- unsafe.eval(plan) vs <- hoistToExec(unsafe.more(h)) _ <- toExec(unsafe.close(h)) } yield vs.headOption /** Returns a stream of data resulting from evaluating the given * `LogicalPlan`. * * This consumes the entire result; if you need control over how much * data is consumed, see `evaluate`. */ def results(plan: Fix[LogicalPlan]): ExecM[Process0[Data]] = { def close(h: ResultHandle): ExecM[Unit] = toExec(unsafe.close(h)) def next(h: ResultHandle): ExecM[Option[(Vector[Data], ResultHandle)]] = hoistToExec(unsafe.more(h)) .ensuring(_.isDefined whenM close(h)) .map(xs => xs.nonEmpty.option((xs, h))) unsafe.eval(plan) .flatMap(h => StreamT.unfoldM(h)(next).toStream <* close(h)) .map(cs => Process.emitAll(cs) flatMap (Process.emitAll(_))) } /** Returns a description of how the given logical plan will be * executed. */ def explain(plan: Fix[LogicalPlan]): ExecM[ExecutionPlan] = EitherT(WriterT(lift(Explain(plan))): G[FileSystemError \/ ExecutionPlan]) /** Returns the names of the immediate children of the given directory; * fails if the directory does not exist.
*/ def ls(dir: ADir): M[Set[Node]] = listContents(dir) /** Returns a Map of all files in this directory and all of its sub-directories, * along with their `Node.Type`. * Fails if the directory does not exist. */ def descendantFiles(dir: ADir): M[Map[RFile, Node.Type]] = { type S[A] = StreamT[M, A] @SuppressWarnings(Array("org.wartremover.warts.Recursion")) def lsR(current1: RDir): StreamT[M, (RFile, Node.Type)] = StreamT.fromStream[M, Node](ls(dir </> current1) map (_.toStream)) flatMap { case f: FileNode => ((current1 </> file1(f.name)) -> f.`type`).point[S] case d: DirNode => lsR(current1 </> dir1(d.name)) } lsR(currentDir).foldLeft(Set.empty[(RFile, Node.Type)])(_ + _).map(_.toMap) } def listContents(dir: ADir): M[Set[Node]] = EitherT(lift(ListContents(dir))) /** Returns whether the given file exists. */ def fileExists(file: AFile): FreeS[Boolean] = lift(FileExists(file)) /** Returns whether the given file exists, lifted into the same monad as * the rest of the functions here, for convenience. */ def fileExistsM(file: AFile): M[Boolean] = fileExists(file).liftM[FileSystemErrT] //// private val hoistToExec: M ~> ExecM = Hoist[FileSystemErrT].hoist[FreeS, G](liftMT[FreeS, PhaseResultT]) } object Ops { implicit def apply[S[_]](implicit S: QueryFile :<: S): Ops[S] = new Ops[S] } /** Low-level, unsafe operations. Clients are responsible for resource-safety * when using these. */ final class Unsafe[S[_]](implicit S: QueryFile :<: S) extends LiftedOps[QueryFile, S] { val transforms = Transforms[FreeS] import transforms._ /** Returns a handle to the results of evaluating the given `LogicalPlan` * that can be used to read chunks of result data. * * Care must be taken to `close` the returned handle in order to avoid * potential resource leaks. */ def eval(lp: Fix[LogicalPlan]): ExecM[ResultHandle] = EitherT(WriterT(lift(EvaluatePlan(lp))): G[FileSystemError \/ ResultHandle]) /** Read the next chunk of data from the result set represented by the given * handle.
* * An empty `Vector` signals that all data has been read. */ def more(rh: ResultHandle): FileSystemErrT[FreeS, Vector[Data]] = EitherT(lift(More(rh))) /** Closes the given result handle, freeing any resources it was using. */ def close(rh: ResultHandle): FreeS[Unit] = lift(Close(rh)) } object Unsafe { implicit def apply[S[_]](implicit S: QueryFile :<: S): Unsafe[S] = new Unsafe[S] } class Transforms[F[_]: Monad] { type G[A] = PhaseResultT[F, A] type H[A] = SemanticErrsT[G, A] type ExecM[A] = FileSystemErrT[G, A] type CompExecM[A] = FileSystemErrT[H, A] val execToCompExec: ExecM ~> CompExecM = Hoist[FileSystemErrT].hoist[G, H](liftMT[G, SemanticErrsT]) val compToCompExec: CompileM ~> CompExecM = { val hoistW: PhaseResultW ~> G = Hoist[PhaseResultT].hoist(pointNT[F]) val hoistC: CompileM ~> H = Hoist[SemanticErrsT].hoist(hoistW) liftMT[H, FileSystemErrT] compose hoistC } val toExec: F ~> ExecM = liftMT[G, FileSystemErrT] compose liftMT[F, PhaseResultT] def fsErrToExec: FileSystemErrT[F, ?] ~> ExecM = Hoist[FileSystemErrT].hoist[F, PhaseResultT[F, ?]](liftMT[F, PhaseResultT]) val toCompExec: F ~> CompExecM = execToCompExec compose toExec val dropPhases: ExecM ~> FileSystemErrT[F, ?] = Hoist[FileSystemErrT].hoist(λ[PhaseResultT[F, ?] 
~> F](_.value)) } object Transforms { def apply[F[_]: Monad]: Transforms[F] = new Transforms[F] } implicit def renderTree[A]: RenderTree[QueryFile[A]] = new RenderTree[QueryFile[A]] { def render(qf: QueryFile[A]) = qf match { case ExecutePlan(lp, out) => NonTerminal(List("ExecutePlan"), None, List(lp.render, out.render)) case EvaluatePlan(lp) => NonTerminal(List("EvaluatePlan"), None, List(lp.render)) case More(handle) => Terminal(List("More"), handle.shows.some) case Close(handle) => Terminal(List("Close"), handle.shows.some) case Explain(lp) => NonTerminal(List("Explain"), None, List(lp.render)) case ListContents(dir) => NonTerminal(List("ListContents"), None, List(dir.render)) case FileExists(file) => NonTerminal(List("FileExists"), None, List(file.render)) } } }
<?php

namespace Woojin\Utility\ExchangeRate;

use Doctrine\Common\Persistence\ManagerRegistry;

class ExchangeRateGetter
{
    /**
     * @var \Doctrine\Common\Persistence\ManagerRegistry
     */
    protected $registry;

    // Separate caches per rate type: a single cache keyed only by month
    // (as before) let a cost lookup poison subsequent sale lookups for
    // the same month, and vice versa.
    protected $costMap = array();
    protected $saleMap = array();

    public function __construct(ManagerRegistry $registry)
    {
        $this->registry = $registry;
    }

    public function getCostExchangeRateByDate($datetime)
    {
        return $this->getRate($datetime, 'WoojinStoreBundle:ExchangeRate', $this->costMap);
    }

    public function getSaleExchangeRateByDate($datetime)
    {
        return $this->getRate($datetime, 'WoojinStoreBundle:BenefitExchangeRate', $this->saleMap);
    }

    /**
     * Looks up the rate stored for the month of $datetime in the given
     * entity, caching the result; defaults to 1 when no rate is stored.
     */
    protected function getRate($datetime, $entity, array &$map)
    {
        $month = substr($datetime, 0, 7);

        if (array_key_exists($month, $map)) {
            return $map[$month];
        }

        $em = $this->registry->getManager();
        $qb = $em->createQueryBuilder();

        $exchangeRate = $qb->select('ex')
            ->from($entity, 'ex')
            ->where($qb->expr()->eq('ex.month', $qb->expr()->literal($month)))
            ->getQuery()
            ->getResult()
        ;

        return $map[$month] = ($exchangeRate) ? $exchangeRate[0]->getRate() : 1;
    }
}
\section*{Introduction} This paper is to a large extent a review of certain aspects of quantum invariants, where we restrict ourselves exclusively to the context of long knots. More generally, one can also consider string links, but it is known that the topological classes of string links are in bijection with the classes of ordinary links only if the number of components is one, i.e. if a string link is a long knot. We describe in detail the construction of invariants of long knots by using rigid R-matrices (solutions of the quantum Yang--Baxter relation) in monoidal categories. The importance of long knots (as opposed to usual closed knots) is illustrated by considering a general class of group-theoretical R-matrices put into the context of monoidal categories of relations and spans over sets. These R-matrices are indexed by pointed groups, that is, groups with a distinguished element. The underlying racks seem not to have been considered previously in the existing literature. Drinfeld's quantum double construction gives rise to a large class of rigid R-matrices, and the associated invariants factorize through universal invariants associated to the underlying Hopf algebras. Such universal invariants were introduced and studied in a number of works~\cite{MR1025161,MR1124415,MR1153694,MR1227011,MR1324033, MR2186115,MR2253443,MR2251160}, mostly either in the context of finite dimensional Hopf algebras or certain topological completions, for example by considering formal power series. Here, we define the universal invariants purely algebraically and with minimal assumptions on the underlying Hopf algebras. In particular, we emphasize the case of infinite dimensional Hopf algebras. The distinguishing feature of our approach is the use of the restricted or finite dual of an algebra in conjunction with the quantum double construction. The outline of the paper is as follows.
In section~\ref{sec:lk} we recall the definitions of long knots and their diagrams and introduce the notions of a normal diagram and a normalization of an arbitrary diagram. In section~\ref{sec:ilkrrm} we recall the definition of a rigid R-matrix in a monoidal category and give a detailed description of a long knot invariant associated to a given rigid R-matrix. In section~\ref{sec:rrmfr} we consider a special class of rigid R-matrices in the categories of relations and spans over sets. Each such R-matrix is associated to a pointed group with a canonical structure of a rack, and Theorem~\ref{thm:1} identifies the associated invariant with the set of representations of the knot group into the group that underlies the rack. The example of an extended Heisenberg group gives rise to an invariant ideal in the polynomial algebra $\mathbb{Q}[t,t^{-1},s]$ closely related but not equivalent to the Alexander polynomial, at least if the latter admits higher multiplicity roots. In section~\ref{sec:ifha}, based on the restricted dual of an algebra and Drinfeld's quantum double construction, we describe a universal invariant associated to any Hopf algebra with invertible antipode. \subsection*{Acknowledgements} I would like to thank Bruce Bartlett, L\'eo B\'enard, Joan Porti, Louis-Hadrien Robert, Arkady Vaintrob, Roland van der Veen and Alexis Virelizier for valuable discussions. \section{Long knots}\label{sec:lk} \begin{definition} An embedding $f\colon \mathbb{R}\to\mathbb{R}^3$ is called a \emph{\color{blue} long knot} if there exist $a,b\in\mathbb{R}$ such that $f(t)=(0,0,t)$ for any $t<a$ or $t>b$.
Two long knots $f,g\colon \mathbb{R}\to\mathbb{R}^3$ are called \emph{\color{blue} equivalent} if they are ambient isotopic, that is if there exists an ambient isotopy \begin{equation} H\colon \mathbb{R}^3\times[0,1]\to \mathbb{R}^3\times[0,1],\quad H(x,t)=(h_t(x),t),\quad h_0=\operatorname{id}_{\mathbb{R}^3}, \end{equation} such that, for any $t\in[0,1]$, $h_t\circ f$ is a long knot and $g=h_1\circ f$. A long knot is called \emph{\color{blue} tame or regular} if it is equivalent to a polygonal long knot. An (oriented) \emph{\color{blue} long knot diagram} $D$ is a (1,1)-tangle diagram in $\mathbb{R}^2$ representing a tame long knot \begin{equation} D= \begin{tikzpicture}[baseline=-3] \node[draw] (a) at (0,0) {$D$}; \draw[thick,->] (0,-.5)--(a)--(0,.5); \end{tikzpicture}\ . \end{equation} Two long knot diagrams are called \emph{\color{blue} (Reidemeister) equivalent} if they can be related to each other by a finite sequence of oriented Reidemeister moves of all types. \end{definition} \begin{remark} The set of long knot diagrams is a monoid with respect to the composition \begin{equation} D\circ D':=\ \begin{tikzpicture}[baseline=8] \node[draw] (a) at (0,.7) {$D$}; \node[draw] (b) at (0,0) {$D'$}; \draw[thick,->] (0,-.5)--(b)--(a)--(0,1.2); \end{tikzpicture} \end{equation} \end{remark} \begin{remark} In the case of long knots, the Reidemeister theorem states that two long knot diagrams are equivalent if and only if the corresponding long knots are equivalent. 
Furthermore, a folklore theorem states that the natural map of long knot diagrams to closed oriented knot diagrams \begin{equation} \begin{tikzpicture}[baseline=-2] \node[draw] (a) at (0,0) {$D$}; \draw[thick,->] (0,-.5)--(a)--(0,.5); \end{tikzpicture}\ \mapsto \ \begin{tikzpicture}[baseline=-2] \coordinate (b) at (-.7,0); \node[draw] (a) at (0,0) {$D$}; \draw[thick,->] (a) to [out=90,in=90] (b); \draw[thick] (b) to [out=-90,in=-90] (a); \end{tikzpicture} \end{equation} induces a bijection between the respective Reidemeister equivalence classes. In particular, any invariant of long knots is also an invariant of closed knots. \end{remark} In what follows, we will always assume that a long knot diagram is put into a generic position with respect to the vertical axis so that all crossings have non-vertical strands as in the letter X. We denote by $w(D)$ the writhe of $D$ defined as the number of positive crossings minus the number of negative crossings, where a crossing is called positive if the ordered pair of tangent vectors $(v_{\text{overpass}},v_{\text{underpass}})$ induces the standard orientation of $\mathbb{R}^2$. \begin{definition} A (long knot) diagram is called \emph{\color{blue} normal} if it has no local extrema (with respect to the vertical direction) oriented from left to right like \begin{tikzpicture}[baseline, scale=0.2] \draw[thick] (0,0) to [out=90,in=180] (1,1); \draw[thick,->] (1,1) to [out=0,in=90] (2,0); \end{tikzpicture}\, and \begin{tikzpicture}[baseline=-5, scale=0.2] \draw[thick] (0,0) to [out=-90,in=180] (1,-1); \draw[thick,->] (1,-1) to [out=0,in=-90] (2,0); \end{tikzpicture} .
To any diagram $D$, we associate its \emph{\color{blue} normalization} $\dot{D}$, the diagram obtained from $D$ by the replacements \begin{equation} \begin{tikzpicture}[baseline=2, scale=0.3] \draw[thick] (0,0) to [out=90,in=180] (1,1); \draw[thick,->] (1,1) to [out=0,in=90] (2,0); \end{tikzpicture} \ \mapsto \begin{tikzpicture}[baseline=5,xscale=.3,yscale=0.2] \coordinate (a0) at (0,0); \coordinate (a1) at (1,3); \coordinate (a2) at (2,0); \draw[thick] (a1) to [out=180,in=135] (a2); \draw[line width=5, color=white] (a0) to [out=45,in=0] (a1); \draw[thick,->] (a0) to [out=45,in=0] (a1); \end{tikzpicture}, \quad \begin{tikzpicture}[baseline=-7, scale=0.3] \draw[thick] (0,0) to [out=-90,in=180] (1,-1); \draw[thick,->] (1,-1) to [out=0,in=-90] (2,0); \end{tikzpicture}\ \mapsto \begin{tikzpicture}[baseline=-12,xscale=.3,yscale=0.2] \coordinate (a0) at (0,0); \coordinate (a1) at (1,-3); \coordinate (a2) at (2,0); \draw[thick] (a1) to [out=180,in=-135] (a2); \draw[line width=5, color=white] (a0) to [out=-45,in=0] (a1); \draw[thick,->] (a0) to [out=-45,in=0] (a1); \end{tikzpicture}. 
\end{equation} \end{definition} Of special interest for us will be the normal long knot diagrams \begin{equation}\label{eq:xi+-} \xi^+:= \begin{tikzpicture}[baseline=15,scale=.5] \coordinate (a0) at (0,0); \coordinate (a1) at (1,2); \coordinate (a2) at (0,2); \coordinate (a3) at (2,.5); \coordinate (a4) at (1,.5); \coordinate (a5) at (2,2.5); \draw[thick] (a0) to [out=90,in=-90] (a1); \draw[thick] (a1) to [out=90,in=90] (a2); \draw[thick] (a3) to [out=-90,in=-90] (a4); \draw[thick,->] (a4) to [out=90,in=-90] (a5); \draw[line width=5, color=white] (a2) to [out=-90,in=90] (a3); \draw[thick] (a2) to [out=-90,in=90] (a3); \end{tikzpicture}, \quad \xi^-:= \begin{tikzpicture}[baseline=15,scale=.5] \coordinate (a0) at (0,0); \coordinate (a1) at (1,2); \coordinate (a2) at (0,2); \coordinate (a3) at (2,.5); \coordinate (a4) at (1,.5); \coordinate (a5) at (2,2.5); \draw[thick] (a2) to [out=-90,in=90] (a3); \draw[thick] (a1) to [out=90,in=90] (a2); \draw[thick] (a3) to [out=-90,in=-90] (a4); \draw[line width=5, color=white] (a0) to [out=90,in=-90] (a1); \draw[thick] (a0) to [out=90,in=-90] (a1); \draw[line width=5, color=white] (a4) to [out=90,in=-90] (a5); \draw[thick,->] (a4) to [out=90,in=-90] (a5); \end{tikzpicture}, \quad \xi^n:= \left\{ \begin{array}{cl} \begin{tikzpicture}[baseline] \draw[thick,->] (0,0)--(0,.3); \end{tikzpicture} & \text{if } n=0; \\ \underbrace{\xi^{\operatorname{sgn}(n)}\circ \dots\circ \xi^{\operatorname{sgn}(n)}}_{|n|\text{ times}}&\text{if } n\ne 0 \end{array} \right. \end{equation} where $n\in\mathbb{Z}$, $\operatorname{sgn}(n):=n/|n|$ and we identify the signs $\pm$ with the numbers $\pm1$. \begin{remark} Any normal long knot diagram has an even number of crossings. In particular, we have $w(\xi^n)=2n$.
\end{remark} \section{Invariants of long knots from rigid $R$-matrices}\label{sec:ilkrrm} We say an object $G$ of a monoidal category $\mathcal{C}$ (with tensor product $\otimes$ and unit object $\mathbb{I}$) admits a left \emph{\color{blue} adjoint} if there exists an object $F$ and morphisms \begin{equation} \varepsilon\colon F\otimes G\to \mathbb{I},\quad \eta\colon \mathbb{I}\to G\otimes F \end{equation} such that \begin{equation} (\varepsilon\otimes \operatorname{id}_{F})\circ( \operatorname{id}_{F}\otimes \eta)=\operatorname{id}_{F},\quad ( \operatorname{id}_{G}\otimes \varepsilon)\circ(\eta\otimes \operatorname{id}_{G})=\operatorname{id}_{G}. \end{equation} In that case, the quadruple $(F,G,\varepsilon,\eta)$ is called an \emph{\color{blue} adjunction} in $\mathcal{C}$. \begin{definition} Let $\mathcal{C}$ be a monoidal category. An \emph{\color{blue} R-matrix} over an object $G\in\operatorname{Ob}\mathcal{C}$ is an element $r\in\operatorname{Aut}(G\otimes G)$ that satisfies the Yang--Baxter relation \begin{equation} (r\otimes \operatorname{id}_G)\circ(\operatorname{id}_G\otimes r )\circ(r\otimes \operatorname{id}_G)=(\operatorname{id}_G\otimes r )\circ(r\otimes \operatorname{id}_G)\circ(\operatorname{id}_G\otimes r ). \end{equation} \end{definition} \begin{definition} Let $(F,G,\varepsilon,\eta)$ be an adjunction in a monoidal category $\mathcal{C}$. An $R$-matrix $r$ over $G$ is called \emph{\color{blue} rigid} if the morphisms \begin{equation}\label{eq:tild-rpm1} \widetilde{r^{\pm1}}:=(\varepsilon\otimes \operatorname{id}_{G\otimes F})\circ(\operatorname{id}_{F}\otimes r^{\pm1}\otimes\operatorname{id}_{ F})\circ( \operatorname{id}_{F\otimes G}\otimes \eta) \end{equation} are invertible.
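To make the Yang--Baxter relation and the rigidity condition concrete, the following numerical sketch (our own illustration, not part of the text) works in the category of finite-dimensional vector spaces with $G=\mathbb{C}^2$, $F=G^*$ and the standard evaluation and coevaluation maps; the sample matrix is the standard two-dimensional $R$-matrix underlying the Jones polynomial, which we take as an assumed example.

```python
# Illustrative sketch (assumed example, not from the text): the two-dimensional
# R-matrix of the Jones polynomial, checked against the Yang-Baxter relation
# and rigidity in finite-dimensional vector spaces with F = G*.
import numpy as np

q = 0.7                      # a generic nonzero parameter
lam = q - 1 / q

# r in Aut(G x G), G = C^2, basis ordered e0*e0, e0*e1, e1*e0, e1*e1
r = np.array([[q,   0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, lam, 0.0],
              [0.0, 0.0, 0.0, q]])

I2 = np.eye(2)
r1 = np.kron(r, I2)          # r tensor id_G
r2 = np.kron(I2, r)          # id_G tensor r

# Yang-Baxter relation: (r x id)(id x r)(r x id) = (id x r)(r x id)(id x r)
yang_baxter_holds = np.allclose(r1 @ r2 @ r1, r2 @ r1 @ r2)

def tilde(m):
    # With F = G* and the standard evaluation/coevaluation, the morphism
    # of (eq:tild-rpm1) becomes a reshuffle of tensor indices:
    # tilde(m)[(l,m),(a,b)] = m[(a,l),(b,m)].
    m4 = m.reshape(2, 2, 2, 2)
    return np.einsum('albm->lmab', m4).reshape(4, 4)

# rigidity: both tilde(r) and tilde(r^{-1}) must be invertible
rigid = all(abs(np.linalg.det(tilde(m))) > 1e-9
            for m in (r, np.linalg.inv(r)))
```

Both checks succeed for this sample matrix (in fact for any nonzero `q`, since the determinant of `tilde(r)` works out to $-q^2$), consistent with the fact that this $R$-matrix is rigid.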
\end{definition} We also denote \begin{equation}\label{eq:tild-tild-rpm1} \widetilde{\widetilde{r^{\pm1}}}:=(\varepsilon\otimes \operatorname{id}_{F\otimes F})\circ(\operatorname{id}_{F}\otimes \widetilde{r^{\pm1}}\otimes\operatorname{id}_{ F})\circ( \operatorname{id}_{F\otimes F}\otimes \eta). \end{equation} One easily checks the identity \begin{equation} \widetilde{\widetilde{r^{-1}}}=\left(\widetilde{\widetilde{r}}\right)^{-1}. \end{equation} Given a rigid R-matrix $r$ over $G$ with an adjunction $(F,G,\varepsilon,\eta)$, the \emph{\color{blue} Reshetikhin--Turaev functor} $RT_r$ associates to any normal long knot diagram $D$ the endomorphism $RT_r(D)\colon G\to G$ obtained as follows. Assuming that the non-trivial part of $D$ is contained in $\mathbb{R}\times [0,1]$, there exists a finite sequence of real numbers $0=t_0<t_1<\dots<t_{n-1}<t_n=1$ such that, for any $i\in \{0,\dots,n-1\}$, the intersection $D_i:=D\cap(\mathbb{R}\times [t_i,t_{i+1}])$ is an ordered (from left to right) finite sequence of connected components, each of which is isotopic relative to the boundary either to one of the four types of segments \begin{equation} \begin{tikzpicture}[yscale=.5,baseline] \draw[thick,->] (0,0) to [out=90,in=-90] (0,1); \end{tikzpicture}\ , \quad \begin{tikzpicture}[yscale=.5,baseline] \draw[thick,<-] (0,0) to [out=90,in=-90] (0,1); \end{tikzpicture}\ , \quad \begin{tikzpicture}[xscale=.5,baseline] \draw[thick,<-] (0,0) to [out=90,in=90] (1,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[xscale=.5,baseline=15] \draw[thick,<-] (0,1) to [out=-90,in=-90] (1,1); \end{tikzpicture} \end{equation} or to one of the eight types of crossings \begin{equation} \begin{tikzpicture}[scale=.5,baseline] \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[scale=.5,baseline] \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0);
\draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[scale=.5,baseline] \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[scale=.5,baseline] \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[scale=.5,baseline] \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[scale=.5,baseline] \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[scale=.5,baseline] \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0); \draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ , \quad \begin{tikzpicture}[scale=.5,baseline] \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ . 
\end{equation} To such an intersection, we associate a morphism $f_i$ in $\mathcal{C}$ by taking the tensor product (from left to right) of the morphisms associated to the connected fragments of $D_i$ according to the following rules: \begin{equation} \begin{tikzpicture}[yscale=.5,baseline=3] \draw[thick,->] (0,0) to [out=90,in=-90] (0,1); \end{tikzpicture}\ \mapsto \operatorname{id}_G, \quad \begin{tikzpicture}[yscale=.5,baseline=4] \draw[thick,<-] (0,0) to [out=90,in=-90] (0,1); \end{tikzpicture}\ \mapsto \operatorname{id}_{F}, \quad \begin{tikzpicture}[xscale=.5,baseline=2] \draw[thick,<-] (0,0) to [out=90,in=90] (1,0); \end{tikzpicture}\ \mapsto \varepsilon, \quad \begin{tikzpicture}[xscale=.5,baseline=22] \draw[thick,<-] (0,1) to [out=-90,in=-90] (1,1); \end{tikzpicture}\ \mapsto\eta, \end{equation} \begin{equation} \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ \mapsto r, \quad \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0); \draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ \mapsto r^{-1}, \end{equation} \begin{equation} \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ \mapsto \widetilde{r}, \quad \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,<-] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ \mapsto \widetilde{r^{-1}}, \end{equation} \begin{equation} \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); 
\draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ \mapsto \widetilde{ \widetilde{r}}, \quad \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ \mapsto \widetilde{ \widetilde{r^{-1}}}, \end{equation} \begin{equation} \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0); \draw[line width=3pt,white] (0,1) to [out=-90,in=90] (1,0); \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture}\ \mapsto \left( \widetilde{r^{-1}}\right)^{-1}, \quad \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,->] (0,1) to [out=-90,in=90] (1,0); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0,0); \draw[thick,<-] (1,1) to [out=-90,in=90] (0,0); \end{tikzpicture}\ \mapsto ( \widetilde{r})^{-1}. \end{equation} The morphism $RT_r(D)\colon G\to G$ associated to $D$ is obtained as the composition \begin{equation} RT_r(D):=f_{n-1}\circ \dots\circ f_1\circ f_0. \end{equation} \begin{example} Let $D=\xi^-$ be the diagram defined in \eqref{eq:xi+-}. Then $RT_r(D)=f_3\circ f_2\circ f_1\circ f_0$ where \begin{equation} f_0= \operatorname{id}_G\otimes \eta,\quad f_1= \operatorname{id}_G\otimes\left( \widetilde{r}\right)^{-1},\quad f_2=\left( \widetilde{r}\right)^{-1} \otimes\operatorname{id}_G,\quad f_3=\varepsilon \otimes\operatorname{id}_G. \end{equation} \end{example} \begin{theorem}[\cite{MR1036112,MR1025161,MR2796628}] Let $r$ be a rigid $R$-matrix over an object $G$ of a monoidal category $\mathcal{C}$ with an adjunction $(F,G,\varepsilon,\eta)$. Then, for any long knot diagram $D$, the element \begin{equation} J_r(D):=RT_r(\dot{D}\circ \xi^{-w(\dot{D})/2})\in \operatorname{End}(G) \end{equation} depends only on the Reidemeister equivalence class of $D$.
\end{theorem} \begin{proof} Let $\dot{\sim}$ be the equivalence relation on the set of normal long knot diagrams generated by the oriented versions of the Reidemeister moves RII and RIII and the moves R$0^\pm$ defined by the pictures \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=5] \draw[thick,->] (1,0) to [out=90,in=90] (0,0); \draw[thick] (.5,0)--(0,.4); \end{tikzpicture} \ \stackrel{\text{ R}0^+}{\longleftrightarrow}\ \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=5] \draw[thick,->] (1,0) to [out=90,in=90] (0,0); \draw[thick] (.5,0)--(1,.4); \end{tikzpicture} \ ,\qquad \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=-10] \draw[thick,->] (1,0) to [out=-90,in=-90] (0,0); \draw[thick] (.5,0)--(0,-.4); \end{tikzpicture} \ \stackrel{\text{ R}0^-}{\longleftrightarrow}\ \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=-10] \draw[thick,->] (1,0) to [out=-90,in=-90] (0,0); \draw[thick] (.5,0)--(1,-.4); \end{tikzpicture} \end{equation} with two possible orientations for the straight segment and two possibilities for the crossing. The strategy of the proof is to show first the implication \begin{equation}\label{eq:dot-sim-=} \dot{D}\ \dot{\sim}\ \dot{D}'\Rightarrow RT_r(\dot{D})=RT_r(\dot{D}') \end{equation} which, by taking into account the implication \begin{equation}\label{eq:dot-sim-=writhe} \dot{D}\ \dot{\sim}\ \dot{D}'\Rightarrow w(\dot{D})=w(\dot{D}'), \end{equation} ensures the invariance of $J_r(D)$ under all Reidemeister moves RII and RIII, and then to verify the invariance under the Reidemeister moves RI. It is in this last part of the proof that the correction of $\dot{D}$ by $\xi^{-w(\dot{D})/2}$ is crucial.
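Before going through the moves, it may help to see the rigidity data in the most concrete setting. The following sketch (our own illustration, under the assumptions of finite-dimensional vector spaces, $F=G^*$ with the standard evaluation and coevaluation, and the Jones-polynomial $R$-matrix as a sample) confirms numerically the identity $\widetilde{\widetilde{r^{-1}}}=\left(\widetilde{\widetilde{r}}\right)^{-1}$ stated after \eqref{eq:tild-tild-rpm1}, which is used repeatedly below.

```python
# Numerical sketch (assumptions: Vect, F = G*, sample Jones R-matrix): verify
# the identity tilde(tilde(r^{-1})) = tilde(tilde(r))^{-1}.
import numpy as np

q = 0.7
lam = q - 1 / q
r = np.array([[q,   0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, lam, 0.0],
              [0.0, 0.0, 0.0, q]])

def tilde(m):
    # Realization of (eq:tild-rpm1) and (eq:tild-tild-rpm1) in Vect: one and
    # the same index reshuffle, applied once for the single tilde and twice
    # for the double tilde.
    m4 = m.reshape(2, 2, 2, 2)
    return np.einsum('albm->lmab', m4).reshape(4, 4)

tt_r = tilde(tilde(r))                       # double tilde of r
tt_r_inv = tilde(tilde(np.linalg.inv(r)))    # double tilde of r^{-1}
identity_holds = np.allclose(tt_r_inv, np.linalg.inv(tt_r))
```

In this realization, applying `tilde` twice amounts to conjugating the transpose by the flip of tensor factors, so the identity holds for any invertible matrix, not just this sample.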
Invariance of the Reshetikhin--Turaev functor with respect to the moves $\text{R}0^\pm$ follows from the equivalences \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=5] \draw[thick,->] (1,0) to [out=90,in=90] (0,0); \draw[thick] (.5,0)--(0,.4); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=5] \draw[thick,->] (1,0) to [out=90,in=90] (0,0); \draw[thick] (.5,0)--(1,.4); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=5] \draw[thick,->] (1,0) to [out=90,in=90] (0,0); \draw[thick] (2,.4) to [out=-90,in=-90] (1,0); \draw[thick] (.5,0)--(0,.4); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=5] \draw[thick,->] (1,0) to [out=90,in=90] (0,0); \draw[thick] (2,.4) to [out=-90,in=-90] (1,0); \draw[thick] (.5,0)--(1,.4); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \draw[thick] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=5] \draw[thick,->] (1,0) to [out=90,in=90] (0,0); \draw[thick] (2,.4) to [out=-90,in=-90] (1,0); \draw[thick] (.5,0)--(1,.4); \end{tikzpicture}, \end{equation} \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=-10] \draw[thick,->] (1,0) to [out=-90,in=-90] (0,0); \draw[thick] (.5,0)--(0,-.4); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=-10] \draw[thick,->] (1,0) to [out=-90,in=-90] (0,0); \draw[thick] (.5,0)--(1,-.4); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=-10] \draw[thick] (1,0) to [out=-90,in=-90] (0,0); \draw[thick,->] (0,0) to [out=90,in=90] (-1,-.4); \draw[thick] (.5,0)--(0,-.4); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=-10] \draw[thick] (1,0) to [out=-90,in=-90] (0,0); \draw[thick,->] (0,0) to [out=90,in=90] (-1,-.4); \draw[thick] (.5,0)--(1,-.4);
\end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=1.5,baseline=-10] \draw[thick] (1,0) to [out=-90,in=-90] (0,0); \draw[thick,->] (0,0) to [out=90,in=90] (-1,-.4); \draw[thick] (.5,0)--(0,-.4); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[scale=.5,baseline=3] \draw[thick,->] (1,1) to [out=-90,in=90] (0,0); \draw[thick] (0,1) to [out=-90,in=90] (1,0); \end{tikzpicture} \end{equation} and the definitions \eqref{eq:tild-rpm1} and \eqref{eq:tild-tild-rpm1} of $\widetilde{r^{\pm1}}$ and $\widetilde{\widetilde{r^{\pm1}}}$. Invariance of $RT_r$ with respect to the oriented $\text{RII}$ moves is easily checked, first for the eight basic moves \begin{equation} \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick] (1,0) to [out=135,in=-90] (.3,.5); \draw[thick,->] (.3,.5) to [out=90,in=-135] (1,1); \draw[line width=3pt,white] (0,0) to [out=45,in=-90] (.7,.5); \draw[line width=3pt,white] (.7,.5) to [out=90,in=-45] (0,1); \draw[thick] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick,->] (.7,.5) to [out=90,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,->] (1,0) to [out=135,in=-135] (1,1); \draw[thick,->] (0,0) to [out=45,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick,->] (.7,.5) to [out=90,in=-45] (0,1); \draw[line width=3pt,white] (1,0) to [out=135,in=-90] (.3,.5); \draw[line width=3pt,white] (.3,.5) to [out=90,in=-135] (1,1); \draw[thick] (1,0) to [out=135,in=-90] (.3,.5); \draw[thick,->] (.3,.5) to [out=90,in=-135] (1,1); \end{tikzpicture} \stackrel{RT_r}{\longmapsto} r^{-1}\circ r=\operatorname{id}_{G\otimes G}=r\circ r^{-1}, \end{equation} \begin{equation} \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,->] (.3,.5) to [out=-90,in=135](1,0); \draw[thick] (.3,.5) to [out=90,in=-135] (1,1); \draw[line width=3pt,white] (0,0) to [out=45,in=-90] (.7,.5);
\draw[line width=3pt,white] (.7,.5) to [out=90,in=-45] (0,1); \draw[thick] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick,->] (.7,.5) to [out=90,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,->] (1,1) to [out=-135,in=135] (1,0); \draw[thick,->] (0,0) to [out=45,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick,->] (.7,.5) to [out=90,in=-45] (0,1); \draw[line width=3pt,white] (1,0) to [out=135,in=-90] (.3,.5); \draw[line width=3pt,white] (.3,.5) to [out=90,in=-135] (1,1); \draw[thick,<-](1,0) to [out=135,in=-90] (.3,.5); \draw[thick] (.3,.5) to [out=90,in=-135] (1,1); \end{tikzpicture} \stackrel{RT_r}{\longmapsto} \widetilde{r}\circ (\widetilde{r})^{-1}=\operatorname{id}_{G\otimes F}= \widetilde{r^{-1}}\circ \left(\widetilde{r^{-1}}\right)^{-1}, \end{equation} \begin{equation} \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick] (1,0) to [out=135,in=-90] (.3,.5); \draw[thick,->] (.3,.5) to [out=90,in=-135] (1,1); \draw[line width=3pt,white] (0,0) to [out=45,in=-90] (.7,.5); \draw[line width=3pt,white] (.7,.5) to [out=90,in=-45] (0,1); \draw[thick,<-] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick] (.7,.5) to [out=90,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,->] (1,0) to [out=135,in=-135] (1,1); \draw[thick,<-] (0,0) to [out=45,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,<-] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick] (.7,.5) to [out=90,in=-45] (0,1); \draw[line width=3pt,white] (1,0) to [out=135,in=-90] (.3,.5); \draw[line width=3pt,white] (.3,.5) to [out=90,in=-135] (1,1); \draw[thick] (1,0) to [out=135,in=-90] (.3,.5); \draw[thick,->] (.3,.5) to [out=90,in=-135] (1,1); \end{tikzpicture} \stackrel{RT_r}{\longmapsto} 
\left(\widetilde{r^{-1}}\right)^{-1}\circ \widetilde{r^{-1}}=\operatorname{id}_{F\otimes G}= (\widetilde{r})^{-1}\circ \widetilde{r}, \end{equation} \begin{equation} \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,<-] (1,0) to [out=135,in=-90] (.3,.5); \draw[thick] (.3,.5) to [out=90,in=-135] (1,1); \draw[line width=3pt,white] (0,0) to [out=45,in=-90] (.7,.5); \draw[line width=3pt,white] (.7,.5) to [out=90,in=-45] (0,1); \draw[thick,<-] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick] (.7,.5) to [out=90,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,<-] (1,0) to [out=135,in=-135] (1,1); \draw[thick,<-] (0,0) to [out=45,in=-45] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=0.5,yscale=1,baseline=10] \draw[thick,<-] (0,0) to [out=45,in=-90] (.7,.5); \draw[thick] (.7,.5) to [out=90,in=-45] (0,1); \draw[line width=3pt,white] (1,0) to [out=135,in=-90] (.3,.5); \draw[line width=3pt,white] (.3,.5) to [out=90,in=-135] (1,1); \draw[thick,<-] (1,0) to [out=135,in=-90] (.3,.5); \draw[thick] (.3,.5) to [out=90,in=-135] (1,1); \end{tikzpicture} \stackrel{RT_r}{\longmapsto} \widetilde{\widetilde{r^{-1}}}\circ \widetilde{\widetilde{r}}=\operatorname{id}_{F\otimes F}= \widetilde{\widetilde{r}}\circ\widetilde{\widetilde{r^{-1}}}, \end{equation} and then for two composite moves \begin{equation} \begin{tikzpicture}[xscale=1,yscale=.5,baseline=5] \draw[thick] (1,0) to [out=90,in=0] (.5,.9); \draw[thick,->] (.5,.9) to [out=180,in=90] (0,0); \draw[thick] (1,1) to [out=-90,in=0] (.5,.1); \draw[thick,->] (.5,.1) to [out=180,in=-90] (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=1,yscale=.5,baseline=5] \draw[thick] (.2,0) to [out=45,in=-135] (.6,.4); \draw[thick] (.6,.4) to [out=45,in=0] (.5,.9); \draw[thick,->] (.5,.9) to [out=180,in=90] (0,0); \draw[thick] (1,1) to [out=-90,in=0] (.5,.1); \draw[thick,->] (.5,.1) to [out=180,in=-90] (0,1); \end{tikzpicture} \ \dot{\sim}\ 
\begin{tikzpicture}[xscale=1,yscale=.5,baseline=5] \draw[thick] (.2,0) to [out=45,in=-135] (.6,.4); \draw[thick] (.6,.4) to [out=45,in=0] (.5,.9); \draw[thick,->] (.5,.9) to [out=180,in=90] (0,0); \draw[thick] (1,1) to [out=-90,in=0] (.5,.1); \draw[thick] (.5,.1) to [out=180,in=-135] (.4,.6); \draw[thick,->] (.4,.6) to [out=45,in=-135] (.8,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=1,yscale=.5,baseline=5] \draw[thick] (.2,0) to [out=90,in=0] (.3,.9); \draw[thick,->] (.3,.9) to [out=180,in=90] (0,0); \draw[thick] (1,1) to [out=-90,in=0] (.7,.1); \draw[thick,->] (.7,.1) to [out=180,in=-90] (.8,1); \end{tikzpicture} \ = \ \begin{tikzpicture}[xscale=1,yscale=.5,baseline=5] \draw[thick,->] (1,0) to [out=135,in=45] (0,0); \draw[thick,->] (1,1) to [out=-135,in=-45] (0,1); \end{tikzpicture} \end{equation} with two possible choices for the crossings. In order to check the invariance of $RT_r$ with respect to RIII moves, we remark that altogether there are 48 such moves, which can be indexed by the set $\operatorname{Sym}(3)\times \{\pm1\}^3$ as follows.
Given an RIII move, we enumerate the strands involved in the move by following their bottom open ends from left to right \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \quad \rightsquigarrow\quad \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \node (a) at (0,0){\tiny1}; \node (b) at (1,0){\tiny2}; \node (c) at (2,0){\tiny3}; \draw[thick] (a)--(2,1); \draw[thick] (b) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (c)-- (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \node (a) at (0,0){\tiny1}; \node (b) at (1,0){\tiny2}; \node (c) at (2,0){\tiny3}; \draw[thick] (a)--(2,1); \draw[thick] (b) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (c)-- (0,1); \end{tikzpicture} \end{equation} and we define the associated element $(\sigma,\varepsilon)\in\operatorname{Sym}(3)\times \{\pm1\}^3$ by the conditions that, for any $i\in \{1,2,3\}$, $\sigma(i)$ is the number of arcs on the $i$-th strand and $\varepsilon_i=1$ if the $i$-th strand is oriented upwards.
For example, the RIII move \begin{equation} \begin{tikzpicture}[xscale=1,yscale=1,baseline=10] \draw[thick,<-] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[line width=3pt,white] (2,0)-- (0,1); \draw[thick,->] (2,0)-- (0,1); \draw[line width=3pt,white] (0,0)--(2,1); \draw[thick,->] (0,0)--(2,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=1,yscale=1,baseline=10] \draw[thick,<-] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[line width=3pt,white] (2,0)-- (0,1); \draw[thick,->] (2,0)-- (0,1); \draw[line width=3pt,white] (0,0)--(2,1); \draw[thick,->] (0,0)--(2,1); \end{tikzpicture} \end{equation} corresponds to the permutation $\sigma=(2,3)=(1)(2,3)$ and $\varepsilon=(1,-1,1)$, while the pair $(\sigma=\operatorname{id}, \varepsilon=(1,1,1))$ corresponds to the reference move associated to the Yang--Baxter relation \begin{equation}\label{eq.basriii} \begin{tikzpicture}[xscale=1,yscale=1,baseline=10] \draw[thick,->] (2,0)-- (0,1); \draw[line width=3pt,white] (1,1) to [out=-90,in=90] (0.5,.5); \draw[thick] (1,0) to [out=90,in=-90] (0.5,.5); \draw[thick,<-] (1,1) to [out=-90,in=90] (0.5,.5); \draw[line width=3pt,white] (0,0)--(2,1); \draw[thick,->] (0,0)--(2,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=1,yscale=1,baseline=10] \draw[thick,->] (2,0)-- (0,1); \draw[line width=3pt,white] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick,<-] (1,1) to [out=-90,in=90] (1.5,.5); \draw[line width=3pt,white] (0,0)--(2,1); \draw[thick,->] (0,0)--(2,1); \end{tikzpicture} \ \stackrel{RT_r}{\longmapsto}\ r_1\circ r_2\circ r_1=r_2\circ r_1\circ r_2 \end{equation} with the notations $r_1:=r\otimes \operatorname{id}_G$ and $r_2:=\operatorname{id}_G\otimes r $. One can now show that, by using the moves RII and $\text{R}0^\pm$, any RIII move is equivalent to the reference move~\eqref{eq.basriii}.
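The combinatorial part of this reduction is a finite check and can be verified mechanically. The following sketch (our own illustration; the encoding and the two types of equivalences follow the rules spelled out in the remainder of the proof, namely the rule that removes negative components of $\varepsilon$ and the right $\operatorname{Sym}(3)$-action) confirms that all $6\cdot 2^3=48$ classes of RIII moves are connected to the reference move $(\operatorname{id},(1,1,1))$.

```python
# Combinatorial check (illustration only): the 48 oriented RIII moves, indexed
# by Sym(3) x {+1,-1}^3, form a single equivalence class under
#   (i)  (sigma, (-1, e2, e3)) <-> (sigma o (1,2,3), (e2, e3, +1)),
#   (ii) the right Sym(3)-action (sigma, eps) <-> (sigma o tau, eps o tau).
from itertools import product, permutations
from collections import deque

CYCLE = (1, 2, 0)                  # the 3-cycle (1,2,3), written 0-based
GENS = ((1, 0, 2), (0, 2, 1))      # the transpositions (1,2) and (2,3)

def compose(sigma, tau):
    # (sigma o tau)(i) = sigma(tau(i))
    return tuple(sigma[tau[i]] for i in range(3))

def neighbours(state):
    sigma, eps = state
    nbrs = [(compose(sigma, tau), tuple(eps[tau[i]] for i in range(3)))
            for tau in GENS]                        # rule (ii)
    if eps[0] == -1:                                # rule (i), forward direction
        nbrs.append((compose(sigma, CYCLE), (eps[1], eps[2], 1)))
    return nbrs

states = [(s, e) for s in permutations(range(3))
          for e in product((1, -1), repeat=3)]
adj = {s: set() for s in states}
for s in states:                                    # build undirected graph
    for t in neighbours(s):
        adj[s].add(t)
        adj[t].add(s)                               # symmetrize rule (i)

start = ((0, 1, 2), (1, 1, 1))                      # the reference move
seen, queue = {start}, deque([start])
while queue:                                        # breadth-first search
    for t in adj[queue.popleft()]:
        if t not in seen:
            seen.add(t)
            queue.append(t)

all_48_reduce = (len(states) == 48 and len(seen) == 48)
```

The search visits all 48 states, matching the counting $|\operatorname{Sym}(3)\times\{\pm1\}^3|=48$ and the claim that every RIII move reduces to the reference one.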
Indeed, in the case $\varepsilon_1=-1$, we have the equivalences \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick,<-] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick,<-] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick,<-] (-1,1) to [out=-45,in=-135] (1,.4); \draw[thick] (3,0) to [out=135,in=45] (1,.4); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick,<-] (-1,1) to [out=-45,in=-135] (1,.4); \draw[thick] (3,0) to [out=135,in=45] (1,.4); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick,->] (2,0)-- (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick,->] (2,0)-- (0,1); \end{tikzpicture} \end{equation} which imply the equivalence \begin{equation}\label{eq:equiv(-1,.)} (\sigma,(-1,\varepsilon_2,\varepsilon_3))\Leftrightarrow (\sigma\circ(1,2,3),(\varepsilon_2,\varepsilon_3,1)) \end{equation} in the set $\operatorname{Sym}(3)\times \{\pm1\}^3$, thus allowing us to reduce the number of negative components of $\varepsilon$.
Additionally, a right action of the permutation group $\operatorname{Sym}(3)$ on the set $\operatorname{Sym}(3)\times \{\pm1\}^3$ is induced by the equivalences \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0) -- (2,1); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)--(0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0)--(0,1); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=.33,baseline] \draw[thick] (2,-1) to [out=90,in=-90] (0,2); \draw[thick] (0,0) to [out=90,in=-90](2,1); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (0,-1) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,-1) to [out=90,in=-90] (0,0); \draw[thick] (1,1) to [out=90,in=-90] (2,2); \draw[thick] (2,1) to [out=90,in=-90] (1,2); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=.33,baseline] \draw[thick] (0,0) to [out=90,in=-90](2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (2,-1) to [out=90,in=-90] (0,2); \draw[thick] (0,-1) to [out=90,in=-90] (1,0); \draw[thick] (1,-1) to [out=90,in=-90] (0,0); \draw[thick] (.5,.5) to [out=90,in=-90] (2,2); \draw[thick] (2,1) to [out=90,in=-90] (1,2); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0)--(0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0) -- (2,1); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)--(0,1); \end{tikzpicture} \end{equation} and \begin{equation} 
\begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0) -- (2,1); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)--(0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=.33,baseline] \draw[thick] (0,-1) to [out=90,in=-90] (2,2); \draw[thick] (0,2) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)to [out=90,in=-90] (0,1); \draw[thick] (0,1) to [out=90,in=-90] (1,2); \draw[thick] (1,0) to [out=90,in=-90](1.5,.5); \draw[thick] (1,-1) to [out=90,in=-90] (2,0); \draw[thick] (2,-1) to [out=90,in=-90] (1,0); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=.33,baseline] \draw[thick] (0,-1) to [out=90,in=-90] (2,2); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0) to [out=90,in=-90] (0,1); \draw[thick] (0,1) to [out=90,in=-90] (1,2); \draw[thick] (1,1) to [out=90,in=-90] (0,2); \draw[thick] (1,-1) to [out=90,in=-90] (2,0); \draw[thick] (2,-1) to [out=90,in=-90](.5,.5); \end{tikzpicture} \Leftrightarrow \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0)--(2,1); \draw[thick] (1,0) to [out=90,in=-90] (.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (.5,.5); \draw[thick] (2,0)-- (0,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=1,baseline=10] \draw[thick] (0,0) -- (2,1); \draw[thick] (1,0) to [out=90,in=-90] (1.5,.5); \draw[thick] (1,1) to [out=-90,in=90] (1.5,.5); \draw[thick] (2,0)--(0,1); \end{tikzpicture} \end{equation} which correspond to the respective equivalences \begin{equation} (\sigma,\varepsilon)\Leftrightarrow (\sigma,\varepsilon)\circ (1,2)\ \text{ and }\ (\sigma,\varepsilon)\Leftrightarrow (\sigma,\varepsilon)\circ (2,3) \end{equation} in the set 
$\operatorname{Sym}(3)\times \{\pm1\}^3$, where we interpret $(\sigma,\varepsilon)\in\operatorname{Sym}(3)\times \{\pm1\}^3$ as the map \begin{equation} (\sigma,\varepsilon)\colon \{1,2,3\}\to \{1,2,3\}\times\{\pm1\},\quad i\mapsto (\sigma(i),\varepsilon_i). \end{equation} Thus, in conjunction with the equivalence~\eqref{eq:equiv(-1,.)}, the right action of the group $\operatorname{Sym}(3)$ on the set $\operatorname{Sym}(3)\times \{\pm1\}^3$ establishes the equivalence of any RIII move to the reference move~\eqref{eq.basriii} and thereby the invariance of $RT_r$ with respect to all RIII moves. Finally, in order to prove the invariance of $J_r$ with respect to all RI moves, we need to check only the invariance with respect to the four basic moves of the form \begin{equation}\label{eq:basicRI} \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=0] (1,1); \draw[thick] (1,1) to [out=180,in=180] (2,0); \end{tikzpicture} \ \sim \ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=180] (1,1); \draw[thick] (1,1) to [out=0,in=180] (2,0); \end{tikzpicture} \end{equation} as all others are consequences of the basic ones and the intermediate equivalence relation $\dot{\sim}$ generated by the moves R0$^\pm$, RII and RIII: \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0)-- (0,1); \end{tikzpicture} \ = \ \begin{tikzpicture}[xscale=.25,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=90,in=180] (1,1); \draw[thick] (1,1) to [out=0,in=180] (2,0); \draw[thick] (2,0) to [out=0,in=-90] (3,1); \end{tikzpicture} \ \sim\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=0] (1,1); \draw[thick] (1,1) to [out=180,in=180] (2,0); \draw[thick] (2,0) to [out=0,in=-90] (3,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,.5) to [out=90,in=90] (1,0); \draw[thick] (0,.5) to [out=-90,in=-90] (1,1); \end{tikzpicture} \
\dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,1) to [out=0,in=0] (1,0); \draw[thick] (1,0) to [out=180,in=180] (2,1); \draw[thick] (2,1) to [out=0,in=90] (3,0); \end{tikzpicture} \Rightarrow \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,1) to [out=0,in=180] (1,0); \draw[thick] (1,0) to [out=0,in=180] (2,1); \end{tikzpicture} \ \sim\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,1) to [out=0,in=0] (1,0); \draw[thick] (1,0) to [out=180,in=180] (2,1); \end{tikzpicture} \end{equation} and \begin{equation} \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0)-- (0,1); \end{tikzpicture} \ = \ \begin{tikzpicture}[xscale=.25,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=90,in=180] (1,1); \draw[thick] (1,1) to [out=0,in=180] (2,0); \draw[thick] (2,0) to [out=0,in=-90] (3,1); \end{tikzpicture} \ \sim\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=90,in=180] (1,1); \draw[thick] (1,1) to [out=0,in=0] (2,0); \draw[thick] (2,0) to [out=180,in=180] (3,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=90,in=90] (1,.5); \draw[thick] (0,1) to [out=-90,in=-90] (1,.5); \end{tikzpicture}\ . \end{equation} Let us analyse the four cases of \eqref{eq:basicRI} separately. Case~1. If diagrams $D$ and $D'$ differ by the fragments \begin{equation} D\ni\begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,->] (1,1) to [out=180,in=180] (2,0); \draw[line width=3,white] (0,0) to [out=0,in=0] (1,1); \draw[thick] (0,0) to [out=0,in=0] (1,1); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=180] (1,1); \draw[thick,->] (1,1) to [out=0,in=180] (2,0); \end{tikzpicture} \in D', \end{equation} then, by the definition of the normalisation of a long knot diagram, we have the equality $\dot D=\dot D'$. Thus, $J_r(D)=J_r(D')$. Case~2.
Diagrams $D$ and $D'$ differ by the fragments \begin{equation} D\ni \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=0] (1,1); \draw[line width=3,white] (1,1) to [out=180,in=180] (2,0); \draw[thick,->] (1,1) to [out=180,in=180] (2,0); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=180] (1,1); \draw[thick,->] (1,1) to [out=0,in=180] (2,0); \end{tikzpicture} \in\ D' \end{equation} so that the normalised diagrams $\dot{D}$ and $\dot{D}'$ differ by the fragments \begin{equation} \dot{D}\ni \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=0] (1,1); \draw[line width=3,white] (1,1) to [out=180,in=180] (2,0); \draw[thick,->] (1,1) to [out=180,in=180] (2,0); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,->] (1,1) to [out=180,in=180] (2,0); \draw[line width=3,white] (0,0) to [out=0,in=0] (1,1); \draw[thick] (0,0) to [out=0,in=0] (1,1); \end{tikzpicture} \in\ \dot{D}' \end{equation} which imply that \begin{equation}\label{eq:case2writhe} w(\dot{D})=2+w(\dot{D}')\Rightarrow\xi^+ \circ\xi^{-w(\dot{D})/2}= \xi^{-w(\dot{D}')/2}. 
\end{equation} On the other hand, we have the equivalence \begin{equation} \dot{D}\ni \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=0] (1,1); \draw[line width=3,white] (1,1) to [out=180,in=180] (2,0); \draw[thick,->] (1,1) to [out=180,in=180] (2,0); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (0,0) to [out=0,in=0] (1,1); \draw[thick,->] (3,1) to [out=180,in=180] (4,0); \draw[line width=3,white] (2,0) to [out=180,in=0] (3,1); \draw[thick] (2,0) to [out=180,in=0] (3,1); \draw[line width=3,white] (1,1) to [out=180,in=0] (2,0); \draw[thick] (1,1) to [out=180,in=0] (2,0); \end{tikzpicture} \in \dot{D}'\circ\xi^+ \end{equation} which, together with \eqref{eq:case2writhe}, implies that \begin{equation} \dot{D}\circ \xi^{-w(\dot{D})/2}\ \dot{\sim}\ \dot{D}'\circ\xi^+\circ \xi^{-w(\dot{D})/2}= \dot{D}'\circ \xi^{-w(\dot{D}')/2}\Rightarrow J_r(D)=J_r(D'). \end{equation} Case~3. Diagrams $D$ and $D'$ differ by the fragments \begin{equation} D\ni \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,<-] (0,0) to [out=0,in=0] (1,1); \draw[line width=3,white] (1,1) to [out=180,in=180] (2,0); \draw[thick] (1,1) to [out=180,in=180] (2,0); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,<-] (0,0) to [out=0,in=180] (1,1); \draw[thick] (1,1) to [out=0,in=180] (2,0); \end{tikzpicture} \in\ D' \end{equation} so that \begin{equation} \dot D\ni \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,<-] (0,0) to [out=0,in=-90] (1.5,1); \draw[thick] (1,2) to [out=180,in=90] (1.5,1); \draw[line width=3,white] (2,0) to [out=180,in=-90] (.5,1); \draw[line width=3,white](1,2) to [out=0,in=90] (.5,1); \draw[thick] (2,0) to [out=180,in=-90] (.5,1); \draw[thick] (1,2) to [out=0,in=90] (.5,1); \end{tikzpicture} \ \dot{\sim}\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,<-] (0,0) to [out=0,in=180] (1,1); \draw[thick] (1,1) to 
[out=0,in=180] (2,0); \end{tikzpicture} \in\ \dot D' \Rightarrow J_r(D)=J_r(D'). \end{equation} Case~4. Diagrams $D$ and $D'$ differ by the fragments \begin{equation} D\ni\begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (1,1) to [out=180,in=180] (2,0); \draw[line width=3,white] (0,0) to [out=0,in=0] (1,1); \draw[thick,<-] (0,0) to [out=0,in=0] (1,1); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,<-] (0,0) to [out=0,in=180] (1,1); \draw[thick] (1,1) to [out=0,in=180] (2,0); \end{tikzpicture} \in D' \end{equation} so that we have for the corresponding normalised diagrams \begin{equation} \dot D\ni \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick] (2,0) to [out=180,in=-90] (.5,1); \draw[thick] (1,2) to [out=180,in=90] (1.5,1); \draw[line width=3,white] (0,0) to [out=0,in=-90] (1.5,1); \draw[line width=3,white](1,2) to [out=0,in=90] (.5,1); \draw[thick,<-] (0,0) to [out=0,in=-90] (1.5,1); \draw[thick] (1,2) to [out=0,in=90] (.5,1); \end{tikzpicture} \ \dot\sim\ \begin{tikzpicture}[xscale=.5,yscale=.5,baseline=5] \draw[thick,<-] (-2,0) to [out=0,in=180] (-1,1); \draw[thick] (-1,1) to [out=0,in=180] (0,0); \draw[thick] (2,0) to [out=180,in=-90] (.5,1); \draw[thick] (1,2) to [out=180,in=90] (1.5,1); \draw[line width=3,white] (0,0) to [out=0,in=-90] (1.5,1); \draw[line width=3,white](1,2) to [out=0,in=90] (.5,1); \draw[thick] (0,0) to [out=0,in=-90] (1.5,1); \draw[thick] (1,2) to [out=0,in=90] (.5,1); \end{tikzpicture} \in \dot D'\circ \xi^- \end{equation} and \begin{equation} w(\dot{D})=w(\dot{D}')-2\Rightarrow\xi^- \circ\xi^{-w(\dot{D})/2}= \xi^{-w(\dot{D}')/2}. \end{equation} Thus, \begin{equation} \dot D\circ\xi^{-w(\dot{D})/2}\ \dot\sim\ \dot D'\circ\xi^-\circ \xi^{-w(\dot{D})/2}=\dot D'\circ\xi^{-w(\dot{D}')/2}\Rightarrow J_r(D)=J_r(D'). 
\end{equation} \end{proof} \section{Rigid R-matrices from racks}\label{sec:rrmfr} A \emph{\color{blue} binary relation} from a set $X$ to a set $Y$ is a subset of the cartesian product $X\times Y$. The composition of two binary relations $R\subset X\times Y$ and $S\subset Y\times Z$ is the binary relation $S\circ R\subset X\times Z$ defined by \begin{equation} S\circ R:=\{ (x,z)\in X\times Z\mid \exists y\in Y\colon\ (x,y)\in R,\ (y,z)\in S\}. \end{equation} A \emph{\color{blue} span} from a set $X$ to a set $Y$ is a triple $U=(U,\operatorname{s}_U,\operatorname{t}_U)$ where $U$ is a set and $\operatorname{s}_U\colon U\to X$ and $\operatorname{t}_U\colon U\to Y$ are set-theoretical maps. The composition of two spans $U$ from $X$ to $Y$ and $V$ from $Y$ to $Z$ is the span from $X$ to $Z$ defined as the pullback space (fibered product) $V\circ U:=U\times_Y V$ together with the natural projections to $X$ and $Z$. Two spans $U$ and $V$ from $X$ to $Y$ are called \emph{\color{blue} equivalent} if there exists a bijection $f\colon U\to V$ such that $\operatorname{s}_V\circ f=\operatorname{s}_U$ and $\operatorname{t}_V\circ f=\operatorname{t}_U$. The composition of spans induces an associative binary operation for the equivalence classes of spans. Any binary relation $R\subset X\times Y$ is a special case of a span with $\operatorname{s}_R\colon R\to X$ and $\operatorname{t}_R\colon R\to Y$ being the canonical projections. Let $\mathbf{Set}$ be the monoidal category of sets with the cartesian product as the monoidal product, and $\mathbf{Rel}$ (respectively $\mathbf{Span}$) the extension of $\mathbf{Set}$ with morphisms given by binary relations (respectively equivalence classes of spans). For a morphism $Z\colon X\to Y$ in $\mathbf{Span}$, and any $(x,y)\in X\times Y$, we denote \begin{equation} Z(x,y):=\operatorname{s}_Z^{-1}(x)\cap\operatorname{t}_Z^{-1}(y).
\end{equation} We have a canonical monoidal functor \begin{equation} \varpi\colon\mathbf{Span}\to \mathbf{Rel} \end{equation} which is identity on the level of objects and for any morphism $Z\colon X\to Y$ in $\mathbf{Span}$, the corresponding morphism in $ \mathbf{Rel}$ is given by \begin{equation} \varpi(Z)=\{(x,y)\in X\times Y\mid Z(x,y)\ne\emptyset\}. \end{equation} Notice that if $Z$ is a relation (as a particular case of spans) then $\varpi(Z)=Z$. Given a set theoretical map $f\colon X\to Y$, its graph \begin{equation} \Gamma_f:=\{(x,f(x))\mid x\in X\}\subset X\times Y \end{equation} is naturally interpreted as a morphism $\rho(f)$ in $\mathbf{Rel}$ and a morphism $\sigma(f)$ in $\mathbf{Span}$. The advantage of the categories $\mathbf{Rel}$ and $\mathbf{Span}$ over $\mathbf{Set}$ is their rigidity, namely, for any set $X$, the diagonal $ \Delta_X:=\Gamma_{\operatorname{id}_X}, $ interpreted as morphisms $\varepsilon_X\colon X\times X\to \{0\}$ and $\eta_X\colon \{0\}\to X\times X$ in $\mathbf{Rel}$ and $\mathbf{Span}$, gives rise to a canonical adjunction $(X,X,\varepsilon_X,\eta_X)$ both in $\mathbf{Rel}$ and $\mathbf{Span}$. Let $X$ be a (left) \emph{\color{blue} rack}~\cite{MR1194995,MR0021002,MR638121,MR672410}, that is a set with a map $$ X^2\to X^2,\quad (x,y)\mapsto (x\cdot y, x*y), $$ such that the binary operation $x\cdot y$ is left self-distributive $$ x\cdot(y\cdot z)=(x\cdot y)\cdot (x\cdot z),\quad \forall (x,y,z)\in X^3, $$ and $$ x*(x\cdot y)=y,\quad \forall (x,y)\in X^2. $$ It is easily verified that for any rack $X$, the set-theoretical map $$ r\colon X^2\to X^2,\quad (x,y)\mapsto (x\cdot y,x), $$ is a rigid R-matrix in the categories $\mathbf{Rel}$ and $\mathbf{Span}$. Moreover, all the relevant morphisms are realised by set-theoretical maps. 
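As a concrete sanity check of the rack axioms and of the claim that $r(x,y)=(x\cdot y,x)$ braids correctly, one can brute-force a small example. The sketch below (not from the paper: the dihedral rack $x\cdot y = 2x-y$ on $\mathbb{Z}/n$ and all helper names are our own illustrative choices) verifies left self-distributivity, the axiom $x*(x\cdot y)=y$, and the braid form of the set-theoretic Yang--Baxter equation on $X^3$.

```python
from itertools import product

def make_dihedral_rack(n):
    """Dihedral rack on Z/n: x . y = 2x - y (mod n).
    This operation is involutive, x . (x . y) = y, so x * y = x . y here."""
    dot = lambda x, y: (2 * x - y) % n
    return dot, dot

def r(x, y, dot):
    """The candidate R-matrix of the text: r(x, y) = (x . y, x)."""
    return dot(x, y), x

def check_rack_axioms(X, dot, star):
    for x, y, z in product(X, repeat=3):
        # left self-distributivity: x.(y.z) = (x.y).(x.z)
        assert dot(x, dot(y, z)) == dot(dot(x, y), dot(x, z))
    for x, y in product(X, repeat=2):
        # x * (x . y) = y
        assert star(x, dot(x, y)) == y

def check_braid_relation(X, dot):
    """Braid form of the set-theoretic Yang-Baxter equation:
    (r x id)(id x r)(r x id) = (id x r)(r x id)(id x r) on X^3."""
    r1 = lambda t: (*r(t[0], t[1], dot), t[2])
    r2 = lambda t: (t[0], *r(t[1], t[2], dot))
    for t in product(X, repeat=3):
        assert r1(r2(r1(t))) == r2(r1(r2(t)))

n = 5
dot, star = make_dihedral_rack(n)
check_rack_axioms(range(n), dot, star)
check_braid_relation(range(n), dot)
print("rack axioms and braid relation verified on Z/%d" % n)
```

Running the check, the braid relation reduces exactly to left self-distributivity, in accordance with the computation $(r\times\operatorname{id})(\operatorname{id}\times r)(r\times\operatorname{id})(x,y,z)=((x\cdot y)\cdot(x\cdot z),x\cdot y,x)$.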
Indeed, introducing the maps $$ r',s,s'\colon X^2\to X^2,\quad r'(x,y)=(y, y\cdot x),\ s(x,y)=(x*y,x),\ s'(x,y)=(y,y*x), $$ we obtain $$ r^{-1}=\widetilde{r}=s', \quad (\widetilde{r})^{-1}=r,\quad \widetilde{r^{-1}}=\widetilde{\widetilde{r}}=s,\quad \left(\widetilde{r^{-1}}\right)^{-1}=\widetilde{\widetilde{r^{-1}}}=r'. $$ Thus, we obtain two long knot invariants $J_{\rho(r)}(D)$ in $\mathbf{Rel}$ and $J_{\sigma(r)}(D)$ in $\mathbf{Span}$ which are related to each other by the equality \begin{equation} J_{\rho(r)}(D)=\varpi(J_{\sigma(r)}(D)). \end{equation} \subsection{Racks associated to pointed groups} Let $(G,\mu)$ be a \emph{\color{blue} pointed group} that is a group $G$ together with a fixed element $\mu\in G$. Then, it is easily verified that the set $G$ with the map \begin{equation} G\times G\to G\times G, \quad (g,h)\mapsto (g\mu g^{-1}h,g\mu^{-1} g^{-1}h) \end{equation} is a rack, and, thus, it gives rise to a rigid R-matrix \begin{equation} r_{G,\mu}\colon G\times G\to G\times G,\quad (g,h)\mapsto (g\mu g^{-1}h,g), \end{equation} both in the categories $\mathbf{Rel}$ and $\mathbf{Span}$. In this way, we obtain two long knot invariants $J_{\rho(r_{G,\mu})}(D)$ and $J_{\sigma(r_{G,\mu})}(D)$ related to each other by the equality \begin{equation} J_{\rho(r_{G,\mu})}(D)=\varpi(J_{\sigma(r_{G,\mu})}(D)). \end{equation} \begin{theorem}\label{thm:1} There exists a canonical choice of a meridian-longitude pair $(m,\ell)$ of long knots such that the set $(J_{\sigma(r_{G,\mu})}(D))(1,\lambda)$ is in bijection with the set of group homomorphisms \begin{equation}\label{eq:gr-hom} \{h\colon \pi_1(\mathbb{R}^3\setminus f(\mathbb{R}),x_0)\to G\mid h(m)=\mu,\ h(\ell)=\lambda\} \end{equation} where $f\colon\mathbb{R}\to\mathbb{R}^3$ is a long knot represented by $D$. \end{theorem} \begin{proof} Let $f\colon \mathbb{R}\to\mathbb{R}^3$ be a long knot whose image under the projection \begin{equation} p\colon\mathbb{R}^3\to\mathbb{R}^2,\quad (x,y,z)\mapsto (y,z). 
\end{equation} is the diagram $\tilde D:=\dot{D}\circ \xi^{-w(\dot{D})/2}$ with a linearly ordered (from bottom to top) set of arcs $a_0,a_1,\dots, a_n$. As a result, the set of crossings acquires a linear order as well, $\{c_i\mid 1\le i\le n\}$, where $c_i$ is the crossing separating the arcs $a_{i-1}$ and $a_i$, with the overpassing arc $a_{\kappa_i}$, for a uniquely defined map \begin{equation} \kappa\colon \{1,\dots,n\}\to\{0,\dots,n\}. \end{equation} Let $t_0,t_1,\dots,t_n\in\mathbb{R}$ be a strictly increasing sequence such that $f(t)=(0,0,t)$ for all $t\not \in [t_0,t_n]$, and for each $i\in\{1,\dots, n-1\}$, $p(f(t_i))$ belongs to the arc $a_i$ and is distinct from any crossing. Choose a base point $x_0=(s,0,0)$ with sufficiently large $s\in\mathbb{R}_{>0}$, a sufficiently small $\epsilon\in\mathbb{R}_{>0}$, and define the following paths \begin{multline} \alpha_0,\beta_i,\gamma_i\colon [0,1]\to \mathbb{R}^3,\ i\in\{0,\dots, n\},\quad \alpha_0(t)=(\epsilon\cos(2\pi t),-\epsilon\sin(2\pi t),t_0),\\ \beta_i(t)=(1-t)x_0+(f(t_i)+(\epsilon,0,0))t,\quad \gamma_i(t)=f((1-t)t_i+t t_0)+(\epsilon,0,0). \end{multline} To each arc $a_i$ of $\tilde D$, we associate the homotopy class \begin{equation} e_i:=[\beta_i\cdot\gamma_i\cdot \bar\beta_0]\in \pi_1(\mathbb{R}^3\setminus f(\mathbb{R}),x_0), \end{equation} so that $e_0=1$, and the Wirtinger generator \begin{equation} w_i:=[\beta_i\cdot\gamma_i\cdot\alpha_0\cdot\bar\gamma_i\cdot\bar \beta_i] \in\pi_1(\mathbb{R}^3\setminus f(\mathbb{R}),x_0).
\end{equation} We have the equalities \begin{equation}\label{eq:wi=eiw0ei} w_i=e_i w_0 e_i^{-1},\quad \forall i\in\{0,1,\dots,n\}, \end{equation} \begin{equation}\label{eq:ei=wkiei-1} e_i=w_{\kappa_i}^{\varepsilon_i}e_{i-1},\quad \forall i\in\{1,\dots,n\}, \end{equation} where $\varepsilon_i\in\{\pm1\}$ is the sign of the crossing $c_i$, and \begin{equation} e_i=w_{\kappa_i}^{\varepsilon_i}w_{\kappa_{i-1}}^{\varepsilon_{i-1}}\cdots w_{\kappa_1}^{\varepsilon_1},\quad \forall i\in\{1,\dots,n\}. \end{equation} We define the canonical meridian-longitude pair $(m,\ell)$ as follows \begin{equation} m:=w_0,\quad \ell:=e_n. \end{equation} Taking into account the condition $\sum_{i=1}^n\varepsilon_i=0$, we see that $\ell$ has the trivial image in $H_1(\mathbb{R}^3\setminus f(\mathbb{R}),\mathbb{Z})$. Let us show that the following finitely presented groups are isomorphic to the knot group $\pi_1(\mathbb{R}^3\setminus f(\mathbb{R}),x_0)$: \begin{equation} E:=\langle m,e_0,\dots, e_n\mid e_0=1,\ e_i=e_{\kappa_i}m^{\varepsilon_i} e_{\kappa_i}^{-1} e_{i-1},\ 1\le i\le n\rangle \end{equation} and \begin{equation} W:=\langle w_0,\dots,w_n\mid w_{\kappa_i}^{\varepsilon_i}w_{i-1}=w_iw_{\kappa_i}^{\varepsilon_i},\ 1\le i\le n\rangle. \end{equation} As $W$ is nothing else but the Wirtinger presentation of $\pi_1(\mathbb{R}^3\setminus f(\mathbb{R}),x_0)$, it suffices to see the isomorphism $E\simeq W$. To see the latter, we remark that there are two group homomorphisms \begin{equation} u\colon W\to E,\quad w_i\mapsto e_i me_i^{-1},\quad i\in\{0,1,\dots,n\}, \end{equation} and \begin{equation} v\colon E\to W,\quad m\mapsto w_0,\quad e_0\mapsto 1,\quad e_i\mapsto w_{\kappa_i}^{\varepsilon_i}w_{\kappa_{i-1}}^{\varepsilon_{i-1}}\cdots w_{\kappa_1}^{\varepsilon_1},\quad \forall i\in\{1,\dots,n\}. 
\end{equation} Indeed, we have \begin{multline} u(w_{\kappa_i})^{\varepsilon_i}u(w_{i-1})=u(w_i)u(w_{\kappa_i})^{\varepsilon_i} \Leftrightarrow u(w_{\kappa_i})^{\varepsilon_i}e_{i-1}me_{i-1}^{-1}=e_ime_i^{-1} u(w_{\kappa_i})^{\varepsilon_i}\\ \Leftrightarrow e_i^{-1}u(w_{\kappa_i})^{\varepsilon_i}e_{i-1}m=me_i^{-1} u(w_{\kappa_i})^{\varepsilon_i}e_{i-1}\Leftarrow e_i^{-1} u(w_{\kappa_i})^{\varepsilon_i}e_{i-1}=1\\ \Leftrightarrow e_i=u(w_{\kappa_i})^{\varepsilon_i}e_{i-1} \Leftrightarrow e_i=e_{\kappa_i}m^{\varepsilon_i} e_{\kappa_i}^{-1} e_{i-1} \end{multline} implying that $u$ is a group homomorphism. We also have \begin{multline} v(e_i)=v(e_{\kappa_i})v(m)^{\varepsilon_i} v(e_{\kappa_i})^{-1} v(e_{i-1})\Leftrightarrow v(e_i) v(e_{i-1})^{-1}=v(e_{\kappa_i})w_0^{\varepsilon_i} v(e_{\kappa_i})^{-1}\\ \Leftrightarrow w_{\kappa_i}=v(e_{\kappa_i})w_0 v(e_{\kappa_i})^{-1}\Leftarrow \{w_i=v(e_i)w_0v(e_i)^{-1}\}_{0\le i\le n} \end{multline} and, for all $i\in\{1,\dots,n\}$, \begin{multline} v(e_i)w_0=w_{\kappa_i}^{\varepsilon_i}\cdots w_{\kappa_1}^{\varepsilon_1}w_0=w_{\kappa_i}^{\varepsilon_i}\cdots w_{\kappa_2}^{\varepsilon_2}w_1w_{\kappa_1}^{\varepsilon_1} =\dots\\=w_{\kappa_i}^{\varepsilon_i}\cdots w_{\kappa_k}^{\varepsilon_k}w_{k-1}w_{\kappa_{k-1}}^{\varepsilon_{k-1}}\cdots w_{\kappa_1}^{\varepsilon_1}=\dots= w_iw_{\kappa_i}^{\varepsilon_i}\cdots w_{\kappa_1}^{\varepsilon_1}=w_iv(e_i) \end{multline} implying that $v$ is a group homomorphism as well. It remains to show that $v\circ u=\operatorname{id}_W$ and $u\circ v=\operatorname{id}_E$. Indeed, for all $i\in\{0,\dots,n\}$, we have \begin{equation} v(u(w_i))=v(e_i me_i^{-1})=v(e_i)v(m)v(e_i)^{-1}=v(e_i)w_0v(e_i)^{-1}=w_i \end{equation} implying that $v\circ u=\operatorname{id}_W$. We prove that $u(v(e_i))=e_i$ for all $i\in\{0,\dots,n\}$ by recursion on $i$. For $i=0$, we have \begin{equation} u(v(e_0))=u(1)=1=e_0. 
\end{equation} Assuming that $u(v(e_{k-1}))=e_{k-1}$ for some $k\in\{1,\dots,n\}$, we calculate \begin{equation} u(v(e_{k}))=u(w_{\kappa_k}^{\varepsilon_k}v(e_{k-1}))=u(w_{\kappa_k})^{\varepsilon_k}u(v(e_{k-1}))=e_{\kappa_k}m^{\varepsilon_k}e_{\kappa_k}^{-1}e_{k-1}=e_k. \end{equation} Now, any element $g$ of the set $(J_{\sigma(r_{G,\mu})}(D))(1,\lambda)$ is a map \begin{equation} g\colon \{0,1,\dots,n\}\to G \end{equation} such that \begin{equation} g_0=1,\ g_n=\lambda,\ g_i=g_{\kappa_i}\mu^{\varepsilon_i} g_{\kappa_i}^{-1} g_{i-1},\quad \forall i\in\{1,\dots,n\}. \end{equation} That means that $g$ determines a unique group homomorphism \begin{equation} h_g\colon E\to G \end{equation} such that $h_g(m)=\mu$ and $h_g(e_i)=g_i$ for all $i\in\{0,1,\dots,n\}$. On the other hand, any group homomorphism $h\colon E\to G$ such that $h(m)=\mu$ and $h(\ell)=\lambda$ is of the form $h=h_g$ where $g_i=h(e_i)$. Thus, the map $g\mapsto h_g$ is a set-theoretical bijection between $(J_{\sigma(r_{G,\mu})}(D))(1,\lambda)$ and the set of group homomorphisms~\eqref{eq:gr-hom}. \end{proof} \begin{remark} Theorem~\ref{thm:1} illustrates the importance of considering long knots as opposed to closed knots. Namely, by closing a long knot, one identifies two open strands and all the associated data. In particular, one has to impose the equality $\lambda=1$, which corresponds to considering only those representations where the longitude is realized trivially. That means that in the case of closed diagrams one would obtain less powerful invariants. \end{remark} \subsection{An extended Heisenberg group} Let $R$ be a commutative unital ring and let $G$ be the subgroup of $GL(3,R)$ given by upper triangular matrices of the form \begin{equation} \begin{pmatrix} a&b&d\\ 0&1&c\\ 0&0&a \end{pmatrix},\quad a\in U(R),\ (b,c,d)\in R^3, \end{equation} where $U(R)$ is the group of units of $R$.
Then, the elements \begin{equation} \mu=\begin{pmatrix} t&0&0\\ 0&1&0\\ 0&0&t \end{pmatrix},\quad \lambda=\begin{pmatrix} 1&0&s\\ 0&1&0\\ 0&0&1 \end{pmatrix}, \end{equation} commute with each other for any $t\in U(R)$ and $s\in R$. Given the fact that $G$ is an algebraic group, the invariant $(J_{\rho(r_{G,\mu})}(D))(1,\lambda)$ factorises through an ideal $I_D$ in the polynomial algebra $\mathbb{Q}[t,t^{-1},s]$. Examples of calculations show that this ideal is often principal and is generated by the polynomial $\Delta_Ds$ where $\Delta_D:=\Delta_D(t)$ is the Alexander polynomial of $D$. Nonetheless, there are examples for which this is not the case, at least if the Alexander polynomial has multiple roots. In Table~\ref{fig:1}, we have collected a few selected examples of calculations for knots, where the Alexander polynomials of the first three knots $3_1$, $4_1$ and $6_2$ have simple roots, while for all other examples the respective Alexander polynomials have multiple roots and factor into products of the Alexander polynomials of the first three. In particular, one can see that the knots $8_{10}$ and $8_{20}$ have different Alexander polynomials but one and the same ideal, while the knots $8_{18}$ and $9_{24}$ have one and the same Alexander polynomial but different ideals.
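For readers who want to experiment with this example, here is a minimal numerical sketch (the helper names are our own, and the parameter values $t=2$, $s=3$ over $\mathbb{Q}$ are arbitrary sample choices, not from the paper). It encodes the group elements as $3\times3$ matrices, checks that $\mu$ and $\lambda$ commute, and spot-checks the rack axioms for the pointed-group operations $(g,h)\mapsto(g\mu g^{-1}h,\ g\mu^{-1}g^{-1}h)$.

```python
from fractions import Fraction
from itertools import product

def mat(a, b, c, d):
    """The group element [[a, b, d], [0, 1, c], [0, 0, a]] as a tuple of rows."""
    return ((a, b, d), (0, 1, c), (0, 0, a))

def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

def inv(M):
    # closed-form inverse for matrices of the shape produced by mat()
    a, b, d = (Fraction(x) for x in M[0])
    c = Fraction(M[1][2])
    return mat(1 / a, -b / a, -c / a, (b * c - d) / a ** 2)

t, s = Fraction(2), Fraction(3)   # sample values: t a unit, s arbitrary
mu = mat(t, 0, 0, 0)              # the element called mu in the text
lam = mat(1, 0, 0, s)             # the element called lambda in the text
assert mul(mu, lam) == mul(lam, mu)   # mu and lambda commute

dot = lambda g, h: mul(mul(g, mul(mu, inv(g))), h)        # g . h = g mu g^-1 h
star = lambda g, h: mul(mul(g, mul(inv(mu), inv(g))), h)  # g * h = g mu^-1 g^-1 h

# spot-check the rack axioms on a few elements of G
sample = [mat(Fraction(1), 1, 0, 0),
          mat(Fraction(2), 0, 1, 0),
          mat(Fraction(1, 2), 1, 1, 1)]
for x, y, z in product(sample, repeat=3):
    assert dot(x, dot(y, z)) == dot(dot(x, y), dot(x, z))
for x, y in product(sample, repeat=2):
    assert star(x, dot(x, y)) == y
print("mu and lambda commute; rack axioms hold on the sample")
```

Exact rational arithmetic via `Fraction` avoids any floating-point issues when inverting the matrices; computing the full ideal $I_D$ of the text would of course require symbolic computation in $\mathbb{Q}[t,t^{-1},s]$, which this sketch does not attempt.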
\begin{table}[h] \[ \begin{array}{c|c|c} \text{knot}& \Delta_D& I_D \\ \hline 3_{1}&1-t+t^2 & (\Delta_{3_1}s) \\ 4_{1}&1-3t+t^2 & (\Delta_{4_1}s) \\ 6_{2}&1 - 3 t + 3 t^2 - 3 t^3 + t^4 & (\Delta_{6_2}s) \\ 8_{10}&\Delta_{3_1}^3 & (\Delta_{3_1}s,s^2) \\ 8_ {18}&\Delta_{4_1} \Delta_{3_1}^2 & (\Delta_{4_1} \Delta_{3_1}s)\\ 8_{20}&\Delta_{3_1}^2& (\Delta_{3_1}s,s^2)\\ 9_{24}&\Delta_{4_1} \Delta_{3_1}^2& (\Delta_{4_1} \Delta_{3_1}s,\Delta_{4_1} s^2)\\ 10_{99}&\Delta_{3_1}^4 & (\Delta_{3_1}s,s^2) \\ 10_{123}&\Delta_{6_2}^2 & (\Delta_{6_2}s) \\ 10_{137}&\Delta_{4_1}^2 & (\Delta_{4_1}s,s^2) \\ 11a_{5}&\Delta_{4_1}^3& (\Delta_{4_1}s,s^2) \end{array} \] \caption{Examples of calculation with the extended Heisenberg group.}\label{fig:1} \end{table} \section{Invariants from Hopf algebras}\label{sec:ifha} Let $\mathbf{Hopf}_\mathbb{K}$ be the category of Hopf algebras over a field $\mathbb{K}$ with invertible antipode. We have a contravariant endofunctor $(\cdot)^o\colon \mathbf{Hopf}_\mathbb{K}\to \mathbf{Hopf}_\mathbb{K}$ that associates to a Hopf algebra $H$ with the multiplication $\nabla$ its \emph{\color{blue} restricted dual} \begin{equation} H^o:=(\nabla^*)^{-1}(H^*\otimes H^*) \end{equation} that can also be identified with the vector subspace of the algebraic dual $H^*$ generated by all matrix coefficients of all finite dimensional representations of $H$ \cite{MR1786197}. 
Following~\cite{MR1381692}, Drinfeld's \emph{\color{blue} quantum double} of $H\in \operatorname{Ob}\mathbf{Hopf}_\mathbb{K}$ is a Hopf algebra $D(H)\in \operatorname{Ob}\mathbf{Hopf}_\mathbb{K}$ uniquely determined by the property that there are two Hopf algebra inclusions \begin{equation} \imath\colon H\to D(H),\quad \jmath\colon H^{o,\text{op}}\to D(H) \end{equation} such that $D(H)$ is generated by their images subject to the commutation relations \begin{equation}\label{eq:comm-rel-j-i} \jmath(f)\imath(x)= \langle f_{(1)},x_{(1)}\rangle \langle f_{(3)},S(x_{(3)})\rangle\imath(x_{(2)})\jmath(f_{(2)}),\quad \forall (x,f)\in H\times H^o, \end{equation} where we use Sweedler's notation for the comultiplication \begin{equation} \Delta(x)=x_{(1)}\otimes x_{(2)},\quad (\Delta\otimes\operatorname{id})(\Delta(x))=x_{(1)}\otimes x_{(2)}\otimes x_{(3)},\ \dots \end{equation} The restricted dual of the quantum double $(D(H))^o$ is a \emph{\color{blue} dual quasitriangular} Hopf algebra with the \emph{\color{blue} dual universal $R$-matrix} \begin{equation} \varrho\colon (D(H))^o\otimes (D(H))^o\to \mathbb{K},\quad x\otimes y\mapsto \langle x,\jmath(\imath^o(y))\rangle \end{equation} which, among other things, satisfies the Yang--Baxter relation \begin{equation} \varrho_{1,2}*\varrho_{1,3}*\varrho_{2,3}=\varrho_{2,3}*\varrho_{1,3}*\varrho_{1,2} \end{equation} in the convolution algebra $(((D(H))^o)^{\otimes3})^*$, and any finite-dimensional right comodule \begin{equation} V\to V\otimes (D(H))^o,\quad v\mapsto v_{(0)}\otimes v_{(1)}, \end{equation} gives rise to a rigid $R$-matrix \begin{equation} r_V\colon V\otimes V\to V\otimes V,\quad u\otimes v\mapsto v_{(0)}\otimes u_{(0)}\langle \varrho,u_{(1)}\otimes v_{(1)}\rangle. 
\end{equation} This implies that there exists a \emph{\color{blue} universal invariant} of long knots $Z_H(K)$ taking its values in the convolution algebra $((D(H))^o)^*$ such that \begin{equation} J_{r_V}(K)v=v_{(0)}\langle Z_H(K),v_{(1)}\rangle,\quad \forall v\in V. \end{equation} \begin{remark} By using the coend of the braided monoidal category of finite dimensional comodules over $(D(H))^o$, the universal invariant $Z_H(K)$ can be interpreted via the Lyubashenko theory \cite{MR1324033}. In the case of finite dimensional quasitriangular Hopf algebras, this coend approach is further developed by Virelizier in \cite{MR2251160}. \end{remark} \begin{remark} If $H$ is a finite-dimensional Hopf algebra with a linear basis $\{e_i\}_{i\in I}$ and $\{e^i\}_{i\in I}$ is the dual linear basis of $H^*$, then the dual universal $R$-matrix is conjugate to the universal $R$-matrix \begin{equation}\label{eq.un-R-mat} R:=\sum_{i\in I}\jmath(e^i)\otimes\imath(e_i)\in D(H)\otimes D(H) \end{equation} in the sense that, for any $x,y\in (D(H))^o=(D(H))^*$, we have \begin{multline} \langle x\otimes y,R\rangle=\sum_{i\in I}\langle x,\jmath(e^i)\rangle\langle y,\imath(e_i)\rangle=\sum_{i\in I}\langle x,\jmath(e^i)\rangle\langle \imath^o(y),e_i\rangle\\ =\left\langle x,\jmath\left(\sum_{i\in I}\langle \imath^o(y),e_i\rangle e^i\right)\right\rangle=\left\langle x,\jmath\left(\imath^o(y)\right)\right\rangle=\langle \varrho, x\otimes y\rangle. \end{multline} In the infinite-dimensional case, formula~\eqref{eq.un-R-mat} is formal but it is a convenient and useful tool for actual calculations. \end{remark} \begin{remark} The algebra inclusion $D(H)\subset ((D(H))^o)^*$ allows one to think of the convolution algebra $((D(H))^o)^*$ as a certain algebra completion of the quantum double $D(H)$.
\end{remark} \subsection{A Hopf algebra associated to a two-dimensional Lie group} Let $B$ be the commutative Hopf algebra over $\mathbb{C}$ generated by an invertible group-like element $a$ and an element $b$ with the coproduct \begin{equation}\label{eq:coprod-b} \Delta(b)=a\otimes b+b\otimes 1 \end{equation} so that the set of monomials $\{b^ma^n\mid (m,n)\in\mathbb{Z}_{\ge0}\times \mathbb{Z}\}$ is a linear basis of $B$. The restricted dual Hopf algebra $B^o$ is generated by two primitive elements $\psi$ and $\phi$ and group-like elements \begin{equation} u^\psi,\ e^{v\phi},\quad (u,v)\in \mathbb{C}_{\ne0}\times\mathbb{C}, \end{equation} which satisfy the commutation relations \begin{multline} [\psi,\phi]=\phi,\quad[\psi,e^{v\phi}]=v\phi e^{v\phi},\quad u^\psi \phi u^{-\psi}=u\phi,\quad u^\psi e^{v\phi}u^{-\psi}=e^{uv\phi},\\ [\psi, u^\psi]=[\phi, e^{v\phi}]=0,\quad u^\psi z^\psi=(uz)^\psi,\quad e^{v\phi}e^{w\phi}=e^{(v+w)\phi}. \end{multline} As linear forms on $B$, they are defined by the relations \begin{multline} \langle\psi,b^ma^n\rangle=\delta_{m,0}n,\quad \langle\phi,b^ma^n\rangle=\delta_{m,1},\\ \langle u^\psi,b^ma^n\rangle=\delta_{m,0}u^n,\quad \langle e^{v\phi},b^ma^n\rangle=v^m, \quad \forall(m,n)\in\mathbb{Z}_{\ge0}\times \mathbb{Z}. \end{multline} Using the notation $\dot{x}:=\imath(x)$ for $x\in\{a,b\}$, $\dot{y}:=\jmath(y)$ for $y\in\{\psi,\phi\}$ and \begin{equation} u^{\dot{\psi}}:=\jmath(u^{\psi}),\quad e^{v\dot{\phi}}:=\jmath(e^{v\phi}), \end{equation} the commutation relations~\eqref{eq:comm-rel-j-i} in the case of the quantum double $D(B)$ take the form \begin{equation} [\dot{\psi},\dot{b}]=\dot{b},\quad [\dot{\phi},\dot{b}]=1-\dot{a},\quad u^{\dot{\psi}}\dot{b}u^{-\dot{\psi}} =u\dot{b},\quad e^{v\dot{\phi}}\dot{b} e^{-v\dot{\phi}}=\dot{b}+(1-\dot{a})v \end{equation} and $\dot{a}$ is central.
The formal universal $R$-matrix, which reads as \begin{equation}\label{eq:un-r-mat} R:=(1\otimes\dot{a})^{\dot{\psi}\otimes 1}e^{\dot{\phi}\otimes\dot{b}}=\sum_{m,n\ge0}\frac 1{n!}\binom{\dot{\psi}}{m}\dot{\phi}^n\otimes (\dot{a}-1)^m \dot{b}^n \end{equation} should be interpreted as follows. Any finite dimensional right comodule $V$ over $(D(B))^o$ is a left module over $D(B)$ defined by \begin{equation} xv=v_{(0)}\langle v_{(1)},x\rangle,\quad \forall (x,v)\in D(B)\times V. \end{equation} Thus, it suffices to make sense of formula~\eqref{eq:un-r-mat} in the case of an arbitrary finite-dimensional representation of $D(B)$ where the elements $1-\dot{a}$, $\dot{b}$ and $\dot{\phi}$ are necessarily nilpotent, so that the formal infinite double sum truncates to a well-defined finite sum. \begin{conjecture} The universal invariant associated to the Hopf algebra $B$ is of the form $ Z_B(K)=(\Delta_K(\dot{a}))^{-1} $ where $\Delta_K(t)$ is the Alexander polynomial of $K$ normalised so that $\Delta_K(1)=1$ and $\Delta_K(t)=\Delta_K(1/t)$. \end{conjecture} By direct computation, we were able to prove this conjecture in the case of the trefoil knot. Justification in the general case comes from the following reasoning. The Hopf algebra $B$ can be $q$-deformed to a non-commutative Hopf algebra $B_q$ with the same coalgebra structure~\eqref{eq:coprod-b} but with the $q$-commutative relation $ab=qba$. We have $B=B_1$. For $q$ not a root of unity, the quantum double $D(B_q)$ is closely related to the quantum group $U_q(sl_2)$. In particular, for each $n\in\mathbb{Z}_{>0}$, it admits an $n$-dimensional irreducible representation corresponding to the $n$-th colored Jones polynomial. In the limit $n\to\infty$ with $q=t^{1/n}$ and fixed $t$, one recovers an infinite-dimensional representation of the Hopf algebra $B$ where the central element $\dot{a}$ takes the value $t$.
On the other hand, according to the Melvin--Morton--Rozansky conjecture proven by Bar-Natan and Garoufalidis in \cite{MR1389962} and by Garoufalidis and L\^e in \cite{MR2860990}, the $n$-th colored Jones polynomial in that limit tends to $(\Delta_K(t))^{-1}$.
Q: 3 column form with 3rd column spanning vertically I'm trying to find a way to have a form that displays the fields horizontally rather than vertically....with the exception of the last column. I'd like that to span vertically. Here's the code I have for getting the 2 columns the way I'd like them: <html> <head> <style type="text/css"> .display-label, .editor-label { margin: 1em 0 0 0; display: block; width: 300px; } .display-field, .editor-field { margin: 0.5em 0 0 0; display: block; width: 300px; } .sameline-wrapper { float: left; display: inline; } .newline { display: block; clear: both; } </style> </head> <body> <form action="test.html" method="post"> <div class="newline"> <div class="sameline-wrapper"> <div class="display-label"> Here is a field: </div> <div class="display-field"> <input type="text" id="t1" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is a second field: </div> <div class="display-field"> <input type="text" id="t2" style="width: 100px;" /> </div> </div> </div> <div class="newline"> <div class="sameline-wrapper"> <div class="display-label"> Here is a third field: </div> <div class="display-field"> <input type="text" id="t3" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is a fourth field: </div> <div class="display-field"> <input type="text" id="t4" style="width: 100px;" /> </div> </div> </div> <div class="newline"> <div class="sameline-wrapper"> <div class="display-label"> Here is a fifth field: </div> <div class="display-field"> <input type="text" id="Text1" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is a sixth field: </div> <div class="display-field"> <input type="text" id="Text2" style="width: 100px;" /> </div> </div> </div> <div class="newline"> <div class="sameline-wrapper"> <div class="display-label"> Here is a seventh field: </div> <div class="display-field"> 
<input type="text" id="Text3" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is an eighth field: </div> <div class="display-field"> <input type="text" id="Text4" style="width: 100px;" /> </div> </div> </div> </form> </body> </html> What I'm trying to accomplish now is to have a 3rd column to the left that spans the entire height of the other rows. The idea is to have a textarea control in there and have it all look uniform. I've added this image to help see what I'm trying to do: and here's a fiddle: 3 Column Form I just don't know how to get that 3rd column to line up to the left of the others and to be the same height (vertically) as the other rows. Any help is greatly appreciated! A: Rearrange your html to specify one div for each column like this: <html> <head> <style type="text/css"> .newspaperCol { float: left; width: 33%; } .display-label { margin: 1em 0 0 0; display: block; } .display-field { margin: 0.5em 0 0 0; display: block; } </style> </head> <body> <div class="newspaperCol"> <label class="display-label">Field 1:</label> <input class="display-field" type="text" id="text1"> <br /> <label class="display-label">Field 3:</label> <input class="display-field" type="text" id="text3"> <br /> <label class="display-label">Field 5:</label> <input class="display-field" type="text" id="text5"> <br /> <label class="display-label">Field 7:</label> <input class="display-field" type="text" id="text7"> </div> <div class="newspaperCol"> <label class="display-label">Field 2:</label> <input class="display-field" type="text" id="text2"> <br /> <label class="display-label">Field 4:</label> <input class="display-field" type="text" id="text4"> <br /> <label class="display-label">Field 6:</label> <input class="display-field" type="text" id="text6"> <br /> <label class="display-label">Field 8:</label> <input class="display-field" type="text" id="text8"> </div> <div class="newspaperCol"> <label class="display-label">Notes:</label>
<textarea rows="17" cols="30" id="notestext"></textarea> </div> </body> </html> EDIT: I re-read your question and updated my answer using more of your original code. I think you're missing some HTML for the notes elements. Try adding the colLeft and colRight css classes and the associated "div" elements shown below. <html> <head> <style type="text/css"> .colLeft { float: left; width: 66%; } .colRight { float: left; width: 33%; } .display-label, .editor-label { margin: 1em 0 0 0; display: block; width: 300px; } .display-field, .editor-field { margin: 0.5em 0 0 0; display: block; width: 300px; } .sameline-wrapper { float: left; display: inline; } .newline { display: block; clear: both; } </style> </head> <body> <form action="test.html" method="post"> <div class="colLeft"> <div class="newline"> <div class="sameline-wrapper"> <div class="display-label"> Here is a field: </div> <div class="display-field"> <input type="text" id="t1" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is a second field: </div> <div class="display-field"> <input type="text" id="t2" style="width: 100px;" /> </div> </div> </div> <div class="newline"> <div class="sameline-wrapper"> <div class="display-label"> Here is a third field: </div> <div class="display-field"> <input type="text" id="t3" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is a fourth field: </div> <div class="display-field"> <input type="text" id="t4" style="width: 100px;" /> </div> </div> </div> <div class="newline"> <div class="sameline-wrapper"> <div class="display-label"> Here is a fifth field: </div> <div class="display-field"> <input type="text" id="Text1" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is a sixth field: </div> <div class="display-field"> <input type="text" id="Text2" style="width: 100px;" /> </div> </div> </div> <div class="newline"> <div 
class="sameline-wrapper"> <div class="display-label"> Here is a seventh field: </div> <div class="display-field"> <input type="text" id="Text3" style="width: 100px;" /> </div> </div> <div class="sameline-wrapper"> <div class="display-label"> Here is an eighth field: </div> <div class="display-field"> <input type="text" id="Text4" style="width: 100px;" /> </div> </div> </div> </div> <div class="colRight"> <label>Notes:</label> <textarea rows="15" cols="30" id="notestext"></textarea> </div> </form> </body> </html>
import time
import logging
from pprint import pformat

import bson
import pymongo

from mwatch.live_query import LiveQuery
from mwatch.oplog import Oplog

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)


def main():
    client = pymongo.MongoClient(port=27018)
    oplog = Oplog(client)
    db = client.eht
    coll_patient = db.patient
    coll_user = db.user
    for obj in coll_patient.find():
        user = coll_user.find_one({
            '_id': obj['user_id'],
            'primary_email.id': 'judy@example.com'})
        if user:
            log.info('Found patient %r', obj)
            break
    lq = LiveQuery(
        client, 'eht.condition', {'patient_id': obj['_id']})
    for op in oplog.register(lq):
        log.info('%s %s:\n%s', op.op, op.ns, pformat(op.obj))
    lq = LiveQuery(
        client, 'eht.treatment', {'patient_id': obj['_id']})
    for op in oplog.register(lq):
        log.info('%s %s:\n%s', op.op, op.ns, pformat(op.obj))
    lq = LiveQuery(
        client, 'eht.provider', {'patient_id': obj['_id']})
    for op in oplog.register(lq):
        log.info('%s %s:\n%s', op.op, op.ns, pformat(op.obj))
    while True:
        log.info('---Poll---')
        for op in oplog:
            log.info('%s %s:\n%s', op.op, op.ns, pformat(op.obj))
        time.sleep(5)


if __name__ == '__main__':
    main()
Q: Why doesn't "adolescent" take any articles in "listen to adolescent agonising"? "Now, if you will excuse me, I have better things to do than listen to adolescent agonizing ... good-day to you." Harry Potter and the Order of the Phoenix 'Adolescent' is a countable word. But why doesn't it take any articles in this context? I feel listen to an adolescent agonizing looks correct. Any thoughts? A: It's being used as an adjective to modify the noun form of the verb "agonizing". By not using an article he is saying he refers to (and dismisses) all adolescent agonizing rather than just one instance. A: I think you are parsing adolescent agonizing as noun + verb, but it is really adjective + noun. It may be more clear to you if we replace adolescent with a word that is definitely an adjective: Now, if you will excuse me, I have better things to do than listen to childish agonizing ... good-day to you. A: You can use that phrase with no article, with the indefinite article or with the definite article. If you are in a high school around exam time, and someone asks "why don't you go sit in the lunch room", you could respond with "the last thing I want to do is listen to adolescent agonizing". In this case, "agonizing" is used like a noun, and adolescent is used to modify it. Then, if someone says "No, you need to go talk to Tommy about his exam anxiety", you could respond "I have better things to do than to listen to an adolescent agonizing about exams". Here, "adolescent" is a noun, and the "agonizing" is a verb. If you avoid talking to Tommy, but another colleague comes by, looks over at Tommy, pointing him out to you and says "listen to the adolescent agonizing - it must be exam time". Now, depending on how you emphasize things, there are two choices. In one case, your colleague is truly talking about Tommy, in which case "adolescent" is the noun, and the "agonizing" is a verb.
But, he/she could be speaking in a more general sense, and it's closer to the non-article version, with "agonizing" as a noun and "adolescent" modifying it. A: It cannot be an adolescent agonizing because, in the context, the author is describing a group of young people. So: "Now, if you will excuse me, I have better things to do than listen to adolescent agonizing ... good-day to you." Harry Potter and the Order of the Phoenix is perfectly correct. This sentence is said in a haughty way, as the group of students is (for the character) inferior and ignorant compared to him.
Between 1347 and 1527, the Bahmani Sultanate (anglicised: Bahmanid) was the first independent Muslim state on the Deccan Plateau. Its capital was Ahsanabad (later Gulbarga, today ಕಲಬುರಗಿ, Kalaburagi) (1347–1425) and then Muhammadabad (Bidar, in today's Karnataka state) (1425–1527). History Nasir ud-Din Ismail Shah rebelled against Muhammad b. Tughluq, the Sultan of Delhi, and then stepped aside in favour of Zafar Khan, who took the title Ala'-ud-Din Hassan Bahman Shah and led the rebellion to success. He founded an independent Bahmanid empire in the southern part of the Delhi Sultanate. It competed with the Vijayanagara Empire (1336–1646) to its south for hegemony over South India. The sultanate reached the peak of its power under grand vizier Mahmud Gawan (1466–1481). After 1518 the sultanate broke apart into five states: the Ahmadnagar Sultanate, Golconda, Bidar, Berar and Bijapur ('City of Victory', cf. Vijayanagara). Together they are known as the Deccan sultanates. The most significant figure of the Bidar period, when the capital was located there, was Mahmud Gawan, who served several rulers as prime minister and general between 1461 and 1481. He retook Goa from the Vijayanagara empire, after which the empire stretched from coast to coast. He also introduced remarkable administrative reforms and governed many regions directly. The state of the treasury improved considerably as well. This well-considered organisation came to an end, however, when he fell victim to court intrigues and the emperor had him executed. Once the sultan recognised his mistake, he drank himself to death within a year, and this marked the beginning of the end of the sultanate. After Gawan's death, various factions vied for power at the royal court, and only the fall of the dynasty put an end to this. Indian Muslim courtiers and generals agitated against the foreigners: Arabs, Turks and Persians. The last sultan, Mahmud Shah (1482–1518), no longer had any authority and presided helplessly over the dismemberment of his country.
The governors of the four largest provinces proclaimed their independence: Bijapur (1489), Ahmadnagar and Berar (1491), Bidar (1492) and Golconda (1512). Although the Bahmani sultans lived in Bidar until 1527, they were mere puppets in the hands of Bidar's real rulers, the Barid Shahs. Bijapur proved to be the most expansionist of the successor states and annexed Bidar. Ahmadnagar and Golconda kept their independence, but in the end they were forced to fight Vijayanagara side by side with Bijapur. Ahmadnagar conquered Berar before the Mughals absorbed it. While Bijapur was entangled in the Deccan wars, the Portuguese seized Goa from it in 1510; Bijapur tried repeatedly, and in vain, to recover the city until 1570. Vijayanagara went to war with Bijapur, but when the Deccan sultanates joined forces they inflicted a decisive defeat on it in 1565, only to be subdued one after another by the Mughals. The Deccan sultanates owed their existence to the Delhi Sultanate's withdrawal from South India. They were ultimately absorbed by the Mughal Empire, which had abolished the Delhi Sultanate somewhat earlier. Culture The members of the Bahmani dynasty believed that they were descendants of the legendary Persian king Bahman. They patronised Persian language, culture and literature, and some members of the dynasty even wrote in Persian. The sultanate naturally also drew on regional Indian culture. List of the Bahmani Shahs Notes Translation Historical states on the territory of India
A team from Michigan Technological University (MTU) has developed a low-cost, open-source 3D metal printer for a cool $1500 - considerably less than current devices on the market which can cost up to $500k. The printer, which can form complex geometric shapes by laying down thin layers of steel, was created by Joshua Pearce from parts including a small commercial MIG welder and an open-source microcontroller. The detailed plans, software and firmware for the device have all been made freely available and open-source, meaning anyone can use them to make their own metal 3D printer at home. While the researchers admit that the printer is still a work in progress (so far, the most intricate piece they've created is a sprocket), they believe the technology could help the home 3D printing revolution get into full swing. Pearce commented: "Small and medium-sized enterprises would be able to build parts and equipment quickly and easily using downloadable, free and open-source designs, which could revolutionise the economy for the benefit of the many. "I really don't know if we are mature enough to handle it, but I think that with the open-source approach, we are within reach of a Star Trek-like, post-scarcity society, in which 'replicators' can create a vast array of objects on demand, resulting in wealth for everyone at very little cost. Pretty soon, we'll be able to make almost anything." Brilliant work - this really could change everything. This may be more useful to businesses in the short term - I know we could use one at our firm now -- but how will the economy change when there are essentially no more monopolies?
Plants we love… with great benefits too! Spider plants are one of three plants NASA deems best at removing formaldehyde from air. Sharing your room with these plants could give you a slight oxygen boost while you sleep. Very easy to grow and maintain. Perfect for a little one's room. This plant can grow very large and needs little maintenance too, helping to remove harmful benzene, trichloroethylene, and formaldehyde toxins. It loves loads of natural light and little water. Quick Note: Even though these plants are considered non-poisonous for children, they could still pose a choking hazard or cause an upset stomach if ingested. It's best to keep them out of reach of young children and watch for any fallen leaves.
WORLDLY WISE Crunching the numbers on credit Ramesh Kumar and Sikuma Rai March 8, 2018 Nepal's banks are reeling under a severe credit crunch at a time when industries and businesses, buoyed by the end to the political transition, are seeking more loans than ever to finance new investments. The crisis has already hit the energy sector, delaying by at least one year several hydropower projects with a cumulative capacity of around 1,000MW. And it threatens to escalate into a full-blown liquidity crisis, affecting other sectors as well. "The NRB lacks clear policies to ease the credit crunch. It just blames us for the crisis saying we are lending to unproductive sectors. There has to be coordination between the NRB and us." Ashoke SJB Rana CEO Himalayan Bank Credit crunches are a chronic crisis in Nepal. In 2010, when all commercial banks heavily financed real estate deals to rake in profits, they almost emptied their accounts. The real estate bubble burst only when the Nepal Rastra Bank imposed a cap on bank loans for the unproductive sector. Bankers now worry that the current credit crunch could last much longer than the one in 2010. But ex-Governor Yuba Raj Khatiwada, who fixed a ceiling on real estate lending, has joined the KP Oli-led communist government as technocrat Finance Minister, and he is expected to introduce a slew of measures to ease the crisis. "The NRB blames us for financing the unproductive sector. But what is productive, and what is not? There has to be a clear indication." Ratna Raj Bajracharya CEO Sunrise Bank According to the NRB, commercial banks have already lent Rs1,952 billion of their Rs2,207 billion deposits. This means people can still withdraw their money from banks, but there isn't enough capital for new loans. Pashupati Murarka, ex-President of the Federation of Nepalese Chambers of Commerce and Industry, puts it bluntly: "The banks have all closed their doors to investors."
As the credit crunch drags on, banks and the government are engaging in the usual mutual blame game. The government blames bankers for creating the problem themselves by releasing loans in unproductive sectors like real estate, automobiles and capital market. "The largest chunk of the capital budget is spent only towards the end of the fiscal year. The tendency to spend the budget during the monsoon causes credit crunch in winter. So, the capital budget should be proportionately spent throughout the year, not just at the end." Bhuvan Kumar Dahal CEO Sanima Bank An NRB officer told Nepali Times: "This is all because of greedy banks. They want more profits, and they lend money to profitable but risky sectors like real estate." "We have lagged behind India in digital currency. Since our economy is growing between two giant economies, we need to digitise our currency. Minimising the use of paper money is one way to ease credit crunch." Parshuram Kunwar CEO Janata Bank But bankers blame the government for aggravating the credit crunch by not spending the development budget. Indeed, the government has spent only 22% of its Rs3.35 billion capital budget in the first eight months of the current fiscal year, which bankers say has effectively frozen the cash and is the main reason for the current credit crunch. "We must not solely rely on Foreign Direct Investment to keep our economy afloat. We need to ensure that our capital budget is injected into the market throughout the year. In addition, we need to bring money scattered across the informal channels into the banking sector." Janak Sharma Poudel CEO Global Bank IME But there are other factors at play. The banks are refusing to go for mergers at a time when the sector is crowded. Small and failing banks have two choices: merge with each other or increase their paid-up capitals to at least Rs8 billion each.
Most small banks increased their paid-up capitals by releasing rights shares, and their shareholders took loans from other banks to buy those assets. So, a huge chunk of deposits went into the unproductive capital market. The widening gap between money deposited into banks and the amount investors are seeking is another factor. The end of the political transition has resulted in a huge growth of demand for loans not just in metropolitan cities but also in semi-urban areas. But the remittance inflow has gone down, making it difficult for banks to expand their credit reserves. "We are branching out into rural areas to collect more deposits. But that will not be enough to end the problem. We must focus on spending more in productive sectors." Gyanendra Dhungana Chair Bankers Association of Nepal At the same time, the number of outbound Nepali migrant workers is declining and major labour destination countries like Qatar are in turmoil. This means that the remittance volume is unlikely to grow in the near future. "Even if the current phase of credit crunch ends, there will be another phase," says Parshu Ram Kunwar, CEO of Janata Bank Nepal. "This problem is here to stay."
Publisher and Chief Editor: Kunda Dixit Published by Himalmedia Pvt Ltd | Patan Dhoka, Lalitpur | GPO Box 7251 Kathmandu editors@nepalitimes.com | www.nepalitimes.com | www.himalmedia.com | Tel: 01-5005601-08 Fax: +977-1-5005518 Department of Information Registration No. 1170/2075-76 Himalmedia
## Dissipative Function of Air Drag

We have

$$\dfrac{d}{dt}\left(\dfrac{\partial \mathcal L}{\partial \dot{q}_j}\right)-\dfrac{\partial \mathcal L}{\partial q_j}=-\dfrac{\partial \mathcal F}{\partial \dot{q}_j}$$

If the force of friction is $$F=-kv^2$$ how do you go about calculating $$\mathcal F$$?

My idea is as follows:

$$F=-kv^2 \dfrac{\textbf{v}}{|\textbf{v}|}$$

In 2D we have

$$F=-k\dfrac{\dot{x}^{2}+\dot{y}^{2}}{\sqrt{\dot{x}^{2}+\dot{y}^{2}}}\left(\begin{array}{c} \dot{x}\\ \dot{y}\end{array}\right)$$

$$F=-k\left(\begin{array}{c} \dot{x}\sqrt{\dot{x}^{2}+\dot{y}^{2}}\\ \dot{y}\sqrt{\dot{x}^{2}+\dot{y}^{2}}\end{array}\right)$$

And the Lagrange equations become

$$\dfrac{d}{dt}\left(\dfrac{\partial\mathcal{L}}{\partial\dot{q}_{j}}\right)-\dfrac{\partial\mathcal{L}}{\partial q_{j}}=-k\dot{q}_{j}\sqrt{\sum_{i=1}^{n}\dot{q}_{i}^{2}}$$

Is this correct?
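One way to verify the final expression above is to note that it follows from a cubic dissipation function, a standard result for quadratic drag (stated here for reference, not part of the original post):

```latex
\mathcal{F} \;=\; \frac{k}{3}\left(\sum_{i=1}^{n}\dot{q}_{i}^{2}\right)^{3/2} \;=\; \frac{k}{3}\,|\mathbf{v}|^{3},
\qquad
\frac{\partial \mathcal{F}}{\partial \dot{q}_{j}}
\;=\; \frac{k}{3}\cdot\frac{3}{2}\left(\sum_{i=1}^{n}\dot{q}_{i}^{2}\right)^{1/2}\cdot 2\,\dot{q}_{j}
\;=\; k\,\dot{q}_{j}\sqrt{\sum_{i=1}^{n}\dot{q}_{i}^{2}}
```

which matches the right-hand side of the Lagrange equations as written, so the proposed generalised force is consistent with $\mathcal F = \frac{k}{3}v^{3}$.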
Hosted by US Fish and Wildlife Service Pole Canyon, CA Condors FAQ Find answers to your questions about the California Condor nest California Condor Chick #980 Fledges! – Oct. 14, 2019 6-month-old Condor Chick Uses Its Wings To Return To Nest Cavity – Oct. 10, 2019 California Condors Take Off In Unison – Oct. 7, 2019 California Condor Chick Uses Wings To Scramble Back To Nest Cavity – Sept. 30, 2019 California Condor Cam Timeline This condor nest, known as the Pole Canyon nest, is located in a remote canyon near the Hopper Mountain National Wildlife Refuge. The parents of the chick in the Pole Canyon nest are mom #563 and dad #262. Dad #262 was laid in 2001 and was the first viable egg laid in the wild since the reintroduction program began. He was actually one of two eggs laid to a trio (male #100 and females #111 and #108) but was brought into captivity to ensure proper incubation. He hatched at the Los Angeles Zoo and was released back to the wild a year later in 2002. Mom #563 hatched at the Oregon Zoo in 2010. This is their first nesting attempt together but both have nested previously with mates who are now deceased. A single egg was laid in this nest cavity, and the chick hatched on April 10, 2019. About the Condor Recovery Project California Condors are critically endangered; they are on the 2014 State of the Birds Watch List, which lists species most in danger of extinction without significant conservation action. They are also listed as Endangered by the U.S. Fish and Wildlife Service and as Critically Endangered by the IUCN. All of the more than 400 condors now alive are descended from 27 birds that were brought into captivity in the early 1980s, in a controversial but successful captive breeding program. As of 2014, there were more than 230 individuals in the wild in California, Arizona, and Baja California. The number has been rising steadily each year, as captive-bred birds are released and wild pairs fledge young from their own nests.
More than 160 additional condors live in captivity at breeding programs or on exhibit at the Los Angeles Zoo, Oregon Zoo, World Center for Birds of Prey, Phoenix Zoo, Chapultepec Zoo, San Diego Zoo Safari Park, and San Diego Zoo. Condors have benefited greatly from the Endangered Species Act and from aggressive efforts to breed them in captivity and re-release them into the wild, but the survival of the species is still dependent on human intervention. The California Condor Recovery Program (Recovery Program) is a multi-entity effort, led by the U.S. Fish and Wildlife Service, to recover the endangered California Condor. Partners in condor recovery include the U.S. Forest Service, National Park Service, Bureau of Land Management, Arizona Game and Fish Department, California Department of Fish and Wildlife, Utah Department of Fish and Wildlife, the federal government of Mexico, Los Angeles Zoo, Oregon Zoo, Santa Barbara Zoo, Chapultepec Zoo, San Diego Zoo, Oakland Zoo, The Peregrine Fund, Ventana Wildlife Society, Western Foundation of Vertebrate Zoology, Yurok Tribe, and a host of other governmental and nongovernmental organizations. The Recovery Program is now in the final phase of recovery, focusing on the creation of self-sustaining populations. The Program is placing increased emphasis on the captive-breeding and reintroduction of California Condors to the wild and the management of that wild population. These efforts combine trying to reduce the threat of lead with actively managing nesting in the wild to increase the number of wild-fledged chicks. The goal of the California Condor Recovery Plan is to establish two geographically distinct self-sustaining populations, each with 150 birds in the wild and at least 15 breeding pairs, with a third population of condors retained in captivity. As the Recovery Program works toward this goal, the number of release sites has grown. 
There are three active release sites in California, one in Arizona, and one in Baja California, Mexico. The effort to create a livestreaming cam on a wild condor nest could not have happened without the effort, funding, and expertise of a wide consortium of collaborators.
Kodar may refer to: Kodar — a mountain range in Russia in the northern part of Transbaikalia. "Kodar" — a national park located in the Kalarsky District of Zabaykalsky Krai, Russia. "Kodar" — a special-purpose unit of the directorate of the Federal Penitentiary Service of Russia for Zabaykalsky Krai. Surname Kodar, Oja — a Croatian actress, screenwriter and director. See also
\section{Introduction} A query optimiser is responsible for providing a good query execution plan (QEP) for incoming database queries. To achieve this, the optimiser relies on a cost model, which tells the optimiser how much a given QEP will cost. The cost model's estimates are in large part based on the selectivity estimates of each operator inside a QEP \cite{ioannidis1996query}. The issue is that selectivity estimation is a difficult task. In practice, huge mistakes are not exceptions but rather the norm \cite{leis2015good}. In turn, this leads the cost model to produce cost estimates that can be wrong by several orders of magnitude \cite{ioannidis1991propagation}. The errors made by the cost model will inevitably result in using QEPs that are far from optimal in terms of memory usage and running time. Moreover, the cost model may also be used by other systems in addition to the query optimiser. For instance, service-level agreement (SLA) negotiation frameworks are based on the assumption that the cost of each query can accurately be estimated by the cost model \cite{yin2018sla}. Cost models are also used for admission control (should the query be run or not?), query scheduling (when to run a query?), progress monitoring (how long will a query take?), and system sizing (how many resources should be allocated to run the query?) \cite{wu2013predicting}. Errors made by the cost model may thus have far-reaching consequences. Such errors are for the most part due to the inaccuracy of the selectivity estimates. Selectivity estimates are usually wrong because of the many simplifying assumptions that are made by the cost model. These assumptions are known to be unverified in practice. Nonetheless, they allow the use of simple methods that have a low computational complexity. For example, the \emph{attribute value independence} (AVI) assumption, which states that attributes are independent of each other, is ubiquitous.
This justifies the widespread use of one-dimensional histograms for storing the distribution of attribute values. Another assumption which is omnipresent is the \emph{join uniformity assumption}, which states that attributes preserve their distribution when they are part of a join. Although this is a major source of error, it rationalises the use of simple formulas that surmise uniformity \cite{selinger1979access}. Producing accurate selectivity estimates whilst preserving a low computational overhead is thus still an open research problem, even though many methods from various approaches have been proposed. The standard approach to selectivity estimation is to build a statistical synopsis of the database. The synopsis is built at downtime and is used by the cost model when the query optimiser invokes it. The synopsis is composed of statistics that summarise each relation along with its attributes. Unidimensional constructs, e.g., histograms \cite{ioannidis2003history}, can be used to summarise single attributes, but cannot capture dependencies between attributes. Multidimensional methods, e.g., multivariate histograms \cite{muralikrishna1988equi}, can be used to summarise the distribution of two or more attributes. However, their spatial requirement grows exponentially with the number of attributes. Moreover, they often require a non-trivial construction phase that takes an inordinate amount of time. Another approach is to use sampling, where the idea is to run a query on a sample of the database and extrapolate the selectivity \cite{chen2017two}. Sampling works very well for single relations. The problem is that sampling is difficult to apply in the case of joins. This is because the join of sampled relations has a high probability of being empty \cite{chaudhuri1999random}. A different approach altogether is to acknowledge that the cost model is mostly wrong, and instead learn from its mistakes so as not to reproduce them.
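The cost of the AVI assumption is easy to see on a toy example. The following sketch (illustrative only; the data and names are made up, not taken from any cited system) compares the true selectivity of a conjunctive predicate with the product of marginal selectivities that AVI prescribes:

```python
# Toy illustration of the attribute value independence (AVI) assumption.
# Relation with a functional dependency: b is 1 exactly when a < 50, so
# the two attributes are perfectly correlated.
rows = [(a, int(a < 50)) for a in range(100)]

def selectivity(predicate):
    """Fraction of rows satisfying the predicate."""
    return sum(1 for row in rows if predicate(row)) / len(rows)

# True selectivity of "a < 50 AND b = 1".
actual = selectivity(lambda r: r[0] < 50 and r[1] == 1)   # 0.5

# AVI estimate: multiply the marginal selectivities.
sel_a = selectivity(lambda r: r[0] < 50)                  # 0.5
sel_b = selectivity(lambda r: r[1] == 1)                  # 0.5
avi_estimate = sel_a * sel_b                              # 0.25, off by 2x

print(actual, avi_estimate)
```

On correlated data the AVI estimate here is off by a factor of two, and the error compounds as more correlated predicates are multiplied together.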
The most successful method in this approach is DB2's so-called \emph{learning optimiser} (LEO) \cite{stillger2001leo}. Such a memorising approach can thus be used in conjunction with any cost model. Although they are appealing, memorising approaches do not help when queries that have not been seen in the past are issued. What's more, they are complementary to other methods. Finally, statistical approaches based on conditional distributions seem to strike the right balance between selectivity estimation accuracy and computational requirements \cite{halford2019approach}. A conditional distribution is a way of modifying a distribution of values based on the knowledge of another value -- called the conditioning value. For example, if the values of attribute $B$ depend on those of $A$, then we can write $P(A, B) = P(B | A) \times P(A)$. Conditional distributions can be organised in a so-called Bayesian network \cite{jensen1996introduction}. Bayesian networks thus factorise a multidimensional distribution into a product of lower dimensional ones. If well chosen, these factorisations can preserve most of the information whilst consuming much less space. In \cite{halford2019approach}, we proposed to use Bayesian networks to capture attribute value dependencies inside each relation of a database. The issue with Bayesian networks is their computational cost \cite{cooper1990computational}. To alleviate this issue, we restricted our networks to possess tree topologies, which leads to simpler algorithms that run in linear time. The downside of using tree topologies is that our networks capture fewer dependencies than a general network. However, we showed in our benchmarks that our method was able to improve the overall selectivity estimation accuracy at a very reasonable cost. The downside of our work in \cite{halford2019approach} is that it completely ignores dependencies between attributes of different relations, which we address in the present work.
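The factorisation $P(A, B) = P(B | A) \times P(A)$ can be sketched in a few lines. The snippet below (a minimal illustration with made-up data, not the actual implementation) stores only a marginal distribution and a conditional one, yet recovers the exact joint selectivity of an equality predicate on both attributes:

```python
# Two-attribute special case of a tree-shaped Bayesian network:
# store P(a) and P(b | a) instead of the full joint P(a, b).
from collections import Counter

rows = [("fr", "paris"), ("fr", "paris"), ("fr", "lyon"),
        ("uk", "london"), ("uk", "london"), ("uk", "leeds")]

n = len(rows)
count_a = Counter(a for a, _ in rows)   # marginal counts of attribute a
count_ab = Counter(rows)                # joint counts, used for P(b | a)

def selectivity(a, b):
    """Estimate P(a, b) as P(a) * P(b | a) via the chain rule."""
    if count_a[a] == 0:
        return 0.0
    p_a = count_a[a] / n
    p_b_given_a = count_ab[(a, b)] / count_a[a]
    return p_a * p_b_given_a

print(selectivity("fr", "paris"))  # 2/6, the exact joint selectivity
```

In a real network the conditional distributions would be compressed (e.g., with per-parent histograms) rather than stored exactly, which is where the space savings over a full joint distribution come from.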
Bayesian networks that capture attribute value dependencies across relations have also been proposed. \cite{getoor2001selectivity} were the first to apply them to selectivity estimation. However, they used off-the-shelf algorithms that are standard for working with Bayesian networks, but which are costly and impractical in constrained environments. \cite{tzoumas2011lightweight} extended the work of \cite{getoor2001selectivity} to address the computational cost issues. Indeed, they proposed various constraints on the network structure of each relation's Bayesian network that reduced the overall complexity. However, this still results in a global Bayesian network with a complex structure, which requires a costly inference algorithm in order to produce selectivity estimates. Although the methods from \cite{getoor2001selectivity} and \cite{tzoumas2011lightweight} achieve competitive accuracy, they both incur a costly construction phase and are too slow at producing selectivity estimates. In light of this, our goal is to capture attribute value dependencies across relations with as little overhead as possible. With this in mind, our method consists in measuring the distribution of a carefully selected set of attributes before and after a join. We do so by performing a small number of offline joins that exploit the topology of the relations. Effectively, we make use of the fact that joins mainly occur on primary/foreign key relationships, and thus warp the attribute value distributions in a predictable way.
The contributions of our paper are as follows: (1) we introduce a new assumption which simultaneously softens the attribute value independence and join uniformity assumptions, (2) based on our assumption, we propose an algorithm for connecting individual Bayesian networks together into what we call a \emph{linked Bayesian network}, (3) we show how such a linked Bayesian network can be used to efficiently estimate query selectivities both within and between relations, and (4) we introduce a parameter which allows us to generalise the trade-offs induced by existing methods based on Bayesian networks. The rest of the paper is organised as follows. Section 2 presents the related work. Section 3 introduces some notions related to Bayesian networks and summarises the work we did in \cite{halford2019approach}. Section 4 introduces the methodology for combining individual Bayesian networks using the topology of a database's relations. In Section 5, we compare our proposal with other methods on an extensive workload derived from the JOB \cite{leis2015good} and TPC-DS \cite{poess2002tpc} benchmarks. Finally, Section 6 concludes the paper and hints at potential follow-ups. \section{Related work} Ever since the seminal work of Selinger et al. \cite{selinger1979access}, query optimisation has largely relied on the use of cost models. Because the most important part of the cost model is the selectivity estimation module \cite{leis2018query}, much effort has been devoted to it across the decades. \cite{kooi1981optimization} first proposed the use of histograms to approximate the distribution $P(x)$ of a single attribute $x$. Since then, a lot of work has gone into developing optimal histograms \cite{ioannidis2003history}, which are used ubiquitously in cutting-edge cost models. Smooth versions of histograms, e.g., kernel density estimators (KDEs) \cite{blohsfeld1999comparison} and wavelets \cite{matias1998wavelet}, have also been proposed.
However, these methods are based on single attributes, and as such lose in accuracy what they gain in computational efficiency. Indeed, there is no way to capture a dependency between two attributes $x$ and $y$ if one only has the unidimensional distributions $P(x)$ and $P(y)$ available, regardless of their accuracy. Multidimensional distributions, i.e., $P(X_1, \dots, X_n)$, are a way to capture dependencies between attributes. Methods based on such distributions are naturally more accurate because they soften the AVI assumption. However, they require a large amount of computational resources, which hinders their use in high-throughput settings. \cite{muralikrishna1988equi} first formalised the use of equi-depth multidimensional histograms and introduced an efficient construction algorithm. \cite{poosala1997selectivity} proposed another construction algorithm based on Hilbert curves. Multidimensional KDEs have also been proposed \cite{heimel2015self}, with roughly the same complexity guarantees. In search of efficiency, \cite{bruno2001stholes} offered a workload-aware method where the idea is to only build histograms for attributes that are often queried together. Even though methods based on multidimensional distributions are costly, they are implemented in some database systems and are used when specified by a database administrator. However, these methods do not help whatsoever in capturing dependencies across relations, which is probably the biggest issue cost models have to deal with. Sampling methods have also been proposed to perform selectivity estimation. The idea is to run a query on a sample of the database and extrapolate the selectivity \cite{chen2017two}. Sampling works very well for single relations and has been adopted by some commercial database systems.
However, off-the-shelf sampling procedures suffer from the fact that the join of sampled relations has a high probability of being empty \cite{chaudhuri1999random}; in other words, a join has to be materialised before sampling can be done. This issue can be alleviated by the use of correlated sampling \cite{vengerov2015join}, where a deterministic hash function is used to ensure that samples from different relations will match with each other. Another technique is to use indexes when available \cite{leis2017cardinality}, but this is only realistic for in-memory databases. \cite{acharya1999join} also proposed heuristics for maintaining statistics of \emph{join synopses}. Overall, sampling is an elegant selectivity estimation method, not least because it can handle complex predicates which statistical summaries cannot (e.g., regex queries). However, sampling necessarily incurs a high computational cost. Indeed, even if the samples are obtained at downtime, they still have to be loaded in memory during the query optimisation phase. Throughout the years, many proposals have been made to relax the simplifying assumptions from \cite{selinger1979access}. All of these require compromises in terms of accuracy, speed, and memory usage. The general consensus is that each method shines in a particular use case, and thus combining different methods might be a good approach. \cite{markl2007consistent} formalised this idea by using a maximum entropy approach. Recently, \cite{muller2018improved} proposed combining sampling and synopses. Another approach altogether is to ``give up'' on the cost model and instead memorise the worst mistakes it makes so as not to reproduce them in the future \cite{stillger2001leo}.
There have also been proposals that make use of machine learning \cite{akdere2012learning,liu2015cardinality,ivanov2017adaptive,kipf2018learned}, where a \emph{supervised learning} algorithm is taught to predict selectivities based on features derived from a given query and the database's metadata. Recently, deep learning methods have been proposed to extract features without requiring hand-written rules. One of the most prominent papers that advocates the use of deep learning for selectivity estimation is \cite{kipf2019estimating}. They proposed a neural network architecture, which they dubbed MSCN for \emph{multi-set convolutional network}. Although approaches based on supervised machine learning have had great success in other domains, their performance for query selectivity estimation is not yet competitive. Approaches that exploit attribute correlations in order to avoid storing redundant information have also been proposed. For example, \cite{deshpande2001independence} proposes to build a statistical interaction model that makes it possible to determine a relevant subset of multidimensional histograms to build. In other words, they propose to build histograms when attributes are correlated, and to make the AVI assumption otherwise. Bayesian networks can be seen through the same lens of exploiting redundant information. Essentially, they factorise the full probability distribution into a set of conditional distributions. A conditional distribution between two attributes implies a hierarchy whereby one of the attributes determines, to some extent, the other. Formally, a Bayesian network is a directed acyclic graph (DAG) where each node is an attribute and each arrow implies a conditional dependency. They can be used to summarise a probability distribution by breaking it up into smaller pieces. In comparison with the supervised learning based methods mentioned in the previous paragraph, Bayesian networks are an \emph{unsupervised learning} method.
What this means is that they directly learn by looking at the data, whereas supervised methods require a workload of queries and outputs in order to learn. In \cite{halford2019approach}, we proposed to use Bayesian networks for capturing attribute value dependencies inside individual relations. \cite{getoor2001selectivity} and \cite{tzoumas2011lightweight} both proposed methods for using Bayesian networks to capture attribute value dependencies between different relations. Although this leads to more accurate selectivity estimates, it requires much more computation time and is infeasible in practice. This is due to the fact that they require the use of expensive belief propagation algorithms for performing inference. Meanwhile, our method is much faster because it restricts each Bayesian network to a tree topology, which allows the use of the variable elimination algorithm. However, our method completely ignores dependencies between attributes of different relations. Our goal in this paper is to reconcile both approaches. Effectively, we want to keep the computational benefits of building and using individual Bayesian networks, but at the same time we want our method to capture some dependencies across relations. \section{Methodology} \subsection{Preliminary work} In \cite{halford2019approach}, we developed a methodology for constructing Bayesian networks to model the distribution of attribute values inside each relation of a database. A Bayesian network is a probabilistic model. As such, it is used for approximating the probability distribution of a dataset. The particularity of a Bayesian network is that it uses a directed acyclic graph (DAG) in order to do so. The graph contains one node per variable, whilst each directed edge represents a conditional dependency between two variables. 
Therefore, the graph is a factorisation of the full joint distribution: \begin{equation} P(X_1, \dots, X_n) \simeq \prod_{X_i \in \mathcal{X}} P(X_i \, | \, Parents(X_i)) \end{equation} The joint distribution $P(X_1, \dots, X_n)$ is the probability distribution over the entire set of attributes $\{X_1, \dots, X_n\}$. Meanwhile, $Parents(X_i)$ stands for the attributes that condition the value of $X_i$. The distribution $P(X_i \, | \, Parents(X_i))$ is thus the conditional distribution of attribute $X_i$'s values. In practice, the full distribution is inordinately large and unknown to us. However, the total size of the conditional distributions $P(X_i \, | \, Parents(X_i))$ is much smaller. Using standard rules of probability, such as Bayes' rule and the law of total probability \cite{jensen1996introduction}, we can answer any selectivity estimation query from a Bayesian network by converting its logical predicate into a product of conditional probabilities. Note, however, that a Bayesian network is necessarily an approximation of the full probability distribution because it makes assumptions about the generating process of the data. Finding the right graph structure of a Bayesian network is called \emph{structure learning} \cite{jensen1996introduction}. This is usually done by maximising a scoring function, which is an expensive process that scales super-exponentially with the number of variables \cite{cooper1990computational}. Approximate search methods as well as integer programming solutions have been proposed \cite{bartlett2017integer}. In our work in \cite{halford2019approach}, we proposed to use the \emph{Chow-Liu} algorithm \cite{chow1968approximating}. This algorithm has the property of finding the best tree structure, where nodes are restricted to have at most one parent. The obtained tree is the best in the sense of maximum likelihood estimation.
In addition to this property, the Chow-Liu algorithm only runs in $\mathcal{O}(p^2)$ time, where $p$ is the number of variables, and is simple to implement. It works by first computing the \emph{mutual information} between each pair of variables, which can be seen as the strength of the relation between two variables. The next step is to find the \emph{maximum spanning tree} (MST) using the mutual information, and then to derive a directed graph approximating the joint probability distribution. We propose an inference process based on the variable elimination algorithm \cite{cowell2006probabilistic}, since inference can be done in linear time for trees. Our experiments indicated that competing approaches are much slower. Note that our inference process can be further accelerated by solving a Steiner tree problem \cite{hwang1992steiner}. In \cite{halford2019approach}, we proposed a simple method which consists in building one Bayesian network per relation. On the one hand, this has the benefit of greatly reducing the computational burden in comparison with a single large Bayesian network, as is done in \cite{getoor2001selectivity} and \cite{tzoumas2011lightweight}. On the other hand, it ignores dependencies between attributes of different relations. We will now discuss how we can improve our work from \cite{halford2019approach} in order to capture some dependencies across relations. \subsection{Handling conditional dependencies over joins} The task of selectivity estimation is to determine the selectivity of a query over a set of attributes $X_i$ that belong to a set of relations $R_j$.
By making the AVI assumption, this comes down to measuring individual attribute distributions and multiplying them together, as so: \begin{equation} \label{infer-ibn} P(X_1, \dots, X_n) \simeq \prod_{R_j} \bigg( \prod_{X_i \in R_j} P(X_i) \bigg) \end{equation} The methodology from \cite{halford2019approach} models the attribute value distributions of a database by building a tree-shaped Bayesian network for each relation. For efficiency reasons, it purposefully only captures dependencies between attributes of a single relation. As such, it ignores the many dependencies that exist between attributes of different relations and that are the bane of cost models. Essentially, this method boils down to factorising the full probability distribution as so: \begin{equation} P(X_1, \dots, X_n) \simeq \prod_{R_j} \bigg( \prod_{X_i \in R_j} P(X_i \, | \, Parent(X_i)) \bigg) \end{equation} where $\{X_1, \dots, X_n\}$ is the entire set of attributes over all relations and $X_i \in R_j$ are the attributes that belong to relation $R_j$. $Parent(X_i)$ denotes the attribute on which the distribution of $X_i$ is conditioned -- because each Bayesian network is tree-shaped, each attribute excluding the root has a single parent. Although the work from \cite{halford2019approach} ignores dependencies between attributes of different relations, it is still much more relevant than the common assumption of full attribute value independence. Our goal in this paper is to model the full probability distribution by taking into account dependencies between attributes of different relations, which can be represented as: \begin{equation} \label{infer-lbn} P(X_1, \dots, X_n) \simeq \prod_{X_i} P(X_i \, | \, Parent(X_i)) \end{equation} Note that equation \ref{infer-lbn} captures more information than equation \ref{infer-ibn}.
Modelling the data by taking into account conditional dependencies thus guarantees that the resulting selectivity estimates are at least as accurate as when assuming independence between attributes. In \cite{halford2019approach}, we made the assumption that attribute values of different relations are independent. Additionally, we assumed that each attribute value distribution remains the same when the relation it belongs to is joined with another relation. This is called the \emph{join uniformity assumption} and is a huge source of error. Indeed, the distributions of an attribute's values before and after a join are not necessarily the same. For instance, imagine an e-commerce website where registered customers are stored in a database alongside their purchases. Each customer can make zero or more purchases whilst each purchase is made by exactly one customer. Some customers might be registered on the website but might not have made a purchase. If the customers and purchases relations are joined together, then the customers who have not made any purchase will not be included in the join. Therefore, the attributes from the customers relation will have different value distributions when joined with the purchases relation. Note however that the attribute value distributions from the purchases relation will not be modified. This stems from the fact that the join between the customers and purchases relations is a one-to-many join. We will now explain how we can use this property to capture attribute value dependencies across relations. Let us assume we have two relations $R$ and $S$ that share a primary/foreign key relationship. That is, $S$ contains a foreign key which references the primary key of $R$. This means that each tuple from $R$ can be joined with zero or more tuples from $S$. A direct consequence is that the size of the join $R \bowtie S$ is equal to $\card{S}$.
The join uniformity assumption implies that the probability for a tuple $r$ from relation $R$ to be present in $R \bowtie S$ follows a uniform distribution. In statistical terms, that is: \begin{equation} P(r \in R \bowtie S) \sim \mathcal{U}(\frac{1}{\card{R}}) \end{equation} Consequently, the expected number of times each tuple from $R$ will be part of $R \bowtie S$ is $\frac{\card{S}}{\card{R}}$. Let us now denote by $P_{R}(A)$ the value distribution of attribute $A$ in relation $R$. We will also define $P_{R \bowtie S}(A)$ as the value distribution of attribute $A$ in the join $R \bowtie S$. The join uniformity assumption thus implies that the distributions of $A$'s values before and after $R$ is joined with $S$ are equal: \begin{equation} P_{R}(A) = P_{R \bowtie S}(A) \end{equation} Furthermore, assume we have found and built the following factorised distribution over attributes $A$, $B$, and $C$ from relation $R$: \begin{equation} P_{R}(A, B, C) \simeq P_{R}(A \, | \, B) \times P_{R}(B \, | \, C) \times P_R(C) \end{equation} If we hold the join uniformity assumption to be true, then we can use the factorised distribution to estimate selectivities for queries involving $A$, $B$, and $C$ when $R$ is joined with $S$ without any further modification. The issue is that this is an idealised situation that has no reason to occur in practice. On the contrary, it is likely that some tuples from $R$ will be more or less present than others. However, we may assume that after the join the attribute value dependencies implied by our factorisation remain valid within each relation. We call this the \emph{attribute value dependency preservation} assumption. The idea is that if attributes $A$, $B$, and $C$ are dependent in a certain way in relation $R$, then there is not much reason to believe that these dependencies will disappear once $R$ is joined with $S$.
Although this may not always hold in practice, it is still a much softer assumption than those usually made by cost models. To illustrate, let us consider a toy database composed of the following relations: \textcolor{teal}{customers} with attributes $\{nationality, hair, salary\}$, \textcolor{orange}{shops} with attributes $\{name, city, size\}$, and \textcolor{violet}{purchases} with attributes $\{day\ of\ week\}$. Moreover, assume that the \textcolor{violet}{purchases} relation has two foreign keys, one that references the primary key of \textcolor{teal}{customers} and another which references that of \textcolor{orange}{shops}. The \textcolor{violet}{purchases} relation can thus be seen as a fact table whilst \textcolor{teal}{customers} and \textcolor{orange}{shops} can be viewed as dimension tables. In what follows we will use the shorthand \textcolor{teal}{$C$} for the \textcolor{teal}{customers} relation, \textcolor{orange}{$S$} for the \textcolor{orange}{shops} relation, and \textcolor{violet}{$P$} for the \textcolor{violet}{purchases} relation. In the \textcolor{teal}{customers} relation, there are Swedish customers, and many of them have blond hair. We might capture this property in a Bayesian network with the conditional distribution $P_{\textcolor{teal}{C}}(hair \, | \, nationality)$, which indicates that hair colour is influenced by nationality. We could suppose that the fact that Swedish people have blond hair is still true once the \textcolor{teal}{customers} relation is joined with the \textcolor{violet}{purchases} relation. In other words, the hair colour shouldn't change the rate at which Swedish customers make purchases. However, we may rightly assume that the number of purchases will change according to the nationality of each customer.
Mathematically, we are saying the following: \begin{equation} P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(hair, nationality) = P_{\textcolor{teal}{C}}(hair \, | \, nationality) \times P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(nationality) \end{equation} In other words, because we assume that $P_{\textcolor{teal}{C}}(hair \, | \, nationality)$ is equal to $P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(hair \, | \, nationality)$, we know $P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(hair, nationality)$ -- i.e.,\ we assume that the conditional distribution remains unchanged after the join. An immediate consequence is that we get to know the $P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(hair)$ distribution for free. Indeed, by summing over the nationalities, we obtain: \begin{equation} P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(hair) = \sum_{nationality} P_{\textcolor{teal}{C}}(hair \, | \, nationality) \times P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(nationality) \end{equation} To demonstrate why our assumption is useful for the purpose of selectivity estimation, let us use the example data in tables \ref{tab:customers} and \ref{tab:purchases}.
\begin{table}[!htb] \begin{minipage}{.5\linewidth} \centering \caption{\textcolor{teal}{Customers} relation} \label{tab:customers} \begin{tabularx}{\textwidth}{@{}YYY@{}} \textbf{Customer} & \textbf{Nationality} & \textbf{Hair} \\ \hline 1 & Swedish & Blond \\ 2 & Swedish & Blond \\ 3 & Swedish & Brown \\ 4 & American & Blond \\ 5 & American & Brown \end{tabularx} \end{minipage} \begin{minipage}{.5\linewidth} \caption{\textcolor{violet}{Purchases} relation, which contains a foreign key that is related to the primary key of the customers relation} \label{tab:purchases} \centering \begin{tabularx}{0.6\textwidth}{@{}YY@{}} \textbf{Shop} & \textbf{Customer} \\ \hline 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \\ 5 & 2 \\ 6 & 3 \\ 7 & 5 \end{tabularx} \end{minipage}% \end{table} Let us say we wish to know what proportion of purchases are made by customers who are both blond and Swedish. The straightforward way to do this is to count the number of times ``Blond'' and ``Swedish'' appear together within the join $\textcolor{teal}{C} \bowtie \textcolor{violet}{P}$: \begin{equation} P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(hair=Blond, nationality=Swedish) = \frac{5}{7} \end{equation} The fraction $\frac{5}{7}$ is the true proportion of purchases that were made by Swedish customers with blond hair -- said otherwise, this is the selectivity of the query. Obtaining it requires scanning the rows resulting from the join of \textcolor{teal}{$C$} with \textcolor{violet}{$P$}. In practice this can be very burdensome, especially when queries involve many relations.
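The exhaustive counting just described can be sketched in a few lines; the data is copied verbatim from tables \ref{tab:customers} and \ref{tab:purchases}.

```python
from fractions import Fraction

# Customer id -> (nationality, hair), and the customer id behind each purchase,
# as given in the customers and purchases tables.
customers = {1: ("Swedish", "Blond"), 2: ("Swedish", "Blond"),
             3: ("Swedish", "Brown"), 4: ("American", "Blond"),
             5: ("American", "Brown")}
purchases = [1, 1, 1, 1, 2, 3, 5]

# Materialise the join C ⋈ P: one (nationality, hair) row per purchase...
join = [customers[cid] for cid in purchases]

# ...then scan it to count the qualifying rows.
selectivity = Fraction(join.count(("Swedish", "Blond")), len(join))  # 5/7
```

This is exact but requires touching every joined row, which is precisely the cost the estimation methods below avoid.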
If we assume that the join uniformity assumption holds -- in other words, we assume that the value distributions of \emph{nationality} and \emph{hair} do not change -- then we can simply reuse the Bayesian network of the customers relation: \begin{equation} \begin{split} P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(Blond, Swedish) & \simeq P_{\textcolor{teal}{C}}(Blond \, | \, Swedish) \times P_{\textcolor{teal}{C}}(Swedish) \\ & \simeq \frac{2}{3} \times \frac{3}{5} \\ & \simeq \frac{2}{5} \end{split} \end{equation} In this case, making the join uniformity assumption makes us underestimate the true selectivity by 44\% ($1 - \frac{2}{5} \times \frac{7}{5}$). Some of this error is due to the fact that the nationality attribute values are not distributed in the same way once \textcolor{teal}{$C$} and \textcolor{violet}{$P$} are joined -- indeed, in this toy example Swedish customers make more purchases than American ones. However, if we know the distribution of the nationality attribute values, i.e.,\ $P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(nationality)$, then we can improve our estimate in the following manner: \begin{equation} \begin{split} P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(Blond, Swedish) & \simeq P_{\textcolor{teal}{C}}(Blond \, | \, Swedish) \times P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(Swedish) \\ & \simeq \frac{2}{3} \times \frac{6}{7} \\ & \simeq \frac{4}{7} \end{split} \end{equation} Now our underestimate has shrunk to 20\%. The only difference with the previous equation is that we have replaced $P_{\textcolor{teal}{C}}(Swedish)$ with $P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(Swedish)$. Note that we did not have to precompute $P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(Blond, Swedish)$.
Indeed, we assumed that the dependency between nationality and hair doesn't change once \textcolor{teal}{$C$} and \textcolor{violet}{$P$} are joined, which stems from our dependency preservation assumption. Note that, in our toy example, the assumption is slightly wrong because blond customers have a higher purchase rate than brown-haired ones, regardless of nationality. Even so, our assumption is still much softer than the join uniformity and attribute value independence assumptions. Our assumption is softer than the join uniformity assumption because it allows attribute value distributions to change after a join. Statistically speaking, instead of assuming that tuples appear in a join following a uniform distribution, we are saying that the distribution of the tuples is conditioned on a particular attribute (e.g., the nationality of the customers dictates the distribution of the customers in the join between shops and customers). We also assume that attribute value dependencies within each relation are preserved through joins (e.g., hair colour is still dependent on nationality). The insight is that in a factorised distribution, the top-most attribute is involved in estimating any query. For instance, in the distribution $\cp{A}{B} \times \cp{B}{C} \times P(C)$, every query involving any combination of $A$, $B$, and $C$ will necessarily involve $P(C)$. We will now see how our newly introduced attribute value dependency preservation assumption can be used to link Bayesian networks from different relations together, and as such relax the join uniformity and attribute value independence assumptions at the same time.
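The two estimates above can be replayed mechanically from tables \ref{tab:customers} and \ref{tab:purchases}; this sketch only uses per-relation statistics plus the post-join nationality marginal.

```python
from fractions import Fraction

# Data copied from the customers and purchases tables: customer id ->
# (nationality, hair), and the customer id behind each purchase.
customers = {1: ("Swedish", "Blond"), 2: ("Swedish", "Blond"),
             3: ("Swedish", "Brown"), 4: ("American", "Blond"),
             5: ("American", "Brown")}
purchases = [1, 1, 1, 1, 2, 3, 5]

# Statistics measured on the base relation only.
swedish = [c for c in customers.values() if c[0] == "Swedish"]
p_blond_given_swedish = Fraction(sum(c[1] == "Blond" for c in swedish), len(swedish))
p_swedish = Fraction(len(swedish), len(customers))

# Estimate under the join uniformity assumption: P_C(Swedish) is reused as-is.
uniform_estimate = p_blond_given_swedish * p_swedish  # 2/3 * 3/5 = 2/5

# Improved estimate: keep the conditional (dependency preservation), but swap
# in the post-join marginal P_{C ⋈ P}(Swedish).
p_swedish_after_join = Fraction(sum(customers[cid][0] == "Swedish" for cid in purchases),
                                len(purchases))
improved_estimate = p_blond_given_swedish * p_swedish_after_join  # 2/3 * 6/7 = 4/7
```

Note that only the one-dimensional marginal over the conditioning attribute is measured on the join; the conditional table never has to be recomputed.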
\subsection{Linking Bayesian networks} \label{lbn} As explained in the previous subsection, if a \textcolor{violet}{purchases} relation has a foreign key that references the primary key of another relation named \textcolor{teal}{customers}, then the distribution of \textcolor{violet}{purchases}' attribute values will not change after joining \textcolor{teal}{customers} and \textcolor{violet}{purchases}. However, the distribution of \textcolor{teal}{customers}' attribute values will change if \textcolor{violet}{purchases}' foreign key is skewed, which is always the case to some degree. If we use the method proposed by \cite{halford2019approach}, then the Bayesian network built on \textcolor{teal}{customers} would not be accurate when estimating selectivities for queries involving \textcolor{teal}{customers} and \textcolor{violet}{purchases}. This is because it would assume the distributions of the attribute values from \textcolor{teal}{customers} are preserved after the join, which is a consequence of the join uniformity assumption. Moreover, because of the AVI assumption, we would not be capturing the existing dependencies between \textcolor{teal}{customers}'s attributes and \textcolor{violet}{purchases}'s attributes, because the attributes of each relation are assumed to be independent of those of the opposite relation. On the other hand, if we join \textcolor{teal}{customers} and \textcolor{violet}{purchases} and build a Bayesian network on top of the join, then we will capture the cross-relation attribute value dependencies, but at too high a computational cost \cite{getoor2001selectivity,tzoumas2011lightweight}. Up to now, we have only mentioned the case where a single join occurs, but the same kind of issues arise for many-way joins -- including star-joins and chain-joins.
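The foreign-key skew just mentioned is easy to exhibit; in this sketch the keys mirror the toy purchases data, and the contrast is between the multiplicity that join uniformity predicts and the multiplicities actually observed.

```python
from collections import Counter

customer_ids = [1, 2, 3, 4, 5]        # primary keys of customers
purchase_fks = [1, 1, 1, 1, 2, 3, 5]  # foreign keys in purchases (skewed)

# Under join uniformity, every customer is expected to appear |P| / |C| times
# in the join between customers and purchases.
expected = len(purchase_fks) / len(customer_ids)  # 7 / 5 = 1.4

# The actual multiplicities diverge: customer 1 appears four times,
# customer 4 not at all.
actual = Counter(purchase_fks)
```

It is exactly this divergence that makes the pre-join marginals of the referenced relation unreliable after the join.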
If the attribute value distributions of \textcolor{teal}{customers} and \textcolor{violet}{purchases} are estimated using Bayesian networks that possess a tree structure, then we only have to include the dependencies of a subset of \textcolor{teal}{customers}'s attributes with those of \textcolor{violet}{purchases}. Specifically, we only have to include the root attribute of \textcolor{teal}{customers}'s Bayesian network into that of \textcolor{violet}{purchases}. Indeed, because \textcolor{teal}{customers}'s Bayesian network is a tree, all of its nodes are necessarily linked to the root. If we know the distribution of the root attribute's values \emph{after} \textcolor{teal}{customers} is joined with \textcolor{violet}{purchases}, then, by making the attribute value dependency preservation assumption introduced earlier, we automatically obtain the distribution of the rest of \textcolor{teal}{customers}'s attributes. In other words, if the distribution of an attribute's values is modified when the relation it belongs to is joined with another relation, then we assume that all the attributes that depend on it have their value distributions modified in the exact same manner. This is another way of saying that the conditional distributions remain the same. We will show how this works on our toy database consisting of relations \textcolor{teal}{customers}, \textcolor{orange}{shops}, and \textcolor{violet}{purchases}. Following the methodology from \cite{halford2019approach}, we would have built one Bayesian network per relation. Each Bayesian network would necessarily have been a tree as a consequence of using the Chow-Liu algorithm \cite{chow1968approximating}. Depending on the specifics of the data, we might have obtained the Bayesian networks shown in figure \ref{fig:separate-network-topology}.
\begin{figure}[H] \centering \begin{tikzpicture}[thick, scale=\textwidth, every node/.style={ellipse, draw, align=center},node distance=1cm and 0.1cm] \node [draw=violet] (dow) {Day of week}; \node [left = 0.3 of dow, draw=teal] (nationality) {Nationality}; \node [below left = of nationality, draw=teal] (hair) {Hair}; \node [below right = of nationality, draw=teal] (salary) {Salary}; \node [right = 0.3 of dow, draw=orange] (name) {Name}; \node [below left = of name, draw=orange] (city) {City}; \node [below right = of name, draw=orange] (size) {Size}; \draw [->] (nationality) -- (hair); \draw [->] (nationality) -- (salary); \draw [->] (name) -- (city); \draw [->] (name) -- (size); \end{tikzpicture} \caption{Separate Bayesian networks of \textcolor{teal}{customers}, \textcolor{orange}{shops}, and \textcolor{violet}{purchases}} \label{fig:separate-network-topology} \end{figure} Furthermore, let us consider the following SQL query: \begin{figure}[H] \centering \begin{lstlisting}[language=SQL] SELECT * FROM customers, shops, purchases WHERE customers.id = purchases.customer_id AND shops.id = purchases.shop_id AND customers.nationality = 'Japanese' AND customers.hair = 'Dark' AND shops.name = 'Izumi' \end{lstlisting} \end{figure} If we were to estimate the amount of tuples that satisfy the above query using the Bayesian networks from figure \ref{fig:separate-network-topology}, then we would estimate the query selectivity in the following manner: \begin{equation} \begin{split} P(Dark, Japanese, Izumi) & = P_{\color{teal}{C}}(Dark \, | \, Japanese) \\ & \times P_{\color{teal}{C}}(Japanese) \\ & \times P_{\textcolor{orange}{S}}(Izumi) \end{split} \end{equation} On the one hand, the conditional distribution $P_{\color{teal}{C}}(Dark \, | \, Japanese)$ captures the fact that Japanese people tend to have dark hair inside the \textcolor{teal}{customers} relation. 
Graphically this is represented by the arrow that points from the ``Nationality'' node to the ``Hair'' node in figure \ref{fig:separate-network-topology}. On the other hand, our estimate ignores the fact that shops in Japan, including ``Izumi'', are mostly frequented by Japanese people. The reason is that we have one Bayesian network per relation, instead of a global network spanning all relations, and we are thus unable to capture this dependency. Regardless of the missed dependency, this simple method is still more accurate than assuming total independence. Indeed, the AVI assumption would neglect the dependency between $hair$ and $nationality$, even though both attributes are part of the same relation. Meanwhile, assuming relational independence is convenient because it only requires capturing dependencies within relations, but it discards the dependency between $nationality$ and $city$. We propose to capture said dependency by adding nodes from the Bayesian networks of \textcolor{teal}{customers} and \textcolor{orange}{shops} to the Bayesian network of \textcolor{violet}{purchases}. Specifically, for reasons that will become clear further on, we add the roots of the Bayesian networks of \textcolor{teal}{customers} and \textcolor{orange}{shops} (i.e., $nationality$ and $name$) to the Bayesian network of \textcolor{violet}{purchases}. This results in the linked Bayesian network shown in figure \ref{fig:linked-network-topology}.
\begin{figure}[H] \centering \begin{tikzpicture}[thick, scale=\textwidth, every node/.style={ellipse, draw, align=center},node distance=1cm and 0.1cm] \node [draw=violet] (p_nationality) {Nationality}; \node [below = of p_nationality, draw=violet] (dow) {Day of week}; \node [below right = of p_nationality, xshift=1cm, draw=violet] (p_name) {Name}; \node [below left = of p_nationality, xshift=-1cm, draw=teal] (nationality) {Nationality}; \node [below left = of nationality, draw=teal] (hair) {Hair}; \node [below right = of nationality, draw=teal] (salary) {Salary}; \node [below = 0.5 of p_name, draw=orange] (name) {Name}; \node [below left = of name, draw=orange] (city) {City}; \node [below right = of name, draw=orange] (size) {Size}; \draw [->] (p_nationality) -- (dow); \draw [->] (p_nationality) -- (p_name); \draw [dotted] (p_nationality) -- (nationality); \draw [->] (nationality) -- (hair); \draw [->] (nationality) -- (salary); \draw [dotted] (p_name) -- (name); \draw [->] (name) -- (city); \draw [->] (name) -- (size); \end{tikzpicture} \caption{Linked Bayesian network of \textcolor{teal}{customers}, \textcolor{orange}{shops}, and \textcolor{violet}{purchases}} \label{fig:linked-network-topology} \end{figure} In this new configuration, we still have one Bayesian network per relation. The difference is that the Bayesian network of \textcolor{violet}{purchases} includes the root attributes of both \textcolor{teal}{customers} and \textcolor{orange}{shops}'s Bayesian networks. In other words, we have joined the \textcolor{violet}{purchases} relation with the \textcolor{teal}{customers} and \textcolor{orange}{shops} and we have then built a Bayesian network for \textcolor{violet}{purchases} that now includes attributes from \textcolor{teal}{customers} and \textcolor{orange}{shops}. A linked Bayesian network is thus a set of separate Bayesian networks where some of the attributes are duplicated in two related networks. 
In practice, this means that we now know the distribution of the $nationality$ and $name$ attribute values once the relations they belong to have been joined with \textcolor{violet}{purchases}. Meanwhile, we also know their distributions when these relations are not joined with \textcolor{violet}{purchases}. In other words, we store two distributions for each root attribute, one before the join and one afterwards. The distribution of a root attribute in a Bayesian network is nothing more than a one-dimensional histogram. This means that storing two distributions for each root attribute doesn't incur any significant memory burden. The configuration shown in figure \ref{fig:linked-network-topology} has two immediate benefits over the one presented in figure \ref{fig:separate-network-topology}. First of all, we are now able to determine if the percentage of Japanese in the \textcolor{violet}{purchases} relation is different from the one in the \textcolor{teal}{customers} relation. Indeed, we do not have to assume the distribution remains the same after the join now that we know the distribution of $nationality$'s values when \textcolor{teal}{customers} is joined with \textcolor{violet}{purchases}. A key observation is that we get to know something about the distribution of the $hair$ attribute values when \textcolor{teal}{customers} is joined with \textcolor{violet}{purchases}. That is to say, because we know how the distribution of $nationality$ attribute values changes after the join, then we also know something about the $hair$ attribute values because both attributes are dependent within the \textcolor{teal}{customers} relation. This stems from the fact that we assume that the conditional distribution $\cp{hair}{nationality}$ is preserved after the join. 
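To make this concrete, the following minimal Python sketch carries out the ``free'' update. All of the distributions are invented toy values, not numbers taken from any real dataset: the preserved conditional $\cp{hair}{nationality}$ is simply summed against the pre-join and post-join root distributions.

```python
# Toy sketch: once the post-join distribution of the root attribute is known,
# the preserved conditional P_C(hair | nationality) yields the post-join
# distribution of hair at no extra cost. All numbers are made up.
P_C_hair_given_nat = {
    "Japanese": {"Dark": 0.9, "Blond": 0.1},
    "French": {"Dark": 0.5, "Blond": 0.5},
}
P_C_nat = {"Japanese": 0.2, "French": 0.8}    # root distribution before the join
P_CP_nat = {"Japanese": 0.6, "French": 0.4}   # root distribution after the join

def hair_marginal(p_nat):
    """Sum the preserved conditional against a given nationality distribution."""
    out = {}
    for nat, p in p_nat.items():
        for hair, q in P_C_hair_given_nat[nat].items():
            out[hair] = out.get(hair, 0.0) + p * q
    return out

hair_before = hair_marginal(P_C_nat)   # P_C(hair)
hair_after = hair_marginal(P_CP_nat)   # P_{C joined with P}(hair), for free
```

With these toy numbers, the proportion of dark-haired customers mechanically increases from $0.58$ to $0.74$ once Japanese customers become over-represented in the join.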
Mathematically, this translates to: \begin{equation} P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(hair, nationality) = P_{\textcolor{teal}{C}}(hair \, | \, nationality) P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(nationality) \end{equation} Although, in practice, we expect the dependency preservation assumption not to always hold, we argue that it is a much weaker assumption than assuming total relational independence. The second benefit is that we can now take into account the fact that Japanese people typically shop in Japanese shops, even though the involved attributes belong to relations that are not directly related. This happens because the $name$ attribute is now part of \textcolor{violet}{purchases}'s Bayesian network as well as that of \textcolor{orange}{shops}. Formally, the query selectivity can now be expressed as so: \begin{equation} \begin{split} P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P} \bowtie \textcolor{orange}{S}}(Dark, Japanese, Izumi) & = P_{\color{teal}{C}}(Dark \, | \, Japanese) \\ & \times P_{\textcolor{violet}{P} \bowtie \textcolor{orange}{S}}(Izumi \, | \, Japanese) \\ & \times P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(Japanese) \end{split} \end{equation} Let us now consider the following SQL query where the only difference with the previous query is that we are filtering by $city$ instead of by $name$: \begin{figure}[H] \centering \begin{lstlisting}[language=SQL] SELECT * FROM customers, shops, purchases WHERE customers.id = purchases.customer_id AND shops.id = purchases.shop_id AND customers.nationality = 'Japanese' AND customers.hair = 'Dark' AND shops.city = 'Osaka' \end{lstlisting} \end{figure} In this case, our linked Bayesian network would estimate the selectivity as so: \begin{equation} \begin{split} P(Dark, Japanese, Osaka) & = P_{\color{teal}{C}}(Dark \, | \, Japanese) \\ & \times \sum_{name} P_{\textcolor{violet}{P} \bowtie \textcolor{orange}{S}}(Osaka \, | \, name)
P_{\textcolor{violet}{P}}(name \, | \, Japanese) \\ & \times P_{\textcolor{teal}{C} \bowtie \textcolor{violet}{P}}(Japanese) \end{split} \end{equation} This is a simple application of Bayesian network arithmetic \cite{jensen1996introduction}. The reason there is a sum is that we have to take into account all the shops that are located in Osaka because none of them in particular has been specified in the SQL query. Note that our linked Bayesian network is still capable of estimating selectivities when only a single relation is involved. For example, we only need to use $P_{\textcolor{violet}{P}}(nationality)$ when the \textcolor{teal}{customers} relation is joined with the \textcolor{violet}{purchases} relation. If only the \textcolor{teal}{customers} relation is involved in a query, then we can simply use $P_{\textcolor{teal}{C}}(nationality)$ instead of $P_{\textcolor{violet}{P}}(nationality)$. We discuss these two points in further detail in subsection \ref{querying}. Linked Bayesian networks thus retain the benefits of independent Bayesian networks, while softening the join uniformity assumption as well as the attribute value independence assumption. We will now discuss how one may obtain a linked Bayesian network in an efficient manner. \subsection{Building linked Bayesian networks} \label{building} A linked Bayesian network is essentially a set of Bayesian networks. Indeed, our method consists in taking individual Bayesian networks and linking them together in order to obtain one single Bayesian network. This linking process is detailed in the next subsection. In our case, by only including the root attribute of each relation into the Bayesian network of its parent relation, we ensure that the final network necessarily has a tree topology. Performing inference on a Bayesian network with a tree topology can be done in linear time using the sum-product algorithm \cite{kschischang2001factor}.
Building a linked Bayesian network involves building the Bayesian networks of each relation in a particular order. Indeed, in our example, we first have to build the Bayesian networks of the \textcolor{teal}{customers} and \textcolor{orange}{shops} relations in order to determine the roots that are to be included in the Bayesian network of the \textcolor{violet}{purchases} relation. To build the \textcolor{violet}{purchases} Bayesian network, we first have to join the root attributes (i.e.,\ $nationality$ and $name$) of the first two Bayesian networks (i.e.,\ \textcolor{teal}{customers} and \textcolor{orange}{shops}) with the \textcolor{violet}{purchases} relation. Naturally, performing joins incurs an added computational cost. However, we argue that joins are unavoidable if one is to capture attribute value dependencies across relations. Indeed, if joins are ruled out altogether, then there is basically no hope of measuring dependencies between attributes of different relations. Our methodology requires performing one left-join per primary/foreign key relationship, whilst only requiring the inclusion of one attribute per join, which is as cost-effective as possible. The specifics of the procedure we used to build the linked Bayesian network are given in algorithm \ref{algo:build}. We assume the algorithm is given a set of relations. In addition, the algorithm is provided with the set of primary/foreign key relationships in the database (e.g., \textcolor{violet}{purchases} has a foreign key that references \textcolor{teal}{customers}' primary key and another that references \textcolor{orange}{shops}'s primary key). This set of primary/foreign key relationships can easily be extracted from any database's metadata. The idea is to go through the set of relations and check if the Bayesian networks of the dependent relations have been built. In this implementation, a \texttt{while} loop is used to go through the relations in their topological order, from bottom to top.
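As an illustration, this bottom-up ordering can be sketched in a few lines of Python on our toy schema. The joining and network-building steps are elided; only the ordering logic is shown, and the \texttt{relationships} mapping mirrors the one handed to the algorithm.

```python
# Sketch of the bottom-up ordering: a relation is processed only once every
# relation referenced by its foreign keys has been processed.
relationships = {
    "customers": set(),                   # no foreign keys
    "shops": set(),                       # no foreign keys
    "purchases": {"customers", "shops"},  # references both other relations
}

build_order = []
built = set()
while len(built) < len(relationships):
    for relation in sorted(relationships.keys() - built):
        if relationships[relation] <= built:  # all referenced relations are done
            build_order.append(relation)
            built.add(relation)
```

The first pass picks up \textcolor{teal}{customers} and \textcolor{orange}{shops}, and only the second pass reaches \textcolor{violet}{purchases}.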
The Bayesian networks are built using the $BuildBN$ function, which was presented in \cite{halford2019approach}. The $BuildBN$ function works in three steps: \begin{enumerate*}\item Build a fully-connected, undirected weighted graph, where each node is an attribute and each edge's weight is the \emph{mutual information} between the two attributes it connects. \item Find the \emph{maximum spanning tree} (MST) of the graph. \item Orient the MST by choosing a root in order to obtain a directed tree.\end{enumerate*} The $BuildBN$ function produces a Bayesian network with a tree topology called a \emph{Chow-Liu tree} \cite{chow1968approximating}. This tree has the property of being the tree which stores the maximum amount of information out of all possible trees. In our algorithm, the first pass of the \texttt{while} loop will build the Bayesian networks of the relations that have no dependencies whatsoever (e.g., those that do not possess any foreign keys). The next pass will build the Bayesian networks of the relations whose foreign keys reference the primary keys of the relations covered in the first pass. The algorithm will necessarily terminate once each relation has an associated Bayesian network; it will take at most as many passes as there are relations in the database.
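For illustration purposes, these three steps can be condensed into a short Python sketch, here using Prim's algorithm to find the maximum spanning tree. This is a didactic rendition under our own simplifications, not the implementation from \cite{halford2019approach}; the orientation step simply follows Prim's growth away from the chosen root.

```python
import math
from itertools import combinations

def mutual_information(rows, i, j):
    """Empirical mutual information between columns i and j of `rows`."""
    n = len(rows)
    pi, pj, pij = {}, {}, {}
    for r in rows:
        pi[r[i]] = pi.get(r[i], 0) + 1
        pj[r[j]] = pj.get(r[j], 0) + 1
        pij[(r[i], r[j])] = pij.get((r[i], r[j]), 0) + 1
    return sum(
        (c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
        for (a, b), c in pij.items()
    )

def chow_liu_tree(rows, n_cols, root=0):
    """Return a {column: parent column} mapping describing the Chow-Liu tree."""
    weights = {
        (i, j): mutual_information(rows, i, j)
        for i, j in combinations(range(n_cols), 2)
    }
    in_tree, parent = {root}, {}
    while len(in_tree) < n_cols:  # Prim's algorithm, started at the root
        i, j = max(
            (e for e in weights if (e[0] in in_tree) != (e[1] in in_tree)),
            key=lambda e: weights[e],
        )
        child, par = (j, i) if i in in_tree else (i, j)
        parent[child] = par
        in_tree.add(child)
    return parent
```

On data where column $1$ is a copy of column $0$, the edge $(0, 1)$ carries all of the mutual information and is therefore picked first, so column $1$ ends up attached directly to the root.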
\begin{algorithm}[H] \caption{Linked Bayesian networks construction} \begin{algorithmic}[1] \Function{BuildLinkedBN}{$relations, relationships$} \Let{$lbn$}{$\{\}$} \Let{$built$}{$\{\}$} \Comment{Records which relations have been processed} \While{$\card{lbn} < \card{relations}$} \Let{$queue$}{$relations \setminus built$} \Comment{Relations which don't have a BN} \ForEach{$relation \in queue$} \If{$relationships[relation] \setminus built = \varnothing$} \ForEach{$child \in relationships[relation]$} \Let{$relation$}{$relation \bowtie child.root$} \EndFor \Let{$lbn$}{$lbn \cup BuildBN(relation)$} \Let{$built$}{$built \cup relation$} \EndIf \EndFor \EndWhile \State \Return{$lbn$} \EndFunction \end{algorithmic} \label{algo:build} \end{algorithm} Note that we can potentially use parallelism to speed up the execution of algorithm \ref{algo:build}. Indeed, by using a priority queue and a worker pool, we can spawn processes in parallel to build the networks in the correct order. However, we consider this an implementation detail and did not take the time to implement it in our benchmark. Furthermore, this would have skewed our comparison with other methods. A linked Bayesian network doesn't require much additional space with respect to the method from \cite{halford2019approach}. Indeed, a linked Bayesian network is nothing more than a set of separate Bayesian networks where some of the attributes are duplicated in two related networks. Once a linked Bayesian network has been built, it can be used to produce selectivity estimates. That is, given a linked Bayesian network, we want to be able to estimate the selectivity of an arbitrary SQL query. An efficient algorithm is required to perform so-called inference when many attributes are involved, which is the topic of the following subsection. \subsection{Selectivity estimation} \label{querying}
The algorithm for producing selectivity estimates using linked Bayesian networks is based on the selectivity estimation algorithm proposed in \cite{halford2019approach}. The key insight is that we can fuse linked Bayesian networks into a single Bayesian network. Indeed, in our building process we have to make sure to include the root attribute of each relation's Bayesian network into its parent's Bayesian network. This allows us to link each pair of adjacent Bayesian networks together via their shared attribute. In figure \ref{fig:linked-network-topology}, these implicit links are represented with dotted lines. The \textcolor{violet}{purchases} and \textcolor{teal}{customers} relations have in common the $nationality$ attribute, whereas the \textcolor{orange}{shops} and \textcolor{violet}{purchases} relations have in common the $name$ attribute. The resulting ``stitched'' network is necessarily a tree because each individual Bayesian network is a tree and each shared attribute is located at the root of each child network. \begin{algorithm}[H] \caption{Selectivity estimation using a linked Bayesian network} \begin{algorithmic}[1] \Function{InferSelectivity}{$lbn, query$} \Let{$relations$}{$ExtractRelations(query)$} \Let{$relevant$}{$PruneLinkedBN(lbn, relations)$} \Let{$linked$}{$LinkNetworks(relevant)$} \Let{$selectivity$}{$ApplySumProduct(linked)$} \State \Return{$selectivity$} \EndFunction \end{algorithmic} \label{algo:infer} \end{algorithm} The pseudocode for producing selectivity estimates is given in algorithm \ref{algo:infer}. The first step of the selectivity estimation algorithm is to identify which relations are involved in a given query. Indeed, each SQL query will usually involve a subset of relations, and thus we only need to use the Bayesian networks that pertain to said subset. The $PruneLinkedBN$ function thus takes care of removing the unnecessary Bayesian networks from the entire set of Bayesian networks.
Naturally, in practice, and depending on implementation details, this may involve simply loading the necessary Bayesian networks in memory. In any case, the next step is to connect the networks into a single one. This necessitates looping over the Bayesian networks in topological order -- in the same exact fashion as algorithm \ref{algo:build} -- and linking them along the way. Linking two Bayesian networks together simply involves replacing the attribute they have in common with the child Bayesian network. For instance, in figure \ref{fig:linked-network-topology}, the $nationality$ attribute from the \textcolor{violet}{purchases} Bayesian network will be replaced by the \textcolor{teal}{customers} Bayesian network. This is because we are interested in the distribution of the attributes after the join, not before. The resulting tree thus approximates the distribution of attribute values inside the ($customers \bowtie purchases \bowtie shops$) join instead of estimating selectivities inside each relation independently, as is done in textbook cost models. The result of this linking process is exemplified in figure \ref{fig:unrolled-network-topology}, which shows the unrolled version of the linked Bayesian network shown in figure \ref{fig:linked-network-topology}. Finally, once the Bayesian networks have been linked together, the sum-product algorithm \cite{kschischang2001factor} can be used to output the desired selectivity. In fact, this final step is exactly the same as the one described in section 3.3 of \cite{halford2019approach}. Our method for estimating selectivities is very efficient. The main reason is that we only need to apply the sum-product algorithm once, whereas \cite{halford2019approach} has to apply it once per relation involved in the query at hand. This difference is made clear when comparing equations \ref{infer-ibn} and \ref{infer-lbn}.
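The linking step can be sketched on the toy schema by representing each tree as a child-to-parent mapping over attribute names. Because the two copies of a shared attribute collapse into a single node, merging the mappings is enough to stitch the trees together; this sketch deliberately ignores the associated distributions.

```python
# Each per-relation tree maps a child attribute to its parent attribute.
purchases = {"day_of_week": "nationality", "name": "nationality"}
customers = {"hair": "nationality", "salary": "nationality"}  # rooted at nationality
shops = {"city": "name", "size": "name"}                      # rooted at name

def link(parent_tree, child_trees):
    """Merge child trees into the parent tree through their shared attributes."""
    unrolled = dict(parent_tree)
    for tree in child_trees:
        unrolled.update(tree)  # shared roots coincide, so the edges simply union
    return unrolled

unrolled = link(purchases, [customers, shops])
```

The result matches figure \ref{fig:unrolled-network-topology}: six edges, all leading back to a single root, so the stitched network is still a tree.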
Furthermore, the sum-product algorithm is much more efficient in the case of trees than the clique tree algorithm from \cite{tzoumas2011lightweight}. We confirm these insights in the benchmarks section. \begin{figure}[H] \centering \begin{tikzpicture}[thick, scale=\textwidth, every node/.style={ellipse, draw, align=center},node distance=1cm and 0.2cm] \node [draw=violet] (nationality) {Nationality}; \node [below left = of nationality, draw=teal] (hair) {Hair}; \node [left = of hair, draw=teal] (salary) {Salary}; \node [below right = of nationality, xshift=-1cm, draw=violet] (dow) {Day of week}; \node [right = of dow, draw=violet] (name) {Name}; \node [below left = of name, draw=orange] (city) {City}; \node [below right = of name, draw=orange] (size) {Size}; \draw [->] (nationality) -- (dow); \draw [->] (nationality) -- (name); \draw [->] (nationality) -- (hair); \draw [->] (nationality) -- (salary); \draw [->] (name) -- (city); \draw [->] (name) -- (size); \end{tikzpicture} \caption{Unrolled version of figure \ref{fig:linked-network-topology}} \label{fig:unrolled-network-topology} \end{figure} \subsection{Including more than just the roots} \label{subsec:more-than-roots} Our model assumes that the dependencies between attribute values within a relation are preserved when a join occurs. Indeed we assume that tuples are uniformly distributed inside a join \emph{given} each value in the root attribute. One may wonder why we have to stop at the root. Indeed, it turns out that we can include more attributes in addition to the root of each child Bayesian network when building a parent Bayesian network. For example, consider the linked Bayesian network shown in figure \ref{fig:super-linked-network-topology}. In this configuration we include the $salary$ attribute as well as the $nationality$ attribute in the Bayesian network of the \textcolor{violet}{purchases} relation. 
By doing so we obtain a new conditional distribution $\cp{salary}{nationality}$ which captures the dependency between \emph{salary} and \emph{nationality} after \textcolor{teal}{customers} has been joined with \textcolor{violet}{purchases}. \begin{figure}[H] \centering \begin{tikzpicture}[thick, every node/.style={ellipse, draw, align=center},node distance=1cm and 0.1cm] \node [draw=violet] (p_nationality) {Nationality}; \node [below = of p_nationality, yshift=0.24cm, draw=violet] (p_salary) {Salary}; \node [below right = of p_nationality, xshift=0.5cm, draw=violet] (dow) {Day of week}; \node [below left = of p_nationality, xshift=-0.5cm, draw=teal] (nationality) {Nationality}; \node [below left = of nationality, draw=teal] (hair) {Hair}; \node [below right = of nationality, draw=teal] (salary) {Salary}; \draw [->] (p_nationality) -- (dow); \draw [->] (p_nationality) -- (p_salary); \draw [dotted] (p_nationality) -- (nationality); \draw [dotted] (p_salary) -- (salary); \draw [->] (nationality) -- (hair); \draw [->] (nationality) -- (salary); \end{tikzpicture} \caption{Linked Bayesian network of \textcolor{teal}{customers} and \textcolor{violet}{purchases}} \label{fig:super-linked-network-topology} \end{figure} The linked Bayesian network shown in figure \ref{fig:super-linked-network-topology} is valid because we can unroll it in order to obtain a single tree, just as we did earlier on when we only included the \emph{nationality} attribute. However, the $salary$ attribute can be included in the \textcolor{violet}{purchases} Bayesian network only because the $nationality$ attribute is included as well. Indeed, if the $nationality$ attribute was not included, then linking \textcolor{teal}{customers} and \textcolor{violet}{purchases} together would have resulted in a Bayesian network which would not necessarily be a tree. In this case, we would not be able to compute $\cp{salary}{nationality}$ in \textcolor{violet}{purchases}'s Bayesian network.
In other words, a node can be included in a parent Bayesian network only if all of its conditioning attributes are included as well. Assuming a child Bayesian network has $n$ nodes, then we can include a number $k \in \{0, \dots, n\}$ of its nodes in the parent Bayesian network. If $k = 0$, then we simply keep each Bayesian network separate, which brings us back to the methodology from \cite{halford2019approach}. If $k = 1$, then we only include the root of each child Bayesian network, which is the case we have discussed up to now. If $k = n$, then we will include all the child's attributes in the parent BN, which is somewhat similar to the global methods presented in \cite{getoor2001selectivity} and \cite{tzoumas2011lightweight}. On the one hand, increasing $k$ will produce larger parent Bayesian networks that capture more attribute value dependencies but also incur a higher computational cost. On the other hand, lower values of $k$ will necessitate less computation but will assume more strongly that dependencies are preserved through joins. The $k$ parameter is thus a practical parameter for compromising between selectivity estimation accuracy and computational requirements. Notice that different values of $k$ can be used for each pair of relations. For instance, we might want to increase $k$ if we notice that the cost model makes very bad estimates for a certain relation. This can be decided upon as deemed fit, be it manually or via automated DBA \cite{van2017automatic}. \subsection{Summary} The method we propose attempts to generalise existing selectivity estimation methods based on Bayesian networks. Following the methodology from \cite{halford2019approach}, we build one Bayesian network per relation using Chow-Liu trees. The only difference is that we include a set of attributes from the child relations into the Bayesian network associated with each parent relation. 
The set of included attributes depends on a chosen parameter $k$ and the structure of each child relation's Bayesian network. Many distributions can be obtained for free because each Bayesian network is a tree in which the root attribute conditions the rest of the attributes. This requires assuming that attribute value dependencies are preserved through joins. This assumption, although not always necessarily true, is much softer than the join uniformity and attribute value independence assumptions. The resulting Bayesian networks are thus able to capture attribute value dependencies across relations, as well as inside individual relations. Although our method requires performing joins offline, we argue that joins are unavoidable if one is to capture any cross-relation dependency whatsoever. The major benefit of our method is that it only requires including a single attribute per join, and yet it brings a great deal of information for free through transitivity thanks to our newly introduced assumption. Moreover, our method can still benefit from the efficient selectivity estimation procedure presented in \cite{halford2019approach} because of the preserved tree structure. Finally, our method is able to generalise existing methods based on Bayesian networks through a single parameter which determines the amount of dependency to measure between the attributes of relations that share a primary/foreign key relationship. \section{Evaluation} \subsection{Experimental setup} We evaluate our proposal on an extensive workload derived from the JOB benchmark \cite{leis2015good}. The JOB benchmark consists of 113 SQL queries, along with an accompanying dataset extracted from the IMDb website. The dataset consists of non-synthetic data, whereas other benchmarks such as TPC-DS \cite{poess2002tpc} are based on synthetic data.
The dataset is challenging because it contains skewed distributions and exhibits many correlations between attributes, both across and inside relations. The JOB benchmark is now an established and reliable standard for evaluating and comparing cost models. The dataset and the queries are publicly available \footnote{JOB dataset and queries: \href{https://github.com/gregrahn/join-order-benchmark/}{https://github.com/gregrahn/join-order-benchmark/}}. In addition, we have made a Docker image available for easing future endeavours in the field \footnote{Docker image: \href{https://github.com/MaxHalford/postgres-job-docker}{https://github.com/MaxHalford/postgres-job-docker}}, as well as code used in our experiments \footnote{Method source code: \href{https://github.com/MaxHalford/tldks-2020}{https://github.com/MaxHalford/tldks-2020}}. During the query optimisation phase, the cost model has to estimate the selectivity of each query execution plan (QEP) enumerated by the query optimiser. Query optimisers usually build QEPs in a bottom-up fashion \cite{chaudhuri1998overview}. Initially, the cost model will have to estimate selectivities for simple QEPs that involve a single relation. It will then be asked to estimate selectivities for larger QEPs involving multiple joins and predicates. We decided to mimic this situation by enumerating all the possible sub-queries for each of the JOB benchmark's queries, as detailed in \cite{chaudhuri2009exact}. For example, if a query pertains to 4 relations, we will enumerate all the possible sub-queries involving 1, 2, 3, and all 4 relations. We also enumerated all the combinations of filter conditions. To do so, we represented each query as a graph with each node being an attribute and each edge a join. We then simply had to retrieve all the so-called \emph{induced subgraphs}, which are the subgraphs obtained by selecting a subset of the nodes together with every edge whose endpoints both belong to that subset. Each induced subgraph was then converted back to a valid SQL statement.
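This enumeration can be sketched with a brute-force search over node subsets, which is perfectly adequate for join graphs of the JOB benchmark's size. The three-relation toy schema stands in for a real query graph here.

```python
from itertools import combinations

def connected_induced_subgraphs(nodes, edges):
    """All connected induced subgraphs of a small graph, as node frozensets."""
    def is_connected(subset):
        subset = set(subset)
        seen, stack = set(), [next(iter(subset))]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(
                v for a, b in edges for x, v in ((a, b), (b, a))
                if x == u and v in subset
            )
        return seen == subset
    return [
        frozenset(s)
        for k in range(1, len(nodes) + 1)
        for s in combinations(nodes, k)
        if is_connected(s)
    ]

# Toy join graph: purchases joins with both customers and shops.
subqueries = connected_induced_subgraphs(
    ["customers", "shops", "purchases"],
    [("customers", "purchases"), ("shops", "purchases")],
)
```

Out of the $2^3 - 1$ candidate subsets, only six induce a connected subgraph: the pair made up of \textcolor{teal}{customers} and \textcolor{orange}{shops} is rejected since those two relations are not directly joined.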
This procedure only takes a few minutes and yields a fairly large number of queries; indeed, a total of 5,122,790 subqueries can be generated for the JOB benchmark's 113 queries. Tables \ref{tab:n-joins} and \ref{tab:n-filters} provide an overview of the contents of our workload. \begin{table}[!htb] \begin{minipage}{.45\linewidth} \caption{Query spread per number of join conditions} \label{tab:n-joins} \centering \begin{tabularx}{0.65\textwidth}{@{}YX@{}} Joins & Amount \\ \hline 0 & 889 \\ 1-5 & 177,309 \\ 6-10 & 1,175,120 \\ 11-15 & 2,060,614 \\ 16-20 & 1,320,681 \\ 21-25 & 388,177 \end{tabularx} \end{minipage}% \hspace{\fill} \begin{minipage}{.45\linewidth} \caption{Query spread per number of filter conditions} \label{tab:n-filters} \centering \begin{tabularx}{0.65\textwidth}{@{}YX@{}} Filters & Amount \\ \hline 1 & 261,440 \\ 2 & 763,392 \\ 3 & 1,301,840 \\ 4 & 1,380,329 \\ 5 & 923,481 \\ 6 & 384,285 \\ 7 & 94,855 \\ 8 & 12,496 \\ 9 & 672 \end{tabularx} \end{minipage} \end{table} The general goal of our experiments is to detail the pros and cons of our method with respect to the textbook approach from \cite{selinger1979access} and some state-of-the-art methods that we were able to implement. Most industrial databases still resort to using textbook approaches, which thus remain an important point of comparison. Specifically, our experiments solely focus on the selectivity estimation module, not on the final query execution time. We assume that improving the selectivity estimates will necessarily have a beneficial impact on the accuracy of the cost model and thus on the query execution time. Naturally, the cost of producing the estimates has to remain reasonable. This seems to be a view shared by many in the query optimisation community \cite{leis2015good}.
Indeed, many papers that deal with selectivity estimation, both established and new, do not measure the impact on the final query execution \cite{chen1994adaptive,poosala1996improved,poosala1997selectivity,tzoumas2011lightweight,vengerov2015join,dutt2019selectivity}. We compared our proposal with a few promising state-of-the-art methods as well as the cardinality estimation module from the PostgreSQL database system. PostgreSQL's cardinality estimation module is a fair baseline as it is a textbook implementation of the decades-old ideas from \cite{selinger1979access}. We used version 10.5 of PostgreSQL and did not tinker with the default settings. Additionally, we did not bother with building indexes, as these have no consequence on the selectivity estimation module. A viable selectivity estimation method should be at least as accurate as PostgreSQL, without introducing too much of a computational cost increase. We implemented basic random sampling \cite{olken1986simple}, which consists in executing a given query on a sample of each relation in order to extrapolate a selectivity estimate. Basic random sampling is simple to implement, but isn't suited for queries that involve joins because of the empty-join problem, as explained in section 2. However, many sampling methods that take into account the empty-join problem have been proposed. We implemented one such method, namely \emph{correlated sampling} \cite{vengerov2015join}. Correlated sampling works by hashing related primary and foreign keys and discarding the tuples of linked relations where the hashes disagree. We also implemented MSCN, which is the deep learning method that is presented in \cite{kipf2019estimating}. Finally, we implemented the Bayesian network approach from \cite{tzoumas2011lightweight}. The latter method differs from ours in that it is a global approach that builds one single Bayesian network over the entire set of relations.
Although a global approach is able to capture more correlations than ours, it requires more computation. We compared our method with different values for the $k$ parameter presented in section \ref{subsec:more-than-roots}. Note that choosing $k = 0$ is equivalent to using the method from \cite{halford2019approach}. Increasing $k$ is expected to improve the accuracy of the selectivity estimates but to deteriorate the computational performance. The $k$ parameter can thus be used to trade between accuracy and computational resources depending on the use case and the constraints of the environment. \subsection{Selectivity estimation accuracy} We first measured the accuracy of the selectivity estimates for each method by comparing their estimates with the true selectivity. The true selectivity can be obtained by executing the query and counting the number of tuples in the result. The appropriate metric for such a comparison is called the $q$-error \cite{moerkotte2009preventing,leis2018query}, and is defined as so: \begin{equation} q(y, \hat{y}) = \frac{\max(y, \hat{y})}{\min(y, \hat{y})} \end{equation} where $y$ is the true selectivity and $\hat{y}$ is the estimated selectivity. The $q$-error thus simply measures the multiplicative error between the estimate and the truth. The $q$-error has the property of being symmetric, and will thus be the same whether $\hat{y}$ is an underestimation or an overestimation. Moreover, the $q$-error is scale agnostic (e.g., $\frac{8}{3} = \frac{24}{9}$), which helps in comparing errors over results with different scales. \begin{figure}[h] \centering \scalebox{0.5}{\includegraphics{figures/q_errors.pdf}} \caption{Sorted $q$-errors for all queries by method on the JOB workload} \label{fig:q-errors} \end{figure} Figure \ref{fig:q-errors} shows the $q$-errors made by each method for all the queries of the workload derived from the JOB benchmark. The $y$ axis represents the $q$-error associated with each query.
Meanwhile, the $x$ axis denotes the number of queries whose $q$-error is below a given value. For instance, PostgreSQL managed to estimate the selectivity of two million queries with a $q$-error of less than 10 for each query. The curves thus give us a detailed view of the distribution of the $q$-errors for each method. While the curves seem to exhibit a linear trend, one must note that the scale of the $y$ axis is logarithmic. The figure gives us a global idea of the accuracy of each method in comparison with the others. The mean, maximum, and meaningful quantiles of the $q$-errors are given in table \ref{tab:q-errors}. \begin{table}[!ht] \caption{$q$-error statistics for each method on the JOB workload} \label{tab:q-errors} \centering \begin{tabularx}{\textwidth}{@{}c|YYYYYY@{}} &median&90th&95th&99th&max&average\\ \hline PostgreSQL & 7.32 & 77.01 & 185.84 & 707.21 & 10906.17 & 77.01\\ Sampling & 4.79 & 16.45 & 33.17 & 81.34 & 1018.43 & 12.71\\ Correlated sampling & 3.83 & 9.63 & 12.63 & 22.72 & 214.1 & 5.79\\ MSCN & 2.99 & 6.12 & 7.47 & 12.49 & 110.56 & 3.89\\ Global BN & 1.95 & 2.92 & 3.22 & 4.01 & 7.45 & 1.99\\ Independent BN & 4.0 & 15.36 & 32.9 & 76.91 & 820.46 & 11.82\\ Linked BN $k=1$ & 2.41 & 5.03 & 6.15 & 8.07 & 21.09 & 2.79\\ Linked BN $k=2$ & 2.13 & 3.7 & 4.26 & 5.23 & 12.6 & 2.3\\ \end{tabularx} \end{table} The overall worst method is the cost model used by PostgreSQL. This is not a surprise, as it assumes total independence between attributes, both within and between relations. It is interesting to notice that the $q$-errors made by PostgreSQL's cost model can be extremely high, sometimes even reaching the tens of thousands. In this case, the query optimiser is nothing short of blind because the selectivity estimates are extremely unreliable. Although this does not necessarily mean that the query optimiser will not be able to find a good query execution plan, it does imply that finding one would be down to luck \cite{leis2018query}.
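The $q$-error metric used throughout this section is straightforward to compute; a minimal sketch, with the symmetry and scale-agnosticism noted above checked on made-up values:

```python
def q_error(y, y_hat):
    # Multiplicative error between the true selectivity y and the
    # estimate y_hat; always >= 1, with 1 meaning a perfect estimate.
    return max(y, y_hat) / min(y, y_hat)

assert q_error(0.5, 0.5) == 1.0              # perfect estimate
assert q_error(8, 3) == q_error(3, 8)        # symmetric
assert q_error(8, 3) == q_error(24, 9)       # scale agnostic
```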
One may even wonder whether estimating a selectivity by picking a random number between 0 and 1 might do better. Using our method with $k$ equal to 0 is equivalent to the methodology proposed by \cite{halford2019approach}. Indeed, if no attributes are shared by the Bayesian networks of each relation, then it is as if we considered attribute value dependencies within each relation but not between relations. As expected, the performance is similar to that of random sampling because both methods capture dependencies within a relation but not between relations. Correlated sampling performs a bit better because it is a join-aware sampling method. However, the rest of the implemented methods seem to be more precise by an order of magnitude. The deep learning method, MSCN, outperforms correlated sampling, but it is not as accurate as the Bayesian networks. However, it can probably reach a better level of performance by tuning some of the many parameters that it exposes. Meanwhile, setting $k = 1$ in our method means that we include the root attribute of each child relation within the Bayesian network of each parent relation. This brings the benefits detailed in section 3. If $k = 2$, then an additional attribute from each child relation is included in the Bayesian network of each parent relation. We can see in figure \ref{fig:q-errors} that the overall accuracy increases with $k$, which is what one would expect. The most accurate method overall is the global Bayesian network presented in \cite{tzoumas2011lightweight}. However, our method with $k = 2$ is not far off. This makes the case that our attribute value dependency preservation assumption is a realistic one. \begin{figure}[h] \centering \scalebox{0.5}{\includegraphics{figures/q_errors_tpcds.pdf}} \caption{Sorted $q$-errors for all queries by method on the TPC-DS workload} \label{fig:q-errors-tpcds} \end{figure} We have also benchmarked the methods on the TPC-DS benchmark.
In contrast to the IMDb dataset used in the JOB benchmark, the TPC-DS dataset is synthetic. By nature, it contains fewer attribute dependencies than would be expected in a realistic use case. The TPC-DS dataset is therefore less realistic than the JOB benchmark. To produce a workload as we did for the JOB benchmark, we took the first 30 queries that are provided with the TPC-DS dataset and generated all possible sub-queries. This led to a total of 1,414,593 queries. The number of joins ranged from 2 to 15. The overall results are shown in table \ref{tab:q-errors-postgres}. As expected, the $q$-errors for the TPC-DS benchmark are better across the board because the dataset exhibits fewer correlations between attributes. Nonetheless, the ranking of the methods remains largely the same. Our method very slightly outperforms the global Bayesian network, but we believe that this is just an implementation artifact. In any case, our method is much more accurate than any method that assumes independence between attributes of different relations. Even so, a viable selectivity estimation method also has to be able to produce estimates in a very short amount of time, which is a point we will now discuss.
\begin{table}[!ht] \caption{$q$-error statistics for each method on the TPC-DS workload} \label{tab:q-errors-postgres} \centering \begin{tabularx}{\textwidth}{@{}c|YYYYYY@{}} &median&90th&95th&99th&max&average\\ \hline PostgreSQL & 1.23 & 43.4 & 138.05 & 1025.49 & 82898.28 & 91.46\\ Sampling & 3.69 & 13.39 & 26.34 & 66.83 & 669.58 & 9.87\\ Correlated sampling & 2.89 & 8.24 & 10.63 & 19.32 & 170.63 & 4.51\\ MSCN & 1.82 & 4.23 & 5.32 & 9.24 & 78.01 & 2.54\\ Global BN & 1.03 & 1.17 & 1.23 & 1.33 & 1.49 & 1.06\\ Independent BN & 2.5 & 13.59 & 33.87 & 89.22 & 597.0 & 9.12\\ Linked BN $k=1$ & 1.04 & 1.38 & 1.54 & 1.74 & 1.94 & 1.11\\ Linked BN $k=2$ & 1.05 & 1.26 & 1.35 & 1.47 & 1.6 & 1.08\\ \end{tabularx} \end{table} \subsection{Inference time} Naturally, we next sought to measure how fast each method is at producing selectivity estimates. In a high-throughput environment, the query optimiser is not allowed to spend much time searching for an efficient query execution plan. In addition to using the cost model, the query optimiser also has to enumerate potential query execution plans and pick one of them \cite{chaudhuri1998overview}. Thus, only a fraction of the short amount of time allocated to the query optimiser can actually be consumed by the cost model. This means that any viable selectivity estimation method has to be extremely efficient, and is probably the main reason why current cost models are kept simple. We call the amount of time necessary to produce a selectivity estimate the \emph{inference time}. During our experiments we recorded the inference time for each query and for each model. Table \ref{tab:inference-time} shows the average inference time for each method, aggregated by the number of joins present in each query.
\begin{table}[!ht] \caption{Average inference time in milliseconds for each method with respect to the number of joins on the JOB workload} \label{tab:inference-time} \centering \begin{tabularx}{\textwidth}{@{}c|YYYY@{}} & No joins & 1 join & 2 to 5 joins & 6 joins or more \\ \hline PostgreSQL & $2.3 \pm 1.1$ & $2.6 \pm 1.4$ & $3.6 \pm 1.3$ & $8.4 \pm 3.1$ \\ Sampling & $19.6 \pm 5.4$ & $36.2 \pm 6.8$ & $120.2 \pm 5.9$ & $268.4 \pm 8.7$ \\ Correlated sampling & $20.4 \pm 4.9$ & $155.7 \pm 3.2$ & $280.6 \pm 7.1$ & $493.4 \pm 9.9$ \\ MSCN & $135.9 \pm 12.1$ & $312.2 \pm 24.4$ & $343.3 \pm 27.4$ & $387.6 \pm 29.2$ \\ Global BN & $84.3 \pm 2.1$ & $116.1 \pm 2.9$ & $145.8 \pm 4.4$ & $236.1 \pm 3.8$ \\ Independent BN & $8.3 \pm 1.8$ & $10.9 \pm 1.3$ & $12.6 \pm 2.4$ & $12.1 \pm 3.2$ \\ Linked BN $k=1$ & $9.5 \pm 1.9$ & $12.8 \pm 1.6$ & $14.1 \pm 2.8$ & $15.2 \pm 3.4$ \\ Linked BN $k=2$ & $10.1 \pm 1.4$ & $12.9 \pm 1.5$ & $14.3 \pm 2.1$ & $16.4 \pm 2.9$ \end{tabularx} \end{table} It is important to mention that the inference time measured for PostgreSQL is simply the time it takes the database to execute the \texttt{ANALYZE} statement for each query. This therefore includes the optimisation time on top of the time spent estimating selectivities. Even though they are already by far the best, the numbers reported for PostgreSQL are thus pessimistic and can be expected to be even lower in practice. It is also worth mentioning that we implemented the rest of the methods in Python, which is an interpreted language and thus slower than compiled languages such as C, in which PostgreSQL is written. If these methods were implemented in optimised C they would naturally be much faster. However, what matters here is the relative differences between the methods, not the absolute ones. We can clearly see from the results in table \ref{tab:inference-time} that the global Bayesian network loses in speed what it gains in accuracy.
This is because it uses a complex inference method called the \textit{clique-tree algorithm}, which is the standard approach for Bayesian networks with arbitrary topologies. Although it is the most accurate method, it is much slower than our method, regardless of the $k$ parameter we use. What is more, the inference time of our method does not increase dramatically when the number of joins increases. This is due to the fact that we use a lightweight inference algorithm called \emph{variable elimination} \cite{cowell2006probabilistic}, also used by \cite{halford2019approach}. The inference algorithm scales well because we are able to merge the Bayesian networks of each relation into a single tree. We can also see that correlated sampling is a relatively slow method, although its accuracy is competitive, as shown in the previous subsection. MSCN is the slowest method overall in our benchmark. This may be attributed to the fact that we implemented it from scratch because no implementation was provided by its authors, and we therefore do not have the insights that they might have. We argue that even though our method is not as accurate as the method proposed by \cite{tzoumas2011lightweight}, it is much faster and is thus more likely to be used in practice. Naturally, we also have to take into account the amount of time it takes to build our method, as well as how much storage space it requires. \subsection{Construction time and space} The cost model uses metadata that is typically computed when the database is not being used. This avoids computing it in real time during the query optimisation phase. This metadata has to be refreshed every so often in order for the cost model to use relevant figures. Typically, the metadata has to be refreshed when the underlying data distributions change significantly. For instance, if attributes become correlated when new data is inserted, then the metadata has to be refreshed to take this into account.
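To illustrate why inference stays cheap on tree-shaped networks: with only two-way conditional probability tables, estimating a selectivity reduces to summing the root variable out of a product of small factors. A toy sketch of this variable elimination step, with made-up distributions over three attributes:

```python
# A tiny tree-shaped Bayesian network: root A with children B and C.
# Only one marginal and two two-way tables need to be stored.
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(0, 'x'): 0.9, (0, 'y'): 0.1,
               (1, 'x'): 0.2, (1, 'y'): 0.8}
P_C_given_A = {(0, 'u'): 0.5, (0, 'v'): 0.5,
               (1, 'u'): 0.3, (1, 'v'): 0.7}

def selectivity(b, c):
    # Variable elimination: P(B=b, C=c) = sum over a of P(a) P(b|a) P(c|a).
    return sum(P_A[a] * P_B_given_A[(a, b)] * P_C_given_A[(a, c)]
               for a in P_A)

sel = selectivity('x', 'u')  # 0.6*0.9*0.5 + 0.4*0.2*0.3 = 0.294
```

The cost is linear in the number of factors rather than exponential in the number of attributes, which is what keeps the inference times in table \ref{tab:inference-time} nearly flat as the number of joins grows.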
Therefore, the amount of time it takes to collect the necessary information matters, as ideally we would like to refresh the metadata as often as possible. Additionally, any viable selectivity estimation method crucially has to make do with a small amount of storage space. Indeed, spatial complexity is a major reason why most methods proposed in the literature are not used in practice. These two computational requirements highlight the dilemma that cost models face: they have to be accurate whilst running with a very low footprint. Most multidimensional methods that have been proposed fare poorly in this regard. \begin{table}[!ht] \caption{Computational requirements of the construction phase per method on the JOB workload} \label{tab:complexity} \centering \begin{tabularx}{0.8\textwidth}{@{}c|YY@{}} & Construction time & Storage size \\ \hline PostgreSQL & 5 seconds & 12KB \\ Sampling & 7 seconds & 276MB \\ Correlated sampling & 32 seconds & 293MB \\ MSCN & 15 minutes 8 seconds & 37MB \\ Global BN & 24 minutes 45 seconds & 429KB \\ Independent BN & 55 seconds & 217KB \\ Linked BN $k=1$ & 2 minutes 3 seconds & 322KB \\ Linked BN $k=2$ & 2 minutes 8 seconds & 464KB \end{tabularx} \end{table} Table \ref{tab:complexity} summarises the computational requirements of the methods we compared. The results explain why PostgreSQL -- and most database engines for that matter -- sticks to using simplistic methods. Indeed, in our measurements PostgreSQL is both the fastest method as well as the lightest one. PostgreSQL's cost model makes many simplifying assumptions and thus only has to build and store one-dimensional histograms, which can be done extremely rapidly. The sampling methods are quite fast in comparison with the methods based on Bayesian networks. This is not surprising, as they only require sampling the relations and then storing the samples.
Indeed, most of the building time involves persisting the samples to disk. On the other hand, sampling methods require a relatively large amount of space because they do not apply any summarising whatsoever (note that their storage sizes are given in megabytes, not kilobytes). Correlated sampling takes more time than basic sampling because it has to scan primary and foreign keys in order to avoid the empty-join problem. MSCN's construction time is moderate, and naturally depends on the amount of data it is trained on. In this case we trained it for 20 epochs of stochastic gradient descent. All of the methods based on Bayesian networks take more time to build than the two sampling methods, which is to be expected. They make up for it in storage requirements and in inference time. The global Bayesian network takes a very large amount of time to build, which is in accordance with the results from \cite{tzoumas2011lightweight}. In comparison, our method is much faster. This is a logical consequence of the fact that we only build one Bayesian network per relation. Additionally, each Bayesian network has a tree topology, which means that each conditional probability distribution we need to store is a two-way table. The sudden jump in building time between $k = 0$ and $k = 1$ is due to the need to compute joins when $k > 0$. However, note that the jump is much smaller between $k = 1$ and $k = 2$. The reason is that the joins do not have to be repeated for each additional attribute included in every parent Bayesian network. \section{Conclusion} During the query optimisation phase, a cost model is invoked by the query optimiser to estimate the cost of query execution plans. In this context, the selectivity of operators is a crucial input to the cost model \cite{leis2015good}. Inaccurate selectivity estimates lead to bad cost estimates, which in turn have a negative impact on the overall running time of a query.
Moreover, errors in selectivity estimation grow exponentially throughout a query execution plan \cite{ioannidis1991propagation}. Selectivity estimation is still an open research problem, even though many proposals have been made. This is down to the fact that the requirements in terms of computational resources are extremely tight, and one thus has to compromise between accuracy and efficiency. Our method is based on Bayesian networks, which are a promising way to resolve this compromise. Although the use of Bayesian networks for selectivity estimation is not new, previous proposals entail a prohibitive building cost and inference time. In order to address these issues, we extend the work of \cite{halford2019approach} to include the measurement of dependencies between attributes of different relations. We show how we can soften the relational independence assumption without requiring an inordinate amount of computational resources. We validate our method by comparing it with other methods on an extensive workload derived from the JOB \cite{leis2015good} and TPC-DS \cite{poess2002tpc} benchmarks. Our results show that our method is only slightly less accurate than the global Bayesian network from \cite{tzoumas2011lightweight}, whilst being an order of magnitude less costly to build and execute. Additionally, our method is more accurate than join-aware sampling, whilst having significantly lower storage and computational requirements. In comparison with other methods which make more simplifying assumptions, our method is notably more accurate, whilst offering very reasonable guarantees in terms of computational time and space. In future work, we wish to extend our method to accommodate specific operators such as \texttt{GROUP BY}, as well as to verify the benefits of our method in terms of overall query response time as perceived by a query issuer. \bibliographystyle{unsrt}
Goltsblat BLP announces further expansion of the firm's Russian/CIS Tax Practice and the appointment of Evgeny Timofeev as its head. International law firm Goltsblat BLP is delighted to announce that Evgeny Timofeev has been appointed as a partner to lead the growing Russian/CIS tax practice. Evgeny Timofeev, highly respected in the Russian legal field, joins the firm's tax department in response to the growing demand from international clients for quality cross-border tax advice. Over the last two years the Goltsblat BLP Tax Practice has expanded significantly and, being among the most sought after by international clients in the Russian market, demonstrated 60% growth in the 2010/2011 financial year. Before joining Goltsblat BLP, Evgeny was a partner at the Salans Moscow office and co-head of the firm's Global Tax Practice. He is one of the country's leading tax practitioners, ranked in the first tier for Tax in Chambers Europe 2011 for very high-level tax skills, and is also mentioned as "a leading tax litigator" with significant experience in litigation against tax authorities, services that are in great demand among Goltsblat BLP clients. Evgeny is also "praised for his diversified theory and practical knowledge, strong experience and outstanding logic", according to Legal 500 2011. Evgeny provides a broad range of contentious and non-contentious tax advice to global corporates, local companies and financial institutions. Evgeny's clients include leading Russian and international companies such as Auchan, ABB, L'Oreal, Russian Alcohol and many more. He is a member of the Board of the Russian Tax Law Association and of the Expert Council on Tax Legislation of the State Duma Budget Committee. He is also a member of the International Fiscal Association and serves on the Tax Committees of the Association of European Business in Russia, the American Chamber of Commerce in Russia and the Deutsch-Russische Auslandshandelskammer.
In addition to his law degree, Evgeny has an MBA from London Business School (2001). Evgeny's appointment follows those of tax partner Andrey Shpak, who joined from PwC in 2010; Simon Allan, the former head of finance at BLP in London, who relocated to the Moscow office in 2011; and Oleg Khokhlov, a highly regarded Russian-qualified banking & finance partner who joined the firm from Linklaters last month. Andrey Goltsblat, Managing Partner of Goltsblat BLP, comments on the appointment of the new partner: "I am glad that a lawyer of Evgeny's caliber is joining our team. I am sure that this appointment will let us further develop our international tax capabilities to better serve our clients on cross-border transactions." Evgeny Timofeev, Partner, Head of Russian/CIS Tax Practice, Goltsblat BLP: "I am delighted to join Goltsblat BLP at such an exciting period of its growth and to be a part of this strong and enthusiastic team. I am very impressed by the high-level projects they are working on and their expanding list of blue-chip clients. Furthermore, there is a great demand from both Russian and international clients for quality tax advice, so I am looking forward to developing these opportunities further." Michael Wistow, Partner, Head of Tax, BLP: "The appointment of Evgeny Timofeev is an exciting development for the rapidly expanding Russian-based tax practice. He is top ranked for international advisory work and will provide even greater depth on the contentious tax side." Evgeny Timofeev Partner, Head of Tax Practice, Advocate
\section{Introduction}\label{introduction} In \cite{Fox-ahs} the author defined a class of geometric structures, called \textbf{AH (\textit{affine hypersurface}) structures}, which generalize both Weyl structures and the structures induced on a co-oriented non-degenerate immersed hypersurface in flat affine space by its second fundamental form, affine normal and co-normal Gau\ss\, map, and defined for these AH structures equations, called \textbf{Einstein}, specializing on the one hand to the usual Einstein Weyl equations and, on the other hand, to the equations for an affine hypersphere. In the present paper these equations are solved on compact orientable surfaces and some aspects of their geometry in this case are described. It is hoped that, aside from their interest as such, the results provide motivation for studying higher dimensional AH structures. \subsection{}\label{introsection} An \textbf{AH structure} on an $n$-manifold is a pair $(\en, [h])$ comprising a projective equivalence class $\en$ of torsion-free affine connections and a conformal structure $[h]$ such that for each $\nabla \in \en$ and each $h \in [h]$ there is a one-form $\si_{i}$ such that $\nabla_{[i}h_{j]k} = 2\si_{[i}h_{j]k}$, or, what is the same, the completely trace-free part of $\nabla_{[i}h_{j]k}$ vanishes (most notational and terminological conventions can be found in sections \ref{backgroundsection} and \ref{ahsection}). When $n = 2$ this compatibility condition is automatic; any pair $(\en, [h])$ is AH. On the one hand, an AH structure for which $\nabla_{i}h_{jk}$ is pure trace for any $\nabla \in \en$ and any $h \in [h]$ is simply a Weyl structure (what is usually called the Weyl connection is the \textbf{aligned} representative $\nabla \in \en$ distinguished by the requirement that $h^{pq}\nabla_{p}h_{iq} = nh^{pq}\nabla_{i}h_{pq}$ for any $h \in [h]$). 
On the other hand, there is induced on any non-degenerate co-orientable immersed hypersurface in flat affine space a pair of AH structures. Namely, for both, $[h]$ is generated by the second fundamental form, while the projective structures are those induced via the affine normal bundle and the conormal Gau\ss\, map. There is a canonical duality associating to each AH structure $(\en, [h])$ a conjugate AH structure $(\ben, [h])$ having the same underlying conformal structure. The Weyl structures are exactly the self-conjugate AH structures, and the two AH structures induced on an affine hypersurface are conjugate in this sense. It may be helpful to think of the formalism of AH structures as giving an intrinsically formulated generalization of the geometry of hypersurfaces in flat affine space in a manner similar to how CR structures abstract the geometry of a pseudoconvex real hypersurface in flat complex Euclidean space. As for CR structures, the generalization is genuine; there are local obstructions to realizing a given AH structure as that induced on a hypersurface in flat affine space. The specialization to surfaces of the notion of Einstein AH structure defined in \cite{Fox-ahs} says that an AH structure on a surface is \textbf{Einstein} if there vanishes the trace-free symmetric part of the Ricci curvature of its aligned representative $\nabla$, while the scalar trace of its Ricci curvature satisfies the condition \eqref{conservationcondition}, generalizing the constancy of the scalar curvature of an Einstein metric, and taken by D. Calderbank in \cite{Calderbank-mobius, Calderbank-faraday, Calderbank-twod} as the definition of a two-dimensional Einstein Weyl structure. Calderbank's point of view in the two-dimensional case motivated the definition of the Einstein AH equations in general.
In all dimensions the notion of Einstein AH structure has the following properties: for Weyl structures it specializes to the usual Einstein Weyl equations; the conjugate of an Einstein AH structure is again Einstein AH; and the AH structures induced on a hypersurface in affine space are Einstein if and only if the hypersurface is an affine hypersphere. There seem to be two principal reasons for interest in these equations. On the one hand, in section $7$ of \cite{Fox-ahs} or \cite{Fox-calg} there are given examples of Einstein AH structures in dimension $4$ and higher which are neither Weyl nor locally immersable as non-degenerate hypersurfaces in flat affine space, though otherwise as nice as possible (in the terminology of \cite{Fox-ahs}, they are exact, with self-conjugate curvature). This means the Einstein AH equations are a genuine generalization of the Einstein Weyl and affine hypersphere equations. On the other hand, via the theorem of Cheng-Yau associating an affine hypersphere to the universal cover of a manifold with strictly convex flat projective structure, methods for solving the Einstein AH equations should lead to analytic methods for producing convex flat projective structures. The two-dimensional case studied here illustrates that this class of AH structures is rich yet sufficiently limited as to be amenable to characterization. The primary purpose of the present paper is to describe the classification up to the action of the group of orientation-preserving diffeomorphisms isotopic to the identity of the Riemannian signature Einstein AH structures on a compact orientable surface. It turns out that these all are either Einstein Weyl structures or have underlying projective structure which is flat and convex.
Such a structure is determined in the former case by a conformal structure and a holomorphic vector field the real part of which is Killing for some metric in the conformal class, and in the latter case by a conformal structure and cubic holomorphic differential, and, consequently, by results going back to C.~P. Wang's \cite{Wang} and E. Calabi's \cite{Calabi-affinemaximal} and completed by F. Labourie's \cite{Labourie-flatprojective} and J. Loftin's \cite{Loftin-affinespheres, Loftin-riemannian, Loftin-compactification}, by a convex flat real projective structure. In the latter case it can be viewed as a solution of the Abelian vortex equations. Precise statements are given later in the introduction. For either Einstein Weyl structures or convex flat projective structures on surfaces the relevant classifications were already understood. The description of the deformation space of strictly convex flat projective structures is due independently to F. Labourie and J. Loftin, while the description of Einstein Weyl structures on surfaces is mostly completed in the papers \cite{Calderbank-mobius, Calderbank-twod} of Calderbank. They are recounted here in part to show concretely how they fit into the formalism used here, and in part to highlight the relation with the Abelian vortex equations, which appears not to have been noted previously. The main novelty is the point of view that there is a common framework encompassing both kinds of structures. The picture that emerges, and is suggestive of what might be true in higher dimensions, is roughly that for Einstein AH structures, positive curvature implies Weyl while negative curvature implies flatness and convexity of the underlying projective structure. The situation recalls that for extremal K\"ahler metrics.
If there are no holomorphic vector fields then the scalar curvature of an extremal K\"ahler metric must be constant; the analogous statement here is that an Einstein AH structure on a compact Riemann surface admitting no holomorphic vector fields must be exact, and, as a consequence, have underlying flat projective structure which is strictly convex. On the other hand, if there are holomorphic vector fields, then there can be extremal K\"ahler metrics with nonconstant scalar curvature, as was shown by E. Calabi in \cite{Calabi-extremal}; the analogous examples here are just the Einstein Weyl structures on spheres and tori. The final sections show that Einstein AH structures arise naturally in at least two other contexts, namely in the construction of Hessian metrics, and on mean curvature zero Lagrangian submanifolds of para-K\"ahler manifolds. In the penultimate section it is shown that the trivial real line bundle over a surface admitting an Einstein AH structure with parallel negative scalar curvature admits several interesting Hessian metrics of Riemannian and Lorentzian signatures which satisfy various Einstein type conditions. In the final section it is shown that an Einstein AH structure is induced on a mean curvature zero Lagrangian submanifold of a para-K\"ahler space form. The remainder of the introduction describes the contents in more detail. \subsection{} Section \ref{backgroundsection} describes background needed in the remainder of the paper. The reader is advised to read it closely enough to become familiar with the notational and terminological conventions employed throughout. Sections \ref{ahsection} and \ref{ahcurvaturesection} describe the basic properties and local curvature invariants of AH structures on surfaces. As for a Riemann surface, the geometric structures considered admit equivalent descriptions in one-dimensional holomorphic terms or two-dimensional smooth real terms.
The complex description generally leads to a more efficient and more transparent description, while the real description is more natural for comparing the two-dimensional results to the higher-dimensional case. Here both, and the transition from one to the other, are described, relying on material recounted in section \ref{holomorphicdifferentialsection} relating holomorphic differentials with conformal Killing and Codazzi tensors. \subsection{} In section \ref{einsteinsection} the Einstein AH equations are defined and their most basic properties are noted. The defining conditions are given several reformulations. In particular, Lemma \ref{einsteinhololemma} shows that a Riemannian AH structure on an oriented surface is Einstein if and only if its Ricci curvature has type $(1,1)$ and its complex weighted scalar curvature (defined in section \ref{fdhsection}) is holomorphic. \subsection{} In order to state the main result of section \ref{scalarsection} it is necessary to recall some definitions. An open subset of the projective sphere is \textbf{convex} if its intersection with any projective line is connected. It is \textbf{properly convex} if its closure contains no pair of antipodal points. A properly convex domain is \textbf{strictly convex} if its boundary contains no open segment of a projective line. For example the positive orthant in $\rea^{n}$ (which is projectively equivalent to a standard simplex) is properly convex but not strictly convex. A flat real projective structure is \textbf{(strictly) convex} if its developing map is a diffeomorphism onto a (strictly) properly convex subset of the projective sphere. 
Perhaps the principal technical result in the paper is Theorem \ref{scalarexacttheorem} which describes what are the possible types of Einstein AH structure on a compact orientable surface in terms of the genus of the surface, the sign of the weighted scalar curvature, the exactness or not of the AH structure, and whether the AH structure is Weyl or not. The precise statement is long, so it is not repeated here, but roughly the result is that a Riemannian Einstein AH structure $(\en, [h])$, with conjugate $(\ben, [h])$, on a compact orientable surface $M$ of genus $g$ satisfies one of the following mutually exclusive possibilities: \begin{enumerate} \item $(\en, [h])$ is exact and Weyl, or, what is the same, $\en$ is generated by the Levi-Civita connection of a constant curvature metric representative of $[h]$. \item The genus is $g \geq 2$. $(\en, [h])$ is exact and not Weyl with parallel negative scalar curvature, $\en$ and $\ben$ are strictly convex flat real projective structures, the cubic torsion is the real part of a holomorphic cubic differential, and a distinguished metric has negative scalar curvature. \item $M$ is a torus. $(\en, [h])$ is exact and not Weyl with parallel negative scalar curvature, $\en$ and $\ben$ are flat real projective structures which are convex but not strictly convex, the cubic torsion is the real part of a holomorphic cubic differential, and a distinguished metric is flat. \item $M$ is a torus, $(\en, [h])$ is Weyl and closed but not exact, the scalar curvature is zero, and the $(1,0)$ part of the aligned representative $\nabla \in \en$ is a holomorphic affine connection. \item $M$ is a torus, $(\en, [h])$ is Weyl and not closed, and the scalar curvature changes signs. \item $M$ is a sphere, $(\en, [h])$ is Weyl and not closed, and the scalar curvature is somewhere positive. \end{enumerate} In particular, an Einstein AH structure on $M$ which is not simply that generated by a conformal structure is either Weyl or exact, but not both.
The proof of Theorem \ref{scalarexacttheorem} is based on Theorem \ref{classtheorem} which shows that for a distinguished metric representing the conformal structure the Einstein equations have a technically convenient form. Although the existence of this distinguished metric is a consequence of the Hodge decomposition, it is said to be \textbf{Gauduchon} because in the higher-dimensional setting the corresponding construction in the context of Einstein Weyl structures is due to P. Gauduchon; see \cite{Gauduchon, Gauduchon-circlebundles}. Theorem \ref{magnetictheorem} shows that for an Einstein Weyl structure on a sphere or torus the integral curves of the metric dual of the Faraday primitive of a Gauduchon metric $h$ are magnetic geodesics for the magnetic flow generated by $h$ and a scalar multiple of the Faraday two-form. This interpretation of Einstein Weyl structures seems to be new. \subsection{} In section \ref{vortexsection} it is shown that an exact Einstein AH structure on a compact orientable surface is equivalent to a special sort of solution of the Abelian vortex equations. Recall that a triple $(\nabla, g, s)$ comprising a Hermitian structure $g$ on a complex line bundle $\emf$, a Hermitian connection $\nabla$ on $\emf$, and a smooth section $s$ of $\emf$ solves the abelian vortex equations if $\nabla$ induces a holomorphic structure on $\emf$ with respect to which $s$ is holomorphic, and there is satisfied a third equation relating the curvature of $\nabla$ and the Hermitian norm of $s$ (see \eqref{vortex}). This extension to compact K\"ahler manifolds of equations proposed by Landau and Ginzburg to model superconductivity was first studied by M. Noguchi in \cite{Noguchi} and S. Bradlow in \cite{Bradlow}. 
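For orientation, in one common normalization (essentially that of \cite{Bradlow}; the constants may differ from those in \eqref{vortex}), on a compact Riemann surface with K\"ahler form $\om$ the equations for a triple $(\nabla, g, s)$ read
\begin{align*}
\nabla^{0,1}s = 0, \qquad \j\Lambda F_{\nabla} = \tfrac{1}{2}\left(\tau - |s|^{2}_{g}\right),
\end{align*}
in which $F_{\nabla}$ is the curvature of $\nabla$, $\Lambda$ denotes contraction with $\om$, and $\tau$ is a real parameter; the first equation says that $s$ is holomorphic with respect to the holomorphic structure $\nabla^{0,1}$ determined by $\nabla$, and the second is the equation relating the curvature of $\nabla$ and the Hermitian norm of $s$.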
A special class of solutions, the \textbf{$p$-canonical} solutions, is distinguished by the requirement that the resulting $s$ be a section of the $p$th power of the canonical bundle with respect to the underlying complex structure, and that $\nabla$ be the Hermitian connection induced by the underlying K\"ahler structure. The Abelian vortices corresponding to exact Einstein AH structures are restricted in the sense that they correspond to $3$-canonical solutions. This gives a geometric interpretation to $3$-canonical solutions of the Abelian vortex equations which appears not to have been previously observed. It should be interesting to understand exactly how this plays out at the level of moduli spaces, though note that diffeomorphism inequivalent Einstein AH structures can determine gauge equivalent Abelian vortices, so at this level the correspondence is neither injective nor surjective. See the final paragraphs of section \ref{vortexsection} for a brief discussion. \subsection{} Section \ref{constructionsection} is devoted mainly to showing constructively that all of the possibilities identified in Theorem \ref{scalarexacttheorem} are realized and to describing the moduli/deformation spaces of solutions. Combining Theorems \ref{scalarexacttheorem} and \ref{2dmodulitheorem} leads to the following theorem, the proof of which is completed in section \ref{convexsection}. \begin{theorem}\label{summarytheorem} On a compact orientable surface $M$ of genus at least $2$, the following spaces are in canonical bijection: \begin{list}{(\arabic{enumi}).}{\usecounter{enumi}} \renewcommand{\theenumi}{(\arabic{enumi})} \item \label{ds1} The fiber bundle over the Teichm\"uller space of $M$, the fibers of which comprise the cubic holomorphic differentials. \item \label{ds2} The deformation space of convex flat real projective structures.
\item \label{ds3} The deformation space of Einstein AH structures. \end{list} The same equivalences are true for $M$ of genus $1$ provided that \ref{ds3} is replaced by \begin{list}{(\arabic{enumi}).}{\usecounter{enumi}} \setcounter{enumi}{3} \item \label{ds3b} The deformation space of exact Einstein AH structures. \end{list} \end{theorem} \noindent Aside from its interest as such, Theorem \ref{summarytheorem} is suggestive of what can be expected to be true about the corresponding structures in higher dimensions. It plays for Einstein AH structures a role something like the uniformization theorem plays in the theory of higher-dimensional Einstein metrics. The main point of section \ref{constructionsection} is to prove directly the implication \ref{ds1}$\Rightarrow$\ref{ds3} of Theorem \ref{summarytheorem}, which is Theorem \ref{2dmodulitheorem}. The equivalence \ref{ds1}$\iff$\ref{ds2} (and also, implicitly, the implication \ref{ds2}$\Rightarrow$\ref{ds3}) of Theorem \ref{summarytheorem} was known already, having been proved independently by F. Labourie (see \cite{Labourie-flatprojective}) and J. Loftin (see \cite{Loftin-affinespheres}). The proof given here of Theorem \ref{2dmodulitheorem} is not in essential points different from Loftin's proof of the implication \ref{ds1}$\Rightarrow$\ref{ds2}, but is set in a slightly more general context so as to yield Corollary \ref{vortexcorollary}, which shows how to construct certain $p$-canonical solutions of the Abelian vortex equations. Also there are extracted some bounds on the solutions which yield bounds on the volume and curvature of distinguished metrics of Einstein AH structures. These represent very preliminary steps in the direction of understanding the analogues in the present context of Teichm\"uller curves. 
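As a consistency check on Theorem \ref{summarytheorem}, the dimension of the space \ref{ds1} can be computed by Riemann--Roch (a standard computation, recorded only for orientation). For $g \geq 2$, writing $K$ for the canonical bundle of the Riemann surface determined by a point of the Teichm\"uller space, $\deg K^{3} = 6g - 6 > 2g - 2$, so $\dim_{\com}H^{0}(M, K^{3}) = (6g - 6) - g + 1 = 5g - 5$, and the total space of the bundle in \ref{ds1} has real dimension
\begin{align*}
(6g - 6) + 2(5g - 5) = 16g - 16 = -8\chi(M),
\end{align*}
consistent with the dimension of the deformation space of convex flat real projective structures computed in \cite{Goldman-convex}.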
As is explained in section \ref{singularmetricsection} there is a natural diffeomorphism equivariant action of $GL^{+}(2, \rea)$ on the space of cubic holomorphic differentials coming from its action on the singular flat Euclidean structure determined by such a differential on the complement of its zero locus. Theorem \ref{summarytheorem} means that the orbits of this action on the space of cubic holomorphic differentials determine disks in the deformation space of Einstein AH structures. An analysis of the structure of these disks awaits, but in section \ref{constructionsection} there are proved some results about the path in the deformation space corresponding to a ray in the space of cubic holomorphic differentials. Theorem \ref{liptheorem} shows that, with a suitable parameterization of the ray, the conformal factor relating the Gauduchon metric at time $t$ to the Gauduchon metric at time $0$ is pointwise non-decreasing and Lipschitz in $t$. This makes possible some statements about the limiting behavior along the ray of the volume and curvature of suitably scaled Gauduchon metrics. In order to say a bit more about the background to Theorem \ref{summarytheorem}, some context is recalled. In \cite{Choi-Goldman}, S. Choi and W.~M. Goldman showed that the deformation space of convex flat real projective structures on a two-dimensional orbifold $M$ of negative orbifold characteristic $\chi(M)$ is homeomorphic to a cell of dimension $-8\chi(M) + k$, where $k$ is a number expressible in terms of the orders of the stabilizers of the singular points; this generalizes an earlier theorem of Goldman in \cite{Goldman-convex} for the manifold case. It follows from a theorem of Thurston that a compact $2$-orbifold of non-positive orbifold Euler characteristic is a quotient of a manifold by a finite group; see the end of section $1.2$ of \cite{Choi-Goldman}. In \cite{Wang}, C.~P.
Wang shows how a hyperbolic affine hypersphere gives rise to a conformal structure and a cubic holomorphic differential, and conversely, how given such data on a compact oriented surface there is associated to its universal cover a hyperbolic affine hypersphere. On the affine hypersphere over the universal cover of $M$ the difference tensor of the affine connection induced via the affine normal and the Levi-Civita connection of an equiaffine metric is the real part of a holomorphic cubic differential (it is often called the \textit{Pick form}). This observation underlies the Labourie-Loftin theorem. The content of the implication \ref{ds2}$\Rightarrow$\ref{ds3} of Theorem \ref{summarytheorem} is the claim that the Einstein AH structure is determined by its underlying projective structure (which is necessarily flat by Lemma \ref{2deinsteinlemma}). As is described briefly now, and in more detail in section \ref{convexsection}, this can be deduced from various results of S.~Y. Cheng and S.~T. Yau. By a theorem of Cheng and Yau (see \cite{Loftin-survey} for the full history) resolving a conjecture of E. Calabi, the interior of the cone over a properly convex domain admits a unique foliation by hyperbolic affine hyperspheres asymptotic to the cone. In particular, this can be applied to the cone over the universal cover of a $2$-orbifold $M$ carrying a convex flat real projective structure, and in the manifold case the AH structure induced on the affine hypersphere descends to $M$. Since these AH structures are always exact, this means $M$ carries a canonical homothety class of metrics (those induced by the equiaffine metrics on the affine hyperspheres foliating the interior of the cone) and so a distinguished connection, the Levi-Civita connection of any one of these metrics. Technically, this has two aspects. 
One is that an immersed hyperbolic affine hypersphere is properly embedded if and only if the induced affine metric is complete; for references to a proof see section $5$ of \cite{Trudinger-Wang-survey}. The other is that a convex flat real projective structure determines a distinguished conformal structure. The latter can be obtained from either of two theorems of Cheng and Yau solving certain Monge-Amp\`ere equations. In \cite{Cheng-Yau-mongeampere}, it is shown that on a bounded convex domain $\Omega \subset \rea^{n}$ there is a unique smooth, convex solution of the equation $u^{n+2}\det \hess u = (-1)^{n}$ vanishing on the boundary of $\Omega$. The radial graph of $u$ is the desired affine hypersphere (see Theorem $3$ of \cite{Loftin-riemannian}). Alternatively, in \cite{Cheng-Yau-realmongeampere} it is shown that on a convex cone $\Omega \subset \rea^{n+1}$ containing no complete affine line there is a unique smooth function $F$ solving $\det \hess F = e^{2F}$, tending to $+\infty$ at the boundary of the cone, and such that $\hess F$ is a complete Riemannian metric on the interior of the cone; the level sets of $F$ are the desired affine hyperspheres asymptotic to the cone (the uniqueness claim follows by passing to the corresponding K\"ahler metric on the tube domain over $\Omega$ and appealing to the Schwarz lemma for volume forms in \cite{Mok-Yau}). Because this approach does not appear to have been written anywhere, it is sketched in section \ref{hessianmetricsection}. These theorems of Cheng-Yau should be viewed as real analogues of the theorems of Cheng, Mok and Yau producing complete K\"ahler Einstein metrics, e.g. \cite{Cheng-Yau-completekahler}, \cite{Mok-Yau}. (In fact the latter theorem follows from the specialization to a tube domain over a pointed convex cone of the theorem of N. Mok and Yau in \cite{Mok-Yau} producing a K\"ahler Einstein metric on a bounded domain of holomorphy). 
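The simplest instance of both theorems, included only as an illustration, occurs in one dimension. On $\Omega = (-1, 1) \subset \rea$ the convex function $u(x) = -\sqrt{1 - x^{2}}$ vanishes on $\pr\Omega$ and satisfies
\begin{align*}
u^{3}u'' = -(1 - x^{2})^{3/2}\cdot(1 - x^{2})^{-3/2} = -1 = (-1)^{n},
\end{align*}
and its radial graph is a branch of the hyperbola $y^{2} - x^{2} = 1$, the basic example of a hyperbolic affine hypersphere. Likewise, on the cone $\Omega = (0, \infty)$ the function $F(x) = -\log x$ tends to $+\infty$ at the boundary, satisfies $F'' = x^{-2} = e^{2F}$, and $\hess F = x^{-2}\,dx^{2}$ is a complete metric on $\Omega$.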
For further background the reader can consult, in addition to papers already mentioned, \cite{Loftin-survey, Loftin-compactification}, \cite{Cheng-Yau-mongeampere, Cheng-Yau-affinehyperspheresI}, and \cite{Calabi-improper, Calabi-bernsteinproblems, Calabi-completeaffine, Calabi-affinemaximal}. In \cite{Labourie-flatprojective}, Labourie has shown that in two dimensions these results admit more direct and simpler proofs, and has given a variety of other ways of understanding them. \subsection{} The remaining Einstein AH structures on compact orientable surfaces are all Weyl and occur on either the torus or the sphere. The existence of such structures in the case of zero scalar curvature (on the torus) is straightforward, and they correspond in a natural way to holomorphic affine connections. This is explained in section \ref{torusconstantsolutionssection}. The remaining cases are Einstein Weyl structures which are not closed. Such a structure determines a vector field $X$ which is Killing for the Gauduchon metric. It follows that $X$ is the real part of a holomorphic vector field, though it is not an arbitrary holomorphic vector field, for $X$ preserves some metric and so generates a circle action; in particular on the torus it must generate a rational flow. Using this circle action the equations that need to be solved to construct an Einstein Weyl structure are reduced to an ODE, which after further reductions admits explicit solutions in terms of elementary functions. In the case of the sphere, the moduli space of Einstein Weyl structures is essentially parameterized by conjugacy classes of elliptic elements of $PSL(2, \com)$, that is by a half-open interval. The precise statement is Theorem \ref{spheremodulitheorem}. In the torus case, because of the aforementioned rationality condition, it is not clear how to describe nicely the deformation space. 
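For orientation regarding Theorem \ref{spheremodulitheorem}: every elliptic element of $PSL(2, \com)$ is conjugate to a rotation
\begin{align*}
z \mapsto e^{\j\theta}z, \qquad \theta \in (0, 2\pi),
\end{align*}
and, since conjugation by $z \mapsto 1/z$ interchanges $\theta$ and $2\pi - \theta$, the conjugacy classes are parameterized by $\theta \in (0, \pi]$, which accounts for the half-open interval.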
Much of the description of Einstein Weyl structures given in section \ref{spheretorussection} can be found in some similar form in \cite{Calderbank-mobius} and \cite{Calderbank-twod}, although the presentation here is perhaps more elementary, and is made to illustrate the realization of the possibilities stated in Theorem \ref{scalarexacttheorem}; also the description of when the solutions found are equivalent is made more explicit than it is in \cite{Calderbank-mobius} and \cite{Calderbank-twod}, although it seems that the results were understood by Calderbank. \subsection{}\label{maintrosection} In section \ref{hessianmetricsection} it is shown that an Einstein AH structure $(\en, [h])$ on a compact orientable surface $M$ of genus $g > 1$ gives rise to two metrics $f_{IJ}$ and $g_{IJ}$ on $M \times \reat$ which together with the flat affine structure induced on $M \times \reat$ by $\en$ constitute Hessian metrics. The metric $g_{IJ}$ is a Lorentz signature Monge-Amp\`ere metric, while $f_{IJ}$ is a Riemannian signature Hessian metric which is Einstein when viewed as a K\"ahler affine metric in the sense of \cite{Cheng-Yau-realmongeampere}. Precise statements are given in Theorem \ref{hessiantheorem}. In particular, it is shown that, with respect to $g_{IJ}$, $M$ is a smoothly immersed, spacelike, umbilic hypersurface of constant mean curvature. In section \ref{dustsection} it is explained that the metric $g_{IJ}$ can also be viewed as a solution of $2+1$ gravitational equations with stress energy tensor of the form corresponding to a pressureless perfect fluid. In section \ref{mongeamperemetricsection} it is shown that for each $C > 0$ there is a Riemannian signature Monge-Amp\`ere metric on $M \times [-\log C, \infty)$. Its potential has the form $\Psi(F)$ where $F$ is the potential for the metric $f_{IJ}$ described in the previous paragraph, and $\Psi$ is the function given in \eqref{psit}. The precise statement is Theorem \ref{mongeamperetheorem}.
\subsection{} In \cite{Aledo-Espinar-Galvez}, J.~A. Aledo, J.~M. Espinar, and J.~A. G\'alvez have studied a surface equipped with what they call a \textbf{Codazzi pair}, which is a pair comprising a Riemannian metric and a second symmetric covariant tensor satisfying the Codazzi equations with respect to the Levi-Civita connection of the metric. They view this as an abstraction of the geometric structure induced on a hypersurface in Euclidean three space, and have shown that many classical results can be strengthened in this context. Though the setting is different from that considered here, the perspective is similar. One of the motivations of \cite{Aledo-Espinar-Galvez} is that such Codazzi pairs arise in other ways, e.g. the real part of the Hopf differential of a harmonic mapping yields such a pair. Similarly AH structures on surfaces arise naturally in the study of submanifolds of para-K\"ahler manifolds. In section \ref{parakahlersection} it is explained that a mean curvature zero spacelike Lagrangian immersed submanifold of a para-K\"ahler manifold of constant para-holomorphic sectional curvature inherits an Einstein AH structure; see Theorem \ref{parakahlertheorem} and the final paragraph of section \ref{pksection}. It is indicated also how to associate to certain Einstein AH structures such an immersion. I thank an anonymous referee for pointing out that essentially equivalent ideas have been worked out independently in R. Hildebrand's papers \cite{Hildebrand-crossratio} and \cite{Hildebrand-parakahler}. In particular Theorem \ref{parakahlertheorem} is equivalent to theorems in \cite{Hildebrand-crossratio} modulo changes of terminology. Related constructions appeared already in L. Vrancken's \cite{Vrancken-centroaffine}. 
Corollary \ref{parakahlercorollary} states some restrictions on mean curvature zero Lagrangian immersions in four dimensional para-K\"ahler space forms resulting from applying Theorem \ref{scalarexacttheorem} to the induced Einstein AH structure. Closely related results about minimal Lagrangian immersions in complex hyperbolic space are the focus of the preprint \cite{Loftin-Mcintosh} of J. Loftin and I. McIntosh. \section{Notation and terminology}\label{backgroundsection} This section records the notational and terminological conventions in use throughout the paper. \subsection{} Throughout $M$ is a connected smooth ($\cinf$) manifold. For a vector bundle $E$, $\Ga(E)$ denotes the space of its smooth sections (even if $E$ has a holomorphic structure). Its $k$th symmetric power and top exterior power are written respectively as $S^{k}(E)$ and $\Det E$. For a line bundle $E$, $|E|$ is the tensor product of $E$ with its orientation bundle. Sections of $|\Det \ctm|^{\la}$ are called \textbf{$\la$-densities}. A tensor taking values in $|\Det \ctm|^{\la}$ for some $\la \in \reat$ is said to be \textbf{weighted}. If $M$ is compact the integral of a $1$-density makes sense, and so there is a bilinear pairing $\lb\dum, \dum \ra:\Ga(|\Det \ctm|^{\la}) \times \Ga(|\Det \ctm|^{1-\la}) \to \rea$ between densities of weight $\la$ and the complementary weight $(1-\la)$. \subsection{}\label{holobackgroundsection} Given a complex structure $J \in \eno(\ste)$ on the real vector space $\ste$, the complexification $\stec$ decomposes as the direct sum $\stec = \ste^{1,0} \oplus \ste^{0, 1}$ of the $\pm \j$ eigenspaces of the extension of $J$ to $\stec$ by complex linearity. The induced action of $J$ on $\sted$ is defined by $J(\al) \defeq \al \circ J$, so that $(\sted, J)$ is a complex vector space, and the $(1,0)$ part $\ste^{\ast\, 1,0}$ of $\sted$ is the $\com$-dual of $\ste^{1, 0}$ and annihilates $\ste^{0,1}$.
A completely symmetric or anti-symmetric tensor decomposes by type. If $B$ is in $S^{p+q}(\sted)$ or $S^{p+q}(\ste)$ denote by $B^{(p, q)}$ the $(p,q)$ part of its complex linear extension as an element of $S^{p+q}(\sted\tensor_{\rea}\com)$. For example, for $\al \in \sted$, $2\al^{(1, 0)} = \al - \j J(\al) = \al - \j \al \circ J$. The preceding makes sense fiberwise on a complex vector bundle. If $(M, J)$ is a complex manifold, there is written $TM \tensor_{\rea}\com = T^{1, 0} \oplus T^{0, 1}$, while the complex tangent bundle $\tcm$ is $TM$ viewed as a complex vector bundle; it is identified as a complex vector bundle with $T^{1,0}$. A \textbf{holomorphic structure} on a complex vector bundle $E$ over a complex manifold is a linear differential operator $\hat{D}$ sending $E$-valued $(p,q)$-forms to $E$-valued $(p, q+1)$-forms, satisfying the Leibniz rule $\hat{D}(fs) = \delbar f \wedge s + f\hat{D}s$ for any $f \in \cinf(M)$ and any smooth section $s$ of $E$, and such that $\hD^{2} = 0$. A smooth local section of $E$ is \textbf{$\hat{D}$-holomorphic} if it is in $\ker \hat{D}$. If $(M, J)$ is a complex manifold then any bundle of complex tensors (a tensor product of powers of $\tcm$ and its dual) over $M$ is naturally a holomorphic vector bundle, and the corresponding holomorphic structure is denoted $\delbar$. The exterior differential decomposes by type as $d = \del + \delbar$. For a one-complex dimensional manifold and $p \in \integer$, define $\cano^{p}$ to be the $p$th power of the complex cotangent bundle $\tcm^{\ast}$ viewed as a holomorphic line bundle, and viewed also as the $(-p)$th symmetric power of $T^{1,0}$ (a negative power means the power of the dual). A smooth (holomorphic) section of $\cano^{p}$ is called a complex (holomorphic) \textbf{$p$-differential}. In a local holomorphic coordinate $z$, a holomorphic $p$-differential $\si$ has the form $\phi(z)dz^{p}$ for a holomorphic function $\phi(z)$.
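Concretely, on the overlap of two holomorphic coordinates $z$ and $w = w(z)$ the coefficient of a $p$-differential transforms with weight $p$:
\begin{align*}
\si = \phi(z)\,dz^{p} = \tilde{\phi}(w)\,dw^{p} \quad \Longrightarrow \quad \phi(z) = \tilde{\phi}(w(z))\,w'(z)^{p},
\end{align*}
so that a holomorphic $p$-differential on a Riemann surface is the same thing as a compatible family of local holomorphic functions transforming in this way.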
\subsection{} Tensors are usually indicated using the abstract index notation. Ordinary tensors are indicated by lowercase Latin abstract indices, so that, for instance, $a_{ij}$ indicates a covariant two tensor. If a complex structure is given, lowercase Greek indices, e.g. $\al, \be, \ga$, etc., decorate sections of the tensor powers of $T^{1,0}$ and its complex dual, while barred lowercase Greek indices, e.g. $\bar{\al}, \bar{\be}, \bar{\ga}$, etc., decorate sections of the tensor powers of $T^{0, 1}$ and its complex dual. Enclosure of indices in square brackets (resp. parentheses) indicates complete skew-symmetrization (resp. complete symmetrization), so that for example $a^{ij}= a^{(ij)} + a^{[ij]}$ indicates the decomposition of a contravariant two-tensor into its symmetric and skew-symmetric parts, and $(X \wedge Y)^{ij} = 2X^{[i}Y^{j]}$ for vector fields $X$ and $Y$. The summation convention is always in effect in the following form: indices are in either \textit{up} position or \textit{down} and a label appearing as both an up index and a down index indicates the trace pairing. Since polynomials on the vector space $\ste$ are tautologically identified with symmetric tensors on the dual vector space $\ste^{\ast}$, the index $i$ in $\tfrac{\pr}{\pr y_{i}}$ has to be regarded as an \textit{up} index. The horizontal position of indices is always maintained when they are raised or lowered. The interior multiplication of a vector field $X^{i}$ in a covariant tensor $B_{i_{1}\dots i_{k}}$ is defined by $\imt(X)B_{i_{1}\dots i_{k-1}} \defeq X^{p}B_{pi_{1}\dots i_{k-1}}$. \subsection{} The \textbf{curvature} $R_{ijk}\,^{l}$ of a torsion-free affine connection $\nabla$ is defined by $2\nabla_{[i}\nabla_{j]}X^{k} = R_{ijp}\,^{k}X^{p}$. The \textbf{Ricci curvature} is the trace $R_{ij} \defeq R_{pij}\,^{p}$. 
\subsection{}\label{riemannsurfacehodgestarsection} A non-degenerate weighted covariant two-tensor $h_{ij}$ determines a contravariant two-tensor $h^{ij}$ of complementary weight defined by $h^{ip}h_{jp} = \delta_{j}\,^{i}$, in which here, as always, $\delta_{i}\,^{j}$ is the tautological $\binom{1}{1}$-tensor determined by the pairing of vectors with covectors. By $\det h$ is meant the $2$-density which satisfies $\lb \det h, E_{1}\wedge \dots \wedge E_{n}\ra = \det h(E_{i}, E_{j})$ for any frame $E_{1}, \dots, E_{n}$. A \textbf{pseudo-Riemannian metric} or, simply, a \textbf{metric} means a non-degenerate covariant two-tensor $h_{ij}$. The metric is \textbf{Riemannian} if it is positive definite. A \textbf{conformal structure} $[h]$ means a pseudo-Riemannian metric determined up to multiplication by a positive function. A conformal structure is identified with its \textbf{normalized representative} $H_{ij}\defeq |\det h|^{-1/\dim M}h_{ij}$ which takes values in the bundle of $-(2/\dim M)$-densities. For conformal metrics $\tilde{h}_{ij} = fh_{ij}$ the Levi-Civita connections are written $\tD$ and $D$, and their difference tensor is written $\tD - D = 2\si_{(i}\delta_{j)}\,^{k} - h_{ij}h^{kp}\si_{p}$, for $2\si_{i} = (d\log f)_{i}$. The curvature of $D$ is written $\sR_{ijk}\,^{l}$. Objects corresponding to $\tD$ are written with the same notations as those corresponding to $D$, but decorated with a $\tilde{\,}$, although for the scalar curvatures it will be convenient to write $\sR_{\tilde{h}}$ and $\sR_{h}$ rather than $\tilde{\sR}$ and $\sR$. For example, the scalar curvature changes under conformal rescaling by \begin{align} \label{conformalscalardiff}&f\sR_{\tilde{h}} = \sR_{h} - 2h^{pq}D_{p}\si_{q} = \sR_{h} - \lap_{h}\log f . \end{align} Here $\lap_{h}$ is the rough Laplacian $h^{pq}D_{p}D_{q}$, which acts on tensors as well as on functions.
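As an illustration of \eqref{conformalscalardiff} (a standard computation recorded only for convenience), let $h$ be the flat metric on the unit disk in $\rea^{2}$ and $\tilde{h} = fh$ with $f = 4(1 - |x|^{2})^{-2}$. Then $\log f = \log 4 - 2\log(1 - |x|^{2})$ and
\begin{align*}
\lap_{h}\log f = \frac{8}{(1 - |x|^{2})^{2}} = 2f, \qquad \text{so that} \qquad \sR_{\tilde{h}} = f^{-1}\left(\sR_{h} - \lap_{h}\log f\right) = -2,
\end{align*}
recovering the hyperbolic metric, with constant scalar curvature $-2$ and Gauss curvature $-1$.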
Given a metric $h$, for any $X_{i_{1}\dots i_{k}}\,^{j_{1}\dots j_{l}}$ the notations $X^{\sharp}$ and $X^{\flat}$ indicate the tensors obtained by raising and lowering all indices using $h$. That is $X^{\flat}_{i_{1}\dots i_{k+l}} = X_{i_{1}\dots i_{k}}\,^{j_{1}\dots j_{l}}h_{j_{1}i_{k+1}}\dots h_{j_{l}i_{k+l}}$, and similarly for $X^{\sharp}$. The $h$-norm of a tensor $X$ is defined by complete contraction, e.g. for $A_{ij}\,^{k}$, $|A|_{h}^{2} = A^{\flat}_{ijk}A^{\sharp\,ijk} = A_{ij}\,^{k}A_{ab}\,^{c}h^{ia}h^{jb}h_{kc}$. \subsection{}\label{riemannsurfacesection} A \textbf{Riemann surface} is an oriented surface $M$ equipped with a Riemannian signature conformal structure $[h]$. (Note that \textit{Riemannian surface} and \textit{Riemann surface} are not synonyms; the former indicates there is given a distinguished metric, while the underlying smooth structure need not be orientable). On a Riemann surface there is a unique almost complex structure $J$ defined in terms of any $h \in [h]$ by the requirements that $J_{i}\,^{p}J_{j}\,^{q}h_{pq} = h_{ij}$ and that the two-form $\om_{ij} \defeq J_{i}\,^{p}h_{pj}$ determine the given orientation. As these conditions do not depend on the choice of $h \in [h]$, $J$ is determined by the conformal structure $[h]$. On the other hand, given a complex structure $J$ on a surface $M$, the orientation determined by $X \wedge JX$ does not depend on the choice of $X$, and there is a unique Riemannian signature conformal structure $[h]$ such that $J_{i}\,^{p}J_{j}\,^{q}h_{pq} = h_{ij}$. Any almost complex structure on a surface is integrable, or, equivalently $DJ = 0$, where $D$ is the Levi-Civita connection of $h$, and so $(h, J, D)$ is a K\"ahler structure on $M$ with K\"ahler form $\om_{ij}$ and associated \textbf{Hermitian metric} $h^{(1, 1)}$. On any bundle $E$ of complex tensors, $h$ determines a Hermitian structure, and $D$ induces the unique Hermitian connection such that $D^{0, 1} = \delbar$. 
\subsection{} For a Hermitian metric $h^{(1,1)}_{\al\bar{\be}}$ it is convenient to omit the superscript $(1,1)$, and to write instead simply $h_{\al\bar{\be}}$, although it should be kept in mind that $h_{ab}$ and $h_{\al\bar{\be}}$ refer to different objects. The dual bivector $h^{\al\bar{\be}}$ is defined by $h^{\al\bar{\be}}h_{\ga\bar{\be}} = \delta_{\ga}\,^{\al}$. The conventions are such that $\delta_{\al}\,^{\be}$ indicates the tautological endomorphism of $T^{1,0}$, while $\delta_{i}\,^{j}$ indicates the tautological endomorphism of $TM$. The convention is that for a section $X^{a_{1}\dots a_{k}}$ of $\tensor^{k}TM$, the Hermitian norm of its $(k, 0)$ part $X^{\al_{1}\dots \al_{k}}$ is defined by complete contraction with its complex conjugate using $h_{\al\bar{\be}}$; e.g. for $X \in \Ga(TM)$, $|X^{(1,0)}|^{2}_{h} = X^{\al}X^{\bar{\be}}h_{\al\bar{\be}} = \tfrac{1}{2}|X|_{h}^{2}$. \subsection{} The formal adjoint $\dad_{h}$ of the exterior differential with respect to the pairing of forms given by $h$ is given on $k$-forms by $\dad_{h}\al_{i_{1}\dots i_{k-1}} = -D^{p}\al_{p i_{1}\dots i_{k-1}}$. Since, for a $k$-form $\al_{i_{1}\dots i_{k}}$ there holds $f\dad_{\tilde{h}}\al_{i_{1}\dots i_{k-1}} = \dad_{h}\al_{i_{1}\dots i_{k-1}} + 2(k-1)\si^{\sharp\, p}\al_{pi_{1}\dots i_{k-1}}$, that a one-form be co-closed (in $\ker \dad_{h}$) is a conformally invariant condition. For a one-form whether there is written $D^{p}\ga_{p}$ or $-\dad_{h}\ga$ will depend on context. On a Riemann surface the action of the Hodge star operator $\hs = \hsh$ on one-forms depends only on the conformal class $[h]$, and is given in terms of the complex structure $J$ determined by $[h]$ by $(\hs \al)_{i} = -\al_{p}J_{i}\,^{p}$. On an oriented surface $\dad_{h} = -\star d \star$.
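For example, on $\rea^{2}$ with the standard flat metric, orientation, and complex structure $J\pr_{x} = \pr_{y}$, the formula $(\hs \al)_{i} = -\al_{p}J_{i}\,^{p}$ gives
\begin{align*}
(\hs dx)(\pr_{x}) = -dx(J\pr_{x}) = 0, \qquad (\hs dx)(\pr_{y}) = -dx(J\pr_{y}) = 1,
\end{align*}
so that $\hs dx = dy$ and, likewise, $\hs dy = -dx$; equivalently, $\hs dz = -\j\,dz$ for $z = x + \j y$.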
\section{Holomorphic differentials and conformal Killing and Codazzi tensors}\label{holomorphicdifferentialsection} This section \ref{holomorphicdifferentialsection} records some basic facts about holomorphic differentials on Riemann surfaces. The purely real equations for symmetric tensors characterizing conformal Killing and Codazzi tensors make sense in higher dimensions (see section $6$ of \cite{Fox-ahs}), but on surfaces their complex counterparts are easier to work with. Lemma \ref{kdifferentialslemma} states the identification on Riemann surfaces of Codazzi and conformal Killing tensors as the real parts of holomorphic differentials, and Lemma \ref{flatmetriclemma} shows that such a differential determines a singular flat metric. For quadratic differentials these statements are well known and widely utilized. Although the general statements are surely also well known, there does not seem to be any convenient reference, and so it seems useful to include them. The correspondence between holomorphic differentials and singular flat metrics is discussed in more detail in sections \ref{khodgesection} and \ref{singularmetricsection}, where it is explained how it yields a diffeomorphism equivariant action of $GL^{+}(2, \rea)$ on the space of such differentials. While not much use is later made of this material, it motivates some estimates proved in section \ref{constructionsection} and when coupled with Theorem \ref{summarytheorem} and compared with usual Teichm\"uller theory, suggests many questions. \subsection{} To a Young diagram the boxes of which are labeled with distinct indices corresponds the irreducible $GL(n, \rea)$ module comprising tensors skew-symmetric in the indices in a given column of the Young diagram and vanishing when skew-symmetrized over the indices in a given column and any index in any box to the right of the given column. 
The irreducible representations of the subgroup $CO(h)$ of $GL(n, \rea)$ acting conformally with respect to a fixed metric $h$ on $\rea^{n}$ are described in \cite{Weyl}. The subspace of an irreducible $GL(n, \rea)$ representation comprising tensors completely trace-free with respect to $h$ is a representation of $CO(h)$. Lemma \ref{weylcriterion} will be invoked repeatedly. \begin{lemma}[\cite{Weyl}, Theorem $5.7.$A]\label{weylcriterion} The $CO(h)$-module of covariant trace-free tensors on $\rea^{n}$ having symmetries corresponding to a Young diagram is trivial if the sum of the lengths of the first two columns of the Young diagram is greater than $n$. \end{lemma} For instance, Lemma \ref{weylcriterion} implies that the usual conformal Weyl tensor of a Riemannian metric vanishes identically on a manifold of dimension at most $3$. If there is given a fiberwise metric on the vector bundle $E$ then $S^{k}_{0}(E)$ denotes the subbundle of $S^{k}(E)$ comprising elements trace-free with respect to the given metric. The convention is that $S^{1}_{0}(E) = E$; this corresponds to regarding the trace as the zero map on vectors. \begin{lemma}\label{twodablemma} Let $h_{ij}$ be a constant metric on $\rea^{2}$, and for $k \geq 1$ let $A, B \in S^{k}_{0}(\rea^{2})$. Then \begin{align}\label{twodab} 2A_{a_{1}\dots a_{k}(i}B_{j)}\,^{a_{1}\dots a_{k}} = A_{a_{1}\dots a_{k}}B^{a_{1}\dots a_{k}}h_{ij}, \end{align} in which indices are raised and lowered with $h_{ij}$ and its inverse $h^{ij}$. Let $X^{i}$ be a vector. Then $2|\imt(X)B|^{2}_{h} = |X|_{h}^{2}|B|_{h}^{2}$. In particular if $h_{ij}$ has definite signature the equations $|B|_{h}^{2}|X|_{h}^{2} = 0$ and $X^{p}B_{pi_{1}\dots i_{k-1}} = 0$ are equivalent. \end{lemma} \begin{proof} The tensor \begin{align*} \om_{ija_{1}\dots a_{k}} = h_{i(a_{1}}A_{a_{2}\dots a_{k})j} + h_{j(a_{1}}A_{a_{2}\dots a_{k})i} - h_{ij}A_{a_{1}\dots a_{k}} - h_{(a_{1}a_{2}}A_{a_{3}\dots a_{k})ij} \end{align*} is completely trace-free.
As $\om_{ija_{1}\dots a_{k}} = \om_{(ij)(a_{1}\dots a_{k})}$ and $\om_{i(ja_{1}\dots a_{k})} = 0$, it follows from Lemma \ref{weylcriterion} that $\om = 0$. Hence $2A_{a_{1}\dots a_{k-1}(i}B_{j)}\,^{a_{1}\dots a_{k-1}} - A_{a_{1}\dots a_{k}}B^{a_{1}\dots a_{k}}h_{ij} = B^{a_{1}\dots a_{k}}\om_{ija_{1}\dots a_{k}} = 0$. By \eqref{twodab} there holds $2|\imt(X)B|^{2}_{h} = 2X^{p}B_{pi_{1}\dots i_{k-1}}X^{q}B_{q}\,^{i_{1}\dots i_{k-1}} = |X|_{h}^{2}|B|_{h}^{2}$. \end{proof} \subsection{} On a Riemannian manifold $(M, h, D)$, a symmetric tensor $\si \in \Ga(S^{k}(\ctm))$ is \textbf{Codazzi} if $D\si \in \Ga(S^{k+1}(\ctm))$. From \begin{align}\label{dsym2} &(k+1)\left(D_{i}\si_{i_{1}\dots i_{k}} - D_{(i}\si_{i_{1}\dots i_{k})}\right) = 2\sum_{s = 1}^{k}D_{[i}\si_{i_{s}]i_{1}\dots \hat{i}_{s}\dots i_{k}}, \end{align} (in which a $\hat{\,}$ denotes the omission of the index), it is evident that $\si$ is Codazzi if and only if $D_{[i_{1}}\si_{i_{2}]\dots i_{k+1}} = 0$. Write $\div_{h}$ (or simply $\div$) for the divergence operator $\div_{h}(\si)_{i_{1}\dots i_{k}} \defeq D^{p}\si_{p i_{1}\dots i_{k}}$ determined by $h$. For any tensor $\si$ let $\tf_{h}(\si)$ be its trace-free part. Let $\clie_{h}:\Ga(\symk) \to \Ga(\symkp)$ be the formal adjoint of the composition $\div_{h} \circ \tf_{h}$ with respect to the pairing of sections of $\symkp$ determined by integration. Explicitly, for $\si \in \Ga(\symkt)$ and $M$ a surface, \begin{align}\label{cliedefined} \clie_{h}(\si)_{i_{1}\dots i_{k+1}} = D_{(i_{1}}\si_{i_{2}\dots i_{k+1})} - \tfrac{1}{2}h_{(i_{1}i_{2}}\div_{h}(\si)_{i_{3}\dots i_{k+1})}.
\end{align} \begin{lemma}\label{codazzilemma} On a Riemannian surface $(M, h)$, for $\si \in \Ga(S^{k}_{0}(\ctm))$ there hold \begin{align}\label{dsi} D_{i}\si_{i_{1}\dots i_{k}} &= \clie_{h}(\si)_{ii_{1}\dots i_{k}} + h_{i(i_{1}}\div_{h}(\si)_{i_{2}\dots i_{k})} - \tfrac{1}{2}h_{(i_{1}i_{2}}\div_{h}(\si)_{i_{3}\dots i_{k})i}, \qquad&& \text{if}\,\, k > 1,\\ \label{dsi2}D_{i}\si_{j} & = \clie_{h}(\si)_{ij} + \tfrac{1}{2}d\si_{ij} + \tfrac{1}{2}\div_{h}(\si)h_{ij}, \qquad&& \text{if}\,\, k = 1. \end{align} For $k > 1$, the following are equivalent for $\si \in \Ga(\symkt)$: (1) $\si$ is Codazzi; (2) $\si$ is divergence-free; (3) $D\si = \clie_{h}(\si)$. \end{lemma} \begin{proof} The difference of the two sides of \eqref{dsi} is trace-free and its complete symmetrization vanishes, so it lies in the irreducible $O(h)$-module corresponding to a Young diagram with $k$ boxes in its first row and $1$ box in its second row. By Lemma \ref{weylcriterion} this module is trivial if $k > 1$. When $k > 1$, for $\si \in \Ga(S^{k}_{0}(\ctm))$ it follows from \eqref{dsi} that $D\si \in \Ga(S^{k+1}_{0}(\ctm))$ if and only if $\div_{h}(\si) = 0$. This also follows by tracing the identity \begin{align}\label{2dci} \begin{split} 2D_{[a}\si_{b]i_{1}\dots i_{k-1}}& = h_{a(i_{1}}\div_{h}(\si)_{i_{2}\dots i_{k-1})b} -h_{b(i_{1}}\div_{h}(\si)_{i_{2}\dots i_{k-1})a}, \end{split} \end{align} which also follows from Lemma \ref{weylcriterion}. From \eqref{dsi} there follows $\div_{h}(\si) = 0$ if and only if $D\si =\clie_{h}(\si)$. \end{proof} On a surface, if $\tilde{h} = fh$ for a positive function $f$, then $f\div_{\tilde{h}}(\si) = \div_{h}(\si)$, so that the space $\ker \div_{h} \cap \Ga(\symkt)$ of trace-free Codazzi tensors depends only on the conformal class of $h$. As $\clie_{e^{\phi}h}(e^{k\phi}\si)_{i_{1}\dots i_{k+1}} = e^{k\phi}\clie_{h}(\si)_{i_{1}\dots i_{k+1}}$, the subspace $\Ga(\symkt)\cap \ker \clie$ is conformally invariant.
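For example, with respect to the standard flat metric $h$ on $\rea^{2}$ and the one-form $\si = x\,dy$, for which $\div_{h}(\si) = \tfrac{\pr}{\pr x}(0) + \tfrac{\pr}{\pr y}(x) = 0$, the decomposition \eqref{dsi2} reads
\begin{align*}
D\si = dx \tensor dy = \underbrace{\tfrac{1}{2}(dx \tensor dy + dy \tensor dx)}_{\clie_{h}(\si)} + \underbrace{\tfrac{1}{2}(dx \tensor dy - dy \tensor dx)}_{\tfrac{1}{2}d\si} + \underbrace{0}_{\tfrac{1}{2}\div_{h}(\si)h},
\end{align*}
exhibiting $\clie_{h}(\si)$ as the trace-free symmetric part of $D\si$.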
The operator $\clie_{h}^{\sharp}:\Ga(\symktv) \to \Ga(\symkptv)$ defined by $\clie_{h}^{\sharp}(X) = \clie_{h}(X^{\flat})^{\sharp}$ satisfies $e^{\phi}\clie^{\sharp}_{e^{\phi}h}(X) = \clie^{\sharp}_{h}(X)$. For a vector field $X$ there holds $2\clie_{h}(X^{\flat})_{ij} = \tf(\lie_{X}h)_{ij}$, which motivates the notation resembling that for the Lie derivative. Moreover, this shows that $\Ga(TM) \cap \ker \clie_{h}^{\sharp}$ comprises the conformal Killing fields. For this reason the sections of $\ker \clie^{\sharp} \cap \Ga(\symktv)$ are called \textbf{conformal Killing tensors}. Lemma \ref{kdifferentialslemma} explains the conformal invariance on surfaces of trace-free Codazzi and conformal Killing tensors by showing that they are exactly the real parts of holomorphic differentials. \subsection{} An almost complex structure $J$ on a real vector space $\ste$ determines an almost complex structure $\bJ$ on $\tensor^{k}(\sted)$ defined by $\bJ(B)_{i_{1}\dots i_{k}} = J_{i_{1}}\,^{p}B_{pi_{2}\dots i_{k}}$. \begin{lemma}\label{apolaritylemma} Fix a two-dimensional real vector space $\ste$ with an almost complex structure $J$. Let $k > 1$. The map $B \to \bJ(B)$ is a complex structure on the $2$-dimensional real vector space $S^{k}_{0}(\sted)$. For $B \in S^{k}_{0}(\sted)$ there holds $2B^{(k, 0)} = B - \j\bJ(B)$, so that the $(1,0)$ part of $B$ \textnormal{qua} element of $(S^{k}_{0}(\sted), \bJ)$ equals the $(k, 0)$ part of $B$ relative to $J$. There results a complex linear isomorphism between the complexification $S^{k}_{0}(\sted) \tensor_{\rea}\com$ and $\cano^{k} \oplus \bar{\cano}^{k}$ such that the $\pm \j$ eigenspaces of $\bJ$ on $S^{k}_{0}(\sted)\tensor_{\rea}\com$ are identified respectively with $\cano^{k}$ and $\bar{\cano}^{k}$. \end{lemma} \begin{proof} Let $h$ be a definite signature metric on $\ste$ compatible with $J$. Let $B \in S^{k}(\sted)$. If $X \in \ste$ is non-zero, then $\{X, JX\}$ is an $h$-orthogonal basis of $\ste$.
From the evidently equivalent identities \begin{align}\label{jb0} &\bJ(B)_{[i_{1}i_{2}]i_{3}\dots i_{k}} = J_{[i_{1}}\,^{p}B_{i_{2}]i_{3}\dots i_{k}p} = 0, & &J_{i_{1}}\,^{p}J_{i_{2}}\,^{q}B_{pqi_{3}\dots i_{k}} = -B_{i_{1}\dots i_{k}},& \end{align} it follows that $B \in S^{k}_{0}(\sted)$ if and only if $\bj(B)$ is completely symmetric. Since $B = -\bj(\bj(B))$, the same statement with the roles of $B$ and $\bj(B)$ interchanged shows that in this case $\bj(B) \in S^{k}_{0}(\sted)$. Thus $B \in S^{k}_{0}(\sted)$ if and only if $\bj(B) \in S^{k}_{0}(\sted)$. These conditions are obviously equivalent to the vanishing of $B^{(p, k-p)}$ whenever $0< p< k$. For $\be \in \cano^{k}$, $\bj(\re \be) + \j\bj(\im \be) = \bj(\be) = \j \be = -\im \be + \j \re \be$, so $\be = \re \be - \j\bj(\re \be)$. Since $\be$ is symmetric so are $\re \be$ and $\bj(\re \be)$, and hence, by the preceding, $\re \be$ and $\im \be$ are in $S^{k}_{0}(\sted)$. Since $\bj$ preserves $S^{k}_{0}(\sted)$, it is a complex structure on $S^{k}_{0}(\sted)$. This means that $S^{k}_{0}(\sted)\tensor_{\rea}\com$ decomposes into $(1,0)$ and $(0,1)$ parts with respect to the action of $\bJ$ and so for $B \in S^{k}_{0}(\sted)$ it makes sense to speak of its $(1,0)$ part $\tfrac{1}{2}(B - \j \bJ(B))$. For $Z \in \ste^{0, 1}$ there holds $\imt(Z)\bj(B) = \imt(J(Z))B = -\j\imt(Z)B$, so $\imt(Z)(B - \j\bj(B)) = 0$. Hence $B - \j \bJ(B) \in \cano^{k}$ if $B\in S^{k}_{0}(\sted)$. The map $B \to B^{(k, 0)}$ also sends $S^{k}_{0}(\sted)$ to $\cano^{k}$, and it is claimed that $2B^{(k, 0)} = B - \j \bj(B)$.
By the complete symmetry of $B$ there holds \begin{align}\label{rs1} \begin{split} &2^{k}B^{(k,0)}(X, \dots, X) = B(X-\j JX, \dots, X - \j JX) = \sum_{s = 0}^{k}(-\j)^{s}\tbinom{k}{s}B(X, \dots, X, \underbrace{JX, \dots, JX}_{\text{$s$ times}})\\ &= \sum_{s = 0}^{\llcorner k/2 \lrcorner}(-1)^{s}\tbinom{k}{2s}B(X, \dots, X, \underbrace{JX, \dots, JX}_{\text{$2s$ times}}) +\j \sum_{s = 1}^{\ulcorner k/2 \urcorner}(-1)^{s}\tbinom{k}{2s-1}B(X, \dots, X, \underbrace{JX, \dots, JX}_{\text{$2s-1$ times}}). \end{split} \end{align} Using the second equation of \eqref{jb0} in \eqref{rs1} yields \begin{align}\label{rs3} \begin{split} &2^{k}B^{(k, 0)}(X, \dots, X) = \left(\sum_{s = 0}^{\llcorner k/2 \lrcorner}\tbinom{k}{2s}\right)B(X, \dots, X) -\j \left(\sum_{s = 1}^{\ulcorner k/2 \urcorner}\tbinom{k}{2s-1}\right)\bJ(B)(X, \dots, X)\\ & = 2^{k-1}B(X, \dots, X) - \j 2^{k-1}\bJ(B)(X, \dots, X). \end{split} \end{align} Polarizing \eqref{rs3} shows that for $B \in S^{k}_{0}(\sted)$ there holds $2B^{(k,0)} = B - \j\bJ(B)$, so that $B$ is the real part of an element of $\cano^{k}$, and, similarly, for $\be \in \cano^{k}$, there holds $\be = 2(\re \be)^{(k,0)}$. \end{proof} It follows immediately from Lemma \ref{apolaritylemma} that on a Riemann surface $(M, [h], J)$, a smooth section $B$ of $\symk$ (resp. $\symkv$) is the real part of a smooth section of $\cano^{k}$ (resp. $\cano^{-k}$) if and only if it is completely $[h]$-trace-free, in which case $2B^{(k, 0)} = B - \j \bJ(B)$ and $\bJ(B)$ is also in $\Ga(S^{k}_{0}(\ctm))$. \subsection{} It follows from the identities $(\lie_{X}J)_{\bar{\al}}\,^{\be} = 2\j \delbar_{\bar{\al}} X^{\be}$ and $(\lie_{X}J)_{\al\be} = 2\j D_{\al}X^{\flat}_{\be}$ that on a Riemann surface $(M, J)$ the following conditions on $X \in \Ga(TM)$ are equivalent: $X$ is conformal Killing; $X$ is an infinitesimal automorphism of $J$; and $X^{(1,0)}$ is holomorphic.
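To illustrate Lemma \ref{apolaritylemma} concretely, let $k = 2$ and let $z = x + \j y$ be a local holomorphic coordinate, so that $dz^{2} = (dx^{2} - dy^{2}) + \j(dx \tensor dy + dy \tensor dx)$. For $B = \re(dz^{2}) = dx^{2} - dy^{2}$ there hold (with the conventions for the action of $J$ on covectors used in the preceding proof; the opposite convention reverses the signs)
\begin{align*}
&\bJ(B) = -(dx \tensor dy + dy \tensor dx) = -\im(dz^{2}),& &2B^{(2,0)} = B - \j\bJ(B) = dz^{2},&
\end{align*}
so that $B$ and $\bJ(B)$ span $S^{2}_{0}(\ctm)$ pointwise and $\bJ(\bJ(B)) = -B$.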
Since by the Riemann-Roch theorem a compact Riemann surface of genus greater than $1$ admits no non-zero holomorphic vector fields, on such a surface every conformal Killing vector field is identically $0$. These observations are generalized to sections of $S^{k}_{0}(\ctm)$ and $S^{k}_{0}(TM)$ by Lemma \ref{kdifferentialslemma}. \begin{lemma}\label{kdifferentialslemma} Let $(M, [h], J)$ be a Riemann surface and $k > 0$. \begin{list}{(\arabic{enumi}).}{\usecounter{enumi}} \renewcommand{\theenumi}{(\arabic{enumi})} \item\label{cd1} If $k > 1$, a section $B \in \Ga(S^{k}(\ctm))$ is the real part of a holomorphic section of $\cano^{k}$ if and only if it is a trace-free Codazzi tensor. In this case $(DB)^{(k+1, 0)} = DB^{(k, 0)}$. \item\label{cd2} A one-form $B$ is the real part of a holomorphic section of $\cano^{1}$ if and only if it is closed and co-closed. In this case $(DB)^{(2, 0)} = DB^{(1, 0)}$. \item\label{cd3} A section $B \in \Ga(S^{k}(TM))$ is the real part of a holomorphic section of $\cano^{-k}$ if and only if it is a conformal Killing tensor. \end{list} \end{lemma} \begin{proof}[Proof of Lemma \ref{kdifferentialslemma}] By Lemma \ref{apolaritylemma}, $B \in \Ga(S^{k}(\ctm))$ is the real part of a smooth section of $\cano^{k}$ if and only if $B = 2\re B^{(k, 0)}$, in which case $B \in \Ga(S^{k}_{0}(\ctm))$. For $B \in \Ga(S^{k}_{0}(\ctm))$, since $\clie_{h}(B)$ and $\div_{h}(B)$ are $[h]$-trace-free, there follow from \eqref{dsi} and \eqref{dsi2}, \begin{align} \label{delb} & D^{1,0} B^{(k,0)} = \clie_{h}(B)^{(k+1, 0)}, & & \delbar B^{(k, 0)} = \overline{h^{(1,1)}}\tensor \div_{h}(B)^{(k-1,0)},& &\text{if}\,\, k > 1,\\ \label{db2} & D^{1,0} B^{(1, 0)} = \clie_{h}(B)^{(2, 0)}, & & 2\delbar B^{(1,0)} = dB + \div_{h}(B)\overline{h^{(1,1)}},& &\text{if}\,\, k = 1.
\end{align} Comparing \eqref{delb} and \eqref{db2} with \eqref{dsi} and \eqref{dsi2} of Lemma \ref{codazzilemma} shows \ref{cd1} and \ref{cd2}. In the cases \ref{cd1} and \ref{cd2}, $DB \in \Ga(S^{k+1}_{0}(\ctm))$ and it follows from the definition of $\bj$ that $\bj DB = D\bj B$. By Lemma \ref{apolaritylemma}, $2B^{(k, 0)} = B - \j \bj B$, so $2DB^{(k, 0)} = DB - \j D\bj B = DB - \j \bj D B = 2(DB)^{(k+1, 0)}$, the last equality also by Lemma \ref{apolaritylemma}. Again by Lemma \ref{apolaritylemma} if $B\in \Ga(S^{k}(TM))$ is to be the real part of a section of $\cano^{-k}$ it must be trace-free, in which case $2B^{(k, 0)} = B - \j\bJ(B)$, so that $B$ is the real part of a holomorphic section of $\cano^{-k}$ if and only if $\delbar B^{(k, 0)} = 0$. Since raising and lowering indices interchanges type, it follows from \eqref{delb} that $\delbar B^{(k, 0)} = 0$ if and only if $0 = \delbar B^{\flat\,(0, k)} = \clie_{h}(B^{\flat})^{(0, k+1)}$. Hence, $B$ is the real part of a holomorphic section if and only if $\clie_{h}(B^{\flat}) = 0$, or, what is by definition the same, $B$ is a conformal Killing tensor. \end{proof} \begin{lemma}\label{flatmetriclemma} Let $(M, [h], J)$ be a Riemann surface and $\si$ the real part of a holomorphic section of $\cano^{k}$ which is not identically zero. View $\si$ as a tensor of rank $|k|$, covariant or contravariant according to whether $k$ is positive or negative. For any $h \in [h]$ there hold \begin{align}\label{keromlap} &2\lap_{h}\si = k\sR_{h} \si,& &\lap_{h}|\si|_{h}^{2} = 2|D\si|^{2}_{h}+ k\sR_{h} |\si|_{h}^{2}. \end{align} On the subset $M^{\ast} = \{|\si|_{h}^{2} \neq 0\}$, which is the complement of a discrete set of points, there hold \begin{align} \label{kato} &2|d|\si||_{h}^{2} = |D\si|_{h}^{2},& &\lap_{h}\log|\si|^{2}_{h} = k \sR_{h}. \end{align} When $k \neq 0$, the metric $\sth_{ij} \defeq |\si|_{h}^{2/k}h_{ij}$ on $M^{\ast}$ is flat. If $M$ is a torus then $M^{\ast} = M$.
\end{lemma} \begin{proof} Tracing the (K\"ahler) identity $\sR_{ijp}\,^{l}J_{k}\,^{p} = \sR_{ijk}\,^{p}J_{p}\,^{l}$ and using the algebraic Bianchi identity yields $\omega^{pq}\sR_{pqij} = -2J_{i}\,^{p}\sR_{pj}$, from which follows $\sR_{\al\bar{\be}\ga}\,^{\ga} = \sR_{\al\bar{\be}} = (\sR_{h}/2)h_{\al\bar{\be}}$. For $s \in \Ga(\cano^{k})$ there results $2D_{[\al}D_{\bar{\be}]}s = -kR_{\al\bar{\be}\ga}\,^{\ga}s = - (k/2)\sR_{h} h_{\al\bar{\be}}s$. View $\si$ as the real part of a holomorphic section $s$ of $\cano^{k}$. That $s$ is holomorphic implies the second equality of \begin{align*} \begin{split} 2\lap_{h}s &= 2h^{ij}D_{i}D_{j}s = 2h^{\bar{\al}\be}D_{\bar{\al}}D_{\be}s = h^{\bar{\al}\be}\left(2D_{\be}D_{\bar{\al}}s + k \sR_{h}h_{\be\bar{\al}}s\right) = k\sR_{h} s. \end{split} \end{align*} Taking the real part shows the first equation of \eqref{keromlap}, from which the second equation of \eqref{keromlap} follows. By \ref{cd1} and \ref{cd2} of Lemma \ref{kdifferentialslemma}, $(D\si)^{(k+1, 0)} = D^{1,0}s = Ds$, and so $|D\si|_{h}^{2} = 2|Ds|_{h}^{2}$. From $d|\si|_{h}^{2} = 2D(s\bar{s}) = 2(\bar{s}Ds + sD\bar{s})$ there results $|d|\si|^{2}|^{2} = 8|s|^{2}|Ds|^{2} = 2|\si|^{2}|D\si|^{2}$, which proves the first equality of \eqref{kato}. On $M^{\ast}$ there holds by \eqref{keromlap} and the first equality of \eqref{kato}, \begin{align*} \lap_{h}\log|\si|^{2}_{h} = |\si|^{-2}_{h}\left(\lap_{h}|\si|_{h}^{2} - 4|d|\si||_{h}^{2}\right) = |\si|^{-2}_{h}\left(\lap_{h}|\si|_{h}^{2} - 2|D\si|_{h}^{2}\right) = k\sR_{h}. \end{align*} That $\sth$ is flat follows from \eqref{conformalscalardiff} and the second equality of \eqref{kato}. If $M$ is a torus, then $\si$ is parallel for a flat representative of $[h]$, so has constant norm, which is not zero, as $\si$ is not identically zero, and so $M^{\ast} = M$. \end{proof} Note that $\sth$ does not depend on the choice of $h \in [h]$, and is determined by the requirement that $|\si|_{\sth}^{2} = 1$.
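For example, on a torus $M = \com/\Lambda$ equipped with the flat metric $h = |dz|^{2}$, so that $\sR_{h} = 0$, every holomorphic $k$-differential has the form $c\,dz^{k}$ for a constant $c \in \com$, and for $\si = \re(c\,dz^{k})$ with $c \neq 0$ there hold
\begin{align*}
&D\si = 0,& &|\si|_{h}^{2} \equiv \text{const} \neq 0,& &M^{\ast} = M,&
\end{align*}
so that both equations of \eqref{keromlap} reduce to $0 = 0$ and $\sth$ is a constant multiple of $h$, as in the final claims of Lemma \ref{flatmetriclemma}.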
Corollary \ref{rrcorollary} is the specialization to Riemann surfaces of the results of \cite{Kobayashi-holomorphicsymmetric}. \begin{corollary}\label{rrcorollary} If $M$ is a sphere then there is no non-trivial trace-free Codazzi tensor nor any non-trivial harmonic one-form. If $M$ is a torus then any trace-free Codazzi tensor, harmonic one-form, or conformal Killing tensor is parallel with respect to a flat metric conformal to $h$. If a Riemann surface $(M, [h])$ is compact with genus $g > 1$ then any conformal Killing tensor is identically $0$. \end{corollary} \begin{proof} These claims follow either from Riemann-Roch together with Lemma \ref{kdifferentialslemma}, or from the maximum principle applied to \eqref{keromlap} for a constant scalar curvature representative $h \in [h]$. \end{proof} \subsection{}\label{khodgesection} For an oriented smooth surface $M$, let $\diff(M)$ be its group of diffeomorphisms viewed as a topological group in the $\cinf$ compact-open topology, and let $\diff^{+}(M)$ be the subgroup of orientation-preserving diffeomorphisms. The connected component $\diff_{0}(M)$ of the identity of $\diff(M)$ is evidently contained in $\diff^{+}(M)$, and comprises the diffeomorphisms of $M$ smoothly isotopic to the identity (see Corollary $1.2.2$ of \cite{Banyaga}). Let $\jpremod(M)$ be the space of complex structures on $M$ inducing the given orientation, with the topology of $\cinf$-convergence. The group $\diff^{+}(M)$ acts on $\jpremod(M)$ by pullback and the quotient $\jpremod(M)/\diff_{0}(M)$ is the \textbf{Teichm\"uller space} $\teich(M)$. The oriented mapping class group $\map^{+}(M) \defeq \diff^{+}(M)/\diff_{0}(M)$ acts on $\teich(M)$ with quotient $\jpremod(M)/\diff^{+}(M)$, the \textbf{moduli space of complex structures} on $M$. For background on these spaces from a point of view compatible with that here see \cite{Earle-Eells} and \cite{Wolf-teichmuller}.
Let $\prekhodge(M)$ be the space comprising pairs $(J, B) \in \jpremod(M)\times \Ga(S^{k}(\ctm))$ such that with respect to the decomposition of tensors by type determined by $J$, $B^{(k,0)}$ is $\delbar_{J}$-holomorphic. The group $\diff^{+}(M)$ acts by pullback on $\prekhodge(M)$. Denote by $\khodge(M) = \prekhodge(M)/\diff_{0}(M)$ the quotient by the subgroup $\diff_{0}(M)$. The projection $\prekhodge(M) \to \jpremod(M)$ commutes with the $\diff^{+}(M)$ action so descends to a projection $\rho:\khodge(M) \to \teich(M)$ sending the equivalence class $[J, B]$ to the equivalence class $[J]$. If a representative $J \in [J]$ is chosen then the fiber $\rho^{-1}([J])$ is identified with the space $H^{0}(M, \cano_{J}^{k})$ of $J$-holomorphic $k$-differentials. The space $\khodge(M)$ also will be referred to as \textit{the vector bundle over the Teichm\"uller space of $M$ the fiber of which over $[h]$ comprises the $k$-holomorphic differentials with respect to the complex structure induced by $[h]$}. (This makes sense for negative $k$ if $S^{k}(\ctm)$ is replaced by $S^{|k|}(TM)$). \subsection{}\label{singularmetricsection} Here are recalled some aspects of the correspondence between holomorphic differentials and singular flat metrics which motivate the discussion in section \ref{modulisection} of the action of $GL^{+}(2, \rea)$ on the deformation space of strictly convex flat real projective structures. Related background can be found in many places, e.g. \cite{Masur-Tabachnikov}, \cite{Troyanov-coniques}, \cite{Troyanov-moduli}, or \cite{Viana-ergodic}. For $\tau > 0$, the metric $dr^{2} + r^{2}dt^{2}$ on the cone $V_{\tau} = \{(r, t): r \geq 0, t \in [0, \tau)\}$ is Euclidean away from the vertex (where $r = 0$), where it is said to have a \textbf{conical singularity} of \textbf{cone angle} $\tau$ (it is more common to write the metric as $dr^{2} + (\tfrac{\tau}{2\pi} r\, d\theta)^{2}$ on $\{(r, \theta): r \geq 0, \theta \in [0, 2\pi)\}$, where $t = \tfrac{\tau}{2\pi}\theta$).
The change of variables $z = ((b+1)r)^{1/(b + 1)}e^{\j t/(b+1)}$ identifies the cone $V_{2\pi(b + 1)}$ isometrically with $\com$ equipped with the singular metric $|z|^{2b}|dz|^{2}$; indeed $|z|^{2b}|dz|^{2} = |d(z^{b+1}/(b+1))|^{2}$ and $z^{b+1}/(b+1) = re^{\j t}$. If there are integers $\be$ and $k$ such that $b = \be/k > -1$, this metric is that determined as in Lemma \ref{flatmetriclemma} by the holomorphic $k$-differential $z^{\be}dz^{k}$. Conversely, this shows how to associate to a singularity with cone angle $2\pi(\be/k + 1)$ a holomorphic $k$-differential with a zero of order $\be$ at the singularity. By a \textbf{flat Euclidean structure} is meant an atlas of charts in which the transition functions are restrictions of Euclidean isometries. Such a structure determines a positive homothety class of flat Riemannian metrics, and an underlying flat real affine structure. If the transition functions can be chosen to be orientation preserving, then it determines also a corresponding complex structure, and flat complex affine structure. For a positive integer $k$, a \textbf{$1/k$-translation surface} is a compact Riemann surface $M$ equipped with a flat Euclidean structure on the complement $M^{\ast}$ in $M$ of a finite subset $\sing(M) \subset M$ generated by an atlas $\{z_{i}:U_{i} \to \com\}$ for which the transition functions have the form $z_{i} = e^{2\pi\j m_{ij}/k} z_{j} + u_{ij}$ for some integers $0 \leq m_{ij} \leq k-1$ and some complex numbers $u_{ij}$. In particular, the linear part of the affine holonomy around each $p \in \sing(M)$ is contained in the finite subgroup $\integer/k\integer \subset SO(2)$. In the cases $k = 1$ and $k = 2$, such a surface is called a \textbf{translation} or \textbf{half-translation} surface, respectively, which explains the terminology. The $k$-differentials $dz_{i}^{k}$ and $dz_{j}^{k}$ agree on the overlaps $U_{i}\cap U_{j}$, so patch together to give a holomorphic $k$-differential on $M^{\ast}$.
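The simplest example is a flat torus $M = \com/\Lambda$, with charts the local inverses of the projection $\com \to \com/\Lambda$: the transition functions are the translations
\begin{align*}
z_{i} = z_{j} + u_{ij}, \qquad u_{ij} \in \Lambda,
\end{align*}
so $M$ is a translation surface ($k = 1$) with $\sing(M) = \emptyset$, and the differentials $dz_{i}$ patch together to the holomorphic one-form induced by $dz$, for which $M^{\ast} = M$.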
A point of $M^{\ast}$ will be called a \textbf{regular} point of $\si$, while points in $\sing(M)$ will be called \textbf{singular}. From the form of the transition functions it follows that in a neighborhood of $p \in \sing(M)$ there is a chart in which a flat metric $\sth$ representing the flat Euclidean structure is isometric to a conical singularity of cone angle $2\pi(\be/k + 1) > 0$ for some integer $\be$. Then the holomorphic $k$-differential constructed on $M^{\ast}$ extends to the singular points via the local model described in the preceding paragraph. Let $(M, [h], J)$ be a compact Riemann surface of genus $g > 1$ and, for $k > 0$, let $\si = B^{(k,0)}$ be a non-trivial holomorphic $k$-differential. By Lemma \ref{flatmetriclemma}, $\sth = |B|^{2/k}_{h}h$, which does not depend on the choice of $h \in [h]$, is a flat metric on the complement $M^{\ast}$ of the zero set of $\si$. Around a regular point $p_{0}$ choose a local holomorphic coordinate $w$ (such that $w = 0$ corresponds to $p_{0}$) and write $\si = \phi(w) dw^{k}$ for a holomorphic function $\phi(w)$. Choose a branch of $\phi^{1/k}(w)$ near $w = 0$ and define a new coordinate, said to be \textbf{adapted} to $\si$, by $z(p) = \int_{p_{0}}^{p}\phi(w)^{1/k}\,dw$. In the $z$ coordinate, $\si = dz^{k}$. If $\tilde{z}$ is another coordinate constructed in this way in a neighborhood of $p_{0}$ then, since $dz^{k} = d\tilde{z}^{k}$ on the overlap, there are an integer $0 \leq m \leq k-1$ and a complex number $u$ such that $\tilde{z} = e^{2\pi\j m/k} z + u$. Consequently the local charts constructed in this way determine on $M^{\ast}$ a flat Euclidean structure, which makes $M$ a $1/k$-translation surface, and for which the underlying homothety class of flat Riemannian metrics is generated by the metric $\sth$.
In a neighborhood of a singular point $p_{0}$ of $\si$ choose as before a local holomorphic coordinate $w$ and write $\si(w) = \phi(w)dw^{k}$, where now it is supposed that $\phi(w)$ is holomorphic with a zero of order $\be$ at $0$. Define a coordinate $z(p)$ adapted to $\si$ by \begin{align*} z(p) = \left(\tfrac{\be + k}{k}\int_{p_{0}}^{p}\phi(w)^{1/k}\,dw\right)^{\tfrac{k}{\be + k}}. \end{align*} The coordinate $z$ is determined up to multiplication by a $(\be + k)$th root of unity, and $\si = \phi(w)dw^{k} = z^{\be}dz^{k} = (\tfrac{k}{\be+k}d(z^{\be/k + 1}))^{k}$. In an adapted coordinate around a singular point $p$ of $\si$ of order $\be$ the flat metric $\sth$ has the form $|z|^{2\be/k}|dz|^{2}$, and so $p$ is a cone point of angle $2\pi(\be/k + 1)$, and the linear holonomy of $\sth$ around $p$ is a rotation of angle $2\pi \be/k$. By Riemann-Roch the sum of the orders $\be(p)$ of the zeroes $p$ of $\si$ is $2k(g-1)$, and so the cone angles $\tau(p) = 2\pi(\be(p)/k + 1)$ of $\sth$ satisfy the relation $\sum_{p \in M}(\tau(p)/2\pi - 1) = 2(g-1)$. The same conclusion follows from a version of the Gau\ss-Bonnet theorem for metrics with conic singularities (see \cite{Troyanov-moduli}). For example, a quadratic differential ($k = 2$) on a surface of genus $2$ with four simple zeroes has four cone points of angle $3\pi$, and $\sum_{p}(\tau(p)/2\pi - 1) = 4 \cdot \tfrac{1}{2} = 2 = 2(g-1)$. The preceding establishes a bijective correspondence between $1/k$-translation surfaces and Riemann surfaces with a holomorphic $k$-differential. A pair $(J, \si)$ for which $\si$ is a $J$-holomorphic $k$-differential generates a complex curve in Teichm\"uller space, and, moreover, in $\khodge(M)$, as follows. The curve in $\teich(M)$ comprises those conformal structures generated by singular flat metrics real affinely equivalent to the flat metric $\sth$ determined by $(J, \si)$. The evolution of the $k$-differential $\si$ along this curve in $\teich(M)$ is determined tautologically by the requirement that the singular flat metric associated to the evolved $k$-differential be a member of a conformal structure representing the point in the curve over which it lies.
These curves should be relevant to understanding compactifications of the space of strictly convex flat real projective structures on a compact surface of genus $g > 1$ (see section \ref{modulisection}). These curves can be described analytically in terms of an action of the group $GL^{+}(2, \rea)$ on $\prekhodge(M)$, with respect to which they are images of the hyperbolic disk $GL^{+}(2, \rea)/C^{+}O(2)$. The structure of this $GL^{+}(2, \rea)$ action on $\prekhodge(M)$ has been intensively studied in the $k = 1, 2$ cases; see \cite{Kerckhoff-Masur-Smillie}, \cite{Masur-Tabachnikov}, and \cite{Herrlich-Schmithusen} for background and references. An identification of real vector spaces $\com \simeq \rea^{2}$ is fixed so that $g = \left(\begin{smallmatrix} a & b\\ c & d \end{smallmatrix}\right) \in GL^{+}(2, \rea)$ acts on $z = x + \j y\in \com$ real linearly by $g \cdot z = (ax + by) + \j(cx + dy)$. The complex field $\comt$ acting by multiplication is identified with the oriented conformal subgroup $C^{+}O(2)$. That is, $z = re^{\j\theta}$ corresponds to $a = d = r\cos\theta$ and $c = -b = r\sin \theta$. Given $(J, \si) \in \prekhodge(M)$, let $\{z_{i}\}$ be an atlas on $M^{\ast}$ of $J$-holomorphic charts adapted to $\si$. This atlas makes $M^{\ast}$ a flat Euclidean manifold with linear holonomy in $\integer/k\integer$. For $g \in GL^{+}(2, \rea)$ the collection $\{\tilde{z}_{i} = g z_{i}\}$ determines a flat Euclidean structure which also has holonomy in $\integer/k\integer$, and hence a structure of $1/k$-translation surface on $M$, the cone angles of which are the same as those of the original structure determined by $(J, \si)$. This $1/k$-translation surface corresponds to a pair $g \cdot (J, \si) = (g\cdot J, g\cdot \si) \in \prekhodge(M)$, and this defines the desired $GL^{+}(2, \rea)$ action on $\prekhodge(M)$.
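For instance, for $k = 1$ and the hyperbolic one-parameter subgroup $g_{s} = \left(\begin{smallmatrix} e^{s} & 0\\ 0 & e^{-s} \end{smallmatrix}\right)$, an adapted chart $z_{i} = x_{i} + \j y_{i}$ is replaced by $g_{s}z_{i} = e^{s}x_{i} + \j e^{-s}y_{i}$; the transition functions remain translations, and in the new charts the abelian differential is
\begin{align*}
g_{s}\cdot \si = d(g_{s}z_{i}) = e^{s}\,\re \si + \j\, e^{-s}\,\im \si,
\end{align*}
so that for $s \neq 0$ the underlying complex structure changes; the resulting path in $\teich(M)$ is (up to parameterization) the Teichm\"uller geodesic determined by $(J, \si)$.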
In the case $k = 1$, $g\cdot \si$ is given in $z$-coordinates by the real linear action of $g$ on $\si$; that is, $g \cdot \si = (a \re \si + b \im \si) + \j (c \re \si + d\im \si)$, but in general, with respect to a coordinate $z$ adapted to $\si$, the new $k$-differential $g \cdot \si$ is not given by a linear action. The underlying complex structure $g\cdot J$ will equal the original one if $g \in C^{+}O(2) = \comt$. In this case $g \cdot \si$ is expressible in adapted coordinates in terms of a linear representation of $C^{+}O(2)$; namely, it is given by multiplication by $z^{k}$: $z^{k}\si= z^{k}B^{(k, 0)} = r^{k}(\cos (k \theta) B + \sin (k \theta) \bj(B))^{(k, 0)}$, where $z$ is the complex number corresponding to $g$. This action of $C^{+}O(2)$ is considered in section \ref{gaugesection}. The more interesting actions of hyperbolic and parabolic elements of $GL^{+}(2, \rea)$ are more difficult to describe; their study is left for the future. If $\Phi \in \diff(M)$ then the $z_{i}\circ \Phi$ are coordinates adapted to $\Phi^{\ast}(\si)$ with respect to the conformal structure $\Phi^{\ast}([h])$, and since $g(z_{i}\circ \Phi) = (gz_{i})\circ \Phi$, it follows that the action of $GL^{+}(2,\rea)$ on $\prekhodge(M)$ is diffeomorphism equivariant in the sense that $\Phi^{\ast}(g \cdot (J, \si)) = g \cdot \Phi^{\ast}(J, \si)$, so descends to an action on $\khodge(M)$ which commutes with the action of $\map^{+}(M)$. \section{AH structures on surfaces}\label{ahsection} In this section are given the basic definitions related to AH structures. Though adapted to the peculiarities of the two-dimensional case, the exposition is consistent with the conventions of \cite{Fox-ahs}. \subsection{} Two affine connections are \textbf{projectively equivalent} if they have the same unparameterized geodesics in the sense that the image of any geodesic of one connection is the image of a geodesic of the other connection. This is the case if and only if the symmetric part of their difference tensor is pure trace.
A \textbf{projective structure} $\en$ is an equivalence class of projectively equivalent affine connections. For a torsion-free affine connection $\nabla$ on a surface there vanishes the usual projective Weyl tensor, or, what is the same, \begin{align}\label{rijklpij} \begin{split} R_{ijk}\,^{l} &= \delta_{i}\,^{l}R_{(jk)} - \delta_{j}\,^{l}R_{(ik)} - R_{[ij]}\delta_{k}\,^{l}. \end{split} \end{align} The \textbf{projective Cotton} tensor $C_{ijk} \defeq -\nabla_{i}R_{(jk)} + \nabla_{j}R_{(ik)} + \tfrac{1}{3}\nabla_{k}R_{[ij]}$ does not depend on the choice of $\nabla \in \en$. On a surface, since a $3$-form vanishes, there hold $C_{[ijk]} = 0$ and $\nabla_{[i}C_{jk]l} = 0$. \subsection{} The definition of an AH structure $(\en, [h])$ on an $n$-manifold was given in the first paragraph of section \ref{introsection} of the introduction. Let $H_{ij}$ be the normalized representative of $[h]$. An equivalent definition is that for each $\nabla \in \en$ there be a one-form $\si_{i}$ such that $\nabla_{[i}H_{j]k} = 2\si_{[i}H_{j]k}$. In two dimensions, by Lemma \ref{weylcriterion} the tensor $\nabla_{[i}H_{j]k}$ is pure trace, so there is always such a one-form. Hence in two dimensions an AH structure could simply be defined to be a pair $(\en, [h])$, the necessary compatibility being automatic. If $M$ is oriented, an alternative way to see this is the following. On a surface with K\"ahler structure $(h, J, \om)$, the Hodge star operator on one-forms is $(\star \si)_{i} = -\si_{p}J_{i}\,^{p}$, while on two-forms it is $\star \be = \tfrac{1}{2}\om^{\sharp\,ij}\be_{ij}$. Because any two-form is a multiple of any other, given $\nabla \in \en$ there is a one-form $\si_{i}$ such that $\nabla_{[i}h_{j]k} = \om_{ij}\si_{k}$. An easy computation shows \begin{align}\label{omsi} \om_{ij}\si_{k} = -2(\star \si)_{[i}h_{j]k}, \end{align} so $\nabla_{[i}h_{j]k} =-2(\star \si)_{[i}h_{j]k}$. 
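To verify \eqref{omsi}, fix a positively oriented $h$-orthonormal coframe $\{e^{1}, e^{2}\}$, so that $\om_{12} = 1 = -\om_{21}$ and, with the sign conventions here, $(\star\si)_{1} = -\si_{2}$ and $(\star\si)_{2} = \si_{1}$ (reversing the orientation changes the signs of both $\star$ and $\om$, leaving \eqref{omsi} unchanged). Then
\begin{align*}
-2(\star \si)_{[1}h_{2]k} = -(\star\si)_{1}\delta_{2k} + (\star\si)_{2}\delta_{1k} = \si_{2}\delta_{2k} + \si_{1}\delta_{1k} = \si_{k} = \om_{12}\si_{k},
\end{align*}
and since both sides of \eqref{omsi} are antisymmetric in the pair $ij$, this component suffices.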
\subsection{} Given an AH structure $(\en, [h])$, there is a unique torsion-free $\nabla \in \en$ such that $\nabla_{[i}H_{j]k} = 0$; given any torsion-free $\bnabla \in \en$ with $\bnabla_{[i}H_{j]k} = 2\si_{[i}H_{j]k}$ it is given by $\nabla = \bnabla - 2\si_{(i}\delta_{j)}\,^{k}$. This distinguished representative of $\en$ is called the \textbf{aligned} representative of the AH structure. From now on, except where stated otherwise, indices are raised and lowered using $H_{ij}$ and the dual bivector $H^{ij}$. Because $\det H_{ij} = 1$ there holds $H^{pq}\bnabla_{i}H_{pq} = 0$ for any $\bnabla \in \en$. Alternative characterizations of the alignment condition are given in Lemma \ref{special}, the proof of which is straightforward, using the identity \begin{align}\label{lcidentity} A_{ijk} = A_{i(jk)} + A_{j(ik)} - A_{k(ij)} + A_{[ij]k} + A_{[ki]j} - A_{[jk]i}, \end{align} valid for any covariant $3$-tensor. \begin{lemma}\label{special} Let $\en$ be a projective structure and $[h]$ a conformal structure on a surface $M$. There is a unique torsion-free representative $\nabla \in \en$ satisfying any one of the following equivalent conditions \begin{enumerate} \item $\nabla_{[i}H_{j]k} = 0$. \item $\nabla_{i}H_{jk} = \nabla_{(i}H_{jk)}$. \item $\nabla_{p}H^{ip} = 0$. \item $H^{pq}\nabla_{p}H_{qi} = 0$. That is, $\nabla_{i}H_{jk}$ is completely trace-free. \item For any $h \in [h]$ there holds $2h^{pq}\nabla_{p}h_{qi} = h^{pq}\nabla_{i}h_{pq}$. \item For any $h \in [h]$ there holds $\nabla_{[i}h_{j]k} = 2\ga_{[i}h_{j]k}$ with $4\ga_{i} = h^{pq}\nabla_{i}h_{pq}$. \end{enumerate} \end{lemma} \noindent Henceforth, except where stated otherwise, $\nabla$ denotes the aligned representative of an AH structure. While it may seem perverse to speak of the projective structure $\en$ if one works only with a distinguished representative $\nabla \in \en$, later developments will show the utility of the perspective.
\subsection{} The \textbf{cubic torsion} of an AH structure is the tensor $\bt_{ij}\,^{k}$ defined in terms of an arbitrary representative $\nabla \in \en$ by setting $\bt_{ij}\,^{p}H_{pk}$ equal to the completely trace-free part of $\nabla_{(i}H_{jk)}$. For the aligned representative $\nabla \in \en$ the cubic torsion is just $\nabla_{i}H_{jk} = \nabla_{(i}H_{jk)} = \bt_{ijk}$. An AH structure for which $\bt_{ij}\,^{k} \equiv 0$ is a \textbf{Weyl structure}. The aligned representative of a Weyl structure is what is usually called a Weyl connection. \subsection{} In the split signature case, $M$ admits a finite cover $\hat{M}$ which is orientable and such that the null cone of the lifted metric on $\hat{M}$ is orientable; in this case the null cone of the split signature metric is a pair of transverse nowhere vanishing line fields, so $\hat{M}$ has Euler characteristic $0$, and hence so does $M$; thus if $M$ is compact it is a torus or a Klein bottle. The study of Riemannian Einstein AH structures on surfaces makes use of Hodge theory, the associated complex structure, and so forth. While the study of split signature Einstein AH structures on surfaces is also interesting, it requires a different set of techniques and will be mostly ignored here, although it will always be indicated when the Riemannian hypothesis is necessary. \subsection{} The basic example of an AH structure is the following. A hypersurface immersion in flat affine space is \textbf{non-degenerate} if its second fundamental form (which takes values in the normal bundle) is non-degenerate. If the immersion is also co-oriented the second fundamental form determines a conformal structure on the hypersurface.
A choice of subbundle transverse to the immersion induces on the hypersurface a torsion-free affine connection, and there is a unique choice of transverse subbundle such that the induced connection is aligned with respect to the conformal structure determined by the second fundamental form and the co-orientation. This choice of transverse subbundle is the \textbf{affine normal subbundle}. That this definition coincides with the customary one is proved in section $4.2$ of \cite{Fox-ahs}. The second fundamental form with respect to a particular vector field spanning the affine normal subbundle is a pseudo-Riemannian metric on the hypersurface. Usually an \textbf{(equi)affine normal vector field} $\nm$ is distinguished by fixing a volume form on the ambient affine space and requiring that the volume density induced by the metric $h_{ij}$ determined by the vector field agree with the volume density induced via interior multiplication of the vector field with the chosen volume form. Though it is often omitted, the prefix \textit{equi} ought to be included because this construction is only invariant under equiaffine transformations of the ambient space (those preserving the given volume form). The metric $h_{ij}$ determined by the affine normal is called the \textbf{(equi)affine} or \textbf{Blaschke} metric. The affine normal field admits the following description (that this is equiaffinely invariant is not self-evident; rather it follows from the definition in the previous paragraph, which is manifestly so). Fix the standard flat Euclidean metric $\delta_{IJ}$ on the ambient $\rea^{3}$ and let $g_{ij} = i^{\ast}(\delta)_{ij}$ be the induced metric on the immersed, co-oriented, non-degenerate hypersurface $i:\Sigma \to \rea^{3}$. Let $N$ be the unit Euclidean normal consistent with the given co-orientation, and let $\Pi_{ij}$ be the second fundamental form defined with respect to $N$. 
The Gaussian curvature $K$ is the function $\det \Pi \tensor (\det g)^{-1}$, and the equiaffine metric $h_{ij}$ has the form $h_{ij} = |K|^{-1/4}\Pi_{ij}$. Let $\rad$ be the radial (position) vector field on $\rea^{3}$. Along $i(\Sigma)$ the equiaffine vector field $\nm$ satisfies $\lap_{h}\rad = 2\nm$. For the convenience of a reader not familiar with affine geometry, the translation of these definitions into local coordinates is recalled briefly. For details, consult \cite{Calabi-affinelyinvariant} or the textbook \cite{Nomizu-Sasaki}. Locally a non-degenerate hypersurface $\Sigma$ is given as a graph $z = f(x)$ where $x^{i} \in \Omega \subset \rea^{2}$. Let $\pr$ denote the flat connection on $\Omega$ with respect to which the $dx^{i}$ are parallel, and write $f_{i_{1}\dots i_{k}} = \pr_{i_{1}}\dots \pr_{i_{k}}f$ and $\hess f = f_{ij}$. Also let $f^{ij}$ be the tensor inverse to $f_{ij}$. Define $\H(f)$ by $\det \hess f= \H(f)(dx^{1}\wedge dx^{2})^{2}$. The normalized representative $H_{ij}$ of the AH structure $(\en, [h])$ induced on $\Sigma$ is given by $H_{ij} = |\H(f)|^{-1/4}f_{ij}$, while the aligned representative $\nabla \in \en$ is given by $\nabla = \pr + \tfrac{1}{4}f_{ij}f^{pq}f_{pqr}f^{rk}$. The affine normal is $\nm = |\H(f)|^{1/4}\left(Z - \tfrac{1}{4}f^{pq}f_{pqr}f^{ri}X_{i}\right)$ in which $X_{i} = \tfrac{\pr}{\pr x^{i}} - f_{i}\tfrac{\pr}{\pr z}$ and $Z = \tfrac{\pr}{\pr z}$. The Euclidean unit normal is $N = (1 + f^{p}f_{p})^{-1/2}\left(Z - (1 + f^{p}f_{p})^{-1}f^{i}X_{i}\right)$, in which $f^{i} = g^{ip}f_{p}$, and the second fundamental form $\Pi_{ij}$ induced by $N$ is $\Pi_{ij}= (1 + f^{p}f_{p})^{-1/2}f_{ij}$, while the Gaussian curvature is $K = (1 + f^{p}f_{p})^{-2}\H(f)$. Along $i(\Sigma)$ the radial field $\rad$ is equal to $x^{p}X_{p} - f^{\ast}Z$, in which $f^{\ast} = x^{p}f_{p} - f$ is the Legendre transform of $f$. 
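As a consistency check on these coordinate formulas, the relation $K = (1 + f^{p}f_{p})^{-2}\H(f)$ can be compared with the Gaussian curvature computed directly from the Gauss map, using $N_{x}\times N_{y} = K\, r_{x}\times r_{y}$; a minimal numerical sketch in Python (the test function $f$ and the sample point are illustrative assumptions, not taken from the text):

```python
import math

def f(x, y):
    # hypothetical sample graph z = f(x, y); any non-degenerate f works
    return 0.5 * x * x + y * y + 0.3 * x * y * y

eps = 1e-4

def fx(x, y): return (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
def fy(x, y): return (f(x, y + eps) - f(x, y - eps)) / (2 * eps)

def normal(x, y):
    # unit Euclidean normal of the graph, for the upward co-orientation
    a, b = fx(x, y), fy(x, y)
    w = math.sqrt(1 + a * a + b * b)
    return (-a / w, -b / w, 1 / w)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

x0, y0 = 0.4, -0.3

# Hessian determinant H(f) by central differences
fxx = (f(x0 + eps, y0) - 2 * f(x0, y0) + f(x0 - eps, y0)) / eps**2
fyy = (f(x0, y0 + eps) - 2 * f(x0, y0) + f(x0, y0 - eps)) / eps**2
fxy = (f(x0 + eps, y0 + eps) - f(x0 + eps, y0 - eps)
       - f(x0 - eps, y0 + eps) + f(x0 - eps, y0 - eps)) / (4 * eps**2)
Hf = fxx * fyy - fxy * fxy

# K from the Gauss map: N_x x N_y = K (r_x x r_y)
Nx = tuple((a - b) / (2 * eps) for a, b in zip(normal(x0 + eps, y0), normal(x0 - eps, y0)))
Ny = tuple((a - b) / (2 * eps) for a, b in zip(normal(x0, y0 + eps), normal(x0, y0 - eps)))
rx, ry = (1.0, 0.0, fx(x0, y0)), (0.0, 1.0, fy(x0, y0))
N0 = normal(x0, y0)
K_gauss = dot(cross(Nx, Ny), N0) / dot(cross(rx, ry), N0)

K_formula = Hf / (1 + fx(x0, y0)**2 + fy(x0, y0)**2)**2
assert abs(K_gauss - K_formula) < 1e-5
```

The two computations of $K$ are independent (one differentiates the unit normal, the other uses only $\hess f$ and $df$), so their agreement is a genuine check rather than a tautology.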
The hypersurface is an \textbf{affine hypersphere} if the lines of its affine normal subbundle meet in a point, which may be at infinity. This holds if and only if either $\nm$ is parallel or there is a constant $S$ such that $\nm = -S\rad$. For non-zero $S$ this holds if and only if $f^{\ast}$ solves the equation $(f^{\ast})^{4}\H(f^{\ast}) = S^{4}$, in which the Hessian is taken with respect to the Legendre transformed variables $y_{i} \defeq f_{i}$ on the domain $\Omega^{\ast} = df(\Omega)$. Most of the deeper results in the study of affine hyperspheres are obtained by studying this Monge-Amp\`ere equation. In the two-dimensional case under consideration, there is also a Weierstrass-like representation of affine hypersurfaces due to Calabi, \cite{Calabi-affinemaximal}, and Wang, \cite{Wang}, which allows the use of complex analytic methods. The simplest affine hyperspheres which are not quadrics are the hypersurfaces \begin{align*} \Sigma_{\al} = \{(X, Y, Z) \in \rea^{3}: (Y^{2}Z\cos \al - XYZ \sin \al) = 1\}, \end{align*} for $\al \in (0, \pi)$. Writing $x = X/Z$ and $y = Y/Z$, $\Sigma_{\al}$ is the \textbf{radial graph} \begin{align*} \{(x/u, y/u, -1/u) \in \rea^{3}: u(x, y) < 0\} \end{align*} of the function $u(x, y) = (y^{2}\cos \al - xy \sin \al)^{1/3}$, which solves $27 u^{4}\det \hess u = \sin^{2}\al$. This $\Sigma_{\al}$ is asymptotic to the cone over the base triangle $\Omega^{\ast} = \{(x, y): u(x,y) < 0\}$; it can be written as the graph of the Legendre transform of $u$ (that is, $u$ corresponds to $f^{\ast}$ in the previous paragraph). The parameter $\al$ is not important, as these examples are all equivalent by an affine transformation, but is included to illustrate that as $\al \to \pi$ the function $u$ becomes degenerate; as the upper halfspace contains a complete affine line it supports no strictly convex negative function vanishing along its boundary. The function $v(x, y) = -(x^{2} + y^{2})^{1/3}$ solves $v^{4}\H(v) = -(4/27)$ on $\rea^{2} \setminus\{0\}$.
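The Monge-Amp\`ere identities satisfied by $u$ and $v$ can be checked numerically; a minimal Python sketch using central-difference Hessians (the value of $\al$ and the sample points are illustrative assumptions, with the check for $u$ made on the branch where $y^{2}\cos\al - xy\sin\al > 0$):

```python
import math

def hess_det(g, x, y, e=1e-4):
    # determinant of the coordinate Hessian of g, by central differences
    gxx = (g(x + e, y) - 2 * g(x, y) + g(x - e, y)) / e**2
    gyy = (g(x, y + e) - 2 * g(x, y) + g(x, y - e)) / e**2
    gxy = (g(x + e, y + e) - g(x + e, y - e)
           - g(x - e, y + e) + g(x - e, y - e)) / (4 * e**2)
    return gxx * gyy - gxy * gxy

al = 1.0  # illustrative value of the parameter in (0, pi)

def u(x, y):
    return (y * y * math.cos(al) - x * y * math.sin(al)) ** (1.0 / 3.0)

def v(x, y):
    return -(x * x + y * y) ** (1.0 / 3.0)

# 27 u^4 det(Hess u) = sin^2(al), at a point with y^2 cos(al) - x y sin(al) > 0
x0, y0 = -1.0, 1.0
assert abs(27 * u(x0, y0)**4 * hess_det(u, x0, y0) - math.sin(al)**2) < 1e-4

# v^4 H(v) = -4/27 away from the origin
x1, y1 = 0.7, -0.4
assert abs(v(x1, y1)**4 * hess_det(v, x1, y1) + 4.0 / 27.0) < 1e-4
```

Both identities also follow in closed form from the determinant formula $\det(Q + \la\, dq\tensor dq) = \det Q\,(1 + \la\, dq^{t}Q^{-1}dq)$ applied to $u = q^{1/3}$ with $q$ quadratic.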
Its radial graph is the hypersurface $\{(X, Y, Z) \in \rea^{3}: (X^{2} + Y^{2})Z = 1\}$, which is an affine hypersphere. This does not contradict the preceding paragraph because the Hessian of $v$ has mixed signature (its eigenvalues are $-(2/3)v^{-2}$ and $(2/9)v^{-2}$), so in this case the equiaffine metric has split signature. \subsection{} Given an AH structure $(\en, [h])$ the torsion-free connection $\bnabla \defeq \nabla + \bt_{ij}\,^{k}$ satisfies $\bnabla_{i}H_{jk} = - \bt_{ijk}$, so is the aligned representative of the AH structure $(\ben, [h])$ formed by the projective structure $\ben$ generated by $\bnabla$ and the given $[h]$. This $(\ben, [h])$ is said to be \textbf{conjugate} to $(\en, [h])$. As its cubic torsion is $\bar{\bt}_{ij}\,^{k} = -\bt_{ij}\,^{k}$, the conjugate of the conjugate is the original AH structure. The conormal Gau\ss\, map of a non-degenerate co-oriented hypersurface immersion in flat affine space sends a point of the hypersurface to the annihilator of the tangent space of the hypersurface at that point. The pullback via this conormal Gau\ss\, map of the flat projective structure on the projectivization of the dual of the flat affine space, together with the conformal structure determined by the second fundamental form and the co-orientation, forms the AH structure conjugate to that determined by the affine normal subbundle. \subsection{} The \textbf{curvature} and \textbf{Ricci curvature} of an AH structure are defined to be the curvature $R_{ijk}\,^{l}$ and Ricci curvature $R_{ij} \defeq R_{pij}\,^{p}$ of the aligned representative $\nabla$. The \textbf{scalar curvature} $R$ is the density $R \defeq R_{p}\,^{p} = H^{pq}R_{pq}$. Sometimes, for emphasis, the qualifier \textit{weighted} will be added, and $R$ will be called the \textit{weighted scalar curvature}.
It does not make sense to speak of the numerical value of $R$ because $R$ takes values in the line bundle $|\det \ctm|$; however it does make sense to speak of the vanishing of $R$ and because $|\det \ctm|$ is oriented, to speak of the positivity or negativity of $R$. An AH structure is \textbf{proper} if its weighted scalar curvature is non-vanishing. When a representative $h \in [h]$ is given there will be written $\uR_{h} = |\det h|^{-1/2}R = h^{ij}R_{ij}$. An AH structure is \textbf{projectively flat} if $\en$ is projectively flat. If the conjugate AH structure $(\ben, [h])$ is projectively flat, then $(\en, [h])$ is \textbf{conjugate projectively flat}. \subsection{} An AH structure is \textbf{exact} if there is a representative $h \in [h]$ such that $\nabla_{i}\det h = 0$ for the aligned representative $\nabla \in \en$. If there is such an $h$ it is determined uniquely up to positive homothety (on each connected component of $M$). Such an $h$ will be called a \textbf{distinguished representative} of the AH structure. For example, the AH structure induced on a hypersurface in flat affine space is always exact, and the equiaffine metric is a distinguished representative. An AH structure is exact if and only if there is a global $\nabla$-parallel non-vanishing density of non-trivial weight, for if there is such a density, then some power of it is a non-vanishing density $\mu$ such that $h_{ij} = \mu \tensor H_{ij} \in [h]$ verifies $\nabla |\det h| = \nabla \mu^{2} = 0$ (the converse is obvious). \subsection{}\label{faradaysection} The \textbf{Faraday form} $F_{ij}$ of an AH structure $(\en, [h])$ is the curvature of the covariant derivative induced on the line bundle of $-1/2$-densities by the aligned representative $\nabla \in \en$. If $R_{ijk}\,^{l}$ is the curvature of $\nabla$, then by definition and the traced algebraic Bianchi identity there hold \begin{align}\label{faradayexplicit} 2F_{ij} = R_{ijp}\,^{p} = -2R_{[ij]}. 
\end{align} Since $F_{ij}$ is the curvature of a connection on a trivial line bundle it is always exact. The AH structure $(\en, [h])$ is \textbf{closed} if $F_{ij} = 0$. The terminology directly extends that introduced for Weyl structures in \cite{Calderbank-faraday}. If an AH structure $(\en, [h])$ has parallel weighted scalar curvature then either $R$ vanishes identically or $R$ vanishes nowhere and $(\en, [h])$ is exact, for if $R$ vanishes nowhere, then $2\si_{i} = -R^{-1}\nabla_{i}R$ satisfies $d\si_{ij} = 2\nabla_{[i}\si_{j]} = -R^{-1}\nabla_{[i}\nabla_{j]}R = F_{ij}$. The \textbf{Faraday primitive} $\ga_{i}$ associated to $h \in [h]$ is the one-form $\ga_{i}$ defined by $4\ga_{i} = h^{pq}\nabla_{i}h_{pq}$. Note that the Faraday primitive associated to $h$ depends only on the positive homothety class of $h$ and not on $h$. From the Ricci identity follows $d\ga_{ij} = 2\nabla_{[i}\ga_{j]} = -F_{ij}$, so that $\ga_{i}$ is a primitive for $-F_{ij}$. If $\tilde{h}_{ij} = e^{2f}h_{ij} \in [h]$ then the corresponding one-form $\tilde{\ga}_{i}$ differs from $\ga_{i}$ by an exact one-form, $\tilde{\ga}_{i} = \ga_{i} + df_{i}$. The equivalence class $\{\ga\}$ of one-forms so determined is the \tbf{equivalence class of Faraday primitives} \textbf{induced by $(\en, [h])$}. The Faraday primitives associated to $h \in [h]$ of the AH structure $(\en, [h])$ and its conjugate $(\ben, [h])$ are the same, and so these AH structures determine the same equivalence class of Faraday primitives. In particular, an AH structure is closed (resp. exact) if and only if the conjugate AH structure is closed (resp. exact). Most properties of the Faraday curvature of Weyl structures hold also for AH structures. For example, Lemma \ref{fcoclosedlemma} generalizes (trivially) Theorem $2.5$ of \cite{Calderbank-faraday}. \begin{lemma}\label{fcoclosedlemma} A definite signature AH structure on a surface is closed if and only if $\nabla^{p}F_{ip} = 0$. 
\end{lemma} \begin{proof} By \eqref{faradayexplicit}, there holds $\nabla_{[p}\nabla_{q]}F^{pq} = -R_{[pq]}F^{pq}- R_{pqa}\,^{a}F^{pq} = -F_{pq}F^{pq}$. Hence $\nabla^{p}F_{ip} = 0$ implies that $F_{ij}$ is $[h]$-null; in definite signature this holds if and only if $F_{ij} \equiv 0$. \end{proof} \subsection{} Because $f\hodge_{\tilde{h}}\al_{i} = \hodge_{h}\al_{i} + 2\si^{\sharp\,p}d\al_{pi} - 2\dad_{h}\al\, \si_{i}$, the \textbf{Hodge Laplacian} $\hodge_{h} \defeq d\dad_{h} + \dad_{h}d$ is not conformally invariant on one-forms. However, because $d$ is independent of the metric and $f\dad_{\tilde{h}}\al = \dad_{h}\al$, and because a form is harmonic if and only if it is closed and co-closed, the Hodge decomposition of one-forms is conformally invariant. On a compact orientable Riemannian surface the Hodge decomposition implies there is a unique representative of $\{\ga\}= \{\ga + df: f\in \cinf(M)\}$ which is co-closed. Consequently, on such a surface there is associated to an AH structure a unique positive homothety class of representative metrics $h \in [h]$ for which the associated Faraday primitive $\ga_{i}$ is co-closed with respect to $h$. Such a representative metric will be called a \textbf{Gauduchon metric}. In higher dimensions the existence for an AH structure $(\en, [h])$ of a representative of $[h]$ distinguished up to positive homothety follows from arguments of P. Gauduchon (e.g. \cite{Gauduchon} and \cite{Gauduchon-circlebundles}), and, although in two dimensions the existence of such representatives follows from the Hodge decomposition, in this case the terminology \textit{Gauduchon metric} is used also, for consistency. Note that without imposing some further normalization, such as setting the volume equal to a fixed constant, there is no naturally preferred Gauduchon metric. A distinguished metric on an exact AH structure is trivially also a Gauduchon metric.
\subsection{}\label{ahfrommetricsection} If $(\en, [h])$ is an AH structure and $h \in [h]$ has corresponding Faraday primitive $\ga_{i}$ then the Levi-Civita connection $D$ of $h$ is related to the aligned representative $\nabla$ by \begin{align}\label{dnabladiff} D = \nabla + \tfrac{1}{2}\bt_{ij}\,^{k} + 2\ga_{(i}\delta_{j)}\,^{k} - h_{ij}\ga^{\sharp\,k} = \nabla + \tfrac{1}{2}\bt_{ij}\,^{k} + 2\ga_{(i}\delta_{j)}\,^{k} - H_{ij}\ga^{k}. \end{align} On the other hand, equation \eqref{dnabladiff} shows how to build from a metric and a one-form an AH structure with a given cubic torsion. Given a pseudo-Riemannian metric $h_{ij}$ with Levi-Civita connection $D$, a completely symmetric, completely trace-free covariant $3$-tensor $B_{ijk} = B_{(ijk)}$, and a one-form $\ga_{i}$, defining $\bt_{ij}\,^{k} = B_{ijp}h^{pk}$ and defining $\nabla$ by \eqref{dnabladiff} determines an AH structure with aligned representative $\nabla$, cubic torsion $\bt_{ij}\,^{k}$, and such that $\ga_{i}$ is the Faraday primitive associated to $h$. \subsection{} On an oriented surface the conformal structure underlying a Riemannian signature AH structure determines a complex structure. Lemma \ref{2dcomplexlemma} describes the compatibility between them. \begin{lemma}\label{2dcomplexlemma} Let $(\en, [h])$ be a Riemannian signature AH structure on an oriented surface and let $J_{i}\,^{j}$ be the complex structure determined by $[h]$. Then \begin{align}\label{btj} &\bt_{ij}\,^{k} = -J_{p}\,^{k}\nabla_{i}J_{j}\,^{p},& &\nabla_{[i}J_{j]}\,^{k} = 0,& &J_{[i}\,^{p}\bt_{j]p}\,^{k} = 0. \end{align} \end{lemma} \begin{proof} Fix $h \in [h]$ with Levi-Civita connection $D$ and associated Faraday primitive $\ga_{i}$. Since the tensor $B_{ijk} \defeq \bt_{ij}\,^{p}h_{pk}$ is completely trace-free, Lemma \ref{apolaritylemma} implies $J_{[i}\,^{p}B_{j]kp} = 0$, which shows the third equation of \eqref{btj}.
By \eqref{omsi} there holds \begin{align}\label{btj0} \begin{split} h_{kp}&\left( - \ga_{j}J_{i}\,^{p} + h_{ij}(\star \ga)^{\sharp\,p} - (\star \ga)_{j}\delta_{i}\,^{p} + \om_{ij}\ga^{\sharp\, p} \right) = \ga_{j}\om_{ki} + \ga_{k}\om_{ij} - 2(\star\ga)_{[j}h_{k]i} = 3\ga_{[i}\om_{jk]} = 0. \end{split} \end{align} By the last equality of \eqref{btj}, $\bt_{ip}\,^{k}J_{j}\,^{p} = -\bt_{ij}\,^{p}J_{p}\,^{k}$. Using \eqref{dnabladiff} and \eqref{btj0} there results \begin{align*} \begin{split} 0 = D_{i}J_{j}\,^{k} & = \nabla_{i}J_{j}\,^{k} - \bt_{ij}\,^{p}J_{p}\,^{k} - \ga_{j}J_{i}\,^{k} + h_{ij}(\star \ga)^{\sharp\,k} - (\star \ga)_{j}\delta_{i}\,^{k} + \om_{ij}\ga^{\sharp\, k} = \nabla_{i}J_{j}\,^{k} - \bt_{ij}\,^{p}J_{p}\,^{k}. \end{split} \end{align*} This gives the first equation of \eqref{btj}, from which the second equation of \eqref{btj} is immediate. \end{proof} Thus the cubic torsion of an AH structure can be seen as measuring the failure of the aligned representative to preserve the associated complex structure. By Lemma \ref{apolaritylemma} the third equation of \eqref{btj} shows that for any $h \in [h]$ the $(3,0)$ part $B^{(3,0)}$ is a smooth section of the bundle of cubic holomorphic differentials. From \eqref{btj} it follows also that $\nabla$ preserves $J$ if and only if the AH structure is Weyl. In \cite{Nomizu-Podesta} (see also the more easily obtained \cite{Nomizu-Sasaki}) a torsion-free affine connection on a complex manifold preserving the complex structure is said to be \textbf{affine K\"ahler} if the $(2,0)$ part of its curvature vanishes. (Though apt, the terminology is problematic because \textit{K\"ahler affine} has been used in \cite{Cheng-Yau-realmongeampere} to mean something else, in a somewhat related context; see section \ref{kahleraffinesection}). This curvature condition is automatic on a Riemann surface, so the aligned representative of a Weyl structure is an affine K\"ahler connection in this sense. 
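The apolarity relation $J_{[i}\,^{p}B_{j]kp} = 0$ for a completely symmetric, completely trace-free $B_{ijk}$, used above via Lemma \ref{apolaritylemma}, is easy to check in components; a minimal Python sketch with $h_{ij} = \delta_{ij}$ and the standard complex structure (an illustrative flat normalization, not from the text):

```python
import random

random.seed(1)
a, b = random.uniform(-1, 1), random.uniform(-1, 1)

# a totally symmetric, completely trace-free B_{ijk} in two dimensions (indices 0, 1)
B = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(2)]
for (i, j, k), val in {(0, 0, 0): a, (0, 0, 1): b, (0, 1, 1): -a, (1, 1, 1): -b}.items():
    for p, q, r in {(i, j, k), (i, k, j), (j, i, k), (j, k, i), (k, i, j), (k, j, i)}:
        B[p][q][r] = val

# trace-freeness: B_{ip}{}^{p} = 0 for the flat metric
for i in range(2):
    assert abs(B[i][0][0] + B[i][1][1]) < 1e-12

J = [[0.0, 1.0], [-1.0, 0.0]]  # standard complex structure J_i^j, with J(e_1) = e_2

# apolarity: J_[i^p B_j]kp = 0 for all index choices
for i in range(2):
    for j in range(2):
        for k in range(2):
            s = sum(J[i][p] * B[j][k][p] - J[j][p] * B[i][k][p] for p in range(2)) / 2
            assert abs(s) < 1e-12
```

In two dimensions such a $B_{ijk}$ has only two independent components, so the loop exhausts the general case.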
\section{Curvature of an AH structure}\label{ahcurvaturesection} In this section there are described the basic local curvature invariants of an AH structure $(\en, [h])$ on a surface. The fundamental invariants are the weighted scalar curvature $R$, a trace-free symmetric tensor $E_{ij}$ which is a multiple of the trace-free Ricci tensor, as well as the cubic torsion $\bt_{ij}\,^{k}$ and Faraday curvature $F_{ij}$. Conceptually important are Lemmas \ref{eijvanishlemma} and \ref{selfconjugatecottonlemma} which show when $E_{ij}$ and $\bt_{ij}\,^{k}$ can be viewed as the real parts of holomorphic differentials. In \cite{Calabi-affinemaximal} Calabi develops the geometry of hypersurfaces in flat affine three space; the fundamental geometric invariants are a quadratic differential $B$ and a cubic differential $A$ which is holomorphic when $B$ vanishes; for the AH structure induced on such a hypersurface, constant multiples of the real parts of $A$ and $B$ are identified, respectively, with $\bt_{ij}\,^{k}$ and $E_{ij}$. \subsection{} Let $(\en, [h])$ be an AH structure on a surface $M$. In what follows indices are raised and lowered using the normalized representative $H$, $\nabla \in \en$ is the aligned representative, and $h \in [h]$ is a representative with Faraday primitive $\ga_{i}$ and Levi-Civita connection $D$ related to $\nabla$ as in \eqref{dnabladiff}. On a manifold of dimension $n > 2$ it is necessary to consider, in addition to the Ricci trace, the trace $R_{ip}\,^{p}\,_{j}$, because in general the trace-free symmetric parts of $R_{ip}\,^{p}\,_{j}$ and $R_{ij}$ are independent, but in two dimensions tracing \eqref{rijklpij} shows that \begin{align}\label{qrrelated} R_{ip}\,^{p}\,_{j} + R_{ij} = RH_{ij}, \end{align} so in this case there is no need to speak of $R_{ip}\,^{p}\,_{j}$. 
It will be convenient to work instead with the trace-free symmetric tensor $E_{ij} = E_{(ij)}$ defined by \begin{align}\label{ricdecompose} R_{ij} = -2E_{ij} + \tfrac{1}{2}RH_{ij} - F_{ij}. \end{align} The apparently unnatural coefficient $-2$ is chosen for consistency with the conventions of \cite{Fox-ahs}. Substituting $E_{ij}$ into \eqref{rijklpij} gives $R_{ijkl} = -4H_{l[i}E_{j]k} + RH_{l[i}H_{j]k} + F_{ij}H_{kl}$. However, because $H_{k[i}E_{j]l} - H_{l[i}E_{j]k}$ is trace-free and has symmetries corresponding to the Young diagram of the partition $(22)$ it vanishes by Lemma \ref{weylcriterion}, and so \begin{align}\label{rijkl} & R_{ijkl} = -2H_{k[i}E_{j]l} -2H_{l[i}E_{j]k} + RH_{l[i}H_{j]k} + F_{ij}H_{kl}. \end{align} The Ricci identity implies \begin{align}\label{skewnablah} 2H_{k[i}E_{j]l} + 2H_{l[i}E_{j]k} = -R_{ij(kl)} + F_{ij}H_{kl} = \nabla_{[i}\nabla_{j]}H_{kl} = \nabla_{[i}\bt_{j]kl}. \end{align} There hold \begin{align} \label{gd1} \begin{split} &D_{i}\ga_{j} = \nabla_{i}\ga_{j} - \tfrac{1}{2}\bt_{ij}\,^{p}\ga_{p} - 2\ga_{i}\ga_{j} + H_{ij}\ga_{p}\ga^{p}, \qquad D^{p}\ga_{p} = \nabla^{p}\ga_{p} = \ga_{p}\,^{p}, \end{split} \end{align} \begin{align} \label{dga3} \begin{split}&D_{i}\bt_{jk}\,^{l} = \\ &\nabla_{i}\bt_{jk}\,^{l} - \tfrac{1}{2}\bt_{ij}\,^{p}\bt_{kp}\,^{l} -\tfrac{1}{2}\bt \delta_{[i}\,^{l}H_{j]k}+ \delta_{i}\,^{l}\ga_{p}\bt_{jk}\,^{p} + 2H_{i(j}\bt_{k)p}\,^{l}\ga^{p} - 3\ga_{(i}\bt_{jk)}\,^{l} - \bt_{ijk}\ga^{l}. \end{split} \end{align} Tracing \eqref{skewnablah} in $il$, relabeling, and substituting in \eqref{qrrelated} yields the first equality of \begin{align}\label{ddivbt} 4E_{ij} = \nabla_{p}\bt_{ij}\,^{p} - \tfrac{1}{2}\bt H_{ij} = D_{p}\bt_{ij}\,^{p} - 2\ga_{p}\bt_{ij}\,^{p}, \end{align} while the second follows from the first and tracing \eqref{dga3}. Equation \eqref{ddivbt} plays an important role in deriving consequences of the Einstein equations; see Lemma \ref{eijvanishlemma} below. 
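The identities \eqref{qrrelated} and \eqref{ricdecompose} follow from \eqref{rijkl} by contraction alone, as does the vanishing of $H_{k[i}E_{j]l} - H_{l[i}E_{j]k}$, and all three can be checked numerically; a minimal Python sketch with $H_{ij} = \delta_{ij}$ and randomly chosen $E_{ij}$, $F_{ij}$, and $R$ (an illustrative flat normalization, not from the text):

```python
import random

random.seed(2)
H = [[1.0, 0.0], [0.0, 1.0]]          # flat normalized representative
e1, e2, f, Rs = (random.uniform(-1, 1) for _ in range(4))
E = [[e1, e2], [e2, -e1]]             # symmetric and trace-free
F = [[0.0, f], [-f, 0.0]]             # antisymmetric (Faraday form)

def Riem(i, j, k, l):
    # R_ijkl = -2 H_k[i E_j]l - 2 H_l[i E_j]k + R H_l[i H_j]k + F_ij H_kl
    t1 = -(H[k][i] * E[j][l] - H[k][j] * E[i][l])
    t2 = -(H[l][i] * E[j][k] - H[l][j] * E[i][k])
    t3 = 0.5 * Rs * (H[l][i] * H[j][k] - H[l][j] * H[i][k])
    return t1 + t2 + t3 + F[i][j] * H[k][l]

for i in range(2):
    for j in range(2):
        ric = sum(Riem(p, i, j, p) for p in range(2))      # R_ij = R_pij^p
        # Ricci decomposition (ricdecompose)
        assert abs(ric - (-2 * E[i][j] + 0.5 * Rs * H[i][j] - F[i][j])) < 1e-12
        other = sum(Riem(i, p, p, j) for p in range(2))    # R_ip^p_j
        # the two-dimensional trace identity (qrrelated)
        assert abs(other + ric - Rs * H[i][j]) < 1e-12

# in two dimensions H_k[i E_j]l - H_l[i E_j]k vanishes identically for trace-free E
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                lhs = ((H[k][i] * E[j][l] - H[k][j] * E[i][l])
                       - (H[l][i] * E[j][k] - H[l][j] * E[i][k]))
                assert abs(lhs) < 1e-12
```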
Recall that the curvature of $D$ is written $\sR_{ijk}\,^{l} = \sR_{h}\delta_{[i}\,^{l}h_{j]k}$. Calculating $\sR_{ijkl} - R_{ijkl}$ using \eqref{dnabladiff} and simplifying yields \begin{align} \label{confscal}&\sR_{h} = \uR_{h} + \tfrac{1}{4}|\bt|^{2}_{h} -2|\det h|^{-1/2}\nabla^{p}\ga_{p} = \uR_{h} + \tfrac{1}{4}|\bt|^{2}_{h} + 2\dad_{h}\ga. \end{align} Equation \eqref{confscal} will be used repeatedly throughout the remainder of the paper. \subsection{}\label{fdhsection} For a Riemannian signature AH structure $(\en, [h])$ and a representative $h \in [h]$ with associated K\"ahler form $\om_{ij}$ define $\fd_{h} \in \cinf(M)$ by $2F_{ij} = \fd_{h}\om_{ij}$. Equivalently, $\fd_{h} = 2 \star F$. Note that $2|F|_{h}^{2} = \fd_{h}^{2}$, and $\lie_{J\ga^{\sharp}}\om = d(\imt(J\ga^{\sharp})\om) = -d\ga = F$. Let $\fd = \fd_{h}|\det h|^{1/2}$, which does not depend on the choice of $h \in [h]$. Decomposing \eqref{ricdecompose} by parts and substituting \eqref{confscal} yields \begin{align}\label{rparts} \begin{split} R_{\al\be} & = -2E_{\al\be},\\ R_{\al\bar{\be}} & = \tfrac{1}{2}(R - \j \fd)H_{\al\bar{\be}} = \tfrac{1}{2}(\uR_{h} - \j \fd_{h})h_{\al\bar{\be}} = \tfrac{1}{2}(\sR_{h} + \tfrac{1}{4}|\bt|_{h}^{2} + 2\dad_{h}\ga - \j \fd_{h})h_{\al\bar{\be}}. \end{split} \end{align} Because of \eqref{rparts} it makes sense to refer to $\cscal \defeq R - \j \fd$ as the \textbf{complex weighted scalar curvature} of the AH structure $(\en, [h])$. There will be written $\cscal_{h} = \uR_{h} - \j \fd_{h}$. Since on a Riemann surface $M$ a $(0, 2)$ form vanishes, for an operator $\hdelbar$ on a complex vector bundle $E$ to be a holomorphic structure it suffices that it satisfy the Leibniz rule. It follows that if $(M, J)$ is a Riemann surface the most general holomorphic structure on $\cano^{k}$ has the form $\hdelbar = \delbar + \si^{(0, 1)}$ for an arbitrary one-form $\si_{i}$ on $M$ and the holomorphic structure $\delbar$ induced on $\cano^{k}$ by $J$. 
From \eqref{rparts} it is immediate that $R_{\al\be} = 0$ if and only if $E_{ij} = 0$. These conditions mean that the Ricci curvature of $(\en, [h])$ has type $(1,1)$. Equation \eqref{ddivbt} has the following nice interpretation. \begin{lemma}\label{eijvanishlemma} For an AH structure $(\en, [h])$ with cubic torsion $\bt_{ij}\,^{k}$ on an oriented surface the following are equivalent: \begin{enumerate} \item\label{er1} The curvature $R_{ij}$ has type $(1,1)$. \item\label{er3} For any $h \in [h]$ the $(3, 0)$ part of the tensor $B_{ijk} \defeq \bt_{ij}\,^{p}h_{kp}$ is $\hdelbar$-holomorphic for the holomorphic structure $\hdelbar \defeq \delbar - 2\ga^{(0, 1)}$ on $\cano^{3}$, in which $\delbar$ is the holomorphic structure induced on $\cano^{3}$ by the conformal structure $[h]$ and the given orientation, and $4\ga_{i} = |\det h|^{-1}\nabla_{i}|\det h|$. \end{enumerate} \begin{proof} For $\tilde{h}_{ij} = fh_{ij}$ let $\tdelbar = \delbar - 2\tilde{\ga}^{(0,1)}$. Since $\tilde{\ga}_{i} = \ga_{i} + \tfrac{1}{2}d\log{f}_{i}$, $\tdelbar = \hdelbar - \delbar \log{f}$. It follows that $\tilde{B}_{ijk} = fB_{ijk}$ satisfies $\tdelbar \tilde{B}^{(3, 0)} = f\tdelbar B^{(3,0)} + \delbar f \tensor B^{(3, 0)} = f\hdelbar B^{(3,0)}$, so that $\tilde{B}^{(3,0)}$ is $\tdelbar$-holomorphic if and only if $B^{(3,0)}$ is $\hdelbar$-holomorphic. This shows that condition \eqref{er3} is well-defined. For any $h \in [h]$ it follows from \eqref{delb} that \begin{align*} \begin{split}\hdelbar B^{(3,0)} &= \delbar B^{(3,0)} - 2\ga^{(0,1)}\tensor B^{(3,0)} \\ &= \overline{h^{(1,1)}} \tensor \div_{h}(B)^{(2,0)} - 2\overline{h^{(1,1)}} \tensor (\imt(\ga^{\sharp})B)^{(2,0)} = 4\overline{h^{(1,1)}}\tensor E^{(2,0)}, \end{split} \end{align*} the last equality by \eqref{ddivbt}.
Hence $B^{(3,0)}$ is $\hdelbar$-holomorphic if and only if $E_{ij} \equiv 0$. \end{proof} For $h \in [h]$ the Chern connection $\hnabla$ on $\cano^{3}$ determined by the Hermitian structure induced on $\cano^{3}$ by $h^{(1,1)}$ and the holomorphic structure $\hdelbar$ is by definition the unique connection on $\cano^{3}$ such that $\hnabla^{0, 1} = \hdelbar$ and for which the induced Hermitian structure is parallel. In terms of the connection induced on $\cano^{3}$ by the Levi-Civita connection $D$ of $h$, $\hnabla$ is expressible by \begin{align} \hnabla = D - 2\j \ga \circ J = D + 2\j \star \ga. \end{align} It follows that the difference of the curvatures of $\hnabla$ and $D$ is $2\j d\star \ga = - 2\j \dad_{h}\ga \,\om$. \subsection{Curvature of the conjugate AH structure}\label{curvatureahpencilsection} It is convenient to define $\nbt = \bt_{abc}\bt^{abc}$. Evidently $\nbt H_{ij} = |\bt|_{h}^{2}h_{ij}$ for any $h \in [h]$. By Lemma \ref{twodablemma}, $2\bt_{ip}\,^{q}\bt_{jq}\,^{p} = \nbt H_{ij}$. Since $2\bt_{k[i}\,^{p}\bt_{j]lp} - \nbt H_{l[i}H_{j]k}$ has the algebraic symmetries of a Riemannian curvature tensor and is completely trace-free, Lemma \ref{weylcriterion} implies that it is identically zero. That is, $2\bt_{k[i}\,^{p}\bt_{j]lp} = \nbt H_{l[i}H_{j]k}$. Let $(\ben, [h])$ be the AH structure conjugate to $(\en, [h])$. Decorate with a $\,\bar{\,}\,$ the tensors derived from the curvature $\bar{R}_{ijkl}$ of $(\ben, [h])$. By definition \begin{align*} \bar{R}_{ijk}\,^{l} - R_{ijk}\,^{l} = 2\nabla_{[i}\bt_{j]k}\,^{l} - 2\bt_{k[i}\,^{p}\bt_{j]p}\,^{l} = 2\nabla_{[i}\bt_{j]k}\,^{l} - \nbt \delta_{[i}\,^{l}H_{j]k}, \end{align*} so, lowering indices using $2H_{lp}\nabla_{[i}\bt_{j]k}\,^{p} = 2\nabla_{[i}\bt_{j]kl} - 2\bt_{lp[i}\bt_{j]k}\,^{p} = 2\nabla_{[i}\bt_{j]kl} + \nbt H_{l[i}H_{j]k}$, and using \eqref{skewnablah} there holds \begin{align} \label{conjugateblaschkecurvature2} \begin{split} \bar{R}_{ijkl} &= R_{ijkl} + 4H_{l[i}E_{j]k} + 4H_{k[i}E_{j]l}.
\end{split} \end{align} Tracing \eqref{conjugateblaschkecurvature2} and substituting \eqref{ricdecompose} and $\bar{F}_{ij} = F_{ij}$ into the result yields $-4\bar{E}_{ij} + \bar{R}H_{ij} = 4E_{ij} + RH_{ij}$, which when traced shows $\bar{R} = R$ and $\bar{E}_{ij} = -E_{ij}$. Tensors unchanged under conjugacy, such as $R$, will be called \textbf{self-conjugate}; tensors multiplied by $-1$ under conjugacy, such as $E_{ij}$, will be called \textbf{anti-self-conjugate}. Classes of AH structures defined by some condition on the curvatures preserved under conjugacy seem to be of particular interest. By \eqref{rijkl} and \eqref{conjugateblaschkecurvature2}, the self-conjugate and anti-self-conjugate parts of the curvature tensor are \begin{align} \begin{split} \tfrac{1}{2}(R_{ijkl} + \bar{R}_{ijkl}) & = RH_{l[i}H_{j]k} + F_{ij}H_{kl}, \qquad \tfrac{1}{2}(R_{ijkl} - \bar{R}_{ijkl}) = - 2H_{l[i}E_{j]k} - 2H_{k[i}E_{j]l}. \end{split} \end{align} It follows that $(\en, [h])$ has self-conjugate curvature if and only if $E_{ij} = 0$. Also, for a Weyl structure the curvature is simply $R_{ijkl} = RH_{l[i}H_{j]k} + F_{ij}H_{kl}$. However, it will be evident later that on a compact orientable surface of genus at least $2$ the class of AH structures with self-conjugate curvature tensor is considerably larger than the class of Weyl structures. \subsection{} By Lemma \ref{weylcriterion}, the projective Cotton tensor $C_{ijk}$ satisfies $C_{ijk} = 2C_{[i}H_{j]k}$ in which $C_{i} \defeq C_{ip}\,^{p}$.
From the definition of $C_{ijk}$ there results \begin{align}\label{projcottontrace} \begin{split} &C_{i} = - \tfrac{1}{2}\nabla_{i}R - \tfrac{1}{3}\nabla^{p}F_{ip} -2 \nabla^{p}E_{ip} +2\bt_{i}\,^{pq}E_{pq} \end{split} \end{align} From \eqref{projcottontrace} and the easily verified identities \begin{align*} &\bnabla_{i}\bar{R} = \nabla_{i}R, && \bnabla^{p}\bar{F}_{ip} = \nabla^{p}F_{ip}, && \bnabla^{p}\bar{E}_{ip} = -\nabla^{p}E_{ip} +\bt_{i}\,^{pq}E_{pq}, \end{align*} there result \begin{align}\label{projcotton} \begin{split} &\bar{C}_{i} = C_{i} + 4\nabla^{p}E_{ip} -2\bt_{i}\,^{pq}E_{pq},\\ & C_{i} + \bar{C}_{i} = - \nabla_{i}R - \tfrac{2}{3}\nabla^{p}F_{ip} + 2\bt_{i}\,^{pq}E_{pq}, \\ & C_{i} - \bar{C}_{i} = -4\nabla^{p}E_{ip} + 2\bt_{i}\,^{pq}E_{pq} = -4D^{p}E_{ip}, \end{split} \end{align} in the last equality of which $D$ is the Levi-Civita connection of any $h \in [h]$. \begin{lemma}\label{selfconjugatecottonlemma} A Riemannian signature AH structure $(\en, [h])$ on an oriented surface $M$ has self-conjugate projective Cotton tensor if and only if $E_{ij}$ is the real part of a holomorphic quadratic differential with respect to the complex structure determined by $[h]$. In particular, if $M$ is a sphere, then $(\en, [h])$ has self-conjugate projective Cotton tensor if and only if $E_{ij} = 0$. \end{lemma} \begin{proof} Since $E_{ij}$ is trace-free, Lemma \ref{kdifferentialslemma} implies that $E_{ij}$ is the real part of a holomorphic quadratic differential if and only if $\div_{h}(E)_{i} = 0$, and by the last equality of \eqref{projcotton} this holds if and only if $(\en, [h])$ has self-conjugate projective Cotton tensor. \end{proof} \begin{theorem}\label{nonnegselfconjtheorem} If a Riemannian signature AH structure $(\en, [h])$ on a compact oriented surface has self-conjugate projective Cotton tensor and non-negative weighted scalar curvature, then $E_{ij} = 0$. 
\end{theorem} \begin{proof} By Lemma \ref{selfconjugatecottonlemma}, $E_{ij}$ is the real part of a holomorphic quadratic differential. By \eqref{keromlap} of Lemma \ref{flatmetriclemma} and \eqref{confscal} for a Gauduchon metric $h \in [h]$ there holds $\lap_{h}|E|_{h}^{2} \geq 2|DE|_{h}^{2} + \tfrac{1}{2}|\bt|_{h}^{2}|E|_{h}^{2} \geq 0$, and the maximum principle then forces $E_{ij} = 0$. \end{proof} \subsection{} Since $R$ and $\nbt $ are $1$-densities their integrals are defined if $M$ is compact. The $L^{2}$-norm $||\bt||_{h}^{2}$ does not depend on the choice of representative $h \in [h]$ and equals $\int_{M}\nbt = \int_{M}|\bt|^{2}_{h}\,d\vol_{h}$. \begin{theorem}\label{gbtheorem} If $(\en, [h])$ is a Riemannian AH structure on a compact, orientable surface $M$, then the Euler characteristic $\chi(M)$ satisfies $4\pi\chi(M) \geq \int_{M}R$, with equality if and only if $(\en, [h])$ is Weyl. In particular, \begin{enumerate} \item If $\int_{M}R \geq 0$, then either $M$ is a sphere, or $M$ is a torus and $(\en, [h])$ is Weyl. \item If $M$ has genus at least one and $(\en, [h])$ is not Weyl, then $\int_{M} R < 0$. \end{enumerate} \end{theorem} \begin{proof} By the Gau\ss-Bonnet Theorem, for any $h \in [h]$, integrating \eqref{confscal} yields \begin{align}\label{gbinter1} 4\pi\chi(M) = \int_{M}\sR_{h}\,d\vol_{h} = \tfrac{1}{4}||\bt||_{h}^{2} + \int_{M} R \geq \int_{M}R. \end{align} Equality holds in \eqref{gbinter1} if and only if $\bt_{ij}\,^{k} = 0$. If $\int_{M}R \geq 0$ the Euler characteristic $\chi(M)$ of $M$ must be non-negative, so $M$ must be the sphere or the torus. If $M$ is a torus, the Euler characteristic is $0$ and so \eqref{gbinter1} forces $||\bt||_{h}^{2} = 0$, so that $(\en, [h])$ is a Weyl structure. If $M$ has genus at least one and $(\en, [h])$ is not Weyl then $4\pi\chi(M) - \tfrac{1}{4}||\bt||^{2}_{h} < 0$, showing the last claim.
\end{proof} \section{Einstein equations}\label{einsteinsection} In this section the Einstein equations for AH structures are defined and their most basic properties described. \subsection{} By definition of $E_{ij}$, \eqref{ddivbt}, and Lemma \ref{eijvanishlemma}, the following conditions on an AH structure on a surface having cubic torsion $\bt_{ij}\,^{k}$ are equivalent. \begin{list}{(\arabic{enumi}).}{\usecounter{enumi}} \renewcommand{\theenumi}{\arabic{enumi}} \item \label{ne1} The symmetric part of the Ricci tensor is trace free. That is, $E_{ij} = 0$. \item \label{ne2} The Ricci tensor has type $(1,1)$. \item \label{ne3} The curvature is self-conjugate. \item \label{ne3b} For any $h \in [h]$ the $(3, 0)$ part of the tensor $B_{ijk} \defeq \bt_{ij}\,^{p}h_{kp}$ is $\hdelbar$-holomorphic for the holomorphic structure $\hdelbar \defeq \delbar - 2\ga^{(0, 1)}$, in which $\delbar$ is the holomorphic structure induced by the conformal structure $[h]$, and $4\ga_{i} = \nabla_{i}\log|\det h|$. \end{list} \begin{definition} An AH structure on a surface is \textbf{naive Einstein} if it satisfies \eqref{ne1}-\eqref{ne3b}. \end{definition} The qualifier \textit{naive} is meant to reflect that, while the most obvious generalization of the usual metric Einstein condition is simply to require the vanishing of the symmetric trace-free Ricci tensor, such an approach turns out to be inadequate. By Lemma \ref{selfconjugatecottonlemma} a Riemannian signature AH structure $(\en, [h])$ on the two-sphere has self-conjugate projective Cotton tensor if and only if it is naive Einstein. Similarly, by Theorem \ref{nonnegselfconjtheorem}, a Riemannian signature AH structure $(\en, [h])$ with self-conjugate projective Cotton tensor on a compact, oriented surface is naive Einstein if it has non-negative weighted scalar curvature.
\begin{definition} An AH structure $(\en, [h])$ is \textbf{Einstein} if it is naive Einstein and satisfies \begin{align}\label{conservationcondition} \nabla_{i}R + 2\nabla^{p}F_{ip} = 0. \end{align} \end{definition} \noindent The condition \eqref{conservationcondition} will be referred to as the \textbf{conservation condition}. Let $h \in [h]$. Using $\nabla_{i}R = D_{i}R + 2\ga_{i}R$ to expand \eqref{conservationcondition}, and using \eqref{gd1}, \eqref{confscal}, and the Ricci identity yields \begin{align}\label{conserve} \begin{split} |\det h|^{-1/2}&(\nabla_{i}R + 2\nabla^{p}F_{ip}) = D_{i}\uR_{h} + 2\ga_{i}\uR_{h} + 2h^{pq}D_{p}F_{iq} + 4\ga^{\sharp\,p}F_{ip}\\ & = D_{i}\uR_{h} + 2\ga_{i}\uR_{h} - 2\dad_{h}d\ga_{i} - 4\ga^{\sharp\,p}F_{pi} \\ & = D_{i}\left(\sR_{h} - \tfrac{1}{4}|\bt|_{h}^{2}\right) - \tfrac{1}{2}\ga_{i}|\bt|_{h}^{2} + \sR_{h}\ga_{i} + 2\lap_{h} \ga_{i} -4 \ga_{i}\dad_{h}\ga -4\ga^{\sharp\,p}F_{pi}\\ & = D_{i}\left(\sR_{h} - \tfrac{1}{4}|\bt|_{h}^{2}\right) - \tfrac{1}{2}\ga_{i}|\bt|_{h}^{2} - 2(\hodge_{h}- \sR_{h}) \ga_{i} -4 \ga_{i}\dad_{h}\ga -4\ga^{\sharp\,p}F_{pi}\\ & = D_{i}\left(\sR_{h} - \tfrac{1}{4}|\bt|_{h}^{2} - 4|\ga|_{h}^{2}\right) - \tfrac{1}{2}\ga_{i}|\bt|_{h}^{2} - 2(\hodge_{h} - \sR_{h}) \ga_{i} + 4\ga^{\sharp\,p}(\lie_{\ga^{\sharp}}h)_{ip} -4 \ga_{i}\dad_{h}\ga. \end{split} \end{align} While the only explicit direct use of \eqref{conservationcondition}, in the rewritten form \eqref{conserve}, is made in the proof of Theorem \ref{classtheorem}, its role is fundamental. By \eqref{ddivbt} a Weyl structure on a surface is automatically naive Einstein. In \cite{Calderbank-mobius}, Calderbank showed that taking \eqref{conservationcondition} as the definition of Einstein Weyl yields a good theory, specializing the higher-dimensional one. By definition the Einstein AH equations restrict to Calderbank's Einstein Weyl equations, and Calderbank's definition provided essential motivation for the general case.
In dimensions $n > 2$ the conservation condition has a more general definition, but it follows from the differential Bianchi identity that a naive Einstein AH structure with self-conjugate curvature is Einstein, satisfying the analogue of \eqref{conservationcondition} with $n$ in place of $2$; see Lemma $4.2$ of \cite{Fox-ahs}. \subsection{} Lemma \ref{parallelexactlemma} follows from \eqref{conservationcondition}, Lemma \ref{fcoclosedlemma}, and the discussion at the end of section \ref{faradaysection}. \begin{lemma}\label{parallelexactlemma} A Riemannian signature Einstein AH structure on a surface is closed if and only if it has parallel scalar curvature, in which case either it is proper and exact or it has vanishing weighted scalar curvature. \end{lemma} \begin{lemma}\label{2deinsteinlemma} For an Einstein AH structure $(\en, [h])$ on a surface, any one of the following statements implies the other two. \begin{list}{(\arabic{enumi}).}{\usecounter{enumi}} \renewcommand{\theenumi}{\arabic{enumi}} \item\label{ep1} $(\en, [h])$ is projectively flat. \item\label{ep2} $(\en, [h])$ is conjugate projectively flat. \item\label{ep3} The weighted scalar curvature is parallel. \end{list} In particular, if an Einstein $(\en, [h])$ either is proper or has vanishing weighted scalar curvature, then it is projectively flat and conjugate projectively flat. \end{lemma} \begin{proof} The first claim is immediate from \eqref{projcotton} and the Einstein equations. The last claim follows because a proper Einstein AH structure is exact, so has parallel scalar curvature. \end{proof} \subsection{} The example of affine hypersurfaces gives the primary motivation for the definition of the Einstein equations for AH structures.
\begin{theorem}\label{sphereeinsteintheorem} For a non-degenerate positively co-oriented hypersurface immersion into flat three-dimensional affine space the following are equivalent: \begin{enumerate} \item The image of the immersion is an affine hypersphere. \item The AH structure induced on the hypersurface is Einstein. \end{enumerate} Moreover, $(1)$ and $(2)$ imply \begin{enumerate} \setcounter{enumi}{2} \item The induced AH structure is projectively flat. \end{enumerate} \end{theorem} \begin{proof} The equivalence of $(1)$ and $(2)$ is proved by the same argument as in the $n > 2$ case, which can be found as Theorem $4.6$ of \cite{Fox-ahs}. The AH structure induced on an affine hypersurface is conjugate projectively flat because the conjugate AH structure is that induced via the conormal Gau\ss\, map from the flat projective structure on oriented projective space, so Lemma \ref{2deinsteinlemma} implies that the induced AH structure is projectively flat as well. Alternatively, the AH structure induced on a non-degenerate hypersurface in affine space is always closed, and when it is Einstein, this implies the scalar curvature is parallel, so by Lemma \ref{2deinsteinlemma}, that it is projectively flat. \end{proof} \subsection{} Recall the definitions of $\fd_{h}$ and $\cscal$ from section \ref{fdhsection}. Note that $\cscal$ is a smooth section of the complexification $|\Det \ctm| \tensor_{\rea}\com$ of the line bundle of $1$-densities and that the $(0,1)$ part of the aligned representative $\nabla$ of an AH structure induces a holomorphic structure on $|\Det \ctm| \tensor_{\rea}\com$. \begin{lemma}\label{einsteinhololemma} A naive Einstein Riemannian signature AH structure $(\en, [h])$ on an oriented surface $M$ is Einstein if and only if $\cscal$ is a holomorphic section of $|\Det \ctm|\tensor_{\rea}\com$ with respect to the holomorphic structure $\nabla^{0,1}$.
\end{lemma} \begin{proof} The claim to be verified is simply $\nabla^{0,1}\cscal = \nabla^{0,1}(R - \j \fd) = 0$. First it is shown that $(\en, [h])$ is Einstein if and only if for any $h \in [h]$ the complex valued function $\uR_{h} - \j \fd_{h}$ is $\hdelbar$-holomorphic for the holomorphic structure $\hdelbar = \delbar + 2\ga^{(0,1)}$ on the trivial line bundle $M \times \com$. It is claimed that the conservation condition \eqref{conservationcondition} is equivalent to \begin{align}\label{holoconservationcondition} \begin{split} 0 & = \hdelbar(\uR_{h} - \j \fd_{h}) = \delbar(\uR_{h} - \j \fd_{h}) + 2(\uR_{h} - \j \fd_{h})\ga^{(0,1)}\\ & = \delbar(\uR_{h} - 4|\ga|_{h}^{2} - \j \fd_{h}) + 2\uR_{h} \ga^{(0,1)} + 4(\imt(\ga^{\sharp})\lie_{\ga^{\sharp}}h)^{(0,1)}. \end{split} \end{align} Rewriting the first equality of \eqref{conserve} yields \begin{align}\label{preconserve} \begin{split} |\det h|^{-1/2}&(\nabla_{i}R + 2\nabla^{p}F_{ip}) = d\uR_{h\, i} + 2\ga_{i}\uR_{h} - (\star d\fd_{h})_{i} - 2\fd_{h}(\star \ga)_{i}. \end{split} \end{align} The $(0, 1)$ part of this last expression is \begin{align}\label{delbarcscal} \delbar( \uR_{h} - \j \fd_{h}) + 2(\uR_{h} - \j \fd_{h})\ga^{(0, 1)}= \hdelbar( \uR_{h} - \j \fd_{h}), \end{align} from which the first claim and the first equality of \eqref{holoconservationcondition} are evident. Taking the $(0,1)$ part of \begin{align*} \begin{split} d_{i} |\ga|_{h}^{2} &= 2\ga^{\sharp\,p}D_{i}\ga_{p} = 2\ga^{\sharp\,p}D_{(i}\ga_{p)} + 2\ga^{\sharp\,p}D_{[i}\ga_{p]}\\ & = \ga^{\sharp\,p}(\lie_{\ga^{\sharp}}h)_{ip} - \tfrac{1}{2}\ga^{\sharp\,p}\fd_{h}\om_{ip} = \ga^{\sharp\,p}(\lie_{\ga^{\sharp}}h)_{ip} + \tfrac{1}{2}\fd_{h}(\star \ga)_{i}, \end{split} \end{align*} yields $2\delbar |\ga|_{h}^{2} = \j \fd_{h} \ga^{(0, 1)}+ 2(\imt(\ga^{\sharp})\lie_{\ga^{\sharp}}h)^{(0,1)}$. Substituting this into what comes before yields the second equality of \eqref{holoconservationcondition}. 
By definition of $\ga_{i}$, for any $h \in [h]$ there holds \begin{align*} \begin{split} \nabla^{0,1}(R - \j \fd) & = \delbar(\uR_{h} - \j \fd_{h})|\det h|^{1/2} + (\uR_{h} - \j \fd_{h})\nabla^{0, 1}|\det h|^{1/2}\\& = \left(\delbar(\uR_{h} - \j \fd_{h})+ 2(\uR_{h} - \j \fd_{h})\ga^{(0, 1)}\right)|\det h|^{1/2}, \end{split} \end{align*} from which the claim follows. \end{proof} By Lemma \ref{einsteinhololemma}, a Riemannian AH structure on an oriented surface is Einstein if and only if its Ricci curvature has type $(1,1)$ and its complex weighted scalar curvature is holomorphic. \begin{lemma}\label{squarehololemma} Let $(\en, [h])$ be a Riemannian signature Einstein AH structure on an oriented surface $M$. For each Gauduchon metric $h \in [h]$ the square norm $|\cscal_{h}|_{h}^{2} = \uR_{h}^{2} + \fd_{h}^{2}$ of the complex scalar curvature is $h$-harmonic. If $M$ is moreover compact then $ \uR_{h}^{2} + \fd_{h}^{2}$ is constant. If $H^{1}(M; \rea) = \{0\}$ there is a $v \in \cinf(M)$ such that $e^{2\j v}\cscal_{h}$ is holomorphic on $M$. \end{lemma} \begin{proof} Denote by the same notations the lifts to the universal cover $\tilde{M}$ of $M$ of $(\en, [h])$ and all the associated tensors, functions, etc. Let $\ga$ be the Faraday primitive of $h$. By assumption $\star \ga$ is closed, so on $\tilde{M}$ there is a smooth function $v$ such that $\star \ga = -dv$, or, what is equivalent, $\ga = \star dv$. By \eqref{delbarcscal} of Lemma \ref{einsteinhololemma} the function $b = e^{2\j v}(\uR_{h} - \j \fd_{h})$ is holomorphic on $\tilde{M}$, for by construction $\delbar v = -\j \ga^{(0,1)}$. Hence $|b|^{2}_{h} = \uR_{h}^{2} + \fd_{h}^{2}$ is harmonic on $\tilde{M}$, and so the corresponding $\uR_{h}^{2} + \fd_{h}^{2}$ on $M$ must be harmonic as well. If $M$ is moreover compact, then $\uR_{h}^{2} + \fd_{h}^{2}$ is constant by the maximum principle. 
If $H^{1}(M;\rea) = \{0\}$, then $v$ could be taken initially to be defined on $M$, and so $e^{2\j v}\cscal_{h}$ is holomorphic on $M$. \end{proof} \section{Classification of Einstein AH structures by scalar curvature and genus}\label{scalarsection} \subsection{} Theorem \ref{classtheorem} is the key technical result for the description of Einstein AH structures on compact orientable surfaces. It generalizes the result for Einstein Weyl structures proved in Theorem $3.7$ of \cite{Calderbank-mobius}. The Killing property of the Gauduchon metric dual of the associated Faraday primitive generalizes to Einstein AH structures a property of the Gauduchon gauge for Einstein Weyl structures first shown in Theorem $2.2$ of \cite{Tod-compact}. \begin{theorem}\label{classtheorem} A Riemannian signature AH structure $(\en, [h])$ on a compact orientable surface is Einstein if and only if for a Gauduchon metric $h \in [h]$ with Levi-Civita connection $D$ and Faraday primitive $\ga_{i}$ there are satisfied the equations \begin{align}\label{confein1} &D_{p}\bt_{ij}\,^{p} = 0, \qquad |\bt|_{h}^{2}\ga_{i} = 0,\qquad (\lie_{\ga^{\sharp}}h)_{ij} = 2D_{(i}\ga_{j)} = 0,\\ \label{confein2} &D_{i}(\sR_{h} - \tfrac{1}{4}|\bt|_{h}^{2} - 4|\ga|_{h}^{2}) = D_{i}(\uR_{h} -4|\ga|_{h}^{2}) = 0. \end{align} Moreover, each of $\uR_{h}$, $|\bt|_{h}^{2}$, and $\fd_{h}$ is constant along the flow of $\ga^{\sharp}$. Conversely, if on a manifold of dimension at least $2$ (not necessarily compact) there are a Riemannian metric $h$ (with Levi-Civita connection $D$), an $h$-Killing field $X^{i}$, and a completely symmetric, completely $h$-trace free tensor $B_{ijk} = B_{(ijk)}$, such that $\ga_{i}\defeq X^{p}h_{pi}$ and $\bt_{ij}\,^{k} \defeq h^{kp}B_{ijp}$ solve \eqref{confein1}-\eqref{confein2}, then $\nabla \defeq D - \tfrac{1}{2}\bt_{ij}\,^{k} - 2\ga_{(i}\delta_{j)}\,^{k} + h_{ij}\ga^{k}$ is the aligned representative of an Einstein AH structure $(\en, [h])$ for which $h$ is a distinguished metric.
\end{theorem} \begin{proof} Let $h \in [h]$ and in this proof raise and lower indices with $h_{ij}$ and $h^{ij}$. Recall that $\lie_{\ga^{\sharp}}h_{ij} = 2D_{(i}\ga_{j)}$. The Ricci identity gives \begin{align}\label{bch0} \begin{split} 2D_{i}D^{p}\ga_{p}&= 2D^{p}D_{i}\ga_{p} - \sR_{h}\ga_{i} = D^{p}(\lie_{\ga^{\sharp}}h)_{ip} - D^{p}F_{ip} - \sR_{h}\ga_{i}. \end{split} \end{align} Let $||\dum||_{h}$ denote the $L^{2}$ norm on tensors with respect to the $h$-volume measure $d\vol_{h}$. Contracting \eqref{bch0} with $\ga^{\sharp}$ and integrating the result gives \begin{align}\label{bch1} 2||\lie_{\ga^{\sharp}}h||_{h}^{2} - 8||\dad_{h}\ga||^{2}_{h} - ||\fd_{h}||^{2}_{h} + 4\int_{M}\sR_{h}|\ga|_{h}^{2}\,d\vol_{h} = 0. \end{align} Contracting the second line of \eqref{conserve} with $\ga^{\sharp\,i}$, integrating by parts, and substituting in \eqref{confscal} yields \begin{align}\label{einsteinf} ||\fd_{h}||^{2}_{h} - 2\int_{M}\uR_{h}\dad_{h}\ga \,d\vol_{h} = 4\int_{M}|\ga|_{h}^{2}\uR_{h}\,d\vol_{h} = 4\int_{M}\sR_{h}|\ga|_{h}^{2}\,d\vol_{h} - \int_{M}|\ga|_{h}^{2}|\bt|_{h}^{2}\,d\vol_{h}. \end{align} Substituting \eqref{einsteinf} into \eqref{bch1} and taking $h \in [h]$ to be Gauduchon yields \begin{align}\label{bch2} 2||\lie_{\ga^{\sharp}}h||_{h}^{2} + \int_{M}|\ga|_{h}^{2}|\bt|_{h}^{2}\,d\vol_{h} = 8||\dad_{h}\ga||^{2}_{h} + 2\int_{M}\uR_{h}\dad_{h}\ga \,d\vol_{h} = 0. \end{align} Since both summands of the leftmost expression in \eqref{bch2} are non-negative, \eqref{bch2} implies the last equality of \eqref{confein1} and that $|\ga|^{2}_{h}|\bt|^{2}_{h} = 0$. By Lemma \ref{twodablemma} that $|\ga|^{2}_{h}|\bt|^{2}_{h} = 0$ is equivalent to $\ga_{p}\bt_{ij}\,^{p} = 0$. By \eqref{ddivbt} this implies $D_{p}\bt_{ij}\,^{p} = 0$. Wherever $\ga_{i}$ is not zero, the one-form $|\bt|^{2}_{h}\ga_{i}$ is $h$-orthogonal to the linearly independent one-forms $\ga_{i}$ and $J_{i}\,^{p}\ga_{p}$, so vanishes identically. This completes the proof of \eqref{confein1}. The first equality in \eqref{confein2} is true by \eqref{confscal}.
Because $D_{(i}\ga_{j)} = 0$ and $\dad\ga = 0$ there holds $\hodge \ga_{i} = \dad d\ga_{i} = \sR_{h} \ga_{i}$. Substituting the preceding observations into the last line of \eqref{conserve} gives \eqref{confein2}. By \eqref{confein1} and \eqref{confein2}, $\ga^{\sharp\, i}D_{i}\uR_{h} = 8\ga^{\sharp\,p}\ga^{\sharp\,q}D_{(p}\ga_{q)} = 0$, showing $d\uR_{h}(\ga^{\sharp}) = 0$. Since $\ga^{\sharp}$ is $h$-Killing, there holds $\ga^{\sharp\,p}D_{p}\sR_{h} = 0$, and with $d\uR_{h}(\ga^{\sharp}) = 0$ and \eqref{confscal} this shows $\ga^{\sharp\,i}D_{i}|\bt|^{2}_{h} = 0$. As $\lie_{\ga^{\sharp}}\om = d\imt(\ga^{\sharp})\om = d\star \ga = 0$ there holds $2\lie_{\ga^{\sharp}}F = d\fd_{h}(\ga^{\sharp})\om$. Since $D_{(i}\ga_{j)} = 0$ there holds $d_{i}|\ga|^{2}_{h} = 2\ga^{\sharp\,p}D_{i}\ga_{p} = \ga^{\sharp\,p}d\ga_{ip} = \ga^{\sharp\,p}F_{pi}$. Hence $\lie_{\ga^{\sharp}}F = d(\imt(\ga^{\sharp})F) = d(d|\ga|_{h}^{2}) = 0$, showing $2\lie_{\ga^{\sharp}}F = d\fd_{h}(\ga^{\sharp})\om = 0$. Given $(h, X, B)$ as in the statement of the theorem, it is straightforward to check that $(\en, [h])$ is an AH structure with cubic torsion $\bt_{ij}\,^{k}$, aligned representative $\nabla$, and Gauduchon metric $h$. The curvatures of $\nabla$ and $D$ are related as in \eqref{confscal}, and there hold \eqref{ddivbt} and \eqref{conserve}. Together \eqref{confein1} and \eqref{ddivbt} show $E_{ij} = 0$, and so show the naive Einstein equations. Finally, substituting $D_{i}|\ga|_{h}^{2} = \ga^{\sharp\,p}d\ga_{ip}$ and $D^{p}d\ga_{pi} = -\sR_{h}\ga_{i}$ into \eqref{conserve} yields $\nabla_{i}R + 2\nabla^{p}F_{ip} = 0$. \end{proof} \begin{remark} In section \ref{naiveexample} it is explained how to construct naive Einstein AH structures which are not Einstein, and which illustrate that not all of the conclusions of Theorem \ref{classtheorem} hold for such structures.
\end{remark} \begin{corollary}\label{holocorollary} If $(\en, [h])$ is a Riemannian signature Einstein AH structure on a compact oriented surface $M$ of genus $g$ and $h \in [h]$ is a Gauduchon metric with associated Faraday primitive $\ga_{i}$ and cubic form $B_{ijk} \defeq \bt_{ij}\,^{p}h_{pk}$, then: \begin{enumerate} \item With respect to the complex structure determined by $[h]$ and the given orientation, $B^{(3,0)}$ and $\ga^{\sharp\,(1,0)}$ are holomorphic. Moreover, \begin{align} \label{ne4b}&2\lap_{h}|\ga|_{h}^{2} = |d\ga|_{h}^{2} - 2\uR_{h}|\ga|_{h}^{2},& &\text{everywhere},\\ \label{ne4} &\lap_{h}\log |\ga|_{h}^{2} = -\uR_{h},& &\text{wherever $|\ga|_{h}^{2} > 0$},\\ \label{ne4c}&\lap_{h}|B|_{h}^{2} = 4|d|B||_{h}^{2} + 3\sR_{h} |B|_{h}^{2} = 4|d|B||_{h}^{2} + 3\uR_{h} |B|_{h}^{2} + \tfrac{3}{4}|B|_{h}^{4},& &\text{wherever $|B|_{h}^{2} > 0$},\\ \label{ne4d}&\lap_{h}\log |B|_{h}^{2} = 3\sR_{h} = 3\uR_{h} + \tfrac{3}{4}|B|_{h}^{2}& &\text{wherever $|B|_{h}^{2} > 0$}. \end{align} \item If $(\en, [h])$ is not Weyl then it is exact, while if $(\en, [h])$ is not exact, then it is Weyl. \item If $g \geq 2$ then $(\en, [h])$ is exact. In this case the metric $\sth_{ij} \defeq |B|_{h}^{2/3}h_{ij}$ defined on the open submanifold $M^{\ast}\defeq \{|B|^{2} \neq 0\}$ is flat. \item If $g = 0$ then $(\en, [h])$ is Weyl. \end{enumerate} \end{corollary} \begin{proof} By the first and last equalities of \eqref{confein1} and Lemmas \ref{codazzilemma} and \ref{kdifferentialslemma}, $\ga^{\sharp\,(1,0)}$ and $B^{(3,0)}$ are holomorphic. Since by \eqref{confein1} of Theorem \ref{classtheorem}, $d\ga_{ij} = 2D_{i}\ga_{j}$, the second equation of \eqref{keromlap} of Lemma \ref{flatmetriclemma} applied to $\ga^{\sharp}$ (so with $k = -1$) reduces to \eqref{ne4b}, while by the second equation of \eqref{kato} there holds $2\lap_{h}\log |\ga| = -\sR_{h} = - \uR_{h} - \tfrac{1}{2}|\bt|_{h}^{2}$ wherever $\ga$ is non-zero. 
Multiplying through by $|\ga|^{2}$ and using that, by Theorem \ref{classtheorem}, $|\ga|^{2}|\bt|^{2} = 0$, this yields \eqref{ne4}. Similarly, equations \eqref{ne4c} and \eqref{ne4d} follow from \eqref{keromlap} and \eqref{confscal}. Since each of $\ga^{\sharp\,(1,0)}$ and $B^{(3, 0)}$ is holomorphic, the zeroes of each are isolated if it is not identically zero, and hence the same is true for $\ga^{\sharp}$ and $B$. By \eqref{confein1}, there holds $|\ga|^{2}_{h}|B|^{2}_{h} = 0$. Because when $\ga^{\sharp}$ or $B$ is not zero its zeroes are isolated, this implies that if $\ga^{\sharp}$ is somewhere not zero then $B$ is identically zero, and if $B$ is somewhere not zero then $\ga^{\sharp}$ is identically zero. Hence if $(\en, [h])$ is not Weyl it is exact and if $(\en, [h])$ is not exact it is Weyl. That $g \geq 2$ and $g = 0$ imply that $(\en, [h])$ is respectively exact and Weyl follows from the holomorphicity of $B^{(3,0)}$ and $\ga^{\sharp \,(1,0)}$ and Riemann-Roch. By Riemann-Roch, when $g > 1$, a holomorphic cubic differential has at most $6(g-1)$ zeroes, so $M^{\ast}$ is the complement of at most $6(g-1)$ points. By \eqref{ne4d} of Corollary \ref{holocorollary} there holds $|B|_{h}^{2/3}\sR_{\sth} = \sR_{h} - \lap_{h}\log |B|_{h}^{2/3} = 0$, so $\sth$ is flat on $M^{\ast}$. \end{proof} \begin{lemma}\label{sphereconstantlemma} If $(\en, [h])$ is a Riemannian signature Einstein AH structure on a compact orientable surface $M$, and $h \in [h]$ is any Gauduchon metric, then $\uR_{h}^{2} + \fd_{h}^{2}$ is equal to the constant $(\max_{M}\uR_{h})^{2}$. \end{lemma} \begin{proof} If $(\en, [h])$ is exact then $\uR_{h}$ is constant by \eqref{conservationcondition}, and $\uR_{h}^{2} + \fd_{h}^{2} = \uR_{h}^{2} = (\max_{M}\uR_{h})^{2}$. If $(\en, [h])$ is not exact, then by Corollary \ref{holocorollary}, $M$ is a sphere or a torus. In this case, by Lemma \ref{squarehololemma}, $\be = \uR_{h}^{2} + \fd_{h}^{2}$ is constant.
Since $|\ga|_{h}^{2}$ is not identically zero, it assumes somewhere a positive maximum; at such a point there holds $0 = 2d_{i}|\ga|^{2}_{h} = 4\ga^{\sharp\,p}D_{i}\ga_{p} = 2\ga^{\sharp\,p}F_{pi} = \fd_{h}(\star \ga)_{i}$, so at such a point $\fd_{h}$ vanishes, and $\be$ is equal to the value of $\uR_{h}^{2}$ at this point. Since by \eqref{confein2} of Theorem \ref{classtheorem}, $\uR_{h} - 4|\ga|^{2}_{h}$ is constant, the functions $\uR_{h}$ and $|\ga|_{h}^{2}$ assume their maximum values at the same points, so $\be = (\max_{M}\uR_{h})^{2}$. \end{proof} \subsection{} This section records a geometric interpretation of the integral curves of the Gauduchon metric dual of the Faraday primitive of the Gauduchon class. Recall that the \textbf{magnetic flow} determined on $\ctm$ by a pair $(g, \mu)$ comprising a metric $g$ and a closed two-form $\mu$ on $M$ is the Hamiltonian flow of the function $G(s) = \tfrac{1}{2}g_{\pi(s)}^{ij}s_{i}s_{j}$ on $\ctm$ with respect to the symplectic form $\Omega_{M} - \pi^{\ast}(\mu)$, where $\pi:\ctm \to M$ is the canonical projection and $\Omega_{M}$ is the canonical symplectic form. If $\mu$ is exact the magnetic flow is said to be \textbf{exact}. The \textbf{magnetic geodesics} are the images in $M$ of the integral curves of this flow, which are the solutions of the equation $D_{\dot{\si}}\dot{\si} = A(\dot{\si})$ where $A$ is the tensor defined by $A_{i}\,^{p}g_{pj} = \mu_{ij}$, $D$ is the Levi-Civita connection of $g$, and $\si(t)$ is a curve in $M$. Along a magnetic geodesic $\si(t)$, the energy $|\dot{\si}|^{2}_{g}$ is constant. If $M$ is an oriented surface then $\mu_{ij} = f\om_{ij}$, where $\om_{ij}$ is the K\"ahler form associated to $g$, and a routine computation shows that the geodesic curvature $\ka_{g}(\si)$ of a magnetic geodesic $\si$ having energy $|\dot{\si}|^{2}_{g} = e^{2c}$ is $e^{-c}f(\si)$. 
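The routine computation alluded to above can be sketched as follows, under the conventions, assumed here, that $\om_{ij} = J_{i}\,^{p}g_{pj}$ and that the geodesic curvature is computed with respect to the normal $J\dot{\si}/|\dot{\si}|_{g}$ (with other conventions the sign may differ). From the defining relation of $A$ there holds $A_{i}\,^{p}g_{pj} = f\om_{ij} = fJ_{i}\,^{p}g_{pj}$, so $A_{i}\,^{j} = fJ_{i}\,^{j}$ and a magnetic geodesic satisfies $D_{\dot{\si}}\dot{\si} = f(\si)J\dot{\si}$, whence
\begin{align*}
\ka_{g}(\si) = \frac{g(D_{\dot{\si}}\dot{\si}, J\dot{\si})}{|\dot{\si}|_{g}^{3}} = \frac{f(\si)\,g(J\dot{\si}, J\dot{\si})}{|\dot{\si}|_{g}^{3}} = \frac{f(\si)}{|\dot{\si}|_{g}} = e^{-c}f(\si).
\end{align*}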
\begin{theorem}\label{magnetictheorem} Let $(\en, [h])$ be a non-exact Einstein AH structure on a compact orientable surface $M$. Let $h \in [h]$ be a Gauduchon metric with Faraday primitive $\ga_{i}$. The integral curves of the vector field $\ga^{\sharp}$ are magnetic geodesics for the exact magnetic flow on $\ctm$ determined by $(h, \tfrac{1}{2}d\ga)$. The function $|\ga|_{h}^{2}$ is constant along a non-trivial integral curve of $\ga^{\sharp}$, and the geodesic curvature of such an integral curve is the restriction to the curve of $-\tfrac{1}{4}|\ga|_{h}^{-1}\fd_{h}$, which is constant along the integral curve. The integral curves of $\ga^{\sharp}$ passing through the points where $|\ga|_{h}^{2}$ attains its maximum value are $D$-geodesics; in particular, at least one integral curve of $\ga^{\sharp}$ is a $D$-geodesic. The non-trivial integral curves of $J(\ga^{\sharp}) = (\star \ga)^{\sharp}$ are projective geodesics of $D$. \end{theorem} \begin{proof} By Corollary \ref{holocorollary}, $(\en, [h])$ is Weyl and $M$ is a torus or a sphere. From \eqref{confein1} there follows \begin{align}\label{maggeo} -4D_{\ga^{\sharp}}\ga^{\sharp} = \fd_{h}J(\ga^{\sharp}) = \fd_{h}(\star \ga)^{\sharp} = 2(d|\ga|_{h}^{2})^{\sharp}. \end{align} The tensor $A_{i}\,^{j}$ such that $A_{i}\,^{p}h_{pj} = -\tfrac{1}{2}F_{ij} = -\tfrac{1}{4}\fd_{h}\om_{ij}$ is $-\tfrac{1}{4}\fd_{h}J_{i}\,^{j}$, so it follows from the first equality of \eqref{maggeo} that the integral curves of $\ga^{\sharp}$ are magnetic geodesics. By the remarks preceding the statement of the theorem, the geodesic curvature $\ka_{h}(\si)$ of an integral curve $\dot{\si}(t) = \ga^{\sharp}_{\si(t)}$ is $-\tfrac{1}{4}\fd_{h}|\dot{\si}|_{h}^{-1}$, which is constant along $\si$ by Theorem \ref{classtheorem}. Because $(\en, [h])$ is not exact, $|\ga|^{2}_{h}$ assumes a positive maximum at some point of $M$. Since, by \eqref{maggeo}, $2d|\ga|_{h}^{2} = \fd_{h}\star \ga$, at such a point it must be that $\fd_{h} = 0$.
Thus $\fd_{h}$ has a zero which is not a zero of $\ga$, and the integral curve of $\ga^{\sharp}$ passing through this zero of $\fd_{h}$ has geodesic curvature $0$, so is a $D$-geodesic. Because $\ga^{\sharp}$ is conformal Killing there holds $\lie_{\ga^{\sharp}}J = 0$, from which follows $[\ga^{\sharp}, J\ga^{\sharp}] = 0$. Using this and \eqref{maggeo} yields $D_{J\ga^{\sharp}}J\ga^{\sharp} = J(D_{J\ga^{\sharp}}\ga^{\sharp}) = J(D_{\ga^{\sharp}}J\ga^{\sharp}) = -D_{\ga^{\sharp}}\ga^{\sharp} = \tfrac{1}{4}\fd_{h}J\ga^{\sharp}$, which shows that the non-trivial integral curves of $J\ga^{\sharp}$ are projective geodesics of $D$. \end{proof} This suggests viewing these particular exact magnetic flows as \textit{Einstein} and raises the question of whether there is a good notion of Einstein magnetic flows in higher dimensions. Theorem \ref{magnetictheorem} was suggested by the explicit models of Einstein Weyl structures described in section \ref{spheretorussection}, from which more precise information can be extracted. From the discussion following equation \eqref{spheresrfh} at the end of section \ref{spheremetricoccurssection} it follows that (with the setting and notations as in the statement of Theorem \ref{magnetictheorem}) in the case of the sphere the vector field $\ga^{\sharp}$ has two zeroes, and its integral curves, which by Theorem \ref{magnetictheorem} are magnetic geodesics, are simple closed circles separating these zeroes. Among these simple closed magnetic geodesics, there is a unique one of maximal energy on which $|\ga|^{2}_{h}$ attains its maximum value and $\fd_{h}$ vanishes, and which is moreover a $D$-geodesic, while for each positive energy below this maximum there are precisely two simple closed magnetic geodesics occurring as integral curves of $\ga^{\sharp}$. 
\subsection{} If a Riemannian signature AH structure $(\en, [h])$ on a compact, oriented surface $M$ is exact and Weyl, then the aligned representative $\nabla$ is the Levi-Civita connection of any Gauduchon metric $h \in [h]$, and \eqref{confscal} shows that $\uR_{h} = \sR_{h}$, so that $(\en, [h])$ is moreover Einstein just when $h$ has constant scalar curvature, that is, $h$ is a constant curvature metric. Thus in this case $(\en,[h])$ is naturally identified with a positive homothety class of constant curvature metrics; since $[h]$ contains a unique such class, $(\en, [h])$ can be identified with $[h]$, and such an $(\en, [h])$ will be said simply to be \textit{a conformal structure}. In this case the weighted scalar curvature is parallel, and positive, zero, or negative according to whether the genus $g$ is $0$, $1$, or at least $2$. Alternatively, such an AH structure will be said to be \textit{generated} by its representative constant curvature metric. \subsection{} Since, for a $\binom{p}{q+1}$ tensor $Q$, rescaling $h$ homothetically by $r \in \reap$ causes $|Q|_{h}^{2}|\det h|^{1/2}$ to rescale by $r^{p-q}$, on a compact surface the $L^{2}$-norm $||Q||_{h}^{2} = \int_{M}|Q|_{h}^{2}\,d\vol_{h}$ of a $\binom{p}{p+1}$ tensor $Q$ depends only on the conformal class of $h$, and not on the choice of $h$ itself, so in this case it makes sense to write $||Q||^{2} = ||Q||_{h}^{2}$, dropping the subscript indicating dependence on $h$. Let $(\en, [h])$ be a Riemannian Einstein AH structure on a compact, oriented surface $M$. Let $h \in [h]$ be a Gauduchon metric with associated Faraday primitive $\ga_{i}$. Then $||\bt||_{h}^{2} = \int_{M} \nbt$ and $||\ga||_{h}^{2}$ are unchanged if $h$ is homothetically rescaled.
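Explicitly, the homothety computation runs as follows. Under $\hat{h} = rh$ with $r \in \reap$, a $\binom{p}{q+1}$ tensor $Q$ satisfies
\begin{align*}
&|Q|_{\hat{h}}^{2} = r^{p-q-1}|Q|_{h}^{2}, && |\det \hat{h}|^{1/2} = r|\det h|^{1/2}, && d\vol_{\hat{h}} = r\,d\vol_{h},
\end{align*}
since $|Q|_{\hat{h}}^{2}$ is formed by contracting $p$ times with $\hat{h}_{ij} = rh_{ij}$ and $q+1$ times with $\hat{h}^{ij} = r^{-1}h^{ij}$; hence $||Q||_{\hat{h}}^{2} = r^{p-q}||Q||_{h}^{2}$, which is unchanged precisely when $q = p$. For example, $\ga_{i}$ and $\bt_{ij}\,^{k}$ are respectively $\binom{0}{1}$ and $\binom{1}{2}$ tensors, so $||\ga||^{2}$ and $||\bt||^{2}$ are homothety invariant.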
Although the $L^{2}$ norm of a vector field is changed by rescaling $h$, the $L^{2}$ norm $||\ga^{\sharp}||_{h}^{2} = ||\ga||_{h}^{2}$ is not, because $\ga^{\sharp}$ also rescales when $h$ is rescaled, in a way that compensates for the change in norm. Similarly, for the cubic form $B_{ijk} \defeq \bt_{ij}\,^{p}h_{pk}$ there holds $||B||_{h}^{2} = ||\bt||^{2}$. Whether there will be written $||\ga||^{2}$, $||\ga^{\sharp}||_{h}^{2}$, $||B||_{h}^{2}$, or $||\bt||^{2}$ will depend on the emphasis desired. From \eqref{confein2} it follows that there is a constant $\ka$ such that $\uR_{h} = 4|\ga|_{h}^{2} + \ka$. This $\ka$ does depend on $h$ in that rescaling $h$ by $r \in \reap$ rescales $\ka$ by $r^{-1}$. Integrating \eqref{confscal} against $d\vol_{h}$ using the Gau\ss-Bonnet theorem yields \begin{align}\label{vortex} \begin{split} 8\pi(1-g) & = \int_{M}\sR_{h}\,d\vol_{h} = \tfrac{1}{4}||\bt||^{2} + \int_{M}\uR_{h}\,d\vol_{h} = \tfrac{1}{4}||\bt||^{2} + 4||\ga||^{2} + \ka \vol_{h}(M), \end{split} \end{align} in which $g$ is the genus of $M$. The number $\nv \defeq \ka \vol_{h}(M)$ does not depend on the choice of Gauduchon metric. For reasons explained in section \ref{vortexsection}, it is called the \textbf{vortex parameter} of the Riemannian Einstein AH structure on the compact, oriented surface $M$. Lemma \ref{vortexlemma} follows from \eqref{vortex}. \begin{lemma}\label{vortexlemma} For a Riemannian Einstein AH structure $(\en, [h])$ on a compact, orientable surface of genus $g$, the vortex parameter $\nv$ satisfies $\nv \leq 8\pi(1 - g)$, with equality if and only if $(\en, [h])$ is the AH structure generated by a constant curvature metric. \end{lemma} If $(\en, [h])$ is exact Einstein, then $\ka = \uR_{h}$, so $\nv = \uR_{h}\vol_{h}(M) = \int_{M}\uR_{h} \,d\vol_{h} = \int_{M}R$ is the total weighted scalar curvature. In particular, this holds if the genus is greater than $1$, or the genus equals $1$ and $(\en, [h])$ is not Weyl.
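To spell out the derivation of Lemma \ref{vortexlemma}, rearranging \eqref{vortex} gives
\begin{align*}
\nv = 8\pi(1-g) - \tfrac{1}{4}||\bt||^{2} - 4||\ga||^{2} \leq 8\pi(1-g),
\end{align*}
with equality if and only if $||\bt||^{2} = 0$ and $||\ga||^{2} = 0$, that is, if and only if $(\en, [h])$ is Weyl and exact, hence generated by a constant curvature metric.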
\subsection{} Combining Theorem \ref{classtheorem}, Corollary \ref{holocorollary}, Lemma \ref{vortexlemma}, and making some arguments to relate topological conditions with the sign of the weighted scalar curvature yields Theorem \ref{scalarexacttheorem}, which is the principal structural result about Riemannian Einstein AH structures on compact surfaces. Recall from the introduction the definition of a (strictly) convex flat real projective structure. \begin{theorem}\label{scalarexacttheorem} For a Riemannian signature Einstein AH structure $(\en, [h])$ on a compact oriented surface $M$ of genus $g$ there holds one of the following mutually exclusive possibilities. \begin{list}{(\arabic{enumi}).}{\usecounter{enumi}\leftmargin=.2cm} \renewcommand{\theenumi}{\{\arabic{enumi}\}} \item $(\en, [h])$ is exact and Weyl, so identified with the unique positive homothety class of constant curvature metrics contained in the underlying conformal structure $[h]$. There holds $\nv = 8\pi(1-g)$. \item\label{neg0} $R$ is negative and parallel, $(\en, [h])$ is exact and not Weyl, and $g \geq 2$. $(\en, [h])$ is projectively flat and conjugate projectively flat, and both $\en$ and $\ben$ are strictly convex. A distinguished metric $h \in [h]$ has scalar curvature of the form \begin{align}\label{negexactcurv} \sR_{h} = \tfrac{1}{4}\left(|\bt|_{h}^{2} - \tfrac{||\bt||^{2}}{\vol_{h}(M)}\right) + \tfrac{8\pi(1-g)}{\vol_{h}(M)}, \end{align} and $\sR_{h}$ is everywhere non-positive and somewhere negative. The cubic differential $B^{(3,0)}$, where $B_{ijk} \defeq \bt_{ij}\,^{p}h_{pk}$, is holomorphic. On the open submanifold $M^{\ast}\defeq \{|B|^{2} \neq 0\}$, the metric $\sth_{ij} \defeq |B|_{h}^{2/3}h_{ij}$ is flat. There holds $\nv < 8\pi(1-g)$. \item\label{neg1} $R$ is negative and parallel, $(\en, [h])$ is exact and not Weyl, and $M$ is a torus.
A distinguished metric $h$ is flat and $B_{ijk} \defeq \bt_{ij}\,^{p}h_{pk}$ is parallel with respect to the Levi-Civita connection of $h$, so, in particular, has constant non-zero norm. $(\en, [h])$ is projectively flat and conjugate projectively flat, and both $\en$ and $\ben$ are convex but not strictly convex. There holds $\nv = -\tfrac{1}{4}||B||_{h}^{2} < 0$. \item\label{r0} $R = 0$, $(\en, [h])$ is Weyl and closed but not exact, and $M$ is a torus. A Gauduchon metric $h \in [h]$ is flat, the Faraday primitive $\ga_{i}$ of $h$ is parallel with respect to the Levi-Civita connection of $h$, and $\ga^{(1,0)}$ is holomorphic. The aligned representative of $(\en, [h])$ is affinely flat, and its $(1,0)$ part is a holomorphic affine connection. There holds $\nv = -4||\ga||_{h}^{2} < 0$. \item\label{rvarytorus} $R$ is somewhere positive and somewhere negative, $(\en, [h])$ is Weyl and not closed, and $M$ is a torus. For a Gauduchon metric $h \in [h]$ with Faraday primitive $\ga_{i}$ the scalar curvature is $\sR_{h} = 4\left(|\ga|^{2}_{h} - \tfrac{||\ga||^{2}}{\vol_{h}(M)}\right)$, and there holds $\nv = -4||\ga||^{2} < 0$. \item\label{ps6} $R$ is somewhere positive, $(\en, [h])$ is Weyl and not closed, and $M$ is a sphere. A Gauduchon metric $h \in [h]$ with Faraday primitive $\ga_{i}$ has scalar curvature of the form \begin{align}\label{poscurv} \sR_{h} = \uR_{h} = 4\left(|\ga|_{h}^{2} - \tfrac{||\ga||^{2}}{\vol_{h}(M)}\right)+ \tfrac{8\pi}{\vol_{h}(M)}. \end{align} On the open submanifold $M^{\ast}\defeq \{|\ga|^{2} \neq 0\}$, the metric $\sth_{ij} \defeq |\ga|_{h}^{-2}h_{ij}$ is flat. There holds $\nv = 8\pi - 4||\ga||_{h}^{2} < 8\pi$. \end{list} \end{theorem} \begin{proof} In this proof $h\in [h]$ is always a Gauduchon metric with Faraday primitive $\ga_{i}$, and $B_{ijk} = \bt_{ij}\,^{p}h_{kp}$. By \eqref{confein2}, $\ka \defeq \uR_{h} -4|\ga|_{h}^{2}$ is constant. The theorem follows by assembling the following claims.
\noindent \begin{list}{[\alph{enumi}].}{\usecounter{enumi}\leftmargin=.2cm} \renewcommand{\theenumi}{[\alph{enumi}]} \item If $(\en, [h])$ is not exact then $g < 2$, by Corollary \ref{holocorollary}, and $R$ is not everywhere negative, for, by \eqref{ne4}, at a point at which $|\ga|$ attains its maximum value there holds $\uR_{h} = -2\lap_{h}\log |\ga| \geq 0$. \item If $(\en, [h])$ is closed, then, by \eqref{conservationcondition}, $R$ is parallel, and it follows from Lemma \ref{2deinsteinlemma} that $(\en, [h])$ is projectively flat and conjugate projectively flat. If, moreover, $R = 0$, then by \eqref{rijkl} the aligned representatives of $(\en, [h])$ and its conjugate are flat. \item \label{listb} If $(\en, [h])$ is not exact and $R$ is non-positive, then, by the maximum principle applied to \eqref{ne4b}, $\ga$ is closed, and so by \eqref{confein1}, $\ga$ is parallel. In particular $(\en, [h])$ is closed and $|\ga|^{2}_{h}$ is a non-zero constant. By \eqref{ne4} this forces $R \equiv 0$. Since $R$, $E_{ij}$ and $F_{ij}$ all vanish, the aligned representative $\nabla \in \en$ is affinely flat. By Corollary \ref{holocorollary}, since $(\en, [h])$ is not exact, it is Weyl, and so $\sR_{h} = \uR_{h}$ by \eqref{confscal}, which by Gau\ss-Bonnet forces $g = 1$. Hence $M$ is a torus and a Gauduchon metric is flat with parallel Faraday primitive. By \eqref{db2} there holds $-2\delbar \ga^{(1,0)} = F^{(1,1)} + \dad_{h}\ga\, \bar{h}^{(1,1)} = 0$, so the one-form $\ga^{(1,0)}$ is holomorphic, and hence the complex affine connection $\nabla^{1,0} = D^{1,0} - 2\ga^{(1,0)}$ is holomorphic. \item \label{listd} If $R$ is non-positive and somewhere negative, then $(\en, [h])$ must be exact by \ref{listb}, and so $R$ is parallel by \eqref{conservationcondition}. Since $R$ is somewhere negative and parallel, it is strictly negative.
Since $\uR_{h}$ is constant, $\sR_{h} = \uR_{h} + \tfrac{1}{4}|B|_{h}^{2}$ and $|B|^{2}_{h}$ assume their respective maximum values on $M$ at the same points. At a point at which $|B|^{2}$ takes its maximum value, \eqref{ne4c} yields $0 \geq \lap_{h}|B|^{2}_{h} = 3\sR_{h}|B|^{2}_{h}$, so at such a point $\sR_{h} \leq 0$. Since such a point is also a maximum of $\sR_{h}$, this shows $\sR_{h} \leq 0$. Solving \eqref{vortex} for $\ka$ shows that $\sR_{h}$ has the form \eqref{negexactcurv}. If $\sR_{h}$ is identically zero, integrating \eqref{negexactcurv} shows that $g = 1$; hence if $g > 1$ then $\sR_{h}$ must be somewhere negative. Conversely, if $g = 1$ then since $\sR_{h} \leq 0$, integrating \eqref{negexactcurv} shows that $\sR_{h}$ is identically zero, so that the distinguished metric $h$ is flat. In this case $0 = 4\sR_{h} = 4\uR_{h} + |B|_{h}^{2}$, so $|B|^{2}_{h}$ is constant, equal to $-4\uR_{h}$. \item \label{liste} If $g = 1$ and $(\en, [h])$ is exact then $R\leq 0$ by \eqref{vortex}. Since $R$ is parallel, either $R$ is identically $0$, in which case $(\en, [h])$ is Weyl, or $R$ is negative, in which case $\uR_{h}$ is a negative constant. Since $\uR_{h}$ is constant, $\sR_{h} = \uR_{h} + \tfrac{1}{4}|B|_{h}^{2}$ and $|B|_{h}^{2}$ assume their maximum values at the same points. By the same argument as in \ref{listd} there holds $\sR_{h} \leq 0$, and by Gau\ss-Bonnet this means $\sR_{h}$ is identically zero. Since $\uR_{h}$ is constant, this implies $|B|_{h}^{2}$ is constant. Since $B^{(3,0)}$ is holomorphic, it follows from \eqref{kato} of Lemma \ref{flatmetriclemma} that $0 = 2|d|B||^{2} = |DB|_{h}^{2}$, so that $B$ is parallel with respect to $h$. \item \label{listh} If $g \geq 2$, $(\en, [h])$ is exact by Corollary \ref{holocorollary} and hence $R$ is parallel. By \eqref{vortex}, $R$ is negative. \item \label{listg} If $g = 0$, $(\en, [h])$ is Weyl by Corollary \ref{holocorollary}. 
Solving \eqref{vortex} for $\ka$ and substituting into $\sR_{h} = \uR_{h} = 4|\ga|^{2}_{h} + \ka$ yields \eqref{poscurv}. Since by \eqref{vortex}, $\int_{M}R = 8\pi$, $R$ must be positive somewhere. \item \label{listl} If $R$ is somewhere negative and somewhere positive then it is not parallel, and so by \eqref{conservationcondition} $(\en, [h])$ is not closed. Hence, by Corollary \ref{holocorollary}, $(\en, [h])$ is Weyl. In this case, since $\uR_{h}$ is somewhere negative, $\nv$ is negative. \item \label{listf} If $g = 1$ and $(\en, [h])$ is not exact, then it is Weyl. By \eqref{vortex}, $0 = \int_{M}R$, so either $R$ is identically $0$, or $R$ is somewhere positive and somewhere negative. In the latter case, $(\en, [h])$ is not closed by \ref{listl}, and so from \eqref{vortex} it follows that $\nv = - 4||\ga||_{h}^{2} < 0$. \item \label{listi} If $R$ is non-negative and somewhere positive then by \eqref{confscal} the same is true of $\sR_{h}$, and so $g = 0$ by Gau\ss-Bonnet and thus $(\en, [h])$ is Weyl by Corollary \ref{holocorollary}. If $(\en, [h])$ is moreover closed then the Einstein condition implies $R$ is parallel, and so everywhere positive, and hence $(\en, [h])$ is exact by Lemma \ref{parallelexactlemma}. Thus in this case $(\en, [h])$ is exact Weyl and a distinguished metric has constant curvature. \item \label{listj} That the flat projective structures in \ref{neg0} and \ref{neg1} are properly convex follows from Theorem \ref{convextheorem}, which is deferred to section \ref{convexsection} because its proof uses a point of view more conveniently introduced later. That the projective structures of \ref{neg0} are strictly convex while those of \ref{neg1} are not follows from Theorem $1.1$ of Y.
Benoist (\cite{Benoist-convexesdivisblesI}), that a discrete group which divides some properly convex domain in the projective sphere is Gromov hyperbolic if and only if the domain is strictly convex, coupled with the observation that the fundamental group of a surface of genus $g > 1$ is Gromov hyperbolic, while that of the torus is not. \qedhere \end{list} \end{proof} In section \ref{constructionsection} it will be shown that all the possibilities identified in Theorem \ref{scalarexacttheorem} actually occur. \subsection{} Item \ref{listd} of the proof of Theorem \ref{scalarexacttheorem} shows that for an Einstein AH structure on a surface of genus $g > 1$ the norm squared of the cubic torsion with respect to a Gauduchon metric $h \in [h]$ satisfies the equivalent pointwise bounds \begin{align}\label{calabibound} &|B|_{h}^{2} = -4\uR_{h} + 4\sR_{h} \leq -4\uR_{h}, & &|B|_{h}^{2} - ||B||_{h}^{2}\vol_{h}(M)^{-1} \leq 32\pi(g-1)\vol_{h}(M)^{-1}. \end{align} The pullback of such an Einstein AH structure to the universal cover of $M$ can be identified with that induced on an affine hypersphere asymptotic to the cone over the developing map image of the universal cover (see section \ref{convexsection}), and then the non-positivity of the scalar curvature of a Gauduchon metric is the conclusion of the main theorem (Theorem $5.1$) of Calabi's \cite{Calabi-completeaffine}. Here this non-positivity has been given a direct, autonomous proof. Based on these considerations it is reasonable to expect the following. \begin{theorem} On an oriented surface $M$, let $(\en, [h])$ be an exact Einstein AH structure with negative weighted scalar curvature for which a distinguished metric is complete. Then the curvature of a distinguished metric is non-positive.
\end{theorem} \begin{proof} Since the curvature of a distinguished metric $h \in [h]$ is $\uR_{h} + \tfrac{1}{4}|B|_{h}^{2}$ (the notations are as usual), it is equivalent to show that the function $u \defeq |B|_{h}^{2}$ is bounded above by the constant $-4\uR_{h}$. There holds \eqref{ne4c}, and so $u$ satisfies a differential inequality of the form $\lap_{h}u \geq Bu^{2} - Au$, where $A = -3\uR_{h}$ and $B = 3/4$. A theorem of Cheng and Yau, from \cite{Cheng-Yau-maximalspacelike} and \cite{Cheng-Yau-affinehyperspheresI}, a statement and proof of which can be found as Theorem $6.6$ of \cite{Fox-ahs}, shows that for a complete metric $g$ with Ricci curvature bounded from below, a differential inequality of the form $\lap_{g}v \geq bv^{1+\si} - av$, with positive $b$ and holding where the non-negative smooth function $v$ is positive, implies the upper bound $v \leq |a/b|^{1/\si}$. Applying this with $\si = 1$, $v = u$, $a = A$ and $b = B$ yields $u \leq A/B = -4\uR_{h}$, which is the desired bound. \end{proof} There is no similar lower bound for $\sR_{h}$ in the case \ref{ps6} of Theorem \ref{scalarexacttheorem}. Suppose the Einstein AH structure $(\en, [h])$ is not exact, so that it is Weyl, and $M$ must be a sphere or a torus. If the constant $\ka = \uR_{h} - 4|\ga|_{h}^{2}$ is negative then the constant function $\log(4^{-1} |\ka|)$ solves \begin{align}\label{wga} \lap_{h}\phi + \ka + 4e^{\phi} = 0. \end{align} By \eqref{ne4} the function $\psi = \log |\ga|_{h}^{2}$ solves \eqref{wga} on the complement $M^{\ast}$ of the zero set of $\ga$, which is discrete because $h^{ip}\ga_{p}$ is the real part of a holomorphic vector field. Since $\psi$ tends to $-\infty$ at the zeroes of $\ga$ it is tempting to conclude that $\psi$ is bounded from above by $\log(4^{-1} |\ka|)$, but the standard comparison argument fails because in the operator $\lap_{h}\phi + \ka + 4e^{\phi}$ the zeroth order term is increasing in $\phi$, while the routine application of the maximum principle needs it to be non-increasing.
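Explicitly, that the constant function $\log(4^{-1}|\ka|)$ solves \eqref{wga} when $\ka < 0$ is a one-line check:
\begin{align*}
\lap_{h}\log(4^{-1}|\ka|) + \ka + 4e^{\log(4^{-1}|\ka|)} = 0 + \ka + |\ka| = 0.
\end{align*}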
In section \ref{spheremetricoccurssection} it is shown explicitly how to construct an Einstein Weyl structure on the two-sphere $\sphere$ for which $\ka$ has an arbitrarily negative value. \section{Relation with the Abelian vortex equations}\label{vortexsection} In this section it is shown that an exact Einstein AH structure determines a solution of the Abelian vortex equations. On the other hand, an Einstein Weyl structure gives rise to a solution of equations superficially similar to the vortex equations, but differing from them by a change of sign in one term. Let $M$ be a compact manifold and let $(g, J, \om)$ be a K\"ahler structure on $M$. Let $\lne$ be a smooth complex line bundle over $M$. The \textbf{Abelian vortex equations} with parameter $\tau$ are the following equations for a triple $(\nabla, k, s)$, in which $k$ is a Hermitian metric on $\lne$, $\nabla$ is a Hermitian connection on $(\lne, k)$, and $s$ is a smooth section of $\lne$: \begin{align}\label{vortexeqns} &\Om^{(0,2)} = 0,& &\delbar_{\nabla}s = 0,& &\j \dlef(\Om) + \tfrac{1}{2}|s|_{k}^{2} = \tfrac{1}{2}\tau. \end{align} Here $\Om_{ij}$ is the curvature of $\nabla$, viewed as a real-valued two-form on $M$; $\delbar_{\nabla}$ is the $(0,1)$ part of $\nabla$; and $\dlef$ is the dual Lefschetz operator given on $(1,1)$ forms by $\dlef(A) = -\om^{\al\bar{\be}}A_{\al\bar{\be}} = -\j A_{\si}\,^{\si}$. The first two equations say that $\delbar_{\nabla}$ is a holomorphic structure on $\lne$ with respect to which $s$ is a holomorphic section, while the third equation is something like an Einstein equation. A solution of \eqref{vortexeqns} is \textbf{non-trivial} if $s$ is not identically zero. The trivial solutions correspond to holomorphic structures on $\lne$; a precise statement is Theorem $4.7$ of \cite{Bradlow}. The basic theorem about the Abelian vortex equations on a surface is the following.
\begin{theorem}[\cite{Noguchi}, \cite{Bradlow}, \cite{Garcia-Prada}]\label{vortextheorem} Let $M$ be a compact surface equipped with a K\"ahler metric $(g, J)$. Let $\lne$ be a smooth complex line bundle with a fixed Hermitian metric $k$. Let $D$ be an effective divisor of degree equal to $\deg(\lne)$. There exists a non-trivial solution $(s, \nabla)$ of the vortex equations \eqref{vortexeqns}, unique up to gauge equivalence, if and only if $4\pi\deg(\lne) < \tau \vol_{g}(M)$. Moreover the holomorphic line bundle and section canonically associated to $D$ are $(\lne, \delbar_{\nabla})$ and $s$. \end{theorem} The space of effective divisors on $M$ of a given degree $r$ is the symmetric product $S^{r}(M)$ of $M$, and Theorem \ref{vortextheorem} shows that $S^{\deg(\lne)}(M)$ is in bijection with the moduli space of gauge equivalence classes of vortex solutions on $\lne$. It is shown in \cite{Garcia-Prada}, by symplectic reduction, that this moduli space carries a K\"ahler structure. Note that \textit{a priori} the K\"ahler metric $g_{ij}$ and the Hermitian metric $k$ on $\lne$ are unrelated. However, in what follows the interest will be in solutions for which $\lne$ \textit{qua} holomorphic bundle is identified with a power $\cano^{p}$ of the canonical bundle, and $k$ and $\nabla$ are the Hermitian metric and Hermitian connection induced by the underlying K\"ahler metric and connection on $M$. Equivalently, the corresponding effective divisor is a $p$-canonical divisor, that is, it lies in the $p$-fold multiple of the canonical divisor class. This motivates the following definition. A solution $(s, \nabla)$ of the Abelian vortex equations on $(M, g, J)$ is \textbf{$p$-canonical} if the divisor of $s$ is $p$-canonical; equivalently $s$ is a section of $\cano^{p}$ holomorphic with respect to the complex structure induced by $J$, and $\nabla$ is the Hermitian connection induced on $\cano^{p}$ by the Levi-Civita connection of $g$.
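For a $p$-canonical solution the condition of Theorem \ref{vortextheorem} can be made explicit: since $\deg \cano^{p} = p(2g - 2)$, where $g$ is the genus of $M$, the existence of a non-trivial solution is equivalent to
\begin{align*}
4\pi p(2g - 2) < \tau \vol_{g}(M).
\end{align*}
In particular, for $p = 3$ and the value $\tau = -3\ka$ arising below from an exact Einstein AH structure, this reads $\ka \vol(M) < 8\pi(1 - g)$, which is the strict form of the inequality of Lemma \ref{vortexlemma}.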
Let $M$ be a compact orientable surface of genus $g$ and let $(\en, [h])$ be an exact Riemannian signature Einstein AH structure. Let $h \in [h]$ be a Gauduchon metric and $B_{ijk} = \bt_{ij}\,^{p}h_{pk}$. Let the K\"ahler structure on $M$ be that determined by $h$ and the given orientation. Let $k = h$ be the Hermitian metric induced on $\cano^{3}$ by $h$. The Levi-Civita connection $D$ of $h$ induces a Hermitian connection, also denoted by $D$, on $\cano^{3}$, for which the induced holomorphic structure is the canonical one. It is claimed that for $\tau = -3\uR_{h}$, the section $s = (3/2)^{1/2}B^{(3,0)}$ of $\cano^{3}$ solves the vortex equations \eqref{vortexeqns}. Note that $|s|_{h}^{2} = (3/2)|B^{(3,0)}|^{2} = (3/4)|\bt|^{2}_{h}$. The second equation of \eqref{vortexeqns} is valid by construction. The curvature of $D$ on $\cano^{3}$ is $\Om_{ij} = (3\j \sR_{h}/2)\om_{ij}$, so that $\Om^{(0,2)} = 0$ and $\dlef(\Om) = (3\j/2)\sR_{h}$. Because $(\en, [h])$ is exact, $\uR_{h}$ is constant, and it follows from \eqref{confscal} that $3\sR_{h} = 3(\uR_{h} + \tfrac{1}{4}|\bt|^{2}_{h}) = -\tau + |s|^{2}_{h}$. Hence \begin{align*} \j \dlef(\Om) + (1/2)|s|_{k}^{2} - \tfrac{1}{2}\tau = -(3/2)\left(\sR_{h}- \tfrac{1}{4}|B|_{h}^{2} - \uR_{h}\right) = 0, \end{align*} which shows the claim. Note that the resulting solution to the vortex equations is trivial exactly when the cubic torsion vanishes identically, that is, when $(\en, [h])$ is the AH structure generated by a constant curvature metric. For the exact Einstein AH structure there holds $\ka = -\tau/3$, so the vortex parameter is $\nv = -\tfrac{\tau}{3}\vol_{h}(M)$, which explains the terminology for $\nv$. By Corollary \ref{holocorollary} if a Riemannian signature Einstein AH structure on a compact surface is not exact then it is Weyl. In this case, let $h \in [h]$ be a Gauduchon metric and $X^{i} = h^{ip}\ga_{p}$, where $\ga_{i}$ is the Faraday primitive of $h$. Let the K\"ahler structure on $M$ be that determined by $h$ and the given orientation. Let $k = h$ be the Hermitian metric induced on $\cano^{-1}$ by $h$.
The Levi-Civita connection $D$ of $h$ induces a Hermitian connection, also denoted by $D$, on $\cano^{-1}$, for which the induced holomorphic structure is the canonical one. By Theorem \ref{classtheorem} there is a constant $\ka$ such that $\sR_{h} = \uR_{h} = \ka + 4|X|^{2}_{h}$. Let $\tau = \ka$ and consider the section $s = 2^{3/2}X^{(1,0)}$ of $\cano^{-1}$. The curvature of $D$ on $\cano^{-1}$ is $\Om_{ij} = -(\j \sR_{h}/2)\om_{ij}$, so that $\Om^{(0,2)} = 0$ and $\dlef(\Om) = -(\j/2)\sR_{h}$. Hence \begin{align}\label{ewvortexlike} \j \dlef(\Om) - (1/2)|s|_{k}^{2} - \tfrac{1}{2}\tau = (1/2)\left(\sR_{h} - 4 |X|_{k}^{2} - \ka\right) = 0. \end{align} The equation \eqref{ewvortexlike} differs from \eqref{vortexeqns} by the change of sign on the $|s|_{k}^{2}$ term. This affects the interpretation of the equations. The vortex equations are the adaptation to compact surfaces of the Landau-Ginzburg equations modeling the macroscopic behavior of superconducting materials. In the context of superconductors the wrong sign on the $|s|_{k}^{2}$ term corresponds to a physically unreasonable negative sign on the quartic term in the free energy. While the equations \eqref{ewvortexlike} by themselves make sense, and, as follows from section \ref{spheretorussection}, have solutions, they are not the usual vortex equations, and a more satisfactory derivation of them is needed before any significance can be attached to their similarity with the vortex equations. Nonetheless, the preceding can be given the following uniform, albeit unmotivated, description. Let the setup be as in the two preceding paragraphs. Let $q$ be either $-1$ or $3$ and let $\si$ be correspondingly $X$ or $B$. Let $\ka$ be the constant such that $\uR_{h} = \ka + 4|\ga|^{2}_{h}$. Let $\tau = -q\ka$ and $s = 2^{(2-q)/2}|q|^{1/2} \si^{(q,0)}$.
Then $s$ is a section of $\cano^{q}$ solving the following modified vortex equations with respect to the holomorphic structure and Hermitian metric and connection on $\cano^{q}$ induced by $h$: \begin{align}\label{modifiedvortexeqns} &\Om^{(0,2)} = 0,& &\delbar_{\nabla}s = 0,& &2\j \dlef(\Om) + \sign(q)|s|_{k}^{2} = \tau. \end{align} Note that distinct exact Einstein AH structures need not determine gauge inequivalent solutions of \eqref{vortexeqns}. If $B^{(3, 0)}$ is replaced by $e^{\j\theta}B^{(3, 0)}$ for a constant $\theta$, the resulting vortex solutions are gauge equivalent. In section \ref{gaugesection} it is shown that the real part of $e^{\j\theta}B^{(3, 0)}$ is the cubic torsion of an Einstein AH structure with the same underlying conformal structure and Gauduchon metric. The solutions to the Abelian vortex equations with $\deg(D) = -3\chi(M)$ which arise in this way from exact Einstein AH structures are exactly the $3$-canonical solutions. This essentially means that on a compact, orientable surface of genus $g > 1$ the quotient of the moduli space of Einstein AH structures by the action of $\comt$ is the space of gauge equivalence classes of $3$-canonical Abelian vortices; see section \ref{constructionsection} for related discussion. The existence of $p$-canonical Abelian vortices is not immediate from Theorem \ref{vortextheorem}; it is demonstrated in Corollary \ref{vortexcorollary}. In particular, the existence of Einstein AH structures as in \ref{neg0} of Theorem \ref{scalarexacttheorem} is not immediate from the existence theorem for Abelian vortices; the complication is the additional requirement of compatibility of the underlying K\"ahler structure (the putative Gauduchon metric) with the holomorphic and metric structure on the line bundle.
However, note that the constraint on the vortex parameter given by Lemma \ref{vortexlemma} also follows from Theorem \ref{vortextheorem} and the reinterpretation of the Einstein AH equations in terms of the vortex equations. \section{Einstein AH structures on compact orientable surfaces of genus at least two}\label{constructionsection} In this section and the next the classification of Riemannian Einstein AH structures on compact, orientable surfaces is completed by showing that all the possibilities identified in Theorem \ref{scalarexacttheorem} are realized. Throughout $M$ is a compact, orientable surface of genus $g$. Theorem \ref{scalarexacttheorem} shows that a Riemannian Einstein AH structure on $M$ must determine data of one of the following forms: \begin{enumerate} \item A constant curvature metric. (Any $g$). \item A conformal structure and a non-trivial holomorphic vector field. ($g \in \{0, 1\}$). \item A conformal structure and a non-trivial holomorphic cubic differential. ($g \geq 1$). \end{enumerate} In order to complete the classification it is necessary to show how to construct from data of type $(2)$ or $(3)$ a Riemannian Einstein AH structure, and to analyze when two Riemannian Einstein AH structures are equivalent modulo $\diff^{+}(M)$. The second step is basically straightforward because of the uniformization theorem, and so the content of this section and the next is the analysis of the first step. For every $g$, the given data of a conformal structure and a holomorphic section determine an elliptic PDE for the conformal factor expressing a putative Gauduchon metric in terms of a background metric of constant scalar curvature. For $g \geq 2$ the existence of a unique solution of the resulting PDE is a straightforward application of standard elliptic PDE techniques. This is explained in the present section.
For $g \in \{0,1\}$ the uniqueness fails because the conformal automorphism group is large, but the holomorphic vector field induces an $S^{1}$ symmetry which can be used to reduce the PDE to an ODE which is easily solved. This is described in section \ref{spheretorussection}. \subsection{} On a smooth surface $M$, associate to a triple $(h, F, B)$ comprising a Riemannian metric $h$, a smooth function $F \in \cinf(M)$, and the real part $B$ of a smooth $k$-differential, the differential operator defined for $\phi \in \cinf(M)$ by \begin{align}\label{wdefined} \ahop(h, F, B)(\phi) \defeq \lap_{h}\phi - \sR_{h} + Fe^{\phi} + 2^{1-k}e^{(1-k)\phi}|B|_{h}^{2}. \end{align} It is convenient to include the constant factor $2^{1-k}$. More generally, there can be several $B$'s, for different $k$'s; e.g. for a holomorphic vector field $X^{(1,0)}$ and a holomorphic cubic differential $B^{(3,0)}$ it is convenient to write $\ahop(h, F, X, B)(\phi) = \lap_{h}\phi - \sR_{h} + Fe^{\phi} + 4e^{2\phi}|X|_{h}^{2} + \tfrac{1}{4}e^{-2\phi}|B|_{h}^{2}$. The metric $h$ will be called the \textbf{background} metric of the equation. Note that in the analysis of $\ahop$ the holomorphicity or not of $B^{(k,0)}$ plays \textit{no} role; it is important only for the properties of the objects constructed from the solutions. For $\mu, \la \in \cinf(M)$ there holds the scaling rule \begin{align}\label{wrescale} \begin{split} e^{\mu} \ahop(e^{\mu} h, e^{\la} F, e^{(1-k)\la/2}B)(\phi - \mu - \la) = \ahop(h, F, B)(\phi) - \lap_{h}\la, \end{split} \end{align} which follows by direct computation from $\lap_{e^{\mu}h} = e^{-\mu}\lap_{h}$, $\sR_{e^{\mu}h} = e^{-\mu}\left(\sR_{h} - \lap_{h}\mu\right)$, and $|B|^{2}_{e^{\mu}h} = e^{-k\mu}|B|^{2}_{h}$. \subsection{} Let $(\en, [h])$ be an AH structure on a compact orientable surface $M$ of genus $g$. Let $\nabla \in \en$ be its aligned representative and let $\tilde{h}_{ij} = e^{-\phi}h_{ij} \in [h]$ and $h \in [h]$ have Levi-Civita connections related by $\tD = D + 2\si_{(i}\delta_{j)}\,^{k} - h_{ij}\si^{k}$, in which $2\si_{i} \defeq -d\phi_{i}$ and $\si^{i} \defeq h^{ip}\si_{p}$.
Recall the notational conventions established in section \ref{riemannsurfacehodgestarsection}. Let $\ga_{i}$ be the Faraday primitive of $h$ and let $B_{ijk} \defeq \bt_{ij}\,^{p}h_{pk}$. Using \eqref{conformalscalardiff}, \eqref{confscal}, $\dad_{h}\ga = e^{-\phi}\dad_{\tilde{h}}\ga$, and $\lap_{h}\phi = e^{-\phi}\lap_{\tilde{h}}\phi$ there results \begin{align}\label{weqn} \begin{split} \ahop(\tilde{h}, \uR_{h}, B)(\phi) + \dad_{\tilde{h}}\ga& = \lap_{\tilde{h}}\phi - \sR_{\tilde{h}} + e^{\phi}\uR_{h} + \tfrac{1}{4}e^{-2\phi}|B|_{\tilde{h}}^{2} + \dad_{\tilde{h}}\ga\\ & = e^{\phi}\left(\lap_{h}\phi + \uR_{h} + \tfrac{1}{4}|\bt|^{2}_{h} + \dad_{h}\ga \right) - \sR_{\tilde{h}}= 0. \end{split} \end{align} Equation \eqref{weqn} can be used to construct AH structures. The metric $\tilde{h}$ will be treated as the background metric and $h$ (equivalently $\phi$) as the unknown. If $\tilde{h}_{ij}$, $\ga_{i}$, $\uR_{h}$, and $B_{ijk}$ are given, then solving \eqref{weqn} for $\phi$ yields an AH structure with cubic torsion $\bt_{ij}\,^{k} = h^{kp}B_{ijp}$ and scalar curvature $R = |\det h|^{1/2}\uR_{h}$, for which $h = e^{\phi}\tilde{h}$ is a representative metric with Faraday primitive $\ga_{i}$. It is convenient to seek Gauduchon $h$, in which case it must be that $\dad_{\tilde{h}}\ga = e^{\phi}\dad_{h}\ga = 0$, and the equation to be solved reduces to $\ahop(\tilde{h}, \uR_{h}, B)(\phi) = 0$. If the AH structure obtained by solving \eqref{weqn} is to be Einstein then it must be that $B^{(3,0)}$ is holomorphic, and there must be a constant $\ka$ and a holomorphic vector field $X^{(1,0)}$ such that $X^{p}h_{ip} = \ga_{i}$ is the Faraday primitive of $h$ and $\uR_{h} = 4|X|_{h}^{2} + \ka = 4e^{\phi}|X|^{2}_{\tilde{h}} + \ka$.
In \eqref{weqn} this yields \begin{align}\label{geneq} \ahop(\tilde{h}, \ka, X, B )(\phi) = \lap_{\tilde{h}}\phi - \sR_{\tilde{h}} +\ka e^{\phi} + 4e^{2\phi}|X|_{\tilde{h}}^{2} + \tfrac {1}{4}e^{-2\phi}|B|^{2}_{\tilde{h}} = 0, \end{align} as the equation to be solved. Of course, by Corollary \ref{holocorollary}, if there is to be a solution only one of $X$ and $B$ can be non-zero. It follows from \eqref{wrescale} that $\phi$ solves \eqref{geneq} if and only if for any $\mu \in \cinf(M)$ and $r \in \reap$ the function $\psi = \phi - \log r - \mu$ solves $\ahop(e^{\mu}\tilde{h}, r\ka, rX, r^{-1}B)(\psi) = 0$. The resulting metrics $\bar{h}_{ij} = e^{\psi}e^{\mu}\tilde{h}_{ij} = r^{-1}e^{\phi}\tilde{h}_{ij}$ and $h_{ij} = e^{\phi}\tilde{h}_{ij}$ are positively homothetic, while the resulting tensors $\ga_{i} = X^{p}h_{pi} = rX^{p}\bar{h}_{pi}$ and $\bt_{ij}\,^{k} = h^{kp}B_{ijp} = \bar{h}^{kp}r^{-1}B_{ijp}$ are the same, so that the AH structures resulting from these solutions are the same. Thus such rescaling is trivial from the point of view of constructing AH structures, and it is natural to restrict the allowable $\phi$ by imposing some normalization which eliminates this freedom. The scaling in $\tilde{h}$ given by $\mu$ is eliminated by fixing a convenient background metric $\tilde{h}$. The freedom in $r$ is most naturally eliminated by imposing some condition on the resulting metric $h = e^{\phi}\tilde{h}$, e.g. fixing the minimum of $\sR_{h}$. There are several possibilities. An obvious one is to require that $\uR_{h}$ take a specific value; this amounts to fixing $\ka$. Another is to demand that $\vol_{h}(M) = \int_{M}e^{\phi}\,d\vol_{\tilde{h}}$ have some prespecified value; as will be made precise below, this puts some conditions on $\ka$ necessary for the existence of solutions.
The curvature normalization is probably more natural from the geometric point of view, while the volume normalization is probably more natural from the point of view of partial differential equations. A function $\phi \in \cinf(M)$ is \textbf{volume normalized} with respect to $\tilde{h}$ if $\int_{M}e^{\phi}d\vol_{\tilde{h}} = \vol_{\tilde{h}}(M)$, in which case $\vol_{h}(M) = \vol_{\tilde{h}}(M)$. From Lemma \ref{vortexlemma} it follows that for there to exist a volume normalized $\phi$ solving \eqref{geneq} it is necessary that $\ka \leq 4\pi\chi(M)/\vol_{\tilde{h}}(M)$. Moreover, for equality to hold it is necessary that both $X$ and $B$ be identically zero, in which case \eqref{geneq} becomes simply $\sR_{e^{\phi}\tilde{h}} = e^{-\phi}\left(\sR_{\tilde{h}} - \lap_{\tilde{h}}\phi\right) = 4\pi\chi(M)/\vol_{\tilde{h}}(M)$. This is just the equation that $e^{\phi}\tilde{h}$ have constant curvature; in this case there is by the uniformization theorem a unique normalized $\phi$ solving the equation. Such a solution to \eqref{geneq} will be referred to as \textbf{uniformizing}. In summary, if $\phi$ is a $\cinf$ solution of \eqref{geneq} then $h_{ij} \defeq e^{\phi}\tilde{h}_{ij}$ is the Gauduchon metric with associated Faraday primitive $\ga_{i} = X^{p}h_{ip}$ of an Einstein AH structure having cubic torsion $\bt_{ij}\,^{k} = h^{kp}B_{ijp}$, scalar curvature $\uR_{h} = 4|X|_{h}^{2} + \ka$, and vortex parameter $\nv = \ka \vol_{h}(M)$. The natural choices for the background metric are the metric $\tilde{h} \in [h]$ having constant scalar curvature in $\{0, \pm 2\}$, and the metrics $\sth = |X|_{\tilde{h}}^{-2}\tilde{h}$ and $\sth = |B|_{\tilde{h}}^{2/3}\tilde{h}$ (which do not depend on the choice of $\tilde{h} \in [h]$), which are flat by Lemma \ref{flatmetriclemma}. The metrics $\sth$ are defined only on an open subset of $M$, so if they are to be used, boundary conditions on $\phi$ must be imposed.
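The necessity of the condition $\ka \leq 4\pi\chi(M)/\vol_{\tilde{h}}(M)$ can also be seen by integrating \eqref{geneq} directly: since $\int_{M}\lap_{\tilde{h}}\phi \,d\vol_{\tilde{h}} = 0$ and, by the Gau\ss-Bonnet theorem, $\int_{M}\sR_{\tilde{h}}\,d\vol_{\tilde{h}} = 4\pi\chi(M)$, a volume normalized solution $\phi$ of \eqref{geneq} satisfies
\begin{align*}
4\pi\chi(M) = \ka\vol_{\tilde{h}}(M) + \int_{M}\left(4e^{2\phi}|X|^{2}_{\tilde{h}} + \tfrac{1}{4}e^{-2\phi}|B|^{2}_{\tilde{h}}\right)d\vol_{\tilde{h}} \geq \ka\vol_{\tilde{h}}(M),
\end{align*}
with equality if and only if $X$ and $B$ vanish identically.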
\subsection{} The existence of a unique solution to $\ahop(h, k, B)(\phi) = 0$ when $\chi(M) \leq 0$, $B$ is somewhere non-zero, and $k \leq 4\pi\chi(M)/\vol_{h}(M)$ follows from Lemmas \ref{wuniqlemma} and \ref{wexistencelemma}. These lemmas are, however, stated more generally, so as to yield also Corollary \ref{vortexcorollary}, which shows how to associate to a Riemann surface equipped with a non-trivial holomorphic $p$-differential a canonical smooth metric in the given conformal structure, by solving the Abelian vortex equations. For a cubic holomorphic differential, the existence and uniqueness statements of Lemmas \ref{wuniqlemma} and \ref{wexistencelemma} for a metric of constant scalar curvature $-2$ on a compact oriented surface $M$ of genus greater than $1$ and $k = -2$, are Theorem $4.0.2$ of Loftin's \cite{Loftin-affinespheres}, and the proofs given here are very similar to Loftin's, which are themselves based on standard arguments as in \cite{Kazdan-Warner} or, particularly, \cite{Wolf-teichmuller}. The generality of allowing nonconstant $k$ is convenient for the example of section \ref{naiveexample}. \begin{lemma}\label{wuniqlemma} If $M$ is a compact, orientable surface with $\chi(M) \leq 0$, $h$ is a Riemannian metric, and, for an integer $p \geq 1$, $B$ is the real part of a non-trivial smooth $p$-differential, then the equation $\ahop(h, k, B)(\phi) = 0$ has at most one solution in $\cinf(M)$ for each $k \in \cinf(M)$ satisfying $k \leq 0$. \end{lemma} \begin{proof} Suppose $\phi, \psi \in \cinf(M)$ satisfy $\ahop(h, k, B)(\phi) = 0 = \ahop(h, k, B)(\psi)$ for some $k \in \cinf(M)$ with $k \leq 0$. Then \begin{align}\label{lapdiff} \lap_{h}(\phi - \psi) = - k (e^{\phi} - e^{\psi}) - 2^{1-p}(e^{(1-p)\phi} - e^{(1-p)\psi})|B|^{2}_{h}. \end{align} If $k$ is strictly negative the claim follows from the maximum principle applied to \eqref{lapdiff}.
If $k \leq 0$, it follows from \eqref{lapdiff} that \begin{align}\label{uniqineq} \begin{split} \lap_{h}(\phi - \psi)^{2} & = 2(\phi - \psi)\lap_{h}(\phi - \psi) + 2|d(\phi - \psi)|^{2}_{h} \\ & = 2|d(\phi - \psi)|^{2}_{h} - 2k(\phi - \psi)(e^{\phi} - e^{\psi}) - 2^{2-p}|B|^{2}_{h}(\phi - \psi)(e^{(1-p)\phi} - e^{(1-p)\psi}) \geq 2|d(\phi - \psi)|^{2}_{h} \geq 0, \end{split} \end{align} so that $(\phi - \psi)^{2}$ is subharmonic and therefore constant, since $M$ is compact. Write $\psi = \phi + c$ for a constant $c$, and substitute this into \eqref{lapdiff} to obtain $e^{-p\phi}|B|_{h}^{2}(1-e^{(1-p)c}) = 2^{p-1}k(e^{c} - 1)$. If not both $p = 1$ and $k \equiv 0$, then, since for $p > 1$ the signs of $e^{c} - 1$ and $1- e^{(1-p)c}$ are the same if $c \neq 0$, and, by hypothesis, $k \leq 0$ and $B$ is not identically zero, this can be only if $c = 0$. If both $p =1$ and $k \equiv 0$, there is no solution, for if $\ahop(h, k, B)(\phi) = 0$, then integration yields $0 \geq 4\pi\chi(M) = \int_{M}\sR_{h}\,d\vol_{h} = ||B||^{2}_{h}$, contradicting the assumption that $B$ is not identically zero. \end{proof} \begin{lemma}\label{genuszerolemma} Let $M$ be a smooth torus. Let $h$ be a flat Riemannian metric, and let $B$ be the real part of a non-trivial holomorphic $p$-differential for $p \geq 1$. Then $B$ is parallel and $|B|_{h}^{2}$ is constant. For each constant $\ka < 0$ the unique solution $\phi$ to the equation $\ahop(h, \ka, B)(\phi) = 0$ is the constant function $\tfrac{1}{p}\log\left(2^{1-p}|\ka|^{-1}|B|_{h}^{2}\right)$. \end{lemma} \begin{proof} By Corollary \ref{rrcorollary}, $B$ is $h$-parallel, and hence $|B|_{h}^{2}$ is a non-zero constant, so the given $\phi$ solves $\ahop(h, \ka, B)(\phi) = 0$. This is the unique solution by Lemma \ref{wuniqlemma}. \end{proof} Lemma \ref{rootestimatelemma} is needed in the proof of Lemma \ref{wexistencelemma}. \begin{lemma}\label{rootestimatelemma} Let $0 < p \in \integer$ and $0 < a, b \in \rea$.
The unique positive root $r_{1}$ of $f(r) = r^{p} - ar^{p-1} - b$ satisfies $a \leq r_{1} \leq a + b^{1/p}$. \end{lemma} \begin{proof} For $p = 1$ the positive root of $f$ is $a + b$, while for $p = 2$ it is $(a + \sqrt{a^{2} + 4b})/2 \leq a + b^{1/2}$, so suppose $p > 2$. Since $f$ is negative at $r = 0$, has a negative minimum at $a(1 - 1/p)$, is monotone decreasing on $(0, a(1 - 1/p))$ and monotone increasing on $(a(1-1/p), \infty)$, and satisfies $\lim_{r \to \infty}f(r) = \infty$, it has a unique positive real root $r_{1}$ which is greater than $a(1 - 1/p)$, and, since $a + b^{1/p} > a > a(1 - 1/p)$, it suffices to observe $f(a) = -b < 0$ and \begin{align*} \begin{split} f(a + b^{1/p}) & = (a + b^{1/p})^{p-1}b^{1/p} - b = \sum_{s = 0}^{p-2}\binom{p-1}{s}b^{(s+1)/p}a^{p-1-s} > 0. \end{split}\qedhere \end{align*} \end{proof} \begin{lemma}\label{wexistencelemma} Let $M$ be a compact, orientable surface with $\chi(M) < 0$. Let $h$ be a Riemannian metric, let $B$ be the real part of a smooth $p$-differential not everywhere zero, and let $k \in \cinf(M)$ be everywhere negative. Let $q = \min_{M}\sR_{h}$, $Q= \max_{M}\sR_{h}$, $P = \max_{M}|B|^{2}_{h}$, $\ka = \min_{M}k$ and $K = \max_{M}k$. If the curvature of $h$ is negative, then there is a unique solution $\phi$ to the equation $\ahop(h, k, B)(\phi) = 0$, and it satisfies \begin{align}\label{bbound} \ka^{-1}(\max_{M}\sR_{h}) \leq e^{\phi} \leq K^{-1}(\min_{M}\sR_{h}) + 2^{(1-p)/p}|K|^{-1/p}(\max_{M}|B|^{2/p}_{h}). \end{align} \end{lemma} \begin{proof} The uniqueness follows from Lemma \ref{wuniqlemma}.
Following the proof of Theorem $4.0.2$ in \cite{Loftin-affinespheres} the existence is demonstrated by applying Theorem V.$1.1$ of \cite{Schoen-Yau}, which shows that if a semi-linear elliptic equation $\lap_{h}u + F(x, u) = 0$ with $F \in \cinf(M\times \rea)$ on a compact manifold $M$ admits a $C^{2}$ supersolution $u^{+}$ and a $C^{2}$ subsolution $u^{-}$ such that $u^{-} \leq u^{+}$ then it admits a $\cinf$ solution $u$ such that $u^{-} \leq u \leq u^{+}$ (this is proved by a modification of a standard iteration argument, closely related versions of which can be found in section $2$ of the appendix to the fourth chapter of \cite{Courant-Hilbert-II} and section $9$ of \cite{Kazdan-Warner}). By hypothesis $Q < 0$. Since both $Q$ and $\ka$ are negative, $Q/\ka$ is a positive constant, and $\ahop(h, k, B)(\log (Q/\ka)) \geq -Q + kQ/\ka \geq 0$, so $\log(Q/\ka)$ is a subsolution. On the other hand, the polynomial $f(r) = K r^{p} - qr^{p-1} + 2^{1-p}P$ has a unique positive root $r_{1}$, which is no smaller than $q/K$, which in turn is no smaller than $Q/\ka$. As $\ahop(h, k, B)(\log r_{1}) \leq r_{1}^{1-p}f(r_{1}) \leq 0$, $\log r_{1}$ is a supersolution. There follows $\log (Q/\ka) \leq \phi \leq \log r_{1}$. The bound \eqref{bbound} follows from Lemma \ref{rootestimatelemma} with $a = q/K$ and $b = 2^{1-p}|K|^{-1}P$. \end{proof} There follows the existence of $p$-canonical Abelian vortices, as claimed in section \ref{vortexsection}. \begin{corollary}\label{vortexcorollary} Let $(M, [h], J)$ be a compact Riemann surface of genus $g > 1$, let $0 < p \in \integer$, let $B^{(p, 0)}$ be a non-trivial holomorphic $p$-differential, and let $\ka$ be a negative constant. There is a unique representative $h \in [h]$ such that $s = 2^{1 -p/2}p^{1/2}B^{(p,0)}$ solves the Abelian vortex equations \eqref{vortexeqns} for $\tau = -p\ka$ and the Hermitian structure induced on $\cano^{p}$ by $h$.
\end{corollary} \begin{proof} Let $\tilde{h} \in [h]$ and write $h = e^{\phi}\tilde{h}$. Arguing as in section \ref{vortexsection} the putative $h$ must satisfy \begin{align*} \begin{split} 0 & = \tfrac{2}{p}\left(\j \dlef(\Omega) + \tfrac{1}{2}|s|^{2}_{h} - \tfrac{1}{2}\tau\right) = -\sR_{h} + 2^{2-p}|B^{(p,0)}|^{2}_{h} + \ka\\ & = e^{-\phi}\left(\lap_{\tilde{h}}\phi - \sR_{\tilde{h}} + \ka e^{\phi} + 2^{1-p}e^{(1-p)\phi}|B|^{2}_{\tilde{h}}\right) = e^{-\phi}\ahop(\tilde{h}, \ka, B)(\phi). \end{split} \end{align*} For $\tilde{h}$ such that $\sR_{\tilde{h}} = -2$, the existence of a unique solution follows from Lemmas \ref{wuniqlemma} and \ref{wexistencelemma}. \end{proof} For a compact orientable surface, Corollary \ref{vortexcorollary} associates to each $(J, B) \in \prekhodge(M)$ a distinguished metric representing the conformal structure. If $(J, B)$ is constructed from a flat metric $\sth$ with conical singularities as in section \ref{singularmetricsection}, then this associates to such a metric a smooth conformal metric in a canonical manner. This observation might be of use in the study of such metrics. \begin{lemma}\label{factorboundlemma} Let $1 \leq p \in \integer$. Let $M$ be a compact, orientable surface with $\chi(M) < 0$, let $\tilde{h}$ be a Riemannian metric with negative scalar curvature satisfying $\min_{M}\sR_{\tilde{h}} = -2$, let $B^{(p,0)}$ be holomorphic, and let $\ka$ be a negative constant. If $\phi$ solves $\ahop(\tilde{h}, \ka, B)(\phi) = 0$, and $h_{ij} = e^{\phi}\tilde{h}_{ij}$ then \begin{align}\label{calabibound2} & -2\ka^{-1} + 2^{(1-p)/p}|\ka|^{-1/p}(\max_{M}|B|^{2/p}_{\tilde{h}}) \geq e^{\phi} \geq 2^{(1-p)/p}|\ka|^{-1/p}|B|_{\tilde{h}}^{2/p},\\ \label{volumebound} \begin{split} (-2\ka^{-1}& + 2^{(1-p)/p}|\ka|^{-1/p}(\max_{M}|B|^{2/p}_{\tilde{h}}) )\vol_{\tilde{h}}(M) \geq\\ &\vol_{h}(M) \geq 2^{(1-p)/p}|\ka|^{-1/p}\int_{M}|B|^{2/p}_{\tilde{h}}\,d\vol_{\tilde{h}} = 2^{(1-p)/p}|\ka|^{-1/p}\vol_{\sth}(M). 
\end{split} \end{align} Here $\sth \in [h]$ is the singular flat metric $\sth = |B|^{2/p}_{\tilde{h}}\tilde{h}$. \end{lemma} \begin{proof} In the $p = 3$ case, by \eqref{calabibound} applied to the resulting exact Einstein AH structure with Gauduchon metric $h = e^{\phi}\tilde{h}$, there holds $-4\ka = -4\uR_{h} \geq |B|_{h}^{2} = e^{-3\phi}|B|_{\tilde{h}}^{2}$, which is the second inequality of \eqref{calabibound2}. Alternatively, the following proof, applicable for all $p \geq 1$, can be viewed as giving another proof of \eqref{calabibound}. By \eqref{keromlap} of Lemma \ref{flatmetriclemma}, $\psi = \log \left( 2^{(1-p)/p}|\ka|^{-1/p} |B|_{\tilde{h}}^{2/p}\right)$ solves $\ahop(\tilde{h}, \ka, B)(\psi) = 0$ on the complement of the zero set of $B$. (Essentially this observation is key in both \cite{Noguchi} and the proof of Proposition $1$ of \cite{Loftin-cubic}). Since $\psi$ goes to $-\infty$ on the zero set of $B$, there is $\ep > 0$ so that on the boundary $\pr M^{\ep}$ of the complement $M^{\ep}$ of an $\ep$ neighborhood of the zero set of $B$ there holds $\psi \leq \min_{M}\phi$. Since the zeroth order part of $\ahop(\tilde{h}, \ka, B)(u)$ is non-increasing in $u$, it follows that \begin{align*} \lap_{\tilde{h}}(\psi - \phi) = -\ka(e^{\psi} - e^{\phi}) - 2^{1-p}(e^{(1-p)\psi} - e^{(1-p)\phi})|B|_{\tilde{h}}^{2} \end{align*} is non-negative on the domain $U = \{x \in M^{\ep}: \psi(x) > \phi(x)\}$. Since by the choice of $\ep$ the closure of $U$ is contained properly in $M^{\ep}$, the maximum principle implies $U$ is empty, showing that $\psi \leq \phi$ on $M^{\ep}$. Letting $\ep \to 0$ yields the second inequality of \eqref{calabibound2}. The first inequality of \eqref{calabibound2} follows from \eqref{bbound}. Integrating \eqref{calabibound2} yields the volume bound \eqref{volumebound}. 
\end{proof} \subsection{Example: naive Einstein AH structures which are not Einstein}\label{naiveexample} On a compact surface $M$ of genus $g > 1$, let $\tilde{h}$ be a Riemannian metric with scalar curvature $-2$, and let $B^{(3,0)}$ be a cubic holomorphic differential. Let $k$ be a smooth function on $M$ which satisfies $k < 0$. By Lemmas \ref{wuniqlemma} and \ref{wexistencelemma} the equation $\ahop(\tilde{h}, k, B)(\phi) = 0$ admits a unique smooth solution $\phi$. Let $h_{ij} = e^{\phi}\tilde{h}_{ij}$, $\bt_{ij}\,^{k} = h^{kp}B_{ijp}$, and $\nabla = D - \tfrac{1}{2}\bt_{ij}\,^{k}$, in which $D$ is the Levi-Civita connection of $h_{ij}$. Then $\nabla$ is the aligned representative of the AH structure which it generates with $[h]$, which is exact, and $h$ is a distinguished representative of $[h]$. There holds $D_{p}\bt_{ij}\,^{p} = h^{pq}D_{p}B_{ijq} = 0$ because $B$ is the real part of a holomorphic differential, and so by \eqref{ddivbt}, $(\en, [h])$ is naive Einstein. On the other hand, by \eqref{confscal}, there holds $\uR_{h} = \sR_{h} - \tfrac{1}{4}|\bt|_{h}^{2} = \sR_{h} - \tfrac{1}{4}|B|_{h}^{2}$, while by \eqref{conformalscalardiff} and the construction of $\phi$ there holds $\sR_{h} = e^{-\phi}(-2 - \lap_{\tilde{h}}\phi) = k + \tfrac{1}{4}|B|_{h}^{2}$, so that $\uR_{h} = k$. Thus if $k$ is not constant, then $\uR_{h}$ is not constant, so by \eqref{confein1}, $(\en, [h])$ is not Einstein. \subsection{} Let $M$ be a compact, oriented surface. Recall the spaces defined in section \ref{khodgesection}. Let $\epremod(M)$ be the space of Riemannian signature Einstein AH structures on $M$. The underlying conformal structure of each element of $\epremod(M)$ determines a complex structure inducing the given orientation, and so there is an evidently surjective map $\epremod(M) \to \jpremod(M)$. The group $\diff^{+}(M)$ acts on $\epremod(M)$ by pullback.
Define the \textbf{deformation space} $\tmod(M) = \epremod(M)/\diff_{0}(M)$ of Einstein AH structures on $M$ and the \textbf{moduli space} $\emod(M) = \epremod(M)/\diff^{+}(M)$ of Einstein AH structures on $M$. The oriented mapping class group $\map^{+}(M)$ acts on $\tmod(M)$ with quotient $\emod(M)$. The canonical map $\epremod(M) \to \jpremod(M)$ descends to a map $\tmod(M) \to \teich(M)$, and similarly at the level of moduli spaces. \subsection{}\label{modulisection} Suppose the compact, oriented surface $M$ has genus $g > 1$. For each $a \in \reap$ define a map $\prebij^{a}:\epremod(M) \to \precubic(M)$ by $\prebij^{a}(\en, [h]) = (J, B)$ in which $J$ is the complex structure determined by $[h]$ and the given orientation, and $B$ is defined by $B_{ijk} = \bt_{ij}\,^{p}h_{pk}$ for the unique distinguished metric $h \in [h]$ such that $\uR_{h} = -2a^{-1}$. Alternatively, $||B||_{h}^{2}$ does not depend on the choice of $h \in [h]$ and the choice of $h$ is determined by requiring that $2\vol_{h}(M) = a(4^{-1}||B||_{h}^{2} - 4\pi\chi(M))$. Evidently $\prebij^{a}$ is $\diff^{+}(M)$ equivariant, so descends to a $\map^{+}(M)$-equivariant map $\bij^{a}:\tmod(M) \to \cubic(M)$, which covers the identity on $\teich(M)$. It is convenient to write $\bij^{a}(\en, [h]) = [\prebij^{a}(\en, [h])]$ for the image of an equivalence class $[\en, [h]] \in \tmod(M)$ (the notation indicating the equivalence class of $(\en, [h])$ was omitted on the lefthand side). Running through the definitions shows that, for $r \in \reap$, $\prebij^{ra}(\en, [h]) = (J, rB)$ if $\prebij^{a}(\en, [h]) = (J, B)$, which in part accounts for using $a^{-1}$ in the definition of $\prebij^{a}$. Since $(\prebij^{r})^{-1}(J, B) = (\prebij^{1})^{-1}((J, r^{-1}B))$ it in general suffices to work with the fixed map $\prebij \defeq \prebij^{1}$ and its inverse.
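The equivalence of the two normalizations of the distinguished metric $h$ can be checked by integrating the relation $\uR_{h} = \sR_{h} - \tfrac{1}{4}|B|_{h}^{2}$ over $M$: since $\int_{M}\sR_{h}\,d\vol_{h} = 4\pi\chi(M)$ and $\uR_{h} = -2a^{-1}$ is constant, there holds
\begin{align*}
-2a^{-1}\vol_{h}(M) = \int_{M}\uR_{h}\,d\vol_{h} = 4\pi\chi(M) - \tfrac{1}{4}||B||_{h}^{2},
\end{align*}
which rearranges to $2\vol_{h}(M) = a(4^{-1}||B||_{h}^{2} - 4\pi\chi(M))$.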
The duality on $\epremod(M)$ given by conjugacy of AH structures corresponds under $\prebij$ to replacing the holomorphic cubic differential $B$ by $-B$, in the sense that $\prebij(\ben, [h]) = (J, -B)$ if $\prebij(\en, [h]) = (J, B)$. \begin{theorem}\label{2dmodulitheorem} For a compact, oriented surface $M$ of genus $g > 1$, the $\diff^{+}(M)$-equivariant map $\prebij:\epremod(M) \to \precubic(M)$ is a bijection, and so the $\map^{+}(M)$-equivariant map $\bij:\tmod(M) \to \cubic(M)$ is a bijection as well. \end{theorem} \begin{proof} For $(J, B) \in \precubic(M)$ let $[h]$ be the conformal structure determined by $J$. For the unique representative $\tilde{h} \in [h]$ having constant scalar curvature $-2$, there is by Lemma \ref{wexistencelemma} a unique solution $\phi$ to $\ahop(\tilde{h}, -2, B)(\phi) = 0$. The metric $h = e^{\phi}\tilde{h} \in [h]$ is the distinguished representative of an exact Einstein AH structure $(\en, [h])$ having aligned representative $\nabla = D - \tfrac{1}{2}h^{kp}B_{ijp}$ and $\uR_{h} = -2$, so that $\prebij(\en, [h]) = (J, B)$. This shows that $\prebij$ is onto. Now suppose that $\prebij(\en, [h]) = (J, B) = \prebij(\ten, [g])$. Since $[g]$ and $[h]$ induce the same complex structure $J$, they are equal. Let $\nabla \in \en$ and $\tnabla \in \ten$ be the aligned representatives, and let $h \in [h]$ and $\bar{h} \in [h]$ be the distinguished representatives of $(\en, [h])$ and $(\ten, [h])$ such that the corresponding curvatures $\uR_{h}$ and $\tilde{\uR}_{\bar{h}}$ equal $-2$. Let $\tilde{h} \in [h]$ be the unique representative with constant scalar curvature $\sR_{\tilde{h}} = -2$ and write $h_{ij} = e^{\phi}\tilde{h}_{ij}$ and $\bar{h}_{ij} = e^{\bar{\phi}}\tilde{h}_{ij}$. Then $\phi$ and $\bar{\phi}$ solve $\ahop(\tilde{h}, -2, B)(\psi) = 0$, and so, by Lemma \ref{wuniqlemma}, $\bar{\phi} = \phi$. This shows that $\bar{h}_{ij} = h_{ij}$.
It follows that $\tnabla = \nabla$, and so $\ten = \en$, and the injectivity of $\prebij$ has been shown. \end{proof} The content of the surjectivity statement in Theorem \ref{2dmodulitheorem} is not essentially different from that of Theorem $3.4$ of C.~P. Wang's \cite{Wang}, in which it is shown how to construct an affine hypersphere from a conformal metric and a cubic holomorphic differential. As is explained in section \ref{convexsection}, $\tmod(M)$ can be identified with the deformation space $\dproj(M)$ of convex flat real projective structures on $M$. The key point is that an exact Einstein AH structure is determined by its underlying flat projective structure. The resulting identification of $\cubic(M)$ with $\dproj(M)$ is due independently to F. Labourie and J. Loftin. \subsection{}\label{gaugesection} The results of this section can be viewed as preliminary steps in the direction of understanding the action of $GL^{+}(2, \rea)$ on $\tmod(M)$ described in section \ref{singularmetricsection}. In the remainder of the section $M$ is a compact, oriented surface of genus at least two and $B\in \Ga(S^{3}_{0}(\ctm))$ is the real part of a cubic holomorphic differential supposed not identically zero. Sense can be made of all the results of this section for $1 \leq p \in \integer$ in place of $3$ if Gauduchon metrics are interpreted as the representatives given by Corollary \ref{vortexcorollary} and the numbers $3$ and $2$ are replaced by $p$ and $(p-1)$, where appropriate. Specializing the action of $GL^{+}(2, \rea)$ on $\precubic(M)$ to $\comt = C^{+}O(2)$, write $z \cdot (J, B^{(3,0)}) = (z \cdot J, z \cdot B^{(3, 0)})$. Then, for $z = re^{\j \theta} = e^{t}e^{\j\theta} \in\comt$, $z \cdot J = J$ and $2\re(z \cdot B^{(3,0)}) = 2\re(z^{3}B^{(3,0)}) = e^{3t}(\cos (3\theta) B + \sin (3\theta) \bj(B))$. 
Let $(\en,[h]) = \prebij^{-1}(J, B)$ and $(\ten, [h]) = \prebij^{-1}(J, B(t, \theta))$ where $B(t, \theta) = e^{3t}(\cos(\theta)B + \sin(\theta)\bj(B))$ for some fixed $t > 0$ and $\theta$. Retaining the factor of $3$ in the scale factor while dropping it in the rotational part simplifies formulas appearing later. Both $(\en, [h])$ and $(\ten, [h])$ are exact Einstein AH structures. Let $h \in [h]$ and $\tilde{h} \in [h]$ be the respective distinguished metrics such that $\uR_{h} = -2 = \uR_{\tilde{h}}$, and write $\tilde{h}_{ij} = e^{\phi}h_{ij}$. By construction the cubic torsions are $\bt_{ij}\,^{k} = h^{kp}B_{ijp}$ and \begin{align*} \begin{split} \tilde{\bt}_{ij}\,^{k} &= \tilde{h}^{kp}B(t, \theta)_{ijp} = e^{3t-\phi}h^{kp}(\cos (\theta) B_{ijp} + \sin (\theta) \bj(B)_{ijp})\\ & = e^{3t-\phi}(\cos (\theta) \bt_{ij}\,^{k} + \sin (\theta) J_{i}\,^{p}\bt_{pj}\,^{k}), \end{split} \end{align*} (recall \eqref{btj} of Lemma \ref{2dcomplexlemma}). By construction $\phi$, and so also the Gauduchon representative of $(\ten, [h])$, does not depend on $\theta$, so in analyzing the dependence on $t$ it may as well be assumed that $\theta = 0$. The dependence on $t$ was partly analyzed in Proposition $1$ of \cite{Loftin-cubic}, and that result, as well as similar ones for quadratic differentials in the context of Teichm\"uller space proved in section $5$ of \cite{Wolf-teichmuller}, provided motivation for Lemma \ref{phigrowthlemma} and Theorem \ref{liptheorem}. Since, by Theorem \ref{classtheorem}, $\sR_{h} = -2 + 4^{-1}|B|_{h}^{2}$, by construction $\phi$ solves \begin{align}\label{operphi} \oper(\phi) \defeq \ahop(h, -2, e^{3t}B)(\phi) = \lap_{h}\phi + 2 - 2e^{\phi} + 4^{-1}(e^{2(3t-\phi)} - 1)|B|_{h}^{2} = 0. \end{align} \begin{lemma}\label{phigrowthlemma} Let $(J, B) \in \precubic(M)$ for a compact smooth orientable surface $M$ of genus $g > 1$.
Let $h$ and $\tilde{h} = e^{\phi}h$ be the Gauduchon metrics of $\prebij^{-1}(J, B)$ and $\prebij^{-1}(J, B(t, \theta))$ such that $\uR_{h} = -2 = \uR_{\tilde{h}}$. Then $\phi$ is the unique solution of \eqref{operphi}, and satisfies \begin{align}\label{phibound} &\max\{0, 2t + (2/3)\log|B|_{h} - \log 2\} \leq \phi \leq 2t,& & \text{if}\,\, t \geq 0. \end{align} (In particular $\phi$ is identically $0$ if $t = 0$). \end{lemma} \begin{proof} By \eqref{calabibound}, $|B|_{h}^{2} \leq 8$, and for $t > 0$ there results \begin{align}\label{ologr} \begin{split} \oper(2t) &= 2 - 2e^{2t} + 4^{-1}(e^{2t} - 1)|B|_{h}^{2} = 4^{-1}(e^{2t} - 1)(|B|_{h}^{2} - 8) \leq 0, \end{split} \end{align} showing that the constant function $2t$ is a supersolution of $\oper$. Since $\oper(0) \geq 0$ also, there exists a solution $\psi$ of $\oper(\psi) = 0$ such that $0 \leq \psi \leq 2t$. If $\varphi$ is a second solution then \begin{align}\label{omax} \lap_{h}(\varphi - \psi)^{2} = 2|d(\varphi - \psi)|^{2} + 4(\varphi - \psi)(e^{\varphi} - e^{\psi}) - 2^{-1}e^{6t}(\varphi - \psi)(e^{-2\varphi} - e^{-2\psi})|B|^{2}_{h} \geq 0. \end{align} By the maximum principle there is a constant $c$ such that $\varphi - \psi = c$, and in \eqref{omax} this yields $8c(e^{c} - 1)e^{3\psi} = ce^{6t}(e^{-2c} - 1)|B|^{2}_{h}$. Since $B$ is not identically zero this can be only if $c = 0$. Thus $\phi$ is the unique solution of $\oper(\phi) = 0$, and $0 \leq \phi \leq 2t$. By \eqref{ne4d}, the function $\psi = \log(2^{-1}e^{2t}|B|_{h}^{2/3})$ satisfies $\oper(\psi) = 0$ on the complement $M^{\ast}$ of the zero set of $B$. Since $\psi$ tends to $-\infty$ at the zeroes of $B$ and $\phi$ is bounded on $M$, the closure of the set $U = \{x \in M^{\ast}:\psi(x) > \phi(x)\}$ is contained properly in $M^{\ast}$. On $U$ there holds \begin{align*} \lap_{h}(\psi - \phi) = 2(e^{\psi} - e^{\phi}) - 4^{-1}e^{6t}(e^{-2\psi} - e^{-2\phi})|B|_{h}^{2} \geq 0, \end{align*} which, by the maximum principle, contradicts the assumption that $U$ is non-empty.
This proves \eqref{phibound}. \end{proof} \begin{theorem}\label{liptheorem} Let $M$ be a smooth compact orientable surface of genus $g > 1$. Fix $(J, B) \in \precubic(M)$ such that $B$ is not identically zero. For $t \geq 0$, let $\pwr{t}{h}$ be the Gauduchon metric of $\prebij^{-1}(J, e^{3t}B)$ such that $\uR_{\pwr{t}{h}} = -2$. Write $h = \pwr{0}{h}$ and $\pwr{t}{h} = e^{\phi_{t}}h$. Then $\phi_{t}$ is pointwise non-decreasing and Lipschitz as a function of $t \in (0, \infty)$, with Lipschitz constant $2$. \end{theorem} \begin{proof} Applying Lemma \ref{phigrowthlemma} with $t = t_{2} - t_{1}$ and $\phi = \phi_{t_{2}} - \phi_{t_{1}}$, so that $\pwr{t_{2}}{h} = e^{\phi}\,\pwr{t_{1}}{h}$, yields \begin{align}\label{lipt1} \max\{0, 2(t_{2} - t_{1}) + (2/3)\log|e^{3t_{1}}B|_{e^{\phi_{t_{1}}}h} - \log 2\} \leq \phi_{t_{2}} - \phi_{t_{1}} \leq 2(t_{2} - t_{1}), \end{align} which, after simplifying and rearranging terms, is \begin{align}\label{lipest} \max\{\phi_{t_{1}}, 2t_{2} + \tfrac{2}{3}\log |B|_{h} - \log 2\} \leq \phi_{t_{2}} \leq \phi_{t_{1}} + 2(t_{2} - t_{1}). \end{align} The first inequality of \eqref{lipest} shows $\phi_{t_{2}} \geq \phi_{t_{1}}$, so $\phi_{t}$ is non-decreasing for $t \in (0, \infty)$. From the second inequality of \eqref{lipest} it follows that $0 \leq \phi_{t_{2}} - \phi_{t_{1}} \leq 2|t_{2} - t_{1}|$ for $t_{1}, t_{2} \in (0, \infty)$. \end{proof} \begin{corollary} Let $M$ be a smooth compact orientable surface of genus $g > 1$. Fix $(J, B) \in \precubic(M)$ such that $B$ is not identically zero. For $t \geq 0$ let $\pwr{t}{h}$ be the Gauduchon metric of $\prebij^{-1}(J, e^{3t}B)$ such that $\uR_{\pwr{t}{h}} = -2$ and let $\pwr{t}{g} = e^{-2t}\,\pwr{t}{h}$ (which is also a Gauduchon metric for $\prebij^{-1}(J, e^{3t}B)$).
Then the limits $\lim_{t\to \infty}\vol_{\pwr{t}{g}}(M)$ and $\lim_{t\to \infty}||B||_{\pwr{t}{g}}^{2}$ exist and satisfy \begin{align}\label{rescaledlimits} 0 < \tfrac{1}{2}\vol_{\sth}(M) = \tfrac{1}{2}\int_{M}|B|^{2/3}_{h} \,d\vol_{h} \leq \lim_{t\to \infty}\vol_{\pwr{t}{g}}(M) = \tfrac{1}{8}\lim_{t\to \infty}||B||_{\pwr{t}{g}}^{2} \leq \vol_{h}(M). \end{align} Here $\sth = |B|^{2/3}_{h}h$. The curvature $\sR_{\pwr{t}{g}}$ satisfies $0 \geq \sR_{\pwr{t}{g}} \geq 4^{-1}e^{2t}(|B|^{2}_{h} - 8)$. \end{corollary} \begin{proof} Write $h = \pwr{0}{h}$ and $\pwr{t}{h} = e^{\phi_{t}}h$. From the second inequality of \eqref{lipt1} of the proof of Theorem \ref{liptheorem} it is immediate that $\vol_{\pwr{t}{g}}(M)$ is non-increasing for $t>0$, and by \eqref{phibound} of Lemma \ref{phigrowthlemma}, $\vol_{h}(M) \geq e^{-2t}\vol_{\pwr{t}{h}}(M) = \vol_{\pwr{t}{g}}(M) \geq \tfrac{1}{2}\int_{M}|B|^{2/3}_{h} \,d\vol_{h}$. This shows the existence of $\lim_{t\to \infty}\vol_{\pwr{t}{g}}(M)$ and the bounds of \eqref{rescaledlimits}. From \eqref{vortex} it follows that $\vol_{\pwr{t}{g}}(M) - 8^{-1}||B||_{\pwr{t}{g}}^{2} = 4\pi(g-1)e^{-2t}$, from which the equality of the limits in \eqref{rescaledlimits} is apparent. The final estimate on the curvature follows from $e^{-2t}\sR_{\pwr{t}{g}} = \sR_{\pwr{t}{h}} = -2 + 4^{-1}e^{6t}|B|^{2}_{\pwr{t}{h}} = -2 + 4^{-1}e^{6t}e^{-3\phi_{t}}|B|_{h}^{2}$ coupled with \eqref{phibound}. \end{proof} Lemma \ref{continuitylemma} does not appear to be particularly useful, but it illustrates how the bounds of Lemma \ref{factorboundlemma} can be used, and confirms a naive expectation. \begin{lemma}\label{continuitylemma} Let $M$ be a compact orientable smooth surface. Fix a complex structure $J$ and equip the space of holomorphic cubic differentials and $\cinf(M)$ with the sup norm. Then $\prebij^{-1}((J, \dum))$ is continuous with respect to these sup norms.
\end{lemma} \begin{proof} Let $\tilde{h}$ be the metric of constant scalar curvature $-2$ representing the conformal structure determined by $J$. Let $B^{(3,0)}$ and $C^{(3, 0)}$ be holomorphic cubic differentials and suppose $|B - C|_{\tilde{h}}\leq \ep$ on $M$. Let $\phi$ and $\psi$ be the solutions of $\ahop(\tilde{h}, -2, B)(\phi) = 0$ and $\ahop(\tilde{h}, -2, C)(\psi) = 0$ determining $\prebij^{-1}((J, B))$ and $\prebij^{-1}((J, C))$. By construction $4\lap_{\tilde{h}}(\psi - \phi) = 8(e^{\psi} - e^{\phi}) + e^{-2\phi}|B|_{\tilde{h}}^{2} - e^{-2\psi}|C|_{\tilde{h}}^{2}$. Suppose $(\psi - \phi)^{2}$ assumes a positive maximum at $p \in M$. Then $|\psi - \phi|$ also assumes a positive maximum at $p$. Without loss of generality it may be supposed that $\psi(p) > \phi(p)$, so that $\max_{M}|\psi - \phi| = \psi(p) - \phi(p)$. At $p$ there holds \begin{align}\label{cont1} \begin{split} 0 \geq 2(\psi - \phi)^{-1}\lap_{\tilde{h}}(\psi - \phi)^{2} &\geq 8(e^{\psi} - e^{\phi}) + (e^{-2\phi}|B|^{2}_{\tilde{h}} - e^{-2\psi}|C|_{\tilde{h}}^{2})\\ & = 8(e^{\psi} - e^{\phi}) + (e^{-2\phi} - e^{-2\psi})|B|^{2}_{\tilde{h}} + e^{-2\psi}(|B|^{2}_{\tilde{h}} - |C|_{\tilde{h}}^{2}). \end{split} \end{align} The inequality \eqref{cont1} forces the value of $|C|^{2}_{\tilde{h}}$ at $p$ to be greater than the value of $|B|^{2}_{\tilde{h}}$ at $p$. Using $|B|_{\tilde{h}}^{2} \leq 8$, it follows that at $p$ there holds \begin{align}\label{cont1b} \begin{split} e^{-2\psi}\ep(\ep + 16) &\geq e^{-2\psi}(\ep + 16)|C - B|_{\tilde{h}}\geq e^{-2\psi}(|C|_{\tilde{h}} - |B|_{\tilde{h}})(2|B|_{\tilde{h}} + \ep) \\ &\geq e^{-2\psi}(|C|^{2}_{\tilde{h}} - |B|_{\tilde{h}}^{2}) \geq 8(e^{\psi} - e^{\phi}) + (e^{-2\phi} - e^{-2\psi})|B|_{\tilde{h}}^{2}. \end{split} \end{align} By the second inequality of \eqref{calabibound2} of Lemma \ref{factorboundlemma}, $e^{-2\psi}|C|_{\tilde{h}}^{4/3} \leq 4$.
In \eqref{cont1b} this yields that at $p$, \begin{align}\label{cont2} \begin{split} 4\ep(\ep + 16) & \geq 8|C|_{\tilde{h}}^{4/3}(e^{\psi} - e^{\phi}) \geq 8|C|_{\tilde{h}}^{4/3}(\psi(p) - \phi(p)) = 8|C|_{\tilde{h}}^{4/3}\max_{M}|\psi - \phi|. \end{split} \end{align} Since, by \eqref{cont1}, $|C|_{\tilde{h}}^{2}$ is positive at $p$, this shows the claimed continuity. \end{proof} \section{Einstein Weyl structures on the sphere and torus}\label{spheretorussection} In this section the deformation spaces of Einstein Weyl structures on the two-sphere $\sphere$ and the torus $\torus$ and some geometric properties of their members are described. In \cite{Calderbank-mobius} and \cite{Calderbank-twod} Calderbank found explicit descriptions of Einstein Weyl structures on $\sphere$ and $\torus$. While he did not explicitly address the description of the deformation spaces, all the necessary information is at least implicit in what he writes. He finds a local normal form for solutions of the two-dimensional Einstein Weyl equations, and shows that on $\sphere$ or $\torus$ the solutions are defined globally. Essentially he writes the underlying conformal structure in isothermal coordinates and uses the Killing field provided by the Gauduchon gauge to reduce the Einstein-Weyl equations to an ODE; the reduction and solution of the resulting ODE given in \cite{Calderbank-twod} differ from those given in \cite{Calderbank-mobius}. The description given here is similar to that given in \cite{Calderbank-mobius}, though the solutions that are found are given a bit more explicitly, in terms of elementary trigonometric functions (rather than elliptic functions). The scalar curvatures and Faraday curvatures are computed explicitly, and their values are related to the parameterization of the deformation space. \subsection{}\label{twopointssection} An $[h]$-conformal Killing field $X$ is \textbf{inessential} if there is some $h \in [h]$ for which $X$ is Killing, i.e.
$\lie_{X}h = 0$, and is \textbf{essential} otherwise. By Theorem \ref{classtheorem} the Gauduchon metric dual of the Faraday primitive of the Gauduchon class of an Einstein AH structure on a compact orientable surface is inessential. By Lemma \ref{flatmetriclemma}, every conformal Killing vector field on $\torus$ is parallel (and so Killing) for a flat representative of the conformal structure, and so, if non-trivial, is inessential and nowhere vanishing. A flat metric on $\torus$ may be represented as that induced by the Euclidean metric on the quotient of $\rea^{2}$ by a rank two lattice $\Ga$. A non-trivial conformal Killing field is parallel in this flat metric and so can be written as a linear combination of the constant vector fields generating $\Ga$. Such a vector field is \textbf{rational} if it is a real multiple of a linear combination of the generators of the lattice with integer coefficients, and \textbf{irrational} otherwise. A conformal Killing field on $\torus$ is rational if and only if its orbits are simple closed curves, while every orbit of an irrational one is dense. \begin{lemma}\label{rationallemma} A vector field $X$ Killing for a non-flat Riemannian metric $h$ on $\torus$ is rational. \end{lemma} \begin{proof} Since $\torus$ is compact, the group $G$ of isometries of $h$ is a compact Lie group containing the one-parameter subgroup generated by $X$. Since the infinitesimal generator of any one-parameter subgroup of $G$ is $h$-Killing, it is parallel in a flat metric conformal to $h$, so has no fixed points. If $\dim G \geq 2$ then in a neighborhood of each point of $\torus$ its action generates linearly independent $h$-Killing fields, and it follows that $h$ is flat. Since $h$ is not flat, $G$ is a union of circles. Since the closure of the one-parameter subgroup generated by $X$ is compact and connected, it must be a circle. Its non-trivial orbits must be simple closed curves, so $X$ is rational.
\end{proof} Since the interest here is in the deformation space of Einstein Weyl structures and any two K\"ahler structures on $\sphere$ are equivalent, in considering $\sphere$ it will suffice to regard it as the Riemann sphere $\proj^{1}(\com)$ with its standard K\"ahler structure. A holomorphic vector field on $\sphere$ has either one zero of multiplicity two or two zeroes of multiplicity one. \begin{lemma}\label{spherelemma} An inessential conformal Killing field on $\sphere$ has two zeroes. \end{lemma} \begin{proof} By Lemma $0.1$ of \cite{Chen-Lu-Tian}, a non-trivial inessential conformal Killing vector field $X$ on $\sphere$ generates an isometric $S^{1}$ action on $\sphere$ fixing some zero $p$ of $X$. Since the orbit of a point $q$ distinct from $p$ but close to $p$ is a loop comprising points equidistant from $p$, the index of $X$ at $p$ is $1$, and so by the Hopf index theorem $X$ must have a second zero. \end{proof} \subsection{} On $\sphere$ not every conformal Killing field arises as the Gauduchon dual of the Faraday primitive of an Einstein Weyl structure because not every conformal Killing field is inessential. On a torus every conformal Killing field is inessential, and every such vector field arises from a closed but non-exact Einstein Weyl structure; however, if such a vector field arises from a non-closed Einstein Weyl structure, then by Lemma \ref{rationallemma} it must be rational. One consequence of what follows is to show that these necessary conditions on $X$ are sufficient for it to arise in this way from an Einstein Weyl structure. The remainder of the section is dedicated to analyzing when the solutions obtained in this way are equivalent modulo $\diff_{0}(M)$, and to describing explicitly the geometry of the resulting solutions.
\subsection{}\label{reducedsection} Suppose $(M, J, [h])$ is a K\"ahler structure on a compact surface of genus $g$ equal to zero or one, $\tilde{h} \in [h]$ has constant scalar curvature $2(1-g)$, and $X \in \Ga(TM)$ is an inessential conformal Killing field. Given some background metric $g \in [h]$ it is desired to find $\phi \in \cinf(M)$ such that $h = e^{\phi}g$ is a Gauduchon metric of an Einstein Weyl structure $(\en, [h])$ with aligned representative $\nabla = D - 2\ga_{(i}\delta_{j)}\,^{k} + h_{ij}h^{kp}\ga_{p}$, in which $D$ is the Levi-Civita connection of $h$ and $\ga_{i} = X^{p}h_{ip}$. Since $X^{i}$ is to be $h$-Killing, by Lemmas \ref{rationallemma} and \ref{spherelemma} it must generate a circle action on $M$. The initial idea is to use this circle action to reduce $\ahop(\tilde{h}, \ka, X)(\phi) = 0$ to an ODE. In the case of the torus this works, but in the case of the sphere, it is not obvious how to make the reduction, and the substitute is to work with the flat metric $\sth = |X|_{h}^{-2}h$ on the complement $M^{\ast}$ of the zero set of $X$ (when $g = 1$, the metric $\sth$ depends on the choice of $\tilde{h}$). So there will be sought $\phi \in \cinf(M^{\ast})$ such that the metric $h = e^{\phi}\sth$ extends $\cinf$ smoothly to $M$ and is otherwise as above. The difference between the torus and sphere cases is in the boundary conditions necessary for $\phi$ to extend smoothly to all of $M$. For the torus $M^{\ast} = M$, while for the sphere, $M^{\ast}$ is the complement of two points. Note that $|X|_{\sth}^{2} = 1$ and $e^{\phi} = |X|_{h}^{2}$. By Theorem \ref{classtheorem}, there must be a constant $\ka$ such that $\uR_{h} - 4|X|_{h}^{2} = \uR_{h} - 4|\ga|_{h}^{2} = \ka$. 
Rewriting this last equation in terms of $\sth$ using \eqref{confscal} shows that the desired Einstein Weyl structure can be found if $\phi$ solves \begin{align}\label{torussphereeq} \lap_{\sth} \phi + \ka e^{\phi} + 4e^{2\phi} = 0, \end{align} where in the spherical case \eqref{torussphereeq} is supplemented by the condition that $\phi$ extends smoothly to $M$. \subsection{} The Killing property of $X$ implies $e^{\phi}d\phi(X) = \lie_{X}(|X|_{h}^{2}) = (\lie_{X}h)(X, X) = 0$, so that $d\phi(X) = 0$. Since $\lie_{X}J = 0$ there holds $[X, JX] = 0$. Let $r$ and $s$ be parameters for the flows of $X$ and $JX$, respectively; then $X = \pr_{r}$, $JX = \pr_{s}$. Since $X$ and $JX$ are complete on $M$, their flows on $M^{\ast}$ exist for all time, so $r$ and $s$ are global coordinates on the universal cover of $M^{\ast}$, which is the Euclidean plane. Note that each of $r$ and $s$ is determined only up to a translation. Since $d\phi(X) = 0$, $\phi$ is a function of $s$ alone, and since in these coordinates $\sth = dr^{2} + ds^{2}$, \eqref{torussphereeq} becomes the ODE \begin{align}\label{torusode} \ddot{\phi} + \ka e^{\phi} + 4e^{2\phi} = 0, \end{align} in which a dot indicates differentiation with respect to $s$. If $\phi$ solves \eqref{torusode} then there is some constant $c$ such that \begin{align}\label{torusode2} c = (\dot{\phi})^{2} + 2\ka e^{\phi} + 4 e^{2\phi} = (\dot{\phi})^{2} + (2e^{\phi} + \ka/2)^{2} - \ka^{2}/4. \end{align} Then the function $u = e^{-\phi}$ must solve \begin{align}\label{tode} (\dot{u})^{2} = cu^{2} -2 \ka u - 4 = \sign(c)(\sqrt{|c|} u - \tfrac{\sign(c)\ka}{\sqrt{|c|}})^{2} - (4 + \tfrac{\ka^{2}}{c}). \end{align} (For the second equality of \eqref{tode}, assume $c \neq 0$). Conversely, if $u$ is a $\cinf$ positive solution of \eqref{tode} then $\phi = -\log{u}$ solves $\dot{\phi}(\ddot{\phi} + \ka e^{\phi} + 4e^{2\phi}) = 0$.
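That the right-hand side of \eqref{torusode2} is indeed constant along a solution of \eqref{torusode} can be checked by differentiating with respect to $s$:
\begin{align*}
\tfrac{d}{ds}\left((\dot{\phi})^{2} + 2\ka e^{\phi} + 4e^{2\phi}\right) = 2\dot{\phi}\left(\ddot{\phi} + \ka e^{\phi} + 4e^{2\phi}\right) = 0.
\end{align*}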
There is a unique $\cinf$ smooth solution $\phi$ of \eqref{torusode} with prescribed $1$-germ at any given point. From \eqref{torusode2} it follows that $4c + \ka^{2} \geq 0$ with equality if and only if $\phi$ is constant, equal to $\log(-\ka/4)$, in which case it must be $\ka < 0$. If $\ka \geq 0$ then $\ddot{\phi} < 0$, so any zero of $\dot{\phi}$ is isolated and, moreover, $\phi$ has no local minimum, so $M^{\ast}$ is not compact. If $\ka < 0$ and $\phi$ is not constant, then $\ka^{2} > -4c$. If $\dot{\phi}(s_{0}) = 0 = \ddot{\phi}(s_{0})$ then by \eqref{torusode} and \eqref{torusode2} there holds $\ka e^{\phi(s_{0})} = c $; in particular $c < 0$. Substituting this into \eqref{torusode2} gives $c(4c + \ka^{2}) = 0$, which is a contradiction since neither $c$ nor $\ka^{2} + 4c$ is zero. Hence if $\phi$ is not constant, then, whatever $\ka$ is, the zeroes of $\dot{\phi}$ are isolated, so that, by continuity, a solution of \eqref{tode} determines a solution of \eqref{torusode}. This shows that to solve \eqref{torusode} it suffices to find $\cinf$ positive solutions of \eqref{tode}. Differentiating \eqref{tode} shows that where $\dot{u}$ is not zero, $u$ solves \begin{align}\label{tode2} \ddot{u} = cu - \ka. \end{align} Since a positive $\cinf$ solution of \eqref{tode} can be written as $u = e^{-\phi}$ for some $\phi$ solving \eqref{torusode}, it follows from the isolation of the zeroes of $\dot{\phi}$ that the zeroes of $\dot{u}$ are isolated, and so if $u$ is a positive $\cinf$ solution of \eqref{tode} then it solves \eqref{tode2} subject to \eqref{tode}, viewed as a constraint. The constant $c$ in \eqref{tode} can be interpreted as follows. If $\phi$ solves \eqref{torusode} then $\sR_{h} = \ka + 4e^{\phi}$. Since $e^{\phi}$ must extend smoothly to $M$ it assumes a maximum value on $M$, and $\sR_{h}$ assumes its maximum value on $M$ at this same point. 
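Explicitly, combining \eqref{torusode2} with $\sR_{h} = \ka + 4e^{\phi}$ yields the identity
\begin{align*}
4c + \ka^{2} = 4(\dot{\phi})^{2} + \left(4e^{\phi} + \ka\right)^{2} = 4(\dot{\phi})^{2} + \sR_{h}^{2},
\end{align*}
so that at a critical point of $\phi$ there holds $4c + \ka^{2} = \sR_{h}^{2}$.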
Since $e^{\phi}$ must vanish off of $M^{\ast}$, it must assume its maximum in the interior of $M^{\ast}$, and so $\phi$ must assume a maximum in $M^{\ast}$ as well. From \eqref{torusode2} it follows that at such a point there holds \begin{align}\label{cnorm} 4c + \ka^{2} = (\max_{M}\sR_{h})^{2}. \end{align} \subsection{}\label{cnormalizationsection} The Einstein Weyl structure $(\en, [h])$ resulting from a solution $\phi$ of \eqref{torussphereeq} given $X$ and $\ka$ will be said to be \textit{determined by $(X, \ka, \phi)$}. Note that solutions of \eqref{torussphereeq} need not be unique, and it can occur that the same Einstein Weyl structure is determined by various triples $(X, \ka, \phi)$. This possibility will now be illustrated. Let $(\en, [h])$ be determined by $(X, \ka, \phi)$. If instead of $X$ there is considered $\bar{X} = e^{-\la}X$, then $\bsth = e^{2\la}\sth$, and $\bar{\phi} = \phi - \la$ is a solution of \eqref{torussphereeq} with $\bar{\ka} = e^{-\la}\ka$ in place of $\ka$ and $\bsth$ in place of $\sth$. The resulting Gauduchon metric $\bar{h} = e^{\bar{\phi}}\bsth = e^{\la}h$ is positively homothetic to $h$; since the resulting one-form $\bar{X}^{p}\bar{h}_{ip}$ is equal to $X^{p}h_{ip}$, the solution determined by $(\bar{X}, \bar{\ka}, \bar{\phi})$ is the same as that determined by $(X, \ka, \phi)$. The parameters $\bar{r}$ and $\bar{s}$ corresponding to $\bar{X}$ and $J\bar{X}$ are related to $r$ and $s$ by $\bar{r} = e^{\la}r$ and $\bar{s} = e^{\la}s$. The function $\bar{u}(\bar{s})$ defined by $\bar{u}(\bar{s}) \defeq e^{-\bar{\phi}(\bar{s})}$ is $\bar{u}(\bar{s}) = e^{\la}u(e^{-\la}\bar{s})$, that is, $\bar{u}(e^{\la}s) = e^{\la}u(s)$, and solves $(\tfrac{\pr \bar{u}}{\pr \bar{s}})^{2} = \bar{c}\bar{u}^{2} - 2e^{-\la}\ka\bar{u} - 4$ with $\bar{c} = e^{-2\la}c$. If $u$ solves \eqref{tode} then $(X, \ka, -\log{u})$ determines an Einstein Weyl structure. 
The preceding shows that the same Einstein Weyl structure is determined by the triple $(\bar{X}, \bar{\ka}, -\log{\bar{u}})$ resulting from the solution $\bar{u}$ of \eqref{tode} with $\bar{\ka}$ and $\bar{c}$ in place of $\ka$ and $c$. Thus solving \eqref{tode} for distinct values of $c$ related by a positive constant will not result in inequivalent Einstein Weyl structures. It follows that when $c \neq 0$ the value of $c$ can without loss of generality be normalized to be any given non-zero number having the same sign as $c$. By \eqref{cnorm} the geometric meaning of such a normalization is to fix the maximum value of the scalar curvature of $h$. \subsection{}\label{sslidesection} The Einstein Weyl structures determined by $(X, \ka, \phi)$ and $(X, \ka, \bar{\phi})$, in which $\bar{\phi}(s) = \phi(s - a)$, need not be the same, but they are always equivalent by an orientation-preserving diffeomorphism isotopic to the identity. Let $\tau_{t}:M \to M$ be the flow of $-JX$, the fixed points of which are the zeroes of $X$. The restriction to $M^{\ast}$ of $\tau_{t}$ is given in $(r, s)$ coordinates by $\tau_{t}(r, s) = (r, s - t)$. Evidently $\tau_{t}$ is an isometry of $\sth$ preserving $X$ and satisfying $\bar{\phi} = \phi \circ \tau_{a}$. It follows that the resulting Gauduchon metrics $\bar{h} = e^{\bar{\phi}}\sth$ and $h = e^{\phi}\sth$ are related by $\bar{h} = \tau_{a}^{\ast}(h)$, and so the resulting Einstein Weyl structures determine the same point in the deformation space $\tmod(M)$. The consequence of this observation relevant in the sequel is that the parameter $s$ can always be modified by a translation, as is convenient. \subsection{}\label{torusconstantsolutionssection} Consider now the case of the torus. There are discussed first the constant solutions of \eqref{torusode}, and then the nonconstant ones. By Theorem \ref{scalarexacttheorem}, $\nv$ must be negative, so $\ka < 0$ in \eqref{torussphereeq}. 
The relevant solutions of \eqref{tode} are those for which $u$ is positive and bounded from above. For each $(X, \ka)$, there is a constant solution to \eqref{torussphereeq}, namely $\phi = \log(-\ka/4)$ (equivalently, the constant function $-4/\ka$ solves \eqref{tode} with $c = -\ka^{2}/4$). Hence $h = -(\ka/4)\sth$ is simply a flat representative of the given conformal structure $[h]$. The resulting $\ga_{i}$ is $-(\ka/4)X^{p}\sth_{ip}$, and the parameter $\nv$ of the resulting Einstein Weyl structure is $\nv = \ka \vol_{h}(M) = -\ka^{2}\vol_{\sth}(M)/4$. Since, by the discussion in section \ref{cnormalizationsection}, $(e^{-\la}X, e^{-\la}\ka, \log(-\ka/4) - \la)$ determines the same Einstein Weyl structure as does $(X, \ka, \log(-\ka/4))$, there can be imposed a normalization fixing $X$, and a convenient one is to require that $\vol_{\sth}(M) = 4\pi^{2}$, so that $\nv = -\pi^{2}\ka^{2}$. The resulting Gauduchon metric having volume $4\pi^{2}$ is simply $\sth$, and the dual to $\ga_{i}$ in this metric is $Y^{i} = \sth^{ip}\ga_{p} = -(\ka/4)X^{i}$. Replacing $X$ by $X^{\theta} = \cos \theta X + \sin \theta JX$ for any $\theta \in [0, 2\pi)$ gives rise to an Einstein Weyl structure with the same $\nv$. Since the group of conformal automorphisms of a torus is conjugate to the elliptic modular group, which acts discretely and properly discontinuously on the upper half space, which is the Teichm\"uller space of the smooth torus, the Einstein Weyl structures determined by $(X, \ka, \log(-\ka/4))$ and $(X^{\theta}, \ka, \log(-\ka/4))$ are equivalent modulo an element of $\diff_{0}(M)$ if and only if $\theta = 0$. It follows that distinct elements of the deformation space $\tmod(M)$ are obtained as $(\ka, \theta)$ varies over $(-\infty, 0)\times[0, 2\pi)$, that is as $-\ka e^{\j \theta}$ varies over $\comt$. All the possibilities can be determined in terms of a given fixed $X$ as follows. 
Let $\sth \in [h]$ be the flat representative of volume $4\pi^{2}$. Then to each $-\ka e^{\j \theta} \in \comt$ there corresponds an Einstein Weyl structure such that the $\sth$ dual of the Faraday primitive is $Y^{i} = -(\ka/4)X^{\theta}$, and these Einstein Weyl structures determine distinct elements of the deformation space $\tmod(M)$ (they might not be distinct in the moduli space $\emod(M)$; for instance for the square and hexagonal tori the modular group fixes some points). These Einstein Weyl structures are characterized by having weighted scalar curvature equal to zero. The preceding shows that the trivial complex line bundle over the upper half space parameterizes the Einstein Weyl structures on a torus having zero weighted scalar curvature considered up to deformation. An intrinsic formulation of the preceding goes as follows. As explained in the proof of Theorem \ref{scalarexacttheorem} an Einstein Weyl structure on the torus which is closed but not exact determines a holomorphic affine connection. Namely, in this case a Gauduchon metric $h \in [h]$ with Levi-Civita connection $D$ and Faraday primitive $\ga_{i}$ is flat, and the $(1,0)$ part $\nabla^{1,0} = D^{1,0} - 2\ga^{(1,0)}$ is holomorphic because $\ga^{(1,0)}$ is holomorphic. Moreover, every holomorphic affine connection on $M$ has the form $D^{1,0} - 2\ga^{(1,0)}$ for some holomorphic one-form $\ga^{(1,0)}$ and the Levi-Civita connection $D$ of a flat metric $h \in [h]$, so that the space of holomorphic affine connections on a torus with a fixed complex structure is parameterized by $H^{0}(M, \cano^{1}) \simeq \com$, the origin corresponding to $D^{1,0}$, where $D$ is the Levi-Civita connection of any flat metric representing the conformal structure determined by the given complex structure. This description of the holomorphic affine connections on a two torus seems to have been first explicitly observed by A. Vitter in \cite{Vitter}. 
The preceding can be summarized as saying that on the smooth torus the space of Einstein Weyl structures which are closed but not exact considered up to equivalence modulo $\diff_{0}(M)$ is in bijection with the complement of the zero section in the bundle over the upper half space (the Teichm\"uller space of $M$) the fiber of which over a given conformal structure on $M$ comprises the one complex dimensional vector space of holomorphic one-forms. \subsection{}\label{torusmetricsection} Now there will be sought non-constant, positive, bounded solutions to \eqref{tode} on the torus. By \eqref{cnorm}, $-\ka^{2} \leq 4c$. The equation \eqref{tode2} can have positive bounded solutions only if $c < 0$, in which case the general solution is $u(s) = c^{-1}(\ka - \sqrt{\ka^{2} + 4c}\cos(\sqrt{|c|} s - \al))$, in which $\al$ is arbitrary. By the discussion in section \ref{sslidesection}, it can with no loss of generality be supposed that $\al = 0$. Thus $h = -c(\sqrt{\ka^{2} +4c}\cos(\sqrt{|c|}s) - \ka)^{-1}(dr^{2} + ds^{2})$, which has period $2\pi/\sqrt{|c|}$ in $s$. $X$ and $JX$ are linearly independent commuting vector fields on $\torus$ preserving $\sth$; the composition of their flows defines an isometric action of $\rea^{2}$ on $\torus$ for which the stabilizers of any two points are conjugate, and the stabilizer of a given point is a discrete subgroup of $\rea^{2}$, so a lattice. The lifts to the universal cover of the flows of $X$ and $JX$ are by translations parallel to the generators of the lattice. A fundamental domain for the action of $\pi_{1}(\torus)$ is a half-closed parallelogram. If $h$ is to define a metric on $\torus$, it must be that the value of $h$ is the same at every pre-image of any $p \in \torus$. 
This forces the periodicity of $h$ in $s$ to be commensurate with that of the fundamental domain, so that one closed side of the fundamental domain lies on the $s$ axis, and the length of this side is $2\pi m/\sqrt{|c|}$ for a positive integer $m$. Since $\vol_{\sth}(\torus)$ is given by integrating $dr \wedge ds$ over the fundamental domain, the length of the side lying on the $r$-axis must be $\ell = \sqrt{|c|}\vol_{\sth}(\torus)/2\pi m$. Using that for $b > 1$ a primitive of $(\cos{x} + b)^{-1}$ is $\tfrac{2}{\sqrt{b^{2} - 1}}\arctan\left(\sqrt{\tfrac{b-1}{b+1}}\tan{\tfrac{x}{2}} \right)$ yields \begin{align*} \begin{split} \vol_{h}(\torus) &= \int_{\torus}u^{-1}\,dvol_{\sth} = \tfrac{2m\ell\sqrt{|c|}}{\sqrt{\ka^{2} + 4c}}\int_{0}^{\pi} \left(\cos t - \tfrac{\ka}{\sqrt{\ka^{2} + 4c}}\right)^{-1}\,dt = \pi m \ell = \sqrt{|c|}\vol_{\sth}(\torus)/2. \end{split} \end{align*} By the discussion in section \ref{cnormalizationsection} fixing either of $c$ or $\vol_{\sth}(\torus)$ determines the other, and such a choice can be made as is convenient without changing the equivalence class of the Einstein Weyl structure which results; make the normalization $c = -4$, so that the metric $h$ is \begin{align}\label{torusmetric} h = 4\left(\sqrt{\ka^{2} - 16}\cos(2s) - \ka\right)^{-1}(dr^{2} + ds^{2}), \end{align} in which $\ka < -4$. By construction $\nv = \ka \vol_{h}(\torus) = \ka \vol_{\sth}(\torus)$. Computing the scalar curvature of $h$ using \eqref{conformalscalardiff} gives \begin{align} -\sqrt{\ka^{2} -16} = \min_{\torus}\sR_{h} \leq \sR_{h} = \sqrt{\ka^{2} -16}\left(\frac{\ka \cos(2s) - \sqrt{\ka^{2} -16}}{\sqrt{\ka^{2} -16}\cos(2s) - \ka}\right) \leq \max_{\torus}\sR_{h} = \sqrt{\ka^{2} - 16}. 
\end{align} For $\ka = -4$ the metric $h$ corresponds to the solutions described in section \ref{torusconstantsolutionssection} having weighted scalar curvature identically zero, while for $\ka < -4$ there result solutions having weighted scalar curvature which is somewhere positive and somewhere negative, as in \ref{rvarytorus} of Theorem \ref{scalarexacttheorem}. The metrics $\gu$ and $\hu$ homothetic to $h$ and defined by the requirements that $\vol_{\gu}(\torus) = 4\pi^{2}$ and $\max_{\torus}\sR_{\hu} = 2$, are $\gu = 4\pi^{2}\vol_{h}(\torus)^{-1}h$ and \begin{align*} \begin{split} &\hu = 2^{-1}\sqrt{\ka^{2} - 16}\,h = 2\left(\cos(2s) - \frac{\ka}{\sqrt{\ka^{2} - 16}}\right)^{-1}\left(dr^{2} + ds^{2}\right). \end{split} \end{align*} As $\nv \to -\infty$ (equivalently $\ka \to -\infty$) the metrics $\hu$ tend pointwise to the degenerate metric $\hi = 2(1 + \cos 2s)^{-1}(dr^{2} + ds^{2})$, which is the hyperbolic metric of constant scalar curvature $-2$ on its domain of non-degeneracy, the strip $\rea \times (-\pi/2, \pi/2)$. Since $\sth = dr^{2} + ds^{2}$ and $|X|_{\sth}^{2} = 1$, \begin{align} \begin{split} \ga &= 4(\sqrt{\ka^{2} - 16}\cos (2s) - \ka)^{-1} dr, \quad |\ga|_{h}^{2} = 4(\sqrt{\ka^{2} - 16}\cos (2s) - \ka)^{-1},\\ F & = \frac{8}{\sqrt{\ka^{2} - 16}}\frac{\sin(2s)}{(\cos(2s) - \tfrac{\ka}{\sqrt{\ka^{2} - 16}})^{2}} dr \wedge ds, \quad \fd_{h} = \frac{4\sin(2s)}{\cos(2s) - \tfrac{\ka}{\sqrt{\ka^{2} - 16}}}. \end{split} \end{align} Note that $\sR_{h} - 4|\ga|_{h}^{2} = \ka$, as must be the case by construction, and that $\sR_{h}^{2} + \fd_{h}^{2} = \ka^{2} - 16$, as required by Lemma \ref{squarehololemma}. 
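For completeness, the definite integral used in the volume computation above evaluates, for $b > 1$, as
\begin{align*}
\int_{0}^{\pi}\left(\cos t + b\right)^{-1}\,dt = \left[\tfrac{2}{\sqrt{b^{2} - 1}}\arctan\left(\sqrt{\tfrac{b-1}{b+1}}\tan\tfrac{t}{2}\right)\right]_{0}^{\pi} = \tfrac{\pi}{\sqrt{b^{2} - 1}},
\end{align*}
and for $b = -\ka/\sqrt{\ka^{2} + 4c}$ there holds $\sqrt{b^{2} - 1} = 2\sqrt{|c|}/\sqrt{\ka^{2} + 4c}$, which recovers $\vol_{h}(\torus) = \pi m \ell$.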
For $b > 1$ a primitive for $(\cos t + b)^{-2}$ is \begin{align*} &\frac{1}{b^{2} - 1}\left( \frac{2b}{\sqrt{b^{2} - 1}}\arctan\left(\sqrt{\frac{b-1}{b+1}}\tan\left(\frac{t}{2}\right) \right) - \frac{\sin t}{\cos t + b } \right),& \end{align*} so that \begin{align*} &\int_{0}^{\pi}\left(\cos{t} - \frac{\ka}{\sqrt{\ka^{2} - 16}}\right)^{-2}\, dt = -\pi \ka (\ka^{2} - 16)/64, \end{align*} from which follows $4||\ga||_{h}^{2} = -\ka \vol_{\sth}(M) = -\nv$, as required by \ref{rvarytorus} of Theorem \ref{scalarexacttheorem}. This completes the analysis of the torus case. The Einstein Weyl structures on the torus which are not closed are determined up to the action of orientation-preserving diffeomorphisms by the choice of a conformal structure (a point in the hyperbolic plane) and a rational number (determining an inessential conformal Killing field). However, it is not clear how to describe the deformation/moduli space of solutions in a conceptual, geometric way (as was done in section \ref{torusconstantsolutionssection} for the closed but not exact Einstein Weyl structures). \subsection{} Now consider the case $g = 0$. In considering \eqref{tode} on the sphere, the only difference from the analysis in the case of the torus is in the boundary conditions. The requirement that $|X|_{h}^{2} = e^{\phi}$ forces $\phi$ to tend to $-\infty$ at the zeroes of $X$, or, what is the same, $u$ to tend to $+\infty$ at the zeroes of $X$. Thus the only solutions of \eqref{tode} that are relevant are those for which $u$ is positive and tends to $+\infty$ at the boundary of $\sphereast$. Where $\dot{u} \neq 0$ there holds $\ddot{u} = cu - \ka$; if $c < 0$ then the solutions are bounded, so it must be that $c \geq 0$. \subsection{} For $c = 0$ the general solution of \eqref{tode} is $u(s) = -\tfrac{\ka}{2} s^{2} + a s + b$ for some constants $a$ and $b$. 
In this case either $u$ is equal to a positive constant, or $\ka$ is negative, for otherwise $u$ would be somewhere negative. Evaluating \eqref{tode} at $s = 0$ shows $a^{2} = -2b\ka - 4$. Since the coordinate $s$ is determined only up to translation it can be supposed that $a = 0$ by making a translation in $s$. In this case $b = - 2/\ka$, so $u = -\tfrac{\ka}{2}s^{2} - \tfrac{2}{\ka}$. Since $\sphereast$ is the complement of two points, it is topologically a cylinder, and so its universal cover is the plane with the global coordinates $r$ and $s$; in this case the metric $h = u^{-1}(dr^{2} + ds^{2})$ can descend to a metric on the cylinder obtained by quotienting by translations by $2\pi$ in the $r$ direction. This metric gives rise to an Einstein Weyl structure on the cylinder which is not exact and which has a distinguished complete metric representative $h \in [h]$ of infinite volume. Because this metric gives the cylinder infinite volume, there is no way it extends smoothly to $\sphere$. Thus these solutions of \eqref{tode} do not yield Einstein Weyl structures on the sphere. \subsection{}\label{spheremetricoccurssection} Now suppose that $c > 0$. The general solution of \eqref{tode} which tends to $+\infty$ as $s \to \pm\infty$ is $u(s) = c^{-1}(\sqrt{\ka^{2}+ 4c} \cosh(\sqrt{c} s + \al) + \ka)$, in which $\al$ is arbitrary. By the discussion in section \ref{sslidesection}, it can with no loss of generality be supposed that $\al = 0$. Following the discussion in section \ref{cnormalizationsection}, make the normalization $c = 4$. It will be convenient to write also $\mu = 2\ka/\sqrt{\ka^{2} + 16}$, which ranges over $(-2, 2)$ as $\ka = 4\mu/\sqrt{4 - \mu^{2}}$ ranges over $\rea$, and to introduce the coordinates $x = e^{s}\cos r$ and $y = e^{s}\sin r$, and to write $\rho = \sqrt{x^{2} + y^{2}} = e^{s}$. 
Then \begin{align}\label{spheremetric} \begin{split} h &= \frac{4(dr^{2} + ds^{2})}{\sqrt{\ka^{2} + 16}\cosh(2 s ) + \ka} = \frac{8}{\sqrt{\ka^{2} + 16}}\frac{dx^{2} + dy^{2}}{1 + \mu \rho^{2} + \rho^{4}}= \frac{2}{\sqrt{\ka^{2} + 16}}\frac{1 + 2\rho^{2} + \rho^{4}}{1 + \mu \rho^{2} + \rho^{4}}\tilde{h}. \end{split} \end{align} (Recall $\tilde{h} = 4(1 + \rho^{2})^{-2}(dx^{2} + dy^{2})$ is the metric of scalar curvature $2$ and volume $4\pi$). In this form it is evident that $h$ extends smoothly through the origin of the $x$ and $y$ coordinates, which corresponds to the fixed point as $s \to -\infty$. Replacing $s$ by $-s$ in the preceding gives a coordinate system in which it is evident that $h$ is smooth as $s \to \infty$. Thus the metric $h$ extends to a smooth metric on all of the two-sphere which by construction generates an Einstein Weyl structure of the sort in \ref{ps6} of Theorem \ref{scalarexacttheorem}. It will now be shown that for distinct values of $\ka$ the Einstein Weyl structures so obtained are inequivalent. It is convenient to introduce the function \begin{align} \tau(z) = 2z\arctan(\sqrt{z^{2} + 1} - z) = z(\pi/2 - \arctan(z)), \end{align} which is a $\cinf$ orientation-preserving diffeomorphism of $\rea$ onto $(-\infty, 1)$. The parameter which does not depend on the scaling of $h$ is $\nv = \ka \vol_{h}(\sphere)$. The K\"ahler form associated to $h$ is \begin{align} \om = 8(\ka^{2} + 16)^{-1/2}\rho(1 + \mu\rho^{2} + \rho^{4})^{-1}dr \wedge d\rho = dr \wedge d\left(\arctan\left(\ka/4 + \sqrt{\ka^{2}/16 + 1}\rho^{2}\right)\right), \end{align} and so \begin{align}\label{nvkavol} \begin{split} \nv & = \ka\vol_{h}(\sphere) = \ka\int_{\sphere}\om = 2\pi\ka\left(\pi/2 - \arctan(\ka/4)\right) = 8\pi\tau(\ka/4). \end{split} \end{align} Letting $\ka$ run over $\rea$, this shows that all values of $\nv < 8\pi$ are realized by some Einstein Weyl structure on the sphere. 
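The two expressions for $\tau$ agree because, by the half-angle formula $\tan(\theta/2) = (1 - \cos\theta)/\sin\theta$ applied with $\theta = \pi/2 - \arctan z$,
\begin{align*}
\tan\left(\tfrac{1}{2}\left(\tfrac{\pi}{2} - \arctan z\right)\right) = \frac{1 - \sin(\arctan z)}{\cos(\arctan z)} = \sqrt{z^{2} + 1} - z,
\end{align*}
so that $2\arctan(\sqrt{z^{2} + 1} - z) = \pi/2 - \arctan z$.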
Henceforth, $\ka$ will be viewed as a function of $\nv$ via $\ka = 4\tau^{-1}(\nv/8\pi)$. However, because of the implicit nature of this definition, it will be convenient to continue writing $\ka$ and $\mu$ in formulas. Using \eqref{conformalscalardiff} the scalar curvature $\sR_{h}$ of $h$ can be computed by finding the Euclidean Laplacian of the conformal factor in \eqref{spheremetric}. The result is \begin{align}\label{kasr} \sR_{h} = \frac{\sqrt{\ka^{2} + 16}}{2}\left(\frac{\mu + 4\rho^{2} + \mu \rho^{4}}{1 + \mu\rho^{2} + \rho^{4}}\right) = \ka + \frac{32}{\sqrt{\ka^{2} + 16}}\left(\frac{\rho^{2}}{1 + \mu\rho^{2} + \rho^{4}}\right). \end{align} For fixed $\mu$, $\sR_{h}$ takes its maximum value on the equatorial circle $\rho = 1$, where its value is $\sqrt{\ka^{2} + 16}$, while $\sR_{h} > \ka$ away from the poles, tending to $\ka$ as $\rho$ tends to either $0$ or $\infty$. In particular, \begin{align}\label{kaest} \begin{split} &\ka = \min_{\sphere} \sR_{h} \leq \sR_{h} \leq \max_{\sphere}\sR_{h} = \sqrt{\ka^{2} + 16}. \end{split} \end{align} Observe that if $\nv$ is positive (respectively, non-negative), then so is the scalar curvature $\sR_{h}$. Let $\hu$ and $\gu$ be the Gauduchon metrics of the Einstein Weyl structure $(\en, [h])$ corresponding to $\nv$ distinguished respectively by the requirements that $\max_{\sphere}\sR_{\hu} = 2$ and $\vol_{\gu}(\sphere) = 4\pi$. Then $\gu = 4\pi\vol_{h}(\sphere)^{-1}h$ while \begin{align} \hu = (\sqrt{\ka^{2} +16}/2)h = 4\left(1 + \mu\rho^{2} + \rho^{4}\right)^{-1}\left(dx^{2} + dy^{2}\right) = \frac{1 + 2\rho^{2} + \rho^{4}}{1 + \mu\rho^{2} + \rho^{4}}\tilde{h}. \end{align} Hence \begin{align}\label{huest} \begin{split} &-2 \leq \mu = \min_{\sphere} \sR_{\hu} \leq \sR_{\hu} = \mu + (4 - \mu^{2})\left(\frac{\rho^{2}}{1 + \mu \rho^{2} + \rho^{4}}\right) \leq \max_{\sphere}\sR_{\hu} = 2,\\ &4\pi \leq \vol_{\hu}(\sphere) = \tfrac{4\pi\sqrt{\ka^{2}+16}}{\ka}\tau(\ka/4). 
\end{split} \end{align} As $\nv \to 8\pi$ (so $\ka \to \infty$ and $\mu \to 2$) the metric $\hu$ tends pointwise to the spherical metric $\tilde{h}$. As $\nv \to -\infty$ (so $\ka \to -\infty$ and $\mu \to -2$), the volume of $\hu$ goes to $+\infty$. On either of the disks complementary in $\sphere$ to the equatorial circle $\rho = 1$, the metric $\hu$ converges pointwise to the hyperbolic metric $4(1 - \rho^{2})^{-2}(dx^{2} + dy^{2})$ of constant scalar curvature $-2$. The family $\hu$ interpolates between the spherical metric and the hyperbolic metric. The positive curvature concentrates on the equatorial circle as $\mu$ nears $-2$, while the negative curvature concentrates on the complementary disks. Precisely, for $\mu < 0$ the curvature is positive for $(- 2 + \sqrt{4 - \mu^{2}})/\mu < \rho^{2} < -(2 +\sqrt{4 - \mu^{2}})/\mu$. By construction the vector field $X^{i}$ is $x\pr_{y} - y\pr_{x} = \pr_{r}$. Explicit expressions for $\ga_{i} = X^{p}h_{pi}$ and its norm are \begin{align}\label{normga} &\ga = \frac{8}{\sqrt{\ka^{2} + 16}(1 + \mu \rho^{2} + \rho^{4})}\left(xdy - y dx\right), && |\ga|_{h}^{2} = \frac{8\rho^{2}}{\sqrt{\ka^{2} + 16}(1 + \mu \rho^{2} + \rho^{4})}, \end{align} the second of which in any case follows from $\sR_{h} - \ka = 4|\ga|_{h}^{2}$ in conjunction with \eqref{kasr}. Consequently, \begin{align}\label{fdrho} &F = -d\ga = \frac{16}{\sqrt{\ka^{2} + 16}}\frac{\rho^{4} - 1}{(1 + \mu \rho^{2} + \rho^{4})^{2}}dx \wedge dy, & &\fd_{h}^{2} = \frac{16(1- \rho^{4})^{2}}{(1 + \mu \rho^{2} + \rho^{4})^{2}}. \end{align} Equation \eqref{fdrho} shows that the Faraday curvature is non-zero except along the equatorial circle, and is largest at the poles. There can be verified the identities \begin{align}\label{spheresrfh} &\sR_{h}^{2} + \fd_{h}^{2} = \ka^{2} + 16, & &\sR_{\gu}^{2} + \fd_{\gu}^{2} = 16\mu^{-2}\tau(\ka/4)^{2}, \end{align} confirming in this case the claim of Lemma \ref{sphereconstantlemma}. 
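The claimed region of positivity of the curvature results from clearing the (positive) denominator in the expression for $\sR_{\hu}$:
\begin{align*}
\sR_{\hu} > 0 \iff \mu\left(1 + \mu\rho^{2} + \rho^{4}\right) + (4 - \mu^{2})\rho^{2} = \mu\rho^{4} + 4\rho^{2} + \mu > 0,
\end{align*}
and, for $\mu < 0$, the roots in $\rho^{2}$ of the right-hand quadratic are $(-2 \pm \sqrt{4 - \mu^{2}})/\mu$.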
By Theorem \ref{magnetictheorem} the equatorial circle, along which $F$ vanishes, must be a geodesic; its $h$-length and $\hu$-length are respectively $\pi(\sqrt{\ka^{2} + 16} - \ka)^{1/2}$ and $2\sqrt{2}\pi(\ka^{2} + 16)^{1/4}(\sqrt{\ka^{2} + 16} + \ka)^{-1/2}$. A map associating to an Einstein Weyl structure on the sphere the real part of a holomorphic vector field is determined by a choice of normalization of the Gauduchon metric. For example, the metric $h$ is normalized by specifying the minimum value assumed by its scalar curvature, which is the parameter $\ka$. Similarly, $\hu$ is determined by the requirement that $\max_{\sphere}\sR_{\hu} = 2$. The associated vector field $Y^{i} = \hu^{ij}\ga_{j}$ is $\tfrac{2}{\sqrt{\ka^{2} + 16}}(x\pr_{y} - y\pr_{x})$; observe that this vector field is associated to two distinct Einstein Weyl structures, those corresponding to $\pm \ka$. The normalization like that used for surfaces of higher genus selects the metric $\gu$ having volume $4\pi$. The associated vector field $Z^{i} = \gu^{ij}\ga_{j}$ is $2\ka^{-1}\tau(\ka/4)(x\pr_{y} - y\pr_{x})$. Since $\ka \to 2\ka^{-1}\tau(\ka/4)$ is an orientation-reversing diffeomorphism of $\rea$ onto $(0, \pi/2)$, in this case there is a unique vector field associated to each Einstein Weyl structure. It is noted in passing that the claims about magnetic geodesics made immediately following the statement of Theorem \ref{magnetictheorem} follow straightforwardly from the explicit form of $Y^{i}$ and \eqref{normga} and \eqref{fdrho}. In the coordinate system $(x, y)$ let $z = x + \j y$ and view $z$ as the inhomogeneous coordinate on the standard chart in $\proj^{1}(\com)$. To an element $\Phi = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \sll(2, \com)$ associate the holomorphic vector field $X^{\Phi}$ on $\proj^{1}(\com)$ defined by $X^{\Phi}_{p} = \tfrac{d}{dt}_{| t = 0}\exp(t\Phi)p$, which in the inhomogeneous coordinate $z$ has the form $(b + (a-d)z - cz^{2})\pr_{z}$. 
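The coordinate expression for $X^{\Phi}$ results from differentiating the homographic action of $\exp(t\Phi)$, which to first order in $t$ sends $z$ to $((1 + ta)z + tb)/(tcz + 1 + td)$:
\begin{align*}
\tfrac{d}{dt}\Big|_{t = 0}\,\frac{(1 + ta)z + tb}{tcz + 1 + td} = (az + b) - z(cz + d) = b + (a - d)z - cz^{2}.
\end{align*}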
An element $\Psi \in PSL(2, \com)$ is elliptic if and only if $(\tr \Psi)^{2}/\det \Psi \in [0, 4)$, in which case $\Psi$ is conjugate in $PSL(2, \com)$ to a unique element of the form $\exp \Psi_{\theta}$ where $\Psi_{\theta} \defeq \begin{pmatrix} \j \theta & 0 \\ 0 & -\j \theta\end{pmatrix}$ with $\theta \in (0, \pi/2)$. The real part of $X^{\Psi_{\theta}} = 2\j \theta z \pr_{z}$ is $\theta(x\pr_{y}- y\pr_{x})$. It follows that the vector field $Z$ described in the preceding paragraph is the real part of $X^{\Psi_{2\tau(\ka/4)/\ka}}$. Since $\ka \to 2\tau(\ka/4)/\ka$ is a diffeomorphism of $\rea$ onto $(0, \pi/2)$, this shows that each Einstein Weyl structure on the sphere determines a unique conjugacy class of elliptic elements in $PSL(2, \com)$. Precisely, the vector field $Z$ is the real part of the holomorphic vector field generated by the elliptic transformation given in the inhomogeneous coordinate by $z \to e^{4\j \tau(\ka/4)/\ka}z = \tfrac{\ka + 4\j}{\sqrt{\ka^{2} + 16}}z$. The parameter $\theta$ is expressed in terms of the scale invariant parameter $\nv$ by $\theta = \nv/16\pi\tau^{-1}(\nv/8\pi)$. Using \eqref{nvkavol} it is straightforward to see that $\ka$ and $\nv$ are expressed in terms of $\theta$ by \begin{align}\label{kanvtheta} &\ka = 4\cot 2\theta,& &\nv = 16\pi\theta \cot 2\theta. \end{align} Note that no Einstein Weyl structure on the two sphere gives rise to the conjugacy class of elliptic elements corresponding to $\theta = \pi/2$. What distinguishes these elliptic elements is that the associated homography leaves invariant a circle. There is, however, a degenerate Einstein Weyl structure corresponding to $\theta = \pi/2$, namely that generated by the hyperbolic metric, arising in the degeneration as $\ka \to -\infty$. This picture was already described in the last paragraph of section $5$ of \cite{Calderbank-twod}. Here the equivalence problem is resolved explicitly. 
In particular it is shown that two Einstein Weyl structures on the sphere are equivalent if and only if their vortex parameters $\nv$ are the same, and that the corresponding extended elliptic homography can be described explicitly as a rotation by an angle expressible in terms of $\nv$, namely via \eqref{kanvtheta}. An Einstein Weyl structure $(\en, [h])$ on the sphere having vortex parameter $\nv$ is equivalent to one represented by the Gauduchon metric $\gu$ of volume $4\pi$, where the explicit expressions for $\gu$, $\sR_{\gu}$, and $\ga$ in terms of $\theta \in [0, \pi/2)$ defined from $\nv$ by \eqref{kanvtheta} are \begin{align}\label{sphericalreps} \begin{split} \gu &= \frac{2\sin 2\theta}{\theta( 1+ 2\rho^{2}\cos 2\theta + \rho^{4})}\left(dx^{2} + dy^{2}\right),\qquad \ga = \frac{2\sin 2\theta}{1 + 2\rho^{2}\cos 2\theta + \rho^{4}}\left(xdy - ydx\right),\\ \sR_{\gu} & = \frac{\theta}{\sin 2\theta}\left(\frac{4\cos 2\theta + 8\rho^{2} + 4\rho^{4}\cos 2\theta}{1 + 2\rho^{2}\cos 2\theta + \rho^{4}}\right). \end{split} \end{align} By the uniformization theorem, $\teich(\sphere)$ is a point. Since $\map^{+}(\sphere)$ is trivial the moduli space $\emod(\sphere)$ equals the deformation space $\tmod(\sphere)$. Since under the usual definition the identity element of $PSL(2, \com)$ is not considered to be elliptic, it will be convenient to call \textbf{extended elliptic} a homography $\Psi$ that is either elliptic or the identity. For such $\Psi$ there holds $(\tr \Psi)^{2}/\det \Psi \in [0, 4]$. Note, however, that $(\tr \Psi)^{2}/\det \Psi \in [0, 4]$ need not imply that $\Psi$ is extended elliptic because while the function $(\tr \Psi)^{2}/\det \Psi$ distinguishes the conjugacy classes of non-identity elements, it does not distinguish a parabolic transformation from the identity. Let $\elp(PSL(2, \com))$ denote the space of conjugacy classes of extended elliptic elements in $PSL(2, \com)$. 
Suppose given an Einstein Weyl structure $(\en, [h])$ on the two sphere which is not that generated by the uniformizing conformal structure. After a diffeomorphism it may be supposed that $[h]$ is the standard conformal structure. Associate to $(\en, [h])$ the element $\Psi \in \sll(2, \com)$ such that the $(1,0)$ part of the vector field $X^{i} = h^{ip}\ga_{p}$, where $h \in [h]$ is the Gauduchon metric with volume $4\pi$ and $\ga$ is the corresponding Faraday primitive, is equal to $X^{\Psi}$. The element $\exp\Psi$ is elliptic and conjugate to $\exp\Psi_{\theta}$ for $\theta = \nv/16\pi\tau^{-1}(\nv/8\pi)$. Applying the same construction to the pullback of $(\en, [h])$ by an element of $PSL(2, \com)$ yields an element $\exp\Psi^{\prime}$, conjugate to $\exp\Psi_{\theta^{\prime}}$ for some $\theta^{\prime}$. However, since $(\en, [h])$ and its pullback determine the same parameter $\nv$ there holds $\theta^{\prime} = \nv/16\pi\tau^{-1}(\nv/8\pi) = \theta$, and so $\Psi^{\prime}$ is conjugate to $\Psi$. Hence the map associating to $(\en, [h])$ the infinitesimal generator $\Psi$ descends to an injection $\emod(\sphere) \to \elp(PSL(2, \com))$; the image omits the conjugacy class of the simple inversions corresponding to $\theta = \pi/2$. The map sending $[\Psi] \in \elp(PSL(2, \com))$ to the unique $\theta \in [0, \pi/2]$ such that $4\cos^{2}\theta = (\tr \Psi)^{2}/\det \Psi$ identifies the space $\elp(PSL(2, \com))$ with the interval $[0, \pi/2]$, and the subspace $[0, \pi/2)$ corresponds to $\emod(\sphere)$. While the topologies on the spaces $\emod(\sphere)$ and $\elp(PSL(2, \com))$ have not been discussed, it makes sense to regard the interval $[0, \pi/2]$ as the compactification of $\emod(\sphere)$. Theorem \ref{spheremodulitheorem} summarizes the preceding discussion. 
\begin{theorem}\label{spheremodulitheorem} The following spaces are in pairwise bijection: \begin{list}{(\arabic{enumi})}{\usecounter{enumi}} \item The moduli space $\emod(\sphere)$ of Einstein Weyl structures on $\sphere$. \item The space of conjugacy classes of extended elliptic elements of $PSL(2, \com)$ which leave invariant no circle. \item The half-open interval $[0, \pi/2)$. \end{list} Precisely, to $\theta \in [0, \pi/2)$ correspond the conjugacy class of the homography $z \to e^{2\j \theta}z$ and the equivalence class $\{\en, [h]\}_{\theta}$ of Einstein Weyl structures having vortex parameter $\nv = 16\pi \theta \cot 2\theta \in (-\infty, 8\pi]$. To $\theta = 0$ corresponds the Einstein Weyl structure generated by the standard round metric. The equivalence class $\{\en, [h]\}_{\theta}$ is represented by an Einstein Weyl structure $(\en, [h])_{\theta}$ for which $[h]$ is the standard conformal structure on $\proj^{1}(\com)$, and the Faraday primitive $\ga$ of the Gauduchon class and the Gauduchon metric $\gu \in [h]$ of volume $4\pi$ are as in \eqref{sphericalreps}. For every $\theta \in (0, \pi/2)$ the zero set of the Faraday curvature of the associated Einstein Weyl structure is the equatorial circle. For $(\en, [h])_{\theta}$ the Gauduchon metric $\hu \in [h]$ such that $\max_{\sphere}\sR_{\hu} = 2$ is $\hu = \tfrac{2\theta}{\sin 2\theta}\gu$. As $\theta \to 0$, $\hu$ tends pointwise to the Fubini-Study metric $\tilde{h}$, while as $\theta \to \pi/2$, the restriction to either connected component of the complement of the zero set of the Faraday curvature of the metric $\hu$ tends pointwise to the hyperbolic metric of constant scalar curvature $-2$. 
\end{theorem} \subsection{} Differentiating the family \eqref{spheremetric} of metrics with respect to the parameter $t \in (-\pi/2, 0)$ defined by $\ka = -2\cot{2t}$ and comparing the result with \eqref{kasr} shows that the one-parameter family of metrics $h(t)$ constitutes a solution to the Ricci flow, $\tfrac{d}{dt}h = -\sR_{h}h$. The one-parameter family $s(t) = -\j h(\j t)$ of metrics on the sphere obtained from $h(t)$ after conjugation by a rotation in the parameter space is an ancient solution of the Ricci flow having positive curvature discovered in \cite{Fateev-Onofri-Zamolodchikov}, where these metrics were called \textit{sausage metrics}; they are known to mathematicians as the King-Rosenau metrics. Similarly, the family \eqref{torusmetric} of metrics on the torus is a solution of the Ricci flow with respect to the parameter $t \in (0, \infty)$ defined by $\ka = -4\coth{4t}$. It would be interesting to explain conceptually why solutions of the Ricci flow arise naturally from Einstein-Weyl structures. \section{Convexity and Hessian metrics} \label{hessianmetricsection} In this section it is shown that the cone over an exact Einstein AH structure with negative scalar curvature carries particularly nice Riemannian and Lorentzian Hessian metrics, and there is proved Theorem \ref{hessiantheorem}. These constructions are used in section \ref{convexsection}, where it is explained that such an Einstein AH structure is determined by its underlying flat projective structure, and there is proved Theorem \ref{summarytheorem}. \subsection{}\label{kahleraffinesection} Following \cite{Cheng-Yau-realmongeampere}, a pair $(\hnabla, g)$ comprising a flat affine connection $\hnabla$ and a pseudo-Riemannian metric $g$ such that around each point there are local affine coordinates $x^{i}$ (meaning that the $dx^{i}$ are $\hnabla$-parallel) and a \textbf{potential function} $F$ such that $g_{ij} = \hnabla_{i}dF_{j}$, is called a \textbf{K\"ahler affine metric}. 
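For orientation, here are two classical examples of this definition (added as illustration; they are standard and not specific to the present setting). On $\rea^{n+1}$, take $\hnabla$ to be the standard flat connection and $x^{i}$ the standard coordinates; then the potentials
\begin{align*}
&F = \tfrac{1}{2}\textstyle\sum_{i}(x^{i})^{2}, && g_{ij} = \hnabla_{i}dF_{j} = \delta_{ij},\\
&F = -\textstyle\sum_{i}\log x^{i}, && g_{ij} = \hnabla_{i}dF_{j} = (x^{i})^{-2}\delta_{ij} \quad \text{(no summation)},
\end{align*}
determine K\"ahler affine metrics, the first on all of $\rea^{n+1}$ and the second on the positive orthant; in both cases the potential function is globally defined on the domain of the metric.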
A K\"ahler affine metric will be said to be a \textbf{Hessian metric} if the potential function is globally defined, as will be the case in the examples constructed in what follows. (This terminological distinction between \textit{Hessian metric} and \textit{K\"ahler affine metric} is not standard). Let $(\hnabla, g)$ be a K\"ahler affine metric. Let $\mu = dx^{1}\wedge \dots \wedge dx^{n+1}$ be the volume form determined by the choice of local affine coordinates. Then $\det_{\mu}g \defeq (\det g)/\mu^{2}$ is a function. It is not well-defined, but its logarithmic differential $d\log \det_{\mu}g$ is, because changing the choice of affine coordinates only changes $\mu$ by a constant factor. The \textbf{Ricci tensor} of a K\"ahler affine metric is defined to be $-\hnabla_{i}d\log(\det_{\mu}g)_{j}$. Note that this Ricci tensor is not in general the Ricci tensor of either $\hnabla$ or $g$. Rather it is defined in analogy with the Ricci form of a K\"ahler metric. A K\"ahler affine metric is said to be Einstein if its Ricci tensor is a constant multiple of $g$. Ricci flat K\"ahler affine metrics should be seen as real analogues of Calabi-Yau manifolds. Here they will be called \textbf{Monge-Amp\`ere metrics}, as in \cite{Kontsevich-Soibelman}. \subsection{} Let $M$ be a surface. Let $\rho:\hat{M} \to M$ be the principal $\reat$ bundle over $M$ such that the third power of $\hat{M}$ is equal to the complement $\Det \ctm \setminus \{0\}$ in $\Det \ctm$ of the zero section, viewed as a principal bundle. Let $\emf$ be the line bundle associated to $\hat{M}$ the sections of which are identified with homogeneity $1$ functions on the total space of $\hat{M}$. A section of $\emf^{3}$ is naturally viewed as a section of $\Det TM$. A section of $\emf^{k}$ will be said to have \textbf{weight $k$}. Let $R_{r}$ denote dilation in the fibers of $\hat{M}$ by $r \in \reat$, and let $\rad$ be the vector field generating the flow $R_{e^{t}}$. 
If $u \in \Ga(\emf^{k})$ then the associated equivariant function $\tilde{u} \in \cinf(\hat{M})$ has homogeneity $k$; in particular $d\tilde{u}(\rad) = k\tilde{u}$. On the total space of $\hat{M}$ there is a tautological $2$-form $\mu$ defined for $X, Y \in T_{\theta}\emf$ by $\mu_{\theta}(X, Y) = \theta^{3}(T\rho(\theta)(X), T\rho(\theta)(Y))$, in which $\theta^{3}$ is viewed as a $2$-form on $T_{\rho(\theta)}M$. It is straightforward to check that $\Psi \defeq d\mu$ is a volume form. The existence part of Theorem \ref{thomastheorem} is due to T.Y. Thomas; see \cite{Thomas}. That $M$ have dimension $2$ in Theorem \ref{thomastheorem} is unimportant, but greater generality is not needed here. It is convenient to use uppercase Latin letters as abstract indices on $\hat{M}$. \begin{theorem}\label{thomastheorem} Let $M$ be a smooth surface equipped with a projective structure $\en$. There is a unique torsion-free affine connection $\hnabla$ on $\hat{M}$ satisfying \begin{enumerate} \item $\hnabla_{I}\rad^{J} = \delta_{I}\,^{J}$ \item $\hnabla \Psi = 0$. \item $\hnabla$ is Ricci flat. \item $R_{r}^{\ast}(\hnabla) = \hnabla$ for all $r \in \reat$. \item The inverse image in $\hat{M}$ of a projective geodesic of $M$ is a totally geodesic surface in $\hat{M}$ tangent to $\rad$. \item The curvature $\hat{R}_{IJK}\,^{L}$ of $\hnabla$ satisfies $\hnabla_{Q}\hat{R}_{IJK}\,^{Q} = 0$. \end{enumerate} The connection $\hnabla$ is the \textbf{Thomas connection} associated to $\en$. The assignment $\en \to \hnabla$ is equivariant with respect to the action of $\diff(M)$ on $M$ in the sense that for $\phi \in \diff(M)$ there holds $\lin(\phi)^{\ast}(\hnabla) = \widehat{\phi^{\ast}(\en)}$ in which $\lin(\phi)$ is the unique principal bundle automorphism of $\hat{M}$ covering $\phi$ and preserving the tautological two-form $\mu$. Moreover, $\lin(\phi) \in \Aut(\hnabla,\Psi)$ if and only if $\phi \in \Aut(\en)$. 
\end{theorem} \begin{proof}[Indication of proof] A principal connection on $\hat{M}$ determines a principal connection on $\Det \ctm$ and vice-versa. A torsion-free affine connection induces a principal connection on $\Det \ctm$ and hence on $\hat{M}$. It is easily seen that each principal connection $\be$ on $\hat{M}$ is induced by a unique torsion-free connection $\nabla$ representing $\en$. Fix a principal connection $\be$ on $\hat{M}$ and let $X \to \hat{X}$ be the horizontal lift of $X \in \Ga(TM)$ to $\hat{X} \in \Ga(T\hat{M})$ determined by $\be$. Note that $[\hat{X}, \hat{Y}] = \widehat{[X, Y]} - \rho^{\ast}(\om(X, Y))\rad$, in which $\rho^{\ast}(\om) = d\be$. Let $\nabla \in \en$ be the representative determined by $\be$. Define $\hnabla$ by requiring that it be torsion-free, that it satisfy $\hnabla_{I}\rad^{J} = \delta_{I}\,^{J}$, and that for any $X, Y \in \Ga(TM)$ there hold $\hnabla_{\hat{X}}\hat{Y} = \widehat{\nabla_{X}Y} + P(X, Y)\rad$, in which $P_{ij} = -R_{(ij)} - \tfrac{1}{3}R_{[ij]} = -R_{(ij)} - \tfrac{1}{2}\om_{ij}$. That $\hnabla$ verifies all the stated conditions is straightforward. That it does not depend on the choice of $\be$ can be verified directly, but it is probably easier, and conceptually better, to deduce this as a consequence of the claimed uniqueness. Verifying the uniqueness is a bit more involved; in this regard note that it is straightforward to construct examples on $\rea^{3} \setminus\{0\} \to \sphere$ showing the necessity, for the uniqueness, of the condition $R_{r}^{\ast}(\hnabla) = \hnabla$. \end{proof} The curvature $\hat{R}_{IJK}\,^{L}$ of $\hnabla$ is given by \begin{align}\label{thomascurvature} \hat{R}_{IJK}\,^{L} = \rho^{\ast}(C)_{IJK}\rad^{L}, \end{align} in which $C_{ijk} = 2\nabla_{[i}P_{j]k}$ is the projective Cotton tensor of $\en$. In particular $\hnabla$ is flat if and only if $\en$ is projectively flat. 
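For the flat model this is completely explicit (an illustrative remark added here). Under the standard identification of $\hat{M}$ for the flat projective structure on the projective sphere with $\rea^{3}\setminus\{0\}$, the Thomas connection is the standard flat affine connection $\partial$ and $\rad$ is the Euler vector field, so that
\begin{align*}
\rad = x^{Q}\partial_{Q}, \qquad \hnabla_{I}\rad^{J} = \partial_{I}x^{J} = \delta_{I}\,^{J}, \qquad \hnabla\Psi = 0 \quad\text{for}\quad \Psi = dx^{1}\wedge dx^{2}\wedge dx^{3},
\end{align*}
and $\hnabla$ is visibly Ricci flat, dilation invariant, and flat, the last consistent with \eqref{thomascurvature} because the projective Cotton tensor of the flat projective structure vanishes; the punctured planes through the origin are totally geodesic and project to the projective geodesics.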
For later use note that if $\be$ is the principal connection on $\hat{M}$ induced by $\nabla \in \en$ then there holds \begin{align}\label{hnablabe} \hnabla_{I}\be_{J} = -\be_{I}\be_{J} - \rho^{\ast}(P)_{IJ}, \end{align} in which $P_{ij}$ is the modified Ricci tensor of $\nabla$. \subsection{}\label{mongesection} For a section $u \in \Ga(\emf)$ there holds $\tnabla_{i}u = \nabla_{i}u + \ga_{i}u$ if $\tnabla = \nabla + 2\ga_{(i}\delta_{j)}\,^{k}$. Using this it can be verified that the operator $\B(u)_{ij} \defeq \nabla_{i}\nabla_{j}u - P_{ij}u$ is projectively invariant in the sense that it does not depend on the choice of $\nabla \in \en$. If $\tilde{u}$ is the homogeneity $1$ function on $\hat{M}$ corresponding to $u$ then the symmetric tensor $\hnabla_{I}d\tilde{u}_{J}$ has the properties that $\rad^{I}\hnabla_{I}d\tilde{u}_{J} = 0$ and that for all $X, Y \in \Ga(TM)$, $\hat{X}^{I}\hat{Y}^{J}\hnabla_{I}d\tilde{u}_{J}$ is the homogeneity $1$ function on $\hat{M}$ corresponding to $\B(u)_{ij}X^{i}Y^{j}$. Similarly, if $v = u^{2}$ (equivalently $\tilde{v} = \tilde{u}^{2}$), then \begin{align}\label{mongelift} \tfrac{1}{2}\hnabla_{I} d\tilde{v}_{J} = \begin{pmatrix} u\B(u)_{ij} + \nabla_{i}u\nabla_{j}u & u\nabla_{i}u\\ u \nabla_{j}u & u^{2} \end{pmatrix}, \end{align} in which the righthand side is a notationally abusive shorthand utilizing the splitting of $T\hat{M}$ determined by $\be$ and signifying, for example, that $\hat{X}^{I}\rad^{J}\hnabla_{I}d\tilde{v}_{J}$ is the equivariant function corresponding to $2u\nabla_{X}u$. Since $\det \B(u)$ is naturally viewed as a section of $(\Det \ctm)^{2} \tensor \emf^{2} \simeq \emf^{-4}$, the operator $\monge(u) \defeq u^{4}\det \B(u)$ is function-valued. Calculating $\det \hnabla d\tilde{v}$ by applying elementary row and column operations to \eqref{mongelift} yields \begin{align}\label{dethnablav} \det \hnabla d\tilde{v} = 8\rho^{\ast}(\monge(u))\Psi^{2}. 
\end{align} It follows immediately that $\monge(u)$ is constant if and only if $\det \hnabla d\tilde{v}$ is $\hnabla$-parallel. Applying \eqref{thomascurvature} and the Ricci identity yields \begin{align} 2\hnabla_{[I}\hnabla_{J]}d\tilde{v}_{K} = -\hat{R}_{IJK}\,^{Q}d\tilde{v}_{Q} = -2\tilde{v}\rho^{\ast}(C)_{IJK}, \end{align} so that, if $\hnabla_{I}d\tilde{v}_{J}$ is non-degenerate, then it forms with $\hnabla$ a Hessian metric if $\en$ is projectively flat. \subsection{}\label{hessiansection} Let $M$ be a surface and let $(\en, [h])$ be an exact Riemannian signature Einstein AH structure with negative weighted scalar curvature $R$. Let $u = (R/2)^{-1/3} \in \Ga(\emf)$ and $v = u^{2} = 2^{2/3}R^{-2/3}$. Because $\nabla_{i}u = 0$ for the aligned representative $\nabla \in \en$, the principal connection $\be_{I}$ induced by $\nabla$ is $\tilde{u}^{-1}d\tilde{u}_{I} = \tfrac{1}{2}\tilde{v}^{-1}d\tilde{v}_{I}$. Let $F = -\tfrac{3}{2}\log \tilde{v} = -3\log |\tilde{u}|$, which is logarithmically homogeneous in the sense that $R_{e^{t}}^{\ast}(F) = F - 3t$. Define covariant symmetric two-tensors $g_{IJ}$ and $f_{IJ}$ on $\hat{M}$ by $g_{IJ} \defeq \tfrac{1}{2}\hnabla_{I}d\tilde{v}_{J}$ and $f_{IJ} = \hnabla_{I}dF_{J}$. It will be shown that the $(1, -2)$ signature metric $g_{IJ}$ and the Riemannian metric $f_{IJ}$ each form with $\hnabla$ a Hessian metric, respectively Monge-Amp\`ere and Einstein K\"ahler affine. Let $\tilde{R}$ be the homogeneity $-3$ function on $\hat{M}$ corresponding to $R$, and note that $d\tilde{R}(\hat{X})$ is the homogeneous function corresponding to $\nabla_{X}R$, so equals $0$. 
By definition and \eqref{hnablabe} there hold \begin{align}\label{gmetric} \begin{split} g_{IJ} &= \tfrac{1}{2}\hnabla_{I}d\tilde{v}_{J} = \tilde{v}\left(\hnabla_{I}\be_{J} + 2\be_{I}\be_{J}\right) = \tilde{v}\left(\be_{I}\be_{J} - \rho^{\ast}(P)_{IJ}\right)\\ & = 2^{2/3}\tilde{R}^{-2/3}\left(\be_{I}\be_{J} + \tfrac{1}{2}\rho^{\ast}(RH)_{IJ}\right),\\ f_{IJ} & = -3\hnabla_{I}\be_{J}= -3\tilde{v}^{-1}g_{IJ} + 6\be_{I}\be_{J} = 3\rho^{\ast}(P)_{IJ} + 3\be_{I}\be_{J}. \end{split} \end{align} As $RH_{ij}$ is an unweighted tensor, its pullback $\rho^{\ast}(RH)_{IJ}$ makes sense. From \eqref{gmetric} it is apparent that $g_{IJ}$ has signature $(1, -2)$ and $f_{IJ}$ is Riemannian. By \eqref{projcotton} $\en$ is projectively flat, and so each of $g_{IJ}$ and $f_{IJ}$ forms with $\hnabla$ a Hessian metric. Because $\nabla_{i}u = 0$ there holds $\B(u) = - P_{ij}u = (R/2)H_{ij}u = u^{-2}H_{ij}$, and so $\monge(u) = 1$. Hence by \eqref{dethnablav} and the definition of $g_{IJ}$ there holds \begin{align}\label{detgpsi} \det g = \Psi^{2}. \end{align} In particular $\det g$ is $\hnabla$-parallel, so that $(\hnabla, g_{IJ})$ is a Monge-Amp\`ere Hessian metric of signature $(1, -2)$. Since $\rad^{P}g_{IP} = \tilde{v}\be_{I}$ there holds $g^{IP}\be_{P} = \tilde{v}^{-1}\rad^{I}$, and so $f_{IP}g^{JP} = -3\tilde{v}^{-1}\delta_{I}\,^{J} + 6\tilde{v}^{-1}\be_{I}\rad^{J}$, from which it follows that \begin{align}\label{detf} \det f = 27 \tilde{v}^{-3}\det g = 27e^{2F}\Psi^{2}, \end{align} so that $(\hnabla, f_{IJ})$ is a Riemannian signature Einstein Hessian metric. \subsection{} Alternatively, $\hnabla$ and $g_{IJ}$ generate an exact flat AH structure $([\hnabla], [g])$ with $\hnabla$ as the aligned representative and $g_{IJ}$ as a distinguished metric. The cubic torsion $\hbt_{IJ}\,^{K}$ is $\hbt_{IJ}\,^{K} = g^{KQ}\hnabla_{I}g_{JQ} = g^{KQ}\hnabla_{Q}g_{IJ}$. It is convenient to write $\hbt_{IJK} = \hbt_{IJ}\,^{Q}g_{KQ} = \hnabla_{I}g_{JK} = \tfrac{1}{2}\hnabla_{I}\hnabla_{J}d\tilde{v}_{K}$. 
The Levi-Civita connection $\hD$ of $g_{IJ}$ is $\hD = \hnabla + \tfrac{1}{2}\hbt_{IJ}\,^{K}$, and the Levi-Civita connection $\tilde{D}$ of $f_{IJ}$ is \begin{align} \tilde{D} = \hD - 2\be_{(I}\delta_{J)}\,^{K} - \tfrac{1}{3}f_{IJ}\rad^{K} + 2\be_{I}\be_{J}\rad^{K}. \end{align} Since $\rad^{I}\hnabla_{I}g_{JK} = (\lie_{\rad}g)_{JK} -2g_{JK} = 0$, there holds $\hD_{I}\rad^{J} = \delta_{I}\,^{J}$, and, since $\rad^{P}f_{IP} = 3\be_{I}$, there hold $\tilde{D}_{I}\rad^{J} = 0$ and $\tilde{D}_{I}\be_{J} = 0$. Because $\hnabla_{I}d\tilde{v}_{J}$ is non-degenerate, and because of the form of $g_{IJ}$, it is evident that the submanifolds $M_{c^{2}} = \{p \in \hat{M}: \tilde{v}(p) = c^{2}\}$ are immersed and spacelike for $c > 0$. In particular, the induced metric $\tilde{v}^{\ast}(g)_{ij}$ is $c^{-1}h_{ij}$. Because $\rad^{I}\rad^{J}g_{IJ} = -\tilde{v}$, the vector field $N^{I} = \tilde{u}^{-1}\rad^{I}$ is a unit normal along $M_{c^{2}}$. Since $\hD_{I}N^{J} = \tilde{u}^{-1}\delta_{I}\,^{J} - \be_{I}N^{J}$, the hypersurface $M_{c^{2}}$ is totally umbilic with constant mean curvature with respect to $g_{IJ}$. Similarly, because $\rad^{I}\rad^{J}f_{IJ} = 3$, the $\tilde{D}$-parallel vector field $\tfrac{1}{\sqrt{3}}\rad^{I}$ is an $f$-unit normal to the submanifolds $M_{c^{2}}$, which are therefore totally geodesic with respect to $f_{IJ}$. The preceding is summarized in Theorem \ref{hessiantheorem}. \begin{theorem}\label{hessiantheorem} On a smooth surface $M$, let $(\en, [h])$ be an exact Riemannian signature Einstein AH structure with negative weighted scalar curvature $R$. The metric $g_{IJ}$ defined in \eqref{gmetric} forms with the Thomas connection $\hnabla$ on $\hat{M}$ a Lorentzian signature Monge-Amp\`ere Hessian metric structure, such that the level surfaces of $\tilde{R}$ are smoothly immersed and, with respect to $g_{IJ}$, are spacelike, umbilic, and have constant mean curvature. 
The metric $f_{IJ}$ defined in \eqref{gmetric} forms with $\hnabla$ a Riemannian signature Einstein Hessian structure, such that the level surfaces of $\tilde{R}$ are totally geodesic surfaces with respect to $f_{IJ}$. \end{theorem} \subsection{}\label{dustsection} Let $\hsR_{IJK}\,^{L}$ be the curvature of $\hD$. From the flatness of $\hnabla$ and \begin{align}\label{skewhnablahbt} \hnabla_{[I}\hbt_{J]K}\,^{L} = - \tfrac{1}{2}\hat{R}_{IJK}\,^{L} - \tfrac{1}{2}\hat{R}_{IJ}\,^{L}\,_{K} - \hbt_{Q[I}\,^{L}\hbt_{J]K}\,^{Q}= - \hbt_{Q[I}\,^{L}\hbt_{J]K}\,^{Q}, \end{align} there results $\hsR_{IJK}\,^{L} = -\tfrac{1}{2}\hbt_{Q[I}\,^{L}\hbt_{J]K}\,^{Q}$, and so $\hsR_{IJ} = \tfrac{1}{4}\hbt_{IA}\,^{B}\hbt_{JB}\,^{A}$ and $\hsR_{g} = \tfrac{1}{4}|\hbt|_{g}^{2}$. Since $\hat{M}$ is $3$-dimensional the metric $g$ is conformally flat, and so its curvature is completely determined by its Ricci curvature $\hsR_{IJ}$. Since $R\bt_{ijk}$ is an unweighted tensor, it makes sense to write $\rho^{\ast}(R\bt)_{IJK}$ for its pullback to $\hat{M}$. Differentiating \eqref{gmetric} yields \begin{align}\label{hbt} \hbt_{IJK} = \hnabla_{I}g_{JK} = \tilde{v}\left(\be_{I}\rho^{\ast}(RH)_{JK} + \be_{(J}\rho^{\ast}(RH)_{K)I} + \tfrac{1}{2}\hnabla_{I}\rho^{\ast}(RH)_{JK}\right). \end{align} Since $R$ is parallel, there holds $\hat{X}^{I}\hat{Y}^{J}\hat{Z}^{K}\hnabla_{I}\rho^{\ast}(RH)_{JK} = \rho^{\ast}(RH(\bt(X, Y), Z))$, and there hold \begin{align*} \begin{split} &\rad^{I}\hnabla_{I}\rho^{\ast}(RH)_{JK} = (\lie_{\rad}\rho^{\ast}(RH))_{JK} - 2\rho^{\ast}(RH)_{JK} = -2\rho^{\ast}(RH)_{JK},\\ &\rad^{J}\hnabla_{I}\rho^{\ast}(RH)_{JK} = - \rho^{\ast}(RH)_{IK}, \end{split} \end{align*} so that the righthand side of \eqref{hbt} is equal to $\tfrac{1}{2}\tilde{v}\rho^{\ast}(R\bt)_{IJK}$, showing \begin{align}\label{hbtrbt} \hbt_{IJK} = \tfrac{1}{2}\tilde{v}\rho^{\ast}(R\bt)_{IJK}. 
\end{align} There is a unique tensor $\widetilde{H}^{IJ}$ such that for any covectors $\mu_{i}$ and $\nu_{i}$ there holds $\widetilde{H}^{IJ}\rho^{\ast}(\mu)_{I}\rho^{\ast}(\nu)_{J} = \widetilde{H^{ij}\mu_{i}\nu_{j}}$ and $\widetilde{H}^{IJ}\be_{J} = 0$. Since $\tilde{R}^{-1}\widetilde{H}^{IQ}\rho^{\ast}(RH)_{QJ} = \delta_{J}\,^{I} - \be_{J}\rad^{I}$ there holds \begin{align}\label{ginverse} g^{IJ} = \tilde{v}^{-1}\left(\rad^{I}\rad^{J} + 2\tilde{R}^{-1}\widetilde{H}^{IJ}\right). \end{align} It follows that \begin{align}\label{ghbt} |\hbt|_{g}^{2} = \tfrac{1}{2}\tilde{v}^{-1}\rho^{\ast}(R^{-1}\nbt). \end{align} Using \eqref{hbtrbt}, $2\bt_{ia}\,^{b}\bt_{jb}\,^{a} = \nbt H_{ij}$, \eqref{gmetric}, \eqref{ginverse}, and \eqref{ghbt} it follows straightforwardly that \begin{align}\label{leinstein} \begin{split} 4\hsR_{IJ} &= \hbt_{IA}\,^{B}\hbt_{JB}\,^{A} = \tfrac{1}{4}\tilde{v}^{2}\rho^{\ast}(R\bt)_{IA}\,^{B}\rho^{\ast}(R\bt)_{JB}\,^{A} = \tfrac{1}{2}\rho^{\ast}(\nbt H)_{IJ},\\ 4\hsR_{g}g_{IJ} &= |\hbt|^{2}_{g}g_{IJ} = \rho^{\ast}(\nbt H)_{IJ} + 2\rho^{\ast}(R^{-1}\nbt)\be_{I}\be_{J},\\ \hsR_{IJ} &- \tfrac{1}{2}\hsR_{g}g_{IJ} = -\tfrac{1}{4}\rho^{\ast}(R^{-1}\nbt)\be_{I}\be_{J} = -2^{-8/3}\tilde{R}^{-1/3}\tnbt N^{\flat}_{I}N^{\flat}_{J} = \hat{T}_{IJ}, \end{split} \end{align} in which $N^{I} = \tilde{u}^{-1}\rad^{I}$ and $N^{\flat}_{I} = N^{Q}g_{IQ} = \tilde{u}\be_{I}$. The last equation of \eqref{leinstein} can be interpreted as an instance of the $2+1$ dimensional general relativistic Einstein equations with a stress energy tensor corresponding to a pressureless perfect fluid (a \textit{dust}), if $N^{I}$ is viewed as the velocity field of the fluid and $-2^{-8/3}\tilde{R}^{-1/3}\tnbt$ is viewed as its mass-energy density. Note that $\hat{T}_{IJ}U^{I}U^{J} \geq 0$ for all vector fields $U^{I}$ on $\hat{M}$. \subsection{}\label{mongeamperemetricsection} For $C > 0$ let $\hat{M}_{C} = \{x \in \hat{M}: F(x) > -\log C\}$. 
Define $\Psi(t)$ by \begin{align}\label{psit} \Psi(t) = \int^{C^{1/3}}_{e^{-t/3}}(C - r^{3})^{1/3}\,dr, \end{align} for $t > -\log C$, and set $\phi = \Psi(F)$. Let $\phi_{IJ} = \hnabla_{I}d\phi_{J}$. From \begin{align*} &\phi_{IJ} = \dot{\Psi}(F)f_{IJ} + \ddot{\Psi}(F)F_{I}F_{J},&& &f^{JP}\phi_{IP} = \dot{\Psi}(F)\delta_{I}\,^{J} - \ddot{\Psi}(F)F_{I}\rad^{J}, \end{align*} it follows that $\phi_{IJ}$ is positive definite on $\hat{M}_{C}$. Noting that $\dot{\Psi} + 3\ddot{\Psi} = (Ce^{t} - 1)^{-1}\dot{\Psi}$ and using \eqref{detf}, it follows that \begin{align*} \det \phi_{IJ} = 27\dot{\Psi}(F)^{2}(\dot{\Psi}(F) + 3 \ddot{\Psi}(F))e^{2F}\Psi^{2} = \Psi^{2}. \end{align*} This shows that $\phi_{IJ}$ is a Riemannian signature Monge-Amp\`ere metric on $\hat{M}_{C}$. \begin{theorem}\label{mongeamperetheorem} On a smooth surface $M$, let $(\en, [h])$ be an exact Riemannian signature Einstein AH structure with negative weighted scalar curvature $R$. For each $C > 0$ the metric $\phi_{IJ} = \hnabla_{I}d\phi_{J}$, where $\phi = \Psi(F)$ with $\Psi$ defined by \eqref{psit} and $F$ the function defined in terms of $R$ as in section \ref{hessiansection}, is a Riemannian signature Monge-Amp\`ere metric on $\hat{M}_{C} = \{x \in \hat{M}: F(x) > -\log C\}$. \end{theorem} Theorem \ref{mongeamperetheorem} is essentially the same as Proposition $1$ of the unpublished erratum \cite{Loftin-Yau-Zaslow-erratum} to \cite{Loftin-Yau-Zaslow}, and the method of construction, solving the ODE for $\Psi$ that results from requiring $\Psi(F)$ to be Monge-Amp\`ere, simply follows an example in section $2$ of \cite{Loftin-Yau-Zaslow}. \subsection{}\label{convexsection} Here the constructions of section \ref{mongesection} are used to show the convexity of the projective structure underlying an Einstein AH structure on a surface of genus $g > 1$. Let the notation be as in that section. 
\begin{theorem}\label{convextheorem} Let $(\en, [h])$ be an exact Riemannian signature Einstein AH structure with negative weighted scalar curvature $R$ on a smooth surface $M$. If a distinguished metric $h \in [h]$ is complete then the Riemannian metric $f_{IJ}$ on $\hat{M}$ is complete and $\en$ is a convex flat real projective structure; in particular this is the case if $M$ is compact. \end{theorem} \begin{proof} Note that $\en$ is projectively flat by Lemma \ref{2deinsteinlemma}. Let $h \in [h]$ be a distinguished metric. If $h_{ij}$ is complete then $g_{ij} = 3P_{ij} = -(3R_{h}/2)h_{ij}$ is complete, and it follows from \eqref{gmetric} that on $\hat{M}$ the metric $f_{IJ}$ has the form $\rho^{\ast}(g)_{IJ} + dt_{I}dt_{J}$ where $t = F/\sqrt{3}$. It is clear from this form that $f_{IJ}$ is complete if $h_{ij}$ is. Theorem $2.1$ of \cite{Shima-Yagi} shows that if a simply connected manifold admits a complete Hessian metric then its affine developing map is a diffeomorphism onto a convex domain $\Omega$ in flat affine space. Applying this to the given structures lifted to the universal cover $\tilde{M}$ of $M$ shows that the affine developing map of the Thomas connection is a diffeomorphism onto a convex domain in $\rea^{3}$. The function $F$ is strictly convex and solves $\det \hnabla_{I}dF_{J} = 27e^{2F}\Psi^{2}$. Transferring it to the image of the affine developing map of $\hnabla$ gives on $\Omega$ a function with the same properties, and it follows (see the argument given in the remark following the statement of Theorem $1$ on pages $357-358$ of \cite{Cheng-Yau-realmongeampere}) that $\Omega$ contains no complete affine line. 
From the equivariance of all the preceding with respect to the scaling action on $\hat{M}$ it follows that $\Omega$ is a convex cone containing no complete affine line and the projective developing map of $\en$ is a diffeomorphism onto the oriented projectivization of $\Omega$, which by the preceding argument is a convex domain in the projective sphere the closure of which contains no pair of antipodal points (for if it did, the cone over it, which is $\Omega$, would contain a complete affine line). This shows that $\en$ is a convex projective structure. \end{proof} \begin{theorem}\label{conformaldeterminedtheorem} If a surface $M$ admits a convex flat real projective structure $\en$ then it admits a unique Riemannian signature conformal structure $[h]$ such that $(\en, [h])$ is an exact Einstein AH structure with negative weighted scalar curvature and complete distinguished metric. \end{theorem} \begin{proof} By assumption the pullback of $\hat{M}$ over the universal cover of $M$ is identified by the affine developing map of the Thomas connection of $\en$ with a convex cone $\Omega \subset \rea^{3}$ containing no complete affine line. By Theorem $4.4$ of \cite{Cheng-Yau-realmongeampere} there is a smooth function $F$ on $\Omega$ solving $\det \hnabla_{I}dF_{J} = 27e^{2F}\Psi^{2}$, tending to $+\infty$ at the boundary of $\Omega$, and such that $f_{IJ} = \hnabla_{I}dF_{J}$ is a complete Riemannian metric on the interior of $\Omega$. Passing to the tube domain (in $\com^{3}$) over $\Omega$ and applying the generalized Schwarz lemma for volume forms proved in section $1$ of \cite{Mok-Yau} it can be deduced that $e^{F}$ has positive homogeneity $-3$. Define the density $u$ on $M$ by $\tilde{u} = e^{-F/3}$ and let $h_{ij} = -u^{-1}\B(u)$. Tracing through the identifications in section \ref{mongesection} backwards, it is straightforward to check that $(\en, [h])$ is an Einstein AH structure with parallel negative scalar curvature and distinguished metric $h_{ij}$. 
The completeness of $h_{ij}$ follows from the splitting \eqref{gmetric} and the completeness of $f_{IJ}$, as in the proof of Theorem \ref{convextheorem}. If $[g]$ is another conformal structure with the same properties as $[h]$, let $G$ be the corresponding function on $\Omega$, which has the same properties as has $F$. Passing to the tube domain over $\Omega$ and applying the Schwarz lemma from \cite{Mok-Yau} shows that $F = G$, and so $g$ and $h$ are homothetic (here the completeness of both $g$ and $h$ is essential). \end{proof} Theorem \ref{conformaldeterminedtheorem} shows that an exact Einstein AH structure with negative scalar curvature and complete distinguished metric is already completely determined by its underlying (necessarily flat) projective structure, which is convex. This correspondence is evidently diffeomorphism equivariant, and so combining Theorems \ref{scalarexacttheorem}, \ref{2dmodulitheorem} and \ref{conformaldeterminedtheorem} there results Theorem \ref{summarytheorem}. \begin{proof}[Proof of Theorem \ref{summarytheorem}] The only thing that perhaps requires comment is the genus $1$ case. From Theorem \ref{scalarexacttheorem} it follows that an exact Einstein AH structure on a torus is Weyl if and only if it is a flat conformal structure. The analogue in this case of Theorem \ref{2dmodulitheorem} follows straightforwardly from Lemma \ref{genuszerolemma}. \end{proof} \subsection{Remark on tractor formalism} Projective structures are the simplest parabolic geometries, and the powerful general machinery (see \cite{Cap-Slovak-book}) applicable to such geometries should be useful in further understanding Einstein AH structures. 
In particular, it should be possible to give substance to the analogy between Einstein AH structures and extremal K\"ahler metrics using the cohomological ideas of \cite{Calabi-constantcurvature} (implicitly relating certain conformal and projective BGG sequences), which have yet to be developed within the general parabolic geometry framework. It should be mentioned that S. Armstrong has in \cite{Armstrong-einstein} proposed a notion of Einstein structures for general parabolic geometries. Here is not the place for a careful examination of how his notion relates to that of Einstein AH structures, but it is briefly indicated how to pass from the formalism used in this paper to the tractor formalism of \cite{Cap-Gover}. The homogeneities of $\rad$, $\nabla$, $g$, and $\Psi$, are, respectively $0$, $0$, $2$, and $3$. These numbers are explained by passing to the tractor bundle $\tractor \to M$, which is the rank $3$ vector bundle over $M$ the total space of which is the quotient of $T\hat{M}$ by the action of $\reat$ via $r^{-1}TR_{r}$. Sections of $\tractor$ correspond to homogeneity $-1$ vector fields on $\hat{M}$, from which observation it is clear that $\nabla$ induces a connection on $\tractor$ (the \textbf{tractor connection}), and $g$ and $\Psi$ descend, respectively, to give a metric and a volume form on $\tractor$. The relation between the Thomas and tractor connections is explained in a bit more detail in section $3.1$ of \cite{Fox-cproj} or in section $2.3$ of \cite{Cap-Zadnik-cproj}; for background on the tractor formalism see \cite{Cap-Gover}. The vector field $\rad$ does not descend to $\tractor$, but its span does, giving a distinguished line subbundle of $\tractor$. Since a choice of a non-vanishing density on $M$ can be identified with a section of this distinguished line subbundle, it determines a splitting of $\tractor$ as a direct sum of $TM$ and the trivial line bundle. 
In much of the literature the objects just described appear defined in terms of such a splitting; for example the exposition in sections $2$ and $3$ of \cite{Labourie-flatprojective} is made in this way. Probably an appropriate reformulation of the Einstein AH condition would interpret the section of $S^{2}(\tractor^{\ast})$ corresponding to $g$ as harmonic. \section{Lagrangian immersions in (para)-K\"ahler space forms}\label{parakahlersection} \subsection{} Let $(N, g, J)$ be a (para)-K\"ahler manifold, which means that $J_{i}\,^{j}$ is an endomorphism satisfying $J_{i}\,^{p}J_{p}\,^{j} = \ep \delta_{i}\,^{j}$, where $\ep$ is $-1$ in the K\"ahler case and $+1$ in the para-K\"ahler case; $g_{ij}$ is a metric, respectively Riemannian or split; and the tensor $\Om_{ij}= J_{i}\,^{p}g_{pj}$ is a symplectic form. The definitions for para-K\"ahler manifolds of the Ricci form, the Einstein condition, etc. are formally identical to those in the K\"ahler case. A (para)-K\"ahler manifold has \textbf{constant (para)-holomorphic sectional curvature $4c$} if its curvature has the form \begin{align}\label{chsc} R_{ijk}\,^{l} = 2c\left(\delta_{[i}\,^{l}g_{j]k} - \ep J_{[i}\,^{l}\Omega_{j]k} + \ep \Omega_{ij}J_{k}\,^{l} \right). \end{align} This is equivalent to the condition that for all $X \neq 0$ there holds $g(R(X, JX)X, JX) = 4c\ep g(X, X)^{2}$. Note that if the para-K\"ahler structure $(g, J)$ has constant para-holomorphic sectional curvature $4c$, then the para-K\"ahler structure $(-g, -J)$, which has the same underlying symplectic structure, has constant para-holomorphic sectional curvature $-4c$. An immersed submanifold of a para-K\"ahler manifold is \textbf{spacelike} if the induced metric is positive definite. While the para-K\"ahler structures $(g, J)$ and $(-g, -J)$ appear similar, their spacelike submanifolds are different. 
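The asserted sign reversal can be verified directly (a verification added here): replacing $g$ by $-g$ does not change the Levi-Civita connection, hence does not change $R_{ijk}\,^{l}$, while $\Om_{ij} = J_{i}\,^{p}g_{pj}$ is unchanged under $(g, J) \to (-g, -J)$; substituting into \eqref{chsc} gives
\begin{align*}
R_{ijk}\,^{l} = 2c\left(\delta_{[i}\,^{l}g_{j]k} - \ep J_{[i}\,^{l}\Om_{j]k} + \ep \Om_{ij}J_{k}\,^{l}\right) = 2(-c)\left(\delta_{[i}\,^{l}(-g)_{j]k} - \ep (-J)_{[i}\,^{l}\Om_{j]k} + \ep \Om_{ij}(-J)_{k}\,^{l}\right),
\end{align*}
so $(-g, -J)$ satisfies \eqref{chsc} with $c$ replaced by $-c$.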
If $\ste$ is a vector space with dual $\sted$, a flat para-K\"ahler structure $(\bG, \bJ)$ on $\ste \times \sted$ is constituted by the symplectic form $\bOmega((u,\mu), (v, \nu)) = \mu(v) - \nu(u)$ and the para-complex structure $\bJ$ equaling the identity on $\sted$ and minus the identity on $\ste$. The map $\Psi:\ste \times \sted \to \gl(n+1, \rea)^{\ast}$ given by $\lb\Psi(u, \mu), A\ra = \mu(Au)$ is the moment map for the action of the general linear group $GL(n+1, \rea)$ of $\ste$ on $\ste \times \sted$. The level sets of non-zero level of $\lb \Psi, \Id\ra$ are the pseudo-spheres of constant $\bG$ norm. Their images in the quotient of $\{ (u, \mu) \in \ste \times \sted: \mu(u) \neq 0\}$ by the action of the center of $GL(n+1, \rea)$ are the two components $\Sigma_{\pm} = \{([u], [\mu]) \in \projp(\ste) \times \projp(\sted): \pm \mu(u) < 0\}$ of the complement of the incidence correspondence in $\projp(\ste) \times \projp(\sted)$, where $\projp$ denotes oriented projectivization. The flat para-K\"ahler structure on $\ste \times \sted$ descends to $\Sigma_{\pm}$ to give the model para-K\"ahler structures of constant para-holomorphic sectional curvature. This is formally parallel to the construction of the Fubini-Study metric on $\proj^{n}(\com)$ by reduction of the flat K\"ahler structure on complex Euclidean space via the Hopf fibration and the moment map for the action of $U(n)$. This can be understood in a more general context as follows. A \textbf{para-Hermitian symmetric space} is an affine symmetric space $G/H$ with an almost para-Hermitian structure such that the symmetries act as automorphisms of the almost para-Hermitian structure. The almost para-Hermitian structure of a para-Hermitian symmetric space is necessarily para-K\"ahler, and $G$ acts by para-K\"ahler automorphisms. This and other basic facts about these spaces are due to S. 
Kaneyuki and collaborators in a series of papers, from which there results: \begin{theorem}[\cite{Hou-Deng-Kaneyuki-Nishiyama, Kaneyuki-compactification, Kaneyuki-Kozai}]\label{kaneyukitheorem} Let $G$ be a connected, semisimple Lie group and $H \subset G$ a closed subgroup. The following are equivalent: \begin{enumerate} \item $G/H$ is a homogeneous para-K\"ahler manifold. \item $H$ is an open subgroup of a Levi subgroup of a parabolic subgroup $P$ of $G$ having abelian nilradical. \item $G/H$ is a $G$-equivariant covering space of the adjoint orbit of a hyperbolic semisimple element of $\g$. \end{enumerate} Up to covering, para-Hermitian symmetric spaces of semisimple Lie groups are in bijection with semisimple graded Lie algebras $\g = \g_{-1} \oplus \g_{0}\oplus \g_{1}$ in such a way that $\g = \mathfrak{lie}(G)$ and $\g_{0} = \mathfrak{lie}(H)$. \end{theorem} In the setting of Theorem \ref{kaneyukitheorem}, $G/H$ is diffeomorphic to the cotangent bundle of $G/P$, and the symplectic form on $G/H$ is the Kostant-Kirillov symplectic form pulled back from the coadjoint orbit via some fixed multiple of the Killing form. The para-Hermitian symmetric spaces are Einstein. The proof is formally parallel to the proof that Hermitian symmetric spaces are Einstein (see Proposition $9.7$ of \cite{Kobayashi-Nomizu-2}). The scalar curvature is determined up to a scale factor fixed by the choice of invariant symplectic form. The para-Hermitian symmetric space structure on the adjoint orbit of an element of $\sll(n+1, \rea)$ generating the center of the stabilizer in $SL(n+1, \rea)$ of any element of $\Sigma_{\pm}\subset \projp(\ste) \times \projp(\sted)$ has constant non-zero para-holomorphic sectional curvature. This orbit is identified with the corresponding connected component $\Sigma_{\pm}$, and its para-Hermitian structure agrees up to constant factors with the model para-K\"ahler structure described above. 
As in Theorem \ref{kaneyukitheorem} the components $\Sigma_{\pm}$ are diffeomorphic to the cotangent bundle $T^{\ast}\projp(\ste)$. \subsection{}\label{pksection} Let $i:M \to N$ be a Lagrangian immersion in the $2n$-dimensional (para)-K\"ahler manifold $(N, g, J)$, assumed spacelike in the para-K\"ahler case. Via $J_{i}\,^{j}$ the normal bundle to $i(M)$ is identified with its tangent bundle, and this gives an identification of the second fundamental form $\Pi(X, Y)$ with the completely symmetric tensor $\Pi_{ijk}$ on $M$ defined by $\Pi(X, Y, Z) = \Om(\Pi(X, Y), Z)$ for $X, Y, Z \in \Ga(TM)$. Let $h_{ij} = i^{\ast}(g)_{ij}$ be the induced metric, let $H^{i} = h^{ia}\Pi_{apq}h^{pq}$ be the vector field dual to the mean curvature one-form (which is the one-form identified with the mean curvature vector using the (para)-K\"ahler structure), and let $B_{ijk} = \Pi_{ijk} - \tfrac{3}{n+2}H_{(i}h_{jk)}$ be the completely trace-free part of the second fundamental form. In the K\"ahler case, a lemma of P. Dazord, \cite{Dazord}, shows that $dH^{\flat}_{ij}$ is the pullback via $i$ of the Ricci form, and the same statement is true in the para-K\"ahler case, with a formally identical proof. It follows that if $g$ is Einstein, then $dH^{\flat}_{ij} = 0$; in particular this is true if $g$ has constant (para)-holomorphic sectional curvature. If $(N, g, J)$ is four dimensional and has constant (para)-holomorphic sectional curvature $4c$, the Gau\ss-Codazzi equations yield \begin{align}\label{ckmc} \begin{split} &\sR_{h} - 2c - \ep |B|_{h}^{2} + \tfrac{\ep}{4}|H|_{h}^{2} = 0,\\ &4\div_{h}(B)_{ij} = \tf_{h}(\lie_{H}h)_{ij},\qquad 2D_{[i}B_{j]kl} = h_{k[i}\div_{h}(B)_{j]l} + h_{l[i}\div_{h}(B)_{j]k}, \end{split} \end{align} the last of which is vacuous by \eqref{2dci}. Say that the immersion is \textbf{CKMC} (has \textbf{conformal Killing mean curvature}) if $\tf_{h}(\lie_{H}h)_{ij} = 0$. 
By \eqref{ckmc} and Lemma \ref{kdifferentialslemma}, the immersion is CKMC if and only if both $B^{(3,0)}$ and $H^{(1,0)}$ are holomorphic. In particular, a spacelike Lagrangian immersion of a surface of genus $g > 1$ is CKMC if and only if it has mean curvature $0$. Suppose now that the immersion has mean curvature $0$ so that $H^{i} = 0$, and define $\nabla = D - B_{ijp}h^{kp}$. Then $\nabla_{i}h_{jk} = 2B_{ijk}$, so $\nabla$ generates with $[h]$ an AH structure $(\en, [h])$. Since $h^{pq}\nabla_{i}h_{pq} = 0 = 2h^{pq}\nabla_{p}h_{qi}$, $\nabla$ is the aligned representative of $(\en, [h])$, which is exact, and has $h$ as a distinguished representative. From \eqref{confscal} and \eqref{ckmc} it follows that the curvature $\uR_{h}$ is \begin{align}\label{fpk} \uR_{h} = \sR_{h} - |B|^{2}_{h} = 2c + (\ep -1)|B|^{2}_{h}. \end{align} In the para-K\"ahler case $\ep = 1$, so $\uR_{h} = 2c$ is a constant. Hence the weighted scalar curvature of $(\en, [h])$ is parallel, and $(\en, [h])$ is Einstein. This proves \begin{theorem}\label{parakahlertheorem} On a mean curvature zero spacelike Lagrangian immersion of a surface in a four dimensional para-K\"ahler manifold of constant para-holomorphic sectional curvature $4c$ there is induced an exact Riemannian Einstein AH structure with scalar curvature $2c$. \end{theorem} Theorem \ref{parakahlertheorem} is the $n = 2$ special case of Theorem $8.4$ of \cite{Fox-ahs}. Theorem $4.3$ of R. Hildebrand's \cite{Hildebrand-crossratio} is essentially equivalent to Theorem \ref{parakahlertheorem}, although it is stated using different terminology. \begin{corollary}\label{parakahlercorollary}\quad \begin{enumerate} \item\label{fpk1} In a $4$-dimensional para-K\"ahler manifold of constant negative para-holomorphic sectional curvature, there is no mean curvature zero spacelike Lagrangian immersion of a two sphere. 
\item\label{fpk2} In a $4$-dimensional para-K\"ahler manifold of constant positive para-holomorphic sectional curvature, there is no mean curvature zero spacelike Lagrangian immersion of a compact orientable surface of genus greater than one. \item\label{fpk3} In a flat $4$-dimensional para-K\"ahler manifold, a mean curvature zero spacelike Lagrangian immersion of a compact orientable surface is a totally geodesic Lagrangian immersion of a flat torus. \item\label{fpk4} In the flat para-K\"ahler space $\ste \times \sted$, there is no mean curvature zero spacelike Lagrangian immersion of a compact orientable surface. \end{enumerate} \end{corollary} \begin{proof} By Theorem \ref{scalarexacttheorem} there cannot be an exact Einstein AH structure with negative scalar curvature on the two-sphere, there cannot be an exact Einstein AH structure with positive scalar curvature on a compact orientable surface of genus greater than one, and an exact Einstein AH structure with vanishing scalar curvature on a compact orientable surface is necessarily that generated by a flat metric on a torus. This proves \eqref{fpk1}-\eqref{fpk3}. As the geodesics in the flat para-K\"ahler structure on $\ste \times \sted$ are affine lines, which are contained in no compact subset, there can be no such immersion in $\ste \times \sted$. \end{proof} The claims of Corollary \ref{parakahlercorollary} are not the strongest results of this sort possible. For example, conclusion \eqref{fpk4} of Corollary \ref{parakahlercorollary} follows from the much stronger Theorem $4.2$ of J. Jost and Y.~L. Xin's \cite{Jost-Xin}, which generalizes a Bernstein type theorem of J\"orgens-Calabi-Pogorelov: \begin{theorem}[\cite{Jost-Xin}] If the image of a mean curvature zero spacelike Lagrangian immersion in the flat para-K\"ahler space $\ste \times \sted$ is closed then it is an affine subspace. 
\end{theorem} It would be interesting to know which exact Einstein AH structures on surfaces can be realized as in Theorem \ref{parakahlertheorem} as immersed or embedded mean curvature zero Lagrangian submanifolds of para-K\"ahler space forms. In fact, there is a way to associate to an exact Einstein AH structure on a surface $M$ of genus $g > 1$ a spacelike Lagrangian immersion of its universal cover $\tilde{M}$ in a para-K\"ahler manifold of constant para-holomorphic sectional curvature. This is now sketched. It is planned to report the details elsewhere, although, since the first version of this paper appeared, Hildebrand has described, in \cite{Hildebrand-crossratio} and \cite{Hildebrand-parakahler}, a closely related correspondence between centro-affine hypersurfaces and Lagrangian submanifolds of para-K\"ahler manifolds; in particular, Theorem $4.1$ and Corollary $4.2$ of \cite{Hildebrand-crossratio} yield conclusions very similar to those of the following paragraph. Because it is integrable, the horizontal subbundle of the cotangent bundle determined by a flat affine connection constitutes with the vertical subbundle a pair of transverse foliations which are Lagrangian with respect to the tautological symplectic structure, so determine on the cotangent bundle a pair of para-K\"ahler structures distinguished by the choice of the vertical or the horizontal subbundle as the $+1$ eigensubbundle of the para-complex structure. Parallel transport by the flat affine connection $\hnabla$ on $\ste$ determines an identification $T^{\ast}\ste \simeq \ste \oplus \sted$ under which the horizontal (resp. vertical) subbundle is sent to that corresponding to $\ste \times \{0\}$ (resp. $\{0\} \times \sted$). 
Under this identification, $\bOmega$ corresponds to twice the tautological symplectic form on $T^{\ast}\ste$, the para-complex structure $\bJ$ corresponds to the choice of the vertical subbundle of $T^{\ast}\ste$ as the $+1$-eigensubbundle, and the graph $\Ga_{\be}$ of a closed one-form $\be$ on $\ste$ is identified with a Lagrangian submanifold of $\ste \times \sted$ which is conical (preserved by positive dilations) if and only if $\be$ has homogeneity $2$. The flat para-K\"ahler structure on the cotangent bundle determined by the choice of the vertical subbundle as the $+1$ eigenbundle of the para-complex structure has the property that the pullback $\be^{\ast}(\bG)$ via the closed one-form $\be$ of the resulting para-K\"ahler metric is $2\hnabla\be$. If this is non-degenerate, then the second fundamental form of $\Ga_{\be}$ is $\hnabla\hnabla\be$, and its mean curvature is the logarithmic covariant derivative of $\det \hnabla \be$. In particular, $\Ga_{\be}$ has mean curvature zero if and only if $\det \hnabla \be$ is parallel; if $\be$ is the differential of a positive homogeneity $2$ function $\tilde{v}$, this is equivalent to $\tilde{v}$ solving the Monge-Amp\`ere equation \eqref{detgpsi}. Mean curvature zero immersed Lagrangian submanifolds of $\Sigma_{\pm}$ correspond to mean curvature zero immersed conical Lagrangian submanifolds of $\ste \times \sted$. This is formally parallel to the correspondence between minimal Lagrangian immersions in complex projective space and minimal Lagrangian cones in complex Euclidean space recounted in section $2$ of \cite{Mcintosh}. The Thomas connection of the convex flat projective structure determined by the lift to $\tilde{M}$ of the given Einstein AH structure on $M$ is identified with the restriction to a proper open convex cone in $\ste$ of the standard flat affine connection $\hnabla$ on $\ste$. 
On this cone there is, as in section \ref{hessiansection} and the proofs of Theorems \ref{convextheorem} and \ref{conformaldeterminedtheorem}, the positive homogeneity $2$ solution $\tilde{v}$ of the Monge-Amp\`ere equation \eqref{detgpsi}. The graph of the one-form $d\tilde{v}$ is a conical spacelike mean curvature zero Lagrangian submanifold which covers the desired mean curvature zero Lagrangian submanifold of the para-K\"ahler space form. The induced Einstein AH structure coincides with the original one on $\tilde{M}$. \subsection{} Given a background metric $\tilde{h}_{ij}$, a cubic holomorphic differential $B^{(3,0)}$, and a holomorphic vector field $H^{(1,0)}$, it makes sense to look for a conformal metric $h_{ij} = e^{\phi}\tilde{h}_{ij}$ such that $(h, B, H)$ are as for the induced tensors on a CKMC Lagrangian immersion in a (para)-K\"ahler manifold $(N, g, J)$. There results the equation \begin{align}\label{ckmceq} \lap_{\tilde{h}}\phi - \sR_{\tilde{h}} + 2ce^{\phi} - \tfrac{\ep}{4}e^{2\phi}|H|_{\tilde{h}}^{2} + \ep e^{-2\phi}|B|_{\tilde{h}}^{2} = 0. \end{align} The solution of the case of \eqref{ckmceq} in which $\ep = +1$ and $c < 0$ has been described for compact surfaces in section \ref{constructionsection}. In the K\"ahler ($\ep = -1$) case, \eqref{ckmceq} should be interesting for both signs of $c$, corresponding to the complex projective plane and the complex hyperbolic plane. One wonders whether for Lagrangian immersions in a complex hyperbolic $4$-manifold there is a deformation space of solutions like that for mean curvature zero surfaces in three-dimensional hyperbolic space studied by C. Taubes in \cite{Taubes}. After the first version of this paper had been completed there appeared the preprint \cite{Loftin-Mcintosh}, in which questions of this sort are treated in detail for the $c = -1$ case of \eqref{ckmceq}. 
For example, after appropriate changes of notation, Theorem $4.1$ of \cite{Loftin-Mcintosh} is: \begin{lemma}[\cite{Loftin-Mcintosh}] If $M$ is a compact orientable surface of genus at least two and $\tilde{h}$ is a metric of constant scalar curvature $-2$ on $M$ then for every cubic holomorphic differential $B^{(3, 0)}$ such that there holds everywhere on $M$ the bound $|B|^{2}_{\tilde{h}} \leq 8/27$ there is a solution $\phi$ to the equation \eqref{ckmceq} with parameters $\ep = -1$ and $c = -1$ satisfying $0 \geq \phi \geq \log 2 - \log 3$. \end{lemma} \begin{proof} Since the short proof is just like the proof of Lemma \ref{wexistencelemma}, it is convenient to give it here. Clearly $0$ is a supersolution of $\ahop(\phi) = \lap_{\tilde{h}}\phi + 2 - 2e^{\phi} - e^{-2\phi}|B|_{\tilde{h}}^{2}$. If $c$ is any constant, then $e^{2c}\ahop(c) \geq -2p(e^{c})$ where $p(r) = r^{3} - r^{2} + \tfrac{1}{2}\max_{M}|B|^{2}_{\tilde{h}}$. This polynomial $p$ is non-negative at $r = 0$ and has at $r = 2/3$ a local minimum at which its value is $\tfrac{1}{2}\max_{M}|B|^{2}_{\tilde{h}} - 4/27$. Hence $p$ has a positive zero if and only if $\max_{M}|B|^{2}_{\tilde{h}} \leq 8/27$, in which case its smallest positive zero $r_{1}$ is no greater than $2/3$. In this case $\ahop(\log r_{1}) \geq 0$, so $\log r_{1}$ is a negative subsolution of $\ahop$. As in the proof of Lemma \ref{wexistencelemma} this suffices to show the existence of a solution to $\ahop(\phi) = 0$ satisfying the indicated bounds. \end{proof} More work has to be done to construct from such a solution a minimal Lagrangian immersion in the complex hyperbolic plane, and this is a part of what is accomplished in \cite{Loftin-Mcintosh}. A similar analysis in the para-K\"ahler case should be interesting as well. 
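The elementary properties of the cubic $p(r) = r^{3} - r^{2} + \tfrac{1}{2}\max_{M}|B|^{2}_{\tilde{h}}$ invoked in the preceding proof can be confirmed numerically. The following minimal sketch is mine, not part of the paper; the variable `b` stands for $\max_{M}|B|^{2}_{\tilde{h}}$:

```python
# Numerical check of the cubic p(r) = r^3 - r^2 + b/2 used in the proof.
# Claims checked:
#   (i)   p'(r) = 3r^2 - 2r vanishes at r = 0 and r = 2/3;
#   (ii)  the critical value p(2/3) equals b/2 - 4/27 for any b;
#   (iii) at the borderline b = 8/27 this critical value is exactly 0,
#         and for larger b the local minimum is positive, so p has no
#         positive zero.

def p(r, b):
    return r**3 - r**2 + b / 2

def dp(r):
    return 3 * r**2 - 2 * r

assert dp(0.0) == 0.0 and abs(dp(2 / 3)) < 1e-12            # (i)

for b in (0.1, 0.2, 8 / 27):                                 # (ii)
    assert abs(p(2 / 3, b) - (b / 2 - 4 / 27)) < 1e-12

assert abs(p(2 / 3, 8 / 27)) < 1e-12                         # (iii)
assert p(2 / 3, 8 / 27 + 1e-3) > 0
```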
\def\cprime{$'$}
\def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth \lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\def\dbar{\leavevmode\hbox to 0pt{\hskip.2ex \accent"16\hss}d}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}}
\providecommand{\href}[2]{#2}
Q: Inheritance from the Object class

How does the compiler make every class implicitly inherit from Object? This is the behavior adopted in languages such as C#, Java and others.

public class Funcionario { }

public class Funcionario : Object { }

The example above is redundant. But how does the compiler know that every class created must follow this pattern? Can the programmer force the compiler to implement such behavior, perhaps even forcing inheritance from other classes?

A: There is no way for the programmer to avoid the inheritance from Object in the language and, as far as I know, not even at a lower level. There is no secret to how it is done: for every class that declares no other base class, the compiler inserts the inheritance from Object. It performs the syntactic and semantic analysis of what is written and decides what to do. The compiler knows because a programmer programmed it that way. When a class inherits from another class, the compiler does not need to do this: since all classes inherit from Object, the other base class is guaranteed to make this one inherit from Object through the hierarchy.

A: "But how does the compiler know that every class created must follow this pattern?" It is simply the convention that was established: during the design of these languages it was decided that, at the top of the inheritance hierarchy, there should be a common superclass. "Can the programmer force the compiler to implement such behavior? Perhaps even forcing inheritance from other classes?" No (as far as I know). The reverse would be illogical, since the compiler would not know how to handle such an artifact. You can force whatever inheritance you want, but the first ancestor will always be Object.
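The same implicit-root-class design exists in Python, where it is easy to inspect at runtime. The sketch below is a Python analogue of the C#/Java behavior discussed in this question, not the C#/Java mechanism itself; the class name `Funcionario` is reused from the question:

```python
# In C# and Java the compiler inserts the implicit base class Object;
# Python behaves analogously: every class implicitly derives from object.

class Funcionario:                    # no explicit base class...
    pass

class FuncionarioExplicit(object):    # ...equivalent to naming object
    pass

assert Funcionario.__bases__ == (object,)
assert FuncionarioExplicit.__bases__ == (object,)

# A class that inherits from another class still reaches object through
# the hierarchy, exactly as the first answer describes.
class Gerente(Funcionario):
    pass

assert Gerente.__mro__ == (Gerente, Funcionario, object)
```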
Question (Class 11 Physics). Two lead spheres of $20\,\mathrm{cm}$ and $2\,\mathrm{cm}$ diameter respectively are placed with centres $100\,\mathrm{cm}$ apart. Calculate the attraction between them, given the radius of the Earth as $6.37\times 10^{8}\,\mathrm{cm}$ and its mean density as $5.53\times 10^{3}\,\mathrm{kg\,m^{-3}}$. Specific gravity of lead $= 11.5$. If the lead spheres are replaced by brass spheres of the same radii, would the force of attraction be the same?

Solution. Here $r_{1} = 0.20/2 = 0.1\,\mathrm{m}$, $r_{2} = 0.02/2 = 0.01\,\mathrm{m}$, $r = 1.0\,\mathrm{m}$, $\rho = 5.53\times 10^{3}\,\mathrm{kg\,m^{-3}}$ (mean density of the Earth), and $\rho' = 11.5\times 10^{3}\,\mathrm{kg\,m^{-3}}$ (density of lead).

The masses of the two lead spheres are
$$m_{1} = \tfrac{4}{3}\pi r_{1}^{3}\rho' = \tfrac{4}{3}\times 3.14\times (0.1)^{3}\times 11.5\times 10^{3} = 48.15\,\mathrm{kg}, \qquad m_{2} = \tfrac{4}{3}\pi r_{2}^{3}\rho' = 4.815\times 10^{-2}\,\mathrm{kg}.$$

Since $g = \frac{GM}{R^{2}} = \frac{G}{R^{2}}\cdot\tfrac{4}{3}\pi R^{3}\rho$, we have $G = \frac{3g}{4\pi R\rho}$. The force of attraction between the lead spheres is therefore
$$F = \frac{Gm_{1}m_{2}}{r^{2}} = \frac{3g}{4\pi R\rho}\cdot\frac{m_{1}m_{2}}{r^{2}} = \frac{3\times 9.8\times 48.15\times 4.815\times 10^{-2}}{4\times 3.14\times (6.37\times 10^{6})\times 5.53\times 10^{3}\times (1)^{2}} = 15.4\times 10^{-11}\,\mathrm{N}.$$

If the lead spheres are replaced by brass spheres of the same radii, the force would not be the same: brass is less dense than lead, so the masses, and hence the attraction, would be smaller.
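The numerical answer above can be checked with a short script. This is a sketch of my own (variable names are mine): it recomputes $G$ from $g = GM/R^{2}$ with $M = \tfrac{4}{3}\pi R^{3}\rho$, then evaluates the force:

```python
import math

g = 9.8             # m s^-2, surface gravity
R = 6.37e6          # m, Earth's radius (6.37e8 cm)
rho_earth = 5.53e3  # kg m^-3, Earth's mean density
rho_lead = 11.5e3   # kg m^-3, density of lead (specific gravity 11.5)

# g = G M / R^2 with M = (4/3) pi R^3 rho  gives  G = 3 g / (4 pi R rho)
G = 3 * g / (4 * math.pi * R * rho_earth)

r1, r2, d = 0.10, 0.01, 1.0                  # sphere radii, separation (m)
m1 = (4 / 3) * math.pi * r1**3 * rho_lead    # ~48.2 kg
m2 = (4 / 3) * math.pi * r2**3 * rho_lead    # ~4.82e-2 kg

F = G * m1 * m2 / d**2
print(F)  # ~1.54e-10 N, i.e. 15.4e-11 N, matching the worked answer
```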
{"url":"https:\/\/www.mathwarehouse.com\/topic\/category\/lesson-plans\/3rd-grade-lessons\/","text":"# 3rd Grade Lesson: Fractions and Measuring\n\n## Discussion\/Introduction\n\nMeasuring Time! Time to get up, stretch, and have fun with some hands on stuff! Here we look at a combination of measuring\u2014an exciting hands on subject for most gradeschoolers- with a real-life application of fractions. The concepts presented here are simple and your students shouldn\u2019t have much trouble with them, but, done right, they will provide an invaluable intuitive understanding of fractional parts. In fact, this lesson might be considered a foundation stone for future work in three important fields: measurement, fractions, and data representation on line plots.\n\n## Objective\n\nStudents will learn to use the half and a quarter inch markings on their rulers: to take measurements down to half or quarter inches, to record their data appropriately, and to represent that data on line plots. This will also provide students with a visual representation of what fractions mean in real life.\n\n## Supplies\n\n\u2022 Rulers marked with halves and fourths of an inch\n\n## Methodology\/Procedure\n\nStart out by asking your students what they know about fractions. Using their suggestions, make a bullet-list definition\/description on the board. If they\u2019re out of ideas, help them. 
Take time to elucidate any concepts they are hazy on; this is your chance to get everyone started on the same page.\n\nYour list may look something like this:\n\n\u2022 Represent parts of a whole\n\u2022 Are written like a\/b, when the whole is divided into b sections and there are a of those sections\n\u2022 Can be added together if the bottoms [denominators] are the same, by adding the tops [numerators]\n\u2022 A bigger bottom means a smaller amount, a bigger top means a larger amount\n\nOnce you\u2019ve gone through what they\u2019ve learned about fractions, tell them that this lesson we aren\u2019t going to learn anything new about fractions. Instead, we get to use what we\u2019ve already learned in a measuring lesson. Ask them to start by getting out their rulers and measuring their middle finger.\n\nThey are likely to begin by rounding up or down, so when you get your first data points you\u2019ll want to challenge that and ask them to be more specific. It might go something like this:\n\nStudent: My middle finger is 2 inches!\n\nTeacher: Is it exactly 2 inches, or a little more or a little less?\n\nStudent: It\u2019s a little less.\n\nTeacher: Do you see some other marks on your ruler? Those are fractions. Today I want us to learn how to measure more exactly, using those fraction markings.\n\nOn the board, draw an oversized ruler going from one to three inches. Mark halves and fourths of an inch. Draw an object alongside the ruler; you might make it 1 \u00bd inches long.\n\nTeacher: This fork I just drew here is a little over one inch long, but if I want to be exact, I have to look at the little markings on the ruler. Since this space (point out the space between 1 and 2) is one inch long, how much is this space? (shade the first half)\n\nA Student: \u00bd!\n\nTeacher: Yes, the shaded area is half of the area between one and two, so this mark here is the half mark. 
So if my little fork reaches this mark, we say it is one and a half inches long.\n\nTeacher: Now how much of the area between one and two is shaded?\n\nA student: \u00bc!\n\nYou\u2019re right! So if I have a little tiny pencil that reaches just to here (draw your pencil on the board) it\u2019ll be exactly 1 \u00bc inch long.\n\nFollow the same procedure to elucidate 1 \u00be. Then draw a number of objects along your chalkboard ruler and get the students to label the lengths.\n\nWhen they have a good grasp of these chalkboard measurements, go back to the thumb problem and list the middle finger measurements they give you.\n\nWhen you\u2019ve got the list down, draw a line plot on the board (a number line) and place an x to represent each child\u2019s thumb measurement. This will make it easy to see the clusters. Discuss the graph, and pose a few questions:\n\n\u2022 How many middle fingers are two and a quarter inches long?\n\u2022 Are most of the middle fingers in our class the same length?\n\u2022 How many middle fingers are longer than [choose a median value]?\n\nErase the board, draw a second line plot, and have the children measure their pencils, then take turns coming up to he front, noting down their measurement, and filling out an x on the graph.\n\nNow, give your students the worksheet. Ask them to start by measuring all the items in their desk and writing them down on a list. Pencils, erasers, notebooks, textbooks and pencil cases are some of the items that might be measured and recorded. Then, they should fill in the numbers on the line plot and mark it appropriately.\n\n## Common Core Standards\n\nThis lesson plan is aligned with the Common Core Standards for Mathematics. In Grade 3, Measurement and Data, section 4, the Standards read:\n\n3.MD.4 Generate measurement data by measuring lengths using rulers marked with halves and fourths of an inch. 
Show the data by making a line plot, where the horizontal scale is marked off in appropriate units\u2014 whole numbers, halves, or quarters.\n\n## Web Resources\/Further Exploration\n\nMath Warehouse is a treasure trove of resources for the math teacher. You\u2019ll find a large bank of high quality free lesson plans, all aligned with the common core, and a variety of helpful tools to make teaching easier.\n\nFor instance, to make number lines like those in the worksheet you can use our Number Line Maker : it creates custom number lines as images you can download.\n\n# 3rd Grade Lesson: Number Names and Fraction Comparisons\n\nWorksheet Snapshot\n\n## Discussion\/Introduction\n\nMany 2nd graders and early 3rd graders will consider fractions and whole numbers to be entirely different animals. A fraction is a fraction; a piece of a pie, a portion of a square. A whole number is something other; an indivisible round, an atomic whole.\n\nBut third grade is the time when children come to terms with the continuity between fractions and whole numbers. That there are fractions which are simply \u2018names\u2019 of whole numbers will surprise some of your students, but it will be a good surprise that can open the doors into future ease in fractional manipulation.\n\nThis lesson also builds on the concept of ordering fractions, a topic that is introduced in a previous MathWarehouse lesson. You want your students to become fluent at comparing the landscape of \u2018fraction land\u2019. Your aim is that they would be able to tell you on sight which is the greatest of two fractions, given either a similar denominator or a similar numerator.\n\n## Objective\n\nThat students would be comfortable relating fractions and whole numbers, going from fractions to whole numbers and back again. That they would also gain familiarity in ordering fractions and comparing the sizes of any fractions that have either the same numerator or the same denominator. 
(3.NF.3.c &d)\n\n## Methodology\/Procedure\n\nDraw a half circle on the board, and ask your student if they can tell you some of its names. If they\u2019ve been exposed to \u2018fraction names\u2019 in Mathwarehouse\u2019s Discovering Equivalence lesson plan, they should be able to help you come up with a long list as they mentally divide the circle into more and more segments: \u00bd, 2\/4, \u00be.\n\nNow draw another circle on the board and shade the whole thing. Ask your students if they know the names of this portion. Write \u20181\u2019, beside the circle, and wait for more suggestions.\n\nIf a student comes up with 2\/2 for a suggestion, draw the dividing line, tell him he\u2019s right, and put the fraction down on your list. Encourage your students to continue mentally dividing the filled circle to create more fractions. Once you have a long list, ask them how much longer they think it will be. Discuss how it is a list that can go on forever, because you can always divide every portion into smaller and smaller portions. The numbers on the top and bottom of your fractions\u2014always the same\u2014will just keep on getting larger and larger and larger, and there is no end.\n\nNow tell your students you have a very long list of numbers; suggest you put them all on a numberline. Draw a large numberline on the board and write on designations from one to ten, leaving room for continued expansion on the right. Ask for volunteers to come up and write the fractions on the numberline.\n\nThis is, in a way, a trick question; because the way the question is set up assumes that the names are separate numbers and will go to different points. If your students are awake and used to thinking through problems rather than parroting answers you should have at least one willing to challenge your statement: pointing out that the list is not a list of numbers, but a list of names of one single number. 
If anyone makes this objection, ask him or her to get up and demonstrate this to the class. This can be done by dividing the area between 0 and 1 into two portions, and counting down both of them for 2\/2; dividing the area into three portions, and counting down all three for 3\/3, and so forth.\n\nIf no one objects to the graphing of the \u2018numbers\u2019 give chalk to three or four volunteers and divide the list up between them, asking them to mark each of those numbers on the number line. They may stare at the board confusedly, they may begin marking the integers they see on the numerators, they may try to do fancy calculations, or they may all dash for the point \u20181\u2019. Give them the time they need to think through the problem and make mistakes, and only call a halt on the project once they tell you they have the numbers all marked. Then, give appropriate feedback: if they\u2019ve marked it correctly, tell them so, and ask them to explain to the rest of the class why all those points are one point. If they have marked the \u2018numbers\u2019 on separate points, ask them to demonstrate their \u2018orderings\u2019 with pictures.\n\nIt is important that you give them the time they need to discover the locations of these \u2018cognates of 1\u2019 on the number line, by whatever circuitous route they take. They will learn much more by making mistakes and following through to the dead ends that follow\u2014and then rethinking the problem and finding the right answer\u2014then they would if they were given the correct interpretation straight off. The circuitous, labored path will also cement the final concept in their minds in a way you\u2019d never have been able to teach if you were the prime mover.\n\nWhen your volunteer discoverers have come to the conclusion that every one of the \u2018numbers\u2019 is one, follow up on the concept of equivalence. Ask if these numbers are really one and the same quantity, or if each is just a little bit different. 
Agree with them that they are all equivalent; all different names of the same absolute point. Each name is just as true and accurate as any other, and no name is bigger or smaller than any other name.

Now you need to extend this to other numbers. Write 2 on the board, draw two circles, fully shaded, and tell your students "This is 2, and this is its name, 2. Does it have any other names?"

If your students have immediate suggestions, begin writing them down, illustrating with dividing lines drawn through your circles to allow any slower students to follow. If there are no suggestions, ask your students what would happen if you divided each circle in half. Allow them to take it from there, and write their list of the 'names of two' on the blackboard.

Give each student a copy of the Number Names worksheet, and encourage them to make their lists of names for 3 and 4 as long as they can. If they have difficulty keeping track of multiple lines on their strawberry cakes, encourage them to draw their own sets of three or four circles and make the divisions on separate pictures.

This next part of the lesson plan can be taught as a separate lesson if your students go slowly through the whole-number section. Remember, going slowly is not necessarily a bad thing; it may mean your students really are taking the time to think things through and cement these concepts in their minds.

Write two fractions on the board: 2/3 and 2/4. Tell your students these are portions of two regular-sized Snickers bars at your house, and ask them which is larger.

When they tell you 2/3 is larger, write that down using a > symbol. Ask them how they knew. Bring them to observe that the fewer pieces you divide something into, the larger the pieces are, and a third is larger than a fourth. Note that since the numerators are the same, the denominators tell which fraction is larger: the smaller the denominator, the larger the portion.

Ask them how you could check this. One way would be laying the two Snickers bar portions side by side. Since the bars are somewhere else and that isn't possible, you can also check by drawing diagrams, being careful to make neat thirds and quarters. Draw a visual fraction model and shade the areas, demonstrating that 2/3 is larger than 2/4.

Now write 2/3 and 2/4 on the blackboard again. Tell your students that this time, these are not candy bars but cakes. After the 2/3, write "Mollie's birthday cake." After the 2/4, write "Jennie Brown's cake." Ask which is larger.

After your students have answered, draw the cakes. Tell them Mollie's cake is a 12" square. Tell them Jennie Brown's cake is a three-layer wedding cake with a radius of 22". Ask them if their answer is still right.

Ask them what went wrong in their thinking, or why the method they used for comparing parts of candy bars didn't work for comparing parts of cake. Bring them to observe that you can only compare fractions if they refer to identical wholes.

Let them know, though, that when they see fractions without designation in math problems, they may assume the fractions refer to identical wholes unless told differently. In story problems, though, they'll need to be careful.

Now write 2/4 and ¾ on the board, and tell your students these fractions refer to candy bars again. Ask which is larger. When they tell you ¾ is larger, draw the < symbol between the fractions. Ask how they knew. Bring them to observe that if the denominators are the same, the fraction with the larger numerator is larger. This is because the identical denominator means the pieces are the same size, and the larger numerator means you have more pieces.

Ask them how they could check this: with a picture. Draw the picture, demonstrating that ¾ is a larger piece.

Now write ½ and 2/4 on the blackboard. Ask which is larger. Observe that this is more difficult to compare by looking at the numbers, because neither the numerators nor the denominators are the same; but ask them if they can rename ½ as something else. Suggest they draw a diagram of ½ and divide each portion in half again to find the 'new name'. When they rename ½ as 2/4, the denominators become the same, so the fractions can be compared easily: in fact, the two fractions are both 2/4, exactly the same.

Give your students the fraction comparisons worksheet, and ask them to fill it out. They'll be asked to use the symbols >, <, and = when they compare fractions, and they'll also be asked to justify their conclusions with a simple circle diagram, a visual fraction model.

## Common Core Standards

This lesson is aligned to 3.NF.3 c & d in the Common Core State Standards for Mathematics. 3.NF.3 reads, in part:

c. Express whole numbers as fractions, and recognize fractions that are equivalent to whole numbers. Examples: Express 3 in the form 3 = 3/1; recognize that 6/1 = 6; locate 4/4 and 1 at the same point of a number line diagram.

d. Compare two fractions with the same numerator or the same denominator by reasoning about their size. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.

## Web Resources/Further Exploration

This is only the beginning: your students need to cement their understanding of ordering and comparing fractions by doing lots of games and exercises.
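If you'd like to double-check a worksheet answer key, the three comparison rules from this lesson can be verified with Python's standard `fractions` module. This is a quick sketch for the teacher's own reference, not part of the lesson; the `compare` helper is made up here for illustration.

```python
from fractions import Fraction

# Same numerator: the smaller denominator gives the larger fraction.
assert Fraction(2, 3) > Fraction(2, 4)

# Same denominator: the larger numerator gives the larger fraction.
assert Fraction(3, 4) > Fraction(2, 4)

# Renaming: 1/2 and 2/4 are the same point, i.e. equivalent fractions.
assert Fraction(1, 2) == Fraction(2, 4)

def compare(a, b):
    """Return the worksheet symbol (>, <, or =) relating two fractions."""
    if a > b:
        return ">"
    if a < b:
        return "<"
    return "="

print(compare(Fraction(2, 3), Fraction(2, 4)))  # >
```

Note that `Fraction` only behaves this way because, like the lesson, it assumes both fractions refer to the same whole.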
Take advantage of everything Mathwarehouse has to offer, all free, by visiting http://www.mathwarehouse.com/fractions/ and also looking through our lists of free lesson plans.

# 3rd Grade Lesson Plan: Discovering Equivalence

Ordering flashcards

## Discussion/Introduction

What does it mean when a fraction is the same as another, and what does it mean when it is different? Is 2/4 really the same as ½? Although your students have discovered informally that the slab of pie represented by those two fractions, and the point on the number line, looks, feels, and acts the same, they still need to take this one step further and realize that these two fractions are actually one and the same quantity.

This 3rd grade lesson plan will give students the tools they need to understand the concepts of same and different as applied to fractions. Through visual ordering exercises your students will become adept at recognizing fractions that are the same, and by the end of today's session they should be able to make up their own simple lists of equivalent fractions.

## Fractions and Ordering: Discovering Equivalence

It's important that you introduce this topic with real-world, dynamic, easily understandable examples rather than with mathematical equations and manipulations. Your students aren't ready, right now, to grasp that 2/4 is ½ because they can divide 2 out of both the numerator and denominator of the first fraction. That is for later. Introduced with fancy manipulation, fractions become a complicated numerical puzzle that will leave many of your students feeling confused and helpless. When demonstrated with visual examples, though, equivalent fractions will quickly seem intuitive to your students; nothing less than common sense.

## Objective

Students will understand that two fractions are equivalent if they are the same size, or at the same point on a number line. They will be able to explain why fractions are equivalent, and will be able to recognize and make up lists of simple equivalent fractions.

## Supplies

• Folding paper, 1 square per student
• 2 Scrabble tile holders

## Methodology/Procedure

As a warmup, begin with a quick fractions review. Pass out the folding paper, and have your students fold fractions as you call them out, as fast as possible; first one done is the winner. Call out ½, 1/3, ¼, 1/6, and 1/8, in random order.

If they have difficulty, continue this exercise for several rounds. When they achieve fluency, tell them you are proud. Tell them you've got a fun topic to look at today, and it's a topic you won't even need to teach; they can discover it all themselves. Give each student a partial set of fraction number & visual flashcards: the ½, ¼, 2/4, and ¾ cards from both the visual and numerical sets. Ask them to match the number cards to the pie pictures, and then to order both sets according to size, smallest to largest.

When they have finished, walk around the room and review the students' work. Help any who have had trouble, encouraging them to look at the picture cards and compare relative sizes. Find two students who have chosen different orderings, and give them the Scrabble tile holders. Ask them to place their numerical cards carefully on the tile holders, then go to the front of the room and display their orderings to the class. Observe that these are two different orderings of the same set; ask which is wrong.

Invite any students who point out one or the other as wrong to explain why it is wrong, using the picture flashcards and comparing sizes.

If your students say unanimously that neither is wrong, tell them that they are right. Draw the circle fractions on the board, and demonstrate how the two parts are exactly the same.

Tell them that in math the two fractions ½ and 2/4 are called equivalent fractions. Explain that this means they are two names for the same thing. Sometimes one name is more useful, sometimes the other. Ask them if they know anyone who has two names. If they do not come up with their own examples, remind them of a common acquaintance: the principal, perhaps, who is called Mr. Brown at school, Daddy at home, and Jake by his golfing friends. Point out that the man these names refer to is the same, no matter which name he is called, and that all of these names belong to him equally.

You can also use yourself as an example.

Draw a half circle on the chalkboard, and tell your students this could be labeled as ½ or 2/4, and both answers would be entirely correct. Ask them if they can think of any other names for this same portion.

If any students have suggestions, invite them to come up and demonstrate equivalence on the board with pictures. If no one has any idea, use lines to divide the circle on the board into eight equal wedges, and point out that half is also four 1/8 wedges, or 4/8.

Ask your students how many names each portion has. Let them think about and discuss this, and then demonstrate how each portion has an infinite number of names, because you can always divide the existing pieces in half again to get twice as many smaller pieces.

Hand out the remaining card sets, and ask the students to match the number cards to the visual cards and then order all the cards by portion size, from smallest to largest. Suggest they show equivalence by placing equivalent fractions at the same level in their orderings.

Let your students take their time over their orderings.
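The halving rule behind the 'infinite number of names' can be sketched in a few lines of Python, if you want to generate name lists for your own board examples. The `names` helper is made up here for illustration.

```python
from fractions import Fraction

def names(fraction, count=5):
    """List equivalent 'names' of a fraction by repeatedly doubling the
    number of pieces (i.e. halving each existing piece)."""
    result = []
    num, den = fraction.numerator, fraction.denominator
    for _ in range(count):
        result.append(f"{num}/{den}")
        num, den = num * 2, den * 2
    return result

print(names(Fraction(1, 2)))  # ['1/2', '2/4', '4/8', '8/16', '16/32']
```

Since the list never ends, `count` just controls how many names to print; halving is only one way to rename, but it is the one the circle-folding exercise demonstrates.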
If they finish quickly and you have time left before the end of the class, do some quiz work: draw shaded portions on the blackboard and ask your students to help you come up with a list of possible 'names'.

## Common Core Standards

This lesson is aligned to standard 3.NF.3 in the Common Core State Standards for Mathematics (3rd Grade Numbers and Fractions, item 3). 3.NF.3 reads, in part:

3.NF.3. Explain equivalence of fractions in special cases, and compare fractions by reasoning about their size.

a. Understand two fractions as equivalent (equal) if they are the same size, or the same point on a number line.

b. Recognize and generate simple equivalent fractions, e.g., 1/2 = 2/4, 4/6 = 2/3. Explain why the fractions are equivalent, e.g., by using a visual fraction model.

## Web Resources/Further Exploration

The beautiful flashcards your students used in this lesson were made with the help of Mathwarehouse's nifty Visual Fraction applet. This applet is a really fun way to generate classroom graphics, and also an enjoyable way for students who might be having difficulty visualizing fractions to see pizza portions generated before their eyes as they type in the fractions they are curious about.

# Hopscotch Fractions: Third Grade Lesson Plan

Fraction Name Tag

## Discussion/Introduction

This lesson plan is meant to follow another: 3rd Grade Fractions on the Number Line. Both lessons are aligned to Common Core Standard 3.NF.2. Here I'll assume you've gone through that first lesson, and that your students are comfortable marking off intervals to show 1/b on the number line.

In your last lesson your students will have stayed in their chairs during most of the class period, working hard on dividing up fraction intervals and coming to terms with the idea that 1/b on a number line means portioning the interval between 0 and 1 into b parts and then, starting at zero, marking off one of these parts. Today you get to cement that concept a bit more by taking the next step up: your students will discover that a/b means that, starting at 0, you mentally hop a '1/b'-size steps toward 1.

That's all they need to learn today, but you want them to know it backwards, forwards, and inside out. Since there's no better way to get to that point than with a fun de-stressing game, you'll do this by getting your students up out of their chairs for a rousing game of 'hopscotch fractions'. Games have a way of bringing book learning from the 'theoretical' category into 'real life', and this one is no exception.

## Objective

That students would be comfortable representing a fraction a/b on a number line diagram by marking off a lengths of 1/b from zero, and that they'd understand that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.

## Supplies

• Large 0-2 number lines, laminated and taped to the floor (ideal length: 3 or 4 feet). These can also be prepared by taping standard printing paper end to end and drawing the number lines with markers, using rulers for uniformity. You'll need one for every four students.
• Strips of colored paper the same size as one unit on the number line; one per student; four colors.
• A marker for each student; four colors; each student's marker must be coordinated with his paper.
• Three types of stickers (ideally, small stars in red, yellow, or green; but other simple colored stickers can substitute).

## Methodology/Procedure

Begin by reviewing what you taught in the last lesson. Remind the students that we talked about what fractions of abstract numbers mean, and we learned that they work in exactly the same way as fractions of tangible, concrete things like apples. Draw your 0-2 number line on the board, mark 0, 1, and 2, and ask how you would find out where to mark 1/3. After listening to their suggestions, take a strip of paper the size of a unit on your number line, divide it in thirds, and use that to mark 1/3.

Then ask your students how you'd mark 2/3. Give them permission to discuss it with their neighbors, then ask for a volunteer to come and demonstrate it on the board. He will most likely demonstrate it correctly, laying your 1/3 strip end to end twice to find the 2/3 point.

If he doesn't, or if the idea still seems counterintuitive to many of your students, go back to the idea of distance. Get a volunteer to come stand beside you and then walk three steps away. Ask him to come one third of the distance back (1 step). Then ask him to go back, and tell him now to come two thirds of the distance to you. Continue with other volunteers and other fractions until your students are comfortable with the idea.

Then transfer this idea onto your number line, and show how starting at zero and measuring two 1/3rds brings you to the 2/3 point on your number line.

Then have the children help you move tables and chairs to the back, so that you have a large empty space to work in.
If a gym or other large empty room is available, you may want to move there for the rest of this class. Otherwise, tailor the size of your number lines to match the available space.

Set up the number lines on the floor. Each number line will be manned by four children, so you'll need as many number lines as the number of children in your class divided by four. These number lines should preferably be arranged side by side, in a line.

Assign the students to teams, four students to each number line. Give each student a strip of colored paper the size of one unit on their floor number lines. Then assign each student a fraction: in every group, you need a 1/3 student, a ¼ student, a 1/6 student, and a 1/8 student. Tape the designations on each student's shirt. For a 'handicapped race' effect, give the 1/3 and ¼ designations to the slower students, and the 1/6 and 1/8 designations to those that are faster. Each student will be called on the same number of times, but students with smaller fractions will have to do more counting and hopping.

Hand out the number strips and markers; in each group, one of each color. It helps organization if you give all students representing a particular fraction the same color.

If you have one odd student out, make him the caller. If you have two or three odd ones out, give them a number line and fractions.

Ask the students to mark where their fractions are on their number lines: 1/3, ¼, 1/6, and 1/8. They will do this with the strips of paper, folding them into appropriate sections as they did in yesterday's lesson, and using those sections as rulers. Tell them to keep these strips of paper folded after they've used them. These strips are their 'shoes', and they'll be needed again and again throughout the game.

Describe the game to them. The caller (you, if there wasn't an extra student) stands at the front of the room with a pile of full-sized shuffled 'hopscotch flashcards'. He picks one, displays it, and reads it out. Immediately the students representing that fraction take their 'shoe', the paper strip, mark off the appropriate number of intervals, and then jump, once in each interval, to the fraction marker. For instance, if the caller picked "4/6", the 1/6 player would measure off four '1/6' measures with his paper shoe and then, as soon as they are measured, hop down to the fourth one, doing a one-foot jump in each place. The first three to successfully measure and jump are awarded stickers on their fraction nametags: red for the first to hop into place, yellow for the second, and green for the third. Then they go back to the beginning, and the caller calls again.

They will soon discover that after they've been called once, future calls are easier, as they will have already marked off some (maybe all) of their fractions on the number line. Since they are using their colored markers, it will be easy to tell their intervals from their teammates'. Before the game is halfway done, you should see no measuring at all; just quick, sure hopping when a new fraction is called.

Throughout the game, continue to interweave your class objectives with the game by asking at intervals, randomly, but of every student at least once: How far did you jump? (a steps) What was the size of the interval you jumped through? (a/b) Where is the point a/b on the number line? (at the end of a 1/b-size steps)

Play as long as time permits; the game speeds up as you go, so you should be able to get through the stack of flashcards twice. At the end of the game the stars get tallied up on a per-team basis: red is three points, yellow two points, green one point. The team with the greatest number of points wins.

## Common Core Standards

This lesson plan is aligned to Common Core Standard 3.NF.2. That item reads, in part:

3.NF.2. (b). Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.

## Web Resources/Further Exploration

This lesson plan gives you one fun way of making fractions come alive, but it's best to approach every topic from multiple angles. That's why I'm recommending Mathwarehouse's wonderful fraction resources again; after all, they're all free!

# 3rd Grade Lesson Plan: Fractions on the Number Line

Where do fractions go on a number line?

## Discussion/Introduction

Up to this point fractions have simply been discrete portions of tangible wholes; real parts that can be felt, seen, and tested. This third grade lesson plan, though, marks a watershed: we get to transfer our knowledge of 'half a cookie' to the more abstract concept of half a unit on a number line.

## Objective

That students would be able to represent a fraction 1/b on a number line by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts, and that they'd understand that each part has size 1/b.
Also, that they would understand that the endpoint of the part based at 0 locates the number 1/b on the number line.

## Supplies

• Precut (colored) strips of paper, ½-1 inch thick and the length of zero to one on the students' number lines, and a few longer strips of paper for you to use with the number line you will draw on the blackboard
• Apple or other similar easily dividable item

## Methodology/Procedure

Tell your students that they've become so good at identifying and dealing with fractions (portions of pie, pizza, or apples and oranges) that today they can take what they've learned to a whole new sphere.

Pick up an apple. Tell them that it is an apple; you can feel it, you can measure it with a ruler, and you can divide it into two equal parts with a sharp knife. Remind them you can do the same with pizza, and with almost any physical object, provided your knife is sharp enough.

Then ask them if there are other things they could divide in half, things they can't touch, feel, or cut with a sharp knife.

Write a list of ideas on the blackboard. Some ideas might be groups of things (or people), air, water, time, or space.

Validate each addition to the list as you write it down, and then tell them that today you're going to look at fractions of three special things: portions of time, space, and mathematical units.

Talk about time first. Ask what it means when we say 'half an hour', and get as many versions of the answer as possible. If no one suggests it, tell them that one way of thinking about it is as half the distance from one hour to the next.

Draw a diagram of your day on the blackboard; essentially, a number line that describes your day. At this point, though, don't describe it as a 'number line' to your students. Put sitting up in bed, the first thing you do in the morning, away on the far left side of your diagram. Put going to sleep as the last thing, and in the middle put lunch.

Tell your students that the area between waking up and lunch is your morning; then shade the first half, and tell them it is half your morning. If the morning was four hours long, from eight to twelve, and you were feeling miserable half the morning, ask them how long you were feeling miserable. (2 hours) How long were you feeling okay? (Also 2.)

Observe that if you look at time in that way, time is very similar to distance. Talk about the distance between the bed and the lunch table in your drawing, and what half of it means. Talk about half of the way from the place you are standing to the window. Walk four steps away from your desk, counting as you go, and ask how many steps it is back to your desk. Ask how many steps you would need to walk if you wanted to walk only half the way back to the desk (2). What if you wanted to walk just a quarter of the way back? (1)

Now erase everything from the blackboard and draw a simple number line going from zero to three. Ask them what this is called (a number line). Remind them that since a number line is math, we can use it to mean anything. We get to use the same number line, in exactly the same way, whether we're talking about cookies, pizza, time, or distance.

Tell them for now you'll pretend it's talking about pizza. Put your chalk at zero, write a small dot, and test the students on basic number line usage: Here, you see, I have no pizza. If I buy two pizzas, one pepperoni and one sausage, where would I show that on the number line?

Your students should guide you to move your hand to the two. Do so, make a dot there, and then go back to the zero.

That was the day before yesterday, explain. Yesterday, I also started with no pizza. I also bought pizza. But I wasn't feeling very hungry and didn't have much spare money, so I only bought half. How can I show that on the number line? There isn't any place that says 'half'.

Listen to any ideas they come up with. If someone suggests dividing the portion of the number line between zero and one into two parts and making a dot on the middle line, tell him you really like that idea.

Take a strip of paper exactly as long as the distance between zero and one; fold it in half, lay it on the number line from 0 to ½, and draw your 'half pizza' dot. Shade the area on the line between 0 and ½. Ask your students how long that segment is compared to the length between 0 and 1 (1/2 the length). Ask them where the segment starts (0).

Pick up the folded paper strip again, and ask how long it is (1/2 of what it used to be). Since it is ½, ask them if it means ½ wherever you place it on the number line: do you have to begin measuring at zero, or can you start somewhere else instead? Get feedback as to why or why not before you explain that since ½ is just half of one, and you have no 'wholes' to add it to, you always have to start at zero when you measure its placement.

Ask them how you'd find out where to place a dot for 1/4. Fold your strip into fourths, and use the folded strip as a measuring stick to place a dot at exactly 1/4.

Ask about 1/3, unfold your strip and refold it into thirds, and prepare to make a 1/3 mark. Ask where you should start the strip when you measure off the 1/3 (at 0).

Ask which mark is closer to zero (1/4) and which is furthest away (1/2).

Then pass out the student worksheets and strips of paper. Your students will be marking ½, 1/3, ¼ and 1/6 on their own number lines.
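If you want a quick answer key for the worksheet, the mark positions follow directly from the lesson's rule: 1/b sits at the endpoint of the first of b equal parts of the interval from 0 to 1. A small Python sketch (the `mark_positions` helper is invented here for illustration; the worksheet itself is done with folded paper):

```python
from fractions import Fraction

def mark_positions(denominators):
    """For each denominator b, place the mark 1/b at the endpoint of the
    first of b equal parts of the interval from 0 to 1."""
    return {f"1/{b}": Fraction(1, b) for b in denominators}

marks = mark_positions([2, 3, 4, 6])
# Sort the marks from closest to zero to furthest away.
ordered = sorted(marks, key=marks.get)
print(ordered)  # ['1/6', '1/4', '1/3', '1/2']
```

The sorted order mirrors the classroom question: 1/6 is closest to zero and ½ is furthest away, because more pieces means smaller pieces.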
If you have not introduced 1/6 previously, you may need to walk your students through that fraction by marking your own 1/6 on the blackboard.

## Common Core Standards

Under 3.NF.2, the Common Core State Standards for Mathematics reads:

3.NF.2 Understand a fraction as a number on the number line; represent fractions on a number line diagram.

a. Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.

## Web Resources/Further Exploration

Here are some links to helpful web resources that might help your students learn to enjoy fractions. When they've decided that fractions are definitely fun, it'll be easy to gain the familiarity and intuitive understanding they need to make a success of classroom work.

# Third Grade Fractions Lesson Plan: Learning Notation

Fractions are a secret code

## Discussion/Introduction

Third graders have a magnetic relationship to secret codes. Anything to do with secret writing, dangerous espionage, and top-secret missions brings the energy in the room up several points. Fractions tend not to have that same magnetic appeal. In fact, 1/b notation can look so strange to third graders that the first sight of a page filled with the stuff makes them want to turn their brains off.

If it's taught wrong, that is to say. Taught right, though, with just the right amount of fun mixed in, third grade fractions can be just as exciting as a top-secret mission in dangerous territory with a good dose of secret codes mixed in.

My free lesson plan, aligned with the Common Core (3.NF.1), is a slightly nontraditional but completely fun look at fractions for third graders. There's no reason to make fractions dry book learning; let them be a game, a 'break from the hard stuff'.

## Objective

That students would gain familiarity with fractions and learn basic fractional notation: that 1/b is the quantity formed by 1 part when a whole is partitioned into b equal parts, and that a/b is the quantity formed by a parts of size 1/b. That they would learn to work as a team, helping the weakest members in order to gain united success.

## Supplies

• 1 piece of colored folding paper for each student, or construction paper rectangles
• Five flashcards for each student: ½, ¼, ¾, 1/3, and 2/3 portions of a circle (printable here)
• One set of large flashcards for the teacher, with ½, ¼, ¾, 1/3, and 2/3 written out
• 'Initial Assignment' missive paper, one copy of each (printable here)
• Secret missive paper, enough for the class; two types (printables here and here)
• Group prizes or 'winning team' paper headbands, enough for half the class
• Math notebooks or writing paper & pencils for each student

## Methodology/Procedure

Tell your students that you're not going to do much regular math today; today is going to be a fun day. Go on to explain that instead of doing regular arithmetic and adding or dividing (or whatever yesterday's topic was), you're going to learn something really cool: a secret code they'll be able to use to write important math messages with. And that if they learn it well, they'll be able to use it right away, in an exciting game they can play in the classroom.

But tell them that before you start that, you'd like to do a brief review of fractions. Ask them what a half is (one of two equal parts) and how many halves are in a whole (2). Ask what a third is (one of three equal parts) and how many thirds are in a whole (3); then go on to fourths. If they have forgotten any of this or it seems even a little rusty, give them a bit of a refresher. Give them each a rectangle, and as you call out 'half!', 'quarter!', or 'third!' have them race to fold it and display the portion you called.

When they've got it, tell them so, and tell them you're proud of the speed with which they can fold. Tell them now they've done their work, so it's time to go on to secret codes and the new game.

Ask them what they know about secret codes. Give them a chance to talk about what they know; the final thought you want to arrive at (and you can just state it yourself if no one has similar thoughts) is that a secret code is a way of giving information to one's friends in a way that one's enemies can't understand.

Tell them that another special thing about secret codes is that they're often short and concise, allowing their writers to put a lot of information in a very little space. Then tell them that now you are going to show them how to write fractions in secret code.

Draw a circle, and color half. Then, next to it, write 'half' on the blackboard. Tell them this is how you write half in plain English. Ask them if it's easy to read and understand (yes). Ask them if it's short. (Not too long, but it takes the space of four whole letters.)

Ask them how they would make a secret code that would express half.

Encourage them to experiment with different possibilities in their math notebooks. After a few minutes, ask whoever has an idea to come forward and draw it on the blackboard.
Provide chalk for each child, and have them write down their secret codes.\u00a0\u00a0 Ask whoever has a unique notation to explain their secret code and why they chose it to the class.\n\nThen tell them that there are people called mathematicians who decided what the main secret code was going to be for math, and you\u2019ll tell them that secret. Write \u00bd on the blackboard, next to the word half.\n\nAsk them why they think the mathematicians chose that way. After they\u2019ve had a chance to express their ideas, show them how they can read it as \u2018one of two equal parts\u2019, and that the number on the bottom is the number of parts you\u2019ve divided the whole in; the top, the number of those pieces you have.\n\nAsk them to copy \u00bd down into their math notebook. Then draw another circle on the blackboard, divide in four quarters, color one quarter, and write the word \u2018quarter\u2019 next to it. Tell them this is a quarter, and this word is how you write quarter in English. Ask them whether they have any guesses as to how you would write it in math\u2019s secret code.\n\nIf you have any volunteers, have them come up to the blackboard try out their ideas, and explain them to the class. Validate each idea, and if anyone writes \u00bc, tell him that he was thinking in exactly the same way as the mathematicians were. Write \u00bc next to your word \u2018quarter\u2019.\n\nGo through the same procedure for 1\/3rd; here, thought should be united enough that you should be able to call just one confident child up to the blackboard to write down 1\/3 as his idea and explain how he got it.\n\nNow draw another circle on the blackboard; divide it into thirds, but instead of shading 1\/3 , shade 2\/3. Write \u201ctwo thirds\u201d and ask your students how they\u2019d write it in math secret code.\n\nAsk volunteers to come up and share their ideas with the class. 
If you get 2 1\/3, tell the writer that it is a good idea, but the problem is it looks the same as the way you\u2019d write two wholes and a one third, and draw a picture of that with circles.\n\nWhen someone comes up with 2\/3, tell him he was thinking just like he mathematicians who decided this code.\n\nPass out the flashcards, and tell the students that when you display your secret code flashcard you want them to pick up and show you the equivalent pie flashcard, without looking at any of their classmates. Go through your cards in random order at least twice; more if some students are having difficulty.\n\nThen tell them it is time for the game. Divide the class in two teams, each with a team leader who has a good grasp of what you\u2019ve been teaching. Tell them that the team that wins will get whatever prize you\u2019ve prepared, or get the honor of wearing \u2018winning team\u2019 headbands for the rest of the week in school.\n\nTell them that when soldiers or intelligence agents use secret codes in war, it is very important that every member of the team is able to do his part well and convey the message without losing any of the meaning. Ask what happens if one of the people passing on a message forgets the secret code (the message is lost). Tell the teams that the same thing will happen if one of their members musses up or forgets the code: they will lose all chance of winning.\n\nGive them five minutes to review the secret code together, and tell the leaders they are responsible for making sure every member of the team understands how to write and decipher the secret code of fractions.\n\nWhen they are ready, set each team up as a chain. The game will be conducted like telephone; you\u2019ll give the leader of each team a picture with six circles, each divided into different fractions with different parts shaded. They\u2019ll \u2018translate this\u2019 on to their missive sheet, writing in math\u2019s secret codes. 
Then they'll fold this paper up and pass it to the next team member, who will take it, color and shade the circles on his paper, and pass that paper on to the next team member.

Team players will have been given circle and secret missive papers, alternately. You may want to walk up and down the lines checking the work; your responsibility is not to make sure they are doing it correctly, but just that the 'secret code' writers are not drawing circles and the circle drawers are not writing code.

The first team which sends the message down the line correctly wins. When the message goes down the line, you pick it up and return it to the first player. He compares it with the original paper you gave him. If they are the same, they have won. If they are not correct and the other team has not yet won, he can walk down his line, checking the papers, and find where the mistake began. If he corrects that mistake, the message begins again at that person and is passed on again down the line until it comes to the end.

To avoid bad feelings and a loser mentality, you can let the other team continue working till they've got the message passed down correctly also, and offer them a consolation prize or 'silver medal finalist' headbands.

Tell your students they have learned something incredibly useful today: a secret code with which they can communicate with other math people all over the world.

Don't forget to encourage them to practice this special code any chance they get. One way of doing this would be by spending some time on our fun fractions activity pages. Visual Fractions is a simple but very enjoyable interactive web application where students can type in any fraction and see it formed before their eyes.
The link Online Fraction Games will bring your students to a number of other fraction-based games; some a little above their level, but others that they are ready to tackle.

## Common Core Standards

Standard 3.NF.1 of the Common Core State Standards for Mathematics reads:

3.NF.1 Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b.

# 3rd Grade Pie Chart Lesson Plan

## Discussion/Introduction

Our graphing units in third grade used to be focused primarily on circle graphs (pie charts), but under the Common Core, bar charts are given a new prominence. Bar charts are intuitively easy to understand for second and third graders, and since they build on and are closely connected to the number line, they follow logically from the other math your students are doing.

But just because bar charts are taking center stage doesn't mean we can stop teaching pie charts altogether. While bar charts make comparing the relative size of parts a simple visual exercise, pie charts offer intuitively obvious visual comparisons between parts and the whole. Teaching circle graphs also enables our students to practice fractions in a fun, easy-to-grasp way.

This lesson plan focuses on gaining a visual understanding of whole-part relationships through the use of a simple circle graph, and also gives students an opportunity to practice fractions, as required by section 3.NF.3 (Number and Operations – Fractions) in the Common Core: "Explain equivalence of fractions in special cases, and compare fractions by reasoning about their size."

## Objective

To understand a pie chart (circle graph) and to be able to deduce information about the relative size of the parts shown in it.
To be able to compare fractions by reasoning about their size (Common Core 3.NF.3).

## Supplies

• A graph printout from http://www.meta-chart.com/pie, a free pie chart maker
• Paper cutouts: a large circle cut out of thick white paper, and ½, ¼, and $$\frac{1}{8}$$ sectors of circles cut out of different colors of construction paper
• Paper and markers/crayons/colored pencils for each student

## Methodology/Procedure

Start with a review of fractions. Show the students your white circle, and ask what they think of when they see it. Give them some time to discuss what a circle means to them, and validate their feelings and opinions as they share them. These opinions can be as simple as 'pizza!' or as abstruse as 'unending'; there is no one answer. When they have all had a chance to share how they feel about it, explain that to you, since it is a whole, entire circle, it can mean a 'whole' of anything: a whole class of children, a whole country, a whole family, a whole bag of Skittles.

Cover half of your circle up with your ½ circle construction paper cutout, and ask how much of the circle is colored now. After the students have answered, tell them that since the whole circle meant to you a whole of anything (a whole class, a whole country, a whole bag of Skittles), the colored section means, to you, half of anything. Half a bag of Skittles, half a class, half a country.

Ask half the class to hold up their hands: the front half, the back half, or the side half. Then tell them that you could use this circle to show how many children had their hands up; the colored section would be the children with their hands up, and the white section would be the other children.

Take off the half circle of construction paper and replace it with the ¼. Ask your students if they know how much of the circle is shaded now.
They will probably be ready with the right answer; if anyone is unsure, show that ¼ is half of a half, and that four quarters cover the whole circle. Ask a quarter of the class to raise their hands; you will probably have to mark off the demarcation lines for the quarter. Tell them that if the whole white circle represents the class, that construction paper quarter is the part of the class with their hands up.

Go on to $$\frac{1}{8}$$, introducing it the same way with your $$\frac{1}{8}$$ construction paper sector.

Now show the students the graph printout from http://www.meta-chart.com/share/favorite-colors-in-the-classroom and tell them it is a graph which shows the favorite colors in a class of students like yours. Tell them it is called a pie chart, and ask them if they know why.

Ask which color is the biggest favorite. Then ask which of the three explicitly listed colors the fewest children seem to like.

Now tell them the class was made up of twenty students, and ask them how many students liked blue. Ask whether fewer or more than six students have green for their favorite color, and whether or not five students have purple for a favorite color. Ask if four students might have purple for a favorite color, and then whether two might have liked purple best.

Now take a poll of favorite colors in your classroom, and put the data on your blackboard. It may look something like this:

• Blue – Zack, Katie, Markus, Peter – 4
• Green – Jamie, Paul, Christian – 3
• Red – Jordan – 1
• Pink – Mallory, Katie, Jennifer, Desiree, Madeline – 5
• Purple – Desiree – 1
• Yellow – Sofia, Edwin – 2

Tell your students you want them each to make a pie graph for you, using this data. Suggest they group the smaller amounts together under 'other'; in the example above, this would be red, purple and yellow, totaling four.
Then start with the color most children like, and ask what fraction of the total number of children like that color. In the example above this would be $$\frac{5}{16}$$. Help the students relate this to the fractions you've already discussed; in this case, just a little more than $$\frac{4}{16} = \frac{1}{4}$$. Have the students color a generous quarter on their circles, and go on to the next color: in this graph, blue, $$\frac{4}{16}$$ or exactly $$\frac{1}{4}$$ of a circle.

At this point you don't want to focus on the nitty gritty. For instance, it would be counterproductive to divide your circle into sixteen, twenty, or thirty equal portions (as many portions as you have students in your class) and make an exact circle graph by coloring in the appropriate number of sectors. Instead, you want to focus on getting an intuitive sense of the size of different fractions. Your students will do this by relating the fractions they are unsure of (how much is $$\frac{5}{16}$$, anyway?) to what they already know. In this case $$\frac{4}{16} = \frac{1}{4}$$, which is a nice solid quarter, and the $$\frac{1}{16}$$ it goes over is less than $$\frac{1}{8}$$, which is a fat sliver.

When your students have all created their own graphs, have them take turns explaining what they drew and what the different sectors mean. Ask them which color is favorite, which is second favorite, which is third favorite. Ask them how the graphs would change if one child changed his favorite color: for instance, if Madeline decided she preferred blue or Jordan switched to green.
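A quick way for a teacher to double-check the comparisons in this step is Python's built-in `fractions` module. The sketch below reuses the sample poll from above (16 students, with red, purple and yellow grouped as 'other'); it is a supplementary check for the teacher, not part of the lesson script.

```python
from fractions import Fraction

# Sample poll from the lesson: pink 5, blue 4, green 3, "other" 4.
votes = {"pink": 5, "blue": 4, "green": 3, "other": 4}
total = sum(votes.values())  # 16 students in all

pink = Fraction(votes["pink"], total)  # 5/16 of the circle
blue = Fraction(votes["blue"], total)  # 4/16 of the circle

print(blue == Fraction(1, 4))  # True: a nice solid quarter
print(pink > Fraction(1, 4))   # True: a "generous" quarter
# The sliver by which pink goes over a quarter is 1/16, less than an eighth.
print(pink - Fraction(1, 4) < Fraction(1, 8))  # True
```

`Fraction` reduces automatically, so 4/16 compares equal to 1/4 with no rounding involved.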
Then ask them what would happen to the graph if your class was merged with another third grade class of the same size, and all the children in that class liked yellow best.

These exercises should give your students a new familiarity with and perspective on fractions, as well as opening the doors to understanding data representation with pie charts.

## Common Core Standards

This lesson plan is aligned to Standard 3.NF.3 (Third Grade Numbers and Operations – Fractions, item 3) in the Common Core. 3.NF.3 reads: "Explain equivalence of fractions in special cases, and compare fractions by reasoning about their size."

# 3rd Grade Bar Chart Lesson Plan

## Discussion/Introduction

In third grade we get to liven up our bar chart lessons by taking advantage of our students' new familiarity with multiplication and division. By the end of third grade, the Common Core recommends that students know from memory all products of two one-digit numbers. By the time you schedule your bar chart lesson, your students should be comfortable doing skip counting by twos, threes, fives or tens; and that means they shouldn't have any difficulty interpreting a scaled bar chart.

# Calves, Baby Camels and a Scaled Bar Chart Lesson

## Objective

In this 3rd grade bar chart lesson, students will learn to analyze and understand data presented on a scaled bar chart. They will learn how to do both simple and multi-step comparing problems using the bar chart.

## Supplies

• Graph printout from www.meta-chart.com/bar/, or, if your classroom has projector capabilities, a graph prepared and saved on your laptop
• One sheet of graphing paper for display, with large squares and a height of about ten squares
• Graphing paper and markers for each student

## Methodology/Procedure

Start this lesson with a bit of storytelling.
Math and imaginative thinking go well together, and starting a new math topic with an imaginative exercise means you'll have the attention not only of your technically minded, mathy children but also of those who love their English but usually 'turn off' when they come into your classroom.

Once upon a time, far away, in the wild open steppe land of Mongolia, two children lived with their old grandfather and grandmother in a little round tent made of felt. Their little tent was alone on the steppeland; for as far as they could see to the east, west, north and south there was nothing but waving grasses and wooded hills. That, and the camels, yaks, sheep and horses which made up the family wealth.

It was Temuujin's job to take the horses and yaks to pasture every morning, and Zolzaya, his little sister, took the sheep and goats to the green grass on the other side of the hill. Grandma and Grandpa were getting old, and though they still hobbled around, checked everything, and told the two children what to do, they were past the age for active work.

Then came the spring when Grandpa's legs were so bad he could not make his rounds of the herds as he had used to. He had to stay in his bed by the fire, and the furthest he could move was to his seat by the fire. He worried about his herds, and though Temuujin and Zolzaya knew everything was going well, he wouldn't believe them when they told him that. He would ask questions about numbers (how many more kids are there than lambs?) and numbers were not Temuujin's strong point. When Temuujin was uncertain in answering, he would get angry and say that the boy knew nothing about taking care of animals.

One day Temuujin and Zolzaya decided to do something special to give their grandfather all the information he wanted about the animals. They would count all the lambs, kids, foals, calves and baby camels, and mark down the numbers in a tally chart.
Then they would make a bar graph for their grandfather, so he could see exactly how many animals he had without having to go outdoors. They knew that with a bar graph they could solve any comparing problems he asked them with just one glance at the paper.

They were quite sure that this would make their grandfather very, very happy, and they had exactly one sheet of graph paper to draw their bar graph on.

But when they came in from counting, both Temuujin and Zolzaya had very somber faces. This is what their numbers looked like:

Lambs: 37
Kids: 42
Foals: 11
Calves: 15
Baby Camels: 6

And this is what their graph paper looked like:

Display a simple sheet of large-squared graph paper, with ten squares each way.

How could they put their data on this little sheet? The numbers were too big.

Stop here, and give your students a chance to discuss the problem. After a discussion break, ask for possible solutions.

Temuujin and Zolzaya realized there was only one thing to do: they could scale their bar graph so that each block on the graph paper, instead of meaning 1 animal, meant five. This made some of their numbers very easy to work with. The calves, for instance, being exactly fifteen, would be represented by a bar exactly three squares tall on the bar graph.

What about the baby camels? How could they make a bar that meant 'six', if each square stood for five?

Stop again for discussion. Ask any students who have solutions to share them with the class.

Temuujin realized that he would have to divide the square up, in his mind, into five equal slices, and let the camel bar go exactly one $$\frac{1}{5}$$ slice over into a second square.

So this is what Temuujin and Zolzaya's graph looked like:

Display your printout of http://www.meta-chart.com/share/baby-animals-on-the-steppe, or put it on the projector.

They were very pleased when it was done, and carried it proudly in to their grandfather.
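Temuujin and Zolzaya's scaling rule can be written out as a short calculation: with one square standing for five animals, each bar's height is the tally divided by five, split into whole squares plus fifths of a square. This sketch simply reuses the tallies from the story, as an aside for the teacher rather than a classroom activity:

```python
from fractions import Fraction

SCALE = 5  # one graph-paper square stands for five animals
tallies = {"lambs": 37, "kids": 42, "foals": 11, "calves": 15, "baby camels": 6}

for animal, count in tallies.items():
    whole_squares, fifths = divmod(count, SCALE)
    height = Fraction(count, SCALE)  # bar height measured in squares
    print(f"{animal:11s}: {whole_squares} squares + {fifths}/5 (height {height})")

# The calves (15) give a bar exactly 3 squares tall; the baby-camel
# bar (6) is one square plus one fifth of the next square.
```

Every bar stays under ten squares, which is exactly why the one sheet of graph paper suffices once the scale is 5.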
And after that, they could answer all his questions with just a moment's hesitation, whenever he asked them about the numbers of animals.

Before you go into these questions, the initial data you put on the blackboard should be erased, so that students are encouraged to base their arithmetic on the graph rather than on the raw numbers.

The first question he asked was: how many more kids are there than lambs?

Let your students decide how to solve this and the following problems, and come in with your ideas only after they have it worked out. If they get stuck, they should be encouraged to see that there is a difference between lambs and kids of just one square, so five animals.

The second question was: how many foals and baby camels are there, altogether?

If your students decide to do this problem as $$5 \cdot 2 + 5 \cdot \frac{1}{5} + 5 \cdot 1 + 5 \cdot \frac{1}{5}$$, tell them they are right, and then point out that they can also do it as $$(2\frac{1}{5} + 1\frac{1}{5}) \cdot 5$$; or, if they prefer dealing with whole numbers and fractions separately, $$5(2+1) + \frac{1+1}{5} \cdot 5.$$

The third question was: how many baby animals are there altogether?

After your students have worked on this problem, show them that one easy way would be to add the whole numbers, $$7+8+2+3+1=21$$; add the sum of the fractions, $$\frac{2}{5}+\frac{2}{5}+\frac{1}{5}+\frac{1}{5} = \frac{6}{5} = 1\frac{1}{5}$$; and multiply both by five: $$21 \cdot 5 + \frac{6}{5} \cdot 5 = 105+6=111.$$

Grandpa's last question: how many fewer foals are there than kids?

This can be done as $$(8-2) \cdot 5 + (\frac{2}{5}-\frac{1}{5}) \cdot 5.$$

### Extension Exercise

If there is still time before the end of the lesson, ask the students if they can make a scaled bar graph to let Grandpa know how many horses, cows, and camels there are:

Horses: 20
Camels: 11
Cows: 18

I've found that students respond wonderfully to the use of stories in the math
classroom, and sometimes a touch of faraway places and novel situations can make an otherwise 'booooriing!' math exercise almost magically fun.
Paul Chelimo Aiming for World Indoor 3k Gold, Brenda Martinez Done with the 800 – Abbott Dash to the Finish Line USATF 5k Notes

2017 TCS New York City Marathon

By LetsRun.com

NEW YORK — The USATF 5k Championships take place Saturday morning as part of the Abbott Dash to the Finish Line 5K, which is part of New York City Marathon weekend. Olympic silver medallist Paul Chelimo and former American record holder in the 5000 Molly Huddle headline the fields. We spoke to them and to Ben True, Brenda Martinez, Hassan Mead, Leonard Korir, and Des Linden. We present our main takeaways from each athlete, including the news that Brenda Martinez is done with the 800.

Paul Chelimo wants World Indoor gold and the American 5000m record in 2018, credits the Army mentality for his success

Paul Chelimo has an Olympic silver at 5000 and a Worlds bronze at 5000. Chelimo knows that with Mo Farah leaving the track for the roads, the opportunity is there to win a global 5000m title. That is Chelimo's long-term goal, but with no global championship in the 5000m in 2018, Chelimo's immediate goals are to win the World Indoor title at 3000 in Birmingham and then, outdoors, to break the American record at 5000 (12:53.60). But before Chelimo gets the American record, his more immediate goal is to go sub-13:00, as his PR is a modest 13:03.90 from the Olympic final.

We also asked Chelimo about getting DQ'd in the Diamond League final in Zurich. That DQ cost him $20,000, and Chelimo said it hurt. Being in the US Army World Class Athlete Program (WCAP), Chelimo cannot accept endorsements, so he is more reliant on prize money than most US runners.
That may be a drawback of being in the US Army, but Chelimo credited the mentality instilled in him by the Army for helping him become one of the world's best. "The Army has always been there for me. Without the Army I wouldn't be this successful… The big thing is the mentality. The Army always teaches you to be physically and mentally tough. It's just that mentality," said Chelimo.

The marathon is Molly Huddle's focus starting in 2018

Huddle was going to be in New York this weekend anyway live-tweeting the marathon for NYRR, but decided to hop in the 5k at the last minute. "I thought if I had a glimmer of fitness, then I would do it," Huddle said. "I have a glimmer."

It's logical to ask why Huddle, who was third here last year and said she is focused on the marathon from here on out, isn't running the longer race on Sunday, but she said that she wanted to run a full track season in 2017 and didn't want to think about cutting it short or rushing into a fall marathon buildup.

Right now, Huddle's focus is on the Houston Half Marathon in January, where she said she'd like to break 67:00 if conditions are good. That would be a new American record (Huddle's PR is 67:41; Deena Kastor has the record at 67:34, while Kara Goucher ran 66:57 on an aided course at the Great North Run), but Huddle said that Houston will be the fastest course she's raced on so far. "New York (where Huddle ran her PR) is pretty fast, but Houston's really fast," Huddle said. Huddle also usually races just one half marathon a year, so the more attempts she gets, the more chances she has to run fast. "I think the half is just getting to be a more often run distance [globally] and some really talented women are sticking only to the roads now and not even doing the track. So I think that time is going to go down. Same with the U.S. women, just running it more often."

We also asked Huddle to make a prediction for Sunday's marathon.
She picked Mary Keitany on the women's side (surprise, surprise) and on the men's side… "You never count Meb out, so I'm just gonna say Meb," Huddle said. "Every time someone doubts him, he comes out with a win. If it was any other 42-year-old, I wouldn't say it. But it's Meb."

Ben True has a new coach — Ray Treacy

True, who turns 32 next month, has gone through a few coaches as a pro, beginning with a brief stint in Eugene with Mark Rowland's Oregon Track Club before heading back to New England, where he's worked with Mark Coogan and Tim Broe. This year, True was self-coached, but he decided that he wanted to go back to having someone write his training programs and, on Coogan's recommendation, sought out Treacy, the coach of Molly Huddle and Emily Sisson as well as Providence College. True will remain based out of Hanover, N.H.

True likes Treacy's strength-based program and thinks that Treacy will keep him from pushing too hard in training, which he thinks has been an issue in recent years. "He's somebody that really emphasizes recovery and not doing too many workouts in a period of time," True said. "And I've felt that I've been running myself a little ragged the last few years and I think having that little extra bit of recovery will probably help me."

True is entered in the 5k on Saturday, but he's only been running again for four weeks at this point, so he's not expecting much. Still, he is the American record holder for 5k on the roads (13:20) and always seems to be in the mix in these kinds of races. We're certainly not counting him out. But True is also here as a guest of the NYRR (he'll be on the lead vehicle for the marathon) and said that he definitely has some marathons in his future, though he wants to stay on the track for at least two more years.
Brenda Martinez is moving on up and done with the 800

Brenda Martinez is best known for her silver medal in the 800 at the 2013 World Championships, but now, at the age of 30, she said it's time for her to give up the 800 in championships and focus on the 1500 and possibly the 5000. "I think I'm just going to leave the 800 alone at championships and just move up and see what I can do with my strength in the 15 and maybe the 5k," said Martinez, who was a 2016 Olympian at 1500 meters. "There's a bit of drama in the 800, and that's completely out of my control. It's just time for me to move up."

Martinez is using the 5000m here to test the waters at the longer distances. She ran a very respectable 15:24 at the Carlsbad 5000m in 2014 and said the plan on Saturday is to run with the leaders. "The race changes now that Molly Huddle (US 5k road record holder) is in it… Now I'm going to go out with whoever is leading it. I think I'm fit to do a pretty strong 2 mile; the last mile is going to be all heart."

After the 5th Avenue Mile in September, Martinez had platelet-rich plasma therapy (PRP) for her Achilles tendinosis. She said she hasn't felt any Achilles pain since then, which is a good thing as it had bothered her for two years. She also said her training partner, Boris Berian, the 2016 World Indoor champion at 800, had two PRP treatments this year and should be back in form in 2018.

Hassan Mead and Leonard Korir ready to get to Paul Chelimo's level

Hassan Mead of the Oregon Track Club and Leonard Korir of WCAP both ran the 10,000m at the World Champs. Korir was 13th and Mead 15th, not quite at the level of Chelimo, but that didn't stop Mead from saying he thinks Korir is the favorite in Saturday's race, due to Korir having raced the most this fall. Mead also noted that he has a faster 5k PR (13:02.80) than Chelimo. Chelimo has put it together when it matters most, and Mead and Korir both want to get to that level.
Time for a change for Desi Linden

Linden was 7th at the Olympics and 4th in Boston this spring — good results, but still short of Linden's ultimate goal. Linden could have come out and run another marathon this fall, but she felt that in order to improve as a marathoner, she would be better served changing things up rather than continuing on the same path.

"I feel like I've kind of hit a plateau," Linden said. "Speed just needs to be touched on… If I went out and did a track segment, I would be a 33-low, maybe 32:50 [10k runner], and I think I need to push that down."

She's had some fun mixing it up at shorter distances this fall (she ran the 5k Mayor's Cup XC race in Boston two weeks ago). In particular, she's looking forward to mixing it up against some athletes that she doesn't normally get to race. Linden, a 2:22 marathoner, battling Brenda Martinez, a 1:57 800 runner, over 5k should be interesting. "I'm finishing up my workouts hard, I'm like, I want Brenda Martinez to go out too hard, she's gonna be in the hills late and then I want to sprint by her in the finish. So I think visually that would be amazing. Then she'd probably still outkick me. But in my mind…"
Q: OCaml match error

I am trying to add the first two elements of my list, but I am getting an error on the third line. Why is that?

    let addfirsttwo lst =
      match lst with
      | List.hd lst + List.hd (List.tl lst) ;;

A: You mix two different approaches for extracting information from a list. One is using List.hd and List.tl, the other is pattern matching. With the former, you are almost done. You just have to get rid of the pattern matching like so:

    let addfirsttwo lst =
      List.hd lst + List.hd (List.tl lst)

I assume, however, that your question pertains to some homework where List.hd and List.tl are forbidden. And, anyway, pattern matching is useful to know. So the problem with your code is that you did not complete the pattern matching. A pattern matching clause has the form

    | <some pattern> -> <some expression>

The pattern usually contains variables which you can then use in the expression. One way to define List.hd, for example, is

    let hd list =
      match list with
      | head :: tail -> head

Your clause was missing the actual pattern and the ->. Another way to fix your code, at least the syntax, would have been to provide the missing parts like so:

    let addfirsttwo lst =
      match lst with
      | _ -> List.hd lst + List.hd (List.tl lst)

where _ acts as a wildcard that will match any value at all and not bind any variables. It would not, however, solve the assumed homework constraint.

It is worth noting that the pattern I gave for hd does not match all values. It only matches lists of length at least 1. That is fine, as List.hd only needs to support non-empty lists. Your function only needs to support lists of length at least 2. So a good starting point for you would be a pattern that matches such lists.
\section{Introduction} The newly redesigned light version of the Space Interferometry Mission, SIM-Lite, is a long-baseline astrometric interferometer designed to reach sub-microarcsecond precision over the course of a five year mission. It has two modes of operation, one for global astrometry with 4~$\mu${as}\ end-of-mission accuracy, and the other for narrow-angle astrometry, with a single-epoch (1100s visit) accuracy below 1~$\mu${as}\ \citep{Unwin08}. The redesign lowers costs primarily by shortening the baseline from $9~{\rm m}$ to $6~{\rm m}$ and replacing a guide interferometer with a telescope star tracker. Narrow-angle astrometry is used to measure the orbital motions of non-luminous objects, like neutron stars and planets, by observing the motion of the parent star. SIM-Lite is designed to make narrow angle measurements by alternately switching between a target star and anywhere from 3 to 6 reference stars within a 1 degree radius. The astrometric signal is the motion of the target star relative to the reference stars. To take a concrete example, consider a candidate star that is observed 250 times over the course of a 5-year mission. A peak in the joint periodogram power distribution (similar to a power spectral density) of the target star position constitutes the detection of a planet candidate. Detection of a planet with a 1\% false-alarm probability requires a signal-to-noise ratio (SNR) of approximately 5.8. At 10 pc, the astrometric signature for an Earth orbiting a one Solar mass star at 1 AU is 0.3~$\mu${as}, so the SNR requirement calls for a final instrument error of less than 50~nano-arcseconds (nas). This corresponds to a single-epoch error of less than 0.82~$\mu${as}. For the nearest $\sim60$ stars, a noise floor below 35 nas is needed to detect Earths in the habitable zone. 
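The error-budget arithmetic in the worked example above can be laid out explicitly. The sketch below only reuses the figures quoted in the text (0.3~$\mu${as}\ signature, an SNR threshold of roughly 5.8 for a 1\% false-alarm probability, 250 visits); the square-root scaling assumes purely white noise, which is precisely the assumption examined in the rest of this paper.

```python
import math

signal_uas = 0.3    # astrometric signature: Earth at 1 AU, 1 M_sun star, 10 pc
snr_required = 5.8  # detection threshold for ~1% false-alarm probability
n_epochs = 250      # visits over the 5-year mission

# End-of-mission noise floor needed for a detection at the required SNR:
mission_noise_uas = signal_uas / snr_required  # ~0.05 uas, i.e. the ~50 nas floor

# If (and only if) the noise is white, it averages down as 1/sqrt(N),
# so each single epoch may be sqrt(N) times noisier than the floor:
single_epoch_uas = mission_noise_uas * math.sqrt(n_epochs)  # ~0.82 uas

print(f"floor ~{mission_noise_uas * 1e3:.0f} nas, "
      f"single epoch ~{single_epoch_uas:.2f} uas")
```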
The key questions, therefore, are whether sub-microarcsecond single-epoch accuracy is attainable, and, equally important, whether the instrument systematic errors do average down well below the single-epoch accuracy. This paper reports testbed results relating directly to both the single-epoch and end-of-mission narrow angle accuracy. The basic elements of a Michelson stellar interferometer are shown in Figure~\ref{fig1}. Starlight is collected by two spatially separated siderostats. Each siderostat has an embedded fiducial that marks one end of the baseline and also acts as a retroreflector for the internal and external metrology laser beams. Along each arm, the starlight beam has an annular footprint while internal metrology occupies the center. The quantity of interest is the delay $x$, which is the optical path difference (OPD) between two starlight beams originating at a star and terminating at the two fiducials. The delay is given by: \begin{equation} x=\vec{b}\cdot \hat{s} + C + \eta \label{eq-basicAstrom} \end{equation} where $\vec{b}$ is the baseline vector, $\hat{s}$ is the unit vector to the star, and $\eta$ represents measurement noise. $C$ is sometimes called the interferometer {\em constant term} and represents the internal OPD when the metrology reading is zero. The baseline length $b$ is defined as the distance between the two fiducials and is monitored by external metrology. Thus, three measurements, the white light fringe delay, internal metrology and external metrology, form the basic ingredients of the astrometric angle. The SIM-Lite instrument is described in detail elsewhere \citep{SIM08, SIMEng08}. The fundamental SIM-Lite instrument is its ``science'' interferometer, operating in the visible range with $50~{\rm cm}$ collectors and a $6~{\rm m}$ baseline. The baseline vector is not stable at the microarcsec level, so guide interferometers observing bright (typically 7~mag) guide stars are used to measure the motion of the baseline over time.
A laser optical truss ties everything together at the microarcsec level. A detailed mathematical analysis of the SIM-Lite astrometric approach appears in \citet*{Mil03}. \section{Random versus Systematic Errors} The noise in astrometric equation~(\ref{eq-basicAstrom}) consists of both random and systematic instrumental errors. For a 6m baseline and a star close to the center of the field of regard, Equation~(\ref{eq-basicAstrom}) implies that 30 pico-meters (pm) of total delay error will cause approximately 1~$\mu${as}\ of astrometric error. Random errors, also called white noise, will average down with integration time $T$ as $1/\sqrt{T}$. Examples of white noise include photon noise from the target and reference stars and detector noise in the laser metrology. Systematic error, or pink noise, is correlated over time and may not get smaller with longer integration time. The dominant systematic errors in SIM-Lite are thermal in nature. Thermal drift in the motion of the optics is properly monitored to first order by the laser metrology system and doesn't produce an error. If the metrology for some reason doesn't measure the optical path of the starlight properly, there will be an error. The main cause of such an error is the drift of the laser metrology beam's alignment with respect to the starlight. SIM-Lite actively aligns both the metrology beam and the starlight beam using separate tip/tilt sensors in the astrometric beam combiner. Misalignment is caused by temperature gradients in the beam combiner that cause the metrology tip/tilt sensor to move relative to the starlight tip/tilt sensor. Only changes in the hardware that cause the metrology to incorrectly measure the optical path of the starlight are errors. Because the metrology hits the center of all the optics, while the star light has an annular footprint, warping of an optic can cause an error. 
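The 30~pm-per-$\mu$as conversion used above is just the small-angle limit of equation~(\ref{eq-basicAstrom}) at fixed baseline: \begin{equation*} \delta\theta \approx \frac{\delta x}{b} = \frac{30\times 10^{-12}~{\rm m}}{6~{\rm m}} = 5\times 10^{-12}~{\rm rad} \approx 1~\mu{\rm as}. \end{equation*}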
The best way to ensure we have captured all the important errors is a hardware testbed that has all the essential elements of the instrument we plan to operate in space. The Micro-Arcsecond Metrology (MAM) testbed was built for this purpose. Among other things, it captures the beam pathlength and angle control and beam recombination in a traceable manner to SIM-Lite. \section{Results from the Interferometer Testbed} The MAM testbed \citep{Hin02} has two main components: a test article corresponding to SIM-Lite, and an inverse interferometer pseudo star (IIPS) to simulate the incoming star light. Starting from the IIPS source, broadband (600--1000 nm) light is injected from a fiber tip, collimated, and separated into two beams. These are steered to two coordinated stages where they are launched towards the test article siderostats as flat, coherent wavefronts. The test article contains all the essential components of the actual interferometer, including fringe detection, pathlength and pointing control, and internal metrology \citep{An05}. The testbed was in operation from 2002 through 2006 and has been of great value in identifying technical challenges, their mitigation, and demonstrating technical readiness \citep{GoulB02, GoShen04, Gou04}. Figure~\ref{fig2} shows the effect of the metrology-starlight drift as measured in the testbed over a 140 hour period. To minimize the impact of thermal drifts on narrow-angle (NA) observations, SIM-Lite ``chops'' between the target and each of the reference stars. The chop sequence is target, reference, target, next reference, target, and so on. While the actual integration times depend on stellar brightnesses, a ``typical'' chop between two stars consists of 15 seconds on the target, 30 seconds on the reference, and 15 seconds on slews between the stars. This ``differential'' target-reference measurement error is also plotted in Figure~\ref{fig2}.
Figure~\ref{fig3} shows the standard deviation of averages of $N$ chops versus the required integration time as $N$ is increased. We see that the standard deviation of the differential measurement decreases as $1/\sqrt{T}$, and from time scales of 45 seconds to 42 hours, the noise in the differential measurement is nearly white. No noise floor is observed down to 8 nas after about 42 hours. Since the last value is obviously a downward fluctuation, we extrapolate from the minimum integration time of 45 seconds to a white noise expectation of 24 nas at 42 hours. An important question in applying testbed results to SIM-Lite is how the thermal behavior of the relevant parts of SIM-Lite on orbit compares with that of our ground testbed. Within the testbed there were numerous temperature sensors that recorded the temperature fluctuations of the optics and mounts inside the vacuum chamber. We also conducted a very detailed thermal simulation of the SIM-Lite spacecraft in orbit. This multi-thousand node thermal model was run for 100 hours of SIM-Lite operation, where the spacecraft attitude was allowed to change according to a typical observing scenario. The simulation includes active thermal control, with the temperature maintained near $20^\circ$C within the expected capabilities of SIM-Lite's thermal control system. As different parts of the spacecraft are illuminated by the Sun, the simulated thermal control system turns heaters on and off in order to minimize temperature fluctuations in the key opto-mechanical environments. Figure~\ref{fig4} shows the resulting predicted thermal fluctuations of the SIM-Lite beam combiner. Also shown in the figure are measured testbed environment temperatures under conditions similar to those that yielded the results of Figure~\ref{fig3}. We see that the actual thermal environment of SIM-Lite should be better than the testbed, or conversely, that the test results are conservative with respect to thermal drift.
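If the white-noise scaling is assumed to continue to hold, the same extrapolation gives the implied single-epoch (1100~s) differential noise: \begin{equation*} \sigma(T) \approx 24~{\rm nas}\sqrt{\frac{42~{\rm h}}{T}}, \qquad \sigma(1100~{\rm s}) \approx 24~{\rm nas}\sqrt{\frac{151200~{\rm s}}{1100~{\rm s}}} \approx 0.28~\mu{\rm as}, \end{equation*} comfortably below the 0.82~$\mu$as single-epoch allocation of the Introduction.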
To directly compare the stability of the SIM-Lite thermal environment to the testbed, we need to extract statistical quantities that describe the stability of the two thermal environments. One such quantity is the power spectrum of the thermal fluctuations, shown in Figure~\ref{fig5}. SIM-Lite on orbit is seen to be more thermally stable than our current testbed environment. \section{Astrophysical Sources of Error} While this paper deals primarily with instrumental error sources, we have also extensively studied astrophysical errors, which we summarize here. The two major astrophysical errors in astrometry are stellar activity and companions to reference stars. A typical sunspot covering 0.1\% of the area of the Sun will, in the worst case, shift the photocenter of the Sun by 0.25~$\mu${as}\ at 10 pc and produce a radial velocity bias of 1 ${\rm m}/{\rm s}$. A shift of 0.25~$\mu${as}\ is well below the single-epoch precision of SIM-Lite. Star spot noise will not average down with integration time until that time is comparable to 1/4 - 1/2 the rotation period of the star or the mean sunspot lifetime. The presence of planets orbiting reference stars is another astrophysical noise source. RV vetting of reference stars will eliminate binary stars and planets down to about Jupiter mass. With multiple reference stars, it is possible to detect and assign the remaining planets to a specific reference star. A more detailed treatment will appear in a subsequent paper. \section{Conclusion} For planet finding, the dominant error experienced by an astrometric interferometer is the thermally induced drift of the metrology with respect to the starlight. The testbed results show that, even in the presence of a harsher thermal environment than SIM-Lite will experience in space, chopping between the target and reference stars renders this error nearly ``white'' with a noise floor below 24 nas after 42 hours of averaging.
This is below the 35 nas required to detect an Earth in the habitable zone of the nearest $\sim60$ stars. While investigation of the next-level noise sources is continuing, these results suggest that the technology for astrometric detection of nearby Earths is at hand. \section{Acknowledgements} The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Copyright 2008 California Institute of Technology. Government sponsorship acknowledged.
# Problem P. 4598. (January 2014)

P. 4598. Jill and Jack organise a "rolling bicycle race" along a long enough slope. Their bicycles are alike and neither of them pedals. Jill weighs 50 kg, Jack weighs 100 kg, and the mass of each bicycle is 15 kg. The cross-sectional area of Jack is one and a half times as much as that of Jill. Who will reach the greater terminal speed?

(4 points)

Deadline expired on 10 February 2014.

Solution sketch (translated from the Hungarian): The terminal speed is proportional to the square root of the ratio of mass to frontal area. For Jack this number is about 8 percent larger than for Jill, so Jack's terminal speed will be the greater one.

### Statistics:

123 students sent a solution. 4 points: 63 students. 3 points: 31 students. 2 points: 10 students. 1 point: 11 students. 0 points: 6 students. Unfair, not evaluated: 1 solution.
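The "about 8 percent" figure is easy to reproduce numerically. A sketch assuming quadratic air drag (at terminal speed, drag proportional to A*v^2 balances the downhill component of gravity proportional to m, so v_terminal is proportional to sqrt(m/A)):

```python
# Terminal speed scales as sqrt(m / A) under quadratic drag.
from math import sqrt

m_bike = 15.0                     # kg, each bicycle
m_jill = 50.0 + m_bike            # rider + bicycle, kg
m_jack = 100.0 + m_bike
area_jill = 1.0                   # frontal area, arbitrary units
area_jack = 1.5 * area_jill      # Jack's is one and a half times Jill's

ratio = sqrt((m_jack / area_jack) / (m_jill / area_jill))
print(round(100.0 * (ratio - 1.0), 1))   # 8.6 -> Jack is faster by ~8-9%
```

So Jack wins, in agreement with the solution sketch above.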
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp015999n5793 Title: Making China's Greatest Poet: The Construction of Du Fu in the Poetic Culture of the Song Dynasty (960-1279) Authors: Chen, Jue Advisors: Kern, Martin Keywords: Du Fu poetic culture Subjects: Asian literature Abstract: In traditional narrative of Chinese literary history, Du Fu 杜甫 (712-770) is arguably the "greatest poet of China," and it was in the Song 宋 (960-1279) that his greatness was finally recognized. This narrative naturally presumes that the real Du Fu in history is completely accessible to us, which is not necessarily true. This dissertation provides another perspective to understand Du Fu and the "greatness" of his poetry. I emphasize that the image of Du Fu that we now have is more of a persona that has been constructed from his available poetic texts. Poets in the Song Dynasty, especially those in the eleventh century, took initiative to construct this persona, and their construction of Du Fu was largely conditioned by their own literary and intellectual concerns. The entire dissertation is divided into five chapters. Chapter 1 investigates how Du Fu's poetic collection emerged. His collection, as it was compiled and edited, not only provided a platform, but also set restrictions, for the construction of Du Fu. Chapter 2 examines how Du Fu used to be remembered before his collection took form. Memory of him before the eleventh century was considerably different from his received image. The remaining chapters focus on three major aspects of Du Fu's persona – namely his images as a poet-historian, a master of poetic craft, and a Confucian poet – to analyze how and why Du Fu was constructed as such in the Song. 
Song poets accepted poetry as a medium loaded with valuable information, and they thus explored Du Fu's poetry for history; they concerned themselves with issues pertaining to poetic craft, and retrospectively looked for examples in Du Fu's poetry as established standards; they, as scholar-officials, committed themselves to the state, and declared Du Fu as their model. In sum, Song poets provided particular readings to Du Fu's particular poems, and claimed these readings as the result of Du Fu's intentional production. Through interpretation of Du Fu's surviving poems, they constructed Du Fu as China's greatest poet. URI: http://arks.princeton.edu/ark:/88435/dsp015999n5793 Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu/
Q: Linux SCP File transfer No space left on device?

I am trying to transfer one zip file from a Linux based machine to a Windows based one using SCP. The transfer goes through OK, but when it completes it says "No space left on device". The destination share has a LOT of free space. I am transferring a file of around 5 gig but there's several hundred gig free on the share. I was thinking it might be an issue with the directory paths. On the Windows server box the share would be c:\folder; when I use SCP in Linux I use /folder. Hope this makes sense.

A: Just another guess: Is the SCP server running on the Windows machine a 64-bit software? If that's a 32-bit executable it will probably not support files over 4 GB (okay, GiB, so 4 x 1024 x 1024 x 1024 bytes). Quick check: if you see "*32" in the Task Manager after the filename then that's a 32-bit executable. (See more: https://superuser.com/questions/358434/how-to-check-if-a-binary-is-32-or-64-bit-on-windows ) But if it's a 64-bit executable that still does not guarantee it can handle files over 4 GB... Have you tried to transfer files of about 4 GB? If a file below 4 GB (for example 3800 MB) is transferred without any problems but a file larger than 4 GB fails, then the best guess is that the SCP server can't handle files this big. Another guess: there might be low disk space in the temp directory (opening %TEMP% will lead you there).

A: I investigated the same: it can happen if you were trying to upload big files and the transfer did not finish (e.g. in Midnight Commander (mc)); the partial file is saved in /tmp and can be over 1 GB, which is why it reports "No space left on device". Try checking the free space on the LOCAL machine, in particular "/tmp": # df -Th ... tmpfs tmpfs 1001M 1001M 0 100% /tmp ... In this case you need to remove files (seemingly from previous transfers) like this: # rm /tmp/mc-root/*

A: I always use WinSCP to use scp on Windows and never had a problem.
In addition, could you provide the command and program you are running?

A: Just a guess, but: Check if the server to which you want to copy the file has enough space on the C drive for this file. It is possible that the receiver is placing the file in the temp directory, where it attaches the rest of the file parts during the transfer... Also check that the correct permissions have been set, both on the share and in NTFS, and that the daemon has rights to write. Check if there is a per-user quota implemented on the drive/share.
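To make the 4 GB theory above concrete: a 32-bit file offset wraps around at 2^32 bytes, which a ~5 GB archive exceeds. Bash arithmetic is 64-bit, so the wraparound is easy to display (sizes only; no transfer involved):

```shell
limit=$((2**32))          # 4 GiB: the largest offset 32 bits can address
size=$((5 * 1024**3))     # roughly the 5 GB zip from the question
echo $((size > limit))    # 1: the file exceeds a 32-bit offset
echo $((size % limit))    # 1073741824: where a wrapped 32-bit offset lands
```

If files just under 4 GiB copy fine and files just over it fail, the 32-bit-server theory is confirmed.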
Akira Review: Murugadoss Gets Everything Right In This Girl Power Film, Except The Girl Power BY Sanjana Chakraborty On September 3, 2016 An evening in Jodhpur. A busy street with students walking home, some waiting for the bus. Two boys inconspicuously eye a passing girl. She's pretty. Dressed in pink. Her eyes are burning with rage. Moments later, the boys are chasing her on a bike. One has a bottle in his hand. Fumes ooze out of it. And then the girl is screaming and clutching her melting face. The bystanders have better things to do than help. A pre-teen girl (Akira) is the only witness who cares. She vows revenge. With her father's encouragement, she learns karate. It takes months of training, but she succeeds. And lands herself in a correctional facility for her trouble. Sixteen years later, Akira's life seems to have returned to normal. Until Mumbai. While the boys splash acid on the girl, AR Murugadoss ensures that the screams are sufficiently blood-curdling. But, something is amiss. The slow song in the background isn't particularly melancholic. Just ill-fitting. The victim, once a girl with flawless skin and a warm smile, has scars and a melted eye to accompany the rest of her life. But this isn't her story. This is the story of the girl who took revenge, and how that naïve revenge is her fatal flaw. In Akira, the good times pass quickly. The bad times stagger and drag. And Akira (Sonakshi Sinha) cannot catch a break. Every time her situation improves, she runs into more plot convolutions, and more unfortunate coincidences. Beaten in nearly every scene, with people throwing daggers at her for no reason, Akira suffers. And, for some reason, chooses to suffer silently. People walk all over her.
Through all this, her naivety remains intact. It's like Murugadoss decided to do a GRR Martin: she suffers like Sansa and kicks butt like Arya, but lacks vision and intellect. Akira is being promoted as a woman-centric film where a woman can hand out and take a beating. In reality, the film panders to an old Indian cinema trick: the woman's "real" strength lies in her ability to turn into a sacrificial lamb; for the "greater good of the nation". Akira could easily have been that girl who stood up for herself. Instead, she thinks of herself as a battered martyr, nailed to the cross, sacrificing herself to protect the city from communal unrest. Mother India meets Jesus Christ and we're left with a very skewed version of women's empowerment. What really works for the film is that Anurag Kashyap plays a character straight out of an Anurag Kashyap film. He is vicious and calculating; the kind of man who would let others take the fall for his nefarious activities. Whenever he's on screen, he has a cigarette in his hand, a witty retort on his lips, and a perpetual lascivious grin on his face. You find yourself feeling thankful that, just this once, a villain in a Murugadoss film chose to hurt with words instead of actions. Kashyap as ACP Rane is that good. The smoking, though, is excessive. Almost as if the character is deliberately blowing smoke at the screen. Just to annoy the Censor Board for all the cuts they've demanded in all Kashyap's films. Konkona Sen Sharma as the good and righteous cop makes the most of a minor role. The only subtle character in the film, she holds her cards close. Does she know who the killers are? Does she care or does she have plans of her own? Until the end, we won't know what lies underneath that neutral expression and monotone. We also don't know why her character had to be heavily pregnant. A Kahaani-inspired move, perhaps, to show that nothing will stop her from finding the truth.
The months of training Sonakshi Sinha put into this role show in every little crouch and every clenched fist. The fight sequences are convincing, especially because her kicks and punches aren't exaggerated like the ones in Murugadoss' Holiday or Ghajini. Murugadoss, who missed the Rio memo about how great it is to fight like a girl, was proud of Sinha because she 'fought like a man' in the film. But, really, Sinha fights like a girl – and it looks great. With Akira, Murugadoss follows an old template: a noble character with an existential crisis desperately seeks revenge. Plus some superhuman strength, an evil villain who finds loopholes in every situation, and a good cop with an impeccable sense of timing. Also, the lead's sidekick stays unnoticed despite doing a lot in the name of friendship. Nevertheless, Akira is not boring. The film is fast-paced and easy on the intellect. There are no redundant songs and gratuitous romances. Best watched on a Sunday afternoon. The Akira review is a Silverscreen original article. It was not paid for or commissioned by anyone associated with the movie. Silverscreen.in and its writers do not have any commercial relationship with movies that are reviewed on the site.
# PicoCTF_2017: A Thing Called The Stack

Category: Reverse Engineering
Points: 60

Description: A friend was stacking dinner plates, and handed you this, saying something about a "stack". Can you find the difference between the value of esp at the end of the code, and the location of the saved return address? Assume a 32-bit system. Submit the answer as a hexadecimal number, with no extraneous 0s. For example, the decimal number 2015 would be submitted as 0x7df, not 0x000007df.

Hint: Where is the return address saved on the stack? Which commands actually affect the stack?

## Write-up

Stacks are a common thing in the computerverse, simply because they are a highly efficient way of storing data by appending. To solve this challenge, you need to visualize the stack after the assembly code has run:

ebp: [old ebp]
ebp-4: [ebp]
ebp-8: [edi]
ebp-12: [esi]
ebp-16: [ebx]
ebp-264: 0x1 <- esp
ebp-268: 0x2
ebp-272: 0x3
ebp-276: 0x4

It so happens that we only need to find the difference, which is 264, or 0x108 in hexadecimal.

Therefore, the flag is 0x108.
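The closing decimal-to-hex conversion can be sanity-checked in any interpreter; for example, in Python (not part of the original write-up):

```python
# 264 bytes between esp and the saved-return-address slot, in hex
print(hex(264))   # 0x108
```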
Shawn Harrison Column: Enjoy this success while it lasts hoop fans By Shawn Harrison sports editor Let's be honest. Who really thought the Aggie men's basketball team would be 7-0 and in the top 15 in the national rankings without Neemias Queta playing a minute? Sure, the schedule was friendly to start the season with five home games, but going up against a very athletic LSU team in Jamaica without Queta looked like some trouble. Then No. 15 Utah State finds itself down 19 points early in the second half last Friday. Anyone that is following sports at all knows what happened after that. The Aggies rallied and beat the Tigers by two to stay undefeated. Two days later with guards slip-sliding all over the moist court and injuring ankles, USU found itself down to North Texas. The Aggies only trailed by five this time, however, and finished the game with an 11-0 run to win by nine. Now the next big test looms large on Friday. USU travels to the Bay Area in California to take on Saint Mary's who was nationally ranked but fell out several weeks ago after a home loss to Winthrop. Since then the Gaels (6-1) have won five straight and seem to be building up steam. Having Queta back would be helpful, but he hasn't seen any actual game time since this summer when he hurt his knee playing for his home country of Portugal. While I'm sure he is anxious and wanting to get back out there, not sure if this is the game to make his return. One thing is for sure, I'm quickly being reminded not to bet against these Aggies. Not that I've ever actually gambled on the team I cover. The grit and never-give-up attitude is definitely there. They just keep battling, which is something I witnessed first hand last year when USU shocked everyone by winning the Mountain West. So, you would think I would have learned. It's still crazy to think the Aggies were down 19 points to LSU with just over 16 minutes to play and won.
That kind of fight and determination needs to be there when it comes NCAA Tournament time this spring. Another thought that has been running through my head since the LSU game is that this team is for real. The skeptic in me kept wondering if this team was ranked too high and sort of questioning all the hype. I'll chalk most of that up to having been around USU athletics for nearly three decades. But I will say that in all that time I've never seen the national attention and hype this early in a season. That has been interesting and, as Aggie head coach Craig Smith would say, quite frankly scares me a bit. When you are that high, there is only one direction to go, and it's not good. But I don't want to be a downer. Enjoy this to the max Aggie fans because it doesn't happen that often. While the thought of how Nevada crashed and burned last season at the end has come to mind, this USU team is not like that Wolf Pack squad. I don't see these Aggies getting full of themselves and they are well aware that it takes all of them playing together as a team to be successful. Plus, the character of these USU athletes is not even comparable. So as I prepare to spend part of my Thanksgiving in California, I will be thankful that this Aggie team is having a special season and expect the 2019-20 campaign to be one to remember. M-V-Bean The legend of Justin Bean just keeps growing. I wonder how much louder the chants will be when the Aggies return to the Spectrum to face Fresno State on Dec. 7? Bean is a real MVP now, having been crowned in Jamaica. He averaged a double-double there with 14 points and 12.5 rebounds, while also averaging 4.0 assists and 2.0 steals in the two games. He also blocked a shot. Bean is loved by the Aggie students and for good reason. So many relate to the walk-on who earned a scholarship. Plus, the guy just flat out hustles. I'm hoping he doesn't have to prove himself any more as a broken nose and two fractured teeth is enough for one season.
We all know how tough he is and that a little oral surgery after midnight is not going to keep him from practice or playing in a game. Wear those braces proudly Mr. Bean. Alphonso strong How about this new addition to the team? While I totally understand Bean being named the MVP in Jamaica, Alphonso Anderson had to be a close second. The junior college transfer came up big for the Aggies in both games, averaging 21.5 points, 5.5 rebounds and 1.0 steals a game off the bench. He led the Aggies in scoring in Jamaica, but even more importantly came up huge with the game on the line in both games. Anderson has great touch in the paint, can bang with the big guys and has a nice range being able to hit 3-pointers. An added bonus is his ability to knock down free throws as well — 31 of 36 on the season. Anderson has been a really nice addition to a talented team. Knives and butchers And finally, I've got to mention the team's Swiss army knife. Diogo Brito does a little bit of everything for the Aggies and that should be no surprise by now. However, a nasty drive and dunk in traffic against LSU was a nice addition to his repertoire. Perhaps he really is a Swiss army knife as Smith likes to call him. Recently the senior was asked about being compared to a Swiss army knife, if he understood it and perhaps if there was something more suitable from his native Portugal to refer to him as. His response was priceless. "I think he (Smith) just says that because I can do a little bit of everything, and I can also turn the ball over a lot as you can see," Brito said pointing to a stat sheet. "I'm glad I had one more assist than turnover. ... I don't think there is anything like that (a Swiss army knife in Portugal) to be honest, but my dad is a butcher and has a lot of knives." So, there is sort of a connection. Shawn Harrison is the sports editor at The Herald Journal.
He can be reached at sharrison@hjnews.com or 435-792-7233.
Copyright © 2017 by Brian Allen Carr All rights reserved. This is a work of fiction. Names, characters, places, and incidents either are the product of the author's imagination or are used fictitiously, and any resemblance to actual persons, living or dead, businesses, companies, events, or locales is entirely coincidental. Published by Soho Press, Inc. 853 Broadway New York, NY 10003 Library of Congress Cataloging-in-Publication Data Carr, Brian Allen, 1979– Sip / Brian Allen Carr. ISBN 978-1-61695-827-5 eISBN 978-1-61695-828-2 I. Title PS3603.A772 S57 2017 813'.6—dc23 LC 2016056387 Interior design by Janine Agro, Soho Press, Inc. Printed in the United States of America 10 9 8 7 6 5 4 3 2 1 Sip one They'd sip their shadows and the darkness stained them. Anyone who said they saw it coming told bad lies. There existed no concrete prophecy foretelling the malady, no rational explanation science could come to. How could it be, this new behavior? Drinking light's absence? Falling crude victim? The religious offered up bits of texts. From Acts and Joel and Revelations came the closest warning: "The sun will be turned to darkness and the moon to blood." "But the moon ain't blood," skeptics argued. "Not yet," believers said, looking up at the night sky gravely. And then from the Al-Furqan: "But they are going to know, when they see the punishment who is farthest astray . . . Have you seen the one who takes as his god his own desire . . . Have you not considered your Lord—how He extends the shadow, and if He willed, He could have made it stationary? Then We made the sun for it an indication. Then We hold it in hand for a brief grasp." "So it's a punishment from God?" "Only He knows why He does His doings." When doctors were asked to explain it, they'd invoke other anomalies from medical history—mysteries, freak occurrences that could never be explained: "Strasbourg, Alsace in 1518. A woman named Frau Troffea begins dancing, can't stop.
Dozens join in with her, within a month, hundreds. All of them dancing ferociously, endlessly. No one knows why, though some have blamed a kind of mass psychosis induced by stress, others suggesting ergot poisoning might have fueled the catastrophe. See, many of the dancers danced themselves to death, and it's even been said that the dancers danced beyond that. Moved on with some inaudible, internal music even postmortem. And no one is entirely certain why." "This ain't 1518, though." "And ain't nobody fucking dancing." Murk The sun was up, so the dark could start. All about the ground, all in the same direction, shadows sprawled. And this is what he was after. Murk crept from the mesquite trees into the full light of day. Hobbling, his clothes dirty and tattered—his left leg a wooden peg. He shooed gnats from his face as he advanced, humming a bit of tune. "A world with two suns," he sang softly, "and both are for me." It was as if his mother's breast milk had been ashes. He had thirsty-looking skin and hair thickly greased with sleep. He'd been growing it out, his hair, and wasn't used to the length of it. He constantly tucked the brown thatch behind his ears. Most his life, he'd kept it short, but he'd found an old Doors album while rummaging a capsized van, and he wanted to look like the guy on the cover. Around that time, he'd started making up songs. He found the sun and put his back to it. He knew he should wait a few hours, let the light get brighter, his shadow darker, more potent, but the call in him could not be placated—he lacked self-control. "I missed you," he said to his shadow on the ground. He waved. It waved back. He danced. It did too. "Lose weight?" he asked it. "Something different with your hair?" But, of course, there was no answer. "Either way," he said, "looking good." He dropped to his knees, lowered his face to his shade-made print, now a hunched clot of dark on the grass. 
"A world with two suns," he continued singing, "that is the dream." He was silent. Lust slithered across his face. He tucked his mane behind his ears, palmed his cheeks, and motes of dry skin swirled away. Then . . . Down he went like a starving man. His mouth bored open, he crashed against dirt, and he gulped at the dark, each swallow dimming the shade. Murk grunted and gnashed, pulling the shadow off the ground and into his mouth, down in his belly. When he'd gotten it all, or as much as he could gather, he rolled to his back laughing and let the magic work its charm. "A world with two suns," he bellowed, "that is the dream," his mouth as wide open as an opera singer's and his lips and teeth grayed with stain. His eyes drew black. His skin went pale. His veins showed through like sooty scribbles on pale parchment. In the distance he could hear the train. To Murk, it was the sound of heaven. The Train Mira crouched, watching for the train to race around again on its mile-long, circular track. She looked for the break between the caboose and the engine to catch glimpse of the buildings beyond. A step in front of her, the grass had been scorched away, covered with white rocks, but the smell of the scorching lingered, and Mira sniffed the perfume of it, her brown eyes sleepy in the smell. She messed her hair. She'd never thought much of it, but then Murk started growing his and one day she looked at him and couldn't help but ask, "Are you trying to look like me?" He got defensive, something about some singer. "You're trying to steal my fucking haircut," she told him. And Murk called Mira all kinds of dirty names and stomped off on his peg leg to wherever Murk went when Mira sent him stomping. But now, she thought, "Shit, he can have it." Just beyond the train, observation towers stood, and in them guards trained guns on the perimeter of rocks. Mira heard the man's voice through his bullhorn. "Closer and I'll fire." It was half past noon, and Mira was ambivalent. 
She'd been coming to the train for days now with the halfhearted idea of dying, but each time she'd come, nothing happened. This threat was the first she'd heard, and it made the consequence of her dying more real to her. That's the thing about suicidal thinking: it's kind of harmless until it isn't. A few days back, she'd stood motionless with a bouquet of citrus blossoms clutched to her chest, a kind of funeral service in her heart, but she'd only lingered for hours thinking she'd gone unnoticed. She'd even shown her shadow then, turning it off and on, hoping the strobe of it might gain some attention, but it didn't. The next time, she'd gone to a different edge of the town, thinking maybe her luck would change if she tried another observation tower. Each time the train sped up, but no shots were fired. She thought mildly of running for the train, throwing herself beneath its heavy steel wheels and letting the train cars chew her up to yuck, but she couldn't seem to get her legs to go through with it. It was puzzling. She'd been shot at before. When Murk had sent her to the train the first time. It's why she'd even come to think of this as a way out of the world. So what was different? Why weren't they firing now? She knelt toward the rocks, lifted one of the white pebbles casually. Her tanned knees flecked with scars, her palms rough from hard work and living. She dropped the rock, contemplated the white dust it left behind on her. She blew at it and most of the stuff disappeared, and what was left she licked away, spat out at the grass, and the chalky flavor of the task left a scowl on her face. "What now, Mira?" she asked herself, her words aimed at the train. "What happens next?" Guards In the observation tower, the guard shouldered his gun. He brought the sight of the weapon to his eye, set the crosshair on her forehead. The girl mouthed something but he couldn't tell what. He liked the look of her brick-colored lips, how they spoke the inaudible words. 
He pretended a voice for her, to match the look she had: a bauble that's shatterproof, a wild kind of preciousness. "Same girl as yesterday?" asked Drummond. "And the day before," said Bale. "And the day before that." He chewed at nothing, his perfect teeth click-clicking a toneless music. "How she know what tower you'll be at? I mean, we draw y'all's names from a hat even. Ain't no order to it at all." "Don't know," said Bale. "First time she came, she had flowers." "Flowers?" Drummond and Bale were brothers and both had the same pretty teeth. They had a large, domestic build, as though they'd been bred rather than born. "Bunch of white ones, but that ain't even the strangest part." "Shoot her. It's too screwy." "Wait," said Bale. "Wait and watch." He stretched his neck. Rolled his thick shoulders. Smiled a childish grin. They both wore white fatigues. They had both entered duty at the age of sixteen, as had most of the lower-ranked members of their outpost. Drummond, entered a year before Bale, was Bale's superior, but they'd both spent the last thirteen months working the train slowly across the countryside to this spot—the train operating across a length of track only slightly longer than itself, inching forward and then resting as the section of traversed track was disassembled and then reassembled in front of the engine to begin again the laggard cycle. When the captain decided, those straight rails were recycled, used in the building of the observation towers—one of which Drummond and Bale now stood in—new curved rails were produced from cargo cars and laid ahead of the train as it progressed into its permanent circular orbit. And there they were: perhaps a hundred miles from the safety of the dome, forging some in between life. "You should shoot," Drummond said. He picked up a radio and ordered the train to increase its speed for protection. The train always rode its circular track, a kind of moving wall around them, a millipede in pursuit of itself. 
"She might not be alone." "There," said Bale, who'd stayed watching the girl. Drummond turned the binoculars to her. "Alright?" said Drummond. "She's kinda pretty, right? Like a dark little fairy. Or like that story about that soup Indian. Remember that one? That guy lost in the wilderness. Pocahontas or some shit?" "I don't mean that," said Bale. "Look at the ground." "What the hell is that?" "Keep watching." They both stood still. The train's wheels screeched and chirped across the track. Bale peeked through the scope of his rifle, Drummond through his binoculars. "It's like pulsing," said Drummond. "Gotta be an illusion, right?" "Maybe," said Bale, "but you ever seen an illusion like it?" "Should've just shot the first time you saw her." "She was holding flowers, man. It's hard to kill a pretty thing holding flowers." "Well don't shoot now." He handed the binoculars to Bale. "We should at least see what the captain has to say on it." They had seen shadows on the white rocks before, cast from the folks they'd shot, people who had come toward the train with their arms held high. But they had never seen a shadow that could come and go as it pleased. Mira Meets Bale Mira lifted another white rock from the mess of them. She rolled it in her hand, contemplated its weight. It was a jagged little thing, and she clenched it in her fist. Back home, Mira's mother sat in her sickness. By now, she'd be curious why Mira hadn't returned. She'd be calling out raggedly for her, wallowing in the darkness, her sleepless mind addled and angry. Mira opened her hand. The rock cast a meager shadow against her palm. She shook her head, dropped it and it clicked against the other stones a few times before coming to rest. "No one could blame you," she said to herself. "Get up. Go forward. Get shot. Mom does whatever Mom does. Murk probably makes up a stupid song about you." She stood and moved toward the train, across the white rocks, crunching as she stepped. 
The glinting steel wheels turned endlessly, were polished by friction. The noise of the moving locomotive pulsed a rhythm, and beneath that pulse, a mild squeal of steel against steel. The closer Mira got, the more she could make out the man in the tower. He was cleaner than anyone she'd ever seen, his clothes newer. He had his rifle on her, and she closed her eyes, walked on toward the sound. Her heart raced. She figured each step to be her last. Any second a bullet would catch her. But, when the noise of the train was nearly on her, when she could taste the acrid rust off the cars, smell the tangy sun-warmed metal, she opened her eyes, gazed up at the watchtower. "You waiting for something?" Mira asked. "Waiting?" came the answer. Both nearly shouting over the noise of the train. "Earlier," Mira said. "You told me closer and I'll fire. I'm closer." "My orders changed." "Your orders changed? That was like two minutes ago." Bale checked his watch. "Closer to five." "Either way." She held out her arms. Scowled up at Bale. She wore denim shorts, a brown, sleeveless T-shirt, shoes that appeared homemade. "You'd be doing me a favor." Bale adjusted his rifle grip. "Wait, you wanna get shot?" "None a your damn business." Her eyes closed. "Here's the deal. Right now, I'm not shooting because I was told not to. Where's your flowers?" "Flowers?" Her eyes opened. "The other day," said Bale. "White ones." "I didn't . . ." "And that blinking shadow you had?" Mira's mouth agape. "Getting shot probably hurts like a motherfucker. You know that right?" Mira's confusion written in her expression. "Like the bullet going through you and all. I got stabbed with a pencil once. In my back. That was the worst. Thing stuck in there. I walked around for twenty minutes that way until I found someone to pull it out. Still got the scar, if you wanna see. Probably can't really make it out from down there though." There was probably thirty feet between them. "I want you to shoot." 
"Nope," said Bale. "Orders. What's your name?" She seemed to spit breath. "I ain't telling you, domer." "C'mon," he said. "Make you a deal. Tell me your name, and I'll think about shooting you." She snarled up at him. "Mira," she said. Bale blinked slow. "Thought about it. Mine's Bale. How'd you know where I'd be?" "Where you'd be?" "Yup. Past three days you've come to my tower. You got magic?" "Magic?" "Shit, I don't even know where I'm going till I get there." Mira put her hands to her face. "Figures," she said. She turned. She began to walk away. "Wait," hollered Bale. "Where you going?" But Mira didn't answer. She worked her way back across the white rocks and into the brush, went dejectedly from one clutch of trees to the next. The light of day was graying and now she had to fetch dark for her mom. She stomped off toward home. In a field of yellowed grass she saw two boys so she hid up in some briars. Her arms caught on the thorns of them, but she wanted to be careful. The boys were laughing at whatever, and she figured they had black eyes. The wind was steady. They spoke, but Mira couldn't make out their words. One of them cackled and the other belched up some response. He bloomed his form. Stretched out like putty. Shadow sippers could do it. They could stretch out like balloons, become light like paper, float like leaves. And that's what they both did, bounced along like astronauts on the moon. Once she was sure they couldn't see her, Mira ran into the field. She stopped. Eyed the sky. Between her and the sun, a black vulture passed, its wings extended and still. It happened fast. A fortune for certain. The shadow of the bird swung across her face, and she grabbed a mouthful of it, her cheeks bulged with the thickness of its shade. She had stolen the shadow from enough animals to understand the bird felt it. She'd never stolen from carrion crows. They usually flew too high, weren't between her and the sun, but she knew doves well. Rabbits. Squirrels. 
She didn't like to take from mice, because sometimes you'd take the whole thing, and she understood the horror that brought them. How they'd never sleep on their own again. The animals she knew well could tell her, in their way, in their language of mixed sounds and stares, that it felt like a whisper coming off their hearts. She felt guilty about it but had no choice. Mira ran full cheeked into her thicket, through the low-limbed trees and to the little house where her mother sat on the porch, her feet propped on an ancient, unplugged television set. "What took so long?" her mother asked, her face pale and eyes strained, her posture the shape of illness. Mira didn't answer. She lowered her face to her mother's. The two touched lips. Mira exhaled the pilfered shadow into her, and, almost instantly, her mother fell asleep. Drummond Captain Flamsteed wore wire-rimmed glasses, had a face like a bird and the crown of his head was shiny bald. "Off and on?" he said again. "You're sure? This is highly preposterous even for out here. It doesn't jive with the literature we've accumulated. I'm reluctant to assume you're correct." He stood behind a desk. Jostled papers. It seemed his mind was elsewhere. Drummond shifted. "I'm sure that's what my eyes saw, but I'm not sure I trust 'em." "I'll appreciate if you follow me now." Flamsteed led Drummond into the sun. He contemplated the ground. "Spread your legs," he said, and Drummond did. "Arms out at your side." Again, Drummond followed orders. Flamsteed investigated Drummond's shadow. "I'm not on it," said Drummond. "Under circumstances such as these, I'd suggest speaking when spoken to." "Yes, sir. It's just . . ." "The notion that you were reluctant to shoot the creature is positively absurd." "I suggested it to my guard." "Suggested? Doesn't that seem mild? You are his superior." "I told him it was protocol." "It is most certainly that." "There was debate about it." "Ridiculous. 
Why would the matter be given any consideration beyond ensuring true aim?" "We thought you should be informed. On account of the blinking." Flamsteed stomped. "Ridiculous," he said again. The only sound was the train's eternal circling—a pulse made by machinery, a bizarre preaching of friction. Chugga-chugga whee. It went. Chuck-chugga whee-chugga. Mira and Her Mother Mira's mother woke later, mouthy and alert, sort of annoyed. "What was that you gave me? The dreams spun circles." "Buzzard." She was sitting in a chair near her mother, peeling carrots with a knife, dragging the blade across the dirtied skin of them. "You like birds." "But buzzard?" said her mother. "Shit. Them things are rats with wings. Lucky I didn't dream of rotten dog ass." "So lucky," Mira said. She finished up, dried her knife with a rag. "I can't believe it." "The sun was going down. Was all I could find." "I'm just a chore to you, aren't I?" "Stop." Mira made to stand and go inside to the kitchen. "I wish I wasn't, but I'm a brutal chore." "You're my mother." "Exactly. And it should be me taking care of you. But it ain't worked out like that." "You do, kind of." "How?" Mira thought better of answering. She set her knife and her bowl of carrots on top of the broke-down TV. "Tell me what brings the best dreams?" She sat back down. Her mother understood. "You see," she said, "you can't even answer me." "What brings the best dreams and tomorrow I'll find it. Then maybe you'll feel better, and drop all this." Mira's mother's eyes flashed lethal. "Seagulls bring dreams of water. Squirrels, the limbs of trees. Eternities of them. Thickets and thickets. Rabbit shadows make me dream about holes and nooks and tunnels, but not in a bad way. It's warm. Before, when I could sleep my own sleep, I'd dream of home, which sounds like it would be good, but it wasn't. I'd dream your father's face, and it only broke my heart.
You have his face but my manner, so it gladdens me, and if I could dream your face, it'd comfort me, but I can't, and you can't understand." Mira just waited for the real answer to come. "Water," her mother finally said. "Water's a good thing to dream over. Multitudes of it. Vast expanses. Every direction. Endless. What about a gull? Is that asking too much?" "A gull?" said Mira. "I can try." "You sure?" "I suppose." "I don't want to be a burden." She touched Mira's smooth chin with her fingers, her skin the color of pecan wood. "I know." Mira said a little prayer in her mind that a gull would come her way from the coast, the prayer sort of dancing in her eyes. "What about tonight?" said her mother. "What about it?" Mira's mother took a book from beside her. An almanac. She set it in her lap and it fell open to a marked page. She read. "Moon's out, just waning." Mira laughed. "You want me to find a gull tonight?" Her mother tried laughing too. "No. No." "Good." "Not a gull." Her mother could sense Mira's sorrow at the request. She flashed her teeth as though pained, the ruinous color of them off-putting. "It can be just anything," she said. "Just try an hour. If you can't find anything, come home." "Really?" "I'm a burden, I know. The whole world's a mess. You would've loved to know me before. When you were inside me. Or even just before all this." "I did know you before." "You weren't but a child. And look at you now. Basically a lady. You never saw me with adult eyes. You saw me as a child sees its mother. With eyes that hadn't learned to judge." "I don't judge you." "Not meaning to," said her mother. "It can't be helped. I bet the errands I send you on make you mad enough to cuss." "I never cuss," said Mira. "Sure." "Never about that." "Fine." Mira wasn't ready to go back out so she made to change the subject. "I went to the train today." Her mom had fidgety fingers. "You crazy? Wanting to get killed? Way Murk tells it, they shoot our kind." 
"Not today," said Mira. "Even spoke to one of them domers." "Spoke to one?" "Named Bale." "Well?" "Well what?" "Don't act dumb. What'd he say? Why they out here even?" "It didn't get to all that." "What'd it get to?" Mira took the book from her mother's lap, set it off to the side, hoped it wouldn't be picked up again. "He had orders not to shoot me, and he wanted to know my name. That was about it." "You didn't ask him about fixing it?" "Fixing it?" "When they went in there, in them domes. That was supposed to be part of it. Figuring out how to get stolen shadows back. And not with toad licking and burying pubic hair at midnight and everything else I tried. Eating hand sanitizer. Moonshine eye drops. They were supposed to fix it with medicine." "It didn't come up." "How could it not've?" "I'd other things on my mind." "You gotta go back." "Now?" Her mom shook her head. "Course not." "Good." "But tomorrow." Mira was silent. "Promise." She grabbed up one of Mira's hands, held it between her palms as though praying with it. "And this time, ask." Mira pulled away. "I'm busy tomorrow." "Doing what?" "Looking for a gull." "Oh, forget that gull a bit. This is important. They know how to fix this, you won't have to be looking for any shadows for me no more no how." Mira considered it. What if it could be fixed? What if her mother could get her shadow back and sleep again on her own, dream her own dreams? Wouldn't that solve most everything? Would she even want to get shot at the train then? "Okay," she said. "I'll try." "Good," said her mom. "Be back in an hour?" "An hour?" "Yeah, don't stay gone more. And it can be anything," she said. "Anything but a cat. They have absurd dreams, leaves me puzzled. A bunch of touching the same thing over and over just to know that it's there. Looking around and wondering what to pee on." "Oh," said Mira. "For tonight." "Yes, for tonight," she said. "You'd understand if it was you. 
It's not fair for either of us, but it's less fair for me." "Okay," said Mira, and she walked barefoot away from their house and back out into the thicket of mesquite and into the moonlight, and, as soon as she was far enough away that her mother couldn't hear her, she made fists, and, "This is bullshit," she said. Bale Captain Flamsteed asked again, "Please walk me through the logic of your decision." Bale thought about it. Replayed it in his mind. "I thought Drummond told you. The shadow. Thought someone should know." "Someone did know," said Flamsteed. "You were well aware of every portion of the puzzle at hand. She came to the warning track. She was not one of ours. It's really pretty plain. In every training procedure you've undergone, the rules of your post have been broadcast to you loud and clear. Shoot to kill. Those are the orders." "In Bale's defense," Drummond said, "it was peculiar." "These shadow-sucking vagrants with their blackened eyes and hearts. These people with their horrific, damnable minds. Has some part of you become sorry for them?" "Not at all," said Bale. "It was just out of the ordinary." He rubbed a hand across his short brown hair. "Have you forgotten what they're capable of?" Bale had never seen it, but he'd heard it told. "No, sir," he said. "I could never forget." Night Hunting Mira couldn't figure why her mother cared where the sleep came from. They kept chickens, rabbits, and goats, but Mira's mother forbade that. "We take enough from them already," she'd said. They ate them and milked them and stole their eggs. But, when it came to shadows, Mira's mother drew the line. It hadn't always been that way. Mira was young when her mother had her shadow swallowed off her, and, back then, Mira couldn't hunt out on her own. Her mother would send her into the yard to get mouthfuls of chicken shade so she could rest on the porch, swallow that dream of feathers and noise. 
Mira brought her home the shadow of a wild bird once, she couldn't even remember what kind, and after that things began to change. Her mother would request certain shadows, and sometimes Mira could find them. Either way, her days and nights were spent on these ridiculous errands—the moon-made shadows less potent, but still. Only creatures with consciousness worked in this regard, though Mira's mother regretted it. "Can you imagine the sleep bluebonnets would lead you to? Dreams of meadows and sun, soft breezes. Or sunflowers? Head high in a warm prairie? The smell of grass and all that sky?" "Or cactus?" said Mira. "Cactus? No no. That could only bring thorny dreams." "Yup," said Mira. "Safe thorny dreams." Mira thought back to the men who'd come around when her mother was still well. Pale-bodied. Black sunken eyes. Their sick, twisted veins just pulsing in plain view. Shadow Addicts The main memory of it was this: Mira hid beneath the bed. Wild knocks came at the door, and Mira's mother dragged her under the mattress, suspended above them on its frame. Mira sort of tinkered with the box springs. When the door kicked open, she began sobbing. "We hear you," one of the thieves called. "Best make it easy." They laughed odd-shaped laughter that brought chills to Mira's skin. Her mother's eyes were red, her face trembling. Hands reached for her beneath the bed. She kicked wildly. "No," she screamed, "go away," she cried, "leave me be." They grabbed hold of Mira's mother, dragged her from beneath the bed, and she wailed and shrieked, and they beat her legs until she let go of the bed, and pulled her into the yard and Mira ran to the window to see. Two men held Mira's mother by the wrists, her face toward the sun. "Relax," said one of the men who then dropped to his knees. He lowered his face to her shadow and began hogging it up. "No," screamed Mira's mother. "Don't take it all. I feel it," she said. The man on his knees drinking. "Leave me some. 
Don't make me like that." When he was done, they dropped her. "Dammit you were supposed to share," said one of the men who'd held her. "I'm no good at sharing." And they walked away, the non-sippers grumbling. Mira's mother searched around. "Mira," she cried out. "Mira." Once the men were gone, Mira ran outside to her. "There only needs to be a little bit," her mother said. "If they left a bit, I'll be okay. I don't need the whole thing. I'm not like that. It doesn't have to be perfect. It doesn't have to be whole." They investigated the ground, but there was nothing but light. Wherever she stood, it was as though she wasn't there. Her shadow was gone. It would stay gone forever. "It's not fair," Mira's mother said. "It's not fair what they've done to me." Night Hunting Too But that is far removed from the night when Mira's mother sent her out into the scant moonlight to collect a shadow so she might sleep. Far removed, though it is the cause. Mira is in the fields. In the midst of swarms of dark. The panic and the dim light. The crescent moon above her, grinning. The stars twinkling, many of them suns with their own planets close by. Mira flirts with this notion. On every planet is it the same? On every world are girls like her hunting shadows for their mothers? Mira sees a rabbit in the field. It is frozen, its ears aloft like antennae, searching the sounds Mira's steps make. It turns to her, and even from that distance, because she has spent so much of her life with rabbit shade in her mouth, has swallowed some of it into her soul, she can tell the animal: I need your help. She doesn't do it with language. She doesn't do it with signs. She doesn't know exactly how, but it happens. "What kind of help?" the rabbit asks. Mira explains. She needs shadow. For her mother, she needs it. The rabbit looks up at the moon. "Will it hurt?" "Nah," says Mira, "not really." "Why should I trust you?" "You ever spoken with someone like me? Has this ever happened?" 
"There are plenty of things that happen for the first time," the rabbit says. "There are things that only happen once. Just because it's rare doesn't mean it's special." "Then ask yourself. Does it seem like I'm lying?" The rabbit's nose twitches, its whiskers go side to side. "No," it says. "It doesn't." "Then you'll help?" "What do we do?" "First," says Mira, "I need you near me. I can come to you, or you can come here. Whichever you prefer." "I'll come to you," says the rabbit, and, slowly, it makes its way. "It won't hurt but you'll never be the same. I'll just take a bit, and I'll never ask you again, but if I accidentally do just say, 'I've helped you before,' and I'll say, 'much obliged,' and you can just go on." Mira keeps still in the night, waiting for the reply to come. The smooth skin of her shoulders aglow with moonlight. The music of night birds warbling all around. "Okay," the rabbit says, and Mira lowers her head. She takes a mouthful of shadow from the ground in her cheeks. She holds it. Then the rabbit's on its way. Mira walked through the field back toward home. But as she moved, she heard a sound. A humming. A tune. A song. "A world with two suns," she heard, "that is the dream." Mira walked along. "What's this?" Murk asked, hobbling up. "Mira, you out turning tricks?" Mira showed him her middle finger. "Ah, what's wrong?" Murk tapped her shoulder. "So quiet," he said, then realizing, "Ah. Mouth full of dark? Swallow it. We'll go get in trouble." Mira pushed him. "I like 'em rough. What's it you got?" Mira held two fingers up behind her head. "Rabbit?" said Murk. "For your mom?" Mira could only nod. "Tunnels and burrows and darkness and warmth." Mira shrugged. "Was that you at the train today? They sped it up." Mira moved along, and Murk followed in his limping way. "Then thanks." Mira rolled her eyes. "You should go every day," Murk said. He pointed at her full cheeks, began to sing. "A world with two suns, and both are for me." 
Mira spit out the shadow and it evaporated, disappeared. "I've told you that song is bullshit." "You're talking now?" "Don't think I want to, but I got a question." "I've got loads of answer." Murk stretched in the moonlight. "Loads of shit more like. Hairstyle thief." Murk kicked dirt at Mira with his peg. "It's from the damn album cover," he said. "I already told you." "Then that guy's a hairstyle thief." Murk tucked his hair behind his ears. "He died like a hundred and fifty years ago." "He could see the future." "And it's not even done yet. It's gonna be longer. It'll look different when it's done." "Fine." "It's gonna be like down to here," Murk motioned a bit below his shoulders. "Ain't a thing about you I wanna be like." "Whatever," said Mira. "Look, here's the question." "I don't know I feel like sharing my answer with you anymore." "Then just share your bullshit. Tell me why you think they're out here." Murk fingered his hair a bit, "Who?" "The domers." "How the hell should I know?" "Mom thinks they might can fix it. She's heard of people getting well. With leeches and enemas. Crystals or candles. Maybe they've got something like that but better. Something that works for real." "Then let her think it." "She wants me to ask." Murk's black eyes widened, glistened. "How?" "I'm guessing politely." "They'll shoot you if you get close enough." "Didn't today." Murk puzzled a moment. "They didn't shoot today." "Or yesterday. Or the day before that." "You've gone all those times?" "Yep." "Why?" "Reasons." "I wouldn't let 'em fix me if they could." "You can fix yourself," Mira said. "Keep your shadow on the outside and you'll be just fine." "I'm fine." "Cause of that peg." "A blessing and a curse. What you gonna do now?" "Maybe ask. I don't know." "Nah," said Murk. "You lost your momma's rabbit shadow." "Shit," Mira said. After Mira left Murk, she strolled some, looking for whatever she might chance into for a shadow for her mom, fumbling Bale's image in her mind. 
He hadn't shot her, and he'd been nice enough. She figured, if she asked him, he'd tell her. But she considered then what he'd said. About the names from a hat. Whether or not she had magic. Could she find him again? She trekked around considering her fate for what she figured was an hour then made her way home. When she got there, Mira lowered herself to her mother's face, her mother quivering with anticipation. "I stayed gone an hour," Mira said. "Couldn't find a thing." Her mother cussed a bit, tossed in her chair, was devastated. "It ain't fair," she said, but Mira tried to ignore it.

Captain

Captain Flamsteed barked on at Bale, their faces so close the two traded breath. "In so many ways your ridiculous indecision has allocated a remarkable risk on the function of our enterprise. We are currently more susceptible to incursions because of your curiosity and ineptitude. It's an odd shadow, son, so be it. Question not the motivations and dictations of your superiors in this regard. We have established protocols with remarkable foresight by looking to past infractions enacted against our intentions." Flamsteed's deep breath, "So, in order to make certain that my dispensation of regulation isn't unmitigated meandering falling on deaf ears, I must demand again that you clarify your intentions should this blinking-shadowed beast show herself again at the outskirt of our camp's circumference." "I'll shoot," Bale said. "It is imperative that you do just that. Otherwise, I'll have no choice other than to arrive at the conclusion that you are indeed a sympathizer," he turned his back to Bale, "and I'll be forced to stop the train." Flamsteed circled Bale, walked to Drummond. "Please disclose what transpires if and when the train is stopped." "Exiled," Drummond said. "Thrown the fuck out." "Understood?" "Understood," said Bale.

Mira Returns

She was back and Bale had her in his sights. The train chugged around on its track. Bale figured he'd give one warning.
He raised his bullhorn. "My orders are to shoot." Mira stopped. It was Bale's voice. She recognized it. And Mira considered his orders. When she'd woken that morning to come to the train, she'd assumed that no matter what, she'd never have to deal with her mother again. If they had a cure, she'd be free. And if she got herself shot . . . Bale put the crosshairs on her heart. He cleared his throat. He took a breath. "Sweet shit," he said to himself. But she asked for it. He had never killed a girl. He had shot a few men. The first two were from close range. Two quick reports from his rifle. One shot had found its victim's face, had entered the head of the man just below the eye—a hole the size of a fingertip—had exited the back of his skull—the hole the size of Bale's fist, a chunk of hair and cranium popped off as debris. The other man got caught in the belly, and he laid up in the grass for a good time bleeding out as the captain asked him questions that he never got good answers to. "Who sent you? Why have you come here?" And the dying man just moaning nonsense, his midnight-colored eyes glaring, his black blood staining his shirt and the grass blades and the earth. The two men had come up when the train was stalled, while they were off-loading the curved track the train now moved along. It might have even been accidental, their coming. They just strayed up to the train, to their deaths. Bale hated the look of their lifeless bodies slunk down, maybe hated the sight of their falling even more. He didn't want to see Mira fall like that. Bale aimed the rifle where he had to. Then Bale pulled the trigger.

The Marvelous Murk

Just beyond the train-circled outposts stood Murk—his hair messed, his eyes dark as tar. He buttoned the top snap of his leather jacket, he tightened the rope round his waist. "A world with two suns," he said. The train had picked up speed, so now was the time. He raised his crossbow to his shoulder. "May your aim be sweet and true," he said.
He pulled the trigger. The arrow launched, dragging rope as it flew. It struck the train. Murk reached down and undid a buckle and his wooden leg slipped free and fell to the ground. The rope grew taut. Murk ballooned his body, so it expanded like a kite, and then the rope snagged his waist, and he was pitched from the ground in a heave and pulled skyward with vigor, the air crisp on his face as he went aloft. Up and away he went, into the clouds, his black eyes shining as though an orchestra played just for him. Into the white clouds, thick the way cake is thick, chill as cold peaches. When the length of rope went slack, he cut it and spread himself some more, sailed along. "That is the dream," he sang. Near him, buzzards circled. He stuck his tongue out at them, said, "Shit eaters." But really, he owed them so much. It was those birds he had seen so ridiculously ambulating on the ground but so spiritual in the sky, their black bodies smart against the blue of it. He had watched them, how they hovered, rarely beating their wings, catching currents of wind and rising and falling and twirling and twirling endlessly, and Murk had to try it. Before the train, he'd had to rely on Mira. "Faster damn it," he'd scream at her as she pulled the rope to get him going. "Says the one legger." "Only on the ground," he'd say. "Up here, I'm immortal." Sometimes it'd take hours of her pulling to get him flying, but since the train arrived, it had been easy. It was by accident he'd learned their trick. He had neared the train out of curiosity, and he saw how it gathered speed when people approached, and then they started firing bullets. "Tomorrow," he told Mira the first time, "you should go and see the train. It's beautiful." He didn't mention the bullet stuff. "You should've seen how high I got," Murk told Mira once he'd finally soared back to earth, found his leg and visited her. "I nearly got killed." "Nearly counts for nothing though." "Well, I'm not doing it again." 
But, of course, she did. Twice more after much persistence on his part. These past few times, though, were not because of his coaxing. Why she was going mattered little to Murk. As long as he was able to soar, he was happy. Mira made it possible for him, and because of that, he appreciated her. "Swallow your shadow and come fly with me," he said to her once. "I don't have one to swallow." "Yes you do," he said. "I don't know where you keep it, or how you keep it hidden, but you have one. I can smell it." "A world with no sun," Mira sang at him, stealing his tune, making it her own. "That is the dream." But now it is Murk, aloft. Coasting. His ballooned self catching the currents and tossing him as a feather might be tossed. Sailing softly as a leaf plucked from the topmost branch of a tree. Below, there is a gunshot. Below, another shot. Murk hears them much later than the shots are fired, as sound travels slowly and he is far away, but, when he hears them, he says a small prayer for Mira, who he imagines is the target, and who might already be dead. He isn't worried enough to land and find out. But when he finally does come down, he goes to her house. Murk would never admit it to her, but if Mira died, it would bother him.

Bale the Sympathizer

Flamsteed asked again, "Please explain to me the nature of what transpired. I'm confused as to what you are suggesting. Simply stated: you missed?" "Sort of," said Bale. He slouched now, his strong body casual, his hands in his pockets, looking away from Flamsteed. The captain was frenetic, his voice like electricity. "I feel like I am missing some vital segment of the scenario at hand." Bale pulled his hands from his pockets, laid them opened and smiled. "You told me to shoot," he told the captain. "So, I shot." Bale figured he was going to be punished, but something in his demeanor suggested he didn't give a fuck.

Murk the Disbeliever

Murk seemed confused. "So? He missed. It's good." "Sort of," said Mira.
She was staring at the spot on the ground where her shadow should be. Murk was leaned against the wall of Mira's home. He had pulled off his peg leg and was scratching his back with it. "I don't understand," said Murk, his face showing relief, his hands working the peg fiercely. "Why do you seem bothered?" "Because," said Mira, "I think he only missed on purpose."

Exiled

Every citizen of the outpost was called to the northernmost edge by a single bugle player who blew some sad tune awkwardly so that it slipped from the navel of his brass instrument in handicapped fashion. The sun was near setting. The captain shot a flare into the air. It burned orange and dragged across the sky toward the east where dusk sat grayly culminating, the fire of the flare fizzling into smoke that nearly matched its backdrop, and the train slowed. "I address you now in order that I might impress a profound punishment upon one of your peers," said Flamsteed. Bale stood naked with his hands covering his cock and balls. "Here is a man who holds sympathy for those beyond the train." There was a deepened sense of ceremony in Flamsteed's voice, almost ministerial, and the other outpost dwellers sighed out palpably. Amassed there in their white fatigues, they hung their heads. The women watched Bale's nude body; most of the men stared at the dirt. "This man cannot continue to live on amongst us." The train's caboose was pulling into view, and just behind it, the only exit from the town, the gap between it and the engine—a sort of circling void. Drummond seized Bale's arm and walked him forward. A surge of murmurs from the bystanders clucked up as they moved along. When they came to the track, Drummond whispered, "I tossed a rifle from the northwest tower over the train. If you run, you can get it before they start shooting." "Run? On this?" Bale balked at the stone-scattered ground in front of his feet. Drummond turned back toward the captain. "Yeah, I'll miss you too," Bale said.
Again the bugle player blew his pathetic tune. Bale stepped lightly onto the mean little rocks on the other side of the track, and his legs tried to lighten themselves and he quivered in his moving. The train began to huff and chug. Bale set off. Stumbling and tarrying forward, striding as quickly as he could. Huff and chug, huff and chug. With each step, his body howled. Stabbing his bare feet into the murderous stones. He saw the rifle, and a shot was fired. He kicked harder into the miserable sprint and another shot kicked up dust near the gun. Bale closed his eyes, dived for the rifle, rolling naked in the rocks, his whole front scraped pink in the fall. He used the gun as a crutch to stand and another shot whizzed by him. All the graying day seemed to swell up with panic. The train was running full. Chugga-chugga. Whee-chugga. Bale made toward the trees, zigzagging as best he could. Another shot that nearly got him. He smelled the heat of it clip past his face. He neared the edge of the white rocks. Another shot. The up-ahead grass spat blades. Another shot. The dirt coughed dust. Another shot. A tree belched splinters. Another shot. But nothing. Bale was into the thicket of trees breathing heavily, somehow safe.

two

The first shadow addict was a young boy from McAlester, Oklahoma, a nowhere city toward the eastern edge of that state, just a few hours' drive from the Ozark Mountains. His parents had taken him into the hospital for displaying symptoms somewhat like rabies—he tried to bite his father, he chased after passing cars. They'd been out to Robbers Cave, a state park nearby, and assumed maybe he'd been bit by a bat or possum when unattended. Doctors couldn't quite figure it out, especially the blackened eyes, but ultimately they restrained and sedated him, and his symptoms abated. They interviewed the boy, once he was cognizant, but couldn't totally ascertain what might have come over him. "I was just lying in the grass," the boy had said.
When he seemed up for moving about, they took him on a walk to get a feel for his strength, to see if he could be released—the doctors deceiving themselves into believing it was some sort of allergy, some sort of reaction to a thing he ate or drank. But, when he was again in the sunlight, once they'd taken him outside, the boy dropped to his shadow and gorged it up into himself and went dark again. News of the condition traveled fast. The boy was underage and the media couldn't use his real name. They took to calling him Tom Joad, because, at the beginning of The Grapes of Wrath, Joad is released from a prison in McAlester. It was one of those sick accidents the world offers unknowingly. "I'll be all aroun' in the dark," Tom Joad says at the end of the novel. "I'll be everywhere—wherever you look." And shadows were everywhere, so nothing could be more true. Curiosity caught hold of humanity. "You can't possibly drink a shadow." Then, of course, they'd have to try. And the woe of the world was real. Tom Joad died from experimental procedures in 2029. They enucleated him—took out his eyes—thinking without them some part of the bad magic would come undone. If he couldn't see the sun or moon—or anything at all—wouldn't that help remedy his condition? But it didn't. Still he was ravenous for his shadow. He fought for it, threw himself against windows and struggled against whatever restraint they had him under—his sockets filled with gauze. Even after they lobotomized him, he struggled on. Ultimately, some arsenic-based medicine did him in, the doctors going back to ancient stores once newfangled treatments failed them. By 2030, the world was turning to shit. But a decade danced on, and the sipping spread before the domers took to the domes. Nothing they did helped any. Little wars had broken out and they retreated. It started like this: The shadow addicts attacked lightbulbs, felled traffic signals and lampposts, neon elements, and beacons. 
Decimated public arenas and stadiums. Subtracted any source of light that would cheapen the moon's brilliance. The aggressions quickly decimated populations. Across the whole world, cities were thinned. There was no true governing body; just a kind of mob mentality ensued. Little warlords rose up, were followed by weaker miscreants. Vandals feeding their addictions. No one knows who figured it first, but the shadow addicts realized other people's shadows worked just as well as their own. And, as their tolerance grew, it seemed necessary to pilfer shade to reach the same highs. They needed more shadow to get fucked up. Gangs of shadow addicts chased down children on playgrounds, rounded up old ladies from retirement homes, and we've seen what happens to those whose shadows are gone.

Mira and Murk

But you have to know how Mira met Murk. Years ago, she'd found him mutilated, his leg cut away, wailing as though death sat on his chest. Clumped there in thickets of huizaches—brutally thorned midget trees that sprang from the earth like bouquets of bones—he crooned anguish, gabbled desperate nonsense. Mira'd been out hunting and this was early in that endeavor for her, most likely less than a year after her mother started demanding wild shade. Mira was maybe six then. Maybe seven. She couldn't recall for certain. Murk was around her same age. Besides nearly dead, he was shadow-wasted, his eyes pitch, skin pale, disease of lust all about him. It was the first time he'd been that way. He drank his shadow just before it could be stolen. It wasn't uncommon. It was better to be a shadow addict than become what Mira's mother was—a beggared version of yourself, dependent on others. He'd been chased by a group of strange men, had sought refuge amongst the thorny trees, but, upon seeing that their pursuit would not be abated, he lowered himself to the earth and consumed the darkness that fell from his form. So, they hacked off his leg.
They dragged him from the huizaches as much as he'd allow—his small hands doing their best to cling to the branches he hid amongst—and when Mira found him, Murk's hands were pierced clear through by thorns, sticky with blood. She'd seen plenty of lack-limbed men and women, ambulating sickly through far-off fields as she hid herself behind the shrunken trees of the landscape. Stories had been told. She'd heard, though had never seen to prove it, that some device existed which, through science or magic, extended the life of these limbs, and that they dangled, removed from their owners but not dead and not rotting, in the sunshine to produce human shade to be consumed for a fee. There must have been a hellish quality to it—some condition of the machine that kept the limbs animate—which granted the dismembered elements the ability to regenerate darkness. Mira had taken from enough animals to know that, once sipped away, the darkness was gone forever. She had come across the same rabbit twice. She'd seen how in the sunlight their shadows were less dark than her own, paler because they'd been skimmed from. But Mira also understood that nothing about the world was absolute. When she found him, Murk was dying and deranged, the hackled meat of his absent leg perched upon by dozens of mad-buzzing flies. He tried biting Mira as she plucked his hands from the thorns. She'd never be able to entirely understand why she helped, why she took him home and treated him. Poured salt water on his hands. Lit a fire and cauterized his leg—her mother telling her how but not helping—wrapped his wounds with herbs and gauze. Let him lay near death on the floor of her home, and she wasn't entirely certain he remembered all the good she'd done for him. It rarely came up in conversation, and he had never truly thanked her. When she was sure he'd live, she'd dragged him back to the thorns where she'd found him. 
For all she knew, he thought it was a bad dream, a sort of festering nightmare that he'd woken butchered and flubbed by. But they'd remained in some way linked. They seemed drawn to each other in the wilderness. He had a way of finding her, of earning her aid. She had flown him like a kite. She had dodged bullets for him that first time he sent her to the train. She had no idea why. He would ask some preposterous thing, and she'd oblige him. She'd listen to his rants. She'd tolerate his abuses. But she never told him the secrets of her own. Where she put her shadow, how she hid it. How she could sleep on her own even without one. And that stupid song he sang . . . "It doesn't work," said Mira. "What?" Murk would be singing, his black eyes transfixed as though performing for an imaginary crowd. His hips wagging in a sort of one-legged sex way. "Your song?" "Why not?" He'd go still. "Two suns would make two weaker shadows. That's all." "So . . ." "So it'd be the same as having one single shadow," Mira said, her brown eyes riled as though she were saying some impossibly simple thing. "If anything, sing for a brighter sun." Murk mimed a strangling gesture. "Sometimes," he said, "I just can't fucking stand you." Mira thought the same thing. The evening the train was stopped and Bale cast out, Mira and Murk were together. They'd gone back to the train to contemplate it in tandem. There was no certain objective to this mission. They moved amongst the thorny trees, their clothes catching against the snagging limbs, watching the train's circling with squinted eyes. Cicadas grumbled their quasi-mechanical static. A fluorescent hiss of a mating song. The lusty scent of mesquite sap dribbling. When the train's brake was thrown, and the shrieking halt ensued, the two waited curiously. "Ever seen 'em do that?" asked Mira. "Nope." They waited, watching. Quickly, Murk got bored. To pass the time, he snapped his fingers. If you did it right, it'd make the male cicadas come close. 
He liked pulling off their wings. There was an emptied-out cicada chrysalis on the tree trunk Mira leaned on. She plucked it off—its sticky legs and broke-open bulbous eyes. As Murk contemplated his snapping, glared around for coming bugs, Mira perched the thing in his locks, said, "There's something in your hair." Murk reached absently with his non-snapping hand, geeked out a bit when his fingers touched the discarded exoskeleton, said, "Meh," and pawed at the thing like a spaz, stepping away until the crunchy remains were in his fingers. "Hilarious," he said to Mira, and she was chuckling, and then they heard the bugle. "Is that music?" Mira asked. The train had stopped in such a position that Mira and Murk could see those gathered beyond. "Kind of," said Murk. "Better than that song you're always singing." Murk poked Mira's ribs. She was close enough to see her reflection in his blackened-out eyes. In the distance, someone stepped across the tracks. Murk tucked his messed hair behind his ears. "That boy's naked," he said. "And he's running." About that time, they started firing shots. The white rocks kicked dust. The bang-bangs of more gun blasts rang out and the naked guy picked up speed. The runner rolled, rose, and turned toward the trees.

Bale in Exile

Bale sat naked, his back to a mesquite trunk, huffing breath. The limbs of the thing draped down toward the ground, bends of them rested on the dirt. It was as though the tree was a bark-covered hand, roosted on its fingertips. The thumb was the trunk, the limbs the other fingers. Above, the canopy he took cover under, somehow the palm of the thing. He'd never been beneath a tree. He lounged in awe of it. He heard a few more shots. He inspected the rifle, ejected the magazine, counted the rounds. He'd half expected Drummond to leave a single bullet, a way out if he chose it, but his big brother had loaded up. Bale had fifteen shots.
His feet ached, he had scrapes down his front, and he had to find food, water, and shelter. The wasteland of his life to come was, at that time, unimaginable. It'd be like trying to consider where you stand in relation to the universe while a house you're trapped in is on fire. His balls dipped in the dirt. He could feel grass blades in his ass crack.

Mira, Murk, and Bale

From where she was, Mira could see it was Bale. She and Murk were hidden from his view. They had split up, she and Murk. They lurked behind trees, fully camouflaged. "Why'd you leave the train?" she asked. Bale aimed toward her. "Mira?" "I came out to ask you something. Yesterday. You shot. You missed. That why?" "I missed on purpose," said Bale. "And, yeah, I guess partly." Mira paused to think it through. She picked an acorn from the ground. She was beneath a live oak, and acorns were scattered everywhere, dropping from the branches every minute or so. Bale had his gun. Murk camped where he was. Mira was thinking. If you've never seen an acorn, it has two main parts—the cupule and the nut. The cupule's like a little hat, designed to fall away from the hard shell that covers the fruit, and Mira picked this off, set it on the tip of her pinky, wagged the finger like a little performer with a cap on. She contemplated the naked nut then, rolled it in the palm of her other hand. She took the cupule from her pinky, seated it back on the nut. She took it off. Set it back again. "Murk," she hollered, but Murk didn't answer. "Murk," she hollered again. "What?" "Throw him your jacket." Bale felt better in the jacket, but his feet still throbbed. He had Mira's canteen, and if he raised it too high, his naked parts showed. Mira turned away and Murk snickered a bit as the stream of water running from the canteen slowed, and Bale held his tongue out as the last beads of water dropped. "I know the feeling," said Murk, but Bale thinned his eyes, lowered the canteen, didn't understand him. "They made you leave?"
Mira asked. "Yeah." Bale tried to drink again from the drained canteen. The thing huffed emptily when he pulled it from his lips. He checked the ground around her, "Because . . ." but, in his investigating, he could not find Mira's shadow. "It's gone again?" Murk laughed, "She never shows it. Keeps it hid." "Back there she had one that could turn off and on. Only one like it I ever saw." "She what?" Even black-eyed bastards get their feelings hurt some, and if you were there you would have felt the air go weird. No one spoke. A kiskadee called its steady, trill peal. Acorns dropped. Ground skinks flittered through loose leaves and debris. Bale must have felt the mood drift a bad way. He threw Mira her canteen back. Bale's eyes moved from Mira to Murk, Murk to Mira. He considered their hair. "I got the same haircut as my brother too," he said. "All the guys in the dome do. No choice to it. Girls have different haircuts. I mean, they all have the same haircut, but it's different from the boys." Bale kind of touched his head as he talked and Murk closed his eyes. "I'm growing it out," Murk said. "It's gonna be to here," he said, rubbing his fingers just below his shoulders. He wore a gray T-shirt that was half eaten by moths, cotton so thin you could see his nipples straight through it. "He thinks you're my brother." "You're not?" said Bale. "God no," said Mira. "I'd have to kill myself." "Guys in the other train town had different hair," Murk said. "Long on top and combed over. Shaved on the sides. I had mine a bit like that when I was younger. Mira would cut it for me. She's got clippers. Remember?" "Unfortunately." "Other train town?" said Bale. Murk gained his bearings. "Yeah, there's a few. One east of here." "How far?" "Half day's walk, I guess." "Can you show me?" It was really just a nervous question Bale asked, but once it was said aloud, he felt he had to justify his asking it. "I know there's other domes. Didn't realize they were sending out trains too." 
"I'll think about it." It was quiet then. Bale's toes were bleeding.

Drummond Left Behind

Drummond breathed deep the calming outdoor air. He raised his binoculars, scoured the countryside, saw nothing of consequence. When he climbed down from the tower, Flamsteed was there. "You know we have to detain you. It's only for a short time," Flamsteed said. Drummond stared into the captain's eyes. Drummond and Flamsteed entered a car of the train, followed by three others. Most of the train was empty. The captain and a few other superiors kept quarters there. The engineers who operated the train lived aboard. And there were holding cells. They made their way down a corridor to one of these. The captain spun a wheel on the cell door's front and Drummond heard a lock release. The door was eased open, sang rustily, and Flamsteed said, "I'm going to have to search your person. For your own safety." The cell was dark, rust scented. "For my safety?" "Under periods of isolation suicidal ideation is common and if you have a weapon, or some semblance of weaponry, your thoughts might chance into dark terrain." Drummond raised his hands over his head in anticipation of a pat down. The captain frowned. "I'm afraid that won't be sufficient." Drummond was puzzled. "Your clothes," said the captain. "I'll need you to disrobe." Drummond stepped naked into the dark, dank room. He heard the captain clank the door closed behind him, heard the lock spin, heard it catch a metallic latch and the click of it echoed in the darkness. He slouched in the gloom, sagged to the ground. After a while, the train began to move. In the dark, Drummond's mind twisted visions from imagined colors. He watched with eyes open in the stark, shuttered-off world as figments of Bale knelt beneath a mesquite tree, toyed with his rifle. He checked the walls around him. They were steadfast. He jumped for the ceiling but couldn't touch it. There was no way out. He knew. He knew before he started.
The investigation was futile but he needed to assure his mind.

Ghastly

Bale fantasized his surroundings were some kind of make-believe. Foreign images. Newfangled smells. Nothing triggered memory. Moving on aching, bare feet behind Murk and Mira, he wondered if his new acquaintances would feel as overwhelmed in the dome. But he decided it wouldn't be the same. They'd feel confined. Why wasn't the opposite true for him? He didn't feel free, let out of a cage. He felt weighted down by the vastness of it all. "Where we going?" "To Mira's," Murk said. "We'll get you dressed there. In my father's old clothes. Need to rest a minute?" Bale's green eyes shocked in their sockets. "No," he said. "Just wish I had shoes." On they moved, through odors foreign to Bale. Sweet and damp. Clean and heavy. In the distance, the sun was halved by the horizon. The light of the world went orange, became laced with the dark outlines of tree limbs. Birds sang their clucking language. Insects chirped and jingled tunes. "We don't have far to go," said Mira. Then, "Just through those trees." "Should you bring anything back?" Murk asked. "For your mom?" "Probably. Keep an eye out." "There's a green jay, on that branch there." "I hate taking from small birds." It seemed Bale was listening to alien language. "Under the conditions," said Murk, "don't think you should be picky. We can't really take our half-naked friend here hunting the countryside." Mira knelt to dirt, watched the bird shadow dance with the mild breeze. She traced the tree-cast darkness on the ground, zeroed in on the bird's shape. "Sorry," she said toward it, then, carefully, she sipped up a bit, changing the darkness just before it disappeared, the bird, having felt the molestation, leaving its perch by batting its wings, taking to the sky. "What was that?" asked Bale, but Mira, whose cheeks were now slightly bulging, only shook her head and they moved forward. "What'd you do?" Bale asked again.
"Just watch," said Murk, and the three moved on into the mesquite forest that housed Mira's home. There, they followed Mira to the porch where her mother sat, pale. Her eyes oddly tired, her countenance jittery. Ghastly, you might've called her. "Evening," Murk said to her, and the woman replied a kind of nonsensical utterance. Mira lowered her face to her mother's. The two touched lips. The woman fell to sleep instantly, her tiny face aiming off at nothing as her head slumped toward her shoulder. "Shade trading," Mira said. "I guess that's what you'd call it. Her shadow was stolen. She can't sleep without it." Bale's pretty teeth in Bale's hung-open mouth. Mira did the best she could, but her father was smaller than Bale. She rummaged drawers, rooted through closets. In the end, Bale wore a dark denim shirt, pants of the same material and boots of brown leather with yellow nylon strings. A laborer's kind of uniform, it seemed to him. Sleeve cuffs at his forearms. Pant hems at his calves. "How's it feel?" Mira asked. "Like I'm wearing someone else's clothes." He seemed a goat in an outfit for house cats. "Snug but whatever." Mira had also found him sleeping things. Shorts too small for him. A ratty kind of V-neck that barely reached his hips. Bale hoisted these clothes. "I'll wear these others for now," and he made his way back to a bathroom he employed as his changing station. It was Mira's, the bathroom. Girl colors and pretty smells. Bale had to fight the urge to hunt around. He didn't know what he'd look for, exactly, but it seemed such a personal space that there had to be good secrets there. The sleeping clothes were maybe even snugger. When he went back to the heart of the house, Murk was seated in a shabby recliner. "Hey, hey," Murk said, and motioned with his finger for Bale to do a twirl, but Bale didn't. "So we're not going tonight?" said Murk. He had taken off his leg and sort of cradled it across his chest. "Going?" said Bale. 
"See the other train, if you still want to." Bale had forgotten. "I do," he made some unintelligible gesture, "but, yeah, I'm tired anyhow." He didn't know where to sit. Nervousness kept his mouth running. "Just wanna know what they're like. Back home, we'd talk about the other domes. They were numbered. Well, numbered and lettered. I was in B3. Always said that A9 had the prettiest girls, C2 was supposed to be nothing but deaf folks. Don't know if any of that's true. Don't know where all that came from. I just kind of wanna see, y'know, what they wear, I guess." It was silent a minute and then Mira came in. "Hungry?" she asked Bale. "Starving." Mira moved toward the kitchen, spoke over her shoulder at him. "We got eggs. I can fry some up." "Eggs?" "Yeah. Chicken eggs. You like 'em?" "Never had 'em." "Wait," said Murk. "What do y'all eat back there?" "Rations. Until the crops come in. We've got stuff planted, but I don't know what. We've all got jobs and mine's not farming." "Rations?" "It's like," Bale said, touching his little shirt, his shorts. "I don't know what to compare it to. It's food." "What's it made of?" Mira asked. "Dunno." "Is it vegetables? Grain?" said Murk. Bale felt stupid. "Guess I never really asked." "I'll make eggs," said Mira. "And we got milk." "Milk? Like for babies?" "Yeah," said Murk. "Like for babies. You're gonna suck on Mira's titties." "Shut up, Murk," Mira said. Then, "From goats." "Goats make milk?" "The brown ones make chocolate milk," said Murk, laughing, his leg clutched tight in his hands. His eyes sloe—like deep holes in his head. "Shut the fuck up," said Mira. "C'mon." She led Bale to the table in the kitchen. To anyone familiar with one, the egg is a mundane thing, but, if foreign to you, the thing is a miracle. It exists in its own packaging—viscous when raw, solid when cooked. Bale watched Mira crack three into a bowl, and he wasn't sure if the things were alive before their brown shells were broken. 
He thought, perhaps, their casings were exoskeletons and what landed in the bowl were organs or innards. A terrible student while in the dome, he'd a vague recollection of what these things were, but, as he watched, he confused them in some way with caterpillars, thinking, as Mira cooked, that what she'd done with the egg was to extract an animal from its chrysalis, disturbing the cycle wherein the goop of the egg was to morph into the bird it would be. But he didn't dwell on these thoughts long. Mira dropped a pat of butter into the pan on the stove, and the smell of it caramelizing radiated through the kitchen, made Bale's stomach ache in a way it never had. Until then, everything he'd eaten was of some secretive manufacture, a kind of paste the color of tree bark that tasted consistently of nothing but nutrition—a food made from a formula rather than a recipe. Mira handed him a glass of goat's milk—a new mystery. He took a small sip from the glass, then gulped away. It was wholesomely perverted. It was shamefully gratifying. Bale listened then as Mira poured the whisked eggs into the pan, heard them hiss as they hit, and watched Mira work them with a wooden spoon briefly before leaving her spot at the stove—watched her lean form, her hips. She set a plate and fork in front of Bale. Smiled so soft at him Bale thought he might giggle. Her eyes seemed to know things. Some brief silences have a thousand years inside them. She stepped away, an energy retreating with her, came back a moment later with the pan of eggs in her hand. She scraped them onto the plate. "Eat up," Mira said. Then, noticing a kind of shock in Bale, said, "Something wrong?" Bale shook his head. "No. Just didn't think they'd be this color."

Murk and Bale Talk Shadows

That next morning, Mira's mother woke screaming. A discord ensued. Bale, who'd slept on the sofa, under a quilt made from slapdash-selected fabrics, was perplexed at the chaos of it.
Murk hopped alive, pegless out of the recliner, wearing only boxer shorts, rubbing his stump. Mira emerged from a back bedroom to tend to her mom. Things fumbled about. Human smells hovered. Murk turned on music, bobbed his head to it. "I love mornings," he said toward Bale. "The sun comes up." He shook his hair. In Bale's world, all things had been scheduled. People moved in straight lines to meet itineraries. This new disorder was bewildering. He thought a moment. He had nowhere to be. The speakers bleated tunes. "How do y'all have electricity?" Mira, who intercepted the question passing through, said, "Solar panels on the roof." She wore a cotton robe that stopped at her thighs. She touched the hem of the robe. "About like you yesterday in Murk's jacket." She went to the kitchen, banged around a bit. Pots and pans. Tea kettle. "I won't be able to leave her like this." "Just you and me then," Murk said. "Bring your gun. We'll shoot shit." Murk killed the music, began getting dressed. "Only got fifteen rounds." Bale gathered up Mira's father's clothes to change into. "We'll only shoot fifteen things." "Don't shoot anything you don't have to," Mira called from the kitchen. "If it were up to Murk you'd waste all your bullets on tree stumps and jackalopes." "What's a jackalope?" Bale asked. "A thing that ain't real," Mira said. When they set off, Bale asked, "How long will we be gone?" "Back by night," Murk told him. They headed out into the thicket, walked cautiously through the thorny limbs until they came to an open grassland that Murk hopped into—he seemed a scarecrow gifted with life. He found the sun and put his back to it, lowered himself to the thin shadow that had leached from him and gulped it up, began singing, "A world with two suns." He rolled to his back, looking pleased at the sky. But, once propped up on his elbows, he saw Bale kneeled, his rifle shouldered, the barrel of the thing aimed at Murk's heart. 
"Your eyes," said Bale, "they're black as fuck." "Thank God. It still works. I always worry. You know, you're real fucking big. I bet your shadow would fuck me up." A thin, puzzling laughter stretched from him. Murk's head disfigured a bit, quivering. Hallucinatory. "What now?" "We go to the train." "But your eyes. When they're like that. They tell us it makes you evil. Can I trust you? You gonna try to do stupid shit?" "I might do stupid shit, but you can trust me." Murk pulled up at his trouser leg. His wood peg showed. "There's not enough of me." He stood sloppily, his balance wobbled. "I mean, I could drink up other shadows and get darker, I guess. Like your shadow. But I'm not so lost as that. Look at my wrist." There was some kind of serial number tattooed there. On his other wrist as well. Murk raised his pant leg revealing his ankle. The number was there too. "Somewhere," said Murk, "there's another leg with the same marking. I've never looked for it, but maybe someday I'll find it. It's being gone makes me less dark, but because it's gone is the reason I get dark at all. If I had a whole one like you? All big and shit? If I drank it, I'd be a shitty mess, I guess. Folks with full shadows? They're peculiar and mean. Me? I'm just peculiar." "Find the other leg?" Murk rambled toward the train they were after. "Sure, it's why they're tattooed. My mom did it. So I could try to track 'em down if they were stole. I don't know what she was thinking there. She used to pray to some God who told her to do it. There's a place they get taken to, the legs. The arms. Shadow addicts like to cut on folks." "A place they get taken to?" "That's what I'm told." They marched on in the humid morning air, their shoes slick with dew and their shirts moist with sweat. Thorns littered the grass and the parched dirt lay dry-cracked and scraggy. "How old do you think I am?" Murk asked. "Don't know?" said Bale. "How old?" "It was an honest question. Your guess is as good as mine. 
You like flying?" "Flying? Never have." "I'm not an idiot," said Murk. "You're a domer. You may as well have spent your whole life in a coffin or beneath a canoe. I'm asking if you like the concept." "The concept?" "See that bird up there? Does it do anything for you?" Some black bird listed across the sky. "I'm not sure." "Then it doesn't. You'd know right off if it did." They jostled as they moved over the craggy land. "I've not seen all that many birds," said Bale. "What'd you do for fun?" "Fun?" "In the dome? To pass the time?" "Worked," Bale said, "or avoided working. Slept. Went to school. Watched the girls shower." "I like to fly," said Murk. "You have any rope?" Bale touched a shirt pocket. "I don't . . ." "Of course not," said Murk. "Why would you? But I have a few yards. Feel like running?" Bale stopped and Murk walked some distance before realizing. Once he did, he turned toward his slack-stepped acquaintance. "What the fuck are you talking about?" Bale asked. "Running," he said, and Murk mimed the practice. "I know what running is." "Then why'd you ask?" "Listen: flying, fun, rope, running? I'm not following you at all. The shadows make you silly?" "Yes," said Murk. "Very silly. But what does that have to do with it?" Bale stretched in his tight clothes, trying to make them bigger, fit better. Murk reached into his jacket and produced a length of rope. "I tie it round my waist. You run. Pull it." "What the fuck?" said Bale. "You've already asked that," said Murk. "And now I'm telling you," he said. "I'm telling you what the fuck." Then, slowly, "I TIE THIS," he motioned to the rope, "ROUND MY WAIST," he motioned to his waist, "YOU RUN," he mimed running. "PULL IT." He tugged toward the sky. "No reason we should both be stuck on the ground." "I don't understand." "Of course you don't." Murk's gray grin. "You're a domer," he said. "Might as well've spent your whole life in my pocket or up a jackalope's ass." He began to tie the rope around his waist. 
"A world with two suns," he said, "that is the dream."

Attack

Drummond's eyes opened. He'd been asleep. In that dim space, he'd nearly forgotten his fate. He breathed slowly, puzzling the darkness, recollecting events. He felt a kind of shame a moment. His nude body seemed gruesome to him. Rust dust clung to his nakedness. He lolled a bit and then he knew: the train had gained speed. He sat alert, listening intently for some sound of coming aggression. It came. A noise so loud Drummond's eyes saw light erupt—a white, hideous resonance thrummed the dark alive—and Drummond pitched from the surface of the floor, smacked the wall and dragged down it. His face gushed warm blood against his flailed-open palms. The smell of pain. Shock swept his heart. His pulse rattled. The train's wheels screeched and blatted. Outside, gunshots cracked and popped along. Murder music beating rhythmic. The darkness darkened. His thoughts constricted. Involuntarily, he went. Into the inner cavern of bewilderment. Faint seized him. Drummond surrendered to dream.

Flying

"Faster!" Murk—ballooned into his thinner form, organized, as best he could organize himself, like a kite. Beneath him, Bale ran: the rope in his right hand, Murk's peg leg in the other, the butt of the back-strapped rifle slapping his ass. "As fast as you can." "This is," screamed Bale, hopping grass clots and snag gaps, "as fast as I can." "It's gotta be faster." "Love to see you," Bale huffed breath, "try it." His bruised feet throbbed. "Impossible. I only have one leg." He spread his arms. "Exactly," said Bale. "And my feet are fucked." "Let out." "What?" "As much rope as you have." Bale loosened his grip, felt Murk catching the wind and felt the burn of the rope skating against the pad of his hand. "It hurts like hell," he said. "Slow down," said Murk. "It hurts." "Slow," yelled Murk, "down." Bale secured the rope to stop the burning and stumbled to a walk. He gasped, his shoulders heaving. "Success.
You're better than Mira." Kiting, kiting. Murk aloft. Bale walked backwards, marveling at Murk's flight. "Train's that way." Murk spit in the breeze. "About an hour more." Even from a mile off it looked blundered. The harsh smell of rust and smoke stained the air, and juries of buzzards circled in the distant sky. Some stench of putrescence found them and Murk came down scuffing the earth, dry heaving, his eyes bulging as he hacked. Some grand decay crossed the air like filthy cobwebs. Bitter lace. Rotten beach sand. Murk retched. A thread of sputum dangled from his bottom lip. He coughed and it quivered. He cuffed his hand through it and flung it away toward some grass. It caught on a long blade of it, smeared down like a slug. "You don't smell that?" "Beginning to. Must be thicker up there." Murk hid the mask of his face in the collar of his leather jacket. His voice came muffled. "Worst odor ever." "We should hear it moving by now." "They never stop it?" "Only when they're kicking us out." Bale helped Murk up, and Murk leaned on Bale as he reattached his peg. They walked on cautiously, the foul reek constricting as they advanced. Bale coughed a bit. Murk did the same. Once close enough, they saw odd graffiti on the cars of the train—black skulls with hashtag eyes. "Know what it means?" Bale asked. "Nope. Never seen it but can't be a good thing." They went quietly to the track's edge, slipped beneath two buckled-up cars, gnarled steel and shredded lumber. The battered outpost within—burnt buildings and toppled towers. In the wreckage, dead bodies strewn, carcasses of fetid flesh mulled upon by carrion crows that dawdled unfazed, their gory beaks thick with congealed blood. Flies tap-tapped against the bloated bodies, the constant buzz of their tiny wings. "Who did it?" said Bale. Murk used his jacket like a mask, lowered the thing to answer. "Probably Santa Claus." He hid his face again. "That like a gang?" Murk's eyes swelled. He let his jacket collar fall. 
"Serious? Y'all don't got Santa?" "Who?" "Comes down the chimney? Brings presents? For Christmas?" The buzzing. The stench. On occasion some buzzard alighted or disembarked. Claws on the rot or wings against the air. Their wicked-upped eyes like yellow jewels of insanity. Bale shook his head. "What about the Easter Bunny?" Bale was blank. "Tooth Fairy?" Blank. "St. Valentine's?" "I don't know." "You got any holidays at all?" "Holidays?" "Sweet God," said Murk. He sang, "Jingle Bells, Jingle Bells, Jingle all the way!" Then, "You ain't never heard that?" He dry heaved, nauseous from the death essence. Raised his jacket collar. "We should see if there's anything worth taking." "No, no, no," said Murk, his voice muffled. "Presents," he said. He dropped the collar. "Y'all got presents?" "Like gifts?" said Bale. "Yeah." "When do you give them? When are they exchanged?" Murk's face hid again. Bale held up a finger. "On birthdays." He held out another finger, "on Dome Day." "Dome Day?" Muffled. "Yeah. Celebrates the day we moved into the dome." Murk shook his head, put his face out near Bale's. "Why would you celebrate moving to a place with no sun and no Santa?" They climbed in and out of burnt-open cars, poked at the dead with sticks, looked under debris, came up empty. All about them buzzards dawdled with grim treasures of purloined flesh jiggling from their beaks. Their claws click-clacking. "Check that one's pockets," said Bale. There was a man perched in ash, half his face burnt back to skull, the other half shiny with decay, smeared with buzzard shit. They scrutinized the mess of it. "I can't touch him," said Murk. Sunlight twinkled off a bead of plasma that dribbled down the dead man's cheek. A glinting spec of mephitic wetness. The spot where his eyes should be, desiccated and gray. "He might be where all the smell is coming from," said Bale. The dead man seemed like an exhibit in a museum of nightmares. A cadaver designed to scar the emotions of spectators. 
But then Bale flung forward, skidded to the ground, his face at the dead man's half-rotten skull. Screamed, "The fuck. The fuck." Because some thief—ash across his brow and whisper-thin hair, clothes tattered and hackneyed—gulped up a swallow of Bale's shadow as they were hypnotized by the decomposition. Bale winced at the death he landed near, and the stranger popped up, sprinted toward the train, ducked beneath one of the cars and was out of view.

Drummond's Deeper Prison

Drummond woke and the train was still. He made to his feet before realizing the train car he was captured in had been toppled. He stood on the wall of the thing, stared at the floor. In whatever incursion that transpired, the sheet metal ceiling was gashed, and a column of light screamed into the otherwise darkness. It seemed unnaturally bright, and Drummond crept to eye the opening, to witness the goings-on of the attack's aftermath. He saw only wreckage and char, destruction and gore. Dead bodies lay strewn amongst piles of debris. Threads of smoke bled from mounds of ash. A cataleptic hush drenched the disorder. A gray apocalypse of whist. Drummond smelled gunpowder and death. His skin grew goose bumps at the mystery. Naked, in the dim light, he considered his fate, inspected his soreness. "Shit," he said to himself, his voice echoing in that metallic space. His imprisonment was supposed to be a week long. Now, he had no idea. He sat near the sunlight, let the warmth touch his skin.

The Thief

Murk hoisted Bale away from the dead man and they gave chase, ducking the same train car the thief had disappeared beneath, following him out into the field beyond, toward a brush sparsely leaved. Bale shouldered his rifle and thought of firing, but he wasn't certain he could aim for gagging. He figured he'd miss as the thief distanced. His mind filled with the rotting face. The sparkling decay of it. The dead man's skull like twinkling garbage.
Murk grabbed him by the elbow, dragged him on, and the two slowed as they entered a mess of thorny limbs, bobbing into the mayhem of shrubbery. "Where'd you go?" Murk called out. They heard tussling. Rounding an oak tree, they saw the thief cradling an old man who slept in his lap. The man raised a finger to his lips, shushing Bale and Murk. He whispered, "He ain't slept in days." Bale whispered back, "I don't give a shit 'bout him," and he aimed at the thief, placed the barrel of his rifle against his forehead, about made to fire a round through his brain. "Then why you whispering?" The thief seemed affable there—in his dingy attire, in his compromised position. "Name's Jessup," he hissed. "This my pa, Rondell. He's in a bad way." "Shadow stolen?" said Murk, and Bale's face showed disbelieving. "Yup." It was like Jessup's voice was made of breeze. He looked like he'd clean a toilet with his bare hands for a sandwich. Murk leaned on his good leg. "I'm not about to just let this go without some kind of apology," said Bale. He applied pressure, the barrel driving Jessup's head back and Jessup's eyes thinning with fear. "Course not, son. If it weren't for desperate measures . . ." he chanced some amenable gesture. "Dad and I are in pursuit, I suppose. He's been aching for sleep. About delirious. We've spent the past three days trailing his thief. We're away from our livestock." "Livestock?" said Murk. "We got goats at home. He gets swallows off 'em. Sleeps like a baby. But out here you gotta chase critters, and it ain't all that easy. I lucked into you." "After his thief?" said Murk. "Why?" Rondell snored a bit in Jessup's lap. "He passed through our parts. Through here. He's the one that paints them skulls with the tic-tac-toe eyes." "He sacked the outpost?" said Bale. More pressure from the barrel. Jessup in pain from the pressing, his neck cricked. "Hell no. Fella's a nothing. A little redheaded troll. Bout chest high to you. Runs alone as far as I know. 
Snuck up on dad while he napped on the lawn. He'd been drinking. He's a drinker. We both are really. I don't suppose you got any booze?" "No," said Bale. "Makes sense." Jessup frowned down at his father. "Whatever got this town was something else entirely. I've seen others the same." "If your dad's happy back home, why go out after him? Seems too risky just for payback," Murk said. "Would be, but it's more than just that. We kill him, Dad goes back normal. Well, normal as he used to be." Bale eased a bit. Jessup said, "Dad's healer said so. We went to her straight away when he got like this. She's the one gave us the goats. We did that a few days until he was well enough to set out. We brought one with us but he died of tired a few days back." "What?" More pressure. "Yeah, the tired got him. Poor little thing. I don't think we could've taught him to drink shadows. We ate him. He tasted fine." "The healer part," said Bale. "Oh, she's never steered Dad wrong yet. Burns off his warts and all. Said there's a comet coming." "A comet?" said Bale. "Halley's. Showed it to us in a telescope. You can't see it now, but it's out there. It's gonna bring back shadows. But your thief's gotta die first. She seen it in her science." The wind pushed tree branches. Birds chirped and cooed. "You mean like a shooting star?" said Murk. "For wishing with?" "Bigger. Brighter. Like a tiny sun." Bale looked at the heavens. Bizarrely broad to him. Almost made his head ache just looking up. "When?" "A week, maybe. Maybe sooner." Jessup pawed at his clothes, pulled his face away from Bale's barrel. A red circle imprint stained his forehead. He kneaded it with a knuckle. "We set out ill equipped on account of our haste. Did I already ask y'all about booze?" Bale, lost in his contemplation of the sky, let his rifle barrel ease toward the ground. "But what's it do to fix anything?" said Murk. "She had a chart, but I didn't get it. She took away Dad's pneumonia once with an egg." "An eating egg?" 
said Bale, his attention back on Jessup. "Pulled his sick into the egg and cracked it open and the yolk was red and he was better." "You people are fucked up," said Bale. Rondell's sleepy breathing rasped. "Well," said Murk, "I guess good luck." "Wait," said Bale. "That's it? We're not gonna do anything for payback? Fuckers stole my shadow some." Murk shrugged. "I mean, seems they're in a bad enough way. But shoot 'em if you want." Murk had a point. They sat so stranded. So foolish. "Shit," Bale said. "You'll just owe me."

Drummond and the Light

Drummond watched the thin daylight dwindle. The hole in the train car roof faced west. He could see the sunset. The final glint that sat on the horizon glowed like magic. Its orange fire seemed impossible. Its surface subtracted continually as Drummond pressed his face against the metal prison. He seemed locked that way, watching as it disappeared. Hoping something good would come.

Return to Mira

The day dampened. Dusk was up. Murk and Bale worked toward Mira's house. Bale's feet dragged, he'd unbuttoned his shirt, let the breeze sweep over him. Murk hummed, held his jacket folded over an arm. "Mira have goats?" Bale asked. "And chickens. And rabbits. Her mom's against it. For whatever reason. Would rather have Mira running all over finding wild shade for her." "She just a shitty person?" Murk tucked his hair behind an ear. "Just sick, I guess. You ever hear the Doors?" "This like flying?" "Nah. It's a band. Music." "Nope. They good?" "Don't tell Mira, but I don't know. I found an album cover of theirs once, but there wasn't a record inside. I liked the guy's hair though. On the cover." Murk picked at a scratch on his jacket. A place the leather was rough. Cicadas chirped. Birds warbled. "Think Jessup's right? About murder and the comet." "There's gotta be something that works, but who knows?" "Should we tell Mira?" Murk stopped. "It's a new moon tonight." "Okay." "So I'm gonna leave you here.
Mira's house is just through there. I don't like anyone to watch me when I'm draining. I'll come tomorrow." "Draining?" "No moonlight or sunlight. The darkness drains out all the way. Leaks off and I can't do nothing to stop it for hours. Aches like rust on your bones." Murk seemed afraid. Jittery. His black eyes strained as though he'd cry ink-like tears. "I get it," Bale said, but he didn't. He almost reached to pat Murk's shoulder but stopped short. "Just through there?" he said, tensing toward Mira's. Murk twitched. "Not far at all." He paced out into the darkening day, and Bale dipped and bobbed through the branches.

Over Soup

Bale told Mira about the train as Mira worked on dinner. Before they sat to eat, they tied Mira's mother to a chair. "Does it hurt her?" Bale asked. "The tying?" "The moon being gone. Like it does to Murk?" "Nah. She gets kind of lively really. We don't tie her down, she'll get up and try dancing but fall over." They went to the kitchen and Mira served Bale rabbit soup. "Like it?" Food was a miracle to Bale. Something as ordinary as salty broth. Gigantically thin. Impossibly simple. "Rabbits don't look like they'd taste this good." Mira ate roasted carrots and celery. Golden-brown blisters, shimmering iridescent. She picked at her food with a fork. "There not enough for both of us?" "There was," said Mira. "More if you want. I just can't eat rabbit. Mom can, but it's not for me." Bale slowed his eating. "Well, you cook it good." "Thanks." From the other room, they could hear Mira's mother groan. "Where does Murk go?" "Who knows? If you wander around in the nights like these, you hear people screaming pain near everywhere. Distantly, most times, but the draining makes most folks flop off wherever. Mom used to call it fright night." Bale pulled a bone from the stew, was about to gnaw on it. "What about what I said earlier? About Jessup and his dad." "It'd be nice if we could fix her. It's why I went to the train the last time.
She wanted me to see about a cure." "And the first time? With the flowers." "Citrus blossoms." "Citrus blossoms," Bale repeated her. "Any of them ever quit shadows? Like, in the morning, when Murk wakes up, he'll be normal?" "As normal as he gets, maybe. He'll look like dog shit and feel like it too, and then he'll find his shadow and be right back where he was." "What if it's cloudy?" "The sun'll come out eventually." "So, once a month they shut down. Become defenseless basically. Why did people have to go into the domes at all? Why didn't they just wait for a new moon and catch 'em while they drained?" "Mom says they did. Years ago. They wrangled people up while they drained. Mom says it's why the domes exist. Like they were made for shadow addicts. The trains too." "Serious?" "That's what she says. But people heard they were killing each other. Killing themselves. Inside. And people outside just drank more shadows." "I've never heard that, but I guess it's possible." "She might be wrong too, but she says they flip-flopped it. Addicts and their families stayed out. All the good people went in." They ate. The muddled sound of chewing. Fork tines and spoon tips against porcelain, tink-tinkling. After a moment, Mira said, "Even if it's true about the shadows and comets, about what you heard from that man. I don't know Mom's thief, who to even start looking for. And, if I found them, I don't really see myself as a killer." "Murk and I could do it." Bale's matter-of-fact eyes. "You don't know where you're at, and Murk is somewhere screaming cusses at the night." "It's worth a try. If we find who did it, and it works. And she gets better. No more shade trading." Mira scooted her vegetables. Her fork clink-clinking. "We can ask if she remembers, but I'm not sure she does." After dinner, they stood on the porch, stared up at the night. "It's crazy y'all have 'em." "What?" "The stars." Back in the dome, they'd taught about night. 
They said there were people who could tell your future based on the stars that were out when you were born. Bale didn't know what stars he was born beneath, but he hoped they were the good-future kind. He thought: tonight, they shine for me. And the comet coming made them shine even brighter.

The Draining

Dark, dark night. In a bitter state, Murk thrashed, his body scrunched back into chest-high foliage, wriggling against the earth and mashing grass blades beneath him. He retched and he wrestled, grabbed handfuls of anything his hands landed on. The black veins beneath his skin began to bleed their color through his flesh, which darkened back from pale, and as the blackness slipped out his pores, it whispered away as vapor might. The scent of it like urine, and Murk trembled in the stench, his tongue foul-stuck to the backs of his teeth, his jaw aching as though locked tight. He sat prone. Rocking. "Shitty fucking shit," Murk said. "Shitty, shit, shitty."

three

For every black magic that infects the world, a miracle occurs to offset it. There came another way to make a shadow gone. A young girl discovered the trick, though she couldn't fully articulate it. She could simply make the thing disappear with her mind. It was a condition, like balance. A sense, like smell. But she could guide others into the performance. Word spread and followers flocked. They formed some shadowless army. Ranks of women gathered in formation, their shadows hidden from the gaze of the world, their long hair braided behind them, moving in near silence. They shouldered rifles, rubbed ash around their eyes. Most of these soldiers once had mothers who'd been molested, thieved from, who'd lost their shadows, their ability to sleep. They'd been raised on lies, told, "Inside the dome, they're working to fix it." But the fix never materialized. And then the trains came. Into the wilderness, a slow-motion invasion, moving over chewed-up tracks, a recycled procession.
The female army sought vengeance for false myths. They swept through the world like organized catastrophe, a plague on the train-circled dwellings. They machined over the outposts, blundering the trains, murdering the domers. They hoarded rifles and swords, gunpowder and coal. They built catapults that heaved rudimentary explosives. They invaded in broad daylight or whistled through in the darkness. They shrieked and hollered, howled and yelped. Spat curses, bit cheeks, ran knives across throats. Hardened animals—stern women with certain eyes. Nomads who slept in tent cities like warring Visigoths, chanting violent hymns to deities they invented in their images. The flavor of their murderous lust an electricity that bound them to their missions. And they took no prisoners, save for one.

The Morning After

The sun was up, so the dark could start. Everywhere, men and women dragged themselves into the light from wallows and hovels, from snag crooks and mishap holes to drop, once again, into the stupor the sun gave them. Murk was among these. He woke in grass, his wooden leg clutched to his chest, his hair messed and crazed. His eyelids smiled open and then his mouth. He blinked fiercely. He gulped at air. Using the fake leg like a cane, he hopped to his foot, and hopping thusly, sprang into the day and found a nice flat bit of dirt to guzzle his shadow from. "A world with two suns," he sang, "that is the dream." He beamed at the sun, "You sweet bitch," he told it. He gaped at his shade. "I always tell you," he told it, "I always say just stay inside me." The shadow stared back, silent. "But you always leave." Murk pointed down at the thing as though it were alive. "I know it's not your fault," he told it, "but still." He dropped to the earth, sort of stroked his shadow with the back of a finger. "Should we get to it?" he asked. He didn't wait for a response. "A world with two suns," he sang. Then he placed his mouth to the dimness, his lips on the dirt.
He slurped it the way those from the coast slurped oysters from their shells, ingurgitating the darkness, sloshing up dirt in his spittle that crunched between his teeth. Immediately, his skin paled. His eyes began their darkening. "That is the dream," he sang when the drinking was done. He stood on one leg. His veins dark through his pale flesh. He wanted to fly. "The train," he said. And, with his leg still not yet attached, he attempted to run, fell face down on the ground, began hysterically laughing. He crawled on the ground to his peg, rolled to his back and strapped it on snug. Once it was on, he did his best to spring erect. Disoriented a bit, he spun slowly trying to place himself. He held up a finger. "The train?" He glared. He pointed. "That way." And he took off—the sort of singsong jog of a false-legged man.

The Shadowless Chaperones

The Founder of the Shadowless Army had a son with a domer. It was scandalous. She took the father hostage after they sacked a train because he had pretty red hair and she kept him as a concubine. He told her that the domers were sterilized before leaving, so when she got pregnant, she killed him. The Founder did her best to keep the child hid. He didn't leave her tent for years, toddling to and fro under the supervision of lesser soldiers. She never even hugged him. Raised like that, the boy grew irksome. Once able, he would decamp and run ungoverned through the world, vandalizing whatever he could. The older women soldiers called him Huck Finn—all old books were curses to them. When he showed up black eyed and shadowless, no one was surprised. They drugged the boy and hacked off his leg. Legless addicts seemed to behave better, but he remained a little asshole. Three soldiers were enlisted as his chaperones. They were flunkies in the ranks. Mole, the most senior member, had been caught crying on the battlefield. Jilly was such a profane and reckless soldier her superiors half thought her a man in disguise.
Baby Boo was new to service, and, like all recruits, had taken a yearlong vow of silence to show her loyalty to the cause. They all hated their job. "Man, I tell you what," Jilly said, "I'd cut the little fucker's head off and bury it in sand. Blow torch that sand into glass, take the rest of his body drizzle it with honey and set that on an ant hill, poison them fucking ants once they ate him up." She looked like she'd eat a lightbulb on a dare. She had only recently completed her year of silence and seemed always to be making up for lost conversation. "You talk too much," said Mole. She had patient, hospitable eyes. She had heart in her voice. But she had also stomped a brain out a skull before. Glared down at the gray matter twinkling in the sunshine, proudly. "Shit, I ain't said a fucking thing in a year or nothing. Kept my tongue stuck to my teeth and didn't even whisper shit into my pillow like Baby Boo do. Forgive me if my mouth gets to going free, but I assume you was the same way when they told you you could talk again." "You talk this much before?" "If I had something worth saying I'd say it and no one could tell me not to unless they wanted more than just words with me, and then I'd oblige them on that front too, cause I ain't never been afraid of nothing except sharks and roaches, and I don't get in the ocean and I shake my shirts and shoes before I wear 'em." They trailed the redhead's wayward destruction, dragging him back to the Founder every so often just to prove that the boy was alive, and it was a thankless gig they had. They followed him in a patient way. The faster they caught him, the sooner they'd lose him again. They'd corral him on home to see his mother, and she'd look at him absently and the cycle began anew. He was lost, and they were after him. "We had a crazed steer that my mom cut the balls off. Maybe that would help this fucker to calm. When we catch him again let's try it." 
Jilly was chomping chewing gum that she'd pilfered from some ruined town they'd passed through. She blew bubbles on occasion. Smacked and popped like as if for attention. "You had a crazed bull," said Mole. The two walked side by side, didn't make eye contact when they spoke. Up ahead, Baby Boo moved in silence. "Don't tell me what the fuck I had. I was there, I saw it." "It's logistics," Mole said. "Bulls got balls, steers do not. Y'all made him a steer." Jilly popped a bubble. "Well, let's make this little fucker a steer then." "I don't know we can." "Sure we can. We'll tell his mom he caught his nuts on whatever, that it wasn't us did it per se. 'We found him that way, ma'am,' I'll tell her. Keep the things in a jar just in case he wants to hold 'em as mementos." "It's not that. And we're not supposed to say she's his mom." Mole took a tube of salve from her pocket, dabbed a bit on her palm, rubbed her face with it, offered some to Jilly. Jilly scowled at the stuff. "Fuck no." She blew a bubble. "What I mean is, you cut the nuts off a bull, it's a steer. Cut the nuts off a boy, I don't know what it becomes." "It becomes better than it was." They came to a long-dilapidated paint store, its front windows recently broken open. There were tracks in the dirt. One shoe print, one peg poke. Jilly dropped down on her knees to eye them. "A few days back I reckon." The odor of the place was noxious. "We're looking for paint," Mole said. "Let's go before the fumes melt our brains." Outside, Jilly held out her hand. "I changed my mind. Give me some of that shit you rub on your face." "Not unless you ask nice." "Fuck it then." They walked on, came to a building less than a half block away and on it was a crudely painted skull with hashtags for eyes. Jilly spat at it. "Only something with nuts would've ever painted that."

Haircut

Bale woke on the couch again, but this time the house stood still. He was back in his teensy shorts, his itty-bitty V-neck.
He cleared his throat, rubbed his eyes, stretched his hands—the crisscross of veins and tendons, the knuckles. He made fists, jabbed the air. Rolled his head, cracked his neck. Got up, his shirt riding up over his belly button. He walked to the stereo as Murk had done the day before but had never worked one and didn't want to upset it. He peeked in the kitchen—empty. He headed to the bathroom and took a piss. The plumbing worked on well water, you could tell by the smell of it. The same sort of mineral stench of the water they had behind the train. The bowl of the toilet thick with deposits where the water line rose to. There must have been a windmill pump around and Bale credited Mira's deceased father with its install. Maybe they were plumber's clothes that he wore the previous day. Behind the train they had a few turbines. They had cisterns for the crops, but it so rarely rained that they needed the wells. Bale wondered how the crops were doing. He didn't know much about what they'd planted, but every so often he'd turn his binoculars on the rows, try to calculate the plants' growth. "Mira," he called out after washing his hands, stepping out the bathroom. "Mira." Nothing. He went to the front door. His hand on the knob of the thing, he thought about what Murk said. The Doors? Had he heard them? The album cover. The haircut. Bale scrubbed at his hair. For the first time in his whole life, he could decide how to cut it. In the dome administrators decided all that, and the administrators were never unseated. If they died, their offspring supplanted them. They figured the less choices you made the better. But now, Bale could choose. He opened the door, and Mira stood in the sun in her short robe tending her mother. She set a finger to her lips—Bale liked when she shushed him—whispered when she got near. "Woke up early to get her a shadow so she'd sleep. You okay?" Mira stepped back into the house and Bale followed, ogling where her shadow should be. 
"I'm fantastic," he said. "I was gonna make agua de jamaica, ever had it?" "Huh-what?" "It's hibiscus flowers. Like tea. But I've never had tea. Just read about it in books. You ever had tea?" "No. But I've had coffee and I think that's sort of like tea." "Maybe. You'll like this. It's pink." "Great," said Bale. "It'll match how my shorts make me feel." He held his hands out like a performer, tried to grin all his charm. "I got a question." "Okay." They went to the kitchen and Mira did her banging around thing—began to heat water, dragged down a container of dried flowers. "Will you cut my hair?" "You don't like it?" "I don't think so. Do you?" Mira took two white mugs from the cabinet and set them on the counter. They didn't quite match, but it was close. "I've never seen you any other way." "Me either," said Bale. "That's the point." Mira went to him. "What would you want?" "I think something kind of different. Know what a Mohawk is? You cut off all this." He covered the sides of his head with his hands. "Just leave the strip on the top." "I know what a Mohawk is." "Can you do one?" She passed her fingers through his hair, and the kettle hissed steam. She plucked the thing from the stove, poured hot water in the mugs. "I think. But first let's drink this." Drummond Unwinding Through the pierced roof, Drummond watched a redheaded boy climb through the refuse. He carried a paint can and brush and he marked skulls on any flat surface he saw. There seemed a comedy in his movements. Drummond decided to appeal to him. "You there," Drummond called, and his voice shocked the painter, and the painter came to him. "What's that?" He looked like the kind of boy who'd kick a kitten. "I'm trapped. Could you lend a hand?" The painter scratched his head, touched the sheet metal before him, pulled the brush from his bucket and began to paint. "What's in it for me?" Drummond thought. "Well, I'm naked. I got nothing. So . . . gratefulness." "You got something." 
Drummond checked his naked body. "What's that?" "A shadow. I'll let you out of there if you give it to me." "Give it to you?" "Let me drink it off the ground." The painter stepped back from the hole through which he and Drummond spoke, found his shadow. "Watch," he said. Then he set his paint bucket on the dirt, lowered himself and drank. When he raised up, his eyes were black. He winked one of them at Drummond and made his head balloon twice its size, his eyes spread out like saucers. He laughed a thin laughter, resumed normal form. Drummond said, "Fuck." The painter took his bucket back in hand. "If you don't like the conditions, you're free to free yourself." He determined the sun's line. "You'll get a shadow in there eventually. Swallow it, slip through that hole." He marked more with his brush. Drummond sat on his bare ass, "Thanks anyhow," he said, his voice echoing out into the air. "No problem."

More Disaster

He could smell it before anything. He could taste the ash in the air. The gunpowder. The murder. Tears came to his eyes. In his earliest darkened state, he was much like a child. If a whim presented itself, and then was not fulfilled, his emotions tattered and blew piecemeal like shrapnel. Nostalgia stirred deep in his soul, a piteous ache. This train was his thing. As he approached, he whispered invocations, pleading with whatever existent deity there might be that his senses deceived him. He stumbled forward. His teeth clattered in his mouth. His blackened-out eyes nervous in their sockets. When he got there, it looked like a snake pulled from its coil and thrashed against a boulder until dead, left to fall where it would. A kind of horror struck him. Murk slowpoked his way on. He felt filthy and grief-stricken. Doleful and smote. He half wished he believed in some true God to appeal to. Once his mother had told him of God, but on her deathbed she'd laughed off the idea. "Sure," she gleamed. "Up there," she pointed.
"A man who looks out for all this." She wanded her hand toward the catastrophe of existence. And ever since then Murk was sure humanity stood indefensible. Murk walked toward the train, the shadow poison inside him. Breezed ash drifted across the scape. Motes of char curled into braids. Murk dragged onward into the oppressive imagery, the frightening reality of it. Train cars were heaped catawampus, willy-nilly. Cockeyed corpses clung to knotted wads of rubbish. Dross seemed the standard. An obscure crackling preached out from the myriad fires dwindling. The stained smell of death lingered. The hollow feel of defeat or doom. Murk turned aft, and, in the distance, he saw a little person perched near one of the toppled train cars marking the side of the thing with a crudely painted skull. Hashtags for eyes. His hair red. And Murk called out to him, "You." At that, the redhead dropped his paintbrush and gave flight. His movements recognizable to Murk. The redhead had a false leg. Murk gave chase. The two amputees hobbling in oddball fashion—one fleeing, one in pursuit. And, as they moved, Murk continued to call. "Stop. Stop. I only got questions." This, however, was a lie. Murk had a rage that needed placing. He had questions as well, but he figured he could twine the things together. Murk, larger than his prey, of longer stride, immediately gained. "I'm gonna catch you eventually." "Tongue my ass," cried the redhead. "Just as soon as you quit moving." They dragged away from the dilapidated train, into a prairie pocked with cactus and Mexican palmettos. It was a hokey-dokey kind of chase that resulted in Murk grabbing a tuft of the painter's red hair from behind, dragging him to the earth and holding him to it. "You're just a boy," said Murk, and the child spat at him. Murk punched him twice in the face, and black blood erupted from his nose, his dark eyes winced. 
He ballooned himself then, and the expansion of his body forced Murk from his frame, and the boy rose, his skin sprawled into that configuration only shadow addicts could achieve, and he tarried forward with the wind, the breezes pushing him along like a sail, but Murk stood and puffed up as well, his distended form also propelled by gusts, and the bloated monopods slipped across the landscape like discarded tissues, huffing along in rapid heaves. Again Murk's size afforded him an advantage. Through the grainy world they flew, but Murk overtook the boy, wrapped him in his clutches, dragged him to the grass and yanked off his false leg. The boy laid back, deflated, and Murk stood above him holding the wood leg like a club. "What do you want?" "Heard about you," said Murk, and he slipped back to true form. "You're a thief." The boy laughed fake laughter. "What happened to the train?" "Got what was coming to it?" The boy dabbed then at his bloody nose. "You did all that?" "Look at me. That was an army. A great army led by a great leader that did that." Murk eyed him. He was a fragile thing, but he was proud. "You know Jessup?" The boy shrugged. "Might." "He's got a theory 'bout you." "Who gives a shit?" "A friend of mine," Murk said. Then he brought the leg down on the boy's skull and the boy's face went oblong. Murk swung again and the boy's eye socket sunk, vomited out the eyeball, the thing belching open. Murk swung again and the whole thing chopped through, opened up. Revealed a blackened brain that slunk out from the breakage. Murk brought the leg down again and it slurped into the stew of his used-to-be head. He raised it, the peg leg. Looked at the black bloodstain, which slowly grew in color, made its way back to crimson. It seemed to glow beneath the pale blue sky above. Murk mulled over the dead thing at his feet. It reminded him so much of himself, he couldn't help but hate it. 
Bale the Shade Trader

Earlier that morning, before Bale's haircut, Mira had asked her mother who'd done it. "Your shadow?" Mira said, "the man? Who was he?" "I had it once," her mother said. She pointed to the ground. "It was right here." "I know where it would be if you had it, and I was with you before it was gone and there when it was taken." "You were?" Mira's eyes seemed to slice her mother's throat. "Maybe you need more sleep." "Weren't you bringing me something? From a gull?" "I tried yesterday. Couldn't find one." "Endless water," her mother said. "It's a beautiful rest. Refreshing." "She's jobbing you," said Bale. "If I bring you a gull shadow," Mira asked, "will you remember?" Mira's mother frowned. "Couldn't hurt," she said. Bale and Mira went out looking. Somehow, his Mohawk made Bale's clothes seem more his own—they intentionally fit snug. Mira and Bale chattered as they hunted. "We had cats in the dome," Bale said. "I always wondered what they were thinking." "They're assholes," Mira told him. "They think riddles." "But you can talk to them." "I guess. I mean, I don't ever take away much from it." "We kept them so they'd eat rats. Ever talked to a rat?" They had made way into a field thick with cactus that they had to weave carefully through. "No," said Mira. "Snake?" "No." "What's the animal best to talk with?" "Usually people," Mira said. "Usually?" "Sometimes they ask too many questions." At noon, they stopped to eat cornbread with goat-milk butter. They sat in the shade of a grapefruit tree, the fruit too green to pick. "These are good when they're ripe," said Mira. "And they'll be ripe soon. Used to, this whole area was used for farming them. There'd be rows and rows, for miles, my dad said. They're bitter and sweet and sting your lips, but in a good way." Inch-long thorns protruded from the limbs. "Looks mean. Smells nice though." "Thorns are to keep away the rats." "Which you've never talked to." Mira chewed on bread. "I have.
I've swallowed rat shadow. I've talked to them. Not a source of pride, I don't guess." "Is that one?" Bale asked. In the sky, a gray bird circled passively. Mira dropped her cornbread in the dirt, pounced to her feet and was running. "C'mon," she hollered, and Bale followed her, the two sprinting toward the bird. The thing kited up and down, cawed a comic tone and Mira and Bale did their best to put it between them and the sun, having to alternate their gazes from the sky where it was to the ground where its likeness would be, all the while dancing about clumsily in attempts to dodge parched, green cactus paddles thorned with bright yellow needles. It was not a swift bird, but it seemed aware of Bale and Mira's intentions and bounced around on the breezes evading their attempts at pilfering. "Tell it to stop." "So we can make it like my mother?" Just then, Bale tripped on a crag, plunged to the dirt. He rolled to his back. "Lie to it." "Wait," said Mira. She ran back to the grapefruit tree, and, when she returned, Bale was up on his feet chasing the bird. "Watch," she told him. She tossed a hunk of cornbread at the bird, and it dipped in the sky, caught the bread in flight. The next bit she threw drew the gull closer. It had vacant, hungry eyes. It chipped wildly. Wherever Mira tossed bread, the bird went. She seemed to have it trained that way, and, as Mira fed the thing, Bale moved stealth-like to find the bird's shadow pitching across the dirt. "I just suck it up?" Mira tossed another bit of bread. "Yeah, but don't swallow it. Hold it in your cheeks." "Does it taste like anything?" "Nothing you'd know the taste of." When Bale sipped from the earth, the bird hooted a hurt tone, and Mira tossed bread, but the gull let it fall. It spread its wings, clung to some new draft and pulled away from them. Bale watched it fly toward the direction of the gulf, his cheeks bloated with the thing's nasty-flavored shade. 
When they got back, Bale was surprised there was any shade left in his mouth at all—he could feel himself accidentally swallowing it as they advanced. "Did you get it?" Mira's mother asked. "Were you able?" "Yes, Momma," said Mira, "we did," but this surprised her mother. "You're talking," she said. "How you got it and talking?" "Bale's got it. Don't worry. Bale," said Mira, as if hurrying him to spit the thing in her mother's mouth. But Bale hesitated. He felt silly. Mohawk and bulging cheeks. He looked down at Mira's mother, noticed her scarecrow-dancing eyes, her skin, pale and grimy. He kind of shook his head no. "What's wrong?" But Mira quickly understood. She leaned to him. It wasn't a kiss, but Bale felt her lips on his lips, softly parted, could smell the warmth of her. With her gentle mouth on his mouth, Bale breathed into her slow, and Mira's eyes smiled at him, and she bent to her mother. And poof: her mother was asleep.

Drummond's Escape

The evening sun bled in through the cargo-car ceiling and Drummond watched his likeness on the floor. It was so easy. He could walk straight up to it. His belly ached, his tongue stuck to his teeth. For all he knew, his brother was dead. He was fairly certain all others in the camp were. How else would they have just let the redheaded fucker move through painting on shit? He had to do something or he would die. The car smelled of Drummond's shit, which sat in a lowly heap in the corner, and, behind that odor, the tangy vinegar of stagnant piss which pooled and ran in streams. The flavor thickened Drummond's throat, and he wondered if the shadow would wash it down, he wondered if it could set him free. Drummond's bare feet, dirty with his own filth. The last time he pissed, he'd half thought of cupping his hand in the stream for a swallow of it. He shrugged, "Better than dying to death." He placed his lips against the wall and drank his shadow.
He felt it first in his skull, as though he were sucking a thick liquid through a straw and the pressure of it squeezed his head. A warmth spread deep in him. His shoulders seemed to flutter with light. Then his back. He stepped away, lowered his head toward his navel. For a moment, he thought he'd be pulled into the fetal position, thought he'd drop from his feet and grovel on the ground, but no. A quick contraction pulled him tight and let go, as though he were merely a hand quickly trying to clap alone. A joyous sensation. It sillied the mind. Lassoing those odd patterns into phrases would be impossible. Let us say that he felt like a great ship pushed from a dock and into the ocean for the first time, so that he'd seemed to have always rested on the wrong surface and had now found his true home. Sure he'd been built elsewhere, but he'd been made for this. His eyes went stark and his skin paled and his veins darkened, as were the symptoms, and Drummond gazed back at the sun, which he could see through the torn-open roof, and it seemed truer than he'd ever known it, and a mellow sort of prayer chanced his mind, and then Drummond, as if by force beyond him, was drawn to the opening, and he smoothly squeezed through it. He was then in the open. In the deeper light. In the smell of fire. He ballooned. He felt the wind on him. He began running. He felt like a child. He kited forward. He was alive.

Murk the Murderer

Murk came to the door, stood with the bloodied peg leg in his hand. Bale pointed, "What's that?" Murk raised the peg leg. "A murder weapon." "Who you after?" Mira asked. "Already aftered him. He was painting those skulls." "The fella Jessup was after?" "I think so." "Then Jessup owes us even more. Where'd you find him?" "That's the thing," said Murk. "He was painting on your train." Bale thought on this. "My train?" "It's fucked," said Murk. "Like the one from yesterday, just fresher. Fires still smoldering." Bale grabbed his rifle. Bale ran out the door.
Black-eyed Children

Mole, Jilly, and Baby Boo chased the graffiti across the countryside, finding it on water towers and felled train cars, painted skulls on tree trunks and the doors of abandoned houses. "It's never taken this long," Mole said. "Shit. We becoming worse soldiers?" asked Jilly. They saw a small posse of shadow sippers in a prairie and Jilly scoped them in her rifle. They were boys at play. Four of them rascaling about. They had a ball they were wrestling over. One boy would get hold of it and run, the others trying to strip it away. If it came loose another would take up a turn. Their laughter floated like any laughter would. It wasn't stained black like their eyes. "I'm gonna shoot one," said Jilly. Mole gaped at her. "Why?" "To prove I still can. I ain't seen combat in forever." "It's a bad idea." Baby Boo listened to see what the verdict might be. Her moony eyes gleaming. "It's not like they're gonna grow up and grow out of it," said Jilly. "They'll get worse and hurt others with their worsening." The Shadowless Army didn't attack shadow addicts as a primary function, but when they got in the way, they'd knock them over. "We don't have orders to, and they're not bothering us." "Shit, Mole. I'm bored. Out here. All this damn nothing but walking." Mole looked at Jilly and Baby Boo. They seemed to encourage her with their eyes. To whisper, let us at them, with their thoughts. Mole watched the boys at play. In the grand scheme of things, what were they even worth? "Don't use any bullets," she said, and Mole barely had the words out of her mouth when Jilly led the charge. Baby Boo followed closely. Mole trailed them. Bored soldiers slaughtering the innocent predates the naming of war, will go on after the words we call it are broken. But the glistening of their bayonets, the lads with their hands fumbling at their spilled entrails, the shrieking and their grievous, black eyes.
Some echo of that injustice will travel out with the expanding universe, get back to God if there is one, become a thing we're remembered for forever. Mole thought about that as she watched her warriors whetting their appetites for pain. She pouted toward heaven. Shrugged to say, "If you didn't want it, it wouldn't be." Then she raised the butt of her rifle and brought it down against the skull of one of them. Who was she to judge? Baby Boo kicked their bloodied ball and the thing bounced across the field, and Jilly laughed as it went bouncing.

Drummond's Death

Bale walked the debris, kicking this and that, tapping his rifle barrel against charred cadavers, against refuse caked with ash that broke to reveal sun-colored embers. Gray blankets of smoke drifted casually from sinking fires. The stillness of it got to him, his jaw ached and his chest shimmied and he swallowed stiffly. "You didn't see anyone alive?" Murk stood off at some distance, panting. "You knew I followed you?" "Could hear you from the start, that wood peg tap-tapping." "Only saw the redhead." Murk pointed with the bloodied wooden leg. "He's over there." Bale glanced back at Murk, off in the pointed direction. "You check him?" "Check him?" "His pockets and stuff?" Murk held the bloodstained false appendage. "I got his leg." "Show me the rest of him." The two walked to where the murder had occurred, both silent as though partway sleeping. The gray-going world seemed to suffer from nausea. No wind stirred, but smoke drifted. The smell of it pleasant, somehow. "Made work of him," Bale said, looking down at the boy's broke-open head. Murk lobbed the peg leg toward the battered brains. "I was agitated." Bale reached down to check his pockets, but kind of staggered. "I'm woozy," he said and he leaned on his rifle like a cane. He pointed at the boy. "Look him." He couldn't quite form sentences.
Murk fidgeted his fingers into the crevices of the dead boy's clothes, coming up with a few trinkets that seemed like makeshift playthings. Some twine. A small glass bottle. A petrified frog. "What am I looking for?" "Something that seems like it should be found." Murk stood when the job was done, contemplated the pile of plucked-up things. "None of this meets that description." Bale frowned. Murk felt his disappointment, pain. "If you want," he said, "we can go back through, look for whoever. I'll help you bury folks." Bale straightened up. Held his rifle like a soldier should. "Bury?" "Sure," said Murk, "putting the dead in the ground. You domers don't do it?" "Nah. Never heard of it. What's the point?" Murk swept his tongue across the front of his gray teeth. "Fuck if I know. Something to do, I guess." "I don't care much for digging," Bale said. "Back home we burn the bodies." The fires fumed. "Someone's already done that bit for us. I had a brother in that mess somewhere. Didn't figure I'd see him again ever. I guess now I know for sure." "Should we say something over it?" Bale stared at Murk. "I just did." "But, like, to your God or something?" "Thanks a lot, God," said Bale. Then, "Did you want to add something?" Murk shifted. "I like your haircut." "Not what I meant, but thanks." The sky darkened. "Don't you dare fucking steal it." Most times, Murk would've argued. Instead, they walked back to Mira's house in silence.

Drummond Thirsts

Drummond needed food, water. His head spun in the new daze of his foreign state. His vision puttered and shimmied and his ears seemed reshaped so that every sound registered as something exotic, remote. His own feet against the earth, for instance, reverberated, buzzed, signaled out echoes that seemed to race in every direction and this cacophony spooked Drummond into believing he was continuously followed. Smells had been similarly reinvented to him. His odor, ripe and fetid, perched him in a kind of smog of stench.
The milky perfume of his reek seemed thick enough to chew on, blow bubbles with. Every time he swallowed, the flavor, as thick as grease, seemed to scrape down his throat, slump in his guts. The night sat heavy on him. Humid dark stretched infinitely above and he felt a column of it weighting his shoulders, crowning his head. Walking meant parting the chunky, black air, creating a wake that rippled out behind as he moved. Each star was a needle of light aimed at his eyes, but all things were one. He sensed the trees breathing, grass alive. He knew where birds would fly from. Their courses were patterned out before them like dotted lines. Like bits of bread they followed the trail of. Drummond could see his own path too. He was pulled forward as if by his nose, and he could nearly see the force pulling him. Shapeless but present. Steady but ungoverned. For hours he moved, over grasses he swore could talk to him, between clusters of trees he was certain were fucking. Birds chirped at him and it was as if their music swam down his ears, danced in his brain. Some time passed before he realized he knew where he was going. Somehow, his nose had found water. His body ballooned, his skin scant, stretched. "I'm a beast," he said to himself, but his voice bled from him shrill, seemed to cast blue fire, skipped off like fairy flight. He giggled at it, and gray rings of laughter spread out and dissipated. He maneuvered on in his newfound way. His thoughts were shallow puddles of this and that. Nothing lasted longer than a breath. Knickknack nouns rummaged by bric-a-brac verbs. "Hello!" he shouted at himself. The letters of the word floated into the stars, became a constellation. Twinkled. He climbed a hill. The moon grinned from high. His naked body gleamed in the grin of it. At the top of the hill, he surveyed the crater. At the bottom of the hill, a pond showed the moon again. Two smiles—one above, one below. 
He stepped toward the lower one, lost his footing, went skidding in the dirt, racing as he rolled, head over heels, clumping toward the bottom, toward the water. The ground leveled before he reached the crater pond's edge. He came to a stall on his back, eyed his ribs, rubbed raw by the earth. Rashes seemed to spring on his flesh. He touched them, sticky against his fingers. A dull ache traced his wholeness. Each breath stung, but was flavored by the brackish water nearby. He rolled to his belly, dug his elbows in the mud. Edging forward as a snake might, slipping into the ooze of moist earth and scum, he pulled his body to the water's edge, placed his lips against the cool, dark surface, slurped earth and liquid into his mouth, chewed the swallows across his dried lips, leathery tongue. He felt crazy in his drinking, the water had ahold of him. It seemed he'd drink until he drowned, as though he'd never be able to pause for breath. Lust forced him forward, and he moved on into the pond, until he felt weightless, bobbed on the surface like a loose blade of grass. He rolled to his back, ran his fingers through his sweat-thickened hair, let his head sink. He kicked out deeper. The splashes of his swimming sang out into the crater, mirrored out into the night. A vast, indigo universe straggled above toward forever, and Drummond's gaze wandered amongst the stars. Not all of them were white, but Drummond couldn't think the names of the colors. A rage paddled across his heart. His jaw tensed. There, floating limp in the crater puddle, he squinted at the sky. He reached his fingers at the stars he couldn't remember the colors of, made as if to snuff them away.

The Name

Mira stood in the doorway, watched Murk and Bale drag their feet. She held a pitcher of water, chill, slightly cloudy, pulled up from the well. It had a mineral flavor, a prolonged aftertaste of earth.
She filled a glass, held it out for the first to get to her, and Bale jogged forward toward it, grabbed it and guzzled. As he drank, Mira poured a glass for Murk. He took it, sipped at it, swished and swallowed. Sipped some more. "Was the train . . ." Mira began, but Bale avoided her eyes. He lowered his glass toward her to be refilled, breathed heavily, "Your mom remember the name?" Mira took the glass, decided Bale wasn't being himself, poured him more water. "She thinks she does." The sound of the water stopped. Bale took the glass back, drank again. "But I don't know where he'd be. But that's not all. I looked in my mom's almanac, and it shows that comet's coming. So the folks you met couldn't be all wrong." "But who is it?" Murk asked, his upper lip wet with well water. "The thief?" "Joe Clover," Mira said. Murk snickered. "Figures," he said. "Why's that?" said Bale. "Cause all Joe Clover wants is shadows and shadows and shadows again. If he were a dog, he wouldn't come to your knees. But he's all teeth and claws. He says he was six when he drank shadow the first time. His own mother's. Since then, he's just been tussling around with bad intentions. Hangs out a place called the Lost Souls." "Let's go to town then," said Bale. Mira and Murk seemed traumatized by Bale's suggestion.

Drummond Again

From a distance, he probably seemed dead. Nude. Afloat. His eyes wildly staring off. Something so lost about them now. Black, sure, but the domestic tone they used to have, lost worse than dead languages are lost. But it's the music of language, mutely finding his submerged ears, that forces him to reconfigure, roll to his belly, tread water meekly so his nose and eyes peer from the pond. Some man-shaped amphibian that indigenous people would dream up to scare their children with. Along the shore, there were figures. Drummond spotted some cattails and breaststroked toward them. He moved as near silent as he could. Tucked himself amongst the weeds and hid there.
From that vantage, he considered those across the body from him. They lowered their faces to the water and drank. Their drinking made Drummond jealous. It was his water. He thought. All of it. Across the water, near the shore, several huts of grass and tree limbs stood caked with mud that must have come from the pond bottom. In and out of these huts, people hobbled. Drummond watched them build a pit fire. Watched them cooking beasts in the flames, smoke belching up toward the sun. If he was hungry for the flesh of the things, he didn't sense it, but he did gulp water, wallow in the weeds, piss when the need took him. There he stayed sprawled and offended at his own mind's ineptitude. He marveled at nothing, and the day did its doing. The sun wilted toward evening. The syrupy conditions that took hold were made wicked by his confusion. He hated his own shadow for being in him and promised himself he'd never drink the thing again. Drummond watched the sun as it set. Watched it so hard it seemed to dance toward him, a circle of fire, swimming in rhythms at its edges. He whispered, "Fuck you," at it. It slowly slipped off. Time passed. The sky went dark again.

The Searching Platoon

For Mole, Jilly, and Baby Boo the redhead's trail dried up. "Let's just end this business," said Jilly. She was rubbing Mole's salve on her cheeks. "We go back and say, 'Ma'am, your son got himself dead somehow and we weren't there to see it. Terribly sorry for your loss,' and if she takes it too hard we can blame it on Baby Boo." "I imagine she'd know it's a lie," said Mole. "And let's not call him that." "Call him what? Her son? What else you call him? I'm not calling him Huck Finn.
I heard that book read to me once and I liked that character more than to bless his name on some runt like that boy, and she's not a deity as far as I can tell, or she'd have had better offspring, and you can tell her I said it if you want, and if I'm wrong she can strike me down with lightning bolts or whatever awesome power she possesses." "We don't know he's hers for sure. It's speculation, sort of. I mean, she doesn't call him her son. Not that anyone's heard anyhow." "Well what the hell else would he be? Her cousin? I had a cousin when I was young and I pushed him off a rooftop just to hear the sound of him against the ground, and I've met her and she's meaner than me. Shit, she makes me look like I got angel wings on my pussy." "I don't want Baby Boo to hear you talking like that." Jilly ordered Baby Boo to go get a campfire lit. Jilly sort of whispered to Mole, "Ain't like I'm inciting here. Founder tells me to die for the cause, I'll take a flaming arrow through each titty, but this ain't the cause. This is babysitting. And I coulda stayed babysitting where I was. It've been safe. Just hauling shadow spits for my momma and helping to make sure my brothers didn't fuck the family goats." She tossed Mole's salve to her. "Fine," said Mole. "It's not the best, but our time will come." "Our time for what? In a few minutes Baby Boo will come back with a low-hung head cause she's too stupid to light a fire and I'll have to go fix the situation by rubbing goddamned sticks together, and I'm not supposed to have to rub sticks no more and then we'll lie around by the fire at night, just you and me telling each other the same bullshit stories." "You don't like my stories?" 
"And then we'll wake up and walk in God knows whatever direction hoping to chance across the little peg-legger's tracks only to, with the fucking help from heaven, track his little cuss mouth down so he can call us all bull dykes again and spit at our faces as we drag him back to his momma who never even says good job at us for our troubles." "What's wrong with my stories?" Jilly looked at her. "It's your delivery more than anything else. Christ, think about who's listening to ya. Would it kill ya to make 'em funny a bit?" "Nothing's been funny about my whole life." "Then lie, Mole. Every bitch under the sun has had it hard, but whining about it round a campfire ain't gonna earn you gold stars in the godforsaken afterlife. I pushed my cousin off that roof 'cause he raped my sister, but shit, it just sounds better the other way, and if I tell it the rape way you miss the most important part of it which is the sound he made when he hit the ground." "You didn't say what sound he made." "Sometimes it's what you leave out that makes it the best." Baby Boo tiptoed up with sour eyes. "Let me fucking guess," Jilly said, "your dumb cunt ass can't light the fucking fire."

Drummond Alone

The poisonous poetry of Drummond's shadow intoxication thinned to whispers as the hours passed. He wasn't quite in focus, but certain things did come to him. Hunger, for instance. His belly felt scratched dry. He tarried from the pond, puked water, and worked his way up the crater wall to the crown of the formation. In the distance, green-leaved trees rolled with the breeze. He went that way, loping confusedly, led on by his hunger. As he neared, he saw pale-orange fruits that bobbed from the branches. He picked up speed, reached out and grabbed one. He didn't know it, but they were grapefruit. He bit the rind and bitterness shocked him.
The sweet, acidic juice of the thing stung his lips, and he tore open the fruit at the spot he'd bit it, smashed his face into the ruby-red flesh, smeared the sticky thing against his cheeks, nose, and chin. Satisfyingly painful. Burning in some good way. He made quick work of that one, set to some others, gorged on the grapefruits until his throat stung. The fruits didn't quite quell his hunger, but they dampened it, held it off at some distance to be observed. He then realized that his skin stung with sunburn. He wished for clothing. He went to lie beneath one of the trees, but there were dropped, dried limbs thick with malicious thorns, and he decided that the area was against him. He clutched up as many grapefruit as he could gather and retreated to the pond, ambled up over the crater bluff, back down to the water. If he was spotted by those who dwelt on the opposite side from him, he didn't care, wasn't certain. He made his way back to the cattails and rested again in the mess of them, letting the fruit he'd harvested bob in the water. His haunches sunk into the mud, as did his hands. He lifted a fistful of the stuff, and he figured he'd found an answer. He took the mud and began to run it over his face and shoulders. It was cool against his weathered skin. He smeared his whole body. It felt pleasant. When the sun dried it to a crunchy texture, he washed the earth off him and smeared himself again. He passed the whole day that way, but the next morning, while hunkering in the reeds, a face gazed toward him from across the pond. He saw it stay on him. A few folks amassed and stayed focused on his position. Drummond threw into anxiety. Queer vibrations beset him. A constriction of his heart transpired. A few of those across the pond began to round the perimeter of the water toward him. He stepped back, slipped, landed ass down in the mud. The splash must've echoed out. His audience moved toward him with speed—from far away, but he could see. 
Drummond plucked his frame from the muck, raced up the embankment, ran on his bare feet in whatever direction he happened to be moving. He dashed through tall grasses, low branches, dancing around cacti paddles, their yellow needles catching his thighs, drawing blood. He upped and overed fallen trees, rutted rock formations. Pounced across planes of dirt and pebbles. Hobbled into strands of huizache thorns that pierced the soles of his feet and came out his toes. Anxious, pretend drums thumping some soundtrack for him to escape to. Tears streaked his face and fear thickened his throat. He never looked back, only raced on. For hours he bolted, moving in some unintentional direction, hoping merely to luck into something to drink, something to eat. Fear and hunger erased the ache of his legs and feet bottoms. Drummond headed east, though he had no idea of his course. In the gray evening, just before dark, he spotted a quaint house with light in its windows. It rested at the bottom of a slight elevation drop, and he picked up speed as he sank toward it, wild with the idea of being by a fire inside. "Wait a minute," a woman's voice called from the porch to him. Drummond slowed, his joy drained. He stood paralyzed, jittery. "What is ya?" How long had it been since he spoke? "A Drummond," he answered awkwardly. His jaw tightened, his words sour. A gun was cocked. A metallic click called out into the pre-dusk. "Don't know what a drummond is." "I'm naked," said Drummond. "What's that to me?" Drummond stood stupid. His eyes bewildered toward nothing. Several moments passed. "There's a clothesline off there," the woman said, pointing with her gun. She had to motion several times before Drummond noticed. "Get yourself decent off it and come in closer." Drummond moved that way, found a few things to wrap up in. A shirt that swam on him. Well-worn jeans he had to hold the waist of or they'd slink off. He went then to the woman who kept the shotgun on him. 
"You're a filthy thing ain't ya?" Drummond didn't know, but his face was streaked with mud and he had leaves in his hair. "Hungry?" Drummond tried to intimate that he was. "There's a comet coming," the woman said. "But it won't do nothing to help your kind. And there's no egg pure enough to pluck that dark from your heart. Stay put and I'll get something together, but if you move a muscle I'll shoot you through. My son's one of you, or was last I saw him. I'm sympathetic," the woman said, "but only to a point." Drummond didn't rightly understand a thing of this. He jittered confusedly, his neck muscles tense. He could hear some banging around taking place in the house. When she returned, the woman had a pack and some shoes, a length of rope. She held up the sack, spoke loud in Drummond's direction. "In this," she hollered slow, "you'll find some things to eat on. Some biscuits and dried meat. Dried tomatoes. Cheese." She threw the sack out toward him, then held up the shoes. "I don't know your size," she yelped. "And if I did wouldn't matter. This is what I got." She lobbed them out. "I see you're struggling to keep those up, but I can't find a belt." She wrapped the bit of rope up around itself and tossed it out toward Drummond. He picked it off the ground, uncoiled it. His black eyes glanced frantically over it. She motioned around herself, yelled, "Tie it round your waist." And she continued to motion. Drummond ferreted the rope through the belt holes of his blue jeans, his hands trembling with starvation and shock, and he achieved some awkward knot at the front of himself. He plunged his feet into the shoes easily enough, them being several sizes too big, and he picked up the pack. "Can you understand me?" Drummond nodded. She aimed the gun barrel at him. "I'll count to ten," she said. Drummond didn't even hear her get to five. He ran until he couldn't. Then he walked on in his loose shoes, a new kind of confidence to him, for no longer being naked. 
After some hours, the air changed. He couldn't quite define it. The world seemed more moist, briny. In the distance, he heard a sort of echo reverberating. His ears filled with some kind of crashing. He stood and saw a bit of shadow below him. Paused at the thing. Told himself to leave it alone. Watched it move as he moved. Watched it get nearer to him. Screamed at himself, from somewhere inside, to not drink it, but he was closer still. Nothing could be done. He was down on the thing, gulping it up. He picked up his pack and continued. The dirt turned to sand beneath him, his steps loose in the softness of it. He climbed a dune. If he had once known what the ocean was, that knowledge was lost to him now, and he merely stood in awe at the immensity of it. White waves crashed in rhythms indecipherable and birds ran willy-nilly over the water-packed sand. He didn't know what to do. He dropped to his knees. The noise and the smell engulfed him. Salted static. Brackish hissing. He gazed out at the infinity of water. He thought maybe he was in heaven. He made his way forward. Waded out. It warmed his skin. Stung his sores. Pushed him over and dragged him across the sand. Further. Deeper. Further still. It felt so fucking good his being in it. All the glinting sunlight of the spume and barm about him. He wanted more of that, it was his now, the whole thing. He moved out. Deeper still. Deep enough that he could taste it. But even that didn't stop him.

four

The patterns of society's demise played out in different sequences, but in that region, where there was open land to retreat to, dwellings sprung up at the center of vast, forgotten acreages. Families crept off to long-neglected landholdings that had been handed down for generations, to forge life anew in the wake of the world's falling apart.
These little tribes of relatives beat their bodies against the earth, dug wells in the old way—with hand shovels over days and days, sending candles down in baskets to check the air for bad gasses—walked plows behind mule power to break up the land into farmable rows. Neo-pioneers, and they existed in the not-too-distant proximity of the falling-away world, watching as fires spat smoke plumes from the dilapidating townships, sitting on stores of fuel and ammunition, arms and provisions accumulated by those fortunate enough to have seen some bad thing coming. Mostly these holdings of humans clung deep to religious rituals, saw themselves as steadfast refugees the Lord himself had selected to retain his message in, and they'd scrape pews out of tree trunks and sit making new music from old chord changes, taking ancient hymns and retooling their words. "Be Thou my vision, O Lord of my heart; Naught be all else to me, save that Thou art; Thou my best thought, by day or by night; I won't drink my shadow forged from thy light." Prophets, seers, revelators: whatever you wanted to call them, they thought themselves that. They let their hair grow wild and clucked bits of scripture at the dizzying enormity of their solitude. Trains ran. Trains stopped. Trains ran. Trains stopped. Domes were built and domes were filled. Domes were emptied, domes were filled. Vast efforts were undertaken, abandoned. Little wars and tiny treaties. If they were Christian, they turned to the laws of Leviticus. Never ate rabbits. A single seed. Single fabric. Muslims held hard to Sharia. No yellow. Untamed eyebrows. No alcohol at all. Others found meaning in texts non-ordained: "a personal God quaquaquaqua with white beard quaquaquaqua outside time without extension who from the heights of divine apathia divine athambia divine aphasia loves us dearly with some exceptions for reasons unknown but time will tell." Time told this: there was more time. 
Each new unfurling, every coming rapture, all supposed terminations—the return of God or the undoing of reality—failed to materialize, and these strange clans begot bizarre progeny who they filled with their insane notions to carry off toward the future.

The Hermit

Let us meet an odd fellow who lives alone, his home fashioned from loose tree limbs he wrestled free from a river's snag, palm fronds slathered with gray clay hoisted out the earth, a sort of grisly hut sunk back in the discarded skeletal remains of animals he'd made meals of. The whole of his denomination lost to him. "Partaking in glory," he'd say of them, their outpost taken by blaze as they slept. He has a passion for grass fires. His teeth are like oyster gravel. You can smell the funny look in his eyes. He dawdles the countryside collecting lost bits he might someday find useful, discarded items he feels his destiny will give purpose to, rubbish of note that catches his attention from afar. He pushes a wheelbarrow across the land, circling his dwelling concentrically, with each pass furthering his circumference, hunting forgotten treasures. It is on one of these outings he chances upon an elder man with a loose leg over his shoulder, the bloody end of it stuffed into a brown paper bag. "What you got there?" The leg-holding man jostled his leg. "A leg." "For what?" "Taking it to town for trade." "What they trade you for it?" "Things I want, obviously." He thought about that. "Where's the town at?" The leg-holding man pointed. "North a ways from here. Not too far off. The Town of Lost Souls." "Okay," says the hermit, and he pushes his wheelbarrow on, the two going their separate ways. Much later. Days or weeks. Maybe months even. Our wheelbarrow pusher follows a black-eyed man to the coast. The hermit doesn't know it, but this is Drummond. And he walks about aimless. His clothes don't fit him, and he seems lost. He is sickly. He is low.
He strolls across the beach and into the surf and is capsized by waves, tossed by breakers, held down by undertow, and the wheelbarrow pusher stays back, just witnessing. Ultimately, the body is coughed up by the gulf, spat onto the beach sand, dead to the world. The wheelbarrow pusher goes to him. He removes his wet clothes. "Too big for me," he says, but he keeps them anyway, because maybe he'll grow. He folds them and sets them in his barrow beside a dull hunting knife he recently plucked out of a tree trunk. The notion strikes him. "What could they trade for legs?" He shrugs. He grasps the knife handle, goes to the dead man. He sets into his hips to disjoint the legs, each dull passing of the blade hacking gruesome noise, deep, dark blood spilling to the sand. A horrid clipping of tendons. Leaning his heft to throw the socket out. When he has each one freed, he hoists Drummond's legs on his load. He smiles at his work. He knows where he's going.

The Town of Lost Souls

The town's history was uncertain, but what it had become was a sort of squatter camp for wastrels and profligates. There existed, on its perimeter, felled and molested signage which reported the population of the place to be 8,231, but that was civilizations ago, and most likely the true number of residents was around one hundred. This odd horde mostly dwelled in a derelict district of brick buildings set along eight streets, four running in each direction, and there seemed a sort of roving order amongst them that fluctuated based on the stages of the moon, which dictated the behavior of the inhabitants. This place, sometimes called The Town of Lost Souls, ran on Darwinian logic, and the only discernible structure to their lives revolved around a machine housed in a park at the town's center. Produced daily from beneath a tarp of canvas, the machine was an abomination of science and nature.
Fitted with hoses that ran back to baths of blood, the contraption was little more than a life support that maintained limbs which were fetched back for it. Mostly, the thing housed legs, which dangled from a crossbar that they were fixed to with hooks, tied into the circuitry of the system with hoses that seemed red, but were in actuality see through, filled with blood. These appendages drooped from their housings, live nerves fidgeting meekly. Barefoot, the toenails of them grew yellow and wicked. The wounds, where they'd been hacked from bodies at the thighs and shoulders, were wrapped in gauze that discolored with plasma. A stench of crude antiseptic, iron, and sweat wafted from them. The few arms that were hooked into the device had hands that would open and close at random, the sound of this happening curious and alarming. Like all things, these limbs aged and had to be replaced at intervals. This degeneration was much faster than the life of normal arms and legs. The oldest leg dangling at any time might have been on the machine two years. For a compensation, the citizenry of the encampment would retrieve new limbs to supplant those that looked on their way out. This exchange for a wage happened intermittently and was handled by a mustached fellow with indurate eyes, who governed over these transactions sternly. He wore a vest with shallow pockets he kept his knuckles tucked into. He had a pencil behind his right ear. He cleared his throat, tongued his teeth. Didn't look folks in the eyes unless they deserved it. They called him Doc, though he never healed anyone. At the start of the day, a hermit brought him two legs. "Why'd you bring me both?" Doc asked. The legs had been wrapped in spent fabric that a stiff wind could blow bits off of. "For trade." There was a spooky character to the leg holder's eyes, a shellfish quality to his teeth. "This is a fickle beast we got here," Doc said. 
"What it chooses to support and what it lets die on the hooks is beyond me and I wouldn't tax the thing to take on two of the same risks." "I don't get it." "Of course you don't. You're a moron, but I'm a patient man, so allow me to illuminate: some legs don't make it. We hook them up, and then they die." Doc palmed one of the odd man's pilfered legs. "If this leg gets hooked in and dies," he caressed the other, "this one would surely perish as well." "But if it lives," the stranger quibbled, "there'd be two new legs." "Do you have a method of ascertaining the chances of that? Are you an actuary in this regard? Is there some math, some data, some figure you could present to me on the matter that would persuade me to chance making a payment for two limbs from the same stock to burden my machine with supporting?" "I don't . . ." "Monroe!" said Doc, and a broad figure of a man appeared from behind the machine. "How often do we buy two legs from the same donor?" "Never," Monroe said. "Never," said Doc, now looking into the man's face. "That's known protocol to those who operate in this camp. But you're new here." "Been around a bit," the hermit said, but he'd only showed up that morning, straying the streets analyzing the crude society, sort of peeking around corners cautiously. He had seen a pack of wild-shaped men set upon a young boy in some building's entry, strip him of his clothes, chase him naked down the streets, howling at him. "Been around a bit?" Doc mocked. "I'll give you shadow rights for a week on the one leg, and I'll give you lodging for three days' duration at my inn." "Shadow rights?" "I don't haggle," Doc said. The leg holder didn't know what to make of it, but Doc took his pencil from his ear, marked a card, called out, "Monroe." He gave the marked card to him. "Get this man set up." Doc was a kind of governor to the entirety of those eight streets.
His chief concern was the machine and the maintenance of it and the profit it provided him, the power it gave him over the town. There was no set monetary standard he held allegiance to. He let those who came to sip shadows propose how he'd be compensated. But beyond the trinkets and knickknacks he'd take in trade, he demanded a kind of blind allegiance from his customers. In that encampment he moved and manipulated most endeavors with little more than his expression. Folks watched to see how he took things. They didn't want to be cut off from the machine. Doc's emotions were considered fragile, and everyone wanted to please him. He had the magic quality manipulative people achieve—he could make you feel like you were the true reason air existed, or he could make you feel like your whole life was some misuse of molecules. As the machine's master, each evening, whether or not all the shadows had been consumed from each limb, he'd order a switch on the thing to be flipped. The blood slowed to a stall. The limbs wriggled and fidgeted, kicked and punched. When they were all still, essentially dead, the switch was thrown again. The blood flowed. The limbs reanimated. The process brought their shadows back to them. They were ready for the next day.

Unlucky Clover

The jail in The Town of Lost Souls was most likely antiquated before shadow addiction spoiled the world. It probably didn't function in any capacity beyond lending a kind of curiosity to those who ventured to visit it—a museum of sorts that people would tour so as to see how criminals used to be treated. In its current incarnation it served as a makeshift depository for Doc's goods. There were two cells, both in use but for severely different reasons. In one cell, Doc kept the things he considered dear: gold artifacts perchance, taxidermied species of note, weaponry of forgotten civilizations. In the other cell, he kept Joe Clover. It was out of spite he kept him. In that cell, no sunlight shone.
The misery this brought Clover set him about to whine and wallow. He had naught but his nudity to keep him entertained, and he wanted his shadow the way babies want warmth. From the streets you could hear him moaning piteously. Because of this, they called him Unlucky Clover, and Doc made certain his presence was known. Clover was the only man who'd ever tried to upset Doc's proprietorship of the machine, and Doc used Clover's punishment as a cautionary tale to any who might chance something so stupid again. Clover'd been held captive for weeks. When he first came in, he had hair down past his ass, but after a day or so in his cell, he'd made to hang himself with a rope he fashioned from the stuff—opting for death rather than a forced sobriety. They found him dangling, near blue, and Doc had his minions hack his hair off to prevent another such attempt. They kept his cell empty, Joe Clover naked. His sloppy-butchered hair made him look like an overgrown orphan. Doc liked to come in and watch Clover sulk. Liked to see him gross in his cage. He'd look down at the wreckage and chide him. "Sorensen says when you lift a rock, you do not find a preexisting shadow." "Who the fuck is Sorensen?" "Roy Sorensen. A shadow expert before all this went down. My father's favorite writer. A philosopher and great thinker, the antithesis of you." "How long you gonna hold me?" "Indefinitely." Clover had tried to starve himself to death but wasn't strong enough a person. Doc had brought him steak and potatoes. The steak was cut before it was passed in, wrapped in paper. He gave him a plastic cup of grapefruit moonshine. "What did you even think would happen?" Doc asked. Clover's hair was freshly butchered and he was a bit loose on grapefruit moonshine. "Fuck you mean think would happen?" "Do you remember any of it? Leading up to the cage?" The smell of rust from the bars of the jail cell. The blight smell of moonshine. The smell of stagnant water from Clover's commode. 
Clover's mind filled with grainy thoughts and dissolving half-recollections. If he thought too long on any memory, it crumbled. He faintly flashbacked to pushing Doc down from behind. "It doesn't matter." "It doesn't?" Doc looked around at the jail. Clover sipped his moonshine. "Give me some more of this." He drained his cup, handed it to Doc. "Sure," said Doc. He took the empty cup from Clover and tossed it to the ground. "Soon as hell turns to pudding pie." "Asshole." "You know how many shadows a person has?" Doc spoke at Clover as though addressing excrement. Clover eyed the ground where his false-light shadow lay. "Depends." "No it doesn't," said Doc. "You have exactly none. Sometimes you cast a shadow, but Sorensen says a hole made by an object is never part of that object. See, light travels in straight lines. Do you know why you're in a cage?" "Sure," said Clover, "it's because you're a crazy shithead." "Nope," said Doc, "my mental state's beside the point." He stuck a hand through the jail bars, sort of waved it. "It's because of this here space between the bars. Otherwise, you'd be in a box." "Hysterical." "Suck the space out." "What?" "From the cage," said Doc. "Suck it out. Make it a box." Clover grimaced. He brushed against the jail bars. Rusty iron things, textured like tree limbs, nearly. "Can't be done, fucker." Doc held up a finger. "I'm inclined to agree with you, except I've seen you do such a thing. No one has a shadow. People cast shadows. That is to say, your shape cuts a hole in the light. Drops darkness on some terminus in the approximation of your figure." "Shadows ain't holes, dipshit." "They most certainly are. Absences of light. The way flute holes are absences of flute. We've just given them a name. You ever hear of a transplant?" "A what?" "For amputees? Before the world turned to shit, they could do it. Take an arm from one person, put it on another. Or a leg. Or a finger." "You been in the sun too long dinking with your machine.
You've lost your mind." "How would one go about that? Losing your mind? You can't cut the thing off me. You can't transplant it like an arm." "You ain't even making sense." "If we took your arm, and put it on my body, would I be guilty for all the wrongs you did with that arm?" "What?" "If somehow your mind took sanctuary in my skull, would it be a sin to execute my body on account of the crimes your mind did prior?" Clover closed his eyes. "What part of us makes us us?" Doc asked. "The arms and legs out there? On my machine. With the right tools, I could make them anybody. Are they only objects then? Like hats or pants?" Joe Clover grabbed his junk. "If you ever get done talking," he said, "feel free to suck my dick." He wallowed off to a corner of his cell. "Guess I'm done," said Doc. "But if your dick ever ends up in my mouth, I'll make it an object to suck shadow from. I'll dangle it from my machine."

The Trip toward Town

The night before they left, Mira had a look at Bale's feet. He had his boots kicked off, sat back in a recliner. She stood in front of him, cradled a foot against her waist. The soles of the things were sullen black, direful. "I don't know how you can stand to walk around like this." Bale lifted his chin, refined his gaze on her. "Must be all the years of dish washing." Mira set the one foot down, lifted the other. "You're kidding, right?" "Oh no," Bale said, "it's a tough gig." Murk rubbed his stump. "I fucking hate washing dishes." Bale leaned forward toward Murk, the crown of his Mohawk aimed at him. "I'd wash for ten hours every day. They'd come in in cartfuls. Bowls and spoons we ate rations off of. They kept the wash water so hot it made your hands fall apart." He opened a palm toward Mira and she touched it. "Feels like shit, right?" She shook her head. "Out here's been the easiest my life has ever been. Which is funny because back in school they made soldiers sound like immortal things.
All you do is stand around pointing a gun at shit. Washing dishes takes a hero by comparison. Even hauling the tracks. No big deal. It's all easy compared to dish washing." "All of it's easy?" Mira said. "Sure. The walk to town will be nothing. How far is it anyhow?" "It'll take a day and some." "Will your mom be all right alone that long?" "Gotta be," said Mira. "I'll do something I haven't done in a while." In the morning, Mira led a goat from its pen and into the sun. She whispered some words at it, apologies mostly. She knew the thing was dying regardless—eventually it would be food. Mira would bind its hind legs and run a knife across its jugular, hang it from a tree branch above a bowl to catch its blood. Something in this, though, felt darker. It was a fouler molestation, perhaps akin to rape. The taking of a thing's shadow was felt in a different and dirty way. Animals seemed embarrassed by it. Afterwards, they'd trot off—or amble in whatever manner they maneuvered in—and look at Mira accusingly. They'd seem spooked, badgered. In her whispering way, once it was done, she'd apologize again. It was the case with this nanny kid. She arched her back and bucked off a ways. It did hurt, the thing said. Hurt? said Mira, of course none of this actually with words. Maybe not quite. Mira led the thing back to its pen, walked into where her mother sat, huddled in her agony, oddly draped in dark. Mira lowered to her, breathed into her lips. Once swallowed, her mother gave her an odd expression, but then she was asleep. They packed food and water, some blankets and matches. Bale made to bring his rifle. "I like where your head's at," said Murk. "But that thing would give us away. Not many people have those out here. We show up to town with it, every eye will be on us. I don't know the exact numbers, but I'd say there's at least a hundred there. 
Even if those are magic, five-people-killing bullets, there'll be plenty of folks leftover pissed 'bout what we done to their friends. So, we leave it here, it don't help us at all, but we bring it and it don't help us enough for the attention it calls to us." Bale patted his rifle, reluctantly set it aside. They walked for hours. They had divvied up the weight of their provisions, sharing the load. "Did we bring a rope?" Murk asked. "No one's fucking flying you," Mira said. On and on they went—the slow steps and the drudged task of taking them. "Back in the dome," Bale said, "I would've killed to get to walk this much." "Now?" "I'd kill to get carried." "Wanna piggy back?" Mira asked. "Serious?" "Of course not." A deer ran across the glade. "Did y'all have pets at all?" "We had some stuff in aquariums we could go look at. Aside from that, there were roaches and rats and things like that. Cats that didn't really belong to anyone. Geckos. I tried to keep one of those once but I tore off his tail holding it and then looking at the thing just made me feel bad." "What'd you do with it after that?" Murk asked. "Put it in the trash. I think." "So you all took showers together?" "Pretty much." "Boys and girls same time?" Mira asked. "Nah. They split us up." "And you'd just be together all the time? Must be weird being with that many people who've seen you naked." "I've seen you naked," said Murk. "When?" "When I did." "Recent?" "Recent enough that you had titties and kitty feathers." "Well you don't count anyhow, and you're just one. If it was hundreds, that'd be a different thing." "It was thousands," said Bale. "But what if you get hurt real bad?" said Mira. "Like worse than this?" Murk raised his peg. "We saw a doctor every week," Bale said. "Even if we weren't sick. Poked us with needles and things. Took our blood. I once saw a lady brought back from the dead."
"When I was young," said Mira, "this lady used to come around every so often and she had things in bottles you could trade for. She said they did different things to heal you, but pretty much they just made you feel like you were floating and made your teeth loud." "Teeth loud?" "When you clicked them," she said. "You got some shadow back," Bale said to Murk. Murk stopped. He had a faint shadow emerging. He squatted and sipped it gone. He stood, clicked his teeth. "Loud as shit," he said, "and that was free." "So at the edge of your shadow," said Murk, "it, like, glows." Bale considered it. They walked along. "Yeah kinda." "And, it's like," Murk rubbed his chin, "everything it passes over, as we're walking and all, seems to bend. Like time travel." "Um . . ." "And . . ." "You should quit looking at it." They came to a cluster of mesquite trees parked at the base of a bluff. "It's as good a place as any to camp, I'd guess," said Murk. They dropped their bags near a tree trunk, walked through the coppice hunting dismissed limbs that had been lost long enough ago that they'd gone dry in the heat of days. They amassed a bundle and scratched together kindling, twig bits and dry grass. They rallied together a sort of camp that way—built a fire at the edge of the trees, laid out blankets to bed on. Mira had a Dutch oven she set in the flames, warmed goat stew for them to dine on. They ate from earthen mugs with wooden spoons. They passed the canteen back and forth. "There's one thing I miss for certain," said Bale. "What's that?" "Climate control. It was always the perfect temperature in the dome." The sky grew dim as night approached. "Y'all tell ghost stories in the Dome?" Murk asked. "Kind of," said Bale. Mira got excited, sat forward, kind of clapped her hands. "Tell one." "Hell, I'll probably mess it up." "We won't know," Murk said. "If it's bad, we'll just blame dome people in general. We'll pretend it had nothing to do with you." Bale thought a minute. 
"Okay," he said, "here's the best one I know." The Miserable Mother "When my people first moved into the dome it split families apart. Not everyone wanted to go. There was a young woman who was married to a shadow sipper, and they had a son. She loved the boy more than anything on Earth, but when she told the shadow-sipping father she was moving into the dome, he told her she could go but that the boy was staying in the outside world. It broke the woman's heart. She didn't want to stay in the sun, but she didn't want to leave her boy either. He had crystal blue eyes, and she loved to look at them. In the end, she decided she couldn't leave him. She watched the rest of her family board the train that led to the dome, and she watched the world around her thin out of good folks and get filled up with bad. "She did her best to raise her child, but, as the years passed, it became harder and harder to provide for him. The husband wasn't any help. He didn't hunt or farm. He didn't keep house or teach the boy. "One day, the woman went off looking for food—all their cupboards were bare. When she got back, her son and husband were in the front yard together. It seemed suspicious. She called her boy, and he went to her. "She knelt down to look at his eyes, and they were black as eight balls." "Y'all got pool tables in there?" Murk asked. "Yeah, you play?" "Couple times. I was good at it." "Bet I'd beat you . . ." Mira kicked at Bale, "Finish the damn story." "Right," said Bale. "So she saw his eyes. "And then she realized she'd made a great mistake. The rest of her family was safe in the dome, and she'd stayed out in the world to take care of a boy now lost to her. It broke her heart, but she decided maybe there was still hope. "She ran to the train tracks and followed them in the direction her family had traveled off in. It took her days, but she finally got to the dome. "It was completely shut up, but she walked the outskirts of it, banging the walls. 
The only family she still knew was inside it. All that was outside and with her had changed beyond her knowing. "She went insane. "Her last days were spent aiming to get inside the dome. "She banged the walls. She attempted tunneling in. "Ultimately, she died that way. Out there, going crazy. "When death came, she welcomed it. She slumped against the dome, and breathed her final breaths, happy that her pain and suffering was over. "But she was wrong. "When her body died, her soul stepped from it, and as soon as it saw the dome, it latched on to the idea that it could now, without the body limiting it, gain entry easily. "But the soul could not. "The rules that governed the woman in life governed her in death as well. "And so at night, you could hear her. Against the roofs and the walls, scratching and banging. "And she'd have to go on that way forever." "Not bad," said Murk. "Works better back home," said Bale, "'cause you can usually hear banging and such." "Think it's her?" "I think it's somebody." The night was still. Filled with the music of the fire and insects chirping. Owls hooted. Coyotes howled. Murk, Bale, and Mira laid back in their blankets. Pretty soon, they were all asleep. The Women in Darkness Not far off, Jilly and Mole sat in darkness and, just beyond, Baby Boo rubbed sticks trying to get a fire going. "Stupid bitch," said Jilly. "Just let me make the goddamn fire." "If we coddle her she'll never learn." "I don't give a fuck if she learns. I don't care if her brain falls out her asshole. I'm cold and it's dark and I wanna sleep." "Think about something else." "Like what?" "I got a story," Mole said. Jilly covered her eyes with her hand. "Oh thank all the gods above. Thank the stars that twinkle and the moon that glows too." "It's a good one. Been working on it." Jilly lowered her hand, her face angry. She motioned to Mole with a flippant gesture. "Regale us," she said. "Hesitate not to enamor our minds with the narrative you've been a cooking up." 
Mole cleared her throat, adjusted her repose. "My uncle had a little dick," she said, "and it had a funny flavor." Nothing spoke but the night. Jilly looked around her, side to side. "Am I in some kind of nightmare?" "Didn't like it?" "How the hell is that a story? Ain't nothing even to it." "I was leaving things out, like you said. To make it funny." Jilly blinked many times. "How the hell could that be funny? You sucked your uncle's tiny dick? Fucking hysterical. Does it feel better to get that out? 'Cause I tell you, I'm not much for talking feelings. I feel like sharing shit like that just drags others down. You got a memory like that, you let that shit fester. Keep that poison in you and just learn to move on." "I didn't say I sucked it," Mole said. "Come again?" "He hanged himself. In winter. Mom and I found him and figured he shouldn't go to waste. We cut him down from his tree branch and cooked him up, and I asked mom what dick tasted like, and she didn't know so we tossed it in the frying pan a bit and chewed on it some, but it was nasty so I spit my bite in the fire." Baby Boo looked up from her fire starting. Mole seemed proud of herself. Jilly made to speak, but, for the first time since her year of silence, she couldn't think of a thing to say. The Inn Everything deplorable in that world drained toward the inn at the Town of Lost Souls. Riffraff amassed there in the nights to freely congregate in debauchery. Black-eyed drifters, grim-faced ladies with skirts freely hiked. Each customer foul toothed, speaking slurred profanities. The heavy scent of sandalwood incense, bathtub alcohol, stale tobacco smoke, bodily fluids. Macabre tunes hovered in this fluky bouquet, banged out on an out-of-tune pianola by a one-eyed midget who lowed abridged lyrics seemingly unintended for his audience—hooligans who tarried and bungled their ways about the first-floor saloon of the joint. 
Upstairs, the doors of a dozen lodging rooms lined a banistered walkway, the banister leading to a staircase that descended against the back wall. In the far front of the inn, on the lower level, a ragged pool table stood, and men lingered about it holding cues, their eyes blinking involuntarily whenever the balls struck each other. Glasses were set on the copper bar, clinked together. Doors were opened and closed. Men and women called harassments at each other. Occasionally, wild fucking could be heard from above. In his room, sitting on the floor with his back to the door, clutching the leg that could not be traded, the hermit's wild eyes seemed even wilder. He could feel the sin of that scene against him, could taste the dereliction in the air. Terrified, he resigned to sleep in his current position, making it less possible that he might be molested in the night. The bed in his room stayed made. He wouldn't touch it. He feared it had been soaked through with bad deeds that would most likely bring him nightmares or gift his skin with some communicable irritation. He whispered some half-remembered prayers to himself. He tried not to listen to the noise of all the bad deeds around him. The smell of the putrid leg clutched to him—aroma so blubbery you could mark it with teeth. Downstairs, a young boy was dragged into the madness of the saloon by his hair, tossed onto the pool table and held down to it. The leader of this enterprise addressed the crowd. He seemed like a disheveled clergyman from some religion that worshipped motor oil. "This boy here is the son of my brother, but I will not call him nephew yet. His father is long lost to us, and his mother is buried and I have grown tired of his irksome ways." The boy struggled with his restrainers. Flopped about on the felt. Spat where he could spit. "He is, however, the only family I have, and we are aiming now to tame him some." The uncle pulled a bowie knife from his belt. 
"I apologize for the noise, but we've no other place to perform the procedure." Some madnesses are so bizarre that they entice witnessing. Those in the bar who had been preoccupied with debauchery, who had been lost in the melee of drinking and lustful deeds, tapered their pursuits in order to watch this odious operation. Even the bartender waved off those waiting for drinks, came out from his station with a bottle in each hand and took up closer to the pool table, pouring willy-nilly shots out for his patrons nearest him, spilling here and there, as his attention was on the nephew now writhing with all his might. The uncle laid his hand upon the boy's chest, bulldozed his body down, the thump of it sending shockwaves through the floor, rattling feet. "I'll give you the choice, a luxury, when you think on it right." "Fuck off," rumbled the boy. His smooth face streaked with tears, his mouth puckered, lips quivering. "Not yet," said the uncle. "Arm or leg?" He raised up from the boy, tested the sharpness of his knife blade with the pad of his thumb, awaited the answer. But no answer came. The uncle cleared his throat. "Choose or I'll choose for you." He tapped the light that dangled above the pool table. His face was marred with scars that must've been made by human fingernails. His eyes twinkled hideously. "Arm or leg?" Now the audience participated. "Leg," screamed one, "hooks don't work for shit." He raised his right arm, and the hook on the end of it gleamed. "You ever walk on a peg, dumb fucker?" hollered some other patron. "Give him the arm you use least, I say." Then there was a sort of blathering of opinions. A cacophony of suggestions. Expert testimony as to which limb to choose. Simple stories about hindrances that would emerge either way. Proclamations as to how no matter what, he'd get used to it. "Neither's so bad," said some spooky woman. She was missing her left leg. She was missing her right arm. 
She only had two teeth in her mouth and she licked at them wickedly as though trying to make them clean with her tongue. The uncle shook his head, a perturbed mien in his eyes. "I'll give you ten seconds," he said, stepping up closer to the boy. "Nine." "Don't make me choose neither," the boy said, bucking. "I'll be perfect from now on. I swear it. I'll do as you say." "Eight." Chants came up from the crowd. "Leg. Leg. Leg." And in opposition. "Arm. Arm. Arm." Louder now, the uncle, as though to some god or magistrate or arbiter or czar, "Seven." "Leg. Leg." "Six." "Arm. Arm." "Five." All the heaving preposterous stenches of breath and adrenaline and shock and dismay. All the stale light and dirty glasses with drops of moonshine in them quavering with the impromptu ceremony. "Four." "Let me keep 'em both. Daddy wouldn't want it like this." The boy heaved this way and that. Wriggling and floundering and tussling and strife in his eyes. Every vein preached forward behind his skin as though he'd burst open in some seismic rupture, spew out of himself like a tidal wave and wash against those who crowded him. "Three. Daddy's dead or doesn't care else he'd be here." "Leg. Leg." "Arm. Arm." And the one impartial woman with her wild open mouth licking her two teeth furiously, a kind of fucked-up glee in her eyes. "Two." And the uncle set into the boy, his arm oscillating, so his bowie knife dangled now over the shoulder, then over the hip. The shoulder. "Arm. Arm." The hip. "Leg. Leg." "One," said the uncle. "Wait, wait," said the boy. A great silence fell. The labored breathing of the held-down boy. The knife. Above the shoulder. Above the hip. The stillness of the waiting crowd. All things pitched up to nothing. Like some arrow shot straight in the air that pauses, briefly, before gravity calls it home. A voice came from the shadows, "Sorensen says that hearing silence is a successful perception of an absence of sound." "The fuck?" said the uncle. 
"And that pauses depend on sounds just as the hole of a doughnut depends on the doughnut." "Who the hell is that?" Doc stepped forward. "The boss," he said. All eyes averted then. To be able to see it as Doc saw it. The quick flick of the gaze of all present toward the ground. Obedience in gesture form. "I thought I'd weigh in." Doc went then to the edge of the table. "I got a leg just this morning for the machine. So, I say we take an arm, keep things even." The arm faction cheered. The boy struggled. "Which one you use least?" Doc asked. He patted the boy's brow. Gazed into his eyes. Bowed his head a bit, showing pity. "No, sir, please. You can help me, I know it," said the boy. "Look at these people, son. You can tell from their eyes. It's going to happen. You can't change minds when they're this certain of what they want. It's important you answer. And answer me true: which arm do you use the least?" The boy struggled some more. Thrashed in his captors' grips. But he was spent. "My left," he said, his black eyes rimmed so deeply red they looked as though they might launch from his face like rockets. "You sure?" said Doc. "Yes." "Think it through," said Doc. "All the tasks you do. You use your right arm the most? Opening doors? Wiping your ass?" "I'm sure," the boy said. "I'm right-handed. I barely even use the left one for anything I don't guess." "Good," said Doc. "So long as you're sure." "I'm certain. I'm certain." He nodded as much as his restrainers allowed him. "Right-handed all my life." Doc beheld the uncle, the nephew. "Not anymore you're not. Give me his right arm." "Wait. I was lying. I was lying." The crowd erupted. Guttural hymns the color of nightmares. Sound so thick you could eat it off a cracker. "It'll be a good lesson for you," Doc said. "Don't ever let anyone choose for you." "Fucking no," the nephew gasped. "Uncle, take the left. Uncle." 
Doc motioned for it to begin, and the uncle swiped the massive bowie blade across the right shoulder's flesh, and the thing drained open, the pool-table felt guzzling up the oozing blood, and the stain crawling out around the boy the way universes expand. The shrieking then. The wild bemoaning of the boy and the euphoric cheering of the crowd that heaved forward into the table's frame and formed some thick audience, roaring with jubilation as the boy's favorite arm was hacked from him. Upstairs, in his room, the hermit heard the bloodcurdling gasp of a child in distress. He clutched his foul-stenched, severed leg to his body like a baby with a blanket. Morning Mira, Murk, and Bale woke dew glazed. Their fire was down to hot ashes. Mira stirred them, threw in some kindling. Smoke fussed toward the sky. She had corn batter in a container that she poured into her Dutch oven and set it in the coals. She sat over the vessel, watching their breakfast cook. Bale and Murk packed up and Mira divvied up hunks of hot bread onto pieces of parchment that she passed to the boys, and they sat and ate in silence, the early morning quiet. Only a few birds cooed. "We'll leave the Dutch oven here. It's too hot to carry and we should be back this way." "Should?" There was corn pone in Murk's teeth. "Well, hopefully not you." Murk fingered out the pone, said, "You'll regret that if it's true." "How much farther is it?" Bale asked. "A few hours. We might could've walked all the way last night, but we'd've gotten there dead tired." Bale stretched and yawned. "I'm dead tired right now." When they were all packed up, and about to make way, they heard voices from the trees. Bickering of sorts. Cuss words. Later, Bale would say he thought for certain they were about to die. Morning at the Inn When light touched the window of his room, our leg holder sagged awake. He sniffed a bit, the turning stench of leg thick about him, and he stood, letting the thing fall away from his grip. 
Flies buzzed about. Elsewhere in the inn, the faint grumbling of others' affairs—vomitus hacking and grousing and rustling. He arranged himself as well as he was able, clutched up his leg and exited the room. Descending the stairs, his eyes awed at the catastrophe. Broken chairs and busted bottles. Limp humans strewn much like fallen cushions. Passed out or dead, he wasn't certain. Behind the bar, a young boy sloshed a rag in a bucket of gray water, plucked it out and wrung it near dry before wiping the surfaces around him. "Is it like this every morning?" "Nah," the boy said, wiping. "Usually worse." The hermit left the inn, moved through the town with reservation. He wanted to head home, but he felt the need to unload this second leg first. He stopped a grubby woman who busied herself setting out trinkets to sell. She had a cart that she worked from, but he couldn't see common traits to the things she sold. There were lightbulbs and pocketknives and eyeglasses and measuring cups, so perhaps she was kindred to him somehow, and he wondered about his own diversified possessions back home, and he wished he had them so he could set up a stand to rival her own. "Think there'd be a market for this?" He showed her the leg. "Dear God," she said. "It's too turned for the machine." "Tried the machine yesterday, but they didn't want it." "Well," she polished up some of her wares, "I can't think a use for it beyond that, but who knows, you might get lucky." The streets were ghost-town run down, morning-after ill. "When's the town get going?" "When it does," the woman said. The hermit walked the lonely place, his only company the rotting leg's foulness. Suggestion Jilly and Mole deliberated while Baby Boo kept Murk, Bale, and Mira at gunpoint. "I mean," said Jilly, "if he's behind a train, we kill him. If he's running away from a train we're attacking, we kill him." "Yeah, but if they throw him out. Is he technically still a domer?" 
"If that little redheaded fucker did what he was told, we wouldn't even have to deal with this shit." "When we find him, I'm gonna ask for a different position." Murk, Mira, and Bale communicated with their eyes and Murk made to change the topic. "How long ago he go missing? This redhead? Maybe we can help." "None your fucking business," said Jilly. "Y'all try the town?" "I ain't going to that rat-shit place to hobnob with miscreants like you. That boy went there he can stay there as far as I'm concerned." "We could go look for you." Murk wriggled a bit where he sat. "We're headed that way." Baby Boo put the barrel of her rifle against Murk's forehead. "Just a suggestion." Murk backed off it all. The breakfast fire crackled and some dark sense of anticipation seemed tethered to its crackling. A notion of doom in the wood's turning to embers. "Wait, there might be something to that," said Mole. "I mean, the town's probably filled with his kind of people. Scum draws scum to scum. Usually." "Sure," said Jilly, "but we just gonna let 'em waltz off with nothing but our trust to guide their way." "We wouldn't just be trusting them," Mole said. Into Town "We shouldn't've left him," Mira said. "I don't know we had a choice." "Murk, what the hell are we doing?" She scanned his burnt-looking eyes. "Going to find Clover. Going to town." "And then? Who knows what they'll do to Bale when we come back empty-handed." They moved from the brush into a field of sun-destroyed grass, the blades of it near ossified and yellow as plaque—a crunchy carpet of lifeless growth. Hundreds of yards off, the town stood. A shabby thing that sprang from charred buildings which slunk with decay, and graffitied walls stood lonesome and untethered. Here and there, roofs from old houses sat leveled on the ground. Telephone poles like ancient tombstones cocked this way and that in gangly fashion. A putrid reek of long-dead something. Murk snorted a deep breath of it. 
"Yeah, it's a half-baked agenda we got, but what are the options? You said yourself the comet's coming soon, so if this does work we gotta find out fast." "Maybe we should just make camp a day, go back and say we didn't see him. Just see what transpires." "And not even try?" They both stood still, disregarding the town. "That would be a waste of so many steps. It's fear you got." "Probably." Murk patted her shoulder. "I'll go. You wait for me." "Really? Think that would be better? Just me? Here? Alone?" Murk studied the area. "Yeah, probably not." "Give me a second," Mira said. Murk kneeled, drank what shadow he had. "When I say leave we leave." "Of course," said Murk. "And I want a kind of password." "Password?" "Or a signal. Something only you know. So if I say it or do it, we're out." "Name it." Mira thought. "A world with two suns," she said. "I say that, and we go." "I like it," said Murk. "You could sing it, even." "Fuck that," said Mira. "I ain't singing your song." Then they made their way through the wasting field. Bale with the Women "You said they had to come back by tomorrow," Bale said. "But what time tomorrow? Sunrise? Noon?" "You don't need to know that," Jilly said. Bale fought against his restraints a bit. "But I mean, if you're gonna shoot me, it'd be nice to know how much time I'm looking at." Jilly had revulsion in her eyes. "You think most people know when they're gonna die? You think that's a convenience the Lord should give us? That we are eternally aware of the events leading up to our perishing? Given some sort of parameters by which to verify the coming of our demise? You think Mole and I should lay that out for you? That we should bless you with an understanding of your death beyond that which most receive? Because I assure you most people's death comes as an absolute shock to them. Comes in the night while they sleep or from the brilliant who knows where in the form of a bullet. 
Strikes them down while they are deep in a state of unawares and doesn't even give them time enough to make a brief amends to God or whoever they love, and they might die with sin in their mouths and hate in their soul. But we should treat you better? Why? So you can get right with whatever dome deity you swear to, lay it all out bare to them. Profess your sorrow internally that your lord might forgive all your malfeasances? Nope. I've known better people than you who never got that luxury, and I'll be damned if I'm gonna do better by you than them. Why would you even deserve that?" Bale clicked his pretty teeth. "Just making small talk, I guess." Mole kneeled in the dirt. Mole tended the fire. Meeting Doc When they entered the town, folks looked up from their doings with suspicious eyes. A single wanderer, upon entering that township, might go relatively unnoticed, but for there to be two strangers emerging into the dawdling street seemed oddball, a near insurrection brewing. Whispers flittered, shoulders were touched. Folks went telling other folks, the way the curious do. "Come look," they'd say to one another, and this curiosity was not lost on Mira and Murk. "I think it's you that's drawing their attention," said Mira. "Shit, I might as well have been born on this street. Bet they all think they've seen me before." Murk waved at one of the whisperers. Said simply, "Howdy." The man he spoke at was a gritty little beast, his rat features tense and nervous, a coating on him like chalk dust. "The hell y'all from?" he asked, shooing toward Murk. He stood askance on a wooden walkway, a creaking sound coming from his footing as he gestured. "Around," Murk said. From far enough away, perhaps there was an order to it, but slunk into the trimmings of it, the town was a chaotic contraption. Every action seemed accident driven. They turned up a street and were met by a small crowd carrying clubs. The man in front pointed at Murk, "You're a peg legger," he said. 
"Mighty astute of you," said Murk. "Y'all gonna have to come with me." "That's a bit forward, ain't it? I don't even know your name." "It's Monroe," Monroe said. "That don't fit our plans." "Adjust them then. Or we can adjust them for you." Mira quickly counted about a dozen men clutching some manner of crude weaponry. "It's okay," she said. "We can make time for you." Monroe led them through the belch-scented streets, in and out of catastrophic herds of black-eyed drudgers—limp and staggering, a dirge in the noise of their lowly to and fro. They worked their way to Doc and his machine, and Doc sat on his stool—the humble throne he held dominion from. "I like your jacket," Doc said. He shook hands with Murk and Mira. "What brings you?" "To town?" Murk said. "Our steps. To you?" he continued. "These people." "Monroe," Doc said, "go relax somewhere." "Close by, Doc?" "Close enough." Doc had Murk and Mira follow him toward the machine. "Folks like you sometimes come to town for vengeance or something, so I intercept the legless and armless as they enter, bring 'em over to ascertain their intentions. Your leg's not here." "I'm not looking for it." "But you're looking for something." "Just somebody," Mira said. "It's not important." Doc pocketed his knuckles, leaned back a bit. "Well, if they're here, I know it. Not much happens in this town I don't know. Your eyes aren't black." "Neither are yours." Doc fondled one of the hands that dangled from the machine. He fingered its fingers, and the hand seemed to pull away slightly, ball into a fist. "Never had the notion. Who is it you're after?" "I'd rather not say." The hands on the machine squirmed somewhat. "No harm in looking for someone." "Listen," said Mira. "We don't want to end up on your machine." "I don't put whole people up there. Sorensen asks: When we shake hands, do our shadows become one?" "Excuse me?" "We can shake on it. Tell me who you're after, and I'll promise you won't end up on my machine. 
I'm curious is the thing. You have my attention." Mira and Murk communed with their eyes. "Well?" said Doc. Mira stepped toward Doc, held out her hand. The two shook, but, on the earth, where Doc's shadow fell, it was as if he shook hands with nothing. "Joe Clover," Mira said. Doc chuckled at the sky, called out, "Monroe, they've come to town to find Clover." And Monroe's men broke into hoopla, some of them rolling—an over-exaggerated bemusement at the notion. "I don't get it," said Mira. "See this leg here," Doc took up a leg, its flesh discoloring. "This came to me yesterday, and it's not gonna take. You can tell, right?" He ran the back of his finger down the limb. "Just looks unhealthy." Doc grabbed the leg and snapped it from its housing and blood drooled off the end of it. "I pull 'em once I can tell." He tossed it away to the dirt. "It'd be too hard on my machine to leave it up there decaying, and Joe Clover was kind of like that. A thing that was turning." Doc caressed a few more feet, just sort of slightly touching the heels. "Y'all come with me," he said. Mira and Murk followed to a building's door. A sign on the wall said jail house. "What's in there?" asked Mira. Doc beamed at the sign. "What you're after." The smell of the place was noxious. Moist, human odors and rust. They heard stirring before their eyes adjusted to the dim light. "Who's with you?" a voice asked. "Visitors," said Doc. "I don't know 'em." "Well they know you." "I like the look of the one. She a gift?" Clover stood there naked, started pawing his cock. "Knock it off," said Doc. "Wanna be my creature?" he asked Mira. "I'll call in Monroe." "Oh, c'mon, Doc," he said, "Just put her against the bars." His voice raspy with lust. "Monroe!" hollered Doc, and Monroe came in, his eyes alert. "Restrain him," Doc said pointing at Clover, and Monroe opened the cage, thumped Clover a few times with his fists, drove him to the ground and pulled an arm behind his back. 
"Fucking Monroe prick," Clover hollered. "Put her in here, Doc. Just a little favor." Clover leered up at Mira with his butchered hair and sickened eyes, and he grinded against the floor, sort of whispering, "Creature," up at her. "Well," said Doc, "hope y'all didn't come far. That is about what he has to offer." The sound of Clover's struggle, his creaturing. "Mind if I kill him?" said Murk. There seemed no sense in lying. "I figured anybody looking for Clover would want to, but, no, I can't let it happen. Don't get me wrong, he deserves it. Probably deserves a million different deaths for a million different reasons, but as of now he is dying a death he deserves for tampering with me, and the death I have chosen for him is a prolonged one." Clover's face twitched, he gritted teeth, screeched, "Creature." And Monroe thumped the back of his head a few times, and one shot must've got him good, because he quit fucking the floor, quit whispering creature. Inside the small prison, that last bit of struggle reverberated. Somewhere, water slurped into a foul-smelling drain. "There's no sun in here," said Murk. "No sun, no joy, no laughter. Not for Clover." Monroe got up, stepped out of the cell, turned and locked it. Silence owned the room. Mira watched Clover hunkered in his nakedness, his skin gray with filth. She looked to the other cell, brimming with treasures. "What's all that?" "My things," Doc told her. "Stuff I like." Mira's mouth hung open. Murk just shifted weight. "It's not how you wanted it to go. I can tell just by looking, but it doesn't have to be all bad. Stay. Enjoy yourself. Have a drink at the bar on me. You'll be safe your whole stay, and that's a better deal than most visitors get." He put his thumbs in his vest pockets. "I mean, don't do anything stupid. But by all means, enjoy yourselves." Bale and the Women Baby Boo watched Bale as Jilly and Mole took naps. 
In the fire, a log cracked in half, the top end of it upsetting into the ashes below with a thud, and Jilly sat up, brought her rifle to her shoulder, aimed it at Bale. She surveyed the scene. Sat back assured that all was well. Then Mole sat up, shocked. "What is it?" she yelped. "Oh ain't you a ninja," Jilly said to her. "Or like a cowboy that sleeps with his eyes open watching the herd for fear of wolves." "Shut up," said Mole and she laid back on the grass. "Wait," said Jilly. "Tell him your story." "Huh?" "Just the way you told it to me. See if he likes it any better." "Nah." "C'mon, Mole, it'll help me get back to sleep." "Fine." She sat up and faced Bale. "My uncle had a tiny dick," she said. "And it had a funny flavor." Jilly had a big old grin on, watching Bale to see his reaction. Bale looked at Mole. He looked at Jilly. He looked at Mole again. "Yeah, that's probably all uncles," he told her. The Stranger Murk and Mira left the jail and moved in the direction they'd come from, back toward home. They didn't speak. Their tired footsteps caught the street in scuffs and low chatter floated from those who moved along. A passive sense of failure plagued them. Their hearts felt slack and bruised. Low emotions dozed in their stomachs, fluttered like static in their throats. "All this damn way," Mira said. "And Bale held hostage." "I wanna sit down." They found a makeshift bench in the shade of a ramshackle building that bore a vacancy sign, and they supposed it a kind of inn. "Good a place as any." They moved toward it downheartedly, slunk upon the wooden thing that creaked and shimmied in a way that suggested it might not hold them. Mira cradled her head in her hands, her elbows resting on her knees. Murk thrust his peg leg out toward the dirt road, sat favoring his good leg's ass cheek. "Could be worse," said Murk. "How the fuck so?" A man hoisting a kind of brown stained fabric bundled in a heap came and stood in front of them. He stunk of rotting. 
"What is that?" Mira asked. "Take it somewhere else." She held her hand over her mouth and nose like a mask. "You lost a leg," the man said to Murk. "Lost ain't quite the right word for it." "But it's gone, replaced by that peg, and I got an idea. A thing I might part with for the right price." "You wanna sell us an idea?" "No, no." He pulled back the fabric to reveal the leg, swollen with rigor mortis. "What the fuck is that?" Mira said. "A leg," said the stranger. "Thought you could use it." He motioned the thing toward Murk. "A rotten leg? For what?" "For the bones," said the stranger. "Just get rid all this." He took a razor blade from his pocket and began nicking away bits of the gray skin, the darker muscle beneath. Dabs of the leg dropped off to the dirt. "Use the bones instead of the peg." "No," said Murk. "Get the fuck off out of here and take that foulness with you." Murk stood and pushed the stranger, who dropped the razor, stumbled back a ways then rewrapped the leg. "This town's just fools," he said. "A perfectly good leg just wasted on all y'all." He kind of held up the leg up for folks to see and walked off toward wherever else he had a mind to go, grumbling nonsense as he went. "Crazy bastard," said Mira. Murk shook his hair. The bits of leg on the road had a funny, yellow shimmer to them. A kind of iridescence to their death. But then the razor snatched Mira's attention. "Murk," she said, "what would you do if you couldn't drink shadows?" Murk's black eyes pondered. "Kill myself," he said. "That's what I thought," said Mira. "We need to find a mouse." five In outer space, unperceived by the naked human eye, traveling at up to forty miles per second, Halley's Comet raced toward its perihelion where it would appear like a blade of white fire slicing across the Earth's aphotic night sky. Its visibility could last weeks, depending on distance and atmospheric conditions. A celestial anomaly that will spook creatures' hearts for certain. 
You must understand, for most of collective human consciousness, comets have symbolized God's wrath—black omens of streaking light. The first known mention of a comet's appearance couples the sighting with an execution. Montezuma saw two comets shortly before Cortes reduced the Aztec civilization to history. This particular comet, Halley's, is credited with William the Conqueror's taking of England, Genghis Khan's invasion of Europe, the birth of Samuel Clemens, and the death of Mark Twain, who claimed "these two unaccountable freaks; they came in together, they must go out together." Of course, by Twain's time, Halley's Comet was completely accounted for. Edmond Halley, namesake of the celestial body in question, postulated that perhaps a close-passing comet caused the great flood of Genesis and Gilgamesh. Which one motivated that catastrophe is a mystery, but when Halley's Comet passed the Earth in 1682 almost nothing was known about these wonders at all. Halley had, at that time, been studying the strange things for less than two years, and it wouldn't be until 1705 that he would publish his Synopsis of the Astronomy of Comets, claiming that recorded sightings in 1531, 1607, and 1682 were all of the same orbiting comet that would come again in 1758. In between those two sightings, the one that he saw and the one he predicted, Edmond Halley died. They named the returning comet after him. Since its last pass in 2061, societies have faltered and waned. Anemic versions of civilizations left behind like footprints. Before his death, in what would become the groundwork for actuarial statistics, Edmond Halley proposed that in order for mankind to sustain its population "it is necessary for each married couple to have four children." He came to that conclusion through a deep study of Paris and London, taking into account the typical rate of births, marriages, and deaths in the context of those urban areas' population densities. 
It's hard to say if he accounted at all for bastard babies, and in the shadow-addicted world, marriages were nearly as rare as comet sightings themselves, but regardless of all that: mankind had not lived up to the numbers, and hardly any humans will see the comet at all. In outer space, adhering to its orbit, the coma of the comet, a frozen peanut-shaped thing, just a little bigger than Manhattan, races along toward humanity's view. Murk and the Machine "Something wrong?" Doc asked. Murk had taken off his jacket, held it draped over an arm. Black-eyed fools stumbled about the machine, their dark veins throbbing beneath their pale skin, and Murk contemplated them as they staggered. "I wanna try it." "The machine?" said Doc. "Yeah. I got this for trade." He lifted up his jacket. Doc regarded it. Earlier, Murk and Mira had searched a woodpile for a mouse. It took them nearly an hour of upending logs, but they came on one, and Mira whispered her silent language at it, convinced it to lend assistance. In the pocket of Murk's jacket, it sat hid, the hermit's razor blade accompanying it. The plan was this, the mouse would lay in wait. It would cower in the jacket until taken to the jail, to the cell where Doc kept his things. Once there, the mouse would abscond from the garment, cross into Unlucky Clover's cell, carrying the blade in its mouth, drop the thing close enough so Clover'd notice the implement. In that way, Clover could do the murdering deed for Mira, Murk, and Bale. Off himself and set Mira and her mother free. "It is a nice jacket," said Doc. "Tell you what. One turn on the machine. One night in the inn. I don't haggle, and I won't entertain haggling." "Two nights," Murk said. He didn't want to seem too eager. "One." The arms and legs dangled. Some of the limbs had already been sipped from. The shadows that remained listed back and forth on the ground. Mira had made him promise to take the shadow of an arm. 
"We don't need you any darker than that," she had said, assuming, correctly so, that the smaller quantity of shade produced by an arm would send him into a less-deep stupor, and that if he consumed a leg's worth, Murk would go mad beyond recognition. "Fine," said Murk. "One night and one swallow." Doc held out his hand and Murk handed him the jacket. Once the garment traded hands, the shadow of the thing reappeared on the ground. Doc put his face to the jacket, breathed deep its leather scent. "Monroe," he called, and Monroe came running. "Put this in the cell for me and get this man and his friend a room at the inn." And Doc handed the jacket away. "Take your pick," said Doc, and Murk ventured beneath the machine, touched a few of the limbs, stood in the mess of them, the arms and legs dangling down around him like gruesome treasures. Standing in that wreckage, the smell of near death thick on his tongue, Murk nearly abandoned his notions, nearly dropped to the shadow of a leg, but something at the last minute gave him pause. Perhaps the sound of Mira's voice echoed at him. He touched a wrist. Ran his fingers toward the elbow. Delicate, but it swayed on its moorings. The hair of it, twinkling in the sun. Murk dropped to the ground. The shadow lay on sand, mysterious now to Murk, strange and delightful. Murk realized then what it represented. His fake leg tingled, phantom feelings that flittered impossibly. Each of those shadows—or the vacant spaces where shadows should be there on the ground—denoted some horrific amputation, some unwarranted molestation that bore a permanent absence. Murk conceptualized the accosted creatures they belonged to, far away sufferers, most likely with blackened-out eyes. He imagined—in a fascinating moment that seemed unbound by true time—how their lost bodies, those that belonged to these limbs, were elsewhere dawdling through wildernesses or struggling to clutch at some necessary task with a single hand. 
And then the occasion of the dismemberments occurred to him. In one sharp synapse, some glowing space in his brain, lit up by grievous fantasy, Murk manufactured the dozens of settings that would represent the moments of these limbs being filched off their rightful owners. In the snag of a river, at the trunk of some live oak, mired in a forgotten town's wreckage, in the crotch of a ravine: the addled personages beset by terrorists, who'd chased down prey in order to offend them forever, placated only by this dastardly deed when shadows were made unavailable to them. Envision now the victims gulping at shadows with faces terrified, their panic-stricken bodies tangled in shock. Mouths bore open with last-resort binging. Dreadful. Tortuous. Hog-shaped consumption. Savage-streaked gobbling. And the caterwauling of the assailants, robust sermonizing of the pain to come. Then they produced blades. Murk's face lost in thought at these dangling limbs. Murk's mind bereft, crammed down in the cracks of his dark illusioning. Spent bits of suffering being dealt out like cards. A counterclockwise circle. Endlessly moving. Now at this arm . . . Now at that leg . . . Snippets of the slicing, new faces and discomforts. A bad positioning of blood-slicked limbs. Teeth glimmering absurdly in shrieked-open mouths. Angry-faced disasters that keep transmitting in Murk's mind. Hollow and haunting. Shriveled and swollen. Happening the way accidents happen, the way dead children die repeatedly in their mothers' minds daily. For no reason. Set off by some unfair trigger of memory. The shape of an eyebrow. The crack in a sidewalk. The false music of a distant nothing mesmerized by a breeze that isn't even there. And for what? So that we can say of these lost peoples or past tragedies, of these wound scars or cemetery plots that we existed? That we hovered with hearts beating in the motion of the multiverses with minds that could accumulate harms in order to remind us we were alive? 
That it wasn't all just dreaming. That it can be touched with fingers in the future and that these feelings will launch our hurts anew. "Son," said Doc, and Murk's face showed shock at him. Doc shook his head, "You don't have all day to choose." Murk lowered. Sniffed the darkness, felt its odor in his gut like a cramp. His mouth went wet, his skin jittery. He lowered his face. The black magic of his doing it like a dream. The intoxicant was on him even before his lips touched the shade. He huffed and the dim drew from the dirt like a whisper, slithered across his tongue, filled his body with the taste of precious woe, and things got glossy. He could feel his eyes darkening, his skin color draining. His veins thrummed, constricted. Coiled down. Warmed through. A hollow kind of wizardry engulfed him. The world lurked beneath Murk's understanding. "What do you think?" said Doc, and the language he manufactured seemed a thing of glass that Murk could rub his fingers over, cut himself with. He tried to answer, but the ability was lost to him. His mouth felt like a stranger's mouth—like stranger's teeth touched his teeth. Murk stood. The town quivered about him. Had he forgotten how to swallow? Were his hands still his own? Doc laughed. The laughter squirreled away into the afternoon. Murk wandered away. Murk didn't feel like himself. Near dusk, Doc reached for the pencil behind his ear, set his teeth into it, clicked his tongue. He took the pencil from his mouth, placed it again behind his ear. He paced. He eyed the streets some more. He wiped his palms against his vest. "Monroe," he said, "I imagine the day's done." Monroe stood from a stool where he had been nearly nodding off. He shook his head to get the blood flowing, his dreadlocks fanning out this way and that. "Want me to throw the switch?" "Go for it," Doc said. "Close her down." Doc walked off toward the inn, all the townsfolk smiling at him as he passed. 
Sunset "I don't think I'll ever get used to that," Bale said, his eyes on the sunset. "More small talk?" Jilly asked. She was cleaning her rifle. "Got any more stories?" Bale asked. "She doesn't," said Jilly. "She doesn't even really have the one." Mole threw a pebble at their fire. They'd kept it going the whole day rather than have Baby Boo chance trying to build another. "I got stories," she said. "Really," said Jilly. "Let's hear one then." Mole rubbed her chin. "I'm not in the mood." "Thank fucking God." "I don't get it," said Bale. "Well, not everyone's as talkative as you, dome boy. We don't all just love the sound of our own voices and feel compelled to wax philosophical on the sunset and our opinions of it." "No," said Bale. "What did you guys have to do to get so shafted?" "Shafted?" said Jilly. "Seems to me," Bale said, "this is the kind of job you'd get if you were in trouble. Like, back at the train, all the good soldiers were guards and stuff. This child-chasing stuff, following around this redhead and all, seems like a thing you'd do if you pissed someone off or weren't worth much. Like, our equivalent would be latrine detail. Don't get me wrong, that's an admirable job. Hard as fuck though. Like this. Just kind of thankless shit work." "The good soldiers?" said Jilly. "At the train? What good soldiers might those be? Every time we've gone up against a train we've taken 'em down easy as could be." "And you've been part of that?" "Excuse me?" "You've been there when the trains were taken down? Or were you off chasing a little boy?" "Mole," said Jilly. "I don't like how he's got me thinking." "Everyone in our army has an important job," Mole said to Bale. "We've been at trains before. Right now, we're here." "Sure," said Bale. "But that doesn't mean that some jobs aren't more important than others. Chasing a little boy is very important, but it's probably not as important as attacking a train. Right?" "Can we shoot him right now?" said Jilly. "No." 
"Why not then?" "Because we promised." "So, in your army," continued Bale. "What would you have to do to get a more important, important job?" "Yeah?" Jilly asked Mole. "What would we have to do?" Mole messed with a fingernail. "I guess we'd have to prove ourselves, somehow." "Seems to me," said Bale, "y'all should get on figuring out how to do that. Instead of sitting around bickering about who tells the shittiest stories, or whatever." Bale looked away from them. "But that's just, like, small talk." The fire crackled. The sun set. Doc and Mira Doc saw Mira sitting on a rickety bench, her head hung, elbows on her knees. "Got problems?" Mira shook her head. "I'm fine." "You have a room," Doc said. "Your friend and I traded." Doc stood then in front of Mira, his eyes filled with gladness. "Come in, we'll get you set up." "I stuck my head in there earlier. Looks a little rough." "It is, but I told you earlier. Won't nobody bother you. I know what it's like to travel far for no reason. You drink?" "Drink?" "Moonshine." "I've had fruit wine before. It made me feel funny." "It's supposed to. C'mon." They entered the inn and the place went quiet. Every eye in the saloon aimed at Doc. A one-armed man raised his good hand which clutched a glass. His hook-shaped nose seemed daffy. "To Doc," he hollered, and the whole bar toasted. "To Doc," they said in unison, his name coming out slurred and grand. The one-armed man set his glass down, ran his fingers through his stringy hair. Doc shooed off the gesture, and the crowd got noisy. "You're famous," said Mira. Doc shrugged. "They don't want to end up like Clover." They approached the bar, and the bartender skipped the other drinkers, handed Doc two glasses and a bottle of moonshine. "Over here," said Doc, and the two moved to a yellow-pine bench on the far wall from the piano. The music chimed in whacky fashion. The chatter of the fellow drinkers clucked out like white noise. They sat down and Doc uncorked the bottle. 
He poured for Mira first, then himself. "It's strong." Mira sniffed the spirit. It smelled like fire. She put the glass to her lips, let it roll to her tongue. The tiniest swallow. "Whoo," she said, lowering the glass. "What's it made from?" She licked her lips madly, breathed with a wide opened mouth. Doc drained his glass, poured another. "Stuff." Mira raised her glass to her lips. The second sip was easier. Doc said, "Tell me about you." "Nothing to tell." Her eyes so gentle. "Sure there is. We all got stories or lies. Live near?" "Not too far off. With my mom." "She still have her shadow?" "Had it stolen." "My mother was the same, and I do not miss that taste." "Taste?" "They're all the same," said Doc. "Mine liked rabbit shadows." "Mine likes birds." "Hell that's not so bad. At least they're in the right direction. Up in the sky, their shadows falling on the ground. The ground critters—squirrels, rabbits, possums, rats—you have to catch them. I thought of the machine on account of her. It turned out different than I figured. She was dead by the time I had it built. She probably wouldn't have liked it anyhow. These people," he pointed to the folks about, "human shade is what they're all about." "You get sick of being around them? All the unruliness." Men were shoving men. At the pool table, a woman with black eyes lined up a shot. "Not really. Sometimes I miss arguments. Most everyone here just agrees with me." Doc poured Mira more moonshine. He motioned across to the bartender to come around, and he trotted over, smelling like bleach. "Monroe told you about a room earlier?" "Yes, sir." "It belongs to her," Doc pointed to Mira. "Sure thing," the bartender said. He fished a key from his pocket and handed it to her. Doc filled and drained his glass again. He passed the bottle and the glass to the bartender. "Well," Doc said. "I'll now retire." "Wait," Mira said, "there's no way to fix what they got? Your mom? Mine?" Doc looked at his feet. 
"It is a thing I puzzle over still. If my mom was alive, I could take her apart. I could hang each of her arms from my machine. Each of her legs. Those I could get the shadow back for. But even if I did, if I took them off the machine, put them back on her body, even if the shock of all that cutting and reattaching didn't kill her, the way surgery can, I don't think her shadow would come back. It's something in the mind or soul. Used to, they called it turning the sun to darkness. But I don't know if that's how it works. My father thought if we just thought about shadows the right way, the riddle of it would become clear to us. I've thought about them every way I could." Mira sipped her moonshine again. "You don't have to pretend you like it." "I'm coming around to it, I think." "People do," said Doc. Mira was moonshine loose, fuddled. The noise of the inn happened around them. Doc waved goodbye. Mira nursed her glass and watched the insanity unfold. Murk so Dark It was night when Murk showed up at the inn. He had a lost expression on his face, a sort of spooked, empty grin. Mira was half loopy. Her hair hung across her eyes. She waved at him. "Murk," she hollered. She stood, grabbed his arm, dragged him to a seat near the pool table. "This guy can't play piano for shit." Broken notes fluttered about. "Where've you been?" "I can't really hear." Mira hollered, "Where've you been?" Murk pointed to the door he just came through. "You're like, so fucked up," Mira said. Her head muddy with moonshine, her eyes sort of trundling, but she could make out his super darkness. "You okay?" Murk's black eyes like giant nothings in his head, moved over Mira entirely. From her toes to her eyes. He licked at his lips. The music bounced and jangled. The smell of strangers getting sloppy. "Maybe." The music stopped and the one-eyed midget came waddling over from the pianola. "Any requests?" he asked, his voice quasi-maniacal. "I can barely remember my name," Murk said. 
"Wait," said Mira, "ooh," she kind of clapped her hands, "know any Doors?" "Do I know any Doors?" said the midget. "That's my fucking band. I know 'Light my Fire' and I know 'Love Me Two Times.'" "Play 'em both," said Mira. "Fuckin A," said the midget, and he went back to the pianola and started banging away, lowing the lyrics. "You know that it would be untrue . . ." Mira and Murk listened a bit. Murk kind of chewing nothing, drumming the fingers of one hand on a knee. Mira bobbed her head. Murk cleared his throat. "This sucks," said Murk. "I told you, he can't play for shit." Murk clicked his tongue. "It's not that. The whole thing sucks. The words. The music. Every single part of it sucks." Murk reached down for Mira's empty glass. "I'll be right back," he said, and Mira listened to the music as Murk went up to the bar. She looked around at everyone. Tried to guess at what they were. Like, beyond people. How did they get along? What did they do to occupy their time? That seemed the strangest task of a life—all the slow moments between the good and bad things. But then she thought, What was she? Mira—finder of wild shade, scrambler of eggs. Murk was a shadow addict. Bale a domer, or ex-domer. A dishwasher. A pointer of guns. Haver of a Mohawk, now. Mira, she thought, cutter of hair. And around this time Murk showed up with two full glasses and a pair of scissors. "You're a fucking mind reader, Murk!" "What?" Mira reached for the glass, the scissors. Murk handed her the glass but kept the scissors. "I can do it," he said. "I owe you," said Mira. "Owe me something else." Murk began to blindly cut locks from his head, and they dropped on his lap and on the floor around him, but everyone who noticed his doing the task seemed to treat it like some normal, anytime thing. Mira drank and Murk hacked his hair, and the midget started the second song and Mira asked, "Like this one better?" "Fuh-uh," said Murk. "And now you can't say I got your hair anymore." 
He took a few more hacks at it before leaning back into his seat, sipping at his moonshine. "Nope," said Mira, "now you got butchered-ass hair. Like that Unlucky Clover." "Shit," said Murk, "start that and I'll make you my creature." "Gross." "We got a room," said Murk. "You could be my creature." He weirded up his face. "Creature," he said. "Creature." "Fuck that," said Mira, and then they were both kind of laughing. But some darkness fluttered in Murk. A quandary developed. Some equation he would've normally terminated travelled toward a bad solution. "Why haven't we ever though?" Murk asked. "I've known you a long time." Mira considered it. "Creatured?" "No," said Murk. "But, like, normal." "Dunno. Just never been like that." "Let's make it like that then." He had a garbage expression in his eyes. "We could go to the room and fuck." "What? No way. You're not being you." "C'mon." "Let's talk about something else. I'm serious." Her eyes found the pool table. "If Bale was here, you guys could play pool. I bet you'd beat him." Murk puffed up a bit. Shadow stretching his eyes. "Do you fuck Bale?" he asked. "Is that what the deal is?" "Oh, my God, Murk. What the fuck? Be you." "This is me," he said, "this is how much shadow I should always have." His words seemed to come from another dimension, and the noise gained the attention of the room. The pianola player ceased to play. From behind the bar, the bartender emerged. He came to them, propped his hip against the pool table, picked up a cue. "He bothering you ma'am?" "The fuck do you care?" asked Murk. The bartender seemed tired. "Normally, sir, I wouldn't, but I've got orders to make sure she's not disturbed. Might I suggest . . ." "I don't take suggestions." "What about orders?" the bartender said. The customers of the place seemed to amass behind him in support. "Because we can make it an order, if we need to." Murk grunted, gandered at the crowd. "Fuck it." His head bloomed then a bit. Swelled up with anger. 
Thinned out and enlarged. "Do you take suggestions?" the bartender asked Mira. "You could retire to your room. Top of the stairs, second door on your left." Mira made to go, to retreat from Murk's grossness. "Will you be okay?" "Fuck do you care?" She stood, passed through the peculiar crowd, climbed the stairs. She looked back down. All the black eyes peered at her with madness. All the pale faces and dicey figures. The dusty room, a sort of forgotten and bad feeling to it. Murk yelled up at her. "Bet you wish you'd left me in that fucking tree, huh?" Mira didn't know what to think. She turned and made her way into the room. It was lit with a ghostly electric light. Blue shadows seemed to hover on all things. The bed sagged in the middle, felt soft, revolted her. The blankets were pilly and the pillows hard. She stared at the ceiling, a sort of grisly nervousness on her skin. Below, Murk guzzled moonshine and half-danced to the bad music and draped his arms over strangers' shoulders and spun with them across the floor. They were toasting and whooping and careening and making merry, and a lady with bad teeth pressed her face to his face, the lids of her eyes painted purple with makeup. They became inseparable in the revelry, fastened to each other with some lust-fashioned adhesive. She didn't strike his fancy in any way beyond his body being desirous, and they kissed often with heaved-open mouths, kneading their tongues into one another's tongues, a sort of mucking of orifices. If she had a name, Murk never learned it. And the crowd about them blurred into noise and dropped free of them and they were ascending the stairs. They dribbled into disarray. The yelping of their carnalities dictating endeavors. Inside a room, filled with raucous caterwauling from the saloon below, Murk explored the flubbed body of his partner, her blemished skin and skeletal protrusions, the reek of neglect, sour stench of vicious years. 
Tussling into the corruption of it, Murk grabbed at her skull, lifted away a mottled hairpiece he hadn't recognized was false, and the blonde wiglet flopped in his fist as she tore open his shirt and licked her way down his belly, and his black eyes were at the ceiling then, and his ears filled with explosions. Bits were lost. Now he is on top of her, the mattress swaying. Standing then, she on the bed. Bent over. Spread somewhat. Night shattering like cymbals. Percussion and cacophony. All this time, in these actions, a sort of to and fro. Intermittently, making eye contact. Anger maybe? A pinch of violence? Who was she even? For that matter, who was he? The fake blonde hair clutched in his fist like a hammer. Sometimes bad decisions keep lasting forever. Razor Blade Joe Clover came conscious in the night. He tossed quickly, batted his face with his palms. Had something been on him? His head ached. His body felt poisonous. He sat up, his legs crossed. His eyes seemed lost in the dark, but incrementally shapes became clear. The toilet, for instance. Its metallic bowl glinting what light there was. The odor of urine was permanent but seemed advanced now, so Clover stood up and flushed, sat back on the ground as the thing filled with fresh water. It was then he saw the mouse. It came scampering across the floor, and his eyes followed the path of it. An entertainment of sorts. He'd gotten used to them. Roaches too. Before, he'd see a bug and get squeamish, fill with anger and hew the thing down with his heel. But now, so long lonely in his state of capture, he welcomed any creature that might chance to share his cell. It bounded along but paused at some object. It ran a circle around the thing. Clover couldn't tell what it was. He neared it, and the mouse tarried off. Clover stopped. Slowed. He wanted the mouse to stay. In all honesty, he wished to touch it. Gently. 
Perhaps the back of his finger across the top of its head, and, if he was lucky, maybe the little mouse would lean into his touch, stay on there in that place with him to offer some kind of company. But, it was not to be. The closer he got, the further the mouse moved. However, the thing. That it had run a circle round. On the floor like a jewel. It glistened. Clover didn't believe it. A razor? He fist-rubbed his eyes. He pressed the pad of his thumb against it firmly and it stuck to his skin, and he lifted his hand and the thing just stayed pressed there. He turned it in the light. A twinkle ran down the blade, dimmed where the thing was nicked. He gripped, raked the implement across the thumbnail of his other hand and flecks of the nail shaved toward the ground. Joy filled him. He looked toward the sky. "Thank you," he said to whatever. Then, taking the blade, he ran the sharp edge up his arm, passing it through the thick veins of his wrist, and bands of blood slipped down toward his fingers, dripped in beads to the dirty ground. He traded the blade to his now weakened grip, the sticky blood flooding across the thing as he made at the other arm, hacking less precisely, but still rendering damage, he let open his other wrist, gushing indignation. His life spilling out of him. His freedom on its way. Doc Rages He woke early as with all days and called for his mug of coffee to drink as he dressed. Servants attended him always. They stood in wait, watched over his possessions, guarded his holdings in the night. Coffee, for instance, was a luxury most couldn't afford. Indeed, even the brew Doc got was a thin strain of the stuff. More hot water than anything else, maybe the smell of beans coming across in the steam, but only faintly. His stores of it were under lock and key. Ancient cans reclaimed from refuse piles and abandonments, brought to him by patrons who knew Doc was a man of fine tastes. 
Once ready, he stepped into the streets, still except for his motions and the doings of a few folks under his employ. He prided himself on being an early riser, liked to take to the dawn-lit streets with gusto. He moved briskly toward his machine, the morning almost glowing, and he tossed back the canvas covers to inspect aspects of the device. Chiefly, he was concerned with appendages. He studied them for signs of struggle, decay. If they seemed chafed or ashy, he wiped them with lotion. If their bandages had gone soggy, he'd wind away the spent gauze and replace that with fresh. He hummed softly as he did this, a dignity in his laboring. Once certain that all things were in order, Doc would take to a stool, cross his arms and sit satisfied. But this morning an off stillness plagued him. He had some disquieting premonition, and around this time he heard his name called. "Doc," his name came again. It was Monroe and he was agitated, running toward Doc from the doorway of the jail. "Doc, Doc," he hollered again. "You gotta come see." Doc was rarely yelled for, so he knew a notable thing had occurred. He pitched what was left of his coffee and stood, made his way to Monroe who had paused in his proceeding and was now merely waving at Doc to come. He and Monroe entered the jail. On the ground, Joe Clover lay dead—the odd smell of new death and the queer stillness of a freshly departed soul haunted the jail. "What the hell happened?" Doc asked, but Monroe said nothing. "Wasn't a guard on duty?" "All night." "Who the fuck? Which-a one?" "Bones." Doc yanked the keys from the wall and snatched open the cell, the cage door clanging against the concrete wall. He jumped inside and dropped to his knees near the body. "Call Bones," said Doc, and he clenched his jaw and his fists and waited sternly for his failed guardsman. Bones held his hat when he showed, twitching like a kicked dog. Doc barked up at him. "Care to explain?" Bones's eyes filled with awe. "I have no idea." 
"You watched the door?" "All night." "No coming or going?" "Not at all." Doc found the razor sunk in Clover's spilled blood, and fished it out, held it in his fingers. "How'd he get this?" "I haven't the foggiest," said Bones. Doc wiped blood on his pants. "That girl," he said. "Her friend." "Maybe tossed it to him? When y'all came in?" Monroe said. "Couldn't have," said Doc. "I'd've seen it done. Shit," he said. He pocketed the blade and stepped to the sink and cleaned his hands proper. "But let's fucking find 'em." The three moved from the jail and prowled toward the inn. There seemed a rage in all their eyes. The purpose which drove them was palpable. Into the inn they strode, a general and his conscripts. Doc took a look at the fatigue and disarray set in on the establishment, "Would it kill you to clean a bit?" he screamed. The bartender stood from behind the bar, a green hue to his face. A thick sickness in his visage. "Sorry, Doc," he said. "The girl?" "What?" said the bartender. He grabbed a pitcher and brought it toward his face like he might vomit. "Oh hell. From last night. The stranger girl." Doc started up the stairs, pointed toward the rooms. "Which one?" The bartender held up a finger. "One?" He shook his head. "No," he said and gagged. "Three." Then he coughed up some yuck, caught it in the receptacle. Doc wasted no time. He stormed to three and kicked in the door, and it burst wide, splinters coughing off the jamb, and Mira shot up wide eyed from her bed, clutching blankets to her chest. "How'd you do it?" Doc asked her. Mira rubbed her eyes. "I won't tell you anything." Murk in Disgust His eyes fired open and his heart filled with haunts. What had he done? Vague threads of the night before came back to him and he gazed about feverishly. 
The thing, for that was how he now considered her, with which he'd spent the night, lay passed out, her face smothered into a pillow, so only her skull showed to his eyes, with its patchy, thin hair clinging to it like spiderwebs. She seemed skeletal in her repose. A crypt thing out of its resting place. There were crude tattoos across the wrinkly skin of her back. Varicose veins rose from her legs, everywhere. From beyond his door, Murk heard commotion. Tussling and hollering and then heavy steps on the stairs. He fumbled from the mattress and crept to the window to watch the street below. Doc and Monroe dragged Mira toward the jail, Mira screaming at some point, "I ain't telling nothing." They handled her roughly. Murk set to thought. Of all the bad doings his behavior had brought to fruition, this was the worst of it. It had been intended, as planned before Murk visited the machine, that they'd meet up as soon as able and leave town together. His conversation. The haircutting. It came back to him in lewd heaves, and he curled up with his hands holding his head, disgusted at himself for how he'd behaved. But could he fix it? His head spun circles, but he seemed clearer minded than the night prior. Murk scrutinized the nude thing on the bed. Was this why the leaving hadn't happened? Murk went to the door and put his ear to it. He could hear conversation below, so he cracked the door and stared down into the saloon. "Think," some stranger was hollering at the bartender, who was doubled over with his face in a pitcher's mouth. "Where'd he go?" The bartender shook his head and puked some, and Murk eased the door closed. He probed the room, surveying the situation. When he saw the wig, his escape plan came clear to him. Mira the Prisoner She'd been deposited in the cell with Clover and the door had been locked tight. She pressed against the wall, slumped to the ground, mortified. "How?" "I don't know anything," said Mira. 
"Earlier," said Doc, "you said you wouldn't tell anything." "What's the difference?" "At this moment," Doc said, "I don't know. I'm not sure what I think should happen to you on account. But I am certain of this. You're gonna stay in there until I figure it out. And he's gonna stay in there with you." He motioned to dead Joe Clover. "And I am a patient man. If it takes me a year to decide your fate, so be it." Doc left the jail. Mira hid her eyes from her dead cellmate. Murk the Woman He looked in the mirror and messed with the wig some more—blonde and a bit tiny on his noggin, stringy and napped. The dress's neckline was low, showed off the hairs on his chest, was tight on his shoulders. He couldn't fit his foot in her shoe, so he kept his boot on. "Heinous," Murk said to his reflection, but then he saw the dress's owner in her stew, and decided that the black garment probably looked better on him by a bit. He closed his eyes at his reflection, opened them, said, "Let's see how it goes." Murk opened the door and made his way to the staircase. When he got to the top of it, the bartender hollered. "You see that stranger last night?" "I did not," said Murk. He made no attempts to disguise his voice. He figured, if his chest didn't give him away, nothing would. The bartender nodded up at him, puked a bit more, and Murk climbed down the stairs, his peg tapping as he descended. He crossed the saloon. Regal, sort of, in his getup. He stepped out into the street. All the light of the world. No one seemed to be watching him. He turned left. He headed out of town. He sort of had a plan. Jilly saw him first. She had her eyes to a pair of binoculars and was surveying the distance for Murk and Mira. "Well," she said to Bale. "I thought maybe this was them, but it appears to be some kind of hideous, one-legged woman. So I might get to shoot you still." "Lucky me," said Bale. "Wait, wait," said Jilly. "She's taken her hair off and looks even worse." 
Mole was juggling a few rocks she'd found. "Taken it off?" "Like a wig maybe," said Jilly. "Oh, hang on. Maybe." Jilly lowered the binoculars and went to Bale. His hands were still tied behind his back. "Look that way," Jilly said. "That your friend?" It took Bale a second. "Focus," he said. Jilly turned the focus wheel. "Stop. What the hell?" "Is it him?" Mole asked. "Yeah, but where's Mira?" Murk started waving his wig toward them all. "Don't shoot," he hollered out, his voice thinly coming from the distance. "You can ask him when he gets here," said Jilly, and, when he finally walked up, she said, "You make an ugly bitch." "Thanks," said Murk. "I need some water." Mole handed him a canteen. "Where the fuck's Mira?" Bale asked. Murk motioned back toward the town. He swallowed hard, cleared his throat. Every eye in the small encampment was locked on him. "I got good news and bad news." "The fuck you mean bad news?" said Bale. "And good." Murk took another sip. It was quiet. "Well, let us have it," said Jilly. "No need to gather further invitation." "The good news is," Murk said, "I found your friend." "Friend?" said Mole. Murk held his hand up to his chest, palm toward the earth. "Round this high tall. Redheaded. Tells people to suck on his dick and such. Paints skulls on stuff." Jilly clapped her hands. "That's our shithead for certain," she said. "But," said Murk, "it's complicated." "Complicated?" Mole dropped her rocks. "Mira and I were hoping to do y'all a solid. We thought we'd planned it out perfect." "Planned what?" said Bale. Murk stared at Bale hard, trying to convey something. "A jailbreak. They got him in a cell back there. Guarded." He sipped again from the canteen. "Well shit," said Jilly, "he probably deserves it." She looked at Mole. "What do ya think? That good enough just knowing that? Take the news to his mom." "But what about Mira?" Bale asked. He kicked dirt at Murk. "That's the bad news. We were trying to get him out of there. 
We were worried about you. We thought if we came back and just said we'd seen him that it wouldn't be enough. We figured we'd get him out and, like, use him as trade. We'd come to camp with him as a hostage, a knife to his neck or something, and say that we wouldn't give him up unless they gave you up, but Mira got caught in our trying, and right now, they got 'em both in the same cell. Sitting in the center of town. And I don't know what they've got planned for them, but my guess is a hanging. They got some kind of scaffolding built, it looks like. Not far from the jail." "That settles it then," Jilly said. "We wait for him to hang, and we go in to round up his corpse. Take him back to the Founder, let her know we tried." "It does not settle a damn thing," said Mole. "Shit. Listen to you. I'd rather him just disappear than our showing up with his neck-broken body. You ever see a hanged man? Their heads go a willy-nilly, and often their eyes pop from their skulls. We'd get demoted, showing up with him like that." "Then what, Mole? Go in there guns blazing?" "Normally, I wouldn't suggest it, but look at these past few days. This past outing. I'm done chasing this boy, and maybe the domer is right. Our going in there might be a way to get a more important job. Worst case, we get killed and don't have to worry about any of it anymore anyway." Jilly looked so proud. "You're the boss," she said. She grinned at Baby Boo, "Ash your eyes, bitch! We're gonna murder some black-blooded scoundrels." The three women went then to the fire, and Mole grabbed some dark soot and distributed it as they made ready, clucking incantations at one another as they rimmed their eyes with black. Bale whispered to Murk. "The redhead? I thought you . . ." "I did." "So what the fuck?" "The other part's true. The Mira part. They got her, man. And I ain't got no better ideas." Bale nodded. He didn't either. 
Doc Shuts Down the Machine It was only late afternoon, but Doc motioned for Monroe to flip the switch early. "You okay, Doc, you don't seem yourself?" "Monroe, I am bothered. That is the only word for it. I've been thinking about it all day. What is it? I keep asking myself. Doc, I say, what is it that you feel? Earlier I toyed with the term depressed. But I think depression implies a sort of sedentary reaction, a sort of letting the world happen and feeling small against the happening of it. Then I tried molested, but that word is too sexual in nature, and it also implies that something was done to me, and that's not really the case. Something was done to a possession of mine. I do not feel swindled, because that girl told me her intentions, and I don't feel naïve, because I knew that if given the opportunity to disobey me she would have taken it. I suppose I overestimated myself. So, it might be that I feel susceptible, but all that really means is I feel affected. So, I ask myself then, what is the nature of this affectation? And so, bothered is what I've come to decide is how I feel. Sort of troubled by the pestering of others. It seems like a generalization that, but I assure you I've come to it after much deliberation." "But you still want me to throw the switch, right?" "Yes, Monroe, I do," said Doc, and he sat on his bench and watched for the day to gray. Plan of Attack Once the women had ashed their eyes and had loaded up their rifles and had chanted their war cries, the five made ready to head toward town. "Y'all are just staying back," Jilly said to Murk and Bale. "You'll just get in the way." "What's the plan?" asked Murk. "You don't need to know," said Mole. "You're at least waiting till sundown, right?" "Sunset," said Mole. "We'll enter the town from the west and the sun will be behind us and it'll disorient them and we'll have good light by which to aim our rifles." "Yeah," said Murk, "but your shadows will be as long as they can get." "What?" 
"They'll drink them," said Murk. "That's the first thing they'll try to do." "You see my shadow on the ground?" "Forgot," Murk said. "I'm a pretty good shot," said Bale. "If you want to untie me and give me a rifle." "Truth is," said Jilly, "I'm not really sure of your status. You might should be shot dead right now, and you're probably most definitely a prisoner of war." "But Murk came back." "But our promise was to the girl," said Jilly. "Not this black-eyed she-man." Murk tugged at his dress. "I'm kind of getting used to it," he said. "Lets the breeze in." He walked over near the fire and picked up the Dutch oven. It had been taken out of the ashes and had cooled enough to carry. "What the hell you doing with that?" Jilly asked. "It's Mira's," he said. "I'm taking it back to her." Doc in Thought But maybe depressed was the right word. The sun was setting and Doc sat on his bench, thinking about it. He fetched lines of poetry from his mind, whispered them as he watched the day go: Till when they reached the other side, A dominie in gray Put gently up the evening bars, And led the flock away. Was he bereaved? Did Clover's passing do that to him? He always hated the man, but now that he was dead, did his absence create some kind of void in him? Monroe walked up with a bottle of spirits and a glass and handed the two things to Doc, but Doc declined the glass, instead dislodging the cork and sipping straight from the bottle. "Anything else I can bring you, sir?" "I don't think so." And, in all that, he never took his eyes off the sun. "I'll be at the inn saloon a bit, and I'll stick my head out every so often to check." "Mighty kind of you, Monroe, but I imagine I'll be fine. Enjoy yourself, I guess. Do whatever you feel." Monroe shrugged, made off silently, going back to the deformed music and festivities at the saloon. Smote maybe? But that just suggested a general striking of feeling. Pangs. But what is a pang anyhow? 
The shape of the feeling more than the feeling itself, as it occurred to Doc. How the feeling came at you. In a pang. But this feeling was like a long thick thing. An oppression? A setting down upon? As though burdened by a bother. A bother of someone else's forming and for someone else's pleasure. Saddled? Shit. He felt saddled. That's what it was. Not only by the girl who'd done in Clover, but by all of it. The town. The shadow addicts. Look how much he had done. Look how much order he had given his little corner of the world. And for what? What was the trade? What had he asked in turn? That he be allowed his prisoner. That if he felt so inclined to keep a man in a cage in misery for eternity, he be given that privilege. But now that Clover was dead, the ungrateful repayment for all his labors seemed to stick the saddle firmly on him. He felt a beast of burden. "Sure, Doc will carry the weight of you all," he thought. "Sure, Doc will hold the load." He drank more. He beheld the thinning sun. He half thought about tearing the whole place down. Tearing the Whole Place Down But Doc would have to get in line. They came with violence in their hearts and discipline in their steps. Jilly came from the north. Mole from the east and Baby Boo from the west. The idea was not to get in and out unnoticed. "We'll leave no one alive, save a few to spread the word of us. The Founder needs to know what we're capable of." Mole's orders, and so be it. Carnage was the course of action. The women soldiers affixed bayonets to their rifles, walked stealth-like with their barrels raised. Just because they wanted to take the fight to them, didn't mean they wouldn't use the element of surprise. "Don't shoot till you see the whites of their eyes," Jilly joked, and she tiptoed down the road, staying near the buildings, headed for the hum of the inn. 
Once there, hid up in shadow, the glow of the electric lights pulsing out in the dusk-laden street, Jilly lit a coal-oil explosive and lobbed it at the bat-wing doors of the place, and it burst open into flames which spat beyond the entrance, and the merriment making inside transformed to commotion and folks spilled toward the street. That's when the rifles rang out. Some ten or twelve of the saloon patrons were mowed down before they realized the trap. Most likely, they thought some mere fire had broken out, but when folks started falling out with holes in their heads, pushed back into the saloon by projectiles after trying to flee outdoors for safety, the realization of an aggression became clear to them. "Quit playing the fucking piano," someone screamed, but the thing still twinkled on in absurd fashion. Murk and Bale were a bit down the road, watching it all go down. Thick heaves of smoke stenched up the air. "You think you can untie my wrists?" Bale asked. Murk raised the Dutch oven he held. "My hands are full." "How many people you think they've killed so far?" "Sixteen?" From the second story window of the inn, a bald, naked woman dove into the street, landing on her feet, running toward the women soldiers, a razor blade in each hand, slicing in every direction at once, or at least trying to. Jilly ran her bayonet through her throat, kicked her to the ground. "Seventeen?" However many folks there were in the town, it seemed almost painfully clear that aside from those in the inn, there'd be few to lend assistance. If folks were in the surrounding buildings, they were staying hid up in them. The women moved into the inn methodically, shooting and knifing all those who had the misfortune of being there. Only a few of the men gave fight. They had crude weaponry, shovel handles and table legs, which they swung frantically, but the women had been trained to fight soldiers, and made quick work of them. 
Jilly was especially pleased at herself when she took a machete from a one-armed weirdo with a hooknose and used the thing to hack off his other arm. The blade was dull, so it took some doing, but it was worth it to her. The man scowled down at the spot she fought the arm off from. "Bitch, bitch, bitch," he kept calling. Then Jilly hit him in the head with his own arm, dropped the thing and fired a shot between his eyes and a belch of his brain popped out the back of his head, spritzed the liquor bottles behind the bar with gray matter. For a few minutes, the melee was obscene. The screaming and gunfire and the begging and pleading. The smell of blood and gunpowder. Fire and spilled booze. "Don't kill me, don't . . ." but they were already dead. After a bit, it quieted down, only the crackling of the inn on fire providing a kind of background score to the quick defeat, and then the piano caught ablaze and its strings popped and hollered making some murderous music. Broken bottles rolled about, tinkling. Moonshine dripped from surfaces like rain. Jilly stepped from the inn, her face soot smeared, holding her newly acquired machete. And that's when Murk and Bale saw it. Above Jilly in the deep-dark distance. Like a miniature fist of light throwing a haymaker across the heavens, Halley's Comet hung. Jilly appeared glorious beneath it. Some countess of the apocalypse come to carry off souls in a cauldron. "Where's this goddamned jail?" she screamed. Murk tilted his head in the direction and Jilly stomped off proudly, unaware. "Is that it?" said Bale, his eyes glued to the comet. "Gotta be," said Murk. The streaking thing of white against the navy, sullen sky. "And Clover?" Bale asked. "Y'all get him?" Then they heard the bang. Jilly and Murk When Mole entered the jail, the damage was done. "What happened?" she said to Jilly, who lay on the ground bleeding. "That motherfucker shot me." Doc lay dead on the ground, a machete blade through his face. 
In his hands, he cradled some ancient musket. A single-shot kind of thing with a six-foot barrel. Mole lowered herself to Jilly to inspect her. Baby Boo came running in, followed by Murk and Bale. Mira screamed with joy when she saw the boys. "Will I make it?" Jilly asked. "I don't think so," said Mole. "That fucking sucks." Her eyes went every which way. Came back to Mole's eyes. "I'm sorry." "It hurts like a thing there's no cuss big enough for." She looked down at her stomach. You could've fit a child's fists in the wound. "I can't make it stop bleeding." "Don't try. Just let it go." "Easy for you to say." Mole touched Jilly's brow. "Wanna hear a story?" "You fucking serious?" Jilly's face spasmed with pain. "You fucking with me now?" "It's good. I just thought of it." Jilly rolled her eyes. Swallowed hard. Her face was sweaty and pale. "Ain't like I could stop you." "I once had a friend named Jilly," Mole said. "And my stories bored her to death." Jilly giggled some. "You're making me bleed more," she said. Her eyes went wide at nothing and she was gone. Mole gaped over at the cell. "Where's the redhead?" She eyed Murk. Murk tilted the Dutch oven as if offering it away. Then, "I knew'd you was lying," Baby Boo screamed. A rifle went off and the Dutch oven clanged to the ground. Murk heaped back at the wall behind him, dragged to the floor. Black blood smearing the wall as he dropped off with closed eyes. Mole jumped up, stepped toward Baby Boo. "Your year starts again," she said, and Baby Boo broke down into sobbing and Mole ordered her out of the jail. But Mira was crying too, and the jail was thick with panic. Great sobs and confusion. "You," Mole said to Bale. "Come with me." She grabbed him by his ear and tugged him along. She took the cell keys from a hook on the wall and unlocked the door, forced Bale into the quarters along with Mira and dead Clover. Locked the door of the thing. Pitched the keys at Murk's bloody guts. 
"I never want to see either of you ever again." She hoisted Jilly over her shoulder and walked her into the night. For a solid hour Bale and Mira listened to Mole and Baby Boo working their way through the town, killing whoever was left. Then, the craziest thing, Murk's eyes opened wide. He looked about. "They gone?" he asked. Mira jumped up, grabbed the cell bars, leaned into them hard as though she might be able to push through them. "Murk," she sobbed. "You okay?" "Oh, fuck no," he said. "I don't know what all the bullet got, but most nothing on me's working. Look at my right hand." Mira did, and he was wagging his middle finger back and forth. "It's the only movement I got left, and I'm gonna keep doing it till I can't no more." "That's fine," Mira said. "You sure?" said Bale. He had dragged up to the bars. The shaved part of his head cool against the iron. "Can't just toss the keys at us?" "Keys?" "On your stomach," said Mira. "Yup," said Murk. "Can't do shit about it." "It's okay," said Bale. "Just relax." Murk looked down at his hand. "It stopped," he said. "Short lived," said Mira. Murk laughed. "Shit," he said, "she's right. The laughing and bleeding. They're tied together somehow." "Murk," said Mira, "do you remember any of it?" Murk lowered his eyes. A million moments rolled into a breath. "All of it," he said. "Not me," said Mira. "Last thing I remember is we were walking to town. And then somehow you got me my pot back. In between, I've forgotten." Murk smiled black blood at her. "I was thinking," he said, "when you get back," he coughed some and redness bloomed at the corner of his lips, "if it didn't work. Our killing Clover. Just tell your mom to change her mind. Tell her that it's fair now." His eyes dropped to his shot-open belly. "That this makes it fair. You know?" "Okay," said Mira. "Cause you owe me," Murk said. "On account of my jacket." Mira motioned to the other cell. "I'll be able to get it back for you soon," she said, "your jacket." 
"Nope," said Murk. "It'd never be the same." Then he quit breathing and his eyes got gray and Mira had never before seen them that color, and she wasn't certain if that's the true color they were, or if they'd been stained that way somehow by Murk's lifestyle. envoi Father and Son Across that world, the inhabitants were caught in personal adventures. Mira, Murk, and Bale had their thing, but elsewhere similar missions were underway. Dome people boarded trains; warring women loaded rifles. Symbiotic, perhaps. Not too far away, a lesser but similar journey was afoot. A man named Jessup helped his father Rondell along toward a hopeful murder that could never occur. They'd been traveling for days and days across inhospitable terrain, entirely ill-equipped for their expedition. This is the thing about people: they overestimate themselves or underestimate the world. They see mountains on the horizon and they set out for them with grandiose expectations. Perhaps they'll summit the thing by sunset. A day later, after having walked nonstop, the mountain is still on the horizon and their expectations have mollified. They want merely to camp in its shadow. Jessup and his father had not yet given up entirely, but the germ of surrender was beginning to incubate, at least for Jessup—he had less at stake in the endeavor and was becoming less certain that the healer's words were true. "What if we kill him," said Jessup, "and the comet comes, and you stay the same?" Rondell shook his head. "Let's not think like that." Hope is a great motivator, but it's a great deceiver too. They continued on in unwarranted pursuit, aiming at any clue afforded to them, asking strangers with black eyes for information that seemed laughable. "Redhead? Sure. There's a town of them. Three days north of here. By a lake filled with mermaids, each one with the nicest pair a titties you've ever seen." 
Sad thing was, while these fibs came, Rondell listened intently as if he was audience to some kind of sermonizing. "Hear that," he'd say to Jessup. "North. Just three days." And Jessup would sort of lead him on. But, his patience for all this was wearing thin. You have to understand how rare a thing a father was. The whole world seemed to be against their existing. It was hard enough to keep your shadow on the outside: to be surrounded by prey was irrational. Rondell was the only father Jessup or Rondell knew by name. If you met a man, clearly he'd been begotten, but the begetter was always off attending to his own affairs, often leaving behind sleepless women and hate-filled children, and Jessup was deeply indebted to his father for proving an anomaly in that regard. The man was his hero. They lived together in a hut surrounded by vast acres of grapefruit groves that Rondell and Jessup would harvest for grapefruit moonshine. They did this not to act as suppliers for folks, but, rather, they hoarded alcohol and lived a kind of loose life deeply hid away from the world in their thorny labyrinth of grapefruit trees. Jessup's mother had died in childbirth. He'd never seen a picture of her, but Rondell would often describe her. "Prettier than the stars when you're drunk." Deep, high praise for such an alcoholic. Some days, for no reason, Rondell would wake Jessup up and tell him, "Let's celebrate your ma today." And that meant they'd drink more than usual, and he'd sing half-remembered tunes they used to dance to, and he'd show Jessup blankets she made. It was after one of these celebrations that Rondell passed out in the grass and the redheaded boy gobbled his shadow. Jessup caught a glimpse of the thief tarrying off after the deed was done. He gave chase, following him down the thorny tree rows, but couldn't catch him. When he returned to his father, Rondell sat Indian style bawling. "I've lost your mother and now I've lost my shadow and it's not fair," he said. 
"It's not fair how life's treated me." Once Jessup had shaken some of the moonshine from his brain, he took Rondell to see the healer. Then their real journey began, but now it was beginning to seem pointless to Jessup, and Rondell sensed this, and he tried to explain: "I got a hooker once. It's the only thing I can compare it to. It was in this botched place called Boys' Town, a bunch of stinky streets with bedroom doors that opened right onto it. Little bedrooms, they were, and whores stood in those doorways. Most of them had diseases, I guess, but I still went inside, and the room was the smell of old sex and lavender and I got lost in the bad magic of it, fell into the bed with my whore and we did a gross amount of sex. Maybe hours. But for all the thrashing and all the wanting and all desirousness that dwelled in me, I could not seem to finish. We were yanking and snatching and tugging and what have you. And the whore was laughing at me, and she called in a friend, and I thought that might help, the indecency of it all. But outside the sun rose. And at some point I just gave up. Right now is like being in that room and knowing that it won't end how I want it to." "Can you stand?" Jessup asked. "I can try, but I just wanna fucking sleep." Jessup helped lift his father from the earth. Shouldering most of his weight, he led him, exhausted, across that desert. Days went on like that. And then the comet came. They watched it streak the sky with sadness in their hearts. A decapitated head of magic being lobbed across the sky. Where were they even then? And Jessup addressed his father. "We can't get you all the way home like this." It was the next morning and Jessup was bearing the brunt of Rondell's load. His feet worked over the grasses and brambles while his father's slogged through them. "Leave me, I suppose. I don't want to drag you down." Jessup couldn't catch shade: his father had barely slept. "I can't leave you, but we'll need to detour." "Detour? 
Add to the journey?" Jessup let Rondell slip from his shoulder, let him rest on the ground. Rondell fanned himself with a hand. "Nearby," said Jessup, "there's a town with a machine that makes shadows, kind of. I've only heard about it. They take trade, I don't know what, and we don't have much. But I think we have to try. Get you some sleep." Rondell closed his eyes. "How far?" "Maybe a day." "And home? The goats?" "Maybe three." Rondell opened his eyes again, looked off at the clouds, smiled as much as he could. "It's worth a shot," he said. Cellmates For hours after Murk died, Mira and Bale stayed silent. The only noise was their pained breathing. "I didn't want it like this," Mira said. "I know." "Can you untie my hands?" "I can try." Bale wriggled on the floor to her and she picked at the ropes. "I can't get it." She draped her arms over his shoulders. She pressed her face to his face. There were dead bodies all around them. There was blood and the gunshot smells. But inside that cell, their shared warmth was dazzling. "You think we'll die in here?" The puddle of blood around Clover discolored, the plasma separating. "I'm not sure." They dozed in and out. Dreamed with their eyes open. The shock of the world kept them living in fear. Kept reality blinking on and off like a strobe light—revealing brief snippets, scattering infinite shadows. Promising naught but captivity. In Need of Sleep Bale's eyes opened. Had something woken him? "Hello?" he heard the faint call. Some parch-throated holler. "Mira," Bale said. "Wake up." He gently shimmied her shoulder with his foot. She stirred, opened her eyes uneasily, seemed confused by her surroundings until she saw Bale's face. "What?" "Do you hear it?" The quiet so spare it took a while to be certain. "I don't hear anything." They waited, their eyes out of focus, striving at something with their ears. "Hello?" it came again—their ears like dry tongues against a form of wet in the noise of it. "That?" said Bale. "Maybe?" 
Could Mira be certain of anything sitting in that rookery of death, in that nest of lifeless bodies akimbo? Bale got to his feet, "In here," he screamed. "Hello! In here!" He leaned his shoulder into the bars, his scalp grazing the cold of one. Two men showed at the door. One with a face that Bale knew. "You," Bale said. "I know you." He strained toward him, all of his strength behind the gesture. The recognized man shook his head. "I don't think so." He made some sound that wasn't language with his throat, "I don't know prisoners." "Jessup," said Bale. "From the train, you shadow thiever." Jessup went gentle in his eyes. "Oh yeah," he said. "That's right." "Let us out of here," Bale said. Jessup touched his forehead, rubbed a bit where Bale's barrel had been. "I don't know, son. Think if you were in my position." He scrutinized the distance, the way a preacher might when saying something hard to the congregation. "I just come to a town of death, and you think I should let its only prisoners free?" His eyes back on Bale. "Look," said Bale, "I coulda blasted you to death and didn't." Jessup and Rondell conversed in whispers. "Sorry, son," Jessup said. The plaintive silence of an implied go fuck yourself. "It just don't seem right." And they made to leave. "But you owe me," said Bale, his voice cracking against the word owe so it sounded the way an animal might chirp it. Jessup and Rondell didn't answer. Only disappeared from the doorway. The thing just a rectangle of pitiful light. Wildly, Bale's mind spun to produce some sort of incentive. "The machine," he hollered. "I bet you're looking for it." Bale spoke to Mira, "You know where it is, right?" "Right outside," Mira said. In an inside-only voice, "See how it worked?" "Kind of." Bale yelled as loud as he could. "We can get your dad fixed up on the machine," he said. "He can use it for sleeping." A few moments passed. Long shapeless ones. The light in the doorway. A sort of signal of the end. 
Bale hung his head, all his energy dripping invisibly from his disheveled Mohawk, draining out like woe. But then, miraculously, Jessup returned. Cut the Ropes Mira hugged Jessup and Rondell when they unlocked the cell door, the stink of this embrace impossible to convey with words, but you could almost hold the way it smelled, all that traveling and discomfort. Could regard it like coinage. "You can show us the machine?" Jessup said. "Yeah," said Mira. "Just hang on." She crossed the room, reached out and pulled the machete free from Doc's face and held it—old blood dried against it like wax, Doc's eyes open, staring at some point beyond any true place. "Now hold the fuck on," said Jessup. "We did y'all a solid," he said. So much fear in his voice Mira could've sharpened the machete against it. "We've saved you from in that cell." Dead Clover lay bled out there, his nudity going bloat. Mira was confused until she realized the weapon she held. "No," she said. She rolled her eyes. "Bale, turn around." Bale did, eyeing Jessup as he turned. "Fucking pussy," Bale said to Jessup, and Mira cut the rope from his wrists and Bale shook his hands and stretched and thought about taking fists to Jessup's ribs. But Mira grabbed Jessup by a hand, led him and his father outside into the day's harsh sun. The machine was covered by its tarp. She untied a cord at the base of it and pulled the thing free. The legs and arms dangled from their hooks, rocking gently with the inertia of their reveal. "Have at it," she said, and Jessup helped his father to the shade on the sand, and he slurped up a bit of it and rolled off into a deep slumber and Jessup smiled goofily down at him, pawed his silver wisps of hair. Mira and Bale went back inside the jail. Murk's blood had gone back to red. He slumped on the ground in the mess of it. "We should bury him," Bale said. "I think that was the word." "No," said Mira. "It was something like that," Bale said. "Digging holes for the dead. 
Covering 'em with dirt." "That's the right word. I'm just not burying him." She touched Murk's butchered up hair. "Some other time, maybe. But right now, he can stay this way." Mira leaned down and claimed her Dutch oven. "A world with two suns," she said to Murk. She frowned up at Bale. "It's time to go home." Home "That was Jessup?" Mira said. "Yes." "And that was Jessup's dad?" "It was." "And the redhead is dead." "And the comet came and went." They moved across the same terrain that they'd traversed to get to the Town of Lost Souls, but they stayed silent for the most part, passing the landscape with contemplative eyes. Mira carried her Dutch oven in front of her like some wounded thing, and the lid rattled metallically. They came to Mira's home the next morning, and it looked the same as it always did—a shaggy thing hunkered back in the camouflage of ancient tree limbs—but there were a few additions. A lifeless chicken rested in the yard, its feathers yellowed out with death, kicked free by whatever, the wings sprawled in skeletal fashion, plumules drifting in the morning light, clung up on grass blades like frosting. Beside that bird, a young nanny goat with no shadow wandered bleating odd noises, stepping spasmodically, kicking chicken feathers. As they approached, they saw Mira's mother in her chair, her feet propped on the ancient TV, deeply asleep, drool dangling from her bottom lip, shiny like candy. Mira chucked her pot, and it banged on the lawn, and her mother slept through it, so Mira walked to her. "Wake up," Mira said. She shifted her weight. "Wake up!" But still her mother slept. Mira shook her. Grabbed her by her shoulders and rocked her hard back and forth until her mother's eyes opened, shocked and red. "Mira," she said. "You're home." "How'd you get by, Mom? How'd you sleep?" "What?" "Without me here to hunt you shadows?" "I don't . . ." 
Mira lifted her mother and dragged her toward the goat pen, her mother's feet kicking dirt as she dragged her, and Mira called to Bale. "Get me a goat." The world sped up. "What?" Bale said. "Get me a fucking goat, Bale. From the pen. If you want to live here you have to help me. So get me a fucking goat." Mira's voice full the way oceans are full. Bale ran off to the goat pen and struggled back a goat that brayed and fought him. Mira looked down at her mother, at her chalky face and victim-thick eyes. "You're gonna suck this goat shadow off the ground and I'm gonna watch you suck it and that's the way it's gonna be from now on and it's gonna be fair and I won't hear you say otherwise." Her mother's mouth dropped into some peculiar configuration and her eyes seemed spat into. "What I ever do to you?" "Nothing, Mom. You didn't do shit. And Joe Clover didn't either. And Murk didn't get shot. And Bale's brother's not dead. Because everything's fair. So tell yourself whatever you need to hear, but make it fair just like everything else. Make up some story or some God to help you understand it, and believe it, because once you're done making it fair, I'm gonna watch you suck that goat shadow and then we're never going to talk about it again. It'll just be a thing you do. Every day before sunset. We'll get you a bed out by the pen, and that's where you'll sleep, or we'll figure out some better way, but I'm not hunting shadow for you ever again." Mira's mother was almost offended by the goat Bale restrained and she made to cough up her mantra, "It's not . . ." But Mira just screamed, "Make it fair!" And she lowered her face so close to her mother's it seemed she might bite her eyes. "Is it fair?" Mira asked, and the few inches between them seemed to fill with imaginary sparks of Mira's hatred. Sometimes the space between two things can only be measured with emotions. A language was invented then. In Mira's eyes. 
It lived just long enough to convey one message then disappeared from all creation. "Is it fair?" Mira asked again, her eyes now calmed. Her mother hesitated, but lowered her face. Toward the shadow. And started sucking it up. Swallowed so much the goat panicked, snorted shrill, but just before the whole shadow was gone, Mira pulled her mother away, let her lay back and fall asleep in the grass, and Bale let the goat free and it stumbled around confused. Bale rubbed his forearms where the cuffs of his shirtsleeves reached and started to laugh. "What's funny?" said Mira. "Jessup," Bale said. "We didn't tell him the redhead was dead." Mira walked to Bale and they stood and were quiet and Bale put his arm around her. There was a sleep-deprived goat, bleating, stumbling queerly. There was a worn-out woman passed out in the morning grass. There was a boy holding a girl, and the girl didn't cast shadow. But other than that, everything seemed just fine. Acknowledgments Sip wouldn't be possible without the work of Roy Sorensen. My dear friend Cameron Pierce. My amazing agent, Bill Clegg. And my phenomenal family. Also my editor, Mark Doten—who is a godsend and a genius—and everyone else at SOHO. Especially Kevin "Bird Shirt" Murphy (who I've never met but known forever), Rachel Kowal, Abby Koski, and Bronwen Hruska.
Will the 2020s bring an end to the era of open offices? Meghan McCarty Carino Dec 31, 2019 https://www.marketplace.org/2019/12/31/will-the-2020s-bring-an-end-to-the-era-of-open-offices/ Collaboration or constant distraction? Google employees work in an open office in Berlin in January. Tobias Schwarz/AFP via Getty Images Democratic presidential candidate Michael Bloomberg has set the internet ablaze with a tweet — not about health care policy or impeachment, but office design. The billionaire latecomer to the race said as president, he would turn the East Room of the White House into an open office where he would sit right in the middle of his staff. Cue the howls from workers fed up with this trend for wall-less work spaces. As president, I'll turn the East Room into an open office plan, where I'll sit with our team. I'll use the Oval Office for some official functions – never for tweeting – but the rest of the time, I'll be where a leader should be: with the team. https://t.co/zIU3ZL5uIv pic.twitter.com/jLwWKJCmxw — Mike Bloomberg (@MikeBloomberg) December 30, 2019 If the first decade of the millennium ushered in the idea of open offices, the last decade is when they went mainstream, with companies of all stripes tearing down cubicles and walls, putting workers in airy open spaces with long communal tables for desks, like a big dinner party of creativity. But the hub of activity such design created was more of a nightmare to Kathryn Zimmerman, a programmer in Springfield, Virginia: "People laughing very loudly, playing music, on the phone — constant distraction," she said, echoing the complaints of a majority of workers who said they hate open offices in a 2018 survey from Bospar PR. A growing body of research demonstrates the negative effects of open offices, from a decrease in productivity and, surprisingly, communication, to an increase in sick days off. So will the next decade bring an end to open office madness? 
"I don't think any one extreme is the right solution," said Karen Dillon, author of "The Harvard Business Review Guide to Office Politics." "I think the combination is really important." She said the cost-saving benefits of open offices mean they're probably here to stay, but they could be evolving. Interior designer Mindy O'Gara, with commercial flooring firm Interface, sees more workplaces adding private spaces like small conference rooms, secluded corners and phone booths. "It's designing space that fits the various iterations of work," she said. "Yes, you've got these beautiful open spaces, but you also have this ability to pull off and be alone." When that's not an option, Dillon said there are minor changes workplaces can make to better foster concentration: "piping white noise in so it mutes the sound, putting sound barriers wherever possible. I don't think you ever want to create a workforce where they can only do their best thinking off-site."
Q: When is the composition of harmonic functions harmonic? I have already proven that if $u(z)$ is harmonic and $v(z)$ is analytic then $u(v(z))$ is harmonic. I'm guessing that if $u(z)$ and $v(z)$ are harmonic, then $u(v(z))$ is harmonic if and only if $v(z)$ is analytic. Let $A$ be an open subset of $\mathbb{C}$, and let $u: A \rightarrow \mathbb{R}$, $v: A\rightarrow \mathbb{R}$ be functions such that: $$ \nabla^2 u(z) = \frac{\partial^2}{\partial x^2}u(z) + \frac{\partial^2}{\partial y^2}u(z) = 0 $$ $$ \nabla^2 v(z) = \frac{\partial^2}{\partial x^2}v(z) + \frac{\partial^2}{\partial y^2}v(z) = 0 $$ We call such functions harmonic. Consider the composition $u(v(z))$; by definition this function is harmonic if: $$ \nabla^2 u(v(z)) = \frac{\partial^2}{\partial x^2}u(v(z)) + \frac{\partial^2}{\partial y^2}u(v(z)) = 0 $$ By the chain rule (writing $u'$, $u''$ for the derivatives of $u$ with respect to its argument): $$ \frac{\partial^2}{\partial x^2}u(v(z)) = u''(v(z)) \left( \frac{\partial v}{\partial x} \right)^2 + u'(v(z)) \, \frac{\partial^2 v}{\partial x^2} $$ $$ \frac{\partial^2}{\partial y^2}u(v(z)) = u''(v(z)) \left( \frac{\partial v}{\partial y} \right)^2 + u'(v(z)) \, \frac{\partial^2 v}{\partial y^2} $$ Because $v(z)$ is harmonic, the $u'(v(z))\,\nabla^2 v(z)$ term vanishes and we have: $$ \nabla^2 u(v(z)) = u''(v(z)) \left[\left( \frac{\partial v}{\partial x} \right)^2 + \left( \frac{\partial v}{\partial y} \right)^2 \right] $$ We can easily see that $\nabla^2 u(v(z)) = 0$ if $\nabla v(z) = 0$. I'm not sure what to do if $\nabla v(z) \neq 0$, or whether the conjecture is true.
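One observation that may help (a sketch, not a complete answer): since the first derivative of $u$ no longer appears, the factored form vanishes at a point $z$ precisely when

$$ u''(v(z)) = 0 \quad\text{or}\quad \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 = \left|\nabla v(z)\right|^2 = 0, $$

where $u''$ denotes the second derivative of $u$ with respect to its argument. So if $u$ is affine ($u'' \equiv 0$), the composition $u(v(z))$ is harmonic for every harmonic $v$; otherwise $\nabla v$ must vanish wherever $u''(v(z)) \neq 0$, which forces $v$ to be locally constant there. Note also that a real-valued $v$ that is analytic on a connected open set is necessarily constant (by the Cauchy–Riemann equations), so the conjecture as stated would need $v$ to be complex-valued, with harmonic real and imaginary parts.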
\section{Introduction} Low-resource languages lack manually annotated data to learn even the most basic models such as part-of-speech (POS) taggers. To compensate for the absence of direct supervision, work in cross-lingual learning and distant supervision has discovered creative use for a number of alternative data sources to learn feasible models: \begin{itemize}[noitemsep,leftmargin=*,label=--,topsep=0.25ex] \item aligned {\it parallel corpora} to project POS annotations to target languages \cite{yarowsky2001inducing,agic:ea:2015,fang2016learning}, \item noisy {\it tag dictionaries} for type-level approximation of full supervision \cite{li-et-al:2012}, \item combination of projection and {\it type constraints} \cite{das-petrov:2011,tackstrom:ea:2013}, \item rapid annotation of {\it seed training data} \cite{garrette-baldridge:2013:NAACL-HLT,garrette-mielens-baldridge:2013:ACL2013}. \end{itemize} However, only one or two compatible sources of distant supervision are typically employed. In reality severely under-resourced languages may require a more pragmatic ``take what you can get'' viewpoint. Our results suggest that combining supervision sources is the way to go about creating viable low-resource taggers. We propose a method to strike a balance between model simplicity and the capacity to easily integrate heterogeneous learning signals. Our system is a uniform neural model for POS tagging that learns from {\it disparate sources of distant supervision} ({\sc DsDs}). We use it to combine: i) multi-source annotation projection, ii) instance selection, iii) noisy tag dictionaries, and iv) distributed word and sub-word representations. We examine how far we can get by exploiting only the wide-coverage resources that are currently readily available for more than 300 languages, which is the breadth of the parallel corpus we employ. 
{\sc DsDs} yields a new state of the art by jointly leveraging disparate sources of distant supervision in an experiment with 25 languages. We demonstrate: i) substantial gains in carefully selecting high-quality instances in annotation projection, ii) the usefulness of lexicon features for neural tagging, and iii) the importance of word embeddings initialization for faster convergence. \begin{figure} \includegraphics[width=\columnwidth]{bilstm} \caption{Illustration of \textsc{DsDs} (Distant Supervision from Disparate Sources).} \label{fig:model} \end{figure} \section{Method} {\sc DsDs} is illustrated in Figure~\ref{fig:model}. The base model is a bidirectional long short-term memory network (bi-LSTM)~\cite{graves:schmidhuber:2005,Hochreiter:Schmidhuber:97,plank:ea:2016,kiperwasser2016}. Let $x_{1:n}$ be a given sequence of input vectors. In our base model, the input sequence consists of word embeddings $\vec{w}$ and the two output states of a character-level bi-LSTM $\vec{c}$. Given $x_{1:n}$ and a desired index {$i$}, the function $BiRNN_\theta(x_{1:n}, i)$ (here instantiated as LSTM) reads the input sequence in forward and reverse order, respectively, and uses the concatenated ($\circ$) output states as input for tag prediction at position $i$.\footnote{CRF decoding did not consistently improve POS accuracy, as recently also independently found~\cite{yang:2018:coling}.} Our model differs from prior work on the type of input vectors $x_{1:n}$ and distant data sources, in particular, we extend the input with lexicon embeddings, all described next. \paragraph{Annotation projection.} Ever since the seminal work of~\newcite{yarowsky2001inducing}, projecting sequential labels from source to target languages has been one of the most prevalent approaches to cross-lingual learning. Its only requirement is that parallel texts are available between the languages, and that the source side is annotated for POS. 
We apply the approach by~\newcite{agic2016multilingual}, where labels are projected from multiple sources and then decoded through weighted majority voting with word alignment probabilities and source POS tagger confidences. We exploit their wide-coverage Watchtower corpus (WTC), in contrast to the typically used Europarl data. Europarl covers 21 languages of the EU with 400k-2M sentence pairs, while WTC spans 300+ widely diverse languages with only 10-100k pairs, in effect sacrificing depth for breadth, and introducing a more radical domain shift. However, as our results show, a little projected data turns out to be the most beneficial, reinforcing breadth over depth. While \citet{agic2016multilingual} selected 20k projected sentences at random to train taggers, we propose a novel alternative: selection by {\em coverage}. We rank the target sentences by the percentage of words covered by word alignments from the 21 sources of~\newcite{agic2016multilingual}, and select the top $k$ covered instances for training. Specifically, we employ the mean coverage ranking of target sentences, whereby each target sentence is coupled with the arithmetic mean of the 21 individual word alignment coverages over the 21 source-language sentences. We show that this simple approach to instance selection offers substantial improvements: across all languages, we learn better taggers with significantly fewer training instances. \paragraph{Dictionaries.} Dictionaries are a useful source for distant supervision~\cite{li-et-al:2012,tackstrom:ea:2013}. There are several ways to exploit such information: i) as type constraints during encoding~\cite{tackstrom:ea:2013}, ii) to guide unsupervised learning~\cite{li-et-al:2012}, or iii) as additional signal at training.
We focus on the latter and evaluate two ways to integrate lexical knowledge into neural models, while comparing to the former two: a) by representing lexicon properties as an $n$-hot vector (e.g., if a word has two properties according to lexicon $src$, this yields a 2-hot vector; if the word is not present in $src$, a zero vector), with $m$ the number of lexicon properties; b) by \textit{embedding} the lexical features, i.e., $\vec{e}_{src}$ is the lexicon $src$ embedded into an $l$-dimensional space. We represent $\vec{e}_{src}$ as the concatenation of all $m$ embedded properties of length $l$, with a zero vector for absent entries. Tuning on the dev set, we found the second, embedding approach to perform best, and simple concatenation outperformed mean vector representations. We evaluate two dictionary sources, motivated by their easy accessibility for many languages: \textsc{Wiktionary}, a word type dictionary that maps tokens to one of the 12 Universal POS tags~\cite{li-et-al:2012,petrov:ea:2012}; and \textsc{UniMorph}, a morphological dictionary that provides inflectional paradigms across 350 languages~\citep{KIROV16.1077}. For Wiktionary, we use the freely available dictionaries from~\newcite{li-et-al:2012} and \newcite{agic:ea:2017}. The sizes of the dictionaries range from a few thousand entries (e.g., Hindi and Bulgarian) to 2M (Finnish UniMorph). Sizes are provided in Table~\ref{tbl:results}, first columns. UniMorph covers between 8 and 38 morphological properties (for English and Finnish, respectively). \paragraph{Word embeddings.} Embeddings are available for many languages. Pre-initialization of $\vec{w}$ offers consistent and considerable performance improvements in our distant supervision setup (Section~\ref{sec:results}). We use off-the-shelf Polyglot embeddings~\cite{polyglot}, which performed consistently better than FastText~\cite{bojanowski2016enriching}.
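To make the two lexicon-feature encodings concrete, here is a minimal sketch (not the released implementation; the toy lexicon, tags, and dimensions are invented, and the property embeddings are frozen random vectors, whereas the real model learns them jointly with the tagger):

```python
import random

# Hypothetical toy lexicon: word type -> set of lexicon properties (e.g. POS tags).
LEXICON = {"walk": {"NOUN", "VERB"}, "the": {"DET"}}
PROPERTIES = sorted({p for props in LEXICON.values() for p in props})  # m properties
L = 4  # embedding size per property (the paper tunes l = 40 on dev data)

random.seed(0)
# One embedding vector per lexicon property; trainable parameters in the real model.
prop_emb = {p: [random.gauss(0, 1) for _ in range(L)] for p in PROPERTIES}

def n_hot(word):
    """Variant (a): n-hot indicator over the property inventory; all zeros if OOV."""
    props = LEXICON.get(word, set())
    return [1.0 if p in props else 0.0 for p in PROPERTIES]

def embedded(word):
    """Variant (b): concatenation of the m property embeddings (length m * l),
    with a zero sub-vector for each property the word lacks (or for OOV words)."""
    props = LEXICON.get(word, set())
    vec = []
    for p in PROPERTIES:
        vec.extend(prop_emb[p] if p in props else [0.0] * L)
    return vec

print(n_hot("walk"))          # [0.0, 1.0, 1.0] -> a 2-hot vector over {DET, NOUN, VERB}
print(len(embedded("walk")))  # 12 = m * l
```

Either vector would then be concatenated to the word and character representations at each input position.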
\section{Experiments} \label{sec:experiments} \paragraph{Baselines.} We compare to the following weakly-supervised POS taggers: \begin{itemize}[noitemsep,leftmargin=*,label=--,topsep=0.25ex] \item \textsc{\textbf{Agic:}} Multi-source annotation projection with Bible parallel data by \citet{agic:ea:2015}. \item \textsc{\textbf{Das:}} The label propagation approach by \citet{das-petrov:2011} over Europarl data. \item \textsc{\textbf{Garrette:}} The approach by \citet{garrette-baldridge:2013:NAACL-HLT} that works with projections, dictionaries, and unlabeled target text. \item \textsc{\textbf{Li:}} Wiktionary supervision~\citep{li-et-al:2012}. \end{itemize} \paragraph{Data.} Our set of 25 languages is motivated by accessibility to embeddings and dictionaries. In all experiments we work with the 12 Universal POS tags~\cite{petrov:ea:2012}. For development, we use 21 dev sets of the Universal Dependencies 2.1 \cite{ud21}. We employ UD test sets on additional languages as well as the test sets of \citet{agic:ea:2015} to facilitate comparisons. Their test sets are a mixture of CoNLL \cite{buchholz-marsi:2006:CoNLL-X,nivre-EtAl:2007:EMNLP-CoNLL2007} and HamleDT test data \cite{zeman2014hamledt}, and are more distant from the training and development data. \paragraph{Model and parameters.} We extend an off-the-shelf state-of-the-art bi-LSTM tagger with lexicon information. The code is available at: \url{https://github.com/bplank/bilstm-aux}. The parameter $l$=$40$ was set on dev data across all languages. Besides using 10 epochs, word dropout rate ($p$=$.25$) and 40-dimensional lexicon embeddings, we use the parameters from~\newcite{plank:ea:2016}. For all experiments, we average over 3 randomly seeded runs, and provide mean accuracy. For the learning curve, we average over 5 random samples with 3 runs each. 
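The coverage-based instance selection described in the Method section can be sketched as follows (a toy illustration with invented sentences and three source languages instead of 21; not the released code):

```python
def mean_coverage(alignment_coverages):
    """Arithmetic mean of per-source word-alignment coverage for one target sentence."""
    return sum(alignment_coverages) / len(alignment_coverages)

def select_top_k(projected, k):
    """Rank projected target sentences by mean alignment coverage and keep the top k.

    `projected` is a list of (sentence, coverages) pairs, where `coverages` holds
    one value in [0, 1] per source language.
    """
    ranked = sorted(projected, key=lambda sc: mean_coverage(sc[1]), reverse=True)
    return [sent for sent, _ in ranked[:k]]

# Toy example: three target sentences, three source-language coverages each.
data = [
    ("sent_a", [0.9, 0.8, 1.0]),  # mean 0.9
    ("sent_b", [0.2, 0.3, 0.1]),  # mean 0.2
    ("sent_c", [0.6, 0.7, 0.5]),  # mean 0.6
]
print(select_top_k(data, 2))  # ['sent_a', 'sent_c']
```

In the paper's setting, `k` = 5k turned out to be the sweet spot, outperforming both random selection and training on all projected data.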
\section{Results}\label{sec:results} \begin{figure} \captionsetup{farskip=0pt} \centering \subfloat[sentence selection]{\includegraphics[width=0.499\columnwidth]{selection.pdf}} \subfloat[pre-trained embeddings]{\includegraphics[width=0.499\columnwidth]{curves.pdf}} \caption{\label{fig:projsents}Learning curves for: a) random vs. coverage-based sentence selection in annotation projection, both with Polyglot embeddings, and b) pre-trained embeddings on top of coverage-based selection. Means over 21 languages.} \end{figure} \begin{table*}[ht!] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|rr|rrrrrrrrrrrr} \toprule & \multicolumn{2}{c|}{\textsc{Lex} ($10^3$)} & \multicolumn{5}{c}{\textsc{Dev sets (UD2.1)}} & & \multicolumn{5}{c}{\textsc{Test sets}} \\ \textsc{Language} & \textsc{W} & \textsc{U} & 5k & TC$_W$ & $n$-hot$_W$& $\vec{e}_W$ & \textsc{DsDs} & & \textsc{Das} & \textsc{Li} & \textsc{Garrette} & \textsc{Agic} & \textsc{DsDs} \\ \midrule Bulgarian (bg) & 3 & 47 & 88.6 & 88.6 & 88.9 & 89.6 & \textbf{89.7} && -- & -- & 83.1 & 77.7 & \textbf{83.9}\\ Croatian (hr) & 20 & -- & 84.9 & \textbf{85.4} & 84.9 & 84.8 & $^\dagger$84.8 & &-- & -- & -- & 67.1 & $^\dagger$\textbf{78.0}\\ Czech (cs) & 14 & 72 & 86.6 & 86.6 & 86.9 & \textbf{87.6} & 87.2 && -- & -- & -- & 73.3 & \textbf{86.8} \\ Danish (da) & 22 & 24 & 89.6 & 89.0 & 89.8 & \textbf{90.2} & 90.0 && 83.2 & 83.3 & 78.8 & 79.0 & \textbf{84.5} \\ Dutch (nl) & 52 & 26 & 88.3 & 88.9 & 89.0 & 89.7 & \textbf{89.8} && 79.5 & \textbf{86.3} & -- & -- & 83.9\\ English (en) & 358 & 91 & 86.5 & \textbf{87.4} & 86.8 & 87.3 & 87.3 & &-- & \textbf{87.1} & 80.8 & 73.0 & 85.7 \\ Finnish (fi) & 104 & 2,345 & 81.5 & 81.2 & 81.8 & 82.4 & \textbf{82.4} & & -- & -- & --& -- & --\\ French (fr) & 17 & 274 & 91.0 & 89.6 & \textbf{91.7} & 91.2 & 91.4 && -- & -- & 85.5 & 76.6 & \textbf{88.7}\\ German (de) & 62 & 71 & 85.0 & 86.4 & 85.5 & 86.0 & \textbf{86.7} && 82.8 & 85.8 & \textbf{87.1} & 80.2 & 84.1\\ Greek (el) & 21 & -- & 80.6 & 
\textbf{85.7} & 80.2 & 80.5 & $^\dagger$80.5 && \textbf{82.5} & 79.2 & 64.4 & 52.3 & $^\dagger$81.1\\ Hebrew (he) & 3 & 12 & 76.0 & \textbf{76.1} & 75.5 & 74.9 & 75.3 && -- & -- & -- & -- & -- \\ Hindi (hi) & 2 & 26 & 64.6 & 64.6 & 64.8 & 65.4 & \textbf{66.2} && -- & -- & -- & \textbf{67.6} & 63.1\\ Hungarian (hu) & 13 & 13 & 75.6 & 75.6 & 75.3 & 75.7 & \textbf{77.9} && -- & -- & \textbf{77.9} & 72.0 & 77.3\\ Italian (it) & 478 & 410 & 91.9 & 91.7 & 93.4 & 93.5 & \textbf{93.7} && 86.8 & 86.5 & 83.5 & 76.9 & \textbf{92.1}\\ Norwegian (no) & 47 & 18 & 90.9 & 90.9 & 90.9 & 91.0 & \textbf{91.5} && -- & -- & 84.3 & 76.7 & \textbf{86.2}\\ Persian (fa) & 4 & 26 & 42.8 & 43.0 & 43.7 & 43.5 & \textbf{43.8} && -- & -- & -- & \textbf{59.6} & 43.6 \\ Polish (pl) & 6 & 132 & 84.7 & 84.6 & 84.2 & 84.8 & \textbf{86.0} && -- & -- & -- & 75.1 & \textbf{84.4}\\ Portuguese & 41 & 211 & 91.4 & 91.5 & 92.3 & \textbf{92.9} & 92.2 & &87.9 & 84.5 & 87.3 & 83.8 & \textbf{89.4} \\ Romanian (ro) & 7 & 4 & 83.9 & 83.9 & 84.8 & 85.3 & \textbf{86.3} && -- & -- & -- & -- & --\\ Spanish (es) & 234 & 324 & 90.4 & 88.6 & 91.0 & 91.5 & \textbf{92.0} & & 84.2 & 86.4 & 88.7 & 81.4 & \textbf{91.7}\\ Swedish (sv) & 89 & 67 & 88.9 & 88.9 & 89.6 & \textbf{89.9} & \textbf{89.9} & & 80.5 & \textbf{86.1} & 76.1 & 75.2 & 83.1\\ \midrule \textsc{Avg(21)} & & & 83.0 & 83.2 & 83.4 & 83.7 & \textbf{84.0} & \textsc{Avg(8: Das)} & 83.4 & 84.8 & 80.8 & 75.5 & \textbf{86.2} \\ & & & & & & & & \textsc{Avg(8: Li$\cap$Agic)} & -- & 84.9 & 80.8 & 75.2 & \textbf{87.2}\\ \midrule \textsc{Germanic (6)} & & & 88.2 &88.6 &88.6 &89.0 &\textbf{89.2} & \textsc{Germanic} (4: {\sc Das}) & 81.5 & \textbf{85.4} & -- & -- & 83.9\\ \textsc{Romance (5)} & & & 89.7 & 89.0 & 90.6 & 90.9 & \textbf{91.1} & \textsc{Romance} (3: {\sc Das}) & 86.3 & 85.8 & 86.5 & 80.7 & \textbf{91.1}\\ \textsc{Slavic (4)} & & & 86.2 & 86.3 & 86.2 & 86.7 & \textbf{86.9}\\ \textsc{Indo-Iranian (2)} & & & 53.7 &53.8 &54.3 &54.4 & \textbf{55.0} \\ \textsc{Uralic 
(2)} & & & 78.5 & 78.4 & 78.6 & 79.0 & \textbf{80.1}\\ \bottomrule \end{tabular} } \caption{Results on the development sets and comparison of our best model to prior work. \textsc{Lex}: Size (word types) of dictionaries (W: Wiktionary, U: UniMorph). TC$_W$: type-constraints using Wiktionary; $\vec{e}_W$ (embedded Wiktionary tags), \textsc{DsDs}: our model with $\vec{e}_{W \cup U}$. Results indicated by $^\dagger$ use W only. Best result in boldface; in case of equal means, the one with lower std is boldfaced. Averages over language families (with two or more languages in the sample, number of languages in parenthesis). } \label{tbl:results} \end{table*} Table~\ref{tbl:results} shows the tagging accuracy for individual languages, while the means over all languages are given in Figure~\ref{fig:projsents}. There are several take-aways. \paragraph{Data selection.} The first take-away is that coverage-based instance selection yields substantially better training data. Most prior work on annotation projection resorts to arbitrary selection; informed selection clearly helps in this noisy data setup, as shown in Figure~\ref{fig:projsents} (a). Training on 5k instances results in a sweet spot; more data (10k) starts to decrease performance, at a cost of runtime. Training on all WTC data (around 120k) is worse for most languages. From now on we consider the 5k model trained with Polyglot as our baseline (Table~\ref{tbl:results}, column ``5k''), obtaining a mean accuracy of 83.0 over 21 languages. \paragraph{Embeddings initialization.} Polyglot initialization offers a large boost; on average $+$3.8\% absolute improvement in accuracy for our 5k training scheme, as shown in Figure~\ref{fig:projsents} (b). The big gap in low-resource setups further shows their effectiveness, with up to 10\% absolute increase in accuracy when training on only 500 instances. 
\paragraph{Lexical information.} The main take-away is that lexical information helps neural tagging, and \textit{embedding} it proves the most helpful. Embedding Wiktionary tags reaches 83.7 accuracy on average, versus 83.4 for $n$-hot encoding, and 83.2 for type constraints. Only on 4 out of 21 languages are type constraints better; $n$-hot encoding is better on only one language (French). The best approach is to embed both Wiktionary and UniMorph, boosting performance further to 84.0, and resulting in our final model. It helps the most on morphologically rich languages such as the Uralic ones. On the test sets (Table~\ref{tbl:results}, right) {\sc DsDs} reaches 87.2 over 8 test languages intersecting~\newcite{li-et-al:2012} and~\newcite{agic2016multilingual}. It reaches 86.2 over the more commonly used 8 languages of~\citet{das-petrov:2011}, compared to their 83.4. This shows that our novel ``soft'' inclusion of noisy dictionaries is superior to a hard decoding restriction, and that including lexicons in neural taggers helps. We did not assume any gold data to further enrich the lexicons, nor fix possible tagset divergences. \section{Discussion} \paragraph{Analysis.} The inclusion of lexicons results in higher coverage and is part of the explanation for the improvement of {\sc DsDs}; see the correlation in Figure~\ref{fig:details} (a). What is more interesting is that our model benefits from the lexicon beyond its content: OOV accuracy for words \textit{not} present in the lexicon overall improves, besides the expected improvement on known OOV, see Figure~\ref{fig:details} (b).
\begin{table*} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{llll|rr|cccccr} \toprule & & & & \multicolumn{2}{c|}{\textsc{Lex} ($10^3$)} & \multicolumn{6}{c}{{\sc Test sets}}\\ \textsc{Language} & \textsc{Test} & \textsc{Proj} & \textsc{Emb} & W & U & TnT & 5k & TC$_W$ & $n$-hot$_W$& $\vec{e}_W$ & \textsc{DsDs} \\ \midrule Basque (eu) & UD & Bible & eu & 1 & -- & 57.5 & 61.8 & 61.8 & 61.4 & \textbf{62.7} &$\dagger$\textbf{62.7}\\ Basque (eu) & CoNLL & Bible & eu & 1 & -- & 57.0 & 60.3 & 60.3 & 60.3 & \textbf{61.3} & $\dagger$\textbf{61.3} \\ Estonian (et) & UD & WTC & et & -- & 10 & 79.5 &80.6 & -- & -- & -- & \textbf{81.5}\\ Serbian (sr) & UD & WTC (hr) & hr & (hr) 20 & -- & 84.0& 84.7 & \textbf{85.5} & 85.1 & 85.2 & $\dagger$85.2\\ Serbian (sr) & UD & Bible (sr) & hr & (hr) 20 & --& 77.1 & 78.9 & 79.4 & 80.5 & \textbf{80.7} & $\dagger$\textbf{80.7}\\ Tamil (ta) & UD & WTC & ta & -- & -- & 58.2 & \textbf{61.2} & -- & -- & -- & -- \\ \bottomrule \end{tabular} } \caption{Results for languages with missing data sources: WTC projections, Wiktionary (W), or UniMorph (U). Test sets ({\sc Test}), projection sources ({\sc Proj}), and embeddings languages ({\sc Emb}) are indicated. Comparison to TnT~\cite{Brants2000} trained on {\sc Proj}. Results indicated by $^\dagger$ use W only.} \label{tbl:closelowlang} \end{table*} \paragraph{More languages.} All data sources employed in our experiment are very high-coverage. However, for true low-resource languages, we cannot safely assume the availability of all disparate information sources. Table~\ref{tbl:closelowlang} presents results for four additional languages where some supervision sources are missing. We observe that adding lexicon information always helps, even in cases where only 1k entries are available, and embedding it is usually the most beneficial way. For closely-related languages such as Serbian and Croatian, using resources for one aids tagging the other, and modern resources are a better fit. 
For example, using the Croatian WTC projections to train a model for Serbian is preferable to in-language Serbian Bible data, where the OOV rate is much higher. \begin{figure}[t] \captionsetup{farskip=0pt} \centering \subfloat[coverage vs.\ accuracy] {\includegraphics[width=0.499\columnwidth,height=7.35em]{correlation.pdf}} \subfloat[OOV accuracy]{\includegraphics[width=0.499\columnwidth,height=7.35em]{barchart.pdf}} \caption{\label{fig:details} Analysis of {\sc DsDs} accuracy improvements over the baseline on all development languages with respect to a) token coverage by the lexicon, including Pearson's $\rho$; b) OOV accuracy for tokens in/not in the lexicon, with 95\% confidence intervals of the mean. Here, a token is covered if we can find it in at least one lexicon.} \end{figure} \paragraph{How much gold data?} We assume no access to \textit{any} gold annotated data. It is thus interesting to ask how much gold data would be needed to reach our performance. This is a tricky question, as training within the same corpus naturally favors same-corpus data. We test both in-corpus (UD) and out-of-corpus data (our test sets) and notice an important gap: while in-corpus only 50 sentences are sufficient, outside the corpus one would need over 200 sentences. This experiment was done for a subset of 18 languages with both in- and out-of-corpus test data. \paragraph{Further comparison.} In Table~\ref{tbl:results} we directly report the accuracies from the original contributions by {\sc Das, Li, Garrette}, and {\sc Agic} over the same test data. We additionally attempted to reach the scores of {\sc Li} by running their tagger over the Table~\ref{tbl:results} data setup. The results are depicted in Figure~\ref{fig:iters} as mean accuracies over EM iterations until convergence. We show: i) {\sc Li} peaks at 10 iterations for their test languages, and at 35 iterations for all the rest.
This is in slight contrast to the 50 iterations that~\citet{li-et-al:2012} recommend, although selecting 50 does not dramatically hurt the scores; ii) Our replication falls $\sim$5 points short of their 84.9 accuracy. There is a large 33-point accuracy gap between the scores of~\citet{li-et-al:2012}, where the dictionaries are large, and the other languages in Figure~\ref{fig:iters}, with smaller dictionaries. Compared to {\sc Das}, our tagger clearly benefits from pre-trained word embeddings, while theirs relies on label propagation through Europarl, a much cleaner corpus that lacks the coverage of the noisier WTC. The same applies to~\citet{tackstrom:ea:2013}, as they use 1-5M near-perfect parallel sentences. Even though we use much smaller and noisier data sources, {\sc DsDs} is almost on par: 86.2 vs. 87.3 for the 8 languages from~\citet{das-petrov:2011}, and we even outperform theirs on four languages: Czech, French, Italian, and Spanish. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{iters.pdf} \caption{\label{fig:iters} The performance of {\sc Li} with our dictionary data over EM iterations, separately for the languages from~\citet{li-et-al:2012} and all the remaining languages in Table~\ref{tbl:results}.} \end{figure} \section{Related Work} Most successful work on low-resource POS tagging is based on projection~\cite{yarowsky2001inducing}, tag dictionaries~\citep{li-et-al:2012}, annotation of seed training data~\cite{garrette-baldridge:2013:NAACL-HLT}, or, more recently, some combination of these, e.g., via multi-task learning~\cite{fang2016learning,kann:ea:2018}. Our paper contributes to this literature by leveraging a range of prior directions in a unified, neural test bed. Most prior work on neural sequence prediction follows the commonly perceived wisdom that hand-crafted features are unnecessary for deep learning methods. They rely on end-to-end training without resorting to additional linguistic resources.
Our study shows that this is not the case. Only a few prior studies investigate such sources, e.g., for MT~\cite{sennrich-haddow:2016:WMT,chen:ea:2017,Li:ea:2017,passban:ea:2018}; \newcite{sagot-martinezalonso:2017:IWPT} use lexicons for POS tagging, but only as $n$-hot features and without examining the cross-lingual aspect. \section{Conclusions} We show that our approach of distant supervision from disparate sources (\textsc{DsDs}) is simple yet surprisingly effective for low-resource POS tagging. Only 5k instances of projected data, paired with off-the-shelf embeddings and lexical information integrated into a neural tagger, are sufficient to reach a new state of the art, and both data selection and embeddings are essential components for boosting neural tagging performance.